Code Quality¶
Nori ships pre-configured with Ruff for linting and formatting. The configuration lives in pyproject.toml at your project root and is yours to customize. CI runs the same checks you run locally, so style issues and obvious bugs are caught at PR time, not at runtime.
Quality is not optional. Lint catches real bugs (unused imports, missing exception chains, type comparisons that look right but aren't), and format keeps the codebase consistent across contributors. Both run in milliseconds — there's no excuse to skip them.
What ships configured¶
A single pyproject.toml at the project root holds the lint and format setup:
```toml
[tool.ruff]
line-length = 120
target-version = "py310"

[tool.ruff.lint]
select = ["E", "W", "F", "I", "UP", "B", "S", "C90"]
ignore = ["E501"]

[tool.ruff.format]
quote-style = "single"
indent-style = "space"
```
| Rule group | What it catches |
|---|---|
| `E`, `W` | pycodestyle errors and warnings (indentation, whitespace, comparisons) |
| `F` | pyflakes — unused imports, undefined names, redefined-while-unused |
| `I` | isort — import ordering (auto-fixable) |
| `UP` | pyupgrade — modernize syntax to your declared `target-version` |
| `B` | flake8-bugbear — likely bugs (mutable defaults, misuse of `assert`, etc.) |
| `S` | flake8-bandit — security checks (hardcoded secrets, SQL injection, weak hashes, insecure subprocess, etc.) |
| `C90` | mccabe — caps cyclomatic complexity at 10; see the section below |
E501 (line-too-long) is delegated to the formatter. The format defaults are single quotes (matching the dominant convention in framework code) and 4-space indentation.
Running locally¶
After pip install -r requirements-dev.txt, ruff is on your venv's PATH.
Lint¶
```shell
ruff check .                # report all violations
ruff check . --fix          # apply auto-fixes (imports, unused, whitespace, modernizations)
ruff check . --statistics   # tally by rule code
ruff check . --select F841  # only one rule
```
Auto-fixes are conservative — only changes that ruff guarantees are semantics-preserving are applied. Anything ruff considers risky is shown but not modified, and you can opt in with --unsafe-fixes after reviewing.
Format¶
```shell
ruff format .          # rewrite files in place
ruff format --check .  # verify formatting; non-zero exit if any file would change
ruff format --diff .   # preview what would change
```
ruff format is intentionally narrow — it never moves code, only reformats whitespace, line breaks, quotes, and trailing commas. It's safe to run repeatedly.
CI gate¶
Every push and pull request to main runs the Lint workflow at .github/workflows/lint.yml:
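The workflow file itself is short; a minimal sketch of its two-step shape (step layout and action versions are assumptions, not the shipped file):

```yaml
# Sketch only; the real workflow lives at .github/workflows/lint.yml
name: Lint
on:
  push:
    branches: [main]
  pull_request:
    branches: [main]
jobs:
  lint:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
      - run: pip install -r requirements-dev.txt
      - run: ruff check .           # lint step: "fix the rule"
      - run: ruff format --check .  # format step: "run ruff format"
```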
Two separate steps so failures categorize cleanly: a lint failure says "fix the rule", a format failure says "run ruff format". The CI uses the same pyproject.toml you use locally, so passing locally implies passing in CI.
Per-file-ignores¶
Sometimes a rule has to break for architectural reasons — a bootstrap hook that legitimately needs imports out of order, a settings module that loads .env before reading vars, a test that patches sys.path before importing the module under test.
For these cases, document the exception once in pyproject.toml rather than scattering # noqa comments across many lines:
```toml
[tool.ruff.lint.per-file-ignores]
# Bootstrap hook MUST run before framework/third-party imports so observability
# SDKs (Sentry, OTel, Datadog) can patch instrumentable libraries at load time.
"rootsystem/application/asgi.py" = ["E402"]
# warnings.filterwarnings must precede framework imports — suppresses the
# Tortoise "Module 'X' has no models" RuntimeWarning fired during registration.
"rootsystem/application/core/__init__.py" = ["E402"]
# load_dotenv() must run before any module that reads env vars at import time.
"rootsystem/application/settings.py" = ["E402"]
# Test setup commonly needs sys.path edits, env var injection, or importlib
# patches before importing the module under test.
"tests/**/*.py" = ["E402"]
```
The comment above each entry is non-negotiable. A per-file-ignore without a justification is a bug being hidden, not a deliberate exception. If a future contributor can't tell why the rule is silenced, the rule should not be silenced.
For a one-off line that genuinely needs to break a rule (rare), use # noqa: <code> with an inline comment:
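A hypothetical illustration (S324 is ruff's insecure-hash check from the `S` group enabled above; the function is invented for the example):

```python
import hashlib

# Hypothetical case: S324 flags MD5 as a weak hash. Here the digest is a
# non-cryptographic cache key, never a credential, so the rule is silenced
# precisely (one code, one line) with the reason stated next to it.
def cache_key(payload: bytes) -> str:
    # MD5 is acceptable here: collision resistance is irrelevant for cache keys.
    return hashlib.md5(payload).hexdigest()  # noqa: S324
```

A bare `# noqa` (no code) silences every rule on the line and is exactly the kind of hidden bug the per-file-ignore comment rule exists to prevent.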
Customizing for your project¶
The pyproject.toml is yours after install — framework:update does not replace it. Tighten or relax the configuration as your project matures.
Detecting drift against the latest release¶
Because pyproject.toml is user-owned, projects can fall behind on framework-side tooling improvements (new ruff rules, new mypy strict modules, bumped coverage thresholds). The framework provides a read-only command that shows what changed upstream without modifying anything locally.
The output is a categorized diff (added upstream / changed / local-only) with full paths like tool.coverage.report.fail_under. Read-only — you decide what to port. See the CLI reference for the full command shape.
Adding stricter rules¶
```toml
[tool.ruff.lint]
select = [
    "E", "W", "F", "I", "UP", "B",
    "S",   # flake8-bandit — security checks
    "SIM", # flake8-simplify — code simplification suggestions
    "RET", # flake8-return — return-statement consistency
    "TCH", # flake8-type-checking — move type-only imports to TYPE_CHECKING
]
```
Run ruff check . --statistics after enabling new rule groups to see the impact, then either fix them or selectively ignore the ones that don't fit your project.
Relaxing for tests¶
Test code often violates rules that make sense in production code (long functions, magic numbers, fixture imports). Add a per-file-ignore:
```toml
[tool.ruff.lint.per-file-ignores]
"tests/**/*.py" = ["E402", "S101"]  # imports after sys.path setup; asserts are fine in tests
```
Custom line length¶
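Override `line-length` in `[tool.ruff]`; the formatter wraps to it, and because `E501` is delegated to the formatter, no lint rule needs touching. A sketch for a 100-character limit (the value itself is an example):

```toml
[tool.ruff]
line-length = 100
```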
Adopting in existing Nori projects¶
framework:update does not retrofit pyproject.toml or requirements-dev.txt — both are user-owned files that the framework never replaces after the first install. To adopt ruff in a project created on Nori ≤ 1.10.6:
1. Add `ruff>=0.6` to your `requirements-dev.txt`.
2. Copy `pyproject.toml` from the framework repo to your project root, or write your own.
3. Run `.venv/bin/pip install -r requirements-dev.txt`.
4. Run `ruff check . --fix` to apply auto-fixes — review the diff, commit if you're happy.
5. Run `ruff format .` to apply formatting in a separate commit. Add the commit's hash to a `.git-blame-ignore-revs` file at the repo root so `git blame` skips it; activate locally with `git config blame.ignoreRevsFile .git-blame-ignore-revs`.
6. Optionally, copy `.github/workflows/lint.yml` to gate future PRs.
Pre-commit hooks¶
Nori ships with .pre-commit-config.yaml so ruff runs on every git commit. CI catches violations after the push, but pre-commit catches them before — saving the round trip.
Activate once per clone¶
```shell
.venv/bin/pip install -r requirements-dev.txt  # installs pre-commit
.venv/bin/pre-commit install                   # writes .git/hooks/pre-commit
```
After that, every git commit runs:
- `ruff check --fix` — lint and auto-fix
- `ruff format` — format
If either modifies files, the commit is aborted so you can review the changes and re-stage. The first run downloads ruff into pre-commit's isolated environment; subsequent runs are fast (~100ms).
Run against all files manually¶
Useful before opening a PR or after pulling someone else's changes: `.venv/bin/pre-commit run --all-files` runs both hooks against the whole tree, not just staged files.
Bump the ruff version¶
`.venv/bin/pre-commit autoupdate` updates `rev:` in `.pre-commit-config.yaml` to the latest stable tag of astral-sh/ruff-pre-commit. Commit the resulting diff so other contributors pick up the same version.
Skipping hooks¶
git commit --no-verify bypasses the hooks. Avoid it. If a hook is firing on something you believe is wrong, fix the rule (per-file-ignore or pyproject.toml change) rather than the symptom.
Test coverage¶
Nori ships pre-configured with pytest-cov so every test run measures how much of rootsystem/application was exercised. The Tests workflow in CI reports coverage on every push and fails the build if the project drops below the configured threshold.
Running locally¶
```shell
pytest tests/ --cov                    # coverage report at the end of the run
pytest tests/ --cov --cov-report=html  # HTML report at htmlcov/index.html
pytest tests/ --cov --cov-report=xml   # XML for external dashboards
```
Coverage configuration lives in pyproject.toml under [tool.coverage]. Branch coverage is enabled — both lines and conditional branches must be covered.
Threshold¶
fail_under = 82 is the floor. Drops below 82% fail CI. The framework's current baseline is ~86%, leaving a ~4-point buffer for routine churn — tight enough that a meaningful regression flips the gate, loose enough that an unrelated PR adding a few lines without immediate tests doesn't break the build. The floor was raised from 75 to 82 in v1.14.2 after a focused coverage push on core/cli.py and the Redis backends; raise it again whenever the project sustains a higher number for a few releases.
What is excluded¶
- `migrations/` — engine-specific SQL generated by aerich, not framework logic
- `seeders/example_seeder.py` and `commands/_example.py` — templates meant to be edited by users
- Lines marked `# pragma: no cover` (use sparingly, only where coverage truly cannot reach)
- `if TYPE_CHECKING:` blocks (imports for static type checkers, not runtime)
Adding it to your project¶
Existing Nori projects can opt in with the same setup:
1. Add `pytest-cov>=5.0` to `requirements-dev.txt`.
2. Copy the `[tool.coverage.run]` and `[tool.coverage.report]` sections from the framework's `pyproject.toml` into your own.
3. Run `pytest --cov` locally and tune the `omit` list and `fail_under` threshold for your code.
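A sketch of what those two sections can look like, assembled from the threshold and exclusions described above (exact glob patterns are assumptions, not the shipped file):

```toml
[tool.coverage.run]
branch = true                      # lines AND conditional branches must be hit
source = ["rootsystem/application"]
omit = [
    "*/migrations/*",              # aerich-generated SQL, not framework logic
    "*/seeders/example_seeder.py", # user-editable template
    "*/commands/_example.py",      # user-editable template
]

[tool.coverage.report]
fail_under = 82
exclude_lines = [
    "pragma: no cover",
    "if TYPE_CHECKING:",
]
```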
Type checking¶
Nori ships pre-configured with mypy for static type analysis. Configuration lives in pyproject.toml under [tool.mypy]. The framework codebase passes mypy with zero errors; CI enforces this on every push.
Type checking is gradual, not strict. The aim is to catch real bugs (Optional dereference, wrong return shapes, mismatched call signatures) without forcing every annotation to be exhaustive. Strictness can be tightened per-module as the project matures.
What ships configured¶
```toml
[tool.mypy]
python_version = "3.10"
files = ["rootsystem/application"]
exclude = [
    "rootsystem/application/migrations",
    "rootsystem/.framework_backups",
]
ignore_missing_imports = true
show_error_codes = true
warn_unused_ignores = true
pretty = true
```
| Option | Why |
|---|---|
| `python_version = "3.10"` | matches the framework's lower bound (raised in v1.11.0 when Python 3.9 reached EOL) |
| `ignore_missing_imports = true` | most third-party libs (Tortoise, Starlette, Jinja2) ship without complete stubs — treat them as `Any` rather than failing the run |
| `show_error_codes = true` | every error is reported with its code (e.g. `[arg-type]`), so it can be silenced precisely with `# type: ignore[code]` |
| `warn_unused_ignores = true` | flags `# type: ignore` comments that no longer apply — keeps the baseline honest as upstream stubs improve |
| `pretty = true` | nicer multi-line output for readability |
Running locally¶
```shell
mypy                   # type-check rootsystem/application
mypy path/to/file.py   # check a specific file
mypy --show-traceback  # debug mypy-internal errors
```
Mypy is on your venv's PATH after pip install -r requirements-dev.txt.
CI gate¶
The Typecheck workflow at .github/workflows/typecheck.yml runs on every push and PR to main. It installs requirements-dev.txt and runs mypy — failing the build on any new error.
Silencing errors with justification¶
When a type error reflects a stub limitation (not a real bug), silence it with an inline comment that explains why:
```python
# Tortoise's QuerySet stubs don't preserve subclass identity through .filter();
# qs.__class__ is rebound at runtime so the cast is safe.
return SoftDeleteQuerySet(self._model).filter(deleted_at__isnull=True)  # type: ignore[return-value]

# Tortoise attaches Model._meta dynamically at class creation; not in stubs.
for field in self._meta.fields_map:  # type: ignore[attr-defined]
    ...
```
The comment is non-negotiable. With warn_unused_ignores = true, mypy will flag any silencer that is no longer needed — so dead # type: ignore won't accumulate.
Adopting in existing Nori projects¶
framework:update does not retrofit pyproject.toml — it's a user-owned file. To adopt mypy in a project created on Nori ≤ 1.10.8:
1. Add `mypy>=1.10` to your `requirements-dev.txt`.
2. Copy the `[tool.mypy]` section from the framework's `pyproject.toml` to yours.
3. Run `.venv/bin/pip install -r requirements-dev.txt`.
4. Run `mypy` — read the report, fix or silence each error.
5. Optionally, copy `.github/workflows/typecheck.yml` to gate future PRs.
Per-module strict mode¶
The framework ships gradual mode globally and applies --strict-equivalent flags per-module to a small set of high-stakes surfaces — modules where a type bug has security or correctness consequences. The current strict list lives in pyproject.toml:
```toml
[[tool.mypy.overrides]]
module = [
    "core.auth.security",    # PBKDF2 hashing, token generation
    "core.auth.login_guard", # rate-limited login + lockout
    "core.http.validation",  # the input gate
]
disallow_untyped_defs = true
disallow_incomplete_defs = true
check_untyped_defs = true
no_implicit_optional = true
warn_return_any = true
```
Why per-module instead of global strict? Most of core/ is correctly typed in gradual mode and converting wholesale would generate noise without finding bugs. Strict mode is reserved for modules where an unannotated function is a real risk — auth (anything that touches credentials, sessions, or tokens) and validation (the trust boundary between user input and the rest of the framework).
Adding a new strict module is one entry in the override list:
1. Add the dotted module path to the `module = [...]` array.
2. Run `mypy` and triage the new errors — annotate signatures, return types, optional dereferences.
3. Land the change in the same release (don't ship a half-strict module).
Going fully strict, if your project warrants it:
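mypy's global `strict = true` switch enables the full strict flag set everywhere at once; a minimal sketch:

```toml
[tool.mypy]
strict = true
```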
Loosen per-module with [[tool.mypy.overrides]] blocks where strict mode doesn't fit (e.g. tests, migration scripts, modules wrapping un-typed third-party libs).
Cyclomatic complexity¶
Ruff's C90 (mccabe) rule caps function complexity at 10 — the standard threshold across the Python ecosystem. A function whose branching exceeds 10 typically benefits from being split.
```toml
[tool.ruff.lint]
select = ["E", "W", "F", "I", "UP", "B", "S", "C90"]

[tool.ruff.lint.mccabe]
max-complexity = 10
```
Per-file-ignores in pyproject.toml document the legitimate exceptions (CLI dispatchers, DI decorator factories, validation rule dispatchers — places where flattening would fragment a coherent unit). New code must respect the default; raising the threshold is not a substitute for refactoring.
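One common refactoring pattern when a function trips the threshold is replacing a long if/elif chain with a dispatch table: each added branch raises cyclomatic complexity by one, while a table lookup keeps it constant. A hypothetical sketch (the event kinds and handlers are invented for illustration):

```python
from typing import Callable

def _created(payload: dict) -> str:
    return f"created {payload['id']}"

def _deleted(payload: dict) -> str:
    return f"deleted {payload['id']}"

# One entry per event kind; adding a kind no longer adds a branch.
HANDLERS: dict[str, Callable[[dict], str]] = {
    "created": _created,
    "deleted": _deleted,
}

def handle_event(kind: str, payload: dict) -> str:
    # Complexity stays at 2 regardless of how many kinds the table holds.
    try:
        handler = HANDLERS[kind]
    except KeyError:
        raise ValueError(f"unknown event kind: {kind}") from None
    return handler(payload)
```

The same shape works for CLI dispatchers and validation rule dispatchers, the very cases the per-file-ignores above exempt when flattening would fragment a coherent unit.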
Dependency vulnerability scanning¶
The Audit workflow at .github/workflows/audit.yml runs pip-audit against both requirements.nori.txt and requirements-dev.txt on every push and PR to main. New CVEs in any direct or transitive dependency fail the build immediately.
Each --ignore-vuln flag in the workflow has a documented justification — usually one of:
- No upstream fix yet. Document the actual risk vector; revisit each release.
- Vulnerable function not used by Nori or its callers. Document which function and why we don't reach it.
A bare ignore without justification is a bug — pip-audit is only useful as a gate when the ignore list is honest.
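The shape of such an ignore in the workflow, with a hypothetical advisory ID (the real flags and their reasons live in `audit.yml`):

```yaml
# Hypothetical entry; GHSA ID and justification are illustrative only
- run: |
    pip-audit -r requirements.nori.txt \
      --ignore-vuln GHSA-xxxx-xxxx-xxxx
  # GHSA-xxxx-xxxx-xxxx: no upstream fix yet; the vulnerable code path is
  # never reached by Nori or its callers. Revisit each release.
```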
Automated dependency updates¶
pip-audit is the passive half of dependency hygiene — it fails the build when a CVE lands on a pinned version. The active half is Dependabot, which opens the PR that fixes it. Together they close the loop.
The framework ships a .github/dependabot.yml configured for two ecosystems:
```yaml
version: 2
updates:
  - package-ecosystem: pip
    directory: /
    schedule:
      interval: weekly
      day: monday
      time: "06:00"
      timezone: America/Argentina/Buenos_Aires
    groups:
      dev-tooling:
        patterns: ["pytest*", "ruff", "mypy", "pre-commit", "pip-audit", "interrogate", "filelock", "hypothesis"]
    open-pull-requests-limit: 5
  - package-ecosystem: github-actions
    directory: /
    schedule:
      interval: weekly
    open-pull-requests-limit: 3
```
The pip ecosystem follows the -r chain transitively, so requirements.txt → requirements.nori.txt → requirements-dev.txt are all watched from a single entry. Dev tooling is grouped into one PR per week to avoid noise; framework runtime deps land as individual PRs so each can be reviewed on its own merits.
To adopt in an existing Nori project: copy .github/dependabot.yml from the framework repo. No further setup is needed — GitHub picks it up automatically once the file is on the default branch.
Secrets scanning¶
The Secrets workflow at .github/workflows/secrets.yml runs gitleaks on every push and PR to main, scanning the full git history (not just the diff). Any high-confidence detection — AWS access keys, Stripe keys, JWT bearer tokens, PEM-encoded private keys, etc. — fails the build.
Why scan history instead of just the diff? A secret committed once and "fixed" by a follow-up commit is still in git log. Anyone who clones the repo has it. The scan ensures nothing slipped in before the gate was added, and that nothing slips in after.
Findings come with file path, commit SHA, and a redacted preview of the match. If a flagged value is a deliberate test fixture (e.g. a fake key in a test asset), document it via .gitleaks.toml's allowlist section with a comment explaining why — the same justification rule as pip-audit ignores.
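A sketch of such an allowlist entry (the path and description are hypothetical):

```toml
# .gitleaks.toml
[allowlist]
# Fixture key generated for the test suite; it has never protected anything.
description = "Deliberate fake PEM fixture for parser tests"
paths = ['''tests/assets/fake_private_key\.pem''']
```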
If a real secret is found in history, rotate it first, then scrub the commit (git filter-repo or BFG). The scan will keep failing until the bad commit is rewritten and force-pushed — that is the point. Nori does not ship a "skip this finding" backdoor.
Software Bill of Materials (SBOM)¶
The SBOM workflow at .github/workflows/sbom.yml generates a CycloneDX 1.6 JSON document on every push to main listing every direct and transitive Python dependency with its resolved version, license, and PURL identifier. The artifact is uploaded to the workflow run (90-day retention) and automatically attached to GitHub releases on publish — required for SOC 2 and supply-chain audit work.
```yaml
# Excerpted from sbom.yml
- run: python -m venv sbom-env
- run: sbom-env/bin/pip install -r requirements.nori.txt
- run: sbom-env/bin/pip install cyclonedx-bom
- run: sbom-env/bin/cyclonedx-py environment sbom-env --output-format json --output-reproducible -o sbom.json
```
Two design choices worth noting:
- Clean virtualenv. The SBOM is built from `requirements.nori.txt` only — no dev tooling, no `cyclonedx-bom` itself. The output reflects what a user installs in production, not what a contributor has on their dev box.
- `--output-reproducible`. Re-running the workflow against the same pinned requirements produces a byte-identical JSON file (modulo the per-build `serialNumber` UUID). Diff-based change detection in supply-chain tooling becomes trivial: any non-UUID byte change means a dependency moved.
For local generation:
```shell
python -m venv .sbom-env
.sbom-env/bin/pip install -r requirements.nori.txt cyclonedx-bom
.sbom-env/bin/cyclonedx-py environment .sbom-env --output-format json -o sbom.json
```
Docstring coverage¶
The Docstrings workflow at .github/workflows/docstrings.yml enforces a minimum docstring coverage via interrogate. The v1.10.7 incident — 17 module docstrings silently lost when from __future__ import annotations was placed before the docstring — is exactly the kind of regression this gate prevents.
```toml
[tool.interrogate]
fail-under = 70
ignore-init-module = true
ignore-init-method = true
ignore-magic = true
ignore-property-decorators = true
ignore-nested-functions = true
ignore-regex = ["^Meta$"]  # Tortoise convention; configuration sentinel
```
Module docstrings are NOT exempt — they are precisely what the v1.10.7 regression broke. Run locally with `interrogate -vv` for a per-file breakdown; it reads `[tool.interrogate]` from `pyproject.toml` automatically.
The floor is intentionally a few points below the current baseline to absorb churn; raise it as the codebase sustains a higher number.
See also¶
- Ruff documentation
- Coverage.py documentation
- mypy documentation
- pip-audit documentation
- interrogate documentation
- gitleaks documentation
- Dependabot documentation
- CycloneDX SBOM specification
- Dependencies — how `requirements-dev.txt` works alongside framework deps
- Testing — pytest setup for Nori projects, including property-based tests