# Configuration
Noxaudit is configured through a noxaudit.yml file in your project root.
## Getting Started
Copy the example config:
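The repository ships a noxaudit.yml.example (see Full Example below), so the quickest start is copying it into place:

```shell
cp noxaudit.yml.example noxaudit.yml
```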
Or create a minimal one:
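A minimal config only needs a repos entry, using the same shape documented in the Repos section below:

```yaml
repos:
  - name: my-app
    path: .
    provider_rotation: [anthropic]
```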
## Repos
Define the repositories noxaudit should audit:
```yaml
repos:
  - name: my-app
    path: .
    provider_rotation: [anthropic]
    exclude:
      - vendor
      - generated
      - node_modules
```
| Field | Description |
|---|---|
| `name` | Display name for the repo |
| `path` | Path to the repository root (`.` for current directory) |
| `provider_rotation` | AI providers to cycle through on each run |
| `exclude` | Additional directory names to skip during file gathering (on top of the default excludes) |
You can audit multiple repos in a single config:
```yaml
repos:
  - name: backend
    path: ../backend
    provider_rotation: [anthropic]
  - name: frontend
    path: ../frontend
    provider_rotation: [gemini]
```
## Model
Set the AI model to use:
The model must match the provider. See Providers for all available models.
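The exact layout isn't shown on this page; based on the per-feature `model` keys elsewhere in this file, a top-level setting presumably looks like this (the model id below is illustrative, not a recommendation):

```yaml
# model id is an example; use one listed on the Providers page
model: claude-sonnet-4-5
```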
## Budget
Set cost limits per run:
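As a sketch only, a budget block might look like the following; the field name here is an assumption, not the documented schema:

```yaml
# hypothetical field name, for illustration only
budget:
  max_cost_per_run_usd: 2.00
```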
See Cost Management for details.
## Decisions
Configure decision memory:
Decisions expire after expiry_days so that previously dismissed findings get re-evaluated periodically. See Decision Memory for the full guide.
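For illustration, a decisions block using the expiry_days field mentioned above might look like this (the value is an arbitrary example):

```yaml
decisions:
  expiry_days: 30  # re-evaluate dismissed findings after 30 days
```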
## Reports
Set the directory for saved reports:
Reports are saved as markdown files at {reports_dir}/{repo}/{date}-{focus}.md.
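Given the path template above, the setting is presumably a single reports_dir key; the value below is an example:

```yaml
reports_dir: reports  # files land at reports/{repo}/{date}-{focus}.md
```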
## Notifications
Send summaries via Telegram:
Requires TELEGRAM_BOT_TOKEN and TELEGRAM_CHAT_ID environment variables. See Notifications.
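A sketch, assuming a notifications block with a telegram toggle (the key names are guesses; credentials stay in the environment variables listed above, not in the file):

```yaml
notifications:
  telegram:
    enabled: true
```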
## GitHub Issues
Auto-create GitHub issues for findings:
See GitHub Issues.
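The option shape isn't shown on this page; a plausible sketch, with all key names assumed:

```yaml
github:
  issues:
    enabled: true
```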
## Dedup
Post-audit deduplication normalizes finding titles to canonical forms, improving consistency across runs:
```yaml
dedup:
  enabled: true     # on by default
  provider: gemini  # gemini, openai, or anthropic
  model: ""         # empty = provider default
```
## Validate
Post-audit validation sends each finding plus the actual source code to an LLM to check if the finding is real:
```yaml
validate:
  enabled: true
  provider: gemini
  model: ""           # empty = provider default
  drop_false_positives: true
  min_confidence: ""  # "", "low", "medium", or "high"
```
When enabled, findings classified as `false_positive` are dropped. Set `min_confidence` to filter further: for example, `min_confidence: medium` drops low-confidence findings too.
## Confidence Scoring
Confidence scoring runs automatically on every audit. It reads .noxaudit/findings-history.jsonl and scores each finding by how often it appears across recent runs:
- high — appeared in 60%+ of recent runs
- medium — appeared in 30-60% of recent runs
- low — appeared in fewer than 30% of recent runs
History-based confidence upgrades but never downgrades validation confidence. For example, if validation says "medium" but the finding appears in 80% of runs, it becomes "high". But if validation says "high" and history says "low", it stays "high".
No configuration needed — this runs automatically when history is available.
## Chunking
For repos with many files, chunking splits the audit into smaller batches so each gets thorough coverage:
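A sketch of what a chunking block might look like; the chunk-size key name is an assumption:

```yaml
chunking:
  enabled: true
  max_files_per_chunk: 50  # hypothetical knob
```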
Each chunk runs as a separate batch API request. Findings are merged and deduplicated automatically.
## Pre-pass
For large codebases, pre-pass uses a cheap model to classify files before the main audit:
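For illustration, assuming a prepass block that names the cheap classifier model (both the key names and the model id are guesses):

```yaml
prepass:
  enabled: true
  model: gemini-2.0-flash  # cheap model used only to classify files
```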
See Cost Management for details.
## Full Example
See noxaudit.yml.example or the Configuration Reference for every available option.