
Reading scan results

A scan result contains a findings list, a summary, and (if a contract is present) a Gatekeep verdict. This guide explains how to interpret each part and what to do with the information.

The summary section

The summary gives a quick count of findings by severity. Start here to understand the overall picture before looking at individual findings.

{
  "summary": {
    "critical": 1,
    "high": 3,
    "medium": 7,
    "low": 4,
    "info": 2,
    "total": 17
  }
}

A high critical count demands immediate attention. A high medium count with zero criticals might be acceptable for a service in active development. Context matters.
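As a sketch of that reasoning, the summary object can be turned into a coarse triage label. The field names match the example above; the triage rule itself is an illustrative policy, not part of the scanner.

```python
import json

def triage(summary: dict) -> str:
    """Return a coarse triage label for a scan summary (illustrative policy)."""
    if summary.get("critical", 0) > 0:
        return "act-now"
    if summary.get("high", 0) > 0:
        return "this-sprint"
    if summary.get("medium", 0) > 0:
        return "next-maintenance-window"
    return "track"

result = json.loads(
    '{"summary": {"critical": 1, "high": 3, "medium": 7,'
    ' "low": 4, "info": 2, "total": 17}}'
)
print(triage(result["summary"]))  # critical > 0, so "act-now"
```

Adjust the rule to your own context; a service in active development might tolerate a higher medium count than one in maintenance mode.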

Reading individual findings

Each finding in the findings array has the same structure. The key fields to check:

severity

The risk level assigned by the scan layer. Prioritise critical and high, and do not ignore medium findings in security categories. See the severity levels reference.

category

Which aspect of the codebase the finding relates to. A security finding from a shallow scan indicates a potential issue detected without deep analysis. The same issue re-detected by a deep sast or secret layer will have the full context (file path, line number).

title and description

The title is a short identifier. The description explains the issue and its risk. Read the description before deciding how to act.

remediation

The recommended fix. This is a starting point, not an exhaustive guide. For dependency CVEs, the remediation will specify a minimum safe version. For secrets, it will instruct rotation.

file_path and line_number

Present for deep scan findings where the layer can pinpoint the issue. Use these to navigate directly to the problem in your editor. file_path is the repository-relative path.
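For example, findings that carry both fields can be printed in the conventional path:line form that most editors and terminals recognise as a jump target. The field names come from the finding structure described above; the sample findings are hypothetical.

```python
def locations(findings: list[dict]) -> list[str]:
    """Collect "path:line" strings for findings that have a location."""
    out = []
    for f in findings:
        if "file_path" in f and "line_number" in f:
            out.append(f"{f['file_path']}:{f['line_number']}")
    return out

findings = [
    {"title": "Hardcoded secret", "file_path": "src/config.py", "line_number": 12},
    {"title": "Shallow finding"},  # no location: detected without deep analysis
]
print("\n".join(locations(findings)))  # src/config.py:12
```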

analysis_layer

Which deep scan layer produced the finding. Useful for understanding the analysis method. For example, a finding from SecretDetector used regex or entropy analysis; a finding from SASTEngine used AST analysis.

confidence

A score from 0.0 to 1.0 reflecting the scanner's confidence that the finding is a genuine issue. Scores are adjusted by the learning loop based on accumulated human feedback. A score of 1.0 means no feedback has been received (full confidence by default). A score below 0.5 indicates the pattern has received negative feedback repeatedly.

Low confidence does not mean a finding is wrong. It means the pattern has been marked as incorrect by users in similar contexts. Review low-confidence findings carefully before dismissing them.
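One way to apply this: order findings for human review by confidence, highest first, rather than filtering low-confidence findings out. The field name and the 1.0 default match the description above; the sample data is hypothetical.

```python
def review_order(findings: list[dict]) -> list[dict]:
    """Sort findings for review: highest confidence first, never dropped."""
    # 1.0 is the default when no feedback has adjusted the score.
    return sorted(findings, key=lambda f: f.get("confidence", 1.0), reverse=True)

findings = [
    {"title": "A", "confidence": 0.4},   # repeatedly marked incorrect
    {"title": "B"},                      # no score: defaults to 1.0
    {"title": "C", "confidence": 0.9},
]
print([f["title"] for f in review_order(findings)])  # ['B', 'C', 'A']
```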

Prioritising what to fix

A simple priority order:

  1. Critical severity in security, secret, or sast categories — act immediately. These represent direct exploitation risk.
  2. High severity in dependency or iac categories — plan remediation in the current sprint. Known CVEs often have published exploits.
  3. High severity in license — consult your legal team before distributing code that uses the dependency.
  4. Medium severity across any category — address in the next maintenance window or tech debt sprint.
  5. Low severity and info — track and address incrementally. Do not ignore permanently.

The Gatekeep verdict

If a devcontract.json is present, the scan result includes a gatekeep object:

| Verdict | Meaning | Action required |
| --- | --- | --- |
| passed | No gates fired | None. The service meets its contract. |
| passed_with_warnings | Non-blocking gates fired | Review the warning gates. Plan to address the findings. |
| failed | One or more blocking gates fired | Identify which gates fired from the gates array. Fix the findings or update the contract threshold. |
| not_evaluated | No contract found | Add a devcontract.json to the repository root. |
| error | Contract file invalid | Validate the contract JSON. Check for unrecognised category values. |
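In CI, these verdict values typically map onto exit codes. The mapping below is a sketch of one reasonable policy; in particular, treating not_evaluated as a pass is an assumption, not something the scanner mandates.

```python
def ci_exit_code(verdict: str) -> int:
    """Map a Gatekeep verdict string to a CI exit code (illustrative policy)."""
    if verdict in ("passed", "passed_with_warnings", "not_evaluated"):
        return 0  # pipeline continues; warnings still appear in the log
    return 1      # "failed" or "error" blocks the pipeline

print(ci_exit_code("passed_with_warnings"))  # 0
```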

Acting on a failed verdict

When the verdict is failed, look at the gates array in the full report to identify which gates fired and how many findings triggered each one. There are two paths to resolution:

  1. Fix the findings: remediate the issues identified by the scan. On the next scan, the gate count will drop. This is the preferred path for security findings.
  2. Update the contract: if the findings represent pre-existing acceptable debt, increase the threshold to reflect the current baseline. Document why in the gate's description. This is appropriate for quality findings during a tech debt phase.

Do not increase thresholds to silence critical or high security findings unless you have a documented exception process and a remediation timeline. The contract's purpose is to record your standards; weakening it without rationale defeats the benefit.
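When you do raise a threshold, record the rationale in the gate itself. The exact devcontract.json schema is not shown in this guide, so the field names below (gates, category, severity, threshold, blocking, description) are assumptions for illustration only:

```json
{
  "gates": [
    {
      "category": "quality",
      "severity": "medium",
      "threshold": 12,
      "blocking": false,
      "description": "Baseline raised to accept pre-existing debt during the tech debt phase; remediation tracked in the backlog."
    }
  ]
}
```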

Submitting feedback

If a finding is a false positive, submit an incorrect verdict via POST /api/scan/{id}/feedback. Over time, repeated negative feedback on a pattern lowers its confidence score in future scans. This is how the learning loop reduces noise specific to your codebase.
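A minimal sketch of that call using only the standard library. The endpoint path comes from the text above; the request body shape ({"finding_id": ..., "verdict": "incorrect"}), the base URL, and the identifiers are assumptions — check your API reference for the exact schema.

```python
import json
import urllib.request

def feedback_request(base_url: str, scan_id: str, finding_id: str):
    """Build the URL and JSON body for an 'incorrect' feedback call."""
    url = f"{base_url}/api/scan/{scan_id}/feedback"
    body = json.dumps({"finding_id": finding_id, "verdict": "incorrect"})
    return url, body

def submit(base_url: str, scan_id: str, finding_id: str) -> int:
    """POST the feedback and return the HTTP status (performs a network call)."""
    url, body = feedback_request(base_url, scan_id, finding_id)
    req = urllib.request.Request(
        url,
        data=body.encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status

# Hypothetical base URL and IDs, for illustration only:
url, body = feedback_request("https://scanner.example", "abc123", "f-42")
print(url)  # https://scanner.example/api/scan/abc123/feedback
```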

Next steps