Vulnerability Detection and Response: What VDR Requires and How to Automate It

April 23, 2026

What it is

Vulnerability Detection and Response (VDR) replaces CVSS-only severity with a contextual evaluation based on exploitability, internet reachability, and environmental impact. Each finding is evaluated on three dimensions, remediation urgency scales with the risk, and the 192-day accepted-vulnerability rule forces transparency on long-standing findings.

FedRAMP restructured vulnerability management under Rev5 Balance, and VDR is the result.

Under the traditional model, vulnerability management was CVSS-driven. Scan, get a score, remediate by severity tier. A Critical vulnerability on an air-gapped internal tool and a Critical vulnerability on an internet-facing authentication service got the same treatment. The scan frequency, remediation SLAs, and reporting cadence were prescribed. 30 days for Critical and High, 90 for Moderate, 180 for Low. Simple to audit. Poor at prioritizing actual risk.

VDR changes this by forcing providers to evaluate each finding in the context of their environment, not just its CVSS score. The evaluation produces a three-dimensional risk picture: is the vulnerability likely exploitable? Is the affected resource internet-reachable? What is the potential adverse impact in this specific environment? The answers drive remediation urgency.

Status: optional open beta since February 2, 2026 (beta ends May 22, 2026).

The relevant KSI: KSI-AFR-VDR.

VDR supplements POA&Ms. It does not replace them. Traditional POA&M reporting continues for all frameworks. VDR adds a layer of contextual evaluation, risk-based prioritization, and transparency requirements on top. In Stratus GRC-ITSM, Issue Tickets become POA&Ms by setting “IS POA&M” to Yes. There is no separate POA&M object. The VDR evaluation data lives on the same ticket as the POA&M data.

What it requires

VDR introduces three evaluation dimensions, mandatory reporting, and the 192-day accepted-vulnerability threshold. The MUST vs SHOULD distinction is critical when you design the workflow.

Three evaluation dimensions

Every finding is evaluated against three questions.

LEV/NLEV (Likely Exploitable / Not Likely Exploitable). Is this vulnerability likely to be exploited in the wild? This is informed by EPSS (Exploit Prediction Scoring System) scores, CISA KEV (Known Exploited Vulnerabilities) catalog status, and threat intelligence. A vulnerability with active exploitation or a high EPSS score is LEV. One with a theoretical proof-of-concept, low EPSS score, and no observed exploitation is NLEV. The distinction drives remediation urgency. LEV findings get faster timelines.

IRV/Non-IRV (Internet-Reachable Vulnerability). Is the affected resource internet-reachable? An internet-facing web server and an internal database behind three network layers have different exposure profiles. Network topology, security groups, and routing determine reachability. IRV findings are exposed to a larger threat surface and get faster remediation timelines.

N1-N5 (Potential Adverse Impact). What is the potential impact to the environment? This is provider-evaluated based on the resource’s role, the data it handles, and the blast radius.

  • N1: Negligible impact
  • N2: Minor impact
  • N3: Moderate impact
  • N4: Significant impact
  • N5: Critical impact

The provider evaluates this based on their environment, not a universal scoring table. An RDS instance that stores federal CUI scores higher than an RDS instance that stores application logs. The evaluation is contextual, which is the entire point of VDR.

The combination of these three dimensions drives remediation urgency. An N5+LEV+IRV finding is the highest urgency. An N1+NLEV+Non-IRV finding is the lowest. Every combination in between maps to a specific recommended timeline.
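
The evaluation can be sketched in a few lines. This is a minimal illustration, not VDR's prescribed logic: the EPSS cutoff (0.1) and the single internet-reachable flag are assumptions, and real implementations derive reachability from network topology rather than a boolean.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    cve: str
    epss: float          # EPSS probability, 0.0-1.0
    on_kev: bool         # listed in the CISA KEV catalog
    internet_reachable: bool
    impact: int          # provider-evaluated N1-N5, stored as 1-5

def exploitability(f: Finding) -> str:
    """LEV if actively exploited (KEV) or EPSS is high; otherwise NLEV.
    The 0.1 cutoff is an illustrative choice, not a VDR-defined value."""
    return "LEV" if f.on_kev or f.epss >= 0.1 else "NLEV"

def reachability(f: Finding) -> str:
    return "IRV" if f.internet_reachable else "Non-IRV"

def evaluate(f: Finding) -> tuple[str, str, str]:
    """Produce the three-dimensional label, e.g. ('N5', 'LEV', 'IRV')."""
    return (f"N{f.impact}", exploitability(f), reachability(f))

f = Finding("CVE-2026-0001", epss=0.72, on_kev=True,
            internet_reachable=True, impact=5)
print(evaluate(f))  # ('N5', 'LEV', 'IRV') -> highest urgency
```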

Key MUST requirements

VDR-CSO-DET: Discover and identify vulnerabilities. You have to scan, and you have to scan across the scope. This is not a change from traditional FedRAMP. The scanning obligation continues.

VDR-CSO-RES: Respond to all vulnerabilities. Every finding gets a response. The response might be remediation, mitigation, or acceptance, but it cannot be silence. Ignoring a finding is not a valid response.

VDR-TFR-MAV: Categorize as “accepted vulnerability” if not remediated within 192 days. This is a MUST. The 192-day threshold is not a recommended target. It is a mandatory classification. If a finding sits open for 192 days without remediation, you MUST classify it as an accepted vulnerability and provide the documentation required by VDR-RPT-AVI: tracking identifier, detection time, evaluation time, exploitability status, reachability status, N1-N5 impact, explanation, and supplementary information. This is the transparency mechanism. VDR does not say you must fix everything in 192 days. It says you must be transparent about what you have not fixed.
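
The 192-day clock itself is simple to automate. A minimal sketch: the 192-day classification is the mandatory part (VDR-TFR-MAV); the 30-day early-warning window is a workflow choice, not something the standard requires.

```python
from datetime import datetime, timedelta

ACCEPTED_THRESHOLD = timedelta(days=192)  # VDR-TFR-MAV: mandatory classification point
WARNING_WINDOW = timedelta(days=30)       # internal early-warning choice, not from VDR

def classify(detected: datetime, now: datetime) -> str:
    """Classify a finding by how long it has been open."""
    open_for = now - detected
    if open_for >= ACCEPTED_THRESHOLD:
        return "accepted-vulnerability"   # VDR-RPT-AVI documentation required
    if open_for >= ACCEPTED_THRESHOLD - WARNING_WINDOW:
        return "approaching-threshold"    # prompt: remediate or prepare to accept
    return "open"

now = datetime(2026, 4, 1)
print(classify(now - timedelta(days=200), now))  # accepted-vulnerability
```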

VDR-TFR-MHR: Monthly human-readable activity reports. These are the VDR equivalent of ConMon vulnerability summaries, but with the three-dimensional evaluation data included. Detection source, timing, exploitability, reachability, impact, trajectory. More detailed than traditional ConMon vulnerability reporting.

VDR-RPT-PER: Persistent reporting to all necessary parties. Not one-time. Not on-request. Persistent. The data is available continuously through the reporting mechanism, not produced on a schedule and filed.

VDR-RPT-VDT: Detailed vulnerability information in reports. This goes beyond a CVE identifier and a severity score. Reports include detection source, timing, exploitability assessment, reachability status, environmental impact, and remediation trajectory.
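
The detail requirements lend themselves to a structured record. A hedged sketch of what one report entry might look like, covering the fields named in VDR-RPT-VDT and VDR-RPT-AVI; every JSON field name here is an assumption, since FedRAMP's VDR materials define the actual schema.

```python
import json

# One illustrative report entry. Field names are assumptions for the
# sketch; the identifiers and timestamps are made up.
entry = {
    "tracking_id": "VDR-2026-0042",
    "cve": "CVE-2026-0001",
    "detection_source": "infrastructure-scanner",
    "detected_at": "2026-03-01T00:00:00Z",
    "evaluated_at": "2026-03-02T00:00:00Z",
    "exploitability": "LEV",
    "reachability": "IRV",
    "impact": "N5",
    "trajectory": "remediation-in-progress",
}
print(json.dumps(entry, indent=2))
```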

Key SHOULD requirements (recommended, not mandatory)

The SHOULD requirements define FedRAMP’s expectations for mature implementations. They are not mandatory, but building toward them demonstrates the maturity that assessors and agencies want to see.

VDR-TFR-PVR: Remediation timeframes by baseline. These are SHOULD, not MUST.

For High baseline systems:

  • N5+LEV+IRV: 12 hours
  • N5+LEV: 1 day
  • N5+IRV: 2 days
  • N4+LEV+IRV: 2 days
  • N3+NLEV: 64 days

For Moderate baseline systems:

  • N5+LEV+IRV: 2 days
  • N5+LEV: 4 days
  • N5+IRV: 8 days
  • N4+LEV+IRV: 8 days
  • N3+NLEV: 128 days

For Low baseline systems:

  • N5+LEV+IRV: 4 days

The range is wide: 12 hours for the most critical findings on High baseline systems, 128 days for moderate-risk findings on Moderate systems. The three-dimensional evaluation drives the timeline, not just the CVSS score.

VDR-TFR-EVU: Evaluation timeframes. 2 days (High), 5 days (Moderate), 7 days (Low) from detection. This is how fast you should complete the three-dimensional evaluation after a finding is detected. The evaluation has to happen before the remediation SLA starts ticking.

VDR-TFR-MRH: Machine-readable data feeds. Every 14 days (Moderate), every 7 days (High). These feeds enable agency-side automation. An agency consuming your VDR data programmatically can integrate it into their risk management platform.

VDR-TFR-KEV: CISA KEV per BOD 22-01 (recommended). BOD 22-01 applies to federal agencies, not CSPs directly. VDR recommends following KEV timelines. In practice, agencies increasingly expect CSPs to treat KEVs with the same urgency.

VDR-BST-AKE: Do not deploy new resources with known KEVs. This is a deployment guardrail. If a proposed deployment includes a component with a known exploited vulnerability, it should be caught before deployment, not after.

```mermaid
graph TD
    FIND[Finding] --> LEV{Exploitable?}
    FIND --> IRV{Internet-Reachable?}
    FIND --> NAI{Potential Adverse Impact}
    LEV --> |LEV| HIGH[Faster SLA]
    LEV --> |NLEV| LOW[Slower SLA]
    IRV --> |IRV| HIGH
    IRV --> |Non-IRV| LOW
    NAI --> |N4-N5| HIGH
    NAI --> |N1-N3| LOW
    HIGH --> SLA[SLA Assignment]
    LOW --> SLA
    SLA --> REM[Remediation]
    SLA -->|192 days| ACC[Accepted Vulnerability]

    style FIND fill:#2b5797,stroke:#5b9bd5,color:#fff
    style LEV fill:#5c4a1a,stroke:#ffc857,color:#fff
    style IRV fill:#5c4a1a,stroke:#ffc857,color:#fff
    style NAI fill:#5c4a1a,stroke:#ffc857,color:#fff
    style HIGH fill:#5c1a1a,stroke:#ff6b6b,color:#fff
    style LOW fill:#1a5c3d,stroke:#51cf66,color:#fff
    style SLA fill:#4a1a5c,stroke:#c77dff,color:#fff
    style REM fill:#1a3d1a,stroke:#a9dc76,color:#fff
    style ACC fill:#5c1a3d,stroke:#ff6b9d,color:#fff
```

Why it matters

VDR moves vulnerability management from a checkbox (“scan monthly, remediate by CVSS severity”) to a risk-based methodology that weighs environment context.

The traditional FedRAMP approach was prescriptive and uniform: scan monthly, remediate Critical and High within 30 days, Moderate within 90, Low within 180. Simple. Easy to audit. But it treated all Criticals the same. A CVSS 9.8 vulnerability on an internet-facing authentication service and a CVSS 9.8 vulnerability on an air-gapped internal tool got the same 30-day SLA. It also did not account for exploitability. A theoretical vulnerability with no known exploit got the same SLA as one with active exploitation and a CISA KEV listing.

VDR fixes this by forcing contextual evaluation. Three dimensions instead of one. The result is more nuanced prioritization: an N5+LEV+IRV finding on a High baseline system gets a 12-hour SHOULD remediation target, while an N3+NLEV finding on a Moderate baseline system gets 128 days. Resources are directed toward the findings that actually matter.

The 192-day accepted-vulnerability rule adds transparency. Under the old model, a finding could sit on a POA&M indefinitely with a vague “planned remediation” status. Quarter after quarter, the same finding appeared with “in progress” status and a milestone date that kept moving. Nobody forced the conversation: is this actually going to be fixed, or should we accept it? VDR forces that conversation at 192 days. Either you fix it, or it is an accepted vulnerability. The accepted-vulnerability classification requires detailed documentation (VDR-RPT-AVI): why it has not been fixed, what the risk is, and what mitigations are in place. No more indefinite deferral.

The MUST vs SHOULD distinction matters for implementation. VDR-TFR-MAV (192-day classification) is a MUST. VDR-TFR-PVR (remediation timeframes) is a SHOULD. You have to classify at 192 days. You are recommended to meet the baseline-specific remediation timeframes, but they are not mandatory. Building your workflow around the MUST requirements ensures compliance. Building it around the SHOULD requirements demonstrates maturity and positions you well for when those recommendations become requirements.

VDR also connects to the other Rev5 Balance improvements. Accepted vulnerabilities are a required section of the quarterly OAR under CCM. Machine-readable VDR data feeds through the trust center defined by ADS. The VDR evaluation data informs the CCM reporting and the ADS transparency requirements. The improvements are designed to work together.

The pain we lived

We ran multiple scanners across environments: infrastructure scanners, web application scanners, container scanners, database scanners. Each produced results in its own format with its own severity scale.

The first problem was normalization. Qualys, Nessus, AWS Inspector, Burp Suite. Different finding formats, different CVE mappings, different severity scores for the same vulnerability. A finding that Qualys rated as a 4 (Critical) might appear as a 9.8 CVSS from Nessus and a “High” from Inspector. Before we could do anything useful with the results, we had to normalize them into a common schema. We could not afford a six-figure enterprise vulnerability aggregation platform for each client, so we did it by hand. Every ConMon cycle, hours spent mapping scanner outputs into a common format.

The second problem was lifecycle tracking. Which findings were new this scan cycle? Which were remediated since last month? Which were closed last month but reappeared? A finding that changed CVE identifiers between scanner versions looked like a new finding even though it was the same vulnerability on the same host. A finding that was patched and then reappeared because a rollback happened looked like a new finding unless you tracked the history. Every ConMon cycle, reconciling the current scan against the prior scan consumed hours of manual comparison.

The third problem was context. CVSS was the only input. A Critical finding on a development server and a Critical finding on a production authentication service got the same 30-day SLA. Nobody had time to evaluate exploitability, reachability, and environmental impact for every finding manually. So everything was prioritized by CVSS, which was better than nothing but worse than risk-based prioritization.

The fourth problem was deviation management. Findings that could not be remediated on the standard timeline (false positives, operational requirements, risk adjustments) each needed documented justification and approval. The deviation lived in a separate tracker from the scan results. Mapping between the two was manual. When an assessor asked “show me the deviation for this finding,” someone had to cross-reference two systems.

Long-lived findings were a chronic issue. Findings sat on POA&Ms for months or years with “planned remediation” status and a milestone date that kept moving. Nobody forced the “accept or fix” conversation. The 192-day clock did not exist under the old model, but the problem it addresses, indefinite deferral without transparency, was real.

How we automate it

VDR shifts vulnerability management from “scan and remediate by CVSS” to “evaluate exploitability, reachability, and environmental impact for every finding.” That evaluation is hard to do manually at volume. Enrichment and environmental context are the core of any VDR implementation. Here is how we approach VDR automation in Stratus GRC-ITSM.

  1. Scanning across the environment. Infrastructure, containers, applications, databases, web apps. Agent-based and agentless as needed. Multiple sources, one normalized data model. Every scanner’s output maps to the same schema. The finding, the affected asset, the CVE, the detection source, and the detection timestamp are all captured the same way regardless of which scanner produced them. Normalization happens at intake, not at reporting time.
  2. Enrichment on intake. Every finding is enriched with CISA KEV status, EPSS score, and threat intelligence feeds. No manual CVE lookups. When a finding is ingested, the platform checks: is this on the KEV list? What is the EPSS probability? Is there active exploitation intelligence? The answers are attached to the finding automatically. The enrichment data drives the LEV/NLEV evaluation.
  3. Environmental context from live asset inventory. The platform knows which assets are internet-reachable (drives IRV), which handle federal data or CUI (drives Potential Adverse Impact), and what the blast radius is. Context is computed from the asset inventory, network topology, and data flow maps, not assigned manually per finding. When a finding is on an asset tagged as internet-reachable and handling CUI, the IRV and N1-N5 values are computed automatically.
  4. Three-dimensional evaluation. Exploitability (EPSS + KEV + threat intel drives LEV/NLEV). Reachability (network context drives IRV). Impact (asset criticality + data sensitivity + blast radius drives N1 through N5). VDR-TFR-EVU evaluation timeframes become SLA targets: 2 days for High, 5 days for Moderate, 7 days for Low as SHOULD targets. The evaluation is computed, not manually assessed per finding. For findings where the automated evaluation needs human review (ambiguous context, new asset types), the system flags the finding and assigns it for manual evaluation within the SLA.
  5. Remediation SLA assignment. VDR-TFR-PVR recommended timeframes (baseline-specific) are applied as SLA targets. N5+LEV+IRV gets the fastest SLA. N3+NLEV gets the slowest. Approaching deadlines escalate automatically. The SLA is computed from the three-dimensional evaluation, not manually assigned. The person remediating sees the SLA, the due date, and the evaluation data that drove it.
  6. 192-day accepted vulnerability workflow. Findings approaching the 192-day MUST threshold are surfaced early. 30 days before the threshold, the finding is flagged. The responsible party is prompted: remediate, or prepare to accept. When a finding crosses the 192-day line, VDR-RPT-AVI documentation fires automatically: tracking identifier, detection time, evaluation time, exploitability status, reachability status, N1-N5 impact, explanation, and supplementary information. The documentation template is pre-populated from the finding record. The responsible party fills in the explanation and supplementary information. Issue Tickets become POA&Ms by setting “IS POA&M” to Yes. No separate object.
  7. Monthly human-readable reports. VDR-TFR-MHR reports generated from live data with all required fields: detection source, timing, exploitability, reachability, impact, trajectory. The report is a generated view of the current vulnerability data, not a manually assembled document.
  8. Machine-readable feeds on schedule. Every 14 days (Moderate) or 7 days (High) per VDR-TFR-MRH. Generated from the same data as the human-readable report, so they are always consistent. The feeds are published to the trust center (ADS) for programmatic agency consumption.
  9. KEV deployment guardrails. VDR-BST-AKE (SHOULD NOT deploy new resources with known KEVs) is enforced at deployment time. The CI/CD pipeline checks the component manifest against the KEV catalog. If a proposed deployment includes a component with a KEV, it is flagged before deployment, not caught after the fact.
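
The KEV guardrail in step 9 can be sketched as a simple manifest check. The manifest shape and the example CVE id here are illustrative; in practice the KEV set comes from CISA's published catalog and the manifest from your SBOM or scanner output.

```python
# Sketch of a VDR-BST-AKE deployment guardrail: surface any component
# in a proposed deployment that carries a KEV-listed CVE, so the
# pipeline can block it before deploy rather than catch it after.

def kev_guardrail(manifest: dict[str, list[str]], kev_cves: set[str]) -> list[str]:
    """Return 'component: CVE' strings that should block this deployment."""
    return [f"{component}: {cve}"
            for component, cves in manifest.items()
            for cve in cves
            if cve in kev_cves]

# Hypothetical component and CVE id, for illustration only.
manifest = {"libexample-1.0": ["CVE-2026-1111"], "app": []}
print(kev_guardrail(manifest, kev_cves={"CVE-2026-1111"}))
# ['libexample-1.0: CVE-2026-1111'] -> fail the pipeline step
```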

The throughline: VDR wants contextual evaluation, not just CVSS scores. Enrichment plus environmental context plus computed SLAs turn a scoring exercise into a workflow. The evaluation is automated. The SLA is computed. The 192-day clock runs automatically. The reports generate from the same data. Resources go to the findings that matter, not the findings that have the highest CVSS score regardless of context.

Compliance is a byproduct of operations, not a separate workstream.

FAQ

Q: Does VDR replace the traditional POA&M?

A: No. VDR supplements POA&Ms. Traditional POA&M reporting continues for all frameworks. VDR adds the three-dimensional evaluation layer (LEV/NLEV, IRV, N1-N5), the 192-day accepted-vulnerability classification, and structured reporting requirements. In a unified data model, the Issue Ticket is the POA&M entry and the VDR record. They are the same data object. The POA&M report and the VDR report pull from the same source. There is no reconciliation between VDR tracking and POA&M tracking.

Q: What is the MUST versus SHOULD distinction in VDR?

A: The 192-day accepted-vulnerability threshold (VDR-TFR-MAV) is a MUST. The baseline-specific remediation timeframes (VDR-TFR-PVR) are a SHOULD. Monthly human-readable activity reports (VDR-TFR-MHR) are a MUST. Machine-readable data feeds (VDR-TFR-MRH) are a SHOULD. Build workflows that enforce the MUST and surface the SHOULD. The MUST requirements are mandatory. The SHOULD requirements drive assessor expectations and demonstrate maturity.

Q: What is the 192-day accepted-vulnerability rule?

A: VDR-TFR-MAV requires that any vulnerability not fully remediated within 192 days of evaluation MUST be categorized as an “accepted vulnerability.” This is not a target. It is a mandatory reclassification point. The documentation (VDR-RPT-AVI) requires: tracking identifier, detection time, evaluation time, IRV/non-IRV status, LEV/NLEV status, N1 through N5 impact, explanation of acceptance, and supplementary information for agency risk decisions. This forces transparency on long-standing findings and replaces the pattern of indefinite POA&M extensions.

Q: How does the N1 through N5 impact scale work?

A: N1 (negligible) through N5 (critical, affecting multiple agencies) is provider-evaluated based on the resource’s role, the data it handles, and the blast radius. An RDS instance storing federal CUI scores higher than one storing application logs. The evaluation is contextual, which is the entire point of VDR. N1 through N5, combined with LEV/NLEV and IRV/non-IRV, produces the three-dimensional risk picture that drives remediation urgency.

Q: What do LEV/NLEV and IRV/non-IRV mean in practice?

A: LEV (Likely Exploitable Vulnerability) means the vulnerability has active exploitation, a high EPSS score, or CISA KEV catalog status. NLEV (Not Likely Exploitable) means low EPSS, no observed exploitation, theoretical risk only. IRV (Internet-Reachable Vulnerability) means the affected resource is internet-facing based on network topology, security groups, and routing. Non-IRV means it is behind network layers and not directly exposed. The combination drives remediation urgency: an LEV+IRV finding gets the fastest SLA. An NLEV+Non-IRV finding gets the slowest.

Q: How do machine-readable VDR feeds work?

A: VDR-TFR-MRH (SHOULD) requires structured vulnerability data feeds published every 14 days for Moderate baseline, every 7 days for High baseline. These feeds are published to the trust center (defined by ADS) for programmatic agency consumption. An agency consuming your VDR data programmatically can integrate it into their risk management platform. The feeds generate from the same data as the human-readable reports, so they are always consistent.

Q: How does VDR change vulnerability management from a CVSS-only approach?

A: Traditional FedRAMP treated all Criticals the same: 30-day SLA regardless of context. A CVSS 9.8 on an air-gapped internal tool and a CVSS 9.8 on an internet-facing authentication service got the same treatment. VDR forces contextual evaluation on three dimensions: exploitability (EPSS, KEV, threat intel), reachability (network architecture), and impact (asset criticality, data sensitivity, blast radius). The result is nuanced prioritization where resources go to the findings that actually matter.

This article is part of a 15-part series on the operational disciplines that CMMC, FedRAMP Rev5, and FedRAMP 20x all test. [Read the series overview: Stop Building for Compliance. Build for Operations.]

