Vulnerability Management Across CMMC, FedRAMP Rev5, and FedRAMP 20x

April 16, 2026

The core concept

Vulnerability management is a pipeline: scan, enrich, evaluate, prioritize, remediate, verify, report. Every finding moves through it with an owner and a deadline. Frameworks differ on scanning frequency, severity evaluation, remediation timelines, and reporting format. The pipeline itself is the same.

Scanning is the easy part. Running a scanner and getting results is table stakes. The hard part is everything after: normalizing results from multiple scanners into one schema, determining which findings are new versus remediated versus reopened across scan cycles, enriching findings with threat intelligence, assigning owners and SLAs, tracking remediation, handling findings that cannot be fixed on the standard timeline, and reporting it all in the format each framework expects.

That pipeline is what breaks when tools are disconnected. And it is what every framework tests.

graph LR
    S[Scan Results] --> N[Normalize]
    N --> E[Enrich]
    E --> EV[Evaluate]
    EV --> ISS[Issue Ticket]
    ISS -->|remediate| CT[Change Ticket]
    ISS -->|cannot fix| DEV[Deviation]
    CT --> CL[Closed]
    DEV --> POAM[POA&M]
 
    style S fill:#2b5797,stroke:#5b9bd5,color:#fff
    style N fill:#1a3d5c,stroke:#4ecdc4,color:#fff
    style E fill:#5c4a1a,stroke:#ffc857,color:#fff
    style EV fill:#4a1a5c,stroke:#c77dff,color:#fff
    style ISS fill:#5c1a1a,stroke:#ff6b6b,color:#fff
    style CT fill:#1a5c3d,stroke:#51cf66,color:#fff
    style DEV fill:#5c1a3d,stroke:#ff6b9d,color:#fff
    style CL fill:#1a3d1a,stroke:#a9dc76,color:#fff
    style POAM fill:#5c4a1a,stroke:#ffc857,color:#fff
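The pipeline above can be sketched as a minimal data model: one normalized finding record that carries its owner, SLA, and current stage. The field names and stage labels here are illustrative, not the schema of any particular platform.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class Stage(Enum):
    """Where a finding sits in the scan -> report pipeline."""
    SCANNED = "scanned"
    ENRICHED = "enriched"
    EVALUATED = "evaluated"
    ISSUED = "issue_ticket"
    CLOSED = "closed"
    DEVIATION = "deviation"

@dataclass
class Finding:
    finding_id: str              # CVE or scanner plugin ID, normalized at intake
    asset_id: str
    severity: str                # mapped onto one common scale
    owner: Optional[str] = None  # assigned when the issue ticket is cut
    sla_days: Optional[int] = None
    stage: Stage = Stage.SCANNED

# A finding enters as a raw scan result, then picks up an owner and SLA
f = Finding("CVE-2026-0001", "srv-01", "high")
f.owner, f.sla_days, f.stage = "platform-team", 30, Stage.ISSUED
```

The point of the single record: every later step (change ticket, deviation, report) references the same object rather than a copy in another tool.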

What CMMC requires

CMMC does not set a specific scanning frequency. It does require that you scan, remediate, and prove both. The evidence has to show a pattern, not a one-time event.

The relevant practices:

  • RA.L2-3.11.2 (Level 2): Scan for vulnerabilities in organizational systems and applications periodically and when new vulnerabilities affecting those systems and applications are identified. 5-point, not POA&M-eligible. This is the scanning practice. “Periodically” means you define a schedule and stick to it. “When new vulnerabilities are identified” means you have a process for responding to newly disclosed CVEs outside your routine schedule.
  • SI.L1-3.14.1 (Level 1): Identify, report, and correct information system flaws in a timely manner. 5-point, not POA&M-eligible. This is the remediation practice, and it comes from Level 1. CMMC does not define “timely” with specific day counts the way FedRAMP does, but assessors expect to see severity-driven timelines and evidence that they are met.
  • RA.L2-3.11.3 (Level 2): Remediate vulnerabilities in accordance with risk assessments. 1-point, POA&M-eligible. This is the only vulnerability management practice that can be deferred via POA&M. It requires risk-based prioritization, meaning you make remediation decisions based on the risk each vulnerability poses in your specific environment.

Supporting practices:

  • SI.L2-3.14.3: Monitor security alerts, advisories, and directives and take action.
  • RA.L2-3.11.1: Periodically assess the risk to organizational operations, organizational assets, and individuals.

What a C3PAO assessor looks for:

  • A scan schedule and scan results that match it. If the policy says monthly, the assessor expects 12 months of monthly scan reports.
  • Remediation timelines tied to severity, and evidence of follow-through. Not just “we plan to fix criticals in 30 days” but actual closure data.
  • A process for handling newly disclosed CVEs outside the routine schedule.
  • Evidence you act on security alerts and advisories, not just receive them.
  • Scan coverage that matches the asset inventory. If the inventory lists 50 servers and the scan report covers 43, the assessor will ask about the other 7.

The common gap: you scan, but you cannot show timely remediation or consistent triage. Scan results pile up in a dashboard without tracking. Findings are not assigned to individuals. Remediation deadlines are not tied to severity. There is no evidence trail from “finding discovered” to “finding remediated” or “finding accepted as a risk.”
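The coverage check an assessor performs by hand is just a set difference between the asset inventory and the scan report. A sketch, assuming asset identifiers are comparable across both sources:

```python
def coverage_gap(inventory, scanned):
    """Return assets in the inventory that no scan result covers.

    Both arguments are iterables of asset identifiers. Any asset listed
    in the inventory but absent from scan results is a gap the assessor
    will ask about.
    """
    return sorted(set(inventory) - set(scanned))

# The example from above: 50 servers in inventory, 43 in the scan report
inventory = [f"srv-{i:02d}" for i in range(50)]
scanned = [f"srv-{i:02d}" for i in range(43)]
gap = coverage_gap(inventory, scanned)  # the 7 assets to explain
```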

The scoring matters. RA.L2-3.11.2 (scanning) and SI.L1-3.14.1 (remediation) are both 5-point and not POA&M-eligible. If you cannot show timely scanning and remediation, those practices do not score, and you cannot defer them. At that point these are not findings on a POA&M. They are pass-fail gates.

What FedRAMP Rev5 requires

FedRAMP is prescriptive about vulnerability management. The timelines are explicit, the scan types are defined, and the cadences are testable.

The relevant controls:

  • RA-5 (Vulnerability Monitoring and Scanning)
  • SI-2 (Flaw Remediation)

RA-5 parameters for the Moderate baseline:

  • Monthly OS and infrastructure scanning
  • Monthly web application (including APIs) and database scanning
  • Annual independent assessor scans (penetration testing)

That monthly cadence applies across all scan types. If you run monthly infrastructure scans but only do quarterly web application scans, you have a finding.

Remediation SLAs from date of discovery:

  • Critical and High: 30 days
  • Moderate: 90 days
  • Low: 180 days
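Because the SLAs are fixed day counts from date of discovery, the due date and breach status are pure arithmetic. A sketch of that calculation:

```python
from datetime import date, timedelta

# Rev5 remediation SLAs, in days from date of discovery
REV5_SLA_DAYS = {"critical": 30, "high": 30, "moderate": 90, "low": 180}

def remediation_due(severity: str, discovered: date) -> date:
    """Due date for a finding under the severity-based Rev5 SLAs."""
    return discovered + timedelta(days=REV5_SLA_DAYS[severity.lower()])

def is_breached(severity: str, discovered: date, today: date) -> bool:
    """True when the finding is still open past its SLA."""
    return today > remediation_due(severity, discovered)
```

A high-severity finding discovered January 1 is due January 31; at 45 days old and still open, it is a breach your 3PAO will flag.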

SI-2 requires security-relevant software updates (patches) within 30 days of release. CISA Known Exploited Vulnerabilities (KEV) remediation per BOD 22-01 is recommended.

What 3PAOs look for:

  • Monthly scan reports across every scan type, with results that match the asset inventory
  • Evidence of remediation within SLA for each severity level
  • Scan coverage that matches the authorization boundary. Every scannable asset in the inventory should appear in scan results.
  • A process for ad-hoc scanning on critical CVEs, not only the monthly cadence
  • Vulnerability Deviation Reports (VDR) showing false positives, operational requirements, and risk adjustments with authorizing official approval

Common gap: scanning covers servers but misses applications, containers, or cloud configurations. Remediation SLAs exist in the policy but are not tracked as actual deadlines with evidence. Scan coverage does not match the asset inventory, and nobody reconciles the two.

30, 90, and 180 are not suggestions. They are control parameters your 3PAO will test against. If a high-severity finding is 45 days old and still open, that is a finding on a finding.

What FedRAMP 20x requires

FedRAMP 20x, through the Vulnerability Detection and Response (VDR) process, layers a contextual, risk-based methodology on top of traditional vulnerability management. VDR does not replace POA&Ms. It adds a structured evaluation framework that considers exploitability, reachability, and environmental impact, not just a CVSS score.

The relevant KSI:

  • KSI-AFR-04 (Vulnerability Detection and Response): document the VDR methodology per the FedRAMP VDR process. This KSI has both machine-based and non-machine-based validations. Machine-based checks verify that vulnerability scanning coverage is active and comprehensive across resource types. Non-machine-based validations review the issue ticket details and the VDR standard operating procedure.

VDR evaluates every finding on three dimensions:

  • Exploitability: LEV (Likely Exploitable Vulnerability) versus NLEV (Not Likely Exploitable). Determined by CISA KEV status, EPSS score, and threat intelligence.
  • Reachability: IRV (Internet-Reachable Vulnerability) versus non-IRV. Determined by network architecture and exposure.
  • Potential Adverse Impact: N1 (negligible) through N5 (catastrophic, affecting multiple agencies). Determined by the asset’s role, the data it handles, and the blast radius of compromise.
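The three dimensions compose into labels like N5+LEV+IRV. A sketch of that classification, with one loudly flagged assumption: the EPSS threshold used below (0.2) is illustrative, not a value defined by the VDR process.

```python
from dataclasses import dataclass

@dataclass
class VdrEvaluation:
    """One finding evaluated on the three VDR dimensions."""
    kev: bool                 # listed on the CISA KEV catalog
    epss: float               # EPSS probability, 0.0 to 1.0
    internet_reachable: bool  # from network architecture / exposure
    impact: int               # N1 (negligible) .. N5 (catastrophic)

    @property
    def exploitability(self) -> str:
        # Illustrative rule: KEV listing or a high EPSS score => LEV.
        # The 0.2 threshold is an assumption for this sketch.
        return "LEV" if self.kev or self.epss >= 0.2 else "NLEV"

    @property
    def reachability(self) -> str:
        return "IRV" if self.internet_reachable else "non-IRV"

    def label(self) -> str:
        return f"N{self.impact}+{self.exploitability}+{self.reachability}"
```

The worst case, a KEV-listed vulnerability on an internet-facing asset with catastrophic impact, labels as N5+LEV+IRV, which maps to the shortest remediation targets.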

Key MUST requirements:

  • VDR-CSO-DET: discover and identify vulnerabilities.
  • VDR-CSO-RES: respond to all vulnerabilities.
  • VDR-TFR-MAV: any vulnerability not fully remediated within 192 days MUST be categorized as an accepted vulnerability with the required documentation. This is the hard cutoff. 192 days is not a target. It is a mandatory reclassification point.
  • VDR-TFR-MHR: monthly human-readable activity reports.
  • VDR-RPT-PER: persistent reporting to all necessary parties.
  • VDR-RPT-VDT: detailed vulnerability information in reports.

Key SHOULD requirements (recommended, not mandatory):

  • VDR-TFR-PVR remediation timeframes, which vary by baseline. For the High baseline: N5+LEV+IRV = 12 hours; N3+NLEV = 64 days. For Moderate: N5+LEV+IRV = 2 days; N3+NLEV = 128 days. For Low: N5+LEV+IRV = 4 days. These are targets, not mandates, but your 3PAO will ask about them.
  • VDR-TFR-EVU evaluation timeframes: 2 days (High), 5 days (Moderate), 7 days (Low) from detection. This is how fast you need to evaluate a new finding, not fix it.
  • VDR-TFR-KEV: CISA KEV per BOD 22-01 (recommended; BOD 22-01 applies to federal agencies, not CSPs directly, but following it is expected).
  • VDR-BST-AKE: do not deploy new resources with known exploited vulnerabilities.

Machine-readable data feeds: every 14 days (Moderate baseline), every 7 days (High baseline).

The distinction between MUST and SHOULD matters. The 192-day accepted vulnerability rule (VDR-TFR-MAV) is mandatory. The specific remediation timeframes (VDR-TFR-PVR) are recommended targets that vary by baseline. Know the difference.
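Because VDR-TFR-MAV is a hard cutoff, the 192-day check reduces to comparing a finding's age against the threshold, with an early warning before it trips. The 30-day warning window below is an illustrative choice, not a VDR value.

```python
from datetime import date, timedelta

MAV_DAYS = 192  # VDR-TFR-MAV: mandatory accepted-vulnerability cutoff

def mav_status(detected: date, today: date, warn_days: int = 30) -> str:
    """Classify a still-open finding against the 192-day MUST threshold.

    warn_days is an illustrative early-warning window so findings are
    surfaced before reclassification becomes mandatory.
    """
    age = (today - detected).days
    if age >= MAV_DAYS:
        return "accepted-vulnerability-required"  # documentation MUST fire
    if age >= MAV_DAYS - warn_days:
        return "approaching-threshold"
    return "open"
```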

Here is what this looks like in practice. KSI-AFR-04 machine-based validations check that scanning coverage is active across all resource types. Non-machine-based validations reference the issue ticket details report and the VDR standard operating procedure. The KSI is validated both ways: automated checks confirm the scanning infrastructure is in place, and human review confirms the operational processes and evidence quality.

Common gap on the path to 20x: CVSS-only severity with no contextual evaluation, POA&M data that is not structured for VDR reporting, no 192-day tracking, and no process for the three-dimensional evaluation (exploitability, reachability, impact).

The pain we lived

Vulnerability management was the pain that started everything.

We run different combinations of scanners across environments depending on what is deployed: infrastructure scanners, web application scanners, container scanners, database scanners. Each tool produces results in its own format with its own severity scale. A “high” in one scanner is not the same as a “high” in another. A finding identified by CVE in one tool might be identified by a proprietary plugin ID in another.

Step one, every month, was normalizing all of that into a single view. We could not afford a six-figure enterprise vulnerability platform for each client. So we did it by hand: export from each scanner, map to a common severity, deduplicate, and produce one unified findings list. For each environment. Every month.
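The normalization step is a per-scanner severity map plus deduplication on (vulnerability, asset). A sketch, where the scanner names and severity scales are invented for illustration:

```python
# Illustrative severity maps; real scanner scales vary by product
SEVERITY_MAPS = {
    "scanner_a": {"critical": "critical", "high": "high",
                  "medium": "moderate", "low": "low"},
    "scanner_b": {"4": "critical", "3": "high", "2": "moderate", "1": "low"},
}

def normalize(raw_findings):
    """Map each scanner's severity onto one scale and deduplicate.

    raw_findings: iterable of (scanner, vuln_id, asset, severity) tuples.
    The dedup key is (vuln_id, asset): the same CVE on the same asset
    reported by two scanners collapses to one unified finding.
    """
    unified = {}
    for scanner, vuln_id, asset, severity in raw_findings:
        entry = unified.setdefault((vuln_id, asset), {
            "vuln_id": vuln_id,
            "asset": asset,
            "severity": SEVERITY_MAPS[scanner][str(severity).lower()],
            "sources": [],
        })
        entry["sources"].append(scanner)
    return list(unified.values())
```

This is the step we did by hand with exports and paste: two scanners reporting the same CVE on the same server produce one finding with two sources, not two findings.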

Then the reconciliation. Compare this month’s results against last month. Which findings are new? Which ones were remediated? Which ones were closed last month but reappeared? This reconciliation is where most programs break. If you cannot reliably determine new versus remediated versus reopened, your POA&M data drifts. Closed items reopen without anyone noticing. New items are mixed in with items that have been there for six months. The POA&M stops reflecting reality.
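The reconciliation itself is set arithmetic over finding keys, which is why it is so error-prone by hand and so trivial to automate. A sketch:

```python
def reconcile(previous_open, current, previously_closed):
    """Classify this cycle's findings against the prior cycle.

    All arguments are sets of finding keys, e.g. (vuln_id, asset) tuples:
    previous_open      -- keys still open after the last cycle
    current            -- keys present in this cycle's scan results
    previously_closed  -- keys closed in any earlier cycle
    """
    return {
        "new": current - previous_open - previously_closed,
        "remediated": previous_open - current,
        "reopened": current & previously_closed,
        "still_open": current & previous_open,
    }
```

A finding that was closed last month but appears in this month's results lands in "reopened", which is exactly the case that goes unnoticed when the comparison is done in spreadsheets.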

Then the mapping. Every finding that meets POA&M criteria has to be tracked as a POA&M entry. In a disconnected toolset, that means manually creating a POA&M entry for each finding, linking it to the scanner source, assigning an owner, setting an SLA, and tracking it to closure. When a change ticket remediates five findings, someone has to manually update five POA&M entries and link them to the change ticket.

The manual process consumed days every month per environment. Across 15+ environments, it consumed weeks. The error rate was constant. Inconsistencies between the POA&M, the scan report, and the deviation tracker were the norm, not the exception. The root cause was always the same: the finding, the POA&M entry, the change ticket, and the deviation all lived in different places. The data existed. The relationships did not.

How we automate it

Here is how we built the vulnerability management pipeline in Stratus GRC-ITSM. The goal was to eliminate every manual step between “scanner produces results” and “report is delivered.”

  1. Scan ingestion. Results from infrastructure, container, application, database, and web tools feed into one data model. Different scanners, different formats, one normalized schema. The platform handles the format translation. No export-and-paste.
  2. Automatic enrichment. Every finding is enriched on intake with CISA KEV status, EPSS score, and threat intelligence feeds. No manual CVE lookups. No “let me check if this is on the KEV list.” The enrichment data is there when the finding arrives.
  3. Asset context. Findings are correlated with the live asset inventory. The platform knows which assets are internet-facing, which handle federal data or CUI, and what the blast radius is. This context drives the three-dimensional VDR evaluation: exploitability from KEV + EPSS + threat intel, reachability from network context, and Potential Adverse Impact (N1 through N5) from asset criticality.
  4. New vs remediated vs reopened. On every scan cycle, findings are compared against prior results. New findings create new Issue Tickets. Remediated findings close existing tickets. Reopened findings reopen them. No manual reconciliation. The delta is calculated automatically.
  5. SLA assignment. Remediation timelines are set on intake based on the evaluation: Rev5 High/Moderate/Low for traditional programs, baseline-specific VDR SHOULD targets for 20x. Approaching deadlines escalate. Breached SLAs get flagged immediately. The 192-day accepted vulnerability threshold (VDR-TFR-MAV) is tracked automatically. Findings approaching the threshold are surfaced early. When they cross it, the required documentation fires.
  6. Issue Tickets as the unit of work. Every finding is a tracked Issue Ticket with an SLA, an owner, linked assets, and its full evaluation data. When the finding meets POA&M criteria, the “IS POA&M” field is set to Yes. The same ticket. The same data. No mapping to a separate POA&M tracker.
  7. Deviations linked to the parent issue. Operational requirements, false positives, and risk adjustments are attached to the Issue Ticket as structured deviation records, not in a separate spreadsheet. Each deviation type has required fields: justification, compensating controls, evidence, and authorizing official approval.
  8. Reports from live data. Rev5 monthly ConMon packages, VDR monthly human-readable reports per VDR-TFR-MHR, machine-readable VDR feeds on the 14-day / 7-day cadence, and CMMC remediation evidence all generate from the same data model. One data model, three frameworks, one reporting engine.

graph TD
    SCAN[Scan Results] --> ISS[Issue Ticket]
    ISS --> ENRICH[KEV + EPSS + Threat Intel]
    ISS --> ASSET[Linked Assets]
    ISS -->|remediate| CT[Change Ticket]
    ISS -->|cannot fix| DEV[Deviation Record]
    ISS -->|IS POA&M = Yes| POAM[POA&M Report]
    CT --> CLOSED[Closed Finding]
    DEV --> POAM
    POAM --> RPT[ConMon / VDR Report]
 
    style SCAN fill:#2b5797,stroke:#5b9bd5,color:#fff
    style ISS fill:#5c1a1a,stroke:#ff6b6b,color:#fff
    style ENRICH fill:#5c4a1a,stroke:#ffc857,color:#fff
    style ASSET fill:#1a3d5c,stroke:#4ecdc4,color:#fff
    style CT fill:#1a5c3d,stroke:#51cf66,color:#fff
    style DEV fill:#4a1a5c,stroke:#c77dff,color:#fff
    style POAM fill:#5c4a1a,stroke:#ffc857,color:#fff
    style CLOSED fill:#1a3d1a,stroke:#a9dc76,color:#fff
    style RPT fill:#1a3d1a,stroke:#a9dc76,color:#fff

The point: one data model produces CMMC remediation evidence, FedRAMP Rev5 monthly scan packages, and FedRAMP 20x VDR reports. No tool-to-tool reconciliation. No spreadsheet gymnastics. No mapping findings into POA&Ms. The finding is the POA&M.

Compliance is a byproduct of operations, not a separate workstream.

FAQ

Q: How do scan results become POA&M entries?

A: In a unified data model, the scan result creates an Issue Ticket automatically. The Issue Ticket carries the finding details, linked assets, severity, owner, and SLA. When the finding meets POA&M criteria (it came from an assessment, it is overdue, etc.), the “IS POA&M” field is set to Yes on the same ticket. There is no separate mapping step. The finding IS the POA&M entry. When you remediate it through a change ticket, the linked Issue updates. No spreadsheet gymnastics. No manual POA&M creation for each finding.

Q: How do you handle new versus remediated versus reopened findings across scan cycles?

A: On every scan cycle, findings are compared against prior results. New findings create new Issue Tickets. Remediated findings close existing tickets. Reopened findings (closed last month but reappeared) reopen them. This reconciliation is where most programs break when done manually. If you cannot reliably determine new versus remediated versus reopened, your POA&M data drifts. Closed items reopen without anyone noticing. New items are mixed in with items that have been there for six months.

Q: How do you normalize findings from multiple scanners?

A: Different scanners produce results in different formats with different severity scales. A “high” in one scanner is not the same as a “high” in another. The same CVE might be identified by a proprietary plugin ID in one tool and a standard CVE identifier in another. Normalization means mapping every scanner’s output to a single schema at intake: one finding format, one severity scale, one asset linkage model. Without normalization, you cannot reconcile across scanners or produce a coherent POA&M.

Q: How do the VDR remediation timeframes compare to Rev5 SLAs?

A: Rev5 uses a simple severity-based model: 30 days (Critical/High), 90 days (Moderate), 180 days (Low). VDR uses a three-dimensional model where the remediation target depends on the baseline, exploitability (LEV/NLEV), reachability (IRV), and impact (N1 through N5). A High-baseline N5+LEV+IRV finding has a 12-hour SHOULD target. A Moderate-baseline N3+NLEV finding has a 128-day target. The VDR timeframes are SHOULD recommendations, not MUST requirements, but they drive assessor expectations.

Q: What is the Issue Ticket data model and what is the “IS POA&M” field?

A: In Stratus GRC-ITSM, all vulnerabilities, misconfigurations, and assessment findings are tracked as Issue Tickets. An Issue Ticket is the single source of truth for a finding. It becomes a POA&M entry when the field “IS POA&M” is set to Yes. There is no separate POA&M object, no separate POA&M spreadsheet. The POA&M report pulls from Issue Tickets where that field is Yes. Deviations attach to the same ticket. The data model stays unified from scan result through remediation through reporting.

Q: What is the 192-day accepted vulnerability rule?

A: VDR-TFR-MAV requires that any vulnerability not fully remediated within 192 days MUST be categorized as an accepted vulnerability with required documentation. This is mandatory, not a target. When a finding crosses the 192-day threshold, it must be formally accepted with documented justification, exploitability status, reachability status, N1 through N5 impact, and supplementary information for agency risk decisions. This replaces the pattern of POA&M items that sit open for years with repeated extensions.

Q: What is the cost of disconnected vulnerability management tools?

A: The cost is reconciliation labor. Running scans in one tool, tracking POA&Ms in another, managing deviations in a spreadsheet, and assembling reports from exports means someone spends days every month per environment normalizing, reconciling, and cross-checking. Across multiple environments, this consumes weeks monthly. The error rate is constant because the process makes errors inevitable. Inconsistencies between the POA&M, the scan report, and the deviation tracker are the norm, not the exception. One data model eliminates the reconciliation step.

This article is part of a 15-part series on the operational disciplines that CMMC, FedRAMP Rev5, and FedRAMP 20x all test. [Read the series overview: Stop Building for Compliance. Build for Operations.]

