Continuous Monitoring Across CMMC, FedRAMP Rev5, and FedRAMP 20x

April 16, 2026

The core concept

Continuous monitoring is how you verify that security controls still work, on a defined cadence, with evidence. The activities vary: log reviews, scans, control testing, POA&M updates, access reviews, configuration checks, policy reviews. But the pattern is always the same. Scheduled task. Defined owner. Captured evidence. Completed on time.
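
That pattern is small enough to write down. A minimal sketch of the data model, with hypothetical names rather than any particular platform's schema:

```python
from dataclasses import dataclass, field
from datetime import date
from typing import List

@dataclass
class MonitoringTask:
    """One recurring ConMon activity: scheduled, owned, evidenced."""
    name: str            # e.g. "Weekly audit log review"
    cadence: str         # "weekly" | "monthly" | "quarterly" | "annual"
    owner_role: str      # assigned to a role, not an individual
    controls: List[str]  # controls/practices it satisfies, e.g. ["AU-6"]
    due: date
    evidence: List[str] = field(default_factory=list)  # attached on completion

    def complete(self, artifact: str) -> None:
        """Completing the task and capturing evidence are one action."""
        self.evidence.append(artifact)

    @property
    def is_complete(self) -> bool:
        return bool(self.evidence)

task = MonitoringTask(
    name="Weekly audit log review",
    cadence="weekly",
    owner_role="SOC Analyst",
    controls=["AU-6", "CA.L2-3.12.3"],
    due=date(2026, 4, 20),
)
task.complete("siem-review-notes-2026-04-20.pdf")
```

The design point is that evidence is a property of the task itself, not a separate artifact to reconcile later.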

This is the discipline that ties all the others together. Vulnerability management produces findings that feed into continuous monitoring. Change management produces change records that feed into continuous monitoring. Access reviews, configuration baselines, incident responses, and compliance checks all generate data that rolls up into the continuous monitoring program.

graph LR
    VM[Vulnerability Scans] --> CM[Continuous Monitoring]
    ChM[Change Records] --> CM
    UAM[Access Reviews] --> CM
    CFG[Config Baselines] --> CM
    IR[Incident Response] --> CM
    CM --> RPT[ConMon Report / OAR]
 
    style VM fill:#5c1a1a,stroke:#ff6b6b,color:#fff
    style ChM fill:#1a3d5c,stroke:#4ecdc4,color:#fff
    style UAM fill:#5c4a1a,stroke:#ffc857,color:#fff
    style CFG fill:#4a1a5c,stroke:#c77dff,color:#fff
    style IR fill:#5c1a3d,stroke:#ff6b9d,color:#fff
    style CM fill:#1a5c3d,stroke:#51cf66,color:#fff
    style RPT fill:#1a3d1a,stroke:#a9dc76,color:#fff

The failure mode is predictable. You deploy security tools. You write a monitoring plan. You run the activities for a few months. Then someone misses a weekly log review. Then a monthly scan slips by a week. Then quarterly access reviews are two weeks late. None of these feel like a crisis in the moment. But they compound. By the time an assessor looks at the evidence, the gaps paint a picture of a program that does not actually run on cadence.

What CMMC requires

CMMC does not give you the prescriptive cadence that FedRAMP does. It does expect you to prove that ongoing monitoring actually runs, that the results feed into your security program, and that you act on what you find.

The relevant practices:

  • CA.L2-3.12.3 (Level 2): Monitor security controls on an ongoing basis to ensure the continued effectiveness of the controls. 5-point, not POA&M-eligible. This is the core continuous monitoring practice. “Ongoing basis” means you define the cadence, document it, and stick to it. Assessors will look at whether the cadence is reasonable for each activity and whether you met it.

Supporting practices:

  • CA.L2-3.12.1: Periodically assess the security controls in organizational systems to determine if the controls are effective in their application. This is periodic control testing, not just monitoring.
  • CA.L2-3.12.2: Develop and implement plans of action designed to correct deficiencies and reduce or eliminate vulnerabilities. The POA&M practice. ConMon activities generate findings. Findings become POA&M items. POA&M items need to be tracked and worked.
  • SI.L2-3.14.6: Monitor organizational systems, including inbound and outbound communications traffic, to detect attacks and indicators of potential attacks. Active monitoring for threats, not just compliance checking.
  • AU.L2-3.3.1: Create and retain system audit logs and records. The logging infrastructure.
  • AU.L2-3.3.5: Correlate audit record review, analysis, and reporting processes for investigation and response. Correlation across sources, beyond collection.

What a C3PAO assessor looks for:

  • Monitoring tools actually deployed and producing output: SIEM, EDR, IDS/IPS, vulnerability scanners
  • A defined monitoring strategy that describes what is monitored, how often, and by whom
  • Active POA&M tracking with items being worked, not just documented
  • Alert triage with evidence of execution, beyond raw alert volume
  • Log retention that matches the stated policy
  • Periodic control effectiveness testing with documented results

The common gap: ad-hoc monitoring with no repeatable process. Tools are deployed but alerts are never triaged. POA&Ms exist on paper but nobody is working them. There is no single view of what monitoring activities are due, overdue, or complete. Evidence of monitoring execution does not exist in a structured, retrievable format.

CMMC does not prescribe the cadence. Assessors still expect to see the pattern. If you say you do weekly log reviews, they will ask to see 12 months of weekly log review evidence. If the evidence is sporadic, the practice does not score.
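
That evidence test is mechanical, which means you can run it on yourself before an assessor does. A sketch that finds the missed weeks in a stated weekly cadence (dates are illustrative):

```python
from datetime import date, timedelta
from typing import List, Set

def missed_weeks(start: date, end: date, evidence_dates: Set[date]) -> List[date]:
    """Return the Mondays of weeks in [start, end] with no review evidence."""
    monday = start - timedelta(days=start.weekday())  # snap to Monday
    gaps = []
    while monday <= end:
        week = {monday + timedelta(days=d) for d in range(7)}
        if not week & evidence_dates:
            gaps.append(monday)
        monday += timedelta(weeks=1)
    return gaps

# Two reviews captured, one week skipped entirely:
evidence = {date(2026, 1, 5), date(2026, 1, 19)}
gaps = missed_weeks(date(2026, 1, 5), date(2026, 1, 25), evidence)
# gaps == [date(2026, 1, 12)] -- the week that never got reviewed
```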

What FedRAMP Rev5 requires

FedRAMP ConMon is a calendar. Dozens of activities, each on a defined cadence, each with specific evidence requirements, each reported to authorizing officials on schedule. Miss a recurring task and you pick up findings that compound over time.

The relevant controls:

  • CA-7 (Continuous Monitoring): the full ConMon control. Defines the continuous monitoring strategy, metrics, monitoring frequencies, and reporting cadences.
  • AU-6 (Audit Record Review, Analysis, and Reporting): review and analyze audit records for indications of inappropriate or unusual activity. This is the log review requirement.
  • AU-2 (Event Logging): determine the events that organizational systems are capable of logging and must log. This defines what gets captured.

The cadences (these are the ones that drive most ConMon findings):

Weekly:

  • Audit log review and analysis

Monthly:

  • OS, database, web application, container, and service configuration scans
  • Privileged account compliance checks
  • POA&M updates
  • ConMon reporting to authorizing officials (JAB/AO per CA-7(g)-1)

Quarterly:

  • Public content reviews
  • Developer privilege reviews
  • Access recertifications
  • Ongoing Authorization Reports under CCM (for organizations adopting Rev5 Balance)

Annually:

  • Non-privileged account compliance reviews
  • Policy reviews across 17+ control families
  • Contingency plan testing
  • Incident response testing
  • Security awareness training
  • Role-based security training
  • 3PAO assessment
  • Penetration testing
  • Baseline configuration review


Multi-tenant environments have additional complexity. Consumer-specific review, analysis, and reporting is required. Each tenant’s data has to be accounted for in the ConMon program.

What 3PAOs look for: evidence that every cadenced activity actually ran on schedule. Not the plan to do it, the evidence that it was done. Complete and timely ConMon packages submitted on cadence. Consistency between what the ConMon plan says and what the evidence shows.

The common gap: activities tracked in spreadsheets with no enforcement. Cadences missed because nobody realized a task was due. No single view of what is due, overdue, or complete. Monthly ConMon packages assembled by hand from five different tools. The ConMon plan describes a full program, but the execution evidence tells a different story.

Dozens of recurring activities across four cadences cannot be run from a spreadsheet. Missed tasks become findings. Findings compound.

What FedRAMP 20x requires

Rev5 treats ConMon as a calendar. 20x treats it as a live data pipeline.

The shift is structural. Under Rev5, continuous monitoring means completing specific activities on specific cadences and reporting the results monthly. Under 20x, continuous monitoring means automated validation running persistently, with quarterly reporting that summarizes what the automated checks have already been doing.

The relevant KSIs:

  • KSI-AFR-06 (Collaborative Continuous Monitoring): maintain a plan and process for Ongoing Authorization Reports (OARs) and Quarterly Reviews per the CCM process. Non-machine-based validation reviews the ConMon standard operating procedure and the publication of collaborative monitoring reports.
  • KSI-MLA-01 (SIEM): operate a centralized, tamper-resistant Security Information and Event Management capability. Machine-based validations check that logging has integrity validation (tamper-resistant), log groups are encrypted, security telemetry is centralized, and log storage has access logging enabled.
  • KSI-MLA-02 (Audit Logging): comprehensive audit logging across the environment. Machine-based validations check that audit trails have comprehensive coverage. Non-machine validations include quarterly reviews of auditable events, sources, and configuration.
  • KSI-MLA-05 (Evaluate Configuration): persistently evaluate and test the configuration of machine-based information resources, especially Infrastructure-as-Code. Machine-based validations check that configuration rules are in compliance and that configuration recording is active.
  • KSI-MLA-07 (Event Types): ensure comprehensive event type coverage in logging. Machine-based validations check that audit trails capture complete event types. Non-machine validations review auditable events documentation.
  • KSI-MLA-08 (Log Data Access): protect log data access. Machine-based validations check that log groups have encryption at rest.

Under Rev5 Balance CCM:

  • Quarterly Ongoing Authorization Reports (CCM-OAR-AVL, MUST): replace the per-agency monthly ConMon package with one quarterly report shared with all agencies. One report. One cadence. All stakeholders.
  • Quarterly review meetings (CCM-QTR-MTG, MUST for Moderate and High; SHOULD for Low): structured discussions with agencies on the OAR findings and recommendations.
  • 20x KSI validations: daily automated checks and at least quarterly manual reviews. Automated checks validate that technical controls are in place. Manual reviews validate that operational processes and documentation are current.
  • Presumption of Adequacy (CCM-AGM-NAR, MUST NOT): agencies MUST NOT pile on requirements beyond what FedRAMP specifies. This is a legal and procedural guardrail that prevents the pattern of each agency adding their own monitoring requirements on top of FedRAMP.

The common gap on the path to 20x: no single OAR generation process (the report is assembled from scratch each quarter), no automated KSI validation engine (validations are done manually when someone remembers), and no trust center for report distribution (OARs are emailed to individual agencies).

In practice, this plays out in our own validation runs. KSI-AFR-06 is validated through the ConMon SOP and a user portal where collaborative monitoring reports are published. KSI-AFR-09 (Persistent Validation and Assessment) is validated through a process where KSI failures automatically create Issue Tickets. When an automated compliance check fails, it does not sit in a dashboard. It creates a tracked finding with an owner and an SLA. That is the bridge between “monitoring” and “management.” Detecting a problem and tracking the fix are the same workflow.
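
The failure-to-finding handoff can be sketched in a few lines. Ticket fields, SLA day counts, and the example check are illustrative assumptions, not the Stratus implementation:

```python
from dataclasses import dataclass
from datetime import date, timedelta
from typing import Callable, Optional, Tuple

@dataclass
class IssueTicket:
    ksi: str
    finding: str
    owner: str
    due: date  # SLA deadline

# Illustrative remediation SLAs in days; real values are policy-defined
SLA_DAYS = {"high": 30, "moderate": 90, "low": 180}

def run_check(ksi: str, check: Callable[[], Tuple[bool, str]],
              owner: str, severity: str, today: date) -> Optional[IssueTicket]:
    """Run one automated KSI validation. A pass returns nothing; a failure
    becomes a tracked finding with an owner and a deadline, not an alert."""
    ok, detail = check()
    if ok:
        return None
    return IssueTicket(ksi=ksi, finding=detail, owner=owner,
                       due=today + timedelta(days=SLA_DAYS[severity]))

# Example: an encryption-at-rest check that fails (illustrative)
ticket = run_check(
    "KSI-MLA-08",
    lambda: (False, "log group app-logs lacks encryption at rest"),
    owner="Platform Engineering",
    severity="high",
    today=date(2026, 4, 16),
)
```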

The word “continuous” matters. If your ConMon has a monthly assembly cycle, where someone spends the last week of every month compiling a package, that is periodic monitoring with monthly reporting, not continuous monitoring. 20x expects the monitoring to run persistently and the reporting to summarize what already happened, not to trigger a data collection project.

The pain we lived

Continuous monitoring was a monthly assembly project. For every client. Every month.

The cycle started the same way. Log into the scanner portals, export the results, normalize them, reconcile against last month. Pull the POA&M, update each item’s status. Pull the asset inventory, check for drift. Pull the change log. Pull the access review evidence. Format everything into the ConMon package template. Cross-check for consistency. Make sure the POA&M counts match the scan summary counts. Make sure the asset inventory matches the scan coverage. Find the discrepancies. Fix them. Submit.

We have delivered over 500 ConMon packages this way.

The problem was not any single step. Each step was straightforward. The problem was the volume and the fragility. Across 15+ environments, with different scanners, different ticketing systems, and different reporting templates, the manual assembly process consumed weeks every month. And it was error-prone. Inconsistencies between artifacts, where the POA&M said one thing and the scan report said another, were a constant.

Activities that ran on cadence were tracked in spreadsheets. Weekly log reviews, monthly scans, quarterly access reviews, annual policy reviews. Each environment had its own spreadsheet. There was no single view across environments of what was due, overdue, or complete. When we missed a cadence, we often did not discover it until the next month’s assembly revealed a gap.

The missed cadences compounded. A missed weekly log review is a small gap. Twelve missed weekly log reviews is a pattern. When an assessor asked for evidence of weekly log reviews for the past year, the gaps were visible. Each one was a finding. Findings from missed monitoring activities are the preventable kind. The monitoring tools were deployed. The alerts were being generated. The reviews just did not happen on schedule because nobody was tracking whether they did.

Evidence was another pain point. When a review was completed, the evidence lived in different places: a screenshot in a shared drive, a note in the ticketing system, a comment in a chat channel. Reassembling the evidence for a specific control at assessment time was a scavenger hunt. “Show me evidence for AU-6 for the past 12 months.” That meant finding 52 weekly log review records across multiple tools and formats.

How we automate it

We built the continuous monitoring engine in Stratus GRC-ITSM to turn ConMon from a monthly assembly project into a system that runs itself and produces reports from the work it tracks.

  1. Pre-mapped task templates. Every recurring activity is a template: cadence, governing controls or practices, responsible role, required evidence, and SLA. The template library covers the full FedRAMP ConMon cadence across all four frequencies (weekly, monthly, quarterly, annual) plus CMMC-specific activities. Each template maps to the controls or practices it satisfies: AU-6 for log reviews, CA-7 for the ConMon program, CA.L2-3.12.3 for CMMC ongoing monitoring.
  2. Automated scheduling. Weekly tasks appear Monday. Monthly POA&M updates appear on the 1st. Quarterly access reviews appear at the start of the quarter. Annual policy reviews appear 30 days before the due date. Nobody has to remember what is due. The platform knows.
  3. Role-based assignment with coverage. Tasks are assigned to roles, not individuals. When someone is out, coverage applies automatically. Overdue tasks escalate. No task sits in a queue unnoticed.
  4. Evidence capture on completion. When a review is completed, the reviewer attaches the evidence directly to the task: review notes, screenshots, attestations, query results. All captured on the ticket. No separate evidence folder that goes stale. No scavenger hunt at assessment time.
  5. Control and practice linkage. Each task maps to the controls or practices it satisfies. When an assessor asks “show me evidence for AU-6” or “show me CA.L2-3.12.3,” you hand over completed tasks for the past 12 months, each with the evidence attached and timestamped.
  6. ConMon reporting from task data. Monthly ConMon packages pull from completed-task data. The POA&M report pulls from Issue Tickets with “IS POA&M” set to Yes. The vulnerability summary pulls from the scan ingestion pipeline. The change log pulls from change tickets. The access review summary pulls from review tasks. One reporting engine, one data model.
  7. 20x KSI validations on the same engine. Daily automated KSI checks run against live system data. Quarterly manual reviews are scheduled like any other recurring task. When an automated check fails, it creates an Issue Ticket automatically. The monitoring and the management are the same workflow. This is how KSI-AFR-09 (Persistent Validation and Assessment) works in practice: a failed check is not a dashboard alert. It is a tracked finding with an owner and a deadline.
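
The control-linkage query in step 5 reduces to a filter over completed-task records. A sketch with illustrative records:

```python
from dataclasses import dataclass
from datetime import date
from typing import List

@dataclass
class CompletedTask:
    name: str
    controls: List[str]
    completed_on: date
    evidence: List[str]

def evidence_for(control: str, tasks: List[CompletedTask],
                 since: date) -> List[CompletedTask]:
    """Answer 'show me evidence for <control>' straight from task records."""
    return [t for t in tasks
            if control in t.controls and t.completed_on >= since]

log_reviews = [
    CompletedTask("Weekly audit log review", ["AU-6", "CA.L2-3.12.3"],
                  date(2026, 4, 6), ["siem-notes-2026-04-06.pdf"]),
    CompletedTask("Quarterly access recertification", ["AC-2"],
                  date(2026, 4, 1), ["recert-2026-q2.xlsx"]),
]
```

Twelve months of AU-6 evidence is then a one-line query instead of a scavenger hunt.
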

graph TD
    SCHED[Scheduling Engine] --> W[Weekly Tasks]
    SCHED --> M[Monthly Tasks]
    SCHED --> Q[Quarterly Tasks]
    SCHED --> A[Annual Tasks]
    W --> EV[Evidence Captured]
    M --> EV
    Q --> EV
    A --> EV
    EV --> RPT[ConMon Report]
    EV --> OAR[Quarterly OAR]
    EV --> ASSESS[Assessment Evidence]
 
    style SCHED fill:#2b5797,stroke:#5b9bd5,color:#fff
    style W fill:#1a3d5c,stroke:#4ecdc4,color:#fff
    style M fill:#1a3d5c,stroke:#4ecdc4,color:#fff
    style Q fill:#1a3d5c,stroke:#4ecdc4,color:#fff
    style A fill:#1a3d5c,stroke:#4ecdc4,color:#fff
    style EV fill:#1a5c3d,stroke:#51cf66,color:#fff

The idea: define the task once. The platform handles scheduling, assignment, tracking, evidence, and reporting. One engine produces CMMC ongoing monitoring evidence, Rev5 ConMon packages, quarterly OARs under CCM, and 20x KSI validations.

When a CMMC assessor asks for evidence of CA.L2-3.12.3 for the past year, the data is there. When a 3PAO asks for 12 months of complete ConMon packages, the data is there. When a 20x validator asks about KSI-MLA-02 and KSI-MLA-05, the daily automated validations and quarterly manual reviews are all in the same system.

The ConMon package is not a project anymore. It generates from the work that was already done.
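
Generating the package from tracked work can be sketched as a roll-up over task and ticket records. Field names here are illustrative, not a real schema:

```python
from collections import Counter
from typing import Dict, List

def conmon_package(completed_tasks: List[dict], issue_tickets: List[dict],
                   period: str) -> Dict:
    """Assemble a monthly ConMon summary from data already on the tickets."""
    done = [t for t in completed_tasks if t["completed_on"].startswith(period)]
    poams = [i for i in issue_tickets if i.get("is_poam")]
    return {
        "period": period,
        "activities_completed": len(done),
        "by_cadence": dict(Counter(t["cadence"] for t in done)),
        "poam_open": sum(1 for i in poams if i["status"] == "open"),
        "poam_closed_this_period": sum(
            1 for i in poams
            if i["status"] == "closed" and i["closed_on"].startswith(period)),
    }

pkg = conmon_package(
    [{"completed_on": "2026-04-06", "cadence": "weekly"},
     {"completed_on": "2026-04-01", "cadence": "monthly"},
     {"completed_on": "2026-03-30", "cadence": "weekly"}],
    [{"is_poam": True, "status": "open"},
     {"is_poam": True, "status": "closed", "closed_on": "2026-04-10"}],
    period="2026-04",
)
```

No cross-checking step is needed because the POA&M counts and the activity counts come from the same records.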

Compliance is a byproduct of operations, not a separate workstream.

FAQ

Q: What is continuous monitoring, if it is not a tool?

A: Continuous monitoring is not a product category. It is recurring activities with defined cadences, owners, and evidence requirements. Weekly log reviews, monthly scans, quarterly access reviews, annual policy reviews. Each activity runs on schedule, each completion captures evidence, and the results feed into a ConMon package or OAR. The tools (SIEM, scanners, EDR) generate data. The discipline turns that data into evidence by assigning ownership, tracking completion, and reporting on cadence.

Q: What are the best tools for continuous monitoring in FedRAMP?

A: The tools are table stakes: SIEM for log aggregation, vulnerability scanners for infrastructure and applications, EDR for endpoint detection, cloud-native configuration monitoring. Every FedRAMP environment runs some combination. The tools generate alerts and scan results. What matters is the integration: do scan results feed into Issue Tickets with SLAs? Do log reviews happen on the weekly cadence with captured evidence? Do missed activities escalate? The tools are necessary. The workflow that connects them to evidence and reporting is what assessors test.

Q: What are the FedRAMP Rev5 ConMon cadences?

A: Weekly: audit log review and analysis. Monthly: OS, database, web application, container, and service configuration scans, privileged account compliance checks, POA&M updates, and ConMon reporting. Quarterly: public content reviews, developer privilege reviews, access recertifications, and OARs under CCM. Annually: non-privileged account compliance reviews, 17+ policy family reviews, contingency plan testing, IR testing, security training, 3PAO assessment, penetration testing, and baseline configuration review.

Q: How does CCM change the ConMon reporting model?

A: Under CCM, the per-agency monthly ConMon package shifts to a quarterly Ongoing Authorization Report (OAR) shared with all agencies at once through the trust center. CCM-OAR-AVL makes quarterly OARs mandatory. Monthly monitoring still happens. Monthly scan reports and VDR activity reports are still required deliverables. But the formal authorization reporting to agencies is quarterly and shared, not monthly and per-agency.

Q: What changes for continuous monitoring under FedRAMP 20x?

A: 20x treats ConMon as a live data pipeline, not a calendar of activities. KSI validations run daily as automated checks against live system data. KSI-AFR-06 governs the OAR and quarterly review process. KSI-MLA-01 through KSI-MLA-08 cover SIEM, audit logging, configuration evaluation, event types, and log data access. When an automated check fails, it creates an Issue Ticket automatically (KSI-AFR-09). The Presumption of Adequacy (CCM-AGM-NAR) prevents agencies from piling on requirements beyond FedRAMP. The monitoring runs persistently. The reporting summarizes what already happened.

Q: How does the KSI validation engine work in practice?

A: Daily automated KSI checks run against live system data. Each KSI has machine-based validations (automated) and non-machine-based validations (quarterly manual reviews). When a machine-based check fails, it creates an Issue Ticket with an owner and an SLA. The failed check is not a dashboard alert. It is a tracked finding that enters the same workflow as any other vulnerability or misconfiguration. Quarterly manual reviews are scheduled as recurring tasks on the same engine that drives all ConMon activities.

Q: What happens when a ConMon activity is missed?

A: Missed cadences become findings. A missed weekly log review is a small gap. Twelve missed weekly log reviews is a pattern. When an assessor asks for evidence of weekly log reviews for the past year, the gaps are visible, and each one is a finding. Findings from missed monitoring activities compound over time, and they are the preventable kind. The monitoring tools were deployed. The alerts were generated. The reviews just did not happen on schedule because nobody tracked whether they did.

This article is part of a 15-part series on the operational disciplines that CMMC, FedRAMP Rev5, and FedRAMP 20x all test. [Read the series overview: Stop Building for Compliance. Build for Operations.]

