The core concept
Compliance reporting is how you turn operational data into evidence for a specific audience and cadence. Monthly ConMon packages. Quarterly Ongoing Authorization Reports. Annual assessment artifacts. CMMC evidence packages. Machine-readable data feeds.
Every framework wants evidence. The reports differ in format, cadence, and audience. The substance is the same: what is the current state of your vulnerabilities, POA&Ms, deviations, changes, access reviews, and asset inventory? If your operational data is structured and current, the report is a query. If your data is scattered across five tools, the report is a monthly project.
The quality of every report is capped by the quality of the operational data underneath it. A compliance report cannot be more accurate than the vulnerability management pipeline that feeds it. It cannot cover more assets than the inventory provides. It cannot reflect deviations tracked in a different system. The report is an output. The operations are the input.
This is where the unified data model pays off. In Stratus GRC-ITSM, all findings are Issue Tickets. POA&Ms are Issue Tickets with “IS POA&M” set to Yes. Deviations are linked to those same Issue Tickets. Changes, access reviews, and assets all live in the same system. When you generate a report, you are querying one data model. You are not aggregating from multiple sources and hoping the data matches.
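To make "the report is a query" concrete, here is a minimal Python sketch. The class and field names (IssueTicket, is_poam, linked deviation IDs) are illustrative stand-ins, not the actual Stratus schema:

```python
from dataclasses import dataclass, field

@dataclass
class IssueTicket:
    """Illustrative stand-in for a finding in a unified data model."""
    ticket_id: str
    title: str
    severity: str
    is_poam: bool = False  # the article's "IS POA&M" = Yes flag
    status: str = "open"
    deviations: list[str] = field(default_factory=list)  # linked deviation IDs

def poam_report(tickets: list[IssueTicket]) -> list[dict]:
    """The POA&M report is a filter over the same tickets operations uses."""
    return [
        {"id": t.ticket_id, "title": t.title, "severity": t.severity,
         "status": t.status, "deviations": t.deviations}
        for t in tickets if t.is_poam
    ]

tickets = [
    IssueTicket("ISS-101", "Unpatched OpenSSL", "high",
                is_poam=True, deviations=["DEV-7"]),
    IssueTicket("ISS-102", "Expired TLS certificate", "moderate"),
]
print(poam_report(tickets))  # one data model, no cross-tool reconciliation
```

When deviations and POA&M status live on the same ticket, the reconciliation step disappears because there is nothing to reconcile.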
What CMMC requires
CMMC does not prescribe specific reporting deliverables the way FedRAMP does. There is no monthly ConMon package requirement. What CMMC wants is organized evidence for every practice, mapped to assessment objectives, that holds up under scrutiny.
Key CMMC deliverables:
- SSP (mandatory for Level 2): describes all 110 practices, boundaries, interconnections, and the operating environment
- POA&M: tracks deficiencies; items must close within 180 days of the assessment
- Network diagrams: accurate to the current environment
- Hardware and software inventory: matches what is actually deployed
- Assessment evidence: policies, procedures, screen captures, configuration files, interview notes, test results. Evidence for every practice mapped to the assessment objectives that practice requires.
- Customer Responsibility Matrix: for cloud services, showing the division of responsibility
Scoring: each practice is MET or NOT MET, with point values of 1, 3, or 5. Every assessment objective for a practice must be MET for the practice to score as MET. A single missing objective sinks the entire practice.
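A minimal sketch of that all-or-nothing logic, using the NIST SP 800-171 DoD methodology convention of subtracting point values from 110; the practice results below are invented:

```python
def practice_met(objectives: list[bool]) -> bool:
    # A practice counts as MET only if every assessment objective is MET.
    return all(objectives)

def assessment_score(practices: dict[str, tuple[int, list[bool]]]) -> int:
    """Start at 110 and subtract each NOT MET practice's point value."""
    score = 110
    for points, objectives in practices.values():
        if not practice_met(objectives):
            score -= points
    return score

results = {  # invented results; one failed objective sinks AC.L2-3.1.1
    "AC.L2-3.1.1": (5, [True, True, False]),   # NOT MET: -5
    "RA.L2-3.11.2": (5, [True, True, True]),   # MET
    "CM.L2-3.4.3": (1, [True]),                # MET
}
print(assessment_score(results))  # 105
```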
What assessors look for:
- Organized evidence mapped to practices, not a document dump
- Current evidence that reflects the actual state, not screenshots from six months ago
- Consistency between the SSP and actual implementations: the SSP says you do X, the evidence shows you do X
- Coverage across all 110 L2 practices with nothing missing
- People who can speak to the implementations during interviews, because they actually do the work described in the evidence
- POA&M items with real progress, not repeated milestone extensions
The common gap: evidence scattered across five tools. Assessment prep becomes a weeks-long data aggregation project. Someone pulls scan reports from the vulnerability scanner. Someone else pulls change tickets from the ticketing system. Someone else pulls access review logs from the identity provider. The asset inventory comes from a spreadsheet. Then someone stitches it all together into something presentable. The SSP describes what you wish you were doing, not what you are doing.
If your CMMC assessment prep starts 60 days before the audit and eats every senior engineer’s calendar, your operational data is not structured for evidence.
What FedRAMP Rev5 requires
FedRAMP has specific formats, cadences, and audiences. Reporting is prescriptive.
Traditional Rev5 deliverables:
- SSP, SAP, SAR, POA&M, Risk Exposure Table (RET), Customer Responsibility Matrix
- Monthly vulnerability scan reports
- Monthly ConMon packages to authorizing officials
- Annual assessment artifacts
Under Rev5 Balance ADS (Authorization Data Sharing):
- ADS-CSO-CBF (MUST): authorization data in human-readable AND machine-readable formats, with automation keeping them consistent (sketched after this list)
- ADS-CSO-HAD (MUST): 3 years of historical authorization data retained
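One way to satisfy "automation keeping them consistent" is to never hand-author either format: generate both from the same records, so drift is impossible by construction. A minimal Python sketch with invented field names:

```python
import json

def render_both(findings: list[dict]) -> tuple[str, str]:
    """Emit machine-readable JSON and a human-readable table from one source,
    so the two formats cannot drift: neither is hand-edited."""
    machine = json.dumps(findings, indent=2)
    header = f"{'ID':<10}{'Severity':<10}Status"
    rows = [f"{f['id']:<10}{f['severity']:<10}{f['status']}" for f in findings]
    human = "\n".join([header, *rows])
    return machine, human

machine, human = render_both([
    {"id": "ISS-101", "severity": "high", "status": "open"},
    {"id": "ISS-102", "severity": "moderate", "status": "closed"},
])
print(human)
```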
Under Rev5 Balance CCM (Collaborative Continuous Monitoring):
- CCM-OAR-AVL (MUST): quarterly Ongoing Authorization Reports (OARs) covering changes, vulnerabilities, and security recommendations
The OAR replaces per-agency monthly ConMon packages with a single quarterly report shared with all agencies. One report, one cadence, all stakeholders. Instead of twelve monthly packages per year per agency, you produce four quarterly OARs shared with everyone.
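Under that model, an OAR is an aggregation query over a date range rather than twelve separate assemblies. A sketch with an invented record shape:

```python
from datetime import date

def quarterly_oar(records: list[dict], start: date, end: date) -> dict:
    """Group one quarter's changes, vulnerabilities, and recommendations.
    The record shape ({"kind": ..., "date": ...}) is invented for the sketch."""
    in_quarter = [r for r in records if start <= r["date"] <= end]

    def by_kind(kind: str) -> list[dict]:
        return [r for r in in_quarter if r["kind"] == kind]

    return {
        "period": f"{start} to {end}",
        "changes": by_kind("change"),
        "vulnerabilities": by_kind("vulnerability"),
        "recommendations": by_kind("recommendation"),
    }

oar = quarterly_oar(
    [{"kind": "vulnerability", "date": date(2025, 2, 3), "id": "ISS-101"}],
    start=date(2025, 1, 1), end=date(2025, 3, 31),
)
print(oar["vulnerabilities"])
```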
What 3PAOs look for:
- Deliverables complete, on time, accurate, and dual-format under Balance. Missing a monthly ConMon package is a finding.
- Consistency across artifacts. The SSP says one thing, the SAR says another, the POA&M says a third. Every inconsistency is a question at best, a finding at worst.
- Evidence that cadenced activities actually ran. If the ConMon plan says monthly vulnerability scans, the assessor wants 12 months of monthly scan evidence.
Common gap: monthly and quarterly deliverables assembled by hand from multiple sources. Five days to assemble the ConMon package. Three days to reconcile inconsistencies. Two days for review and formatting. Ten days of work that produces one report that should have been a query. Inconsistencies between artifacts are discovered during assessment, not during assembly. If your ConMon package takes five days to assemble and three days to reconcile, you are running five tools that should be one.
What FedRAMP 20x requires
20x makes machine-readable a first-class output. The human-readable version is a view of the same data, not a separately written document.
The relevant KSIs:
- KSI-AFR-06 (Collaborative Continuous Monitoring): governs OAR and Quarterly Review deliverables. Non-machine-based validations check that collaborative continuous monitoring procedures exist and that reports are published and available through the trust center. In our 20x implementation data, this is validated through the user portal where continuous monitoring reports are published.
- KSI-AFR-09 (Persistent Validation and Assessment): persistently validate, assess, and report on the effectiveness and status of security decisions and policies. This is the continuous reporting KSI. It connects the validation layer (are your controls working?) to the reporting layer (are you communicating that status?). Our 20x data shows that KSI failures result in Issue Ticket creation, closing the loop between validation and operational response.
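The validation-to-ticket loop KSI-AFR-09 implies can be sketched in a few lines. The names here are invented; a real implementation would write the ticket into the same system that feeds reporting:

```python
from datetime import datetime, timezone

def validate_ksi(ksi_id: str, check) -> dict | None:
    """Run one KSI validation; on failure, return an Issue Ticket payload so
    the reporting layer and the operational response share one record."""
    if check():
        return None  # validation passed; nothing to open
    return {
        "title": f"KSI validation failure: {ksi_id}",
        "source": "persistent-validation",
        "opened": datetime.now(timezone.utc).isoformat(),
    }

ticket = validate_ksi("KSI-AFR-09", check=lambda: False)  # simulated failure
if ticket:
    print("create Issue Ticket:", ticket["title"])
```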
Rev5 Balance reporting deliverables for providers adopting Balance Improvements:
- Quarterly OAR (CCM-OAR-AVL, MUST): covers changes, vulnerabilities, security recommendations, and authorization status
- Monthly human-readable VDR activity report (VDR-TFR-MHR, MUST): vulnerability detection and response activity
- Machine-readable VDR data feed (VDR-TFR-MRH, SHOULD): every 14 days for the Moderate baseline, every 7 days for High (cadence sketched after this list)
- SCN notifications per change type timelines
- SSP updated at least annually
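A minimal sketch of the per-baseline feed cadence; the interval table mirrors the requirement above, everything else is illustrative:

```python
from datetime import date, timedelta

# Feed intervals per the Balance requirement described above.
FEED_INTERVAL = {"moderate": timedelta(days=14), "high": timedelta(days=7)}

def next_feed_due(baseline: str, last_published: date) -> date:
    """Date the next machine-readable VDR feed is due for a baseline."""
    return last_published + FEED_INTERVAL[baseline]

print(next_feed_due("high", date(2025, 6, 1)))      # 2025-06-08
print(next_feed_due("moderate", date(2025, 6, 1)))  # 2025-06-15
```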
Delivery is through a FedRAMP-compatible trust center (ADS-CSX-UTC, MUST for 20x). Reports are published, not emailed. Agencies access them through programmatic or self-service channels with access logging.
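Published-not-emailed reduces to two operations: write the artifact somewhere agencies can pull from, and log every access. A minimal sketch with invented storage and log shapes, in the spirit of ADS-TRC-PAC and ADS-TRC-ACL:

```python
import json
from datetime import datetime, timezone

REPORTS: dict[str, bytes] = {}   # stand-in for trust center storage
ACCESS_LOG: list[dict] = []      # stand-in for an append-only access log

def publish(report_id: str, payload: bytes) -> None:
    REPORTS[report_id] = payload  # published once, pulled by every agency

def fetch(report_id: str, agency: str) -> bytes:
    """Self-service retrieval with access logging."""
    ACCESS_LOG.append({"report": report_id, "agency": agency,
                       "at": datetime.now(timezone.utc).isoformat()})
    return REPORTS[report_id]

publish("oar-2025-q1", json.dumps({"period": "2025-Q1"}).encode())
fetch("oar-2025-q1", agency="example-agency")
print(ACCESS_LOG)
```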
Presumption of Adequacy (44 USC § 3613(e)): agencies MUST NOT place additional security requirements on providers beyond what FedRAMP requires, unless the agency head determines there is a demonstrable need (CCM-AGM-NAR). Under CCM, the reporting set is defined. Agencies receive the same package.
Common gap on the path to 20x: no OAR generation process, no machine-readable reporting capability, no trust center for self-service delivery, and no automated cadence for the required deliverables. In 20x, reports are a product your data emits, not a document you author.
The pain we lived
Compliance reporting was the most time-consuming part of our monthly operations.
Every month, for every client, we assembled a ConMon package by hand. The package: POA&M, Vulnerability Deviation Report, asset inventory, scan summaries, change logs, plus whatever additional evidence the specific authorizing official required. Each component came from a different tool.
Step one: export the vulnerability scan results from the scanner. Format them. Compare against last month’s results to show the delta. Step two: pull the POA&M from whichever spreadsheet or tracker it lived in. Update it with this month’s changes: new findings, closed findings, milestone updates, deviation status. Step three: pull the asset inventory. Verify it matches the scan scope. Step four: pull the change log from the ticketing system. Step five: pull the access review evidence from the identity system. Step six: assemble everything into the ConMon package format.
Then the reconciliation. The POA&M says a finding was remediated. The scan report still shows it open. Is the scan from before or after the remediation? The inventory lists a server the scanner did not cover. Is it a new server or a scan coverage gap? The deviation tracker shows an OR approved. The POA&M still shows the item as overdue. The SSP describes a control one way. The evidence shows a different implementation. Which one is current?
These inconsistencies were not bugs. They were a feature of disconnected tools. When each data source has its own update cadence, its own format, and its own owner, divergence is the default. Reconciliation is the tax you pay every month.
We delivered over 500 ConMon packages this way. The error rate was constant. Not because the team was careless, but because the process made errors inevitable. Copy data between five tools and reconcile by hand, and some reconciliation will be wrong. Some updates will be missed. Some inconsistencies will make it into the final package.
Assessment prep was worse. A ConMon package is one month of data. An assessment package is years of historical evidence: scan results, change records, access review logs, POA&M history, deviation approvals, SSP update evidence, incident response records. Assembling that from disconnected tools was a weeks-long project. Every assessment started with the same question: “where did we put the evidence for Q3 of last year?”
The root cause was the same every time: the data existed but lived in different places. The report was not a query against one data model. It was a monthly integration project across five tools, and the integration was done by humans.
How we automate it
Here is how we built compliance reporting in Stratus GRC-ITSM. The goal: make reports an output of operational data, not a separate production effort.
- One data model. Vulnerabilities, POA&Ms, deviations, changes, access reviews, and assets all live in one platform. All findings are Issue Tickets. POA&Ms are Issue Tickets with “IS POA&M” = Yes. Deviations are linked to Issue Tickets. Reports query this data directly. No aggregation from multiple sources. No reconciliation step. The data is already connected.
- Live reports. Reports are views of current data, not static snapshots assembled from exports. The ConMon report is current as of the moment you generate it because it pulls from the same tickets and workflows you use daily. No “this report reflects data as of last Tuesday.”
- Pre-formatted templates. Monthly ConMon packages, POA&M exports, quarterly OARs, VDR activity reports, and CMMC evidence packages generate in the required formats. Data tables populate automatically from the live data model. Narrative context and analysis are added during the review step, not the assembly step.
- Human and machine-readable outputs. Every report is available in both formats from the same source. Human-readable for agency reviewers, 3PAOs, and C3PAO assessors. Machine-readable for ADS compliance and for agencies that consume data feeds directly. Both formats are generated from one data model, satisfying ADS-CSO-CBF automatically.
- Trust center delivery. Reports are published through a trust center where agencies access authorization data on demand. Programmatic access with access logging satisfies ADS-TRC-PAC and ADS-TRC-ACL. No email distribution. No per-agency customization.
- Automated cadence. Monthly reporting tasks create automatically. The task includes the generated report for review before publication. Quarterly OAR assembly triggers on schedule. VDR machine-readable feeds update on the 14-day or 7-day cadence per baseline. Nobody has to remember it is ConMon week.
- Assessment evidence packaging for CMMC. Evidence tagged to practices during normal operations assembles into assessment-ready packages. The weeks-long prep cycle collapses to a review cycle. Evidence for CA.L2-3.12.4 (SSP documentation) comes from the same data model that produces the FedRAMP ConMon package. Evidence for RA.L2-3.11.2 (vulnerability scanning) comes from the same Issue Tickets that populate the VDR report. When an assessor asks for evidence of a specific practice, the answer is a filtered view of operational data, not a scavenger hunt across five tools. One data model, one set of evidence, multiple output formats.
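The tag-then-filter idea in that last bullet fits in a few lines. The record shapes are invented; in practice the tags are applied during normal operations:

```python
def evidence_package(evidence: list[dict], practice: str) -> list[dict]:
    """Assessment evidence for one practice is a filter, not a scavenger hunt."""
    return [e for e in evidence if practice in e["practices"]]

evidence = [
    {"artifact": "scan-2025-06.json", "practices": ["RA.L2-3.11.2"]},
    {"artifact": "ssp-v12.docx", "practices": ["CA.L2-3.12.4"]},
    {"artifact": "scan-2025-05.json", "practices": ["RA.L2-3.11.2"]},
]
print(evidence_package(evidence, "RA.L2-3.11.2"))  # both scan artifacts
```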
```mermaid
graph LR
  VULN[Vulnerabilities] --> RPT[Report Engine]
  POAM[POA&Ms] --> RPT
  DEV[Deviations] --> RPT
  CHG[Changes] --> RPT
  UAR[Access Reviews] --> RPT
  INV[Inventory] --> RPT
  RPT --> CONMON[Monthly ConMon]
  RPT --> OAR[Quarterly OAR]
  RPT --> ASSESS[Assessment Package]
  RPT --> VDR[VDR Report]
  style VULN fill:#5c1a1a,stroke:#ff6b6b,color:#fff
  style POAM fill:#5c4a1a,stroke:#ffc857,color:#fff
  style DEV fill:#4a1a5c,stroke:#c77dff,color:#fff
  style CHG fill:#1a3d5c,stroke:#4ecdc4,color:#fff
  style UAR fill:#1a5c3d,stroke:#51cf66,color:#fff
  style INV fill:#2b5797,stroke:#5b9bd5,color:#fff
  style RPT fill:#1a3d1a,stroke:#a9dc76,color:#fff
  style CONMON fill:#5c1a3d,stroke:#ff6b9d,color:#fff
  style OAR fill:#5c1a3d,stroke:#ff6b9d,color:#fff
  style ASSESS fill:#5c1a3d,stroke:#ff6b9d,color:#fff
  style VDR fill:#5c1a3d,stroke:#ff6b9d,color:#fff
```
If your operational workflows produce structured data, your compliance reports are already written. They are queries with formatting. If your operational data is scattered across tools, your reports are a monthly integration project.
CMMC assessment evidence, FedRAMP Rev5 monthly and quarterly deliverables, and FedRAMP 20x machine-readable feeds all pull from the same source. The operational data is the evidence. The report is a view of it. No assembly. No reconciliation. No monthly integration project.
Compliance is a byproduct of operations, not a separate workstream.
FAQ
Q: Why does assembling a ConMon package take so long?
A: When data is in five systems, assembly is a project. Export scan results from the scanner. Pull the POA&M from its spreadsheet. Pull the asset inventory. Pull the change log from the ticketing system. Pull access review evidence from the identity provider. Reconcile them all. Fix the inconsistencies. Format the package. That is days of work per environment per month. When data is in one system, assembly is a query. The POA&M report pulls from Issue Tickets. The vulnerability summary pulls from the scan pipeline. The change log pulls from change tickets. One reporting engine, one data model.
Q: What is the difference between a report as a query and a report as a project?
A: A report as a query means the report generates from structured data that already exists. The data was produced during normal operations. The report is a view of it. A report as a project means someone spends days extracting data from multiple tools, normalizing it, reconciling inconsistencies, and formatting the output. The assembly effort is proportional to the number of disconnected tools. If your ConMon package takes five days to assemble and three days to reconcile, you are running five tools that should be one.
Q: Does the quarterly OAR replace monthly ConMon reporting?
A: Under CCM, the quarterly OAR (CCM-OAR-AVL) replaces per-agency monthly ConMon packages. One report, shared with all agencies, quarterly. Monthly monitoring still happens. Monthly scan reports and VDR activity reports (VDR-TFR-MHR) are still required deliverables. The formal authorization reporting to agencies shifts from monthly per-agency packages to quarterly shared OARs. For providers who have not adopted CCM, monthly ConMon continues as before.
Q: What does machine-readable evidence mean?
A: Machine-readable evidence means your reports are available in structured formats (OSCAL, CSV, API) that automated tools can consume, not just PDF or Word documents that humans read. ADS-CSO-CBF (MUST) requires that human-readable and machine-readable formats stay consistent automatically. VDR-TFR-MRH (SHOULD) requires machine-readable vulnerability data feeds every 14 days (Moderate) or 7 days (High). These feeds enable agency-side automation: an agency's risk management platform can pull your data programmatically.
Q: What is the Presumption of Adequacy?
A: Section 3613(e) of 44 USC establishes that agencies MUST NOT place additional security requirements on providers beyond what FedRAMP defines (CCM-AGM-NAR), unless the agency head determines there is a demonstrable need. Under CCM, the reporting set is defined at the FedRAMP level. No custom ConMon formats. No additional reporting cadences. No ad-hoc security briefings. Agencies receive the same package. Significant concerns go through FedRAMP (CCM-AGM-NFR), not directly to the provider as ad-hoc demands.
Q: How does this apply to CMMC, which has no ConMon requirement?
A: CMMC does not require monthly ConMon packages, but assessors want organized evidence for every practice mapped to assessment objectives. When evidence is tagged to practices during normal operations, it assembles into assessment-ready packages. Evidence for CA.L2-3.12.4 comes from the same data model that produces FedRAMP ConMon packages. Evidence for RA.L2-3.11.2 comes from the same Issue Tickets that populate the VDR report. The weeks-long assessment prep cycle collapses to a review cycle when the data is already structured.
Q: What does manual compliance reporting cost?
A: The cost is the monthly assembly tax. Five days to assemble the ConMon package from multiple exports. Three days to reconcile inconsistencies between the POA&M, the scan report, and the deviation tracker. Two days for review and formatting. Ten days of work that produces one report that should have been a query. Multiply by the number of environments and consuming agencies. The labor cost compounds every month, and the error rate is constant because the process makes errors inevitable.
This article is part of a 15-part series on the operational disciplines that CMMC, FedRAMP Rev5, and FedRAMP 20x all test. [Read the series overview: Stop Building for Compliance. Build for Operations.]
