
We lived the pain first
Stratus Cyber manages 15+ compliant environments. We have delivered over 500 Continuous Monitoring packages. Here is what that work actually looks like when your tools are disconnected, because that pain is the reason we built what we built.
Every month, for every client, the cycle starts the same way. Export scan results from multiple scanners. We run different combinations depending on the environment: infrastructure scanners, web application scanners, container scanners, database scanners. Each tool has its own format, its own severity scale, its own way of identifying a finding. Step one is normalizing all of that into a single view. We could not afford a six-figure enterprise vulnerability platform for each client, so we did this by hand.
Then the reconciliation. Compare this month’s results against last month. Which findings are new? Which ones were remediated? Which ones were closed last month but reappeared? Paste the new ones into the POA&M. Update the status on the ones that moved. Cross-reference the deviation tracker to make sure the operational requirements, false positives, and risk adjustments still line up. Find the inconsistencies. Fix them. This takes hours.
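That reconciliation step is, at its core, a set comparison over stable finding identifiers. A minimal sketch of the logic (identifiers are hypothetical; this assumes the normalization step has already produced one stable ID per finding):

```python
def reconcile(last_month: set[str], this_month: set[str],
              previously_closed: set[str]) -> dict[str, set[str]]:
    """Classify findings by comparing two scan cycles.

    IDs are stable finding identifiers (e.g. plugin ID + asset)
    produced by the normalization step.
    """
    return {
        "new": this_month - last_month - previously_closed,
        "remediated": last_month - this_month,
        "reopened": this_month & previously_closed,  # closed before, back again
        "ongoing": this_month & last_month,
    }

result = reconcile(
    last_month={"CVE-A:web01", "CVE-B:db01"},
    this_month={"CVE-B:db01", "CVE-C:web01", "CVE-D:web02"},
    previously_closed={"CVE-D:web02"},
)
```

The logic is trivial once findings share one identifier scheme; the hours go into getting them there, which is exactly the normalization work described above.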
Pull the asset inventory from the cloud provider. Compare it against the inventory spreadsheet. Find the resources that were stood up since last month and never added. Find the ones that were decommissioned but still listed. Identify assets that were never scanned. Flag the gaps.
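The inventory reconciliation is the same kind of set arithmetic. A sketch, assuming each resource has one stable identifier across the cloud API, the spreadsheet, and the scanners:

```python
def inventory_gaps(cloud_resources: set[str],
                   inventory: set[str],
                   scanned: set[str]) -> dict[str, set[str]]:
    """Flag the three gap classes found during monthly reconciliation."""
    return {
        "untracked": cloud_resources - inventory,              # stood up, never added
        "stale": inventory - cloud_resources,                  # decommissioned, still listed
        "unscanned": (cloud_resources & inventory) - scanned,  # in scope, never scanned
    }

gaps = inventory_gaps(
    cloud_resources={"i-1", "i-2", "i-3"},
    inventory={"i-1", "i-2", "i-9"},
    scanned={"i-1"},
)
```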
Open change tickets for the remediation work. The tickets exist, but they have no linkage to the POA&M items they are remediating. When a patch window closes out six vulnerabilities, someone has to manually list those six POA&M entries in the change ticket. Approvals happen over email. Three weeks later, nobody can find the approval. It is buried in someone’s inbox.
Now assemble the ConMon package. POA&M, Vulnerability Deviation Report, asset inventory, scan summaries, change log. Format everything. Cross-check for consistency. Submit. Move to the next client. Do it again.
Then the audit happens. An assessor asks you to trace a change back to the POA&M item it remediated. You dig through the ticketing system, the scanner exports, and the email thread where the approval lived. You cannot connect the thread cleanly. That is a finding.
We did this for years. Across 15+ environments. Every month.
And that is just the operational side. We have also built many of these environments from the ground up in AWS, Azure, and M365. We know what it takes to not only run a compliant environment, but to build one and take it through an audit. We know the pain of being the auditee: generating all the evidence, doing the patching, hardening the environment, dealing with configuration drift, patches that break systems, and stitching it all into something an assessor can validate. That full picture is what drove us to build something different.
Three approaches to compliance
That experience forced us to look at how organizations handle compliance alongside security operations. There are three common approaches. Two of them reproduce the pain. The third eliminates it.
```mermaid
graph LR
subgraph bolt["Approach 1: Bolt-on GRC"]
direction TB
B1[Policies & Documentation] --> B2[GRC Platform]
B3[Scanners] --> B4[Ticketing System]
B5[Spreadsheets] --> B6[Email Approvals]
B2 ~~~ B4
B4 ~~~ B6
end
subgraph glue["Approach 2: ITSM + GRC Glue"]
direction TB
G1[ITSM Platform] -->|manual sync| G2[GRC Platform]
G2 -->|integration breaks| G1
end
subgraph unified["Approach 3: GRC-ITSM"]
direction TB
U1[Operations] --> U2[Evidence]
U2 --> U3[Reports]
U1 -->|"built-in<br/>relationships"| U3
end
bolt -..->|"6 tools, no linkage"| X1[ ]
glue -..->|"glue breaks, data drifts"| X1
unified -..->|"one platform, one data model"| X2[Compliance output]
style bolt fill:#5c1a1a,stroke:#ff6b6b,color:#fff
style glue fill:#5c4a1a,stroke:#ffc857,color:#fff
style unified fill:#1a3d1a,stroke:#51cf66,color:#fff
style X1 fill:none,stroke:none
style X2 fill:#1a3d1a,stroke:#51cf66,color:#fff
```
Approach 1: Bolt-on compliance. Start with policies and documentation. Buy a GRC platform that gives you a control catalog, gap assessment, POA&M tracker, and reporting engine. The GRC platform helps you document your compliance posture. It does not help you run the operations. The scans are still in a different tool. The changes are still in a ticketing system. The approvals are still in email. The asset inventory is still a spreadsheet. Now you have six tools instead of five, and one more tab to keep in sync.
Approach 2: ITSM plus GRC glue. Run your operations in Jira or ServiceNow. Track compliance in a separate GRC tool. Build integrations and manual processes to move data between them. This works until it does not. The integration breaks. The data drifts. Someone changes a field name in Jira and the GRC sync stops updating. You spend more time maintaining the glue than doing the work.
Approach 3: GRC-ITSM. A GRC-ITSM does not sit on top of your operations. It is the operations. The ticket is the audit trail. The approval is the evidence. The scan result is the POA&M entry. Relationships between assets, vulnerabilities, tickets, changes, and approvals are built into the data model. Reports generate from the data you produced while doing the actual work. Assignment and notification rules route work to the right people. Single-click approvals are tracked on the ticket. You do not assemble the ConMon package. It generates.
This is the approach we took with Stratus GRC-ITSM. Not because we wanted to build a product. Because we were drowning in the manual work and needed to fix it.
Quick framework orientation
Three frameworks. Same operational expectations. Different wording.
CMMC (Cybersecurity Maturity Model Certification) protects Controlled Unclassified Information in the defense supply chain. Level 2 maps to 110 practices from NIST SP 800-171 Rev 2. Level 2 uses self-assessment or C3PAO certification depending on the contract. Each practice is scored MET or NOT MET with point values of 1, 3, or 5. Some practices are not POA&M-eligible, meaning you cannot defer them. CMMC does not prescribe operational cadences the way FedRAMP does, but assessors expect to see evidence that processes actually run.
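The scoring model is simple arithmetic under the NIST SP 800-171 DoD Assessment Methodology: start at the maximum of 110 and subtract each NOT MET practice's point value. A sketch (the practice IDs and weights shown are illustrative inputs):

```python
def sprs_score(assessments: dict[str, tuple[bool, int]]) -> int:
    """Compute a DoD Assessment Methodology style score.

    Start at the maximum of 110 and subtract each NOT MET
    practice's weight (1, 3, or 5).
    assessments maps practice ID -> (met, point_value).
    """
    score = 110
    for met, points in assessments.values():
        if not met:
            score -= points
    return score

score = sprs_score({
    "AC.L1-3.1.1": (True, 5),
    "AC.L2-3.1.5": (False, 3),   # NOT MET: -3
    "RA.L2-3.11.2": (False, 5),  # NOT MET: -5
})
```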
FedRAMP Rev5 authorizes cloud services for federal agency use. It is prescriptive: defined control baselines (Low, Moderate, High), specific scan frequencies, remediation SLAs (30 days for Critical and High, 90 for Moderate, 180 for Low), and monthly ConMon reporting cadences. Rev5 tells you what to do, how often to do it, and what format to deliver it in. The Rev5 Balance improvements (MAS, SCN, ADS, VDR, CCM) are moving FedRAMP toward outcome-based, automation-friendly requirements.
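Those remediation SLAs translate directly into deadline arithmetic, which is what makes them automatable. A sketch using the SLA windows stated above:

```python
from datetime import date, timedelta

# FedRAMP Rev5 remediation SLAs, in days from discovery.
REMEDIATION_SLA_DAYS = {"critical": 30, "high": 30, "moderate": 90, "low": 180}

def remediation_due(severity: str, discovered: date) -> date:
    """Return the SLA-driven remediation deadline for a finding."""
    return discovered + timedelta(days=REMEDIATION_SLA_DAYS[severity.lower()])

due = remediation_due("High", date(2025, 1, 15))
```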
FedRAMP 20x is outcome-based. Instead of prescriptive controls, it defines Key Security Indicators (KSIs) organized by domain. KSIs describe what good looks like and validate it with machine-based and non-machine-based checks. 20x assumes automated pipelines, immutable infrastructure, and continuous validation. It is where FedRAMP is heading, and the Rev5 Balance improvements are the bridge.
The convergence matters. All three frameworks test the same operations. If you build those operations well, you can demonstrate compliance to any of them from the same data.
The 9 disciplines
Every compliance framework tests the same set of operational capabilities. We call them the 9 disciplines. They are not an exhaustive list of everything you need to do. They are the heavy hitters: the capabilities that consume most of your day-to-day operations and where the pain concentrates when tools are disconnected. The framework determines the wording, the cadence, and the evidence format. The work is the same.
Here is how they map across CMMC, FedRAMP Rev5, and FedRAMP 20x.
Master comparison table
| Discipline | CMMC Practices | FedRAMP Rev5 Controls | FedRAMP 20x KSIs |
| --- | --- | --- | --- |
| Change Management | CM.L2-3.4.3, CM.L2-3.4.4, CM.L2-3.4.5 | CM-3, CM-3(2), CM-4 | KSI-CMT-01 through KSI-CMT-04 |
| User Access Management | AC.L1-3.1.1, AC.L1-3.1.2, AC.L2-3.1.5 | AC-2, AC-6, AC-6(7) | KSI-IAM-01 through KSI-IAM-07 |
| Vulnerability Management | RA.L2-3.11.2, SI.L1-3.14.1, RA.L2-3.11.3 | RA-5, SI-2 | KSI-AFR-04 (VDR) |
| Continuous Monitoring | CA.L2-3.12.3 | CA-7, AU-6, AU-2 | KSI-AFR-06 (CCM), KSI-MLA-01, KSI-MLA-02, KSI-MLA-05, KSI-MLA-07, KSI-MLA-08 |
| OSCAL-Based Documentation | CA.L2-3.12.4 | PL-2 | KSI-AFR-03 (ADS), KSI-AFR-09 (PVA) |
| Asset Inventory | CM.L2-3.4.1 | CM-8 | KSI-PIY-01 |
| Deviation Management | RA.L2-3.11.1, CA.L2-3.12.2, RA.L2-3.11.3 | RA-5, SI-2 | KSI-AFR-04 (VDR) |
| Compliance Reporting | CA.L2-3.12.4, CA.L2-3.12.1 | CA-7, PL-2 | KSI-AFR-06 (CCM), KSI-AFR-09 (PVA) |
| Incident Response | IR.L2-3.6.1, IR.L2-3.6.2, IR.L2-3.6.3 | IR-2 through IR-8 | KSI-INR-01 through KSI-INR-03, KSI-AFR-08, KSI-AFR-10 |
```mermaid
graph TD
AI[Asset Inventory] --> VM[Vulnerability<br/>Management]
AI --> UAM[User Access<br/>Management]
AI --> ChM[Change<br/>Management]
VM --> DM[Deviation<br/>Management]
VM --> CM[Continuous<br/>Monitoring]
ChM --> CM
UAM --> CM
DM --> CR[Compliance<br/>Reporting]
CM --> CR
OD[OSCAL-Based<br/>Documentation] --> CR
IR[Incident<br/>Response] --> CM
style AI fill:#2b5797,stroke:#5b9bd5,color:#fff
style VM fill:#5c1a1a,stroke:#ff6b6b,color:#fff
style UAM fill:#5c4a1a,stroke:#ffc857,color:#fff
style ChM fill:#1a3d5c,stroke:#4ecdc4,color:#fff
style DM fill:#4a1a5c,stroke:#c77dff,color:#fff
style CM fill:#1a5c3d,stroke:#51cf66,color:#fff
style CR fill:#1a3d1a,stroke:#a9dc76,color:#fff
style OD fill:#5c4a1a,stroke:#ffd43b,color:#fff
style IR fill:#5c1a3d,stroke:#ff6b9d,color:#fff
```
Asset inventory feeds everything. Vulnerabilities feed deviations. Changes, access reviews, vulnerability scans, and incident responses all feed continuous monitoring. Continuous monitoring and deviations feed compliance reporting. The disciplines are not independent checkboxes. They are a connected system.
1. Change Management
Change management is how you evaluate, approve, implement, and log every change to production.
The pain we lived: Change tickets existed, but they had no linkage to the POA&M items they were remediating. When a patch window addressed six vulnerabilities, someone manually listed those six entries in the change ticket after the fact. Emergency changes bypassed every control. Approval responses lived in email threads that nobody could find three weeks later.
| CMMC | FedRAMP Rev5 | FedRAMP 20x |
| --- | --- | --- |
| CM.L2-3.4.3, CM.L2-3.4.4, CM.L2-3.4.5 | CM-3, CM-3(2), CM-4 | KSI-CMT-01 through KSI-CMT-04 |
What we changed: Every change request links to the issues it addresses. Approvals are captured on the ticket with timestamps and the approver’s identity. Security impact analysis is a required field before the change routes for approval. The change record, the approval, and the linked POA&M items are one connected dataset.
Read the full breakdown on change management
2. User Access Management
User access management is the full lifecycle of an account or permission: request, approval, provisioning, periodic review, and revocation.
The pain we lived: Granting access was easy. Proving the lifecycle was not. Quarterly privileged access reviews were tracked in spreadsheets. When someone left the organization, deprovisioning happened eventually, but the timestamp evidence was scattered across three systems. Approval for access grants lived in email.
| CMMC | FedRAMP Rev5 | FedRAMP 20x |
| --- | --- | --- |
| AC.L1-3.1.1, AC.L1-3.1.2, AC.L2-3.1.5 | AC-2, AC-6, AC-6(7) | KSI-IAM-01 through KSI-IAM-07 |
What we changed: Access requests are tickets with required justification fields. Reviews are recurring tasks on a cadence that cannot be closed without an explicit confirm or revoke action. Deprovisioning creates a tracked ticket with timestamps. Single-click approvals on the ticket replace email chains.
Read the full breakdown on user access management
3. Vulnerability Management
Vulnerability management is a pipeline: scan, enrich, evaluate, prioritize, remediate, verify, report. Every finding moves through it with an owner and a deadline.
The pain we lived: We ran multiple scanners across environments. Each one produced results in its own format with its own severity ratings. Normalizing findings into a single schema, determining which were new vs remediated vs reopened, and mapping them into POA&M entries consumed days every month.
| CMMC | FedRAMP Rev5 | FedRAMP 20x |
| --- | --- | --- |
| RA.L2-3.11.2, SI.L1-3.14.1, RA.L2-3.11.3 | RA-5, SI-2 | KSI-AFR-04 (VDR) |
What we changed: Scan results import into a single schema regardless of source scanner. Each finding becomes an issue with an owner, severity, SLA, and linked assets. New vs remediated vs reopened is determined automatically by comparing against prior scan cycles. The issue IS the POA&M entry. No separate mapping step.
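A single-schema import boils down to one normalized record type plus one small adapter per scanner. A sketch with hypothetical field names (the input shape shown for the adapter is illustrative, not any scanner's exact output format):

```python
from dataclasses import dataclass, field

@dataclass
class Finding:
    """One normalized finding, regardless of source scanner.

    Field names are illustrative. The point: every scanner's
    output maps into this one schema before anything else happens.
    """
    finding_id: str          # stable across scan cycles
    source_scanner: str      # e.g. "container-scanner", "infra-scanner"
    severity: str            # normalized to one scale
    asset_ids: list[str] = field(default_factory=list)
    owner: str = ""
    is_poam: bool = False    # flipped when POA&M criteria are met

def normalize_container_scan(raw: dict) -> Finding:
    """Example adapter: map one hypothetical scanner record in."""
    return Finding(
        finding_id=f'{raw["vuln_id"]}:{raw["target"]}',
        source_scanner="container-scanner",
        severity=raw["severity"].lower(),
        asset_ids=[raw["target"]],
    )

f = normalize_container_scan(
    {"vuln_id": "CVE-2024-0001", "target": "web01", "severity": "HIGH"})
```

Each additional scanner costs one adapter, not another monthly reconciliation pass.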
Read the full breakdown on vulnerability management
4. Continuous Monitoring
Continuous monitoring is how you verify that security controls still work, on a defined cadence, with evidence.
The pain we lived: ConMon was a monthly assembly project. Pull data from five or six sources. Reconcile it. Format it. Deliver it. For each client. Every month. Activities tracked in spreadsheets with no single view of what was due, overdue, or complete. Missed cadences turned into findings that compounded over time.
| CMMC | FedRAMP Rev5 | FedRAMP 20x |
| --- | --- | --- |
| CA.L2-3.12.3 | CA-7, AU-6, AU-2 | KSI-AFR-06 (CCM), KSI-MLA-01, KSI-MLA-02, KSI-MLA-05, KSI-MLA-07, KSI-MLA-08 |
What we changed: Every recurring ConMon activity is a scheduled task with a defined cadence, owner, and evidence requirement. Weekly, monthly, quarterly, annual. Tasks auto-generate on cadence. Evidence is captured on completion. The ConMon package generates from the data produced during the work, not from a manual assembly process.
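Auto-generating tasks on cadence is straightforward date arithmetic. A sketch with illustrative cadence windows (real systems align to calendar months and quarters rather than fixed day counts):

```python
from datetime import date, timedelta

# Cadence windows in days for recurring ConMon activities (illustrative).
CADENCES = {"weekly": 7, "monthly": 30, "quarterly": 91, "annual": 365}

def next_occurrences(start: date, cadence: str, count: int) -> list[date]:
    """Generate the next `count` due dates for a recurring ConMon task."""
    step = timedelta(days=CADENCES[cadence])
    return [start + step * i for i in range(1, count + 1)]

due_dates = next_occurrences(date(2025, 1, 1), "monthly", 3)
```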
Read the full breakdown on continuous monitoring
5. OSCAL-Based Documentation
System documentation answers one question: does what you wrote down match what you are actually running?
The pain we lived: SSPs were Word documents that went stale the day they were published. When an assessor compared the SSP to the live environment, inconsistencies became findings. Implementation descriptions were copy-pasted from guidance instead of describing what we actually do. Updating the SSP meant searching a 300-page document for every section that referenced a component that changed.
| CMMC | FedRAMP Rev5 | FedRAMP 20x |
| --- | --- | --- |
| CA.L2-3.12.4 | PL-2 | KSI-AFR-03 (ADS), KSI-AFR-09 (PVA) |
What we changed: Components, capabilities, and control implementations are stored as structured data. The SSP is a generated report, not a maintained document. Update a component’s implementation once and it reflects everywhere the component is referenced. OSCAL export gives you the machine-readable format that ADS requires alongside the human-readable version.
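The generated-report idea can be sketched as structured component data serialized on demand. This is a deliberately simplified shape, not the real OSCAL component-definition schema (which carries UUIDs, metadata blocks, and deeper nesting); the point is that the human-readable SSP and the machine-readable export come from the same record:

```python
import json

def to_component_stub(name: str, description: str,
                      implemented_controls: dict[str, str]) -> str:
    """Emit a machine-readable component stub (simplified, OSCAL-like).

    Update the record once; every rendering of it — SSP section,
    machine-readable export — reflects the change.
    """
    doc = {
        "component": {
            "title": name,
            "description": description,
            "control-implementations": [
                {"control-id": cid, "description": text}
                for cid, text in sorted(implemented_controls.items())
            ],
        }
    }
    return json.dumps(doc, indent=2)

stub = to_component_stub(
    "PostgreSQL", "Primary datastore",
    {"AC-2": "Roles are provisioned via tracked tickets."},
)
```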
Read the full breakdown on OSCAL-based documentation
6. Asset Inventory
Asset inventory answers one question: what is running in our environment, and who owns it?
The pain we lived: Inventory lived in spreadsheets. Cloud resources were stood up and never added. Decommissioned resources stayed listed for months. Vulnerability scanners reported on assets that were not in the inventory. The inventory listed assets the scanners never touched. Every month we reconciled by hand. A stale inventory breaks everything downstream: scan coverage, boundary definition, change tracking, compliance reporting.
| CMMC | FedRAMP Rev5 | FedRAMP 20x |
| --- | --- | --- |
| CM.L2-3.4.1 | CM-8 | KSI-PIY-01 |
What we changed: Assets are the core data object. Everything links to them: issues, changes, scan results, configurations. Discovery from cloud provider APIs feeds inventory directly. Scan coverage is cross-referenced against inventory, and gaps are flagged. An asset record is not a row in a spreadsheet. It is a node that connects to everything else.
Read the full breakdown on asset inventory
7. Deviation Management
Deviation management is how you formally document findings that cannot or will not be remediated on the standard timeline: operational requirements, false positives, and risk adjustments.
The pain we lived: Deviations were tracked in a spreadsheet separate from the POA&M. Every month, someone reconciled the deviation tracker against the POA&M entries manually. Authorizing official sign-off lived in email. Compensating controls were described in free-text notes with no structured review process. Overdue POA&Ms piled up with repeated extensions.
| CMMC | FedRAMP Rev5 | FedRAMP 20x |
| --- | --- | --- |
| RA.L2-3.11.1, CA.L2-3.12.2, RA.L2-3.11.3 | RA-5, SI-2 | KSI-AFR-04 (VDR) |
What we changed: Deviations are attached to the parent finding as structured data, not in a separate tracker. Each deviation type has required fields: justification, compensating controls, evidence. Authorizing official approval is captured on the deviation record with a single-click approval and a timestamp. Periodic review is enforced by recurring tasks. The deviation and its parent finding are one connected record.
Read the full breakdown on deviation management
8. Compliance Reporting
Compliance reporting is how you turn operational data into evidence for a specific audience and cadence.
The pain we lived: Monthly ConMon packages assembled by hand. Evidence scattered across five tools. Assessment prep was a weeks-long data aggregation project. The SSP described what we wished we were doing. Errors and inconsistencies between artifacts: the SSP says one thing, the POA&M says another, the scan report says a third. Every report was a from-scratch assembly, not a generated output.
| CMMC | FedRAMP Rev5 | FedRAMP 20x |
| --- | --- | --- |
| CA.L2-3.12.4, CA.L2-3.12.1 | CA-7, PL-2 | KSI-AFR-06 (CCM), KSI-AFR-09 (PVA) |
What we changed: One data model for vulnerabilities, POA&Ms, deviations, changes, access reviews, and assets. Reports are live views of that data. Human-readable and machine-readable versions generate from the same source. ConMon packages, assessment evidence, and quarterly reports pull from live data instead of stitched-together exports.
Read the full breakdown on compliance reporting
9. Incident Response
Incident response is what happens when something goes wrong: detect, contain, recover, learn.
The pain we lived: The IR plan existed as a PDF. Nobody had tested it recently. The notification chain referenced people who had changed roles. When something happened, the first 30 minutes were spent figuring out who to call and what to do. After-action reviews happened informally and the lessons did not feed back into the program. The FedRAMP Security Inbox did not exist as a monitored, tracked process.
| CMMC | FedRAMP Rev5 | FedRAMP 20x |
| --- | --- | --- |
| IR.L2-3.6.1, IR.L2-3.6.2, IR.L2-3.6.3 | IR-2 through IR-8 | KSI-INR-01 through KSI-INR-03, KSI-AFR-08, KSI-AFR-10 |
What we changed: Incidents from SIEM, EDR, ITDR, and other detection tools all flow into one system as actionable tickets with required fields: type, severity, affected systems, initial assessment. We also track monitoring alerts (uptime, disk space, etc.) so no matter what monitoring tools you run, the alerts land where they can be actioned. Notification chains are defined in the system and triggered automatically on ticket creation. After-action review tasks auto-generate when an incident is resolved. Annual tabletop exercises are recurring tasks with attendance and findings captured. The FedRAMP Security Inbox routes to tracked tickets with SLA-based response timeframes.
Read the full breakdown on incident response
The Rev5 Balance bridge
FedRAMP Rev5 Balance is a set of five improvements that move FedRAMP from traditional compliance toward operational reality. They are being phased in alongside existing Rev5 requirements. Check fedramp.gov for current adoption status. They signal where FedRAMP is heading. If you are building for 20x readiness, these are the stepping stones.
Minimum Assessment Scope (MAS) narrows the authorization boundary to information resources that actually handle federal customer data. Tighter scope, less noise, faster assessments.
Significant Change Notifications (SCN) replaces advance government approval for changes with a notification-based model. Four categories: routine recurring changes need no notification, adaptive changes require notification after completion, transformative changes require advance notification, and impact categorization changes require a new assessment entirely. No change type requires pre-approval.
Authorization Data Sharing (ADS) replaces static PDF authorization packages with live, programmatically accessible data served through a trust center. Human-readable and machine-readable, kept in sync automatically.
Vulnerability Detection and Response (VDR) replaces CVSS-only severity with a contextual risk evaluation. Exploitability, internet-reachability, and environmental impact drive remediation urgency instead of a single score.
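The shift from score-only to contextual triage can be sketched as a small decision function. This is illustrative, not FedRAMP's actual algorithm; the point is that context outranks the raw CVSS number:

```python
def remediation_urgency(cvss: float, exploited_in_wild: bool,
                        internet_reachable: bool) -> str:
    """Illustrative contextual triage in the spirit of VDR.

    Exploitability and reachability drive urgency; CVSS alone
    only decides among the unexploited, unreachable findings.
    """
    if exploited_in_wild and internet_reachable:
        return "urgent"
    if exploited_in_wild or (internet_reachable and cvss >= 7.0):
        return "high"
    if cvss >= 7.0:
        return "moderate"   # severe on paper, but isolated and unexploited
    return "low"

# A critical-CVSS finding on an isolated host ranks below a
# medium-CVSS finding that is internet-facing and actively exploited.
isolated = remediation_urgency(9.8, exploited_in_wild=False, internet_reachable=False)
exposed = remediation_urgency(6.5, exploited_in_wild=True, internet_reachable=True)
```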
Collaborative Continuous Monitoring (CCM) replaces per-agency monthly ConMon packages with quarterly Ongoing Authorization Reports shared with all agencies at once. One report, one cadence, all stakeholders.
Each improvement has its own deep-dive article:
- Minimum Assessment Scope: what MAS requires and how to automate it
- Significant Change Notifications: what SCN requires and how to automate it
- Authorization Data Sharing: what ADS requires and how to automate it
- Vulnerability Detection and Response: what VDR requires and how to automate it
- Collaborative Continuous Monitoring: what CCM requires and how to automate it
The automation thesis
Here is the core idea behind everything we have built.
One platform. One data model. Three frameworks.
Assets, issues, changes, approvals, deviations, scans, and reports all live in the same system. The relationships between them are not maintained manually. They are the data model.
One detail that makes this work: we treat all vulnerabilities, misconfigurations, and assessment findings as Issue Tickets. An Issue Ticket is the source of truth. It becomes a POA&M entry when it meets the criteria for being one (it came from an assessment, it is overdue, etc.), but the POA&M is still the same Issue Ticket with the field “IS POA&M” set to Yes. There is no separate POA&M object. The data model stays unified. When you close a change ticket that remediated a vulnerability, the linked Issue updates. When you approve a user access review, the approval is timestamped on the ticket. When a compliance check fails, an Issue Ticket is created automatically.
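That unification can be sketched as one record type with a flag rather than two objects to reconcile. Field names here are illustrative, not our actual schema:

```python
from dataclasses import dataclass, field

@dataclass
class IssueTicket:
    """One record for vulnerabilities, misconfigurations, and findings.

    There is no separate POA&M object: an Issue *is* a POA&M entry
    when `is_poam` is True.
    """
    issue_id: str
    source: str                       # "scan", "assessment", "compliance-check"
    is_poam: bool = False
    linked_change_ids: list[str] = field(default_factory=list)
    status: str = "open"

def on_change_closed(issue: IssueTicket, change_id: str) -> None:
    """Closing a linked change ticket updates the Issue directly."""
    if change_id in issue.linked_change_ids:
        issue.status = "remediated"

issue = IssueTicket("ISS-101", source="assessment", is_poam=True,
                    linked_change_ids=["CHG-7"])
on_change_closed(issue, "CHG-7")
```

Because the POA&M is the same record, there is no sync step between "operational status" and "compliance status" — closing the work closes the POA&M entry.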
```mermaid
graph LR
A[Asset] --> I[Issue / Finding]
I --> P[POA&M Entry]
P --> CT[Change Ticket]
CT --> AP[Approval]
AP --> CR[ConMon Report]
I --> D[Deviation]
D --> P
A --> S[Scan Result]
S --> I
style A fill:#2b5797,stroke:#5b9bd5,color:#fff
style I fill:#5c1a1a,stroke:#ff6b6b,color:#fff
style P fill:#5c4a1a,stroke:#ffc857,color:#fff
style CT fill:#1a3d5c,stroke:#4ecdc4,color:#fff
style AP fill:#1a5c3d,stroke:#51cf66,color:#fff
style CR fill:#1a3d1a,stroke:#a9dc76,color:#fff
style D fill:#4a1a5c,stroke:#c77dff,color:#fff
style S fill:#5c1a3d,stroke:#ff6b9d,color:#fff
```
A scan produces a finding. The finding links to a POA&M entry. The POA&M links to a change ticket. The change ticket carries an approval. The approval feeds the ConMon report. If the finding cannot be remediated, a deviation attaches to it and the POA&M reflects that status automatically. Every node in this graph is a record in the same system.
The workflow is the evidence. You do not produce evidence separately from doing the work. The work produces the evidence. ConMon packages, assessment artifacts, and quarterly reports generate from the data created during operations, not from a separate reporting project.
CMMC, FedRAMP Rev5, and FedRAMP 20x test the same 9 disciplines. The differences are in wording, cadence, and evidence format. If your operations run on structured data with relationships built in, you can demonstrate compliance to any of them.
Compliance is a byproduct of operations, not a separate workstream.
Where to go next
Each discipline and each Rev5 Balance improvement has a dedicated deep-dive article. The series covers CMMC, FedRAMP Rev5, and FedRAMP 20x requirements side by side, with implementation detail from our experience.
The 9 disciplines:
- Change Management across CMMC, Rev5, and 20x
- User Access Management across CMMC, Rev5, and 20x
- Vulnerability Management across CMMC, Rev5, and 20x
- Continuous Monitoring across CMMC, Rev5, and 20x
- OSCAL-Based Documentation across CMMC, Rev5, and 20x
- Asset Inventory across CMMC, Rev5, and 20x
- Deviation Management across CMMC, Rev5, and 20x
- Compliance Reporting across CMMC, Rev5, and 20x
- Incident Response across CMMC, Rev5, and 20x
Rev5 Balance improvements:
- Minimum Assessment Scope (MAS)
- Significant Change Notifications (SCN)
- Authorization Data Sharing (ADS)
- Vulnerability Detection and Response (VDR)
- Collaborative Continuous Monitoring (CCM)
FAQ
Q: What is the difference between CMMC and FedRAMP?
A: Both frameworks test the same 9 disciplines. CMMC uses 110 practices from NIST SP 800-171 Rev 2 scored as MET or NOT MET with point values of 1, 3, or 5. FedRAMP Rev5 uses NIST 800-53 controls with prescriptive cadences, SLAs, and reporting formats. FedRAMP 20x uses Key Security Indicators (KSIs) validated by machine-based and non-machine-based checks. The wording and evidence format differ. The operations are the same. Build the operations once and you can demonstrate compliance to any of them.
Q: How do I evaluate a compliance consultant?
A: Look at whether they build operations or assemble documentation. A consultant who helps you fill out the SSP template and prepare evidence binders is doing documentation assembly. A consultant who builds the operational workflows, connects your scan results to your POA&M, wires change approvals to tickets, and automates your ConMon cadences is building the infrastructure that produces compliance as a byproduct. The work between readiness assessment and audit is what determines pass or fail.
Q: What is the difference between an assessor and a consultant?
A: A C3PAO (for CMMC) or 3PAO (for FedRAMP) assesses your environment. They test controls, review evidence, and issue findings. They do not build or run your operations. A consultant helps you prepare. Neither runs the day-to-day operations that produce the evidence. The gap between readiness and audit is operational: running the cadences, closing the POA&Ms, keeping inventory current, reviewing access on schedule. That work falls on the provider.
Q: What is the biggest hidden cost in compliance operations?
A: Tool sprawl and reconciliation labor. Running scans in one tool, tracking POA&Ms in another, managing changes in a third, and keeping inventory in a spreadsheet means someone has to reconcile all of that data every month. That monthly assembly tax is the hidden cost. It compounds with every additional environment and every additional framework. One platform with a unified data model eliminates the reconciliation step entirely.
Q: Can I manage compliance with spreadsheets?
A: A spreadsheet tracks compliance status. It does not produce compliance. Tracking which controls are met and which have gaps is useful during assessment prep, but it does not run the operations that close those gaps. A GRC-ITSM runs the operations: scan results become Issue Tickets, change requests carry approvals, access reviews happen on cadence, and the compliance report generates from the work. The difference is tracking compliance versus producing it.
Q: What is the difference between GRC-on-top and a GRC-ITSM?
A: GRC-on-top means a compliance platform that sits above your operations. It tracks your control posture, manages your documentation, and produces reports, but the actual work (ticketing, scanning, approvals, change management) happens in other tools. Evidence is a project you assemble from multiple sources. A GRC-ITSM is both the GRC and the ITSM. The ticket is the evidence. The approval is on the ticket. The scan result creates the Issue Ticket. Evidence is a byproduct of doing the work, not a separate collection effort.
Q: How should I prepare for FedRAMP 20x?
A: Start with the five Rev5 Balance improvements: MAS (scoping), SCN (change notifications), ADS (trust center), VDR (vulnerability evaluation), and CCM (quarterly OARs). These are the bridge from Rev5 to 20x and are available for opt-in now. Beyond that, 20x assumes automated pipelines, immutable infrastructure (KSI-CMT-02), phishing-resistant MFA (KSI-IAM-01), automated inventory (KSI-PIY-01), and machine-readable documentation (KSI-AFR-03). These are architecture changes, not policy changes.
Q: How do I pursue CMMC and FedRAMP at the same time?
A: Start with the 9 disciplines described in this series. Map your current operations against them. Identify which ones are manual, disconnected, or missing. Then build the operational workflows that satisfy both frameworks at once. The practices and controls map to the same capabilities. If your vulnerability management pipeline produces Issue Tickets with owners, SLAs, linked assets, and deviation records, that data satisfies CMMC RA.L2-3.11.2, FedRAMP RA-5, and 20x KSI-AFR-04 from the same source.
Have questions about any of this? Reach out.