The core concept
Incident response is what happens when something goes wrong: detect, contain, recover, learn. Every framework wants the same thing: a tested capability, not a plan in a binder.
The distinction matters. A plan is a document that describes what you would do. A capability is evidence that you have done it, tested it, and improved it based on what you learned. Frameworks differ on notification timelines, testing cadence, and how much of the “learn” part has to be continuous. The question is the same: when something breaks, can you respond effectively, and can you prove it?
The second half of that question is where most organizations struggle. Detection and containment usually happen because people care about keeping systems running. The documentation, the notification chain, the post-incident review, and the evidence trail that proves the plan executed as written all fall apart when they are not wired into the operational workflow.
Incident response also touches other disciplines. The FedRAMP Security Inbox (FSI) is an inbound channel that creates tracked work. Monitoring and detection (SIEM, EDR, ITDR, uptime checks, disk space alerts) are the upstream feeds. After-action reviews produce improvement items that become tracked issues. IR is not a standalone capability. It is a node in the operational graph.
What CMMC requires
CMMC does not want a plan. It wants an operational capability with evidence of testing and execution.
The relevant practices:
- IR.L2-3.6.1 (Level 2, 5 points, not POA&M eligible): Establish an operational incident-handling capability for organizational systems that includes preparation, detection, analysis, containment, recovery, and user response activities.
- IR.L2-3.6.2 (Level 2, 5 points, not POA&M eligible): Track, document, and report incidents to designated officials and authorities, both internal and external.
- IR.L2-3.6.3 (Level 2, 1 point, POA&M eligible): Test the organizational incident response capability.
Two of the three IR practices are 5-point and not POA&M eligible. You cannot plan to fix them later. They have to be in place at assessment time. The capability has to be demonstrable, not aspirational.
IR.L2-3.6.1 has seven assessment objectives, [a] through [g]. Assessors verify each one:
- [a] An incident-handling capability is established
- [b] Preparation activities are performed
- [c] Detection activities are performed
- [d] Analysis activities are performed
- [e] Containment activities are performed
- [f] Recovery activities are performed
- [g] User response activities are performed
What a C3PAO assessor looks for:
- An incident response plan that names real people, not just “the security team.” Specific roles with specific individuals assigned.
- Evidence of actual incidents handled per the plan. Tickets with timelines, escalation records, resolution documentation. If you have had zero incidents and cannot show how you tested the capability, that is a gap.
- Tabletop exercises or functional tests at least annually, with documented results and lessons learned. This is the IR.L2-3.6.3 test.
- Training records for IR team members.
- A notification chain that is current. Not three job changes out of date.
- Post-incident reviews that feed back into the plan. Assessors look for evidence the plan evolved based on what happened.
The common gap: the plan lives on SharePoint. Nobody has tested it. The notification chain lists people who left. When something happens, the first 30 minutes are spent figuring out who to call. The plan says “escalate to the ISSO” but nobody remembers who the ISSO is because the person who held that role transferred six months ago.
The scoring matters. IR.L2-3.6.1 and IR.L2-3.6.2 are each 5-point and not POA&M eligible. If your incident response is a plan nobody has exercised, you fail both at assessment time. IR.L2-3.6.3 (testing) is 1-point and POA&M eligible. You can defer the testing practice, but not the capability itself.
What FedRAMP Rev5 requires
FedRAMP spells out what your IR program has to include, how often you test it, and who gets notified.
The relevant controls:
- IR-2 (Incident Response Training): train IR team members annually and upon assuming an IR role.
- IR-3 (Incident Response Testing): test the IR capability at least annually.
- IR-4 (Incident Handling): document, coordinate, and track incidents.
- IR-5 (Incident Monitoring): track and document incidents on an ongoing basis.
- IR-6 (Incident Reporting): report incidents to appropriate authorities within required timeframes per CISA Federal Incident Notification Guidelines.
- IR-7 (Incident Response Assistance): provide an incident response support resource available to system users.
- IR-8 (Incident Response Plan): maintain the plan, distribute to key personnel, review and update annually.
Rev5 is prescriptive about the lifecycle. IR-2 covers training. IR-3 covers testing. IR-4 and IR-5 cover active handling and monitoring. IR-6 covers reporting. IR-7 covers user-facing assistance. IR-8 covers the plan itself, its maintenance, and its distribution.
Key requirements:
- Incident reporting to CISA within required timeframes per CISA Federal Incident Notification Guidelines.
- Agency notification alongside CISA reporting.
- Annual IR plan review and update (IR-8).
- Defined points of contact with specific roles, not generic team references.
- Evidence of annual testing with findings and corrective actions (IR-3).
- Training records for IR team members, including initial training on assumption of the IR role (IR-2).
- An incident response support resource available to system users (IR-7). This is not just an email address. It is a mechanism users can reach when they need to report or respond to an incident.
The FedRAMP Security Inbox
The FedRAMP Security Inbox (FSI) is required for both Rev5 and 20x, effective January 5, 2026. It is the official channel for communications from FedRAMP (gsa.gov and fedramp.gov email domains).
FSI requirements:
- Emails from gsa.gov and fedramp.gov MUST create tracked tickets (KSI-AFR-08).
- Emergency messages route to a senior security official (FSI-CSO-EMR) with impact-based completion timeframes: 12 hours for High impact systems (FSI-FRP-ERT), 2nd business day for Moderate, 3rd business day for Low.
- Acknowledgment of FSI messages (FSI-CSO-ACK) is a SHOULD, not a MUST. There is no prescribed timeframe for acknowledgment.
The FSI is not optional. If FedRAMP sends you an emergency message and your response time depends on someone noticing an email in a shared inbox, you have a problem. The FSI has to be wired into your ticketing system so that messages create tracked work with SLA timers.
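The impact-based timeframes above reduce to a small deadline calculation that an SLA timer can enforce. A minimal Python sketch, assuming the ticketing system records when the FSI message was received and knows the system's impact level; the function names here are illustrative, not part of any FedRAMP tooling:

```python
from datetime import datetime, timedelta

def next_business_day(start: datetime, days: int) -> datetime:
    """Advance `days` business days from `start`, skipping weekends."""
    current = start
    remaining = days
    while remaining > 0:
        current += timedelta(days=1)
        if current.weekday() < 5:  # Monday through Friday
            remaining -= 1
    # Due at the end of that business day.
    return current.replace(hour=23, minute=59, second=0, microsecond=0)

def fsi_emergency_deadline(received: datetime, impact: str) -> datetime:
    """Completion deadline for an FSI emergency message by impact level."""
    if impact == "high":
        return received + timedelta(hours=12)   # 12 hours for High (FSI-FRP-ERT)
    if impact == "moderate":
        return next_business_day(received, 2)   # 2nd business day for Moderate
    if impact == "low":
        return next_business_day(received, 3)   # 3rd business day for Low
    raise ValueError(f"unknown impact level: {impact}")
```

The point of computing the deadline at ticket creation is that the timer starts when the message arrives, not when a human notices it.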
What 3PAOs look for:
- Evidence the IR plan is current, tested, and operational. Not current on paper. Current meaning updated based on the most recent test results and incident findings.
- Incident tickets with timelines, escalation records, and resolution documentation. The ticket should reconstruct the timeline from first detection to final resolution.
- Training records matching the IR roster.
- Evidence of annual testing with findings and corrective actions.
- A working FSI with evidence that messages are tracked and emergency messages are routed and resolved within the required timeframes.
The common gap: the plan is current on paper but never exercised. Notification chains reference people who left. No evidence of annual testing. The FSI does not exist or is not monitored. When an emergency message arrives from FedRAMP, nobody knows it is there until days later.
What FedRAMP 20x requires
20x pushes incident response harder than Rev5, with a specific focus on what happens after the incident. The INR domain covers persistent review of procedures, pattern analysis of past incidents, and feeding lessons learned back into the program.
A critical distinction: detection and monitoring live in a separate domain. The MLA (Monitoring, Logging, and Auditing) KSIs cover SIEM, audit logging, configuration evaluation, event types, and log data access. Specifically, these are KSI-MLA-01, KSI-MLA-02, KSI-MLA-05, KSI-MLA-07, and KSI-MLA-08. INR is review, learn, improve. MLA is detect, monitor. Together, they close the full IR lifecycle.
The INR KSIs:
- KSI-INR-RIR (KSI-INR-01), Reviewing Incident Response Procedures: “Persistently review the effectiveness of documented incident response procedures.”
- KSI-INR-RPI (KSI-INR-02), Reviewing Past Incidents: “Persistently review past incidents for patterns or vulnerabilities.”
- KSI-INR-AAR (KSI-INR-03), Generating After Action Reports: “Generate incident after action reports and persistently incorporate lessons learned.”
Related KSIs:
- KSI-AFR-08 (FedRAMP Security Inbox): the FSI requirement, shared with Rev5.
- KSI-AFR-10 (Incident Communications Procedures): defines how you communicate about incidents.
- KSI-CED-04 (IR and Disaster Recovery Training): training for the IR team.
The “persistently” bar
The bar is persistent review, not periodic. This is a meaningful difference.
KSI-INR-RIR asks you to persistently review the effectiveness of your IR procedures. Not review them once a year. Persistently. That means your review cadence is tied to incident volume, new threat intelligence, and organizational changes, not just a calendar entry.
KSI-INR-RPI asks you to look across incidents for patterns and systemic weaknesses. Not handle each incident as a standalone event. If three incidents in six months share the same root cause, 20x expects you to identify that pattern and address the systemic issue.
KSI-INR-AAR asks that lessons learned are persistently incorporated, meaning the IR program changes based on what you find. After-action reports that go into a folder and never change anything do not satisfy this KSI.
Incident Communications Procedures
Incident Communications Procedures (ICP) are 20x-only. The timelines are aggressive:
- ICP-CSX-IRF: 1-hour reporting to FedRAMP.
- ICP-CSX-IRA: 1-hour reporting to agencies.
- ICP-CSX-IRC: 1-hour reporting to CISA for attack-vector incidents.
These are notification timelines, not containment timelines. Within one hour of determining that an incident qualifies, you need to have notified FedRAMP, affected agencies, and (for attack-vector incidents) CISA.
The common gap on the path to 20x: IR procedures reviewed once a year at best. Past incidents handled individually with no pattern analysis across them. After-action reports exist but lessons learned never track back to procedure updates. The connection between “incident happened, we learned something” and “the procedure changed because of what we learned” is not traceable.
The pain we lived
We manage compliant environments across multiple clients. Here is what incident response looked like before we built it into the platform.
The IR plan existed. It was a Word document. Last updated when the system was authorized. The notification chain listed people who no longer worked there. When an actual incident happened, the first 20 minutes were spent on Slack figuring out who owned what, because the plan was in a document library nobody had bookmarked.
Containment and recovery worked because the team knew their systems. The documentation did not. Timelines were reconstructed after the fact from memory and chat logs. “When did we first detect this?” turned into an argument about whether someone’s Slack message at 2:47 PM counted as detection or whether it was the monitoring alert at 2:32 PM that nobody saw until later.
Evidence collection was retroactive. After the incident was resolved, someone had to go back and document the timeline, the decisions, and the actions taken. This happened days later if at all. Details were lost. The documentation was a best-effort reconstruction, not a real-time record.
After-action reviews happened when we had time, which was rarely. When they did happen, the findings went into a document and nothing changed. The IR plan stayed the same. The notification chain stayed stale. The same gaps surfaced at the next incident. We knew what needed to change, wrote it down, and then moved on to the next fire. The improvement items were aspirational, not tracked.
Training was another gap. IR team members changed. New people joined. But the training was a one-time walkthrough when they started, not a structured program with annual refreshes. When an assessor asked for training records, we had onboarding notes, not IR-specific training evidence.
Annual tabletop exercises were a checkbox. Run the exercise, write up the findings, put the document in the compliance folder. Nobody tracked whether the findings were actually addressed. The exercise results and the plan existed in different locations with no linkage. The exercise in 2024 identified the same gaps as the exercise in 2023 because nothing had been fixed.
The FedRAMP Security Inbox was the worst gap. Before January 2026, the FSI requirement was not formalized the way it is now. Communications from FedRAMP arrived in email. Sometimes they got forwarded. Sometimes they sat. Emergency messages had no routing logic. There was no SLA timer, no senior security official automatically notified, no ticket created. If someone was on vacation, the message waited.
All incidents, whether from SIEM alerts, EDR detections, ITDR findings, or monitoring alerts (uptime, disk space, certificate expiry), needed to flow into one system. Instead, they were scattered across email, Slack, and three different dashboards. A SIEM alert generated a case in the SIEM console. A monitoring alert sent an email. An EDR detection went to the EDR dashboard. The team had to check multiple places to get a complete picture. Correlation across sources was a manual exercise.
How we automate it
An IR plan in a Word document is a plan that will fail when you need it. Here is how we built incident response into Stratus GRC-ITSM.
- Incident ticket creation. Incidents are tickets with required fields: type, severity, affected systems, initial assessment. All sources flow into one system. SIEM alerts, EDR detections, ITDR findings, monitoring alerts (uptime, disk space), manual reports, and the FedRAMP Security Inbox all create the same ticket type with the same required fields. One intake point, regardless of source.
- Automatic POC notification. Incident creation triggers notification to all defined points of contact. Nobody has to remember who to call. Roles are defined in the system and updated when people change. A P1 incident at 3 AM notifies the right people immediately, not when someone checks their email in the morning.
- Severity-based workflows. Different severities trigger different response workflows. A P1 incident escalates immediately with broader notification and a compressed timeline. A P3 follows standard triage. The routing is defined once and applied every time. No ad-hoc decisions about who to loop in.
- FedRAMP Security Inbox integration. The FSI is wired to create tickets automatically. Emails from gsa.gov and fedramp.gov create tracked tickets. Emergency messages route to the senior security official (FSI-CSO-EMR) with the right completion timeframe based on system impact level. High gets 12 hours. Moderate gets 2nd business day. Low gets 3rd business day. The SLA timer starts when the ticket is created, not when someone notices the email.
- Response phase tracking. Each phase of the response (detection, containment, eradication, recovery) is tracked with timestamps and assigned owners. The timeline reconstructs itself from the ticket history. When an assessor asks “show me the timeline for your last incident,” you pull the ticket. No reconstruction from memory or chat logs.
- After-action workflow. When an incident is resolved, an after-action review task is created automatically. This is not optional. The task is created by the workflow, assigned to the incident lead, and tracked like any other ticket. Findings feed back into the issue tracker as improvement items. Each improvement item links back to the incident that produced it. This directly supports KSI-INR-AAR, which requires lessons learned to be persistently incorporated.
- Tabletop exercise tracking. Annual IR exercises are scheduled as recurring tasks. Results, attendance, and improvement items are tracked as tickets, not stored in a separate folder. When an assessor asks for evidence of IR testing (IR.L2-3.6.3 for CMMC, IR-3 for Rev5), the exercise records, findings, and corrective actions are all in the same system as the incidents themselves.
- Pattern analysis. Incident data is aggregated for trending by type, severity, root cause, and response time. This supports KSI-INR-RPI, which asks you to persistently review past incidents for patterns. The platform surfaces the data. The IR lead reviews it and takes action when patterns emerge.
```mermaid
graph TD
    SIEM[SIEM / EDR / ITDR] --> INC[Incident Ticket]
    MON[Monitoring Alerts] --> INC
    FSI[FedRAMP Security Inbox] --> INC
    INC --> POC[POC Notification]
    INC --> SEV{Severity}
    SEV -->|P1| ESC[Escalation Workflow]
    SEV -->|P2-P3| TRIAGE[Standard Triage]
    ESC --> CONTAIN[Containment]
    TRIAGE --> CONTAIN
    CONTAIN --> RECOVER[Recovery]
    RECOVER --> AAR[After-Action Review]
    AAR --> IMPROVE[Improvement Items]
    style SIEM fill:#2b5797,stroke:#5b9bd5,color:#fff
    style MON fill:#1a3d5c,stroke:#4ecdc4,color:#fff
    style FSI fill:#5c4a1a,stroke:#ffc857,color:#fff
    style INC fill:#5c1a1a,stroke:#ff6b6b,color:#fff
    style POC fill:#1a5c3d,stroke:#51cf66,color:#fff
    style SEV fill:#4a1a5c,stroke:#c77dff,color:#fff
    style ESC fill:#5c1a3d,stroke:#ff6b9d,color:#fff
    style TRIAGE fill:#1a3d1a,stroke:#a9dc76,color:#fff
    style CONTAIN fill:#5c1a1a,stroke:#ff6b6b,color:#fff
    style RECOVER fill:#1a5c3d,stroke:#51cf66,color:#fff
    style AAR fill:#5c4a1a,stroke:#ffc857,color:#fff
    style IMPROVE fill:#1a3d1a,stroke:#a9dc76,color:#fff
```
The throughline: every incident, from every source, enters the same workflow. The workflow handles routing, notification, escalation, documentation, after-action review, and improvement tracking. One process serves CMMC assessors, FedRAMP 3PAOs, and 20x KSI validators.
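The pattern analysis that supports KSI-INR-RPI amounts to aggregating resolved incidents across all sources and flagging repeated root causes. A minimal sketch, assuming each ticket carries a recorded root cause; the field name and threshold are assumptions for illustration:

```python
from collections import Counter

def recurring_root_causes(incidents: list[dict], threshold: int = 2) -> dict[str, int]:
    """Return root causes that appear at least `threshold` times
    across the incident set, so systemic issues surface instead of
    each incident being handled as a standalone event."""
    counts = Counter(i["root_cause"] for i in incidents)
    return {cause: n for cause, n in counts.items() if n >= threshold}
```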
When a CMMC assessor asks for evidence of an operational incident-handling capability (IR.L2-3.6.1), the incident tickets, timelines, escalation records, and after-action reviews are there. When a 3PAO asks for FSI evidence and CISA reporting, the tracked tickets with SLA timers and resolution records are there. When a 20x validator checks KSI-INR-RIR for persistent procedure review, the improvement items linked to incidents and the procedure updates they triggered are there.
Compliance is a byproduct of operations, not a separate workstream.
FAQ
Q: What is the difference between an incident response plan and an incident response capability?
A: A plan is a document that describes what you would do. A capability is evidence that you have done it, tested it, and improved it. IR.L2-3.6.1 requires an operational incident-handling capability, not just a document. Assessors want to see incident tickets with timelines, escalation records, and resolution documentation. If your IR plan is a Word document that nobody has tested and the notification chain lists people who have changed roles, you have a plan but not a capability. Both CMMC and FedRAMP test the capability, not the document.
Q: What is the FedRAMP Security Inbox (FSI) and what does it require?
A: The FSI is the official channel for communications from FedRAMP (gsa.gov and fedramp.gov email domains), required since January 5, 2026. Emails from those domains MUST create tracked tickets (KSI-AFR-08). Emergency messages route to a senior security official (FSI-CSO-EMR) with impact-based completion timeframes: 12 hours for High impact systems, 2nd business day for Moderate, 3rd business day for Low. Acknowledgment (FSI-CSO-ACK) is a SHOULD, not a MUST. The response is required. The acknowledgment is recommended.
Q: How do the INR and MLA domains divide the incident response lifecycle in FedRAMP 20x?
A: INR covers what happens after an incident: reviewing procedures (KSI-INR-01), analyzing past incidents for patterns (KSI-INR-02), and generating after-action reports with lessons learned (KSI-INR-03). MLA covers detection and monitoring: SIEM (KSI-MLA-01), audit logging (KSI-MLA-02), configuration evaluation (KSI-MLA-05), event types (KSI-MLA-07), and log data access (KSI-MLA-08). INR is review, learn, improve. MLA is detect, monitor. Together they close the full lifecycle.
Q: Does an annual tabletop exercise satisfy the IR testing requirements?
A: IR.L2-3.6.3 (CMMC, 1-point, POA&M eligible) and IR-3 (FedRAMP) require annual IR capability testing. Tabletop exercises satisfy this when they produce documented results: scenario description, participant attendance, decisions made, findings identified, and improvement items. The improvement items need to be tracked and actually addressed. If the exercise in 2025 identifies the same gaps as 2024 because nothing was fixed, assessors will flag the testing as ineffective.
Q: Which CMMC incident response practices are POA&M eligible?
A: IR.L2-3.6.1 (establish an operational incident-handling capability) and IR.L2-3.6.2 (track, document, and report incidents) are both 5-point and not POA&M eligible. They must be in place at assessment time. IR.L2-3.6.3 (test the IR capability) is 1-point and POA&M eligible, so you can defer the testing. But the capability itself cannot be deferred. You cannot plan to build an incident response capability after the assessment.
Q: How should detection sources feed into incident response?
A: All detection sources should flow into one system as actionable tickets. SIEM alerts, EDR detections, ITDR findings, and monitoring alerts (uptime, disk space, certificate expiry) all create the same ticket type with the same required fields: type, severity, affected systems, initial assessment. If alerts scatter across email, Slack, and three different dashboards, the team has to check multiple places for a complete picture. Correlation across sources becomes a manual exercise. One intake point, regardless of source, is what makes a detection event actionable.
This article is part of a 15-part series on the operational disciplines that CMMC, FedRAMP Rev5, and FedRAMP 20x all test. [Read the series overview: Stop Building for Compliance. Build for Operations.]
