The point of this case study
When people talk about incident reporting laws, the conversation often stays theoretical. "Report within 72 hours" sounds simple until you are living inside an active incident with incomplete facts, frightened executives, a vendor pointing fingers, and systems that are still going up and down.
This case study shows what CIRCIA compliance looks like in real life. It focuses on the decisions, documentation, and timelines a covered organization worked through to meet cyber incident reporting requirements while still fighting the incident.
This is a composite case study based on common real-world patterns reported in public regulatory briefings and incident response practices. Names, exact system details, and identifiers are intentionally generalized to protect privacy and keep the focus on process and compliance.
Background: The organization and why CIRCIA mattered
Company profile
Riverton Energy Services is a mid-sized U.S. critical infrastructure operator with operations across three states. The company is not a household name, but it supports energy distribution and field services that thousands of customers depend on.
It fits the type of organization CIRCIA is designed for: a real operator with real operational consequences if systems fail.
Technology footprint
Riverton's environment included:
- A corporate network with Microsoft 365, file shares, HR and finance apps
- An operational technology environment supporting field scheduling and dispatch
- Vendor managed remote access for equipment maintenance
- A small internal security team supported by an MSSP
They had baseline security controls, but like many organizations, their strongest controls were in the corporate environment. Their operational workflows depended on a mix of modern SaaS tools and older systems that were still business critical.
Their compliance blind spot
Riverton believed they were "probably covered" by critical infrastructure definitions, but they had not built a formal CIRCIA compliance workflow yet. They had an incident response plan, but it was written more for recovery than for legal reporting timelines.
They had also assumed that if something serious happened, their cyber insurance carrier or outside counsel would "tell them what to do."
That assumption became their first major lesson.
The incident: A supplier compromise becomes a business disruption
Day 1, 6:40 AM: First signs
A field operations supervisor reported that dispatch screens were loading slowly and then failing. It looked like a routine outage at first.
Within an hour, the IT helpdesk logged multiple reports:
- Users unable to access a scheduling system
- Unusual account lockouts
- A spike in failed login attempts from unfamiliar IP addresses
The on-call engineer escalated to the security lead, who initiated an incident bridge.
Day 1, 8:05 AM: Security notices abnormal access
The MSSP reported suspicious activity:
- Successful logins to a privileged account outside normal hours
- Access to a file server containing operational templates
- Unusual PowerShell execution on two endpoints used by dispatch staff
The first question leadership asked was predictable: "Are we hacked?"
The honest answer was: "We do not know yet. But we have evidence of unauthorized activity."
That sentence matters under incident reporting law because it forces a decision point. Under CIRCIA, what matters is when you "reasonably believe" a substantial incident occurred, not when the forensic report is complete.
Day 1, 9:20 AM: Ransomware note appears
A dispatch workstation displayed a ransom note. Several shared folders were no longer accessible.
Riverton's initial assessment shifted from "suspicious access" to "active business disruption."
The company did three things quickly:
- Isolated affected endpoints
- Disabled remote access pathways while they investigated
- Engaged outside incident response support through their insurance panel
At this point, the incident became a high-likelihood candidate for cyber incident reporting requirements under CIRCIA.
The compliance decision: When does the 72-hour clock start?
The internal debate
Their legal counsel asked a critical question: "When do we think we had reasonable belief this was a substantial cyber incident?"
Operations argued it started when dispatch failed, since that impacted service delivery.
IT argued it started when the ransom note appeared, because that confirmed malicious action.
Security argued it started when the MSSP reported unauthorized privileged access, because that was evidence of compromise.
The decision they made shaped everything that followed.
The defensible approach they chose
They documented a timeline of facts and chose the "reasonable belief" start time as 8:05 AM, when the MSSP confirmed anomalous privileged access tied to operational disruption signals.
Why?
- It was the first documented moment of credible compromise indicators
- It was captured in a ticketing system and incident bridge logs
- It was defensible to an outside reviewer
This documentation became vital later. The goal is not to pick the earliest possible time out of fear. The goal is to pick a defensible time backed by evidence.
That decision started their 72-hour incident report clock.
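To make the clock concrete, here is a minimal sketch, in Python, of how a team could pin both CIRCIA deadlines to documented timestamps. The date and function names are hypothetical; the point is that the 72-hour initial report, and the separate 24-hour ransom payment report if a payment is ever made, each hang off a single, evidenced moment in time.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical timestamp for illustration. CIRCIA's clocks run from the moment of
# "reasonable belief" (72-hour initial report) and from any ransom payment
# (separate 24-hour report).
REASONABLE_BELIEF = datetime(2025, 3, 11, 8, 5, tzinfo=timezone.utc)  # 8:05 AM: MSSP confirms privileged access

def report_deadlines(reasonable_belief, ransom_paid_at=None):
    """Return the reporting deadlines implied by the documented timestamps."""
    deadlines = {"initial_report_due": reasonable_belief + timedelta(hours=72)}
    if ransom_paid_at is not None:
        deadlines["ransom_payment_report_due"] = ransom_paid_at + timedelta(hours=24)
    return deadlines

print(report_deadlines(REASONABLE_BELIEF))
# {'initial_report_due': datetime.datetime(2025, 3, 14, 8, 5, tzinfo=datetime.timezone.utc)}
```

The value of writing it down this way is not the arithmetic. It is that the input timestamp has to come from a documented source, which is exactly what a reviewer will ask for later.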
What they did in the first 12 hours
Stabilize operations without hiding facts
Operations needed dispatch restored. Security needed containment.
Riverton split workstreams:
- Workstream A: containment and investigation
- Workstream B: operational continuity and manual fallback
Dispatch moved to manual processes for critical jobs. They prioritized safety-related service calls and delayed non-urgent work.
Stop the bleeding
The incident response team identified a likely initial access vector:
- A vendor remote access account whose credentials had not been rotated recently
- Weak MFA enforcement for one legacy access pathway
They disabled the vendor account and blocked the associated access method while they coordinated with the vendor's security team.
Preserve evidence
One mistake organizations make is wiping machines too quickly. Riverton took images of:
- Two affected workstations
- A file server that showed encryption activity
- Authentication logs for the suspected privileged account
Evidence preservation mattered because it supported reporting accuracy and made later statements easier to defend.
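One lightweight habit that supports this: hash and log each preserved artifact as it is collected. The sketch below is a hypothetical chain-of-custody helper; the file paths and ledger name are assumptions, not part of Riverton's actual tooling.

```python
import hashlib
import json
from datetime import datetime, timezone

def sha256_file(path, chunk_size=1024 * 1024):
    """Hash a large artifact (disk image, log export) without loading it all into memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def record_evidence(path, collected_by, note, ledger="chain_of_custody.jsonl"):
    """Append a chain-of-custody entry for a preserved artifact."""
    entry = {
        "artifact": path,
        "sha256": sha256_file(path),
        "collected_by": collected_by,
        "collected_at": datetime.now(timezone.utc).isoformat(),
        "note": note,
    }
    with open(ledger, "a") as log:
        log.write(json.dumps(entry) + "\n")
    return entry

# Example (hypothetical path): an image of one affected dispatch workstation
# record_evidence("images/dispatch-ws-01.img", "IR analyst", "workstation that displayed the ransom note")
```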
Build a reporting packet
Even while technical teams worked, counsel instructed a parallel track: build a "CIRCIA reporting packet" containing:
- Known incident timeline
- Systems impacted
- Type of attack suspected
- Operational impact summary
- Mitigations in progress
- Contact points and roles
This packet created momentum. It prevented the common failure mode where reporting is delayed because "we do not know everything yet."
Under cyber incident reporting requirements, you report what you reasonably know, then update later as facts evolve.
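A packet like this can be as simple as a shared template that someone owns from hour one. The sketch below shows one hypothetical way to structure it; the field names are ours, not an official CISA form, but they mirror the contents listed above.

```python
from dataclasses import dataclass, field, asdict
import json

@dataclass
class CirciaReportingPacket:
    """Illustrative internal template for a CIRCIA reporting packet."""
    incident_timeline: list = field(default_factory=list)   # timestamped facts, oldest first
    systems_impacted: list = field(default_factory=list)
    suspected_attack_type: str = "under investigation"
    operational_impact: str = ""
    mitigations_in_progress: list = field(default_factory=list)
    contacts: dict = field(default_factory=dict)             # role -> name, phone, email
    open_questions: list = field(default_factory=list)       # facts still unknown

packet = CirciaReportingPacket(
    incident_timeline=["08:05 MSSP confirms anomalous privileged access",
                       "09:20 ransom note observed on dispatch workstation"],
    systems_impacted=["dispatch and scheduling", "shared file server"],
    suspected_attack_type="ransomware, suspected entry via vendor remote access (hypothesis)",
    operational_impact="dispatch on manual fallback; safety-related calls prioritized",
    mitigations_in_progress=["vendor account disabled", "affected endpoints isolated"],
    contacts={"incident commander": "security lead", "legal": "outside counsel"},
    open_questions=["data exfiltration?", "full scope of lateral movement?"],
)
print(json.dumps(asdict(packet), indent=2))
```

Keeping "open questions" as an explicit field is deliberate: it makes the gap between known and unknown visible instead of letting it stall the report.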
The reporting sprint: Writing a report while the incident is active
What information they had by hour 24
By the next morning, they had:
- Confirmation of ransomware encryption on a file server and multiple endpoints
- Evidence of unauthorized access to a privileged account
- A working theory that initial access came through a vendor remote access pathway
- Documented operational disruption including delayed dispatch and manual fallback
They did not yet know:
- Whether data exfiltration occurred
- The full scope of lateral movement
- The threat actor identity
This is a common reality. Waiting for "perfect" answers is how companies miss deadlines.
Drafting the initial report
Their first draft focused on facts, not speculation.
It covered:
- The incident date and time, with the documented "reasonable belief" marker
- The nature of the incident as ransomware with confirmed encryption activity
- The business functions affected, especially dispatch and scheduling
- Early containment steps, including disabled access pathways and segmented systems
- What information remained under investigation
They also documented the internal decision process that led to the 72-hour clock start time.
Why their writing style mattered
Their counsel insisted on three rules:
- Use plain language
- Separate facts from hypotheses
- Avoid absolute statements unless proven
This reduced the risk of contradictions later. Many organizations create compliance problems by writing "we are certain" too early, then reversing that statement in later updates.
The ransom decision: To pay or not to pay
The reality check
On day 2, the threat actor demanded a ransom. Riverton had backups, but restoring would take time and would not guarantee immediate operational stability.
Leadership faced a choice:
- Restore from backups with downtime and uncertainty
- Consider payment to accelerate recovery
Their legal counsel added a second concern:
"If we pay, we trigger the 24-hour ransom payment report requirement tied to CISA ransomware reporting obligations under CIRCIA."
That meant the compliance pressure would increase, not decrease.
What they chose
Riverton did not pay. Their reasoning was specific:
- Backups existed and were not fully compromised
- Paying could still fail
- Payment could introduce additional legal, regulatory, and reputational risk
They documented this decision as part of their governance record.
Even though they did not pay, they treated the ransom demand as a key incident detail to be included in their reporting narrative.
The 72-hour deadline: Submission and what they included
The final 72-hour push
As the 72-hour mark approached, Riverton submitted the initial report with:
- Confirmed incident classification and known scope
- Operational impact and service disruption description
- A preliminary root cause hypothesis tied to vendor access
- Immediate containment steps taken
- Contact information and escalation path
- A statement that investigation was ongoing and updates would be provided
They avoided two common mistakes:
- They did not delay submission because they lacked full certainty
- They did not include technical details that could increase risk if disclosed broadly
What made their report defensible
Three things:
- Timestamped logs supported their incident timeline
- Their report clearly labeled hypotheses as hypotheses
- They kept an internal record of how decisions were made
This is what compliance reviewers look for: reasonable actions under pressure supported by documentation.
After the report: Updates, recovery, and the second wave of risk
Supplemental reporting
Over the next week, Riverton submitted supplemental updates as new facts emerged:
- Confirmation that the attacker attempted to access additional servers
- Evidence that one data repository was accessed but not confirmed exfiltrated
- Strengthened containment steps and system hardening
Supplemental updates helped prevent inconsistencies and showed good-faith compliance behavior.
Operational recovery
Their restoration followed a prioritized plan:
- Dispatch and scheduling first
- Finance and HR next
- Non-critical file shares last
They implemented temporary restrictions:
- Vendor access disabled until new MFA enforcement was implemented
- Privileged account use restricted and monitored
- Remote PowerShell limited
What went wrong: Honest lessons from the incident
Lesson 1: Vendor access is a high risk pathway
Riverton discovered that vendor remote access controls were not aligned with their internal standards. The vendor used a method that bypassed the strongest MFA policies.
They fixed this by:
- Requiring MFA enforcement for every remote access pathway
- Implementing time-limited vendor access approvals
- Reviewing vendor access logs weekly (a short review sketch follows this list)
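The weekly review in particular benefits from being scripted rather than eyeballed. Here is a hypothetical sketch of what that check could look like; the log format, account names, and approval windows are assumptions, not a specific product's schema.

```python
import csv
from datetime import datetime, timezone

# Hypothetical weekly review: flag vendor remote-access events that fall outside
# an approved, time-limited window. CSV columns (account, timestamp, source_ip)
# are illustrative; timestamps are assumed to be ISO 8601 with a timezone offset.
APPROVED_WINDOWS = {
    "vendor-maint-01": (datetime(2025, 3, 18, 13, 0, tzinfo=timezone.utc),
                        datetime(2025, 3, 18, 17, 0, tzinfo=timezone.utc)),
}

def review_vendor_access(log_path):
    flagged = []
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            ts = datetime.fromisoformat(row["timestamp"])
            window = APPROVED_WINDOWS.get(row["account"])
            if window is None or not (window[0] <= ts <= window[1]):
                flagged.append((row["account"], ts, row["source_ip"]))
    return flagged

# flagged = review_vendor_access("vendor_access_week.csv")
# Every flagged row becomes a ticket for the security lead to investigate.
```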
Lesson 2: Reporting cannot be improvised
Before the incident, Riverton did not have a defined CIRCIA workflow. During the incident, they were forced to invent it under pressure.
They formalized a process afterward:
- A decision tree for "reasonable belief" timing
- A reporting packet template
- Pre-assigned roles for drafting and approval
Lesson 3: Legal and security must work as one team
Their fastest progress happened when counsel sat directly in the incident bridge. When legal is treated as an external reviewer, you lose time. When legal is embedded, you gain speed and clarity.
Lesson 4: Communication discipline prevents later contradictions
They kept their external messaging simple and consistent:
- Acknowledge disruption
- Avoid speculation
- Commit to updates
- Document everything internally
This protected them from internal confusion and conflicting statements.
The final outcome: Compliance achieved, trust preserved
Riverton met the 72-hour incident report requirement and avoided payment, so they did not trigger the 24-hour ransom payment report timeline. They restored operations within days, not weeks.
The bigger win was structural:
They left the incident with a compliance record they could defend.
They did not claim perfection. They showed reasonable, documented action under pressure.
That is what regulators care about.
Practical takeaways you can copy into your own program
What a CIRCIA ready organization should have in place
- A written rule for defining "reasonable belief"
- A known list of covered systems and critical functions
- A prebuilt reporting packet template
- Incident response exercises that include reporting deadlines
- Vendor access controls aligned with internal policy
- A documentation habit that captures timelines and decisions
A simple internal standard for incident classification
If an incident does any of the following, treat it as a CIRCIA candidate immediately:
- Disrupts critical operations
- Encrypts or destroys key data
- Impacts safety, service delivery, or infrastructure functions
- Involves ransomware demand or payment consideration
- Shows unauthorized privileged access with confirmed malicious actions
This does not mean every event is reportable. It means you begin the compliance track early so you do not lose time.
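If you want that rule applied consistently at 2 AM, write it down as code as well as policy. The sketch below is a hypothetical version of the triage check; the field names are ours, and the function only decides whether to open the compliance track early, not whether the incident is ultimately reportable.

```python
from dataclasses import dataclass

@dataclass
class Incident:
    """Hypothetical triage flags mirroring the checklist above."""
    disrupts_critical_operations: bool = False
    encrypts_or_destroys_key_data: bool = False
    impacts_safety_or_service_delivery: bool = False
    involves_ransom_demand: bool = False
    unauthorized_privileged_access_with_malicious_action: bool = False

def is_circia_candidate(incident: Incident) -> bool:
    """Return True if any trigger is present, meaning the compliance track should start now."""
    return any([
        incident.disrupts_critical_operations,
        incident.encrypts_or_destroys_key_data,
        incident.impacts_safety_or_service_delivery,
        incident.involves_ransom_demand,
        incident.unauthorized_privileged_access_with_malicious_action,
    ])

# Riverton's day-1 picture would have tripped several of these flags:
print(is_circia_candidate(Incident(disrupts_critical_operations=True,
                                   encrypts_or_destroys_key_data=True)))  # True
```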
Closing
CIRCIA compliance is not just a reporting requirement. It is a discipline. If your organization waits until a crisis to decide what "reportable" means, you will lose time and increase risk. If you build a simple workflow ahead of time, you can meet cyber incident reporting requirements while still focusing on recovery.
This case study shows that compliance is possible without perfect information. It requires documentation, clear roles, and the courage to report facts while continuing to investigate.
References
- Cyber Incident Reporting for Critical Infrastructure Act of 2022 (CIRCIA)
- Cybersecurity and Infrastructure Security Agency (CISA) CIRCIA rulemaking and reporting guidance summaries
- Public sector incident reporting compliance briefings and critical infrastructure reporting expectations