Executive Summary
This case study follows a US professional services firm that thought it was "pretty secure" because it used strong passwords, MFA, and a reputable cloud backup vendor. The problem was not their effort. The problem was their assumption that cloud backup automatically meant privacy and control.
Their leadership asked a simple question after a vendor breach made headlines in their industry:
If someone got into our cloud storage account, could they read our backups?
The firm did not like the answer. Their backup data was encrypted in transit and encrypted at rest, but key ownership and data readability were still not fully under their control. They wanted a model where backups were protected even if cloud storage access was compromised.
They rebuilt their approach around a clear principle:
Encrypt data before it leaves the device and keep the keys under customer control.
They implemented secure cloud backups using RedVault Systems, which encrypts files before upload and stores encrypted objects in Backblaze B2. They paired the technology with a practical operating model: key handling discipline, documented restore testing, and a clean recovery runbook.
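The core pattern is worth seeing concretely. Below is a minimal sketch of encrypt-before-upload, assuming a passphrase-derived AES-GCM key; it illustrates the general client-side model the firm adopted, not RedVault's actual implementation:

```python
import os

from cryptography.hazmat.primitives.ciphers.aead import AESGCM
from cryptography.hazmat.primitives.kdf.scrypt import Scrypt


def encrypt_for_upload(plaintext: bytes, passphrase: bytes) -> bytes:
    """Encrypt locally so only ciphertext ever leaves the device."""
    salt = os.urandom(16)
    # Derive a 256-bit key from the customer-held passphrase.
    key = Scrypt(salt=salt, length=32, n=2**14, r=8, p=1).derive(passphrase)
    nonce = os.urandom(12)
    ciphertext = AESGCM(key).encrypt(nonce, plaintext, None)
    # Salt and nonce are not secret; they travel with the ciphertext.
    return salt + nonce + ciphertext
```

The design choice that matters is that the passphrase never leaves the customer's control: everything stored in the cloud is ciphertext plus non-secret metadata.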
The results were tangible. They reduced exposure to cloud account compromise, improved confidence in their backups, cut restore time uncertainty, and built a more defensible story for audits, client security reviews, and insurance discussions.
This case study shows the real steps they took, where they struggled, and how they made the program work without turning it into an expensive, complicated project.
Organization Profile
The company in this case study was a US-based professional services firm with multiple offices. Their work involved sensitive client documentation, contracts, financial data, and case-related records. They were not a tech company, but they were a data business in the most practical sense.
Key characteristics
- Three offices across two states
- Roughly 140 employees
- A small internal IT team supported by a managed service provider
- A mix of Windows endpoints, on-prem file shares, and a growing set of cloud SaaS tools
- High sensitivity documents tied to client work and regulatory expectations
- Frequent client security questionnaires and vendor due diligence reviews
Their clients included organizations in regulated industries. That mattered because the firm regularly had to prove that it handled data responsibly, especially in storage and backup.
What data mattered most
The "crown jewel" data categories were not only one system. They were spread across multiple workflows:
- Client contract files and amendments
- Financial workpapers and reports
- Identity documents and onboarding packets for certain clients
- Internal HR records and payroll support data
- Email archives used for legal and compliance purposes
- Project documentation with confidential business details
For a professional services firm, losing access to documents is painful. Having those documents exposed is worse.
The Starting Problem
Before this project, the firm already used cloud backups. They had taken reasonable steps. They were not careless. But they had two gaps that became obvious once leadership started asking harder questions.
Gap 1: Cloud backup did not mean cloud safety
They assumed their backups were safe because:
- They used a major cloud service
- Backups were encrypted in transit
- Storage claimed encryption at rest
- They had MFA on their admin account
Those are good practices, but they do not answer the question that matters most during a breach:
If an attacker gains access to your cloud storage account, is your backup data readable?
In their old model, the answer depended on vendor controls and vendor key management. The firm wanted an architecture where they could honestly say:
Even if someone gets into the cloud storage environment, the backups are still unreadable without our keys.
Gap 2: Restore readiness was not predictable
Their IT team had restored files before, but restore testing was irregular. Leadership did not have confidence in:
- How long full folder restoration would take
- What the restore sequence should be during a real incident
- How to verify restored data integrity
- How to avoid restoring infected files
So they had backups, but they lacked predictable recovery behavior. During an incident, unpredictability becomes panic.
What Triggered the Change
The project started after a combination of external pressure and internal fear. This is how most real change happens.
Trigger 1: A competitor's cloud account compromise
A similar firm in their region suffered a cloud account compromise. Attackers accessed cloud storage and downloaded sensitive documents. The story spread quickly because it was exactly the kind of failure clients fear.
Leadership at the firm in this case study asked:
Could this happen to us, and if it did, would our backups be readable?
Trigger 2: Client due diligence became more aggressive
They started receiving deeper security questionnaires from enterprise clients. The questions moved beyond "do you encrypt data?" to:
- Who controls the encryption keys?
- Is encryption performed before data leaves your environment?
- What happens if cloud storage access is compromised?
- Do you test restores and document results?
Those questions forced the firm to either improve its story or accept that it would lose deals.
Trigger 3: Insurance renewal questions
Their cyber insurance renewal asked about:
- Data encryption strategy
- Backup isolation
- Key management practices
- Restore testing frequency
- Documented recovery procedures
This turned backup into a business decision. Backup and recovery was no longer just an IT topic; it had become a revenue protection topic.
The New Requirements
They created requirements that were grounded in real risk rather than buzzwords.
Security requirements
They defined five security requirements:
- Client-side encryption, so files are encrypted before upload
- Customer-controlled keys, so no cloud provider can decrypt the backups without their passphrase
- Integrity verification, so restored files can be validated as complete and unmodified (sketched below)
- Clear audit trails that show when backups ran and what was restored
- A model that supports "least trust" assumptions, meaning even trusted vendors can be breached
This is what the firm called a zero trust backup mindset. Not because they wanted a marketing term, but because they wanted a practical boundary:
Assume cloud access can be compromised and still protect the data.
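To make the integrity-verification requirement concrete, one common approach is a per-file digest manifest captured at backup time and checked after every restore; a minimal sketch, with a hypothetical folder path:

```python
import hashlib
import json
from pathlib import Path


def build_manifest(root: str) -> dict[str, str]:
    """Record a SHA-256 digest per file so restores can be verified later."""
    root_path = Path(root)
    return {
        str(p.relative_to(root_path)): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in root_path.rglob("*")
        if p.is_file()
    }


# Store the manifest with the backup set (encrypted alongside the data).
manifest = build_manifest("/shares/contracts")  # hypothetical path
Path("manifest.json").write_text(json.dumps(manifest, indent=2))
```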
Operational requirements
Operationally, they needed:
- Management simplicity, because their IT team was small
- Predictable costs
- Restore speed for critical folders
- A runbook that any trained admin can follow under pressure
- Restore testing discipline with documentation
They also set one rule that kept the project focused:
Any improvement must reduce incident risk without disrupting daily work.
Why They Chose RedVault
The firm shortlisted a few approaches, but they chose RedVault for one core reason: architectural clarity.
They wanted to be able to explain, in plain English, how backups are protected. RedVault's approach gave them that:
Data is encrypted before upload and the keys are controlled by the customer, not by storage access.
They also wanted to use durable cloud storage as the destination, and RedVault's use of Backblaze B2 for storing encrypted objects aligned with their desire for reliable storage without giving up encryption control.
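Because Backblaze B2 exposes an S3-compatible API, the destination side of this model is deliberately unremarkable: the storage service only ever receives ciphertext. A sketch with placeholder endpoint, bucket, and credentials (this illustrates the pattern, not how RedVault itself talks to B2):

```python
import boto3

# Backblaze B2 exposes an S3-compatible API. The endpoint region, bucket name,
# and credentials below are placeholders, not the firm's real values.
s3 = boto3.client(
    "s3",
    endpoint_url="https://s3.us-west-004.backblazeb2.com",
    aws_access_key_id="B2_KEY_ID",
    aws_secret_access_key="B2_APPLICATION_KEY",
)

# The object was encrypted before this call; the bucket only ever sees ciphertext.
with open("backup-0001.enc", "rb") as f:
    s3.put_object(Bucket="example-backups", Key="tier1/backup-0001.enc", Body=f)
```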
Leadership liked that the security story was simple and defensible:
If cloud storage access is compromised, the attacker still cannot read the backups without the customer-controlled key.
That sentence became a core part of their client security review responses.
Implementation Plan
They rolled out the new backup and recovery approach in three phases, intentionally avoiding a "big bang" migration.
Phase 1: Inventory and classification
They started by classifying data by business impact and sensitivity.
They created three tiers.
Tier 1: High sensitivity, high urgency
- Client contract repositories
- Active workpapers
- Executive and finance documentation
- Email archives tied to client commitments
Tier 2: Medium sensitivity, operational continuity
- HR and internal policy records
- Department shared folders
- Templates and internal process documentation
Tier 3: Lower urgency
- Archives older than a certain threshold
- Low usage departmental folders
- Historical project documentation
They also mapped where these folders lived:
- On-prem file shares
- Local devices for certain teams
- Cloud collaboration spaces
- Email and attachments
This mapping mattered because you cannot protect what you do not scope properly. They discovered data sprawl in places IT did not expect, especially on laptops used by senior staff.
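A tier map like this is easiest to keep honest when it lives in a machine-readable form that both the backup jobs and the restore drills read; a minimal sketch, with illustrative paths and cadences:

```python
# Hypothetical tier map: folder paths, backup frequency, and restore-test cadence.
TIERS = {
    "tier1": {
        "paths": ["/shares/contracts", "/shares/workpapers", "/shares/finance"],
        "backup_interval_hours": 4,
        "restore_test": "monthly",
    },
    "tier2": {
        "paths": ["/shares/hr", "/shares/departments"],
        "backup_interval_hours": 24,
        "restore_test": "quarterly",
    },
    "tier3": {
        "paths": ["/archives"],
        "backup_interval_hours": 168,
        "restore_test": "semiannual",
    },
}
```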
Phase 2: Encryption-first deployment and key discipline
They deployed RedVault in a structured way:
They started with Tier 1 repositories because those represented the highest risk and the highest value.
The most important operational change in this phase was not software installation. It was key discipline.
Key handling rules they adopted
They implemented a key handling policy that covered:
- Who can set or change the encryption passphrase
- How the passphrase is stored securely with restricted access
- How emergency access is approved
- What happens if the primary admin is unavailable
- How they test key accessibility during restore drills
They used a dual-control approach: two separate roles had to participate in the emergency access process, so no single person became a bottleneck or a single point of failure.
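One way to implement dual control is to split the key material into shares so that neither custodian alone holds the secret. The case study does not specify the firm's exact mechanism; the sketch below uses a simple two-share XOR split (a real deployment might prefer Shamir secret sharing when more than two custodians are involved):

```python
import secrets


def split_key(key: bytes) -> tuple[bytes, bytes]:
    """Split a key into two shares; neither share alone reveals anything."""
    share_a = secrets.token_bytes(len(key))
    share_b = bytes(x ^ y for x, y in zip(share_a, key))
    return share_a, share_b


def combine_shares(share_a: bytes, share_b: bytes) -> bytes:
    """Both custodians must contribute their share to reconstruct the key."""
    return bytes(x ^ y for x, y in zip(share_a, share_b))
```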
Leadership supported this because they understood the risk:
If the keys are lost, encrypted backups cannot be recovered.
Key discipline became part of their business continuity planning, not a technical afterthought.
Phase 3: Restore testing and recovery runbook
The firm learned quickly that secure cloud backups are only half the story. The other half is predictable restoration.
They built a restore testing cadence:
- Monthly Tier 1 restore test
- Quarterly larger restoration simulation
- Written notes captured after each test
- Measured restore time and friction points
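A cadence like this is easier to sustain when every drill is timed and logged the same way; a sketch, with a hypothetical restore command standing in for whatever tooling is actually in use:

```python
import csv
import subprocess
import time
from datetime import datetime, timezone

# Hypothetical restore command; substitute the actual CLI of your backup tool.
RESTORE_CMD = ["restore-tool", "--set", "tier1", "--dest", "/restore-test/tier1"]


def run_restore_drill(log_path: str = "restore_drills.csv") -> None:
    """Time a test restore and append the result to a running drill log."""
    start = time.monotonic()
    result = subprocess.run(RESTORE_CMD, capture_output=True, text=True)
    elapsed_min = (time.monotonic() - start) / 60
    with open(log_path, "a", newline="") as f:
        csv.writer(f).writerow([
            datetime.now(timezone.utc).isoformat(),
            f"{elapsed_min:.1f}",
            result.returncode,
        ])
```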
They created a recovery runbook written in plain language. It included:
- How to identify which data sets are impacted during an incident
- How to select a safe restore point and avoid restoring infected files
- How to restore critical folders in the right sequence
- How to validate restored files for completeness
- How to communicate status to leadership without guessing
They also created a simple rule:
During an incident, we restore what the business needs first, not everything at once.
That rule reduced restore chaos when a real incident arrived later.
The Real Test: A Cloud Account Scare That Could Have Become a Breach
Three months after rollout, they experienced an incident that proved why encryption before upload matters.
The event
Their IT team received alerts of unusual login attempts to their cloud admin environment. Shortly after, a user reported receiving MFA prompts they did not initiate.
This looked like a credential attack, potentially involving:
- Phishing
- Session token theft
- Credential stuffing
- MFA fatigue tactics
The firm reacted quickly.
Containment
They took immediate containment actions:
- Forced password resets for key accounts
- Revoked active sessions
- Hardened admin access policies
- Reviewed admin logs for suspicious activity
- Paused certain nonessential integrations temporarily
They concluded:
The attacker did not gain full administrative control, but the incident was close enough to matter.
This was not a ransomware event. It was a near-breach. And it triggered the leadership question again:
If the attacker had gained access to storage, would our backups be readable?
Their answer, because of client-side encryption, became:
No, not without our keys.
That answer changed the tone of the incident response. People still took it seriously, but they were not terrified of catastrophic exposure of backups.
The Second Test: A Ransomware Event in a Department Share
Six months after rollout, the firm faced the event they feared most: ransomware encryption of shared folders.
Early symptoms
A staff member reported strange file extensions in a department shared drive. Another user said documents would not open and their machine was running unusually slowly.
The helpdesk saw the pattern quickly:
This was not a single corrupted file. It looked like encryption activity.
They launched an incident bridge with security, IT, leadership, and their managed service partner.
Containment actions
They acted fast:
- Isolated affected endpoints
- Disabled a compromised user account suspected of spreading the infection
- Restricted access to impacted shares
- Preserved logs and evidence for investigation
- Paused nonessential remote access to reduce lateral movement risk
They contained the spread to one department share and a small subset of endpoints. That containment was a win, but the share still needed restoration.
Recovery Execution
Because they had a runbook and restore testing discipline, they executed recovery without improvisation.
Step 1: Decide what must be restored first
They did not restore the entire environment. They restored what the business needed for the next 24 hours.
Priority sets included:
- Active client workpapers for current engagements
- Templates and working documents needed for deliverables
- Shared folders supporting time-sensitive deadlines
They intentionally delayed restoration of nonessential archives.
Step 2: Select a safe restore point
They selected a restore point that predated the confirmed encryption activity. The runbook forced them to verify:
- The restore point is earlier than the first known malicious change
- The data set appears consistent and complete
- The restoration will not reintroduce infected files
This is a key step many organizations skip when stressed.
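The selection rule is simple enough to encode, which is part of why the runbook could enforce it under stress; a sketch, assuming the investigation has produced snapshot timestamps and a first-known-bad time:

```python
from datetime import datetime


def select_safe_restore_point(snapshot_times: list[datetime],
                              first_malicious_change: datetime) -> datetime:
    """Pick the newest restore point that predates the first known malicious change."""
    safe = [t for t in snapshot_times if t < first_malicious_change]
    if not safe:
        raise ValueError("No restore point predates the malicious activity")
    return max(safe)
```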
Step 3: Restore and validate
They restored the impacted share and validated:
- Folder structure completeness
- Key file readability
- Client deliverable templates opening correctly
- A sample of documents across multiple teams
- Version history alignment for key workpapers
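This validation pairs naturally with a backup-time digest manifest like the one sketched earlier; a minimal check of restored files against it (manifest format hypothetical):

```python
import hashlib
import json
from pathlib import Path


def verify_restore(root: str, manifest_path: str) -> list[str]:
    """Compare restored files against the backup-time manifest; return mismatches."""
    expected = json.loads(Path(manifest_path).read_text())
    problems = []
    for rel_path, digest in expected.items():
        restored = Path(root) / rel_path
        if not restored.is_file():
            problems.append(f"missing: {rel_path}")
        elif hashlib.sha256(restored.read_bytes()).hexdigest() != digest:
            problems.append(f"modified: {rel_path}")
    return problems
```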
They communicated restoration status to leadership in a practical way:
What is restored, what remains pending, what the business can do now.
Step 4: Resume operations with caution
Once the share was restored, they re-enabled access gradually, ensuring that compromised endpoints were cleaned before being allowed back.
They avoided the common mistake of restoring data and reconnecting infected systems immediately.
Outcome
The outcome was exactly what leadership wanted when they started the project.
They achieved:
- Fast restoration of critical shared documents
- No ransom payment
- No prolonged business shutdown
- Cleaner documentation of what happened and what they did
- Higher confidence in both security and recovery readiness
They also achieved something less obvious:
They removed the psychological pressure that drives bad decisions during incidents.
When teams believe recovery is possible, they act differently. They stay calm. They follow procedures. They avoid rash shortcuts.
What They Learned
The firm learned that "cloud backup" is not a single concept. There are levels of control.
Lesson 1: Cloud encryption is not always key control
Encryption at rest does not necessarily mean the customer controls the keys. An organization that needs strong defensibility should understand where encryption occurs and who holds the keys.
Their move to client-side encryption gave them a boundary that did not depend on the storage provider's access controls.
Lesson 2: Backups are not recovery without testing
Before this project, they had backups. After the project, they had backup and disaster recovery discipline.
Restore testing turned uncertainty into predictable timelines.
Lesson 3: Runbooks reduce panic
During the ransomware event, leadership had fewer emotional debates because the plan already existed and had been tested.
Lesson 4: "Secure" must be explainable
This firm won client trust because they could explain their backup security model in plain language:
Encrypted before upload, keys controlled by us, backups unreadable without the passphrase.
That clarity helped them in sales and in compliance conversations.
Program Improvements After the Incident
After the ransomware event, they improved a few things based on real friction points.
Improved identity controls
They tightened access review discipline, reduced over-permissioned accounts, and strengthened account monitoring.
Reduced data sprawl
They implemented a policy that sensitive client documents should not live on unmanaged endpoints. They migrated scattered local storage into controlled repositories and backed them up consistently.
Increased restore test cadence temporarily
They increased restore testing frequency for the most critical client repositories until they were fully confident in both tooling and process.
Better communication templates
They refined leadership communication templates for incidents:
What happened, what we know, what we are doing, what is restored, what comes next.
Clear language reduced stress and prevented speculation.
Key Takeaways for US Businesses
If you want backups that remain protected even when cloud accounts are compromised, focus on encryption architecture, not only on cloud branding.
A defensible approach includes:
- Client-side encryption before upload
- Customer-controlled keys with disciplined handling
- A restore testing schedule with documented results
- A recovery runbook that ties restoration to business priorities
- Validation steps to ensure restored data is correct and complete
- Clear internal communication that avoids guessing during incidents
References
- RedVault Systems product and security feature descriptions, including encryption before upload, customer-controlled key model, and integrity verification concepts
- Backblaze B2 storage concepts and general guidance around secure object storage and client-side encryption approaches
- Industry-standard ransomware response and recovery best practices drawn from widely used incident response playbooks and business continuity guidance documents