Executive Summary
This case study follows a US-based SaaS company that handled sensitive customer files and learned a hard truth: "cloud storage is secure" is not a complete security strategy. The company already had strong authentication, role-based access, and standard encryption measures. But after a near-breach event and an uncomfortable customer security review, leadership realized their biggest gap was not in firewalls or passwords. It was in data readability.
They needed to answer a simple question without hesitation:
If someone gets access to our cloud storage environment, can they read our stored files or backup data?
Their previous architecture relied on cloud access controls and provider-side encryption, which are important but not enough when your threat model includes stolen credentials, compromised admin accounts, or third-party misconfiguration.
So they rebuilt the program around a principle that made everyone more confident:
Encrypt data before it leaves your environment and keep the keys under your control.
They implemented client-side encryption using RedVault Systems for backup and recovery workflows, storing encrypted objects in Backblaze B2 while keeping encryption keys under company control. They paired the tooling with key handling discipline, restore testing, and a formal "data readability risk" policy.
The result was not just a stronger technical posture. It improved customer trust, reduced the blast radius of cloud account compromise, simplified security reviews, and made incident response calmer because the most sensitive question had a solid answer:
Even if storage access is compromised, data remains unreadable without our keys.
This case study covers the company's starting point, what triggered the change, how they implemented encrypted cloud storage with client-side encryption, what went wrong during rollout, and the outcomes that mattered.
Organization Profile
The organization was a mid-market US SaaS company serving customers across multiple industries. Their platform processed and stored customer-uploaded files that often contained sensitive business information. The company was growing quickly, which meant their compliance and customer security expectations grew faster than their internal processes.
Key characteristics
- A US-based SaaS platform with customers nationwide
- Approximately 110 employees
- A lean IT and security team, with engineering owning much of the infrastructure
- A mix of cloud-native systems, SaaS services, and object storage for files
- Frequent customer security questionnaires, especially from enterprise accounts
- A roadmap that included expanding into more regulated customer segments
What data mattered most
The company's risk was not limited to their application data. It extended to the documents customers uploaded and relied on daily.
High-value data categories included:
- Customer contracts, vendor agreements, and internal finance documentation
- Identity and onboarding documents for certain customer workflows
- Internal audit files and policy documentation
- Attachments in support tickets and operational workflows
- Exports used for reporting and business decisions
These files were highly attractive to attackers for two reasons:
They are useful for extortion and leverage, and they often contain sensitive details that can be embarrassing or damaging if leaked.
The Starting Point
Before the change, the company's storage and backup model looked like what many cloud-first firms use.
They had:
- Strong authentication and MFA for admin accounts
- Role-based access in their internal systems
- Encryption in transit for data movement
- Provider-side encryption at rest in cloud storage
- A backup system that copied key data sets into cloud storage for retention
On paper, it seemed like a mature posture.
The problem was that their model had one critical weakness:
Data readability still depended heavily on cloud access controls.
If cloud credentials were stolen, an attacker would not just reach the stored data. They would be able to read it.
Leadership wanted a clearer boundary. They wanted a model where cloud compromise did not automatically equal data exposure.
The Trigger: A Near-Breach That Changed Leadership's Risk Appetite
The change did not start from theory. It started from two real-world moments that forced leadership to confront uncomfortable questions.
Trigger 1: A suspicious admin access event
One Friday evening, the company's monitoring flagged unusual activity:
Repeated failed logins to an admin account, followed by a successful login from an unfamiliar location.
They responded quickly:
They forced password resets, revoked sessions, tightened conditional access rules, and reviewed logs.
They ultimately concluded that the attacker did not fully compromise their environment. But the event forced a serious conversation.
Leadership asked:
If the attacker had gotten into storage, what would they have seen?
The security team could not answer with full confidence because:
Their protection relied on access controls and provider encryption, but not on customer-controlled encryption keys.
Trigger 2: A customer security review that went deeper than usual
A potential enterprise customer sent a due diligence questionnaire that included a simple but direct section:
- Who controls encryption keys for stored customer files and backups?
- Is encryption performed before upload into cloud storage?
- If cloud storage credentials are compromised, is stored data readable?
- Can you demonstrate restore testing discipline and recovery runbooks?
The company's answers were not terrible, but they were not strong enough to feel confident. The responses sounded like:
We use encryption at rest and encryption in transit. We restrict access with IAM and MFA.
The customer pushed back politely:
That explains access controls. It does not explain data readability if access controls fail.
That was the turning point.
The Decision: Treat Data Readability as a First-Class Risk
Most security programs focus on preventing access. That is essential. But this company realized it also needed to reduce the harm if access is obtained.
They defined a new internal risk category:
Data readability risk.
They wrote a clear statement:
If cloud storage access is compromised, our stored backups and sensitive documents must remain unreadable without company-controlled keys.
This became a policy principle that guided the project.
It also created alignment across teams:
Security, engineering, leadership, and customer success all agreed on the goal.
Requirements for the New Approach
They wrote requirements in plain language to avoid vendor-driven confusion.
Security requirements
- Client-side encryption for backups and sensitive file sets
- Keys must be controlled by the company, not by cloud storage access
- Encryption must happen before data is stored in cloud object storage
- Restore workflows must include integrity validation and version handling
- Audit trails must prove backups ran and restores were tested
Operational requirements
- The solution must be manageable by a lean team
- Key handling must not become a single-person dependency
- Recovery must be predictable and measurable
- The solution must scale as storage volume grows
- The rollout must not interrupt customer-facing operations
Customer and compliance requirements
They needed a security story that customers could understand and trust:
Backups are encrypted before upload, and only we can decrypt them.
They also needed evidence artifacts:
Key handling procedures, restore test logs, and incident response tie-ins.
Why They Selected RedVault
They evaluated approaches that offered encryption, but many still relied on provider key management or server-side encryption options that did not fully answer the "readability" question.
They chose RedVault because it fit the design they wanted:
Encrypt before upload, keep keys under customer control, and store encrypted objects in cloud storage.
They also liked that the model encouraged operational discipline:
Key handling, restore testing, and recovery runbooks are not optional if you are serious about encryption and recovery.
This selection was not only technical. It was strategic. It gave leadership a clear risk boundary they could explain to customers and insurers.
Implementation Plan
They implemented in phases to reduce operational risk.
Phase 1: Data classification and scope mapping
They started by mapping what data sets required the strongest protection.
They created three tiers:
Tier 1: Highly sensitive and high consequence
- Customer-uploaded documents with sensitive business details
- Attachments that could contain identity or regulated information
- Backup sets tied to core customer data and platform operations
Tier 2: Sensitive but less catastrophic
- Internal policy documentation
- Operational exports used for reporting
- Support attachments that are lower sensitivity
Tier 3: Lower sensitivity or easily reconstructable
- Public assets
- Noncritical logs and temporary caches
- Archives with low business impact
They also mapped where these data sets lived:
- Object storage buckets
- Application databases
- Shared repositories used by internal teams
- Backup locations
This mapping revealed a common issue:
Sensitive data sprawl.
Some files were being stored in places that were not intended to be long-term storage, and those places were not being backed up consistently.
They tightened storage discipline as part of the scope work.
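To show how a tier map like this can drive consistent handling, here is a minimal sketch in Python. The frequencies, retention periods, and data set names are illustrative assumptions, not the company's actual policy; they simply show how classification can feed backup tooling.

```python
# Illustrative tier map: values below are assumptions, not the company's actual policy.
from dataclasses import dataclass

@dataclass(frozen=True)
class TierPolicy:
    backup_frequency: str      # how often the set is backed up
    retention_days: int        # how long encrypted versions are kept
    restore_test: str          # how often restores are exercised

TIER_POLICIES = {
    "tier1": TierPolicy(backup_frequency="daily",  retention_days=365, restore_test="monthly"),
    "tier2": TierPolicy(backup_frequency="daily",  retention_days=180, restore_test="quarterly"),
    "tier3": TierPolicy(backup_frequency="weekly", retention_days=90,  restore_test="annually"),
}

DATASET_TIERS = {
    "customer_uploaded_documents": "tier1",
    "platform_backup_sets": "tier1",
    "internal_policy_docs": "tier2",
    "reporting_exports": "tier2",
    "noncritical_logs": "tier3",
}

def policy_for(dataset: str) -> TierPolicy:
    """Look up the handling rules a data set inherits from its tier."""
    return TIER_POLICIES[DATASET_TIERS[dataset]]
```

A single reviewable mapping like this is one way to keep a new data set from landing in storage without an assigned tier.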
Phase 2: Encryption-first rollout for backup sets
They did not encrypt everything at once. They started with backups because backups represent the "worst-case exposure" scenario.
If an attacker can read backups, they can often reconstruct everything.
They implemented a backup flow that enforced:
- Encryption before upload
- Consistent backup scheduling
- Version handling and retention rules
- Integrity validation as part of restores
They also ensured that backups were not only taken but tested.
This is where many teams fail: they build a secure backup system and never prove it can restore under stress.
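To make "encryption before upload" concrete, here is a minimal sketch of the principle, assuming Python, the cryptography library's AES-GCM primitive, and Backblaze B2's S3-compatible API via boto3. The endpoint, bucket, environment variables, and function name are assumptions for illustration; the company's actual backup flow used RedVault's own tooling, and this sketch only shows the underlying idea.

```python
# Minimal sketch: encrypt a backup file locally, then upload only ciphertext.
# Endpoint, bucket, and env var names are placeholders, not the company's config.
import hashlib
import os

import boto3
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

B2_ENDPOINT = "https://s3.us-west-004.backblazeb2.com"  # example B2 S3-compatible endpoint
BUCKET = "encrypted-backups"                            # hypothetical bucket name

def encrypt_and_upload(path: str, key: bytes) -> dict:
    """Encrypt `path` with AES-GCM (32-byte key for AES-256) before upload;
    return metadata kept locally for later restore tests."""
    plaintext = open(path, "rb").read()
    sha256 = hashlib.sha256(plaintext).hexdigest()      # integrity reference for restores
    nonce = os.urandom(12)                              # unique nonce per object
    ciphertext = AESGCM(key).encrypt(nonce, plaintext, None)

    s3 = boto3.client(
        "s3",
        endpoint_url=B2_ENDPOINT,
        aws_access_key_id=os.environ["B2_KEY_ID"],
        aws_secret_access_key=os.environ["B2_APP_KEY"],
    )
    object_key = f"backups/{os.path.basename(path)}.enc"
    s3.put_object(Bucket=BUCKET, Key=object_key, Body=nonce + ciphertext)

    # Only the nonce and ciphertext leave the environment; key and hash stay local.
    return {"object_key": object_key, "sha256": sha256}
```

The important property is that only ciphertext ever reaches the bucket; the key and the plaintext hash stay inside the company's environment, which is what the restore drills later verify.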
Phase 3: Key handling discipline and operational safety
This was the most important phase, because client-side encryption is only as strong as the key handling around it.
They created a key handling policy with:
- Two-person approval for key changes
- Secure storage of recovery information
- Emergency recovery access rules
- Quarterly verification that keys work in real restore scenarios
- A separation between backup administration and key custody
They also created a "key loss" incident scenario and practiced it.
The goal was simple:
Never discover key handling weakness during a real crisis.
They documented an owner and a named backup for every critical role:
Primary key custodian, backup custodian, and executive escalation contact.
They treated keys like a business continuity asset.
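One way to keep the quarterly key verification honest is a standing canary object: a small blob encrypted under the production scheme whose contents are known in advance. A minimal sketch, assuming the same AES-GCM construction as the upload sketch above; the canary marker and function name are hypothetical and not part of RedVault or the company's documented procedure.

```python
# Minimal sketch of a quarterly key-verification drill: decrypt a known canary
# object and confirm the expected marker. Marker and names are hypothetical.
from cryptography.exceptions import InvalidTag
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

EXPECTED_MARKER = b"key-verification-canary-v1"

def verify_key(key: bytes, canary_blob: bytes) -> bool:
    """Return True if `key` decrypts the stored canary (nonce || ciphertext)."""
    nonce, ciphertext = canary_blob[:12], canary_blob[12:]
    try:
        return AESGCM(key).decrypt(nonce, ciphertext, None) == EXPECTED_MARKER
    except InvalidTag:
        # Wrong key or corrupted canary: fail the drill loudly, before a real crisis.
        return False
```

Run with the two-person discipline described above, a drill like this produces a pass or fail result that can go straight into the audit trail.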
Phase 4: Restore testing and measurable recovery targets
They created recovery targets that leadership could understand:
- What is the maximum acceptable data loss?
- How quickly do we need core data sets back?
They defined two internal targets:
A short window for critical recovery actions, and a longer window for full stabilization.
Then they tested.
Their restore testing cadence included:
- Monthly restore tests for Tier 1 backup sets
- Quarterly full recovery simulation that includes validation and documentation
- A checklist for verifying restored file integrity and completeness
- Time measurement for each recovery step
- Post-test reviews and runbook updates
They did not chase perfection. They chased repeatability.
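Repeatability is easier when the drill itself is scripted and its results are logged the same way every time. Here is a minimal sketch that restores one object, validates its hash, and records timing, reusing the metadata from the upload sketch above. The log format and names are assumptions for illustration, not RedVault's actual restore workflow.

```python
# Minimal sketch of a restore drill: fetch, decrypt, verify integrity, measure time.
# Builds on the encrypt_and_upload sketch above; all names are illustrative.
import hashlib
import json
import time
from datetime import datetime, timezone

from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def restore_drill(s3, bucket: str, object_key: str, key: bytes, expected_sha256: str) -> dict:
    """Restore one object, validate its hash, and return a log entry for the audit trail."""
    started = time.monotonic()
    blob = s3.get_object(Bucket=bucket, Key=object_key)["Body"].read()
    nonce, ciphertext = blob[:12], blob[12:]
    plaintext = AESGCM(key).decrypt(nonce, ciphertext, None)
    ok = hashlib.sha256(plaintext).hexdigest() == expected_sha256

    entry = {
        "object_key": object_key,
        "restored_ok": ok,
        "seconds": round(time.monotonic() - started, 2),
        "tested_at": datetime.now(timezone.utc).isoformat(),
    }
    # Append to a restore-test log that leadership and auditors can review later.
    with open("restore_test_log.jsonl", "a") as log:
        log.write(json.dumps(entry) + "\n")
    return entry
```

Timing each step is what turns "we have backups" into a measurable recovery target.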
The Friction: What Went Wrong During Rollout
No real implementation is smooth. They hit three friction points that taught valuable lessons.
Friction 1: Engineering wanted speed, security wanted discipline
Engineering teams sometimes see encryption as "extra steps" that slow delivery.
Security wanted strict process and testing.
The company resolved this by shifting the framing:
Encryption is not a feature. It is risk control.
Restore testing is not bureaucracy. It is operational readiness.
Once leadership reinforced that message, alignment improved.
Friction 2: Key handling felt inconvenient at first
Two-person approval and quarterly key verification drills felt like "extra work."
Then the team ran a tabletop scenario:
Primary admin unavailable during an incident, key needed for restore.
That scenario made the discipline feel justified immediately.
After that exercise, the team stopped seeing key handling as optional.
Friction 3: Data sprawl created hidden scope
They discovered sensitive file sets outside of intended storage locations.
Some were in local team folders, some in ad hoc SaaS storage, some in old project archives.
They addressed this with a simple policy:
Sensitive file storage must be in approved repositories that are backed up and encrypted consistently.
They also trained teams on how to classify and store data correctly.
The Real Test: Cloud Account Compromise Attempt
A few months after rollout, they experienced a real security incident that validated the architecture.
The incident
They detected suspicious access patterns:
An unusual login attempt sequence, then a successful access to an internal admin panel from an unknown device.
They initiated containment:
Revoked sessions, forced resets, tightened access controls, and reviewed logs.
They could not fully rule out that the attacker accessed some cloud storage controls. The investigation remained cautious.
The key question became:
If storage access was obtained, was data readable?
Because they had implemented encrypted cloud storage with client-side encryption for backups, their answer was straightforward:
Encrypted objects in storage are unreadable without our keys.
That did not remove the need to investigate. But it reduced the worst-case fear and made leadership calmer.
Instead of panic, leadership asked better questions:
- Do we see evidence of data access attempts?
- Do we need to rotate keys?
- Do we need to alert customers based on our evidence?
The incident response stayed disciplined because the architecture removed some of the emotional urgency.
The Second Test: Ransomware in a Department Share
Later, they faced ransomware on a set of internal shared folders used by customer success and finance.
Early symptoms
Users reported files that would not open, unusual extensions, and system slowness.
Security confirmed encryption behavior and initiated containment.
Containment steps
They isolated endpoints, disabled accounts suspected of compromise, restricted access to shared resources, and preserved evidence.
The event affected operations, but it did not compromise the encrypted backup repositories.
Recovery execution
They restored impacted folders from backups.
Because they had restore testing discipline, recovery was structured:
- Identify critical folders first
- Choose safe restore points that predate the ransomware's encryption activity
- Restore and validate
- Bring systems back gradually to avoid reinfection
They validated restoration with checklists:
- Folder completeness
- File readability
- Critical templates and exports opening correctly
- Spot checks across multiple teams
They avoided ransom payment and restored core functions without a prolonged shutdown.
Outcomes: What Changed for the Business
This program delivered outcomes in four areas: risk, resilience, customer trust, and internal confidence.
Outcome 1: Reduced exposure to cloud access compromise
The company could now credibly say:
Even if cloud storage access is compromised, encrypted backups and protected file sets remain unreadable without our keys.
That is the practical value of client-side encryption.
Outcome 2: Stronger recovery posture
They moved from "we have backups" to a true backup and disaster recovery capability.
They could measure restore performance, practice it, and improve it.
This reduced panic during real incidents.
Outcome 3: Better customer trust and easier security reviews
Customer security questionnaires became easier to answer because the company's story was clear and defensible.
They could explain their approach without jargon:
Encrypt before upload. Keys under our control. Restore tested.
That clarity reduced friction in enterprise deals.
Outcome 4: Better internal governance
Leadership gained a reliable view of recovery readiness through:
Restore test logs, recovery time tracking, and runbook updates.
This improved accountability and reduced reliance on informal knowledge.
What They Changed After the Tests
After their incidents and simulations, they improved the program further.
Stronger access controls around backup administration
They tightened administrative access rules and reduced the number of accounts that could manage backup settings.
More frequent restore drills for the most critical data sets
They increased restore test frequency temporarily for Tier 1 sets, then returned to monthly once confidence stabilized.
Improved internal training to reduce data sprawl
They trained staff on storage discipline and reduced "shadow storage" risk.
Updated incident response playbooks
They added a section that ties incident classification to:
Key rotation decisions, restore decisions, and customer communications discipline.
Key Takeaways for US Companies
If you want to reduce the impact of cloud account compromise, you need to reduce data readability risk, not only tighten access controls.
A practical approach includes:
- Client-side encryption before upload
- Keys controlled by the company with disciplined handling
- Restore testing that produces measured recovery expectations
- A recovery runbook that works under stress
- Validation steps so restored files are correct and complete
- Storage discipline to prevent sensitive data sprawl
This is how you build encrypted cloud storage that stays protective even when other layers fail.
References
- RedVault Systems product and security feature descriptions, including encryption before upload, customer-controlled key approach, and integrity verification concepts
- Backblaze B2 object storage concepts and general security guidance around durable cloud storage and client-side encryption approaches
- Common enterprise security review patterns and incident response practices documented in widely used business continuity and ransomware recovery playbooks