Distributed architectures play a critical role, and the more distributed your data environment, the greater the resilience, says Levent Ergin at Informatica from Salesforce.
What the current situation has drawn attention to is the necessity for a shift in mindset. Resilience can no longer be viewed purely through the lens of protecting against infrastructure-level failures, whether that is power, networking, or even regional and geopolitical disruption.
The focus needs to move towards ensuring data can be reliably replicated and recovered, using metadata, lineage, and robust integration pipelines to maintain a strong recovery posture.
In that context, distributed architectures play a critical role. The more distributed your data environment, the greater the resilience. But of course, that resilience inevitably comes with added complexity. Managing data across regions introduces challenges around consistency, governance, and operational control, which many organisations underestimate.
That is why establishing a strong data foundation becomes essential. This means having clear ownership and consistency across master data, robust metadata and lineage to track how data moves, resilient integration pipelines to enable replication, and strong data quality and governance frameworks to ensure trust.
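To make the "strong data foundation" point concrete, here is a minimal sketch of what tracking ownership and lineage for a replicated dataset might look like. The names (Dataset, LineageEvent, the regions and pipeline) are purely illustrative, not any specific Informatica or Salesforce API.

```python
# A minimal, hypothetical lineage record for a replicated dataset.
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class LineageEvent:
    """One hop in a dataset's journey, e.g. replication to another region."""
    source_region: str
    target_region: str
    pipeline: str  # the integration pipeline that moved the data
    occurred_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))


@dataclass
class Dataset:
    """Master-data entity with an explicit owner and a replication history."""
    name: str
    owner: str           # clear ownership is part of the data foundation
    home_region: str
    lineage: list[LineageEvent] = field(default_factory=list)

    def replicate_to(self, region: str, pipeline: str) -> None:
        # Recording the hop keeps lineage queryable during a recovery exercise.
        self.lineage.append(LineageEvent(self.home_region, region, pipeline))


orders = Dataset(name="orders", owner="finance-data-team", home_region="eu-west-1")
orders.replicate_to("eu-central-1", pipeline="orders-cdc-replication")
print([f"{e.source_region} -> {e.target_region}" for e in orders.lineage])
```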
When this foundation is combined with hyperscalers’ large-scale disaster recovery and multi-availability zone and multi-region capabilities, organisations can achieve the recovery point and recovery time objectives needed for true business continuity.
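As a simple illustration of what "achieving recovery point and recovery time objectives" means in practice, the sketch below checks hypothetical measurements against hypothetical targets; in a real environment the figures would come from monitoring and disaster-recovery drill tooling.

```python
# Hypothetical RPO/RTO check with illustrative targets and measurements.
from datetime import timedelta

RPO_TARGET = timedelta(minutes=15)   # maximum tolerable data loss
RTO_TARGET = timedelta(hours=1)      # maximum tolerable time to restore service

measured_replication_lag = timedelta(minutes=4)    # e.g. cross-region replication lag
measured_restore_duration = timedelta(minutes=42)  # e.g. from the last failover drill

rpo_ok = measured_replication_lag <= RPO_TARGET
rto_ok = measured_restore_duration <= RTO_TARGET

print(f"RPO met: {rpo_ok}, RTO met: {rto_ok}")
if not (rpo_ok and rto_ok):
    raise SystemExit("Recovery objectives not met; business continuity at risk")
```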
Ultimately, as organisations design their data architecture and select data management partners, it is prudent to look beyond infrastructure. They need to understand the underlying architectural model, the maturity of disaster recovery capabilities, and how well these align with regulatory and data sovereignty requirements.
Changes to SLAs and best practices
One of the biggest shifts is around responsibility. For a long time, many organisations assumed resilience was largely handled by the cloud provider. Recent events have made it clear that this is not the case and that failover, recovery, and validation sit firmly with the customer. SLAs need to reflect that shared responsibility much more explicitly.
The second change is moving SLAs from static documents to something that is actively tested. It is one thing to define recovery objectives on paper, but unless those scenarios are regularly exercised in real-world conditions, they do not hold much value. Resilience needs to be proven, not assumed.
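One way to exercise a recovery objective rather than assume it is a scheduled drill that times the restore procedure and records whether the target was met. The sketch below is a hypothetical stand-in: the drill function and the simulated restore step represent whatever failover tooling the organisation actually uses.

```python
# A hypothetical recovery drill: time a restore procedure against the SLA target.
import time
from datetime import timedelta
from typing import Callable


def run_recovery_drill(restore: Callable[[], None], rto: timedelta) -> dict:
    """Time a restore procedure and record whether the RTO target was met."""
    start = time.monotonic()
    restore()  # e.g. promote a replica, re-point traffic, rehydrate caches
    elapsed = timedelta(seconds=time.monotonic() - start)
    return {"elapsed": elapsed, "rto": rto, "met": elapsed <= rto}


def simulated_restore() -> None:
    time.sleep(0.5)  # placeholder for the real failover steps


result = run_recovery_drill(simulated_restore, rto=timedelta(hours=1))
print(f"Drill took {result['elapsed']}; RTO met: {result['met']}")
```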
And finally, there is a growing need to prioritise data within SLAs. Uptime alone is not enough. If systems recover but data is incomplete, inconsistent, or delayed, the business is still effectively down. That brings data portability, integrity, and recovery into the centre of SLA design.
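Putting data integrity at the centre of SLA design implies verifying the recovered data itself, not just that systems are back up. A minimal sketch, assuming direct access to both copies, might compare row counts and a content checksum per table; the table names and sample rows here are illustrative only.

```python
# A hypothetical post-failover integrity check: compare primary and recovered copies.
import hashlib


def table_fingerprint(rows: list[tuple]) -> tuple[int, str]:
    """Return (row count, order-independent checksum) for a table's rows."""
    digest = hashlib.sha256()
    for row in sorted(repr(r) for r in rows):
        digest.update(row.encode())
    return len(rows), digest.hexdigest()


primary = {"orders": [(1, "paid"), (2, "shipped")]}
recovered = {"orders": [(1, "paid"), (2, "shipped")]}

for table in primary:
    assert table_fingerprint(primary[table]) == table_fingerprint(recovered[table]), (
        f"{table}: recovered data is incomplete or inconsistent"
    )
print("Recovered data matches the primary for all checked tables")
```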
In many ways, the fundamental shift needs to be from measuring infrastructure availability to ensuring the business can actually operate, with trusted data, no matter where workloads are running. While at the heart of it, these are technical issues to be addressed, they need to be approached with a business lens.
- How should regional enterprises operationalise and migrate data into multi-availability, multi-region cloud zones?
- What are the benefits, drawbacks, and challenges of using multi-availability and multi-region cloud zones?
- What are the changes required to SLAs while moving operations into multi-availability, multi-region cloud zones?




