Zero-Downtime Redshift to Snowflake Migration
Migrated 100+ analysts to Snowflake without a single minute of downtime.
Intercom is a product-led SaaS company processing over 20 million messages per day for 25,000+ customers. The internal data platform had been built on AWS Redshift and was serving 100+ analysts, engineers, and product managers. As query volumes grew, performance had degraded and costs were rising without a clear ceiling.
The data team needed a migration path to a platform that could scale with query demand without proportional cost increases, while ensuring zero disruption to 100+ active users during the transition.
Redshift had become a productivity liability: analysts were losing significant working time waiting on slow queries, and infrastructure costs were climbing with no ceiling in sight. The migration to Snowflake had to solve both problems without introducing a third, disruption to 100+ active users. Parity with the existing Redshift models was not optional. Analysts had years of muscle memory built around how the data was structured, and silent inconsistencies after cutover would have been worse than a failed migration.
The engagement also surfaced a deeper architectural problem: Intercom operates across US, EU, and Australian geographies, each with different data residency requirements. In the Redshift era, cross-regional data was simply excluded from analysis. With Snowflake, a federated architecture was designed using dbt macros to union non-sensitive data from each regional deployment into unified staging layers — enabling compliant cross-regional analysis for the first time.
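The federated pattern described here can be sketched as a dbt macro. This is an illustrative sketch only: the macro name, source names, and region list are hypothetical, not Intercom's actual configuration.

```sql
-- Hypothetical dbt macro unioning non-sensitive tables across regional
-- deployments into one federated staging relation.
-- Assumes sources named like 'us_raw', 'eu_raw', 'au_raw' exist in sources.yml.
{% macro union_regional_sources(table_name, regions=['us', 'eu', 'au']) %}
    {% for region in regions %}
    select
        '{{ region }}' as source_region,  -- tag each row with its origin
        *
    from {{ source(region ~ '_raw', table_name) }}
    {% if not loop.last %}union all{% endif %}
    {% endfor %}
{% endmacro %}
```

A staging model would then call `{{ union_regional_sources('conversations') }}`, keeping residency-sensitive columns excluded upstream so only non-sensitive data ever crosses regions.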
1. Inventoried every downstream consumer of the Redshift platform (dashboards, pipelines, embedded queries, analyst workflows) and mapped each to the teams and decisions it supported.
2. Designed the target Snowflake schema and the full migration architecture, including the federated regional model structure, before writing a line of migration code.
3. Defined dbt modelling standards across all 200+ models: layer separation, naming conventions, test coverage, and documentation requirements.
4. Built dbt macros to union non-sensitive data from US, EU, and Australian regional deployments into unified federated staging layers, enabling compliant cross-regional analysis for the first time.
5. Ran a parallel dual-write period and validated query parity across aggregates, row counts, and edge cases, covering at least one full business cycle.
6. Migrated consumers in phases with a staged rollback procedure ready at each step, then executed the final cutover during a low-traffic window with all consumers confirmed and validated on the new platform.
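The dual-write parity check in step 05 can be sketched as a small comparison routine. This is a minimal illustration, not the actual validation harness: in practice the two dicts would be populated by running identical queries against Redshift and Snowflake, and the metric names here are hypothetical.

```python
def compare_metrics(redshift: dict, snowflake: dict, rel_tol: float = 0.0):
    """Return a list of (metric, redshift_value, snowflake_value) mismatches.

    rel_tol allows a small relative tolerance for floating-point aggregates;
    row counts should be compared exactly (rel_tol=0.0, the default).
    """
    mismatches = []
    for metric in sorted(set(redshift) | set(snowflake)):
        a, b = redshift.get(metric), snowflake.get(metric)
        if a is None or b is None:
            # Metric present on only one platform: always a mismatch.
            mismatches.append((metric, a, b))
        elif a != b and abs(a - b) > rel_tol * max(abs(a), abs(b)):
            mismatches.append((metric, a, b))
    return mismatches


if __name__ == "__main__":
    # Stubbed results standing in for the same query run on both platforms.
    rs = {"row_count": 1_204_331, "revenue_sum": 9_876_543.21}
    sf = {"row_count": 1_204_330, "revenue_sum": 9_876_543.21}
    print(compare_metrics(rs, sf))  # flags the row_count discrepancy
```

A check like this, run per model over a full business cycle, is what makes silent post-cutover inconsistencies detectable before any consumer is moved.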
- 3–5× query performance improvement across the platform
- Approximately $1M in annual infrastructure cost savings
- Zero downtime during cutover — all 100+ active users migrated without disruption
- 200+ dbt models standardised, documented, and validated on the new platform
- Cross-regional analysis enabled for the first time, compliantly, via a federated architecture across US, EU, and Australian deployments
Start a conversation.
Every engagement begins with a focused discussion of your current data environment and priorities. To schedule an initial consultation, reach out directly.