Big bang rewrites fail catastrophically. Spend 18 months
rebuilding everything? The launch slips, users revolt, and the company pivots back to the legacy system. The Strangler Fig pattern
avoids this risk by replacing legacy systems incrementally, piece by piece: new functionality wraps around the old
like a strangler fig tree, eventually replacing it entirely without a disruptive cutover.
This guide covers production-ready strangler fig migration
strategies that minimize risk while maximizing value delivery. We’ll migrate legacy systems safely, incrementally,
and reversibly.
Why Strangler Fig Transforms Migrations
The Big Bang Rewrite Problem
Full rewrites suffer from:
- Extended downtime: Months or years of no new features
- High risk: Single massive cutover can fail catastrophically
- Scope creep: Requirements change during multi-year projects
- No ROI: No value until 100% complete
- Team burnout: Endless rewrite without shipping
- Political failure: New leadership kills unfinished projects
Strangler Fig Benefits
- Incremental delivery: Ship value every sprint
- Low risk: Small, reversible changes
- Continuous validation: Test new system with real traffic
- Gradual learning: Understand legacy behavior incrementally
- Parallel operation: Old and new systems coexist
- Reversible: Roll back individual features if needed
Pattern 1: HTTP Proxy Routing
Route Requests to New vs Legacy
// Strangler proxy using Express
const express = require('express');
const { createProxyMiddleware } = require('http-proxy-middleware');

const app = express();

// Configuration: which routes go where
const routingConfig = {
  // New system handles these
  newSystem: [
    '/api/v2/users',
    '/api/v2/orders',
    '/api/products/search'
  ],
  // Legacy handles everything else
  legacySystem: '*'
};

// Proxy to the new system
const newSystemProxy = createProxyMiddleware({
  target: 'http://new-system:3000',
  changeOrigin: true,
  onProxyReq: (proxyReq, req) => {
    console.log('→ New system:', req.path);
  }
});

// Proxy to the legacy system
const legacySystemProxy = createProxyMiddleware({
  target: 'http://legacy-system:8080',
  changeOrigin: true,
  onProxyReq: (proxyReq, req) => {
    console.log('→ Legacy system:', req.path);
  }
});

// Routing logic
app.use((req, res, next) => {
  // Check whether the route has been migrated
  const isMigrated = routingConfig.newSystem.some(route =>
    req.path.startsWith(route)
  );
  if (isMigrated) {
    newSystemProxy(req, res, next);
  } else {
    legacySystemProxy(req, res, next);
  }
});

app.listen(8000, () => {
  console.log('Strangler proxy running on port 8000');
});

// Benefits:
// - Single entry point for clients
// - Gradually move routes to the new system
// - No client changes required
// - Easy rollback (just update the routing config)
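The routing decision itself is just data plus a prefix match, which makes it easy to unit-test before wiring it into a proxy. A minimal Python sketch of the same check (names and route list are illustrative):

```python
# Routes already migrated to the new system (illustrative list).
MIGRATED_PREFIXES = [
    "/api/v2/users",
    "/api/v2/orders",
    "/api/products/search",
]

def route_for(path: str) -> str:
    """Decide which backend serves a request path via prefix match."""
    if any(path.startswith(prefix) for prefix in MIGRATED_PREFIXES):
        return "new-system"
    return "legacy-system"
```

Keeping the route table as plain data means rollback is a one-line config change, exactly as the comments above describe.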
Pattern 2: Feature Flags for Gradual Migration
Toggle Between Old and New Implementations
public interface IOrderService
{
    Task<Order> GetOrderAsync(string orderId);
    Task<Order> CreateOrderAsync(CreateOrderRequest request);
}

// Legacy implementation
public class LegacyOrderService : IOrderService
{
    private readonly LegacyDatabase _db;

    public LegacyOrderService(LegacyDatabase db) => _db = db;

    public async Task<Order> GetOrderAsync(string orderId)
    {
        // Legacy database query
        var legacyOrder = await _db.Orders.FindAsync(orderId);
        return MapToOrder(legacyOrder);
    }

    public async Task<Order> CreateOrderAsync(CreateOrderRequest request)
    {
        // Legacy order creation
        var legacyOrder = new LegacyOrder { /* map fields */ };
        await _db.Orders.AddAsync(legacyOrder);
        await _db.SaveChangesAsync();
        return MapToOrder(legacyOrder);
    }
}

// New implementation
public class ModernOrderService : IOrderService
{
    private readonly IEventStore _eventStore;

    public ModernOrderService(IEventStore eventStore) => _eventStore = eventStore;

    public async Task<Order> GetOrderAsync(string orderId)
    {
        // Event-sourced implementation
        var events = await _eventStore.GetEventsAsync(orderId);
        return Order.FromEvents(events);
    }

    public async Task<Order> CreateOrderAsync(CreateOrderRequest request)
    {
        // Event-sourced order creation
        var order = Order.Create(request);
        await _eventStore.AppendEventsAsync(order.Id, order.Events);
        return order;
    }
}

// Strangler wrapper with feature flags
public class StranglerOrderService : IOrderService
{
    private readonly LegacyOrderService _legacy;
    private readonly ModernOrderService _modern;
    private readonly IFeatureFlags _flags;

    public StranglerOrderService(
        LegacyOrderService legacy,
        ModernOrderService modern,
        IFeatureFlags flags)
    {
        _legacy = legacy;
        _modern = modern;
        _flags = flags;
    }

    public async Task<Order> GetOrderAsync(string orderId)
    {
        // Check the feature flag
        if (await _flags.IsEnabledAsync("NewOrderSystem"))
        {
            return await _modern.GetOrderAsync(orderId);
        }
        return await _legacy.GetOrderAsync(orderId);
    }

    public async Task<Order> CreateOrderAsync(CreateOrderRequest request)
    {
        // Gradual rollout by percentage
        if (await _flags.IsEnabledForPercentageAsync("NewOrderCreation", 25))
        {
            // 25% of users get the new system
            return await _modern.CreateOrderAsync(request);
        }
        return await _legacy.CreateOrderAsync(request);
    }
}

// Usage in DI
services.AddSingleton<LegacyOrderService>();
services.AddSingleton<ModernOrderService>();
services.AddSingleton<IOrderService, StranglerOrderService>();
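The wrapper above leans on `IsEnabledForPercentageAsync`. Most flag systems implement percentage rollout with stable bucketing: hash a user identifier into a fixed bucket so the same user consistently hits the same system rather than bouncing between implementations on every request. A minimal Python sketch of that idea (the function name and hashing scheme are illustrative, not any particular library's API):

```python
import hashlib

def is_enabled_for_percentage(flag: str, user_id: str, percentage: int) -> bool:
    """Stable per-user rollout: the same (flag, user) pair always lands in
    the same 0-99 bucket, so a user's experience only changes when the
    rollout percentage crosses their bucket."""
    digest = hashlib.sha256(f"{flag}:{user_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) % 100
    return bucket < percentage
```

Because the bucket is derived from the flag name as well as the user, different flags ramp independently: a user in the first 25% for one flag isn't automatically in the first 25% for every flag.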
Pattern 3: Database Migration with Dual Writes
Synchronize Legacy and New Data Stores
from typing import Optional

class LegacyDatabase:
    async def save_user(self, user_data: dict) -> str:
        # Legacy SQL database
        print(f"Saving to legacy DB: {user_data}")
        return "legacy_id_123"

    async def get_user(self, user_id: str) -> Optional[dict]:
        # Legacy SQL lookup
        return {"id": user_id, "source": "legacy"}

class NewDatabase:
    async def save_user(self, user_data: dict) -> str:
        # New NoSQL database
        print(f"Saving to new DB: {user_data}")
        return "new_id_456"

    async def get_user(self, user_id: str) -> Optional[dict]:
        # New NoSQL lookup
        return {"id": user_id, "source": "new"}

class StranglerUserRepository:
    def __init__(self, legacy_db: LegacyDatabase, new_db: NewDatabase):
        self.legacy_db = legacy_db
        self.new_db = new_db
        self.dual_write_enabled = True
        self.read_from_new = False

    async def save_user(self, user_data: dict) -> str:
        """Dual write: save to both databases."""
        # Always write to legacy (source of truth)
        legacy_id = await self.legacy_db.save_user(user_data)
        if self.dual_write_enabled:
            try:
                # Also write to the new system
                new_id = await self.new_db.save_user(user_data)
                # Store the mapping for later migration
                await self._store_id_mapping(legacy_id, new_id)
            except Exception as e:
                # Log the error but don't fail - legacy is the source of truth
                print(f"New DB write failed: {e}")
        return legacy_id

    async def get_user(self, user_id: str) -> Optional[dict]:
        """Read from the new or legacy store based on the flag."""
        if self.read_from_new:
            try:
                # Try the new system first
                user = await self.new_db.get_user(user_id)
                if user:
                    return user
            except Exception as e:
                print(f"New DB read failed, falling back: {e}")
        # Fall back to legacy
        return await self.legacy_db.get_user(user_id)

    async def _store_id_mapping(self, legacy_id: str, new_id: str):
        """Store the legacy->new ID mapping for migration."""
        # Store in a mapping table
        pass

# Migration phases:
# Phase 1: Dual write (legacy primary, new secondary)
#   - All writes go to both systems
#   - All reads from legacy
#   - Build confidence in the new system

# Phase 2: Verify the two stores stay in sync for a period
repo = StranglerUserRepository(LegacyDatabase(), NewDatabase())
repo.dual_write_enabled = True
repo.read_from_new = False

# Phase 3: Shadow reads (write both, read new with fallback)
repo.read_from_new = True  # Read from new, fall back to legacy

# Phase 4: Cutover (new is primary)
#   - Switch reads fully to new
#   - Stop writing to legacy
#   - Legacy becomes backup only

# Phase 5: Decommission legacy
#   - Turn off the legacy system
#   - Remove legacy code
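Phase 2's "verify sync" step is where the ID mapping pays off: a periodic reconciliation job walks the mapping and reports records that differ between the two stores. A minimal sketch, assuming both stores can be dumped as id-to-record dicts (names hypothetical):

```python
def reconcile(legacy_rows: dict, new_rows: dict, id_mapping: dict) -> list:
    """Report records that differ between the two stores during dual-write.

    id_mapping maps legacy IDs to new-system IDs, as captured at write time.
    """
    mismatches = []
    for legacy_id, new_id in id_mapping.items():
        legacy_row = legacy_rows.get(legacy_id)
        new_row = new_rows.get(new_id)
        if legacy_row != new_row:
            mismatches.append({
                "legacy_id": legacy_id,
                "new_id": new_id,
                "legacy": legacy_row,
                "new": new_row,
            })
    return mismatches
```

Running this on a schedule (and alerting when the mismatch count is nonzero) gives you a concrete exit criterion for Phase 2 instead of a gut feeling.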
Pattern 4: API Version Coexistence
Run Multiple API Versions Simultaneously
// Legacy API (v1) - keep running
class LegacyUserController {
  async getUser(req, res) {
    // Legacy implementation with technical debt
    const user = await db.query(
      'SELECT * FROM users WHERE id = ?',
      [req.params.id]
    );
    // Legacy response format
    res.json({
      user_id: user.id,
      user_name: user.name,
      user_email: user.email
    });
  }
}

// New API (v2) - clean implementation
class ModernUserController {
  async getUser(req, res) {
    // Modern implementation
    const user = await userRepository.findById(req.params.id);
    // RESTful response format
    res.json({
      id: user.id,
      name: user.name,
      email: user.email,
      // Additional fields
      createdAt: user.createdAt,
      updatedAt: user.updatedAt,
      _links: {
        self: `/api/v2/users/${user.id}`,
        orders: `/api/v2/users/${user.id}/orders`
      }
    });
  }
}

// Routing: both versions coexist
const legacyController = new LegacyUserController();
const modernController = new ModernUserController();

app.get('/api/v1/users/:id', (req, res) => legacyController.getUser(req, res));
app.get('/api/v2/users/:id', (req, res) => modernController.getUser(req, res));

// Migration strategy:
// 1. Launch v2 alongside v1
// 2. Encourage clients to migrate (deprecation notices)
// 3. Monitor v1 traffic decline
// 4. Set a sunset date for v1
// 5. Remove v1 after a grace period

// Deprecation headers (Sunset header per RFC 8594)
app.use('/api/v1/*', (req, res, next) => {
  res.setHeader('Deprecation', 'true');
  res.setHeader('Sunset', 'Tue, 31 Dec 2024 23:59:59 GMT');
  res.setHeader('Link', '</api/v2>; rel="alternate"');
  next();
});
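A variant worth knowing: once the new system is solid, you can serve v1 *from* the new system by translating its representation back into the legacy shape, letting you retire the legacy implementation before every client has migrated. A minimal Python sketch of such an adapter (field names taken from the v1/v2 shapes above; the function itself is hypothetical):

```python
def to_v1_response(v2_user: dict) -> dict:
    """Translate the new (v2) representation into the legacy v1 shape.

    Extra v2-only fields (createdAt, _links, ...) are deliberately dropped,
    since v1 clients never saw them.
    """
    return {
        "user_id": v2_user["id"],
        "user_name": v2_user["name"],
        "user_email": v2_user["email"],
    }
```

This keeps the v1 contract frozen while the implementation behind it changes, which is the whole point of version coexistence.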
Pattern 5: Event-Driven Migration
Capture Changes and Replicate
// Legacy system publishes events
public class LegacyOrderService {
    private final EventPublisher eventPublisher;

    public LegacyOrderService(EventPublisher eventPublisher) {
        this.eventPublisher = eventPublisher;
    }

    public Order createOrder(CreateOrderRequest request) {
        // Legacy business logic
        Order order = legacyCreateOrder(request);
        // Publish an event for the new system to consume
        OrderCreatedEvent event = new OrderCreatedEvent(
            order.getId(),
            order.getCustomerId(),
            order.getTotal(),
            LocalDateTime.now()
        );
        eventPublisher.publish("orders.created", event);
        return order;
    }
}

// New system consumes events from legacy
@Service
public class OrderEventConsumer {
    private final OrderReadModelRepository orderReadModelRepository;
    private final OrderAnalyticsService orderAnalyticsService;

    public OrderEventConsumer(OrderReadModelRepository orderReadModelRepository,
                              OrderAnalyticsService orderAnalyticsService) {
        this.orderReadModelRepository = orderReadModelRepository;
        this.orderAnalyticsService = orderAnalyticsService;
    }

    @KafkaListener(topics = "orders.created")
    public void handleOrderCreated(OrderCreatedEvent event) {
        // Build the read model in the new system
        OrderReadModel readModel = OrderReadModel.builder()
            .id(event.getOrderId())
            .customerId(event.getCustomerId())
            .total(event.getTotal())
            .createdAt(event.getTimestamp())
            .build();
        orderReadModelRepository.save(readModel);
        // Optionally trigger new business logic
        orderAnalyticsService.processNewOrder(readModel);
    }
}

// Benefits:
// - Legacy keeps working
// - The new system builds an up-to-date view
// - You can compare outputs
// - Easy rollback (just stop consuming events)
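One caveat with event-driven migration: most brokers deliver at-least-once, and "easy rollback" implies you may later replay events you already consumed. The consumer therefore needs to be idempotent, typically by remembering which event IDs it has applied. A minimal Python sketch of that pattern (class and field names are illustrative):

```python
class IdempotentOrderConsumer:
    """Apply each event at most once, so redeliveries and replays are safe."""

    def __init__(self):
        self.processed_ids = set()   # in production: a durable store
        self.read_model = {}         # order_id -> projection

    def handle(self, event: dict) -> bool:
        """Return True if the event was applied, False if it was a duplicate."""
        if event["event_id"] in self.processed_ids:
            return False  # duplicate delivery: skip
        self.read_model[event["order_id"]] = {
            "customer_id": event["customer_id"],
            "total": event["total"],
        }
        self.processed_ids.add(event["event_id"])
        return True
```

With idempotent handling in place, rolling back really is just stopping (or rewinding) the consumer: replaying the topic rebuilds the same read model.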
Pattern 6: UI Component Replacement
Micro-Frontends for Incremental Migration
// Legacy Angular app
// Shell application routes to old and new components
import { Component } from '@angular/core';

@Component({
  selector: 'app-root',
  template: `
    <nav>
      <a routerLink="/dashboard">Dashboard</a>
      <a routerLink="/orders">Orders</a>
      <a routerLink="/products">Products</a>
    </nav>
    <router-outlet></router-outlet>
  `
})
export class AppComponent {}

// Routing configuration
const routes: Routes = [
  // Legacy route (old Angular component)
  {
    path: 'dashboard',
    component: LegacyDashboardComponent
  },
  // Migrated route (React micro-frontend)
  {
    path: 'orders',
    loadChildren: () => import('./react-orders-wrapper.module')
  },
  // New route (built in React from the start)
  {
    path: 'products',
    loadChildren: () => import('./react-products-wrapper.module')
  }
];

// React micro-frontend registration (single-spa)
import * as singleSpa from 'single-spa';

singleSpa.registerApplication({
  name: 'react-orders',
  app: () => import('./orders-app/main'),
  activeWhen: ['/orders']
});

singleSpa.start();
// Migration approach:
// 1. One page at a time
// 2. Shared authentication/state
// 3. Consistent design system
// 4. Eventually replace shell
Real-World Example: Complete Migration Strategy
E-commerce Platform Migration
# Migration Roadmap: Legacy Monolith → Microservices

# Phase 1: Setup Infrastructure (Month 1)
- Deploy strangler proxy
- Set up feature flag system
- Create shared event bus
- Implement monitoring/observability

# Phase 2: Extract User Service (Months 2-3)
migrate:
  route: /api/users/*
  database: Dual write users table
  features:
    user_creation: 10% → 50% → 100%
    user_authentication: 10% → 50% → 100%
  validation: Compare legacy vs new responses
  rollback_plan: Feature flag to 0%

# Phase 3: Extract Order Service (Months 4-6)
migrate:
  route: /api/orders/*
  database: Event sourcing for new orders
  features:
    order_creation: 5% → 25% → 100%
    order_retrieval: Shadow read → Primary read
  validation: Financial reconciliation
  rollback_plan: Dual write continues, switch primary

# Phase 4: Extract Product Catalog (Months 7-8)
migrate:
  route: /api/products/*
  database: Read replica → Own database
  features:
    product_search: 100% (read-only, low risk)
    product_updates: 10% → 100%
  validation: Search result comparison
  rollback_plan: Switch routing config

# Phase 5: Payment Processing (Months 9-10)
migrate:
  route: /api/payments/*
  approach: New API version (v2)
  features:
    payment_processing: 1% → 5% → 20% → 100%
  validation: Financial audit on every transaction
  rollback_plan: Immediate flag to legacy

# Phase 6: Frontend Migration (Months 11-14)
migrate:
  pages:
    /checkout: React component
    /account: React component
    /products: React component
  approach: Micro-frontends with single-spa
  validation: A/B testing performance

# Phase 7: Decommission Legacy (Months 15-16)
decommission:
  - Turn off legacy writes
  - Archive legacy data
  - Remove legacy code
  - Celebrate! 🎉
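The percentage ramps in the roadmap (1% → 5% → 20% → 100% for payments) can be automated: advance one step when the new system is healthy, and slam the flag to 0% the moment the error rate spikes. A minimal Python sketch of that controller logic (step list and threshold are illustrative):

```python
RAMP_STEPS = [1, 5, 20, 100]  # traffic percentages, mirroring the payment rollout

def next_percentage(current: int, error_rate: float, threshold: float = 0.01) -> int:
    """Advance the rollout one step, or drop to 0% when errors exceed the threshold."""
    if error_rate > threshold:
        return 0  # instant rollback via the feature flag
    for step in RAMP_STEPS:
        if step > current:
            return step
    return current  # already fully rolled out
```

Whether you wire this into an automated job or run it by hand at each phase gate, the key property is the asymmetry: ramp-up is gradual, rollback is immediate.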
Monitoring and Validation
Ensure Parity Between Systems
class MigrationValidator:
    """Compare legacy vs new system outputs."""

    def __init__(self, legacy_service, new_service):
        self.legacy_service = legacy_service
        self.new_service = new_service

    async def validate_order_creation(self, request):
        # Call both systems
        legacy_result = await self.legacy_service.create_order(request)
        new_result = await self.new_service.create_order(request)
        # Compare the results
        differences = self._compare_orders(legacy_result, new_result)
        if differences:
            # Log discrepancies
            await self._log_difference({
                'operation': 'create_order',
                'request': request,
                'legacy': legacy_result,
                'new': new_result,
                'differences': differences
            })
            # Alert on critical differences
            if self._is_critical(differences):
                await self._send_alert(differences)
        # Return the legacy result (source of truth during migration)
        return legacy_result

    def _compare_orders(self, legacy, new):
        differences = []
        if legacy.total != new.total:
            differences.append(f"Total: {legacy.total} vs {new.total}")
        if legacy.items != new.items:
            differences.append("Items mismatch")
        return differences
Best Practices
- Start small: Migrate lowest-risk components first
- Feature flags: Enable gradual rollout and instant rollback
- Dual write: Keep legacy as source of truth initially
- Validate continuously: Compare old vs new outputs
- Monitor everything: Track traffic, errors, performance
- Document dependencies: Understand what calls what
- Set deadlines: Prevent perpetual dual-running
Common Pitfalls
- No rollback plan: Always have instant rollback mechanism
- Skipping validation: Must verify parity between systems
- Migrating too fast: Gradual rollout reduces risk
- Ignoring dependencies: Map all integration points
- No monitoring: Can’t validate success without metrics
- Perpetual dual-run: Set sunset dates for legacy
Migration Checklist
✅ Before Migration:
- Map all dependencies and integration points
- Set up feature flag system
- Deploy strangler proxy/routing layer
- Implement comprehensive monitoring
- Create rollback procedures
- Document legacy behavior
✅ During Migration:
- Start with low-risk, low-dependency components
- Gradual rollout (1% → 10% → 50% → 100%)
- Validate output parity continuously
- Monitor error rates and performance
- Keep legacy running (source of truth)
- Document differences and decisions
✅ After Migration:
- Run in parallel for observation period
- Compare business metrics
- Get user feedback
- Set legacy sunset date
- Remove legacy code and infrastructure
- Document lessons learned
Key Takeaways
- Strangler Fig pattern enables safe, incremental legacy migration
- Avoid big bang rewrites: they fail far more often than they succeed
- Use proxy routing to redirect traffic to new system gradually
- Feature flags enable percentage-based rollout with instant rollback
- Dual write ensures data consistency during migration
- API versioning allows old and new to coexist
- Validate continuously—compare legacy vs new outputs
- Set sunset dates to avoid perpetual dual-running
The Strangler Fig pattern is a proven way to migrate legacy systems safely. By incrementally replacing
functionality while keeping the old system running, you sidestep the catastrophic risk of big bang rewrites.
Each piece migrated is value delivered, risk retired, and confidence built. It’s not glamorous, but it
works.