An Azure Landing Zone is the foundational infrastructure scaffold that everything else in your cloud estate sits on. Get it right early and cloud adoption scales cleanly. Get it wrong and you spend years untangling networking, policy drift, and access sprawl.
This post walks through how I approach hub-spoke landing zone design for enterprise Azure environments - from management group hierarchy down to Terraform automation.
## Why Hub-Spoke?
The hub-spoke model places shared services (firewall, DNS, VPN/ExpressRoute gateway, monitoring) in a central hub virtual network. Each workload team gets its own spoke VNet that peers to the hub but is isolated from every other spoke.
The result:
- Centralized egress — all outbound traffic routes through the hub firewall, one inspection point instead of twelve
- Workload isolation — spokes can't reach each other without an explicit routing policy permitting it
- Shared cost — ExpressRoute and firewall licences live in the hub once, not replicated per team
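The topology above hinges on VNet peering between each spoke and the hub. A minimal sketch using the `azurerm` Terraform provider — resource names (`azurerm_virtual_network.hub`, `azurerm_virtual_network.spoke`, the resource groups) are illustrative assumptions, not the actual module internals:

```hcl
# Spoke side: route to the hub and reuse its gateway.
resource "azurerm_virtual_network_peering" "spoke_to_hub" {
  name                      = "spoke-to-hub"
  resource_group_name       = azurerm_resource_group.spoke.name
  virtual_network_name      = azurerm_virtual_network.spoke.name
  remote_virtual_network_id = azurerm_virtual_network.hub.id
  allow_forwarded_traffic   = true # hub firewall forwards traffic on the spoke's behalf
  use_remote_gateways       = true # reuse the hub's VPN/ExpressRoute gateway
}

# Hub side: expose the gateway to the spoke.
resource "azurerm_virtual_network_peering" "hub_to_spoke" {
  name                      = "hub-to-spoke"
  resource_group_name       = azurerm_resource_group.hub.name
  virtual_network_name      = azurerm_virtual_network.hub.name
  remote_virtual_network_id = azurerm_virtual_network.spoke.id
  allow_gateway_transit     = true # required for use_remote_gateways on the spoke side
}
```

Note that peering is non-transitive, which is exactly what makes the isolation work: peering two spokes to the hub does not let them reach each other.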
## Management Group Hierarchy
Before any VNet, you need a management group structure that mirrors your policy and access model:
```
Tenant Root Group
├── Platform
│   ├── Identity
│   ├── Management
│   └── Connectivity
├── Landing Zones
│   ├── Corp (internal workloads)
│   └── Online (internet-facing)
└── Sandbox
```
The Platform group contains the hub subscription and shared services. The Landing Zones group is where spoke subscriptions live. Policy assignments at the Landing Zones level apply to every workload automatically - no per-subscription configuration drift.
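This hierarchy can itself be expressed in Terraform, so the policy scaffolding is code from day one. A sketch of two branches — `var.corp_subscription_ids` is an assumed input variable for illustration:

```hcl
resource "azurerm_management_group" "platform" {
  display_name = "Platform"
}

resource "azurerm_management_group" "connectivity" {
  display_name               = "Connectivity"
  parent_management_group_id = azurerm_management_group.platform.id
}

resource "azurerm_management_group" "landing_zones" {
  display_name = "Landing Zones"
}

resource "azurerm_management_group" "corp" {
  display_name               = "Corp"
  parent_management_group_id = azurerm_management_group.landing_zones.id
  subscription_ids           = var.corp_subscription_ids # spoke subscriptions land here
}
```

Policy assignments then target `azurerm_management_group.landing_zones.id`, and every current and future spoke subscription inherits them.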
## Hub Subscription Layout
The hub subscription typically contains:
| Resource | Purpose |
|---|---|
| Hub VNet | Shared network backbone |
| Azure Firewall (Premium) | Centralized egress and east-west inspection |
| Azure Bastion | Secure RDP/SSH without public IPs |
| Private DNS Zones | Centralized DNS for private endpoints |
| VPN / ExpressRoute GW | Hybrid connectivity |
| Log Analytics Workspace | Central diagnostic sink |
## Spoke Subscription Pattern
Each application team gets a subscription with:
- A spoke VNet peered to the hub
- A resource group per environment (dev, staging, prod)
- Contributor access scoped to their subscription only
- Budget alerts and cost anomaly detection
The key constraint: no spoke-to-spoke peering. If team A needs to call team B's service, the traffic goes through the hub firewall with an explicit rule permitting it. It's a deliberate friction point that keeps the blast radius of any incident contained.
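The "everything through the hub" constraint is enforced with a user-defined route on every spoke subnet. A sketch, assuming illustrative names (`var.firewall_ip`, `azurerm_subnet.workload`):

```hcl
# Default route: all spoke egress goes to the hub firewall, not straight to the internet.
resource "azurerm_route_table" "spoke" {
  name                = "rt-spoke-default"
  location            = var.location
  resource_group_name = azurerm_resource_group.spoke.name

  route {
    name                   = "default-via-hub-firewall"
    address_prefix         = "0.0.0.0/0"
    next_hop_type          = "VirtualAppliance"
    next_hop_in_ip_address = var.firewall_ip # hub firewall's private IP
  }
}

resource "azurerm_subnet_route_table_association" "workload" {
  subnet_id      = azurerm_subnet.workload.id
  route_table_id = azurerm_route_table.spoke.id
}
```

With this in place, spoke-to-spoke traffic only flows if a firewall rule in the hub explicitly permits it.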
## Terraform Structure
The landing zone infrastructure lives in a monorepo with this layout:
```
infra/
  modules/
    hub/          # VNet, Firewall, Bastion, DNS
    spoke/        # VNet, peering, route tables
    management/   # Log Analytics, Automation, policies
  environments/
    platform/     # hub + management subscriptions
    corp/         # corp spoke subscriptions
    online/       # online spoke subscriptions
  policies/       # custom Azure Policy definitions
```
The spoke module is reusable - each new workload team calls it with their subscription ID and CIDR block:
```hcl
module "spoke_payments" {
  source          = "../../modules/spoke"
  subscription_id = var.payments_subscription_id
  vnet_cidr       = "10.20.0.0/16"
  hub_vnet_id     = module.hub.vnet_id
  firewall_ip     = module.hub.firewall_private_ip
}
```
## Policy as Code
Three categories matter most at the landing zone level. In practice, you'll spend most of your time on the first two.
### Deny non-compliant resources
```json
{
  "if": {
    "allOf": [
      { "field": "type", "equals": "Microsoft.Network/publicIPAddresses" },
      { "field": "Microsoft.Network/publicIPAddresses/sku.name", "notEquals": "Standard" }
    ]
  },
  "then": { "effect": "Deny" }
}
```
### Deploy-if-not-exists for diagnostics
Every resource that supports diagnostic settings gets them auto-applied via a deployIfNotExists policy. No manual configuration, no gaps in the audit log. Teams don't have to remember — it just happens.
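The shape of such a policy, sketched for one resource type (Key Vault here as an example; in practice you define one per type or use a built-in initiative). The workspace parameter name, setting name, and embedded ARM template are illustrative, and the role is Monitoring Contributor so the remediation deployment can write the setting:

```json
{
  "if": { "field": "type", "equals": "Microsoft.KeyVault/vaults" },
  "then": {
    "effect": "DeployIfNotExists",
    "details": {
      "type": "Microsoft.Insights/diagnosticSettings",
      "roleDefinitionIds": [
        "/providers/Microsoft.Authorization/roleDefinitions/749f88d5-cbae-40b8-bcfc-e573ddc772fa"
      ],
      "existenceCondition": {
        "field": "Microsoft.Insights/diagnosticSettings/workspaceId",
        "equals": "[parameters('logAnalyticsWorkspaceId')]"
      },
      "deployment": {
        "properties": {
          "mode": "incremental",
          "template": {
            "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
            "contentVersion": "1.0.0.0",
            "parameters": {
              "vaultName": { "type": "string" },
              "workspaceId": { "type": "string" }
            },
            "resources": [
              {
                "type": "Microsoft.KeyVault/vaults/providers/diagnosticSettings",
                "apiVersion": "2021-05-01-preview",
                "name": "[concat(parameters('vaultName'), '/Microsoft.Insights/central-diagnostics')]",
                "properties": {
                  "workspaceId": "[parameters('workspaceId')]",
                  "logs": [{ "categoryGroup": "allLogs", "enabled": true }]
                }
              }
            ]
          },
          "parameters": {
            "vaultName": { "value": "[field('name')]" },
            "workspaceId": { "value": "[parameters('logAnalyticsWorkspaceId')]" }
          }
        }
      }
    }
  }
}
```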
### Require tags
A modify policy enforces required tags (environment, owner, cost-centre) at creation time. Cost allocation stays clean without depending on teams to do the right thing.
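A sketch of the modify effect for one tag, inheriting the value from the parent resource group when the resource is created without it (the tag key matches the list above; the role is Contributor, which modify policies need to patch resources):

```json
{
  "if": { "field": "tags['cost-centre']", "exists": "false" },
  "then": {
    "effect": "modify",
    "details": {
      "roleDefinitionIds": [
        "/providers/Microsoft.Authorization/roleDefinitions/b24988ac-6180-42a0-ab88-20f7382dd24c"
      ],
      "operations": [
        {
          "operation": "addOrReplace",
          "field": "tags['cost-centre']",
          "value": "[resourceGroup().tags['cost-centre']]"
        }
      ]
    }
  }
}
```

The same pattern repeats for `environment` and `owner`, one operation per tag.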
## What This Actually Gets You
In practice:
- A new spoke subscription is production-ready in under 2 hours via `terraform apply`
- Security posture is enforced at the infrastructure level, not left to individual teams to configure correctly
- Every resource's diagnostic data flows to the central Log Analytics workspace automatically
- Network topology is fully auditable — no undocumented peerings, no routing exceptions that accumulated quietly
The foundation work isn't glamorous. But it pays back every time a new workload lands cleanly rather than requiring a remediation project six months later.
