
Azure VM Sizing and Boot Time: When a Reboot Takes 30 Minutes

You reboot an Azure VM and wait. A minute passes, then five, then fifteen. The VM eventually comes back — but it took 20 to 30 minutes to do something that should take under two. No crash, no error in the logs, no alert. Just an unusually long recovery window that's hard to explain and harder to diagnose if you're looking in the wrong place.

The instinct is to investigate the OS, the application, the startup scripts. More often than you'd expect, the root cause is simpler: the VM size is mismatched with the workload it's hosting, and the underlying hardware is struggling to allocate resources consistently on restart.

// why VM size affects boot time

Azure's VM size documentation covers vCPU count, RAM, disk throughput, and network bandwidth well. What it doesn't surface clearly is that different VM size families have fundamentally different underlying hardware characteristics — and those characteristics directly affect how the hypervisor allocates resources when a VM restarts.

Burstable B-series VMs are the most common offender. They share physical cores and operate on a CPU credit system. When a B-series VM reboots, it starts with a depleted or zero CPU credit balance. If the hosted application has a CPU-heavy initialization sequence — security agents starting, services registering, databases initializing, logs flushing — the VM will throttle hard during boot. What should take 90 seconds stretches to 20 or 30 minutes as the VM slowly accumulates enough CPU credits to complete initialization.

A VM that boots slowly after a reboot but runs fine once up is often a CPU credit exhaustion problem at startup — not an OS or application issue.
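A quick first check is the VM's size family. A minimal sketch with the Azure CLI, reusing the placeholder names myResourceGroup and myVM from the commands later in this post — a value starting with Standard_B means you're on a burstable, credit-based SKU:

```shell
# Print the VM's size; a Standard_B* value indicates a burstable SKU
az vm show \
  --resource-group myResourceGroup \
  --name myVM \
  --query "hardwareProfile.vmSize" \
  --output tsv
```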

// what it looks like in practice

The pattern is consistent: the VM reboots, Azure shows it as Running almost immediately in the portal, but SSH or RDP connections time out or are refused for an extended period. When you finally get in, everything looks normal — the OS booted, the application is running, no errors in the event logs. Just a very long gap between when Azure reports the VM as running and when it is actually accessible.

This is the key diagnostic clue. Azure Activity Logs will show the VM transitioning to Running state quickly. The VM is technically running at the hypervisor level, but the guest OS and application services are still initializing under CPU throttling. The gap between hypervisor running and guest accessible is where the 20 to 30 minutes disappears.
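You can observe this gap from the CLI as well. A sketch using the same placeholder names — the instance view reports the hypervisor-level status, which will read "VM running" long before the guest accepts connections:

```shell
# Show hypervisor-level statuses; "VM running" here does not mean the guest is ready
az vm get-instance-view \
  --resource-group myResourceGroup \
  --name myVM \
  --query "instanceView.statuses[].displayStatus" \
  --output table
```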

You can confirm CPU credit exhaustion by checking Azure Monitor metrics during the boot window. Navigate to the VM's Metrics blade and look at CPU Credits Remaining. If it hits zero during startup and stays there for an extended period, that is your cause.
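The same metric check is scriptable if you'd rather not click through the portal. A minimal sketch with the Azure CLI, again assuming the placeholder names myResourceGroup and myVM; note that the CPU Credits Remaining metric is only emitted for B-series VMs:

```shell
# Look up the VM's resource ID, then pull minute-granularity credit balance
VM_ID=$(az vm show \
  --resource-group myResourceGroup \
  --name myVM \
  --query id --output tsv)

az monitor metrics list \
  --resource "$VM_ID" \
  --metric "CPU Credits Remaining" \
  --interval PT1M \
  --output table
```

A balance pinned at zero across the boot window is the same confirmation the Metrics blade gives you.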

// the fix

Move the workload to a non-burstable VM size. D-series and E-series VMs provide dedicated vCPUs with no credit system — CPU is available immediately on boot without throttling. For most workloads experiencing slow B-series boot times, moving to a comparable D-series size resolves it completely.

# Check available non-burstable sizes in your region
az vm list-sizes --location eastus \
  --query "[?!contains(name, '_B')]" \
  --output table

# Stop, resize, and restart
az vm deallocate \
  --resource-group myResourceGroup \
  --name myVM

az vm resize \
  --resource-group myResourceGroup \
  --name myVM \
  --size Standard_D2s_v5

az vm start \
  --resource-group myResourceGroup \
  --name myVM

If staying on a burstable size is required for cost reasons, let the VM sit idle before a planned reboot to accumulate credits, then reboot immediately. This is a workaround, not a fix — but it reduces the boot window significantly for planned maintenance.
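If one VM hit this, others in the same subscription may be a reboot away from the same behavior. A sketch for inventorying every burstable VM you own, using the same `_B` name filter as the size query above:

```shell
# List all B-series (burstable) VMs in the current subscription
az vm list \
  --query "[?contains(hardwareProfile.vmSize, '_B')].{Name:name, ResourceGroup:resourceGroup, Size:hardwareProfile.vmSize}" \
  --output table
```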

// the key takeaway

Slow VM boot recovery in Azure is not always an OS or application problem. When a reboot that should take two minutes takes thirty, check the VM size family before digging into application logs. CPU credit exhaustion at startup is a common and easily overlooked cause, and moving to a dedicated vCPU size is usually the cleanest resolution. The Azure portal showing the VM as Running is not the same as the VM being ready — understanding that distinction saves a lot of unnecessary troubleshooting time.

>_ Have questions or feedback on this post?

Reach out at info@rootandsecure.io or connect on LinkedIn.

Working through Azure infrastructure issues or hybrid cloud deployments? Book a free intake call and let's work through it together.