Medha Cloud Exchange Server Team · 8 min read

Event IDs 15004-15007 indicate Exchange Server has activated back pressure—a self-protection mechanism that throttles mail flow when transport resources are critically low. This guide shows you how to identify the resource constraint, resolve it quickly, and prevent future back pressure events.

Our Exchange Mail Flow Support team resolves back pressure issues within 15-30 minutes. Follow this same process to restore your mail flow.

Error Overview: What Back Pressure Means

Back pressure is Exchange's built-in throttling mechanism that protects the transport service from overload. When resources like disk space, memory, or queue depth exceed thresholds, Exchange progressively restricts mail acceptance and delivery.
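
The thresholds that trigger each level live in EdgeTransport.exe.config on the transport server. As a quick sketch for reviewing them (the exact key names vary by Exchange version, so the filter below is a loose match):

# List back pressure-related settings from EdgeTransport.exe.config
# (key names differ between Exchange versions; adjust the filter as needed)
$configFile = Join-Path $env:ExchangeInstallPath "Bin\EdgeTransport.exe.config"
[xml]$config = Get-Content $configFile -Raw
$config.configuration.appSettings.add |
    Where-Object { $_.key -match 'DiskSpace|Memory|VersionBuckets|QueueLength' } |
    Format-Table key, value -AutoSize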

Typical Event Log Entries
# Event ID 15004 - Resource Pressure Increased
Log Name:      Application
Source:        MSExchangeTransport
Event ID:      15004
Level:         Warning
Description:   The resource pressure increased from Normal to Medium.
               UsedDiskSpace = 89%

# Event ID 15006 - Critical Disk Space
Log Name:      Application
Source:        MSExchangeTransport
Event ID:      15006
Level:         Error
Description:   The Microsoft Exchange Transport service is rejecting message
               submissions because the available disk space has dropped below
               the configured threshold.

Pressure Levels:

  • Normal: All resources within thresholds—full mail flow
  • Medium: Warning level—mail from non-Exchange sources slowed, new connections may be rejected
  • High: Critical—inbound mail submission paused, only delivery of queued mail continues

Exchange logs Event ID 15004 each time resource pressure increases and 15005 each time it decreases; 15006 and 15007 are logged when available disk space or memory, respectively, becomes low enough to reject message submissions outright.

Symptoms & Business Impact

What Users Experience:

  • Outbound email stuck in Outbox, not sending
  • Inbound email delayed by minutes to hours
  • Internal email between departments severely delayed
  • NDRs with "452 4.3.1 Insufficient system resources" errors

What Admins See:

  • Event IDs 15004-15007 in the Application log
  • Large queue counts in Queue Viewer
  • Transport service using high CPU/memory
  • Disk space alerts on transport queue drive
Check Current Back Pressure Status
# View transport resource utilization
[xml]$bp = Get-ExchangeDiagnosticInfo -Process EdgeTransport -Component ResourceThrottling
$bp.Diagnostics.Components.ResourceThrottling.ResourceTracker.ResourceMeter |
    Format-Table Resource, Pressure -AutoSize

# Alternative: Check transport agent status
Get-TransportPipeline | Format-List

Common Causes of Back Pressure

1. Low Disk Space (Most Common - 60%)

The transport queue database drive runs low on space. Exchange requires at least 4GB free (by default) to continue accepting mail. Large attachments, virus quarantine, or log file accumulation often trigger this.
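
To see where your queue drive stands against that floor, a quick sketch (it derives the drive letter from the transport configuration):

# Compare free space on the queue database drive against the ~4 GB floor
$queuePath = (Get-TransportService -Identity $env:COMPUTERNAME).QueueDatabasePath.ToString()
$drive = Get-PSDrive -Name $queuePath.Substring(0, 1)
"{0:N1} GB free on drive {1}: (default floor is roughly 4 GB)" -f ($drive.Free / 1GB), $drive.Name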

2. High Memory Usage (25%)

EdgeTransport.exe consumes excessive memory due to large queue backlogs, memory leaks, or competing processes. Exchange throttles when available memory drops below threshold.

3. Queue Database Growth (10%)

The mail.que database file grows excessively large, typically from undeliverable messages accumulating. This slows database operations and triggers back pressure.

4. Version Store Exhaustion (5%)

The ESE database engine's version store runs out of space during heavy message processing. This is more common during mailbox moves or large mail bursts.
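
There is no dedicated cmdlet for the version store, but the same resource-throttling diagnostic used in Step 1 below exposes it. A sketch (the exact resource name varies by version, hence the wildcard):

# Look for version-bucket pressure among the transport resource meters
[xml]$bp = Get-ExchangeDiagnosticInfo -Process EdgeTransport -Component ResourceThrottling
$bp.Diagnostics.Components.ResourceThrottling.ResourceTracker.ResourceMeter |
    Where-Object { $_.Resource -like '*Version*' } |
    Select-Object Resource, Pressure, CurrentResourceUse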

Quick Diagnosis

Step 1: Identify Resource Under Pressure
# Get detailed resource utilization
[xml]$bp = Get-ExchangeDiagnosticInfo -Process EdgeTransport -Component ResourceThrottling
$bp.Diagnostics.Components.ResourceThrottling.ResourceTracker.ResourceMeter |
    Select-Object Resource, Pressure, CurrentResourceUse, PreviousResourceUse |
    Format-Table -AutoSize

# Check disk space on queue database drive
Get-WmiObject Win32_LogicalDisk -Filter "DriveType=3" |
    Select-Object DeviceID, @{N='FreeGB';E={[math]::Round($_.FreeSpace/1GB,2)}},
    @{N='TotalGB';E={[math]::Round($_.Size/1GB,2)}} | Format-Table
Step 2: Check Queue Status
# View all queues and message counts
Get-Queue | Select-Object Identity, DeliveryType, Status, MessageCount, NextHopDomain |
    Sort-Object MessageCount -Descending | Format-Table -AutoSize

# Check for poison messages
Get-Queue -Identity "Poison" | Format-List
Step 3: Review Recent Back Pressure Events
# Get back pressure events from last 24 hours
Get-WinEvent -FilterHashtable @{
    LogName = 'Application'
    ProviderName = 'MSExchangeTransport'
    Id = 15004,15005,15006,15007
    StartTime = (Get-Date).AddHours(-24)
} | Select-Object TimeCreated, Id, Message | Format-Table -Wrap

Quick Fix (10-15 Minutes)

Most Common Solution:

Free up disk space on the transport queue drive. This resolves 60%+ of back pressure issues immediately.

Immediate Relief: Free Disk Space
# 1. Find and clean transport queue folder
$queuePath = (Get-TransportService -Identity $env:COMPUTERNAME).QueueDatabasePath
Write-Host "Queue database location: $queuePath"

# 2. Clear old temp files (safe to delete)
Get-ChildItem "$queuePath\..\temp" -Recurse |
    Where-Object { $_.LastWriteTime -lt (Get-Date).AddDays(-1) } |
    Remove-Item -Force

# 3. Clear IIS log files if on same drive
Get-ChildItem "C:\inetpub\logs\LogFiles" -Recurse -Include *.log |
    Where-Object { $_.LastWriteTime -lt (Get-Date).AddDays(-7) } |
    Remove-Item -Force

# 4. Restart transport service to reset pressure
Restart-Service MSExchangeTransport

# 5. Verify back pressure cleared
Start-Sleep -Seconds 30
Get-EventLog -LogName Application -Source MSExchangeTransport -Newest 5 |
    Where-Object { $_.EventID -in 15004,15005,15006,15007 } | Format-Table TimeGenerated, EventID, Message -Wrap

Detailed Solutions by Resource Type

Solution 1: Disk Space Pressure

Free Space and Prevent Recurrence
# Check what's consuming space
$queueDrive = (Get-TransportService -Identity $env:COMPUTERNAME).QueueDatabasePath.ToString().Substring(0,2)
Write-Host "Analyzing drive $queueDrive..."

# Find large files in Exchange paths
$exchangePaths = @(
    "$queueDrive\Program Files\Microsoft\Exchange Server\V15\TransportRoles",
    "$queueDrive\Program Files\Microsoft\Exchange Server\V15\Logging"
)

foreach ($path in $exchangePaths) {
    if (Test-Path $path) {
        Get-ChildItem $path -Recurse -File |
            Sort-Object Length -Descending |
            Select-Object -First 20 FullName, @{N='SizeMB';E={[math]::Round($_.Length/1MB,2)}} |
            Format-Table -AutoSize
    }
}

# Safe cleanup of protocol logs older than 30 days
$logPaths = Get-ChildItem "$queueDrive\Program Files\Microsoft\Exchange Server\V15\Logging" -Directory
foreach ($logPath in $logPaths) {
    $oldLogs = Get-ChildItem $logPath.FullName -Recurse -File |
        Where-Object { $_.LastWriteTime -lt (Get-Date).AddDays(-30) }
    if ($oldLogs) {
        Write-Host "Removing $($oldLogs.Count) old logs from $($logPath.Name)..."$logPath.Name)..."
        $oldLogs | Remove-Item -Force
    }
}

# Limit protocol log retention to 7 days (prevents unbounded log growth)
Set-TransportService -Identity $env:COMPUTERNAME -ReceiveProtocolLogMaxAge 7.00:00:00 -SendProtocolLogMaxAge 7.00:00:00
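
Confirm the retention change took effect:

# Verify the new protocol log retention settings
Get-TransportService -Identity $env:COMPUTERNAME |
    Format-List ReceiveProtocolLogMaxAge, SendProtocolLogMaxAge, ReceiveProtocolLogPath, SendProtocolLogPath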

Solution 2: Memory Pressure

Address Memory Exhaustion
# Check EdgeTransport memory usage
$edgeProcess = Get-Process EdgeTransport -ErrorAction SilentlyContinue
if ($edgeProcess) {
    Write-Host "EdgeTransport Memory: $([math]::Round($edgeProcess.WorkingSet64/1GB,2)) GB"2)) GB"
    Write-Host "EdgeTransport Private Memory: $([math]::Round($edgeProcess.PrivateMemorySize64/1GB,2)) GB"2)) GB"
}

# Check system memory
$os = Get-WmiObject Win32_OperatingSystem
$freeMemGB = [math]::Round($os.FreePhysicalMemory/1MB,2)
$totalMemGB = [math]::Round($os.TotalVisibleMemorySize/1MB,2)
Write-Host "System Memory: $freeMemGB GB free of $totalMemGB GB total"$totalMemGB GB total"

# If memory is low, restart transport to release memory
if ($freeMemGB -lt 2) {
    Write-Host "Low memory detected. Restarting transport service..."
    Restart-Service MSExchangeTransport -Force
}

# Reduce memory footprint by adjusting thresholds
# Note: Requires EdgeTransport.exe.config edit and service restart
Write-Host "Consider adjusting memory thresholds in EdgeTransport.exe.config if recurring"

Solution 3: Queue Database Issues

Clean Queue Database
# Check queue database size
$queuePath = (Get-TransportService -Identity $env:COMPUTERNAME).QueueDatabasePath
$queueDb = Get-ChildItem "$queuePath\mail.que" -ErrorAction SilentlyContinue
if ($queueDb) {
    Write-Host "Queue database size: $([math]::Round($queueDb.Length/1GB,2)) GB"
}

# If queue is large, remove expired/stuck messages
Get-Queue | Where-Object { $_.MessageCount -gt 100 } | ForEach-Object {
    Write-Host "Processing queue: $($_.Identity) with $($_.MessageCount) messages"$_.MessageCount) messages"

    # Remove messages older than 2 days with failures
    Get-Message -Queue $_.Identity |
        Where-Object { $_.DateReceived -lt (Get-Date).AddDays(-2) -and $_.LastError -ne $null } |
        Remove-Message -Confirm:$false -WithNDR:$false
}

# Restart transport to re-evaluate resource pressure (note: a restart alone
# does not shrink mail.que itself; see the sketch below)
Write-Host "Restarting transport service..."
Stop-Service MSExchangeTransport -Force
Start-Sleep -Seconds 10
Start-Service MSExchangeTransport
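
If mail.que remains oversized after the restart, the only way to shrink the file is to recreate it. A sketch, assuming you have drained the queues or can accept losing any messages still stored in the old database:

# Recreate the queue database: move the old files aside while the service
# is stopped; Exchange builds a fresh mail.que on startup
$queuePath = (Get-TransportService -Identity $env:COMPUTERNAME).QueueDatabasePath.ToString()
$backup = Join-Path (Split-Path $queuePath -Parent) 'QueueBackup'
New-Item -ItemType Directory -Path $backup -Force | Out-Null
Stop-Service MSExchangeTransport -Force
Move-Item "$queuePath\*" $backup -Force
Start-Service MSExchangeTransport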

Verify the Fix

Confirm Back Pressure Resolved
# Check for Event ID 15005 (logged when resource pressure decreases back toward Normal)
Get-WinEvent -FilterHashtable @{
    LogName = 'Application'
    ProviderName = 'MSExchangeTransport'
    Id = 15005
    StartTime = (Get-Date).AddMinutes(-30)
} -MaxEvents 5 | Select-Object TimeCreated, Message

# Verify resource utilization is normal
[xml]$bp = Get-ExchangeDiagnosticInfo -Process EdgeTransport -Component ResourceThrottling
$bp.Diagnostics.Components.ResourceThrottling.ResourceTracker.ResourceMeter |
    Select-Object Resource, Pressure | Format-Table

# Check queue processing resumed
Get-Queue | Measure-Object -Property MessageCount -Sum |
    Select-Object @{N='TotalQueued';E={$_.Sum}}

# Send test email and verify delivery
Send-MailMessage -From "admin@domain.com" -To "testuser@domain.com" -Subject "Back Pressure Test" -Body "Mail flow restored" -SmtpServer localhost

Prevention Tips

Monitoring & Alerts

  • Set up disk space alerts at 85% (warning) and 90% (critical)
  • Monitor Event ID 15004 as an early warning of pressure (see the sketch after this list)
  • Track queue depth trends in performance monitoring
  • Alert on memory utilization exceeding 85%
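
A minimal alerting sketch to run as a scheduled task every 15 minutes (the addresses and relay host are placeholders; route the alert through a relay other than the server under pressure):

# Email an alert if any pressure-increase events fired in the last 15 minutes
$events = Get-WinEvent -FilterHashtable @{
    LogName = 'Application'
    ProviderName = 'MSExchangeTransport'
    Id = 15004
    StartTime = (Get-Date).AddMinutes(-15)
} -ErrorAction SilentlyContinue
if ($events) {
    Send-MailMessage -From "monitor@domain.com" -To "admins@domain.com" `
        -Subject "Back pressure warning on $env:COMPUTERNAME" `
        -Body ($events | Format-List TimeCreated, Message | Out-String) `
        -SmtpServer "smtp.domain.com"
}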

Capacity Planning

  • Maintain 20%+ free space on transport database drives
  • Allocate dedicated drives for queue databases
  • Size message tracking logs appropriately
  • Implement message size and recipient limits
Configure Proactive Limits
# Set reasonable message size limits
Set-TransportConfig -MaxReceiveSize 25MB -MaxSendSize 25MB

# Configure protocol log retention
Set-TransportService -Identity $env:COMPUTERNAME -ReceiveProtocolLogMaxAge 7.00:00:00 -SendProtocolLogMaxAge 7.00:00:00 -ReceiveProtocolLogMaxDirectorySize 500MB -SendProtocolLogMaxDirectorySize 500MB

# Set message tracking log limits
Set-TransportService -Identity $env:COMPUTERNAME -MessageTrackingLogMaxAge 30.00:00:00 -MessageTrackingLogMaxDirectorySize 1GB

When to Escalate

Contact Exchange specialists if:

  • Back pressure persists after freeing resources
  • Transport service crashes repeatedly
  • Queue database corruption is suspected
  • Memory leaks require advanced troubleshooting
  • You need help sizing transport infrastructure

Need Expert Help?

Our Exchange Mail Flow Team resolves back pressure issues with average response time under 30 minutes. We provide 24/7 support for critical mail flow emergencies.

Frequently Asked Questions

What causes back pressure events?

Back pressure events occur when Exchange transport resources are exhausted. Common triggers include low disk space on queue database drives, high memory usage, excessive queue database size, or too many outbound connections. Exchange throttles mail flow to prevent system failure.

Still Stuck? We Can Help

Our Exchange Server experts have resolved thousands of issues just like yours.

  • Remote troubleshooting with a 95-minute average response time
  • No upfront commitment or diagnosis fees
  • Fix-it-right guarantee with documentation

Medha Cloud Exchange Server Team

Microsoft Exchange Specialists

Our Exchange Server specialists have 15+ years of combined experience managing enterprise email environments. We provide 24/7 support, emergency troubleshooting, and ongoing administration for businesses worldwide.

15+ Years Experience · Microsoft Certified · 99.7% Success Rate · 24/7 Support