
Event ID 17018: Transport Insufficient Resources - Fix Guide 2025

Complete troubleshooting guide for Exchange Server Event ID 17018 transport resource exhaustion. Learn how to fix memory pressure, queue backlogs, back pressure conditions, and restore mail flow in 15-30 minutes.

Medha Cloud Exchange Server Team · 14 min read


Event ID 17018 in Exchange Server indicates the Transport service cannot process email due to insufficient system resources. This causes mail flow to halt or severely slow down, with messages queuing up and external senders receiving temporary rejection notices.

Our Exchange Mail Flow Recovery Team resolves these resource exhaustion issues daily. This guide provides the same diagnostic and remediation steps we use to restore mail flow quickly.

Error Overview: What Event ID 17018 Means

Event ID 17018 is logged by the MSExchangeTransport service when Exchange detects critical resource constraints that prevent normal mail processing. The Transport service implements "back pressure" - a self-preservation mechanism that throttles or rejects mail to prevent server crashes.

Typical Event Log Entry
Log Name:      Application
Source:        MSExchangeTransport
Event ID:      17018
Level:         Error
Description:   The resource pressure is at Critical level because the
               following resources are under pressure:
               Queue database and target drive disk space ("C:\Program Files\
               Microsoft\Exchange Server\V15\TransportRoles\data\Queue")
               is at 95%.

Why this happens: Exchange Transport maintains in-memory queues and a disk-based queue database (mail.que). When memory, disk space, or other resources become constrained, Exchange activates back pressure to protect system stability.

Back Pressure Resource Monitoring

  • Normal - All resources healthy; mail flows normally
  • Medium - Some throttling; new connections are limited
  • High - Heavy throttling; mail acceptance is delayed
  • Critical - Mail rejected; Event ID 17018 is logged
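
You can see these transitions in the Application log: events 15004-15007 record pressure changes and 17018 records the critical, mail-rejecting state. A minimal sketch for reviewing recent transitions on the local server:
# Pull recent back pressure transitions (15004-15007) and critical rejections (17018)
Get-EventLog -LogName Application -Source MSExchangeTransport -Newest 200 |
  Where-Object {$_.EventID -in 15004,15005,15006,15007,17018} |
  Select-Object TimeGenerated, EventID,
    @{Name="Summary";Expression={($_.Message -split "`n")[0]}} |
  Format-Table -AutoSize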

Symptoms & Business Impact

What Users Experience:

  • Outbound emails stuck in Outbox for extended periods
  • Delayed email delivery (minutes to hours)
  • External senders receive "452 4.3.1 Insufficient system resources" bounce
  • NDR messages if delays exceed retry timeout (default 2 days)
  • OWA/Outlook shows "Sending..." indefinitely

What Admins See:

  • Event ID 17018 in Application event log
  • Event IDs 15004, 15005, 15006, 15007 (back pressure notifications)
  • Queue Viewer shows thousands of messages in Submission or Unreachable queues
  • High memory usage by EdgeTransport.exe or MSExchangeTransport.exe
  • Transport database drive at or near capacity

Business Impact:

  • Email SLA Breach: Critical communications delayed
  • Customer Communication: Sales/support responses stuck
  • Partner Integration: Automated email workflows fail
  • Reputation Risk: External senders see your server as unreliable

Common Causes of Event ID 17018

1. Queue Database Disk Space Exhaustion (45% of cases)

Most Common Cause: The drive hosting the Transport queue database (mail.que) runs out of space. Default location is the Exchange installation drive.

Identified by: Event log mentions "Queue database and disk space" pressure

2. Memory Pressure (30% of cases)

EdgeTransport.exe consumes available RAM processing large message volumes or oversized attachments. When available memory drops below thresholds, back pressure activates.

Identified by: Event log mentions "Private bytes" or "Physical memory" pressure

3. Excessive Queue Depth (15% of cases)

Message queues grow beyond capacity due to downstream delivery failures, spam floods, or misconfigured connectors creating loops.

Identified by: Queue Viewer shows 10,000+ messages; Event log mentions "Submission queue length"

4. Version Store Exhaustion (7% of cases)

The ESE database version store (used for transaction rollback) fills up during heavy load or long-running transactions.

Identified by: Event log mentions "Version buckets" pressure

5. Database Transaction Logs Full (3% of cases)

Transport database transaction logs consume remaining disk space, preventing new transactions.

Identified by: Large number of .LOG files in Queue folder
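
For causes 4 and 5, a quick way to gauge how much the ESE transaction logs have grown is to count and size the trn*.log files next to mail.que. A minimal sketch, assuming the queue database is in its default location under the Exchange install path:
# Count and size the ESE transaction logs in the Queue folder (default location assumed)
$queueFolder = Join-Path $env:ExchangeInstallPath "TransportRoles\data\Queue"
Get-ChildItem $queueFolder -Filter "trn*.log" -File |
  Measure-Object -Property Length -Sum |
  Select-Object Count, @{Name="TotalMB";Expression={[math]::Round($_.Sum/1MB,0)}}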

Quick Diagnosis: Identify the Resource Bottleneck

📌 Version Compatibility: This guide applies to Exchange Server 2016 and Exchange Server 2019. Commands may differ for other versions.

Run these commands in Exchange Management Shell (as Administrator) to identify which resource is exhausted:

Step 1: Check Current Back Pressure State
# Get real-time resource pressure status
Get-ExchangeDiagnosticInfo -Process EdgeTransport -Component ResourceThrottling |
  Format-List

# Alternative: Check via XML output
[xml]$diag = Get-ExchangeDiagnosticInfo -Process EdgeTransport -Component ResourceThrottling
$diag.Diagnostics.Components.ResourceThrottling.ResourceTracker.ResourceMeter |
  Select-Object Name, CurrentResourceUse, PreviousResourceUse, Pressure

What to look for:

  • Pressure: Normal = Resource healthy
  • Pressure: Medium/High/Critical = Resource constrained
  • Note which specific resource shows pressure
Step 2: Check Queue Database Disk Space
# Find queue database location (configured in EdgeTransport.exe.config)
$edgeConfig = Join-Path $env:ExchangeInstallPath "Bin\EdgeTransport.exe.config"
[xml]$config = Get-Content $edgeConfig
$queuePath = ($config.configuration.appSettings.add |
  Where-Object {$_.key -eq "QueueDatabasePath"}).value
if (-not $queuePath) { $queuePath = Join-Path $env:ExchangeInstallPath "TransportRoles\data\Queue" }
Write-Host "Queue Database Path: $queuePath"

# Check free space on that drive
$drive = (Get-Item $queuePath).PSDrive.Name
Get-WmiObject Win32_LogicalDisk -Filter "DeviceID='${drive}:'" |
  Select-Object DeviceID,
    @{Name="FreeGB";Expression={[math]::Round($_.FreeSpace/1GB,2)}},
    @{Name="TotalGB";Expression={[math]::Round($_.Size/1GB,2)}},
    @{Name="PercentFree";Expression={[math]::Round(($_.FreeSpace/$_.Size)*100,1)}}

Pro Tip: Exchange requires at least 4GB free space on the queue database drive. Back pressure activates at 4GB and becomes critical below 2GB. If you're under 10GB free, plan for immediate remediation.

Step 3: Check Memory Pressure
# Check EdgeTransport process memory usage
Get-Process EdgeTransport | Select-Object Name,
  @{Name="MemoryMB";Expression={[math]::Round($_.WorkingSet64/1MB,0)}},
  @{Name="PrivateMB";Expression={[math]::Round($_.PrivateMemorySize64/1MB,0)}}

# Check overall system memory
$os = Get-WmiObject Win32_OperatingSystem
$freeMemGB = [math]::Round($os.FreePhysicalMemory/1MB,2)
$totalMemGB = [math]::Round($os.TotalVisibleMemorySize/1MB,2)
Write-Host "Free Memory: $freeMemGB GB of $totalMemGB GB"$totalMemGB GB"
Step 4: Check Queue Depth
# Get message queue summary
Get-Queue | Group-Object Status | Select-Object Name, Count

# Find queues with high message counts
Get-Queue | Where-Object {$_.MessageCount -gt 100} |
  Select-Object Identity, Status, MessageCount, NextHopDomain |
  Sort-Object MessageCount -Descending

# Check submission queue specifically (often the bottleneck)
Get-Queue -Identity "Submission" | Format-List *

Quick Fix (10 Minutes) - Emergency Mail Flow Restoration

Choose the fix based on your diagnosis. If unsure, start with the transport service restart.

Option A: Restart Transport Service (Immediate Relief)

Restart Transport Service
# Stop and restart the Transport service
Restart-Service MSExchangeTransport -Force

# Verify service is running
Get-Service MSExchangeTransport | Select-Object Status, Name

# Check if back pressure cleared
Start-Sleep -Seconds 30
Get-ExchangeDiagnosticInfo -Process EdgeTransport -Component ResourceThrottling

Note: Restarting clears in-memory queues and resets resource counters. Messages in the queue database (mail.que) are preserved and will be processed after restart.
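
If you want extra reassurance, note the size and timestamp of mail.que before the restart and confirm it is still present afterwards. A quick sketch, assuming the queue database is in its default location:
# Confirm the queue database file exists and note its size (default path assumed)
$mailQue = Join-Path $env:ExchangeInstallPath "TransportRoles\data\Queue\mail.que"
Get-Item $mailQue |
  Select-Object FullName, LastWriteTime,
    @{Name="SizeMB";Expression={[math]::Round($_.Length/1MB,0)}}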

Option B: Free Disk Space (If Disk Pressure)

Emergency Disk Space Cleanup
# Find and clean old transport logs (if safe)
$logPath = "C:\Program Files\Microsoft\Exchange Server\V15\TransportRoles\Logs"

# Check log folder sizes
Get-ChildItem $logPath -Recurse -File |
  Group-Object DirectoryName |
  Select-Object Name, @{Name="SizeMB";Expression={[math]::Round(($_.Group | Measure-Object -Property Length -Sum).Sum/1MB,0)}} |
  Sort-Object SizeMB -Descending |
  Select-Object -First 10

# Delete protocol logs older than 7 days (adjust as needed)
$cutoffDate = (Get-Date).AddDays(-7)
Get-ChildItem "$logPath\Hub\ProtocolLog" -Recurse -Filter "*.LOG" |
  Where-Object {$_.LastWriteTime -lt $cutoffDate} |
  Remove-Item -Force -Verbose

# Clear IIS logs if on same drive
Get-ChildItem "C:\inetpub\logs\LogFiles" -Recurse -Filter "*.log" |
  Where-Object {$_.LastWriteTime -lt $cutoffDate} |
  Remove-Item -Force -Verbose

Option C: Reduce Queue Depth (If Queue Pressure)

Warning: Message Deletion

The commands below delete messages from the queue. Only use if you've identified spam or unwanted messages causing the backup.

Manage Queue Depth
# Identify the source of queue buildup
Get-Queue | Where-Object {$_.MessageCount -gt 100} |
  Get-Message -ResultSize 50 |
  Group-Object FromAddress |
  Select-Object Count, Name |
  Sort-Object Count -Descending

# If spam detected - remove messages from specific sender
# Get-Queue | Get-Message -Filter {FromAddress -like "*spam-domain.com"} | Remove-Message -Confirm:$false

# Force retry all queued messages (if stuck due to transient error)
Get-Queue | Where-Object {$_.Status -eq "Retry"} | Retry-Queue

# If queues were manually suspended, resume them
Get-Queue | Where-Object {$_.Status -eq "Suspended"} | Resume-Queue

Detailed Solution: Root Cause Remediation

Scenario 1: Persistent Disk Space Issues

If disk pressure recurs, move the queue database to a larger drive:

Move Queue Database to New Location
# 1. Check current location (configured in EdgeTransport.exe.config)
$edgeConfig = Join-Path $env:ExchangeInstallPath "Bin\EdgeTransport.exe.config"
Select-String -Path $edgeConfig -Pattern "QueueDatabasePath|QueueDatabaseLoggingPath"

# 2. Create new directory on larger drive
New-Item -ItemType Directory -Path "D:\ExchangeQueues\Queue" -Force
New-Item -ItemType Directory -Path "D:\ExchangeQueues\QueueLogs" -Force

# 3. Stop Transport service
Stop-Service MSExchangeTransport

# 4. Move queue database and logs
$oldQueuePath = "C:\Program Files\Microsoft\Exchange Server\V15\TransportRoles\data\Queue"
Move-Item "$oldQueuePath\mail.que" "D:\ExchangeQueues\Queue\"
Move-Item "$oldQueuePath\trn*.log" "D:\ExchangeQueues\QueueLogs\"

# 5. Update the QueueDatabasePath and QueueDatabaseLoggingPath keys in
#    %ExchangeInstallPath%Bin\EdgeTransport.exe.config (back up the file first):
#    <add key="QueueDatabasePath" value="D:\ExchangeQueues\Queue" />
#    <add key="QueueDatabaseLoggingPath" value="D:\ExchangeQueues\QueueLogs" />

# 6. Start Transport service
Start-Service MSExchangeTransport

Scenario 2: Memory Pressure Optimization

Optimize Transport Memory Usage
# 1. Check current memory limits in EdgeTransport.exe.config
$configPath = "C:\Program Files\Microsoft\Exchange Server\V15\Bin\EdgeTransport.exe.config"
[xml]$config = Get-Content $configPath
$config.configuration.appSettings.add | Where-Object {$_.key -like "*Memory*"}

# 2. Adjust memory thresholds (edit config file)
# Backup first!
Copy-Item $configPath "$configPath.backup"

# Key settings to consider:
# - PercentagePhysicalMemoryUsedLimit (default: depends on RAM)
# - DatabaseMaxCacheSize
# - MessageTrackingLogMaxAge

# 3. Limit max message size to reduce memory spikes
Set-TransportConfig -MaxReceiveSize 25MB -MaxSendSize 25MB

# 4. Set max concurrent connections
Set-ReceiveConnector "Default Frontend*" -MaxInboundConnection 1000 -MaxInboundConnectionPerSource 50

# 5. Restart service to apply
Restart-Service MSExchangeTransport

Scenario 3: Adjust Back Pressure Thresholds

Pro Tip: Only adjust back pressure thresholds if you've verified adequate hardware resources. Raising thresholds on an under-resourced server risks crashes.

Modify Back Pressure Thresholds (Advanced)
# Location: EdgeTransport.exe.config
# Backup the file first!

# Key threshold settings:
# DatabaseDiskSpaceMonitor - Disk space thresholds
#   - LowToMedium: 1GB (default)
#   - MediumToHigh: 2GB
#   - HighToMedium: 500MB
#
# MemoryBasedGradualRejectSettings - Memory thresholds
#   - PhysicalMemoryLimitInMB
#   - PrivateBytesLimitInMB

# Example: Increase disk thresholds for large queue drives
# Edit EdgeTransport.exe.config and add/modify:
<add key="DatabaseDiskSpaceHighThreshold" value="1073741824" />
<add key="DatabaseDiskSpaceMediumThreshold" value="2147483648" />

# After editing, restart Transport service
Restart-Service MSExchangeTransport

Scenario 4: Identify and Block Spam Source

Investigate Spam/Attack Source
# Find top senders by volume in queue
Get-Queue | Get-Message -ResultSize 1000 |
  Group-Object FromAddress |
  Sort-Object Count -Descending |
  Select-Object -First 20 Count, Name

# Find top recipient domains (identify relay abuse)
Get-Queue | Where-Object {$_.DeliveryType -eq "SmtpDeliveryToMailbox"} |
  Get-Message -ResultSize 1000 |
  Group-Object {($_.Recipients[0].Address -split "@")[1]} |
  Sort-Object Count -Descending |
  Select-Object -First 20 Count, Name

# Check connector logs for abuse source IP
$logPath = "C:\Program Files\Microsoft\Exchange Server\V15\TransportRoles\Logs\Hub\ProtocolLog\SmtpReceive"
Get-ChildItem $logPath -Filter "*.LOG" |
  Sort-Object LastWriteTime -Descending |
  Select-Object -First 1 |
  Get-Content -Tail 1000 |
  Select-String "RCPT TO" |
  ForEach-Object { ($_ -split ",")[3] } |
  Group-Object |
  Sort-Object Count -Descending |
  Select-Object -First 10

# Block an abusive IP at the connector level (this only works if the IP is listed as a
# discrete entry in RemoteIPRanges; otherwise block it at the firewall or via the
# IP Block List if the anti-spam agents are installed)
Set-ReceiveConnector "Default Frontend*" -RemoteIPRanges @{Remove="192.168.1.100"}

Verify the Fix

After applying fixes, run these checks to confirm mail flow is restored:

Verification Commands
# 1. Confirm no back pressure
Get-ExchangeDiagnosticInfo -Process EdgeTransport -Component ResourceThrottling

# 2. Verify queue depth decreasing
Get-Queue | Select-Object Identity, Status, MessageCount | Sort-Object MessageCount -Descending

# 3. Check for new 17018 events (should see none)
Get-EventLog -LogName Application -Source MSExchangeTransport -Newest 20 |
  Where-Object {$_.EventID -eq 17018}

# 4. Send test email
Send-MailMessage -From "admin@company.com" -To "test@external.com" -Subject "Mail Flow Test $(Get-Date)" -Body "Testing after 17018 fix" -SmtpServer localhost

# 5. Verify outbound delivery
Get-MessageTrackingLog -Start (Get-Date).AddMinutes(-15) -EventId SEND |
  Select-Object Timestamp, Sender, Recipients, MessageSubject |
  Format-Table -AutoSize

Success Indicators:

  • All resource pressures show Normal
  • Queue message count decreasing over time
  • No new Event ID 17018 entries
  • Test emails delivered within 1-2 minutes
  • Users report outbound mail flowing again

Prevention: Stop Event ID 17018 From Recurring

1. Implement Proactive Monitoring

Back Pressure Monitoring Script
# Schedule this script to run every 15 minutes
$threshold = "Normal"
[xml]$diag = Get-ExchangeDiagnosticInfo -Process EdgeTransport -Component ResourceThrottling
$pressures = $diag.Diagnostics.Components.ResourceThrottling.ResourceTracker.ResourceMeter |
  Where-Object {$_.Pressure -ne $threshold}

if ($pressures) {
    $body = "Back pressure detected on $env:COMPUTERNAME:`n"
    $body += $pressures | ForEach-Object { "$($_.Name): $($_.Pressure)`n"$_.Pressure)`n" }

    Send-MailMessage -To "admin@company.com" -From "exchange-alerts@company.com" `
      -Subject "ALERT: Exchange Back Pressure Active" `
      -Body $body -SmtpServer "backup-smtp.company.com"
}
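
One way to run this check every 15 minutes is a scheduled task. A minimal sketch, assuming the script above is saved to the hypothetical path C:\Scripts\Check-BackPressure.ps1 and loads the Exchange snap-in itself (Add-PSSnapin Microsoft.Exchange.Management.PowerShell.SnapIn), since plain powershell.exe does not:
# Register a task that runs the monitor every 15 minutes as SYSTEM
$action  = New-ScheduledTaskAction -Execute "powershell.exe" `
  -Argument "-NoProfile -ExecutionPolicy Bypass -File C:\Scripts\Check-BackPressure.ps1"
$trigger = New-ScheduledTaskTrigger -Once -At (Get-Date) `
  -RepetitionInterval (New-TimeSpan -Minutes 15)
Register-ScheduledTask -TaskName "Exchange Back Pressure Monitor" `
  -Action $action -Trigger $trigger -User "SYSTEM" -RunLevel Highest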

2. Configure Disk Space Alerts

  • Alert at 20% free space remaining
  • Critical alert at 10GB free
  • Emergency at 4GB free (back pressure threshold); a sample threshold check is sketched below
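
A minimal sketch of such a check, assuming the queue database sits on the C: drive (adjust the drive letter and wire the warnings into your monitoring tooling):
# Alert check against the thresholds above (queue database assumed on C:)
$disk = Get-WmiObject Win32_LogicalDisk -Filter "DeviceID='C:'"
$freeGB = [math]::Round($disk.FreeSpace/1GB,2)
$pctFree = [math]::Round(($disk.FreeSpace/$disk.Size)*100,1)
if ($freeGB -lt 4)       { Write-Warning "EMERGENCY: $freeGB GB free - back pressure imminent" }
elseif ($freeGB -lt 10)  { Write-Warning "CRITICAL: $freeGB GB free on queue drive" }
elseif ($pctFree -lt 20) { Write-Warning "WARNING: only $pctFree% free on queue drive" }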

3. Size Transport Infrastructure Properly

  • Minimum 8GB RAM for dedicated Transport server
  • 50GB+ dedicated disk for queue database
  • Separate physical disk for queue database and OS
  • Consider SSD for high-volume environments

4. Implement Message Size Limits

Configure Reasonable Limits
# Set organization-wide limits
Set-TransportConfig -MaxReceiveSize 25MB -MaxSendSize 25MB

# Set connector-specific limits
Get-ReceiveConnector | Set-ReceiveConnector -MaxMessageSize 25MB

# Limit concurrent connections to prevent overwhelming
Set-ReceiveConnector "Default Frontend*" -MaxInboundConnection 2000

5. Regular Maintenance Schedule

  • Weekly: Review queue statistics and clear stuck messages
  • Monthly: Archive and purge old transport logs
  • Quarterly: Review back pressure thresholds vs actual usage
  • After CU updates: Verify that threshold configs were preserved (a quick config diff is sketched below)
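
To spot-check the last item, you can diff the live EdgeTransport.exe.config against the backup copy taken before your edits. A minimal sketch, assuming the backup created in the earlier scenarios still exists:
# Compare the live config against the backup made before editing
$configPath = Join-Path $env:ExchangeInstallPath "Bin\EdgeTransport.exe.config"
Compare-Object (Get-Content $configPath) (Get-Content "$configPath.backup") |
  Select-Object SideIndicator, InputObject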

Mail Still Not Flowing? Get Expert Help Now.

If back pressure persists despite these fixes, you may have a deeper infrastructure issue - undersized hardware, storage I/O bottlenecks, or misconfigured routing. Our mail flow specialists diagnose and resolve complex transport issues.

Exchange Mail Flow Emergency Support

Average Response Time: 15 Minutes

Frequently Asked Questions

What causes Event ID 17018 in Exchange Server?

Event ID 17018 occurs when the Exchange Transport service lacks sufficient system resources to process email. Common causes include low available memory, excessive queue depth, disk space exhaustion on the transport database drive, high CPU utilization, or too many concurrent SMTP connections overwhelming the server.

Can't Resolve Event ID 17018?

Exchange errors can cause data loss or extended downtime. Our specialists are available 24/7 to help.

Emergency help - Chat with us

Medha Cloud Exchange Server Team

Microsoft Exchange Specialists

Our Exchange Server specialists have 15+ years of combined experience managing enterprise email environments. We provide 24/7 support, emergency troubleshooting, and ongoing administration for businesses worldwide.

15+ Years Experience · Microsoft Certified · 99.7% Success Rate · 24/7 Support