Event ID 4999: High RPC Latency
A complete troubleshooting guide for Exchange Server Event ID 4999 (high RPC latency), which causes slow Outlook performance, client timeouts, and a degraded user experience.
Error Overview
Event ID 4999: High RPC Latency
"Performance counter 'MSExchangeIS Store\RPC Latency average (msec)' has exceeded the threshold of 50 msec. Current value: 245 msec. This indicates degraded mailbox server performance affecting client connectivity."
What This Error Means
High RPC (Remote Procedure Call) latency indicates that Exchange is taking too long to process client requests. This directly impacts every Outlook operation - from opening emails to sending messages. Users experience a sluggish, unresponsive mailbox that affects productivity across the organization.
Performance Impact
- Outlook hangs and freezes
- Slow email delivery
- Search timeouts
- Calendar sluggishness
- Public folder delays
Latency Thresholds
- <10 ms: Excellent
- 10-25 ms: Good
- 25-50 ms: Acceptable
- 50-100 ms: Degraded
- >100 ms: Critical
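For quick triage, the current counter value can be mapped onto these buckets. The snippet below is a minimal sketch, assuming it runs locally on the mailbox server and uses the same counter path as the diagnostic steps later in this guide.
# Read the average store RPC latency and classify it against the thresholds above
$server = $env:COMPUTERNAME
$latency = (Get-Counter "\\$server\MSExchangeIS Store(*)\RPC Latency average (msec)").CounterSamples |
    Measure-Object CookedValue -Average |
    Select-Object -ExpandProperty Average
$status = switch ($latency) {
    { $_ -lt 10 }  { 'Excellent'; break }
    { $_ -lt 25 }  { 'Good'; break }
    { $_ -lt 50 }  { 'Acceptable'; break }
    { $_ -lt 100 } { 'Degraded'; break }
    default        { 'Critical' }
}
Write-Host ("Average RPC latency: {0:N1} ms - {1}" -f $latency, $status)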
Version Notice
This guide applies to Exchange Server 2016, 2019, and Subscription Edition. Performance counter names and thresholds may vary. Exchange 2019 introduces improved RPC handling that reduces latency under load.
Symptoms & Detection
User-Reported Symptoms
- ✗ Outlook shows "Not Responding" frequently
- ✗ Opening emails takes several seconds
- ✗ Switching folders is extremely slow
- ✗ Send/receive operations time out
- ✗ Calendar updates are delayed
Administrator Detection
- → Event ID 4999 entries in the Application log (a query sketch follows this list)
- → High RPC latency counters in PerfMon
- → Managed Availability health probe failures
- → Client connectivity alerts
- → RPC Client Access service warnings
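To confirm how often the warning is actually firing, a quick Application log query can be run on the mailbox server. This is a minimal sketch using standard cmdlets; adjust -MaxEvents to suit.
# Pull the most recent Event ID 4999 entries and show when they occurred
Get-WinEvent -FilterHashtable @{ LogName = 'Application'; Id = 4999 } -MaxEvents 25 |
    Select-Object TimeCreated, ProviderName, LevelDisplayName |
    Format-Table -AutoSize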
Event Log Entry Example
Log Name: Application
Source: MSExchange Common
Event ID: 4999
Level: Warning
Description: Watson report about to be sent for process id: 12456,
with parameters: E12IIS, c-RTL-AMD64, 15.02.0986.015,
M.E.RpcClientAccess, M.E.R.Server.RpcExecute,
RPC_S_CALL_FAILED_DNE, eb41, 15.02.0986.015.
Additional Information:
RPC Latency: 245ms (Threshold: 50ms)
Database: Mailbox Database 01
Active Connections: 847
CPU Usage: 78%
Common Causes
Slow Database I/O
Slow storage is the most common cause of high RPC latency. Every mailbox operation requires database reads and writes, so when disk latency rises, RPC operations queue up and performance degrades rapidly as the backlog grows.
Insufficient Memory
Exchange relies heavily on memory for caching database pages. When memory pressure occurs, the Information Store must read from disk instead of cache, dramatically increasing latency.
Database Fragmentation
Highly fragmented databases require more I/O operations to read the same data. This is especially problematic when databases haven't had maintenance performed or have grown significantly.
High CPU Utilization
When CPU is saturated, RPC requests cannot be processed quickly. This often occurs during peak usage, antivirus scanning, or when problematic mailboxes trigger expensive operations.
Excessive Concurrent Connections
Too many simultaneous client connections can overwhelm the server's ability to process requests efficiently. This commonly happens after server failovers when all clients reconnect simultaneously.
Diagnostic Steps
Step 1: Check Current RPC Latency
# Check RPC latency using Performance Counters
$server = $env:COMPUTERNAME
$counters = @(
"\$server\MSExchangeIS Store(*)RPC Latency average (msec)",
"\$server\MSExchangeIS Store(*)RPC Requests",
"\$server\MSExchangeIS Store(*)RPC Operations/sec",
"\$server\MSExchange RpcClientAccessRPC Averaged Latency",
"\$server\MSExchange RpcClientAccessRPC Operations/sec"
)
Get-Counter -Counter $counters -SampleInterval 2 -MaxSamples 5 |
ForEach-Object { $_.CounterSamples } |
Format-Table Path, CookedValue -AutoSize
# Quick one-liner for current latency
Get-Counter "\$server\MSExchangeIS Store(*)RPC Latency average (msec)" |
Select-Object -ExpandProperty CounterSamples | Format-Table InstanceName, CookedValueStep 2: Analyze Database Performance
# Check database I/O latency
$counters = @(
"\$server\MSExchange Database ==> Instances(*)I/O Database Reads (Attached) Average Latency",
"\$server\MSExchange Database ==> Instances(*)I/O Database Writes (Attached) Average Latency",
"\$server\MSExchange Database ==> Instances(*)I/O Log Writes Average Latency",
"\$server\MSExchange Database ==> Instances(*)Database Page Fault Stalls/sec"
)
Get-Counter -Counter $counters -SampleInterval 2 -MaxSamples 3 |
ForEach-Object { $_.CounterSamples } |
Where-Object { $_.CookedValue -gt 0 } |
Format-Table Path, @{N="Latency(ms)";E={[math]::Round($_.CookedValue,2)}} -AutoSize
# Check for database copy status (DAG environments)
Get-MailboxDatabaseCopyStatus * | Format-Table Name, Status, CopyQueueLength, ReplayQueueLength, ContentIndexState
Step 3: Check Server Resources
# Check CPU and Memory usage
Get-Counter @(
"\$server\Processor(_Total)\% Processor Time",
"\$server\Memory\Available MBytes",
"\$server\Memory\Pages/sec",
"\$server\Paging File(_Total)\% Usage"
) | ForEach-Object { $_.CounterSamples } | Format-Table Path, CookedValue
# Check Exchange process memory usage
Get-Process -Name Microsoft.Exchange.Store.Service, Microsoft.Exchange.Store.Worker, w3wp -ErrorAction SilentlyContinue |
Select-Object Name, @{N='CPU';E={$_.CPU}}, @{N='Memory(MB)';E={[math]::Round($_.WorkingSet64/1MB,2)}},
@{N='Handles';E={$_.HandleCount}}, @{N='Threads';E={$_.Threads.Count}} |
Format-Table -AutoSize
# Check database cache size
$cacheCounter = "\$server\MSExchange Database(*)Database Cache Size (MB)"
Get-Counter $cacheCounter | Select-Object -ExpandProperty CounterSamples |
Format-Table InstanceName, @{N="CacheMB";E={[math]::Round($_.CookedValue,0)}}Step 4: Identify Heavy Users/Mailboxes
# List the most recently active mailboxes on the server (a starting point for finding heavy users)
Get-MailboxStatistics -Server $server |
Sort-Object LastLogonTime -Descending |
Select-Object DisplayName, ItemCount, TotalItemSize, LastLogonTime, DatabaseName |
Select-Object -First 20 | Format-Table -AutoSize
# Check for large mailboxes that might cause issues
Get-MailboxStatistics -Server $server |
Where-Object {$_.TotalItemSize.Value.ToMB() -gt 10000} |
Select-Object DisplayName, ItemCount, @{N='SizeGB';E={[math]::Round($_.TotalItemSize.Value.ToGB(),2)}} |
Sort-Object SizeGB -Descending | Format-Table
# Find users with excessive folder counts
Get-MailboxFolderStatistics -Identity "user@domain.com" |
Measure-Object | Select-Object -ExpandProperty Count
# Check RPC client connections
Get-Counter "\$server\MSExchange RpcClientAccess\Connection Count" |
Select-Object -ExpandProperty CounterSamplesPro Tip
Use the Exchange Performance Analyzer tool or the Get-StoreUsageStatistics cmdlet to identify specific operations causing high latency. Focus on the "TimeInServer" metric which shows actual server-side processing time.
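Building on that tip, the sketch below pulls the top consumers by TimeInServer for the local server. Treat the property list in Select-Object as an example, since output properties can vary slightly between Exchange versions.
# Top mailboxes by server-side processing time in the sampled window
Get-StoreUsageStatistics -Server $env:COMPUTERNAME |
    Sort-Object TimeInServer -Descending |
    Select-Object -First 10 DisplayName, DatabaseName, TimeInServer, TimeInCPU |
    Format-Table -AutoSize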
Quick Fix
Immediate Relief Actions
These steps can provide immediate improvement while you investigate root causes:
# Step 1: Restart Information Store service (causes brief disruption)
# Only do this during low-usage periods
Restart-Service MSExchangeIS -Force
# Step 2: Clear RPC client cache
# Force clients to re-establish connections
Get-MailboxDatabase | ForEach-Object {
Write-Host "Moving database: $($_.Name)" -ForegroundColor Yellow
Move-ActiveMailboxDatabase $_.Name -Confirm:$false
}
# Step 3: Check and terminate problematic sessions
# Find users with abnormally high RPC operations
$threshold = 1000
Get-LogonStatistics -Server $server |
Where-Object {$_.MessagesTotal -gt $threshold} |
Format-Table UserName, MessagesTotal, FolderOperations
# Step 4: Quick memory relief - clear working sets (temporary)
# Recycle IIS application pools
Import-Module WebAdministration
Get-WebAppPoolState | Where-Object {$_.Value -eq "Started"} |
ForEach-Object { Restart-WebAppPool -Name $_.ItemXPath.Split("'")[1] -Verbose }
# Step 5: Check for and stop any running maintenance
Get-ScheduledTask | Where-Object {$_.TaskName -like "*Exchange*" -and $_.State -eq "Running"}
Caution: These quick fixes provide temporary relief. Use the detailed solutions below to address root causes and prevent recurrence.
Detailed Solutions
Solution 1: Optimize Storage Performance
Storage is the foundation of Exchange performance. Address disk I/O first:
# Check current disk performance
Get-Counter @(
"\$server\PhysicalDisk(*)Avg. Disk sec/Read",
"\$server\PhysicalDisk(*)Avg. Disk sec/Write",
"\$server\PhysicalDisk(*)Current Disk Queue Length",
"\$server\PhysicalDisk(*)Disk Reads/sec",
"\$server\PhysicalDisk(*)Disk Writes/sec"
) -SampleInterval 5 -MaxSamples 6 |
ForEach-Object { $_.CounterSamples } |
Where-Object {$_.InstanceName -ne "_total"} |
Format-Table Path, CookedValue -AutoSize
# Disk latency targets:
# - Database reads: < 20ms
# - Database writes: < 20ms
# - Log writes: < 10ms
# If latency is high, consider:
# 1. Migrate databases to SSD/NVMe storage
# 2. Separate database and log files to different volumes
# 3. Enable storage tiering if using hybrid arrays
# 4. Check RAID controller cache settings (ensure write-back with BBU)
# Move database to faster storage
$db = Get-MailboxDatabase "Mailbox Database 01"
Move-DatabasePath -Identity $db -EdbFilePath "D:\Exchange\Databases\DB01\DB01.edb" -LogFolderPath "L:\Exchange\Logs\DB01"
Solution 2: Increase Server Memory
Ensure Exchange has adequate memory for database caching:
# Check current memory allocation
Get-Counter @(
"\$server\MSExchange Database(*)Database Cache Size (MB)",
"\$server\MSExchange Database(*)Database Cache % Hit",
"\$server\Memory\Available MBytes",
"\$server\Memory\% Committed Bytes In Use"
) | ForEach-Object { $_.CounterSamples } | Format-Table Path, CookedValue
# Memory recommendations:
# - Exchange 2019: Minimum 128GB, recommended 256GB for large deployments
# - Database cache should be 25-50% of total RAM
# - Cache hit ratio should be > 98%
# Check if memory pressure is limiting cache
$cacheHit = Get-Counter "\\$server\MSExchange Database(*)\Database Cache % Hit"
$cacheHitValue = ($cacheHit.CounterSamples | Measure-Object CookedValue -Average).Average
if ($cacheHitValue -lt 98) {
Write-Host "WARNING: Cache hit ratio is $cacheHitValue%. Consider adding RAM." -ForegroundColor Red
}
# Configure page file appropriately
# Rule: Exchange 2016: page file = RAM + 10 MB, capped at 32 GB + 10 MB (for memory dumps); Exchange 2019: 25% of installed RAM
# Place on fast SSD, not database drives
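# A minimal, hedged sketch of applying the sizing rule via CIM. It assumes a page file
# entry already exists and uses MB values; adjust the formula for your Exchange version.
$ramMB = [int]((Get-CimInstance Win32_ComputerSystem).TotalPhysicalMemory / 1MB)
$pageMB = $ramMB + 10   # or [int]($ramMB * 0.25) for Exchange 2019
Get-CimInstance Win32_ComputerSystem | Set-CimInstance -Property @{AutomaticManagedPagefile = $false}
Get-CimInstance Win32_PageFileSetting | Set-CimInstance -Property @{InitialSize = $pageMB; MaximumSize = $pageMB}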
# Check for memory-hogging processes
Get-Process | Sort-Object WorkingSet64 -Descending |
Select-Object -First 10 Name, @{N='MemoryGB';E={[math]::Round($_.WorkingSet64/1GB,2)}}
Solution 3: Database Maintenance
Regular database maintenance reduces fragmentation and improves performance:
# Check database white space (indicates fragmentation level)
Get-MailboxDatabase -Status | Select-Object Name, DatabaseSize, AvailableNewMailboxSpace |
ForEach-Object {
$percentFree = [math]::Round(($_.AvailableNewMailboxSpace.ToMB() / $_.DatabaseSize.ToMB()) * 100, 2)
[PSCustomObject]@{
Database = $_.Name
SizeGB = [math]::Round($_.DatabaseSize.ToGB(), 2)
FreeSpaceGB = [math]::Round($_.AvailableNewMailboxSpace.ToGB(), 2)
PercentFree = $percentFree
}
} | Format-Table -AutoSize
# Schedule online maintenance (ensure it's enabled)
Get-MailboxDatabase | Set-MailboxDatabase -MaintenanceSchedule "Sun.1:00 AM-Sun.5:00 AM"
# For severe fragmentation, perform offline defragmentation
# WARNING: Requires database dismount - plan maintenance window!
# 1. Dismount database
Dismount-Database "Mailbox Database 01" -Confirm:$false
# 2. Run ESEUTIL defragmentation
# From command prompt (not PowerShell):
# eseutil /d "D:\Exchange\Databases\DB01\DB01.edb" /t "D:\Temp\tempdfrg.edb"
# 3. Remount database
Mount-Database "Mailbox Database 01"
# Alternative: Create new database and move mailboxes (no downtime per mailbox)
New-MailboxDatabase -Name "Mailbox Database 01-New" -Server EXCH01 -EdbFilePath "D:\ExDB\DB01New.edb" -LogFolderPath "L:\ExLogs\DB01New"
Get-Mailbox -Database "Mailbox Database 01" | New-MoveRequest -TargetDatabase "Mailbox Database 01-New"
Solution 4: Optimize Client Connections
Manage connection load and client behavior:
# Check current RPC connection count
Get-Counter "\$server\MSExchange RpcClientAccess\Connection Count" |
Select-Object -ExpandProperty CounterSamples
# Set RPC throttling policies
# Create a new throttling policy for heavy users
New-ThrottlingPolicy -Name "HighVolumeUsersPolicy" -RcaMaxConcurrency 20 -EwsMaxConcurrency 20 -CpaMaxConcurrency 20
# Apply to specific users
Set-Mailbox -Identity "heavyuser@domain.com" -ThrottlingPolicy "HighVolumeUsersPolicy"
# Configure RPC Client Access settings
Get-RpcClientAccess | Set-RpcClientAccess -MaximumConnections 65535 -EncryptionRequired $true
# Implement connection limits per user (registry - requires restart)
# HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\MSExchangeRPC\ParametersSystem
# MaxConnections REG_DWORD = 5000
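# A hedged sketch of applying the registry setting described above (value name per this
# guide - verify it in your environment before use); restart the RPC Client Access service afterwards
$rpcKey = "HKLM:\SYSTEM\CurrentControlSet\Services\MSExchangeRPC\ParametersSystem"
New-ItemProperty -Path $rpcKey -Name "MaxConnections" -PropertyType DWord -Value 5000 -Force
Restart-Service MSExchangeRPC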
# Force Cached Exchange Mode for Outlook clients
# This reduces server load significantly
# Use Group Policy: User Configuration > Administrative Templates > Microsoft Outlook > Exchange
Danger Zone
Never perform offline database defragmentation without a verified backup. Offline defragmentation creates a new database file - ensure you have sufficient disk space (2x database size) and a tested recovery plan before proceeding.
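Before dismounting anything, a quick pre-flight space check helps enforce the 2x rule above. The sketch below is an assumption-laden example: the database name and the drive letter that will hold the temporary .edb are placeholders.
# Compare free space on the temp volume against twice the database size
$db = Get-MailboxDatabase "Mailbox Database 01" -Status
$dbSizeGB = [math]::Round($db.DatabaseSize.ToGB(), 2)
$freeGB = [math]::Round((Get-Volume -DriveLetter D).SizeRemaining / 1GB, 2)
if ($freeGB -lt ($dbSizeGB * 2)) {
    Write-Host "Insufficient free space: $freeGB GB free, ~$([math]::Round($dbSizeGB * 2, 0)) GB needed" -ForegroundColor Red
} else {
    Write-Host "Space check passed: $freeGB GB free for a $dbSizeGB GB database" -ForegroundColor Green
}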
Verification Steps
Verify Performance Improvements
# Create a comprehensive performance baseline script
$server = $env:COMPUTERNAME
$duration = 300 # 5 minutes
$interval = 10 # 10 seconds
Write-Host "Collecting performance data for $duration seconds..." -ForegroundColor Cyan
$counters = @(
"\$server\MSExchangeIS Store(*)RPC Latency average (msec)",
"\$server\MSExchange Database ==> Instances(*)I/O Database Reads (Attached) Average Latency",
"\$server\MSExchange Database ==> Instances(*)I/O Database Writes (Attached) Average Latency",
"\$server\MSExchange Database(*)Database Cache % Hit",
"\$server\Processor(_Total)% Processor Time",
"\$server\Memory\Available MBytes"
)
$samples = Get-Counter -Counter $counters -SampleInterval $interval -MaxSamples ($duration/$interval)
# Calculate averages
$avgLatency = ($samples.CounterSamples |
Where-Object {$_.Path -match "RPC Latency"} |
Measure-Object CookedValue -Average).Average
$avgDbRead = ($samples.CounterSamples |
Where-Object {$_.Path -match "Database Reads"} |
Measure-Object CookedValue -Average).Average
Write-Host "`nPerformance Summary:" -ForegroundColor Green
Write-Host " Average RPC Latency: $([math]::Round($avgLatency,2)) ms"2)) ms"
Write-Host " Average DB Read Latency: $([math]::Round($avgDbRead,2)) ms"2)) ms"
# Target values:
# RPC Latency: < 25ms
# DB Read Latency: < 20ms
# Cache Hit: > 98%
if ($avgLatency -lt 25) {
Write-Host " Status: HEALTHY" -ForegroundColor Green
} elseif ($avgLatency -lt 50) {
Write-Host " Status: ACCEPTABLE" -ForegroundColor Yellow
} else {
Write-Host " Status: NEEDS ATTENTION" -ForegroundColor Red
}
✓ Success Indicators
- RPC latency < 25 ms average
- No Event ID 4999 warnings
- Users report snappy performance
- Cache hit ratio > 98%
⚠ Warning Signs
- RPC latency 25-50 ms
- Occasional slowness reports
- Cache hit ratio 95-98%
- Disk latency spikes
✗ Failure Indicators
- RPC latency > 100 ms
- Continuous Event ID 4999 warnings
- Widespread user complaints
- Database disconnections
Prevention Strategies
Proactive Monitoring
- ✓ Set up alerts: alert when RPC latency exceeds 30 ms
- ✓ Track trends: log daily averages to identify degradation
- ✓ Capacity planning: monitor growth and plan hardware upgrades
- ✓ Regular maintenance: schedule database maintenance weekly
Performance Monitoring Script
# Daily performance check script
# Schedule with Task Scheduler
$threshold = 50 # Alert threshold in ms
$server = $env:COMPUTERNAME
$latency = (Get-Counter "\$server\MSExchangeIS Store(*)RPC Latency average (msec)" |
Select-Object -ExpandProperty CounterSamples |
Measure-Object CookedValue -Average).Average
if ($latency -gt $threshold) {
# Send alert email
$params = @{
To = "exchange-admins@domain.com"
From = "monitoring@domain.com"
Subject = "High RPC Latency Alert: $server"
Body = "Current RPC latency: $([math]::Round($latency,2))ms"2))ms"
SmtpServer = "smtp.domain.com"
}
Send-MailMessage @params
}
# Log to file
$log = "$(Get-Date -Format 'yyyy-MM-dd HH:mm'),$server,$([math]::Round($latency,2))"-Format 'yyyy-MM-dd HH:mm'),$server,$([math]::Round($latency,2))"
Add-Content "C:LogsRpcLatency.csv" $logWhen to Escalate
Escalate to Microsoft or Exchange Specialist When:
- → RPC latency remains high after storage and memory optimization
- → Database corruption suspected or detected
- → Performance issues affecting all users organization-wide
- → Managed Availability continues to report failures
- → Need assistance with storage architecture redesign
Need Expert Exchange Performance Help?
Our Exchange Server performance specialists can diagnose complex latency issues, optimize your storage architecture, and implement monitoring solutions to prevent future problems.
15-minute average response time for performance emergencies
Can't Resolve HIGH_RPC_LATENCY?
Exchange errors can cause data loss or extended downtime. Our specialists are available 24/7 to help.
Medha Cloud Exchange Server Team
Microsoft Exchange Specialists
Our Exchange Server specialists have 15+ years of combined experience managing enterprise email environments. We provide 24/7 support, emergency troubleshooting, and ongoing administration for businesses worldwide.