MEMORY_PRESSURE

Event ID 2160: Memory Pressure Detected

Complete troubleshooting guide for Exchange Server Event ID 2160 memory pressure causing performance degradation, database cache reduction, and potential service instability.


Error Overview

Event ID 2160: Memory Pressure Detected

"Memory resource pressure has been detected. Database cache target size has been reduced from 98304 MB to 65536 MB. Current available memory: 2048 MB. System will attempt to release memory to maintain stability."

What This Error Means

Event ID 2160 indicates that Exchange Server is experiencing memory pressure and must reduce its database cache to free up memory for other operations. This directly impacts performance because less cached data means more disk reads, resulting in slower mailbox access for users.

Memory Impact Areas

  • Database cache size reduction
  • Increased disk I/O
  • Higher RPC latency
  • Content indexing slowdown
  • Transport queue processing

Memory Thresholds

  • Medium pressure: <8% available
  • High pressure: <4% available
  • Critical: <2% available
  • Target cache: 25-50% of RAM
  • OS reserve: 8-16GB minimum
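
These thresholds can be checked programmatically. A minimal sketch (the threshold values mirror the list above; the classification logic is illustrative, not an Exchange API):

# Classify current memory pressure using the thresholds listed above
$os = Get-CimInstance Win32_OperatingSystem
$percentFree = [math]::Round(($os.FreePhysicalMemory / $os.TotalVisibleMemorySize) * 100, 2)

$pressureLevel = if ($percentFree -lt 2) { "CRITICAL" }
                 elseif ($percentFree -lt 4) { "HIGH" }
                 elseif ($percentFree -lt 8) { "MEDIUM" }
                 else { "NONE" }

Write-Host "Available memory: $percentFree% - pressure level: $pressureLevel"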

⚠️ Version Notice

This guide applies to Exchange Server 2016, 2019, and Subscription Edition. Exchange 2019 has improved memory management with the Metacache Database (MCDB) feature that can utilize additional SSD storage when RAM is constrained.

Symptoms & Detection

User-Reported Symptoms

  • Overall mail system is sluggish
  • Random disconnections from mailbox
  • OWA pages load slowly or timeout
  • Search takes much longer than usual
  • Email delivery delays

Administrator Detection

  • Event ID 2160 in Application log
  • Low available memory in Task Manager
  • Database cache hit ratio declining
  • High page file usage
  • Store Worker process memory alerts

Event Log Entry Example

Log Name:      Application
Source:        MSExchangeIS
Event ID:      2160
Level:         Warning
Description:   Memory resource pressure has been detected on this server.

               Resource Monitor Statistics:
               Available Memory: 2,048 MB (1.6% of total)
               Total Physical Memory: 131,072 MB
               Database Cache Target Size: 65,536 MB (reduced from 98,304 MB)
               Database Cache Current Size: 62,445 MB

               Actions Taken:
               - Database cache target reduced
               - Working set trimming initiated
               - Background maintenance deferred
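
To confirm how often the warning is firing, the same event can be pulled from the Application log with PowerShell. A short sketch (the 7-day look-back window is arbitrary; adjust as needed):

# List recent Event ID 2160 occurrences
Get-WinEvent -FilterHashtable @{
    LogName      = 'Application'
    ProviderName = 'MSExchangeIS'
    Id           = 2160
    StartTime    = (Get-Date).AddDays(-7)
} -ErrorAction SilentlyContinue |
    Select-Object TimeCreated, LevelDisplayName, Message |
    Format-List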

Common Causes

1. Insufficient Physical Memory

Server doesn't have enough RAM for the number of users and mailbox databases it hosts. This is the most common cause, especially when organizations grow without upgrading hardware.

Solution: Upgrade to 128GB minimum for production, 256GB for large deployments. Plan 2-4GB per active database plus OS overhead.

2. Third-Party Applications

Backup agents, antivirus software, monitoring tools, or other applications running on the Exchange server consuming memory that should be reserved for Exchange services.

Best Practice: Exchange servers should be dedicated - no other applications except required agents with minimal memory footprint.

3. Memory Leaks

Memory leaks in Exchange processes or third-party components cause memory usage to grow over time. Eventually available memory is exhausted, triggering pressure events.

Indicators: Memory usage increases steadily after restart, specific processes showing continuous growth, resolved temporarily by service restart.
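
One way to confirm a suspected leak is to sample process memory at intervals and watch for steady growth. A minimal sketch (the process name, sampling interval, and log path are placeholders; substitute the process you suspect):

# Sample a process's working set every 5 minutes and log it for trend analysis
$processName = "Microsoft.Exchange.Store.Worker"   # placeholder - suspected process
$samples = 12                                      # 12 samples x 5 minutes = 1 hour

for ($i = 1; $i -le $samples; $i++) {
    $ws = (Get-Process -Name $processName -ErrorAction SilentlyContinue |
        Measure-Object WorkingSet64 -Sum).Sum
    $wsGB = [math]::Round($ws / 1GB, 2)
    Add-Content "C:\Logs\ProcessMemorySamples.csv" "$(Get-Date -Format 'yyyy-MM-dd HH:mm'),$processName,$wsGB"
    Start-Sleep -Seconds 300
}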

4. Too Many Active Databases

Each active mailbox database requires memory for its cache. Hosting too many active databases on one server exhausts available memory, even with adequate total RAM.

Guideline: Limit to 5-10 active databases per server for 128GB RAM, or redistribute databases across more servers in DAG.

5. Virtual Memory Configuration

Improperly sized page file or virtual machine memory settings can trigger false memory pressure or prevent Exchange from properly managing memory.

Configuration: Page file should be RAM + 10MB (for crash dumps). For VMs, disable dynamic memory and assign fixed RAM allocation.
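
A quick way to check this configuration is to compare the allocated page file size against the RAM + 10MB guideline above (a hedged sketch; it only reports, it changes nothing):

# Compare allocated page file size to the RAM + 10 MB guideline
$ramMB = [math]::Round((Get-CimInstance Win32_ComputerSystem).TotalPhysicalMemory / 1MB)
$expectedMB = $ramMB + 10

Get-CimInstance Win32_PageFileUsage | ForEach-Object {
    Write-Host "$($_.Name): allocated $($_.AllocatedBaseSize) MB (guideline ~$expectedMB MB)"
}

# On virtual machines, also confirm dynamic memory is disabled and a fixed RAM allocation is assigned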

Diagnostic Steps

Step 1: Check Current Memory Status

# Get comprehensive memory information
$server = $env:COMPUTERNAME

# System memory overview
$os = Get-CimInstance Win32_OperatingSystem
$totalRAM = [math]::Round($os.TotalVisibleMemorySize/1MB, 2)
$freeRAM = [math]::Round($os.FreePhysicalMemory/1MB, 2)
$usedRAM = $totalRAM - $freeRAM
$percentFree = [math]::Round(($freeRAM/$totalRAM)*100, 2)

Write-Host "=== System Memory ===" -ForegroundColor Cyan
Write-Host "Total RAM: $totalRAM GB"
Write-Host "Used RAM: $usedRAM GB"
Write-Host "Free RAM: $freeRAM GB ($percentFree%)"$percentFree%)"

# Page file usage
$pageFile = Get-CimInstance Win32_PageFileUsage
Write-Host "`n=== Page File ===" -ForegroundColor Cyan
Write-Host "Allocated: $([math]::Round($pageFile.AllocatedBaseSize/1024,2)) GB"1024,2)) GB"
Write-Host "Current Usage: $([math]::Round($pageFile.CurrentUsage/1024,2)) GB"1024,2)) GB"
Write-Host "Peak Usage: $([math]::Round($pageFile.PeakUsage/1024,2)) GB"1024,2)) GB"

# Performance counters
$counters = @(
    "\$server\Memory\Available MBytes",
    "\$server\Memory\% Committed Bytes In Use",
    "\$server\Memory\Pages/sec",
    "\$server\Paging File(_Total)\% Usage"
)

Get-Counter -Counter $counters | ForEach-Object { $_.CounterSamples } |
    Format-Table Path, @{N='Value';E={[math]::Round($_.CookedValue,2)}} -AutoSize

Step 2: Analyze Exchange Memory Usage

# Check Exchange-specific memory counters
$server = $env:COMPUTERNAME

$exchangeCounters = @(
    "\$server\MSExchange Database(*)Database Cache Size (MB)",
    "\$server\MSExchange Database(*)Database Cache % Hit",
    "\$server\MSExchange Database(*)Database Page Fault Stalls/sec",
    "\$server\MSExchange Database(*)Log Record Stalls/sec"
)

Get-Counter -Counter $exchangeCounters | ForEach-Object { $_.CounterSamples } |
    Where-Object { $_.CookedValue -ne 0 } |
    Format-Table Path, @{N='Value';E={[math]::Round($_.CookedValue,2)}} -AutoSize

# Check Store Worker process memory
Get-Process -Name "Microsoft.Exchange.Store.Worker" -ErrorAction SilentlyContinue |
    Select-Object Id, @{N='WorkingSetGB';E={[math]::Round($_.WorkingSet64/1GB,2)}},
    @{N='PrivateMemGB';E={[math]::Round($_.PrivateMemorySize64/1GB,2)}},
    @{N='VirtualMemGB';E={[math]::Round($_.VirtualMemorySize64/1GB,2)}} |
    Format-Table -AutoSize

# Summary of all Exchange processes
Write-Host "`n=== Exchange Process Memory Usage ===" -ForegroundColor Cyan
Get-Process | Where-Object { $_.ProcessName -match "Exchange|MSExchange|EdgeTransport|w3wp" } |
    Select-Object ProcessName, Id, @{N='MemoryGB';E={[math]::Round($_.WorkingSet64/1GB,2)}} |
    Sort-Object MemoryGB -Descending | Format-Table -AutoSize

Step 3: Identify Memory-Hungry Processes

# Find top memory consumers
Write-Host "=== Top 15 Memory-Consuming Processes ==="-Consuming Processes ===" -ForegroundColor Cyan

Get-Process | Sort-Object WorkingSet64 -Descending | Select-Object -First 15 |
    Select-Object ProcessName, Id,
    @{N='Working Set (GB)';E={[math]::Round($_.WorkingSet64/1GB,2)}},
    @{N='Private Mem (GB)';E={[math]::Round($_.PrivateMemorySize64/1GB,2)}},
    @{N='CPU (sec)';E={[math]::Round($_.CPU,0)}} |
    Format-Table -AutoSize

# Check for unexpected high-memory processes
$unexpectedProcesses = Get-Process | Where-Object {
    $_.ProcessName -notmatch "Exchange|MSExchange|w3wp|System|Memory|sqlservr" -and
    $_.WorkingSet64 -gt 1GB
} | Select-Object ProcessName, @{N='MemoryGB';E={[math]::Round($_.WorkingSet64/1GB,2)}}

if ($unexpectedProcesses) {
    Write-Host "`nWARNING: Non-Exchange processes using >1GB RAM:" -ForegroundColor Yellow
    $unexpectedProcesses | Format-Table -AutoSize
}

# Check IIS application pools
Import-Module IISAdministration -ErrorAction SilentlyContinue
Get-IISAppPool | ForEach-Object {
    $pool = $_
    $worker = Get-Process -Id (Get-IISAppPool -Name $pool.Name).WorkerProcesses.ProcessId -ErrorAction SilentlyContinue
    if ($worker) {
        [PSCustomObject]@{
            AppPool = $pool.Name
            ProcessId = $worker.Id
            MemoryGB = [math]::Round($worker.WorkingSet64/1GB,2)
        }
    }
} | Sort-Object MemoryGB -Descending | Format-Table -AutoSize

Step 4: Check Database Count and Size

# Get database information
$databases = Get-MailboxDatabase -Status

Write-Host "=== Mailbox Databases ===" -ForegroundColor Cyan

$databases | Select-Object Name, Server, Mounted,
    @{N='SizeGB';E={[math]::Round($_.DatabaseSize.ToGB(),2)}},
    @{N='AvailableGB';E={[math]::Round($_.AvailableNewMailboxSpace.ToGB(),2)}},
    @{N='Mailboxes';E={(Get-Mailbox -Database $_.Name | Measure-Object).Count}} |
    Format-Table -AutoSize

# Count active databases per server
Write-Host "`n=== Active Databases Per Server ===" -ForegroundColor Cyan
Get-MailboxDatabaseCopyStatus * | Where-Object {$_.Status -eq "Mounted"} |
    Group-Object -Property MailboxServer |
    Select-Object @{N='Server';E={$_.Name}}, @{N='ActiveDatabases';E={$_.Count}} |
    Format-Table -AutoSize

# Memory calculation
$activeDBCount = ($databases | Where-Object {$_.Mounted}).Count
$recommendedRAMGB = ($activeDBCount * 4) + 16  # 4GB per DB + 16GB for OS
Write-Host "`nRecommended minimum RAM for $activeDBCount active databases: $recommendedRAMGB GB"$recommendedRAMGB GB" -ForegroundColor Yellow

💡 Pro Tip

A database cache hit ratio below 95% indicates memory pressure is actively impacting performance. Users will experience slower response times because Exchange must read from disk instead of memory cache.

Quick Fix

Immediate Memory Relief

These steps can provide immediate relief while planning permanent solutions:

# Step 1: Identify and stop non-essential processes
# List processes using significant memory that aren't Exchange
Get-Process | Where-Object {
    $_.ProcessName -notmatch "Exchange|MSExchange|w3wp|System|Idle" -and
    $_.WorkingSet64 -gt 500MB
} | Select-Object ProcessName, @{N='MemMB';E={[math]::Round($_.WorkingSet64/1MB)}} |
    Sort-Object MemMB -Descending

# Step 2: Recycle IIS application pools to release memory
# This causes brief interruption to OWA/ECP
Import-Module WebAdministration, IISAdministration
Get-IISAppPool | Where-Object {$_.State -eq "Started"} | ForEach-Object {
    Write-Host "Recycling app pool: $($_.Name)"
    Restart-WebAppPool -Name $_.Name
}

# Step 3: Clear working sets (temporary relief)
# Restart transport service (may queue mail briefly)
Restart-Service MSExchangeTransport -Force

# Step 4: In DAG environment - move databases to less-loaded server
# Check memory on all DAG members first
$dagServers = (Get-DatabaseAvailabilityGroup).Servers
foreach ($srv in $dagServers) {
    $mem = Invoke-Command -ComputerName $srv -ScriptBlock {
        $os = Get-CimInstance Win32_OperatingSystem
        [math]::Round($os.FreePhysicalMemory/1MB, 2)
    }
    Write-Host "$srv : $mem GB free"$mem GB free"
}

# Move database to server with more free memory (if needed)
# Move-ActiveMailboxDatabase "Database01" -ActivateOnServer "EXCH02" -Confirm:$false

# Step 5: Clear search index cache (if search is causing pressure)
# Stop-Service MSExchangeFastSearch
# Remove cached index files
# Start-Service MSExchangeFastSearch

Important: These are temporary measures. For lasting improvement, implement the permanent solutions below - especially adding more RAM.

Detailed Solutions

Solution 1: Add Physical Memory

The most effective solution for memory pressure - upgrade server RAM:

# Calculate required memory
$activeDBs = (Get-MailboxDatabase -Status | Where-Object {$_.Mounted}).Count
$mailboxCount = (Get-Mailbox -ResultSize Unlimited | Measure-Object).Count

# Memory calculation guidelines
$dbCacheRAM = $activeDBs * 4  # 4GB minimum per active database
$osRAM = 16                     # OS and Exchange services
$bufferRAM = 16                 # Buffer for spikes

$recommendedRAM = $dbCacheRAM + $osRAM + $bufferRAM

Write-Host "=== Memory Sizing Calculation ===" -ForegroundColor Cyan
Write-Host "Active Databases: $activeDBs"
Write-Host "Total Mailboxes: $mailboxCount"
Write-Host ""
Write-Host "Database Cache: $dbCacheRAM GB ($activeDBs DBs x 4GB)"$activeDBs DBs x 4GB)"
Write-Host "OS/Services: $osRAM GB"
Write-Host "Buffer: $bufferRAM GB"
Write-Host "--------------------------------"
Write-Host "Recommended Total: $recommendedRAM GB" -ForegroundColor Green
Write-Host ""
Write-Host "Microsoft Minimums:"
Write-Host "  Exchange 2016: 64GB (128GB recommended)"
Write-Host "  Exchange 2019: 128GB (256GB recommended)"

# Check current vs recommended
$currentRAM = [math]::Round((Get-CimInstance Win32_ComputerSystem).TotalPhysicalMemory/1GB)
if ($currentRAM -lt $recommendedRAM) {
    Write-Host "`nWARNING: Current RAM ($currentRAM GB) is below recommended ($recommendedRAM GB)"$recommendedRAM GB)" -ForegroundColor Red
}

RAM Upgrade Steps:

1. Order compatible DDR4 ECC RAM for your server model
2. Schedule maintenance window
3. Shut down Exchange services gracefully
4. Install RAM per server documentation
5. Boot and verify BIOS recognizes new RAM
6. Start Exchange services and monitor

Solution 2: Optimize Memory Usage

Configure Exchange and Windows for optimal memory utilization:

# Configure page file correctly
# Rule: Page file = RAM + 10MB (for complete memory dumps)

# Check current page file configuration
Get-CimInstance Win32_PageFileSetting | Select-Object Name, InitialSize, MaximumSize

# Set page file to system managed (recommended) or fixed size
# Via GUI: System Properties > Advanced > Performance Settings > Advanced > Virtual Memory

# For 128GB RAM server, set page file to approximately 130GB
# Place on fast SSD, not database volumes
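
# A minimal sketch for setting a fixed page file via CIM instead of the GUI
# (assumption: a single page file sized to RAM + 10MB is wanted; a reboot is required to apply):
# $targetMB = [math]::Round((Get-CimInstance Win32_ComputerSystem).TotalPhysicalMemory/1MB) + 10
# Get-CimInstance Win32_ComputerSystem | Set-CimInstance -Property @{AutomaticManagedPagefile = $false}
# Get-CimInstance Win32_PageFileSetting |
#     Set-CimInstance -Property @{InitialSize = $targetMB; MaximumSize = $targetMB}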

# Disable unnecessary services
$unnecessaryServices = @(
    "RemoteRegistry",           # Unless needed
    "Spooler",                  # Print spooler - usually not needed
    "WSearch"                   # Windows Search - Exchange has its own
)

foreach ($svc in $unnecessaryServices) {
    $service = Get-Service -Name $svc -ErrorAction SilentlyContinue
    if ($service -and $service.Status -eq "Running") {
        Write-Host "Consider disabling: $svc" -ForegroundColor Yellow
    }
}

# Optimize IIS application pool recycling
Import-Module WebAdministration

# Set regular recycling to off-peak hours
$pools = Get-ChildItem IIS:\AppPools | Where-Object { $_.Name -match "MSExchange" }
foreach ($pool in $pools) {
    # Clear default recycling time
    Clear-ItemProperty "IIS:\AppPools\$($pool.Name)" -Name recycling.periodicRestart.schedule

    # Set recycling to 3 AM
    Set-ItemProperty "IIS:\AppPools\$($pool.Name)" -Name recycling.periodicRestart.schedule -Value @{value="03:00:00"}

    # Set private memory limit (optional - 0 means unlimited)
    # Set-ItemProperty "IIS:\AppPools\$($pool.Name)" -Name recycling.periodicRestart.privateMemory -Value 0
}

Solution 3: Redistribute Database Load

Balance databases across DAG members to reduce per-server memory pressure:

# View current database distribution
Get-MailboxDatabaseCopyStatus * |
    Where-Object {$_.Status -eq "Mounted"} |
    Group-Object MailboxServer |
    Select-Object @{N='Server';E={$_.Name}}, @{N='ActiveDBs';E={$_.Count}} |
    Format-Table -AutoSize

# Calculate ideal distribution
$totalDBs = (Get-MailboxDatabase).Count
$servers = (Get-DatabaseAvailabilityGroup).Servers.Count
$idealPerServer = [math]::Ceiling($totalDBs / $servers)

Write-Host "`nIdeal distribution: $idealPerServer databases per server"

# Identify servers with too many active databases
$overloaded = Get-MailboxDatabaseCopyStatus * |
    Where-Object {$_.Status -eq "Mounted"} |
    Group-Object MailboxServer |
    Where-Object {$_.Count -gt $idealPerServer}

if ($overloaded) {
    Write-Host "`nOverloaded servers:" -ForegroundColor Yellow
    $overloaded | Format-Table @{N='Server';E={$_.Name}}, @{N='ActiveDBs';E={$_.Count}}
}

# Move databases to balance load
# Example: Move DB from overloaded server to underloaded server
# Move-ActiveMailboxDatabase "Database01" -ActivateOnServer "EXCH02" -SkipLagChecks -Confirm:$false

# For automated balancing, use RedistributeActiveDatabases.ps1
# Located in Exchange Scripts folder
$scriptsPath = (Get-ItemProperty 'HKLM:\SOFTWARE\Microsoft\ExchangeServer\v15\Setup').MsiInstallPath + "Scripts"
Write-Host "`nBalance script location: $scriptsPath\RedistributeActiveDatabases.ps1"

Solution 4: Remove Third-Party Memory Consumers

Exchange servers should be dedicated - remove or minimize other applications:

# Identify non-Exchange applications using memory
$exchangeProcesses = "Exchange|MSExchange|w3wp|EdgeTransport|Microsoft.Exchange"
$systemProcesses = "System|Idle|smss|csrss|wininit|services|lsass|svchost"

$thirdParty = Get-Process | Where-Object {
    $_.ProcessName -notmatch $exchangeProcesses -and
    $_.ProcessName -notmatch $systemProcesses -and
    $_.WorkingSet64 -gt 100MB
} | Select-Object ProcessName, @{N='MemoryMB';E={[math]::Round($_.WorkingSet64/1MB)}}, Path

Write-Host "=== Third-Party Processes Using >100MB ===" -ForegroundColor Yellow
$thirdParty | Sort-Object MemoryMB -Descending | Format-Table -AutoSize

# Common offenders to address:
Write-Host "`n=== Common Memory Offenders ===" -ForegroundColor Cyan
Write-Host "1. Backup agents - Schedule during off-hours, limit memory usage"-hours, limit memory usage"
Write-Host "2. Antivirus - Ensure exclusions are configured, limit scan threads"
Write-Host "3. Monitoring agents - Use lightweight agents or remote monitoring"
Write-Host "4. SCOM/SCCM agents - Optimize data collection intervals"
Write-Host "5. SQL Server instances - Move to dedicated server"

# Check for SQL instances
$sqlServices = Get-Service | Where-Object {$_.Name -match "SQL"}
if ($sqlServices) {
    Write-Host "`nWARNING: SQL Server found on Exchange server!" -ForegroundColor Red
    $sqlServices | Format-Table Name, Status, StartType
}

# Check installed applications
Get-ItemProperty HKLM:\Software\Microsoft\Windows\CurrentVersion\Uninstall\* |
    Where-Object {$_.DisplayName -and $_.DisplayName -notmatch "Microsoft|Windows|Exchange"} |
    Select-Object DisplayName, Publisher |
    Format-Table -AutoSize

🚨 Danger Zone

Never reduce Exchange memory allocation or set memory limits on Exchange processes. Exchange is designed to use all available memory for caching. Setting limits will severely degrade performance. Instead, add more RAM or reduce the workload.

Verification Steps

Verify Memory Pressure Resolution

# Comprehensive memory health check
$server = $env:COMPUTERNAME

Write-Host "=== Memory Health Verification ===" -ForegroundColor Cyan

# Check system memory
$os = Get-CimInstance Win32_OperatingSystem
$freePercent = [math]::Round(($os.FreePhysicalMemory / $os.TotalVisibleMemorySize) * 100, 2)
$freeGB = [math]::Round($os.FreePhysicalMemory / 1MB, 2)

$memStatus = if ($freePercent -gt 10) {"HEALTHY"} elseif ($freePercent -gt 4) {"WARNING"} else {"CRITICAL"}
$memColor = if ($freePercent -gt 10) {"Green"} elseif ($freePercent -gt 4) {"Yellow"} else {"Red"}

Write-Host "Available Memory: $freeGB GB ($freePercent%)"$freePercent%)" -ForegroundColor $memColor
Write-Host "Status: $memStatus" -ForegroundColor $memColor

# Check database cache health
$cacheCounters = Get-Counter @(
    "\$server\MSExchange Database(*)Database Cache Size (MB)",
    "\$server\MSExchange Database(*)Database Cache % Hit"
) | ForEach-Object { $_.CounterSamples }

$avgCacheHit = ($cacheCounters | Where-Object {$_.Path -match "Cache % Hit" -and $_.InstanceName -ne "_Total"} |
    Measure-Object CookedValue -Average).Average

Write-Host "`nDatabase Cache Hit Ratio: $([math]::Round($avgCacheHit,2))%"2))%"
if ($avgCacheHit -ge 98) {
    Write-Host "Cache Status: OPTIMAL" -ForegroundColor Green
} elseif ($avgCacheHit -ge 95) {
    Write-Host "Cache Status: ACCEPTABLE" -ForegroundColor Yellow
} else {
    Write-Host "Cache Status: DEGRADED - Consider adding RAM" -ForegroundColor Red
}

# Check for recent memory pressure events
$recentEvents = Get-WinEvent -FilterHashtable @{
    LogName = 'Application'
    ProviderName = 'MSExchangeIS'
    Id = 2160
    StartTime = (Get-Date).AddHours(-24)
} -ErrorAction SilentlyContinue

if ($recentEvents) {
    Write-Host "`nWARNING: $($recentEvents.Count) memory pressure events in last 24 hours"24 hours" -ForegroundColor Yellow
} else {
    Write-Host "`nNo memory pressure events in last 24 hours" -ForegroundColor Green
}

# Check page file usage
$pageUsage = (Get-Counter "\$server\Paging File(_Total)\% Usage" | Select-Object -ExpandProperty CounterSamples).CookedValue
Write-Host "`nPage File Usage: $([math]::Round($pageUsage,1))%"1))%"
if ($pageUsage -gt 50) {
    Write-Host "Page file usage is elevated - may indicate memory pressure" -ForegroundColor Yellow
}

✓ Success Indicators

  • Available memory > 10%
  • Cache hit ratio > 98%
  • No Event ID 2160 events
  • Page file usage < 25%

⚠ Warning Signs

  • Available memory 4-10%
  • Cache hit ratio 95-98%
  • Occasional pressure events
  • Page file usage 25-50%

✗ Failure Indicators

  • Available memory < 4%
  • Cache hit ratio < 95%
  • Frequent Event ID 2160
  • Page file usage > 75%

Prevention Strategies

Memory Monitoring

  • Set up alerts - Alert when available memory drops below 8%
  • Monitor cache hit ratio - Alert if cache hit drops below 97%
  • Track Event ID 2160 - Any occurrence should trigger investigation
  • Plan capacity - Review memory quarterly as mailboxes grow

Memory Monitoring Script

# Daily memory health check
# Schedule as Windows Task

$server = $env:COMPUTERNAME
$alertThreshold = 8  # percent

$os = Get-CimInstance Win32_OperatingSystem
$freePercent = [math]::Round(($os.FreePhysicalMemory / $os.TotalVisibleMemorySize) * 100, 2)

$cacheHit = (Get-Counter "\$server\MSExchange Database(*)\Database Cache % Hit" |
    Select-Object -ExpandProperty CounterSamples |
    Where-Object {$_.InstanceName -ne "_Total"} |
    Measure-Object CookedValue -Average).Average

$status = if ($freePercent -gt $alertThreshold) {"OK"} else {"ALERT"}

# Log daily metrics
$log = "$(Get-Date -Format 'yyyy-MM-dd HH:mm'),$server,$freePercent,$([math]::Round($cacheHit,2)),$status"
Add-Content "C:\Logs\MemoryHealth.csv" $log

# Send alert if needed
if ($freePercent -lt $alertThreshold) {
    # Insert alerting code here
    Write-Warning "Low memory: $freePercent%"
}

When to Escalate

Escalate to Exchange Specialist When:

  • Memory pressure persists after adding recommended RAM
  • Suspected memory leaks in Exchange processes
  • Need assistance with capacity planning and sizing
  • Complex DAG rebalancing required
  • Performance issues affecting business operations

Need Expert Exchange Memory Help?

Our Exchange Server specialists can diagnose memory issues, plan capacity upgrades, and optimize your environment for peak performance. Don't let memory problems impact your users.

Average response time for performance emergencies: 15 minutes.

Frequently Asked Questions

How much RAM does Exchange Server need to avoid memory pressure?

Exchange 2019 requires a minimum of 128GB RAM for production mailbox servers, and Microsoft recommends up to 256GB for optimal performance. The memory is primarily used for database caching - more RAM means more database pages cached in memory, reducing disk I/O and improving performance. A good rule: plan for 2-4GB per mailbox database plus 8-16GB for the OS and services.

Can't Resolve MEMORY_PRESSURE?

Exchange errors can cause data loss or extended downtime. Our specialists are available 24/7 to help.
