How to Monitor a Process's Memory Usage Over Time in Batch Script
Identifying a memory leak is impossible with a single snapshot. You need to observe whether an application's RAM consumption is steadily climbing over hours or days, eventually starving the rest of the system. A Batch script that periodically polls the memory footprint of a specific executable and logs it to a CSV file gives you a timeline of the application's resource health. This data is invaluable for troubleshooting unstable servers or verifying that a code update hasn't introduced performance regressions.
This guide will explain how to log process memory usage over time using PowerShell from a Batch script.
Why PowerShell Is Necessary
Monitoring per-process memory in pure Batch faces several problems:
- `tasklist` formats memory with locale-specific separators: on English systems it shows `45,200 K`, on German systems `45.200 K`. Parsing these inconsistently formatted strings in Batch is fragile and error-prone.
- `wmic` is deprecated since Windows 10 21H1, and its output contains invisible `\r` characters that corrupt Batch variables.
- Batch `set /a` overflows at ~2 GB: any process using more than 2 GB of memory produces a byte count that exceeds Batch's 32-bit signed integer limit. The digit-truncation workaround (`%bytes:~0,-6%`) divides by 1,000,000 instead of 1,048,576, producing inaccurate results.
- Multi-instance processes (browsers, VS Code, Teams) spawn dozens of child processes under the same name. `tasklist` and `wmic` return one row per instance, requiring manual summation that Batch handles poorly.
PowerShell's Get-Process returns WorkingSet64 as a raw 64-bit integer, handles multiple instances with Measure-Object -Sum, and performs accurate division, solving all four problems.
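As a quick sanity check before wiring it into a Batch loop, the core query that the methods below embed can be run standalone in a PowerShell window (shown here for msedge; substitute any process name):

```powershell
# Sum WorkingSet64 (a raw 64-bit byte count) across every instance of the process
$p = Get-Process -Name 'msedge' -ErrorAction SilentlyContinue
if ($p) {
    $mb = [math]::Round(($p | Measure-Object WorkingSet64 -Sum).Sum / 1MB, 1)
    "{0} MB across {1} instance(s)" -f $mb, $p.Count
} else {
    'Process not running'
}
```

This is the same logic Method 1 runs inside its `for /f` loop, just formatted for interactive use.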
Method 1: Process Memory Monitor with CSV Logging
This method tracks a specific process's memory consumption over time, logging timestamped readings to a CSV file. It automatically sums memory across all instances of the process and handles the process-not-running case gracefully.
@echo off
setlocal
set "ProcessName=msedge"
set "LogFile=%~dp0mem_monitor_%ProcessName%.csv"
set "Interval=10"
title Memory Monitor: %ProcessName% - Logging every %Interval%s (Ctrl+C to stop)
:: Write CSV header if file is new
if not exist "%LogFile%" (
echo "Timestamp","WorkingSet_MB","Instances","Status" > "%LogFile%"
)
echo [INFO] Monitoring "%ProcessName%" every %Interval% seconds.
echo [INFO] Log file: %LogFile%
echo [INFO] Press Ctrl+C to stop.
echo.
:MonitorLoop
for /f "tokens=1-3 delims=|" %%a in ('powershell -NoProfile -Command "$p=Get-Process -Name '%ProcessName%' -EA SilentlyContinue; $ts=Get-Date -Format 'yyyy-MM-dd HH:mm:ss'; if($p){ $mb=[math]::Round(($p|Measure WorkingSet64 -Sum).Sum/1MB,1); Write-Output ($ts+'|'+$mb+'|'+$p.Count) } else { Write-Output ($ts+'|NOT_RUNNING|0') }" 2^>nul') do (
set "Timestamp=%%a"
set "MemMB=%%b"
set "Instances=%%c"
)
if "%MemMB%"=="NOT_RUNNING" (
echo [%Timestamp%] %ProcessName%: Not running
echo "%Timestamp%","0","0","Not Running" >> "%LogFile%"
) else (
echo [%Timestamp%] %ProcessName%: %MemMB% MB across %Instances% instance(s^)
echo "%Timestamp%","%MemMB%","%Instances%","Running" >> "%LogFile%"
)
timeout /t %Interval% >nul
goto :MonitorLoop
Sample output:
Console:
[2024-05-10 14:32:05] msedge: 487.3 MB across 12 instance(s)
[2024-05-10 14:32:15] msedge: 512.1 MB across 13 instance(s)
[2024-05-10 14:32:25] msedge: 498.7 MB across 12 instance(s)
CSV (mem_monitor_msedge.csv):
"Timestamp","WorkingSet_MB","Instances","Status"
"2024-05-10 14:32:05","487.3","12","Running"
"2024-05-10 14:32:15","512.1","13","Running"
"2024-05-10 14:32:25","498.7","12","Running"
Why WorkingSet64 instead of WorkingSet:
WorkingSet is a 32-bit property that overflows for processes using more than 2 GB of RAM. WorkingSet64 is the 64-bit equivalent and handles any amount of memory correctly. Modern applications (databases, browsers with many tabs, development tools) routinely exceed 2 GB.
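You can confirm the difference in property width from any PowerShell prompt; the declared types show `WorkingSet` as a 32-bit `int` and `WorkingSet64` as a 64-bit `long`:

```powershell
# Inspect the declared types of both properties on the current PowerShell process
Get-Process -Id $PID |
    Get-Member -Name WorkingSet, WorkingSet64 |
    Select-Object Name, Definition
```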
Why summing across instances matters:
A browser with 15 tab processes might use 40 MB each, for a total of 40 × 15 = 600 MB. If you only measure the first instance, you see 40 MB and conclude the browser is lightweight, missing the 560 MB consumed by the other instances.
Method 2: Memory Leak Detection with Trend Alerting
Beyond logging, you often want to detect when memory is growing continuously, the hallmark of a memory leak. This method tracks whether memory has increased on every reading for a configurable number of consecutive checks.
@echo off
setlocal EnableDelayedExpansion
set "ProcessName=msedge"
set "LogFile=%~dp0mem_leak_%ProcessName%.csv"
set "Interval=30"
set "GrowthLimit=6"
set "GrowthCount=0"
set "PrevMB=0"
title Memory Leak Monitor: %ProcessName%
if not exist "%LogFile%" (
echo "Timestamp","WorkingSet_MB","Delta_MB","Consecutive_Growth","Alert" > "%LogFile%"
)
echo [INFO] Monitoring "%ProcessName%" for memory leaks.
echo [INFO] Alert after %GrowthLimit% consecutive increases.
echo.
:LeakLoop
set "Alert=No"
set "Timestamp="
set "CurrMB="
:: The PowerShell command is kept on a single line so CMD's parser never sees an unescaped | character.
for /f "tokens=1-2 delims=|" %%a in ('powershell -NoProfile -Command "$p=Get-Process -Name '%ProcessName%' -EA SilentlyContinue; $ts=Get-Date -Format 'yyyy-MM-dd HH:mm:ss'; if($p){$mb=[math]::Round(($p|Measure WorkingSet64 -Sum).Sum/1MB,1); Write-Output ($ts+'|'+$mb)}else{Write-Output ($ts+'|NOT_RUNNING')}"') do (
set "Timestamp=%%a"
set "CurrMB=%%b"
)
if "!CurrMB!"=="NOT_RUNNING" (
echo [!Timestamp!] %ProcessName%: Not running
set "GrowthCount=0"
set "PrevMB=0"
echo "!Timestamp!","0","0","0","Process Not Running" >> "%LogFile%"
goto :LeakWait
)
:: Calculate delta using PowerShell (handles floating-point safely)
for /f "delims=" %%d in ('powershell -NoProfile -Command "[math]::Round(!CurrMB! - !PrevMB!, 1)"') do set "Delta=%%d"
:: Check if memory grew (Safe float comparison)
powershell -NoProfile -Command "if ([double]!CurrMB! -gt [double]!PrevMB! -and [double]!PrevMB! -gt 0) { exit 0 } else { exit 1 }" >nul 2>&1
if !errorlevel! == 0 (
set /a "GrowthCount+=1"
) else (
set "GrowthCount=0"
)
:: Alert if sustained growth
if !GrowthCount! geq %GrowthLimit% (
set "Alert=Yes"
echo [ALERT] %ProcessName% memory has grown for !GrowthCount! consecutive readings!
echo [ALERT] Current: !CurrMB! MB, Delta: +!Delta! MB
eventcreate /T WARNING /ID 501 /L APPLICATION /SO "MemoryMonitor" ^
/D "%ProcessName% possible memory leak: !CurrMB! MB after !GrowthCount! consecutive increases" >nul 2>&1
)
echo [!Timestamp!] !CurrMB! MB (delta: !Delta! MB, growth streak: !GrowthCount!^)
echo "!Timestamp!","!CurrMB!","!Delta!","!GrowthCount!","!Alert!" >> "%LogFile%"
set "PrevMB=!CurrMB!"
:LeakWait
timeout /t %Interval% >nul
goto :LeakLoop
How leak detection works:
A memory leak manifests as a working set that never decreases; it only grows, slowly but continuously. Normal applications fluctuate: memory rises during activity and falls during idle periods. This script tracks consecutive readings where memory increased compared to the previous reading. If memory grows on every single check for GrowthLimit consecutive readings (default: 6), it flags a potential leak.
Why this is better than a simple threshold:
A fixed threshold (e.g., "alert above 1 GB") fires immediately and stays firing, even if the application legitimately needs that much memory. Consecutive-growth detection fires only when memory is climbing without recovery, which is the specific pattern that indicates a leak.
Suggested intervals for leak detection:
Memory leaks typically manifest over hours, not seconds. Use longer intervals (30–60 seconds) with a higher consecutive limit (6–12) to avoid false positives from temporary allocation spikes.
Method 3: Detailed Memory Breakdown
When investigating a suspected leak, the total working set alone may not be enough. You may need to distinguish between private bytes (memory exclusive to the process), working set (physical RAM pages), and virtual memory (address space reserved).
@echo off
setlocal
set "ProcessName=msedge"
set "OutFile=%~dp0mem_detail_%ProcessName%.csv"
echo [INFO] Capturing detailed memory breakdown for %ProcessName%...
if not exist "%OutFile%" (
echo "Timestamp","Process","WorkingSet_MB","PrivateBytes_MB","VirtualMem_MB","PagedPool_KB","Instances" > "%OutFile%"
)
:: The PowerShell command stays on one line, and the output fields are joined with string
:: concatenation ($ts+'|'+...), so CMD's parser never sees an unescaped | character.
for /f "delims=" %%a in ('powershell -NoProfile -Command "$p=Get-Process -Name '%ProcessName%' -EA SilentlyContinue; $ts=Get-Date -Format 'yyyy-MM-dd HH:mm:ss'; if(-not $p){ Write-Output ($ts+'|NOT_FOUND') }else{ $ws=[math]::Round(($p|Measure WorkingSet64 -Sum).Sum/1MB,1); $pb=[math]::Round(($p|Measure PrivateMemorySize64 -Sum).Sum/1MB,1); $vm=[math]::Round(($p|Measure VirtualMemorySize64 -Sum).Sum/1MB,1); $pp=[math]::Round(($p|Measure PagedSystemMemorySize64 -Sum).Sum/1KB,1); Write-Output ($ts+'|'+$ws+'|'+$pb+'|'+$vm+'|'+$pp+'|'+$p.Count) }"') do (
for /f "tokens=1-6 delims=|" %%b in ("%%a") do (
if "%%c"=="NOT_FOUND" (
echo [INFO] "%ProcessName%" is not running.
) else (
echo "%%b","%ProcessName%","%%c","%%d","%%e","%%f","%%g" >> "%OutFile%"
echo [%%b] WS: %%c MB Private: %%d MB Virtual: %%e MB PagedPool: %%f KB (%%g instances^)
)
)
)
endlocal
exit /b 0
Memory metrics explained:
| Metric | Property | What It Measures |
|---|---|---|
| Working Set | WorkingSet64 | Physical RAM pages currently assigned to the process |
| Private Bytes | PrivateMemorySize64 | Memory that cannot be shared with other processes (strongest leak indicator) |
| Virtual Memory | VirtualMemorySize64 | Total address space reserved (includes memory-mapped files, shared DLLs) |
| Paged Pool | PagedSystemMemorySize64 | Kernel memory allocated on behalf of the process |
For leak detection, Private Bytes is the most important metric. Working set can fluctuate as Windows pages memory in and out, but private bytes only grow when the process allocates memory it doesn't release.
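A quick one-off reading of private bytes, using the same `PrivateMemorySize64` property that Method 3 logs (process name illustrative), might look like this:

```powershell
# Total private bytes across all instances, in MB
$p = Get-Process -Name 'msedge' -ErrorAction SilentlyContinue
if ($p) {
    $privMB = [math]::Round(($p | Measure-Object PrivateMemorySize64 -Sum).Sum / 1MB, 1)
    "Private bytes: $privMB MB"
}
```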
How to Avoid Common Errors
Wrong Way: Parsing tasklist Output for Memory Values
:: BROKEN: locale-dependent formatting breaks parsing
for /f "tokens=5" %%a in ('tasklist /fi "imagename eq chrome.exe" /nh') do set "mem=%%a"
`tasklist` formats memory with locale-specific thousands separators (`,` in English, `.` in German, a space in French) and appends ` K`. The token position also shifts depending on whether the process name or PID has more or fewer digits. This makes reliable parsing nearly impossible.
Correct Way: Use Get-Process in PowerShell, which returns WorkingSet64 as a raw integer.
Wrong Way: Using Digit Truncation for Unit Conversion
:: WRONG - divides by 1,000,000 instead of 1,048,576 (1 MB)
set /a "mb=%bytes:~0,-6%"
Dropping the last 6 digits divides by 1,000,000, not by 1,048,576 (the actual number of bytes in a megabyte). For a 2 GB process (2,147,483,648 bytes), truncation produces 2147 instead of the correct 2048. Additionally, this fails entirely if the byte count has fewer than 7 digits.
Correct Way: Use PowerShell's [math]::Round($bytes / 1MB, 1) for accurate conversion.
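The difference is easy to verify in PowerShell. For the 2 GB example above, binary division yields 2048 MB, while dropping six digits (effectively dividing by 1,000,000) yields 2147:

```powershell
$bytes = 2147483648                 # exactly 2 GB
[math]::Round($bytes / 1MB, 1)      # correct conversion: 2048
[math]::Floor($bytes / 1000000)     # what digit truncation effectively does: 2147
```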
Problem: Process Not Running
If the target process is not running, Get-Process returns nothing and any attempt to measure or sum its properties fails.
Solution: All methods in this guide check for a null/empty result from Get-Process before attempting calculations. When the process is not running, they log a "Not Running" entry to the CSV to maintain a continuous timeline, making it visible exactly when the process started and stopped.
Problem: Multiple Instances Underreporting
A browser with 15 sub-processes shows as 15 separate entries. Measuring only the first instance captures roughly 1/15th of the actual memory usage.
Solution: All methods use Measure-Object -Sum to total memory across all instances of the target process name, providing an accurate application-level total.
Best Practices and Rules
1. Track Private Bytes for Leak Detection
Working set (physical RAM) fluctuates as Windows manages page allocation. Private bytes only grow when the process allocates memory it doesn't release. A steadily climbing private bytes value with no decrease during idle periods is the strongest indicator of a memory leak.
2. Use Appropriate Intervals
| Monitoring Goal | Recommended Interval |
|---|---|
| Active debugging (minutes) | 5–10 seconds |
| Leak detection (hours) | 30–60 seconds |
| Long-term trend (days) | 5–15 minutes |
Shorter intervals produce more data points but generate larger log files and incur more monitoring overhead.
3. Log Process-Not-Running Entries
When the target process is not running, write a "Not Running" entry with a zero value rather than skipping the log entry. This preserves a continuous timeline and makes it immediately obvious when the process started, stopped, or restarted.
4. Include Instance Count
The number of child processes changes over time (new browser tabs, worker threads). Logging the instance count alongside total memory helps distinguish between "more memory per instance" (possible leak) and "more instances" (normal scaling).
5. Graph the Results
A CSV with thousands of rows is difficult to interpret visually. Open the file in Excel or a similar tool, graph the WorkingSet_MB or PrivateBytes_MB column over time, and look for the characteristic "sawtooth" pattern (allocate/release cycles, healthy) versus a "staircase" pattern (allocate but never release, leak).
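Before opening the file in Excel, a rough numeric summary can be computed directly from the log with `Import-Csv`. This sketch assumes the Method 1 file name and column headers:

```powershell
# Summarize the Method 1 log: sample count, min/max/average working set
$rows  = Import-Csv "$PSScriptRoot\mem_monitor_msedge.csv" |
    Where-Object Status -eq 'Running'
$stats = $rows | Measure-Object -Property WorkingSet_MB -Minimum -Maximum -Average
"Samples: {0}  Min: {1} MB  Max: {2} MB  Avg: {3} MB" -f `
    $stats.Count, $stats.Minimum, $stats.Maximum, [math]::Round($stats.Average, 1)
```

A rising average across successive runs, with the minimum never returning to earlier levels, is the numeric equivalent of the "staircase" pattern described above.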
Conclusions
Monitoring process memory over time is the definitive way to prove or disprove a suspected memory leak. By logging periodic snapshots with proper multi-instance aggregation, accurate unit conversion, and continuous-growth detection, you gain a powerful diagnostic tool that reveals application behavior invisible to single-point measurements. This proactive auditing is a cornerstone of professional system stability and performance optimization.