How to Run Performance Monitor (perfmon) Data Collection in Batch Script
Performance Monitor (perfmon.exe) tracks system metrics such as CPU usage, memory consumption, disk I/O, and network throughput in real time. For diagnosing intermittent performance bottlenecks or monitoring server health over extended periods, automated data collection is essential. The logman command-line utility creates, starts, and stops "Data Collector Sets": performance logging configurations that run in the background and save results to log files for later analysis.
This guide explains how to automate performance data collection with logman.
Understanding the Components
Performance Monitoring Architecture:
┌──────────────────────────────────────┐
│ Performance Counters │
│ (CPU, RAM, Disk, Network metrics) │
│ Built into Windows kernel/drivers │
└──────────────┬───────────────────────┘
│
▼
┌──────────────────────────────────────┐
│ Data Collector Set │ ← logman create/start/stop
│ (Configuration: which counters, │
│ sample interval, output format) │
└──────────────┬───────────────────────┘
│
▼
┌──────────────────────────────────────┐
│ Log File (.blg, .csv) │ ← relog for format conversion
│ (Time-series data for analysis) │ ← perfmon.exe for visualization
└──────────────────────────────────────┘
Key logman commands:
| Command | Purpose |
|---|---|
| `logman create counter` | Define a new Data Collector Set |
| `logman start` | Begin data collection |
| `logman stop` | End data collection |
| `logman delete` | Remove the collector set definition |
| `logman query` | List existing collector sets |
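Taken together, these commands form a create, start, stop, delete lifecycle. The snippet below is a minimal sketch of that lifecycle for an interactive elevated prompt; the collector name QuickCheck and the output path are placeholders, not names used elsewhere in this guide:

```batch
:: Minimal lifecycle: define a collector, start it, later stop and remove it.
:: "QuickCheck" and the output path are placeholder names.
logman create counter QuickCheck -c "\Memory\Available MBytes" -o "%TEMP%\quickcheck.blg" -si 5
logman start QuickCheck
:: ...collection runs in the background...
logman stop QuickCheck
logman delete QuickCheck
```

The Memory counter is used here because its path contains no % sign; counter paths that do contain % need escaping inside batch scripts, as covered in the common-errors section.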
Creating and managing performance logs requires access to system performance counters and protected filesystem locations. You MUST run as Administrator or the logman commands will fail.
Method 1: Basic System Health Monitoring
This method creates a Data Collector Set that tracks the most important system health metrics: CPU, memory, disk, and network. It runs for a configurable duration, then stops and converts the output to CSV.
Implementation
@echo off
setlocal
set "SetName=SystemHealthCheck"
set "LogDir=%~dp0PerfLogs"
set "LogFile=%LogDir%\%SetName%_%COMPUTERNAME%.blg"
set "DurationMinutes=%~1"
set "SampleInterval=5"
if "%DurationMinutes%"=="" (
echo Usage: %~nx0 ^<duration_minutes^> [sample_interval_seconds]
echo.
echo Collects CPU, memory, disk, and network performance data.
echo.
echo Examples:
echo %~nx0 15 Monitor for 15 minutes (5-second samples^)
echo %~nx0 60 10 Monitor for 1 hour (10-second samples^)
echo %~nx0 1440 30 Monitor for 24 hours (30-second samples^)
endlocal
exit /b 1
)
if not "%~2"=="" set "SampleInterval=%~2"
:: Check admin rights
net session >nul 2>&1
if errorlevel 1 (
echo [ERROR] Performance monitoring requires administrator privileges. >&2
endlocal
exit /b 1
)
:: Create log directory
if not exist "%LogDir%\" mkdir "%LogDir%"
echo ============================================================
echo Performance Data Collection
echo ============================================================
echo.
echo Duration: %DurationMinutes% minute(s^)
echo Interval: %SampleInterval% second(s^)
echo Output: %LogFile%
echo.
:: =============================================
:: Step 1: Clean up any previous collector with the same name
:: =============================================
logman stop %SetName% >nul 2>&1
logman delete %SetName% >nul 2>&1
:: =============================================
:: Step 2: Create the Data Collector Set
:: =============================================
echo [1/4] Creating Data Collector Set...
:: Calculate duration in seconds for the -rf flag
set /a "DurationSeconds=%DurationMinutes% * 60"
logman create counter %SetName% ^
-c "\Processor(_Total)\%% Processor Time" ^
-c "\Processor(_Total)\%% Privileged Time" ^
-c "\Memory\Available MBytes" ^
-c "\Memory\Pages/sec" ^
-c "\PhysicalDisk(_Total)\%% Disk Time" ^
-c "\PhysicalDisk(_Total)\Avg. Disk Queue Length" ^
-c "\PhysicalDisk(_Total)\Disk Bytes/sec" ^
-c "\Network Interface(*)\Bytes Total/sec" ^
-c "\System\Processor Queue Length" ^
-o "%LogFile%" ^
-f bin ^
-si %SampleInterval% ^
-rf %DurationSeconds% >nul 2>&1
if errorlevel 1 (
echo [ERROR] Failed to create Data Collector Set. >&2
echo This may happen if a collector with this name already exists. >&2
echo Try: logman delete %SetName% >&2
endlocal
exit /b 1
)
echo [OK] Collector created with %SampleInterval%-second sample interval.
:: =============================================
:: Step 3: Start the collection
:: =============================================
echo [2/4] Starting data collection...
logman start %SetName% >nul 2>&1
if errorlevel 1 (
echo [ERROR] Failed to start data collection. >&2
logman delete %SetName% >nul 2>&1
endlocal
exit /b 1
)
echo [OK] Collection active.
echo.
echo [INFO] Collecting data for %DurationMinutes% minute(s^)...
echo [INFO] The collector runs in the background - closing this window will not stop it.
echo [INFO] The -rf duration stops collection and saves data automatically,
echo [INFO] but closing early skips the CSV conversion step below.
echo.
:: Calculate end time for display
for /f "usebackq delims=" %%t in (
`powershell -NoProfile -Command "(Get-Date).AddMinutes(%DurationMinutes%).ToLongTimeString()"`
) do echo [INFO] Collection will end at approximately: %%t
echo.
:: =============================================
:: Step 4: Wait and finalize
:: =============================================
echo [3/4] Waiting for collection to complete...
echo (Press Ctrl+C to stop early - data collected so far is preserved^)
echo.
:: Wait for the duration
set /a "WaitSeconds=%DurationSeconds% + 5"
timeout /t %WaitSeconds% /nobreak >nul
:: Stop and clean up
logman stop %SetName% >nul 2>&1
logman delete %SetName% >nul 2>&1
:: =============================================
:: Step 5: Convert to CSV
:: =============================================
echo [4/4] Converting to CSV for analysis...
set "CSVFile=%LogFile:.blg=.csv%"
relog "%LogFile%" -f csv -o "%CSVFile%" >nul 2>&1
if not errorlevel 1 (
echo [OK] CSV saved: %CSVFile%
) else (
echo [WARNING] CSV conversion failed. BLG file is still available. >&2
)
echo.
echo ============================================================
echo Collection complete.
echo ============================================================
echo.
echo Binary log: %LogFile%
echo CSV report: %CSVFile%
echo.
echo To view the graph: double-click the .blg file
echo To analyze in Excel: open the .csv file
echo ============================================================
endlocal
exit /b 0
Why these specific counters:
| Counter | What It Reveals | Warning Signs |
|---|---|---|
| `\Processor(_Total)\% Processor Time` | Overall CPU utilization | Sustained >80% indicates CPU bottleneck |
| `\Processor(_Total)\% Privileged Time` | Time in kernel mode (drivers, OS) | >30% may indicate driver issues |
| `\Memory\Available MBytes` | Free physical RAM | Below 10% of total = memory pressure |
| `\Memory\Pages/sec` | Hard page faults (disk swapping) | Sustained >1000 = excessive paging |
| `\PhysicalDisk(_Total)\% Disk Time` | Disk busy percentage | Sustained >80% = disk bottleneck |
| `\PhysicalDisk(_Total)\Avg. Disk Queue Length` | Requests waiting for disk | >2 per physical disk = bottleneck |
| `\PhysicalDisk(_Total)\Disk Bytes/sec` | Total throughput | Context-dependent; baseline comparison |
| `\Network Interface(*)\Bytes Total/sec` | Network throughput | Compare to link speed for saturation |
| `\System\Processor Queue Length` | Threads waiting for CPU time | >2x core count = CPU contention |
Why the -rf (run for) flag:
-rf %DurationSeconds% tells logman to stop the collection automatically after the specified number of seconds. Without it, the collection runs indefinitely until manually stopped. That is useful for open-ended monitoring, but problematic for automated scripts that need a defined endpoint.
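Per Microsoft's logman reference, the -rf value can be given either as plain seconds or in hh:mm:ss form, so the two commands below request the same 15-minute run (the collector names and output paths are placeholders):

```batch
:: Equivalent -rf spellings: 900 seconds vs. 00:15:00.
:: "RfDemo"/"RfDemo2" and the output paths are placeholder names.
logman create counter RfDemo -c "\Memory\Available MBytes" -o "%TEMP%\rf_demo.blg" -si 5 -rf 900
logman create counter RfDemo2 -c "\Memory\Available MBytes" -o "%TEMP%\rf_demo2.blg" -si 5 -rf 00:15:00
```

Whichever form you use, match the sample interval to the duration; the table below gives reasonable starting points.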
| Scenario | Recommended Interval |
|---|---|
| Quick diagnostic (5–15 min) | 1–5 seconds |
| Hourly monitoring | 10–15 seconds |
| 24-hour baseline | 30–60 seconds |
| Multi-day trend analysis | 60–300 seconds |
Shorter intervals produce larger log files but capture spikes that longer intervals miss. For a 24-hour collection at 5-second intervals with 10 counters, expect approximately 50–100 MB of data.
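That 50–100 MB figure can be sanity-checked with batch arithmetic. The 300-bytes-per-sample constant below is an assumption chosen to be consistent with the estimate above, not a documented value; real BLG encoding overhead varies:

```batch
@echo off
:: Back-of-envelope BLG size estimate for 24 hours at a 5-second
:: interval with 10 counters. 300 bytes/sample is an assumed average.
set /a "Samples=(24*3600)/5"
set /a "TotalBytes=Samples*10*300"
set /a "SizeMB=TotalBytes/1048576"
echo %Samples% samples x 10 counters = roughly %SizeMB% MB
```

With these numbers the script reports 17280 samples and roughly 49 MB, at the low end of the quoted range.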
Method 2: Background Monitoring with Automatic Start/Stop
For ongoing server monitoring, start the collector, let it run in the background, and stop it later (or let it stop automatically after a duration).
Start Monitoring
@echo off
setlocal
set "SetName=ServerMonitor"
set "LogDir=C:\PerfLogs"
set "MaxSizeMB=100"
net session >nul 2>&1
if errorlevel 1 (
echo [ERROR] Administrator privileges required. >&2
endlocal
exit /b 1
)
:: Clean up any existing set
logman stop %SetName% >nul 2>&1
logman delete %SetName% >nul 2>&1
if not exist "%LogDir%\" mkdir "%LogDir%"
echo [ACTION] Creating background performance monitor...
:: Create with circular logging (overwrites when max size reached)
logman create counter %SetName% ^
-c "\Processor(_Total)\%% Processor Time" ^
-c "\Memory\Available MBytes" ^
-c "\PhysicalDisk(_Total)\%% Disk Time" ^
-c "\PhysicalDisk(_Total)\Avg. Disk Queue Length" ^
-c "\Network Interface(*)\Bytes Total/sec" ^
-o "%LogDir%\%SetName%" ^
-f bin ^
-si 15 ^
-cnf 01:00:00 ^
-max %MaxSizeMB% >nul 2>&1
if errorlevel 1 (
echo [ERROR] Failed to create monitor. >&2
endlocal
exit /b 1
)
logman start %SetName% >nul 2>&1
if not errorlevel 1 (
echo [OK] Background monitoring started: %SetName%
echo Counters: CPU, Memory, Disk, Network
echo Interval: 15 seconds
echo Max size: %MaxSizeMB% MB per log segment
echo Rotation: New log file every hour (-cnf 01:00:00^)
echo Location: %LogDir%\
echo.
echo To stop: logman stop %SetName%
echo To check: logman query %SetName%
) else (
echo [ERROR] Failed to start monitoring. >&2
)
endlocal
exit /b 0
Stop Monitoring
@echo off
setlocal
set "SetName=ServerMonitor"
echo [ACTION] Stopping background performance monitor...
logman stop %SetName% >nul 2>&1
logman delete %SetName% >nul 2>&1
echo [OK] Monitoring stopped.
echo [INFO] Log files are in C:\PerfLogs\
endlocal
exit /b 0
Check Monitoring Status
@echo off
echo [INFO] Active Data Collector Sets:
echo --------------------------------------------------
echo.
logman query
echo --------------------------------------------------
Why -cnf 01:00:00:
The -cnf (create new file) flag creates a new log file segment at the specified interval. This prevents a single log file from growing indefinitely and makes it easier to analyze specific time windows. Each file is automatically named with a sequence number (e.g., ServerMonitor_000001.blg, ServerMonitor_000002.blg).
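Those hourly segments can be stitched back together afterwards, since relog accepts multiple input files. The paths below assume the C:\PerfLogs location and the sequence-numbered names described above:

```batch
:: Merge two rotated segments into one binary log, then convert to CSV.
:: File names assume the -cnf naming pattern described above.
relog "C:\PerfLogs\ServerMonitor_000001.blg" "C:\PerfLogs\ServerMonitor_000002.blg" ^
    -f bin -o "C:\PerfLogs\ServerMonitor_merged.blg"
relog "C:\PerfLogs\ServerMonitor_merged.blg" -f csv -o "C:\PerfLogs\ServerMonitor_merged.csv"
```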
Method 3: Process-Specific Performance Collection
Use this method when investigating a specific application's resource usage. It is useful for diagnosing memory leaks, high CPU consumption, or I/O patterns in a particular process.
@echo off
setlocal
set "ProcessName=%~1"
set "DurationMin=%~2"
if "%DurationMin%"=="" (
echo Usage: %~nx0 ^<process_name^> ^<duration_minutes^>
echo.
echo Example: %~nx0 sqlservr 30
echo %~nx0 w3wp 60
endlocal
exit /b 1
)
net session >nul 2>&1
if errorlevel 1 (
echo [ERROR] Administrator privileges required. >&2
endlocal
exit /b 1
)
set "SetName=ProcessMonitor_%ProcessName%"
set "LogDir=%~dp0PerfLogs"
set "LogFile=%LogDir%\%ProcessName%_perf.blg"
set /a "DurationSec=%DurationMin% * 60"
if not exist "%LogDir%\" mkdir "%LogDir%"
:: Clean up
logman stop %SetName% >nul 2>&1
logman delete %SetName% >nul 2>&1
echo [ACTION] Monitoring process: %ProcessName% for %DurationMin% minute(s)...
logman create counter %SetName% ^
-c "\Process(%ProcessName%)\%% Processor Time" ^
-c "\Process(%ProcessName%)\Working Set" ^
-c "\Process(%ProcessName%)\Private Bytes" ^
-c "\Process(%ProcessName%)\IO Read Bytes/sec" ^
-c "\Process(%ProcessName%)\IO Write Bytes/sec" ^
-c "\Process(%ProcessName%)\Thread Count" ^
-c "\Process(%ProcessName%)\Handle Count" ^
-o "%LogFile%" ^
-f bin ^
-si 5 ^
-rf %DurationSec% >nul 2>&1
if errorlevel 1 (
echo [ERROR] Failed to create monitor. >&2
echo The process "%ProcessName%" may not be running. >&2
echo Run "typeperf -q Process" to see available process names. >&2
endlocal
exit /b 1
)
logman start %SetName% >nul 2>&1
echo [OK] Monitoring %ProcessName%...
echo [INFO] Collection ends in %DurationMin% minutes.
timeout /t %DurationSec% /nobreak >nul
logman stop %SetName% >nul 2>&1
logman delete %SetName% >nul 2>&1
:: Convert to CSV
set "CSVFile=%LogFile:.blg=.csv%"
relog "%LogFile%" -f csv -o "%CSVFile%" >nul 2>&1
echo.
echo [OK] Process monitoring complete.
echo Log: %LogFile%
echo CSV: %CSVFile%
endlocal
exit /b 0
Process counter names:
In Performance Monitor, process names use the executable name WITHOUT the .exe extension:
- SQL Server → sqlservr
- IIS Worker Process → w3wp
- Chrome → chrome (but multi-instance; use chrome#1, chrome#2, etc.)
Performance counter paths like \Processor(_Total)\% Processor Time are translated on non-English Windows. On German Windows, it might be \Prozessor(_Insgesamt)\Prozessorzeit (%). If logman create fails with "The counter specified is not valid," check the available counter names:
typeperf -q Processor > processor_counters.txt
typeperf -q Memory > memory_counters.txt
typeperf -q PhysicalDisk > disk_counters.txt
Use the names from these files in your logman create command.
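typeperf's -qx flag goes a step further than -q: it expands live instance names, which is the quickest way to confirm the exact process-instance string (including #1, #2 suffixes) before running Method 3. The findstr filter below is just an illustrative example:

```batch
:: -q lists counter paths; -qx also expands current instances,
:: e.g. \Process(chrome#2)\% Processor Time
typeperf -qx Process > process_instances.txt
findstr /i "sqlservr w3wp chrome" process_instances.txt
```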
Method 4: Convert and Analyze Results
After collection, convert .blg binary logs to .csv for analysis in Excel, Power BI, or other tools.
@echo off
setlocal
set "InputFile=%~1"
if "%InputFile%"=="" (
echo Usage: %~nx0 ^<blg_file^>
echo.
echo Converts a Performance Monitor .blg file to .csv for analysis.
echo.
echo Example: %~nx0 C:\PerfLogs\SystemHealthCheck.blg
endlocal
exit /b 1
)
if not exist "%InputFile%" (
echo [ERROR] File not found: %InputFile% >&2
endlocal
exit /b 1
)
set "OutputFile=%InputFile:.blg=.csv%"
echo [ACTION] Converting %InputFile% to CSV...
relog "%InputFile%" -f csv -o "%OutputFile%"
if not errorlevel 1 (
echo [OK] CSV saved: %OutputFile%
echo.
echo [INFO] To analyze in Excel:
echo 1. Open the CSV file
echo 2. Select the timestamp and counter columns
echo 3. Insert a Line Chart
echo 4. Look for sustained high values or upward trends
) else (
echo [ERROR] Conversion failed. >&2
)
endlocal
exit /b 0
The relog utility:
relog converts between performance log formats and can also resample data:
| Command | Purpose |
|---|---|
| `relog file.blg -f csv -o file.csv` | Convert binary to CSV |
| `relog file.blg -f bin -o file2.blg` | Copy/merge binary logs |
| `relog file.blg -f csv -o file.csv -t 60` | Keep only every 60th sample (reduces file size) |
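relog can also extract a subset of counters with its -c flag, which keeps the CSV manageable when the original log captured many counters. The input path is a placeholder; note the doubled %% because this line is written for a batch script:

```batch
:: Pull only the CPU counter out of a multi-counter log.
:: In a batch script, % in the counter path must be written as %%.
relog "C:\PerfLogs\SystemHealthCheck.blg" ^
    -c "\Processor(_Total)\%% Processor Time" ^
    -f csv -o "C:\PerfLogs\cpu_only.csv"
```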
How to Avoid Common Errors
Wrong Way: Creating Without Cleaning Up First
:: FAILS if a collector with this name already exists
logman create counter MyMonitor -c "..." -o "..."
:: Error: Data Collector Set already exists.
If a previous run created the collector and the script crashed before logman delete, the name is "taken."
Correct Way: Always clean up at the beginning of the script:
logman stop %SetName% >nul 2>&1
logman delete %SetName% >nul 2>&1
:: Now safe to create
logman create counter %SetName% ...
All methods in this guide include this cleanup step.
Wrong Way: Using % Instead of %% in Counter Paths
:: WRONG - Batch interprets single % as a variable
logman create counter MySet -c "\Processor(_Total)\% Processor Time"
:: Result: The counter name is garbled
Correct Way: Double the percent sign in Batch scripts:
logman create counter MySet -c "\Processor(_Total)\%% Processor Time"
Problem: Counter Names Fail on Non-English Windows
Performance counter paths are localized. \Processor(_Total)\% Processor Time is English-only.
Solution: Query available counters with typeperf -q [category] to get the correct localized names (see the localization warning in Method 3).
Problem: Log File Too Large
A 24-hour collection at 1-second intervals with 20 counters can produce gigabytes of data.
Solution: Increase the sample interval for long-duration monitoring. Use -cnf for log rotation (Method 2). Use relog -t to resample existing large logs:
:: Reduce a 1-second log to 30-second samples
relog huge_log.blg -f csv -o reduced.csv -t 30
Best Practices and Rules
1. Clean Up Before Creating
Always logman stop and logman delete the collector name at the start of the script. This prevents "already exists" errors from crashed previous runs.
2. Double the Percent Sign
In Batch scripts, % in counter paths must be escaped as %%. This is the most common logman syntax error.
3. Match Interval to Duration
Short intervals (1–5 seconds) for short durations (minutes). Long intervals (30–60 seconds) for long durations (hours/days). This prevents excessively large log files.
4. Convert to CSV for Analysis
.blg files can be opened in Performance Monitor for graphing, but .csv files are more versatile: they work in Excel, Power BI, Python, and any other analysis tool.
5. Include Queue Length Counters
CPU and disk percentage alone don't tell the full story. Queue length counters (Processor Queue Length, Avg. Disk Queue Length) reveal whether resources are genuinely contended or just busy.
6. Use Process-Specific Monitoring for Application Issues
System-wide counters show the big picture but can't isolate which process is responsible. Method 3's process-specific counters identify exactly how much CPU, memory, and I/O a specific application is consuming.
Conclusion
Automating performance data collection with logman transforms ad-hoc troubleshooting into repeatable, documented diagnostics. By selecting the right counters for the investigation, configuring appropriate sample intervals and durations, and converting the results to CSV for analysis, you create a performance monitoring capability that rivals commercial APM tools, using only built-in Windows commands. Whether running a quick 15-minute diagnostic or a 24-hour server baseline, the logman approach provides the raw data needed to identify exactly where and when bottlenecks occur.