How to Benchmark a Command's Execution Time in Batch Script
Benchmarking is the practice of running a specific operation multiple times to determine its performance and stability. In Batch scripting, you might want to know whether robocopy is faster than xcopy for your specific file structure, or whether a certain database query slows down under repeated execution. Measuring a command once isn't enough; you need the average time across several runs to account for system background noise.
This guide will explain how to create a benchmarking loop that measures the total and average execution time of any command.
The Logic of a Benchmark Wrapper
A benchmark script follows a simple pattern:
- Record the absolute start time.
- Run the target command inside a loop (e.g., 10 times).
- Record the absolute end time.
- Calculate the total duration and divide by the number of iterations.
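Before looking at the Batch implementation, the four steps above can be sketched in a few lines of Python (a language-neutral illustration; `run_target` is a placeholder for whatever command you are testing):

```python
import time

def run_target():
    """Placeholder for the command under test."""
    time.sleep(0.01)

iterations = 5

start = time.perf_counter()          # 1. record absolute start time
for _ in range(iterations):          # 2. run the target in a loop
    run_target()
end = time.perf_counter()            # 3. record absolute end time

total = end - start                  # 4. total duration...
average = total / iterations         #    ...divided by iteration count
print(f"Total: {total:.3f}s  Average: {average:.3f}s per run")
```

Every benchmark in this guide, in any language, is a variation on this same wrapper.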
Method 1: The Native Batch Benchmarker
This method is useful because it requires no external tools. It converts the %time% value (HH:MM:SS.cc) into total seconds since midnight, so elapsed time can be computed with plain integer math.
@echo off
setlocal enabledelayedexpansion
set "cmd_to_test=ping 127.0.0.1 -n 2"
set /a "iterations=5"
echo [BENCHMARK] Testing: %cmd_to_test%
echo [BENCHMARK] Iterations: %iterations%
echo.
:: Warm-up run (not counted) to eliminate first-run overhead
echo [WARMUP] Running initial pass...
%cmd_to_test% >nul 2>&1
:: 1. Capture Start Time
:: %time% pads single-digit hours with a space, so replace it with 0.
:: Prefixing "1" and subtracting 100 stops values like "08" and "09"
:: from being rejected by set /a as invalid octal numbers.
set "t=!time: =0!"
set /a "start_s=(1!t:~0,2!-100)*3600 + (1!t:~3,2!-100)*60 + (1!t:~6,2!-100)"
:: 2. The Benchmarking Loop
for /l %%i in (1,1,%iterations%) do (
echo [RUN %%i / %iterations%]
%cmd_to_test% >nul 2>&1
)
:: 3. Capture End Time
set "t=!time: =0!"
set /a "end_s=(1!t:~0,2!-100)*3600 + (1!t:~3,2!-100)*60 + (1!t:~6,2!-100)"
:: 4. Handle midnight crossing
set /a "total=end_s - start_s"
if !total! lss 0 set /a "total+=86400"
:: 5. Calculate average (set /a uses integer division, so fractional
:: seconds are truncated)
set /a "avg=total / iterations"
echo.
echo ===================================
echo Total Time: !total! seconds
echo Average Time: !avg! seconds per run
echo Iterations: %iterations%
echo ===================================
pause
endlocal
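The time math in steps 1, 3, and 4 is easier to verify outside Batch. This Python sketch reproduces the same seconds-since-midnight conversion and the midnight-wrap correction, with worked examples:

```python
def to_seconds(hhmmss: str) -> int:
    """Convert an HH:MM:SS timestamp to seconds since midnight."""
    h, m, s = (int(part) for part in hhmmss.split(":"))
    return h * 3600 + m * 60 + s

def elapsed(start: str, end: str) -> int:
    """Elapsed seconds between two timestamps, handling midnight crossing."""
    total = to_seconds(end) - to_seconds(start)
    if total < 0:          # the clock wrapped past 00:00:00
        total += 86400     # add one day's worth of seconds
    return total

print(elapsed("10:00:05", "10:00:35"))  # 30
print(elapsed("23:59:50", "00:00:10"))  # 20, not -86380
```

The second example is why step 4 in the Batch script adds 86400 whenever the raw difference is negative.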
Method 2: High-Precision Benchmark (Using PowerShell)
If you are benchmarking very fast commands (under one second), native Batch time math only resolves to whole seconds. Calling .NET's System.Diagnostics.Stopwatch class from PowerShell provides millisecond precision while keeping the multi-iteration loop.
@echo off
set "command=Get-ChildItem C:\Windows -Recurse -ErrorAction SilentlyContinue | Out-Null"
set "iterations=5"
echo [BENCHMARK] Running high-precision PowerShell benchmark...
echo [BENCHMARK] Iterations: %iterations%
echo.
powershell -NoProfile -Command ^
"$cmd = { %command% };" ^
"$iterations = %iterations%;" ^
"" ^
"Write-Host '[WARMUP] Initial pass...';" ^
"& $cmd;" ^
"" ^
"$results = @();" ^
"for ($i = 1; $i -le $iterations; $i++) {" ^
" $sw = [System.Diagnostics.Stopwatch]::StartNew();" ^
" & $cmd;" ^
" $sw.Stop();" ^
" $ms = $sw.ElapsedMilliseconds;" ^
" Write-Host \"[RUN $i / $iterations] ${ms}ms\";" ^
" $results += $ms" ^
"};" ^
"" ^
"$total = ($results | Measure-Object -Sum).Sum;" ^
"$avg = [math]::Round(($results | Measure-Object -Average).Average, 1);" ^
"$min = ($results | Measure-Object -Minimum).Minimum;" ^
"$max = ($results | Measure-Object -Maximum).Maximum;" ^
"" ^
"Write-Host '';" ^
"Write-Host '===================================';" ^
"Write-Host \"Total: ${total}ms\";" ^
"Write-Host \"Average: ${avg}ms per run\";" ^
"Write-Host \"Fastest: ${min}ms\";" ^
"Write-Host \"Slowest: ${max}ms\";" ^
"Write-Host '==================================='"
pause
The PowerShell method provides per-iteration timing, min/max spread, and millisecond precision: exactly the data you need to identify performance variance and outliers.
How to Avoid Common Errors
Wrong Way: Including the "Benchmark Script" Overhead
If you launch a new process for the timer inside every loop iteration, the process startup alone can add on the order of 100-200ms of overhead per iteration, which swamps the timing of any fast command.
Correct Way: Start the timer once before the loop and stop it once after the loop (Method 1), or time each iteration individually within the same PowerShell session (Method 2). Both approaches avoid spawning extra processes per iteration.
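To see roughly how large that overhead is, compare the cost of launching one extra process against the cost of reading an in-process timer. A Python sketch (a no-op interpreter launch stands in for a spawned timer process; exact numbers depend on your machine):

```python
import subprocess
import sys
import time

# Cost of launching one extra (no-op) process:
start = time.perf_counter()
subprocess.run([sys.executable, "-c", "pass"])
spawn_cost = time.perf_counter() - start

# Cost of a single in-process timer read, for comparison:
start = time.perf_counter()
_ = time.perf_counter()
timer_cost = time.perf_counter() - start

print(f"process launch:        {spawn_cost * 1000:.1f}ms")
print(f"in-process timer read: {timer_cost * 1000:.4f}ms")
```

The process launch is typically thousands of times more expensive than the timer read, which is why the timer must live outside the loop or inside the same session as the command.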
Problem: Antivirus Scanning
Antivirus software often scans newly launched executables. If your benchmark command launches a .exe, the first run may be much slower than subsequent ones: the AV scans the file on first launch, and later runs skip that check because the file has already been verified.
Best Practice: Run a "Warm-up" iteration (1 run) before you start the official benchmark timer (as shown in both methods).
Best Practices and Rules
1. Close Background Apps
For an accurate benchmark, close browsers, music players, and update managers. System background noise will cause "spikes" in your data that don't represent the true speed of the command.
2. Iteration Count
- Slow commands (e.g., 10GB file copy): Run 3–5 times.
- Fast commands (e.g., text search): Run 50–100 times. The faster the command, the more iterations you need before the average stabilizes, because per-run noise is large relative to the measurement itself.
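The reason behind this rule: the uncertainty of the average shrinks roughly with the square root of the iteration count. A simulated Python sketch (the timings here are fake, seeded noise, not real measurements):

```python
import random
import statistics

random.seed(42)  # reproducible "noise"

def fake_timing():
    """Simulated run time: 100ms base plus random system noise."""
    return 100 + random.gauss(0, 15)

sems = {}
for n in (5, 100):
    samples = [fake_timing() for _ in range(n)]
    mean = statistics.mean(samples)
    # Standard error of the mean: sample spread divided by sqrt(n)
    sems[n] = statistics.stdev(samples) / n ** 0.5
    print(f"n={n:3d}  mean={mean:6.1f}ms  +/- {sems[n]:.1f}ms")
```

Going from 5 to 100 iterations cuts the uncertainty of the average by roughly a factor of four or five, which matters far more for a 50ms command than for a 10-minute file copy.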
3. I/O Caching
Windows caches recently read files in RAM. If you benchmark a file-reading command, the second run will usually be faster than the first because the file is served from memory rather than from disk.
To benchmark real-world "Cold" performance, you must clear the system cache or use a different file for every iteration.
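One way to apply the "different file per iteration" idea is to pre-generate a distinct input file for each run, so no iteration re-reads a file a previous iteration already pulled into the cache. A Python sketch of the setup (note the write itself can warm the cache, so for truly cold reads the files should be created well in advance or the cache flushed):

```python
import os
import tempfile

iterations = 5
paths = []

# Create a distinct input file for every iteration so no run benefits
# from a cache entry left behind by the previous run on the same file.
tmpdir = tempfile.mkdtemp()
for i in range(iterations):
    path = os.path.join(tmpdir, f"input_{i}.dat")
    with open(path, "wb") as f:
        f.write(os.urandom(64 * 1024))  # 64 KB of unique data per file
    paths.append(path)

for path in paths:
    with open(path, "rb") as f:
        data = f.read()  # each iteration reads a file no prior run touched
print(f"read {len(paths)} distinct files")
```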
Conclusions
Benchmarking your commands in Batch is the key to writing high-performance automation. By building a simple loop and calculating the average execution time, you can measure, rather than guess, which approach is faster. Whether you use the native math approach for long tasks or the PowerShell approach for micro-benchmarks, this data-driven habit is essential for any professional developer.