
How to Measure Elapsed Time in Python

Measuring elapsed time (the duration a specific task takes to complete) is essential for benchmarking code performance, debugging bottlenecks, and optimizing critical sections of your Python programs. Whether you're comparing two algorithms, profiling a database query, or simply curious about how long a function runs, Python provides several built-in modules for accurate timing measurements.

In this guide, you'll learn how to measure execution time using the timeit, time, and datetime modules, understand when to use each one, and avoid common pitfalls that lead to inaccurate results.

Choosing the Right Module

Python offers three primary modules for measuring elapsed time, each suited for different scenarios:

Module   | Best For                         | Precision
-------- | -------------------------------- | -------------------------------------------------
timeit   | Benchmarking small code snippets | Very high (disables GC, runs multiple iterations)
time     | General-purpose timing           | High (nanosecond-level options)
datetime | Human-readable timestamps        | Lower (microsecond resolution)

Measuring Time with the timeit Module

The timeit module is specifically designed for accurate benchmarking. It automatically disables garbage collection and runs your code multiple times to produce consistent, reliable results.

Basic Usage with timeit.timeit()

The simplest way to benchmark a statement is to pass it as a string to timeit.timeit():

import timeit

exec_time = timeit.timeit("print('Hello World!')", number=5)
print(f"{exec_time:.6f} secs")

Output:

Hello World!
Hello World!
Hello World!
Hello World!
Hello World!
0.000043 secs

The number parameter controls how many times the statement is executed. The default is 1,000,000, so set it explicitly for expensive operations to avoid an unexpectedly long benchmark.
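timeit.timeit() also accepts a callable instead of a string, which avoids quoting issues and keeps the benchmarked code visible to your editor and linters. A minimal sketch (the work function here is just a stand-in workload):

```python
import timeit

def work():
    # Stand-in workload: sum the first 100 integers
    return sum(range(100))

# Pass the callable directly; number is set explicitly
exec_time = timeit.timeit(work, number=10_000)
print(f"{exec_time:.6f} secs")
```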

Measuring a Multi-Line Code Block

For more complex code that requires imports or function definitions, pass the entire block as a multi-line string:

import timeit

code = '''\
import random

def run(n):
    return n ** n

run(random.randint(20, 50))
'''

exec_time = timeit.timeit(code, number=10**6)
print(f"{exec_time:.3f} secs")

Output:

2.040 secs
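If part of the block is one-time preparation, the setup parameter lets you run it outside the measured loop, so the result reflects only the statement itself. A sketch of the same workload with the import and function definition moved into setup:

```python
import timeit

# setup runs once per timing call and is excluded from the measured time
exec_time = timeit.timeit(
    stmt="run(random.randint(20, 50))",
    setup="import random\n"
          "def run(n):\n"
          "    return n ** n",
    number=10_000,
)
print(f"{exec_time:.3f} secs")
```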

Repeating Benchmarks with timeit.repeat()

Instead of running a single benchmark, timeit.repeat() executes the test multiple times and returns a list of results. This makes it easier to spot outliers and find the minimum (most representative) time:

import timeit

def square(n):
    return n ** 2

times = timeit.repeat(lambda: square(3), number=10, repeat=5)

for i, t in enumerate(times, 1):
    print(f"Run {i}: {round(t * 1e6, 2)} µs")

print(f"\nBest: {round(min(times) * 1e6, 2)} µs")

Output:

Run 1: 3.47 µs
Run 2: 1.34 µs
Run 3: 1.21 µs
Run 4: 0.96 µs
Run 5: 1.04 µs

Best: 0.96 µs
Tip: When analyzing timeit.repeat() results, use the minimum value rather than the average. The minimum represents the best achievable time, while higher values are typically caused by system interference (OS scheduling, background processes, etc.).
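A sketch of summarizing repeat() results with the standard statistics module, reporting the minimum alongside the mean and standard deviation for context:

```python
import statistics
import timeit

times = timeit.repeat("sum(range(100))", number=1_000, repeat=7)

# The minimum is the most representative; mean and stdev show the spread
print(f"Best:  {min(times) * 1e3:.3f} ms")
print(f"Mean:  {statistics.mean(times) * 1e3:.3f} ms")
print(f"Stdev: {statistics.stdev(times) * 1e3:.3f} ms")
```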

Using timeit.default_timer() for Manual Timing

If you need to measure a specific section of code without passing it as a string, use timeit.default_timer(), which internally calls time.perf_counter():

import timeit

def square(n):
    return n ** 2

start = timeit.default_timer()
square(11111111)
end = timeit.default_timer()

print(f"Elapsed: {round((end - start) * 1e6, 3)} µs")

Output:

Elapsed: 1.263 µs

Measuring Time with the time Module

The time module provides flexible, general-purpose timing functions. It's ideal when you want to measure elapsed time within your regular program flow without the overhead of timeit's repeated execution.

time.perf_counter(): Highest Resolution Timer

time.perf_counter() provides the highest available resolution clock on your system. It includes time elapsed during sleep and is ideal for measuring wall-clock duration:

import time

def square(n):
    return n ** 2

start = time.perf_counter()
square(3)
end = time.perf_counter()

print(f"Elapsed: {(end - start) * 1e6:.3f} µs")

Output:

Elapsed: 1.549 µs

time.time_ns(): Nanosecond Precision

For situations where you need integer nanosecond precision without floating-point rounding errors, use time.time_ns():

import time

def square(x):
    return x ** 2

start = time.time_ns()
square(3)
end = time.time_ns()

print(f"Time taken: {end - start} ns")

Output:

Time taken: 1504 ns
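Note that time.time_ns() reads the system (wall) clock, which can jump if the clock is adjusted while your program runs. For integer nanoseconds from a monotonic clock, time.perf_counter_ns() (available since Python 3.7) combines both properties:

```python
import time

start = time.perf_counter_ns()  # monotonic, integer nanoseconds
x = 3 ** 2
end = time.perf_counter_ns()

print(f"Time taken: {end - start} ns")
```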

time.process_time(): CPU Time Only

Unlike perf_counter(), time.process_time() measures only the CPU time consumed by your process. It excludes time spent sleeping or waiting for I/O, making it perfect for profiling CPU-bound tasks:

import time

def heavy_calc(n):
    return n ** 76567

start = time.process_time()
heavy_calc(125)
end = time.process_time()

print(f"CPU Time: {(end - start) * 1e3:.3f} ms")

Output:

CPU Time: 21.164 ms
perf_counter() vs process_time()
  • perf_counter() measures wall-clock time: the total real time that passes, including sleeps and I/O waits.
  • process_time() measures CPU time: only the time your process actively uses the CPU.

Use perf_counter() for end-to-end timing and process_time() when you want to isolate CPU work from I/O delays.
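The difference is easy to demonstrate with a block that both sleeps and does CPU work (a sketch; the exact numbers will vary by machine):

```python
import time

wall_start = time.perf_counter()
cpu_start = time.process_time()

time.sleep(0.5)                 # waiting, not using the CPU
total = sum(range(1_000_000))   # CPU-bound work

wall = time.perf_counter() - wall_start
cpu = time.process_time() - cpu_start

print(f"Wall-clock: {wall:.3f} s")  # includes the 0.5 s sleep
print(f"CPU time:   {cpu:.3f} s")   # excludes it
```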

Measuring Time with the datetime Module

The datetime module is useful when you want human-readable timestamps alongside your timing measurements. It's not as precise as timeit or time.perf_counter(), but it's convenient for logging:

from datetime import datetime

def square(n):
    return n ** 2

start = datetime.now()
square(3)
end = datetime.now()

elapsed = (end - start).total_seconds() * 1e6
print(f"Started at: {start}")
print(f"Ended at: {end}")
print(f"Elapsed: {elapsed} µs")

Output:

Started at: 2026-02-15 14:13:49.864666
Ended at: 2026-02-15 14:13:49.864670
Elapsed: 4.0 µs
Caution: datetime.now() has only microsecond resolution and can be affected by system clock adjustments (e.g., NTP sync). Avoid using it for precise benchmarking; use time.perf_counter() or timeit instead.

Common Mistake: Measuring a Single Run of Fast Code

One of the most frequent errors is measuring a very fast operation only once and drawing conclusions from it:

import time

# ❌ Unreliable: single run of a fast operation
start = time.perf_counter()
result = 3 ** 2
end = time.perf_counter()

print(f"Elapsed: {(end - start) * 1e6:.3f} µs")

Output (varies wildly between runs):

Elapsed: 0.238 µs

The result is unreliable because OS scheduling, CPU caching, and other factors can heavily influence a single measurement of such a fast operation.

The correct approach is to use timeit to run the code many times:

import timeit

# ✅ Reliable: averaged over 1,000,000 runs
exec_time = timeit.timeit("3 ** 2", number=1_000_000)
print(f"Average: {exec_time / 1_000_000 * 1e6:.3f} µs per call")

Output:

Average: 0.015 µs per call

Creating a Reusable Timer Context Manager

For convenient timing in larger projects, you can create a context manager that automatically measures and prints the elapsed time for any code block:

import time

class Timer:
    def __enter__(self):
        self.start = time.perf_counter()
        return self

    def __exit__(self, *args):
        self.elapsed = time.perf_counter() - self.start
        print(f"Elapsed: {self.elapsed:.4f} secs")

# Usage
with Timer():
    total = sum(range(1_000_000))
    print(f"Sum: {total}")

Output:

Sum: 499999500000
Elapsed: 0.0312 secs

This approach keeps your timing code clean and reusable across your project.
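The same idea also works as a decorator when you want to time every call to a particular function. A sketch (the timed name is my own, not a standard-library helper):

```python
import functools
import time

def timed(func):
    """Print the wall-clock time of every call to the wrapped function."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = func(*args, **kwargs)
        elapsed = time.perf_counter() - start
        print(f"{func.__name__}: {elapsed:.4f} secs")
        return result
    return wrapper

@timed
def compute():
    return sum(range(1_000_000))

result = compute()
```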

Summary

Method                 | Measures                          | Best Use Case
---------------------- | --------------------------------- | --------------------------------------
timeit.timeit()        | Wall-clock (multiple runs)        | Precise benchmarking of small snippets
timeit.repeat()        | Wall-clock (repeated benchmarks)  | Statistical analysis of performance
timeit.default_timer() | Wall-clock (single measurement)   | Quick inline timing
time.perf_counter()    | Wall-clock (high resolution)      | General-purpose elapsed time
time.time_ns()         | Wall-clock (nanoseconds, integer) | Ultra-precise integer timing
time.process_time()    | CPU time only                     | Profiling CPU-bound tasks
datetime.now()         | Wall-clock (human-readable)       | Logging with timestamps
  • For benchmarking, use timeit with multiple iterations.
  • For general timing within your application, use time.perf_counter().
  • For CPU profiling, use time.process_time().
  • For readable logs, use datetime.now(), but never rely on it for precision measurements.