Python NumPy: How to Resolve "RuntimeWarning: Mean of Empty Slice"
The RuntimeWarning: Mean of empty slice is one of the most common warnings encountered when working with NumPy arrays. It occurs when you attempt to calculate the mean of an array or array slice that contains no elements. While it doesn't crash your program, it returns nan (Not a Number) and signals a potential logic error in your code.
This guide explains why this warning occurs, shows the most common scenarios that trigger it, and provides reliable solutions.
What Causes This Warning?
When you call np.mean() on an empty array or slice, NumPy attempts to divide the sum of the elements (0) by their count (also 0). Under IEEE 754 floating-point rules, 0/0 produces nan, so NumPy returns nan and emits a RuntimeWarning:
import numpy as np
empty_array = np.array([])
result = np.mean(empty_array)
print(result)
Output:
RuntimeWarning: Mean of empty slice.
RuntimeWarning: invalid value encountered in scalar divide
nan
Two warnings appear: one for attempting the mean of an empty slice, and another for the resulting division by zero. The final value is nan.
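If you are not sure which line in a larger program triggers the warning, one debugging trick is to temporarily promote RuntimeWarning to an exception so the traceback points straight at the offending call. A minimal sketch (the variable names here are illustrative):

```python
import warnings
import numpy as np

# Promote RuntimeWarning to an exception so the traceback points
# at the exact line that computed the empty mean.
warnings.simplefilter("error", RuntimeWarning)

data = np.array([])
try:
    result = np.mean(data)
except RuntimeWarning as exc:
    print("Caught:", exc)  # Caught: Mean of empty slice.
    result = None
```

Remove the simplefilter call once you have located the source of the warning; leaving it in place turns every RuntimeWarning in the process into an exception.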
Common Scenarios That Trigger the Warning
Empty Array
The most straightforward case - calling np.mean() on an array with zero elements:
import numpy as np
data = np.array([])
mean_value = np.mean(data) # RuntimeWarning: Mean of empty slice
print(mean_value)
Output:
RuntimeWarning: Mean of empty slice.
RuntimeWarning: invalid value encountered in scalar divide
nan
Invalid Slicing
Using slice indices that produce an empty result. For example, data[5:2] selects nothing because the start index is greater than the stop index:
import numpy as np
data = np.array([1, 2, 3, 4, 5])
# Slice from index 5 to index 2, produces an empty array
empty_slice = data[5:2]
print("Slice contents:", empty_slice)
print("Slice length:", len(empty_slice))
mean_value = np.mean(empty_slice) # RuntimeWarning
print("Mean:", mean_value)
Output:
Slice contents: []
Slice length: 0
RuntimeWarning: Mean of empty slice.
RuntimeWarning: invalid value encountered in scalar divide
Mean: nan
Conditional Filtering That Matches Nothing
When boolean indexing produces an empty result because no elements satisfy the condition:
import numpy as np
scores = np.array([45, 52, 67, 73, 81])
# Filter for scores above 100, no elements match
high_scores = scores[scores > 100]
mean_value = np.mean(high_scores) # RuntimeWarning
print("Mean of high scores:", mean_value)
Output:
RuntimeWarning: Mean of empty slice.
RuntimeWarning: invalid value encountered in scalar divide
Mean of high scores: nan
Using np.nanmean() on All-NaN Slices
Even np.nanmean(), which is designed to ignore NaN values, triggers this warning when all values are NaN:
import numpy as np
data = np.array([np.nan, np.nan, np.nan])
result = np.nanmean(data) # RuntimeWarning: Mean of empty slice
print(result)
Output:
RuntimeWarning: Mean of empty slice
nan
After ignoring all NaN values, the remaining slice is empty, so the same warning appears.
Solution 1: Check for Empty Arrays Before Computing
The most reliable fix is to verify that the array has elements before calling np.mean():
import numpy as np
data = np.array([])
if data.size > 0:
    mean_value = np.mean(data)
    print("Mean:", mean_value)
else:
    print("Cannot compute mean: array is empty")
Output:
Cannot compute mean: array is empty
For filtered arrays, apply the same check after filtering:
import numpy as np
scores = np.array([45, 52, 67, 73, 81])
high_scores = scores[scores > 100]
if high_scores.size > 0:
    print("Mean of high scores:", np.mean(high_scores))
else:
    print("No scores above 100 found")
Output:
No scores above 100 found
Use array.size > 0 instead of len(array) > 0 when working with NumPy arrays. The .size attribute works correctly for arrays of any dimensionality, while len() only checks the first dimension.
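The difference matters for multidimensional arrays. As a quick sketch (the empty_2d name is illustrative), a 2-D array can have a non-zero length while containing no elements at all:

```python
import numpy as np

# A 2-D array with 3 rows but 0 columns: it contains no elements
empty_2d = np.empty((3, 0))

print(len(empty_2d))   # 3 -- len() only sees the first dimension
print(empty_2d.size)   # 0 -- .size counts the actual elements

# len() > 0 would wrongly suggest the array is safe to average
if empty_2d.size > 0:
    print(np.mean(empty_2d))
else:
    print("Array is empty despite len() == 3")
```

Here len() reports 3 even though np.mean() would still warn, which is exactly the trap the .size check avoids.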
Solution 2: Fix Incorrect Slice Indices
If the warning results from a slicing error, correct the indices so the slice contains elements:
import numpy as np
data = np.array([1, 2, 3, 4, 5])
# WRONG: start (5) > stop (2) produces an empty slice
empty_result = data[5:2]
print("Wrong slice:", empty_result)
# CORRECT: use valid indices
correct_result = data[2:5]
print("Correct slice:", correct_result)
print("Mean:", np.mean(correct_result))
Output:
Wrong slice: []
Correct slice: [3 4 5]
Mean: 4.0
Solution 3: Provide a Default Value
When an empty array is a valid possibility in your workflow, return a sensible default instead of nan:
import numpy as np
def safe_mean(arr, default=0.0):
    """Compute the mean of an array, returning a default if empty."""
    if arr.size == 0:
        return default
    return np.mean(arr)
data_with_values = np.array([10, 20, 30])
empty_data = np.array([])
print("Mean (with values):", safe_mean(data_with_values))
print("Mean (empty):", safe_mean(empty_data))
print("Mean (empty, custom default):", safe_mean(empty_data, default=-1))
Output:
Mean (with values): 20.0
Mean (empty): 0.0
Mean (empty, custom default): -1
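If callers may pass plain Python lists rather than NumPy arrays, a variant of the helper (a sketch, not part of the original) can normalize the input with np.asarray first, since a plain list has no .size attribute:

```python
import numpy as np

def safe_mean(values, default=0.0):
    """Mean of any array-like, with a default for empty input."""
    arr = np.asarray(values, dtype=float)  # accepts lists, tuples, arrays
    if arr.size == 0:
        return default
    return float(np.mean(arr))

print(safe_mean([10, 20, 30]))  # 20.0
print(safe_mean([]))            # 0.0
```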
Solution 4: Suppress the Warning When Intentional
If you expect empty slices and have already handled the nan result in your logic, you can suppress the floating-point warning using np.errstate. Note that np.errstate silences only the "invalid value encountered" message; the "Mean of empty slice." warning is issued through Python's warnings module and still appears:
import numpy as np
data = np.array([])
with np.errstate(invalid='ignore'):
    mean_value = np.mean(data)

# Handle nan explicitly
if np.isnan(mean_value):
    mean_value = 0.0
print("Mean:", mean_value)
Output:
RuntimeWarning: Mean of empty slice.
Mean: 0.0
Only suppress the warning if you are certain the empty slice is expected and you handle the nan result afterward. Suppressing warnings indiscriminately can hide real bugs in your code.
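Because np.errstate controls only NumPy's floating-point warnings, silencing the "Mean of empty slice." message as well requires Python's warnings module. A sketch that combines both, still handling nan explicitly afterward:

```python
import warnings
import numpy as np

data = np.array([])

# catch_warnings() scopes the filter to this block only, so the
# RuntimeWarning filter does not leak into the rest of the program.
with warnings.catch_warnings():
    warnings.simplefilter("ignore", category=RuntimeWarning)
    with np.errstate(invalid='ignore'):
        mean_value = np.mean(data)

# Handle the nan result explicitly
if np.isnan(mean_value):
    mean_value = 0.0
print("Mean:", mean_value)  # Mean: 0.0
```

Scoping the filter with catch_warnings() is safer than a module-level warnings.filterwarnings('ignore'), which would hide RuntimeWarnings everywhere.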
Solution 5: Handle np.nanmean() on All-NaN Arrays
When using np.nanmean(), check whether the array contains any non-NaN values before computing:
import numpy as np
data = np.array([np.nan, np.nan, 5.0, np.nan])
all_nan = np.array([np.nan, np.nan, np.nan])
def safe_nanmean(arr, default=0.0):
    """Compute nanmean, returning a default if all values are NaN."""
    if arr.size == 0 or np.all(np.isnan(arr)):
        return default
    return np.nanmean(arr)
print("Mixed array:", safe_nanmean(data))
print("All-NaN array:", safe_nanmean(all_nan))
Output:
Mixed array: 5.0
All-NaN array: 0.0
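The same issue arises with axis-based reductions: np.nanmean(arr, axis=...) warns for every slice that is entirely NaN. A sketch (the grid array is illustrative) that suppresses the warning and substitutes a default for the all-NaN row:

```python
import warnings
import numpy as np

# The second row is entirely NaN, so nanmean along axis=1
# would warn for that row and produce nan there.
grid = np.array([[1.0, 2.0, np.nan],
                 [np.nan, np.nan, np.nan]])

with warnings.catch_warnings():
    warnings.simplefilter("ignore", category=RuntimeWarning)
    row_means = np.nanmean(grid, axis=1)

# Replace the nan produced by the all-NaN row with a default
row_means = np.where(np.isnan(row_means), 0.0, row_means)
print(row_means)  # first row: 1.5, second row: 0.0
```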
Common Mistake: Ignoring the Warning Entirely
The worst approach is to ignore the warning and let nan propagate silently through your calculations. nan is contagious - any arithmetic involving nan produces nan:
import numpy as np
import warnings
warnings.filterwarnings('ignore')
data = np.array([])
mean_value = np.mean(data) # nan, warning suppressed
# nan propagates through all subsequent calculations
result = mean_value * 100 + 50
print("Final result:", result) # Still nan!
Output:
Final result: nan
The correct approach is to handle the empty case before it contaminates downstream calculations:
import numpy as np
data = np.array([])
if data.size > 0:
    mean_value = np.mean(data)
else:
    mean_value = 0.0  # Explicit default
result = mean_value * 100 + 50
print("Final result:", result)
Output:
Final result: 50.0
Never blindly suppress NumPy warnings with warnings.filterwarnings('ignore'). A nan value that goes undetected can corrupt entire pipelines - from data analysis to model training - producing meaningless results that are difficult to debug.
Summary of Solutions
| Approach | When to Use |
|---|---|
| Check array.size > 0 before computing | Always - the safest and most explicit approach |
| Fix slice indices | When the empty slice is caused by a coding error |
| Provide a default value with a helper function | When empty arrays are expected in your workflow |
| Suppress with np.errstate | When you intentionally handle nan downstream |
| Use np.all(np.isnan(arr)) check | When using np.nanmean() on potentially all-NaN data |
The RuntimeWarning: Mean of empty slice is NumPy telling you that something unexpected happened with your data. The best practice is to always validate that your array or slice contains elements before performing aggregation operations, ensuring your calculations produce meaningful results.