Python PyTorch: How to Check if a Tensor Is Contiguous

When working with tensors in PyTorch, understanding memory layout is crucial for writing efficient and bug-free deep learning code. A contiguous tensor stores its elements in an unbroken, sequential block of memory with no gaps between them. Certain PyTorch operations require contiguous tensors, and operations like transpose(), permute(), or slicing can produce non-contiguous tensors that may cause unexpected errors.

In this guide, you'll learn what contiguous tensors are, how to check if a tensor is contiguous using is_contiguous(), why it matters, and how to fix non-contiguous tensors when needed.

What Is a Contiguous Tensor?

A contiguous tensor is one whose elements are laid out in memory in a single, continuous block, following the expected row-major (C-style) order. When you create a tensor directly, it's almost always contiguous. However, operations that change how you view the data - without copying it - can produce non-contiguous tensors.

For example, when you transpose a 2D tensor, PyTorch doesn't rearrange the data in memory. Instead, it changes the metadata (strides) that describe how to traverse the data. The underlying memory remains in the original order, making the transposed tensor non-contiguous.
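
You can see for yourself that transposing is only a metadata change by comparing storage addresses and strides. A minimal check using data_ptr(), which returns the memory address of a tensor's first element:

import torch

tens = torch.tensor([[10.0, 20.0, 30.0],
                     [40.0, 50.0, 60.0]])
tens_t = tens.transpose(0, 1)

# Same underlying storage: no data was copied
print(tens.data_ptr() == tens_t.data_ptr())  # True

# Only the stride metadata differs
print(tens.stride())    # (3, 1)
print(tens_t.stride())  # (1, 3)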

Checking Contiguity with is_contiguous()

PyTorch provides the is_contiguous() method on tensors. It returns True if the tensor's elements are stored contiguously in memory and False otherwise.

Syntax:

tensor.is_contiguous()

Example: A Simple Contiguous Tensor

Tensors created directly from data are contiguous by default:

import torch

tens = torch.tensor([1.0, 2.0, 3.0, 4.0, 5.0])

print("Tensor:", tens)
print("Is contiguous:", tens.is_contiguous())

Output:

Tensor: tensor([1., 2., 3., 4., 5.])
Is contiguous: True

Example: Transpose Creates a Non-Contiguous Tensor

Transposing a tensor changes how it's accessed but doesn't rearrange the underlying memory, resulting in a non-contiguous tensor:

import torch

tens = torch.tensor([[10.0, 20.0, 30.0],
                     [40.0, 50.0, 60.0]])

tens_transpose = tens.transpose(0, 1)

print("Original tensor:")
print(tens)
print(f"Is contiguous: {tens.is_contiguous()}\n")

print("Transposed tensor:")
print(tens_transpose)
print(f"Is contiguous: {tens_transpose.is_contiguous()}")

Output:

Original tensor:
tensor([[10., 20., 30.],
        [40., 50., 60.]])
Is contiguous: True

Transposed tensor:
tensor([[10., 40.],
        [20., 50.],
        [30., 60.]])
Is contiguous: False

The original tensor is contiguous, but its transpose is not - even though the data looks correct when printed.

Why Does Contiguity Matter?

Several PyTorch operations require contiguous tensors. If you pass a non-contiguous tensor, you'll get a runtime error.

Common Error with view()

The view() method reshapes a tensor but requires it to be contiguous:

import torch

tens = torch.tensor([[1.0, 2.0, 3.0],
                     [4.0, 5.0, 6.0]])

tens_transpose = tens.transpose(0, 1)

# This will fail
tens_transpose.view(6)

Output:

RuntimeError: view size is not compatible with input tensor's size and stride (at least one dimension spans across two contiguous subspaces). Use .reshape(...) instead.

Warning: view() only works on contiguous tensors. If you're unsure about contiguity, use reshape() instead - it works regardless of whether the tensor is contiguous.
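
For example, the same transposed tensor that fails with view() reshapes without error:

import torch

tens = torch.tensor([[1.0, 2.0, 3.0],
                     [4.0, 5.0, 6.0]])

tens_transpose = tens.transpose(0, 1)

# reshape() silently copies the data because the input is non-contiguous
print(tens_transpose.reshape(6))  # tensor([1., 4., 2., 5., 3., 6.])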

Making a Tensor Contiguous

If you need a contiguous tensor, call the contiguous() method. It returns a new tensor with the same data copied into a contiguous block of memory. If the tensor is already contiguous, it returns the same tensor without copying.

import torch

tens = torch.tensor([[1.0, 2.0, 3.0],
                     [4.0, 5.0, 6.0]])

tens_transpose = tens.transpose(0, 1)

print(f"Before contiguous(): {tens_transpose.is_contiguous()}")

# Make it contiguous
tens_contiguous = tens_transpose.contiguous()

print(f"After contiguous(): {tens_contiguous.is_contiguous()}")

# Now view() works
reshaped = tens_contiguous.view(6)
print(f"Reshaped: {reshaped}")

Output:

Before contiguous(): False
After contiguous(): True
Reshaped: tensor([1., 4., 2., 5., 3., 6.])
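
You can confirm the no-copy behavior directly: on an already contiguous tensor, contiguous() hands back the very same object, while a non-contiguous input gets fresh storage. A quick check using data_ptr():

import torch

tens = torch.tensor([[1.0, 2.0],
                     [3.0, 4.0]])

# Already contiguous: the same object comes back, nothing is copied
print(tens.contiguous() is tens)  # True

# Non-contiguous: contiguous() allocates new storage for the copy
tens_t = tens.transpose(0, 1)
print(tens_t.contiguous().data_ptr() == tens_t.data_ptr())  # False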

Understanding Strides

To understand why a tensor becomes non-contiguous, examine its strides. Strides indicate how many memory positions to skip to move to the next element along each dimension.

import torch

tens = torch.tensor([[1.0, 2.0, 3.0],
                     [4.0, 5.0, 6.0]])

print(f"Original - shape: {tens.shape}, stride: {tens.stride()}, contiguous: {tens.is_contiguous()}")

tens_t = tens.transpose(0, 1)
print(f"Transposed - shape: {tens_t.shape}, stride: {tens_t.stride()}, contiguous: {tens_t.is_contiguous()}")

tens_c = tens_t.contiguous()
print(f"Contiguous - shape: {tens_c.shape}, stride: {tens_c.stride()}, contiguous: {tens_c.is_contiguous()}")

Output:

Original  - shape: torch.Size([2, 3]), stride: (3, 1), contiguous: True
Transposed - shape: torch.Size([3, 2]), stride: (1, 3), contiguous: False
Contiguous - shape: torch.Size([3, 2]), stride: (2, 1), contiguous: True
  • Original (2×3): stride (3, 1) means skip 3 elements to move down a row, 1 element to move across a column - standard row-major order.
  • Transposed (3×2): stride (1, 3) means the strides are swapped, so elements aren't accessed sequentially in memory - non-contiguous (the sketch below verifies this by hand).
  • After contiguous() (3×2): stride (2, 1) is back to standard row-major order - the data has been copied into a fresh memory layout.
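
The stride arithmetic is easy to verify by hand: for strides (s0, s1), element [i][j] sits at flat offset i*s0 + j*s1 in the underlying storage. A small sketch that reads the transposed tensor through the original tensor's flattened storage:

import torch

tens = torch.tensor([[1.0, 2.0, 3.0],
                     [4.0, 5.0, 6.0]])
tens_t = tens.transpose(0, 1)   # shape (3, 2), stride (1, 3)

flat = tens.flatten()           # the shared storage in its original order
s0, s1 = tens_t.stride()

i, j = 2, 1                     # pick element tens_t[2][1]
offset = i * s0 + j * s1        # 2*1 + 1*3 = 5
print(flat[offset].item(), tens_t[i, j].item())  # 6.0 6.0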

Operations That Can Produce Non-Contiguous Tensors

Here are common operations that may result in non-contiguous tensors:

Operation          | Example             | Contiguous?
transpose()        | t.transpose(0, 1)   | ❌ Usually not
permute()          | t.permute(2, 0, 1)  | ❌ Usually not
expand()           | t.expand(3, 4)      | ❌ Usually not
narrow() / slicing | t[:, ::2]           | ❌ Often not
contiguous()       | t.contiguous()      | ✅ Always
clone()            | t.clone()           | ⚠️ Preserves the input's layout (strides are kept by default)
reshape()          | t.reshape(6)        | ⚠️ Copies to contiguous only when a view isn't possible
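
A quick loop confirms these behaviors; the exact results assume a fresh contiguous tensor as the starting point:

import torch

t = torch.arange(6.0).reshape(2, 3)  # contiguous 2x3 tensor

checks = {
    "transpose(0, 1)":         t.transpose(0, 1),
    "permute(1, 0)":           t.permute(1, 0),
    "expand to (3, 2)":        torch.tensor([1.0, 2.0]).expand(3, 2),
    "slicing t[:, ::2]":       t[:, ::2],
    "contiguous()":            t.transpose(0, 1).contiguous(),
    "clone() of contiguous":   t.clone(),
    "clone() of transpose":    t.transpose(0, 1).clone(),
    "reshape(6) of transpose": t.transpose(0, 1).reshape(6),
}

for name, result in checks.items():
    print(f"{name}: {result.is_contiguous()}")

# transpose(0, 1): False
# permute(1, 0): False
# expand to (3, 2): False
# slicing t[:, ::2]: False
# contiguous(): True
# clone() of contiguous: True
# clone() of transpose: False (clone preserves the input's strides)
# reshape(6) of transpose: True (a view wasn't possible, so it copied)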

Best Practices

Practical Guidelines
  1. Use reshape() instead of view() when you're not sure about contiguity - reshape() handles both cases automatically.
  2. Call contiguous() explicitly before operations that require it, especially when passing tensors to C/C++ extensions or custom CUDA kernels.
  3. Avoid unnecessary contiguous() calls - they copy data, which consumes memory and time. Check with is_contiguous() first if performance matters (see the helper sketch after this list).
  4. Be aware after transpose() and permute() - these are the most common sources of non-contiguous tensors.
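
Putting guidelines 2 and 3 together, a small helper can make the potentially expensive copy explicit before tensors reach layout-sensitive code. This is a sketch, and ensure_contiguous is our own name rather than a PyTorch API:

import torch
import warnings

def ensure_contiguous(t: torch.Tensor) -> torch.Tensor:
    # Hypothetical helper: contiguous() alone would suffice, since it
    # already returns self when no copy is needed. The explicit check
    # just surfaces the (potentially expensive) copy when it happens.
    if not t.is_contiguous():
        warnings.warn("ensure_contiguous: copying a non-contiguous tensor")
        return t.contiguous()
    return t

# Usage: guard tensors before code that assumes a contiguous layout,
# such as custom C/C++ extensions or CUDA kernels
x = torch.arange(6.0).reshape(2, 3).transpose(0, 1)
y = ensure_contiguous(x)
print(y.is_contiguous())  # True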

Conclusion

Checking tensor contiguity in PyTorch is straightforward with the is_contiguous() method:

  • A tensor is contiguous when its elements are stored sequentially in memory following row-major order.
  • Operations like transpose(), permute(), and slicing can produce non-contiguous tensors by changing strides without moving data.
  • Use contiguous() to create a contiguous copy when needed, or use reshape() instead of view() to avoid contiguity-related errors altogether.

Understanding contiguity helps you write more efficient PyTorch code and avoid subtle runtime errors, especially when working with complex tensor manipulations in deep learning pipelines.