
How to Resize a Tensor in PyTorch

Resizing tensors is one of the most common operations in deep learning. Whether you're preparing input data for a neural network, reshaping feature maps between layers, or adjusting tensor dimensions for matrix operations, you'll need to change tensor shapes frequently. PyTorch provides several methods to resize tensors, each suited for different scenarios.

In this guide, you'll learn the main methods to resize tensors in PyTorch - view(), reshape(), resize_(), and unsqueeze()/squeeze() - understand when to use each one, and avoid common pitfalls.

Key Rule: Total Elements Must Match

Before resizing, remember this fundamental rule: the total number of elements must remain the same before and after resizing (except with resize_(), which can add or discard elements).

For example, a tensor with 12 elements can be reshaped to (2, 6), (3, 4), (6, 2), (2, 2, 3), etc. - but not to (2, 5) because 2 × 5 = 10 ≠ 12.
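You can check this rule programmatically with numel(), which returns the total number of elements. A quick sketch:

```python
import torch

tens = torch.arange(12)  # 12 elements

# A new shape is valid only if the product of its dims equals numel()
print(tens.numel())              # 12
print(tens.reshape(3, 4).shape)  # torch.Size([3, 4])

# (2, 5) would need 10 elements, so PyTorch raises a RuntimeError
try:
    tens.reshape(2, 5)
except RuntimeError as e:
    print("Error:", e)
```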

Method 1: Using view()

The view() method returns a new tensor with the same underlying data but a different shape. It's the most commonly used reshaping method in PyTorch.

torch.Tensor.view(*shape)

Resizing a 1D Tensor to 2D

import torch

tens = torch.Tensor([10, 20, 30, 40, 50, 60])
print("Original:", tens)
print("Shape: ", tens.shape)

# Resize to 2x3
result = tens.view(2, 3)
print("\nResized to 2×3:")
print(result)

Output:

Original: tensor([10., 20., 30., 40., 50., 60.])
Shape: torch.Size([6])

Resized to 2×3:
tensor([[10., 20., 30.],
        [40., 50., 60.]])

Using -1 to Infer a Dimension

You can pass -1 for one dimension, and PyTorch will automatically calculate its size based on the total number of elements:

import torch

tens = torch.Tensor([10, 20, 30, 40, 50, 60])

# PyTorch infers the missing dimension
result_a = tens.view(2, -1) # Infers 3 columns → (2, 3)
result_b = tens.view(-1, 2) # Infers 3 rows → (3, 2)
result_c = tens.view(-1, 3) # Infers 2 rows → (2, 3)

print("view(2, -1):", result_a.shape)
print("view(-1, 2):", result_b.shape)
print("view(-1, 3):", result_c.shape)

Output:

view(2, -1): torch.Size([2, 3])
view(-1, 2): torch.Size([3, 2])
view(-1, 3): torch.Size([2, 3])
Tip: The -1 shorthand is extremely useful when you know one dimension but want PyTorch to figure out the other. It's commonly used in neural networks to flatten a batch of images: x.view(batch_size, -1).
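Note that only one dimension may be -1; a single -1 with no other dimensions flattens the tensor completely. A short sketch:

```python
import torch

tens = torch.Tensor([10, 20, 30, 40, 50, 60]).view(2, 3)

# A lone -1 flattens the tensor back to 1D
flat = tens.view(-1)
print(flat.shape)  # torch.Size([6])

# Only one dimension can be inferred; two -1s raise a RuntimeError
try:
    tens.view(-1, -1)
except RuntimeError as e:
    print("Error:", e)
```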

Resizing a 2D Tensor

import torch

tens = torch.Tensor([[1, 2, 3],
                     [4, 5, 6],
                     [7, 8, 9],
                     [10, 11, 12]])

print("Original shape:", tens.shape)

result_a = tens.view(2, 6)
result_b = tens.view(6, 2)
result_c = tens.view(3, 4)

print("\n2×6:")
print(result_a)
print("\n6×2:")
print(result_b)
print("\n3×4:")
print(result_c)

Output:

Original shape: torch.Size([4, 3])

2×6:
tensor([[ 1.,  2.,  3.,  4.,  5.,  6.],
        [ 7.,  8.,  9., 10., 11., 12.]])

6×2:
tensor([[ 1.,  2.],
        [ 3.,  4.],
        [ 5.,  6.],
        [ 7.,  8.],
        [ 9., 10.],
        [11., 12.]])

3×4:
tensor([[ 1.,  2.,  3.,  4.],
        [ 5.,  6.,  7.,  8.],
        [ 9., 10., 11., 12.]])

Method 2: Using reshape()

The reshape() method works similarly to view() but is more flexible - it can handle both contiguous and non-contiguous tensors:

import torch

tens = torch.tensor([10, 20, 30, 40, 50, 60, 70, 80])
print("Original:", tens)

# Reshape to 2×4
result_a = tens.reshape(2, 4)
print("\n2×4:")
print(result_a)

# Reshape to 4×2
result_b = tens.reshape(4, 2)
print("\n4×2:")
print(result_b)

# Reshape to 2×2×2
result_c = tens.reshape(2, 2, 2)
print("\n2×2×2:")
print(result_c)

Output:

Original: tensor([10, 20, 30, 40, 50, 60, 70, 80])

2×4:
tensor([[10, 20, 30, 40],
        [50, 60, 70, 80]])

4×2:
tensor([[10, 20],
        [30, 40],
        [50, 60],
        [70, 80]])

2×2×2:
tensor([[[10, 20],
         [30, 40]],

        [[50, 60],
         [70, 80]]])

view() vs reshape() - When Does It Matter?

import torch

tens = torch.tensor([[1, 2, 3],
                     [4, 5, 6]])

# Transpose makes the tensor non-contiguous
transposed = tens.t()
print("Is contiguous:", transposed.is_contiguous())

# ❌ view() fails on non-contiguous tensors
try:
    result = transposed.view(6)
except RuntimeError as e:
    print(f"view() error: {e}")

# ✅ reshape() handles non-contiguous tensors
result = transposed.reshape(6)
print("reshape() result:", result)

Output:

Is contiguous: False
view() error: view size is not compatible with input tensor's size and stride (at least one dimension spans across two contiguous subspaces). Use .reshape(...) instead.
reshape() result: tensor([1, 4, 2, 5, 3, 6])
view() vs reshape()
  • view() requires the tensor to be contiguous in memory. It always returns a tensor that shares the same data (no copy).
  • reshape() works on any tensor. It returns a view when possible, or creates a copy when the tensor is non-contiguous.

Rule of thumb: Use reshape() when you're not sure about memory layout. Use view() when you want to guarantee no data copying occurs.
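The difference is observable in practice: a view() shares storage with the original, so writes through the view are visible in the source tensor, while reshape() on a non-contiguous tensor returns an independent copy. A small sketch:

```python
import torch

tens = torch.tensor([1, 2, 3, 4, 5, 6])
v = tens.view(2, 3)

# view() shares storage: writing through the view changes the original
v[0, 0] = 99
print(tens)  # tensor([99,  2,  3,  4,  5,  6])

# reshape() on a non-contiguous tensor returns a copy instead
t = torch.tensor([[1, 2, 3], [4, 5, 6]]).t()  # transpose -> non-contiguous
r = t.reshape(6)
r[0] = 99
print(t[0, 0])  # tensor(1) -- the original is untouched
```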

Method 3: Using resize_() (In-Place, Use with Caution)

The resize_() method is an in-place operation that can change both the shape and the total number of elements. If the new size is larger, the additional values are uninitialized (contain arbitrary data). If smaller, excess elements are discarded.

import torch

tens = torch.tensor([1, 2, 3, 4, 5, 6])
print("Original:", tens)

# Resize to 2×3 (same total elements)
tens.resize_(2, 3)
print("\nResized to 2×3:")
print(tens)

Output:

Original: tensor([1, 2, 3, 4, 5, 6])

Resized to 2×3:
tensor([[1, 2, 3],
        [4, 5, 6]])
Caution: resize_() is dangerous because it can create tensors with uninitialized memory when the new size is larger than the original:

import torch

tens = torch.tensor([1, 2, 3])

# Resizing to a LARGER size: new elements contain garbage values!
tens.resize_(2, 3)
print(tens)

Output (values after index 2 are unpredictable):

tensor([[                  1,                   2,                   3],
        [8533869532566003722, 7306069384733224543, 6062475618639241574]])

Avoid resize_() in most cases. Use view() or reshape() instead, which enforce element count consistency.

Method 4: Using unsqueeze() and squeeze()

These methods add or remove dimensions of size 1, which is essential for broadcasting and matching tensor dimensions in neural networks.

unsqueeze() - Add a Dimension

import torch

tens = torch.tensor([10, 20, 30, 40, 50])
print("Original shape:", tens.shape)

# Add dimension at position 0 → row vector
result_0 = tens.unsqueeze(0)
print("\nunsqueeze(0):", result_0.shape)
print(result_0)

# Add dimension at position 1 → column vector
result_1 = tens.unsqueeze(1)
print("\nunsqueeze(1):", result_1.shape)
print(result_1)

Output:

Original shape: torch.Size([5])

unsqueeze(0): torch.Size([1, 5])
tensor([[10, 20, 30, 40, 50]])

unsqueeze(1): torch.Size([5, 1])
tensor([[10],
        [20],
        [30],
        [40],
        [50]])
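The practical payoff of unsqueeze() is broadcasting: once a size-1 dimension is in place, PyTorch expands it automatically in elementwise operations. As a sketch, combining a column vector with a row vector produces a full matrix:

```python
import torch

row = torch.tensor([1, 2, 3])
col = torch.tensor([10, 20])

# (2, 1) + (1, 3) broadcasts to (2, 3)
result = col.unsqueeze(1) + row.unsqueeze(0)
print(result.shape)  # torch.Size([2, 3])
print(result)
# tensor([[11, 12, 13],
#         [21, 22, 23]])
```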

squeeze() - Remove Dimensions of Size 1

import torch

tens = torch.tensor([[[1, 2, 3]]])
print("Original shape:", tens.shape)

# Remove all dimensions of size 1
result = tens.squeeze()
print("After squeeze():", result.shape)
print(result)

Output:

Original shape: torch.Size([1, 1, 3])
After squeeze(): torch.Size([3])
tensor([1, 2, 3])
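Be careful: squeeze() with no arguments removes every size-1 dimension, which can silently drop a batch dimension when the batch happens to contain one sample. Passing a dim argument removes only that dimension:

```python
import torch

# Batch of 1 grayscale image: (batch=1, channels=1, height=3, width=3)
tens = torch.zeros(1, 1, 3, 3)

# squeeze() drops BOTH size-1 dims, losing the batch dimension
print(tens.squeeze().shape)   # torch.Size([3, 3])

# squeeze(1) removes only the channel dimension
print(tens.squeeze(1).shape)  # torch.Size([1, 3, 3])
```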

Practical Example: Reshaping for a Neural Network

A common use case is reshaping image tensors before feeding them into fully connected layers:

import torch

# Simulated batch of 4 grayscale images, 28×28 pixels
images = torch.randn(4, 1, 28, 28)
print("Original shape:", images.shape)

# Flatten each image for a fully connected layer
# Keep batch dimension, flatten everything else
flat = images.view(4, -1)
print("Flattened shape:", flat.shape)

# Alternative using reshape
flat_alt = images.reshape(images.size(0), -1)
print("Flattened (reshape):", flat_alt.shape)

Output:

Original shape: torch.Size([4, 1, 28, 28])
Flattened shape: torch.Size([4, 784])
Flattened (reshape): torch.Size([4, 784])
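For this batch-flattening pattern, torch.flatten offers a third option that states the intent explicitly via start_dim; a quick sketch:

```python
import torch

images = torch.randn(4, 1, 28, 28)

# Flatten everything from dim 1 onward, keeping the batch dimension
flat = torch.flatten(images, start_dim=1)
print(flat.shape)  # torch.Size([4, 784])
```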

Common Mistake: Element Count Mismatch

Trying to reshape a tensor into a shape with a different total number of elements raises an error:

import torch

tens = torch.tensor([1, 2, 3, 4, 5])

# ❌ 5 elements cannot be reshaped to 2×3 (6 elements)
try:
    result = tens.view(2, 3)
except RuntimeError as e:
    print(f"Error: {e}")

Output:

Error: shape '[2, 3]' is invalid for input of size 5

Fix: Ensure the product of the new dimensions equals the total number of elements:

# ✅ 5 elements → (1, 5) or (5, 1)
result = tens.view(1, 5)
print(result)

Comparison of Methods

| Method      | In-Place       | Requires Contiguous | Preserves Element Count | Best For                                   |
|-------------|----------------|---------------------|-------------------------|--------------------------------------------|
| view()      | No             | Yes                 | ✅ (enforced)           | Fast reshaping, guaranteed no copy         |
| reshape()   | No             | No                  | ✅ (enforced)           | General reshaping (most flexible)          |
| resize_()   | Yes            | No                  | ❌ (can change)         | Low-level memory manipulation (use rarely) |
| unsqueeze() | No             | No                  | Yes                     | Adding a single dimension                  |
| squeeze()   | No             | No                  | Yes                     | Removing size-1 dimensions                 |

Summary

To resize tensors in PyTorch:

  • Use reshape() as your default choice - it's the most flexible and handles both contiguous and non-contiguous tensors.
  • Use view() when you want to guarantee no data is copied and the tensor is contiguous in memory.
  • Use unsqueeze() and squeeze() to add or remove dimensions of size 1, which is essential for broadcasting and layer compatibility.
  • Avoid resize_() unless you have a specific low-level need - it can create tensors with uninitialized garbage data.
  • Use -1 in any dimension to let PyTorch calculate the size automatically.