How to Implement Resumable File Uploads in JavaScript
Introduction
Uploading large files over the internet is inherently unreliable. A user uploading a 500MB video on a mobile connection can lose their progress at any moment: the network drops, the browser tab crashes, the laptop lid closes, or the server restarts. With a standard upload, the entire file must be re-sent from the beginning. For a user who was 95% through a large upload, this is infuriating.
Resumable uploads solve this problem. Instead of sending the entire file as a single request, the file is split into chunks and uploaded piece by piece. If the connection drops, the client asks the server "how much did you receive?" and resumes from exactly where it left off, without re-uploading the data that already arrived.
This is not a built-in browser feature. There is no single API call that makes uploads resumable. Instead, it requires a protocol between the client and server: a set of agreed-upon rules for starting an upload, sending chunks, querying progress, and resuming after interruption. The client and server must both understand and follow these rules.
In this guide, you will learn the core concepts behind resumable uploads, how to design a simple protocol using HTTP headers, and how to implement the client side from scratch (with XMLHttpRequest for upload progress and fetch() for control flow), along with a minimal reference server.
The Concept of Resumable Uploads
Why Standard Uploads Fail
A standard file upload sends the entire file in a single HTTP request:
// Standard upload - all or nothing
const formData = new FormData();
formData.append('file', largeFile); // 500MB file
await fetch('/api/upload', {
method: 'POST',
body: formData
});
If this request fails at any point (after 1 byte or after 499MB), the entire upload must restart. The server has no way to accept a partial upload and continue later because:
- The server does not know what file is being resumed
- The server does not know how much data it already received
- The client does not know where to continue from
How Resumable Uploads Work
A resumable upload breaks the process into discrete, recoverable steps:
Step 1: INITIATE
Client → Server: "I want to upload photo.jpg, 500MB, type image/jpeg"
Server → Client: "OK, your upload ID is abc123"
Step 2: UPLOAD CHUNKS
Client → Server: "Here are bytes 0-1048575 of abc123" (1MB chunk)
Client → Server: "Here are bytes 1048576-2097151 of abc123" (next 1MB)
Client → Server: "Here are bytes 2097152-3145727 of abc123" (next 1MB)
... connection drops at byte 250,000,000 ...
Step 3: RESUME
Client → Server: "How much of abc123 did you receive?"
Server → Client: "I have 249,561,088 bytes"
Client → Server: "Here are bytes 249,561,088-250,609,663" (resume from here)
... continues until complete ...
Step 4: COMPLETE
Server → Client: "Upload abc123 complete. File saved."
The key elements are:
- Upload ID: A unique identifier that links the client and server across multiple requests and sessions
- Chunk transfer: The file is sent in manageable pieces, each identified by its byte range
- Progress query: The client can ask the server how much data was received, enabling resume after any interruption
- Idempotent chunks: Re-sending a chunk that was already received does no harm
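The chunk-transfer element above can be sketched as a pure function that derives each chunk's byte range from the file size. This is an illustrative sketch (`computeChunkRanges` is not a browser API), with the 1MB default mirroring the diagram:

```javascript
// Split a file size into inclusive [start, end] byte ranges,
// matching the ranges shown in the diagram above
function computeChunkRanges(fileSize, chunkSize = 1024 * 1024) {
  const ranges = [];
  for (let start = 0; start < fileSize; start += chunkSize) {
    ranges.push({ start, end: Math.min(start + chunkSize, fileSize) - 1 });
  }
  return ranges;
}

// A 3.5MB file in 1MB chunks yields four ranges; the last one is partial
const ranges = computeChunkRanges(3.5 * 1024 * 1024);
console.log(ranges[0]); // { start: 0, end: 1048575 }
console.log(ranges[3]); // { start: 3145728, end: 3670015 }
```

Note that the end offset is inclusive, which is why each range subtracts 1; this matches how the Content-Range header expresses byte ranges later in this guide.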
Chunk Size Considerations
The chunk size is a trade-off between several factors:
| Chunk Size | Advantages | Disadvantages |
|---|---|---|
| Small (256KB-1MB) | Fast recovery, granular progress | More HTTP overhead, more requests |
| Medium (5MB-10MB) | Good balance | Moderate retry cost |
| Large (50MB-100MB) | Fewer requests, less overhead | Large retry cost, more memory |
For most applications, 1MB to 10MB chunks provide a good balance. Mobile-oriented applications may benefit from smaller chunks (256KB to 1MB) due to less reliable connections.
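A quick back-of-the-envelope calculation makes the overhead column concrete: the request count is simply the ceiling of file size over chunk size (`requestCount` is an illustrative name, not part of any API):

```javascript
// Number of HTTP requests needed to transfer a file at a given chunk size
function requestCount(fileSize, chunkSize) {
  return Math.ceil(fileSize / chunkSize);
}

const fileSize = 500 * 1024 * 1024; // the 500MB example from the introduction
console.log(requestCount(fileSize, 1 * 1024 * 1024));  // 1MB chunks -> 500
console.log(requestCount(fileSize, 5 * 1024 * 1024));  // 5MB chunks -> 100
console.log(requestCount(fileSize, 50 * 1024 * 1024)); // 50MB chunks -> 10
```

The retry cost moves in the opposite direction: when a request fails near completion, at most one chunk's worth of bytes must be re-sent, so larger chunks mean more wasted transfer per failure.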
Protocol Design
There is no single universally adopted standard for resumable uploads, though the open tus protocol is widely used and the IETF has a draft specification based on it. We will design a simple, practical protocol that demonstrates the core concepts.
Overview
Our protocol uses four types of HTTP requests:
| Step | Method | Endpoint | Purpose |
|---|---|---|---|
| Initiate | POST | /upload/start | Start a new upload, get an upload ID |
| Send chunk | POST | /upload/{id} | Send a chunk of file data |
| Query progress | GET | /upload/{id}/status | Ask how much data the server has |
| Complete | N/A | Automatic | Server detects when all bytes arrive |
Step 1: Initiate the Upload
The client sends metadata about the file. The server creates an upload session and returns a unique ID.
Request:
POST /upload/start HTTP/1.1
Content-Type: application/json
{
"filename": "vacation-video.mp4",
"fileSize": 524288000,
"fileType": "video/mp4",
"lastModified": 1700000000000
}
Response:
HTTP/1.1 200 OK
Content-Type: application/json
{
"uploadId": "abc123-def456-ghi789",
"chunkSize": 5242880,
"expiresAt": "2024-01-15T00:00:00Z"
}
The uploadId is the key to the entire protocol. It uniquely identifies this upload session. If the client disconnects and comes back hours later, it uses this ID to resume.
The server may also suggest a chunkSize and an expiration time (after which incomplete uploads are deleted).
Step 2: Send Chunks
Each chunk is sent as a separate request with headers indicating which bytes are being transmitted.
Request:
POST /upload/abc123-def456-ghi789 HTTP/1.1
Content-Type: application/octet-stream
Content-Range: bytes 0-5242879/524288000
X-Upload-Id: abc123-def456-ghi789
[binary data: 5MB chunk]
Response (chunk accepted, more data expected):
HTTP/1.1 200 OK
Content-Type: application/json
{
"received": 5242880,
"total": 524288000,
"complete": false
}
Response (final chunk, upload complete):
HTTP/1.1 201 Created
Content-Type: application/json
{
"received": 524288000,
"total": 524288000,
"complete": true,
"fileUrl": "/files/vacation-video.mp4"
}
The Content-Range Header
The Content-Range header is a standard HTTP header that specifies which bytes of the full file are included in this request:
Content-Range: bytes START-END/TOTAL
Content-Range: bytes 0-5242879/524288000 (first 5MB of 500MB)
Content-Range: bytes 5242880-10485759/524288000 (second 5MB)
Content-Range: bytes 519045120-524287999/524288000 (last 5MB)
- START: The zero-based byte offset of the first byte in this chunk
- END: The zero-based byte offset of the last byte in this chunk (inclusive)
- TOTAL: The total file size in bytes
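As a sketch, a pair of small helpers can build and parse this header value (both function names are illustrative; the parser mirrors the regular-expression approach a server would typically use):

```javascript
// Build a Content-Range value for a chunk (END is inclusive)
function formatContentRange(start, end, total) {
  return `bytes ${start}-${end}/${total}`;
}

// Parse a Content-Range value back into numbers; returns null if malformed
function parseContentRange(value) {
  const match = /^bytes (\d+)-(\d+)\/(\d+)$/.exec(value);
  if (!match) return null;
  return {
    start: parseInt(match[1], 10),
    end: parseInt(match[2], 10),
    total: parseInt(match[3], 10)
  };
}

console.log(formatContentRange(0, 5242879, 524288000));
// "bytes 0-5242879/524288000"
console.log(parseContentRange('bytes 5242880-10485759/524288000'));
// { start: 5242880, end: 10485759, total: 524288000 }
```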
Step 3: Query Progress (Resume)
When the client needs to resume, it asks the server how much data was successfully received:
Request:
GET /upload/abc123-def456-ghi789/status HTTP/1.1
Response:
HTTP/1.1 200 OK
Content-Type: application/json
{
"uploadId": "abc123-def456-ghi789",
"received": 262144000,
"total": 524288000,
"complete": false,
"expiresAt": "2024-01-15T00:00:00Z"
}
The client now knows to resume from byte 262,144,000.
Response (upload ID not found or expired):
HTTP/1.1 404 Not Found
Content-Type: application/json
{
"error": "Upload session not found or expired"
}
If the session is lost, the client must start a new upload from the beginning.
Generating a Stable Upload ID on the Client
Instead of relying solely on the server to generate an upload ID, the client can generate a file fingerprint to identify the same file across sessions. This way, if the user selects the same file again after a browser crash, the client can detect the existing upload:
async function generateFileId(file) {
// Create a fingerprint from file metadata
const raw = `${file.name}-${file.size}-${file.lastModified}-${file.type}`;
// Hash it for a compact, consistent ID
const encoder = new TextEncoder();
const data = encoder.encode(raw);
const hashBuffer = await crypto.subtle.digest('SHA-256', data);
const hashArray = Array.from(new Uint8Array(hashBuffer));
const hashHex = hashArray.map(b => b.toString(16).padStart(2, '0')).join('');
return hashHex;
}
const file = fileInput.files[0];
const fileId = await generateFileId(file);
console.log(fileId); // "a3f2b8c9d1e4f5a6b7c8d9e0f1a2b3c4..."
The server can use this client-generated ID to look up existing uploads for the same file, allowing resume even across different browser sessions.
Implementation: The Resumable Upload Client
Let us build a complete, robust resumable upload client. We will use XMLHttpRequest for sending chunks (to get per-chunk upload progress) and structure the code as a reusable class.
The ResumableUpload Class
class ResumableUpload {
constructor(file, options = {}) {
this.file = file;
this.baseURL = options.baseURL || '/upload';
this.chunkSize = options.chunkSize || 5 * 1024 * 1024; // 5MB default
this.maxRetries = options.maxRetries || 5;
this.retryDelay = options.retryDelay || 1000;
this.uploadId = null;
this.offset = 0;
this.aborted = false;
this.paused = false;
this.currentXHR = null;
// Callbacks
this.onProgress = options.onProgress || null;
this.onComplete = options.onComplete || null;
this.onError = options.onError || null;
this.onStatusChange = options.onStatusChange || null;
}
setStatus(status) {
if (this.onStatusChange) {
this.onStatusChange(status);
}
}
async start() {
this.aborted = false;
this.paused = false;
try {
// Step 1: Generate a file fingerprint
const fileId = await this.generateFileId();
// Step 2: Check if a previous upload exists
this.setStatus('checking');
const existing = await this.checkExistingUpload(fileId);
if (existing && !existing.complete) {
// Resume existing upload
this.uploadId = existing.uploadId;
this.offset = existing.received;
this.setStatus('resuming');
console.log(`Resuming upload from byte ${this.offset}`);
} else if (existing && existing.complete) {
// File already uploaded
this.setStatus('complete');
if (this.onComplete) this.onComplete(existing);
return;
} else {
// Start new upload
this.setStatus('initiating');
await this.initiate(fileId);
}
// Step 3: Upload chunks
this.setStatus('uploading');
await this.uploadChunks();
} catch (error) {
if (this.aborted) return;
this.setStatus('error');
if (this.onError) this.onError(error);
}
}
async generateFileId() {
const raw = `${this.file.name}-${this.file.size}-${this.file.lastModified}`;
const encoder = new TextEncoder();
const data = encoder.encode(raw);
const hashBuffer = await crypto.subtle.digest('SHA-256', data);
const hashArray = Array.from(new Uint8Array(hashBuffer));
return hashArray.map(b => b.toString(16).padStart(2, '0')).join('');
}
async checkExistingUpload(fileId) {
try {
const response = await fetch(`${this.baseURL}/status/${fileId}`);
if (response.ok) {
return await response.json();
}
return null; // No existing upload
} catch {
return null; // Server unreachable, will start fresh
}
}
async initiate(fileId) {
const response = await fetch(`${this.baseURL}/start`, {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({
fileId: fileId,
filename: this.file.name,
fileSize: this.file.size,
fileType: this.file.type,
lastModified: this.file.lastModified
})
});
if (!response.ok) {
throw new Error(`Failed to initiate upload: ${response.status}`);
}
const data = await response.json();
this.uploadId = data.uploadId;
// Honor the server's suggested chunk size, if it provides one
if (data.chunkSize) this.chunkSize = data.chunkSize;
this.offset = 0;
// Persist upload ID for recovery across sessions
this.saveSession();
console.log(`Upload initiated with ID: ${this.uploadId}`);
}
async uploadChunks() {
while (this.offset < this.file.size) {
// Check for pause or abort
if (this.aborted) throw new Error('Upload aborted');
if (this.paused) {
await this.waitForResume();
if (this.aborted) throw new Error('Upload aborted');
}
// Extract the current chunk
const end = Math.min(this.offset + this.chunkSize, this.file.size);
const chunk = this.file.slice(this.offset, end);
// Send chunk with retry logic; on a 409 conflict the retry helper
// re-syncs this.offset with the server, so only advance the offset
// when the chunk was actually accepted
const result = await this.sendChunkWithRetry(chunk, this.offset, end - 1);
if (result !== 'resynced') {
this.offset = end;
}
this.saveSession();
// Report overall progress
if (this.onProgress) {
this.onProgress({
loaded: this.offset,
total: this.file.size,
progress: this.offset / this.file.size
});
}
}
// Upload complete
this.setStatus('complete');
this.clearSession();
if (this.onComplete) {
this.onComplete({ uploadId: this.uploadId, size: this.file.size });
}
}
sendChunk(chunk, start, end) {
return new Promise((resolve, reject) => {
const xhr = new XMLHttpRequest();
this.currentXHR = xhr;
xhr.open('POST', `${this.baseURL}/${this.uploadId}`);
xhr.setRequestHeader('Content-Type', 'application/octet-stream');
xhr.setRequestHeader('Content-Range',
`bytes ${start}-${end}/${this.file.size}`
);
// Per-chunk upload progress
xhr.upload.onprogress = (event) => {
if (event.lengthComputable && this.onProgress) {
// Calculate total progress including previous chunks
const totalLoaded = start + event.loaded;
this.onProgress({
loaded: totalLoaded,
total: this.file.size,
progress: totalLoaded / this.file.size,
chunkLoaded: event.loaded,
chunkTotal: event.total
});
}
};
xhr.onload = () => {
this.currentXHR = null;
if (xhr.status >= 200 && xhr.status < 300) {
resolve(xhr.response ? JSON.parse(xhr.response) : null);
} else if (xhr.status === 409) {
// Conflict - server has different offset, need to re-sync
reject(new ChunkConflictError(xhr.status, xhr.response));
} else {
reject(new Error(`Chunk upload failed: ${xhr.status}`));
}
};
xhr.onerror = () => {
this.currentXHR = null;
reject(new Error('Network error during chunk upload'));
};
xhr.onabort = () => {
this.currentXHR = null;
reject(new Error('Chunk upload aborted'));
};
xhr.send(chunk);
});
}
async sendChunkWithRetry(chunk, start, end) {
let lastError;
for (let attempt = 1; attempt <= this.maxRetries; attempt++) {
try {
return await this.sendChunk(chunk, start, end);
} catch (error) {
lastError = error;
if (this.aborted) throw error;
// On conflict, re-sync with server and signal the main loop so it
// retries from the corrected offset instead of advancing past it
if (error instanceof ChunkConflictError) {
await this.resync();
return 'resynced';
}
if (attempt < this.maxRetries) {
const delay = this.retryDelay * Math.pow(2, attempt - 1);
this.setStatus(`retrying`);
console.log(
`Chunk ${start}-${end} failed (attempt ${attempt}/${this.maxRetries}). ` +
`Retrying in ${delay}ms...`
);
await this.delay(delay);
}
}
}
throw new Error(
`Chunk upload failed after ${this.maxRetries} attempts: ${lastError.message}`
);
}
async resync() {
console.log('Re-syncing with server...');
try {
const response = await fetch(`${this.baseURL}/${this.uploadId}/status`);
if (response.ok) {
const data = await response.json();
this.offset = data.received;
console.log(`Re-synced: server has ${this.offset} bytes`);
}
} catch (error) {
console.error('Re-sync failed:', error);
}
}
// Pause/Resume/Abort controls
pause() {
this.paused = true;
this.setStatus('paused');
}
resume() {
this.paused = false;
this.setStatus('uploading');
if (this.resumeResolver) {
this.resumeResolver();
}
}
abort() {
this.aborted = true;
this.paused = false;
if (this.currentXHR) {
this.currentXHR.abort();
}
if (this.resumeResolver) {
this.resumeResolver();
}
this.setStatus('aborted');
this.clearSession();
}
waitForResume() {
return new Promise((resolve) => {
this.resumeResolver = resolve;
});
}
// Session persistence (survives page reload)
saveSession() {
const session = {
uploadId: this.uploadId,
filename: this.file.name,
fileSize: this.file.size,
lastModified: this.file.lastModified,
offset: this.offset,
timestamp: Date.now()
};
try {
localStorage.setItem(`upload_${this.uploadId}`, JSON.stringify(session));
} catch {
// localStorage might be unavailable
}
}
clearSession() {
try {
localStorage.removeItem(`upload_${this.uploadId}`);
} catch {
// Ignore
}
}
static getSavedSessions() {
const sessions = [];
try {
for (let i = 0; i < localStorage.length; i++) {
const key = localStorage.key(i);
if (key.startsWith('upload_')) {
const session = JSON.parse(localStorage.getItem(key));
// Only return sessions less than 24 hours old
if (Date.now() - session.timestamp < 24 * 60 * 60 * 1000) {
sessions.push(session);
} else {
localStorage.removeItem(key);
}
}
}
} catch {
// Ignore
}
return sessions;
}
// Utility
delay(ms) {
return new Promise(resolve => setTimeout(resolve, ms));
}
}
class ChunkConflictError extends Error {
constructor(status, response) {
super(`Chunk conflict: ${status}`);
this.name = 'ChunkConflictError';
this.status = status;
this.response = response;
}
}
Using the ResumableUpload Class
const fileInput = document.getElementById('fileInput');
const progressBar = document.getElementById('progressBar');
const progressText = document.getElementById('progressText');
const statusText = document.getElementById('statusText');
const pauseBtn = document.getElementById('pauseBtn');
const cancelBtn = document.getElementById('cancelBtn');
let upload = null;
fileInput.addEventListener('change', () => {
const file = fileInput.files[0];
if (!file) return;
upload = new ResumableUpload(file, {
baseURL: '/api/upload',
chunkSize: 5 * 1024 * 1024, // 5MB chunks
onProgress: ({ loaded, total, progress }) => {
const percent = (progress * 100).toFixed(1);
progressBar.style.width = `${percent}%`;
progressText.textContent = `${percent}% (${formatBytes(loaded)} / ${formatBytes(total)})`;
},
onComplete: (result) => {
statusText.textContent = 'Upload complete!';
statusText.style.color = 'green';
console.log('Upload finished:', result);
},
onError: (error) => {
statusText.textContent = `Error: ${error.message}`;
statusText.style.color = 'red';
},
onStatusChange: (status) => {
statusText.textContent = `Status: ${status}`;
pauseBtn.textContent = status === 'paused' ? 'Resume' : 'Pause';
}
});
upload.start();
});
pauseBtn.addEventListener('click', () => {
if (!upload) return;
if (upload.paused) {
upload.resume();
} else {
upload.pause();
}
});
cancelBtn.addEventListener('click', () => {
if (upload) upload.abort();
});
function formatBytes(bytes) {
if (bytes < 1024) return `${bytes} B`;
if (bytes < 1048576) return `${(bytes / 1024).toFixed(1)} KB`;
if (bytes < 1073741824) return `${(bytes / 1048576).toFixed(1)} MB`;
return `${(bytes / 1073741824).toFixed(1)} GB`;
}
Recovering from Page Reload
One of the most powerful features of resumable uploads is surviving a page reload or browser crash. Here is how to check for and resume previous uploads:
// On page load, check for incomplete uploads
function checkForResumableUploads() {
const sessions = ResumableUpload.getSavedSessions();
if (sessions.length === 0) return;
const container = document.getElementById('resumableUploads');
sessions.forEach(session => {
const percent = ((session.offset / session.fileSize) * 100).toFixed(1);
const el = document.createElement('div');
el.className = 'resumable-upload-notice';
el.innerHTML = `
<p>
<strong class="upload-name"></strong> - ${percent}% uploaded
(${formatBytes(session.offset)} / ${formatBytes(session.fileSize)})
</p>
<button class="resume-btn">Resume</button>
<button class="discard-btn">Discard</button>
`;
// Set the filename via textContent to avoid HTML injection
el.querySelector('.upload-name').textContent = session.filename;
el.querySelector('.resume-btn').addEventListener('click', () => {
// User must re-select the same file
const input = document.createElement('input');
input.type = 'file';
input.addEventListener('change', () => {
const file = input.files[0];
// Verify it is the same file
if (file.name !== session.filename ||
file.size !== session.fileSize ||
file.lastModified !== session.lastModified) {
alert('This does not appear to be the same file. Please select the original file.');
return;
}
const upload = new ResumableUpload(file, {
baseURL: '/api/upload',
onProgress: ({ progress }) => {
console.log(`Resuming: ${(progress * 100).toFixed(1)}%`);
},
onComplete: () => {
console.log('Resume complete!');
el.remove();
},
onError: (error) => {
console.error('Resume failed:', error);
}
});
upload.start();
});
input.click();
});
el.querySelector('.discard-btn').addEventListener('click', () => {
localStorage.removeItem(`upload_${session.uploadId}`);
el.remove();
});
container.appendChild(el);
});
}
// Call on page load
checkForResumableUploads();
The user must re-select the file to resume because browsers do not allow JavaScript to persist file references across page loads for security reasons. The File object from an <input type="file"> becomes invalid after the page reloads. The client generates the same file fingerprint from the re-selected file's metadata and uses it to match the existing upload session on the server.
Implementation with Fetch Only
If you do not need per-chunk upload progress and prefer the cleaner fetch() API, here is a simplified implementation:
class FetchResumableUpload {
constructor(file, options = {}) {
this.file = file;
this.baseURL = options.baseURL || '/upload';
this.chunkSize = options.chunkSize || 5 * 1024 * 1024;
this.maxRetries = options.maxRetries || 5;
this.uploadId = null;
this.offset = 0;
this.controller = null;
this.onProgress = options.onProgress || null;
this.onComplete = options.onComplete || null;
this.onError = options.onError || null;
}
async start() {
this.controller = new AbortController();
try {
// Initiate or resume
const fileId = await this.generateFileId();
const existing = await this.checkStatus(fileId);
if (existing && existing.complete) {
if (this.onComplete) this.onComplete(existing);
return;
}
if (existing) {
this.uploadId = existing.uploadId;
this.offset = existing.received;
} else {
await this.initiate(fileId);
}
// Upload loop
while (this.offset < this.file.size) {
const end = Math.min(this.offset + this.chunkSize, this.file.size);
const chunk = this.file.slice(this.offset, end);
await this.sendChunkWithRetry(chunk, this.offset, end - 1);
this.offset = end;
if (this.onProgress) {
this.onProgress({
loaded: this.offset,
total: this.file.size,
progress: this.offset / this.file.size
});
}
}
if (this.onComplete) {
this.onComplete({ uploadId: this.uploadId, size: this.file.size });
}
} catch (error) {
if (error.name === 'AbortError') return;
if (this.onError) this.onError(error);
}
}
async sendChunkWithRetry(chunk, start, end) {
let lastError;
for (let attempt = 1; attempt <= this.maxRetries; attempt++) {
try {
const response = await fetch(`${this.baseURL}/${this.uploadId}`, {
method: 'POST',
headers: {
'Content-Type': 'application/octet-stream',
'Content-Range': `bytes ${start}-${end}/${this.file.size}`
},
body: chunk,
signal: this.controller.signal
});
if (!response.ok) {
throw new Error(`HTTP ${response.status}`);
}
return await response.json();
} catch (error) {
if (error.name === 'AbortError') throw error;
lastError = error;
if (attempt < this.maxRetries) {
const delay = 1000 * Math.pow(2, attempt - 1);
await new Promise(r => setTimeout(r, delay));
}
}
}
throw lastError;
}
abort() {
if (this.controller) this.controller.abort();
}
async generateFileId() {
const raw = `${this.file.name}-${this.file.size}-${this.file.lastModified}`;
const data = new TextEncoder().encode(raw);
const hash = await crypto.subtle.digest('SHA-256', data);
return Array.from(new Uint8Array(hash))
.map(b => b.toString(16).padStart(2, '0'))
.join('');
}
async checkStatus(fileId) {
try {
const response = await fetch(`${this.baseURL}/status/${fileId}`, {
signal: this.controller.signal
});
if (response.ok) return await response.json();
return null;
} catch {
return null;
}
}
async initiate(fileId) {
const response = await fetch(`${this.baseURL}/start`, {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({
fileId,
filename: this.file.name,
fileSize: this.file.size,
fileType: this.file.type
}),
signal: this.controller.signal
});
if (!response.ok) throw new Error(`Initiation failed: ${response.status}`);
const data = await response.json();
this.uploadId = data.uploadId;
this.offset = 0;
}
}
Server-Side Reference
While this guide focuses on the client, the server side is equally important. Here is a minimal Express.js server that implements the protocol, for reference and testing:
// server.js (Node.js with Express)
const express = require('express');
const fs = require('fs');
const path = require('path');
const crypto = require('crypto');
const app = express();
app.use(express.json());
const UPLOAD_DIR = path.join(__dirname, 'uploads');
const TEMP_DIR = path.join(__dirname, 'uploads', 'temp');
// Ensure directories exist
fs.mkdirSync(UPLOAD_DIR, { recursive: true });
fs.mkdirSync(TEMP_DIR, { recursive: true });
// In-memory store (use a database in production)
const uploads = new Map();
// Initiate upload
app.post('/upload/start', (req, res) => {
const { fileId, filename, fileSize, fileType } = req.body;
// Check if this file was already started
for (const [id, upload] of uploads) {
if (upload.fileId === fileId && !upload.complete) {
return res.json({
uploadId: id,
received: upload.received,
complete: false
});
}
}
const uploadId = crypto.randomUUID();
const tempPath = path.join(TEMP_DIR, uploadId);
// Create empty file
fs.writeFileSync(tempPath, Buffer.alloc(0));
uploads.set(uploadId, {
fileId,
filename,
fileSize,
fileType,
received: 0,
complete: false,
tempPath,
createdAt: Date.now()
});
res.json({ uploadId, chunkSize: 5 * 1024 * 1024 });
});
// Check status by fileId
app.get('/upload/status/:fileId', (req, res) => {
const { fileId } = req.params;
for (const [id, upload] of uploads) {
if (upload.fileId === fileId) {
return res.json({
uploadId: id,
received: upload.received,
total: upload.fileSize,
complete: upload.complete
});
}
}
res.status(404).json({ error: 'Not found' });
});
// Check status by uploadId
app.get('/upload/:uploadId/status', (req, res) => {
const upload = uploads.get(req.params.uploadId);
if (!upload) return res.status(404).json({ error: 'Not found' });
res.json({
uploadId: req.params.uploadId,
received: upload.received,
total: upload.fileSize,
complete: upload.complete
});
});
// Receive chunk
app.post('/upload/:uploadId', (req, res) => {
const upload = uploads.get(req.params.uploadId);
if (!upload) return res.status(404).json({ error: 'Upload not found' });
if (upload.complete) return res.status(400).json({ error: 'Upload already complete' });
// Parse Content-Range header
const range = req.headers['content-range'];
const match = range && range.match(/bytes (\d+)-(\d+)\/(\d+)/);
if (!match) {
return res.status(400).json({ error: 'Invalid Content-Range header' });
}
const start = parseInt(match[1]);
const end = parseInt(match[2]);
const total = parseInt(match[3]);
// Verify the chunk starts where we expect
if (start !== upload.received) {
return res.status(409).json({
error: 'Offset mismatch',
expected: upload.received,
received: start
});
}
// Collect the raw body
const chunks = [];
req.on('data', chunk => chunks.push(chunk));
req.on('end', () => {
const buffer = Buffer.concat(chunks);
// Append to temp file
fs.appendFileSync(upload.tempPath, buffer);
upload.received += buffer.length;
// Check if upload is complete
if (upload.received >= upload.fileSize) {
upload.complete = true;
const finalPath = path.join(UPLOAD_DIR, upload.filename);
fs.renameSync(upload.tempPath, finalPath);
return res.status(201).json({
received: upload.received,
total: upload.fileSize,
complete: true,
fileUrl: `/files/${upload.filename}`
});
}
res.json({
received: upload.received,
total: upload.fileSize,
complete: false
});
});
});
app.listen(3000, () => console.log('Upload server on port 3000'));
This server implementation is simplified for educational purposes. A production server would need:
- Database storage for upload metadata (not an in-memory Map)
- File integrity checks (checksums for each chunk and the final file)
- Cleanup of expired uploads (cron job or TTL mechanism)
- Concurrent chunk support (for parallel upload of non-sequential chunks)
- Authentication and authorization (who is allowed to upload)
- Disk space management (quotas, temp file cleanup)
- Streaming writes instead of buffering the entire chunk in memory
Existing Protocols and Libraries
Before building your own resumable upload system, consider using an established protocol or library:
TUS Protocol
TUS is an open protocol for resumable uploads with client and server implementations in many languages:
// Using tus-js-client
import * as tus from 'tus-js-client';
const upload = new tus.Upload(file, {
endpoint: 'https://your-server.com/files/',
retryDelays: [0, 1000, 3000, 5000],
chunkSize: 5 * 1024 * 1024,
metadata: {
filename: file.name,
filetype: file.type
},
onError: (error) => {
console.error('Upload failed:', error);
},
onProgress: (bytesUploaded, bytesTotal) => {
const percentage = ((bytesUploaded / bytesTotal) * 100).toFixed(1);
console.log(`${percentage}%`);
},
onSuccess: () => {
console.log('Upload complete:', upload.url);
}
});
upload.start();
Cloud Provider Solutions
Major cloud providers offer their own resumable upload implementations:
- Google Cloud Storage: Resumable uploads via the JSON API
- AWS S3: Multipart upload API
- Azure Blob Storage: Block blob upload with block IDs
These handle server-side complexity and infrastructure, letting you focus on the client experience.
Best Practices
Chunk Integrity Verification
For critical uploads, verify each chunk's integrity using checksums:
async function computeChunkHash(chunk) {
const buffer = await chunk.arrayBuffer();
const hashBuffer = await crypto.subtle.digest('SHA-256', buffer);
const hashArray = Array.from(new Uint8Array(hashBuffer));
return hashArray.map(b => b.toString(16).padStart(2, '0')).join('');
}
// Include hash in the chunk upload
async function sendVerifiedChunk(chunk, start, end) {
const hash = await computeChunkHash(chunk);
const response = await fetch(`${baseURL}/${uploadId}`, {
method: 'POST',
headers: {
'Content-Type': 'application/octet-stream',
'Content-Range': `bytes ${start}-${end}/${fileSize}`,
'X-Chunk-Hash': hash // Server verifies this
},
body: chunk
});
const result = await response.json();
// Server confirms the hash matches
if (result.hashMismatch) {
throw new Error('Chunk corrupted during transfer - retrying');
}
return result;
}
Adaptive Chunk Size
Adjust chunk size based on network conditions:
function getAdaptiveChunkSize(lastChunkTime, lastChunkSize) {
// Target: each chunk should take 5-10 seconds
const targetTime = 7000; // 7 seconds in ms
const speed = lastChunkSize / lastChunkTime; // bytes per ms
let newSize = Math.round(speed * targetTime);
// Clamp between 256KB and 50MB
newSize = Math.max(256 * 1024, newSize);
newSize = Math.min(50 * 1024 * 1024, newSize);
// Round to nearest 256KB
newSize = Math.round(newSize / (256 * 1024)) * (256 * 1024);
return newSize;
}
// Usage in upload loop
let chunkSize = 2 * 1024 * 1024; // Start with 2MB
while (offset < file.size) {
const chunkStart = performance.now();
const chunk = file.slice(offset, offset + chunkSize);
await sendChunk(chunk, offset, offset + chunk.size - 1);
const chunkTime = performance.now() - chunkStart;
chunkSize = getAdaptiveChunkSize(chunkTime, chunk.size);
offset += chunk.size;
}
Offline Detection
Pause uploads when the network is unavailable and resume when it returns:
class NetworkAwareUpload extends ResumableUpload {
constructor(file, options) {
super(file, options);
this.setupNetworkListeners();
}
setupNetworkListeners() {
window.addEventListener('offline', () => {
console.log('Network lost - pausing upload');
this.pause();
this.setStatus('waiting-for-network');
});
window.addEventListener('online', () => {
console.log('Network restored - resuming upload');
this.setStatus('resuming');
// Small delay to let the connection stabilize
setTimeout(() => this.resume(), 2000);
});
}
}
Summary
Resumable file uploads are essential for any application that handles large file transfers over unreliable networks. The core concept is simple: split the file into chunks, track how much the server has received, and resume from the last known position after any interruption.
The protocol requires three operations: initiating an upload (getting a unique upload ID from the server), sending chunks (identified by byte ranges using the Content-Range header), and querying progress (asking the server how many bytes it has received so the client knows where to resume from).
On the client side, file.slice(start, end) extracts chunks from the file, XHR provides per-chunk upload progress through xhr.upload.onprogress, and localStorage persists the upload session across page reloads. Retry logic with exponential backoff handles transient network failures, and file fingerprinting (hashing file metadata) enables resume even when the user selects the file again in a new browser session.
For production applications, consider using the established TUS protocol or cloud provider multipart upload APIs rather than building a custom solution. These handle server-side complexity, support parallel chunk uploads, and have been battle-tested across millions of uploads.