How to Implement Long Polling for Real-Time Updates in JavaScript
Introduction
Web applications frequently need to display data that changes on the server in real time: new chat messages, live notifications, stock price updates, order status changes, or collaborative editing indicators. The fundamental challenge is that HTTP is a request-response protocol. The server cannot spontaneously push data to the browser. The browser must ask for it.
The simplest approach is regular polling: the client sends a request every few seconds asking "anything new?" This works but is wasteful. If nothing has changed, the server responds with empty data, and both sides have burned bandwidth and processing time for nothing. If updates are rare (a notification every few minutes), thousands of useless requests pile up between each meaningful one.
Long polling is a smarter variation. Instead of the server responding immediately with "nothing new," it holds the connection open until it actually has new data to send. The client sends a request, the server waits (sometimes for 30 seconds or more), and only responds when something happens. The moment the client receives a response, it immediately sends another request, creating a near-real-time stream of updates without the waste of constant empty responses.
Long polling was the dominant technique for real-time web applications before WebSockets became widely supported. It is still used today in situations where WebSockets are not available, as a fallback mechanism, or in environments where simplicity and HTTP compatibility matter more than raw performance.
In this guide, you will learn how long polling works conceptually, how to implement it with proper reconnection and error handling, and how it compares to WebSockets and Server-Sent Events.
What Is Long Polling?
Regular Polling: The Naive Approach
Before understanding long polling, let us see the problem it solves. Regular (short) polling sends requests at fixed intervals:
// Regular polling - simple but wasteful
setInterval(async () => {
  const response = await fetch('/api/notifications');
  const data = await response.json();
  if (data.length > 0) {
    showNotifications(data);
  }
  // If data is empty, this entire request was wasted
}, 5000); // Every 5 seconds
Client Server
| |
| GET /notifications |
| ──────────────────────────► |
| 200 OK [] (nothing new) |
| ◄────────────────────────── |
| |
| ... 5 seconds pass ... |
| |
| GET /notifications |
| ──────────────────────────► |
| 200 OK [] (still nothing) |
  |  ◄──────────────────────────  |
| |
| ... 5 seconds pass ... |
| |
| GET /notifications |
| ──────────────────────────► |
| 200 OK [{msg: "Hello!"}] | ← Finally something! But 5s delay
| ◄────────────────────────── |
Problems with regular polling:
- Wasted requests: Most responses are empty. If updates happen once per minute, 11 out of 12 requests per minute are useless.
- Latency: Updates are delayed by up to one polling interval. A 5-second interval means updates can be up to 5 seconds late.
- An unwinnable trade-off: Shorter intervals mean faster updates but more wasted requests and server load; longer intervals reduce waste but increase latency.
Long Polling: The Server Waits
Long polling flips the approach. The client sends a request, and instead of the server responding immediately, it holds the request open until new data is available or a timeout is reached:
Client Server
| |
| GET /notifications |
| ──────────────────────────► |
| |
| ... server waits ... | (connection held open)
| ... still waiting ... |
| ... 20 seconds pass ... |
| |
| ... new data arrives! ... |
| |
| 200 OK [{msg: "Hello!"}] | ← Instant delivery!
| ◄────────────────────────── |
| |
| GET /notifications | ← Immediately reconnect
| ──────────────────────────► |
| |
| ... server waits again ... |
The key differences:
- No wasted responses: The server only responds when it has data (or on timeout)
- Near-instant delivery: Updates arrive as soon as they happen on the server, not on the next polling cycle
- Fewer requests: Instead of 12 requests per minute, you might make 1 or 2 (one per actual update plus timeout reconnections)
How the Connection Stays Open
When the server receives a long polling request, it does not call response.send() immediately. Instead, it stores a reference to the response object and waits. When new data becomes available (from a database change, a message queue, a webhook, or any other source), the server sends the response to all waiting clients.
Most servers set a timeout (typically 30 to 60 seconds). If no data arrives within the timeout period, the server sends an empty response, and the client reconnects immediately. This prevents the connection from being held indefinitely, which could cause issues with proxies, load balancers, and browser connection limits.
Normal flow:
Request → Wait → Data arrives → Response → Reconnect → Wait → ...
Timeout flow:
Request → Wait → No data for 30s → Empty response → Reconnect → Wait → ...
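The hold-and-publish mechanics can be sketched independently of any web framework. In this sketch, `waitForData` and `publish` are illustrative names, not a real API: a request handler would await `waitForData`, and backend logic would call `publish` when something happens:

```javascript
// Pending requests, each represented by an entry that can deliver a result once
const waiting = new Set();

// Called by the request handler: resolve with new events,
// or with an empty result when the timeout fires
function waitForData(timeoutMs) {
  return new Promise(resolve => {
    const entry = {
      deliver(events) {
        clearTimeout(entry.timer);
        waiting.delete(entry);
        resolve({ events });
      }
    };
    // Timeout: release the held "connection" with an empty response
    entry.timer = setTimeout(() => entry.deliver([]), timeoutMs);
    waiting.add(entry);
  });
}

// Called when new data arrives: release every held request immediately
function publish(event) {
  for (const entry of [...waiting]) entry.deliver([event]);
}
```

A real server would attach the HTTP response object to each entry instead of a bare `resolve`, as the Node.js reference implementation later in this guide does.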
Implementation Pattern
The Basic Long Polling Loop
The client-side implementation is a loop that sends a request, processes the response, and immediately sends another request:
async function subscribe() {
  while (true) {
    try {
      const response = await fetch('/api/updates');

      if (response.ok) {
        const data = await response.json();
        handleUpdates(data);
      } else if (response.status === 502) {
        // Connection timeout - the server held the request too long
        // This is normal for long polling; just reconnect
      } else {
        // Unexpected error
        console.error(`Server error: ${response.status}`);
        // Wait before retrying to avoid hammering a failing server
        await new Promise(resolve => setTimeout(resolve, 5000));
      }
    } catch (error) {
      // Network error (offline, DNS failure, etc.)
      console.error('Connection failed:', error);
      // Wait before retrying
      await new Promise(resolve => setTimeout(resolve, 3000));
    }
  }
}

function handleUpdates(data) {
  if (data.messages && data.messages.length > 0) {
    data.messages.forEach(msg => {
      displayMessage(msg);
    });
  }
}

// Start the long polling loop
subscribe();
This simple loop is the core of long polling. Let us break down what happens:
- fetch('/api/updates') sends a request to the server
- The await pauses execution until the server responds (which could be seconds or minutes later)
- When a response arrives, it is processed immediately
- The loop continues, sending another request right away
- If a network error occurs, the client waits briefly and retries
Using a Timestamp or Message ID for Continuity
In practice, you need to tell the server what you have already received so it only sends new updates. The most common approach is including a timestamp or message ID in each request:
async function subscribe() {
  let lastEventId = 0; // Track the last received event

  while (true) {
    try {
      const url = `/api/updates?lastId=${lastEventId}`;
      const response = await fetch(url);

      if (response.ok) {
        const data = await response.json();

        if (data.events && data.events.length > 0) {
          data.events.forEach(event => {
            processEvent(event);
            // Update our position
            if (event.id > lastEventId) {
              lastEventId = event.id;
            }
          });
        }

        // If the server provides a new lastId, use it
        if (data.lastId) {
          lastEventId = data.lastId;
        }
      } else if (response.status !== 502) {
        console.error(`Error: ${response.status}`);
        await delay(5000);
      }
    } catch (error) {
      console.error('Connection error:', error);
      await delay(3000);
    }
  }
}

function delay(ms) {
  return new Promise(resolve => setTimeout(resolve, ms));
}

function processEvent(event) {
  console.log(`Event #${event.id}: ${event.type}`, event.data);
}
The server uses lastId to filter events: it only sends events with an ID greater than the one the client provides. This ensures no events are missed even if the connection drops and reconnects.
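The filtering step is a simple comparison against the client's position; a minimal sketch (the function name is illustrative):

```javascript
// Return only the events the client has not yet seen
function eventsSince(events, lastId) {
  return events.filter(e => e.id > lastId);
}
```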
A Complete Notification System
Here is a more polished implementation with a proper class structure:
class LongPollingClient {
  constructor(url, options = {}) {
    this.url = url;
    this.lastEventId = options.lastEventId || 0;
    this.running = false;
    this.retryDelay = options.retryDelay || 1000;
    this.maxRetryDelay = options.maxRetryDelay || 30000;
    this.currentRetryDelay = this.retryDelay;
    this.timeout = options.timeout || 60000;

    // Event handlers
    this.handlers = new Map();
  }

  on(event, handler) {
    if (!this.handlers.has(event)) {
      this.handlers.set(event, []);
    }
    this.handlers.get(event).push(handler);
    return this; // Allow chaining
  }

  emit(event, data) {
    const handlers = this.handlers.get(event) || [];
    handlers.forEach(handler => handler(data));
  }

  async start() {
    if (this.running) return;
    this.running = true;
    this.emit('connect', { status: 'starting' });

    while (this.running) {
      try {
        const data = await this.poll();

        // Reset retry delay on successful connection
        this.currentRetryDelay = this.retryDelay;

        if (data && data.events && data.events.length > 0) {
          data.events.forEach(event => {
            this.lastEventId = event.id;
            this.emit('message', event);
            this.emit(event.type, event.data);
          });
        }

        // Server may send a new ID even without events (heartbeat)
        if (data && data.lastId) {
          this.lastEventId = data.lastId;
        }
      } catch (error) {
        if (!this.running) break; // Stop was called during fetch
        this.emit('error', error);

        // Announce the upcoming retry, then wait with exponential backoff
        this.emit('reconnecting', {
          delay: this.currentRetryDelay,
          error: error.message
        });
        await this.delay(this.currentRetryDelay);
        this.currentRetryDelay = Math.min(
          this.currentRetryDelay * 2,
          this.maxRetryDelay
        );
      }
    }
  }

  async poll() {
    const params = new URLSearchParams({
      lastId: this.lastEventId,
      timeout: this.timeout
    });

    const response = await fetch(`${this.url}?${params}`, {
      // Allow slightly more than the server timeout to account for network latency
      signal: AbortSignal.timeout(this.timeout + 10000)
    });

    if (!response.ok) {
      if (response.status === 502 || response.status === 504) {
        // Gateway timeout - normal for long polling, just reconnect
        return null;
      }
      throw new Error(`HTTP ${response.status}: ${response.statusText}`);
    }

    return response.json();
  }

  stop() {
    this.running = false;
    this.emit('disconnect', { reason: 'client-stop' });
  }

  delay(ms) {
    return new Promise(resolve => setTimeout(resolve, ms));
  }
}
Usage:
const poller = new LongPollingClient('/api/updates', {
  retryDelay: 1000,
  maxRetryDelay: 30000,
  timeout: 45000
});

// Listen for specific event types
poller.on('message', (event) => {
  console.log(`Received: ${event.type}`, event.data);
});

poller.on('notification', (data) => {
  showNotification(data.title, data.body);
});

poller.on('chat-message', (data) => {
  appendChatMessage(data.sender, data.text);
});

poller.on('error', (error) => {
  console.error('Polling error:', error.message);
});

poller.on('reconnecting', ({ delay }) => {
  console.log(`Reconnecting in ${delay}ms...`);
});

// Start polling
poller.start();

// Stop when navigating away
window.addEventListener('beforeunload', () => {
  poller.stop();
});
Server-Side Reference
For context, here is a minimal Node.js server that implements the long polling server side:
const express = require('express');
const app = express();

// Store waiting clients
const waitingClients = new Set();

// Store events (in production, use a database or message queue)
const events = [];
let nextId = 1;

app.get('/api/updates', (req, res) => {
  const lastId = parseInt(req.query.lastId) || 0;
  const timeout = parseInt(req.query.timeout) || 45000;

  // Check if there are already new events
  const newEvents = events.filter(e => e.id > lastId);
  if (newEvents.length > 0) {
    // Send immediately
    return res.json({
      events: newEvents,
      lastId: newEvents[newEvents.length - 1].id
    });
  }

  // No new events - hold the connection
  const client = {
    lastId,
    res,
    timer: null
  };

  // Set a timeout to release the connection
  client.timer = setTimeout(() => {
    waitingClients.delete(client);
    res.json({ events: [], lastId });
  }, timeout);

  waitingClients.add(client);

  // Clean up if the client disconnects
  req.on('close', () => {
    clearTimeout(client.timer);
    waitingClients.delete(client);
  });
});

// Endpoint to publish new events (called by your backend logic)
app.post('/api/publish', express.json(), (req, res) => {
  const event = {
    id: nextId++,
    type: req.body.type || 'message',
    data: req.body.data,
    timestamp: Date.now()
  };
  events.push(event);

  // Keep only the last 1000 events
  if (events.length > 1000) {
    events.splice(0, events.length - 1000);
  }

  // Notify all waiting clients
  for (const client of waitingClients) {
    clearTimeout(client.timer);
    const newEvents = events.filter(e => e.id > client.lastId);
    client.res.json({
      events: newEvents,
      lastId: event.id
    });
  }
  waitingClients.clear();

  res.json({ published: true, eventId: event.id });
});

app.listen(3000);
When the server has no new data, it stores the response object in waitingClients. When new data arrives (via the /api/publish endpoint), it iterates through the waiting clients, sends each one a response containing the events it has not yet seen, and clears the held connections. Each client then immediately reconnects.
Reconnection Logic
Robust reconnection is what separates a demo from a production-ready implementation. Network failures, server restarts, and temporary outages are inevitable. Your long polling client must handle them gracefully.
Exponential Backoff
When a connection fails, retrying immediately and repeatedly can overwhelm a struggling server. Exponential backoff increases the delay between retries progressively:
class ReconnectingPoller {
  constructor(url) {
    this.url = url;
    this.baseDelay = 1000;  // Start with 1 second
    this.maxDelay = 30000;  // Cap at 30 seconds
    this.currentDelay = this.baseDelay;
    this.consecutiveErrors = 0;
    this.running = false;
    this.lastEventId = 0;
  }

  async start() {
    this.running = true;

    while (this.running) {
      try {
        const response = await fetch(
          `${this.url}?lastId=${this.lastEventId}`,
          { signal: AbortSignal.timeout(65000) }
        );

        if (response.ok) {
          const data = await response.json();
          this.onSuccess(data);
          // Reset backoff on success
          this.currentDelay = this.baseDelay;
          this.consecutiveErrors = 0;
        } else if (response.status === 502 || response.status === 504) {
          // Timeout responses are normal - reconnect without delay
          continue;
        } else {
          throw new Error(`HTTP ${response.status}`);
        }
      } catch (error) {
        if (!this.running) break;

        this.consecutiveErrors++;
        console.warn(
          `Connection error #${this.consecutiveErrors}: ${error.message}. ` +
          `Retrying in ${this.currentDelay}ms`
        );

        await this.wait(this.currentDelay);

        // Increase delay with exponential backoff + jitter
        this.currentDelay = Math.min(
          this.currentDelay * 2 + Math.random() * 1000,
          this.maxDelay
        );
      }
    }
  }

  onSuccess(data) {
    if (data.events) {
      data.events.forEach(event => {
        this.lastEventId = event.id;
        this.handleEvent(event);
      });
    }
  }

  handleEvent(event) {
    console.log('Event received:', event);
  }

  stop() {
    this.running = false;
  }

  wait(ms) {
    return new Promise(resolve => setTimeout(resolve, ms));
  }
}
The backoff sequence looks like:
Error #1: Wait ~1 second
Error #2: Wait ~2 seconds
Error #3: Wait ~4 seconds
Error #4: Wait ~8 seconds
Error #5: Wait ~16 seconds
Error #6: Wait ~30 seconds (capped)
Error #7: Wait ~30 seconds (stays at cap)
...
Success: Reset to 1 second
Adding Jitter
The random component (Math.random() * 1000) in the delay calculation is called jitter. It prevents the "thundering herd" problem: if a server restarts and 1000 clients are all using the same backoff schedule, they would all reconnect at exactly the same moment, potentially crashing the server again. Jitter spreads the reconnection attempts randomly across a time window.
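The capped doubling plus jitter can also be written as a pure function of the error count, which makes the schedule easy to reason about and test (a sketch; the name and parameters are illustrative):

```javascript
// Delay before retry n (0-based): base * 2^n, capped at `cap`,
// plus up to `jitter` milliseconds of randomness to spread out reconnections
function backoffDelay(attempt, base = 1000, cap = 30000, jitter = 1000) {
  const exponential = Math.min(base * 2 ** attempt, cap);
  return exponential + Math.random() * jitter;
}
```

With the defaults this reproduces the 1s, 2s, 4s, ... 30s sequence shown above, each value smeared across a 1-second window.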
Online/Offline Detection
Combine long polling with the browser's network status APIs for smarter reconnection:
class NetworkAwarePoller {
  constructor(url, options = {}) {
    this.url = url;
    this.running = false;
    this.shouldBeRunning = false;
    this.lastEventId = 0;
    this.onMessage = options.onMessage || (() => {});
    this.onStatusChange = options.onStatusChange || (() => {});
    this.setupNetworkListeners();
  }

  setupNetworkListeners() {
    window.addEventListener('online', () => {
      console.log('Network restored');
      this.onStatusChange('online');
      // Resume polling if we were running
      if (this.shouldBeRunning && !this.running) {
        this.start();
      }
    });

    window.addEventListener('offline', () => {
      console.log('Network lost');
      this.onStatusChange('offline');
      // No point in polling while offline
      // The loop will naturally pause on the next failed fetch
    });

    // Detect when the page becomes visible again
    document.addEventListener('visibilitychange', () => {
      if (document.visibilityState === 'visible' && this.shouldBeRunning) {
        // Page is visible again - reconnect immediately
        // (the connection may have been dropped while the tab was hidden)
        this.onStatusChange('visible');
        if (!this.running) {
          this.start();
        }
      }
    });
  }

  async start() {
    this.shouldBeRunning = true;
    if (this.running) return;
    this.running = true;

    let retryDelay = 1000;

    while (this.shouldBeRunning) {
      // Wait if offline
      if (!navigator.onLine) {
        this.onStatusChange('waiting-for-network');
        await this.waitForOnline();
        if (!this.shouldBeRunning) break;
      }

      try {
        const params = new URLSearchParams({ lastId: this.lastEventId });
        const response = await fetch(`${this.url}?${params}`, {
          signal: AbortSignal.timeout(65000)
        });

        if (!response.ok && response.status !== 502 && response.status !== 504) {
          throw new Error(`HTTP ${response.status}`);
        }

        if (response.ok) {
          const data = await response.json();
          retryDelay = 1000; // Reset on success

          if (data.events) {
            data.events.forEach(event => {
              this.lastEventId = event.id;
              this.onMessage(event);
            });
          }
        }
      } catch (error) {
        if (!this.shouldBeRunning) break;
        this.onStatusChange('reconnecting');
        await this.wait(retryDelay);
        retryDelay = Math.min(retryDelay * 2, 30000);
      }
    }

    this.running = false;
  }

  stop() {
    this.shouldBeRunning = false;
    this.running = false;
  }

  waitForOnline() {
    if (navigator.onLine) return Promise.resolve();
    return new Promise(resolve => {
      const handler = () => {
        window.removeEventListener('online', handler);
        resolve();
      };
      window.addEventListener('online', handler);
    });
  }

  wait(ms) {
    return new Promise(resolve => setTimeout(resolve, ms));
  }
}
// Usage
const poller = new NetworkAwarePoller('/api/updates', {
  onMessage: (event) => {
    console.log('New event:', event);
    displayNotification(event);
  },
  onStatusChange: (status) => {
    updateConnectionIndicator(status);
  }
});

poller.start();
Preventing Duplicate Events
Network interruptions can cause the client to miss the server's response even though the server already sent data. When the client reconnects, it might receive the same events again. Use the lastEventId to prevent processing duplicates:
class DeduplicatingPoller {
  constructor(url) {
    this.url = url;
    this.lastEventId = this.loadLastEventId();
    this.processedIds = new Set();
    this.running = false;
  }

  async start() {
    this.running = true;

    while (this.running) {
      try {
        const response = await fetch(
          `${this.url}?lastId=${this.lastEventId}`,
          { signal: AbortSignal.timeout(65000) }
        );

        if (response.ok) {
          const data = await response.json();

          if (data.events) {
            data.events.forEach(event => {
              // Skip duplicates
              if (this.processedIds.has(event.id)) {
                console.log(`Skipping duplicate event #${event.id}`);
                return;
              }

              this.processedIds.add(event.id);
              this.lastEventId = event.id;
              this.handleEvent(event);

              // Keep the dedup set from growing forever
              if (this.processedIds.size > 10000) {
                const idsArray = Array.from(this.processedIds);
                this.processedIds = new Set(idsArray.slice(-5000));
              }
            });

            // Persist position for recovery across page loads
            this.saveLastEventId();
          }
        }
      } catch (error) {
        if (!this.running) break;
        await new Promise(r => setTimeout(r, 3000));
      }
    }
  }

  handleEvent(event) {
    console.log('Processing event:', event);
  }

  saveLastEventId() {
    try {
      localStorage.setItem('polling_lastEventId', String(this.lastEventId));
    } catch { /* ignore */ }
  }

  loadLastEventId() {
    try {
      return parseInt(localStorage.getItem('polling_lastEventId')) || 0;
    } catch {
      return 0;
    }
  }

  stop() {
    this.running = false;
  }
}
Practical Example: Real-Time Chat
Here is a complete long polling chat application demonstrating all the concepts together:
<!DOCTYPE html>
<html>
<head>
  <style>
    .chat-container {
      max-width: 500px;
      margin: 20px auto;
      font-family: system-ui, sans-serif;
    }
    .messages {
      height: 400px;
      overflow-y: auto;
      border: 1px solid #ddd;
      border-radius: 8px;
      padding: 16px;
      margin-bottom: 12px;
      background: #fafafa;
    }
    .message {
      margin-bottom: 12px;
      padding: 8px 12px;
      background: white;
      border-radius: 8px;
      box-shadow: 0 1px 2px rgba(0,0,0,0.05);
    }
    .message .sender {
      font-weight: 600;
      color: #2c3e50;
      font-size: 13px;
    }
    .message .text {
      margin-top: 4px;
      color: #34495e;
    }
    .message .time {
      font-size: 11px;
      color: #95a5a6;
      margin-top: 4px;
    }
    .input-row {
      display: flex;
      gap: 8px;
    }
    .input-row input {
      flex: 1;
      padding: 10px;
      border: 1px solid #ddd;
      border-radius: 8px;
      font-size: 14px;
    }
    .input-row button {
      padding: 10px 20px;
      background: #3498db;
      color: white;
      border: none;
      border-radius: 8px;
      cursor: pointer;
      font-size: 14px;
    }
    .status-bar {
      text-align: center;
      padding: 6px;
      font-size: 12px;
      color: #7f8c8d;
      border-radius: 8px;
      margin-bottom: 8px;
    }
    .status-bar.connected { background: #d5f5e3; color: #27ae60; }
    .status-bar.disconnected { background: #fadbd8; color: #e74c3c; }
    .status-bar.connecting { background: #fdebd0; color: #f39c12; }
  </style>
</head>
<body>
  <div class="chat-container">
    <div id="status" class="status-bar connecting">Connecting...</div>
    <div id="messages" class="messages"></div>
    <div class="input-row">
      <input type="text" id="nameInput" placeholder="Your name" style="max-width: 120px;">
      <input type="text" id="messageInput" placeholder="Type a message..." disabled>
      <button id="sendBtn" disabled>Send</button>
    </div>
  </div>

  <script>
    class ChatClient {
      constructor() {
        this.lastMessageId = 0;
        this.running = false;
        this.retryDelay = 1000;
        this.processedIds = new Set();

        this.messagesEl = document.getElementById('messages');
        this.statusEl = document.getElementById('status');
        this.nameInput = document.getElementById('nameInput');
        this.messageInput = document.getElementById('messageInput');
        this.sendBtn = document.getElementById('sendBtn');

        this.setupUI();
      }

      setupUI() {
        const sendMessage = () => {
          const name = this.nameInput.value.trim();
          const text = this.messageInput.value.trim();
          if (!name || !text) return;
          this.send(name, text);
          this.messageInput.value = '';
        };

        this.sendBtn.addEventListener('click', sendMessage);
        this.messageInput.addEventListener('keydown', (e) => {
          if (e.key === 'Enter') sendMessage();
        });

        this.nameInput.addEventListener('input', () => {
          const hasName = this.nameInput.value.trim().length > 0;
          this.messageInput.disabled = !hasName;
          this.sendBtn.disabled = !hasName;
        });
      }

      setStatus(status, text) {
        this.statusEl.className = `status-bar ${status}`;
        this.statusEl.textContent = text;
      }

      async startPolling() {
        this.running = true;

        while (this.running) {
          try {
            const params = new URLSearchParams({
              lastId: this.lastMessageId
            });

            this.setStatus('connected', 'Connected');
            const response = await fetch(`/api/chat/poll?${params}`, {
              signal: AbortSignal.timeout(65000)
            });

            if (response.ok) {
              const data = await response.json();
              this.retryDelay = 1000; // Reset backoff

              if (data.messages) {
                data.messages.forEach(msg => {
                  if (!this.processedIds.has(msg.id)) {
                    this.processedIds.add(msg.id);
                    this.lastMessageId = msg.id;
                    this.displayMessage(msg);
                  }
                });
              }
            } else if (response.status !== 502 && response.status !== 504) {
              throw new Error(`HTTP ${response.status}`);
            }
          } catch (error) {
            if (!this.running) break;
            this.setStatus('disconnected', `Disconnected. Retrying in ${this.retryDelay / 1000}s...`);
            await new Promise(r => setTimeout(r, this.retryDelay));
            this.retryDelay = Math.min(this.retryDelay * 2, 30000);
            this.setStatus('connecting', 'Reconnecting...');
          }
        }
      }

      displayMessage(msg) {
        const el = document.createElement('div');
        el.className = 'message';
        const time = new Date(msg.timestamp).toLocaleTimeString();
        el.innerHTML = `
          <div class="sender">${this.escapeHtml(msg.sender)}</div>
          <div class="text">${this.escapeHtml(msg.text)}</div>
          <div class="time">${time}</div>
        `;
        this.messagesEl.appendChild(el);
        this.messagesEl.scrollTop = this.messagesEl.scrollHeight;
      }

      async send(sender, text) {
        try {
          await fetch('/api/chat/send', {
            method: 'POST',
            headers: { 'Content-Type': 'application/json' },
            body: JSON.stringify({ sender, text })
          });
        } catch (error) {
          console.error('Failed to send message:', error);
          this.setStatus('disconnected', 'Failed to send message');
        }
      }

      escapeHtml(text) {
        const div = document.createElement('div');
        div.textContent = text;
        return div.innerHTML;
      }

      stop() {
        this.running = false;
      }
    }

    const chat = new ChatClient();
    chat.startPolling();

    window.addEventListener('beforeunload', () => chat.stop());
  </script>
</body>
</html>
Always escape user-generated content before inserting it into the DOM, as shown with the escapeHtml method. This prevents Cross-Site Scripting (XSS) attacks. Using textContent instead of innerHTML for user data is another safe approach.
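For code that runs outside the browser (or in unit tests), the same escaping can be done without a DOM. A minimal string-based equivalent of the escapeHtml helper shown above:

```javascript
// Replace the characters HTML treats specially; & must be replaced first,
// or the entities produced by later replacements would be double-escaped
function escapeHtml(text) {
  return String(text)
    .replace(/&/g, '&amp;')
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;')
    .replace(/"/g, '&quot;')
    .replace(/'/g, '&#39;');
}
```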
Long Polling vs. WebSocket vs. SSE
There are three main approaches for server-to-client real-time communication. Each has distinct strengths and trade-offs.
Long Polling
// Client sends repeated HTTP requests
let lastId = 0;

async function longPoll() {
  while (true) {
    const response = await fetch('/api/updates?lastId=' + lastId);
    const data = await response.json();
    processData(data);
  }
}
How it works: Repeated HTTP requests where the server delays responding until data is available.
Advantages:
- Works everywhere HTTP works (any browser, any proxy, any firewall)
- No special server infrastructure needed (any HTTP server works)
- Simple to implement and debug (standard HTTP requests visible in DevTools)
- Stateless server design (each request is independent)
- Works through HTTP proxies and load balancers without special configuration
Disadvantages:
- Higher latency than persistent connections (each message requires a new HTTP round-trip)
- HTTP overhead on every reconnection (headers, TCP handshake, TLS negotiation)
- One-directional server-to-client push (client-to-server requires separate requests)
- Server must hold many open connections (one per waiting client)
WebSocket
// Persistent bidirectional connection
const ws = new WebSocket('wss://example.com/socket');

ws.onmessage = (event) => {
  const data = JSON.parse(event.data);
  processData(data);
};

// Sending is only allowed once the connection is open
ws.onopen = () => {
  ws.send(JSON.stringify({ type: 'chat', text: 'Hello!' }));
};
How it works: A single TCP connection is upgraded from HTTP to the WebSocket protocol, providing full-duplex communication.
Advantages:
- True real-time, lowest latency (no HTTP overhead per message)
- Full-duplex: both sides can send data simultaneously
- Very low overhead per message (just 2-14 bytes of framing)
- Efficient for high-frequency updates (gaming, live collaboration, trading)
Disadvantages:
- More complex server infrastructure (WebSocket servers, sticky sessions for load balancing)
- Does not work through all HTTP proxies and firewalls (some strip the Upgrade header)
- Stateful connections make scaling harder (must maintain connection state)
- No automatic reconnection (must implement yourself or use a library)
- Not regular HTTP (cannot use standard HTTP middleware, caching, or debugging tools as easily)
Server-Sent Events (SSE)
// Persistent one-way connection from server to client
const source = new EventSource('/api/stream');
source.onmessage = (event) => {
const data = JSON.parse(event.data);
processData(data);
};
source.addEventListener('notification', (event) => {
showNotification(JSON.parse(event.data));
});
How it works: A single long-lived HTTP connection where the server sends events using a simple text-based format.
Advantages:
- Built-in browser API with automatic reconnection
- Uses standard HTTP (works with proxies, load balancers, CDNs)
- Built-in event types and message IDs (the protocol handles what you must build manually for long polling)
- Simpler than WebSocket for server-to-client streaming
Disadvantages:
- One-directional only: server to client (client-to-server requires separate HTTP requests)
- Limited to text data (no binary)
- Per-domain connection limits in browsers (typically 6 with HTTP/1.1; much higher with HTTP/2, which multiplexes streams over a single connection)
- Less widespread server framework support than WebSocket
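For reference, the "simple text-based format" SSE uses is just a few field: value lines ended by a blank line. A sketch of a serializer a server might use (the function name is illustrative; the id, event, and data field names are part of the SSE protocol):

```javascript
// Serialize one event in the SSE wire format:
// optional "id:" and "event:" lines, then "data:", then a blank line
function formatSseEvent({ id, type, data }) {
  let frame = '';
  if (id !== undefined) frame += `id: ${id}\n`;
  if (type) frame += `event: ${type}\n`;
  frame += `data: ${JSON.stringify(data)}\n\n`; // blank line terminates the event
  return frame;
}
```

The id line is what powers the browser's automatic resume: on reconnect, EventSource sends the last seen ID in a Last-Event-ID header, playing the same role as the lastId parameter in the long polling examples above.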
Comparison Table
| Feature | Long Polling | WebSocket | SSE |
|---|---|---|---|
| Direction | Server → Client | Bidirectional | Server → Client |
| Protocol | HTTP | WS/WSS | HTTP |
| Latency | Medium | Lowest | Low |
| Overhead per message | High (HTTP headers) | Very low (2-14 bytes) | Low (small text frames) |
| Auto-reconnect | Manual (your code) | Manual (your code) | Built-in |
| Binary data | Via response types | Native support | No (text only) |
| Proxy/firewall friendly | Excellent | Can be problematic | Good |
| Browser support | Universal | All modern | All modern (no IE) |
| Server complexity | Low | Medium-High | Low-Medium |
| Scaling | Stateless (easy) | Stateful (harder) | Semi-stateful |
| Best for | Compatibility, simplicity | Chat, gaming, collaboration | Notifications, feeds, dashboards |
Decision Guide
Do you need bidirectional communication?
├── Yes → WebSocket
│ (chat, multiplayer games, collaborative editing)
│
└── No (server-to-client only)
│
├── Do you need HTTP/proxy compatibility?
│ ├── Critical → Long Polling
│ │ (corporate environments, legacy infrastructure)
│ │
│ └── Not critical
│ │
│ ├── High-frequency updates? → SSE or WebSocket
│ │ (live scores, stock tickers, dashboards)
│ │
│ └── Low-frequency updates? → SSE or Long Polling
│ (notifications, status changes)
│
└── Maximum simplicity needed?
└── Yes → SSE (built-in reconnect, simple API)
Using Long Polling as a Fallback
Many production applications use WebSocket as the primary transport and fall back to long polling when WebSocket is unavailable:
class RealTimeClient {
  constructor(url) {
    this.url = url;
    this.onMessage = null;
    this.transport = null;
  }

  connect() {
    // Try WebSocket first
    try {
      this.connectWebSocket();
    } catch {
      this.connectLongPolling();
    }
  }

  connectWebSocket() {
    const wsUrl = this.url.replace(/^http/, 'ws') + '/ws';
    const ws = new WebSocket(wsUrl);

    ws.onopen = () => {
      console.log('Connected via WebSocket');
      this.transport = 'websocket';
    };

    ws.onmessage = (event) => {
      const data = JSON.parse(event.data);
      if (this.onMessage) this.onMessage(data);
    };

    ws.onerror = () => {
      console.log('WebSocket failed, falling back to long polling');
      ws.close();
      this.connectLongPolling();
    };

    ws.onclose = () => {
      if (this.transport === 'websocket') {
        // Unexpected close - try to reconnect
        setTimeout(() => this.connect(), 3000);
      }
    };
  }

  connectLongPolling() {
    this.transport = 'longpolling';
    console.log('Connected via long polling');

    let lastId = 0;
    const poll = async () => {
      if (this.transport !== 'longpolling') return;

      try {
        const response = await fetch(`${this.url}/poll?lastId=${lastId}`);
        if (response.ok) {
          const data = await response.json();
          if (data.events) {
            data.events.forEach(event => {
              lastId = event.id;
              if (this.onMessage) this.onMessage(event);
            });
          }
        }
      } catch {
        await new Promise(r => setTimeout(r, 3000));
      }

      if (this.transport === 'longpolling') {
        poll(); // Continue polling
      }
    };

    poll();
  }
}

// Usage - same API regardless of transport
const client = new RealTimeClient('https://api.example.com');
client.onMessage = (data) => {
  console.log('Received:', data);
};
client.connect();
This is the pattern libraries like Socket.IO automate. Note that current Socket.IO versions default to the reverse order: they establish a long polling connection first, then upgrade to WebSocket once it is confirmed to work, which avoids a stalled start in environments where WebSocket is blocked.
Summary
Long polling is a technique where the client sends an HTTP request, the server holds the connection open until new data is available (or a timeout expires), and then the client immediately reconnects. This creates near-real-time server-to-client communication using nothing but standard HTTP requests.
The implementation pattern is an async loop: fetch() a URL, await the response (which may take seconds or minutes), process the data, and immediately start the next request. Include a lastEventId or timestamp parameter so the server knows what data you have already received and only sends new updates.
Reconnection logic is critical. Use exponential backoff with jitter to avoid overwhelming a recovering server. Detect network status changes with the online/offline events and the Page Visibility API to pause and resume intelligently. Persist the lastEventId in localStorage to survive page reloads. Deduplicate events by tracking processed IDs.
Long polling excels in HTTP compatibility: it works through any proxy, firewall, or load balancer that handles standard HTTP. It requires no special server infrastructure beyond the ability to hold connections open. For applications that need bidirectional communication or minimal latency, WebSocket is the better choice. For server-to-client streaming with automatic reconnection built into the browser, Server-Sent Events offer a simpler API. Many production systems use WebSocket as the primary transport with long polling as a universal fallback.