How the Node.js Event Loop Works: A Deep Dive into libuv and Event Phases
The Node.js event loop is a single-threaded mechanism that enables non-blocking I/O by offloading operations to the operating system or the libuv thread pool and executing their callbacks in specific phases, allowing thousands of concurrent connections to be handled efficiently.
Node.js operates on a single-threaded JavaScript engine (V8) yet achieves high concurrency through an event-driven, non-blocking I/O architecture centered on the event loop. This mechanism, implemented primarily through the libuv library, continuously polls for completed operations and executes their callbacks in a defined sequence of phases while keeping the main thread free for JavaScript execution.
Core Architecture of the Node.js Event Loop
The libuv Library and Thread Pool
The foundation of the Node.js event loop resides in libuv, a multi-platform C library that provides the underlying event loop implementation and thread pool. When JavaScript code initiates an asynchronous operation—such as file system access via fs.readFile—libuv schedules the work on its internal thread pool (default size of 4 threads) or registers a file descriptor with the operating system.
Once the OS signals completion, libuv places the corresponding callback into the appropriate phase queue. This design ensures that expensive operations never block the main JavaScript thread, allowing the event loop to continue processing other callbacks.
Non-Blocking I/O and the Main Thread
While JavaScript executes synchronously on the main thread, the event loop acts as a coordinator that continuously polls for events such as I/O completion, timer expirations, and scheduled callbacks. The loop proceeds through its phases in a fixed order, processing all callbacks in the current phase before moving to the next. This single-threaded event loop enables Node.js to efficiently serve thousands of concurrent connections with a relatively small memory footprint.
Event Loop Phases Explained
The Node.js event loop is divided into six distinct phases, each with a FIFO queue of callbacks. The loop processes each phase sequentially, executing all queued callbacks before proceeding to the next phase.
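The cycle can be modeled with a toy sketch. These queues and the tick function are hypothetical simplifications, not libuv's actual data structures, but they capture the rule that each phase's FIFO queue drains fully before the loop advances:

```javascript
// Toy model of one event-loop iteration: six phases, each with a FIFO
// queue that is drained completely before moving to the next phase.
const phases = ['timers', 'pending', 'idle/prepare', 'poll', 'check', 'close'];
const queues = Object.fromEntries(phases.map((p) => [p, []]));
const order = [];

// Seed two phases in reverse order to show the loop imposes its own order.
queues.check.push(() => order.push('check'));
queues.timers.push(() => order.push('timers'));

function tick() {
  for (const phase of phases) {
    while (queues[phase].length > 0) {
      queues[phase].shift()(); // FIFO: oldest callback first
    }
  }
}

tick();
console.log(order); // 'timers' runs before 'check' regardless of scheduling order
```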
1. Timers Phase
The timers phase executes callbacks scheduled by setTimeout and setInterval whose scheduled time has elapsed. However, timers are not guaranteed to execute at exactly the specified delay, only after at least that much time has passed and the event loop has reached this phase. The timer heap itself is maintained by libuv (deps/uv/src/timer.c), while the JavaScript side in lib/internal/timers.js tracks which callbacks are due.
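A short sketch makes the "minimum delay, not exact delay" semantics concrete; the variable names here are illustrative:

```javascript
const start = Date.now();
let fired = false;

// 50 ms is a lower bound, not a guarantee: the callback runs only once
// at least 50 ms have passed AND the loop has reached the timers phase.
setTimeout(() => {
  fired = true;
  console.log(`fired after ${Date.now() - start} ms (>= 50)`);
}, 50);

// Even a 0 ms timer never runs synchronously; nothing has fired yet here.
```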
2. Pending Callbacks Phase
The pending callbacks phase executes I/O callbacks deferred to the next loop iteration, primarily system-level callbacks such as TCP errors. For example, on some operating systems a TCP socket that receives ECONNREFUSED while connecting has its error callback queued here rather than reported immediately.
3. Idle and Prepare Phase
The idle and prepare phase is used internally by libuv for housekeeping tasks and is not exposed to user code. It allows libuv to perform internal bookkeeping before the poll phase begins.
4. Poll Phase
The poll phase has two primary functions: executing callbacks for completed I/O operations (such as data received on a socket) and waiting for new I/O events to process. If the poll queue is not empty, the loop iterates through its callbacks synchronously until the queue is exhausted or a system-dependent hard limit is reached. If the queue is empty and setImmediate callbacks are pending, the loop ends the poll phase and proceeds to the check phase; otherwise it may block here waiting for new I/O. Stream handling in src/stream_base.cc surfaces completed network and file operations as callbacks that run during this phase.
5. Check Phase
The check phase executes callbacks scheduled by setImmediate(). Unlike timers, setImmediate callbacks are executed after the poll phase completes, making them ideal for running code immediately after I/O operations. setImmediate is implemented in lib/internal/timers.js, backed by a libuv check handle that the C++ Environment registers at startup.
6. Close Callbacks Phase
The close callbacks phase executes callbacks for closed handles, such as those registered via socket.on('close', ...). If a handle is closed abruptly (for example with socket.destroy()), its 'close' event is emitted in this phase, which handles the cleanup of resources closed during the current or previous iterations.
process.nextTick and Microtasks
While not technically part of the event loop phases, the process.nextTick queue is drained after the current operation completes and before the event loop continues to the next phase. These callbacks are managed in lib/internal/process/task_queues.js (formerly lib/internal/process/next_tick.js) and run as soon as the current JavaScript stack unwinds, giving them higher priority than any event loop phase including setImmediate. Promise microtasks (.then reactions and queueMicrotask callbacks) are drained immediately after the nextTick queue.
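The relative priority of the three queues can be demonstrated directly (the order array is illustrative; this ordering holds in modern Node.js, where the nextTick queue drains before promise microtasks):

```javascript
const order = [];

// All three are scheduled synchronously; none runs until the current
// stack unwinds. The nextTick queue drains first, then the microtask
// queue (promise reactions and queueMicrotask), in FIFO order.
Promise.resolve().then(() => order.push('promise'));
queueMicrotask(() => order.push('microtask'));
process.nextTick(() => order.push('nextTick'));

setImmediate(() => console.log(order)); // [ 'nextTick', 'promise', 'microtask' ]
```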
Node.js Event Loop Implementation in Source Code
The Node.js event loop implementation spans both C++ and JavaScript layers, with critical files defining how the loop initializes, processes phases, and bridges to JavaScript callbacks.
Initialization and Environment Setup
The event loop begins in src/node.cc, which serves as the entry point that initializes the V8 isolate and starts the libuv loop. The src/env.h and src/env.cc files define the Environment class, which ties together V8, libuv, and the JavaScript execution context, maintaining the state of the event loop across phases.
Phase-Specific Implementations
- src/timers.cc: Native binding for the timers phase; the timer heap lives in libuv, while lib/internal/timers.js decides which setTimeout and setInterval callbacks are due.
- src/env.cc: Registers the libuv check handle that drives the check phase, dispatching setImmediate callbacks after the poll phase.
- src/async_wrap.cc: Bridges libuv async handles to JavaScript, providing the base class that propagates async context (the foundation of async_hooks) across C++ handle lifecycles.
- src/stream_base.cc: Handles I/O streams and surfaces completed network and file operations as callbacks during the poll phase.
JavaScript Layer Integration
The JavaScript layers in lib/internal/process/task_queues.js and lib/internal/timers.js schedule callbacks from JavaScript into the C++ event loop structures. These files manage the queues that the native phase handlers eventually drain, ensuring that JavaScript callbacks execute in the correct order relative to the event loop phases.
Practical Examples of Event Loop Behavior
Demonstrating Phase Ordering
The following example illustrates how the event loop prioritizes different callback types across phases:
const fs = require('fs');
// Timers phase (1)
setTimeout(() => console.log('timer'), 0);
// Check phase (5)
setImmediate(() => console.log('immediate'));
// nextTick queue (executed before next phase)
process.nextTick(() => console.log('nextTick'));
// Poll phase (4) - I/O callback
fs.readFile(__filename, () => console.log('file read'));
Typical output:
nextTick
timer
immediate
file read
process.nextTick runs first, before any phase. The relative order of 'timer' and 'immediate' is not guaranteed when both are scheduled from the main module: whether the 0 ms timer is ready when the first timers phase runs depends on process startup overhead. The file read typically logs last because the disk I/O takes longer than one loop iteration, so its callback only runs in a later poll phase. (Inside an I/O callback, by contrast, setImmediate always fires before setTimeout(..., 0).)
Offloading CPU-Intensive Work
To prevent blocking the event loop with CPU-bound tasks, use the worker_threads module to move the work onto a separate thread with its own V8 isolate and event loop:
const { Worker } = require('worker_threads');
function heavyComputation() {
return new Promise((resolve, reject) => {
const worker = new Worker(`
const { parentPort } = require('worker_threads');
// Simulate CPU-intensive calculation
let sum = 0;
for (let i = 0; i < 1e9; i++) sum += i;
parentPort.postMessage(sum);
`, { eval: true });
worker.on('message', resolve);
worker.on('error', reject);
});
}
heavyComputation().then(result => console.log('Result:', result));
This approach keeps the main event loop responsive while the heavy calculation runs on a separate thread, demonstrating how Node.js maintains non-blocking behavior even for computationally expensive tasks.
Summary
- The Node.js event loop is a single-threaded mechanism that enables non-blocking I/O by offloading operations to the operating system or the libuv thread pool and executing their callbacks in specific phases, allowing thousands of concurrent connections to be handled efficiently.
- The loop processes callbacks through six distinct phases: timers, pending callbacks, idle/prepare, poll, check, and close callbacks, executing each queue in FIFO order.
- process.nextTick callbacks execute immediately after the current operation and before the event loop continues, giving them higher priority than any phase including setImmediate.
- CPU-intensive operations should be offloaded to worker threads, or broken into smaller chunks, to prevent blocking the main event loop and degrading application performance.
- Key implementation files include src/node.cc for initialization, src/timers.cc and lib/internal/timers.js for timer and check-phase management, and lib/internal/process/task_queues.js (formerly next_tick.js) for nextTick scheduling.
Frequently Asked Questions
What is the difference between process.nextTick and setImmediate?
process.nextTick queues a callback to execute immediately after the current JavaScript stack unwinds but before the event loop proceeds to the next phase, giving it the highest priority. setImmediate queues a callback in the check phase, which runs after the poll phase completes. This means process.nextTick callbacks always execute before setImmediate callbacks, and excessive use of nextTick can starve the event loop by preventing it from proceeding to subsequent phases.
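The starvation risk can be shown with a bounded sketch (the spin function and tick count are illustrative; an unbounded version would hang the process):

```javascript
// Every nextTick callback that schedules another nextTick runs before
// the loop can advance, so the setImmediate callback must wait for the
// entire chain to drain. Bounded here so the demo terminates.
let ticks = 0;

function spin() {
  ticks += 1;
  if (ticks < 1000) process.nextTick(spin);
}

process.nextTick(spin);
setImmediate(() => {
  console.log(`immediate ran only after ${ticks} nextTick callbacks`);
});
```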
How does the Node.js event loop handle CPU-intensive tasks?
The Node.js event loop itself is single-threaded and cannot perform CPU-intensive work without blocking subsequent callbacks. For CPU-bound operations, Node.js provides the worker_threads module, which creates separate threads, each with its own V8 isolate and event loop (distinct from the libuv thread pool used for I/O). These workers execute independently of the main event loop, allowing CPU-intensive calculations to run in parallel while the main thread continues processing I/O events.
Is the Node.js event loop truly single-threaded?
Yes, the JavaScript execution and the event loop itself run on a single thread, the main thread. However, Node.js achieves concurrency through libuv's thread pool (default 4 threads, configurable via UV_THREADPOOL_SIZE) for file system, DNS lookup, and some crypto and zlib operations, and through non-blocking I/O syscalls that delegate work to the operating system. While JavaScript callbacks execute sequentially on the main thread, the underlying I/O operations run concurrently via these mechanisms, giving the illusion of multi-threading for I/O-bound applications.
What happens if a timer callback takes too long to execute?
If a callback in any phase—including the timers phase—runs for an extended period, it blocks the entire event loop. Subsequent callbacks in the timers queue, as well as callbacks in later phases (poll, check, close), must wait until the long-running callback completes. This can cause cascading delays, timer drift (timers firing later than scheduled), and degraded application performance. To prevent this, CPU-intensive work should be offloaded to worker threads or broken into smaller chunks using setImmediate to yield control back to the event loop.
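Chunking with setImmediate can be sketched as follows; sumChunked is a hypothetical helper written for this example, not a Node.js API:

```javascript
// Splitting a long computation into slices: after each slice the
// function yields via setImmediate, letting timers and I/O callbacks
// run between slices instead of waiting for the whole job to finish.
function sumChunked(n, chunkSize, done) {
  let sum = 0;
  let i = 0;
  function runChunk() {
    const end = Math.min(i + chunkSize, n);
    for (; i < end; i++) sum += i;
    if (i < n) {
      setImmediate(runChunk); // yield control back to the event loop
    } else {
      done(sum);
    }
  }
  runChunk();
}

sumChunked(1e7, 1e5, (sum) => console.log('sum:', sum));
```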