Pino Logger - Complete Guide to the Node.js Logging Library with Examples [2026]

Updated Mar 31, 2026 · 19 min read

Logging is the process of recording timestamped, immutable records of discrete events within a system, enabling developers to diagnose failures, audit behaviour, and maintain operational visibility in production environments. Node.js ships with the built-in console module, which provides methods like console.log(), console.error(), console.warn(), and console.info().

However, these methods fall short in several practical ways: there are no log levels with filterable severity, no structured (JSON) output, no log rotation or file management, and no way to route logs to external systems. As a result, the Node.js ecosystem relies heavily on third-party libraries such as Pino, Winston, and Bunyan.

This article explores Pino Logger in depth, the fastest of the three according to its official benchmarks, and the one that outputs structured JSON by default. We'll start with basic setup and work our way through log levels, child loggers, transports, and error handling: everything you need to go from console.log to a production-grade logging setup.

What is Pino Logger?

Pino is a fast, low-overhead logging library designed for Node.js applications. It is built around a simple principle: logging should never slow down your application. Pino transport code moves log processing to a separate worker thread, keeping the main thread’s CPU usage low even when application traffic spikes.

Out of the box, it ships with structured JSON output, multiple log levels, child loggers for scoped context, custom serializers for redaction or data reshaping, and pluggable transports.

Quickstart: How to set up and use Pino Logger?

Let's set up Pino from scratch and see how its features work in practice.

Prerequisites

Before starting, make sure you have the latest stable version of Node.js and npm installed. You can verify your version by running node -v in your terminal.

Installing and Setting Up Pino

Set up a clean Node.js project first.

mkdir pino-demo && cd pino-demo
npm init -y

Install Pino using npm.

npm install pino --save

Basic Setup

Setting up a basic Pino Logger involves creating and using a logger instance to log messages. Here's a simple example:

index.js
const pino = require('pino');
const logger = pino();

logger.info('Pino logger is running');

This code initializes Pino and logs an info message. By default, the logs are output to the console in JSON format.

Output
{"level":30,"time":1711440000000,"pid":12345,"hostname":"your-machine","msg":"Pino logger is running"}

This JSON-first approach is intentional. Structured JSON logs like this are easy to parse, filter, and ingest into log analysis tools, and every log entry is a single JSON line.

Following is a breakdown of each field:

  - level: the numeric log severity (30 = info)
  - time: Unix timestamp in milliseconds
  - pid: process ID of the Node.js process
  - hostname: the machine where the process is running
  - msg: the log message content

Printing Logs for Development

While JSON logs are great for machines, they can be difficult for humans to read. For development and debugging purposes, you can use the pino-pretty module to format logs in a more readable way. The following are the steps to use it:

Step 1: Install pino-pretty using npm

npm install pino-pretty --save-dev

Step 2: Pipe your application output through it

node index.js | npx pino-pretty
Output
[10:30:00.000] INFO: Pino logger is running

For the full list of configuration options, including custom colour themes, message formatting, and timestamp handling, check out the official pino-pretty documentation.

Logging HTTP requests

Pino Logger can be integrated with HTTP servers to automatically log incoming requests and outgoing responses. This is particularly useful for monitoring, debugging, and analyzing the performance and behaviour of web applications.

Using Pino HTTP Middleware

You can log HTTP requests and responses using the pino-http module, which integrates Pino with HTTP servers like Express. This middleware logs details about each request and response, including HTTP method, URL, status code, and response time.

Step 1: Install the pino-http module

npm install pino-http --save

Step 2: Integrate pino-http into your application

Here’s an example using Express:

app.js
const express = require('express')
const pino = require('pino')
const pinoHttp = require('pino-http')

const logger = pino()
const httpLogger = pinoHttp({ logger })

const app = express()

// Use Pino HTTP middleware
app.use(httpLogger)

app.get('/', (req, res) => {
  res.send('Hello, world!')
})

app.listen(3000, () => {
  logger.info('Server is running on port 3000')
})

Run the app using node app.js, visit http://localhost:3000/, and reload the page to generate logs.

Output
{"level":30,"time":1774947711963,"pid":12345,"hostname":"your-machine","msg":"Server is running on port 3000"}

{"level":30,"time":1774947742994,"pid":12345,"hostname":"your-machine","req":{"id":1,"method":"GET","url":"/","query":{},"params":{},"headers":{"host":"localhost:3000","connection":"keep-alive",....},"remoteAddress":"::1","remotePort":63275},"res":{"statusCode":200,"headers":{"x-powered-by":"Express",...}},"responseTime":6,"msg":"request completed"}

In this example, the Pino HTTP middleware is added to the Express application using app.use(httpLogger). This ensures that all incoming HTTP requests and outgoing responses are logged automatically.

Configuring Pino Logger for Production

Now that the basic setup is working, let's configure Pino for how it's actually used in production, starting with controlling what gets logged.

Pino Log Levels

Log levels let you control the level of detail in your application's output. Every log message has a severity, and Pino will only output messages at or above the level you configure. This means in production, you can set the level to info or warn to filter out noisy debug output, and in development, you can drop it to debug or trace to see everything.

Pino supports six log levels in order of decreasing severity: fatal, error, warn, info, debug, and trace. Each level has a corresponding numeric value: fatal is 60, trace is 10. By default, Pino logs at info (30), which means debug and trace messages are silently ignored unless you explicitly lower the threshold.

Level | Numeric Value | Use Case
----- | ------------- | --------
trace | 10 | Fine-grained debugging (variable values, loop iterations)
debug | 20 | Diagnostic information for development
info  | 30 | Normal operational messages (server started, request handled)
warn  | 40 | Unexpected situations that are not errors
error | 50 | Errors that need attention but don't crash the process
fatal | 60 | Unrecoverable errors that will crash the process
logLevels.js
const pino = require('pino');

const logger = pino({
  level: process.env.LOG_LEVEL || 'info',
});

logger.fatal('Application crashed — unrecoverable');
logger.error('Failed to connect to database');
logger.warn('Deprecated API endpoint hit');
logger.info('Server started on port 3000');
logger.debug('Request payload: %o', { userId: 42 });
logger.trace('Entering function parseToken()');

Output
{"level":60,"time":1711440000000,"pid":12345,"hostname":"your-machine","msg":"Application crashed — unrecoverable"}
{"level":50,"time":1711440000001,"pid":12345,"hostname":"your-machine","msg":"Failed to connect to database"}
{"level":40,"time":1711440000002,"pid":12345,"hostname":"your-machine","msg":"Deprecated API endpoint hit"}
{"level":30,"time":1711440000003,"pid":12345,"hostname":"your-machine","msg":"Server started on port 3000"}

With LOG_LEVEL unset, the logger stays at the default info level, so the debug and trace calls produce no output. Using process.env.LOG_LEVEL lets you change verbosity at deploy time without touching code. Set it to debug in staging, warn in production.
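For example, with the logLevels.js file above, you can raise or lower verbosity per run:

```shell
# Same code, different verbosity (logLevels.js from the example above)
LOG_LEVEL=debug node logLevels.js   # also prints the debug message
LOG_LEVEL=error node logLevels.js   # prints only error and fatal
```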

Transports

Pino follows the philosophy that the logger should do only one thing: write JSON to stdout. Everything else, such as sending logs to a file, forwarding them to log management tools, or routing them to multiple destinations, is handled by transports. A transport is a separate worker thread that receives log data from the main thread and processes it independently, so log routing never blocks your application.

You configure transports using pino.transport(), passing either a single target or multiple targets. Each target is an npm package (or an absolute path) that defines where and how logs are delivered.

transport.js
const pino = require('pino');

const logger = pino({
  level: 'info',
  transport: {
    targets: [
      {
        target: 'pino/file',
        options: { destination: './app.log' },
        level: 'info',
      },
      {
        target: 'pino-pretty',
        options: { colorize: true },
        level: 'debug',
      },
    ],
  },
});

logger.info('This goes to both the file and the console');

Once you run the transport.js file, you will see a log in the console, and a new file named app.log will be created with the log.

Child Loggers

In any application with multiple components or concurrent requests, a single logger instance isn't enough. You need context, like which request triggered this log, which module produced it, and which user was involved.

Child loggers solve this by letting you create a new logger that inherits the parent's configuration but carries additional fields that are automatically included in every log entry it produces.

You create a child logger using logger.child() and passing an object of key-value pairs. These fields get merged into every log line from that child, so you don't have to manually repeat them in each log call.

childLogger.js
const pino = require('pino');
const logger = pino();

// Create a child logger with module context
const authLogger = logger.child({ module: 'auth' });

authLogger.info('User login attempted');
// {"level":30,"time":...,"module":"auth","msg":"User login attempted"}

// Create a request-scoped child logger
function handleRequest(req) {
  const reqLogger = logger.child({ requestId: req.id, userId: req.userId });

  reqLogger.info('Processing request');
  // {"level":30,"time":...,"requestId":"abc-123","userId":42,"msg":"Processing request"}
}
Output
{"level":30,"time":1774874841855,"pid":55188,"hostname":"your-machine","module":"auth","msg":"User login attempted"}
{"level":30,"time":1774874841856,"pid":55188,"hostname":"your-machine","requestId":"abc-123","userId":42,"msg":"Processing request"}

This pattern is especially useful in Express or Fastify middleware. Create a child logger per request with a unique requestId, attach it to the request object, and every downstream log automatically carries that ID. When something goes wrong in production, you can filter logs by requestId and trace the entire lifecycle of a single request.

Built-in and Custom Serializers

When you log objects like HTTP requests, responses, or errors, you rarely want the entire raw object in your logs. Serializers help you define how specific fields are transformed before they're written to the log output. You pass them as functions under the serializers option, where each key maps to a field name in the logged object.

...
  serializers: {
    // Functions
  },
...

Pino provides ready-made serializer functions for three common objects:

  1. pino.stdSerializers.req takes the raw Node.js incoming message object and returns only { method, url, headers, remoteAddress, remotePort }. Everything else is thrown away.
  2. pino.stdSerializers.res takes the raw ServerResponse and returns only { statusCode, headers }.
  3. pino.stdSerializers.err takes an Error object and returns { type, message, stack }.
builtInSerializer.js
const http = require('http');
const pino = require('pino');

const logger = pino({
  serializers: {
    req: pino.stdSerializers.req,
    res: pino.stdSerializers.res,
    err: pino.stdSerializers.err,
  },
});

const server = http.createServer((req, res) => {
  // Log the request using built-in serializer
  logger.info({ req }, 'Incoming request');

  res.statusCode = 200;
  res.end('Hello World');

  // Log the response using built-in serializer
  logger.info({ res }, 'Request completed');
});

server.listen(3000, () => {
  logger.info('Server running on port 3000');
});

Run the file builtInSerializer.js, then hit http://localhost:3000 in your browser.

Output
{"level":30,"time":1711440000000,"pid":12345,"hostname":"your-machine","msg":"Server running on port 3000"}
{"level":30,"time":1711440001000,"pid":12345,"hostname":"your-machine","req":{"method":"GET","url":"/","headers":{"host":"localhost:3000",...},"remoteAddress":"::1","remotePort":54321},"msg":"Incoming request"}
{"level":30,"time":1711440001001,"pid":12345,"hostname":"your-machine","res":{"statusCode":200,"headers":{}},"msg":"Request completed"}

Notice the req field: it contains only method, url, headers, remoteAddress, and remotePort. Without a serializer, the raw req object would dump hundreds of internal Node.js properties into your log.

To see the difference, remove req: pino.stdSerializers.req from the config and run it again.

Even though most of the internal Node.js properties were removed by the built-in serializer, it still logs all headers, including cookie, which contains session tokens, tracking IDs, and other sensitive data you definitely don't want in your logs.

This is exactly where the built-in serializer falls short, and where a custom serializer comes in. It’s the same idea, but you write the function yourself. For example, in the following file we define a custom req function that logs only the method, url, remoteAddress, and the headers you explicitly choose, like host.

customSerializer.js
const http = require('http');
const pino = require('pino');

const logger = pino({
  serializers: {
    req: (req) => ({
      method: req.method,
      url: req.url,
      remoteAddress: req.remoteAddress,
      // only pick the headers you actually need
      headers: {
        host: req.headers.host,
      },
    }),
  },
});

const server = http.createServer((req, res) => {
  // Log the request using the custom serializer
  logger.info({ req }, 'Incoming request');

  res.statusCode = 200;
  res.end('Hello World');

});

server.listen(3000, () => {
  logger.info('Server running on port 3000');
});
Output
{"level":30,"time":1774939188325,"pid":12345,"hostname":"your-machine","msg":"Server running on port 3000"}
{"level":30,"time":1774939190368,"pid":12345,"hostname":"your-machine","req":{"method":"GET","url":"/","headers":{"host":"localhost:3000"}},"msg":"Incoming request"}
{"level":30,"time":1774939190402,"pid":12345,"hostname":"your-machine","req":{"method":"GET","url":"/favicon.ico","headers":{"host":"localhost:3000"}},"msg":"Incoming request"}

Redaction

Serializers let you reshape objects before logging, but sometimes you just need to hide specific fields across your entire log output, such as passwords, tokens, or credit card numbers, without writing a serializer for every object. Redaction handles this at the logger level: you pass an array of key paths, and Pino replaces their values with [Redacted] in every log entry, wherever they appear.

redact.js
const pino = require('pino');

const logger = pino({
  redact: ['password', 'creditCard', 'headers.cookie', 'headers.authorization'],
});

logger.info({
  username: 'Olly',
  password: 'super-secret',
  creditCard: '4111-1111-1111-1111',
}, 'User signup');
// {"level":30,"time":...,"username":"Olly","password":"[Redacted]","creditCard":"[Redacted]","msg":"User signup"}
Output
{"level":30,"time":1774940121873,"pid":12345,"hostname":"your-machine","username":"Olly","password":"[Redacted]","creditCard":"[Redacted]","msg":"User signup"}

You can also change the placeholder value or remove the field entirely instead of showing [Redacted]:

customRedacted.js
const pino = require('pino');

const logger = pino({
  redact: {
    paths: ['password', 'creditCard'],
    censor: '**',       // replaces with '**' instead of '[Redacted]'
    // remove: true,         // uncomment to remove the field entirely from the output
  },
});

logger.info({
  username: 'Olly',
  password: 'super-secret',
  creditCard: '4111-1111-1111-1111',
}, 'User signup');
Output
{"level":30,"time":1774940321495,"pid":12345,"hostname":"your-machine","username":"Olly","password":"**","creditCard":"**","msg":"User signup"}

The difference between serializers and redaction lies in scope: serializers transform a specific field's shape, while redaction blanks out sensitive values wherever they appear. In practice, you'll often use both together. For the full list of path syntax options, see the official Redaction documentation.

Asynchronous Logging

Asynchronous logging offloads log writes from the main application thread, reducing their impact on request handling and improving performance. This is particularly useful in high-throughput applications where logging must not block the main execution flow. Pino offers two ways to make logging asynchronous.

  1. Using pino.destination() with sync: false: This buffers log messages in memory and flushes them to the destination in larger chunks, reducing the number of I/O operations.

    asyncLogging.js
    const pino = require('pino');
    
    const logger = pino(
      pino.destination({
        dest: './app.log',
        minLength: 4096,  // buffer 4KB before writing
        sync: false,
      })
    );
    
    logger.info('Server started');
    logger.info({ userId: 42 }, 'User logged in');
    logger.error('Something went wrong');
    
    // periodically flush the buffer during low-traffic periods
    setInterval(() => {
      logger.flush();
    }, 10000).unref();
    
  2. Using pino.transport(): This moves log processing entirely to a separate worker thread, which reduces work on the main thread during traffic spikes. It’s covered in the Transports section above.

Buffered messages can be lost if the process crashes before a flush, and there's no one-to-one relationship between a log call and a write. Pino mitigates the first issue by automatically flushing on exit, SIGINT, SIGTERM, and other shutdown signals.

Integrating Pino with Fastify

Fastify has built-in Pino integration, so no dedicated integration package is required. However, logging is disabled by default, and you need to enable it when creating a Fastify instance.

Step 1: Install Fastify using npm

npm install fastify --save

Step 2: Enable pino logger and create a basic server

fastifyApp.js
const fastify = require('fastify')({
  logger: true,
});

fastify.get('/', async (request, reply) => {
  request.log.info('Handling root route');
  return { hello: 'world' };
});

fastify.listen({ port: 3000 }, (err) => {
  if (err) {
    fastify.log.error(err);
    process.exit(1);
  }
});

Passing { logger: true } enables Pino with its default configuration: log level set to info, JSON output to stdout, and standard serializers for req, res, and err objects.

Fastify automatically logs every incoming request and outgoing response, and attaches a request-scoped child logger to request.log with a unique reqId for tracing. fastify.log is the server-level logger available outside of request context. In the above code, we use it to log startup errors before the server is ready to handle requests.

Step 3: Run and check http://localhost:3000.

node fastifyApp.js

Step 4: Check Output

Output
{"level":30,"time":1774948769144,"pid":12345,"hostname":"your-machine","msg":"Server listening at http://127.0.0.1:3000"}
{"level":30,"time":1774948773716,"pid":12345,"hostname":"your-machine","reqId":"req-1","req":{"method":"GET","url":"/","host":"localhost:3000","remoteAddress":"::1","remotePort":63608},"msg":"incoming request"}
{"level":30,"time":1774948773717,"pid":12345,"hostname":"your-machine","reqId":"req-1","msg":"Handling root route"}
{"level":30,"time":1774948773721,"pid":12345,"hostname":"your-machine","reqId":"req-1","res":{"statusCode":200},"responseTime":4.095916986465454,"msg":"request completed"}

Centralizing and Monitoring Logs in an Observability Backend

Everything we have configured so far writes logs to stdout or a local file. That works on a single server, but production applications typically run across multiple containers or instances. When something breaks, you need to check logs from the right container, on the right server, at the right time. If that container has already been killed and restarted, those logs are gone.

Centralizing your logs solves this. Instead of writing only to local destinations, you configure an exporter that sends logs to a backend over the network in real time. All your logs end up in one place, searchable across every service, and they persist regardless of what happens to individual containers.

For this guide, we will use SigNoz as our observability backend and OpenTelemetry as the export layer. OpenTelemetry is the open-source standard for collecting telemetry data, and SigNoz is an OpenTelemetry-native observability platform that lets you search, filter, and correlate your logs with traces and metrics in a single interface.

The setup uses OpenTelemetry auto-instrumentation, which automatically captures Pino logs along with trace correlation and HTTP request data. You don't need to change your logging code. Your existing logger.info(), logger.error() calls continue to work as-is.

Install the required packages:

npm install --save @opentelemetry/api @opentelemetry/auto-instrumentations-node

Set the environment variables pointing to your SigNoz instance and run your application:

export OTEL_EXPORTER_OTLP_ENDPOINT="https://ingest.<region>.signoz.cloud:443"
export OTEL_NODE_RESOURCE_DETECTORS="env,host,os"
export OTEL_SERVICE_NAME="<service_name>"
export OTEL_EXPORTER_OTLP_HEADERS="signoz-ingestion-key=<your-ingestion-key>"
export NODE_OPTIONS="--require @opentelemetry/auto-instrumentations-node/register"
node index.js

That's it. Your Pino logs will start appearing in SigNoz.

SigNoz Logs Explorer showing filtered log entries for the consumer-svc-2 service, with severity levels, deployment environment, and service name filters visible in the left sidebar, and timestamped Kafka consumer log lines in the main panel.
Viewing Node.js application logs in SigNoz Logs Explorer

The complete walkthrough, including deployment options for VMs, Docker, Kubernetes, and Windows, along with the advanced code-level instrumentation approach, is available in the official SigNoz guide:

Send logs from Node.js Pino logger to SigNoz using OpenTelemetry.

Get Started with SigNoz

You can choose between various deployment options in SigNoz. The easiest way to get started with SigNoz is SigNoz Cloud. We offer a 30-day free trial account with access to all features.

Those with data privacy concerns who can't send their data outside their infrastructure can sign up for either the enterprise self-hosted or BYOC offering.

Those who have the expertise to manage SigNoz themselves, or who just want to start with a free self-hosted option, can use our community edition.

Conclusion

In Pino's official benchmarks, 10,000 basic log operations complete in ~115ms, compared to ~270ms for Winston and ~377ms for Bunyan. Combined with structured JSON output, worker-thread transports, and child loggers for request context, it's the right default choice for production Node.js logging.

To get the most out of Pino, pair it with an observability platform. Using OpenTelemetry auto-instrumentation you can send structured logs directly to SigNoz Cloud, where they become searchable and correlated with traces and metrics, turning fast logging into actionable visibility.

FAQs

What is Pino in Node.js?

Pino is a high-performance, low-overhead logging library for Node.js that outputs structured JSON logs. It is designed to minimize the impact of logging on application performance by deferring heavy processing like log formatting and transport to worker threads. Pino supports six log levels (trace, debug, info, warn, error, fatal), child loggers to add contextual metadata, and a transport system to route logs to files, remote services, or observability platforms. It is roughly 2.4x faster than Winston in benchmarks, making it one of the most popular choices for production Node.js applications.

Which is better, Pino or Winston?

Pino is better suited for performance-sensitive applications. According to Pino's official benchmarks, it completes 10,000 basic log operations in about 115ms, while Winston takes roughly 270ms for the same workload, making Pino about 2.4x faster for basic logging. Pino achieves this by outputting minimal JSON synchronously and offloading all formatting and transport work to separate worker threads. The Pino README claims it is "over 2.4x faster than alternatives" in many real-world cases, particularly when accounting for HTTP request throughput.

Winston offers more built-in flexibility for formatting and in-process transports, and its API may feel more familiar if you're coming from other logging ecosystems. It has a larger plugin ecosystem for direct integrations, such as logging to databases or cloud services.

For most production Node.js services handling significant traffic, Pino is the more suitable choice because logging overhead directly affects response latency under load. You can pair Pino with pino-pretty during development to get readable output without sacrificing production performance.

Library | 10K Basic Logs | 10K Object Logs | Relative Speed
------- | -------------- | --------------- | --------------
Pino    | ~115ms | ~119ms | Baseline (fastest)
Winston | ~270ms | ~273ms | ~2.4x slower
Bunyan  | ~377ms | ~410ms | ~3.3x slower

What does Pino do?

Pino is a Node.js logging library that generates structured JSON log output with minimal performance overhead. When you call a Pino log method like logger.info(), it serializes your log data into a single JSON line containing the log level, a high-resolution timestamp, the process ID, hostname, and your message or data object. This JSON output can then be piped to files, forwarded to log aggregation services, or sent to observability platforms like SigNoz via OpenTelemetry transports. Pino's core design principle is that log processing should happen outside the main application thread.

What are the benefits of Pino logger?

The benefits of Pino logger are high throughput (10,000+ logs/second), low CPU and memory overhead, structured JSON output that is machine-parseable, and asynchronous transport support via worker threads. Pino also provides child loggers for attaching request-scoped context, built-in redaction for sensitive fields like passwords and tokens, and native integration with frameworks like Express, Fastify, and NestJS through companion packages like pino-http. Its structured JSON format makes it particularly well-suited for centralized log management with observability tools, where logs need to be queried, filtered, and correlated with traces and metrics.

Tags: JavaScript, Logging