
From console.log to Production Logging: A Developer's Journey

We've all been there — console.log scattered everywhere, no structure, no search, no alerts. Here's the practical guide to graduating from console.log to real production logging that actually helps you debug at 2 AM. With code examples in Node.js, Python, and Go.

Loggy Team
· 5 min read

Let me paint a picture you’ll recognize. It’s 11 PM on a Tuesday. A customer just emailed saying their checkout is broken. You SSH into your server — or more likely, you open your hosting provider’s dashboard and click on “Function Logs” — and you’re staring at something like this:

here
here 2
got to the function
user object: [object Object]
WHY IS THIS UNDEFINED
order created i think
made it to payment
STILL BROKEN

You wrote every single one of those lines. You remember writing them, actually. It was during a frantic debugging session three weeks ago, and you told yourself you’d clean them up before pushing to production. You did not clean them up. And now, at 11 PM, with a real customer experiencing a real problem, your only debugging tools are a handful of console.log statements that made sense to you in the moment and are completely useless now.

If this sounds painfully familiar, you’re not alone. This is the starting point for almost every developer who eventually takes logging seriously. Nobody wakes up one day and decides to implement structured, centralized logging because it sounds fun. They do it because they’ve been burned. They do it because they spent four hours debugging something that should have taken four minutes, and they swore they’d never go through that again.

This post is about that journey. We’re going to walk through the natural progression from console.log chaos to production-grade logging, step by step, with real code examples. Not because logging is glamorous — it isn’t — but because it’s one of those boring infrastructure decisions that quietly saves you hundreds of hours over the life of a project.


Stage 1: The console.log Wilderness

Let’s be honest about what console.log actually gives you. In your local development environment, it’s genuinely useful. You can print objects, inspect state, trace execution flow. The browser DevTools format it nicely with expandable objects and color-coded levels. It’s immediate, it’s easy, and it requires zero setup. There’s a reason it’s the first debugging tool every JavaScript developer learns.

The problems start the moment you deploy. In production, console.log output goes… somewhere. On a traditional server, it goes to stdout, which maybe gets captured by your process manager, which maybe writes it to a file, which maybe gets rotated, which maybe you can access via SSH. On serverless platforms like Vercel or AWS Lambda, it goes to a log viewer that keeps it for a few hours or a few days before it vanishes forever. In a Docker container, it goes to the container’s stdout, which gets captured by your orchestrator’s logging driver, which you can access with docker logs if you know the container ID and the container hasn’t been recycled since the incident happened.

In every one of these scenarios, you have the same fundamental problems. You can’t search across time periods. You can’t filter by severity — everything is the same “level” because console.log doesn’t have levels. You can’t correlate a log message from one part of your application with a message from another part. You can’t set up an alert that wakes you up when errors start spiking. And you definitely can’t add structured metadata that would let you answer questions like “how many times did this error happen for users on the Pro plan in the last hour?”

The worst part is that console.log actively works against you in production. Every console.log you add during development is noise in production. Every console.log("here") makes it harder to find the log messages that actually matter. And because there’s no log level system, you can’t turn off the debug noise without removing the statements entirely — which means you lose the debugging context the next time you need it.

Stage 2: Adding Log Levels (The First Step)

The first step most developers take is switching from console.log to something that understands log levels. In vanilla JavaScript, that’s often just using the built-in console methods that most people forget exist:

console.debug("Payment flow started", { userId, cartId });
console.info("Order created successfully", { orderId: order.id });
console.warn("Payment retry triggered", { attempt: 2, reason: "timeout" });
console.error("Payment failed permanently", { orderId, error: err.message });

This is better! You now have semantic meaning attached to your messages. An error is different from an info, and a debug message is something you probably don’t need to see in production. But you still have all the other problems. The output still goes to stdout with no persistence. You still can’t search or filter across your entire application. You still can’t get alerted when something goes wrong.
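What the console methods still lack is a way to silence a level per environment, and that gating is essentially the first thing logging libraries add. Here is a minimal sketch of the idea (the LEVELS map, the LOG_LEVEL variable, and the JSON-per-line output are illustrative choices of ours, not a standard):

```javascript
// Minimal sketch of level gating. The numeric LEVELS map and the JSON-per-line
// output format are our own illustrative choices, not any library's API.
const LEVELS = { debug: 10, info: 20, warn: 30, error: 40 };

function createLogger(minLevel = process.env.LOG_LEVEL || "info") {
  const threshold = LEVELS[minLevel] ?? LEVELS.info;

  const emit = (level, message, meta = {}) => {
    if (LEVELS[level] < threshold) return; // drop anything below the threshold
    const line = JSON.stringify({ level, message, ...meta });
    console[level === "debug" ? "log" : level](line);
    return line; // returned so callers can see exactly what was emitted
  };

  return {
    debug: (msg, meta) => emit("debug", msg, meta),
    info: (msg, meta) => emit("info", msg, meta),
    warn: (msg, meta) => emit("warn", msg, meta),
    error: (msg, meta) => emit("error", msg, meta),
  };
}

const logger = createLogger("info");
logger.debug("noise that stays out of production"); // filtered out
logger.info("Order created", { orderId: "ord_3f9x" }); // emitted as JSON
```

With this shape, setting LOG_LEVEL=warn in production drops the debug and info noise without deleting the statements from your code.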

Some developers reach for a logging library at this point. In the Node.js world, that usually means Winston or Pino:

import pino from "pino";
const logger = pino({ level: "info" });

logger.info({ orderId: order.id, userId }, "Order created");
logger.error({ orderId, err }, "Payment failed");

In Python, you have the built-in logging module:

import logging
logger = logging.getLogger(__name__)

logger.info("Order created", extra={"orderId": order_id, "userId": user_id})
logger.error("Payment failed", extra={"orderId": order_id}, exc_info=True)

In Go, you might use log/slog from the standard library:

import "log/slog"

slog.Info("Order created", "orderId", order.ID, "userId", userID)
slog.Error("Payment failed", "orderId", order.ID, "error", err)

These are all good steps. Structured logging with proper levels is a real upgrade over console.log. But the logs are still local. They’re still on the machine that produced them, in a file or stdout stream that you have to manually access. If you’re running multiple instances of your application (which you probably are, even if it’s just your API server plus a background worker), you now have logs split across multiple places with no way to see them together.

This is the gap where most developers get stuck. They have structured logs, but they don’t have centralized logging. And the centralized logging solutions they’ve heard of — Datadog, Splunk, the ELK stack — feel like they’re built for companies with dedicated DevOps teams and budgets to match. So the structured logs stay local, and when an incident happens, the developer is still SSHing into machines and grepping through files.

Stage 3: Centralized Logging (Where It Gets Real)

Centralized logging is the point where debugging transforms from an archaeology expedition into something that actually resembles a workflow. Instead of your logs living on individual machines, they all get sent to a single place where you can search, filter, and analyze them together.

The concept is simple: your application sends its log messages to a remote service over HTTP, and that service stores them, indexes them, and gives you a dashboard to explore them. The difference this makes in practice is enormous. When a customer reports a bug, instead of trying to figure out which server handled their request and manually combing through files, you open a dashboard, search for their user ID, and see every log message from every service in chronological order.
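There is no magic in that sentence. A hand-rolled shipper is just a buffer plus an HTTP POST; the sketch below batches entries and flushes them on a timer. The endpoint URL, Authorization header, and payload shape here are invented for illustration and will not match any particular service’s real wire format.

```javascript
// Hypothetical log shipper: buffers entries, then POSTs them as a JSON batch.
// The endpoint URL, auth header, and payload shape are invented assumptions.
function createShipper({ endpoint, apiKey, flushEvery = 2000, maxBatch = 50 }) {
  let buffer = [];

  async function flush() {
    if (buffer.length === 0) return;
    const batch = buffer;
    buffer = [];
    try {
      await fetch(endpoint, {
        method: "POST",
        headers: {
          "Content-Type": "application/json",
          Authorization: `Bearer ${apiKey}`, // assumed auth scheme
        },
        body: JSON.stringify({ logs: batch }),
      });
    } catch {
      buffer = batch.concat(buffer); // network failed: keep the batch for retry
    }
  }

  setInterval(flush, flushEvery).unref?.(); // periodic background flush

  return {
    log(level, message, meta = {}) {
      buffer.push({ level, message, meta, time: Date.now() });
      if (buffer.length >= maxBatch) flush(); // flush early when the batch fills
    },
    flush,
    size: () => buffer.length, // exposed for inspection
  };
}
```

Real SDKs layer retries, backpressure, and graceful shutdown on top of this, which is exactly why it is worth using one rather than maintaining your own.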

Here’s what this looks like with Loggy’s Node.js SDK:

import { CreateLoggy } from "@loggydev/loggy-node";

const loggy = CreateLoggy({
  apiKey: process.env.LOGGY_API_KEY,
  projectId: process.env.LOGGY_PROJECT_ID,
  identifier: "api-server",
});

// Now every log is automatically sent to your Loggy dashboard
loggy.info("Order created", { orderId: order.id, userId, plan: user.plan });
loggy.warn("Payment retry", { attempt: 2, gateway: "stripe", orderId });
loggy.error("Payment failed", { orderId, error: err.message, duration: elapsed });

The Python SDK works the same way:

import os

from loggy import Loggy

loggy = Loggy(
    api_key=os.environ["LOGGY_API_KEY"],
    project_id=os.environ["LOGGY_PROJECT_ID"],
    identifier="api-server",
)

loggy.info("Order created", {"orderId": order_id, "userId": user_id})
loggy.error("Payment failed", {"orderId": order_id, "error": str(e)})

And the Go SDK:

import (
    "os"

    loggy "github.com/loggy-dev/loggy-go"
)

logger := loggy.New(loggy.Config{
    APIKey:     os.Getenv("LOGGY_API_KEY"),
    ProjectID:  os.Getenv("LOGGY_PROJECT_ID"),
    Identifier: "api-server",
})

logger.Info("Order created", map[string]interface{}{
    "orderId": order.ID,
    "userId":  userID,
})

Notice what changed in these examples versus the console.log originals. Every log message now has three things: a severity level that tells you how important it is, a human-readable message that tells you what happened, and structured metadata that gives you the context you need to actually debug the issue. That metadata is what transforms logging from “I know something went wrong somewhere” to “I know exactly what went wrong, for which user, in which service, at what time, and here’s every related event that led up to it.”

Stage 4: The Metadata Mindset

Once you have centralized logging, you start thinking about your logs differently. Instead of writing console.log("error happened") and hoping for the best, you start asking yourself: “If I’m reading this log at 2 AM while half-asleep, what information do I need to immediately understand what happened?”

This is the metadata mindset, and it’s the single biggest improvement you can make to your logging practice. The difference between a useless log and a useful one is almost always the metadata.

Compare these two approaches to logging a failed API request:

// What most people write
console.error("Request failed");

// What you should write
loggy.error("External API request failed", {
  service: "stripe",
  endpoint: "/v1/charges",
  statusCode: 502,
  duration: 30012,
  retryAttempt: 3,
  orderId: "ord_3f9x",
  userId: "usr_8k2m",
  errorCode: "gateway_timeout",
});

The first one tells you almost nothing. You know something failed. Cool. Good luck figuring out what, why, and for whom. The second one tells you everything. You know it was the Stripe API. You know it was the charges endpoint. You know it returned a 502. You know the request took 30 seconds. You know it was the third retry. You know which order and which user were affected. You know the error was a gateway timeout.

With that metadata flowing into a centralized logging dashboard, you can do things that would be impossible with console.log. You can search for all errors related to a specific user. You can filter for all Stripe failures in the last hour. You can build alerts that fire when the error rate for a specific service crosses a threshold. You can answer questions like “is this a Stripe-wide outage or something specific to our integration?” in seconds instead of minutes.

The metadata mindset also helps you write better log messages. When you’re forced to think about what data to attach, you naturally write more descriptive messages. Instead of “error” you write “External API request failed.” Instead of “user signed up” you write “New user registration completed.” The message becomes the summary; the metadata becomes the detail. Together, they give you everything you need.

Stage 5: Beyond Just Logging

Here’s what happens once you have proper centralized logging in place: you start wanting more. Not because you’re greedy, but because good logging naturally leads you to questions that logging alone can’t answer.

You’ll look at an error log and wonder: “What was the user doing right before this error? What other services were involved in this request?” That’s when you need distributed tracing — the ability to follow a single request across multiple services and see exactly how it flowed, where it slowed down, and where it broke.

You’ll notice a spike in errors and wonder: “Has our response time degraded? Are we getting more traffic than usual?” That’s when you need performance metrics — RPM, response times, status code breakdowns, and throughput data that gives you the bird’s-eye view that individual log lines can’t provide.

You’ll have an outage and your first instinct will be: “We need to know about this faster.” That’s when you need alerts — automated notifications that ping your team in Slack or Discord when error rates spike, when heartbeats go silent, or when your uptime monitors detect a problem.

And you’ll get questions from your customers asking “is your service down right now?” and you’ll want a better answer than “let me check.” That’s when you need a public status page — a transparent, always-updated view of your service health that builds trust with your users.

The point is, logging isn’t the destination. It’s the foundation. Once you have reliable, structured, centralized logging, everything else in your observability stack becomes possible. But without that foundation, you’re building on sand. Tracing doesn’t help if you can’t correlate traces with log messages. Metrics don’t help if you can’t drill down from a spike on a chart to the actual error messages that caused it. Alerts don’t help if the logs they point you to are unstructured noise.

The Five-Minute Migration

If you’ve read this far and you’re thinking “okay, I’m convinced, but migrating sounds like a huge project,” let me show you how small the change actually is. If you’re using console.log today, the migration to centralized logging is about five minutes of actual work.

Step 1: Install the SDK (30 seconds)

# Node.js
npm install @loggydev/loggy-node

# Python
pip install loggy

# Go
go get github.com/loggy-dev/loggy-go

Step 2: Create a logger instance (60 seconds)

import { CreateLoggy } from "@loggydev/loggy-node";

const loggy = CreateLoggy({
  apiKey: process.env.LOGGY_API_KEY,
  projectId: process.env.LOGGY_PROJECT_ID,
  identifier: "my-app",
});

Step 3: Find and replace (3 minutes)

This is the part that feels like it should be harder than it is. In most codebases, you can do a project-wide find-and-replace:

console.log(   →  loggy.info(
console.error( →  loggy.error(
console.warn(  →  loggy.warn(
console.debug( →  loggy.debug(

That’s it. Your logs now go to both your local console (the SDKs still print locally, with nice colors and formatting) and to your Loggy dashboard, where they’re searchable, filterable, and persistent. You didn’t have to change your hosting. You didn’t have to configure a log pipeline. You didn’t have to learn a query language. You just swapped console.log for loggy.info and suddenly you have production-grade logging.
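If a blanket find-and-replace feels too risky, a thin wrapper module gives you the same result with an escape hatch. In the sketch below, createRemoteLogger stands in for the real SDK factory (CreateLoggy in the snippets above); the console fallback behavior is our own assumption, not part of any SDK.

```javascript
// Sketch of a drop-in logger module with a console fallback. createRemoteLogger
// stands in for the real SDK factory; the fallback logic is our own assumption.
function consoleFallback() {
  const wrap = (method) => (msg, meta) =>
    meta === undefined ? console[method](msg) : console[method](msg, meta);
  return {
    debug: wrap("debug"),
    info: wrap("info"),
    warn: wrap("warn"),
    error: wrap("error"),
  };
}

function createLogger(createRemoteLogger) {
  // Use the remote client only when credentials are configured;
  // otherwise keep logging locally so development output still works.
  return process.env.LOGGY_API_KEY ? createRemoteLogger() : consoleFallback();
}

// In your app: export const loggy = createLogger(() => CreateLoggy({ ... }));
```

Point the find-and-replace at this module’s export and the swap becomes reversible: unset the key and you are back to plain console output.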

Then, over the next few days, you start adding metadata to the log calls that matter most. You don’t have to do this all at once. Start with your error handlers and your most critical business logic. Add the user ID, the request ID, the relevant entity IDs. Each piece of metadata you add makes your future debugging sessions faster.
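A central error handler is the highest-leverage place to start, since one change there covers every route. The sketch below assumes an Express-style app and a loggy instance like the one above; the field names and the req.user convention are suggestions, not a required schema.

```javascript
// Express-style error-handling middleware: one place to attach the request
// metadata that makes production debugging possible. Field names are suggestions.
function errorLogger(loggy) {
  return (err, req, res, next) => {
    loggy.error("Unhandled request error", {
      method: req.method,
      path: req.path,
      statusCode: err.statusCode ?? 500, // default when the error carries no status
      userId: req.user?.id, // assumes an auth middleware sets req.user
      requestId: req.headers["x-request-id"],
      error: err.message,
    });
    next(err); // let the framework's default handler send the response
  };
}

// app.use(errorLogger(loggy));
```

From there, each route only needs to throw or call next(err), and the metadata is captured consistently.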

What Changes When You Can Actually See Your Logs

The first time you debug a production issue with proper centralized logging, it’s a little bit of a revelation. Instead of the usual 45-minute scramble of SSHing, grepping, guessing, and praying, you open your dashboard, search for the affected user or the error message, and see the entire story laid out in front of you. The sequence of events. The exact error. The metadata that tells you why it happened. The timestamps that tell you when.

What used to take an hour now takes five minutes. What used to require waking up a senior engineer now just requires reading the logs. What used to be a mystery — “why did this happen?” — now has an answer sitting right there in your dashboard, often before a customer even reports it.

That’s the real value of graduating from console.log. It’s not about using a fancier tool or following best practices because someone on a blog told you to. It’s about reclaiming your time. Every minute you spend debugging with inadequate tools is a minute you’re not spending building features, improving your product, or just… sleeping.

And look — the best logging in the world won’t prevent bugs. Your code will still break in weird ways at weird times. But the difference between debugging with console.log and debugging with proper production logging is the difference between searching for a needle in a haystack and having someone hand you the needle with a note that says “here’s what happened and here’s why.”

Getting Started

If you’re still living in the console.log wilderness, today’s a good day to leave. Loggy’s free tier gives you one project with a week of log retention and 1,000 logs — more than enough to get started and see the difference for yourself. Setup takes about five minutes, and once you’ve seen your logs in a real dashboard with search, filters, and real-time streaming, you’ll wonder how you ever debugged without it.

And if you’re already using a logging library like Winston or Pino and you’re happy with it, that’s great. You can keep your existing logger and just add Loggy as a transport — your structured logs get sent to a centralized dashboard without changing any of your existing code.

The journey from console.log("here") to production logging isn’t glamorous. Nobody’s going to tweet about your amazing log messages. But the first time you diagnose and fix a production issue in under five minutes — while your competitor is still SSHing into servers and grepping through files — you’ll know it was worth it.

Your future self, debugging at 2 AM, will thank you.
