# Introducing Distributed Tracing: Follow Requests Across Your Microservices
Loggy now supports distributed tracing! Track requests as they flow through your services, identify bottlenecks, and debug issues faster with beautiful waterfall visualizations.
We’re incredibly excited to announce the launch of distributed tracing in Loggy! This has been one of our most requested features, and we’ve built something we think you’re going to love.
## What is Distributed Tracing?
If you’re running microservices (or even a monolith that talks to databases and external APIs), you’ve probably experienced the pain of debugging a slow or failing request. The request hits your API gateway, then bounces to your auth service, then to your user service, which queries the database, and somewhere along the way… something goes wrong.
Distributed tracing solves this by giving each request a unique trace ID that follows it through every service it touches. Each unit of work becomes a span, and spans can have parent-child relationships, creating a tree that shows exactly how your request flowed through your system.
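The span model above can be sketched in a few lines. This is illustrative only, not Loggy's internal types: each span carries its trace ID, its own span ID, and (except for the root) the ID of its parent, which is enough to rebuild the tree.

```js
// Group spans by parentSpanId: the root span sits under `undefined`,
// every other span under its parent's spanId. The resulting map is
// the parent -> children tree a waterfall view is rendered from.
function buildTree(spans) {
  const children = new Map();
  for (const s of spans) {
    const list = children.get(s.parentSpanId) ?? [];
    list.push(s);
    children.set(s.parentSpanId, list);
  }
  return children;
}
```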
## Beautiful Waterfall Visualizations
When you open a trace in Loggy, you’ll see a waterfall view that shows every span on a timeline. At a glance, you can see:
- Which services were involved in handling the request
- How long each service took to do its work
- Where time was spent - is it the database? An external API? Your own code?
- Which spans failed - errors are highlighted in red so you can spot them instantly
The waterfall makes it obvious when something is slow. If your database query is taking 500ms out of a 600ms request, you’ll see it immediately.
## Getting Started is Simple
We’ve added tracing support to our Node.js SDK. Here’s all you need to do:
```js
import { CreateTracer, createTracingMiddleware } from '@loggydev/loggy-node';

// Create a tracer for your service
const tracer = CreateTracer({
  serviceName: 'api-gateway',
  serviceVersion: '1.0.0',
  environment: 'production',
  remote: {
    token: 'your-project-token',
  },
});

// Add middleware for automatic HTTP tracing
app.use(createTracingMiddleware({ tracer }));
```
That’s it! Every incoming HTTP request will automatically create a span, and the trace context will be propagated to any outgoing requests.
## Manual Spans for Custom Operations
Sometimes you want to trace specific operations that aren’t HTTP requests - like database queries or cache lookups. You can create manual spans:
```js
async function getUser(userId) {
  const span = tracer.startSpan('db.query', {
    kind: 'client',
    attributes: {
      'db.system': 'postgresql',
      'db.statement': 'SELECT * FROM users WHERE id = $1',
    },
  });
  try {
    const result = await db.query('SELECT * FROM users WHERE id = $1', [userId]);
    span.setStatus('ok');
    return result;
  } catch (err) {
    span.setStatus('error', err.message);
    span.addEvent('exception', {
      'exception.type': err.name,
      'exception.message': err.message,
    });
    throw err;
  } finally {
    span.end();
  }
}
```
## Context Propagation
When one service calls another, you need to pass the trace context so the spans get linked together. We use the W3C Trace Context standard, which means you can inject and extract trace context from HTTP headers:
```js
// Inject trace context into outgoing request
const headers = tracer.inject({});
const response = await fetch('http://user-service/api/users/123', { headers });

// Extract trace context from incoming request (done automatically by middleware)
const parentContext = tracer.extract(req.headers);
const span = tracer.startSpan('handle-request', { parent: parentContext });
```
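On the wire, W3C Trace Context carries this information in a `traceparent` header of the form `version-traceid-spanid-flags`. As a rough sketch of what gets parsed on the receiving side (illustrative code based on the W3C spec, not Loggy's implementation):

```js
// Parse a W3C `traceparent` header, e.g.
// "00-4bf92f3577b34da6a3ce929d0e0e4736-00f067aa0ba902b7-01".
// Returns { traceId, parentId, sampled } or null if invalid.
function parseTraceparent(header) {
  const m = /^([0-9a-f]{2})-([0-9a-f]{32})-([0-9a-f]{16})-([0-9a-f]{2})$/.exec(header);
  if (m === null) return null;
  const [, version, traceId, parentId, flags] = m;
  if (version === 'ff') return null; // version 0xff is forbidden by the spec
  if (/^0+$/.test(traceId) || /^0+$/.test(parentId)) return null; // all-zero IDs are invalid
  return { traceId, parentId, sampled: (parseInt(flags, 16) & 0x01) === 1 };
}
```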
## Log Correlation
One of the coolest features is automatic log correlation. If you’re using Loggy for both logging and tracing, your logs will automatically include the trace ID and span ID. When you view a trace, you’ll see all the correlated logs right there in the UI.
This means you can go from “this request was slow” to “here’s exactly what happened during that request” in seconds.
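Conceptually, correlation just means every log record is stamped with the active span's IDs. A hypothetical sketch of the idea, where `getActiveSpan` stands in for however the tracer exposes the current span (it is not a real Loggy API):

```js
// Build a structured log record that carries the active span's
// trace_id and span_id, so the backend can join logs to traces.
// If no span is active, the record is emitted without those fields.
function correlatedLog(getActiveSpan, level, message) {
  const span = getActiveSpan();
  return {
    level,
    message,
    timestamp: new Date().toISOString(),
    ...(span && { trace_id: span.traceId, span_id: span.spanId }),
  };
}
```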
## Service Maps (Team Plan)
For Team plan users, we’ve also built service maps. This is a visual graph showing how your services connect and communicate. You can see:
- Which services call which other services
- How many requests flow between services
- Error rates for each service
- Average latency for each connection
It’s incredibly useful for understanding your system architecture and identifying problematic service-to-service connections.
## Pricing
Distributed tracing is available on our Pro and Team plans:
| Feature | Pro ($10/mo) | Team ($50/mo) |
|---|---|---|
| Traces per month | 10,000 | 100,000 |
| Trace retention | 7 days | 30 days |
| Max spans per trace | 100 | 500 |
| Service map | ❌ | ✅ |
## What’s Next
This is just the beginning for tracing in Loggy. We’re already working on:
- Auto-instrumentation for popular libraries (pg, redis, axios, etc.)
- Trace-based alerts - get notified when traces are slow or failing
- Comparison views - compare two traces side-by-side
- Additional SDKs - Python and Go are on the roadmap
## Try It Today
If you’re on a Pro or Team plan, tracing is already available in your dashboard. Just update to the latest version of @loggydev/loggy-node and start sending traces!
If you’re on the free plan and want to try tracing, now’s a great time to upgrade. You can start a free trial of Pro with no credit card required.
We’d love to hear what you think! Drop us a line at [email protected] with your feedback, questions, or feature requests.
Happy tracing! 🔍