Introducing the Loggy Python SDK: Observability for Python Applications
The official Loggy SDK for Python is here! Get beautiful logging, distributed tracing, and performance metrics in your Flask, FastAPI, and Django applications with a Pythonic API that feels right at home.
Python developers, we heard you. After launching our Go SDK last week, the most requested feature was a Python SDK, and today we’re delivering. The Loggy Python SDK brings everything you love about Loggy to the Python ecosystem: colorful console output, remote log aggregation, distributed tracing with W3C Trace Context support, and performance metrics tracking.
Why Python?
Python powers an enormous chunk of the modern web. From Django and Flask web applications to FastAPI microservices, from data pipelines to machine learning inference servers, Python is everywhere. And yet, observability in Python has always felt like an afterthought. You either cobble together the standard library’s logging module with various handlers, or you pull in heavyweight frameworks that feel like they were designed for enterprise Java shops.
We wanted something different. We wanted logging that’s as simple as print() but with all the power you need for production systems. We wanted tracing that doesn’t require a PhD to configure. And we wanted it all to feel Pythonic, not like a port from another language.
Getting Started
Installation is straightforward with pip:
pip install loggydev-py

# With optional extras (quote the extras if your shell expands brackets, e.g. zsh)
pip install "loggydev-py[color]"    # Colored console output
pip install "loggydev-py[flask]"    # Flask middleware
pip install "loggydev-py[fastapi]"  # FastAPI middleware
pip install "loggydev-py[all]"      # Everything
And getting your first logs flowing takes just a few lines:
import os

from loggy import Loggy

logger = Loggy(
    identifier="my-service",
    timestamp=True,
    color=True,
    remote={
        "token": os.environ.get("LOGGY_TOKEN"),
    },
)

logger.info("Application started")

logger.info("User logged in", metadata={
    "user_id": 12345,
    "role": "admin",
})

# Flush pending logs and clean up on shutdown
logger.destroy()
Your logs will appear both in your terminal with beautiful colored output and in your Loggy dashboard in real time. The SDK batches logs before sending and retries on network failures, so a temporary connection drop won't cost you logs.
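To make the batch-and-retry behavior concrete, here is a simplified sketch of the general pattern. The class, parameter names, and numbers are ours for illustration, not the SDK's actual internals:

```python
import time

class LogBatcher:
    """Illustrative batch-and-retry buffer (hypothetical, not the SDK's code)."""

    def __init__(self, send, batch_size=50, max_retries=3):
        self.send = send              # callable that ships a list of entries
        self.batch_size = batch_size
        self.max_retries = max_retries
        self.buffer = []

    def add(self, entry):
        self.buffer.append(entry)
        if len(self.buffer) >= self.batch_size:
            self.flush()

    def flush(self):
        if not self.buffer:
            return
        batch, self.buffer = self.buffer[:], []
        for attempt in range(self.max_retries):
            try:
                self.send(batch)
                return
            except ConnectionError:
                time.sleep(2 ** attempt)  # exponential backoff between retries
        # Out of retries: re-queue the batch so nothing is silently dropped.
        self.buffer = batch + self.buffer
```

Buffering amortizes network round-trips, and re-queuing failed batches is what lets a logger ride out a flaky connection.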
Framework Middleware
One of the things we’re most proud of is how seamlessly the SDK integrates with popular Python web frameworks. For Flask applications, adding observability is just a few lines:
from flask import Flask
from loggy import Loggy, Metrics, Tracer
from loggy import flask_tracing_middleware, flask_metrics_middleware

app = Flask(__name__)

logger = Loggy(identifier="my-flask-app", remote={"token": "..."})
metrics = Metrics(token="...")
tracer = Tracer(service_name="my-flask-app", remote={"token": "..."})

# Add middleware - that's it!
flask_tracing_middleware(tracer)(app)
flask_metrics_middleware(metrics)(app)

@app.route("/")
def hello():
    logger.info("Handling request")
    return "Hello, World!"
FastAPI users get the same great experience with async-native middleware:
from fastapi import FastAPI
from loggy import Tracer, fastapi_tracing_middleware

app = FastAPI()
tracer = Tracer(service_name="my-fastapi-app", remote={"token": "..."})

app.add_middleware(fastapi_tracing_middleware(tracer))

@app.get("/")
async def hello():
    return {"message": "Hello, World!"}
Every request is automatically traced, timed, and logged. You get visibility into your application’s behavior without writing any instrumentation code.
Distributed Tracing That Actually Works
If you’re running microservices, you know the pain of trying to debug a request that touches five different services. Which service was slow? Where did the error actually happen? Without tracing, you’re left grepping through logs and hoping the timestamps line up.
The Python SDK supports W3C Trace Context out of the box, which means it propagates trace IDs automatically through HTTP headers. When Service A calls Service B, the trace context flows through, and you can see the entire request journey in your Loggy dashboard:
import os

from loggy import Tracer, SpanKind, SpanStatus

tracer = Tracer(
    service_name="order-service",
    service_version="1.0.0",
    environment="production",
    remote={"token": os.environ.get("LOGGY_TOKEN")},
)

# Start a span for a database operation
span = tracer.start_span("db.query", kind=SpanKind.CLIENT, attributes={
    "db.system": "postgresql",
    "db.statement": "SELECT * FROM orders WHERE user_id = $1",
})

try:
    result = db.execute(query, [user_id])
    span.set_status(SpanStatus.OK)
except Exception as e:
    span.set_status(SpanStatus.ERROR, str(e))
    span.add_event("exception", {
        "exception.type": type(e).__name__,
        "exception.message": str(e),
    })
    raise
finally:
    span.end()
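It also helps to see what W3C Trace Context actually puts on the wire when Service A calls Service B. The spec encodes the IDs in a single traceparent header of the form version-trace_id-span_id-flags; the SDK propagates this for you, but a hand-rolled sketch makes it tangible (the helper name and sample IDs are ours, not the SDK's):

```python
def build_traceparent(trace_id: str, span_id: str, sampled: bool = True) -> str:
    """Build a W3C traceparent header value: version-trace_id-span_id-flags."""
    flags = "01" if sampled else "00"  # 01 = this trace was sampled
    return f"00-{trace_id}-{span_id}-{flags}"

# Attach it to an outgoing call so the downstream service joins the trace:
headers = {"traceparent": build_traceparent(
    "4bf92f3577b34da6a3ce929d0e0e4736", "00f067aa0ba902b7")}
# requests.get("http://service-b/orders", headers=headers)
```

Because the format is a vendor-neutral standard, any service that speaks W3C Trace Context can join the same trace, whatever SDK it uses.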
We’ve also included a with_span helper that makes this even cleaner:
from loggy import with_span

result = with_span(
    tracer,
    "fetch_user_data",
    lambda: db.get_user(user_id),
    attributes={"user.id": user_id},
)
And if you prefer context managers, spans work great with Python’s with statement:
with tracer.start_span("process_order") as span:
    span.set_attribute("order.id", order_id)
    process_order(order_id)
# Span automatically ends and sets status
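If you're curious how that with support can work, all it takes is __enter__ and __exit__. Here's a simplified stand-in (our sketch, not the SDK's actual implementation):

```python
class SimpleSpan:
    """Minimal stand-in for a span that drives Python's with statement."""

    def __init__(self, name):
        self.name = name
        self.status = None
        self.ended = False

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc, tb):
        # An exception inside the block marks the span as errored; either
        # way, the span ends exactly once.
        self.status = "ERROR" if exc_type else "OK"
        self.ended = True
        return False  # never swallow the caller's exception
```

Returning False from __exit__ is the key detail: the span records the failure but lets the exception propagate to the caller.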
Log-Trace Correlation
The real magic happens when you combine logging with tracing. Every log can include the current trace context, which means you can click on a trace in your dashboard and see all the logs from all services that were involved in that request:
ctx = tracer.get_current_context()

logger.info("Processing payment", metadata={
    "trace_id": ctx["trace_id"],
    "span_id": ctx["span_id"],
    "order_id": order_id,
    "amount": 99.99,
})
This is incredibly powerful for debugging. Instead of searching through millions of logs trying to find the needle in the haystack, you can start from a trace and see exactly what happened, in order, across your entire system.
Performance Metrics
The SDK also includes performance metrics tracking. You get visibility into your application’s throughput, response times, and error rates without any additional configuration:
import os

from loggy import Metrics

metrics = Metrics(token=os.environ.get("LOGGY_TOKEN"))

# Manual tracking
timer = metrics.start_request()
# ... handle request ...
timer(status_code=200, path="/api/users", method="GET")

# Or record pre-measured requests
metrics.record(
    duration_ms=150,
    status_code=200,
    bytes_in=1024,
    bytes_out=4096,
)
You’ll get RPM (requests per minute), response time distributions, status code breakdowns, and throughput metrics, all visualized in your Loggy dashboard.
End-to-End Encryption
For applications handling sensitive data, the Python SDK supports the same end-to-end encryption as our other SDKs. Your logs are encrypted on your server before they ever leave, using AES-256-GCM with RSA key exchange:
import requests

from loggy import Loggy

# Fetch the public key
response = requests.get("https://loggy.dev/api/logs/public-key")
public_key = response.json()["publicKey"]

logger = Loggy(
    identifier="secure-service",
    remote={
        "token": "...",
        "public_key": public_key,  # Enable encryption
    },
)

# All logs are now encrypted before leaving your system
logger.info("Sensitive operation", metadata={"user_ssn": "***-**-1234"})
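For a feel of how this envelope pattern generally works, here is a sketch using the third-party cryptography package. This is our illustration of the scheme, not the SDK's internals: each payload is sealed with a fresh AES-256-GCM data key, and that data key is what would then be wrapped with the fetched RSA public key (RSA wrapping omitted here):

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def seal_log(payload: bytes) -> tuple[bytes, bytes, bytes]:
    """Encrypt one log payload with a fresh AES-256-GCM data key.

    In the full envelope scheme, the returned data key would be wrapped
    with the server's RSA public key before transmission.
    """
    data_key = AESGCM.generate_key(bit_length=256)
    nonce = os.urandom(12)  # GCM's standard 96-bit nonce
    ciphertext = AESGCM(data_key).encrypt(nonce, payload, None)
    return data_key, nonce, ciphertext
```

A fresh key per payload means a compromised key exposes only a single log entry, and GCM's authentication tag lets the server reject tampered ciphertext.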
What’s Next
This is just the beginning for our Python SDK. We’re already working on Django middleware, automatic instrumentation for popular libraries like SQLAlchemy and Redis, and OpenTelemetry compatibility for teams already invested in that ecosystem.
The SDK is open source at github.com/loggy-dev/loggydev-py, and we welcome issues, feature requests, and contributions. We’ve put a lot of care into making this SDK feel Pythonic, but we know there’s always room for improvement.
Try It Today
The Python SDK is available now for all Loggy users. Free tier users can start logging immediately, while Pro and Team users get access to distributed tracing and performance metrics.
pip install loggydev-py
Happy logging! 🐍