
Introducing API Tokens: Programmatic Access and MCP Integration

Create API tokens for programmatic access to Loggy. Perfect for CI/CD pipelines, automation scripts, and AI coding assistants via MCP. Manage heartbeats, status pages, and more without ever leaving your terminal.

Loggy Team · 5 min read

We’ve been thinking a lot about how developers actually work. You’re in your IDE, deep in flow state, building something. Then you need to set up monitoring for a new cron job. The old way: open a browser, navigate to the dashboard, click through forms, copy a token, paste it back into your code. The new way: ask your AI assistant to do it while you keep coding.

Today we’re launching API tokens for Loggy, and with them, full MCP (Model Context Protocol) server support. This means you can manage your entire Loggy setup—heartbeats, status pages, uptime monitors, feature flags, and more—directly from your AI coding assistant or any automation tool.

The Vision: Never Leave Your Editor

Picture this: You’re building a new background job that processes payments every hour. You write the code, and then you say to your AI assistant:

“Set up a heartbeat monitor for this job. It should expect a ping every hour with a 10-minute grace period. Name it ‘payment-processor’ and alert me if it goes down.”

Your assistant calls the Loggy MCP server, creates the heartbeat, and gives you the ping URL to add to your code. You never switched windows. You never broke your flow.

Or maybe you’re deploying a new microservice:

“Create an uptime monitor for api.myapp.com/health, check every 5 minutes, and add it to our public status page.”

Done. Your AI assistant handles the API calls, and you keep shipping.

How It Works

API tokens are scoped credentials that give programmatic access to your Loggy account. Each token has granular permissions—you decide exactly what it can read and write.

Creating a Token

Head to Settings → API Tokens in your Loggy dashboard. Click “Create Token” and configure:

  • Name: Something descriptive like “CI/CD Pipeline” or “MCP Server”
  • Permissions: Choose read-only or read/write access for each resource type
  • Expiration: Optional expiration date for extra security

When you create the token, you’ll see it once. Copy it immediately—we don’t store the plain text, only a hash.
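
Once you have the token, you pass it with every API request. A rough sketch of what that looks like (the Authorization header format and the /api/heartbeats path are assumptions for illustration; check the API reference for the exact endpoints):

// Keep the token in an environment variable rather than in source control.
const LOGGY_API_TOKEN = process.env.LOGGY_API_TOKEN; // e.g. "lgky_..."

// List the heartbeats this token is allowed to read.
const response = await fetch("https://loggy.dev/api/heartbeats", {
  headers: { Authorization: `Bearer ${LOGGY_API_TOKEN}` },
});
console.log(await response.json());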

Permission Scopes

Each token can have different access levels for different resources:

Resource           Read                          Write
Projects & Logs    View logs and project info    Create projects, ingest logs
Heartbeats         View heartbeat status         Create/update/delete heartbeats
Status Pages       View status pages             Create/update/delete pages
Uptime Monitors    View monitor status           Create/update/delete monitors
Feature Flags      Evaluate flags                Create/update/delete flags
Alerts             View alert history            Create/update alert rules
Tracing            View traces                   Ingest traces
Metrics            View metrics                  Ingest metrics

This granularity means you can create a read-only token for your monitoring dashboard, a write-only token for log ingestion, or a full-access token for your MCP server.
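
For example, a service that only ships logs needs nothing more than a write-scoped Projects & Logs token. A minimal sketch, assuming a hypothetical /api/logs ingestion endpoint and payload shape:

// This token has write access to Projects & Logs only, so it can ingest
// logs but cannot read them back or touch any other resource.
await fetch("https://loggy.dev/api/logs", {
  method: "POST",
  headers: {
    Authorization: `Bearer ${process.env.LOGGY_INGEST_TOKEN}`,
    "Content-Type": "application/json",
  },
  body: JSON.stringify({ level: "info", message: "checkout completed" }),
});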

MCP Integration

The Model Context Protocol is an open standard that lets AI assistants interact with external tools and services. With Loggy’s MCP server, your AI coding assistant becomes a full-fledged DevOps partner.

Setting Up the MCP Server

Add Loggy to your MCP configuration:

{
  "mcpServers": {
    "loggy": {
      "command": "npx",
      "args": ["@loggydev/mcp-server"],
      "env": {
        "LOGGY_API_TOKEN": "lgky_your_token_here"
      }
    }
  }
}

That’s it. Your AI assistant now has access to all the Loggy operations your token permits.

What You Can Do

With the Loggy MCP server, your AI assistant can:

Heartbeat Monitoring

  • Create heartbeats for new cron jobs
  • Check heartbeat status
  • Update expected intervals
  • Get ping URLs for your code

Status Pages

  • Create and configure status pages
  • Add monitors to pages
  • Update page branding
  • Check current status

Uptime Monitoring

  • Create URL monitors
  • Configure check intervals and expected status codes
  • View uptime history
  • Set up alert thresholds

Feature Flags

  • Create and toggle feature flags
  • Set up percentage rollouts
  • Evaluate flags for specific users

And More

  • View and search logs
  • Check alert history
  • Access performance metrics
  • Query distributed traces

Real-World Workflows

Here are some ways we’ve been using this internally:

Automated Heartbeat Setup

When we add a new scheduled job, we include the heartbeat setup right in the PR:

// In our cron job file
const HEARTBEAT_SLUG = "daily-report-generator";

async function runDailyReport() {
  try {
    await generateReport();
    // Ping Loggy only after a successful run; a missed ping past the
    // grace period is what triggers the heartbeat alert.
    await fetch(`https://loggy.dev/api/heartbeats/ping/${HEARTBEAT_SLUG}`);
  } catch (error) {
    // Log the failure and deliberately skip the ping so the alert fires.
    loggy.error("Daily report failed", { error });
  }
}

The AI assistant creates the heartbeat with matching settings when we describe the job’s schedule.
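
Under the hood this is just an MCP tool call. The actual tool name and argument schema come from the server’s tool listing, but conceptually the assistant sends something along these lines (the field names below are illustrative, not the server’s exact schema):

{
  "method": "tools/call",
  "params": {
    "name": "create_heartbeat",
    "arguments": {
      "name": "daily-report-generator",
      "interval": "24h",
      "grace_period": "1h"
    }
  }
}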

Status Page Updates During Incidents

During an incident, you can ask your AI assistant to update the status page without context-switching:

“Mark the API service as degraded on our status page with the message ‘Investigating elevated error rates’”

Feature Flag Rollouts

Rolling out a new feature? Your AI can help manage the rollout:

“Create a feature flag called ‘new-checkout-flow’ and set it to 10% rollout”

Then later:

“Increase the new-checkout-flow flag to 50%”
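
On the application side, your code evaluates the flag per user and Loggy applies the rollout percentage. A sketch, assuming a hypothetical /api/flags/evaluate endpoint and a currentUser object from your own app:

const response = await fetch("https://loggy.dev/api/flags/evaluate", {
  method: "POST",
  headers: {
    Authorization: `Bearer ${process.env.LOGGY_API_TOKEN}`,
    "Content-Type": "application/json",
  },
  // At a 10% rollout, roughly one in ten user IDs resolves to enabled: true.
  body: JSON.stringify({ flag: "new-checkout-flow", userId: currentUser.id }),
});
const { enabled } = await response.json();

if (enabled) {
  renderNewCheckoutFlow(); // hypothetical new code path
} else {
  renderLegacyCheckoutFlow(); // hypothetical existing code path
}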

Security Considerations

We take security seriously. Here’s how we’ve designed API tokens to be safe:

  • Hashed Storage: We only store a SHA-256 hash of your token. Even if our database were compromised, your tokens couldn’t be recovered (see the sketch after this list).
  • Granular Permissions: Tokens only have access to what you explicitly grant. A token for log ingestion can’t delete your status pages.
  • Expiration Dates: Set tokens to expire automatically. Great for temporary CI/CD access or contractor work.
  • Usage Tracking: See when each token was last used and how many requests it’s made. Spot anomalies quickly.
  • Instant Revocation: Revoke a token immediately if it’s compromised. The change takes effect instantly.
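
To make the hashed-storage point concrete, here’s a generic sketch of how verification works when only a digest is stored (not Loggy’s actual implementation):

import { createHash, timingSafeEqual } from "node:crypto";

// At creation time, hash the token and persist only the digest.
function hashToken(token) {
  return createHash("sha256").update(token).digest();
}

// On each request, hash the presented token and compare it to the stored
// digest in constant time. The plain-text token is never persisted.
function tokenMatches(presentedToken, storedDigest) {
  const digest = hashToken(presentedToken);
  return digest.length === storedDigest.length && timingSafeEqual(digest, storedDigest);
}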

Team Plan Feature

API tokens are available on the Team plan ($50/month). This includes:

  • Up to 10 API tokens per account
  • Full MCP server access
  • All permission scopes
  • Usage analytics and audit logs

We made this a Team feature because it’s designed for professional workflows where automation and AI assistance provide the most value.

Getting Started

  1. Upgrade to Team if you haven’t already (Settings → Billing)
  2. Create an API token (Settings → API Tokens)
  3. Configure your MCP server with the token
  4. Start automating your observability setup

What’s Next

This is just the beginning. We’re working on:

  • Webhook integrations: Trigger actions based on Loggy events
  • Terraform provider: Infrastructure-as-code for your Loggy setup
  • GitHub Actions: Native actions for common Loggy operations
  • More MCP capabilities: Deeper integration with trace analysis and log search

We’d love to hear how you’re using API tokens and MCP integration. What workflows would make your life easier? Let us know—we’re building this for you.

Happy automating! 🔑
