Variant Systems

Logging & Tracing Vibe Code Cleanup

AI filled your codebase with console.log statements. They work for debugging. They're useless for production operations.

At Variant Systems, we pair the right technology with the right approach to ship products that work.

Why this cleanup matters

  • AI generates console.log/print statements instead of structured logging
  • No log levels - everything logs at the same priority
  • No request context - logs can't be correlated to specific user actions
  • Sensitive data logged without redaction

What AI Gets Wrong in Logging

AI generates console.log("Processing order") and calls it logging. No structure. No severity level. No context. No correlation ID. When this runs in production with 100 concurrent requests, the log output is an interleaved mess of identical messages with no way to tell which request generated which line.

Error handling from AI typically logs console.error(error.message) - discarding the stack trace, request context, and any data that would help debug the issue. Something failed. Good luck figuring out what, where, and why from a one-line message in a sea of identical error strings.
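A minimal sketch of what error logging should preserve instead (the `logError` helper and its field names are illustrative, not a specific library's API):

```typescript
// Illustrative helper: log an error with its stack trace and request context intact.
function logError(error: Error, context: Record<string, unknown>): string {
  const entry = {
    level: "error",
    timestamp: new Date().toISOString(),
    message: error.message,
    stack: error.stack, // keep the stack trace instead of discarding it
    ...context,         // requestId, orderId, and any other debugging context
  };
  return JSON.stringify(entry);
}

// Compare: console.error(error.message) would emit only "payment declined".
const line = logError(new Error("payment declined"), { requestId: "req-123", orderId: 42 });
console.log(line);
```

One JSON line now answers what failed, where, and for which request.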

AI also logs things it shouldn’t. Request bodies containing passwords. Response payloads with user data. API keys passed as parameters. The logging is simultaneously too noisy (logging everything at the same level) and too sparse (missing the context that would actually be useful).
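One common fix is a redaction pass over log data before it is written. A simplified sketch (the key list and the `redact` helper are illustrative; a real setup would hook this into the logger's serializers):

```typescript
// Illustrative deny-list of field names that must never reach log output.
const SENSITIVE_KEYS = new Set(["password", "apiKey", "authorization", "ssn"]);

// Recursively mask sensitive fields in any log payload.
function redact(value: unknown): unknown {
  if (Array.isArray(value)) return value.map(redact);
  if (value !== null && typeof value === "object") {
    return Object.fromEntries(
      Object.entries(value as Record<string, unknown>).map(
        ([k, v]): [string, unknown] =>
          SENSITIVE_KEYS.has(k) ? [k, "[REDACTED]"] : [k, redact(v)]
      )
    );
  }
  return value;
}

const safe = redact({ user: "ada", password: "hunter2", profile: { apiKey: "secret-key" } });
console.log(JSON.stringify(safe));
```

The masking happens at write time, so even a careless log call deep in the codebase cannot leak a credential.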

Our Logging Cleanup Process

We replace unstructured logging with a structured logger - Winston, Pino, or the framework’s built-in structured logging. Every log entry is JSON with consistent fields: timestamp, level, service, requestId, message, and contextual data. This makes logs searchable, filterable, and aggregatable.
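A minimal sketch of that entry shape, independent of any particular library (the `logInfo` helper is illustrative; the fields match the list above):

```typescript
// The consistent shape every log entry shares.
interface LogEntry {
  timestamp: string;
  level: "debug" | "info" | "warn" | "error";
  service: string;
  requestId: string;
  message: string;
  [key: string]: unknown; // contextual data specific to this event
}

// Illustrative helper producing one JSON log line.
function logInfo(
  service: string,
  requestId: string,
  message: string,
  data: Record<string, unknown> = {}
): string {
  const entry: LogEntry = {
    timestamp: new Date().toISOString(),
    level: "info",
    service,
    requestId,
    message,
    ...data,
  };
  return JSON.stringify(entry);
}

console.log(logInfo("orders", "req-42", "order created", { orderId: 1007 }));
```

Because every entry carries the same fields, a query like "all errors from the orders service for request req-42" becomes a simple filter instead of a grep expedition.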

Log levels get assigned properly. Debug for development diagnostics. Info for normal operations (request received, job completed). Warn for unexpected but handled situations (retry attempt, fallback used). Error for failures that need attention. Production runs at info level; debug output stays in development.
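The threshold logic behind this is simple. A sketch, assuming the usual numeric ordering of levels (the `shouldLog` helper is illustrative):

```typescript
// Levels in ascending severity; the configured threshold drops everything below it.
const LEVELS = ["debug", "info", "warn", "error"] as const;
type Level = (typeof LEVELS)[number];

// A line is emitted only if its level is at or above the configured threshold.
function shouldLog(level: Level, threshold: Level): boolean {
  return LEVELS.indexOf(level) >= LEVELS.indexOf(threshold);
}
```

With the threshold set to "info" in production, debug output costs nothing there while remaining available in development.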

Request IDs are generated at the entry point and propagated through every function call, async operation, and service interaction. Every log line for a specific request shares the same ID. Debugging goes from “search for the error” to “filter by request ID and read the story.”
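In Node.js this propagation can use the built-in AsyncLocalStorage, so the ID follows async calls without being threaded through every function signature. A simplified sketch (the `log` and `handleRequest` helpers are illustrative):

```typescript
import { AsyncLocalStorage } from "node:async_hooks";
import { randomUUID } from "node:crypto";

// Context store: any code running inside a request sees that request's ID.
const requestContext = new AsyncLocalStorage<{ requestId: string }>();

// Every log call picks up the current request's ID automatically.
function log(message: string): string {
  const requestId = requestContext.getStore()?.requestId ?? "no-request";
  return JSON.stringify({ requestId, message });
}

// At the entry point (e.g. middleware), generate an ID and run the handler inside it.
function handleRequest(handler: () => void): void {
  requestContext.run({ requestId: randomUUID() }, handler);
}
```

Every log line inside one `handleRequest` call shares the same ID; a second request gets a fresh one.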

Distributed Tracing Integration

For applications with multiple services or external API dependencies, we go beyond logging and implement distributed tracing using OpenTelemetry. Each incoming request generates a trace ID that propagates across HTTP calls, message queues, and background jobs. Spans capture timing data for every operation: database queries, cache lookups, third-party API calls. When a request is slow, the trace shows exactly which operation took the time, instead of requiring guesswork across multiple log streams.

We instrument the most critical paths first and configure sampling rates that balance observability with storage costs. Traces feed into Grafana Tempo, Jaeger, or your existing observability platform alongside the structured logs, giving your team a complete picture of request behavior in production.
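Conceptually, a span is just a named, timed operation tagged with its trace ID. A toy sketch of that idea (not the OpenTelemetry API; `withSpan` is illustrative):

```typescript
// A span records which operation ran, under which trace, and for how long.
interface Span {
  traceId: string;
  name: string;
  durationMs: number;
}

// Run an operation and record a span for it, even if the operation throws.
function withSpan<T>(traceId: string, name: string, spans: Span[], fn: () => T): T {
  const start = Date.now();
  try {
    return fn();
  } finally {
    spans.push({ traceId, name, durationMs: Date.now() - start });
  }
}

// One request: the shared trace ID ties the database query and the API call together.
const spans: Span[] = [];
withSpan("trace-1", "db.query", spans, () => 42);
withSpan("trace-1", "api.call", spans, () => "ok");
```

Sorting a request's spans by duration answers "which operation took the time" directly, which is exactly the question unstructured logs cannot answer.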

Before and After

Before: Thousands of console.log statements generating unstructured text. Debugging production issues means SSHing into the server, tailing the log file, and hoping. Sensitive data scattered throughout log output. No way to track a single request through the application.

After: Structured JSON logs flowing to a central system. Every entry has context. Filter by request ID to see the complete request lifecycle. Filter by error level to see all failures. PII is redacted. Alerts fire on error patterns. Production debugging takes minutes, not hours.

What you get

  • Structured logging implementation replacing console.log/print statements
  • Log level strategy (debug, info, warn, error) with proper usage
  • Request ID propagation for correlating logs to user actions
  • Sensitive data redaction in log output
  • Centralized log collection setup
  • Log-based alerting for error patterns

Ideal for

  • AI-built applications with console.log as the only logging
  • Products heading to production that need proper log infrastructure
  • Teams debugging production issues by SSH-ing into servers
  • Applications with sensitive data that might be appearing in logs


Ready to build?

Tell us about your project and we'll figure out how we can help.

Get in touch