LogVault User Guide

Overview

LogVault provides log ingestion, full-text search, real-time tailing, and pattern/rate/absence alerting.

Subdomain: logs.microgemlabs.ai

Getting Started

1. Enable LogVault in the Products page

2. Navigate to logs.microgemlabs.ai

3. Go to Streams → + New Stream to create your first log stream

4. Copy the API key and start sending logs (see Ingestion below)

Concepts

Streams – Logical groupings of log entries, typically one per application or environment. Each stream has its own API key and retention policy, and can have independent alert rules. Examples: api-production, worker-staging, nginx-access, payment-service.

Entries – Individual log records with a timestamp, level, message, source, and optional structured tags/metadata.

Alert Rules – Conditions that trigger on-call alerts when log patterns match: keyword/regex match, error rate spikes, or stream silence.

Log Ingestion

HTTP API

Send logs via POST with your stream's API key:

curl -X POST https://logs.microgemlabs.ai/api/logs/ingest \
  -H "Authorization: Bearer YOUR_STREAM_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "level": "error",
    "message": "Connection timeout to database",
    "source": "api-server-01",
    "tags": { "env": "production", "region": "us-east" },
    "metadata": { "query": "SELECT * FROM users", "duration_ms": 30000 }
  }'
Response: 202 Accepted with { "accepted": 1 }

Batch Ingestion

Send up to 1,000 entries per request for high-volume applications:

curl -X POST https://logs.microgemlabs.ai/api/logs/ingest \
  -H "Authorization: Bearer YOUR_STREAM_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "entries": [
      { "level": "info", "message": "Request handled", "source": "api-01" },
      { "level": "warn", "message": "Slow query: 2340ms", "source": "api-02" },
      { "level": "error", "message": "Redis connection refused", "source": "api-01" }
    ]
  }'
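
For high-volume producers, a client typically buffers entries and flushes them in chunks of at most 1,000 per request. Below is a minimal Python sketch of that chunking; the helper name and buffering policy are illustrative, not part of the API.

import requests

INGEST_URL = 'https://logs.microgemlabs.ai/api/logs/ingest'
API_KEY = 'YOUR_STREAM_API_KEY'  # per-stream key
MAX_BATCH = 1000                 # documented per-request limit

def send_batch(entries):
    # Split the buffer into chunks of up to 1,000 entries per request.
    for i in range(0, len(entries), MAX_BATCH):
        chunk = entries[i:i + MAX_BATCH]
        resp = requests.post(INGEST_URL,
            headers={'Authorization': f'Bearer {API_KEY}'},
            json={'entries': chunk},
            timeout=10)
        resp.raise_for_status()  # expect 202 Accepted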

Entry Fields

Field | Type | Required | Description
message | string | Yes | The log message (max 10,000 chars)
level | string | No | debug, info, warn, error, fatal (default: info)
timestamp | ISO 8601 | No | When the event occurred (default: server time)
source | string | No | Hostname, service name, or container ID
tags | object | No | Key-value pairs for structured filtering
metadata | object | No | Additional structured data (JSON)

Integration Examples

Node.js:
import os from 'node:os'

async function log(level, message, extra = {}) {
  await fetch('https://logs.microgemlabs.ai/api/logs/ingest', {
    method: 'POST',
    headers: {
      'Authorization': `Bearer ${process.env.LOGVAULT_API_KEY}`,
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({ level, message, source: os.hostname(), ...extra }),
  })
}

log('error', 'Payment failed', { tags: { customer: 'cus_123' }, metadata: { amount: 4999 } })
Python:
import os, socket
import requests

API_KEY = os.environ['LOGVAULT_API_KEY']

def log(level, message, **kwargs):
    requests.post('https://logs.microgemlabs.ai/api/logs/ingest',
        headers={'Authorization': f'Bearer {API_KEY}', 'Content-Type': 'application/json'},
        json={'level': level, 'message': message, 'source': socket.gethostname(), **kwargs},
        timeout=5)

log('error', 'Database connection failed', tags={'env': 'production'})

Search & Query

Log Viewer

The log viewer at logs.microgemlabs.ai/logs provides:

  • Full-text search – Boolean queries across message content (timeout AND database, error OR fatal)
  • Level filter – Show only entries at a specific level
  • Stream filter – Scope the search to a specific stream
  • Expandable entries – Click any entry to see its tags (as key=value pills) and metadata (JSON)
  • Live tail – Toggle real-time streaming of new entries via Server-Sent Events

Search API

GET /api/logs/search?q=timeout&level=error&stream=STREAM_ID&from=2026-04-20T00:00:00Z&limit=50
Parameters:
Param | Description
q | Full-text search query (uses PostgreSQL tsvector)
stream | Stream ID filter
level | Level filter (comma-separated: error,fatal)
source | Source filter (prefix match)
from | Start timestamp (ISO 8601)
to | End timestamp (ISO 8601)
tag.* | Tag filter (e.g., tag.env=production)
limit | Results per page (default: 100, max: 500)
cursor | Pagination cursor (last entry ID)
order | Sort: desc (newest first, default) or asc
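
To page through large result sets, pass the last entry's ID back as cursor until a page comes back smaller than limit. A Python sketch of that loop; the entries and id fields are assumptions about the response shape, and authenticating search with the stream API key is likewise assumed:

import requests

SEARCH_URL = 'https://logs.microgemlabs.ai/api/logs/search'

def search_all(params, api_key):
    # Collect every matching entry by following the pagination cursor.
    results, cursor = [], None
    while True:
        q = dict(params, limit=500)
        if cursor:
            q['cursor'] = cursor
        page = requests.get(SEARCH_URL,
            headers={'Authorization': f'Bearer {api_key}'},
            params=q,
            timeout=10).json()
        entries = page.get('entries', [])  # assumed response field
        results.extend(entries)
        if len(entries) < 500:
            return results
        cursor = entries[-1]['id']  # cursor = last entry ID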

Live Tail

GET /api/logs/tail?stream=STREAM_ID&level=error&q=timeout

Returns a Server-Sent Events stream. New entries matching your filters are pushed in real time.

const es = new EventSource('/api/logs/tail?stream=abc&level=error')
es.onmessage = (e) => console.log(JSON.parse(e.data))
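
Outside the browser, the same endpoint can be consumed with any HTTP client that speaks SSE. A hedged Python sketch using requests (the data: framing is standard SSE; authenticating the tail endpoint with the stream API key is an assumption):

import json
import requests

TAIL_URL = 'https://logs.microgemlabs.ai/api/logs/tail'

def tail(stream_id, api_key, **filters):
    # Yield parsed entries from the live-tail SSE stream as they arrive.
    params = dict(filters, stream=stream_id)
    with requests.get(TAIL_URL,
            headers={'Authorization': f'Bearer {api_key}'},
            params=params,
            stream=True,
            timeout=(5, None)) as resp:  # no read timeout: stream stays open
        for line in resp.iter_lines(decode_unicode=True):
            if line and line.startswith('data:'):
                yield json.loads(line[len('data:'):].strip())

for entry in tail('abc', 'YOUR_STREAM_API_KEY', level='error'):
    print(entry)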

Alert Rules

Pattern Match

Triggers when a log entry matches a regex or keyword pattern.

Configuration:
  • Pattern – Regex (e.g., FATAL|OutOfMemory|Segfault) or keyword
  • Level Filter – Optional: only match entries at a specific level
  • Stream – Optional: match in a specific stream or all streams

Example: Alert on any entry containing "FATAL" or "OutOfMemory" in the api-production stream.
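
As a rough illustration of how such a pattern behaves (LogVault's exact regex dialect isn't documented here; Python's re is used as a stand-in):

import re

pattern = re.compile(r'FATAL|OutOfMemory|Segfault')

for msg in ['java.lang.OutOfMemoryError: heap space',
            'Request handled in 12ms',
            'FATAL: could not open relation file']:
    print(msg, '->', bool(pattern.search(msg)))  # True, False, True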

Rate Threshold

Triggers when the count of matching entries exceeds a threshold within a time window.

Configuration:
  • Threshold – Number of entries (e.g., 50)
  • Window – Time period in minutes (e.g., 5)
  • Level Filter – Optional: only count entries at a specific level

Example: Alert when more than 50 error-level entries occur within 5 minutes across all streams.

Absence Detection

Triggers when a stream stops receiving entries for longer than expected.

Configuration:
  • Absence Minutes – How long the stream can be silent before alerting (e.g., 10)
  • Stream – Which stream to watch (required for absence rules)

Example: Alert if the payment-service stream has no entries for 10 minutes.

Deduplication

Alert rules have a 5-minute cooldown after firing. If the same rule matches again within 5 minutes, no duplicate incident is created. Active incidents are auto-resolved after 30 minutes with no new matches.
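
The behavior amounts to roughly the following sketch (illustrative only; the data structures and sweep loop are not LogVault internals):

from datetime import datetime, timedelta

COOLDOWN = timedelta(minutes=5)       # no duplicate incident within 5 minutes
AUTO_RESOLVE = timedelta(minutes=30)  # resolve after 30 minutes without matches

last_fired = {}      # rule_id -> when the rule last created an incident
open_incidents = {}  # rule_id -> time of the most recent match

def on_match(rule_id, now):
    # Record a match; create an incident only if the rule is outside cooldown.
    if rule_id in open_incidents:
        open_incidents[rule_id] = now   # refresh the auto-resolve clock
    elif now - last_fired.get(rule_id, datetime.min) >= COOLDOWN:
        last_fired[rule_id] = now
        open_incidents[rule_id] = now   # fire: create incident, page on-call

def sweep(now):
    # Auto-resolve incidents with no new matches for 30 minutes.
    for rule_id, last in list(open_incidents.items()):
        if now - last >= AUTO_RESOLVE:
            del open_incidents[rule_id]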

Stream Management

API Key – Each stream has a unique API key for authentication. Regenerate it at any time (the old key stops working immediately).

Retention – Set per-stream retention or use the plan default. Entries older than the retention period are automatically deleted by a daily cleanup job.

Pause/Resume – Paused streams reject new entries with 423 Locked.
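
Clients should treat 423 Locked as "stop sending", not as a transient error. A minimal Python sketch of that handling (the drop-or-buffer policy is an assumption, not prescribed by LogVault):

import requests

def ingest(entry, api_key):
    resp = requests.post('https://logs.microgemlabs.ai/api/logs/ingest',
        headers={'Authorization': f'Bearer {api_key}'},
        json=entry,
        timeout=5)
    if resp.status_code == 423:
        return False  # stream is paused: drop or buffer locally, don't retry hot
    resp.raise_for_status()
    return True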

Retention by Plan

Plan | Daily Ingest | Retention | Streams
Free | 100 MB/day | 3 days | 2
Pro | 1 GB/day | 7 days | 10
Team | 10 GB/day | 30 days | 50
Scale | 50 GB/day | 90 days | Unlimited
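
As a back-of-envelope ceiling (assuming the full daily quota is ingested every day), retained data tops out near ingest × retention: Pro at 1 GB/day × 7 days ≈ 7 GB, Team at 10 GB/day × 30 days ≈ 300 GB.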

Anomaly Detection

Beyond the manual threshold rules above, MicroGemAI's anomaly detection automatically tracks the hourly error rate (error + fatal entries) per stream against a 7-day rolling baseline. If the error rate spikes significantly above normal, even if you haven't set a rate threshold rule, an anomaly alert fires.

This is especially useful for streams where the "normal" error rate varies and a fixed threshold would either miss real spikes or fire on normal fluctuations. The anomaly engine learns what's normal for each stream individually.

A minimum absolute count of 5 errors is required to trigger an anomaly, preventing false positives on low-volume streams.
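
Conceptually, the check compares the current hour against the rolling baseline. A simplified Python sketch; the z-score form and the 3-sigma cutoff are assumptions, while the 7-day baseline and the minimum count of 5 come from this page:

import statistics

def is_anomalous(hourly_counts, current):
    # hourly_counts: error+fatal counts for this stream over the trailing
    # 7 days (168 hourly buckets); current: this hour's count.
    if current < 5:  # documented floor for low-volume streams
        return False
    mean = statistics.mean(hourly_counts)
    stdev = statistics.pstdev(hourly_counts) or 1.0
    return (current - mean) / stdev > 3.0  # "significantly above normal" (assumed cutoff)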

Maintenance Windows

Suppress LogVault alert rules during planned maintenance by creating a maintenance window (Ops → Maintenance). Log ingestion continues normally and entries are still recorded and searchable, but alert rules won't fire incidents or trigger on-call. Error rate data during maintenance windows is excluded from anomaly baselines.

Predictive Alerting

Extends anomaly detection with trend extrapolation. MicroGemAI uses linear regression over 48 hours of hourly error counts to predict future spikes. Example: "Error rate in payment-service increasing at 3.2/hr. Expected to hit 50/hr alert threshold in 4 hours." Enable it in Anomalies → Detection Settings → toggle Prediction on. Requires at least 24 data points and R² > 0.3 for reliable predictions.
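
A sketch of the extrapolation step (the least-squares fit via NumPy and the crossing arithmetic are assumptions; the 24-point minimum and R² > 0.3 gate come from this page):

import numpy as np

def predict_threshold_crossing(counts, threshold):
    # counts: up to 48 hourly error counts, oldest first.
    # Returns hours until `threshold` is reached, or None if no reliable trend.
    if len(counts) < 24:                 # documented minimum data points
        return None
    x = np.arange(len(counts), dtype=float)
    y = np.asarray(counts, dtype=float)
    slope, intercept = np.polyfit(x, y, 1)
    pred = slope * x + intercept
    ss_res = np.sum((y - pred) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    r2 = 1 - ss_res / ss_tot if ss_tot else 0.0
    if r2 <= 0.3 or slope <= 0:          # documented reliability gate
        return None
    hours = (threshold - (slope * x[-1] + intercept)) / slope
    return max(hours, 0.0)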

Runbook Actions

Define automated responses that run when log alerts fire (Skills → Runbook (/agent/skills?type=runbook)). Example: create a "Clear Application Cache" template that POSTs to your admin endpoint when LogVault detects a spike of cache-related errors. Set a trigger pattern like cache miss|redis connection on the LogVault product, and MicroGemAI will suggest or execute the action, based on trust level, when the pattern matches.

Postmortems

When log incidents resolve, MicroGemAI auto-generates a postmortem with the error pattern that triggered the alert, related entries from other streams, cross-product correlation (e.g., was an upstream service also failing?), and prioritized action items. Review and publish at Ops → Postmortems.