Overview
Log ingestion, full-text search, real-time tailing, and pattern/rate/absence alerting.
Subdomain: logs.microgemlabs.ai
Getting Started
1. Enable LogVault in the Products page
2. Navigate to logs.microgemlabs.ai
3. Go to Streams → + New Stream to create your first log stream
4. Copy the API key and start sending logs (see Ingestion below)
Concepts
Streams – Logical groupings of related log entries, each with its own API key. Examples: api-production, worker-staging, nginx-access, payment-service.
Entries – Individual log records with timestamp, level, message, source, and optional structured tags/metadata.
Alert Rules – Conditions that trigger on-call alerts when log patterns match: keyword/regex match, error rate spikes, or stream silence.
Log Ingestion
HTTP API
Send logs via POST with your stream's API key:
```bash
curl -X POST https://logs.microgemlabs.ai/api/logs/ingest \
  -H "Authorization: Bearer YOUR_STREAM_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "level": "error",
    "message": "Connection timeout to database",
    "source": "api-server-01",
    "tags": { "env": "production", "region": "us-east" },
    "metadata": { "query": "SELECT * FROM users", "duration_ms": 30000 }
  }'
```
Response: 202 Accepted with { "accepted": 1 }
Batch Ingestion
Send up to 1,000 entries per request for high-volume applications:
```bash
curl -X POST https://logs.microgemlabs.ai/api/logs/ingest \
  -H "Authorization: Bearer YOUR_STREAM_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "entries": [
      { "level": "info", "message": "Request handled", "source": "api-01" },
      { "level": "warn", "message": "Slow query: 2340ms", "source": "api-02" },
      { "level": "error", "message": "Redis connection refused", "source": "api-01" }
    ]
  }'
```
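Since each batch request carries at most 1,000 entries, a high-volume client needs to split larger buffers into chunks before posting. A minimal Python sketch of that loop (the `chunk_entries` and `send_batches` helpers are illustrative, not part of any SDK):

```python
import json
from urllib import request

INGEST_URL = "https://logs.microgemlabs.ai/api/logs/ingest"

def chunk_entries(entries, size=1000):
    """Split a list of log entries into batches no larger than `size`."""
    return [entries[i:i + size] for i in range(0, len(entries), size)]

def send_batches(entries, api_key):
    """POST each chunk as one batch-ingest request (expects 202 Accepted)."""
    for batch in chunk_entries(entries):
        req = request.Request(
            INGEST_URL,
            data=json.dumps({"entries": batch}).encode(),
            headers={
                "Authorization": f"Bearer {api_key}",
                "Content-Type": "application/json",
            },
        )
        request.urlopen(req)
```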
Entry Fields
| Field | Type | Required | Description |
|---|---|---|---|
| message | string | Yes | The log message (max 10,000 chars) |
| level | string | No | debug, info, warn, error, fatal (default: info) |
| timestamp | ISO 8601 | No | When the event occurred (default: server time) |
| source | string | No | Hostname, service name, or container ID |
| tags | object | No | Key-value pairs for structured filtering |
| metadata | object | No | Additional structured data (JSON) |
Integration Examples
Node.js:

```javascript
import os from 'node:os'

async function log(level, message, extra = {}) {
  await fetch('https://logs.microgemlabs.ai/api/logs/ingest', {
    method: 'POST',
    headers: {
      'Authorization': `Bearer ${process.env.LOGVAULT_API_KEY}`,
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({ level, message, source: os.hostname(), ...extra }),
  })
}

log('error', 'Payment failed', { tags: { customer: 'cus_123' }, metadata: { amount: 4999 } })
```
Python:

```python
import requests, socket

def log(level, message, **kwargs):
    requests.post('https://logs.microgemlabs.ai/api/logs/ingest',
        headers={'Authorization': f'Bearer {API_KEY}', 'Content-Type': 'application/json'},
        json={'level': level, 'message': message, 'source': socket.gethostname(), **kwargs},
        timeout=5)

log('error', 'Database connection failed', tags={'env': 'production'})
```
Search & Query
Log Viewer
The log viewer at logs.microgemlabs.ai/logs provides:
- Full-text search – Boolean queries across message content (timeout AND database, error OR fatal)
- Level filter – Show only entries at a specific level
- Stream filter – Scope search to a specific stream
- Expandable entries – Click any entry to see its tags (as key=value pills) and metadata (JSON)
- Live tail – Toggle real-time streaming of new entries via Server-Sent Events
Search API
GET /api/logs/search?q=timeout&level=error&stream=STREAM_ID&from=2026-04-20T00:00:00Z&limit=50
Parameters:
| Param | Description |
|---|---|
| q | Full-text search query (uses PostgreSQL tsvector) |
| stream | Stream ID filter |
| level | Level filter (comma-separated: error,fatal) |
| source | Source filter (prefix match) |
| from | Start timestamp (ISO 8601) |
| to | End timestamp (ISO 8601) |
| tag.* | Tag filter (e.g., tag.env=production) |
| limit | Results per page (default: 100, max: 500) |
| cursor | Pagination cursor (last entry ID) |
| order | Sort: desc (newest first, default) or asc |
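Because limit caps out at 500, retrieving a large result set means following the cursor parameter page by page. A sketch of that loop, assuming the response body is `{"entries": [...]}` with each entry carrying an `id` (the docs specify the cursor as the last entry ID but don't document the response shape):

```python
import json
from urllib import parse, request

SEARCH_URL = "https://logs.microgemlabs.ai/api/logs/search"

def next_cursor(page):
    """The cursor for the next page is the last entry's ID (per the docs)."""
    return page[-1]["id"] if page else None

def search_all(api_key, **params):
    """Yield every matching entry, one page at a time."""
    cursor = None
    while True:
        query = dict(params, limit=500)
        if cursor:
            query["cursor"] = cursor
        req = request.Request(
            f"{SEARCH_URL}?{parse.urlencode(query)}",
            headers={"Authorization": f"Bearer {api_key}"},
        )
        page = json.load(request.urlopen(req)).get("entries", [])
        if not page:
            return
        yield from page
        cursor = next_cursor(page)
```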
Live Tail
GET /api/logs/tail?stream=STREAM_ID&level=error&q=timeout
Returns a Server-Sent Events stream. New entries matching your filters are pushed in real time.
```javascript
const es = new EventSource('/api/logs/tail?stream=abc&level=error')
es.onmessage = (e) => console.log(JSON.parse(e.data))
```
Alert Rules
Pattern Match
Triggers when a log entry matches a regex or keyword pattern.
Configuration:
- Pattern – Regex (e.g., FATAL|OutOfMemory|Segfault) or keyword
- Level Filter – Optional: only match entries at a specific level
- Stream – Optional: match in a specific stream or all streams
Rate Threshold
Triggers when the count of matching entries exceeds a threshold within a time window.
Configuration:
- Threshold – Number of entries (e.g., 50)
- Window – Time period in minutes (e.g., 5)
- Level Filter – Optional: only count entries at a specific level
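Conceptually, a rate-threshold rule is a count over a trailing time window. A sketch of the check (we read "exceeds" as a strict comparison; the actual server-side evaluation isn't documented):

```python
from datetime import datetime, timedelta

def rate_exceeded(timestamps, threshold, window_minutes, now=None):
    """True when the number of entry timestamps inside the trailing window
    strictly exceeds the threshold (e.g. more than 50 entries in 5 minutes)."""
    now = now or datetime.utcnow()
    cutoff = now - timedelta(minutes=window_minutes)
    return sum(1 for t in timestamps if t >= cutoff) > threshold
```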
Absence Detection
Triggers when a stream stops receiving entries for longer than expected.
Configuration:
- Absence Minutes – How long the stream can be silent before alerting (e.g., 10)
- Stream – Which stream to watch (required for absence rules)
Example: alert when the payment-service stream has no entries for 10 minutes.
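The absence check itself is just a comparison against the stream's last-seen timestamp. A sketch, assuming the rule is evaluated periodically:

```python
from datetime import datetime, timedelta

def stream_silent(last_entry_at, absence_minutes, now=None):
    """True when the stream has received nothing for longer than the
    configured absence window (e.g. 10 minutes)."""
    now = now or datetime.utcnow()
    return now - last_entry_at > timedelta(minutes=absence_minutes)
```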
Deduplication
Alert rules have a 5-minute cooldown after firing. If the same rule matches again within 5 minutes, no duplicate incident is created. Active incidents are auto-resolved after 30 minutes with no new matches.
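The cooldown behaviour can be pictured as a per-rule timestamp check. A sketch of the 5-minute deduplication (the in-memory dict is illustrative; the real implementation isn't documented):

```python
from datetime import datetime, timedelta

COOLDOWN = timedelta(minutes=5)

def should_fire(rule_id, now, last_fired):
    """Fire only if the rule has not fired within the cooldown window;
    records the new firing time in `last_fired` when it does fire."""
    prev = last_fired.get(rule_id)
    if prev is not None and now - prev < COOLDOWN:
        return False  # duplicate match within cooldown: suppress
    last_fired[rule_id] = now
    return True
```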
Stream Management
423 Locked.
Retention by Plan
| Plan | Daily Ingest | Retention | Streams |
|---|---|---|---|
| Free | 100 MB/day | 3 days | 2 |
| Pro | 1 GB/day | 7 days | 10 |
| Team | 10 GB/day | 30 days | 50 |
| Scale | 50 GB/day | 90 days | Unlimited |
Anomaly Detection
Beyond the manual threshold rules above, MicroGemAI's anomaly detection automatically tracks the hourly error rate (error + fatal entries) per stream against a 7-day rolling baseline. If the error rate spikes significantly above normal – even if you haven't set a rate threshold rule – an anomaly alert fires.
This is especially useful for streams where the "normal" error rate varies and a fixed threshold would either miss real spikes or fire on normal fluctuations. The anomaly engine learns what's normal for each stream individually.
A minimum absolute count of 5 errors is required to trigger an anomaly, preventing false positives on low-volume streams.
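One way to picture the baseline comparison: keep the previous 7 days of hourly error counts and flag the current hour when it sits well above them. The 3-sigma cutoff below is our assumption; the docs only say "significantly above normal" and require a minimum of 5 errors:

```python
from statistics import mean, stdev

MIN_ERRORS = 5  # minimum absolute count, per the docs

def is_anomalous(hourly_counts, current, sigmas=3.0):
    """Compare the current hour's error count against a rolling baseline
    (e.g. the previous 7 days = 168 hourly counts)."""
    if current < MIN_ERRORS or len(hourly_counts) < 2:
        return False
    baseline = mean(hourly_counts)
    spread = stdev(hourly_counts)
    # 3-sigma spike heuristic -- an assumption, not the documented algorithm
    return current > baseline + sigmas * spread
```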
Maintenance Windows
Suppress LogVault alert rules during planned maintenance by creating a maintenance window (Ops → Maintenance). Log ingestion continues normally and entries are still recorded and searchable, but alert rules won't fire incidents or trigger on-call. Error rate data during maintenance windows is excluded from anomaly baselines.
Predictive Alerting
Extends anomaly detection with trend extrapolation. MicroGemAI uses linear regression over 48 hours of hourly error counts to predict future spikes. Example: "Error rate in payment-service increasing at 3.2/hr. Expected to hit 50/hr alert threshold in 4 hours." Enable in Anomalies → Detection Settings → toggle Prediction on. Requires at least 24 data points and R² > 0.3 for reliable predictions.
Runbook Actions
Define automated responses for when log alerts fire (Skills → Runbook (/agent/skills?type=runbook)). Example: create a "Clear Application Cache" template that POSTs to your admin endpoint when LogVault detects a spike of cache-related errors. Set a trigger pattern like cache miss|redis connection on the LogVault product, and MicroGemAI will suggest or execute the action based on trust level when the pattern matches.
Postmortems
When log incidents resolve, MicroGemAI auto-generates a postmortem with the error pattern that triggered the alert, related entries from other streams, cross-product correlation (e.g., was an upstream service also failing?), and prioritized action items. Review and publish at Ops → Postmortems.