
Event Log Service

Centralized event logging with cursor pagination and ULID-based ordering

Node.js · Hono · PostgreSQL · Drizzle
Playground

Send events to the live Event Log Service and see them appear in real-time. The form sends a POST /events request, and the table fetches with GET /events.

The event feed also shows real traffic — every service in this lab logs events here, from page views on nrizky.com to user actions on this site. As more services are added, their events will appear in the feed too. Use the service filter to separate playground events from real ones.


Overview

I built this service after struggling to trace a bug at work — a coupon mysteriously attaching to an old purchase. We had application logs — the system was logging request details, errors, and timing. But when I needed to answer a business question (“how did this coupon end up on this old purchase?”), those logs couldn't give me a clear answer. Tracing the flow meant copying log strings and searching the codebase just to find where they were emitted. (You can read the full investigation story here — coming soon.)

That experience taught me that application logs and business event logs serve different purposes. Application logs track system behavior — request latency, errors, database health. They're built for engineers debugging infrastructure. Business event logs track domain actions — who did what, to what, and when. They're built for tracing flows across services.

What I needed was the latter. A structured record that could answer “show me every action that touched this purchase” with a simple query. I learned that production systems often separate these concerns: application logs go to aggregation tools like Datadog or ELK, while business events get their own dedicated, structured store — making it possible to trace cross-service flows without digging through unstructured log strings.

This Event Log Service is my implementation of that idea: a centralized event store with a consistent schema (service, event type, actor, resource, timestamp) that any service can write to. It's also part of a larger learning journey into building reliable, production-grade backend systems.

How It Works

When you hit “Send Event” in the playground above, here's the journey that request takes:

The form sends a POST request to this website's API route — not directly to the Event Log Service. The client service acts as a proxy, attaching the API key server-side so it's never exposed to the browser. This is the same pattern you'd see in any production frontend that talks to an authenticated backend.
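The core of that proxy pattern can be sketched as a function that rebuilds the outgoing request with the secret attached. This is an illustration, not the site's actual code — the upstream URL and the way the key is passed in are assumptions:

```typescript
// Sketch of the proxy idea: the key never reaches the browser because it is
// injected here, on the server. URL and key source are illustrative.
function buildUpstreamRequest(body: unknown, apiKey: string): Request {
  return new Request("https://events.example.com/events", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "X-API-KEY": apiKey, // attached server-side only
    },
    body: JSON.stringify(body),
  });
}
```

A route handler would read the key from its environment, call this, and forward the upstream response back to the browser unchanged.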

Browser: playground form
  ↓
Client Service: API proxy (hides the API key)
  ↓
Auth Middleware: API key hash validation
  ↓
Validation: Zod schema at the HTTP boundary
  ↓
Service Layer: ULID generation, service-match check
  ↓
Repository: stores the structured event
  ↓
PostgreSQL: persistent storage

From there, the request reaches the Event Log Service, where it passes through several layers:

Authentication — The API key is validated by hashing it and matching against stored hashes. Plaintext keys are never stored in the database.

Validation — The request body is checked against a strict schema at the HTTP boundary. Fields like event type and service are required. Actor and resource are optional but validated as pairs — if you provide an actor type, you must also provide an actor ID.

Service Logic — A ULID is generated as the event ID. ULIDs are time-sortable, which means the most recent events naturally sort first — no need for a separate timestamp index. The service also checks that the service name in the payload matches the authenticated API key's service, preventing one service from writing events as another.

Storage — The event is stored with a consistent structure: service, event type, actor, resource, timestamp, and optional metadata. This structure is what makes the Event Feed queryable — you can filter by any of these fields and trace actions across services.

The response flows back through the proxy to the browser, and the table refreshes to show the new event. When you use the filters or click “Load more,” the same proxy pattern applies — the GET request passes through with cursor-based pagination, which performs consistently regardless of how deep you page into the data.

Design Decisions

ULID over UUIDv4 for Event IDs

ULIDs are time-sortable and lexicographically ordered. UUIDv4 is random, which means you'd need a separate timestamp column and index for ordering. With ULIDs, cursor pagination is just a string comparison — no extra index needed. Auto-increment was also an option, but it leaks information about volume and doesn't work well across distributed systems.
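To make the time-sortable property concrete, here is a stripped-down ULID generator — a 48-bit timestamp encoded in Crockford base32, followed by randomness. This is an illustration of why lexicographic order matches time order; a real service would use an established library such as the `ulid` package:

```typescript
// Crockford base32 alphabet (no I, L, O, U) — sorted, so encoded strings
// compare in the same order as the numbers they encode.
const ENCODING = "0123456789ABCDEFGHJKMNPQRSTVWXYZ";

// Encode a millisecond timestamp as a fixed-width base32 string.
function encodeTime(ms: number, len: number): string {
  let out = "";
  for (let i = 0; i < len; i++) {
    out = ENCODING[ms % 32] + out;
    ms = Math.floor(ms / 32);
  }
  return out;
}

// Random tail: makes IDs generated in the same millisecond unique.
function encodeRandom(len: number): string {
  let out = "";
  for (let i = 0; i < len; i++) {
    out += ENCODING[Math.floor(Math.random() * 32)];
  }
  return out;
}

// 10 chars of time (48 bits) + 16 chars of randomness (80 bits) = 26 chars.
function ulid(ms: number = Date.now()): string {
  return encodeTime(ms, 10) + encodeRandom(16);
}
```

Because the timestamp comes first and the alphabet is sorted, plain string comparison (`a < b`) orders events by creation time — which is exactly what cursor pagination relies on.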

Cursor Pagination over Offset

Offset pagination degrades as you go deeper — OFFSET 10000 still scans 10,000 rows before returning results. Cursor pagination uses the ULID's ordering to pick up exactly where the last page left off, giving constant performance regardless of depth.
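The equivalent SQL is roughly `WHERE id < $cursor ORDER BY id DESC LIMIT $n`. An in-memory sketch (not the service's actual repository code) shows the mechanics, including how `nextCursor` is derived:

```typescript
interface Event {
  id: string; // ULID — lexicographic order is time order
  eventType: string;
}

// Keyset pagination: newest first, resume strictly before the cursor.
// Mirrors: SELECT ... WHERE id < $cursor ORDER BY id DESC LIMIT $n
function pageEvents(events: Event[], limit: number, cursor?: string) {
  const sorted = [...events].sort((a, b) => (a.id < b.id ? 1 : -1)); // newest first
  const filtered = cursor ? sorted.filter((e) => e.id < cursor) : sorted;
  const page = filtered.slice(0, limit);
  // More rows remain beyond this page → hand back the last id as the cursor.
  const nextCursor = filtered.length > limit ? page[page.length - 1].id : null;
  return { events: page, nextCursor };
}
```

Each page starts from a string comparison on an indexed primary key, so page 200 costs the same as page 1 — the database never re-scans the rows already paged past.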

Separating eventTimestamp from createdAt

eventTimestamp is client-provided (when the event actually occurred), createdAt is server-generated (when it was recorded). In distributed systems, events can arrive out of order — a service might buffer events or retry after a failure. Without this separation, you'd lose the real timeline of what happened.
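A small illustration (with made-up events) of why the two timestamps diverge, and how sorting on `eventTimestamp` recovers the real timeline:

```typescript
interface StoredEvent {
  label: string;
  eventTimestamp: string; // client-provided: when the action occurred
  createdAt: string;      // server-generated: when the row was written
}

// A buffered retry arrives late: it was recorded AFTER a newer event.
const events: StoredEvent[] = [
  { label: "payment.captured", eventTimestamp: "2026-04-08T12:05:00Z", createdAt: "2026-04-08T12:05:01Z" },
  { label: "order.created",    eventTimestamp: "2026-04-08T12:00:00Z", createdAt: "2026-04-08T12:06:30Z" },
];

// Arrival order (createdAt) is wrong for tracing; eventTimestamp restores
// the actual sequence of what happened.
const timeline = [...events].sort((a, b) =>
  a.eventTimestamp.localeCompare(b.eventTimestamp)
);
```

With only a single server timestamp, the retried `order.created` would appear to have happened after the payment it caused.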

SHA-256 Hashing for API Keys

Plaintext keys are never stored in the database. On each request, the provided key is hashed and compared against the stored hash. If the database is compromised, the attacker gets hashes, not usable keys.
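The hash-and-compare step fits in a few lines with Node's built-in crypto module. A sketch of the approach (function names are illustrative; `timingSafeEqual` is used so the comparison doesn't leak information through timing):

```typescript
import { createHash, timingSafeEqual } from "node:crypto";

// Only this digest is stored in the database, never the key itself.
function hashKey(key: string): string {
  return createHash("sha256").update(key).digest("hex");
}

// On each request: hash the provided key, compare in constant time.
function keyMatches(providedKey: string, storedHash: string): boolean {
  const provided = Buffer.from(hashKey(providedKey), "hex");
  const stored = Buffer.from(storedHash, "hex");
  return provided.length === stored.length && timingSafeEqual(provided, stored);
}
```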

Zod Validation at the HTTP Boundary

Request validation happens in middleware, before reaching business logic. Internal types are plain TypeScript interfaces. I considered validating inside the service layer, but that would mix input parsing with business rules. Keeping Zod at the edge means the service layer can trust its inputs are already clean.
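The shape of that boundary — parse untrusted input once, hand a plain typed object inward — can be sketched without Zod (the real service uses a Zod schema; the names here are illustrative):

```typescript
// Plain interface: what the service layer receives. No validation logic inside.
interface CreateEventInput {
  eventType: string;
  service: string;
  eventTimestamp: string;
}

// Boundary parser: the only place that touches `unknown`. Throws with a
// clear message; downstream code can trust the result is already clean.
function parseCreateEvent(body: unknown): CreateEventInput {
  const b = (body ?? {}) as Record<string, unknown>;
  for (const field of ["eventType", "service", "eventTimestamp"] as const) {
    if (typeof b[field] !== "string") throw new Error(`${field} is required`);
  }
  return {
    eventType: b.eventType as string,
    service: b.service as string,
    eventTimestamp: b.eventTimestamp as string,
  };
}
```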

Actor/Resource Pair Validation in Zod, Not the Database

If you provide an actor type, you must also provide an actor ID. I enforced this in the validation schema rather than with database constraints. Database-level CHECK constraints would catch it too, but the error messages would be cryptic and the feedback loop slower. Validating early gives clear, actionable error responses.
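The pair rule itself is a one-line predicate — in Zod it would live in a `.refine()` on the schema; stripped of the library, the logic is just:

```typescript
type Ref = { type: string; id: string };

// Actor and resource are optional, but if present must carry BOTH fields.
// Omitting the field entirely is valid; a half-filled pair is not.
function validPair(ref: Partial<Ref> | undefined): boolean {
  if (ref === undefined) return true;
  return typeof ref.type === "string" && typeof ref.id === "string";
}
```

Compare the failure modes: this returns a field-level validation error the client can act on, while a database CHECK constraint would surface as a generic insert failure deep in the request cycle.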

API Reference

POST /events

Create a new event. Requires an X-API-KEY header. The service name in the payload must match the authenticated key's service.

{
  "eventType": "user.login",       // required
  "service": "my-service",         // required, must match API key
  "eventTimestamp": "2026-04-08T12:00:00Z",  // required, ISO 8601
  "actor": { "type": "user", "id": "user-101" },  // optional
  "resource": { "type": "order", "id": "order-5001" },  // optional
  "metadata": { "ip": "1.2.3.4" } // optional
}
GET /events

Query events with optional filters. Supports cursor-based pagination.

Query Parameters:
  service       - filter by service name
  eventType     - filter by event type
  actorId       - filter by actor ID
  resourceId    - filter by resource ID
  from          - ISO 8601 start time
  to            - ISO 8601 end time
  limit         - 1-100, default 50
  cursor        - ULID cursor for pagination

Response:
{
  "data": {
    "events": [...],
    "nextCursor": "01ARZ3NDEKTSV4RRFFQ69G5FAE"  // null if no more
  }
}
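A typical consumer loops on `nextCursor` until the server returns null. A sketch of that drain loop — `fetchPage` is a stand-in for an actual authenticated GET /events call:

```typescript
type Page = { events: string[]; nextCursor: string | null };

// Keep requesting pages, feeding nextCursor back in, until the server
// signals the end with nextCursor: null.
async function fetchAll(
  fetchPage: (cursor?: string) => Promise<Page>
): Promise<string[]> {
  const all: string[] = [];
  let cursor: string | undefined;
  do {
    const page = await fetchPage(cursor);
    all.push(...page.events);
    cursor = page.nextCursor ?? undefined;
  } while (cursor !== undefined);
  return all;
}
```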
GET /health

Health check. Returns 200 if the database is reachable, 503 otherwise.