- Catalog reads — answered from a continuously-ingested Postgres catalog in ~10 ms per request.
- Live pass-through — proxied straight to the underlying venue when the catalog can’t safely answer.
The decision tree
Both paths return the same response envelope ({success, data}), so SDK clients
cannot tell the difference.
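The decision tree above can be sketched as a single routing function. This is an illustrative sketch only, not the actual pmxt-core API: handle_request, catalog_read, and live_proxy are hypothetical names, and logging is elided.

```python
# Hypothetical sketch of the intercept's decision tree. Function and
# parameter names are illustrative, not the real pmxt-core interface.

def handle_request(body, catalog_read, live_proxy):
    """Answer from the catalog when it is safe to do so; otherwise
    fall through to the live venue. Both paths return the same
    {success, data} envelope."""
    # User-scoped requests (credentials in the body) always go live:
    # the catalog only holds the public, shared view of each venue.
    if "credentials" in body:
        return live_proxy(body)
    try:
        data = catalog_read(body)
    except Exception:
        # The catalog is best-effort: on any DB error, fall through
        # (logging elided) so pmxt-core answers from the venue directly.
        return live_proxy(body)
    return {"success": True, "data": data}
```

Because every branch yields the same envelope shape, a caller cannot tell a catalog hit from a live pass-through.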
What the catalog contains
The catalog is three tables in the prediction_markets schema:

- events — one row per event per venue, with a source_exchange tag.
- markets — one row per market per venue, foreign-keyed to events.
- outcomes — one row per outcome per market.
Every table carries the source_exchange column and has an index
on it. Venue-scoped reads are a single indexed SQL round-trip.
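A minimal in-memory sketch of that layout, using sqlite for brevity (the real catalog is Postgres). The column lists beyond source_exchange and the foreign keys are assumptions made up for illustration.

```python
import sqlite3

# Illustrative schema only: the real catalog is Postgres, and columns
# other than source_exchange and the foreign keys are assumed here.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE events   (id TEXT PRIMARY KEY, title TEXT, source_exchange TEXT);
CREATE TABLE markets  (id TEXT PRIMARY KEY, event_id TEXT REFERENCES events(id),
                       question TEXT, source_exchange TEXT);
CREATE TABLE outcomes (id TEXT PRIMARY KEY, market_id TEXT REFERENCES markets(id),
                       label TEXT, source_exchange TEXT);
-- every table is indexed on source_exchange, so a venue-scoped read
-- is a single indexed round-trip
CREATE INDEX idx_events_venue   ON events(source_exchange);
CREATE INDEX idx_markets_venue  ON markets(source_exchange);
CREATE INDEX idx_outcomes_venue ON outcomes(source_exchange);
""")

conn.execute("INSERT INTO markets VALUES ('m1', 'e1', 'Will it rain?', 'kalshi')")
conn.execute("INSERT INTO markets VALUES ('m2', 'e2', 'Who wins?', 'polymarket')")

# A venue-scoped listing is one parameterized query against one index.
rows = conn.execute(
    "SELECT id FROM markets WHERE source_exchange = ?", ("kalshi",)
).fetchall()
```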
A continuously-running ingest worker keeps the catalog fresh — typical
lag from a venue update to the catalog is under 30 seconds.
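The ingest side can be pictured as a simple per-venue upsert loop. Everything here is a stand-in: fetch_venue_snapshot is a stub, and the catalog is a dict rather than the real Postgres tables.

```python
import time

# Hypothetical ingest pass. fetch_venue_snapshot and the dict-based
# catalog are stand-ins for the real venue client and Postgres writes.
def ingest_once(fetch_venue_snapshot, catalog, venue):
    """Upsert one venue's markets into the catalog, stamping each row
    with its source_exchange tag and an ingest timestamp."""
    now = time.time()
    for market in fetch_venue_snapshot(venue):
        key = (venue, market["id"])  # upsert: same key overwrites
        catalog[key] = {**market, "source_exchange": venue, "ingested_at": now}
    return len(catalog)
```

Run continuously, catalog lag is bounded by how often this pass completes per venue, which is how the worker keeps the quoted lag under 30 seconds.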
Why fall through on credentials?
If a request carries credentials in the body, the caller is asking
for a user-scoped view (e.g. “markets I hold positions in”, or a
balance-dependent price). The catalog only has the public, shared view
of the venue, so we let pmxt-core proxy the call live. The catalog is a
best-effort optimization — never a dependency.
Why fall through on a DB error?
The catalog is an optimization, not a dependency. If the DB is slow, mid-migration, or serving stale data, the intercept swallows the error, logs it, and lets pmxt-core answer from the venue directly. Customers see a correct response either way; they just pay a ~500 ms latency penalty on the affected calls.
Router vs pass-through
The hosted Router (/v0/events, /v0/markets) is
always served from the catalog. It’s designed for cross-venue
search and listings, where a live fan-out to every venue would be
prohibitively slow. The Router has no “fall through to live” path
because there is no single venue to fall through to.
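A back-of-envelope comparison makes the design choice concrete, using the latency figures quoted earlier (~10 ms catalog reads, ~500 ms live calls). The venue count is an assumption chosen for illustration.

```python
# Rough latency model from the figures quoted in this post.
CATALOG_MS = 10      # one indexed catalog query covers all venues
LIVE_CALL_MS = 500   # approximate cost of one live venue round-trip
VENUES = 8           # hypothetical number of supported venues

catalog_listing   = CATALOG_MS             # 10 ms regardless of venue count
fanout_sequential = LIVE_CALL_MS * VENUES  # 4000 ms if venues are called in turn
fanout_parallel   = LIVE_CALL_MS           # even in parallel, bounded by slowest venue
```

Even a perfectly parallel fan-out pays the slowest venue's latency on every listing, so the Router serves cross-venue reads exclusively from the catalog.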
