PMXT Hosted serves two kinds of traffic through the same Bearer-authed surface:
  1. Catalog reads — answered from a continuously ingested Postgres catalog in ~10 ms per request.
  2. Live pass-through — proxied straight to the underlying venue when the catalog can’t safely answer.
Which path a request takes is decided per-call, transparently, so the client code is identical either way.
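To make that transparency concrete, here is how a client request might be shaped. This is a hedged sketch: the host URL, the token placeholder, and the `buildRequest` helper are illustrative inventions, not the real pmxt SDK surface; only the `POST /api/:exchange/:method` shape and the Bearer header come from the description above.

```typescript
// Hypothetical request builder: the caller cannot (and need not) indicate
// whether the catalog or the live venue will answer.
function buildRequest(
  exchange: string,
  method: string,
  params: Record<string, unknown> = {},
): { url: string; headers: Record<string, string>; body: string } {
  return {
    // Illustrative host; substitute the real hosted endpoint.
    url: `https://pmxt-hosted.example/api/${exchange}/${method}`,
    headers: {
      Authorization: "Bearer <token>", // same Bearer auth on both paths
      "Content-Type": "application/json",
    },
    body: JSON.stringify(params),
  };
}
```

Identical client code runs whether the answer comes from the catalog or the venue.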

The decision tree

POST /api/:exchange/:method

 is :method one of
 fetchMarkets, fetchEvents,
 fetchMarket, fetchEvent?

        ├── no ──────────────────────► live venue (pmxt-core)

        ▼ yes
 does req.body.credentials
 exist? (user-scoped view)

        ├── yes ─────────────────────► live venue (pmxt-core)

        ▼ no
 are all params in the
 catalog-safe allowlist?
 (query, limit, offset, category, ...)

        ├── no ──────────────────────► live venue (pmxt-core)

        ▼ yes
 run catalog SQL against
 prediction_markets.* with
 source_exchange = :exchange

        ├── error ───────────────────► live venue (pmxt-core)

        ▼ success, rows > 0
 return { success: true, data }
The envelope is identical either way ({success, data}), so SDK clients cannot tell the difference.
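The tree above can be sketched as a pure routing function. Assumptions are marked: the names `routeDecision`, `CATALOG_METHODS`, and `SAFE_PARAMS` are illustrative, the allowlist shows only the four keys the diagram names (the real list may be longer), and the real implementation may shape `req.body` differently.

```typescript
type Route = "catalog" | "live";

// The four catalog-eligible read methods from the diagram.
const CATALOG_METHODS = new Set([
  "fetchMarkets",
  "fetchEvents",
  "fetchMarket",
  "fetchEvent",
]);

// Catalog-safe params from the diagram; the production allowlist may be longer.
const SAFE_PARAMS = new Set(["query", "limit", "offset", "category"]);

function routeDecision(method: string, body: Record<string, unknown>): Route {
  if (!CATALOG_METHODS.has(method)) return "live"; // not a catalog read
  if ("credentials" in body) return "live";        // user-scoped view
  for (const key of Object.keys(body)) {
    if (!SAFE_PARAMS.has(key)) return "live";      // param the catalog can't honor
  }
  return "catalog"; // eligible; catalog SQL runs next (errors still fall through)
}
```

The final fall-through on a DB error happens after this decision, inside the catalog read itself.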

What the catalog contains

The catalog is three tables in the prediction_markets schema:
  • events — one row per event per venue, with a source_exchange tag.
  • markets — one row per market per venue, foreign-keyed to events.
  • outcomes — one row per outcome per market.
Every row carries a source_exchange column and every table has an index on it. Venue-scoped reads are a single indexed SQL round-trip. A continuously running ingest worker keeps the catalog fresh — typical lag from a venue update to the catalog is under 30 seconds.
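A venue-scoped read then reduces to one parameterized statement. Sketch only: the table and column `prediction_markets.markets` / `source_exchange` follow the schema described above, but the selected columns, the `ORDER BY m.id` choice, and the `venueMarketsQuery` helper name are assumptions.

```typescript
// Build the single indexed round-trip for one venue's markets.
// The ORDER BY column is a placeholder assumption.
function venueMarketsQuery(
  exchange: string,
  limit = 100,
  offset = 0,
): { text: string; values: [string, number, number] } {
  return {
    text:
      "SELECT m.* FROM prediction_markets.markets m " +
      "WHERE m.source_exchange = $1 " + // served by the source_exchange index
      "ORDER BY m.id LIMIT $2 OFFSET $3",
    values: [exchange, limit, offset],
  };
}
```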

Why fall through on credentials?

If a request carries credentials in the body, the caller is asking for a user-scoped view (e.g. “markets I hold positions in”, or a balance-dependent price). The catalog only has the public, shared view of the venue, so we let pmxt-core proxy the call live. The catalog is a best-effort optimization — never a dependency.

Why fall through on a DB error?

The catalog is an optimization, not a dependency. If the DB is slow, under migration, or has stale data, the intercept swallows the error, logs it, and lets pmxt-core answer from the venue directly. Customers see a correct response either way; they just pay a ~500 ms latency penalty on the affected calls.
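That swallow-and-fall-through behavior can be sketched as a small wrapper. `answerRead`, `catalogRead`, and `liveRead` are stand-in names, not the actual pmxt-core internals.

```typescript
// Best-effort catalog read: any catalog failure is logged and the request
// is answered live instead of surfacing an error to the client.
async function answerRead<T>(
  catalogRead: () => Promise<T>,
  liveRead: () => Promise<T>,
  log: (err: unknown) => void = console.error,
): Promise<T> {
  try {
    return await catalogRead();
  } catch (err) {
    log(err);          // observed by operators, invisible to the caller
    return liveRead(); // correct answer, at the ~500 ms live-venue cost
  }
}
```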

Router vs pass-through

The hosted Router (/v0/events, /v0/markets) is always served from the catalog. It’s designed for cross-venue search and listings, where a live fan-out to every venue would be prohibitively slow. The Router has no “fall through to live” path because there is no single venue to fall through to.
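Concretely, a Router query is the cross-venue variant of the venue-scoped read above: it simply omits the source_exchange predicate, so no single venue could serve as a fallback. The searchable column (`question`) and the `routerMarketsQuery` helper name are hypothetical.

```typescript
// Cross-venue listing for /v0/markets: no venue filter means there is no
// single live venue to fall through to.
function routerMarketsQuery(
  search: string,
  limit = 50,
): { text: string; values: [string, number] } {
  return {
    text:
      "SELECT m.* FROM prediction_markets.markets m " +
      "WHERE m.question ILIKE $1 " + // hypothetical searchable text column
      "LIMIT $2",
    values: [`%${search}%`, limit],
  };
}
```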