Surface monitor Derivasys
Roadmap

Kafka and Kubernetes roadmap

The next generation of the platform should separate stream ingestion, calibration, storage, and websocket fanout. Kafka provides the event backbone; Kubernetes gives each service a scalable runtime boundary.

[Diagram: Kafka and Kubernetes roadmap showing exchange ingestion pods, Kafka topics, analytics workers, websocket gateways, and the dashboard.]
Kafka separates replayable market-data streams from slower calibration work, while Kubernetes lets ingestion, analytics, and websocket fanout scale independently.

1. Kafka as the market-data backbone

Exchange connectors publish normalized option order-book, trade, futures, and index updates for BTC, ETH, and altcoins into Kafka topics. Downstream consumers can replay, inspect, and recover from those topics without coupling every service directly to each exchange.
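A minimal sketch of the connector's normalization step, assuming an illustrative topic name, key layout, and field schema (none of these are the platform's actual contract). The function produces a (topic, key, value) triple that any Kafka client could publish:

```python
import json

def to_kafka_record(venue: str, raw: dict) -> tuple[str, bytes, bytes]:
    """Normalize a raw exchange option-quote message into a
    (topic, key, value) triple ready for a Kafka producer.

    Topic name, key layout, and field names are illustrative
    assumptions, not the platform's actual schema.
    """
    key = f"{raw['currency']}|{raw['expiry']}|{raw['strike']}|{venue}"
    value = {
        "venue": venue,
        "instrument": raw["instrument"],
        "bid_iv": raw["bid_iv"],
        "ask_iv": raw["ask_iv"],
        "ts": raw["ts"],
    }
    return "optquotes.normalized", key.encode(), json.dumps(value).encode()

# A real connector would then hand the triple to its Kafka client, e.g.
# (confluent-kafka style):
#   topic, key, value = to_kafka_record("deribit", msg)
#   producer.produce(topic, key=key, value=value)
```

Keying by currency, expiry, strike, and venue keeps every update for one instrument in one partition, so per-instrument ordering survives replay.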

2. Topic design

A pragmatic topic model separates raw exchange events, normalized option quotes, forward context, fitted SVI surfaces, risk nodes, fit diagnostics, and websocket-ready patches. Keys should include currency, expiry, strike, venue, and instrument where relevant.
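One way the topic model above could be laid out; the topic names and key helpers here are assumptions for illustration, not the live schema:

```python
# Illustrative topic layout; names are assumptions, not the live schema.
TOPICS = {
    "raw": "md.raw.{venue}",         # raw exchange events, keyed by instrument
    "quotes": "md.optquotes",        # normalized option quotes
    "forwards": "md.forwards",       # forward / index context per expiry
    "surfaces": "vol.svi.surfaces",  # fitted SVI parameter sets
    "nodes": "vol.risk.nodes",       # ATM / RR / fly nodes
    "diag": "vol.fit.diagnostics",   # fit residuals and quality flags
    "patches": "ui.ws.patches",      # websocket-ready patches
}

def quote_key(currency: str, expiry: str, strike: float, venue: str) -> str:
    """Partition key for normalized quotes: the same instrument on the same
    venue always lands in the same partition, preserving per-key order."""
    return f"{currency}|{expiry}|{strike:g}|{venue}"

def surface_key(currency: str, expiry: str) -> str:
    """Fitted surfaces are keyed per currency and expiry slice."""
    return f"{currency}|{expiry}"
```

Keeping surfaces keyed per currency and expiry means a compacted topic naturally retains only the latest fit per slice.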

3. Kubernetes workloads

In Kubernetes, ingestion pods, calibration workers, API gateways, and websocket broadcasters can scale independently. Calibration workers scale with fit latency and topic lag, while websocket gateways scale with connected users and fanout pressure.
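The scaling rule for calibration workers can be sketched as a toy replica target driven by consumer lag and fit latency. The thresholds and names are assumptions; in a real cluster this logic would live in an HPA or KEDA external-metric scaler rather than application code:

```python
def desired_calibration_replicas(
    topic_lag: int,
    p95_fit_ms: float,
    lag_per_worker: int = 5_000,   # assumed comfortable backlog per worker
    fit_budget_ms: float = 250.0,  # assumed latency target for one fit
    max_replicas: int = 16,
) -> int:
    """Toy replica target: scale with consumer-group lag, and add one
    worker of headroom while fits run over their latency budget."""
    target = max(1, -(-topic_lag // lag_per_worker))  # ceil division, min 1
    if p95_fit_ms > fit_budget_ms:
        target += 1
    return min(max_replicas, target)
```

Websocket gateways would scale on a different signal entirely (connection count and outbound bandwidth), which is exactly why the two workloads deserve separate deployments.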

4. Reliability model

Kafka offsets make replay and recovery explicit. Kubernetes readiness checks should fail when a worker is stale, unable to reach Kafka, or publishing fits that violate quality thresholds. Dead-letter topics preserve bad payloads for inspection.

5. What changes for the UI

The browser should still receive compact websocket snapshots and patches. Kafka and K8s change the reliability and scaling of the backend, not the trader-facing contract: the UI remains focused on current surface state, RR/fly nodes, quote overlays, and fit health.
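The trader-facing contract can be sketched as two message shapes, a full snapshot on connect and a diff-only patch thereafter. The message format here is an assumption, not the platform's actual wire protocol:

```python
def snapshot(curr: dict) -> dict:
    """Full-state snapshot sent on connect or resubscribe.
    Message shape is an illustrative assumption."""
    return {"type": "snapshot", "nodes": dict(curr)}

def surface_patch(prev: dict, curr: dict) -> dict:
    """Compact patch: only the surface nodes whose value changed
    since the last broadcast."""
    changed = {k: v for k, v in curr.items() if prev.get(k) != v}
    return {"type": "patch", "nodes": changed}
```

Because the gateway diffs against the last broadcast state, moving the backend onto Kafka and Kubernetes changes nothing the browser sees: it still receives one snapshot followed by small patches.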