1. Kafka as the market-data backbone
Exchange connectors publish normalized order-book, trade, futures, and index updates for BTC, ETH, and altcoin options into Kafka topics. Downstream consumers can replay, inspect, and recover from those topics without coupling every service directly to each exchange.
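A minimal sketch of what a normalized update might look like on the wire. The field names and the JSON encoding here are assumptions, not a fixed schema; the point is that every connector emits the same shape regardless of venue:

```python
import json
import time
from dataclasses import dataclass, asdict

@dataclass
class OptionQuoteUpdate:
    """Normalized top-of-book update for one option instrument (illustrative schema)."""
    venue: str          # e.g. "deribit"
    currency: str       # underlying, e.g. "BTC"
    instrument: str     # venue-native instrument name
    bid_px: float
    ask_px: float
    bid_sz: float
    ask_sz: float
    recv_ts_ns: int     # local receive timestamp, nanoseconds

def to_kafka_value(update: OptionQuoteUpdate) -> bytes:
    """Serialize the update to a compact JSON payload for the Kafka record value."""
    return json.dumps(asdict(update), separators=(",", ":")).encode("utf-8")

update = OptionQuoteUpdate("deribit", "BTC", "BTC-28MAR25-50000-C",
                           0.0415, 0.0425, 12.0, 9.5, time.time_ns())
payload = to_kafka_value(update)
```

Because every consumer sees this one shape, adding a new exchange means writing one connector, not touching every downstream service.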
2. Topic design
A pragmatic topic model separates raw exchange events, normalized option quotes, forward context, fitted SVI surfaces, risk nodes, fit diagnostics, and websocket-ready patches. Keys should include currency, expiry, strike, venue, and instrument where relevant.
3. Kubernetes workloads
In Kubernetes, ingestion pods, calibration workers, API gateways, and websocket broadcasters can scale independently. Calibration workers scale with fit latency and topic lag, while websocket gateways scale with connected users and fanout pressure.
4. Reliability model
Kafka offsets make replay and recovery explicit. Kubernetes readiness checks should fail when a worker is stale, unable to reach Kafka, or publishing fits that violate quality thresholds. Dead-letter topics preserve bad payloads for inspection.
5. What changes for the UI
The browser should still receive compact websocket snapshots and patches. Kafka and K8s change the reliability and scaling of the backend, not the trader-facing contract: the UI remains focused on current surface state, RR/fly nodes, quote overlays, and fit health.
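A sketch of the unchanged trader-facing contract: a compact patch message carrying only the surface nodes that moved, with a sequence number so the client can detect gaps and request a fresh snapshot. The field names and node labels are assumptions for illustration:

```python
import json

def surface_patch(currency: str, expiry: str,
                  changed_nodes: dict[str, float], seq: int) -> str:
    """Build a compact websocket patch message: only the surface nodes that
    changed since the last snapshot, plus a monotone sequence number the
    client uses for gap detection."""
    return json.dumps({
        "type": "patch",
        "ccy": currency,
        "exp": expiry,
        "seq": seq,
        # Hypothetical node labels: ATM vol, 25-delta risk reversal / butterfly.
        "nodes": changed_nodes,
    }, separators=(",", ":"))

msg = surface_patch("BTC", "28MAR25", {"atm": 0.54, "25d_rr": -0.012}, seq=4182)
```

Whether the backend is a single process or a Kafka-fed Kubernetes deployment, the browser only ever sees this snapshot-plus-patch stream.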