The heart of every FIWARE deployment — implemented in C for extreme performance
The NGSI-LD Context Broker is the central hub of every FIWARE deployment: it stores, queries, and distributes IoT entity data. The current Orion-LD 1.x is already written in C and already uses several fw-libs (fwJson, fwAlloc, fwHash), but it is held back by libmicrohttpd as its HTTP server and by an NGSIv1-era internal data model that forces costly transformations on every request.
The FiWorks NGSI-LD Broker replaces libmicrohttpd with fwHttp (epoll-based, zero-copy, multi-process scaling via SO_REUSEPORT), adds a new subscription engine, and integrates the full fw-libs stack. MongoDB remains the database, but broker-side overhead drops sharply: the native FtNode data model is used throughout, eliminating the BSON ↔ NGSIv1 conversions on every request, and SO_REUSEPORT lets the broker use every available core.
Estimated effort: 2–3 months with Claude Max. The main work is the fwHttp integration, the new subscription engine, and making the FtNode data model native throughout. Half the infrastructure (fwHash, fwAlloc, fwJson, fwHttp, fwProm, fwTrace) already exists.
Worth every month — these are the projected gains over the current Orion-LD 1.x:
| Metric | Orion-LD 1.x (measured) | FiWorks NGSI-LD Broker (projected) |
|---|---|---|
| Single-core throughput | ~5,000 req/s | ~10,000–15,000 req/s |
| System throughput (same HW) | ~5,000 req/s | ~15,000–30,000 req/s |
| p99 latency | ~5–20ms | ~1–5ms |
| RAM (broker process) | ~300–500 MB | ~50–150 MB |
The single-core 2–3x comes from eliminating libmicrohttpd overhead and the NGSIv1-era internal data model conversions. The system-wide 3–6x comes from multi-core scaling via SO_REUSEPORT: Orion-LD is single-threaded, while the FiWorks broker uses all available cores. Performance comparisons with Scorpio and Stellio are planned.