
Candela — LLM Observability Platform

Open-source observability for LLM traffic. Trace requests, monitor costs, and manage providers across your entire AI infrastructure.

End-to-End Traces

Every LLM request is captured with OpenTelemetry: latency, time to first byte (TTFB), token counts, and cost, all in a unified trace tree with W3C Trace Context propagation.
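For example, because propagation follows W3C Trace Context, an application can join its LLM calls into an existing trace just by forwarding a `traceparent` header. A minimal Python sketch, assuming a local Candela deployment at `http://localhost:4000` exposing an OpenAI-compatible chat endpoint (both the address and the path are assumptions):

```python
import requests
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.propagate import inject

# Set up a tracer for the calling application.
trace.set_tracer_provider(TracerProvider())
tracer = trace.get_tracer("my-app")

with tracer.start_as_current_span("summarize-ticket"):
    headers = {}
    inject(headers)  # adds the W3C `traceparent` header for the current span
    resp = requests.post(
        "http://localhost:4000/v1/chat/completions",  # assumed Candela address
        headers=headers,
        json={
            "model": "gpt-4o-mini",
            "messages": [{"role": "user", "content": "Summarize this ticket."}],
        },
    )
```

With the header forwarded, the LLM span should appear as a child of `summarize-ticket` in the trace tree rather than as a disconnected root.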

Multi-Provider Routing

Route to OpenAI, Gemini, Anthropic, Ollama, and LM Studio through a single endpoint. Smart routing picks the right backend automatically.
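A sketch of what routing through the single endpoint looks like from a client, assuming the common gateway convention that the model name selects the backend (the URL, dummy key, and that convention are all assumptions, not documented behavior):

```python
from openai import OpenAI

# One client, one endpoint; each model name maps to a different provider.
client = OpenAI(base_url="http://localhost:4000/v1", api_key="unused")

for model in ["gpt-4o-mini", "gemini-1.5-flash", "claude-3-5-haiku-latest", "llama3.1"]:
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": "ping"}],
    )
    print(model, "->", resp.choices[0].message.content)
```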

Cost Intelligence

Real-time cost calculation per request, per model, per user. Set budgets and get alerts before they’re exceeded.
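The per-request arithmetic is straightforward: tokens in each direction multiplied by that model's per-token rate. A worked sketch with illustrative prices (the rate table below is an example, not Candela's pricing data):

```python
# USD per 1M tokens as (input, output) — example figures only.
PRICES_PER_1M = {
    "gpt-4o-mini": (0.15, 0.60),
}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Cost of one request: token counts times the per-token rates."""
    in_rate, out_rate = PRICES_PER_1M[model]
    return (input_tokens * in_rate + output_tokens * out_rate) / 1_000_000

# e.g. 1,200 prompt tokens + 300 completion tokens on gpt-4o-mini
print(f"${request_cost('gpt-4o-mini', 1200, 300):.6f}")  # $0.000360
```

Aggregating these per-request figures by model or by user is what makes budgets and alert thresholds possible.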

Drop-In Integration

Works with Google ADK, LangChain, LiteLLM, raw HTTP — anything that talks to an LLM endpoint. Point base_url at Candela and you’re done.
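For instance, from LangChain (one of the integrations named above), the only change is the `base_url`. The local address and dummy key below are assumptions about a typical setup:

```python
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(
    model="gpt-4o-mini",
    base_url="http://localhost:4000/v1",  # assumed Candela address
    api_key="unused",  # assumes Candela holds the real provider keys
)

print(llm.invoke("Hello through Candela!").content)
```

The same one-line change applies to any OpenAI-compatible client: no SDK swap, no code rewrite.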