Chronicle Protocol is an oracle network that has secured $10B+ for MakerDAO since 2017. With eRPC, Chronicle nearly halved its RPC spend across 18+ L1/EVM chains through smarter caching and request multiplexing.
Our operations faced several significant challenges that led to high RPC costs and inefficiencies, most notably a high volume of repeated calls to methods such as `eth_chainId` and `eth_getBlockByNumber`, which inflated costs without delivering proportional value.

eRPC is deployed on a Kubernetes EKS cluster (EU region) behind an NGINX Ingress. A dedicated node group is allocated specifically for the eRPC pods to ensure predictable performance and resource isolation.
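As a rough sketch of that isolation, the pods can be pinned to the dedicated node group with a nodeSelector and a matching taint toleration. The label and taint values below (`role: erpc`, `dedicated: erpc`) are illustrative, not Chronicle's actual settings:

```yaml
# Minimal Deployment sketch pinning eRPC pods to a dedicated node group.
# Label/taint names are hypothetical; verify the image tag and port for your eRPC version.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: erpc
spec:
  replicas: 3
  selector:
    matchLabels:
      app: erpc
  template:
    metadata:
      labels:
        app: erpc
    spec:
      nodeSelector:
        role: erpc            # schedule only onto the dedicated node group
      tolerations:
        - key: dedicated      # tolerate the taint that keeps other workloads off
          operator: Equal
          value: erpc
          effect: NoSchedule
      containers:
        - name: erpc
          image: ghcr.io/erpc/erpc:latest
          ports:
            - containerPort: 4000   # eRPC's default HTTP port
```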
We use eRPC's in-memory caching with a cap of 1 million items, and under normal load the cache remains steady and consistent. When we previously used Redis caching, CPU throttling caused reliability issues; the in-memory cache has been more reliable and performant for our use case.
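For context, this kind of cache is set in eRPC's erpc.yaml. The exact schema has changed between eRPC versions, so treat the keys below as an approximation rather than a copy-paste config:

```yaml
# Approximate cache section of erpc.yaml (key names vary by version).
database:
  evmJsonRpcCache:
    driver: memory        # in-memory connector instead of the redis driver
    memory:
      maxItems: 1000000   # the 1 million item cap described above
```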
We run three environments (three replicas per environment), each using different upstream providers with consistent network configs for mainnet and testnet. This improved failover, made staging/production updates safer, and increased overall resilience.
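Conceptually, each environment keeps the same network definition but swaps in its own upstream providers. The sketch below assumes eRPC's project/upstream config layout and uses placeholder endpoints:

```yaml
# One environment's project config; endpoints are placeholders.
projects:
  - id: main
    networks:
      - architecture: evm
        evm:
          chainId: 1          # identical network config across environments
    upstreams:
      - id: provider-a        # each environment uses a different provider set
        endpoint: https://provider-a.example.com/rpc
      - id: provider-b        # a second provider enables failover
        endpoint: https://provider-b.example.com/rpc
```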
By leveraging eRPC caching, we restructured request patterns to target specific blocks instead of polling only the latest head. Requests pinned to a specific block number are deterministic and therefore cacheable, while queries against the moving head rarely repeat, so the change produced higher cache hit rates and fewer redundant reads.
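To illustrate the difference, compare these two `eth_getBlockByNumber` request bodies: the first pins an explicit block number and can be answered from cache on every repeat, while the second chases a moving target (the block number `0x112a880` is arbitrary):

```
# Cacheable: the response for a fixed, finalized block never changes
{"jsonrpc":"2.0","id":1,"method":"eth_getBlockByNumber","params":["0x112a880",false]}

# Hard to cache: "latest" points to a new block every few seconds
{"jsonrpc":"2.0","id":2,"method":"eth_getBlockByNumber","params":["latest",false]}
```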
eRPC multiplexes concurrent identical requests, allowing many callers to share a single upstream fetch. This significantly reduced overhead from bursts of individual requests.
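eRPC is written in Go, and while the sketch below is not its actual implementation, the coalescing behavior matches Go's well-known singleflight pattern: concurrent callers keyed on the same request share one in-flight fetch.

```go
// Sketch of request multiplexing via singleflight: a burst of identical
// requests collapses into far fewer upstream fetches.
package main

import (
	"fmt"
	"sync"

	"golang.org/x/sync/singleflight"
)

func main() {
	var group singleflight.Group
	var mu sync.Mutex
	upstreamCalls := 0

	// Stand-in for a real upstream JSON-RPC call.
	fetch := func() (interface{}, error) {
		mu.Lock()
		upstreamCalls++ // count actual upstream hits
		mu.Unlock()
		return "0x112a880", nil
	}

	var wg sync.WaitGroup
	for i := 0; i < 100; i++ { // burst of 100 identical client requests
		wg.Add(1)
		go func() {
			defer wg.Done()
			// Identical key => concurrent in-flight requests share one fetch.
			group.Do("eth_blockNumber|chainId=1", fetch)
		}()
	}
	wg.Wait()

	// Prints a small number (often 1), far below 100.
	fmt.Printf("upstream calls: %d for 100 client requests\n", upstreamCalls)
}
```

In practice the coalescing key would include the method, parameters, and network, so only truly identical requests share a response.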
Better visibility into cache misses allowed us to optimize client behavior across components, networks, and projects, further minimizing unnecessary RPC calls.