Utilizing eRPC has enabled us to nearly halve our RPC spend. By optimizing our RPC requests across 18+ L1 and EVM-compatible chains, we've achieved significant cost reductions and performance improvements.
Chronicle Protocol is an innovative Oracle solution that has secured over $10B in assets for MakerDAO and its ecosystem since 2017. As the creator of Ethereum's first Oracle, Chronicle continues to lead Oracle network advancements. This blockchain-agnostic protocol offers the first truly scalable, cost-efficient, decentralized, and verifiable Oracles, transforming data transparency and accessibility on-chain.
Our operations faced several significant challenges that led to high RPC costs and inefficiencies. In particular, a large share of our traffic consisted of simple, repetitive calls such as `eth_chainId` and `eth_getBlockByNumber`. These frequent, low-complexity requests led to inflated RPC costs without delivering proportional value.
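To make the pattern concrete, here is a minimal sketch (hypothetical code; the endpoints and polling interval are illustrative, not our actual tooling) of a monitoring loop that re-asks every chain the same cheap questions on each tick:

```typescript
// Hypothetical polling loop. On every tick it issues eth_chainId (a constant
// per chain) and eth_getBlockByNumber("latest", ...) against each endpoint.
// Multiplied across 18+ chains and many components, these cheap, repetitive
// calls come to dominate RPC spend.
const RPC_URLS = [
  "https://rpc.chain-a.example", // illustrative endpoints
  "https://rpc.chain-b.example",
];

async function rpc(url: string, method: string, params: unknown[] = []) {
  const res = await fetch(url, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ jsonrpc: "2.0", id: 1, method, params }),
  });
  return (await res.json()).result;
}

setInterval(async () => {
  for (const url of RPC_URLS) {
    await rpc(url, "eth_chainId"); // never changes, yet re-fetched every tick
    await rpc(url, "eth_getBlockByNumber", ["latest", false]);
  }
}, 5_000);
```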
To overcome these challenges, we implemented the following setup with eRPC:
eRPC is deployed on a Kubernetes EKS cluster (EU region), using a standard setup with NGINX ingress for traffic management. Additionally, a dedicated node group is allocated specifically to the eRPC pods to ensure optimal performance and resource allocation.
We used eRPC's in-memory caching, setting a maximum of 1 million items. Under normal utilization the cache remains heavily under-utilized, but its behavior is very consistent. When we used Redis caching, we frequently ran into CPU throttling; in-memory caching has been far more reliable and performant for our use case.
We configured eRPC for three environments (three replicas per environment), each using different upstream providers while maintaining consistent network configurations (mainnet and testnet). This setup ensured reliable failover, improved the delivery of updates to our staging and production environments, and enhanced overall system resilience.
We leveraged eRPC's enhanced caching capabilities to store and serve requests efficiently. By identifying areas within our protocol tools that could benefit from caching, we restructured our request patterns. Instead of making generic calls to “head” or the latest blocks, we made more specific requests, significantly increasing cache hit rates and reducing RPC usage.
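As a sketch of the change (illustrative code, not our production tooling; it assumes a plain JSON-RPC endpoint behind eRPC): resolve the chain head once, then address blocks by explicit number, so identical follow-up requests from any component map to the same cache entry:

```typescript
// Illustrative: "latest" is a moving target and caches poorly, while a
// hex-encoded block number is deterministic, so the same request from any
// component can be served from cache.
async function rpc(url: string, method: string, params: unknown[] = []) {
  const res = await fetch(url, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ jsonrpc: "2.0", id: 1, method, params }),
  });
  return (await res.json()).result;
}

async function getHeadBlockPinned(rpcUrl: string) {
  // One cheap call resolves the head...
  const head: string = await rpc(rpcUrl, "eth_blockNumber"); // e.g. "0x112a880"
  // ...then the actual read uses the explicit number instead of "latest".
  return rpc(rpcUrl, "eth_getBlockByNumber", [head, false]);
}
```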
Implementing multiplexing for RPC requests allowed multiple requests to be handled concurrently over a single connection. This optimization reduced the overhead associated with multiple individual RPC calls, contributing to lower overall usage.
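One client-side flavor of this is standard JSON-RPC 2.0 batching, which packs several calls into a single HTTP request over one connection; the sketch below assumes an endpoint that accepts batches and is not our exact implementation:

```typescript
// Illustrative JSON-RPC 2.0 batch: several calls share one HTTP request and
// one connection instead of paying per-call round-trip overhead.
type RpcCall = { method: string; params?: unknown[] };

async function rpcBatch(url: string, calls: RpcCall[]): Promise<unknown[]> {
  const body = calls.map((c, i) => ({
    jsonrpc: "2.0",
    id: i,
    method: c.method,
    params: c.params ?? [],
  }));
  const res = await fetch(url, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(body),
  });
  // Servers may answer a batch out of order; re-sort replies by id.
  const replies: { id: number; result?: unknown }[] = await res.json();
  return replies.sort((a, b) => a.id - b.id).map((r) => r.result);
}

// One round trip instead of three (hypothetical usage):
// const [chainId, gasPrice, block] = await rpcBatch(ERPC_URL, [
//   { method: "eth_chainId" },
//   { method: "eth_gasPrice" },
//   { method: "eth_getBlockByNumber", params: ["0x112a880", false] },
// ]);
```

The batch shape and out-of-order replies are defined by the JSON-RPC 2.0 specification, so this pattern works against any compliant endpoint that supports batching.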
The transition to eRPC provided visibility into cache misses, enabling us to identify and implement client-side optimizations. By refining how we make requests across various components, networks, and projects, we further minimized unnecessary RPC calls.
Implementing eRPC has yielded impressive results:
Daily RPC spend before eRPC implementation
Daily RPC spend after eRPC implementation