
How eRPC reduced Chronicle's RPC costs by 45%

Chronicle Protocol is an innovative oracle network that has secured $10B+ for MakerDAO since 2017. With eRPC, Chronicle nearly halved its RPC spend across 18+ L1/EVM chains through smarter caching and request multiplexing.


Challenges

Our operations faced several significant challenges that led to high RPC costs and inefficiencies:

  • Excessive RPC call volume on simple requests: A large portion of usage was consumed by repetitive calls such as eth_chainId and eth_getBlockByNumber, inflating costs without delivering proportional value.
  • Multi‑chain complexity: Operating across 18+ L1 and EVM chains multiplied polling schedules and failure modes, increasing redundant RPC usage.
  • Ineffective caching mechanisms: Prior to eRPC, unreliable caching made it hard to store and serve from cache effectively, leading to unnecessary upstream requests.
  • Scalability issues with load balancers: Horizontal scaling increased RPC consumption as replicas independently generated more calls without coordination.
eRPC Setup Diagram 1

Setup

Deployment Setup

eRPC is deployed on an Amazon EKS cluster (EU region) behind an NGINX Ingress controller. A dedicated node group is allocated specifically to the eRPC pods to ensure predictable performance and resource isolation.

In‑Memory Cache Implementation

We use eRPC's in-memory caching with a maximum of 1 million items. Under normal load the cache remains steady and consistent. When we previously used Redis caching, CPU throttling caused reliability issues; the in-memory cache has proven more reliable and performant for our use case.

Cache implementation dashboard
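
The cache implementation itself lives inside eRPC; the Go sketch below only illustrates the general idea of a bounded, in-memory JSON-RPC response cache of the kind we rely on. The cacheKey helper and the use of the hashicorp/golang-lru package are our own illustration, not eRPC's actual code.

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"encoding/json"
	"fmt"

	lru "github.com/hashicorp/golang-lru/v2"
)

// cacheKey derives a stable key from chain, method and params so that
// identical requests map to the same cached response.
func cacheKey(chainID uint64, method string, params string) string {
	sum := sha256.Sum256([]byte(fmt.Sprintf("%d:%s:%s", chainID, method, params)))
	return hex.EncodeToString(sum[:])
}

func main() {
	// Bounded in-memory cache, capped at 1 million entries as in our setup.
	cache, err := lru.New[string, json.RawMessage](1_000_000)
	if err != nil {
		panic(err)
	}

	key := cacheKey(1, "eth_chainId", "[]")

	// First lookup misses; the response would be fetched upstream (elided here).
	if _, ok := cache.Get(key); !ok {
		cache.Add(key, json.RawMessage(`"0x1"`))
	}

	// Subsequent identical requests are served from memory.
	if resp, ok := cache.Get(key); ok {
		fmt.Println("cache hit:", string(resp))
	}
}
```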

Robust Environment Configuration

We run three environments (three replicas per environment), each using different upstream providers while keeping network configurations consistent for mainnet and testnet. This improved failover, made staging and production updates safer, and increased overall resilience.
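
As a rough illustration of this layout (environment names, networks, and provider URLs below are placeholders, not our actual configuration), each environment maps the same networks to its own set of upstreams:

```go
package main

import "fmt"

// upstream describes a single RPC provider endpoint.
type upstream struct {
	name string
	url  string
}

// Each environment keeps its own upstream providers, while the set of
// networks (mainnet, testnet) stays the same across environments.
var upstreamsByEnv = map[string]map[string][]upstream{
	"production": {
		"ethereum-mainnet": {{name: "provider-a", url: "https://rpc.provider-a.example"}},
		"ethereum-sepolia": {{name: "provider-a", url: "https://sepolia.provider-a.example"}},
	},
	"staging": {
		"ethereum-mainnet": {{name: "provider-b", url: "https://rpc.provider-b.example"}},
		"ethereum-sepolia": {{name: "provider-b", url: "https://sepolia.provider-b.example"}},
	},
	"development": {
		"ethereum-mainnet": {{name: "provider-c", url: "https://rpc.provider-c.example"}},
		"ethereum-sepolia": {{name: "provider-c", url: "https://sepolia.provider-c.example"}},
	},
}

func main() {
	// An outage at one environment's provider never affects the others,
	// and promoting a config from staging to production is a small diff.
	for env, networks := range upstreamsByEnv {
		for network, ups := range networks {
			fmt.Printf("%s / %s -> %d upstream(s)\n", env, network, len(ups))
		}
	}
}
```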

Restructured Request Patterns

By leveraging eRPC caching, we restructured request patterns to be more specific instead of polling only the latest head. The result was higher cache hit rates and fewer redundant reads.
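
As a minimal sketch of the difference (the eRPC endpoint URL below is a placeholder): requesting a pinned block number is deterministic and therefore cacheable, while polling the moving head gives the cache little to reuse.

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

// call sends a single JSON-RPC request to the eRPC endpoint.
// The endpoint URL is a placeholder for illustration only.
func call(method string, params ...any) (json.RawMessage, error) {
	body, err := json.Marshal(map[string]any{
		"jsonrpc": "2.0",
		"id":      1,
		"method":  method,
		"params":  params,
	})
	if err != nil {
		return nil, err
	}
	resp, err := http.Post("http://erpc.internal:4000/main/evm/1", "application/json", bytes.NewReader(body))
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	var out struct {
		Result json.RawMessage `json:"result"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
		return nil, err
	}
	return out.Result, nil
}

func main() {
	// Polling the moving head ("latest") produces a different answer over
	// time, so each poll tends to become a fresh upstream request.
	_, _ = call("eth_getBlockByNumber", "latest", false)

	// Asking for a specific, finalized block is deterministic, so repeated
	// requests from any replica can be served from eRPC's cache.
	block, err := call("eth_getBlockByNumber", fmt.Sprintf("0x%x", 19_000_000), false)
	if err != nil {
		panic(err)
	}
	fmt.Println("block payload bytes:", len(block))
}
```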

Multiplexing Requests

eRPC multiplexes concurrent identical requests, allowing many callers to share a single upstream fetch. This significantly reduced overhead from bursts of individual requests.

Multiplexing effectiveness
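
The multiplexer is built into eRPC; the sketch below only shows the underlying idea using Go's golang.org/x/sync/singleflight package, with fetchFromUpstream standing in for a real upstream call. Concurrent identical requests collapse into one fetch whose result every waiting caller shares.

```go
package main

import (
	"fmt"
	"sync"
	"time"

	"golang.org/x/sync/singleflight"
)

var group singleflight.Group

// fetchFromUpstream is a placeholder for a real upstream JSON-RPC call.
func fetchFromUpstream(key string) (string, error) {
	time.Sleep(100 * time.Millisecond) // simulate network latency
	return "result-for-" + key, nil
}

// getBlock collapses concurrent identical requests: only one upstream
// fetch runs per key, and all waiting callers share its result.
func getBlock(key string) (string, bool, error) {
	v, err, shared := group.Do(key, func() (any, error) {
		return fetchFromUpstream(key)
	})
	if err != nil {
		return "", shared, err
	}
	return v.(string), shared, nil
}

func main() {
	var wg sync.WaitGroup
	// Ten concurrent callers ask for the same block; only one upstream
	// request is issued, and the rest report the result as shared.
	for i := 0; i < 10; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			res, shared, _ := getBlock("eth_getBlockByNumber:0x121eac0")
			fmt.Println(res, "shared:", shared)
		}()
	}
	wg.Wait()
}
```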

Optimized Client‑Side Operations

Better visibility into cache misses allowed us to optimize client behavior across components, networks, and projects, further minimizing unnecessary RPC calls.

Client-side optimizations
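
eRPC exposes its own metrics; the counters below are only a hypothetical client-side sketch (metric and label names are ours for illustration) of the kind of per-method, per-network hit/miss accounting that makes avoidable upstream calls visible.

```go
package main

import (
	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promauto"
)

// Hypothetical counters; names are illustrative, not eRPC's metric names.
var (
	cacheHits = promauto.NewCounterVec(prometheus.CounterOpts{
		Name: "rpc_cache_hits_total",
		Help: "Requests served from cache, by method and network.",
	}, []string{"method", "network"})

	cacheMisses = promauto.NewCounterVec(prometheus.CounterOpts{
		Name: "rpc_cache_misses_total",
		Help: "Requests forwarded upstream, by method and network.",
	}, []string{"method", "network"})
)

// observe records the outcome of a lookup so dashboards can show which
// components and networks generate the most avoidable upstream calls.
func observe(method, network string, hit bool) {
	if hit {
		cacheHits.WithLabelValues(method, network).Inc()
		return
	}
	cacheMisses.WithLabelValues(method, network).Inc()
}

func main() {
	// Example: a repeated eth_chainId lookup that hit the cache, and a
	// unique eth_getLogs query that had to go upstream.
	observe("eth_chainId", "ethereum-mainnet", true)
	observe("eth_getLogs", "ethereum-mainnet", false)
}
```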

Results

  • Significant reduction in RPC usage: After migrating to eRPC we nearly halved RPC usage relative to the prior billing period (providers included dRPC and BlastAPI).
    • Prior to migration, daily spend was roughly $60/day on dRPC.
    • After migration, daily spend decreased to about $11/day, measured on two randomly sampled days.
    • The week prior to migration averaged $50–60/day.
      Pre-migration spend chart
      Pre-migration spend chart 2
    Post-migration savings
  • Improved performance and scalability: Caching and multiplexing improved responsiveness and allowed scaling replicas without exponential RPC growth.
    Performance and scalability improvements