
How eRPC reduced Chronicle's RPC costs by 45%

Utilizing eRPC has enabled us to nearly halve our RPC spend. By optimizing our RPC requests across 18+ L1 and EVM-compatible chains, we've achieved significant cost reductions and performance improvements.


About Chronicle

Chronicle Protocol is an innovative Oracle solution that has secured over $10B in assets for MakerDAO and its ecosystem since 2017. As the creator of Ethereum's first Oracle, Chronicle continues to lead Oracle network advancements. This blockchain-agnostic protocol offers the first truly scalable, cost-efficient, decentralized, and verifiable Oracles, transforming data transparency and accessibility on-chain.

Challenges

Our operations faced several significant challenges that led to high RPC costs and inefficiencies:

  • Excessive RPC Call Volume on Simple Requests: A large portion of our RPC usage was consumed by repetitive, low-complexity calls such as eth_chainId and eth_getBlockByNumber. These frequent requests inflated RPC costs without delivering proportional value (see the sketch after this list).
  • Multi-Chain Complexity: Operating across 18 different L1 and EVM chains introduced substantial complexity in managing RPC calls efficiently. Each additional chain increased the potential for redundant or inefficient RPC usage.
  • Ineffective Caching Mechanisms: Prior to migrating to eRPC, our system lacked a reliable caching solution. This hindered our ability to store and serve responses from cache effectively, resulting in unnecessary RPC calls and higher costs.
  • Scalability Issues with Load Balancers: Our previous load-balancing setup struggled with throughput. Scaling horizontally by adding more replicas multiplied RPC consumption, because each replica independently generated its own calls without sharing cached results.
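For illustration, here is a minimal sketch of the kind of polling loop behind the first problem above. The endpoint URL and interval are hypothetical, but the pattern, re-fetching immutable or fast-moving values on every tick from every replica, is what inflated the bill:

```ts
// Hypothetical polling loop: every tick re-asks the node for values that
// rarely or never change, so each replica multiplies the same cheap calls.
const RPC_URL = "https://rpc.example.com"; // placeholder endpoint

async function rpc(method: string, params: unknown[] = []): Promise<unknown> {
  const res = await fetch(RPC_URL, {
    method: "POST",
    headers: { "content-type": "application/json" },
    body: JSON.stringify({ jsonrpc: "2.0", id: 1, method, params }),
  });
  const { result } = await res.json();
  return result;
}

setInterval(async () => {
  // eth_chainId is immutable per network, yet it is re-fetched every tick.
  const chainId = await rpc("eth_chainId");
  // Fetching the head block by tag is hard to cache: "latest" changes constantly.
  const head = await rpc("eth_getBlockByNumber", ["latest", false]);
  console.log(chainId, head);
}, 1_000);
```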

Setup

To overcome these challenges, we implemented the following setup with eRPC:

Deployment Setup

eRPC is deployed on a Kubernetes EKS cluster (EU region), utilizing a standard setup with Nginx ingress for traffic management. Additionally, a dedicated node group is allocated specifically for the erpc pods to ensure optimal performance and resource allocation.

In-Memory Cache Implementation

We use eRPC's in-memory caching with a maximum of 1 million items. Under normal load the in-memory cache is still heavily under-utilized, but very consistent. With Redis caching we frequently ran into CPU throttling; in-memory caching has been far more reliable and performant for our use case.
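A minimal sketch of the corresponding cache setup, written as an erpc.ts config. The exact field names (evmJsonRpcCache, maxItems, the policy shape) follow our reading of the eRPC config schema and may differ between versions, so treat this as illustrative rather than copy-paste:

```ts
import { createConfig } from "@erpc-cloud/config";

export default createConfig({
  database: {
    evmJsonRpcCache: {
      connectors: [
        {
          id: "memory-cache",
          driver: "memory",
          // Cap the cache at 1 million items, matching our production setting.
          memory: { maxItems: 1_000_000 },
        },
      ],
      policies: [
        {
          // Cache finalized results for every network and method.
          network: "*",
          method: "*",
          finality: "finalized",
          connector: "memory-cache",
        },
      ],
    },
  },
});
```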


Robust Environment Configuration

We configured eRPC for three environments (three replicas per environment), each using different upstream providers while maintaining consistent network configurations (mainnet and testnet). This setup ensured reliable failover, smoother rollout of updates to staging and production, and enhanced overall system resilience.
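Below is a sketch of how one environment's project can wire multiple upstream providers behind the same network. The provider URLs are placeholders, and the same caveat about exact schema field names applies:

```ts
import { createConfig } from "@erpc-cloud/config";

export default createConfig({
  projects: [
    {
      id: "main",
      // Each environment points at its own providers, while the networks
      // (mainnet/testnet) stay identical across environments, so a failing
      // upstream simply falls over to the next one in the list.
      upstreams: [
        { id: "primary", endpoint: "https://eth.provider-a.example/KEY" },
        { id: "fallback", endpoint: "https://eth.provider-b.example/KEY" },
      ],
    },
  ],
});
```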

Restructured Request Patterns

We leveraged eRPC's enhanced caching capabilities to store and serve requests efficiently. By identifying areas within our protocol tools that could benefit from caching, we restructured our request patterns. Instead of making generic calls against "head" or the latest block tag, we made more specific requests pinned to explicit block heights, significantly increasing cache hit rates and reducing RPC usage.
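A sketch of the pattern using viem (our actual tooling may differ): resolve the head once, then pin subsequent reads to that explicit block number, so identical requests from other components can be answered from cache instead of the upstream. The eRPC endpoint URL is a placeholder:

```ts
import { createPublicClient, erc20Abi, http } from "viem";
import { mainnet } from "viem/chains";

// Point the client at the eRPC endpoint (placeholder URL).
const client = createPublicClient({
  chain: mainnet,
  transport: http("https://erpc.internal.example/main/evm/1"),
});

async function readBalances(addresses: `0x${string}`[], token: `0x${string}`) {
  // One uncacheable call to resolve the head...
  const blockNumber = await client.getBlockNumber();

  // ...then every read is pinned to that block. Requests keyed by an
  // explicit block number are deterministic, so the proxy can cache
  // and serve repeats without touching the upstream provider.
  return Promise.all(
    addresses.map((address) =>
      client.readContract({
        address: token,
        abi: erc20Abi,
        functionName: "balanceOf",
        args: [address],
        blockNumber,
      }),
    ),
  );
}
```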

Multiplexing Requests

Implementing multiplexing for RPC requests allowed multiple requests to be handled concurrently over a single connection. This optimization reduced the overhead associated with multiple individual RPC calls, contributing to lower overall usage.
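To picture the effect, here is a hedged sketch: our understanding is that identical in-flight requests can be coalesced by eRPC into a single upstream call, so many components asking the same question at the same moment cost one round trip rather than ten. The endpoint URL is a placeholder:

```ts
const ERPC_URL = "https://erpc.internal.example/main/evm/1"; // placeholder

async function getGasPrice(): Promise<string> {
  const res = await fetch(ERPC_URL, {
    method: "POST",
    headers: { "content-type": "application/json" },
    body: JSON.stringify({ jsonrpc: "2.0", id: 1, method: "eth_gasPrice", params: [] }),
  });
  const { result } = await res.json();
  return result;
}

// Ten concurrent identical requests from our side; with multiplexing the
// proxy can answer all of them from one upstream round trip instead of ten.
const prices = await Promise.all(Array.from({ length: 10 }, () => getGasPrice()));
console.log(prices);
```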


Optimized Client-Side Operations

The transition to eRPC provided visibility into cache misses, enabling us to identify and implement client-side optimizations. By refining how we make requests across various components, networks, and projects, we further minimized unnecessary RPC calls.
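One example of the client-side fixes this visibility prompted, sketched with a hypothetical helper: values that can never change for a given endpoint, such as the chain ID, are fetched once and memoized instead of re-requested by every component:

```ts
// Memoize per-endpoint immutable values so they cost one RPC call ever,
// not one per component per polling interval.
const chainIdCache = new Map<string, Promise<string>>();

function getChainId(rpcUrl: string): Promise<string> {
  let cached = chainIdCache.get(rpcUrl);
  if (!cached) {
    cached = fetch(rpcUrl, {
      method: "POST",
      headers: { "content-type": "application/json" },
      body: JSON.stringify({ jsonrpc: "2.0", id: 1, method: "eth_chainId", params: [] }),
    })
      .then((res) => res.json())
      .then(({ result }) => result as string);
    chainIdCache.set(rpcUrl, cached);
  }
  return cached;
}
```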


Results

Implementing eRPC has yielded impressive results:

  • Significant Reduction in RPC Usage: Since migrating to eRPC, we have nearly halved our RPC usage with providers such as dRPC and BlastAPI compared to the previous billing period.
    • Prior to migration, we were spending approximately $60 per day on dRPC; in the week before the switch, daily spend was consistently in the $50–60 range.
    • Since migrating, our daily spend has decreased to an average of $11 per day (sampled on two random days), roughly an 80% reduction for that provider.

    [Chart: daily RPC spend before eRPC implementation]

    [Chart: daily RPC spend after eRPC implementation]
  • Improved Performance and Scalability: Enhanced caching and request multiplexing have not only reduced costs but also improved the responsiveness and scalability of our system. The ability to run multiple load balancer replicas without exponential RPC consumption has enabled better handling of increased traffic and demand.