Somehow, a cache is no longer a cache. Our requirements changed while we were all focused on trying to handle the ever-increasing load -- one application at a time. So what is a cache? Why does it exist? What changed? And why are people replacing Redis? If you're interested in the full story, watch the webinar Redis Replaced: Why Companies Now Choose Apache® Ignite™ to Improve Application Speed and Scale. But here's the short answer.
In software, a cache is not a place for hiding things. It’s a place to temporarily store data. It’s usually an in-memory store -- with a backup on disk to speed up recovery in case of failure -- that sits on the side of the actual data flow.
This is called the "cache-aside" pattern: the application reads data from the database, puts a copy in the cache on the side, and serves it from there until it goes stale. It's the developer's responsibility to decide how long cached data can be used before it must be refreshed with another read. Caching increases performance and offloads reads from the underlying data store.
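To make the pattern concrete, here is a minimal cache-aside sketch in Java. The Cache, UserDao, and User types are hypothetical stand-ins for whatever cache client and data-access layer an application actually uses; the point is only to show where the read, the miss, and the invalidation happen.

```java
import java.time.Duration;

// Hypothetical stand-ins for a cache client and a data-access layer.
interface Cache<K, V> {
    V get(K key);
    void put(K key, V value, Duration ttl);
    void evict(K key);
}

interface UserDao {
    User findById(long id);
    void save(User user);
}

record User(long id, String name) {}

public class UserService {
    private final Cache<Long, User> cache;   // e.g. a Redis or Memcached client wrapper
    private final UserDao database;          // the system of record

    public UserService(Cache<Long, User> cache, UserDao database) {
        this.cache = cache;
        this.database = database;
    }

    public User getUser(long id) {
        User user = cache.get(id);                        // 1. check the cache first
        if (user == null) {
            user = database.findById(id);                 // 2. on a miss, read the system of record
            cache.put(id, user, Duration.ofMinutes(10));  // 3. populate with a TTL so it eventually refreshes
        }
        return user;
    }

    public void updateUser(User user) {
        database.save(user);       // writes go to the database...
        cache.evict(user.id());    // ...and the cached copy is invalidated by hand
    }
}
```

The TTL and the explicit eviction are where the pattern puts the burden on the developer: nothing keeps the cached copy and the database in sync automatically.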
One reason a separate cache-aside layer for each application no longer works is that the business eventually needs more speed and scale than such a cache can deliver. Most companies are going through digital transformations that have driven up the number of real-time queries and transactions 10x over the last decade.
Customers have come to expect at least 10x faster responsiveness from new entrants. Experiences that took days or hours a decade ago are now done online in hours or seconds. Delivering that new experience requires not only real-time visibility and responsiveness but also a lot more data about each customer and each experience -- about 50x more than a decade ago.
And it requires an "omnichannel" experience, where the same data, within the same transaction, is used across different channels supported by different applications. At some point, separate caches per application -- or even a single common cache on the side -- are overwhelmed by the need to be 10x faster and to handle 10x the volume and 50x the data.
HomeAway, a vacation-rental platform, and many other companies have moved to something that is neither a traditional cache nor a cache-aside pattern.
They've adopted GridGain's enterprise-ready release of Apache Ignite as a common in-memory computing layer -- one that holds an in-memory version of the data rather than just a copy. Ignite sits between their existing and new applications and their underlying databases, directly in the path of all queries and transactions, as a read-through and write-through cache.
Ignite makes this possible with minimal work because it supports SQL and transactions -- developers don't have to write code for a separate "non-SQL" cache. Whenever a transaction occurs, it is passed through to the underlying store, and upon commit the data is updated in Ignite, in memory, so the in-memory version always matches the underlying store.
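Below is a rough sketch of how read-through and write-through caching can be configured with Ignite's Java API. The User value type and the UserCacheStore, which would wrap the application's existing JDBC or ORM access to the database, are illustrative placeholders; setReadThrough, setWriteThrough, and the cache store factory are the standard Ignite configuration hooks for this pattern.

```java
import javax.cache.Cache;
import javax.cache.configuration.FactoryBuilder;
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.store.CacheStoreAdapter;
import org.apache.ignite.configuration.CacheConfiguration;

// Illustrative value type for the sketch.
class User implements java.io.Serializable {
    long id; String name;
    User(long id, String name) { this.id = id; this.name = name; }
}

// Placeholder store: in a real deployment these methods would run the
// application's existing SQL against the underlying database.
class UserCacheStore extends CacheStoreAdapter<Long, User> {
    @Override public User load(Long key) {
        // SELECT ... FROM users WHERE id = ?  (placeholder)
        return null;
    }
    @Override public void write(Cache.Entry<? extends Long, ? extends User> entry) {
        // INSERT or UPDATE the row in the underlying database (placeholder)
    }
    @Override public void delete(Object key) {
        // DELETE the row in the underlying database (placeholder)
    }
}

public class ReadWriteThroughExample {
    public static void main(String[] args) {
        CacheConfiguration<Long, User> cfg = new CacheConfiguration<>("users");

        cfg.setReadThrough(true);   // cache misses are loaded from the store
        cfg.setWriteThrough(true);  // writes are pushed to the store on commit
        cfg.setCacheStoreFactory(FactoryBuilder.factoryOf(UserCacheStore.class));

        try (Ignite ignite = Ignition.start()) {
            IgniteCache<Long, User> users = ignite.getOrCreateCache(cfg);

            users.put(1L, new User(1L, "Alice")); // written through to the database
            User u = users.get(2L);               // loaded from the database on a miss
        }
    }
}
```

Because the writes pass through to the database and the committed value stays in memory, the application keeps its SQL and its transactional semantics while reads are served at memory speed.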
This new approach, an evolution of the in-memory data grid (IMDG), delivers in-memory speed and virtually unlimited horizontal scalability. The speed comes from Ignite becoming the new in-memory system of record, able to handle any existing or new workload without impacting the underlying data store.
The scalability comes from Ignite's shared-nothing, scale-out architecture. As Chris Barry from HomeAway pointed out when explaining why they didn't build their current architecture on Redis or Memcached, you can't avoid the laws of physics.
At some point, a traditional cache stops scaling if it has to keep moving data across the network. Ignite solves that problem with data affinity and collocated processing -- features that have delivered linear horizontal scalability on commodity servers even at petabyte memory scale -- making it a powerful Redis alternative.
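As a rough illustration of those two ideas, the sketch below uses Ignite's Java compute API with an illustrative "balances" cache: instead of pulling a value across the network, a small closure is sent to the node that owns the key. Related entries can likewise be pinned to the same node by marking a field of a composite key with @AffinityKeyMapped.

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;

public class CollocatedProcessingExample {
    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            IgniteCache<Long, Double> balances = ignite.getOrCreateCache("balances");
            balances.put(42L, 100.0);

            // Collocated processing: run the closure on whichever node owns key 42,
            // so the value is read from that node's local memory instead of being
            // shipped across the network to the caller.
            ignite.compute().affinityRun("balances", 42L, () -> {
                Double bal = Ignition.localIgnite()
                    .<Long, Double>cache("balances")
                    .localPeek(42L);
                System.out.println("Balance read on the owning node: " + bal);
            });

            // Data affinity: related entries (e.g. all of a customer's orders) can be
            // kept on the same node by annotating the shared field of a composite key
            // with @AffinityKeyMapped, so joins and computations stay local as well.
        }
    }
}
```

Keeping the computation next to the data is what removes the network hop that eventually caps the throughput of a remote cache.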
By expanding it one application at a time, companies like HomeAway have been building a new, common in-memory data layer that provides the speed and scale they need. It also simplifies development across existing apps, new apps, transactions, analytics, streaming, and machine learning.
For a more detailed analysis, you can read the GridGain® and Redis® feature comparison.
If you want to learn more, watch this recorded webinar from GridGain's director of product management, Denis Magda (he's also Apache Ignite's PMC Chair): Redis Replaced: Why Companies Now Choose Apache® Ignite™ to Improve Application Speed and Scale. The webinar is free, but registration is required. You can do that here.