In-Memory Data Grid
What is an In-Memory Data Grid?
An in-memory data grid (IMDG) is an advanced distributed cache deployed in the write-through/write-behind pattern. It integrates out of the box with your underlying database and keeps the data in sync. IMDGs also allow data nodes to share the same memory space as the application layer.
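As a rough illustration, the Java sketch below shows how such a cache might be configured with Apache Ignite (covered later in this article). The `PersonStore` class and its mapping to a relational table are hypothetical, shown only to indicate where the database integration plugs in.

```java
import javax.cache.Cache;
import javax.cache.configuration.FactoryBuilder;
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.store.CacheStoreAdapter;
import org.apache.ignite.configuration.CacheConfiguration;

public class WriteThroughExample {
    /** Hypothetical store that maps cache operations onto a relational table. */
    public static class PersonStore extends CacheStoreAdapter<Long, String> {
        @Override public String load(Long key) {
            return null; // SELECT ... WHERE id = key (omitted for brevity)
        }
        @Override public void write(Cache.Entry<? extends Long, ? extends String> entry) {
            // INSERT or UPDATE the row for entry.getKey() (omitted for brevity)
        }
        @Override public void delete(Object key) {
            // DELETE the row for key (omitted for brevity)
        }
    }

    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            CacheConfiguration<Long, String> cfg = new CacheConfiguration<>("persons");

            // The grid, not the application, keeps the underlying store in sync:
            // write-through pushes every update to the store, and write-behind
            // batches those writes and flushes them asynchronously.
            cfg.setCacheStoreFactory(FactoryBuilder.factoryOf(PersonStore.class));
            cfg.setReadThrough(true);
            cfg.setWriteThrough(true);
            cfg.setWriteBehindEnabled(true);

            IgniteCache<Long, String> persons = ignite.getOrCreateCache(cfg);

            // This put is transparently propagated to the backing database.
            persons.put(1L, "Alice");
        }
    }
}
```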
A key IMDG feature is co-location (data affinity), which stores related data on the same node, enabling extremely low latency alongside high-throughput computing.
Because co-located applications sit right next to the data, solutions built on in-memory data grids don't need to move data across the network between the grid and the application for processing. In-memory data grids process queries at in-memory speeds and scale to hundreds or thousands of nodes, providing exceptional scalability and throughput.
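To make co-location concrete, here is a small sketch using Apache Ignite's affinity key annotation. The `OrderKey` class, the `orders` cache, and the customer/order model are illustrative assumptions.

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.affinity.AffinityKeyMapped;

public class AffinityExample {
    /** Composite key: orders are partitioned by customer, not by order id. */
    public static class OrderKey {
        private final long orderId;

        // All orders that share a customerId map to the same partition,
        // so they live on the same node as that customer's other data.
        @AffinityKeyMapped
        private final long customerId;

        public OrderKey(long orderId, long customerId) {
            this.orderId = orderId;
            this.customerId = customerId;
        }
    }

    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            IgniteCache<OrderKey, String> orders = ignite.getOrCreateCache("orders");

            // Both orders for customer 42 are stored on the same node, so a
            // computation sent to that node sees them without network hops.
            orders.put(new OrderKey(1, 42), "2 tickets, screen 5");
            orders.put(new OrderKey(2, 42), "1 ticket, screen 9");
        }
    }
}
```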
IMDGs typically support a wide array of developer APIs, including SQL queries, key-value operations, compute, and ACID transactions. You can leverage those APIs to create a common data access layer that aggregates and processes data from multiple on-premises and cloud-based sources.
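As an example of that multi-API access, the sketch below (Apache Ignite syntax, with a hypothetical `City` class) writes an entry through the key-value API and reads it back with SQL.

```java
import java.util.List;
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.query.SqlFieldsQuery;
import org.apache.ignite.cache.query.annotations.QuerySqlField;
import org.apache.ignite.configuration.CacheConfiguration;

public class MultiApiExample {
    /** Hypothetical value type, exposed to SQL via annotations. */
    public static class City {
        @QuerySqlField(index = true)
        private final String name;

        @QuerySqlField
        private final int population;

        public City(String name, int population) {
            this.name = name;
            this.population = population;
        }
    }

    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            CacheConfiguration<Long, City> cfg = new CacheConfiguration<>("cities");
            cfg.setIndexedTypes(Long.class, City.class); // make City queryable via SQL

            IgniteCache<Long, City> cities = ignite.getOrCreateCache(cfg);

            // Key-value API: simple put/get by key.
            cities.put(1L, new City("Denver", 715_522));

            // SQL API: the same data, queried declaratively.
            List<List<?>> rows = cities
                .query(new SqlFieldsQuery(
                    "SELECT name, population FROM City WHERE population > ?").setArgs(500_000))
                .getAll();

            rows.forEach(row -> System.out.println(row.get(0) + ": " + row.get(1)));
        }
    }
}
```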
Why They’re Useful
In-memory data grids are especially helpful when a business or organization requires working with large datasets at low latency and high throughput. For example, imagine a country-wide movie ticket company that manages a website and a mobile app with 30 million daily users, 1.6 billion daily page views, and 20,000 visits per second.
With many concurrent users buying tickets at the same time, the company must complete each reservation quickly and against up-to-date seat availability. Otherwise, it risks selling overbooked movie theater seats, or it must make the next user wait until the previous user's reservation completes, creating an ever-growing backlog.
In this scenario, an in-memory data grid is superior to a distributed cache because the grid's co-located applications compute directly against the data without incurring network latency. And because a data grid is a distributed cache deployed in the write-through/write-behind pattern, the grid is responsible for synchronizing with the database or other underlying storage; applications no longer need to do this themselves (as they would in the cache-aside pattern). The in-memory data grid seamlessly accelerates and offloads your existing databases and back-end systems.
Also, in-memory data grids usually support essential developer APIs such as SQL, key-value, and ACID transactions. These APIs make it easier to migrate to in-memory computing without ripping out and replacing existing systems. Data grids enable an evolution rather than a revolution.
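For instance, the grid's transactional API can ensure that two concurrent buyers never reserve the same seat in the ticketing scenario above. The sketch below uses Apache Ignite transactions; the `seats` cache, its key format, and the status values are hypothetical.

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.CacheAtomicityMode;
import org.apache.ignite.configuration.CacheConfiguration;
import org.apache.ignite.transactions.Transaction;
import static org.apache.ignite.transactions.TransactionConcurrency.PESSIMISTIC;
import static org.apache.ignite.transactions.TransactionIsolation.REPEATABLE_READ;

public class ReservationExample {
    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            CacheConfiguration<String, String> cfg = new CacheConfiguration<>("seats");
            cfg.setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL); // enable ACID transactions

            IgniteCache<String, String> seats = ignite.getOrCreateCache(cfg);
            seats.put("screening-42:seat-A7", "AVAILABLE");

            // A pessimistic, repeatable-read transaction locks the seat entry, so a
            // concurrent buyer either waits or finds the seat already sold.
            try (Transaction tx = ignite.transactions().txStart(PESSIMISTIC, REPEATABLE_READ)) {
                String status = seats.get("screening-42:seat-A7");

                if ("AVAILABLE".equals(status)) {
                    seats.put("screening-42:seat-A7", "SOLD:user-123");
                    tx.commit();   // atomically confirm the reservation
                } else {
                    tx.rollback(); // seat already taken; nothing changes
                }
            }
        }
    }
}
```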
In-Memory Data Grid Use Cases
Fraud detection is one good use case for in-memory data grids. The growing sophistication of online criminal activity creates a challenging problem for payment gateway platforms. Detecting fraud days or even hours after the fact is no longer good enough. Instead, organizations must assess the legitimacy of every transaction in real time.
Fraudsters can strike quickly, cycling through multiple credit cards until one succeeds. To counter such attacks, organizations can use in-memory data grids to perform real-time fraud detection, using purchase patterns to determine whether each transaction is legitimate without compromising the shopping platform's overall performance.
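One way to express this with an IMDG is to send the fraud check to the node that already holds the cardholder's recent purchase history. The sketch below uses Apache Ignite's affinity-aware compute API; the `purchaseHistory` cache, the data model, and the scoring rule are illustrative assumptions, not a real fraud model.

```java
import java.util.List;
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;

public class FraudCheckExample {
    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            // Hypothetical cache of recent purchase amounts, keyed by card number.
            IgniteCache<String, List<Double>> history = ignite.getOrCreateCache("purchaseHistory");
            history.put("4111-0000-0000-0000", List.of(25.0, 12.5, 40.0));

            String card = "4111-0000-0000-0000";
            double newCharge = 5_000.0;

            // affinityCall routes the job to the node that owns this card's data, so the
            // check runs next to the history instead of pulling it over the network.
            boolean suspicious = ignite.compute().affinityCall("purchaseHistory", card, () -> {
                IgniteCache<String, List<Double>> local =
                    Ignition.localIgnite().cache("purchaseHistory");
                List<Double> recent = local.get(card);

                // Illustrative rule: flag charges far above the recent average.
                double avg = recent.stream().mapToDouble(Double::doubleValue).average().orElse(0);
                return newCharge > avg * 10;
            });

            System.out.println(suspicious ? "Flag for review" : "Approve");
        }
    }
}
```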
Another use case candidate is border security. Customs and immigration personnel must handle a heavy influx of passengers arriving through seaports, airports, roads, and railroads. The sheer volume of passport scans and biometric verifications makes this challenging. Fortunately, an in-memory data grid can provide the distributed architecture and compute power required for real-time security analysis against vast datasets.
In-Memory Data Grid in Apache Ignite and GridGain
Apache Ignite is a distributed database for high-performance computing with in-memory speed. Application developers frequently choose it for their in-memory data grid use cases. Ignite supports a wide array of developer APIs, including SQL queries, key-value, compute, and ACID transactions, which help you set up and run data grids.
The open-source Apache Ignite project provides the foundation for GridGain. The enterprise-grade GridGain In-Memory Computing Platform enables developers to deploy Ignite securely at a global scale, without any downtime.
GridGain provides a general-purpose in-memory computing platform that you can configure and use as a data grid. Similarly, you can configure and use the same platform as an in-memory cache or digital integration hub.
Conclusion
In-memory caching technology paved the way for more potent forms of in-memory computing, such as data grids. But, as developers, we understand that technology alone is not enough for a business to succeed. We must stay informed about which architectures, tools, and services best address our company's needs when it comes to in-memory data grids.
As the leading edge of in-memory computing, Apache Ignite and GridGain help us address those needs. To learn more about how to leverage the power of Apache Ignite, try the GridGain In-Memory Computing Platform for free and discover how to solve your data grid challenges with a production-ready, cloud-native platform.
Ready to learn more?
Explore our products and services →