Bringing the Mainframe to the Data Lake: How Advanced Analytics Can Leverage Real-time Transactions


“The mainframe is going away” is as true now as it was 10, 20, and 30 years ago, which is to say not true at all. Mainframes still handle many of the most critical business transactions. They were, however, built for an era when batch data movement was the norm, and they can be difficult to integrate into today’s data-driven, real-time, analytics-focused business processes and the environments that support them.

Unlocking Mainframe Data

Most businesses looking to integrate mainframes into a broader analytical environment take a brute-force approach, querying the mainframe directly to access the data they need. This approach can be costly in environments where billing is based on how many MIPS (millions of instructions per second) the system uses, because every new query consumes more instructions, adding to that month’s bill. In addition, a significant amount of tuning and optimization is typically required to get a data warehouse sitting on a mainframe to support the kind of broad, deep, and fast analysis that businesses need today.
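To see why this cost model punishes ad-hoc analytics, consider a rough back-of-envelope calculation. The numbers below (queries per day, MIPS-seconds per query, and the rate) are entirely hypothetical, chosen only to illustrate how per-query instruction charges compound over a billing period:

```python
# Illustrative sketch with made-up numbers: how usage-based mainframe
# billing turns every analytical query into a recurring cost.

def monthly_query_cost(queries_per_day: int,
                       mips_seconds_per_query: float,
                       dollars_per_mips_second: float,
                       days: int = 30) -> float:
    """Rough monthly cost of running analytical queries directly on the mainframe."""
    return queries_per_day * mips_seconds_per_query * dollars_per_mips_second * days

# e.g. 500 dashboard refreshes a day, each consuming 2 MIPS-seconds,
# at a hypothetical rate of $0.10 per MIPS-second:
cost = monthly_query_cost(500, 2.0, 0.10)
print(f"${cost:,.2f} per month")  # $3,000.00 per month
```

Every new dashboard, report, or data scientist multiplies the first factor, which is why teams look for ways to move the query workload off the mainframe entirely.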

One solution for a business facing these challenges is to offload the data from the mainframe to another environment such as Apache® Hadoop™ or an MPP database. Mainframe offload is an effective approach for some business environments, but it involves an expensive, time-consuming implementation. And more often than not, the result still falls short of what businesses really need: the ability to analyze data from multiple data types and sources in real time. Doing so requires a new kind of solution, one that keeps mainframe data current and available to the broader ecosystem without all the environmental complexity and without breaking the bank.

Apache Kafka Brings Mainframe Data to Life

Some organizations are leveraging Apache Kafka’s modern, distributed architecture to move mainframe data in real time. Unlike Hadoop or database offload, which first store and index data so that it can be accessed by querying, Kafka captures data for processing in flight. Although it may look like a messaging system, with producers publishing messages that are available to consumers in milliseconds, Kafka works more like a distributed database for your mainframe and other data.
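The core idea behind that claim is Kafka’s log model: a topic is an ordered, append-only log, and each consumer group tracks its own read position (offset), so many independent consumers can replay the same stream in order. The toy broker below is a minimal in-memory sketch of that idea, not the real Kafka client API; the topic name and record shapes are invented for illustration:

```python
# Toy in-memory sketch of Kafka's log model (NOT the real client API):
# producers append records to a topic's ordered log, and each consumer
# group keeps its own offset, so independent consumers read the same
# stream without interfering with one another.
from collections import defaultdict

class ToyBroker:
    def __init__(self):
        self.topics = defaultdict(list)   # topic name -> append-only log
        self.offsets = defaultdict(int)   # (group, topic) -> next offset to read

    def produce(self, topic: str, record: dict) -> None:
        """Append a record to the end of the topic's log."""
        self.topics[topic].append(record)

    def consume(self, group: str, topic: str, max_records: int = 10) -> list:
        """Return the next batch for this consumer group and advance its offset."""
        start = self.offsets[(group, topic)]
        records = self.topics[topic][start:start + max_records]
        self.offsets[(group, topic)] += len(records)
        return records

broker = ToyBroker()
broker.produce("mainframe.cdc", {"op": "UPDATE", "table": "ACCOUNTS", "id": 42})
broker.produce("mainframe.cdc", {"op": "INSERT", "table": "ORDERS", "id": 7})

# Two independent consumer groups each see the full stream, in order:
print(broker.consume("analytics", "mainframe.cdc"))
print(broker.consume("fraud-detection", "mainframe.cdc"))
```

Because consumption only advances an offset and never deletes data, new downstream systems can be attached later and replay history, which is what makes the log feel more like a distributed database than a fire-and-forget message queue.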

[Diagram: Mainframe CDC — Confluent and Attunity provide seamless mainframe integration and query offload for modern, distributed analytics environments via Apache Kafka.]

It’s a New Day for the Mainframe

With Attunity Replicate, you can unlock your mainframe data without incurring the complexity and expense that come with sending ongoing queries into the mainframe database. With Apache Kafka, you can deliver that data in real time to the most demanding analytics environments. And with Confluent, you can ensure that your analytics environment includes the broadest possible range of data sources and destinations, while ensuring true enterprise-grade functionality. The combined solution puts mainframe data right at the heart of your modern, distributed analytics environment.
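In a change-data-capture (CDC) pipeline like this, downstream consumers typically receive a stream of insert/update/delete events and replay them to keep an analytics-side copy in sync. The sketch below illustrates that replay step; the JSON event format and field names are hypothetical, not Attunity’s or Confluent’s actual wire format:

```python
# Minimal sketch (hypothetical event format) of what a downstream CDC
# consumer does: replay insert/update/delete events off the stream to
# maintain an up-to-date, query-ready materialized view.
import json

def apply_change(view: dict, event_json: str) -> dict:
    """Apply one CDC event to an in-memory materialized view keyed by record id."""
    event = json.loads(event_json)
    key, op = event["id"], event["op"]
    if op in ("INSERT", "UPDATE"):
        view[key] = event["data"]
    elif op == "DELETE":
        view.pop(key, None)
    return view

view = {}
apply_change(view, '{"op": "INSERT", "id": 1, "data": {"balance": 100}}')
apply_change(view, '{"op": "UPDATE", "id": 1, "data": {"balance": 250}}')
print(view)  # {1: {'balance': 250}}
```

The analytics environment queries this local view, so the mainframe is never hit with ad-hoc queries, yet the data stays current to within the latency of the stream.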

