Digital businesses succeed by achieving greater real-time intimacy with their customers across every touchpoint and channel. Nothing delivers that intimacy, along with faster business insights and results, quite like stream computing.
In the 21st century, stream computing is becoming the foundation for transformation of all customer-facing and back-end business processes. Streaming is as fundamental to today’s always-on economy as relational data architectures were to the prior era of enterprise computing.
At the heart of this revolution are advances in real-time event processing, continuous computing, in-memory data persistence and change data capture. When deployed within an enterprise’s cloud computing infrastructure, these technologies drive a continual feed of real-time data updates, contextual insights, optimized experiences and fast results into all business processes.
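To make change data capture concrete: production CDC tools typically tail a database's transaction log to emit change events with minimal latency, but the shape of the output is easy to illustrate with a toy snapshot diff. The sketch below is a minimal, hypothetical illustration in Python, not any vendor's API; the `ChangeEvent` type and `diff_snapshots` function are invented for this example.

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class ChangeEvent:
    op: str       # "insert", "update" or "delete"
    key: str
    value: object

def diff_snapshots(before: Dict[str, object],
                   after: Dict[str, object]) -> List[ChangeEvent]:
    """Emit the change events that transform snapshot `before` into `after`."""
    events: List[ChangeEvent] = []
    for key, value in after.items():
        if key not in before:
            events.append(ChangeEvent("insert", key, value))
        elif before[key] != value:
            events.append(ChangeEvent("update", key, value))
    for key in before:
        if key not in after:
            events.append(ChangeEvent("delete", key, None))
    return events

# Two snapshots of a table keyed by primary key:
events = diff_snapshots({"a": 1, "b": 2}, {"a": 1, "b": 3, "c": 4})
# -> an "update" for key "b" and an "insert" for key "c"
```

A stream of events like these, fed continuously into downstream systems, is what keeps cloud-based analytics and customer-facing applications synchronized with operational data in real time.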
Over the coming decade, data-at-rest architectures — such as data warehouses, data lakes and transactional data stores — will become less central to enterprise data strategies. In Wikibon’s recent big data analytics market update, we uncovered several trends that point toward a new era in which stream computing is the foundation of most data architectures:
- Media and entertainment is a key vertical market for stream computing, relying on back-end cloud infrastructure that supports real-time packaging, loading, processing and artificial intelligence-driven personalization of content delivery.
- Stream computing is the foundation of many new edge applications, including access by mobile, embedded and “internet of things” devices, with back-end infrastructure providing real-time device management and in-stream analytic processing.
- Enterprises are expanding their investments in in-memory, continuous computing, change data capture and other low-latency solutions while converging those investments with their big data at-rest environments, including Hadoop, NoSQL and RDBMSs.
- Streaming environments are evolving to support low-latency, application-level processing of live data in any volume, variety, frequency, format, payload, order or pattern.
- Stream computing backbones are being deployed to manage more stateful, transactional workloads, execute in-stream machine learning and handle other complex orchestrated scenarios that have heretofore been the province of relational databases and other at-rest repositories.
- Online transactional and analytical processing, data transformation, data governance and machine learning are increasingly moving toward low-latency, stateful streaming backbones.
- Vendors are introducing innovative solutions that incorporate streaming platforms, ensuring they can serve as a durable source of truth for diverse applications.
- Databases are being deconstructed and reassembled into new approaches to address emerging requirements, especially the need to handle continuous machine learning DevOps workflows and edge-facing IoT analytics.
- Cloud providers have integrated streaming technologies into the heart of their solution portfolios for mobility, IoT, serverless computing and other key solution patterns.
- Enterprises are migrating more inferencing, training and other workloads toward edge devices that process real-time streams of locally acquired sensor data.
- Open-source streaming environments such as Kafka, Flink, and Spark Structured Streaming are becoming important enterprise big-data platforms.
- Batch-oriented big data deployments are giving way to end-to-end real-time, low-latency streaming environments.
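The kind of low-latency, application-level processing these trends describe — and that platforms such as Kafka Streams, Flink and Spark Structured Streaming provide at scale — often boils down to incremental aggregation over time windows. As a rough, self-contained sketch (plain Python, not any platform's actual API), here is a tumbling-window count that processes one event at a time rather than waiting for a batch:

```python
from collections import defaultdict
from typing import Dict, Iterable, Tuple

def tumbling_window_counts(events: Iterable[Tuple[float, str]],
                           window_seconds: float) -> Dict[Tuple[int, str], int]:
    """Count events per (window, key), updating state one event at a time."""
    counts: Dict[Tuple[int, str], int] = defaultdict(int)
    for timestamp, key in events:
        window = int(timestamp // window_seconds)  # which window the event falls in
        counts[(window, key)] += 1
    return dict(counts)

# A small event stream of (timestamp, event_type) pairs:
stream = [(0.5, "click"), (1.2, "click"), (9.9, "view"), (10.1, "click")]
result = tumbling_window_counts(stream, window_seconds=10.0)
# Window 0 covers [0, 10): two clicks and one view; window 1 covers [10, 20): one click.
```

Real streaming engines add what this toy omits — fault-tolerant state, event-time watermarks, exactly-once delivery — which is precisely why stateful workloads that once required at-rest databases can now run in-stream.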
For a further discussion of these trends, please register here for the webcast “Digital Business Transformation In the Streaming Era.” On Thursday, June 28, at 1 p.m. EDT, I’ll be joined by Clive Bearman of Attunity Ltd. and Mike Boyarski of MemSQL Inc. in a lively session in which we will provide guidance for enterprise data professionals looking to migrate their legacy data architectures to all-streaming designs for complex cloud and edge applications.
This article was originally published on the siliconANGLE blog and was republished on the Attunity blog with permission from the author.
About the Author
James Kobielus is theCUBE’s and Wikibon’s lead analyst for AI, data, data science, deep learning and application development. Previously, Jim was IBM Corp.’s data science evangelist. He managed IBM’s thought leadership, social and influencer marketing programs targeted at developers of big data analytics, machine learning and cognitive computing applications. Prior to his five-year stint at IBM, Jim was an analyst at Forrester Research, Current Analysis and the Burton Group. He is also a prolific blogger, a popular speaker and a familiar face from his many appearances as an expert on theCUBE and at industry events.