This post covers data integration and processing in Industrial IoT (IIoT, also known as Industry 4.0 or the automation industry). Apache Kafka, its ecosystem (Kafka Connect, KSQL) and Apache PLC4X are a great open source choice for implementing this integration end to end in a scalable, reliable and flexible way.
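As a rough illustration of the end-to-end idea, the following Java sketch reads a single value from a PLC via PLC4X and forwards it to a Kafka topic. The PLC address, field syntax and topic name are placeholders, and the PLC4X Java API differs slightly between releases, so treat this as a conceptual sketch rather than production code.

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.plc4x.java.PlcDriverManager;
import org.apache.plc4x.java.api.PlcConnection;
import org.apache.plc4x.java.api.messages.PlcReadRequest;
import org.apache.plc4x.java.api.messages.PlcReadResponse;

public class PlcToKafkaBridge {

    public static void main(String[] args) throws Exception {
        // Kafka producer that forwards machine data into a topic
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props);
             // Hypothetical S7 PLC address -- replace with your device
             PlcConnection plc = new PlcDriverManager().getConnection("s7://192.168.0.1")) {

            // Read one field from the PLC (the address syntax is driver-specific)
            PlcReadRequest request = plc.readRequestBuilder()
                    .addItem("temperature", "%DB1.DBD0:REAL")
                    .build();
            PlcReadResponse response = request.execute().get();

            // Forward the value to the (hypothetical) "machine-sensors" topic
            String value = String.valueOf(response.getFloat("temperature"));
            producer.send(new ProducerRecord<>("machine-sensors", "machine-1", value));
        }
    }
}
```

In a production deployment, the same flow would more likely run continuously (or via a Kafka Connect source connector) instead of a one-shot read, with KSQL handling downstream filtering and aggregation.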
Machine Learning / Deep Learning models can be used in different ways to make predictions: embedded natively in the application, or hosted in a remote model server, in which case you combine stream processing with the RPC / request-response paradigm. This blog post shows examples of stream processing vs. RPC model serving using Java, Apache Kafka, Kafka Streams, gRPC and TensorFlow Serving.
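A minimal Kafka Streams sketch of the stream processing + RPC pattern could look like the following. The gRPC call to TensorFlow Serving is only stubbed out here, since the PredictionService classes are generated from the TensorFlow Serving protos; the topic names are likewise hypothetical.

```java
import java.util.Properties;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;

public class StreamingModelServing {

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "model-serving-app");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        StreamsBuilder builder = new StreamsBuilder();

        // Read raw events, score each one via the remote model server, write predictions
        KStream<String, String> events = builder.stream("input-events");
        events.mapValues(StreamingModelServing::predict)
              .to("predictions");

        new KafkaStreams(builder.build(), props).start();
    }

    // Placeholder for the RPC call to TensorFlow Serving. In the real application
    // this would build a PredictRequest, call the gRPC stub generated from the
    // TensorFlow Serving PredictionService protos, and serialize the response.
    private static String predict(String event) {
        return "prediction-for-" + event;
    }
}
```

The alternative, embedding the model natively, would replace the remote call inside mapValues() with a local inference call and avoid the network hop, at the cost of coupling model deployment to the streaming application.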
This blog explains the different components of a hybrid integration architecture, their deployment models, when to use them, and their target audience. Each section then shows how TIBCO’s Hybrid Integration Platform maps to these components.
This article shows the components available for a Hybrid Integration Architecture. The goal is not to compare vendor offerings but to explain the concepts and benefits of each component in general and how they relate to each other, including Hybrid Integration Platform (HIP), cloud-native middleware, PaaS, Docker, iPaaS, iSaaS, API Management, and others.
This article discusses how relevant microservices, containers and a cloud-native architecture are for middleware. It is remarkable how fast enterprises of all sizes are moving forward with these topics!
Slide deck from OOP 2016: Comparison of Frameworks and Products for Big Data Log Analytics and ITOA, e.g. the open source ELK stack, TIBCO LogLogic / Unity, Splunk, and Papertrail; the relation to Hadoop is also discussed.
In this blog post, I will show you how to "ETL" all kinds of data to Amazon’s cloud data warehouse Redshift with Talend’s big data components. You do not need to be a cloud or DWH expert, or an expert developer, to integrate with Redshift. It is very easy with Talend’s integration solutions: just drag & drop, configure, and add some graphical mappings / transformations (if necessary); the code is generated and the job runs. With Talend, you can easily "ETL" all data from different sources to Redshift and store it there for under $1,000 per terabyte per year – even with the open source version!
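For readers curious what such a generated bulk-load job boils down to, a hand-coded equivalent is typically a Redshift COPY from S3 issued over JDBC. The following Java sketch assumes a hypothetical cluster, bucket, table and IAM role, and requires the Redshift JDBC driver on the classpath.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class RedshiftCopyExample {

    public static void main(String[] args) throws Exception {
        // Placeholder JDBC URL and credentials for the Redshift cluster
        String url = "jdbc:redshift://my-cluster.example.eu-west-1.redshift.amazonaws.com:5439/dev";

        try (Connection conn = DriverManager.getConnection(url, "user", "password");
             Statement stmt = conn.createStatement()) {

            // Bulk-load CSV files staged in S3 into a target table via COPY
            stmt.execute(
                "COPY sales FROM 's3://my-bucket/sales/' " +
                "IAM_ROLE 'arn:aws:iam::123456789012:role/RedshiftCopyRole' " +
                "CSV");
        }
    }
}
```

With Talend, this staging and loading is handled by the graphical components, so you configure the connection and mappings instead of writing this code yourself.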