KSQL Deep Dive – The Open Source Streaming SQL Engine for Apache Kafka

I gave a workshop at the Kafka Meetup Tel Aviv in May 2018: “KSQL Deep Dive – The Open Source Streaming SQL Engine for Apache Kafka”.

Here are the agenda, slides and video recording.

KSQL – The Open Source Streaming SQL Engine for Apache Kafka

KSQL is the open-source, Apache 2.0 licensed streaming SQL engine on top of Apache Kafka. It aims to simplify stream processing and make it available to everyone. Even though it is simple to use, KSQL is built for mission-critical and scalable production deployments (using Kafka Streams under the hood).
Benefits of using KSQL include: no coding required, no additional analytics cluster needed, streams and tables as first-class constructs, and access to the rich Kafka ecosystem. This session introduces the concepts and architecture of KSQL. Use cases such as Streaming ETL, Real-Time Stream Monitoring, or Anomaly Detection are discussed. A live demo shows how to set up and use KSQL quickly and easily on top of your Kafka ecosystem.
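
To give a concrete impression of what “streams and tables as first-class constructs” looks like, here is a minimal KSQL sketch. The topic name, columns, and formats are illustrative assumptions, not taken from the workshop demos:

  -- Register an existing Kafka topic as a stream (schema and format are assumptions)
  CREATE STREAM pageviews (viewtime BIGINT, userid VARCHAR, pageid VARCHAR)
    WITH (KAFKA_TOPIC='pageviews', VALUE_FORMAT='JSON');

  -- Derive a continuously updated table: page views per user per one-minute window
  CREATE TABLE pageviews_per_user AS
    SELECT userid, COUNT(*) AS views
    FROM pageviews
    WINDOW TUMBLING (SIZE 1 MINUTE)
    GROUP BY userid;

The second statement runs as a persistent query on the KSQL servers and keeps updating the table as new records arrive in the underlying topic.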

If you want to get started, try out the KSQL quick start guide. It gets you started in 10 minutes, either locally on your laptop or in a Docker environment.

Agenda

  1. Apache Kafka Ecosystem
  2. Kafka Streams as Foundation for KSQL
  3. Motivation for KSQL
  4. KSQL Concepts
  5. Live Demo #1 – Intro to KSQL
  6. KSQL Architecture
  7. Live Demo #2 – Clickstream Analysis
  8. Building a User Defined Function (Example: Machine Learning) – see the sketch after this list
  9. Getting Started
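
As a teaser for agenda item 8, the sketch below shows roughly how a custom UDF could be called from KSQL once it has been implemented in Java and deployed to the KSQL servers. The function name ANOMALY_SCORE, the payments stream, and its columns are hypothetical placeholders rather than the actual workshop example:

  -- Filter a stream with a hypothetical machine learning UDF (implemented in Java)
  CREATE STREAM suspicious_payments AS
    SELECT txn_id, amount, ANOMALY_SCORE(amount) AS score
    FROM payments
    WHERE ANOMALY_SCORE(amount) > 0.8;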

Slides

The slides are available on SlideShare (www.slideshare.net).

Video Recording

There was a YouTube live stream. Unfortunately, we had some technical problems, so the audio quality of the first half is not great. Sorry for that. I still want to share it. The second half has good sound quality:

I am looking forward to your feedback. Please feel free to ask questions in the Confluent Slack community (where you can also get help from the KSQL engineers) or create GitHub issues if you run into problems or want to contribute to this great open source project.

Kai Waehner builds cloud-native event streaming infrastructures for real-time data processing and analytics.
