Apr 08, 2021 | 3 min read

Streaming Events Among Your Microservices Using Apache Kafka

By: Robert Perrin

Building a system of microservices to support your business’s operations is a challenge faced by architects and executives at companies large and small. Orchestrating flows of data and actions across many services and servers without breeding complexity that slows your development and saps your agility is demanding. Data streaming services are an excellent option for connecting your microservices, maintaining efficiency, and controlling costs. Here’s how to get started connecting your microservices with the data streaming service Apache Kafka.

What are Data Streaming Services?

Data streaming services exchange data through queuing and messaging, two essential building blocks of a modern system of distributed microservices. At their simplest, queues deliver a sequence of messages from a producer component to a consumer component. Queues are often paired with a message broker component to make the functionality more versatile and allow looser coupling between components. The broker manages message distribution to multiple consumers, so the producer never needs to know how many consumers its messages are routed to, or what kind.
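
As a rough illustration of the pattern (not Kafka-specific), here is a toy Python sketch of a producer publishing to a broker that fans each message out to every subscribed consumer. The class, topic name, and consumers are all hypothetical and exist only to show the decoupling.

```python
from collections import defaultdict

class ToyBroker:
    """A minimal in-memory stand-in for a message broker (illustration only)."""

    def __init__(self):
        self.subscribers = defaultdict(list)  # topic name -> list of consumer callbacks

    def subscribe(self, topic, consumer):
        self.subscribers[topic].append(consumer)

    def publish(self, topic, message):
        # The producer only knows the topic; the broker handles delivery
        # to however many consumers happen to be subscribed.
        for consumer in self.subscribers[topic]:
            consumer(message)

broker = ToyBroker()
broker.subscribe("orders", lambda msg: print(f"billing saw: {msg}"))
broker.subscribe("orders", lambda msg: print(f"shipping saw: {msg}"))
broker.publish("orders", {"order_id": 42, "total": 19.99})
```

The producer publishes once; adding a third consumer requires no change to the producer, which is the flexibility a broker buys you.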

Apache Kafka

When considering how to connect your microservices efficiently, Kafka should not be overlooked. Kafka is an option for data streaming that combines queuing, messaging, and more, encompassing functionality that would require multiple services from many of its competitors. If you’re familiar with the AWS ecosystem, Kafka is similar to Kinesis, but it also serves the purposes of SQS queues and SNS topics and, to some extent, even data storage systems like DynamoDB or RDS.

How to Get Started

Kafka is implemented as a distributed log of immutable messages, referred to as a topic. Entries are created by the producer but can’t be updated (only removed once they exceed their time to live). The log is partitioned and distributed, so no single server needs to hold its entire contents or serve all of its consumers. Each consumer or consumer group maintains an offset into the log, which advances to indicate the record it’s reading.
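
As a minimal sketch of what this looks like in practice, the example below uses the third-party kafka-python client to append records to a topic and read them back. The broker address, topic name, group name, and JSON serialization are assumptions made for illustration, not requirements.

```python
import json
from kafka import KafkaProducer, KafkaConsumer  # pip install kafka-python

# Producer: appends immutable records to the "orders" topic (assumed name).
producer = KafkaProducer(
    bootstrap_servers="localhost:9092",  # assumed broker address
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)
producer.send("orders", value={"order_id": 42, "status": "created"})
producer.flush()

# Consumer: reads the log from its current offset onward.
consumer = KafkaConsumer(
    "orders",
    bootstrap_servers="localhost:9092",
    group_id="billing-service",    # assumed consumer group name
    auto_offset_reset="earliest",  # start from the beginning if no committed offset exists
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
)
for record in consumer:
    print(record.partition, record.offset, record.value)
```

Each record the consumer receives carries its partition and offset, which is exactly the position information Kafka tracks on the consumer’s behalf.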

In simple setups, the offset advances automatically, so the topic behaves like a first-in, first-out queue from the consumer’s perspective. Kafka’s configurable offset management lets each consumer read the sequence of log messages independently and in linear order, based on when they were added. Because each consumer’s offset is independent, many consumers can read the same messages and perform different operations on their contents.
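
To illustrate independent offsets, the sketch below (again with kafka-python, and with hypothetical group names) creates consumers in two different consumer groups. Each group receives every message on the topic and tracks its own position, here with explicit offset commits instead of automatic ones.

```python
from kafka import KafkaConsumer

def make_consumer(group_id):
    # Each consumer group gets its own committed offsets,
    # so every group sees the full sequence of messages.
    return KafkaConsumer(
        "orders",
        bootstrap_servers="localhost:9092",
        group_id=group_id,
        auto_offset_reset="earliest",
        enable_auto_commit=False,  # commit offsets explicitly rather than automatically
    )

billing = make_consumer("billing-service")      # hypothetical group
analytics = make_consumer("analytics-service")  # hypothetical group

for tp, batch in billing.poll(timeout_ms=1000).items():
    for record in batch:
        print("billing processing offset", record.offset)
billing.commit()  # advance billing's offsets without affecting analytics

# The analytics group reads the very same records, at its own pace.
for tp, batch in analytics.poll(timeout_ms=1000).items():
    for record in batch:
        print("analytics processing offset", record.offset)
```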

Benefits

Kafka combines high throughput with low latency, making it an ideal data streaming option. As a software package that can be hosted in its own cluster, Kafka provides additional configuration flexibility at the cost of complexity and maintenance. That flexibility enables use cases that other data streaming services can’t easily support. For example, message retention can be unlimited in Kafka, letting you use it as a long-term data store as well as a repository of transient messages.
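
As a sketch of that long-term-store configuration, the example below creates a topic with unlimited retention using kafka-python’s admin client; the topic name and sizing are assumptions, and retention.ms set to -1 is the Kafka topic setting that disables time-based deletion.

```python
from kafka.admin import KafkaAdminClient, NewTopic  # pip install kafka-python

admin = KafkaAdminClient(bootstrap_servers="localhost:9092")  # assumed broker address

# retention.ms=-1 disables time-based deletion, so records are kept indefinitely.
admin.create_topics([
    NewTopic(
        name="order-history",     # hypothetical topic used as a long-term store
        num_partitions=3,
        replication_factor=1,     # a single replica is enough for a local sketch
        topic_configs={"retention.ms": "-1"},
    )
])
```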

DragonSpears’ engineers are eager to help you find and implement the technology stack best suited to your business needs. Contact us to take the next step in building out your organization’s data streaming infrastructure.