This article uses straightforward Spring Boot examples to illustrate how your application can inadvertently lose messages or process them twice due to the Kafka offset commit mechanism. It builds upon the scenarios discussed in two of my previous posts on Kafka and Spring Boot, offering deeper insights.

Source Code

If you’d like to try it out yourself, feel free to use my source code. To do that, clone my sample GitHub repository and follow the instructions below.

How It Works

Before diving into the exercise, let’s explore how Spring Kafka handles offset commit. By default, the Spring Kafka consumer processes messages in BATCH mode, meaning a batch of messages sent by the producer can be received on the consumer side all at once. Typically, only one thread manages this process, responsible for both receiving and processing messages. While this default setup can be customized extensively, understanding these core mechanisms is essential for effective use.

The diagram below illustrates the default scenario. Here’s the key point: the consumer’s offset is only committed to the broker after the entire batch of incoming messages has been processed.

I explained the potential consequences of this approach in my earlier blog post on concurrency with Kafka and Spring Boot. When we examine this mechanism closely, we see that a Kafka topic can have multiple partitions. However, Spring still processes messages in a single thread unless we explicitly configure it to do otherwise.

The diagram below offers a detailed view of this setup. A single consumer thread actively listens for messages across all partitions within a topic. After processing all messages, it commits the offsets on each partition.


Let’s explore how we can improve the situation. Set the concurrency parameter on our listener to match the number of partitions in the topic. You might consider increasing it further, but that would be unnecessary, as any extra threads would remain idle.

In this situation, each thread assigned to a specific partition processes messages in the packet routed to that partition one after another. After processing the packet, the thread commits the offset for its respective partition. See the diagram below for an illustration of this scenario.

[Diagram: concurrent listener threads, each committing the offset for its own partition]

Now that we’ve explored the theory, let’s dive into the practical side. In the next section, we’ll examine the source code.

Use Spring Boot with Kafka

Sending Messages

Let’s begin by implementing the message producer with Spring Kafka. When we use the KafkaTemplate bean to send messages, it batches them by default. We also want to log each message immediately after it is sent. The GET /transactions endpoint allows us to control the destination topic and the total number of messages generated.
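The endpoint could look roughly like the sketch below. This is an illustrative reconstruction, not the exact repository code: the class name, method name, and the Order constructor arguments are my assumptions.

```java
// Illustrative sketch of the producer endpoint (names are assumptions).
@RestController
public class TransactionController {

    private static final Logger LOG = LoggerFactory.getLogger(TransactionController.class);

    private final KafkaTemplate<Long, Order> kafkaTemplate;

    public TransactionController(KafkaTemplate<Long, Order> kafkaTemplate) {
        this.kafkaTemplate = kafkaTemplate;
    }

    @GetMapping("/transactions")
    public void sendTransactions(InputParameters params) {
        for (long i = 0; i < params.getMessages(); i++) {
            Order order = new Order(i, i % 10, (i % 10) + 1, 100, "NEW");
            // Log each message immediately after it is sent
            kafkaTemplate.send(params.getTopic(), order.getId(), order)
                    .thenAccept(result ->
                            LOG.info("Sent: {}", result.getProducerRecord().value()));
        }
    }
}
```

With Spring Kafka 3.x, KafkaTemplate.send() returns a CompletableFuture, which is why the log call can simply be chained with thenAccept().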

For clarity, here is the InputParameters class with the endpoint’s input parameters.
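A minimal sketch of that class might look as follows; the field names topic and messages are assumptions based on how the endpoint is described.

```java
// Input parameters bound from the query string of GET /transactions.
// Field names are illustrative assumptions.
public class InputParameters {

    private String topic;   // destination topic name
    private int messages;   // number of messages to generate

    public String getTopic() { return topic; }
    public void setTopic(String topic) { this.topic = topic; }
    public int getMessages() { return messages; }
    public void setMessages(int messages) { this.messages = messages; }
}
```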

Below is a list of Spring Boot configuration settings for the Kafka producer. It sends messages in JSON format along with an id key.
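A minimal application.yml for such a producer might look like this, assuming a numeric id is used as the record key (the serializer classes are the standard ones from Apache Kafka and Spring Kafka):

```yaml
spring:
  kafka:
    bootstrap-servers: localhost:9092
    producer:
      key-serializer: org.apache.kafka.common.serialization.LongSerializer
      value-serializer: org.springframework.kafka.support.serializer.JsonSerializer
```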

Here’s the Order class that represents the JSON message exchanged between the producer and the consumer.
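As a sketch, it could be a simple POJO like the one below; the exact field names are my assumptions.

```java
// The Order message exchanged between producer and consumer.
// Field names are illustrative assumptions.
public class Order {

    private Long id;
    private Long sourceAccountId;
    private Long targetAccountId;
    private int amount;
    private String status;

    public Order() {}

    public Order(Long id, Long sourceAccountId, Long targetAccountId,
                 int amount, String status) {
        this.id = id;
        this.sourceAccountId = sourceAccountId;
        this.targetAccountId = targetAccountId;
        this.amount = amount;
        this.status = status;
    }

    public Long getId() { return id; }
    public int getAmount() { return amount; }
    public String getStatus() { return status; }
    // remaining getters and setters omitted for brevity
}
```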

Receiving Messages

First, let’s take a look at the list of dependencies. You just need to add the Spring Boot Kafka starter and the jackson-databind library, which handles JSON message conversion.

The message-sending application lets you choose a target topic. Meanwhile, the Spring Boot Kafka consumer offers several @KafkaListener annotations for different message reception scenarios. Let’s start with the simplest one, which processes messages in a single thread. The input topic is called transactions. The message processing method is straightforward. It prints the received message along with the partition number and offset. To simulate realistic processing time, it deliberately pauses for 10 seconds.
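A listener matching that description could look roughly like this sketch. I’m assuming the Spring Kafka 3.x header constants here; the method name and listener id are illustrative.

```java
// Single-threaded listener for the transactions topic (sketch).
@KafkaListener(id = "single", topics = "transactions")
public void listen(Order order,
                   @Header(KafkaHeaders.RECEIVED_PARTITION) int partition,
                   @Header(KafkaHeaders.OFFSET) long offset) throws InterruptedException {
    LOG.info("Received: {} (partition={}, offset={})", order, partition, offset);
    Thread.sleep(10000); // simulate realistic processing time
}
```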

The second method (listenMulti) does the same thing as the previous one but sets the number of consumer threads to 3. It consumes messages from the transactions-multi topic.
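In sketch form, the only meaningful difference from the first listener is the concurrency attribute:

```java
// Listener with three consumer threads, one per partition (sketch).
@KafkaListener(id = "multi", topics = "transactions-multi", concurrency = "3")
public void listenMulti(Order order,
                        @Header(KafkaHeaders.RECEIVED_PARTITION) int partition,
                        @Header(KafkaHeaders.OFFSET) long offset) throws InterruptedException {
    LOG.info("Received: {} (partition={}, offset={})", order, partition, offset);
    Thread.sleep(10000); // simulate realistic processing time
}
```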

The last @KafkaListener method processes messages asynchronously using a thread pool provided by Java’s ExecutorService. The target topic in this case is transactions-async-auto. For now, we won’t focus on this method. We’ll come back to it at the end of the article.
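A sketch of that listener is shown below; the pool size of five matches the scenario described later in this article, while the field and method names are my assumptions.

```java
// Listener that hands messages off to a thread pool (sketch).
private final ExecutorService executor = Executors.newFixedThreadPool(5);
private final Processor processor; // injected @Service doing the actual work

@KafkaListener(id = "async", topics = "transactions-async-auto")
public void listenAsync(Order order) {
    // The listener thread returns immediately after submitting the task,
    // so the container can commit the offset before processing finishes.
    executor.submit(() -> processor.process(order));
}
```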

Before running the Spring Boot consumer, we need to start Kafka. The docker-compose.yml file is located in the repository’s root directory. So all you need to do is run the docker compose up command.

Next, simply enable the message consumer and producer using the commands below. Then, we can move on to our test scenarios.
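Assuming a Maven multi-module layout with producer and consumer modules (the module names are my assumptions; check the repository README for the exact ones), the commands would be along these lines:

```shell
# run each in its own terminal, from the repository root
mvn spring-boot:run -pl consumer
mvn spring-boot:run -pl producer
```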

Duplicate Message Processing with Spring Kafka

Single Consuming Thread

In this section, we’ll demonstrate how our application processes the same messages multiple times after a restart. You simply need to send the messages, stop the application, and then restart it. Spring Boot, combined with Spring Kafka, handles graceful shutdown automatically by waiting for all ongoing message processing to finish before shutting down. However, this mechanism has a timeout, which defaults to 30 seconds in Spring Boot. Since each message takes about 10 seconds to process due to an intentional Thread.sleep() delay, you’ll see that Spring won’t commit the Kafka offset before the graceful shutdown times out.

First, let’s send 20 messages to the transactions topic using the endpoint exposed by the producer application.
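Assuming the producer runs on port 8080 and the endpoint binds the topic and messages parameters described earlier, the call might look like this:

```shell
curl "http://localhost:8080/transactions?topic=transactions&messages=20"
```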

Let’s see what’s happening on the consumer side. The logs show that the consumer received a batch of 20 messages from the transactions topic. It then processed the first message from partition 1. Subsequent messages are processed by a single thread at 10-second intervals.

[Log output: a single consumer thread processing messages at 10-second intervals]

Here we see the end of the message handling for offset=24 and the start for the message with offset=25.


Let’s gracefully shut down the consumer application with the CTRL+C shortcut. You’ll notice that all listeners, except for the one subscribed to the transactions topic, close immediately. Spring Boot waits 30 seconds for the in-flight messages from this topic to finish processing, and when they don’t complete within that time, it shuts down with the error shown below. Consequently, the application doesn’t commit any offsets to the Kafka broker.

[Log output: graceful shutdown timeout error from the transactions listener]

After restarting, the consumer starts processing again from offset=24.


This time, wait until all messages have been processed. Once that happens, you’ll see an entry like the one below in the log. This time, Spring Boot was able to commit the Kafka offset for all partitions.


Multiple Consuming Threads

Now we’ll repeat the same exercise, but for a message reception mode with three listener threads. To do this, send messages to the transactions-multi topic as shown below.
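Using the same assumed endpoint as before, the command could be:

```shell
curl "http://localhost:8080/transactions?topic=transactions-multi&messages=20"
```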

As shown below, messages are handled by three separate threads, each assigned to a single partition.

[Log output: three listener threads processing messages, each on its own partition]

I stop the application as soon as two out of three threads finish processing the messages. As shown below, these threads successfully commit their offsets in the Kafka topic.


The last thread didn’t finish processing all messages before the application terminated, and the graceful timeout proved too short to complete this task.


Therefore, after the restart, our Spring Boot application resumes processing all messages from partition 1, since the offset for that partition hadn’t been committed before the shutdown.


Of course, you can increase the graceful shutdown timeout to match the message processing time. To configure the timeout period, use the spring.lifecycle.timeout-per-shutdown-phase property.
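For example, to extend the timeout to two minutes:

```yaml
spring:
  lifecycle:
    timeout-per-shutdown-phase: 120s
```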

Lose Messages with Spring Kafka

In this section, we explore a new scenario where a Spring Boot application might lose messages. We set up a listener that receives messages from the transactions-async-auto topic. Messages arrive through a single consumer thread, but the processing occurs across five threads in a pool. As a result, the Spring Kafka offset commit happens on the consumer thread shortly after the message batch is received, regardless of whether processing has finished. I’ve pasted this code snippet before, but let’s take another look at it for clarity.

Here’s the Processor @Service, which handles incoming messages asynchronously. As you can see, it also introduces an artificial 10-second delay in processing.
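In sketch form, such a service could look like this (the logging and exception handling details are my assumptions):

```java
// Service that processes a message with an artificial 10-second delay (sketch).
@Service
public class Processor {

    private static final Logger LOG = LoggerFactory.getLogger(Processor.class);

    public void process(Order order) {
        try {
            Thread.sleep(10000); // artificial processing delay
            LOG.info("Processed: {}", order);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}
```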

Below is a command that sends 30 messages to the transactions-async-auto topic.
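Again assuming the producer's endpoint and port from earlier, it might be:

```shell
curl "http://localhost:8080/transactions?topic=transactions-async-auto&messages=30"
```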

Let’s examine the Spring Boot consumer logs. The listener receives a batch of 30 messages and actively processes the first five asynchronously, while the remaining messages wait for available threads in the pool.


Now let’s take a closer look at the timing. Essentially, shortly after asynchronous processing of several messages begins, an offset commit occurs for all partitions. What does this mean in practice?


You can now shut down the application just as you did before. Spring Boot does not wait for the graceful shutdown period because, from its perspective, the Kafka messages have already been received and their offsets committed. Thankfully, you can configure message reception and processing in different ways to prevent this issue. For more details, refer to two of my earlier articles mentioned in the introduction to this post.

To complete the exercise, restart the Spring Boot application. Once it’s running again, notice that none of the unprocessed messages are redelivered. Depending on when you stopped the application, some or all of the messages might have been lost.


Conclusion

Understanding message reception and commit offset handling in Kafka reveals crucial insights into system reliability. When developers overlook these mechanisms, both on Kafka’s side and within the application’s framework, they risk severe failures during restarts or unexpected shutdowns. In this article, I illustrate scenarios that cause message loss and force the application to reprocess messages. I hope this sparks your interest and enhances your understanding of Kafka and how to build consumers with Spring Kafka.
