One line of code caused the SeaTunnel Kafka connector to eat 12GB of memory in 5 mins!

What happened?

In Apache SeaTunnel version 2.3.9, the Kafka connector carried a memory leak risk: when users configured streaming jobs to read data from Kafka, even with a read rate limit (read_limit.rows_per_second) set, memory could keep growing until an OOM (Out Of Memory) error occurred.

What's the key issue?

In real deployments, users observed the following phenomena:

  1. Running a Kafka-to-HDFS streaming job on an 8-core, 12GB-memory SeaTunnel Engine cluster
  2. Although read_limit.rows_per_second=1 was configured, memory usage soared from 200MB to 5GB within 5 minutes
  3. After stopping the job, memory was not released; upon resuming, memory kept growing until OOM
  4. Ultimately, worker nodes restarted

Root Cause Analysis

Through code review, it was found that the root cause lay in the createReader method of the KafkaSource class, where elementsQueue was initialized as an unbounded queue:

elementsQueue = new LinkedBlockingQueue<>(); // no capacity given: the queue is unbounded

This implementation had two critical issues:

  1. Unbounded Queue: LinkedBlockingQueue without a specified capacity can theoretically grow indefinitely. When producer speed far exceeds consumer speed, memory continuously grows.
  2. Ineffective Rate Limiting: Although users configured read_limit.rows_per_second=1, this limit did not actually apply to Kafka data reading, causing data to accumulate in the memory queue.
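
To make the failure mode concrete, below is a minimal, self-contained Java sketch (illustrative only, not SeaTunnel code): a fast producer thread stands in for the Kafka fetch loop, a slow consumer stands in for the rate-limited downstream, and the unbounded queue grows until the heap is exhausted. Run it with a small heap (for example -Xmx256m) to reproduce the OOM quickly.

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class UnboundedQueueLeakDemo {
    public static void main(String[] args) throws InterruptedException {
        // Same construction as the buggy connector: no capacity bound.
        BlockingQueue<byte[]> elementsQueue = new LinkedBlockingQueue<>();

        // Fast producer, standing in for the Kafka fetch thread.
        Thread producer = new Thread(() -> {
            while (true) {
                // offer() on an unbounded queue always succeeds, so the
                // heap grows as fast as this loop can allocate.
                elementsQueue.offer(new byte[1024]);
            }
        });
        producer.setDaemon(true);
        producer.start();

        // Slow consumer, standing in for the rate-limited downstream:
        // read_limit.rows_per_second=1 throttles here, not the producer.
        while (true) {
            elementsQueue.take();
            System.out.printf("backlog: %d elements (~%d MB)%n",
                    elementsQueue.size(), elementsQueue.size() / 1024);
            Thread.sleep(1000);
        }
    }
}

Swapping in new ArrayBlockingQueue<>(1000) and a blocking put() in the producer caps the backlog at 1,000 elements: the producer stalls at capacity instead of the heap growing.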

Solution

The community fixed this issue via PR#9041. The main improvements include:

  1. Introducing a Bounded Queue: Replacing LinkedBlockingQueue with a fixed-size ArrayBlockingQueue
  2. Configurable Queue Size: Adding a queue.size configuration parameter, allowing users to adjust as needed
  3. Safe Default Value: Setting DEFAULT_QUEUE_SIZE=1000 as the default queue capacity

Core implementation changes:

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class KafkaSource {
    // User-facing option for the queue capacity, with a safe default.
    private static final String QUEUE_SIZE_KEY = "queue.size";
    private static final int DEFAULT_QUEUE_SIZE = 1000;

    public SourceReader<SeaTunnelRow, KafkaSourceSplit> createReader(
            SourceReader.Context readerContext) {
        // Read the configured capacity, falling back to the default.
        int queueSize = kafkaSourceConfig.getInt(QUEUE_SIZE_KEY, DEFAULT_QUEUE_SIZE);
        // Bounded queue: once queueSize batches are buffered, the fetcher
        // blocks instead of piling more records onto the heap.
        BlockingQueue<RecordsWithSplitIds<ConsumerRecord<byte[], byte[]>>> elementsQueue =
                new ArrayBlockingQueue<>(queueSize);
        // ...
    }
}
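
The practical effect of the bounded queue is backpressure: assuming the fetch thread inserts with a blocking put(), once queue.size batches are buffered the fetcher stalls until the reader drains records, so queue memory is capped at roughly queue.size times the average batch size rather than growing with consumer lag.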

Best Practice Recommendations

For users of the SeaTunnel Kafka connector, it is recommended to:

  1. Upgrade Version: Use the SeaTunnel version containing this fix
  2. Configure Properly: Set an appropriate queue.size value according to business needs and data characteristics (see the configuration sketch after this list)
  3. Monitor Memory: Even with a bounded queue, monitor system memory usage
  4. Understand Rate Limiting: The read_limit.rows_per_second parameter applies to downstream processing, not Kafka consumption
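
As a sketch of the second recommendation, here is where queue.size would plausibly sit in a streaming job config, based on the article's description of the fix; the broker address, topic, and sink options are placeholders, and exact option placement may differ across SeaTunnel versions.

env {
  parallelism = 1
  job.mode = "STREAMING"
}

source {
  Kafka {
    bootstrap.servers = "kafka-1:9092"  # placeholder broker
    topic = "events"                    # placeholder topic
    queue.size = 2000                   # bound on the in-memory elements queue (default 1000)
  }
}

sink {
  HdfsFile {
    # sink options elided
  }
}

A rough upper bound on the queue's memory footprint is queue.size multiplied by the average size of a buffered batch, which is a useful starting point when tuning the value against available worker memory.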

Summary

This fix not only removed the memory overflow risk but also improved system stability and configurability. With a bounded queue and a configurable capacity, users can better control resource usage and avoid OOMs caused by data backlog. It is also a good example of an open-source community improving product quality through user feedback.

