A Deep Dive Into SeaTunnel’s Thread Sharing Mechanism and Task Execution Model Optimization

The Apache SeaTunnel Zeta engine is a dedicated data integration and synchronization engine independently designed by the community.


Issue #2279 on the Apache SeaTunnel GitHub repository focuses on the optimized design of the TaskExecutionService and the task scheduling model within the Zeta engine. This design underpins Zeta's reported multi-fold performance advantage over other big data computing engines. It covers the communication approach of the TaskGroup, the call()-driven execution model, and two thread resource optimization strategies: static marking and dynamic thread sharing.

Now, let's dive deep into how these innovative mechanisms enable the Zeta engine to achieve multi-fold performance improvements.

Description

TaskExecutionService is a service that executes Tasks; one instance runs on each node. It receives a TaskGroup from the JobMaster and runs the Tasks in it, maintaining a TaskID -> TaskContext mapping, and the concrete operations on a Task are encapsulated in its TaskContext. A Task holds an OperationService internally, which means a Task can make remote calls and communicate with other Tasks or with the JobMaster through the OperationService.
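A minimal Java sketch of this structure; the class shapes below are illustrative stand-ins for the real Zeta classes, not their actual definitions:

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative stand-ins for the real Zeta classes.
interface Task {
    long getTaskId();
}

// In Zeta, the concrete operations on a Task are encapsulated here.
class TaskContext {
    final Task task;
    TaskContext(Task task) { this.task = task; }
}

class TaskGroup {
    private final List<Task> tasks;
    TaskGroup(List<Task> tasks) { this.tasks = tasks; }
    List<Task> getTasks() { return tasks; }
}

// One TaskExecutionService instance runs on each node; it receives a
// TaskGroup from the JobMaster and tracks every Task via its TaskContext.
public class TaskExecutionService {
    private final Map<Long, TaskContext> contexts = new ConcurrentHashMap<>();

    public void deploy(TaskGroup group) {
        for (Task task : group.getTasks()) {
            contexts.put(task.getTaskId(), new TaskContext(task));
            // Actual thread scheduling (shared vs. dedicated) is the
            // subject of the optimization options discussed below.
        }
    }
}
```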

  • TaskGroup design: the Tasks in a TaskGroup all run on the same node.

  • An optimization point:
    The data channel between Tasks within the same TaskGroup uses a local queue, while the data channel between different TaskGroups may need a distributed queue (a Hazelcast Ringbuffer), because those groups may execute on different nodes. A sketch of this split follows.
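As a hedged illustration of that split: the DataChannel abstraction and class names below are invented for this sketch, while the Hazelcast Ringbuffer calls (getRingbuffer, headSequence, add, readOne) are part of Hazelcast's public API:

```java
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.ringbuffer.Ringbuffer;

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Illustrative channel abstraction for records of type T flowing between Tasks.
interface DataChannel<T> {
    void send(T record) throws InterruptedException;
    T receive() throws InterruptedException;
}

// Tasks in the same TaskGroup run on one node, so a plain local queue suffices.
class LocalChannel<T> implements DataChannel<T> {
    private final BlockingQueue<T> queue = new LinkedBlockingQueue<>();
    public void send(T record) throws InterruptedException { queue.put(record); }
    public T receive() throws InterruptedException { return queue.take(); }
}

// Tasks in different TaskGroups may run on different nodes, so the channel
// must be distributed; Hazelcast's Ringbuffer is one such structure.
class DistributedChannel<T> implements DataChannel<T> {
    private final Ringbuffer<T> ringbuffer;
    private long sequence; // next sequence number this consumer reads

    DistributedChannel(HazelcastInstance hz, String name) {
        this.ringbuffer = hz.getRingbuffer(name);
        this.sequence = ringbuffer.headSequence();
    }
    public void send(T record) { ringbuffer.add(record); }
    public T receive() throws InterruptedException { return ringbuffer.readOne(sequence++); }
}
```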

  • Task design:
    One of the most important methods of a Task is call(); the executor drives the Task by repeatedly invoking its call() method. call() returns a ProgressState, through which the executor can determine whether the Task has finished and whether call() needs to be invoked again. A minimal sketch of this contract follows.
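A minimal sketch of that contract; the ProgressState values here are illustrative, not the engine's actual enum:

```java
// Tells the executor whether to keep driving this Task (illustrative values).
enum ProgressState { MADE_PROGRESS, WAITING, DONE }

interface Task {
    // The executor drives the Task by invoking call() repeatedly;
    // each invocation performs one slice of the Task's work.
    ProgressState call() throws Exception;
}

// Simplified executor loop for a single, non-shared Task.
class SingleTaskRunner implements Runnable {
    private final Task task;
    SingleTaskRunner(Task task) { this.task = task; }

    @Override
    public void run() {
        try {
            while (task.call() != ProgressState.DONE) {
                // keep calling until the Task reports completion
            }
        } catch (Exception e) {
            // A real executor would report the failure to the JobMaster.
            throw new RuntimeException(e);
        }
    }
}
```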

  • Thread Share optimization. Background: synchronizing a large number of small tables generates a large number of Tasks. If each Task occupies a thread of its own, running that many threads wastes resources. The situation improves greatly if one thread can run multiple Tasks. But how can one thread execute multiple Tasks at the same time? Because a Task is driven by invoking call() again and again, a single thread can invoke call() on each of the Tasks it is responsible for in turn, as in the sketch below.
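A minimal sketch of that round-robin loop, reusing the Task and ProgressState types from the previous sketch (the rotation policy shown here is illustrative):

```java
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;

// One thread drives many Tasks by invoking call() on each of them in turn.
class SharedTaskRunner implements Runnable {
    private final Queue<Task> tasks = new ConcurrentLinkedQueue<>();

    void add(Task task) { tasks.add(task); }

    @Override
    public void run() {
        while (!tasks.isEmpty()) {
            Task task = tasks.poll();
            try {
                // One slice of work for this Task...
                if (task.call() != ProgressState.DONE) {
                    tasks.add(task); // ...then rotate it to the back of the queue.
                }
            } catch (Exception e) {
                throw new RuntimeException(e); // real code would report to the JobMaster
            }
        }
    }
}
```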

This, however, introduces a problem: if one Task's call() takes a very long time to execute, it monopolizes the thread, and the latency of the other Tasks sharing that thread becomes very high.

For this problem, I currently see the following two optimization solutions:

  • Option 1: Marking Thread Share

Provide a marker on the Task indicating whether it supports thread sharing; the concrete Task implementation declares this itself. Tasks that can be shared will share one thread for execution, while Tasks that cannot be shared will each be executed by a dedicated thread.

Whether a Task supports thread sharing is judged by the Task's implementer, based on the execution time of the call() method: if every execution of call() completes at the millisecond level, the Task can be marked as supporting thread sharing. A sketch of this marking follows.
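A sketch of what the marker and the resulting split could look like, reusing SingleTaskRunner and SharedTaskRunner from the sketches above; isThreadsShare() and MarkingScheduler are illustrative names, not the engine's actual API:

```java
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// The Task implementer declares whether call() is cheap enough
// (millisecond-level) for this Task to share a thread with others.
interface MarkedTask extends Task {
    default boolean isThreadsShare() { return false; } // exclusive by default
}

class MarkingScheduler {
    private final ExecutorService pool = Executors.newCachedThreadPool();
    private final SharedTaskRunner sharedRunner = new SharedTaskRunner();

    void schedule(List<MarkedTask> tasks) {
        for (MarkedTask task : tasks) {
            if (task.isThreadsShare()) {
                sharedRunner.add(task);                  // cheap Tasks rotate on one thread
            } else {
                pool.submit(new SingleTaskRunner(task)); // one dedicated thread each
            }
        }
        pool.submit(sharedRunner); // the shared loop itself occupies a single thread
    }
}
```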

  • Option 2: Dynamic Thread Share

Option 1 has a fundamental problem: the execution time of the call() method is often not fixed, and a Task itself cannot reliably predict how long its own call() will take, because different stages, different data volumes, and so on all affect the execution time. Statically marking such a Task as shareable or not is therefore inappropriate. If it is marked shareable and one invocation of call() runs very long, the latency of the other Tasks sharing the thread becomes very high; if it is marked non-shareable, the problem of wasted resources remains unsolved.

So thread sharing can instead be made dynamic: a group of Tasks is executed by a thread pool, where the number of Tasks far exceeds the number of threads. While thread1 is executing, if the call() of Task1 exceeds the configured limit (e.g. 100 ms), a thread thread2 is taken from the pool to execute the call() of the next task, Task2, guaranteeing that Task1's long execution does not severely delay the other Tasks. When Task2's call() completes normally within the timeout, Task2 is put back at the end of the task queue, and thread2 continues by taking Task3 from the queue and executing its call(). When Task1's call() finally completes, thread1 is returned to the pool, and Task1 is marked as having timed out once. Once a Task's call() has timed out a certain number of times, that Task is removed from the shared task queue and given a dedicated thread. A simplified sketch follows.
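A simplified sketch of this dynamic scheme, reusing Task, ProgressState, and SingleTaskRunner from the earlier sketches. All names and thresholds are illustrative, and one detail of the proposed design is approximated: instead of a watchdog that borrows a replacement thread the instant a call() overruns, this version relies on the other pooled workers to keep draining the shared queue:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.atomic.AtomicInteger;

// Tasks >> worker threads. Workers pull Tasks from a shared queue, time each
// call(), and evict chronically slow Tasks to a dedicated thread.
class DynamicSharingExecutor {
    private static final long TIMEOUT_MS = 100; // per-call budget (illustrative)
    private static final int MAX_TIMEOUTS = 5;  // eviction threshold (illustrative)

    private final BlockingQueue<Task> sharedQueue = new LinkedBlockingQueue<>();
    private final ConcurrentHashMap<Task, AtomicInteger> timeouts = new ConcurrentHashMap<>();
    private final ExecutorService exclusivePool = Executors.newCachedThreadPool();

    DynamicSharingExecutor(int workerThreads) {
        ExecutorService workers = Executors.newFixedThreadPool(workerThreads);
        for (int i = 0; i < workerThreads; i++) {
            workers.submit(this::workerLoop);
        }
    }

    void submit(Task task) { sharedQueue.add(task); }

    private void workerLoop() {
        try {
            while (true) {
                Task task = sharedQueue.take();
                long start = System.currentTimeMillis();
                ProgressState state = task.call();
                long elapsed = System.currentTimeMillis() - start;

                if (state == ProgressState.DONE) {
                    timeouts.remove(task);
                    continue;
                }
                if (elapsed > TIMEOUT_MS) {
                    int misses = timeouts
                            .computeIfAbsent(task, t -> new AtomicInteger())
                            .incrementAndGet();
                    if (misses >= MAX_TIMEOUTS) {
                        // Too many slow calls: give this Task its own thread.
                        exclusivePool.submit(new SingleTaskRunner(task));
                        continue;
                    }
                }
                sharedQueue.add(task); // back to the end of the shared queue
            }
        } catch (Exception e) {
            throw new RuntimeException(e); // real code would report to the JobMaster
        }
    }
}
```

Note the key simplification: in the proposed design a timed-out call() immediately hands the rest of the queue to a fresh thread, whereas here the fixed pool of workers provides a similar effect as long as at least one worker is free.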

The related execution flow is illustrated with diagrams in the original issue.
The original issue: https://github.com/apache/seatunnel/issues/2279

