This content originally appeared on DEV Community and was authored by kouwei qing
Background Introduction
During development of the instant messaging (IM) SDK, users reported issues such as delayed message reception and UI freezes when opening pages. IM involves a large number of network requests and database operations, all of which were previously handled on the main thread, so heavy I/O caused the app to freeze and drop frames when there were many chat messages or sessions. The original design considered HarmonyOS's multi-threading mechanisms, Worker and TaskPool, but given time constraints and complex logic, refactoring onto a non-shared-memory multi-threading model was too costly. Now that the production environment has hit this bottleneck, resolving it through multi-threading has become imperative.
Implementation Approach
The message reception mechanism combines push and pull. When new messages arrive, a notification is delivered to the client over the long connection, and the client then requests the actual message content via HTTP. On Android and iOS this is handled by daemon threads: a dedicated thread pulls new messages, invoking the message-pull interface when an event arrives and blocking to wait when there is none.
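For intuition, the pull thread's behavior can be sketched as an event-driven loop. This is a simplified TypeScript-style sketch only; `waitForPushEvent` and `pullMessages` are illustrative placeholders, not real SDK APIs:

```ts
// Illustrative sketch of the daemon-style pull loop used on Android/iOS.
// waitForPushEvent() resolves when the long connection signals new messages;
// pullMessages() fetches the actual message content over HTTP.
async function messageSyncLoop(
  waitForPushEvent: () => Promise<void>,
  pullMessages: () => Promise<void>
): Promise<void> {
  while (true) {
    await waitForPushEvent(); // "block" until a push event arrives
    await pullMessages();     // then pull the new message content
  }
}
```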
Selection Between Worker and TaskPool
In the memory-sharing concurrency model, multiple threads execute tasks simultaneously while depending on, and having access to, the same memory. Before touching that memory a thread must compete for and lock it, and threads that fail to acquire the lock wait until it is released. Common concurrency models fall into two camps: memory sharing and message passing. The Actor model is a typical message-passing model; it is widely adopted because it frees developers from complex lock handling and offers high concurrency. ArkTS currently provides two concurrency capabilities, TaskPool and Worker, both based on the Actor model.
In the Actor model, each thread is an independent Actor with its own isolated memory. Actors trigger each other’s behaviors through message-passing and cannot directly access each other’s memory. Compared to memory-sharing models, the Actor model isolates memory between threads, eliminating thread contention for shared resources. This allows developers to avoid memory-locking issues and improves development efficiency.
In a memory-sharing model, solving the producer-consumer problem requires a shared buffer protected by a lock: producers and consumers must acquire the lock before reading or writing the buffer, and block while another thread holds it.
The example below briefly demonstrates using the Actor-based TaskPool concurrency capability to solve the producer-consumer problem:
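This is only a minimal sketch: the `produce` function and `Consumer` class are illustrative names, and it relies on the `@Concurrent` decorator and `taskpool.execute` from `@kit.ArkTS`.

```ts
import { taskpool } from '@kit.ArkTS';

// Producer: runs in a TaskPool worker thread; its result comes back through the Promise,
// so the two sides never share memory or locks.
@Concurrent
function produce(): number {
  return Math.random();
}

// Consumer: runs on the host thread once the producer's result arrives.
class Consumer {
  public consume(value: Object): void {
    console.info('consume value: ' + value);
  }
}

// Each produce() call is an independent task; the consumer reacts to its result via message passing.
const consumer: Consumer = new Consumer();
const produceTask: taskpool.Task = new taskpool.Task(produce);
taskpool.execute(produceTask).then((value: Object) => {
  consumer.consume(value);
});
```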
In the Actor model, threads do not share memory and must use inter-thread communication to pass tasks and results. Non-shared memory threads are analogous to traditional system processes, with independent memory and high inter-thread interaction costs.
TaskPool and Worker provide multi-threaded environments for applications to handle time-consuming computational or intensive tasks, preventing task blocking on the main thread and improving system performance and resource utilization.
A comparison of TaskPool and Worker implementation characteristics is as follows:
| Characteristic | TaskPool | Worker |
| --- | --- | --- |
| Memory model | Thread-isolated; no shared memory. | Thread-isolated; no shared memory. |
| Parameter passing | Structured cloning for serialization/deserialization; supports ArrayBuffer transfer and SharedArrayBuffer sharing. | Same as TaskPool. |
| Argument handling | Arguments are passed directly with no wrapping required (transfer by default). | Messages must be wrapped into a single parameter object. |
| Method invocation | The function itself is passed in and invoked. | The Worker thread must parse the received message to decide which method to invoke. |
| Return values | The asynchronous call returns the result directly. | Results must be sent back with postMessage and parsed in onmessage. |
| Lifecycle | Managed by the system; no need to monitor task load. | The developer manages the number of Workers and their lifecycles. |
| Maximum pool size | Managed automatically; no configuration required. | Up to 64 Workers per process (limited by memory). |
| Task duration limit | 3 minutes (excluding async operations such as I/O); long tasks have no limit. | No limit. |
| Priority configuration | Supported. | Not supported. |
| Task cancellation | Pending tasks can be canceled. | Not supported. |
| Thread reuse | Supported. | Not supported. |
| Delayed execution | Supported. | Not supported. |
| Task dependencies | Supported. | Not supported. |
| Serial queues | Supported. | Not supported. |
| Task groups | Supported. | Not supported. |
TaskPool worker threads are bound to system scheduling priorities and support load balancing (automatic scaling), whereas Worker threads must be created manually, incur creation overhead, and have no priority settings. TaskPool therefore generally outperforms Worker, and the official recommendation is to prefer TaskPool in most scenarios.
TaskPool focuses on independent tasks and frees developers from managing thread lifecycles; ultra-long tasks (over 3 minutes, not counting long tasks) are automatically reclaimed by the system. Worker focuses on long-lived threads, so developers must manage Worker lifecycles themselves.
Common use cases and recommendations:
- Tasks longer than 3 minutes (excluding async I/O): e.g., 1-hour background CPU-intensive prediction model training → use Worker.
- Dependent synchronous tasks: e.g., scenarios requiring persistent handles (each handle is unique and must be preserved for operations) → use Worker.
- Priority-based tasks: e.g., gallery histogram rendering (background calculations affect UI responsiveness and need high priority) → use TaskPool.
- Frequent task cancellation: e.g., browsing large images in a gallery (canceling cached tasks while sliding) → use TaskPool (see the sketch after this list).
- Massive or distributed tasks: e.g., bulk database writes in large apps (difficult to manage with Worker) → use TaskPool.
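For the priority and cancellation cases above, here is a minimal sketch, assuming the `taskpool.Priority` enum and `taskpool.cancel` API; `renderHistogram` is an illustrative placeholder, not real gallery code:

```ts
import { taskpool } from '@kit.ArkTS';

@Concurrent
function renderHistogram(imageId: number): number {
  // Placeholder for a per-image histogram computation
  return imageId;
}

const task: taskpool.Task = new taskpool.Task(renderHistogram, 42);

// Execute at high priority so the calculation does not queue behind background work
taskpool.execute(task, taskpool.Priority.HIGH).then((result: Object) => {
  console.info('histogram ready: ' + result);
}).catch((err: Error) => {
  // A canceled task rejects its Promise
  console.info('task not completed: ' + err.message);
});

// If the user scrolls past this image before the task runs, cancel it while it is still pending
try {
  taskpool.cancel(task);
} catch (err) {
  // cancel() throws if the task has already started or finished
}
```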
For our requirement of a daemon-like, long-lived thread, we use Worker and manage its lifecycle manually.
Implementing a Daemon Thread with Worker
DevEco Studio supports one-click Worker generation: in the {moduleName} directory, right-click > New > Worker to auto-generate the template file and configuration. We create `MsgSyncWorker` as follows:
```ts
// worker.ets
import { ErrorEvent, MessageEvents, ThreadWorkerGlobalScope, worker } from '@kit.ArkTS';

const workerPort: ThreadWorkerGlobalScope = worker.workerPort;

// Register onmessage callback: triggered when the Worker receives messages from the main thread via postMessage; executed in the Worker thread
workerPort.onmessage = (e: MessageEvents) => {
  let data: string = e.data;
  console.info('workerPort onmessage is: ', data);
  // Send a message back to the main thread
  workerPort.postMessage('2');
};

// Register onmessageerror callback: triggered when a received message cannot be deserialized
workerPort.onmessageerror = () => {
  console.info('workerPort onmessageerror');
};

// Register onerror callback: triggered when an exception occurs during Worker execution
workerPort.onerror = (err: ErrorEvent) => {
  console.info('workerPort onerror err is: ', err.message);
};
```
The IDE-generated Worker includes default callback implementations. The key steps are registering `onmessage` to receive commands from the main thread (executed in the child thread) and calling `workerPort.postMessage` to send replies back.
Next, start the Worker in the main thread:
```ts
import { worker, MessageEvents, ErrorEvent } from '@kit.ArkTS';

// Create a Worker instance
let workerInstance = new worker.ThreadWorker('entry/ets/workers/worker.ets');

// Register onmessage callback: triggered when the main thread receives messages from the Worker
workerInstance.onmessage = (e: MessageEvents) => {
  let data: string = e.data;
  console.info('workerInstance onmessage is: ', data);
};

// Register onerror callback: triggered on Worker errors
workerInstance.onerror = (err: ErrorEvent) => {
  console.info('workerInstance onerror message is: ' + err.message);
};

// Register onmessageerror callback: triggered when a received message cannot be deserialized
workerInstance.onmessageerror = () => {
  console.info('workerInstance onmessageerror');
};

// Register onexit callback: triggered when the Worker is destroyed
workerInstance.onexit = (e: number) => {
  // Exit code: 0 for normal termination, 1 for abnormal exit
  console.info('workerInstance onexit code is: ', e);
};

// Send a message to the Worker thread
workerInstance.postMessage('1');
```
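For MsgSyncWorker, this template can be extended into a daemon-like sync loop: the Worker stays alive between messages, and the main thread simply posts a notification whenever the long connection reports new messages. The following is only a sketch; the `syncNotify` command and `pullNewMessages` helper are hypothetical names, not the SDK's actual API.

```ts
// MsgSyncWorker.ets (illustrative sketch)
import { MessageEvents, ThreadWorkerGlobalScope, worker } from '@kit.ArkTS';

const workerPort: ThreadWorkerGlobalScope = worker.workerPort;

// Hypothetical helper: pulls the actual message content over HTTP inside the Worker thread
async function pullNewMessages(): Promise<string[]> {
  // ... HTTP request for new message content goes here ...
  return [];
}

workerPort.onmessage = async (e: MessageEvents) => {
  // The main thread posts 'syncNotify' whenever the long connection signals new messages
  if (e.data === 'syncNotify') {
    const messages = await pullNewMessages();
    // Hand the pulled messages back to the main thread for persistence and UI updates
    workerPort.postMessage(messages);
  }
};
```

On the main thread, the push handler only needs to call `postMessage('syncNotify')` on the Worker instance; since we manage the Worker's lifecycle ourselves, it keeps running like a daemon thread until `terminate()` is called.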
Issues Encountered
Because threads in HarmonyOS have isolated memory, the database and network singleton classes initialized on the main thread had to be re-initialized inside the Worker. However, initializing the database crashed with: `Error message: UserDaoHelper is not initialized`. `UserDaoHelper` is a singleton, and the crash occurred before `getInstance` was even called, indicating that importing the module itself had failed.
Root Cause Analysis
The "module not initialized" error stemmed from circular dependencies between modules. Circular dependencies can cause runtime initialization failures, as seen in the example below. Before executing index.ets
, the dependent page.ets
runs first, which in turn depends on foo
exported by index.ets
. Since index.ets
has not executed yet, foo
is uninitialized, leading to a runtime exception.
```ts
// index.ets
import { bar } from './page';

export function foo() {
  bar();
}

// page.ets
import { foo } from './index';

export function bar() {
  foo();
}

// Runs during module initialization, before index.ets has finished loading, so foo is still uninitialized
bar();
```
Solution
Use DevEco Studio's Code Linter to identify circular dependencies and refactor the code:
- Create a `code-linter.json5` configuration file in the project root with:
```json5
{
  "files": [
    "**/*.js",
    "**/*.ts",
    "**/*.ets"
  ],
  "rules": {
    "@security/no-cycle": "error" // Enable circular dependency checking
  }
}
```
- Right-click the project root in the workspace and select Code Linter > Full Linter to perform a full code scan.
- Refactor the code based on the linter’s circular dependency warnings to eliminate the issue.
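A common way to break the cycle in the earlier `index.ets`/`page.ets` example is to move the shared function into a third module so that neither file imports the other. The sketch below is illustrative; the file name `common.ets` is an assumption:

```ts
// common.ets — the shared function now lives here
export function foo(): void {
  console.info('shared logic');
}

// page.ets — depends only on common.ets, not on index.ets
import { foo } from './common';

export function bar(): void {
  foo();
}

// Safe now: foo is fully initialized once common.ets has loaded
bar();
```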