This content originally appeared on DEV Community and was authored by Nguyễn Long
TL;DR
Imagine a DJ show with hundreds of people waiting for their favorite track to drop. If everyone kept asking the DJ every millisecond:
"Is it time to dance yet? Now?"
You’d probably end up with:
- A sweaty DJ (overloaded)
- Burned speakers (crashing)
- And a power outage (aka a CPU meltdown)
That’s what polling looks like in code.
But what if the DJ had a mic and just said:
"Hey! When the beat drops, I’ll tell you!"
That’s sync.Cond.
So, What is sync.Cond?
Go’s sync.Cond is a condition variable: a concurrency primitive that lets goroutines sleep efficiently while waiting for a condition to become true. It’s based on:
- A shared mutex
- A condition-checking loop
- And the ability to wait, signal, or broadcast to other goroutines.
You use it when:
- There’s a shared resource (like a connection pool or ticket list)
- Goroutines must wait for that resource to become available
- You want to avoid polling or wasting CPU
First, consider what happens when you use polling:
Imagine you’re running a party where guests (goroutines) can only enter the dance floor when there’s space.
for {
    if showReady {
        fmt.Println("💃 Fan starts dancing!")
        break
    }
    time.Sleep(100 * time.Millisecond)
}
They keep checking and sleeping... That’s like asking the DJ every 100ms:
"Hey, can I dance yet? What about now? How about now?!"
That’s called polling, and it’s inefficient and annoying (for both CPU and DJ).
Enter sync.Cond – a way to wait without burning CPU until you're signaled to proceed.
So let's take a look at this example first, then we will dig deeply into the mechanism:
package main

import (
    "fmt"
    "sync"
    "time"
)

func main() {
    var mu sync.Mutex
    cond := sync.NewCond(&mu)
    showReady := false

    var wg sync.WaitGroup
    wg.Add(1)

    // Fan goroutine
    go func() {
        defer wg.Done()
        mu.Lock()
        for !showReady {
            fmt.Println("🧍 Fan waiting...")
            cond.Wait() // unlocks, sleeps, then relocks
        }
        fmt.Println("💃 Fan starts dancing!")
        mu.Unlock()
    }()

    // DJ goroutine
    go func() {
        time.Sleep(3 * time.Second)
        mu.Lock()
        showReady = true
        fmt.Println("🎧 DJ: The beat drops!")
        cond.Signal() // wakes up one fan
        mu.Unlock()
    }()

    wg.Wait() // keep main alive until the fan dances
}
This example illustrates how to:
- Let fans wait until the event is ready (the 3-second sleep simulates the delay)
- Notify waiting fans as soon as the DJ starts
Business Analogy
This pattern models real business use cases like:
- Worker thread pools waiting for jobs
- Order processors waiting for payment confirmation
- Consumers waiting for items to appear in a queue (see the sketch below)
- Background tasks triggered by external events
In our case:
- The dance floor = the shared resource.
- The fan = goroutine that waits.
- The DJ = the event trigger.
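To make the "consumers waiting for items to appear in a queue" case concrete, here is a minimal, hypothetical sketch (the Queue, Push, and Pop names are illustrative, not from any library) using the same lock / check-in-a-loop / Wait / Signal shape:

package queue

import "sync"

// Queue is a tiny FIFO whose consumers block until an item arrives.
type Queue struct {
    mu    sync.Mutex
    cond  *sync.Cond
    items []string
}

func NewQueue() *Queue {
    q := &Queue{}
    q.cond = sync.NewCond(&q.mu)
    return q
}

// Push adds an item and wakes one waiting consumer.
func (q *Queue) Push(item string) {
    q.mu.Lock()
    q.items = append(q.items, item)
    q.cond.Signal()
    q.mu.Unlock()
}

// Pop blocks until an item is available, then removes and returns it.
func (q *Queue) Pop() string {
    q.mu.Lock()
    defer q.mu.Unlock()
    for len(q.items) == 0 {
        q.cond.Wait() // sleep until Push signals
    }
    item := q.items[0]
    q.items = q.items[1:]
    return item
}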
Now let's talk about the correct way to use this mechanism:
This is the simple flow:
mu.Lock()
for !condition {
    cond.Wait()
}
doWork()
mu.Unlock()
For example:
mu.Lock()
for !showReady {
    fmt.Println("🧍 Fan waiting...")
    cond.Wait() // unlocks, sleeps, then relocks
}
fmt.Println("💃 Fan starts dancing!")
mu.Unlock()
The flow is depicted below:
[🔒 LOCKED] G1 enters critical section
[❓ CHECK ] Is showReady true? → No.
[😴 WAIT ] G1 goes to sleep → cond.Wait() waits for another goroutine to wake it up (i.e. for the beat to drop)
[🔓 UNLOCKED] Lock is released while waiting
Another goroutine then signals:
mu.Lock()
condition = true
cond.Signal() // or cond.Broadcast()
mu.Unlock()
That means when the DJ drops the beat, it's game on...
go func() {
    time.Sleep(3 * time.Second)
    mu.Lock()
    showReady = true
    fmt.Println("🎧 DJ: The beat drops!")
    cond.Signal() // wakes up one fan
    mu.Unlock()
}()
The flow on the DJ's side:
[🔒 LOCKED] G2 changes the condition to true (starts the show)
[📣 SIGNAL] G2 calls cond.Signal() (notifies the fan)
[🔓 UNLOCKED] Lock released
Then:
[👂 WOKEN UP] G1 is notified, wakes up
[🔒 LOCK AGAIN] Tries to reacquire the mutex
[✅ RECHECK ] Sees condition is now true
[🏃 PROCEED ] Does work and exits
The output looks like this:
🧍 Fan waiting...
🎧 DJ: The beat drops!
💃 Fan starts dancing!
Why not just time.Sleep()?
You could say: "Why not just let the fan sleep for 3 seconds too?"
Because in a real app:
- The DJ doesn’t follow a fixed schedule.
- There might be many fans, not just one.
- The fan may give up waiting or be notified instantly the moment music starts.
How Does Wait() Actually Work?
Here’s the implementation under the hood:
func (c *Cond) Wait() {
    c.checker.check()                     // panic if copied
    t := runtime_notifyListAdd(&c.notify) // get a wait ticket
    c.L.Unlock()                          // release the mutex
    runtime_notifyListWait(&c.notify, t)  // suspend goroutine
    c.L.Lock()                            // re-acquire lock on wake
}
Key Takeaways:
- There’s a checker that prevents copying the Cond instance; it panics if you copy it (we won't dig into that detail here).
- Calling cond.Wait() immediately unlocks the mutex, so the mutex must already be locked before you call cond.Wait().
- After being notified, Wait() locks the mutex again, so you still need to unlock it once you're done with the shared data.
Signal vs Broadcast
| Method | Meaning |
| --- | --- |
| Signal() | Wakes up 1 goroutine |
| Broadcast() | Wakes up all waiting goroutines |
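To see the difference in action, here is a small runnable sketch (mine, not part of the original example): three fans wait for the doors to open, and Broadcast() wakes all of them, whereas Signal() would wake only one and could leave the others asleep.

package main

import (
    "fmt"
    "sync"
)

func main() {
    var mu sync.Mutex
    cond := sync.NewCond(&mu)
    doorsOpen := false

    var wg sync.WaitGroup
    for i := 1; i <= 3; i++ {
        wg.Add(1)
        go func(id int) {
            defer wg.Done()
            mu.Lock()
            for !doorsOpen {
                cond.Wait() // each fan re-checks the condition after waking
            }
            fmt.Printf("💃 fan %d starts dancing\n", id)
            mu.Unlock()
        }(i)
    }

    mu.Lock()
    doorsOpen = true
    cond.Broadcast() // Signal() here would wake only one fan; the others could sleep forever
    mu.Unlock()

    wg.Wait()
}

Note the for !doorsOpen loop: every woken goroutine re-checks the condition before proceeding, which is what keeps Broadcast safe even when more goroutines wake up than can make progress.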
Conclusion
Polling-based flow:
┌──────────────┐ ┌────────────────────┐
│ Goroutine A │────→ │ tryGetConnection() │
└──────────────┘ └────────────────────┘
│ │
▼ ▼
[Not Available] → Sleep(100ms)
│ │
└───── loop ──────────────┘
Fan keeps knocking: “DJ, can I dance now? … How about now? … Still no?”
- Loop repeats wastefully
- Sleep is either too short (CPU burn) or too long (latency)
sync.Cond Flow (Efficient + Coordinated)
[ DJ thread ]
┌──────────────┐ time.Sleep
│ Goroutine B │────────────────────┐
└──────────────┘ │
│ ▼
│ ┌──────────────┐
│ │ showReady = true
│ │ cond.Signal()
│ └──────┬───────┘
▼ │
┌────────────────────────────────┐ ▼
│ Goroutine A (waiting fan) │◄─┘
│ mu.Lock() │
│ while !showReady { ◄──────────────┐
│ cond.Wait() (sleep) │
│ } │
│ // Proceed to dance 💃 │
└────────────────────────────────┘
Fan enters the club, sits quietly. DJ announces:
“🎧 The beat drops!”
Fan wakes up instantly:
“💃 I’m dancing!”
- Sleeps peacefully while waiting
- Wakes up only when ready
- CPU usage remains near-zero
Wrap-up
Don’t burn out your CPU (or your DJ). If you’re managing shared resources in Go and you’re still writing polling loops, it’s time to level up with sync.Cond.
This pattern:
- Scales beautifully
- Improves latency
- Gives you precise coordination
When Should You Reach for sync.Cond?
Use it when:
- You have a shared condition
- Multiple goroutines wait on it
- That condition is protected by a mutex
- Polling is not acceptable

Don’t use it if:
- A simple chan will do (see the sketch below)
- You don’t already hold a mutex around the state
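To illustrate the "a simple chan will do" case: a one-time announcement to every waiter needs no mutex and no Cond at all, because closing a channel already broadcasts it. A minimal sketch (assuming the same fan/DJ scenario, not from the original post):

package main

import (
    "fmt"
    "sync"
    "time"
)

func main() {
    showStarted := make(chan struct{})
    var wg sync.WaitGroup

    for i := 1; i <= 3; i++ {
        wg.Add(1)
        go func(id int) {
            defer wg.Done()
            <-showStarted // blocks until the channel is closed
            fmt.Printf("💃 fan %d starts dancing\n", id)
        }(i)
    }

    time.Sleep(1 * time.Second)
    fmt.Println("🎧 DJ: The beat drops!")
    close(showStarted) // wakes every waiting goroutine at once

    wg.Wait()
}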
Bonus: real-world use case: connection pool
The Problem With Polling:
A while ago I already had a nice custom TCP server implementation in Go.
To implement this kind of custom TCP server, we need something called a connection pool to make the most of the machine and allow more concurrent operations.
The first implementation looked like this: a bad pattern (but a common one):
for {
    if conn := tryGetConnection(); conn != nil {
        return conn
    }
    time.Sleep(100 * time.Millisecond) // 👎 try polling
}
What happened next?
- CPU burned — even when no connection was available
- Latency grew — increase sleep = slower reaction, decrease sleep = higher CPU
- 1,000 goroutines polling = chaos
- Edge cases everywhere — race conditions like being woken just before state changed
Thankfully, sync.Cond resolves this problem neatly:
cond.L.Lock()
for !hasFreeConnection() {
    cond.Wait()
}
conn := acquireConnection()
cond.L.Unlock()
return conn
What changed?
- No more busy loops
- CPU usage drops to near zero while waiting
- Only woken when it matters
- No weird races or wasted wakeups
Someone will ask me something like this:
Why not use channels instead?
Well, to be honest, my second attempt used a buffered channel of net.Conn, and it can solve the problem... and that works, until it doesn't.
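That second attempt looked roughly like this (a hypothetical sketch of the idea; ChanPool, Get, and Put are my names, not the original code). The buffered channel itself acts as the free list:

package pool

import (
    "context"
    "net"
)

// ChanPool hands out connections through a buffered channel.
type ChanPool struct {
    free chan net.Conn
}

func NewChanPool(conns []net.Conn) *ChanPool {
    p := &ChanPool{free: make(chan net.Conn, len(conns))}
    for _, c := range conns {
        p.free <- c
    }
    return p
}

// Get blocks until a connection is free or the context is cancelled.
func (p *ChanPool) Get(ctx context.Context) (net.Conn, error) {
    select {
    case c := <-p.free:
        return c, nil
    case <-ctx.Done():
        return nil, ctx.Err()
    }
}

// Put returns a connection to the pool.
func (p *ChanPool) Put(c net.Conn) {
    p.free <- c
}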
- Channels are great for linear producers/consumers
- But they don’t scale well to:
  - broadcast wakeups
  - shared state protected by a mutex
  - non-linear wake patterns
Channels are data pipes. sync.Cond is a condition watcher.
Think of channels as a delivery guy. Think of sync.Cond as a waiter with a bell: “Your table is ready!”
You have a []*ConnWrapper, and you want to hand out a free one. If none are free, you wait, but you don’t burn the CPU.
pool.Lock()
for !hasFreeConn(pool.connections) {
    pool.cond.Wait()
}
conn := grabFreeConn()
pool.Unlock()
return conn
When a connection is returned:
pool.Lock()
markConnFree(conn)
pool.cond.Signal()
pool.Unlock()
You could scale up to thousands of goroutines, and CPU usage would still be flatlined near zero.
Here's the full implementation:
type ConnWrapper struct {
    conn  net.Conn
    inUse bool
}

type ConnectionPool struct {
    connections []*ConnWrapper
    mu          sync.Mutex
    cond        *sync.Cond
}

func NewConnectionPool(size int) *ConnectionPool {
    pool := &ConnectionPool{}
    pool.cond = sync.NewCond(&pool.mu)
    // Initialize dummy connections (you can replace with real dials)
    for i := 0; i < size; i++ {
        pool.connections = append(pool.connections, &ConnWrapper{conn: nil, inUse: false})
    }
    return pool
}

func (p *ConnectionPool) hasFreeConnection() bool {
    for _, cw := range p.connections {
        if !cw.inUse {
            return true
        }
    }
    return false
}

func (p *ConnectionPool) acquireConnection() net.Conn {
    for _, cw := range p.connections {
        if !cw.inUse {
            cw.inUse = true
            return cw.conn
        }
    }
    return nil
}

// GetConnection blocks until a connection is available or ctx is cancelled.
func (p *ConnectionPool) GetConnection(ctx context.Context) (net.Conn, error) {
    p.mu.Lock()
    defer p.mu.Unlock()

    // Watcher: if the context is cancelled while we wait, wake every waiter
    // so the loop below can observe ctx.Err() and return. The done channel
    // stops the watcher once this call finishes. Note that cond.Wait() must
    // be called from the goroutine that holds the lock, so we never call it
    // from a helper goroutine.
    done := make(chan struct{})
    defer close(done)
    go func() {
        select {
        case <-ctx.Done():
            p.mu.Lock()
            p.cond.Broadcast()
            p.mu.Unlock()
        case <-done:
        }
    }()

    for !p.hasFreeConnection() {
        if err := ctx.Err(); err != nil {
            return nil, err
        }
        p.cond.Wait()
    }
    return p.acquireConnection(), nil
}

func (p *ConnectionPool) ReleaseConnection(conn net.Conn) {
    p.mu.Lock()
    defer p.mu.Unlock()
    for _, cw := range p.connections {
        if cw.conn == conn {
            cw.inUse = false
            break
        }
    }
    p.cond.Signal() // Wake up one waiting goroutine
}
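And a small, hypothetical usage sketch (assuming the pool code above sits in the same package main; the connections are nil placeholders, as in NewConnectionPool):

package main

import (
    "context"
    "log"
    "time"
)

func main() {
    pool := NewConnectionPool(2)

    // Give up after one second if no connection frees up.
    ctx, cancel := context.WithTimeout(context.Background(), time.Second)
    defer cancel()

    conn, err := pool.GetConnection(ctx)
    if err != nil {
        log.Fatal(err)
    }

    // ... use conn (a real pool would hand out dialed connections) ...

    pool.ReleaseConnection(conn)
}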
References
- Go Official Documentation – sync.Cond: https://pkg.go.dev/sync#Cond. The authoritative source explaining how Cond works and how to use it properly.
- Go Blog – Share Memory by Communicating: https://go.dev/blog/share-memory-by-communicating. Go’s philosophy on concurrency; although sync.Cond is a shared-memory primitive, this post explains when it’s okay to step outside channels.
- "Practical Go Concurrency Patterns" – Google I/O talk: https://www.youtube.com/watch?v=QDDwwePbDtw. A great visual explanation of channels, mutexes, sync.Cond, and when to use what.
- Go Forum: Why sync.Cond Over Channels: https://forum.golangbridge.org/t/what-is-the-difference-between-channel-and-sync-cond/13125. A useful community thread on the differences, tradeoffs, and when Cond is a better fit than channels.
- Source code for sync.Cond.Wait() (Go standard library): https://github.com/golang/go/blob/master/src/sync/cond.go. For those who want to go under the hood and see how Wait() really works internally (with runtime_notifyListWait).
- Advanced Go Concurrency Patterns – Francesc Campoy: https://www.youtube.com/watch?v=QDDwwePbDtw. Covers deeper concurrency primitives, including real-world use cases of sync.Cond.