Scaling Laravel Queues with Go: A High-Performance Alternative to php artisan queue:work

NOTE: This worker is for learning purposes only and could be the foundation for larger projects; it is not a complete or drop-in replacement for Laravel's queue worker.



This content originally appeared on DEV Community and was authored by Walid LAGGOUNE


The Problem with Traditional Laravel Queue Workers

Laravel's built-in queue system uses php artisan queue:work to process jobs, which operates as a single-threaded process. While this works well for small to medium applications, it presents several performance bottlenecks:

  • Single-threaded execution: Only one job can be processed at a time per worker
  • Resource inefficiency: Running multiple queue:work processes consumes significant memory and CPU
  • Limited concurrency: Scaling requires spawning multiple heavy PHP processes

The Solution: Go-Powered Concurrent Workers

To address these limitations, I developed a Go-based queue worker that maintains persistent Laravel processes and distributes jobs efficiently using goroutines. This hybrid approach combines Go's concurrency strengths with Laravel's job-processing capabilities.

Architecture Overview

The system consists of three main components:

  1. Go Worker Pool: Starts with a single PHP process, then scales to manage multiple concurrent PHP processes
  2. Custom Laravel Command: queue:run-job that accepts jobs via STDIN
  3. Redis Integration: Pulls jobs from Laravel's queue system

Key Features

Persistent PHP Processes

// Each worker maintains a persistent PHP process
worker.cmd = exec.CommandContext(ctx, "php", "laravel/artisan", "queue:run-job")

The breakthrough came from maintaining long-running PHP processes instead of spawning new ones for each job. Even with just one persistent process, performance improved dramatically by eliminating Laravel's expensive bootstrap overhead.

Concurrent Job Distribution

// Round-robin job distribution across workers
worker := pool.workers[workerIndex]
workerIndex = (workerIndex + 1) % len(pool.workers)

After proving the concept with a single worker, scaling to multiple concurrent processes was straightforward. Jobs are distributed using a round robin algorithm across all available workers, with 6 concurrent PHP processes showing excellent performance gains.

Real-time Performance Monitoring

fmt.Printf("Stats => Processed: %d, Failed: %d\n",
    atomic.LoadInt64(&pool.processedJobs),
    atomic.LoadInt64(&pool.failedJobs))

Development Evolution

Phase 1: Single Worker Proof of Concept
The initial implementation used just one persistent PHP process managed by Go. This simple change alone delivered a 10.8x performance improvement (13.99s → 1.29s for 1,000 jobs), proving that Laravel's bootstrap overhead was the primary bottleneck.

Phase 2: Concurrent Worker Pool
Building on the single-worker success, I implemented concurrent processing with 6 PHP workers. This delivered an additional 1.8x improvement (11.37s → 6.41s for 10,000 jobs), demonstrating excellent scaling characteristics.

class RunJob extends Command
{
    protected $signature = 'queue:run-job';
    protected $description = 'Run jobs continuously from STDIN (fed by Go pool)';

    public function handle(Container $container, RedisQueue $redisQueue)
    {
        $stdin = fopen('php://stdin', 'r');
        while (($line = fgets($stdin)) !== false) {
            $data = json_decode(trim($line), true);
            // Process job using Laravel's worker infrastructure
        }
    }
}

This approach maintains full compatibility with Laravel's job system while enabling concurrent processing.

Example Job Implementation

class ProcessPodcast implements ShouldQueue
{
    use Queueable;

    public $timeout = 5;

    public function __construct(public array $payload = [])
    {
    }

    public function handle(): void
    {
        // Report the result back to the Go pool as one JSON line on STDOUT
        echo json_encode([
            'uuid' => $this->payload['uuid'] ?? null,
            'status' => 'done',
            'success' => true,
        ]) . "\n";
        fflush(STDOUT);
    }
}

Jobs remain unchanged and work exactly as they would with standard Laravel queue workers.

Performance Results

Benchmark Comparison

The performance improvements are dramatic when compared to traditional Laravel queue workers:

1,000 Jobs Test

  • Traditional Laravel worker: 13.99 seconds
  • Go worker (single PHP process): 1.29 seconds
  • Performance improvement: 10.8x faster

10,000 Jobs Test

  • Traditional Laravel worker: 139.71 seconds (2 minutes 20 seconds)
  • Go worker (single PHP process): 11.37 seconds
  • Go worker (6 concurrent PHP processes): 6.41 seconds
  • Performance improvement: 21.8x faster with 6 workers

Scaling Analysis

The results show clear performance scaling patterns:

  1. Single Process Optimization: Even with just one PHP process, the Go wrapper achieved a ~10x performance improvement
  2. Concurrency Scaling: Adding 6 concurrent PHP workers nearly doubled performance again (11.37s → 6.41s)
  3. Linear Scaling Potential: The 6 worker configuration suggests near-linear scaling with worker count

Note: These benchmarks focused purely on execution time. CPU usage and memory consumption comparisons were not measured in this initial testing phase, which would be important metrics for production evaluation.

Technical Implementation Details

Worker Pool Management

func NewConcurrentWorkerPool(workerCount int) *ConcurrentWorkerPool {
    pool := &ConcurrentWorkerPool{}
    // Initialize worker pool with configurable concurrency
    for i := 0; i < workerCount; i++ {
        worker := pool.startLaravelWorker(i)
        pool.workers = append(pool.workers, worker)
    }
    return pool
}

Job Timing and Metrics

type JobResult struct {
    JobID      string
    Success    bool
    Error      error
    Output     string
    Duration   time.Duration
    DurationMs int64
}

Use Cases and Applications

When to Use This Approach

  • High-volume queue processing: Applications with thousands of jobs per minute
  • Development and testing: Learning concurrent programming concepts

When to Stick with Traditional Workers

  • Small applications: Low job volume doesn't justify the complexity
  • Simple deployments: Standard Laravel hosting without custom infrastructure
  • Team familiarity: When team expertise lies primarily in PHP

Future Possibilities

This experiment opens several interesting avenues for Laravel scaling:

  1. Production ready implementation: Enhanced error handling, graceful shutdowns, and monitoring
  2. Dynamic scaling: Auto-adjust worker count based on queue depth
  3. Multi-queue support: Handle different queue types with specialized workers
  4. Distributed processing: Extend across multiple servers for massive scale

Key Takeaways

  1. Bootstrap overhead is the real bottleneck: Laravel's per-process bootstrap cost dominates; persistent processes alone delivered a 10x improvement
  2. Concurrency scales well: Near-linear performance gains with additional workers
  3. Simple is powerful: Even the single worker implementation drastically outperforms traditional approaches
  4. Incremental adoption: Start with one persistent worker, scale as needed

While this started as a learning experiment, the dramatic performance improvements suggest real production potential. The approach preserves Laravel's familiar job-processing patterns (though not all of them: error handling, requeueing, and timeout management are still missing) while delivering performance that rivals dedicated queue systems.

This project showcases how different programming languages can complement each other, with Go handling the concurrency challenges while Laravel manages the business logic and job processing.


