I Benchmarked Seven Backend Frameworks and It Changed My Tech Stack Decisions

This content originally appeared on DEV Community and was authored by member_875c0744

About Hyperlane Framework

Hyperlane is a lightweight, high-performance, cross-platform Rust HTTP server framework built on Tokio async runtime.

Core Features

- Performance: 324,323 QPS (Keep-Alive on), 51,031 QPS (Keep-Alive off)
- Unified API: HTTP, WebSocket, and SSE share the same interface
- Flexible Routing: static, dynamic, and regex routes
- Powerful Middleware: request/response middleware, panic hooks
- Real-time: native WebSocket and SSE support
- Cross-platform: unified experience on Windows, Linux, and macOS

Quick start: git clone https://github.com/hyperlane-dev/hyperlane-quick-start.git

Honestly, before I started this benchmark, I never imagined the performance differences would be this dramatic. As a backend developer with 10 years of experience, I always thought framework selection was mainly about features and ecosystem, and performance was just, well, good enough. That changed last month when our company's project started crashing under load, and my technical director asked me to investigate which framework we should actually be using.

That evening in the office, setting up the test environment, my colleague laughed and said, "Are you becoming a performance engineer now?" I didn't think much of it at the time. I just figured if we're going to choose, we should choose something solid. What I discovered through testing surprised me in ways I didn't expect.

How It All Started

Let me back up to our company project. We were building a real-time data processing system that needed to handle massive HTTP request volumes. Initially, we went with Node.js because everyone on the team knew JavaScript and could get up to speed quickly. But as our user base grew, server CPU usage regularly spiked above ninety percent, and response times kept climbing.

My technical director asked me, "Where do you think the problem is?"

I thought for a moment and said, "Maybe it's Node.js being single-threaded?"

He shook his head. "Not necessarily. Go test it out. Compare different frameworks. Let the data tell the story."

And that's how my testing journey began.

Choosing What to Test

First, I needed to decide which frameworks to benchmark. After extensive research online, I settled on seven representative options.

First was Tokio, the core async runtime in the Rust ecosystem; many frameworks are built on top of it. I chose it mainly to see what performance level a pure async runtime could achieve (there's a minimal sketch of that baseline right after this list).

Second was the Hyperlane framework, a Tokio-based web framework I found on GitHub. It didn't have massive star counts, but the documentation was crystal clear, and it claimed excellent performance. I was a bit skeptical at first, thinking it might be self-promotion.

Third was Rocket, pretty well-known in the Rust community. Many people recommend it for beginners. I wanted to see how it performed.

Fourth was the Rust standard library, the most basic implementation without any framework abstraction. This served as a baseline to see how much performance overhead framework wrapping actually introduces.

Fifth was Gin, one of the most popular web frameworks in Go. A colleague at our company swears by it, claiming great performance.

Sixth was the Go standard library, serving as another comparison baseline like Rust's.

Seventh was the Node.js standard library we were originally using in our project.
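
Since Tokio served as the baseline for everything else, here is roughly the shape of the bare-Tokio test server, a minimal sketch rather than the exact code I ran: it just accepts TCP connections and writes a fixed HTTP/1.1 response. Assumes tokio 1.x with the macros and net features enabled; the address and body are placeholders, and request parsing is deliberately simplified.

    use tokio::io::{AsyncReadExt, AsyncWriteExt};
    use tokio::net::TcpListener;

    #[tokio::main]
    async fn main() -> std::io::Result<()> {
        // No framework, no routing, no middleware: just the async runtime.
        let listener = TcpListener::bind("0.0.0.0:8080").await?;
        loop {
            let (mut socket, _) = listener.accept().await?;
            // One lightweight task per connection.
            tokio::spawn(async move {
                let mut buf = [0u8; 1024];
                // Keep answering on the same connection until the client closes it
                // (this is the path the Keep-Alive scenario exercises).
                while let Ok(n) = socket.read(&mut buf).await {
                    if n == 0 {
                        break;
                    }
                    let body = "Hello";
                    let response = format!(
                        "HTTP/1.1 200 OK\r\nContent-Length: {}\r\nContent-Type: text/plain\r\n\r\n{}",
                        body.len(),
                        body
                    );
                    if socket.write_all(response.as_bytes()).await.is_err() {
                        break;
                    }
                }
            });
        }
    }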

Setting Up the Test Environment

Setting up the test environment proved more complicated than I anticipated. First, I needed to ensure fairness: every framework had to run on identical hardware under identical network conditions. I used one of our company's servers with decent specs: an eight-core CPU and 16 GB of RAM.

Then came choosing the testing tools. I went with two: wrk and ab. wrk is a modern HTTP benchmarking tool with Lua scripting support, which makes it very flexible. ab (ApacheBench) ships with the Apache HTTP Server; while older, it's stable and widely used, which makes comparisons easier.

I designed two test scenarios: one with Keep-Alive enabled, simulating persistent connections, and another with Keep-Alive disabled, simulating short-lived connections. I noticed many performance discussions overlook connection management, but it actually has massive performance implications.
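
To make the two scenarios concrete: on the wire, the only difference is the Connection header and whether the client reuses the socket. Below is a small std-only Rust sketch of a single request in each mode; it's illustrative, not the benchmark tool itself, and the response handling is deliberately simplified.

    use std::io::{Read, Write};
    use std::net::TcpStream;

    // Send one HTTP/1.1 request; `keep_alive` controls the Connection header.
    // In the Keep-Alive scenario the caller reuses `stream` for the next request;
    // in the short-lived scenario it drops the stream and reconnects every time,
    // paying for a fresh TCP handshake on each request.
    fn send_request(stream: &mut TcpStream, keep_alive: bool) -> std::io::Result<String> {
        let connection = if keep_alive { "keep-alive" } else { "close" };
        let request = format!(
            "GET / HTTP/1.1\r\nHost: localhost\r\nConnection: {}\r\n\r\n",
            connection
        );
        stream.write_all(request.as_bytes())?;
        // Simplified: a real client would parse Content-Length instead of
        // taking whatever arrives in a single read.
        let mut buf = [0u8; 4096];
        let n = stream.read(&mut buf)?;
        Ok(String::from_utf8_lossy(&buf[..n]).into_owned())
    }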

First Round: Keep-Alive Enabled

For the first round, I set 360 concurrent connections running for 60 seconds. This concurrency level was already quite high for our project.

As the test started, I watched the numbers jumping on screen, feeling pretty nervous. After all, this was my first time conducting such a formal performance benchmark.

When the results came in, I stared at the screen for several seconds.

Tokio hit 340,130 QPS. That number shocked me. I knew Rust was fast, but not this fast.

Then came the Hyperlane framework, with 324,323 QPS. Honestly, this result made me see it in a completely new light. While slightly lower than pure Tokio, considering it provides complete web framework functionality, this performance was exceptional.

Rocket framework came in at 298,945 QPS, also impressive. Rust standard library hit 291,218. This surprised me a bit. I expected the standard library to be faster, but it actually lagged behind some frameworks. Later I realized this might be because the standard library implementation is more basic, without extensive optimization.

Gin framework reached 242,570 QPS, Go standard library 234,178. These results met my expectations. Go's performance has always been solid.

What surprised me most was Node standard library, with only 139,412 QPS. This number was much lower than I imagined. I even thought something went wrong with the test and ran it again. Same result.

I recorded all this data and started analyzing. Numerically, Rust-based frameworks clearly dominated, taking the top four spots. Go frameworks performed respectably, while Node.js clearly lagged behind.

But QPS was just one metric. I also looked at latency data. Tokio averaged 1.22 ms, Hyperlane 1.46 ms, and Rocket 1.42 ms. These latencies were incredibly low, essentially imperceptible to users.

By comparison, Node.js averaged 2.58 ms. While not terrible, that was already roughly double the Rust frameworks.

Second Round: Keep-Alive Disabled

For the second round, I disabled Keep-Alive, simulating short-lived connections. In this scenario, every request requires establishing a new TCP connection, theoretically having significant performance impact.

Test parameters remained the same: 360 concurrent connections for 60 seconds.

The results revealed something interesting.

The Hyperlane framework took first place this time, with 51,031 QPS. Tokio itself hit 49,555, Rocket framework 49,345.

This ranking shift intrigued me. Why did the framework outperform pure Tokio in short-lived connection scenarios? I examined its implementation closely and found extensive optimization in connection management, particularly in connection creation and teardown.
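
I can't reproduce its internals here, but a rough sketch of a plain Tokio accept loop shows where the per-connection cost sits and why optimizing it moves the short-lived-connection numbers. This is an illustration of the cost structure, not Hyperlane's actual code.

    use tokio::net::TcpListener;

    #[tokio::main]
    async fn main() -> std::io::Result<()> {
        let listener = TcpListener::bind("0.0.0.0:8080").await?;
        loop {
            // With Keep-Alive disabled, this whole block runs once per request.
            let (socket, _) = listener.accept().await?; // kernel accept + fd allocation
            socket.set_nodelay(true)?;                  // per-socket option setup
            tokio::spawn(async move {                   // task allocation + scheduling
                // ... parse one request, write one response; dropping the socket
                // then triggers the TCP teardown (FIN/ACK) in the kernel.
                drop(socket);
            });
        }
    }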

Gin framework reached 40,149 and the Go standard library 38,364. The Rust standard library only hit 30,142, a genuinely disappointing showing this round.

Node standard library remained at the bottom with 28,286 QPS. The test also encountered some connection errors, showing stability issues.

Comparing these two rounds, I discovered a pattern: in persistent connection scenarios, underlying runtime performance matters more, but in short-lived connection scenarios, framework-level optimization can actually make a bigger difference.

Third Round: Verification with ab

To verify wrk's results, I ran another round using ab. This time I set 1,000 concurrent connections with 1,000,000 total requests.

ab's results aligned closely with wrk's, giving me more confidence in the data.

With Keep-Alive enabled, Tokio hit 308,596 QPS, the Hyperlane framework 307,568, nearly identical. Rocket framework reached 267,931, Rust standard library 260,514.

Gin framework and Go standard library came in at 224,296 and 226,550 respectively. Node standard library only managed 85,357, with a very high failure rate: over 800,000 of the 1,000,000 requests failed. This result shocked me. I didn't expect the Node standard library to perform this poorly under high concurrency.

With Keep-Alive disabled, Tokio and the Hyperlane framework still led, at 51,825 and 51,554 QPS respectively. Rocket framework hit 49,621, while the two Go options (Gin and the standard library) came in at 47,915 and 47,081.

Node standard library performed slightly better this time at 44,763 QPS, but still clearly behind other frameworks.

Thinking Behind the Data

After completing the tests, I spent several days analyzing the data. I realized the performance differences actually reflect different design philosophies across languages and frameworks.

Rust's performance advantage mainly comes from zero-cost abstractions and memory safety mechanisms. It performs extensive optimization at compile time with virtually no runtime overhead. Rust's ownership system ensures efficient memory usage without garbage collection, a huge advantage in high-concurrency scenarios.

Go's performance is also solid. Its goroutine model makes concurrent programming simple. But Go has garbage collection, and in high-concurrency scenarios, GC pauses affect performance. I did observe some latency spikes during testing, likely caused by GC.

Node.js's issues stem mainly from its single-threaded model. While it has an event loop and async I/O, it struggles with CPU-intensive tasks. On top of that, JavaScript is dynamically typed, requiring extensive runtime type checking and conversion, which impacts performance.

But what impressed me most was the Hyperlane framework. It maintains high performance while providing very friendly APIs and comprehensive features. I studied its source code carefully and found optimization in many details.

For instance, its middleware mechanism is cleverly designed, ensuring flexibility without significant performance overhead. Its routing system supports static routes, dynamic routes, and regex routes, but lookup efficiency remains high. It also has built-in WebSocket and SSE support, extremely useful for real-time applications.
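
To be clear about what I mean by a low-overhead middleware chain, here is a generic Rust sketch of the pattern, with made-up types; it is not Hyperlane's actual API. The chain is assembled once at startup, so the per-request cost is just a loop over a slice of function pointers.

    struct Request { path: String }
    struct Response { status: u16, body: String }

    // A middleware inspects the request and may modify the response in place.
    type Middleware = fn(&Request, &mut Response);

    struct App {
        middleware: Vec<Middleware>,
    }

    impl App {
        fn handle(&self, req: &Request) -> Response {
            let mut res = Response { status: 200, body: String::new() };
            // Built once at startup, walked in order for every request.
            for mw in &self.middleware {
                mw(req, &mut res);
            }
            res
        }
    }

    fn logger(req: &Request, _res: &mut Response) {
        println!("request: {}", req.path);
    }

    fn hello(_req: &Request, res: &mut Response) {
        res.body = "Hello".to_string();
    }

    fn main() {
        let app = App { middleware: vec![logger, hello] };
        let res = app.handle(&Request { path: "/".to_string() });
        println!("{} {}", res.status, res.body);
    }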

More importantly, its connection management is excellent. In short-lived connection scenarios, it can quickly create and destroy connections without resource leaks. In persistent connection scenarios, it efficiently reuses connections, reducing overhead.

Practical Application Considerations

Test data matters, but real-world applications require considering many other factors.

First is development efficiency. Rust's learning curve is genuinely steep, especially concepts like ownership and lifetimes, which are very difficult to grasp initially. When I was learning Rust, just understanding the borrow checker took several weeks. But once you master it, code quality becomes very high, and the compiler helps you catch many potential issues.

Go's learning curve is much gentler: the syntax is simple and quick to pick up. A colleague at our company said he went from zero Go knowledge to shipping a working project in one week.

Node.js needs no introduction. Everyone knows JavaScript, and the ecosystem is incredibly rich with libraries for everything.

Second is the ecosystem. Node.js's npm ecosystem is the most mature, with libraries for virtually any functionality. Go's ecosystem is also solid, and its standard library is powerful enough that many features don't need third-party libraries.

Rust's ecosystem is relatively younger but growing rapidly. Packages on crates.io are generally high quality, and many have detailed documentation and examples. The Hyperlane framework particularly emphasizes ecosystem integration: it can directly use any package from crates.io, which I really appreciate.

Third is team skills. If everyone on the team knows JavaScript, Node.js might be the fastest choice. If the team has Go experience, Gin is a solid option. But if you're pursuing ultimate performance and willing to invest learning time, Rust frameworks are definitely worth considering.

My Choice

After this testing and analysis, I submitted a detailed report to my technical director. My recommendation was that for our company project, we should consider migrating to a Rust-based framework.

The reasoning was simple: our project has high performance requirements and will be maintained long-term. While Rust has a steeper learning curve, in the long run, the performance gains and code quality improvements are worth it.

After reading my report, my technical director asked me a question: "Which framework do you think suits us best?"

I thought for a moment and said, "The Hyperlane framework. Its performance numbers are impressive, over 324,000 QPS with Keep-Alive enabled and over 51,000 with it disabled. More importantly, its API design is friendly, its documentation is clear, and it won't be too hard to pick up."

My technical director nodded. "Let's give it a try then."

The Migration Process

Over the next two weeks, I began the migration work. Honestly, I was pretty nervous at first. After all, this was a running production project; we couldn't afford problems.

I first set up a test environment and rewrote the core functionality with the new framework. To my pleasant surprise, the code was actually shorter than the original Node.js version. This was mainly because Rust's type system and pattern matching made code more concise, and the framework provided many useful utility functions.

For example, handling HTTP requests and responses. In Node.js, we needed many callback functions with deeply nested code. But in the new framework, using async/await made everything much clearer.
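
As a generic illustration of that difference (placeholder functions, not our actual handler), the same three-step flow that used to be nested callbacks in Node.js reads top to bottom with async/await, and errors bubble up through the ? operator:

    use std::error::Error;

    // Placeholders standing in for real data-access and processing steps.
    async fn load_user(id: u64) -> Result<String, Box<dyn Error>> {
        Ok(format!("user-{id}"))
    }
    async fn load_orders(user: &str) -> Result<Vec<String>, Box<dyn Error>> {
        Ok(vec![format!("{user}-order-1")])
    }
    async fn render(orders: &[String]) -> Result<String, Box<dyn Error>> {
        Ok(orders.join(", "))
    }

    // In Node.js this was three nested callbacks; here each step awaits the
    // previous one and `?` replaces the `if (err) ...` blocks.
    async fn handle_request(user_id: u64) -> Result<String, Box<dyn Error>> {
        let user = load_user(user_id).await?;
        let orders = load_orders(&user).await?;
        let page = render(&orders).await?;
        Ok(page)
    }

    #[tokio::main]
    async fn main() -> Result<(), Box<dyn Error>> {
        println!("{}", handle_request(42).await?);
        Ok(())
    }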

Middleware usage was also convenient. I needed to add a logging middleware and an authentication middleware; I just had to implement the corresponding traits and register them. The middleware execution order was clear, with no mysterious issues.

The routing system also impressed me. Our project has some dynamic routes, like fetching data by user ID. In Node.js with Express, I needed to write a bunch of regular expressions. But the new framework directly supports dynamic routing, much simpler to write.
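
For anyone who hasn't used pattern-based routing: the framework matches a template like /users/{id} against the request path and hands the captured segment to the handler. A toy matcher (illustrative only, nothing like a real router's precompiled implementation) makes the idea clear:

    // Returns the value captured by a single "{param}" segment, if the path matches.
    fn match_route<'a>(pattern: &str, path: &'a str) -> Option<&'a str> {
        let pat: Vec<&str> = pattern.trim_matches('/').split('/').collect();
        let seg: Vec<&'a str> = path.trim_matches('/').split('/').collect();
        if pat.len() != seg.len() {
            return None;
        }
        let mut captured = None;
        for (p, s) in pat.iter().zip(seg.iter()) {
            if p.starts_with('{') && p.ends_with('}') {
                captured = Some(*s); // dynamic segment, e.g. {id}
            } else if p != s {
                return None;         // static segment must match exactly
            }
        }
        captured
    }

    fn main() {
        assert_eq!(match_route("/users/{id}", "/users/42"), Some("42"));
        assert_eq!(match_route("/users/{id}", "/orders/42"), None);
        println!("id = {:?}", match_route("/users/{id}", "/users/42"));
    }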

WebSocket support was also excellent. Our project needs to push data to clients in real-time. We originally used Socket.io. After migrating to the new framework, I found its built-in WebSocket support was sufficient and performed better.
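
Our real push logic is more involved, but the basic shape of a WebSocket loop in Rust looks like the sketch below. Note this uses the tokio-tungstenite crate as a generic illustration rather than the framework's built-in WebSocket API; the crate choice, port, and echo behavior are my assumptions.

    use futures_util::{SinkExt, StreamExt};
    use tokio::net::TcpListener;

    #[tokio::main]
    async fn main() -> Result<(), Box<dyn std::error::Error>> {
        let listener = TcpListener::bind("127.0.0.1:9001").await?;
        while let Ok((stream, _)) = listener.accept().await {
            tokio::spawn(async move {
                // Perform the WebSocket handshake over the accepted TCP stream.
                if let Ok(ws) = tokio_tungstenite::accept_async(stream).await {
                    let (mut tx, mut rx) = ws.split();
                    // Echo every text or binary frame back to the client; a real
                    // service would push server-side events here instead.
                    while let Some(Ok(msg)) = rx.next().await {
                        if msg.is_text() || msg.is_binary() {
                            if tx.send(msg).await.is_err() {
                                break;
                            }
                        }
                    }
                }
            });
        }
        Ok(())
    }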

Performance After Launch

The test environment ran for a week without issues, so I started preparing for production deployment.

On launch day, I kept my eyes glued to the monitoring dashboard, worried something might go wrong. But my concerns proved unnecessary. The new system ran incredibly stably.

The most obvious change was server CPU usage. With Node.js, CPU regularly stayed above 80 percent, hitting 95 percent during peak times. But after switching to the new framework, CPU usage consistently stayed around 30 percent, never exceeding 50 percent even during peaks.

Response times also improved significantly. Previously, average response time was around 50 ms. Now it dropped below 10 ms. Users reported the system felt faster, which gave me a real sense of accomplishment.

Memory usage also became more stable. Node.js memory usage would grow over time, requiring periodic restarts. But Rust has no garbage collector; memory usage stayed flat, and the service ran for a month without needing a restart.

Some Unexpected Benefits

This migration brought some unexpected benefits.

First was code quality. Rust's compiler is extremely strict, forcing you to handle all possible error cases. Initially I found this annoying, but later realized it's actually beneficial. It helps you catch issues at compile time rather than waiting for runtime crashes.

Our original Node.js code had many potential bugs, like unhandled exceptions, variables that could be null, etc. These issues simply can't pass compilation in Rust.

Second was team technical growth. While learning Rust has some difficulty, team members all felt they gained a lot. One colleague told me, "After learning Rust, my understanding of memory management and concurrent programming deepened significantly. Even when I go back to writing JavaScript, I feel more confident."

Third was performance tuning headroom. Rust provides many low-level control capabilities. If further performance optimization is needed, there are many adjustable parameters. Rust's performance analysis tools are also excellent, like flame graphs, which visually show where performance bottlenecks are.

Comparing Other Projects' Experiences

Later I saw some other people's performance testing reports online and found my results basically aligned with theirs. This gave me more confidence in my testing.

Another developer published a more comprehensive test, benchmarking over twenty frameworks, and the results showed Rust frameworks indeed have clear performance advantages. He also measured other metrics like memory usage and startup time, where Rust frameworks also performed well.

Some people questioned whether these performance tests were done under ideal conditions and might not achieve these levels in real applications. This viewpoint has some merit, but I think performance tests at least show a framework's performance ceiling. And from our project's actual performance, the improvements were very real.

Some Advice

Based on this experience, I want to offer some advice to fellow developers.

First, don't blindly pursue performance. Performance is important, but not the only consideration. If your project doesn't have high concurrency, the framework choice doesn't matter much. But if your project has high performance requirements, it's worth spending time testing and comparing.

Second, consider your team's actual situation. If the team is very familiar with a particular tech stack, continuing with it might be the most efficient choice. But if there's an opportunity to learn new technology, don't be afraid to try.

Third, focus on a framework's long-term development. A framework needs not just good performance, but also an active community, comprehensive documentation, and stable updates. These factors are very important for long-term maintenance.

Fourth, do thorough testing. Don't just look at others' test reports; it's best to test yourself, because different application scenarios can have very different performance characteristics.

Fifth, pay attention to details. Things like Keep-Alive settings, connection pool configuration, and caching strategies can impact performance more than the framework itself.

Further Exploration

After this testing, I developed a strong interest in web framework performance optimization. I started deeply researching various optimization techniques, like zero-copy, memory pools, coroutine scheduling, etc.

I also tried some other Rust web frameworks, like Actix-web, Warp, etc. Each framework has its own characteristics and advantages. Which to choose mainly depends on specific needs.

I also started following some emerging technologies, like HTTP/3, QUIC protocol, etc. These new technologies might change the web development landscape in the future.

Final Thoughts

Looking back on this testing and migration experience, I think the biggest gain wasn't finding the highest-performing framework, but learning how to scientifically evaluate and choose technology.

Performance data is important, but it's just one factor in decision-making. We also need to consider development efficiency, team skills, ecosystem, long-term maintenance, and many other aspects.

There's no absolute right or wrong in technology selection, only suitable or unsuitable. For our project, the Hyperlane framework was an excellent choice. But for other projects, Go, Node.js, or other frameworks might be more appropriate.

Most importantly, maintain a mindset of learning and exploration. Technology constantly evolves. Today's best practices might be outdated tomorrow. Only through continuous learning can we keep pace with technology.

If you're also making technology selections, I hope my experience can provide some reference. Remember, data doesn't lie, but don't be constrained by data either. Combine actual circumstances to make the choice that suits you best.

During testing, the Hyperlane framework I used could reach over 324,000 QPS with Keep-Alive enabled, and over 51,000 with it disabled. This performance level is top-tier in the Rust ecosystem, and its API design, documentation quality, and ecosystem integration are all excellent.

If you're interested in this framework, check out its GitHub page. There's detailed documentation and examples there. I believe you'll be as attracted by its design philosophy and performance as I was.

The world of technology is vast, with so much worth exploring. I hope we can all go further on this path.

Project Repository: https://github.com/hyperlane-dev/hyperlane

Author Contact: root@ltpp.vip

