As Software Scales, So Does Its Energy Appetite

Microservices can use over 40% more energy per transaction than monoliths, and training AI models like GPT-3 can consume as much electricity as 130 U.S. homes annually.


This content originally appeared on HackerNoon and was authored by Supriya Lal

From the earliest days of mainframe computing through the rise of client-server models, service-oriented and microservices architectures, and now today’s AI-driven systems, software architecture has continually evolved to meet the ever-growing demands of technological advancement. However, supporting these advancements has led to increasingly complex and distributed systems, requiring the continuous expansion of the underlying infrastructure. More energy is required not only to operate the infrastructure but also to support the rising computational workloads it must handle. This article explores how the evolution of software architecture is contributing to increasing energy consumption.

Evolution

1960-70s: Monolithic Architecture

In the early days of computing, software systems were typically built as monolithic applications: single, unified codebases with minimal modularity. These systems operated on mainframes or early minicomputers, where processing power, memory, and storage were severely limited. A notable example is the IBM 7094, introduced in the early 1960s, which supported major space missions like Gemini and Apollo with only about 150 KB of memory. Combined with the use of highly optimized assembly language code, the centralized execution model of early monolithic architectures minimized network overhead and reduced resource duplication. This led to more efficient use of the limited processing and memory resources available on mainframes and minicomputers. As a result, early computing environments achieved lower overall energy consumption per operation compared to the distributed, network-intensive architectures common today. An IBM article states that consolidating workloads onto five IBM z16 mainframe systems (released in 2022), instead of running them on x86 servers under similar conditions, can reduce energy consumption by up to 75%.

1980s: Client-Server & Layered Architecture

The layered architecture, introduced in the 1980s, split the single codebase into presentation, business logic, and data access layers. In parallel, the rise of personal computing and the expansion of computer networks during the same period led to the emergence of the client-server architecture. Applications were further divided into two distinct components: client-side interfaces handling user interactions, and server-side systems managing business logic, data processing, and storage.
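To make the separation concrete, here is a minimal, hypothetical Python sketch (class names and data are invented, not taken from the article): a request flows presentation → business logic → data access, with each layer talking only to the one directly below it.

```python
# Hypothetical three-layer sketch; names and values are illustrative only.

class DataAccessLayer:
    """Owns storage; an in-memory dict stands in for a database."""
    def __init__(self):
        self._accounts = {"alice": 120.0, "bob": 40.0}

    def get_balance(self, user: str) -> float:
        return self._accounts[user]

class BusinessLogicLayer:
    """Applies business rules (here, a flat fee) on top of raw data."""
    def __init__(self, data: DataAccessLayer):
        self._data = data

    def balance_after_fee(self, user: str, fee: float = 1.5) -> float:
        return self._data.get_balance(user) - fee

class PresentationLayer:
    """Formats results for the user; in a client-server split this part runs client-side."""
    def __init__(self, logic: BusinessLogicLayer):
        self._logic = logic

    def show_balance(self, user: str) -> str:
        return f"{user}: ${self._logic.balance_after_fee(user):.2f}"

app = PresentationLayer(BusinessLogicLayer(DataAccessLayer()))
print(app.show_balance("alice"))  # alice: $118.50
```

In a client-server deployment the presentation layer runs on the client while the business logic and data access layers run on the server, turning what used to be in-process calls into network requests.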

This architectural shift introduced a new layer of infrastructure, the network, which brought additional energy demands for data transmission and system operation. In 1999, Huber and Mills claimed that 8% of U.S. electricity was being consumed by the Internet and Internet-related devices. Although this figure has since been widely refuted, with more accurate estimates placing it closer to 1%, the article raised an important point: network communication inherently requires additional energy. For example, it estimated that creating, packaging, storing, and transmitting just 2 megabytes of data could consume the energy equivalent of burning one pound of coal.

The need for network communication introduced new sources of energy consumption. Data had to be transferred over the network between client and server, which was less energy-efficient than accessing local data within a mainframe. Network connectivity required extra power and CPU resources for components like network interface cards, transceivers, and protocol stacks (e.g., TCP/IP) to manage data transmission.

However, research on the energy implications of transitioning to layered client-server architectures remains limited. Drawing definitive conclusions about the overall energy impact is challenging due to the complex interplay of factors, including increased modularity, expanding infrastructure, and technological advancements aimed at reducing hardware energy consumption.


Late 1990s–2000s: Service-Oriented Architecture

As software systems grew in scale and complexity, organizations began adopting service-oriented architecture (SOA) to increase reusability and scalability. Unlike monolithic systems, SOA decomposes applications into loosely coupled services distributed across networks. This means that not only are clients and servers distributed over the internet, but the server-side application itself is further broken down into multiple services, each potentially running on different machines or locations across the network.

This introduced new infrastructure requirements to support inter-service communication, including service registries for discovery, messaging protocols (such as SOAP or REST), serialization and deserialization mechanisms (e.g., XML or JSON), and load balancers to manage traffic across services. A study conducted by Freie Universität Berlin in 2003 highlighted the challenges of this transition, noting that Web services significantly increased network traffic and imposed additional processing overhead on servers due to the need to parse serialized data formats such as XML. These operations consumed more CPU cycles and network bandwidth than traditional in-process function calls, ultimately contributing to higher energy consumption.
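To illustrate the kind of overhead the study describes, here is a small, hypothetical Python micro-benchmark (the function and element names are invented): it compares a direct in-process call with the same call wrapped in an XML request/response round trip, the sort of serialization and parsing a SOAP-style service performs on every invocation.

```python
# Hypothetical micro-benchmark; not taken from the cited study.
import time
import xml.etree.ElementTree as ET

def get_order_total(order_id: int) -> float:
    # Stand-in business logic; in a monolith this is a plain function call.
    return order_id * 1.5

def get_order_total_via_xml(order_id: int) -> float:
    # Simulate a SOAP-like round trip: build an XML request, parse it,
    # compute the result, wrap it in an XML response, and parse that too.
    request = f"<request><orderId>{order_id}</orderId></request>"
    parsed_id = int(ET.fromstring(request).findtext("orderId"))
    response = f"<response><total>{get_order_total(parsed_id)}</total></response>"
    return float(ET.fromstring(response).findtext("total"))

N = 50_000
start = time.perf_counter()
for i in range(N):
    get_order_total(i)
direct = time.perf_counter() - start

start = time.perf_counter()
for i in range(N):
    get_order_total_via_xml(i)
via_xml = time.perf_counter() - start

print(f"direct calls: {direct:.3f}s, XML round trips: {via_xml:.3f}s")
```

The exact numbers depend on the machine and the XML library, but the serialized path consistently burns more CPU time per call, and those extra cycles are ultimately paid for in energy.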

2010s: Microservices Architecture

Microservices architecture evolved from SOA. It further decomposes applications into smaller, independently deployable services, with each service typically maintaining its own database.

A comparative performance study published in the Journal of Object Technology showed that microservices consume approximately 20% more CPU than monolithic systems for the same workload. Another comparative analysis between monolithic and microservice architectures found that a microservices-based system consumed approximately 43.79% more energy per transaction than its monolithic counterpart. This architectural shift introduces significantly more inter-service communication than traditional SOA, along with remote database calls. Whereas a traditional client-server architecture might require only a few network calls to fulfill a request, a microservices-based system can involve hundreds of such calls, resulting in substantially higher network resource consumption.
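As a rough illustration of that fan-out (the service names are invented, not from the cited studies), the sketch below counts the remote calls triggered by a single incoming request in a hypothetical microservices deployment. In a monolith, the same steps would be in-process function calls with no serialization or network transfer.

```python
# Hypothetical call graph for one "place order" request; names are illustrative.
CALL_GRAPH = {
    "api-gateway":       ["auth-service", "order-service"],
    "auth-service":      ["user-db"],
    "order-service":     ["inventory-service", "pricing-service", "payment-service", "order-db"],
    "inventory-service": ["inventory-db"],
    "pricing-service":   ["pricing-db"],
    "payment-service":   ["payment-provider"],
}

def count_network_hops(service: str) -> int:
    """Recursively count remote calls made while serving one request."""
    downstream = CALL_GRAPH.get(service, [])
    return len(downstream) + sum(count_network_hops(d) for d in downstream)

print("remote calls per request:", count_network_hops("api-gateway"))  # -> 10
```

Each edge in the call graph adds serialization, transmission, and often a load-balancer hop, which helps explain the higher per-transaction energy reported above.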


Mid 2020s and Beyond: AI-Driven Architectures

While AI has existed for decades, generative AI has seen rapid advancement in recent years. Training modern AI models requires extensive infrastructure to store large datasets and to run the neural networks that process them. This process demands significant computational power, often relying on thousands of GPUs running for weeks or even months. For example, training OpenAI’s GPT-3 is estimated to have consumed approximately 1,287 megawatt-hours of electricity, roughly equivalent to the annual energy consumption of 130 U.S. homes, according to a 2022 study and data from the U.S. Energy Information Administration.
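A quick back-of-the-envelope check of those figures (treating the EIA household average as an approximation, since it varies by year) lands in the same range as the number quoted above:

```python
# Rough sanity check of the quoted figures; inputs are rounded estimates.
gpt3_training_mwh = 1_287      # estimated GPT-3 training energy, in MWh
home_kwh_per_year = 10_500     # approximate average annual U.S. household use (EIA)

homes = gpt3_training_mwh * 1_000 / home_kwh_per_year
print(f"~{homes:.0f} average U.S. homes for one year")  # ~123, in the same range as the ~130 cited
```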

Beyond training, using AI models to produce results (a process known as inference) also requires substantial infrastructure. GPUs, high-speed memory, and advanced networking systems are needed to process new inputs through trained neural networks in real time. This inference stage, especially at scale, continues to drive high energy consumption.

As AI adoption grows, its energy demands are escalating rapidly. Alex de Vries, a Ph.D. candidate at VU Amsterdam, estimated in a 2023 article in Joule that AI systems could eventually consume as much electricity annually as Ireland if current growth trends continue.

Conclusion

As software architecture evolves to support smarter, faster, and more flexible use cases, the underlying infrastructure grows with it. Each architectural leap has introduced more layers, more services, and more hardware dependencies, leading to a steady rise in energy consumption. To build a sustainable digital future, we must rethink how we design and deploy software. The future of software must be not only intelligent and scalable, but also energy-efficient.

