Stop Coding Like It’s 2015 (and here’s why most traditional software rules fail in the AI era)

The “Unit Test” Crisis

If you have been a software engineer for the last decade, you are used to control.

You write if (user.is_logged_in), and the computer executes it. If it fails, you set a breakpoint, find the bug on line 50, and fix it. The world is deterministic. Ideally, your unit tests pass 100% of the time. (I know. Ideally).

But the moment you integrate a Large Language Model (LLM) into your product, that safety net vanishes.

I recently tried to build a simple sentiment analysis tool. I ran the test suite. It passed. I reran it 10 minutes later. It failed. Same input. Different output.

The unit tests turned red, not because the code was broken, but because the “computer” changed its mind.
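
Here is roughly what that failing test looked like, as a minimal sketch. It assumes the `openai` Python SDK and an API key in the environment; the model name and prompt wording are illustrative, not the exact code I ran.

```python
from openai import OpenAI

client = OpenAI()

def classify_sentiment(review: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative; any chat model shows the same behaviour
        messages=[
            {"role": "system",
             "content": "Reply with exactly one word: positive, negative, or neutral."},
            {"role": "user", "content": review},
        ],
    )
    return resp.choices[0].message.content.strip()

def test_sentiment_exact_match():
    # Software 1.0 thinking: exact string equality.
    # This can pass now and fail ten minutes later: the model may answer
    # "Negative", "negative.", or hedge with a full sentence.
    assert classify_sentiment("The battery died within an hour.") == "negative"
```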

Welcome to the uncomfortable world of Software 2.0.

Software 1.0 vs. Software 2.0: The Paradigm Shift

To understand why this happens, we need to look at the mental model Andrej Karpathy proposed back in 2017 (yes, that was already eight years ago).

  • Software 1.0 (Traditional): Humans write explicit logic (code). We tell the machine how to solve the problem step by step.
  • Software 2.0 (AI): Humans provide the goal and the data (prompts/examples). The machine figures out the logic (neural weights) to solve it.
The difference between Software 1.0 and Software 2.0.

As Karpathy says, “In Software 2.0, the source code is the dataset.” But this shift breaks the fundamental rules we grew up with.
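
To make the contrast concrete, here is an illustrative sketch (my framing, not Karpathy's): the same sentiment task written as explicit logic, and as a goal handed to a model. The word list and prompt wording are assumptions.

```python
# Software 1.0: a human writes the step-by-step logic.
NEGATIVE_WORDS = {"broken", "refund", "terrible", "died"}

def sentiment_v1(review: str) -> str:
    words = set(review.lower().split())
    return "negative" if words & NEGATIVE_WORDS else "positive"

# Software 2.0: a human writes the goal; the logic lives in the weights.
# (Sending this to a model is sketched in the earlier snippet.)
SENTIMENT_PROMPT = (
    "Classify the sentiment of this product review as positive, "
    "negative, or neutral:\n\n{review}"
)
```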

Three Rules That No Longer Apply

When you move from writing logic to curating context, three of our most “sacred” engineering beliefs fall apart.

Rule #1 Broken: “There is Exactly One Right Answer”

  • Old Rule: In Software 1.0, 1 + 1 must always equal 2. If it equals 2.0001, it’s a bug. We optimize for Certainty.
  • New Reality: In Software 2.0, we optimize for Capability. As noted in the Latent Space manifesto, “There are no certainties about what these models are capable of.” You are trading precision for the ability to understand English, images, and nuance.
  • The Shift: You stop checking for “Exact Matches” (assertEquals) and start checking for “Semantic Similarity” (LLM-as-a-Judge or embedding comparisons; see the sketch after this list). “Good enough” is the new “Correct.”
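
A minimal sketch of the semantic-similarity approach, again assuming the `openai` SDK for embeddings; the embedding model name and the 0.8 threshold are assumptions you would tune against your own eval set.

```python
import math
from openai import OpenAI

client = OpenAI()

def embed(text: str) -> list[float]:
    resp = client.embeddings.create(model="text-embedding-3-small", input=text)
    return resp.data[0].embedding

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def assert_semantically_close(output: str, reference: str, threshold: float = 0.8):
    # Instead of assertEquals, fail only if the meaning drifted too far.
    score = cosine(embed(output), embed(reference))
    assert score >= threshold, f"Semantic drift: similarity {score:.2f} < {threshold}"
```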

Rule #2 Broken: “Latency is a Bug”

  • Old Rule: API calls should take milliseconds. If an endpoint takes 3 seconds to load, you optimize the database query.
  • New Reality: LLMs are slow. Thinking takes time. Generating a complex summary might take 10–30 seconds.
  • The Shift: Latency is no longer just a bug. It’s often a feature of “thinking depth.” This forces us to completely change our Frontend Architecture. We can no longer use simple await calls. We must master Streaming (for real-time UI updates) and Asynchronous Queues to keep the user engaged while the “brain” works.
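
A minimal streaming sketch with the `openai` SDK (model name illustrative): instead of one blocking call that leaves the user staring at a spinner for 30 seconds, tokens render as they arrive.

```python
from openai import OpenAI

client = OpenAI()

stream = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative
    messages=[{"role": "user", "content": "Summarize the Software 2.0 idea."}],
    stream=True,
)

for chunk in stream:
    delta = chunk.choices[0].delta.content
    if delta:
        # A real frontend would push each delta over SSE/WebSocket to the UI.
        print(delta, end="", flush=True)
```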

Rule #3 Broken: “The Tech Stack is Static”

  • Old Rule: You pick a database (PostgreSQL) and a language (Python), and you stick with them for 5 years.
  • New Reality: The “CPU” of your application (the Model) changes every week. GPT-5 today, Claude 4.5 tomorrow, Gemini 3 next month.
  • The Shift: You cannot “wed” your architecture to OpenAI. You need Model Agnostic Design. Your system must be built so that swapping the underlying intelligence engine is as easy as changing a config variable, not rewriting the codebase.
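
One possible shape for that, sketched below. The Protocol, class names, and env var are my assumptions, not a standard; the point is that business code depends on a two-line interface while the concrete model is a config value.

```python
import os
from typing import Protocol

class ChatModel(Protocol):
    def complete(self, prompt: str) -> str: ...

class OpenAIModel:
    def complete(self, prompt: str) -> str:
        from openai import OpenAI
        resp = OpenAI().chat.completions.create(
            model="gpt-4o-mini",  # illustrative
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.choices[0].message.content

class AnthropicModel:
    def complete(self, prompt: str) -> str:
        import anthropic
        resp = anthropic.Anthropic().messages.create(
            model="claude-sonnet-4-20250514",  # illustrative
            max_tokens=1024,
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.content[0].text

def load_model() -> ChatModel:
    # Swapping the "CPU" is a config change, not a rewrite.
    providers = {"openai": OpenAIModel, "anthropic": AnthropicModel}
    return providers[os.environ.get("LLM_PROVIDER", "openai")]()
```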

What does this mean for your career?

Does this mean your coding skills are obsolete? Absolutely not.

The Latent Space article highlights that AI engineers serve as the bridge between researchers and products. Researchers build the engines; we build the cars.

Because the models are probabilistic (unreliable) and black boxes (un-debuggable), System Design becomes more critical, not less.

  • You need to build Guardrails to catch hallucinations.
  • You need to build Retry Logic to handle non-determinism.
  • You need to build Evaluation Pipelines to measure quality.

You are no longer just writing code; you are architecting reliability on top of unreliable components.
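
Putting two of those together, here is a sketch of a guardrail plus retry loop. `classify_sentiment` is the hypothetical helper from the first snippet, and the retry count and label set are arbitrary assumptions.

```python
ALLOWED_LABELS = {"positive", "negative", "neutral"}

def robust_sentiment(review: str, max_retries: int = 3) -> str:
    last = ""
    for _ in range(max_retries):
        # Non-determinism cuts both ways: a retry may succeed where
        # the previous attempt produced garbage.
        last = classify_sentiment(review).lower().strip(" .")
        if last in ALLOWED_LABELS:  # guardrail: reject off-schema output
            return last
    raise ValueError(f"No valid label after {max_retries} tries, last: {last!r}")
```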

The Future: Software 3.0 and the “Autonomy Slider”

But the story doesn’t end at Software 2.0. In his recent talks, Karpathy has introduced Software 3.0.

If Software 1.0 is Manual Coding, and Software 2.0 is Training/Prompting, then Software 3.0 is Natural Language. It’s the era where you talk to the computer in English to get things done (think Agentic workflows like Cursor or Devin).

Does this replace us? No. Karpathy describes it as a Slider of Autonomy.

  • Software 1.0 (Left End): Banking logic, critical infrastructure. You want 0% Autonomy here. You need explicit code.
  • Software 3.0 (Right End): Coding assistants, content generation. You accept High Autonomy. You act as a manager, not a worker.

The Great Integration: We aren’t abandoning 1.0 or 2.0. They will coexist. A modern application might use Software 3.0 (an Agent) to write Software 1.0 (Python code) that calls a Software 2.0 model (an LLM). That is exactly what I am building now, and I will demo it in a future issue.

The best AI Engineers in 2026 won’t just pick one. They will know exactly where to place that slider for every problem they solve.

What’s Next?

Now that we understand the philosophy, we need to talk about the tools.

There is one programming language that has completely monopolized this entire stack, from training 2.0 models to orchestrating 3.0 agents. It’s not JavaScript, and it’s not Rust.

In the next issue, we will dive into Why Python Won the AI War.

If you found this insightful, you can subscribe to the newsletter for more articles like this.

Till We Code Again | Dylan Oh | Substack

