We’ve developed a new category of tech debt: code that AI wrote confidently but incorrectly. It passes code review because it looks reasonable. It passes tests because we didn’t think to test for the specific edge case it misunderstands. Then production breaks. “Vibe coding” captures our AI frustration perfectly.
But here’s the problem - we think “AI in engineering” means “AI writes code”. That ignores most of engineering. Engineering isn’t just writing code. It is about understanding what to build and why. It’s researching approaches, making trade-off decisions, communicating context across teams, planning architecture, and figuring out what questions to ask before we write the first line. Code is the output of all that thinking work.

The conversation has been backward. We’re debating code generation tools when we should be examining how our teams, processes, and requirements work. Instead of “can AI write my code?”, what if we asked “where does AI actually help with the thinking parts of engineering?” This shift in mindset changes everything.
“Successful AI adoption is a systems problem, not a tools problem.”
— DORA Research (2025): State of AI-Assisted Software Development
In this post, we’ll explore six practical applications where I’ve seen AI create value in engineering work. They’re mostly not about code generation. We’ll look at requirements gathering, planning, and the coordination work that typically slows teams down. Then we’ll talk about the limitations, because there are plenty.
TL;DR
Code generation dominates the conversation, but the real leverage is in clarifying requirements, planning, and getting alignment. Understand what you’re building and why before writing code. Start by mapping your team’s workflow bottlenecks; those are the areas AI might streamline.
Bad Requirements Can’t Be Solved by Generating Code
Code is tangible and easy to demonstrate. It’s central to software engineering. So AI companies focused their marketing on code generation while throwing user research, product validation, business rules, scalability, and security out of the window.
In practice, disappointment was inevitable. We threw complex problems at AI, expecting it to understand business logic and navigate legacy codebases. We expected AI to function like a Senior Engineer, so we dismissed it when it couldn’t. The problem isn’t what AI does; it’s how and where we use it.
“AI’s primary role in software development is that of an amplifier. It magnifies the strengths of high-performing organizations and the dysfunctions of struggling ones.”
— DORA Research (2025): State of AI-Assisted Software Development
When requirements are scattered across Slack threads and meeting notes, AI can’t simply consolidate them; it just makes the fragmentation visible. If teams struggle to articulate requirements clearly, that’s a signal. The fix isn’t better prompting, it’s better requirements practices. AI adoption becomes an opportunity to address systemic issues that were always there but easier to ignore.
Slapping AI onto existing workflows without understanding the system we operate in creates more problems than it solves. If we start building a feature without understanding the requirement, adding AI to build it faster won’t help. If we’re struggling with ambiguous requirements or misaligned stakeholders, generating code faster just means we build the wrong thing more quickly.

The key is understanding our systems and workflows. How conversations take three meetings when they should take one. How context gets lost across many communication threads. Identifying bottlenecks helps us use AI to amplify work, not force it everywhere.
Software Engineering Isn’t Just About Coding
Consider how most engineering disciplines work: planning, requirements gathering, research, and trade-off analysis come before implementation. Software engineering shouldn’t be any different, yet we often skip straight to coding.
Most engineering bottlenecks happen earlier. We spend days in Slack threads clarifying what “real-time” actually means for this feature. We debate whether to build or buy. We discover three teams are solving similar problems differently. We realise the database design we sketched won’t handle the access patterns we need. Code generation doesn’t help with any of that. Code only delivers value when these steps are clear.
Software systems are socio-technical systems (I’ve written about it before). The complexity in most systems isn’t technical. It’s about people, processes, team boundaries, deployment constraints, and organisational politics. If we are deciding whether to build a new microservice or extend an existing service — half that decision is technical. But the other half is “which team owns this”, “what’s their capacity”, and “can we deploy independently?”
AI can help with the structured thinking work that happens before those decisions. Researching, analysing options, and documenting the context is where I’ve seen the most practical value. It accelerates the messy pre-code work.
“The greatest returns on AI investment come not from the tools themselves, but from a strategic focus on the underlying organizational system.”
— DORA Research (2025): State of AI-Assisted Software Development
An AI-Accelerated Engineering Workflow: Six Practical Steps
Let me share six areas where I’ve found AI useful, based on my experiments over the past few months. Nothing revolutionary here, just practical applications that save time on work we’re already doing.
There’s a natural flow to these: when requirements are clearer, trade-off analysis gets easier. Better trade-off analysis leads to more specific cards. Clearer cards make planning more concrete. Each step removes ambiguity for the next one.
You might not use all six, or you might tackle them in a different order depending on your situation. But the sequence shows how they build on each other. These steps typically involve multiple roles — engineer driving design, tech lead reviewing trade-offs, and PM validating requirements. AI can support each stage regardless of who’s leading it.
Gathering and understanding requirements
A pattern that’s all too common, regardless of company size: requirements live everywhere except in one place. Meeting notes, Slack threads, user feedback, edge cases someone mentioned in standup. Six weeks in, the team discovers conflicting assumptions. One engineer thought feature X had a certain constraint; another thought the opposite.
For one project, I gathered scattered requirements. I fed them to AI along with some context about our domain and existing patterns. AI helped me extract functional requirements, flag non-functional constraints, and organise information to expose potential gaps and contradictions. While it couldn’t validate these on its own, having the requirements structured made it much easier to spot inconsistencies I needed to investigate.
What came back wasn’t a polished document, but a good starting point. I spent a couple of hours refining it, filling gaps, and clarifying ambiguous points. I then shared this with the team for review and built a shared understanding.
The goal was making assumptions visible. Having to articulate requirements for AI forced us to be explicit.
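To make that concrete, here is a minimal sketch of the kind of consolidation prompt I mean. The sources, domain notes, and the `call_model` helper are illustrative placeholders, not the exact prompt or tooling from the project.

```python
def build_consolidation_prompt(sources: dict[str, str], domain_notes: str) -> str:
    """Combine scattered requirement fragments into one structured prompt."""
    fragments = "\n\n".join(
        f"## Source: {name}\n{text}" for name, text in sources.items()
    )
    return (
        "You are helping consolidate software requirements.\n\n"
        f"Domain context: {domain_notes}\n\n"
        f"Raw notes:\n{fragments}\n\n"
        "Tasks:\n"
        "1. Extract the functional requirements.\n"
        "2. Flag non-functional constraints (performance, security, compliance).\n"
        "3. List gaps, ambiguities, and contradictions that need a human decision.\n"
        "Do not invent requirements that the notes don't support."
    )


# Invented fragments standing in for real meeting notes and Slack threads.
sources = {
    "meeting-notes": "Checkout must support saved cards and guest payments.",
    "slack-payments-thread": "Refunds should be possible within 30 days.",
    "standup-edge-case": "What happens if a card expires mid-subscription?",
}
prompt = build_consolidation_prompt(sources, domain_notes="B2C subscription billing")
# response = call_model(prompt)  # placeholder for whichever LLM client you use;
# treat the output as a starting point to review, not a finished spec
```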
Analysing and designing the solution
Once we have the requirements, the next question is: how do we actually build this? In software, there are multiple approaches to everything, each with different trade-offs. The right choice depends on your team’s context, their capacity, their operational maturity, and what they’re willing to maintain.
Unfortunately, these decisions often happen ad hoc in Slack, or worse, during code review when someone questions the entire approach. By then you’ve already invested days in implementation.
I gave AI the requirements I created and validated. Then asked it to brainstorm competing approaches with explicit pros, cons, and assumptions. It generated three or four options with credible reasoning behind each. I didn’t ask AI to “decide” which approach was best. But having those options laid out with explicit trade-offs gave me a starting point.
This is where experience matters. I reviewed the options, picked elements from two of them, and modified the approach. Then I shared it with stakeholders to align everyone on the implementation.
Now we have documentation that explains the system to engineers. The artefact itself matters less than the fact that we have one. It aligned everyone on what we’re building and why.
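For illustration, this is roughly the shape I ask those options to come back in: each approach with explicit pros, cons, and assumptions. The two options below are invented, loosely based on the extend-versus-new-service example from earlier in the post.

```python
from dataclasses import dataclass


@dataclass
class Approach:
    name: str
    pros: list[str]
    cons: list[str]
    assumptions: list[str]


options = [
    Approach(
        name="Extend the existing billing service",
        pros=["No new deployment pipeline", "Owning team knows the code"],
        cons=["Service is already large", "Release cadence stays coupled"],
        assumptions=["The owning team has capacity this quarter"],
    ),
    Approach(
        name="Build a new payments microservice",
        pros=["Independent deployment", "Clear ownership boundary"],
        cons=["New infrastructure and on-call burden"],
        assumptions=["The platform team can provision it quickly"],
    ),
]

# Lay the trade-offs side by side for the alignment discussion.
for option in options:
    print(f"\n{option.name}")
    print("  pros:        " + "; ".join(option.pros))
    print("  cons:        " + "; ".join(option.cons))
    print("  assumptions: " + "; ".join(option.assumptions))
```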
Breaking requirements into actionable work
A common struggle for engineering teams is taking a clear solution design and breaking it into cards so that other engineers can pick them up. I’ve seen this go wrong repeatedly. Someone creates a card with the title “implement payment flow” with no context or explanation. Only the person who wrote it can pick it up, and only if they remember what they meant.
On another project, I broke the requirements down into high-level stories, using the requirements and solution documents as context for AI. Then for each card, I asked it to generate acceptance criteria, dependencies, and edge cases. The focus was on “what” each card should deliver, not “how” to implement it.
Each step of decomposition brought more clarity. When I finished, the cards had enough detail for anyone to understand.
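As a rough illustration, a card at that level of detail looked something like this. The feature, criteria, and dependencies are invented; the structure is the point.

```python
card = {
    "title": "Validate card details before creating a payment intent",
    "context": "Part of the payment flow epic; see the solution design doc.",
    "acceptance_criteria": [
        "Given a valid card, when checkout is submitted, "
        "then a payment intent is created and its id is stored",
        "Given an expired card, when checkout is submitted, "
        "then the request is rejected with a clear validation error",
    ],
    "dependencies": ["Payment provider sandbox credentials"],
    "edge_cases": [
        "Card expires between validation and capture",
        "Duplicate submission of the same checkout",
    ],
}
```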
Creating a detailed implementation plan
The card tells us “what” needs to be delivered, but not “how”. Here an engineer designs which components to create or modify, how to wire up the communication between them, and how they get invoked. This is familiar territory for most of us, but it still takes time to think through.
I tried something different here. I opened a coding agent and provided all the context (requirements, solution design, and the specific card), then asked it to generate a step-by-step implementation plan. It pulled relevant code, suggested changes, identified test cases, and flagged dependencies.
The plan it generated was a good starting point. I used it in a kickoff meeting with a couple of other engineers, and we discovered some issues that we wanted to handle differently.
This step requires the most judgment calls. AI can suggest an implementation path, but it may not know your team’s coding conventions, operational constraints, or the subtle undocumented patterns established over time. You still need experienced engineers to validate the implementation plan. And in a refinement session, if you’re playing the story-points game, the estimate is now far more predictable.
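To give a sense of the output, here is a sketch of what such a step-by-step plan can look like once it’s captured for review. The file names, steps, and tests are illustrative, not from the actual project.

```python
# Ordered steps, each with the tests that should prove it and what it depends on.
implementation_plan = [
    {
        "step": "Add a CardValidator to payments/validation.py",
        "tests": ["rejects expired cards", "accepts valid cards"],
        "depends_on": [],
    },
    {
        "step": "Call CardValidator from CheckoutService.create_payment_intent",
        "tests": ["a validation failure short-circuits intent creation"],
        "depends_on": ["CardValidator"],
    },
    {
        "step": "Surface validation errors in the checkout API response",
        "tests": ["API returns 400 with error code CARD_EXPIRED"],
        "depends_on": ["CheckoutService change"],
    },
]
```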
Generating test cases
With clear requirements and an implementation plan, you might be thinking about how components should behave: what they accept, what they return, how they handle errors. This allows us to follow Test-Driven Development practices. But writing tests upfront feels tedious enough that engineers often skip it.
I used the implementation plan with an AI coding agent and generated tests covering expected behaviour, edge cases, and error cases. It generated some obvious tests; others surfaced scenarios I would have missed.
It’s somewhat like pairing with another engineer and thinking out loud, specifically around designing behaviour before implementation. If we later ask AI to generate code, these tests constrain what it produces. We’ve already defined success.
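As a sketch of what that looks like in practice, here are tests written before the implementation exists. The module path, the `validate_card` helper, and the `CardExpiredError` exception are hypothetical names chosen for this example.

```python
from datetime import date

import pytest

# Hypothetical: this module does not exist yet; the tests define its behaviour.
from payments.validation import CardExpiredError, validate_card


def test_accepts_card_with_future_expiry():
    assert validate_card({"number": "4242424242424242", "expiry": date(2030, 1, 1)})


def test_rejects_expired_card():
    with pytest.raises(CardExpiredError):
        validate_card({"number": "4242424242424242", "expiry": date(2020, 1, 1)})


def test_rejects_card_with_missing_expiry():
    with pytest.raises(ValueError):
        validate_card({"number": "4242424242424242"})
```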
Recording decisions
After the project, I asked the AI to write Architecture Decision Records from the implementation. The AI coding agent wrote down our reasoning, the options we considered, and how we built it. It took minutes.
We know we should write these. We don’t. Six months later, someone asks why we built it that way. No one knows. The person who decided has left. They took the answer with them.
The AI-generated ADRs aren’t perfect. If you made a decision based on unspoken team or organisational knowledge, that won’t appear in the ADR unless you include it. But they catch the main ideas. It’s a quick way to build a shared knowledge base instead of revisiting decisions.
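For illustration, this is roughly the skeleton the records followed, using the common Status / Context / Decision / Consequences format. The helper, the file path, and the content below are invented for this sketch, not the agent’s actual output.

```python
from pathlib import Path


def write_adr(number: int, title: str, context: str, decision: str, consequences: str) -> Path:
    """Write a lightweight ADR in the Status / Context / Decision / Consequences format."""
    slug = title.lower().replace(" ", "-")
    path = Path("docs/adr") / f"{number:04d}-{slug}.md"
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(
        f"# {number:04d}. {title}\n\n"
        "## Status\nAccepted\n\n"
        f"## Context\n{context}\n\n"
        f"## Decision\n{decision}\n\n"
        f"## Consequences\n{consequences}\n"
    )
    return path


write_adr(
    7,
    "Extend the billing service for card validation",
    context="We considered extending the billing service or building a new microservice.",
    decision="Extend the billing service; the owning team has capacity and deploys weekly.",
    consequences="Release cadence stays coupled; revisit if the service keeps growing.",
)
```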

Notice the progression: first, understand what we need (requirements). Then weigh our options and decide (trade-off analysis). Then plan how to build and test it. Last, we document it. Each step makes the next one clearer.
AI doesn’t replace this work. These are thinking tasks. They need judgment, context, and experience. AI accelerates the thinking parts. It frees you to focus on the decisions that matter.
When requirements are clear, decisions are documented, and plans are explicit, code generation becomes safer. The context and intent are there. The AI has less room to hallucinate or misunderstand.
The Limitations, Failures, and Trade-Offs We Need to Understand
This workflow shows what’s possible when AI is applied to the thinking work. But before this sounds too optimistic, let’s be honest about the constraints, trade-offs, and failure modes I’ve experienced.
Context, requirements and expertise still matter most
As mentioned before, AI can’t fix unclear or missing requirements. If there are gaps in what you ask for, AI won’t fill them properly; it will make things worse by guessing (and hallucinating) to cover them. It’s crucial to provide the correct context and details when working with AI on real work.
Clear context in the earlier steps is critical, because each generated artefact feeds the next one. If a requirement is vague or made up, it will impact everything that follows. When I tried to move fast with half-baked requirements, I had to spend more time on review, rework, or refactoring. It didn’t save time; it created more work. It’s garbage in, garbage out, but faster.
Use AI to find the gaps. Don’t use it to fill them. The decisions are still yours. AI can show you options. It can’t tell you which option fits your team’s constraints, skills, or timeline.
Trading writing time for review time
Whatever AI generates (documents, plans, code) requires careful review. That review time is non-negotiable. Review catches hallucinations, identifies confusion, and grounds AI outputs in reality. I’ve found that reviewing also helps me articulate my own thinking better. When I’m reviewing AI-generated requirements or solution designs, I’m forced to be explicit about what I actually mean.
Don’t skip this step; skipping it will complicate what you’re trying to build.
Testing becomes a must-have, not an optional extra
With all planning in place, you might feel confident using AI to generate code. The issue is that code review alone won’t catch the subtle bugs that AI introduces. I’ve seen it create code that looks correct but behaves oddly: methods get mocked just to make tests pass, or the code never actually invokes the methods it should be calling.
This isn’t a failure of AI. It’s the reality of working with generated code. Verify the work before shipping it, as we should anyway, and make sure it actually does what you think it does.
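Here is a contrived sketch of that failure mode: the test mocks the validator and passes, but nothing ever checks that the validator is called, so a version of the code that skips validation entirely stays green. All names are illustrative.

```python
from unittest.mock import MagicMock


def checkout(card, validator):
    # A version a code generator might plausibly produce: the validator is
    # accepted as a dependency but never actually invoked.
    return {"status": "created"}


def test_checkout_creates_intent():
    validator = MagicMock(return_value=True)
    result = checkout({"number": "4242", "expiry": "2020-01"}, validator)
    assert result["status"] == "created"  # green, and looks reasonable in review
    # The assertion that would expose the bug is missing:
    # validator.assert_called_once()
```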
The productivity gains aren’t guaranteed
AI is still in its early stages. You might see big gains in gathering requirements and no gain in planning, or the other way around. The variance is high, and it depends heavily on domain, the team’s practices, and how well you can provide context.
I’ve had sessions where AI saved me hours of writing. I’ve had sessions where fixing made-up answers took longer than doing the work myself. The difference was usually how well I explained the problem and how much context I provided.
Set realistic expectations. It’s not just about speed; it’s also about clarity, alignment, and discovering unknowns. That’s still valuable.
Start With Your Workflow, Then Add AI Strategically
The real shift: instead of getting overly excited by AI, examine your workflow first.
Where do you lose time? Where does communication break down? Where does repetitive work slow you down? Then add AI where it saves time and reduces friction.
Improvement happens only if we understand why we need it, not just that we want to use it.
The goal isn’t 10× productivity. It’s sustainable work. Work where we’re not grinding repetitive tasks. Work where we have time to think, learn, and build better systems.
“The value of AI is not going to be unlocked by the technology itself, but by reimagining the system of work it inhabits.”
— DORA Research (2025): State of AI-Assisted Software Development
Use AI as a thinking partner to clarify requirements, analyse trade-offs, and capture decisions - not just as a code generator. You can get better alignment, validated requirements, and faster decisions.
That’s not vibe coding. That’s practical engineering with better tools.