This content originally appeared on DEV Community and was authored by Yongsik Yun
Summary
I don’t think “good prompts” are enough. In my experience, outcomes improve when I understand what my keywords actually mean and can roughly predict what the model will return.
This post shares my current model in a report style, but the views are personal and context-dependent.
Why this matters (my take)
I keep returning to four areas that change results the most:
Tool Understanding
Knowing model limits, context management, and I/O formats reduces avoidable iteration.

Requirements Understanding
Clear problem statements, success criteria, and non-functional needs (security, performance, operations) keep direction stable.

Design & Architecture Understanding
Boundary setting, dependency control, and explaining trade-offs lower change cost.

Organization & Process Understanding
Roles, collaboration flow, and deployment and operations realities increase execution efficiency.
The 4× model
I think results behave like a product of four “pillars”:
Outcome ≈ (Depth of prior learning) × (Understanding of the problem context) × (Design & architecture skill) × (Environment awareness: org, infra, process)
If any one term is near zero, the whole outcome drops sharply. That’s what I’ve observed, not a universal law.
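The "one weak term sinks the product" behavior is easy to see numerically. A toy sketch of the model, with illustrative pillar names and scores that are my assumptions, not from the post:

```python
def outcome(scores: dict[str, float]) -> float:
    """Multiply pillar scores (each in 0..1); one weak term drags the product down."""
    result = 1.0
    for score in scores.values():
        result *= score
    return result

# Four evenly moderate pillars vs. three strong pillars and one near-zero one.
balanced = outcome({"learning": 0.8, "context": 0.8, "design": 0.8, "environment": 0.8})
one_weak = outcome({"learning": 0.9, "context": 0.9, "design": 0.9, "environment": 0.1})

print(f"{balanced:.3f}")  # 0.410
print(f"{one_weak:.3f}")  # 0.073
```

Note how the second profile has a higher score on three of four pillars yet a far lower product, which is the asymmetry the post describes.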
Human ↔ AI split (what I’m experimenting with)
I’m actively designing how to split work between AI agents and myself across the four pillars. It’s a work in progress.
| Pillar | What it means | Delegate to AI agents | Keep human-led (for now) |
|---|---|---|---|
| Tool | Prompt scaffolds, format transforms, test data generation | Patterned refactors, doc drafting, spec-to-code skeletons | Choosing models, context strategy, safety/limits |
| Requirements | Clarify terms, map examples, validate acceptance criteria | Requirement clustering, duplicate detection, glossary drafts | Final problem framing, success metrics, risk acceptance |
| Design & Arch | Option listing, RFC skeletons, sequence/state diagrams | Alternative enumeration, boilerplate architectures | Final boundary decisions, trade-off ownership |
| Org & Process | Checklists, runbooks, review templates | Routine updates, status summaries, meeting minutes | Incentives, role design, escalation paths |
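One way to make the split explicit, rather than leaving it implicit in a table, is to record it as data the team can review and update. A minimal sketch, assuming a simple dataclass layout of my own invention (field names and the two example rows are illustrative, drawn from the table above):

```python
from dataclasses import dataclass, field


@dataclass
class PillarSplit:
    """One pillar's division of labor between AI agents and humans."""
    pillar: str
    delegate_to_agents: list[str] = field(default_factory=list)
    keep_human_led: list[str] = field(default_factory=list)


split = [
    PillarSplit(
        "Tool",
        delegate_to_agents=["patterned refactors", "doc drafting"],
        keep_human_led=["choosing models", "context strategy"],
    ),
    PillarSplit(
        "Requirements",
        delegate_to_agents=["requirement clustering", "glossary drafts"],
        keep_human_led=["final problem framing", "success metrics"],
    ),
]

for p in split:
    print(f"{p.pillar}: agents do {len(p.delegate_to_agents)}, humans keep {len(p.keep_human_led)}")
```

Keeping the split in a reviewable structure like this makes it easy to diff as the experiment evolves.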
Practical checklist
- Do I know the model’s constraints and how I’ll manage context?
- Are success criteria and non-functionals explicit?
- Can I explain my trade-offs like I would in a design review?
- Does the plan reflect team roles and deployment reality?
- For each pillar, what is agent-do vs human-decide?
Closing
This is the frame I’m using right now:
Prior learning × Context understanding × Design skill × Environment awareness.
I expect efficiency to drop when any one term weakens. My current focus is to make the agent/human split explicit in each pillar.
If you use a different split or model, I’d like to learn from it.

Yongsik Yun | Sciencx (2025-08-30T14:01:01+00:00) Four Multipliers for Using AI Well: My Working Model. Retrieved from https://www.scien.cx/2025/08/30/four-multipliers-for-using-ai-well-my-working-model-2/