Workflows Are Not AI Agents: Selling Lies

This content originally appeared on DEV Community and was authored by SK

Zapier’s claim of “800+ AI agents” is absurd.


If these were real agents, it wouldn’t be 800 of them; it’d be one agent with 800 tools. But “workflow” doesn’t sound as sexy or sell as well as “AI agent,” so we lie. Marketing, right?

A real agent (Claude, GPT, etc.) is not, and should not be, a glorified regex parser with an LLM slapped on top.

And just for context, I’m actually building an agentic tool in Go for smaller models, so I think I’ve earned a bit of leeway to talk about this.

  • What Actually Is an Agent?
  • But Wait... What Is an Agent Under the Hood?
    • Letting LLMs Return JSON
    • The "Read File" Tool Pattern
  • Where Agentic Behavior Begins
    • Real-World Example
  • Why Workflows ≠ Agents
    • An Email Example
  • So… 800 Agents?
  • Final Thought

What Actually Is an Agent?

At its core, an AI agent is a large language model that orchestrates decisions in real time. It doesn’t just spit out text; it can:

  1. Choose the right tool or function to use next.
  2. Handle unpredictable situations or errors. (“File not found” → “Should I search again? Try a new path?”)
  3. Loop through its own failures until it either solves your problem or admits it can’t.

A traditional workflow is deterministic:
Take input → regex parse (or LLM) → store in DB → done.
No thinking. No dynamic decision-making.
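To make the contrast concrete, here's that deterministic pipeline as a few lines of JavaScript. The invoice format, regex, and in-memory "DB" are all invented for illustration:

```javascript
// A classic "workflow": fixed steps, no decisions.
const db = [];

function processInvoice(text) {
  // 1. Parse with a regex (or swap in an LLM call; the shape is the same)
  const match = text.match(/Invoice #(\d+) Total: \$([\d.]+)/);
  if (!match) return null; // no retry, no reasoning; it just fails

  // 2. Store in the "DB"
  const record = { id: match[1], total: Number(match[2]) };
  db.push(record);

  // 3. Done
  return record;
}
```

Feed it `"Invoice #42 Total: $19.99"` and it stores a record; feed it anything outside the expected shape and it returns `null`. At no point does the pipeline ask "what should I try next?"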

But Wait... What Is an Agent Under the Hood?

LLMs are text-in, text-out machines. We all know this.

But what happens when we teach them to return structured data like JSON?

JSON is friendly, and most importantly, it’s friendly to programming languages: easy to parse, easy to act on.

Letting LLMs Return JSON

Here’s an example in JavaScript where we teach the model to output structured data:

// Node example. `callLLM` is a placeholder for whatever client
// you use to talk to the model (Ollama, an HTTP API, etc.).
const fs = require("fs");

// Pseudo-prompt to LLAMA
const prompt = `
If the user asks about reading a file, reply ONLY with valid JSON:
{ "name": "read_file", "path": "<file path>" }

Example:
User: read and validate main.go
Assistant:
{ "name": "read_file", "path": "main.go" }
`;

// Wrapper logic in JS
let llmResponse = callLLM(prompt);
try {
  const parsed = JSON.parse(llmResponse);
  if (parsed.name === "read_file") {
    const fileContents = fs.readFileSync(parsed.path, "utf8");
    // Re-prompt the LLM with the file contents for further reasoning
    llmResponse = callLLM(`Here are the file contents:\n\n${fileContents}`);
  }
} catch (err) {
  // Not valid JSON? Then it's plain text meant for the user.
  console.log("LLM says:", llmResponse);
}

This is huge! Like Thorsten Ball says, you’re teaching the LLM to nudge you.

"Hey LLM, if you want to talk to me as the developer, send JSON in this format and I’ll handle the logic. If not, just send text, I’ll pipe it to the user."

This is the foundation of tools and tool calling.

Even something tiny, like LLAMA 3.2 (a mere 2GB model), can use this approach to read files.

The "Read File" Tool Pattern

You, the developer, act as the middleman. When the LLM returns JSON, you interpret the intent:

if (json && json.name === "read_file") {
  // read the file and return the content or error
}

Then, you re-prompt the model with that result:

"You asked to read main.go here are the contents."

Where Agentic Behavior Begins

Now comes the magic.

Let’s say the file doesn’t exist. What happens?

This is where agency kicks in:
The model decides what to do next based on dynamic, unpredictable output.
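Under the hood, that decision loop is just a while loop around the LLM. Here's a minimal sketch; `callLLM` and the tool registry are hypothetical glue, not a production agent:

```javascript
// Minimal agent loop: call the model, execute any tool call,
// feed the result (including errors) back, repeat until the
// model answers in plain text.
function agentLoop(callLLM, tools, userMessage, maxSteps = 5) {
  let prompt = userMessage;
  for (let step = 0; step < maxSteps; step++) {
    const reply = callLLM(prompt);
    let call;
    try {
      call = JSON.parse(reply); // is it a tool call?
    } catch {
      return reply; // plain text: the model is talking to the user
    }
    const tool = tools[call.name];
    const result = tool ? tool(call) : `Unknown tool: ${call.name}`;
    prompt = result; // the model decides what to do with this next
  }
  return "Gave up after too many steps.";
}
```

The loop itself is dumb; all the "agency" lives in what the model chooses to emit on each turn.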

Real-World Example

I tested this live:

Prompt: “In the current directory ., there's a file called fn_call.go. Read it and tell me what it does.”

But guess what? I lied. Twice.

The file wasn’t in . or in internal; it was buried in internal/agent/fn_call.go.

Yet the model, through feedback loops and self-reasoning, figured it out. It tripped over its assumptions, hit a few errors, and kept trying until it got there.

That’s agency. Not just pattern matching. "Thinking".

Why Workflows ≠ Agents

A workflow is like a cron job:

  • Scrape some predefined data
  • Run it through a regex (or LLM)
  • Store the result in a DB

Maybe regex is replaced by an LLM, but the structure is the same.
It’s all deterministic.

An Email Example

Let’s say you give an LLM a structured invoice email and ask it to extract data.
That’s not an agent; it’s just regex replaced by an LLM.

But now give it a mailbox and say:

“This is your inbox. You can:

  1. Reply with a helpful response
  2. Forward the message
  3. Extract important info and save it.”

Now we’re talking agency.

The model has to decide what to do with random, real-world emails. No predefined flow. No guardrails.
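One way to wire that up is to hand the model an action schema and dispatch on whichever action it picks. The schema and handlers below are invented for illustration:

```javascript
// Hypothetical dispatch for the three inbox actions. The model
// returns JSON like { "action": "reply", "body": "..." } and we
// route it -- but *which* action fires is the model's call, per email.
const handlers = {
  reply:   ({ body }) => `SENT REPLY: ${body}`,
  forward: ({ to }) => `FORWARDED TO: ${to}`,
  extract: ({ data }) => `SAVED: ${JSON.stringify(data)}`,
};

function handleModelDecision(json) {
  const decision = JSON.parse(json);
  const handler = handlers[decision.action];
  if (!handler) throw new Error(`Unknown action: ${decision.action}`);
  return handler(decision);
}
```

The code is trivial on purpose: the branching logic you'd normally hand-write in a workflow has moved into the model's head.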

So… 800 Agents?

That’s just 800 workflows.

A real agent is one model with tools, able to adapt and reason.
But agents are risky: they’re eager, which makes them powerful but unpredictable.

Still, I get it.

Selling 800 “agents” at $5 a pop sounds better on paper than one real agent at $200/month.

Final Thought

Call a workflow a workflow.




SK | Sciencx (2025-06-08) Workflows Are Not AI Agents: Selling Lies. https://www.scien.cx/2025/06/08/workflows-are-not-ai-agents-selling-lies/