Your AI Agents Deserve the Same Ops Treatment as Your Microservices

This content originally appeared on DEV Community and was authored by rdmnl

A few months ago I was looking at how our team was actually running AI agents in production. One was a Python script in a tmux session on someone's laptop. Another was a cron job with no timeout. A third had no cost limits — it had quietly burned through $800 in API calls over a weekend because it got stuck in a loop.

None of this would fly for a microservice. We'd never ship a service with no health checks, no resource limits, and no way to roll back a bad deploy. But agents were getting a free pass because they felt different somehow. They're AI, not "real" infrastructure.

I don't think that's a good enough reason.

The thing is, agents are just workloads

Strip away the LLM part and an agent is a long-running process that consumes resources, has a health state, needs to scale, and requires configuration management. That's just a service. Kubernetes already knows how to manage services.

The missing piece was a way to tell Kubernetes what an agent is — not in terms of CPU and memory, but in terms of model, system prompt, and tool access.

So I built a Kubernetes operator that does exactly that: agentops-operator.

What it actually looks like

You define an agent the same way you define a Deployment:

apiVersion: agentops.agentops.io/v1alpha1
kind: AgentDeployment
metadata:
  name: research-agent
spec:
  replicas: 3
  model: claude-sonnet-4-20250514
  systemPrompt: |
    You are a research agent. Gather and summarise information
    accurately. Always cite your sources.
  limits:
    maxTokensPerCall: 8000
    maxConcurrentTasks: 5
    timeoutSeconds: 120
Apply it and check the status:

kubectl apply -f research-agent.yaml
kubectl get agdep
# NAME             MODEL                      REPLICAS   READY   AGE
# research-agent   claude-sonnet-4-20250514   3          3       45s

Three agent pods, managed by Kubernetes. Scale to 10:

kubectl patch agdep research-agent --type=merge -p '{"spec":{"replicas":10}}'

GitOps, RBAC, namespaces, kubectl... all of it works without modification because agents are just Kubernetes resources now.
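RBAC is a good example of what you get for free. A sketch of a namespaced Role that lets a team manage agents and nothing else (the resource names here are my assumptions based on the apiVersion above — verify them against the installed CRDs):

```yaml
# Sketch: let the ml-team namespace manage agents, nothing else.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: agent-editor
  namespace: ml-team
rules:
  - apiGroups: ["agentops.agentops.io"]
    resources: ["agentdeployments", "agentpipelines"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
```

Bind that to a group and your ML team can ship agents without being able to touch anything else in the cluster.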

The part I'm most proud of: semantic health checks

Standard liveness probes check if a process is responding to HTTP. That's fine for a web server. For an LLM, a process can be "alive" while producing complete nonsense.

agentops-operator adds a semantic probe type — a secondary LLM call that validates whether the agent is actually working:

livenessProbe:
  type: semantic
  intervalSeconds: 60
  validatorPrompt: "Reply with exactly one word: HEALTHY"

If the agent fails that check, the pod gets pulled from routing until it recovers. Same as any other failing health check, except the health check understands what "healthy" actually means for an LLM.
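The validator logic itself is simple to sketch. Here's an illustrative Python version — the operator is written in Go, and `call_agent`, the prompt, and the expected token are assumptions for the sketch, not the operator's actual API:

```python
# Illustrative sketch of a semantic liveness probe: send a validator prompt
# through the agent and require an exact expected reply.

def semantic_probe(call_agent,
                   validator_prompt="Reply with exactly one word: HEALTHY",
                   expected="HEALTHY"):
    """Return True only if the agent's reply is exactly the expected token
    (ignoring surrounding whitespace and case)."""
    try:
        reply = call_agent(validator_prompt)
    except Exception:
        return False  # a crashed call is as unhealthy as a wrong answer
    return reply.strip().upper() == expected

# A healthy agent answers with the exact token; a rambling one fails.
healthy = semantic_probe(lambda p: "HEALTHY\n")
confused = semantic_probe(lambda p: "As an AI language model, I feel HEALTHY today!")
```

The exact-match requirement is the point: an agent that can't follow a one-word instruction is an agent you don't want routing traffic to.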

Token limits that you can't accidentally delete

This was the $800 problem. The fix: limits live in the infrastructure, not in application code.

limits:
  maxTokensPerCall: 8000
  maxConcurrentTasks: 5
  timeoutSeconds: 120

The operator injects these as environment variables into every agent pod it creates. A developer can't remove them by editing the wrong file. A misconfigured prompt can't cause an infinite loop that runs until your credit card gets declined.
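On the agent side, enforcement is just reading the injected environment and clamping requests against it. A sketch (the env var names here are my assumptions, not the operator's documented names):

```python
# Illustrative sketch: an agent process reads operator-injected limits from
# its environment and can never request more than the ceiling allows.
import os

def load_limits(env=os.environ):
    return {
        "max_tokens_per_call": int(env.get("AGENT_MAX_TOKENS_PER_CALL", "8000")),
        "max_concurrent_tasks": int(env.get("AGENT_MAX_CONCURRENT_TASKS", "5")),
        "timeout_seconds": int(env.get("AGENT_TIMEOUT_SECONDS", "120")),
    }

def clamp_max_tokens(requested, limits):
    # The agent may ask for fewer tokens, never more than the injected ceiling.
    return min(requested, limits["max_tokens_per_call"])

limits = load_limits({"AGENT_MAX_TOKENS_PER_CALL": "8000"})
```

Because the ceiling comes from the pod spec rather than application code, changing it means changing the AgentDeployment — which goes through review like any other manifest change.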

Rolling back a bad system prompt

Change a prompt → open a PR → merge → kubectl apply. Roll back → git revert → kubectl apply. The full history of who changed what prompt and when is in git, same as any other infrastructure change.

This sounds obvious until you've had to figure out why an agent started behaving differently last Tuesday and nobody can remember what changed.

Multi-agent pipelines without the glue code

This is the one that surprised me the most when it actually worked. You can chain agents together declaratively:

apiVersion: agentops.agentops.io/v1alpha1
kind: AgentPipeline
metadata:
  name: research-then-summarize
spec:
  input:
    topic: "AI in healthcare"
  steps:
    - name: research
      agentDeployment: research-agent
      inputs:
        prompt: "Research this topic: {{ .pipeline.input.topic }}"
    - name: summarize
      agentDeployment: summarizer-agent
      dependsOn: [research]
      inputs:
        prompt: "Summarize these findings: {{ .steps.research.output }}"
  output: "{{ .steps.summarize.output }}"

The operator handles the queue, waits for each step to complete, passes the output to the next step, and updates the pipeline status. I tested this locally and watched kubectl get agpipe -w go from Running to Succeeded while two separate LLMs did their thing. It's a bit surreal.
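The data flow between steps is easy to sketch: each {{ ... }} placeholder is resolved against a context that accumulates step outputs. This Python version is illustrative only — the real operator is Go and presumably uses Go's template engine:

```python
# Illustrative sketch of resolving pipeline placeholders like
# {{ .steps.research.output }} against a nested context.
import re

def render(template, ctx):
    """Replace each {{ .path.to.value }} with the value found in ctx."""
    def lookup(match):
        node = ctx
        for key in match.group(1).split("."):
            node = node[key]
        return str(node)
    return re.sub(r"\{\{\s*\.([\w.]+)\s*\}\}", lookup, template)

ctx = {"pipeline": {"input": {"topic": "AI in healthcare"}}}
prompt = render("Research this topic: {{ .pipeline.input.topic }}", ctx)
```

After the research step completes, its output would be added under `ctx["steps"]["research"]["output"]` so the summarize step's prompt can reference it the same way.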

Try it

Prerequisites: Docker, kind, kubectl, Go 1.25+

git clone https://github.com/agentops-io/agentops-operator.git
cd agentops-operator
make dev ANTHROPIC_API_KEY=sk-ant-...

That one command creates a kind cluster, builds both Docker images, deploys Redis and the operator inside the cluster, and sets up the API key secret. When it finishes, deploy an agent:

kubectl apply -f config/samples/agentops_v1alpha1_agentdeployment.yaml
kubectl get agdep -w

To see the LLM actually respond, submit a task:

kubectl exec -it -n agent-infra redis-0 -- \
  redis-cli XADD agent-tasks '*' prompt "What is the capital of France? One sentence."

kubectl exec -it -n agent-infra redis-0 -- \
  redis-cli XREAD COUNT 10 STREAMS agent-tasks-results 0
# "The capital of France is Paris."

Honest caveats

This is v0.0.1, single contributor, early alpha. Some things that aren't done yet:

  • Parallel pipeline steps: right now steps run sequentially even if they don't depend on each other
  • KEDA autoscaling on queue depth: CPU is the wrong scaling signal for agent workloads and queue depth is the right one, but it's not implemented yet
  • Only Anthropic for now: the provider interface is there for OpenAI/Gemini but no implementations yet
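For the autoscaling gap, a KEDA redis-streams ScaledObject is the shape I have in mind. This is a sketch of what the wiring could look like once implemented, not something that works today (the target name, consumer group, and threshold are all assumptions):

```yaml
# Sketch: scale agent replicas on Redis stream backlog via KEDA.
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: research-agent-scaler
spec:
  scaleTargetRef:
    name: research-agent        # the Deployment the operator creates
  triggers:
    - type: redis-streams
      metadata:
        address: redis.agent-infra.svc:6379
        stream: agent-tasks
        consumerGroup: agents
        pendingEntriesCount: "10"   # target backlog per replica
```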

I'm writing this because I think the problem is real and worth solving, not because the solution is finished.

If any of this sounds familiar (agents running in tmux sessions, no cost controls, prompt changes deployed by SSHing into a box), I'd genuinely like to hear how you're handling it. And if you want to contribute, CONTRIBUTING.md has the setup.

GitHub: agentops-io/agentops-operator
Docs: agentops-io.com

