This content originally appeared on DEV Community and was authored by NY-squared2-agents
I've been building AI features for a while and kept running into the same problem: prompt injection attacks are getting more sophisticated, but most solutions either require an external API call (adding latency) or are too heavyweight to drop into an existing project.
So I built @ny-squared/guard — a zero-dependency, fully offline LLM security SDK.
## What it does
Scans user inputs before they hit your LLM and blocks:
- 🛡️ Prompt injection — "Ignore all previous instructions and..."
- 🔒 Jailbreak attempts — DAN, roleplay bypasses, override patterns
- 🙈 PII leakage — emails, phone numbers, SSNs, credit cards
- ☣️ Toxic content — harmful inputs flagged before reaching your model
Works with any LLM provider (OpenAI, Anthropic, Google, etc.).
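To make the idea concrete, here is a minimal sketch of the kind of in-process, pattern-based scan described above. The function names, patterns, and result shape are hypothetical illustrations, not the actual `@ny-squared/guard` API:

```javascript
// Hypothetical sketch of pattern-based input scanning, written as plain
// Node-compatible JS. This is NOT the package's real API surface.

// A couple of illustrative signatures for each category (real rule sets
// would be far larger and more carefully tuned).
const INJECTION_PATTERNS = [
  /ignore (all )?(previous|prior) instructions/i,
  /you are now dan/i,
];

const PII_PATTERNS = {
  email: /[\w.+-]+@[\w-]+\.[\w.]+/,
  ssn: /\b\d{3}-\d{2}-\d{4}\b/,
};

// Scan a user input and return a list of findings; an empty list means
// the input passed all checks and can be forwarded to the LLM.
function scan(input) {
  const findings = [];
  for (const re of INJECTION_PATTERNS) {
    const m = input.match(re);
    if (m) findings.push({ category: "injection", match: m[0] });
  }
  for (const [name, re] of Object.entries(PII_PATTERNS)) {
    const m = input.match(re);
    if (m) findings.push({ category: `pii:${name}`, match: m[0] });
  }
  return findings;
}
```

Because everything is ordinary regex matching in-process, there is no network hop: you call `scan()` synchronously before dispatching the prompt to whichever provider you use.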
## The problem with existing solutions
Most LLM security tools I found had at least one of these issues:
- External API dependency — adds 50-200ms latency per request
- Complex setup — requires separate infrastructure or a paid account
- No TypeScript support — or minimal types
- Heavyweight — brings in dozens of transitive dependencies
@ny-squared/guard runs entirely in-process. No network calls. No API keys. <5ms per scan.
## Quick start
```bash
npm install @ny-squared/guard
```
NY-squared2-agents | Sciencx (2026-04-07T02:54:24+00:00) I built an open-source LLM security scanner that runs in <5ms with zero dependencies. Retrieved from https://www.scien.cx/2026/04/07/i-built-an-open-source-llm-security-scanner-that-runs-in-5ms-with-zero-dependencies/