Adversarial Attacks on Large Language Models and Defense Mechanisms
By Prakash Velusamy, December 1, 2025. Categories: adversarial-ai, adversarial-attacks, ai-and-data-breaches, defense-mechanisms, llm-security, owasp, prompt-injection, user-preference-manipulation

Beyond Prompt Injection: 12 Novel AI Agent Attacks
By Mohit Sewak, Ph.D., November 16, 2025. Categories: agentic-ai, ai-agent-security, ai-vulnerabilities, large-language-models, llm-security

The Hidden Fragility of AI: Why Just 250 Poisoned Documents Can Twist an LLM’s Reality
By Muhammad Faisal Ishfaq, October 31, 2025. Categories: ai, llm, llm-poisoning, llm-security

Why Traditional Testing Breaks Down with AI
By Mend.io, October 21, 2025. Categories: ai-fuzzing, ai-safety, ai-testing, good-company, llm-security, ml-engineering, prompt-injection, red-teaming

The Illusion of Scale: Why LLMs Are Vulnerable to Data Poisoning, Regardless of Size
By Anthony Laneau, October 18, 2025. Categories: adversarial-machine-learning, ai-safety, backdoor-attacks, data-poisoning, enterprise-ai-security, generative-ai, hackernoon-top-story, llm-security

Future of AD Security: Addressing Limitations and Ethical Concerns in Typographic Attack Research
By Text Generation, October 1, 2025. Categories: ad-security, autonomous-cars, autonomous-driving, llm-security, traffic-safety, typographic-attacks, vision-language-models, vision-llms

Empirical Study: Evaluating Typographic Attack Effectiveness Against Vision-LLMs in AD Systems
By Text Generation, October 1, 2025. Categories: autonomous-cars, autonomous-driving-(ad), lingoqa, llava, llm-security, qwen-vl, vision-language-models, vision-llms

The Vulnerability of Autonomous Driving to Typographic Attacks: Transferability and Realizability
By Text Generation, September 30, 2025. Categories: autonomous-driving, closed-source-models, gradient-based-attacks, llm-security, llms, typographic-attacks, vision-language, vision-llms

Typographic Attacks on Vision-LLMs: Evaluating Adversarial Threats in Autonomous Driving Systems
By Text Generation, September 27, 2025. Categories: adversarial-attacks, autonomous, autonomous-driving, computer-vision, decision-making-autonomy, llm-security, typographic-attacks, vision-llms

The Prompt Protocol: Why Tomorrow’s Security Nightmares Will Be Whispered, Not Coded
By Igboanugo David Ugochukwu, July 14, 2025. Categories: adversarial-prompts, ai-governance, ai-risk-management, ai-security, ai-vulnerabilities, llm-prompt-hacking, llm-security, prompt-injection

How Large Language Models Impact Data Security in RAG Applications
By Aravind Nuthalapati, March 6, 2025. Categories: ai-data, ai-data-security, ai-governance, enterprise-ai-compliance, gdpr-and-ai, llm-security, rag-applications, secure-ai-deployment