The Fine Print of Misbehavior: VRP’s Blueprint and Safety Stance
Date: August 11, 2025 | Author: Large Models (dot tech)
Categories: adversarial-ai-research, ai-evaluation, ai-model-security, ethical-ai-attacks, mllm-jailbreak, role-play-attack, text-moderation, vrp-methodology
One Image to Rule Them All: The Jailbreak That Outsmarts Multimodal AI
Date: August 11, 2025 | Author: Large Models (dot tech)
Categories: adversarial-ai, ai-alignment-bypass, ai-model-security, future-ai-research, mllm-jailbreak, role-play-attack, universal-jailbreak, visual-role-play
Introducing VRP: Structure-Based Role-Play Attacks on Multimodal Large Language Models
Date: August 11, 2025 | Author: Large Models (dot tech)
Categories: adversarial-ai, ai-misuse-prevention, ai-model-security, mllm-jailbreak, multimodal-ai, role-play-attack, universal-jailbreak, visual-role-play
AI Can Outsmart You, and Cybercriminals Know It
Date: February 19, 2025 | Author: Aditya Visweswaran
Categories: adversarial-ai, ai-and-cybersecurity, ai-model-security, ai-powered-phishing, cybersecurity-awareness, data-poisoning-attacks, hyper-personalized-phishing, phishing-and-malware
Adversarial Machine Learning Is Preventing Bad Actors From Compromising AI Models
Date: January 6, 2025 | Author: Praise James
Categories: adversarial-attacks, adversarial-machine-learning, ai-adversarial-attacks, ai-attacks, ai-model-security, black-box-ai-attack, machine-learning, what-is-aml
Security Concerns Are a Top Barrier to AI Implementation
Date: August 6, 2021 | Author: Modzy
Categories: ai, ai-model-security, data-scanning-solutions, enhances-backpropagation, good-company, modelops, modzy, security