Ego-Driven Design: How To Introduce Existential Crisis In Personality-based Agents. Posted November 27, 2025 by Lab42AI. Categories: ai-agents, ai-security, artificial-intelligence, jailbreaking, machine-learning, personality-based-agents, prompt-injection, wisc-ai
Jailbreaking iPhones in 2025: What Still Works and What Doesn't. Posted October 30, 2025 by v. Splicer. Categories: hacktoberfest, ios, jailbreaking, programming
Adaptive Attacks Expose SLM Vulnerabilities and Qualitative Insights. Posted February 6, 2025 by Phonology Technology. Categories: adversarial-attacks, black-box-attacks, jailbreaking, large-language-models, multimodal-models, robustness-countermeasures, speech-language-models, white-box-attacks
Transfer Attacks Reveal SLM Vulnerabilities and Effective Noise Defenses. Posted February 6, 2025 by Phonology Technology. Categories: adversarial-attacks, black-box-attacks, jailbreaking, large-language-models, multimodal-models, robustness-countermeasures, speech-language-models, white-box-attacks
Cross-Prompt Attacks and Data Ablations Impact SLM Robustness. Posted February 6, 2025 by Phonology Technology. Categories: adversarial-attacks, black-box-attacks, jailbreaking, large-language-models, multimodal-models, robustness-countermeasures, speech-language-models, white-box-attacks
Safety Alignment and Jailbreak Attacks Challenge Modern LLMs. Posted February 6, 2025 by Phonology Technology. Categories: adversarial-attacks, black-box-attacks, jailbreaking, large-language-models, multimodal-models, robustness-countermeasures, speech-language-models, white-box-attacks
Audio Encoder Pre-training and Evaluation Enhance SLM Safety. Posted February 6, 2025 by Phonology Technology. Categories: adversarial-attacks, black-box-attacks, jailbreaking, large-language-models, multimodal-models, robustness-countermeasures, speech-language-models, white-box-attacks
Integrated Speech Language Models Face Critical Safety Vulnerabilities. Posted February 6, 2025 by Phonology Technology. Categories: adversarial-attacks, black-box-attacks, jailbreaking, large-language-models, multimodal-models, robustness-countermeasures, speech-language-models, white-box-attacks
SpeechVerse Unites Audio Encoder and LLM for Superior Spoken QA. Posted February 6, 2025 by Phonology Technology. Categories: adversarial-attacks, black-box-attacks, jailbreaking, large-language-models, multimodal-models, robustness-countermeasures, speech-language-models, white-box-attacks
Unified Speech and Language Models Can Be Vulnerable to Adversarial Attacks. Posted February 6, 2025 by Phonology Technology. Categories: adversarial-attacks, black-box-attacks, jailbreaking, large-language-models, robustness-countermeasures, speech-language-models, spoken-question-answering, white-box-attacks
SLMs Outperform Competitors Yet Suffer Rapid Adversarial Jailbreaks. Posted February 6, 2025 by Phonology Technology. Categories: adversarial-attacks, black-box-attacks, jailbreaking, large-language-models, multimodal-models, robustness-countermeasures, speech-language-models, white-box-attacks
Adversarial Settings and Random Noise Reveal Speech LLM Vulnerabilities. Posted February 6, 2025 by Phonology Technology. Categories: adversarial-attacks, black-box-attacks, jailbreaking, large-language-models, multimodal-models, robustness-countermeasures, speech-language-models, white-box-attacks
Datasets and Evaluation Define the Robustness of Speech Language Models. Posted February 6, 2025 by Phonology Technology. Categories: adversarial-attacks, black-box-attacks, jailbreaking, large-language-models, multimodal-models, robustness-countermeasures, speech-language-models, white-box-attacks
Adversarial Attacks Challenge the Integrity of Speech Language Models. Posted February 6, 2025 by Phonology Technology. Categories: adversarial-attacks, black-box-attacks, jailbreaking, large-language-models, multimodal-models, robustness-countermeasures, speech-language-models, white-box-attacks
Increased LLM Vulnerabilities from Fine-tuning and Quantization: Appendix. Posted October 17, 2024 by Quantization. Categories: adversarial-attacks, alignment-training, fine-tuning, guardrails, jailbreaking, large-language-models-(llms), quantization, vulnerabilities
Increased LLM Vulnerabilities from Fine-tuning and Quantization: Conclusion and References. Posted October 17, 2024 by Quantization. Categories: adversarial-attacks, alignment-training, fine-tuning, guardrails, jailbreaking, large-language-models-(llms), quantization, vulnerabilities
Increased LLM Vulnerabilities from Fine-tuning and Quantization: Experiment Set-up & Results. Posted October 17, 2024 by Quantization. Categories: adversarial-attacks, alignment-training, fine-tuning, guardrails, jailbreaking, large-language-models-(llms), quantization, vulnerabilities