Sovereign AI: The Why and How Behind National LLMs. Posted August 22, 2025 by Vik Bogdanov. Categories: ai, ai-and-society, ai-in-governance, digital-sovereignty, govtech, large-language-models-(llms), national-ai-strategy, sovereign-ai-infrastructure
Building the Unbreakable Contract: A Pipeline for AI-Powered Vulnerability Classification and Repair. Posted July 1, 2025 by Blockchainize Any Technology. Categories: automated-contract-repair, blockchain-security, gpt-3.5-turbo, large-language-models-(llms), llama-2-7b, randomforestclassifier, slither-detection, smart-contract-vulnerabilities
A Novel Pipeline for Classifying and Repairing Smart Contracts at Scale. Posted July 1, 2025 by Blockchainize Any Technology. Categories: automated-contract-repair, blockchain-security, gpt-3.5-turbo, large-language-models-(llms), llama-2-7b, randomforestclassifier, slither-detection, smart-contract-vulnerabilities
Primer on Large Language Model (LLM) Inference Optimizations: 2. Introduction to Artificial Intelligence (AI) Accelerators. Posted November 7, 2024 by Ravi Mandliya. Categories: ai, faster-llm-inference, hackernoon-top-story, large-language-models, large-language-models-(llms), llm-inference-on-gpus, llm-optimization, llms
Increased LLM Vulnerabilities from Fine-tuning and Quantization: Appendix. Posted October 17, 2024 by Quantization. Categories: adversarial-attacks, alignment-training, fine-tuning, guardrails, jailbreaking, large-language-models-(llms), quantization, vulnerabilities
Increased LLM Vulnerabilities from Fine-tuning and Quantization: Conclusion and References. Posted October 17, 2024 by Quantization. Categories: adversarial-attacks, alignment-training, fine-tuning, guardrails, jailbreaking, large-language-models-(llms), quantization, vulnerabilities
Increased LLM Vulnerabilities from Fine-tuning and Quantization: Experiment Set-up & Results. Posted October 17, 2024 by Quantization. Categories: adversarial-attacks, alignment-training, fine-tuning, guardrails, jailbreaking, large-language-models-(llms), quantization, vulnerabilities