Comparing Efficiency Strategies for LLM Deployment and Summarizing PowerInfer-2's Impact
Post date: November 3, 2025 | Post author: Writings, Papers and Blogs on Text Models | Post categories: edge-computing, mobile-ai, model-optimization, neural-efficiency, on-device-llm, power-infer, quantization, speculative-decoding
PowerInfer-2 Achieves 29x Speedup, Running 47-Billion-Parameter LLMs on Smartphones
Post date: August 26, 2025 | Post author: Writings, Papers and Blogs on Text Models | Post categories: Edge AI, efficient-ai, heterogeneous-computing, mobile-ai, on-device-language-models, power-infer-2, system-for-ml
The HackerNoon Newsletter: How I Set Up a Cowrie Honeypot to Capture Real SSH Attacks (8/9/2025)
Post date: August 9, 2025 | Post author: Noonification | Post categories: ai, cowrie-honeypot, hackernoon-newsletter, immutable-backups, latest-tect-stories, mobile-ai, noonification, web3
Mobile AI with ONNX Runtime: How to Build Real-Time Noise Suppression That Works
Post date: August 3, 2025 | Post author: Sergey Drymchenko | Post categories: android-ai-sdk, dtln-noise-reduction, lightweight-ai-deployment, mobile-ai, mobile-ai-performance, on-device-ai, onnx-runtime, onnx-runtime-android