RECKONING: Reasoning through Dynamic Knowledge Encoding: Generalization to Real-World Knowledge
Posted October 24, 2025 by The Tech Reckoning is Upon Us!
Categories: few-shot-learning, folio-dataset, gpt-3.5, llm, llm-generalization, multi-hop-reasoning, real-world-reasoning, reckoning-algorithm
Smarter AI Training with Few-Shot Natural Language Tasks
Posted October 2, 2025 by Model Tuning
Categories: adamix, efficient-ai-training, few-shot-learning, low-rank-adaptation, mixture-of-experts-ai, model-weight-averaging, pre-trained-language-models, stochastic-routing

Beating Full Fine-Tuning with Just 0.2% of Parameters
Posted October 2, 2025 by Model Tuning
Categories: adamix, efficient-ai-training, few-shot-learning, low-rank-adaptation, mixture-of-experts-ai, model-weight-averaging, pre-trained-language-models, stochastic-routing

The Role of Consistency and Sharing in Efficient Fine-Tuning
Posted October 1, 2025 by Model Tuning
Categories: adamix, efficient-ai-training, few-shot-learning, low-rank-adaptation, mixture-of-experts-ai, model-weight-averaging, pre-trained-language-models, stochastic-routing

Smarter Fine-Tuning for NLU and NLG Tasks
Posted October 1, 2025 by Model Tuning
Categories: adamix, efficient-ai-training, few-shot-learning, low-rank-adaptation, mixture-of-experts-ai, model-weight-averaging, pre-trained-language-models, stochastic-routing

How Mixture-of-Adaptations Makes Language Model Fine-Tuning Cheaper and Smarter
Posted October 1, 2025 by Model Tuning
Categories: adamix, efficient-ai-training, few-shot-learning, low-rank-adaptation, mixture-of-experts-ai, model-weight-averaging, pre-trained-language-models, stochastic-routing

How to Improve AI Models While Training Only 0.1% of Parameters
Posted October 1, 2025 by Model Tuning
Categories: adamix, efficient-ai-training, few-shot-learning, low-rank-adaptation, mixture-of-experts-ai, model-weight-averaging, pre-trained-language-models, stochastic-routing
The Future of Remote Sensing: Few-Shot Learning and Explainable AI
Posted June 12, 2025 by Obfuscation
Categories: disaster-classification, disaster-scene-classification, few-shot-learning, few-shot-learning-for-uav-data, remote-sensing-data, satellite-based-remote-sensing, unmanned-aerial-vehicles, xai-in-remote-sensing
Behind the Scenes: The Prompts and Tricks That Made Many-Shot ICL Work
Posted June 2, 2025 by The FewShot Prompting Publication
Categories: few-shot-learning, gemini-1.5-pro, gpt-4o, image-classification, large-language-models, many-shot-in-context-learning, model-data-efficiency, multimodal-foundation-models

Scientists Just Found a Way to Skip AI Training Entirely. Here’s How
Posted June 2, 2025 by The FewShot Prompting Publication
Categories: few-shot-learning, gemini-1.5-pro, gpt-4o, image-classification, large-language-models, many-shot-in-context-learning, model-data-efficiency, multimodal-foundation-models

How Many Examples Does AI Really Need? New Research Reveals Surprising Scaling Laws
Posted June 2, 2025 by The FewShot Prompting Publication
Categories: few-shot-learning, gemini-1.5-pro, gpt-4o, image-classification, large-language-models, many-shot-in-context-learning, model-data-efficiency, multimodal-foundation-models

The Science Behind Many-Shot Learning: Testing AI Across 10 Different Vision Domains
Posted June 2, 2025 by The FewShot Prompting Publication
Categories: few-shot-learning, gemini-1.5-pro, gpt-4o, image-classification, large-language-models, many-shot-in-context-learning, model-data-efficiency, multimodal-foundation-models
Med-Flamingo: a Multimodal Medical Few-shot Learner – Appendix
Posted June 19, 2024 by The FewShot Prompting Publication
Categories: clinical-applications, few-shot-learning, generative-vqa, medical-ai, medical-informatics, multimodal-learning, usmle-evaluation, vision-language-models

Med-Flamingo: a Multimodal Medical Few-shot Learner – Discussion, Acknowledgments, and References
Posted June 19, 2024 by The FewShot Prompting Publication
Categories: clinical-applications, few-shot-learning, generative-vqa, medical-ai, medical-informatics, multimodal-learning, usmle-evaluation, vision-language-models

Med-Flamingo: a Multimodal Medical Few-shot Learner – Results
Posted June 19, 2024 by The FewShot Prompting Publication
Categories: clinical-applications, few-shot-learning, generative-vqa, medical-ai, medical-informatics, multimodal-learning, usmle-evaluation, vision-language-models

Med-Flamingo: a Multimodal Medical Few-shot Learner – Evaluation
Posted June 19, 2024 by The FewShot Prompting Publication
Categories: clinical-applications, few-shot-learning, generative-vqa, medical-ai, medical-informatics, multimodal-learning, usmle-evaluation, vision-language-models

Med-Flamingo: a Multimodal Medical Few-shot Learner – Med-Flamingo
Posted June 19, 2024 by The FewShot Prompting Publication
Categories: clinical-applications, few-shot-learning, generative-vqa, medical-ai, medical-informatics, multimodal-learning, usmle-evaluation, vision-language-models

Med-Flamingo: a Multimodal Medical Few-shot Learner – Related Works
Posted June 19, 2024 by The FewShot Prompting Publication
Categories: clinical-applications, few-shot-learning, generative-vqa, medical-ai, medical-informatics, multimodal-learning, usmle-evaluation, vision-language-models