Confronting Multimodal LLM Challenges: Reasoning Gaps and Safety Trade-offs in Phi-3-Vision

Explore the inherent challenges in Phi-3-Vision, from limitations in high-level reasoning and occasional ungrounded outputs to the complex trade-offs between helpfulness and harmlessness, particularly in safety-critical domains.


This content originally appeared on HackerNoon and was authored by Writings, Papers and Blogs on Text Models

Abstract and 1 Introduction

2 Technical Specifications

3 Academic benchmarks

4 Safety

5 Weakness

6 Phi-3-Vision

6.1 Technical Specifications

6.2 Academic benchmarks

6.3 Safety

6.4 Weakness

References

A Example prompt for benchmarks

B Authors (alphabetical)

C Acknowledgements

6.4 Weakness

Regarding its multi-modal LLM capabilities, our Phi-3-Vision performs admirably across various fields. However, we have identified certain limitations, particularly with questions that require high-level reasoning abilities. The model has also been observed to occasionally generate ungrounded outputs, making it potentially unreliable in sensitive domains such as finance. To mitigate these issues, we plan to incorporate more reasoning-focused and hallucination-related DPO data into post-training.
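To make the DPO-based mitigation more concrete, the sketch below shows the standard Direct Preference Optimization objective applied to a hallucination-focused preference pair, where a grounded answer is preferred over an ungrounded one. This is an illustrative implementation under common assumptions, not the paper's actual post-training pipeline; the function name `dpo_loss` and the toy log-probabilities are hypothetical.

```python
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps, policy_rejected_logps,
             ref_chosen_logps, ref_rejected_logps, beta=0.1):
    """Direct Preference Optimization loss over a batch of preference pairs.

    Each argument is the summed log-probability of a response under either the
    trainable policy or the frozen reference model. In a hallucination-focused
    setup, the 'chosen' response would be a grounded answer and the 'rejected'
    response an ungrounded (hallucinated) one.
    """
    # Implicit rewards: scaled log-ratio of policy to reference probabilities.
    chosen_rewards = beta * (policy_chosen_logps - ref_chosen_logps)
    rejected_rewards = beta * (policy_rejected_logps - ref_rejected_logps)
    # Maximize the margin between grounded and ungrounded responses.
    return -F.logsigmoid(chosen_rewards - rejected_rewards).mean()

# Toy example: two preference pairs with made-up log-probabilities.
loss = dpo_loss(
    policy_chosen_logps=torch.tensor([-12.3, -8.7]),
    policy_rejected_logps=torch.tensor([-11.9, -9.1]),
    ref_chosen_logps=torch.tensor([-12.5, -8.9]),
    ref_rejected_logps=torch.tensor([-11.5, -8.8]),
)
print(float(loss))
```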

From a responsible AI standpoint, while safety post-training has made significant strides, our Phi-3-Vision occasionally fails to refuse harmful or sensitive inquiries. Examples include deciphering certain types of captcha and describing scam images that contain disinformation or hallucinated content. We find that this issue partly arises from capabilities acquired during training on standard instruction-tuning datasets, such as OCR, which can be regarded as a trade-off between helpfulness and harmlessness. Moving forward, we need to explore this area further to achieve a better balance.

Table 3: Comparison results on public and private multi-modal RAI benchmarks. Note that all metrics in the table are on a [0, 10] scale, where a higher value indicates better performance.

Figure 7: Comparison of categorized RAI performance of Phi-3-Vision with and without safety post-training on the VLGuard (left) and Internal (right) benchmarks, respectively. It clearly indicates that safety post-training enhances RAI performance across nearly all RAI categories.


:::info Authors:

(1) Marah Abdin;

(2) Sam Ade Jacobs;

(3) Ammar Ahmad Awan;

(4) Jyoti Aneja;

(5) Ahmed Awadallah;

(6) Hany Awadalla;

(7) Nguyen Bach;

(8) Amit Bahree;

(9) Arash Bakhtiari;

(10) Jianmin Bao;

(11) Harkirat Behl;

(12) Alon Benhaim;

(13) Misha Bilenko;

(14) Johan Bjorck;

(15) Sébastien Bubeck;

(16) Qin Cai;

(17) Martin Cai;

(18) Caio César Teodoro Mendes;

(19) Weizhu Chen;

(20) Vishrav Chaudhary;

(21) Dong Chen;

(22) Dongdong Chen;

(23) Yen-Chun Chen;

(24) Yi-Ling Chen;

(25) Parul Chopra;

(26) Xiyang Dai;

(27) Allie Del Giorno;

(28) Gustavo de Rosa;

(29) Matthew Dixon;

(30) Ronen Eldan;

(31) Victor Fragoso;

(32) Dan Iter;

(33) Mei Gao; 

(34) Min Gao;

(35) Jianfeng Gao;

(36) Amit Garg;

(37) Abhishek Goswami;

(38) Suriya Gunasekar;

(39) Emman Haider;

(40) Junheng Hao;

(41) Russell J. Hewett;

(42) Jamie Huynh;

(43) Mojan Javaheripi;

(44) Xin Jin;

(45) Piero Kauffmann;

(46) Nikos Karampatziakis;

(47) Dongwoo Kim;

(48) Mahoud Khademi;

(49) Lev Kurilenko; 

(50) James R. Lee;

(51) Yin Tat Lee;

(52) Yuanzhi Li;

(53) Yunsheng Li;

(54) Chen Liang;

(55) Lars Liden;

(56) Ce Liu;

(57) Mengchen Liu;

(58) Weishung Liu;

(59) Eric Lin;

(60) Zeqi Lin;

(61) Chong Luo;

(62) Piyush Madan;

(63) Matt Mazzola;

(64) Arindam Mitra;

(65) Hardik Modi;

(66) Anh Nguyen;

(67) Brandon Norick;

(68) Barun Patra;

(69) Daniel Perez-Becker;

(70) Thomas Portet; 

(71) Reid Pryzant;

(72) Heyang Qin;

(73) Marko Radmilac;

(74) Corby Rosset;

(75) Sambudha Roy; 

(76) Olatunji Ruwase;

(77) Olli Saarikivi;

(78) Amin Saied;

(79) Adil Salim;

(80) Michael Santacroce;

(81) Shital Shah;

(82) Ning Shang;

(83) Hiteshi Sharma;

(84) Swadheen Shukla;

(85) Xia Song;

(86) Masahiro Tanaka;

(87) Andrea Tupini;

(88) Xin Wang;

(89) Lijuan Wang; 

(90) Chunyu Wang;

(91) Yu Wang;

(92) Rachel Ward;

(93) Guanhua Wang;

(94) Philipp Witte; 

(95) Haiping Wu; 

(96) Michael Wyatt; 

(97) Bin Xiao;

(98) Can Xu; 

(99) Jiahang Xu; 

(100) Weijian Xu; 

(101) Sonali Yadav; 

(102) Fan Yang; 

(103) Jianwei Yang;

(104) Ziyi Yang;

(105) Yifan Yang; 

(106) Donghan Yu;

(107) Lu Yuan;

(108) Chengruidong Zhang; 

(109) Cyril Zhang; 

(110) Jianwen Zhang;

(111) Li Lyna Zhang;

(112) Yi Zhang;

(113) Yue Zhang;

(114) Yunan Zhang;

(115) Xiren Zhou.

:::


:::info This paper is available on arxiv under CC BY 4.0 DEED license.

:::
