Teaching AI to See and Speak: Inside the OW‑VISCap Approach

This article outlines the OW‑VISCap framework, which jointly detects, segments, and captions both seen and unseen objects within a video.


This content originally appeared on HackerNoon and was authored by Instancing

Abstract and 1. Introduction

  2. Related Work

    2.1 Open-world Video Instance Segmentation

    2.2 Dense Video Object Captioning and 2.3 Contrastive Loss for Object Queries

    2.4 Generalized Video Understanding and 2.5 Closed-World Video Instance Segmentation

  3. Approach

    3.1 Overview

    3.2 Open-World Object Queries

    3.3 Captioning Head

    3.4 Inter-Query Contrastive Loss and 3.5 Training

  4. Experiments and 4.1 Datasets and Evaluation Metrics

    4.2 Main Results

    4.3 Ablation Studies and 4.4 Qualitative Results

  5. Conclusion, Acknowledgements, and References

Supplementary Material

A. Additional Analysis

B. Implementation Details

C. Limitations

3 Approach

Given a video, our goal is to jointly detect, segment, and caption the object instances present in it. Importantly, the object instance categories may not be part of the training set (e.g., the parachutes shown in Fig. 3, top row), placing our goal in an open-world setting. To achieve this, a given video is first broken into short clips, each consisting of T frames, and each clip is processed with our approach, OW-VISCap. We discuss how the per-clip results are merged in Sec. 4.
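As a concrete illustration of this clip-based processing, below is a minimal sketch of splitting a video into clips of T frames before each clip is handed to the model; the function name and tensor layout are assumptions for illustration, not the paper's code.

```python
import torch

def split_into_clips(frames: torch.Tensor, clip_len: int) -> list[torch.Tensor]:
    """Split a video tensor of shape (num_frames, C, H, W) into consecutive
    clips of `clip_len` frames each; the last clip may be shorter."""
    return [frames[i:i + clip_len] for i in range(0, frames.shape[0], clip_len)]

# Hypothetical usage: a 30-frame video split into clips of T = 5 frames,
# each of which would then be processed independently before merging.
video = torch.randn(30, 3, 480, 640)
clips = split_into_clips(video, clip_len=5)
assert len(clips) == 6 and clips[0].shape[0] == 5
```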

We provide an overview of how OW-VISCap processes each clip in Sec. 3.1. We then discuss our contributions: (a) the introduction of open-world object queries in Sec. 3.2, (b) the use of masked attention for object-centric captioning in Sec. 3.3, and (c) the use of an inter-query contrastive loss to ensure that the object queries differ from each other in Sec. 3.4. In Sec. 3.5, we discuss the final training objective.

3.1 Overview

Both open- and closed-world object queries are processed by our specifically designed captioning head, which yields an object-centric caption; a classification head, which yields a category label; and a detection head, which yields either a segmentation mask or a bounding box.
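A minimal sketch of how each object query might be routed through the three heads is shown below; the layer sizes, the dot-product mask head, and the single linear layer standing in for a caption decoder are assumptions for illustration, not the paper's architecture.

```python
import torch
import torch.nn as nn

class PredictionHeads(nn.Module):
    """Illustrative heads applied to every object query embedding."""
    def __init__(self, query_dim=256, num_classes=80, mask_dim=256, vocab_size=30522):
        super().__init__()
        self.class_head = nn.Linear(query_dim, num_classes + 1)  # +1 for "no object"
        self.mask_embed = nn.Linear(query_dim, mask_dim)          # combined with pixel features below
        self.caption_head = nn.Linear(query_dim, vocab_size)      # stand-in for a real text decoder

    def forward(self, queries, pixel_features):
        # queries: (Q, query_dim); pixel_features: (mask_dim, H, W)
        class_logits = self.class_head(queries)                   # (Q, num_classes + 1)
        mask_logits = torch.einsum("qc,chw->qhw",
                                   self.mask_embed(queries), pixel_features)  # (Q, H, W)
        caption_logits = self.caption_head(queries)               # (Q, vocab_size)
        return class_logits, mask_logits, caption_logits

# Hypothetical usage with 10 queries and a 64x64 feature map.
heads = PredictionHeads()
cls, masks, caps = heads(torch.randn(10, 256), torch.randn(256, 64, 64))
```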


We introduce an inter-query contrastive loss to encourage the object queries to differ from each other; we provide details in Sec. 3.4. For closed-world objects, this loss helps remove highly overlapping false positives. For open-world objects, it helps in the discovery of new objects.

Finally, we provide the full training objective in Sec. 3.5.


3.2 Open-World Object Queries


We first match the ground-truth objects with the open-world predictions by minimizing a matching cost using the Hungarian algorithm [34]. The optimal matching is then used to calculate the final open-world loss.
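The matching itself can be carried out with a standard assignment solver. A minimal sketch using SciPy's `linear_sum_assignment` is shown below; the actual matching cost (and the open-world loss computed on the matched pairs) is not reproduced here, and the cost matrix is a hypothetical placeholder.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def hungarian_match(cost_matrix: np.ndarray):
    """Match ground-truth objects (rows) to open-world predictions (columns)
    by minimizing the total matching cost."""
    gt_idx, pred_idx = linear_sum_assignment(cost_matrix)
    return list(zip(gt_idx.tolist(), pred_idx.tolist()))

# Hypothetical cost matrix: 3 ground-truth objects vs. 5 open-world predictions.
cost = np.random.rand(3, 5)
matches = hungarian_match(cost)  # e.g., [(0, 2), (1, 0), (2, 4)]
```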


3.3 Captioning Head
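The overview above notes that the captioning head relies on masked attention so that the generated caption focuses on the object of interest rather than the whole frame. Below is a minimal sketch of masked cross-attention under that idea; the shapes, the single-query formulation, and the use of a binary object mask are assumptions for illustration, not the paper's exact head.

```python
import torch
import torch.nn.functional as F

def masked_cross_attention(query, keys, values, object_mask):
    """Object-centric cross-attention: locations outside the (non-empty) object
    mask are suppressed, so the attended feature describes only that object.
    query: (D,), keys/values: (N, D), object_mask: (N,) with values in {0, 1}."""
    scores = keys @ query / keys.shape[-1] ** 0.5            # (N,) scaled dot-product scores
    scores = scores.masked_fill(object_mask < 0.5, float("-inf"))
    attn = F.softmax(scores, dim=0)                          # attention restricted to the mask
    return attn @ values                                     # (D,) object-focused context vector

# Hypothetical usage: 16 spatial locations, feature dimension 8.
q, K, V = torch.randn(8), torch.randn(16, 8), torch.randn(16, 8)
mask = torch.zeros(16); mask[3:9] = 1.0                      # object occupies locations 3..8
context = masked_cross_attention(q, K, V, mask)
```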


3.4 Inter-Query Contrastive Loss
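As described in the overview, this loss encourages the object queries to differ from one another. A minimal InfoNCE-style sketch over the query embeddings is given below; treating each query as its own positive and every other query as a negative, as well as the temperature value, are assumptions for illustration, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def inter_query_contrastive_loss(queries: torch.Tensor, temperature: float = 0.1):
    """Push distinct object queries apart.
    queries: (Q, D) query embeddings. Each query acts as its own positive and
    all other queries as negatives, so minimizing the loss lowers the
    pairwise similarity between different queries."""
    q = F.normalize(queries, dim=-1)                        # unit-norm embeddings
    sim = q @ q.t() / temperature                           # (Q, Q) cosine-similarity logits
    labels = torch.arange(q.shape[0], device=q.device)      # diagonal entries are the positives
    return F.cross_entropy(sim, labels)

# Hypothetical usage with 10 queries of dimension 256.
loss = inter_query_contrastive_loss(torch.randn(10, 256))
```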


3.5 Training

Our total training loss combines the closed-world loss, the open-world loss (Sec. 3.2), the captioning loss (Sec. 3.3), and the inter-query contrastive loss (Sec. 3.4).
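As a notational sketch only (the loss symbols and weights λ below are placeholders, not the paper's definitions), the objective can be written as a weighted sum of the component losses introduced above:

```latex
\mathcal{L}_{\text{total}}
  = \mathcal{L}_{\text{cw}}
  + \lambda_{\text{ow}}\,\mathcal{L}_{\text{ow}}
  + \lambda_{\text{cap}}\,\mathcal{L}_{\text{cap}}
  + \lambda_{\text{con}}\,\mathcal{L}_{\text{con}}
```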

Table 1: Open-world tracking accuracy (OWTA) on the BURST validation and test sets for all, common (comm.), and uncommon (unc.) categories of objects. Onl. refers to online frame-by-frame processing. The best scores are highlighted in bold font, and the second-best scores are underlined.

Table 2: Dense video object captioning results on the VidSTG [57] dataset. Off. indicates offline methods and onl. refers to online methods.


:::info Authors:

(1) Anwesa Choudhuri, University of Illinois at Urbana-Champaign (anwesac2@illinois.edu);

(2) Girish Chowdhary, University of Illinois at Urbana-Champaign (girishc@illinois.edu);

(3) Alexander G. Schwing, University of Illinois at Urbana-Champaign (aschwing@illinois.edu).

:::


:::info This paper is available on arXiv under the CC BY 4.0 Deed (Attribution 4.0 International) license.

:::
