How 24 Special Queries Optimized a Neural Network’s Recall Rate

This article explores query optimization and embedding space analysis in the IFRP-T2P model using the KITTI360Pose dataset. Testing with 16, 24, and 32 queries reveals that 24 offers the best localization recall. Additionally, IFRP-T2P outperforms Text2Loc by producing a more discriminative and informative text-cell embedding space, improving retrieval accuracy.


This content originally appeared on HackerNoon and was authored by Instancing

Abstract and 1. Introduction

  2. Related Work

  3. Method

    3.1 Overview of Our Method

    3.2 Coarse Text-cell Retrieval

    3.3 Fine Position Estimation

    3.4 Training Objectives

  4. Experiments

    4.1 Dataset Description and 4.2 Implementation Details

    4.3 Evaluation Criteria and 4.4 Results

  5. Performance Analysis

    5.1 Ablation Study

    5.2 Qualitative Analysis

    5.3 Text Embedding Analysis

  6. Conclusion and References

Supplementary Material

  1. Details of KITTI360Pose Dataset
  2. More Experiments on the Instance Query Extractor
  3. Text-Cell Embedding Space Analysis
  4. More Visualization Results
  5. Point Cloud Robustness Analysis


1 DETAILS OF KITTI360POSE DATASET

Figure 1: Visualization of the KITTI360Pose dataset. The trajectories of five training sets, three test sets, and one validation set are shown in the dashed borders. One colored point cloud scene and three cells are shown in the middle.

2 MORE EXPERIMENTS ON THE INSTANCE QUERY EXTRACTOR

We conduct an additional experiment to assess the impact of the number of queries on the performance of our instance query extractor. As detailed in Table 1, we evaluate the localization recall rate using 16, 24, and 32 queries. The results demonstrate that 24 queries yield the highest localization recall, i.e., 0.23/0.53/0.64 on the validation set and 0.22/0.47/0.58 on the test set. This finding suggests that 24 queries is the optimal number for maximizing the effectiveness of our model.

Table 1: Ablation study of the query number on the KITTI360Pose dataset.
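To make the role of the query-count hyperparameter concrete, here is a minimal sketch of how a fixed set of learnable instance queries can aggregate per-point features via cross-attention. The array shapes, feature dimension, and the single-head attention are illustrative assumptions, not the paper's actual architecture; in IFRP-T2P the queries would be trained jointly with the network, and the ablation simply re-trains with `num_queries` set to 16, 24, or 32.

```python
import numpy as np

def cross_attend(queries, features):
    """Single-head cross-attention: each query aggregates point features.

    queries:  (num_queries, d) learnable query embeddings
    features: (num_points, d)  per-point features from a backbone
    returns:  (num_queries, d) instance-level descriptors
    """
    d = queries.shape[1]
    scores = queries @ features.T / np.sqrt(d)            # (Q, N)
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)         # softmax over points
    return weights @ features                             # (Q, d)

rng = np.random.default_rng(0)
features = rng.normal(size=(512, 64))                     # 512 points, 64-d features

# The ablation varies only this hyperparameter:
for num_queries in (16, 24, 32):
    queries = rng.normal(size=(num_queries, 64))          # learned during training
    descriptors = cross_attend(queries, features)
    print(num_queries, descriptors.shape)
```

Too few queries risks merging distinct instances into one descriptor; too many dilutes each query's attention, which is one plausible reading of why 24 outperforms both 16 and 32 here.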

3 TEXT-CELL EMBEDDING SPACE ANALYSIS

Fig. 2 shows the aligned text-cell embedding space via t-SNE. Under the instance-free scenario, we compare our model with Text2Loc, which uses a pre-trained instance segmentation model, Mask3D, as a prior step. Text2Loc produces a less discriminative space in which positive cells lie relatively far from the text query feature. In contrast, our IFRP-T2P effectively reduces the distance between positive cell features and text query features, creating a more informative embedding space. This enhancement is critical for improving the accuracy of text-cell retrieval.

Figure 2: t-SNE visualization of the text features and cell features in the coarse stage.


:::info Authors:

(1) Lichao Wang, FNii, CUHKSZ (wanglichao1999@outlook.com);

(2) Zhihao Yuan, FNii and SSE, CUHKSZ (zhihaoyuan@link.cuhk.edu.cn);

(3) Jinke Ren, FNii and SSE, CUHKSZ (jinkeren@cuhk.edu.cn);

(4) Shuguang Cui, SSE and FNii, CUHKSZ (shuguangcui@cuhk.edu.cn);

(5) Zhen Li, Corresponding Author, SSE and FNii, CUHKSZ (lizhen@cuhk.edu.cn).

:::


:::info This paper is available on arxiv under CC BY-NC-ND 4.0 Deed (Attribution-Noncommercial-Noderivs 4.0 International) license.

:::





