A2VIS: Amodal-aware Approach to Video Instance Segmentation

Minh Tran
University of Arkansas
Thang Pham
University of Arkansas
Winston Bounsavy
University of Arkansas
Tri Nguyen
Coupang, Inc.
Ngan Le
University of Arkansas

Preprint

Paper | Code
Comparison between existing VIS methods and the proposed A2VIS. By integrating amodal knowledge, A2VIS perceives the complete trajectory and shape of a target. This contrasts with other VIS methods, which do not predict occluded parts and are therefore inherently susceptible to losing track of the target.

Abstract

Handling occlusion remains a significant challenge for video instance-level tasks such as Multiple Object Tracking (MOT) and Video Instance Segmentation (VIS). In this paper, we propose a novel framework, Amodal-Aware Video Instance Segmentation (A2VIS), which incorporates amodal representations to achieve a reliable and comprehensive understanding of both the visible and occluded parts of objects in a video. The key intuition is that awareness of amodal segmentation across the spatiotemporal dimension enables a stable stream of object information. In scenarios where objects are partially or completely hidden from view, amodal segmentation is more consistent and changes less dramatically along the temporal axis than visible segmentation. Hence, both amodal and visible information from all clips can be integrated into one global instance prototype. To effectively address the challenge of video amodal segmentation, we introduce the Spatiotemporal-prior Amodal Mask Head (SAMH), which leverages visible information within each clip (intra-clip) while extracting amodal characteristics across clips (inter-clip). Through extensive experiments and ablation studies, we show that A2VIS excels at both MOT and VIS, identifying and tracking object instances with a keen understanding of their complete shapes.


Overview of A2VIS




Overall architecture of the proposed A2VIS (IP denotes instance prototypes in this figure). In each clip, the IP Modelling module generates clip-based IPs, which are then integrated into the global IPs through the IP Update module. The updated global IPs are used to produce both the visible and the amodal segmentations.
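Conceptually, this clip-by-clip flow can be sketched in a few lines of PyTorch-style Python. The sketch below is a minimal illustration, not the released implementation: IPUpdate, ip_modelling, visible_head, and amodal_head are hypothetical stand-ins, and the attention-based fusion is only an assumption about how the IP Update module might combine clip-based and global prototypes.

import torch
import torch.nn as nn

class IPUpdate(nn.Module):
    """Hypothetical fusion of clip-based IPs into the global IPs.

    A cross-attention update with a residual connection is assumed here;
    the actual mechanism is defined in the paper and released code.
    """
    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, global_ip: torch.Tensor, clip_ip: torch.Tensor) -> torch.Tensor:
        # Global IPs (B, N, C) attend to the current clip's IPs (B, N, C);
        # the residual keeps accumulated object identity stable over time.
        updated, _ = self.attn(global_ip, clip_ip, clip_ip)
        return self.norm(global_ip + updated)

def process_video(clips, ip_modelling, ip_update, visible_head, amodal_head, global_ip):
    """Clip-wise flow from the architecture figure:
    clip-based IPs -> IP Update -> global IPs -> visible + amodal masks."""
    outputs = []
    for clip_feats in clips:
        clip_ip = ip_modelling(clip_feats)          # clip-based instance prototypes
        global_ip = ip_update(global_ip, clip_ip)   # integrate into global prototypes
        visible = visible_head(global_ip, clip_feats)
        amodal = amodal_head(global_ip, visible, clip_feats)
        outputs.append((visible, amodal))
    return outputs, global_ip

A caller would initialize global_ip before the first clip (e.g., from learnable queries) and carry the returned prototypes across the whole video, which is what allows information from all clips to accumulate in one set of global IPs.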



Spatiotemporal-Prior Amodal Mask Head (SAMH)




Network design of the Spatiotemporal-prior Amodal Mask Head (SAMH), which takes the frame features, the visible segmentation, and the global instance prototypes as inputs to generate amodal segmentations and update the global instance prototypes.
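The following is a minimal, hypothetical PyTorch sketch of the SAMH interface implied by the figure: visible masks pool per-instance evidence from the frame features (the spatial prior), prototypes are refined by cross-attention over the frame, and amodal masks come from a dot product between prototype embeddings and pixel features. All layer choices (prior_proj, cross_attn, amodal_embed) are assumptions; consult the released code for the actual design.

import torch
import torch.nn as nn

class SAMH(nn.Module):
    """Sketch of the Spatiotemporal-prior Amodal Mask Head (assumed layers)."""
    def __init__(self, dim: int = 256, num_heads: int = 8):
        super().__init__()
        self.prior_proj = nn.Linear(dim, dim)
        self.cross_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)
        self.amodal_embed = nn.Linear(dim, dim)

    def forward(self, frame_feat, visible_masks, global_ip):
        # frame_feat:    (B, C, H, W) per-frame feature map
        # visible_masks: (B, N, H, W) visible segmentation logits
        # global_ip:     (B, N, C) global instance prototypes
        B, C, H, W = frame_feat.shape
        feat = frame_feat.flatten(2).transpose(1, 2)          # (B, HW, C)

        # Visible masks as a spatial prior: normalized mask-weighted pooling
        # gathers each instance's visible evidence from the frame (intra-clip).
        weights = visible_masks.sigmoid().flatten(2)          # (B, N, HW)
        weights = weights / weights.sum(-1, keepdim=True).clamp(min=1e-6)
        visible_prior = self.prior_proj(weights @ feat)       # (B, N, C)

        # Prototypes, conditioned on the visible prior, attend to the frame;
        # they carry amodal characteristics accumulated across clips (inter-clip).
        q = global_ip + visible_prior
        updated, _ = self.cross_attn(q, feat, feat)
        updated_ip = self.norm(global_ip + updated)           # updated global IPs

        # Amodal masks via dot product between embeddings and pixel features.
        mask_embed = self.amodal_embed(updated_ip)            # (B, N, C)
        amodal_masks = torch.einsum("bnc,bchw->bnhw", mask_embed, frame_feat)
        return amodal_masks, updated_ip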



Qualitative Results

Qualitative results on the SAILVOS dataset.

Qualitative results on the FISHBOWL dataset.


Quantitative Results




Across all backbones and datasets, A2VIS achieves the highest performance, with a significant gap over the second-best method, GenVIS. Notably, the differences in the IDF1 and IDS metrics highlight A2VIS's ability to maintain consistent and accurate object tracking, owing to its amodal awareness.



Citation

@article{tran2024a2vis,
  title   = {A2VIS: Amodal-aware Approach to Video Instance Segmentation},
  author  = {Tran, Minh and Pham, Thang and Bounsavy, Winston and Nguyen, Tri and Le, Ngan},
  journal = {arXiv preprint arXiv:2412.01147},
  year    = {2024}
}

Acknowledgement

This webpage template is borrowed from SDFusion. Thanks for their beautiful website!