Open-Vocabulary Segmentation (OVS) methods offer promising capabilities in detecting unseen object categories, but the target categories must be known in advance and provided by a human, either via text prompts or pre-labeled datasets, which limits their scalability. We propose 3D-AVS, a method for Auto-Vocabulary Segmentation of 3D point clouds in which the vocabulary is unknown beforehand and auto-generated for each input at runtime, thus eliminating the human in the loop and typically providing a substantially larger vocabulary for richer annotations. 3D-AVS first recognizes semantic entities from image or point cloud data and then segments all points with the automatically generated vocabulary. Our method incorporates both image-based and point-based recognition, enhancing robustness under challenging lighting conditions where the geometric information from LiDAR is especially valuable. Our point-based recognition features a Sparse Masked Attention Pooling (SMAP) module to enrich the diversity of recognized objects.
To address the challenge of evaluating unknown vocabularies and to avoid annotation biases from label synonyms, hierarchies, or semantic overlaps, we introduce the annotation-free Text-Point Semantic Similarity (TPSS) metric for assessing the quality of the generated vocabulary.
Our evaluations on nuScenes and ScanNet200 demonstrate 3D-AVS’ ability to generate semantic classes with accurate point-wise segmentations. Our method is especially effective at improving recognition robustness under difficult lighting conditions such as nighttime or rainy scenes.
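The exact formulation of the TPSS metric introduced above is given in the paper; the snippet below is only a minimal sketch of the general idea, assuming per-point features and generated vocabulary embeddings that live in a shared CLIP-style space. The tensor names and the mean-of-max aggregation are illustrative assumptions, not the paper's definition.

import torch
import torch.nn.functional as F

def text_point_similarity(point_feats, text_feats):
    """Annotation-free similarity between a point cloud and a generated vocabulary.

    point_feats: (N, D) per-point features in the shared text-point embedding space.
    text_feats:  (K, D) embeddings of the K auto-generated vocabulary terms.
    Returns a scalar in [-1, 1]; higher means the vocabulary describes the scene better.
    NOTE: mean-of-max cosine similarity is an assumed aggregation, not the exact TPSS formula.
    """
    p = F.normalize(point_feats, dim=-1)
    t = F.normalize(text_feats, dim=-1)
    sim = p @ t.T                            # (N, K) cosine similarities
    best_per_point = sim.max(dim=1).values   # best matching vocabulary term per point
    return best_per_point.mean()

# Toy usage with random features (a real pipeline would use CLIP-aligned encoders).
score = text_point_similarity(torch.randn(1024, 512), torch.randn(12, 512))
print(float(score))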
The image above shows an overview of the method. A point cloud and its corresponding images are fed to a point captioner and an image captioner, respectively, to generate captions. Caption2Tag then removes irrelevant words from the captions. The remaining nouns are passed to a text encoder and eventually assigned to points through a segmenter. The dashed lines indicate that the entire image branch is optional. The point captioner is the only trainable component in 3D-AVS. Note that the example point caption is generated based on observing the green points.
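As a rough illustration of the Caption2Tag and segmentation steps described above, the sketch below filters caption words against a small stop list to form a vocabulary and then assigns each point to its most similar term in a shared embedding space. The stop list, the random stand-in features, and the plain cosine-similarity segmenter are assumptions for illustration, not the paper's actual components.

import torch
import torch.nn.functional as F

# Hypothetical stop list; the actual Caption2Tag filtering is more involved.
IRRELEVANT = {"a", "an", "the", "image", "photo", "view", "of", "with", "and"}

def caption2tag(captions):
    """Turn free-form captions into a deduplicated vocabulary of candidate tags."""
    tags = []
    for cap in captions:
        for word in cap.lower().replace(",", " ").split():
            if word not in IRRELEVANT and word not in tags:
                tags.append(word)
    return tags

def segment(point_feats, tag_feats):
    """Assign every point to its most similar vocabulary term (per-point argmax)."""
    p = F.normalize(point_feats, dim=-1)       # (N, D)
    t = F.normalize(tag_feats, dim=-1)         # (K, D)
    return (p @ t.T).argmax(dim=1)             # (N,) index into the vocabulary

# Toy usage: captions would come from the image and point captioners.
captions = ["a photo of a car and a pedestrian", "wet road with traffic lights"]
vocab = caption2tag(captions)
point_feats = torch.randn(2048, 512)           # stand-in for CLIP-aligned point features
tag_feats = torch.randn(len(vocab), 512)       # stand-in for text-encoder outputs
labels = segment(point_feats, tag_feats)
print(vocab, labels.shape)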
The image encoder and point encoder are pre-aligned in the CLIP latent space. During training (left), Sparse Masked Attention Pooling (SMAP) aggregates features from the points visible in the image (highlighted in red) and is supervised with CLIP image features. During inference (right), neither the image nor the camera intrinsic parameters are available; instead, a group of masks is generated based solely on geometric information. The SMAP output is then decoded into a group of captions. For simplicity, only one image (left) and one sector (right) are shown.
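The snippet below is a minimal, assumed sketch of masked attention pooling in this spirit: a learnable query attends over per-point features, attention logits of points outside the visibility mask are suppressed, and the pooled feature is trained to match a CLIP image feature via a cosine loss. The layer sizes, the single-query design, and the cosine objective are illustrative assumptions rather than the exact SMAP architecture.

import torch
import torch.nn as nn
import torch.nn.functional as F

class MaskedAttentionPooling(nn.Module):
    """Pool per-point features into one vector, attending only to masked (visible) points."""
    def __init__(self, dim=512):
        super().__init__()
        self.query = nn.Parameter(torch.randn(1, dim))
        self.key = nn.Linear(dim, dim)
        self.value = nn.Linear(dim, dim)

    def forward(self, point_feats, visible_mask):
        # point_feats: (N, D), visible_mask: (N,) bool, True for points inside the mask.
        logits = self.query @ self.key(point_feats).T / point_feats.shape[-1] ** 0.5  # (1, N)
        logits = logits.masked_fill(~visible_mask.unsqueeze(0), float("-inf"))
        attn = logits.softmax(dim=-1)
        return attn @ self.value(point_feats)  # (1, D) pooled feature

# Training step (sketch): supervise the pooled feature with a frozen CLIP image feature.
pool = MaskedAttentionPooling(dim=512)
point_feats = torch.randn(4096, 512)           # per-point features from the point encoder
visible = torch.rand(4096) < 0.2               # points that project into the current image
clip_image_feat = torch.randn(1, 512)          # CLIP image embedding used as the target
pooled = pool(point_feats, visible)
loss = 1 - F.cosine_similarity(pooled, clip_image_feat).mean()
loss.backward()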
Even in challenging weather conditions, our method generates useful descriptions of the scene by combining the strengths of the image captioner (when visual information is available) and the point captioner (when geometric information is available). See below for some examples. Green classes correspond to categories that overlap with the human-annotated categories provided in the dataset. Purple classes are additional categories recognized by 3D-AVS that we deem plausible and useful.
@inproceedings{wei20253davs,
  title={3D-AVS: LiDAR-based 3D Auto-Vocabulary Segmentation},
  author={Weijie Wei and Osman Ülger and Fatemeh Karimi Nejadasl and Theo Gevers and Martin R. Oswald},
  booktitle={Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2025},
}