Publications
* denotes equal contribution and joint lead authorship.
2024
DFDNet: Directional Feature Diffusion for Efficient Fully-Sparse LiDAR Object Detection.
Submitted to CVPR 2025.
LiDAR-based object detection is essential for autonomous driving but remains computationally demanding. Conventional methods rely on dense feature map representations, which incur significant computational overhead and underutilize the inherent sparsity of LiDAR data. Recent fully sparse detectors show promise but suffer from missing central object features due to the surface-dominant distribution of LiDAR points. Sparse feature diffusion methods attempt to address this by expanding features within object bounding boxes to cover neighboring regions before the detection head. However, these approaches incur excessive computational cost because larger objects require a correspondingly large diffusion range. In this paper, we propose DFDNet, a fully sparse directional feature diffusion network with a novel adaptive sparse feature realignment module that dynamically projects sparse features onto object centerlines before feature diffusion. This realignment enables efficient, directional feature diffusion along the object centerline. The resulting diffused features are then aggregated via max-pooling to construct a refined feature representation for each object. Our method reduces redundant sparse feature computations, achieving a two-fold reduction in computational load while improving performance over state-of-the-art detectors on the Waymo and nuScenes benchmarks.
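A minimal sketch of the core idea described in the abstract, not the authors' released code: each sparse feature is replicated at a few offsets along a per-object centerline direction instead of being diffused isotropically over a dense neighborhood, and diffused features that land in the same voxel are merged by max-pooling. All function and variable names (directional_diffuse, maxpool_by_voxel, centerline_dir) are hypothetical placeholders under these assumptions.

```python
# Hypothetical sketch of directional feature diffusion + max-pool aggregation.
import numpy as np

def directional_diffuse(coords, feats, centerline_dir, n_steps=3, step=1.0):
    """Replicate each sparse feature at offsets along a unit centerline
    direction (directional diffusion), rather than over a dense 2D/3D window."""
    offsets = np.arange(-n_steps, n_steps + 1)[:, None] * step              # (K, 1)
    new_coords = coords[None, :, :] + offsets[:, None, :] * centerline_dir  # (K, N, 3)
    new_feats = np.broadcast_to(feats, (len(offsets),) + feats.shape)       # (K, N, C)
    return new_coords.reshape(-1, 3), new_feats.reshape(-1, feats.shape[-1])

def maxpool_by_voxel(coords, feats, voxel=1.0):
    """Aggregate diffused features falling into the same voxel by max-pooling."""
    keys = np.floor(coords / voxel).astype(np.int64)
    _, inv = np.unique(keys, axis=0, return_inverse=True)
    out = np.full((inv.max() + 1, feats.shape[-1]), -np.inf)
    np.maximum.at(out, inv, feats)                                           # unbuffered max-reduce
    return out

# Toy usage: four surface features of one object whose centerline points along +x.
coords = np.array([[0.0, 0.5, 0.0], [1.0, -0.5, 0.0],
                   [2.0, 0.5, 0.0], [3.0, -0.5, 0.0]])
feats = np.random.rand(4, 16)
centerline_dir = np.array([1.0, 0.0, 0.0])
d_coords, d_feats = directional_diffuse(coords, feats, centerline_dir)
pooled = maxpool_by_voxel(d_coords, d_feats)
```

Because diffusion is restricted to a line rather than a square or cubic neighborhood, the number of generated sparse features grows linearly with the diffusion range instead of quadratically or cubically, which is the intuition behind the reported reduction in computational load.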