YOdar: Uncertainty-based Sensor Fusion for Vehicle Detection with Camera and Radar Sensors
In this work, we present an uncertainty-based method for sensor fusion with camera and radar data. The outputs of two neural networks, one processing camera data and the other radar data, are combined in an uncertainty-aware manner. To this end, we gather the outputs and corresponding meta information of both networks. For each predicted object, the gathered information is post-processed by a gradient boosting method to produce a joint prediction of both networks. In our experiments, we combine the YOLOv3 object detection network with a customized 1D radar segmentation network and evaluate our method on the nuScenes dataset. In particular, we focus on night scenes, where the capability of object detection networks based on camera data alone is potentially handicapped. Our experiments show that this uncertainty-aware fusion approach, which is also highly modular in nature, significantly improves performance over single-sensor baselines and is competitive with specifically tailored deep-learning-based fusion approaches.
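The abstract describes post-processing per-object outputs and meta information from both networks with a gradient boosting model to obtain a fused confidence. A minimal sketch of this idea, using scikit-learn's `GradientBoostingClassifier` on synthetic placeholder features (the feature names, data, and train/test split here are illustrative assumptions, not the paper's actual pipeline):

```python
# Hedged sketch: fuse per-object scores from a camera detector (e.g. YOLOv3)
# and a radar network via a gradient boosting classifier. All data below is
# synthetic; real features would come from the two networks' outputs.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
n = 500
# Hypothetical per-object meta features: camera objectness score,
# radar segment score, and predicted box width/height.
cam_score = rng.uniform(0, 1, n)
radar_score = rng.uniform(0, 1, n)
box_w = rng.uniform(10, 200, n)
box_h = rng.uniform(10, 200, n)
X = np.column_stack([cam_score, radar_score, box_w, box_h])
# Synthetic ground truth: an object counts as real when a weighted
# combination of the two sensor scores is high.
y = ((0.6 * cam_score + 0.4 * radar_score) > 0.5).astype(int)

# Fit the fusion model on a training split of the gathered meta data.
fusion = GradientBoostingClassifier(n_estimators=100, max_depth=3)
fusion.fit(X[:400], y[:400])

# At inference time, the fused probability replaces the raw camera
# confidence for each predicted object.
fused_conf = fusion.predict_proba(X[400:])[:, 1]
print(fused_conf.shape)
```

The design choice sketched here is the one the abstract highlights as modular: either single-sensor network can be swapped out, since the fusion model only consumes their scalar outputs and meta information rather than intermediate feature maps.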