
3D BENCHMARK UPDATE
Update against the Blender 3.0 upstream release. Update against Blender 3.2 upstream, expose the Radeon HIP option, and switch to the -cycles-device option in recent versions for setting device acceleration. Update against Blender 3.3 upstream and add the Intel oneAPI back-end option. Update the SHA256 / MD5 sums for Blender 3.3 on Linux, as upstream appears to have respun that build.

The benchmark provides three evaluations: Object Detection Evaluation, 3D Object Detection Evaluation, and Bird's Eye View Evaluation.
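As a rough sketch of how the exposed device options map onto a headless render invocation: Blender passes arguments after "--" through to Cycles, where --cycles-device selects the back-end. The scene file name and frame number below are placeholders, not benchmark specifics.

```python
# Sketch: build a headless, single-frame Blender render command for a given
# Cycles back-end. "scene.blend" is a placeholder scene, not a benchmark file.

def blender_render_cmd(device: str, blend_file: str = "scene.blend") -> list:
    """Return an argv list for a single-frame headless Cycles render.

    device: one of the back-ends the benchmark exposes, e.g. "CPU",
    "CUDA", "OPTIX", "HIP" (Radeon), or "ONEAPI" (Intel).
    """
    allowed = {"CPU", "CUDA", "OPTIX", "HIP", "ONEAPI"}
    if device not in allowed:
        raise ValueError("unknown Cycles device: %s" % device)
    # Everything after "--" is handed to Cycles rather than to Blender itself.
    return ["blender", "-b", blend_file, "-f", "1",
            "--", "--cycles-device", device]

cmd = blender_render_cmd("HIP")
```

The returned list can be handed to subprocess.run() as-is; building it as a list (rather than a shell string) avoids quoting issues.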

We thank David Stutz and Bo Li for developing the 3D object detection benchmark. Jonas Heylen (TRACE vzw) has released pixel-accurate instance segmentations for all 7481 training images.

We evaluate 3D object detection performance using the PASCAL criteria also used for 2D object detection. Far objects are thus filtered based on their bounding-box height in the image plane. As only objects also appearing on the image plane are labeled, objects in don't-care areas do not count as false positives. We note that the evaluation does not take care of ignoring detections that are not visible on the image plane - these detections might give rise to false positives. For cars we require a 3D bounding-box overlap of 70%, while for pedestrians and cyclists we require a 3D bounding-box overlap of 50%. All methods are ranked based on the moderately difficult results; the hardest difficulty level allows up to 50 % truncation.

Note 2: We have followed the suggestions of the Mapillary team in their paper "Disentangling Monocular 3D Object Detection" and use 40 recall positions instead of the 11 recall positions proposed in the original Pascal VOC benchmark. This results in a fairer comparison of the results; please check their paper. The last leaderboards right before this change can be found here:
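The switch from 11 to 40 recall positions can be sketched as follows. This is an illustrative interpolated-AP routine, not the official devkit code; the exact sampling grid (e.g. whether recall 0 is included) is defined by the benchmark's devkit.

```python
import numpy as np

def interpolated_ap(recall, precision, num_points=40):
    """Average precision sampled at `num_points` equally spaced recall
    positions, with precision interpolated as the maximum to the right
    (the Pascal VOC convention). 11 points was the original VOC scheme;
    40 points is the Mapillary-suggested replacement."""
    recall = np.asarray(recall, dtype=float)
    precision = np.asarray(precision, dtype=float)
    ap = 0.0
    for r in np.linspace(0.0, 1.0, num_points):
        mask = recall >= r
        # Interpolated precision: best precision at any recall >= r.
        p = precision[mask].max() if mask.any() else 0.0
        ap += p / num_points
    return ap
```

With more sample positions, a method's AP is less sensitive to where exactly its precision-recall curve happens to cross the sparse 11-point grid, which is the fairness argument behind the change.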
3D BENCHMARK CODE
Karl Rosaen (U.Mich) has released code to convert between KITTI, KITTI tracking, Pascal VOC, Udacity, CrowdAI and AUTTI formats. Qianli Liao (NYU) has put together code to convert from KITTI to the PASCAL VOC file format (documentation included, requires Emacs).
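As a rough illustration of why such converters exist: a KITTI label line packs 2D and 3D fields into a fixed column order that other formats lay out differently. A minimal parser for that layout (field order per the KITTI devkit readme; this is a sketch, not any of the released tools, and the sample line is illustrative) might look like:

```python
def parse_kitti_label(line: str) -> dict:
    """Parse one line of a KITTI object label file.

    Column layout (per the KITTI devkit readme): type, truncated,
    occluded, alpha, 2D bbox (left, top, right, bottom), 3D dimensions
    (height, width, length), 3D location (x, y, z), rotation_y.
    """
    f = line.split()
    return {
        "type": f[0],
        "truncated": float(f[1]),
        "occluded": int(f[2]),
        "alpha": float(f[3]),
        "bbox": tuple(map(float, f[4:8])),         # 2D box, image pixels
        "dimensions": tuple(map(float, f[8:11])),  # h, w, l in metres
        "location": tuple(map(float, f[11:14])),   # x, y, z, camera coords
        "rotation_y": float(f[14]),
    }

# Illustrative label line, not taken from the dataset.
obj = parse_kitti_label(
    "Car 0.00 0 -1.58 587.01 173.33 614.12 200.12 "
    "1.65 1.67 3.64 -0.65 1.71 46.70 -1.59"
)
```

A converter to, say, Pascal VOC then only has to re-serialize the "type" and "bbox" fields into VOC's XML layout, discarding the 3D fields VOC has no slot for.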
3D BENCHMARK DOWNLOAD


The 3D object detection benchmark consists of 7481 training images and 7518 test images as well as the corresponding point clouds, comprising a total of 80,256 labeled objects. For evaluation, we compute precision-recall curves; to rank the methods, we compute average precision. We require that all methods use the same parameter set for all test pairs.
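A minimal sketch of how a precision-recall curve is derived from scored detections. The true/false-positive flags here are synthetic inputs; in the real evaluation each detection is matched to ground truth by 3D bounding-box overlap at the class-specific threshold.

```python
import numpy as np

def precision_recall(scores, is_tp, num_gt):
    """Precision/recall points from detections sorted by confidence.

    scores: detector confidence per detection.
    is_tp:  1 if the detection matched a ground-truth box (in KITTI,
            by 3D IoU at the class threshold), else 0.
    num_gt: total number of ground-truth objects.
    """
    order = np.argsort(scores)[::-1]          # most confident first
    flags = np.asarray(is_tp, dtype=float)[order]
    tp = np.cumsum(flags)
    fp = np.cumsum(1.0 - flags)
    precision = tp / (tp + fp)
    recall = tp / num_gt
    return precision, recall

prec, rec = precision_recall([0.9, 0.8, 0.7], [1, 0, 1], num_gt=2)
```

Average precision then summarizes this curve at the fixed recall positions, and ranking methods by AP at a single shared parameter set keeps the comparison honest across submissions.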
