A Computational Taxonomy of Deep Neural Networks for 3D Point Cloud Processing
Analyzing Deep Learning Representations of Point Clouds for Real-Time In-Vehicle LiDAR Perception
Deep neural networks operating on 3-dimensional (3D) point clouds from multiple high-resolution LiDAR sensors must extract semantics from an increasingly precise picture of the vehicle's environment.
However, processing the ever-increasing amounts of data produced by these sensors is computationally demanding.
Authors
Marc Uecker, Tobias Fleck, Marcel Pflugfelder, J. Marius Zöllner
In this work, we present a taxonomy of different architecture designs, based on design decisions regarding the point cloud data representation.
This taxonomy is used to analyze the impact of these design decisions on run-time performance characteristics such as memory consumption and latency.
Based on this analysis, we also provide insights and recommendations for future work.
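To make the representational design axes concrete, the following is a minimal illustrative sketch (our own, not code from the paper; all function names and parameters are assumptions) that converts the same raw LiDAR scan into two common representations such a taxonomy distinguishes: a dense voxel grid and a 2D spherical range image.

```python
import numpy as np

def voxelize(points, voxel_size=0.2,
             grid_range=((-50.0, 50.0), (-50.0, 50.0), (-3.0, 3.0))):
    """Map a raw (N, 3) point cloud into a dense occupancy voxel grid."""
    mins = np.array([lo for lo, hi in grid_range])
    maxs = np.array([hi for lo, hi in grid_range])
    dims = np.ceil((maxs - mins) / voxel_size).astype(int)
    grid = np.zeros(dims, dtype=np.float32)  # memory scales with covered volume
    idx = np.floor((points - mins) / voxel_size).astype(int)
    valid = np.all((idx >= 0) & (idx < dims), axis=1)
    grid[tuple(idx[valid].T)] = 1.0          # mark occupied voxels
    return grid

def range_projection(points, height=64, width=1024,
                     fov_up_deg=3.0, fov_down_deg=-25.0):
    """Project a raw (N, 3) point cloud onto a 2D spherical range image."""
    depth = np.linalg.norm(points, axis=1)
    yaw = np.arctan2(points[:, 1], points[:, 0])
    pitch = np.arcsin(points[:, 2] / np.maximum(depth, 1e-8))
    fov_up, fov_down = np.radians(fov_up_deg), np.radians(fov_down_deg)
    u = (0.5 * (1.0 - yaw / np.pi) * width).astype(int) % width
    v = np.clip(((fov_up - pitch) / (fov_up - fov_down) * height).astype(int),
                0, height - 1)
    image = np.zeros((height, width), dtype=np.float32)
    image[v, u] = depth                      # fixed memory, independent of N
    return image

# Example: 100k random points around the sensor.
pts = np.random.uniform(-40, 40, size=(100_000, 3))
print(voxelize(pts).shape)          # (500, 500, 30)
print(range_projection(pts).shape)  # (64, 1024)
```

The voxel grid's memory footprint is fixed by the covered volume and resolution, while the range image's is fixed by the sensor geometry; only a raw point list grows with the number of measured points.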
Results
In this work, we present a taxonomy covering the most significant design decisions regarding the LiDAR point cloud representation used in deep neural networks.
By categorizing recent state-of-the-art methods for 3D semantic segmentation, we uncover common trends that can be traced back to shared design decisions regarding representational capacity.
We examine in depth the design decisions made when developing point cloud representations, and analyze how each decision affects memory consumption, run-time latency, and representational capacity during the model's design phase.
Finally, for each constraint of the autonomous-vehicle use case, we discuss which design decisions can be altered to ease latency or memory constraints, or to increase representational capacity.
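As a back-of-the-envelope illustration of the memory side of these trade-offs (a hedged sketch with assumed, illustrative numbers, not measurements from the paper), compare a sparse point-list representation with a dense voxel grid:

```python
# Illustrative assumptions: a 128-beam sensor with 1024 columns per sweep,
# four features per point/voxel (x, y, z, intensity), 32-bit floats.
points_per_scan = 128 * 1024
features = 4
bytes_per_float = 4

# Sparse point list: memory grows with the number of measured points.
point_list_mib = points_per_scan * features * bytes_per_float / 2**20

# Dense voxel grid covering 100 m x 100 m x 6 m at 0.2 m resolution:
# memory grows with the covered volume, however sparse the scene is.
voxels = 500 * 500 * 30
voxel_grid_mib = voxels * features * bytes_per_float / 2**20

print(f"point list: {point_list_mib:7.1f} MiB")  # 2.0 MiB
print(f"voxel grid: {voxel_grid_mib:7.1f} MiB")  # ~114.4 MiB
```

Under these assumptions, a dense grid costs roughly two orders of magnitude more memory than the raw scan, which is one reason memory-constrained in-vehicle deployments may favor sparse or projection-based representations, trading resolution or coverage instead.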