https://doi.org/10.5194/ars-13-209-2015
03 Nov 2015

Multi-view point cloud fusion for LiDAR based cooperative environment detection

B. Jaehn, P. Lindner, and G. Wanielik

Abstract. A key component of automated driving is 360° environment detection. The recognition capabilities of modern sensors are limited to their direct field of view, and in urban areas many objects occlude important areas of interest. Information captured by another sensor from a different perspective can resolve such occlusions. Furthermore, the ability to detect and classify objects in the surroundings can be improved by taking multiple views into account.

In order to combine the data of two sensors into one coordinate system, a rigid transformation matrix has to be derived. The accuracy of modern relative pose estimation systems, e.g. satellite-based ones, is not sufficient to guarantee a suitable alignment. Therefore, this work uses a registration-based approach that aligns the environment data captured by two sensors from different positions. In this way, the relative pose estimate obtained by traditional methods is improved and the data can be fused.
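
A minimal sketch of this coarse alignment step (in Python, with illustrative names; the paper itself prescribes no particular implementation) builds a 4x4 homogeneous rigid transformation from a planar relative pose and maps one sensor's point cloud into the other's coordinate system:

import numpy as np

def pose2d_to_se3(dx, dy, yaw):
    # Coarse rigid transform from a planar relative pose (dx, dy in metres,
    # yaw in radians), e.g. obtained from satellite-based tracking.
    # Roll, pitch and height offset are assumed to be zero here; the
    # subsequent 3-D registration is expected to correct the remaining error.
    c, s = np.cos(yaw), np.sin(yaw)
    T = np.eye(4)
    T[:3, :3] = [[c, -s, 0.0],
                 [s,  c, 0.0],
                 [0.0, 0.0, 1.0]]
    T[:3, 3] = [dx, dy, 0.0]
    return T

def transform_points(T, points):
    # Map an (N, 3) point cloud into the other sensor's frame.
    return points @ T[:3, :3].T + T[:3, 3]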

To support this, we present an approach which utilizes the uncertainty information of modern tracking systems to determine the possible field of view of the other sensor. Furthermore, it is estimated which parts of the captured data are directly visible to both sensors, taking occlusion and shadowing effects into account. Afterwards, a registration method based on the iterative closest point (ICP) algorithm is applied to that data in order to obtain an accurate alignment.
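
The refinement step could, for instance, be realized with a generic point-to-point ICP as sketched below. The Open3D library and all parameter values are assumptions for illustration; the paper's ICP variant and settings may differ.

import numpy as np
import open3d as o3d

def refine_alignment(src_pts, tgt_pts, T_init, max_corr_dist=0.5):
    # src_pts, tgt_pts: (N, 3) arrays containing only those points that are
    # estimated to be visible to both sensors (occluded regions removed).
    # T_init: 4x4 coarse transform, e.g. derived from the 2-D pose estimate.
    src = o3d.geometry.PointCloud()
    src.points = o3d.utility.Vector3dVector(src_pts)
    tgt = o3d.geometry.PointCloud()
    tgt.points = o3d.utility.Vector3dVector(tgt_pts)
    result = o3d.pipelines.registration.registration_icp(
        src, tgt, max_corr_dist, T_init,
        o3d.pipelines.registration.TransformationEstimationPointToPoint())
    return result.transformation, result.fitness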

The contribution of the presented approach to the achievable accuracy is shown with the help of ground truth data from a LiDAR simulation within a 3-D crossroad model. The results show that a two-dimensional position and heading estimate is sufficient to initialize a successful 3-D registration process. Furthermore, it is shown which initial spatial alignment is necessary to obtain suitable registration results.
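
One way such an evaluation against ground truth might look (an assumed metric, not necessarily the one reported in the paper) is to compare the estimated transformation with the simulated ground-truth transformation:

import numpy as np

def registration_error(T_est, T_gt):
    # Residual transform between estimate and ground truth.
    dT = np.linalg.inv(T_gt) @ T_est
    t_err = np.linalg.norm(dT[:3, 3])                         # metres
    cos_a = (np.trace(dT[:3, :3]) - 1.0) / 2.0
    r_err = np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0)))  # degrees
    return t_err, r_err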

Short summary
In the future, autonomous robots will share environment information captured by range sensors such as LiDAR or ToF cameras. This paper shows that two-dimensional position and heading information, e.g. obtained by GPS tracking methods, is enough to initialize a 3-D registration method using range images taken from different perspectives by different platforms (e.g. car and infrastructure). Thus they will be able to explore their surroundings in a cooperative manner.