Multiple Extended Object Tracking for Automotive Applications
Summary
In order to safely navigate through traffic, an automated vehicle needs to be aware of the trajectories and dimensions of all dynamic objects (e.g., traffic participants) as well as the locations and dimensions of all stationary objects (e.g., road infrastructure). For this purpose, automated vehicles are equipped with modern high-resolution sensors such as LIDAR, RADAR, or cameras that can detect objects in the vicinity. Typically, the sensors generate multiple detections for each object, and the detections are unlabeled, i.e., it is unknown which object gave rise to which detection. Furthermore, the detections are corrupted by sensor noise: some detections might be clutter, and some detections might be missing. The task of detecting and tracking an unknown number of moving, spatially extended objects (e.g., traffic participants) based on noise-corrupted, unlabeled measurements is called multiple extended object tracking.
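The measurement situation described above can be illustrated with a minimal simulation sketch. All function names and parameter values below are illustrative assumptions, not part of the tutorial material: each object may produce several detections scattered around its true position, an object can be missed entirely, clutter detections appear anywhere in the surveillance area, and the resulting scan is shuffled so that the origin of each detection is unknown.

```python
import random

random.seed(0)

def generate_scan(objects, p_detect=0.9, noise=0.2, clutter_count=3,
                  detections_per_object=5, area=(0.0, 100.0)):
    """Return one unlabeled, noise-corrupted scan of 2-D point detections.

    Hypothetical measurement model: extended objects yield multiple
    detections, detections may be missed, and clutter is added.
    """
    scan = []
    for (x, y) in objects:
        if random.random() < p_detect:  # object may be missed entirely
            for _ in range(detections_per_object):  # multiple detections per object
                scan.append((x + random.gauss(0.0, noise),
                             y + random.gauss(0.0, noise)))
    for _ in range(clutter_count):  # clutter detections anywhere in the area
        scan.append((random.uniform(*area), random.uniform(*area)))
    random.shuffle(scan)  # detections are unlabeled: origin is unknown
    return scan

# Two true objects; the tracker only ever sees the unlabeled scan.
scan = generate_scan([(10.0, 20.0), (40.0, 55.0)])
print(len(scan))
```

A multiple extended object tracker must invert exactly this process: from scans like `scan`, it estimates how many objects exist, where they are, and what shape they have.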
This tutorial will introduce state-of-the-art theory for multiple extended object tracking together with relevant real-world automotive applications. In particular, we will demonstrate applications for different object types, e.g., pedestrians, bicyclists, and cars, using different sensors such as LIDAR, RADAR, and camera. The tutorial is aimed at professionals and academics interested in the field of sensor fusion and tracking. As a prerequisite, basic knowledge of sequential Bayesian estimation methods (such as Kalman filtering) is recommended. After attending the tutorial, participants will be familiar with the state of the art in multiple extended object tracking and environment modeling. They will be in a position to implement and evaluate track management, data association, shape estimation, and fusion methods for extended objects.
Biographies
Marcus Baum received his Diploma degree in 2007 and his Ph.D. degree in 2013, both in Computer Science from the Karlsruhe Institute of Technology (KIT), Germany. He is Professor of Computer Science and Head of the Data Fusion Lab at the University of Goettingen, Germany. His research interests are in the areas of signal processing, state estimation, machine learning, sensor data fusion, tracking, and environment perception. He is Area Editor of the Journal of Advances in Information Fusion (JAIF) and Associate Editor of Letters of the IEEE Transactions on Aerospace and Electronic Systems (TAES). He was Local Arrangements Co-Chair of the 19th International Conference on Information Fusion (FUSION) and Technical Co-Chair of the 2016 and 2020 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI). He has organized several special sessions, workshops and tutorials in the area of (multiple) extended object tracking and sensor fusion. He received the best student paper award at the Fusion 2011 conference, and the International Society for Information Fusion (ISIF) awarded him the 2017 ISIF Young Investigator Award for outstanding contributions to the art of information fusion.
Jens Honer studied physics at the University of Stuttgart and received his PhD in theoretical physics (quantum optics) in 2013. In 2013, he joined Valeo Comfort and Driving Assistance Systems (CDA) to design the first data fusion systems based on Radar, Lidar, and cameras in the Systems and Functions department, which were used in the first mass-produced level 3 autonomous cars by Honda. From 2017 to 2020, he led the algorithm design for the next-generation environment perception system in Valeo CDA Driving Systems and Functions (DSF), and in 2020 he transitioned to his current position in Valeo CDA Driving Advanced Research (DAR), where he leads research on environment perception systems. There, he is working on the perception systems for the Valeo Drive4U cars, automated valet parking (type 2), and novel applications for the Valeo sensor portfolio. In 2016 he was appointed Valeo Expert and, in 2020, Valeo Senior Expert for Sensor Fusion and Environment Perception. His fields of interest include localization, machine learning, environment perception, extended and multi-target tracking, and statistics. He has co-organized tutorials on multiple extended object tracking and sensor fusion at the 2018 and 2019 International Conference on Information Fusion (FUSION) and the 2021 and 2022 Intelligent Vehicles Symposium (IV).