SDF/MFI Conference 2023: Tutorial line up

Tutorial 1

Bridging theory and applications with possibility theory


Although it is widely accepted that there are two main sources of uncertainty, namely epistemic uncertainty corresponding to a lack of knowledge and aleatoric uncertainty corresponding to random phenomena, the usual approach is to model both using the tools of probability theory. In this tutorial, I will show how a particular version of possibility theory offers an intuitive approach to modelling epistemic uncertainty which can be combined with probability theory to yield a general framework in which different sources of uncertainty are faithfully modelled. Such a framework provides a principled basis for many successful heuristics, making it possible to understand and improve them and hence furthering the applicability of the Bayesian method. The effectiveness of this approach will be illustrated via a range of applications including distributed inference, ensemble Kalman filtering, and space situational awareness. This tutorial does not require any prior knowledge of possibility theory.
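As a flavour of the kind of object the tutorial builds on, here is a minimal sketch of a possibility distribution on a finite set, with the induced possibility and necessity measures. This is a generic textbook construction with made-up example values, not the speaker's specific formulation:

```python
# Minimal sketch of possibility and necessity measures on a finite set.
# The distribution pi below is a hypothetical example, not from the tutorial.

def possibility(pi, event):
    """Pi(A) = max of the possibility distribution over outcomes in A."""
    return max(pi[x] for x in event)

def necessity(pi, event):
    """N(A) = 1 - Pi(complement of A): the degree to which the evidence forces A."""
    complement = set(pi) - set(event)
    return 1.0 - possibility(pi, complement) if complement else 1.0

# Epistemic uncertainty about, say, a sensor bias: "small" is fully plausible,
# "large" only partially; total ignorance would assign 1 to every outcome.
pi = {"small": 1.0, "medium": 0.7, "large": 0.3}
print(possibility(pi, {"medium", "large"}))  # 0.7
print(necessity(pi, {"small", "medium"}))    # 0.7 (= 1 - 0.3)
```

Note that, unlike a probability mass function, the values need not sum to one; only the maximum must equal one, which is what makes total ignorance representable.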



Jeremie Houssineau is an assistant professor in the Department of Statistics at the University of Warwick. His research interests include possibility theory and Bayesian statistics. He received a PhD in statistical signal processing from Heriot-Watt University, Edinburgh, in 2015 and was with the Department of Statistics and Applied Probability at the National University of Singapore from 2016 to 2019.

Tutorial 2

Introduction to ELINT and ESM: A Beginner's Tutorial



Tutorial 3

Statistical and information-theoretic methods for multi-sensor multi-target estimation


There is a growing need for the development of autonomous systems that are able to make decisions automatically based on the information they receive from sensors. In this talk I will review some new algorithms and methods for multi-target tracking, multi-sensor fusion, and information-theoretic tools for performance assessment and decision-making.

  • Large-scale tracking of many objects: The scale of the problem is growing, and tracking many objects requires algorithms that mitigate combinatorial complexity. Low-complexity solutions for multi-target tracking are tested in complex environments. A method was developed for robustly tracking large numbers of targets that is scalable in the number of targets and the number of measurements, enabling millions of targets to be tracked.
  • Determining the information content of multi-sensor multi-target tracking systems: In sensor networks with high-density information, bandwidth may be a constraint for multi-sensor multi-target tracking. I will describe methods for determining the information content in networks of sensors used for multi-target tracking.
  • Distributed integration of data from multiple sensors: Operators are required to make decisions based on information from multiple tracking systems to enhance overall situational awareness. A new method of distributed multi-sensor multi-target tracking is developed for multi-sensor integration that mitigates corruption from inaccurate or misleading data sources.
  • Assessment of threat in multi-target surveillance applications: Large-scale tracking of many objects enables the identification of immediate threats. However, some threats may be more pertinent than others. A new formulation of adversarial risk was developed to provide situational awareness for operators to aid prioritization of sensing assets.
  • How to co-ordinate a strike on a moving target with an unknown number of interceptors: Suppose that you have access to a number of interceptors that can strike a moving target at a time of your choosing but you do not know how many there are or where they are. I will describe how to co-ordinate the interception of the target.
  • Performance bounds for multi-target tracking estimators: The inverse of the Fisher information, known as the Cramér-Rao bound, provides a lower bound on the variance or covariance achievable by an unbiased estimator of a parameter and is fundamental for statistical analysis. A Cramér-Rao bound was derived for point processes, based on mathematical concepts from quantum field theory, that generalizes the concept to variables with spatial variates.
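To fix ideas on the last bullet, here is a sketch of the classical scalar Cramér-Rao bound (not the point-process generalization discussed in the talk): for n i.i.d. samples from N(mu, sigma^2) with known sigma, the Fisher information is n/sigma^2, so any unbiased estimator of mu has variance at least sigma^2/n, which the sample mean attains:

```python
import numpy as np

# Classical Cramer-Rao bound for the mean of a Gaussian with known variance.
# The bound is the inverse Fisher information: sigma^2 / n.
rng = np.random.default_rng(0)
n, sigma, mu = 50, 2.0, 1.0
crb = sigma**2 / n

# Empirical variance of the sample mean over many repeated experiments;
# it should sit at (not below) the bound, since the sample mean is efficient.
estimates = rng.normal(mu, sigma, size=(20000, n)).mean(axis=1)
print(f"CRB              : {crb:.4f}")
print(f"Var(sample mean) : {estimates.var():.4f}")
```

The empirical variance lands close to sigma^2/n = 0.08, illustrating that the bound is tight for this estimator.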


Daniel Clark holds a Chair in Electronics and Computer Science at the University of Southampton. His research interests are in the development of the theory and applications of multi-object estimation algorithms for sensor fusion problems. He has demonstrated capability across a wide range of military application domains, including harbour surveillance, detection and tracking of objects in underwater environments, and space situational awareness. He is a Fellow of the Royal Aeronautical Society, the Institution of Engineering and Technology, and the Institute of Mathematics and its Applications.


Tutorial 4


Perception systems play an essential role in giving underwater robots autonomous capabilities, and especially in creating robotic systems that can inspect the environment. The most common perception sensors used by underwater robots are optical cameras and sonar systems. Light attenuation and turbidity decrease the performance of underwater optical imaging, while low resolution and lack of texture degrade sonar performance. With the advancements in both hardware and software capabilities, the underwater research community is becoming interested in fusing visual and sonar data to improve the capabilities of underwater robots for object recognition, mapping, and navigation. This interactive tutorial will discuss how polarized imaging and sonar data can be used together to advance the field of autonomous underwater robotics. The tutorial will give an overview of polarized optical imaging, highlighting the basic principles and benefits of this imaging technique. Furthermore, we will discuss recent advancements in data-driven approaches for object identification in sonar datasets. Lastly, we will introduce an approach to fuse sonar data and polarized images for creating 3D reconstructions of the environment.
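As a taste of the basic principle behind polarized imaging, here is a generic Stokes-parameter recovery sketch (standard optics, not the presenters' pipeline): four intensities measured through a linear polarizer at 0, 45, 90, and 135 degrees yield the degree and angle of linear polarization at each pixel:

```python
import numpy as np

# Recover the linear Stokes parameters from four polarizer-angle intensities.
def linear_stokes(i0, i45, i90, i135):
    s0 = 0.5 * (i0 + i45 + i90 + i135)   # total intensity
    s1 = i0 - i90                        # horizontal vs. vertical component
    s2 = i45 - i135                      # +45 vs. -45 degree component
    return s0, s1, s2

def dolp_aolp(s0, s1, s2):
    dolp = np.sqrt(s1**2 + s2**2) / np.maximum(s0, 1e-12)  # degree of linear pol.
    aolp = 0.5 * np.arctan2(s2, s1)                        # angle of linear pol.
    return dolp, aolp

# Fully horizontally polarized light of unit intensity:
# I(0)=1, I(90)=0, I(45)=I(135)=0.5 by Malus's law.
s0, s1, s2 = linear_stokes(1.0, 0.5, 0.0, 0.5)
dolp, aolp = dolp_aolp(s0, s1, s2)
print(dolp, aolp)  # DoLP = 1.0, AoLP = 0.0
```

In underwater scenes, the per-pixel DoLP and AoLP carry cues about surface orientation and scattering that raw intensity lacks, which is what makes the modality attractive for fusion with sonar.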



Corina Barbalata is an Assistant Professor in the Department of Mechanical and Industrial Engineering at Louisiana State University (LSU). Prior to joining Louisiana State University, she was a Postdoctoral Research Fellow in the Naval Architecture and Marine Engineering Department at University of Michigan. She obtained her PhD from Heriot-Watt University (Edinburgh, United Kingdom) in 2017 from the School of Engineering and Physical Sciences, a dual-MS degree from University of Burgundy (France) and Heriot-Watt University (UK), and a BS degree in Automation-System Engineering from Transylvania University (Romania). Her research interests are in autonomy for robotic systems, with a focus on dynamic modeling, control theory, motion planning, and perception systems. Her work is focused on mobile robotic manipulator systems that perform tasks in complex, dynamic and uncertain environments, such as underwater environments or industrial settings.

Katherine A. Skinner is an Assistant Professor in the Department of Robotics at the University of Michigan. She also holds a courtesy appointment in the Department of Naval Architecture and Marine Engineering. Prior to this appointment, she was a Postdoctoral Fellow in the Daniel Guggenheim School of Aerospace Engineering and the School of Earth and Atmospheric Sciences at Georgia Institute of Technology. She received an M.S. and Ph.D. from the Robotics Institute at the University of Michigan, and a B.S.E. in Mechanical and Aerospace Engineering with a Certificate in Applications of Computing from Princeton University.

Jinwei Ye is an assistant professor of Computer Science at George Mason University. She was an assistant professor at Louisiana State University from 2017 to 2021. Her research interests are in the areas of computer vision, computational imaging, and computer graphics, with a focus on geometry and reflectance reconstruction. Before joining academia, she was a senior scientist at Canon U.S.A. and a postdoctoral fellow at US Army Research Lab. She received her Ph.D. in Computer Science from the University of Delaware and B.Eng. from Huazhong University of Science and Technology (China).

Tutorial 5

Multiple Extended Object Tracking for Automotive Applications


In order to safely navigate through traffic, an automated vehicle needs to be aware of the trajectories and dimensions of all dynamic objects (e.g., traffic participants) as well as the locations and dimensions of all stationary objects (e.g., road infrastructure). For this purpose, automated vehicles are equipped with modern high-resolution sensors like LIDAR, RADAR, or cameras that allow objects in the vicinity to be detected. Typically, the sensors generate multiple detections for each object, where the detections are unlabeled, i.e., it is unknown which object gave rise to each detection. Furthermore, the detections are corrupted by sensor noise, e.g., some detections might be clutter, and some detections might be missing. The task of detecting and tracking an unknown number of moving spatially extended objects (e.g., traffic participants) based on noise-corrupted unlabeled measurements is called multiple extended object tracking.

This tutorial will introduce state-of-the-art theory for multiple extended object tracking together with relevant real-world automotive applications. In particular, we will demonstrate applications for different object types, e.g., pedestrians, bicyclists, and cars, using different sensors such as LIDAR, RADAR, and camera. The tutorial is aimed at professionals and academics who are interested in the field of sensor fusion and tracking. As a prerequisite, basic knowledge of sequential Bayesian estimation methods (such as Kalman filtering) is recommended. After attending the tutorial, participants will be familiar with the state of the art in multiple extended object tracking and environment modeling. They will be in a position to implement and evaluate track management, data association, shape estimation, and fusion methods for extended objects.
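As a reminder of the stated prerequisite, here is the generic textbook linear Kalman filter predict/update cycle (a self-check, not the tutorial's own code):

```python
import numpy as np

# Linear Kalman filter: x is the state mean, P its covariance;
# F/Q are the transition model and process noise, H/R the measurement
# model and measurement noise.

def kf_predict(x, P, F, Q):
    return F @ x, F @ P @ F.T + Q

def kf_update(x, P, z, H, R):
    S = H @ P @ H.T + R                 # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
    x = x + K @ (z - H @ x)             # corrected mean
    P = (np.eye(len(x)) - K @ H) @ P    # corrected covariance
    return x, P

# 1D constant-velocity example: state = [position, velocity],
# position-only measurements.
dt = 0.1
F = np.array([[1.0, dt], [0.0, 1.0]])
H = np.array([[1.0, 0.0]])
Q, R = 0.01 * np.eye(2), np.array([[0.5]])
x, P = np.zeros(2), np.eye(2)
x, P = kf_predict(x, P, F, Q)
x, P = kf_update(x, P, np.array([1.0]), H, R)
print(x)  # mean pulled partway toward the measurement at 1.0
```

Extended object tracking generalizes this point-target recursion: each object produces a variable number of unlabeled detections, so the update must additionally estimate shape and handle data association.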



Marcus Baum received his Diploma degree in 2007 and his Ph.D. degree in 2013, both in Computer Science from the Karlsruhe Institute of Technology (KIT), Germany. He is Professor of Computer Science and Head of the Data Fusion Lab at the University of Goettingen, Germany. His research interests are in the areas of signal processing, state estimation, machine learning, sensor data fusion, tracking, and environment perception. He is Area Editor of the Journal of Advances in Information Fusion (JAIF) and Associate Editor of Letters of the IEEE Transactions on Aerospace and Electronic Systems (TAES). He was Local Arrangements Co-Chair of the 19th International Conference on Information Fusion (FUSION) and Technical Co-Chair of the 2016 and 2020 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI). He has organized several special sessions, workshops and tutorials in the area of (multiple) extended object tracking and sensor fusion. He received the best student paper award at the Fusion 2011 conference, and the International Society for Information Fusion (ISIF) awarded him the 2017 ISIF Young Investigator Award for outstanding contributions to the art of information fusion.

Jens Honer studied physics at the University of Stuttgart and received his PhD in theoretical physics (quantum optics) in 2013. In 2013, he joined Valeo Comfort and Driving Assistance Systems (CDA) to design the first data fusion systems based on Radar, Lidar and cameras in the Systems and Functions department, which were used in the first mass-produced level 3 autonomous cars by Honda. From 2017 to 2020 he led the algorithm design for the next-generation environment perception system in Valeo CDA Driving Systems and Functions (DSF) and transitioned in 2020 to his current position in Valeo CDA Driving Advanced Research (DAR) to lead research on environment perception systems. There, he is working on the perception systems for the Valeo Drive4U cars, automated valet parking (type 2), and novel applications for the Valeo sensor portfolio. In 2016 he was appointed Valeo Expert and in 2020 Valeo Senior Expert for Sensor Fusion and Environment Perception. His fields of interest include localization, machine learning, environment perception, extended and multi-target tracking, and statistics. He has co-organized tutorials on multiple extended object tracking and sensor fusion at the 2018 and 2019 International Conference on Information Fusion (FUSION) and the 2021 and 2022 Intelligent Vehicles Symposium (IV).

Tutorial 6

[Figure: Optical non-resolved observation of a GPS satellite by the Purdue Optical Ground Station]

Human-made Space Object Characterization over Large Distances


Human-made space objects, such as satellites, are ubiquitous in our daily lives through the services they provide. Active payloads may communicate with their owner-operators; however, collecting information outside of communication channels is crucial in case of communication interruption or satellite failure, or when dealing with space debris objects. This information collection is plagued by the large distances between the objects and a potential observer. As a result, in general, only non-resolved observations are available. Non-resolved observations do not reveal any details of the objects beyond their center-of-mass or their center-of-reflection (see Figure). Inversion and sensor fusion techniques can be used to ascertain a subset of characterization information, such as the shape and attitude state of the space object. The tutorial teaches the details of space object characterization and how to fuse passive optical information, as collected in brightness measurements over time, with satellite laser ranging measurements for space object characterization. The challenges specific to human-made objects, which feature non-smooth surfaces, concavities, and varying reflection properties, are addressed.
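To illustrate what a single brightness measurement encodes, here is the textbook diffuse (Lambertian) sphere light-curve model often used as a baseline in space object characterization. Real human-made objects, with flat panels, concavities, and specular surfaces, violate this model, which is precisely the inversion challenge the tutorial addresses; the numbers below are illustrative assumptions:

```python
import math

M_SUN = -26.74  # apparent visual magnitude of the Sun (assumed constant)

def sphere_phase_function(phase_angle_rad):
    """Fraction of reflected sunlight seen at a given solar phase angle
    for a diffusely reflecting sphere."""
    a = phase_angle_rad
    return (2.0 / (3.0 * math.pi)) * (math.sin(a) + (math.pi - a) * math.cos(a))

def apparent_magnitude(radius_m, distance_m, albedo, phase_angle_rad):
    """Brightness of a diffuse sphere; larger magnitude means fainter."""
    flux_ratio = (albedo * radius_m**2 / distance_m**2
                  * sphere_phase_function(phase_angle_rad))
    return M_SUN - 2.5 * math.log10(flux_ratio)

# A 1 m radius, albedo-0.2 sphere near GPS altitude (~20,200 km), half phase.
print(apparent_magnitude(1.0, 20200e3, 0.2, math.pi / 2))
```

A time series of such magnitudes (a light curve) is the non-resolved observable; inverting it for shape and attitude is ill-posed for realistic objects, motivating fusion with laser ranging.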



Carolin Frueh is faculty at the School of Aeronautics and Astronautics at Purdue University at the rank of Associate Professor. Prior to joining Purdue, she was a TEES Research Scientist in the Aerospace Department of Texas A&M University and a National Research Council postdoc with the US Air Force Research Laboratory, Kirtland AFB, at the Space Vehicles Directorate. She is the director of the Purdue Optical Ground Station and the chair of the Committee on Space Research (COSPAR) Panel on Potentially Environmentally Detrimental Activities in Space (PEDAS), and she was the recipient of the E.F. Bruhn Teaching Award in 2021.

Her research and expertise are focused on Space Situational Awareness and Space Domain Awareness, including optical observations, multi-target tracking and detection, information theory, machine learning, low-observability systems, and object characterization. She has authored more than 90 conference proceedings and over 30 peer-reviewed journal papers.