Multimodal data fusion for unobtrusive human physiological sensing
Thesis event information
Date and time of the thesis defence
Place of the thesis defence
L5, Linnanmaa campus
Topic of the dissertation
Multimodal data fusion for unobtrusive human physiological sensing
Doctoral candidate
Master of Science Manuel Lage Cañellas
Faculty and unit
University of Oulu Graduate School, Faculty of Information Technology and Electrical Engineering, Center for Machine Vision and Signal Analysis (CMVS)
Subject of study
Computer Science and Engineering
Opponent
Professor Lasse Lensu, LUT
Custos
Associate Professor Miguel Bordallo López, University of Oulu
Exploring contact-free technologies for reliable health monitoring at home
The increasing demand for continuous health monitoring, particularly in home environments, is driven by the demographic shift toward aging populations and the growing prevalence of individuals living alone. Traditional monitoring systems, often reliant on wearable devices or user-initiated actions, are limited by compliance, comfort, and accessibility. Unobtrusive technologies offer a solution by enabling remote sensing of physiological signals without requiring active user participation, providing continuous and passive monitoring while preserving comfort and autonomy.
This dissertation investigates unobtrusive physiological sensing for reliable human monitoring, focusing on multiple non-contact technologies, including RGB-D cameras, thermal imaging, and millimeter-wave radar. It proposes novel methods for fusing multimodal data to improve accuracy and robustness. To support this research, a multimodal dataset, OMuSense-23, was collected, capturing respiratory and cardiac activities across different poses. This dataset serves both as a benchmark and as a foundation for evaluating challenges such as synchronization complexity, scalability, and ecological validity.
Findings demonstrate that combining multiple sensing modalities enhances monitoring performance, while also introducing challenges in data alignment, processing, and generalization to real-world conditions. The dissertation provides insights into designing and leveraging multimodal sensing frameworks, highlighting strategies for integrating handcrafted and learned features, self-supervised representation learning, and robust fusion techniques. This work contributes to scalable, interpretable, and reliable systems for unobtrusive physiological monitoring in home and assistive living environments.
Created 23.1.2026 | Updated 23.1.2026