High Level Data Fusion on a Multimodal Interactive Application Platform

This research proposes a multimodal fusion framework for high-level data fusion between two or more modalities. It takes as input low-level features extracted from different system devices and analyses them to identify the intrinsic meanings they carry. The extracted meanings are then compared with one another to identify complementarities, ambiguities, and inconsistencies, so as to better understand the user's intention when interacting with the system. The whole fusion life cycle is described and evaluated in an office-environment scenario in which two co-workers interact by voice and movement, which may reveal their intentions. In this case, the fusion focuses on combining modalities to capture context and enhance the user experience.
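The comparison step described above can be illustrated with a minimal sketch. The function below assumes each modality contributes candidate meanings as intent labels with confidence scores, reinforces a label supported by both modalities (complementarity), and flags contradictory or near-tied labels (inconsistency, ambiguity). All names, the scoring rule, and the conflict table are illustrative assumptions, not the paper's actual method.

```python
# Hypothetical sketch of high-level fusion between two modalities.
# Each modality supplies meanings as {label: confidence in [0, 1]};
# conflict_pairs lists label pairs known to contradict each other.

def fuse(speech_meanings, gesture_meanings, conflict_pairs):
    """Compare meanings from two modalities.

    Returns (fused meanings, list of diagnostic messages).
    """
    fused, diagnostics = {}, []
    for s_label, s_conf in speech_meanings.items():
        for g_label, g_conf in gesture_meanings.items():
            if s_label == g_label:
                # Complementary: both modalities support the same intent,
                # so reinforce its confidence (noisy-OR combination).
                fused[s_label] = max(fused.get(s_label, 0.0),
                                     1 - (1 - s_conf) * (1 - g_conf))
            elif frozenset((s_label, g_label)) in conflict_pairs:
                # Inconsistent: the modalities contradict each other.
                diagnostics.append(f"inconsistency: {s_label} vs {g_label}")
    # Ambiguity: one modality proposes several near-equal intents.
    for name, meanings in (("speech", speech_meanings),
                           ("gesture", gesture_meanings)):
        top = sorted(meanings.values(), reverse=True)
        if len(top) > 1 and top[0] - top[1] < 0.1:
            diagnostics.append(f"ambiguity in {name} modality")
    return fused, diagnostics
```

For example, if speech suggests `point_at_screen` with confidence 0.8 and gesture suggests the same label with 0.7, the fused confidence rises to 0.94, while a speech `open` against a gesture `close` listed in the conflict table yields an inconsistency diagnostic instead.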
Proc. of the ACM Int. Conf. on Multimodal Interfaces (ICMI-MLMI 2009), ACM Press, New York [Acceptance rate: 41/118 = 35%]