Development of multimodal user interfaces by interpretation and by compiled components: a comparative analysis between InterpiXML and OpenInterface

The final goal of this thesis is to integrate multimodality into two platforms. The modalities considered are pen-based gesture recognition and camera-based hand gesture recognition. We will then carry out a comparative study of these two platforms.

More precisely, our goal is to integrate this multimodality into two platforms, InterpiXML and OpenInterface, which we will introduce in the following chapters. Both modalities will be integrated on both platforms.

To achieve this, we will modify the InterpiXML architecture to make it aware of pen-based gestures and of natural hand gestures.

For OpenInterface, we will develop two generic components, one for pen-based gesture recognition and one for hand gesture recognition. This genericity will allow OpenInterface to reuse these components in any application, as illustrated by the sketch below.
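As a rough illustration of what such genericity means, the following sketch shows one possible event-based contract between a recognition component and its client applications. The names used here (GestureListener, GestureRecognizerComponent, gestureRecognized) are illustrative assumptions of ours, not the actual OpenInterface interfaces.

```java
import java.util.ArrayList;
import java.util.List;

/** Callback through which any application can receive recognized gestures.
 *  Hypothetical interface, not part of the actual OpenInterface API. */
interface GestureListener {
    /** Called by the component when a gesture is recognized.
     *  @param gestureName symbolic name of the gesture (e.g. "circle")
     *  @param confidence  recognition score between 0.0 and 1.0 */
    void gestureRecognized(String gestureName, double confidence);
}

/** Skeleton of a generic recognition component: it knows nothing about
 *  the application; it only notifies the listeners registered with it. */
abstract class GestureRecognizerComponent {
    private final List<GestureListener> listeners = new ArrayList<GestureListener>();

    /** Any application subscribes here, which is what makes the component reusable. */
    public void addGestureListener(GestureListener listener) {
        listeners.add(listener);
    }

    /** Concrete recognizers (pen-based, camera-based) call this on recognition. */
    protected void fireGestureRecognized(String name, double confidence) {
        for (GestureListener listener : listeners) {
            listener.gestureRecognized(name, confidence);
        }
    }
}
```

Because the component only emits gesture events through this kind of listener interface, neither recognizer needs any knowledge of the application that consumes its output.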

Once these modalities are integrated, we will evaluate the two platforms and compare them. The comparison will be assessed in terms of the CARE properties, but also in terms of usability based on the IBM questionnaires. To reach this final goal, we will present an experiment that we performed.
M.Sc. thesis
UCL, Louvain-la-Neuve, 28 August 2007