Dates
Thursday, May 12, 2022 - 01:15pm to Thursday, May 12, 2022 - 02:45pm
Event Description

Abstract: In the current post-PC computing era, natural input modalities such as finger touch, voice, and gaze have been widely adopted. However, the signals from these modalities are noisy and inherently uncertain. Moreover, each modality has its own strengths and weaknesses, and a single modality alone often cannot meet all input requirements. In this thesis, we investigate how to synergistically integrate multiple input modalities to amplify their strengths, mitigate their weaknesses, and resolve input uncertainty. We propose a probabilistic framework that infers the user's interaction intentions from multimodal input on mobile devices. By probabilistically combining the different input modalities with the context of the task, the proposed multimodal interaction techniques are tolerant of noisy input and can accommodate ambiguity among possible intentions. Moreover, these techniques can fulfill accessibility needs and offer new, alternative ways to interact with mobile devices. We have implemented three interactive system prototypes to explore multimodal interaction on mobile devices: the first, called VT, allows users to efficiently edit text on mobile devices with finger touch and voice input; the second, called EyeSeeCorrect, enables hands-free text editing with gaze and voice input; and the third, called SeeSaySurf, enables hands-free web browsing via gaze and voice input.


Contact events [at] cs.stonybrook.edu for Zoom information. 

Event Title
Ph.D. Proposal Defense: Maozheng Zhao, 'Multi-modal Interaction on Mobile Devices'