WordGesture-GAN Paper Wins SIGCHI 2023 Award


The Association for Computing Machinery (ACM) CHI ‘23 conference brings together researchers and practitioners from around the world with the goal of making the world a better place through interactive digital technologies. This year, CHI ‘23 received over 3,000 paper submissions, of which only the top 5% were chosen to receive an award.

Jeremy Chu presenting at SIGCHI

The paper, WordGesture-GAN: Modeling Word-Gesture Movement with Generative Adversarial Network, written by Department of Computer Science PhD students Jeremy Chu, Dongsheng An (graduated), Yan Ma, and Wenzhe Cui (graduated) with Professors David Gu and Xiaojun Bi, won an Honorable Mention Award.

The team detailed a generative adversarial network (GAN)-based technique for synthesizing realistic word gestures, which can be used to evaluate gesture-typing keyboards. A word gesture is a continuous stroke a user draws on a virtual keyboard, connecting letters to spell out a word. As user-drawn gestures become a more popular way of typing, WordGesture-GAN can generate realistic word gestures and predict input performance, aiming to advance the state of the art in generative modeling of human word-gesture production.
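To make the word-gesture concept concrete, the idealized "template" path for a word can be sketched as a sequence of points passing through each letter's key. This is only an illustrative sketch: the key coordinates, the `ideal_gesture` helper, and the straight-line interpolation are assumptions for this example, not the paper's model, which learns how real user traces deviate from such templates.

```python
# Illustrative sketch: a word gesture as a point sequence through key centers.
# Key positions and the interpolation scheme are hypothetical, not from the paper.

# Approximate (x, y) centers for a few QWERTY keys, in arbitrary keyboard units.
KEY_CENTERS = {
    "h": (5.5, 1.0), "e": (2.0, 0.0), "l": (8.5, 1.0), "o": (8.0, 0.0),
}

def ideal_gesture(word, points_per_segment=10):
    """Return the straight-line 'template' gesture through the word's keys.

    A real user-drawn gesture (what WordGesture-GAN models) deviates from
    this template in shape and timing; this sketch only builds the ideal path.
    """
    keys = [KEY_CENTERS[c] for c in word]
    path = []
    for (x0, y0), (x1, y1) in zip(keys, keys[1:]):
        for i in range(points_per_segment):
            t = i / points_per_segment
            path.append((x0 + t * (x1 - x0), y0 + t * (y1 - y0)))
    path.append(keys[-1])  # include the final key center exactly
    return path

path = ideal_gesture("hello")  # starts at 'h', ends at 'o'
```

A generative model like the one described in the paper would then learn to produce plausible human-like variations of such paths rather than the ideal template itself.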

Gesture typing keyboard

Along with collaborator Shumin Zhai from Google, the authors explored deep learning as a tool for modeling human gesture-typing behavior. They created WordGesture-GAN, which can predict interaction behaviors beyond what lab tests and field studies can reveal. Such a model is crucial in the development of user interfaces, particularly for language input technologies such as word-gesture input.

Congratulations to all!




-Kimberly Xiao

Homepage photo credit: L. Azevedo/Shutterstock