iSign: Making the Benefits of Reading Aloud
Accessible to Families with Deaf Children

Prof. Tony Scarlatos
Computer Science Department
(Stony Brook University, SUNY)

Francesco Gallarotti
Undergraduate in Computer Science
(Stony Brook University, SUNY)

Communication within the deaf and hearing-impaired community has historically relied on American Sign Language (ASL). Most deaf children learn their first ASL signs when they are only a few months old, so ASL becomes their first language, and learning to read English is, for them, equivalent to becoming bilingual. Hearing-impaired children learn to read by recognizing written words as images associated with a meaning. They learn to form words by observing a speaker's mouth and lip movements, as well as facial expression. At the same time, they must associate the image of each written word with its meaning in ASL. Outside the classroom, most parents and caregivers do not know ASL, which limits their ability to supplement the instruction provided in school.

The iSign software is designed to address this problem: it translates spoken English words into ASL video clips, giving the child all the information needed to make the correct visual associations.

The prototype system responds to a limited set of semantically related words, such as those found in board books for young children. These board books typically focus on a theme such as farm animals, foods, colors, shapes, or parts of the body; each page contains an illustration and a single word.
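
One natural way to organize such a vocabulary is a simple map from theme to word list. The sketch below uses Swift; the themes and words are hypothetical examples, not the prototype's actual data:

```swift
// Hypothetical themed vocabulary mirroring the board-book structure.
// Each theme maps to the small set of words the recognizer listens for.
let vocabulary: [String: [String]] = [
    "farm animals": ["cow", "pig", "horse", "sheep", "duck"],
    "colors":       ["red", "blue", "green", "yellow"],
    "shapes":       ["circle", "square", "triangle"],
]
```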

The system presents a simple on-screen interface, since most of the interaction happens through voice commands. A complete interaction with the program proceeds as follows (a code sketch of the sequence appears after the list):

  • The speaker pronounces a word clearly while the child watches the speaker's lips. This first step serves both as voice input for the computer and as basic visual information for the child, who begins to associate the lip movements with the English word.
  • After a short pause, during which the computer recognizes the spoken word and fetches the material to be displayed next, an ASL video clip of the word is shown on the screen. This is the second piece of visual information: a familiar sign, presented in the child's first language.
  • The English word is then displayed on the screen together with a large image of the object it names, letting the child associate the written word with the meaning of the sign just shown.
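
The three steps above can be summarized as a small presentation routine. This is a minimal Swift sketch under stated assumptions: the types and the playClip/showWordAndImage callbacks are hypothetical stand-ins for the real media and UI layer, which the text does not describe:

```swift
import Foundation

// One vocabulary entry: the spoken/written word, a video clip of its
// ASL sign, and a large illustrative image (file locations hypothetical).
struct VocabularyEntry {
    let word: String   // e.g. "cow"
    let aslClip: URL   // video clip of the ASL sign
    let image: URL     // picture of the object the word names
}

// Sketch of the presentation sequence described in the list above.
func present(_ entry: VocabularyEntry,
             playClip: (URL) -> Void,
             showWordAndImage: (String, URL) -> Void) {
    // Step 1 has already happened: the recognizer matched `entry.word`
    // while the child watched the speaker's lips.

    // Step 2: play the ASL sign, in the child's first language.
    playClip(entry.aslClip)

    // Step 3: display the written English word with a picture of its meaning.
    showWordAndImage(entry.word, entry.image)
}
```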

The application runs on a Macintosh computer under Mac OS X. It uses the speaker-independent voice recognition built into the operating system to recognize the words in its vocabulary. The advantage of this approach is that the system does not need to be trained for a particular voice. Users do not need to install any software beyond what ships with the operating system, and they need no special hardware other than a microphone.
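
The text does not show the original implementation (which predates Swift), but the same OS-level capability is exposed through AppKit's speaker-independent NSSpeechRecognizer. A minimal sketch, assuming the recognizer's fixed command list is the application's vocabulary:

```swift
import AppKit

// Minimal sketch: listen for a fixed vocabulary with the built-in,
// speaker-independent recognizer; no per-user training is required.
final class WordListener: NSObject, NSSpeechRecognizerDelegate {
    private let recognizer = NSSpeechRecognizer()!
    private let onWord: (String) -> Void

    init(vocabulary: [String], onWord: @escaping (String) -> Void) {
        self.onWord = onWord
        super.init()
        recognizer.commands = vocabulary   // e.g. ["cow", "pig", "horse"]
        recognizer.delegate = self
        recognizer.startListening()
    }

    // Called by the system when one of the vocabulary words is heard.
    func speechRecognizer(_ sender: NSSpeechRecognizer,
                          didRecognizeCommand command: String) {
        onWord(command)   // e.g. look up the entry and call present(_:)
    }
}
```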

The application will be tested at the Cleary School for the Deaf in Nesconset, NY, where it will be installed in the computer lab for use by teachers. We will also loan the application, on a laptop computer with a microphone, to parents of hearing-impaired children through the school, for further testing in the home environment. This work is supported by the National Science Foundation under Grant No. 0203333.