EAGER: Collaborative Research: Data Visualizations for Linguistically Annotated, Publicly Shared, Video Corpora for American Sign Language (ASL)

Description

Linguistic research on ASL has been held back by the lack of precise tools for measuring, across large corpora, the non-manual articulations (i.e., facial expressions and head gestures) that carry key grammatical information in sign languages. The same limitations have, until now, also held back computer science research on sign language recognition and generation.

Through prior NSF support, the PIs have created valuable resources to serve the research, education, and sign language communities, including: computational techniques for the analysis of American Sign Language (ASL) videos and the SignStream software for linguistic annotation of sign language data; large linguistically annotated and computationally analyzed corpora with videos from native signers; and an online Data Access Interface (DAI) that enables intuitive and flexible searching, browsing, and downloading, providing easy access to these publicly shared corpora. They have also exploited these corpora for research on the linguistic structure of ASL and on computer-based sign language recognition from video.

Recently, they have developed new versions of SignStream and the DAI, with many new features, that are now ready for public release. Both represent major improvements over earlier versions of these applications and, combined with the public release of large new, richly annotated, and readily searchable data sets, constitute resources that will be of great value to researchers, educators, and students in linguistics and computer science, opening up whole new avenues of research and enabling dramatic improvements in computer-based sign language recognition and generation. The resulting wide-ranging research advances will also contribute to future computer-based applications that enhance communication for and with deaf individuals, as well as applications that offer educational benefits and, more broadly, improve the lives of those who are deaf and hard of hearing.
The goal of this project is to further improve the existing applications by incorporating several powerful enhancements and additional functionalities, so that the shared tools and data can support new kinds of research in both linguistics (analysis of the linguistic properties of ASL and other signed languages) and computer science (sign language recognition and generation). Specifically, the PIs will incorporate into the displays of both the annotation software and the Web interface graphical representations of computer-generated analyses of ASL videos, so that users can visualize the distribution and characteristics of key aspects of the facial expressions and head movements that carry critical linguistic information in sign languages (e.g., head nods and shakes, eyebrow height, and eye aperture). The most challenging aspect of sign language generation has been the production of natural-looking, appropriately timed facial expressions and head movements. The sophisticated approach to tracking and 3D modeling of such expressions developed recently by Metaxas et al. makes it possible to derive precise information about these facial expressions and head gestures from large sets of video files. The part-time effort to be funded for the two key software developers will also enable them to provide the limited technical support that is essential during the first year of the public release of SignStream 3 and DAI 2.
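As a rough illustration of the kind of visualization data involved, per-frame tracking measurements (such as eyebrow height) can be reduced to time-aligned spans that a display could draw alongside linguistic annotations. The sketch below is hypothetical and is not the project's actual software: the function name and the measurement values are invented for illustration only.

```python
# Hypothetical sketch: converting per-frame face-tracking measurements into
# (start, end) frame spans that could be rendered next to annotation tiers.
# The measurement values below are invented for illustration.

def raised_spans(values, threshold):
    """Return (start, end) frame-index pairs where values exceed threshold."""
    spans, start = [], None
    for i, v in enumerate(values):
        if v > threshold and start is None:
            start = i                      # a raised-brow span begins
        elif v <= threshold and start is not None:
            spans.append((start, i - 1))   # the span ends at the prior frame
            start = None
    if start is not None:                  # span still open at the last frame
        spans.append((start, len(values) - 1))
    return spans

# Invented per-frame eyebrow-height values (arbitrary units).
eyebrow_height = [0.1, 0.2, 0.7, 0.8, 0.75, 0.3, 0.2, 0.6, 0.65, 0.2]
print(raised_spans(eyebrow_height, 0.5))  # → [(2, 4), (7, 8)]
```

Spans like these could then be drawn as shaded intervals on a video timeline, one track per non-manual marker (eyebrow height, eye aperture, head position).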
Status: Finished
Effective start/end date: 8/1/17 – 7/31/18

Funding

  • National Science Foundation (NSF)

Fingerprint

  • Data visualization
  • Linguistics
  • Computer science
  • Audition
  • Interfaces (computer)
  • Education
  • Display devices
  • Students
  • Communication