Don't just listen to NASA recordings: experience them. This Kinect installation is driven by dynamically generated audio transcripts, which it uses to search and aggregate data from popular APIs, producing a more contextualized data set that includes images and news articles.

This project is solving the Can You Hear Me Now? Space-y Sounds challenge.




NASA has a wealth of recordings of major missions, shuttle launches, and space in general, documenting space exploration so that everyone can become a passive participant: an explorer searching for new discoveries and possibilities beyond what the eye can see. These recordings often capture moments of deep historical significance.

This project aims to take that journey one step beyond just listening to these moments in time. Transcripts are generated dynamically by indexing the audio files; natural language processing then determines term frequency, popular phrases, and even named entities. These results, along with other metadata from the file such as its date, define query terms that help contextualize the moment for the user, for example with images from Flickr and newspaper articles from The New York Times. Once this aggregated data source has been created from the audio transcript, a Kinect-enabled installation uses it to present an abstract view of the information, along with the audio, providing a more immersive way to experience this content.
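The NLP step above can be sketched as follows. This is a minimal illustration, not the project's actual code: the transcript text, stop-word list, and function name are all hypothetical, standing in for whatever the real pipeline produces from the indexed audio.

```python
import re
from collections import Counter

# Illustrative stop-word list; a real pipeline would use a fuller set.
STOP_WORDS = {"the", "a", "is", "we", "and", "to", "of", "in", "on", "that", "you"}

def query_terms(transcript: str, top_n: int = 5) -> list[str]:
    """Return the most frequent non-stop-words, usable as API query terms."""
    words = re.findall(r"[a-z']+", transcript.lower())
    counts = Counter(w for w in words if w not in STOP_WORDS and len(w) > 2)
    return [word for word, _ in counts.most_common(top_n)]

# Hypothetical transcript fragment for demonstration.
transcript = ("Houston, Tranquility Base here. The Eagle has landed. "
              "Tranquility, we copy you on the ground.")
print(query_terms(transcript))  # "tranquility" ranks first (appears twice)
```

The resulting terms, combined with the recording's date, become the search queries sent to the image and news APIs.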


Microsoft Azure Media Indexer

Indexes audio and generates transcripts

Microsoft Azure

Natural Language Processing for frequency

Microsoft Azure

API Aggregation and Hosting

Microsoft Kinect for Windows

Installation - skeleton tracking


Installation View

Project Information

License: Common Public Attribution License 1.0 (CPAL-1.0)

Source Code/Project URL: