transcript

This project takes the NASA recordings one step beyond a passive listening experience, turning them into an immersive and engaging one. By indexing the audio files for transcripts, then enriching that text with data from the Bing API to create a contextualized data source, users can engage with these moments in time in an entirely new way through this Kinect installation.
This project is solving the Can You Hear Me Now? Space-y Sounds challenge.

Description
NASA has a wealth of recordings of major missions, shuttle launches, and space in general, documenting space exploration and allowing everyone to become passive participants: explorers searching for new discoveries and possibilities beyond what the eye can see. These recordings often capture moments of deep historical significance.
This project aims to take that journey one step beyond just listening to these moments in time. Transcripts are generated dynamically by indexing the audio files, and natural language processing then determines word frequency, popular phrases, and even named entities. These results, together with other metadata from the file such as its date, define query terms that contextualize the moment in time for the user: images from Flickr and newspaper articles from the New York Times, for example. Once this aggregated data source has been created from the audio transcript, a Kinect-enabled installation uses it to present an abstract view of the information, along with the audio, for a more immersive way to experience the content.
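The project's own pipeline lives in the linked repository; as a minimal sketch of the frequency step, assuming the transcript is already available as plain text (the stopword list and function name here are illustrative, not the project's actual code):

```python
from collections import Counter
import re

# Illustrative stopword list; a real NLP service would do far more.
STOPWORDS = {"the", "a", "an", "and", "of", "to", "in", "is", "on", "at"}

def query_terms(transcript_text, top_n=5):
    """Derive candidate query terms from a transcript by word frequency."""
    words = re.findall(r"[a-z']+", transcript_text.lower())
    counts = Counter(w for w in words if w not in STOPWORDS and len(w) > 2)
    return [word for word, _ in counts.most_common(top_n)]

print(query_terms("Houston, Tranquility Base here. The Eagle has landed."))
# e.g. ['houston', 'tranquility', 'base', 'here', 'eagle']
```

Terms like these, combined with the recording's date, can then be handed to the image and news APIs described below.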
- Microsoft Azure Media Indexing - indexes the audio files and produces transcripts (Azure Media Indexer; see the transcript sketch after this list)
- Natural Language Processing for frequency - Azure
- API Aggregation and Hosting - Azure (see the aggregation sketch after this list)
- Microsoft Kinect for Windows - installation with skeleton capture on Kinect v2
- Installation view in Unity3D
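Azure Media Indexer's output typically includes caption files alongside keyword data; as a minimal sketch, assuming a TTML caption file as the transcript source (the helper name and file path are illustrative):

```python
import xml.etree.ElementTree as ET

TTML_NS = "{http://www.w3.org/ns/ttml}"

def transcript_text(ttml_path):
    """Flatten a TTML caption file into a single plain-text transcript."""
    root = ET.parse(ttml_path).getroot()
    lines = []
    for p in root.iter(TTML_NS + "p"):  # one <p> element per caption cue
        text = "".join(p.itertext()).strip()
        if text:
            lines.append(text)
    return " ".join(lines)
```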
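For the aggregation step, a hedged sketch assuming Flickr's photo search and the New York Times Article Search API as the two context sources (endpoints and parameters are illustrative of those public APIs; the keys are placeholders):

```python
import requests

FLICKR_URL = "https://api.flickr.com/services/rest/"
NYT_URL = "https://api.nytimes.com/svc/search/v2/articlesearch.json"

def aggregate_context(term, date_yyyymmdd, flickr_key, nyt_key):
    """Fetch images and articles for one query term around a mission date."""
    photos = requests.get(FLICKR_URL, params={
        "method": "flickr.photos.search",
        "api_key": flickr_key,
        "text": term,
        "format": "json",
        "nojsoncallback": 1,
    }).json()
    articles = requests.get(NYT_URL, params={
        "q": term,
        "begin_date": date_yyyymmdd,  # e.g. "19690720"
        "api-key": nyt_key,
    }).json()
    # One contextualized record for the Kinect installation to consume.
    return {"term": term, "photos": photos, "articles": articles}
```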
License: Common Public Attribution License 1.0 (CPAL-1.0)
Source Code/Project URL: https://github.com/spaceappsnyc/transcript
Presentation: https://speakerdeck.com/bitchwhocodes/nasaspaceapps-nyc-april-12-2015