Spectroscanner
Converts images into sound in real time by directly mapping pixel x/y position to time/frequency and pixel brightness to amplitude. Frequency will be logarithmically scaled so that octaves are evenly spaced. The mapping of brightness to amplitude will depend on the background color of the board.
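To make the mapping concrete, here is a minimal sketch in TypeScript. The frequency bounds, scan period, and function names are my own placeholders, not settled parameters; the top row of the image maps to the highest pitch.

```ts
// Assumed frequency bounds (roughly a piano's range); these are open design choices.
const F_MIN = 27.5; // Hz (A0)
const F_MAX = 4186; // Hz (C8)

// y = 0 is the top row of the image and maps to the highest pitch.
// Log scaling means equal pixel steps correspond to equal frequency ratios,
// so octaves are evenly spaced on the board.
function yToFrequency(y: number, height: number): number {
  const t = 1 - y / Math.max(height - 1, 1); // 1 at top, 0 at bottom
  return F_MIN * Math.pow(F_MAX / F_MIN, t);
}

// On a chalkboard (dark background) bright marks should be loud;
// on a whiteboard it is the reverse, so dark marks are loud instead.
function brightnessToAmplitude(brightness: number, darkBackground: boolean): number {
  const v = brightness / 255; // 0..255 -> 0..1
  return darkBackground ? v : 1 - v;
}

// x sweeps left to right over one scan period.
function xToTime(x: number, width: number, scanSeconds: number): number {
  return (x / width) * scanSeconds;
}
```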
I decided that two weeks isn't enough time for me to get the range-finder sufficiently functional, so instead, the Spectroscanner will use a computer and a webcam. In the gallery, the software will repeatedly scan the webcam's video feed from left to right, taking a vertical slice of the image at each step and converting it to sound via inverse FFT.
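As a sketch of the slice-to-sound step: rather than a literal inverse FFT (whose bins are linearly spaced), the version below uses the equivalent additive form, summing one zero-phase sinusoid per row, so the log-spaced frequencies from the mapping above can be used directly. All names and parameters here are illustrative.

```ts
// Convert one pixel column into a short audio grain by additive synthesis.
// `column` holds per-row amplitudes in 0..1, index 0 = top of the image.
function sliceToSamples(
  column: Float32Array,
  sampleRate: number,
  grainSeconds: number,
  fMin = 27.5,
  fMax = 4186,
): Float32Array {
  const n = Math.floor(sampleRate * grainSeconds);
  const out = new Float32Array(n);
  const rows = column.length;
  for (let y = 0; y < rows; y++) {
    const amp = column[y];
    if (amp < 0.01) continue; // skip near-silent rows for speed
    const t = 1 - y / Math.max(rows - 1, 1); // top row = highest pitch
    const freq = fMin * Math.pow(fMax / fMin, t);
    const w = (2 * Math.PI * freq) / sampleRate;
    for (let i = 0; i < n; i++) out[i] += amp * Math.sin(w * i);
  }
  // Normalize so dense drawings don't clip.
  let peak = 0;
  for (const s of out) peak = Math.max(peak, Math.abs(s));
  if (peak > 1) for (let i = 0; i < n; i++) out[i] /= peak;
  return out;
}
```

Restarting every sinusoid at zero phase will click at grain boundaries; a real-time version would carry phase from one column to the next, or overlap-add windowed grains.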
The gallery set-up will involve a whiteboard or chalkboard (whichever is more convenient to borrow), with the camera, speaker, and monitor (showing the video capture) pointed towards the board. I intend for the audience to first walk into the camera's field of view, where their presence on the video preview noticeably affects the output sound. This should prompt them to try to control the sound using gestures, and hopefully they will realize that they can sketch sounds on the board for finer control than their bodies allow, slowly discovering the relationship between the sound and the visual. I may add rulers along the x and y axes of the board, indicating time and pitch respectively.
Another ideal context would be to make this software available as a webapp, where users can draw, import images or audio, or use their webcam as input to the system. The user would also have much more control over editing and playback.
Influences:
Vi Hart’s Doodle Music
Spectral Tablature
Photosounder Demo
Tools:
HTML5 video capture/WebAudio (see the capture sketch after this list)
Max/MSP/Jitter (backup; I'd rather not use Max)
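For the HTML5 route, the capture loop might look like the following. Everything here uses standard getUserMedia/canvas calls, but the callback wiring is an assumption about how the pieces above would connect.

```ts
// Grab the webcam, then scan one pixel column per animation frame,
// wrapping from the right edge back to the left.
async function startScanner(onColumn: (column: Float32Array) => void): Promise<void> {
  const stream = await navigator.mediaDevices.getUserMedia({ video: true });
  const video = document.createElement("video");
  video.srcObject = stream;
  await video.play(); // may require a user gesture in some browsers

  const canvas = document.createElement("canvas");
  canvas.width = video.videoWidth;
  canvas.height = video.videoHeight;
  const ctx = canvas.getContext("2d")!;

  let x = 0; // current scan column
  const tick = () => {
    ctx.drawImage(video, 0, 0);
    const { data } = ctx.getImageData(x, 0, 1, canvas.height); // one RGBA column
    const column = new Float32Array(canvas.height);
    for (let y = 0; y < canvas.height; y++) {
      const i = y * 4;
      // Luminance from RGB; brightnessToAmplitude (above) flips it for a light board.
      column[y] = (0.299 * data[i] + 0.587 * data[i + 1] + 0.114 * data[i + 2]) / 255;
    }
    onColumn(column); // e.g. feed sliceToSamples and queue the resulting grain
    x = (x + 1) % canvas.width;
    requestAnimationFrame(tick);
  };
  requestAnimationFrame(tick);
}
```

Each grain from sliceToSamples could then be copied into an AudioBuffer and scheduled with an AudioBufferSourceNode so the scan plays back continuously.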
Also, a slightly different concept, but very effective in terms of interaction: http://www.flong.com/projects/ifp/
-Abby