I am interested in the idea of a ‘soundtrack’ and the ability of sound to determine and channel emotions. From the transcendental transformation sequences of anime protagonists to the confident strut of models on the runway, I want to use sound to recreate these powerful gestures. Inspired by artists such as Laetitia Sonami, I want to create a machine that uses facial/body recognition to correlate sound with the movement of the body; a real-time Foley artist for everyday life. The sounds themselves would reference favorite childhood media: blockbuster action films, anime, nature documentaries, children’s cartoons, and sitcoms.
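As a very rough technical sketch of how such a machine might map tracked movement to sound, the snippet below classifies a body point’s motion and picks a cue from a palette of childhood-media sounds. Everything here is hypothetical (the thresholds, the `MOTION_SOUNDS` palette, and the function names are my own placeholders); a real version would sit on top of a pose-estimation library and an audio engine.

```python
# Hypothetical sketch: mapping tracked body movement to sound cues.
# All names and thresholds are placeholders, not a real API.

# Sound palette drawn from the media references above.
MOTION_SOUNDS = {
    "fast": "action_whoosh.wav",      # blockbuster action film
    "rising": "transformation.wav",   # anime transformation sequence
    "steady": "runway_beat.wav",      # confident runway strut
    "still": "ambient_birds.wav",     # nature documentary
}

def classify_motion(speed, vertical_velocity):
    """Crudely classify one tracked body point's movement for this frame."""
    if speed > 2.0:
        return "fast"
    if vertical_velocity > 0.5:
        return "rising"
    if speed > 0.2:
        return "steady"
    return "still"

def pick_cue(speed, vertical_velocity):
    """Return the sound file to trigger for the current frame."""
    return MOTION_SOUNDS[classify_motion(speed, vertical_velocity)]
```

In practice the classification would be far richer (gesture shape, rhythm, which body part moved), but the core loop is the same: estimate motion each frame, classify it, trigger a sound.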
In a similar vein, a second idea treats clothing as a kind of machine: the movement of a body becomes an input for the machine, and sound is the output. Like the sound sculptures of Nick Cave or the haute couture A/W 2000 collection by Viktor & Rolf, this project merges my interest in sound with my interest in clothing. The sounds themselves could either emphasize the already existing noises of clothes – such as the click of a heel or the squeal of a zipper – or add new sounds, much as nature documentaries embellish their footage with sounds that animals don’t actually make. This idea could also be combined with the first, allowing a person to create a multi-layered sound experience.
A final idea involves crowdsourcing to create a composition: a program that moves through Chatroulette, a webcam sex service, or a similar website to capture sound bites of other users. The sound material could either be the already existing sound of the users, or sounds could be requested by the machine. Using a method similar to Amazon’s Mechanical Turk (or even using Mechanical Turk itself), the goal would be to create a sort of sound portrait.