1- I have always wanted to try using Max to take in video data and convert it into sound. I know the Vizzie modules offer some tools that could open up options for this, but I don’t really know how to go about linking the two in a meaningful way. I could use the numerical data the modules output to set values for oscillators and other objects, but I need to spend more time thinking about how to make this more than just some sine waves. The sound material will be Max signal objects, and the machine will be a Max patch with a simple user interface. I want it to be user-friendly enough that anyone who can position themselves in front of the computer’s webcam can ‘play’ the patch: they begin creating music just by being there and hitting the toggle. The patch will take in what it sees and render it into some form of sound; no other action needs to be taken by the listener/performer. I imagine this would most likely live in an art gallery, though it strikes me as more of a toy than an installation or exhibit. Perhaps it would be useful in an educational context or for sheer entertainment value (similar to how a theremin can be a serious performance instrument, a neat toy, or a teaching tool).
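Max itself is graphical, so the eventual patch can’t be shown as text, but the core mapping I have in mind — average frame brightness driving an oscillator’s pitch — can be sketched in Python. Everything below (the frequency range, the block size, the toy 2×2 frame) is a placeholder assumption of mine, not a working patch:

```python
import math

def mean_brightness(frame):
    """Average grayscale value (0-255) of a frame given as rows of pixels."""
    pixels = [p for row in frame for p in row]
    return sum(pixels) / len(pixels)

def brightness_to_freq(b, lo=110.0, hi=880.0):
    """Map a 0-255 brightness linearly onto a frequency range in Hz."""
    return lo + (b / 255.0) * (hi - lo)

def sine_block(freq, sr=44100, n=64):
    """A short block of sine samples, like one signal vector in Max."""
    return [math.sin(2 * math.pi * freq * i / sr) for i in range(n)]

# A toy 2x2 "frame": half the pixels fully bright, half fully dark.
frame = [[0, 255], [255, 0]]
freq = brightness_to_freq(mean_brightness(frame))
print(round(freq, 1))  # 495.0 — halfway between 110 and 880
block = sine_block(freq)
```

In the actual patch, something like a Vizzie analysis module would supply the brightness value and a `cycle~` object would stand in for `sine_block`; the point of the sketch is just that the mapping itself is a one-line linear scaling.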
2- Similar to the first idea, I’d like to find a way to create atmosphere from the surrounding environment by having a Max patch take in sounds from a public space and process them in an interesting way. Conversely, I would also be open to using something like SPEAR to edit, manipulate, and process recorded sounds. (I could even combine both approaches.) I am not sure yet what effects I’d like to achieve, but I would like to make use of the rich library of noises that fills the hallways on the mezzanine of CFA. The students in the practice rooms create the most wonderful collections of sounds, and I would love to try my hand at weaving them into some sort of ambient collage. The sound material will be the sounds collected from the practice-room hallways, processed through software. The ‘machine’ would be a Max patch that runs and loops: it is toggled on to start the sound collage and left on for listeners to appreciate, with no other requirement. Technically, the ‘players’ would be the musicians diligently practicing in the practice rooms. This would almost certainly be part of an art gallery exhibit, though I think it has some cross-disciplinary value: it could be played as an audience awaits the doors to open before taking their seats, or as they leave the concert hall after a performance given by the very students who took part in the work’s genesis.
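One candidate effect for washing the hallway recordings into an ambient bed is a simple feedback delay, which Max would handle natively with `tapin~`/`tapout~`. As a sketch of just the logic — the delay length, feedback amount, and dry/wet mix are all placeholder values I chose — the same process in Python looks like:

```python
def feedback_delay(samples, delay=11025, feedback=0.6, mix=0.5):
    """Wash out a dry recording with a feedback delay line.

    delay    - delay length in samples (11025 = 0.25 s at 44.1 kHz)
    feedback - how much of each echo is fed back into the line
    mix      - dry/wet balance of the output
    """
    buf = [0.0] * delay          # circular buffer acting as the delay line
    out = []
    for i, x in enumerate(samples):
        d = buf[i % delay]       # read the echo from `delay` samples ago
        buf[i % delay] = x + feedback * d  # write input plus decaying echo
        out.append((1 - mix) * x + mix * d)
    return out

# Feeding in a single click produces a train of echoes decaying by
# the feedback factor: 0.5, 0.5, 0.3, 0.18, ...
echoes = feedback_delay([1.0] + [0.0] * 12, delay=4)
```

A long delay time and moderate feedback would smear the practice-room fragments into overlapping layers; running several of these in parallel at different lengths is one cheap route to the collage texture I’m after.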
3- I remember listening to a concert in undergrad where the composer had asked the percussionist operating the prepared piano to let a chain rattle over the piano’s strings for a short time. I really enjoyed that sound and would like to see it realized on a greater scale. I would love to somehow mechanize the action of moving a chain over the strings of an open piano. It could be several chains moving in several directions; perhaps some would be dropped as well. The piano could also be played while the chains are rattling, to create greater entropy and obfuscate the playing of the instrument. If that were the case, then I could see this as either some sort of interactive exhibit or, more likely, the instrument for performing a composition on stage. The machine could be operated by anyone, as it would be totally mechanized, and the piano could be played by anyone with keyboard skills. If it were presented on stage, the player would preferably be a keyboard performer.
– Jorge Padrón
(EDIT: I just noticed I never actually hit ‘Publish’ and that this was stuck as a draft for several days… Sorry about that…)