Found Sound Final Proposal – Adam

My proposal is a real-time algorithmic composition program designed to accompany live performances, in which the frequency spectrum of the performance is used as input and a library of collected sounds is used as output. The program will continually parse the incoming spectrum into 25 bands and measure the amplitude of each band. It will then compare the results against the average spectra of the sounds in the library, searching for similarity. The “composition” element derives from the fact that the computer will indeterminately modulate between finding sounds that are very similar to the live input and finding sounds that are very different, in order to create a timbral counterpoint; the playback length of each sound will also be an important element in the “composing”. The sound library would need to be quite large in order for this to work effectively.
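As a rough sketch of the analysis-and-matching step, here is roughly how it might look in Python with numpy. The logarithmic band spacing, the Euclidean distance measure, and the library format are placeholder assumptions for illustration, not final design decisions:

```python
import numpy as np

N_BANDS = 25

def band_amplitudes(frame, sample_rate, n_bands=N_BANDS):
    """Reduce one audio frame to n_bands amplitude values.

    Logarithmic spacing between 20 Hz and Nyquist is an assumption;
    any perceptual band layout would do.
    """
    spectrum = np.abs(np.fft.rfft(frame))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sample_rate)
    edges = np.geomspace(20.0, sample_rate / 2.0, n_bands + 1)
    amps = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (freqs >= lo) & (freqs < hi)
        amps.append(spectrum[mask].mean() if mask.any() else 0.0)
    return np.array(amps)

def pick_sound(live_bands, library, want_similar=True):
    """Choose the library sound whose precomputed 25-band average
    spectrum is nearest to the live input, or farthest from it when
    the program swings toward timbral counterpoint.

    `library` maps sound names to 25-band average-amplitude arrays.
    """
    def distance(name):
        return float(np.linalg.norm(library[name] - live_bands))
    return min(library, key=distance) if want_similar else max(library, key=distance)
```

The “indeterminate modulation” would then amount to flipping `want_similar` (and varying each sound’s playback length) under some chance procedure.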

Since I am currently working on a concerto for piano and electronics, and there are sections in need of electronic accompaniment, I will experiment with using a recording of the piano part from the 3rd movement as input to generate an accompaniment. This is very experimental, but if the results are satisfactory, I may wish to use the program in the concerto.
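As a first offline experiment, I could step through the recording frame by frame and log which library sound the matcher would choose at each moment. Continuing from the sketch above (it reuses `band_amplitudes`, `pick_sound`, `N_BANDS`, and numpy), with the filename, frame size, coin-flip chance procedure, and stand-in library all placeholders:

```python
import random
from scipy.io import wavfile

# Stand-in library of precomputed 25-band averages; in practice these
# would come from analyzing the actual sound collection.
library = {f"sound_{i:03d}": np.random.rand(N_BANDS) for i in range(200)}

sr, audio = wavfile.read("piano_mvt3.wav")   # placeholder filename
if audio.ndim > 1:
    audio = audio.mean(axis=1)               # mix to mono

FRAME = 2048
cues = []
for start in range(0, len(audio) - FRAME, FRAME):
    bands = band_amplitudes(audio[start:start + FRAME], sr)
    # Indeterminately swing between matching and timbral counterpoint.
    cues.append(pick_sound(bands, library, want_similar=random.random() < 0.5))
```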

The sounds, of course, could conceivably be taken from anything; the project itself is more focused on the tool. I will first try to harvest the sounds created by my noise machine, the Computone (the “computer instrument” that granulates percussive sounds). There is no special rationale for choosing these sounds, only that I believe they will sound interesting in conjunction with the piano for this particular presentation, and it would be nice to create a sense of progression between projects in this class. Also, the fact that the sounds were originally “found” once upon a time, only to be granulated and filtered, makes the idea of using them in a “found sound” project compelling (for me at least). If these sounds do not work as I hope, I am willing to change my mind and find a different collection.

Conceptually, this piece is important to me because I cannot find anyone who has used frequency spectra to drive algorithmic composition; it is generally done with probability/Markov chains operating on traditional pitches and rhythms, which does not translate easily into more abstract, texture-focused music.

The important aspect of time in this project is that the analysis of the spectrum and the resulting compositional processes happen in real time, which often translates well for an audience. If they understand that an event is unfolding and exists only in the now, they may be compelled by the (fingers crossed) effectiveness of the project. It may also be relevant that the sounds I am using are the result of granular synthesis; that is to say, the electronic sounds are, in fact, “freeze-frames” of sound in time, recalled only because they are similar to what is happening right now, which is exactly how memory works. We often recall similar visual (or other) “freeze-frames” from our past via association with the present. Whether or not I make this an important conceptual point in the project remains to be seen.

The ultimate effect I hope for in my audience is the experience of a good, or at least interesting, work of music. That is the end goal of any tool I build; it should be used in conjunction with a musical performance. With this tool specifically, though, I would like the audience to be encouraged to hear the similarities and differences between the live sounds and the pre-rendered sounds, so that perhaps they will be encouraged to listen differently to sound in general. The way anything sounds is ultimately determined by its frequency distribution, so composing by overlapping similar distributions is, by itself, a compelling concept.

 

Inspirations:

Davidovsky’s Synchronisms No. 6:

This is a meticulously composed piece in which the electronic tape accompaniment sounds very similar to the piano a great deal of the time, or even sounds like it is somehow coming out of the piano and is part of the live sound. It can easily be argued that such an approach is vital for this genre in order to justify the existence of the tape accompaniment, because it allows for a more homogeneous relationship between the performer and the machine.

 

Similarly, here is his Synchronism No. 9, this time with violin:

 

“Hello World!” by the Iamus Computer:

http://www.youtube.com/watch?v=bD7l4Kg1Rt8 (doesn’t start until 1:40)

This is arguably the first piece fully composed by a computer, and it’s not bad! From the first time I heard this piece, I knew that I wanted to get more involved with algorithmic composition and see what I could uncover by transferring control of compositional parameters over to a computer.

 

 


One thought on “Found Sound Final Proposal – Adam”

  1. Hi Adam,

    You might find it interesting to check out some of the resources from the Universitat Pompeu Fabra in Barcelona (aka the freesound people). Even if you don’t end up using any of their tools (Essentia and Gaia would be the most relevant), it’s worth seeing what they’ve been up to.

    Here’s an article about the structure of freesound which is worth checking out:
    http://mtg.upf.edu/node/2797

    Here’s some stuff about the freesound API:
    http://www.freesound.org/docs/api/

    And here’s some info about Essentia:
    http://essentia.upf.edu/

    If you do explore this route at all, let me know–I’m looking into it a bit for my own research.

    -Abby
