Archive for May, 2008

Music Works

Friday, May 9th, 2008

Pro Black Sheep: A film by Clayton Broomes Jr.
Music score by Allistar D. Peters

Synopsis: A character-driven satirical drama about a young, extraordinarily intellectual black activist wannabe who is discovered to be the political critic sending out anonymous emails criticizing today's black leadership. Instead of holding it against him, the black leader who finds him out hires his worst critic as a second adviser.

Flow From Far - an animation from Lucia Jeesun Lee. Music created and mixed by Allistar D. Peters

Alone: A film by Clayton Broomes Jr.

Music scored and played by Allistar D. Peters

The Dead Guy: by Clayton Broomes, Jr.

The Dead Guy

For more musical works from Allistar visit: Omorphy’s Audio

More info to come…

Final Lap Thesis

Thursday, May 1st, 2008

It’s the last lap before I present my thesis to the world. After meeting with Agnieszka Roginska from NYU’s Steinhardt Music & Technology division, who specializes in 3D sound & localization, it was clear that I had to switch gears very late in the game, right before my last class presentation in front of the critics. The crits were…
- Red Burns, Chair, ITP
- Bob Greenberg, CEO, R/GA
- Allegra Burnette, Director of Digital Media, MoMA
- Lauren Cornell, Executive Director, Rhizome
- Jake Barton, Principal, and my class peers.

Some of the crits I received were expected because my visuals were still in development, and some were encouraging because the critics saw value in aspects I took for granted. One comment concerned the meaning of the colors in my visuals with respect to sound information; I am now mapping only the data derived from frequency and amplitude. Other comments expressed interest in my process of collecting the sounds at Union Square, where I sat for 17 hours. They were interested in hearing what I saw and felt, and what was different versus the norm. I guess I tell a good story. Lauren even referred to me as an artist.

But the meat of my process was the sound and what I could get from it. Before, I was trying to extract localized information (X, Y, Z coordinates based on the recorded source) from a binaural source recording and then visualize its position on the 3D plane. This cannot be achieved at this point in time due to the complexity of sound itself and the gap between our perception and a computer algorithm. Here’s a breakdown of what I mean.
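To give a sense of the frequency-and-amplitude mapping I mean, here is a minimal, hypothetical sketch using NumPy's FFT. The window size, sample rate, and test tone are assumed round numbers for illustration, not my actual patch:

```python
import numpy as np

def frequency_amplitude(samples, sample_rate):
    """Return (frequencies, amplitudes) for one window of audio samples."""
    window = samples * np.hanning(len(samples))      # taper to reduce spectral leakage
    spectrum = np.fft.rfft(window)
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / sample_rate)
    amps = np.abs(spectrum) / len(samples)           # normalized magnitudes
    return freqs, amps

# A 440 Hz test tone: the strongest bin should land near 440 Hz.
rate = 44100
t = np.arange(2048) / rate
tone = np.sin(2 * np.pi * 440 * t)
freqs, amps = frequency_amplitude(tone, rate)
peak = freqs[np.argmax(amps)]
```

Each (frequency, amplitude) pair from a window like this could then drive a visual parameter such as hue or size.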

The sound we hear is influenced by many factors. First, the location of a sound relative to our ears is calculated in our brains after it’s received by each of our auricles (pinnae), or outer ears. This sound (vibration) is trapped and funneled toward the inner ear. The vibration is then converted into information filtered by the middle and inner ear (malleus, incus, stapes, cochlea, semicircular canals) and transferred electrically to the brain via the auditory and vestibular nerves. <- Hope those were simple enough descriptions.

Here's an image of the ear

But other factors help determine the localization (direction) of sound.
The distance between the ears affects the perceived arrival time of the sound. Does it get to the left ear first or the right ear? Does it sound far away or very close?
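That arrival-time difference can be roughed out with simple geometry. A back-of-the-envelope sketch, where the head width and speed of sound are assumed round numbers and the formula is a spherical-head approximation:

```python
import math

SPEED_OF_SOUND = 343.0   # m/s in air at room temperature
HEAD_WIDTH = 0.18        # m, rough ear-to-ear distance (assumed)

def interaural_time_difference(azimuth_deg):
    """Approximate extra travel time (seconds) to the far ear for a
    source at the given azimuth (0 = straight ahead, 90 = to one side),
    using a Woodworth-style spherical-head approximation."""
    azimuth = math.radians(azimuth_deg)
    radius = HEAD_WIDTH / 2
    return (radius / SPEED_OF_SOUND) * (azimuth + math.sin(azimuth))

# A source directly to one side arrives roughly 0.6-0.7 ms later
# at the far ear; straight ahead, both ears hear it at the same time.
itd = interaural_time_difference(90)
```

That fraction of a millisecond is one of the main cues the brain uses to place a sound on the left-right axis.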
Also, familiarity with sounds and visual cues help determine whether a sound lives in one’s relative space or is a projection. What I mean here is that if we hear a dog barking and it sounds really close, but your visual scope reveals there’s nothing within the space to verify that a dog is present, then the sound was probably manufactured by an external source, maybe a recording of some sort.
In terms of direction, if we hear a plane going by while we are in the middle of the city, we know through conditioning, familiarity, and process of deduction that the sound is coming from above, somewhere. It’s very difficult to perceive height (elevation) and distance.
Sound also changes tone over time as distance changes. For instance, we are all familiar with the Doppler effect, named after Christian Doppler. This is where a sound changes its frequency and wavelength due to its changing distance relative to the listener. Think of a car horn: the pitch indicates that the car is coming towards you, then it’s at your location, then it moves away in the opposite direction. The original sound source, the horn, did not change. Only the conditions through which the sound had to travel changed, relative to the listener at an inconstant distance, compounded by the reflection of sound off the surfaces of its surroundings.
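The pitch shift itself follows the textbook formula for a moving source and a stationary listener. A small sketch, where the horn frequency and car speed are made-up numbers:

```python
SPEED_OF_SOUND = 343.0  # m/s in air

def doppler_frequency(source_freq, source_speed, approaching):
    """Perceived frequency (Hz) of a moving source heard by a
    stationary listener: f' = f * c / (c -/+ v_source)."""
    if approaching:
        return source_freq * SPEED_OF_SOUND / (SPEED_OF_SOUND - source_speed)
    return source_freq * SPEED_OF_SOUND / (SPEED_OF_SOUND + source_speed)

# A 400 Hz car horn at 20 m/s (~45 mph): higher pitch coming, lower going.
heard_coming = doppler_frequency(400, 20, approaching=True)
heard_going = doppler_frequency(400, 20, approaching=False)
```

The horn never changes; only the compression and stretching of the wavefronts between it and the listener does.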

Car horn sound:

As you can hear and imagine, there’s a lot that needs to be calculated, memorized, and recalled in order for us to go about our day with certainty, at least with respect to sound for those who aren’t hearing impaired. But let’s add to this the complexity of many sounds happening at once and changing rapidly. Imagine yourself in a crowded room with a friend where everyone is talking. Think of a bar. Yet you are able to hear your friend with some clarity. This process is called selective and adaptive hearing. Selective in that we focus our hearing on what’s meaningful at the time (the conversation), filtering out, or just turning down the volume on, the other sounds around us. Adaptive in that sounds that do not add any meaning at all are ignored or masked out: for instance, the low hum of a generator, barely audible but annoying at the very least if focused upon. We do this in order to secure our safety and the quality of our everyday conduct. Of course we may not have control over very loud sounds or sounds with irritating frequencies. Think of the screeching of brakes from cars or trains; that really irks me. In that case we either change our environment, cover our ears, or find some other distraction if applicable.
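A crude software analogue of that adaptive masking is a noise gate: anything below a loudness threshold is treated as meaningless and silenced. A hypothetical sketch with NumPy, where the threshold and signals are invented:

```python
import numpy as np

def noise_gate(samples, threshold=0.05):
    """Silence every sample quieter than the threshold,
    a crude stand-in for masking out meaningless sound."""
    gated = samples.copy()
    gated[np.abs(gated) < threshold] = 0.0
    return gated

t = np.linspace(0.0, 1.0, 1000)
hum = 0.02 * np.sin(2 * np.pi * 60 * t)    # faint generator hum
voice = 0.5 * np.sin(2 * np.pi * 440 * t)  # louder, meaningful signal
gated_hum = noise_gate(hum)      # the hum is masked out entirely
gated_voice = noise_gate(voice)  # the voice survives nearly intact
```

Of course the brain’s version is far subtler: it gates by meaning and attention, not just by volume, which is exactly what makes it so hard to reproduce.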

Screeching tires sound:

Inside Bar sound:


How would one create a computer algorithm to compensate for all these factors? Forget the algorithm; how would one even construct this process for a machine? Then there’s the nostalgia of a place, which provokes a certain feeling that can be collectively agreed upon or individualized based on personal reflection.

As you can see, those were some of the challenges I am facing. How can I re-represent those facets of sound visually if I cannot effectively extract some of the basic components? And I am also employing the use of a binaural recording source: two microphones placed on opposite sides of the head. Improbable but not impossible. I do believe that research & technology will continue to take us in new directions and deliver new findings both imaginable and inconceivable. Who ever thought that man would fly or clone an animal?

My thesis will be an ongoing contribution for those who are interested in and researching sound localization.

Generated Visuals to come…