code | sound portfolio

research + experiments exploring code, sound, algorithmic improvisation, human voice, audio-visual interaction, and the natural world

 

virga, a real-time motion-tracking particle system instrument:

I created Virga as a body-immersive instrument for live performance that uses motion tracking to control the movement of a generative particle system in large-scale projection. Using a Microsoft Kinect and code written in Max MSP, I track the x, y, z coordinates of my left hand in real time as I sing and perform. The particle system then breathes and grows as my hand breathes and grows with my vocalizations. My videos of natural environments from Germany, the United Kingdom, Iceland, Greenland, and Australia are projected beneath the particle system.

Virga is the phenomenon of rain falling from a cloud and evaporating before it touches the Earth’s surface.
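
For readers curious about the mapping, here is a minimal sketch of the idea in Python. The actual instrument is a Max MSP patch; the coordinate ranges, screen size, and parameter names below are assumptions for illustration, not the patch itself.

    # Python stand-in for the Max MSP mapping layer; ranges and names are assumed.
    SCREEN_W, SCREEN_H = 1920, 1080  # projection resolution (assumed)

    def map_hand_to_particles(x, y, z):
        """Map normalized Kinect hand coordinates (0..1) to emitter parameters."""
        emitter_x = x * SCREEN_W          # left/right hand motion moves the emitter
        emitter_y = (1.0 - y) * SCREEN_H  # raising the hand raises the emitter
        spread = 0.1 + 0.9 * z            # hand toward the camera widens the cloud
        rate = int(50 + 450 * z)          # closer hand emits particles more densely
        return emitter_x, emitter_y, spread, rate

    # hand at the center of the tracking volume, mid-depth:
    print(map_hand_to_particles(0.5, 0.5, 0.5))  # (960.0, 540.0, 0.55, 275)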


stochastic weather denver:

I am inspired by the concept of stochastic music composition as a way to interweave concepts from modern science into the field of music composition, creating sound clusters based on musical aspects observed in the natural world. I have always been fascinated by data sonification and non-visual representation of data recorded from the natural environment. With “Stochastic Weather Denver,” I wanted to sing with the weather patterns that create the atmosphere within which I live and breathe in Denver, Colorado.

For the live performance, Denver weather data (monthly rainfall, snowfall, sunshine, etc.) is analyzed and abstracted into pitch classes in Max MSP before being sent to Ableton Live for sonification by synthesizers. To create initial pitch parameters for my vocal improvisations within the sonified environment, I use pitch classes drawn from weather-related jazz standards. These pitch-class sets are then fed into Markov chain generators to further abstract the sound, giving me peculiar vocalization ideas. Rain, wind, water, and birdsong field recordings are also generatively added to the sonic environment for atmospheric effect; the Denver weather data and a simple generative algorithm choose the playback time and speed of each recording.
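
As a rough illustration of the Markov chain stage, here is a minimal Python sketch. The actual system is a Max MSP patch, and the seed pitch classes below are a hypothetical example, not the jazz-standard material used in performance.

    import random

    def build_transitions(pitch_classes):
        """Count first-order transitions between successive pitch classes."""
        table = {}
        for a, b in zip(pitch_classes, pitch_classes[1:]):
            table.setdefault(a, []).append(b)
        return table

    def generate(table, start, length):
        """Walk the chain; at a dead end, jump to a random known state."""
        out = [start]
        while len(out) < length:
            nexts = table.get(out[-1])
            out.append(random.choice(nexts) if nexts else random.choice(list(table)))
        return out

    seed = [0, 2, 4, 7, 9, 7, 4, 2]  # hypothetical seed melody as pitch classes
    print(generate(build_transitions(seed), start=0, length=16))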


the cave:

Obsessed with massive reverbs, varying delay durations, and sonic layers, I composed a sonic environment in Max MSP and Ableton Live within which I could emulate the experience of singing in a generative vocal cave. Below is an example of the piece performed live with synth and madinda as part of “Short Circuits,” an electronic music and improvisation concert with Middlebury College professors and musicians (2018).

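A single layer of such a cave can be sketched as a feedback delay line, where the delay duration and feedback amount shape the sense of distance. Here is a minimal Python stand-in for what is, in practice, a Max MSP and Ableton Live signal chain; all parameter values are illustrative.

    def feedback_delay(signal, sample_rate=44100, delay_sec=0.45, feedback=0.6, wet=0.5):
        """Mix the dry signal with a delayed, fed-back copy of itself."""
        n = int(delay_sec * sample_rate)
        buf = [0.0] * n                          # circular delay buffer
        out = []
        for i, x in enumerate(signal):
            delayed = buf[i % n]                 # sample written n samples ago
            buf[i % n] = x + feedback * delayed  # feed the echo back into the line
            out.append((1 - wet) * x + wet * delayed)
        return out

    # a single click becomes a decaying train of echoes:
    echoes = feedback_delay([1.0] + [0.0] * (2 * 44100))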

geometric vocal visualizer:

Fascinated by visualizing the sound of the human voice in nature, I created this generative geometric vocal visualizer in Max MSP. An envelope follower patch tracks the amplitude of my voice, and that data drives the generative geometric shape’s frequency, movement, and location. The visualization moves through Rocky Mountain nature scenes.
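
The envelope-follower stage can be sketched as rectification followed by one-pole smoothing. This Python stand-in illustrates the idea behind the Max patch; the smoothing coefficient is an assumption.

    def envelope_follower(signal, smoothing=0.995):
        """Track the amplitude contour of a signal, sample by sample."""
        env, out = 0.0, []
        for x in signal:
            rectified = abs(x)                                   # full-wave rectification
            env = smoothing * env + (1 - smoothing) * rectified  # one-pole lowpass
            out.append(env)
        return out

    # each env value can then drive the shape's frequency, motion, and location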


daily sonic gatherings (a sampling):

From supernatural granular-synthesized vocal melodies to AI bird-human communication, this playlist is bound to intrigue the ear. I created this sonic journal to listen closely to my environment, record the moments, find the peculiar sounds, and occasionally send a few of them through effects and algorithms to create something entirely new. Some of my recordings remain raw, while others are transformed through granular synthesis, convolution reverb, filters, and other audio effects in Ableton Live and Max MSP.
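
As a rough illustration of the granular synthesis idea, here is a minimal Python sketch that scatters short, Hann-windowed grains of a recording across an output buffer. The grain length and density are illustrative, not the settings used in the journal.

    import math, random

    def granulate(source, out_len, grain_len=2205, density=200):
        """Overlap-add randomly placed, Hann-windowed grains of `source`."""
        out = [0.0] * out_len
        window = [0.5 - 0.5 * math.cos(2 * math.pi * n / (grain_len - 1))
                  for n in range(grain_len)]
        for _ in range(density):
            src = random.randrange(len(source) - grain_len)  # where to read a grain
            dst = random.randrange(out_len - grain_len)      # where to scatter it
            for n in range(grain_len):
                out[dst + n] += source[src + n] * window[n]
        return out

    # e.g. smear one second of a recording across three seconds of output:
    # cloud = granulate(recording, out_len=3 * 44100)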