At Music Hack Day, I met Warren Stringer and Matt Howell, and we teamed up around the idea of creating a dynamic graphic environment that reacts to facial expressions as well as voice or instrument sounds. This is what it looked like at the end of the weekend:
The face and sound tracking happen on the left-hand computer. Whenever a blowing sound is detected, a particle is emitted in the direction the face is pointing. This is picked up by the graphics rendering program running on the right-hand computer, which uses the particle coordinates to drive a fluid graphics engine. The ocarina's note information is also picked up and used to modify the colors and affect the dynamics of the fluids.
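The core of "emitting a particle in the direction the face is pointing" boils down to turning the head pose reported by the tracker into a direction vector. A minimal sketch, assuming the tracker gives yaw and pitch angles in radians (the function and struct names here are illustrative, not from the actual project code):

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

// Hypothetical helper: convert head yaw (left/right) and pitch (up/down)
// into a unit direction vector via spherical-to-Cartesian conversion.
// Here z points out of the screen toward the viewer.
Vec3 faceDirection(float yaw, float pitch) {
    return {
        std::sin(yaw) * std::cos(pitch),   // horizontal component
        std::sin(pitch),                   // vertical component
        std::cos(yaw) * std::cos(pitch)    // depth component
    };
}
```

A particle spawned at the face position with this vector as its initial velocity then travels "where the face points"; a face looking straight at the camera (yaw = pitch = 0) yields the vector (0, 0, 1).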
How it’s done:
- face-driven particles are a hack of Jason Saragih and Kyle McDonald’s FaceTracker project, in the openFrameworks environment
- the sound events are detected in Max/MSP, which sends OSC messages to the face tracker
- the fluids are generated with the MSA Fluids library, also running in openFrameworks
- the ocarina also sends OSC to the fluids engine
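OSC is the glue in both links above (Max/MSP to the face tracker, and the ocarina to the fluids engine). In practice one would use a library such as ofxOsc on the openFrameworks side, but the wire format itself is simple. A sketch of what a single-float message like a blow event might look like on the socket, per the OSC 1.0 spec (the `/blow` address is an invented example, not the project's actual address):

```cpp
#include <cstdint>
#include <cstring>
#include <string>
#include <vector>

// Pad the buffer with NUL bytes up to the next 4-byte boundary,
// always adding at least one (OSC strings require a NUL terminator).
static void padTo4(std::vector<uint8_t>& buf) {
    do { buf.push_back(0); } while (buf.size() % 4 != 0);
}

// Build a minimal OSC message: address pattern, type tag string ",f",
// then one big-endian IEEE-754 float argument.
std::vector<uint8_t> oscFloatMessage(const std::string& address, float value) {
    std::vector<uint8_t> buf(address.begin(), address.end());
    padTo4(buf);                 // e.g. "/blow" -> "/blow\0\0\0"
    buf.push_back(',');          // type tag string: exactly one float
    buf.push_back('f');
    padTo4(buf);                 // ",f" -> ",f\0\0"
    uint32_t bits;
    std::memcpy(&bits, &value, 4);
    for (int shift = 24; shift >= 0; shift -= 8)     // big-endian bytes
        buf.push_back(static_cast<uint8_t>(bits >> shift));
    return buf;
}
```

Each field is padded to a multiple of 4 bytes, so a `/blow` message carrying one float is exactly 16 bytes; this is the kind of datagram Max/MSP's `udpsend` object emits and ofxOsc parses.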
Update (June 2012):
I reprogrammed the idea using a 3D particle system, mapped to evolve in the same reference frame as the face tracker. Here’s what it looks like. For now there are only two sound categories, which change the color and behavior of the particles. I think there is more potential in this, although the face tracker sometimes lags a little depending on lighting conditions.
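The "two sound categories change the color and behavior" idea can be sketched as a particle whose spawn parameters depend on which category triggered it. All concrete values below (category names, colors, damping factors) are invented for illustration, not taken from the project:

```cpp
#include <cmath>

// Hypothetical categories standing in for the two detected sound types.
enum class SoundCategory { Blow, Note };

struct Particle {
    float x, y, z;      // position, in the face tracker's reference frame
    float vx, vy, vz;   // velocity
    float r, g, b;      // color
    float damping;      // per-step velocity decay
};

// Spawn a particle whose color and dynamics depend on the sound category.
Particle spawn(SoundCategory cat, float x, float y, float z) {
    Particle p{x, y, z, 0.f, 0.f, 1.f, 0.f, 0.f, 0.f, 0.f};
    if (cat == SoundCategory::Blow) {
        p.r = 0.2f; p.g = 0.6f; p.b = 1.0f;   // cool color, lively motion
        p.damping = 0.98f;
    } else {
        p.r = 1.0f; p.g = 0.4f; p.b = 0.1f;   // warm color, heavier motion
        p.damping = 0.90f;
        p.vz *= 0.5f;
    }
    return p;
}

// One Euler integration step: move, then decay velocity.
void step(Particle& p, float dt) {
    p.x += p.vx * dt; p.y += p.vy * dt; p.z += p.vz * dt;
    p.vx *= p.damping; p.vy *= p.damping; p.vz *= p.damping;
}
```

Keeping the particles in the tracker's own coordinate frame, as described above, means new face positions can seed particles directly without any extra mapping step.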