In November 2010 I wrote a detailed write-up about the new activity planned for synnack+0xf8 now that the DVD was done. (Read it here: 0xf8 Phase 2).
In preparing for two upcoming shows, we really wanted to show some progress on the goals we set forth last year. Much work has been done over the past months to rebuild the Jitter patch and create the Max for Live devices needed to interpret the live audio into data for Jitter to use. The show tomorrow will debut the first revision of the new setup.
We're now using M4L devices to send floats (numbers...) representing low-end data and the amplitude of various tracks via udpsend/udpreceive to Jennifer's laptop, where they control different effects in realtime. Version 1 of the iPad TouchOSC interface is also complete for her to use to control the Jitter patch. Today we even added some nifty details to it that I hadn't planned. Simple things that are highly useful. For example, I have a small M4L device that uses the Live API to get the name of the currently playing clip in a given track and send it to her iPad patch, so she knows exactly what stuff I'm messing with.
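If you're curious what that traffic looks like, here's a rough sketch in Python using the python-osc library. This isn't the actual M4L patch (that all lives inside udpsend/udpreceive in Max), and the OSC addresses, IP, and port are made up for illustration; it just shows the shape of the messages flying over to the other laptop.

```python
# Minimal sketch of the kind of OSC/UDP messages the M4L devices send.
# Addresses, IP, and port are hypothetical; the real setup uses
# udpsend/udpreceive objects inside Max for Live, not Python.
from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("192.168.1.20", 9000)  # the laptop running the Jitter patch

# One float per message: low-end energy and per-track amplitude, sent continuously.
client.send_message("/track/1/lowend", 0.62)
client.send_message("/track/1/amp", 0.85)

# The clip-name device sends a string whenever the playing clip changes,
# so the iPad patch can display what's currently running.
client.send_message("/track/1/clipname", "bassline_v3")
```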
I was worried at first that switching from MIDI over LAN to OSC would introduce too much latency, but by messing with the "Track Delay" feature in Ableton Live, I got it pretty much spot on. Looks damn cool, I must say. Now, instead of pre-determined outcomes triggered at specific times along with the music, the music itself sort of "plays" the video effects.
Looking forward to the show tomorrow night to see how it all finally comes to life in front of an audience.