Techno Ambiance

Techno Ambiance was a joint-venture audio-visual performance between usnul (as DJ) and me, RVRX (on visuals), hosted by WWPI Radio. Credit to usnul for coming up with the idea for the performance and tagging me in for visuals. The VIPs of the show, however, were the stage-spanning LED wall above us and the short-throw projector at our flank -- my goal was to fill both with audio-reactive visuals.

Event Flyer

TouchDesigner #

My weapon of choice for this undertaking was TouchDesigner, a node-based editor for live visuals. TD isn't an After Effects-esque 'create a render pipeline and export to a file'-type program, but instead a live I/O experience, seemingly used most often in live installations and performances.

Huge credit to Elekktronaut (the king of TD how-tos) and his getting-started series of tutorials for allowing me to speedrun learning this software in 30 days.

My intention with TD was to take the audio from usnul's set and pipe it into responsive visuals, just like your audio visualizers of old, but cooler (and less GPU-friendly). And well, I think I managed to figure something out:

'Outrun-Matrix-Walls.toe' #

Two parallel wireframe planes, an FOV bouncing between 0 and 160 to create a zoom-in/zoom-out effect, a bit of glow, and a twirl distortion mapped to the beat.
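That FOV bounce is really just a triangle-wave LFO. Outside of TD, the same mapping can be sketched in a few lines of Python (the period here is illustrative, not what I actually used):

```python
def bounce(t, period=4.0, lo=0.0, hi=160.0):
    """Triangle-wave oscillation between lo and hi over the given period (seconds).

    Rises from lo to hi in the first half of the period, falls back in the second,
    which is the zoom-in/zoom-out feel when fed into a camera's FOV parameter.
    """
    phase = (t % period) / period        # 0..1 position within the cycle
    tri = 1.0 - abs(2.0 * phase - 1.0)   # 0 -> 1 -> 0 triangle shape
    return lo + (hi - lo) * tri
```

In TD the same shape would come straight out of an LFO CHOP set to a triangle waveform; the function above is just the math made explicit.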

'Pixel-Relocator-Eclipse.toe' #

An incorrectly wired Pixel Relocator, with a few audio-reactive circles.

'Torus-in-Sphere.toe' #

A wireframe torus in a wireframe sphere, with the same FOV trick as the Outrun/Matrix Walls scene, plus the addition of a little camera spin.

'Noise-Frequency-Wave.toe' #

Three monochrome noise elements turned into a grid, bouncing to the low, mid, and high frequencies respectively.

[Only one of the three shown above]. By far the most CPU- and GPU-intensive element in the performance. Had to keep them off until I needed them, as they tank the FPS of the whole TD project. Can't say I quite know why -- but I'd assume the feedback loop is up to no good.

'Other-Miscellaneous.toe' #

Skim through the full livestream at the end of this post to see the rest.

Audio Analysis #

After not being fully satisfied with my own attempts at low- and high-pass filters, I stumbled upon the easy-to-use audio analysis component. Low, Mid, and High I understand conceptually, but how it's able to determine the 'rhythm', or what 'spectral density' even means, I'll have to leave up to the thinkers.
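I'd guess the band values come from something FFT-shaped under the hood. Here's a stdlib-only Python sketch of the low/mid/high idea, using a naive DFT -- the band cutoffs (250 Hz and 4 kHz) are my assumptions, not anything pulled from TD's component:

```python
import cmath
import math

def band_energies(samples, sample_rate,
                  bands=((0, 250), (250, 4000), (4000, 20000))):
    """Sum spectral magnitude into low/mid/high bands (Hz ranges are assumptions).

    Uses a naive O(n^2) DFT for clarity; a real implementation would use an FFT.
    """
    n = len(samples)
    # Magnitude spectrum up to the Nyquist bin
    spectrum = [abs(sum(samples[t] * cmath.exp(-2j * math.pi * k * t / n)
                        for t in range(n)))
                for k in range(n // 2)]
    # Bin k corresponds to frequency k * sample_rate / n
    return [sum(mag for k, mag in enumerate(spectrum)
                if lo <= k * sample_rate / n < hi)
            for lo, hi in bands]
```

Feed it a bass-heavy signal and the first value dominates -- which is all an audio-reactive patch really needs from it.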

Audio Analysis component in TouchDesigner

When mapped to my MIDI board (thanks to Jacques Hoepffner's NanoKontrol2 MIDI map for TouchDesigner), it allowed for easy, centralized control of all the audio-reactive components in my project.
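The nanoKONTROL2 just speaks plain MIDI Control Change messages. As a rough illustration of what a MIDI map is translating (the byte layout follows the standard MIDI spec; the specific CC numbers a given fader sends depend on the device's configuration), decoding one message into a normalized 0-1 value looks like:

```python
def decode_cc(msg):
    """Decode a 3-byte MIDI Control Change message.

    Returns (channel, controller, normalized value in 0..1),
    or None if the message isn't a Control Change.
    """
    status, controller, value = msg
    if status & 0xF0 != 0xB0:   # 0xB0-0xBF = Control Change status bytes
        return None
    channel = status & 0x0F     # low nibble is the MIDI channel (0-15)
    return channel, controller, value / 127.0  # data bytes are 7-bit
```

That normalized value is what ends up scaling a parameter like glow intensity or twirl amount in the TD network.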

Hardware Setup #

The original hope was to use a machine provided by the event production company (intended for running just Resolume), but both Resolume and TouchDesigner are hardware hogs, and no single machine was able to maintain a steady FPS running them both. After playing around with a lot of options, I settled on the following setup:

  • My Windows 10 desktop (RTX 3070, i5-11600K) for TouchDesigner renders,
  • and an M2 Mac Mini running Resolume to drive the outputs to the LED wall and projector,
  • Both hooked up over Ethernet to my old Apple router. With the two machines on a closed network, I was able to serve the TouchDesigner renders [via a "Touch Out" CHOP] from the Win10 machine to a client TouchDesigner process running on the Mac. The Mac then piped the renders to Resolume via a Syphon ["Syphon/Spout Out"] server. This could have been done through Syphon alone, but I didn't trust the 1-gig Ethernet connection to transfer the video as reliably as TD's in-house in/out process.

My Setup main view two monitors two computers

My setup side view

The DJ setup: DJ setup showing two turntables and mixer

The performance #

While the main purpose of Resolume was scene switching and output mapping, I ended up making heavy use of its channel-mixing features (familiar to me as the equivalent of Photoshop's "Blending Modes") to really tie everything together. I did a fair bit of tweaking the project live, which I hadn't initially planned on, but I was running low on scenes and it worked out well -- mostly copy/pasting my audio analysis setup between scenes and tying it to random parameters in the proper range.
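For a sense of what those blending modes actually compute, here are two common ones sketched on normalized (0-1) channel values -- these are the standard Photoshop-style formulas, not anything pulled from Resolume's internals:

```python
def screen(a, b):
    """'Screen' blend: invert, multiply, invert. Brightens, never darkens --
    handy for stacking glowing wireframes without blowing out to pure white."""
    return 1.0 - (1.0 - a) * (1.0 - b)

def add_blend(a, b):
    """'Add' (linear dodge) blend, clamped to the valid range."""
    return min(1.0, a + b)
```

Screen was the kind of mode that let the LED-wall layers stack without one scene completely covering another.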

Reupload of the livestream:

Photos of the performance from the audience