An audio-visual collaboration between Wiklow, Diagraf and EWERX, Liquid Architecture approaches generative composition in an immersive environment. Combining LIDAR data and creative modeling, real-world buildings, structures and imaginary landscapes are displaced and reconstructed point by point in a 3D immersive projection environment.
This project was conceived and developed under the artist-in-residence program at the Société des arts technologiques in Montréal. It was in development for most of the year, and multiple iterations were produced and presented in a variety of formats. This was the most challenging visuals project I have worked on, mainly because we used new and unfamiliar tools and techniques, which involved much experimentation and exploration. I learned a great deal and am very grateful for the privilege of working with my talented collaborators, Michael Dean (Wiklow) and Patrick Trudeau (Diagraf).
Refer to Wiklow’s excellent write-up for a more in-depth look at the project, the artistic vision and technical implementation.
Liquid Architecture performances/presentations:
- IX Symposium [Demo]. La Société des arts technologiques. Montreal, Canada. 2 June 2017.
- MUTEK Montréal. La Société des arts technologiques. Montreal, Canada. 24 August 2017.
- Festival TransArt. Stahlbau PICHLER. Bolzano, Italy. 10 September 2017.
- MAPP MTL. Moment Factory. Montreal, Canada. 7 October 2017.
- Satosphère. La Société des arts technologiques. Montreal, Canada. 31 October – 25 November 2017.
- MUTEK.MX. Papalote Museo del Niño. Mexico City, Mexico. 21–23 November 2017.
This was our first show in a dome and my first VJ performance in one. Much of the content I used was created by Diagraf, who has a lot of experience building visuals for domes, but I also managed to port some Oculon material over after overcoming a few challenges. In this post I’ll describe the process I used for building visual content for the dome.
First, it’s important to understand how the dome works and what it expects as input.
To create an immersive surround image, the dome uses multiple projectors to project on a spherical surface. The business of managing the multiple projectors, splicing and overlapping images, and spherical warping is all handled internally by the Satosphere. You output to the dome as if it were a regular external monitor, and the software responsible for outputting your content (in my case, Resolume) still pushes out a single rectangular image.
The difference is that this image needs to be in domemaster format. Domemaster is a spherical fisheye distorted image: a 2D image that becomes an immersive environment when projected on the inside of a sphere. For example:
When passed through the dome system, images like this are “unwarped” and spliced up among the projectors to be shown on the inside surface of the dome, creating an image that surrounds you.
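Conceptually, a domemaster image is an equidistant (“fisheye”) projection: the angle away from straight up maps linearly to the distance from the image centre. Here’s a small sketch of that mapping (the helper and its parameter names are my own, not from any dome tool), assuming the dome’s field of view is configurable since domes like the Satosphere extend past 180 degrees:

```python
import math

def direction_to_domemaster_uv(azimuth, elevation, fov_deg=180.0):
    """Map a view direction to normalized (u, v) in a domemaster image.

    azimuth: radians around the horizon (0 = front); elevation: radians
    above the horizon (pi/2 = zenith, the centre of the image).
    Equidistant fisheye rule: radius grows linearly with the angle
    away from the zenith.
    """
    half_fov = math.radians(fov_deg) / 2.0
    r = (math.pi / 2.0 - elevation) / half_fov  # 0 at zenith, 1 at the rim
    u = 0.5 + 0.5 * r * math.sin(azimuth)
    v = 0.5 + 0.5 * r * math.cos(azimuth)
    return u, v

# Straight up lands at the centre of the image:
centre = direction_to_domemaster_uv(0.0, math.pi / 2.0)
```

The inverse of this mapping is what the dome system applies when it “unwarps” the image back onto the sphere.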
I had previously made some content for the Satosphere, but I “cheated” by making 2D radial designs that didn’t reveal the unwarping applied to them. Making real 3D immersive content was a new challenge.
Rendering actual domemaster output requires a lot of processing power and I couldn’t do this in real-time with Oculon. Normally I perform with Oculon running live, generating and improvising the material as I go, but for this show I needed to pre-render video clips of all the content I wanted to use. I lost the ability to react to audio input or change simulation parameters on the fly. The only option was to build a pipeline to make domemaster video clips with Oculon.
3D world → hemicube frames
Normally, a virtual 3D environment is represented on your flat 2D screen by placing a virtual camera in the 3D world and rendering what it sees (in one particular direction) to a flat image.
To create an immersive view of a virtual 3D world, you need to be able to see in every direction. In the dome, you can look around you a full 360 degrees horizontally, and roughly 180–220 degrees vertically from front to back (over the top). You can’t look down, since the floor is not part of the projection surface.
One way to build this image is to stitch together a view of the scene from every direction (top, front, back, left, right), creating a hemicube of images, then apply the fisheye distortion to it. This is not the only method, but it was the most straightforward way to do it with my existing render pipeline. This method is described in more detail here.
Now I had to render 5 different camera angles for each frame of video. I built a 5-camera rig, using code from ofxDomemaster as a starting point. There is a “master” camera, which moves and looks around the scene, and 4 other cameras attached to it, each looking in one of the other perpendicular directions. Each frame is rendered 5 times, once per camera, and saved to disk as a PNG. This process is extremely time-consuming, with render times of over 1 second per frame! That means a 5-minute clip takes nearly 3 hours to render. And that’s only the first step…
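The rig geometry can be sketched like this (a minimal illustration with my own function and variable names, not code from ofxDomemaster or Oculon, which are C++). All five cameras share the master camera’s position; each renders a square 90-degree field of view along one of five perpendicular axes, and “down” is omitted because the floor isn’t projected:

```python
def hemicube_directions(forward, up, right):
    """Return the five hemicube view directions relative to the master
    camera's local axes (front, back, left, right, top)."""
    flip = lambda v: tuple(-c for c in v)
    return {
        "front": forward,        # the master camera's own view
        "back": flip(forward),
        "left": flip(right),
        "right": right,
        "top": up,
    }

# Master camera looking down -Z with +Y up (a common OpenGL convention).
views = hemicube_directions((0, 0, -1), (0, 1, 0), (1, 0, 0))
```

Each frame, the renderer would loop over these five directions, render the scene once per direction, and write each view to disk as a separate PNG.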
Hemicube frames → domemaster frames
Now I had a folder full of PNG files that looked like this:
Next I needed to stitch these images together and warp them into a single domemaster fisheye image (5 hemicube frames → 1 domemaster frame). This can be done with After Effects plugins like Fulldome, but that is very expensive software. Luckily, I found an open-source command-line tool that does the job: domemaster.
I rendered frames at 2K resolution and was constantly battling running out of disk space. I learned the hard way that the domemaster tool doesn’t warn you about insufficient space: it just keeps processing images for hours while failing to save any! After processing the frames, I ended up with a folder of domemaster frames:
Processing frames is faster than rendering them, but still slow (over an hour for 5 minutes of video). Since domemaster did not take advantage of multiple cores, I could run 4 instances at once, but then disk I/O becomes the bottleneck. Lost in a sea of technical difficulties, and still not done…
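Running four instances at once amounts to fanning the frame list out across worker processes. A sketch of the batching logic (the frame-naming pattern is my own, and the actual domemaster invocation per batch is omitted since its flags aren’t documented here):

```python
import math

def chunk(frames, workers):
    """Split a list of frame filenames into `workers` contiguous batches."""
    size = math.ceil(len(frames) / workers)
    return [frames[i:i + size] for i in range(0, len(frames), size)]

# ~9000 frames is a 5-minute clip at 30 fps.
batches = chunk([f"frame_{i:05d}.png" for i in range(9000)], 4)
# Each batch would then be handed to its own domemaster process
# (e.g. one subprocess.Popen per batch), keeping all four cores busy --
# until disk I/O becomes the bottleneck, as noted above.
```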
Domemaster frames → video
With a folder full of domemaster frames, I still needed to make a video file that I could load up in Resolume and use to VJ. This was done with the ffmpeg command line tool. The videos were rendered to Resolume’s hardware-accelerated DXV format for best performance. I wrote a crude script to automate all these post-processing commands, and there was a lot of trial and error involved until the entire pipeline was nailed down.
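The frames-to-video step boils down to assembling an ffmpeg command per clip. Here’s a hedged sketch of what such a script might build (the file-name pattern and encoder argument are assumptions, not the actual script; whether a given ffmpeg build can write DXV directly varies, and a separate transcode in Resolume’s tools may be needed):

```python
def ffmpeg_cmd(frame_pattern, fps, out_path, codec="dxv"):
    """Build an ffmpeg argument list for a PNG-sequence -> video encode."""
    return [
        "ffmpeg",
        "-framerate", str(fps),   # input sequence frame rate
        "-i", frame_pattern,      # numbered input, e.g. "dome_%05d.png"
        "-c:v", codec,            # target codec (assumption, see above)
        out_path,
    ]

cmd = ffmpeg_cmd("dome_%05d.png", 30, "clip.mov")
```

Generating the command as a list (rather than a shell string) avoids quoting problems when paths contain spaces.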
Since I had no access to the dome before the show, I had no idea how my material would look! All I could see was this:
The SAT has a preview tool called DomePort, which I used to preview my domemaster video clips in a virtual dome. It lets you see how the video looks once it’s unwarped back onto a sphere, and look around the room. It can also take Syphon input, so I could see how multiple dome videos mixed in Resolume would look. Due to the lengthy render process, I had to make a lot of blind guesses, since I couldn’t afford many trial-and-error passes.
Overall the Nuit Blanche show was a great success and I had a lot of fun performing at the SAT. However, I think there’s a lot of room for improvement in the content-creation process to make it less tedious and time-consuming and allow greater artistic flexibility. If I were to do this more often I would at least invest in a powerful rendering machine, since my Macbook Pro could barely manage it. I would love to one day be able to plug Oculon directly into a dome and run everything in real-time, but this is probably a long way (and a lot of expensive hardware) away. This work has also gotten me excited about working with virtual reality content, as dome immersion and VR immersion have a lot of parallels. I hope to invest more time in improving my capabilities in this area in the near future.
This October I was invited to MUTEK Mexico in Mexico City. MUTEK Montreal has always been a very special festival for me, because I can attribute my inspiration to build Oculon and perform visuals to my early experiences at MUTEK. After 4 years of performing at MUTEK Montreal, it was a treat to experience its incarnation in Mexico.
I was impressed beyond my expectations by MUTEK.MX: the venues, the programming, the audio/video production quality, the staff, the crowd, the hospitality (!) and the overall vibe. It was one of my best festival experiences, and stayed true to the MUTEK spirit which I haven’t found elsewhere.
I got to play visuals for Shackleton and Lotic on a bright, crisp LED screen at FMCC. Orbital Mechanics was also participating in the festival with a showing of their immersive dome film, “Dark Matter”, and visuals by Diagraf.
This will be my final performance using the current version of Oculon. The codebase has grown outdated and unwieldy, making it very frustrating to work with. So I’ve decided to completely rewrite the engine and interface from scratch, and port over any content I want to keep. I’ve struggled with this decision for over a year with several false starts, since a rewrite is daunting and time-consuming, and I am still in the planning stages.
MUTEK.MX has given me new inspiration to go forward with the project, because it reminded me that I do still enjoy performing visuals; I just need my instruments and processes to evolve.
The idea for the Trinity video originated over two years ago. One of the earliest modules I built for Oculon, named Tectonic, showed seemingly random blips with crosshairs on a black background.
These were actually the earthquakes from the past 30 days positioned on a hidden map. The data came from the US Geological Survey and the blips were sized relative to the quake magnitudes.
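The underlying mapping is simple: each quake’s longitude and latitude land on a hidden equirectangular map, and the blip radius scales with magnitude. A sketch of the idea (the function, resolution, and scaling constant are my own illustrations, not Oculon’s actual C++ code):

```python
def quake_to_blip(lon, lat, magnitude, width=1920, height=1080):
    """Place a quake on an equirectangular map and size it by magnitude."""
    x = (lon + 180.0) / 360.0 * width   # -180..180 deg -> 0..width
    y = (90.0 - lat) / 180.0 * height   # 90..-90 deg -> 0..height (top-down)
    radius = 2.0 * magnitude            # illustrative linear scaling
    return x, y, radius

# A magnitude-5 quake at (0, 0) lands dead centre of the map.
blip = quake_to_blip(0.0, 0.0, 5.0)
```

With the map itself hidden, only the blips and crosshairs remain, which is why the pattern looked random at first glance.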
One night I was re-watching the documentary Trinity and Beyond, which features a great collection of restored archival footage of nuclear tests. It occurred to me that there must be a database of all these tests, and that if the data included timestamps, location coordinates and magnitudes, it could work with the existing Tectonic code.
After some research, I came across a few data sources. They all varied slightly and had discrepancies in the number of tests. I chose the one with the most complete data set, Johnston’s Archive, and used the Australian Government Geosciences Nuclear Explosions Database as a cross-reference.
Also during this research phase I found that another artist, Isao Hashimoto, had already created a very similar project in 2003, titled “1945-1998”. This is a great piece of work and I encourage everyone to watch it.
At first I was discouraged, thinking I should abandon the idea since it was already done before. After some time passed, this turned to inspiration instead. Trinity would tackle the same subject with a different aesthetic both visually and aurally, and would use the data in different ways. There is always room for multiple interpretations of the same data and different ways of exploring the same concept.
Once the data was formatted and the code could parse it, there were quite a few technical challenges to overcome before this hacked-together VJ module could be turned into a presentable data visualization sequence. I worked on these problems intermittently for a while, without making much progress.
Earlier this year, I realized that 2015 marks the 70th anniversary of the Trinity test, as well as the bombings of Hiroshima and Nagasaki. I thought this would be an appropriate time to release the project, as it would resonate and serve as a means to reflect on these events and the proliferation of nuclear weapons thereafter. Having a deadline is also a great motivator.
From the start, I knew this project would be nothing without sound. My initial experiments with generating audio in Cinder did not yield promising results, and this had to be done right. I called upon my fellow mechanics to handle all the sound. Patrick (Diagraf) created the background soundscape and Phil (Rusty Faders) created the detonation sounds. The sound design on both fronts really took the project to the next level.
To maintain tight synchronization and use the data to manipulate the sound properties, the visualization was run in real-time in Oculon while it sent MIDI data to Ableton, essentially like a live A/V set. This had the added bonus of allowing me to make adjustments on the fly, since nothing was pre-rendered or pre-recorded.
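Driving sound properties from the data means packing a field like explosion yield into a MIDI message. A Python illustration of the idea (the actual Oculon engine is C++, and this mapping function and its constants are my own; the byte layout is standard MIDI channel-voice note-on):

```python
def detonation_to_midi(yield_kt, channel=0, note=60, max_kt=50000.0):
    """Map an explosion yield (kilotons) to a MIDI note-on message,
    with velocity clamped to the valid 1..127 range."""
    velocity = max(1, min(127, int(127 * yield_kt / max_kt)))
    # Status byte 0x90 = note-on, low nibble = channel.
    return bytes([0x90 | channel, note, velocity])

biggest = detonation_to_midi(50000.0)   # Tsar Bomba-scale yield
smallest = detonation_to_midi(15.0)     # Hiroshima-scale yield
```

Ableton then treats these messages like any other instrument input, which is what makes the piece behave like a live A/V set rather than a rendered video.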
I’m very happy with the final result, and the reception so far has been truly incredible: currently at over 860k views and still counting! It was featured in Huffington Post, The Guardian, Vice/Motherboard, Gizmodo/Sploid, The Independent, Fast Company, and selected as a Vimeo Staff Pick, amid the social media frenzy. I’m flabbergasted.
The response has certainly been a great source of encouragement, and we have big plans for expanding on the Trinity project in the near future. We’ve also seen how data can be both beautiful and powerful when combined with visuals and sound, and plan to explore further into this domain. Stay tuned!
We are three-dimensional beings living in four dimensional spacetime. While we can travel freely through space, we only experience a single moment of time: the specious present. Our perception only extends to this slice of the timeline as we are propelled forward through it. We remember the past with memory and simulate the future in our thoughts, but these instruments are entirely subjective. Our minds struggle with accurate time measurement and reliable data collection.
To address this limitation, we have developed various tools such as clocks and calendars.
Clocks point you to the present, the “you are here” sign for the fourth dimension. Calendars mark points of interest on the timeline. Alarms tell you when a future point of interest becomes the present.
These tools generally focus on the present and future, but our picture of the past remains fuzzy. If I ask you when your next doctor’s appointment is, you can look it up in a calendar instantly with minimal mental exertion. If I ask you how long it’s been since your last visit to the doctor, it will take more effort to answer. Maybe you can look up the date, but answering “how long has it been?” requires another mental calculation on top of that.
Hindsight is an app I made for iPhone and Apple Watch that remembers past events and calculates time intervals. By unburdening your mind from these calculations and making the information readily accessible, it gives you a new sense of awareness and perception of the past.
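The core calculation is simple to state: given a log of past occurrences, report the elapsed time since the most recent one. Here’s a Python illustration of that logic (Hindsight itself is an iOS app, so this is a sketch of the idea, not its code):

```python
from datetime import datetime

def time_since_last(events, now):
    """Return the timedelta since the most recent event at or before
    `now`, or None if there is no such event."""
    past = [e for e in events if e <= now]
    if not past:
        return None
    return now - max(past)

# "How long since my last doctor's visit?" (dates are made up)
visits = [datetime(2017, 3, 2), datetime(2017, 9, 14)]
gap = time_since_last(visits, datetime(2017, 11, 1))
```

Doing this once per tracked event and surfacing the results is exactly the “unburdening” the app aims for.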
It helps you stay on top of anything that needs doing regularly, without requiring you to plan ahead.
Awareness can also be a powerful motivator for changing behaviour, to rein in vices or encourage good habits.
A histogram exposes patterns in the frequencies and intervals of occurrences to give you new insights at a glance. Alerts notify you when a certain amount of time has passed.
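The histogram idea can be sketched as bucketing the gaps between consecutive occurrences (again a Python illustration of the concept, not the app’s Swift code):

```python
from collections import Counter
from datetime import date

def interval_histogram(dates):
    """Count how often each gap length (in days) occurs between
    consecutive occurrences of an event."""
    ordered = sorted(dates)
    gaps = [(b - a).days for a, b in zip(ordered, ordered[1:])]
    return Counter(gaps)

# Three occurrences exactly a week apart -> two 7-day gaps.
hist = interval_histogram([date(2017, 1, 1), date(2017, 1, 8),
                           date(2017, 1, 15)])
```

A spike at one gap length means the event is regular; a long tail means it’s slipping, which is the at-a-glance insight the histogram provides.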
And being a timekeeping tool, of course you can access it from your wrist.
It’s a simple tool to make navigating the fourth dimension a little easier. I hope you find it as useful as I do.
This was cross-posted on Medium.
After a summer hiatus, I joined forces with Diagraf to perform visuals at Bacchanale Story w/ RØDHÅD, Groj & Adam Solomon b2b Alessandroid. The elephant screen is the work of AV Exciters. Another great underground event organized by La Bacchanale Montréal. More photos here.
This year the MUTEK and ELEKTRA festivals joined forces as EM15. It was a week-long audio-visual extravaganza, and I was fortunate to have the opportunity to participate. I performed four separate shows, which really pushed my limits. On Friday, May 30, SWACK took us to the moon and beyond the stars and Alicia Hush got all the juices flowing to close the night. Saturday night at the MAC, Fake_Electronics took us on a mind-bending journey; I particularly enjoyed playing this set. Later that evening I accompanied the ambient soundscapes of Chat Noir.
I thoroughly enjoyed all the music I got to work with, and most importantly, I had the chance to perform alongside friends. A big thanks to SWACK, Alicia Hush, and Fake_Electronics for meeting up for practice and working with me as we all prepared for the festival. This helped me create some new material for each artist, to match their music and give each show a different feel. This year’s MUTEK was quite a treat.
Here’s a small glimpse of found footage. You had to be there.
On May 23, continuing our collaboration from MUTEK_IMG, I performed visuals for Mateo Murphy‘s live set at the Nocturne Numérique at the Musée d’art Contemporain in Montréal. This was a special opening event for various digital arts festivals and exhibits taking place at the MAC, including BIAN and EM15. These events are part of the Printemps Numérique digital arts season taking place in the city.
This winter I played visuals at Montreal’s infamous Igloofest outdoor music festival, one night with Alex from Tokyo and another with Mayssam. The new Videotron stage was a giant enclosure made up of stacked cargo containers, creating the largest projection surface I’ve worked with to date. It was loads of fun and I got to showcase some all new material.
And here’s a video retrospective of the event: