And below I’m diving into the making-of details:
We approached production more like one would with live-action than full-CGI material this time: I created long takes of base action coverage as our source footage first, which the marvelous Marek Duda then cut into the actual film, despite me trying to get in his way here and there. So most pixels are mine; most cutting is his.
Marek used Resolve on his side. For me it was mainly Houdini and Fusion; a lot of each.
First came the sims: standard Houdini Pyro containers with a couple of mods. For one, I added a custom field to store and advect the colors of the streams. Secondly, running all the simulations in 2D mode made everything way more fun, much faster and with a lot more resolution, while still providing enough data to reconstruct the required depth later in the process. It's all about generating the right data, in fact, not getting the perfect image out of the box. The right data can be shaped into pretty much anything later (something I've written about before), and Houdini is a greatly liberating tool when it comes to data manipulation. The trick is to see which data is right at each stage: for me, this sim stage was about getting the motion and interaction of the fluids right, so I spent some time just playing around in the digital sandbox. With a simple render rig I exported both the simulated volumes (one voxel deep; density, velocity and color) and EXR texture sequences (color and velocity).
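The core idea of the custom field, stripped of all the Pyro machinery, is just advecting a color value through a velocity field. Here is a toy 2D sketch of that (nearest-neighbour sampling on plain Python lists; the actual microsolver setup is of course different):

```python
# Minimal 2D semi-Lagrangian advection of a color field -- a toy
# illustration of advecting stream colors through the sim's velocity
# field (not the actual Pyro setup).

def advect(color, vel, dt):
    """color: grid[y][x] of (r, g, b); vel: grid[y][x] of (vx, vy)."""
    h, w = len(color), len(color[0])
    out = [[None] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vx, vy = vel[y][x]
            # trace backwards along the velocity, sample (clamped)
            sx = min(max(int(round(x - vx * dt)), 0), w - 1)
            sy = min(max(int(round(y - vy * dt)), 0), h - 1)
            out[y][x] = color[sy][sx]
    return out

# a red blob pushed one cell to the right by a uniform flow
color = [[(0, 0, 0)] * 4 for _ in range(4)]
color[1][1] = (1, 0, 0)
vel = [[(1.0, 0.0)] * 4 for _ in range(4)]
moved = advect(color, vel, 1.0)
```

Backward tracing (sampling where the fluid came from) is what keeps the advection stable, which is why Pyro and friends do it this way too.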
|Splitting the streams into RGB channels|
The color was used to isolate the interacting streams by placing them into different RGB channels. This allowed me to mask and further manipulate the streams individually, and even do some quirky effects like mixing two fluids into an arbitrary color. More info on the method here and here.
|Mixing yellow and green flows to fiery orange|
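As a toy illustration of that channel trick: with one stream in the red channel and another in green, per-stream masks come for free, and the overlap can be remapped to any mix color. A minimal per-pixel sketch (the names and the orange mix color are just illustrative):

```python
# Each stream lives in its own channel of the rendered texture, so
# where two channels are both present the fluids are mixing -- and that
# overlap can be pushed to an arbitrary color, here a fiery orange.

def mix_streams(pixel, mix_color=(1.0, 0.5, 0.0)):
    """pixel: (r, g, b) with stream A in R and stream B in G."""
    r, g, b = pixel
    overlap = min(r, g)          # both streams present here
    if overlap > 0:
        return tuple(overlap * c for c in mix_color)
    return pixel                 # untouched where only one stream lives
```

In practice this happens on whole image planes in comp rather than per pixel, but the logic is the same.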
I did the camera work in Fusion, placing these long “action” textures back in 3D space. This was way more interactive than 3D animation software would allow, and Marek got his preliminary multi-camera footage, so he could start editing in parallel with me working on the shots. I exported the same cameras back to Houdini for later rendering.
|Setting up cameras in Fusion|
The next and last stage in Houdini was to create and render the graphical elements for the final assembly in compositing. I use this approach really often: rendering merely the technical passes from 3D and doing the real lookdev later in comp. It takes bending your head around a bit, but it surely pays off in flexibility: comp iterations are so much cheaper, and you get incredible power to shape and change things when you need it the most, at the late stage when everything really starts coming together. In commercial work this also means you can iterate on the client's feedback faster.
So here are the Houdini elements, a similar set for each shot. All the individual elements were created from the same initial sim data, reusing many parts, so it made sense to batch-generate them together and later render them as delayed-load procedurals in Mantra.
|Elements' setup in Houdini SOPs|
The first element is the volumetric pass. This is where the sim gets its depth back in a Volume VOP: the flat simulated data gets extruded, blurred, and its density adjusted. Speed benefits aside, there is also a fine-control advantage in doing this in SOPs rather than DOPs. Just like everything else, the coloring is technical and separates the rendered streams into individual RGB channels.
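The extrusion itself is conceptually simple: sweep each flat density value along the depth axis with a smooth falloff so the slab doesn't end in hard walls. A toy sketch of that idea (the cosine falloff and parameters are my illustrative stand-ins, not the actual VOP network):

```python
# Giving the one-voxel-deep density its depth back: extrude each 2D
# value along z with a soft falloff and a density scale -- roughly the
# per-voxel logic of the Volume VOP, in toy form.
import math

def extrude_density(d2, depth=8, scale=1.0):
    """d2: 2D grid of density; returns a grid[z][y][x] volume."""
    half = (depth - 1) / 2.0
    vol = []
    for z in range(depth):
        t = (z - half) / (half + 1e-6)       # -1 .. 1 through the slab
        falloff = math.cos(t * math.pi / 2)  # soft front and back edges
        vol.append([[d * falloff * scale for d in row] for row in d2])
    return vol

vol = extrude_density([[1.0]])   # dense in the middle, fading outwards
```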
Simulated velocity volumes are used to advect particles with a simple SOP solver.
|SOP Solver contents for particle advection|
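Stripped down, that SOP solver is one step repeated every frame: sample the sim's velocity at each particle and move the particle along it. A minimal forward-Euler sketch (nearest-cell sampling, purely illustrative):

```python
# The per-frame body of a simple particle-advection solver: each
# particle samples the velocity field at its position and steps along
# it (forward Euler; the actual SOP solver interpolates the volume).

def advect_particles(particles, vel, dt):
    """particles: list of [x, y]; vel: grid[y][x] of (vx, vy)."""
    h, w = len(vel), len(vel[0])
    out = []
    for x, y in particles:
        ix = min(max(int(x), 0), w - 1)      # clamp to the grid
        iy = min(max(int(y), 0), h - 1)
        vx, vy = vel[iy][ix]
        out.append([x + vx * dt, y + vy * dt])
    return out

# two particles drifting in a uniform rightward flow
vel = [[(1.0, 0.0)] * 4 for _ in range(4)]
pts = advect_particles([[0.0, 0.0], [1.5, 2.0]], vel, 0.5)
```

Run inside a solver, the output of each frame feeds the next, so the particles trace the flow over time.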
A Trail operator over the same data, with some point copying, creates the streamlines.
Complementary to the streamlines are topographic isolines. These are built by displacing a plane vertically with the simulated texture's values and then slicing through it (e.g. with a Clip SOP) in a loop. Additionally, the main volume can be (and was) converted to an SDF and sliced through too.
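In spirit, that loop of Clip SOPs is just intersecting a heightfield with a stack of horizontal levels. A toy version that finds where the displaced surface crosses each level between neighbouring samples (linear interpolation; function names are mine):

```python
# Topographic isolines, toy version: treat the simulated texture as a
# heightfield and "clip" it at evenly spaced levels, keeping the
# crossing points -- the same idea as the Clip SOP in a loop.

def isoline_crossings(height, levels):
    """height: grid[y][x]; returns {level: [(x, y), ...]} where the
    field crosses that level between horizontal neighbours."""
    out = {lv: [] for lv in levels}
    for y, row in enumerate(height):
        for x in range(len(row) - 1):
            a, b = row[x], row[x + 1]
            for lv in levels:
                if (a - lv) * (b - lv) < 0:   # sign change = crossing
                    t = (lv - a) / (b - a)    # linear interpolation
                    out[lv].append((x + t, y))
    return out

lines = isoline_crossings([[0.0, 1.0]], [0.5])
```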
These are all products of the same data: those simulated textures from step 1. Yet another example of this manipulation are the columns. This time a regular grid's points were displaced vertically based on the fluid density value, and each point was then connected to its undisplaced version with an Add SOP.
|Columns generation in Houdini|
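The columns reduce to pairing every grid point with a copy of itself lifted by the local density, which is exactly the pairing the Add SOP does between the two point sets. A minimal sketch (my own illustrative names):

```python
# Columns element, toy form: for each grid point with density, build a
# vertical segment from the flat point to its copy displaced upwards
# by the fluid density -- the point pairing an Add SOP then connects.

def make_columns(density, scale=1.0):
    """density: grid[y][x]; returns [((x, y, 0), (x, y, h)), ...]."""
    cols = []
    for y, row in enumerate(density):
        for x, d in enumerate(row):
            if d > 0:
                cols.append(((x, y, 0.0), (x, y, d * scale)))
    return cols

cols = make_columns([[0.0, 2.0]])   # one column where density lives
```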
Many more elements could be created from the same sim data, but I felt I already had enough for solid visual diversity at this stage and finally took the whole thing to Fusion for comping. With a moderately sophisticated setup (which really boils down to this and the other articles already quoted above), each shot got a set of several different looks exported, plus a few separate elements we used for transitions during the online sessions with Marek.
|Final Fusion composition|
|Each shot has been rendered in several different styles|
It took a few calendar months to finish FLOW, since we had to fit it in whenever the commercial workload allowed, but I believe the actual hands-on production time came to a little more than one man-month. And if you've read all the way down here, be sure to check out Mixpoint's Vimeo and Facebook pages!
And a couple of good books for the road:
Peter S. Stevens, Patterns In Nature
Philip Ball, Flow
Milton Van Dyke, An Album of Fluid Motion
Thank you and enjoy!