Sunday, August 16, 2015

Render Elements: Normals

This time let's do a brief anatomic study of a Normals output variable. Below is my original manuscript of an article first published in issue 188 of 3D World magazine.

The shaders and the AOVs

Although this might be obscured from the artist in many popular 3D applications, a shader (and a surface shader in particular) is actually a program run for each pixel*. It takes input variables for that pixel, such as surface position, normal orientation and texture coordinates, and outputs some value based on those inputs. This new pixel value is typically a simulated surface color; however, any variable calculated by the shader can be rendered instead, often creating weird-looking imagery. Such images usually complement the main Beauty render and are called Arbitrary Output Variables (AOVs) or Render Elements.

*Vertex shading can be used in interactive applications like games. Such shaders are calculated per vertex of the geometry being rendered rather than per pixel of the final image, and the resulting values are interpolated across the surface.
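
To make the idea above concrete, here is a minimal sketch in Python of such a per-pixel "shader". The names and the dictionary of outputs are purely illustrative (not any particular renderer's API): besides the beauty color, the function simply hands back other per-pixel variables that could be written out as AOVs.

```python
import numpy as np

def toy_shader(position, normal, uv, light_dir=np.array([0.0, 1.0, 0.0])):
    """Illustrative per-pixel 'shader': returns the beauty color plus
    other per-pixel variables a renderer could write into AOVs."""
    n = normal / np.linalg.norm(normal)             # make sure the normal is unit length
    diffuse = max(float(np.dot(n, light_dir)), 0.0) # simple Lambertian term
    beauty = np.array([diffuse, diffuse, diffuse])  # grayscale surface color
    return {"beauty": beauty, "N": n, "uv": uv}     # any of these can become an AOV
```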

The normals

Normals are vectors perpendicular to the surface. They are already calculated and used by surface shaders, among other input variables, to compute the final surface color, and thus are usually cheap to output as an additional element. They are also easy to encode into an RGB image, each being a three-dimensional vector and so requiring exactly three values to express. In accordance with the widespread CG convention, XYZ coordinates are stored in the corresponding RGB channels (X in Red, Y in Green and Z in Blue). And since all we need to encode is orientation data, an integer format would suffice, although 16-bit depth is highly preferable.
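
As a small illustration of that encoding (a Python/NumPy sketch, not tied to any specific renderer): unit normal components lie in the [-1, 1] range, so they are typically remapped to [0, 1] before being stored in an integer image format.

```python
import numpy as np

def encode_normals(n_img):
    """n_img: float array of shape (H, W, 3) with unit normals in [-1, 1].
    Returns a 16-bit RGB image with X->R, Y->G, Z->B."""
    rgb = (n_img + 1.0) * 0.5                    # remap [-1, 1] -> [0, 1]
    return np.round(rgb * 65535).astype(np.uint16)

def decode_normals(rgb16):
    """Inverse operation: back to float normals in [-1, 1]."""
    return rgb16.astype(np.float64) / 65535.0 * 2.0 - 1.0
```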

This orientation can be expressed in terms of World Space (relative to the global coordinates of the 3D scene) or Screen Space (Tangent Space if we're talking about baking textures). The latter is typically the most useful output for compositing or texture baking, although the former has some advantages as well (like masking out parts of the scene based on global directions, e.g. all faces pointing upwards regardless of camera location).
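
Converting between the two spaces is just a rotation. A hedged sketch, assuming we have the camera's 3x3 world-to-camera rotation matrix available (normals transform with the rotation only, no translation involved):

```python
import numpy as np

def world_to_camera_normals(n_world, cam_rot):
    """n_world: (H, W, 3) normals in world space.
    cam_rot:  3x3 orthonormal world-to-camera rotation matrix.
    Returns the same normals expressed in camera (screen) space."""
    return np.einsum('ij,hwj->hwi', cam_rot, n_world)
```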

Let's take a closer look:

A brief anatomic study of a Normals output variable
A typical Normals element rendered in screen space

The applications

There are many uses for all this treasure. For one, since this is the same data shaders use to light a surface, compositing packages offer relighting tools for simulating local directional lighting and reflection mapping based solely on rendered normals. This method won't generate any shadows, but it is typically lightning fast compared to going back to a 3D program and rerendering.
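
A minimal sketch of that kind of relighting (Python/NumPy, assuming the normals AOV has been decoded back to the [-1, 1] range; the light direction is expressed in the same space as the normals):

```python
import numpy as np

def relight_lambert(normals, light_dir, light_color=(1.0, 1.0, 1.0)):
    """normals: (H, W, 3) decoded normals in [-1, 1].
    light_dir: direction *towards* the light.
    Returns a simple diffuse relight layer - no shadows, just N.L."""
    l = np.asarray(light_dir, dtype=np.float64)
    l = l / np.linalg.norm(l)
    ndotl = np.clip(np.einsum('hwc,c->hw', normals, l), 0.0, 1.0)
    return ndotl[..., None] * np.asarray(light_color, dtype=np.float64)
```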

Normals can also be used as an additional input (alongside others like World Position or ZDepth) for more complex relighting in 2D.

It's easy to notice that the antialiased edges of the Normals AOV produce improper values and thus become a source of relighting artifacts. One way to partly fight this limitation is upscaling the image plates before the relighting operation and downscaling them back afterwards. This obviously slows down the processing, so it should be turned on just prior to the final render of the composition.
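
A rough sketch of that workflow (Python/NumPy plus SciPy for the interpolation; the relighting operation itself is passed in as a function, e.g. the Lambert sketch above, and the scale factor is assumed to be an integer):

```python
import numpy as np
from scipy.ndimage import zoom

def process_at_higher_res(img, factor, operation):
    """Upscale 'img' (H, W, C) by an integer 'factor', run 'operation' on it
    (e.g. a relighting pass), then average the result back down."""
    up = zoom(img, (factor, factor, 1), order=1)   # bilinear upscale
    out = operation(up)
    h, w, c = out.shape
    # box-filter downscale: average each factor x factor block
    return out.reshape(h // factor, factor, w // factor, factor, c).mean(axis=(1, 3))
```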

Manipulating the tonal ranges (grading) of individual channels or color keying the Normals pass can generate masks for further adjustments to the main image (the Beauty render) based on surface orientation: for example, brightening up all the surfaces facing left of the camera, or pulling a Fresnel-like mask out of the blue channel.
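
Here is a hedged sketch of a couple of such masks in Python/NumPy, assuming a screen-space normals pass decoded back to [-1, 1]; the channel conventions and thresholds are only illustrative starting points.

```python
import numpy as np

def grade(x, lo, hi):
    """Linear remap of the [lo, hi] range to [0, 1], clipped - a basic levels tool."""
    return np.clip((x - lo) / (hi - lo), 0.0, 1.0)

def left_facing_mask(normals):
    # Assuming screen-space X points right: surfaces facing screen-left
    # have a negative X component; remap it into a 0..1 mask.
    return grade(-normals[..., 0], 0.0, 1.0)

def fresnel_like_mask(normals):
    # The blue (Z) channel encodes how much a surface faces the camera;
    # inverting it gives a glancing-angle, Fresnel-like mask.
    return grade(1.0 - normals[..., 2], 0.0, 1.0)
```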

All this, empowered by gradient mapping and adding the intermediate results together, provides quite an extensive image manipulation and relighting toolkit, capable of producing pictures like the following one on its own.

Render elements: Normals.
This image was created in compositing using only the Normals render element 
and the individual objects' masks as discussed earlier.

*Bump vs Normals (instead of a post scriptum)

Bump mapping is nothing more than modifying the surface normals based on a height map at render time, so that the shader does its usual calculations but with these new, modified normals as input. Everything else stays the same. This also means that bump mapping and normal mapping are essentially the same technique, and the corresponding maps can easily be converted into one another. Bump textures have the benefit of being easier to edit and usable for displacement, while normal maps are more straightforward for the GPU to interpret, which is their main advantage for real-time applications.

The Normals element we've been studying here could be rendered either with or without bump modifications applied.

Thursday, August 13, 2015

Packing Lighting Data into RGB Channels

Most existing renderers process the Red, Green and Blue channels independently. While this limits the representation of certain optical phenomena (especially those in the domain of physical rather than geometric optics), it provides some advantages as well. For one, it allows encoding lighting information from several sources separately within a single image, which is what we are going to look at in this article.

Advanced masking

For the first example let's consider rendering masks, which we discussed last time. Here is a slightly more advanced scenario: an object is present in reflections and refractions as well, so we want the mask to include those areas too, but at the same time to provide separate control over them.


While the most proper approach might be to render two passes (one masking the object itself and another with primary visibility off, i.e. the object only visible in reflections/refractions), there is a way to pack this data into a single render element if desired. Just set the reflection filter color to pure Red (1,0,0), the refraction filter color to Green (0,1,0), and make sure that the object being masked is present in all these channels plus one extra (meaning it's white in our case). To isolate the mask for a reflection or refraction, we now only need to subtract the Blue channel (in which nothing gets reflected or refracted by design) from Red or Green respectively.
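
Extracting the individual mattes from such a render then reduces to simple channel arithmetic. A Python/NumPy sketch of the layout described above:

```python
import numpy as np

def split_refl_refr_matte(rgb):
    """rgb: (H, W, 3) matte rendered with reflection filter = red,
    refraction filter = green, and the object itself white.
    Blue holds only the directly visible object, so subtracting it
    isolates the reflected / refracted appearances."""
    direct     = np.clip(rgb[..., 2], 0.0, 1.0)                # object seen directly
    reflection = np.clip(rgb[..., 0] - rgb[..., 2], 0.0, 1.0)  # object seen in reflections
    refraction = np.clip(rgb[..., 1] - rgb[..., 2], 0.0, 1.0)  # object seen in refractions
    return direct, reflection, refraction
```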

A custom matte pass allowing isolation of an object in reflections and refractions as well.

As usual when dealing with mask manipulations, special care should be taken to avoid edge corruption, and this method might not be optimal for softer (e.g. motion blurred) contours.

Isolating lights

Another situation could require isolating the contribution of a particular light source to the scene, including its impact on global illumination. Again, it's great (and typically much faster) when your rendering software provides per-light output of the elements, however this is not always an option.

But taking advantage of the separate RGB processing, as soon as we color each source pure Red, Green or Blue, its impact will be contained completely within the corresponding channel and never “spill out” into another one. Yet the light will preserve its complete functionality, including GI and caustics. Of course, all surfaces should be desaturated for this pass (otherwise an initially red object might not react to a light represented in another channel, for example).

The resulting data can be used in compositing as a mask or an overlay to correct and manipulate the individual contribution of each light to the beauty render, for instance to adjust the key/fill lights balance.
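
A minimal sketch of rebalancing two lights stored this way (a hypothetical layout with the key light's contribution in Red and the fill light's in Green, surfaces desaturated at render time):

```python
import numpy as np

def rebalance_lights(light_rgb, key_gain=1.0, fill_gain=1.0):
    """light_rgb: (H, W, 3) render with key light in Red, fill light in Green.
    Returns a grayscale relit result with the new key/fill balance."""
    key  = light_rgb[..., 0] * key_gain
    fill = light_rgb[..., 1] * fill_gain
    return key + fill   # light is additive, so the contributions simply sum
```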

For this scene I had only two light sources (encoded in Red and Green), so in a similar fashion I have added an ambient occlusion light into the otherwise empty Blue channel. Ambient occlusion deserves a separate article of its own, as it has numerous applications in compositing and is a very powerful look adjustment vehicle. Depending on the software, AO could be implemented as a surface shader; either way it fits into a single channel and can be encoded in one custom shader together with some other useful data like UV coordinates or the aforementioned masks.

This weirdly colored additional render contains the separate contribution of each of the two scene lights in the Red and Green channels, while Blue stores ambient occlusion for diffuse objects.

Saving on volumes

The described technique becomes most powerful when applied to volumes. Volumetric objects usually take considerable time to render and are often intended to be used as elements later (which implies they should come in multiple versions). By lighting a volume with three different lights of pure Red, Green and Blue colors, we can get three monochromatic images with different light directions in a single render.

To have a clearer picture while setting up those lights, it is handy to tune them one at a time, in white and with the others off. Enabling all three simultaneously and assigning the channel-specific colors can be done right before the final render – the result in each channel should automatically match the white preview of the corresponding source.

Three lights stored in a single RGB image

The trick now is to manipulate and combine them into the final picture. All sorts of color corrections and compositing techniques can be used here, but I find gradient mapping to be especially powerful. Coming under various names but available almost universally in image editors of all sorts, it is a tool that remaps the monochromatic range of an input onto an arbitrary color gradient, thus “colorizing” a black-and-white image.
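
A small sketch of gradient mapping in Python/NumPy; the gradient is given as a list of positions and RGB colors (real packages expose the same idea through a ramp or curve UI, so the stops below are just an illustrative stand-in):

```python
import numpy as np

def gradient_map(gray, stops):
    """gray:  (H, W) monochrome image with values in [0, 1].
    stops: list of (position, (r, g, b)) pairs, positions ascending.
    Returns an (H, W, 3) colorized image."""
    pos    = np.array([p for p, _ in stops])
    colors = np.array([c for _, c in stops], dtype=np.float64)
    out = np.empty(gray.shape + (3,))
    for ch in range(3):
        out[..., ch] = np.interp(gray, pos, colors[:, ch])
    return out

# e.g. a cheap "sunset" ramp applied to one of the light channels:
# mapped = gradient_map(img[..., 0], [(0.0, (0.02, 0.0, 0.1)),
#                                     (0.5, (0.9, 0.3, 0.1)),
#                                     (1.0, (1.0, 0.95, 0.8))])
```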

Source image before gradient mapping
Gradient-mapped result

Summing it up

The next cool thing is that light is naturally additive, so the results of these custom mappings for different channels can be added together with varying intensities, resulting in multiple lighting versions of the same image.

The qualities of each initial RGB light can be changed drastically by manipulating the gamma, ranges, contrast and intensity of each channel (all of which can actually be achieved by adjusting the target gradient). This also means that light directions with wider coverage should be preferred at the render stage, to provide more flexibility for these adjustments.
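
A hedged sketch of that additive recombination: each channel gets its own gamma, intensity and color (a single tint per light here, as a crude stand-in for a full gradient map), and the remapped results are summed.

```python
import numpy as np

def combine_lights(light_rgb, tints, gammas=(1.0, 1.0, 1.0), gains=(1.0, 1.0, 1.0)):
    """light_rgb: (H, W, 3) render with one light per channel.
    tints: three RGB colors, one per source light.
    Each channel is gamma-corrected, tinted, scaled and summed -
    light being additive, the sum is a new relit version of the image."""
    result = np.zeros(light_rgb.shape[:2] + (3,))
    for ch in range(3):
        mono = np.clip(light_rgb[..., ch], 0.0, 1.0) ** gammas[ch]
        result += gains[ch] * mono[..., None] * np.asarray(tints[ch], dtype=np.float64)
    return result
```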

More results from the same three source channels

On a more general note, this three-lights technique allows for simulating something like the Normals output variable for volumes. And on the other hand, the rendered Normals pass (whose anatomy we are going to discuss next time) can be used for similar lighting manipulations with surfaces.

The main goal of the provided examples was to illustrate a way of thinking – the possibilities are quite endless in fact.

Monday, June 29, 2015

Storing masks in RGB channels

Storing masks in RGB channels
Base image for the examples in this article

Finally returning to posting the original manuscripts of the articles I wrote for 3D World magazine in 2014. This one was first published in issue 186 under the title "The Mighty Multimatte".

In an earlier piece we looked at raster images as data containers which can be used for storing various supplementary information as well as the pictures themselves. One of the most straightforward uses of these capabilities is rendering masks.

A lot can be done to an image after it has been rendered; in fact, contemporary compositing packages even allow us to move a good deal of classical shading into post, often offering a much more interactive workflow. But even if you prefer to polish your beauty renders inside the 3D software until they come out needing no extra touches, there can still be urgent feedback from the client or one last little color tweak to apply under time pressure – and compositing becomes a life saver again.

The perfect matte

However, the success of most compositing operations depends on how many elements you can isolate procedurally (that is, without tracing them manually). And, no less important, with what precision (which first of all refers to antialiasing).

What we are looking for is an antialiased matte with a value of exactly 1.0 (white) for the pixels completely occupied by the object of interest, exactly 0 (black) for the rest of the image,* and antialiasing identical to that of the beauty render.

*Mask values above one and below zero cause no fewer problems than gray ones.
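
To see why those constraints matter, here is a minimal sketch of applying a correction "through" a matte; any matte value outside [0, 1] would push the result beyond a plain blend of the two versions.

```python
import numpy as np

def apply_through_matte(image, corrected, matte):
    """image, corrected: (H, W, 3) beauty render before / after some adjustment.
    matte: (H, W) values, ideally confined to [0, 1].
    The matte simply blends the corrected version over the original."""
    m = matte[..., None]
    return corrected * m + image * (1.0 - m)
```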

Storing masks in RGB channels

Here are the results of a color correction through the corresponding mattes. Left to right: with proper antialiasing, aliased, and with an attempt to fix aliasing through blurring. Note the corrupted edges in the middle example and the dark halos in the right one.

The power of three

It is easy to notice that all this data requires only one raster channel to be stored. Thus a regular RGB image can naturally hold quality mattes for three objects at a time. It only takes applying Constant shaders of pure Red (1,0,0), Green (0,1,0) and Blue (0,0,1) colors to the objects of interest and a black (0,0,0) Constant shader to the rest of the scene. Render layers functionality (implemented in every contemporary 3D package I can think of) comes in very handy here. You might want to turn off slower features like shadows and GI just for the masks element, although typically setting all the shaders in the scene to Constant is already enough for the render engine to optimize itself sufficiently.*

*Due to smart sampling, the antialiasing of the matte element might not be exactly the same as in the beauty pass, but this is normally the closest one can practically get.

Alternatively, some renderers offer a pre-designed output variable (like MultiMatte in V-Ray) to render masks in a similar way. More channels (like Alpha, ZDepth, Motion Vectors or any arbitrary extra image channels) could of course be used for storing more mattes in the same image file, but typically it is not worth the time and inconvenience to set up first and extract later, compared to simply adding more RGB outputs to isolate more objects. Compositing applications and image editors naturally provide the tools to easily use any of the RGBA channels as a mask for an operation, which is another reason to stick with those. (In Photoshop, for instance, it only takes a Ctrl-click in the Channels panel to turn one into a selection.)

Storing masks in RGB channels

An example of 2 objects isolated into the Red and Blue channels with Constant shaders.

What to mask?

Unless we're isolating parts of the scene with a specific purpose in mind, the main guiding principle here is: what will most likely require adjustments? Those are either the parts of the scene in doubt or simply the most important ones. Thus by default the foreground/hero object is worthy of its own matte (a channel). Grouping objects into mask channels based upon their materials or surface qualities is useful as well, since it allows for adjusting in terms of “highlights on metals” or “the color of liquids”.

But the most in demand are usually masks that separate meaningful objects and their distinct parts, especially those of similar color, since those are tricky to isolate with keying.

When working on a sequence of animated shots, consistently using the same colors for the same objects from one shot to another becomes a very useful habit. This way, the same compositing setup can be propagated to new shots most painlessly. It is generally better to add more MultiMatte outputs to the scene but stay consistent, rather than to try fitting only the masks needed for each shot into one image every time, so that in one shot a given (let's say Green) channel would isolate the character, and in another – a prop.

Storing masks in RGB channels

When masking both an object and its parts in the same matte, think in terms of channels. For instance, if we want to utilize the Green channel in our example for the parts of the main object, we might want to use yellow (1,1,0) for the shader color to avoid holes in the Red channel.
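
In channel terms, that yellow shader means the parts contribute to both Red and Green, so both masks stay intact. A NumPy sketch of this hypothetical layout:

```python
import numpy as np

def whole_and_parts(matte_rgb):
    """matte_rgb: (H, W, 3) matte where the whole hero object is red (1,0,0)
    and its parts of interest are yellow (1,1,0)."""
    whole = matte_rgb[..., 0]   # red channel: the object plus its parts, no holes
    parts = matte_rgb[..., 1]   # green channel: only the yellow-shaded parts
    return whole, parts
```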

The pitfalls

The world is imperfect though, and sometimes in a crunch there is simply no time to render the proper additional elements (or AOVs – Arbitrary Output Variables). That is the time to manipulate the existing data in comp to compensate for what's missing. Individual mattes can be added, subtracted, inverted and intersected to get new ones. Every AOV can be useful in compositing in one way or another, and any non-uniform pass can provide some masking information to be extracted – it only takes thinking of them as sources of masking data and understanding what exactly is encoded within each particular element (which we are going to touch upon in the following few issues).
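
The basic operations are worth spelling out (a Python/NumPy sketch; the clipping keeps results within the legal [0, 1] range discussed above):

```python
import numpy as np

def add_masks(a, b):        # union-like: either object
    return np.clip(a + b, 0.0, 1.0)

def subtract_masks(a, b):   # a minus b
    return np.clip(a - b, 0.0, 1.0)

def invert_mask(a):         # everything except a
    return 1.0 - a

def intersect_masks(a, b):  # only where both are present
    return a * b
```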

And right now let's look at some dangers hidden along the way. The biggest pitfall is corrupting the edges of the matte (either through over-processing in post or by the way it was rendered). 3D applications often offer some fast way of rendering object IDs (mattes) out of the box, like assigning a random color or a discrete floating-point luminance value to each object. Though it might be faster to set up than a proper MultiMatte-like pass, the temptation should generally be avoided. With totally random colors per object, the only way to procedurally separate one mask from another is keying, which will often be limited by the close colors of other objects and thus quite crude.

Storing masks in RGB channels

The illustration above shows that you cannot isolate one of the mattes stored sequentially by luminance while preserving the same antialiasing over different objects.

Even when assigning “the extended matte colors” (Red, Green, Blue, Yellow, Cyan, Magenta, Black, White, Alpha) instead of totally random ones, in order to store more mattes in a single image and separate them later with various channel combinations rather than color keying, the quality of the edges is still endangered, although the results are typically much better. *This method should not be confused with the aforementioned usage of yellow, which was still within a “one channel – one object” paradigm.

Needless to say, any method of rendering masks/IDs without antialiasing is a no-go.

Divide and conquer

However, when heading towards really heavy post-processing, it often becomes safer to render an object and its background separately. The trick in this case is not to forget the shadows and reflections, which means utilizing matte/shadow/ghost objects and primary (camera) ray visibility functionality rather than just hiding parts of the scene.

Sunday, May 17, 2015

CG|VFX reel 2015

CG|VFX reel 2015 from Denis Kozlov on Vimeo.

My reels tend to become shorter and shorter. Here goes the new one – a generalist's reel again, so I have to take the blame for most of the non-live-action pixels, both CG and compositing. With only a couple of exceptions, the work has been done predominantly in Houdini and Fusion. Below follows a breakdown describing my role and approach for each shot.

Direction and most visual elements for this version of the shot. The main supercell plate was created and rendered in Houdini with a mixture of volumetric modeling techniques and render-time displacements. The rest of the environment elements, including atmospherics, debris and compositing, were created in Fusion. My nod for the little twister spinning under the cloud goes to Marek Duda.

For this animated matte-paint I brought the output of an ocean simulation (Houdini Ocean Toolkit) from Blender into Fusion for rendering and further processing. The fun part was the shore waves, of course; I almost literally (though quite procedurally) painted the RGB displacement animation for this area. Compositing included creating interactive sand ripples and some volumetric work for the fog. The resulting base setup allows for camera animation.

For this piece from a Skoda commercial I created the water splash and droplets in Houdini, as well as integrated them into the plate. (All moving water is CG here.) A lot of additional work was done on this shot by my colleagues from DPOST, including Victor Tretyakov and Jiri Sindelar.

Keying and compositing live action plates and elements.

A slightly reworked sequence from a spot for Czech Technical University that I directed in 2014. Most visual elements are my own work, Houdini and Fusion. More details on the original spot are at

A TV commercial demo: particles and compositing in Fusion. Production and a magic coal ball by Denis Kosar.

A little piece utilizing the same particle technique as the previous shot. This time all visual elements are my own work.

Direction and all visual elements (well, except for a kiwi texture probably) in my usual Houdini/Fusion combination. The strawberry model is completely procedural; more examples of procedural asset generation are present in the previous posts of this blog.

For this commercial, aside from VFX supervision I created the flamenco sequence graphics, two pieces of which are presented here. Assembling a procedural setup in Fusion allowed for interactive prototyping with the director on site. The setup involved mockup 3D geometry rendered from the top view into a depth map on the fly, which in turn (after direct 2D adjustments) fed a displacement map for the ground grid, with individual boxes instanced over it. After laying out the background designs for the whole sequence, the setup was quite seamlessly transferred into 3DSMax (using displacement maps as an intermediate). Rendered in Mental Ray and composited with the live action dancer plate in Fusion again. Matte pulling was mostly done by Jiri Sindelar; the rest of the digital work on the sequence is pretty much mine.

Set extension for a scene from Giuseppe Tornatore's Baaria. I used Syntheyes for 3D-tracking and XSI for lighting/rendering of the street model (provided by the client). Composited in Fusion.

Keying/compositing for another shot from the same film.

And yet another compositing shot from a few years back.

Last, but not least, the music:

"Black Vortex" Kevin MacLeod (
Licensed under Creative Commons: By Attribution 3.0

And if you've made it this far, you might also find my previous post on procedural modeling interesting. And of course, for those interested in cooperation, the email link is at the top of the page.

Thursday, April 23, 2015

On Wings, Tails and Procedural Modeling

Project Aero: Procedural Aircraft Design Toolkit for SideFX Houdini
Project Aero: Procedural Aircraft Design Toolkit for SideFX Houdini

Despite the widespread opinion to the contrary, I find Houdini a very powerful tool for 3D modeling. In fact, this aspect was largely what motivated me to choose it as my primary 3D application. And by procedural modeling I mean not just fractal mountains, instanced cities and Voronoi-fractured debris (all of which can actually be made to look quite fascinating), but efficient creation of 3D assets in general. Any assets.

Project Aero: Procedural Aircraft Design Toolkit for SideFX Houdini

Project Aero: Procedural Aircraft Design Toolkit for SideFX Houdini

So lately I have taken some of the not-so-free time (a bit over 3 months, to be more precise) to develop (or rather prototype) a toolkit for procedural aircraft creation, which I am happy to showcase today. Please welcome Project Aero.

Project Aero: Procedural Aircraft Design Toolkit for SideFX Houdini

The whole point in a nutshell: after the tools were finished, it took me roughly 4-5 hours to create each of the demonstrated airplane models, from scratch to full detailing.

Project Aero: Procedural Aircraft Design Toolkit for SideFX Houdini

And most of those 4-5 hours was spent on design decisions – not smoothing edge loops, or laying out UVs, or drawing countless layers of rivets and scratches – those kinds of things got automated during the months of toolset development. So technically, a new model could have been created in minutes – the time it takes to lay down a few nodes (see the screengrabs below) and set up the basic parameters.

Project Aero: Procedural Aircraft Design Toolkit for SideFX Houdini
Detailing sample

The resulting models are completely procedural: every pixel and polygon is generated by the system from scratch, tailored to a particular vehicle design; textures are generated either at render time or from the scene geometry at a preset resolution – no photo-textures or other disk-based samples are used. Bolts and rivets are randomly turned and micro-shifted on an individual basis. The generator tries to preserve consistent detail scale and proportions across any curvature and size. Controls are designed in a hierarchical fashion, so that the user can work from big conceptual adjustments (like main contours, forms and surface styling) down to tuning an individual element's bolting when required. Plus more perks inside, and of course standard Houdini tools can be used at any stage for any manual design adjustments.

Project Aero: Procedural Aircraft Design Toolkit for SideFX Houdini
A new design can be created from scratch in a few hours

The skeleton of the system, and of each design, is a set of interactively placed profiles which are skinned into NURBS surfaces with flexible shape controls. The resulting design is then fed into the detailing nodes that create the necessary geometry around the surfaces and generate textures using a variety of techniques. Final models are rendered as subdivision surfaces with displacement; bolts and rivets (which an airplane has quite a few of) are stored as packed primitives.

Project Aero: Procedural Aircraft Design Toolkit for SideFX Houdini
Profiles form the backbone of each design

Project Aero: Procedural Aircraft Design Toolkit for SideFX Houdini
Only a few more nodes are required for finalization

Project Aero: Procedural Aircraft Design Toolkit for SideFX Houdini
Texture preview mode

By no means is Project Aero complete or flawless, but hopefully it takes the concept far enough to illustrate the benefits and possibilities of procedural creation of 3D assets. Getting another individual version of the same model is a matter of seconds. Automatic non-identical symmetry and procedural surface aging, controlled by a few high-level sliders, also help to escape the “army of clones” issue that 3D models sometimes suffer from. Deeper variations like repainting or restyling the skin and panels are done in a breeze. The set of detail modules is easily extendable, and parts of an existing design can be swapped and reused. Depending on the toolset's design objectives, generated models could be automatically prepared for integration into a particular pipeline (e.g. textures baked out, LODs automatically created, parts named according to a chosen convention, exhausts and moving parts marked with special dummy locators or attributes, etc.).

Project Aero: Procedural Aircraft Design Toolkit for SideFX Houdini
A new unique copy of an existing model is literally one button push away

Project Aero: Procedural Aircraft Design Toolkit for SideFX Houdini
Non-identical symmetry

And of course the workflow is non-linear from both the design and development perspectives. The former means that you can always go back and change or adjust something at a previous stage of the work without having to redo the later steps (for example, a change in the wing position on the hull of a finished model will make all the related surfaces recalculate to accommodate the new shape). The latter refers to the ability to use the toolset while it is being developed, which means that in a production environment an artist wouldn't have to wait for a TD to finish their work – the tools would be updated in parallel, automatically adding new features to the designs already being worked on.

Project Aero: Procedural Aircraft Design Toolkit for SideFX Houdini

Hopefully, there is no need to say that the approach used in this demonstration is only one of many, each fitting some objectives better than others. I might touch upon the topic later given available time and/or public interest, and meanwhile those interested in procedural cooperation are more than welcome to email me (link at the top of the page). Or if you just feel like chatting and are planning to attend FMX 2015 – drop me a line to meet there.

Keep having fun!

Project Aero: Procedural Aircraft Design Toolkit for SideFX Houdini