Monday, November 30, 2015

Render Elements: UVs

Continuing on the topic of AOVs with another brief anatomic study.

One less common yet extremely powerful render element is the texture coordinates (or UVs). 3D software typically doesn't provide a preset output for it out of the box; however, it is quite easy to create one manually and enjoy the benefits of post-render texturing and other useful tricks.

The UVs

For each point of a three-dimensional surface, texture coordinates are nothing more than two numbers*. These numbers define an exact location within a regular 2D image, thus creating a correspondence that matches every surface point with a point of a texture. This defines precisely how a 2D texture image should be placed over a model's 3D surface.**

In fact, UV unwrapping, projecting, pelting and editing are simply the methods by which we define this correspondence within the software.

*Three numbers (UVW coordinates) are used for volumetric texturing, as when applying procedural 3D textures.
**The correspondence is only unique one way: multiple parts of a 3D model can be matched to the same piece of a 2D texture.

The UVs are measured relative to the borders of a picture and are therefore independent of its resolution, allowing them to be reused with any image of any size and proportions. The first of the two numbers defines the horizontal coordinate (0 means the left picture edge, 1 stands for the right one). The second number describes the vertical position within the raster in a similar manner: 0 means bottom, 1 – top (see the second illustration below).

Texture coordinates locate surface points without snapping to texture pixels (known as texels) – a surface point can be mapped to the center of a texel, its border or anywhere within its area.
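To make the mapping concrete, here is a minimal Python sketch of how a normalized UV pair could be turned into a pixel lookup for a texture of any resolution. The function name and the nearest-neighbour lookup are my own simplifications – real renderers filter and interpolate between texels.

def sample_texture(texture, u, v):
    """Nearest-neighbour lookup of a texture at normalized UV coordinates.

    texture: array of shape (height, width, channels)
    u, v:    values in the 0..1 range; U is measured from the left edge,
             V from the bottom edge.
    """
    height, width = texture.shape[:2]
    # Map the 0..1 range onto pixel indices; V is flipped because
    # image rows are usually stored top-to-bottom.
    x = min(int(u * width), width - 1)
    y = min(int((1.0 - v) * height), height - 1)
    return texture[y, x]

# e.g. sample_texture(texture, 0.0, 0.0) returns the bottom-left texel,
# regardless of the texture's resolution.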

The shader

Since raster images store nothing but numbers (and often within that very zero-to-one range), the surface's texture coordinates can be rendered out as an additional element – just like the color of the main “Beauty” pass, ZDepth or Normals. The Red and Green image channels of this element are traditionally utilized for storing the U (horizontal) and V (vertical) values respectively.*

*The Blue channel stays free and can be used to encode some additional data (like an object's mask or ambient occlusion).



Rendered UV pass of a 3D scene
This is what the resulting render element looks like. The Red and Green values display the exact UV coordinates for each pixel, and this in turn allows us to map a texture after the render is actually done.


All it takes to create this AOV is a constant shader with the double-gradient texture shown in the following illustration (the target object has to be UV-mapped of course). This texture simply represents the UV coordinate tile as an image, acting as a sort of indicator when rendered. Due to the very high precision required from the produced values for later texture mapping, it is highly preferable to have it in floating point and at effectively infinite resolution – that is, to create it procedurally by literally adding two gradients (horizontal black-to-red and vertical black-to-green) within the shading network.

Anatomy of a UV tile
A tile of texture coordinates represented as RGB colors. Rendering objects with this texture generates UVs render element. Corner RGB values and individual channels are shown for reference.
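If a procedural ramp is not at hand, the same double-gradient tile can be sketched in a few lines of Python/numpy – purely as an illustration of what the shading network builds, not a replacement for it, since a baked raster reintroduces the resolution limit discussed above.

import numpy as np

def uv_ramp(width, height):
    """Build the double-gradient UV tile in floating point:
    Red rises 0..1 left to right, Green rises 0..1 bottom to top."""
    u = np.linspace(0.0, 1.0, width,  dtype=np.float32)
    v = np.linspace(0.0, 1.0, height, dtype=np.float32)
    ramp = np.zeros((height, width, 3), dtype=np.float32)
    ramp[..., 0] = u[np.newaxis, :]           # horizontal black-to-red
    ramp[..., 1] = v[::-1][:, np.newaxis]     # vertical black-to-green (row 0 is the top)
    return ramp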

The applications

The main point of outputting UV coordinates as a render element is the ability to quickly reapply any texture to the already rendered object in compositing, with specialized tools like the Texture node in Fusion or STMap in Nuke.

Textures applied in compositing
An image textured in compositing using the UVs render element and the individual objects' masks as discussed in earlier posts.
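For a rough idea of what such a tool does under the hood, here is a numpy sketch of the remap. It uses nearest-neighbour lookup for brevity (Fusion's Texture node and Nuke's STMap of course filter the samples properly), and the function name is mine.

import numpy as np

def remap_with_uv_pass(texture, uv_pass):
    """Re-texture a rendered image from its UV pass (STMap-style).

    texture: float array (th, tw, 3) -- the new texture to apply
    uv_pass: float array (h, w, >=2) -- U in Red, V in Green, 0..1 range
    Returns an (h, w, 3) image where each pixel carries the texture
    color found at that pixel's rendered UV coordinates.
    """
    th, tw = texture.shape[:2]
    u = np.clip(uv_pass[..., 0], 0.0, 1.0)
    v = np.clip(uv_pass[..., 1], 0.0, 1.0)
    # Nearest-neighbour for brevity; production tools interpolate here.
    x = np.minimum((u * tw).astype(int), tw - 1)
    y = np.minimum(((1.0 - v) * th).astype(int), th - 1)
    return texture[y, x]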


Like pretty much all post-shading techniques, this one has certain limitations. It is not really suitable for semi-transparent objects and works best on simpler isolated forms.

Using a constant shader ensures that the coordinate information is rendered precisely – unaffected by lighting. However, antialiasing of the edges introduces color values that no longer correspond to valid UV information, which naturally leads to artifacts in the post texture projection. A typical way of partly fighting these is upscaling the texture AOV before processing and downscaling the result afterwards; since this slows things down, it can be turned on right before rendering the final composite to keep the project interactive. Rendering aliased samples directly into a high-resolution raster might be a more proper solution, though a more demanding one as well.

And still the advantages are many. Post-texturing becomes especially powerful when the scene is rendered in passes with lighting information separated from textures, so that it can be reused with new ones. In fact, much of the lighting can be recreated in post as well from additional elements like Normals, World Position and ZDepth. All this, in turn, allows for creating procedural scene templates in compositing. As a basic example, imagine an animation of a book with turning pages which needs to be re-rendered with new textures monthly. A good compositing setup utilizing various render elements would require the actual 3D scene to be rendered only once, leaving most further changes purely in comp, which typically offers much more interactivity and lower render times.

3D scene completely textured and shaded in compositing from AOVs
Lighting developed in the previous article solely from the Normals AOV, applied to the same image as a simple example of a procedural shading setup completely assembled in compositing.


Taking it further, the texture coordinates plate from the second illustration (the UV tile) can efficiently serve as a tool for measuring and capturing any deformation within screen space. Modifying that image with any transformations, warps, bends and distortions results in a raster that “remembers” those deformations precisely, so they can be recreated on any other image with the same tools used for post-render texture mapping. The method is often used for storing lens distortion, for instance. Another practical application is to render our magic shader refracted in another object, thus creating a refractions map rather than the usual surface UVs element.

Sunday, August 16, 2015

Render Elements: Normals


This time let's do a brief anatomic study of the Normals output variable. Below is my original manuscript of an article first published in issue 188 of 3D World magazine.

The shaders and the AOVs

Although this might be obscured from the artist in many popular 3D applications, a shader (and a surface shader in particular) is actually a program run on each pixel*. It takes input variables for that pixel, such as surface position, normal orientation and texture coordinates, and outputs some value based on those inputs. This new pixel value is typically a simulated surface color; however, any variable calculated by the shader can be rendered instead, often creating weird-looking imagery. Such images usually complement the main Beauty render and are called Arbitrary Output Variables (AOVs) or Render Elements.

*Vertex shading can be used in interactive applications like games. Such shaders are calculated per vertex of the geometry being rendered rather than per pixel of the final image, and the resulting values are interpolated across the surface.

The normals

Normals are vectors perpendicular to the surface. They are already calculated and used by the surface shaders among the other input variables in order to compute the final surface color, and thus are usually cheap to output as an additional element. They are also easy to encode into an RGB image, each being a three-dimensional vector and so requiring exactly three values to express. In accordance with the widespread CG convention, XYZ coordinates are stored in the corresponding RGB channels (X in Red, Y in Green and Z in Blue). And since all we need to encode is orientation data, an integer format would suffice, although 16-bit depth is highly preferable.
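As an illustration of the packing itself (conventions vary between renderers, so treat this as an assumption rather than a universal rule): one common scheme remaps the -1..1 vector components into the 0..1 range for storage and renormalizes the vectors after reading the pass back. A numpy sketch:

import numpy as np

def encode_normals(normals):
    """Pack unit normals (XYZ in the -1..1 range) into 0..1 for an RGB image:
    X -> Red, Y -> Green, Z -> Blue."""
    return normals * 0.5 + 0.5

def decode_normals(rgb):
    """Recover the vectors from a normals pass and renormalize,
    since filtering and quantization slightly bend them."""
    n = rgb * 2.0 - 1.0
    length = np.maximum(np.linalg.norm(n, axis=-1, keepdims=True), 1e-6)
    return n / length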

This orientation can be expressed in World Space (relative to the global coordinates of the 3D scene) or Screen Space (Tangent Space if we're talking about baking textures). The latter is typically the most useful output for compositing or texture baking, although the former has some advantages as well (like masking out parts of the scene based on global directions, e.g. all faces pointing upwards regardless of camera location).

Let's do some closer examination:

A brief anatomic study of a Normals output variable
A typical Normals element rendered in screen space

The applications

The uses of all this treasury are many. For one, since this is the same data shaders utilize to light a surface, compositing packages offer relighting tools for simulating local directional lighting and reflection mapping based solely on the rendered normals. This method won't generate any shadows, but it is typically lightning fast compared to going back to a 3D program and rerendering.
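At its core, such a relighting tool is little more than a dot product between the decoded normals and a chosen light direction. A simplified numpy sketch (diffuse only, no shadows; the function names and the decoding step above are my own):

import numpy as np

def relight(normals, light_dir, light_color=(1.0, 1.0, 1.0)):
    """Simple diffuse relighting from a decoded normals pass.

    normals:   (h, w, 3) unit vectors (see decode_normals above)
    light_dir: direction towards the light, in the same space as the pass
    Returns an (h, w, 3) light layer -- pure N.L shading, no shadows.
    """
    l = np.asarray(light_dir, dtype=np.float32)
    l = l / np.linalg.norm(l)
    ndotl = np.clip(np.einsum('hwc,c->hw', normals, l), 0.0, 1.0)
    return ndotl[..., np.newaxis] * np.asarray(light_color, dtype=np.float32)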

Normals can also be used as an additional input (among others like World Position or ZDepth) for more complex relighting in 2D.

It's easy to notice that the antialiased edges of the Normals AOV produce improper values and thus become a source of relighting artifacts. One way to partly fight this limitation is upscaling the image plates before the relighting operation and downscaling them back afterwards. This would obviously slow down the processing, so it should be turned on just prior to the final render of the composition.

Manipulating the tonal ranges (grading) of individual channels or color keying the Normals pass can generate masks for further adjustments on the main image (the Beauty render) based on surface orientation – like brightening up all the surfaces facing left from the camera, or getting a Fresnel mask out of the Blue channel.
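A minimal sketch of that idea in numpy, assuming a screen-space normals pass stored in the packed 0..1 form as above (X to the right, Z towards the camera – the exact channel meanings depend on the renderer's conventions):

import numpy as np

def channel_mask(normals_pass, channel, low, high):
    """Grade one channel of a normals pass into a 0..1 mask
    by remapping the low..high tonal range (reversed ranges invert)."""
    c = normals_pass[..., channel]
    return np.clip((c - low) / (high - low), 0.0, 1.0)

# Hypothetical usage, assuming a loaded 'normals_pass' array:
# surfaces facing left of camera -> low Red values
# left_mask    = channel_mask(normals_pass, channel=0, low=0.5, high=0.0)
# Fresnel-like mask: surfaces turning away from camera -> low Blue values
# fresnel_mask = channel_mask(normals_pass, channel=2, low=1.0, high=0.0)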

All this, empowered by gradient mapping and by adding the intermediate results together, provides quite an extensive image manipulation/relighting toolkit capable of producing pictures on its own, like the following one.


Render elements: Normals. www.the-working-man.org
This image was created in compositing using only the Normals render element 
and the individual objects' masks as discussed earlier.

*Bump vs Normals (instead of a post scriptum)

Bump mapping is nothing more than modifying the surface normals based on a height map at render time, so that the shader does its usual calculations but with these new, modified normals as the input. Everything else stays the same. This also means that bump mapping and normal mapping techniques are essentially the same thing, and the corresponding maps can easily be converted into one another. Bump textures have the benefit of being easier to edit and usable for displacement, while normal maps are more straightforward for the GPU to interpret, which is their main advantage for real-time applications.
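For illustration, a height-to-normal-map conversion can be sketched with finite differences in numpy. Sign conventions and the strength factor differ between packages, so this is a sketch of the principle rather than anyone's exact formula.

import numpy as np

def height_to_normals(height, strength=1.0):
    """Convert a bump/height map into a tangent-space normal map.

    height: 2D float array; strength scales the slope.
    Returns (h, w, 3) unit normals still in the -1..1 range
    (remap with *0.5 + 0.5 before saving to an integer image).
    """
    # Finite-difference slopes of the height field
    dy, dx = np.gradient(height.astype(np.float32))
    n = np.dstack((-dx * strength, -dy * strength,
                   np.ones_like(height, dtype=np.float32)))
    n /= np.linalg.norm(n, axis=-1, keepdims=True)
    return n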

The Normals element we've been studying here could be rendered either with or without bump modifications applied.

Thursday, August 13, 2015

Packing Lighting Data into RGB Channels


Most existing renderers process the Red, Green and Blue channels independently. While this limits the representation of certain optical phenomena (especially those in the domain of physical rather than geometric optics), it provides some advantages as well. For one, it allows encoding lighting information from several sources separately within a single image, which is what we are going to look at in this article.

Advanced masking

For the first example, let's consider rendering masks, which we discussed last time. Here is a slightly more advanced scenario: the object is present in reflections and refractions as well, so we want the mask to include those areas too, but at the same time to provide separate control over them.


 

While the most proper approach might be to render two passes (one masking the object itself and another with primary visibility off – the object only visible in reflections/refractions), there is a way to pack this data into a single render element if desired. Just set the reflection filter color to pure Red (1,0,0), the refraction color to Green (0,1,0) and make sure that the object being masked is present in all these channels plus one extra (meaning it's white in our case). To isolate the mask for a reflection or refraction, we now only need to subtract the Blue channel (in which nothing gets reflected or refracted by our design) from Red or Green respectively.



A custom matte pass allowing isolation of an object in reflections and refractions as well.
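In numpy terms the unpacking described above is a couple of subtractions; the clamping and the channel assignments follow the setup above and are otherwise my own assumptions.

import numpy as np

def split_custom_matte(matte):
    """Unpack the custom matte: reflections filtered to pure Red,
    refractions to pure Green, the object itself rendered white.

    matte: (h, w, 3) float render element
    """
    r, g, b = matte[..., 0], matte[..., 1], matte[..., 2]
    direct     = np.clip(b, 0.0, 1.0)       # the directly visible object only
    reflection = np.clip(r - b, 0.0, 1.0)   # seen in reflections only
    refraction = np.clip(g - b, 0.0, 1.0)   # seen in refractions only
    return direct, reflection, refraction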

As usual when dealing with mask manipulations, special care should be taken to avoid edge corruption, and this method might not be optimal for softer (e.g. motion blurred) contours.

Isolating lights

Another situation could require isolating the contribution of a particular light source to the scene, including its impact on global illumination. Again, it's great (and typically way faster) when your rendering software provides per-light output of the elements; however, this is not always an option.

But taking advantage of the separate RGB processing, as soon as we color each source pure Red, Green or Blue, its impact will be contained within the corresponding channel completely and never “spill out” into another one. Yet the light will preserve its complete functionality, including GI and caustics. Of course, all surfaces should be desaturated for this pass (otherwise an initially red object might not react to a light represented in another channel, for example).

The resulting data can be used in compositing as a mask or an overlay to correct/manipulate the individual contribution of each light to the beauty render, for instance to adjust the key/fill lights balance.

For this scene I had only two light sources (encoded in Red and Green), so in a similar fashion I have added an ambient occlusion light into the otherwise empty Blue channel. Ambient occlusion deserves a separate article of its own, as it has numerous applications in compositing and is a very powerful look-adjustment vehicle. Depending on the software, AO could be implemented as a surface shader; still, it can fit into a single channel and be encoded in one custom shader together with some other useful data like UV coordinates or the aforementioned masks.



This weirdly colored additional render contains separate impact of each of two scene lights within Red and Green channels, while Blue stores ambient occlusion for diffuse objects

Saving on volumes

The described technique becomes most powerful when applied to volumes. Volumetric objects usually take considerable time to render and are often intended to be used as elements later (which implies they should come in multiple versions). By lighting a volume with 3 different lights of pure Red, Green and Blue colors we can get 3 monochromatic images with different light directions in a single render.

To have a clearer picture while setting up those lights, it is handy to tune them one at a time, in white and with the others off. Enabling all three simultaneously and assigning the channel-specific colors can be done right before the final render – the result for each channel should automatically match the white preview of the corresponding source.


Three lights stored in a single RGB image


The trick now is to manipulate and combine them into the final picture. All sorts of color corrections and compositing techniques can be used here, but I find gradient mapping to be especially powerful. Coming under various names but available almost universally in image editors of all sorts, it is a tool that remaps the monochromatic range of an input into an arbitrary color gradient, thus “colorizing” a black-and-white image.


Source image before gradient mapping
Gradient-mapped result


Summing it up

The next cool thing is that light is naturally additive, and the results of these custom mappings for different channels can be added together with varying intensities, resulting in multiple lighting versions of the same image.

The qualities of each initial RGB light can be changed drastically by manipulating the gamma, ranges, contrast and intensity of each channel (all of which can actually be achieved by adjusting the target gradient). This also means that light directions with wider coverage should be preferred at the render stage to provide more flexibility for these adjustments.
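A compact numpy sketch of the whole idea – gradient-map each channel through its own color ramp, then sum the results with individual intensities. The ramps, gains and the `render` variable here are made-up placeholders, not values from the illustrated scene.

import numpy as np

def gradient_map(mono, stops):
    """Remap a monochrome 0..1 channel through a color gradient.

    stops: list of (position, (r, g, b)) pairs, positions ascending in 0..1.
    """
    positions = np.array([p for p, _ in stops], dtype=np.float32)
    colors    = np.array([c for _, c in stops], dtype=np.float32)
    out = np.empty(mono.shape + (3,), dtype=np.float32)
    for ch in range(3):
        out[..., ch] = np.interp(mono, positions, colors[:, ch])
    return out

# Hypothetical recombination of the three packed lights:
warm = [(0.0, (0.00, 0.00, 0.02)), (1.0, (1.00, 0.75, 0.45))]
cool = [(0.0, (0.00, 0.00, 0.00)), (1.0, (0.35, 0.55, 1.00))]
# result = 1.2 * gradient_map(render[..., 0], warm) \
#        + 0.6 * gradient_map(render[..., 1], cool)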


More results from the same three source channels

On a more general note, this three-lights technique allows for simulating something like the Normals output variable for volumes. And on the other hand, a rendered Normals pass (whose anatomy we are going to discuss next time) can be used for similar lighting manipulations with surfaces.

The main goal of the provided examples was to illustrate a way of thinking – the possibilities are quite endless in fact.

Monday, June 29, 2015

Storing masks in RGB channels

Storing masks in RGB channels
Base image for the examples in this article

Finally returning to posting the original manuscripts of the articles I've written for 3D World magazine in 2014. This one was first published in issue 186 under the title "The Mighty Multimatte".

In an earlier piece we looked at raster images as data containers which may be used for storing various supplementary information as well as the pictures themselves. One of the most straightforward uses of these capabilities is rendering masks.

A lot can be done to an image after it has been rendered; in fact, contemporary compositing packages even allow us to move a good deal of classical shading into post, often offering a much more interactive workflow. But even if you prefer to polish your beauty renders inside the 3D software till they come out with no need for extra touches, there can still be urgent feedback from the client or one last little color tweak you need to apply under time pressure – and compositing becomes a lifesaver again.


The perfect matte

However, the success of most compositing operations depends on how many elements you can isolate procedurally (that is, without tracing them manually) and, no less important, with what precision (which refers to antialiasing first of all).

What we are looking for is an antialiased matte with a value of exactly 1.0 (white) for the pixels completely occupied by the object of interest, exactly 0 (black) for the rest of the image,* and antialiasing identical to that of the beauty render.

*Mask values above one and below zero cause no fewer problems than the gray ones.

Storing masks in RGB channels

Here are the results of a color correction through the corresponding mattes. Left to right: with proper antialiasing, aliased, and with an attempt to fix aliasing through blurring. Note the corrupted edges in the middle example and the dark halos on the right.

The power of three

It is easy to notice that all this data requires only one raster channel to be stored. Thus a regular RGB image can naturally hold quality mattes for three objects at a time. It only takes applying Constant shaders of pure Red (1,0,0), Green (0,1,0) and Blue (0,0,1) colors to the objects of interest and a black (0,0,0) Constant shader to the rest of the scene. Render layers functionality (implemented in every contemporary 3D package I can think of) comes in very handy here. You might want to turn off slower features like shadows and GI for just the masks element, although typically setting all the shaders in the scene to Constant is already enough for the render engine to optimize itself sufficiently.*

*Due to smart sampling, antialiasing of the matte element might not be exactly the same as in the beauty pass; still, this is normally the closest one can practically get.

Alternatively, some renderers offer a pre-designed output variable (like MultiMatte in V-Ray) to render masks in a similar way. More channels (like Alpha, ZDepth, Motion Vectors or any arbitrary extra image channels) could of course be used for storing more mattes in the same image file, but typically it is not worth the time and inconvenience to set up first and extract later, compared to simply adding more RGB outputs to isolate more objects. Compositing applications and image editors naturally provide the tools to easily use any of the RGBA channels as a mask for an operation, which is another reason to stick with those. (In Photoshop, for instance, it only takes a Ctrl-click in the Channels panel to turn one into a selection.)
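In compositing terms, using such a channel boils down to a per-pixel linear blend between the original and the adjusted image through the chosen channel. A minimal numpy sketch (the variable and function names are mine):

import numpy as np

def apply_through_matte(beauty, corrected, matte_channel):
    """Blend a color-corrected version over the beauty render
    through one channel of the rendered matte image.

    beauty, corrected: (h, w, 3) float images
    matte_channel:     (h, w) float mask, ideally exactly 0 or 1
                       everywhere except the antialiased edges
    """
    m = np.clip(matte_channel, 0.0, 1.0)[..., np.newaxis]
    return beauty * (1.0 - m) + corrected * m

# e.g. brighten only the object stored in the Red channel of the matte pass:
# result = apply_through_matte(beauty, beauty * 1.3, matte[..., 0])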

Storing masks in RGB channels

An example of 2 objects isolated into the Red and Blue channels with Constant shaders.

What to mask?

Unless we're isolating parts of the scene with a specific purpose in mind, the main generic principle here is: what will most likely require adjustments? Those are either the parts of the scene in doubt or simply the most important ones. Thus by default the foreground/hero object is worthy of its own matte (a channel). Grouping objects into mask channels based upon their materials or surface qualities is useful as well, since it allows for adjusting in terms of “highlights on metals” or “the color of liquids”.

But the masks most in demand are usually those separating meaningful objects and their distinct parts, especially parts of similar color, since those are tricky to isolate with keying.

When working on a sequence of animated shots, consistently using the same colors for the same objects from one shot to another becomes a very useful habit. This way the same compositing setup can be propagated to new shots most painlessly. It is generally better to add more MultiMatte outputs to the scene but stay consistent, rather than to try fitting only the masks needed for each shot into one image every time, so that in one shot a (let's say Green) channel would isolate the character and in another – a prop.

Storing masks in RGB channels

When masking both an object and its parts in the same matte – think in terms of channels. For instance, if we want to utilize the Green channel in our example for the parts of the main object, we might want to use yellow (1,1,0) for the shader color to avoid holes in the Red channel.

The pitfalls

The world is imperfect though, and sometimes in a crunch there is simply no time to render the proper additional elements (or AOVs – Arbitrary Output Variables). That is the time to manipulate the existing data in comp in order to compensate for what is missing. Individual mattes can be added, subtracted, inverted and intersected to get new ones. Every AOV can be useful in compositing in one way or another, and any non-uniform pass can provide some masking information to be extracted – it only takes thinking of them as sources of masking data and understanding what exactly is encoded within each particular element (which we are going to touch upon in the following few issues).
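For reference, the basic matte algebra can be expressed in a few numpy one-liners – with clamping, so the results stay within the safe zero-to-one range mentioned above. The function names are my own shorthand.

import numpy as np

def matte_union(a, b):        # both objects
    return np.clip(a + b, 0.0, 1.0)

def matte_subtract(a, b):     # a without b
    return np.clip(a - b, 0.0, 1.0)

def matte_intersect(a, b):    # overlap only
    return np.minimum(a, b)   # (a * b is a softer alternative on the edges)

def matte_invert(a):          # everything except a
    return np.clip(1.0 - a, 0.0, 1.0)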

And right now let's look at some dangers hidden along the way. The biggest pitfall is corrupting the edges of the matte (due to over-processing in post or the way it was rendered). 3D applications often offer some fast way of rendering object IDs (mattes) out of the box, like assigning a random color or a discrete floating-point luminance value to each object. Though it might be faster to set up than a proper MultiMatte-like pass, the temptation should generally be avoided. With totally random colors per object, the only way to procedurally separate one mask from another is keying, which will often be limited by the close colors of other objects and thus quite crude.


Storing masks in RGB channels

The illustration above shows that you can not isolate one of the mattes stored sequentially by luminance while preserving the same antialiasing over different objects.

Even when assigning “the extended matte colors” (Red, Green, Blue, Yellow, Cyan, Magenta, Black, White, Alpha) instead of totally random ones – in order to store more mattes in a single image and separate them later with various channel combinations rather than color keying – the quality of the edges still gets endangered, although the results are typically much better. *This method should not be confused with the aforementioned usage of yellow, which was still within a “one channel – one object” paradigm.

And needless to say, any method of rendering masks/IDs without antialiasing is a no-go.


Divide and conquer

However, if going for really heavy post-processing, it often becomes safer to render an object and its background separately. The trick in this case is not to forget the shadows and reflections, which means utilizing matte/shadow/ghost objects and primary (camera) ray visibility functionality rather than just hiding parts of the scene.

Sunday, May 17, 2015

CG|VFX reel 2015



CG|VFX reel 2015 from Denis Kozlov on Vimeo.


My reels tend to become shorter and shorter. Here goes the new one – a generalist's reel again, so I have to take the blame for most of the non-live-action pixels, both CG and compositing. With only a couple of exceptions, the work has been done predominantly in Houdini and Fusion. Below follows a breakdown describing my role and approach for each shot.

 
Direction and most visual elements for this version of the shot. The main supercell plate was created and rendered in Houdini with a mixture of volumetric modeling techniques and render-time displacements. The rest of the environment elements (including atmospherics and debris), as well as the compositing, were created in Fusion. My nod for the little twister spinning under the cloud goes to Marek Duda.


For this animated matte-paint I brought the output of an ocean simulation (Houdini Ocean Toolkit) from Blender into Fusion for rendering and further processing. The fun part was the shore waves, of course; I almost literally (though quite procedurally) painted the RGB displacement animation for this area. Compositing included creating interactive sand ripples and some volumetric work for the fog. The resulting base setup allows for camera animation.


For this piece from a Skoda commercial I've created a water splash and droplets in Houdini, as well as integrated them into the plate. (All moving water is CG here.) A lot of additional work has been done on this shot by my colleagues from DPOST, including Victor Tretyakov and Jiri Sindelar.


Keying and compositing live action plates and elements.


A slightly reworked sequence from a spot for the Czech Technical University that I directed in 2014. Most visual elements are my own work, done in Houdini and Fusion. More details on the original spot are at http://www.the-working-man.org/2014/03/ctus-faculty-of-mechanical-engineering.html



A TV commercial demo: particles and compositing in Fusion. Production and a magic coal ball by Denis Kosar.



A little piece utilizing the same particles technique as in the previous shot. This time all visual elements are own work.



Direction and all visual elements (well, except for a kiwi texture probably) in my usual Houdini/Fusion combination. Strawberry model is completely procedural; more examples of procedural assets generation are present in the previous posts of this blog.



For this commercial, aside from VFX supervision, I created the flamenco sequence graphics, two pieces of which are presented here. Assembling a procedural setup in Fusion allowed for interactive prototyping with the director on site. The setup involved mockup 3D geometry rendered from the top view into a depth map on the fly, which in turn (after direct 2D adjustments) fed a displacement map for the ground grid with individual boxes instanced over it. After laying out the background designs for the whole sequence, the setup was quite seamlessly transferred into 3DSMax (using displacement maps as an intermediate), rendered in Mental Ray and composited with the live action dancer plate in Fusion again. Matte pulling was mostly done by Jiri Sindelar; the rest of the digital work on the sequence is pretty much mine.



Set extension for a scene from Giuseppe Tornatore's Baaria. I used Syntheyes for 3D-tracking and XSI for lighting/rendering of the street model (provided by the client). Composited in Fusion.



Keying/compositing for another shot from the same film.



And yet another compositing shot from few years back.
 

Last, but not least is the music:

"Black Vortex" Kevin MacLeod (incompetech.com)
Licensed under Creative Commons: By Attribution 3.0
http://creativecommons.org/licenses/by/3.0/


And if you've made it this far, you might also find my previous post on procedural modeling interesting. And of course, for those interested in cooperation, the email link is at the top of the page.