Render Elements: Normals


This time let's do a brief anatomic study of the Normals output variable. Below is my original manuscript of an article first published in issue 188 of 3D World magazine.

A brief anatomic study of a Normals output variable
A typical Normals element rendered in screen space

The shaders and the AOVs

Although this might be obscured from the artist in many popular 3D applications, a shader (and a surface shader in particular) is actually a program run on each pixel*. It takes input variables for that pixel, such as surface position, normal orientation and texture coordinates, and outputs some value based on those inputs. This new pixel value is typically a simulated surface color; however, any variable calculated by the shader can be rendered instead, often creating weird-looking imagery. Such images usually complement the main Beauty render and are called Arbitrary Output Variables (AOVs) or Render Elements.

*Vertex shading can be used in interactive applications like games. Such shaders are calculated per vertex of the geometry being rendered rather than per pixel of the final image, and the resulting values are interpolated across the surface.
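
To make the per-pixel idea concrete, here is a minimal sketch in Python/numpy (all names are purely illustrative, not any particular renderer's API) of a shading function that returns both the beauty color and a normals AOV:

```python
import numpy as np

def shade_pixel(position, normal, uv, light_dir, albedo):
    """Toy surface shader: returns the beauty color plus a normals AOV.

    In a real renderer these inputs arrive per shaded pixel/sample; here
    they are plain numpy vectors.
    """
    n = normal / np.linalg.norm(normal)
    l = light_dir / np.linalg.norm(light_dir)

    # The "beauty" result: simple Lambertian diffuse shading.
    diffuse = max(np.dot(n, l), 0.0)
    beauty = albedo * diffuse

    # The same normal the shader already computed, written out as an extra output.
    aovs = {"normals": n}
    return beauty, aovs
```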

The normals

Normals are vectors perpendicular to the surface. They are already calculated and used by surface shaders, among the other input variables, to compute the final surface color, and thus are usually cheap to output as an additional element. They are also easy to encode into an RGB image: each normal is a three-dimensional vector, so it requires exactly three values to express. In accordance with the widespread CG convention, XYZ coordinates are stored in the corresponding RGB channels (X in Red, Y in Green and Z in Blue). And since all we need to encode is orientation data, an integer format suffices, although 16-bit depth is highly preferable to avoid banding.
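
The encoding itself is just a remap from the [-1, 1] component range to the image's value range. A minimal numpy sketch (assuming an unsigned 16-bit target, as suggested above):

```python
import numpy as np

def encode_normals_rgb16(normals):
    """Remap unit normals from [-1, 1] to an unsigned 16-bit RGB image.

    `normals` is an (H, W, 3) float array of XYZ vectors; X goes to Red,
    Y to Green, Z to Blue, following the usual CG convention.
    """
    remapped = normals * 0.5 + 0.5                  # [-1, 1] -> [0, 1]
    return np.round(remapped * 65535).astype(np.uint16)

def decode_normals_rgb16(image):
    """Inverse mapping: 16-bit RGB back to float XYZ in [-1, 1]."""
    return image.astype(np.float32) / 65535.0 * 2.0 - 1.0
```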

This orientation can be expressed in terms of World Space (relative to the global coordinates of the 3D scene) or Screen Space (Tangent Space if we're talking about baking textures). The latter option is typically the most useful output for compositing or texture baking, although the former has some advantages as well (like masking out parts of the scene based on global directions, e.g. all faces pointing upwards regardless of camera location).
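
Converting between the two spaces is only a change of basis. A rough sketch, assuming a rigid (rotation-only) view matrix so that normals can be rotated directly:

```python
import numpy as np

def world_to_camera_normals(normals_world, world_to_camera):
    """Re-express world-space normals in camera (screen) space.

    `world_to_camera` is the 3x3 rotation part of the camera's view matrix;
    normals are direction vectors, so translation is ignored. `normals_world`
    is (H, W, 3) and the result has the same shape.
    """
    return normals_world @ world_to_camera.T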

Closer examination is provided in the header image above.


The applications

The uses for all this treasure are many. For one, since this is the same data shaders use to light a surface, compositing packages offer relighting tools that simulate local directional lighting and reflection mapping based solely on the rendered normals. This method won't generate any shadows, but is typically lightning fast compared to going back to a 3D program and re-rendering.
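
Under the hood, such a directional relight is little more than the Lambert term computed per pixel from the normals pass. A minimal numpy sketch (names and conventions are illustrative, not any specific package's API):

```python
import numpy as np

def relight_directional(normals, light_dir, light_color):
    """Fake a directional light in comp from a decoded Normals pass.

    `normals` is (H, W, 3) in [-1, 1]; `light_dir` points towards the light.
    No shadows: every pixel is lit as if nothing occludes it.
    """
    l = np.asarray(light_dir, dtype=np.float32)
    l = l / np.linalg.norm(l)
    n_dot_l = np.clip(normals @ l, 0.0, 1.0)        # Lambert term per pixel
    return n_dot_l[..., None] * np.asarray(light_color, dtype=np.float32)
```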

Normals can also be used as one of several inputs (alongside others like World Position or ZDepth) for more complex relighting in 2D.
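
For instance, adding a World Position pass allows a local point light with distance falloff. A rough sketch under the same assumptions as above:

```python
import numpy as np

def relight_point(normals_world, position_world, light_pos, light_color):
    """Local point light in comp, using Normals plus a World Position pass.

    Both passes are (H, W, 3) float arrays in world space; falloff is the
    usual inverse-square law. Still no shadows or bounced light.
    """
    to_light = np.asarray(light_pos, dtype=np.float32) - position_world
    dist2 = np.sum(to_light * to_light, axis=-1, keepdims=True)
    l = to_light / np.sqrt(dist2)
    n_dot_l = np.clip(np.sum(normals_world * l, axis=-1, keepdims=True), 0.0, 1.0)
    return n_dot_l / np.maximum(dist2, 1e-6) * np.asarray(light_color, np.float32)
```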

It's easy to notice that the antialiased edges of the Normals AOV contain blended vectors that don't correspond to any real surface orientation, and thus become a source of relighting artifacts. One way to partly fight this limitation is upscaling the image plates before the relighting operation and downscaling them back afterwards. This obviously slows down the processing, so it should be enabled only just prior to the final render of the composition.
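
A sketch of that upscale/relight/downscale wrapper, assuming scipy is available for the resampling and that `relight_fn` is any per-pixel relighting operation taking and returning an (H, W, 3) image:

```python
import numpy as np
from scipy.ndimage import zoom

def relight_supersampled(normals, relight_fn, factor=2):
    """Relight at a higher resolution, then scale the result back down.

    Up-scaling dilutes the 'wrong' vectors along antialiased edges before
    the relighting runs, at the cost of speed, so this is best enabled only
    for the final render of the comp.
    """
    big = zoom(normals, (factor, factor, 1), order=1)       # bilinear upscale
    # Re-normalise, since interpolation shortens the vectors slightly.
    big /= np.maximum(np.linalg.norm(big, axis=-1, keepdims=True), 1e-6)
    lit = relight_fn(big)
    return zoom(lit, (1.0 / factor, 1.0 / factor, 1), order=1)
```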

Manipulating the tonal ranges (grading) of individual channels or color keying the Normals pass can generate masks for further adjustments on the main image (the Beauty render) based on surface orientation: for example, brightening up all the surfaces facing left of the camera, or deriving a Fresnel mask from the blue channel.
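
Those two example masks might be built roughly like this (assuming a screen-space pass with X pointing camera-right and Z towards the camera; actual conventions vary by renderer):

```python
import numpy as np

def orientation_masks(normals_screen):
    """Build adjustment masks from a screen-space Normals pass.

    The red channel isolates surfaces turned towards camera-left, and the
    blue channel gives a cheap Fresnel-style facing ratio.
    """
    x, z = normals_screen[..., 0], normals_screen[..., 2]
    facing_left = np.clip(-x, 0.0, 1.0)          # surfaces facing camera-left
    fresnel = 1.0 - np.clip(z, 0.0, 1.0)         # bright at grazing angles
    return facing_left, fresnel
```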

All of this, empowered by gradient mapping and adding the intermediate results together, provides quite an extensive image manipulation and relighting toolkit, capable of producing pictures like the following one on its own.



This image was created in compositing using only the Normals render element and the individual objects' masks, as discussed earlier.
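
The gradient mapping mentioned above is simply pushing a grayscale mask through a color ramp. A small illustrative sketch (the stops and colors here are arbitrary examples):

```python
import numpy as np

def gradient_map(mask, stops, colors):
    """Map a grayscale mask through a color ramp (gradient mapping).

    `stops` is a sorted 1-D array of mask values, `colors` the matching
    (N, 3) RGB array; each channel is interpolated independently.
    """
    return np.stack(
        [np.interp(mask, stops, colors[:, c]) for c in range(3)], axis=-1
    )

# Example: tint a Fresnel mask from deep blue shadows to warm highlights,
# then add it over other intermediate layers when building up the image.
# rim = gradient_map(fresnel, np.array([0.0, 1.0]),
#                    np.array([[0.02, 0.03, 0.10], [1.0, 0.85, 0.6]]))
```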

*Bump vs Normals (instead of a post scriptum)

Bump mapping is nothing more than modifying the surface normals based on a height map at render time, so that the shader does its usual calculations but with these new, modified normals as the input. Everything else stays the same. This also means that bump mapping and normal mapping techniques are essentially the same thing, and the corresponding maps can easily be converted into one another. Bump textures have the benefit of being easier to edit and usable for displacement, while normal maps are more straightforward for the GPU to interpret, which is their main advantage for real-time applications.
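
As an illustration of how closely the two are related, here is a rough numpy sketch of converting a height map into a tangent-space normal map (the gradient-based formula below is one common approach, not the only one):

```python
import numpy as np

def height_to_normals(height, strength=1.0):
    """Convert a bump/height map into a tangent-space normal map.

    Normals are derived from the height gradients; `strength` scales the
    apparent bump depth. Output vectors are in [-1, 1], ready to be
    remapped to RGB with the encoding shown earlier.
    """
    h = height.astype(np.float32)
    dy, dx = np.gradient(h)
    n = np.stack([-dx * strength, -dy * strength, np.ones_like(h)], axis=-1)
    return n / np.linalg.norm(n, axis=-1, keepdims=True)
```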

The Normals element we've been studying here could be rendered either with or without bump modifications applied.
