Render Elements: UVs

Continuing on the topic of AOVs with another brief anatomical study. This article closes my series on post-render image manipulation; I believe and hope that an understanding of other AOVs like Z-Depth, Direct/Indirect Lighting passes or World/Rest Position can be easily derived from the principles already discussed, common sense and the Internet. The following video could serve as a good example of these principles put to use.



Now let's take a look at the UVs...

One less common yet extremely powerful render element is the texture coordinates (or UVs). 3D software typically doesn't provide a preset output for them out of the box; however, it is quite easy to create one manually and enjoy the benefits of post-render texturing and other useful tricks.


The UVs

For each point of a three-dimensional surface, texture coordinates are nothing more than two numbers*. These numbers define an exact location within a regular 2D image, thus creating a correspondence that matches every surface point with a point of a texture. This defines precisely how a 2D texture image should be placed over a model's 3D surface.**

In fact, UV unwrapping, projecting, pelting and editing are simply the methods by which we define this correspondence within the software.

*Three numbers or UVW coordinates are used for volumetric texturing (like when applying procedural 3D textures).
**The correspondence is unique in one direction only: multiple parts of a 3D model can be matched to the same piece of a 2D texture.

The UVs are measured relative to the borders of a picture and are therefore independent of its resolution, allowing them to be reused with any image of any size and proportions. The first of the two numbers defines the horizontal coordinate (0 means the left edge of the picture, 1 the right one). The second describes the vertical position within the raster in a similar manner: 0 means the bottom, 1 the top (see the second illustration below).

Texture coordinates locate surface points without snapping to texture pixels (known as texels): a point can be mapped to the center of a texel, its border or anywhere within its area.
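As a small illustration of this convention, here is a minimal Python sketch (the helper name uv_to_pixel is my own, not part of any package) that converts a UV pair into pixel coordinates for a texture of a given resolution:

def uv_to_pixel(u, v, width, height):
    # u = 0 is the left edge of the image, u = 1 the right one;
    # v = 0 is the bottom edge, v = 1 the top one
    x = u * (width - 1)
    y = (1.0 - v) * (height - 1)  # flip V, since raster rows are usually counted from the top
    return x, y

Note that the result is deliberately fractional: as stated above, a point may land anywhere within a texel, and a proper lookup would filter (bilinearly, for example) between neighbouring texels.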

The shader

Since raster images store nothing but numbers (and often within that very range of zero to one), the surface's texture coordinates can be rendered out as an additional element, just like the color of the main “Beauty” pass, ZDepth or Normals. The Red and Green channels of this element are traditionally used for storing the U (horizontal) and V (vertical) values respectively.*

*The Blue channel stays free and can be used to encode some additional data (like an object's mask or ambient occlusion).



Rendered UV pass of a 3D scene
This is what the resulting render element looks like. Red and Green values display the exact UV coordinates for each pixel, which in turn allows us to map a texture after the render is actually done.


All it takes to create this AOV is a constant shader with the double-gradient texture shown in the following illustration (the target object has to be UV-mapped, of course). This texture simply represents the UV coordinate tile as an image, acting as a sort of indicator when rendered. Due to the very high precision required of the produced values for later texture mapping, it is highly preferable to have it in floating point and at infinite resolution – that is, to create it procedurally by literally adding two gradients (horizontal black-to-red and vertical black-to-green) within the shading network.

Anatomy of a UV tile

A tile of texture coordinates represented as RGB colors. Rendering objects with this texture generates the UVs render element. Corner RGB values and individual channels are shown for reference.
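For reference, a raster approximation of that double gradient can be put together in a few lines of NumPy. This is only a sketch to show the idea (the resolution and function name are arbitrary); inside the shading network the same thing should stay procedural and resolution-independent, as noted above:

import numpy as np

def uv_ramp(width, height):
    # Red encodes U (0 at the left edge, 1 at the right),
    # Green encodes V (0 at the bottom, 1 at the top), Blue stays free
    img = np.zeros((height, width, 3), dtype=np.float32)
    img[..., 0] = np.linspace(0.0, 1.0, width, dtype=np.float32)[np.newaxis, :]
    img[..., 1] = np.linspace(0.0, 1.0, height, dtype=np.float32)[::-1, np.newaxis]
    return img  # keep it in a float format like EXR to preserve precision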

The applications

The main point of outputting UV coordinates as a render element is the ability to quickly reapply any texture to an already rendered object in compositing, using specialized tools like the Texture node in Fusion or STMap in Nuke.

Textures applied in compositing

An image textured in compositing using the UVs render element and the individual objects' masks, as discussed in earlier posts.
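To show what such a tool does under the hood, here is a hedged NumPy sketch of the lookup itself. It is nearest-neighbour only and assumes the conventions used throughout this post (Red = U, Green = V); the actual Texture and STMap nodes filter the result and handle edge cases far more gracefully:

import numpy as np

def apply_texture(uv_pass, texture):
    # uv_pass: float array (H, W, 3) rendered with the constant UV shader
    # texture: the new image to be mapped onto the rendered object
    th, tw = texture.shape[:2]
    u = np.clip(uv_pass[..., 0], 0.0, 1.0)
    v = np.clip(uv_pass[..., 1], 0.0, 1.0)
    x = (u * (tw - 1)).round().astype(int)          # texture column
    y = ((1.0 - v) * (th - 1)).round().astype(int)  # texture row, V = 0 is the bottom
    return texture[y, x]

Multiplying the result by a per-object mask (as discussed in the earlier posts) keeps the new texture confined to the intended object.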


Like pretty much all post-shading techniques, this one has certain limitations. It is not really suitable for semi-transparent objects and works best on simpler isolated forms.

Using a constant shader ensures that the coordinate information is rendered precisely – unaffected by lighting. However, antialiasing of the edges introduces color values that do not really correspond to the UV information, which naturally leads to artifacts in the post texture projection. A typical way of partly fighting these is to upscale the UVs AOV before processing and downscale it afterwards, which can be turned on right before rendering the final composite to keep the project more interactive. Rendering aliased samples directly into a high-resolution raster might be a more proper solution, though a more demanding one as well.
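A rough sketch of that workaround, assuming OpenCV for the resizing, the apply_texture helper from the previous sketch and an arbitrary scale factor:

import cv2

def remap_supersampled(uv_pass, texture, factor=4):
    # enlarge the UV pass, do the lookup at the higher resolution,
    # then filter the result back down to soften the edge artifacts
    h, w = uv_pass.shape[:2]
    big_uv = cv2.resize(uv_pass, (w * factor, h * factor), interpolation=cv2.INTER_LINEAR)
    big = apply_texture(big_uv, texture)
    return cv2.resize(big, (w, h), interpolation=cv2.INTER_AREA)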

And still the advantages are many. Post-texturing becomes especially powerful when the scene is rendered in passes with lighting information separated from the textures, so that it can be reused with new ones. In fact, much of the lighting can be recreated in post as well from additional elements like Normals, World Position and ZDepth. All this, in turn, allows for creating procedural scene templates in compositing. As a basic example, imagine an animation of a book with turning pages which needs to be re-rendered with new textures monthly. A good compositing setup utilizing various render elements would require the actual 3D scene to be rendered only once, leaving most further changes purely in comp, which typically offers much more interactivity and lower render times.

3D scene completely textured and shaded in compositing from AOVs

Lighting developed for the previous article solely from the Normals AOV, applied to the same image as a simple example of a procedural shading setup completely assembled in compositing.


Taking it further, the texture coordinate plate from the second illustration (the UV tile) can effectively serve as a tool for measuring and capturing any deformation within screen space. Modifying that image with any transformations, warps, bends and distortions results in a raster that “remembers” those deformations precisely, so that they can be recreated on any other image with the same tools used for post-render texture mapping. The method is often used for storing lens distortion, for instance. Another practical application is to render our magic shader refracted in another object, thus creating a refraction map rather than the usual surface UVs element.
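Under the assumptions of the earlier sketch, the very same lookup doubles as a generic warp tool:

# distorted_uv_plate: the UV tile after being pushed through a lens-distortion
# (or any other) transform; any image fed through it inherits the same deformation
warped = apply_texture(distorted_uv_plate, some_image)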
