|Base image for the examples in this article|
I'm finally returning to posting the original manuscripts of the articles I wrote for 3D World magazine in 2014. This one was first published in issue 186 under the title "The Mighty Multimatte".
In an earlier piece we looked at raster images as data containers that can store various supplementary information alongside the pictures themselves. One of the most straightforward uses of these capabilities is rendering masks.
A lot can be done to an image after it has been rendered; in fact, contemporary compositing packages even allow us to move a good deal of classical shading into post, often offering a much more interactive workflow. But even if you prefer to polish your beauty renders inside the 3D software until they come out needing no extra touches, there can still be urgent feedback from the client or one last little color tweak to apply under time pressure, and compositing becomes a lifesaver again.
The perfect matte
However, the success of most compositing operations depends on how many elements you can isolate procedurally (that is, without tracing them manually) and, no less importantly, with what precision (which above all means antialiasing).
What we are looking for is an antialiased matte with a value of exactly 1.0 (white) for pixels completely occupied by the object of interest, exactly 0.0 (black) for the rest of the image,* and antialiasing identical to that of the beauty render.
*Mask values above one or below zero cause no fewer problems than gray ones do.
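The role of such a matte in comp can be sketched as a simple per-pixel blend between the original and an adjusted image, which also shows why out-of-range values are dangerous. All numbers below are made up for illustration:

```python
import numpy as np

def blend(original, graded, matte):
    """Linear per-pixel mix: matte 1.0 takes the graded pixel fully,
    matte 0.0 keeps the original pixel."""
    return original * (1.0 - matte) + graded * matte

original = np.array([0.2, 0.5, 0.8])     # three sample pixel values
graded = original * 0.5                  # some hypothetical color tweak

good_matte = np.array([0.0, 1.0, 0.5])   # values within [0, 1]
bad_matte = np.array([0.0, 1.2, -0.1])   # out-of-range values

good = blend(original, graded, good_matte)  # stays between the two inputs
bad = blend(original, graded, bad_matte)    # overshoots past both inputs
```

With matte values inside [0, 1] every result lies between the original and the graded pixel; a value like 1.2 or -0.1 pushes the result past both inputs, producing halos or illegal pixel values.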
The power of three
It is easy to notice that all this data requires only one raster channel to be stored. Thus a regular RGB image can naturally preserve quality mattes for three objects at a time. It only takes applying Constant shaders of pure Red (1,0,0), Green (0,1,0) and Blue (0,0,1) colors to the objects of interest and a black (0,0,0) Constant shader to the rest of the scene. Render layers functionality (implemented in every contemporary 3D package I can think of) comes in very handy here. You might want to turn off slower features like shadows and GI for just the masks element, although typically setting all the shaders in the scene to Constant is already enough for the render engine to optimize itself sufficiently.*
*Due to smart sampling, the antialiasing of the matte element might not be exactly the same as in the beauty pass; still, this is normally the closest one can practically get.
Alternatively, some renderers offer a pre-designed output variable (like MultiMatte in V-Ray) to render masks in a similar way. More channels (like Alpha, ZDepth, Motion Vectors or any arbitrary extra image channels) could of course be used for storing more mattes in the same image file, but typically it is not worth the time and inconvenience to set up first and extract later, compared to simply adding more RGB outputs to isolate more objects. Compositing applications and image editors naturally provide tools to easily use any of the RGBA channels as a mask for an operation, which is another reason to stick with those. (In Photoshop, for instance, it only takes a Ctrl-click in the Channels panel to turn one into a selection.)
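Reading the channels of such a matte render back as masks is trivial. Here is a rough NumPy sketch with a hypothetical 2x2 render where object A got the Red Constant shader and object B the Blue one:

```python
import numpy as np

# Hypothetical 2x2 matte render: object A in Red, object B in Blue,
# black background. The top-right pixel sits on the antialiased edge
# shared by both objects, so it carries fractional coverage.
matte_rgb = np.array([
    [[1.0, 0.0, 0.0], [0.4, 0.0, 0.6]],
    [[0.0, 0.0, 0.0], [0.0, 0.0, 1.0]],
])

mask_a = matte_rgb[..., 0]  # Red channel isolates object A
mask_b = matte_rgb[..., 2]  # Blue channel isolates object B
```

Note that on the shared edge pixel the two coverages sum to 1.0, which is exactly the property that lets later adjustments line up seamlessly.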
An example of two objects isolated into the Red and Blue channels with Constant shaders.
What to mask?
Unless we're isolating parts of the scene with a specific purpose in mind, the main guiding question is: what will most likely require adjustments? Those are either the parts of the scene in doubt or simply the most important ones. Thus, by default, the foreground or hero object deserves its own matte (a channel). Grouping objects into mask channels based on their materials or surface qualities is useful as well, since it allows adjusting in terms of “highlights on metals” or “the color of liquids”.
But the most in-demand masks are usually those separating meaningful objects and their distinct parts, especially parts of similar color, since those are tricky to isolate with keying.
When working on a sequence of animated shots, consistently using the same colors for the same objects from one shot to another becomes a very useful habit. This way, the same compositing setup can be propagated to new shots most painlessly. It is generally better to add more MultiMatte outputs to the scene but stay consistent, rather than trying to fit only the masks needed for each shot into one image every time, so that in one shot a channel (let's say Green) would isolate the character, and in another a prop.
The world is imperfect though, and sometimes in a crunch there is simply no time to render the proper additional elements (or AOVs: Arbitrary Output Variables). That is the time to manipulate the existing data in comp to compensate for the missing ones. Individual mattes can be added, subtracted, inverted and intersected to produce new ones. Every AOV can be useful in compositing in one way or another, and any non-uniform pass can provide some masking information to be extracted; it only takes thinking of them as sources of masking data and understanding what exactly is encoded within each particular element (which we are going to touch upon in the following few issues).
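Those matte combinations boil down to simple arithmetic. A quick NumPy sketch, with mask values assumed to already sit in [0, 1]:

```python
import numpy as np

# Two hypothetical one-channel mattes sharing an antialiased edge pixel.
a = np.array([1.0, 0.5, 0.0])
b = np.array([0.0, 0.5, 1.0])

union = np.clip(a + b, 0.0, 1.0)        # add: pixels covered by either
difference = np.clip(a - b, 0.0, 1.0)   # subtract: in a but not in b
inverted = 1.0 - a                      # invert: everything except a
intersection = a * b                    # intersect: covered by both
```

Clipping after the add and subtract keeps the results legal, so the derived mattes stay within [0, 1] just like the originals.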
Now let's look at some dangers hidden along the way. The biggest pitfall is corrupting the edges of the matte (through over-processing in post or through the way it was rendered). 3D applications often offer some fast way of rendering object IDs (mattes) out of the box, like assigning a random color or a discrete floating-point luminance value to each object. Though it might be faster to set up than a proper MultiMatte-like pass, the temptation should generally be avoided. With totally random colors per object, the only way to procedurally separate one mask from another is keying, which will often be limited by the close colors of other objects and thus quite crude.
The illustration above shows that you cannot isolate one of the mattes stored sequentially by luminance while preserving the same antialiasing over different objects.
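The ambiguity is easy to demonstrate with plain numbers. The ID values below are made up for illustration: background 0.0, object A 0.5, object B 1.0:

```python
id_bg, id_a, id_b = 0.0, 0.5, 1.0

# An edge pixel covered half by A and half by B...
edge_a_b = 0.5 * id_a + 0.5 * id_b

# ...renders to the very same value as a pixel 75%-covered by B
# over the background:
edge_b_bg = 0.75 * id_b + 0.25 * id_bg

# Both come out as 0.75: the coverage information is irrecoverably lost,
# so no keyer can rebuild the correct antialiased mask for either object.
```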
Even when assigning “the extended matte colors” (Red, Green, Blue, Yellow, Cyan, Magenta, Black, White, Alpha) instead of totally random ones, in order to store more mattes in a single image and separate them later with various channel combinations rather than color keying, the quality of the edges is still endangered, although the results are typically much better. *This method should not be confused with the aforementioned usage of yellow, which was still within a “one channel – one object” paradigm.
Needless to say, any method of rendering masks/IDs without antialiasing is a no-go.
Divide and conquer
However, if you are heading towards really heavy post-processing, it often becomes safer to render an object and its background separately. The trick in this case is not to forget the shadows and reflections, which means utilizing matte/shadow/ghost objects and primary (camera) ray visibility functionality rather than just hiding parts of the scene.
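The separately rendered foreground then goes back on top of the background with the standard premultiplied "over" operation. A minimal sketch, assuming premultiplied RGB and a proper antialiased alpha (the pixel values are made up):

```python
def over(fg_rgb, fg_alpha, bg_rgb):
    """Premultiplied 'over': the foreground pixel plus the background
    scaled by whatever the foreground's alpha leaves uncovered."""
    return [f + b * (1.0 - fg_alpha) for f, b in zip(fg_rgb, bg_rgb)]

# A half-transparent dark-red foreground pixel over a white background:
comp = over([0.2, 0.1, 0.0], 0.5, [1.0, 1.0, 1.0])
```

Because the foreground carries its own alpha, the antialiased edges survive this operation intact, which is the whole point of rendering the elements separately.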