Sunday, February 8, 2015

Evaluating a Particle System: checklist

Below is my original manuscript of what was first published as a two-part article in issues 183 and 184 of 3D World magazine. Worse English and a few more pictures are included, plus a good deal of techniques and approaches squeezed between the lines.


Part 1

Most 3D and compositing packages offer some sort of particle system toolset. They usually come with a nice set of examples and demos showing all the stunning things you can do within the product. However, the way to really judge a toolset's capabilities is often not by the things the software can do, but by the things it cannot. And since practice shows it is not so easy to think ahead of everything one might miss in real production, I have put together this checklist.

Sufficiently flexible software allows for quite complex effects,
like this spiral galaxy created with particles alone.

Even if some possibilities are not obvious out of the box, you can usually make particles do pretty much everything with the aid of scripting or application-specific techniques. It often requires adjusting your way of thinking to a particular tool's paradigms, and I personally find familiarity with different existing products a big help here. So even if you have already made up your mind about a specific program, you might still find the following list useful as a set of exercises: figuring out how to achieve the described functionality within your app.


Overall concerns

The first question is whether it is a two- or three-dimensional system, or whether it allows for both modes. A 2D-only solution is limited by definition; however, it can offer unique speed optimizations and convenient features like per-particle blur control, extended support for blending modes and the ability to control particles directly with image data. The ability to dial in existing 3D camera motion is quite important in a real production environment.

In general, it is all about control. The more control you have over every conceivable aspect of a particle's life, the better. And it is never enough, since the tasks at hand are typically custom by their very nature. One distinctive aspect of this control is data flow. How much data can be transferred into the particle system from outside, passed along inside and output back at the very end? Which particle properties can it affect? We want it all.

The quest for control also means that if the system doesn't have some kind of modular arrangement (nodes, for instance), it is likely to be limited in functionality.

Examples of particle nodes in Houdini (above) and Fusion (below)
 

Emission features

Following the good old-fashioned tradition of starting at the beginning, let's start with the particle emission.

What are the predefined emitter types and shapes and, most importantly, does it allow for user-defined emitter shapes? You can only get so far without custom sources – input geometry or image data increase the system's flexibility enormously. Emitting from custom volumes opens up great possibilities as well. What about emission from animated objects? Can the emitter's velocity and deformation data be transferred to the particles being born? For geometry input, particle birth should be possible from both the surface and the enclosed volume of the input mesh, and then we often want some way of restricting it to certain areas only. To achieve real control, texture information needs to be taken into account as well.

Geometry normally allows for cruder control than image data, so we want all kinds of particle properties (amount, size, initial velocity, age, mass, etc.) to be controllable through textures, using as many of the texture image's channels as possible. For instance, you might want to set the initial particle direction with vectors stored in the RGB channels of an image, use the Alpha or any other custom channel to control size, and use the emitter's texture to assign particles to groups for further manipulation. The same applies to driving particle properties with different volumetric channels (density, velocity, heat) or dialing an image directly into a 2D particle system as a source.
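
A minimal sketch of the idea, not tied to any particular package: emitter texture channels mapped onto initial particle attributes, with RGB read as a direction vector, alpha as a size multiplier and also as a grouping criterion. The sample() helper, the attribute names and the tiny in-memory texture are all made up for illustration.

```python
import numpy as np

texture = np.random.rand(64, 64, 4)          # stand-in RGBA emitter texture, values in 0..1

def sample(texture, u, v):
    """Nearest-neighbour lookup at normalized UV coordinates."""
    h, w, _ = texture.shape
    x = min(int(u * w), w - 1)
    y = min(int(v * h), h - 1)
    return texture[y, x]

def birth_particle(u, v):
    r, g, b, a = sample(texture, u, v)
    return {
        # remap RGB from 0..1 to -1..1 so it can encode any direction
        "velocity": np.array([r, g, b]) * 2.0 - 1.0,
        "size": 0.1 + 0.9 * a,                # alpha drives the size
        "group": "large" if a > 0.5 else "small",
    }

print(birth_particle(0.25, 0.75))
```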

Does your system allow you to create custom particle attributes and assign their values from a source's texture?


The look options

Now consider the options available for the look of each individual particle. Both custom geometry instancing and sprites are a must for a full-featured 3D particle system*. And it goes without saying that animation should be supported for both. Are there any additional special particle types available which bring speed benefits or enable additional techniques? One example would be single-pixel particles, which can be highly optimized and thus available in huge numbers (as in Eyeon Fusion, for instance), allowing for a whole set of quite unique looks.

*Rendering a static particle system as strands for representing hair or grass is yet another possible technique which some software might offer.

An effect created with millions of single-pixel particles
 

Another good example is metaballs – while each one is merely a blob on its own, when instanced over a particle system (especially if the particles can control their individual sizes and weights) metaballs become a powerful effects and modeling tool.

A particle system driving the metaballs


Whether using sprites or geometry, real power requires versatile control over the timing and randomization of these elements. Can an element's animation be offset for every individual particle, so that it starts when the particle is born? Can a random frame of the input sequence be picked for each particle? Can this frame be chosen based on the particle's attributes? Can the input sprite or geometry animation be stretched to the particle's lifetime? (So that if you have a clip of a balloon growing out of nowhere and eventually blowing up, you could match it to every particle such that no matter how long a particle lives, the balloon's appearance coincides with its birth and the blow-up exactly matches its death.)
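
A sketch of the two timing modes just described, assuming a sprite sequence of `clip_length` frames; the mode names are invented for the example. "offset" starts the clip at the particle's birth, "stretch" maps the whole clip onto the particle's lifetime so the last frame always lands on its death.

```python
def sprite_frame(age, lifetime, clip_length, mode="stretch"):
    if mode == "offset":
        # play the clip once from the particle's birth, hold the last frame
        return min(int(age), clip_length - 1)
    # "stretch": map the whole clip onto the particle's lifetime
    t = age / lifetime                                 # normalized age, 0..1
    return min(int(t * clip_length), clip_length - 1)

# a particle living 50 frames and one living 200 frames both end on frame 23
print(sprite_frame(49, 50, 24), sprite_frame(199, 200, 24))
```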

With a good level of randomization and timing management, animated sprites/meshes are quite powerful in creating many effects like fire and crowds.


Rotation and spin

The last set of controls we're going to touch on in this first part are rotation and spin options. Although an “always face camera” mode is very useful and important, it is equally important to be able to exchange it for a full set of 3D rotations, even for flat image instances like sprites (think small tree leaves, snowflakes or playing cards). A frequently required task is to have elements oriented along their motion paths (shooting arrows, for example). And of course, an easy way to add randomness and spin, or to drive those through textures or other particle attributes, is of high value.

Next time we'll look further at the toolset required to efficiently drive particles later in their lives.


 
Part 2

Now we're taking a look at the further life of a particle. The key concept and requirement stay the same: maximum control over all conceivable parameters, and uninterrupted data flow through a particle's life and between the different components of the system.

The first question I would ask after emission is how many particle properties can be controlled along with, and by, their age. Color and size are a must, but it is also important for age to be able to influence any other arbitrary parameter, and in a non-linear fashion (like plotting an age-to-parameter dependency with a custom curve). For example, when building a dust cloud with sprites you might want to increase their size while decreasing opacity towards the very end of their lifetime.
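
A minimal sketch of the dust-cloud example: size and opacity driven by normalized age through simple custom curves. The particular curves here are arbitrary; in a real tool this would be a ramp or spline the artist draws.

```python
def dust_look(age, lifetime):
    t = age / lifetime                      # normalized age in 0..1
    size = 1.0 + 3.0 * t ** 0.5             # grows fast early, then settles
    opacity = (1.0 - t) ** 2                # eases out towards death
    return size, opacity

for frame in (0, 25, 50, 75, 100):
    print(frame, dust_look(frame, 100))
```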

Can custom events be triggered at certain points of a particle's lifetime? Can the age data be passed on to those events?


Spawning

Spawning (emitting new particles from existing ones) is key functionality for a particle system. Its numerous applications include changing the look of a particle based on events like collisions or parameters like age, creating different kinds of bursts and explosions, and creating all sorts of trails. The classic fireworks effect is a good example where spawning is used in at least two ways: it creates the trails by generating new elements behind the leading ones, and it produces the explosion by generating new leads from the old ones at the desired moment.

In a fireworks effect spawning is used to create both the trails and the explosion


Just like with the initial emission discussed last time, it is paramount to be able to transfer as much data as possible from the old particles to the new ones. Velocity, shape, color, age and all custom properties should be easily inheritable if required, or possible to set from scratch as an option.

The last but not least spawning option to name is recursion. A good software solution lets the user choose whether newly spawned elements are themselves used as a base for spawning in each subsequent time-step (i.e. whether to spawn recursively). Although a powerful technique, recursive spawning can quickly get out of hand as the number of elements keeps growing exponentially.
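
A quick illustration of why recursive spawning explodes, using arbitrary numbers: with every existing particle spawning a couple of children per step, and the children spawning too, the count grows geometrically.

```python
count = 100
rate = 2                       # new particles spawned per particle per step
for step in range(6):
    print(step, count)
    count += count * rate      # recursive: children spawn on the next step
# prints 100, 300, 900, 2700, 8100, 24300 - six steps, 243x the particles
```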


Behavior control

Forces are the standard way of driving motion in a particle system. The default set of directional, point, rotational, turbulent and drag forces aside*, it is important to have easily controllable custom force functionality with visual control over its shape. The ability to use arbitrary 3D objects or images as forces comes in very handy here.

*The often overlooked drag force (sometimes called friction) plays a very important role, as it counteracts the other forces and keeps them from growing out of control.
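
A minimal sketch of the footnote's point: a plain explicit Euler step in one dimension (arbitrary units), where drag opposes the current velocity so the other forces settle to a terminal speed instead of accelerating forever.

```python
def integrate(position, velocity, forces, drag, dt, mass=1.0):
    acceleration = sum(forces) / mass - drag * velocity   # drag opposes motion
    velocity = velocity + acceleration * dt
    position = position + velocity * dt
    return position, velocity

pos, vel = 0.0, 0.0
for _ in range(200):
    pos, vel = integrate(pos, vel, forces=[2.0], drag=0.5, dt=0.1)
print(round(vel, 3))   # settles near force/drag = 4.0 instead of growing without bound
```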

Forces raise the next question: how much can the system be driven by physical simulation? Does it support collisions? If so, what types of collision objects are supported, what are the options for post-collision behavior, and how much data can a particle exchange with the rest of the scene in a collision?

Can further physical phenomena like smoke/combustion or fluid behavior be simulated within the particle system? Can this kind of simulation data be dialed into the system from outside? One efficient technique, for instance, is to drive the particles with the results of a low-resolution volumetric simulation, using the particles to add detail to it.

The particle effect above uses the low-resolution 
smoke simulation shown below as the custom force


The next things commonly required for directing particles are follow-path and find-target functionality. Support for animated paths/targets is valuable here, as is the ability to force particles to reach the goal within a certain timeframe.

Many interesting effects can be achieved if the particles have some awareness of each other (like knowing who their nearest neighbor is). Forces like flocking can then be used to simulate collective behavior.


Limiting the effect

For any force or event (including spawning) which may be added to the flow, let's now consider the limiting options. What are the ways to restrict the effect of each particular operator? Can it be restricted to a certain area only? A certain lifespan? Can it be limited by custom criteria like mathematical expressions, arbitrary particle properties or a probability factor? How much control does the user have over the active area of each operator – custom shapes, textures, geometry, volumes? Is there a grouping workflow?

Groups provide a powerful technique for controlling particles. The concept is that at creation, or further down its life, a particle can be assigned to some group, and each effect can then simply be limited to operate on the chosen groups only. For efficient work, all the limiting options just discussed should be available as criteria for group assignment. Furthermore, the groups themselves should be subject to logic operations (like subtraction or intersection), should not be limited in number, and should not claim exclusive ownership of a particle. For example, you might want to group some particles based on speed, others based on age, and then create yet another group: the intersection of the first two.
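
A sketch of the grouping idea using plain Python sets as a stand-in for a real grouping workflow: particles are assigned to groups by arbitrary criteria (the thresholds here are invented), and the groups are combined with logic operations before an effect is applied to the result.

```python
particles = [{"id": i, "speed": i * 0.1, "age": i % 7} for i in range(100)]

fast = {p["id"] for p in particles if p["speed"] > 5.0}
old  = {p["id"] for p in particles if p["age"] > 4}

fast_and_old = fast & old          # intersection of the two criteria
fast_not_old = fast - old          # subtraction

print(len(fast), len(old), len(fast_and_old), len(fast_not_old))
```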

 
Further considerations

The last set of questions I would suggest has less to do with the direct capabilities of a given system, yet they can make a huge difference in real deadline-driven production.

What is the maximum number of particles the system can manage interactively and render? Are there optimizations for certain types of elements? What kind of data can be output from the particle system for further manipulation? Can the results be meshed into geometry later, or used in another software package, for example? Can a particle system deform or otherwise affect the rest of the scene? Can it be used to drive another particle system?

Can the results of a particle simulation be cached to disk or memory? Can they be played backwards (is scrubbing back and forth across the timeline allowed)? Are there options for a pre-run before the first frame of the scene?

Does the system provide good visual aids for working interactively in the viewport? Can individual particles be manually selected and manipulated? This last question can often be a game-changer when, after days of building the setup and simulating, everything works except for a few stray particles which no one would really miss.

Aside from the presence of variation/randomization options, which should be available for as many parameters as possible, how stable and controllable is the randomization? If you reseed one parameter, will the rest of the simulation stay unaffected and preserve its look? How predictable and stable is the particle solver in general? Can the particle flow be split into several streams, or merged together from several?
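
One possible way to get that stable-reseeding behavior, sketched in plain Python: every parameter carries its own seed, and each per-particle value is derived only from that seed and the particle's id, so reseeding "size" cannot disturb anyone's "rotation". The seed-mixing constant is arbitrary.

```python
import random

def stable_random(parameter_seed, particle_id):
    # one independent value per (parameter, particle) pair
    return random.Random(parameter_seed * 1_000_003 + particle_id).random()

SIZE_SEED, ROTATION_SEED = 101, 202
before = [stable_random(ROTATION_SEED, i) for i in range(5)]

SIZE_SEED = 999                    # "reseed" the size parameter only
after = [stable_random(ROTATION_SEED, i) for i in range(5)]

print(before == after)             # True - the rotations are unaffected
```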

And as the closing point in this list for evaluating a particular software solution, it is worth considering the quality and accessibility of the documentation, together with the amount of available presets/samples and the size of the user base. Trying to dig through a really complex system like Houdini or Thinking Particles would be quite daunting without those.


Tuesday, February 3, 2015

Project Tundra

Project Tundra 01
01. Tundra

Since I find it very cool to call everything a project, here goes “Project Tundra” with some anagrams. Pretty much all visual elements (except for a couple of bump textures) are completely synthetic and generated procedurally with Houdini and Fusion. So almost no reality was sampled during the production of the series. Some clouds from these setups were used. 

The originals are about 6K in resolution. I'm a bit split between the desire to write that prints will follow shortly, and the desire not to lie. Let's say they will follow for sure if you're patient enough to follow the slowly evolving pace of this blog or some other social media I pretend to participate in.

Project Tundra 02 - Dun Art
02. Dun Art
Project Tundra 03 - Durant
03. Durant

Project Tundra 04 - Rat Dun
04. Rat Dun

Project Tundra 05 - Dauntr'
05. Dauntr'

Project Tundra 06 - Da Turn
06. Da Turn

Project Tundra - Details
And a little taste of the details.



Sunday, December 28, 2014

Bit Depth - color precision in raster images


Bit depth diagram

Last time we talked about encoding color information in pixels with numbers from a zero-to-one range, where 0 stands for black, 1 for white, and the numbers in between represent the corresponding shades of gray. (The RGB model uses three such numbers to store the brightness of the Red, Green and Blue components, representing a wide range of colors by mixing them.) This time let's address the precision of such a representation, which is defined by the number of bits a particular file format dedicates to describing that 0-1 range – the bit depth of a raster image.

Bits are the most basic units of information storage. Each can take only two values, which can be thought of as 0 or 1, off or on, absence or presence of a signal – or black or white in our case. Therefore using one bit per pixel (a 1-bit image) gives us a picture consisting only of black and white elements, with no shades of gray.*

*Of course, the two values can be interpreted as anything (for instance, you can encode two arbitrary colors with them, like brown and violet – but only two, with no gradations in between). For the most common purpose, which is representing a 0 to 1 gradation, 1 bit means black or white, and higher bit depths serve to increase the number of possible gray sub-steps.

But the great thing about bits is that when you group them together you get far more than the simple sum of the individuals, as each new bit does not add 2 more values to the group but instead doubles the number of available unique combinations. It means that if we use 3 bits to describe each pixel value, we get not 6 (=2*3) but 8 (=2^3) possible combinations. 5 bits can produce 32, and 8 bits grouped together result in 256 different numbers.
 
Possible values represented by 1 and 3 bits
Although each bit can represent only 2 values, 
even 3 of them grouped together would already 
result in 8 possible combinations.


That group of 8 bits is typically called a byte, which is another standard unit computers use to store data. This makes it convenient (although not necessary) to assign whole bytes to describing the color of a pixel, and one byte per channel is most commonly used. This is true for the majority of digital images in existence today, giving us a precision of 256 possible gradations from black to white (in either a monochrome picture or in each of the Red, Green and Blue channels for RGB), and it is what is called an 8-bit image in computer graphics, where bit depth is traditionally measured per color component. In consumer electronics the same 8-bit RGB image would be called 24-bit (True Color), simply because all 3 channels are counted together (higher numbers must look cooler for marketing). An 8-bit RGB image can reproduce 16777216 (=256^3) different colors, which results in color fidelity normally sufficient to not see any artifacts. Moreover, regular consumer monitors are physically not designed to display more gradations (in fact they may be limited to even fewer, like 6 bits per channel). So why would anyone bother and waste disk space/memory on files of higher bit depths?

The most basic example of when the 256 gradations of an 8-bit image are not enough is heavy color correction, which may quickly result in artifacts called banding. Rendering to a higher bit depth solves this issue, and normally 16-bit formats, with their 65536 gradations of gray, are used for the purpose. But even 10 bits, as in the Cineon/DPX format, gives 4 times the precision of the standard 8. Going above 2 bytes per channel, on the other hand, becomes impractical as the file size grows proportionally to the bit depth.*

Insufficient bit depth of an output device can be another 
cause of banding artifacts, especially in gradients. 
Adding noise can help fight this issue through dithering – 
a kind of fighting fire with fire...
*Whether float or integer, the size of a raster image in memory can be calculated as the product of the number of pixels (horizontal times vertical resolution), the bit depth and the number of channels. This way a 320x240 8-bit RGBA image occupies 320x240x8x4=2457600 bits, or 320x240x4=307200 bytes of memory. This does not give the exact file size on disk though, because first, there is additional data like a header and other metadata stored in an image file; and second, some kind of compression (lossless, like internal archiving, or lossy, like in JPEG) is normally used in image file formats to save disk space.
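
The footnote's arithmetic as a tiny helper (the second call is just an extra example of my own, a 16-bit RGB HD frame):

```python
def raster_size_bytes(width, height, bits_per_channel, channels):
    # raw, uncompressed size: pixels x bit depth x channels, converted to bytes
    return width * height * bits_per_channel * channels // 8

print(raster_size_bytes(320, 240, 8, 4))      # 307200 bytes, as above
print(raster_size_bytes(1920, 1080, 16, 3))   # 12441600 bytes, roughly 11.9 MB
```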


But regardless of the number of gradations (2, 4, 256, 65536, etc.), as long as we are using an integer file format these numbers all describe values within the range from 0 to 1. For instance, the middle gray value in sRGB color space (the color space of a regular computer monitor – not to be confused with the RGB color model) is around 0.5 – not 128 – and white is 1, not 255. It is only because the 8-bit representation is so popular that many programs measure color in it by default. But this is not how the underlying math works, and it can cause problems when trying to make sense of it... Take the Multiply blending mode, for example: it's easy to learn empirically that it preserves the color of the underlying layer in white areas of the overlay and darkens the picture under the dark areas – but what exactly is happening, and why is it called “multiply”? With black it makes sense – you multiply the underlying color by 0 and get 0, black – but why would it preserve white areas if white is 255? Multiplying something by 255 should make it way brighter... Well, because white is 1, not 255 (nor 4, nor 16, nor 65536...). And so with the rest of CG math: white means one.
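
A minimal illustration of the point, assuming the values happen to come from an 8-bit file (the function name is made up): normalize to the 0..1 range the math actually uses, multiply, and white leaves the base untouched while black crushes it.

```python
def multiply_blend(base_8bit, overlay_8bit):
    base = base_8bit / 255.0          # back to the 0..1 range the math works in
    overlay = overlay_8bit / 255.0
    return base * overlay * 255.0     # and back to 8-bit for display

print(multiply_blend(180, 255))       # white overlay: 180.0, unchanged
print(multiply_blend(180, 0))         # black overlay: 0.0
```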

The above paragraphs described how bit depth works in integer formats – defining only the number of possible variations between 0 and 1. Floating point formats are of a different kind. Bit depth does pretty much the same thing here – it defines the color precision – however, the numbers stored can be anything and may well lie outside of the 0 to 1 range: brighter than white (above 1) or darker than black (negative). Internally this works by using an exponent (effectively a logarithmic distribution of precision) and requires higher bit depths to achieve the same fidelity within the usually most important [0,1] range. Normally at least 16 or even 32 bits per channel are used to represent floating point data with enough precision. At the cost of memory usage, this allows for representing High Dynamic Range imagery, gives additional freedom in compositing, and makes it possible to store arbitrary numerical data in image files – the World Position pass, to name one.

This also means that integer formats always clip out-of-range pixel values. A quick way to test for clipping is to lower the brightness of a picture and see if any details get revealed in the overbright areas.

A 3D render of a sphere used to illustrate artifacts of insufficient color precision
Source image

Color banding and clipping artifacts
The same source image rendered to 8-bit integer, 16-bit integer and 16-bit float, with two different color corrections applied. Notice the color banding in the 8-bit version and the clipped highlights in the integer versions.
 
It is natural for a 3D renderer to work in floating point internally, so most often the risk of clipping arises when choosing a file format to save the final image. But even when dealing with low bit-depth or already clipped integer files, there are certain benefits in increasing their color precision inside the compositing software. (To the best of my knowledge, Nuke converts any imported source into a 32-bit floating point representation internally and automatically.) Such a conversion won't add any extra detail or quality to the existing data, but the results of your further manipulations will live in a better color space with fewer quantization errors (and a wider luminance range if you also convert integer to float). Moreover, you can quickly fake HDR data by converting an integer image to float and gaining up the highlights (bright areas) of the picture. This won't be a real replacement for properly acquired HDR, but should suffice for many purposes like diffuse IBL (Image Based Lighting). In other words, regardless of the output requirements, do your compositing in at least 16 bits, float highly preferable – final downsampling and clipping for output delivery is never a problem.
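
A rough sketch of the "fake HDR" trick, assuming numpy and a stand-in 8-bit image; the particular gain curve is an arbitrary choice for illustration, not a recipe from the article. The point is simply that, once in float, the bright areas can be pushed above 1.0.

```python
import numpy as np

ldr = np.random.randint(0, 256, (4, 4, 3))          # stand-in 8-bit image
img = ldr.astype(np.float32) / 255.0                # promoted to float, 0..1

highlights = np.clip(img - 0.8, 0.0, None) / 0.2    # 0 below 0.8, ramps to 1 at white
fake_hdr = img + highlights * 4.0                   # boost the bright areas past 1.0

print(float(fake_hdr.max()))                        # values can now exceed 1.0
```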

It is important to have a clear understanding of bit depth and integer/float differences to deliver renders of adequate quality and not get caught out during the post-processing stage later. Read up on the file formats and options available in your software. For instance, 16 bits can refer to both integer and floating point formats, which may be distinguished as “Short” (integer) and “Half” (float) in Maya. As a general rule of thumb, use 16 bits if you plan on extensive color grading/compositing, and make sure you render to a floating point format to avoid clipping if any out-of-range values need to be preserved (like details in the highlights, negative values in Z-depth, or if you simply use a linear workflow). 16-bit OpenEXR files can be considered a good color-precision/file-size compromise for the general case.

Happy and Merry everyone!

Monday, November 3, 2014

Pixel Is Not a Color Square

Raster images contain nothing but numbers in table cells

Continuing the announced series of my original manuscripts for 3D World magazine.
 
Thinking of images as data containers.


Although the raster image files filling our computers and lives are most commonly used to represent pictures (surprisingly), I find it useful for a CG artist to have yet another perspective – a geekier one. From that perspective, a raster image is essentially a set of data organized into a particular structure – to be more specific, a table filled with numbers (a matrix, mathematically speaking).

The number in each table cell can be used to represent a color, and this is how the cell becomes a pixel (short for “picture element”). Many ways exist to encode colors numerically. One (probably the most straightforward) is to explicitly define a number-to-color correspondence for each value (i.e. 3 stands for dark red, 17 for pale green and so on). This method was frequently used in older formats like GIF, as it allows for certain size benefits at the expense of a limited palette.

Another way (the most common one) is to use a continuous range from 0 to 1 (not 255!), where 0 stands for black, 1 for white, and numbers in between denote shades of gray of the corresponding lightness. (The 0-255 range of integers is only an 8-bit representation of zero-to-one, popularized by certain software products and harmfully misleading when it comes to understanding many concepts such as color math or blending modes.) This gives us a logical and elegantly organized way of representing a monochrome image with a raster file. The term “monochrome” happens to be more appropriate than “black-and-white”, since the same data set can be used to depict gradations from black to any other color depending on the output device – many old monitors were black-and-green rather than black-and-white.

Encoding custom data with images
A raster may contain data of a totally different kind. As an example, let's fill one table with the digits of pi divided by ten, and another with random values, and present both as images. The two data sets have distinct meanings, yet visually they represent the same thing – noise. And while the visual sense matches the numeric one in the second case, there is almost no chance of correctly interpreting the meaning of the first data set purely visually (as an image).

This system can easily be extended to the full-color case with a simple solution: each table cell can contain several numbers, and again there are multiple ways of describing a color with a few (usually three) numbers, each in the 0-1 range. In the RGB model they stand for the amounts of Red, Green and Blue light; in HSV, for hue, saturation and brightness respectively. But most importantly, those are still nothing but numbers which encode a particular meaning – and don't have to be interpreted that way.

Now to the “why it is not a square” part. The table which a raster image is tells us how many elements are in each row and column and in which order they are placed, but nothing about their shape or even proportions. We can form an image from the data in a file by various means, not necessarily with a monitor, which is only one possible output device. For example, if we took our image file and distributed pebbles of sizes proportional to the pixel values on some surface, we would still form essentially the same image.

Displaying raster image data with a set of pebbles
A computer monitor is only one of many possible 
ways to visualize raster image data.


And even if we took only half of the columns but instructed ourselves to use stones twice as wide for the distribution, the result would still show essentially the same picture with the correct proportions, only lacking half of the horizontal detail. “Instruct” is the key word here. This instruction is called the “pixel aspect ratio”, and it describes the difference between the image's resolution (number of rows and columns) and its proportions. It allows frames to be stored stretched or compressed horizontally, and is used in certain video and film formats.
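
The "instruction" in numbers, as a tiny sketch: display proportions come from the stored resolution times the pixel aspect ratio, not from the resolution alone. The anamorphic-style figures below are just an example.

```python
def display_size(stored_width, stored_height, pixel_aspect):
    # the stored columns are shown stretched by the pixel aspect ratio
    return stored_width * pixel_aspect, stored_height

# e.g. a frame stored as 960x1080 with a pixel aspect ratio of 2.0 displays as 1920x1080
print(display_size(960, 1080, 2.0))
```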

Pixel aspect ratio explained in a diagram with pebbles
In this example of an image stored with 
the pixel aspect ratio of 2.0, representing pixels 
as squares results in erroneous proportions (top). 
Correct representation needs to rely 
on the stretched elements like below.


Since we started on resolution: it shows the maximum amount of detail an image can hold, but says nothing about how much it actually holds. A badly focused photograph won't improve no matter how many pixels the camera sensor has. In the same way, upscaling a digital image in Photoshop or any other editor will increase the resolution without adding any detail or quality to it – the extra rows and columns are just filled with interpolated (averaged) values of the originally neighboring pixels.

In a similar fashion, the PPI (pixels per inch) parameter (commonly and mistakenly called DPI – dots per inch) is only an instruction establishing the correspondence between the image file's resolution and the output's physical dimensions. Thus it is pretty much meaningless on its own, without either of those two.
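
The same point in code: PPI only turns a pixel count into a physical size, and the same file "becomes" a different print size depending on the value, without a single pixel changing. The example numbers are arbitrary.

```python
def print_size_inches(width_px, height_px, ppi):
    return width_px / ppi, height_px / ppi

print(print_size_inches(3000, 2000, 300))   # 10.0 x 6.67 inches
print(print_size_inches(3000, 2000, 150))   # 20.0 x 13.33 inches - same file
```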

Returning to the numbers stored in each pixel: of course they can be anything, including so-called out-of-range values (above 1 or negative). And of course there can be more than 3 numbers stored in each cell. These features are limited only by the particular file format definition and are widely used in OpenEXR, to name one.

The great thing about storing several numbers in each pixel is their independence. Each of them can be studied and manipulated individually as a monochrome image called a channel – a sub-raster, if you like. Channels in addition to the usual color-describing Red, Green and Blue can carry all kinds of information. The default fourth channel is Alpha, which encodes opacity (0 denotes a transparent pixel, 1 a completely opaque one). Z-depth, Normals, Velocity (Motion Vectors), World Position, Ambient Occlusion, IDs and anything else you can think of can be stored in either additional channels or the main RGB ones – it is only data and a way to store it. Every time you render something out, you decide which data to include and where to place it. In the same way, you decide in compositing how to manipulate the data you possess to achieve the result you're after.

This is the numerical way of thinking about images, and I would like to wrap this article up with a few examples of where it proves beneficial.

We've just mentioned understanding and using render passes, but beyond that, pretty much all of compositing requires this perspective. Basic color corrections, for example, are nothing but elementary math operations on pixel values, and seeing through them is essential for productive work. Furthermore, math operations like addition, subtraction or multiplication can be performed on pixel values, and with data like Normals and Position many 3D shading tools can be mimicked in 2D.
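
A sketch of the "3D shading in 2D" idea, assuming numpy and a normals pass stored in an image with the common convention of XYZ remapped into 0..1 RGB: a simple Lambert-style light pass is just a per-pixel dot product with a light direction.

```python
import numpy as np

normals_pass = np.random.rand(4, 4, 3)                   # stand-in for a rendered normals image
normals = normals_pass * 2.0 - 1.0                       # back to -1..1 vectors
normals /= np.linalg.norm(normals, axis=2, keepdims=True)

light_dir = np.array([0.5, 0.5, 0.7])
light_dir /= np.linalg.norm(light_dir)

# per-pixel dot(N, L), clamped - a diffuse lighting pass made entirely "in 2D"
lambert = np.clip(np.tensordot(normals, light_dir, axes=([2], [0])), 0.0, 1.0)
print(lambert.shape)
```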

The described perspective is also how programmers see image files, so especially in the game industry it can help artists achieve a better mutual understanding with developers, resulting in better custom tools and clever shortcuts like using textures for non-image data.

And of course, visual effects and motion design. Texture maps controlling the properties of particle emission, RGB displacements forming 3D shapes, custom shaders encoding multiple passes within RGBA, and on and on... All these techniques become much more transparent once you start seeing the digits behind the pixels, which is essentially what a pixel is – a number in its place.


Procedural Clouds

Sample outputs of self-made procedural cloud generators

I've been playing around with generating procedural clouds lately, and this time, before turning to the heavy artillery of full-scale 3D volumetrics, I spent some time with good old fractal noises in good old Fusion.

So row by row, top to bottom:

The base fractus cloud-form generator, assembled from several noise patterns: from the coarsest one defining the overall random shape to the smallest for the edge work. It is used as a building block in the setups below. The main trick here was not to rely on a single noise pattern, but to look for a way of combining several sizes that maximizes the variation of shapes. The quality of the generator seems to be in direct correlation with the time, tenderness and attention spent on fine-tuning the parameters – the setup itself is not really sophisticated.
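
A very rough sketch of the "combine several noise sizes" idea, reduced to numpy only: a few layers of coarse random noise at different scales, weighted and summed, then thresholded into cloud-like shapes. The blocky nearest-neighbour upsampling, the layer weights and the threshold are all stand-ins for the proper fractal noise nodes and fine-tuning described above.

```python
import numpy as np

def noise_layer(size, cells, rng):
    """Coarse random grid upsampled to `size` - a crude stand-in for one noise scale."""
    coarse = rng.random((cells, cells))
    return np.kron(coarse, np.ones((size // cells, size // cells)))

rng = np.random.default_rng(7)
size = 256
layers = [(4, 1.0), (16, 0.5), (64, 0.25)]        # (grid cells, weight): coarse shape to edge work
cloud = sum(w * noise_layer(size, c, rng) for c, w in layers)
cloud = np.clip((cloud / cloud.max() - 0.45) * 3.0, 0.0, 1.0)   # threshold into cloud shapes
print(cloud.shape, float(cloud.min()), float(cloud.max()))
```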

Another thing was not to aim for a universal solution, but to design a separate setup for each characteristic cloud type. Good reference is a must, of course. Keeping the system modular helps as well, so that the higher-level assets rely on properly tuned base elements. The second and third rows are nothing more than different modifications of the base shapes into cirrus through warping. All three top types are then put onto a 3D plane and slightly displaced for a more volumetric feel.

The clouds in the fourth row are merely 3D bunches of randomized fractus sprites output from the base generator. The effect of shading is achieved through variance in the tones of individual sprites.

The lowest samples are more stylized experiments in distorting the initial sphere geometry and cloning secondary elements over its surface.