Monday, June 29, 2015

Storing masks in RGB channels

Base image for the examples in this article

I'm finally returning to posting the original manuscripts of the articles I wrote for 3D World magazine in 2014. This one was first published in issue 186 under the title "The Mighty Multimatte".

In an earlier piece we looked at raster images as data containers that can store various supplementary information alongside the pictures themselves. One of the most straightforward uses of these capabilities is rendering masks.

A lot can be done to an image after it has been rendered; in fact, contemporary compositing packages even allow us to move a good deal of classical shading into post, often offering a much more interactive workflow. But even if you prefer to polish your beauty renders inside the 3D software until they need no extra touches, there can still be urgent feedback from the client or one last little color tweak to apply under time pressure – and compositing becomes a life saver again.


The perfect matte

However, the success of most compositing operations depends on how many elements you can isolate procedurally (that is, without tracing them manually) – and, no less important, with what precision (which refers first of all to antialiasing).

What we are looking for is an antialiased matte with a value of exactly 1.0 (white) for pixels completely occupied by the object of interest, exactly 0 (black) for the rest of the image,* and antialiasing identical to that of the beauty render.

*Mask values above one and below zero cause no fewer problems than the gray ones.


Here are the results of a color correction applied through the corresponding mattes. Left to right: with proper antialiasing, aliased, and with an attempt to fix aliasing through blurring. Note the corrupted edges in the middle example and the dark halos in the right one.

The power of three

It is easy to notice that all this data requires only one raster channel to be stored. Thus a regular RGB image can naturally preserve quality mattes for 3 objects at a time. It only takes applying Constant shaders of pure Red (1,0,0), Green (0,1,0) and Blue (0,0,1) colors to the objects of interest and a black (0,0,0) Constant shader to the rest of the scene. Render-layers functionality (implemented in every contemporary 3D package I can think of) comes in very handy here. You might want to turn off slower features like shadows and GI for just the masks element, although typically setting all the shaders in the scene to Constant is already enough for the render engine to optimize itself sufficiently.*

*Due to smart sampling, the antialiasing of the matte element might not be exactly the same as in the beauty pass; still, this is normally the closest one can practically get.

Alternatively, some renderers offer a pre-designed output variable (like MultiMatte in V-Ray) to render masks in a similar way. More channels (like Alpha, ZDepth, Motion Vectors or any arbitrary extra image channels) could of course be used to store more mattes in the same image file, but typically it is not worth the time and inconvenience to set up first and extract later, compared to simply adding more RGB outputs to isolate more objects. Compositing applications and image editors naturally provide the tools to easily use any of the RGBA channels as a mask for an operation, which is another reason to stick with those. (In Photoshop, for instance, it only takes a Ctrl-click in the Channels panel to turn one into a selection.)
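As a rough illustration of how such a matte might drive a correction in comp, here is a minimal NumPy sketch – not any particular package's API; the array layout, the function name and the gain values are assumptions:

```python
import numpy as np

# beauty: float32 image of shape (H, W, 3); matte: the MultiMatte-style render of
# the same size, with the object of interest isolated in its Red channel.
def grade_through_matte(beauty, matte, gain=(1.1, 1.0, 0.9)):
    """Apply a per-channel gain to the beauty, restricted to the Red-channel matte."""
    mask = matte[..., 0:1]                                # antialiased matte, values in [0, 1]
    graded = beauty * np.asarray(gain, dtype=beauty.dtype)
    # Linear blend: fully graded where the mask is 1, untouched where it is 0,
    # and smoothly mixed along the antialiased edge pixels.
    return beauty * (1.0 - mask) + graded * mask
```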


An example of 2 objects isolated into the Red and Blue channels with Constant shaders.

What to mask?

Unless we're isolating parts of the scene with a specific purpose in mind, the main guiding question is: what will most likely require adjustments? Those are either the parts of the scene in doubt or simply the most important ones. Thus by default the foreground/hero object is worth its own matte (a channel). Grouping objects into mask channels based on their materials or surface qualities is useful as well, since it allows for adjusting in terms of "highlights on metals" or "the color of liquids".

But the masks most in demand are usually those that separate meaningful objects and their distinct parts, especially those of a similar color, since those are tricky to isolate with keying.

When working on a sequence of animated shots, consistently using the same colors for the same objects from one shot to another becomes a very useful habit. This way, the same compositing setup can be propagated to new shots most painlessly. It is generally better to add more MultiMatte outputs to the scene but stay consistent, rather than to try fitting only the masks needed for each shot into one image every time, so that in one shot a given channel (let's say Green) would isolate the character, and in another – a prop.


When masking both an object and its parts in the same matte – think in terms of channels. For instance, if we want to utilize the Green channel in our example for the parts of the main object, we might want to use yellow (1,1,0) for the shader color to avoid holes in the Red channel.

The pitfalls

The world is imperfect though, and sometimes in a crunch there is simply no time to render the proper additional elements (or AOVs – Arbitrary Output Variables). That is the time to manipulate the existing data in comp to compensate for what's missing. Individual mattes can be added, subtracted, inverted and intersected to derive new ones. Every AOV can be useful in compositing in one way or another, and any non-uniform pass can provide some masking information to be extracted – it only takes thinking of them as sources of masking data and understanding what exactly is encoded within each particular element (which we are going to touch upon in the following few issues).
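A minimal sketch of such matte arithmetic, assuming single-channel float mattes already extracted into NumPy arrays (the operations are the point, the names are made up):

```python
import numpy as np

# a, b: single-channel float mattes in [0, 1], e.g. the Red and Green channels of a matte pass.
def combine_mattes(a, b):
    return {
        "union":        np.clip(a + b, 0.0, 1.0),   # added: both objects together
        "difference":   np.clip(a - b, 0.0, 1.0),   # subtracted: a with b cut out of it
        "inverse":      1.0 - a,                    # inverted: everything except a
        "intersection": a * b,                      # intersected: only where both overlap
    }
```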

And right now let's look at some dangers hidden along the way. The biggest pitfall is corrupting the edges of the matte (through over-processing in post or through the way it was rendered). 3D applications often offer some fast way of rendering object IDs (mattes) out of the box, like assigning a random color or a discrete floating-point luminance value to each object. Though it might be faster to set up than a proper MultiMatte-like pass, the temptation should generally be avoided. With totally random colors per object, the only way to procedurally separate one mask from another is keying, which will often be limited by the close colors of other objects and thus quite crude.



The illustration above shows that you cannot isolate one of the mattes stored sequentially by luminance while preserving the same antialiasing over different objects.

Even when assigning "the extended matte colors" (Red, Green, Blue, Yellow, Cyan, Magenta, Black, White, Alpha) instead of totally random ones – in order to store more mattes in a single image and separate them later with various channel combinations rather than color keying – the quality of the edges is still endangered, although the results are typically much better. *This method should not be confused with the aforementioned usage of yellow, which was still within a "one channel – one object" paradigm.

And it goes without saying that any method of rendering masks/IDs without antialiasing is a no-go.


Divide and conquer

However, if going for really heavy post-processing, it often becomes safer to render an object and its background separately. The trick in this case is not to forget the shadows and reflections, which means utilizing matte/shadow/ghost objects and visibility-for-primary-(camera)-rays functionality rather than just hiding parts of the scene.

Sunday, May 17, 2015

CG|VFX reel 2015



CG|VFX reel 2015 from Denis Kozlov on Vimeo.


My reels tend to become shorter and shorter. Here goes the new one – a generalist's reel again, so I have to take the blame for most of the non-live-action pixels, both CG and compositing. With only a couple of exceptions, the work was done predominantly in Houdini and Fusion. Below follows a breakdown describing my role and approach for each shot.

 
Direction and most visual elements for this version of the shot. The main supercell plate was created and rendered in Houdini with a mixture of volumetric modeling techniques and render-time displacements. The rest of the environment elements, including atmospherics, debris and compositing, were created in Fusion. My nod for the little twister spinning under the cloud goes to Marek Duda.


For this animated matte-paint I brought the output of an ocean simulation (Houdini Ocean Toolkit) from Blender into Fusion for rendering and further processing. The fun part was the shore waves, of course – I almost literally (though quite procedurally) painted the RGB displacement animation for this area. Compositing included creating interactive sand ripples and some volumetric work for the fog. The resulting base setup allows for camera animation.


For this piece from a Skoda commercial I created the water splash and droplets in Houdini and integrated them into the plate. (All moving water is CG here.) A lot of additional work was done on this shot by my colleagues from DPOST, including Victor Tretyakov and Jiri Sindelar.


Keying and compositing live action plates and elements.


A slightly reworked sequence from a spot for Czech Technical University that I directed in 2014. Most visual elements are my own work, in Houdini and Fusion. More details on the original spot are at http://www.the-working-man.org/2014/03/ctus-faculty-of-mechanical-engineering.html



A TV commercial demo: particles and compositing in Fusion. Production and a magic coal ball by Denis Kosar.



A little piece utilizing the same particle technique as the previous shot. This time all visual elements are my own work.



Direction and all visual elements (well, except for the kiwi texture, probably) in my usual Houdini/Fusion combination. The strawberry model is completely procedural; more examples of procedural asset generation can be found in previous posts of this blog.



For this commercial, aside from VFX supervision, I created the flamenco sequence graphics, two pieces of which are presented here. Assembling a procedural setup in Fusion allowed for interactive prototyping with the director on site. The setup involved mockup 3D geometry rendered from the top view into a depth map on the fly, which in turn (after direct 2D adjustments) fed a displacement map for the ground grid with individual boxes instanced over it. After laying out the background designs for the whole sequence, the setup was quite seamlessly transferred into 3ds Max (using displacement maps as an intermediate). Rendered in Mental Ray and composited with the live-action dancer plate in Fusion again. Matte pulling was mostly done by Jiri Sindelar; the rest of the digital work on the sequence is pretty much mine.



Set extension for a scene from Giuseppe Tornatore's Baaria. I used SynthEyes for 3D tracking and XSI for lighting/rendering of the street model (provided by the client). Composited in Fusion.



Keying/compositing for another shot from the same film.



And yet another compositing shot from a few years back.
 

Last, but not least is the music:

"Black Vortex" Kevin MacLeod (incompetech.com)
Licensed under Creative Commons: By Attribution 3.0
http://creativecommons.org/licenses/by/3.0/


And if you've made it this far, you might also find my previous post on procedural modeling interesting. And of course, for those interested in cooperation, the email link is at the top of the page.

Thursday, April 23, 2015

On Wings, Tails and Procedural Modeling


Project Aero: Procedural Aircraft Design Toolkit for SideFX Houdini

Despite the widespread opinion, I find Houdini a very powerful tool for 3D modeling. In fact, this aspect was a large part of my motivation to choose it as a primary 3D application. And by procedural modeling I mean not just fractal mountains, instanced cities and Voronoi-fractured debris (all of which can actually be made to look quite fascinating), but efficient creation of 3D assets in general. Any assets.


Thus lately I have taken some of the not-so-free time (a bit over 3 months, to be more accurate) to develop (or rather prototype) a toolkit for procedural aircraft creation, which I am happy to showcase today. Please welcome Project Aero.


The whole point in a nutshell: after the tools were finished, it took me roughly 4-5 hours to create each of the demonstrated airplane models, from scratch to full detail.


And most of those 4-5 hours were spent on design decisions – not smoothing edge loops, laying out UVs or drawing countless layers of rivets and scratches; those kinds of things got automated during the months of toolset development. So technically, a new model could have been created in minutes – the time it takes to lay down a few nodes (see the screengrabs below) and set up the basic parameters.

Detailing sample

The resulting models are completely procedural: every pixel and polygon is generated by the system from scratch, tailored to the particular vehicle design; textures are generated either at render time or from the scene geometry at a preset resolution – no photo-textures or other disk-based samples are used. Bolts and rivets are randomly turned and micro-shifted on an individual basis. The generator tries to preserve consistent detail scale and proportions across any curvature and size. Controls are designed in a hierarchical fashion, so that the user can work from big, conceptual adjustments (like main contours, forms and surface styling) down to tuning an individual element's bolting when required. Plus more perks inside, and of course standard Houdini tools can be used at any stage for any manual design adjustments.

A new design can be created from scratch in a few hours

The skeleton of the system, and of each design, is a set of interactively placed profiles which are skinned into NURBS surfaces with flexible shape controls. The resulting design is then fed into the detailing nodes that create the necessary geometry around the surfaces and generate textures using a variety of techniques. Final models are rendered as subdivision surfaces with displacement; bolts and rivets (of which an airplane has quite a few) are stored as packed primitives.

Profiles form the backbone of each design

Only a few more nodes are required for finalization

Texture preview mode


By no means is Project Aero complete or flawless, but hopefully it takes the concept far enough to illustrate the benefits and possibilities of procedural 3D asset creation. Getting another individual version of the same model is a matter of seconds. Automatic non-identical symmetry and procedural surface aging, controlled by a few high-level sliders, also help to escape the "army of clones" issue that 3D models sometimes suffer from. Deeper variations like repainting or restyling the skin and panels are done in a breeze. The set of detail modules is easily extendable, and parts of an existing design can be swapped and reused. Depending on the toolset's design objectives, generated models could be automatically prepared for integration into a particular pipeline (textures baked out, LODs created automatically, parts named according to a chosen convention, exhausts and moving parts marked with special dummy locators or attributes, etc.).

A new unique copy of an existing model is literally "on a button"

Non-identical symmetry

And of course the workflow is non-linear from both the design and development perspectives. The first means that you can always go back and change or adjust something at a previous stage of work without having to redo the later steps (for example, a change in the wing position on the hull of a finished model will make all the related surfaces recalculate to accommodate the new shape). The second refers to the ability to use the toolset while it is still being developed, which means that in a production environment an artist wouldn't have to wait for a TD to finish his work – the tools would be updated in parallel, automatically adding new features to the designs already being worked on.


Hopefully, there is no need to say that the approach used in this demonstration is only one of many, each fitting some objectives better than others. I might touch upon the topic later, given available time and/or public interest; for now, those interested in procedural cooperation are more than welcome to email me (link at the top of the page). Or if you just feel like chatting and are planning to attend FMX 2015 – drop me a line to meet there.


Keep having fun!





Sunday, February 8, 2015

Evaluating a Particle System: checklist

Below is my original manuscript of what was first published as a two-part article in issues 183 and 184 of 3D World magazine. Worse English and a few more pictures are included. Plus a good deal of techniques and approaches squeezed between the lines.


Part 1

Most 3D and compositing packages offer some sort of particle-system toolset. They usually come with a nice set of examples and demos showing all the stunning things you can do within the product. However, the way to really judge a system's capabilities is often not by the things the software can do, but rather by the things it cannot. And since practice shows it might not be so easy to think of all the things one might be missing in a real production off the top of one's head, I have put together this checklist.

Flexible enough software allows for quite complex effects like this spiral galaxy, created with particles alone.

Even if some possibilities are not obvious out of the box, you can usually make particles do pretty much anything with the aid of scripting or application-specific techniques. It often requires adjusting your way of thinking to a particular tool's paradigms, and I personally find acquaintance with different existing products a big help here. So even if you have already made up your mind on a specific program, you might still find the following list useful as a set of exercises – figuring out how to achieve the described functionality within your app.


Overall concerns

The first question would be whether it is a two- or three-dimensional system, or whether it allows for both modes. A 2D-only solution is going to be limited by definition; however, it can possess some unique speed optimizations and convenient features like per-particle blurring control, extended support for blending modes and using data directly from images to control the particles. The ability to dial in existing 3D camera motion is quite important in a real production environment.

In general, it is all about control. The more control you have over every thinkable aspect of a particle's life, the better. And it is never enough, since the tasks at hand are typically custom by their very nature. One distinctive aspect of this control is the data flow: how much data can be transferred into the particle system from the outside, passed along inside and output back at the very end? Which particle properties can it affect? We want it all.

The quest for control also means that if the system doesn't have some kind of modular arrangement (like nodes, for instance), it is likely to be limited in functionality.

Examples of particle nodes in Houdini (above) and Fusion (below)
 

Emission features

Following the good old-fashioned tradition of starting at the beginning, let's start with the particle emission.

What are the predefined emitter types and shapes, and, most importantly, does it allow for user-defined emitter shapes? You can only get so far without custom sources – input geometry or image data increase the system's flexibility enormously. Emitting from custom volumes opens up great possibilities as well. What about emission from animated objects – can the emitter's velocity and deformation data be transferred to the particles being born? For geometry input, particle birth should be possible from both the surface and the enclosed volume of the input mesh, and then we'd often want some way of restricting it to certain areas only. Therefore, to achieve real control, texture information needs to be taken into account as well.

Geometry normally allows for cruder control compared to image data, so we want all kinds of particle properties (amount, size, initial velocity, age, mass, etc.) to be controllable through texture data, using as many of the texture image's channels as possible. For instance, you might want to set the initial particle direction with vectors stored in the RGB channels of an image, use the Alpha or any other custom channel to control size, and use the emitter's texture to assign groups to particles for further manipulation. The same applies to driving particle properties with different volumetric channels (density, velocity, heat) or dialing an image directly into a 2D particle system as a source.

Does your system allow you to create custom particle attributes and assign their values from a source's texture?
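As a sketch of the kind of texture-driven emission described above – plain NumPy rather than any particular package's API; the channel convention and the names are illustrative assumptions:

```python
import numpy as np

# texture: float32 array of shape (H, W, 4) where, by our own convention,
# RGB encodes a direction field and Alpha a per-particle size multiplier.
# uvs: (N, 2) array of birth UV coordinates, one row per newly emitted particle.
def attributes_from_texture(texture, uvs):
    h, w = texture.shape[:2]
    # Nearest-texel lookup for brevity; a production tool would filter/interpolate.
    px = np.clip((uvs[:, 0] * (w - 1)).astype(int), 0, w - 1)
    py = np.clip((uvs[:, 1] * (h - 1)).astype(int), 0, h - 1)
    texels = texture[py, px]                 # (N, 4) sampled values
    velocity = texels[:, :3] * 2.0 - 1.0     # remap RGB from [0, 1] to [-1, 1]
    size = texels[:, 3]                      # Alpha drives the per-particle size
    return velocity, size
```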


The look options

Now consider the options available for the look of each individual particle. Both instancing custom geometry and sprites are a must for a full-featured 3D particle system*, and it goes without saying that animation should be supported for both. Are there any additional special types of particles available which allow for speed benefits or additional techniques? One example of such a technique is single-pixel particles, which can be highly optimized and thus available in huge amounts (as in Eyeon Fusion, for instance), allowing for a whole set of quite unique looks.

*Rendering a static particle system as strands for representing hair or grass is yet another possible technique which some software might offer.

An effect created with millions of single-pixel particles
 

Another good example is metaballs – while each one on its own is merely a blob, when instanced over a particle system (especially if the particles can control their individual sizes and weights) metaballs become a powerful effects and modeling tool.

A particle system driving the metaballs


Whether using sprites or geometry, getting the real power requires versatile control over the timing and randomization of these elements. Can an element's animation be offset for every individual particle so that it starts when the particle is born? Can a random frame of the input sequence be picked for each particle? Can this frame be chosen based on the particle's attributes? Can an input sprite's or geometry's animation be stretched over the particle's lifetime? (So that if you have a clip of a balloon growing out of nowhere and eventually blowing up, you could match it to every particle in such a way that, no matter how long a particle lives, the balloon's appearance coincides with its birth and the blow-up exactly matches its death.)

With a good level of randomization and timing management, animated sprites/meshes are quite powerful in creating many effects like fire and crowds.
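The balloon example above boils down to remapping the particle's normalized age to a source-clip frame; a minimal sketch (the function name and frame convention are assumptions):

```python
def sprite_frame(age, lifetime, clip_length):
    """Pick the source-clip frame so that frame 0 coincides with the particle's birth
    and the last frame with its death, no matter how long the particle lives."""
    t = min(max(age / lifetime, 0.0), 1.0)    # normalized age in [0, 1]
    return int(round(t * (clip_length - 1)))
```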


Rotation and spin

And the last set of controls which we're going to touch upon in this first part are the rotation and spin options. Although an "always face camera" mode is very useful and important, it is also important to have the option to exchange it for a full set of 3D rotations, even for flat image instances like sprites (think small tree leaves, snowflakes or playing cards). A frequently required task is to have the elements oriented along their motion paths (shooting arrows, for example). And of course, having an easy way to add randomness and spin, or to drive those through textures or other particle attributes, is of high value.
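Orienting an instance along its motion path is essentially building a rotation from the velocity vector; a minimal NumPy sketch, assuming 3D vectors and ignoring the degenerate case where the velocity is parallel to the chosen up vector:

```python
import numpy as np

def aim_along_velocity(velocity, up=(0.0, 1.0, 0.0)):
    """Build a 3x3 orientation matrix whose Z axis points along the particle's motion."""
    z = velocity / (np.linalg.norm(velocity) + 1e-8)
    x = np.cross(up, z)
    x = x / (np.linalg.norm(x) + 1e-8)    # breaks down if velocity is parallel to up
    y = np.cross(z, x)
    return np.stack([x, y, z], axis=1)    # columns are the instance's local axes
```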

Next time we'll look further at the toolset required to efficiently drive particles later in their lives.


 
Part 2

Now we're taking a look at the further life of a particle. The key concept and requirement stay the same: maximum control over all thinkable parameters and an undisrupted data flow through a particle's life and between the different components of the system.

The first question I would ask after the emission is how many particle properties can be controlled over and by their age. Color and size are a must, but it is also important for age to be able to influence any other arbitrary parameter, and in a non-linear fashion (like plotting an age-to-parameter dependency graph with a custom curve). For example, when doing a dust cloud with sprites you might want to increase their size while decreasing opacity towards the very end of their lifetime.
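For the dust-cloud example, the age-to-parameter mapping could look something like this minimal sketch – the specific ramps are arbitrary assumptions, the point is the non-linear remap of normalized age:

```python
def dust_sprite_params(age, lifetime):
    """Non-linear age ramps for a dust-cloud sprite: grow steadily, fade out late."""
    t = min(max(age / lifetime, 0.0), 1.0)   # normalized age in [0, 1]
    size = 1.0 + 2.0 * t                     # linear growth from 1x to 3x over the lifetime
    opacity = (1.0 - t) ** 3                 # stays dense most of the life, then vanishes quickly
    return size, opacity
```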

Can custom events be triggered at certain points of a particle's lifetime? Can the age data be transferred further into those events?


Spawning

Spawning (emitting new particles from existing ones) is a key piece of functionality for a particle system. Its numerous applications include changing the look of a particle based on events like collisions or parameters like age, creating different kinds of bursts and explosions, and creating all sorts of trails. The classic fireworks effect is a good example where spawning is used in at least two ways: it creates the trails by generating new elements behind the leading ones, and it produces the explosion by generating new leads from the old ones at the desired moment.

In a fireworks effect spawning is used to create both the trails and the explosion


Just like with the initial emission discussed last time, it is paramount to be able to transfer as much data as possible from the old particles to the new ones. Velocity, shape, color, age and all the custom properties should be easily inheritable if required, or possible to set from scratch as an option.

The last but not least spawning option to name is recursion. A good software solution provides the user with the ability to choose whether or not newly spawned elements are themselves used as a base for spawning in each next time-step (that is, whether to spawn recursively). Although a powerful technique, recursive spawning can quickly get out of hand as the number of elements keeps growing exponentially.
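A back-of-the-envelope sketch of why recursion needs care (the numbers are arbitrary): with recursive spawning the children spawn too, so the count multiplies every time-step instead of growing linearly.

```python
def spawn_counts(frames, leads=10, spawn_per_frame=2, recursive=True):
    """Rough particle count per frame, comparing recursive and non-recursive spawning."""
    total = leads
    counts = []
    for _ in range(frames):
        parents = total if recursive else leads   # recursion: every live particle spawns
        total += parents * spawn_per_frame
        counts.append(total)
    return counts

# spawn_counts(10) triples every frame (~590,000 particles after 10 frames from 10 leads);
# spawn_counts(10, recursive=False) grows by a constant 20 per frame (210 in total).
```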


Behavior control

Forces are the standard way of driving motion in a particle system. The default set of directional, point, rotational, turbulent and drag forces aside*, it is important to have easily controllable custom-force functionality with visual control over its shape. The ability to use arbitrary 3D objects or images as forces comes in very handy here.

*The often overlooked drag force (sometimes called friction) plays a very important role, as it counteracts the other forces and keeps them from growing out of control.
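A minimal integration sketch of that counteraction (explicit Euler with made-up names; real solvers differ): the drag term grows with speed and therefore caps how far the other forces can accelerate a particle.

```python
def step_particle(position, velocity, forces_sum, mass, drag, dt):
    """One explicit Euler step: external forces plus a drag term opposing the velocity."""
    accel = forces_sum / mass - drag * velocity   # drag scales with speed, so it caps speed
    velocity = velocity + accel * dt
    position = position + velocity * dt
    return position, velocity
```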

Forces raise the next question – how much can the system be driven by physical simulations? Does it support collisions? If so, what are the supported types of collision objects, what are the options for post-collision behavior, and how much data can a particle exchange with the rest of the scene in a collision?

Can further physical phenomena like smoke/combustion or fluid behavior be simulated within the particle system? Can this kind of simulation data be dialed into the system from the outside? One efficient technique, for instance, is to drive the particles with the results of a low-resolution volumetric simulation, using them to increase its level of detail.

The particle effect above uses the low-resolution smoke simulation shown below as the custom force


The next things commonly required for directing particles are follow-path and find-target functionality. Support for animated paths/targets is of value here, just as the ability to compel reaching the goal within a certain timeframe.

Many interesting effects can be achieved if the particles have some awareness of each other (like knowing who their nearest neighbor is). Forces like flocking can then be used to simulate collective behavior.
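A naive sketch of such neighbor awareness (brute-force O(N²) search over NumPy arrays, made-up coefficients): each particle steers towards the average of its neighbors while being pushed away from the ones that get too close.

```python
import numpy as np

def flock_step(positions, velocities, dt, radius=1.0, cohesion=0.5, separation=1.0):
    """One naive flocking step: cohesion towards neighbors, separation from the closest ones."""
    for i in range(len(positions)):
        offsets = positions - positions[i]
        dist = np.linalg.norm(offsets, axis=1)
        near = (dist > 0.0) & (dist < radius)
        if not near.any():
            continue
        center = positions[near].mean(axis=0)
        velocities[i] += cohesion * (center - positions[i]) * dt
        crowded = (dist > 0.0) & (dist < radius * 0.5)    # too close: push apart
        if crowded.any():
            velocities[i] -= separation * offsets[crowded].mean(axis=0) * dt
    positions += velocities * dt
    return positions, velocities
```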


Limiting the effect

For any force or event (including spawning) which may be added to the flow, let's now consider the limiting options. What are the ways to restrict the effect of each particular operator? Can it be restricted to a certain area only? A certain lifespan? Can it be limited by custom criteria like mathematical expressions, arbitrary particle properties or a certain probability factor? How much user control is there over the active area of each operator – custom shapes, textures, geometry, volumes? Is there a grouping workflow?

Groups provide a powerful technique for controlling particles. The concept implies that at the creation point, or further down a particle's life, it can be assigned to some group, and then each single effect can simply be limited to operate on the chosen groups only. For efficient work, all the limiting options just discussed should be available as criteria for group assignment. Plus, the groups themselves should be subject to logic operations (like subtraction or intersection), should not be limited in number, and should not claim any particle exclusively. For example, you might want to group some particles based on speed, others based on age, and then create yet another group: an intersection of the first two.
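If groups are thought of as boolean masks over the particles, those logic operations come almost for free; a minimal NumPy sketch with made-up attribute values:

```python
import numpy as np

# Illustrative per-particle attributes (these would normally come from the simulation).
speed = np.array([0.5, 2.5, 3.0, 1.0])
age   = np.array([2.0, 0.5, 2.0, 1.8])

fast = speed > 2.0              # group assigned by a speed criterion
old  = age > 1.5                # group assigned by an age criterion
fast_and_old = fast & old       # the intersection of the two groups

# Limiting an operator to a group is then just boolean indexing:
drag_multiplier = np.ones_like(speed)
drag_multiplier[fast_and_old] = 0.5   # e.g. apply extra drag only to that group
```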

 
Further considerations

The last set of questions I would suggest might have less connection with the direct capabilities of a given system, and yet they can make a huge difference in a real deadline-driven production.

What is the maximum amount of particles the system can manage interactively and render? Are there optimizations for certain types of elements? What kind of data can be output from the particle system for further manipulation? Can the results be used for meshing into geometry later, or in another software package, for example? Can a particle system deform or otherwise affect the rest of the scene? Can it be used to drive another particle system?

Can the results of a particle simulation be cached to disk or memory? Can it be played backwards (is scrubbing back and forth across the timeline allowed)? Are there options for a pre-roll before the first frame of the scene?

Does the system provide good visual aids for working interactively in the viewport? Can individual particles be manually selected and manipulated? This last question can often be a game-changer when, after days of building the setup and simulating, everything works except for a few stray particles which no one would really miss.

Aside from the presence of variation/randomization options, which should be available for the maximum number of parameters in the system, how stable and controllable is the randomization? If you reseed one parameter, will the rest of the simulation stay unaffected and preserve its look? How predictable and stable is the particle solver in general? Can the particle flow be split into several streams, or merged together from several?

And as the closing point in this list for evaluating a particular software solution, it is worth considering the quality and accessibility of the documentation, together with the amount of available presets/samples and the size of the user base. Trying to dig through a really complex system like Houdini or Thinking Particles would be quite daunting without those.