The Working Man – computer graphics and related diseases – by Denis Kozlov

Parametric Art Systems (2023-02-21)
<iframe allowfullscreen="" frameborder="0" height="281" mozallowfullscreen="" src="https://player.vimeo.com/video/771241731" webkitallowfullscreen="" width="500"></iframe><br />
<a href="https://vimeo.com/771241731">Parametric Art Systems</a> from <a href="https://vimeo.com/kozlove">Denis Kozlov</a> on <a href="https://vimeo.com/">Vimeo</a>.<br /><div><div><div><p class="first">The video spans about a decade of work and even more of research. Below I’ve gathered some links providing additional details, examples and explanations.</p>
<p>More videos:<br />
<a href="https://vimeo.com/211742962">vimeo.com/211742962</a> - Procedural Aircraft Design Toolkit<br />
<a href="https://vimeo.com/703402772">vimeo.com/703402772</a> - Procedural Creature Generator</p>
<p>The key article covering my vision, process and approach. I’ve advanced considerably in each since the time of writing, but still find it largely relevant:<br />
<a href="https://www.the-working-man.org/2018/04/procedural-bestiary-and-next-generation.html" rel="nofollow noopener noreferrer" target="_blank">the-working-man.org/2018/04/procedural-bestiary-and-next-generation.html</a></p>
<p>A general overview of the technology involved (at the time of writing, my primary 3D tool was Houdini). No prior knowledge required:<br />
<a href="https://www.the-working-man.org/2017/04/procedural-content-creation-faq-project.html" rel="nofollow noopener noreferrer" target="_blank">the-working-man.org/2017/04/procedural-content-creation-faq-project.html</a></p>
<p>The initial 2015 essay noticed by ACM SIGGRAPH:<br />
<a href="https://www.the-working-man.org/2015/04/on-wings-tails-and-procedural-modeling.html" rel="nofollow noopener noreferrer" target="_blank">the-working-man.org/2015/04/on-wings-tails-and-procedural-modeling.html</a></p>
<p>While the links above mostly focus on the 3D part of the work, below is my secret weapon, often and easily overlooked: batch image processing (typically with compositing tools like Nuke or Fusion).</p>
<p>The basic principles:<br />
<a href="https://www.the-working-man.org/2014/11/pixel-is-not-color-square.html" rel="nofollow noopener noreferrer" target="_blank">the-working-man.org/2014/11/pixel-is-not-color-square.html</a></p>
<p>And examples of more advanced techniques:<br />
<a href="https://www.the-working-man.org/2015/08/render-elements-normals.html" rel="nofollow noopener noreferrer" target="_blank">the-working-man.org/2015/08/render-elements-normals.html</a><br />
<a href="https://www.the-working-man.org/2015/11/render-elements-uvs.html" rel="nofollow noopener noreferrer" target="_blank">the-working-man.org/2015/11/render-elements-uvs.html</a></p>
<p>Hope you enjoy!</p><span><a name='more'></a></span><p><br /></p></div></div></div>
<iframe allowfullscreen="" frameborder="0" height="281" mozallowfullscreen="" src="https://player.vimeo.com/video/211742962" webkitallowfullscreen="" width="500"></iframe><br />
<a href="https://vimeo.com/211742962">Procedural Aircraft Design Demo</a> from <a href="https://vimeo.com/kozlove">Denis Kozlov</a> on <a href="https://vimeo.com/">Vimeo</a>.<br />
<br />
<iframe allowfullscreen="" frameborder="0" height="281" mozallowfullscreen="" src="https://player.vimeo.com/video/703402772" webkitallowfullscreen="" width="500"></iframe><br />
<a href="https://vimeo.com/703402772">Procedural Creature Design Demo</a> from <a href="https://vimeo.com/kozlove">Denis Kozlov</a> on <a href="https://vimeo.com/">Vimeo</a>.<br />
<br />
<iframe allowfullscreen="" frameborder="0" height="281" mozallowfullscreen="" src="https://player.vimeo.com/video/156742662" webkitallowfullscreen="" width="500"></iframe><br />
<a href="https://vimeo.com/156742662">Creature Integration/Advanced AOVs Demo</a> from <a href="https://vimeo.com/kozlove">Denis Kozlov</a> on <a href="https://vimeo.com/">Vimeo</a>.<br />
dERIVATIVE – The Making of the Film (2021-04-27)<p></p>
<iframe allow="autoplay; fullscreen; picture-in-picture" allowfullscreen="" frameborder="0" height="281" src="https://player.vimeo.com/video/458135379" width="500"></iframe><p>Made of shapes, colours and a bit of story, dERIVATIVE is a short film I directed for the wonderful Mixpoint Studio in Prague. It follows a series of visual transformations and is arguably more a work of motion design than classical CG animation. The project was a months-long effort, and this time I had the chance to personally craft every single pixel of the final film. What really made that possible was a compositing-centered workflow, which I’d like to talk about in this tutorial.<br /></p>
<p><span></span></p><a name='more'></a><p></p><p>It’s not uncommon among film and animation 3D artists to dismiss compositing as the mere slapping of a few effects nodes onto the picture to fancy it up or hide some mistakes at the end of the job. And sadly, game development seems almost ignorant of compositing as a discipline – at least that’s the impression I’ve been getting. Yet you can do almost anything to the picture in comp – build whole animations from the ground up, do shading and look development, and completely redesign already-rendered scenes – all at far more interactive speeds than traditional 3D animation pipelines would allow.<br /><br />The key is breaking the renders up into passes, also known as Render Elements or Arbitrary Output Variables (AOVs). While there’s only so much you can do with a single beauty pass, a full set of AOVs unlocks a whole new range of options. Here’s my approach.</p><p><br /><span style="font-size: large;">Shuffling the Elements In</span><br /></p><div class="separator" style="clear: both; text-align: center;"><a href="https://1.bp.blogspot.com/-uFk0CNryxAs/YIgVpPA4OdI/AAAAAAAAB4Q/sXNREEyZSkAiG5oC81M_D9xwVIDYjHdSQCLcBGAsYHQ/s1920/ss_channels_import_step1.jpg" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="1160" data-original-width="1920" height="241" src="https://1.bp.blogspot.com/-uFk0CNryxAs/YIgVpPA4OdI/AAAAAAAAB4Q/sXNREEyZSkAiG5oC81M_D9xwVIDYjHdSQCLcBGAsYHQ/w400-h241/ss_channels_import_step1.jpg" width="400" /></a></div><p></p><p>The first step is to import the 3D renders into Fusion. I typically render to the multi-channel EXR format and, instead of using multiple Loaders, often prefer to shuffle the extra AOVs into the technical slots on the EXR Format tab. (Remember that channels are just numerical data – you can generally put any pass into any slot and easily extract or rearrange them later!) 
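</p><p>In spirit, a multi-channel image is just a set of named planes of numbers, and shuffling only re-labels them. A minimal pure-Python sketch (the channel names and pixel values are hypothetical, each plane a flat list of floats):</p>

```python
# A multi-channel render is just named planes of numbers; "shuffling"
# an AOV into a viewable slot is nothing more than re-labelling data.
def shuffle_channels(image, mapping):
    """Return a new image dict with planes moved per a {dst: src} mapping."""
    return {dst: list(image[src]) for dst, src in mapping.items()}

# Hypothetical 2-pixel render with a beauty pass and a normals AOV.
render = {
    "R": [0.5, 0.1], "G": [0.4, 0.2], "B": [0.3, 0.3],
    "normals.x": [0.0, 1.0], "normals.y": [1.0, 0.0], "normals.z": [0.0, 0.0],
}

# View the normals AOV as a regular RGB image (the re-labelling step).
normals_rgb = shuffle_channels(
    render, {"R": "normals.x", "G": "normals.y", "B": "normals.z"})
```

<p>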
Then I use the Channel Booleans node to move these AOVs into the RGB channels and treat them as regular images from there on. Minimizing the number of Loaders makes it easier to update the source renders with new versions.</p><p><br /><span style="font-size: large;">Stocking Up on the Masks</span><br /></p><div class="separator" style="clear: both; text-align: center;"><a href="https://1.bp.blogspot.com/-7-IM9eCKiP0/YIgwFReUKTI/AAAAAAAAB40/lKGPQZHnahIFxSUkeUenjuLOVx39HzawwCLcBGAsYHQ/s1200/pe_RGB_masks_step2.jpg" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="1080" data-original-width="1200" height="360" src="https://1.bp.blogspot.com/-7-IM9eCKiP0/YIgwFReUKTI/AAAAAAAAB40/lKGPQZHnahIFxSUkeUenjuLOVx39HzawwCLcBGAsYHQ/w400-h360/pe_RGB_masks_step2.jpg" width="400" /></a></div><p></p><p>There are two things to look for when preparing an AOV setup for heavy compositing. The first is making sure we can isolate any image area we might possibly need, either with a dedicated matte AOV or with some combination of channels. 
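</p><p>Such channel combinations boil down to simple per-pixel algebra. A minimal sketch in plain Python (the mattes are made up; each channel is a flat list of 0–1 floats):</p>

```python
# Boolean-style matte algebra on greyscale channels (values in 0..1):
# intersect = multiply, union = max, subtract = a * (1 - b).
def intersect(a, b): return [x * y for x, y in zip(a, b)]
def union(a, b):     return [max(x, y) for x, y in zip(a, b)]
def subtract(a, b):  return [x * (1.0 - y) for x, y in zip(a, b)]

red   = [1.0, 1.0, 0.0, 0.0]   # e.g. a "body" matte stored in R
green = [0.0, 1.0, 1.0, 0.0]   # e.g. a "wings" matte stored in G

body_only    = subtract(red, green)   # body minus the wing overlap
body_or_wing = union(red, green)      # everything covered by either matte
overlap      = intersect(red, green)  # only where both mattes agree
```

<p>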
You might either use an automated solution like Cryptomatte or set up the mattes manually in my favourite old-school way – note the variety of masks that can be created by merging, multiplying or subtracting the individual Red, Green and Blue channels of these passes.</p><p><br /><span style="font-size: large;">Recreating the Beauty Pass</span><br /></p><div class="separator" style="clear: both; text-align: center;"><a href="https://1.bp.blogspot.com/-UaF7HXVBb2c/YIgwPE2svSI/AAAAAAAAB44/YVWnnmHT2mQ-CWIzTzOJ6HkQhPr9Z6msgCLcBGAsYHQ/s1920/ss_AOVs_merge_step3.jpg" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="1160" data-original-width="1920" height="241" src="https://1.bp.blogspot.com/-UaF7HXVBb2c/YIgwPE2svSI/AAAAAAAAB44/YVWnnmHT2mQ-CWIzTzOJ6HkQhPr9Z6msgCLcBGAsYHQ/w400-h241/ss_AOVs_merge_step3.jpg" width="400" /></a></div><p></p><p>The second crucial thing is to make sure the lighting passes assemble – when merged together they should match the beauty pass perfectly. The exact setup differs for every render engine and pipeline, but generally the lighting components are merged additively in linear-to-light colour space, while unshaded texture/colour passes are applied with “Multiply” blending mode. This is in fact quite logical since light does behave additively in the real world, while the colour of objects filters (or multiplies) the incoming spectrum. 
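</p><p>Since light adds and colour multiplies, the reassembly can be sanity-checked in a few lines of plain Python (the pass names and single-pixel values are illustrative, in linear-to-light space):</p>

```python
# Rebuild a beauty pixel from lighting AOVs: lights add, colour multiplies.
def rebuild_beauty(aovs, albedo, gains=None):
    """Sum the lighting passes (optionally regraded) and multiply by colour."""
    gains = gains or {}
    light = sum(aovs[name] * gains.get(name, 1.0) for name in aovs)
    return light * albedo

# Illustrative single-pixel AOV values.
aovs = {"diffuse": 0.30, "reflection": 0.10, "refraction": 0.05}
albedo = 0.80

beauty = rebuild_beauty(aovs, albedo)  # should match the original render
# The same pixel with the reflection component graded up before the merge:
regraded = rebuild_beauty(aovs, albedo, gains={"reflection": 2.0})
```

<p>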
To achieve additive merging in Fusion, set the “Alpha Gain” slider of the Merge node to zero.</p><p><br /><span style="font-size: large;">Assembling the Look in Compositing</span><br /></p><p></p><div class="separator" style="clear: both; text-align: center;"><a href="https://1.bp.blogspot.com/-1f0-D9vjlXU/YIgwXBKm2rI/AAAAAAAAB5A/FNwi8guIpVsa4MDOc0k6OHtsnxge92hywCLcBGAsYHQ/s1080/pe_passes_assembly_step4.jpg" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="1080" data-original-width="1080" height="400" src="https://1.bp.blogspot.com/-1f0-D9vjlXU/YIgwXBKm2rI/AAAAAAAAB5A/FNwi8guIpVsa4MDOc0k6OHtsnxge92hywCLcBGAsYHQ/w400-h400/pe_passes_assembly_step4.jpg" width="400" /></a></div><p></p><p>Now we can not only adjust the final picture but directly manipulate (i.e. grade and recolour) individual lighting components like Diffuse, Reflection, Refraction or SSS before they get merged together – meaning we can actually move a large part of the shading job into compositing! This provides incredible freedom, as many big look decisions can be made interactively on the final image. Top left: Source 3D render before any manipulations. Bottom left and top right: Two shading options assembled from that initial render in Fusion. 
Bottom right: The final look – a mix of the two versions.</p><p><br /><span style="font-size: large;">A Different Take</span><br /></p><div class="separator" style="clear: both; text-align: center;"><a href="https://1.bp.blogspot.com/-UMlhtFviyrs/YIgwc1RqwiI/AAAAAAAAB5E/zkbh3oCBcEQF2PZZg2lwmGJPwvrZPksJwCLcBGAsYHQ/s1400/fo_passes_reassembly_step5.jpg" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="1080" data-original-width="1400" height="309" src="https://1.bp.blogspot.com/-UMlhtFviyrs/YIgwc1RqwiI/AAAAAAAAB5E/zkbh3oCBcEQF2PZZg2lwmGJPwvrZPksJwCLcBGAsYHQ/w400-h309/fo_passes_reassembly_step5.jpg" width="400" /></a><br /></div><p>And here is a totally different look assembled from the very same 3D render. This one relies largely on the Diffuse pass, while mixing in a much smaller fraction of specular components like Reflection and Refraction. A Light Wrap effect accents the softness of the shadows, and the Primitive IDs pass makes for a great mask, introducing some variation into the colouring of the individual cells.</p><p><br /><span style="font-size: large;">Pretty Endless Variations</span></p><div class="separator" style="clear: both; text-align: center;"><span style="font-size: large;"><a href="https://1.bp.blogspot.com/-TuTyVCd6ch8/YIgwnSEW6QI/AAAAAAAAB5Q/aj06TKtjrUcG29ecdvoRHz-XqK8rjvaogCLcBGAsYHQ/s1400/fo_passes_reassembly_a_step6.jpg" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="1080" data-original-width="1400" height="309" src="https://1.bp.blogspot.com/-TuTyVCd6ch8/YIgwnSEW6QI/AAAAAAAAB5Q/aj06TKtjrUcG29ecdvoRHz-XqK8rjvaogCLcBGAsYHQ/w400-h309/fo_passes_reassembly_a_step6.jpg" width="400" /></a></span><br /></div><p>Yet another example of the technique, and of what could be built from our source render in a matter of minutes in compositing. As opposed to the previous look, this time it’s mainly the Reflection AOV. 
On top of it, a small amount of the Refraction pass and a strong green colour are applied to only one part of the object (thanks to the comprehensive set of masks exported), while the rest is kept sleek black for contrast.</p><p><br /></p><p><span style="font-size: large;">Fighting the Artefacts</span></p><p><span style="font-size: large;"></span></p><div class="separator" style="clear: both; text-align: center;"><span style="font-size: large;"><a href="https://1.bp.blogspot.com/-x6qK9G2lU8o/YIgw0Wp-3zI/AAAAAAAAB5U/choT43x354oI5p2n6CVpXfMizGa2NfwqwCLcBGAsYHQ/s1920/ss_degrain_step7.jpg" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="1160" data-original-width="1920" height="241" src="https://1.bp.blogspot.com/-x6qK9G2lU8o/YIgw0Wp-3zI/AAAAAAAAB5U/choT43x354oI5p2n6CVpXfMizGa2NfwqwCLcBGAsYHQ/w400-h241/ss_degrain_step7.jpg" width="400" /></a></span></div><br />Such heavy manipulation can push the source elements to their limits. It goes without saying that 3D renders should have at least 16 bits per channel, with floating-point formats strongly preferred. Sampling is important too – any jitter and aliasing artefacts can become much worse after heavy compositing. Velocity Blur can help fight some noise, but my personal favourite is the Reduce Noise plug-in from Neat Video – one of the frequency-domain denoisers film compositors had been using for years before AI denoising entered the 3D rendering scene. 
You can apply it at any stage – to clean up either the sources or the final composite, and to fight compression artefacts too.<br /><br /><span style="font-size: large;">Depth Of Field in Compositing</span><p></p><p><span style="font-size: large;"></span></p><div class="separator" style="clear: both; text-align: center;"><span style="font-size: large;"><a href="https://1.bp.blogspot.com/-LMumnGIB8-Y/YIgw-RYQjiI/AAAAAAAAB5g/4OAQJv5tx9EbME9og7vUE5cxA7vVefvWQCLcBGAsYHQ/s1920/ss_DOF_3Ddisplacement_step8.jpg" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="1160" data-original-width="1920" height="241" src="https://1.bp.blogspot.com/-LMumnGIB8-Y/YIgw-RYQjiI/AAAAAAAAB5g/4OAQJv5tx9EbME9og7vUE5cxA7vVefvWQCLcBGAsYHQ/w400-h241/ss_DOF_3Ddisplacement_step8.jpg" width="400" /></a></span></div><p></p><p></p><p>It’s much faster to apply depth of field to a scene as a post effect than to raytrace it straight through. However, achieving a clean Z-defocus in compositing is quite challenging, and I’ve been experimenting with different solutions for years. The technique I eventually came up with relies on Fusion’s GPU-accelerated 3D environment. 
1) Place an Image Plane in front of, and exactly matching, a static 3D camera; 2) Increase the number of subdivisions and apply 3D Displacement to the plane based on the Z-Depth channel; 3) Set the focal plane in the Camera and use the Render 3D node in OpenGL mode to render the defocused version of the scene.<br /></p><p><br /><span style="font-size: large;">The Finishing Touches</span><br /></p><div class="separator" style="clear: both; text-align: center;"><a href="https://1.bp.blogspot.com/-YZX7Te3Co0Y/YIgxWGs2NiI/AAAAAAAAB5o/dM9jhvq7KwIsZ97YKSFt4IzYy-PcV446ACLcBGAsYHQ/s1920/pe_final_step9.jpg" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="1080" data-original-width="1920" height="225" src="https://1.bp.blogspot.com/-YZX7Te3Co0Y/YIgxWGs2NiI/AAAAAAAAB5o/dM9jhvq7KwIsZ97YKSFt4IzYy-PcV446ACLcBGAsYHQ/w400-h225/pe_final_step9.jpg" width="400" /></a></div><p></p><p>As the last step, merge all the elements together with the background and complete the shot by applying the final effects. These may include vignetting (a powerful composition tool!); lens flares, distortions and chromatic aberrations; camera shake to accentuate any particularly strong impacts within the picture; and, importantly, film grain. While it’s best to keep the picture clean during the work, a touch of grain on the output adds extra detail and texture, and helps against banding artefacts and excessive video compression. The word “touch” is crucial too – the bells and whistles work best when used sparingly and not overdone!</p><p><br /><span style="font-size: large;">Advanced Masking Techniques</span><br /></p><p>Not only Object IDs and designated matte renders can be used for masking – UVs, Normals, Rest and World Position data, Ambient Occlusion and pretty much any technical pass imaginable make for great matte tools too. (For instance, Normals could be used to mask all the surfaces facing up in the picture.) 
Render elements can be keyed, graded, inverted, added, subtracted or multiplied, to name just a few common adjustments – all to arrive at the perfect matte for the exact area we want to manipulate. I’ve covered these and other related techniques in previous posts:<br /></p><p style="line-height: 100%; margin-bottom: 0in;"><a href="http://www.the-working-man.org/2015/11/render-elements-uvs.html">https://www.the-working-man.org/2015/11/render-elements-uvs.html</a></p>
<p style="line-height: 100%; margin-bottom: 0in;"><a href="http://www.the-working-man.org/2015/08/render-elements-normals.html">https://www.the-working-man.org/2015/08/render-elements-normals.html</a></p>
<p style="line-height: 100%; margin-bottom: 0in;"><a href="http://www.the-working-man.org/2015/08/packing-lighting-data-into-rgb-channels.html">https://www.the-working-man.org/2015/08/packing-lighting-data-into-rgb-channels.html</a></p>
<p style="line-height: 100%; margin-bottom: 0in;"><a href="http://www.the-working-man.org/2015/06/storing-masks-in-rgb-channels.html">https://www.the-working-man.org/2015/06/storing-masks-in-rgb-channels.html</a></p><p style="line-height: 100%; margin-bottom: 0in;"><a href="http://www.the-working-man.org/2014/12/bit-depth-color-precision-in-raster.html">https://www.the-working-man.org/2014/12/bit-depth-color-precision-in-raster.html</a></p>
<p style="line-height: 100%; margin-bottom: 0in;"><a href="http://www.the-working-man.org/2014/11/pixel-is-not-color-square.html">https://www.the-working-man.org/2014/11/pixel-is-not-color-square.html</a></p><p style="line-height: 100%; margin-bottom: 0in;"><br />
</p>
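The normals trick mentioned above boils down to a dot product with the up axis. A minimal pure-Python sketch (a Y-up world is assumed; the pixel values and threshold are illustrative):

```python
# Mask from a normals AOV: a surface is "facing up" where its normal's
# dot product with the world up vector is high.
def up_mask(normals, threshold=0.5):
    """normals: list of (nx, ny, nz) per pixel; returns a 0/1 mask."""
    up = (0.0, 1.0, 0.0)  # assuming Y-up normals stored in the pass
    return [1.0 if (n[0]*up[0] + n[1]*up[1] + n[2]*up[2]) > threshold else 0.0
            for n in normals]

pixels = [(0.0, 1.0, 0.0),      # facing straight up   -> masked
          (1.0, 0.0, 0.0),      # facing sideways      -> not masked
          (0.0, 0.707, 0.707)]  # tilted 45 degrees    -> masked at 0.5

mask = up_mask(pixels)
```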
<style type="text/css">p { margin-bottom: 0.1in; line-height: 115% }a:link { so-language: zxx }</style>FLOW – The Making of the Film (2018-11-26)<br />FLOW is a short art film I started mid-summer at <a href="https://www.mixpoint.cz/">Mixpoint</a> – a post-production house which kindly bears with me as their resident CGI director. A few images like those <a href="https://www.nasa.gov/mission_pages/juno/images/index.html">Juno photos</a> got me seriously captivated at the time; I was also deep into commercial tabletop photography, with its thick, vividly textured imagery of mixing liquids of all sorts – a grossly overlooked form of art. On top of that, there was a bunch of technical stuff I’d been looking to play with for ages, so here’s the resulting mix, shaken and stirred for your viewing pleasure (and then over-compressed beyond any of my control):<br />
<br />
<iframe allowfullscreen="" frameborder="0" height="281" mozallowfullscreen="" src="https://player.vimeo.com/video/302922203" webkitallowfullscreen="" width="500"></iframe>
<br />
<br />
And below I’m diving into the making-of details:<br />
<a name='more'></a><br />
We approached production more like one would with live-action than full-CGI material this time – I’ve created long takes of base action coverage as our source footage first, which was then cut into the actual film by the marvelous Marek Duda despite me trying to get in his way here and there. So most pixels are mine; most cutting is his.<br />
<br />
Marek used Resolve on his side. For me it was mainly Houdini and Fusion; a lot of each.<br />
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://1.bp.blogspot.com/-VkhBDPLn_2A/W_xKI_ZufJI/AAAAAAAABmw/lseYFavuDlAkhT2mVu7eD2k0MuUzUd_FwCLcBGAs/s1600/01_a_sim_setup.jpg" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" data-original-height="473" data-original-width="1600" height="117" src="https://1.bp.blogspot.com/-VkhBDPLn_2A/W_xKI_ZufJI/AAAAAAAABmw/lseYFavuDlAkhT2mVu7eD2k0MuUzUd_FwCLcBGAs/s400/01_a_sim_setup.jpg" width="400" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Simulation setup</td></tr>
</tbody></table>
<br />
First came the sims – standard Houdini Pyro containers with a couple of mods. For one, I added a custom field to store and advect the colors of the streams. Secondly, running all the simulations in 2D mode made things way more fun – much faster and with a lot more resolution – while still providing enough data to reconstruct the required depth later in the process. In fact, it’s all about generating the right data – not getting the perfect image out of the box. The right data can be shaped into pretty much anything later (something <a href="http://www.the-working-man.org/2015/11/render-elements-uvs.html">I’ve written</a> about <a href="http://www.the-working-man.org/2015/08/render-elements-normals.html">before</a>), and Houdini is a tool greatly liberating in terms of data manipulation. The trick is to see which data is right at each stage: for me this sim stage was about getting the motion and interaction of the fluids right, so I spent some time just playing around in the digital sandbox. With a simple render rig I exported both the simulated volumes (one voxel deep; density, velocity and color) and EXR texture sequences (color and velocity).<br />
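The custom color field is simply carried along by the simulated velocity, just like density. A toy 1D sketch of one such advection step in plain Python (semi-Lagrangian backtrace with nearest-neighbour sampling, nothing Houdini-specific):

```python
# Advect a color field by a velocity field: each cell looks *backwards*
# along the local velocity and copies whatever it finds there.
def advect(field, velocity, dt=1.0):
    n = len(field)
    out = []
    for i in range(n):
        src = i - velocity[i] * dt                # backtrace the flow
        j = max(0, min(n - 1, int(round(src))))   # clamp, nearest sample
        out.append(field[j])
    return out

colors = ["red", "red", "blue", "blue"]  # a 1D stream of color labels
vel = [1.0, 1.0, 1.0, 1.0]               # uniform flow to the right

stepped = advect(colors, vel)            # colors drift one cell to the right
```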
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://3.bp.blogspot.com/-4sIqRKpDj7M/W_xLdNxpufI/AAAAAAAABm8/vSGuT0HfbMoDDJ2idaPR-q-QfeTJ9KhqwCLcBGAs/s1600/02_FLOW_RGB_colors.jpg" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" data-original-height="1200" data-original-width="1600" height="300" src="https://3.bp.blogspot.com/-4sIqRKpDj7M/W_xLdNxpufI/AAAAAAAABm8/vSGuT0HfbMoDDJ2idaPR-q-QfeTJ9KhqwCLcBGAs/s400/02_FLOW_RGB_colors.jpg" width="400" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Splitting the streams into RGB channels</td></tr>
</tbody></table>
<br />
The color was used to isolate the interacting streams by placing them into different RGB channels. This made it possible to mask and further manipulate the streams individually, and even do some quirky effects like mixing two fluids into an arbitrary color. More info on the method <a href="http://www.the-working-man.org/2015/06/storing-masks-in-rgb-channels.html">here</a> and <a href="http://www.the-working-man.org/2015/08/packing-lighting-data-into-rgb-channels.html">here</a>.<br />
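That quirky remixing amounts to per-pixel arithmetic on the channel mattes. A toy sketch in plain Python (the masks and colors are illustrative):

```python
# With each stream isolated in its own channel, "mixing" two fluids into an
# arbitrary color is just a per-pixel recoloring of their overlap.
def recolor(mask_a, mask_b, col_a, col_b, col_mix):
    """Color two stream mattes; where both overlap, output col_mix."""
    out = []
    for a, b in zip(mask_a, mask_b):
        m = min(a, b)  # how much the two streams overlap at this pixel
        out.append(tuple(a * ca + b * cb + m * (cm - ca - cb)
                         for ca, cb, cm in zip(col_a, col_b, col_mix)))
    return out

yellow, green, orange = (1.0, 1.0, 0.0), (0.0, 1.0, 0.0), (1.0, 0.5, 0.0)
stream_a = [1.0, 0.0, 1.0]   # matte of the first fluid (e.g. R channel)
stream_b = [0.0, 1.0, 1.0]   # matte of the second fluid (e.g. G channel)

pixels = recolor(stream_a, stream_b, yellow, green, orange)
```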
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://1.bp.blogspot.com/-92JzCkjvpxM/W_xL_tPQ9AI/AAAAAAAABnE/EbxNyOhf_Gwy923MW0ufx3ndaiI7vfSSQCLcBGAs/s1600/03_quirky_colormix.jpg" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" data-original-height="1600" data-original-width="701" height="640" src="https://1.bp.blogspot.com/-92JzCkjvpxM/W_xL_tPQ9AI/AAAAAAAABnE/EbxNyOhf_Gwy923MW0ufx3ndaiI7vfSSQCLcBGAs/s640/03_quirky_colormix.jpg" width="280" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Mixing yellow and green flows to fiery orange</td></tr>
</tbody></table>
<br />
I did the camera work in Fusion, placing these long “action” textures in 3D space again. This was way more interactive than a 3D animation package would allow, and Marek got his preliminary multi-camera footage, so he could start editing in parallel with me working on the shots. I exported the same cameras back to Houdini for later rendering.<br />
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://1.bp.blogspot.com/-0PM7u55PLc8/W_xSg9nG3KI/AAAAAAAABpY/qpg_7SHsFHo8VNFhWsGIOHTy-2SXh4GZQCEwYBhgL/s1600/04_fusion_cam_b.jpg" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" data-original-height="473" data-original-width="1600" height="117" src="https://1.bp.blogspot.com/-0PM7u55PLc8/W_xSg9nG3KI/AAAAAAAABpY/qpg_7SHsFHo8VNFhWsGIOHTy-2SXh4GZQCEwYBhgL/s400/04_fusion_cam_b.jpg" width="400" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Setting up cameras in Fusion</td></tr>
</tbody></table>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://3.bp.blogspot.com/-N6KgQVzPjwU/W_xM8CPv5TI/AAAAAAAABnQ/Jy2trWeCMU8bNbW26mesovpydLlbcNe7QCLcBGAs/s1600/04_fusion_cam_a.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="473" data-original-width="1600" height="117" src="https://3.bp.blogspot.com/-N6KgQVzPjwU/W_xM8CPv5TI/AAAAAAAABnQ/Jy2trWeCMU8bNbW26mesovpydLlbcNe7QCLcBGAs/s400/04_fusion_cam_a.jpg" width="400" /></a></div>
<br />
The next and last stage in Houdini was to create and render the graphical elements for the final assembly in compositing. I use this approach really often – rendering merely the technical passes from 3D and doing the real lookdev in comp later. It takes bending your head around a bit, but it surely pays off in flexibility – comp iterations are so much cheaper, and you get incredible power to shape and change things when you need it the most: at the late stage, when things really start coming together. In commercial work this also means you can iterate on the client’s feedback faster.<br />
<br />
So here are the Houdini elements, a similar set for each shot. All the individual elements were created from the same initial sim data, reusing many parts, so it made sense to batch-generate them together and later render them as delayed-load procedurals in Mantra.<br />
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://3.bp.blogspot.com/-T_yjibvZQUA/W_xShENQnaI/AAAAAAAABpo/6u_1urudc24mZTfkcP7tu0zqPj3hfk0GACEwYBhgL/s1600/06_houdini_elements.jpg" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" data-original-height="473" data-original-width="1600" height="117" src="https://3.bp.blogspot.com/-T_yjibvZQUA/W_xShENQnaI/AAAAAAAABpo/6u_1urudc24mZTfkcP7tu0zqPj3hfk0GACEwYBhgL/s400/06_houdini_elements.jpg" width="400" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Elements' setup in Houdini SOPs</td></tr>
</tbody></table>
<br />
The first element is the volumetric pass. This is where the data gets its depth back, in a Volume VOP – the flat simulated data is extruded, blurred and has its density adjusted. Speed benefits aside, there is also a fine-control advantage in doing this in SOPs rather than DOPs. Just like everything else, the coloring is technical and separates the rendered streams into the individual RGB channels.<br />
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://1.bp.blogspot.com/-irpdpoPZ8Ak/W_xShLbHHVI/AAAAAAAABpY/ABd7zqKEzlslHeSXYJZrrZfk80_grm_6gCEwYBhgL/s1600/07_volume_element.jpg" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" data-original-height="1200" data-original-width="1600" height="300" src="https://1.bp.blogspot.com/-irpdpoPZ8Ak/W_xShLbHHVI/AAAAAAAABpY/ABd7zqKEzlslHeSXYJZrrZfk80_grm_6gCEwYBhgL/s400/07_volume_element.jpg" width="400" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Volumetric pass</td></tr>
</tbody></table>
<br />
Simulated velocity volumes are used to advect particles with a simple SOP solver.<br />
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://3.bp.blogspot.com/-BlkeKfgoW9w/W_xSlndm_7I/AAAAAAAABpg/Wql3dgUdJuMZ9ZP1DGXBvLC37jh5afD6ACEwYBhgL/s1600/FLOW_elements_11.jpg" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" data-original-height="1200" data-original-width="1600" height="300" src="https://3.bp.blogspot.com/-BlkeKfgoW9w/W_xSlndm_7I/AAAAAAAABpg/Wql3dgUdJuMZ9ZP1DGXBvLC37jh5afD6ACEwYBhgL/s400/FLOW_elements_11.jpg" width="400" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Advected particles</td></tr>
</tbody></table>
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://3.bp.blogspot.com/--JVL9qNHPkI/W_xShTO8QPI/AAAAAAAABps/rclxK22f1qs2taDGslY0M0pxme5tTTBIACEwYBhgL/s1600/08_points_solver.jpg" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" data-original-height="631" data-original-width="1031" height="243" src="https://3.bp.blogspot.com/--JVL9qNHPkI/W_xShTO8QPI/AAAAAAAABps/rclxK22f1qs2taDGslY0M0pxme5tTTBIACEwYBhgL/s400/08_points_solver.jpg" width="400" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">SOP Solver contents for particle advection</td></tr>
</tbody></table>
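At its core the advection solver does one thing per frame: move each point along the sampled velocity. A toy sketch with forward Euler steps and a uniform field for illustration:

```python
# A SOP-solver-style advection loop: every step, each particle samples the
# local velocity at its position and moves along it.
def advect_particles(points, velocity_at, dt, steps):
    pts = list(points)
    for _ in range(steps):
        pts = [(x + velocity_at(x, y)[0] * dt,
                y + velocity_at(x, y)[1] * dt) for x, y in pts]
    return pts

# Illustrative velocity field: uniform flow to the right.
flow = lambda x, y: (1.0, 0.0)

moved = advect_particles([(0.0, 0.0), (2.0, 1.0)], flow, dt=0.5, steps=4)
```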
<br />
A Trail operator over the same data, with some point copying, creates the streamlines.<br />
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://2.bp.blogspot.com/-02sHc73nEbA/W_xSmb_1ibI/AAAAAAAABpc/QMev_xKql0YcIMctoWItT_JQMsvbkVF1ACEwYBhgL/s1600/FLOW_elements_16.jpg" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" data-original-height="1200" data-original-width="1600" height="300" src="https://2.bp.blogspot.com/-02sHc73nEbA/W_xSmb_1ibI/AAAAAAAABpc/QMev_xKql0YcIMctoWItT_JQMsvbkVF1ACEwYBhgL/s400/FLOW_elements_16.jpg" width="400" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Streamlines</td></tr>
</tbody></table>
<br />
Complementary to the streamlines are topographic isolines. These are built by displacing a plane vertically with the simulated texture's values and then slicing through it (i.e. with a Clip SOP) in a loop. Additionally, the main volume can be (and was) converted to an SDF and sliced through too.<br />
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://3.bp.blogspot.com/-W-ZvBEkOu7c/W_xSl1MFm3I/AAAAAAAABps/WaFo4iCnR4UH-bqZzRldyEQzpoWtlAjlACEwYBhgL/s1600/FLOW_elements_13.jpg" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" data-original-height="1200" data-original-width="1600" height="300" src="https://3.bp.blogspot.com/-W-ZvBEkOu7c/W_xSl1MFm3I/AAAAAAAABps/WaFo4iCnR4UH-bqZzRldyEQzpoWtlAjlACEwYBhgL/s400/FLOW_elements_13.jpg" width="400" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Isolines</td></tr>
</tbody></table>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://1.bp.blogspot.com/-Ne0XC4xQdXI/W_xShllPUjI/AAAAAAAABpk/jFZu-Nt1DhQr4XJ6R6gtMMDMRs6jGNBQQCEwYBhgL/s1600/10_isolines_network.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="473" data-original-width="1600" height="117" src="https://1.bp.blogspot.com/-Ne0XC4xQdXI/W_xShllPUjI/AAAAAAAABpk/jFZu-Nt1DhQr4XJ6R6gtMMDMRs6jGNBQQCEwYBhgL/s400/10_isolines_network.jpg" width="400" /></a></div>
<br />
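For a rough idea of the slicing loop, here is a hedged Python sketch using a toy scalar field (not the real sim data): it walks the grid edges of the "displaced plane" and interpolates where each iso level is crossed, which is what slicing the heightfield at evenly spaced heights effectively computes. For brevity only horizontal edges are checked.

```python
# Minimal stand-in for the Clip-SOP loop: find where a scalar field crosses
# each iso level along grid rows, interpolating linearly on the edges.
def isoline_points(field, levels):
    """field: 2D list of samples; returns {level: [(x, y), ...]} crossings."""
    out = {c: [] for c in levels}
    for y, row in enumerate(field):
        for x in range(len(row) - 1):
            a, b = row[x], row[x + 1]
            for c in levels:
                if min(a, b) <= c < max(a, b):   # this edge crosses the level
                    t = (c - a) / (b - a)        # linear interpolation weight
                    out[c].append((x + t, y))
    return out

field = [[0.0, 1.0, 2.0],
         [0.0, 2.0, 4.0]]
pts = isoline_points(field, levels=[0.5, 1.5])
```

A full contouring implementation would also check vertical edges and join the crossings into polylines, but the per-edge interpolation is the core of it.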
These are all products of the same data – the simulated textures from step 1. Yet another example of their manipulation is the columns: this time the points of a regular grid were displaced vertically based on the fluid density value, and each point was then connected to its undisplaced counterpart with an Add SOP.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://2.bp.blogspot.com/-B35yJnKZGQg/W_xSh_h49FI/AAAAAAAABpg/ln95u3oqIZ0-frdqrE8Gfj5u7fPKxxwLwCEwYBhgL/s1600/11_cols_R.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="900" data-original-width="1600" height="225" src="https://2.bp.blogspot.com/-B35yJnKZGQg/W_xSh_h49FI/AAAAAAAABpg/ln95u3oqIZ0-frdqrE8Gfj5u7fPKxxwLwCEwYBhgL/s400/11_cols_R.jpg" width="400" /></a></div>
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://1.bp.blogspot.com/-1osNQueFfq4/W_xSiBb_47I/AAAAAAAABps/o_DK2VYAJ_MUal78iu5_tRPA5HwQuaCKQCEwYBhgL/s1600/11_cols_network.jpg" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" data-original-height="473" data-original-width="1600" height="117" src="https://1.bp.blogspot.com/-1osNQueFfq4/W_xSiBb_47I/AAAAAAAABps/o_DK2VYAJ_MUal78iu5_tRPA5HwQuaCKQCEwYBhgL/s400/11_cols_network.jpg" width="400" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Columns generation in Houdini</td></tr>
</tbody></table>
<br />
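The columns element translates almost directly into code. A minimal sketch, assuming a toy density function in place of the fluid values: each grid point yields one segment from its original position to its displaced copy – the Add-SOP step from the network above.

```python
# Sketch of the column element: displace a copy of each grid point vertically
# by the local density value and emit a segment between the two positions.
def columns(grid_points, density):
    """density(x, z) -> height; returns one line segment per grid point."""
    segments = []
    for (x, z) in grid_points:
        base = (x, 0.0, z)
        top = (x, density(x, z), z)   # displaced duplicate of the point
        segments.append((base, top))
    return segments

grid = [(x * 0.5, z * 0.5) for x in range(3) for z in range(3)]
segs = columns(grid, density=lambda x, z: x + z)
```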
Many more elements could be created from the same sim data, but I felt I already had enough for solid visual diversity at this stage and finally took the whole thing to Fusion for comping. With a moderately sophisticated setup (which really boils down to <a href="http://www.the-working-man.org/2014/11/pixel-is-not-color-square.html">this</a> and other articles already quoted above), each shot got its own set of several different looks exported, plus a few separate elements we used for transitions during the online sessions with Marek.<br />
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://1.bp.blogspot.com/-uBXCv8OgVxw/W_xSihUJ5_I/AAAAAAAABpY/Jt1XQfBGU8w2T4QioqyYFH5WGkJ0E01DQCEwYBhgL/s1600/12_fusion_network.jpg" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" data-original-height="473" data-original-width="1600" height="117" src="https://1.bp.blogspot.com/-uBXCv8OgVxw/W_xSihUJ5_I/AAAAAAAABpY/Jt1XQfBGU8w2T4QioqyYFH5WGkJ0E01DQCEwYBhgL/s400/12_fusion_network.jpg" width="400" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Final Fusion composition</td></tr>
</tbody></table>
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://3.bp.blogspot.com/-szzC2QE98MU/W_xSmeljKcI/AAAAAAAABpg/Yh5SjyBm7RQ-VVPqUQQTmUK2x45L0S8LACEwYBhgL/s1600/FLOW_looks.0121.jpg" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" data-original-height="900" data-original-width="1600" height="225" src="https://3.bp.blogspot.com/-szzC2QE98MU/W_xSmeljKcI/AAAAAAAABpg/Yh5SjyBm7RQ-VVPqUQQTmUK2x45L0S8LACEwYBhgL/s400/FLOW_looks.0121.jpg" width="400" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Each shot has been rendered in several different styles</td></tr>
</tbody></table>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://4.bp.blogspot.com/-G84b6Fsgmzg/W_xSn9SOGiI/AAAAAAAABpo/UV9uK64EO4w-KiX3R0gh6IG_f5ayIJxygCEwYBhgL/s1600/FLOW_looks2.0151.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="900" data-original-width="1600" height="225" src="https://4.bp.blogspot.com/-G84b6Fsgmzg/W_xSn9SOGiI/AAAAAAAABpo/UV9uK64EO4w-KiX3R0gh6IG_f5ayIJxygCEwYBhgL/s400/FLOW_looks2.0151.jpg" width="400" /></a></div>
<br />
It took a few calendar months to finish FLOW, since we had to fit it in whenever the commercial workload allowed, but I believe the actual hands-on production time turned out to be little more than one man-month. And if you’ve read all the way down to here, also be sure to check out Mixpoint’s <a href="http://vimeo.com/mixpoint">Vimeo</a> and <a href="https://www.facebook.com/Studio-Mixpoint-1478978829061419/">Facebook</a> pages!<br />
<br />
And a couple of good books for the road:<br />
Peter S. Stevens, Patterns In Nature<br />
Philip Ball, Flow<br />
Milton Van Dyke, An Album of Fluid Motion<br />
<br />
Thank you and enjoy!<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://3.bp.blogspot.com/-X768h8BqO_E/W_xSlCvUnHI/AAAAAAAABpo/2BliwOs2mxEuTx2PnuyfUUHQ3KT81XZ2gCEwYBhgL/s1600/FLOW_edit.1171.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="900" data-original-width="1600" height="225" src="https://3.bp.blogspot.com/-X768h8BqO_E/W_xSlCvUnHI/AAAAAAAABpo/2BliwOs2mxEuTx2PnuyfUUHQ3KT81XZ2gCEwYBhgL/s400/FLOW_edit.1171.jpg" width="400" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://2.bp.blogspot.com/-g1gIU0UIH48/W_xSlD3rWlI/AAAAAAAABpw/U5XOAmhJtW8WKTFnTAA5bKq6hrdK_F24wCEwYBhgL/s1600/FLOW_edit.1240.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="900" data-original-width="1600" height="225" src="https://2.bp.blogspot.com/-g1gIU0UIH48/W_xSlD3rWlI/AAAAAAAABpw/U5XOAmhJtW8WKTFnTAA5bKq6hrdK_F24wCEwYBhgL/s400/FLOW_edit.1240.jpg" width="400" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://3.bp.blogspot.com/-UV8feA7wp-A/W_xSkZdbXxI/AAAAAAAABpo/7ZmeSmGPJt0YJktAp05oDiSnK0BMr8rOQCEwYBhgL/s1600/FLOW_edit.0998.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="900" data-original-width="1600" height="225" src="https://3.bp.blogspot.com/-UV8feA7wp-A/W_xSkZdbXxI/AAAAAAAABpo/7ZmeSmGPJt0YJktAp05oDiSnK0BMr8rOQCEwYBhgL/s400/FLOW_edit.0998.jpg" width="400" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://3.bp.blogspot.com/-yEx3TIOQzPU/W_xSj7vErLI/AAAAAAAABpo/rYH_uFPCSMcO5wT_wu8JohuOz1CqN6REQCEwYBhgL/s1600/FLOW_edit.0700.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="900" data-original-width="1600" height="225" src="https://3.bp.blogspot.com/-yEx3TIOQzPU/W_xSj7vErLI/AAAAAAAABpo/rYH_uFPCSMcO5wT_wu8JohuOz1CqN6REQCEwYBhgL/s400/FLOW_edit.0700.jpg" width="400" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://1.bp.blogspot.com/-FNucXfUHBSY/W_xSjZYHl_I/AAAAAAAABpY/pX4WZlWgh3AQQZ64TX2bJVWDB6o7IojpwCEwYBhgL/s1600/FLOW_edit.0548.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="900" data-original-width="1600" height="225" src="https://1.bp.blogspot.com/-FNucXfUHBSY/W_xSjZYHl_I/AAAAAAAABpY/pX4WZlWgh3AQQZ64TX2bJVWDB6o7IojpwCEwYBhgL/s400/FLOW_edit.0548.jpg" width="400" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://2.bp.blogspot.com/-Wy7ZbwmQ_ho/W_xSjGpGcBI/AAAAAAAABpY/012krI_0_Q4I9jqq6ADOw2Z_1Lu9tOnSACEwYBhgL/s1600/FLOW_edit.0517.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="900" data-original-width="1600" height="225" src="https://2.bp.blogspot.com/-Wy7ZbwmQ_ho/W_xSjGpGcBI/AAAAAAAABpY/012krI_0_Q4I9jqq6ADOw2Z_1Lu9tOnSACEwYBhgL/s400/FLOW_edit.0517.jpg" width="400" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://1.bp.blogspot.com/-LPFV7vZDKJ0/W_xSkzliiHI/AAAAAAAABpk/aV4p_CVAQNwGBqAXNtN8K4726q2j7MdwQCEwYBhgL/s1600/FLOW_edit.1159.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="900" data-original-width="1600" height="225" src="https://1.bp.blogspot.com/-LPFV7vZDKJ0/W_xSkzliiHI/AAAAAAAABpk/aV4p_CVAQNwGBqAXNtN8K4726q2j7MdwQCEwYBhgL/s400/FLOW_edit.1159.jpg" width="400" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://3.bp.blogspot.com/--pBVBMY-kug/W_xSkD3DXlI/AAAAAAAABpg/Uvhu6HjJwt4OItkuY93TH5BwS4VMWa92ACEwYBhgL/s1600/FLOW_edit.0725.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="900" data-original-width="1600" height="225" src="https://3.bp.blogspot.com/--pBVBMY-kug/W_xSkD3DXlI/AAAAAAAABpg/Uvhu6HjJwt4OItkuY93TH5BwS4VMWa92ACEwYBhgL/s400/FLOW_edit.0725.jpg" width="400" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://3.bp.blogspot.com/-p3ee_EMYMu8/W_xSixxF12I/AAAAAAAABpw/M8vYOum_zaAyXEp0RsLMPQzXfBoOwW_CgCEwYBhgL/s1600/FLOW_edit.0172.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="900" data-original-width="1600" height="225" src="https://3.bp.blogspot.com/-p3ee_EMYMu8/W_xSixxF12I/AAAAAAAABpw/M8vYOum_zaAyXEp0RsLMPQzXfBoOwW_CgCEwYBhgL/s400/FLOW_edit.0172.jpg" width="400" /></a></div>
<span id="goog_2056699913"></span><span id="goog_2056699914"></span><br />Denis Kozlovhttp://www.blogger.com/profile/07406391692819839722noreply@blogger.com3tag:blogger.com,1999:blog-5149177666770535781.post-2227252180979681962018-04-11T22:31:00.004+02:002022-04-26T20:31:00.128+02:00Procedural Bestiary and the Next Generation of CG SoftwareIn the previous essay “<a href="http://www.the-working-man.org/2017/04/procedural-content-creation-faq-project.html">Procedural content creation F.A.Q.</a>” I claimed that it would take a few months to assemble a full-scale creature generator. So I took those months and did it – introducing Kozinarium <strike>v1.0</strike> v1.5:<br />
<br />
<iframe allowfullscreen="" frameborder="0" height="281" mozallowfullscreen="" src="https://player.vimeo.com/video/703402772" webkitallowfullscreen="" width="500"></iframe>
<a href="https://vimeo.com/264317585">Procedural Creature Generator</a> from <a href="https://vimeo.com/kozlove">Denis Kozlov</a> on <a href="https://vimeo.com/">Vimeo</a>.<br />
<br />
The procedural systems I’ve been developing over recent years have served different purposes, not the least of which was exploring how far one can go in formalizing visual art – expressing its language in machine-readable terms. “Quite far” is the answer I’ve got, and today I’d like to share my vision of the next generation of artistic tools, which could empower anyone to render their imaginations with almost the ease of thought. But first, let’s take a look at how these procedural systems are made.<br />
<a name='more'></a><br />
<iframe allowfullscreen="" frameborder="0" height="281" mozallowfullscreen="" src="https://player.vimeo.com/video/211742962" webkitallowfullscreen="" width="500"></iframe><br />
<a href="https://vimeo.com/211742962">Procedural Aircraft Design Demo</a> from <a href="https://vimeo.com/kozlove">Denis Kozlov</a> on <a href="https://vimeo.com/">Vimeo</a>.<br />
<br />
<span style="font-size: large;">Procedural Systems</span><br />
<br />
There’s no magic involved; essentially the work is about translating an artist’s professional expertise into a form that a computer understands. In a way, each digital artist does this daily. The big difference from classical CG content creation is that you draw not a single instance of a subject, but all its types, versions and modifications at once. This requires understanding the subject and its creation process so clearly that you can write down an explicit algorithm – explain it to a computer. This approach is analytical, as opposed to the statistical methods of machine learning, so the system is extremely transparent – its inner workings are human-readable by design.<br />
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://2.bp.blogspot.com/-cft2C-ErMjk/Ws5i-b4n5pI/AAAAAAAABjw/6YaI9e4ctwk44dlBM0FnJSNIHAERqjFsQCLcBGAs/s1600/KOZINARIUM_makingof_01.jpg" style="margin-left: auto; margin-right: auto;"><img border="0" data-original-height="720" data-original-width="1280" height="225" src="https://2.bp.blogspot.com/-cft2C-ErMjk/Ws5i-b4n5pI/AAAAAAAABjw/6YaI9e4ctwk44dlBM0FnJSNIHAERqjFsQCLcBGAs/s400/KOZINARIUM_makingof_01.jpg" width="400" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Kozinarium is creature-generating software I’ve made with Houdini. It’s meant to work under an artist’s supervision, but the main means of control are several random seeds: the user inputs a new number, and the software creates a new creature or animation based on it. Kozinarium consists of about 1700 nodes and is easily expandable with new modules.</td></tr>
</tbody></table>
<br />
After clearly formulating the full scope of the task, step one is to recognize the patterns – both the patterns in the artistic process which can become the spine of a future system, and the patterns associated with the subject: What makes up the subject structurally? Where does the diversity come from? What has the biggest impact on the viewer? This is a very exciting part requiring tons of research. For Kozinarium, the concept of Hox genes was extremely influential, even though it did not make it into the final toolset directly. Project Aero got much of its insight from reading up on how actual airplanes are designed, especially the constraints imposed by production.<br />
<br />
Based on these patterns the actual system is designed – I guess grown-ups call it ‘software architecture’. This is the most creative yet challenging step: deciding what to keep, what to leave out, what is the skeleton and what are the details. These decisions are quite important – they are the essence of the final system, determining what can and cannot be changed afterwards. For instance, both Aero and Kozinarium utilize bilateral symmetry, which in Aero can be turned off manually; Kozinarium reserves some space for adding a second symmetry plane and possibly even a radial one.<br />
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://4.bp.blogspot.com/-il6XHrID9PI/Ws5i-iSFtmI/AAAAAAAABj0/ioPX_AjeUGIZwQNRNEI8IPhxltAODA8UgCLcBGAs/s1600/KOZINARIUM_makingof_02.jpg" style="margin-left: auto; margin-right: auto;"><img border="0" data-original-height="720" data-original-width="1280" height="225" src="https://4.bp.blogspot.com/-il6XHrID9PI/Ws5i-iSFtmI/AAAAAAAABj0/ioPX_AjeUGIZwQNRNEI8IPhxltAODA8UgCLcBGAs/s400/KOZINARIUM_makingof_02.jpg" width="400" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">A major part of the work involved figuring out the sequence in which the decisions are made (what influences what: body size and shape, speed, structure of locomotors, details and appendages, etc.). Some of the logic behind such choices got hard-coded, while some was left for the system to try on its own and judge the results.</td></tr>
</tbody></table>
<br />
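As an illustration of that symmetry decision, here is a hedged sketch (the names and data are made up for this post, not taken from Aero's actual networks): the generator authors only one half of the design, and mirroring is a single switchable step – which is exactly what makes a "turn symmetry off manually" option cheap to support.

```python
# Bilateral symmetry as a switchable pipeline step: author one half of the
# design, then mirror it across the plane where the chosen coordinate is zero.
def mirror(points, axis=0, enabled=True):
    """Return points plus their mirror images; pass-through when disabled."""
    if not enabled:
        return list(points)
    mirrored = [tuple(-v if i == axis else v for i, v in enumerate(p))
                for p in points]
    return list(points) + mirrored

half_wing = [(1.0, 0.2, 0.0), (2.0, 0.1, -0.5)]   # hypothetical half-design
full_wing = mirror(half_wing)
```

Adding a second symmetry plane would just be another `mirror` call with a different `axis` – which is why reserving room for it early costs so little.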
Step 3 is to express the resulting design with the actual CG tools, which requires thorough knowledge of computer graphics. To create something seriously new you cannot just look up techniques as you go, since that would restrict you to the paths already taken by others. Instead you have to have a good overview of the possibilities beforehand – a reliable “feeling of the material” that lets you predict the results of operations and often envision them in precise numerical terms. Diverse professional experience comes in very handy here, and there’s a reason why I dared to develop my first notable procedural system only after 16 years of practice as a CG|VFX artist. For the particular techniques I’ve used, please refer to the image captions throughout this article and <a href="http://www.the-working-man.org/2015/04/on-wings-tails-and-procedural-modeling.html">the original 2015 Aero post</a>.<br />
<br />
Then finally comes the implementation stage. While this work definitely requires much skill and deserves proper recognition of its own, it’s not the primary focus of this piece and is covered in many other sources. My main choice is Houdini for 3D work and Fusion for 2D (I find both tools absolutely fantastic). C++ is arguably the most versatile, yet quite low-level, solution; Java and Processing seem popular within procedural circles; Python is an industry standard in commercial CGI.<br />
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://4.bp.blogspot.com/-GiOMqnnraWk/Ws5i_YFEg2I/AAAAAAAABj8/qCwTHpKyj_gh_Kizk8UZePZcMRY-dLfWACLcBGAs/s1600/KOZINARIUM_makingof_03.jpg" style="margin-left: auto; margin-right: auto;"><img border="0" data-original-height="720" data-original-width="1280" height="225" src="https://4.bp.blogspot.com/-GiOMqnnraWk/Ws5i_YFEg2I/AAAAAAAABj8/qCwTHpKyj_gh_Kizk8UZePZcMRY-dLfWACLcBGAs/s400/KOZINARIUM_makingof_03.jpg" width="400" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">The core modules are CHOP-based generators for shapes and motions – so it’s literally about drawing with mathematical functions. Final meshing is VDB-based, which simplifies the topological aspects a lot, while intermediate steps can involve both polygons and NURBS. Choosing FEM simulation as the main animation vehicle allowed for flexible, marionette-like rigging. Rendered with procedural displacement in Mantra (3-5 minutes per frame), with final processing and procedural coloring in Fusion.</td></tr>
</tbody></table>
<br />
This is the process in a nutshell. It can be boiled down to a basic analysis-synthesis-implementation chain, but it does require certain expertise. Erudition is important, but pattern recognition and the ability to see and translate between the structural, the verbal and the visual are probably key. The rewarding part is that once the system is in place, life usually becomes notably easier. Of course each new project involves tons of research, but with experience comes the vision – seeing through patterns and approaches. And eventually pretty much anything can be expressed.<br />
<br />
<span style="font-size: large;">Future Tools</span><br />
<br />
Now imagine a system where you can create any visuals or 3D objects by merely describing them. It doesn’t matter whether you describe them with words or parametric sliders, existing or totally made-up; what matters is that you don’t need to understand the technicalities in order to create, unless you want to. This is high-level graphics creation, as opposed to directly manipulating pixels or polygons at the low level.<br />
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://4.bp.blogspot.com/-NcBGpoKcdgQ/Ws5i-oRHTXI/AAAAAAAABj4/c5BdN4f80WwhmQLhPhILuE4XGQx5f4gfQCLcBGAs/s1600/Flight_Immunity_D.jpg" style="margin-left: auto; margin-right: auto;"><img border="0" data-original-height="1200" data-original-width="1200" height="400" src="https://4.bp.blogspot.com/-NcBGpoKcdgQ/Ws5i-oRHTXI/AAAAAAAABj4/c5BdN4f80WwhmQLhPhILuE4XGQx5f4gfQCLcBGAs/s400/Flight_Immunity_D.jpg" width="400" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;"><a href="http://www.flightimmunity.com/">Flight Immunity</a> is the art project I’ve created with Project Aero. It includes over 50 original aircraft designs and would hardly have been possible with traditional low-level tools.</td></tr>
</tbody></table>
<br />
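To make the idea more concrete, here is a deliberately naive sketch of "creation by description". Every parameter name here (bulk, elongation, etc.) is hypothetical, invented for this illustration rather than taken from any existing system; the point is only that a handful of semantic traits gets mapped down to low-level geometry by the framework, not by the user.

```python
# Hypothetical high-level description mapped to low-level geometry: a few
# semantic traits produce a 2D body profile as (x, half-width) sample pairs.
def creature_silhouette(length=1.0, bulk=0.5, elongation=2.0, samples=8):
    """Map made-up high-level traits to a simple 2D profile."""
    profile = []
    for i in range(samples):
        t = i / (samples - 1)                          # 0 at nose, 1 at tail
        width = bulk * (t * (1 - t)) ** (1 / elongation)
        profile.append((t * length, width))
    return profile

stocky = creature_silhouette(bulk=1.0, elongation=1.0)
sleek = creature_silhouette(bulk=0.3, elongation=4.0)
```

The user thinks in "stocky" versus "sleek"; the polygons, UVs and the rest would be derived downstream.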
Such an ability wouldn’t turn one into an artist on its own (just as merely owning a camera doesn’t), and in fact it could even alter the meaning we ascribe to the word ‘artist’ (again, just like photography did), yet it would make visual arts much more accessible to everyone. Furthermore, creating and customizing such systems could become a form of art in its own right, and we would encounter not only generic, universal content creation tools, but also individual, one-of-a-kind systems tailored to a particular person, reflecting and complementing her. Not to mention that there are many other areas, like architecture or engineering, where these new content creation systems would have very practical use.<br />
<br />
I am absolutely sure that such software will inevitably appear because<br />
a) it is better than the current tools,<br />
b) it can be created.<br />
<br />
CGI tools have come a long way since their introduction merely a few decades ago. At first they provided little real automation beyond perfect perspective projections; later, with the advent of unbiased rendering, came realistic tonal distributions – more realistic out of the box than if constructed by an average person, and possibly even an average artist. The ways we define digital form have evolved too, from tedious number-crunching to natural drawing and sculpting. All these and many other advances are inexorably moving towards more and more high-level creation methods.<br />
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://1.bp.blogspot.com/-FBsg2SxSLhg/Ws5i_6mzXZI/AAAAAAAABkA/7xszoNknhM8BnUOnfLY217HFSE9uAkv6wCLcBGAs/s1600/Tundra.jpg" style="margin-left: auto; margin-right: auto;"><img border="0" data-original-height="1200" data-original-width="1200" height="400" src="https://1.bp.blogspot.com/-FBsg2SxSLhg/Ws5i_6mzXZI/AAAAAAAABkA/7xszoNknhM8BnUOnfLY217HFSE9uAkv6wCLcBGAs/s400/Tundra.jpg" width="400" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;"><a href="http://www.the-working-man.org/2015/02/project-tundra.html">Project Tundra</a></td></tr>
</tbody></table>
<br />
Still, current DCC systems are inefficient in many ways. Typically they are designed in very technical terms with little cushioning between the gears and the user – seemingly an atavism from the times when even simple representation of visual data was challenging. Polygons, UVs, interpolation, concatenation – the very language is wrong for high-level work. These terms are neither natural for a human artist, nor do they carry much semantics for the machine to make better use of than bluntly storing and rendering.<br />
<br />
At the same time, there are at least several approaches to building a universal high-level content creation system. The first is to build a network of specialized modules, each tailored to its specific task. As a matter of fact, for many popular tasks like landscape generation or digital humans such modules have existed for quite a while – think Vue, SpeedTree or MakeHuman; Project Aero and Kozinarium would fall into this category too. The trick is to make those modules compatible – to let them talk to each other.<br />
<br />
Invention of a true general AI could be another solution of course, but even before that happens, contemporary machine learning methods already show very impressive results in CGI applications and look like another viable approach to the next generation of graphics software. If my understanding is correct, though, the problem is somewhat similar – we can train modules for specific tasks, but are missing the glue to assemble them together.<br />
<br />
Developing such glue – a generalized framework that could connect individual procedural modules – is the third approach that I see.<br />
<br />
<span style="font-size: large;">The Framework</span><br />
<br />
Visual art is formalizable and can be expressed algorithmically to a much higher degree than is commonly considered. This is the message running through all my procedural work. It takes a lot of knowledge, it takes a lot of effort and devotion, it does take vision, but it is formalizable. I found that while designing procedural systems I think in altogether different terms than ‘pixels’, ‘polygons’ or ‘UVs’. Translation into this technical language of CGI happens quite late in the process, but it is already being done to some extent for each project. What’s needed is to generalize it.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://1.bp.blogspot.com/-XoxSgruvoXo/Ws5jAZPqJ_I/AAAAAAAABkI/wdWnAy9UbHIW_F61AU96nm6kC6JXiYGwgCLcBGAs/s1600/procedural_letterform_constructor_v094.jpg" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="1152" data-original-width="1024" height="400" src="https://1.bp.blogspot.com/-XoxSgruvoXo/Ws5jAZPqJ_I/AAAAAAAABkI/wdWnAy9UbHIW_F61AU96nm6kC6JXiYGwgCLcBGAs/s400/procedural_letterform_constructor_v094.jpg" width="355" /></a></div>
<br />
Basic notions like point, curve and surface can all be redefined in primary terms like corners, contrasts, rhythms and junctions. Particular CG implementations like NURBS or polygons, raster or vector, should become secondary and be derived from them. Multiple parametrizations should be available in a semantic space of concepts like visual weight, isolation, accent, saturation, etc. There’s nothing mystical about it – just the existing artistic knowledge base expressed in mathematical terms. From where I currently stand, developing such a framework looks as realistic as creating a procedural creature generator or an aircraft design toolkit once looked.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://1.bp.blogspot.com/-pCMfi-RCEHY/Ws5jALJEsDI/AAAAAAAABkE/lqxwrN5YDB4TaTydkvmZTuEo6ihALNyuACLcBGAs/s1600/planetoid_map_generator_v078.jpg" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="1152" data-original-width="1024" height="400" src="https://1.bp.blogspot.com/-pCMfi-RCEHY/Ws5jALJEsDI/AAAAAAAABkE/lqxwrN5YDB4TaTydkvmZTuEo6ihALNyuACLcBGAs/s400/planetoid_map_generator_v078.jpg" width="355" /></a></div>
<br />
This system could become the base for brand new high-level DCC software, a language to connect individual modules of different natures together, and possibly even a guide for more context-aware machine learning systems. I surely have a vision for such a language and hope to get an opportunity to design and develop it.<br />
<br />
P.S. Kozinarium has been created as a part of my upcoming art project and is not meant for sharing or distribution.Denis Kozlovhttp://www.blogger.com/profile/07406391692819839722noreply@blogger.com5tag:blogger.com,1999:blog-5149177666770535781.post-66636619028183369342017-04-05T22:52:00.000+02:002017-04-15T16:21:53.033+02:00Procedural Content Creation F.A.Q. - Project Aero, Houdini and BeyondI’ve finally found the time to put together a long-requested video demo for Project Aero and would like to use this opportunity to answer some of the questions I’m often asked about it.<br />
<br />
<iframe allowfullscreen="" frameborder="0" height="281" mozallowfullscreen="" src="https://player.vimeo.com/video/211742962" webkitallowfullscreen="" width="500"></iframe>
<a href="https://vimeo.com/211742962">Procedural Aircraft Design Demo</a> from <a href="https://vimeo.com/kozlove">Denis Kozlov</a> on <a href="https://vimeo.com/">Vimeo</a>.<br />
<span style="font-size: large;"><br />What is Project Aero?</span><br />
<br />
Project Aero is the software I’ve developed for rapid design of aircraft concepts. The video above demonstrates its main features. <br />
<div class="separator" style="clear: both; text-align: center;">
</div>
<br />
<span style="font-size: small;"> </span><span style="font-size: large;">What does “procedural” mean?</span><br />
<br />
In a wider sense it means “automated” – created algorithmically by a computer (rather than manually by a human operator, or sampled like a scan or a photograph). Here are some good places to learn more:<br />
<a name='more'></a><br />
<a href="https://en.wikipedia.org/wiki/Procedural_generation">https://en.wikipedia.org/wiki/Procedural_generation</a><br />
<a href="https://www.youtube.com/watch?v=UVRqCK6m7m4">https://www.youtube.com/watch?v=UVRqCK6m7m4</a><br />
<br />
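A tiny, self-contained illustration of the definition (not taken from any of the tools discussed here): the "image" below is computed entirely from its coordinates and a seed, so it is procedural in the strict sense – deterministic, generated from rules, and sampled from nothing.

```python
# "Procedural" in a nutshell: every pixel value is computed from rules and a
# seed rather than sampled from a photo.  An integer scramble acts as a cheap
# hash, so the same inputs always produce the same pattern.
def pattern(width, height, seed=1):
    img = []
    for y in range(height):
        row = []
        for x in range(width):
            v = (x * 374761393 + y * 668265263 + seed * 2246822519) & 0xFFFFFFFF
            v = (v ^ (v >> 13)) * 1274126177 & 0xFFFFFFFF
            row.append((v >> 24) / 255.0)   # map the top byte to [0, 1]
        img.append(row)
    return img

img = pattern(4, 4)
```

Note there is no stored data at all: re-running with the same seed regenerates the identical pattern at any resolution.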
Project Aero in particular is an example of a computer-aided design system, where the creative work is done by an artist while the computer “fills the gaps” – constructing numerous details to turn the artist’s quick sketch into a finished model according to predefined rules and interactively set parameters. “Procedural” here also refers to every single element (like a rivet geometry or a paint texture) being generated from scratch for each aircraft design, not utilizing any photo textures or library models.<br />
<br />
<span style="font-size: large;">Did you create this software from scratch? Is it a standalone product?</span><br />
<br />
Project Aero is a set of tools for <a href="https://sidefx.com/">SideFX Houdini</a>. Houdini is a 3D animation package (like Maya, 3DSMax, C4D or Blender) and a visual programming environment at the same time. It’s been around for over 20 years and is highly regarded within the film visual effects industry – these days there is hardly a blockbuster released without its use; and of course there is plenty of information available online, for example at the <a href="http://forums.odforce.net/">od|force forum</a>.<br />
<br />
So one needs Houdini to use the Project Aero tools, but in fact these can be made to run within almost any other host – Unreal, Maya, Unity, C4D or even a proprietary app – via the Houdini Engine technology; see for example the ProTrack video further below.<br />
<br />
<span style="font-size: large;">How does it work?</span><br />
<br />
Here’s a quick recording of a sample session to give an idea of how it looks from the user’s perspective. The nodes marked with a blue <span style="color: #3d85c6;"><b><strike><span style="font-size: large;">A</span></strike></b></span> letter are Project Aero tools; the others are native Houdini tools.<br />
<br />
<iframe allowfullscreen="" frameborder="0" height="261" mozallowfullscreen="" src="https://player.vimeo.com/video/125914891" webkitallowfullscreen="" width="500"></iframe>
<a href="https://vimeo.com/125914891">Project Aero - fast and very dirty interaction demo</a> from <a href="https://vimeo.com/kozlove">Denis Kozlov</a> on <a href="https://vimeo.com/">Vimeo</a>.<br />
<br />
<br />
If
we take a look inside the Aero tools, we can see that while possibly
sophisticated, they are actually made of standard Houdini nodes – so my
job as the developer was more or less to arrange those tiny boxes into
something like this:<br />
<span style="font-size: small;"> </span><br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://3.bp.blogspot.com/-fpm6UQ0lOd4/WOVRAFCtGaI/AAAAAAAAAv8/OAz5rqfi7cYl6dofxE-otZ830RJMBQC-wCEw/s1600/AERO_inside_tool.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="235" src="https://3.bp.blogspot.com/-fpm6UQ0lOd4/WOVRAFCtGaI/AAAAAAAAAv8/OAz5rqfi7cYl6dofxE-otZ830RJMBQC-wCEw/s400/AERO_inside_tool.jpg" width="400" /></a></div>
<br />
<br />
But a typical aircraft designed with Project Aero would look more like this to the artist (end user):<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://4.bp.blogspot.com/--KN7fLgEfzI/WOVQ-G-qJ6I/AAAAAAAAAv4/WD-lXOOPPAIqjrS-0PBzd8xnINNO6uSugCEw/s1600/AERO_user_example.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="400" src="https://4.bp.blogspot.com/--KN7fLgEfzI/WOVQ-G-qJ6I/AAAAAAAAAv4/WD-lXOOPPAIqjrS-0PBzd8xnINNO6uSugCEw/s400/AERO_user_example.jpg" width="377" /></a></div>
<br />
More technical details can be found in the original article <a href="http://www.the-working-man.org/2015/04/on-wings-tails-and-procedural-modeling.html">On Wings, Tails and Procedural Modeling</a>.<br />
<br />
<span style="font-size: large;">Can a particular feature be added?</span><br />
<br />
This
usually refers to what can or cannot be done within Houdini in general.
Whether it is LOD generation, texture baking, additional details,
options, outputs or whatever, the default answer is yes – the real
question is how much hassle it would take. Project Aero does this or
that not so much because of strict external limitations, but
largely because it was designed that way, and it will gladly
provide whatever extra functionality is built into it. In other
words: if you can clearly see the algorithm, you can implement it; it’s
mostly a question of how much effort you are willing to invest, which
in turn is largely connected to what you design for. <br />
<br />
So if you
need LODs, you plan your system so that it can naturally generate
those LODs; if you need a diverse library of settings, you design it to
be modular where needed from the start; if you require a particular
topology, you generate geometry in a way that guarantees it. In this
sense developing for Houdini is just like any other software
development: it’s not so much about some magic that Houdini supports
or doesn’t – it’s about what you implement or don’t, what you plan
for or not. On the purely technological side, Houdini provides pretty
good support for contemporary CG techniques and approaches.<br />
<br />
<span style="font-size: large;">Is Project Aero publicly available?</span><br />
<br />
Project
Aero has been developed as an internal tool and is not available for
sharing or distribution. And while I’m interested in collaboration on
development of procedural content creation systems, I have to restrict
these opportunities to commercial projects only.<br />
<br />
<span style="font-size: large;">Is it production-ready?</span><br />
<br />
Surprisingly,
yes. In the time since the initial publication I had a chance to use Aero
for the still semi-secret Flight Immunity – an art project involving over 50
aircraft designs, on a timeframe hardly possible with any other
production technique. Things kept happening along the way: the host version
changed, bugs got found and fixed, features were added – texture baking,
extra detail types and custom AOVs to name a few; yet I have
difficulty recalling a time when such a change caused noticeable
trouble or delay – in general, things went smoother than I expected.<br />
<br />
<span style="font-size: large;">What else can be created this way?</span><br />
<br />
Pretty
much anything. Seriously. At least pretty much anything that can be
algorithmized. And the truth is that with enough understanding and
experience it is possible to algorithmize much, if not most, of
commercial artists’ work already today. Here are some examples:<br />
<br />
ProTrack by IndiePro Tools – procedural racing track system for Unity.<br />
<br />
<iframe allowfullscreen="" frameborder="0" height="281" mozallowfullscreen="" src="https://player.vimeo.com/video/123350222" webkitallowfullscreen="" width="500"></iframe>
<a href="https://vimeo.com/123350222">GDC2015 | PROTRACK by IndiePro</a> from <a href="https://vimeo.com/goprocedural">Go Procedural</a> on <a href="https://vimeo.com/">Vimeo</a>.<br />
<br />
<br />
<br />
<iframe allowfullscreen="" frameborder="0" height="281" mozallowfullscreen="" src="https://player.vimeo.com/video/156742662" webkitallowfullscreen="" width="500"></iframe>
<a href="https://vimeo.com/156742662">Creature Integration/Advanced AOVs Demo</a> from <a href="https://vimeo.com/kozlove">Denis Kozlov</a> on <a href="https://vimeo.com/">Vimeo</a>.<br />
<br />
Although
made for a different purpose, the video above showcases a snail
created completely procedurally as a proof of concept. It can
be parametrically adjusted in any designed aspect and, with a few months
invested, could even be turned into a full-scale creature generator.<br />
<br />
And
the following are a few stills from another, yet unannounced, project of mine.
Unlike the examples seen so far, these are true generators, or
interactive evolution systems – the images are created by an algorithm
with minimal to no human involvement.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://2.bp.blogspot.com/-Wyz_TdJIyEQ/WOVQ6oTd_VI/AAAAAAAAAwE/dOA9GM6kbEEpdgMMxj5EEkCAMsYAIGk0wCEw/s1600/planetoid_map_generator_v078_DK.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="400" src="https://2.bp.blogspot.com/-Wyz_TdJIyEQ/WOVQ6oTd_VI/AAAAAAAAAwE/dOA9GM6kbEEpdgMMxj5EEkCAMsYAIGk0wCEw/s400/planetoid_map_generator_v078_DK.jpg" width="355" /></a></div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://4.bp.blogspot.com/-e3zPBHHS8Vs/WOVRKBtqXzI/AAAAAAAAAwE/tXqYfos397oiFLga67dRh5FpQWv21ifiwCEw/s1600/procedural_letterform_constructor_v094_DK.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="400" src="https://4.bp.blogspot.com/-e3zPBHHS8Vs/WOVRKBtqXzI/AAAAAAAAAwE/tXqYfos397oiFLga67dRh5FpQWv21ifiwCEw/s400/procedural_letterform_constructor_v094_DK.jpg" width="355" /></a></div>
<br />
<span style="font-size: large;">Could the same be done without Houdini?</span><br />
<br />
Absolutely. If we take Project Aero, the techniques that make it up have been out there for years – I just saw the way to put them together to reach a particular goal. The same functionality can be achieved with most programming languages.<br />
<br />
Yet at the same time Houdini has saved me a few years of learning C++, or the expense of hiring a skilled programmer – instead it allowed me to directly convert my existing knowledge of computer graphics into ready-to-use software. This is no small saving given the amount of extra knowledge a project of this kind requires: graphics and coding aside, one has to dive into the subject-related disciplines. You read up on geomorphology to create good terrain tools; comparative anatomy and evo-devo for creature generation; vehicle packaging, styling and aerodynamics for a transportation project, and so forth. Getting slightly off-topic, my top 3 useful books for Aero are:<br />
<br />
D. Raymer – “Aircraft Design: A Conceptual Approach”<br />
T. Talay – “Introduction to the Aerodynamics of Flight”<br />
P. Bowers – “Unconventional Aircraft”<br />
<br />
And then again, the possibility to easily port tools to different platforms with Houdini Engine looks quite appealing.<br />
<br />
<span style="font-size: large;">Is this the future?</span><br />
<br />
I believe so. Procedural content creation looks like a natural next step in the evolution of artistic tools. 40 years ago it required serious coding and engineering skills to create a basic 3D model; 15 years ago the prerequisites still included a thorough understanding of software and topology; nowadays digital sculpting allows anyone to create 3D models almost as easily as drawing – one can still use the lower-level techniques when needed, but the faster, easier and more convenient approach comes first. And this is exactly what procedural tools provide – faster, easier and more convenient content creation methods, while manual adjustments remain available when required. <br />
<br />
Another futuristic aspect is software being developed by the users themselves, not only dedicated programmers. Programming as an activity keeps creeping into our daily lives inch by inch, without us even noticing. I see this as a good thing, and I can clearly envision a future where an artist making his own tools looks as ordinary as a driver setting up a GPS navigator.<br />
<br />
With Project Aero I wanted to show that all this is possible already now – that the future, in a sense, is already here. Something I’ve been trying to prove in countless pub talks for years.<br />
<br />
Talking practical aspects, game developers seem to be the first logical beneficiaries, and in fact they’ve been using procedural generation in one form or another for decades. Tabletop RPGs and regular toy manufacturers may be a less obvious category. But imagine designing a 3D-printed character for a D&amp;D party, or a whole one-of-a-kind fantastic zoo, in just a few clicks – with a posh high-end look yet no special technical skills required.<br />
<br />
This decoupling of results from the artist’s technical skills – making those skills available to anyone – seems the most valuable outcome to me, even more than the associated productivity gain. And providing pro-grade artistic tools and capabilities to fan communities is only the tip of the iceberg.<br />
<br />
Exciting.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://3.bp.blogspot.com/-d6B4_5mK5b0/WOX3ZLu938I/AAAAAAAAAwY/1DV68HbhHA0e4mRAaKe3OEckTEbccbLCgCLcB/s1600/AERO_video_cover_720.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="225" src="https://3.bp.blogspot.com/-d6B4_5mK5b0/WOX3ZLu938I/AAAAAAAAAwY/1DV68HbhHA0e4mRAaKe3OEckTEbccbLCgCLcB/s400/AERO_video_cover_720.jpg" width="400" /></a></div>
Denis Kozlovhttp://www.blogger.com/profile/07406391692819839722noreply@blogger.com0tag:blogger.com,1999:blog-5149177666770535781.post-52376660951503318182015-11-30T21:55:00.002+01:002017-02-13T21:04:50.531+01:00Render Elements: UVs<a href="http://www.the-working-man.org/2015/08/render-elements-normals.html">Continuing on the topic of AOVs</a> with another brief anatomic study. This article closes my series on post-render image manipulation; I believe and hope that an understanding of other AOVs like Z-Depth, Direct/Indirect Lighting passes or World/Rest Position can easily be derived from the principles already discussed, common sense and the Internet. And the following video could serve as a good example of these principles in use.<br />
<br />
<iframe allowfullscreen="" frameborder="0" height="280" mozallowfullscreen="" src="https://player.vimeo.com/video/156742662" webkitallowfullscreen="" width="500"></iframe>
<br />
<br />
Now let's take a look at the UVs...<br />
<br />
<a name='more'></a>One less common yet extremely powerful render element is the texture
coordinates (or UVs). 3D software typically doesn't provide a preset
output for them out of the box, however it is quite easy to create one
manually and enjoy the benefits of post-render texturing and
other useful tricks.<br />
<br />
<br />
<span style="font-size: large;">The UVs</span><br />
<br />
For each point of a three-dimensional surface, texture coordinates are nothing more than two numbers*. These numbers define an exact location of a point within a regular 2D image, creating a correspondence that matches every surface point with a point of a texture. This sets the precise way a 2D texture image should be placed over a model's 3D surface.** <br />
<br />
In fact, UV unwrapping, projecting, pelting and editing are simply the methods by which we define this correspondence within the software.<br />
<br />
<i><span style="font-size: x-small;">*Three numbers or UVW coordinates are used for volumetric texturing (like when applying procedural 3D textures).</span></i><br />
<i><span style="font-size: x-small;">**The correspondence is unique one-way only, and multiple parts of a 3D model can be matched to the same piece of a 2D texture.</span></i><br />
<br />
The UVs are measured relative to the borders of a picture, and are therefore independent of its resolution, allowing them to be reused with any image of any size and proportions. The first number of the two defines the horizontal coordinate (0 means the left picture edge, 1 stands for the right one). The second number describes the vertical position within the raster in a similar manner: 0 means bottom, 1 – top <span style="font-size: x-small;">(see the second illustration below)</span>.<br />
<br />
Texture coordinates locate vertices without snapping to texture pixels (known as texels) – a surface point can be mapped to a center of a texel, its border or anywhere within its area.<br />
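To make this concrete, here is a minimal Python sketch of such a lookup (my own illustration, not code from any particular package; the function name and the rows-of-grayscale-values texture layout are invented for this example). A sampled point can land anywhere within a texel, so the four nearest texels are blended rather than snapped to:

```python
def sample_bilinear(texture, u, v):
    """Sample a texture (rows of grayscale values) at UV coordinates.

    UVs are resolution-independent: (0, 0) is the bottom-left corner,
    (1, 1) the top-right. Texel centers sit at pixel index + 0.5, and
    a point landing between centers is blended bilinearly.
    """
    h, w = len(texture), len(texture[0])
    # Map UVs to continuous texel coordinates, clamped to the image.
    x = min(max(u * w - 0.5, 0.0), w - 1.0)
    y = min(max((1.0 - v) * h - 0.5, 0.0), h - 1.0)  # V=0 is the bottom edge
    x0, y0 = int(x), int(y)
    x1, y1 = min(x0 + 1, w - 1), min(y0 + 1, h - 1)
    fx, fy = x - x0, y - y0
    top = texture[y0][x0] * (1 - fx) + texture[y0][x1] * fx
    bottom = texture[y1][x0] * (1 - fx) + texture[y1][x1] * fx
    return top * (1 - fy) + bottom * fy
```

For a 2x2 texture with a black left column and white right column, `sample_bilinear(tex, 0.25, 0.5)` hits the left texel center exactly, while `u = 0.5` returns the halfway blend.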
<br />
<span style="font-size: large;">The shader</span><br />
<br />
Since raster images <a href="http://www.the-working-man.org/2014/11/pixel-is-not-color-square.html">store nothing but numbers</a> (and often <a href="http://www.the-working-man.org/2014/12/bit-depth-color-precision-in-raster.html">within that very range of zero-to-one</a>), the surface's texture coordinates can be rendered out as an additional element. Just like the color of the main “Beauty” pass, or ZDepth, or Normals are rendered. Red and Green image channels of this element are traditionally utilized for storing U (horizontal) and V (vertical) values respectively.*<br />
<i><span style="font-size: x-small;"><br />*The Blue channel stays free and can be used to encode some additional data (<a href="http://www.the-working-man.org/2015/06/storing-masks-in-rgb-channels.html">like an object's mask</a> or ambient occlusion).</span></i><br />
<br />
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="http://4.bp.blogspot.com/-rv9AlgbNA1k/Vlyx4aT_UII/AAAAAAAAAso/FVbBumiphl4/s1600/uvs_element_prw.jpg" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img alt="Rendered UV pass of a 3D scene" border="0" height="225" src="https://4.bp.blogspot.com/-rv9AlgbNA1k/Vlyx4aT_UII/AAAAAAAAAso/FVbBumiphl4/s400/uvs_element_prw.jpg" title="Rendered UV pass of a 3D scene" width="400" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">This is how the resulting render element looks. Red and Green
values display the exact UV values for each pixel, which in turn
allows us to map a texture after the render is actually done.</td></tr>
</tbody></table>
<br />
<br />
All it takes to create this AOV is a constant shader with the double-gradient texture shown in the following illustration (the target object has to be UV-mapped, of course). This texture simply represents the UV coordinate tile as an image, acting as a sort of indicator when rendered. Due to the very high precision required from the produced values for later texture mapping, it is highly preferable to have it in floating point and at infinite resolution – that is, to create it procedurally by literally adding two gradients (horizontal black-to-red and vertical black-to-green) within the shading network.<br />
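For illustration only, the same double gradient can be sketched in Python (my own example, assuming a simple rows-of-RGB-tuples image layout; in practice you would build it inside the shading network as described above, to keep it resolution-independent):

```python
def make_uv_tile(width, height):
    """Build the double-gradient UV indicator texture procedurally.

    Each pixel stores its own UV coordinates as color: U in Red,
    V in Green, Blue left at zero. Values are floating point and
    sampled at texel centers, so nothing is lost to quantization.
    """
    rows = []
    for j in range(height):
        v = 1.0 - (j + 0.5) / height  # top raster row has V close to 1
        rows.append([((i + 0.5) / width, v, 0.0) for i in range(width)])
    return rows
```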
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="http://3.bp.blogspot.com/-soxlzcjsKDs/Vlyx4KfQSxI/AAAAAAAAAss/oUbffe0YxqQ/s1600/UV_tile_raster_v001.jpg" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img alt="Anatomy of a UV tile" border="0" height="305" src="https://3.bp.blogspot.com/-soxlzcjsKDs/Vlyx4KfQSxI/AAAAAAAAAss/oUbffe0YxqQ/s400/UV_tile_raster_v001.jpg" title="Anatomy of a UV tile" width="400" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;"><style type="text/css">P { margin-bottom: 0.08in; }</style>
<br />
<div style="margin-bottom: 0in;">
A
tile of texture coordinates represented as RGB colors. Rendering
objects with this texture generates UVs render element. Corner RGB
values and individual channels are shown for reference.</div>
</td></tr>
</tbody></table>
<br />
<span style="font-size: large;">The applications</span><br />
<br />
The main point of outputting UV coordinates as a render element is the ability to quickly reapply any texture to an already rendered object in compositing, using specialized tools like the Texture node in Fusion or STMap in Nuke. <br />
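As a rough sketch of what such tools do internally (simplified by me to a nearest-texel lookup – real nodes filter the lookup – and assuming images as rows of pixels, with the UV pass as rows of (u, v) tuples):

```python
def st_remap(uv_pass, texture):
    """Re-texture a rendered image from its UV pass, STMap-style.

    For each output pixel, read the (u, v) the renderer wrote there
    and fetch the corresponding texel from the new texture. This is
    why the UV pass must be high precision: any quantization shows
    up directly as texture placement error.
    """
    h, w = len(texture), len(texture[0])
    out = []
    for row in uv_pass:
        out.append([
            texture[min(int((1.0 - v) * h), h - 1)][min(int(u * w), w - 1)]
            for u, v in row
        ])
    return out
```

A UV pass that exactly reproduces the tile layout simply returns the texture unchanged – a handy sanity check for the setup.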
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="http://2.bp.blogspot.com/-JIHnMo1chDo/Vlyx4OqkK-I/AAAAAAAAAs4/8sIPL_WPmHk/s1600/textured_w_uvs_prw.jpg" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img alt="Textures applied in compositing" border="0" height="225" src="https://2.bp.blogspot.com/-JIHnMo1chDo/Vlyx4OqkK-I/AAAAAAAAAs4/8sIPL_WPmHk/s400/textured_w_uvs_prw.jpg" title="Textures applied in compositing" width="400" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;"><style type="text/css">P { margin-bottom: 0.08in; }</style>
<br />
<div style="margin-bottom: 0in;">
An image textured in compositing
using the UVs render element and the individual objects' masks as
discussed in <a href="http://www.the-working-man.org/2015/06/storing-masks-in-rgb-channels.html">earlier</a> <a href="http://www.the-working-man.org/2015/08/packing-lighting-data-into-rgb-channels.html">posts</a>.</div>
</td></tr>
</tbody></table>
<br />
<br />
Like pretty much all post-shading techniques, this one has certain limitations. It is not really suitable for semi-transparent objects and works best on simpler isolated forms. <br />
<br />
Using a constant shader ensures that the coordinate information is rendered precisely – unaffected by lighting. However, antialiasing of the edges introduces color values that do not really correspond to the UV information, which naturally leads to artifacts in post texture projection. A typical way of partly fighting these is upscaling the texture AOV before processing and downscaling afterwards, which can be turned on right before rendering the final composite to keep the project interactive. Rendering aliased samples directly into a high-resolution raster might be a more proper solution, though a more demanding one as well. <br />
<br />
And still the advantages are many. Post-texturing becomes especially powerful when the scene is rendered in passes with lighting information separated from textures, so that it can be reused with new ones. In fact much of the lighting can be recreated in post as well from additional elements like Normals, World Position and ZDepth. All this, in turn, allows for creating procedural scene templates in compositing. As a basic example, imagine an animation of a book with turning pages which needs to be re-rendered with new textures monthly. A good compositing setup utilizing various render elements would require the actual 3D scene to be rendered only once, leaving most further changes purely in comp, which typically offers much more interactivity and lower render times.<br />
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="http://3.bp.blogspot.com/-WKqvWhOgiDY/Vlyx4QlnRQI/AAAAAAAAAsw/dpzxQaWhODc/s1600/textured_w_uvs_shaded_prw.jpg" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img alt="3D scene completely textured and shaded in compositing from AOVs" border="0" height="225" src="https://3.bp.blogspot.com/-WKqvWhOgiDY/Vlyx4QlnRQI/AAAAAAAAAsw/dpzxQaWhODc/s400/textured_w_uvs_shaded_prw.jpg" title="3D scene completely textured and shaded in compositing from AOVs" width="400" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;"><style type="text/css">P { margin-bottom: 0.08in; }</style>
<br />
<div style="margin-bottom: 0in;">
Lighting
developed for the <a href="http://www.the-working-man.org/2015/08/render-elements-normals.html">previous article</a> solely from the Normals
AOV applied to the same image as a simple example of a procedural
shading setup completely assembled in composing.</div>
</td></tr>
</tbody></table>
<br />
<br />
Taking it further, the texture-coordinate plate from the second illustration (a UV tile) can efficiently serve as a tool for measuring and capturing any deformation within the screen space. Modifying that image with any transformations, warps, bends and distortions results in a raster that “remembers” those deformations precisely, so that they can be recreated on any other image with the same tools used for post-render texture mapping. The method is often used for storing lens distortion, for instance. Another practical application is to render our magic shader refracted in another object, thus creating a refraction map rather than the usual surface UVs element.<br />
<br />Denis Kozlovhttp://www.blogger.com/profile/07406391692819839722noreply@blogger.com0tag:blogger.com,1999:blog-5149177666770535781.post-8493839597090282452015-08-16T21:26:00.003+02:002017-02-13T21:08:18.455+01:00Render Elements: Normals<div class="separator" style="clear: both; text-align: center;">
</div>
<style type="text/css">P { margin-bottom: 0.08in; }</style>
<br />
This time let's do a brief anatomic study of the Normals output variable. Below is my original manuscript of an article first published in issue 188 of 3D World magazine.<br />
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="http://4.bp.blogspot.com/-5jkA6LZF9_4/VdDcH5UgomI/AAAAAAAAAr4/qOoI_V-diRo/s1600/normals_anatomy_v003.jpg" style="margin-left: auto; margin-right: auto;"><img alt="A brief anatomic study of a Normals output variable" border="0" height="640" src="https://4.bp.blogspot.com/-5jkA6LZF9_4/VdDcH5UgomI/AAAAAAAAAr4/qOoI_V-diRo/s640/normals_anatomy_v003.jpg" title="Render elements: Normals anatomy. www.the-working-man.org" width="282" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">A typical Normals element rendered in screen space<br />
</td></tr>
</tbody></table>
<a name='more'></a><br />
<span style="font-size: large;">The shaders and the AOVs </span><br />
<br />
Although this might be obscured from the artist in many popular 3D applications, a shader (and a surface shader in particular) is actually a program run for each pixel*. It takes input variables for that pixel – such as surface position, normal orientation and texture coordinates – and outputs some variable value based on those inputs. This new pixel value is typically a simulated surface color; however, any variable calculated by the shader can be rendered instead, often creating weird-looking imagery. Such images usually complement the main Beauty render and are called Arbitrary Output Variables (AOVs) or Render Elements. <br />
<br />
<span style="font-size: x-small;"><i>*Vertex shading can be used in interactive applications like games. Such shaders are calculated per vertex of the geometry being rendered rather than per pixel of the final image, and the resulting values are interpolated across the surface. </i></span><br />
<br />
<span style="font-size: large;">The normals </span><br />
<br />
Normals are vectors perpendicular to the surface. They are already calculated and used by the surface shaders among the other input variables in order to compute the final surface color, and thus are usually cheap to output as an additional element. They are also <a href="http://www.the-working-man.org/2014/11/pixel-is-not-color-square.html">easy to encode into an RGB image</a>, each being a three-dimensional vector and so requiring exactly three values to express. In accordance with the widespread CG convention, XYZ coordinates are stored in the corresponding RGB channels (X in Red, Y in Green and Z in Blue). And since all we need to encode is orientation data, an integer format would suffice, although <a href="http://www.the-working-man.org/2014/12/bit-depth-color-precision-in-raster.html">16-bit depth is highly preferable</a>. <br />
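As a hedged illustration of that convention (my own sketch, not code from any renderer): each normal component lies in [-1, 1], while image channels store [0, 1], hence the usual (n + 1) / 2 remapping on write and its inverse on read:

```python
def encode_normal(n):
    """Pack a unit normal (x, y, z) into RGB color values.

    Components lie in [-1, 1]; images store [0, 1], so each axis is
    remapped with (n + 1) / 2 before writing X, Y, Z into the
    Red, Green and Blue channels respectively.
    """
    return tuple((c + 1.0) / 2.0 for c in n)

def decode_normal(rgb):
    """Invert the packing to recover the vector for relighting in comp."""
    return tuple(c * 2.0 - 1.0 for c in rgb)
```

A face pointing straight at the camera, (0, 0, 1) in screen space, encodes to the familiar (0.5, 0.5, 1.0) – the light-blue color dominating screen-space normal passes.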
<br />
This orientation can be expressed in terms of World Space (relative to the global coordinates of the 3D scene) or Screen Space (Tangent Space if we're talking about baking textures). The latter option is typically the most useful output for compositing or texture baking, although the first one has some advantages as well (like masking out parts of the scene based on the global directions, i.e. all faces pointing upwards regardless of camera location).<br />
<br />
Closer examination is provided in the header image above.<br />
<div style="margin-bottom: 0in;">
<br /></div>
<div style="margin-bottom: 0in;">
<style type="text/css">P { margin-bottom: 0.08in; }</style>
</div>
<br />
<span style="font-size: large;">The applications </span><br />
<br />
The uses of all this treasure are many. For one, since this is the same data shaders utilize to light a surface, compositing packages offer relighting tools for simulating local directional lighting and reflection mapping based solely on the rendered normals. This method won't generate any shadows, but is typically lightning fast compared to going back to a 3D program and rerendering. <br />
<br />
Normals can also be used as an additional (among the others like World Position or ZDepth) input for more complex relighting in 2D. <br />
<br />
It's easy to notice that the antialiased edges of the Normals AOV produce improper values and thus become a source of relighting artifacts. One way to partly fight this limitation is upscaling the image plates before the relighting operation and downscaling them back afterwards. This obviously slows down processing, so it should be turned on just prior to the final render of the composition. <br />
<br />
Manipulating the tonal ranges (grading) of individual channels, or color keying the Normals pass, can generate masks for further adjustments of the main image (the Beauty render) based on surface orientation – for example, brightening up all the surfaces facing left of the camera, or getting a Fresnel mask out of the Blue channel. <br />
<br />
All this empowered by <a href="http://www.the-working-man.org/2015/08/packing-lighting-data-into-rgb-channels.html">gradient mapping and adding the intermediate results together</a> provides quite an extensive image manipulation/relighting toolkit capable of producing pictures on its own like the following one.<br />
<br />
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="http://2.bp.blogspot.com/-nVpfhzhQjlc/VdDcHZXP9tI/AAAAAAAAAr0/tuaf7EM8hWc/s1600/shaded_w_normals_prw.jpg" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img alt="Render elements: Normals. www.the-working-man.org" border="0" height="225" src="https://2.bp.blogspot.com/-nVpfhzhQjlc/VdDcHZXP9tI/AAAAAAAAAr0/tuaf7EM8hWc/s400/shaded_w_normals_prw.jpg" title="Render elements: Normals. www.the-working-man.org" width="400" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;"><style type="text/css">P { margin-bottom: 0.08in; }</style>
<br />
<div style="margin-bottom: 0in;">
<span style="font-family: "liberation" serif , serif;">This
image was created in compositing using only the Normals render
element </span></div>
<div style="margin-bottom: 0in;">
<span style="font-family: "liberation" serif , serif;">and the individual objects' masks as discussed <a href="http://www.the-working-man.org/2015/06/storing-masks-in-rgb-channels.html">earlier</a>.</span></div>
</td></tr>
</tbody></table>
<br />
<span style="font-size: large;">*Bump vs Normals</span> (instead of a post scriptum)<br />
<br />
Bump mapping is nothing more than modifying the surface normals based on a height map at render time, so that the shader does its usual calculations but with these new, modified normals as input. Everything else stays the same. This also means that bump mapping and normal mapping are essentially the same technique, and the corresponding maps can easily be converted into one another. Bump textures have the benefit of being easier to edit and usable for displacement, while normal maps are more straightforward for the GPU to interpret, which is their main advantage for real-time applications. <br />
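To illustrate one direction of that conversion, here is a rough sketch (my own, simplified) deriving per-pixel normals from a height map via central differences; production converters add filtering, scale and format conventions on top:

```python
def height_to_normals(height, strength=1.0):
    """Convert a bump (height) map into a normal map.

    The surface gradient in X and Y tilts the base normal (0, 0, 1),
    and the result is normalized per pixel. This is the sense in which
    bump and normal maps carry the same information.
    """
    h, w = len(height), len(height[0])
    normals = []
    for j in range(h):
        row = []
        for i in range(w):
            # Central differences with clamped borders.
            dx = (height[j][min(i + 1, w - 1)] - height[j][max(i - 1, 0)]) * strength
            dy = (height[min(j + 1, h - 1)][i] - height[max(j - 1, 0)][i]) * strength
            length = (dx * dx + dy * dy + 1.0) ** 0.5
            row.append((-dx / length, -dy / length, 1.0 / length))
        normals.append(row)
    return normals
```

A flat height map yields straight-up (0, 0, 1) normals everywhere; any slope tilts them against the gradient, exactly as a bump shader would perturb them at render time.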
<br />
The Normals element we've been studying here could be rendered either with or without bump modifications applied. Denis Kozlovhttp://www.blogger.com/profile/07406391692819839722noreply@blogger.com0tag:blogger.com,1999:blog-5149177666770535781.post-36875168105627293122015-08-13T15:55:00.001+02:002017-02-13T20:36:51.518+01:00Packing Lighting Data into RGB Channels<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://1.bp.blogspot.com/-TzQML1zvmrg/VcyVg3N_BOI/AAAAAAAAArI/kxWpmBsVzSw/s1600/sample_a_prw.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="277" src="https://1.bp.blogspot.com/-TzQML1zvmrg/VcyVg3N_BOI/AAAAAAAAArI/kxWpmBsVzSw/s400/sample_a_prw.jpg" width="400" /> </a></div>
<div class="separator" style="clear: both; text-align: center;">
<br /></div>
<style type="text/css">P { margin-bottom: 0.08in; }</style>Most existing renderers process Red, Green and Blue channels independently. While this limits the representation of certain optical phenomena (especially those in the domain of physical rather than geometric optics), it provides some advantages as well. For one, this feature allows encoding lighting information from several sources into a single image separately, which we are going to look at in this article. <br />
<br />
<a name='more'></a><br />
<span style="font-size: large;">Advanced masking </span><br />
<br />
For the first example let's consider rendering masks, which <a href="http://www.the-working-man.org/2015/06/storing-masks-in-rgb-channels.html">we discussed last time</a>. The image above introduces a slightly more advanced scenario where an object is present in reflections and refractions as well, so we want the mask to include those areas too, but at the same time to provide separate control over them. <br />
<br />
<br />
While the most proper approach might be to render two passes (one masking the object itself and another one with primary visibility off – the object only visible in reflections/refractions), there is a way to pack this data into a single render element if desired. Just set the reflection filter color to pure Red (1,0,0), the refraction color to Green (0,1,0), and make sure the object being masked is present in all these channels plus one extra (meaning it's white in our case). To isolate the mask for a reflection or refraction, we now only need to subtract the Blue channel (into which, by design, nothing gets reflected or refracted) from Red or Green respectively.<br />
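The channel subtraction described above is trivial in any compositing package; a numpy sketch makes the arithmetic explicit. It assumes a float RGB array where Red carries object + reflections, Green carries object + refractions, and Blue carries the object alone:

```python
# Splitting the packed matte render into three separate masks.
import numpy as np

def split_matte(matte_rgb):
    r, g, b = matte_rgb[..., 0], matte_rgb[..., 1], matte_rgb[..., 2]
    object_mask = b                               # primary visibility only
    reflection_mask = np.clip(r - b, 0.0, 1.0)    # reflections minus direct
    refraction_mask = np.clip(g - b, 0.0, 1.0)    # refractions minus direct
    return object_mask, reflection_mask, refraction_mask
```

A directly visible pixel is white (1,1,1), so the subtraction cancels it out of the reflection and refraction masks, leaving only the indirectly seen areas.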
<br />
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="http://3.bp.blogspot.com/-m7hUr8IqY1U/VcyVgSNZypI/AAAAAAAAArA/p7Aq_mHUj18/s1600/sample_a_mask1_channels_prw.jpg" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="150" src="https://3.bp.blogspot.com/-m7hUr8IqY1U/VcyVgSNZypI/AAAAAAAAArA/p7Aq_mHUj18/s400/sample_a_mask1_channels_prw.jpg" width="400" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;"><style type="text/css">P { margin-bottom: 0.08in; }</style>
<br />
<div style="margin-bottom: 0in;">
A custom matte pass allowing
isolation of an object in reflections and refractions as well.</div>
</td></tr>
</tbody></table>
<br />
As usual when dealing with mask manipulations, special care should be
taken to avoid edge corruption, and this method might not be
optimal for softer (e.g. motion blurred) contours.<br />
<br />
<span style="font-size: large;">Isolating lights </span><br />
<br />
Another situation may require isolating the contribution of a particular light source to the scene, including its impact on global illumination. Again, it's great (and typically much faster) when your rendering software provides per-light output elements, but this is not always an option. <br />
<br />
Taking advantage of the separate RGB processing, as soon as we color each source pure Red, Green or Blue, its impact will be contained completely within the corresponding channel and never “spill out” into another. Yet each light preserves its complete functionality, including GI and caustics. Of course, all surfaces should be desaturated for this pass (otherwise an initially red object, for example, might not react to a light represented in another channel). <br />
<br />
The resulting data can be used in compositing as a mask or an overlay to correct/manipulate the individual contribution of each light to the beauty render, for instance to adjust the key/fill lights balance. <br />
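A minimal sketch of the rebalancing step, assuming the channel roles of the scene described below (Red = key light, Green = fill light, Blue = ambient occlusion); the function name, gain parameters, and the choice to apply AO as a multiplier are illustrative:

```python
# Rebalance per-light contributions stored in the channels of one render.
import numpy as np

def rebalance_lights(light_rgb, key_gain=1.0, fill_gain=1.0):
    key = light_rgb[..., 0] * key_gain    # Red channel: key light
    fill = light_rgb[..., 1] * fill_gain  # Green channel: fill light
    relit = key + fill                    # light contributions add linearly
    ao = light_rgb[..., 2]                # Blue channel: ambient occlusion
    return relit * ao                     # occlusion attenuates, so multiply
```

Because light is additive, the gains can be animated or tweaked freely in comp without re-rendering.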
<br />
For this scene I had only two light sources (encoded in Red and Green), so in a similar fashion I have added an ambient occlusion light into the otherwise empty Blue channel. Ambient occlusion deserves a separate article on its own as it has numerous applications in compositing and is a very powerful look adjustment vehicle. Depending on the software, AO could be implemented as a surface shader, still it can fit into a single channel and be encoded in one custom shader together with some other useful data like UV coordinates or the aforementioned masks.<br />
<br />
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="http://3.bp.blogspot.com/-c80mm51RZ34/VcyVgh5hW5I/AAAAAAAAAq8/i0tJgsLtyvI/s1600/sample_a_RGB_lights_scheme_prw.jpg" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="400" src="https://3.bp.blogspot.com/-c80mm51RZ34/VcyVgh5hW5I/AAAAAAAAAq8/i0tJgsLtyvI/s400/sample_a_RGB_lights_scheme_prw.jpg" width="400" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;"><style type="text/css">P { margin-bottom: 0.08in; }</style>
<br />
<div style="margin-bottom: 0in;">
This weirdly colored additional
render contains separate impact of each of two scene lights within
Red and Green channels, while Blue stores ambient occlusion for
diffuse objects</div>
</td></tr>
</tbody></table>
<div style="margin-bottom: 0in;">
<br />
<span style="font-size: large;">Saving on volumes </span><br />
<br />
The described technique becomes most powerful when applied to volumes. Volumetric objects usually take considerable time to render and are often intended to be used as elements later (which implies they should come in multiple versions). By lighting a volume with 3 different lights of pure Red, Green and Blue colors we can get 3 monochromatic images with different light directions in a single render. <br />
<br />
To have a clearer picture while setting up those lights, it is handy to tune them one at a time, in white and with the others off. Enabling all three simultaneously and assigning the channel-specific colors can be done right before the final render – the result in each channel should automatically match the white preview of the corresponding source.<br />
<br />
<br /></div>
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="http://4.bp.blogspot.com/-J7Ja4IHX5AI/VcyVhbw_n1I/AAAAAAAAAqw/xu9erP8ADek/s1600/sample_b_channels_prw.jpg" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="298" src="https://4.bp.blogspot.com/-J7Ja4IHX5AI/VcyVhbw_n1I/AAAAAAAAAqw/xu9erP8ADek/s400/sample_b_channels_prw.jpg" width="400" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Three lights stored in a single RGB
image</td></tr>
</tbody></table>
<br />
<br />
The trick now is to manipulate and combine them into the final picture. All sorts of color corrections and compositing techniques can be used here, but I find gradient mapping to be especially powerful. Coming under various names but available almost universally in image editors of all sorts, it is a tool that remaps a monochromatic input range into an arbitrary color gradient, thus “colorizing” a black-and-white image.<br />
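Under the hood, gradient mapping is just a per-component lookup through a color ramp. A small numpy sketch, assuming the ramp is given as (position, RGB) stops; the example ramp colors are arbitrary:

```python
# Gradient mapping: remap a monochrome channel through a color ramp.
import numpy as np

def gradient_map(gray, positions, colors):
    """Map values in [0, 1] to colors by piecewise-linear interpolation."""
    colors = np.asarray(colors, dtype=np.float32)
    out = np.empty(gray.shape + (3,), dtype=np.float32)
    for c in range(3):  # interpolate each RGB component of the ramp
        out[..., c] = np.interp(gray, positions, colors[:, c])
    return out

# Example ramp: deep blue shadows through warm mids to near-white highlights.
ramp_pos = [0.0, 0.5, 1.0]
ramp_rgb = [(0.0, 0.0, 0.2), (0.8, 0.3, 0.1), (1.0, 0.95, 0.85)]
```

Each of the three light channels gets its own ramp, which is where the look flexibility comes from.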
<br />
<div style="margin-bottom: 0in;">
<br /></div>
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="http://4.bp.blogspot.com/-uGD6wJKvbCU/VcyVhHMbdUI/AAAAAAAAAq0/vQNcR3d846M/s1600/sample_b_before_mapping_prw.jpg" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="300" src="https://4.bp.blogspot.com/-uGD6wJKvbCU/VcyVhHMbdUI/AAAAAAAAAq0/vQNcR3d846M/s400/sample_b_before_mapping_prw.jpg" width="400" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Source image before gradient mapping</td></tr>
</tbody></table>
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="http://4.bp.blogspot.com/-o1Cr6Q9ZrZw/VcyVg4sotlI/AAAAAAAAAqo/-H84yxFPjZg/s1600/sample_b_after_mapping_prw.jpg" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="300" src="https://4.bp.blogspot.com/-o1Cr6Q9ZrZw/VcyVg4sotlI/AAAAAAAAAqo/-H84yxFPjZg/s400/sample_b_after_mapping_prw.jpg" width="400" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Gradient-mapped result</td></tr>
</tbody></table>
<br />
<br />
<span style="font-size: large;">Summing it up </span><br />
<br />
The next cool thing is that the light is naturally additive, and the results of these custom mappings for different channels can be added together with varying intensities, resulting in multiple lighting versions for the same image. <br />
<br />
The qualities of each initial RGB light can be changed drastically by manipulating the gamma, ranges, contrast and intensity of each channel (all of which, in fact, can be achieved by adjusting the target gradient). This also means that light directions with wider coverage should be preferred at the render stage, to provide more flexibility for these adjustments.<br />
<br />
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="http://3.bp.blogspot.com/-U123_wxx2g4/VcyWuRcy2yI/AAAAAAAAArc/LXnfZpsYRIY/s1600/sample_b_results_prw.jpg" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="150" src="https://3.bp.blogspot.com/-U123_wxx2g4/VcyWuRcy2yI/AAAAAAAAArc/LXnfZpsYRIY/s400/sample_b_results_prw.jpg" width="400" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">More results from the same three
source channels</td></tr>
</tbody></table>
<br />
On a more general note, this 3-lights technique allows for simulating something like the Normals output variable for volumes. And conversely, a rendered Normals pass (whose anatomy we are going to discuss next time) can be used for similar lighting manipulations with surfaces. <br />
<br />
The main goal of the provided examples was to illustrate a way of thinking – the possibilities are quite endless in fact.<br />
<br />Denis Kozlovhttp://www.blogger.com/profile/07406391692819839722noreply@blogger.com8tag:blogger.com,1999:blog-5149177666770535781.post-81535539397268947542015-06-29T13:24:00.000+02:002017-02-13T18:28:16.482+01:00Storing masks in RGB channels<table cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="http://2.bp.blogspot.com/-mvty8yJgpsI/VY_VuGwP1BI/AAAAAAAAApo/UkqVuae56l0/s1600/source.jpg" imageanchor="1" style="clear: left; margin-bottom: 1em; margin-left: auto; margin-right: auto;"><img alt="Storing masks in RGB channels" border="0" height="400" src="https://2.bp.blogspot.com/-mvty8yJgpsI/VY_VuGwP1BI/AAAAAAAAApo/UkqVuae56l0/s400/source.jpg" title="Storing masks in RGB channels" width="400" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Base image for the examples in this article</td></tr>
</tbody></table>
<br />
Finally returning to posting the original manuscripts of the articles I've written for 3D World magazine in 2014. This one was first published in issue 186 under the title "The Mighty Multimatte".<br />
<br />
<a name='more'></a><br />
In an earlier piece we looked at <a href="http://www.the-working-man.org/2014/11/pixel-is-not-color-square.html">raster images as data containers</a>, which may be used for storing various supplementary information as well as the pictures themselves. One of the most straightforward uses of these capabilities is rendering masks.<br />
<br />
A lot can be done to an image after it has been rendered; in fact, contemporary compositing packages even allow us to move a good deal of classical shading into post, often offering a much more interactive workflow. But even if you prefer to polish your beauty renders inside the 3D software until they come out needing no extra touches, there can still be urgent feedback from the client or one last little color tweak to apply under time pressure – and compositing becomes a life saver again. <br />
<br />
<br />
<span style="font-size: large;">The perfect matte </span><br />
<br />
However, the success of most compositing operations depends on how many elements you can isolate procedurally (that is, without tracing them manually) – and, no less important, with what precision (which above all means antialiasing). <br />
<br />
What we are looking for is an antialiased matte with a value of exactly 1.0 (white) for the pixels completely occupied by the object of interest, exactly 0 (black) for the rest of the image,* and antialiasing identical to that of the beauty render. <br />
<br />
<span style="font-size: x-small;"><i>*Mask values above one and below zero cause no fewer problems than the gray ones.</i></span><br />
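The footnote's warning is easy to automate: before trusting a matte in comp, verify it actually stays within the legal range. A tiny sketch (the function name and tolerance are illustrative):

```python
# Sanity check: a usable matte must stay within [0, 1].
import numpy as np

def matte_in_range(mask, eps=1e-6):
    """True if every matte value lies within [0, 1] (up to a tolerance)."""
    return bool(mask.min() >= -eps and mask.max() <= 1.0 + eps)
```

Out-of-range values typically sneak in through over-processing (blurs, sharpening, color curves) applied to the matte after rendering.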
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="http://3.bp.blogspot.com/-0MbEkUGFWok/VY_VtiL5rjI/AAAAAAAAAps/oKeGN6vV0a4/s1600/aliasing_prw.jpg" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img alt="Storing masks in RGB channels" border="0" height="187" src="https://3.bp.blogspot.com/-0MbEkUGFWok/VY_VtiL5rjI/AAAAAAAAAps/oKeGN6vV0a4/s400/aliasing_prw.jpg" title="Storing masks in RGB channels" width="400" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;"><style type="text/css">P { margin-bottom: 0.08in; }</style>
<br />
<div style="margin-bottom: 0in;">
<i>Here are the results of a
color-correction through the corresponding mattes. Left-to-right:
with proper antialiasing, aliased, and with an attempt to fix
aliasing through blurring. Note the corrupted edges in the middle
example and dark halos in the right.</i></div>
</td></tr>
</tbody></table>
<br />
<span style="font-size: large;">The power of three </span><br />
<br />
It is easy to notice that all this data requires only one raster channel for storage. Thus a regular RGB image can naturally preserve quality mattes for 3 objects at a time. It only takes applying Constant shaders of pure Red (1,0,0), Green (0,1,0) and Blue (0,0,1) colors to the objects of interest and a black (0,0,0) Constant shader to the rest of the scene. Render layers functionality (implemented in every contemporary 3D package I can think of) comes in very handy here. You might want to turn off slower features like shadows and GI for just the masks element, although typically setting all the shaders in the scene to Constant is already enough for the render engine to optimize itself sufficiently.* <br />
<br />
<span style="font-size: x-small;"><i>*Due to smart sampling, the antialiasing of the matte element might not be exactly the same as in the beauty pass, but this is normally the closest one can practically get. </i></span><br />
<br />
Alternatively, some renderers offer a pre-designed output variable (like MultiMatte in V-Ray) to render masks in a similar way. More channels (like Alpha, ZDepth, Motion Vectors or any arbitrary extra image channels) could of course be used for storing more mattes in the same image file, but typically it is not worth the time and inconvenience to set up first and extract later, compared to simply adding more RGB outputs to isolate more objects. Compositing applications and image editors naturally provide the tools to easily use any of the RGBA channels as a mask for an operation, which is another reason to stick with those. (In Photoshop, for instance, it only takes a Ctrl-click in the Channels panel to turn one into a selection.) <br />
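The compositing-side use amounts to picking one channel of the matte render and blending a correction through it, exactly like the channel-as-selection trick above. A numpy sketch, where the gain-based "correction" and function name are illustrative assumptions:

```python
# Apply a color correction to the beauty render through one matte channel.
import numpy as np

def correct_through_matte(beauty, matte_rgb, channel, gain):
    mask = matte_rgb[..., channel][..., None]   # one channel as the matte
    corrected = beauty * gain                   # the adjustment to apply
    # Linear blend: corrected where the mask is 1, untouched where it is 0;
    # antialiased edge pixels blend proportionally.
    return beauty * (1.0 - mask) + corrected * mask
```

The proportional blend at edge pixels is precisely why matte antialiasing must match the beauty render: any mismatch shows up as halos or jagged fringes.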
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="http://4.bp.blogspot.com/-5Zqwj6TDegM/VY_VtbBd6aI/AAAAAAAAApk/I3D7TQNK3jQ/s1600/channels.jpg" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img alt="Storing masks in RGB channels" border="0" height="400" src="https://4.bp.blogspot.com/-5Zqwj6TDegM/VY_VtbBd6aI/AAAAAAAAApk/I3D7TQNK3jQ/s400/channels.jpg" title="Storing masks in RGB channels" width="400" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;"><style type="text/css">P { margin-bottom: 0.08in; }</style>
<br />
<div style="margin-bottom: 0in;">
<i>An example of 2 objects isolated
into the Red and Blue channels with Constant shaders.</i></div>
</td></tr>
</tbody></table>
<br />
<span style="font-size: large;">What to mask? </span><br />
<br />
Unless we're isolating parts of the scene with a specific purpose in mind, the main guiding question here is: what will most likely require adjustments? Those are either the parts of the scene in doubt, or simply the most important ones. Thus, by default, the foreground/hero object is worthy of its own matte (a channel). Grouping objects into mask channels based on their materials or surface qualities is useful as well, since it allows for adjusting in terms of “highlights on metals” or “the color of liquids”. <br />
<br />
But the most in demand are usually masks separating meaningful objects and their distinct parts, especially those of similar color, since these are tricky to isolate by keying. <br />
<br />
When working on a sequence of animated shots, consistently using the same colors for the same objects from one shot to another becomes a very useful habit. This way, the same compositing setup can be propagated to new shots painlessly. It is generally better to add more MultiMatte outputs to the scene and stay consistent, rather than to try fitting only the masks needed for each shot into one image every time – so that, say, the Green channel does not isolate the character in one shot and a prop in another. <br />
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="http://1.bp.blogspot.com/-sxn01En-_oU/VY_VtZ-TB3I/AAAAAAAAApg/rm7j41QbD8Q/s1600/holes.jpg" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img alt="Storing masks in RGB channels" border="0" height="640" src="https://1.bp.blogspot.com/-sxn01En-_oU/VY_VtZ-TB3I/AAAAAAAAApg/rm7j41QbD8Q/s640/holes.jpg" title="Storing masks in RGB channels" width="425" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;"><style type="text/css">P { margin-bottom: 0.08in; }</style>
<br />
<div style="margin-bottom: 0in;">
<i>When masking both an object and its
parts in the same matte – think in terms of channels. For instance,
if we want to utilize the Green channel in our example for the parts
of the main object, we might want to use yellow (1,1,0) for the
shader color to avoid holes in the Red channel.</i></div>
</td></tr>
</tbody></table>
<br />
<span style="font-size: large;"> The pitfalls </span><br />
<br />
The world is imperfect though, and sometimes in a crunch there is simply no time to render the proper additional elements (or AOVs – Arbitrary Output Variables). That is the time to manipulate the existing data in comp to compensate for what is missing. Individual mattes can be added, subtracted, inverted and intersected to produce new ones. Every AOV can be useful in compositing in one way or another, and any non-uniform pass can provide some masking information to be extracted – it only takes thinking of them as sources of masking data and understanding what exactly is encoded within each particular element (which we are going to touch upon in the following few issues). <br />
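The four operations just mentioned, sketched for float masks in [0, 1]. Clamping after add/subtract guards against the out-of-range values the earlier footnote warns about, and multiplying for intersection keeps soft antialiased edges well-behaved:

```python
# Basic matte algebra for float masks in [0, 1].
import numpy as np

def mask_add(a, b):
    return np.clip(a + b, 0.0, 1.0)       # union of two mattes

def mask_subtract(a, b):
    return np.clip(a - b, 0.0, 1.0)       # remove b's coverage from a

def mask_invert(a):
    return 1.0 - a                        # everything except a

def mask_intersect(a, b):
    return a * b                          # overlap only; soft edges multiply
```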
<br />
And right now let's look at some dangers hidden along the way. The biggest pitfall is corrupting the edges of the matte (either through over-processing in post or through the way it was rendered). 3D applications often offer some fast out-of-the-box way of rendering object IDs (mattes), like assigning each object a random color or a discrete floating point luminance value. Though it might be faster to set up than a proper MultiMatte-like pass, the temptation should generally be avoided. With totally random colors per object, the only way to procedurally separate one mask from another is keying, which will often be limited by the close colors of other objects and thus quite crude. <br />
<br />
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="http://2.bp.blogspot.com/-2l1ZJDO-xGk/VY_Vt6g0KaI/AAAAAAAAAp4/CqpZjEnhXS0/s1600/lum_IDs.jpg" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img alt="Storing masks in RGB channels" border="0" height="400" src="https://2.bp.blogspot.com/-2l1ZJDO-xGk/VY_Vt6g0KaI/AAAAAAAAAp4/CqpZjEnhXS0/s400/lum_IDs.jpg" title="Storing masks in RGB channels" width="400" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;"><style type="text/css">P { margin-bottom: 0.08in; }</style>
<br />
<div style="margin-bottom: 0in;">
<i>The illustration above shows that
you can not isolate one of the mattes stored sequentially by
luminance while preserving the same antialiasing over different
objects.</i></div>
</td></tr>
</tbody></table>
<br />
Even when assigning “extended matte colors” (Red, Green, Blue, Yellow, Cyan, Magenta, Black, White, Alpha) instead of totally random ones – in order to store more mattes in a single image and separate them later with various channel combinations rather than color keying – the quality of the edges is still endangered, although the results are typically much better.* <span style="font-size: x-small;"><i>*This method should not be confused with the aforementioned usage of yellow, which stayed within the “one channel – one object” paradigm. </i></span><br />
<br />
Needless to say, any method of rendering masks/IDs without antialiasing is a no-go. <br />
<br />
<br />
<span style="font-size: large;">Divide and conquer </span><br />
<br />
However, when heading into really heavy post-processing, it often becomes safer to render an object and its background separately. The trick in this case is not to forget the shadows and reflections, which means utilizing matte/shadow/ghost objects and primary (camera) ray visibility functionality, rather than just hiding parts of the scene. Denis Kozlovhttp://www.blogger.com/profile/07406391692819839722noreply@blogger.com12tag:blogger.com,1999:blog-5149177666770535781.post-70092751527471278192015-05-17T20:48:00.000+02:002017-02-13T18:29:13.865+01:00CG|VFX reel 2015<br />
<iframe allowfullscreen="" frameborder="0" height="281" mozallowfullscreen="" src="//player.vimeo.com/video/128073234" webkitallowfullscreen="" width="500"></iframe> <br />
<a href="http://vimeo.com/128073234">CG|VFX reel 2015</a> from <a href="http://vimeo.com/kozlove">Denis Kozlov</a> on <a href="https://vimeo.com/">Vimeo</a>.<br />
<br />
<br />
My reels tend to become <a href="http://www.the-working-man.org/2013/08/tips-for-better-showreel.html">shorter and shorter</a>. Here goes the new one – a generalist's reel <a href="http://www.the-working-man.org/2013/08/cgvfx-showreel-2013.html">again</a>, so I have to take the blame for most of the non-live-action pixels, both CG and compositing. With only a couple of exceptions, the work was done predominantly in Houdini and Fusion. Below follows a breakdown describing my role and approach for each shot.<br />
<br />
<a name='more'></a> <br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="http://1.bp.blogspot.com/-qJebX6uyO6Q/VVjPT0Mnq4I/AAAAAAAAAns/Kvn3D5uciF8/s1600/reel2K15_thumbs_web.0001.jpg" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="225" src="https://1.bp.blogspot.com/-qJebX6uyO6Q/VVjPT0Mnq4I/AAAAAAAAAns/Kvn3D5uciF8/s400/reel2K15_thumbs_web.0001.jpg" width="400" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Direction and most visual elements for this version of the shot. The
main supercell plate created and rendered in Houdini with a mixture of
volumetric modeling techniques and render-time displacements. The rest
of the environment elements including atmospherics, debris and
compositing were created in Fusion. My nod for a little twister spinning
under the cloud goes to Marek Duda. </td></tr>
</tbody></table>
<br />
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="http://2.bp.blogspot.com/-2OqXAwgXDnU/VVjPj9hzkPI/AAAAAAAAAoE/8Edfu0iqM4Q/s1600/reel2K15_thumbs_web.0003.jpg" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="225" src="https://2.bp.blogspot.com/-2OqXAwgXDnU/VVjPj9hzkPI/AAAAAAAAAoE/8Edfu0iqM4Q/s400/reel2K15_thumbs_web.0003.jpg" width="400" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">For this animated matte-paint I brought the output of an ocean simulation (Houdini Ocean Toolkit) from Blender into Fusion for rendering and further processing. The fun part was the shore waves, of course: I almost literally (though quite procedurally) painted the RGB displacement animation for this area. Compositing included creating interactive sand ripples and some volumetric work for the fog. The resulting base setup allows for camera animation. </td></tr>
</tbody></table>
<br />
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="http://1.bp.blogspot.com/-Vzq-PeTL7-c/VVjPV2c8JkI/AAAAAAAAAn0/x97FwFt_W28/s1600/reel2K15_thumbs_web.0005.jpg" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="225" src="https://1.bp.blogspot.com/-Vzq-PeTL7-c/VVjPV2c8JkI/AAAAAAAAAn0/x97FwFt_W28/s400/reel2K15_thumbs_web.0005.jpg" width="400" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">For this piece from a Skoda commercial I've created a water splash and
droplets in Houdini, as well as integrated them into the plate. (All
moving water is CG here.) A lot of additional work has been done on this
shot by my colleagues from DPOST, including Victor Tretyakov and Jiri
Sindelar. </td></tr>
</tbody></table>
<br />
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="http://4.bp.blogspot.com/-iQuIkWRx0d8/VVjPfSAskcI/AAAAAAAAAoA/AIi0MGWNanU/s1600/reel2K15_thumbs_web.0006.jpg" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="225" src="https://4.bp.blogspot.com/-iQuIkWRx0d8/VVjPfSAskcI/AAAAAAAAAoA/AIi0MGWNanU/s400/reel2K15_thumbs_web.0006.jpg" width="400" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Keying and compositing live action plates and elements. </td></tr>
</tbody></table>
<br />
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="http://2.bp.blogspot.com/-15sTQi6YTo4/VVjPYq2iVBI/AAAAAAAAAn8/fWEQCaLzScM/s1600/reel2K15_thumbs_web.0007.jpg" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="225" src="https://2.bp.blogspot.com/-15sTQi6YTo4/VVjPYq2iVBI/AAAAAAAAAn8/fWEQCaLzScM/s400/reel2K15_thumbs_web.0007.jpg" width="400" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">A slightly reworked sequence from a spot for Czech Technical University that I directed in 2014. Most visual elements are my own work, in Houdini and Fusion. More details on the original spot are at <a href="http://www.the-working-man.org/2014/03/ctus-faculty-of-mechanical-engineering.html">http://www.the-working-man.org/2014/03/ctus-faculty-of-mechanical-engineering.html</a> </td></tr>
</tbody></table>
<br />
<br />
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="http://4.bp.blogspot.com/-ywElrJX3_iI/VVjPshyxRWI/AAAAAAAAAoc/yRE7Iy5_Who/s1600/reel2K15_thumbs_web.0008.jpg" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="225" src="https://4.bp.blogspot.com/-ywElrJX3_iI/VVjPshyxRWI/AAAAAAAAAoc/yRE7Iy5_Who/s400/reel2K15_thumbs_web.0008.jpg" width="400" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">A TV commercial demo: particles and compositing in Fusion. Production and a magic coal ball by Denis Kosar. </td></tr>
</tbody></table>
<br />
<br />
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="http://1.bp.blogspot.com/-EFZULRibO-4/VVjPkWeQ5jI/AAAAAAAAAoI/uPMKOvSUyl0/s1600/reel2K15_thumbs_web.0009.jpg" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="225" src="https://1.bp.blogspot.com/-EFZULRibO-4/VVjPkWeQ5jI/AAAAAAAAAoI/uPMKOvSUyl0/s400/reel2K15_thumbs_web.0009.jpg" width="400" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">A little piece utilizing the same particles technique as in the previous shot. This time all visual elements are my own work. </td></tr>
</tbody></table>
<br />
<br />
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="http://1.bp.blogspot.com/-aancjFCsvHw/VVjPlZvBpeI/AAAAAAAAAoM/6rcsLNfBaBU/s1600/reel2K15_thumbs_web.0010.jpg" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="225" src="https://1.bp.blogspot.com/-aancjFCsvHw/VVjPlZvBpeI/AAAAAAAAAoM/6rcsLNfBaBU/s400/reel2K15_thumbs_web.0010.jpg" width="400" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Direction and all visual elements (well, except probably for the kiwi texture) in my usual Houdini/Fusion combination. The strawberry model is completely procedural; more examples of procedural asset generation are present in the previous posts of this blog.</td></tr>
</tbody></table>
<br />
<br />
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="http://3.bp.blogspot.com/-iCE1nWjf5MA/VVjPmTVWDZI/AAAAAAAAAoQ/B5GAok_KCQY/s1600/reel2K15_thumbs_web.0011.jpg" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="225" src="https://3.bp.blogspot.com/-iCE1nWjf5MA/VVjPmTVWDZI/AAAAAAAAAoQ/B5GAok_KCQY/s400/reel2K15_thumbs_web.0011.jpg" width="400" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">For this commercial, aside from VFX supervision, I created the flamenco sequence graphics, two pieces of which are presented here. Assembling a procedural setup in Fusion allowed for interactive prototyping with the director on site. The setup involved mockup 3D geometry rendered from the top view into a depth map on the fly, which in turn (after direct 2D adjustments) fed a displacement map for the ground grid with individual boxes instanced over it. After laying out the background designs for the whole sequence, the setup was quite seamlessly transferred into 3ds Max (using displacement maps as an intermediate), rendered in Mental Ray, and composited with the live action dancer plate in Fusion again. Matte pulling was mostly done by Jiri Sindelar; the rest of the digital work on the sequence is pretty much mine. </td></tr>
</tbody></table>
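The depth-driven instancing trick in the caption above (a top-view depth map displacing a grid with a box per cell) can be sketched in a few lines. This is a hypothetical NumPy reconstruction of the idea, not the original Fusion setup; all names and sizes are illustrative.

```python
import numpy as np

def boxes_from_depth(depth, box_size=1.0, height_scale=5.0):
    """Instance one box per grid cell, driven by a top-view depth map.

    depth: 2D array in [0, 1], as rendered from an overhead camera.
    Returns (N, 4) rows of (x, y, height, size) for an instancer.
    """
    rows, cols = depth.shape
    xs, ys = np.meshgrid(np.arange(cols), np.arange(rows))
    heights = depth * height_scale
    return np.stack([xs.ravel() * box_size,
                     ys.ravel() * box_size,
                     heights.ravel(),
                     np.full(depth.size, box_size)], axis=1)

# A tiny 2x2 "depth map": one raised cell.
demo = boxes_from_depth(np.array([[0.0, 1.0],
                                  [0.0, 0.0]]), height_scale=5.0)
```

Because the depth map is just an image, any 2D paint or adjustment pass on it flows straight through to the instanced geometry, which is what made on-site prototyping with the director practical.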
<br />
<br />
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="http://4.bp.blogspot.com/-aUxkaRDjXIw/VVjPne9RWbI/AAAAAAAAAoU/K9-kmlSweFI/s1600/reel2K15_thumbs_web.0012.jpg" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="225" src="https://4.bp.blogspot.com/-aUxkaRDjXIw/VVjPne9RWbI/AAAAAAAAAoU/K9-kmlSweFI/s400/reel2K15_thumbs_web.0012.jpg" width="400" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Set extension for a scene from Giuseppe Tornatore's Baaria. I used SynthEyes for 3D tracking and XSI for lighting/rendering of the street model (provided by the client). Composited in Fusion.</td></tr>
</tbody></table>
<br />
<br />
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="http://4.bp.blogspot.com/-M9NQSJmYzI4/VVjPoOXUsGI/AAAAAAAAAoY/WINRFjUwTZg/s1600/reel2K15_thumbs_web.0013.jpg" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="225" src="https://4.bp.blogspot.com/-M9NQSJmYzI4/VVjPoOXUsGI/AAAAAAAAAoY/WINRFjUwTZg/s400/reel2K15_thumbs_web.0013.jpg" width="400" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Keying/compositing for another shot from the same film. </td></tr>
</tbody></table>
<br />
<br />
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="http://3.bp.blogspot.com/-yDHMp1xdvl8/VVjP5K-SiNI/AAAAAAAAAok/LdYsNjrZ42w/s1600/reel2K15_thumbs_web.0014.jpg" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="225" src="https://3.bp.blogspot.com/-yDHMp1xdvl8/VVjP5K-SiNI/AAAAAAAAAok/LdYsNjrZ42w/s400/reel2K15_thumbs_web.0014.jpg" width="400" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">And yet another compositing shot from a few years back.</td></tr>
</tbody></table>
<br />
<br />
Last but not least, the music: <br />
<br />
"Black Vortex" Kevin MacLeod (incompetech.com) <br />
Licensed under Creative Commons: By Attribution 3.0<br />
<a href="http://creativecommons.org/licenses/by/3.0/">http://creativecommons.org/licenses/by/3.0/</a> <br />
<br />
<br />
And if you've made it this far, you might also find interesting <a href="http://www.the-working-man.org/2015/04/on-wings-tails-and-procedural-modeling.html">my previous post on procedural modeling</a>. And of course, for those interested in cooperation, the email link is at the top of the page.<br />
<br />
On Wings, Tails and Procedural Modeling (2015-04-23)<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="http://4.bp.blogspot.com/-rTSerSCOqlQ/VTgZ5nyHmRI/AAAAAAAAAkI/EPL97vbeQ1U/s1600/AERO_DEMO_v041.0004.jpg" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img alt="Project Aero: Procedural Aircraft Design Toolkit for SideFX Houdini" border="0" height="225" src="https://4.bp.blogspot.com/-rTSerSCOqlQ/VTgZ5nyHmRI/AAAAAAAAAkI/EPL97vbeQ1U/s1600/AERO_DEMO_v041.0004.jpg" title="Project Aero: Procedural Aircraft Design Toolkit for SideFX Houdini" width="400" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Project Aero: Procedural Aircraft Design Toolkit for SideFX Houdini</td></tr>
</tbody></table>
<br />
I find Houdini a very powerful tool for 3D modeling. In fact, this aspect largely motivated my choice of it as a primary 3D application. And by procedural modeling I mean not just fractal mountains, instanced cities and Voronoi-fractured debris (all of which can actually be made to look quite fascinating), but efficient creation of 3D assets in general. Any assets.<br />
<br />
<a name='more'></a><br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://3.bp.blogspot.com/-9uXvpS8XQmQ/VTgaG_WF-hI/AAAAAAAAAmw/pwh2FMeoZaQ/s1600/AERO_DEMO_v041.0029.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img alt="Project Aero: Procedural Aircraft Design Toolkit for SideFX Houdini" border="0" height="225" src="https://3.bp.blogspot.com/-9uXvpS8XQmQ/VTgaG_WF-hI/AAAAAAAAAmw/pwh2FMeoZaQ/s1600/AERO_DEMO_v041.0029.jpg" title="Project Aero: Procedural Aircraft Design Toolkit for SideFX Houdini" width="400" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://3.bp.blogspot.com/-boQwCI35Gw0/VTgZ49qTw0I/AAAAAAAAAj8/Q5yCQpCDhnQ/s1600/AERO_DEMO_v041.0001.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img alt="Project Aero: Procedural Aircraft Design Toolkit for SideFX Houdini" border="0" height="225" src="https://3.bp.blogspot.com/-boQwCI35Gw0/VTgZ49qTw0I/AAAAAAAAAj8/Q5yCQpCDhnQ/s1600/AERO_DEMO_v041.0001.jpg" title="Project Aero: Procedural Aircraft Design Toolkit for SideFX Houdini" width="400" /></a></div>
<br />
So lately I have taken some of my not-so-free time (a bit over three months, to be more accurate) to develop (or rather, prototype) a toolkit for procedural aircraft creation, which I am happy to showcase today. Please welcome Project Aero.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://1.bp.blogspot.com/-yU_1T5XAsVI/VTgaBwemOkI/AAAAAAAAAlg/wT47A8E-x-w/s1600/AERO_DEMO_v041.0006.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img alt="Project Aero: Procedural Aircraft Design Toolkit for SideFX Houdini" border="0" height="225" src="https://1.bp.blogspot.com/-yU_1T5XAsVI/VTgaBwemOkI/AAAAAAAAAlg/wT47A8E-x-w/s1600/AERO_DEMO_v041.0006.jpg" title="Project Aero: Procedural Aircraft Design Toolkit for SideFX Houdini" width="400" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
</div>
The whole point in a nutshell: once the tools were finished, it took me roughly 4-5 hours to create each of the demonstrated airplane models, from scratch to full detail. <br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://4.bp.blogspot.com/-vaQwWmNLW60/VTgZ9HZXHbI/AAAAAAAAAkw/SYznjnHHM8E/s1600/AERO_DEMO_v041.0011.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img alt="Project Aero: Procedural Aircraft Design Toolkit for SideFX Houdini" border="0" height="225" src="https://4.bp.blogspot.com/-vaQwWmNLW60/VTgZ9HZXHbI/AAAAAAAAAkw/SYznjnHHM8E/s1600/AERO_DEMO_v041.0011.jpg" title="Project Aero: Procedural Aircraft Design Toolkit for SideFX Houdini" width="400" /></a></div>
<br />
And most of those 4-5 hours were spent on design decisions, not on smoothing edge loops, laying out UVs, or drawing countless layers of rivets and scratches; those kinds of things were automated during the months of toolset development. So technically, a new model could be created in minutes: the time it takes to lay down a few nodes (see the screengrabs below) and set up the basic parameters.<br />
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="http://1.bp.blogspot.com/-W0wzgEUcINA/VTgaDWzU-6I/AAAAAAAAAl4/bA3AttcxRaE/s1600/AERO_DEMO_v041.0022.jpg" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img alt="Project Aero: Procedural Aircraft Design Toolkit for SideFX Houdini" border="0" height="225" src="https://1.bp.blogspot.com/-W0wzgEUcINA/VTgaDWzU-6I/AAAAAAAAAl4/bA3AttcxRaE/s1600/AERO_DEMO_v041.0022.jpg" title="Project Aero: Procedural Aircraft Design Toolkit for SideFX Houdini" width="400" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Detailing sample</td></tr>
</tbody></table>
<br />
The resulting models are completely procedural: every pixel and polygon is generated by the system from scratch, tailored to the particular vehicle design; textures are generated either at render time or from the scene geometry at a preset resolution, with no photo-textures or other disk-based samples used. Bolts and rivets are randomly turned and micro-shifted on an individual basis. The generator tries to preserve consistent detail scale and proportions across any curvature and size. Controls are designed in a hierarchical fashion, so that the user can work from big, conceptual adjustments (like main contours, forms and surface styling) down to tuning an individual element's bolting when required. There are more perks inside, and of course standard Houdini tools can be used at any stage for any manual design adjustments.<br />
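The per-rivet randomization described above (each bolt individually turned and micro-shifted, yet stable between regenerations) boils down to seeded pseudo-randomness. A toy sketch with illustrative names and magnitudes, not the toolkit's actual code:

```python
import random

def jitter_rivets(positions, seed=42, max_shift=0.002, max_turn=360.0):
    """Give each rivet an individual micro-shift and rotation.

    A fixed seed keeps the 'randomness' repeatable, which is what makes
    procedural re-runs stable: regenerating the model reproduces the
    exact same imperfections.
    """
    rng = random.Random(seed)
    out = []
    for (x, y, z) in positions:
        dx = rng.uniform(-max_shift, max_shift)
        dy = rng.uniform(-max_shift, max_shift)
        angle = rng.uniform(0.0, max_turn)
        out.append((x + dx, y + dy, z, angle))
    return out

a = jitter_rivets([(0, 0, 0), (1, 0, 0)])
b = jitter_rivets([(0, 0, 0), (1, 0, 0)])
assert a == b  # same seed -> identical result on regeneration
```

Changing only the seed is then all it takes to produce a new individual copy of the same design.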
<div class="separator" style="clear: both; text-align: center;">
</div>
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="http://4.bp.blogspot.com/-oMlNwNknpSU/VTgZ-VKfBpI/AAAAAAAAAlA/zDPe2yIs37s/s1600/AERO_DEMO_v041.0014.jpg" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img alt="Project Aero: Procedural Aircraft Design Toolkit for SideFX Houdini" border="0" height="225" src="https://4.bp.blogspot.com/-oMlNwNknpSU/VTgZ-VKfBpI/AAAAAAAAAlA/zDPe2yIs37s/s1600/AERO_DEMO_v041.0014.jpg" title="Project Aero: Procedural Aircraft Design Toolkit for SideFX Houdini" width="400" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">A new design can be created in a few hours from scratch</td></tr>
</tbody></table>
<br />
The skeleton of the system, and of each design, is a set of interactively placed profiles which are skinned into NURBS surfaces with flexible shape controls. The resulting design is then fed into the detailing nodes, which create the necessary geometry around the surfaces and generate textures using a variety of techniques. Final models are rendered as subdivision surfaces with displacement; bolts and rivets (of which an airplane has quite a few) are stored as packed primitives.<br />
<br />
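The skinning idea (placed profiles lofted into a surface) can be illustrated with a minimal linear loft between two circular cross-sections. Real NURBS skinning interpolates smoothly rather than linearly, but the structure is the same; all names here are hypothetical:

```python
import math

def circle_profile(radius, n=8):
    """A cross-section: n points on a circle of the given radius."""
    return [(radius * math.cos(2 * math.pi * i / n),
             radius * math.sin(2 * math.pi * i / n)) for i in range(n)]

def loft(profile_a, profile_b, z_a, z_b, steps=5):
    """Blend two cross-sections along the z axis into a grid of points."""
    rows = []
    for s in range(steps):
        t = s / (steps - 1)
        z = z_a + t * (z_b - z_a)
        rows.append([((1 - t) * ax + t * bx, (1 - t) * ay + t * by, z)
                     for (ax, ay), (bx, by) in zip(profile_a, profile_b)])
    return rows

# Fuselage-like taper: a wide circle lofted into a narrow one.
surface = loft(circle_profile(2.0), circle_profile(0.5), 0.0, 10.0)
```

Editing one profile and re-running the loft regenerates the whole surface, which is why profile edits propagate through a finished design so cheaply.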
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="http://2.bp.blogspot.com/-dnURULIvFmY/VTgZylsnf4I/AAAAAAAAAjg/owXXw2aYB-g/s1600/AERO_screen.0001.jpg" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img alt="Project Aero: Procedural Aircraft Design Toolkit for SideFX Houdini" border="0" height="96" src="https://2.bp.blogspot.com/-dnURULIvFmY/VTgZylsnf4I/AAAAAAAAAjg/owXXw2aYB-g/s1600/AERO_screen.0001.jpg" title="Project Aero: Procedural Aircraft Design Toolkit for SideFX Houdini" width="400" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Profiles form the backbone of each design</td></tr>
</tbody></table>
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="http://2.bp.blogspot.com/-a74Ga5V6LT4/VTgZzk-2veI/AAAAAAAAAjo/Ub95soAqy0o/s1600/AERO_screen.0002.jpg" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img alt="Project Aero: Procedural Aircraft Design Toolkit for SideFX Houdini" border="0" height="96" src="https://2.bp.blogspot.com/-a74Ga5V6LT4/VTgZzk-2veI/AAAAAAAAAjo/Ub95soAqy0o/s1600/AERO_screen.0002.jpg" title="Project Aero: Procedural Aircraft Design Toolkit for SideFX Houdini" width="400" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Only a few more nodes are required for finalization</td></tr>
</tbody></table>
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="http://1.bp.blogspot.com/-Gl3smtVorEI/VTgZ0COPZVI/AAAAAAAAAjs/vBNlu7VrT5s/s1600/AERO_screen.0003.jpg" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img alt="Project Aero: Procedural Aircraft Design Toolkit for SideFX Houdini" border="0" height="96" src="https://1.bp.blogspot.com/-Gl3smtVorEI/VTgZ0COPZVI/AAAAAAAAAjs/vBNlu7VrT5s/s1600/AERO_screen.0003.jpg" title="Project Aero: Procedural Aircraft Design Toolkit for SideFX Houdini" width="400" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Texture preview mode</td></tr>
</tbody></table>
<br />
<br />
By no means is Project Aero complete or flawless, but hopefully it takes
the concept far enough to illustrate the benefits and possibilities of
procedural 3D asset creation. Getting another individual variation of
the same model is a matter of seconds. Automatic non-identical symmetry
and procedural surface aging, controlled by a few high-level sliders, also
help escape “the army of clones” issue that 3D models sometimes
suffer from. Deeper variations like repainting or restyling the skin and
panels are done in a breeze. The set of detail modules is easily
extendable, and parts of an existing design can be swapped and reused.
Depending on the toolset's design objectives, generated models could be
automatically prepared for integration into a particular pipeline (e.g.
textures baked out, LODs created automatically, parts named in a chosen
convention, exhausts and moving parts marked with special dummy locators
or attributes, etc.). <span id="goog_1430824468"></span><span id="goog_1430824469"></span><br />
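Non-identical symmetry, mentioned above, amounts to mirroring the base shape while drawing detail noise from a different seed per side, so the halves match in form but not rivet-for-rivet. A deliberately tiny sketch of the idea (all names hypothetical):

```python
import random

def mirror_with_variation(points, seed_left=1, seed_right=2, amount=0.01):
    """Mirror points across x=0, perturbing each side with its own seed."""
    def perturb(pts, seed):
        rng = random.Random(seed)
        return [(x + rng.uniform(-amount, amount), y, z)
                for (x, y, z) in pts]
    left = perturb(points, seed_left)
    right = perturb([(-x, y, z) for (x, y, z) in points], seed_right)
    return left + right

half = [(1.0, 0.0, 0.0), (2.0, 0.5, 0.0)]
full = mirror_with_variation(half)
```

The perturbation amount plays the role of a high-level "imperfection" slider: zero gives exact mirror symmetry, small values give the hand-built look.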
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="http://3.bp.blogspot.com/-6BAt_oiydf0/VTgZ47u8kbI/AAAAAAAAAkE/WOA10j4YgPI/s1600/AERO_DEMO_v041.0003.jpg" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img alt="Project Aero: Procedural Aircraft Design Toolkit for SideFX Houdini" border="0" height="225" src="https://3.bp.blogspot.com/-6BAt_oiydf0/VTgZ47u8kbI/AAAAAAAAAkE/WOA10j4YgPI/s1600/AERO_DEMO_v041.0003.jpg" title="Project Aero: Procedural Aircraft Design Toolkit for SideFX Houdini" width="400" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">A new unique copy of an existing model is literally one button press away</td></tr>
</tbody></table>
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="http://2.bp.blogspot.com/-t12L07fgTn8/VTgZ6HFrevI/AAAAAAAAAkM/0lyX9aFLxRk/s1600/AERO_DEMO_v041.0005.jpg" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img alt="Project Aero: Procedural Aircraft Design Toolkit for SideFX Houdini" border="0" height="225" src="https://2.bp.blogspot.com/-t12L07fgTn8/VTgZ6HFrevI/AAAAAAAAAkM/0lyX9aFLxRk/s1600/AERO_DEMO_v041.0005.jpg" title="Project Aero: Procedural Aircraft Design Toolkit for SideFX Houdini" width="400" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Non-identical symmetry</td></tr>
</tbody></table>
<br />
And of course the workflow is non-linear from both the design and the development perspectives. The first means that you can always go back and change something at an earlier stage of work without having to redo the later steps (for example, changing the wing position on the hull of a finished model makes all the related surfaces recalculate to accommodate the new shape). The second refers to the ability to use the toolset while it is still being developed: in a production environment an artist wouldn't have to wait for a TD to finish his work, since the tools would update in parallel, automatically adding new features to the designs already being worked on.<br />
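The first kind of non-linearity (upstream edits propagating to finished downstream results) is essentially dirty-flag propagation through a dependency graph. A minimal sketch of the mechanism, not Houdini's actual cooking engine:

```python
class Node:
    """A toy procedural-network node: caches its result and recooks
    only when something upstream has changed."""
    def __init__(self, func, *inputs):
        self.func = func
        self.inputs = list(inputs)
        self.outputs = []
        self.cache = None
        self.dirty = True
        for node in self.inputs:
            node.outputs.append(self)

    def invalidate(self):
        """Mark this node and everything downstream as needing a recook."""
        self.dirty = True
        for node in self.outputs:
            node.invalidate()

    def evaluate(self):
        if self.dirty:
            self.cache = self.func(*[n.evaluate() for n in self.inputs])
            self.dirty = False
        return self.cache

class Param(Node):
    """A leaf value; changing it invalidates dependents automatically."""
    def __init__(self, value):
        super().__init__(lambda: value)
    def set(self, value):
        self.func = lambda: value
        self.invalidate()

# A wing-position parameter feeding a hypothetical downstream node.
wing_x = Param(3.0)
fillet = Node(lambda x: f"fillet at x={x}", wing_x)
first = fillet.evaluate()
wing_x.set(4.5)             # upstream change marks downstream dirty...
second = fillet.evaluate()  # ...and only dirty nodes recook
```

This is why going back to an early design stage never forces redoing the later steps by hand: the later steps are recomputed, not re-authored.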
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://1.bp.blogspot.com/-DQTAwKQ_7Ko/VTgZ4j3KO6I/AAAAAAAAAj4/N5p9Guo_3NI/s1600/AERO_DEMO_v041.0002.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img alt="Project Aero: Procedural Aircraft Design Toolkit for SideFX Houdini" border="0" height="225" src="https://1.bp.blogspot.com/-DQTAwKQ_7Ko/VTgZ4j3KO6I/AAAAAAAAAj4/N5p9Guo_3NI/s1600/AERO_DEMO_v041.0002.jpg" title="Project Aero: Procedural Aircraft Design Toolkit for SideFX Houdini" width="400" /></a></div>
<br />
Hopefully there is no need to say that the approach used in this demonstration is only one of many, each fitting some objectives better than others. I might touch upon the topic later, given available time and/or public interest; meanwhile, those interested in procedural cooperation are more than welcome to email me (link at the top of the page). <br />
<br />
Keep having fun! <br />
<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://3.bp.blogspot.com/-5Kd0TPHB100/VTgaE4nll5I/AAAAAAAAAmU/io7-0azbwRY/s1600/AERO_DEMO_v041.0025.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img alt="Project Aero: Procedural Aircraft Design Toolkit for SideFX Houdini" border="0" height="225" src="https://3.bp.blogspot.com/-5Kd0TPHB100/VTgaE4nll5I/AAAAAAAAAmU/io7-0azbwRY/s1600/AERO_DEMO_v041.0025.jpg" title="Project Aero: Procedural Aircraft Design Toolkit for SideFX Houdini" width="400" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://2.bp.blogspot.com/-YP08bfl5Hxs/VTgZ94XWLWI/AAAAAAAAAk4/TovuUyLYpJs/s1600/AERO_DEMO_v041.0013.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img alt="Project Aero: Procedural Aircraft Design Toolkit for SideFX Houdini" border="0" height="225" src="https://2.bp.blogspot.com/-YP08bfl5Hxs/VTgZ94XWLWI/AAAAAAAAAk4/TovuUyLYpJs/s1600/AERO_DEMO_v041.0013.jpg" title="Project Aero: Procedural Aircraft Design Toolkit for SideFX Houdini" width="400" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://3.bp.blogspot.com/-uX6TCmwcEEI/VTgZ_ef_I5I/AAAAAAAAAlI/9p5ucj4amc4/s1600/AERO_DEMO_v041.0016.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img alt="Project Aero: Procedural Aircraft Design Toolkit for SideFX Houdini" border="0" height="225" src="https://3.bp.blogspot.com/-uX6TCmwcEEI/VTgZ_ef_I5I/AAAAAAAAAlI/9p5ucj4amc4/s1600/AERO_DEMO_v041.0016.jpg" title="Project Aero: Procedural Aircraft Design Toolkit for SideFX Houdini" width="400" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
</div>
<div class="separator" style="clear: both; text-align: center;">
</div>
<div class="separator" style="clear: both; text-align: center;">
<a href="http://3.bp.blogspot.com/--mvh5KsgplI/VTgZ-oUoToI/AAAAAAAAAlE/3L5_s1t0_vA/s1600/AERO_DEMO_v041.0015.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img alt="Project Aero: Procedural Aircraft Design Toolkit for SideFX Houdini" border="0" height="225" src="https://3.bp.blogspot.com/--mvh5KsgplI/VTgZ-oUoToI/AAAAAAAAAlE/3L5_s1t0_vA/s1600/AERO_DEMO_v041.0015.jpg" title="Project Aero: Procedural Aircraft Design Toolkit for SideFX Houdini" width="400" /></a></div>
<div class="separator" style="clear: both; text-align: center;">
<a href="http://1.bp.blogspot.com/-6sJ9EZybpck/VTgZ9QUUjBI/AAAAAAAAAk0/yn2qvmMvsfQ/s1600/AERO_DEMO_v041.0012.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img alt="Project Aero: Procedural Aircraft Design Toolkit for SideFX Houdini" border="0" height="225" src="https://1.bp.blogspot.com/-6sJ9EZybpck/VTgZ9QUUjBI/AAAAAAAAAk0/yn2qvmMvsfQ/s1600/AERO_DEMO_v041.0012.jpg" title="Project Aero: Procedural Aircraft Design Toolkit for SideFX Houdini" width="400" /></a></div>
<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://1.bp.blogspot.com/-ByBimqr34UY/VTgaGn2zA8I/AAAAAAAAAms/NoYYz0lc-tk/s1600/AERO_DEMO_v041.0028.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img alt="Project Aero: Procedural Aircraft Design Toolkit for SideFX Houdini" border="0" height="225" src="https://1.bp.blogspot.com/-ByBimqr34UY/VTgaGn2zA8I/AAAAAAAAAms/NoYYz0lc-tk/s1600/AERO_DEMO_v041.0028.jpg" title="Project Aero: Procedural Aircraft Design Toolkit for SideFX Houdini" width="400" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://4.bp.blogspot.com/-kHra_ghnswQ/VTgaF7LBpgI/AAAAAAAAAmk/CrpLXVE8vWI/s1600/AERO_DEMO_v041.0027.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img alt="Project Aero: Procedural Aircraft Design Toolkit for SideFX Houdini" border="0" height="225" src="https://4.bp.blogspot.com/-kHra_ghnswQ/VTgaF7LBpgI/AAAAAAAAAmk/CrpLXVE8vWI/s1600/AERO_DEMO_v041.0027.jpg" title="Project Aero: Procedural Aircraft Design Toolkit for SideFX Houdini" width="400" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://3.bp.blogspot.com/-oSjsvywwc-Q/VTgaFe0TBdI/AAAAAAAAAmY/YPXslvuqLHE/s1600/AERO_DEMO_v041.0026.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img alt="Project Aero: Procedural Aircraft Design Toolkit for SideFX Houdini" border="0" height="225" src="https://3.bp.blogspot.com/-oSjsvywwc-Q/VTgaFe0TBdI/AAAAAAAAAmY/YPXslvuqLHE/s1600/AERO_DEMO_v041.0026.jpg" title="Project Aero: Procedural Aircraft Design Toolkit for SideFX Houdini" width="400" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://4.bp.blogspot.com/-JZe2vJsK_Ns/VTgaEmzkGjI/AAAAAAAAAmM/_X4LglPeEG4/s1600/AERO_DEMO_v041.0024.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img alt="Project Aero: Procedural Aircraft Design Toolkit for SideFX Houdini" border="0" height="225" src="https://4.bp.blogspot.com/-JZe2vJsK_Ns/VTgaEmzkGjI/AAAAAAAAAmM/_X4LglPeEG4/s1600/AERO_DEMO_v041.0024.jpg" title="Project Aero: Procedural Aircraft Design Toolkit for SideFX Houdini" width="400" /></a></div>
<div class="separator" style="clear: both; text-align: center;">
<a href="http://2.bp.blogspot.com/-fDJq9bfjjqc/VTgZ8bdKCOI/AAAAAAAAAko/pU_YzN-47WM/s1600/AERO_DEMO_v041.0010.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img alt="Project Aero: Procedural Aircraft Design Toolkit for SideFX Houdini" border="0" height="225" src="https://2.bp.blogspot.com/-fDJq9bfjjqc/VTgZ8bdKCOI/AAAAAAAAAko/pU_YzN-47WM/s1600/AERO_DEMO_v041.0010.jpg" title="Project Aero: Procedural Aircraft Design Toolkit for SideFX Houdini" width="400" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://1.bp.blogspot.com/-NR3fxx3h0UU/VTgZ7UwFDKI/AAAAAAAAAkc/lzOdv27wicg/s1600/AERO_DEMO_v041.0008.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img alt="Project Aero: Procedural Aircraft Design Toolkit for SideFX Houdini" border="0" height="225" src="https://1.bp.blogspot.com/-NR3fxx3h0UU/VTgZ7UwFDKI/AAAAAAAAAkc/lzOdv27wicg/s1600/AERO_DEMO_v041.0008.jpg" title="Project Aero: Procedural Aircraft Design Toolkit for SideFX Houdini" width="400" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://4.bp.blogspot.com/-It-VO8uTs8c/VTgZ73s5jFI/AAAAAAAAAkk/lbfD3qmbf0I/s1600/AERO_DEMO_v041.0009.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img alt="Project Aero: Procedural Aircraft Design Toolkit for SideFX Houdini" border="0" height="225" src="https://4.bp.blogspot.com/-It-VO8uTs8c/VTgZ73s5jFI/AAAAAAAAAkk/lbfD3qmbf0I/s1600/AERO_DEMO_v041.0009.jpg" title="Project Aero: Procedural Aircraft Design Toolkit for SideFX Houdini" width="400" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://3.bp.blogspot.com/-GIaM-KwsS68/VTgaAuF1RrI/AAAAAAAAAlU/Aw3fVR46iLo/s1600/AERO_DEMO_v041.0018.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img alt="Project Aero: Procedural Aircraft Design Toolkit for SideFX Houdini" border="0" height="225" src="https://3.bp.blogspot.com/-GIaM-KwsS68/VTgaAuF1RrI/AAAAAAAAAlU/Aw3fVR46iLo/s1600/AERO_DEMO_v041.0018.jpg" title="Project Aero: Procedural Aircraft Design Toolkit for SideFX Houdini" width="400" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://1.bp.blogspot.com/-eo3SxUwn2-w/VTgaACGa2bI/AAAAAAAAAlQ/wqheK9WothY/s1600/AERO_DEMO_v041.0017.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img alt="Project Aero: Procedural Aircraft Design Toolkit for SideFX Houdini" border="0" height="225" src="https://1.bp.blogspot.com/-eo3SxUwn2-w/VTgaACGa2bI/AAAAAAAAAlQ/wqheK9WothY/s1600/AERO_DEMO_v041.0017.jpg" title="Project Aero: Procedural Aircraft Design Toolkit for SideFX Houdini" width="400" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://3.bp.blogspot.com/-7LuYgG2LCGk/VTgaBn4RwZI/AAAAAAAAAlc/G6wM4FkfocA/s1600/AERO_DEMO_v041.0020.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img alt="Project Aero: Procedural Aircraft Design Toolkit for SideFX Houdini" border="0" height="225" src="https://3.bp.blogspot.com/-7LuYgG2LCGk/VTgaBn4RwZI/AAAAAAAAAlc/G6wM4FkfocA/s1600/AERO_DEMO_v041.0020.jpg" title="Project Aero: Procedural Aircraft Design Toolkit for SideFX Houdini" width="400" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://2.bp.blogspot.com/-FYhU6HNNGvo/VTgaCnQoqnI/AAAAAAAAAls/fab4O6DzyPQ/s1600/AERO_DEMO_v041.0021.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img alt="Project Aero: Procedural Aircraft Design Toolkit for SideFX Houdini" border="0" height="225" src="https://2.bp.blogspot.com/-FYhU6HNNGvo/VTgaCnQoqnI/AAAAAAAAAls/fab4O6DzyPQ/s1600/AERO_DEMO_v041.0021.jpg" title="Project Aero: Procedural Aircraft Design Toolkit for SideFX Houdini" width="400" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://4.bp.blogspot.com/-zNmKfh1s75M/VTgZ60QqKLI/AAAAAAAAAkU/7slUW6nqq08/s1600/AERO_DEMO_v041.0007.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img alt="Project Aero: Procedural Aircraft Design Toolkit for SideFX Houdini" border="0" height="225" src="https://4.bp.blogspot.com/-zNmKfh1s75M/VTgZ60QqKLI/AAAAAAAAAkU/7slUW6nqq08/s1600/AERO_DEMO_v041.0007.jpg" title="Project Aero: Procedural Aircraft Design Toolkit for SideFX Houdini" width="400" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://4.bp.blogspot.com/-HHu1p8yCLOg/VTgaD_H8VUI/AAAAAAAAAl8/jAT_qCYX3q8/s1600/AERO_DEMO_v041.0023.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img alt="Project Aero: Procedural Aircraft Design Toolkit for SideFX Houdini" border="0" height="225" src="https://4.bp.blogspot.com/-HHu1p8yCLOg/VTgaD_H8VUI/AAAAAAAAAl8/jAT_qCYX3q8/s1600/AERO_DEMO_v041.0023.jpg" title="Project Aero: Procedural Aircraft Design Toolkit for SideFX Houdini" width="400" /></a></div>
<br />
<span id="goog_1430824535"></span><span id="goog_1430824536"></span><br />
<br />Evaluating a Particle System: checklist (2015-02-08)<br /><br />Below is my original manuscript of what was first published as a two-part article in issues 183 and 184 of 3D World magazine. Rougher English and a few more pictures are included, plus a good deal of techniques and approaches squeezed between the lines.<br />
<br />
<style type="text/css">P { margin-bottom: 0.08in; }</style>
<br />
<div style="margin-bottom: 0in;">
<span style="font-size: large;">Part 1</span></div>
<div style="margin-bottom: 0in;">
<br /></div>
<div style="margin-bottom: 0in;">
</div>
<div style="margin-bottom: 0in;">
Most 3D and compositing packages offer some sort of particle toolset. They
usually come with a nice set of examples and demos showing all the stunning
things you can do within the product. However, the way to really judge a
system's capabilities is often not by the things the software can do, but
by the things it cannot. And since practice shows it is not so easy to
think of everything one might miss in real production all at once, I have
put together this checklist.
</div>
<div style="margin-bottom: 0in;">
<br /></div>
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="http://2.bp.blogspot.com/-IDpzlVVFH3M/VNdckMJtBwI/AAAAAAAAAiI/IJOPgxPllw4/s1600/particles_galaxy_prw.jpg" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="215" src="https://2.bp.blogspot.com/-IDpzlVVFH3M/VNdckMJtBwI/AAAAAAAAAiI/IJOPgxPllw4/s1600/particles_galaxy_prw.jpg" width="400" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">
<br />
<div style="margin-bottom: 0in;">
Flexible enough software allows for
quite complex effects </div>
<div style="margin-bottom: 0in;">
like this spiral galaxy, created with particles
alone.</div>
</td></tr>
</tbody></table>
<div style="margin-bottom: 0in;">
<br />
<a name='more'></a></div>
<div style="margin-bottom: 0in;">
Even if some possibilities are not obvious out of the box, you can usually
make particles do pretty much everything with the aid of scripting or some
application-specific techniques. It often requires adapting your way of
thinking to a particular tool's paradigms, and I personally find
acquaintance with different existing products a big help here. So even if
you have already made up your mind on a specific program, you might still
find the following list useful as a set of exercises: figuring out how to
achieve the described functionality within your app.
</div>
<div style="margin-bottom: 0in;">
<br /></div>
<div style="margin-bottom: 0in;">
<br /></div>
<div style="margin-bottom: 0in;">
<span style="font-size: medium;">Overall
concerns</span></div>
<div style="margin-bottom: 0in;">
<br /></div>
<div style="margin-bottom: 0in;">
The first question is whether it is a two- or three-dimensional system, or
whether it allows for both modes. A 2D-only solution is going to be limited
by definition; however, it can possess some unique speed optimizations and
convenient features like per-particle blur control, extended support for
blending modes, and the ability to control particles directly with data
from images. The ability to dial in existing 3D camera motion is quite
important in a real production environment.</div>
<div style="margin-bottom: 0in;">
<br /></div>
<div style="margin-bottom: 0in;">
In general, it is all about control. The more control you have over every
thinkable aspect of a particle's life, the better. And it is never enough,
since the tasks at hand are typically custom by their very nature. One
distinctive aspect of this control is the data flow. How much data can be
transferred into the particle system from the outside, passed along
inside, and output back at the very end? Which particle properties can it
affect? We want it all.</div>
<div style="margin-bottom: 0in;">
<br /></div>
<div style="margin-bottom: 0in;">
The quest for control also means that
if the system doesn't have some kind of modular arrangement (nodes,
for instance), it is likely to be limited in functionality.</div>
<div style="margin-bottom: 0in;">
<br /></div>
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="http://2.bp.blogspot.com/-bXvZ54cnOTc/VNdcj1xL0tI/AAAAAAAAAiA/DQ8R1Zv_BRE/s1600/particle_nodes_prw.jpg" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="340" src="https://2.bp.blogspot.com/-bXvZ54cnOTc/VNdcj1xL0tI/AAAAAAAAAiA/DQ8R1Zv_BRE/s1600/particle_nodes_prw.jpg" width="400" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;"><style type="text/css">P { margin-bottom: 0.08in; }</style>
<br />
<div style="margin-bottom: 0in;">
Examples of particle nodes in
Houdini (above) and Fusion (below)</div>
</td></tr>
</tbody></table>
<div style="margin-bottom: 0in;">
</div>
<div style="margin-bottom: 0in;">
<br /></div>
<div style="margin-bottom: 0in;">
<style type="text/css">P { margin-bottom: 0.08in; }</style>
</div>
<div style="margin-bottom: 0in;">
<span style="font-size: medium;">Emission
features</span></div>
<div style="margin-bottom: 0in;">
<br /></div>
<div style="margin-bottom: 0in;">
Following the good old-fashioned
tradition of starting at the beginning, let's begin with particle
emission.
</div>
<div style="margin-bottom: 0in;">
<br /></div>
<div style="margin-bottom: 0in;">
What are the predefined emitter types
and shapes and, most importantly, does the system allow for
user-defined emitter shapes? You can only get so far without custom
sources: input geometry or image data increase the system's
flexibility enormously, and emitting from custom volumes opens up
great possibilities as well. What about emission from animated
objects? Can the emitter's velocity and deformation data be
transferred to the particles being born? For geometry input, particle
birth should be possible from both the surface and the enclosed
volume of the input mesh, and we'd often want some way of restricting
it to certain areas only. To achieve real control, therefore,
texture information needs to be taken into account as well.</div>
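<div style="margin-bottom: 0in;">
To make the surface-emission requirement concrete, here is a minimal Python sketch (not tied to any particular package) of the standard technique: pick a triangle with probability proportional to its area, then pick a uniformly distributed point inside it via barycentric coordinates.</div>

```python
import random

def sample_surface_point(triangles, rng=random):
    """Pick a uniformly distributed random point on a triangle mesh.

    triangles: list of ((ax,ay,az), (bx,by,bz), (cx,cy,cz)) tuples.
    """
    # Cross-product-based triangle areas for area-weighted selection.
    def area(a, b, c):
        ux, uy, uz = (b[0]-a[0], b[1]-a[1], b[2]-a[2])
        vx, vy, vz = (c[0]-a[0], c[1]-a[1], c[2]-a[2])
        cx, cy, cz = (uy*vz - uz*vy, uz*vx - ux*vz, ux*vy - uy*vx)
        return 0.5 * (cx*cx + cy*cy + cz*cz) ** 0.5

    areas = [area(*t) for t in triangles]
    a, b, c = rng.choices(triangles, weights=areas)[0]
    # Uniform barycentric coordinates; the square-root trick avoids
    # points clustering near one vertex.
    r1, r2 = rng.random(), rng.random()
    s = r1 ** 0.5
    u, v, w = 1.0 - s, s * (1.0 - r2), s * r2
    return tuple(u*a[i] + v*b[i] + w*c[i] for i in range(3))
```

Emitting from the enclosed volume works analogously (e.g. rejection sampling inside the bounding box), and restricting birth to certain areas is a matter of weighting the triangle selection by a mask instead of pure area.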
<div style="margin-bottom: 0in;">
<br /></div>
<div style="margin-bottom: 0in;">
Geometry normally allows for cruder
control than image data, so we want all kinds of particle
properties (amount, size, initial velocity, age, mass, etc.) to be
controllable through texture data, using as many of the texture
image's channels as possible. For instance, you might want to set the
particles' initial direction with vectors stored in the RGB channels of
an image, use the alpha or any other custom channel to control their
size, and use the emitter's texture to assign particles to groups
for further manipulation. The same applies to driving particle
properties with different volumetric channels (density, velocity,
heat) or dialing an image directly into a 2D particle system as a
source.</div>
<div style="margin-bottom: 0in;">
<br /></div>
<div style="margin-bottom: 0in;">
Does your system allow you to create
custom particle attributes and assign their values from a source's
texture?</div>
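<div style="margin-bottom: 0in;">
As an illustration of what such texture-driven birth might look like under the hood, here is a hypothetical Python sketch; the attribute names and the RGB-to-velocity mapping are my own assumptions, not any specific program's API.</div>

```python
def init_particle_from_texture(texture, u, v):
    """Initialize hypothetical particle attributes from an RGBA emitter
    texture sampled at the birth UV (nearest-neighbour lookup for brevity).

    texture: 2D list of (r, g, b, a) tuples, values in 0..1.
    """
    h, w = len(texture), len(texture[0])
    x = min(int(u * w), w - 1)
    y = min(int(v * h), h - 1)
    r, g, b, a = texture[y][x]
    return {
        # RGB remapped from 0..1 to -1..1 and used as a direction vector.
        "velocity": (r * 2 - 1, g * 2 - 1, b * 2 - 1),
        "size": a,                               # alpha channel drives size
        "group": "hot" if r > 0.5 else "cold",   # texture-based grouping
    }
```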
<div style="margin-bottom: 0in;">
<br /></div>
<div style="margin-bottom: 0in;">
<br /></div>
<div style="margin-bottom: 0in;">
<b>
<style type="text/css">P { margin-bottom: 0.08in; }</style>
</b></div>
<div style="margin-bottom: 0in;">
<span style="font-size: medium;">The
look options</span></div>
<div style="margin-bottom: 0in;">
<br /></div>
<div style="margin-bottom: 0in;">
Now consider the options available for
the look of each individual particle. Both instancing custom geometry
and sprites are a must for a full-featured 3D particle system*, and
it goes without saying that animation should be supported for both.
Are there any additional special particle types available which
allow for speed benefits or extra techniques? One example of
such a technique would be single-pixel particles, which can be
highly optimized and thus available in huge amounts (as in Eyeon
Fusion, for instance), allowing for a whole set of quite unique
looks.
</div>
<div style="margin-bottom: 0in;">
<br /></div>
<div style="font-style: normal; margin-bottom: 0in;">
*<span style="font-size: x-small;"><i>Rendering a static
particle system as strands for representing hair or grass is yet
another possible technique which some software might offer.</i></span></div>
<div style="font-style: normal; margin-bottom: 0in;">
<br /></div>
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="http://2.bp.blogspot.com/-5875WHu4IVc/VNdckd4WrCI/AAAAAAAAAiM/3kwF7KZMycc/s1600/pixel_particles_merged_prw.jpg" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="225" src="https://2.bp.blogspot.com/-5875WHu4IVc/VNdckd4WrCI/AAAAAAAAAiM/3kwF7KZMycc/s1600/pixel_particles_merged_prw.jpg" width="400" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;"><style type="text/css">P { margin-bottom: 0.08in; }</style>
<br />
<div style="margin-bottom: 0in;">
An effect created with millions of
single-pixel particles</div>
</td></tr>
</tbody></table>
<div style="font-style: normal; margin-bottom: 0in;">
</div>
<div style="font-style: normal; margin-bottom: 0in;">
<br /></div>
<div style="font-style: normal; margin-bottom: 0in;">
<style type="text/css">P { margin-bottom: 0.08in; }</style>
</div>
<div style="margin-bottom: 0in;">
Another good example is metaballs:
while each one is merely a blob on its own, when instanced over a
particle system (especially if the particles can control their
individual sizes and weights) metaballs become a powerful effects
and modeling tool.</div>
<div style="margin-bottom: 0in;">
<br /></div>
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="http://3.bp.blogspot.com/-taedqvM5FBo/VNdcjbMX8VI/AAAAAAAAAh8/L_FQY_3bCho/s1600/meta_houdini_prw.jpg" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="246" src="https://3.bp.blogspot.com/-taedqvM5FBo/VNdcjbMX8VI/AAAAAAAAAh8/L_FQY_3bCho/s1600/meta_houdini_prw.jpg" width="400" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;"><style type="text/css">P { margin-bottom: 0.08in; }</style>
<br />
<div style="margin-bottom: 0in;">
A particle system driving the
metaballs</div>
</td></tr>
</tbody></table>
<div style="margin-bottom: 0in;">
<br /></div>
<div style="margin-bottom: 0in;">
<br /></div>
<div style="margin-bottom: 0in;">
<style type="text/css">P { margin-bottom: 0.08in; }</style>
</div>
<div style="margin-bottom: 0in;">
Whether using sprites or geometry,
real power requires versatile control over the timing and
randomization of these elements. Can an element's animation be offset
for every individual particle to start when it is born? Can a
random frame of the input sequence be picked for each particle? Can
this frame be chosen based on the particle's attributes? Can the input
sprite's or geometry's animation be stretched over the particle's
lifetime? (So that if you have a clip of a balloon growing out of
nowhere and eventually blowing up, you could match it to every
particle in such a way that, no matter how long a particle lives, the
balloon's appearance coincides with its birth and the explosion
exactly matches its death.)</div>
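<div style="margin-bottom: 0in;">
The lifetime-stretching logic itself is simple; here is a Python sketch of both playback modes (the function name and parameters are illustrative only).</div>

```python
def sprite_frame(age, lifetime, num_frames, stretch=True, offset=0):
    """Pick which frame of an animated sprite to show for a particle.

    stretch=True maps the whole clip onto the particle's lifetime, so
    frame 0 coincides with birth and the last frame with death.
    stretch=False plays the clip at its native rate from birth, looping,
    with an optional per-particle random offset.
    """
    if stretch:
        t = min(max(age / lifetime, 0.0), 1.0)   # normalized age, clamped
        return round(t * (num_frames - 1))
    return (int(age) + offset) % num_frames
```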
<div style="margin-bottom: 0in;">
<br /></div>
<div style="margin-bottom: 0in;">
With a good level of randomization
and timing management, animated sprites/meshes are quite powerful for
creating many effects like fire and crowds.</div>
<div style="margin-bottom: 0in;">
<br /></div>
<div style="margin-bottom: 0in;">
<br /></div>
<div style="margin-bottom: 0in;">
<span style="font-size: medium;">Rotation
and spin</span></div>
<div style="margin-bottom: 0in;">
<br /></div>
<div style="margin-bottom: 0in;">
The last set of controls we're
going to touch upon in this first part are the rotation and spin
options. Although an “always face camera” mode is very useful and
important, it is equally important to have the option to exchange it
for a full set of 3D rotations, even for flat image instances like
sprites (think small tree leaves, snowflakes or playing cards). A
frequently required task is to have the elements oriented along their
motion paths (flying arrows, for example). And of course, an easy way
to add randomness and spin, or to drive those through textures or
other particle attributes, is of high value.</div>
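<div style="margin-bottom: 0in;">
Orienting along the motion path usually boils down to normalizing the velocity vector; a minimal Python sketch, with a fallback for particles at rest.</div>

```python
def aim_along_velocity(velocity, eps=1e-8):
    """Return a unit direction a sprite or instance can be aimed along,
    derived from the particle's velocity (e.g. for flying arrows)."""
    vx, vy, vz = velocity
    length = (vx*vx + vy*vy + vz*vz) ** 0.5
    if length < eps:
        return (0.0, 0.0, 1.0)   # fallback when the particle is at rest
    return (vx / length, vy / length, vz / length)
```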
<div style="margin-bottom: 0in;">
<br /></div>
<div style="margin-bottom: 0in;">
Next time we'll look further at the
toolset required to efficiently drive particles later in their lives.</div>
<div style="margin-bottom: 0in;">
<br /></div>
<div style="margin-bottom: 0in;">
</div>
<div style="font-style: normal; margin-bottom: 0in;">
<br /></div>
<div style="margin-bottom: 0in;">
<b> </b>
<style type="text/css">P { margin-bottom: 0.08in; }</style>
</div>
<div style="margin-bottom: 0in;">
<span style="font-size: large;">Part 2</span></div>
<div style="margin-bottom: 0in;">
<br /></div>
<div style="margin-bottom: 0in;">
Now we're taking a look at the further life of a
particle. The key concept and requirement
stays the same: maximum control over every thinkable parameter and
undisrupted data flow through a particle's life and between the
different components of the system.</div>
<div style="margin-bottom: 0in;">
<br /></div>
<div style="margin-bottom: 0in;">
The first question I would ask after
emission is how many particle properties can be controlled by age.
Color and size are a must, but it is also
important for age to be able to influence any other arbitrary
parameter, and in a non-linear fashion (like plotting an
age-to-parameter dependency graph with a custom curve). For example,
when doing a dust cloud with sprites you might want to increase
their size while decreasing opacity toward the very end of their
lifetime.</div>
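<div style="margin-bottom: 0in;">
Such a non-linear age-to-parameter mapping is typically implemented as a ramp, i.e. a piecewise-linear curve evaluated at the normalized age. A Python sketch, with hypothetical dust-sprite curves as an example.</div>

```python
def ramp(points, t):
    """Piecewise-linear ramp lookup: points is a sorted list of
    (position, value) pairs over 0..1; t is the normalized particle age."""
    if t <= points[0][0]:
        return points[0][1]
    for (p0, v0), (p1, v1) in zip(points, points[1:]):
        if t <= p1:
            f = (t - p0) / (p1 - p0)
            return v0 + f * (v1 - v0)
    return points[-1][1]

# Hypothetical dust-sprite curves: size grows over the whole lifetime,
# opacity holds, then falls off sharply near death.
size_curve    = [(0.0, 0.2), (1.0, 3.0)]
opacity_curve = [(0.0, 1.0), (0.8, 1.0), (1.0, 0.0)]
```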
<div style="margin-bottom: 0in;">
<br /></div>
<div style="margin-bottom: 0in;">
Can custom events be triggered at
certain points of a particle's lifetime? Can the age data be
passed on to those events?</div>
<div style="margin-bottom: 0in;">
<br /></div>
<div style="margin-bottom: 0in;">
<br /></div>
<div style="margin-bottom: 0in;">
<span style="font-size: medium;">Spawning</span></div>
<div style="margin-bottom: 0in;">
<br /></div>
<div style="margin-bottom: 0in;">
Spawning (emitting new particles from
existing ones) is a key piece of particle-system functionality.
Its numerous applications include changing the look of a particle
based on events like collisions or parameters like age, creating
different kinds of bursts and explosions, and creating all sorts of
trails. The classic fireworks effect is a good example where spawning
is used in at least two ways: it creates the trails by
generating new elements behind the leading ones, and produces the
explosion by generating new leads from the old ones at the desired
moment.</div>
<div style="margin-bottom: 0in;">
<br /></div>
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="http://3.bp.blogspot.com/-tdBfEISZfmM/VNdcjDvNCKI/AAAAAAAAAh4/_Bnqr4qUVic/s1600/018_fireworks_b_v042_prw.jpg" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="225" src="https://3.bp.blogspot.com/-tdBfEISZfmM/VNdcjDvNCKI/AAAAAAAAAh4/_Bnqr4qUVic/s1600/018_fireworks_b_v042_prw.jpg" width="400" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;"><style type="text/css">P { margin-bottom: 0.08in; }</style>
<br />
<div style="margin-bottom: 0in;">
In a fireworks effect spawning is
used to create both the trails and the explosion</div>
</td></tr>
</tbody></table>
<div style="margin-bottom: 0in;">
<br /></div>
<div style="margin-bottom: 0in;">
<br /></div>
<div style="margin-bottom: 0in;">
<style type="text/css">P { margin-bottom: 0.08in; }</style>
</div>
<div style="margin-bottom: 0in;">
Just like with the initial emission
discussed last time, it is paramount to be able to transfer as
much data as possible from the old particles to the new ones.
Velocity, shape, color, age and all custom properties should be
easily inheritable when required, or possible to set from scratch
as an option.
</div>
<div style="margin-bottom: 0in;">
<br /></div>
<div style="margin-bottom: 0in;">
Last but not least among the spawning
options is recursion. A good software solution lets the user
choose whether newly spawned elements serve as a base
for spawning in each subsequent time-step (i.e. whether to spawn
recursively). Although a powerful technique, recursive spawning can
quickly get out of hand as the number of elements keeps growing
exponentially.</div>
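<div style="margin-bottom: 0in;">
A quick Python sketch shows why: with non-recursive spawning the particle count grows linearly, while with recursion it grows exponentially.</div>

```python
def spawn_counts(steps, per_particle, recursive, leaders=1):
    """Count live particles per time-step when particles spawn
    `per_particle` children every step. With recursive=False only the
    original leaders spawn; with recursive=True children spawn too."""
    counts, total = [], leaders
    for _ in range(steps):
        parents = total if recursive else leaders
        total += parents * per_particle
        counts.append(total)
    return counts
```

With one leader spawning two children per step, three steps give 7 particles non-recursively but 27 recursively.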
<div style="margin-bottom: 0in;">
<br /></div>
<div style="margin-bottom: 0in;">
<br /></div>
<div style="margin-bottom: 0in;">
<span style="font-size: medium;">Behavior
control</span></div>
<div style="margin-bottom: 0in;">
<br /></div>
<div style="margin-bottom: 0in;">
Forces are the standard way of driving
motion in a particle system. The default set of directional,
point, rotational, turbulent and drag forces aside*, it is important
to have easily controllable custom-force functionality with
visual control over its shape. The ability to use arbitrary 3D objects
or images as forces comes in very handy here.</div>
<div style="margin-bottom: 0in;">
<br /></div>
<div style="font-style: normal; margin-bottom: 0in;">
*<span style="font-size: x-small;"><i>An often
overlooked drag (sometimes called friction) force plays a very
important role, as it counteracts the other forces and keeps them
from growing out of control.</i></span></div>
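<div style="margin-bottom: 0in;">
The stabilizing role of drag is easy to demonstrate: because drag grows with velocity, a constant force opposed by drag settles at a terminal velocity instead of accelerating forever. A minimal forward-Euler sketch in Python (the parameter values are arbitrary).</div>

```python
def integrate(steps, dt=0.1, wind=10.0, drag=0.5):
    """Forward-Euler velocity update under a constant wind force
    opposed by a drag force proportional to velocity."""
    v = 0.0
    for _ in range(steps):
        accel = wind - drag * v   # drag always opposes current motion
        v += accel * dt
    return v
```

Here the velocity approaches the terminal value wind / drag = 20 rather than growing without bound.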
<div style="margin-bottom: 0in;">
<br /></div>
<div style="margin-bottom: 0in;">
Forces raise the next question: how
much can the system be driven by physical simulations? Does it
support collisions? If so, what types of collision objects are
supported, what are the options for post-collision behavior, and how
much data can a particle exchange with the rest of the scene in a
collision?</div>
<div style="margin-bottom: 0in;">
<br /></div>
<div style="margin-bottom: 0in;">
Can further physical phenomena like
smoke, combustion or fluid behavior be simulated within the particle
system? Can this kind of simulation data be dialed into the system
from the outside? One efficient technique, for instance, is to drive
the particles with the results of a low-resolution volumetric
simulation, using them to add detail.
</div>
<div style="margin-bottom: 0in;">
<br /></div>
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="http://3.bp.blogspot.com/-sNYP6YmDueY/VNdcjdH4IvI/AAAAAAAAAig/z12aPCWI1u0/s1600/image_driven_prw.jpg" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="400" src="https://3.bp.blogspot.com/-sNYP6YmDueY/VNdcjdH4IvI/AAAAAAAAAig/z12aPCWI1u0/s1600/image_driven_prw.jpg" width="315" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;"><style type="text/css">P { margin-bottom: 0.08in; }</style>
<br />
<div style="margin-bottom: 0in;">
The particle effect above uses the
low-resolution </div>
<div style="margin-bottom: 0in;">
smoke simulation shown below as the custom force</div>
</td></tr>
</tbody></table>
<div style="margin-bottom: 0in;">
<br /></div>
<div style="margin-bottom: 0in;">
<br /></div>
<div style="margin-bottom: 0in;">
<style type="text/css">P { margin-bottom: 0.08in; }</style>
</div>
<div style="margin-bottom: 0in;">
The next things commonly required for
directing particles are follow-path and find-target
functionality. Support for animated paths and targets is of value
here, as is the ability to force particles to reach the goal within a
certain timeframe.
</div>
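<div style="margin-bottom: 0in;">
The simplest form of “reach the target in a given time” is dividing the remaining offset by the remaining time; a Python sketch (real systems would typically blend this with the particle's current velocity for smoother motion).</div>

```python
def velocity_to_target(pos, target, time_left):
    """Velocity that takes a particle from pos to target in exactly
    time_left seconds (straight-line version; time_left must be > 0)."""
    return tuple((t - p) / time_left for p, t in zip(pos, target))
```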
<div style="margin-bottom: 0in;">
<br /></div>
<div style="margin-bottom: 0in;">
Many interesting effects can be
achieved if the particles have some awareness of each other (like
knowing which is their nearest neighbor). Forces like flocking can
then be used to simulate collective behavior.</div>
<div style="margin-bottom: 0in;">
<br /></div>
<div style="margin-bottom: 0in;">
<br /></div>
<div style="margin-bottom: 0in;">
<span style="font-size: medium;">Limiting
the effect</span></div>
<div style="margin-bottom: 0in;">
<br /></div>
<div style="margin-bottom: 0in;">
For any force or event (including
spawning) which may be added to the flow, let's now consider the
limiting options. What are the ways to restrict the effect of each
particular operator? Can it be restricted to a certain area only? A
certain lifespan? Can it be limited by custom criteria like
mathematical expressions, arbitrary particle properties or a
probability factor? How much control does the user have over each
operator's active area: custom shapes, textures, geometry, volumes? Is
there a grouping workflow?</div>
<div style="margin-bottom: 0in;">
<br /></div>
<div style="margin-bottom: 0in;">
Groups provide a powerful technique for
particle control. The concept implies that at creation, or at any
later point in its life, a particle can be assigned to some group,
and each single effect can then simply be limited to operate on the
chosen groups only. For efficient work, all the limiting options just
discussed should be available as criteria for group assignment. The
groups themselves should also be subject to logic operations (like
subtraction or intersection), should not be limited in number, and
should not claim particles exclusively. For example, you might want
to group some particles based on speed, others based on age, and then
create yet another group as an intersection of the first two.
</div>
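<div style="margin-bottom: 0in;">
Conceptually, groups are just sets of particle IDs, which is why logic operations come for free; a Python sketch with made-up particles and criteria.</div>

```python
# Hypothetical particles with per-particle attributes.
particles = [
    {"id": 0, "speed": 5.0, "age": 0.2},
    {"id": 1, "speed": 1.0, "age": 0.9},
    {"id": 2, "speed": 6.0, "age": 0.8},
]

# Groups as plain sets of particle ids, assigned by arbitrary criteria...
fast = {p["id"] for p in particles if p["speed"] > 4.0}
old  = {p["id"] for p in particles if p["age"] > 0.5}

# ...with set logic to derive new groups.
fast_and_old = fast & old   # intersection
fast_not_old = fast - old   # subtraction
```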
<div style="margin-bottom: 0in;">
<br /></div>
<div style="margin-bottom: 0in;">
</div>
<div style="margin-bottom: 0in;">
<style type="text/css">P { margin-bottom: 0.08in; }</style>
</div>
<div style="margin-bottom: 0in;">
<span style="font-size: medium;">Further
considerations</span></div>
<div style="margin-bottom: 0in;">
<br /></div>
<div style="margin-bottom: 0in;">
The last set of questions I would
suggest may have less connection with the direct capabilities of a
given system, yet they can still make a huge difference in real,
deadline-driven production.</div>
<div style="margin-bottom: 0in;">
<br /></div>
<div style="margin-bottom: 0in;">
What is the maximum number of
particles the system can manage interactively and render? Are
there optimizations for certain types of elements? What kind of
data can be output from the particle system for further manipulation?
Can the results be meshed into geometry later, or used in another
software package? Can a particle system deform or otherwise affect
the rest of the scene? Can it be used to drive
another particle system?</div>
<div style="margin-bottom: 0in;">
<br /></div>
<div style="margin-bottom: 0in;">
Can the results of a particle
simulation be cached to disk or memory? Can it be played backwards
(is scrubbing back and forth across the timeline allowed)? Are there
options for a pre-roll before the first frame of the scene?</div>
<div style="margin-bottom: 0in;">
<br /></div>
<div style="margin-bottom: 0in;">
Does the system provide good visual
aids for working interactively in the viewport? Can individual
particles be manually selected and manipulated? This last question
can often be a game-changer when, after days of building the setup
and simulating, everything works except for a few stray particles
which no one would really miss.</div>
<div style="margin-bottom: 0in;">
<br /></div>
<div style="margin-bottom: 0in;">
Aside from the presence of
variation/randomization options, which should be available for the
maximum number of parameters in the system, how stable and
controllable is the randomization? If you reseed one parameter,
will the rest of the simulation stay unaffected and preserve its
look? How predictable and stable is the particle solver in general?
Can the particle flow be split into several streams, or merged
together from several?</div>
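<div style="margin-bottom: 0in;">
Stable reseeding is usually achieved by giving every parameter its own independent random stream instead of pulling all values from one shared sequence; a Python sketch of the idea (the seed-mixing scheme is an arbitrary illustration).</div>

```python
import random

def particle_params(particle_id, seeds):
    """Draw one random value per parameter, each from its own stream
    keyed on the parameter's seed and the particle id, so reseeding one
    parameter leaves all the others bit-for-bit identical."""
    return {
        name: random.Random(seed * 1_000_003 + particle_id).random()
        for name, seed in seeds.items()
    }
```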
<div style="margin-bottom: 0in;">
<br /></div>
<div style="margin-bottom: 0in;">
And as the closing point in this list
for evaluating a particular software solution, it is worth
considering the quality and accessibility of the documentation,
together with the amount of available presets and samples and the
size of the user base. Trying to dig through a really complex system
like Houdini or Thinking Particles would be quite daunting without
those.</div>
<div style="margin-bottom: 0in;">
<br /></div>
<div style="margin-bottom: 0in;">
<br /></div>
Denis Kozlovhttp://www.blogger.com/profile/07406391692819839722noreply@blogger.com0tag:blogger.com,1999:blog-5149177666770535781.post-56377324915378753522015-02-03T13:02:00.001+01:002017-02-13T19:19:35.795+01:00Project Tundra<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="http://3.bp.blogspot.com/-HoCj8mmhxZA/VNCu-PDZ_RI/AAAAAAAAAg4/2_s5fFTDYS0/s1600/TND01_Tundra_Denis_Kozlov_1024px.jpg" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img alt="Project Tundra 01" border="0" height="400" src="https://3.bp.blogspot.com/-HoCj8mmhxZA/VNCu-PDZ_RI/AAAAAAAAAg4/2_s5fFTDYS0/s1600/TND01_Tundra_Denis_Kozlov_1024px.jpg" title="Project Tundra 01" width="400" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">01. Tundra</td></tr>
</tbody></table>
<br />
<div style="margin-bottom: 0in;">
<style type="text/css">P { margin-bottom: 0.08in; }</style>Since I find it very cool to call
everything a project, here goes “Project Tundra” with some
anagrams. Pretty much all visual elements (except for a couple of
bump textures) are completely synthetic and generated procedurally
with Houdini and Fusion, so almost no reality was sampled during the production of the series. Some clouds from <a href="http://www.the-working-man.org/2014/11/procedural-clouds.html">these setups</a> were used. </div>
<div style="margin-bottom: 0in;">
<br />
<a name='more'></a></div>
<div style="margin-bottom: 0in;">
The originals are about 6K in resolution. I'm a bit split between the desire to write that prints will follow shortly, and the desire not to lie. Let's say they will follow for sure if you're patient enough to follow the slowly evolving pace of this blog or some other <a href="https://twitter.com/kozzzlove">social</a> <a href="https://www.facebook.com/kozzzlove">media</a> I pretend to participate in.</div>
<div style="margin-bottom: 0in;">
<br /></div>
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="http://4.bp.blogspot.com/-RSs-nTZsJVM/VNCu-s73UqI/AAAAAAAAAg8/AnSZZD1nvdk/s1600/TND02_Dun_Art_Denis_Kozlov_1024px.jpg" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img alt="Project Tundra 02 - Dun Art" border="0" height="400" src="https://4.bp.blogspot.com/-RSs-nTZsJVM/VNCu-s73UqI/AAAAAAAAAg8/AnSZZD1nvdk/s1600/TND02_Dun_Art_Denis_Kozlov_1024px.jpg" title="Project Tundra 02 - Dun Art" width="400" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">02. Dun Art</td></tr>
</tbody></table>
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="http://1.bp.blogspot.com/--c29B9bvgDw/VNCu-1UOsVI/AAAAAAAAAhA/g9mCsEQsbtY/s1600/TND03_Durant_Denis_Kozlov_1024px.jpg" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img alt="Project Tundra 03 - Durant" border="0" height="400" src="https://1.bp.blogspot.com/--c29B9bvgDw/VNCu-1UOsVI/AAAAAAAAAhA/g9mCsEQsbtY/s1600/TND03_Durant_Denis_Kozlov_1024px.jpg" title="Project Tundra 03 - Durant" width="400" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">03. Durant</td></tr>
</tbody></table>
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="http://4.bp.blogspot.com/-utDyO6TKcBA/VNCu_K1SrxI/AAAAAAAAAhE/ROlgPfqo-0w/s1600/TND04_Rat_Dun_Denis_Kozlov_1024px.jpg" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img alt="Project Tundra 04 - Rat Dun" border="0" height="400" src="https://4.bp.blogspot.com/-utDyO6TKcBA/VNCu_K1SrxI/AAAAAAAAAhE/ROlgPfqo-0w/s1600/TND04_Rat_Dun_Denis_Kozlov_1024px.jpg" title="Project Tundra 04 - Rat Dun" width="400" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">04. Rat Dun</td></tr>
</tbody></table>
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="http://2.bp.blogspot.com/-FwHAAvd_MFw/VNCu_eFXqEI/AAAAAAAAAhI/scAHWVarVtw/s1600/TND05_Dauntr_Denis_Kozlov_1024px.jpg" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img alt="Project Tundra 05 - Dauntr'" border="0" height="400" src="https://2.bp.blogspot.com/-FwHAAvd_MFw/VNCu_eFXqEI/AAAAAAAAAhI/scAHWVarVtw/s1600/TND05_Dauntr_Denis_Kozlov_1024px.jpg" title="Project Tundra 05 - Dauntr'" width="400" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">05. Dauntr'</td></tr>
</tbody></table>
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="http://2.bp.blogspot.com/-l7FtJC7QgcM/VNCvAN1fMbI/AAAAAAAAAhY/gh_jeI4NWxk/s1600/TND06_Da_Turn_Denis_Kozlov_1024px.jpg" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img alt="Project Tundra 06 - Da Turn" border="0" height="400" src="https://2.bp.blogspot.com/-l7FtJC7QgcM/VNCvAN1fMbI/AAAAAAAAAhY/gh_jeI4NWxk/s1600/TND06_Da_Turn_Denis_Kozlov_1024px.jpg" title="Project Tundra 06 - Da Turn" width="400" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">06. Da Turn</td></tr>
</tbody></table>
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="http://1.bp.blogspot.com/-pY5WDWyTRto/VNCvBLd2qqI/AAAAAAAAAho/DsEB0dkMg2w/s1600/TND07_Details_Denis_Kozlov_1024px.jpg" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img alt="Project Tundra - Details" border="0" height="640" src="https://1.bp.blogspot.com/-pY5WDWyTRto/VNCvBLd2qqI/AAAAAAAAAho/DsEB0dkMg2w/s1600/TND07_Details_Denis_Kozlov_1024px.jpg" title="Project Tundra - Details" width="443" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">And a bit of details' flavor.</td></tr>
</tbody></table>
<div style="margin-bottom: 0in;">
<br /></div>
<div style="margin-bottom: 0in;">
<br /></div>
<div style="margin-bottom: 0in;">
<br /></div>
Denis Kozlovhttp://www.blogger.com/profile/07406391692819839722noreply@blogger.com0tag:blogger.com,1999:blog-5149177666770535781.post-40766221180219450412014-12-28T14:13:00.000+01:002017-02-13T19:18:50.719+01:00Bit Depth - color precision in raster images<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://2.bp.blogspot.com/-NQPXItfa_Og/VJ_5VBeHv3I/AAAAAAAAAgA/Ct9Ae_UW6E0/s1600/bitdepths_chart_med.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img alt="Bit depth diagram" border="0" height="331" src="https://2.bp.blogspot.com/-NQPXItfa_Og/VJ_5VBeHv3I/AAAAAAAAAgA/Ct9Ae_UW6E0/s1600/bitdepths_chart_med.jpg" title="Bit-depth explained" width="400" /></a></div>
<br />
<a href="http://www.the-working-man.org/2014/11/pixel-is-not-color-square.html">Last time</a> we talked about encoding color information in pixels with numbers from a zero-to-one range, where 0 stands for black, 1 for white and numbers in between represent the corresponding shades of gray. (The RGB model uses three such numbers to store the brightness of the Red, Green and Blue components, representing a wide range of colors by mixing them.) This time let's address the precision of such a representation, which is defined by the number of bits a particular file format dedicates to describing that 0-1 range: the bit depth of a raster image. <br />
<br />
<a name='more'></a><br />
Bits are the most basic units of storing information. Each can take only two values, which can be thought of as 0 or 1, off or on, absence or presence of a signal, or black or white in our case. Therefore using 1 bit per pixel (a 1-bit image) would give us a picture consisting only of black and white elements, with no shades of gray.* <br />
<br />
<span style="font-size: x-small;"><i>*Of course the two values can be interpreted as anything (for instance, you could encode two arbitrary colors with them, like brown and violet, but only those two, with no gradations in between). For the most common purpose, representing a 0 to 1 gradation, 1 bit means black or white, and higher bit depths serve to increase the number of possible gray sub-steps.</i></span><br />
<br />
But the great thing about bits is that when you group them together, you get far more than the simple sum of the individual bits: each new bit does not add 2 more values to the group, but instead doubles the number of available unique combinations. It means that if we use 3 bits to describe each pixel value, we get not 6 (=2*3) but 8 (=2^3) possible combinations. 5 bits can produce 32, and 8 bits grouped together result in 256 different numbers.<br />
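The doubling is easy to verify in a couple of lines of Python:

```python
def combinations(bits):
    """Distinct values a group of `bits` bits can represent: each extra
    bit doubles the count instead of adding two."""
    count = 1
    for _ in range(bits):
        count *= 2   # one more bit -> twice as many combinations
    return count
```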
<br />
<table cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="http://2.bp.blogspot.com/-LazvB1d4Qtc/VJ_3GNumTnI/AAAAAAAAAfM/ZY_l1515YEo/s1600/bits_grouped_lo.jpg" imageanchor="1" style="clear: left; margin-bottom: 1em; margin-left: auto; margin-right: auto;"><img alt="Possible values represented by 1 and 3 bits" border="0" height="300" src="https://2.bp.blogspot.com/-LazvB1d4Qtc/VJ_3GNumTnI/AAAAAAAAAfM/ZY_l1515YEo/s1600/bits_grouped_lo.jpg" title="How bits work together" width="400" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;"><style type="text/css">P { margin-bottom: 0.08in; }</style>
<br />
<div style="margin-bottom: 0in;">
Although each bit can represent only
2 values, </div>
<div style="margin-bottom: 0in;">
even 3 of them grouped together would already </div>
<div style="margin-bottom: 0in;">
result in 8
possible combinations.</div>
</td></tr>
</tbody></table>
<br />
<br />
That group of 8 bits is typically called a byte, which is another standard unit computers use to store data. This makes it convenient (although not necessary) to assign whole bytes to describe the color of a pixel, and it is one byte per channel which is most commonly used. This is true for the majority of digital images existing today, giving us 256 possible gradations from black to white (in either a monochrome picture or <b>each</b> Red, Green or Blue channel for RGB), and is what is called an 8-bit image in computer graphics, where the bit-depth is traditionally measured per color component. In consumer electronics the same 8-bit RGB image would be called 24-bit (True Color) simply because they count the sum of all 3 channels together (higher numbers must seem cooler for marketing). An 8-bit RGB image can reproduce 16777216 (=256^3) different colors, a color fidelity normally sufficient to avoid visible artifacts. Moreover, regular consumer monitors are physically not designed to display more gradations (in fact they may be limited to even less, like 6 bits per channel). So why would someone bother and waste disk space/memory on files of higher bit-depths?<br />
<br />
The most basic example of when 256 gradations of an 8-bit image are not enough is heavy color-correction, which may quickly result in artifacts called banding. Rendering to a higher bit-depth solves this issue, and normally 16-bit formats with their 65536 gradations of gray are used for the purpose. But even 10 bits, as in the Cineon/DPX format, give 4 times higher precision than the standard 8. Going above 2 bytes per channel, on the other hand, becomes impractical, as the file size increases proportionally to the bit-depth.*<br />
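Where the banding comes from can be shown in a few lines of Python (a sketch with a hypothetical helper, assuming simple round-and-clamp quantization):

```python
# A strong gain on 8-bit data leaves gaps between the surviving levels,
# which show up as visible bands in smooth gradients.
def gain_8bit(value, factor):
    """Apply a gain to an 8-bit value, clamping the result to 0-255."""
    return min(255, round(value * factor))

# Brighten the darkest quarter of the 8-bit range by 4x:
levels = sorted({gain_8bit(v, 4.0) for v in range(64)})
print(len(levels))   # only 64 distinct output levels remain across 0-252
print(levels[:5])    # [0, 4, 8, 12, 16] - steps of 4, i.e. banding
```

With 16 bits (65536 input levels) the same gain would still leave steps far finer than any display can show.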
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="http://4.bp.blogspot.com/-aFUMNIF6N8o/VJ_3ML7ghHI/AAAAAAAAAfg/sXavkFV9O14/s1600/grad_banding_noise.jpg" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="200" src="https://4.bp.blogspot.com/-aFUMNIF6N8o/VJ_3ML7ghHI/AAAAAAAAAfg/sXavkFV9O14/s1600/grad_banding_noise.jpg" width="400" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;"><style type="text/css">P { margin-bottom: 0.08in; }</style>
<br />
<div style="margin-bottom: 0in;">
Insufficient bit-depth of an
output device can be another </div>
<div style="margin-bottom: 0in;">
cause for banding artifacts, especially
in gradients. </div>
<div style="margin-bottom: 0in;">
Adding noise can help fighting this issue through
dithering. </div>
<div style="margin-bottom: 0in;">
A kind of fighting the fire with fire...</div>
</td></tr>
</tbody></table>
<span style="font-size: x-small;"><i>*No matter float or integer, the size of a raster image in memory can be calculated as the product of the number of pixels (horizontal times vertical resolution), the bit-depth and the number of channels. This way a 320x240 8-bit RGBA image would occupy 320x240x8x4=2457600 bits, or 320x240x4=307200 bytes of memory. This does not give the exact file size on disk though: first, there is additional data like the header and other meta-data stored in an image file; and second, some kind of compression (lossless like internal archiving, or lossy like in JPEG) is normally utilized in image file formats to save disk space.</i></span><br />
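The footnote's arithmetic as a small Python sketch (the function name is illustrative only):

```python
# Uncompressed in-memory size of a raster image:
# pixels x channels x bit-depth, independent of integer vs float.
def image_size_bytes(width, height, channels, bits_per_channel):
    return width * height * channels * bits_per_channel // 8

size = image_size_bytes(320, 240, 4, 8)  # the 8-bit RGBA example above
print(size)  # 307200 bytes
```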
<br />
<br />
But regardless of the number of gradations (2, 4, 256, 65536, etc.), as long as we are using an integer file format, these numbers all describe values within the range from 0 to 1. For instance, the middle gray value in sRGB color space (the color space of a regular computer monitor – not to be confused with the RGB color model) is around 0.5 – not 128, and white is 1 – not 255. It is only because the 8-bit representation is so popular that many programs measure color in it by default. But this is not how the underlying math works, and that can cause problems when trying to make sense of it... For example, take the Multiply blending mode – it's easy to learn empirically that it preserves the color of the underlying layer in white areas of the overlay, and darkens the picture under the dark areas – but what exactly is happening, and why is it called “multiply”? With black it makes sense – you multiply the underlying color by 0 and get 0 – black. But why would it preserve white areas if white is 255? Multiplying something by 255 should make it way brighter... Well, because white is 1, not 255 (neither 4, nor 16, nor 65536...). And so with the rest of the CG math: <b>white means one</b>. <br />
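A minimal Python sketch of Multiply on normalized values (assuming a per-pixel operation on a single channel):

```python
# Multiply blending mode: both inputs normalized to the 0-1 range,
# which is why white (1.0) preserves the base and black (0.0) blackens it.
def multiply_blend(base, overlay):
    return base * overlay

print(multiply_blend(0.5, 1.0))  # white overlay preserves the base: 0.5
print(multiply_blend(0.5, 0.0))  # black overlay gives black: 0.0
print(multiply_blend(0.5, 0.5))  # gray overlay darkens: 0.25
```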
<br />
The above paragraphs referred to how bit depth works in integer formats – defining only the number of possible gradations between 0 and 1. Floating point formats are of a different kind. Bit depth does pretty much the same thing here – it defines the color precision. However, the numbers stored can be anything and may well lie outside of the 0 to 1 range: brighter than white (above 1) or darker than black (negative). Internally this works through a sign-exponent-mantissa representation (effectively a logarithmic distribution of precision) and requires higher bit-depths for achieving the same fidelity in the usually most important [0,1] range. Normally at least 16 or even 32 bits are used per channel to represent floating point data with enough precision. At the cost of memory usage, this allows for representing High Dynamic Range imagery, gives additional freedom in compositing, and makes it possible to store arbitrary numerical data in image files, like the World Position Pass to name one. <br />
<br />
This also means that integer formats always clip the out-of-range pixel values. A quick way to test for clipping is to lower the brightness of a picture and see if any details get revealed in the overbright areas.<br />
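The effect can be sketched in Python (a hypothetical helper, assuming simple round-and-clamp quantization):

```python
# Out-of-range values survive in float but are clipped by integer storage.
def store_integer(value, bits=8):
    """Quantize a normalized value to an integer format: clips to [0, 1]."""
    levels = 2 ** bits - 1
    return round(max(0.0, min(1.0, value)) * levels) / levels

overbright = 1.7                     # e.g. a bright highlight
clipped = store_integer(overbright)  # integer file: the detail is gone
print(clipped * 0.5)                 # darkening reveals nothing new: 0.5
print(overbright * 0.5)              # float kept the detail: 0.85
```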
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="http://3.bp.blogspot.com/-Qe7W2NnhqJo/VJ_3Mdp03dI/AAAAAAAAAfk/aOy23KVCxPA/s1600/sphere_med.jpg" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img alt="A 3d-render of a sphere used to illustrate artifacts of insufficient color precision" border="0" height="300" src="https://3.bp.blogspot.com/-Qe7W2NnhqJo/VJ_3Mdp03dI/AAAAAAAAAfk/aOy23KVCxPA/s1600/sphere_med.jpg" title="A 3d-render used to illustrate artifacts of insufficient color precision" width="400" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Source image</td></tr>
</tbody></table>
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="http://3.bp.blogspot.com/-GeFJAp-j2Dw/VJ_3L96yaXI/AAAAAAAAAfc/hgQ_lGR7N4c/s1600/sphere_cc_comments_lo.jpg" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img alt="Color banding and clipping artifacts" border="0" height="400" src="https://3.bp.blogspot.com/-GeFJAp-j2Dw/VJ_3L96yaXI/AAAAAAAAAfc/hgQ_lGR7N4c/s1600/sphere_cc_comments_lo.jpg" title="Color banding and clipping artifacts" width="355" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;"><style type="text/css">P { margin-bottom: 0.08in; }</style>
<br />
<div style="margin-bottom: 0in;">
The same source image rendered to 8
bits integer, 16 bits integer and 16 bits float with 2 different
color corrections applied. Notice the color banding in 8-bit and
clipping highlights in integer versions.</div>
</td></tr>
</tbody></table>
<br />
It is natural for a 3D renderer to work in floating point internally, so most often the risk of clipping occurs when choosing a file format to save the final image. But even when dealing with already given low bit-depth or clipped integer files, there are certain benefits in increasing their color precision inside the compositing software. (To the best of my knowledge, Nuke converts any imported source into a 32-bit floating point representation internally and automatically.) Such a conversion won't add any extra details or qualities to the existing data, but the results of your further manipulations will belong to a better colorspace with fewer quantization errors (and a wider luminance range if you also convert an integer to float). Moreover, you can quickly fake HDR data by converting an integer image to float and gaining up the highlights (bright areas) of the picture. This won't give you a real replacement for properly acquired HDR, but should suffice for many purposes like diffuse IBL (Image Based Lighting). In other words, regardless of the output requirements, do your compositing in at least 16 bits, float highly preferable – final downsampling and clipping for the output delivery is never a problem. <br />
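One possible way to sketch the fake-HDR trick in Python (the threshold/gain scheme and names here are my illustrative assumptions, not a standard formula):

```python
# After an integer-to-float conversion, push the brightest pixels
# beyond 1.0 - a crude stand-in for properly acquired HDR data.
def fake_hdr(value, threshold=0.8, gain=4.0):
    if value <= threshold:
        return value
    return threshold + (value - threshold) * gain

print(fake_hdr(0.5))  # midtones untouched: 0.5
print(fake_hdr(1.0))  # former white becomes overbright (about 1.6)
```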
<br />
It is important to have a clear understanding of bit-depth and integer/float differences to deliver renders in adequate quality and not get caught out during the post-processing stage later. Read up on the file formats and options available in your software. For instance, 16 bits can refer to both integer and floating point formats, which may be distinguished as “Short” (integer) and “Half” (float) in Maya. As a general rule of thumb, use 16 bits if you plan for extensive color grading/compositing, and make sure you render to a floating point format to avoid clipping if any out-of-range values need to be preserved (like details in the highlights, negative values in Z-depth, or if you simply use a linear workflow). 16-bit OpenEXR files can be considered a good color precision/file size compromise for the general case. <br />
<br />
Happy and Merry everyone!<br />
<br />Denis Kozlovhttp://www.blogger.com/profile/07406391692819839722noreply@blogger.com0tag:blogger.com,1999:blog-5149177666770535781.post-45885440366837623682014-11-03T21:45:00.000+01:002017-02-13T19:17:59.598+01:00Pixel Is Not a Color Square<div class="separator" style="clear: both; text-align: center;">
<a href="http://4.bp.blogspot.com/-CteQfdkqqa0/VFfkC5-HNXI/AAAAAAAAAeg/6RHF2ir8Msw/s1600/raster_teapot_sample_web_header.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img alt="Rater images contain nothing but numbers in the table cells" border="0" height="250" src="https://4.bp.blogspot.com/-CteQfdkqqa0/VFfkC5-HNXI/AAAAAAAAAeg/6RHF2ir8Msw/s1600/raster_teapot_sample_web_header.jpg" title="Utah teapot rasterized" width="400" /></a></div>
<i><br />Continuing the announced series of my original manuscripts for 3D World magazine.</i><br />
<div style="margin-bottom: 0in;">
<span style="font-size: large;"><i> </i> </span></div>
<div style="margin-bottom: 0in;">
<span style="font-size: large;">Thinking
of images as data containers.</span></div>
<br />
<a name='more'></a><br />
Although those raster image files filling our computers and lives are most commonly used to represent pictures (surprisingly), I find it useful for a CG artist to have yet another perspective – a geekier one. And from that perspective a raster image is essentially a set of data organized into a particular structure, to be more specific — a table filled with numbers (a matrix, mathematically speaking). <br />
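A minimal Python illustration of that perspective (plain nested lists standing in for a real raster):

```python
# A tiny monochrome raster is nothing but a table (matrix) of numbers:
# 0 is black, 1 is white, values in between are shades of gray.
image = [
    [0.0, 0.5, 1.0],
    [0.5, 1.0, 0.5],
    [1.0, 0.5, 0.0],
]
rows, columns = len(image), len(image[0])
print(rows, columns)  # resolution: 3 3
print(image[1][2])    # the pixel in row 1, column 2: 0.5
```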
<br />
The number in each table cell can be used to represent a color, and this is how the cell becomes a pixel (which stands for “picture element”). Many ways exist to encode colors numerically. One (probably the most straightforward) is to explicitly define a number-to-color correspondence for each value (i.e. 3 stands for dark red, 17 for pale green and so on). Such a method was frequently used in older formats like GIF, as it allows for certain size benefits at the expense of a limited palette. <br />
<br />
Another way (the most common one) is to use a continuous range from 0 to 1 (not 255!), where 0 stands for black, 1 for white, and numbers in between denote shades of gray of the corresponding lightness. (A 0-255 range of integers is only an 8-bit representation of zero-to-one, popularized by certain software products and harmfully misleading when it comes to understanding many concepts such as color math or blending modes.) This way we get a logical and elegantly organized way of representing a monochrome image with a raster file. The term “monochrome” happens to be more appropriate than “black-and-white”, since the same data set can be used to depict gradations from black to any other color depending on the output device – many old monitors, for example, were black&amp;green rather than black&amp;white. <br />
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="http://4.bp.blogspot.com/-mt1ohnHRUWE/VFfkCYm4HvI/AAAAAAAAAeY/XR97xtmzPZM/s1600/PI_RAND.jpg" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img alt="Encoding custom data with images" border="0" height="200" src="https://4.bp.blogspot.com/-mt1ohnHRUWE/VFfkCYm4HvI/AAAAAAAAAeY/XR97xtmzPZM/s1600/PI_RAND.jpg" title="Encoding custom data with images" width="400" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;"><style type="text/css">P { margin-bottom: 0.08in; }</style>
<br />
<div style="margin-bottom: 0in;">
A raster may contain data of a
totally different kind. As an example, let's fill one table with the
digits of PI divided by ten, the other one with random values and
present both as images. Both data sets have a particular meaning
different from each other, still visually they represent the same —
noise. And if the visual sense matches the numeric one in the second
case, there is almost no chance to correctly interpret the meaning of
the first data set purely visually (as an image).</div>
</td></tr>
</tbody></table>
<br />
This system, however, can easily be extended to the full-color case with a simple solution – each table cell can contain several numbers, and again there are multiple ways of describing a color with a few (usually three) numbers, each in the 0-1 range. In the RGB model they stand for the amounts of Red, Green and Blue light; in HSV – for hue, saturation and brightness respectively. But most importantly – these are still nothing but numbers which encode a particular meaning, but don't have to be interpreted that way. <br />
<br />
Now to the “why it is not a square” part. Because the table that a raster image is tells us how many elements are in each row and column and in which order they are placed, but nothing about what shape or even proportion they are. We can form an image from the data in a file by various means, not necessarily with a monitor, which is only one option for an output device. For example, if we took our image file and distributed pebbles of sizes proportional to the pixel values on some surface – we would still form essentially the same image.<br />
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="http://4.bp.blogspot.com/-JBnxaYtVBsA/VFfkCVLrP9I/AAAAAAAAAeU/AcB1NgDtbTA/s1600/pebbles_web_prw.jpg" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img alt="Displaying raster image data with a set of pebbles" border="0" height="400" src="https://4.bp.blogspot.com/-JBnxaYtVBsA/VFfkCVLrP9I/AAAAAAAAAeU/AcB1NgDtbTA/s1600/pebbles_web_prw.jpg" title="Plotting with pebbles" width="332" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;"><style type="text/css">P { margin-bottom: 0.08in; }</style>
<br />
<div style="margin-bottom: 0in;">
Computer monitor is only one out of
many possible </div>
<div style="margin-bottom: 0in;">
ways to visualize the raster image data.</div>
</td></tr>
</tbody></table>
<br />
<br />
And even if we took only half of the columns, but instructed ourselves to use stones twice as wide for the distribution – the result would still show principally the same picture with the correct proportions, only lacking half of the horizontal detail. “Instruct” is the key word here. This instruction is called the “pixel aspect ratio”, and it describes the difference between the image's resolution (number of rows and columns) and its proportions. It allows storing frames stretched or compressed horizontally and is used in certain video and film formats.<br />
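In Python terms (a sketch; the 1440x1080 anamorphic case with a pixel aspect ratio of 4/3 is a common real-world example from HDV video):

```python
# The displayed proportions follow from resolution AND pixel aspect ratio:
# width in pixels times PAR gives the effective display width.
def display_aspect(width, height, pixel_aspect):
    return width * pixel_aspect / height

print(display_aspect(1440, 1080, 4 / 3))  # 1.777..., i.e. 16:9
print(display_aspect(1440, 1080, 1.0))    # squares would give 4:3 instead
```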
<br />
<table cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="http://2.bp.blogspot.com/-ZpYB92U_EX4/VFfkB-j7ooI/AAAAAAAAAeQ/uGt24YRC5gQ/s1600/pebbles_aspect_web_prw.jpg" imageanchor="1" style="clear: left; margin-bottom: 1em; margin-left: auto; margin-right: auto;"><img alt="Pixel aspect ratio explained in a diagram with pebbles" border="0" height="400" src="https://2.bp.blogspot.com/-ZpYB92U_EX4/VFfkB-j7ooI/AAAAAAAAAeQ/uGt24YRC5gQ/s1600/pebbles_aspect_web_prw.jpg" title="Aspect ratio" width="332" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;"><style type="text/css">P { margin-bottom: 0.08in; }</style>
<br />
<div style="margin-bottom: 0in;">
In this example of an image stored
with </div>
<div style="margin-bottom: 0in;">
the pixel aspect ratio of 2.0, representing pixels </div>
<div style="margin-bottom: 0in;">
as squares
results in erroneous proportions (top). </div>
<div style="margin-bottom: 0in;">
Correct representation needs
to rely </div>
<div style="margin-bottom: 0in;">
on the stretched elements like below.</div>
</td></tr>
</tbody></table>
<br />
<br />
Since we started on resolution – it shows the maximum amount of detail which an image can hold, but says nothing about how much it actually holds. A badly focused photograph won't get improved no matter how many pixels the camera sensor has. In the same way, upscaling a digital image in Photoshop or any other editor will increase the resolution without adding any detail or quality to it – the extra rows and columns are just filled with interpolated (averaged) values of the originally neighboring pixels. <br />
<br />
In a similar fashion, the PPI (pixels per inch) parameter (commonly and mistakenly called DPI – dots per inch) is only an instruction establishing the correspondence between the image file's resolution and the output's physical dimensions. Thus it is pretty much meaningless on its own, without either of those two. <br />
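The relationship can be written out in a line of Python (illustrative function name):

```python
# PPI only links pixel resolution to physical output size;
# on its own it determines neither.
def print_size_inches(pixels, ppi):
    return pixels / ppi

# The same 3000-pixel-wide file prints at different physical sizes:
print(print_size_inches(3000, 300))  # 10.0 inches at 300 PPI
print(print_size_inches(3000, 150))  # 20.0 inches at 150 PPI
```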
<br />
Returning to the numbers stored in each pixel: of course they can be any, including so-called out-of-range values (above 1 or negative). And of course there can be more than 3 numbers stored in each cell. These features are limited only by the particular file format definition and are widely utilized in OpenEXR, to name one. <br />
<br />
The great aspect of storing several numbers in each pixel is their independence. Each of them can be studied and manipulated individually as a monochrome image called a Channel – a sub-raster, if you like. Channels additional to the usual color-describing Red, Green and Blue can carry all kinds of information. The default fourth channel is Alpha, which encodes opacity (0 denotes a transparent pixel, 1 stands for completely opaque). Z-depth, Normals, Velocity (Motion Vectors), World Position, Ambient Occlusion, IDs and anything else you could think of can be stored in either additional or the main RGB channels – it is only data and a way to store it. Every time you render something out, you decide which data to include and where to place it. The same way, you decide in compositing how to manipulate the data you possess to achieve the result pursued. <br />
<br />
This is the numerical way of image-thinking, and I would like to wrap this article up with a few examples of where it proves beneficial. <br />
<br />
We've just mentioned understanding and using render passes, but this aside, pretty much all of compositing requires this perspective. Basic color-corrections, for example, are nothing but elementary math operations on pixel values, and seeing through this is quite essential for productive work. Furthermore, with operations like addition, subtraction or multiplication performed on data like Normals and Position, many 3D shading tools can be mimicked in 2D. <br />
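For instance, two of the most basic corrections sketched in Python (my illustrative formulas for a single normalized value, not any particular package's implementation):

```python
# Basic color corrections are elementary math on normalized pixel values.
def brightness(value, factor):
    return value * factor                    # a "multiply" gain

def contrast(value, amount, pivot=0.5):
    return (value - pivot) * amount + pivot  # scale around a gray pivot

print(brightness(0.5, 1.5))  # 0.75
print(contrast(0.25, 2.0))   # a dark quarter-tone drops to 0.0
```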
<br />
The described perspective is also how programmers see image files, thus especially in the game industry it can help artists achieve a better mutual understanding with developers, resulting in better custom tools and in cutting corners with various tricks like using textures for non-image data. <br />
<br />
And of course the visual effects and motion design. Texture maps controlling properties of a particles emission, RGB displacements forming 3D shapes, encoding multiple passes within RGBA with custom shaders, and on, and on... All these techniques become much more transparent after you start seeing the digits behind the pixels, which is essentially what a pixel is – a number in its place. <br />
<br />
<br />Denis Kozlovhttp://www.blogger.com/profile/07406391692819839722noreply@blogger.com0tag:blogger.com,1999:blog-5149177666770535781.post-59710114207301384502014-11-03T20:41:00.000+01:002017-02-13T19:17:03.630+01:00Procedural Clouds<div class="separator" style="clear: both; text-align: center;">
<a href="http://2.bp.blogspot.com/-9jjIaljelzM/VFfYP-uFBSI/AAAAAAAAAeA/T2sS2k8TCz8/s1600/proc_clouds_DK_prw.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img alt="Sample outputs of self-made procedural clouds generators" border="0" height="400" src="https://2.bp.blogspot.com/-9jjIaljelzM/VFfYP-uFBSI/AAAAAAAAAeA/T2sS2k8TCz8/s1600/proc_clouds_DK_prw.jpg" title="Procedural clouds by Denis Kozlov" width="400" /></a></div>
<br />
I've been playing around with generating procedural clouds lately, and this time before turning to the heavy artillery of full scale 3D volumetrics, spent some time with good old fractal noises in the good old <a href="https://www.blackmagicdesign.com/products/fusion">Fusion</a>. <br />
<br />
<a name='more'></a><br />
So row by row, top to bottom: <br />
<br />
The base fractus cloudform generator is assembled from several noise patterns: from the coarsest one defining the overall random shape to the smallest for the edge work. It is used as a building block in the setups below. The main trick here was not to rely on a single noise pattern, but rather to look for a way to combine several sizes which would maximize the variation of shapes. The quality of the generator seems to be in direct correlation with the time, tenderness and attention spent on fine-tuning the parameters – the setup itself is not really sophisticated. <br />
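Since the Fusion setup itself isn't shown here, a generic sketch of the octave-combining idea in Python (the noise function is a crude hash-based stand-in, not the actual Fusion noise tool):

```python
import math

def noise(x, y, seed=0):
    """Cheap deterministic value-noise stand-in, returning [0, 1)."""
    n = math.sin(x * 12.9898 + y * 78.233 + seed) * 43758.5453
    return n - math.floor(n)  # keep the fractional part

def cloud_density(x, y, octaves=4):
    """Combine several noise sizes, coarse to fine, instead of one pattern."""
    total, weight, frequency, max_total = 0.0, 1.0, 1.0, 0.0
    for octave in range(octaves):
        total += noise(x * frequency, y * frequency, seed=octave) * weight
        max_total += weight
        weight *= 0.5       # finer octaves contribute less...
        frequency *= 2.0    # ...but vary faster (smaller features)
    return total / max_total  # normalize back to [0, 1]
```

The real work, as noted above, is in tuning the relative weights and sizes per cloud type, not in the structure itself.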
<br />
Another thing was not to aim for a universal solution, but to design a separate setup for each characteristic cloud type. Good reference is a must of course. Keeping such system modular helps as well, so that the higher level assets rely on the properly tuned base elements. Second and third rows are nothing more than different modifications of the base shapes into cirrus through warping. All 3 top types are then put onto a 3D-plane and slightly displaced for a more volumetric feeling. <br />
<br />
Clouds in the fourth row are merely 3D bunches of randomized fractus sprites output from the base generator. The effect of shading is achieved through the variance in tones of the individual sprites. <br />
<br />
The lowest samples are more stylized experiments in distorting the initial sphere geometry and cloning secondary elements over its surface. Denis Kozlovhttp://www.blogger.com/profile/07406391692819839722noreply@blogger.com0tag:blogger.com,1999:blog-5149177666770535781.post-55760734719842112082014-09-28T19:18:00.002+02:002017-02-13T19:15:42.738+01:00On Anatomy of CG Cameras<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="http://1.bp.blogspot.com/-6ZNWzi8KJXw/VCg3FF2_xQI/AAAAAAAAAc0/2WDE6qV8UNo/s1600/cg_camera_anatomy.jpg" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img alt="Diagram of the main anatomical elements of a virtual camera" border="0" height="256" src="https://1.bp.blogspot.com/-6ZNWzi8KJXw/VCg3FF2_xQI/AAAAAAAAAc0/2WDE6qV8UNo/s1600/cg_camera_anatomy.jpg" title="Anatomy of a CG Camera" width="400" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Anatomy of a CG Camera</td></tr>
</tbody></table>
<br />
The following article first appeared in issue 180, and was the first in the series of pieces I've been writing for <a href="http://www.creativebloq.com/3d-world-magazine">3D World magazine</a> for some time now - the later ones should follow at a (very) roughly monthly pace as well. The versions I'm going to be posting here are my initial manuscripts, and typically differ (like having worse English and sillier pictures) from what makes it to print after editing. Try to enjoy. <br />
<br />
<a name='more'></a><br /><br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://1.bp.blogspot.com/-i9I5OQUGdKc/VCg-wSyJjVI/AAAAAAAAAdE/oF3h2Is9YDc/s1600/denis_kozlov_cameras_assembly_v001.0001.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img alt="Anatomy of a CG camera by Denis Kozlov - page 1" border="0" height="640" src="https://1.bp.blogspot.com/-i9I5OQUGdKc/VCg-wSyJjVI/AAAAAAAAAdE/oF3h2Is9YDc/s1600/denis_kozlov_cameras_assembly_v001.0001.jpg" title="Anatomy of a CG camera by Denis Kozlov - page 1" width="451" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://3.bp.blogspot.com/-4QUHMGHdGmY/VCg-w5PjGyI/AAAAAAAAAdI/Z8uRHoYr9KQ/s1600/denis_kozlov_cameras_assembly_v001.0002.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img alt="Anatomy of a CG camera by Denis Kozlov - page 2" border="0" height="640" src="https://3.bp.blogspot.com/-4QUHMGHdGmY/VCg-w5PjGyI/AAAAAAAAAdI/Z8uRHoYr9KQ/s1600/denis_kozlov_cameras_assembly_v001.0002.jpg" title="Anatomy of a CG camera by Denis Kozlov - page 2" width="451" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://4.bp.blogspot.com/-rUU-xCjINSU/VCg-w3y-hWI/AAAAAAAAAdM/LDHAL7LvzZ4/s1600/denis_kozlov_cameras_assembly_v001.0003.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img alt="Anatomy of a CG camera by Denis Kozlov - page 3" border="0" height="640" src="https://4.bp.blogspot.com/-rUU-xCjINSU/VCg-w3y-hWI/AAAAAAAAAdM/LDHAL7LvzZ4/s1600/denis_kozlov_cameras_assembly_v001.0003.jpg" title="Anatomy of a CG camera by Denis Kozlov - page 3" width="451" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://3.bp.blogspot.com/-Niqnz7D6vEc/VCg-xsStt0I/AAAAAAAAAdY/PbInxCeZff0/s1600/denis_kozlov_cameras_assembly_v001.0004.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img alt="Anatomy of a CG camera by Denis Kozlov - page 4" border="0" height="640" src="https://3.bp.blogspot.com/-Niqnz7D6vEc/VCg-xsStt0I/AAAAAAAAAdY/PbInxCeZff0/s1600/denis_kozlov_cameras_assembly_v001.0004.jpg" title="Anatomy of a CG camera by Denis Kozlov - page 4" width="452" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://3.bp.blogspot.com/-h_yh6IQDJvc/VCg-yQuGWfI/AAAAAAAAAdg/dd6ohoq33oo/s1600/denis_kozlov_cameras_assembly_v001.0005.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img alt="Anatomy of a CG camera by Denis Kozlov - page 5" border="0" height="640" src="https://3.bp.blogspot.com/-h_yh6IQDJvc/VCg-yQuGWfI/AAAAAAAAAdg/dd6ohoq33oo/s1600/denis_kozlov_cameras_assembly_v001.0005.jpg" title="Anatomy of a CG camera by Denis Kozlov - page 5" width="451" /></a></div>
<br />
<br />
<br />Denis Kozlovhttp://www.blogger.com/profile/07406391692819839722noreply@blogger.com0tag:blogger.com,1999:blog-5149177666770535781.post-12589856987022070842014-06-11T20:22:00.000+02:002017-02-13T19:15:04.550+01:00Typography Basics for Artists. Part 2 - Matching the Typeface<table cellpadding="0" cellspacing="0" class="tr-caption-container" style="float: left; margin-right: 1em; text-align: left;"><tbody>
<tr><td style="text-align: center;"><a href="http://1.bp.blogspot.com/-fDKFxwmNorE/U5iUxE2UI3I/AAAAAAAAAa4/x8nLh-6lpyM/s1600/570px-Typoghaphia.svg.png" imageanchor="1" style="clear: left; margin-bottom: 1em; margin-left: auto; margin-right: auto;"><img alt="Anatomic parts of a glyph according to Wiki" border="0" height="105" src="https://1.bp.blogspot.com/-fDKFxwmNorE/U5iUxE2UI3I/AAAAAAAAAa4/x8nLh-6lpyM/s1600/570px-Typoghaphia.svg.png" title="Anatomy of a typeface" width="400" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;"><div dir="ltr" id="docs-internal-guid-4604447c-8c07-05f8-5eda-1121063f6341" style="line-height: 1; margin-bottom: 0pt; margin-top: 0pt;">
Anatomic parts of a glyph according to Wiki:</div>
1) x-height; 2) ascender line; 3) apex; 4) baseline; 5) ascender; 6) crossbar; 7) stem; 8) serif; 9) leg; 10) bowl; 11) counter; 12) collar; 13) loop; 14) ear; 15) tie; 16) horizontal bar; 17) arm; 18) vertical bar; 19) cap height; 20) descender line.</td></tr>
</tbody></table>
And here it comes finally - the second part of the typography basics for artists, where we're going to address a very common and practical task of matching a typeface to some pre-existing reference. <a href="http://www.the-working-man.org/2013/09/typography-basics-for-artists-part-1.html">The first part can be found here</a>, and again, the material of these posts should be considered as no more than a starting point for further investigation – a hopefully useful introduction into the boundless world which typography is, aimed at those who do not necessarily inhabit it full-time.<br />
<br />
<a name='more'></a><br />
So we have a reference text and want to match its look as closely as possible. First of all we need something to match with. Adobe users have access to a great library of typefaces, which is a blessing on a budget, but even with no budget at all there are online collections to browse out there (“<a href="https://www.google.com/?gfe_rd=cr&ei=C5aYU_WcOqrc8geR9oDwAw&gws_rd=ssl#q=download+fonts+free+for+commercial+use">download fonts free for commercial use</a>” seems to be a nice search line to start with). The “free for commercial use” part is quite important, as many typefaces are freely available only for personal use – fonts are usually distributed with a license text file which is always worth a read. For that reason in particular my preferred online collection is <a href="http://www.fontsquirrel.com/">Font Squirrel</a>. <a href="https://fonts.google.com/about">Google Fonts</a> looks like another great place to visit.<br />
<br />
As soon as we have a typeface library and a quick way of browsing through it – it only takes looking and comparing to find the closest match. Here are a few things to look at.<br />
<br />
1) The sample text. I personally find it most transparent and convenient to use the reference text (or a part of it) itself as the sample line when trying candidate typefaces on. Making sure the test string has some digits and special symbols is a good idea too. Another useful and beautiful tool is the pangram – a phrase containing every letter of an alphabet. Wikipedia offers a <a href="http://en.wikipedia.org/wiki/List_of_pangrams">quite comprehensive list for numerous languages </a>(including Klingon); some of my favorites for English:<br />
<br />
Public junk dwarves quiz mighty fox.<br />
Cozy sphinx waves quart jug of bad milk.<br />
Bored? Craving a pub quiz fix? Why, just come to the Royal Oak!<br />
<br />
For larger volumes there is the classic <a href="https://en.wikipedia.org/wiki/Lorem_ipsum">Lorem ipsum</a> and other <a href="https://en.wikipedia.org/wiki/Filler_text">filler texts</a>.<br />
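Incidentally, checking whether a test phrase really covers the whole alphabet is easy to automate. A minimal Python sketch (the function name is just for illustration; the phrase is one of the pangrams above):

```python
import string

def is_pangram(text, alphabet=string.ascii_lowercase):
    """Return True if every letter of the alphabet occurs in the text."""
    return set(alphabet) <= set(text.lower())

print(is_pangram("Cozy sphinx waves quart jug of bad milk."))  # True
print(is_pangram("The quick brown fox"))                       # False
```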
<br />
<table cellpadding="0" cellspacing="0" class="tr-caption-container" style="float: right; margin-left: 1em; text-align: right;"><tbody>
<tr><td style="text-align: center;"><a href="http://3.bp.blogspot.com/-g1HcTDeWb5o/U5iYHLm5CGI/AAAAAAAAAbE/AiCfPT-S03c/s1600/500px-LowercaseA.svg.png" imageanchor="1" style="clear: right; margin-bottom: 1em; margin-left: auto; margin-right: auto;"><img alt='typographic variants of lowercase "a" grapheme' border="0" height="200" src="https://3.bp.blogspot.com/-g1HcTDeWb5o/U5iYHLm5CGI/AAAAAAAAAbE/AiCfPT-S03c/s1600/500px-LowercaseA.svg.png" title='typographic variants of lowercase "a" grapheme' width="168" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Image by <a href="http://commons.wikimedia.org/wiki/User:GearedBull">GearedBull</a> Jim Hood</td></tr>
</tbody></table>
<a href="http://1.bp.blogspot.com/-JCWJYhs-xp0/U5iYJwd6M4I/AAAAAAAAAbM/FR-jPNfAHrk/s1600/500px-LowercaseG.svg.png" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img alt='typographic variants of minuscule "g" grapheme' border="0" height="200" src="https://1.bp.blogspot.com/-JCWJYhs-xp0/U5iYJwd6M4I/AAAAAAAAAbM/FR-jPNfAHrk/s1600/500px-LowercaseG.svg.png" title='typographic variants of lowercase "g" letter' width="168" /></a>2) One reason to compare the look of all the characters is that even when the other visual parameters (addressed below) of two typefaces match quite closely, the same symbol can still be represented with different graphemes, like the alternative versions of a and g shown on the right. Numbers and special characters allow various visual interpretations as well.<br />
<br />
3) Identifying the typeface in question within a <a href="http://www.the-working-man.org/2013/09/typography-basics-for-artists-part-1.html">broad classification</a> as the first step considerably speeds up the comparison: from then on we can quickly identify and skip the non-relevant styles and focus on closer examination of candidates from the same group only (like Script or Serif).<br />
<br />
4) The next level of precision is considering the contrast (the thickness ratio between the main and supplementary strokes of a typeface) and other proportions of the characters (both overall, like wide or tall letters, and between the elements within each letter, like ascenders, descenders and counters). These qualities play a big part in defining the look of a font, and the habit of thinking of typefaces in terms of their contrast speeds up navigation over the typographic ocean considerably.<br />
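If you ever measure the strokes off a reference scan, the contrast is literally that ratio. A trivial Python sketch, with hypothetical pixel measurements just for illustration:

```python
def stroke_contrast(main_stroke, hairline):
    """Contrast = thickness of the main stroke divided by the hairline stroke."""
    return main_stroke / hairline

# A high-contrast (Didone-like) face versus a low-contrast (grotesque-like) one:
print(stroke_contrast(10, 1))  # 10.0 - strongly contrasted
print(stroke_contrast(10, 8))  # 1.25 - nearly monolinear
```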
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://4.bp.blogspot.com/-kwgXGg0KmEk/U5iZgyZOexI/AAAAAAAAAbU/pTp2ylPudII/s1600/type_contrast.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img alt="The contrast of a typeface is the thickness ratio of main and supplementary strokes" border="0" height="320" src="https://4.bp.blogspot.com/-kwgXGg0KmEk/U5iZgyZOexI/AAAAAAAAAbU/pTp2ylPudII/s1600/type_contrast.jpg" title="The contrast of a typeface" width="292" /></a></div>
<br />
5) And then the details. Typography is all about balance in proportion and fine finishing, so what would be considered minor in most other visual arts becomes diverse and intricately nuanced here. Shapes of the serifs, ending elements, connections between strokes – all have space for diversity. Here is <a href="http://pedamado.files.wordpress.com/2013/04/2012-pedro-amado-multilanguage-typeface-terminology-3et2012_v12.pdf">a very cool PDF</a> listing the typographic elements. The style of those elements is also subject to fashion, and certain details can attribute a typeface to a particular temporal or stylistic group.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://3.bp.blogspot.com/-EXS1B219Bsg/U5iZgxn0IaI/AAAAAAAAAbc/ted7HGJpsOI/s1600/typographic_details.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img alt='Different versions of serif "T" letter' border="0" height="225" src="https://3.bp.blogspot.com/-EXS1B219Bsg/U5iZgxn0IaI/AAAAAAAAAbc/ted7HGJpsOI/s1600/typographic_details.jpg" title="Variety of typographic details" width="400" /></a></div>
<br />
<br />
<br />
The next part, whenever it chooses to arrive, is going to cover the basics of display typesetting. Denis Kozlovhttp://www.blogger.com/profile/07406391692819839722noreply@blogger.com0tag:blogger.com,1999:blog-5149177666770535781.post-33525529239666974002014-03-10T22:55:00.000+01:002017-02-13T19:07:59.934+01:00My article on CG cameras in 3D World magazine<div class="separator" style="clear: both; text-align: center;">
<a href="http://2.bp.blogspot.com/-zaDG1xnpqF0/Ux4yyo_RmZI/AAAAAAAAAZs/D0p3MligRsg/s1600/TDW180_cover_print_sm.jpg" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" height="200" src="https://2.bp.blogspot.com/-zaDG1xnpqF0/Ux4yyo_RmZI/AAAAAAAAAZs/D0p3MligRsg/s1600/TDW180_cover_print_sm.jpg" width="147" /></a></div>
It should be out and on the shelves by now. Unfortunately, a few errors sneaked into the printed version of the article. However, the editors promised to fix those in the digital edition and to put the corrected PDF into the online 'Vault', which all print readers have access to when they buy the issue.<br />
<br />
<a href="http://www.creativebloq.com/3d-world-magazine">3D World Website </a><br />
<br />
A little preview of the article below.<br />
<br />
<a name='more'></a><br /><br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://1.bp.blogspot.com/-rxbIGUK5oAE/Ux4yyK9UaDI/AAAAAAAAAZw/2G8oijln2aQ/s1600/TDW180.ff_theory_sm.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="270" src="https://1.bp.blogspot.com/-rxbIGUK5oAE/Ux4yyK9UaDI/AAAAAAAAAZw/2G8oijln2aQ/s1600/TDW180.ff_theory_sm.jpg" width="400" /></a></div>
<br />Denis Kozlovhttp://www.blogger.com/profile/07406391692819839722noreply@blogger.com0tag:blogger.com,1999:blog-5149177666770535781.post-2131856864743022352014-03-02T16:18:00.001+01:002017-02-13T19:07:23.078+01:00CTU's Faculty of Mechanical Engineering videoDouble no: No, I didn't forget about the next part of a typography article and No, I didn't lie claiming it will take a while... And while a while continues, here is a piece of recent work I accomplished with the guys at <a href="http://dpost.cz/">DPOST Prague</a>.<br />
<br />
<br />
<iframe allowfullscreen="" frameborder="0" height="281" mozallowfullscreen="" src="//player.vimeo.com/video/84875334?portrait=0&color=e1007a" webkitallowfullscreen="" width="500"></iframe> <br />
<a href="http://vimeo.com/84875334">Czech Technical University 150th Anniversary</a> from <a href="http://vimeo.com/dpostprague">DPOST Prague</a> on <a href="https://vimeo.com/">Vimeo</a>.<br />
<br />
Aside from wearing both the Director's and Art Director's hats, I spent quite some time hands-on with the material here, taking the 3D work into Houdini to design the cube effects, animate and render.<br />
<br />
<a name='more'></a>Probably the only 3D parts of the spot that are not mine are the inner models like engines and such, modeled by Victor Tretyakov or provided by the client and the generous Public Domain. We share compositing credits with Denis Kosar, who was producing the job and multitasking as well. And by no means am I forgetting Marek Duda, who helped with fitting the edit together. Music by Lukas Turza.<br />
<br />
We took a good portion of the look development into compositing, which can be clearly seen in this little making-of quad:<br />
<br />
<br />
<iframe allowfullscreen="" frameborder="0" height="281" mozallowfullscreen="" src="//player.vimeo.com/video/85540098?portrait=0&color=e1007a" webkitallowfullscreen="" width="500"></iframe> <br />
<a href="http://vimeo.com/85540098">Czech Technical University - making of</a> from <a href="http://vimeo.com/dpostprague">DPOST Prague</a> on <a href="https://vimeo.com/">Vimeo</a>.<br />
<br />
<br />
<br />Denis Kozlovhttp://www.blogger.com/profile/07406391692819839722noreply@blogger.com2tag:blogger.com,1999:blog-5149177666770535781.post-82631175064902056002013-12-02T22:33:00.002+01:002017-02-13T19:05:37.930+01:00Two Killer Tips for Mastering Any Software<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="http://3.bp.blogspot.com/-pfk1J9el5CM/Upz7yhFuePI/AAAAAAAAAV4/nFDeBcgY0Lo/s1600/rtfm.jpg" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img alt="RTFM and Ctrl-Alt-RESET" border="0" height="234" src="https://3.bp.blogspot.com/-pfk1J9el5CM/Upz7yhFuePI/AAAAAAAAAV4/nFDeBcgY0Lo/s400/rtfm.jpg" title="RTFM and use hotkeys" width="400" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">RTFM. Please.</td></tr>
</tbody></table>
<br />
<br />
At different stages of my career I've been paid for working in Houdini, Nuke, 3DSMax, XSI, Fusion, Maya, Shake, Blender and After Effects, among other applications. I've also used Lightwave, 3D-Coat, Combustion, Rayz and many other things – not even mentioning programs like Photoshop, Corel Draw or Inkscape here. Of course I'm not a master of most of them, but I think I'm OK at learning new software, and here are the two tricks I know. <br />
<br />
<a name='more'></a><br />
As obvious as they are, it is amazing how often even quite experienced artists manage to ignore them. <br />
<br />
First one: RTFM. Read <strike>That Freaking</strike> The Following Manual. Seriously. You get your new toy, you play around, things go more or less easily, and you either abandon it or start thinking you already know it inside out... Well, if you're planning on using that toy in the future – start reading the documentation ASAP, and try doing it top to bottom. <br />
<br />
Every piece of software comes with a manual. Some are good, some less so, but all of them contain a much wider perspective on the tools than you would get elsewhere (or at least from a lone journey). They also often describe the intended use of not-always-obvious features and address particular working techniques – personally, I've learned a good deal of software-independent tricks and methods from manuals alone. <br />
<br />
It takes less time than it seems to go through the whole book. And although you probably won't memorize or even completely understand all of it on the first reading, the further you get, the better idea you'll have of what your toy is capable of – and the less time you'll spend figuring out the answer when confronting new tasks in real production. <br />
<br />
Because you don't abandon the documentation after the first reading – you just start using it as a reference from then on, since by now you have a good idea of where to find what. And (this might sound a bit shocking) there is a search function too! Again, strictly personally, the first thing I do whenever I get stuck with a piece of software is press F1 and type the issue into the search field. Google helps as well, of course. <br />
<br />
The second tip is as groundbreaking as the first one: use hotkeys. The same way you put your hands at 10 and 2 for driving, while holding a mouse or a stylus in one hand, it is good practice to keep the other one on the keyboard when operating graphics software. It is just plain faster. Times faster. And it is quite addictive once you start, so it only takes overcoming a little laziness once. Unless you're already doing so (and you probably are, but just in case) – look up the keyboard shortcuts for the 5-10 functions you use most often (they are usually listed in the menus, when hovering over the tool button, and/or in the manual (see above)) – and start accessing those functions with the keyboard rather than the mouse. Give it an hour, and if your quality of life does not improve – do not listen to me anymore. <br />
<br />
Thank you. Hope it helps. <br />
<br />
Might return to the <a href="http://www.the-working-man.org/2013/09/typography-basics-for-artists-part-1.html">typography article</a> next time, but that would mean a longer pause as well – we'll see...Denis Kozlovhttp://www.blogger.com/profile/07406391692819839722noreply@blogger.com0tag:blogger.com,1999:blog-5149177666770535781.post-82706191071140931862013-11-11T20:49:00.000+01:002013-11-11T20:49:19.655+01:00Animusic - Part 2After sharing a few introductory words in <a href="http://www.rock-is-dead.info/2013/11/animusic-part-1.html">Part 1</a> at <a href="http://www.rock-is-dead.info/">www.rock-is-dead.info</a> - here are the wireframes we all love so much.<br />
<br />
<br />
<iframe allowfullscreen="" frameborder="0" height="360" src="//www.youtube.com/embed/2lExkQMEHhU?feature=player_detailpage" width="480"></iframe>
<br />
It was incredible ten years ago, and it is incredible now. Procedural animation - the concept that keeps fascinating my bent mind, and a concept that wouldn't have been possible in any previous era. The idea that instead of telling a computer what to draw, you rather teach it how to draw things, changes the whole landscape for me.<br />
<br />The Animusic project was started by two artists, Wayne Lytle and Dave Crognale. Their proprietary software uses MIDI input to drive the animation in commercial programs like 3DSMax or XSI, producing results of often mind-bending complexity. They go into <a href="http://animusic.com/company/software.php">more detail at their website</a>. I would also recommend watching the behind-the-scenes material on <a href="http://www.youtube.com/user/AnimusicLLC/videos">their YouTube channel</a>.<br />
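Their toolchain is proprietary, but the core idea – note events becoming animation keyframes – can be sketched in a few lines. A toy Python illustration (everything here, from the event tuples to the hammer_* rig naming, is made up for the sketch and has nothing to do with their actual system):

```python
# Toy MIDI-to-animation mapping: each note-on event (time in seconds,
# pitch, velocity 0-127) becomes a strike keyframe on a hypothetical
# per-pitch "hammer" rig, with amplitude scaled by velocity.
events = [(0.0, 60, 100), (0.5, 64, 80), (1.0, 67, 127)]  # made-up data

def events_to_keyframes(events, max_angle=30.0):
    """Return one keyframe dict per note-on event."""
    keys = []
    for time, pitch, velocity in events:
        keys.append({"time": time,
                     "target": f"hammer_{pitch}",           # one hammer per pitch
                     "angle": max_angle * velocity / 127})  # louder = bigger swing
    return keys

for key in events_to_keyframes(events):
    print(key)
```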
<br />
Enjoy.Denis Kozlovhttp://www.blogger.com/profile/07406391692819839722noreply@blogger.com0tag:blogger.com,1999:blog-5149177666770535781.post-44207917739237752062013-10-13T21:03:00.000+02:002013-10-13T21:14:12.338+02:00Understanding Images<table cellpadding="0" cellspacing="0" class="tr-caption-container" style="float: left; margin-right: 1em; text-align: left;"><tbody>
<tr><td style="text-align: center;"><a href="http://2.bp.blogspot.com/-JDavRu8p1j8/Ulrw4W0hPJI/AAAAAAAAAU8/90vDoSI7URs/s1600/colorform.jpg" imageanchor="1" style="clear: left; margin-bottom: 1em; margin-left: auto; margin-right: auto;"><img alt="Form-color correspondence according to Bauhaus" border="0" height="163" src="http://2.bp.blogspot.com/-JDavRu8p1j8/Ulrw4W0hPJI/AAAAAAAAAU8/90vDoSI7URs/s200/colorform.jpg" title="shape-color correspondence according to Bauhaus" width="200" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Form-color correspondence<br />
according to Bauhaus</td></tr>
</tbody></table>
<div style="text-align: left;">
</div>
I once heard they study <a href="http://en.wikipedia.org/wiki/My_Neighbor_Totoro">Totoro</a> in aesthetics classes in Japanese schools. Aesthetics classes… Wouldn’t this freaking world be better if we had some?<br />
<br />
At least we can learn things on our own. And one thing I stumbled upon only a few months ago, but which I believe should be obligatory for anyone dealing with images in one way or another, is the <a href="http://char.txa.cornell.edu/">Language of Design course by Charlotte Jirousek</a> of Cornell University. Abstract, objective, well-rounded, on a topic so vital and so overlooked. <br />
<br />
Wish I had known of it years back - it would’ve saved a certain amount of time. But even now I find it an incredibly useful read. And the practice of authors and universities keeping such courses publicly accessible is admirable, to say the least.<br />
<br />
Enjoy.<br />
<br />
Something abstract and procedural next time.Denis Kozlovhttp://www.blogger.com/profile/07406391692819839722noreply@blogger.com0tag:blogger.com,1999:blog-5149177666770535781.post-65707984119633374462013-09-30T22:12:00.001+02:002017-02-13T19:04:36.541+01:00Typography Basics for Artists. Part 1 - Broad Classification<table cellpadding="0" cellspacing="0" class="tr-caption-container" style="float: left; margin-right: 1em; text-align: left;"><tbody>
<tr><td style="text-align: center;"><a href="http://2.bp.blogspot.com/-sOppoOrINeY/UknXHrZYSYI/AAAAAAAAAUE/o-YTPXrF3xM/s1600/472px-Font_types.jpg" imageanchor="1" style="clear: left; margin-bottom: 1em; margin-left: auto; margin-right: auto;"><img alt="major font styles" border="0" height="320" src="https://2.bp.blogspot.com/-sOppoOrINeY/UknXHrZYSYI/AAAAAAAAAUE/o-YTPXrF3xM/s320/472px-Font_types.jpg" title="major font styles" width="252" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Major type styles.</td></tr>
</tbody></table>
Typography is a separate world in its own right. It lives according to a myriad of rules - aesthetic, conventional, optical and technical. Few professions include an understanding of this world in the job description, and they mostly contain the word “designer” in the name - like graphic designers or (suddenly) typeface designers. Among artists, however, it is not uncommon to be far less familiar with the principles involved in creating, manipulating and judging fonts. Still, it’s valuable knowledge for anyone dealing with images, which I’d like to address here. By no means do I claim to be an expert in the field - I’m rather trying to draw some directions for further research, which from my own experience might take some time to establish. As in most cases, a great place to start is Wikipedia’s articles on <a href="http://en.wikipedia.org/wiki/Typography">Typography</a> and <a href="http://en.wikipedia.org/wiki/Typeface">Typeface</a>. The trick is to keep digging further, exploring the related links. <br />
<br />
<a name='more'></a><br />
The word “font” refers to a particular combination of a typeface’s size, weight and style. Thus <span style='font-family: "arial" , "helvetica" , sans-serif; font-size: x-small;'>Arial 12</span> and <span style='font-family: "arial" , "helvetica" , sans-serif; font-size: small;'>Arial 16</span> should be considered different fonts. What they share is a typeface (font family). Basically, “typeface” is the more proper term to refer to a font family like Helvetica or Courier. <br />
<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://3.bp.blogspot.com/-4AjIT7fbZNw/UknWHd9sFDI/AAAAAAAAAT0/5OVflRIt16I/s1600/Typography_Line_Terms.svg.png" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" height="85" src="https://3.bp.blogspot.com/-4AjIT7fbZNw/UknWHd9sFDI/AAAAAAAAAT0/5OVflRIt16I/s320/Typography_Line_Terms.svg.png" width="320" /></a></div>
Be prepared that terminology and classification differ from source to source, though usually not too drastically. Reading multiple sources is helpful, since you can familiarize yourself not only with different terms and conventions, but with different opinions as well. The cover picture for this post, which I borrowed from Wikipedia, shows some of the major typeface styles. <br />
<br />
<div class="separator" style="clear: both; text-align: center;">
</div>
The two biggest classes typefaces can be divided into are text types and display types. Text types are those suitable for large bodies of text, as in books or newspapers. It is said a typeface designer is lucky to create a single text typeface (both new and good) over a whole career. This also means that if you’re looking for free text types, your best chances are with those that come with the OS or other software you are licensed to use, rather than with free online collections. The criteria for a successful text type are pretty much the opposite of the goals which art, design and often even display types seek to achieve. Instead of expressiveness and drawing attention to itself, it seeks to lose all attention possible, becoming invisible, since its main goal is to assist reading - the text is the hero, and the typeface must step back. What is required, however, is perfect balance and rhythm (in kerning, contrast, ascending and descending elements). Achieving this balance and uniformity while preserving aesthetic qualities without distinct features takes tons of skill, taste and patience to iterate, test and adjust the design over and over again until it’s ideally polished. It’s not an easy job, it deserves respect, and it is another reason those typefaces cost money. <br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://2.bp.blogspot.com/-a4097qc42qM/UknXldbBg9I/AAAAAAAAAUM/8bdzrqbeSpU/s1600/text_vs_display_type.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img alt="text types vs display types example" border="0" src="https://2.bp.blogspot.com/-a4097qc42qM/UknXldbBg9I/AAAAAAAAAUM/8bdzrqbeSpU/s1600/text_vs_display_type.jpg" title="text vs display types" /></a></div>
<br />
Quite often, although not necessarily, a text type will possess serifs, due to their ability to accentuate the horizontal lines of text, helping the eye not to slip. <br />
<br />
Illustrators and other visual artists usually deal with much smaller amounts of text, which often requires organizing, prioritizing and accenting. This is where display types are used. They generally require less work to design (less does not equal little) and are probably the majority. Display types are intended to be used in large sizes, say from 30pt upwards (“pt” stands for <a href="http://en.wikipedia.org/wiki/Point_%28typography%29">point</a>, a typographical unit of size measurement). They come in all possible shapes and sizes, ranging from reserved to wildly screaming and blurring the edge between a letterform and a decorative element. This is a beautiful, fantastic, dizzying world, which makes it important for a designer not to lose focus. <br />
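For reference, points translate to on-screen pixels by a simple formula: one point is 1/72 of an inch, so px = pt × dpi / 72. A small Python sketch (the 96 dpi default is merely the common screen assumption, not a standard of this blog):

```python
def pt_to_px(points, dpi=96):
    """Convert typographic points to pixels; 1 pt = 1/72 inch (PostScript point)."""
    return points * dpi / 72

print(pt_to_px(30))          # 40.0 px at a typical 96 dpi screen
print(pt_to_px(12, dpi=300)) # 50.0 px at 300 dpi print resolution
```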
<br />
I'm going to address this issue, as well as more details on font classification, in the next parts: <br />
<br />
<a href="http://www.the-working-man.org/2014/06/typography-basics-for-artists-part-2.html">2 – Matching the typeface </a><br />
3 – Display typesetting basics <br />
4 – Historical context factor <br />
5 – Resources and miscellaneous <br />
<br />
Following my life's organizational principle abbreviated M.E.S.S. (too lazy to invent the decryption), these parts will be appearing on the blog in the announced order, but through uncertain time intervals, interleaved with other stuff. <br />
<br />
Again, my goal here is to announce a topic for research rather than to write a compendium, so at this stage you're probably better off reading more competent sources anyway. Denis Kozlovhttp://www.blogger.com/profile/07406391692819839722noreply@blogger.com0tag:blogger.com,1999:blog-5149177666770535781.post-77326571630167822372013-09-15T18:25:00.000+02:002013-09-30T22:13:33.863+02:00Couple of old works revivedWhile <a href="http://www.the-working-man.org/2013/09/typography-basics-for-artists-part-1.html">the article announced last week</a> continues cooking itself, here go, as an intermission, a couple of images which I found in the attic of a hard drive and tried to shake some dust off this week.<br />
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="http://4.bp.blogspot.com/-Yug1h7YOKcI/UjXd3ZJIAHI/AAAAAAAAATA/X3KiAlP8bEk/s1600/Masquerade_Denis_Kozlov_1000px.jpg" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img alt="Masquerade - image by Denis Kozlov www.kozlove.net" border="0" height="266" src="http://4.bp.blogspot.com/-Yug1h7YOKcI/UjXd3ZJIAHI/AAAAAAAAATA/X3KiAlP8bEk/s400/Masquerade_Denis_Kozlov_1000px.jpg" title="Masquerade by Denis Kozlov" width="400" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;"><a href="http://fineartamerica.com/featured/masquerade-denis-kozlov.html">Masquerade</a></td></tr>
</tbody></table>
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="http://4.bp.blogspot.com/-C8rRNFlOlic/UjXd271ucqI/AAAAAAAAAS8/-TnY3tU4KGc/s1600/Grande_Pellicano_Denis_Kozlov_1000px.jpg" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img alt="Grande Pellicano - image by Denis Kozlov www.kozlove.net" border="0" height="300" src="http://4.bp.blogspot.com/-C8rRNFlOlic/UjXd271ucqI/AAAAAAAAAS8/-TnY3tU4KGc/s400/Grande_Pellicano_Denis_Kozlov_1000px.jpg" title="Grande Pellicano by Denis Kozlov" width="400" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;"><a href="http://fineartamerica.com/featured/grande-pellicano-denis-kozlov.html">Grande Pellicano</a></td></tr>
</tbody></table>
<br />Denis Kozlovhttp://www.blogger.com/profile/07406391692819839722noreply@blogger.com0