That’s enough romance – let’s get to the techy bits!
In this writeup I will explain my thoughts and describe the techniques I used whilst creating this stylised self portrait. I’ve wanted to do a self portrait for a while now because it is such a challenge: since you know yourself so well, you get very picky, think extra hard about shapes and forms, and can easily spot when something is off. It’s a great exercise that I think everyone should do at some point.
Composition-wise I wanted to go for an image that is easy on the eyes. Recently I’ve really noticed how much more I enjoy this kind of picture: your eyes need space to rest and move around the frame, and this piece is an exercise in allowing for that.
Mainly shading- and rendering-related subjects will be covered, as that’s my main area of interest. Arnold is my engine of choice, and the more I learn about it the more I like it, due to the very well thought out philosophy behind it. I will start by looking at the sculpting of the face, explore some Marvelous Designer work with the shirt and then move on to how I approach shading certain interesting objects: the cap, shirt, window and eyes. I will also discuss some procedural noise tricks, both for the waterdrops and for an occlusion-based dirt shader.
I pose the arms in ZBrush, export the important objects and simulate the shirt on top of them. I always really enjoy making the garments: I look for sewing patterns on the internet and try to replicate them whilst fitting them for my purposes.
One common problem is adding thickness to the simulated garments whilst retaining the perfect UVs that come with the simulation. There are many ways to get to the same end result. I duplicate the single-sided geometry, move its UVs to a new tile, then move the vertices along their normals – just like an extrude – and bridge in between the two shells. This gives me full control over how the UVs are manipulated, and I can ensure they don’t get mangled somewhere along the line.
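The core of that thickness step – pushing each vertex out along its own normal – can be sketched in a few lines of numpy. This is only an illustration of the maths, not the actual DCC workflow; the function name and the thickness value are made up, and the bridging of the two shells would still happen in Maya.

```python
import numpy as np

def offset_along_normals(verts, normals, thickness=0.01):
    """Offset every vertex along its normal, like an extrude.
    verts/normals are (N, 3) arrays; normals need not be unit length."""
    unit = normals / np.linalg.norm(normals, axis=1, keepdims=True)
    return verts + unit * thickness

# A vertex with a +Z normal moves straight up by the thickness amount.
v = np.array([[0.0, 0.0, 0.0]])
n = np.array([[0.0, 0.0, 2.0]])  # deliberately non-unit
print(offset_along_normals(v, n, 0.01))  # → [[0. 0. 0.01]]
```

Because the copy keeps the same vertex order and only moves positions, the UVs on the offset shell stay identical to the simulated ones.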
For the colour map I photographed my own shirt and found a tileable patch. I used the classic technique of adjusting the diffuse channel based on the angle between the surface normal and the camera. Although a dirty cheat, it emulates the thin stray fibres that catch light on real fabrics quite nicely.
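The facing-ratio trick boils down to brightening the diffuse as the surface turns away from the camera. Here is a minimal numpy sketch of that idea; the function name and the gain value are my own made-up examples, not part of the original shader network.

```python
import numpy as np

def fibre_edge_brighten(diffuse, normal, view_dir, gain=0.35):
    """Scale the diffuse colour up towards the silhouette.
    facing = |N.V| is 1 head-on and 0 at a grazing angle, so the
    multiplier runs from 1.0 (facing camera) up to 1 + gain (edge)."""
    n = normal / np.linalg.norm(normal)
    v = view_dir / np.linalg.norm(view_dir)
    facing = abs(float(np.dot(n, v)))
    return np.asarray(diffuse) * (1.0 + gain * (1.0 - facing))
```

Head-on the colour is unchanged; at the silhouette it is lifted by the gain, which reads as stray fibres catching the light.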
In Maya 2016 a colour management system was introduced. Practically this means that all Arnold gamma settings need to be set to 1, and that all textures are expected to be linear. You linearise when converting to .tx files. Certain types of texture are already linear and don’t need to be converted: a simple way of deciding whether a map is linear is whether it contains values as opposed to colours. This is the case for displacement, bump, normal, roughness and mask maps, etc., so you don’t need to set extra flags for these. For colour maps however, such as albedo or specular colour, you need to add the maketx flag “--colorconvert sRGB linear”. Mathematically it is necessary to do this conversion before mipmapping; if you were to apply a .4545 gamma transform afterwards instead, you would get slightly incorrect results – especially when viewed from a distance.
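A quick numerical check makes the “convert before mipmapping” point concrete. Mipmapping averages texels, and averaging in sRGB space gives a different (wrong) answer from averaging in linear space – here sketched with the standard piecewise sRGB decode, which is the kind of transform a colorconvert from sRGB to linear performs:

```python
def srgb_to_linear(c):
    """Standard piecewise sRGB decode for a single channel in [0, 1]."""
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

# Average two texels (0.0 and 1.0) as a mip level would:
linear_first = (srgb_to_linear(0.0) + srgb_to_linear(1.0)) / 2  # correct: 0.5
srgb_first = srgb_to_linear((0.0 + 1.0) / 2)                    # wrong: ~0.214
print(linear_first, srgb_first)
```

The two results differ by more than a factor of two, which is exactly the darkening you would see on distant, heavily mipmapped textures if the gamma were removed after filtering instead of before.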
The cap is a corduroy material, which is tricky to make. As with most materials, I start off by creating a displacement map for it. I create these first because I don’t want to get misled by other shading properties such as SSS. I layer several displacement maps in Maya using alRemapFloat and alCombineFloat nodes: for example, I start off with a ridges map I created in Photoshop, layer a displacement map extracted from a T-shirt on top to get some creases in there, and finish off with a high-frequency noise map to emulate thin fibres sticking out. Layering these within Maya gives me a very short feedback cycle – I don’t have to re-export and re-mipmap textures.
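Conceptually the node network is just a weighted sum of remapped maps. The sketch below mimics that in numpy – the remap ranges are invented example weights (strong ridges, subtle creases, tiny fibre detail), not the values from the actual scene:

```python
import numpy as np

def remap(x, out_min, out_max):
    """Crude stand-in for alRemapFloat: map a clamped 0-1 map to a new range."""
    return out_min + np.clip(x, 0.0, 1.0) * (out_max - out_min)

def cap_displacement(ridges, creases, fibres):
    """Layer three displacement maps, alCombineFloat-style addition.
    Ranges are made-up example weights for each layer's contribution."""
    return (remap(ridges, 0.0, 1.0)        # dominant corduroy ridges
            + remap(creases, -0.08, 0.08)  # subtle push/pull creases
            + remap(fibres, 0.0, 0.02))    # fine fibre noise on top
```

Because each layer keeps its own remap node, any weight can be rebalanced interactively without touching the source textures.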
It is often said that the eyes are the gateway to the soul, and personally I couldn’t agree more. Being able to see deep into the eyes of a character is crucial, but it is also important that they stay cartoony. There are a few key aspects to creating believable eyes: the sclera has to feel milky, there has to be a nice, smooth transition between the iris and the sclera with some colour bleeding, and nice shadows and highlights have to be cast within the iris. Last but not least, to pick up the highlights on the iris, I sculpt a displacement map for this part of the eye. This, in combination with the shadow/highlight ratio, gives the eye the visual depth it needs.
I build the eyes with a layered-shader approach in order to get a nice transition between the iris and sclera shaders.
After looking at real windows I decided to use 4 pieces of glass geometry, simulating insulated windows. This is key to getting nice, double reflections in the windows.
The rain is created on the outer layer of glass by displacing the geometry. The displacement shader setup for this is simple – I combine several noise shaders with an alLayerColor node and then clip the result using an alRemapFloat node. To get realistic waterdrops, including streaking ones, I adjust the scale of the noise along certain axes and vary the overall frequencies.
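The combine-then-clip idea can be illustrated with a toy version in numpy. This is not the Arnold network itself – the cheap blocky noise, weights and threshold below are all stand-ins – but it shows why clipping a sum of noises yields isolated drops, and why stretching one noise along an axis turns drops into streaks:

```python
import numpy as np

rng = np.random.default_rng(7)

def stretched_noise(h, w, fy, fx):
    """Cheap blocky band-limited noise: a coarse fy x fx random grid
    blown up to h x w. Unequal fy/fx stretches the pattern along one
    axis - the trick that turns round drops into streaks."""
    grid = rng.random((fy, fx))
    return np.kron(grid, np.ones((h // fy, w // fx)))

def rain_displacement(h=64, w=64, threshold=0.75):
    """Sum two noise layers (the alLayerColor step), then clip away
    everything below a threshold (the alRemapFloat step), leaving
    isolated peaks that displace as drops."""
    drops = stretched_noise(h, w, 16, 16)    # roundish drops
    streaks = stretched_noise(h, w, 4, 16)   # stretched vertically
    mix = 0.7 * drops + 0.3 * streaks
    return np.clip((mix - threshold) / (1.0 - threshold), 0.0, 1.0)

disp = rain_displacement()
print(disp.shape)  # (64, 64), mostly zero with sparse drop peaks
```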
This is one of the trickiest elements of this project. I tested several setups with layered shaders or mixing the diffuse channel back in – but in the end my initial approach of adjusting the roughness of the closest glass layer worked best.
I used several maps containing dried water, wipe marks and scratches. Instead of using tiling textures in Mari and baking them down to a new texture map, I paint masks which I can then use in my shader setup. This lets me use the power of tiling textures to the fullest and achieve much higher effective resolutions with far fewer resources. It also means I can balance the various layered components much more quickly.
Mathematical noise is super fun and has many uses in CG. A neat way to add nice dirt to your models is to combine the power of an occlusion shader and a noise shader, by remapping your occlusion shader with a fractal node. If you then want more breakup in that pattern, you can feed another noise into one of the fractal’s two colours. Really believable effects can be achieved by varying the frequencies and axis scales.
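As a sketch of the idea, here is the occlusion-times-fractal combination in numpy. The function and parameter names are invented for illustration; in the actual setup these would be shader nodes, with the occlusion and fractal evaluated by the renderer:

```python
import numpy as np

def dirt_mask(occlusion, fractal, breakup=None, amount=1.0):
    """Dirt collects where occlusion is low (crevices), with its edge
    broken up by a fractal. occlusion: 1 = fully open, 0 = crevice.
    Optionally modulate the fractal with a second noise, like feeding
    a noise pattern into one of the fractal's two colours."""
    if breakup is not None:
        fractal = fractal * (0.5 + 0.5 * breakup)
    return np.clip((1.0 - occlusion) * fractal * amount, 0.0, 1.0)
```

Fully open surfaces get no dirt at all, while crevices get the full noisy pattern – exactly the grounded, non-uniform grime the remap trick produces.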
I created the eyelashes using XGen. Instead of randomly placing curves and painting a mask, I opted to place the hairs manually. This was easy – just brushing them on. After that I used the power of XGen’s clumping and noise modifiers to give them a pleasing, natural appearance.
Since the image is clearly split into a foreground and a background, it makes sense to render it in 2 layers. By doing this I avoid all the problems that come with distance-based convolving where there is a significant distance between objects, or where the out-of-focus objects are thinner than the convolving radius. To do this, I create a set of all the background elements and assign an aiMatte override to the set. This renders the objects completely black, as if rendered with a black surface shader, but maintains the shading properties – and therefore the light interaction – of the shader. This means light can still bounce off the objects in the set and contribute to the light information reaching the objects we want to composite on top.
Light groups are an incredible feature that comes with the alShader library – they store the light contribution of selected lights in separate AOVs. This means I can not only tweak light intensity and colour in compositing, but also fine-tune other AOVs on a per-light basis. All you need to do is add an integer attribute “mtoa_constant_lightGroup” to your light’s shape node and enable the corresponding AOVs. Super useful!
Real cameras are noisy, which means you should consider adding film grain or sensor noise to your CG work too to make it more realistic. Watch out though – the noise is different for the R, G and B channels. The blue channel usually contains much more noise than the red and green channels, because the eye is significantly less sensitive to blue light and camera sensors take advantage of that. So when compositing noise, add it in using 3 different merge nodes with different mix values.
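The per-channel mix can be sketched directly in numpy – the equivalent of three merge nodes with different mix values. The strength numbers below are arbitrary starting points, not measured sensor values; the only deliberate choice is that blue gets the most:

```python
import numpy as np

rng = np.random.default_rng(0)

def add_grain(img, strength=(0.010, 0.008, 0.020)):
    """Add independent Gaussian grain per channel. strength is the
    per-channel mix (R, G, B); blue is strongest, green weakest,
    mirroring the behaviour described above."""
    return img + rng.standard_normal(img.shape) * np.asarray(strength)

frame = np.full((64, 64, 3), 0.5)
noisy = add_grain(frame)
print((noisy - frame).std(axis=(0, 1)))  # blue channel clearly noisiest
```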
Chromatic aberration is a cool lens imperfection that can add realism to your renders. It occurs because different wavelengths of light travel at different speeds through a lens and therefore have a different exit angle when leaving that lens. This is why certain wavelengths get focused to different points and we can see shifting of colours.
It is super important to know that this is a depth-based effect, so please don’t just shift your channels uniformly across your image. Instead, where there is more defocus blur, there should be more chromatic aberration. Another interesting observation is that the colour shifts are different for points in front of and behind the focal plane.
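A toy version of the depth-dependence looks like this: the channel shift is scaled by how defocused a layer is, so in-focus pixels stay untouched. This is a deliberately crude horizontal-shift sketch with made-up names; a proper comp setup would warp radially from the lens centre and flip the shift direction for points in front of versus behind the focal plane:

```python
import numpy as np

def chromatic_aberration(img, defocus, max_shift=3):
    """Shift red and blue in opposite directions, scaled by the
    layer's defocus amount (0 = in focus -> no shift at all)."""
    shift = int(round(max_shift * float(np.clip(defocus, 0.0, 1.0))))
    out = img.copy()
    if shift:
        out[..., 0] = np.roll(img[..., 0], shift, axis=1)   # red right
        out[..., 2] = np.roll(img[..., 2], -shift, axis=1)  # blue left
    return out
```

Driving `defocus` from a depth/defocus AOV per layer gives the key property from the text: more blur means more fringing, and sharp areas keep clean edges.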