I just finished my latest personal project, Path tracer, so.. time for a breakdown! As always please allow the images to load – I prefer to use high quality images in posts like these 🙂
In this write-up I will also post outdated, early work in progress. This is something I didn’t think about enough when I started out with 3D: I looked at the finished work of other people and thought the amazing artists I admired just did it right the first time.. But I’ve now learned that that obviously isn’t the case, and to everyone starting out with the medium – I’d like to show that. Creating a 3D image is a process of mental ups and downs: losing interest, gaining interest, losing interest once again, and the perseverance to finish what you started, to finally reach the point where you are proud of your work 🙂
For me this was really tough with this project, since I worked on it on and off for a good 10 months, doing many other projects in between (e.g. Internet Explorer (a 1 min animated short) and Still Life). This gave me time to forget about the image and look back at it with a totally fresh pair of eyes multiple times.. which isn’t always as good for your mental health!
In the following video I compiled all overnight test renders I did for this project, to show how the project evolved.
Packages used: Maya, Zbrush, 3DCoat, Knald, Mari, Arnold and Nuke.
For this image I actually ‘drew’ (scribbled) out a concept, something I don’t do for every project. Because I knew I wanted to work with fisheye lensing (more about this later on!), I felt the need to see if it would work. However, I made the mistake of not trying the whole picture (including dragon) in 3D. When using weird lensing – don’t think you can anticipate the effect, let alone paint it. Test it out in 3D and save yourself a headache or three..
For every character I always spend a while looking for reference images. This stage is super important, as it will affect the whole look of the project.
Because it’s super fun to do, I sketched out the characters in 3D before doing anything else. Zbrush, obviously! 😉
The facial shapes used in Big Hero 6 were definitely on my mind whilst sculpting this dude!
Retopology was done in 3DCoat. The retopo tools in this piece of software really are super solid! It just works the way you want it to.
The best advice I can give is: start with BIG quads, laying out the flow lines and loops. It’s so much faster to add extra loops with one click in between others than to add extra loops all the way around manually 🙂 It also results in much cleaner topology in general.
I posed the character’s body using a quick and dirty rig, then did some cleanup afterwards. For me this is much faster than using Zbrush’s quite awkward transpose system. It’s fine to move some spheres with that.. but pose a character? No thanks!
Then on top of the posed character I simulated the fabric using Marvelous Designer.
One of the big reasons I like to use MD is that I don’t have to worry about the UVs of my fabrics. Because you lay them out in a 2D view (as you would in real life) and simulate them, you can use the 2D layout as your UV coordinates without having to worry about your fabric stretching/deforming in places. Super cool! With sculpted clothing it’s often annoying to get the patterns to flow right.
The only thing you have to worry about then is giving your cloth some thickness. It’s better to do this in Maya than Zbrush, since you have so much more control over your UVs: just extrude all the faces, make one cut on the bridging loop you just created, and unfold the vertices on the other side of the cut :).
Although it’s great to have this fabric simulated, it doesn’t mean it’s perfect. I like to approach the simulation of the fabrics as if it’s just a base. The meshes need sculpting on top to make them look as interesting as they can be.
When sculpting the expression, I thought I had really pushed it as far as I could – if I pushed it any further it just didn’t look right. It turns out that the main problem was the shading model in Zbrush. When the character was fully shaded with textures applied and light scattering in the skin, it was a lot more forgiving, and the expression I thought I had pushed so far almost seemed a little dull. Knowing that I can push it a lot further is something I will keep in mind in upcoming projects.
As for every step of the pipeline, it all started with finding good references. When I knew what kind of patterns I wanted to use for the fabrics, I found photographs on the internet and then cropped and scaled them to make them tileable so they were useable as a pattern. I like to think of texturing as building up layers (just like shading!). So first I would lay out the pattern of the fabric, and then use a lot of procedural noise patterns to break up the textures. After doing that, I hand painted stains, dirt and dust layers, all to try to get rid of the ‘CG’ look. This applied to all the materials that I painted – metals or woods went through exactly the same process.
I exported 3 main maps for most objects – a colour map, a dirt map and a dust map. Having these separately is a huge advantage when shading the model, as it allows for much more control.
Unfortunately I can’t show anything more since I totally forgot about my Mari archives when formatting my drive 🙁
For the dragon I cheated a bit and used a ramp projection from the camera’s position, layered on top of the painted texture maps. This allowed for much faster feedback than constantly having to write out tweaked maps.
As this project wasn’t super shading-intense (so much is blurred out, far away or behind smoke), I’ll just go over a couple shaders that are the most interesting.
When doing look development, make sure you’re using a properly calibrated light setup. It doesn’t have to be super strict though – you just want to avoid light setups that would influence your colour or value choices 🙂
I started off by connecting all the displacement maps to the objects before I started shading. I think it is important to get the displacements working correctly first, with just a grey material applied. This way I can really judge the forms of the model properly, without getting confused with other surface properties like subsurface scattering which might hide mistakes I made.
Reading a random dude’s node graphs can be hard. It’s like picking up someone else’s code in a sense – there’s so much to look at.. where do I start? What is the structure? So in order to read mine better, you need to know that I build all my shaders in a logically layered approach. Taking the shield as an example, I’ve got a rusty metal shader with a coating. In my layered approach, the coating is a shader, which is layered on top of the rust shader, which is layered on top of the base metal shader. It’s all in the masks!
Another tip: read the shading networks from right to left, it’s so much easier.
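To make that layered idea concrete, here’s a tiny plain-Python sketch of it (not actual Maya nodes – the colours and mask values below are made up for illustration): each layer is blended over the one below it through a mask, exactly like stacking shaders in a layer node.

```python
def lerp(a, b, t):
    """Linear interpolation between two RGB colours by mask value t."""
    return tuple(x + (y - x) * t for x, y in zip(a, b))

def layer_stack(base, layers):
    """Blend (colour, mask) pairs over a base shader, bottom to top."""
    result = base
    for colour, mask in layers:
        result = lerp(result, colour, mask)
    return result

# Hypothetical values for one shading sample on the shield:
base_metal = (0.7, 0.7, 0.7)   # shiny grey base
rust       = (0.4, 0.2, 0.1)   # rust layer
coating    = (0.1, 0.1, 0.1)   # dark coating

# The masks (0..1) would come from painted isolation maps at this sample.
pixel = layer_stack(base_metal, [(rust, 0.8), (coating, 0.25)])
```

In Maya each "colour" would of course be a full shader rather than an RGB value, but the structure – base, then layers masked in on top – is the same.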
Outer layer of fabric (robe) + ‘special’ version (orange):
A quality a lot of fabrics display is that the more grazing the viewing angle between the eye and the fabric, the lighter the fabric gets. This is caused by light hitting the incredibly small stray hairs of the fabric. Although I created a hair system to mimic this (more about this later on), I still implemented this hack into the shader. It takes away some of the visual stress on the hair system, allowing for fewer hairs and shorter render times.
The trick is to query the viewing angle through a samplerInfo node and connect this to the U and V coordinates of a ramp node. Within the ramp node, I can then connect two versions of my painted fabric texture – the regular one and a lighter one. Doing this I can easily interpolate between both versions based on the viewing angle. After this, I layered a dirt shader and a dust shader on top, masking in between those with painted isolation maps.
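Outside of Maya, the same trick boils down to very little math. Here’s a rough plain-Python sketch (the colour values are placeholders, not my actual textures): the facing ratio is just the dot product between the view direction and the surface normal, and it drives the blend between the two texture versions.

```python
def facing_ratio(view_dir, normal):
    """1.0 when looking straight at the surface, 0.0 at grazing angles.
    Both vectors are assumed to be normalised."""
    dot = sum(v * n for v, n in zip(view_dir, normal))
    return max(0.0, min(1.0, abs(dot)))

def fabric_colour(view_dir, normal, regular, lighter):
    """Blend towards the lighter texture as the angle gets more grazing."""
    t = 1.0 - facing_ratio(view_dir, normal)   # 0 head-on, 1 grazing
    return tuple(r + (l - r) * t for r, l in zip(regular, lighter))

regular = (0.30, 0.10, 0.10)   # painted fabric texture (placeholder)
lighter = (0.70, 0.45, 0.45)   # brightened version (placeholder)

head_on = fabric_colour((0, 0, 1), (0, 0, 1), regular, lighter)
grazing = fabric_colour((1, 0, 0), (0, 0, 1), regular, lighter)
```

The ramp node in Maya lets you shape this falloff non-linearly too, which is the main reason to use it over a straight blend.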
The metal on the helmet consists of four main layers: a shiny grey base metal, a shiny oxidized yellow layer, a dirty diffuse coating and a paint layer. The workflow is always the same: starting from painted or tileable maps and using some utility nodes to change their appearance (up the gain, lower the contrast, etc). Doing these operations non-destructively in Maya is much better than having to keep going back to Nuke/Photoshop to write out new images.
For each shader, I try to figure out what the different maps would look like before I start. Even when you’re just walking around on the street, this is an invaluable exercise. Pass a street lantern – okay.. what does the spec map look like? What does the diffuse look like? Is it mainly relief that’s causing surface breakup, or are there big differences in specularity too? That kind of stuff. Just look around and actually look 🙂
I’ve already made a detailed tutorial about the eye shaders, which you can buy over here if you’d like: https://gumroad.com/l/cartoon-eyes-arnold-vray
Since I recently switched to Maya 2016, I chose to learn the ways of XGen. And holy moly, did I have a good time doing it! The realtime preview in viewport 2.0 is incredible – no need to test render anything! This saves me so much time 🙂
If you’d like to know how to do this – have a look at this tutorial from a really knowledgeable groomer, Tarkan Sarim. I’d just be repeating his points over here which isn’t what I want to do! Just.. putting it out there, this is amazing!
On all the fabrics, I generated hair systems to try and mimic the appearance of real fabric as closely as possible. With those super small, almost unnoticeable hairs in place, the light reacts to the fabric in a much more diffuse manner, and the fabric ‘feeling’ is there. This is something I will always do from now on. With XGen this was super easy and fast to do: two noise modifiers and a cut modifier with a noise pattern. Easy!
I knew I wanted a lot of smoke in the scene. Arnold offers an excellent solution for this, volume scattering. It’s super easy to pipe a noise/fractal texture into it to create a broken up smoke effect. However, when you want to art direct a mathematical noise pattern, things get slightly tricky.
The way I worked around this was by creating a couple of ramp projections which pipe into an alLayerColor node. By doing this I can easily place my ramp projections where I, for example, don’t want any smoke. It’s like thinking in black and white, with subtractions and sums.. Just in 3D space.
I didn’t feel like I completely found the right settings for the ramp/projection combination to give me a 3D gradient from a point (as rotating the projection node had a strange effect), but I certainly got close. The ramp type is circular and pipes into a ball projection with wrap disabled.
I did this for the main blocks of volumetrics until I was happy enough with it. Of course in comp I gave the volume pass another makeover by simply painting on it, as it is soooo much easier there to art direct it 🙂
NOTE: If anyone could tell me how to properly get a 3D gradient from a point, please contact me – I’d love to learn the technique.
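For what it’s worth, here is the plain math I think such a gradient boils down to – distance from a centre point, remapped over a radius – sketched in Python with made-up positions and radii. Whether this maps cleanly onto Maya’s ramp/projection nodes is exactly the open question above, so treat it as a sketch of the goal, not a node setup. It also shows the “black and white sums and subtractions” idea for combining masks in 3D space.

```python
import math

def radial_gradient(p, centre, radius):
    """1.0 at the centre, falling off linearly to 0.0 at the given radius."""
    d = math.dist(p, centre)
    return max(0.0, 1.0 - d / radius)

def union(a, b):
    """Combine two masks additively (union of two smoke blocks)."""
    return max(a, b)

def subtract(a, b):
    """Carve one mask out of another."""
    return max(0.0, a - b)

# Density at one sample point: smoke around the origin, with a clear
# bubble carved out around (1, 0, 0). All values are placeholders.
p = (0.5, 0.0, 0.0)
smoke = radial_gradient(p, (0.0, 0.0, 0.0), 2.0)   # 1 - 0.5/2 = 0.75
hole  = radial_gradient(p, (1.0, 0.0, 0.0), 1.0)   # 1 - 0.5/1 = 0.5
density = subtract(smoke, hole)                     # 0.25
```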
During a Zeiss talk at the FMX conference on the physical simulation of lens effects, my eyes were opened to the world of “bad” lenses. I had never reaaaally thought about it! The Zeiss researcher was talking about bad lenses because they are optically not perfect – but for us CG artists they can be incredibly interesting. Optically imperfect optics can add a much needed imperfect touch to your perfect computer generated image.
So there were two main things I wanted to do:
1. I wanted to use a custom bokeh profile when defocusing the image. To illustrate what a custom profile can do, look at these images.
This was really easy to implement since Nuke’s default zDefocus node has a filter input. Plug and play!
2. Because my image has a fisheye distortion, I needed the custom bokeh to behave as it would on a fisheye lens too. Besides that, I also wanted to include the Petzval effect, where the bokeh gets deformed towards the edges of the image.
This one proved to be SUPER hard to do. I couldn’t come up with a solution to this.
There are two Nuke Blink scripts available online that emulate this effect, but they don’t take a depth input. I tried re-coding the Blink scripts to take a depth input, which worked, but unfortunately my math wasn’t correct. I need more time to figure this one out!
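Since my own math for this wasn’t correct yet, take the following with a grain of salt – but the geometry I was aiming for can at least be sketched in plain Python (normalised image coordinates and a made-up strength value): the bokeh kernel gets squashed along the radial direction the further you move from the image centre, which is what produces the “cat’s eye” look. Kernel size would additionally be scaled per pixel by the depth/circle-of-confusion.

```python
import math

def petzval_kernel_axes(x, y, base_radius, strength=0.5):
    """Return the (radial, tangential) axes of the bokeh ellipse at (x, y).

    Image coordinates are normalised so the corners sit near distance 1
    from the centre. At the centre the kernel is a circle; towards the
    edges it gets squashed along the radial direction only, giving the
    cat's-eye shape oriented around the image centre."""
    r = math.hypot(x, y)                   # distance from image centre
    squash = 1.0 / (1.0 + strength * r * r)
    return base_radius * squash, base_radius
```

A depth-aware implementation would evaluate this per pixel inside the defocus filter, which is exactly the part I couldn’t get the Blink scripts to do correctly.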
For one of my university projects this year however, I want to make an Arnold lens shader that emulates various optically imperfect lens effects. But that’s a whole other story for another time!
If you want to learn more about imperfect bokeh shapes, have a look at this SIGGRAPH 2015 course on the subject: http://www.slideshare.net/siliconstudio/making-your-bokeh-fascinating-realtime-rendering-of-physically-based-optical-effect-in-theory-and-practice-siggraph-2015-course
There are many ways to display your scene-linear imagery; applying a 1/2.2 gamma curve is just one of them. I recently started using the SPI-ANIM (Sony Pictures Animation) OCIO profile in Nuke. If you would like more info on this, check out the paper on the following website :). It’s all about techy colour theory – super cool.
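For reference, the simple 1/2.2 curve mentioned above looks like this in Python. A real OCIO profile does much more (tone mapping, gamut handling); this is just the naive display gamma:

```python
def linear_to_display(x, gamma=2.2):
    """Encode a scene-linear value with a simple 1/gamma curve for display.
    Negative values are clamped to 0 before encoding."""
    return max(0.0, x) ** (1.0 / gamma)

# 18% scene-linear grey encodes to roughly 0.46 – which is why it reads
# as "middle grey" on a display.
mid_grey = linear_to_display(0.18)
```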
Then finally I wanted to do a clay render of the project. For this I put all the shaders and lights into two separate sets, to which I assigned override attributes: the shader set had an overriding diffuse and SSS colour, whilst the light set had an overriding colour temperature. This way I didn’t have to remake or adjust anything – it’s non-destructive and time saving! And I didn’t have to reassign any displacement maps 🙂
I hope explaining my thoughts and processes was helpful in one way or another 🙂 If you have any questions, feel free to contact me!