Shooting the backplate and capturing the lighting information

After I knew what I was going to do, it was time to shoot the backplate and capture all the lighting information.

For this I used:

Macbeth chart (to calibrate my colours)

Canon 5D Mark II

50mm f/1.4 lens

8mm f/3.5 fisheye lens

Tripod with a panoramic head (one that clicks into place every set number of degrees; very useful when creating HDRs)

Chrome & grey sphere (to check my lighting)

The first thing I did was try to build a composition that was more or less pleasing to the eye in real life, hoping for some of those happy accidents that occur MUCH more often in real life than in 3D. I also used it as my main, big reference for shading the objects. All in all, a very useful thing to do even if you're not using any of it in your 3D scene. This was my original lighting idea as well, but since it didn't match the artist's work I decided not to use it.

Reference is key!

[Image: original reference]

Shooting the backplate was very straightforward. I shot a clean one, one with lighting checkers and one with a colour checker. One thing to note is that your camera has to be on manual mode. You do not want to have different settings for each of these shots. They all need to match!

Also, ALWAYS shoot in RAW for this kind of thing. You don't want to be working from a compressed .jpg before you've even started.

The following pictures were shot in a really bright, sunny environment, even though they don't really look like it. They're just as raw as they come off my sensor, and need some grading work to make them pretty again.

[Image: Lighting checkers]

[Image: Clean plate]

[Image: Colour checker (Macbeth chart)]


Shooting the HDR

When the backplate was shot, it was time to capture the lighting information in the room. This is most commonly done by using a fisheye lens (because of the extreme wide angle), and shooting at least three exposures for every angle, for the full 360 degrees. You can shoot every 120° (3 angles), 90° (4 angles), 60° (6 angles), … – it’s entirely up to you! Just keep in mind that you do not want your lighting to change or your objects to move during the shoot. I personally went for 6 angles because I was the only one in the room and there were no clouds. Probably as easy as shooting an HDR gets!

Once again, I made sure my camera was set up as manually as possible. I even fixed the white balance setting just to be consistent across all my images, even though it makes absolutely no difference for raw files.

Using the panoramic head of my tripod, I shot 3 exposures for every angle (called exposure bracketing), using a little two second timer. 3 exposures really is the bare minimum; you should shoot more if you can. The camera I used, with its default firmware, just can't do more than three.

I placed my Macbeth chart within a sunny area of my environment, so I could colour calibrate the HDR to my photographed backplate (explained later on).

After the shoot I ended up with 18 .cr2 images (6 angles × 3 exposures). I stitched them together with a neat piece of software called PTGui.

PTGui output the following 32-bit EXR. Great, but not there yet. The highest floating point value within the image was only 4 (keep in mind that 8-bit images can only store values between 0 and 1). I used a Nuke gizmo called mmColorTarget to match the values of the Macbeth chart in the HDR to the values of the Macbeth chart in my backplate (read more about this here: http://therenderblog.com/calibrating-images-with-colorchecker-reference), and this boosted my max floating point value up to about 240. A high value like this is great: it gives you nice crisp shadows, with no need to fake them using CG lights!
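If you want to sanity check those peak values yourself, here's a minimal sketch, assuming the OpenEXR and Imath Python bindings are installed (the file names are placeholders):

```python
# Print the peak floating point value of an EXR's RGB channels.
import array

import Imath
import OpenEXR

def max_value(exr_path):
    exr = OpenEXR.InputFile(exr_path)
    pixel_type = Imath.PixelType(Imath.PixelType.FLOAT)
    peak = 0.0
    for channel in ("R", "G", "B"):
        # .channel() returns the raw pixel bytes for one channel.
        data = array.array("f", exr.channel(channel, pixel_type))
        peak = max(peak, max(data))
    return peak

print(max_value("hdr_raw.exr"))         # ~4 straight out of PTGui
print(max_value("hdr_calibrated.exr"))  # ~240 after the Macbeth calibration
```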

After the calibration, I colour graded it a bit more for artistic purposes.

[Image: Raw out of PTGui. Max floating point value: ~4]

[Image: After tonemapping the image (a rather fancy word for grading it). Max floating point value: ~240]


Lens Distortion

So that’s my HDR image sorted. Great! Just a couple more things and we can move into 3D space.

The first thing I had to do was colour grade my backplate a bit for artistic purposes. After that was done, I had to undistort the image. Real lenses always have their own distortion, which is not great for 3D work, where virtual cameras have none.

Undistorting is easy thanks to the algorithms already available in Nuke. I just printed off a grid and snapped a picture of it at a 90° angle, read it into Nuke and used a LensDistortion node to analyse the grid, then copy-pasted the node and plugged it into my main node network. Easy! Ready to be used in 3D.
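If you're curious what that grid analysis does under the hood, here's a rough equivalent sketched with OpenCV rather than Nuke (not my actual setup; the file names and grid size are assumptions):

```python
# Detect a printed checkerboard grid, solve for the lens distortion
# coefficients, then undistort the backplate with them.
import cv2
import numpy as np

PATTERN = (9, 6)  # inner corners of the printed grid (assumed layout)

grid_img = cv2.imread("grid_photo.jpg", cv2.IMREAD_GRAYSCALE)
found, corners = cv2.findChessboardCorners(grid_img, PATTERN)
assert found, "grid not detected"

# Ideal (undistorted) grid coordinates on the z=0 plane.
obj_pts = np.zeros((PATTERN[0] * PATTERN[1], 3), np.float32)
obj_pts[:, :2] = np.mgrid[0:PATTERN[0], 0:PATTERN[1]].T.reshape(-1, 2)

# Fit the camera matrix and distortion coefficients to the grid photo.
_, cam_mtx, dist, _, _ = cv2.calibrateCamera(
    [obj_pts], [corners], grid_img.shape[::-1], None, None)

backplate = cv2.imread("backplate.jpg")
cv2.imwrite("backplate_undistorted.jpg",
            cv2.undistort(backplate, cam_mtx, dist))
```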

[Image: Before & after of the grid]

[Image: Backplate before & after]


Aligning the camera to the scene

There are a couple of tricks to keep in mind when trying to align your camera in 3D. It can be a pain, but if you have taken the right steps, your image should line up perfectly.

A couple of tips:

Measure your real life objects with precision, and always make notes of the measurements when shooting

Make sure your image has no lens distortion

If you shoot with a crop sensor camera, make sure you multiply your lens's focal length by the crop factor, then enter that value into your Maya camera (see the sketch right below). Maya doesn't know whether you shot with a crop sensor or a full frame sensor!
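As a minimal sketch of that last tip in Maya Python (my 5D Mark II is full frame, so the 1.6× crop factor below is a hypothetical APS-C example):

```python
# Enter the crop-compensated focal length on the Maya camera.
import maya.cmds as cmds

crop_factor = 1.6         # hypothetical APS-C body; full frame would be 1.0
lens_focal_length = 50.0  # mm, as printed on the lens

cmds.setAttr("cameraShape1.focalLength", lens_focal_length * crop_factor)
```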

And then there’s a really useful trick that I learned from a senior modeler at MPC:

Create your camera and immediately group it. On the camera, lock the rotation channels; on the group, lock the translation ones. By doing this you can pan around with the camera itself, then select the group and adjust the rotation values very precisely using the rotate tool.

I feel like this trick really helps when trying to precisely match something. It also makes it really easy to undo every camera movement.
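Here's roughly what that setup looks like in Maya Python (just a sketch; the names are arbitrary):

```python
# Camera in a group: the camera may only translate, the group may only rotate.
import maya.cmds as cmds

cam, cam_shape = cmds.camera(name="matchCam")
grp = cmds.group(cam, name="matchCam_grp")

for axis in "xyz":
    cmds.setAttr(cam + ".r" + axis, lock=True)  # lock rotation on the camera
    cmds.setAttr(grp + ".t" + axis, lock=True)  # lock translation on the group
```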

It will take a couple of minutes of thinking “this is never going to match up”, but you will get there eventually.

[Image: I got there eventually.]


Setting up the scene

Wicked. So we’ve got all our photographic elements ready to go and our proxy geometry is modeled to match the objects in our photograph. Time to set up the scene.

We know that we will need both information from the HDR (direct illumination) and information from the environment (indirect illumination, because the light has already bounced once) to light our scene properly.

It’s really easy to set this up in Arnold, but it did take me some initial figuring out.

I assigned an aiShadowCatcher shader to all the modeled 'shadow catching' geometry. Since the geometry also has to cast shadows, do not disable 'opaque' in the Arnold tab, contrary to what the documentation specifies.

[Image: aiShadowCatcher attributes]

The shadow catcher shader has some cool attributes. The first one we'll look at is the most important one: Use Background.

In this slot we will pipe in a projection. In your node editor, just create a File [projection] node. It's pretty straightforward: plug in both your projection camera and the image you want to project, and make sure you have the UVs laid out for the objects you are projecting onto.

Important! Set up like this, your shadow catching geometry will also show a diffuse texture (the projection). This is NOT what we want, so we pipe the projection through an aiRaySwitch node, leaving the 'camera rays' slot unconnected. This essentially returns nothing for camera rays whilst still returning all the other rays: exactly what we want.
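Scripted, the wiring looks something like the sketch below. Attribute names differ between Arnold versions, so treat the ray slot names and the 'background' attribute as assumptions and double check them on your own aiRaySwitch and aiShadowCatcher nodes:

```python
# Pipe the projection into every ray type EXCEPT camera rays, then feed
# that into the shadow catcher's background slot.
import maya.cmds as cmds

proj = cmds.shadingNode("projection", asTexture=True, name="backplateProj")
ray_switch = cmds.shadingNode("aiRaySwitch", asUtility=True)
catcher = cmds.shadingNode("aiShadowCatcher", asShader=True)

# Leaving "camera" unconnected means camera rays return nothing,
# so the projection never shows up as a diffuse texture.
for ray in ("shadow", "diffuse", "glossy", "reflection", "refraction"):
    cmds.connectAttr(proj + ".outColor", ray_switch + "." + ray, force=True)

cmds.connectAttr(ray_switch + ".outColor", catcher + ".background", force=True)
```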

The rest is easy. Enable cast shadows, transparency and catch diffuse. Catch diffuse is one to play with: it affects the colour of the shadows depending on the diffuse colour. I wonder why I didn't try piping the projection into that. Might be worth testing out!

[Image: aiShadowCatcher node graph]

[Image: aiRaySwitch doing its job! I tested all this when I had just a few primitives, obviously.]

For the lighting, I just created a skydome light with the HDR attached, intensity kept at 1. With the help of the chrome sphere I matched the direction of the light. It all worked surprisingly well! Haha.
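In Maya Python the whole lighting setup is just a few lines (node and attribute names as exposed by MtoA; the file name is a placeholder):

```python
# One skydome light with the calibrated HDR mapped into its colour.
import maya.cmds as cmds

dome = cmds.shadingNode("aiSkyDomeLight", asLight=True)
hdr = cmds.shadingNode("file", asTexture=True)

cmds.setAttr(hdr + ".fileTextureName", "studio_calibrated.exr", type="string")
cmds.connectAttr(hdr + ".outColor", dome + ".color", force=True)
cmds.setAttr(dome + ".intensity", 1.0)  # the HDR carries the real intensity
```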


Modeling the objects

I really enjoyed the modeling aspect of this project. I usually do bigger things, which means I don't get to model at this fine a level of detail. It's really fun when there's no cap on your polycount and you can try to get every nice little shape in there! I only box modeled the tape, the tin and one single tube. I laid out the UVs, then took the tubes into ZBrush and sculpted an individual displacement map for each.

I quaded up the geometry so I’d get an even distribution of resolution whilst sculpting. It’s always a good idea to do this! 🙂

Usually whilst sculpting something like this I'd reach for the Dam Standard brush; however, since it pinches the geometry and my UVs are already laid out, that's not ideal. I can't have the pinching. To work around this I simply took the alpha of the Dam Standard brush and applied it to my regular Standard brush. Simple, problem solved. The whole time I watched closely that I was only working along the normal direction of the faces, so that not too much stretching would appear when they were textured.

A quick demonstration of how I approached sculpting the paint tubes:

[Video: sculpting demonstration]

For both the paint tubes and the tin can I found that I also had to break up the surface with quite a bit of low level noise.

To create the paint splatters on the objects, I developed two ways, and switched halfway through the project! The first was to get some Photoshop splatter brushes, import those into Mari, splash them onto my object and convert that layer's alpha into a mask (in Mari: Create Mask -> From Alpha). This gave me a clean mask regardless of the colour of the paint. Then I imported this map as an alpha into ZBrush and, under the Masking submenu, clicked 'Mask By Alpha'. The only thing I then had to do was inflate certain parts and sculpt a bit of texture on top.

.. but

Halfway through the project I found that it was much easier to have all this paint as separate geometry. It allows for much greater control compared to generating it at render time through a displacement map. I used the same splatter maps, but converted the ZBrush masks into polygroups. Then I could split off the 'splatter' faces, give them some thickness and dynamesh that. It's pretty high res geo (although you generate the same amount of polygons with a disp map anyway), but I found that it looks a lot better than displacing the actual object. It also cleans up your scene quite a bit, IMO.

The water droplets were created with the water drop generator I made a while ago. It's free, it works fairly well; check it out if you like! 🙂

http://zenopelgrims.com/waterdrop-generator/

http://www.creativecrash.com/maya/script/water-drop-generator-python

Who doesn’t like to see wireframe renders eh?

[Image: wireframe render]

And ambient occlusion renders..? (I did NOT use this to comp my image!!)

[Image: ambient occlusion render]

Or a pass where I combined all the lighting without any colour?

[Image: all lighting combined, without colour]


Texturing

As always, I textured my objects in Mari. It was pretty easy; nothing like texturing something that is supposed to look alive! The first step is probably the most important one: looking for good reference. Once you know what you have to paint and take the guesswork out, it's just a matter of doing it.

The labels of the objects were created in Photoshop, since Mari lacks a lot of 2D tools. I then exported the maps and imported them into Mari, where I painted the isolation maps (a fancy word for a simple mask, haha!) for the layered shaders (more on this later on): splatters, a dirt layer and a dust layer (which essentially are also just isolation maps).

The way I paint my isolation maps is simple. I just paint them with the standard Mari brushes, but not in black and white. If I'm painting dust, I need it to look like dust to be able to properly judge its appearance. When I'm done with it, I just convert the layer's alpha into a mask (in Mari: Create Mask -> From Alpha). By doing this I can easily export both the colour map and the mask, to use in my layered shaders.

[Image: tube texturing]

[Image: coffee tin texturing]

If I could recap one thing about texturing, it would be reference. Reference, reference, reference.


Shading

I received some requests to dive a bit deeper into the shading of the objects, so I will try to do this!

If you’ve read about my previous work, you’ve probably noticed I like to use the alShader library instead of the standard Arnold library. It has many advantages, which you can read about over here: https://bitbucket.org/anderslanglands/alshaders/wiki/Home

At this moment in time I really like to think about shading in layers, and I also build my networks this way. Everything gets pretty easy when you do. I will go over three objects: the coffee tin, the paint tube and the tape.

Coffee Tin

As I mentioned earlier, everything begins with good reference. Life is pretty easy once you know what you've got to do, reach or accomplish; then it's just doing it! Before I start, I always push myself to spend that extra bit of time on the internet, looking for that one picture that might push my idea that little bit further.

[Image: Reference is key!]

The first thing I do is inspect the object and stare at it for a couple of minutes. How does the light react with the surfaces? Which properties of the surface really leave a mark in my mind? Are there any noteworthy peculiarities? What are the material layers of this object?

For example, the coffee tin has 6 main layers:

Base metal

The super thin tin coat over the metal

The label material

Paint layer

Dirt (wasn’t needed for this asset in my image, but still worth noting that it is there!)

Dust (wasn’t needed for this asset in my image, but still worth noting that it is there!)

Once we’ve got the layers figured out, it’s playtime. All we have to do now is recreate them and paint masks to layer the shaders on top of each other.

As you can see, the node network is pretty simple.

The way I approach building the materials at the moment is still very much by eye. I would like to start using more scientifically correct values and numbers soon, it’s something that I’m working on.

So, for now, eyeballing! Let’s start with layer one – the metal.

In real life you'd have two layers for the base metal: the actual metal that the coffee tin is made from, and a thin tin coat on top. The underlying metal is barely visible (just through some scratches), and even where it is visible, it wouldn't have any crazy visual impact. So instead of building it this way, I decided to just create a tin material and use a spec map to emulate the metal layer underneath. A bit faster!

Because it's a metal, the diffuse colour should be black. Ideally you'd pipe a really dark breakup map into this (nothing is perfect in real life), but it just wasn't necessary in this case.

The specular map is a simple tiling map with some scratches. I add a bit of colour to this spec map by multiplying the tiling map over a colour (light beige; metals have a coloured specular highlight!) through an alLayerColor node. No need to go and do this in another software package 🙂

[Image: the values used for the tin layer]

As you can see, I use the two specular lobes provided by the alSurface shader here. As I said before, I'm not going to lie: I haven't got any real measured values for these. Right now I'm matching the reflection by looking at my reference very, very closely! What I'm creating here is a combination of a rougher reflection and a sharper one.

The IOR value is pumped up to 5 to have it reflect like a metal. You can use values up to 100 or even 1000 to get a chrome-like Fresnel falloff.
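A quick numeric check of why this works, using the normal incidence Fresnel reflectance F0 = ((n - 1) / (n + 1))^2:

```python
# Normal-incidence reflectance for a given IOR: higher IOR, flatter falloff.
def f0(ior):
    return ((ior - 1.0) / (ior + 1.0)) ** 2

print(f0(1.5))    # ~0.04  a typical dielectric
print(f0(5.0))    # ~0.44  the tin layer here
print(f0(100.0))  # ~0.96  nearly chrome
```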

That's that for the tin layer. Now let's create an alLayer shader, another alSurface shader for the label, and a file node to connect the mask that we painted into the layered shader.

NOTE: By default, if you drag and drop the file node onto the mask attribute, Maya will connect outAlpha to the input. This is NOT what we want! Make sure you switch the connection from the file node's outAlpha to outColor. Since we've painted a black and white mask, all individual colour channels are identical, so connect either the red, green or blue channel, depending on your favourite colour.

NOTE 2: You can also work with just one texture file with transparency. In that case, obviously don’t change the connection 🙂
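Scripted, the corrected hookup looks something like this (node names are placeholders, and 'mix' as the alLayer blend attribute is an assumption; check it against your alShaders build):

```python
# Drive the alLayer blend from a single colour channel, NOT outAlpha.
import maya.cmds as cmds

mask = cmds.shadingNode("file", asTexture=True, name="labelMask")
cmds.setAttr(mask + ".fileTextureName", "tin_label_mask.tif", type="string")

# The painted black and white mask lives in the RGB channels, so any
# one channel works; red is as good as any.
cmds.connectAttr(mask + ".outColorR", "alLayer1.mix", force=True)
```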

Cool, let's move on to the label shader. There's not that much special going on here, but it's definitely an important layer to get right: it's the shader that dominates the biggest area of the image. All I did was connect the diffuse map that I painted, and set my spec parameters as follows:

[Image: Label shader attributes]

Note that I used a slightly tinted specular colour. This is for artistic reasons: it gave me a much 'fuller' feeling. It's hard to describe, but it worked. You should always remind yourself that you're an artist, not a scientist. It's good to base yourself on values that are proven to be correct, but no one says you can't cheat, as long as you know why you're cheating.

The last layer for this object is the paint layer. Again, nothing special going on really! Just a diffuse colour map and some tweaked spec values.

[Image: Paint shader attributes]

That's actually it for the tin! Nothing special or overcomplicated, but it still looks pretty sweet. Working in layers is easy! 🙂

Tape

The tape was fun to make; it's not something you shade every day! The main thing to note about this object is that I created two different shaders: one for the actual roll of tape, which is double sided geometry, and one for the torn-off tape, which is single sided. This is an important distinction, because single sided objects have to be shaded differently. For example, the subsurface scattering I would use on double sided geo doesn't make sense on single sided geo.

Anyway, let’s start with the double sided geometry first.

[Image: Node network of the tape shader (double sided geo)]

As you can see, it's once again pretty simple. I loaded in both my painted diffuse map (really simple: flat colours with some really subtle value/colour differences; I could have done this procedurally in Maya but happened to be in Mari) and a scratch/breakup map. I repeated the breakup map with a 2.5:1 UV ratio; it just needed to be squashed! This is piped into both the bump slot and the spec slot of the alSurface shader.

As you can see, it goes through an alRemapColor node before going into the spec slot. The reason for this is simple: the map is way too 'heavy', with too much value difference. I only want this map to be subtle, so inside the alRemapColor node I raised the gamma to a value of 8. If you test this out, you'll see that the values get flattened out a lot: exactly what I was looking for.
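To see the flattening in numbers (assuming the usual convention where the remap applies out = in^(1/gamma)):

```python
# Gamma 8 pulls every value up towards 1, compressing the differences.
for v in (0.1, 0.3, 0.6, 0.9):
    print(v, "->", round(v ** (1.0 / 8.0), 3))
# 0.1 -> 0.75, 0.3 -> 0.86, 0.6 -> 0.938, 0.9 -> 0.987
```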

[Image: Tiling scratch map used for the tape]

An important visual attribute of this tape is some subtle subsurface scattering, so we have to make sure we include it. The attributes are really simple (I'm starting to repeat myself here... but that's fine, I guess!). I only used one SSS layer, with the same colour as the diffuse map.

[Image: Subsurface scattering is important! It really makes the tape come alive.]

Because we're all visual learners, here are the attributes.

[Image: tape shader attributes (double sided geo)]

That's it for the double sided geo; let's move on to the single sided geometry. It's a bit more interesting!

[Image: Tape single sided geo node network]

To quickly sum this up: we have a refractive 'glue' shader and a backlit 'paper' shader, layered on top of each other with the remapped scratch map from earlier acting as a mask. Once I figured this out it was, once again, pretty easy! Inside the shaders nothing complicated or special is going on. It's all about the layering.

[Image: Tape glue shader attributes]

[Image: Tape paper shader attributes]

An important attribute for the paper shader was backlighting. With this I emulated the translucency of the paper, just like I did with the SSS on the actual tape model.

In retrospect, I should have added more rough reflections on the tape at grazing angles. I tried to do this through my spec, but the effect just wasn't strong enough. It's something I overlooked; I should have faked it with my diffuse. That would have been a lot faster to render and easier to control.

Basically, I would use a samplerInfo node to query the viewing angle and map that into the U and V coordinates of a ramp node. In that ramp node I can then plug in whatever I like and interpolate between the entries based on the viewing angle, just like you would fake a cloth shader. It's explained pretty well over here: https://vimeo.com/102632184
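A minimal sketch of that hookup in Maya Python (driving just the V coordinate is enough for a 1D blend):

```python
# samplerInfo.facingRatio is 1 facing the camera, 0 at grazing angles;
# feeding it into a ramp's V coordinate blends the ramp entries by angle.
import maya.cmds as cmds

info = cmds.shadingNode("samplerInfo", asUtility=True)
ramp = cmds.shadingNode("ramp", asTexture=True)

cmds.connectAttr(info + ".facingRatio", ramp + ".vCoord", force=True)
# Now pipe ramp.outColor into e.g. the diffuse colour: whatever is
# plugged into the ramp entries interpolates with the viewing angle.
```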

Then I painted an opacity map for the edges of the tape, to give it a bit of a torn look. That’s all, tape = done!


Creating and rendering dust

This was a tricky one. I had never worked on a piece that needed a bit of dust sprinkled on top! I weighed up my options and decided to try to do it as it occurs in real life, meaning rendering curves as thin hairs.

My only problem then was: how do I create all these curves?! I looked online, but couldn't find any tool for Maya that was even remotely suitable. I didn't want to spend an incredible amount of time on just creating some dust, so I decided to paint curves with Maya's Paint Effects, using the popcorn preset. Pretty strange, having popcorn scattered all over my scene, but the curves that underlie the Paint Effects strokes are pretty good. I converted the Paint Effects to curves and rendered these as thin hairs. You can find a lot of information on rendering curves (with sets) in the Arnold documentation.
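Sketched in Maya Python below. The doPaintEffectsToCurve MEL call and the MtoA curve attributes (aiRenderCurve, aiCurveWidth) are assumptions from the MtoA of that era, so check them against the Arnold documentation for your version:

```python
# Convert the painted strokes to curves, then flag every curve shape
# so Arnold renders it as a thin hair.
import maya.cmds as cmds
import maya.mel as mel

mel.eval("doPaintEffectsToCurve(1);")  # Modify > Convert > Paint Effects to Curves

for shape in cmds.ls(type="nurbsCurve"):
    cmds.setAttr(shape + ".aiRenderCurve", 1)
    cmds.setAttr(shape + ".aiCurveWidth", 0.002)  # dust-thin; tune to scene scale
```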

For the shading of the curves, I didn’t use a hair shader. I just used a standard alSurface shader with a really light brown diffuse colour. It gave me the look I wanted!

The dust, however, is something I'm not 100% happy with. The shading is OK, but the shape of the little dust particles just doesn't fully work. Writing a little dust Paint Effects preset might be a cool thing to do sometime in the future!


Compositing

The final step of the process. For this image not much compositing work was needed; it came out pretty decent straight out of the render engine!

I added some colour corrections for individual objects based on the ID masks that I rendered out, some overall grading, a bit of grain and the depth of field.

With every project I do, I try to learn new things and explore new techniques. In this project I wanted to try out Sony's spi-anim OCIO profile instead of the default 2.2 gamma curve in Nuke. There's way too much to it to explain in this post, but with a quick Google you'll be reading about it in no time.

It's all about the maths behind going from your linear image to what you display on your monitor. The default is a 2.2 gamma curve, but there are many other options, which can actually look better. The highlights were no longer blown out, and everything had quite a filmic look. Pretty cool! Try it yourself 🙂
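Switching Nuke over is quick. A sketch using Nuke's Python API (knob names may vary with your Nuke version, and the config path is a placeholder):

```python
# Point Nuke's colour management at a custom OCIO config such as spi-anim.
import nuke

root = nuke.root()
root["colorManagement"].setValue("OCIO")
root["OCIO_config"].setValue("custom")
root["customOCIOConfigPath"].setValue("/path/to/spi-anim/config.ocio")
```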


Conclusion

Overall, I learned A LOT whilst creating this image. I absolutely loved diving into all these mini details. I definitely think one of my next pieces will once again be a small scene. It's just... SO much fun! 🙂

Hope this was a little bit useful, and don’t hesitate to contact me if you have any questions / remarks.
