ZOIC is an open-source camera shader for the Arnold renderer capable of simulating some optically imperfect lens effects.

thinlens_50mm / raytraced_50mm_1-4f

Rendered image showing the difference between the classical thin lens approximation and the raytraced approach. Note the optical vignetting and uneven distribution within the bokeh due to the lens geometry.


DOWNLOADS

Note: I decided to charge a small contribution for the ready-to-use, compiled versions of ZOIC. However, you can avoid this fee if you feel up for compiling ZOIC yourself using the provided open-source code and instructions at the bottom of this page. Choice is yours, help a brother out!


INSTALLATION

MTOA

Set the following environment variables, replacing "$PATH_TO_ZOIC" with the actual path on your machine.

Linux:

export ARNOLD_PLUGIN_PATH=$ARNOLD_PLUGIN_PATH:$PATH_TO_ZOIC/bin

export MTOA_TEMPLATES_PATH=$MTOA_TEMPLATES_PATH:$PATH_TO_ZOIC/ae

Windows:

ARNOLD_PLUGIN_PATH = $PATH_TO_ZOIC/bin

MTOA_TEMPLATES_PATH = $PATH_TO_ZOIC/ae

It’s also possible to copy the files into your MtoA install, but I personally prefer the first option. Just copy the files like this:

Files in /bin go to [$MTOA_LOCATION]/shaders

Files in /ae go to [$MTOA_LOCATION]/scripts/mtoa/ui/ae

C4DTOA

Copy files in /bin to [$C4DTOA_LOCATION]/shaders

Copy files in /C4DtoA/res/description to [$C4DTOA_LOCATION]/res/description

Copy files in /C4DtoA/res/strings_us/description to [$C4DTOA_LOCATION]/res/strings_us/description

HTOA

Set the following environment variables, replacing "$PATH_TO_ZOIC" with the actual path on your machine.

Linux:

export ARNOLD_PLUGIN_PATH=$ARNOLD_PLUGIN_PATH:$PATH_TO_ZOIC/bin

Windows:

ARNOLD_PLUGIN_PATH = $PATH_TO_ZOIC/bin

It’s also possible to copy the files into your HtoA install, but I personally prefer the first option. Just copy the files like this:

Files in /bin go to [$HTOA_LOCATION]/arnold/plugins


DOCUMENTATION

ZOIC 2.0 provides two different lens models: a new raytraced model, which reads in lens description files of the kind found in optics literature and lens patents, and the classical thin-lens approximation with options for optical vignetting. These are two completely different ways of calculating the camera rays and therefore have separate documentation. Both models serve their own purposes, although the raytraced model should be preferred whenever photorealism is required. It comes with an increase in camera ray creation time due to the extra complexity of the calculations; this increase scales linearly with the number of lens elements in the lens description file, since the rays are traced through the whole lens system.


GENERAL

Sensor Dimensions:

Width and height of the camera sensor in centimeters. Default values are the width and height of a full frame sensor [3.6cm * 2.4cm].

lensdrawing_sensor_fullframe

lensdrawing_sensor_cropped

Difference between a full and cropped frame sensor.

Focal Length:

Distance, in centimeters, between the lens and the point at which the light converges.

lensdrawing_focallength_100
lensdrawing_focallength_50

Difference between a focal length of 100mm and 50mm.

F-Stop:

The f-stop is a dimensionless number that quantifies lens speed. Practically, it is the ratio of the focal length (the distance from the lens to the point where the focused light converges) to the diameter of the aperture.

lensdrawing_aperture_1-4

lensdrawing_aperture_16

The f-stop determines the radius of the aperture. Note the issue with naive entrance pupil sampling here: many rays are killed.

Focal distance:

Distance between the lens and the focal plane. Set this to shift the focal plane to a certain distance.

lensdrawing_focaldistance_30

lensdrawing_focaldistance_50

Focal distance of 30cm and 50cm.


IMAGE BASED BOKEH

When you closely inspect images of defocused light sources, it quickly becomes apparent that there’s more going on within the lens system than the simple throughput of an aperture. Light refracts through several lens elements, changing the appearance of the out-of-focus highlights. The quality of these highlights is one of the things that gives lens designers grey hair; within computer graphics, however, an imperfect bokeh shape can be a big step towards creating believable imagery. The ability to input a custom bokeh shape can change the whole look of a shot for the better, which is a big step forward in terms of art directability within a raytracing environment.

Please note:

Make sure that your input image sizes aren’t wacko-jacko. Big bokeh images don’t make for prettier renders, they just take longer to sample; 200 – 300 pixels each way is a good middle ground. For now, colour information is only used to calculate luminance, so don’t expect to see colour aberrations. There is no need to convert your texture to .tx.

zoic_camera_shader_image_bokeh_01


zoic_camera_shader_depth_of_field

zoic_camera_shader_image_bokeh_04

zoic_camera_shader_image_bokeh_05

zoic_camera_shader_image_bokeh_06

zoic_camera_shader_image_bokeh_utah_teapot

.. I had to include a teapot somewhere ..


List of bokeh shapes

If you’re looking for some of these bokeh shapes to use, DOF PRO has quite an extensive library.

I’ve also made a small library of captured point spread functions sourced from hidden dark corners of the internet. You can also easily capture these yourself by defocusing your lens on a small bright light source (fairy lights, iPhone flashlight, ..)

For any path tracer it’s hard to get nicely sampled defocused bokeh since you need a shipload of camera rays hitting these tiny light sources. Overall appearance is therefore much more important than the high frequency detail. I thought I’d include some of the more subtle ones as well since I don’t necessarily have the rendering resources you guys might have.

bokeh_24
bokeh_19
bokeh_26
bokeh_28_graded
bokeh_31
bokeh_18
bokeh_20
bokeh_17
bokeh_23
bokeh_25
bokeh_21
bokeh_27
bokeh_22
bokeh_29
bokeh_28
bokeh_30
bokeh_13
bokeh_03
bokeh_07
bokeh_08
bokeh_01
bokeh_10
bokeh_06
bokeh_14
bokeh_02
bokeh_05
bokeh_11
bokeh_09
bokeh_16
bokeh_33
bokeh_34
bokeh_32
bokeh_15

TECHNICAL / HOW DOES THIS WORK?

The implementation of the image based out-of-focus highlights was quite an adventure and a steep learning curve for me. Although it had been implemented in other render engines [Vray, Tungsten], there was absolutely no information on the topic to be found. Marc-Antoine Desjardins pointed me in the right direction: take the image, build a distribution function from it, and then use that data to determine the distribution of the ray origin offsets. I basically sort the pixel data in such a way that pixels with a high luminance have a higher chance of being chosen than pixels with a low luminance, replacing the classic concentric disk sampling method, which maps random samples on the unit square uniformly over the unit disk.

teapot
teapot_sample_distribution
teapot_sample_distribution_10000

A comparison of the sampled image with 1000 and 10000 sampled points.

When the image data is loaded using Arnold’s AiTextureLoad() call, I can start manipulating it. First I calculate the luminance of each pixel with the following equation [Y = 0.3 R + 0.59 G + 0.11 B].

After the sum of all pixel luminance values is calculated, I normalize them so that the normalized pixel values sum to 1. These values are then summed per row, and the probability density function of the rows is calculated, essentially sorting them from the highest to the lowest luminance row.
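A minimal sketch of those two steps, assuming the pixels are already sitting in a float RGBA array called pixels with dimensions width and height (the names are mine, and the sorting mentioned above is left out):

#include <vector>

// Per-pixel luminance, normalised so the values over the whole image sum to 1.
std::vector<float> luminance(width * height);
float totalLuminance = 0.0f;
for (int i = 0; i < width * height; ++i) {
    const float r = pixels[i * 4 + 0];
    const float g = pixels[i * 4 + 1];
    const float b = pixels[i * 4 + 2];
    luminance[i] = 0.3f * r + 0.59f * g + 0.11f * b;
    totalLuminance += luminance[i];
}
for (int i = 0; i < width * height; ++i)
    luminance[i] /= totalLuminance;

// Probability of picking a row = the sum of its normalised pixel values.
std::vector<float> rowPdf(height, 0.0f);
for (int y = 0; y < height; ++y)
    for (int x = 0; x < width; ++x)
        rowPdf[y] += luminance[y * width + x];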

teapot_probability_density_function

Sorted probability density function of the teapot image. Note that it drops down to 0 pretty quickly because of all the black pixel rows in the image (which won’t be used!)

For every row, the sum of all previous rows is then added to obtain the cumulative distribution function. Next, by dividing each pixel value by the sum of the pixel values in its row, the values are normalized per row. After that I also create the probability density function and the cumulative distribution function for the columns.

teapot_cumulative_distribution_function

Cumulative distribution function of teapot image.
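The cumulative sums themselves are just running totals. Continuing the sketch above, one common way to structure this is a CDF over the rows plus a per-row CDF over the columns (again, the names and layout are mine):

// Running total over the rows gives the cumulative distribution function.
std::vector<float> rowCdf(height);
float running = 0.0f;
for (int y = 0; y < height; ++y) {
    running += rowPdf[y];
    rowCdf[y] = running;                     // the last entry ends up at ~1.0
}

// Per row: normalise the pixel values so the row sums to 1, then build a
// column CDF for that row in the same way.
std::vector<float> columnCdf(width * height, 0.0f);
for (int y = 0; y < height; ++y) {
    float rowSum = 0.0f;
    for (int x = 0; x < width; ++x)
        rowSum += luminance[y * width + x];

    float accum = 0.0f;
    for (int x = 0; x < width; ++x) {
        if (rowSum > 0.0f)
            accum += luminance[y * width + x] / rowSum;
        columnCdf[y * width + x] = accum;
    }
}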

Note: Both the reading and distribution functions only have to be executed once per shader update, so they go into the node_update section.

Now that the distribution function is in place, we can use the image for the sampling process. I start by taking the two random values the renderer provides per ray it makes, the screen-space coordinates, and use those to pick two indices based on the previously calculated probabilities. Because of the way the values are sorted, certain entries cover a larger value range and therefore have a higher chance of being selected when the upper bound of the inputted random value is picked. The same is done for the columns.

All that is left to do then is remap the image coordinates to the lens coordinates, so that the centre pixel sits at the (0, 0) origin. This is needed because in the image coordinate system the origin is the topmost left pixel.
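A sketch of the lookup and the remap, continuing from the snippets above; rx and ry stand in for the two random values, and std::upper_bound does the "pick the upper bound" step:

#include <algorithm>

// rx, ry: the two random values supplied for this camera ray, both in [0, 1).
// Rows that cover a larger interval of the CDF are picked more often.
int row = int(std::upper_bound(rowCdf.begin(), rowCdf.end(), ry) - rowCdf.begin());
row = std::min(row, height - 1);

// Pick a column within that row from its conditional CDF.
const float *first = &columnCdf[row * width];
int col = int(std::upper_bound(first, first + width, rx) - first);
col = std::min(col, width - 1);

// Remap from image coordinates (origin at the top-left pixel) to lens
// coordinates (origin in the centre, roughly [-1, 1] in both directions).
float lensU =  (col / float(width))  * 2.0f - 1.0f;
float lensV = -((row / float(height)) * 2.0f - 1.0f);   // flip the vertical axis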


RAYTRACED

LENS MODEL PRINCIPLES

This lens model reads in lens descriptions found in lens patents and books on optics. This data is used to trace the camera rays through that virtual lens. The model is based on a paper by Kolb et al. [1995] and comes with some advantages over the thin-lens model which, by the way, is quite often a criminal approximation of how real lenses work:

Physically plausible optical vignetting

Physically plausible lens distortion

Physically plausible bokeh shapes due to the lens geometry

Non-planar focal field due to lens curvature

Focus breathing – adjusting focus results in a slightly shifted focal length due to the way the lens moves with respect to the sensor

Correct image formation for wide angle lenses

Essentially, this should bring you one step closer to creating pretty, believable photographic images.


LENS DESCRIPTION FILES

The lens model requires tabular lens description files, which unfortunately are rather sparsely distributed around the internet. I did some digging into optics literature and wrote some description files myself from the data I found.

You can download these here.

ZOIC accepts lens descriptions with either 4 or 5 columns. For now the 5th column is just there to future-proof the shader; it relates to the dispersion of the lens glass.

The lens description file should look like the following:
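I can’t reproduce a real patent table here, so the values below are made up purely to show the layout: one surface per line with the four columns described further down (radius of curvature, thickness, index of refraction, aperture), and the aperture stop marked by a radius of curvature of 0. The first line is just a label for this writeup, not part of the format.

# radius    thickness   ior     aperture
  58.950    7.520       1.670   50.4
  169.660   0.240       1.000   50.4
  38.550    8.050       1.670   46.0
  0.000     11.410      1.000   34.5
  -28.990   2.360       1.603   34.0
  81.540    12.130      1.658   40.0
  -40.770   0.380       1.000   40.0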


PRECALCULATE LOOKUP TABLE

The most straightforward sampling approach is to distribute rays uniformly over the first lens element. This works pretty well with large apertures, but it fails miserably with small apertures, since most of the rays won’t pass through.

ZOIC combats this problem by providing the option to pre-calculate the aperture shapes at 64*64 points on the sensor and then bilinearly interpolate between them. This makes sure that very few rays are “wasted”. The aperture has to be calculated at multiple points since its shape and size vary drastically over the film plane.

When using large apertures, you can turn this option off to speed up interactive rendering. However, when interactivity is not needed or when using small apertures, it is highly recommended to enable this option.
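A sketch of the lookup side of this idea, assuming the pre-pass has stored the centre and radius of the visible aperture for every cell of a 64x64 grid over the sensor (the struct and function names are mine):

#include <algorithm>

struct ApertureBounds { float centerX, centerY, radius; };

// grid: 64 * 64 ApertureBounds entries, filled once in node_update.
// sx, sy: screen-space position of the current ray, both in [-1, 1].
ApertureBounds lookupBounds(const ApertureBounds *grid, float sx, float sy) {
    const int res = 64;
    const float u = (sx * 0.5f + 0.5f) * (res - 1);
    const float v = (sy * 0.5f + 0.5f) * (res - 1);
    const int x0 = int(u), y0 = int(v);
    const int x1 = std::min(x0 + 1, res - 1), y1 = std::min(y0 + 1, res - 1);
    const float fx = u - x0, fy = v - y0;

    auto lerp = [](float a, float b, float t) { return a + (b - a) * t; };
    const ApertureBounds &a = grid[y0 * res + x0], &b = grid[y0 * res + x1];
    const ApertureBounds &c = grid[y1 * res + x0], &d = grid[y1 * res + x1];

    // Bilinear interpolation of the stored aperture centre and radius.
    ApertureBounds out;
    out.centerX = lerp(lerp(a.centerX, b.centerX, fx), lerp(c.centerX, d.centerX, fx), fy);
    out.centerY = lerp(lerp(a.centerY, b.centerY, fx), lerp(c.centerY, d.centerY, fx), fy);
    out.radius  = lerp(lerp(a.radius,  b.radius,  fx), lerp(c.radius,  d.radius,  fx), fy);
    return out;
}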

sampling_naive_comparison

Shooting rays from all positions on the sensor through the same aperture doesn’t work. Note how the aperture shape changes, and how rays are not only killed but many necessary rays are never sent in the first place. White samples are the cast rays, whilst the orange samples are a ground-truth test.

A better way to approach this problem is to pre-calculate the size and shape of the aperture at different points on the sensor. White samples are the cast rays, whilst the orange samples are a ground-truth test.


TECHNICAL / HOW DOES THIS WORK?

The lens description file provides 4 values for every lens element, which is enough to mathematically model the lens. The first value is the radius of curvature, or simply the radius of the sphere the lens surface would form if extended. The second value is the thickness, which is the distance from one surface to the next. Then there is the index of refraction, which tells us how much the light rays are bent after the intersection. Last but not least there is the aperture, which defines the maximum radius of the lens element.

A special case is the aperture element, which has a radius of curvature of 0.
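In code, every line of the file maps naturally onto a small struct; a reading sketch (struct, field and function names are illustrative, not necessarily the shader’s):

#include <fstream>
#include <sstream>
#include <string>
#include <vector>

struct LensElement {
    float curvatureRadius;  // 0.0 marks the aperture stop
    float thickness;        // distance to the next surface
    float ior;              // index of refraction behind this surface
    float aperture;         // maximum radius of the element
};

std::vector<LensElement> readLensFile(const std::string &path) {
    std::vector<LensElement> elements;
    std::ifstream file(path);
    std::string line;
    while (std::getline(file, line)) {
        std::istringstream tokens(line);
        LensElement e;
        if (tokens >> e.curvatureRadius >> e.thickness >> e.ior >> e.aperture)
            elements.push_back(e);   // a possible 5th (dispersion) column is ignored here
    }
    return elements;
}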

I will skip explaining how I approached the intersection / refraction, as those would need an article of their own. I’m not the greatest at mathematics and there are plenty of explanations already around on the internet.

FOCAL LENGTH:

The first step is to figure out what the actual focal length of the given lens description file is. Usually they are more or less 100mm, but “more or less” is definitely not the way to go, so the precise focal length is found by tracing a parallel ray through the lens system. From this parallel ray, two values are measured.

1. Principal plane on the image side

2. Focal point

Principal plane: The intersection point between the original, parallel ray and the ray leaving the lens system.

Focal point: The intersection point between y=0 and the ray leaving the lens system.

By subtracting the principal plane distance from the focal point distance, we find the precise focal length.

To apply this focal length to the lens system, all distance measurements in the lens description file are multiplied by the ratio of the new focal length over the traced focal length. All lens elements essentially get scaled up or down.
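Expressed as code, with the exit ray of that parallel trace in hand (exitOrigin, exitDirection and the original ray height y0 are my own names; in this sketch z runs along the optical axis and y is the ray height), the two measurements and the rescale look roughly like this:

#include <cmath>

// Where the exit ray crosses the optical axis (y = 0): the focal point.
float tFocal      = -exitOrigin.y / exitDirection.y;
float focalPointZ = exitOrigin.z + tFocal * exitDirection.z;

// Where the exit ray is back at its original height y0: the principal plane.
float tPP             = (y0 - exitOrigin.y) / exitDirection.y;
float principalPlaneZ = exitOrigin.z + tPP * exitDirection.z;

float tracedFocalLength = std::fabs(focalPointZ - principalPlaneZ);

// Scale the whole system so it matches the user's requested focal length.
// Every distance in the description (radii, thicknesses, apertures) scales along.
float scale = userFocalLength / tracedFocalLength;
for (LensElement &e : elements) {
    e.curvatureRadius *= scale;
    e.thickness       *= scale;
    e.aperture        *= scale;
}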

APERTURE RADIUS:

Given an f-stop, it is necessary to calculate the actual aperture radius. This is trivial, as it is the focal length over 2*fStop.

IMAGE DISTANCE:

This is found by tracing a ray from the object side of the lens into the image side. The ray starts at the point we want to focus at, and hits the lens at a very small angle since lens distortion is minimal in the center.

Where the exit ray intersects with y=0, the image will be in focus. So what is the logical thing to do here? Simply place our image sensor at this distance. It is a good idea to leave the last lens element at the x=0 origin and move the image sensor instead, otherwise the distance between the focus point and the last lens element changes again. This shifting of all lens elements is one of the first steps after reading in the lens data. Calculating the image distance is the last bit of important pre-computation. Now let’s move to the calculation of individual camera rays.

CAMERA RAYS:

Aha! The main ray creation loop. The whole idea here is to set the origin and direction of a ray before sending it into the lens system. After the origin and direction have been updated throughout the tracing process inside the lens, the final origin and direction are handed over to Arnold to do its thing with.

First, a point on the image sensor is chosen. This point is directly linked to the current pixel in the image. It looks something like this:

origin = {input->sx * (sensorWidth * 0.5), input->sy * (sensorHeight * 0.5), originShift}

Then a point on the lens is chosen. It is important that:

1. It is as uniformly distributed as possible

2. No rays get wasted (so it needs to be in the unit disk domain, not unit square!)

One of the best and fastest solutions to this problem is the concentric mapping method developed by Shirley. This maps random points on the unit square uniformly onto the unit disk.

concentric_mapping_l2program

L2Program.co.uk made a cool visualisation.
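For reference, the concentric mapping itself is only a few lines; this is Shirley’s formulation, with u and v being random numbers in [0, 1):

#include <cmath>

// Maps a point on the unit square to a point on the unit disk while
// preserving uniformity (Shirley's concentric mapping).
void concentricDiskSample(float u, float v, float &dx, float &dy) {
    const float PI = 3.14159265f;

    // Remap to [-1, 1]^2
    float ox = 2.0f * u - 1.0f;
    float oy = 2.0f * v - 1.0f;
    if (ox == 0.0f && oy == 0.0f) { dx = dy = 0.0f; return; }

    float r, theta;
    if (std::fabs(ox) > std::fabs(oy)) {
        r = ox;
        theta = (PI / 4.0f) * (oy / ox);
    } else {
        r = oy;
        theta = (PI / 2.0f) - (PI / 4.0f) * (ox / oy);
    }
    dx = r * std::cos(theta);
    dy = r * std::sin(theta);
}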

When you enable the custom bokeh shape function, this concentric mapping is replaced by my own mapping. Check out the image based bokeh technical section for that. The custom bokeh function distributes samples over the image according to the pixel intensities.

Now an initial direction needs to be calculated. Since we have a point on the lens and a point on the sensor, the direction vector can be calculated simply by subtracting the origin point from the lens point. It is important to scale up the lens coordinates from the unit coordinates to the aperture of the first lens element.

direction = {(lens.x * lenses[0].aperture) - origin.x, (lens.y * lenses[0].aperture) - origin.y, -lenses[0].thickness}

Now the tracing function can start, updating the origin and direction as the ray is traced through the lens. If the ray doesn’t make it through the lens, another position on the lens is chosen and the process starts over until it gets through or hits a hard-coded maximum.
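The retry loop is nothing fancy. In sketch form, with traceThroughLens, rand01, sensorPoint and weight standing in for the real tracing routine, random number source, chosen sensor position and ray weight:

const int maxTries = 15;                 // illustrative cap, the real value is hard-coded in the shader
bool throughLens = false;
for (int attempt = 0; attempt < maxTries && !throughLens; ++attempt) {
    // pick a (new) point on the first lens element
    float lensU, lensV;
    concentricDiskSample(rand01(), rand01(), lensU, lensV);   // or the image based sampler

    origin = sensorPoint;                                     // reset to the chosen sensor position
    direction.x = lensU * lenses[0].aperture - origin.x;
    direction.y = lensV * lenses[0].aperture - origin.y;
    direction.z = -lenses[0].thickness;

    // updates origin and direction in place, returns false if the ray is blocked
    throughLens = traceThroughLens(lenses, origin, direction);
}
if (!throughLens)
    weight = 0.0f;                                            // give up: the ray ends up vignetted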

If the LUT pre-calculation is enabled, the lens coordinates are manipulated further so that as few camera rays as possible fail on the first try. This is a bit too complex for this general writeup, so check out the code on Github if you’re interested.


THIN-LENS

LENS MODEL PRINCIPLES

The thin-lens approximation is essentially an extension of the pinhole camera model. Instead of directing the camera rays through an infinitely small point, they now get distributed uniformly on a unit disk.

For a more lengthy explanation on how this works, I recommend reading this article.
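For reference, the textbook thin-lens sampling looks roughly like this. This is the generic version, not necessarily ZOIC’s exact code: direction starts out as the pinhole ray direction for the current pixel, the camera looks down +z, and apertureRadius = focalLength / (2 * fStop).

// Uniform point on the lens disk, scaled to the aperture radius.
float lensU, lensV;
concentricDiskSample(rand01(), rand01(), lensU, lensV);
const float lensX = lensU * apertureRadius;
const float lensY = lensV * apertureRadius;

// Point where the original pinhole ray would pierce the plane of focus.
const float ft     = focalDistance / direction.z;
const float focusX = direction.x * ft;
const float focusY = direction.y * ft;

// The depth-of-field ray starts on the lens and aims at that focus point.
origin.x = lensX;   origin.y = lensY;   origin.z = 0.0f;
direction.x = focusX - lensX;
direction.y = focusY - lensY;
direction.z = focalDistance;
// (normalise direction afterwards)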

EMPIRICAL OPTICAL VIGNETTING

Sometimes referred to as the cat-eye effect, this is a lens imperfection that is caused not by the glass but by the outer edge of the lens, also known as the exit aperture. When the main aperture is fully open, light coming into the camera from steep angles is partially blocked by the edge of the lens. This creates cat-eye-like bokeh shapes near the edges of an image. It should be noted that this lens effect is purely mechanical and should not be confused with lens distortion.

thinlens_50mm_opticalvignetting

Optical Vignetting – Distance:

The distance from the actual aperture to the virtual aperture. Increasing this increases the amount of optical vignetting.

Optical Vignetting – Radius:

The radius of the second, virtual aperture. Use this in combination with the distance to achieve the right amount of optical vignetting.


TECHNICAL / HOW DOES THIS WORK?

I implemented this lens effect by creating a second, virtual aperture at a certain distance behind the main aperture, simulating the exit aperture. Because I already know the intersection point with the main aperture, obtained during the depth of field calculations, and I also know the distance at which the virtual aperture is placed, it is easy to find the new intersection point by linearly scaling the ray direction vector.

All that is left then is to re-center this point by subtracting the origin coordinates. If this intersection point lies within the radius of the virtual aperture, the ray can continue. If it falls outside this radius, the ray is marked as failed and the function repeats with different lens coordinates until the point falls within the radius of the virtual aperture. If it hits a hard-coded number of tries, the ray is vignetted anyway.
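A sketch of that test, assuming apertureHit is the intersection point on the main aperture, direction the ray direction, and opticalVignettingDistance / opticalVignettingRadius the two user parameters described above (all names are mine):

// New intersection point on the virtual aperture plane, found by scaling
// the ray direction to cover the extra distance behind the main aperture.
const float t = opticalVignettingDistance / direction.z;
const float hitX = apertureHit.x + direction.x * t;
const float hitY = apertureHit.y + direction.y * t;

// Test against the virtual aperture radius (centred on the optical axis).
if (hitX * hitX + hitY * hitY > opticalVignettingRadius * opticalVignettingRadius) {
    // outside: resample the lens coordinates and try again,
    // vignetting the ray after a hard-coded number of failed tries
}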

Optical_vignetting_explained

Lens seen from the front and side with both a wide and small aperture. Notice the change in shape of the effective aperture when using a large aperture radius.

Optical Vignetting explained camera shader

Illustration of sensor, camera rays, main and virtual aperture.
Some rays do not intersect the second, virtual aperture and therefore get aborted.

EXTRA

Exposure:

This one is simple: it emulates light stops. With every integer increase (1.0, 2.0, 3.0, ..), the intensity of the rays, and therefore the brightness of the image, essentially doubles.
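In code that’s a one-line adjustment of the camera ray weight (weight here being whatever the shader multiplies the ray’s contribution by):

#include <cmath>
weight *= std::pow(2.0f, exposure);   // +1 stop doubles the intensity, -1 stop halves it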


MOTIVATION

I decided to write this shader because I felt like the Arnold camera is a bit limited on an artistic level. Real lens systems form optically imperfect images, and often it is these imperfections that provide that extra bit of realism that is essential to making believable CG images.

Some of the concepts behind the shader are documented to show others that these kinds of shader writing projects are doable. I’d be lying if I claimed it was easy or that it doesn’t require a lot of patience and time to figure it all out by yourself, but it is certainly doable. I kept all the information very general, so if you want to dig deeper, check out the code on Github. It’s all open source!

I also got some help along the way from some awesome people.

  • Marc-Antoine Desjardins for the help on the image sampling
  • Benedikt Bitterli for the information on empirical optical vignetting
  • Tom Minor for getting me started with C++
  • Brian Scherbinski for the initial Windows compile
  • Gaetan Guidet for the early C++ cleanup and improvements
  • Daniel Hennies for the C4D interface files

CHANGELOG

2.0

  • Introduction of raytraced lens model
  • Better sampling approach for thin-lens model when using optical vignetting

1.1.1

  • Removed OIIO dependency

1.1

  • Introduction of empirical edge highlights
  • Exposure control

1.0

  • Introduction of image based bokeh shapes
  • Introduction of empirical optical vignetting

All releases with download links are listed here.


BUILD ZOIC YOURSELF

I use g++ on OSX/Linux and Visual Studio on Windows. If you improve on my work, make sure to message me or send a pull request on Github!

OSX:

g++ -O3 -std=c++11 -I"$ARNOLDPATH/Arnold-X.X.X.X-darwin/include" -L"$ARNOLDPATH/Arnold-X.X.X.X-darwin/bin" -lai -dynamiclib $ZOICPATH/zoic.cpp -o $ZOICPATH/zoic.dylib

LINUX:

g++ -O3 -std=c++11 -o $ZOICPATH/zoic.os -c -fPIC -D_LINUX -I$ARNOLDPATH/Arnold-X.X.X.X-linux/include $ZOICPATH/zoic.cpp

g++ -o $ZOICPATH/zoic.so -shared $ZOICPATH/zoic.os -L$ARNOLDPATH/Arnold-X.X.X.X-linux/bin -lai

WINDOWS:

Fork the repository on Github and get the VS_PROJECT folder. Open up the Visual Studio project and change the directories of the Arnold libraries in the project properties. Then just hit compile, and you should be good to go.
