General FAQ


General Arnold rendering issues:

How can I do glossy reflections using the Arnold Standard shader?

In the specular section, increase the "Scale" parameter. The "Roughness" parameter controls how blurry your reflections are: the lower the value, the sharper the reflection. In the limit, a value of 0 will give you a perfectly sharp mirror reflection. You also have separate controls to adjust the intensity of the direct and indirect specular reflections. Direct specular reflections come from regular light sources (spot/point/area/distant lights), and indirect specular reflections come from other objects or an environment map.
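
For illustration, here is a minimal sketch using Arnold's C API that sets up glossy specular on a standard shader; the node name "my_standard" is hypothetical, and the direct_specular/indirect_specular parameter names are assumptions based on the description above.

#include <ai.h>

// Sketch: configure glossy reflections on an existing standard shader.
// "my_standard" is a hypothetical node name in an already-loaded scene.
void setup_glossy_specular()
{
   AtNode* shader = AiNodeLookUpByName("my_standard");
   AiNodeSetFlt(shader, "Ks", 0.8f);                   // specular "Scale" (weight)
   AiNodeSetRGB(shader, "Ks_color", 1.0f, 1.0f, 1.0f);
   AiNodeSetFlt(shader, "specular_roughness", 0.3f);   // 0 = mirror sharp, higher = blurrier
   AiNodeSetFlt(shader, "direct_specular", 1.0f);      // reflections of light sources
   AiNodeSetFlt(shader, "indirect_specular", 1.0f);    // reflections of objects / environment
}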

Fireflies, why do they appear and how can they be avoided?

Certain scenes/configurations suffer from a form of sampling noise commonly referred to as "spike noise", or "fireflies": isolated, super bright pixels that jump around from frame to frame in an animation. This is especially noticeable in recursive glossy reflections involving both diffuse and sharp glossy surfaces. This noise is very difficult to remove by just increasing the number of samples in the renderer. There are several ways to fix the noise.

• Make the objects with the sharp glossy surfaces invisible to glossy rays. This can be done by disabling the glossy flag in the object's visibility parameters (see the sketch after this list).
• Use a ray_switch shader on those objects, with an appropriately modified shader in the "glossy" slot - for example, a shader returning black, or a shader with a bigger specular_roughness value.
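
As an illustration of the first approach, here is a minimal sketch using Arnold's C API that removes a mesh from glossy rays; "chrome_sphere" is a hypothetical node name.

#include <ai.h>

// Sketch: make a mesh invisible to glossy rays so it cannot spawn fireflies
// in recursive glossy reflections.
void hide_from_glossy_rays()
{
   AtNode* mesh = AiNodeLookUpByName("chrome_sphere");
   AiNodeSetByte(mesh, "visibility", AI_RAY_ALL & ~AI_RAY_GLOSSY);
}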

Are caustics possible?

Given that Arnold uses uni-directional path tracing, "soft" caustics originating at glossy surfaces are perfectly possible, as are caustics coming from big sources of indirect light. The caustics switches in the standard shader let you tell the diffuse GI rays to "see" the mirror reflection, glossy reflection and refraction of the shaders on the surfaces they hit; by default, GI rays only see the direct and indirect diffuse components of those shaders.

On the other hand, "hard" caustics emanating from spatially small but bright direct light sources, for example caustics from a spotlight through a glass of cognac, are currently not possible. One possible workaround is to light the scene with light-emitting geometry, set the emission values really high (20-100) and play with the size of the emitter. However, you would need really high sample settings to avoid grain.

Other renderers more or less easily achieve hard caustics with the photon mapping technique. At Solid Angle we dislike photon mapping because it is a biased, memory-hungry technique that is prone to artifacts and blurring, has obscure settings, doesn't scale well with complex scenes and doesn't work well with interactivity/IPR.

Bump mapping is not working when connecting images to a Bump3D node

Bump3D works by evaluating the bump shader (and thus the image in this case) at points slightly offset from P (P + epsilon along different directions). Since only the point P is offset and the UV coordinates are left unchanged, each of these lookups returns the same texel in the image, resulting in no perturbation of the normal. You should use Bump2D for images.

How to get rid of noise

Computationally, the most efficient way to get rid of noise is to work from the bottom up (from the particular to the general): increase the sampling of individual lights first, then the GI and glossy samples, and finally the AA samples, which act as a global multiplier of the whole sampling process. However, the only way to improve the quality and reduce the noise of motion blur and depth of field is to increase the AA samples. In that case, the AA increase allows you to decrease the other sampling rates (GI/glossy/light) to compensate. To sum up, you will get almost the same render time with AA=1 and GI=9 as with AA=9 and GI=1, but at AA=9 you will have much better motion blur and depth of field quality.
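
For example, the AA=9 / GI=1 combination mentioned above could be set on the options node like this (a minimal sketch using Arnold's C API; pick values that suit your scene):

#include <ai.h>

// Sketch: favor AA samples over GI samples when motion blur / DOF quality matters.
void set_sampling()
{
   AtNode* options = AiUniverseGetOptions();
   AiNodeSetInt(options, "AA_samples", 9);          // global multiplier, improves MB/DOF
   AiNodeSetInt(options, "GI_diffuse_samples", 1);  // lowered to compensate
   AiNodeSetInt(options, "GI_glossy_samples", 1);
}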

What does `min_pixel_width` do for curves?

min_pixel_width is a threshold parameter that limits how thin curve shapes can become with respect to the pixel size. If you set min_pixel_width to a value of 1 pixel, then no matter how thin the curves are, or how far away from the camera, they will be "thickened" enough to appear 1 pixel wide on screen. Wider strands are easier to sample, so they tend to show far fewer aliasing artifacts.

The problem with simply thickening curves like this is that they start to take on a wiry or straw-like appearance, because they are much wider and block much more of the background than they should. A well established method of giving a softer look to thick strands is to map their opacity along their length; the internal user data "geo_opacity" is an automatic way to do this.

The value of the "geo_opacity" user data for strands that are already thicker on screen than the min_pixel_width threshold will always be 1.0, but strands that had to be thickened to meet the threshold will get lower geo_opacity values, in proportion to the amount of thickening. If a shader reads this value and uses it to scale its out_opacity result, then the thickening of the curves is properly compensated and thin curves retain their soft appearance. The result is not exactly the same as rendering the curves without thickening, but on average the appearance is very similar and can be much easier for the raytracer to sample.

In practice a min_pixel_width setting of 1.0 is probably too high, making the strands look too soft and the difference between using and not using the technique quite noticeable. You will usually get better results with values in the 0.1-0.5 pixel range. Also, the higher the min_pixel_width, the slower the render, because the renderer makes the hairs more transparent to compensate for their increased thickness. For example, a value of 0.25 means that the hair geometry will never be thinner than 1/4 of a pixel, so you can get good antialiasing with AA=4 samples. A value of 0.125 (or 1/8) will need at least AA=8 to get good antialiasing, etc.
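
A minimal sketch of setting this with Arnold's C API ("hair_curves" is a hypothetical node name); a hair shader that reads the "geo_opacity" user data and multiplies it into its opacity output will then compensate for the thickening:

#include <ai.h>

// Sketch: thicken sub-pixel hairs to at least 1/4 of a pixel on screen.
void set_min_pixel_width()
{
   AtNode* curves = AiNodeLookUpByName("hair_curves");
   AiNodeSetFlt(curves, "min_pixel_width", 0.25f);
}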

What does autobump do for polymeshes?

When autobump is enabled, Arnold makes a copy of all of the vertices of a mesh prior to displacement (let's call that the "reference" mesh, or Pref). Prior to shading at some surface point on the displaced surface P, the equivalent Pref for that point is found on the non-displaced surface and the displacement shader is evaluated there (at Pref) to estimate what would be the equivalent normal at P if we had subdivided the polymesh at an insanely high tessellation rate.

The main difference between Arnold's autobump and using the displacement shader for bump mapping (say with the bump2d node) is that autobump has access to Pref, whereas bump2d does not and would be executing the displacement shader on already-displaced points, which could "compound" the displacement amounts.

The only extra storage is for copying P prior to displacement. There is no analysis of the displacement map; Arnold displaces vertices purely based on where they "land" in the displacement map (or procedural), regardless of whether they happen to "hit" a high-frequency spike or not.

Autobump does not work

The autobump algorithm needs UV coordinates to compute surface tangents. Make sure your polymesh has a UV set applied.

How is transparency handled?

Arnold has two different ways of calculating transparency: refraction and opacity. They use different ray types and thus have different controls in the Ai Standard shader as well as in the render options. For opacity to take effect, you must disable the 'opaque' flag on the mesh that the Ai Standard material is assigned to.
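
A minimal sketch of that last step with Arnold's C API; "glass_mesh" is a hypothetical node name.

#include <ai.h>

// Sketch: allow opacity/transparency to be evaluated on a mesh by disabling
// the 'opaque' optimization flag.
void enable_transparency()
{
   AtNode* mesh = AiNodeLookUpByName("glass_mesh");
   AiNodeSetBool(mesh, "opaque", false);
}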

How do I work in Linear colorspace with Arnold?

Set texture_gamma, light_gamma and shader_gamma to the value of the gamma you want to correct (usually 2.2).
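
For reference, a minimal sketch of setting these three options with Arnold's C API:

#include <ai.h>

// Sketch: linear workflow with sRGB (gamma 2.2) textures, lights and shader colors.
void set_linear_workflow()
{
   AtNode* options = AiUniverseGetOptions();
   AiNodeSetFlt(options, "texture_gamma", 2.2f);
   AiNodeSetFlt(options, "light_gamma", 2.2f);
   AiNodeSetFlt(options, "shader_gamma", 2.2f);
}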

What are .tx files?

.tx are just tiled+mipmapped tiff files. 

The 'maketx' utility that comes with OpenImageIO (or the 'txmake' utility shipped with RenderMan) can convert any image file into a .tx file. It gets slightly more confusing because it is also common to rename .exr files to .tx, since OpenEXR also supports tiling and mipmapping (which Arnold supports). 

The standard libtiff library is all that's necessary to read tiled+mipmapped TIFF files, through the use of the appropriate TIFF tags and library calls. 

What makes the difference, and the main reason to introduce this step into the pipeline, is that tiled, mipmapped images are much more efficient and cache-friendly for the image library (OIIO) to work with.

How are the settings for textures related? Why .tx files?

The texture cache - recommended settings. 

• Recommended settings for maketx, and which ones are important. 

• Windows batch scripts: if you copy one of these scripts into a .bat file (a simple text file with .bat as the file extension) in the same folder as maketx.exe, and drag a link to it onto your desktop, you can throw multiple texture images at it at once and they will all be converted in one go. If you don't want the verbose output, remove the "-v", but it may help you understand what maketx does. If you don't want the window to stay open at the end, remove the "pause".
 
Default settings (mipmap and resizing): 

@for %%i in (%*) do maketx.exe -v %%i

pause
 
No mipmapping, but automatic resizing: 

@for %%i in (%*) do maketx.exe -v --nomipmap %%i

pause

 

No resizing and no mipmapping: 

@for %%i in (%*) do maketx.exe -v --nomipmap --noresize %%i

pause


How do I adjust the falloff of an Arnold light?

You attach a "filter" to the light (unless you just want to turn the decay off completely). There are several filters for different needs (e.g. gobo, barndoors, etc.). To get normal decay behavior you don't need to do anything, since the light node defaults to real-world inverse-square (i.e. quadratic) decay. If you want further control, use the light_decay filter. Note that the type of decay itself (real-world quadratic falloff, or no falloff) is specified on the light node itself; the light_decay filter provides further control via attenuation ranges (altering the range over which the decay occurs).

Why does the standard shader compute reflections twice?

The standard shader can compute both sharp and glossy reflections. 

Sharp reflections are controlled by the Kr and Kr_color parameters. This type of reflection uses one ray per shader evaluation, with a depth limited by the GI_reflection_depth global option. 

Glossy reflections are controlled by the Ks, Ks_color, specular_brdf, specular_roughness and specular_anisotropy parameters. This type of reflection uses GI_glossy_samples to determine how many rays to trace per shader evaluation, and the depth is limited by the GI_glossy_depth global option. Note that a specular_roughness of 0 will also give you a sharp reflection, but doing this is slower than using the pure mirror reflection code. 

These two types of reflection can coexist and have independent Fresnel controls. They are simply added together with the other components of the standard shader, without taking into account any energy conservation. We will probably end up unifying both types of reflections in the future.
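
As a quick illustration of the two parameter groups coexisting on one shader (a sketch with Arnold's C API; "my_standard" is a hypothetical node name):

#include <ai.h>

// Sketch: enable both reflection types on the same standard shader.
void set_both_reflection_types()
{
   AtNode* shader = AiNodeLookUpByName("my_standard");
   // Sharp mirror reflection (one ray, limited by GI_reflection_depth)
   AiNodeSetFlt(shader, "Kr", 0.5f);
   AiNodeSetRGB(shader, "Kr_color", 1.0f, 1.0f, 1.0f);
   // Glossy reflection (GI_glossy_samples rays, limited by GI_glossy_depth)
   AiNodeSetFlt(shader, "Ks", 0.5f);
   AiNodeSetFlt(shader, "specular_roughness", 0.2f); // keep > 0, otherwise prefer Kr
}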

What is the re-parameterization of specular_roughness in the standard shader, and why is it non-linear?

Following testing and feedback from artists, it was found that they didn't like the linear mapping of the radius of the specular highlight. They instead preferred a mapping that is slightly curved, with an equation of the type 1/r^3. This 1/r^3 mapping proved difficult in the Cook-Torrance and Ward-Duer BRDF cases (requiring expensive powf() calls), and caused specular_roughness to lose some of the "physical" sense of being proportional to the specular highlight's radius. We therefore opted to square the roughness parameter. This gives a result similar to the 1/r^3 mapping while maintaining some of the "physical" sense of the parameter (instead of doubling the radius each time you double the roughness, you end up quadrupling it).

Can the diffuse_cache in the hair shader produce artifacts?

The caching of illumination happens right at the hair control points, so yes, you may get artifacts in certain situations:

  • If the first control point of the hair is just below the scalp, the point will be occluded from all lights and environment illumination, and this occlusion will "leak" into the second control point which is above the scalp. So you'd get some shadowing in the roots.
  • If the number of control points is animated, the illumination can flicker.
  • The cached illumination is not motion blurred.

In any case, real hair does not have a diffuse component, this is just a hack to get some sort of bouncing in the hair. The cache also uses memory and may not reach 100% thread scalability (as multiple threads may need to write into the cache at the same time). The plan is to deprecate the hair diffuse cache, as soon as we have a more physically based model for hair with multiple scattering etc.

How do you capture a 360-degree snapshot from a scene in Latitude-Longitude panoramic format?

Lat/long maps can be rendered with Arnold's cylindrical camera. There is an example in the SItoA trac#638. For more information, from the command-line, type kick -info cyl_camera. Make sure that the horizontal fov is set to 360, the vertical fov is set to 180, and the camera aspect ratio is set to 1.
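
A minimal sketch of creating such a camera with Arnold's C API; the horizontal_fov/vertical_fov and aspect_ratio parameter names are taken from kick -info output and should be treated as assumptions if your version differs.

#include <ai.h>

// Sketch: cylindrical camera for a lat-long (360x180) panorama.
void create_latlong_camera()
{
   AtNode* cam = AiNode("cyl_camera");
   AiNodeSetStr(cam, "name", "latlong_cam");
   AiNodeSetFlt(cam, "horizontal_fov", 360.0f);
   AiNodeSetFlt(cam, "vertical_fov", 180.0f);

   AtNode* options = AiUniverseGetOptions();
   AiNodeSetPtr(options, "camera", cam);         // render through this camera
   AiNodeSetInt(options, "xres", 2048);          // 2:1 lat-long resolution
   AiNodeSetInt(options, "yres", 1024);
   AiNodeSetFlt(options, "aspect_ratio", 1.0f);  // square pixels
}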

How does bucket size affect performance?

To simplify filtering, Arnold adds a bit of "edge slack" to each bucket in each dimension. The amount of "edge slack" is exactly 2 * pixel filter width (unless you are using a box-1 filter, which has no slack at all). If the user sets the bucket size to 32x32 with a filter width of 2, the buckets are internally extended to 36x36. This is so that each bucket has enough samples at its edges to perform pixel filtering independently of the other buckets, as inter-bucket communication would greatly complicate multithreading. 

Here is an example showing the number of camera rays as the filter width is increased: 

  • 1024x778, AA samples = 1, filter width = 1, traces a total of 796672 camera rays
  • 1024x778, AA samples = 1, filter width = 2, traces a total of 900864 camera rays

The corollary is that you should not use buckets that are too small, as the percentage of "redundant" pixels grows inversely proportional to bucket size. The default 64x64 is a good base setting, but 128x128 should be slightly faster.
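
To put numbers on that, here is the slack overhead for a width-2 filter (4 extra pixels per dimension) at a few bucket sizes (a small standalone C check):

#include <stdio.h>

// Percentage of "redundant" samples due to the filtering slack (width-2 filter).
int main()
{
   int sizes[] = { 16, 32, 64, 128 };
   for (int i = 0; i < 4; i++)
   {
      int b = sizes[i];
      float overhead = ((b + 4) * (b + 4) - b * b) * 100.0f / (b * b);
      printf("%dx%d bucket -> %.1f%% redundant samples\n", b, b, overhead);
   }
   return 0;   // prints roughly 56%, 27%, 13% and 6%
}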

Do any of the light parameters support shader networks?

At the time of writing (Arnold 4.0.12.0) shader networks are only supported in the color parameter of the quad_light and skydome_light nodes, and in the light filters (the filters parameter) of all the lights.
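
For example, a texture shader can be linked to a quad_light's color with AiNodeLink (a sketch; both node names are hypothetical):

#include <ai.h>

// Sketch: drive a quad_light's color with a shader network.
void link_light_color()
{
   AtNode* light = AiNodeLookUpByName("my_quad_light");
   AtNode* tex   = AiNodeLookUpByName("my_image_shader");
   AiNodeLink(tex, "color", light);
}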

How does the exposure parameter work in the lights?

In Arnold, the total intensity of the light is computed with the following formula: color * intensity * 2^exposure. You can get exactly the same output by modifying either the intensity or the exposure. For example, intensity=1, exposure=4 is the same as intensity=16, exposure=0. Increasing the exposure by 1 results in double the amount of light. 

The reasoning behind this apparent redundancy is that, for some people, f-stops are a much more intuitive way of describing light brightness than raw intensity values, especially when you're directly matching values to a plate. You may be asked by the director of photography - who is used to working with camera f-stop values - to increase or decrease a certain light by "one stop". Other than that, this light parameter has nothing to do with a real camera's f-stop control. Also, working with exposure means you won't have to type in huge values like 10,000 in the intensity input if your lights have quadratic falloff (which they should). 

If you are not used to working with exposure in the lights, you can simply leave the exposure parameter at its default value of 0 (since 2^0 = 1, the formula then simplifies to: color * intensity * 1).
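
A tiny standalone check of the formula and of the equivalence mentioned above:

#include <math.h>
#include <stdio.h>

// Total light output = color * intensity * 2^exposure, so these two are identical:
int main()
{
   printf("%g\n", 1.0f  * powf(2.0f, 4.0f));   // intensity=1,  exposure=4 -> 16
   printf("%g\n", 16.0f * powf(2.0f, 0.0f));   // intensity=16, exposure=0 -> 16
   return 0;
}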

How do the various subdiv_adaptive_metric modes work?

  • edge_length: patches will be subdivided until the max length of the edge is below subdiv_pixel_error (regardless of curvature).
  • flatness: patches will be subdivided until the distance to the limit surface is below subdiv_pixel_error (regardless of patch/edge size). This usually generates fewer polygons than edge_length and is therefore recommended.
  • auto: uses the flatness mode unless there is a displacement shader attached to the mesh, in which case it uses the edge_length mode. The rationale here is that if you are going to displace the mesh you probably don't want the subdiv engine to leave big polygons in flat areas that will then miss the displacement (which happens at post-subdivided vertices).
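
A minimal sketch of setting these on a polymesh with Arnold's C API; "my_mesh" is a hypothetical node name, and the exact parameter names should be treated as assumptions if your Arnold version differs.

#include <ai.h>

// Sketch: adaptive Catmull-Clark subdivision driven by the flatness metric.
void setup_adaptive_subdivision()
{
   AtNode* mesh = AiNodeLookUpByName("my_mesh");
   AiNodeSetStr(mesh, "subdiv_type", "catclark");
   AiNodeSetByte(mesh, "subdiv_iterations", 4);               // upper bound on subdivision
   AiNodeSetStr(mesh, "subdiv_adaptive_metric", "flatness");  // or "edge_length" / "auto"
   AiNodeSetFlt(mesh, "subdiv_pixel_error", 0.5f);
}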

Should I use a background sky shader or a skydome_light?

The skydome_light will most of the time be more efficient, in both speed and noise, than hitting the sky shader with GI rays. The only situation where using a sky shader may be faster than the skydome_light is when the environment texture is a constant color or has very low variance. There are various reasons why using skydome_light is more efficient:

  • The skydome_light uses importance sampling to fire rays towards bright spots in the environment, therefore automatically achieving both soft and sharp shadows; sampling the sky shader with GI rays cannot achieve hard shadows in reasonable render times, as you would need huge amounts of GI samples.
  • The environment map lookups for the skydome_light are cached rather than evaluated at render time. Since texture lookups via OIIO are very slow, this caching results in a nice speedup, usually 2-3x faster than uncached (if you are curious, you can switch to uncached lookups by setting options.enable_fast_importance_tables = false and measure the difference yourself).
  • The skydome_light is sampled with shadow rays, which can be faster than GI rays because a shadow ray only needs to know whether any hit blocks the light (rather than finding the first hit). This also means the sampling quality for the skydome_light is controlled via skydome_light.samples, whereas the quality for a background sky is controlled via the GI_{diffuse|glossy}_samples. This subtle distinction is very important: the skydome_light is direct lighting and sampled with shadow rays, whereas the background sky shader is indirect lighting and therefore sampled with GI rays.

How does the ignore_list parameter from the options node work? 

It tells the renderer to ignore nodes filtered by type. The following example will ignore, at scene creation, all of the occurrences of lambert, standard and curves nodes:

options
{
 ...
 ignore_list 3 1 STRING lambert standard curves
}


Which coordinate space does Arnold assume for a vector displacement?

The displacement shader should output a vector in object space.

How do the different subdiv_uv_smoothing modes work?

The subdiv_uv_smoothing setting is used to decide which sub-set of UV vertices on the control mesh get the Catmull-Clark smoothing algorithm applied to them. Those vertices that do not get smoothing applied to them are considered to be "pinned", since the apparent effect is that their UV coordinates are exactly the same as the corresponding coordinates on the control mesh. The subdiv_uv_smoothing modes work as follows: 

  • smooth (none pinned): In this mode Catmull-Clark smoothing is applied to all vertices of the UV mesh, indiscriminately. This mode is really only useful for organic shapes with hidden UV seams whose coordinates do not have to precisely follow any particular geometric edges.
  • pin_corners: Catmull-Clark smoothing is applied to all vertices of the UV mesh except for those that are connected to only two edges in UV coordinate space (valence = 2). This mode is the default in Arnold (for legacy reasons), however pin_borders is probably a more useful default setting in practice.
  • pin_borders: Catmull-Clark smoothing is applied to all vertices of the UV mesh except for those that are connected to an edge that forms part of only one polygon. This mode is possibly the most useful of the four and we would suggest trying this one first. In this mode it is guaranteed that the UV coordinate seams on the subdivided mesh will exactly match the corresponding edges of the control mesh, making it much easier to place textures at these seams while still applying Catmull-Clark smoothing on all interior vertices.
  • linear (all pinned): Catmull-Clark smoothing is not applied to any vertex. This mode can prove useful when rendering objects with textures generated directly on a subdivided surface, like ZBrush models exported in "linear" mode. 


How are the polymesh normals computed when subdivision is enabled?

Vertex normals will be computed using the limit subdivision surface, overriding any explicit values set in the nlist parameter. The vertex normals specified in nlist are only used for non-subdivided meshes.

What is the Z value for rays that do not hit any object? 

The Z value for rays that do not hit any object is controlled by the far_clip parameter on the camera. By default this camera parameter has a value of 1.0e30f (AI_INFINITE).


Why does iterating through an AiSampler after the first split in the ray tree always return one sample?

Splits are typically caused by the BRDF integrator methods, which can fire GI_diffuse_samples^2 or GI_glossy_samples^2 rays. If this were allowed at every bounce, there would be a combinatorial explosion in the number of rays as the ray depth increased. Therefore, Arnold "splits" the integrals into multiple rays only once, at the first hit. After that first split, we only follow one single ray, in the spirit of pure path tracing. This may seem like it introduces severe noise, as intuitively the higher bounces seem undersampled compared to the first bounce. But in fact what we are doing is concentrating most of the rays where they matter most. 

Since the AiSampler API was added for users writing their own custom BRDFs, we decided it was best to automatically handle splitting versus no splitting at the renderer level. Thus the sample count is automatically reduced after the first split, which affects the sample counts for both our internal integrators as well as any custom integrators that use AiSampler.
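
For context, this is the usual shape of an AiSampler loop inside a shader's shader_evaluate method (a hedged sketch; it assumes 'sampler' was created earlier, e.g. in node_initialize, with two dimensions per sample):

#include <ai.h>

// Sketch: typical AiSampler loop. After the first split in the ray tree, the
// iterator will return a single sample regardless of the requested count.
void integrate_something(AtShaderGlobals* sg, AtSampler* sampler)
{
   float rnd[2];
   AtSamplerIterator* it = AiSamplerIterator(sampler, sg);
   while (AiSamplerGetSample(it, rnd))
   {
      // ... build a sample direction from rnd[0] / rnd[1] and trace a ray ...
   }
}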


How does the time_samples parameter from the cameras work?

time_samples is used to remap Arnold's "absolute time" to the "relative time" of motion keys, which are specified as values between 0 and 1. If the keys are evenly spaced between 0 and 1, time_samples is not needed. But if they are not, you can remap time by specifying the time_samples array. 

The syntax is the following: 

time_samples 2 1 FLOAT 0 1 

This means an array of 2 elements with 1 key (time_samples itself cannot be motion blurred). 

As an example, let's say a camera has subframe matrix keys (M1, M2, M3) at times (0, 0.25, 1.0).

By default Arnold will assume those keys correspond to (0.0, 0.5, 1.0) and when absolute time t is equal to 0.5 Arnold will use M2.

If you set time_samples to (0, 0.5, 0.66, 0.83, 1.0) you define a linear piece-wise function F that will map 0 to 0, 1 to 1, 0.25 to 0.5, 0.5 to 0.66, etc. You can use as many linear segments as needed to make the remap smoother.

When absolute time t is equal to 0.25 Arnold will use M2, because the remap function F(0.25) = 0.5, and 0.5 in "relative" time corresponds to matrix M2.
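
In C API terms, the remap from this example could be set like this (a sketch; "mycamera" is a hypothetical node name):

#include <ai.h>

// Sketch: remap absolute time so the keys at (0, 0.25, 1.0) are reached at
// evenly spaced absolute times, as in the example above.
void set_time_samples()
{
   AtNode* cam = AiNodeLookUpByName("mycamera");
   AiNodeSetArray(cam, "time_samples",
                  AiArray(5, 1, AI_TYPE_FLOAT, 0.0f, 0.5f, 0.66f, 0.83f, 1.0f));
}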

 

Is there a way to iterate over regular and user params for a given AtNode?

No. Conceptually, they are different things. The parameter list iterator is handled by the AtNodeEntry, which defines the internal structure for all nodes belonging to that class. It traverses all parameter definitions, returning elements of the AtParamEntry type, which provide the parameter name, type, default value, and so on. So you are traversing the list of parameter definitions, which is stored in the AtNodeEntry (i.e. once for all nodes of the same type).

In the case of user data, it only exists on a specific node and is not shared by all nodes of the same class. So, for example, you could have some user data on one polymesh, some different user data on another polymesh, and a third polymesh with no user data at all.
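
For reference, the two iterators can be used back to back on the same node (a minimal sketch with Arnold's C API):

#include <ai.h>

// Sketch: list the declared parameters (from the AtNodeEntry) and then the
// user parameters (stored per node) of a given node.
void list_params(AtNode* node)
{
   // Declared parameters: shared by all nodes of this type
   AtParamIterator* pit = AiNodeEntryGetParamIterator(AiNodeGetNodeEntry(node));
   while (!AiParamIteratorFinished(pit))
   {
      const AtParamEntry* pe = AiParamIteratorGetNext(pit);
      const char* name = AiParamGetName(pe);
      AiMsgInfo("declared param: %s", name);
   }
   AiParamIteratorDestroy(pit);

   // User parameters: attached to this specific node only
   AtUserParamIterator* uit = AiNodeGetUserParamIterator(node);
   while (!AiUserParamIteratorFinished(uit))
   {
      const AtUserParamEntry* upe = AiUserParamIteratorGetNext(uit);
      const char* uname = AiUserParamGetName(upe);
      AiMsgInfo("user param: %s", uname);
   }
   AiUserParamIteratorDestroy(uit);
}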

 


 

 

