How can I do glossy reflections using the Standard Surface shader?

Adjust the specular_roughness parameter to control how blurry your reflections are. The lower the value, the sharper the reflection; in the limit, a value of 0 will give you a perfectly sharp mirror reflection.

Fireflies, why do they appear and how can they be avoided?

Certain scenes/configurations suffer from a form of sampling noise commonly referred to as "spike noise", or "fireflies": isolated, super-bright pixels that jump around from frame to frame in an animation. This is especially noticeable in recursive specular reflections involving both diffuse and sharp specular surfaces. This noise is very difficult to remove by just increasing the number of samples in the renderer. There are several ways to fix the noise.

• Make the objects with the sharp specular surfaces invisible to specular rays. This can be done by disabling the specular flag in the object's visibility parameters.
• Use a ray_switch shader in the objects, with an appropriately modified shader in the "glossy" slot - for example, a shader returning black, or perhaps a shader with a bigger specular_roughness value.

Are caustics possible?

Given that Arnold uses uni-directional path tracing, "soft" caustics originating at specular surfaces are perfectly possible, as are caustics coming from big sources of indirect light. The caustics switches in the standard shader let the diffuse GI rays "see" the mirror reflection, specular reflection and refraction of the surfaces they hit. By default, GI rays only see the direct and indirect diffuse components.

On the other hand, "hard" caustics emanating from spatially small but bright direct light sources, for example caustics from a spotlight through a glass of cognac, are currently not possible. One possible workaround is to light the scene with light-emitting geometry, setting the emission values really high (20-100) and playing with the size of the emitter. However, you would have to use really high sample settings to avoid grain.

Other renderers achieve hard caustics more or less easily with the photon mapping technique. At Solid Angle we dislike photon mapping because it is a biased, memory-hungry technique that is prone to artifacts, blurring and obscure settings, doesn't scale well with complex scenes, and doesn't work well with interactivity/IPR.

Bump mapping is not working when connecting images to a Bump3D node

Bump3D works by evaluating the bump shader (and the image, in this case) at slightly offset points around the shading point P. Since only the 3D point is offset, the UV coordinates are the same in each of the lookups. These return the same texel in the image and result in no perturbation of the normal. You should use bump2d for images.
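
The failure mode can be sketched in plain Python (the texture lookup and gradient function here are simplified stand-ins, not Arnold's actual code):

```python
def texture_lookup(u, v):
    """Hypothetical image lookup: the height depends only on the UVs."""
    return (u * 7.0 + v * 13.0) % 1.0

def bump3d_gradient(u, v, eps=1e-3):
    # Bump3D offsets the 3D point P, but the UVs at the offset
    # points are unchanged, so every lookup returns the same texel.
    h0 = texture_lookup(u, v)   # at P
    h1 = texture_lookup(u, v)   # at P + epsilon (same UVs!)
    return (h1 - h0) / eps      # always 0: no normal perturbation

print(bump3d_gradient(0.3, 0.7))  # -> 0.0
```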

How to get rid of noise

Computationally, the efficient way to get rid of noise is to work from the bottom up (from the particular to the general): increase the sampling of individual lights first, then the GI and specular samples, and finally the AA samples, which act as a global multiplier of the whole sampling process. However, the only way to improve the quality and reduce the noise of motion blur and depth of field is to increase the AA samples. In that case, the AA increase allows you to decrease the other sampling rates (GI/specular/light) to compensate. To sum up, you will get almost the same render time with AA=1 and GI=9 as with AA=9 and GI=1, but at AA=9 you will have much better motion blur and depth of field quality. More information can be found here and a workflow here.

What does Min Pixel Width do for curves?

min_pixel_width is a threshold parameter that limits how thin curve shapes can become with respect to the pixel size. If you set min_pixel_width to a value of 1 pixel, then no matter how thin the curves are or how far they are from the camera, they will be "thickened" enough to appear 1 pixel wide on screen. Wider strands are easier to sample, so they will tend to show far fewer aliasing artifacts.

The problem with simply thickening curves like this is that they will start to take on a wiry or straw-like appearance, because they are much wider and blocking much more of the background than they should be. A well-established method of giving a softer look to thick strands is to map their opacity along their length; the internal user data "geo_opacity" is an automatic way to do this.

The value of the "geo_opacity" user data field for strands that are already thicker on screen than the min_pixel_width threshold will always be 1.0, but strands that had to be thickened to meet the threshold will get lower geo_opacity values in proportion to the amount of thickening. If a shader reads this value and uses it to scale its out_opacity result, then the thickening of the curves is properly compensated and thin curves retain their soft appearance. The result is not exactly the same as the curves without thickening, but on average the appearance is very similar and can be much easier for the raytracer to sample.
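
A rough sketch of the compensation described above (the exact internal formula is an assumption here; Arnold's real implementation may differ):

```python
def screen_width_and_opacity(true_width_px, min_pixel_width):
    """Return (on-screen width, geo_opacity) for a strand, per the
    thickening + opacity compensation described above (sketch only)."""
    if true_width_px >= min_pixel_width:
        return true_width_px, 1.0           # no thickening needed
    # Thicken to the threshold and lower opacity in proportion,
    # so the strand's average "coverage" is preserved.
    return min_pixel_width, true_width_px / min_pixel_width

print(screen_width_and_opacity(0.5, 1.0))   # thin strand  -> (1.0, 0.5)
print(screen_width_and_opacity(2.0, 1.0))   # thick strand -> (2.0, 1.0)
```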

In practice, a min_pixel_width setting of 1.0 is probably too high, making the strands look too soft and the difference between using and not using the technique quite noticeable. You will usually get better results with values in the 0.1-0.5 pixel range. Also, the higher the min_pixel_width, the slower the render, because the renderer will make the hairs more transparent to compensate for their increased thickness. For example, a value of 0.25 means that the hair geometry will never be thinner than 1/4 of a pixel, so you can get good antialiasing with AA=4 samples. A value of 0.125 (or 1/8) will need at least AA=8 to get good antialiasing, etc.
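
The relationship between min_pixel_width and the AA samples needed can be expressed as a simple rule of thumb (a sketch of the reasoning above, not Arnold code):

```python
import math

def min_aa_for_mpw(min_pixel_width):
    # A strand is never thinner than min_pixel_width pixels, so an AA
    # grid whose per-axis sample spacing is at most that width can
    # resolve it without aliasing.
    return math.ceil(1.0 / min_pixel_width)

print(min_aa_for_mpw(0.25))   # -> 4
print(min_aa_for_mpw(0.125))  # -> 8
```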

What does autobump do for polymeshes?

When autobump is enabled, Arnold makes a copy of all of the vertices of a mesh prior to displacement (let's call that the "reference" mesh, or Pref). Prior to shading at some surface point on the displaced surface P, the equivalent Pref for that point is found on the non-displaced surface and the displacement shader is evaluated there (at Pref) to estimate what would be the equivalent normal at P if we had subdivided the polymesh at an insanely high tessellation rate.

The main difference between Arnold's autobump and using the displacement shader for bump mapping (say, with the bump2d node) is that autobump has access to Pref, whereas bump2d does not and would be executing the displacement shader on already-displaced points, which could "compound" the displacement amounts.

The only extra storage is for copying P prior to displacement. There is no analysis of the displacement map; Arnold displaces vertices purely based on where they "land" in the displacement map (or procedural), regardless of whether they happen to "hit" a high-frequency spike or not.

Autobump does not work

The autobump algorithm needs UV coordinates to compute surface tangents. Make sure your poly mesh has a UV set applied.

How is transparency handled?

Arnold has two different ways of calculating transparency: transmission and opacity. They use different ray types and thus have different controls in the standard_surface shader as well as in the render options. For opacity to take effect, you must disable 'opaque' on the mesh to which the standard_surface material is assigned.

How do I work in Linear colorspace with Arnold?

Arnold automatically works in linear color space; all light and shader colors are assumed to be in the linear rendering space. For textures and render output, color spaces may be specified to convert to and from the linear rendering space.

What are .tx files?

.tx files are just tiled + mipmapped TIFF files.

The 'maketx' utility, part of OpenImageIO (or the 'txmake' utility shipped with RenderMan), can convert any image file into a .tx file. It gets slightly more confusing because it is also common to rename .exr files to .tx, since OpenEXR also supports tiling and mipmapping (which Arnold supports).

The standard libtiff library is all that's necessary to read tiled + mipmapped TIFF, through the use of the appropriate TIFF tags and library calls.

The main reason to introduce this step into the pipeline is that tiled, mipmapped images are much more efficient and cache-friendly for the image library (OIIO).

How are the settings for textures related? Why .tx files?

The texture cache - recommended settings. 

• Recommended settings for maketx and which ones are important.

• Windows batch scripts: If you copy one of these scripts to a .bat file (a simple text file with .bat as the file extension) in the same folder as maketx.exe and drag a link to it to your desktop, you can throw multiple texture images on it at once and they will get converted in one go. If you don't want the verbose output, remove the "-v", but it may help you understand what maketx does. If you don't want the window to stay visible at the end, remove the "pause".
Default settings (mipmapping and resizing):

@for %%i in (%*) do maketx.exe -v %%i

No mipmapping, but automatic resizing: 

@for %%i in (%*) do maketx.exe -v --nomipmap %%i



No resizing and no mipmapping: 

@for %%i in (%*) do maketx.exe -v --nomipmap --noresize %%i


How do I adjust the falloff of an Arnold light?

You must attach a "filter" to the light. There are several filters for different needs (e.g. gobo, barndoors, etc). To get a normal decay behavior, you don't need to do anything, since the light node will default to real-world inverse square (i.e., quadratic) decay. If you want further control you will want to use the light decay filter, which provides control over the attenuation ranges.

Why does the standard shader compute reflections twice?

The (deprecated) standard shader can compute both sharp and glossy specular reflections.

Sharp reflections are controlled by the Kr and Kr_color parameters. This type of reflection uses one ray per shader evaluation, with a depth limited by the GI_reflection_depth global option. 

Glossy reflections are controlled by the Ks, Ks_color, specular_brdf, specular_roughness and specular_anisotropy parameters. This type of reflection uses GI_specular_samples to determine how many rays to trace per shader evaluation, and the depth is limited by the GI_specular_depth global option. Note that using a specular_roughness of 0 will also give you a sharp reflection, but doing this is slower than using the pure mirror reflection code. 

These two types of reflection can coexist and have independent Fresnel controls. They are simply added together with the other components of the standard shader, without taking into account any energy conservation. We will probably end up unifying both types of reflections in the future.

What is the re-parameterization of specular_roughness in the standard surface shader, and why is it non-linear?

Testing with artists found that they didn't like a linear mapping of the radius of the specular highlight. They instead preferred a mapping that is slightly curved, with an equation of the form 1/r^3. This 1/r^3 mapping proved difficult in the Cook-Torrance and Ward-Duer BRDF cases (requiring expensive powf() calls), and caused specular_roughness to lose some of the "physical" sense of being proportional to the specular highlight's radius. We therefore opted to square the roughness parameter. This gives a result similar to the 1/r^3 mapping while maintaining some of the "physical" sense of the parameter (instead of doubling the radius each time you double the roughness, you end up quadrupling the radius).
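
As a sketch, the remapping amounts to squaring the user-facing value before it reaches the BRDF:

```python
def remap_roughness(r):
    # The user-facing specular_roughness is squared internally (per the
    # explanation above), so doubling the parameter roughly quadruples
    # the effective highlight radius.
    return r * r

print(remap_roughness(0.5))                           # -> 0.25
print(remap_roughness(0.4) / remap_roughness(0.2))    # ratio of ~4
```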

How do you capture a 360-degree snapshot from a scene in Latitude-Longitude panoramic format?

Lat/long maps can be rendered with Arnold's cylindrical camera. There is an example in the SItoA trac#638. For more information, from the command-line, type kick -info cyl_camera. Make sure that the horizontal fov is set to 360, the vertical fov is set to 180, and the camera aspect ratio is set to 1.

How does bucket size affect performance?

To simplify filtering, Arnold adds a bit of "edge slack" to each bucket in each dimension. The amount of slack is exactly 2 * pixel filter width (unless you are using a box-1 filter, which has no slack at all). If the user sets the bucket size to 32x32 with a filter width of 2, the buckets are internally extended to 36x36. This is so that each bucket has enough samples at its edges to perform pixel filtering independently of the other buckets, as inter-bucket communication would greatly complicate multithreading.

Here is an example showing the number of camera rays as the filter width is increased: 

  • 1024x778, AA samples = 1, filter width = 1, traces a total of 796672 camera rays
  • 1024x778, AA samples = 1, filter width = 2, traces a total of 900864 camera rays

The corollary is that you should not use buckets that are too small, as the percentage of "redundant" pixels grows roughly in inverse proportion to the bucket size. The default 64x64 is a good base setting, but 128x128 should be slightly faster.
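
A quick way to see the corollary is to compute the fraction of "slack" pixels per bucket for a few bucket sizes, using the 2 * filter width slack described above (plain Python, not Arnold code):

```python
def bucket_overhead(bucket_size, filter_width):
    """Fraction of extra pixels rendered per bucket due to the
    2 * filter_width 'edge slack' added in each dimension."""
    slack = 2 * filter_width
    inner = bucket_size ** 2
    outer = (bucket_size + slack) ** 2
    return (outer - inner) / inner

# Overhead shrinks as buckets grow: small buckets waste more work.
for size in (16, 32, 64, 128):
    print(size, round(bucket_overhead(size, 2), 3))
```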

Do any of the light parameters support shader networks?

Shader networks are only supported in the color parameter of the quad_light and skydome_light nodes, and in the light filters (the filters parameter) of all the lights.

How does the exposure parameter work in the lights?

In Arnold, the total intensity of a light is computed with the following formula: color * intensity * 2^exposure. You can get exactly the same output by modifying either the intensity or the exposure. For example, intensity=1, exposure=4 is the same as intensity=16, exposure=0. Increasing the exposure by 1 doubles the amount of light.

The reasoning behind this apparent redundancy is that, for some people, f-stops are a much more intuitive way of describing light brightness than raw intensity values, especially when you're directly matching values to a plate. You may be asked by the director of photography, who is used to working with camera f-stop values, to increase or decrease a certain light by "one stop". Other than that, this light parameter has nothing to do with a real camera's f-stop control. Also, working with exposure means you won't have to type in huge values like 10,000 in the intensity input if your lights have quadratic falloff (which they should).

If you are not used to working with exposure in lights, you can simply leave the exposure parameter at its default value of 0 (since 2^0 = 1, the formula then simplifies to: color * intensity * 1).
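
The formula can be checked with a few lines of Python (a sketch of the math only, not the Arnold API):

```python
def total_light(intensity, exposure, color=1.0):
    # total = color * intensity * 2^exposure  (formula above)
    return color * intensity * 2.0 ** exposure

# intensity=1, exposure=4 gives the same output as intensity=16, exposure=0
print(total_light(1, 4))    # -> 16.0
print(total_light(16, 0))   # -> 16.0
```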

How do the various subdiv_adaptive_metric modes work?

  • edge_length: patches will be subdivided until the max length of the edge is below subdiv_pixel_error (regardless of curvature).
  • flatness: patches will be subdivided until the distance to the limit surface is below subdiv_pixel_error (regardless of patch/edge size). This usually generates fewer polygons than edge_length and is therefore recommended.
  • auto: uses the flatness mode unless there is a displacement shader attached to the mesh, in which case it uses the edge_length mode. The rationale here is that if you are going to displace the mesh you probably don't want the subdiv engine to leave big polygons in flat areas that will then miss the displacement (which happens at post-subdivided vertices).

Should I use a background sky shader or a skydome_light?

The skydome_light should always be preferred, as it supports all functionality of the background while reducing noise. The background shader is considered deprecated. There are various reasons why using skydome_light is more efficient:

  • The skydome_light uses importance sampling to fire rays at bright spots in the environment, automatically achieving both soft and sharp shadows; sampling the sky shader with GI rays cannot achieve hard shadows in reasonable times without huge numbers of GI samples.
  • The environment map lookups for the skydome_light are cached rather than evaluated at render time. Since texture lookups via OIIO are very slow, this caching results in a nice speedup, usually 2-3x faster than uncached (if you are curious, you can switch to uncached lookups by setting options.enable_fast_importance_tables = false and measure the difference yourself).
  • The skydome_light is sampled with shadow rays, which can be faster than GI rays because shadow rays only need to know that any hit blocks the light (rather than finding the first hit). This also means the sampling quality of the skydome_light is controlled via skydome_light.samples, whereas the quality of a background sky is controlled via the GI_{diffuse|specular}_samples. This subtle distinction is very important: the skydome_light is direct lighting sampled with shadow rays, whereas the background sky shader is indirect lighting and therefore sampled with GI rays.

How does the ignore_list parameter from the options node work? 

It tells the renderer to ignore nodes filtered by type. The following example will ignore, at scene creation, all of the occurrences of Lambert, standard and curves nodes:




 ignore_list 3 1 STRING lambert standard curves
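
The filtering semantics can be illustrated in plain Python (the node list here is hypothetical; in Arnold the filtering happens at scene creation):

```python
# Hypothetical scene: (name, type) pairs; only the type matters here.
scene_nodes = [("shader1", "lambert"), ("shader2", "standard"),
               ("hair1", "curves"), ("mesh1", "polymesh")]

ignore_list = {"lambert", "standard", "curves"}

# Nodes whose type appears in ignore_list are skipped entirely.
kept = [name for name, node_type in scene_nodes
        if node_type not in ignore_list]
print(kept)  # -> ['mesh1']
```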


Which coordinate space does Arnold assume for a vector displacement?

The displacement shader should output a vector in object space.

How do the different subdiv_uv_smoothing modes work?

The subdiv_uv_smoothing setting is used to decide which sub-set of UV vertices on the control mesh get the Catmull-Clark smoothing algorithm applied to them. Those vertices that do not get smoothing applied to them are considered to be "pinned" since the apparent effect is that their UV coordinates are exactly the same as the corresponding coordinates on the control mesh. The subdiv_uv_smoothing modes work as follows: 

  • smooth (none pinned): In this mode Catmull-Clark smoothing is applied to all vertices of the UV mesh, indiscriminately. This mode is really only useful for organic shapes with hidden UV seams whose coordinates do not have to precisely follow any particular geometric edges.
  • pin_corners: Catmull-Clark smoothing is applied to all vertices of the UV mesh except for those that are connected to only two edges in UV coordinate space (valence = 2). This mode is the default in Arnold (for legacy reasons), however, pin_borders is probably a more useful default setting in practice.
  • pin_borders: Catmull-Clark smoothing is applied to all vertices of the UV mesh except for those that are connected to an edge that forms part of only one polygon. This mode is possibly the most useful of the four and we would suggest trying this one first. In this mode it is guaranteed that the UV coordinate seams on the subdivided mesh will exactly match the corresponding edges of the control mesh, making it much easier to place textures at these seams while still applying Catmull-Clark smoothing on all interior vertices.
  • linear (all pinned): Catmull-Clark smoothing is not applied to any vertex. This mode can prove useful when rendering objects with textures generated directly on a subdivided surface, like ZBrush models exported in "linear" mode. 

The different modes are illustrated in the following image: 

How are the polymesh normals computed when subdivision is enabled?

Vertex normals will be computed using the limit subdivision surface, overriding any explicit values set in the nlist parameter. The vertex normals specified in nlist are only used for non-subdivided meshes.

What is the Z value for rays that do not hit any object? 

The Z value for rays that do not hit any object is controlled by the far_clip parameter on the camera. By default, this camera parameter has a value of 1.0e30f (AI_INFINITE).

Why does iterating through an AiSampler after the first split in the ray tree always return one sample?

Splits are typically caused by BSDF integration, which can fire multiple rays for a single AA sample. If this was done at every depth, there would be a combinatorial explosion in the number of rays as the ray depths were increased. Therefore, Arnold "splits" the integrals into multiple rays only once, at the first hit. After that first split, we only follow one single ray, in the spirit of pure path tracing. This may seem like it introduces severe noise, as intuitively the higher bounces seem undersampled compared to the first bounce. But in fact what we are doing is concentrating most of the rays where they matter the most. 
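
The difference in ray counts can be illustrated with a small calculation (a sketch of the reasoning above under simplified assumptions, not Arnold's actual sampler):

```python
def total_rays(samples, depth, split_every_bounce):
    """Total secondary rays for one AA sample, comparing splitting at
    every bounce against Arnold's split-once-then-follow-one-path."""
    if split_every_bounce:
        # samples rays at depth 1, samples^2 at depth 2, ... exponential
        return sum(samples ** d for d in range(1, depth + 1))
    # split only at the first hit, then one single ray per extra bounce
    return samples + samples * (depth - 1)

print(total_rays(8, 4, True))    # -> 8 + 64 + 512 + 4096 = 4680
print(total_rays(8, 4, False))   # -> 8 + 24 = 32
```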

Since the AiSampler API was added for users writing their own custom BRDFs, we decided it was best to automatically handle splitting versus no splitting at the renderer level. Thus the sample count is automatically reduced after the first split, which affects the sample counts for both our internal integrators as well as any custom integrators that use AiSampler.

How do the motion_start and motion_end parameters from the cameras work?

These parameters are used to remap Arnold's "absolute time" to the "relative time" at which the motion keys are specified.

motion_start and motion_end are the times at which the first and last motion key of the shape are sampled. Other motion keys must be uniformly spaced within this range. By convention, the times are frame relative. For example, start and end times -0.5 to 0.5 indicate that the motion keys were sampled midway between the previous and current frame, and current frame and next frame. This is applied to cameras, lights, and shapes.
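
The uniform spacing of the motion keys can be sketched as follows (plain Python, not the Arnold API):

```python
def motion_key_times(motion_start, motion_end, num_keys):
    """Uniformly spaced, frame-relative sample times for num_keys
    motion keys, per the convention described above."""
    if num_keys == 1:
        return [motion_start]
    step = (motion_end - motion_start) / (num_keys - 1)
    return [motion_start + i * step for i in range(num_keys)]

# Three keys between the previous-frame midpoint and next-frame midpoint:
print(motion_key_times(-0.5, 0.5, 3))  # -> [-0.5, 0.0, 0.5]
```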

Is there a way to iterate over regular and user params for a given AtNode?

No. Conceptually, they are different things. The parameter list iterator is handled by the AtNodeEntry, which defines the internal structure for all nodes belonging to that class. It will traverse all parameter definitions, returning elements of the AtParamEntry type, which provides parameter name, type, default value... So, you are traversing the list of parameter definitions, which is stored in the AtNodeEntry (i.e. once for all nodes of the same type).

In the case of user data, it only exists on a specific node and is not shared by all nodes of the same class. So, for example, you could have some user data on one polymesh, different user data on another polymesh, and a third polymesh with no user data at all.


