- Automatic (default): meshes with Arnold subdivision and/or displacement applied are exported as quad meshes; all others are triangulated.
- Always Use Quad: all poly meshes are always exported as quad meshes.
- Always Use Triangles: all poly meshes are always exported as triangulated meshes.
The topology of the exported shapes can be inspected by applying a utility shader with its overlay mode set to polywire.
Exporting to quads takes around three times longer than exporting to triangles.
These settings control Arnold's tessellation of subdivision surfaces. Note that, as well as the global subdivision control described below, you can also control the subdivision of an individual object, via the Arnold Properties Modifier. The actual number of subdivisions for each object will be the lower of the two values.
Max Subdivision Level
This value sets an upper limit on the number of subdivision iterations for all objects. By default, this is set to a very high value (999) which in practice has no effect. Setting this to a low value such as 1 or 2 can be useful when debugging scenes that take a long time to subdivide/tessellate.
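The interaction between a per-object subdivision value and this global cap can be sketched as follows (the function name and parameters are illustrative, not Arnold API):

```python
# Sketch: how the global cap combines with a per-object setting
# (names are illustrative, not Arnold API).

def effective_iterations(object_iterations: int, max_subdivision_level: int = 999) -> int:
    """The renderer uses the lower of the per-object value and the global cap."""
    return min(object_iterations, max_subdivision_level)

# With the default cap of 999, the object value wins:
print(effective_iterations(3))     # 3
# A debugging cap of 1 clamps every object:
print(effective_iterations(3, 1))  # 1
```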
Cube subdivided with 0, 1, 2 and 3 iterations (objects shaded with the utility shader in polywire mode).
The camera to use when determining subdivision levels of patches during adaptive subdivision. When enabled, the user provides a specific camera that will be used as a reference for all dicing (subdivision) calculations during Adaptive Subdivision (in other words, the tessellation of the object will not vary as the main camera is moved). This can be useful to fix objectionable flickering introduced by Adaptive Tessellation with certain moves of the main camera. If you set a static dicing camera, you will still get the benefits from Adaptive Subdivision (higher polygon detail closer to the dicing camera) while getting a tessellation that does not change from frame to frame. By default, this is disabled, and should only be used when necessary, and with a carefully chosen position for the dicing camera.
Polygon plane rendered with dicing camera (Catclark subdivision with 10 iterations)
Subdivision patches outside the view or dicing camera frustum will not be subdivided. This is useful for any extended surface that is only partially visible, as only the directly visible part will be subdivided. Similarly, no subdivision work happens at all for meshes that are not directly visible. This can be turned on globally by setting options.subdiv_frustum_culling to true, and can be turned off for specific meshes with a per-polymesh override.
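A hedged sketch of how the global flag could appear in a .ass options block (illustrative .ass syntax; the per-mesh override is not shown here):

```
options
{
 subdiv_frustum_culling on
}
```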
Adds a world space padding to the frustum that can be increased as needed to minimize artifacts from out-of-view objects in cast shadows, reflections, etc. Note that motion blur is not yet taken into account and moving objects might require some additional padding.
Displacement Default Subdivision
This enum defines the subdivision rule that will be applied to the polymesh at render time. Possible values are none, catclark (Catmull-Clark), and linear.
- none: ignores any subdivision and renders the mesh as it is.
- linear: linear subdivision puts new vertices in the middle of each face, without any smoothing.
- catclark: the Catmull-Clark algorithm is used to create smooth surfaces by recursive subdivision surface modeling. The resulting surface will always consist of a mesh of quadrilateral faces.
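As a rough illustration of a single linear subdivision step on one quad (a toy sketch, not Arnold's internal tessellator; the function name and 2D coordinates are invented for the example):

```python
# Sketch of one linear subdivision step on a single quad
# (illustrative only; Arnold's implementation is internal).

def midpoint(a, b):
    return tuple((x + y) / 2 for x, y in zip(a, b))

def linear_subdivide_quad(quad):
    """Split one quad into four by inserting edge midpoints and a face point."""
    a, b, c, d = quad
    ab, bc, cd, da = midpoint(a, b), midpoint(b, c), midpoint(c, d), midpoint(d, a)
    center = midpoint(midpoint(a, c), midpoint(b, d))  # face point (average of corners)
    return [
        (a, ab, center, da),
        (ab, b, bc, center),
        (center, bc, c, cd),
        (da, center, cd, d),
    ]

faces = linear_subdivide_quad(((0, 0), (1, 0), (1, 1), (0, 1)))
print(len(faces))  # 4 -- each iteration quadruples the face count
```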
The maximum number of subdivision rounds applied to the mesh. When subdiv_pixel_error is 0, the number of rounds is exact instead of a maximum.
Bear in mind that each subdivision iteration quadruples the number of polygons. For example, if a 426,936-polygon object has 2 levels of subdivision iterations set on the object and 4 additional iterations set in Arnold, that is 6 subdiv_iterations in total, and therefore 426936 * 4^6 = 426936 * 4096 = ~1.7 billion polygons.
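The polygon-count arithmetic in the example above can be checked directly:

```python
# Verify the subdivision polygon-count example:
base_polygons = 426_936
object_iterations = 2
arnold_iterations = 4
total_iterations = object_iterations + arnold_iterations  # 6

# each iteration quadruples the face count
final = base_polygons * 4 ** total_iterations
print(final)  # 1748729856, i.e. ~1.7 billion polygons
```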
- a value of 1.0 with subdiv_adaptive_space set to "raster" and subdiv_adaptive_metric set to "edge_length" will subdivide until all edges are shorter than 1 pixel.
- a value of 1.0 with subdiv_adaptive_space set to "object" and subdiv_adaptive_metric set to "flatness" will subdivide until the tessellation is closer than 1 unit in object space to the limit surface.
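As a rough mental model of the edge_length criterion (an illustrative sketch, not Arnold's actual adaptive dicer): each subdivision iteration roughly halves edge length, so subdivision proceeds until projected edges fall below the error threshold or the iteration cap is reached.

```python
# Rough sketch of the "edge_length" stopping rule in raster space
# (illustrative; not Arnold's actual dicer).

def iterations_for_edge_length(edge_pixels: float, pixel_error: float, max_iterations: int) -> int:
    """Each iteration roughly halves edge length; subdivide until all
    edges project shorter than pixel_error pixels, up to the cap."""
    n = 0
    while edge_pixels > pixel_error and n < max_iterations:
        edge_pixels /= 2.0
        n += 1
    return n

# A 16-pixel edge with a 1-pixel error needs 4 halvings: 16 -> 8 -> 4 -> 2 -> 1
print(iterations_for_edge_length(16.0, 1.0, 10))  # 4
```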
Describes how the curve is formed from the control points. Choose between Catmull-Rom and Linear.
There are three algorithms for rendering curves in Arnold.
Ribbon mode is recommended for fine geometry such as realistic hair, fur, or fields of grass. These curves are rendered as camera-facing flat ribbons. For secondary and shadow rays, they face the incoming ray direction. This mode doesn't look as good for very wide hairs or dramatic zoom-ins because of the flat appearance. It works best with a proper hair shader (perhaps based on a Kajiya-Kay or Marschner specular model).
Thick mode resembles spaghetti. It has a circular cross-section and a normal vector that varies across the width of the hair. Thick hairs look great when zoomed in, and are especially useful for effects work, but their varying normals make them more difficult to antialias when they are small. You can use any shader with this rendering mode, including lambert, phong, etc.
Oriented mode allows the curves to face a given direction at each point. This is useful for modeling blades of grass, tape, and so on.
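The camera-facing behavior of ribbon mode can be sketched with basic vector math (illustrative geometry only, not Arnold internals): the ribbon's width direction is perpendicular to both the curve tangent and the incoming ray.

```python
import math

# Sketch of how a flat ribbon can be oriented to face the camera
# (illustrative geometry, not Arnold internals).

def normalize(v):
    length = math.sqrt(sum(x * x for x in v))
    return tuple(x / length for x in v)

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def ribbon_side(tangent, ray_dir):
    """Width direction of a camera-facing ribbon: perpendicular to both the
    curve tangent and the incoming ray direction."""
    return normalize(cross(tangent, ray_dir))

# A vertical strand seen along -Z gets a ribbon widened along the X axis:
print(ribbon_side((0, 1, 0), (0, 0, -1)))  # (-1.0, 0.0, 0.0)
```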
Min. Pixel Width
If this value is non-zero, curves with a small on-screen width will be automatically enlarged so that they are at least the specified size in pixels. The enlargement fraction is then used in the hair shader to adjust the opacity so that the visible thickness of the hair remains the same. For a given number of AA samples, this makes it a lot easier to antialias fine hair, at the expense of render time (because of the additional transparency/depth complexity). Good values are in the range of 0.2 to 0.7. Values closer to 0 are faster to render but need more AA samples. So if your scene already uses very high AA settings, you should use a low value like 0.1. For best results, you may need to increase the auto-transparency depth, and/or lower the auto-transparency threshold, but watch the effect on render times. Note that this parameter currently works with the ribbon mode only.
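The width/opacity trade described above can be sketched as follows (names are illustrative, not Arnold API): thin curves are widened up to the minimum, and their opacity is scaled down by the same factor so the apparent thickness stays the same.

```python
# Sketch of the min-pixel-width idea: widen thin curves, then scale opacity
# so the visible thickness stays constant (names are illustrative).

def apply_min_pixel_width(width_pixels: float, min_pixel_width: float, opacity: float = 1.0):
    """Return (new_width, new_opacity) after clamping to the minimum width."""
    if min_pixel_width > 0.0 and width_pixels < min_pixel_width:
        opacity *= width_pixels / min_pixel_width  # compensate for the enlargement
        width_pixels = min_pixel_width
    return width_pixels, opacity

print(apply_min_pixel_width(0.1, 0.4))  # (0.4, 0.25): 4x wider, 1/4 opacity
```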
Spline shapes are not currently supported.