RAYSHADE

Section: User Commands (1G)
Updated: September 11, 1989
 

NAME

rayshade - a ray tracing program  

SYNOPSIS

rayshade [ options ] [ filename ]  

DESCRIPTION

Rayshade reads a file describing a scene to be rendered and produces a Utah Raster RLE format file of the ray-traced image.
 

OPTIONS

-C red_contrast green_contrast blue_contrast
Set contrast factors used in controlling pixel subdivision.
-c
Trace shadow rays through transparent objects.
-E eye_separation
Set eye separation for stereo imaging.
-F report_freq
Set frequency, in lines, of status report (default 10).
-h
Print a short usage message.
-j
Perform jittered sampling.
-L start_line
Begin trace on line start_line, appending to image file (requires -O option).
-l
Render image for left eye (requires -E option).
-n
Do not trace shadow rays.
-O output_file
Override image file name in input file, if any.
-P pixel_subdivisions
Specifies the maximum number of times a pixel may be subdivided.
-p
Discard polygons with degenerate edges.
-q
Do not print warning messages.
-R xres yres
Set image resolution.
-r
Render image for right eye (requires -E option).
-S samples
Specifies number of jittered samples.
-s
Do not cache shadowing information.
-T thresh
Specifies adaptive ray-depth cutoff threshold.
-V filename
Write verbose output to filename.
-v
Write verbose output to standard output.
-W workers
Specify number of worker processes (Linda version only).
-w
Write verbose worker information to the standard error (Linda version only).
 

OVERVIEW

Rayshade is a ray tracing program capable of rendering images composed of a large number of primitive objects. Rayshade reads a series of lines supplied on the standard input or contained in the file named on the command line. After reading the input file, rayshade renders the image. As each scanline is rendered, pixels are written to a Utah Raster RLE format image file. By default, this image file is written to the standard output, and information messages and statistics are written to the standard error.
 

INPUT FILE FORMAT

The input file consists of commands (denoted by keywords) followed by numerical or character arguments. Spaces, tabs, or newlines may be used to separate items in the file. Coordinates and vectors are specified in arbitrary floating-point units, and may be written with or without a decimal point. Colors are specified as red-green-blue floating-point triplets which indicate intensity and range from 0 (zero intensity) to 1.0 (full intensity).

The following sections describe the keywords which may be included in the input file. Items in boldface type are literals, while square brackets surround optional items.

 

VIEWING PARAMETERS

eyep x y z
Specifies the eye's position in space. The default is (0, 20, 0).
lookp x y z
Specifies the point at which the eye is looking. The default is (0, 0, 0).
up x y z
Specifies the direction which should be considered "up" from the eye's position. Note that this vector need not be perpendicular to the vector between the look point and the eye's position. The default is (0, 0, 1.).
fov horizontal_field_of_view [vertical_field_of_view]
The horizontal field of view specifies the angle, in degrees, between the left-most and right-most columns of pixels. If present, the vertical field of view specifies the angle between the centers of the top-most and bottom-most rows of pixels. If not present, the vertical field of view is calculated using the screen resolution and the assumption that pixels are square. The default horizontal field-of-view is 45 degrees, while the default vertical field-of-view is calculated as described above.
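
For example, the following fragment (the values are purely illustrative) places the eye at (0, -30, 10), aims it at the origin, keeps the Z axis "up", and narrows the horizontal field of view to 30 degrees:

eyep 0. -30. 10.
lookp 0. 0. 0.
up 0. 0. 1.
fov 30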
 

IMAGE GENERATION

When specified in the input file, many of the image-generation commands may be overridden through command-line options. For example, the line "screen 1024 768" in an input file may be overridden by specifying "-R 128 128" on the command line.
screen x_resolution y_resolution
Specifies the horizontal and vertical resolution of the image to be rendered. This command may be overridden through use of the -R option. The default resolution is 512 by 512 pixels.
background red green blue
Specifies the color that should be assigned to rays which do not strike any object in the scene. The default is black (0, 0, 0).
outfile filename
Specifies the name of the file to which the resulting image should be written. By default, the image is written to the standard output. This command may be overridden through the use of the -O option.
aperture aperture_radius
The aperture_radius is the radius, in world units, of the aperture centered at the eye point. This controls, in conjunction with focaldist, the depth of field, and thus the amount of focus blur present in the final image. Rays are cast from various places on the aperture disk towards a point which is focal_distance units from the center of the aperture disk. This causes objects which are focal_distance units from the eye point to be in sharp focus. Note that an aperture_radius of zero causes a pinhole camera model to be used, and there will be no blurring (this is the default). Increasing the aperture radius leads to increased blurring. When using a non-zero aperture radius, it is best to use jittered sampling in order to reduce aliasing effects.
focaldist focal_distance
Specifies the distance, in world units, from the eye point to the focal plane. Points which lie in this plane will always be in sharp focus. By keeping aperture_radius constant and changing focal_distance, it is possible to create a sequence of frames which simulate pulling focus. By default, focal_distance is equal to the distance from the eye point to the look point.
maxdepth maximum_depth
Controls the maximum depth of the ray tree. The default is 3, with eye rays considered to be of depth zero.
cutoff cutoff_threshold
Specifies the adaptive ray-depth cutoff threshold. When any ray's maximum contribution to the final color of a pixel falls below this value, the ray and its children (specularly transmitted and reflected rays) are not spawned. This threshold may be overridden through the use of the -T option. The default value is 0.001.
jittered
Use "jittered" sampling. This command may be overridden through the use of the -P option. The default is to use adaptive supersampling.
adaptive max_divisions
Specifies that adaptive supersampling should be used, and that each pixel may be subdivided at most max_divisions times. This command may be overridden through the use of the -j or -P options. The default value is one.
samples num_samples
Specifies the number of jittered samples. See SAMPLING for details. When specified, this value may be overridden through the use of the -S option. The default value is 3.
contrast red green blue
Specifies the maximum contrast allowed between samples in a (sub)pixel before subdivision takes place. See SAMPLING for details. When specified in the input file, these values may be overridden through the use of the -C option. The defaults for the red, green and blue channels are 0.25, 0.2, and 0.4, respectively.
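
As a sketch of how these commands fit together (the values are illustrative, and every line may be omitted in favor of the defaults or the corresponding command-line options):

screen 768 512
background 0. 0. 0.1
maxdepth 4
cutoff 0.002
adaptive 2
contrast 0.25 0.2 0.4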
 

LIGHT SOURCES

Three types of light sources are supported: point, extended (area), and directional. Point sources are specified by a location in world space and produce shadows with sharp edges. Extended sources are specified by a location and a radius. They produce shadows with "fuzzy" edges (penumbrae), but increase ray tracing time considerably. Directional sources are specified by a direction. A maximum of 10 light sources may be defined.

In the definitions below, brightness specifies the intensity of the light source. If a single floating-point number is given, the light source emits a "white" light of the indicated normalized intensity. If three floating-point numbers are given, they are interpreted as the normalized red, green and blue components of the light source's color.

Lights are defined as follows:

light brightness point x y z
Creates a point source located at ( x, y, z ).
light brightness extended x y z radius
Creates an extended source centered at ( x, y, z ) with the indicated radius. The images produced using extended sources are usually superior to those produced using point sources, but ray-tracing time is increased substantially. Rather than tracing one shadow ray to a light source, multiple rays are traced to various points on the extended source. The extended source is approximated by sampling a square grid of light sources. See SAMPLING for more details on the sampling of extended light sources.
light brightness directional x y z
Creates a directional light source whose direction vector from each point in world space is defined as ( x, y, z ). This vector need not be normalized.
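
For example, the following (illustrative) lines define one light of each type; the extended source emits a slightly yellow light:

light 1.0 point 10. -10. 20.
light 0.9 0.9 0.7 extended 0. 15. 30. 2.
light 0.4 directional 1. 1. 1.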
 

SURFACES

Every primitive object has a surface associated with it. The surface specifies the color, reflectivity, and transparency of an object. A surface may be defined anywhere in the input file, provided it is defined before it is used. Surfaces are defined once, and may be associated with any number of primitive objects. A surface definition is given by:

surface surf_name ar ag ab dr dg db sr sg sb coef refl transp index [translu stcoef]

Surf_name is the name associated with the surface. This name must be unique for each surface.

Ar, ag and ab are used to specify the red, green, and blue components of the surface's ambient color. This color is always applied to a ray striking the surface.

Dr, dg and db specify the diffuse color of the surface. This color, the brightness component of each light source whose light strikes the surface, and the dot product of the incident ray and the surface normal at the point of intersection determine the color which is added to the color of the incident ray.

Sr, sg and sb are used to specify the specular color of the surface. The application of this color is controlled by the coef parameter, a floating-point number which indicates the power to which the dot product of the surface's normal vector at the point of intersection and the vector to each light source should be raised. The result is then used to scale the specular color of the surface, which is added to the color of the ray striking the surface. This model (Phong lighting) simulates specular reflections of light sources on the surface of the object. The larger coef is, the smaller the highlights will be.

Refl is a floating-point number between 0 and 1 which indicates the reflectivity of the object. If non-zero, a ray striking the surface will spawn a reflection ray. The color assigned to that ray will be scaled by refl and added to the color of the incident ray.

Transp is a floating-point number between 0 and 1 which indicates the transparency of the object. If non-zero, a ray striking the surface will spawn a ray which is transmitted through the object. The resulting color of this transmitted ray is scaled by transp and added to the color of the incident ray. The direction of the transmitted ray is controlled by the index parameter, which indicates the index of refraction of the surface.

The optional parameters translu and stcoef may be used to give a surface a translucent appearance. Translu is the translucency of the surface. If non-zero and a light source illuminates the side of the surface opposite that being rendered, diffuse lighting calculations are performed with respect to the side of the surface facing the light, and the result is scaled by translu and added to the color of the incident ray. Thus, translu accounts for diffuse transmission of light through the primitive. Stcoef is similar to coef, but it applies to the specular transmission of highlights. Note that in both cases the index of refraction of the surface is ignored. By default, surfaces have zero translucency.
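
As an illustration of the parameter ordering, the following (hypothetical) definitions create a dull matte surface and a reflective, slightly transparent one. The thirteen numbers are, in order, the ambient, diffuse and specular triples, followed by coef, refl, transp and index:

surface matte  0.1 0.1 0.1  0.7 0.7 0.7  0. 0. 0.  0. 0. 0. 1.
surface shiny  0.05 0.02 0.02  0.3 0.1 0.1  0.8 0.8 0.8  60. 0.7 0.2 1.2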
 

PRIMITIVES

The ray tracer is capable of rendering a number of primitive objects. Primitives may be specified inside of an object-definition block, in which case they are added to the list of primitives belonging to that object. In addition, primitives may be defined outside of object-definition blocks. Primitives such as these are added to the list of primitives belonging to the World object. See below for more details.

Rayshade usually ensures that a primitive's normal is pointing towards the origin of the incident ray when performing shading calculations. Exceptions to this rule are transparent primitives, for which rayshade uses the dot product of the normal and the incident ray to determine if the ray is entering or exiting the surface, and superquadrics, whose normals are never modified due to the nature of the ray/superquadric intersection code. Thus, all non-transparent primitives except superquadrics will in effect be double-sided.

Primitives are specified by lines of the form:

primitive_type surface <primitive definition>
[transformations] [texture mapping information]
Surface is the name of the surface to be associated with the primitive. Texture mapping and transformations are discussed below. A list of available primitives follows.
sphere surface radius xcenter ycenter zcenter
Creates a sphere with the indicated radius centered at ( xcenter, ycenter, zcenter ).
triangle surface x1 y1 z1 x2 y2 z2 x3 y3 z3
Creates a triangle with vertices ( x1, y1, z1 ), ( x2, y2, z2 ) and ( x3, y3, z3 ). Vertices should be given in a counter-clockwise order as one is looking at the 'top' face of the triangle.
triangle surface p1x p1y p1z n1x n1y n1z p2x p2y p2z n2x n2y n2z p3x p3y p3z n3x n3y n3z
Defines a Phong-shaded triangle. Here, the first three floating-point numbers specify the first vertex, the second three specify the normal at that vertex, and so on. Again, vertices should be specified in counter-clockwise order. Vertex normals need not be normalized.
poly surface x1 y1 z1 x2 y2 z2 x3 y3 z3 [x4 y4 z4 ...]
Creates a polygon with the specified vertices. The vertices should be given in a counter-clockwise order as one faces the "top" of the polygon. The polygon may be non-convex, but non-planar polygons will not be rendered correctly. The number of vertices defining a polygon is limited only by available memory.
plane surface xnormal ynormal znormal x y z
Creates a plane which passes through the point ( x, y, z ) and has normal ( xnormal, ynormal, znormal ).
cylinder surface xbase ybase zbase xtop ytop ztop radius
Creates a cylinder which extends from (xbase, ybase, zbase) to (xtop, ytop, ztop) and has the indicated radius.
cone surface xbase ybase zbase xtop ytop ztop base_radius top_radius
Creates a (truncated) cone which extends from (xbase, ybase, zbase) to (xtop, ytop, ztop). The bottom of the cone will have radius base_radius, while the top will have radius top_radius.
heightfield surface filename
Reads height field data from filename and creates a square height field of unit size centered at (0.5, 0.5). The height field is rendered as a surface tessellated by right isosceles triangles. The binary data in the heightfield file is stored as an initial 32-bit integer giving the square root of the number of data points in the file, termed the size of the height field. The size is followed by altitude (Z) values stored as 32-bit floating point values. The 0th value in the file specifies the Z coordinate of the lower-left corner of the height field (0, 0). The next specifies the Z coordinate for (1/(size-1), 0). The last specifies the coordinate for (1., 1.). In short, value number i in the heightfield file specifies the Z coordinate for the point ( (i % size) / (size -1), (i / size) / (size -1) ). Non-square height fields may be rendered by specifying altitude values less than or equal to -1000. Triangles which have any vertex less than or equal in altitude to this value are not rendered. Be warned that the heightfield file is machine-dependent, as it is stored in binary format.
box surface xcenter ycenter zcenter xsize ysize zsize
Creates a box centered at ( xcenter, ycenter, zcenter ) of size (2 * xsize, 2 * ysize, 2 * zsize ). Although boxes must initially be aligned with the world axes, they may be transformed at will.
superq surface xcenter ycenter zcenter xsize ysize zsize power
Creates a superquadric with center ( xcenter, ycenter, zcenter ) and total size (2 * xsize, 2 * ysize, 2 * zsize ). Power defines how closely the superquadric resembles the corresponding box. The larger the value of power, the closer it will resemble the box (with rounded corners). A value greater than or equal to 1 is required for reasonable images. In addition, neither transparent superquadrics nor superquadrics viewed from the interior will be rendered correctly.
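
For instance, assuming a surface named red has already been defined, the following (illustrative) lines create one of each of several primitives; terrain.hf is a hypothetical height field file:

sphere red 2. 0. 0. 2.
box red 0. 0. 6. 1. 1. 1.
cylinder red 0. 0. 0. 0. 0. 4. 0.5
plane red 0. 0. 1. 0. 0. 0.
heightfield red terrain.hf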
 

OBJECTS

One key feature of rayshade is its ability to treat groups of primitives as objects which may be transformed and instantiated at will. Objects are composed of groups of primitives and/or other objects and are specified in the input file as:

        define object_name
                [grid xvoxels yvoxels zvoxels]
                [list]
                [primitives]
                [instances]
        defend [texturing information]

The ordering of the various elements inside the object-definition block is inconsequential. Here, [instances] are any number of declarations of the form:
object object_name [transformations] [texturing information]
This causes a copy of the named object to be made, transformed and textured as requested, and added to the object being defined. An object must be defined before it may be instantiated, which ensures that no cycles appear in the object-definition graph.

A special object named World is maintained internally by rayshade. Primitive definitions and object instantiations which do not appear inside an object-definition block are added to this object. When performing ray tracing, rays are intersected with the objects that make up the World object.

Internally, objects are stored by one of two means. By default, groups of primitives which make up an object are stored in a list. The constituents of such an object are stored in a simple linked-list. When a ray is intersected with such an object, the ray is tested for intersection with each object in the list. While the list is the default method of object storage, one may emphasize this fact in the input file by including the list keyword somewhere within the object-definition block.

The second form of internal object storage is the three-dimensional grid. The grid's total size is calculated by rayshade and is equal to the bounding box of the object that is engridded. A grid subdivides the space in which an object lies into an array of uniform box-shaped voxels. Each voxel contains a linked-list of objects and primitives which lie within that voxel. When intersecting a ray with an object which is stored in a grid, the ray is traced incrementally from voxel to voxel, and the ray is tested for intersection against each object in the linked list of each voxel that is visited. In this way the intersection of a ray with a collection of objects is generally made faster at the expense of increased storage requirements.

This form of object representation is enabled by including the grid keyword somewhere within the object-definition block:

grid xvoxels yvoxels zvoxels
Stores the object being defined as a grid consisting of a total of (xvoxels * yvoxels * zvoxels) voxels, with xvoxels along the x-axis of the grid, yvoxels along the y-axis, and zvoxels along the z-axis. For reasonably complex objects, a value of 20 for each parameter usually works well.
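
For example, a compound object might be defined, engridded, and instantiated as follows (the object and surface names are illustrative, and the surfaces are assumed to have been defined earlier; only a few primitives are shown for brevity, though a grid generally pays off only for objects containing many primitives, as noted under RENDERING HINTS):

        define cluster
                grid 20 20 20
                sphere red 1. 0. 0. 1.
                sphere green 1. 2. 0. 1.
                cylinder red 0. 0. 0. 2. 0. 0. 0.3
        defend

        object cluster translate 5. 5. 0.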

For convenience, one may also define surfaces inside of an object-definition block. Surfaces defined in this manner are nevertheless globally available.

In addition, object definitions may be nested. This facilitates the definition of objects through the use of recursive programs.
 

TRANSFORMATIONS

Rayshade allows for the application of arbitrary linear transformations to primitives and compound objects. The specification of transformations occurs immediately following the specification of a primitive or instantiation of an object. Any number of transformations may be composed; the resulting total transformation is applied to the entity being transformed. Transformations are specified by:
translate x y z
Translate the object by ( x, y, z).
rotate x y z theta
Rotate the object counter-clockwise about the vector ( x, y, z ) by theta degrees.
scale x y z
Scale the object by ( x, y, z ).
transform x1 y1 z1 x2 y2 z2 x3 y3 z3
Transform the object by the column-major matrix specified by the nine floating-point numbers. Thus, a point (x, y, z) on the surface of the object is mapped to (x*x1 + y*y1 + z*z1, x*x2 + y*y2 + z*z2, x*x3 + y*y3 + z*z3).
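
For example, assuming a surface named red has been defined, the following (illustrative) line creates a unit sphere, stretches it into an ellipsoid along the Z axis, tilts it 30 degrees about the X axis, and then moves it 5 units up the Z axis:

sphere red 1. 0. 0. 0. scale 1. 1. 2. rotate 1. 0. 0. 30. translate 0. 0. 5.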
 

TEXTURE MAPPING

Rayshade provides a means of applying solid procedural textures to surfaces of primitives. This is accomplished by supplying texture mapping information immediately following the definition of a primitive, object, or instance of an object. This allows one to texture individual primitives, objects, and individual instances of objects at will. Texturing information is supplied via a number of lines of the following form:

texture texture_type [arguments] [transformations]

Texture_type is the name of the texture to apply. Arguments are any arguments that the specific texture type requires. If supplied, the indicated transformations will be applied to the texture. (More accurately, the inverse of the supplied transformation is applied to the point of intersection before it is passed to the texturing routines.)

Versions of Perlin's Noise() and DNoise() functions are used to generate values for most of the interesting textures. There are eight available textures:

bump scale
Applies a random bump map to the surface being textured. The point of intersection is passed to DNoise(). The returned normalized vector is weighted by scale and added to the normal vector at the point of intersection.
checker surface
Applies a (3D) checkerboard texture to the object being textured. Every point that falls within an "even" cube will be shaded using the characteristics of the named surface. Every point that falls within an "odd" cube will retain its usual surface characteristics. Be warned that strange effects due to roundoff error are possible when the planar surface of an object lies in a plane of constant integral value in texture space.
blotch blend_factor surface
This texture produces a mildly interesting blotchy-looking surface. Blend_factor is used to control the interpolation between a point's default surface characteristics and the characteristics of the named surface. A value of 0 results in a roughly 50-50 mix of the two surfaces. Higher values result in a greater proportion of the default surface characteristics.
fbm offset scale H lambda octaves thresh [colormap]
This texture generates a sample of discretized fractional Brownian motion (fBm) and uses it to modify the diffuse and ambient components of an object's color. If no colormap is named, the sample is used to scale the object's diffuse color. If a colormap name is given, a 256-entry colormap is read from the named file, and the object is colored using the values in this colormap (see below). Scale is used to scale the output of the fractional Brownian motion function. Offset allows one to control the minimum value of the fBm function. H is related to the Holder constant used in the fBm (a value of 0.5 works well). Lambda is used to control the lacunarity, or spacing between successive frequencies, in the fBm (a value of 2.0 will suffice). Octaves specifies the number of octaves of Noise() to use in simulating the fBm (5 to 7 works well), and thresh is used to specify a lower bound on the output of the fBm function. Any value lower than thresh is set to zero.
fbmbump offset scale H lambda octaves
This texture is similar to the fbm texture. Rather than modifying the color of a surface, fbmbump acts as a bump map.
gloss glossiness
This texture gives reflective surfaces a glossy appearance. A glossy object's surface normal is perturbed such that it 'samples' a cone of unit height with radius (1 - glossiness). Thus, a value of 1 results in perfect mirror-like reflections, while a value of 0 results in extremely fuzzy reflections. For best results, jittered sampling should be used when rendering scenes containing glossy objects.
marble [colormap]
This texture gives a surface a marble-like appearance. The texture is implemented as roughly parallel alternating veins of marble, each of which is separated by 1/7 of a unit and runs perpendicular to the Z axis. If the name of a colormap file is given, the marble will be colored using the RGB values in the colormap. If no colormap name is given, the diffuse and ambient components of the object's surface are simply scaled. One may transform the texture to control the density of the marble veins.
wood
This texture gives a wood-like appearance to a surface. The feature size of the texture is approximately 1/100th of a unit, making it often necessary to scale the texture in order to achieve the desired appearance.

A colormap is an ASCII file 256 lines in length, each line containing three space-separated integers ranging from 0 to 255. The first number on the nth line specifies the red component of the nth entry in the colormap, the second number the green component, and the third the blue. The values in the colormap are normalized before being used in texturing functions. Textures which make use of colormaps generally compute an index into the colormap and use the corresponding entry to scale the ambient and diffuse components of a surface's color.
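
The first few lines of a simple gray-scale colormap file might therefore read as follows, continuing in the same fashion through 255 255 255 on the final line:

0 0 0
1 1 1
2 2 2
3 3 3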

It is important to note that more than one texture may be applied to an object at any time. In addition to being able to apply more than one texture directly (by supplying multiple "texturing information" lines for a single object), one may instantiate textured objects which, in turn, may be textured or contain instances of objects which are textured, and so on.
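
For example, assuming a surface named white has been defined, the following applies both a scaled marble texture and a mild bump map to a single sphere (the scale factors and bump weight are illustrative):

sphere white 1. 0. 0. 0.
        texture marble scale 0.3 0.3 0.3
        texture bump 0.1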
 

ATMOSPHERIC EFFECTS

Rayshade can include several kinds of atmospheric effects when rendering an image. Currently, two such effects are available:

fog thinness red green blue
Add global exponential fog with the specified thinness and color. Fog is simulated by blending the color of the fog with the color of each ray. The amount of fog color blended into a ray color is an exponential function of the distance from the ray origin to the point of intersection divided by thinness. If the distance divided by thinness is equal to 1, a ray's new color will be half of the fog color plus half its original color.
mist red green blue trans.red trans.green trans.blue zero scale
Add global low-altitude mist of the specified color. The color of a ray is modulated by a fog with density which varies linearly with the difference in altitude (Z coordinate) between the ray origin and the point of intersection. The three trans values specify the transmissivity (thinness) of the mist for each of the red, green and blue channels. The base altitude of the mist is given by zero, and the apparent height of the mist can be controlled by scale, which is used to scale the difference in altitude.
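
For example, either of the following (illustrative) lines could be added to an input file. The first adds a gray exponential fog which reaches half strength 100 units from the ray origin; the second adds a whitish mist based at Z = 0:

fog 100. 0.5 0.55 0.6
mist 0.7 0.75 0.8  20. 20. 25.  0. 1.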
 

SAMPLING

This section describes how antialiasing and the sampling of extended light sources are accomplished. Two types of antialiasing are supported: adaptive subdivision and so-called "jittered sampling".

Adaptive subdivision works by sampling each pixel at its corners. The contrast between these four samples is computed, and if too large, the pixel is subdivided into four equivalent sub-pixels and the process is repeated. The threshold contrast may be controlled via the -C option or the contrast command. There are separate thresholds for the red, green, and blue channels. If the contrast in any of the three is greater than the appropriate threshold value, the pixel is subdivided. The pixel-subdivision process is repeated until either the samples' contrast is less than the threshold or the maximum pixel subdivision level, specified via the -P option or the adaptive command, is reached. When the subdivision process is complete, a weighted average of the samples is taken as the color of the pixel.

Jittered sampling works by dividing each pixel into a number of square regions and tracing a ray through some point in each region. The exact location in each region is chosen randomly. The number of regions into which a pixel is subdivided is specified through the use of the -S option. The integer following this option specifies the square root of the number of regions.
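
The equivalent of the -j and -S command-line options in the input file is shown below; here samples gives the square root of the number of regions, so this (illustrative) fragment traces 16 jittered rays per pixel:

jittered
samples 4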

Each extended light source is, in effect, approximated by a square grid of light sources. The length of each side of the square is equal to the diameter of the extended source. Each array element, which is square in shape, is in turn sampled by randomly choosing a point within that element to which a ray is traced from the point of intersection. If the ray does not intersect any primitive object before it strikes a light source element, there is said to be no shadow cast by that portion of the light source. The fraction of the light emitted by an extended light source which reaches the point of intersection is the number of elements which are not blocked by intervening objects divided by the total number of elements. The fraction is used to scale the intensity (color) of the light source, and this scaled intensity is then used in the various lighting calculations.

When jittered sampling is used, one shadow ray is traced to each extended source per shading calculation. The element to be sampled is determined by the region of the pixel through which the eye ray at the top of the ray tree passed.

When adaptive supersampling is used, the -S option or the samples command controls how many shadow rays are traced to each extended light source per shading calculation. Specifically, each extended source is approximated by a square array consisting of samples * samples elements. However, the corners of the array are skipped to save rendering time and to more closely approximate the circular projection of an extended light source. Because the corners are skipped, samples must be at least 3 if adaptive supersampling is being used.

Note that the meaning of the -S option (and the samples command) is different depending upon whether or not jittered sampling is being used.

While jittered sampling is generally slower than adaptive subdivision, it can be beneficial if the penumbrae cast by extended light sources take up a relatively large percentage of the entire image or if the image is especially prone to aliasing.  

EXAMPLES

A very simple rayshade input file might be:
light 1.0 directional 1. 1. 1.

surface red  .2 0 0  .8 0 0  .5 .5 .5  32. 0.8 0. 1.
surface green  0 .2 0  0 .8 0  0 0 0  0. 0. 0. 1.

sphere red 8.  0. 0. -2.
plane green 0. 0. 1.  0. 0. -10.

Passing this input to rayshade will result in an image of a red reflective sphere sitting on a green ground plane being written to the standard output. Note that in this case, default values for eyep, lookp, up, screen, fov, and background are assumed.

A more interesting example uses instantiation to place multiple copies of an object at various locations in world space:

eyep 10. 10. 10.
fov 20
light 1.0 directional 0. 1. 1.
surface red  .2 0 0  .8 0 0  .5 .5 .5  32. 0.8 0. 1.
surface green  0 .2 0  0 .8 0  0 0 0  0. 0. 0. 1.
surface white 0.1 0.1 0.1 0.8 0.8 0.8 0.6 0.6 0.6 30 0 0 0

define blob
        sphere red 0.5   .5 .5 0.
        sphere white 0.5 .5 -.5 0. texture marble scale 0.5 0.5 0.5
        sphere red 0.5  -.5 -.5 0.
        sphere green 0.5 -.5 .5 0.
defend

object blob translate 1. 1. 0.
object blob translate 1. -1. 0.
object blob translate -1. -1. 0.
object blob translate -1. 1. 0.
grid 20 20 20

Here, an object named blob is defined to consist of four spheres, two of which are red and reflective. The object is stored as a simple list of the four spheres. The World object consists of four instances of this object, translated to place them in a regular pattern about the origin. Note that since the marbled sphere was textured in "sphere space" each instance of that particular sphere has exactly the same marble texture applied to it.

Of course, just as the object blob was instantiated as part of the World object, one may instantiate objects as part of any other object. For example, a series of objects such as:

        define wheel
                sphere tire_color 1.  0 0 0  scale 1. 0.2 1.
                sphere hub_color 0.2 0 0. 0
        defend

        define axle
                object wheel translate 0. 2. 0.
                object wheel translate 0. -2. 0.
                cylinder axle_color 0. -2. 0. 0. 2. 0. 0.1
        defend

        define truck
                box truck_color 0. 0. 0. 5. 2. 2.       /* Trailer */
                box truck_color 6. 0 -1 2 2 1           /* Cab */
                object axle translate -4 0 -2
                object axle translate 4. 0. -2.
        defend

could be used to define a very primitive truck-like object.
 

RENDERING HINTS

Ray tracing is a computationally intensive process, and rendering complex scenes can take hours of CPU time, even on relatively powerful machines. There are, however, a number of ways of attempting to reduce the running time of the program.

The first and most obvious way is to reduce the number of rays which are traced. This is most simply accomplished by reducing the resolution of the image to be rendered. The -P option may be used to reduce the maximum pixel subdivision level. A maximum level of 0 will speed ray tracing considerably, but will result in obvious aliasing in the image. By default, a pixel will be subdivided a maximum of one time, giving a maximum of nine rays per pixel total.

Alternatively, the -C option or the contrast command may be used to decrease the number of instances in which pixels are subdivided. Using these options, one may indicate the maximum normalized contrast which is allowed before supersampling will occur. If the red, green or blue contrast between neighboring samples (taken at pixel corners) is greater than the maximum allowed, the pixel will be subdivided into four sub-pixels and the sampling process will recurse until the sub-pixel contrast is acceptable or the maximum subdivision level is reached.

The number of rays traced can also be lowered by making all surfaces non-reflecting and non-refracting or by setting maxdepth to a small number. If set to 0, no reflection or refraction rays will be traced. Lastly, using the -n option will cause no shadow rays to be traced.

In addition, judicious use of the grid command can reduce rendering times substantially. However, if an object consists of a relatively small number of simple objects, it will likely take less time to simply check for intersection with each element of the object than to trace a ray through a grid.

The C pre-processor can be used to make the creation and managing of input files much easier. For example, one can create "libraries" of useful colors, objects, and viewing parameters by using #define and #include. To use such input files, run the C pre-processor on the file, and pipe the resulting text to rayshade.
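
For example, an input file intended to be run through the pre-processor might begin as follows; the included file name and the RAD macro are, of course, hypothetical, and the surface red is assumed to be defined in the included file:

#include "surfaces.def"
#define RAD 2.5

sphere red RAD -3. 0. 0.
sphere red RAD  3. 0. 0.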
 

FILES

Examples/*                              example input files

 

AUTHORS

Rayshade had its beginnings as an "introductory" public-domain ray tracer written by Roman Kuchkuda. Vestiges of his code may be found in rayshade, particularly in the names of variables and the superquadric code. The first version of rayshade was written at Princeton University during 1987-88 by Craig Kolb, David C. Hoffman, and David P. Dobkin. The current manifestation of rayshade was written during the fall of 1988 by Craig Kolb. The Noise() and DNoise() routines which form the basis of many of the texturing functions were written by Robert Skinner and Ken Musgrave. The depth of field code appears courtesy of Rodney G. Bogart.
 

CAVEATS

Rayshade performs no automatic hierarchy construction. The intelligent placement of objects in grids and/or lists is entirely the job of the modeler.

While transparent objects may be wholly contained in other transparent objects, rendering partially intersecting transparent objects with different indices of refraction is, for the most part, nonsensical.

Rayshade is capable of using large amounts of memory. In the environment in which it was developed (machines with at least 8 Megabytes of physical memory plus virtual memory), this has not been a problem, and scenes containing several billion primitives have been rendered. On smaller machines, however, memory size can be a limiting factor.

The "Total memory allocated" statistic is the total space allocated by calls to malloc. It is not the memory high-water mark. After the input file is processed, memory is only allocated when refraction occurs (to push media onto a stack) and when ray tracing height fields (to dynamically allocate triangles).

The image produced will always be 24 bits deep.

Explicit or implicit specification of vectors of length less than epsilon (1.E-6) results in undefined behavior.
 

SEE ALSO

rle(5)


 
