Polyray v1.5
Copyright (c) 1991, 1992 Alexander R. Enzmann
8 November 1992
All Rights Reserved
Table of contents
1 Introduction & Shareware Information
1.0 Installation
1.1 Origin and Credits
1.2 Useful Tools
1.3 Contents of This Document
1.4 Quick demo (You really should read this section)
1.5 Command Line Options
1.6 Initialization File
1.7 Rendering Options
1.7.1 Raytracing
1.7.1.1 Antialiasing
1.7.1.2 Dithering
1.7.1.3 Bounding Slabs
1.7.1.4 Shading Quality Flags
1.7.2 Scan Conversion
1.7.3 Wireframe
2 Detailed Description of the Polyray Input Format
2.1 Expressions
2.1.1 Numeric expressions
2.1.2 Vector Expressions
2.1.3 Array Expressions
2.1.4 Conditional Expressions
2.1.5 Run-Time expressions
2.1.6 Named Expressions
2.2 Definition of the viewpoint
2.3 Objects/Surfaces
2.3.1 Object Modifiers
2.3.1.1 Position and Orientation Modifiers
2.3.1.1.1 Translation
2.3.1.1.2 Rotation
2.3.1.1.3 Scaling
2.3.1.1.4 Shear
2.3.1.2 Bounding Boxes
2.3.1.3 Subdivision of Primitives
2.3.1.4 Shading Flags
2.3.2 Primitives
2.3.2.1 Bezier patch
2.3.2.2 Blob
2.3.2.3 Box
2.3.2.4 Cone
2.3.2.5 Cylinder
2.3.2.6 Disc
2.3.2.7 Implicit function
2.3.2.8 Height Field
2.3.2.8.1 File Based Height Fields
2.3.2.8.1.1 8 Bit Format
2.3.2.8.1.2 16 Bit Format
2.3.2.8.1.3 24 Bit Format
2.3.2.8.2 Implicit Height Fields
2.3.2.9 Lathe Surface
2.3.2.10 Parabola
2.3.2.11 Polygon
2.3.2.12 Polynomial Function
2.3.2.13 Sphere
2.3.2.14 Sweep Surface
2.3.2.15 Torus
2.3.2.16 Triangular patch
2.3.3 Constructive Solid Geometry (CSG)
2.3.4 Gridded Objects
2.3.5 Bounding Slabs
2.4 Color and Lighting
2.4.1 Light Sources
2.4.1.1 Positional Lights
2.4.1.2 Spot Lights
2.4.1.3 Textured Lights
2.4.2 Background Color
2.4.3 Textures
2.4.3.1 Procedural Textures
2.4.3.1.1 Standard Shading Model
2.4.3.1.1.1 Ambient Light
2.4.3.1.1.2 Diffuse Light
2.4.3.1.1.3 Specular Highlights
2.4.3.1.1.4 Reflected Light
2.4.3.1.1.5 Transmitted Light
2.4.3.1.1.6 Microfacet Distribution
2.4.3.1.2 Checker Texture
2.4.3.1.3 Hexagon Texture
2.4.3.1.4 Noise Surface
2.4.3.1.5 Layered Textures
2.4.3.2 Functional Textures
2.4.3.2.1 Image maps
2.5 Comments
2.6 Animation support
2.7 Conditional processing
2.8 Include files
2.9 File flush
3 File Formats
3.1 Output files
4 Algorithms
4.1 Processing polynomial expressions
4.1.1 Example equation representation
4.1.2 Allowed Formula Syntax
4.1.3 Rules for processing formulas
4.2 Processing of arbitrary functional surfaces
4.2.1 Spherical coordinates
4.3 Three dimensional noise generation
4.4 Marching Cubes
5 Outstanding issues
5.1 To do list
5.2 Known Bugs
6 Revision History
7 Bibliography
8 Sample files
9 Polyray grammar
1 Introduction & Shareware Information
The program "Polyray" is a raytracer able to render a wide
variety of shapes and surfaces. The means of description range
from standard primitives such as boxes and spheres, through
three-variable polynomial expressions, to (slowest of all)
surfaces containing transcendental functions such as sin, cos,
and log. The files associated with Polyray are distributed in
three pieces: the executable, a collection of document files,
and a collection of data files.
As of version 1.5, Polyray is a Shareware program, rather than
Freeware. If you enjoy this program, use it frequently, and can
afford to pay a registration fee, then send $35 to:
Alexander Enzmann
20 Clinton St.
Woburn, Ma 01801
USA
Please include your name and mailing address.
If you formally register this program, you will receive the
next release of Polyray free of charge, when it occurs. In
addition, you will be contributing to my ability to purchase
software tools to make Polyray a better program. If you don't
register this program, don't feel bad - I'm poor too - but you
also shouldn't expect as prompt a response to questions or
bugs. Note that the Polyray executables and the Polyray
documents (including this document) are copyrighted. The data
files are public domain; you may use them in any way you please.
If you want a good reference book about raytracing, buy the book:
"Introduction to Ray Tracing", edited by Andrew Glassner,
Academic Press, 1989. That book is excellent and will provide
many of the background details that are not contained here. For
an abbreviated list of Polyray's syntax, see the file
"quickref.txt".
This document contains a step by step description of a simple
scene file to get you familiar with the structure of Polyray
data. By working through the example and by reviewing the
examples in the data archive, you will be able to see how the
various features are used.
The data files are ASCII text; the image file format is Targa
(see section 3.1 for the supported input and output formats).
It is unlikely that other image formats will be supported. The
Targa format is supported by many image processing programs, so
if you need to translate between Targa and something else it is
quite simple. (Using the Targa formats also means that I didn't
have to define my own format and then build conversion programs
or try to convince others to build conversion programs.)
The only display mode is the standard 320x200 VGA color mode. I
tried to get good SVGA display code working, but with all the
alternatives it was just a bit too much. For this reason, the
display for Polyray should be considered an aid during debugging
of images, and as a way to monitor the status of the image you
are rendering. For viewing the final image, there are many image
viewers and many image format conversion programs that will
convert TGA to whatever format you prefer (PICLAB being one of
the best for all purpose work).
The standard executable requires an IBM PC compatible with at
least a 386 and 387, VGA graphics, and a minimum of 2 Mbytes of
RAM to run. Other memory models will be made available if
enough requests are made. The distributed executable uses a DOS
extender in order to grab as much memory as is available. It
has been successfully run with both HIMEM.SYS and 386MAX.SYS.
Under Windows it will run in a DOS window; however, if you use
the graphics display it will need to be run full screen.
I'm interested in any comments/bug reports. I can be reached via
email by:
Compuserve as: Alexander Enzmann 70323,2461
Internet as: xander@mitre.org
or via the postal service at:
Alexander Enzmann
20 Clinton St.
Woburn, Ma 01801
USA
1.1 Origin and Credits
The original code for this program was based on the "mtv" ray-
tracer, written (and placed in the public domain) by Mark
VandeWettering.
Many thanks go to David Mason for his numerous comments and
suggestions (and for adding a new feature to DTA every time I
needed to test something). Thanks also to the Cafe Algiers in
Cambridge, Ma. for providing a place to get heavily caffeinated
and rap about ray-tracing and image processing for hours at a
time.
1.2 Useful tools
There are several types of tools that you will want to have to
work with this program (in relative order of importance):
o An ASCII text editor for preparing and modifying input
files.
o A picture viewer for viewing images.
o An image processing program for translation between Targa
and some other format.
o An animation generator that will take a series of Targa
images and produce an animation file.
o An animation viewer for playing an animation on the
screen.
For the IBM-PC/Clone world (which is what I use at home), there
are several popular and easily available programs: "PICLAB",
which will read, process, and write several popular graphics
file formats, "VPIC" (Shareware) which displays images quickly
and
supports many different video boards, "DTA" which will take a
bunch of Targa files and produce a "FLI" format animation, and
"aaplay" or "play80" which will show a FLI on a VGA screen.
1.3 Contents of this document
This document describes the input format and capabilities of the
"Polyray" ray-tracing program. The following features are
supported:
o Viewpoint (camera) characteristics,
o Positional (point), directional (spotlight) light sources,
and functional (textured) lights.
o Background color,
o Surface characteristics of objects,
o Shape primitives:
Bezier patch, blob, box, cone, cylinder, disc, implicit
function, height field, lathe surface, parabola,
polygon, polynomial function, sphere, sweep surface,
torus, triangular patches
o Animation support,
o Conditional processing,
o Include files,
o Named values and objects.
o Constructive Solid Geometry (CSG)
o Grids of objects
o User definable (functional) textures
o Initialization file for default values
1.4 Quick demo
This section describes one of the simplest possible data files:
a single sphere and a single light source. In what follows,
anything following a double slash ("//") is a comment and is
ignored by Polyray. The data file consists of the following
indented lines. You can either use the file "sphere.pi" in the
data archive, or copy these declarations into a new file.
// We start by setting up the camera. This is where the eye
// is located, and describes how wide a field of view will
// be used, and the resolution (# of pixels wide and high)
// of the resulting image.
viewpoint {
from <0,0,-8> // The location of the eye
at <0,0,0> // The point that we are looking at
up <0,1,0> // The direction that will be up
angle 45 // The vertical field of view
resolution 160, 160 // The image will be 160x160 pixels
}
// Next define the color of the background. This is the
// color that is used for any pixel that isn't part of an
// object. In this file it will color the image around the
// sphere.
background skyblue
// Next is a light source. This is a point light source a
// little above, to the left, and behind the eye.
light <-10,3, -20>
// The following declaration is a "texture" that will be
// applied to the sphere. It will be red at every point but
// where the light is reflecting directly towards the eye -
// at that point there will be a bright white spot.
define shiny_red
texture {
surface {
ambient red, 0.2 // Always a little red
diffuse red, 0.4 // Where the light is hitting
// the sphere, the color will be
// a little brighter.
specular white, 0.8 // A white highlight.
microfacet Reitz 10 // The highlight drops to half
// its intensity at an angle of
// 10 degrees.
}
}
// Finally we declare the sphere. It sits right at the
// origin, and has a radius of two. Following the
// declaration of the sphere, we associate the texture
// declared above.
object {
sphere <0, 0, 0>, 2
shiny_red
}
Now that we have a data file, let's render it and show it on
the screen. First of all, we can look at a wireframe
representation of the file. Enter the following commands (DOS
prompts are in capitals).
C> polyray sphere.pi -r 2 -V 1 -W
An outline of the sphere will be drawn on the screen; press
any key to get back to DOS.
Next, let's do a raytrace. Enter the following:
C> polyray sphere.pi -V 1
The sphere will be drawn a line at a time; when it is done
you will be returned to DOS.
The output image will be written to the default output file,
"out.tga". You
can view it directly with VPIC or TGVIEW (although VPIC will not
have the full range of colors that were generated by the
raytrace). If you have PICLAB, then the following commands will
load the image, map its colors into a spectrum that matches the
colors in the image, then will display it.
C> piclab
> tload out
> makepal
> map
> show
hit any key to get back to PICLAB's command line
> quit
C>
You should see greatly improved image quality compared to the
display that is shown during tracing.
Now that you have seen a simple image, you have two options: go
render some or all of the files in the data archive, or continue
reading the documents. For those of you who prefer to get
started immediately, there is a series of DOS batch files that
pretty well automates the rendering of the sample scenes. (When
you unzip the data files, remember to use the /d switch so that
all the subdirectories will be created.)
1.5 Command Line Options
A number of options can be controlled through values in an
initialization file, within the data file, or from the command
line (processed in that order, with the last value read taking
precedence). The command line values will be displayed if no
arguments are given to Polyray.
The values that can be specified at the command line, with a
brief description of their meaning are:
-a Perform simple antialiasing (neighbor
averaging)
-A Perform adaptive antialiasing (based on
threshold)
-b pixels Set the maximum number of pixels that will be
calculated between file flushes
-B Flush the output file every scan line
-d probability Dither objects using the given probability
-D scale Dither all rays using the given probability
-J Perform jittered antialiasing (fixed # of
samples/pixel)
-o filename The output file name, the default output
file name if not specified is "out.tga"
-p bits/pixel Set the number of bits per pixel in the
output file (must be one of 8, 16, 24, 32)
-P palette Which palette option to use [0=grey, 1=666,
2=884]
-q flags Turn on/off various global shading options
-Q Abort if any key is hit during trace
-r renderer Which rendering method: [0=raytrace, 1=scan
convert, 2=wireframe]
-R Resume an interrupted trace.
-s samples # of samples per pixel when performing
antialiasing
-t status_vals Status display type
[0=none,1=totals,2=line,3=pixel].
-T threshold Threshold to start oversampling
-u Write the output file in uncompressed form
-v Trace from bottom to top
-V mode Use VGA display while tracing [0=none,1=VGA]
-W Wait for key before clearing display
-x columns Set the x resolution
-y lines Set the y resolution
-z start_line Start a trace at a specified line
If no arguments are given, then Polyray will give a brief
description of the command line options.
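As an illustration of how several of these options combine (the
file name and the specific values here are arbitrary), a
raytrace at 640x480 with adaptive antialiasing and a named
output file might be started with:

   polyray scene.pi -r 0 -A -x 640 -y 480 -o scene.tga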
1.6 Initialization File
The first operation carried out by Polyray is to read the
initialization file "polyray.ini". This file can be used to
tune a number of the default variables used during rendering.
If used, the file must appear in the current directory. The
file does not have to exist; it is simply a convenience that
eliminates retyping command line parameters.
Each entry in the initialization file must appear on a separate
line, and have the form:
default_name value
The names are text. The values are numeric for some names, and
text for others. The allowed names and values are:
abort_test true/false/on/off
alias_threshold [Min value to start adaptive antialiasing]
antialias none/filter/jitter/adaptive
display none/vga
max_level [max depth of recursion]
max_lights [max # of lights]
max_queue_size [max # of objects in a priority queue]
max_samples [# samples for antialiasing]
pixel_size [8, 16, 24, 32]
pixel_encoding none/rle
renderer ray_trace/scan_convert/wire_frame
shade_flags [default/bit mask of flags, see sec 1.7.1.4]
shadow_tolerance [minimum distance for blocking objects]
status none/totals/line/pixel
warnings on/off
A typical example of "polyray.ini" would be:
abort_test off
alias_threshold 0.05
antialias adaptive
display vga
max_samples 8
pixel_size 24
status line
If no initialization file exists, then Polyray will use the
following default values:
abort_test on
alias_threshold 0.2
antialias none
display none
max_level 5
max_lights 64
max_queue_size 128
max_samples 4
pixel_size 16
pixel_encoding rle
renderer ray_trace
shade_flags default
shadow_tolerance 0.001
status line
warnings on
1.7 Rendering Options
Polyray supports three very distinct means of rendering scenes:
raytracing, polygon scan conversion, and wireframe. Raytracing
is often a very time consuming process; however, a number of
surface types can be rendered with this technique that are very
difficult or impossible to render with traditional techniques.
1.7.1 Raytracing
Polyray is at heart a raytracer. The quality of the images that
can be produced is entirely a function of how long you want to
wait for results. There are certain options that allow you to
adjust how you want to trade off quality vs. speed vs. memory
requirements.
The basic operation in raytracing is that of shooting a ray from
the eye, through a pixel, and finally hitting an object. For
each type of primitive there is specialized code that helps
determine if a ray has hit that primitive. The standard way that
Polyray generates an image is to use one ray per pixel and to
test that ray against all of the objects in the data file.
The following sections describe how you can cause Polyray to use
more rays per pixel to improve image quality, skip pixels or
objects in a probabilistic way to improve rendering speed, or
partition objects in such a way that not all objects need to be
tested for every ray. Each of these techniques has benefits and
drawbacks.
1.7.1.1 Antialiasing
Rays are represented as one-dimensional lines. On the other
hand, pixels and objects have a definite size. This can lead to
a problem known as "aliasing", where a ray may not intersect an
object because the object only partially overlaps a pixel, or
where a pixel should have color contributed by several objects
that overlap it, none of which completely fills the pixel.
Aliasing often shows up as a staircase effect on the edges of
objects.
Polyray offers three ways to reduce aliasing: by filtering, by
oversampling with a fixed number of samples, and by adaptive
oversampling. Filtering smoothes the output image by averaging
the color values of neighboring pixels to the pixel being
rendered. Oversampling is performed by adding extra rays through
random jittering of the direction of the ray within the pixel
being traced. By averaging the result of all of the rays that
are shot through a single pixel, aliasing problems can be greatly
reduced.
The filtering process adds little overhead to rendering;
however, the resolution of the image is degraded by the
averaging. Jittered antialiasing slows down the rendering
process in direct proportion to the number of extra rays, but
results in the best image quality.
Oversampling can be performed in two ways, either by using a
fixed number of rays per pixel, or by adaptive antialiasing. The
former will always shoot the same number of rays through each
pixel, regardless of picture complexity. The latter initially
shoots just one ray through each pixel; if a pixel's color is
significantly different from any of its neighbors, then
additional rays are fired.
The two initialization (and command line) variables that will
affect the performance of adaptive antialiasing are
"alias_threshold" and "max_samples". The first is a measure of
how different a pixel must be with respect to its neighbors
before antialiasing will kick in. If a pixel has the value: <r1,
g1, b1>, and it's neighbor has the value <r2, g2, b2> (RGB values
between 0 and 1 inclusive), then the "distance" between the two
colors is:
dist = sqrt((r1 - r2)^2 + (b1 - b2)^2 + (g1 - g2)^2)
This is the standard Pythagorean formula for values specified in
RGB. If "dist" is greater than the value of "alias_threshold",
then extra rays will be fired through the pixel in order to
determine an averaged color for that pixel.
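For example (the specific values are only suggestions), adaptive
antialiasing with a tighter threshold and up to eight samples
per pixel might be requested on the command line:

   polyray scene.pi -A -T 0.05 -s 8

or with the equivalent entries in "polyray.ini":

   antialias adaptive
   alias_threshold 0.05
   max_samples 8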
Note: Performing the distance test in RGB space may not be the
best way to determine when antialiasing should occur. Future
versions of Polyray may perform distance calculations in LAB
space.
1.7.1.2 Dithering
In order to speed up the generation of an image (and to produce
some interesting effects), several dithering options are
available: Dithering all rays, dithering all objects, and
dithering of specific objects. Associated with each option is a
probability value between 0 and 1. During rendering, if the
dithering option is being used, Polyray generates a random number
between 0 and 1 and compares it to the probability value. If the
random number is greater than the probability value then the ray
(or object) will be ignored.
Dithering of rays is a way to sample the entire image; for
example, if a probability value of 0.5 is given, then only half
of all rays will be traced through the scene. The final image
will be severely degraded; however, the amount of time taken to
trace the scene will be cut approximately in half.
Dithering of objects is a way to simulate transparency, without
the overhead of generating secondary rays during the raytracing
process. It also can be used as an interesting optical effect -
during an animation, if the probability value is successively
lowered from 1 to 0, the object will dissolve away.
Negative dither values, as well as those above 1.0, will have
no effect. The way dithering works is: right before a check is
made to see if the current ray will intersect an object, a
random number between 0 and 1 is generated. If this number is
less than the dither value then the intersection check will be
made. If the random number is greater than the dither value
then Polyray assumes that there is no hit.
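For example, a command such as the following (the value 0.5 is
an arbitrary choice) would trace only about half of the primary
rays, roughly halving the tracing time at the cost of a grainy
image:

   polyray scene.pi -D 0.5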
Sample files: dither.pi, bnddithr.pi
1.7.1.3 Bounding Slabs
For scenes with large numbers of small objects, there are
optimization tricks that can take advantage of the distribution
of the objects. An easy one to implement, and one that often
results in huge time savings, is the bounding slab. The basic
idea
of a bounding slab is to sort all objects along a particular
direction. Once this sorting step has been performed, then
during rendering the ray can be quickly checked against a slab
(which can represent many objects rather than just one) to see if
intersection tests need to be made on the contents of the slab.
The most important caveats with Polyray's implementation of
bounding slabs are:
o Scenes with only a few large objects will derive little
speed benefits.
o If there is a lot of overlap of the positions of the
objects in the scene, then there will be little
advantage to the slabs.
o If the direction of the slabs does not correspond to a
direction that easily sorts objects, then there will be
little speed gained.
However, for data files that are generated by another program,
the requirements for effective use of bounding slabs are often
met. For example, most of the data files generated by the SPD
programs will be rendered orders of magnitude faster with
bounding slabs than without.
1.7.1.4 Shading Quality Flags
By specifying a series of bits, various shading options can be
turned on and off. The value of each flag, as well as the
meaning are:
1 Shadow_Check Shadows will be generated
2 Reflect_Check Reflectivity will be tested
4 Transmit_Check Check for refraction
8 Two_Sides If on, highlighting will be performed on
both sides of a surface.
16 Cast_Shadow Determines if an object casts shadows.
32 Primary_Rays If off, then primary rays (those from
the eye) are not checked against the
object.
The default settings of these flags are:
raytracing Shadow_Check + Reflect_Check +
Transmit_Check + Two_Sides + Cast_Shadow
+ Primary_Rays (= 63)
scan conversion [Two_Sides, = 8]
wireframe Not applicable
The assumption made is that during raytracing the highest quality
is desired, and consequently every shading test is made. The
assumption for scan conversion is that you are trying to render
quickly, and hence most of the complex shading options are turned
off.
If any of the flags are explicitly set and scan conversion is
used to render the scene, then at every pixel that is displayed,
a recursive call to the raytracer will be made to determine the
correct shading. Note that due to the faceted nature of objects
during scan conversion, shadowing and especially refraction can
get messed up.
For example if you want to do a scan conversion that contains
shadows, you would use the following:
polyray xxx.pi -r 1 -q 1
or if you wanted to do raytracing with no shadows, reflection
turned on, transparency turned off, and with diffuse and
specular shading performed for both sides of surfaces, you
would use options 2 and 8:
polyray xxx.pi -q 10
Note: texturing cannot be turned off.
1.7.2 Scan Conversion
In order to support quicker rendering of images, Polyray can
render most primitives as a series of polygons. Each polygon is
scan converted using a Z-Buffer for depth information and an
S-Buffer for color information.
The scan conversion process does not by default provide support
for: shadows, reflectivity, or transparency. It is possible to
instruct Polyray to use raytracing to perform these shading
operations through the use of either the global shade flags or by
setting shade flags for a specific object. An alternative method
for quickly testing shadows using shadow buffers is in the works.
The memory requirements for performing scan conversion can be
quite substantial. You need at least as much memory as is
required for raytracing, plus at least 7 bytes per pixel for the
final image. In order to correctly keep track of depth and
color, 4 bytes are used for the depth of each pixel in the
Z-Buffer (a 32 bit floating point number), and 3 bytes are used
for each pixel in the S-Buffer (1 byte each for red, green, and
blue).
During scan conversion, the number of polygons used to cover the
surface of a primitive is controlled using the keywords
"u_steps", and "v_steps". These two values control how many
steps around and along the surface of the object values are
generated to create polygons. The higher the number of steps,
the smoother the final appearance. Note however that if the
object is very small then there is little reason to use a fine
mesh - hand tuning is sometimes required to get the best balance
between speed and appearance.
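As a sketch of what this tuning looks like in a data file (the
step counts here are arbitrary, and "shiny_red" is the texture
defined in the quick demo), a sphere might be declared as:

   object {
      sphere <0, 0, 0>, 2
      u_steps 32
      v_steps 16
      shiny_red
   }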
Generating isosurfaces for blobs, polynomial functions and
implicit functions, followed by polygonalization of the surfaces
is performed using a "marching cubes" algorithm. Currently the
value of "u_steps" determines the number of slices along the x-
axis, and the value of "v_steps" controls the number of slices
along the y-axis and along the z-axis. Future versions may
introduce a third value to allow independent control of y and z.
1.7.3 Wireframe
In most cases the fastest way to display a model is by showing a
wireframe representation. Polyray supports this as a variant of
the scan conversion process. When drawing a wireframe image only
the edges of the polygons that are generated as part of scan
conversion are drawn onto the screen. A big problem with
wireframe representations is that CSG operations are not
performed. If you use CSG intersection, difference, or clipping,
you may see a lot of stuff that will not appear in the final
image.
Note that there are images that render slower in scan conversion
and wireframe than raytracing! These are typically those files
that contain large numbers of spheres and cones. The scan
conversion process generates every possible part of each object,
whereas the raytracer is able to sort the objects and only
display the visible parts of the surfaces.
2 Detailed description of the Polyray input format
An input file describes the basic components of an image:
o A viewpoint that characterizes where the eye is, where it
is looking and what its orientation is.
o Objects, their shape, placement, and orientation.
o Light sources, their placement and color.
o Textures for coloring and patterning surfaces.
Beyond the fundamentals, there are many components that exist
as conveniences, such as definable expressions and textures.
This section of the document describes in detail the syntax of
all of the components of an input file.
2.1 Expressions
There are four basic types of expressions that are used in
polyray:
o fexper: Floating point expression (i.e. 0.5, 2 *
sin(1.33)). These are used at any point a floating
point value is needed, such as the radius of a sphere or
the amount of contribution of a part of the lighting
model.
o vexper: Vector valued expression (i.e. <0, 1, 0>, red,
12 * <2, sin(x), 17> + P). Used for color expressions,
describing positions, describing orientations, etc.
o arrays: Lists of expressions (i.e. [0, 1, 17, 42],
[<0,1,0>, <2*sin(theta), 42, -4>, 2*<3, 7, 2>])
o cexper: Conditional expression (i.e. x < 42).
The following sections describe the syntax for each of these
types of expressions, as well as how to define variables in terms
of expressions.
2.1.1 Numeric expressions
In most places where a number can be used (i.e. scale values,
angles, RGB components, etc.) a simple floating point expression
(fexper) may be used. These expressions can contain any of the
following terms:
val A floating point number or defined
value
'(' fexper ')' Parenthesised expression
fexper ^ fexper Same as pow(x, y)
fexper * fexper Multiplication
fexper / fexper Division
fexper + fexper Addition
fexper - fexper Subtraction
-fexper Unary minus
acos(fexper) Arccosine, (radians for all trig
functions)
asin(fexper) Arcsin
atan(fexper) Arctangent
ceil(fexper) Ceiling function
cos(fexper) Cosine
cosh(fexper) Hyperbolic cosine
degrees(fexper) Converts radians to degrees
exp(fexper) e^x, standard exponential function
fabs(fexper) Absolute value
floor(fexper) Floor function
fmod(fexper, fexper) Modulus function for floating point
values
ln(fexper) Natural logarithm
log(fexper) Logarithm base 10
noise(vexper, fexper) Correlated noise function
pow(fexper, fexper) Exponentiation (x^y)
radians(fexper) Converts degrees to radians
sawtooth(fexper) Sawtooth function (range 0 -> 1)
sin(fexper) Sine
sinh(fexper) Hyperbolic sine
sqrt(fexper) Square root
tan(fexper) Tangent
tanh(fexper) Hyperbolic tangent
visible(vexper, vexper) Returns 1 if second point visible from
first.
vexper[i] Extract component i from a vector or
an array.
vexper . vexper Dot product of two vectors
|fexper| Absolute value (same as fabs)
|vexper| Length of a vector
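For example, named values (see section 2.1.6 for the "define"
statement) can be built up from these terms; the particular
definitions below are only illustrations:

   define golden_ratio (1 + sqrt(5)) / 2
   define wiggle 0.5 * sin(radians(45)) + fabs(-0.25)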
2.1.2 Vector Expressions
In most places where a vector can be used (i.e. color values,
rotation angles, locations, ...), a vector expression is allowed.
The expressions can contain any of the following terms:
vexper + vexper Addition
vexper - vexper Subtraction
vexper * vexper Cross product
vexper * fexper Scaling of a vector by a scalar
fexper * vexper Scaling of a vector by a scalar
vexper / fexper Inverse scaling of a vector by a
scalar
brownian(vexper) Makes a random displacement of a
vector
color_wheel(x, y, z) RGB color wheel using x and z (y
ignored), the color returned is based
on <x, z> using the chart below:
Z-Axis
^
|
|
Green Yellow
\ /
\ /
Cyan ---- * ---- Red -----> X-Axis
/ \
/ \
Blue Magenta
Intermediate colors are generated by
interpolation.
rotate(vexper, vexper) Rotate the point given in the first
argument by the angles given in the
second argument. (angles in degrees)
rotate(vexper, vexper,
fexper) Rotate the point in the first argument
around the axis given in the second
argument, by the angle in the third
argument. (angle in degrees)
reflect(vexper, vexper) Reflect the first vector about the
second vector (particularly useful in
environment maps)
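As a small illustration (the names and values are arbitrary),
vector expressions can be combined and named just like numeric
ones:

   define forward <0, 0, 1>
   define tilted_forward rotate(forward, <30, 0, 0>)
   define pink (red + white) / 2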
2.1.3 Arrays
Arrays are a way to represent data in a convenient list form. A
good use for arrays is to hold a number of locations for polygon
vertices or as locations for objects in successive frames of an
animation.
As an example, a way to define a tetrahedron (4 sided solid) is
to define its vertices, and which vertices make up its faces. By
using this information in an object declaration, we can make a
tetrahedron out of polygons very easily.
define tetrahedron_faces
[<0, 1, 2>, <0, 2, 3>, <0, 3, 1>, <1, 3, 2>]
define tetrahedron_vertices
[<0, 0, sqrt(3)>, <0, (2*sqrt(2)*sqrt(3))/3, -sqrt(3)/3>,
<-sqrt(2), -(sqrt(2)*sqrt(3))/3, -sqrt(3)/3>,
<sqrt(2), -(sqrt(2)*sqrt(3))/3, -sqrt(3)/3>]
define tcf tetrahedron_faces
define tcv tetrahedron_vertices
define tetrahedron
object {
object { polygon 3, tcv[tcf[ 0][0]],
tcv[tcf[ 0][1]], tcv[tcf[ 0][2]] } +
object { polygon 3, tcv[tcf[ 1][0]],
tcv[tcf[ 1][1]], tcv[tcf[ 1][2]] } +
object { polygon 3, tcv[tcf[ 2][0]],
tcv[tcf[ 2][1]], tcv[tcf[ 2][2]] } +
object { polygon 3, tcv[tcf[ 3][0]],
tcv[tcf[ 3][1]], tcv[tcf[ 3][2]] }
}
What happens in the object declaration is that each polygon
grabs its three vertex indices from the array
"tetrahedron_faces", then uses those indices to look up the
actual locations in space of its vertices.
Another example is to use an array to store a series of view
directions so that we can use animation to generate a series of
very distinct renders of the same scene:
define location <0, 0, 0>
define at_vecs [<1, 0, 0>, <-1, 0, 0>, < 0, 1, 0>, < 0,-1, 0>,
< 0, 0,-1>, < 0, 0, 1>]
define up_vecs [< 0, 1, 0>, < 0, 1, 0>, < 0, 0, 1>,
< 0, 0,-1>, < 0, 1, 0>, < 0, 1, 0>]
// Generate six frames
start_frame 0
end_frame 5
// Each frame generates the view in a specific direction. The
// vectors stored in the arrays "at_vecs", and "up_vecs" turn
// the camera in such a way as to generate image maps correct
// for using in an environment map.
viewpoint {
from location
at location + at_vecs[frame]
up up_vecs[frame]
...
}
... Rest of the data file ...
Sample files: plyhdrn.pi, environ.pi
2.1.4 Conditional Expressions
Conditional expressions are used in one of two places:
conditional processing of declarations (see section 2.7) or
conditional value functions.
cexper has one of the following forms:
!cexper
cexper && cexper
cexper || cexper
fexper < fexper
fexper <= fexper
fexper > fexper
fexper >= fexper
fexper == fexper
vexper == vexper
A use of conditional expressions is to define a texture based
on other expressions; the format of this expression is:
(cexper ? true_value : false_value)
Where true_value/false_value can be either floating point or
vector values. This type of expression is taken directly from
the equivalent in the C language. An example of how this is used
(from the file "spot1.pi") is:
special surface {
color white
ambient (visible(W, throw_offset) == 0
? 0
: (P[0] < 1 ? 1
: (P[0] > throw_length ? 0
: (throw_length - P[0]) / throw_length)))
transmission (visible(W, throw_offset) == 1
? (P[0] < 1 ? 0
: (P[0] > throw_length ? 1
: P[0] / throw_length))
: 1), 1
}
In this case conditional statements are used to determine the
surface characteristics of a cone defining the boundary of a
spotlight. The amount of ambient light is modified with
distance from the apex of the cone; the visibility of the cone
is modified based on both distance and on whether the cone is
behind an object with respect to the source of the light.
2.1.5 Run-Time expressions
There are a few expressions that only have meaning during the
rendering process:
I The direction of the ray that struck the
object
P The point of intersection in "object"
coordinates (you can also use x, y, or z to
represent a single component of P).
N The normal to the point of intersection in
"world" coordinates
W The point of intersection in "world"
coordinates
These expressions describe the interaction of a ray and an
object. To use them effectively, you need to understand the
distinction between "world" coordinates and "object" coordinates.
Object coordinates describe a point or a direction with respect
to an object as it was originally defined. World coordinates
describe a point with respect to an object after it has been
rotated/translated/scaled. Typically texturing is done in object
coordinates so that as the object is moved around the texture
will move with it. On the other hand shading is done in world
coordinates.
NOTE: Capitalization of these variables is important.
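As a sketch only (following the pattern of the "spot1.pi"
fragment shown in section 2.1.4), a surface that is brighter
wherever the object coordinate x is positive might be written:

   special surface {
      color white
      ambient (P[0] > 0 ? 0.8 : 0.2)
      diffuse 0.4
   }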
2.1.6 Named Expressions
A major convenience for creating input files is the ability to
create named definitions of surface models, object definitions,
vectors, etc. The way a value is defined takes one of the
following forms:
define token expression
define token object { ... }
define token surface { ... }
define token texture { ... }
define token transform { ... }
Objects, surfaces, and textures can either be instantiated
as-is by using the token name alone, or they can be
instantiated and modified:
token,
or
token { ... modifiers ... },
Polyray keeps track of what type of entity the token represents
and will parse the expressions accordingly.
Note: It is not possible to have two types of entities referred
to by the same name. If a name is reused, then a warning will be
printed, and all references to that name will use the new
definition from that point on.
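For example (the particular values are arbitrary, and the
surface parameters follow the quick demo in section 1.4):

   define half_pi 3.14159265358 / 2
   define matte_red
      texture { surface { ambient red, 0.2 diffuse red, 0.8 } }
   define ball
      object { sphere <0, 0, 0>, 2 matte_red }

Following the forms above, "ball" could later appear either
alone or with modifiers, for example:

   ball { translate <0, 3, 0> }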
2.2 Definition of the viewpoint
The viewpoint and its associated components define the position
and orientation the view will be generated from.
The format of the declaration is:
viewpoint {
from vexper
at vexper
up vexper
angle fexper
hither fexper
resolution fexper, fexper
aspect fexper
yon fexper
dither_rays fexper
dither_objects fexper
max_trace_depth fexper
aperture fexper
focal_distance fexper
}
The order of the entries defining the viewpoint is not
important, unless you redefine some field (in which case the
last one is used). The parameters are:
aspect The ratio of width to height. (Default: 1.0.)
at A position to be at the center of the image,
in XYZ world coordinates. (Default: <0, 0, 0>)
angle The field of view (in degrees), from the
center of the top row to the center of the
bottom row. (Default: 45 degrees)
from The eye location in XYZ. (Default: <0, 0, -10>)
hither Distance to front of view pyramid. Any
intersection less than this value will be
ignored. (Default: 1.0e-3)
resolution The number of pixels wide and high of the
raster. (Default: 256x256)
up A vector defining which direction is up, as
an XYZ vector. (Default: <0, 1, 0>)
yon Distance to back of view pyramid. Any
intersection beyond this distance will be
ignored. (Defaults to 1.0e5)
dither_rays Rays will be skipped if a random number is
above the given value.
dither_objects For each ray, ray-surface checks will be skipped
for an object if a random number is above the
given value.
max_trace_depth This allows you to tailor the amount of
recursion allowed for scenes with reflection
and/or transparency. (Default: 5)
aperture If larger than 0, then extra rays are shot
(controlled by "max_samples" in the
initialization file) to produce a blurred
image. (Good values are between 0.1 and
0.5.)
focal_distance Distance from the eye to the point that
things are in focus, this defaults to the
distance between "from" and "at".
The view vectors will be coerced so that they are perpendicular
to the vector from the eye (from) to the point of interest (at).
A typical declaration is:
viewpoint {
from <0, 5, -5>
at <0, 0, 0>
up <0, 1, 0>
angle 30
resolution 320, 160
aspect 2
}
In this declaration the eye is five units behind the origin, five
units above the x-z plane and is looking at the origin. The up
direction is aligned with the y-axis, the field of view is 30
degrees and the output file will default to 320x160 pixels.
In this example it is assumed that pixels are square, and hence
the aspect ratio is set to width/height. If you were generating
an image for a screen that has pixels half as wide as they are
high then the aspect ratio would be set to one.
2.3 Objects/Surfaces
In order to make pictures, the light has to hit something.
Polyray supports several primitive objects, and the following
sections describe the syntax for describing the primitives, as
well as how more complex primitives can be built from simple
ones.
The "object" declaration is how polyray associates a surface with
its lighting characteristics, and its orientation. This
declaration includes one of the primitive shapes (sphere,
polygon, ...), and optionally: a texture declaration (set to a
matte white if none is defined), orientation declarations, or a
bounding box declaration.
The format of the declaration is:
object {
shape_declaration
[texture_declaration]
[transformation declarations]
[subdivision declarations]
[bounding box declaration]
}
There are three separate ways that an object can be rendered by
Polyray: raytracing, scan conversion, or wireframe. The steps
taken by each of these methods are:
Raytracing
1) Find the object that seems closest to the eye along the
current ray. The bounding slab process is what
determines what order objects will be tested against the
ray.
2) Find all intersections of the ray with the object. (Up
to the maximum number of intersections.)
Scan Conversion
1) The object is broken up into a number of rectangles (the
actual number depends on the values of "u_steps" and
"v_steps" in the object declarations).
2) The rectangle is clipped so that only the visible part
will be rendered.
3) The clipped polygon is scan converted a line at a time.
As the scan conversion takes place, the object and world
coordinates of the polygon are interpolated from
vertices.
4) As each pixel is generated by scan conversion, its
distance from the eye is checked against a Z-Buffer.
5) If the distance is less than the value in the Z-Buffer,
then any CSG and clipping operations associated with the
object are performed.
6) If the pixel passes the depth check and CSG checks, then
the pixel is shaded using the texture information
associated with the object. The depth to the pixel is
stored in the Z-Buffer and the color associated with the
pixel is stored in the S-Buffer.
Wireframe
1) The object is broken up into a number of rectangles (the
actual number depends on the values of "u_steps" and
"v_steps" in the object declarations).
2) The rectangle is clipped so that only the visible part
will be rendered.
3) The clipped rectangle is displayed on the screen. CSG is
not taken into consideration in a wireframe image.
Determining if a point is inside an object is performed by
evaluating the function that describes the object using the point
of intersection. If the value is less than or equal to 0 then
the point is inside. Some primitives do not have an inside
(triangular patches); others do not have a well defined inside
(cylinder).
The following sub-sections describe the format of the individual
parts of an object declaration. (Note: The shape declaration
MUST be first in the declaration, as any operations that follow
have to have data to work on.)
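For example, a declaration that combines several of the
optional parts (using the "shiny_red" texture from section 1.4;
the particular transformations are arbitrary) might look like:

   object {
      sphere <0, 0, 0>, 2
      shiny_red
      scale <1, 2, 1>
      rotate <0, 30, 0>
      translate <0, 2, 0>
   }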
2.3.1 Object Modifiers
2.3.1.1 Position and Orientation Modifiers
The position, orientation, and size of an object can be modified
through one of four linear transformations: translation,
rotation, scaling, and shear.
2.3.1.1.1 Translation
Translation moves the position of an object by the number of
units specified in the associated vector. The format of the
declaration is:
translate <xt, yt, zt>
For example the declaration:
translate <0, 20, -4>
will move the entire object up 20 units along the y axis and back
4 units along the (negative) z axis.
2.3.1.1.2 Rotation
Rotation revolves the position of an object about the x, y, and z
axes (in that order). The amount of rotation is specified in
degrees for each of the axes. The direction of rotations follows
a left-handed convention: if the thumb of your left hand points
along the positive direction of an axis, then the direction your
fingers curl is the positive direction of rotation.
The format of the declaration is:
rotate <xr, yr, zr>
For example the declaration:
rotate <30, 0, 20>
will rotate the object by 30 degrees about the x axis, followed
by 20 degrees about the z axis.
Remember: Left Handed Rotations.
2.3.1.1.3 Scaling
Scaling alters the size of an object by a given amount with
respect to each of the coordinate axes. The format of the
declaration is:
scale <xs, ys, zs>
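For example (the particular values are arbitrary), the
declaration:

   scale <2, 1, 0.5>

will stretch an object to twice its size along the x axis,
leave it unchanged along the y axis, and shrink it to half its
size along the z axis.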
2.3.1.1.4 Shear
A less frequently used, but occasionally useful transformation is
linear shear. Shear scales along one axis by an amount that is
proportional to the location on another axis. The format of the
declaration is:
shear yx, zx, xy, zy, xz, yz
Typically only one or two of the components will be non-zero, for
example the declaration:
shear 0, 0, 1, 0, 0, 0
will shear an object more and more to the right as y gets larger
and larger. The order of the letters in the declaration is
descriptive: shear ... ab, ... means shear along direction a by
the amount "ab" times the position b.
This declaration should probably be split into three: one that
associates shear in x with the y and z values, one that
associates shear in y with x and z values, and one that
associates shear in z with x and y values.
You might want to look at the file "xander.pi" - this uses shear
on boxes to make diagonally slanted parts of letters.
2.3.1.2 Bounding box
In order to speed up the process of determining if a ray
intersects an object and in order to define good bounds for
surfaces such as polynomials and implicit surfaces, a bounding
box can be specified. A short example of how it is used to
define bounds of a polynomial:
define r0 3
define r1 1
define torus_expression (x^2 + y^2 + z^2 - (r0^2 + r1^2))^2 -
4 * r0^2 * (r1^2 - z^2)
object {
polynomial torus_expression
shiny_red
bounding_box <-(r0+r1), -(r0+r1), -r1>,
<(r0+r1), (r0+r1), r1>
}
The test for intersecting a ray against a box is much faster than
performing the test for the polynomial equation. In addition the
box helps the scan conversion process determine where to look for
the surface of the torus.
2.3.1.3 Subdivision of Primitives
The amount of subdivision of a primitive that is performed before
it is displayed as polygons is tunable. These declarations only
take effect during scan conversion - they are ignored during
raytracing. The two declarations are:
u_steps n
v_steps m
Where U generally refers to the number of steps "around" the
primitive (the number of steps around the equator of a sphere for
example). The parameter V refers to the number of steps along
the primitive (latitudes on a sphere). Cone and cylinder
primitives only require 1 step along V, but for smoothness may
require many steps in U.
For blobs, polynomials, and implicit surfaces, the v_steps
component plays double duty, specifying how many subdivisions to
perform along both the y-axis and the z-axis.
2.3.1.4 Shading Flags
It is possible to tune the shading that will be performed for
each object. The values of each bit in the flag has the same
meaning as that given for global shading in section 1.7.1.4:
1) Shadow_Check Shadows will be generated
2) Reflect_Check Reflectivity will be tested
4) Transmit_Check Check for refraction
8) Two_Sides If on, highlighting will be performed on
both sides of a surface.
16) Cast_Shadow Determines if an object can cast a
shadow
32) Primary_Rays If off, then primary rays (those from
the eye) are not checked against the
object
The declaration has the form:
shading_flags xx
For example, if the value 50 is used for "xx" above, then this
object can be reflective and will cast shadows; however, there
will be no tests for transparency and there will be no shading
of the back sides of surfaces.
Note: the shading flag only affects the object in which the
declaration is made. This means that if you want the shading
values affected for all parts of a CSG object, then you will
need a declaration in every component.
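As a sketch (reusing the value 50 discussed above, and the
"shiny_red" texture from section 1.4), a per-object declaration
might look like:

   object {
      sphere <0, 0, 0>, 2
      shading_flags 50
      shiny_red
   }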
2.3.2 Primitives
Primitives are the lowest level of shape description. Typically
a scene will contain many primitives, appearing either
individually or as aggregates using either Constructive Solid
Geometry (CSG) operations or gridded objects.
The primitive shapes that can be used in Polyray include the
following:
o Bezier patch
o Blob
o Box
o Cone
o Cylinder
o Disc
o Implicit surface
o Height field
o Lathe surface
o Parabola
o Polygon
o Polynomial surface
o Sphere
o Sweep surface
o Torus
o Triangular patch
Descriptions of each of these primitive shapes, as well as
references to data files that demonstrate them are given in the
following subsections. Following the description of the
primitives are descriptions of how CSG and grids can be built.
2.3.2.1 Bezier patches
A Bezier patch is a form of bicubic patch that interpolates its
control vertices. The patch is defined in terms of a 4x4 array
of control vertices, as well as several tuning values.
The format of the declaration is:
bezier subdivision_type, flatness_value,
u_subdivisions, v_subdivision,
[ 16 comma-separated vertices, i.e.
<x0, y0, z0>, <x1, y1, z1>, ..., <x15, y15, z15> ]
The subdivision type controls how the patch is represented
internally. The valid values and their meaning are:
1) Store only the minimum information needed. Least storage
requirement but takes more time during rendering.
2) Store a hierarchical tree of bounding spheres that
contain smaller and smaller pieces of the patch. This
tree is used during rendering to speed up the ray-
surface intersection process. Faster but requires more
memory.
The flatness value is used to determine how far a patch should be
subdivided before it can be considered "flat". The smaller this
value, the more the patch will be subdivided (limited by the next
two values).
The number of levels of subdivision of the patch, in each
direction, is controlled by the two parameters "u_subdivisions",
and "v_subdivisions". The more subdivisions allowed, the
smoother the approximation to the patch, however storage and
processing time go up.
An example of a bezier patch is:
object {
bezier 2, 0.05, 3, 3,
<0, 0, 2>, <1, 0, 0>, <2, 0, 0>, <3, 0,-2>,
<0, 1, 0>, <1, 1, 0>, <2, 1, 0>, <3, 1, 0>,
<0, 2, 0>, <1, 2, 0>, <2, 2, 0>, <3, 2, 0>,
<0, 3, 2>, <1, 3, 0>, <2, 3, 0>, <3, 3,-2>
rotate <30, -70, 0>
shiny_red
}
Sample files: bezier0.pi, teapot.pi, teapot.inc
2.3.2.2 Blob
A blob describes a smooth potential field around one or more
spherical, cylindrical, or planar components.
The format of the declaration is:
blob threshold:
blob_component1
[, blob_component2 ]
[, etc. for each component ]
The threshold is the minimum potential value that will be
considered when examining the interaction of the various
components of the blob. Each blob component takes one of the
following forms:
sphere <x, y, z>, strength, radius
cylinder <x0, y0, z0>, <x1, y1, z1>, strength, radius
plane <nx, ny, nz>, d, strength, dist
The strength component describes how strong the potential field
is around the center of the component; the "radius" component
describes the maximum distance at which the component will
interact with other components. For a spherical blob component
the vector <x,y,z> gives the center of the potential field around
the component. For a cylindrical blob component the vector <x0,
y0, z0> defines one end of the axis of a cylinder, the vector
<x1, y1, z1> defines the other end of the axis of a cylinder. A
planar blob component is defined by the standard plane equation
with <nx, ny, nz> defining the normal and "d" defining the
distance of the plane from the origin along the normal.
Note: The ends of a cylindrical blob component are given
hemispherical caps.
Note: The colon and the commas in the declaration really are
important.
An example of a blob is:
object {
blob 0.5:
cylinder <0, 0, 0>, <5, 0, 0>, 1, 0.7,
cylinder <1, -3, 0>, <3, 2, 0>, 1, 1.4,
sphere <3, -0.8, 0>, 1, 1,
sphere <4, 0.5, 0>, 1, 1,
sphere <1, 1, 0>, 1, 1,
sphere <1, 1.5, 0>, 1, 1,
sphere <1, 2.7, 0>, 1, 1
texture {
surface {
color red
ambient 0.2
diffuse 0.8
specular white, 1
microfacet Phong 5
}
}
}
Note: since a blob is essentially a collection of 4th order
polynomials, it is possible to specify which quartic root solver
to use. See section 2.3.2.12, "Polynomial surface" for a
description of the "root_solver" statement.
Sample file: blob.pi
2.3.2.3 Box
A box is a rectangular solid that has its edges aligned with
the x, y, and z axes. It is defined in terms of two diagonally
opposite corners. The alignment can be changed by rotations
after the shape declaration.
The format of the declaration is:
box <x0, y0, z0>, <x1, y1, z1>
Usually the convention is that the first point is the front-lower-
left point and the second is the back-upper-right point. The
following declaration is four boxes stacked on top of each other:
define pyramid
object {
object { box <-1, 3, -1>, <1, 4, 1> }
+ object { box <-2, 2, -2>, <2, 3, 2> }
+ object { box <-3, 1, -3>, <3, 2, 3> }
+ object { box <-4, 0, -4>, <4, 1, 4> }
matte_blue
}
Sample file: boxes.pi
2.3.2.4 Cone
A cone is defined in terms of a base point, an apex point, and
the radii at those two points. Note that cones are not closed.
The format of the declaration is:
cone <x0, y0, z0>, r0, <x1, y1, z1>, r1
An example declaration of a cone is:
object {
cone <0, 0, 0>, 4, <4, 0, 0>, 0
shiny_red
}
Sample files: cone.pi, cones.pi
2.3.2.5 Cylinder
A cylinder is defined in terms of a bottom point, a top point,
and its radius. Note that cylinders are not closed.
The format of the declaration is:
cylinder <x0, y0, z0>, <x1, y1, z1>, r
An example of a cylinder is:
object {
cylinder <-3, -2, 0>, <0, 1, 3>, 0.5
shiny_red
}
Sample file: cylinder.pi
2.3.2.6 Disc
A disc is defined in terms of a center, a normal, and either a
single radius or an inner radius and an outer radius. If only
one radius is given, then the disc has the appearance of a (very)
flat coin. If two radii are given, then the disc takes the shape
of an annulus (washer) where the disc extends from the first
radius to the second radius. Typical uses for discs are as caps
for cones and cylinders, or as ground planes (using a really big
radius).
The format of the declaration is:
disc <cx, cy, cz>, <nx, ny, nz>, r
or
disc <cx, cy, cz>, <nx, ny, nz>, ir, or
The center vector <cx,cy,cz> defines where the center of the
disc is located; the normal vector <nx,ny,nz> defines the
direction that is perpendicular to the disc. For example, a
disc having the center <0,0,0> and the normal <0,1,0> would lie
in the x-z plane with the y-axis coming straight out of the
center.
An example of a disc is:
object {
disc <0, 2, 0>, <0, 1, 0>, 3
rotate <-30, 20, 0>
shiny_red
}
Note: a disc is infinitely thin. If you look at it edge-on it
will disappear.
Sample file: disc.pi
2.3.2.7 Implicit Surface
The format of the declaration is:
function f(x,y,z)
The function f(x,y,z) may be any algebraic expression composed of
the variables: x, y, z, a numerical value (i.e. 0.5), the
operators: +, -, *, /, ^, and the functions: cos, cosh, exp,
fabs, ln, log, sin, sinh, tan, tanh. The code is not
particularly fast, nor is it totally accurate; however, the
capability to ray-trace such a wide class of functions in a
software program is (I believe) unique to Polyray.
The distance along the ray within which solutions will be found
is determined by the following:
o If there is a bounding box, then the entry and exit
points of the ray with the box will be used as the
search interval.
o If there is no bounding box then the interval from 0.01
to 100 will be used.
o Solutions must be more than 1.0e-4 units distant from
each other.
Note: the absolute value can be written with vertical bars, i.e.
|x|^0.75.
The following object is taken from "sombrero.pi" and is a surface
that looks very much like diminishing ripples on the surface of
water.
define a_const 1.0
define b_const 2.0
define c_const 3.0
define two_pi_a 2.0 * 3.14159265358 * a_const
// Define a diminishing cosine surface (sombrero)
object {
function y - c_const * cos(two_pi_a * sqrt(x^2 + z^2)) *
exp(-b_const * sqrt(x^2 + z^2))
matte_red
bounds object { box <-4, -4, -4>, <4, 4, 4> }
}
Sample files: sinsurf.pi, sectorl.pi, sombrero.pi, superq.pi,
zonal.pi
2.3.2.8 Height Field
There are two ways that height fields can be specified, either by
using data stored in a Targa file, or using an implicit function
of the form y = f(x, z).
The default orientation of a height field is that the entire
field lies in the square 0 <= x <= 1, 0 <= z <= 1. File based
height fields are always in this orientation; implicit height
fields can optionally be defined over a different area of the
x-z plane. The height value is used for y.
2.3.2.8.1 File Based Height Fields
Height field data can be read from a Targa format file. The
only formats currently supported are 8, 16 and 24 bit
uncompressed.
By using "smooth_" in front of the declaration, an extra step is
performed that calculates normals to the height field at every
point within the field. The result of this is a greatly smoothed
appearance, at the expense of around three times as much memory
being used.
The format of the declaration is:
height_field "filename"
smooth_height_field "filename"
The sample program "wake.c" generates a Targa file that simulates
the wake behind a boat.
Sample file: wake.pi (requires that "wave.tga" be generated by
"wake.c")
The way the pixel values of the file are interpreted is:
2.3.2.8.1.1 8 Bit Format
Each pixel in the file (Type 3 Targa, raw greyscale image) is
represented by a single byte. The byte is treated as a signed
integer, giving a range of values from -128 to 127.
If you are generating a Targa file to use in Polyray, given a
height, add 128 to the height and write it as an unsigned char.
The following code fragment shows what the statement would look
like.
float height;
FILE *height_file;
... calculate each height ...
/* offset by 128 so the unsigned byte can hold heights from -128 to 127 */
fputc((unsigned char)(height + 128.0), height_file);
2.3.2.8.1.2 16 Bit Format
Each pixel in the file (Type 1 or Type 2 Targa) is represented by
two bytes, low then high. The high byte holds the integer
component of the height, and the low byte holds the fractional
part scaled by 256. The entire value is offset by 128 to
compensate for the unsigned nature of the storage bytes. As an
example the values high = 140, low = 37 would be translated to
the height:
(140 + 37 / 256) - 128 = 12.144
Similarly, if you are generating a Targa file to use in Polyray,
given a height, add 128 to the height, extract the integer
component, then extract the scaled fractional component. The
following code fragment shows a way of generating the low and
high bytes from a floating point number.
unsigned char low, high;
float height;
FILE *height_file;
...
height += 128.0;
high = (unsigned char)height;
height -= (float)high;
low = (unsigned char)(256.0 * height);
fputc(low, height_file);
fputc(high, height_file);
2.3.2.8.1.3 24 Bit Format
The red component defines the integer component of the height,
the green component holds the fractional part scaled by 256, and
the blue component is ignored. The entire value is offset by 128
to compensate for the unsigned nature of the RGB values. As an
example the values r = 140, g = 37, and b = 0 would be
translated to the height:
(140 + 37 / 256) - 128 = 12.144
Similarly, if you are generating a Targa file to use in Polyray,
given a height, add 128 to the height, extract the integer
component, then extract the scaled fractional component. The
following code fragment shows a way of generating the RGB
components from a floating point number.
unsigned char r, g, b;
float height;
FILE *height_file;
...
height += 128.0;
r = (unsigned char)height;
height -= (float)r;
g = (unsigned char)(256.0 * height);
b = 0;
fputc(b, height_file);
fputc(g, height_file);
fputc(r, height_file);
2.3.2.8.2 Implicit Height Fields
Another way to define height fields is by evaluating a
mathematical function over a grid. Given a function y = f(x, z),
Polyray will evaluate the function over a specified area and
generate a height field based on the function. This method can
be used to generate images of many sorts of functions that are
not easily represented by collections of simpler primitives.
The valid formats of the declaration are:
height_fn xsize, zsize, min_x, max_x, min_z, max_z, expression
height_fn xsize, zsize, expression
smooth_height_fn xsize, zsize, min_x, max_x, min_z, max_z,
expression
smooth_height_fn xsize, zsize, expression
If the four values min_x, max_x, min_z, and max_z are not defined
then the default square 0 <= x <= 1, 0 <= z <= 1 will be used.
For example,
// Define constants for the sombrero function
define a_const 1.0
define b_const 2.0
define c_const 3.0
define two_pi_a 2.0 * 3.14159265358 * a_const
// Define a diminishing cosine surface (sombrero)
object {
height_fn 80, 80, -4, 4, -4, 4,
c_const * cos(two_pi_a * sqrt(x^2 + z^2)) *
exp(-b_const * sqrt(x^2 + z^2))
shiny_red
}
will build a height field 80x80, covering the area from -4 <= x
<= 4, and -4 <= z <= 4.
Compare the run-time performance and visual quality of the
sombrero function as defined in "sombfn.pi" with the sombrero
function as defined in "sombrero.pi". The former uses a height
field representation and renders quite fast. The latter uses a
very general function representation and gives smoother but very
slow results.
Sample file: sombfn.pi, sinfn.pi
2.3.2.9 Lathe surfaces
A lathe surface is a polygon that has been revolved about the y-
axis. This surface allows you to build objects that are
symmetric about an axis, simply by defining 2D points.
The format of the declaration is:
lathe type, direction, total_vertices,
<vert1.x,vert1.y,vert1.z>
[, <vert2.x, vert2.y, vert2.z>]
[, etc. for total_vertices vertices]
The value of "type" is either 1, or 2. If the value is 1, then
the surface will simply revolve the line segments. If the value
is 2, then the surface will be represented by a spline that
approximates the line segments that were given. A lathe surface
of type 2 is a very good way to smooth off corners in a set of
line segments.
The value of the vector "direction" is used to change the
orientation of the lathe. For a lathe surface that goes straight
up and down the y-axis, use <0, 1, 0> for "direction". For a
lathe surface that lies on the x-axis, you would use <1, 0, 0>
for the direction.
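As a purely illustrative sketch (this object is not taken from
the sample files), a vase-like surface of revolution built from a
splined (type 2) profile might be declared as:
   // Hypothetical vase: a 2D profile revolved about the y-axis.
   // The first and last points have x = 0, so the lathe is closed.
   object {
      lathe 2, <0, 1, 0>, 5,
         <0, 0, 0>, <1.5, 0, 0>, <0.5, 2, 0>, <1, 3, 0>, <0, 3, 0>
      shiny_red
   }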
Sample file: lathe1.pi, lathe2.pi
Note that CSG will really only work correctly if you close the
lathe - that is, either make the end point of the lathe the same
as the start point, or make the x-value of the start and end
points equal zero. Lathes, unlike polygons, are not automatically
closed by Polyray.
Note: since a splined lathe surface (type = 2) is a 4th order
polynomial, it is possible to specify which quartic root solver
to use. See section 2.3.2.12, "Polynomial surface" for a
description of the "root_solver" statement.
2.3.2.10 Parabola
A parabola is defined in terms of a top (apex) point, a bottom
(base) point, and its radius at the bottom.
The format of the declaration is:
parabola <x0, y0, z0>, <x1, y1, z1>, r
The vector <x0,y0,z0> defines the "top" of the parabola - the
part that comes to a point. The vector <x1,y1,z1> defines the
"bottom" of the parabola, the width of the parabola at this point
is "r".
An example of a parabola declaration is:
object {
parabola <0, 6, 0>, <0, 0, 0>, 3
translate <16, 0, 16>
steel_blue
}
This is sort of like a salt shaker shape with a rounded top and
the base on the x-z plane.
2.3.2.11 Polygon
Although polygons are not very interesting mathematically, there
are many sorts of objects that are much easier to represent with
polygons. Polyray assumes that all polygons are closed and
automatically adds a side from the last vertex to the first
vertex.
The format of the declaration is:
polygon total_vertices,
<vert1.x,vert1.y,vert1.z>
[, <vert2.x, vert2.y, vert2.z>]
[, etc. for total_vertices vertices]
As with the sphere, note the comma separating each vertex of the
polygon.
I use polygons as a floor in a lot of images. They are a little
slower than the corresponding plane, but for scan conversion they
are a lot easier to handle. An example of a checkered floor made
from a polygon is:
object {
polygon 4, <-20, 0, -20>, <-20, 0, 20>,
< 20, 0, 20>, < 20, 0, -20>
texture {
checker matte_white, matte_black
translate <0, -0.1, 0>
scale <2, 1, 2>
}
}
2.3.2.12 Polynomial surface
The format of the declaration is:
polynomial f(x,y,z)
The function f(x,y,z) must be a simple polynomial, e.g.
x^2+y^2+z^2-1.0 is the definition of a sphere of radius 1
centered at (0,0,0). See section 4.1 for a little more detail on
restrictions on the form of the polynomial.
For quartic equations, there are three available ways to solve
for roots; by specifying which one is desired, it is possible to
tune for quality or speed. The method of Ferrari is the fastest,
but also the most numerically unstable. By default the method of
Vieta is used. Sturm sequences (which are the slowest) should be
used where the highest quality is desired.
The declaration of which root solver to use takes one of the
forms:
root_solver Ferrari
root_solver Vieta
root_solver Sturm
(Capitalization is important - these are proper nouns after all.)
Note: due to unavoidable numerical inaccuracies, not all
polynomial surfaces will render correctly from all directions.
The following example, taken from "devil.pi" defines a quartic
polynomial. The use of the CSG clipping object is to trim
uninteresting parts of the surface. The bounding box declaration
helps the scan conversion routines figure out where to look for
the surface.
// Variant of a devil's curve in 3-space. This figure has a
// top and bottom part that are very similar to a hyperboloid
// of one sheet, however the central region is pinched in the
// middle leaving two teardrop shaped holes.
object {
object {
polynomial x^4 + 2*x^2*z^2 - 0.36*x^2 - y^4 +
0.25*y^2 + z^4
root_solver Ferrari
}
& object { box <-2, -2, -0.5>, <2, 2, 0.5> }
bounding_box <-2, -2, -0.5>, <2, 2, 0.5>
rotate <10, 20, 0>
translate <0, 3, -10>
shiny_red
}
Note: as the order of the polynomial goes up, the numerical
accuracy required to render the surface correctly also goes up.
One problem that starts to rear its ugly head at around 3rd to
4th order equations is difficulty in determining shadows
correctly. The result is black spots on the surface. You can
ease this problem to a certain extent by making the value of
"shadow_tolerance" larger. For 4th and higher order equations,
you will want to use a value of at least 0.05, rather than the
default 0.001.
Sample file: torus.pi, many others
2.3.2.13 Spheres
Spheres are the simplest 3D object to render and a sphere
primitive enjoys a speed advantage over most other primitives.
The format of the declaration is:
sphere <center.x, center.y, center.z>, radius
Note the comma after the center vector; it really is necessary.
My basic benchmark file is a single sphere, illuminated by a
single light. The definition of the sphere is:
object {
sphere <0, 0, 0>, 2
shiny_red
}
Sample file: sphere.pi
2.3.2.14 Sweep surface
A sweep surface, also referred to as an extruded surface, is a
polygon that has been swept along a given direction. It can be
used to make multi-sided beams, or to create ribbon-like objects.
The format of the declaration is:
sweep type, direction, total_vertices,
<vert1.x,vert1.y,vert1.z>
[, <vert2.x, vert2.y, vert2.z>]
[, etc. for total_vertices vertices]
The value of "type" is either 1, or 2. If the value is 1, then
the surface will be a set of connected squares. If the value is
2, then the surface will be represented by a spline that
approximates the line segments that were given.
The value of the vector "direction" is used to change the
orientation of the sweep. For a sweep surface that is extruded
straight up and down the y-axis, use <0, 1, 0> for "direction".
The size of the vector "direction" will also affect the amount of
extrusion, i.e. if |direction| = 2, then the extrusion will be
two units in that direction.
An example of a sweep surface is:
// Sweep made from connected quadratic splines.
object {
sweep 2, <0, 2, 0>, 16,
<0, 0>, <0, 1>, <-1, 1>, <-1, -1>, <2, -1>, <2, 3>,
<-4, 3>, <-4, -4>, <4, -4>, <4, -11>, <-2, -11>,
<-2, -7>, <2, -7>, <2, -9>, <0, -9>, <0, -8>
translate <0, 0, -4>
scale <1, 0.5, 1>
rotate <0,-45, 0>
translate <10, 0, -18>
shiny_yellow
}
Sample file: sweep1.pi, sweep2.pi
Note that CSG will really only work correctly if you close the
sweep - that is make the end point of the sweep the same as the
start point. Sweeps, unlike polygons, are not automatically
closed by Polyray.
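As an additional illustrative sketch (not one of the sample
files), a closed triangular beam made from straight (type 1)
segments, extruded two units along the y-axis, might be declared
as:
   object {
      sweep 1, <0, 2, 0>, 4,
         <-1, -1>, <1, -1>, <0, 1>, <-1, -1>
      steel_blue
   }
Since the first and last vertices are the same, this sweep is
closed and should also behave reasonably in CSG operations.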
2.3.2.15 Torus
The torus primitive is a doughnut shaped surface that is defined
by a center point, the distance from the center point to the
middle of the ring of the doughnut, the radius of the ring of the
doughnut, and the orientation of the surface.
The format of the declaration is:
torus r0, r1, <center.x, center.y, center.z>,
<dir.x, dir.y, dir.z>
As an example, a torus that has major radius 1, minor radius 0.4,
and is oriented so that the ring lies in the x-z plane would be
declared as:
object {
torus 1, 0.4, <0, 0, 0>, <0, 1, 0>
shiny_red
}
Note: since a torus is a 4th order polynomial, it is possible to
specify which quartic root solver to use. See section 2.3.2.12,
"Polynomial surface" for a description of the "root_solver"
statement.
Sample file: torus.pi
2.3.2.16 Triangular patches
A triangular patch is defined by a set of vertices and their
normals. When calculating shading information for a triangular
patch, the normal information is used to interpolate the final
normal from the intersection point to produce a smooth shaded
triangle.
The format of the declaration is:
patch <vert1.x,vert1.y,vert1.z>, <norm1.x,norm1.y,norm1.z>,
<vert2.x,vert2.y,vert2.z>, <norm2.x,norm2.y,norm2.z>,
<vert3.x,vert3.y,vert3.z>, <norm3.x,norm3.y,norm3.z>
Smooth patch data is usually generated as the output of another
program.
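A hypothetical example (smooth patch data normally comes from a
modelling program, so this is only to show the syntax) is a
single patch lying in the x-y plane with all three normals
pointing along the z-axis:
   object {
      patch <0, 0, 0>, <0, 0, 1>,
            <2, 0, 0>, <0, 0, 1>,
            <0, 2, 0>, <0, 0, 1>
      matte_red
   }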
2.3.3 Constructive Solid Geometry (CSG)
Objects can be defined in terms of the union, intersection, and
inverse of other objects. The operations and the symbols used
are:
csgexper + csgexper - Union
csgexper * csgexper - Intersection
csgexper - csgexper - Difference
csgexper & csgexper - Clip the first object by the second
~csgexper - Inverse
(csgexper) - Parenthesised expression
Union simply collects two objects together. Union is a
convenient way to group objects in such a way as to be able to
transform them as a group. Intersection keeps only those parts
of each object that are "inside" the other object. Difference
keeps all parts of the two surfaces that are inside the first
object and outside the second. Clipping removes all parts of the
first object that are outside the second. The inverse operation
inverts the "inside" to be the outside and vice-versa. (i.e. a -
b is the same as a * ~b).
Note that intersection and difference require a clear inside and
outside. Not all primitives have well defined sides. Those that
do are:
Spheres, Boxes, Polynomials, Blobs, and Functions.
Other surfaces that do not always have a clear inside/outside,
but work reasonably well in CSG intersections are: Cylinders,
Cones, Discs, Lathes, Parabola, Polygons, and Sweeps.
Using Cylinders, Cones, and Parabolas works correctly, but the
open ends of these surfaces will also be open in the resulting
CSG. To close them off you can use a disc shape.
Using Discs and Polygons in a CSG is really the same as doing a
CSG with the plane that they lie in. In fact, a large disc is an
excellent choice for clipping or intersecting an object, as the
inside/outside test is very fast.
Lathes, and Sweeps use Jordan's rule to determine if a point is
inside. This means that given a test point, if a line from that
point to infinity crosses the surface an odd number of times,
then the point is inside. The net result is that if the lathe
(or sweep) is not closed, then you may get odd results in a CSG
intersection (or difference).
Height fields don't work well with CSG.
As an example, the following file will generate a view of a
sphere of radius 1 with a hole of radius 0.5 through the middle:
viewpoint {
from <0,0,-8>
at <0,0,0>
up <0,1,0>
angle 30
resolution 160, 160
}
background white
light <0, 20, -40>
include "colors.inc"
define cylinder_z object { cylinder <0,0,-2>, <0,0,2>, 0.5 }
define unit_sphere object { sphere <0,0,0>, 1 }
// Define a CSG shape by deleting a cylinder from a sphere
object {
unit_sphere { shiny_red } - cylinder_z { shiny_yellow }
rotate <0, 20, 0>
}
Sample files: lens.pi, polytope.pi
2.3.4 Gridded objects
A gridded object is a way to compactly represent a rectangular
arrangement of objects by using an image map. Each object is
placed within a 1x1 cube that has its lower left corner at the
location <i, 0, j> and its upper right corner at <i+1, 1, j+1>.
The color index of each pixel in the image map is used to
determine which of a set of objects will be placed at the
corresponding position in space.
The gridded object is much faster to render than the
corresponding layout of objects. The major drawback is that
every object must be scaled and translated to completely fit into
a 1x1x1 cube that has corners at <0,0,0> and <1,1,1>.
The size of the entire grid is determined by the number of pixels
in the image. A 16x32 image would go from 0 to 16 along the x-
axis and the last row would range from 0 to 16 at 31 units in z
out from the x-axis.
The format of the declaration is:
gridded "image.tga",
object1
object2
object3
...
An example of how a gridded object is declared is:
define tiny_sphere object { sphere <0.5, 0.4, 0.5>, 0.4 }
define pointy_cone object { cone <0.5, 0.5, 0.5>, 0.4,
<0.5, 1, 0.5>, 0 }
object {
gridded "grdimg0.tga",
tiny_sphere { shiny_coral }
tiny_sphere { shiny_red }
pointy_cone { shiny_green }
translate <-10, 0, -10>
rotate <0, 210, 0>
}
In the image "grdimg0.tga", there are a total of 3 colors used:
every pixel that uses color index 0 will generate a shiny "coral"
colored sphere, every pixel that uses index 1 will generate a red
sphere, every pixel that uses index 2 will generate a green cone,
and every other color index used in the image will leave the
corresponding space empty.
Sample files: grid.pi, river.pi
2.3.5 Bounding Slabs
Bounding slabs are used by Polyray to sort the objects in the
scene along the coordinate axes. For scenes with many objects,
the result is often greatly reduced rendering times over a non-
optimized renderer.
This sorting operation can lead to the generation of many new
objects that will contain the ones defined in the data file.
Bounding slabs work best when there are many small objects, and
when the objects are oriented in such a way that they can be
separated by the slabs.
2.4 Color and lighting
The color space used in polyray is RGB, with each component
specified as a value from 0 -> 1. The way the color
and shading of surfaces is specified is described in the
following sections.
RGB colors are defined as either a three component vector, such
as <0.6, 0.196078, 0.8>, or as one of the X11R3 named colors
(which for the value given is DarkOrchid). One of these days
when I feel like typing and not thinking (or if I find them on
line), I'll put in the X11R4 colors.
The coloring of objects is determined by the interaction of
lights, the shape of the surface it is striking, and the
characteristics of the surface itself.
2.4.1 Light sources
Light sources are one of: simple positional light sources, spot
lights, or textured lights. None of these lights have any
physical size. The lights do not appear in the scene, only the
effects of the lights.
2.4.1.1 Positional Lights
A positional light is defined by its RGB color and its XYZ
position.
The format of the declaration is:
light color, location
light location
The second declaration will use white as the color.
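For example (the values here are purely illustrative), a dim
reddish light and a default white light could be declared as:
   light <1, 0.5, 0.5>, <10, 20, -30>
   light <-10, 20, -30>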
2.4.1.2 Spot Lights
The format of the declaration is:
spot_light color, location, pointed_at, Tightness, Angle,
Falloff
spot_light location, pointed_at
The vector "location" defines the position of the spot light, the
vector "pointed_at" defines the point at which the spot light is
directed. The optional components are:
Color The color of the spotlight
Tightness The power function used to determine the
shape of the hot spot
Angle The angle (in degrees) of the full effect of
the spot light
Falloff A larger angle at which the amount of light
falls to nothing
A sample declaration is:
spot_light white, <10,10,0>, <3,0,0>, 3, 5, 20
Sample file: spot0.pi, spot1.pi
2.4.1.3 Textured lights
Textured lights are an extension of point lights that use a
function (including image maps) to describe the intensity & color
of the light in each direction. The format of the declaration
is:
textured_light {
color fexper
[scale/translate/rotate/shear]
}
Sample file: environ.pi, ilight.pi
2.4.2 Background color
The background color is the one used if the current ray does not
strike any objects. The color can be any vector expression,
although it is usually a simple RGB color value.
The format of the declaration is:
background <R,G,B>
or
background color
If no background color is set, black will be used.
An interesting trick that can be performed with the background is
to use an image map as the background color (it is also a way to
translate from one Targa format to another). The way this can be
done is:
background planar_imagemap(image("test1.tga"), P)
Note that there are problems associated with textured backgrounds
- if you have reflectivity in the scene & a reflected ray doesn't
hit an object, it may reflect an undesired part of the
"background".
2.4.3 Textures
Polyray supports a few simple procedural textures: a standard
shading model, a checker texture, a hexagon texture, a noise
texture, and layered textures. In addition, a very flexible
(although slower) functional texture is supported. The general
syntax of a texture is:
texture { [texture declaration] }
or
texture_sym
Where "texture_sym" is a previously defined texture declaration.
2.4.3.1 Procedural Textures
2.4.3.1.1 Standard Shading Model
Unlike many other ray-tracers, surfaces in Polyray do not have a
single color that is used for all of the components of the
shading model. Instead a number of characteristics of the
surface must be defined (with a matte white being the default).
A surface declaration has the form:
surface {
[ surface definitions ]
}
For example, the following declaration is a red surface with a
white highlight, corresponding to the often seen "plastic"
texture:
define shiny_red
texture {
surface {
ambient red, 0.2
diffuse red, 0.6
specular white, 0.8
microfacet Reitz 10
}
}
The allowed surface characteristics that can be defined are:
ambient - Light given off by the surface
diffuse - Light reflected in all directions
specular - Amount and color of specular highlights
reflection - Reflectivity of the surface
transmission - Amount and color of refracted light
microfacet - Specular lighting model (see below)
The lighting equation used is (in somewhat simplified terms):
L = ambient + diffuse + specular + reflected + transmitted, or
L = Ka + Kd * (l1 + l2 + ...) + Ks * (l1 + l2 + ...) + Kr + Kt
Where l1, l2, ... are the lights, Ka is the ambient term, Kd is
the diffuse term, Ks is the specular term, Kr is the reflective
term, and Kt is the transmitted (refractive) term. Each of these
terms has a scale value and a filter value (the filter defaults
to white/clear if unspecified).
See the file "colors.inc" for a number of declarations of surface
characteristics, including: mirror, glass, shiny, and matte.
For lots of detail on lighting models, and the theory behind how
color is used in computer generated images, run (don't walk) down
to your local computer bookstore and get:
"Illumination and Color in Computer Generated Imagery"
Roy Hall, 1989
Springer Verlag
Source code in the back of that book was the inspiration for the
microfacet distribution models implemented for Polyray.
Note that you don't really have to specify all of the color
components if you don't want to. If the color of a particular
part of the surface declaration is not defined, then the value of
the "color" component will be examined to see if it was declared.
If so, then that color will be used as the filter. As an
example, the declaration above could also be written as:
define shiny_red
texture {
surface {
color red
ambient 0.2
diffuse 0.6
specular white, 0.8
microfacet Reitz 10
}
}
2.4.3.1.1.1 Ambient light
Ambient lighting is the light given off by the surface itself.
The format of the declaration is:
ambient color, scale
ambient scale
As always, color indicates either an RGB triple like
<1.0,0.7,0.9>, or a named color. The scale gives the amount of
contribution that ambient makes to the overall amount of light
coming from the pixel. The scale values should lie in the range
0.0 -> 1.0.
2.4.3.1.1.2 Diffuse light
Diffuse lighting is the light given off by the surface under
stimulation by a light source.
The format of the declaration is:
diffuse color, scale
diffuse scale
The only information used for diffuse calculations is the angle
of incidence of the light on the surface.
2.4.3.1.1.3 Specular highlights
The format of the declaration is:
specular color, scale
specular scale
The means of calculating specular highlights is by default the
Phong model. Other models are selected through the Microfacet
distribution declaration.
2.4.3.1.1.4 Reflected light
Reflected light is the color of whatever lies in the reflected
direction as calculated by the relationship of the view angle and
the normal to the surface.
The format of the declaration is:
reflection scale
reflection color, scale
Typically, only the scale factor is included in the reflection
declaration; this corresponds to all colors being reflected with
intensity proportional to the scale. A color filter is allowed
in the reflection definition, and this allows the modification of
the color being reflected (I'm not sure if this is useful, but I
included it anyway).
2.4.3.1.1.5 Transmitted light
Transmitted light is the color of whatever lies in the refracted
direction as calculated by the relationship of the view angle,
the normal to the surface, and the index of refraction of the
material.
The format of the declaration is:
transmit scale, ior
transmit color, scale, ior
Typically, only the scale factor is included in the transmitted
declaration; this corresponds to all colors being transmitted
with intensity proportional to the scale. A color filter is
allowed in the transmitted definition, and this allows the
modification of the color being transmitted by making the
transmission filter different from the color of the surface
itself.
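As an illustrative sketch (this is not one of the declarations
from "colors.inc"), a blue-tinted glass surface with an index of
refraction of 1.5 might be declared along these lines:
   define blue_glass
   texture {
      surface {
         ambient 0.1
         diffuse 0.2
         specular white, 0.7
         reflection 0.1
         transmit blue, 0.8, 1.5
         microfacet Phong 5
      }
   }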
It is possible for surfaces with colors very different from the
one closest to the eye to become apparent. (See "gsphere.pi"
for an example, a red sphere is in the foreground, a green sphere
and a blue sphere behind. The specular highlights of the red
sphere go to yellow, and blue light is transmitted through the
red sphere.)
A more complex file is "lens.pi" in which two convex lenses are
lined up in front of the viewer. The magnified view of part of a
grid of colored balls is very apparent in the front lens.
2.4.3.1.1.6 Microfacet distribution
The microfacet distribution is a function that determines how the
specular highlighting is calculated for an object.
The format of the declaration is:
microfacet Distribution_name falloff_angle
microfacet falloff_angle
The distribution name is one of: Blinn, Cook, Gaussian, Phong,
Reitz. The falloff angle is the angle at which the specular
highlight falls to 50% of its maximum intensity. (The smaller
the falloff angle, the sharper the highlight.) If a microfacet
name is not given, then the Phong model is
used.
The falloff angle must be specified in degrees, with values in
the range 0 to 45. The falloff angle corresponds to the roughness
of the surface: the smaller the angle, the smoother the surface.
A very wide falloff angle will give the same sort of shading that
diffuse shading gives.
Note: as stated before, look at the book by Hall. I have found
falloff values of 5-10 degrees to give nice tight highlights.
Using falloff angle may seem a bit backwards from other
raytracers, which typically use a value defining the power of a
cosine function to define highlight size. When using a power
value, the higher the power, the smaller the highlight. Using
angles seems a little tidier since the smaller the angle, the
smaller the highlight.
2.4.3.1.2 Checker
The checker texture has the form:
texture {
checker texture1, texture2
}
where texture1 and texture2 are texture declarations (or texture
constants).
A standard sort of checkered plane can be defined with the
following:
// Define a matte red surface
define matte_red
texture {
surface {
ambient red, 0.1
diffuse red, 0.5
}
}
// Define a matte blue surface
define matte_blue
texture {
surface {
ambient blue, 0.2
diffuse blue, 0.8
}
}
// Define a plane that has red and blue checkers
object {
disc <0, 0.01, 0>, <0, 1, 0>, 5000
texture {
checker matte_red, matte_blue
}
}
For a sample file, look at "spot0.pi". This file has a sphere
with a red/blue checker, and a plane with a black/white checker.
Note: a distinct problem with checkerboards is one of aliasing at
great distances. To help with aliasing, either use larger checks
(by adding a scale to the texture), or make sure there is a very
low light level at great distances.
2.4.3.1.3 Hexagon
The hexagon texture is oriented in the x-z plane, and has the
form:
texture {
hexagon texture1, texture2, texture3
}
This texture produces a honeycomb tiling of the three textures in
the declaration. Remember that this tiling is with respect to
the x-z plane, if you want it on a vertical wall you will need to
rotate the texture.
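A hypothetical honeycomb-tiled floor, reusing the matte textures
defined in the checker section above (matte_black is assumed to
be defined elsewhere, e.g. in "colors.inc"), might look like:
   object {
      disc <0, 0, 0>, <0, 1, 0>, 20
      texture {
         hexagon matte_red, matte_blue, matte_black
         scale <2, 1, 2>
      }
   }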
2.4.3.1.4 Noise surfaces
The complexity and speed of rendering of the noise surface type
lies between the standard shading model and the special surfaces
described below. It is an attempt to capture a large number of
the standard 3D texturing operations in a single declaration.
A noise surface declaration has the form:
texture {
noise surface {
[ noise surface definition ]
}
}
The allowed surface characteristics that can be defined are:
color <r, g, b> Basic surface color (used if the noise
function generates a value not
contained in the color map)
ambient scale Amount of ambient contribution
diffuse scale Diffuse contribution
specular color, scale Amount and color of specular
highlights, if the color is not given
then the body color is used.
reflection scale Reflectivity of the surface
transmission scale, ior Amount of refracted light
microfacet kind angle Specular lighting model (see the
description of a standard surface)
color_map(map_entries) Define the color map (see
following section on color map
definitions for further details)
bump_scale fexper How much the bumpiness affects the
normal to the surface
frequency fexper Affects the wavelength of ripples and
waves
phase fexper Affects the phase of the ripples and
waves
lookup_fn index Selects a predefined lookup function
normal_fn index Selects a predefined normal modifier
octaves fexper Number of octaves of noise to use
position_fn index How the intersection point is used in
the process of generating a noise
texture
position_scale fexper Amount of contribution of the position
value to the overall texture
turbulence fexper Amount of contribution of the noise to
the overall texture.
The way the final color of the texture is decided is by
calculating a floating point value using the following general
formula:
index = lookup_fn(position_scale * position_fn +
turbulence * noise3d(P, octaves))
The index value that is calculated is then used to lookup a color
from the color map. This final color is used for the ambient,
diffuse, reflection and transmission filters. The functions that
are currently available, with their corresponding indices are:
Position functions:
Index Effect
1 x value in the object coordinate system
2 x value in the world coordinate system
3 Distance from the z axis
4 Distance from the origin
5 Distance around the y-axis (ranges from 0 -> 1)
default: 0.0
Lookup functions:
Index Effect
1 sawtooth function, result from 0 -> 1
2 sin function, result from 0->1
3 ramp function, result from 0->1
default: no modification made
Definitions of these function numbers that make sense are:
define position_plain 0
define position_objectx 1
define position_worldx 2
define position_cylindrical 3
define position_spherical 4
define position_radial 5
define lookup_plain 0
define lookup_sawtooth 1
define lookup_sin 2
define lookup_ramp 3
An example of a texture defined this way is a simple white
marble:
define white_marble_texture
texture {
noise surface {
color white
position_fn position_objectx
lookup_fn lookup_sawtooth
octaves 3
turbulence 3
ambient 0.2
diffuse 0.8
specular 0.3
microfacet Reitz 5
color_map(
[0.0, 0.8, <1, 1, 1>, <0.6, 0.6, 0.6>]
[0.8, 1.0, <0.6, 0.6, 0.6>, <0.1, 0.1, 0.1>])
}
}
In addition to coloration, the bumpiness of the surface can be
affected by selecting a function to modify the normal. The
currently supported normal modifier functions are:
Index Effect
1 Make random bumps in the surface
2 Add ripples to the surface
3 Give the surface a dented appearance
default: no change
Definitions that make sense are:
define default_normal 0
define bump_normal 1
define ripple_normal 2
define dented_normal 3
See also the file "texture.txt" for a little more explanation and
a few more texture definitions.
Sample file: textures.inc
2.4.3.1.5 Layered Textures
A layered texture is a way of piling one texture on top of
another. If the top most texture has any transparency, the layer
below it will be shaded, and its color will be added to the color
generated in the first layer. This process repeats through every
layer until either an opaque layer is reached, or there are no
more layers.
The format of the declaration is:
texture {
layered
texture1,
texture2,
...
textureN
}
The first texture in the list is the top texture layer, the last
is the bottom texture layer. As an example, the following
defines a marble texture that has mirror in its veins. The way
it works is by using an alpha value in the color map, so as the
color in the vein goes to black, the alpha increases to 1.0. The
bottom layer in this case is mirror, and only appears when we are
looking into the veined part of the texture.
// Sample Polyray file
// A gradient texture with changing 'alpha' values
//
viewpoint {
from <0, 2, -10>
at <0,0,0>
up <0,1,0>
angle 35
resolution 80, 80
aspect 1
}
include "colors.inc"
background midnightblue
light <-10, 7, -5>
define position_objectx 1
define lookup_sawtooth 1
// Make a variant on white marble that has transparency in its
// veins
define alpha_white_marble
texture {
noise surface {
color white
position_fn position_objectx
lookup_fn lookup_sawtooth
octaves 3
turbulence 3
ambient 0.3
diffuse 0.8
specular 0.3
microfacet Reitz 5
color_map([0.0, 0.2, white, 0, white, 0]
[0.2, 0.5, white, 0, black, 0.2]
[0.6, 1.0, black, 0.2, black, 1])
}
}
// We now build the layered texture from the one just
// defined, as well as the "mirror" texture from "colors.inc"
define mirror_veined_marble
texture {
layered
alpha_white_marble,
mirror
}
// Make a sphere that has the layered texture
object {
sphere <0, 0, 0>, 2
mirror_veined_marble
}
// Add a checkered floor, so we see something in the mirrored
// part of the sphere.
object {
disc <0, -2.005, 0>, <0, 1, 0>, 10
texture { checker matte_white, matte_black }
}
Sample files: layer1.pi, layer2.pi
2.4.3.2 Functional Textures
The most general textures in Polyray are functional textures.
These textures are evaluated at run-time based on the expressions
given for the components of the lighting model. The general
syntax for a surface using a functional texture is:
special surface {
[ surface declaration ]
}
In addition to the components usually defined in a surface
declaration, it is possible to define a function that deflects
the normal, and a "body" color that will be used as the color
filter in the ambient, diffuse, etc. components of the surface
declaration. The formats of the two declarations are:
color vexper
and
normal vexper
An example of how a functional texture can be defined is:
define sin_color_offset (sin(3.14*fmod(x*y*z,1)+otheta)+1)/2
define sin_color <sin_color_offset, 0, 1 - sin_color_offset>
define xyz_sin_texture
texture {
special surface {
color sin_color
ambient 0.2
diffuse 0.7
specular white, 0.2
microfacet Reitz 10
}
}
In this example, the color of the surface is defined based on the
location of the intersection point using the vector defined as
"sin_color".
Sample files: cossph.pi, cwheel.pi.
2.4.3.2.1 Image maps
A type of coloring that can be used involves the use of an
existing image projected onto a surface. There are four types of
projection supported, planar, cylindrical, spherical, and
environment. Input files for use as image maps may be 8, 16, 24,
and 32 bit uncompressed, RLE compressed, or color mapped Targa
files.
The declaration of an image map is:
image("imfile.tga")
Typically, an image will be associated with a variable through a
definition such as:
define myimage image("imfile.tga")
The image is projected onto a shape by means of a "projection".
The four types of projection are declared by:
planar_imagemap(image, coordinate [, repeat]),
cylindrical_imagemap(image, coordinate [, repeat]),
spherical_imagemap(image, coordinate)
environment_map(environment(image1, image2, image3, image4,
image5, image6))
The planar projection maps the entire raster image into the
coordinates 0 <= x <= 1, 0 <= z <= 1. The vector value given as
"coordinate" is used to select a color by multiplying the x value
by the number of columns, and the z value by the number of rows.
The color appearing at that position in the raster will then be
returned as the result. If a "repeat" value is given, then the
image will be tiled across the entire surface, repeating between
every integer value of x and/or z.
The cylindrical projection wraps the image about a cylinder that
has one end at the origin and the other at <0, 1, 0>. If a
"repeat" value is given, then the image will be repeated along
the y-axis, if none is given, then any part of the object that is
not covered by the image will be given the color of pixel (0, 0).
The spherical projection wraps the image about an origin centered
sphere. The top and bottom seam are folded into the north and
south poles respectively. The left and right edges are brought
together on the positive x axis.
The environment map wraps six images around a point. This method
is a standard way to fake reflections by wrapping the images that
would be seen from a point inside an object around the object. A
sample of this technique can be seen by rendering "room1.pi"
(which generates the images) followed by rendering "room0.pi"
(which wraps the images around a sphere).
Following are a couple of examples of objects and textures that
make use of image maps:
define hivolt_image image("hivolt.tga")
define hivolt_tex
texture {
special surface {
color cylindrical_imagemap(hivolt_image, P, 1)
ambient 0.9
diffuse 0.1
}
scale <1, 2, 1>
translate <0, -1, 0>
}
object { cylinder <0, -1, 0>, <0, 1, 0>, 3 hivolt_tex }
and
define disc_image image("test.tga")
define disc_tex
texture {
special surface {
color planar_imagemap(disc_image, P)
ambient 0.9
diffuse 0.1
}
translate <-0.5, 0, -0.5>
scale <7*4/3, 1, 7>
rotate <90, 0, 0>
}
object {
disc <0, 0, 0>, <0, 0, 1>, 6
u_steps 10
disc_tex
}
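A spherical projection follows the same pattern. A hypothetical
example (the image file "world.tga" is assumed, not supplied)
that wraps a map around a sphere is:
   define globe_image image("world.tga")
   define globe_tex
   texture {
      special surface {
         color spherical_imagemap(globe_image, P)
         ambient 0.2
         diffuse 0.8
      }
   }
   object { sphere <0, 0, 0>, 2 globe_tex }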
2.5 Comments
Single line comments are allowed and have the following format:
// [ any text to end of the line ]
As soon as the two characters "//" are detected, the rest of the
line is considered a comment.
2.6 Animation support
An animation is generated by rendering a series of frames,
numbered from 0 to some total value. The declarations in polyray
that support the generation of multiple Targa images are:
total_frames val The total number of frames in the animation
start_frame val The value of the first frame to be rendered
end_frame val The last frame to be rendered
outfile "name"
outfile name Polyray appends the frame number to this
string in order to generate distinct Targa
files.
The values of "total_frames", "start_frame", and "end_frame", as
well as the value of the current frame, "frame", are usable in
arithmetic expressions in the input file. Note that these
statements should appear before the use of: total_frames,
start_frame, end_frame, or frame as a part of an expression.
Typically I put the declarations right at the top of the file.
WARNING: if the string given for "outfile" is longer than 5
characters, the three digit frame number that is appended will be
truncated by DOS. Make sure this string is short enough or you
will end up overwriting image files.
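A minimal sketch of how these declarations fit together
(hypothetical, not one of the sample files) is a 30 frame
animation in which a sphere rises as the frame counter advances:
   // Frame counters must come before any use of "frame"
   start_frame 0
   end_frame 29
   total_frames 30
   outfile "anim"    // short enough to survive the DOS name limit
   define t frame / (total_frames - 1)
   object {
      sphere <0, 4 * t, 0>, 1
      shiny_red
   }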
Sample files: whirl.pi, plane.pi, squish.pi
2.7 Conditional processing
In support of animation generation (and also because I sometimes
like to toggle attributes of objects), polyray supports limited
conditional processing. The syntax for this is:
if (cexper) {
[object/light/... declarations]
}
else {
[other object/light/... declarations]
}
The sample file "rsphere.pi" shows how it can be used to modify
the color characteristics of a moving sphere.
The use of conditional statements is limited to the top level of
the data file. You cannot put a conditional within an object or
texture declaration. i.e.
object {
if (foo == 42) {
sphere <0, 0, 0>, 4
}
else {
disc <0, 0, 0>, <0, 1, 0>, 4
}
}
is not a valid use of an "if" statement, whereas:
if (foo == 42) {
object {
sphere <0, 0, 0>, 4
}
}
else {
object {
disc <0, 0, 0>, <0, 1, 0>, 4
}
}
or
if (foo == 42)
object { sphere <0, 0, 0>, 4 }
else if (foo == 12) {
object { torus 3.0, 1.0, <0, 0, 0>, <0, 1, 0> }
object { cylinder <0, -1, 0>, <0, 1, 0>, 0.5 }
}
else
object { disc <0, 0, 0>, <0, 1, 0>, 4 }
are valid.
Note: the curly brackets "{}" are required if there are multiple
statements within the conditional, and not required if there is a
single statement.
2.8 Include files
In order to allow breaking an input file into several files (as
well as supporting the use of library files), it is possible to
direct polyray to process another file. The syntax is:
include "filename"
Beware that trying to use "#include ..." will fail as Polyray
will consider it a comment.
2.9 File flush
Due to unforeseen occurrences, like power outages, or roommates
that hit the reset button so they can do word processing, it is
possible to specify how often the output file will be flushed to
disk. The default is to wait until the entire file has been
written before a flush (which is a little quicker but you lose
everything if a crash occurs).
The format of the declaration is:
file_flush xxx
The number xxx indicates the maximum number of pixels that will
be written before a file flush will occur. This value can be as
low as 1 - this will force a flush after every pixel (only for
the very paranoid).
3 File formats
3.1 Output files
Currently the output file formats are 8, 16, 24, and 32 bit
uncompressed and RLE compressed Targa (types 2, 3, and 10). If
no output file is defined with the '-o' command line option, the
file "out.tga" will be used. The command line option -u
specifies that no compression should be used on the output file.
The default output format is an RLE compressed 16 bit color
format. This format holds 5 bits for each of red, green, and
blue. The reason for choosing this format is that very few can
afford 24 bit color boards and this format meets the limits of
the typical 8 and 15 bit color boards. If you have the hardware
to display more than 32K colors then set the command line
switches or set the defaults in the initialization file to
generate 24 or 32 bit Targa files. If you are only interested in
greyscale images, the 8 bit format is appropriate.
4 Algorithms
4.1 Processing polynomial expressions
The program in its current form allows the definition of
polynomial surfaces of degree less than 33. (Due to numerical
problems, don't bother with polynomials of degree more than
around 10, and if possible keep them less than 6.)
Polynomial expressions are processed and manipulated in the
following steps:
1) Parse the expression as taken from the input file. The
YACC parser that is used to process the input file
creates a parse tree of the expression as it is read.
The format for allowed expressions is described in
section 4.1.2.
2) Expand the expression into a sum of simple
subexpressions. After the parse tree has been built,
the expression is expanded into a sum of terms (e.g.
(x+1)^2 is rewritten as x^2 + 2*x + 1). The rules for
doing this are explained in section 4.1.3.
3) Determine what order polynomial is represented by the
expression. Every term in the expanded expression is
examined to determine the largest power of x, y, and z
that is used. This determines the order N of the
polynomial. From this value, the number of possible
terms of that order is computed.
4) Create an array of coefficients for the general three
variable polynomial equation of order N. There are a
total of (N+1)*(N+2)*(N+3)/6 terms in the general
equation. The values of each of the coefficients in the
expanded expression are then substituted into the array
of coefficients.
Note that the storage of expressions is not particularly
efficient for high orders of N.
4.1.1 Example equation representation
As an example of how the parse tree is expanded, given the input:
(x+y)*(1-x-z) + (x-z)^2.
The expanded representation is:
-1*x*y + -3*x*z + x + -1*y*z + y + z^2
This is a polynomial of order 2, and the array of coefficients
is:
0 0 1 -1 -3 -1 1 1 0 0
4.1.2 Allowed Formula Syntax
The keyword here is "simple" algebraic expressions. The only
allowed variables are "x", "y", and "z" (and only lower case).
Constants are floating point numbers. The operations allowed
between constants and the three variable symbols are: addition,
subtraction, multiplication, unary minus, and exponentiation by
a positive integer power. Examples of expressions that can be
processed are:
x^2+y-42
(1+x+y+z)^2 * 2*(x-y)*(z-x*(133-z^2))
27*x-y*42*y^2
(x^2+y^2+z^2+3)^2 - (x*y-x^2)*z^2
Examples of expressions that cannot be processed are:
x/y
x^0.5+y
2(x-y)
x^-2
4.1.3 Rules for processing formulas
There are only a few rules needed to simplify a polynomial
expression to the point that it can be easily turned into an
array of coefficients. They appear below, with the letters A, B,
C, and D standing for an arbitrary sub-expression. The rules
used to simplify an expression are:
Addition:
No simplification needed for the term: A + B
Subtraction:
A-B -> A + (-B)
Unary minus:
-(A*B) -> (-A * B)
-(A+B) -> (-A + -B)
Multiplication:
A*(B+C) -> A*B + A*C
(A+B)*C -> A*C + B*C
(A+B)*(C+D) -> A*C + A*D + B*C + B*D
Exponentiation:
(A*B)^n -> A^n * B^n
(A+B)^n -> C(n,0) * A^n + C(n,1) * A^(n-1) * B + ...
C(n,n-1) * A * B^(n-1) + C(n,n) * B^n
{ C(n, r) is the binomial coefficient }
Each of the subexpressions is further simplified until every term
in the expression has the form: const * x^i * y^j * z^k.
4.2 Processing of arbitrary functional surfaces
The basic algorithm for determining a point of intersection using
interval math is quite simple:
1) Given a range of allowed values for the ray parameter T,
a range of function values is calculated.
2) If the range does not include 0 then there is no
intersection.
3) If the range includes 0, then:
A) The range of values of the derivative of the function
is calculated for the allowed values of the ray
parameter T.
B) If the range of values of the derivative includes 0,
then the interval for T is bisected and we process each
subinterval starting with step 1.
C) If the range of values of the derivative does not
include 0, then we know that there is exactly one
solution of the function in the current interval. A
standard root solver using the regula-falsi method is
called to find the root.
There is of course much more to this...
4.2.1 Spherical coordinates
To ray-trace functions that are defined in terms of spherical
coordinates use the substitutions:
r = sqrt(x^2 + y^2 + z^2),
theta = atan(y/x),
phi = atan(sqrt(x^2 + y^2)/z)
Sample file: sectorl.pi, zonal.pi
4.3 Three dimensional noise generation
The way Polyray generates noise values from vector inputs follows
the following steps:
1) The integer part of the x, y, and z values of the vector
are determined.
2) The eight integer points surrounding the input vector are
determined.
3) A hashing function (described below) is applied to the
integer points surrounding the input vector, resulting
in eight random numbers.
4) A final number is determined by weighting the values
associated with the surrounding points by the distance
from the input vector to those points.
The result of this is that each point on an integer lattice has a
random value, but each fractional value inside a unit cube in the
lattice is related to its surrounding points.
The hashing function I use takes the bottom 10 bits of the
integer parts of the input vector, squishes them into a single
number, then crunches this value through a series of adds,
multiplies, and mods to get a single result. The algorithm is:
K = ((x & 0x03ff) << 20) |
((y & 0x03ff) << 10) |
(z & 0x03ff);
Kt = 0;
for (i=0;i<mult_table_size;i++)
Kt = ((Kt + K) * mult_table[i]) % HASH_SIZE;
result = (Flt)Kt / (Flt)(HASH_SIZE - 1);
The number and value of the entries in "mult_table" were chosen
at random, as was the value HASH_SIZE. I haven't bothered to
determine if the results from this hashing function really are
spread evenly, however the visual results from using it are quite
good.
4.4 Marching Cubes
The marching cubes algorithm puts a lattice over the three
dimensional space that an object sits in. By stepping through
each subcube in the lattice and looking for places where the
surface passes through a subcube, a good guess at a polygonal
cover for the surface can be generated. This algorithm is used
for scan converting blobs, polynomial functions, and implicit
functions.
As each cube is processed, every vertex of the cube is tested to
see if it is inside the object ("hot") or outside the object
("cold"). There are a total of 2^8 = 256 possible combinations of
hot and cold vertices.
The processing of each subcube in the lattice takes these steps:
o Each of the 8 vertices of the cube corresponds to a bit
in an unsigned char. If the vertex is hot then the bit
is set to 1, if the vertex is cold the bit is set to 0.
o Using a little combinatorics and group theory, it is
possible to classify, under the group of solid rotations
of the cube, each of the 256 possibilities to one of 23
unique base cases. (You can actually drop down to 14,
but it would have made parts of the algorithm more
complicated.)
o Each of the base cases has an entry in a table of
triangles that separate the hot vertices from the cold
vertices for that case.
o By using the inverse of the rotations that took the cube
into the base case, the triangles are rotated to
separate the hot and cold vertices of the original cube.
o The amount of hot and amount of cold that are separated
by each triangle is examined in order to see how much to
push the triangle up or down. This is a linear
interpolation that helps to keep the triangles close to
the actual surface.
o Now that the triangles are known and in approximately the
right place, the normal scan conversion routine is
called for each triangle.
The biggest drawback of this algorithm is that the number of
evaluations goes up with the cube of the resolution of the
lattice. For example, if u_steps is 20 and v_steps is 20 (the
default), then 20x20x20 = 8000 individual subcubes are visited.
If you were to increase the resolution by a factor of 5, there
would be 100x100x100 = 1,000,000 subcubes visited.
Another problem is one of aliasing. If the surface is very
complex and/or contains many small features, and the number of
steps along the axes is insufficient to properly determine
inside/outside, then large portions of the surface can disappear.
The solution is to increase the number of steps (the values of
u_steps and v_steps), or to raytrace; either option will increase
processing time.
Note: when I built the table of triangles, I made the assumption
that if there are two ways to split hot from cold, that hot would
always connect to hot. One example of this choice is when the
vertices: (0, 0, 0), (1, 0, 0), (0, 1, 1), and (1, 1, 1) are all
hot and the rest are cold.
5 Outstanding Issues
Polyray will probably never be "done". And in this respect, the
next two sections list some things that are planned for the
future, as well as things that should already have been fixed.
5.1 To do list
Parameterised objects and textures. This will occur in the
next major release of Polyray, due to the extreme structural
changes required.
The ability to execute a "system" command. During the
generation of an animation it would be nice to call an
external program that in turn generates an include file
containing data for the current frame of the animation.
(This works in the 286 version & the GCC compiled 386
version, but the Zortech compiled 386 version doesn't
support system calls.)
User definable positioning of display elements, in
particular placement of the image as it is drawn, and
placement of the status display during the trace.
Appending ".pi" to input file names that do not have an
extension.
Adding a default search path for the init file & include
files.
5.2 Known Bugs
It is possible to specify surface components that have no
meaning for the type of surface that is being declared. i.e.
octaves in a special surface.
The initialization file "polyray.ini" has to be in the
current directory.
Environment maps don't quite mesh at the seams.
Shading in scan conversion doesn't always match that from
raytracing. Not much can be done about this other than
increasing the values of "u_steps" and "v_steps".
6 Revision History
Version 1.5
Released: 3 November 1992
o Found the missing top line in scan converted images -
Polyray was using the background color for the entire
top line.
o Added layered textures.
o Expression processing code improved - bugs removed,
memory used diminished.
o Plugged memory leaks. Extensive debugging of memory
allocations and frees performed. Animations should be
much happier now.
o No longer need to define maximum number of primitives.
o Added support for greyscale Targa (type 3) files. These
can be used as the output format, as imagemaps, and as
height fields.
o Buggy SVGA support removed. Only standard VGA mode
(320x200) supported.
o Gridded objects added.
o Arrays added
o Components of CSG objects are now properly sorted by
bounding slabs
o User defined bounding slabs removed. Polyray will always
use bounding slabs aligned with the x, y, and z-axes.
o Clipping and bounding objects removed. Clipping is now
performed in CSG, bounding is specified using a
"bounding_box" declaration.
o Added wireframe display mode.
o Added planar blob types. (Also added toroidal blob
types, but they only appear in scan conversion images
due to the extreme numerical precision needed to
raytrace them.)
o Added smooth height fields.
o Fixed shading bug involving transparent objects &
multiple light sources.
o Fixed diffuse lighting from colored lights.
o Changed RLE Targa output so that line boundaries are not
crossed.
Version 1.4
Released: 11 April 1992
o Support for many SVGA boards at 640x480 resolution in 256
colors. See documentation for the -V flag. (Note: SVGA
displays only work on the 286 versions.)
o Changed the way the status output is managed. Now
requires a number following the -t flag. Note that line
and pixel status will screw up SVGA displays - drawing
goes to the wrong place starting around line 100. If
using SVGA display then either use no status, or
"totals".
o Added cylindrical blob components. Changed the syntax
for blobs to accommodate the new type.
o Added lathe surfaces made from either line segments or
quadratic splines.
o Added sweep surfaces made from quadratic splines.
o Height field syntax changed slightly. Non-square height
fields now handled correctly.
o Added adaptive antialiasing.
o Squashed bug in shading routines that affected almost all
primitives. This bug was most noticeable for objects
that were scaled using different values for x, y, and z.
o Added transparency values to color maps.
o Added new keywords to the file "polyray.ini":
shadow_tolerance, antialias, alias_threshold,
max_samples. Lines that begin with "// " in polyray.ini
are now treated as comments.
o Short document called "texture.txt" is now included in
"plydoc.zip". This describes in a little more detail
how to go about developing solid textures using Polyray.
o Added command line argument "-z start_line". This allows
the user to start a trace somewhere in the middle of an
image. Note that an image that was started this way
cannot be correctly resumed & completed. (You may be
able to use image cut and paste utilities though.)
Version 1.3
(not released)
o Added support for scan converting implicit functions and
polynomial surfaces using the marching cubes algorithm.
This technique can be slow, and is restricted to objects
that have user defined bounding shapes, but now Polyray
is able to scan convert any primitive.
o A global shading flag has been added in order to
selectively turn on/off some of the more time consuming
shading options. This option will also allow for the
use of raytracing as a way of determining shadows,
reflectivity, and transparency during scan conversion.
o Added new keywords to the file "polyray.ini": pixel_size,
pixel_encoding, shade_flags.
o Improved refraction code to (mostly) handle transparent
surfaces that are defined by CSG intersection.
o Fixed discoloring of shadows that receive some light
through a transparent object.
o Jittered antialiasing was not being called when the
option was selected, this has been fixed.
o Fixed parsing of blobs and polygons that had large
numbers of entries. Previously the parser would fail
after 50-60 elements in a blob and the same number of
vertices of a polygon.
o In keeping with the format used by POV-Ray and Vivid,
comments may now start with "//" as well as "#". The
use of the pound symbol for comments may be phased out
in future versions.
Version 1.2
Released: 16 February 1992
o Scan conversion of many primitives, using Z-Buffer
techniques.
o New primitives: sweep surface, torus
o Support for the standard 320x200 VGA display in 256
colors.
o An initialization file ("polyray.ini") is read before
processing. This allows greater flexibility in tuning
many of the default values used by Polyray.
o User defined bounding slabs added. This greatly improves
speed of rendering on data files with many small
objects.
o Noise surface added.
o Symbol table routines completely reworked. Improved
speed for data files containing many definitions.
o Bug in the texturing of height fields corrected.
Version 1.1
(not released)
o Added parabola primitive
o Dithering of rays and objects
o Blob code improved, shading corrected, intersection code
is faster and returns fewer incorrect results.
Version 1.0
Released: 27 December 1991
o Several changes in input syntax were made, the most
notable result being that commas are required in many
more places. The reason is that, because of the very
flexible nature of the expressions allowed, a certain
amount of syntactic sugar is required to remove
ambiguities from the parser.
o Several new primitives were added: boxes, cones,
cylinders, discs, height fields, and Bezier patches.
o A new way of doing textures was added - each component of
the lighting model can be specified by an implicit
function that is evaluated at run time. Using this
feature leads to slower textures; however, because the
textures are defined in the data file instead of within
Polyray, mathematical textures can be developed without
making alterations to Polyray.
o File flush commands in the data file and at the command
line were added.
o Several new Targa variants were added.
o Image mapping added.
o Numerous bug fixes have occurred.
Version 0.3 (beta)
Released: 14 October 1991
o This release added Constructive Solid Geometry,
functional surfaces defined in terms of transcendental
functions, a checker texture, and compressed Targa
output.
o Polyray no longer accepts a list of bounding/clipping
objects; only a single object is allowed. Since CSG can
be used to define complex shapes, this is not a
limitation, and it makes for cleaner data files.
Version 0.2 (beta)
(not released)
o This release added animation support, defined objects,
arithmetic expression parsing, and blobs.
Version 0.1 (beta)
(not released)
o First incarnation of Polyray. This version had code for
polynomial equations and some of the basic surface types
contained in "mtv".
7 Bibliography
"Introduction to Ray Tracing"
Edited by Andrew Glassner
Academic Press, 1989
"Illumination and Color in Computer Generated Imagery"
Roy Hall
Springer Verlag, 1989
"Numerical Recipes in C"
Press, et al.
Cambridge University Press, 1988
"CRC Handbook of Mathematical Curves and Surfaces"
David H. von Seggern
CRC Press, 1990
"Robust Ray Intersection with Interval Arithmetic"
D.P. Mitchell,
from:
"Proceedings Graphics Interface '90"
Canadian Information Processing Society
8 Sample files
A number of sample files are referenced in this document. These
files are contained in a separate archive, and demonstrate
various features of Polyray.
Simple demo files:
boxes.pi cone.pi cossph.pi cwheel.pi
cylinder.pi disc.pi gsphere.pi sphere.pi
spot0.pi
Sample color/texture definitions:
colors.inc textures.inc
Polynomial surfaces:
bicorn.pi bifolia.pi cassini.pi csaddle.pi
devil.pi folium.pi helix.pi hyptorus.pi
kampyle.pi lemnisca.pi loop.pi monkey.pi
parabol.pi partorus.pi piriform.pi qparab.pi
qsaddle.pi quarcyl.pi quarpara.pi steiner.pi
strophid.pi tcubic.pi tear5.pi torus.pi
trough.pi twincone.pi twinglob.pi witch.pi
Implicit function surfaces:
sectorl.pi sombrero.pi sinsurf.pi superq.pi
zonal.pi
Bezier patches:
bezier0.pi teapot.pi teapot.inc
Height fields:
hfnoise.pi sombfn.pi sinfn.pi wake.pi
Image mapping:
map1.pi
Texturing, CSG, etc.:
lens.pi lookpond.pi marble.pi polytope.pi
spot1.pi wood.pi xander.pi
Data file generators:
balls.c coil.c gears.c hilbert.c
mountain.c sphcoil.c tetra.c
Animation files:
plane.pi squish.pi whirl.pi
9 Polyray grammar
What follows is the complete YACC grammar used to parse Polyray
input files. Only the actions taken at each rule have been
deleted. The conventions used in the grammar are: keywords
(terminal symbols like "sphere") appear in all caps, i.e. SPHERE.
All other grammar rules (nonterminals) appear in lower case. The
terminal symbols, including punctuation, are recognized either by
the lexical analyzer or by lookup in one of the symbol tables.
(Note there is one ambiguity present in the grammar - the "if ..
if .. else" construct.)
scene
: elementlist
;
elementlist
: elementlist element
| element
;
element
: background
| camera
| definition
| flush_statement
| frame_decl
| if_statement
| light
| object
| outfile
| system_call
;
defined_token
: SURFACE_SYM
| TEXTURE_SYM
| OBJECT_SYM
| EXPRESSION_SYM
| TRANSFORM_SYM
;
definition
: DEFINE defined_token surface
| DEFINE defined_token texture
| DEFINE defined_token object
| DEFINE defined_token transform
| DEFINE defined_token expression
| DEFINE TOKEN surface
| DEFINE TOKEN texture
| DEFINE TOKEN object
| DEFINE TOKEN transform
| DEFINE TOKEN expression
;
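For example (an informal sketch - the names ball_radius, matte_red,
and ball are arbitrary), definitions and the later use of a defined
object might look like:

    define ball_radius 1.5
    define matte_red
       texture { surface { color <1, 0, 0> ambient 0.2 diffuse 0.8 } }
    define ball object { sphere <0, 0, 0>, ball_radius }
    // Instantiate the defined object, adding a texture and moving the copy
    ball { matte_red translate <2, 0, 0> }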
object
: OBJECT '{' object_decls '}'
| OBJECT_SYM
| OBJECT_SYM '{' object_modifier_decls '}'
;
object_modifier_decls
: object_modifier_decl object_modifier_decls
| object_modifier_decl
;
object_modifier_decl
: texture
| transform
| DITHER fexper
| ROTATE point
| ROTATE point ',' fexper
| SHEAR fexper ',' fexper ',' fexper ',' fexper ','
fexper ',' fexper
| TRANSLATE point
| SCALE point
| U_STEPS fexper
| V_STEPS fexper
| SHADING_FLAGS fexper
| BOUNDING_BOX point ',' point
| ROOT_SOLVER FERRARI
| ROOT_SOLVER VIETA
| ROOT_SOLVER STURM
;
object_decls
: shape_decl
| shape_decl object_modifier_decls
;
shape_decl
: bezier
| blob
| box
| cone
| cylinder
| csg
| disc
| function
| gridded
| height_field
| height_fn
| lathe
| parabola
| polygon
| polynomial
| ppatch
| smooth_height_field
| smooth_height_fn
| sphere
| sweep
| torus
;
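As a sketch of how a shape and its modifiers combine within an
object (the values are arbitrary; u_steps and v_steps control the
subdivision discussed in section 2.3.1.3):

    object {
       cylinder <0, 0, 0>, <0, 3, 0>, 0.5   // bottom, top, radius
       rotate <0, 0, 30>
       translate <1, 0, 0>
       u_steps 16
       v_steps 8
    }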
camera_exper
: ANGLE fexper
| APERTURE fexper
| AT point
| ASPECT fexper
| MAX_TRACE_DEPTH fexper
| DITHER_RAYS fexper
| DITHER_OBJECTS fexper
| FOCAL_DISTANCE fexper
| FROM point
| HITHER fexper
| YON fexper
| RESOLUTION fexper ',' fexper
| UP point
;
camera_expers
: camera_expers camera_exper
| camera_exper
;
camera
: VIEWPOINT '{' camera_expers '}'
;
light_modifier_decl
: COLOR expression
| transform
| ROTATE point
| ROTATE point ',' fexper
| SHEAR fexper ',' fexper ',' fexper ',' fexper ','
fexper ',' fexper
| TRANSLATE point
| SCALE point
;
light_modifier_decls
: light_modifier_decl light_modifier_decls
|
;
light
: LIGHT point ',' point
| LIGHT point
| SPOT_LIGHT point ',' point
| SPOT_LIGHT point ',' point ',' point ',' fexper ','
fexper ',' fexper
| TEXTURED_LIGHT '{' light_modifier_decls '}'
;
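Informally, light sources written against these rules look like the
following; the parameter meanings are covered in sections 2.4.1.1
through 2.4.1.3, and the values here are placeholders only:

    light <1, 1, 1>, <10, 10, -10>                  // color, location
    spot_light <1, 1, 1>, <10, 10, 0>, <0, 0, 0>, 3, 5, 20
    textured_light {
       color <1, 0.5, 0.5>
       translate <0, 8, 0>
    }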
background
: BACKGROUND expression
;
surface_declaration
: COLOR expression
| COLOR_MAP '(' map_entries ',' expression ')'
| COLOR_MAP '(' map_entries ')'
| AMBIENT expression ',' expression
| AMBIENT expression
| BUMP_SCALE expression
| DIFFUSE expression ',' expression
| DIFFUSE expression
| FREQUENCY expression
| LOOKUP_FUNCTION expression
| MICROFACET PHONG expression
| MICROFACET BLINN expression
| MICROFACET GAUSSIAN expression
| MICROFACET REITZ expression
| MICROFACET COOK expression
| MICROFACET expression
| NORMAL expression
| OCTAVES expression
| PHASE expression
| POSITION_FUNCTION expression
| POSITION_SCALE expression
| REFLECTION expression ',' expression
| REFLECTION expression
| SPECULAR expression ',' expression
| SPECULAR expression
| TRANSMISSION expression ',' expression ',' expression
| TRANSMISSION expression ',' expression
| TURBULENCE expression
;
surface_declarations
: surface_declaration surface_declarations
|
;
surface
: SURFACE '{' surface_declarations '}'
| SURFACE_SYM
| SURFACE_SYM '{' surface_declarations '}'
;
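A plain surface built from these declarations might be (a sketch
only - see section 2.4.3.1.1 for what each component contributes):

    surface {
       color <1, 0, 0>
       ambient 0.2
       diffuse 0.6
       specular <1, 1, 1>, 0.3
       reflection 0.2
    }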
texture_modifier_decls
: texture_modifier_decl texture_modifier_decls
| texture_modifier_decl
;
texture_modifier_decl
: transform
| ROTATE point
| ROTATE point ',' fexper
| SHEAR fexper ',' fexper ',' fexper ',' fexper ','
fexper ',' fexper
| TRANSLATE point
| SCALE point
;
texture_declarations
: texture_declaration texture_modifier_decls
| texture_declaration
;
texture_declaration
: surface
| SPECIAL surface
| NOISE surface
| CHECKER texture ',' texture
| HEXAGON texture ',' texture ',' texture
| LAYERED texture_list
;
texture
: TEXTURE '{' texture_declarations '}'
| TEXTURE_SYM
| TEXTURE_SYM '{' texture_modifier_decls '}'
;
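For instance, a checker texture can be assembled from two previously
defined textures (the names here are arbitrary):

    define matte_white
       texture { surface { color <1, 1, 1> ambient 0.3 diffuse 0.7 } }
    define matte_black
       texture { surface { color <0, 0, 0> ambient 0.3 diffuse 0.7 } }
    define board_texture texture { checker matte_white, matte_black }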
texture_list
: texture
| texture_list ',' texture
;
transform_declaration
: ROTATE point
| ROTATE point ',' fexper
| SCALE point
| TRANSLATE point
;
transform_declarations
: transform_declaration
| transform_declarations transform_declaration
;
transform
: TRANSFORM '{'
transform_declarations '}'
| TRANSFORM_SYM
| TRANSFORM_SYM '{' transform_declarations '}'
;
bezier_points
: bezier_points ',' point
| point
;
bezier
: BEZIER fexper ',' fexper ',' fexper ',' fexper ','
bezier_points
;
blob
: BLOB fexper ':' blobelements
;
blobelements
: blobelement
| blobelements ',' blobelement
;
blobelement
: fexper ',' fexper ',' point
| SPHERE point ',' fexper ',' fexper
| CYLINDER point ',' point ',' fexper ',' fexper
| PLANE point ',' fexper ',' fexper ',' fexper
| TORUS point ',' point ',' fexper ',' fexper ',' fexper
;
box
: BOX point ',' point
;
cone
: CONE point ',' fexper ',' point ',' fexper
;
csg
: csg_tree
;
csg_tree
: '(' csg_tree ')'
| csg_tree '+' csg_tree
| csg_tree '-' csg_tree
| csg_tree '*' csg_tree
| '~' csg_tree
| csg_tree '&' csg_tree
| object
;
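As an informal example in the spirit of the sample file lens.pi, the
intersection operator '*' (see section 2.3.3 for the full set of CSG
operators) applied to two overlapping spheres yields a lens-shaped
solid:

    define lens
    object {
         object { sphere <0,  0.6, 0>, 1 }
       * object { sphere <0, -0.6, 0>, 1 }
    }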
cylinder
: CYLINDER point ',' point ',' fexper
;
disc
: DISC point ',' point ',' fexper
| DISC point ',' point ',' fexper ',' fexper
;
function
: FUNCTION expression
;
gridded
: GRIDDED sexper ',' object_list
;
object_list
: object
| object object_list
;
height_field
: HEIGHT_FIELD sexper
;
height_fn
: HEIGHT_FN fexper ',' fexper ','
fexper ',' fexper ',' fexper ',' fexper ','
expression
| HEIGHT_FN fexper ',' fexper ',' expression
;
lathe
: LATHE fexper ',' point ',' fexper ',' pointlist
;
parabola
: PARABOLA point ',' point ',' fexper
;
polygon
: POLYGON fexper ',' pointlist
;
polynomial
: POLYNOMIAL expression
;
ppatch
: PATCH point ',' point ',' point ',' point ',' point ','
point
;
smooth_height_field
: SMOOTH_HEIGHT_FIELD sexper
;
smooth_height_fn
: SMOOTH_HEIGHT_FN fexper ',' fexper ','
fexper ',' fexper ',' fexper ',' fexper ','
expression
| SMOOTH_HEIGHT_FN fexper ',' fexper ',' expression
;
sphere
: SPHERE point ',' fexper
;
sweep
: SWEEP fexper ',' point ',' fexper ',' pointlist
;
torus
: TORUS fexper ',' fexper ',' point ',' point
;
fexper
: expression
;
point
: expression
;
sexper
: expression
;
pointlist
: point
| pointlist ',' point
;
expression
: '(' expression ')'
| '[' expression_list ']'
| '<' expression ',' expression '>'
| '<' expression ',' expression ',' expression '>'
| expression '[' expression ']'
| '(' conditional '?' expression ':' expression ')'
| expression '^' expression
| expression '%' expression
| expression '*' expression
| expression '.' expression
| expression '/' expression
| expression '+' expression
| expression '-' expression
| '-' expression %prec UMINUS
| '|' expression '|'
| COLOR_MAP '(' map_entries ',' expression ')'
| COLOR_MAP '(' map_entries ')'
| NOISE '(' expression ')'
| NOISE '(' expression ',' expression ')'
| ROTATE '(' expression ',' expression ')'
| ROTATE '(' expression ',' expression ',' expression ')'
| END_FRAME
| START_FRAME
| TOTAL_FRAMES
| TOKEN '(' expression_list ')'
| TOKEN
| NUM
| STRING
| EXPRESSION_SYM
;
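A few informal examples of expressions accepted by this rule (all of
the names are arbitrary):

    define half_size 1.5                          // numeric expression
    define box_top <2, half_size, 2>              // vector expression
    define dir <0, 0.707, 0.707>
    define cos_tilt dir . <0, 1, 0>               // dot product
    define tint (cos_tilt > 0.5 ? <1, 0.6, 0.3> : <0.3, 0.6, 1>)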
expression_list
: expression
| expression ',' expression_list
;
conditional
: '(' conditional ')'
| expression '<' expression
| expression '>' expression
| expression LTEQ_SYM expression
| expression GTEQ_SYM expression
| expression EQUAL_SYM expression
| conditional AND_SYM conditional
| conditional OR_SYM conditional
| '!' conditional
;
map_entry
: '[' fexper ',' fexper ',' point ',' point ']'
| '[' fexper ',' fexper ',' point ',' fexper ',' point ','
fexper ']'
;
map_entries
: map_entry map_entries
| map_entry
;
frame_decl
: end_frame_decl
| start_frame_decl
| total_frames_decl
;
end_frame_decl
: END_FRAME fexper
;
start_frame_decl
: START_FRAME fexper
;
total_frames_decl
: TOTAL_FRAMES fexper
;
outfile
: OUTFILE TOKEN
| OUTFILE STRING
;
flush_statement
: FILE_FLUSH fexper
;
system_call
: SYSTEM '(' expression_list ')'
;
statement
: '{' elementlist '}'
| element
;
if_else_part
: ELSE statement
|
;
if_statement
: IF '(' conditional ')' statement if_else_part
;
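As a sketch of conditional processing (this assumes the animation
variable "frame" described in section 2.6; bracing each branch with
'{' and '}' sidesteps the dangling-else ambiguity noted at the start
of this section):

    if (frame < total_frames / 2) {
       object { sphere <0, 0, 0>, 1 + frame * 0.05 }
    }
    else {
       object { sphere <0, 0, 0>, 2 }
    }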