Text File | 1993-04-24 | 16KB | 402 lines
Last minute additions...
After getting the documents into shape for v1.5, I made several bug fixes and
additions to make v1.6. Rather than fix up the full document file (which will
be done before the next release), this addendum file is provided to give
information about the changes. The new features are:
Depth mapped lights
Depth rendering (to save Z-Buffer information)
Displacement surfaces
Raw triangle vertex output
Directional light sources
Global haze
UV mapping and bounds
Hicolor display (VESA only, may not work right for you)
Texture maps and indexed textures
Summed textures
UV triangles
Static variables
Color maps in special surfaces
I) Depth mapped lights and depth rendering
Depth mapped lights are very similar to spotlights, in the sense that they
point from one location at another. The primary use for these is shadowing
in scan-converted scenes, where shadow information might not be available
from the raytracer (see displacement surfaces). The
format of their declaration is:
depthmapped_light {
[ angle fexper ]
[ aspect fexper ]
[ at vexper ]
[ color expression ]
[ depth "depthfile.tga" ]
[ from vexper ]
[ hither fexper ]
[ up vexper ]
}
You may notice that the format of the declaration is very similar to the
viewpoint declaration. This is intentional, as you will usually generate
the depth information for "depthfile.tga" as the output of a run of Polyray.
To support output of depth information, a new statement was added to the
viewpoint declaration. The declaration to output a depth file would have the
form:
viewpoint {
from [ location of depth mapped light ]
at [ location the light is pointed at ]
...
image_format 1
}
Where the final statement tells Polyray to output depth information instead of
color information. Note that if the value in the image_format statement is
0, then normal rendering will occur. For an example of using a depth mapped
light, see the file "room1.pi" in the data archives.
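As a sketch of the two-pass setup (the light position and the depth file name
"shadow.tga" here are invented for illustration), the first render writes the
depth information:

    viewpoint {
        from <5, 5, -5>     // position of the light
        at <0, 0, 0>        // point the light shines at
        ...
        image_format 1      // write depth instead of color
    }

and the second render uses the result for shadowing:

    depthmapped_light {
        from <5, 5, -5>     // same position as the depth pass
        at <0, 0, 0>
        depth "shadow.tga"  // depth file produced by the first pass
    }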
II) Displacement Surfaces
Displacement surfaces cause modification of the shape of an object as it
is being rendered. The amount and direction of the displacement are specified
by an object modifier statement:
displace vexper
Where the expression is a vector that tells Polyray how to do the displacement.
This feature only works for scan converted images. The raytracer will only
see the undistorted surface. For some examples of displacement surfaces, see
the following files in the data archives:
disp2.pi, disp3.pi, legen.pi, spikes.pi
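As a minimal sketch (the object and the displacement expression are invented
for illustration), a rippled sphere might be declared as:

    object {
        sphere <0, 0, 0>, 2
        displace <0, 0.25 * sin(3 * x), 0>  // small vertical ripple
        shiny_red
    }

Remember that only the scan converter will show the ripple; the raytracer
sees the plain sphere.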
III) Raw triangle vertex information
A somewhat odd addition to the image output formats for Polyray is the
generation of raw triangle information. What happens is very similar to the
scan conversion process, but rather than draw polygons, Polyray will write
a text description of the polygons (after splitting them into triangles). The
final output is a (usually long) list of lines, each line describing a single
smooth triangle. The format of the output is:
x1 y1 z1 x2 y2 z2 x3 y3 z3 nx1 ny1 nz1 nx2 ny2 nz2 nx3 ny3 nz3 u1 v1 u2 v2 u3 v3
The locations of the three vertices come first, followed by the normal
information for each vertex. Lastly, the uv values for each vertex are
generated based on the surface you are rendering (see UV triangles below).
Currently I don't have any applications for this output. The intent of this
feature is to provide a way to build models in polygon form for conversion to
another renderer's input format.
For example, to produce raw triangle output describing a sphere and dump it
to a file, you could use the command:
polyray sphere.pi -p z > sphere.tri
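Since each output line is just 24 whitespace-separated numbers in the order
shown above, post-processing is straightforward. Here is a hedged sketch of a
reader in Python (the function names are mine, not part of Polyray):

```python
# Sketch of a reader for Polyray's raw triangle output, assuming the field
# order documented above: 9 vertex coordinates, 9 normal coordinates, and
# 6 uv values per line.
def parse_triangle(line):
    f = [float(t) for t in line.split()]
    if len(f) != 24:
        raise ValueError("expected 24 numbers per triangle, got %d" % len(f))
    verts   = [tuple(f[0:3]),   tuple(f[3:6]),   tuple(f[6:9])]
    normals = [tuple(f[9:12]),  tuple(f[12:15]), tuple(f[15:18])]
    uvs     = [tuple(f[18:20]), tuple(f[20:22]), tuple(f[22:24])]
    return verts, normals, uvs

def load_triangles(path):
    # One smooth triangle per non-blank line.
    with open(path) as fh:
        return [parse_triangle(line) for line in fh if line.strip()]
```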
IV) Directional lights
A directional light means just that: light coming from some direction. The
biggest difference between this light source and the others is that no
shadowing is performed. This has pretty serious implications for shading, so
if you use this type of light, you should also set the global shading flags
so that surfaces are one-sided, e.g. "polyray foo.pi -q 55". The format of the
expression is:
directional_light color, direction
directional_light direction
An example would be: directional_light <2, 3, -4>, giving a white light coming
from the right, above, and behind the origin.
V) Global Haze
The global haze is a color that is added based on how far the ray traveled before
hitting the surface. The format of the expression is:
haze coeff, starting_distance, color
The color you use should almost always be the same as the background color.
The only time it would be different is if you are trying to put haze into a
valley with a clear sky above (this is a tough trick, but looks nice). An
example would be:
haze 0.8, 3, midnight_blue
The value of the coeff ranges from 0 to 1, with values closer to 0 causing
the haze to thicken, and values closer to 1 causing the haze to thin out.
I know it seems backwards, but it is working and I don't want to break anything.
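The described behavior (a coefficient near 0 thickening the haze) is
consistent with an exponential blend between the surface color and the haze
color. The Python sketch below illustrates that idea only; the exponential
model is an assumption, not Polyray's documented formula:

```python
# Toy illustration of haze blending. ASSUMPTION: an exponential factor
# coeff**d controls how much surface color survives; Polyray's actual
# formula may differ.
def haze_blend(surface, haze, coeff, start, distance):
    d = max(0.0, distance - start)  # haze only accumulates past the start distance
    k = coeff ** d                  # coeff near 0 -> k shrinks fast -> thick haze
    return tuple(k * s + (1.0 - k) * h for s, h in zip(surface, haze))
```

Under this model, a surface at the starting distance keeps its own color, and
distant surfaces fade toward the haze color.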
VI) UV mapping and bounds
In addition to the runtime variables x, y, P, etc., the variables u and v
have been added. In general, u varies from 0 to 1 as you go around an object,
and v varies from 0 to 1 as you go from the bottom to the top of an object.
Not all primitives set meaningful values for u and v; those that do are:
bezier, cone, cylinder, disc, sphere, torus, patch
These variables can be used in a couple of ways: to tell Polyray to render
only the portions of a surface within certain uv bounds, or as arguments to
expressions in textures or displacement functions.
See the file uvtst.pi in the data archives for an example of using uv bounds
on objects. The file spikes.pi demonstrates using uv as variables in a
displacement surface. The file bezier1.pi demonstrates using uv as variables
to stretch an image over the surface of a bezier patch.
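As a sketch of using u and v in a texture (the image file name "grid.tga" is
invented for illustration), an image can be stretched over a sphere:

    object {
        sphere <0, 0, 0>, 1
        texture {
            special shiny {
                color planar_imagemap(image("grid.tga"), <u, 0, v>)
            }
        }
    }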
VII) Hicolor display output
Polyray supports the VESA 640x480 hicolor graphics mode for display preview.
The command line switch is "-V 2". In polyray.ini, you would use
"display hicolor". Note that using any of the status display options can
really screw up the picture. I recommend "-t 0" if you are going to use
this option.
Future versions will work better. I just got a board that could handle hicolor,
so I'm still experimenting.
VIII) Texture maps and indexed textures
A texture map is declared in a manner similar to color maps. There is a
list of value pairs and texture pairs, for example:
define index_tex_map
texture_map([-2, 0, red_blue_check, bumpy_green],
[0, 2, bumpy_green, reflective_blue])
Note that for texture maps there is a required comma separating each of the
entries.
These texture maps are complementary to the indexed texture. Two typical
uses of indexed textures are to use solid texturing functions to select
(and optionally blend) between complete textures rather than just colors, and
to use image maps as a way to map textures to a surface.
For example, using the texture map above on a sphere can be accomplished
with the following:
object {
sphere <0, 0, 0>, 2
texture { indexed x, index_tex_map }
}
The indexed texture uses a lookup function (in this case a simple gradient
along the x axis) to select from the texture map that follows. See the
data file "indexed1.pi" for the complete example.
As an example of using an image map to place textures on a surface, the
following example uses several textures, selected by the color values in
an image map. The function "indexed_map" returns the color index value from
a color mapped Targa image (or uses the red channel in a raw Targa).
object {
sphere <0, 0, 0>, 1
texture {
indexed indexed_map(image("txmap.tga"), <x, 0, y>, 1),
texture_map([1, 1, mirror, mirror],
[2, 2, bright_pink, bright_pink],
[3, 3, Jade, Jade])
translate <-0.5, -0.5, 0> // center image
}
}
In this example, the image is oriented in the x-y plane and centered on the
origin. The only difference between an "indexed_map" and a "planar_imagemap"
is that the first (indexed_map) returns the index of the color in the image,
and the second returns the color itself. Note that the texture map shown
above has holes in it (between the integer values); however, this isn't a
problem, as the indexed_map function will only produce integers.
IX) Summed textures
Summed textures simply add weighted amounts of a number of textures together
to make the final color. The syntax is:
texture {
summed f1, tex1, f2, tex2, ...
}
The expressions f1, f2, ... are numeric expressions. The expressions tex1, ...
are textures.
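The effect is just a per-channel weighted sum of the evaluated texture
colors. A small Python sketch of that arithmetic (the RGB tuples stand in
for evaluated textures; this is an illustration, not Polyray code):

```python
# Sketch of the arithmetic behind a summed texture:
# result = f1*tex1 + f2*tex2 + ..., applied per color channel.
def summed(*pairs):
    # pairs: (weight, (r, g, b)) tuples, e.g. (0.5, (1.0, 0.0, 0.0))
    return tuple(sum(w * color[i] for w, color in pairs) for i in range(3))
```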
X) UV triangles
In order to keep up with the competition, it is possible to assign "uv"
coordinates to triangular patches. The syntax looks like this:
object {
patch <x0, y0, z0>, <nx0, ny0, nz0> uv 0, 0
<x1, y1, z1>, <nx1, ny1, nz1> uv 0, 1
<x2, y2, z2>, <nx2, ny2, nz2> uv 1, 0
}
During texturing, if you use either "u" or "v" in an expression, it will be
set to the appropriate value based on where the ray hit the triangle. Note
that by default the uv values are as shown above. They may however be set
to anything at all. You may also mix uv declarations of vertices with ones
that do not have uv values. For example, the declaration:
patch <0, 0, 0>, <0, 1, 0> uv 0.5, 0.5
<1, 0, 0>, <0, 1, 0>
<0.5, 0, 1>, <0, 1, 0>
is exactly the same as:
patch <0, 0, 0>, <0, 1, 0> uv 0.5, 0.5
<1, 0, 0>, <0, 1, 0> uv 1, 0
<0.5, 0, 1>, <0, 1, 0> uv 0, 1
This feature is especially powerful when you have a large object made of
triangles that you want to wrap an image map onto. By defining a texture
like:
define texture mytex
texture { matte { color planar_imagemap(image("foo.tga"), <u, 0, v>) } }
And then using "mytex" in each patch object, you can get the image properly
placed.
XI) Static variables
At the request of Jeff Bowermaster, I added a way to retain variable values
from frame to frame of an animation. Instead of the normal declaration of
a variable:
define xyz 32 * frame
you would do something like this:
if (frame == start_frame)
static define xyz 42
else
static define xyz (xyz + 0.1)
The big differences between a "static define" and a "define" are that the static
will be retained from frame to frame, and the static actually replaces any
previous definitions rather than simply overloading them.
The static variables have an additional use beyond simple arithmetic on
variables. By defining something that takes a lot of processing at parse time
(like height fields and image maps), you can make them static in the first
frame and simply instantiate them every frame after that.
One example of this would be spinning a complex height field: if you have to
create it every frame, there is a many-second wait while Polyray generates
the field. The following declarations would be a better way:
if (frame == start_frame)
static define sinsf
object {
smooth_height_fn 128, 128, -2, 2, -2, 2,
0.25 * sin(18.85 * x * z + theta_offset)
shiny_red
}
...
sinsf
...
Two examples of how static variables can be used are found in the animation
directory in the data file archive (PLYDAT.ZIP). The first is "movsph.pi",
which bounces several spherical blob components around inside a box. The
second is "cannon.pi" which points a cannon in several directions, firing
balls once it is pointed.
Warning: A texture inside a static object should ONLY be static itself. The
reason is that between frames, every non-static thing is deallocated. If you
have things inside a static object that point to a deallocated pointer, you
will most certainly crash the program. Sorry, but detecting these things would
be too hard and reallocating all the memory would take up too much space.
XII) Color maps in special surfaces
In the interests of making layered textures a bit easier, if you use a color
map in the "color" component of a special surface, then Polyray will check
for an alpha value. If one exists, then that value will be used for the
"transmission" component of the surface.
As an example, the following functions and color maps are from the file
"stones.inc", an adaptation of Mike Millers stones textures. The base
layer of the stone uses a color map that varies between a mauve and cream
colors. The top layer has some partially clear tan and rose.
The first step is to define the solid texturing functions. The functions
granite_fn_xx are similar in effect to the POV-Ray granite plus turbulence.
After that the color maps themselves are defined, followed by definitions
of the coloring functions that use the granite functions to look up a color
from the map. The final step is to create a layered texture, "Stone3", that
has Grnt0a on the top and Grnt5 below.
define granite_fn_05 noise(8 * (P + 1.0 * dnoise(P, 1)), 5)
define granite_fn_06 noise(8 * (P + 1.2 * dnoise(P, 1)), 5)
//------- Medium Mauve Med.Rose & deep cream
define Grnt5_map
color_map(
[0.000, 0.178, <0.804, 0.569, 0.494>, <0.855, 0.729, 0.584>]
[0.178, 0.356, <0.855, 0.729, 0.584>, <0.667, 0.502, 0.478>]
[0.356, 0.525, <0.667, 0.502, 0.478>, <0.859, 0.624, 0.545>]
[0.525, 0.729, <0.859, 0.624, 0.545>, <0.855, 0.729, 0.584>]
[0.729, 1.001, <0.855, 0.729, 0.584>, <0.804, 0.569, 0.494>])
//--------- Gray Tan with Rose, partially transparent
define Grnt0a_map
color_map(
[0.000, 0.153, <0.729, 0.502, 0.451>, 0.306,
<0.769, 0.686, 0.592>, 0.792]
[0.153, 0.398, <0.769, 0.686, 0.592>, 0.792,
<0.843, 0.753, 0.718>, 0.396]
[0.398, 0.559, <0.843, 0.753, 0.718>, 0.396,
<0.780, 0.667, 0.561>, 0.976]
[0.559, 0.729, <0.780, 0.667, 0.561>, 0.976,
<0.741, 0.659, 0.576>, 0.820]
[0.729, 1.001, <0.741, 0.659, 0.576>, 0.820,
<0.729, 0.502, 0.451>, 0.306])
define Grnt5 Grnt5_map[granite_fn_05]
define Grnt0a Grnt0a_map[granite_fn_06]
//------------- Rose & Yellow Marble with fog white veining
define Stone3
texture {
layered
texture { special shiny { color Grnt0a }
scale <2, 3, 2> rotate <0, 0, -30> },
texture { special shiny { color Grnt5 }
scale <2, 3, 2> rotate <0, 0, 40> }
}
A good way to build color maps for layered textures is with ColorMapper,
written by Lutz Kretzschmar of SoftTronics. This is available as CMAP.ZIP
in the Graphdev forum on CompuServe. This
program allows you to build color maps with varying colors and transparency
values. The output of this program does have to be massaged a little bit to
make it into a color map as Polyray understands it. In order to help with
this process an IBM executable, "makemap.exe" has been included. To use this
little program, you follow these steps:
1) Run CMAPPER to create a color map in the standard output format (not the
POV-Ray output format).
2) Run makemap on that file, giving a name for the new Polyray color map
definition.
3) Add this definition to your Polyray data file.
If you saved your map as "foo.map", and you wanted to add this color map to
the Polyray data file "foo.inc", with the name of foox_map, you would then
run makemap the following way:
makemap foo.map foox_map >> foo.inc
This makes the translation from CMAPPER format to Polyray format, and appends
the output (as "define foox_map color_map(...)") to the file foo.inc.
Have fun, and feel free to contact me if you have questions and/or comments.
Email is preferable, but I will eventually answer the ponderable kind.
Xander