
Shout3D™ 2.0 Specification

Node Class Reference

4.1 Introduction

This document contains a description of all node classes defined in Shout3D. Additional node types may be added by subclassing from existing node classes; see Subclassing Nodes. Node classes are represented using pseudo-code.

Shout3D contains two kinds of nodes: Instantiable Nodes and Abstract Nodes. The instantiable nodes may be included in Shout3D scene graphs and may be instantiated programmatically using the Shout3D API. The abstract nodes may not be included in scene graphs and may not be instantiated programmatically. Rather, they serve to define a class hierarchy. This hierarchy has benefits for those using the Shout3D API, for those creating new node classes, and for those who wish to take advantage of certain field usages.

4.2 Instantiable Nodes

The following set of instantiable node classes is included in Shout3D 2.0:

4.2.1 Anchor
4.2.2 Appearance
4.2.3 Background
4.2.4 Billboard
4.2.5 BooleanEventToCurrentTime
4.2.6 BooleanEventToInteger
4.2.7 Color
4.2.8 ColorInterpolator
4.2.9 Coordinate
4.2.10 CoordinateInterpolator
4.2.11 DirectionalLight
4.2.12 DoubleEventToBoolean
4.2.13 DoubleEventToInteger
4.2.14 Group
4.2.15 ImageTexture
4.2.16 IndexedFaceSet
4.2.17 IndexedLineSet
4.2.18 Material
4.2.19 NavigationInfo
4.2.20 OrientationInterpolator
4.2.21 PointSet
4.2.22 PositionInterpolator
4.2.23 ScalarInterpolator
4.2.24 Shape
4.2.25 Switch
4.2.26 TextureCoordinate
4.2.27 TimeSensor
4.2.28 TimeSensorPauser
4.2.29 Toggle
4.2.30 TouchSensor
4.2.31 Transform
4.2.32 Viewpoint

All these nodes may be included in Shout3D files and may be instantiated programmatically using the Shout3D API.

In the descriptions below, the node class is presented in bold, followed by a listing of the fields for that node, enclosed in braces. The description of each field in the node includes the type of the field, the name of the field, the default value and the usage. The default usage for all fields is ANY; if the usage is ANY it is not listed. The fields in each node are listed in alphabetical order.

4.2.1 Anchor

class Anchor { 
  NodeArrayField   children        = [];
  DoubleField      activateTime    = 0;  // Usage = NON_NEGATIVE_DOUBLE
  StringField      description     = ""; 
  BooleanField     hidden          = FALSE;
  StringArrayField parameter       = [];
  StringArrayField url             = []; // Usage = URL_ARRAY
  //INHERITED METHODS (from Group)
  float[] getMatrix();
  void    addChildren(Node[] nodesToAdd);
  void    removeChildren(Node[] nodesToRemove);
} 

The Anchor grouping node retrieves the content of a URL when the user activates (e.g., clicks) some geometry contained within the Anchor node's children, as follows:

  1. An Anchor that has no children will never activate.
  2. An Anchor that has children will activate only if the selected geometry is a descendent of the Anchor.
  3. If more than one Anchor fits the above criteria, then the Anchor that is lowest in the hierarchy will activate.

If the URL points to a valid VRML97, .s3d or .s3z file, that world replaces the world currently in the viewer (except when the parameter field, described below, alters this behaviour). If non-Shout3D data is retrieved, the viewer will determine how to handle that data. An Anchor node with an empty or null url field will not load anything new into the viewer.

Exactly how a user activates geometry contained by the Anchor node depends on the pointing device and is determined by the viewer. More details on activating an Anchor may be found in 2.5.7, Sensor nodes. When an Anchor is activated, the current time, as retrieved from the viewer's Clock interface, will be set as the value of the activateTime field.

2.5.5, Grouping and children nodes, provides a description of the children field.

The description field in the Anchor node specifies a textual description of the Anchor node. This may be used by viewer-specific user interfaces that wish to present users with more detailed information about the Anchor.

The hidden field behaves as in the parent class, Group. In addition, if an Anchor node is hidden, then that Anchor is effectively disabled since the geometry below it will be neither rendered nor pickable.

The parameter field may be used to supply any additional information to be interpreted by the viewer. Each string shall consist of "keyword=value" pairs. For example, some viewers allow the specification of a 'target' for a link to display a link in another part of an HTML document. The parameter field is then:

Anchor { 
  parameter [ "target=name_of_frame" ]
  ...
}

An Anchor node may be used to bind the initial Viewpoint node in a world by specifying a URL ending with "#ViewpointName" where "ViewpointName" is the name of a viewpoint defined in the specified URL. The example below specifies an anchor that loads the file "someScene.s3d" and binds the initial user view to the Viewpoint node named "OverView" when the Anchor node's geometry is activated. If the named Viewpoint node is not found in the file, the file is loaded using the default Viewpoint node binding rules (see Viewpoint).

Anchor { 
  url "http://www.school.edu/vrml/someScene.s3d#OverView"
  ...
}

If the url field is specified in the form "#ViewpointName" (i.e. no file name), the Viewpoint node with the given name ("ViewpointName") currently in the viewer will be bound by setting the isBound field of that Viewpoint to TRUE. If there are multiple Viewpoints with the same name in the scene, then the first Viewpoint with that name will be bound. If there are no Viewpoints with that name in the scene, the Viewpoint will not be changed.
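
For example, the following Anchor (written in the same style as the examples above, and assuming the current scene contains a Viewpoint named "OverView") binds that Viewpoint without loading a new file:

Anchor { 
  url "#OverView"
  ...
}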

The Anchor node is a subclass of the node TouchSensor.

4.2.2 Appearance

class Appearance { 
  NodeField material          = NULL; // Usage = MATERIAL_NODE
  NodeField texture           = NULL; // Usage = TEXTURE_NODE
}

The Appearance node specifies the visual properties of geometry. The value for each of the fields in this node may be NULL. However, if the field is non-NULL, it must contain one node of the appropriate type.

The material field, if specified, must contain a Material node. If the material field is NULL or unspecified, lighting is off (all lights are ignored during rendering of the object that references this Appearance) and the unlit object color is (1, 1, 1). Details of the Shout3D lighting model are in 2.6, Lighting model.

The texture field, if specified, must contain a node derived from the Texture class. If the texture node is NULL or the texture field is unspecified, the object that references this Appearance is not textured.

4.2.3 Background

class Background { 
  FloatArrayField    color        = 0 0 0; // Usage = COLOR
  BooleanField       isBound      = TRUE;
  NodeField          texture      = NULL;  // Usage = TEXTURE_NODE
  BooleanField       stretchToFit = FALSE;
}

The Background node specifies a backdrop behind the 3D scene. It is either a uniform field of color or a texture. If the texture field is NULL, then the background will be filled with the color given by the color field. If the texture field is not NULL, the background will be filled with the image given by the texture.

If stretchToFit is TRUE, then the texture will be scaled to fit exactly within the rendering window.

If stretchToFit is FALSE, then the texture will not be scaled and will be drawn centered in the window. The mapping will be pixel-to-pixel; the image will be cropped if the texture is larger than the window, and if the texture is smaller than the window a border whose color is that of the color field will surround it. If the texture has repeatS or repeatT TRUE, then the background texture will repeat and the border will not be seen.

Alpha values in the texture (i.e., two or four component images) specify that the texture is semi-transparent or transparent in regions, allowing the color to be visible.

The isBound field controls whether this Background is bound for the viewer containing it. For more information on how nodes are bound, see 2.5.8, Bindable Nodes.

The Background node is a subclass of the abstract node Bindable.

4.2.4 Billboard

class Billboard {
  NodeArrayField   children        = [];
  BooleanField     hidden          = FALSE;
  //INHERITED METHODS (from Group)
  float[] getMatrix();
  void    addChildren(Node[] nodesToAdd);
  void    removeChildren(Node[] nodesToRemove);
}

The Billboard node is a grouping node which modifies its coordinate system so that the Billboard node's local Z-axis turns to point at the near clipping plane. The Billboard node has children which may be other children nodes.

The Billboard node is a subclass of the node Group.

2.5.5, Grouping and children nodes, provides a description of the children field.

The hidden field behaves as in the parent class, Group.

The VRML97 special case of viewer-alignment, in which the object rotates to keep the billboard's local Y-axis parallel with the Y-axis of the viewer, is the only billboard alignment mode in Shout3D. The following steps describe how to rotate the billboard to face the viewer and align the billboard's Y-axis to the Y-axis of the viewer:

  1. Rotate the Z-axis of the billboard to be collinear with the viewer's Z-axis and pointing towards the near clipping plane.
  2. Rotate the Y-axis of the billboard to be parallel and oriented in the same direction as the Y-axis of the viewer.

Multiple instances of Billboard nodes (DEF/USE) operate as expected: each instance rotates in its unique coordinate system to face the near clipping plane.

4.2.5 BooleanEventToCurrentTime

class BooleanEventToCurrentTime {
  DoubleField     currentTime     = 0;
  BooleanField    booleanField    = false;
  BooleanField    trueFilter      = true;
  BooleanField    falseFilter     = false;
}

This node pushes out the current time when one of its boolean inputs changes appropriately.

When the booleanField field is set, the currentTime field is set to the current time; any ROUTEd fields from the currentTime field will then be set. You can either ROUTE directly to the booleanField field if you control the boolean value, or you can ROUTE to either of the filter fields. ROUTEing into the trueFilter will only set the currentTime field if the value is true; ROUTEing into the falseFilter will only set the currentTime field if the value is false.
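
As an illustrative sketch (the DEF names are placeholders, the syntax follows the VRML97-style examples in this document, and the TouchSensor is assumed to sit over some pickable geometry), a mouse press could start a TimeSensor like this:

DEF CLICK TouchSensor { ... }          # sensor over some geometry (elided)
DEF PUSH  BooleanEventToCurrentTime { }
DEF TIMER TimeSensor { cycleInterval 2 }

ROUTE CLICK.isActive   TO PUSH.trueFilter    # passes only when isActive becomes TRUE
ROUTE PUSH.currentTime TO TIMER.startTime    # start the animation at that moment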

4.2.6 BooleanEventToInteger

class BooleanEventToInteger {
  IntField        intField     = 0;
  BooleanField    booleanField    = false;
  BooleanField    trueFilter      = true;
  BooleanField    falseFilter     = false;
}

This node pushes out the value of intField when one of its boolean inputs changes appropriately. The value pushed from intField is the same value it contains at the time the boolean input changes. This is useful, for example, in setting the whichChoice field of a Switch node to a particular value when a boolean field is changed elsewhere in the scene.

When the booleanField field is set, the intField field is set to its current value; any ROUTEd fields from the intField field will then be set to this value. You can either ROUTE directly to the booleanField field if you control the boolean value, or you can ROUTE to either of the filter fields. ROUTEing into the trueFilter will only set the intField field if the value is true; ROUTEing into the falseFilter will only set the intField field if the value is false.
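
For example, the Switch usage mentioned above might be wired as follows; the DEF names are placeholders and the TouchSensor is assumed to cover some geometry:

DEF HOVER   TouchSensor { ... }        # sensor over some geometry (elided)
DEF PICK    BooleanEventToInteger { intField 1 }
DEF CHOOSER Switch {
  whichChoice 0
  choice [ Shape { ... } Shape { ... } ]
}

ROUTE HOVER.isOver  TO PICK.trueFilter       # only a true value passes this filter
ROUTE PICK.intField TO CHOOSER.whichChoice   # selects choice 1 when the pointer moves over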

4.2.7 Color

class Color {
  FloatArrayField    color    = [];   // Usage = COLOR_ARRAY
}

This node defines a set of RGB colours to be used in the fields of another node.

Color nodes are only used to specify multiple colours for a single geometric shape, such as colours for the faces or vertices of an IndexedFaceSet. A Material node is used to specify the overall material parameters of lit geometry. If both a Material node and a Color node are specified for a geometric shape, the colours shall replace the diffuse component of the material.

RGB or RGBA textures take precedence over colours; specifying both an RGB or RGBA texture and a Color node for a geometric shape will result in the Color node being ignored. Details on lighting equations can be found in 2.6, Lighting model.

4.2.8 ColorInterpolator

class ColorInterpolator {
  FloatField         fraction    = 0;  // Usage = NORMALIZED_FLOAT
  FloatArrayField    key         = []; // Usage = NORMALIZED_FLOAT_ARRAY
  FloatArrayField    keyValue    = []; // Usage = COLOR_ARRAY
  FloatArrayField    value       = []; // Usage = COORD3
}

The ColorInterpolator node linearly interpolates among a list of 3-component colors. The keyValue field shall contain exactly as many colors as there are keyframes in the key field.

The ColorInterpolator node is a subclass of the PositionInterpolator node, which is in turn a subclass of the abstract node Interpolator. See Concepts, Interpolators and TimeSensor for a discussion of interpolators.

4.2.9 Coordinate

class Coordinate {
  FloatArrayField    point    = [];   // Usage = COORD3_ARRAY
}

This node defines a set of 3D coordinates to be used in the coord field of vertex-based geometry nodes including IndexedFaceSet, IndexedLineSet, and PointSet.

4.2.10 CoordinateInterpolator

class CoordinateInterpolator {
  FloatField         fraction = 0;   // Usage = NORMALIZED_FLOAT
  FloatArrayField    key      = [];  // Usage = NORMALIZED_FLOAT_ARRAY
  FloatArrayField    keyValue = [];  // Usage = COORD3_ARRAY
  FloatArrayField    value    = [];  // Usage = COORD3_ARRAY
}

This node linearly interpolates among a list of coordinate values. The number of coordinates in the keyValue field shall be an integer multiple of the number of keyframes in the key field.

The CoordinateInterpolator node is a subclass of the abstract node Interpolator. See Concepts, Interpolators and TimeSensor for a discussion of interpolators.

4.2.11 DirectionalLight

class DirectionalLight {
  StringArrayField affectedGroups  = [];
  FloatArrayField  color           = 1 1 1;    // Usage = COLOR
  FloatArrayField  direction       = 0 0 -1;   // Usage = DIRECTION
  FloatField	   intensity       = 1;        // Usage = NORMALIZED_FLOAT
  BooleanField	   on              = TRUE;
}

The DirectionalLight node defines a directional light source that illuminates along rays parallel to a given 3-dimensional vector.

The affectedGroups field controls the scoping of the lights (see 2.5.6.1, Scoping of lights). By default, lights affect all nodes contained in the subgraph of the light's parent group.

The color field specifies the spectral color properties of the light emission as an RGB value.

The direction field specifies the direction vector of the illumination emanating from the light source in the local coordinate system. Light is emitted along parallel rays from an infinite distance away. A directional light source illuminates only the objects in its enclosing parent group. The light may illuminate everything within this coordinate system, including all children and descendants of its parent group. The accumulated transformations of the parent nodes affect the light.

The intensity field specifies the brightness of the emission from the light. Light intensity may range from 0.0 (no light emission) to 1.0 (full intensity).

The on field specifies whether the light is currently emitting light. If on is FALSE, then the light will not affect the rendering of any geometry.

DirectionalLight nodes do not attenuate with distance. DirectionalLight nodes illuminate only the objects descended from the light's parent grouping node, including any descendent children of the parent grouping nodes. A precise description of Shout3D's lighting equations is contained in 2.6, Lighting model.

The DirectionalLight node is a subclass of the abstract node Light.

4.2.12 DoubleEventToBoolean

class DoubleEventToBoolean {
  BooleanField    booleanTrueField    = true;
  BooleanField    booleanFalseField   = false;
  DoubleField     doubleField         = 0;
}

This node pushes out both of its boolean values whenever the input double value changes. This is useful, for example, in turning things on or off whenever a time value is received corresponding to a particular event, such as the touchTime of a TouchSensor.

When the doubleField field is set, the booleanTrueField is set to true and the booleanFalseField is set to false.
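
A sketch of the TouchSensor example mentioned above (placeholder DEF names, VRML97-style syntax):

DEF CLICK TouchSensor { ... }          # sensor over some geometry (elided)
DEF FLIP  DoubleEventToBoolean { }
DEF LAMP  DirectionalLight { on FALSE }

ROUTE CLICK.touchTime       TO FLIP.doubleField   # any click produces a new time value
ROUTE FLIP.booleanTrueField TO LAMP.on            # which pushes TRUE and switches the light on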

4.2.13 DoubleEventToInteger

class DoubleEventToInteger {
  IntField        intField     = -1;
  DoubleField     doubleField  = 0;
}

This node pushes out the value of intField when its double input changes. The value pushed from intField is the same value it contains at the time the double input is received. This is useful, for example, in setting the whichChoice field of a Switch node to a particular value when a time value is received from elsewhere in the scene.

When the doubleField field is set, the intField field is set to its current value; any ROUTEd fields from the intField field will then be set to this value.
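
A sketch of the Switch usage mentioned above (placeholder DEF names):

DEF CLICK  TouchSensor { ... }         # sensor over some geometry (elided)
DEF REVEAL DoubleEventToInteger { intField 2 }
DEF PANEL  Switch { whichChoice -1 choice [ ... ] }

ROUTE CLICK.touchTime TO REVEAL.doubleField   # clicking produces a time value
ROUTE REVEAL.intField TO PANEL.whichChoice    # which selects choice 2 in the Switch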

4.2.14 Group

class Group { 
  NodeArrayField      children         = [];
  BooleanField        hidden           = FALSE;
  
  //REQUIRED METHODS
  float[] getMatrix();
  void    addChildren(Node[] nodesToAdd);
  void    removeChildren(Node[] nodesToRemove);
} 

A Group node contains children nodes without introducing a new transformation. It is equivalent to a Transform node containing an identity transform. See API, Class Group for a discussion of the getMatrix method.

The addChildren and removeChildren methods add and remove nodes in the children field. More details on the children field and grouping can be found in 2.5.5, Grouping and children nodes.

The hidden field specifies whether the children of this node are to be rendered by the viewer, and so may be used to show and hide sections of the scene graph. The children are shown when hidden is FALSE, and hidden when the field's value is TRUE. The hidden field also controls whether the children of this node are to be picked when using the programming API (see Picker). It does not prevent the API from searching the children to find a node (see Searcher), nor does it prevent Interpolator nodes from calculating new output values.

The Group node is the parent class of Anchor, Switch, and Transform. Hence, these nodes are subclasses of Group.

4.2.15 ImageTexture

class ImageTexture { 
  BooleanField        hasAlphaTexture = FALSE;
  StringArrayField    url     = [];	// Usage = URL_ARRAY
  BooleanField        repeatS = TRUE;
  BooleanField        repeatT = TRUE;
} 

The ImageTexture node defines a texture map by specifying an image file and general parameters for mapping to geometry. Texture maps are defined in a 2D coordinate system (s, t) that ranges from [0.0, 1.0] in both directions. The bottom edge of the image corresponds to the S-axis of the texture map, and left edge of the image corresponds to the T-axis of the texture map. The lower-left pixel of the image corresponds to s=0, t=0, and the top-right pixel of the image corresponds to s=1, t=1. These relationships are depicted in Figure 4.1.

Figure 4.1 -- Texture map coordinate system

The texture is read from the URL specified by the url field. When the url field is NULL or empty, texturing is disabled. Viewers must support the JPEG and GIF image file formats. In addition, viewers may support other image formats (e.g. PNG and CGM) which can be rendered into a 2D image.

See 2.5.9, Texture maps, for a general description of texture maps.

See 2.6, Lighting model, for a description of lighting equations and the interaction between textures, materials, light, and geometry.

The hasAlphaTexture field specifies whether a separate GIF file containing 8-bit alpha values for the texture has been provided. By default, hasAlphaTexture is FALSE. If hasAlphaTexture is TRUE, then the applet will look for a separate GIF file containing the alpha channel information to be used in combination with the texture. The file must be the same size as the image specified in the url field, and it must be an 8-bit luminance texture file (also called a "grayscale" texture). The file must be located in the same directory as that specified in the url field, and must be named by the following convention: if the name of the url texture is name.suffix, then the luminance file must be named name_alpha.gif. For example, if the url field is "leaf.gif" or "leaf.jpg" then the applet will search for a grayscale texture named leaf_alpha.gif.

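Continuing that example, an Appearance using such a texture pair might look like this (assuming leaf.jpg and leaf_alpha.gif sit in the same directory):

Appearance {
  texture ImageTexture {
    url "leaf.jpg"          # color image
    hasAlphaTexture TRUE    # the viewer also loads "leaf_alpha.gif" as the alpha channel
  }
}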

The repeatS and repeatT fields specify how the texture wraps in the S and T directions. If repeatS is TRUE (the default), the texture map is repeated outside the [0.0, 1.0] texture coordinate range in the S direction so that it fills the shape. If repeatS is FALSE, the texture coordinates are clamped in the S direction to lie within the [0.0, 1.0] range. The repeatT field is analogous to the repeatS field.

If the texture map is repeated in a given direction (S-axis or T-axis), a texture coordinate C (s or t) is mapped into a texture map that has N pixels in the given direction as follows:

    Texture map location = (C - floor(C)) × N

If the texture map is not repeated, the texture coordinates are clamped to the 0.0 to 1.0 range as follows:

    Texture map location = N,     if C > 1.0,
                         = 0.0,   if C < 0.0,
                         = C × N, if 0.0 <= C <= 1.0.

The ImageTexture node is a subclass of the abstract node Texture.

4.2.16 IndexedFaceSet

class IndexedFaceSet { 
  BooleanField     ccw               = TRUE;
  NodeField        color             = NULL; // Usage = COLOR_NODE
  IntArrayField    colorIndex        = [];   // Usage = INDEX_ARRAY
  BooleanField     colorPerVertex    = TRUE;
  NodeField        coord             = NULL; // Usage = COORDINATE_NODE
  IntArrayField    coordIndex        = [];   // Usage = INDEX_ARRAY
  FloatField       creaseAngle       = 0;    // Usage = NON_NEGATIVE_FLOAT
  BooleanField     solid             = TRUE;
  NodeField        texCoord          = NULL; // Usage = TEXTURE_COORDINATE_NODE
  IntArrayField    texCoordIndex     = [];   // Usage = INDEX_ARRAY
}

The IndexedFaceSet node represents a 3D shape formed by constructing faces (polygons) from vertices listed in the coord field, using a connectivity specified by the coordIndex field. The coord field contains a Coordinate node that defines the 3D vertices referenced by the coordIndex field; if the coord field is NULL, then the node contains no faces.

IndexedFaceSet uses the indices in its coordIndex field to specify the polygonal faces by indexing into the coordinates in the Coordinate node. The indexing is by vertex number; so given a Coordinate node with values of [0,1,2,3,4,5,6,7,8], an index of 0 denotes the vertex (0,1,2), index 1 denotes the vertex (3,4,5), and index 2 denotes the vertex (6,7,8). An index of "-1" indicates that the current face has ended and the next one begins. The last face may be (but does not have to be) followed by a "-1" index. If the greatest index in the coordIndex field is N, the Coordinate node shall contain N+1 coordinates (indexed as 0 to N). Since each coordinate is comprised of three floats, the Coordinate node must contain 3*(N+1) float values. Each face of the IndexedFaceSet must have:

  1. at least three non-coincident vertices;
  2. vertices that define a planar polygon;
  3. vertices that define a non-self-intersecting polygon.

Otherwise, the results are undefined.
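
As a minimal sketch of these indexing rules, the following Shape builds a single square face from four vertices (the material values are arbitrary):

Shape {
  appearance Appearance {
    material Material { diffuseColor 0.8 0.2 0.2 }
  }
  geometry IndexedFaceSet {
    coord Coordinate {
      point [ 0 0 0,  1 0 0,  1 1 0,  0 1 0 ]   # four vertices = 12 floats
    }
    coordIndex [ 0, 1, 2, 3, -1 ]               # one convex, planar face; greatest index is 3
    solid FALSE                                 # render both sides of the square
  }
}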

The IndexedFaceSet node is specified in the local coordinate system and is affected by the transformations of its ancestors.

Descriptions of the coord and texCoord fields are provided in the Coordinate and TextureCoordinate nodes, respectively.

Details on lighting equations and the interaction between textures, materials, lights, and geometries are provided in 2.6, Lighting model.

If the color field is not NULL, it shall contain a Color node whose colours are applied to the vertices or faces of the IndexedFaceSet as follows:

  1. If colorPerVertex is FALSE, colors are applied to each face, as follows:
    1. If the colorIndex field is not empty, then one color is used for each face of the IndexedFaceSet. There shall be at least as many indices in the colorIndex field as there are faces in the IndexedFaceSet. If the greatest index in the colorIndex field is N, then there shall be N+1 colours in the Color node. The colorIndex field shall not contain any negative entries.
    2. If the colorIndex field is empty, then the colours in the Color node are applied to each face of the IndexedFaceSet in order. There shall be at least as many colours in the Color node as there are faces.
  2. If colorPerVertex is TRUE, colours are applied to each vertex, as follows:
    1. If the colorIndex field is not empty, then colours are applied to each vertex of the IndexedFaceSet in exactly the same manner that the coordIndex field is used to choose coordinates for each vertex from the Coordinate node. The colorIndex field shall contain at least as many indices as the coordIndex field, and shall contain end-of-face markers (-1) in exactly the same places as the coordIndex field. If the greatest index in the colorIndex field is N, then there shall be N+1 colours in the Color node.
    2. If the colorIndex field is empty, then the coordIndex field is used to choose colours from the Color node. If the greatest index in the coordIndex field is N, then there shall be N+1 colours in the Color node.

If the color field is NULL, the geometry shall be rendered normally using the Material and texture defined in the Appearance node (see 2.6, Lighting model, for details).

The viewer must automatically generate normals, using creaseAngle to determine if and how normals are smoothed across shared vertices. If the angle between the geometric normals of two adjacent faces is less than the crease angle, normals shall be calculated so that the faces are smooth-shaded across the edge; otherwise, normals shall be calculated so that a lighting discontinuity across the edge is produced. For example, a crease angle of 0.5 radians means that an edge between two adjacent polygonal faces will be smooth shaded if the geometric normals of the two faces form an angle that is less than 0.5 radians. Otherwise, the faces will appear faceted. Crease angles shall be greater than or equal to 0.0.

If the texCoord field is not NULL, it shall contain a TextureCoordinate node. The texture coordinates in that node are applied to the vertices of the IndexedFaceSet as follows:

  1. If the texCoordIndex field is not NULL and not empty, then it is used to choose texture coordinates for each vertex of the IndexedFaceSet in exactly the same manner that the coordIndex field is used to choose coordinates for each vertex from the Coordinate node. Non-negative indices in the texCoordIndex field specify texture coordinates by vertex number. Index N refers to the 2D texture coordinate given by the pair of floats in positions 2*N and 2*N+1 of the TextureCoordinate node's array of values. The texCoordIndex field shall contain at least as many indices as the coordIndex field, and shall contain end-of-face markers (-1) in exactly the same places as the coordIndex field. If the greatest index in the texCoordIndex field is N, then there shall be N+1 texture coordinates (and therefore 2*(N+1) float values) in the TextureCoordinate node.
  2. If the texCoordIndex field is NULL or empty, then the coordIndex array is used to choose texture coordinates from the TextureCoordinate node. If the greatest index in the coordIndex field is N, then there shall be N+1 texture coordinates (and therefore 2*(N+1) float values) in the TextureCoordinate node.

If the texCoord field is NULL, no texture will be applied, even if there is a texture specified in the associated Appearance node.

The boolean fields ccw and solid provide hints about the geometry of the IndexedFaceSet.

The ccw field defines the ordering of the vertex coordinates of the geometry with respect to the automatically generated normal vectors used in the lighting model equations. If ccw is TRUE, the normals will follow the right hand rule; the orientation of each normal with respect to the vertices (taken in order) will be such that the vertices appear to be oriented in a counterclockwise order when the vertices are viewed (in the local coordinate system of the Shape) from the opposite direction as the normal. If ccw is FALSE, the normals shall be oriented in the opposite direction.

The solid field determines whether one or both sides of each polygon shall be displayed. If solid is FALSE, each polygon shall be visible regardless of the viewing direction (i.e., no backface culling shall be done, and two-sided lighting shall be performed to illuminate both sides of lit surfaces). If solid is TRUE, the visibility of each polygon shall be determined as follows: Let V be the position of the viewer in the local coordinate system of the geometry. Let N be the geometric normal vector of the polygon, and let P be any point (besides the local origin) in the plane defined by the polygon's vertices. Then if (V dot N) - (N dot P) is greater than zero, the polygon shall be visible; if it is less than or equal to zero, the polygon shall be invisible (backface culled).

IndexedFaceSet assumes that all polygons will be convex. If the polygons are concave, non-planar, or self-intersecting, this is an error and the results of rendering (and picking) will be undefined.

The IndexedFaceSet node is a subclass of the abstract node Geometry.

4.2.17 IndexedLineSet

class IndexedLineSet { 
  NodeField        color             = NULL;   // Usage = COLOR_NODE
  IntArrayField    colorIndex        = [];     // Usage = INDEX_ARRAY
  BooleanField     colorPerVertex    = TRUE;       
  NodeField        coord             = NULL;   // Usage = COORDINATE_NODE
  IntArrayField	   coordIndex        = [];     // Usage = LINE_INDEX_ARRAY
  FloatField	   lineWidth	     = 1;      // Usage = NON_NEGATIVE_FLOAT
}

The IndexedLineSet node represents a 3D geometry formed by constructing polylines from 3D vertices specified in the coord field, using a connectivity specified by the coordIndex field. IndexedLineSet is specified in the local coordinate system and is affected by the transformations of its ancestors.

The coord field specifies the 3D vertices of the line set and contains a Coordinate node. If the coord field is NULL, then the node contains no polylines.

IndexedLineSet uses the indices in its coordIndex field to specify the polylines by indexing into the Coordinate node. The indexing is by vertex number; so given a Coordinate node with values of [0,1,2,3,4,5,6,7,8], an index of 0 denotes the vertex (0,1,2), index 1 denotes the vertex (3,4,5), and index 2 denotes the vertex (6,7,8). If the greatest index in the coordIndex field is N, the Coordinate node must contain N+1 coordinates (indexed as 0 to N). Since each coordinate is comprised of three floats, the Coordinate node must contain 3*(N+1) float values.

A coordIndex value of "-1" or "-2" indicates that the current polyline has ended and the next one begins. An index of "-2" will draw an extra segment between the first and last vertices of the current polyline. An index of "-1" will not draw this extra segment. The last polyline in the IndexedLineSet may be (but does not have to be) followed by a "-1" or "-2". If no final index is specified, a value of "-1" will be used.
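
As a minimal sketch of this convention, the following Shape draws a closed triangle outline from three vertices; replacing the final "-2" with "-1" would leave the polyline open:

Shape {
  geometry IndexedLineSet {
    coord Coordinate {
      point [ 0 0 0,  1 0 0,  0.5 1 0 ]
    }
    # -2 adds the extra segment from vertex 2 back to vertex 0
    coordIndex [ 0, 1, 2, -2 ]
    lineWidth 2
  }
}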

Lines are not lit and not texture mapped. Lines do not respond to picking.

If the color field is not NULL, it shall contain a Color node. The colors are applied to the vertices or polylines of the IndexedLineSet as follows:

  1. If colorPerVertex is FALSE:
    1. If the colorIndex field is not empty, then one color is used for each polyline of the IndexedLineSet. There shall be at least as many indices in the colorIndex field as there are polylines in the IndexedLineSet. If the greatest index in the colorIndex field is N, then there shall be N+1 colours in the Color node. The colorIndex field shall not contain any negative entries.
    2. If the colorIndex field is empty, then the colours in the Color node are applied to each polyline of the IndexedLineSet in order. There shall be at least as many colours in the Color node as there are polylines.
  2. If colorPerVertex is TRUE:
    1. If the colorIndex field is not empty, then colours are applied to each vertex of the IndexedLineSet in exactly the same manner that the coordIndex field is used to choose coordinates for each vertex from the Coordinate node. The colorIndex field shall contain at least as many indices as the coordIndex field, and shall contain end-of-polyline markers (-1) in exactly the same places as the coordIndex field. If the greatest index in the colorIndex field is N, then there shall be N+1 colours in the Color node.
    2. If the colorIndex field is empty, then the coordIndex field is used to choose colours from the Color node. If the greatest index in the coordIndex field is N, then there shall be N+1 colours in the Color node.

If the color field is NULL, the geometry shall be rendered normally using the emissiveColor of the Material defined in the Appearance node (see 2.6, Lighting model, for details). If no such material exists, or if the emissiveColor field is NULL, then the color of the lines will be white (1,1,1).

The lineWidth field contains a single float value that specifies the width of all lines in the IndexedLineSet. The width is given in pixels and is rendered in accordance with the OpenGL specification ([OPEN]). Each line segment is solid (i.e., not dashed).

The IndexedLineSet node is a subclass of the abstract node Geometry.

4.2.18 Material

class Material { 
  FloatArrayField diffuseColor      = 0.8 0.8 0.8; // Usage = COLOR
  FloatArrayField emissiveColor     = 0 0 0;       // Usage = COLOR
  FloatField	  transparency      = 0;           // Usage = NORMALIZED_FLOAT
} 

The Material node specifies surface material properties for associated geometry nodes and is used by the Shout3D lighting equations during rendering. Section 2.6, Lighting Model, contains a detailed description of the Shout3D lighting model equations.

All of the values in all fields of the Material node range from 0.0 to 1.0.

The fields in the Material node determine how light reflects off an object to create color:

  1. The diffuseColor field reflects all Shout3D light sources depending on the angle of the surface with respect to the light source. The more directly the surface faces the light, the more diffuse light reflects.
  2. The emissiveColor field models "glowing" objects. This can be useful for displaying pre-lit models (where the light energy of the room is computed explicitly), or for displaying scientific data. This field also determines the color of untextured IndexedLineSets, which are never affected by lights.
  3. The transparency field specifies how "clear" an object is, with 1.0 being completely transparent, and 0.0 completely opaque.

4.2.19 NavigationInfo

class NavigationInfo { 
  BooleanField     headlight        = TRUE;
  BooleanField     isBound          = TRUE;
  FloatField       visibilityLimit  = 0;    // Usage = NON_NEGATIVE_FLOAT
}

The NavigationInfo node contains information describing how the scene will be rendered.

The headlight field specifies whether the scene is to be rendered with a headlight. A headlight is a directional light that always points in the direction the user is looking. Scenes that use precomputed lighting (e.g., pre-lit textures or radiosity solutions) can turn the headlight off. The headlight must have intensity = 1, color = (1 1 1), and direction = (0 0 -1). Headlights affect all nodes in the scene; their scope may not be limited to a subset of the scene.

The isBound field controls whether this NavigationInfo node is bound for the viewer containing it. For more information on how nodes are bound, see 2.5.8, Bindable Nodes.

Geometry beyond the visibilityLimit may not be rendered. A value of 0.0 indicates an infinite visibility limit. The visibilityLimit field is restricted to be greater than or equal to zero.

The NavigationInfo node is a subclass of the abstract node Bindable.

4.2.20 OrientationInterpolator

class OrientationInterpolator {
  FloatField         fraction = 0;  // Usage = NORMALIZED_FLOAT
  FloatArrayField    key      = []; // Usage = NORMALIZED_FLOAT_ARRAY
  FloatArrayField    keyValue = []; // Usage = ROTATION_ARRAY
  FloatArrayField    value    = []; // Usage = ROTATION
}

The OrientationInterpolator node interpolates among a list of rotation values specified in the keyValue field. These rotations are absolute in object space and therefore are not cumulative. The keyValue field shall contain exactly as many rotations as there are keyframes in the key field.

An orientation represents the final position of an object after a rotation has been applied. An OrientationInterpolator interpolates between two orientations by computing the shortest path on the unit sphere between the two orientations. The interpolation is linear in arc length along this path. The results are undefined if the two orientations are diagonally opposite.

If two consecutive keyValue values exist such that the arc length between them is greater than π, the interpolation will take place on the arc complement. For example, the interpolation between the orientations (0, 1, 0, 0) and (0, 1, 0, 5.0) is equivalent to the rotation between the orientations (0, 1, 0, 2π) and (0, 1, 0, 5.0).

The OrientationInterpolator node is a subclass of the abstract node Interpolator. See Concepts, Interpolators and TimeSensor for a discussion of interpolators.

4.2.21 PointSet

class PointSet { 
  NodeField  color      = NULL;  // Usage = COLOR_NODE
  NodeField  coord      = NULL;  // Usage = COORDINATE_NODE
  FloatField pointSize  = 1;     // Usage = NON_NEGATIVE_FLOAT
}

The PointSet node specifies a set of 3D points, in the local coordinate system. The coord field specifies a Coordinate node (or instance of a Coordinate node). The results are undefined if the coord field specifies any other type of node. PointSet uses the coordinates in order. If the coord field is NULL, then the node contains no points.

PointSet nodes are not lit and may not be texture mapped. Points do not respond to picking.

If the color field is not NULL, it shall specify a Color node that contains at least the number of points contained in the Coordinate node. The results are undefined if the color field specifies any other type of node. Colors will be applied to each point in order. The results are undefined if the number of values in the Color node is less than the number of values specified in the Coordinate node.

If the color field is NULL, and there is a Material node defined for the Appearance node affecting this PointSet node, the emissiveColor of the Material node will be used to draw the points. If no such material exists, or if the emissiveColor field is NULL, then the color of the points will be white (1,1,1). Details on lighting equations as they affect PointSet nodes are described in 2.6, Lighting Model.

The pointSize field contains a single float value that specifies the size of all points in the PointSet. The size is given in pixels and is rendered in accordance with the OpenGL specification ([OPEN]).

The PointSet node is a subclass of the abstract node Geometry.

4.2.22 PositionInterpolator

class PositionInterpolator {
  FloatField         fraction    = 0;  // Usage = NORMALIZED_FLOAT
  FloatArrayField    key         = []; // Usage = NORMALIZED_FLOAT_ARRAY
  FloatArrayField    keyValue    = []; // Usage = COORD3_ARRAY
  FloatArrayField    value       = []; // Usage = COORD3
}

The PositionInterpolator node linearly interpolates among a list of 3D vectors. The keyValue field shall contain exactly as many positions as there are values in the key field.

The PositionInterpolator node is a subclass of the abstract node Interpolator. See Concepts, Interpolators and TimeSensor for a discussion of interpolators.

4.2.23 ScalarInterpolator

class ScalarInterpolator {
  FloatField         fraction    = 0;  // Usage = NORMALIZED_FLOAT
  FloatArrayField    key         = []; // Usage = NORMALIZED_FLOAT_ARRAY
  FloatArrayField    keyValue    = [];
  FloatField         value       = 0;
}

This node linearly interpolates among a list of float values. This interpolator is appropriate for any parameter defined using a single floating point value. Examples include width, radius, and intensity fields. The keyValue field shall contain exactly as many numbers as there are keyframes in the key field.
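
A sketch of a typical use (placeholder DEF names): a TimeSensor drives the interpolator, whose single float output fades a Material's transparency up and down:

DEF TIMER TimeSensor { cycleInterval 3 loop TRUE }
DEF FADER ScalarInterpolator {
  key      [ 0, 0.5, 1 ]
  keyValue [ 0, 1,   0 ]
}
DEF SKIN  Material { diffuseColor 0.8 0.8 0.8 }

ROUTE TIMER.fraction TO FADER.fraction
ROUTE FADER.value    TO SKIN.transparency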

The ScalarInterpolator node is a subclass of the abstract node Interpolator. See Concepts, Interpolators and TimeSensor for a discussion of interpolators.

4.2.24 Shape

class Shape { 
  NodeField appearance = NULL;	// Usage = APPEARANCE_NODE
  NodeField geometry   = NULL;	// Usage = GEOMETRY_NODE
}

The Shape node has two fields, appearance and geometry, which are used to create rendered objects in the world. The appearance field contains an Appearance node that specifies the visual attributes (e.g., material and texture) to be applied to the geometry. The geometry field contains a Geometry node. The specified geometry node is rendered with the specified appearance nodes applied. See 2.5.4, Shapes, Geometry and Appearance, and Appearance, for more information.

2.6, Lighting model, contains details of the Shout3D lighting model and the interaction between Appearance nodes and geometry nodes.

If the geometry field is NULL, the object is not drawn.

4.2.25 Switch

class Switch { 
  NodeArrayField      children         = [];
  NodeArrayField      choice           = [];
  BooleanField        hidden           = FALSE;
  IntField            whichChoice      = -1;
  //INHERITED METHODS (from Group)
  float[] getMatrix();
  void    addChildren(Node[] nodesToAdd);       // Disabled in this subclass
  void    removeChildren(Node[] nodesToRemove); // Disabled in this subclass
} 

The Switch node is a subclass of the node Group.  The Switch node traverses zero or one of the nodes specified in the choice field.

The whichChoice field specifies the index of the child to traverse, with the first child having index 0. If whichChoice is less than zero or greater than or equal to the number of nodes in the choice field, nothing is chosen.

2.5.5, Grouping and children nodes, provides a description of the children field. In the Switch node, fieldSensors are placed on the children, choice, and whichChoice fields to ensure that the children field always contains either zero children (in the case where whichChoice is negative or out of range) or one child (in the case where whichChoice specifies a valid selection from among the choice field). Hence, the Switch node is a group that only renders the descendant specified by a combination of the choice and whichChoice fields.
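
For illustration, a Switch holding two alternatives in its choice field; setting whichChoice to -1 would display neither:

DEF MODES Switch {
  whichChoice 0
  choice [
    Shape { ... }   # choice 0, currently displayed
    Shape { ... }   # choice 1
  ]
}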

The hidden field behaves as in the parent class, Group.

4.2.26 TextureCoordinate

class TextureCoordinate {
  FloatArrayField    point    = [];   // Usage = TEX_COORD_ARRAY
}

The TextureCoordinate node specifies a set of 2D texture coordinates used by vertex-based geometry nodes (e.g., IndexedFaceSet) to map textures to vertices.

A description of the point field is provided in 3.5.12, TEX_COORD_ARRAY.

4.2.27 TimeSensor

class TimeSensor{ 
  DoubleField   cycleInterval  = 1;    // Usage = NON_NEGATIVE_DOUBLE
  DoubleField   cycleTime      = 0;    // Usage = NON_NEGATIVE_DOUBLE
  DoubleField   elapsedSeconds = 0;    // Usage = NON_NEGATIVE_DOUBLE
  DoubleField   fraction       = 0;    // Usage = NON_NEGATIVE_DOUBLE
  BooleanField  isActive       = FALSE;
  BooleanField  loop           = FALSE;
  IntField      numLoops       = 1; 
  DoubleField   startTime      = 0;
  DoubleField   stopTime       = 0;
  DoubleField   time           = 0;    // Usage = NON_NEGATIVE_DOUBLE

  // REQUIRED METHODS
  void 		start();
  void 		stop();
  void 		setPaused(boolean isPaused);
}

The TimeSensor node provides a way to perform time-based actions in a scene. The TimeSensor node must get its most recent value from the Clock each time the TimeSensor is rendered. It then calculates and sets output values in the cycleTime, elapsedSeconds, fraction, isActive, and time fields accordingly. The cycleInterval, loop, numLoops, startTime, and stopTime fields determine how the TimeSensor will behave in producing those output values. The start(), stop(), and setPaused() methods provide additional API for controlling how the output values are calculated.

A TimeSensor is said to be active when it is generating output values, inactive when it is not.

Fields that control the TimeSensor's behavior:

Summary: A TimeSensor will become active (begin calculating output values) either the first time it is rendered (if startTime is less than the absolute time of the first render), or at a time given by the startTime field (if that time comes after the first render). It will then remain active for the duration of a number of loops (specified by the loop and numLoops fields) each of which lasts for an amount of time given by the cycleInterval field. If the absolute time passes the value contained in the stopTime field, then the TimeSensor will become inactive (stop generating values), even if the total time given by numLoops and cycleInterval has not yet elapsed since it began generating output values.

The cycleInterval field indicates the number of seconds in this TimeSensor's cycle.

The loop and numLoops fields indicate how many times this TimeSensor should loop. If loop is true, then the TimeSensor will loop forever regardless of the value of the numLoops field. Otherwise, a numLoops value of 0 indicates that this TimeSensor is not enabled (setting the startTime field, or calling the start() method, will have no effect). A numLoops value of -1 indicates that this TimeSensor should loop forever.

The startTime field specifies an absolute time at which the TimeSensor should become active. If the absolute time of the first render is greater than the startTime, then the TimeSensor will become active upon the first render. In this case, where value-generation begins at the first render, the TimeSensor's output values will all be based on cycles that begin at the absolute time of the first render, not at the absolute time given by startTime. In the other case, where startTime is later than the first render, output values will be generated based on cycles that begin at the absolute time contained in startTime.

The stopTime field specifies when the TimeSensor should become inactive. If the stopTime is less than or equal to the startTime, then the TimeSensor will run until the amount of time given by numLoops*cycleInterval has been exhausted, or forever if the value of numLoops is -1. If stopTime is greater than the startTime, then the TimeSensor will become inactive at that time, even if the loops have not completed.

If the TimeSensor is already active when a new value is set on the cycleInterval, numLoops, or startTime field, this new value will be ignored until the TimeSensor becomes inactive. New values for stopTime will affect the TimeSensor, even if it is currently active.

Fields that contain output values generated by the TimeSensor:

Summary: Each time it renders, a TimeSensor may calculate new output values for five of its fields. The isActive field is set when the TimeSensor starts or stops generating values. The cycleTime is set each time a new loop begins. The elapsedSeconds, fraction, and time fields are set to indicate the time that has passed in the current cycle, the fraction of the current cycle that has passed, and the total time that the TimeSensor has been active, respectively.

The isActive field is set to TRUE whenever a TimeSensor becomes active. This will be upon first render (if startTime is less than the time of first render), or upon the first render that occurs at or following the value of startTime. The isActive field is set to FALSE upon the first render occurring at or after the time when a TimeSensor becomes inactive.

The cycleTime field is set to the absolute time once when a TimeSensor becomes active, and again each time that a new cycle begins, until the TimeSensor becomes inactive.

The elapsedSeconds, fraction, and time fields are set once when the TimeSensor becomes active, and again each time it renders until it becomes inactive. Each of these three fields is set once again during the first render at or after the moment when the TimeSensor becomes inactive.

The elapsedSeconds field is set to the amount of time that has elapsed within the current cycle, and so will lie in the range 0<=value<=cycleInterval.

The fraction field is set to be the fraction of the current cycle that has passed, and so will lie in the range 0<=value<=1.

The time field is set to be the absolute time at the moment of the render.

See API, TimeSensors for a discussion of the TimeSensor API's start(), stop(), and setPaused() methods.
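
As a minimal sketch of the most common pattern (placeholder DEF names), a looping TimeSensor drives a PositionInterpolator, which in turn moves a Transform:

DEF TIMER TimeSensor {
  cycleInterval 4       # each cycle lasts four seconds
  loop TRUE             # loop forever
}
DEF MOVER PositionInterpolator {
  key      [ 0, 0.5, 1 ]
  keyValue [ 0 0 0,  0 2 0,  0 0 0 ]
}
DEF BOX Transform { children [ ... ] }

ROUTE TIMER.fraction TO MOVER.fraction
ROUTE MOVER.value    TO BOX.translation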

4.2.28 TimeSensorPauser

class TimeSensorPauser {
  NodeField       timeSensor   = null;
  BooleanField    reset        = false;
  BooleanField    setPaused    = false;
}

This node allows a TimeSensor node to be paused, unpaused, and reset by routing to it from BooleanFields elsewhere in the scene.

When the setPaused field is set, the timeSensor field is checked to see if it contains a non-null Node of type TimeSensor. If so, the method setPaused(boolean pause) is called on the TimeSensor, passing the new value of setPaused.

When the reset field is set to any value, the timeSensor field is checked to see if it contains a non-null Node of type TimeSensor. If so, the TimeSensor will be unpaused (if necessary), stopped (if necessary) and then started from the beginning of its first cycle by setting the startTime equal to the current time.
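
A sketch in which clicking a TouchSensor toggles a looping TimeSensor between paused and unpaused (placeholder DEF names):

DEF TIMER  TimeSensor { cycleInterval 4 loop TRUE }
DEF PAUSER TimeSensorPauser { timeSensor USE TIMER }
DEF HOLD   Toggle { }
DEF CLICK  TouchSensor { ... }         # sensor over some geometry (elided)

ROUTE CLICK.touchTime  TO HOLD.toggleTime    # each click flips the toggle
ROUTE HOLD.toggleValue TO PAUSER.setPaused   # TRUE pauses, FALSE unpauses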

4.2.29 Toggle

class Toggle {
  DoubleField     toggleTime   = 0;
  DoubleField     trueTime     = 0;
  DoubleField     falseTime    = 0;
  BooleanField    toggleValue  = false;
}

This node allows a boolean field to be toggled back and forth from true to false when double values are received as input. This is useful, for example, in toggling lights on and off in response to clicking on a TouchSensor. To do this, the touchTime would be ROUTEd to this node's toggleTime and the toggleValue would be ROUTEd to the Light's on field.

When the toggleTime field is set, the toggleValue field is set to !toggleValue. If, after being set, toggleValue is true, trueTime is set to the current time. Otherwise, if toggleValue is false after being set, the falseTime field is set to the current time.
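
A sketch of the light example described above (placeholder DEF names):

DEF CLICK TouchSensor { ... }          # sensor over some geometry (elided)
DEF FLIP  Toggle { }
DEF LAMP  DirectionalLight { on TRUE }

ROUTE CLICK.touchTime  TO FLIP.toggleTime   # each click toggles toggleValue
ROUTE FLIP.toggleValue TO LAMP.on           # turning the light off and on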

4.2.30 TouchSensor

class TouchSensor { 
  NodeArrayField      children    = [];
  BooleanField        hidden      = FALSE;
  BooleanField	      enabled     = TRUE;
  FloatArrayField     hitPoint    = NULL;  // Usage = COORD3 
  FloatArrayField     hitNormal   = NULL;  // Usage = COORD3
  BooleanField        isOver      = FALSE;
  BooleanField        isActive    = FALSE;
  DoubleField         touchTime   = 0;     // Usage = NON_NEGATIVE_DOUBLE
  //INHERITED METHODS (from Group)
  float[] getMatrix();
  void    addChildren(Node[] nodesToAdd);
  void    removeChildren(Node[] nodesToRemove);
} 

A TouchSensor node tracks the location and state of the pointing device and detects when the user points toward any geometry nodes that are descendants of either the TouchSensor node or the TouchSensor node's parent group, as follows:

  1. A TouchSensor that has no children will activate if the selected geometry is any descendent of the TouchSensor's parent node.
  2. A TouchSensor that has children will activate only if the selected geometry is a descendent of the TouchSensor itself.
  3. If more than one TouchSensor fits the above criteria, then the TouchSensor that is lowest in the hierarchy will activate.
  4. If two or more sibling TouchSensors fit the above criteria, then the TouchSensor that was created first will activate. (When reading a scene from a file, TouchSensors are created in the order in which they are listed in the file.)

See 2.5.7.2 Activating sensors, for more details on using the pointing device to activate the TouchSensor.

The TouchSensor node is a subclass of the node Group.

2.5.5, Grouping and children nodes, provides a description of the children field.

The hidden field behaves as in the parent class, Group.

A TouchSensor node can be enabled or disabled by setting the enabled field to TRUE or FALSE. If the TouchSensor node is disabled, it does not track user input or send events.

The isOver field reflects the state of the pointing device with regard to whether it is pointing towards the TouchSensor node's geometry or not. When the pointing device changes state from a position such that its bearing does not intersect any of the TouchSensor node's geometry to one in which it does intersect geometry, an isOver TRUE event is generated. When the pointing device moves from a position such that its bearing intersects geometry to one in which it no longer intersects the geometry, or some other geometry is obstructing the TouchSensor node's geometry, an isOver FALSE event is generated. These events are generated only when the pointing device has moved and changed `over' state. Events are not generated if the geometry itself is animating and moving underneath the pointing device.

As the user moves the bearing over the TouchSensor node's geometry, the point of intersection (if any) between the bearing and the geometry is determined. Each movement of the pointing device, while isOver is TRUE, generates hitPoint and hitNormal events. hitPoint events contain the 3D point on the surface of the underlying geometry, given in the TouchSensor node's coordinate system. hitNormal events contain the surface normal vector at the hitPoint. The values of hitNormal events are computed as appropriate for the associated shape.

If isOver is TRUE, the user may activate the pointing device to cause the TouchSensor node to generate isActive events (e.g., by pressing the primary mouse button). When the TouchSensor node generates an isActive TRUE event, it grabs all further motion events from the pointing device until it is released and generates an isActive FALSE event (other pointing-device sensors will not generate events during this time). Motion of the pointing device while isActive is TRUE is termed a "drag." If a 2D pointing device is in use, isActive events reflect the state of the primary button associated with the device (i.e., isActive is TRUE when the primary button is pressed and FALSE when it is released). If a 3D pointing device is in use, isActive events will typically reflect whether the pointing device is within (or in contact with) the TouchSensor node's geometry.

The touchTime event is generated when all three of the following conditions are true:

  1. The pointing device was pointing towards the geometry when it was initially activated (isActive is TRUE).
  2. The pointing device is currently pointing towards the geometry (isOver is TRUE).
  3. The pointing device is deactivated (isActive FALSE event is also generated).

More information about this behaviour is described in 2.5.7.2 Activating sensors.

4.2.31 Transform

class Transform { 
  FloatArrayField     center           = 0 0 0;    // Usage = COORD3
  NodeArrayField      children         = [];
  BooleanField        hidden           = FALSE;
  FloatArrayField     rotation         = 0 0 1 0;  // Usage = ROTATION
  FloatArrayField     scale            = 1 1 1;    // Usage = SCALE3
  FloatArrayField     scaleOrientation = 0 0 1 0;  // Usage = ROTATION
  FloatArrayField     translation      = 0 0 0;    // Usage = COORD3
  //INHERITED METHODS (from Group)
  float[] getMatrix();
  void    addChildren(Node[] nodesToAdd);
  void    removeChildren(Node[] nodesToRemove);
} 

The Transform node is a grouping node that defines a coordinate system for its children that is relative to the coordinate systems of its ancestors. See 2.3.4, Transformation hierarchy, and 2.3.5, Standard units and coordinate system, for a description of coordinate systems and transformations.

The Transform node is a subclass of the node Group.

2.5.5, Grouping and children nodes, provides a description of the children field.

The hidden field behaves as in the parent class, Group.

The translation, rotation, scale, scaleOrientation and center fields define a geometric 3D transformation consisting of (in order):

  1. a (possibly) non-uniform scale about an arbitrary point;
  2. a rotation about an arbitrary point and axis;
  3. a translation.

The center field specifies a translation offset from the origin of the local coordinate system (0,0,0). The rotation field specifies a rotation of the coordinate system. The scale field specifies a non-uniform scale of the coordinate system; scale values must be greater than zero. The scaleOrientation field specifies a rotation of the coordinate system before the scale is applied (allowing scales in arbitrary orientations); it applies only to the scale operation. The translation field specifies a translation of the coordinate system.

Given a 3-dimensional point P and a Transform node, P is transformed into point P' in its parent's coordinate system by a series of intermediate transformations. In matrix transformation notation, where C (center), SR (scaleOrientation), T (translation), R (rotation), and S (scale) are the equivalent transformation matrices,

    P' = T × C × R × SR × S × -SR × -C × P

The following Transform node:

Transform { 
    center           = C;
    rotation         = R;
    scale            = S;
    scaleOrientation = SR;
    translation      = T;
    children         = [...];
}

is equivalent to the nested sequence of:

Transform {
  translation = T;
  children = Transform {
    translation = C;
    children = Transform {
      rotation = R;
      children = Transform {
        rotation = SR; 
        children = Transform {
          scale = S; 
          children = Transform {
            rotation = -SR; 
            children = Transform {
              translation = -C;
              children = [...]
}}}}}}}
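
To make the composition order concrete, the following Java sketch builds the equivalent 4×4 matrix directly from the field values. It is an illustration only and does not use the Shout3D API (in practice the getMatrix() method inherited from Group returns the node's matrix); the matrices are row-major and applied to column vectors, matching P' = T × C × R × SR × S × -SR × -C × P.

// Minimal sketch (not the Shout3D API): composing a Transform node's matrix
// in the order P' = T x C x R x SR x S x -SR x -C x P.
public class TransformMatrix {

  // Multiply two 4x4 matrices stored as row-major double[16] arrays.
  static double[] mul(double[] a, double[] b) {
    double[] r = new double[16];
    for (int i = 0; i < 4; i++)
      for (int j = 0; j < 4; j++)
        for (int k = 0; k < 4; k++)
          r[i * 4 + j] += a[i * 4 + k] * b[k * 4 + j];
    return r;
  }

  static double[] identity() {
    return new double[] {1,0,0,0, 0,1,0,0, 0,0,1,0, 0,0,0,1};
  }

  static double[] translation(double x, double y, double z) {
    double[] m = identity();
    m[3] = x; m[7] = y; m[11] = z;   // translation in the last column
    return m;
  }

  static double[] scale(double x, double y, double z) {
    double[] m = identity();
    m[0] = x; m[5] = y; m[10] = z;   // non-uniform scale on the diagonal
    return m;
  }

  // Rotation of 'angle' radians about the unit axis (x, y, z), Rodrigues' formula.
  static double[] rotation(double x, double y, double z, double angle) {
    double c = Math.cos(angle), s = Math.sin(angle), t = 1 - c;
    return new double[] {
      t*x*x + c,   t*x*y - s*z, t*x*z + s*y, 0,
      t*x*y + s*z, t*y*y + c,   t*y*z - s*x, 0,
      t*x*z - s*y, t*y*z + s*x, t*z*z + c,   0,
      0,           0,           0,           1
    };
  }

  // Composite matrix for translation T (3 values), center C (3), rotation R (axis + angle),
  // scaleOrientation SR (axis + angle) and scale S (3), in the order given above.
  static double[] compose(double[] T, double[] C, double[] R, double[] SR, double[] S) {
    double[] m = translation(T[0], T[1], T[2]);
    m = mul(m, translation(C[0], C[1], C[2]));
    m = mul(m, rotation(R[0], R[1], R[2], R[3]));
    m = mul(m, rotation(SR[0], SR[1], SR[2], SR[3]));
    m = mul(m, scale(S[0], S[1], S[2]));
    m = mul(m, rotation(SR[0], SR[1], SR[2], -SR[3]));   // -SR
    m = mul(m, translation(-C[0], -C[1], -C[2]));        // -C
    return m;
  }
}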

4.2.32 Viewpoint

class Viewpoint { 
  StringField         description  = "";
  FloatField          fieldOfView  = 0.785398; // Usage = FIELD_OF_VIEW
  BooleanField        isBound      = TRUE;
  FloatArrayField     orientation  = 0 0 1 0;  // Usage = ROTATION
  FloatArrayField     position     = 0 0 10;   // Usage = COORD3
}  

The Viewpoint node defines a specific location in the local coordinate system from which the user may view the scene. When a Viewpoint node is employed to render a scene, the user's view is conceptually re-parented as a child of the Viewpoint node. All subsequent changes to the Viewpoint node's coordinate system change the user's view (e.g., changes to any ancestor transformation nodes or to the Viewpoint node's position or orientation fields).

The position and orientation fields of the Viewpoint node specify relative locations in the local coordinate system. Position is relative to the coordinate system's origin (0,0,0), while orientation specifies a rotation relative to the default orientation. In the default position and orientation, the viewer is on the Z-axis looking down the -Z-axis toward the origin with +X to the right and +Y straight up. Viewpoint nodes are affected by the transformation hierarchy.
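
For example, a Viewpoint placed 15 units back along the Z-axis and rotated 45 degrees about the +Y-axis (values chosen purely for illustration) could be written in the pseudo-code used above as:

Viewpoint { 
  description = "Angled view";
  fieldOfView = 1.0472;        // approximately 60 degrees
  orientation = 0 1 0 0.7854;  // 45-degree rotation about the +Y-axis
  position    = 0 0 15;
}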

The fieldOfView field specifies a preferred minimum viewing angle from this viewpoint, in radians. A small field of view roughly corresponds to a telephoto lens; a large field of view roughly corresponds to a wide-angle lens. The field of view shall be greater than zero and smaller than π. The value of fieldOfView represents the minimum viewing angle in any direction axis perpendicular to the view. For example, when rendering to a rectangular viewing projection, the following relationship will be maintained:

      display width     tan(FOVhorizontal / 2)
      --------------- = ----------------------
      display height    tan(FOVvertical / 2)

where the smaller of display width or display height determines which angle equals fieldOfView (the larger angle is computed using the relationship described above). The larger angle shall not exceed π and may force the smaller angle to be less than fieldOfView in order to sustain the aspect ratio.
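
The following Java sketch (not part of the Shout3D API) shows how the horizontal and vertical viewing angles can be derived from fieldOfView and the viewport dimensions using this relationship.

// Sketch only: deriving the horizontal and vertical viewing angles from a
// Viewpoint's fieldOfView and the viewport dimensions, per the relationship
// width / height = tan(FOVhorizontal/2) / tan(FOVvertical/2).
public class FieldOfViewMath {

  // Returns {horizontalFOV, verticalFOV} in radians.
  static double[] viewingAngles(double fieldOfView, double width, double height) {
    double horizontal, vertical;
    if (width <= height) {
      // Width is the smaller dimension: its angle equals fieldOfView exactly.
      horizontal = fieldOfView;
      vertical   = 2 * Math.atan((height / width) * Math.tan(fieldOfView / 2));
    } else {
      // Height is the smaller dimension.
      vertical   = fieldOfView;
      horizontal = 2 * Math.atan((width / height) * Math.tan(fieldOfView / 2));
    }
    return new double[] { horizontal, vertical };
  }

  public static void main(String[] args) {
    // A 400 x 300 viewport with the default fieldOfView of 0.785398 (45 degrees):
    // the vertical angle is 45 degrees and the horizontal angle is larger.
    double[] fov = viewingAngles(0.785398, 400, 300);
    System.out.println("horizontal = " + fov[0] + ", vertical = " + fov[1]);
  }
}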

The description field specifies a textual description of the Viewpoint node.

The isBound field controls whether this Viewpoint node is bound for the viewer containing it. For more information on how nodes are bound, see 2.5.8, Bindable Nodes.

The Viewpoint node is a subclass of the abstract node Bindable.

4.3 Abstract Node Classes

In addition to the set of Instantiable Nodes, Shout3D includes three abstract node classes. The abstract nodes may not be included in files and may not be instantiated programmatically. Rather, they serve to define a class hierarchy.

This hierarchy has benefits for those using Shout3D API, for those creating new node classes, and for those who wish to take advantage of certain field usages.

The hierarchy benefits those using the Shout3D API because they can perform common actions on all members of a base class. For example, even if new classes of lights are added to the system in later releases, users will be able to turn all such lights on and off (because the on field is defined in the base class), without knowing further specifics about the particular lights.

The hierarchy benefits those using any of the node-based field usages. For example, because the Appearance node's texture field is specified to have a field usage of TEXTURE, any subclass of the Texture node is a legal value for that field. This means that future authors of texture nodes (which might include MovieTexture, PixelTexture, etc.) can subclass Texture and know that their nodes will be legal values for the Appearance node's texture field.

The following set of abstract nodes is included in Shout3D 2.0:

4.3.1 Bindable
4.3.2 Geometry
4.3.3 Interpolator
4.3.4 Light
4.3.5 Texture

4.3.1 Bindable

abstract class Bindable { 
 BooleanField     isBound  = TRUE;
} 

The Bindable class is the base class for all bindable nodes. See 2.5.8, Bindable nodes for a discussion of bindable nodes.

Shout3D contains three nodes subclassed from Bindable: Background, NavigationInfo, and Viewpoint.

4.3.2 Geometry

abstract class Geometry { 
} 

The Geometry class is the base class for all nodes that draw geometry to the scene when the scene is rendered. All subclasses of Geometry are legal and valid values for the geometry field of the Shape node.

Shout3D contains three nodes subclassed from Geometry: IndexedFaceSet, IndexedLineSet, and PointSet.

4.3.3 Interpolator

abstract class Interpolator {
  FloatField          fraction  = 0;  // Usage = NORMALIZED_FLOAT
}

The Interpolator class is the base class for all nodes that interpolate field values in the scene. See Concepts, Interpolators and TimeSensor for a discussion of interpolators.

Shout3D contains five nodes subclassed from Interpolator: ColorInterpolator, CoordinateInterpolator, OrientationInterpolator, PositionInterpolator, and ScalarInterpolator.
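
As an illustration of how a fraction value typically drives key-frame interpolation in the subclasses, the following Java sketch performs a simple linear interpolation. The key and keyValue arrays are assumptions made for this sketch; the abstract class above defines only the fraction field.

// Illustration only (not the Shout3D API): linear key-frame interpolation
// driven by a normalized fraction, in the spirit of ScalarInterpolator.
public class ScalarLerp {

  // Linearly interpolate keyValue over key at the given fraction (0..1).
  static float interpolate(float fraction, float[] key, float[] keyValue) {
    if (fraction <= key[0]) return keyValue[0];
    if (fraction >= key[key.length - 1]) return keyValue[keyValue.length - 1];
    for (int i = 0; i < key.length - 1; i++) {
      if (fraction <= key[i + 1]) {
        float t = (fraction - key[i]) / (key[i + 1] - key[i]);
        return keyValue[i] + t * (keyValue[i + 1] - keyValue[i]);
      }
    }
    return keyValue[keyValue.length - 1];
  }

  public static void main(String[] args) {
    float[] key      = {0.0f, 0.5f, 1.0f};
    float[] keyValue = {0.0f, 2.0f, 1.0f};
    System.out.println(interpolate(0.25f, key, keyValue)); // prints 1.0
  }
}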

4.3.4 Light

abstract class Light { 
 StringArrayField affectedGroups = [];
 FloatArrayField  color          = 1 1 1; // Usage = COLOR
 FloatField       intensity      = 1;     // Usage = NORMALIZED_FLOAT
 BooleanField     on             = TRUE;
}

The Light class is the base class for all nodes that add light to the scene. Programmers can therefore collect all the lights in a scene and turn them on or off without knowing which kinds of lights they are.
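
The following Java sketch illustrates this point. The node classes shown are minimal stand-ins rather than the actual Shout3D classes, and no real traversal API is assumed; what matters is that the on field lives in the base class.

import java.util.List;

// Illustration only: minimal stand-in classes, not the Shout3D node classes.
public class LightSwitch {

  // Turn every light in the list on or off, regardless of its concrete class.
  static void setAllLights(List<Light> lights, boolean on) {
    for (Light light : lights) {
      light.on = on;               // field defined by the abstract Light class
    }
  }

  // Stand-ins so the sketch is self-contained.
  static class Light { boolean on = true; float intensity = 1f; }
  static class DirectionalLight extends Light { }
}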

The affectedGroups field controls the scoping of the lights (see 2.5.6.1, Scoping of lights). By default, lights affect all nodes contained in the subgraph of the light's parent group.

Shout3D contains one node subclassed from Light: DirectionalLight.

4.3.5 Texture

abstract class Texture { 
} 

The Texture class is the base class for all nodes that specify a texture for rendering geometry. All subclasses of Texture are legal and valid values for the texture field of the Appearance node.

Shout3D contains one node subclassed from Texture: ImageTexture.


Copyright© 1999-2000, Eyematic Interfaces, Inc.