OpenGL Render Serving with GLR:

GLR is an OpenGL-based render facility: a mechanism for sharing expensive graphics hardware resources among users on a network. The idea is to amortize the hardware across multiple users to bring high-quality rendering to a larger application and user audience. Imagine a network of Indy workstations that use their local graphics acceleration for most interactive tasks, but can fall back to GLR on a RealityEngine or InfiniteReality for extremely high-quality rendering.

There's good information about GLR here:
HTML for a paper on GLR is available.
PostScript for the paper is also available.
Showcase slides from the X Technical Conference presentation are also available.

GLR is going to be part of IRIX 6.2.

The images below show two programs demonstrating GLR. The first is a GLR-enabled version of Brian Cabral's interactive volren program for sophisticated volume rendering. volren renders using RealityEngine and IMPACT graphics hardware; its rendering cannot be performed interactively on Indy or non-IMPACT Indigo^2 hardware. The second program is a GLUT-based Inventor program demonstrating a hybrid use of GLR. A scene of a duck swimming in a pond is rendered. The user can interactively rotate the duck around the pond, but when the scene is static, the image can be re-rendered on a RealityEngine with higher tessellation and multisampling enabled for substantially higher image quality.

Below is volren displaying to an Indigo^2 with XL graphics (the same graphics hardware as a 24-bit Indy). All of volren's controls work the same as if volren were running on a RealityEngine. The static (i.e., non-rotating) image below is shown at the same quality as on a RealityEngine (because it was rendered on a RealityEngine!). volren itself is running on an Onyx with RealityEngine graphics, using GLR to render the graphics and the GLX protocol to ship the image to the Indigo^2 for display. The image scene takes about a second to redraw.

Brian Cabral's volren displaying on an Indy!

Below is another snapshot of volren (again running on an Onyx, displaying to an Indigo^2 XL), but the head is spinning at the rate of 5.36 frames per second. This is very good interactive performance. Notice the image quality is not as good. The image was rendered at one-ninth the resolution on the Onyx and then displayed on the Indigo^2 with glPixelZoom(3, 3) to enlarge the image to the displayed resolution. When the head is spinning, the image below is of very acceptable quality. When the user stops the image spinning, well, see the next figure...

Now the image is no longer rotating. Notice the image quality is much better since pixel zooming is no longer used. Static views are shown at the full pixel resolution, just like on an Onyx. Next, let's look at volren's ability to view the data at different density levels (crucial for real-world volume rendering)...

Notice the level scrollbar has been adjusted slightly upward in the picture below. Now you can clearly see a skull in the displayed volume. The change happened immediately and interactively as the scrollbar was manipulated. The displayed image is static, which explains the good image quality.

Now, the skull is rotating. Wow, 5.73 frames a second! Pretty darn good for an application that cannot actually run on the Indigo^2 it is displaying on. In fact, the Onyx and Indigo^2 are on different Ethernets; performance would improve to around 7 or 8 frames per second if they were on the same network.

For reference, if you were running volren locally on the same Onyx, you would get 10 frames per second. Of course, these are full-resolution frames (no pixel zooming). The GLR-enabled volren is nearly as interactive (though the dynamic image quality is significantly lowered), and the static image quality is identical.

The advantage of GLR is that a single Onyx can be amortized across a network of users, even by customers that could not justify dedicating an Onyx per user to an occasional application like volume rendering (for example, a group of geologists with an occasional need to review volumetric seismic data).

The second example is glrduckpond, using GLUT and Open Inventor with GLR. This demo runs on the user's local Indy workstation, instead of running on the Onyx like volren. Interactive rendering is done on the Indy (locally) at lower image quality when the demo's scene is being dynamically manipulated by the user, but when the scene is static, the scene is re-rendered on the Onyx at higher quality.

Here's the image rendered locally while a user "swims" the duck in circles in the pond interactively:

The time above is the number of milliseconds expected to re-render the scene on the Onyx at higher quality. Basically, the program will render on the Onyx in half a second once the user releases the left mouse button to stop rotating the duck. Here's what the duck looks like statically:

Notice in this image that the scene is multisampled (a hardware antialiasing technique requiring expensive hardware available only on RealityEngine and InfiniteReality) and tessellated with a finer granularity. The lighting under the near eye shows up only in the GLR version, and the pond and duck model look substantially smoother in the GLR version.

glrduckpond is an example of hybrid GLR rendering. Interactive, lower-quality rendering is done locally; static, high-quality rendering is done with GLR. The technique is very applicable to CAD and modeling applications, where the user wants fast program interaction but also wants occasional views of the model at substantially higher quality. VRML viewers can also use this approach.

Here is the source code for glrduckpond: glrduckpond.c++

GLR is also useful for image processing, generating animated movie frames, solid modeling intersections using stencil planes (for CAD applications), rendered textures used for shadow map and reflection texture map construction, and any other time when high quality is necessary without extreme interactivity.

See the links above for more information about GLR's implementation, applications, and programming interface. And, I'll be glad to demonstrate these programs for anyone who wants to see them in action.

- Mark Kilgard