Virtual TV Overview

A brief overview of Warp's Virtual TV (VTV) technology.

FAQ

A list of frequently asked questions and answers.

Performance

A summary of VTV's performance in a variety of environments.

Resolution

A note about resolution issues in VTV.

VTV Reference Manual

You can download our complete Developer's Kit programming manual [PKZIPped MS Word format].


VTV Overview

VTV is an enabling technology which permits a viewer to interactively look around (pan, tilt, roll and zoom) within a still or moving video image.

VTV was created as a next step after our extensive work with traditional realtime 3D renderers. While today's computers are faster than ever before, the drive toward 3D gaming and multimedia has created trade-offs between image quality and speed of display.

Our approach is simple: prerender as much of the environment as possible in advance, and use image warping techniques to offer both higher quality and realtime frame rate at runtime.

VTV lets you prerender full spheres, or 360 degree panoramic cylinders, if the top and bottom of your scene are not required. We believe that image-based rendering, offering high frame rates for arbitrarily complex computer graphics and real world photographic scenes, will play a vital role in meeting quality expectations.

VTV allows you to prerender high resolution backgrounds, or use stills or video, in a format that offers complete freedom of viewing orientation. Gaming and multimedia objects are rendered in real time and displayed against VTV backgrounds. This technique allows you to spend your compute power on the real-time objects, and have beautifully high resolution backgrounds at a very low compute cost.

Computer graphic content is created by rendering your scene as a cube; our processing tools turn this into a full sphere which can be viewed in VTV. Real-world content is captured with a fisheye or wide angle lens, or other special cameras. Try out our demos to evaluate the variety of options.

The VTV Software Developer's Kit is available for Win95. For further information on our SDK, please read our FAQ and manual, and feel free to send us email.

Future applications for VTV include teleconferencing, arcade games, surveillance, medicine, and aerospace.



FAQ

Virtual TV Frequently Asked Questions and Answers

Thu Feb 1 15:39:51 PST 1996

Q: What is Virtual TV?

A: Virtual TV (VTV) is a technology for viewing 3D environments in realtime on low-cost computing hardware. VTV is based on a patent pending image warping technology that lets you dewarp environment maps at very high speed. These maps can be pre-rendered computer graphics converted with our preprocessing tools, and/or digital imagery acquired through wide angle optics.

VTV gets to a desirable point on the price/performance curve by precomputing the rendering of an image from all possible view orientations. This permits unlimited pan/tilt/roll/zoom freedom at high frame rates on standard VGA cards, under both DOS and Windows. The number of polygons in a scene is restricted only by the limitations of your renderer.

Q: How does VTV compare with Quicktime VR and Surround Video?

A: The idea behind each of these systems is similar: being able to quickly control view orientation within real world imagery or pre-rendered computer graphics. With VTV, unlike the other systems, you can freely control pan, roll, tilt, and zoom, since we are handling a full 2D nonlinear dewarp. Additionally, since VTV can use wide angle lenses and other omnidirectional image capture technologies to acquire an entire frame at once, VTV can be applied to motion video, and is not limited to still images. Of course, VTV can also be used to view cylindrical panoramas originally stitched for QTVR. A more detailed comparison of VTV and QTVR is available.

Q: How can I see a demo of VTV?

A: DOS, Windows, and SGI demos are available by anonymous FTP to ftp.warp.com. Look in /pub/vtvdemos, where you will find a Windows demo (vtvwin.exe), a video demo for Windows (vtvmci.zip), and an SGI demo (vtvsgi.tar). You can also download the demos from your Web browser. Please be sure to use the -D option when unzipping.

Q: The downloadable VTV demos seem to permit me to freely roll/pitch/yaw my viewpoint, but I can't move. Is VTV technology capable of this?

A: You can move in VTV, but only along paths for which source imagery has been precomputed. Our current demo CD contains an example which uses Video for Windows to decompress video as you look around. This is more restrictive than full-blown realtime 3D VR-like rendering, but the result is higher quality at a lower price. Think of ordinary non-VTV video or computer graphics renderings: you have no freedom of view orientation or position. With VTV, you have total freedom of view orientation, but position is still constrained to the precomputed path. For many kinds of gaming (Myst-like games, car racing, etc.) you don't need the positional freedom and its associated expense.

If the still frame changes over time, you are using VTV with motion video! The API in our Developer's Kit lets your application change the frame VTV is dewarping, so you can use VTV with your favorite motion video decompression (Cinepak, MPEG, motion JPEG, TrueMotion, proprietary, etc.)
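
To make that workflow concrete, here is a minimal sketch of such a playback loop. All of the VTV_* and Video* names below are hypothetical placeholders, not the actual Developer's Kit API; consult the manual for the real calls.

    /* Hypothetical sketch: decode a frame with your codec, hand the bitmap to VTV,
       then dewarp it for the viewer's current orientation.  All names are
       illustrative placeholders, not the shipped VTV SDK API. */
    typedef struct { float pan, tilt, roll, zoom; } ViewParams;

    extern unsigned char *VideoDecodeNextFrame(void *stream);      /* your codec (Cinepak, MPEG, ...) */
    extern void VTV_SetSourceBitmap(unsigned char *bits);          /* hypothetical */
    extern void VTV_SetOrientation(float pan, float tilt,
                                   float roll, float zoom);        /* hypothetical */
    extern void VTV_DewarpToScreen(void);                          /* hypothetical */

    void play_warped_video(void *stream, const ViewParams *view)
    {
        unsigned char *frame;
        while ((frame = VideoDecodeNextFrame(stream)) != NULL) {
            VTV_SetSourceBitmap(frame);          /* the changing still that VTV dewarps */
            VTV_SetOrientation(view->pan, view->tilt, view->roll, view->zoom);
            VTV_DewarpToScreen();                /* perspective-corrected output        */
        }
    }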

Warp has also implemented code supporting an Intel/Philips "Pegasus" framegrabber to permit us to use VTV with a realtime video input such as a VHS tape or teleconferencing camera. For this hardware motion video solution, the framegrabber simply dumps digitized video data into the bitmap area from which VTV is dewarping.

With our Developer's Kit, we supply sample software integrating VTV with 32 bit Video for Windows as an installable compression manager. This lets you use VTV to look around any AVI compressed video stream.

Q: How is VTV positioned relative to the real-time polygon engines [Reality Lab, Renderware, BRender, etc.] ?

A: VTV can display high quality CG with an unlimited number of polys at high frame rates on moderate performance machines, with the restriction that only view orientation is completely free; view position (as for conventional video) is predetermined. Many games suit this restriction: you don't always need full freedom of view position and its associated performance and/or quality cost. Unlike real-time polygon engines, VTV can incorporate real-world video into 3D games at a very low production cost, with no 3D modelling or rendering needed; just go out and shoot your environment with a wide angle lens. And VTV can coexist with real-time polygon engines which operate in Mode X, VESA mode 110h, or WinG: VTV renders the backgrounds, with the polygon engine drawing dynamic objects on top before double buffering. We supply a sample integration with the Reality Lab renderer, which can be used to draw realtime 3D objects on top of a VTV rendered background.
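
The division of labor per frame looks roughly like the sketch below. The function names are hypothetical placeholders, not the shipped Reality Lab sample: VTV fills the back buffer with the dewarped background, the polygon engine draws moving objects on top, and only then is the buffer presented.

    extern void VTV_DewarpToBuffer(unsigned char *backbuf, int pitch);   /* hypothetical VTV call   */
    extern void Poly_RenderObjects(unsigned char *backbuf, int pitch);   /* your realtime 3D engine */
    extern void FlipToScreen(unsigned char *backbuf);                    /* Mode X / VESA / WinG blit */

    void render_frame(unsigned char *backbuf, int pitch)
    {
        VTV_DewarpToBuffer(backbuf, pitch);   /* arbitrarily complex prerendered background    */
        Poly_RenderObjects(backbuf, pitch);   /* spend the real-time polygon budget here       */
        FlipToScreen(backbuf);                /* double buffering: present the finished frame  */
    }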

Q: What kind of PC is needed to run VTV?

A: VTV will run on any PC equipped with a VGA card; it runs better on faster PCs with PCI bus VGA cards. We are seeing 70 frames/sec under DOS on a 90 MHz Dell Pentium equipped with a #9 GXE or ATI Mach 32 PCI VGA card. On a 33 MHz 486 with a cheap ISA VGA card, rates of 10 frames/sec are typical.

Q: Will I get the same speed with motion video as I do for the still frames?

A: It depends. If you are using hardware to acquire the changing stills that VTV is dewarping, you will come close to the speed you see for stills. If you are doing software motion video decompression concurrently with VTV dewarping, your framerate will depend on the efficiency of your video decompression code.

Video decompression quality, resolution, and bandwidth to video storage (hard drive vs. CD) are all factors here.

Q: How can I view Autodesk 3DStudio renderings with VTV?

A: Warp provides tools (such as the TOLENS.EXE utility) to generate VTV-prewarped frames from 3DS files for which modellers have provided a camera path in the keyframer. (It's like using a .CUB view for multiple camera positions.)

Q: What about support for other renderers (Alias, SoftImage, etc.)?

A: We are considering explicit support for these in response to customer requests. We provide a utility which permits you to convert any cubical environment map into VTV form.

Q: On what other platforms does VTV run?

A: Currently, we are only licensing the PC version, supporting Windows. A version also exists for Silicon Graphics machines as we use SGI internally for most of our software development. Please contact us if you need to use VTV on the SGI.

Q: Does VTV run under Win95?

A: Yes, VTV runs under Win95 and NT using the high performance Microsoft WinG library. WinG version 1.0 provides fast DIB-to-screen blits under Windows 3.1, Windows for Workgroups 3.11, Windows 95, and Windows NT version 3.5. (VTV will not run on Windows NT version 3.1 or on earlier versions of Windows.)
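
For orientation, here is a rough sketch of how an application might pair VTV output with WinG's DIB blits. The WinG calls (WinGCreateDC, WinGRecommendDIBFormat, WinGCreateBitmap, WinGBitBlt) are the real library functions; the overall structure and the VTV_DewarpToBuffer call are hypothetical, not the SDK's shipped sample code.

    #include <windows.h>
    #include <wing.h>

    extern void VTV_DewarpToBuffer(void *bits, int pitch);   /* hypothetical VTV call */

    static HDC   hdcWinG;
    static void *pBits;

    /* Create an 8 bpp WinG DIB to dewarp into (palette setup omitted). */
    void init_wing(int width, int height)
    {
        struct { BITMAPINFOHEADER hdr; RGBQUAD pal[256]; } bmi;

        hdcWinG = WinGCreateDC();
        WinGRecommendDIBFormat((BITMAPINFO *)&bmi);   /* fills in the fastest DIB format/orientation */
        bmi.hdr.biWidth   = width;
        bmi.hdr.biHeight *= height;                   /* keep the recommended top-down/bottom-up sign */
        SelectObject(hdcWinG, WinGCreateBitmap(hdcWinG, (BITMAPINFO *)&bmi, &pBits));
    }

    /* Each frame: dewarp into the DIB, then do the fast DIB-to-screen blit. */
    void show_frame(HDC hdcWindow, int width, int height)
    {
        VTV_DewarpToBuffer(pBits, width);             /* assumes width is a multiple of 4, so pitch == width */
        WinGBitBlt(hdcWindow, 0, 0, width, height, hdcWinG, 0, 0);
    }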

Q: Is there a release of VTV with DirectX support?

A: Yes, the VTV SDK contains support for and source code examples of use with DirectDraw in both 8 and 16 bpp modes.

Q: What does it cost to license VTV for use with a PC game?

A: For the PC version, we are asking an up-front fee of $10,000 as an advance on a royalty of 25 cents/disk. This gets you the VTV development tools, libraries, conversion tools, and support.

Q: I'm smart. Why should I pay to license VTV when I could just develop it myself, or disassemble it to figure out how it works?

A: Warp has filed for a patent covering the method and apparatus for Virtual TV in both software and hardware embodiments. Warp is vigorously pursuing foreign patent filings in all key countries.

Q: How do I generate the data used by VTV?

A: You can shoot footage with a video camera equipped with any wide angle lens. (The demo disk contains imagery acquired with a Nikon 8mm lens.) We have tools to help you automatically determine the lens equations needed by VTV. You then digitize and compress the video in a form suited to your motion video codec. Alternatively, you can use a special camera to shoot a full sphere of high-resolution motion video.
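
As an illustration of what a "lens equation" looks like, one common wide-angle model is the equidistant fisheye, in which distance from the image-circle center grows in proportion to the angle from the lens axis. The sketch below uses that assumed model; the actual equations determined by Warp's tools for a given lens may differ.

    #include <math.h>

    /* Illustration only: equidistant fisheye model (an assumption, not necessarily
       the lens equation VTV's tools derive).  Maps a unit view direction (+z along
       the lens axis) to image coordinates, with (0,0) at the image-circle center. */
    void fisheye_project(double dx, double dy, double dz,
                         double radius_at_90deg,        /* image-circle radius of a 180-degree lens */
                         double *u, double *v)
    {
        double theta = acos(dz);                        /* angle away from the lens axis   */
        double r     = radius_at_90deg * theta / (3.14159265358979 / 2.0);
        double phi   = atan2(dy, dx);                   /* direction around the axis       */
        *u = r * cos(phi);
        *v = r * sin(phi);
    }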

For use of computer graphics imagery with VTV, we have tools to support Autodesk 3DStudio renderings, and will be providing interfaces to other rendering packages in response to customer requests.

Q: Does VTV support an image stitching algorithm to make a panoramic view?

A: Not explicitly, but the rather time-consuming operation of "stitching" is not needed by VTV.

In the case of computer-generated imagery, we can take cubical environment maps consisting of 6 images (rendered from the same camera position, with camera orientations front, back, right, left, up, and down) and create a nonlinear environment map. Our runtime system lets us control pan, roll, tilt, and zoom in realtime within this map, with no singularities.
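
For concreteness, the core of any such conversion is a per-direction cube-map lookup like the sketch below (illustrative only, not Warp's conversion tool; it assumes +y up and +z forward). A converter would loop over every pixel of the output environment map, turn it into a view direction, call this, and sample the chosen face image.

    #include <math.h>

    /* Pick the cube face a unit direction exits through and project onto it,
       giving coordinates in [-1, 1] within that face image. */
    typedef enum { FACE_RIGHT, FACE_LEFT, FACE_UP, FACE_DOWN, FACE_FRONT, FACE_BACK } CubeFace;

    CubeFace cube_face_coords(double x, double y, double z, double *u, double *v)
    {
        double ax = fabs(x), ay = fabs(y), az = fabs(z);

        if (ax >= ay && ax >= az) {            /* ray exits through a right/left face */
            *u = y / ax;  *v = z / ax;
            return x > 0 ? FACE_RIGHT : FACE_LEFT;
        } else if (ay >= az) {                 /* up/down face */
            *u = x / ay;  *v = z / ay;
            return y > 0 ? FACE_UP : FACE_DOWN;
        } else {                               /* front/back face */
            *u = x / az;  *v = y / az;
            return z > 0 ? FACE_FRONT : FACE_BACK;
        }
    }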

In the case of real-world stills or video, we typically use wide angle (fisheye) lenses to capture as much of the world as we need in a single frame, to avoid stitching. Capturing the entire environment in a single frame also lets us photograph dynamic environments. For full-spherical motion video, we have used a dodecahedral camera array (actually 11 video cameras) to acquire a very high resolution signal. As in the case of cubical environment maps, we have developed a special image compositor for this geometry.



VTV Technology - Win95 Performance Summary

August 1995

This document serves to quantify VTV performance on Win95 platforms.

Performance is not affected by input source resolution or camera roll/pitch/yaw/zoom parameters. Performance is gated by processor speed and bus bandwidth to VGA display memory, with a PCI bus being preferable. The inner loop of VTV warping requires only 1 addition, 1 increment, and 2 pointer dereferences per output pixel.
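
As a rough illustration of what such a table-driven warp looks like (this is not Warp's implementation), the inner loop reduces to copying source pixels through a precomputed offset table that is rebuilt only when the view parameters change:

    /* Illustrative only.  lut[i] holds the precomputed source-image offset for
       output pixel i; it is recomputed when pan/tilt/roll/zoom change, so the
       per-pixel work is little more than a table lookup and a copy. */
    void warp_frame(unsigned char *dst, const unsigned char *src,
                    const unsigned long *lut, long npixels)
    {
        long i;
        for (i = 0; i < npixels; i++)
            dst[i] = src[lut[i]];
    }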

The following table gives performance, in frames/sec, for 2 different Pentium platforms. The P90 is a Dell XPS with an ATI Mach 64 Pro Turbo PCI card. The P133 is a Micron with a Diamond Stealth 64 DRAM T PCI card. Benchmarks are for VTVSetWarpType(0). (Figures in parentheses show speeds for VTVSetWarpType(1).)

Warp speed, in frames/sec, for 8 bpp 640x480 output

VESA blit speed, in frames/sec, for 16 bpp 640x480 output

Notes:

  • Testing used Win95 pre-release build 950
  • VTVWIN.C benchmark was compiled optimized using MSVC++ 2.1
  • Image source resolution is 640x480

Pentium speeds are such that software-based motion video decompression can occur together with VTV dewarping, while retaining close to realtime video rates for a 320x240 output window.



VTV Resolution Issues

With VTV, a viewer sees a perspective-corrected dewarped region of the wide-angle input image. Output resolution is thus intrinsically lower than input resolution, since only a portion of the image is seen, with the portion outside of the current field of view being discarded.

The percentage of the transmitted signal which is not utilized depends on the width of the virtual lens controlling the output. Viewing of wider fields utilizes more of the transmitted video, but limits the angular range of pan and tilt.

As an example, consider a complete 180 degree video signal. If the virtual camera has a field of view of 90 degrees horizontal by 60 degrees vertical (a standard 3:2 aspect ratio), the output dewarped view will have a pixel count of 45% that of the complete image circle when the viewing orientation is centered at the lens axis.

Wide angle lenses compress the image more towards the periphery of the field of view, so output resolution is lower when the viewing orientation is far from the lens axis. In the above example, output view resolution is reduced to about 35% of input resolution when the viewing orientation is at a left or right extremity of the field of view.

Use of multiple camera geometries can help ensure the viewing vector is always near the center of an image. This is particularly easy when VTV is used for viewing computer-generated imagery.

An additional issue is the use of the appropriate physical camera lens when shooting the wide-angle source video. If too short a lens is employed, or an inappropriate lens adapter is used, the image may not fill the source frame. This needlessly wastes resolution. Experimentation has shown that for many video cameras, an 8mm very wide angle lens generates a circular image spot smaller than the camera CCD, whereas a 16mm lens generates an image too large for the CCD, with information on the periphery of the field of view being lost.

One strategy which enables delivery of full NTSC resolution without the need for higher bandwidth to homes is to locate the dewarp processor at the cable head end. Signals from the orientation controller are sent to the cable head end to control the dewarp there. Viewers see all the video which gets transmitted.
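
The upstream traffic in such a system is tiny: only the viewer's orientation needs to travel to the head end. A purely hypothetical control message (no such wire format is defined by VTV) might look like:

    /* Hypothetical upstream control message for head-end dewarping (illustration only). */
    struct vtv_view_request {
        unsigned short subscriber_id;    /* which downstream channel to steer */
        short pan_centidegrees;          /* -18000 .. 18000                   */
        short tilt_centidegrees;         /*  -9000 ..  9000                   */
        short roll_centidegrees;
        unsigned short zoom_percent;     /* 100 = unity zoom                  */
    };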

VTV also benefits from use of digital compression standards which permit transmission of video at a resolution greater than that displayable on the target monitor (such as CCIR vs. NTSC). The excess transmitted resolution compensates for that discarded as being outside the region being dewarped.

If complete 180-degree viewing is desired, any digital compression technique used for image transmission should efficiently compress the black region of the frame which lies outside of the image circle.

Input prefiltering and output antialiasing can also serve to improve the quality of the display.




Send comments and questions to: info@warp.com