- Frequently Asked Questions (FAQ) on the Multicast Backbone (MBONE)
- ------------------------------------------------------------------
- Steve Casner, casner@isi.edu, 6-May-93
-
- *** This file is venera.isi.edu:mbone/faq.txt ***
- *** Corrections and Additions Requested ***
-
- * What is the MBONE?
-
- The MBONE is an outgrowth of the first two IETF "audiocast"
- experiments in which live audio and video were multicast from the
- IETF meeting site to destinations around the world. The idea is
- to construct a semi-permanent IP multicast testbed to carry the
- IETF transmissions and support continued experimentation between
- meetings. This is a cooperative, volunteer effort.
-
- The MBONE is a virtual network. It is layered on top of portions
- of the physical Internet to support routing of IP multicast
- packets since that function has not yet been integrated into many
- production routers. The network is composed of islands that can
- directly support IP multicast, such as multicast LANs like
- Ethernet, linked by virtual point-to-point links called "tunnels".
- The tunnel endpoints are typically workstation-class machines
- having operating system support for IP multicast and running the
- "mrouted" multicast routing daemon.
-
- * How do IP multicast tunnels work?
-
- IP multicast packets are encapsulated for transmission through
- tunnels, so that they look like normal unicast datagrams to
- intervening routers and subnets. A multicast router that wants to
- send a multicast packet across a tunnel will prepend another IP
- header, set the destination address in the new header to be the
- unicast address of the multicast router at the other end of the
- tunnel, and set the IP protocol field in the new header to be 4
- (which means the next protocol is IP). The multicast router at
- the other end of the tunnel receives the packet, strips off the
- encapsulating IP header, and forwards the packet as appropriate.
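-
- As an illustration only (this is not the actual mrouted code, and
- the function and variable names here are made up), the
- encapsulation step amounts to something like this in C:
-
-     #include <string.h>
-     #include <netinet/in.h>
-     #include <netinet/in_systm.h>
-     #include <netinet/ip.h>
-
-     /* Sketch: wrap a multicast datagram in an outer IP header so it
-      * can cross a tunnel as an ordinary unicast packet.  Checksum
-      * and id fields are left to the kernel/caller. */
-     size_t
-     encap_for_tunnel(char *out, const char *pkt, size_t len,
-                      struct in_addr near_end, struct in_addr far_end)
-     {
-         struct ip outer;
-
-         memset(&outer, 0, sizeof(outer));
-         outer.ip_v   = IPVERSION;          /* IPv4 */
-         outer.ip_hl  = sizeof(outer) >> 2; /* header length, 32-bit words */
-         outer.ip_p   = 4;                  /* 4 = next protocol is IP */
-         outer.ip_ttl = 64;                 /* outer TTL, example value */
-         outer.ip_len = htons(sizeof(outer) + len);
-         outer.ip_src = near_end;           /* this end of the tunnel */
-         outer.ip_dst = far_end;            /* unicast addr of far mrouted */
-
-         memcpy(out, &outer, sizeof(outer));     /* prepend new header */
-         memcpy(out + sizeof(outer), pkt, len);  /* original packet    */
-         return sizeof(outer) + len;
-     }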
-
- Previous versions of the IP multicast software (before March 1993)
- used a different method of encapsulation based on an IP Loose
- Source and Record Route option. This method remains an option in
- the new software for backward compatibility with nodes that have
- not been upgraded. In this mode, the multicast router modifies
- the packet by appending an IP LSRR option to the packet's IP
- header. The multicast destination address is moved into the
- source route, and the unicast address of the router at the far end
- of the tunnel is placed in the IP Destination Address field. The
- presence of IP options, including LSRR, may cause modern router
- hardware to divert the tunnel packets through a slower software
- processing path, causing poor performance. Therefore, use of the
- new software and the IP encapsulation method is strongly
- encouraged.
-
- * What is the topology of the MBONE?
-
- We anticipate that within a continent, the MBONE topology will be
- a combination of mesh and star: the backbone and regional (or
- mid-level) networks will be linked by a mesh of tunnels among
- mrouted machines located primarily at interconnection points of
- the backbones and regionals. Some redundant tunnels may be
- configured with higher metrics for robustness. Then each regional
- network will have a star hierarchy hanging off its node of the
- mesh to fan out and connect to all the customer networks that want
- to participate.
-
- Between continents there will probably be only one or two tunnels,
- preferably terminating at the closest point on the MBONE mesh. In
- the US, this may be on the Ethernets at the two FIXes (Federal
- Internet eXchanges) in California and Maryland. But since the
- FIXes are fairly busy, it will be important to minimize the number
- of tunnels that cross them. This may be accomplished using IP
- multicast directly (rather than tunnels) to connect several
- multicast routers on the FIX Ethernet.
-
- * How is the MBONE topology going to be set up and coordinated?
-
- The primary reason we set up the MBONE e-mail lists (see below)
- was to coordinate the top levels of the topology (the mesh of
- links among the backbones and regionals). This must be a
- cooperative project combining knowledge distributed among the
- participants, somewhat like Usenet. The goal is to avoid loading
- any one individual with the responsibility of designing and
- managing the whole topology, though perhaps it will be necessary
- to periodically review the topology to see if corrections are
- required.
-
- The intent is that when a new regional network wants to join in,
- they will make a request on the appropriate MBONE list, then the
- participants at "close" nodes will answer and cooperate in setting
- up the ends of the appropriate tunnels. To keep fanout down,
- sometimes this will mean breaking an existing tunnel to insert
- a new node, so three sites will have to work together to set up
- the tunnels.
-
- To know which nodes are "close" will require knowledge of both the
- MBONE logical map and the underlying physical network topology,
- for example, the physical T3 NSFnet backbone topology map combined
- with the network providers' own knowledge of their local topology.
-
- Within a regional network, the network's own staff can
- independently manage the tunnel fanout hierarchy in conjunction
- with end-user participants. New end-user networks should contact
- the network provider directly, rather than the MBONE list, to get
- connected.
-
- * What is the anticipated traffic level?
-
- The traffic anticipated during IETF multicasts is 100-300Kb/s, so
- 500Kb/s seems like a reasonable design bandwidth. Between IETF
- meetings, most of the time there will probably be no audio or
- video traffic, though some of the background session/control
- traffic may be present. A guess at the peak level of experimental
- use might be 5 simultaneous voice conversations (64Kb/s each).
- Clearly, with enough simultaneous conversations, we could exceed
- any bandwidth number, but 500Kb/s seems reasonable for planning.
-
- Note that the design bandwidth must be multiplied by the number of
- tunnels passing over any given link since each tunnel carries a
- separate copy of each packet. This is why the fanout of each
- mrouted node should be no more than 5-10 and the topology should
- be designed so that at most 1 or 2 tunnels flow over any T1 line.
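-
- For example, three tunnels at the 500Kb/s design bandwidth crossing
- the same T1 line would carry 3 x 500Kb/s = 1500Kb/s of replicated
- traffic, essentially the full capacity of the 1.544Mb/s line.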
-
- While most MBONE nodes should connect with lines of at least T1
- speed, it will be possible to carry restricted traffic over slower
- speed lines. Each tunnel has an associated threshold against
- which the packet's IP time-to-live (TTL) value is compared. By
- convention in the IETF multicasts, higher bandwidth sources such
- as video transmit with a smaller TTL so they can be blocked while
- lower bandwidth sources such as compressed audio are allowed
- through.
-
- * Why should I (a network provider) participate?
-
- To allow your customers to participate in IETF audiocasts and
- other experiments in packet audio/video, and to gain experience
- with IP multicasting for a relatively low cost.
-
- * What technical facilities and equipment are required for a network
- provider to join the MBONE?
-
- Each network-provider participant in the MBONE provides one or
- more IP multicast routers to connect with tunnels to other
- participants and to customers. The multicast routers are
- typically separate from a network's production routers since most
- production routers don't yet support IP multicast. Most sites use
- workstations running the mrouted program, but the experimental
- MOSPF software for Proteon routers is an alternative (see MOSPF
- question below).
-
- It is best if the workstations can be dedicated to the multicast
- routing function to avoid interference from other activities and
- so there will be no qualms about installing kernel patches or new
- code releases on short notice. Since most MBONE nodes other than
- endpoints will have at least three tunnels, and each tunnel
- carries a separate (unicast) copy of each packet, it is also
- useful, though not required, to have multiple network interfaces
- on the workstation so it can be installed in parallel with the unicast
- router for those sites with configurations like this:
-
- +----------+
- | Backbone |
- | Node |
- +----------+
- |
- ------------------------------------------ External DMZ Ethernet
- | |
- +----------+ +----------+
- | Router | | mrouted |
- +----------+ +----------+
- | |
- ------------------------------------------ Internal DMZ Ethernet
-
- (The "DMZ" Ethernets borrow that military term to describe their
- role as interface points between networks and machines controlled
- by different entities.) This configuration allows the mrouted
- machine to connect with tunnels to other regional networks over
- the external DMZ and the physical backbone network, and connect
- with tunnels to the lower-level mrouted machines over the internal
- DMZ, thereby splitting the load of the replicated packets. (The
- mrouted machine would not do any unicast forwarding.)
-
- Note that end-user sites may participate with as little as one
- workstation that runs the packet audio and video software and has
- a tunnel to a network-provider node.
-
- * What skills are needed to participate and how much time might have
- to be devoted to this?
-
- The person supporting a network's participation in the MBONE
- should have the skills of a network engineer, but a fairly small
- percentage of that person's time should be required. Activities
- requiring this skill level would be choosing a topology for
- multicast distribution within the provider's network and
- analyzing traffic flow when performance problems are identified.
-
- To set up and run an mrouted machine will require the knowledge to
- build and install operating system kernels. If you would like to
- use a hardware platform other than those currently supported, then
- you might also contribute some software implementation skills!
-
- We will depend on participants to read mail on the appropriate
- mbone mailing list and to respond to requests from new, "nearby"
- networks that want to join, so that the installation of new tunnel
- links can be coordinated. Similarly, when customers of the network
- provider make requests for their campus nets or end systems to be
- connected to the MBONE, new tunnel links will need to be added
- from the network provider's multicast routers to the end systems
- (unless the whole network runs MOSPF).
-
- Part of the resources committed to participation should go toward
- making operations staff aware of the role of the multicast routers
- and the nature of multicast traffic, and prepared to disable
- multicast forwarding if excessive traffic is
- found to be causing trouble. The potential problem is that any
- site hooked into the MBONE could transmit packets that cover the
- whole MBONE, so if it became popular as a "chat line", all
- available bandwidth could be consumed. Steve Deering plans to
- implement multicast route pruning so that packets only flow over
- those links necessary to reach active receivers; this will reduce
- the traffic level. This problem should be manageable through the
- same measures we already depend upon for stable operation of the
- Internet, but MBONE participants should be aware of it.
-
- * Which workstation platforms can support the mrouted program?
-
- The most convenient platform is a Sun SPARCstation simply because
- that is the machine used for mrouted development. An older
- machine (such as a SPARC-1 or IPC) will provide satisfactory
- performance as long as the tunnel fanout is kept in the 5-10
- range. The platforms for which software is available:
-
- Machines Operating Systems Network Interfaces
- -------- ----------------- ------------------
- Sun SPARC SunOS 4.1.1,2,3 ie, le, lo
- Vax or Microvax 4.3+ or 4.3-tahoe de, qe, lo
- Decstation 3100,5000 Ultrix 3.1c, 4.1, 4.2a ln, se, lo
- Silicon Graphics All ship with multicast
-
- There is an interested group at DEC that may get the software
- running on newer DEC systems with Ultrix and OSF/1. Also, some
- people have asked about support for the RS/6000 and AIX or other
- platforms. Those interested could use the mbone list to coordinate
- collaboration on porting the software to these platforms!
-
- An alternative to running mrouted is to run the experimental MOSPF
- software in a Proteon router (see MOSPF question below).
-
- * Where can I get the IP multicast software and mrouted program?
-
- The IP multicast software is available by anonymous FTP from the
- vmtp-ip directory on host gregorio.stanford.edu. Here's a
- snapshot of the files:
-
- ipmulti-pmax31c.tar
- ipmulti-sunos41x.tar.Z Binaries & patches for SunOS 4.1.1,2,3
- ipmulticast-ultrix4.1.patch
- ipmulticast-ultrix4.2a-binary.tar
- ipmulticast-ultrix4.2a.patch
- ipmulticast.README [** Warning: out of date **]
- ipmulticast.tar.Z Sources for BSD
-
- You don't need kernel sources to add multicast support. Included
- in the distributions are files (sources or binaries, depending
- upon the system) to modify your BSD, SunOS, or Ultrix kernel to
- support IP multicast, including the mrouted program and special
- multicast versions of ping and netstat.
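-
- As a quick check that the kernel changes took effect, you can, for
- example, use the multicast-capable ping to ping the all-hosts group
- 224.0.0.1; every multicast-capable host on the local subnet should
- answer.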
-
- Silicon Graphics includes IP multicast as a standard part of their
- operating system. The mrouted executable and ip_mroute kernel
- module are not installed by default; you must install the
- eoe2.sw.ipgate subsystem and "autoconfig" the kernel to be able to
- act as a multicast router. In the IRIX 4.0.x release, there is a
- bug in the kernel code that handles multicast tunnels; an
- unsupported fix is available via anonymous ftp from sgi.com in the
- sgi/ipmcast directory. See the README there for details on
- installing it.
-
- IP multicast is also included in Sun's Solaris 2.1 and in BSD 4.4
- when/if it is released.
-
- The most common problem encountered when running this software is
- with hosts that respond incorrectly to IP multicasts. These
- responses typically take the form of ICMP network unreachable,
- redirect, or time-exceeded error messages, which are a nuisance
- but mostly harmless until we get several such hosts each sending a
- packet in response to 50 packets per second of packet audio.
- These responses are in violation of the current IP specification
- and, with luck, will disappear over time.
-
- * What documentation is available?
-
- Documentation on the IP multicast software is included in the
- distribution on gregorio.stanford.edu (ipmulticast.README).
- RFC1112 specifies the "Host Extensions for IP Multicasting".
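-
- For application writers, the programming interface provided by the
- BSD-derived multicast extensions looks roughly like the sketch
- below (the group address and port number are invented for
- illustration, and details such as error checking are omitted):
-
-     #include <stdio.h>
-     #include <string.h>
-     #include <sys/types.h>
-     #include <sys/socket.h>
-     #include <netinet/in.h>
-     #include <arpa/inet.h>
-
-     int
-     main()
-     {
-         int s = socket(AF_INET, SOCK_DGRAM, 0);  /* ordinary UDP socket */
-         struct sockaddr_in sin;
-         struct ip_mreq mreq;
-         unsigned char ttl = 127;   /* originating TTL; see the threshold
-                                       discussion later in this FAQ */
-         char buf[2048];
-         int n;
-
-         memset(&sin, 0, sizeof(sin));
-         sin.sin_family = AF_INET;
-         sin.sin_port = htons(4242);              /* example port */
-         sin.sin_addr.s_addr = htonl(INADDR_ANY);
-         bind(s, (struct sockaddr *)&sin, sizeof(sin));
-
-         /* Join an example group; the kernel uses IGMP to tell the
-          * local multicast router that this host wants the traffic. */
-         mreq.imr_multiaddr.s_addr = inet_addr("224.2.0.1");
-         mreq.imr_interface.s_addr = htonl(INADDR_ANY);
-         setsockopt(s, IPPROTO_IP, IP_ADD_MEMBERSHIP,
-                    (char *)&mreq, sizeof(mreq));
-
-         /* To send, set the scope (TTL) and use an ordinary sendto()
-          * addressed to the group. */
-         setsockopt(s, IPPROTO_IP, IP_MULTICAST_TTL,
-                    (char *)&ttl, sizeof(ttl));
-         sin.sin_addr.s_addr = inet_addr("224.2.0.1");
-         sendto(s, "hello", 5, 0, (struct sockaddr *)&sin, sizeof(sin));
-
-         while ((n = recv(s, buf, sizeof(buf), 0)) > 0)
-             printf("received %d bytes\n", n);
-         return 0;
-     }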
-
- Multicast routing algorithms are described in the paper "Multicast
- Routing in Internetworks and Extended LANs" by S. Deering, in the
- Proceedings of the ACM SIGCOMM '88 Conference.
-
- There is an article in the June 1992 ConneXions about the first
- IETF audiocast from San Diego, and a later version of that article
- is in the July 1992 ACM SIGCOMM CCR. A reprint of the latter
- article is available by anonymous FTP from venera.isi.edu in the
- file pub/ietf-audiocast-article.ps. There is no article yet about
- later IETF audio/videocasts.
-
- * Where can I get a map of the MBONE?
-
- Two Postscript files are available on parcftp.xerox.com:
-
- /pub/net-research/mbone-map-{big,small}.ps
-
- The small one fits on one page and the big one is four pages that
- have to be taped together for viewing. This map is produced from
- topology information collected automatically from all MBONE nodes
- running the up-to-date release of the mrouted program (some are
- not yet updated so links beyond them cannot be seen). Pavel
- Curtis at Xerox PARC has added the mechanisms to automatically
- collect the map data and produce the map. (Thanks also to Paul
- Zawada of NCSA who manually produced an earlier map of the MBONE.)
-
- * What is DVMRP?
-
- DVMRP is the Distance Vector Multicast Routing Protocol; it is the
- routing protocol implemented by the mrouted program. An earlier
- version of DVMRP is specified in RFC-1075. However, the version
- implemented in mrouted is quite a bit different from what is
- specified in that RFC (different packet format, different tunnel
- format, additional packet types, and more). It maintains
- topological knowledge via a distance-vector routing protocol (like
- RIP, described in RFC-1058), upon which it implements a multicast
- forwarding algorithm called Truncated Reverse Path Broadcasting.
- DVMRP suffers from the well-known scaling problems of any
- distance-vector routing protocol.
-
- * What is MOSPF?
-
- MOSPF is the IP multicast extension to the OSPF routing protocol,
- currently an Internet Draft. John Moy has implemented MOSPF for
- the Proteon router. A network of routers running MOSPF can
- forward IP multicast packets directly, sending no more than one
- copy over any link, and without the need for any tunnels. This is
- how IP multicasting within a domain is supposed to work.
-
- * Can MOSPF and DVMRP interoperate?
-
- At the Boston IETF, John Moy agreed to add support for DVMRP to his
- MOSPF implementation. He hopes to have this completed "well in
- advance of the next IETF". When it is finished, you will be able
- to set up a DVMRP tunnel from an mrouted machine to a Proteon
- router, gluing together the DVMRP and MOSPF domains (the MOSPF
- domains will look pretty much like Ethernets to the multicast
- topology).
-
- The advantages to linking DVMRP with MOSPF are: fewer configured
- tunnels, and less multicast traffic on the links inside the MOSPF
- domain. There are also a couple of potential drawbacks: increasing
- the size of DVMRP routing messages, and increasing the number of
- external routes in the OSPF systems. However, it should be
- possible to alleviate these drawbacks by configuring area address
- ranges and by judicious use of MOSPF default routing.
-
- * How do I join the MBONE?
-
- STEP 1: If you are an end-user site (e.g., a campus), please
- contact your network provider. If your network provider is not
- participating in the MBONE, you can arrange to connect to some
- nearby point that is on the MBONE, but it is far better to
- encourage your network provider to participate to avoid
- overloading links with duplicate tunnels to separate end nodes.
- Below is a list of some network providers who are participating in
- the MBONE, but this list is likely not to be complete.
-
- AlterNet ops@uunet.uu.net
- CERFnet mbone@cerf.net
- CICNet mbone@cic.net
- CONCERT mbone@concert.net
- Cornell swb@nr-tech.cit.cornell.edu
- JvNCnet multicast@jvnc.net
- Los Nettos prue@isi.edu
- NCAR mbone@ncar.ucar.edu
- NCSAnet mbone@cic.net
- NEARnet nearnet-eng@nic.near.net
- OARnet oarnet-mbone@oar.net
- PSCnet pscnet-admin@psc.edu
- PSInet mbone@nisc.psi.net
- SESQUINET sesqui-tech@sesqui.net
- SDSCnet mbone@sdsc.edu
- SURAnet multicast@sura.net
- UNINETT mbone-no@uninett.no
-
- If you are a network provider, send a message to the -request
- address of the mailing list for your region to be added to that
- list for purposes of coordinating setup of tunnels, etc:
-
- mbone-eu: mbone-eu-request@sics.se Europe
- mbone-jp: mbone-jp-request@wide.ad.jp Japan
- mbone-korea: mbone-korea-request@mani.kaist.ac.kr Korea
- mbone-na: mbone-na-request@isi.edu North America
- mbone-oz: mbone-oz-request@internode.com.au Australia
- mbone: mbone-request@isi.edu other
-
- These lists are primarily aimed at network providers who would be
- the top level of the MBONE organizational and topological
- hierarchy. The mailing list is also a hierarchy; mbone@isi.edu
- forwards to the regional lists, then those lists include expanders
- for network providers and other institutions. Mail of general
- interest should be sent to mbone@isi.edu, while regional topology
- questions should be sent to the appropriate regional list.
-
- Individual networks may also want to set up their own lists for
- their customers to request connection of campus mrouted machines
- to the network's mrouted machines. Some that have done so were
- listed above.
-
- STEP 2: Set up an mrouted machine, build a kernel with IP
- multicast extensions added, and install the kernel and mrouted;
- or, install MOSPF software in a Proteon router.
-
- STEP 3: Send a message to the mbone list for your region asking to
- hook in, then coordinate with existing nodes to join the tunnel
- topology.
-
- * How is a tunnel configured?
-
- Mrouted automatically configures itself to forward on all
- multicast-capable interfaces, i.e. interfaces that have the
- IFF_MULTICAST flag set (excluding the loopback "interface"), and
- it finds other mrouteds directly reachable via those interfaces.
- To override the default configuration, or to add tunnel links to
- other mrouteds, configuration commands may be placed in
- /etc/mrouted.conf. There are two types of configuration command:
-
- phyint <local-addr> [disable] [metric <m>] [threshold <t>]
-
- tunnel <local-addr> <remote-addr> [metric <m>] [threshold <t>]
-
- The phyint command can be used to disable multicast routing on the
- physical interface identified by local IP address <local-addr>, or
- to associate a non-default metric or threshold with the specified
- physical interface. Phyint commands should precede tunnel
- commands.
-
- The tunnel command can be used to establish a tunnel link between
- local IP address <local-addr> and remote IP address <remote-addr>,
- and to associate a non-default metric or threshold with that
- tunnel. The tunnel must be set up in the mrouted.conf files of
- both ends before it will be used. The keyword "srcrt" can be
- added just before the keyword "metric" to choose source-route
- encapsulation for the tunnel, which is necessary only if the other
- end has not yet been upgraded to use IP encapsulation. Upgrading is
- highly encouraged.
- If the methods don't match at the two ends, the tunnel will appear
- to be up according to mrouted typeouts, but no multicast packets
- will flow.
-
- The metric is the "cost" associated with sending a datagram on the
- given interface or tunnel; it may be used to influence the choice
- of routes. The metric defaults to 1. Metrics should be kept as
- small as possible, because mrouted cannot route along paths with a
- sum of metrics greater than 31. When in doubt, the following
- metrics are recommended:
-
- LAN, or tunnel across a single LAN: 1
- any subtree with only one connection point: 1
- serial link, or tunnel across a single serial link: 1
- multi-hop tunnel: 2 or 3
- backup tunnels: sum of metrics on primary path + 1
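-
- For example, if the primary path between two sites crosses two
- tunnels with metrics 1 and 2, a backup tunnel between them would be
- given metric 1 + 2 + 1 = 4, so that it carries traffic only when
- the primary path is unavailable.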
-
- The threshold is the minimum IP time-to-live required for a
- multicast datagram to be forwarded to the given interface or
- tunnel. It is used to control the scope of multicast datagrams.
- (The TTL of forwarded packets is only compared to the threshold,
- it is not decremented by the threshold. Every multicast router
- decrements the TTL by 1.) The default threshold is 1.
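-
- For example, a datagram arriving at an mrouted with TTL 100 will be
- forwarded onto a tunnel whose threshold is 64 (and sent with TTL
- 99), but not onto a tunnel whose threshold is 128.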
-
- Since the multicast routing protocol implemented by mrouted does
- not yet prune the multicast delivery trees based on group
- membership (it does something called "truncated broadcast", in
- which it prunes only the leaf subnets off the broadcast trees), we
- instead use a kludge known as "TTL thresholds" to prevent
- multicasts from traveling along unwanted branches. This is NOT
- the way IP multicast is supposed to work; MOSPF does it right, and
- mrouted will do it right some day.
-
- Before the November 1992 IETF we established the following
- thresholds. The "TTL" column specifies the originating IP
- time-to-live value to be used by each application. The "thresh"
- column specifies the mrouted threshold required to permit passage
- of packets from the corresponding application, as well as packets
- from all applications above it in the table:
- TTL thresh
- --- ------
- IETF chan 1 low-rate GSM audio 255 224
- IETF chan 2 low-rate GSM audio 223 192
- IETF chan 1 PCM audio 191 160
- IETF chan 2 PCM audio 159 128
- IETF chan 1 video 127 96
- IETF chan 2 video 95 64
- local event audio 63 32
- local event video 31 1
-
- It is suggested that a threshold of 128 be used initially, and
- then raised to 160 or 192 only if the 64 Kb/s voice is excessive
- (GSM voice is about 18 Kb/s), or lowered to 64 to allow video to
- be transmitted over the tunnel.
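-
- Putting the pieces together, a minimal /etc/mrouted.conf for a leaf
- site with a single tunnel to its network provider might look like
- the following (the addresses here are invented for illustration):
-
-     tunnel 128.99.1.2 128.99.50.1 metric 1 threshold 128
-
- with more tunnel lines, and phyint lines where the defaults need to
- be overridden, added as the topology grows.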
-
- Mrouted will not initiate execution if it has fewer than two
- enabled vifs, where a vif (virtual interface) is either a physical
- multicast-capable interface or a tunnel. It will log a warning if
- all of its vifs are tunnels, based on the reasoning that such an
- mrouted configuration would be better replaced by more direct
- tunnels (i.e., eliminate the middle man). However, to create a
- hierarchical fanout for the MBONE, we will have mrouted
- configurations that consist only of tunnels.
-
- Once you have edited the mrouted.conf file, you must run mrouted
- as root. See ipmulticast.README for more information.
-
- * What hardware and software is required to receive audio?
-
- The platform we've been using primarily is the Sun SPARCstation,
- though we have also used the SGI Indigo. You don't need any
- additional hardware (assuming your machine is new enough to have
- come with a microphone; otherwise you will have to buy one). The
- audio coding is provided by the
- built-in 64 Kb/s audio hardware plus software compression for
- reduced data rates (32 Kb/s ADPCM, 13 Kb/s GSM, and 4.8 Kb/s LPC).
- The software for packet audio and video is available by anonymous
- FTP. In the future, we expect this or similar software to be
- available for other platforms such as NeXT or Macintosh. One key
- requirement, however, is that the host machine have IP multicast
- software added to its kernel. You can add it now to SunOS 4.1.1,
- 4.1.2, or 4.1.3; Sun includes it in the standard kernel with Solaris
- 2.1 (though these programs may not yet run on Solaris).
-
- A pre-release of the LBL audio tool "vat" is available by
- anonymous FTP from ftp.ee.lbl.gov in the file vat.tar.Z. Included
- are a binary suitable for use on any version of SPARCstation, and
- a manual entry. Also available are dec-vat.tar.Z for the DEC 5000
- and sgi-vat.tar.Z for the SGI Indigo. The authors, Van Jacobson
- and Steve McCanne, say the source will be released "soon". You
- may find that the vat tar file includes a patch for the kernel
- file in_pcb.c. This has been superseded by a patch that is now
- included in the IP multicast software release for SunOS. These
- patches allow demultiplexing of separate multicast addresses so
- that multiple copies of vat can be run for different conferences
- at the same time.
-
- In addition, a beta release of both binary and source for the
- UMass audio tool NEVOT, written by Henning Schulzrinne, is
- available by anonymous FTP from gaia.cs.umass.edu in the pub/nevot
- directory (the filename may change from version to version).
- NEVOT runs on the SPARCstation and on the SGI Indigo.
-
- You can test vat or NEVOT point-to-point between two hosts with a
- standard SunOS kernel, but to conference with multiple sites you
- will need a kernel with IP multicast support added. IP multicast
- invokes Ethernet multicast to reach hosts on the same subnet; to
- link multiple subnets you can set up tunnels, assuming sufficient
- bandwidth exists.
-
- Once you build the SunOS kernel, you should make sure that the
- kernel audio buffer size variable is patched from the standard
- value of 1024 to 160 (decimal) to match the audio packet size for
- minimum delay. The IP multicast software release includes patched
- versions of the audio driver modules, but if for some reason you
- can't use them, you can use adb to patch the kernel as shown
- below. These instructions are for SunOS 4.1.1 and 4.1.2; change
- the variable name to amd_bsize for 4.1.3, or Dbri_recv_bsize for
- the SPARC 10:
-
- adb -k -w /vmunix /dev/mem
- audio_79C30_bsize/W 0t160 (to patch the running kernel)
- audio_79C30_bsize?W 0t160 (to patch kernel file on disk)
- <Ctrl-D>
-
- If the buffer size is incorrect, there will be bad breakup when
- sound from two sites gets mixed for playback.
-
- * What hardware and software is required to receive video?
-
- No special hardware is required to receive the slow-frame-rate
- video prevalent on the MBONE because the decoding and display are
- done entirely in software. The data rate is typically 25-150 Kb/s. To
- be able to send video requires a camera and a frame grabber. Any
- camcorder with a video output will do. For monitor-top mounting,
- the wide-angle range is most important. There is also a small
- (about 2x2x5 inches) monochrome CCD camera suitable for desktop
- video conference applications available for around $200 from
- Stanley Howard Associates, Thousand Oaks, CA, phone 805-492-4842.
- Subjectively, it seems to give a picture somewhat less crisp than
- a typical camcorder, but sufficient for 320x240 resolution
- software video algorithms. There are also color and infrared (for
- low light, with IR LED illumination) models. The programs listed
- below use the Sun VideoPix card to input video on the SPARCstation.
-
- The video we used for the July 1992 IETF was the DVC (desktop
- video conferencing) program from BBN, written by Paul Milazzo and
- Bob Clements. This program has since become a product, called
- PictureWindow. Contact picwin-sales@bbn.com for more information.
-
- For the November 1992 IETF and several events since then, we have
- used two other programs. The first is the "nv" (network video)
- program from Ron Frederick at Xerox PARC, available from
- parcftp.xerox.com in the file pub/net-research/nv.tar.Z. An 8-bit
- visual is recommended to see the full image resolution, but nv
- also implements dithering of the image for display on 1-bit
- visuals (monochrome displays). Shared memory will be used if
- present for reduced processor load, but display to remote X
- servers is also possible. On the SPARCstation, the VideoPix card
- is required to originate video. Sources are also available,
- as are binary versions for the SGI Indigo and DEC 5000 platforms.
-
- Also available from INRIA is the IVS program written by Thierry
- Turletti and Christian Huitema. It uses a more sophisticated
- compression algorithm, a software implementation of the H.261
- standard. It produces a lower data rate, but because of the
- processing demands the frame rate is much lower and the delay
- higher. System requirements: Sun SPARCstation or SGI Indigo,
- video grabber (VideoPix Card for SPARCstations), video camera,
- X-Windows with Motif or Tk toolkit. Binaries and sources are
- available for anonymous ftp from avahi.inria.fr in the file
- pub/videoconference/ivs.tar.Z or ivs_binary_sparc.tar.Z.
-
- * How can I find out about teleconference events?
-
- Many of the audio and video transmissions over the MBONE are
- advertised in "sd", the session directory tool developed by
- Van Jacobson at LBL. Session creators specify all the address
- parameters necessary to join the session, then sd multicasts the
- advertisement to be picked up by anyone else running sd. The
- audio and video programs can be invoked with the right parameters
- by clicking a button in sd. From ftp.ee.lbl.gov, get the file
- sd.tar.Z or sgi-sd.tar.Z or dec-sd.tar.Z.
-
- Schedules for IETF audio/videocasts and some other events are
- announced on the IETF mailing list (send a message to
- ietf-request@cnri.reston.va.us to join). Some events are also
- announced on the rem-conf mailing list, along with discussions of
- protocols for remote conferencing (send a message to
- rem-conf-request@es.net to join).
-
- * Have there been any movements towards productizing any of this?
-
- The network infrastructure will require resource management
- mechanisms to provide low delay service to real-time applications
- on any significant scale. That will take a few years. Until that
- time, product-level robustness won't be possible. However,
- vendors are certainly interested in these applications, and
- products may be targeted initially to LAN operation.
-
- IP multicast host extensions are being added to some vendors'
- operating systems. That's one of the first steps. Proteon has
- announced IP multicast support in their routers. No network
- provider is offering production IP multicast service yet.
-
- ======================================================================
-