Newsgroups: sci.virtual-worlds
Path: sparky!uunet!zaphod.mps.ohio-state.edu!sol.ctr.columbia.edu!destroyer!ubc-cs!uw-beaver!news.u.washington.edu!milton.u.washington.edu!hlab
From: dlwood@mailbox.syr.edu (David L Wood)
Subject: Re: TECH: My standard is better than your standard.
Message-ID: <1992Jul26.075108.9516@u.washington.edu>
Originator: hlab@milton.u.washington.edu
Sender: news@u.washington.edu (USENET News System)
Organization: University of Washington
References: <1992Jul19.055422.12836@u.washington.edu>
Date: Sun, 26 Jul 1992 05:24:43 GMT
Approved: cyberoid@milton.u.washington.edu
Lines: 530

In article <1992Jul19.055422.12836@u.washington.edu>
Jeremy Lee <s047@sand.sics.bu.oz.au> writes:
>
>
>In article <BrHr16.Mvq@watserv1.waterloo.edu> Bernie writes:
>
>>In article <1992Jul15.233601.6824@u.washington.edu> bobp@hal.com (Bob
>>Pendelton) writes:
>>
>>>> How do
>>>> we subdivide space into more manageable chunks (to avoid having to keep
>>>> the entire universe's database in memory on every machine)?
>>>Oct-trees look good.
>>
>>Well, perhaps; but they're not terribly efficient. (I'll see if I can
>>get Dave (Stampe) to post his thoughts on this).
>
>I've got an answer to this. See my report. Objects themselves decide
>which other objects to talk to. Objects naturally form groups that
>talk almost exclusively to each other, and that, to all intents and
>purposes, is the same as a "world".
>
Original question: how do we make the universe more compact or easier to
deal with on clients' machines?

If the universe is defined by a message router and what that message router
tells you about your surrounding area, then an idea someone (whose name I
cannot recall) presented applies here: the message router should inform you
(the client) of the existence of an object only if it occupies one or more
pixels after being projected with perspective shrinkage. For objects very
far away, fewer details are visible on the client's display, and for this
reason it seems logical to:

   store in the machine's memory only the geometry and attributes (and
   ...behavior?) of those objects that contribute to a pixel or more after
   perspective projection.

If you want to shape the object definition around this, a tree of
primitives (possibly similar to CSG), sorted with the largest primitives
at the top and the smallest at the bottom, would allow slow machines
to discard the unnecessary details and fast machines to use the whole
data structure for very nice pictures. From the router's point of view,
it only has to pass those levels of the object tree that would contribute
significantly to a viewer's display, and ignore those that are too small
to be visible.
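
To make that concrete, here's a rough sketch in C of the test the router
would run per viewer. Everything in it (the struct layout, the one-pixel
threshold, the send_primitive() hook) is invented for illustration, not a
worked-out proposal:

    /* Sketch only: walk a size-sorted primitive tree and pass along
     * just the levels that cover at least one pixel for this viewer.
     * All names and the threshold are made up. */
    #include <math.h>
    #include <stddef.h>

    struct prim_node {
        double radius;                 /* bounding-sphere radius */
        struct prim_node **children;   /* smaller primitives below */
        size_t nchildren;
    };

    extern void send_primitive(const struct prim_node *n); /* router hook */

    /* Apparent size in pixels: angular size times pixels per radian. */
    static double apparent_pixels(double radius, double dist,
                                  double pix_per_radian)
    {
        if (dist <= radius)
            return 1e9;                /* viewer inside the bound: huge */
        return 2.0 * atan(radius / dist) * pix_per_radian;
    }

    void send_visible_levels(const struct prim_node *n, double dist,
                             double pix_per_radian)
    {
        size_t i;
        if (apparent_pixels(n->radius, dist, pix_per_radian) < 1.0)
            return;                    /* too small: prune whole subtree */
        send_primitive(n);
        for (i = 0; i < n->nchildren; i++)
            send_visible_levels(n->children[i], dist, pix_per_radian);
    }

Since the tree is sorted largest-first, pruning a node prunes everything
smaller beneath it for free.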

>>>Imagine a library world. [...]
>>>Only objects with a high enough security level that are members of the right
>>>classes can access restricted data.
>>>It would make sense that the god of that world would filter messages
>>>so that objects that you can't access can't even be seen by you.
>>
>>I suspect that this sort of thing is best handled by the object
>>itself. If I create a virtual book, it's up to me (the creator and
>>proprietor of the book's contents) to decide who should read it. (If
>>I sell the book, I sell control of its attributes and behaviour as
>>well).
>>
>>If an object is to be invisible to certain other objects, it simply
>>doesn't acknowledge messages from that object (including messages like
>>"what do you look like" or "what is your location").
>
>Heyyyy! We think alike. I came up with exactly the same thing last
>night. Each object is responsible for only itself.

I'll respond to the idea of message requests like "What do you look like?"
later on.

>
>>God should have nothing to do with it; god is a security hole.
>
>>I think distance is perhaps one of the criteria for whose messages I
>>get, but it shouldn't be the only one. Perhaps size over distance,
>>giving apparent size; anything too small and/or too far away doesn't
>>matter. That would solve the problems of the sun, moon and clouds,
>>aircraft, etc.
>>
>>However, I still have to worry about all the stuff in the office next
>>to mine.
>
>Probably the best way to deal with this is to use a system where if an
>object doesn't get seen for one frame, then the chance of even
>attempting to render it for the next frame goes down. You hit a lower
>limit, and say, for nine frames out of ten the renderer doesn't even
>look at it. If on that one go in ten it is suddenly seen again, then
>the priority goes right back up and an attempt is made to render it
>every frame. Of course, it depends where you set your limits, but in
>this case, when you actually end up looking at it, it will appear
>within ten frames, or 1/3 of a second in some systems. Short enough
>for you not to notice. It will take that long for it to move in from
>your peripheral vision.

I don't know about you, but objects flickering in and out in my peripheral
vision would be a bit distracting! :) But clipping is best handled by the
renderer, not the message router.
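
(For the record, the scheme Jeremy describes is simple enough to state in a
few lines of C. This is only my reading of it; the decay step and the
1-in-10 floor are his example numbers, the rest is invented:)

    struct render_rec {
        int period;            /* render every Nth frame; starts at 1 */
    };

    void update_priority(struct render_rec *r, int was_visible)
    {
        if (was_visible)
            r->period = 1;     /* seen again: full priority at once */
        else if (r->period < 10)
            r->period++;       /* unseen: decay toward 1 frame in 10 */
    }

    int should_consider(const struct render_rec *r, long frame)
    {
        return frame % r->period == 0;
    }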


>
>>>This all leads to the idea of a world and its god being defined by a
>>>message router.
>>
>>Yes.
>
>No. "Worlds" per se don't exist in my model. Only objects do. If a
>group of objects decide to associate, then you can call that
>association a "world", but it still doesn't really exist. Message
>routers should be completely transparent, and should have no bearing
>on the objects.

Well, a superset of YOUR model includes _your_ objects as well as a new
term: worlds. I really like the idea of a message router controlling the
passing of messages between objects in a world.

>
>>>Put a time stamp on each message sent by each object.
>>
>>That means we have to "synchronize watches" across the network; that's
>>probably okay, but...
>>
>>>Process messages in time stamp order.
>>
>>I would say an object should respond to messages *in the order they
>>arrive* without regard to timestamps. People with slow network
>>connections don't have as much control over the universe; that's life.
>>The alternative is to deliberately introduce delta-delays on every
>>single interaction, which would make the world unusable. (Cascading
>>interactions would be even worse, since you'd have to use longer
>>deltas).
>
>I agree. It's easier to put the messages in a queue, with timestamp
>attached. If an object wants to pay attention to the timestamp, then
>that is its business.
>
>The universe itself throws causality out the window on occasion
>anyway, so why worry about that?
>

No need to synchronize watches. The messages are processed by the router,
which stamps each message with its own time. One thing to remember in a
networked environment is that the clients should never be trusted to
perform any task that you can do yourself (where "you" is the server).
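
Something like this at the router end; the message struct and the
enqueue() hook are invented names, just to show where the stamping happens:

    #include <time.h>

    struct vr_msg {
        long   src_id, dst_id;
        time_t router_stamp;    /* the router's clock, nobody else's */
        /* ... payload ... */
    };

    extern void enqueue(struct vr_msg *m);  /* deliver in arrival order */

    void router_accept(struct vr_msg *m)
    {
        m->router_stamp = time(NULL);  /* overwrite whatever the client sent */
        enqueue(m);
    }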


>>>For some types of applications causality violations can just be
>>>ignored. For others you must be able to "rewind" time and play back
>>>all the events in their correct order so that history happens
>>>correctly.
>>
>>Yikes! That strikes me as a very complicated task, especially since
>>some interactions will depend on others. And from a user standpoint,
>>watching an object I picked up suddenly and inexplicably jump from my
>>hand down to the floor next to the table would be *very*
>>disconcerting.
>
>You can't rewind it. It's a non-deterministic system.
>

Well, all we'd really need is a virtual camera that stores all the messages
received by your client; you can then take this message list, along with
the object descriptions, and recreate the event however you like, offline
if you want.
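
The virtual camera is then nothing more than a message log. A naive sketch,
reusing the vr_msg struct from the previous fragment (a real version would
have to worry about byte order, which this one cheerfully ignores):

    #include <stdio.h>

    void camera_record(FILE *log, const struct vr_msg *m)
    {
        fwrite(m, sizeof *m, 1, log);     /* append everything we receive */
    }

    int camera_replay_next(FILE *log, struct vr_msg *m)
    {
        return fread(m, sizeof *m, 1, log) == 1;  /* 0 at end of tape */
    }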


>>>> What should our unit of time measurement be?
>>>The second.
>>>> What precision should it have?
>>>Time would appear to need arbitrary precision and need to be scalable.
>>
>>Well, maybe.
>>
>>We still need to have some notion of a "tick time". If we were to
>>adopt the timestamps you described, the timestamp would have to be in
>>some kind of time units and have some number of bits of precision.
>>
>>>I expect that every dimension in VR will need arbitrary precision and
>>>need to be scalable.
>>
>>I'm not sure that's practical, but I think further discussion is
>>needed.
>
>You can't really scale time. It's just not practical. The system will
>run at the speed it wants. You can't make it go faster, although I
>suppose you can make it go slower.
>

If you have a system of objects whose behavior depends on the time variable,
you could certainly scale THEIR time, slowing down your virtual model
railroad or speeding up your virtual ocean tides. Your personal time,
as recorded by the message router, is unrelated. You can't (legitimately)
slow down another client's time, or the time of the entire network, since
this would upset people; I know I'd be upset!
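
Per-object time costs almost nothing to implement. A minimal sketch, with
invented names:

    struct obj_clock {
        double scale;     /* 2.0 = double-speed tides, 0.5 = half */
        double local_t;   /* the object's private time */
    };

    void obj_tick(struct obj_clock *c, double real_dt)
    {
        /* real_dt comes from the router's clock, which is never scaled;
         * only THIS object's time speeds up or slows down. */
        c->local_t += real_dt * c->scale;
    }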


>I personally like the idea of 128 bit numbers to describe spatial
>co-ordinates. If you scale 32 bit numbers, then you are going to get
>just as large a speed decrease in your calculations, and you have the
>extra hassle of making sure that everyone is using the same scale
>factors. And what if an object just won't fit in the current world
>scale?
>

Define floors, not ceilings. 128 bits is not enough. You could use
a system of variable-length coordinates: an id byte would indicate
how many bits (or bytes, take your choice) are used to locate a
coordinate. Really, anything that extends the ability of the protocol
is better than putting a limit like 16 bytes on it. I'll admit 16 bytes
is fairly hefty, but if you want to dump an entire universe into a file
that contains every object at a particular instant (a universe snapshot),
then 128 bits isn't going to cut it.
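
Here's a sketch of what I mean by an id byte. The wire layout (one length
byte, then that many magnitude bytes, most significant first) is just one
possibility, not something anyone has agreed on:

    #include <stddef.h>

    /* Write one variable-length coordinate; returns bytes used.
     * Layout is an assumption: [length byte][magnitude bytes...]. */
    size_t encode_coord(unsigned char *out, const unsigned char *mag,
                        unsigned char nbytes)
    {
        size_t i;
        out[0] = nbytes;               /* id byte: length of what follows */
        for (i = 0; i < nbytes; i++)
            out[1 + i] = mag[i];
        return 1 + (size_t)nbytes;
    }

    /* Read one back; returns bytes consumed. */
    size_t decode_coord(const unsigned char *in, unsigned char *mag,
                        unsigned char *nbytes)
    {
        size_t i;
        *nbytes = in[0];
        for (i = 0; i < *nbytes; i++)
            mag[i] = in[1 + i];
        return 1 + (size_t)*nbytes;
    }

A 16-byte coordinate then costs 17 bytes on the wire, but a 4-byte one
costs only 5, and there's no ceiling.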


>>>> How do we ensure that a given color looks the same to
>>>> everyone? Does this even matter?
>>>Yes, it matters very much. Take a look at Xcms in the X11R5 release.
>>
>>I guess what I meant was more like "you see the color blue one way, I
>>see it in another, but we both agree it's blue and that it's darker
>>(or lighter) than another shade of blue; does the actual color really
>>matter?"
>
>Irrelevant.

Color: RGB is fairly standard, and converting it to other forms isn't that
complicated. For transmission across a network, RGB is fine.
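
For instance, getting a luminance (grey) value out of RGB for a monochrome
client is one line, using the standard NTSC weights:

    /* NTSC luminance from RGB components in [0,1]. */
    double rgb_to_luma(double r, double g, double b)
    {
        return 0.299 * r + 0.587 * g + 0.114 * b;
    }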


>>>> How do we represent and share surface textures?
>
>Doesn't matter. Each object does it the way it wants. (See my paper)

Hmm, textures: that's a serious question. Flat polygons won't live
for long with all the developments in microprocessor power. Current
raytracers depend on geometric primitives, splines, parametric
equations, and other patches (a la Bezier). To create textures, external
images are sometimes "wrapped" around those primitives, or a primitive
is 'added' to a surface function (as in adding a noise function to
make a bumpy surface). For use in networked systems, these techniques
probably won't be too pleasant from a bandwidth standpoint. :)


>>>The people I know who really know 3D graphics tell me that mip-maps are
>>>the way to go...
>>
>>I'll see if I can find a reference... anybody out there got any pointers?
>
>Nope, but I'd like some.

In Computer Graphics: Principles and Practice (2nd edition, by Foley,
van Dam, Feiner, and Hughes), on page 826:

Basically, a MIP map stores the R, G, and B components of the original
image in the upper-left, upper-right, and lower-left boxes of a square
that has been divided into four equal sections. These three boxes are the
unfiltered source image. The lower-right box is again subdivided equally;
in this box, however, the component images are reduced (i.e. filtered) by
a factor of FOUR (4) in area. The lower-right box of this set of images
is again divided into four boxes, and so on, until you have only four
pixels, which represent the cumulative filtering of the image. The MIP
map is then used as a look-up table: pixels of the source image that are
a certain distance away read from the appropriately filtered layer, and
so on. Compared to the straightforward method of taking a pixel on the
screen and filtering all the source pixels that project onto it, this is
lots faster. Interpolation is used between "layers" to help smooth out
the reduction.
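
The filtering step itself is just a box average over 2x2 blocks, applied
once per level. A sketch for a single channel, assuming the image side is
a power of two:

    /* Produce the next MIP level: each output pixel is the average of
     * a 2x2 block of the input, so the area drops by a factor of four. */
    void mip_reduce(const unsigned char *src, unsigned char *dst, int side)
    {
        int x, y, half = side / 2;
        for (y = 0; y < half; y++)
            for (x = 0; x < half; x++) {
                int sum = src[(2*y)   * side + 2*x]
                        + src[(2*y)   * side + 2*x + 1]
                        + src[(2*y+1) * side + 2*x]
                        + src[(2*y+1) * side + 2*x + 1];
                dst[y * half + x] = (unsigned char)(sum / 4);
            }
    }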

This is really not terribly related to surfaces; it is projection of
images, which won't make a flat polygon any bumpier.


>>>Extensibility is a requirement for any protocol.
>>
>>Agreed!!!
>
>Of course.

Who am I to disagree?


[ junk deleted ]


>>That's true. However, adding bandwidth is easier than adding
>>processing power; you just add more physical connections and multiplex
>>the traffic over them. (True, you can go to an extensible
>>multiprocessor architecture, but that's still more complex than
>>increasing bandwidth).
>
>Look at the bandwidth increases that are just around the corner, when
>people really begin using optic fibre. Currently it's trunk line only.
>What happens when you have the equivalent of a trunk line leading into
>your PC?

Don't build a protocol based on promises of better technology.


>>>I like the idea of structuring it more like an air traffic control
>>>system. You have a bunch of routing centers (called routers) that
>>>control the propagation of messages. Each world has at least one
>>>router. But, a world can be made up of multiple routers that each
>>>handle specific regions of the world. It only gets messy at the
>>>boundaries between regions. The collection of all communicating routers
>>>is called a universe.
>>
>>This sounds very, very good to me.
>
>Routing and all network stuff should be transparent and should have no
>bearing on what constitutes a world. In that sentence, you are basically
>saying that objects are restricted to worlds in close physical
>proximity, and I therefore wouldn't be able to connect to a world that's
>in Tokyo, for example.

Nonononooo! If you want to go to Tokyo's domain, and hence use Tokyo's
router, you can instruct your current router to transfer all messages from
you to the Tokyo router and to allow messages from the Tokyo router
to be sent back to you! Simplicity!
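
The hand-off is one conditional in the router's dispatch loop. A sketch,
reusing the vr_msg struct from earlier; forward_to() and deliver_local()
are invented hooks, and the Tokyo router is just an id:

    struct route {
        long client_id;
        long remote_router;    /* 0 = purely local traffic */
    };

    extern void forward_to(long router_id, struct vr_msg *m);
    extern void deliver_local(struct vr_msg *m);

    void route_message(const struct route *r, struct vr_msg *m)
    {
        if (r->remote_router != 0 && m->src_id == r->client_id)
            forward_to(r->remote_router, m);   /* outbound to Tokyo */
        else
            deliver_local(m);                  /* everything else as usual */
    }

The inbound direction is the same thing with the roles reversed, running
on Tokyo's router.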


>>>Routers aren't tied to specific iron. For small worlds multiple
>>>routers can run on a single machine. For big worlds you may need
>>>multiple very large machines.
>>
>>Yes.
>
>Sounds like your routers are central controllers, and I thought we all
>agreed that they just won't be able to cope.

Honestly, how much CPU time is burned up by sorting out messages and
occasionally computing a size-over-distance function? Not much.


>>>I think that objects will need to send out position change messages
>>>when they move. And they'll need to send out "here I am, this is what I
>>>look like and this is what I do" sorts of messages to anyone they want
>>>to interact with. I think that we will want to move the visual
>>>information and at least some of its behavioural information to
>>>individual viewing stations. Rendering across a network is hard to do in
>>>real time.
>>
>>Right.
>
>Not necessarily. Depends what you mean.

Never trust a client to handle something that could be misused. Also,
imagine you're on a slow machine, I mean SLOW! You run in wireframe mode,
and your tiny 2400 baud modem can barely keep up with the message flow.
(I'm not this deprived, but imagine those who are.) Suddenly you are
hit with a thousand requests from one source (an enemy) for a
fully detailed object description! Thousands of K that would bog you
down like a fly on flypaper. Of course you could put some restrictions on
the frequency of requests and such, but honestly, the router should handle
this. Also, any slight variation in the course of my virtual airplane would
have to be transmitted to everyone on the network I am visible to; that's
fine if the router includes it in its round of messages to each client, but
if I had to send out these messages myself, the strain would be horrific.
You NEED a central control, by definition of DOMAIN.
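
And throttling at the router is cheap. A sketch with one counter per
requesting source; the 10-second window and the limit of 20 requests are
pulled out of the air:

    #include <time.h>

    struct req_limit {
        int    count;         /* requests seen in the current window */
        time_t window_start;
    };

    /* One of these per source, looked up however the router keeps its
     * tables. Returns 0 to drop the request before the client ever
     * sees it. */
    int allow_request(struct req_limit *l, time_t now)
    {
        if (now - l->window_start >= 10) {  /* start a fresh window */
            l->window_start = now;
            l->count = 0;
        }
        return ++l->count <= 20;
    }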


>>>But managing the data flow in a highly populated world is going to be tough.
>>
>>Yes -- it's probably the main "unanswered question" that (so far) no
>>one's put forward a good solution for.
>
>You can't, and you don't need to. Any system is going to be able to be
>swamped by too many objects. But by the time it gets that big, most
>people will be leaving anyway, because it's so congested.

Speculation, and pessimism at that. Is that the attitude that created the
Internet?


>>>So, what are the hardware requirements for a "VR station" or "deck"
>>>that can deal with this protocol?
>>
>>This is going to be controversial. Do we insist on HMDs? Do we set
>>frames/second thresholds? Do we have a "standard world" for
>>benchmarking? What about input devices?
>
>See my paper.

What paper, and where?


>>> How slow a machine do you want to target?
>>> How little RAM are you willing to target?
>>
>>These are both difficult numbers to wrestle with. They're dependent
>>on how you store the data; if you're representing things with CSG, you
>>don't need nearly as much RAM as if you're using polygons... and if
>>you're using voxels, you need even more.
>
>See my paper.

Speed of the machine and RAM requirements are really NOT elements of the
protocol; they are elements of IMPLEMENTING the protocol. If someone
can write the protocol and renderer and all with a meager 2 megs of RAM,
then why should the protocol care? If the particular tricks and techniques
of a particular implementation allow a slow machine to keep up with
network messages, why make it suffer?


>>Same with processor speed; you need more MIPS if you're dealing with
>>CSG than if you're just doing polys or wireframe.
>
>See my paper.

The choice of polygons/wireframe/stereo viewing/et cetera is up to
the client; the protocol doesn't decide that. It shouldn't! Imagine having
to send whole raytraced images over a network! :)


>>I don't think we can set specific numbers; if we support multiple object
>>resolutions (and representations) then we should be able to accommodate
>>a wide range of machines.
>
>See my paper. (This is getting monotonous!)

Yep.


>>> Do you assume access to a large secondary storage device?
>>
>>Again, it depends on how you do things; in general, I'd say "yes" (so
>>you can do local caching of object attributes).
>
>Doesn't matter.
>
>>> What does input from a generic input device look like in the protocol?
>>
>>Very good question. A more basic one is "what do we mean by a
>>'generic input device'"?
>>
>>> How do I substitute a dial box or a mouse for a data glove?
>>
>>Dave Stampe and I are looking at this very question for our REND386
>>stuff. (No, we haven't really answered it yet).
>
>Just change the object. Re-write a few internal processes, keep the same
>interface with the rest of it.
>
>>>How many channels? What capabilities? Will a Disney Sound Source do?
>>
>>Stereo, certainly; in terms of where the sound gets processed... I
>>would say in the user's station, which means we need to move a *lot*
>>of sound data over the network.
>
>Not necessarily. See my paper.
>
>>>In my view of what is being discussed, there are at least three
>>>different hardware configurations to worry about. There is the
>>>hardware that objects run on. There is the hardware that worlds run on
>>>(a world is a message router). And the viewing station that people use
>>>to interact with worlds.
>>
>>Right.
>
>Nope. Chuck the message routers, because they are simply another
>bottleneck. They are not needed. Since the viewing station is also
>running objects, then you are just left with one type of machine.
>
>>>All three of these things can run on a single computer. But I expect
>>>that they will be mapped onto workstation/PC machines for viewing
>>>stations, and servers for message routing and for running non-person
>>>objects.
>
>No, what will happen is that objects just get shared. Rendering bits
>will get put on the viewing station etc.
>
>>Right. A workstation or PC is the most likely viewing platform.
>>What's not as clear is what the viewing mechanism will be; a monitor,
>>shutter glasses, head-mounted display...? Also input devices... 3D
>>mouse, glove, datasuit?
>
>Doesn't matter.
>
>>I suspect Unix boxes will be popular for implementing regions and
>>objects.
>
>All machines will have to implement objects.
>
>>Anyway, thanks for all the good ideas...
>>
>>--
>> Bernie Roehl, University of Waterloo Electrical Engineering Dept
>> Mail: broehl@sunee.waterloo.edu OR broehl@sunee.UWaterloo.ca
>> BangPath: uunet!watmath!sunee!broehl
>> Voice: (519) 885-1211 x 2607 [work]
>
>So, where is this vaunted paper that I keep talking about? I'll post the
>text after this, but can someone tell me where to send the .ps file?
>(Aw, stuff it. I'll just post it. No doubt the moderators will do what
>is correct with it.)
>
>***********************************************************************
>* . Jeremy Lee s047@sand.sics.bu.oz.au Student of Everything *
>* /| "Where the naked spotless intelect is without *
>* /_| center or circumference. Look to the light, *
>* / |rchimedes Leland, look to the light" - Dale Cooper *
>***********************************************************************

Comments on input devices: if an input device is "attached" to a virtual
object, then the behavior of the object must be defined in terms of changes
in the input from that real-world peripheral. Selecting between types of
behavior for a single input device would allow a user to define the mouse
as X, Y and 2 buttons controlling rotation, or X, Y and 2 buttons
controlling Z motion, or whatever the user wants. My point here is to make
a one-to-one connection between a virtual object's range of motion (its
behavior) and the input device that controls it. In this way all the input
device processing is done locally, and only the IMPORTANT information, the
end result of the input, is transmitted to the network. You would "link" a
dataglove to a virtual glove by defining the primitives or equations
(formula for a hand, anyone?), where they can rotate about an axis, and
where they can translate; for each of these ranges of motion in the
virtual world, you would then of course calibrate your input device so
that you get the degree of motion you like. A digitized hand, a 3D bitmap,
needs, at present, too much bandwidth to be useful on a network (IMHO,
that is), and sending raw data like 3D coordinates across a network would
be suicide. I'd like to see the input devices kept INDEPENDENT of the
protocol and any network overhead. End results take less bandwidth, and
are more easily computed in real time, than raw output. I propose no
general input device supported by the protocol, only behavior or
"puppet-strings" control of virtual counterparts.
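
To sketch the puppet-string idea in C (the bindings, the gain factor, and
the send_behavior() message are all hypothetical names of mine): raw mouse
deltas stay local, and only the end result goes out.

    /* Map raw mouse input to whichever behavior the user bound it to,
     * then transmit only the resulting motion message. All names here
     * are invented for illustration. */
    enum binding { BIND_ROTATE, BIND_TRANSLATE_Z };

    struct mouse_sample { int dx, dy, buttons; };

    extern void send_behavior(const char *what, const double v[3]);

    void apply_input(enum binding b, struct mouse_sample in, double gain)
    {
        double v[3] = { 0.0, 0.0, 0.0 };
        switch (b) {
        case BIND_ROTATE:              /* X,Y drive rotation */
            v[0] = in.dx * gain;       /* gain = per-user calibration */
            v[1] = in.dy * gain;
            send_behavior("rotate", v);
            break;
        case BIND_TRANSLATE_Z:         /* Y drives depth */
            v[2] = in.dy * gain;
            send_behavior("translate", v);
            break;
        }
    }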

Yeah, it's late.
--
+-----------------------------+-----------------------------------------+
| dlwood@mailbox.syr.edu      | drive on a parkway, park on a driveway  |
| Cybernaut, with a thought.  | Choosy netters choose GIF.              |
+-----------------------------+-----------------------------------------+