Xref: sparky alt.amateur-comp:418 comp.lang.c:19036 comp.lang.c++:18553 comp.misc:4728
Path: sparky!uunet!mcsun!uknet!warwick!doc.ic.ac.uk!agate!spool.mu.edu!uwm.edu!zaphod.mps.ohio-state.edu!swrinde!network.ucsd.edu!munnari.oz.au!metro!extro.ucc.su.OZ.AU!maxtal
From: maxtal@extro.ucc.su.OZ.AU (John MAX Skaller)
Newsgroups: alt.amateur-comp,comp.lang.c,comp.lang.c++,comp.misc
Subject: Re: What is Object Oriented Programming? Is C doomed?
Message-ID: <1992Dec30.191012.28786@ucc.su.OZ.AU>
Date: 30 Dec 92 19:10:12 GMT
References: <1992Dec28.015333.5242@ucc.su.OZ.AU> <1992Dec29.211256.188@fcom.cc.utah.edu>
Sender: news@ucc.su.OZ.AU
Organization: MAXTAL P/L C/- University Computing Centre, Sydney
Lines: 108
Nntp-Posting-Host: extro.ucc.su.oz.au

In article <1992Dec29.211256.188@fcom.cc.utah.edu> swillden@news.ccutah.edu (Shawn Willden) writes:
>maxtal@extro.ucc.su.OZ.AU (John MAX Skaller) writes:
>: You don't understand; see my attempt at explaining above.
>: A suitable cache which is 1% the size of the list guarantees
>: random read and write. End of story.
>
>A more intuitive explanation is this: If your cache has n/100
>elements, you can break the list up into n/100 + 1 pieces and know
>easily which piece a given element is in. Then, to find the element
>you want you need only traverse the appropriate piece. Since there is
>1 element of the cache for every 100 elements of the list, the longest
>any piece need be is 100 elements. If you have 1 cache entry for
>every m elements of the list, the longest a piece need be is m.

Yes. For arbitrary list size, read/write is O(1) but of course
insert and delete are not.

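Here is a minimal sketch of that piece-cache lookup in C++ (the
names and the main() driver are illustrative only, not anyone's
actual class):

    #include <cstddef>
    #include <iostream>
    #include <vector>

    // Piece cache: keep a pointer to every m-th node of a singly linked
    // list, so reaching element i costs one array lookup plus at most
    // m-1 pointer hops.

    struct Node { int value; Node* next; };

    // One cache entry per piece of m elements.
    std::vector<Node*> build_cache(Node* head, std::size_t m) {
        std::vector<Node*> cache;
        for (std::size_t i = 0; head; head = head->next, ++i)
            if (i % m == 0) cache.push_back(head);
        return cache;
    }

    // O(m) random access: start from the cached node of i's piece.
    int& element(const std::vector<Node*>& cache, std::size_t m, std::size_t i) {
        Node* p = cache[i / m];
        for (std::size_t j = i % m; j > 0; --j) p = p->next;
        return p->value;
    }

    int main() {
        // 1000-element list; a cache 1% of that size means m == 100.
        Node* head = nullptr;
        for (int i = 999; i >= 0; --i) head = new Node{i, head};

        const std::size_t m = 100;
        std::vector<Node*> cache = build_cache(head, m);

        element(cache, m, 537) = -1;                 // random write
        std::cout << element(cache, m, 537) << '\n'; // random read, prints -1
    }

With a cache that is a fixed fraction of the list, m is a constant,
which is where the O(1) above comes from.
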
You can do better than this if you also want fast sequential
access (in the forward direction). Have another cache of size m.

When you ask for element i, after you find it, cache the
pointer in slot i%m.

Setting m=2 is great for insertions in a singly linked list
at the current position. m>2 allows some backtracking.

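A sketch of that second cache (cache type 2 below), again with
made-up names rather than the real class:

    #include <cstddef>
    #include <iostream>
    #include <vector>

    // Cache type 2: a small table of m (index, node) pairs.  After element
    // i is located, its pointer is dropped into slot i % m, so the next
    // request for i or i+1 can start from a nearby node instead of the head.

    struct Node { int value; Node* next; };

    class SeqCachedList {
    public:
        struct Slot { std::size_t index; Node* node; };

        SeqCachedList(Node* head, std::size_t m)
            : head_(head), slots_(m, Slot{0, nullptr}) {}

        // Indexed access: the user sees an array, the storage is a list.
        int& operator[](std::size_t i) {
            // Start from the best cached position at or before i, else the head.
            Node* p = head_;
            std::size_t at = 0;
            for (const Slot& s : slots_)
                if (s.node && s.index <= i && s.index >= at) { at = s.index; p = s.node; }
            while (at < i) { p = p->next; ++at; }

            slots_[i % slots_.size()] = Slot{i, p};  // remember where we stopped
            return p->value;
        }

    private:
        Node*             head_;
        std::vector<Slot> slots_;
    };

    int main() {
        Node* head = nullptr;
        for (int i = 9; i >= 0; --i) head = new Node{i * 10, head};

        SeqCachedList l(head, 2);             // m == 2 is enough for a forward scan
        for (std::size_t i = 0; i < 10; ++i)  // sequential access: one hop per step
            l[i] += 1;
        std::cout << l[7] << '\n';            // prints 71
    }

With m == 2 the scan over the slots is trivial and the previously
visited element is always at hand, which is what makes the
i, i+1, i+2 pattern cheap.
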
I allow 'indexed' access to the list: the user thinks it's
an array with insert and delete operations. I have implemented
cache version 2, but not the one I originally described.

That's because although it looks like an array, I know it's
a list, and also because none of my lists have been big enough
(yet) to worry about it.

The main point is that I can access it as a singly
linked list like this:

    for(int i = 1; i <= n; ++i) l[i] = X;   // n = number of elements

and it's fast, but if I *really* want to get the n'th element,
I can just say

    X = l[n];

and am not constrained by the implementation. Note this
is NOT a 'list' in the sense that lists can be linked
together to form trees; my lists cannot share common
parts.

>This technique allows random access (read/write) in known time and
>also allows easy insertions and deletions. Insertions and deletions
>can require major changes to the cache (which could require traversal
>of a large part of the list) but a reasonable algorithm could minimize
>these, only adjusting the cache when, for example, some access
>required 150 nodes to be traversed. This would, of course, require
>the cache to contain indices as well as pointers.

Yes. But cache type 2 is easily adjusted on insertion
or deletion. On insertion, just add 1 to all indices after
the insertion point (which is still O(n), however).
On deletion you might have to remove one pointer from the cache.

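That fix-up can be sketched against the hypothetical slot table
from the previous example (again, assumed names, not the real code):

    #include <cstddef>
    #include <vector>

    struct Node { int value; Node* next; };
    struct Slot { std::size_t index; Node* node; };

    // Called after a new node has been spliced in at position pos:
    // everything at or after pos has shifted one place to the right.
    void cache_after_insert(std::vector<Slot>& slots, std::size_t pos) {
        for (Slot& s : slots)
            if (s.node && s.index >= pos) ++s.index;
    }

    // Called after the node 'dead' at position pos has been unlinked:
    // drop the one pointer that may refer to it and pull later indices back.
    void cache_after_erase(std::vector<Slot>& slots, std::size_t pos, Node* dead) {
        for (Slot& s : slots) {
            if (s.node == dead)               s.node = nullptr;
            else if (s.node && s.index > pos) --s.index;
        }
    }
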
>
>In addition, other optimizations could allow for even faster access
>when the elements are accessed in common ways, such as sequentially.

Yes, cache type 2 does this reasonably well,
and there are no problems with iterators getting out of whack
on insertion and deletion. The iterators are also easy
to use, because they are just integers.
>
>John:
>
> Is this scheme similar to the one you had in mind? If yours
>is better, please share it (or provide references, I suppose I ought
>to check Knuth to see if he has anything like this).

What you described is exactly what I was thinking of
(but haven't implemented), and I can see you already thought
of what I *have* implemented (cache type 2).

Probably, for each type of access you could
have a different caching scheme. So perhaps you could make the
cache abstract, and 'mixin' the optimal cache design
on a list-by-list basis according to your requirements.

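One way to express that (purely a sketch, with hypothetical
interfaces) is to parameterise the list by a cache policy and let
each list pick the scheme it wants:

    #include <cstddef>

    struct Node { int value; Node* next; };

    // The do-nothing policy: always walk from the head.
    struct NullCache {
        Node* start_for(std::size_t, std::size_t& at, Node* head) { at = 0; return head; }
        void  remember(std::size_t, Node*) {}
    };

    template <class CachePolicy>
    class List : private CachePolicy {      // mix the chosen cache into the list
    public:
        explicit List(Node* head) : head_(head) {}

        int& operator[](std::size_t i) {
            std::size_t at;
            Node* p = this->start_for(i, at, head_);  // ask the policy where to start
            while (at < i) { p = p->next; ++at; }
            this->remember(i, p);                     // let the policy learn from the access
            return p->value;
        }

    private:
        Node* head_;
    };

    // Usage: List<NullCache>, List<PieceCache>, List<ModuloCache>, ...
    // chosen on a list-by-list basis.
    int main() {
        Node* head = new Node{42, nullptr};
        List<NullCache> l(head);
        return l[0] == 42 ? 0 : 1;
    }
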
>
>In any case, I like the idea well enough that I think I'm going to
>start using it.
>

I haven't even tried doubly linked lists yet.
After all, that 'back link' is just a 'distributed cache'.

There are lots of permutations one could try.
None would be optimal for all cases, of course.

Sometimes even a preallocated array would be best :-)

--
;----------------------------------------------------------------------
JOHN (MAX) SKALLER, maxtal@extro.ucc.su.oz.au
Maxtal Pty Ltd, 6 MacKay St ASHFIELD, NSW 2131, AUSTRALIA
;--------------- SCIENTIFIC AND ENGINEERING SOFTWARE ------------------