- Path: sparky!uunet!cs.utexas.edu!swrinde!elroy.jpl.nasa.gov!nntp-server.caltech.edu!toddpw
- From: toddpw@cco.caltech.edu (Todd P. Whitesel)
- Newsgroups: comp.sys.apple2
- Subject: Re: gs.os.FST?
- Message-ID: <1992Aug24.033134.13396@cco.caltech.edu>
- Date: 24 Aug 92 03:31:34 GMT
- References: <m0mLtqy-0000OHC@crash.cts.com> <1992Aug22.141938.18518@actrix.gen.nz>
- Sender: news@cco.caltech.edu
- Organization: California Institute of Technology, Pasadena
- Lines: 103
- Nntp-Posting-Host: punisher
-
- David.Empson@bbs.actrix.gen.nz writes:
-
- >No, I'm not talking about extending ProDOS. I'm talking about
- >inventing a new file system that borrows most of its ideas from ProDOS
- >(mainly so that it is easier to understand).
-
- Save yourself some trouble and just rip off the original UNIX filesystem.
- It has all the properties you describe. I am NOT kidding.
-
- >HFS can do most of the things I mentioned in my original article.
- >BUT, if a disk crashes and the directory structure is damaged, I have
- >little or no chance of understanding it, due to the complexity of the
- >B-Tree mechanism (I can't even find any decent documentation on it).
-
- True. You CAN use Mac utilities to fix it, although this is admittedly not
- a nice solution.
-
- >I also don't like the way HFS alphabetises everything for you. Not to
- >mention its severe lack of speed.
-
- Your view of HFS is skewed. Try using HFS on even a SLOW Macintosh, and you
- will find that it is just fine. In fact the automatic alphabetization makes
- things faster once your filesystem access code is properly optimized. This
- has long since been done on the Mac, but on the IIgs the FST is only 1.0 !!
- Give them some time, guy!
-
- >With a new file system which is conceptually similar to ProDOS, it
- >isn't much of a mental leap to understand the disk structure.
-
- Oh, so everybody is hand-repairing their damaged disks now? Last I heard even
- the unix guys were running an automatic repair utility rather than doing it by
- hand. What we need are real disk-repair utilities, not "easy to understand"
- file systems -- especially if the "hard to understand" filesystems can be
- successfully automated. The Mac has proven that this is possible, and Microsoft
- is investing in one with their HPFS (High Performance File System).
-
- >What I want is a "super ProDOS" file system that is specially designed
- >to be as efficient as possible for GS/OS to deal with (e.g. all
- >directory fields 16 bit so that the processor mode doesn't need to be
- >changed).
-
- You don't have to change the processor mode. You just read it as if it were
- 16 bits and then you zero the top byte (takes one instruction -- AND #$00FF).
- Actually, most of the fields in a ProDOS directory entry ARE 16 bits already,
- and the 65816 can read them directly.
-
- >Re Jawaid's comment on using 4-byte block numbers: what do you gain,
- >apart from doubling the size of all index blocks and directory fields
- >containing block numbers?
-
- Actually, if you use a more HFS-like structure, you can drastically SHRINK
- your index blocks by storing them as lists of contiguous blocks, or "extents"
- as the HFS literature calls them. These "extents" can each be read with a
- SINGLE GS/OS driver call; with prodos you have to manually analyze the index
- block to figure out what the equivalent extents would be -- why would we want
- to do this? Because GS/OS drivers can be a lot faster if they know you want
- a whole bunch of contiguous blocks and not just one block. The 3.5 driver is
- a dramatic example of this.
-
- >If an allocation block system is used, the pointers remain 16 bits,
- >minimising disk space required for storing the pointers.
-
- Actually I prefer the extent system because you save even more disk space
- and you don't have to analyze index blocks to recover the driver-call
- optimization. HFS uses an
- allocation block system with 16 bit pointers, so each extent can be encoded
- in a longword. Nowadays, there is no reason not to use 32 bit block numbers
- (especially if you are using an extent scheme), and most unix filesystems
- store 32 bit block numbers.
-
- >If you have a disk bigger than 32 megabytes are you really worried
- >about wasting an extra 512 bytes for some files?
-
- You might be if your 200 meg disk was constantly running out of space...
- Don't laugh, running out of space is a real annoying problem on unix systems
- because multiple people are using them, and you occasionally have to bug
- everyone to delete extra stuff so you can have enough space to work in.
-
- >Are you ever likely to want to put more than 65535 files on a disk?
-
- If the disk is big enough, YES! Suppose you have a megaBBS. You want each
- message to be a separate file, say each one is 10K including index block
- and directory overhead. How big are 64K 10K files? 640 Megs. What if your
- BBS software wants all the messages on the same 1 GIG drive? You hit the
- 64K file limit before you fill the drive.
-
- >By the way: I like the idea of inodes - lets put real links in this
- >file system as well!
-
- No argument there, but you can do a bit better than inodes.
-
- BTW, here's an idea you might not have thought of -- have the "assign me a
- free block to use" logic divide the disk into "tracks", say every 64 blocks
- or so. When you need new blocks, allocate directory/inode blocks from the
- even tracks and file data blocks from the odd tracks. When you hit the end of
- the disk you start coming back down on the other set of tracks. This system
- will naturally tend to pack the directory information toward the front of the
- disk and eliminate long seeks to walk the directory structure -- this is a
- real problem with prodos. If the "tracks" are large enough (32K or 64K should
- be fine) then actual file reads will still block into large enough extents
- for the driver to optimize the transfer.
-
- Todd Whitesel
- toddpw @ tybalt.caltech.edu
-