Path: sparky!uunet!elroy.jpl.nasa.gov!ames!agate!doc.ic.ac.uk!uknet!keele!nott-cs!lut.ac.uk!copib
From: P.I.Berresford1@lut.ac.uk
Newsgroups: comp.sys.acorn.tech
Subject: Re: ADFS maps
Message-ID: <1992Nov11.114709.22666@lut.ac.uk>
Date: 11 Nov 92 11:47:09 GMT
Sender: copib@lut.ac.uk (PI Berresford)
Reply-To: P.I.Berresford1@lut.ac.uk (PI Berresford)
Organization: Loughborough University, UK.
Lines: 44


ijp@doc.ic.ac.uk (Ian Palmer) writes:
: Now I have a problem to which I know of *a* solution, but I don't
: want it to be the *only* solution.
:
: Over the weekend I suddenly (well, over the period of a few minutes) lost
: 3.5 Meg of my hard disc free space. I was archiving some files, and what
: appeared to happen is that every time a new file was placed in the
: archive (producing a temporary copy of the archive) the old one at
: the end was NOT being added back to the free space map (or is it taken
: away?) on my IDE (A5000) hard disc.
:
: The consequence is that I went from 3.5 Meg free to zilch, all for a
: 500k archive. A further consequence is that *checkmap reports that the
: map is inconsistent with the directory tree.
:
: Now my question is: is there some easy way of getting that 3.5 Meg
: back without having to re-initialise my hard disc and reinstall all the
: data?
:
: What I find hard to understand is that *checkmap obviously creates
: what it considers to be the actual map of the disc during its
: directory scan, in order to compare it with what is stored as the map
: on the disc. So why can it not just replace the disc map with
: this correct map?
:
: I guess there must be a reason, because the review of a
: disc recovery program in a recent Archive magazine states that it
: can't do any more than *checkmap does (or words to that effect) on the
: new-map discs (i.e. it can't recreate the map by scanning the disc),
: but I can't understand why this should be the case.

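The comparison Ian describes — rebuild a map from the directory scan, then check it against the stored map — can be sketched in miniature. This is a toy model (numbered allocation units and a flat list of file extents), not the real ADFS new-map on-disc format, and the function names are made up for illustration:

```python
# Toy model of the *checkmap idea: rebuild the free-space map by
# walking the directory tree, then compare it with the stored map.

def rebuild_free_map(disc_size, directory_tree):
    """directory_tree: list of (name, start, length) file extents.
    Returns the sorted list of allocation units no file occupies."""
    used = set()
    for _name, start, length in directory_tree:
        used.update(range(start, start + length))
    return sorted(set(range(disc_size)) - used)

def map_is_consistent(stored_free_map, disc_size, directory_tree):
    return sorted(stored_free_map) == rebuild_free_map(disc_size, directory_tree)

# A 10-unit disc; one 3-unit file at units 2-4. The stored map has
# "leaked" units 5 and 6: marked used, though no file owns them --
# the same symptom as the vanishing 3.5 Meg.
tree = [("archive", 2, 3)]
stored = [0, 1, 7, 8, 9]

print(map_is_consistent(stored, 10, tree))   # False: map inconsistent
print(rebuild_free_map(10, tree))            # [0, 1, 5, 6, 7, 8, 9]
```

In this toy version the rebuilt map could simply replace the stored one, which is exactly Ian's question: why *checkmap detects but does not repair.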
Yep. I've also had the same problem on an A5000 under RO3.0, but I was
using Spark 2.14 at the time. Spark crashed out with a 'FileCore space
corrupt' error, or something like that (it was a few months ago). I had no
option but to reformat my hard drive. The problem seemed to occur
while I was doing other Filer operations at the same time as Spark was
archiving. Since then I've left the Filer alone while Spark is archiving.

So maybe there's a bug in FileCore which means two or more applications
can't use its facilities at the same time. Not what I'd call multi-tasking.
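One way two clients could corrupt a shared map without any disc fault is a plain lost update: each reads the map, allocates from its private copy, and writes back, with the second write clobbering the first. A deterministic sketch of that interleaving (a guess at a plausible mechanism, not the actual FileCore internals):

```python
# Two clients do an unsynchronised read-modify-write on a shared
# free-space map; the second write-back silently undoes the first.

free_map = {0, 1, 2, 3, 4}

# Both the archiver and the Filer read the map...
archiver_view = set(free_map)
filer_view = set(free_map)

# ...each allocates a different unit from its own private copy...
archiver_view.discard(0)   # archiver takes unit 0
filer_view.discard(1)      # Filer takes unit 1

# ...and each writes its copy back. Last writer wins:
free_map = archiver_view
free_map = filer_view

print(sorted(free_map))    # [0, 2, 3, 4] -- unit 0 looks free again,
                           # even though the archiver is using it
```

After this interleaving the map no longer agrees with what is actually on the disc, which is precisely what *checkmap then reports.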

Hope this helps someone trace the cause.
Philip