Newsgroups: comp.lang.c++
Path: sparky!uunet!secapl!Cookie!frank
From: frank@Cookie.secapl.com (Frank Adams)
Subject: Re: Using 64k objects
Message-ID: <1992Nov11.163605.114187@Cookie.secapl.com>
Date: Wed, 11 Nov 1992 16:36:05 GMT
References: <1992Nov4.011604.5884@piccolo.cit.cornell.edu> <BxH4p6.8Bn@research.canon.oz.au> <1992Nov10.102746.12514@us-es.sel.de>
Organization: Security APL, Inc.
Lines: 18

In article <1992Nov10.102746.12514@us-es.sel.de> reindorf@us-es.sel.de (Charles Reindorf) writes:
>In article <BxH4p6.8Bn@research.canon.oz.au>, colin@cole.research.canon.oz.au (Colin Pickup) writes:
>|> Borland is one of the few C/C++ compilers that handles arrays bigger than
>|> 64K. To do it, either use the huge memory model for the whole program or
>|> just make the pointers to the arrays huge (read the manuals on how to do
>|> this). The second option will give better run-time performance.
>
>Is it not the case that the "huge" memory model does not imply huge pointers
>by default? According to my understanding of the "huge" memory model, the
>only difference between it and large memory models is that a separate data
>area is allocated [...]

This is true for Borland's C++ (which was the compiler in question). For
most other PC C/C++ compilers, the "huge" memory model *does* imply huge
pointers.

So, to answer the original question, to get arrays over 64K with BC++, you
have to declare them huge and use the appropriate allocation functions.
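
For the archive, here is a minimal sketch of what that looks like in
practice under real-mode DOS. It assumes Borland's farmalloc()/farfree()
from <alloc.h>, which take an unsigned long size and so can hand back a
block over 64K; details are from memory, so check your manuals:

#include <stdio.h>
#include <alloc.h>      /* farmalloc(), farfree() -- Borland far heap */

int main(void)
{
    unsigned long n = 100000UL;   /* more than 64K bytes */
    unsigned long i;

    /* The huge keyword is the important part: huge pointer
       arithmetic normalizes the segment:offset pair, so indexing
       can cross 64K segment boundaries.  A plain far pointer
       would silently wrap around at offset 0xFFFF. */
    char huge *buf = (char huge *) farmalloc(n);

    if (buf == NULL) {
        fprintf(stderr, "farmalloc failed\n");
        return 1;
    }

    for (i = 0; i < n; i++)
        buf[i] = (char) (i & 0xFF);

    printf("buf[70000] = %d\n", buf[70000]);   /* past the 64K mark */

    farfree(buf);
    return 0;
}

Note that plain malloc() won't do here even in the large model, since
size_t is only 16 bits on DOS, so it can never return a block of 64K or
more; that's why the far-heap functions are needed (again, from memory).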