- Xref: sparky comp.os.msdos.programmer:10401 comp.msdos.programmer:87 comp.os.ms-windows.programmer.tools:1360
- Newsgroups: comp.os.msdos.programmer,comp.msdos.programmer,comp.os.ms-windows.programmer.tools
- Path: sparky!uunet!spool.mu.edu!yale.edu!ira.uka.de!slsvaat!us-es.sel.de!reindorf
- From: reindorf@us-es.sel.de (Charles Reindorf)
- Subject: Re: BC++ far* default in medium model
- Message-ID: <1992Nov7.140122.2542@us-es.sel.de>
- Sender: news@us-es.sel.de
- Organization: SEL-Alcatel Line Transmission Systems Dept. US/ES
- References: <1992Nov4.090359.21666@hellgate.utah.edu> <1992Nov5.031544.5264@sequent.com> <1992Nov5.053551.20533@emr1.emr.ca> <1992Nov5.084656.10427@sequent.com> <1992Nov6.171236.1394@rei.com>
- Date: Sat, 7 Nov 92 14:01:22 GMT
- Lines: 52
-
- In article <1992Nov6.171236.1394@rei.com>, fox@rei.com (Fuzzy Fox) writes:
- |> >>>In your data declarations, if you do not specifically specify the
- |> >>>keyword far, then the declaration is left in the default data
- |> >>>segment, which can be a problem if you have more than 64k of global
- |> >>>data.
- |>
- |> This is true except in BC++ 3.1, where an option can be set which forces
- |> all data declarations larger than a certain threshold to be
- |> automatically moved into another segment. The option is -Ff=nnnn, I
- |> believe. I use -Ff=1024 in all my programs. In the IDE, this option is
- |> set as the "Automatic Far Data" option and threshold.
- |>
- |> >If you want to have arrays or structures that explicitly
- |> >store > 64k of data in one instance then you have to use the HUGE
- |> >model,
- |>
- |> You do not have to use huge model, you just need to use the 'huge'
- |> keyword when declaring the data item. In fact, a common misconception
- |> is that the Huge model forces all pointers to be huge by default. This
- |> is not true. Huge model is just like Large model, except that each
- |> individual source file gets its own data segment automatically. A
- |> single source file program will compile exactly the same under Huge and
- |> Large models.
-
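- Just to make the above concrete, the kind of declarations being talked about look
- something like this (a rough, untested sketch; the names and sizes are invented, and
- the 'huge' array assumes the keyword works on data items the way Fuzzy describes):
-
- #include <stdio.h>
-
- char small_buf[2000];        /* lands in the default data segment (DGROUP)
-                                 and counts against the 64k limit            */
- char far big_table[40000];   /* explicitly placed in its own far segment,
-                                 so it does not eat into DGROUP; with
-                                 -Ff=1024 the compiler would move it out
-                                 for you automatically                       */
- long huge samples[20000];    /* one object of 80000 bytes (> 64k); as
-                                 described above, it is the 'huge' keyword,
-                                 not the huge model, that matters here       */
-
- int main()
- {
-     samples[17500] = 1;      /* byte offset 70000, beyond the first 64k of
-                                 the array, so huge (normalising) pointer
-                                 arithmetic is needed to reach it            */
-     printf("%ld\n", samples[17500]);
-     return 0;
- }
-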
- I believe virtual function table pointers are 4 bytes in the huge model as opposed to
- 2 bytes in the large model. In the huge model they can point into "any old" data
- segment, whereas in the large model they point into the "default" data segment, so the
- segment part is not needed in the latter case. Hence code compiled under the huge model
- can genuinely differ from large-model code, and in particular you will probably get
- linkage problems if you try to mix the two, just in case you were thinking of doing so.
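-
- If anyone feels like checking, a quick experiment would be something along these lines
- (again untested, and "Shape" is just an invented example class):
-
- #include <stdio.h>
-
- class Shape {
- public:
-     virtual long area();     /* the virtual function forces a hidden
-                                 vtable pointer into every Shape object */
-     int w, h;
- };
-
- long Shape::area() { return (long) w * h; }
-
- int main()
- {
-     printf("sizeof(Shape) = %u\n", (unsigned) sizeof(Shape));
-     return 0;
- }
-
- If I have it right, building that with "bcc -ml" and with "bcc -mh" should report
- sizes differing by two bytes, the extra two being the segment part of the vtable
- pointer.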
-
- I think similarly confusing scenarios arise over whether the "this" pointer is near or
- far when mixing small-data and large-data memory models, and I have seen bugs in
- earlier versions of BC++ in connection with this.
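-
- To spell out the sort of mix I mean (a made-up two-file sketch, not something I have
- actually tried to build; the class and the model switches are just for illustration):
-
- /* counter.h -- shared header */
- class Counter {
-     long n;
- public:
-     Counter() { n = 0; }
-     void bump();
-     long value();
- };
-
- /* a.cpp -- compiled with, say, "bcc -mc" (compact: far data), so the member
-    functions defined here expect a far 'this'                                */
- #include "counter.h"
- void Counter::bump()  { n++; }
- long Counter::value() { return n; }
-
- /* b.cpp -- compiled with "bcc -ms" (small: near data), so these calls would
-    pass a near 'this'.  Whether that is caught at link time or quietly reads
-    the wrong segment is exactly the sort of detail that seemed to vary
-    between BC++ versions.                                                    */
- #include "counter.h"
- int main()
- {
-     Counter c;
-     c.bump();
-     return (int) c.value();
- }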
-
- I realise I may be fragmenting this thread somewhat, but I would be interested to hear
- from anyone who knows more about this aspect of C++ programming on PCs.
-
- |> Huge pointers are fraught with problems, and you need to know what
- |> you're doing in order to use them properly.
- |>
- |> --
- |> #ifdef TRUE | Fuzzy Fox (a.k.a. David DeSimone) fuzzy@netcom.com
- |> #define TRUE 0 |
- |> #define FALSE 1 | "History doesn't repeat itself, but it rhymes."
- |> #endif | -- Mark Twain
-
- All of my opinions are mine
- --- Charles Reindorf
-