Volume Number: 7
Issue Number: 3
Column Tag: Letters
THINK C and Benchmarks
By Kirk Chase, Editor
Objective-C® Oversight
Sarah C. Bell
The Stepstone Corporation
75 Glen Road
Sandy Hook, CT 06482
I object to Mr. C. Keith Ray’s article, “NeXT for Mac Programmers” (MacTutor, Dec. ’90). This article dealt with the Objective-C® programming environment marketed by the Stepstone Corporation. At the end of his article, he listed all the trademarks that were included in the article. He failed to mention that Objective-C is a trademark of the Stepstone Corp.
Since the column was centered around our product, we would appreciate it if you made mention of this fact in the next issue of MacTutor.
[Consider it done - ed]
Taming The THINK C Debugger
Rex Reinhart
Irvine, CA
Here’s a simple workaround to setting and keeping breakpoints in THINK C.
Define the following macro:
/* 1 */ #define HALT asm{_Debugger}
Then simply put HALT in your code wherever you want a breakpoint (anywhere but in the middle of an expression). Then run the program under the THINK C debugger. When execution of your code reaches the HALT, your program will stop, the debugger will become active, and your source code will appear with the current line pointer at the HALT. From there you may step at your leisure.
For low-level debugging, try it with MacsBug: compile your program and run it on its own, outside the THINK C debugger, and the _Debugger trap will drop you into MacsBug instead.
Another useful variation is
/* 2 */ #define CONDHALT(Condition) if (Condition) asm { _Debugger }
The Mac has a current debugger pointer which a debugger may set to point to itself. I don’t know if this pointer is in a global, or directly in the trap dispatch table. _Debugger is a little-known (not in IM I-V) trap, A9FF, which transfers execution to wherever this pointer points. The THINK C debugger sets this pointer correctly. I haven’t tried this with other debuggers, but if they set this pointer it should work.
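For example, a routine sprinkled with these macros might look like the sketch below; the routine and its numbers are made up for illustration.

/* 3 */
/* Usage sketch -- SomeRoutine and its values are invented, not from any real program */
#define HALT                 asm { _Debugger }
#define CONDHALT(Condition)  if (Condition) asm { _Debugger }

static void SomeRoutine(long count)
{
    long    i, total = 0;

    HALT;                        /* unconditional breakpoint: stops here under the   */
                                 /* THINK C debugger, or in MacsBug if the program   */
                                 /* is run on its own                                */
    for (i = 0; i < count; i++) {
        total += i;
        CONDHALT(total > 1000);  /* breaks only once the condition becomes true      */
    }
}

Since CONDHALT costs only a compare until its condition fires, it can be left in place while you narrow down a problem.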
Good Luck.
Benchmark Challenge Revisited
John W. Baxter
Port Ludlow, WA
This is a response to the letter “Benchmark Challenge” published in the September 1990 issue.
A significant problem exists with the benchmark source code (both C and Pascal) as printed in the September issue: the local variable b controls the flow of the program, and the letter states that it ranges from 1 to 20. However, b is neither initialized to anything nor adjusted during the loop. Therefore, the value of b will be whatever is left over in the stack frame location or register the compiler uses for storing b. That could, for example, mean that the value of b in the Pascal case happens to be 1 (causing execution of only one comparison per loop), while in the C case it might be 100,572, causing many comparisons per loop. To fix this problem, I would suggest initializing b to 1 at the beginning of each of the test functions (procedures), and having each conditional advance it to the next value. So the Pascal TestCase procedure would look something like:
begin
  b := 1;
  case b of
    1: begin b := 2 end;
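The same fix carries over to the C version. A minimal sketch is shown below; the function name, iteration count, and exact case arms are assumptions, since the original September listing is not reproduced here.

/* 4 */
/* Sketch of a corrected C test function; names and the iteration count are assumed */
#define ITERATIONS  100000L

void TestCase(void)
{
    long    i;
    int     b = 1;                       /* b now starts in a defined state          */

    for (i = 0; i < ITERATIONS; i++) {
        switch (b) {
            case 1:   b = 2;   break;    /* each arm advances b to the next value    */
            case 2:   b = 3;   break;
            /* ... cases 3 through 19 ... */
            case 20:  b = 1;   break;    /* wrap around so b stays in the range 1..20 */
        }
    }
}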
Another pitfall to watch out for in designing benchmark code is code optimization performed by the compiler. Looking at the code as presented in the magazine, we find that in fact, regardless of the value of b, the code to be executed by the conditional structure is the same (nothing). In both C and Pascal, the compiler has the “right” in such a case to optimize the conditional structure totally out of existence in the final object code (there are no called function or procedure side effects to worry about). As it happens, fixing the undefined state of the variable b as above will prevent such optimization in this benchmark. And I doubt that either compiler used actually goes that far in its optimization.
Yet another potential problem is that the Pascal version uses a 2-byte b, while the C version happens to use a 4-byte b. In this case, the time difference introduced by this difference should be vanishingly small (on the 68xxx family). The same benchmarks moved to the Apple IIgs, and continuing to suffer the problem (they wouldn’t as printed, because the available C compilers on the IIgs use a 2-byte int), would favor the Pascal significantly, since 2-byte arithmetic is much faster than 4-byte arithmetic on the 65816 (which is why int is 2 bytes in those C compilers). To go a step further, C would allow the use of an unsigned 2-byte b, while Pascal does not. For best C performance on the 65816 chip, everything which CAN be unsigned SHOULD be. That leads to the question: “Does a fair benchmark comparing 2 languages and their implementations use the same data type in each, or use the most efficient available data type?”
Unfortunately, the answer to that is “Yes.” It depends upon what one wishes to compare. When ignored, this issue invalidates carelessly written benchmark articles, including the original Byte magazine Sieve benchmark, which made UCSD Pascal look much worse than it needed to. For the Apple II version, about a 40% speed improvement was produced in the Byte Sieve simply by (a) turning off range checking, and (b) declaring the variables with the big array last, so that the other variables could be reached by more efficient p-code. A benchmark needs to make clear whether it is comparing the best source found for a given language/implementation/platform, or a direct translation.
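To put the data-type choice in concrete terms, the declarations below illustrate the three options under discussion; the variable names are invented for illustration.

/* 5 */
short           b_match_pascal;   /* 2 bytes: the same size as the Pascal b          */
long            b_as_printed;     /* 4 bytes: the size the printed C version used    */
unsigned short  b_best_for_c;     /* the unsigned 2-byte variable C permits, for     */
                                  /* which standard Pascal has no equivalent         */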
More on Benchmarks
Tom Pittman
Spreckels, CA
Davis and Bayer have some interesting figures, but their conclusions are misinformed.
First (and trivially), the “assembly language” put out by the compiler has nothing to do with the speed of the compiled code, since most compilers (and THINK in particular) don’t put out assembly language at all, but go directly to native machine code. I consider this point trivial, as I assume that is what they would have intended to say, anyway. But the point as corrected is still misleading, since the quality of the machine code generated by a compiler is largely affected by the nature of the source language, as their benchmarks clearly show.
If you ask experienced compiler people (not just run-of-the-mill programmers), you will get an entirely different answer concerning the relative merits of C and Pascal. You see, compiler people know that C is a low-level language designed to fit the PDP-11 very well; to the extent that other computers are similar in architecture to the PDP-11, it’s possible to generate correspondingly good code for them. But as machine design moves away from that obsolete architecture, compiler designers have a harder and harder time of undoing all those low-level optimizations that C encourages the programmers to write, in order to generate the correct and optimal code that a Pascal compiler can do with ease. For some C constructs it’s utterly impossible; any self-respecting compiler just throws up its metaphorical hands and puts out something that works somehow, never mind how slowly or awkwardly. Pascal, by distancing itself from the low-level machine details, does the compiler writer a favor. Pascal compilers on the Macintosh still put out some rotten code; they have not even begun to show what Pascal is capable of. The C compilers on the other hand, are pretty much maxed out. Draw your own conclusions.
So why do programmers all think C is so much better? It’s widely recognized to be a religious issue. That does not mean, as is commonly believed, that there are no right answers (for the benchmarks tell us otherwise), but only that bringing evidence to the discussion will not change anybody’s mind. As interesting as the Davis and Bayer benchmarks are, I doubt they will change any C hackers into Pascal programmers. More’s the pity.
[I didn’t change over. C is still my language of preference; I do code in Pascal, but I find C easier to work in. This sort of “preference” benchmark can go a long way. Code optimization is nice, but I feel that developers, programmers, hackers, and hobbyists should be familiar with many languages and do their work in the language best suited to the job at hand. At some point, C and Pascal will be nearly equal in the code they generate. - ed]
Bypassing SANE
Michael J. Gibbs
Surprise, AZ
In response to Martin E. Huber’s letter (September 1990):
Radius includes some nice software with their monitors, including an INIT called Radius Math. Radius Math speeds up floating point by bypassing SANE.
On machines with FPUs, SANE uses the floating-point unit for most operations, but not for transcendental or trigonometric functions. The reason is that the 68881/882 does not provide the accuracy SANE requires for these functions, so they are done in software. Apparently the last 4 or 5 bit positions are inaccurate on the FPU (not important for most applications).
Radius Math patches SANE to use the FPU for all floating-point operations, including those mentioned above. This is not as fast as inline FPU instructions, but it does work for all applications that use SANE.
I ran the following benchmarks using Radius Math version 1.4 and THINK Pascal 3.0.1 on my Macintosh IIcx:
Trig Test
Radius Math    Generate FPU instructions    Time (ticks)
NO             NO                           875
YES            NO                            38
NO             YES                           16
YES            YES                           16
Arithmetic Test
Radius Math    Generate FPU instructions    Time (ticks)
NO             NO                            54
YES            NO                            28
NO             YES                            6
YES            YES                            6
It appears that THINK Pascal generates inline floating point instructions, since the presence of Radius Math has no effect on performance with 881/882 code generation turned on. Notice that for applications that use SANE, Radius Math speeds normal operations by a factor of 2, and trig by a factor of 23.
Remember to use extended precision for all floating point that uses SANE. SANE converts single and double precision numbers to extended precision for calculations, then back again to store the results, so you will get better performance by avoiding the conversion. For inline FPU code, there is not much difference, except for double precision (64 bit), which is considerably slower than extended (80 or 96 bit) and single (32 bit) precision.
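For readers who want to run a similar comparison, here is a rough sketch in C of the kind of timing loop involved. The loop count, the choice of sin(), and the assumption that THINK C’s math library goes through SANE when native floating-point code generation is off are all illustrative, not the author’s actual THINK Pascal test code.

/* 6 */
#include <Events.h>     /* TickCount() */
#include <math.h>       /* sin() */

#define LOOPS   10000L

long TimeTrigLoop(void)
{
    long    i;
    long    start, finish;
    double  x = 0.0;

    start = TickCount();
    for (i = 0; i < LOOPS; i++)
        x += sin((double) i);   /* transcendental call: the slow path under plain    */
                                /* SANE, per the figures above; x just gives the     */
                                /* loop some real work to do                         */
    finish = TickCount();

    return finish - start;      /* elapsed time in ticks (1/60 of a second)          */
}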