Reply-To: m1phm02@fed.frb.gov (Patrick H. McAllister)
Organization: Federal Reserve Board, Wash, DC
I have been playing around with a graphical statistics program, XLISP-STAT by Luke Tierney. There is a Windows port, and my eventual objective is to port it to OS/2 as well. I have noticed an interesting thing: the most compute-intensive parts of the program display the biggest speed degradation under WIN-OS2. As an example, one particular kernel regression computation takes about 20 sec. under native Win 3.1 on my machine in enhanced mode and about 75 sec. under WIN-OS2. (The machine is a Northgate 486 ISA, if that matters.) The kernel regression routine is basically a tight inner loop with lots of floating-point computation, addressing computation, and subroutine calls. (It uses very little memory -- paging is not a problem.) Other parts of the program display much less performance degradation. Does anyone out there know what is going on?
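(For concreteness, here is a minimal C sketch of the kind of inner loop I mean -- a Nadaraya-Watson-style estimate with a Gaussian kernel. This is my own illustration, not the actual XLISP-STAT code, and all the names are invented.)

#include <math.h>
#include <stddef.h>

/* Gaussian kernel.  A separate function per point mirrors the
   subroutine-call overhead mentioned above. */
static double gauss(double u)
{
    return exp(-0.5 * u * u);
}

/* Nadaraya-Watson estimate at x0:
   yhat = sum_i K((x0 - x[i])/h) * y[i]  /  sum_i K((x0 - x[i])/h) */
double kreg(const double *x, const double *y, size_t n,
            double h, double x0)
{
    double num = 0.0, den = 0.0;
    size_t i;

    for (i = 0; i < n; i++) {               /* tight inner loop */
        double w = gauss((x0 - x[i]) / h);  /* FP + addressing + call */
        num += w * y[i];
        den += w;
    }
    return den > 0.0 ? num / den : 0.0;
}

Each pass through the loop is a function call, an exp(), a divide, and a couple of array references -- exactly the mix that seems to suffer under WIN-OS2.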
By the way, the same calculation done in a 16-bit native OS/2 port runs in about 7 sec. It seems to me that if vendors of compute-intensive software were able to get the same kind of speed-up with native OS/2 applications, it would have a big effect on the Windows vs. OS/2 controversy, whereas relying on WIN-OS2 emulation for the same programs may have the opposite effect.
I would like to see if we can have some discussion here on the architectural features of these operating environments that produce such dramatic speed differences. At first glance, it would seem that, if any kind of code should display about the same speed under these different environments, it would be a small, floating-point-intensive computation like this.