P R O G R A M M I N G
The Great Adventure of
Creativity and Logic
by Dave Moorman
We are entering into a strange
world of symbolic logic. Process
Logic, to be exact. If you do those
logic problems found in puzzle
magazines, you know Classic Logic. If
you found Geometry enjoyable in high
school, you know Proof Logic. And if
you studied philosophic logic in
college, you are acquainted with
Symbolic Logic.
Process Logic is a combination of
Proof Logic and Symbolic Logic --
plus three wonderful additional
features. Like Proof Logic, you will
be arranging statements and commands
in a particular order -- not to prove
some truth, but to effect some action
in the computer. In the process, you
will use Symbolic Logic to manipulate
values.
The three additional features are
symbolic value holders (called
variables and arrays), loops, and
conditional commands. It is the
ability to make conditional changes
in the flow of logic that gives a
computer its ability to "think."
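To see all three features in one place, here is a tiny listing in the C-64's own BASIC (my own illustration, not one of the article's programs): the variable I holds a changing value, FOR/NEXT forms the loop, and IF/THEN makes the conditional decision.

10 REM A VARIABLE, A LOOP, AND A CONDITIONAL
20 FOR I=1 TO 10
30 IF I=5 THEN PRINT "HALFWAY THERE!"
40 PRINT "COUNT:";I
50 NEXT I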
The C-64 includes a built-in
BASIC interpreter. Computers are
controlled with three types of
language. At its very heart, the
computer processor recognizes certain
values as "instructions." This is
built into the machine itself, and is
called Machine Language. EVERYTHING
the computer does is done by means of
ML.
ML is nothing but numbers, that
is, numeric values. Remembering such
values and the tasks they perform is
difficult for humans. We need at
least some easily recognizable code
words to remind us about what is
going on. The ML programmer writes
this code and the computer uses a
program to ASSEMBLE that code into ML
-- which is what the computer
actually understands. The code the
programmer writes is called ML, or
more correctly, Assembly Source Code.
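Here is a tiny sketch of that translation (my own example, not from the article): each assembly mnemonic stands for exact bytes, and even BASIC can POKE those bytes into free memory and run them with SYS.

10 REM ML BYTES FOR: LDA #$00 (A9 00), STA $D020 (8D 20 D0), RTS (60)
20 FOR I=0 TO 5:READ B:POKE 49152+I,B:NEXT I
30 SYS 49152 : REM RUN THE SIX-BYTE ROUTINE -- THE SCREEN BORDER TURNS BLACK
40 DATA 169,0,141,32,208,96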
But a computer can be smarter
than that. Assembly source code
has a one-to-one relationship with
the code the computer understands.
However, the computer can be
programmed to read words, numbers,
and other characters and translate
them into complex groups of ML
instructions. Such a language is
called Compiled. The program that
translates this source code into ML
code is called a Compiler.
A compiler compiles an entire
program or routine at a time. If the
programmer has made a mistake the
compiler cannot understand, it
reports errors -- but only after
chugging through the whole source
code. So the programmer writes,
compiles, debugs, recompiles,
executes, rewrites, recompiles, etc.,
etc. This is an arduous task, to say
the least. It was even more frustrating
back in the 1960s when the programmer
had to punch cards with each line of
the program and take the "batch" to
the computer room. The operator would
run the batch and return a paper
print-out to the programmer in a few
hours. Or days!
At that time, computers were
finally becoming fairly fast and
powerful. A terminal could be
directly connected to the computer so
the programmer did not have to wait.
But the computer then did a lot of
waiting -- for the programmer's
input. The concept of time-sharing
was developed, where the computer
could switch between many different
terminals, running different programs
at apparently the same time.
To take advantage of time sharing
and to provide a language that was
easy for students to learn and use,
BASIC (standing for Beginner's
All-purpose Symbolic Instruction Code)
was written (invented) in 1963, at
Dartmouth College, by mathematicians
John George Kemeny and Thomas Kurtz.
The commands and math the programmer
typed looked enough like English to
make reading the code relatively
easy.
But the big advantage of BASIC was
that it was -- and is -- an
Interpreted Language. During the
user's tiny slice of processor time, a
single BASIC statement would be read,
turned into the ML Code necessary to
execute the command, and processed.
Then the processor turned to another
terminal and program to process. On
the BASIC program's next turn, the
next BASIC statement would be
interpreted and executed.
The great thing about an
interpreted language like BASIC is
that the program runs until an error
occurs. Then it stops and delivers
an error message. The programmer can
fix the error and rerun the program.
This made BASIC very interactive. The
programmer did not have to get
everything right before seeing how at
least SOME of the program performed.
In December of 1974, the January
issue of Popular Electronics
published news about the first home
computer -- the Altair 8800. Bill
Gates, then a Harvard student, and his
friend Paul Allen saw the magazine --
and their future. Gates dropped out of
college, and the two rushed to
Albuquerque, NM, where the Altair was
being built.
They realized that these home
computers needed an "operating
system" -- a simple way for users to
interact with the machine. Bill Gates
wrote Altair BASIC using a mainframe
computer with an emulator that made
it act like the Intel 8080
microprocessor used by the Altair.
His BASIC included ML code to read
the keystrokes and put the BASIC
program into memory. Other code would
read BASIC commands and jump to ML
routines that performed them. The
whole thing fit in just 4 kilobytes
of memory (which was rather expensive
at the time).
Gates and his newly founded
company -- MicroSoft -- went on to
write BASIC for nearly every home
computer. BASIC 2 used about 16K of
memory, but was remarkably powerful.
Most anything a programmer wanted to
do could be done in BASIC. True -- it
was slower than straight ML. But it
was easy to learn, faster to write,
and more-or-less portable between
different makes of computers.
When Commodore produced the
Personal Electronic Transactor -- the
PET -- they turned to MicroSoft for
BASIC. Legend has it that Commodore
CEO Jack Tramiel bought BASIC 2.0
outright for $7,000 from cash-
strapped MicroSoft.
So, in the fall of 1981 when
Commodore designed the C-64, they
already owned the BASIC 2.0 operating
system. The C-64 has color video and
other features for which BASIC 2.0
had no commands. But that was OK.
Game designers would certainly use
fast ML for their code. And BASIC 2.0
has commands which can directly read
or write information to places in
memory that can control these
features.
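Those two commands are PEEK and POKE. As a quick sketch (my own example, not from the article, using the well-known VIC-II color registers), the POKEs below write directly into memory and the PEEK reads a value back:

10 POKE 53280,0 : REM VIC-II BORDER COLOR REGISTER ($D020) -- 0 IS BLACK
20 POKE 53281,1 : REM VIC-II BACKGROUND COLOR REGISTER ($D021) -- 1 IS WHITE
30 PRINT PEEK(53280) AND 15 : REM READ THE BORDER COLOR BACK (LOW 4 BITS)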
On the up side, the C-64 was
designed to be modified with ML code.
Though BASIC 2.0 was in Read Only
Memory (ROM) and could not be
changed, certain critical jump
locations were in Random Access
Memory (RAM). By changing the jump
addresses, a programmer could add new
commands to BASIC and perform all
sorts of miracles the designers never
dreamed of. The designers did include
"paddle controls" for then popular
games like Break Out. These controls
proved perfect for adding a mouse.
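As a small illustration of those RAM jump locations (my own sketch; 788/789 is the commonly documented home of the KERNAL's interrupt vector, normally pointing at $EA31), a single POKE reroutes the vector and, as a side effect, disables the RUN/STOP key:

10 PRINT PEEK(788),PEEK(789) : REM NORMALLY 49 AND 234, THAT IS $EA31
20 POKE 788,52 : REM POINT THE IRQ VECTOR PAST THE STOP-KEY CHECK
30 FOR D=1 TO 2000:NEXT D : REM RUN/STOP DOES NOTHING DURING THIS WAIT
40 POKE 788,49 : REM RESTORE THE NORMAL VECTOR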
All in all, a C-64 was a
fantastic machine in 1982 when it was
unveiled at the January Consumer
Electronics Show in Las Vegas. Its
capabilities -- especially as a "game
machine" -- and its incredible price
(that dropped below $200 in 1984) kept
it in production through 1992. Over
its decade of manufacture, some 27
million units were sold, making the
C-64 the "Best Selling Computer of the
20th Century," according to the
revered Guinness Book of World Records
(2000-2001).
And to top off a remarkable (if
often ignored) history, a C-64
Direct-to-TV game joystick was
marketed in 2004 through the QVC
shopping channel. Some 200,000 units
were sold
between Thanksgiving and Christmas.
Thanks to the designer -- Jeri
Ellsworth -- the computer inside the
joystick is a real, honest-to-
goodness C-64. With nine wires
soldered to the credit-card-sized
board, a user can connect a PS/2
keyboard, Commodore disk drive, and an
external power supply.
The Commodore 64 -- more than any
other first-generation, 8-bit
computer -- has proved itself as THE
computer for gamesters and hobbyist
programmers all over the world.
INSIDE the C-64
Every computer has three
essential parts --
1. A processor which executes ML
instructions and does math and
logic operations.
2. Input/Output capabilities -- for
keyboard, mouse, joystick,
printers, and disk drives.
3. Memory -- "itty-bitty boxes"
called bytes which