Fish 'n' More 2 / fishmore-publicdomainlibraryvol.ii1991xetec.iso / disks / disk411.lzh / Mind / Mind.lzh / Theory2
Text file | 1990-11-21 | 164KB | 2,678 lines
Artificial Intelligence Theory Journal, Part Two of Three
(1977 - 1978)
Standard Technical Report Number: MENTIFEX/AI-2
by Arthur T. Murray
Mentifex Systems
Post Office Box 31326
Seattle, WA 98103-1326 USA
(Not copyrighted; please distribute freely.)
24 AUG 1977
The theory which we are developing is that of the establishment of the
comparator as the mechanism basic to intelligence.
In continuation of yesterday's thoughts, we were reasoning just now
about a mental organism momentarily deprived of all input perception, but
still consisting of memory and a kind of "central processor." Lately we've
been reasoning that one's "level" of intelligence has a lot to do with how
many associative inputs ("tags") are available simultaneously to impinge
serially upon one's consciousness. In a way, how intelligent one is, is a
function of how likely one is to make a tenuous but valid mental connection.
We could give a mind an intelligence test of walking down a street and
providing the names for all familiar human beings met on the street.
We suspect pretty strongly that physical knowledge of the outside
world, such as of visual images, relies upon analysis by exact comparator at
any level of detail or refinement. A visual image is incomprehensible
unless you start dealing with some small group of points upon it.
However, on the scratch-leaf for this paper of today, we wrote down
what we got as a parallel idea: that coded things do not require the same
sort of comparator analysis, because coded structures are exact; they do not
have the variations and vagaries of visual images.
So perhaps the great syncretism of today, Wednesday, is to take up the
idea that those two elements together make up an intelligent mind:
comparators and coded structures.
Comparators would be necessary for vision and coded structures for
language.
The comparator system is that with which we come to know the outer
world; the coded-structure system is that with which we come to hold inner
mental concepts.
Scratch-Leaf 24 AUG 1977
- Idea: That coded things can be on any level of a base-three
expanding pyramid, because by definition they must be totally exact.
Therefore language can have twenty-six letters or umpteen sounds or
perhaps even 20,000 calligraphic characters.
- The self-concept of "I" is made possible by means of the "coded
structure." Through the exact coded structure the word "I" has access to
any of the myriad associations needed to transform the word into the
concept.
26 AUG 1977
Organization of Nolarbeit File System
- master file
- not necessarily the original documents.
- typewritten as much as possible.
- Drawings and figures should be photocopied into the typewritten
material.
- manuscript storage file
- not necessarily a complete file of all Nolarbeit material
- serves as a Sammelpunkt (collection point) for handwritten documents once
their contents have become typewritten for the master file.
- safety file
- a copy of the contents of the master file
- to be kept at a place safely away from the master file, so that if
either is lost, it can be replaced by copying the other.
- attache file
- main working file for developing new material.
- to be carried about in attache case.
- along with the other complete Nolarbeit files, must be thoroughly
indexed to facilitate new work.
26 AUG 1977
Two days ago we figured out a duality of comparator-system and coded-
structure that is theoretically necessary for high intelligence, that is,
for the use of language.
Today we were reading the work of 8 AUG 1976, and at a lot of points in
there the new significance of coded-structure makes sense.
Although a brain gets to know the outer world through a geometric-logic
comparator system, it is the coded-structure system which makes possible
high-speed intelligent thought. If we posit this theory, we attain a
tentative answer to the age-old question of whether or not thought is
possible without using words. The answer is that, over a spectrum of
thought, words are needed for the most intellectual levels. By using a word
to name any outside thing or action, a brain establishes a punctual
collector of any associative tags that deal with the outside item, and thus
a concept takes root within the mind. No matter how long the word is, it
still operates as a single point.
It is very possible that the "central processor" of a mind would
contain the coded-structure which would provide access to words stored in
memory along with their bundles of "valent" (stronger or weaker) associative
tags. Of course, in a coding system for words, the address of a word-memory
is perhaps identical with its codical breakdown.
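The notion that a word-memory's address is identical with its codical breakdown can be illustrated, very loosely, with a modern sketch: a dictionary keyed by the word's own letter string, so that the code is the address. Everything here (the names and the sample tag) is invented for illustration, not taken from the journal:

```python
concept_memory = {}

def concept_node(word):
    # The word's own letter string serves as the address: decoding the
    # heard word and locating its concept node are one and the same step.
    return concept_memory.setdefault(word, {"word": word, "tags": []})

node = concept_node("fish")
node["tags"].append(("swims", 0.9))    # a "valent" associative tag
assert concept_node("fish") is node    # same code, same address, same node
```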
Now the use of a word as, say, noun or verb, or, subject or object, has
to do with a processing structure. Such a processing structure is like
nodes which can contain any element ad libitum. So we should consider the
linguistic processor to be distinct from un-node-like memory. It may arise
in the human infant just by the laying down of memory traces, in similar or
dissimilar material, but we don't know here yet.
27 AUG 1977
O O O O O O O O O O O O
O O O X O O ===> O X O =#=> O O O
X X X ===> O X X O X X O X X
L H R
- Everything moves in relationship to the head, "H."
- We consider that the head wills and does the moving.
- The head has two arms, left (L) and right (R).
- It may have to have a way of "feeling" where an arm is at any time.
- Each arm can be moved around among only three spaces relative to the head.
These spaces are the side space, the front space, and the diagonal space
in between.
- The head can move an arm directly from side to front, or first to diagonal
and then to front.
- The head can not move an arm behind its own back, so to speak.
- The head will probably have to have an imaginary eye which must always be
focused on some moveable point in the immediate image.
- The eye-focus can stay on the same point while the body moves, only under
certain circumstances. The eye can not be caused to be looking behind the
body or outside of the allowed visible field.
- There may perhaps be "sleep" during which the eye stops seeing.
- The head can move, and when it does so, ceteris paribus, it moves the
whole body, which retains its configuration.
- The head can move to any adjacent point except those three "behind" it.
- It would be possible to make a logically valid mock-up of this microcosm
or pseudocosm by having a flat array of light bulbs that could illuminate
individual points (or squares). A lighted bulb would indicate the presence of
an object. A human could move a bulb-object by means of a kind of
"joystick," so that pushing the joystick forward or sideways would make
the corresponding move on the array. The orientation of arms could be
controlled by six pushbuttons, three on each side of the joystick. The
operative pushbutton on each side could be lighted by an inside bulb.
Then if the arm configuration was to be changed during a move, the desired
new configuration could be entered by pushbutton while the joystick was
manipulated.
- As one step, the body can revolve 45 degrees at a time with the head as
the axis, in either direction.
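The rules of this microcosm might be sketched, under liberal modern assumptions, as a small program. The class and method names are invented for illustration and are not part of the journal:

```python
# Hypothetical sketch of the microcosm: a head on a flat array, revolving
# in 45-degree steps, with two arms that each occupy one of three spaces.
ARM_SPACES = {"side", "diagonal", "front"}

class Head:
    def __init__(self):
        self.pos = (0, 0)
        self.heading = 0                        # multiples of 45 degrees
        self.arms = {"L": "side", "R": "side"}  # left and right arms

    def move_arm(self, arm, target):
        # An arm can occupy only the side, diagonal, or front space;
        # it can never be moved "behind the back."
        if target not in ARM_SPACES:
            raise ValueError("arm cannot move behind the back")
        self.arms[arm] = target

    def revolve(self, steps):
        # As one step, the body revolves 45 degrees about the head.
        self.heading = (self.heading + 45 * steps) % 360

    def move(self, dx, dy):
        # When the head moves, the whole body moves with it,
        # retaining its configuration.
        self.pos = (self.pos[0] + dx, self.pos[1] + dy)

h = Head()
h.move_arm("L", "front")    # directly from side to front is allowed
h.revolve(2)                # a 90-degree turn
h.move(0, 1)                # one step forward
```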
28 AUG 1977
The bouleumatic system for the speech pseudo-organs will be the
mechanism which engages in thought.
This seems to be an important theory.
Under the bouleumatic theory (q.v.) both motor memory and motor
volition are necessary to initiate a motor action. VERBAL THOUGHT IS
THEREFORE THE ACTIVATION OF VERBAL MOTOR MEMORY WHILE WITHHOLDING MOTOR
VOLITION.
One might interject that just to run through a train of words from
memory is a kind of volition. However, Nommultic theory renders a different
definition. Under Nommultic theory an unspoken mental thought which arises
is a spontaneous linking of associative tags summoning quasi-unitary
concepts or elements from memory. Therefore a nascent thought is not
willed, but rather it is like the topmost point of the iceberg of all the
logical processing going on in the mind.
A thought can become will (volition) by tipping a bouleumatic
accumulator into motor volition. The accumulator prevents linear volition
while implementing a kind of two-dimensional volition. Come to think of it,
that reasoning suggests that thought is a linear component of volition.
Volition, however, is non-linear, so it takes what you might call
exponential thought to bring about motor volition.
In the night we were theorizing on how the verbal motor memory system
would in itself constitute the mechanism of verbal thought in a brain. That
is, there would not be a separate mechanism conveying its results to the
verbal motor system. What we must now investigate is whether or not there
would probably be a kind of superstructure of extra processing equipment
involved with the verbal thought mechanism.
What we are looking for is what we might call a "habit" mechanism. The
human mind has the ability consciously or unconsciously to train itself to
create sentences in manifold ways.
Under Nommultic theory, the basic semantic ingredients for a sentence
just come willy-nilly to the conscious mind, from memory or from perception.
We observe a portion of the associative "tapestry" being able to govern
or modify the operation of the associative process in or along other
portions of the tapestry. For instance, a human speaker can easily insert
an arbitrary syllable after each word in sentences, such as, "I-ay am-ay
here-ay." Such insertion slows down but does not stop the creative verbal
process which is thought.
However, we can carry the investigation even further. It looks as
though the "habit-mechanism" has a kind of double access to verbal memory.
The problem of noun-plurals serves as an example. On the one hand, perhaps
the "habitmech" knows the usual way to form English noun-plurals. So when
emerging thought calls for a noun-plural, it would be easy for the habitmech
to process the retrieved noun and deliver it with the standardized plural
ending. Now there could be a delaying mechanism where time is allowed for
either standard pluralization or else irregular override.
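The delaying mechanism for pluralization, where the standardized ending yields to an irregular override when one is found, can be sketched as follows. The irregular forms listed are ordinary English examples, not data from the journal:

```python
# A minimal sketch of the pluralizing "habitmech."
IRREGULAR_PLURALS = {"child": "children", "foot": "feet", "fish": "fish"}

def pluralize(noun):
    # The delaying mechanism: allow time for an irregular override first;
    # only if none arrives does the standardized ending get attached.
    return IRREGULAR_PLURALS.get(noun, noun + "s")
```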
Any associative process which grows complex can be regarded as
consisting of a surface and a "subcorpus." Suppose a habit-mechanism causes
associations to link together into sentences in certain ways. While the
habit is not yet routine, many extra "nodes" in the associative formation of
a sentence will run through the consciousness. However, when the process
becomes smoothed out, only the end-product sentences will remain as surface,
as associative chains, and the strictly processor associations will
disappear down into the subcorpus.
We want to devise a way for all kinds of grammatical rules to become
operative in the habit-mechanism and therefore in the subcorpus. A grammar
rule is a piece of associable information, but it is one which can affect
myriad other pieces as they are called out of memory and manipulated.
30 AUG 1977
Key Modules List for Nommultic Device
_____________ _____________________ ______________
| | | | | |
| environment |--->| perception channels | | Motor Output |
|_____________| |_____________________| |______________|
____________ __________________
| | | |
| Logic | | Coded-Perception |
| Comparator | | Channels |
|____________| |__________________|
___ ____________ ______
| | | | | |
| M | | Central | | M M |
| E | | Logic | | O E |
| M | | Processor? | | T M |
| O | |____________| | O O |
| R | | R R |
| Y | | Y |
|___| |______|
___________ ___________
| | _________ _______________ | |
| Passive | | | | | | Motor |
| Coded | | Sensory | | Motor | | Coded |
| Memory | | Tagging | | Habit Tagging | | Memory |
| Structure | | System | | System | | Structure |
|___________| |_________| |_______________| |___________|
It may be that there are two different tagging systems, a passive
sensory tagging system and an active or motor habit tagging system. This
idea came quite suddenly just now.
It may be the coded-structure system which bridges the passive-active
polarity of the mind. The coded-structure system is one where perceptions
and motor output must obviously be involved with the same place.
When we hear a word, it filters right through an instantaneous decoding
locator system to reach the associative node which pinpoints a concept by
holding all the associated tags in one spot. So what we hear goes straight
to a conceptual node.
The motor output is more complicated. It is clear that a verbal node
is probably the strongest, surest associative link to any information stored
in passive or active memory. That is to say, the mind has all its memory at
its fingertips when it is operating in the realm of a verbally coded system.
I suspect that the verbal input code is automatically the address of a
concept node. I would suspect that the mind would not even differentiate
among different languages as to where a word was stored.
We immediately imagine a problem because we want all the physical
addresses not to be encumbered with dozens or hundreds of associative tags
getting in the way of the very decoding lines. It's as if we wanted a
single line to come out from each node and go somewhere else before
branching out into all possible associations.
Very likely the decode-line comes out of the auditory address area and
goes like the top of a "T" into the verbal motor output. The bottom of the
"T" goes to the concept node. If activated by volition, the motor memory
for any discrete word probably "avalanches" onto a group of muscle-
activation bars, so that there need be no picking and choosing of letter-
sequences. The verbal motor bars, before volition, probably also feed back
into coded sensory input, so that the mind can "hear itself think."
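The avalanche idea, in which one word node releases a whole stored string of muscle-activation bars and the output feeds back into coded sensory input, might be sketched like this. The phoneme strings are invented for illustration:

```python
motor_memory = {"hello": ["h", "eh", "l", "oh"]}  # illustrative sound bars
heard = []   # the coded sensory input channel

def utter(word, volition=False):
    bars = motor_memory[word]   # one node avalanches the whole sound-string
    if volition:
        print("".join(bars))    # overt speech, only when volition is added
    heard.extend(bars)          # feedback: the mind "hears itself think"
    return bars

utter("hello")   # verbal thought: motor memory active, volition withheld
```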
30 AUG 1977
_______________________ ________________
| | | |
| Perception Channels | | Motor Output |
|_______________________| |________________|
.*. .*.
.* *. .* *.
.* *. .* *.
.* *. .* *.
.* .* *. .* *. *.
* .* *. .* *. *
/ * *. .* * \
/ / * * \ \
/ P / / \ Motor \ M \
* A / / \ \ O *
> S * / \ * T <
: S > Sensory * * Habit < O :
: I : > < : R :
| V | : : | |
| E | Tagging | | Tagging | M |
| | : : | E |
: M : > < : M :
: E > System * * System < O :
> M * \ / * R <
* O \ \ / / Y *
\ R \ \ / / /
\ Y \ .* *. / /
\ *. .* *. .* /
*. *. .* *. .* .*
*. *.* *.* .*
*. .* *. .*
*. .* *. .*
*. .* *. .*
* *
30 AUG 1977
/|\
| To concept nodes.
| | | |
| | | |
| | | |
Verbal | Individual --+--)--)--)-- Individual Avalanche
Sensory | Concept -----+--)--)-- Word Motor Speech Muscle
Decoder | Addresses --------+--)-- Output Control
| -----------+-- Chains Bars
31 AUG 1977
Late yesterday we were considering a system of decoding of heard
language and motor generation of words connected to concept nodes. We
imagined a "T" with the input and output across the top and with the concept
node at the base.
Then we were reasoning on how those concept nodes really pinpoint
everything: the audible word, the generation of the word, and all the
associations which flesh out the word as a concept.
Now we are figuring out how concept-points or nodes will interact
within the mind. Obviously they can't interact as unitary points, because
then they have no distinction. They interact through the features that are
associated into them.
Thought occurs by virtue of the "coiled-spring function" of logical
aggregates in the mind.
Any verb for a complex action is really reducible to a list of simple
actions.
How can you express a subject, an action, and a direct object in purely
logical terms? There is conditional logic, but is there semantic logic,
logic to correspond to what a transitive verb does in a sentence?
If we want to find semantic logic, we have to go into that small array-
world where there are so few points that everything is totally unique and
totally describable.
The simplest transitive verb would be something like "touch," "hit," or
"push."
A verb (such as "hit") does not show just one arrangement of all the
dots in the array. I suppose it does show the arrangement of all the dots
at the crucial instant, but, as an action-word, it must also show the
arrangement of the dots at a previous instant.
We are able to express "touch," "hit," and "push" symbolically, or with
dots, on our scratch-leaf for today, but we have trouble trying to make a
universally valid statement of semantic logic out of the triple sequences of
arrays. It may be necessary to use kind of a switcheroo.
First you express the verb on the dot-level so that the machine absorbs
the concept in unique and describable terms, with reference only to the
dots. Then you abstract the concept through re-expressions of the states in
the sequence.
With just three dots, you can have seven configurations. So with
sequences of three states, you can have 343 different things happen, or
reach 343 different words to describe unequivocally what happened.
The simplest language-relationships correspond to summation and
subtraction. Complex language relationships are reducible to such
simplicity.
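The counts above can be checked mechanically: three dots, each present or absent, give 2^3 - 1 = 7 non-empty configurations, and a sequence of three such states gives 7^3 = 343 possibilities:

```python
from itertools import product

# Each of three dots is either present or absent; the empty configuration
# is excluded, leaving 2**3 - 1 = 7.
configs = [c for c in product([0, 1], repeat=3) if any(c)]

# A "happening" is a sequence of three states: 7**3 = 343.
sequences = list(product(configs, repeat=3))
print(len(configs), len(sequences))   # 7 343
```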
Diagram of Theoretical Model of the Human Mind 10 SEP 1977
________________________________________________________________
/ \
< E n v i r o n m e n t >
\________________________________________________________________/
| | /|\ /|\
| | | |
| | __|__ __|___
| | | | | |
| | Internal | V | | G M |
| | Verbal | e | | e o |
| |<-----------| r | | n t |
| | Perception | b | | e o |
| | Line | a | | r r |
______V_______ __V__________ | l | | a |
_____| | _____| | | | | l M |
| | Experiential | | | Auditory | | M | | e |
__V___ | Associative- | __V___ | Associative | | o |/### | M m |/###
| | | Tagging | | | | Memory- | | t |\### | u o |\###
| E M | | Comparator | | A M | | Tagging | | o | ## | s r | ##
| x e | | | | u e | | System | | r | ## | c y | ##
| p m | |______________| | d m | |_____________| | | ## | u | ##
| e o | /##\ | i o | /##\ | M | ## | l | ##
| r r | ## | t r | ## | e | ## | a | ##
| i y | ## | o y | ## | m | ## | t | ##
| e | ## | r | ## | o | ## | u | ##
| n | ## | y | ## | r | ## | r | ##
| t | ## | | ## | y | ## | e | ##
| i |/####################################### | | ## | | ##
| a |\####################################### |_____| ## |______| ##
| l | | | ## ## /|\ ## /|\ ##
| | | |/######## ## | ## | ##
| | | |\######## ## | ## | ##
|______| |______| ## | ## | ##
## | ## | ##
################ | ## | ##
################ | ## | ##
## Volition Line | ## | ##
## /----------------' ## | ##
## | Volition Line ## | ##
## | /-------------------##-----' ##
__\##/__|__|_ ____________##_ _______##
| | | | | |
| Conceptual ####\| Verbal Motor | | General |
| Address ####/| Habit Tagging | | Motor |
| Nodes as | | System | | Habit |
| Motor Coded ######################\| Tagging |
| Memory ######################/| System |
|_____________| |_______________| |_________|
| | /|\ /|\
| | Volition Line | |
| \------------------' |
| Volition Line |
\-------------------------------------'
16 SEP 1977
Our mind is now ready to focus upon the "Verbal Motor Habit Tagging
System" in the diagram of 10SEP1977. In the days since then we haven't
really been forging ahead, because we've been busy typing up and
photocopying much previous Nolarbeit material. Now that our yen for
theorizing has casually turned its attention to verbal motor, I think this
is a good opportunity to record and analyze if any creative process shows
itself now.
This hopefully creative process starts out with our mind thinking
of a black-box mechanism for which it has a purpose but whose interior
workings it cannot yet describe. The purpose is partially that we want
the VMHTS at its output to have control of discrete individual motor sound
units, and we want the interior VMHTS to group together those units as an
avalanche string to go with any desired "conceptual address node" punctually
representing a word which is really a string of motor sound units. In other
words, we want the single node, coupled with volition, to be able to produce
rapidly the whole string of verbal sound units in the given word. We want
the automaton to be able to spend some time learning the "habit" of the
sound-string, and thenceforth to be able to pronounce it briskly and in such
a way that the associated verbal habit tagging mechanism functions only
beneath the surface and does not intrude into the conscious process of
uttering or thinking the word.
Our other partial purpose is that we may want the same VMHTS to
incorporate the grammar structures of language. At any rate, we expect that
the learning and use of grammar will be subject to some sort of motor habit
tagging system, but we are a little afraid that the grammar problem will be
much more difficult than the word-habituation problem, simply because the
inputs to a grammar-function mechanism will have to be discriminated in
such complicated respects as what part of speech they are and what
function they are assuming.
For the word-generation problem, we envision right now an output that
controls around forty different motor tags, a number we get from the UNIFON
alphabet published 29AUG1977 in the Seattle P-I. It shows English as having
sixteen vowels and twenty-four consonants.
So that's where we are now - trying to figure out how to stack up
sounds on a pull-string and how then to submerge the system in a subcorpus
so that only the habituated results appear during system operation.
We feel pretty optimistic about the VMHTS because we suspect in advance
that the word-habituation can be done quite automatically. In fact, we
should be able to design a general-purpose habituation device which will
link up any desired sequence of outputs with any single, nodular input.
It's as if the mind can run through any sequence of outputs slowly and then
just decide to "freeze" them as a habit.
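Such a general-purpose habituation device, which records a slow run-through of outputs and then freezes them under a single nodular input, might be sketched as follows. All names here are illustrative assumptions:

```python
class Habituator:
    def __init__(self):
        self.rehearsal = []
        self.habits = {}    # node -> frozen output sequence

    def emit(self, output):
        self.rehearsal.append(output)   # the slow, deliberate run-through

    def freeze(self, node):
        # "Freeze" the rehearsed sequence as a habit under a single node.
        self.habits[node] = tuple(self.rehearsal)
        self.rehearsal = []

    def trigger(self, node):
        return self.habits[node]        # one input releases the whole string

h = Habituator()
for sound in ["s", "p", "oo", "k", "y"]:
    h.emit(sound)
h.freeze("spooky")
```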
We do worry about designing such a cumbersome mechanism that it would
be hard to imagine its analog being provided genetically. We want our
system to be simple or straightforward enough that it could be formed
embryonically according to genetic code. We will accept even a cumbersome
mechanism and then try to simplify it.
If we actually do figure out a device which will habituate up to forty
outputs, then I think it would be pretty easy to give even an inexpensive
automaton the ability to say words out loud. We would just make the forty
outputs go to an actual mechanism of sound-production. It could be perhaps
bits of voice-tape or even some kind of digital memory. That would really
sound spooky, to hear the machine pronouncing words out loud. It would
probably sound hiccupy or rasping, but quite intelligible. Later on in an
expensive model we could give the machine a superlative voice such as that
of actor Richard Burton.
Scratch-Leaf
- Output of up to 40 units.
- Input as a node, but output must be compared with the sound series behind
the node.
- Obviously, the build-up chain of the sound-series to be habituated will
start with just finding the initial sound in motor memory, and then it
will just add on sounds.
- For the auditory input, we may have to have a kind of slip-through
comparison chain, where sequences are kind of slid past each other to see
if they match up anywhere.
- For the auditory input, we might establish a kind of "short-term memory."
By "short-term" we would mean memory that goes into a special register.
- Whenever I get the idea of building registers, I feel the urge to attach
umpteen possible processing mechanisms to them.
- Contemporary psychologists have researched short-term memory quite
thoroughly, so I could always read up on it.
- When the psychologists say that seven things can stay in short-term
memory, what they probably mean is seven different concept nodes.
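The slip-through comparison chain mentioned in the scratch-leaf, where two sound-sequences are slid past each other to see if they match up anywhere, could be read as a search over offsets for the longest contiguous agreement. This is one hypothetical reading, not the journal's own algorithm:

```python
def slip_through(seq_a, seq_b):
    # Slide seq_b past seq_a at every offset, tracking the longest
    # contiguous run of agreement. Returns (start index in seq_a, length).
    best_start, best_len = 0, 0
    for offset in range(-len(seq_b) + 1, len(seq_a)):
        run = 0
        for i in range(len(seq_b)):
            j = offset + i
            if 0 <= j < len(seq_a) and seq_a[j] == seq_b[i]:
                run += 1
                if run > best_len:
                    best_len = run
                    best_start = j - run + 1
            else:
                run = 0
    return best_start, best_len
```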
20 SEP 1977
Mental Processing of Transitive Verbs
When we say that "A" does a transitive action to "C," it is like having
three crucial nodes filled in a logical association-assigning apparatus
within a mind.
It is like saying, "A nodes C."
Switch-wise, for a three-word sentence of subject, verb, and direct
object to be understood by an automaton requires only that each of the three
verbal concepts gets tagged to the three crucial nodes on the processing
niveau of the language-understanding apparatus.
A V C
o o o
1 2 3
Subject "A" does verb-action "V" to direct object "C."
If A-V-C, then H and R.
"H" is a "How?" relating to A, and "R" is a result relating to C.
It is further possible that A-V-C also implies a two-way associative
memory link between A and C with some kind of reference to V.
______________________ ____________________________________
| | | |
| AVC --> H & R | or | AVC --> H & R & AVC-tag |
| A C | | A C AC |
|______________________| |____________________________________|
Now comes the semantic nitty-gritty.
"A" is a concept because it is a bundle of associative tags collected
at a node, and so is "C."
The verb-concept "V," by virtue of being a transitive verb, can cause
tag ends to be delivered for "HA" and "RC."
When I say that A-V-C implies RC, a result with respect to "C," I mean
that to "C" is now associatively connected a nodular tag that leads to a
whole list of different result-memories associated with the concept "V."
For example, if "A kills C," then a whole list of images of death
becomes associated to the nodular concept of "C."
Thought then has transpired because the mind's concept of "C" has been
altered radically. The new list for "C" probably does not erase other items
for C, but rather it probably overrides them in associative priority.
At the same time, if "A kills C," then a whole list (HA) of possible
"how's" of killing becomes associated to the nodular concept of "A." Or
maybe just the question of "killing-how?" becomes associated with A. If the
"how" is actually stated in the sentence, that can be associated to "A"
instead of just the question. Of course, language always leaves some degree
of possible question.
So we see that the nodular concepts "A" and "C" are both altered by the
sentence with the transitive verb. The process is meaningful but not
precise. "V" just gives its result-list to "C" and its how-list to "A."
Probably the sentence itself as a percept goes into memory of
experience.
Notice the importance of the idea that a transitive verb carries the
two lists (subnodes?), the "how" and the "result." A transitive verb can
link the result-list to the direct-object concept. I suppose that an
intransitive verb would link both lists right back to the subject.
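The A-V-C tagging scheme described in this entry, with the verb delivering its how-list to the subject and its result-list to the direct object, plus the two-way link between A and C, might be sketched as follows. The verb's sample lists are invented for illustration:

```python
verbs = {
    "kills": {"how": ["weapon?", "poison?"], "result": ["death-images"]},
}
nodes = {}   # concept name -> list of associative tags, newest first

def tag(concept, item):
    # New tags go to the front: they do not erase the older tags, but
    # override them in associative priority.
    nodes.setdefault(concept, []).insert(0, item)

def process_sentence(a, v, c):
    for how in verbs[v]["how"]:
        tag(a, (v, how))        # the "HA" tagging: how-list to the subject
    for result in verbs[v]["result"]:
        tag(c, (v, result))     # the "RC" tagging: result-list to the object
    tag(a, ("AVC", c))          # the two-way associative link
    tag(c, ("AVC", a))          # between A and C

process_sentence("A", "kills", "C")
```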
27 SEP 1977
[U.W. Engr. Lib.]
We have theorized that it may be possible to determine the least amount
of learning necessary to implant a usable concept in a brain.
When a concept is being summoned, it is represented by a unitary node.
However, if a brain is to make use of a summoned concept, then features of
its informational or logical content must come into play.
The concept is a bundle of tags. Tags have a kind of "pop-up"
priority.
Ultimate informational content is very particular; it gets down to
arrays of yes-or-no bits like in a truth-table or a visual dot-array.
Thought can perhaps proceed by a process of surface-skimming. When a
concept is summoned, it can come along internal memory paths corresponding
to one or more of the senses. (If it is fetched in correspondence to only
one sense, still it can quickly associate into other senses.) The
appearance of the summoned concept is a kind of pregnant occurrence. We
don't know in advance what use the brain will make of the concept. The
brain starts getting a serial feed-through of the single associative tags in
the conceptual bundle, starting with the tag of the highest valence and
going on down in a kind of scan or search for one or several highly relevant
associative tags.
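The serial feed-through of tags in descending valence, scanning down until a sufficiently relevant tag turns up, can be sketched like this. The sample bundle and its valences are invented:

```python
concept = [("swims", 0.9), ("scales", 0.6), ("market", 0.3)]

def scan(bundle, relevant):
    # Serial feed-through: the tag of highest valence first, then on
    # down, stopping at the first relevant associative tag.
    for name, valence in sorted(bundle, key=lambda t: -t[1]):
        if relevant(name):
            return name, valence
    return None

hit = scan(concept, lambda name: name == "scales")
```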
The semantic meaning introduced by a concept-word in a sentence goes
into memory as the simple record of an occurrence, but it also plays a role
in subjective conscious thought. If a thought is a statement of
relationship, then the chasing-around of concepts in a mind causes the
expression (statement) of relationships, or thought.
Internal verbalizing of the chase of relationships would be the rather
firm or solid activation of verbal concept-nodes in their role as part of
the structure of verbal motor memory. That is to say, based on our model of
the mind as illustrated on September 10th, the summoning of a concept can
result in avalanche activation of the verbal motor memory concept node for
that concept. Some relationships between concepts get translated into or
expressed as verbs, so that sentences arise. We know theoretically that
relationships are assigned by tags, but can we ever say that the tags are
the relationships? How does a mind fetch a verb to describe a relationship
it is thinking about or even perceiving? Well, here also, associative
taggings made from either the previous thought or the present perception
must fetch the concept of the verb. What we might have here is the
summation of tags until a specific verb/concept is fetched. After all,
erudite people use highly specialized verbs. The tags coming from a
perceived or contemplated relationship would, as it were, "vote" for which
verb is to express the relationship.
The refinements of grammar would have to be added on by a mechanism
supervising the broad processes of verbal thought in sentences. One way to
do it might be to make the deep-structure formulations subconscious, with
their surface-structure transformations rising into the consciousness. (See
the ideas on verbal motor habit-tagging systems.) However, we mustn't rush
to invoke the subconscious here, because we are theorizing that the chase of
concepts is a conscious process, or are we?
It may be that the verbal motor habit-tagging system, as an embodiment
of grammar principles, affects the chase-summoning of word/concepts in such
a way that they are forced to emerge into the consciousness already put into
their grammatical form which is proper for the sentence being generated.
Notice that we often describe a past-time scenario in the present
tense: "Hitler invades Russia. The Russians counterattack."
There may be something like a "forward-looking radar system" to make
sure that words enter consciousness in their proper grammatical form and
syntax.
Let's go back towards the first part of today's work. The concept-
chase would hardly ever have to get down to "ultimate informational
content." So therefore verbal thought can get pretty abstract and still be
logical and rational, even though it is pretty far removed from the ultimate
yes-or-no constructions of logical information.
Those ultimate logical constructions were probably necessary in the
infancy of the organism. Then they yielded to ever more abstract
ratiocination, so that in the mature organism those microstructures of
"ultimate informational content" are probably not necessary any more and
could probably be excised from memory with utterly no resulting damage or
impairment. What this theory says is that you have to go through a
minuscule gearing-up for abstract thinking, but from then on you don't need
the early structures.
So it looks as though we can't obtain what we were foolishly hoping for
at the beginning of today's paper: a read-only-memory containing a
standardized concept. A concept is inextricable from the mind in which it
resides, because its content depends on the associations it makes both to
sub-concepts and to other concepts.
28 SEP 1977
We haven't yet really gone deeply into the motor habit-tagging systems
as envisioned on September tenth. Instead we started off onto short-term
memory and then yesterday onto concept association.
Some thoughts have come to mind concerning motor habit-tagging. An
intriguing way to do it would be to make some discrete functioning units
which could operate together in indeterminately large groups.
I mean, we could outright design a system to habituate grammar, or we
could design basic units which would then organize themselves so as to be
able to habituate grammar.
Habituation would probably have to occur during short-term rehearsal-
type memory, and then be able to function over long-term memory.
It might be important to note that the barrage of words potentially
coming into the grammar system is probably thinned out whenever the
consciousness reviews a series of visual images. The review of visual
images could qualify as "non-verbal thought," but of course features of the
images are capable of evoking associated words. If they don't, it must be
because of levels of dominance.
It's rather easy to imagine that the first noun-phrase summoned into
the grammar-system could automatically go into the "nominative case," unless
there is a forward-looking system which is able to determine and apply
grammatical usage before the word surfaces into consciousness.
A rough general requirement for the habituation system might be that,
given any nodular verbal inputs, it shall modify and assemble those inputs
into a sentence conforming to the grammar rules as learned from the outside
world.
29 SEP 1977
Towards a Theory of Grammar
There may develop concepts which we could call "invisible quasi-
anonymous concepts." Now, these concepts may be so important for language
that they become operationally linked to the language system, that is, to
the verbal motor habit-tagging system.
These concepts exist, outside of the VMHTS, in the permanent memory,
where they can always be immediately linked to either perceptions or
thoughts that are going to give rise to language-sentences.
They are "invisible" in that we don't become actually aware of them
when they are activated by association. For instance, we may know that we
are seeing either one fish or several fish, but our mind doesn't focus
consciously on the singularity or plurality.
They are "quasi-anonymous" in that we don't necessarily give them
names. Their silent activation does not make us think of words like
"singularity/plurality" or "past/present/future" or "subject/object." Yet
they govern classifications which are real and which are immanent in the
external world, as perceived by us.
In this particular writing of theory there is hardly any lag between
new thoughts and my writing down of thoughts. I find it hard to keep all
the factors in mind while trying to develop from them a theory of
linguistics. However, the wellspring of ideas has not dried up.
I'm just theorizing that it may be good to have concepts in permanent
memory linked operationally to the VMHTS.  You see, there seem to be two
levels of habit-tagging to be done. On the low, easy level, habituation of
the sounds in words would be just a simple comparison problem. On the
higher level, the habituation required to set up a language system is much
more complex because it actually involves getting concepts to operate.
We are tending to theorize now that the VMHTS is allowed to associate
itself directly with the invisible concepts and use them as important
elements in the linguistic habituations which it achieves. Well, yes, we
might as well posit an ability of the VMHTS to seek out and attach any
concept it needs.
Of course, I guess it will use those concepts on a yes-or-no basis.
For instance, when dealing with "singularity/plurality" it will answer one
or both of those questions with "yes" or "no." Or it may just accept the
association/activation line of the concept as an input calling for a certain
learned (habituated) operation. For instance, with "past/present/future"
it's not an either-or operation, it's a question of which one. Likewise the
imperative mood might be just a single input.
But there is a fundamental difference between the concepts in general
and the habituation-linked concepts. The general ones just float around,
and when they are summoned (randomly) they must so-to-speak identify
themselves by bringing with them associative-tag bundles which influence the
course of thought and, by so doing, constitute the functioning of the
concept as a concept. On the other hand, the invisible habituation-linked
concepts are already pre-identified any time they go into activation, and
they don't have a direct, conscious influence on thought, but rather they
affect the generation of sentences in a predetermined (habituated) way. We
might say that they effect the translation of a concept into a linguistic
phenomenon.  For instance, the invisible concept of plurality often makes us
add an "s" to an English noun to form its plural.
The habituation system might function by rules of which the following
would be an example: "In the presence of a plurality-signal and in the
absence of an irregular-plural signal, an English noun will follow the set
rule to have its plural end in 's.'"
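In modern terms, the rule just stated can be sketched as a small function.
This is only an illustrative sketch: the signal names and the tiny
irregular-plural table are assumptions of mine, not part of the journal.

```python
# A sketch of the habituation rule above: given a noun (theta-word) and its
# delta-signals, emit the inflected surface form (sigma-output).
# Signal names and the irregular-plural table are illustrative.

IRREGULAR_PLURALS = {"child": "children", "fish": "fish"}

def pluralize(theta_word, signals):
    """In the presence of a plurality-signal and in the absence of an
    irregular-plural signal, an English noun has its plural end in 's'."""
    if "plural" not in signals:
        return theta_word                     # no plurality-signal: unchanged
    if "irregular" in signals:
        return IRREGULAR_PLURALS[theta_word]  # irregular-signal overrides the set rule
    return theta_word + "s"                   # the default habituated rule

print(pluralize("cat", {"plural"}))                 # cats
print(pluralize("child", {"plural", "irregular"}))  # children
```

The point of the sketch is that the function itself carries no conceptuality;
it merely reacts to signals arriving from outside it.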
The neat thing here is that the process can be quite automatic. We
have shifted the burden of conceptuality out of the VMHTS and into the
perception system.
If we could devise a "discrete habituation unit" (DHU?) we would not
have to program into the machine any of the intricate mechanisms needed for
linguistic functioning. Of course, we don't want here to build a programmed
computer; we want a mind like that of an infant. But we will want to know
exactly what switching-function goes on for any given operation, linguistic
or not. We don't want to build a machine whose wonderful function we don't
even understand.
If we say that conceptual inputs of the invisible type are just a
unitary nodular line into the VMHTS, then we can begin to search for the
above-mentioned DHU by establishing various parameters, such as the idea
that they should be able to handle any human language and any typical
complexity as long as the DHU's are present in sufficient numbers. They
have as single units such characteristics that their powers are multiplied
as their numbers are multiplied.
It's a lot easier for genetics to provide a discrete unit in large
numbers, than to provide a ready-made mechanism of complex function.
30 SEP 1977
We now want to design a "discrete habituation unit" for language. It
may be impossible or unnecessary to design such a unit, but we want to at
least toy with the idea. It will be wonderful if we can make a device which
can handle greater complexities by coupling with identical units.
We can start with the "black-box method" by figuring out the inputs and
outputs. I guess for the first time in my Nolarbeit I've reached the point
of designing a Turing machine.
The DHU's of the VMHTS must end up constituting a system for sentence-
generation, but first they must be able to self-organize in a way that
builds up that system for sentence-generation. In other words, they
represent two important processes.
There are two classes of input, which we might for brevity call "nodes"
and "signals." A "node" here is a verbal concept. A "signal" is a
grammatically relevant link from an "invisible quasi-anonymous concept."
The VMHTS must learn to process the nodes according to what signals
they are coupled with. The nodes will be freely variable, and the signals
will be constant and invariable, except of course that they can be learned
and unlearned.
It now begins to look as if we don't care how many nodes are coming in,
whether one or a dozen or more.  Because a node is just a word, and its
grammatical role is determined by its signal or signals.
The changes that can be worked upon a node/word are changes with regard
to its sound-form and its position in a sentence. The sound-form changes
are changes in the so-to-speak "reins" or control-units of verbal motor
memory. In our automaton we might incorporate forty such control-units for
forty discrete sounds. At any rate, the verbal motor control-units are at
the mercy of the VMHTS. The VMHTS sets them up to work in certain ways, but
it is always at liberty to change those procedures, as when learning a new
language.
So thus far we have a handful of nodes for input, forty control-units
for output, and we still have to figure out the role of the signals.
Now a node is already a set group of identified control-units. In
other words, the node itself could go through verbal motor memory as a
spoken word. But through the linguistic VMHTS, the mind has a chance to
manipulate the nodes into language.
Remember, we're dealing with discrete sounds, not spellings.
Let's establish notation that nodes are called "[theta]", signals are
called "[delta]", and control-units are called "[sigma]".
We will allow special signals to change a node/word utterly. For
example, the nodular concept "I" can become expressed as the word "me."
Note that we went to the more difficult problem of language before we
even tackled the simpler problem of the habituation of words as strings of
discrete sounds. Now it looks as though we'll have to go back to the same
beginnings, because we'll have to have a short-term memory comparison
system, so that the mind can check itself against examples of what it's
trying to achieve.
Scratch-Leaf
- Inputs will be nodular words extruded from the association process, plus
invisible-concept signals both relating to the nodular words and governing
the sentence-generation process.
- Pluralization:
[Theta]-CAT + [Delta]-pluralization --> [Sigma]-CATS
- Past Tense:
[Theta]-BLANK + [Delta]-preterite --> [Sigma]-BLANKED
     [Delta] \
     [Theta]  } --> [Lambda] --> [Sigma]
     [Beta]  /
[Delta] = signal
[Theta] = word/node
[Beta] = volition ("boulema")
[Lambda] = habit pattern (learning)
[Sigma] = discrete control-unit for a sound
Of course, the learned habit "[lambda]" has to be in relation to some
perceived or imagined model, for which there must be an additional input
into the habituation process.
Therefore [psi] = the model, or the information which is
to be abstracted from each model
of the given grammar phenomenon.
     If necessary, let [gamma] = a grammar rule as we know it when discussing
it or stating it in English.
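The scratch-leaf schema can be read as a small procedure: under volition
([beta]), repeated exposure to a model ([psi]) pairs a ([theta], [delta])
input with its [sigma]-output, and the stored pairing is the habit
([lambda]).  A minimal sketch, with all names my own illustration:

```python
# Sketch of the scratch-leaf schema: delta + theta + beta, against a model
# psi, yields a lambda (a stored habit) which later produces sigma-output.

class Habituator:
    def __init__(self):
        self.habits = {}                       # lambda: (theta, delta) -> sigma

    def expose(self, theta, delta, psi, beta=True):
        """One exposure to a model; with volition present, record the habit."""
        if beta:
            self.habits[(theta, delta)] = psi  # the model supplies the target form

    def produce(self, theta, delta):
        """After habituation, reproduce the learned sigma-output."""
        return self.habits.get((theta, delta))

h = Habituator()
h.expose("blank", "preterite", "blanked")
print(h.produce("blank", "preterite"))  # blanked
```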
1 OCT 1977
[Gamma] = a grammar rule
[Psi] = information from a model
[Delta] = signal
[Theta] = word/node
[Beta] = volition
[Lambda] = habit pattern (learning)
[Sigma] = discrete control-unit for a sound
The same system which generates sentences must be capable of being
operated in reverse so as to understand sentences.
Recently we were theorizing that the mind understands a sentence, or at
least a verb, by assigning associative-tag bundles among the concepts
invoked by the words in the sentence. It was fine to do that for a verb,
but now we need to describe how it was done so that we can devise a system
that can learn to handle any facet of grammar.
Typically you have a grammar phenomenon that is taking place as a
model, such as putting person-endings onto a verb-stem, as in Latin or
English.
One or more grammar concepts will have to provide the Delta-signals to
control the lambda-process of habituating the formation of words containing
any fitting stem plus the proper ending.
A theta-word is a word in its most storable form. It is at an address
equivalent to its decoded composition. The address is the word. The
address has a definite end, which is the last element of the word.
For example, in our presently envisioned automaton, addresses (words)
may start with forty different initial letters and then (theoretically)
branch on to forty different second letters, and so on.
It seems obvious that memory-storage of words is double, because they
are stored as unique addresses and also within experiential memories of
occurrences of words.
It is very likely that an address is tagged at the end of its word, or
perhaps at the spot just before where inflection would be put on. Thus
perhaps an incoming compound word can activate two tags, a subordinate,
prefinal one and a main, final one.
So when a [theta]-word comes in it gains instant access to a conceptual
bundle because its ultimate element automatically activates the tag. The
non-ultimate elements would have just fallen into place.
So recognition of words goes by ultimate-tag, but how does generation
go?
Remember, we recently theorized that selection of a theta-word is done
by a kind of cumulative "voting" of associative tags.
When we perceive or imagine something, and seek to express the thing
with a word, pre-positioned tags reach out to activate the address of that
word and thereby its conceptual bundle. Obviously, they only need to go to
the end of that address.
We are now getting into the idea in today's scratch-leaf about the
forty-element train or channel. Obviously, an easy way to transmit
internally a complete [theta]-word would be to somehow activate each element
of its address along a channel having the same forty elements as can be in
the address. Of course, the word would go serially through such a "[theta]-
channel." And naturally it came in from the ears along a similar channel.
     A --> A, B, C, D
     B --> A, B, C, D
     C --> A, B, C, D
     D --> A, B, C, D
     (each first element branches to each possible second element, and so on)
- You can't skimp any further here; you've got to do it like this so
that you will have unique addresses.
- Some simple trigger such as a "pause-trigger" can help to make sure
that the front of each incoming word goes first into the word-memory.
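The address-memory sketched above is, in modern terms, a trie: one node per
discrete sound, with a tag set only at the word's final element.  A minimal
sketch, assuming a dictionary-based tree; the "#tag" key and the concept
labels are my own illustration:

```python
# Address-memory as a trie: the address is the word, stored element by
# element, with the ultimate-tag placed at the word's last element.

class AddressMemory:
    def __init__(self):
        self.root = {}

    def store(self, word, concept):
        node = self.root
        for sound in word:                    # branch element by element
            node = node.setdefault(sound, {})
        node["#tag"] = concept                # ultimate-tag at the word's end

    def recognize(self, word):
        """An incoming word activates the tag at its ultimate element; the
        non-ultimate elements just fall into place along the way."""
        node = self.root
        for sound in word:
            if sound not in node:
                return None                   # address not in memory
            node = node[sound]
        return node.get("#tag")

mem = AddressMemory()
mem.store("cat", "CAT-concept")
print(mem.recognize("cat"))   # CAT-concept
print(mem.recognize("cab"))   # None
```

Note that a prefix of a stored word activates no tag, which matches the
journal's point that recognition goes by the ultimate element.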
You know, a word to be spoken is not really a function of incoming-word
memory. It's really a function of the VMHTS or some memory associated
therewith.
The foregoing idea really gives us something to think about. We don't
want to be pushing words around backwards, as non-biologic hardware might
do. No, perception memory and motor memory are separate, but they approach
each other inasmuch as the motor memory learns to contain the same words as
are in perception memory.
It is much more likely that a [theta]-channel of words to be generated
would exist in a motor area rather than in a perception area. So let us
think of a manipulable and even programmable (by habituation) [theta]-
channel starting in some kind of static memory and stretching through the
habituation system on out to the motor musculature for making the forty
sounds.
A word is a habit.
     Associative Tag
     (coming to a word) --> DMU "C" -->
                              |
                            DMU "A" -->
                              |
                            DMU "T" -->
Above, "discrete memory units" form the word "cat."
At this stage, we're into pretty complicated material. The above
diagram shows how a tag could activate a word-memory and cause its elements
to move one-by-one down a forty-line theta-channel into verbal motor memory.
But the material is complicated because we've got to figure out both the
genesis and operation of word-habituation.
It's very possible that the "left" (See diagram of 10SEP1977) side of
the mind becomes aware of individual "sounds" as if they were concepts, and
that thus each of the forty Sigma-sounds can be known on the left and tagged
over to the right, where it reaches some agent capable of uttering that
sound. Thus the VMHTS can attack a word just by starting quasi-conceptually
with the first distinguishable sound, and by then going on through the whole
word.
Scratch-Leaf
- Output is [SIGMA]1 to [SIGMA]40.
Grammatically relevant concepts are [DELTA]'s.
- We may have to imagine a channel of forty [SIGMA]-lines going to several
different places.
A _______
B _______
C _______
D _______
- You know, if the channel into word-memory is a dead end, then pushing
things out backwards will cause them to be going forward again.
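One way to read the dead-end remark is as the last-in-first-out behavior of
a stack: a sequence pushed into a closed channel and then pushed back out
emerges in reversed order.  A minimal sketch under that reading, which is my
interpretation rather than the journal's:

```python
# A dead-end channel modeled as a stack: elements backed out of the
# dead end come out in the reverse of the order they went in.

def through_dead_end(elements):
    channel = []
    for e in elements:        # push the word, element by element, into the dead end
        channel.append(e)
    out = []
    while channel:            # push everything back out the way it came
        out.append(channel.pop())
    return out

print(through_dead_end(["C", "A", "T"]))  # ['T', 'A', 'C']
```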
Concerns involving word-habituation:
- Is infant-like "random dynamics" necessary so that the automaton just
grows into the use of its speech-organs?
Comparison of uttered words and model-words could be done either in a
register alongside short-term memory, or perhaps directly inside address-
memory.
     Short-Term Memory        Address
     & Comparator             Memory
        1  2  3                 A
        C  A  T                 B
                                C
                                .
                                .
                                .
                                T
2 OCT 1977
One way to imagine the habituation is as follows. First the random-
dynamics automaton develops quasi-concepts of the forty [SIGMA]-sounds. (We
say "quasi-concept" because of course one motor sound is just a unitary
information, not a concept.)
Then the automaton tries to imitate various heard words. (We'll solve
this volition problem later, you see.) (If a bird can do it, an automaton
can do it.)  A model word "reverberates" in short-term memory.  Now, here a
fine distinction must be made. There are two ways for the auditory channel
to "hear" a word: passively and actively. By "passively" I mean on the
left side of the September 10th diagram: by actual perception or by
associative activation of a passive memory (short-term or long-term). By
"actively" I mean on the right side: by inner motor generation of the word
so that it enters the receptive auditory channel on the left side.
So when the left side is repeatedly referring by tag to a model word,
it is not strictly an act of volition, it is an act of the natural flow of
associations.
The foregoing paragraph is not intuitively convincing, but it agrees
with the theory that the left side is passive and the right side is
volitional. Of course, the theory also says that the volition arises as the
convictional end-product of ratiocination, which is the free-flowing
interaction of the concept-nodes on the left side. The left side declares
verisimilitude, which is automatically interpreted as volition by the
bouleumatic accumulator system.
Anyway, a reverberating word should be a unit with a proper tag over to
the motor system, but if it's just a new model, of course it doesn't have a
conceptual tag yet, so the "trans-tagging" operation drops to a lower level,
lower in that it tags over to motor not the whole word, but the elemental
quasi-conceptual tags of the constituent [SIGMA]-sounds of the model word.
Now we assume that there is a certain general volitional desire to
learn to say new words. Each time the model reverberates, it sends over to
motor a serial volley of its constituent [SIGMA]-sounds. These constituent
[SIGMA]-sounds comprise a tentative chain or concatenation in the
habituation area, perhaps even in a "motor short-term memory." An impulse
of volition coursing through the tentative concatenation can then actually
send the [SIGMA]-sounds into motor-operation, either spoken or along the
"internal verbal perception line" of the diagram of 10SEP1977.
Thus the right side responds to a model by generating its own attempt
at mimicry. The mimicry now comes into the perception side for corrective
comparison with its original model. I suppose we now have a classical
feedback situation, or a cybernetic situation, where the goal is to
eliminate any differences showing up in the mimicry.
At any rate, we somehow have to have an adjustment process on the left
side so that on the right side the correct concatenation will be lined up
when the habituator proceeds to "harden" its concatenation.
Since the automaton is so strictly defined, how could any errors enter
in? Well, there is room for error if the low-level process of sending over
a volley of quasi-conceptual [SIGMA]-sounds is a particularly loose process.
Since the sounds are going separately, they may lose their proper sequence
during the cross-over, due perhaps to varying lengths in the "neuronal"
pathways.
We must also keep in mind that this process is designed as if it were
taking place in a very young child. The quasi-conceptual tags may not yet
have been accurately formed, or a word that is heard may somehow be garbled
in its newness and strangeness. But I think the very possibility of error
fosters the possibility of perfection.
Anyway, when the mimicry comes back, there are various ways to
automatically diminish and eliminate error. One easy way would be to keep
the process going over and over until success were indicated by the
activation of an ultimate-tag in the address-memory.
The trouble is, such a process would not allow for intermediate
corrections. Obviously, though, there should be a corrective mechanism
which starts with the front element in the word and goes through.
One way to work it would be that, if the mimicry came through with at
least the front element correct, there would now be two associations calling
for the send-over of that quasi-conceptual element into motor, because both
the model and the mimicry would tag the same first element, and likewise any
other elements that came through correct. But if any elements of the
mimicry were incorrect, there would not be a double insistence upon the
tagging of each such element. And we could arrange things so that the model
word reverberated more frequently, or perhaps even that the mimicry went
through only once and then tended to die out. Thus correct mimicry elements
would steer the modeling, but the strength of the model would gradually
override any incorrect elements of mimicry.
A partial clash between model and mimicry would keep the formative
process going on until the clash disappeared. At that point there could be
a mechanism such that the habituator would "harden" the existing
concatenation. As an added benefit, we might establish that at the same
time the ultimate-tag would be set up in address-memory. After all, the
address has to be pinned down, too, somehow, because the address is the
word. And thirdly at the same time, the conceptual activation of the
concept behind the word can henceforth evoke an immediate motor production
of the word.
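The corrective loop just described can be simulated: a mimicry comes back
partly garbled, elements that match the model are kept (the "double
insistence"), and the model's steadier reverberation gradually overrides
each incorrect element until the clash disappears.  The noise model and the
correction rate below are illustrative assumptions, not the journal's:

```python
# Model/mimicry feedback: iterate until model and concatenation agree,
# then "harden" the habituated concatenation.

import random

SOUNDS = "abcdefghijklmnopqrstuvwxyz"

def noisy_mimicry(model, rng):
    """The loose cross-over: each element may come through garbled."""
    return [s if rng.random() < 0.7 else rng.choice(SOUNDS) for s in model]

def habituate(model, rng, max_rounds=200):
    concatenation = noisy_mimicry(model, rng)   # the first, imperfect attempt
    for rounds in range(1, max_rounds + 1):
        if concatenation == model:              # clash gone: harden the habit
            return "".join(concatenation), rounds
        # correct elements are doubly insisted upon and kept; mismatched
        # elements are gradually overridden by the stronger model
        concatenation = [m if c != m and rng.random() < 0.5 else c
                         for c, m in zip(concatenation, model)]
    return "".join(concatenation), max_rounds

rng = random.Random(0)
word, rounds = habituate(list("cat"), rng)
print(word)   # cat
```

The monotone property matters: a correct element is never disturbed, so the
process can only converge, echoing the journal's claim that the model
steers while errors die out.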
Thus we end up with the important things tied together. The address-
memory is set up for instant decoding of the heard word to activate the
concept. At the same time, if conscious thought requires it, the motor
system has available a conceptual [theta]-line ready to generate the word in
a volley of motor activation.
During this discussion I haven't really gone deeply into the
development of certain things, such as the address-memory and the actual
process of the hardening of the word-habituation system. Their descriptions
are yet to be fully theorized, but they seem much less complicated than this
whole process of learning the rapid pronunciation of words.
The neat thing about this theory is that it allows a word to have a
double existence, on the left and on the right side. The two
representations of the word are joined by a kind of concept node, but at
that juncture the whole word does not flow back and forth, but only its
identifier.
This theory allows conscious verbal thought to be a motor function,
even though the concepts beneath the words are in the province of passive
memory. Of course, the flowing results of conscious motor thought pass
through the "internal verbal perception line" and thus become a part of the
historical record in experiential memory. So consciousness feeds upon
itself, and therein lies its power. It can produce ideas, and then examine
its own ideas in a lengthy chain. This circling around of ideas allows
complex logical processes to occur.
The same sort of system which habituates a word ought to habituate
grammar structures. However, grammar acquisition is more complicated
because concepts of rules are involved rather than just linear data as with
a single word.
Although an infant has to conceptualize a lot of things utterly from
scratch, I suspect that a human does not have to figure out for himself the
"invisible quasi-anonymous concepts" (See 29SEP1977) of grammar, but rather
they are conveyed to him when he learns a traditional human language.
In the beginning comes the word. The infant learns words by
habituating them into his motor system. However, the grammatical
inflections and other rule-structures must initially present a puzzle,
albeit not consciously. Because word-endings will go through weird changes
which are not part of the basic word contained so precisely in the address-
memory. But just as words are conceptualized because they are associated
firmly with bundles of perception, so the invisible grammar concepts will
also develop in the mind because the mind will record that certain
linguistic phenomena (such as word-endings) are always simultaneous with
certain classifiable or conceptualizable circumstances.
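The accretion idea can be sketched as simple co-occurrence counting: the
mind records which word-ending is simultaneous with which perceived
circumstance, and a recurring pairing accretes into an invisible grammar
concept.  The threshold and the last-letter heuristic are my illustrative
assumptions:

```python
# Genesis of invisible grammar concepts by recorded simultaneity of a
# linguistic phenomenon (a word-ending) with a classifiable circumstance.

from collections import Counter

class GammaAccretion:
    def __init__(self, threshold=3):
        self.cooccurrences = Counter()
        self.threshold = threshold

    def observe(self, word, circumstance):
        """One simultaneous occurrence of a word-ending and a circumstance."""
        self.cooccurrences[(word[-1], circumstance)] += 1

    def concepts(self):
        """Pairings seen often enough have accreted into gamma-concepts."""
        return {pair for pair, n in self.cooccurrences.items()
                if n >= self.threshold}

g = GammaAccretion()
for word in ["cats", "dogs", "hands"]:
    g.observe(word, "plural-scene")
print(g.concepts())   # {('s', 'plural-scene')}
```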
3 OCT 1977
It looks necessary to describe how invisible grammar concepts will
develop in a mind which is acquiring language. First the mind would become
aware, in some measure, of the existence of a [gamma]-rule because of extra,
"unexplainable" information involved with language inputs - typically word-
inputs. For instance, such extra information might be a different word-
ending for various declension-cases.
It is an important idea that such enigmatic extra information actually
sets up the initial node for a [gamma]-concept. Because it's a clear
genesis: if you have an enigma that has to be fit in, establish a
"Sammelpunkt" and let information accrete onto that point.
We want the concept to come in, but function somewhere outside of the
realm of consciousness.
I know that I personally can sling inflections of a new foreign
language around by picturing the written paradigm in my mind. Of course, my
mind in advance makes a choice of what route to follow in the paradigm. For
example, my mind automatically identifies a subject as being one of the six
persons.
You might say that it's a non-consciously learned skill. Perhaps it
grows like a crystal upon the "seed" of the "enigma-node." "Enigma-node"
looks like a good term to call those extra elements in words which are put
there by grammar rules and puzzle the mind which has not yet learned the
grammar-rule.
You know, I can see enigma-nodes for passive voice operating when a
subject is thought of and immediately thereafter a verb-action to which that
subject is subjected, with or without regard to an agent. But that's how it
would work to generate sentences; how would it initially work to comprehend
sentences?
I am going to theorize rather riskily now about how the mind sets up a
concept for an enigma-node. Suppose someone walks up to you stretching
forth a closed fist and says, "I have my kreds in my hand. Do you want my
kreds?" Now, "kreds" is a nonsense-word that I just dreamed up. Notice
that there's no way to suspect that it's plural or not, except by the "s" at
the end of the word. Now, when thinking of such an utterance, I get the
mental image of a hand holding a bunch of small objects about one centimeter
long. I actually get a couple of different possible images. So I am going
to theorize something pretty far-out now. I suggest that a gamma-enigma-
concept is actually a kind of list of concrete examples of the concept.
There is probably a point in the development of each such concept where a
particular or group of particulars is suddenly transformed into a universal.
This universalization may perhaps happen (of necessity) in the extreme youth
of the organism, at the time when speech is first being acquired. (After
all, isn't there a rumor that children prevented from acquiring speech by a
certain age lose the ability to acquire speech?) My theory wants it to
happen in the youth because that is near or at the time when even the
concrete, non-universal, particular reality is first being perceived. (You
know, we never really become infallible at recognizing members of a class,
anyway.) Of course, the concept may be updated by newer associations, as
long as the string of associations remains unbroken back to an origin.
This theory of invisible concepts has something to do with an older
theory of mine, which I call the theory of "virtuality." If consciousness
can be thought of as a kind of whirling illusion where we never get to see
between the elemental components of our conscious processes, then we can say
that we are perceiving an entity, consciousness, by "virtuality." For
example, the cinema shows motion only by virtuality. Likewise, if an
enigma-concept is a list of speedily consultable concrete examples, then the
mind can whiz through them at a speed so fast that they blur into a multi-
dimensional thing, a concept rather than a mere ordinary fact.
After all, another definition of a concept is that a concept is what we
make of it. If we use our enigma-concept of plurality to act as though we
are dealing with plural items, then the enigma-node has served its purpose.
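The "far-out" proposal above, a concept as a consultable list of concrete
examples, can be sketched directly.  The feature sets and the
overlap test are my illustration of "whizzing through" the particulars:

```python
# A gamma-enigma-concept as a list of concrete particulars; activation by
# a fast pass over the stored examples (the "virtuality" effect).

class EnigmaConcept:
    def __init__(self, name):
        self.name = name
        self.examples = []                 # the list of concrete particulars

    def accrete(self, example):
        """Newer associations update the concept, unbroken back to the origin."""
        self.examples.append(example)

    def evoked_by(self, features):
        """Whiz through the stored examples; any overlap activates the concept."""
        return any(features & ex for ex in self.examples)

plurality = EnigmaConcept("plurality")
plurality.accrete({"ends-in-s", "many-small-objects"})
plurality.accrete({"ends-in-s", "several-fish"})
print(plurality.evoked_by({"ends-in-s"}))    # True
print(plurality.evoked_by({"ends-in-um"}))   # False
```

Nothing here is a definition of plurality; the concept is only what its
examples let the mind do with it, which is the journal's point.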
Our train of thought "skims" over its concepts and uses them only to
the extent necessary to keep the train going. If called upon to define a
concept, it stops and does verbally the best it can.
Likewise enigma concepts should develop for "subject of verb" and
"direct object of verb."
Rather than designing every conceivable grammar concept into the
automaton, or even figuring out how the machine itself would do it, I would
rather decide what I think is the general process and then just watch the
machine do it.
So we have decided to just let the enigma-concepts develop in a freely
associative process based upon virtuality. We can now go on to discuss how
the enigma nodes (or gamma-nodes) would function with respect to the
grammar-habituation system.
We theorize that a [gamma]-node sends a [DELTA]-line into the sentence-
generation area to influence the generation of sentences and make them
grammatical.
We theorize that individual, unaltered words are evoked from motor
memory by a [theta]-line.
4 OCT 1977
One part of our theory, which we intuited in Goettingen in 1972, is
that sentence-generation probably proceeds from a single node/word. Sure,
we have Chomskian transformations that we can perform, but generating any
such sentence has probably got to be like yanking on a string, and the
yanked end of the string is the concept/word which first comes to mind just
before we willy-nilly generate a sentence about it. I suppose this idea has
to do with [DELTA]-control over word-order. If I'm going to develop a
theory of the habituation of grammar, I want it to be able to handle even
the most complicated grammar structures, and word order seems quite
complicated.
In German there are such strict patterns of word-order, depending on
what kind of word (by usage) you choose to utter first. So I guess there
are word-usage deltas and word-order deltas.
I think there should be a hierarchy of at least three delta-lines going
into the VMHTS.
S[delta] = part-of-speech delta
I[delta] = irregularity delta
U[delta] = usage delta.
It looks as though the part-of-speech deltas (S[delta]) will be the
ones which govern syntax. One way to do it would be to have the machine
gradually learn what other S-deltas it can add on when once it has seized
upon an initial S-delta. This idea corresponds to our pullstring theory of
sentence-generation.
Questions of tense will probably be answered by usage-deltas
(U[delta]). There can be a separate gamma-concept not just for each tense,
but also for each mode of expression of a tense. For example, note the
following two sentences.
"John slept eight hours."
"John was sleeping."
Each sentence seems to look at the action of the verb in a different aspect.
You know, tense-deltas might really be shared between the subject and
the verb.
I can see how the concept of the ongoing, imperfect English verb might
call right away for a form of the verb "to be" plus a present participle of
the main intended action-verb. Such an ongoing verb is a kind of
protracted-status verb.
There may be a kind of gamma-concept which has to attach itself to two
different words because of an important relationship between those two
words. For instance, take a gamma-concept of tense between subject and
verb, or a gamma-concept of the relationship between verb and direct-object.
We might even have to say that all gamma-relationship between subject
and verb is expressed simultaneously with tense. After all, if there has to
be a relationship, and tense will always be there, why not just use the
tense-relationship by itself?
S[delta]
I[delta]
U[delta]
The U-delta (for usage) just shows specific modifications of a word,
such as "s" for plural, "ed" for past tense, and "ing" for present
participle. It could show "er" for comparative degree and "est" for
superlative.
The I-delta (for irregularity) could be used to override a normal
process in favor of a required irregular process, such as when the plural of
"child" goes to "children."
The S-delta (for part of speech) actually must organize the whole show.
The place of a word in syntax will depend upon what part of speech it is
functioning as.
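The interplay of U-deltas and I-deltas just described can be sketched in a
few lines of code. This is only an illustration of the idea, not anything
from the journal itself: the word lists and function names are my own toy
assumptions.

```python
# Toy sketch: the U-delta applies a regular modification ("s", "ed",
# "ing", "er", "est") unless an I-delta overrides the normal process
# with a required irregular form, as with "child" -> "children".

IRREGULAR_PLURALS = {"child": "children", "man": "men"}  # I-delta store

def apply_u_delta(word, u_delta):
    """Regular modification: simply attach the ending."""
    return word + u_delta

def pluralize(word):
    # The I-delta overrides the normal process for special theta-words.
    if word in IRREGULAR_PLURALS:
        return IRREGULAR_PLURALS[word]
    return apply_u_delta(word, "s")

print(pluralize("dog"))    # dogs
print(pluralize("child"))  # children
```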
The initial push to generate a sentence will come when the
consciousness seizes upon a salient concept and associates a word. In order
to pass into the VMHTS to start generating a sentence, the salient word/node
will first have to pass through a kind of screen set up by the habituation
of the S-deltas in the VMHTS.
The S-delta-screen would allow only certain parts of speech to start a
sentence. I'm having trouble right now trying to think of an English part
of speech that would thus not be allowed to start a sentence, and I can only
cite that we rarely start a declarative sentence with a verb. Sometimes we
say things like: "Running! He's running!"
Whatever part of speech the mind did successfully start with for a
sentence, the VMHTS would then have only certain allowable continuances for
each part of speech used as an initial word.
Now, one way to imagine the process after an initial word would be to
picture a whole group of node-words making themselves available
simultaneously to the VMHTS. Remember that some of them perhaps have
certain relationships already decreed by their gamma-concept. At any rate,
the VMHTS would tend to take those node-words most closely related to the
initial word, until it reached a juncture of fresh choice. Then there could
perhaps be a play-off between certain general syntactical expectations of
the VMHTS on the one hand, and associatively available node-words on the
other hand. As long as associativity kept offering up the right [theta]-
words by part of speech, the VMHTS would not object. But if a somehow
unallowable word were proffered, the VMHTS might stop and wait for an
allowable word. Thus randomness and habit could operate together to
generate sentences.
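The S-delta screen and the allowable continuances might be sketched roughly
as follows. Everything here, from the transition table to the candidate
words, is a hypothetical miniature of the mechanism, not a design from the
journal.

```python
# Sketch of the S-delta "screen": the VMHTS accepts a candidate word
# only if its part of speech is an allowable continuance of the part
# of speech just used. Associativity offers candidates in order of
# strength; the screen skips (waits past) any unallowable word.

ALLOWED_NEXT = {                 # habituated continuances (toy set)
    "START": {"noun", "pronoun", "adverb"},
    "noun": {"verb"},
    "pronoun": {"verb"},
    "verb": {"noun", "adverb"},
    "adverb": {"verb", "adjective"},
}

def generate_sentence(candidates):
    """candidates: list of (word, part_of_speech) offered by
    associativity. Returns the word sequence the screen admits."""
    sentence = []
    state = "START"
    pool = list(candidates)
    while pool:
        for i, (word, pos) in enumerate(pool):
            if pos in ALLOWED_NEXT.get(state, set()):
                sentence.append(word)
                state = pos
                del pool[i]
                break
        else:
            break      # no allowable word proffered; VMHTS stops and waits
    return sentence

print(generate_sentence([("running", "verb"), ("John", "noun"),
                         ("fast", "adverb")]))  # ['John', 'running', 'fast']
```

Randomness (the order in which associativity proffers words) and habit (the
fixed continuance table) thus cooperate, as in the text above.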
Eventually we're going to have to come to terms with the
transformational grammar which arose around 1957.
10. OCT 1977
F[delta], I[delta], M[delta]
F[delta] = "function delta"
= functions or usages of a [THETA]-word in a sentence, such
as:
- subject of verb
- (main) verb
- direct object of verb
- indirect object of verb
- preposition hinging from another word
- object of preposition
- adverb
- adjective
- modifier of the whole sentence, e.g.,
"fortunately"
- conjunction between words.
M[delta] = "modification-delta"
= modifications or changes that can happen to a [theta]-word
regardless of its function or usage in a sentence, such
as:
- singularity/plurality
- degree of comparison of adjectives.
Artificial Intelligence 11. OCT 1977
Let's try again to establish the basic rules for the inputs to the
habituated mechanism for grammar.
1. Function-deltas can control modification-deltas and perhaps
also [theta]-words.
2. Modification-deltas can modify [theta]-words, usually
according to a habituated pattern contained in the VMHTS.
3. Irregularity-deltas can override a normal modification-pattern
so as to modify a special [theta]-word in a special way.
4. A VMHTS-[lambda] (VMHTS-lambda) can control function-deltas -
for example, by arranging them in syntax.
5. Conscious volition can, theoretically, override and overrule
everything in this discussion.
Today's work, developed between late last night and this afternoon, is
quite exciting to me because it presages the completion, on a certain level,
of the theoretical design of the basic modules to be contained in my
artificially intelligent automaton. I think now I should start heading my
papers with the words "Artificial Intelligence" rather than "Nolarbeit," my
old, personal word stemming from teen-age years.
Just by establishing the controlling inputs to the grammar-VMHTS, I
have made it easy to specify the inner workings. The next step will then be
to describe the habituation process itself, but that shouldn't be too
difficult, because I have already described a habituation process for
[theta]-words.
It looks as though I have a broad enough process being developed that
it will be able to handle many, if not all, human languages. For instance,
I have kept Latin in mind as a highly inflected language, because I know
that English has only a few, but highly essential, inflections. In a Latin
declension-paradigm for a noun or adjective, modification-deltas would
select the proper case. Of course, a modification-delta could choose
which of the five declensions to use for endings.
Of course, there is no such thing in my theory as a "language-delta,"
but we could say that a "language-lambda" applies the proper syntax
depending upon which language the speaker is using. For instance, among the
five languages which I claim to know well, I have to use different syntaxes
and different declension-paradigms for Latin, Russian, Greek, and German. I
feel like I am "in" a language when I am using that language to think or
speak.
So if a person knows several languages, there could be separate
"language-lambdas" in the VMHTS to facilitate speaking each language.
Theoretically, address-memory, reality-concepts and function-deltas would
work the same way for multiple languages in one mind. (The Chomskians would
refer to "deep structure" here.)
You know, I find it initially a puzzle as to how we keep our flowing
words within the same language when we know several languages. I get the
idea that it must be because of separate address-memories for the vocabulary
of each separate language. As a concept in my mind approaches
verbalization, perhaps it goes to five different [theta]-nodes, but perhaps
the mind is able to keep the link-up going to node/words only of the same
language.
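One possible reading of the per-language link-up can be sketched as below.
The idea that a concept fans out to a theta-node in each language, with the
active "language-lambda" gating which node reaches verbalization, is from
the text; the lexicons and function names are my own illustration.

```python
# Sketch: one address-memory (lexicon) per language; the active
# language-lambda keeps the link-up going only to node/words of the
# same language, so flowing words stay within one tongue.

LEXICONS = {
    "English": {"WATER": "water", "FIRE": "fire"},
    "Latin": {"WATER": "aqua", "FIRE": "ignis"},
    "German": {"WATER": "Wasser", "FIRE": "Feuer"},
}

def verbalize(concepts, language_lambda):
    """Map language-independent concepts (the 'deep structure') to
    words of the single language selected by the language-lambda."""
    lexicon = LEXICONS[language_lambda]
    return [lexicon[c] for c in concepts]

print(verbalize(["WATER", "FIRE"], "Latin"))  # ['aqua', 'ignis']
```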
(sentence-delta)
/-----------------S[delta] --------------------------------------->
|
|
|
| (function-delta)
| /---------F[delta] --------------------------------------->
| |
| |
| |
| | (modification-delta)
| | /--- M[delta] --------------------------------------->
| | |
| | |
| | |
| | \--- (theta-word)
| \-------- [THETA] ---------------------------------------->
\----------------
There may have to be another control-line going from associativity into
the VMHTS: a "sentence-delta" (S[delta]), which would direct operations
governing a sentence as a whole, as for instance whether a sentence is going
to be a statement, a question, a command, or a wish. I had thought that
eventually I would have to get around to dealing with the class of English
question-words, but I guess it would suffice to posit a whole "sentence-
delta" to deal with question-words and such things as little words with
which languages like Polish and Japanese indicate questions.
It bothers me a little to be gathering together so many control-lines,
but I guess they all are necessary to represent functions of the intellect.
The VMHTS itself does not select concept-nodes, it just arranges them for
communication.
Obviously, I have heaped a lot of the burden of sentence-generativity
into the associativity area. That is to say, my machine can generate
sentences if concepts can activate the various control-nodes and word-nodes.
I suppose we may feel free to go ahead in a rather unrestrained fashion
if we do the following. Once we have determined exactly what inputs we will
allow, we can go ahead and design a VMHTS capable of habituating any
structure based on those inputs. That way the VMHTS should be able to learn
most or all human languages, and we will avoid built-in programming, as I
discussed in the work of 29SEP1977.
Habituating a Sentence of Subject-Verb-Object
S[delta] ---------------------> declarative sentence
Let's say, "Maria amat Italiam."
_
/ S[delta] ---> declarative sentence: (a) certain word
/ order(s)
/ subject of verb
/ = nominative
Maria < F[delta] ---> subject of verb: nominative case
\ M[delta] ---> singular number
\_ [theta]-word ---> Maria: first declension
_
/ S[delta] ---> declarative sentence: indicative mood
/ F[delta] ---> / main verb
/ \ transitive
/ / active voice
amat < / present tense
\ M[delta] ---> < singular number
\ \ third person
\_ [theta]-word ---> amare (infinitive): first conjugation
_
/ S[delta] ---> declarative sentence
/ F[delta] ---> direct object of verb: accusative case
Italiam < M[delta] ---> singular number
\_ [theta]-word ---> Italia: first declension.
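The table above can be rendered as a toy program: F-deltas select the case,
M-deltas the number and person, and the VMHTS holds the habituated
first-declension and first-conjugation endings. The ending tables are
abbreviated and the helper names are my own assumption, but the Latin forms
themselves are standard.

```python
# Habituated paradigms held in the VMHTS (abbreviated toy tables).
FIRST_DECLENSION = {
    ("nominative", "singular"): "a",
    ("accusative", "singular"): "am",
    ("genitive", "singular"): "ae",
}

FIRST_CONJUGATION = {
    ("present", "singular", 3): "at",
}

def decline(stem, f_delta, m_delta):
    """F-delta supplies the case, M-delta the number."""
    return stem + FIRST_DECLENSION[(f_delta, m_delta)]

def conjugate(stem, tense, number, person):
    """M-deltas supply tense, number, and person for the verb."""
    return stem + FIRST_CONJUGATION[(tense, number, person)]

subject = decline("Mari", "nominative", "singular")  # F: subject of verb
verb = conjugate("am", "present", "singular", 3)     # M: 3rd sing. present
obj = decline("Itali", "accusative", "singular")     # F: direct object

print(subject, verb, obj)  # Maria amat Italiam
```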
One of the first problems of the VMHTS for a Latin noun as a [theta]-
word is to get at what declension it belongs to. A language will not have
very many declensions, and obviously the forms of their endings will reside
within the memory of the VMHTS.
My initial dilemma involves whether to have the declension-clue
contained within the [theta]-word itself, or to require a specific
communication to the VMHTS of which declension is involved.
You could perhaps tell which declension were involved if the [theta]-
nodes dealt with genitive singular forms, because they distinguish the five
Latin declensions, except that the second and fifth declensions both end in
"i" in the genitive singular.
I think we must recognize a general principle that, if the VMHTS
habituates such a large array as a declension, then there really ought to be
at least some kind of modification-delta to reference that array. Now, we
already theorize that the cases in any declension will be summoned by a
function-delta.
No, as I was thinking further about the dilemma, I realized that the
answer might lie in a third possibility. The Latin noun "miles, militis" is
an example of a Latin noun where you cannot derive either the nominative
or the genitive from the other. But if you have both, you are clearly on the way
to declining a noun of the third Latin declension.
Remember, whatever forms you have going out, have to be recognizable
coming in.
Nolarbeit 13. OCT 1977
Habituating Inflected Noun-Declensions
A quandary has recently developed as to how the VMHTS ought to identify
the appropriate declension for a Latin noun. Fortunately, we can attack the
problem by looking at its quasi-mirror-reflection in the address-memory of
the passive left side of the automaton.
We can imagine a way by which the address-memory could quite
efficiently handle an incoming inflected Latin noun. Under this idea, the
address-memory could have two different ultimate-tags for a Latin noun, such
as "miles." The first ultimate-tag could, if necessary, identify the
nominative form of a conceptual noun. The other ultimate-tag would identify
the basic stem of the noun, onto which case-endings are attached. Thus
there could be one ultimate-tag for "m-i-l-e-s" and another for "m-i-l-i-t-"
as the stem of the genitive form "militis." There would have to be these
two separate ultimate-tags if the stem were not fully present in the
nominative, or were in an altered form.
The nifty value of such a system lies in what we can then do with the
letters (sounds) remaining after the stem of a noun. The activation of the
ultimate-tag of a stem could cause the remaining sounds to go quickly back
to the entry channel into the address-memory, so that these case-endings
could now themselves be identified as to what case and function-delta they
represent.
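The two-ultimate-tag scheme can be sketched as follows. Dictionaries stand
in for the ultimate-tags and the case-ending store near the entry channel;
the scheme and the "miles, militis" example are from the text, while the
code itself is my own toy rendering.

```python
# Sketch: the address-memory holds two ultimate-tags for "miles" --
# one for the nominative form and one for the stem "milit-". An
# incoming form is matched against stems; the sounds remaining after
# the stem are resubmitted to the channel to identify the case.

STEM_TAGS = {"miles": "SOLDIER", "milit": "SOLDIER"}  # nominative + stem
CASE_ENDINGS = {"": "nominative", "is": "genitive",
                "i": "dative", "em": "accusative", "e": "ablative"}

def decode(form):
    """Return (concept, case) for an inflected third-declension form,
    or None if no stem-plus-ending analysis succeeds."""
    for stem in sorted(STEM_TAGS, key=len, reverse=True):
        if form.startswith(stem):
            ending = form[len(stem):]     # sounds remaining after the stem
            if ending in CASE_ENDINGS:    # resubmit ending to the channel
                return STEM_TAGS[stem], CASE_ENDINGS[ending]
    return None

print(decode("militis"))  # ('SOLDIER', 'genitive')
print(decode("miles"))    # ('SOLDIER', 'nominative')
```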
A major question now is, would such case-endings have to be sorted out
as to which of the five declensions they belonged to? Such sorting could,
of course, be done, especially if there were some kind of enigma-node or
modification-delta to link each concept-noun permanently to its proper
declension. We don't want nouns like "servi," "militi," and "rei" to
interfere with one another.
I think we should keep in mind that the passive address-memory is one
of the most "cross-wired" and thoroughly organized systems in the putative
automaton. There could be all kinds of gates established adjacent to it to
sort out classes of data, such as ending-groups for noun declensions.
The various declension-endings would be brief groups of [SIGMA]-sounds
lying pretty near to the entrance to the address-memory channel. (I just
got an idea: What if we made the reverberating short-term memory a function
of the address-memory channel?) Their ultimate-tags could be logically
gated in conjunction with the declension-identifying modification-deltas so
as to associate the proper function-deltas. Now, there might have to be a
kind of system-wide notification-device in the address-memory to signal that
these declension-ending ultimate-tags are really secondary ultimate-tags.
16 MAR 1978
-O -- O -- O-
/ \
O O - O - O O
/ / \ \
O O O O O
/ / O O \ \
/ O O O O \
| / O O \ |
O O O O O O
| | | | | |
O O O O O O
| | | | | |
O O O O O O /----(B)
| | | | | | |
O O O O O O /------(B)-----+-----(I)
| | | | | | | |
O O O O O O | \----(U)
| | | | | | |
O O O O O O | /----(B)
| | | | | | | |
O O O O O O (B)-------+-------(I)-----+-----(I)
| | | | | | | | |
O O O O O O | | \----(U)
| | | | | | | |
O O O O O O | | /----(B)
| | | | | | | | |
O O O O O O | \------(U)-----+-----(I)
/ / / | |
O / / | \----(U)
/ O / |
O / O | /----(B)
/ / | |
O / | /------(B)-----+-----(I)
/ | | |
O | | \----(U)
| |
B O-----------------------------------/ | /----(B)
| |
I O----------------------------------(I)-------+-------(I)-----+-----(I)
| |
U O-----------------------------------\ | \----(U)
| |
| | /----(B)
| | |
| \------(U)-----+-----(I)
| |
| \----(U)
|
| /----(B)
| |
| /------(B)-----+-----(I)
| | |
| | \----(U)
| |
| | /----(B)
| | |
(U)-------+-------(I)-----+-----(I)
| |
| \----(U)
|
| /----(B)
| |
\------(U)-----+-----(I)
|
\----(U)
Ideas to go with today's diagramming
- There should be lots of auditory short-term memory loops, and they
should exist in a kind of associative grid or network where each loop has a
kind of attention-valence. I've been puzzling over this problem for several
months, and now I think I see my way clear.
Let me restate the problem. In September 1977 I had worked out
hypothetically how language-words would enter the mature mind and succumb to
immediate decoding through the conceptual address nodes. It was obvious
that words, phrases, and even whole sentences would have to be able to
circulate in a short-term memory so that the brain would have the option and
capability of submitting any coded aggregate multiple times to the "front"
end of the conceptual address decoder. Whenever the mind perceived a break
at the end of an aggregate (word), it would automatically submit any ensuing
code to the front, on the assumption that a new word is beginning. Well,
one important value to short-term memory circulation lay in the idea that a
word which was garbled the first time through could be resubmitted after
looping through short-term memory. Likewise a compound word could be
submitted slowly and deliberately, in hopes that the conceptual decoder
would glean secondary or tertiary meanings from the elements of the
compound. By now one can imagine a line of poetry going through many times
with all kinds of remote associations coming out. The problem was, what
would be the nature of the auditory short-term memory? Would it be a really
long channel in which the oldest traces would constantly be dissolving? Now
I think not.
The solution which I have just perceived this evening is to have a
whole "family" of short-term memory loops. Their breadth would be that of
how many phonemes are within the language of the mind. Their length would
be some apparently arbitrary length depending on how long an utterance the
mind would be expected to remember all at once. This length-question poses
no special problem. All we have to do is design it long enough to cover
those sentence-lengths that a human can normally remember in short-term
memory. I'm sure that it is an empirically available statistic, and many
conventional psychologists have probably worked on determining just what the
length is. This situation is just another case where our full-blown design
would be in accordance with the human reality, while our prototype design
would be some severely limited simulation.
But to go on with today's solution. There would be a typical short-
term memory loop of arbitrary length. The mind would have a "family" of
many of these loops. Now we get to a part of the description which is very
pleasing to the would-be designer. We will say that the artificial mind has
to have a "liminal" number of such loops, but above the liminal number more
loops just make the mind more powerful - if you will, more intelligent.
That feature is pleasing to the designer because it establishes an area
where he can plan to add more loops as permitted by such resources as time
and money.
Now how does the family of loops operate? Let's say there are thirty
loops. Empty or "faded" loops have an automatic tendency to line up at the
auditory input port to onload any next-arriving coded utterance, whether
from the outside world or from an interior thought-line. Therefore there is
a kind of "distributor" which holds loops ready in queue to receive loads.
One after another the queued loops take on loads. A loaded loop detaches
from the reception-queue. The family of loaded loops must exist on a kind
of "valence-topology." By this I mean that there will always be one loaded-
loop most ready to recirculate its load through the conceptual decoder.
Each loaded loop has a recirculation-valence derived from such things as the
perceived importance of the utterance, or its novelty/shock value, or its
associability to circumambient thought and experience. This associability
notion is pretty serious, and I should not just blithely claim it. But
there could easily be temporary association lines as follows. They would go
from the conceptual decoder to any qualifying one of the, say, thirty short-
term memory loops. Any noun, any concept, could be temporarily linked (in
the short term) between the concept-decoder and that short-term memory loop
in which the noun or concept occurred. Say, we could even use this method
to keep track of the antecedents of pronouns. Anyway, key words or key
concepts could serve associatively to bind together freshly remembered
utterances in the family of short-term memory loops.
This same associativity might somehow play a role in the passage of
information out of short-term memory into long-term permanent memory. Of
course, we don't usually remember statements verbatim in our long-term
memory. Therefore, an addition to permanent memory is more like the factual
alteration of a large corpus of belief or knowledge. Memory of the events
in passage of time, instead of being a corpus, can probably be more of a
stringing effect, where minutiae of memory are held together by a serial
associative string. Yes, we could establish something called
"chronomnemics" to deal specifically with this operation of the stringing
together of memory-minutiae in temporal succession.
Let us return to the family of loops. There would be built in a
tendency over time for a loop to lose its load and return to the reception-
queue. Such fade-out would occur if the load of any loop is not refreshed.
Now, a decision has to be made here as to whether refreshment means that the
same loop with the same contents is enhanced, or whether just the
information itself is enhanced by flowing into one or more additional loops.
It looks to me as though refreshment ("rehearsal") occurs by the
process of a memory going from one loop into a new loop. Thus, if we
refresh an utterance over and over in our mind, each time it would fill up a
new loop, so that gradually a whole group of loops could become loaded with
the same informational content. Of course, this chosen process allows for
and makes possible changes such as conscious alteration or gradual,
unconscious distortion.
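The family of loops just described can be given a minimal sketch. The
queue-of-faded-loops, the recirculation-valence, and refreshment-by-copying
are from the text; the class name, the valence numbers, and the choice of
thirty loops as a default are illustrative assumptions.

```python
from collections import deque

class LoopFamily:
    """Family of auditory short-term memory loops: faded loops queue
    at the input port; loaded loops carry a recirculation-valence;
    rehearsal copies a load into a fresh loop."""

    def __init__(self, n_loops=30):
        self.queue = deque(range(n_loops))  # empty loops awaiting loads
        self.loaded = {}                    # loop id -> (utterance, valence)

    def onload(self, utterance, valence):
        loop = self.queue.popleft()         # next queued loop takes the load
        self.loaded[loop] = (utterance, valence)
        return loop

    def most_ready(self):
        """The loaded loop most ready to recirculate its load through
        the conceptual decoder (highest valence)."""
        return max(self.loaded, key=lambda k: self.loaded[k][1])

    def refresh(self, loop):
        """Rehearsal: the memory flows from one loop into a new loop;
        the old copy remains until it fades."""
        utterance, valence = self.loaded[loop]
        return self.onload(utterance, valence)

    def fade(self, loop):
        del self.loaded[loop]               # unrefreshed load dissolves
        self.queue.append(loop)             # loop rejoins reception-queue

family = LoopFamily()
a = family.onload("John was sleeping.", valence=0.4)
b = family.onload("Running! He's running!", valence=0.9)
print(family.most_ready() == b)  # the high-valence load recirculates first
```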
17 MAR 1978
___ ___ ___ ___ ___ ___ ___
| | | | | | | | | | | | | |
| S | | T | | M | | L | | | | | | |
| H | | E | | E | | O | | | | | | |
| O | | R | | M | | O | | | | | | |
| R | | M | | O | | P | | | | | | | ____
| T | | | | R | | S | | | | | | | ____/ |
| | | | | Y | | | | | | | | | ____/ |
|___| |___| |___| |___| |___| |___| |___| ____/ |
||| ||| ||| ||| ||| ||| ||| ____/ |
_ ||| ||| ||| ||| ||| ||| ||| / CONCEPTUAL |
|_|--((+---((+---((+---((+---((+---((+---((+---| |
|_|--(+----(+----(+----(+----(+----(+----(+----| ADDRESS |
|_|--+-----+-----+-----+-----+-----+-----+-----| |
\____ DECODER |
\____ |
\____ |
\____ |
\____|
17 MAR 1978
There can be no will without knowledge of the options available to the
will. So we can conceive of the will, for one thing, as having a
representation of all the available options. Now, the will does not just
slide over a field governing all the individual muscles. Rather, the will
probably floats over a quasi-surface of very numerous nodes, where each node
can represent a conceptualized, habituated action consisting of a string of
many activations of individual muscles. There might be a node for reaching,
for bicycle-riding, for jumping, and so on. Yet the will cannot be
dissociated from its available options.
We conceive of the will as a motor function. It has at least two main
subdivisions: muscular sequence activation and verbal thought.
One method would be to establish a special association-accumulator to
be known as a volition register. The volition register would be like a
cylindrical pole having on itself nodes which would constantly be trying to
initiate mental activity. Instincts, bodily appetites, conscious plans, and
so on, could be represented on the volition register. Each node could have
a kind of priority-valence which would determine how frequently the node
fired out a demand for initiatory action. For instance, as an organism got
hungrier and hungrier, the appetite-node would fire more and more
frequently.
Of course, the above paragraph does not describe a complete will. It
just describes a mechanism for focussing the will. Now I would like to
suggest a quasi-two-dimensional bouleumatic accumulator system for enabling
the deliberate execution of action demanded by a node on the volition
register.
On the volition register, the node of presently highest priority would
become coupled to a "volitional accumulator." This volitional accumulator,
"VA," would accept both positive and negative inputs in a process where
positive accumulation to a certain satiety-point would actually cause
initiation of the action. Thus a decision-process is stretched over time
and the whole mind gets to watch and affect the decision taking shape.
Indeed we probably only need one such VA to serve as the seat of the will in
the mind. How the positive and negative inputs arise is a different
question. However, they would be the result of the processing of all
available related knowledge (belief). We could say that positive
accumulation in the VA is automatic unless contravened by negative
accumulation. Now here's a trick. Negative accumulation could simply be
defined as any protracted, prolonged fetching of related associations. In
other words, we're not really value-judging the inputs. We're decreeing a
positive process which will just run its course unless you stop it. We're
saying that any pause to reflect is automatically negative. After all, if
you haven't thought through the relevant data, you can't make an informed
decision.
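The trick just described can be sketched in a few lines. The rule that
accumulation is positive by default, that any pause to reflect counts as
negative input, and that action fires at a satiety-point are from the
text; the satiety value, step sizes, and function name are illustrative
assumptions.

```python
# Sketch of the Volition Accumulator (VA): a decision stretched over
# time, where positive accumulation is automatic unless contravened
# by reflection, and the satiety-point triggers initiation of action.

def run_volition(reflection_ticks, satiety=5, step=1.0, inhibition=1.0):
    """reflection_ticks: set of tick numbers on which the mind pauses
    to fetch related associations. Returns the tick at which the
    action fires, or None if deliberation never resolves."""
    va = 0.0
    for tick in range(100):
        if tick in reflection_ticks:
            va -= inhibition   # pause-to-reflect is automatically negative
        else:
            va += step         # positive accumulation is automatic
        if va >= satiety:
            return tick        # satiety-point reached: initiate the action
    return None

print(run_volition(set()))            # 4: unopposed, fires quickly
print(run_volition({1, 3, 5, 7, 9}))  # 14: drawn-out deliberation
```

Notice how an insistent demand-node could keep the issue alive while
prolonged objections repeatedly push the accumulator back down.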
18 MAR 1978
To continue with yesterday's discussion of the mental will, so far we
have the Volition Register (VR) and the Volition Accumulator (VA).
Every habituated muscle sequence is a potential target of the mental
will. Each sequence was developed with respect to a purpose. Such purposes
can be very generalized, even though the action sequences are very specific.
For instance, walking is a very specific action sequence, but the purposes
for walking can be infinitely various. So a purpose can be rather
conceptual and abstract. However, I tend to imagine a purpose as being
analogous to a kind of osmotic pressure. That visualization is why
yesterday I established the notion of a Volition Register. It was easy to
imagine bodily appetites and instincts finding their way to expression on
the volition register. After all, they can be genetically "hardwired" as
such. It is harder to figure out how conscious plans would reach the
Volition Register.
One domain that might lend itself to figuring out is that of mental
curiosity. I likened a purpose to osmotic pressure. Well, I can imagine a
quasi-osmotic pressure of incomplete information resulting in curiosity.
For instance, an organism perceives a container and immediately wonders what
is inside the container. Or, better yet, the organism experiences a
physical blow or push and immediately wonders as to the causation of the
action. Yes, causation provides a good source for the disequilibrium of
curiosity. It could be either designed or taught into an artificial mind
that it should generally seek to discover the immediate causation of
perceived consequences.
Now, how do we conceptualize causation-curiosity? It is interesting to
note that with humans causation-curiosity often extends only a limited
number of steps off into murky terra incognita. It takes great physicists
like Einstein and Heisenberg to really pursue matters all the way unto first
principles.
Having stopped and thought, I get this idea: The desire to discover a
cause for a consequence is basically the desire to find something with which
to associate a consequence.
When a phenomenon presents itself to a mind, I can imagine the mind
querying itself as to logical associations. The causation-curiosity
phenomenon could be quasi-two-dimensional in the following way. The mind
perceiving a consequence would, on the one hand, be seeking any logical
association to the consequence. On the other hand, the mind would perhaps
have learned to bring in and apply the concept of the immediate past as a
second dimension to its search for anything associated with the perceived
consequence. Thus I imagine a kind of two-arrowed vectoring. The mind
experiences two disequilibria in looking both for something related and for
something immediately past.
I often have the hunch that many mental phenomena are exponential and
nonlinear in the way just described.
Now, these disequilibria of search are probably something to be handled
by a general mechanism of associativity. The idea goes as follows.
Percepts are to be synthesized into units as much as possible. Then they
can be handled by unitary tags. The General Associativity Mechanism (GAM)
has hold of tags governing unitary percepts laid down in memory. Any memory
track would have a GAM connected with it for the very purpose of organizing
the contents of the memory track. Associative GAM-tags, if activated
backwards to the engram, cause the memory percept to flow back through the
perception channel. During flowback, opportunity arises for many
subassociations to come into play. Thus a single GAM-tag can fetch a bulky
memory aggregate, but during flowback the aggregate can be analyzed with
particular subassociations being looked for or expected.
Now, it can somehow be the design of the GAM to have a logically
compulsive need to associate a perceived consequence along the two above-
mentioned vectors. It is easy to imagine the physical reality of such a
compulsion. There would be, so to speak, three registers with which the GAM
would operate. The Percept Register (PR) would contain momentarily the GAM-
tag, or even many subtags, of the unitarily perceived happening or
consequence. The Percept Register would simultaneously hold a kind of
Substantiality Index (SI) or measure (SIM?) relating to and indicating the
degree of logical complexity and substantiality inherent in the aggregate
percept momentarily being registered in the Percept Register.
Secondly, the Lateral Associand Register (LAR) would be under the
above-mentioned quasi-osmotic pressure to come up with something that can be
associated with the contents of the Percept Register. The pressure would be
towards arriving at an equilibrium of the Substantiality Index Measure
(SIM), there being an SIM connected with the Percept Register and also one
connected with the Lateral Associand Register.
This compulsion system would not require equality to exist between the
SIMs. However, it would require significant correlation. (I don't mean
"correlation" in the strict statistical sense.) Now, here is where
volitional motivation comes in. To whatever extent the SIM of the Lateral
Associand Register did not measure up to the presumably high value obtaining
in the SIM of the Percept Register, to that compulsive extent the GAM would
send out a signal to a node on the Volition Register, where in turn the
valenced or prioritized node would demand that attention be directed towards
discovering some second phenomenon which could be logically associated with
the aggregate percept momentarily being registered in the Percept Register.
Voila, a desire to know something has been created and expressed as a
motivation of the system, all done with switching circuits, quasi-osmotic
pressures, and the disequilibria of index measures.
I have not forgotten the third register in the GAM. It would be the
Temporal Associand Register (TAR), and it would constitute that second
vector, which, together with the first vector of the LAR, would create the
exponential phenomenon of causation-curiosity. The TAR would also have a
Substantiality Index Measure (SIM), and the compulsive need for it to rise
in value to meet the PR-SIM, just as the LAR-SIM is trying to do, would be
expressed as an analog signal to the Volition Register.
Now, I haven't stated whether the TAR-SIM signal and the LAR-SIM signal
would go separately or jointly to the Volition Register. I would like to
theorize them as going to the same VR-node with a summation-effect upon the
priority or valence of the node. After all, the two signals come from the
same GAM.
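The summation-effect of the two SIM shortfalls can be given a one-function
sketch. The three registers and the single VR-node are from the theory
above; the numeric SIM values are arbitrary illustrative units.

```python
# Sketch of the three-register GAM: the shortfalls of the Lateral
# Associand Register (LAR) and Temporal Associand Register (TAR)
# SIMs against the Percept Register (PR) SIM are summed onto one
# causation-curiosity node on the Volition Register.

def curiosity_valence(pr_sim, lar_sim, tar_sim):
    """Analog demand sent to the causation-curiosity VR-node."""
    lateral_shortfall = max(0, pr_sim - lar_sim)
    temporal_shortfall = max(0, pr_sim - tar_sim)
    return lateral_shortfall + temporal_shortfall  # summation-effect

# A substantial percept with nothing yet associated to it creates a
# strong desire to know; a well-associated percept creates none.
print(curiosity_valence(pr_sim=9, lar_sim=1, tar_sim=2))  # 15
print(curiosity_valence(pr_sim=9, lar_sim=9, tar_sim=9))  # 0
```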
We could now have a homeostatic effect. Once the causation-curiosity
node achieves release-priority on the Volition Register, then the Volition
Accumulator (as theorized yesterday) couples to the operant node and starts
accumulating towards a go-ahead signal for initiating action. But in this
case, what action is to be initiated? According to yesterday's theory, the
action to be initiated has to be already expressed and ready to go at the
competent node on the Volition Register. The Volition Accumulator isn't an
enabling bridge; it's a firing mechanism. So far in today's written thought
all we have done is bring a desire for causation-knowledge to the Volition
Register.
A curiosity-GAM must therefore probably bring even more to the Volition
Register. I can theorize the process as follows. Perception GAMs are
obviously on the passive side of the mind, even though we are here and now
designing initiatory mechanisms into them. On the active side of the mind,
along with muscle-sequences as for walking, there must also be something
that we could call an Associative Motor Attention Mechanism (AMAM). Just as
there would be a separate GAM for each perception track, so there would
probably arise a separate AMAM to correspond to each perception GAM. There
would be an Associative Motor Attention Mechanism for hearing, one for
seeing, one for each sense. Now, each perception-GAM could have an
associative link to its corresponding AMAM in such a way that the AMAM would
be dynamically represented conjunctively with the causation-curiosity node
at the Volition Register.
Now, after coupling but before go-ahead, the Volition Accumulator (here
is a new part of the theory) would cause non-volitional, memory-only-type
associative impulses to flow through the involved parts of the Motor Habit
Tagging System (see diagram 10 SEP 1977). In other words, the mind would be
pre-thinking whatever motor actions it were thinking of taking. The pre-
thought of each motor action would flow through consciousness, giving the
system its opportunity to object negatively to any contemplated action.
Remember, yesterday's theory says that any prolonged associational
occupation with the contemplated action automatically constitutes inhibition
or interdiction within the Volition Accumulator. However, there can
obviously rage a battle royal within the mind, where the insistence of a
demand-node on the Volition Register can keep forcing the issue, within the
Volition Accumulator, of a motor proposal to which the general associative
consciousness prolongedly raises objections. Thus we have a picture of how
a mind might engage in intense, drawn-out deliberation while contemplating
an action. We can see how the Volition Accumulator becomes almost the
physical seat of consciousness and will, because it is within the VA that
the action-release-potential waxes and wanes. Truthfully, however,
consciousness and the will are spread out to wherever free associations
reverberate in the reaches of the mind.
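The waxing and waning of action-release potential described above can be put into a tiny sketch. This is purely illustrative; the names, numbers, and the idea of modeling objections as a per-tick subtraction are my assumptions, not anything specified in the journal.

```python
# Hypothetical sketch of the Volition Accumulator (VA) deliberation loop.
# Threshold, pressures, and tick model are assumptions for illustration.

FIRE_THRESHOLD = 10.0

def deliberate(demand_pressure, objections, max_ticks=100):
    """Accumulate action-release potential; prolonged associative
    occupation with the action (objections) counts as inhibition."""
    potential = 0.0
    for tick in range(max_ticks):
        potential += demand_pressure      # demand-node keeps forcing the issue
        potential -= objections(tick)     # associative objections inhibit
        potential = max(potential, 0.0)
        if potential >= FIRE_THRESHOLD:
            return tick                   # action released at this tick
    return None                           # action never released

# A mind with no objections releases the action quickly:
quick = deliberate(2.0, lambda t: 0.0)
# A mind whose objections persist deliberates much longer:
slow = deliberate(2.0, lambda t: 1.5 if t < 20 else 0.0)
```

The "battle royal" of the text appears here as the tug-of-war between the constant demand pressure and the objection term; the action fires only when the net potential finally crosses the threshold.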
During today's written discussion I have formulated the idea that
specific motor sequences might be linked specifically to certain passive
memory networks. The ontogenesis of such links would probably be a
situation of side-by-side development. After all, motor sequences had to be
tailored to fit initial purposes. So the thread of reference proceeds from
initial purposes down through subsequent purposes and applications.
Come to think of it, motor attention-getting mechanisms as described
today could be the general key to remembering and bringing to bear even the
most general-purpose motor sequences.
While reading back into today's work above, I thought of an idea for
the general representation of conscious plans in volition. You see, the
work I just did on causation-curiosity involved having the mental organism
look backwards into time in search of causation. For general-purpose
volition, perhaps we should have the mind look forward into time in search
of solutions.
A perceived or even thought-up phenomenon could be interpreted as a
problem if it set off a rash or burgeoning chain of associative activity.
Say, this idea even tells me how specific motor sequences might get
involved. As I was already theorizing today, the ontogenesis of motor
sequences results in specific linkage to passive memory networks.
Therefore, if an emerging problem sets off an associative vortex, that same
vortex will elicit tags referring directly to motor sequences, and we can
design that the motor tags should queue up at the problem node on the
Volition Register.
Under this idea, just about everything that presents itself to the mind
could be interpreted as a problem. However, the prioritizing mechanism of
the Volition Register causes the mind to treat as problems really only the
most pressing phenomena.
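The prioritizing just described can be sketched with an ordinary priority queue. The class name and interface are hypothetical; the journal specifies only that the Volition Register treats as problems the most pressing phenomena.

```python
import heapq

# A small sketch (assumed interface) of the Volition Register's prioritizing:
# every phenomenon may be posted as a potential problem, but only the most
# pressing one commands treatment.

class VolitionRegister:
    def __init__(self):
        self._heap = []                       # max-heap via negated urgency

    def post(self, phenomenon, urgency):
        heapq.heappush(self._heap, (-urgency, phenomenon))

    def most_pressing(self):
        return self._heap[0][1] if self._heap else None

vr = VolitionRegister()
vr.post("itchy sleeve", 1)
vr.post("smoke smell", 9)
vr.post("idle daydream", 0)
top = vr.most_pressing()
```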
This idea of each vortex being handled as an almost palpable problem
seems especially felicitous and fortunate. It is a way for the focussed
mind to become specifically attentive to nonspecific happenings, that is,
the general diffusion of dynamic associations.
I suppose we should devise something to be called the Problem
Extraction Mechanism (PEM) and deposit it along with causation-curiosity in
each General Associativity Mechanism (GAM).
I was now momentarily tempted to try to use the Problem Extraction
Mechanism to abolish the need for the mechanism of causation-curiosity in
each GAM. However, I think I realize that the mechanism of causation-
curiosity really is needed as essential to the whole domain of attention-
getting. In fact, attention-getting can occur first, and then perhaps the
Problem Extraction Mechanism can also come into play. After all, the
functioning of the mechanism of causation-curiosity could in itself be
sufficient to set off the Problem Extraction Mechanism.
Each GAM could have something analogous to a summation-column that
would measure the general level and intensity of associative activity
spreading out around a percept as a proto-problem. We could call this
summation-mechanism the Associativity Index Measure (AIM), and it would be
similar to the Substantiality Index Measure (SIM) by which the mechanism of
causation-curiosity functioned. However, the AIM would measure diffuse
activity spreading out, whereas the SIM would measure a centered, punctiform
phenomenon.
A high AIM-level would cause a GAM to transmit an emerging problem to a
node on the Volition Register. Simultaneously motor proposals flushed out
by the problem-vortex would queue up at the VR-problem-node for adoption or
rejection after consideration during operation of the Volition Accumulator.
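The Associativity Index Measure as described, a summation of diffuse activity spreading out around a percept, can be sketched as spreading activation over an association graph. The graph contents, decay factor, and threshold value below are invented for illustration.

```python
# Illustrative sketch (assumed names and values) of the Associativity Index
# Measure: spread activation outward from a percept and sum the diffuse
# activity; a high total marks the percept as an emerging problem.
from collections import deque

def aim(graph, percept, decay=0.5, depth=3):
    """Sum activation spreading out from `percept` through `graph`
    (a dict: node -> list of associated nodes)."""
    total = 0.0
    frontier = deque([(percept, 1.0, 0)])
    seen = {percept}
    while frontier:
        node, act, d = frontier.popleft()
        total += act
        if d < depth:
            for nb in graph.get(node, []):
                if nb not in seen:
                    seen.add(nb)
                    frontier.append((nb, act * decay, d + 1))
    return total

AIM_THRESHOLD = 2.0   # assumed: above this, the GAM posts a problem-node

graph = {"storm": ["roof", "leak"], "leak": ["bucket", "repair"]}
level = aim(graph, "storm")
is_problem = level >= AIM_THRESHOLD
```

Note the contrast with the SIM as drawn in the text: this measure sums a diffuse, spreading quantity, whereas the SIM would measure a single centered point.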
I am quite pleased (perhaps prematurely) with yesterday's and today's
working out of how a mind might exercise will. I certainly had to translate
several phenomena into equivalent but unobvious interpretations.
For instance, I translated the notion of positive and negative
influences on the will. I simply declared that all manifestations of the
will start out as positive and that any interfering or temporizing
associativity is to be interpreted as negative. Thus value is functional
rather than epiphenomenal.
I translated intellectual curiosity into the homeostasis of
disequilibria, and thence into a mechanism of attention-getting.
Finally I translated the recognition of a problem into the detection of
any massive vortex of associativity.
Gradually I think I am mapping out the operant regions of a functioning
mind. The systems which I have described this week can become so complex
and recursive that I can almost picture the hum of a mind processing the
inexhaustible material of external perceptions plus its own internal
reflections.
When the mind is rather inactive, it is able to daydream and follow
tenuous chains of thought because there are no higher vortex levels
demanding attention and action at the Volition Register. However, if
something important is suddenly remembered or reasoned out, the raging
vortex of associativity galvanizes the problem-solving mind.
24 MAR 1978
Creating a "Nommulta Information Index"
I am amassing so many notes and especially so much printed matter that
I feel compelled to take a large cardboard box with dozens of freely
obtained escrow envelopes and start a large file to be indexed by a
"Nommulta Information Index," or "Nominfo Index." The Nominfo (Index) will
access the following areas, preceded below by their intended locations:
1. (PDF-box) Nolarbeit Theory Journal, with index. (Nolarbeit Theory
Index?)
- master file; manuscript storage file; safety file; attache file.
- extra papers file.
- Note: The Nominfo will access only the index to the Theory Journal,
and then only partially, during cross-reference.
- Note: The safety file will be kept somewhere safely away from the
other files.
2. (PDF-box) Nolarbeit Hardware Journal. (With index?, Nolarbeit Hardware
Index?)
3. (PDF-box) Nolarbeit Financial Ledger (= record of expenses).
4. (PDF-box) Nolarbeit Topical Files.
- Structured, organized files of work on specific topics or projects.
Excerpts from the strictly date-serial Nolarbeit Theory Journal would
not have to be date-serial in these topical files.
- Perhaps topics should be indexed in the Nolarbeit Theory Index?
5. (PDF-box) Publications/Documents File - "PDF."
- Brochures, catalogs, clippings, correspondence, manuscripts, memos,
photocopies, software as documented computer programs, etc.
6. (Bookshelves) Nolarbeit Reference Library. (With catalog?)
7. (Passim) Extrinsic Reference Locations.
- Public and private libraries; unheld periodicals; persons as
consultants or authorities; data bases; vendors and manufacturers;
institutions and agencies; existing systems and hardware.
5 APR 1978
Machine/Brain Paralleling
Recently I have read a series of four excellent articles on the brain
by Professor Kent in Byte magazine. It is important to take issue with a
lot of points raised in the articles. Right now I just want to record
general impressions and ideas.
Kent wrote a lot about how lower functions of the central nervous
system are designed to run independently if not interfered with from on
high. I think he wrote that the rationale of control is for higher levels
to act generally to inhibit lower levels, which would operate massively on
their own if left alone. (I will have to re-examine this notion in the
articles.) Anyway, general ideas were put forth. One idea was that nervous
mechanisms were pyramided by evolution, so that myriad mechanisms run in
parallel while subject to intervening control from on high.
Well, all these articles (including the aphasia-article in "Human
Nature") cause novel reactions in me with my Nommultic viewpoint.
One reactive notion I get is that the brain, which seems so monolithic,
is really like a stick-forest of parallel components. For instance, in the
various aphasias various abilities can be lost. Yet an essentially integral
brain remains which can operate around the losses if they are not too
severe. So that's the one reaction I get - a notion of columnar
parallelism.
However, to the first reaction (in the preceding paragraph) I get yet a
second reaction, namely the suspicion that the true ultimate seat of the
mind rests highly independently above the various automatic processing
mechanisms.
By the "seat of the mind" I mean the part where the mind thinks in
natural language and makes completely free decisions of volition.
A good picture of the free conscious mind is that of a mind just
thinking or meditating, following a line of internal thought rather than
responding to external stimuli. Nommultic theory says that the
consciousness can be free because there is really a dichotomy between
elements of high consciousness and elements of lower processing mechanisms.
I would hypothesize that most uncontrollable, unpreventable crossflow is
one-way, in the direction from lower mechanisms across into high
consciousness. (Exceptions might be blushing and sweating out of fear.) In
other words, external or lower stimuli can forcefully impinge upon
consciousness, but consciousness ensconces itself in its own realm and does
not have to send out signals unless it is implementing volitional decisions.
Yet within its own realm the consciousness makes decisions as to what train
of thought to follow and how long to meditate without initiating any motor
muscle actions. Now, there is some uncertainty as to whether these
associative happenings are really decisions or just associative outcomes.
One further situation is rather clear, though. When the meditating mind
expresses its desires and decisions verbally, by thinking such words as "I
want...," it achieves a high level of symbolic abstraction, of abstract
thought. Whether or not the underlying associative process constituted a
decision, the symbolic formulation of "I want" achieves the higher-level
reality of a formalized decision, which possesses an enhanced associative
dynamism and which can continue in existence as a trace in memory.
6 APR 1978
Mindrealm versus Perceptflow
The terms "mindrealm" and "perceptflow" are meant to refer respectively
to the mental consciousness/volition area and the total input sensorium.
In Nommultic theory I maintain that the quantitative aspects of the
mindrealm and the perceptflow can be quite non-interdependent.
A typical task of a mindrealm is to know the external world. In
humans, apparently the input sensorium can be quite drastically limited
while still the mindrealm is able to know the world full well. (In this
regard, I think I had better read the autobiography of Helen Keller in
search of insights.) AI researchers place so much importance on vision, yet
a person's intelligence is not diminished by blindness from birth.
So what I am looking for is a way to use a minimal sensorium to get
information about the world into the associative memory of a verbally adept
mindrealm.
One extreme would be to try to imagine a grandiloquent mindrealm with
associative emptiness at every conceptual address node. One immediately
jumps to the idea that the words would refer to other words, but that
recursive idea can't work ("os lien mega to aitema," roughly Greek for "the
demand is too great"), because there can't be
any discriminating or prioritizing of associations if they all hold the same
value of zero.
So the problem is one of discriminating. Half in jest, I say that we
need not so much to know things truly as just to tell them apart.
The abstract, symbolic nature of natural language plays an important
role here. Once things are discriminated and given individual names, all
kinds of memorable attributes can be tacked on to the pristine cognition of
each named entity through the process of symbolic-level reference in
sentences referring to the named entities.
7 APR 1978
Conjecture on How We Know Things
When I contemplate an artificial mind which is supposed to get to know
the external world, I imagine a potentially very intricate linguistic system
just waiting to associate its conceptual nodes with knowledge gleaned
through the senses.
But what can we know? Of the five main senses, taste and smell convey
a lot of information but convey it in too narrow a fashion. They convey
identifiers, not knowledge of essence.
The sense of hearing is a serial sense, not parallel. As such it is
excellent for conveying coded information, but perhaps not so good for the
all-at-once capturing of essence.
Such elimination leaves us with touch and vision. For the purposes of
this treatise, let us consider vision to be a special or enhanced case of
the sense of touch.
Touch and vision both can convey widespread, parallel, instantaneously
two-dimensional knowledge. Let us say that they both convey "array-
knowledge" or "array-information." Thus they are distinct from the other
three senses mentioned above. If the sense of balance is another sense in
its own right, it, too, is not an "array" sense.
So I would like to speculate that an "array" sense is necessary for the
development of general knowledge about the world.
Now, I know that the professional AI researchers are working
energetically on visual pattern recognition. I also know that vision is
totally unnecessary for intelligence of the highest order.
Such elimination leaves us with touch as the only essential sense for
epistemology.
Now, for years I have been captivated by the notion that the simplest
geometric entities of point, line, and curve contain some kind of central
key to the whole problem of epistemology. It has seemed to me that only on
the level of these simplest geometricalities can we know the world directly
and unerringly. We cannot without prior history know a blob, or a
confusion, or a plethora.
The trouble was, for a long time (as in August of 1976) I applied this
notion of geometricality mainly just to vision. But vision at this stage is
perhaps too complicated for me. Let me for a while in this treatise confine
myself to consideration of touch as an essential "array" sense.
All the variations of a small enough array can be known absolutely
thoroughly because the small number of content-points permits only a very
limited number of variations.
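The claim that a small enough array can be known exhaustively is easy to make concrete: a binary array of n content-points admits only 2**n variations. The function below is a sketch of that counting argument, nothing more.

```python
from itertools import product

# A sketch of the claim above: a binary "touch array" with n content-points
# has only 2**n variations, so a tiny array can be known exhaustively.
def all_variations(n_points):
    """Enumerate every possible on/off pattern of an n-point array."""
    return list(product((0, 1), repeat=n_points))

patterns = all_variations(3)   # only 8 patterns for a 3-point array
```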
Scratch-Leaf - from 7 APR 1978
- virtuality
- We can know utterly simple things directly.
- Man! What if you had a mind which didn't need "knowledge gleaned
through the senses," but which could deal directly and internally with pure
logic, so that it could ascribe properties to simplest particles and thence
decogitate whole universes deducible from the logic?
10 APR 1978
In recent theory I have concentrated on the senses of touch and vision.
Today I get a curious idea about the relative importance of those two
senses: that perhaps touch is more important for the "infant" development
of intellect, and thereafter sight is more important because it conveys
immensely more information. Here follows reasoning behind the idea.
Human vision does not seem actually to get down into point-by-point
characteristics. Rather, as I have been reading in Ernest Kent's "Byte"
articles, human vision does a lot of feature extraction.
First Scratch-Leaf - from 10 APR 1978
- the time-dimension.
- multi-level: arrays below, interconnecting language above.
- simplest geometry.
- interior organization of perception channels themselves.
- two-dimensional (or multi-dimensional) up-and-down cascading of feature-
summation
- remember: The perception array (eye; skin-surface) will probably have one
central point of departure.
- Only the array-senses matter: touch and vision.
Second Scratch-Leaf - from 10 APR 1978
- Perhaps acquisition of intellect is in two stages: bootstrap, by touch;
and ongoing, by vision.
- virtuality: When the eye looks at a scene, the mind is fooled by
"virtuality" into thinking that it is seeing the whole scene at once.
12 APR 1978
It is rather obvious that there must be feature-summation in the
recognition of any aggregate, of everything with more than one feature.
Therefore it follows that features, not aggregates, are what we know and
recognize. If we could not perceive and identify each feature, then we
could not cumulatively sum up features to recognize aggregates.
I can imagine an infancy where the organism learns to know features,
and then a maturation where the organism learns myriad aggregates composed
of the various features.
We must establish a Short-Cut Principle (SCP) to mean that an
originally slow and cumbersome mechanism of feature-summation will
automatically follow all available short-cuts in summation to identify an
aggregate. In other words, if we had to use extremely "granulated"
features, the summation mechanism would still work, but in a slow and
cumbersome way. Perhaps "leaps" of the intellect are shortcuts in a
summation process.
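Feature-summation with the Short-Cut Principle can be sketched as follows. The aggregates, features, and stopping rule are hypothetical; the point is only that summation may halt early, a "short-cut," as soon as the accumulated features identify the aggregate uniquely.

```python
# A minimal sketch (assumed representation) of feature-summation with the
# Short-Cut Principle: sum features one by one, but stop as soon as the
# aggregate is uniquely identified, instead of checking every feature.

AGGREGATES = {                      # hypothetical known aggregates
    "cup":   {"handle", "hollow", "round"},
    "plate": {"flat", "round"},
    "ball":  {"hollow", "round"},
}

def recognize(observed):
    """Accumulate observed features; halt (the short-cut) as soon as only
    one known aggregate remains consistent with the summed features."""
    summed = set()
    candidates = set(AGGREGATES)
    for feature in observed:
        summed.add(feature)
        candidates = {name for name in candidates
                      if summed <= AGGREGATES[name]}
        if len(candidates) == 1:
            return candidates.pop(), len(summed)  # aggregate, features used
    return None, len(summed)

name, used = recognize(["round", "handle", "hollow"])
```

Here "round" alone leaves three candidates, but adding "handle" settles the matter after only two of the three observed features, which is the sense in which a "leap" of the intellect may just be a short-cut in summation.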
We would probably not have to design the shortcuts into the machine; it
would find them itself.
The question must be resolved of whether the perception memory channels
hold arrays of raw data or only feature-highlights.
It would be neat (and inexpensive) to design a perceptor that
remembered (for touch and vision) only arrays or registers of collective,
pre-processed features. That is, each slice of perception would be an
engram remembering in a registered way just which of all possible features
were present in the original slice of raw perception. (Don't forget to
consider the time-dimension.)
Along this idea, when we remember a mental image, a registered slice of
features is resubmitted to the highest echelon of our visual perception
system. This highest echelon could consist of a framework internally and
peripherally organized to the highest possible degree. Thus it might be
possible to bring even our visual imagination to bear upon the summit
framework. That is, while we are recalling an engram of registered
features, we might use our imagination to fill in other features. It might
be like having articulated, transparent layers.
I like the idea of having a kind of "gate" along the optic perception
channel. In advance of the gate, between the eyes and the gate, there would
be all the considerable apparatus for the self-organizing of the visual
perception system and for the infancy-stage learning of the standard
features in a registered array.
Postpositional to the gate, there would be the lengthy memory channel
with the oldest memories coming first and with new, fresh memories
constantly being added on at the extremity of the memory channel. I like
this notion because it presents an elegant sequence for the physical
positioning of all that goes on in visual perception and memory. It may
bother someone to think that the oldest, perhaps outdated memories would lie
closest to the recall-gate. But that arrangement is not so bad; it
certainly would be difficult to have the most recent memories always
closest.
There is more detail to how the post-gate channel would operate.
Basically, the level of organization prevailing in the summit-framework
recall-gate would not be improved upon further down the memory channel.
Each (so highly significant) point in the gate would go into a line-
extension all the way down the postgate channel. All the lines as a whole
would remain positionally "registered" just as they were in the summit-array
of the recall-gate. Now I do not necessarily mean that the lines would
physically stay in order with respect to the physical geometry of their
dimensional co-ordinates, but rather I intend an organizational order. Of
course, a high degree of physical, locational order could just chance to
occur!
Nor is it hard to imagine a switching-network or a synaptic growth-
process which would continually provide for the proper elongation of the
registered memory channel.
The elongated post-gate memory channel is a transmission-channel. An
incoming slice of perception (probably pulsed) travels to its ongoing
extremity to be deposited as an engram-slice.
For the slices, we must imagine that at regular, frequent intervals
each extension-line can branch off into a memory-node. The extension-line
itself is not a string of nodes (or is it?), but all along its length it has
slices of nodes that hold the same "register" or array as is held both in
the recall-gate and all along the ordered memory channel. Thus an instant
of visual memory can be recorded as a feature-image on a node-slice.
Because every element of the node-slice is in register with the channel, the
intact image can be "dumped" onto the transmission lines and thus
automatically show up in the recall-gate in just the same array-order as it
was perceived minutes or years previously.
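The registered memory channel just described can be sketched in miniature: each engram-slice is a fixed-order record of which features were present, and recall is a "dump" of a stored slice back onto the lines in the same array-order. Line count, feature indexing, and function names are all assumptions.

```python
# A toy sketch (all names assumed) of the post-gate memory channel: each
# engram-slice records, in fixed "register" order, which features were
# present in one slice of raw perception.

N_LINES = 8                         # transmission lines through the recall-gate
channel = []                        # oldest slice first, newest appended last

def perceive(feature_indices):
    """Deposit a new engram-slice at the extremity of the channel."""
    slice_ = [1 if i in feature_indices else 0 for i in range(N_LINES)]
    channel.append(slice_)
    return slice_

def recall(slice_number):
    """Dump a stored slice back onto the lines; because every node is in
    register with the channel, it arrives at the recall-gate in the same
    array-order in which it was originally perceived."""
    return list(channel[slice_number])

perceive({0, 3})                    # an older memory
perceive({1, 2, 7})                 # a newer memory at the extremity
gate_image = recall(0)              # re-experience the oldest slice
```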
Note: When AI researchers say that we can recognize a shape no matter
its size or orientation, they may be forgetting one thing: we have seen most
shapes at one time or another in many different sizes and orientations.
Small print is difficult to read for two reasons: because, being
small, it is difficult to see, and because in our memory channels we are
unaccustomed to the small-size images.
Nolarbeit Theory Journal 12 APR 1978
Diagram of a Visual Memory Channel
-------
/ \
| Eyeball |
\ \ / /
\ ------- /
---------
/
/
|
|
| _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
|\
| \
/|\ \ Infancy-Stage
/ | \ \
/ /|\ \ \ Self-Organizing
/ / | \ \ \
/ / /|\ \ \ \ _ _ _ _ _ _ _ _ _ _ _ _ _ _
/ / / | \ \ \ \
/ / / /|\ \ \ \ \
/ / / / | \ \ \ \ \ Feature Extraction
/ / / / /|\ \ \ \ \ \
- - - - - - - - _/_/_/_/_/_|_\_\_\_\_\_\_ - - - - - - - - - - -
"recall-gate" |_________________________| "summit framework"
- - - - - - - - | | | | | | | | | | | | - - - - - - - - - - -
oldest memory slice --> o|o|o|o|o|o|o|o|o|o|o|o| Memory Slices
| | | | | | | | | | | |
o|o|o|o|o|o|o|o|o|o|o|o| as Node-Arrays on
| | | | | | | | | | | |
o|o|o|o|o|o|o|o|o|o|o|o| Transmission Lines
| | | | | | | | | | | | - - - - - - - - - - - -
| | | | | | | | | | | |
| | | | | | | | | | | |
| | | | | | | | | | | |
| | | | | | | | | | | | - - - - - - - - - - - -
x|o|x|o|x|x|x|o|x|x|o|x| memory slices
| | | | | | | | | | | |
o|x|x|o|x|o|x|x|o|x|x|x| with some
| | | | | | | | | | | |
x|o|o|x|o|o|o|o|x|o|x|o| nodes occupied,
| | | | | | | | | | | |
x|x|x|o|x|o|o|x|x|o|x|x| some unoccupied
| | | | | | | | | | | | - - - - - - - - - - - -
| | | | | | | | | | | |
| | | | | | | | | | | |
| | | | | | | | | | | |
| | | | | | | | | | | |
o|o|o|o|o|o|o|o|o|o|o|o|---- associative
| | | | | | | | | | | |
o|o|o|o|o|o|o|o|o|o|o|o|----------- tags
| | | | | | | | | | | |
o|o|o|o|o|o|o|o|o|o|o|o|------------------
| | | | | | | | | | | |
newest memory slice --> o|o|o|o|o|o|o|o|o|o|o|o|-------------------------
| | | | | | | | | | | |
Extremity of Expanding
Channel
4 MAY 1978
"Virtuality" in visual perception. In the popular culture there are
various drawings which display two different image-patterns at the same
time. Usually the one image is rather obvious (augenfaellig, in German),
and the other image must be searched for by the eye. While searching for
but not yet finding the hidden image, the observer certainly fixes his
vision in a haphazard way upon many of the separate features of the hidden
image. Suddenly the observer sees the proper configuration of all the
features into the hidden image. Now the hidden image looks actually quite
obvious, and the observer can study it in detail.
This trickery of hidden images gives us valuable information about
visual perception. Our seeing of large-scale objects is greatly involved
with "virtuality," the phenomenon in which a seemingly solid effect is
produced from rather chimerical and insubstantial substrata.
12 OCT 1978
Development of Tactile Awareness and Memory
How does an organism develop knowledge of the sensations arising from
the surface area of its own skin? We are posing this question because we
now want to tie the whole input sensorium together and figure out a way for
an organism to remember its multifaceted existence over time.
Obviously, a skin-surface is analogous to just a grid of sensory
points. Of course, we know that important areas of the human skin, such as
the fingers, have many more sensory nerves than other, less important areas.
We have long relied on our maxim, "Whatever information can be
transmitted can also be recorded." So we can imagine a large information
channel which represented every sensor-point on the skin of an organism.
But we would not want constantly to be taking slices out of that channel and
recording them. Rather, we would have a kind of self-organizing
time-stage in which the organism learned about its skin and from then on
just made any necessary use of its knowledge.
In a way, knowledge of our skin is recursive, because each part of it is
determined in terms of other parts. Yet our whole skin-knowledge doesn't
just collapse upon itself into unity; rather, it branches out into
countless associations. So we might say that each neuron
15 OCT 1978
Over the last several days I have been reflecting a lot upon the
problem of tactile perception and memory. This evening I have begun to
crystallize a few ideas.
Tactile perception must probably have a pre-positioning stage
(embryonic), a self-organizing stage (infancy), and a lifelong record-
forming stage (maturity).
The different human senses are requiring me to theorize different forms
of perception and memory. For vision, I have thus far theorized a memory
slicing mechanism. For auditory perception of words I have had to theorize
a pyramidal coded structure. Now for touch I suppose I must theorize an
altogether different system in which the individual perception points come
into play only as needed (in the maturity-stage), rather than going along on
a Gestalt basis as in my theory for vision.
In trying to devise an as-needed tactile system, my attention goes
towards reflecting upon the central memory train which chronicles the whole
life of the organism. For this purpose I am imagining a lifelong series of
associative discs, each of which for a particular moment constitutes the hub
of associative activity for the organism.
16 OCT 1978
Before I explain the discs, let me approach them from the tactile
system.
The general tactile nervous transmission system is laid down by pre-
positioning in the embryo. As we know, human embryonic growth follows the
nerves. But such pre-positioning is probably not accurate enough for
individual perception points. So we must imagine a self-organizing stage in
the infant organism. There are probably masses of brain tissue which are
pre-positioned so as to embody the self-organizing of the tactile sensory
reception system. In other words, contiguity on the skin surface
establishes a logical contiguity within the self-organizing areas. (It
doesn't matter if the fingertips play a massive information-transmitting
role; the principles remain the same.) In still other words, we can
imagine the process in which an individual sensory point on the skin would
command the attention of the conscious mind. The first news breaking
rapidly to the conscious mind would be that of which major sense is
signaling, namely, touch, and not sight or hearing. Rapidly, perhaps
subconsciously, the mind would associate into which side of the body is
signaling, which major area or limb, and right on down to the most precise
point. At the same time, certain motor associations would probably be
suggested. This quasi-chain-of-command structure from on top has to be kept
in mind as we theorize about the organization at the lowest levels.
Assuming that a sensory impulse reaches the consciousness, it can be traced
(consciously or not) back to its source only by much associative branching
down the tree. The organism ultimately knows where the source is because at
each level the live transmission sets off extra-parietal associations to
lifelong knowledge. In other words, the hierarchical tree itself (of the
self-organized tactile area) could not pinpoint a sensory point, but the
"voting" extraparietal associations can.
So the pre-positioning works embryonically from genetics, and the self-
organizing works by physical and/or logical contiguity. But how does
tactile information reach the main consciousness? We must return to the
notion of the lifelong-memory associative discs.
The notion of the disc is just a lemma or tool for theorizing. Over
some years I have devised systems for various senses (vision, audition) and
motor effects. All along I had in mind that there must be a central flow of
consciousness. As a lemma I choose the image of a flat, coin-like disc so
that I can imagine both many peripheral connections and a lifelong series of
the discs to constitute the central tapestry of lifelong personal memory.
The trouble with such a dynamic, forward-moving system is, how do you hook
up to it such static aggregates as self-organized tactile systems or even
auditory coded-structure conceptual systems?
One way to do it is to go by the double-edged idea that only used links
survive, and that survival is assured because links are used. We already
wrote today that pin-pointing is done by virtue of extraparietal
associations. Well, we can have those very same extraparietal associations
merge into the consciousness discs and provide a hierarchical thread which
moves along with the temporal progression of consciousness.
We might theorize that often-used association lines become thicker and
stronger, perhaps because free-floating transmission-material tends to
attach itself to links receiving heavy usage, thus turning such links into
"trunk lines."
We might also theorize that two links to the same node can, by heavy
usage, fuse and bypass that node to form a single, more direct link.
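The two theorized rules, thickening of heavily used links into "trunk lines" and fusion of two links through a shared node into one direct link, can be sketched together. The counter representation and the fusion threshold are my assumptions.

```python
# An illustrative sketch (assumed representation) of the two theorized rules:
# heavily used links thicken into "trunk lines," and two heavily used links
# through a node can fuse into a single, more direct link bypassing it.

FUSE_THRESHOLD = 5                 # assumed usage count needed for fusion

links = {}                         # (a, b) -> usage count ("thickness")

def use_link(a, b):
    """Each use thickens the link from a to b."""
    links[(a, b)] = links.get((a, b), 0) + 1

def try_fuse(a, mid, b):
    """If both a->mid and mid->b are heavily used, form a direct a->b link."""
    if (links.get((a, mid), 0) >= FUSE_THRESHOLD and
            links.get((mid, b), 0) >= FUSE_THRESHOLD):
        links[(a, b)] = min(links[(a, mid)], links[(mid, b)])
        return True
    return False

for _ in range(6):                 # heavy usage thickens both links
    use_link("smoke", "fire")
    use_link("fire", "danger")
fused = try_fuse("smoke", "fire", "danger")
```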
8 NOV 1978
_____________________________________
| |
| Immediate Motor Activation Memory |
|_____________________________________|
________ _______________
| |================== _____________ | |
| L M |================== \ / | Habituating |
| i e |================== \ AVR / | |
| f m |================== \_______/ | Motor |
| e o |================== _____________ | |
| l r |================== \ / | Sequencing |
| o y |================== \ AVR / | |
| n |================== \_______/ | Memory |
| g |================== _____________ | |
| |================== \ / | |
| P |================== \ AVR / | |
| e |================== \_______/ | two- |
| r |================== _____________ | |
| c |== time-pulsed === \XX / | dimensional |
| e |== associative === \XXXXX / | |
| p |== tags ========== \XXXXXXX/ | plus |
| t |================== | |
| i |================== | time- |
| o |================== | |
| n |================== | dimension |
|________|================== |_______________|
AVR = Accumulating Volition Register.
8 NOV 1978
An Accumulating (Integrating) Volition Register
Late this evening I have been diagramming and theorizing on how a
Lifelong Perception Memory might link up with a Habituating Motor Sequencing
Memory. I have assembled enough ideas that it is time now to write them
down.
A Seattle newspaper reported recently (P-I, 25.OCT.1978) that we have
around 639 muscles. At any rate, whatever number of muscles there were in
us or in an automaton, that number would serve as direct activation lines
for Immediate Motor Activation Memory, which could perhaps more aptly be
termed just neuronal hardware rather than memory.
Coming from Immediate Motor Activation Memory I have in mind a habit-
channel consisting of segments of Habituating Motor Sequencing Memory. Each
segment would represent a habituated motor sequence.
Now and then I come upon surprising little conclusions which accelerate
the Nolarbeit. One such realization today was that at the front of any
habituated motor sequence must be the tag from passive perception which
(tag) initiates the sequence. In other words, habituated motor sequences
don't develop in a causeless, sourceless vacuum, but rather they are
assembled starting with and in response to a passive experience stimulus.
In fact, I suspect that the actual process of assembling each habituated
motor sequence will consist of a long series of tentative memory-to-memory
link-ups, which will go on in a quasi-searching way until the full, refined
sequence becomes established as a reliable response to the particular
stimulus. Before this evening's theorizing, I had vaguely envisioned a
motor habituating mechanism which would do the iterative work all on the
motor side, but now I see that that would be a displacing of causation flow.
Having described the original departure-point for each habituated motor
sequence, I must now write down my present ideas for the nature of each
sequence. Basically I have in mind either a kind of fan-out or a nodular
concatenation. From the Immediate Motor Activation Memory block would come
a specific number of lines, each of which would directly activate
musculature in an unrefined, unsequenced way. These control lines would all
go down through the segmented channel of the Habituating Motor Sequencing
Memory. At each segment, though, there would be available the neuronal
hardware necessary for assembling a superstructure that would concatenate
control nodes in a firing sequence. For each sequence there might even be
available "empty nodes" for interspersing within a sequence so as to smooth
out the timing. The relative impetus of each control line within a firing
sequence might be established by using more than one firing of the
particular node in a series or cluster of identical nodes. That is to say,
to emphasize a node just place it, say, three times in a row.
There would probably not be a fan-out, but just a nodular concatenation
that gave the appearance of a fan-out.
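The nodular concatenation above can be sketched in a few lines of modern
code. This is only an illustrative model, not the neuronal hardware design;
the class and tag names are hypothetical. It shows the three ideas of this
entry: the passive-perception tag sits at the front of the sequence, "empty"
nodes are interspersed to smooth the timing, and repeating a node in a row
gives its control line extra impetus.

```python
# Hypothetical sketch of a habituated motor sequence as a nodular
# concatenation (8 NOV 1978 ideas); names are illustrative assumptions.

EMPTY = None  # an empty node: a timing gap that activates no muscle line

class HabituatedSequence:
    def __init__(self, stimulus_tag, nodes):
        self.stimulus_tag = stimulus_tag  # tag from passive perception
        self.nodes = nodes                # concatenated control-line firings

    def fire(self, activate):
        """Play the sequence: call activate(line) once per node firing."""
        for node in self.nodes:
            if node is not EMPTY:         # empty nodes just consume a beat
                activate(node)

# Emphasis by repetition: control line 2 is placed three times in a row.
seq = HabituatedSequence("see-ball", [1, EMPTY, 2, 2, 2, EMPTY, 3])

fired = []
seq.fire(fired.append)
print(fired)   # -> [1, 2, 2, 2, 3]
```

The same concatenation, read left to right, gives the appearance of a
fan-out without actually being one.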
From After Midnight, 9 NOV 1978
The motor habituation system which I am describing is not very sparing
of neuronal hardware. It builds up a refined sequence by endless trial-and-
error, and it simply sacrifices the neuronal hardware used up in the
erroneous trials. However, the system does perform an important function called for
in the Theory Journal work on motor ontogenesis of 18MAR1978, namely the
idea that motor habituation should proceed in side-by-side development with
initial purposes residing in the passive memory network. However, in the
present work we are tending to move the causation flow out of the motor area
and more into the passive memory area. We are tending now to say that
impulses won't go into the motor area unless physical motor activation is
really supposed to occur. One might say that we are removing the "Volition
Lines" from the system diagram of 10SEP1977. Of course, the equivalent
substance of those volition lines is moving diagrammatically leftwards (for
then and now) into the highly attuned knowledge of the Lifelong Perception
Memory. From simultaneous passive experience the organism knows quite well
the nature of any habituated motor sequence which it may contemplate.
By the way, this present theorizing sheds new light on what happens
when we contemplate an action such as lifting a hand. Our belief that we
know intimately the feeling of the impending movements is not motor memory,
but passive memory of the chain of results of motor memory. Why, even now
in this cold apartment, my upper arm flinched a little when I contemplated
lifting my left hand as I began this paragraph.
This present work on general motor habituation works out well enough on
paper that we may even apply it to the "Verbal Motor Habit Tagging System"
of 10SEP1977.
Now we should go into present ideas on an "Accumulating Volition
Register," which concept has arisen out of the work beginning on 17MAR1978.
You see, it may be possible to have a combined Volition Register and
Volition Accumulator. Each Accumulating Volition Register (AVR) would
actually just be a function of the linkage-line between the passive motor
stimulus on the diagrammatic left and its habituated motor sequence on the
diagrammatic right. Each linkage-line would automatically tend to be slow
to fire, depending however on the intensity and frequency-tempo of its
input.
The selection of a motor action results from remembrance. To remember
a motor action is to propose performing it again. Present thought in the
Lifelong Perception Memory can associate backwards to a whole gamut of
Passive Motor Stimuli. If a particular motor stimulus stands out as
desirable or advisable, its passive-to-motor linkage-line (Volition Link)
will begin to accumulate, to "integrate," towards conveying a firing-signal
to the habituated motor sequence. In the work of 18MAR1978 we said that
"any prolonged associational occupation with the contemplated action
automatically constitutes inhibition or interdiction within the Volition
Accumulator." Now we can see more clearly how all this accumulation or
waxing and waning of volition is really a searching process within the
"Lifelong Perception Memory," which is our temporary lemma-type term for the
stream of consciousness. If all conscious indications point to a motor
action as immediately necessary, it can be pushed through quite rapidly:
the Volition Link fills and fires. The mind can dawdle and hesitate by
contemplating a haphazard group of actions. It can be designed into the
Volition Links that they shall tend to drain if not accessed repeatedly.
We have thus drawn a kind of volitional demarcation line between the
diagrammatic left and right sides of passive and motor memory. We have
gotten away from a "seat" of volition and we now envision volition as the
sum outcome of the albeit haphazard totality of consciousness-cum-memory.
Even though in recent weeks I have been trying to draw together all the
main features of Nommultic theory, tonight I have had to limit myself to the
combined subsystems of passive and motor memory. However, motor activation
memory has been suggesting itself as the right departure point for putting
the whole Nommultic system together.
It has been obvious that animals such as dogs have to have a motor-
sequence selection system. A dog learns tricks, i.e., sequences of
behavior. A dog on the loose is constantly making choices and decisions as
to what learned actions to initiate. So I have been feeling safe about
designing a motor system with the idea that intellectual natural-language
systems can be superimposed by design upon such faculties as are present in mute
animals. In fact, in accordance with the Theory Journal work of 24JAN1973
and 8MAR1977, dealing with minimal systems, I am always on the lookout for
simple, model-type constructions which become possible with new advances in
the theory. Tonight's work suggests how to build a robot animal capable of
learning and executing behaviors. Of course, even the simplest Nommultic
systems are quite complex because they require so much in the way of memory
and processing.
11 NOV 1978
Fixed Inputs into Expanding Channels
In combining all the so-far developed portions of the Nommulta, we
encounter the problem of connecting fixed systems with expanding systems.
The input sensorium and the motor output constitute basically fixed
systems. In the infancy-stage, there is self-organizing of these systems,
but thereafter they are fixed.
For all the various systems of experiential memory we have had to
envision amazingly long, expanding channels of engrams. We have had to
design these expanding channels as if they stretched off into the distance
away from the locale of the sensorium. For instance, with the visual
channel as designed on 12APR1978, there was apparently no other way but to
design an expanding edge which received its information from no source other
than its own interior "pipeline." Lo and behold, our main experiential
channel seemed to stretch off into nowhere. Yet at the same time it had to
have associative tags exiting all its length to provide access to each
engram-slice.
The idea was that all engrams laid down for a particular moment of time
would be joined together by associative tags.
Whereas the visual memory would be of a "simultaneous," "sliced"
nature, the experiential auditory memory would be of a serial nature.
However, I can see now that memory for sounds would also have to have that
"pipeline" effect. All the auditory input lines important for
differentiation would have to go into the auditory pipeline. For each time-
extended sound there would have to be a temporal series of nodes in the
auditory pipeline. Each momentary combination of frequencies would be a
mosaic of sound, a mosaic of nodes. Now, into our sound channel we can
probably design a special stringing effect. Mosaics which come in a
temporal group will automatically be strung together.
I think I may have just made a startling insight, according to which we
would not be able to recognize time-extended sounds without our short-term
memories. I was thinking of the fact that, from just a few notes of music,
we can quickly recognize and identify a particular symphony by a particular
composer. No, actually I see a more orthodox way to do it, which follows.
Let's say we want to recognize three successive mosaics, say, three notes of
a symphony. The first note going through the pipeline would stimulate all
pre-engrammed mosaics of the note. When the second and third notes went in,
they, too, would stimulate their pre-engrammed mosaics. However, in the
symphonic spot where three adjacent mosaics were being stimulated, the
string-effect would cause a kind of summation of the stimulations, and
therefore the strongest and fastest associative output would come out of
that spot in auditory memory where the actual engram of the three symphonic
notes truly was.
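The summation just described can be sketched as a sliding comparison over
the auditory pipeline, which is one simple way to realize the string-effect
computationally. The representation is my assumption: the pipeline is a
list of engrammed note-mosaics, each incoming note stimulates every matching
engram, and the string-effect sums the stimulation over adjacent positions,
so the spot holding the true three-note engram answers strongest.

```python
# Hedged sketch of string-effect recognition of a note group; the
# list-of-notes pipeline representation is an illustrative assumption.

def recognize(pipeline, notes):
    """Return the pipeline index where summed stimulation peaks."""
    best_start, best_score = None, -1
    for start in range(len(pipeline) - len(notes) + 1):
        # One unit of stimulation per adjacent mosaic matching its note.
        score = sum(pipeline[start + k] == n for k, n in enumerate(notes))
        if score > best_score:
            best_start, best_score = start, score
    return best_start

# Pipeline of engrammed mosaics; the group "G G Eb" truly sits at index 4.
pipeline = ["C", "E", "G", "C", "G", "G", "Eb", "F", "G"]
print(recognize(pipeline, ["G", "G", "Eb"]))   # -> 4
```

Isolated single-note matches elsewhere in the pipeline score only one unit
of stimulation apiece, so the strung triple wins.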
In both the laying-down and recognizing of extended sounds, there is a
question as to where in the extended sound-series the associative tag would
be attached. I don't think it would come at the beginning or at the end....
No, it would have to be attached at the beginning, but by virtue of the
stringing effect it would really be attached to the first several mosaics.
I thought I had an insight that the short-term memory would have to
perform a match-up like two railroad trains alongside each other. But then
I saw that, even so, the match-up would have to occur inside the main
auditory "pipeline."