

THE VETERAN NEOPHYTE
QUANTUM LUNCH
DAVE JOHNSON, WITH MICHAEL GREENSPON

I've just read the book Alan Turing: The Enigma, by Andrew Hodges, an outstanding and profound--if thick--biography of Alan Turing. Turing's work touched on some deep philosophical questions about the
relationship between brains and computers. I naturally had my own opinions, but I wanted to talk to somebody
with more knowledge of brains who was also computer savvy--someone with a foot in both worlds. So I paid a
visit to Michael Greenspon, who develops software models of neural systems with Walter Freeman at UC
Berkeley. We got together for lunch and had a very interesting conversation. Here's a sample:
[Audio embellishment: clinking of nice glassware as Dave and Michael dine in the sun]
DKJ: I heard something recently that struck me as profound: computers don't manipulate reality, they manipulate representations of reality. The profound part is that this seems to be what brains do, too. Alan Turing, for much of his life, wanted to build a brain. He firmly believed that consciousness was caused only by the operation of the brain, and that the brain's operation could eventually be described at any level of detail.
[Michael looks patiently skeptical, but Dave plunges ahead, oblivious, waving his fork excitedly.] Further, he had previously proven that in principle, a "universal machine," of which the computer is a finite approximation, could simulate any other logical machine, and thus any logical process whatsoever. So if you could describe the function of the brain as a logical process, you should be able to program a computer to "be" a brain. The description part, of course, is the killer. But I can't help thinking that we'll get there eventually. What do you think?
MCG: Whoa, Dave [almost choking on his exotic Thai salad], I think you've hit the intellectual cul-de-sac of traditional artificial intelligence. The reason it's so hard to describe the operation of the brain as a logical process is simple: it isn't a logical process at all. That's a cerebral approach to a fundamentally biological and physical problem. I'm sure someday we'll be able to logically explain the operation of the brain in terms of physics, but that explanation won't include a computational mechanics based on formal logical operations.
DKJ: But then how do you approach the problem of trying to understand and model brains in your lab, if you can't describe them as logical processes?
MCG: Our approach is that of computational neuroscience; we're doing dynamic modeling at the level of cell populations, using massively parallel machines with a Macintosh front end.
When I say representationalist AI is a cerebral approach, it helps to realize that the cerebral cortex is just a few millimeters thin. It's a tissue essential for generating the separatist intellectual conception of ourselves as humans, but it's really a translucent veneer over the bulk of what our brains do day in, day out, which comes from our animal ancestors. Before we ever learn formal or even natural languages, our brains are already highly developed as processors of spatial, tactile, and kinesthetic information, to name my favorites. This is one reason why the Macintosh has been so successful as a tool--because it's the first readily available machine to offer, at least at the outer layer, a spatially based interface.
DKJ: And the reason that's so great is that our brains process spatial information effortlessly, without our even trying.
MCG: Right, a spatial interface allows us to apply more of our innate biological intelligence in communicating with the machine. But both structurally and functionally, the digital computer as a metaphor for the brain is almost completely inaccurate at every level of analysis.
I think if you look further into the nature of thought and perception, and also look more carefully through microscopes and macroscopes at what real brain tissue is doing, you'll see a physical system that exhibits chaotic dynamics in time, has fractal extent in space, and is inextricably linked to the natural world. Computers are powerful tools for simulating and visualizing these properties, but they don't themselves have these properties yet.
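The chaotic dynamics Michael mentions can be illustrated with a toy system--the logistic map--which is emphatically not a brain model, just a minimal sketch of sensitive dependence on initial conditions, the hallmark of chaos:

```python
# Illustrative sketch only: the logistic map x(n+1) = r * x(n) * (1 - x(n)).
# At r = 4.0 the map is chaotic: two orbits starting a hair apart
# diverge completely after a few dozen iterations.

def logistic_orbit(x0, r=4.0, n=30):
    """Return the first n+1 points of the logistic-map orbit from x0."""
    xs = [x0]
    for _ in range(n):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_orbit(0.4)
b = logistic_orbit(0.4000001)   # perturb the start by one ten-millionth

# Early on the orbits agree; by the end they bear no resemblance.
print(abs(a[1] - b[1]))     # tiny
print(abs(a[-1] - b[-1]))   # order one
```

A simulation like this is exactly what Michael means by computers being good at visualizing such properties without themselves having them: the program is perfectly deterministic and discrete, yet the behavior it traces is chaotic.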
DKJ: Especially the links to the natural world.
MCG: Exactly. If you want to apply computational metaphors to the brain, perhaps the brain is like a fractal architecture computer that can compute infinitely recursive functions in finite time.
DKJ: Oooh, I like the sound of that. Fractals, computers, infinity, and recursion all at once.
MCG: I like it too, but that's really just a structural metaphor. I'm interested in what we can learn about how real brains might work, so that we can apply these principles to next-generation user interfaces and to new non-von Neumann computing architectures.
In an engineering sense, we're after machine perception. That is, we want future machines to interact in the human sensory world, rather than forcing humans to interact in the virtual world of the machine.
DKJ: Yeah, to use or program a computer today you still have to interact on the machine's terms. I think one good approach to changing that is to try to build computational structures that are like the brain, so that our machines will be a little more like us. There are 10 billion neurons in the brain, more or less, right?
MCG: More. And perhaps 10^15 synapses, which you could say is where a lot of the computation is going on.
DKJ: OK, more than 10 billion neurons in the brain, and they're wired together in unbelievably complex ways. The point is this: I'll bet that we can simulate a single neuron fairly closely with a computer, and over time we can get our simulation closer and closer to the real thing, arbitrarily close. Further, I'll bet that someday it will be possible to get 10 billion little computers together and talking to each other. I know this is a little speculative, but my business card says "Limit Pusher," and I feel compelled to live up to it.
MCG: Rave on.
DKJ: So we set this thing up--10 billion little processing nodes--and we turn it on and start feeding it information. What will happen? What will it do? I can't help thinking that whatever it is, it will be something very much like life. And just as mysterious.
MCG: Well, I don't think it's purely an issue of scale. At Berkeley, we're building a new ring architecture parallel machine based on superscalar processors that can accommodate multimodal sensors and effectors. It's called CNS-1 and is spec'd at upwards of 100 billion operations per second.
DKJ: 100 BIPS!
MCG: Right. Or 0.1 TRIPS, which is perhaps a better indication of how far we have to go. We expect CNS-1 will be able to simulate many of the emergent dynamical properties of cell populations observed in real brains--to run what I call the lava lamp model of the mind. But even this much power won't bring us "arbitrarily close" to the wetware. I don't think you'll want to say it's alive or that it works the way a biological brain works.
DKJ: Maybe not, but I think that a network of 10 billion processors could act something like a brain, could seem like a brain, even though it's not one by any stretch of the imagination. That idea fascinates me: that a computer, or a bunch of computers, can behave like something else. This gets back to Turing's thesis that a computer can simulate anything, if you can describe the thing in enough detail. That raises the question, though, of whether the simulation is fundamentally the same as the reality it simulates.
MCG: Is it live or is it Memorex?
DKJ: Precisely. It's like comparing painting on the computer to painting using canvas, brushes, and oils. At one level of description they're identical activities: applying color to a surface in intricate and skillful ways to produce a little piece of space that other humans can look at and experience emotion toward. But the tools differ hugely and, perhaps more important, the experience of using them is completely different. So I guess what I'm saying is that at the right level of description I believe (well, I want to believe) that it's possible to "build a brain."
MCG: Or to grow a brain. I think you're barking up the wrong dendritic tree. It's experience that's essential. Brains are dynamic systems that actively reach out into the sensory world for experience; perception is a creative process, not a passive one. To talk about building a machine with the capabilities of the human brain you have to include the same kinds of connections to the world that humans have. In the real tissue, it goes right down to the level of quantum phenomena and beyond--what I call "real virtuality."
What I've been trying to get across is that real brains operate by virtue of being physically continuous systems; there's an interplay between the nanoscopic and macroscopic, the intrinsic and extrinsic, such that structure and function are not separable. The notion that there exists in brains a "level of description" at which cognition is implemented as logical operations is a convenient fallacy, what John Searle calls "closet dualism." It means, for example, that if you want to start capturing the creative, human aspects of language--not just the literal, but the slang, humorous, ungrammatical, and allusory--you have to model the dynamics of the underlying physical processes.
DKJ: Hmm, this point about not being able to separate cognition from sensory experience is important. It's interesting to compare the development of computers with the development of life. Computer sensors and effectors--the parts of computers that by necessity touch the world--always seem to lag way behind the other parts, the computing parts, in their development. And the gap seems to be widening. So computers are currently wrapped in sensory cellophane, while the connection of biological systems to the world is very strong and high-bandwidth.
MCG: Exactly. It's likely that in biological systems, sensors and effectors developed first and, as part of an evolutionary feedback loop, drove the development of the nervous system. Though now you could say the demands of more sophisticated user interfaces are driving the development of CPUs. The perceptual side is limited to 2-D mouse tracking and 1-D clicks and keystrokes. But speech and pen gestures are about to expand that. Eventually the computer-human interface will be polymodal, including intonation, spatial gestures, eye position, facial expression, and cortical activity patterns--what I call the "think-along interface."
DKJ: It fascinates me that programmers can so easily get sucked into the machine--I knowI've been there-- despite the very limited modes of interaction with it.
MCG: Yes, in programming, I often feel I'm being sucked into a one-dimensional world of historical arbitrariness. I think this comes from the fact that while the complexity of our software systems has increased exponentially, our development tools haven't kept pace. The current tools fail to provide the real-time, interactive turnaround that's crucial to maintaining the creative flow. They force us to think too much about the machine's problems, instead of the human problems we're presumably trying to solve.
DKJ: Amen. And it's true for nonprogrammers, too. So how would you like to see the tools improve?
MCG: Well, besides speed--where speed means real-time, no perceptible delay; anything less is slow--future tools will have semantic knowledge of the process of software engineering and eventually of the application you're building. The code browsers are a good step forward; at least they can automatically determine structure from syntax. The next step is to automate the build process, the incremental linking of components, and the maintenance of an audit trail and nonlinear undo space for source code. Here we start to blur into a dynamic-language sort of model.
DKJ: That's exactly the kind of administrivia that computers are supposed to be good at. But right now, for most of us, the burden is still on the human.
MCG: It sure is. Where we want to head is to shift the focus of the iterative process from the syntax level--compilation, debugging--which is what the machine is concerned with, to the level of design and validation, which is hopefully where the programmer is trying to solve the semantic problems of the application.
DKJ: Way back in the 1940s Turing talked about the fact that ". . . as soon as any technique becomes at all stereotyped it becomes possible to devise a system of instruction tables which will enable the electronic computer to do it for itself." In other words, as soon as we can describe how we do a job, we can program the machine to do it for us. This is happening, but slowly. As an amusing footnote, he went on to say "It may happen however that the masters [programmers] will refuse to do this. They may be unwilling to let their jobs be stolen from them in this way. In that case they would surround the whole of their work with mystery and make excuses, couched in well chosen gibberish . . ." He was a pretty prescient guy.
[Setting his napkin on the table] Well, I guess we should try to wrap it up here; our readers' MacApp builds are probably finished by now, and we'll be losing them soon. Let's try to wring a message out of our ramblings, something developers can take home with them. How about this: Strive to bring computers ever more firmly into the world of people, rather than trying to cram people ever more firmly into the world of computers. The differences can be subtle, but the distinction is very important.
MCG: Well, I think we can and will go much further toward humanizing the experience of using computers. But I don't think we have to couch what we do in gibberish to keep our jobs, because programming is fundamentally a creative discipline. Like other creative disciplines, when you've done it long enough and intently enough, you tend to see its way reflected in everything you perceive. You could say programming is a way of seeing. That leads us to computers as tools for extending human visualization.
[Flipping up his shades] The point is that it's human vision--not the technology--that's crucial. When we create tools and toys and lifestyles that separate and insulate us from nature, we further the consumption and destruction we see all around us. But I think we can see past the empty goal of creating trillion-dollar markets for our products. As humans, we've always had the infinite power to change our minds. It's time we tap that power by creating tools that connect us--to each other, to the earth--and enable us to meet the real life-or-death challenges we face on this planet. As programmers and technologists we're in a key position to determine the future by the choices we make every day. I hope each of us can make every keystroke and every mouse click a step toward a sustainable society.
RECOMMENDED READING
- Alan Turing: The Enigma by Andrew Hodges (Simon & Schuster, 1983).
- The Three-Pound Universe by Judith Hooper and Dick Teresi (Tarcher Press, 1986).
- Who Needs Donuts? by Mark Alan Stamaty (The Dial Press, 1973).
MICHAEL GREENSPON is a doctoral student in the department of Electrical Engineering and Computer Science at UC Berkeley. When he's not cramming for quals, he can often be overheard trying to explain the cost benefits of telecommuting to Apple managers. (We're still not sure when he sleeps.) If the sun's out, you're sure to find him soaking up some of it; since the release of the Macintosh PowerBook and ToolServer, he's hardly been seen indoors except for an occasional rave. In fact, he and Dave Johnson were recently spotted rigging a LAN in the outfield at Golden Gate Park. He does, however, respond to his e-mail: he can be reached via AppleLink as INTEGRAL or on the Internet as mcg@icsi.berkeley.edu.*
Dave welcomes feedback on his musings. He can be reached at JOHNSON.DK on AppleLink, dkj@apple.com on the Internet, or 75300,715 on CompuServe.*
