
Sunday, September 15, 2013

On Intelligence: Hawkins

What is intelligence? How do we know things? What is creativity? What is consciousness? These are questions that philosophers have pondered for centuries, with the earliest records coming from the Greeks.

I have been fascinated with these topics for many years and have searched unsuccessfully for answers. Philosophy books on the topic were too... well, philosophical... and unsatisfying. Books on brain function more than ten years old were very interesting, though most of our knowledge at that point came from brain dysfunction, where damage to specific brain areas consistently produced specific deficits. Neuroanatomy books were helpful in learning the parts of the physical brain and the associated high-level notions of what each part does.

Recent advances in neurophysiology were more encouraging. There is a tremendous body of knowledge around the chemical and electrical signaling mechanisms in the axons, dendrites, and synapses of neurons. The known details of these mechanisms begin to peel back the layers of how this all fits together, but their sheer detail and complexity are a barrier to understanding how and why intelligence results from this amazing system. Reading several books on the biochemistry and neurophysiology encouraged my belief that we could answer the questions on intelligence, knowledge, and creativity, but neuroscientists were still unable to provide those insights.

Jeff Hawkins seems to have shared my interests and similar experience, but has applied his insight and resources to posit a theory which he calls the memory prediction framework. To understand how the brain works, you must begin with how and why it evolved -- the biological value of the brain:

  • For millions of years, the animal brain evolved as a mechanism to control behavior in order to increase survivability.
  • The brain accepts input from the senses -- vision, hearing, touch, smell -- in order to drive action through the motor systems. Lion -- RUN!
  • The brain consists logically of two parts: the primitive brain parts that are shared by reptiles, birds, and mammals, and the neocortex, which is exclusive to mammals. Knowledge and memory in insects seem to be stored and persisted in DNA, while higher-level animals store information in the neuronal structures of the brain. The neocortex evolved to provide higher-level functions on top of the primitive brain, organized in a multi-level hierarchy.
  • Memory allows animals to store experiences perceived from their environment. Learning improves survivability by improving the value of the accumulated memories. Darwinian logic drives this process -- adaptations that improve survivability perpetuate the gene pool.
To understand how this works, some other key ideas are as follows:
  • Sensory inputs are at an extremely low level. Vision arrives over about one million nerve fibers, while hearing arrives over about thirty thousand.
  • In addition to spatial content, all senses (and everything in the brain) have a time dimension -- quite different from modern computer architectures. Vision, which we perceive as static spatial content, always has a time component; in fact, the eyes move in a process called saccades about three times per second to focus on specific aspects of the visual field. We actually cannot discern objects using our sense of touch without moving our hands or fingers over the object; i.e., our object recognition through touch comes from a temporal comparison of sensory inputs. The temporal input is used to constantly modify our working model of our environment.
  • Actual cognition occurs at higher levels in the neocortex hierarchy, where concepts are formed. In visual area V1 (which does some pre-processing on the signals from the optic nerves), we begin to see lines and edges. In area IT (an intermediate level), we perceive objects such as lions. At higher levels, we develop strategies to deal with our perceptions.
  • The main purpose of the brain is to form a model of our environment -- mostly to deal with the present, but in our evolved state, also to plan for the future. Note that at any point in time, our senses are only aware of a small subset of our environment (e.g., the visual field in front of us), and it is our memory model of the overall environment (and our history of related situations) that allows us to respond to it in a comprehensive manner. If, for example, you are sitting alone at home and hear a certain pattern of footsteps and associated sounds, you realize that your child has arrived home without actually seeing him or her -- and if you have more than one child, which one!
We can now begin to talk about the actual functioning of the brain:
  • As sensory information moves up the cortical hierarchy, it gets refined into generalized, invariant representations. So we can recognize a song in any key even though the input from the auditory tract is totally different. We can recognize a face from close up, far away, tilted, or from the side, even though the pixel representations are totally different. Invariant representations allow one copy of the Gettysburg Address to be used to recite it orally, write it down by hand, or type it on a computer -- even though the mechanics of reproducing it are totally different.
  • As we learn new behaviors and concepts, they get pushed down in the memory hierarchy, allowing higher levels the freedom to cogitate on how we might deal with the object(s) of focus. Note that at birth, we start with a blank slate; i.e., we know nothing. As sensory inputs are received, we store them as raw temporal images until we begin to associate them with our prior experiences and generalize what we perceive.
  • The human brain has about 10 billion neurons, each with somewhere between 1,000 and 10,000 synapses or connections: roughly 100 trillion synapses in all. Everything we know and do, including our intelligence, knowledge, creativity, and consciousness, is contained in these synaptic connections.
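Hawkins' example of recognizing a song in any key gives a concrete way to picture an invariant representation. The toy sketch below (my own illustration, not the brain's actual encoding) stores a melody as the intervals between successive notes rather than as absolute pitches, so the same stored pattern matches the tune at any transposition:

```python
def intervals(notes):
    """Reduce a melody to the pitch steps between successive notes."""
    return [b - a for a, b in zip(notes, notes[1:])]

# A short melody, and the same tune transposed up four semitones.
# The note numbers are MIDI-style pitches, used here for illustration.
melody = [60, 60, 62, 60, 65, 64]
melody_transposed = [64, 64, 66, 64, 69, 68]

# The raw "sensory input" differs, but the invariant form is identical
assert melody != melody_transposed
assert intervals(melody) == intervals(melody_transposed)
```

The analogy is loose, but it captures the point: the higher-level representation discards the details that vary (absolute pitch) and keeps the relationships that identify the song.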
And finally, the Hawkins memory prediction model:
  • The many levels of the memory hierarchy communicate both up and down the hierarchy and laterally within levels.
  • The idea of context is very important. Concurrent neuronal firings determine the context of the situation and how we react, so a loud noise on the 4th of July is normal, but a loud noise in a library would be a cause for concern. 
  • The key to understanding the memory prediction model is the prediction part: the current context causes specific levels within the memory hierarchy to pass predictions down the hierarchy in anticipation of what we expect to happen next. The feeling of surprise (like missing the last step when walking up or down a staircase) arises when the predictions do not mesh with the subsequent sensory inputs. Predictions are the essence of understanding -- we do not understand something that we simply observe without having an associated knowledge base that automatically forms predictions.
  • These predictions are crucial to understanding how we function because the predictions make it possible for us to anticipate what is going to happen so that we can automatically adapt. So in driving down the highway, we ease off the gas pedal as we sense that the driver to our right is about to change lanes unexpectedly.
So how does this relate to intelligence? The memory model is constantly updated based on sensory inputs. In the case of action (say, in sports), it is concurrently sending action commands through the motor cortex to adapt to this changing environment. The memory predictions driven down the memory hierarchy (including the motor cortex) are based on invariant representations of similar past experiences. These invariant representations are what we might also call analogies; depending on how they are applied, they might also be called biases or prejudices. These predictions run smoothly as long as they match the incoming signals; when they do not, something like an interrupt occurs that focuses attention on the mis-prediction, and learning occurs.
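The predict-compare-interrupt cycle described above can be sketched as a small loop. This is a deliberately simplified caricature of one level of the hierarchy, with names of my own invention rather than anything from the book:

```python
class MemoryLevel:
    """Toy model of one level of the memory hierarchy: it predicts the
    next input from the current context and learns when surprised."""

    def __init__(self):
        self.model = {}  # context -> predicted next input

    def predict(self, context):
        return self.model.get(context)

    def observe(self, context, actual):
        predicted = self.predict(context)
        if predicted != actual:
            # Mis-prediction: an "interrupt" focuses attention, and the
            # memory is updated -- this is where learning occurs.
            self.model[context] = actual
            return "surprise"
        return "expected"

level = MemoryLevel()
print(level.observe("top of stairs", "one more step"))  # first time: surprise
print(level.observe("top of stairs", "one more step"))  # now predicted: expected
```

The point of the sketch is the control flow, not the data structure: as long as predictions match inputs, processing proceeds automatically; only a mismatch escalates to attention and changes the stored model.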

An interesting thing to think about is whether we know anything! What we "know" are neuronal associations, largely in our neocortex. These associations are formed based on relationships to past experiences. When we drill down in our knowledge to a lower level -- peeling back the onion -- our knowledge becomes deeper, but I would question whether there are any absolute truths -- only our current memories related to a topic. An example of this is our initial model of the atom based on electrons circulating around a nucleus, and how we are still discovering subatomic particles such as new mesons that peel back the onion even further. Ask yourself what you actually know about electrons, protons, and neutrons, let alone quarks, leptons, bosons, mesons, or fermions!

Jeff Hawkins is an accomplished computer scientist, and many of his insights come from analogies from his original field. It would appear that many concepts from computer science apply to how the brain functions as well.
  • Objects: Neurophysiologists have identified a mechanism for objects in the brain. One research study identified the "Bill Clinton" neuron (possibly a series of neurons) in a set of test subjects. Any topic related to Bill Clinton caused the same set of firings in the brain, whether events related to his presidency or Monica Lewinsky. Within a time period in which Bill Clinton is the topic of attention, a series of neuronal firings maintain the context of Bill Clinton.
  • Ontologies: Although On Intelligence predates the popularity of topics such as RDF, the semantic web, and ontologies, the memory prediction framework seems to map nicely to the concept of an ontology; i.e., a set of objects and concepts (sets of neurons) that are interrelated through synaptic connections. For an ontology, we map subjects to objects using predicates.
  • Attributes: Based on experience, objects stored in the brain have relationships to many other objects, including "objects" such as colors, textures, belief systems, etc., otherwise known as attributes. Related synaptic connections that are not currently firing may fire later, when some other aspect of the context raises the action potential of the appropriate synapse(s) above the firing threshold.
  • Algorithms: Although there are many differences among brain cell types, there are also great similarities. Essentially, the process the brain uses to grow and wire neurons is driven by our DNA and is the same for all brain cells. There are no unique "hearing" cells vs. "vision" cells, or language cells vs. motor cells. The fetal brain develops from a single bud that grows in packages around structures called columns. The cells that grow out of a single "column cell" have long-term relationships to each other through synaptic connections, in addition to the extended connections they have to other parts of the brain. The mechanism of growth -- how axons can grow long distances (as much as a meter) to connect to other parts of the body -- is all handled by the same DNA-driven algorithm.
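The ontology analogy above maps naturally onto subject-predicate-object triples. A minimal sketch in that spirit (the entities and predicates here are invented for illustration, not drawn from the book or any real RDF vocabulary):

```python
# A tiny triple store: (subject, predicate, object), in the spirit of RDF
triples = {
    ("BillClinton", "isA", "Person"),
    ("BillClinton", "heldOffice", "USPresident"),
    ("Lion", "isA", "Predator"),
    ("Lion", "triggers", "RunResponse"),
}

def related(subject):
    """Everything associated with a subject -- loosely analogous to the
    cluster of neurons that fires for any Bill Clinton-related topic."""
    return {(p, o) for s, p, o in triples if s == subject}

print(related("BillClinton"))
```

In the analogy, each triple plays the role of a synaptic association: activating a subject makes its whole neighborhood of connected concepts available, which is roughly what the "Bill Clinton neuron" study observed.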
There is insufficient space in this review to include the many excellent examples from On Intelligence that help illuminate Hawkins' ideas. For those of you who may have studied neurophysiology, the hints and references to the research will make you comfortable that this theory is indeed grounded in the hard science we currently understand, while also making clear where he takes leaps of faith in developing his theory. I would highly recommend reading the book in order to fully appreciate the magnitude and impact of these ideas.
