Archive for the ‘Mind’ Category

By Yarden Katz, published in The Atlantic (see link below)

An extended conversation with the legendary linguist

Photo credit: Graham Gordon Ramsay

If one were to rank a list of civilization’s greatest and most elusive intellectual challenges, the problem of “decoding” ourselves — understanding the inner workings of our minds and our brains, and how the architecture of these elements is encoded in our genome — would surely be at the top. Yet the diverse fields that took on this challenge, from philosophy and psychology to computer science and neuroscience, have been fraught with disagreement about the right approach.

In 1956, the computer scientist John McCarthy coined the term “Artificial Intelligence” (AI) to describe the study of intelligence by implementing its essential features on a computer. Instantiating an intelligent system using man-made hardware, rather than our own “biological hardware” of cells and tissues, would be the ultimate demonstration of understanding, and would have obvious practical applications in the creation of intelligent devices and even robots.

Some of McCarthy’s colleagues in neighboring departments, however, were more interested in how intelligence is implemented in humans (and other animals) first. Noam Chomsky and others worked on what became cognitive science, a field aimed at uncovering the mental representations and rules that underlie our perceptual and cognitive abilities. Chomsky and his colleagues had to overthrow the then-dominant paradigm of behaviorism, championed by Harvard psychologist B.F. Skinner, in which animal behavior was reduced to a simple set of associations between an action and its subsequent reward or punishment. The undoing of Skinner’s grip on psychology is commonly marked by Chomsky’s 1959 critical review of Skinner’s book Verbal Behavior, in which Skinner had attempted to explain linguistic ability using behaviorist principles.

Skinner’s approach stressed the historical associations between a stimulus and the animal’s response — an approach easily framed as a kind of empirical statistical analysis, predicting the future as a function of the past. Chomsky’s conception of language, on the other hand, stressed the complexity of internal representations, encoded in the genome, and their maturation in light of the right data into a sophisticated computational system, one that cannot be usefully broken down into a set of associations. Behaviorist principles of associations could not explain the richness of linguistic knowledge, our endlessly creative use of it, or how quickly children acquire it with only minimal and imperfect exposure to language presented by their environment. The “language faculty,” as Chomsky referred to it, was part of the organism’s genetic endowment, much like the visual system, the immune system and the circulatory system, and we ought to approach it just as we approach these other more down-to-earth biological systems.

David Marr, a neuroscientist colleague of Chomsky’s at MIT, defined a general framework for studying complex biological systems (like the brain) in his influential book Vision, one that Chomsky’s analysis of the language capacity more or less fits into. According to Marr, a complex biological system can be understood at three distinct levels. The first level (the “computational level”) describes the input and output to the system, which define the task the system is performing. In the case of the visual system, the input might be the image projected on our retina, and the output might be our brain’s identification of the objects present in the image we observed. The second level (the “algorithmic level”) describes the procedure by which an input is converted to an output, i.e., how the image on our retina can be processed to achieve the task described by the computational level. Finally, the third level (the “implementation level”) describes how our biological hardware of cells implements the procedure described by the algorithmic level.
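Marr’s distinction is easier to grasp with a toy example from outside neuroscience (a sketch of my own, not from the article): consider the task of sorting numbers. The computational level specifies only the input-output relation; the algorithmic level commits to one particular procedure; the implementation level is whatever physical machinery runs that procedure.

```python
# Computational level: the task, stated purely as an input/output relation.
# Input: a list of numbers. Output: the same numbers in ascending order.
# Any procedure satisfying this check counts as solving the task.
def is_correct_output(numbers, result):
    return sorted(numbers) == result

# Algorithmic level: one particular procedure achieving the task.
# (Insertion sort here; merge sort or quicksort would satisfy the same
# computational-level description by an entirely different route.)
def insertion_sort(numbers):
    result = list(numbers)
    for i in range(1, len(result)):
        key = result[i]
        j = i - 1
        while j >= 0 and result[j] > key:
            result[j + 1] = result[j]  # shift larger elements rightward
            j -= 1
        result[j + 1] = key
    return result

# Implementation level: how the procedure is physically realized.
# Here it is the Python interpreter and a CPU; in Marr's case, neurons.
data = [3, 1, 4, 1, 5]
print(insertion_sort(data))                            # [1, 1, 3, 4, 5]
print(is_correct_output(data, insertion_sort(data)))   # True
```

The point of the separation is that the levels can be studied somewhat independently: one can characterize what the visual system computes before knowing which algorithm it uses, or how neurons implement it.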

The approach taken by Chomsky and Marr toward understanding how our minds achieve what they do is as different as can be from behaviorism. The emphasis here is on the internal structure of the system that enables it to perform a task, rather than on external association between past behavior of the system and the environment. The goal is to dig into the “black box” that drives the system and describe its inner workings, much like how a computer scientist would explain how a cleverly designed piece of software works and how it can be executed on a desktop computer.

As written today, the history of cognitive science is a story of the unequivocal triumph of an essentially Chomskyan approach over Skinner’s behaviorist paradigm — an achievement commonly referred to as the “cognitive revolution,” though Chomsky himself rejects this term. While this may be a relatively accurate depiction in cognitive science and psychology, behaviorist thinking is far from dead in related disciplines. Behaviorist experimental paradigms and associationist explanations for animal behavior are used routinely by neuroscientists who aim to study the neurobiology of behavior in laboratory animals such as rodents, where the systematic three-level framework advocated by Marr is not applied.

In May of last year, during the 150th anniversary of the Massachusetts Institute of Technology, a symposium on “Brains, Minds and Machines” took place, where leading computer scientists, psychologists and neuroscientists gathered to discuss the past and future of artificial intelligence and its connection to the neurosciences.

The gathering was meant to inspire multidisciplinary enthusiasm for the revival of the scientific question from which the field of artificial intelligence originated: how does intelligence work? How does our brain give rise to our cognitive abilities, and could this ever be implemented in a machine?

Noam Chomsky, speaking at the symposium, wasn’t so enthused. Chomsky critiqued the field of AI for adopting an approach reminiscent of behaviorism, except in a more modern, computationally sophisticated form. Chomsky argued that the field’s heavy use of statistical techniques to pick out regularities in masses of data is unlikely to yield the explanatory insight that science ought to offer. For Chomsky, the “new AI” — focused on using statistical learning techniques to better mine and predict data — is unlikely to yield general principles about the nature of intelligent beings or about cognition.
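The kind of “statistical technique” Chomsky has in mind can be illustrated by its simplest case, a bigram model: it predicts each word purely from co-occurrence counts in a corpus, with no internal grammar at all. (This toy sketch is my own illustration, not code from the article.)

```python
from collections import Counter, defaultdict

# A toy bigram model: learn which word tends to follow which,
# purely from counts over a (tiny) corpus.
corpus = "the dog chased the cat the cat chased the mouse".split()

counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent follower of `word` in the corpus, or None."""
    followers = counts[word]
    return followers.most_common(1)[0][0] if followers else None

print(predict_next("the"))      # 'cat'  ('cat' follows 'the' most often)
print(predict_next("chased"))   # 'the'
```

Such a model can predict data reasonably well while saying nothing about why language has the structure it does — which is the heart of Chomsky’s objection.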

This critique sparked an elaborate reply to Chomsky from Google’s director of research and noted AI researcher, Peter Norvig, who defended the use of statistical models and argued that AI’s new methods and definition of progress are not far off from what happens in the other sciences.
Read on: http://www.theatlantic.com/technology/archive/2012/11/noam-chomsky-on-where-artificial-intelligence-went-wrong/261637/.

Robin Hanson predicts what the extraordinary society of the future will look like when emulated minds are ushered in.

Robin Hanson: Extraordinary Society of Emulated Minds – FORA.tv.

Comes highly recommended

Robin Hanson, Associate Professor of Economics at George Mason University, speculates on how systems of class might operate among artificially intelligent machines. Speed and efficiency would be most rewarded, in Hanson’s view, while skills for interacting with humans would be least valued.

Preliminary results are in from a huge online experiment designed to test a flaw in the way the brain stores memories. [VIDEO]

Earlier this year, an online memory experiment was launched on the Guardian blog, and the response was extraordinary. In the three weeks the experiment was live, tens of thousands of people of all ages and from all around the world took part, making it one of the biggest memory experiments ever conducted. Although the researchers only had a couple of weeks to process the responses, here’s a sneak preview of the numbers from a sample of 27,000 participants.

Global Experiment Probes the Deceptions of Human Memory « Neuroscience « WiSci | Life Sciences Blog.

What was the experiment really about?

Among the most surprising discoveries about memory has been the realisation that remembering a past event is not like picking a DVD off the shelf and playing it back. Remembering involves a process of reconstruction. We store assorted features of an event as representations that are distributed around the brain.

In simple terms, visual features are represented near the back of the brain in the areas specialised for visual processing; sounds in auditory processing regions close to the ears; and smells in the olfactory system that lies behind the nose.

To experience the rich, vivid “re-living” of a past event that is remembering, we fit these features together into a representation of what took place.

Does the brain’s wiring make us who we are?
Neuroscientists Sebastian Seung and Anthony Movshon debate minds, maps, and the future of their field.

Moderated by Robert Krulwich and Carl Zimmer
Introduction by Stuart Firestein

Columbia University
April 2, 2012

A highly recommended and important debate

Hosted by Neuwrite

Who (and what) can you trust?
Robots Have Feelings, Too

People are fidgety – they’re moving all the time. So how could the team truly zero in on the cues that mattered? This is where Nexi comes in. Nexi is a humanoid social robot that afforded the team an important benefit: they could control all its movements perfectly. In a second experiment, the team had research participants converse with Nexi for 10 minutes, much like they did with another person in the first experiment. While conversing with the participants, Nexi — operated remotely by researchers — either expressed cues that were considered less than trustworthy or expressed similar, but non-trust-related, cues. Confirming their theory, the team found that participants exposed to Nexi’s untrustworthy cues intuited that Nexi was likely to cheat them and adjusted their financial decisions accordingly. “Certain nonverbal gestures trigger emotional reactions we’re not consciously aware of, and these reactions are enormously important for understanding how interpersonal relationships develop,” said Frank. (source: EurekaAlert)

“The fact that a robot can trigger the same reactions confirms the mechanistic nature of many of the forces that influence human interaction.”

Brute computing force alone can’t solve the world’s problems. Data mining innovator Shyam Sankar explains why solving big problems (like catching terrorists or identifying huge hidden trends) is not a question of finding the right algorithm, but rather the right symbiotic relationship between computation and human creativity.

An advocate of human-computer symbiosis, Shyam Sankar looks for clues in big and disparate data sets.

Neuroscientist Daniel Wolpert starts from a surprising premise: the brain evolved, not to think or feel, but to control movement. In this entertaining, data-rich talk he gives us a glimpse into how the brain creates the grace and agility of human motion.

Why you should listen to him (From TED):

Consider your hand. You use it to lift things, to balance yourself, to give and take, to sense the world. It has a range of interacting degrees of freedom, and it interacts with many different objects under a variety of environmental conditions. And for most of us, it all just works. At his lab in the Engineering department at Cambridge, Daniel Wolpert and his team are studying why, looking to understand the computations underlying the brain’s sensorimotor control of the body.

As he says, “I believe that to understand movement is to understand the whole brain. And therefore it’s important to remember when you are studying memory, cognition, sensory processing, they’re there for a reason, and that reason is action.” Movement is the only way we have of interacting with the world, whether foraging for food or attracting a waiter’s attention. Indeed, all communication, including speech, sign language, gestures and writing, is mediated via the motor system. Taking this viewpoint, and using computational and robotic techniques as well as virtual reality systems, Wolpert and his team research the purpose of the human brain and the way it determines future actions.