Archive for the ‘AI’ Category

Seth Lloyd is a Professor in the Department of Mechanical Engineering at the Massachusetts Institute of Technology (MIT). His talk, “Programming the Universe”, is about the computational power of atoms, electrons, and elementary particles.

A highly recommended watch.

Brute computing force alone can’t solve the world’s problems. Data mining innovator Shyam Sankar explains why solving big problems (like catching terrorists or identifying huge hidden trends) is not a question of finding the right algorithm, but rather the right symbiotic relationship between computation and human creativity.

An advocate of human-computer symbiosis, Shyam Sankar looks for clues in big and disparate data sets.

By Yarden Katz, published in The Atlantic (see link below)

An extended conversation with the legendary linguist

Photo: Graham Gordon Ramsay

If one were to rank a list of civilization’s greatest and most elusive intellectual challenges, the problem of “decoding” ourselves — understanding the inner workings of our minds and our brains, and how the architecture of these elements is encoded in our genome — would surely be at the top. Yet the diverse fields that took on this challenge, from philosophy and psychology to computer science and neuroscience, have been fraught with disagreement about the right approach.

In 1956, the computer scientist John McCarthy coined the term “Artificial Intelligence” (AI) to describe the study of intelligence by implementing its essential features on a computer. Instantiating an intelligent system using man-made hardware, rather than our own “biological hardware” of cells and tissues, would be the ultimate demonstration of understanding, and would have obvious practical applications in the creation of intelligent devices or even robots.

Some of McCarthy’s colleagues in neighboring departments, however, were more interested in how intelligence is implemented in humans (and other animals) first. Noam Chomsky and others worked on what became cognitive science, a field aimed at uncovering the mental representations and rules that underlie our perceptual and cognitive abilities. Chomsky and his colleagues had to overthrow the then-dominant paradigm of behaviorism, championed by Harvard psychologist B.F. Skinner, in which animal behavior was reduced to a simple set of associations between an action and its subsequent reward or punishment. The undoing of Skinner’s grip on psychology is commonly marked by Chomsky’s 1959 critical review of Skinner’s book Verbal Behavior, in which Skinner had attempted to explain linguistic ability using behaviorist principles.

Skinner’s approach stressed the historical associations between a stimulus and the animal’s response — an approach easily framed as a kind of empirical statistical analysis, predicting the future as a function of the past. Chomsky’s conception of language, on the other hand, stressed the complexity of internal representations, encoded in the genome, and their maturation in light of the right data into a sophisticated computational system, one that cannot be usefully broken down into a set of associations. Behaviorist principles of associations could not explain the richness of linguistic knowledge, our endlessly creative use of it, or how quickly children acquire it with only minimal and imperfect exposure to language presented by their environment. The “language faculty,” as Chomsky referred to it, was part of the organism’s genetic endowment, much like the visual system, the immune system and the circulatory system, and we ought to approach it just as we approach these other more down-to-earth biological systems.

David Marr, a neuroscientist colleague of Chomsky’s at MIT, defined a general framework for studying complex biological systems (like the brain) in his influential book Vision, one that Chomsky’s analysis of the language capacity more or less fits into. According to Marr, a complex biological system can be understood at three distinct levels. The first level (“computational level”) describes the input and output to the system, which define the task the system is performing. In the case of the visual system, the input might be the image projected on our retina and the output might be our brain’s identification of the objects present in the image we had observed. The second level (“algorithmic level”) describes the procedure by which an input is converted to an output, i.e., how the image on our retina can be processed to achieve the task described by the computational level. Finally, the third level (“implementation level”) describes how our own biological hardware of cells implements the procedure described by the algorithmic level.
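To make the distinction concrete, here is a minimal Python sketch — a toy example of my own, not one taken from Marr’s book or the article — that uses sorting as a stand-in for a cognitive task. The computational level is the input/output specification, the algorithmic level is one particular procedure satisfying it, and the implementation level is whatever physical machinery runs that procedure.

    from typing import List

    # Computational level: WHAT is computed -- the input/output relation.
    # Spec: the output contains the input's numbers in ascending order.
    def satisfies_spec(inp: List[int], out: List[int]) -> bool:
        return sorted(inp) == out

    # Algorithmic level: HOW it is computed -- one procedure among many
    # (here, insertion sort) that satisfies the same specification.
    def insertion_sort(xs: List[int]) -> List[int]:
        result: List[int] = []
        for x in xs:
            i = len(result)
            while i > 0 and result[i - 1] > x:
                i -= 1
            result.insert(i, x)
        return result

    # Implementation level: the physical substrate executing the procedure
    # -- here, the Python interpreter on silicon; in us, networks of cells.
    data = [3, 1, 2]
    assert satisfies_spec(data, insertion_sort(data))

Nothing in the specification forces insertion sort: many different algorithms, running on many different physical substrates, could realize the same computational-level description, which is why Marr insists the levels be kept distinct.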

The approach taken by Chomsky and Marr toward understanding how our minds achieve what they do is as different as can be from behaviorism. The emphasis here is on the internal structure of the system that enables it to perform a task, rather than on external association between past behavior of the system and the environment. The goal is to dig into the “black box” that drives the system and describe its inner workings, much like how a computer scientist would explain how a cleverly designed piece of software works and how it can be executed on a desktop computer.

As written today, the history of cognitive science is a story of the unequivocal triumph of an essentially Chomskyian approach over Skinner’s behaviorist paradigm — an achievement commonly referred to as the “cognitive revolution,” though Chomsky himself rejects this term. While this may be a relatively accurate depiction in cognitive science and psychology, behaviorist thinking is far from dead in related disciplines. Behaviorist experimental paradigms and associationist explanations for animal behavior are used routinely by neuroscientists who aim to study the neurobiology of behavior in laboratory animals such as rodents, where the systematic three-level framework advocated by Marr is not applied.

In May 2011, during the 150th-anniversary celebrations of the Massachusetts Institute of Technology, a symposium on “Brains, Minds and Machines” took place, where leading computer scientists, psychologists and neuroscientists gathered to discuss the past and future of artificial intelligence and its connection to the neurosciences.

The gathering was meant to inspire multidisciplinary enthusiasm for the revival of the scientific question from which the field of artificial intelligence originated: how does intelligence work? How does our brain give rise to our cognitive abilities, and could this ever be implemented in a machine?

Noam Chomsky, speaking at the symposium, wasn’t so enthused. Chomsky critiqued the field of AI for adopting an approach reminiscent of behaviorism, except in a more modern, computationally sophisticated form. Chomsky argued that the field’s heavy use of statistical techniques to pick out regularities in masses of data is unlikely to yield the explanatory insight that science ought to offer. For Chomsky, the “new AI” — focused on using statistical learning techniques to better mine and predict data — is unlikely to yield general principles about the nature of intelligent beings or about cognition.
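As a deliberately crude illustration of the kind of statistical technique Chomsky has in mind — a toy example of mine, not anything presented at the symposium — the sketch below builds a bigram model: it predicts the next word purely from co-occurrence counts, with no grammar and no internal representation of meaning.

    from collections import Counter, defaultdict

    corpus = "the dog chased the cat and the cat chased the dog".split()

    # Count how often each word follows each other word in the data.
    successors = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        successors[prev][nxt] += 1

    def predict_next(word):
        """Return the most frequent successor of `word` seen in the corpus."""
        if word not in successors:
            raise KeyError(f"no observations for {word!r}")
        return successors[word].most_common(1)[0][0]

    print(predict_next("chased"))  # 'the'
    print(predict_next("the"))     # 'dog' (ties broken by first appearance)

Given enough data, models of this kind can predict remarkably well, but they encode nothing about why the regularities hold; on Chomsky’s view, that is precisely what separates prediction from explanation.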

This critique sparked an elaborate reply to Chomsky from Google’s director of research and noted AI researcher Peter Norvig, who defended the use of statistical models and argued that AI’s new methods and definition of progress are not far off from what happens in the other sciences.
Read on: http://www.theatlantic.com/technology/archive/2012/11/noam-chomsky-on-where-artificial-intelligence-went-wrong/261637/.

Robin Hanson predicts what the extraordinary society of the future will look like when emulated minds are ushered in.

Robin Hanson: Extraordinary Society of Emulated Minds – FORA.tv.

Comes highly recommended.

Robin Hanson, Associate Professor of Economics at George Mason University, speculates on how systems of class might operate among artificially intelligent machines. Speed and efficiency would be most rewarded, in Hanson’s view, while skills for interacting with humans would be least valued.

Who (and what) can you trust?
Robots Have Feelings, Too

People are fidgety – they’re moving all the time. So how could the team truly zero in on the cues that mattered? This is where Nexi comes in. Nexi is a humanoid social robot that afforded the team an important benefit: they could control all of its movements perfectly. In a second experiment, the team had research participants converse with Nexi for 10 minutes, much as they had with another person in the first experiment. While conversing with the participants, Nexi — operated remotely by researchers — either expressed the cues that were considered less than trustworthy or expressed similar, but non-trust-related, cues. Confirming the team’s theory, participants exposed to Nexi’s untrustworthy cues intuited that Nexi was likely to cheat them and adjusted their financial decisions accordingly. “Certain nonverbal gestures trigger emotional reactions we’re not consciously aware of, and these reactions are enormously important for understanding how interpersonal relationships develop,” said Frank. (source: EurekAlert!)

“The fact that a robot can trigger the same reactions confirms the mechanistic nature of many of the forces that influence human interaction.”

See on Scoop.it: Knowmads, Infocology of the future

In our economy, many of the jobs most resistant to automation are those with the least economic value. Just consider the diversity of tasks, unpredictable terrains, and specialized tools that a landscaper confronts in a single day. No robot is intelligent enough to perform this $8-an-hour work.

But what about a robot remotely controlled by a low-wage foreign worker?

Hollywood has been imagining the technologies we would need. Jake Sully, the wheelchair-bound protagonist in James Cameron’s Avatar, goes to work saving a distant planet via a wireless connection to a remote body. He interacts with others, learns new skills, and even gets married—all while his “real” body is lying on a slab, miles away.

See on www.technologyreview.com