Archive for the ‘Civilization’ Category

Brute computing force alone can’t solve the world’s problems. Data mining innovator Shyam Sankar explains why solving big problems (like catching terrorists or identifying huge hidden trends) is not a question of finding the right algorithm, but rather the right symbiotic relationship between computation and human creativity.

An advocate of human-computer symbiosis, Shyam Sankar looks for clues in big and disparate data sets.

By Yarden Katz  published on the Atlantic (see link below)

An extended conversation with the legendary linguist

Photo: Graham Gordon Ramsay

If one were to rank a list of civilization’s greatest and most elusive intellectual challenges, the problem of “decoding” ourselves — understanding the inner workings of our minds and our brains, and how the architecture of these elements is encoded in our genome — would surely be at the top. Yet the diverse fields that took on this challenge, from philosophy and psychology to computer science and neuroscience, have been fraught with disagreement about the right approach.

In 1956, the computer scientist John McCarthy coined the term “Artificial Intelligence” (AI) to describe the study of intelligence by implementing its essential features on a computer. Instantiating an intelligent system using man-made hardware, rather than our own “biological hardware” of cells and tissues, would demonstrate a deep understanding of intelligence, and would have obvious practical applications in the creation of intelligent devices or even robots.

Some of McCarthy’s colleagues in neighboring departments, however, were more interested in how intelligence is implemented in humans (and other animals) first. Noam Chomsky and others worked on what became cognitive science, a field aimed at uncovering the mental representations and rules that underlie our perceptual and cognitive abilities. Chomsky and his colleagues had to overthrow the then-dominant paradigm of behaviorism, championed by Harvard psychologist B.F. Skinner, in which animal behavior was reduced to a simple set of associations between an action and its subsequent reward or punishment. The undoing of Skinner’s grip on psychology is commonly marked by Chomsky’s 1959 critical review of Skinner’s book Verbal Behavior, in which Skinner had attempted to explain linguistic ability using behaviorist principles.

Skinner’s approach stressed the historical associations between a stimulus and the animal’s response — an approach easily framed as a kind of empirical statistical analysis, predicting the future as a function of the past. Chomsky’s conception of language, on the other hand, stressed the complexity of internal representations, encoded in the genome, and their maturation in light of the right data into a sophisticated computational system, one that cannot be usefully broken down into a set of associations. Behaviorist principles of associations could not explain the richness of linguistic knowledge, our endlessly creative use of it, or how quickly children acquire it with only minimal and imperfect exposure to language presented by their environment. The “language faculty,” as Chomsky referred to it, was part of the organism’s genetic endowment, much like the visual system, the immune system and the circulatory system, and we ought to approach it just as we approach these other more down-to-earth biological systems.

David Marr, a neuroscientist colleague of Chomsky’s at MIT, defined a general framework for studying complex biological systems (like the brain) in his influential book Vision, one that Chomsky’s analysis of the language capacity more or less fits into. According to Marr, a complex biological system can be understood at three distinct levels. The first level (“computational level”) describes the input and output to the system, which define the task the system is performing. In the case of the visual system, the input might be the image projected on our retina and the output might be our brain’s identification of the objects present in the image we had observed. The second level (“algorithmic level”) describes the procedure by which an input is converted to an output, i.e., how the image on our retina can be processed to achieve the task described by the computational level. Finally, the third level (“implementation level”) describes how our own biological hardware of cells implements the procedure described by the algorithmic level.
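Marr’s distinction between levels can be made concrete with a toy example that is not from the article: sorting. The computational level specifies only the input–output relation (an ordered permutation of the input); the algorithmic level is any of several procedures that satisfy that specification. A minimal sketch in Python:

```python
# Computational level: the task specification -- what counts as a correct output.
def is_sorted_permutation(inp, out):
    """True if `out` is `inp` rearranged into ascending order."""
    return list(out) == sorted(inp)

# Algorithmic level: two different procedures satisfy the same specification.
def insertion_sort(xs):
    result = []
    for x in xs:
        i = 0
        while i < len(result) and result[i] <= x:
            i += 1
        result.insert(i, x)
    return result

def merge_sort(xs):
    if len(xs) <= 1:
        return list(xs)
    mid = len(xs) // 2
    left, right = merge_sort(xs[:mid]), merge_sort(xs[mid:])
    merged = []
    while left and right:
        merged.append(left.pop(0) if left[0] <= right[0] else right.pop(0))
    return merged + left + right

data = [3, 1, 2]
assert is_sorted_permutation(data, insertion_sort(data))
assert is_sorted_permutation(data, merge_sort(data))
```

The implementation level would be a further question still: whether these steps run on silicon or, in the biological case, on networks of neurons.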

The approach taken by Chomsky and Marr toward understanding how our minds achieve what they do is as different as can be from behaviorism. The emphasis here is on the internal structure of the system that enables it to perform a task, rather than on external association between past behavior of the system and the environment. The goal is to dig into the “black box” that drives the system and describe its inner workings, much like how a computer scientist would explain how a cleverly designed piece of software works and how it can be executed on a desktop computer.

As written today, the history of cognitive science is a story of the unequivocal triumph of an essentially Chomskyian approach over Skinner’s behaviorist paradigm — an achievement commonly referred to as the “cognitive revolution,” though Chomsky himself rejects this term. While this may be a relatively accurate depiction in cognitive science and psychology, behaviorist thinking is far from dead in related disciplines. Behaviorist experimental paradigms and associationist explanations for animal behavior are used routinely by neuroscientists who aim to study the neurobiology of behavior in laboratory animals such as rodents, where the systematic three-level framework advocated by Marr is not applied.

In May of last year, during the 150th anniversary of the Massachusetts Institute of Technology, a symposium on “Brains, Minds and Machines” took place, where leading computer scientists, psychologists and neuroscientists gathered to discuss the past and future of artificial intelligence and its connection to the neurosciences.

The gathering was meant to inspire multidisciplinary enthusiasm for the revival of the scientific question from which the field of artificial intelligence originated: how does intelligence work? How does our brain give rise to our cognitive abilities, and could this ever be implemented in a machine?

Noam Chomsky, speaking at the symposium, wasn’t so enthused. Chomsky critiqued the field of AI for adopting an approach reminiscent of behaviorism, except in a more modern, computationally sophisticated form. Chomsky argued that the field’s heavy use of statistical techniques to pick out regularities in masses of data is unlikely to yield the explanatory insight that science ought to offer. For Chomsky, the “new AI” — focused on using statistical learning techniques to better mine and predict data — is unlikely to yield general principles about the nature of intelligent beings or about cognition.
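The kind of statistical technique Chomsky has in mind can be illustrated with a minimal bigram model — a predictor built purely from co-occurrence counts, with no internal theory of grammar. The corpus and words below are invented for illustration:

```python
from collections import Counter, defaultdict

# Count, for each word, how often each other word follows it.
corpus = "the dog barks the dog runs the cat runs".split()
counts = defaultdict(Counter)
for prev, word in zip(corpus, corpus[1:]):
    counts[prev][word] += 1

def predict_next(word):
    """Return the most frequent successor of `word` in the corpus, or None."""
    followers = counts[word]
    return followers.most_common(1)[0][0] if followers else None

print(predict_next("the"))  # "dog": it follows "the" twice, "cat" only once
```

Such a model can predict the data it was trained on, which is exactly Chomsky’s point: it fits regularities without explaining why the language has the structure it does.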

This critique sparked an elaborate reply to Chomsky from Google’s director of research and noted AI researcher, Peter Norvig, who defended the use of statistical models and argued that AI’s new methods and definition of progress are not far off from what happens in the other sciences.
Read on: http://www.theatlantic.com/technology/archive/2012/11/noam-chomsky-on-where-artificial-intelligence-went-wrong/261637/.

See on Scoop.it: Knowmads, Infocology of the future

In our economy, many of the jobs most resistant to automation are those with the least economic value. Just consider the diversity of tasks, unpredictable terrains, and specialized tools that a landscaper confronts in a single day. No robot is intelligent enough to perform this $8-an-hour work.

But what about a robot remotely controlled by a low-wage foreign worker?

Hollywood has been imagining the technologies we would need. Jake Sully, the wheelchair-bound protagonist in James Cameron’s Avatar, goes to work saving a distant planet via a wireless connection to a remote body. He interacts with others, learns new skills, and even gets married — all while his “real” body is lying on a slab, miles away.

See on www.technologyreview.com

Hannah Fry trained as a mathematician, and completed her PhD in fluid dynamics in early 2011. After a brief period working as an aerodynamicist in the motorsport industry, she came back to UCL to work on a major interdisciplinary project in complexity science. The project spans several departments, including Mathematics and the Centre for Advanced Spatial Analysis, and focuses on understanding global social systems — such as Trade, Migration and Security. Hannah’s research interests revolve around creating new mathematical techniques to study these systems, with recent work including studies of the London Riots and Consumer Behaviour.

Talk: Is life really that complex?
Recently scientists have begun to appreciate that many of the mechanisms inherent in our social systems have analogies in seemingly unrelated problems. The movement of a crowd, for instance, can be understood using techniques traditionally applied to the flow of a fluid, and the uptake of a new technology can be predicted using knowledge of how disease spreads.
By exploiting these analogies, a new field is emerging at the interface between social sciences and mathematics, the potential of which I hope to illustrate using a mathematical model of the London Riots. Our approach can demonstrate why certain areas of the city were at higher risk than others and help determine which policing strategies may have resulted in a swifter resolution to the unrest.
We will discuss how social modelling can provide a greater understanding of our society, and help design better systems for all: from healthcare to policing and policy.
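The disease-spread analogy for technology uptake can be sketched with a simple contagion-style model, in which non-adopters “catch” a technology through contact with adopters. All parameter values below are invented for illustration:

```python
# A minimal susceptible-infected style model of technology uptake,
# discretized in time: new adopters arise from contact between
# current adopters and the rest of the population.
def adoption_curve(beta=0.5, adopters0=0.01, steps=20):
    """Fraction of the population that has adopted, at each time step."""
    a = adopters0
    curve = [a]
    for _ in range(steps):
        a = a + beta * a * (1 - a)  # contagion term, as in an epidemic model
        curve.append(a)
    return curve

curve = adoption_curve()
# Uptake is slow at first, accelerates, then saturates as few non-adopters remain.
assert curve[0] < curve[10] < curve[-1] <= 1.0
```

The same logistic shape appears whether the thing spreading is a pathogen, a rumor, or a new technology — which is precisely the kind of cross-domain analogy Fry describes.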

When game designer Jane McGonigal found herself bedridden and suicidal following a severe concussion, she had a fascinating idea for how to get better. She dove into the scientific research and created the healing game, SuperBetter.

In this moving talk, McGonigal explains how a game can boost resilience — and promises to add 7.5 minutes to your life.

Automation of labor through stunning breakthroughs in robotics and artificial intelligence could strip away much of the unskilled labor available to humans.

Carl Bass, CEO of Autodesk, explores the future of the labor market as part of a recent episode of Singularity University’s Which Way Next.

The recent generations have been bathed in connecting technology from birth, says futurist Don Tapscott, and as a result the world is transforming into one that is far more open and transparent. In this inspiring talk, he lists the four core principles that show how this open world can be a far better place.

Don Tapscott can see the future coming … and works to identify the new concepts we need to understand in a world transformed by the Internet