Posts Tagged ‘Futurism’

By Yarden Katz, published in The Atlantic (see link below)

An extended conversation with the legendary linguist

[Photo credit: Graham Gordon Ramsay]

If one were to rank a list of civilization’s greatest and most elusive intellectual challenges, the problem of “decoding” ourselves — understanding the inner workings of our minds and our brains, and how the architecture of these elements is encoded in our genome — would surely be at the top. Yet the diverse fields that took on this challenge, from philosophy and psychology to computer science and neuroscience, have been fraught with disagreement about the right approach.

In 1956, the computer scientist John McCarthy coined the term “Artificial Intelligence” (AI) to describe the study of intelligence by implementing its essential features on a computer. Instantiating an intelligent system using man-made hardware, rather than our own “biological hardware” of cells and tissues, would show ultimate understanding, and have obvious practical applications in the creation of intelligent devices or even robots.

Some of McCarthy’s colleagues in neighboring departments, however, were more interested in how intelligence is implemented in humans (and other animals) first. Noam Chomsky and others worked on what became cognitive science, a field aimed at uncovering the mental representations and rules that underlie our perceptual and cognitive abilities. Chomsky and his colleagues had to overthrow the then-dominant paradigm of behaviorism, championed by Harvard psychologist B.F. Skinner, where animal behavior was reduced to a simple set of associations between an action and its subsequent reward or punishment. The undoing of Skinner’s grip on psychology is commonly marked by Chomsky’s 1959 critical review of Skinner’s Verbal Behavior, the book in which Skinner attempted to explain linguistic ability using behaviorist principles.

Skinner’s approach stressed the historical associations between a stimulus and the animal’s response — an approach easily framed as a kind of empirical statistical analysis, predicting the future as a function of the past. Chomsky’s conception of language, on the other hand, stressed the complexity of internal representations, encoded in the genome, and their maturation in light of the right data into a sophisticated computational system, one that cannot be usefully broken down into a set of associations. Behaviorist principles of associations could not explain the richness of linguistic knowledge, our endlessly creative use of it, or how quickly children acquire it with only minimal and imperfect exposure to language presented by their environment. The “language faculty,” as Chomsky referred to it, was part of the organism’s genetic endowment, much like the visual system, the immune system and the circulatory system, and we ought to approach it just as we approach these other more down-to-earth biological systems.

David Marr, a neuroscientist colleague of Chomsky’s at MIT, defined a general framework for studying complex biological systems (like the brain) in his influential book Vision, one that Chomsky’s analysis of the language capacity more or less fits into. According to Marr, a complex biological system can be understood at three distinct levels. The first level (“computational level”) describes the input and output to the system, which define the task the system is performing. In the case of the visual system, the input might be the image projected on our retina and the output might be our brain’s identification of the objects present in the image we had observed. The second level (“algorithmic level”) describes the procedure by which an input is converted to an output, i.e. how the image on our retina can be processed to achieve the task described by the computational level. Finally, the third level (“implementation level”) describes how our own biological hardware of cells implements the procedure described by the algorithmic level.
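To make the three levels concrete, here is a deliberately simple, non-biological sketch in Python (a toy analogy of my own, not an example from Marr or Chomsky), with sorting standing in for a cognitive task: the computational level specifies only the input-output relation, the algorithmic level gives one particular procedure that satisfies it, and the implementation level is whatever machinery actually runs that procedure (here the Python interpreter and the CPU; in the brain, networks of neurons).

```python
# Toy illustration of Marr's three levels, using sorting as a stand-in
# for a cognitive task. This is an analogy for exposition only.

from typing import List

# Computational level: WHAT is being computed?
# The task is fully specified by the input-output relation: given a list
# of numbers, return the same numbers in ascending order. The spec says
# nothing about HOW the ordering is achieved.
def satisfies_spec(inp: List[int], out: List[int]) -> bool:
    return sorted(inp) == out

# Algorithmic level: HOW is it computed?
# One of many procedures that satisfies the spec: insertion sort.
def insertion_sort(xs: List[int]) -> List[int]:
    result: List[int] = []
    for x in xs:
        i = 0
        while i < len(result) and result[i] < x:
            i += 1
        result.insert(i, x)
    return result

# Implementation level: WHAT machinery physically runs the procedure?
# Here, the Python interpreter and the CPU beneath it; in the brain,
# networks of neurons. The same algorithm can run on very different hardware.

if __name__ == "__main__":
    data = [3, 1, 2]
    out = insertion_sort(data)
    print(out, satisfies_spec(data, out))  # [1, 2, 3] True
```

The value of the separation is that the levels can be studied somewhat independently: many different algorithms satisfy the same computational-level specification, and the same algorithm can be realized in very different physical substrates.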

The approach taken by Chomsky and Marr toward understanding how our minds achieve what they do is as different as can be from behaviorism. The emphasis here is on the internal structure of the system that enables it to perform a task, rather than on external association between past behavior of the system and the environment. The goal is to dig into the “black box” that drives the system and describe its inner workings, much like how a computer scientist would explain how a cleverly designed piece of software works and how it can be executed on a desktop computer.

As written today, the history of cognitive science is a story of the unequivocal triumph of an essentially Chomskyian approach over Skinner’s behaviorist paradigm — an achievement commonly referred to as the “cognitive revolution,” though Chomsky himself rejects this term. While this may be a relatively accurate depiction in cognitive science and psychology, behaviorist thinking is far from dead in related disciplines. Behaviorist experimental paradigms and associationist explanations for animal behavior are used routinely by neuroscientists who aim to study the neurobiology of behavior in laboratory animals such as rodents, where the systematic three-level framework advocated by Marr is not applied.

In May of last year, during the 150th anniversary of the Massachusetts Institute of Technology, a symposium on “Brains, Minds and Machines” took place, where leading computer scientists, psychologists and neuroscientists gathered to discuss the past and future of artificial intelligence and its connection to the neurosciences.

The gathering was meant to inspire multidisciplinary enthusiasm for the revival of the scientific question from which the field of artificial intelligence originated: how does intelligence work? How does our brain give rise to our cognitive abilities, and could this ever be implemented in a machine?

Noam Chomsky, speaking at the symposium, wasn’t so enthused. Chomsky critiqued the field of AI for adopting an approach reminiscent of behaviorism, except in more modern, computationally sophisticated form. Chomsky argued that the field’s heavy use of statistical techniques to pick out regularities in masses of data is unlikely to yield the explanatory insight that science ought to offer. For Chomsky, the “new AI” — focused on using statistical learning techniques to better mine and predict data — is unlikely to yield general principles about the nature of intelligent beings or about cognition.

This critique sparked an elaborate reply to Chomsky from Google’s director of research and noted AI researcher, Peter Norvig, who defended the use of statistical models and argued that AI’s new methods and definition of progress are not far off from what happens in the other sciences.
Read on: http://www.theatlantic.com/technology/archive/2012/11/noam-chomsky-on-where-artificial-intelligence-went-wrong/261637/.

Posted: April 21, 2012 by Wildcat in Uncategorized

And the future, to be honest, is already the past. Futurism is a very old-fashioned concept. That whole idea of futurism is 19th century. So I really like to give it that twist, to say “OK, it’s not really important where it is on the timeline, it’s important if it makes sense in its elements.”

Uwe Schmidt – The Ecstasy of Simulation (Wire 793)

Posted: April 16, 2012 by Wildcat in Uncategorized

What is the New Aesthetic? One accurate answer would be: the things James Bridle posts to his tumblr. Another doubled as the subtitle for Bridle’s SXSW panel, and it amounts to a generalization of the same thing: “seeing like digital devices.” Pixel art, data visualizations, computer vision sensor aids—these are the worldly residue that computers have left behind as they alter our lived experience: “Some architects can look at a building and tell you which version of Autodesk was used to create it.”

Go read this:

The New Aesthetic Needs to Get Weirder – Ian Bogost – Technology – The Atlantic

Question: You have a plan to build an “optimal scientist”. What do you mean by that?

Answer: An optimal scientist excels at exploring and then better understanding the world and what can be done in it. Human scientists are suboptimal and limited in many ways. I’d like to build an artificial one smarter than myself (my colleagues claim that should be easy) who will then build an even smarter one, and so on. This seems to be the most efficient way of using and multiplying my own little bit of creativity.

Our Formal Theory of Curiosity & Creativity & Fun already specifies a theoretically optimal, mathematically rigorous method for learning a repertoire of problem solving skills that serve to acquire information about an initially unknown environment. But to improve our current artificial scientists (and artists) we still need to find practically optimal ways of dealing with a finite amount of available computational power.
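The formal theory itself is a mathematical framework; the following is only a toy Python sketch of the intuition usually used to summarize it: the agent is rewarded not for having low prediction error, but for reducing its prediction error, i.e. for learning progress. The predictor, the number stream and all names below are invented here for illustration and are not part of the theory’s actual formulation.

```python
# Toy sketch of curiosity as learning progress: intrinsic reward is the
# amount by which an observation improves the agent's own predictor.
# Illustration only; not the formal theory referenced above.

import random


class ToyPredictor:
    """Predicts the next observation as a running average."""

    def __init__(self, learning_rate: float = 0.1) -> None:
        self.estimate = 0.0
        self.learning_rate = learning_rate

    def error(self, obs: float) -> float:
        return abs(obs - self.estimate)

    def update(self, obs: float) -> None:
        self.estimate += self.learning_rate * (obs - self.estimate)


def intrinsic_reward(predictor: ToyPredictor, obs: float) -> float:
    """Prediction error before learning minus error after learning:
    how much this observation improved the agent's model of the world."""
    before = predictor.error(obs)
    predictor.update(obs)
    after = predictor.error(obs)
    return before - after


if __name__ == "__main__":
    random.seed(0)
    predictor = ToyPredictor()
    # A learnable stream (values near 5): the reward is large at first and
    # shrinks as the predictor masters the stream, so an already-understood
    # world stops being interesting.
    for step in range(5):
        obs = 5.0 + random.uniform(-0.1, 0.1)
        print(step, round(intrinsic_reward(predictor, obs), 3))
```

Once the stream is fully predicted the reward vanishes, which is the sense in which such an agent is pushed toward whatever it can still learn something from; allocating a finite compute budget across such opportunities is the “practically optimal” part the answer says is still open.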


The Technolife project studies three technologies with a view to their social and ethical implications:

1) Geographical information systems and environmental conflicts;

2) Technologies of body modification and enhancement, with a special focus on nanotechnology, biotechnology and cognitive science;

3) Biometrics, with a focus on surveillance and mobility.

The Technolife researchers are ethicists, social scientists and computer programmers. Our experience has convinced us that we can only begin to understand and regulate these fields if we can communicate better among a greater number of concerned groups. The ways in which citizens imagine possible futures, and the ways in which people are concerned or encouraged by developments, matter for how technologies develop and for the social roles those technologies will come to play. We have therefore invited people and groups that we think are interested in exploring these issues together with us.

From the project description:

Throughout the last 30 years, ethics has become increasingly important for the regulation and governance of new and emerging sciences and technologies. Central elements in the establishment of ethics as a tool for governance have been the discipline of bioethics, the coming of patients’ rights in medicine and, increasingly, privacy regulations and data protection related to information and communication technologies. Before such disruptive events as the fall of the Berlin Wall, the coming of the Internet or the global war on terrorism, one could reasonably argue that ethical issues could be dealt with through concepts such as autonomy, privacy and informed consent. One could also argue that these could be taken care of by a few experts, such as medical doctors, lawyers, ethicists or engineers.

However, within new fields of emerging technologies, older recipes can play only a limited role. With the immense number of online transactions taking place every second, and with the increasingly digital character of everyday environments and objects, concepts such as informed consent and autonomy seem to lose much of their relevance. What is more, there is rarely any easily identifiable expert group, say a government agency, to which ethical or political issues and claims can be addressed. When technology pervades our everyday environments, it becomes difficult to single out any one concept or professional group that can efficiently and legitimately deal with highly complex concerns and ethical issues.

The researchers within the Technolife project are looking for ways to turn governance and ethics of science and technology into more dialogical exercises in which citizens, groups and individuals may also be heard. Technolife uses new technology and media to promote discussion and dialogue across cultural, administrative and professional barriers. Central to the project is the establishment of an online deliberation forum used for debating ethical and political issues within three technological fields: 1) biometrics and mobility, 2) digital globes and environmental controversy and 3) enhancements of the human body. For each field, short films are developed and used as a starting point for debate. In addition, ethical issues in the three fields are being mapped and central developments are monitored.

The results of both the academic analysis and the popular discussion from the online forum will be used to produce a more thorough analysis of the three fields, centring on the concepts of imagined communities and imaginaries of socio-technical developments. The outcomes of the deliberative phase, involving a number of different concerned groups, will be presented to EU policy makers. They will also be used in the effort to develop better “ethical frameworks for new and emerging technologies”. Ultimately, the hope of the project is to develop tools that may also be used by others, such as NGOs, scientists, ethicists or policy makers seeking to promote more democratic means of governance.

Participate at Technolife online debates at:

Digital Globes Debate

Biometrics Debate

Body & Mind Debate

Two short videos have been produced to illustrate the problems and open up the debate.

The site is worth a visit: Technolife

This is an interesting overview of futurist ideas from the Arlington Institute. The rest can be found here.



Science fiction writer Charlie Stross is interviewed by h+ Magazine, and brings his hard-core, realistic views of near and far futures.

Singularity, 2012: God springs out of a computer to rapture the human race. An enchanted locket transforms a struggling business journalist into a medieval princess. The math-magicians of British Intelligence calculate demons back into the dark. And solar-scale computation just uploads us all into the happy ever after.

Stripped to the high concept, these visions from Charlie Stross are prime geek comfort food. But don’t be fooled. Stross’ stories turn on you, changing up into a vicious scrutiny of raw power and the information economy.

It’s no surprise that Stross is a highly controversial figure within Transhumanist circles – loved by some for his dense-with-high-concepts takes on themes dear to the movement, loathed by others for what they see as a facile treatment of both ideas and characters. But one thing is certain – Mr. Stross is one SF writer who pays close attention to the entire plethora of post-humanizing changes that are coming on fast. As a satirist, he might be characterized as our Vonnegut, lampooning memetic subcultures that most people don’t even know exist.

Mind uploading would be a fine thing, but I’m not convinced what you’d get at the end of it would be even remotely human. (Me, I’d rather deal with the defects of the meat machine by fixing them — I’d be very happy with cures for senescence, cardiovascular disease, cancer, and the other nasty failure modes to which we are prone, with limb regeneration and tissue engineering and unlimited life prolongation.) But then, I’m growing old and cynical. Back in the eighties I wanted to be the first guy on my block to get a direct-interface jack in his skull. These days, I’d rather have a firewall.

Read on for the interesting interview…

via The Reluctant Transhumanist | h+ Magazine.
