Our eyes may be our window to the world, but how do we make sense of the thousands of images that flood our retinas each day? Scientists at the University of California, Berkeley, have found that the brain is wired to put in order all the categories of objects and actions that we see. They have created the first interactive map of how the brain organizes these groupings. The result — achieved through computational models of brain imaging data collected while the subjects watched hours of movie clips — is what researchers call “a continuous semantic space.” Some relationships between categories make sense (humans and animals share the same “semantic neighborhood”) while others (hallways and buckets) are less obvious. The researchers found that different people share a similar semantic layout.
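The idea of a "semantic neighborhood" can be sketched as similarity between category vectors. The feature vectors below are invented for illustration (in the study they were derived from brain responses to movie clips); cosine similarity then tells us which categories sit close together in the semantic space.

```python
import math

# Hypothetical feature vectors for a few object categories.
# The numbers are made up; only the geometry matters here.
categories = {
    "human":   [0.9, 0.8, 0.1],
    "animal":  [0.8, 0.9, 0.2],
    "hallway": [0.1, 0.2, 0.9],
}

def cosine(u, v):
    """Cosine similarity: near 1.0 means the same semantic neighborhood."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

# Humans and animals land close together; humans and hallways do not.
sim_human_animal = cosine(categories["human"], categories["animal"])
sim_human_hallway = cosine(categories["human"], categories["hallway"])
```

With vectors like these, "humans and animals share a neighborhood" simply means their similarity score is much higher than that of unrelated pairs.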
Posts Tagged ‘Simulation’
Tags: Brain, growth, Science, Simulation, Universe
The universe may grow like a giant brain, according to a new computer simulation. The results, published Nov. 16 in the Nature journal Scientific Reports, suggest that some undiscovered, fundamental laws may govern the growth of systems large and small, from the electrical firing between brain cells and the growth of social networks to the expansion of galaxies. “Natural growth dynamics are the same for different real networks, like the Internet or the brain or social networks,” said study co-author Dmitri Krioukov, a physicist at the University of California, San Diego. The new study suggests a single fundamental law of nature may govern these networks, said physicist Kevin Bassler of the University of Houston, who was not involved in the study. “At first blush they seem to be quite different systems; the question is, is there some kind of controlling law that can describe them?” he told LiveScience. By raising this question, “their work really makes a pretty important contribution,” he said. (via Universe may grow like a giant brain – Technology & science – Science – LiveScience | NBC News)
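One classic growth rule with the flavor the article describes, where the same simple dynamics produce hubs in very different networks, is preferential attachment. This is a minimal sketch of that textbook rule, not the specific model used in the study:

```python
import random

def grow_network(n_nodes, seed=42):
    """Grow a network by preferential attachment: each new node links to an
    existing node with probability proportional to that node's degree
    ("rich get richer"). A classic growth rule, not the study's own model."""
    rng = random.Random(seed)
    degree = [1, 1]          # start with two connected nodes
    edges = [(0, 1)]
    for new in range(2, n_nodes):
        # pick an attachment target weighted by current degree
        target = rng.choices(range(len(degree)), weights=degree)[0]
        edges.append((new, target))
        degree.append(1)
        degree[target] += 1
    return degree, edges

degree, edges = grow_network(1000)
```

Run on the Internet, a brain, or a social network alike, the same local rule yields a few highly connected hubs and many sparsely connected nodes, which is the sense in which one law can describe seemingly different systems.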
Tags: airports, Health, outbreaks, pathogens, Simulation
After SARS broke out in China in 2002, it reached 29 countries in seven months. Air travel is a major reason why such infectious diseases spread throughout the globe so quickly. And yet even with such examples to study, scientists have had no way to precisely predict how the next infectious disease might spread through the nexus of world air terminals—until now.
In 2010 MIT engineer Ruben Juanes set out to model the movement of a pathogen from a single site of departure to junctions worldwide. If he could predict the flow of disease from a given airport and rank the most contagious ones, government officials could more effectively predict outbreaks and issue lifesaving warnings and vaccines. So Juanes and his team used a computer simulation to seed 40 major U.S. airports with virtual infected travelers. Then they mimicked the individual itineraries of millions of real passengers to model how people move through the system. The travel data included flights, wait times between flights, number of connections to international hubs, flight duration, and length of stay at destinations.
JFK International in New York—one of the world’s most heavily trafficked airports—emerged as the biggest culprit in disease spread. Honolulu, despite having just 40 percent of JFK’s traffic, came in third because of its many long-distance flights. The biggest surprise: The number of passengers per day did not directly correlate to contagion risk.
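The seeding-and-itinerary approach described above can be sketched as a toy spread model. Everything here is invented for illustration (the route list, passenger counts, and infection probability are not from the MIT study), but it shows the core mechanic: seed one airport, then let infection travel along passenger routes.

```python
import random

# Toy route network: (origin, destination, daily_passengers).
# Airports are real codes, but the volumes are made-up numbers.
routes = [
    ("JFK", "LHR", 5000), ("JFK", "LAX", 8000), ("HNL", "NRT", 2000),
    ("HNL", "SYD", 1500), ("ATL", "ORD", 9000), ("JFK", "CDG", 4000),
]

def simulate_spread(seed_airport, days, infect_prob=0.001, rng_seed=1):
    """Seed one airport with virtual infected travelers, then spread the
    pathogen along routes; busier routes carry a higher chance of spread."""
    rng = random.Random(rng_seed)
    infected = {seed_airport}
    for _ in range(days):
        for origin, dest, passengers in routes:
            if origin in infected and dest not in infected:
                if rng.random() < min(1.0, infect_prob * passengers):
                    infected.add(dest)
    return infected

reach = simulate_spread("JFK", days=30)
```

Ranking airports by how far their seeded infections reach is, in spirit, how one would compare a high-traffic hub like JFK against a smaller but long-haul-heavy airport like Honolulu.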
This summer Google set a new landmark in the field of artificial intelligence with software that learned how to recognize cats, people, and other things simply by watching YouTube videos (see “Self-Taught Software”). That technology, modeled on how brain cells operate, is now being put to work making Google’s products smarter, with speech recognition being the first service to benefit, Technology Review reports. Google’s learning software is based on simulating groups of connected brain cells that communicate and influence one another. When such a neural network, as it’s called, is exposed to data, the relationships between different neurons can change. That causes the network to develop the ability to react in certain ways to incoming data of a particular kind — and the network is said to have learned something. Neural networks have been used for decades in areas where machine learning is applied, such as chess-playing software or face detection. Google’s engineers have found ways to put more computing power behind the approach than was previously possible, creating neural networks that can learn without human assistance and are robust enough to be used commercially, not just as research demonstrations. (via Google simulates brain networks to recognize speech and images | KurzweilAI)
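The core idea, that connection strengths between simulated neurons change as data arrives, can be shown with a single artificial neuron. This is a minimal perceptron-style sketch, nothing like Google's actual architecture in scale, but the learning mechanism is the same in kind:

```python
def train_perceptron(samples, epochs=20, lr=0.1):
    """One simulated neuron: its connection weights change as it is exposed
    to data, which is the sense in which the network 'learns' something."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for x, label in samples:
            pred = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
            err = label - pred
            # nudge connection strengths toward the correct response
            w[0] += lr * err * x[0]
            w[1] += lr * err * x[1]
            b += lr * err
    return w, b

# Learn a simple OR-like rule from labeled examples
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]
w, b = train_perceptron(data)

def predict(x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
```

After training, the neuron reacts correctly to inputs of this particular kind; scale the same principle up to millions of neurons and layers, and you get networks that can pick cats out of YouTube frames.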
Tags: Brain, Mind, Neuroscience, Research, Simulation
The first person to electrically stimulate the brain of a living human during surgery was the 19th-century British neurosurgeon Sir Victor Horsley. The operation was to treat a deformation called an encephalocele, in which the bones of the skull do not close properly in the womb, causing the brain to protrude from the head. Horsley applied a weak electrical current to the surgically exposed brain tissue, making the patient’s eyes swivel to the side, which told the surgeon that the out-of-place area was the top of the midbrain – normally a deeply embedded neural structure essential for directing vision. (via Vaughan Bell: how simulating dementia can help map our minds | Science | The Observer)
Computer viruses are old news, but virtual bacteria might just be the future of biology. On processors at Stanford University, a simulation of the entire bacterium Mycoplasma genitalium, its DNA, and the constituents of its single cell is allowing biologists to tease apart the way life works. “The public hear about a new ‘cancer gene’ being discovered, or a new ‘Alzheimer’s gene’. You hear about these all the time and you might wonder, with all these discoveries, where are the cures to those complex diseases?” explains co-author Prof Markus Covert, speaking to BBC Radio 4’s Material World. (via BBC News – The virtual cell that simulates life)
Tags: Biotechnology, Computing, microbes, Science, Simulation
To Model the Simplest Microbe in the World, You Need 128 Computers
Mycoplasma genitalium has one of the smallest genomes of any free-living organism in the world, clocking in at a mere 525 genes. That’s a fraction of the size of even another bacterium like E. coli, which has 4,288 genes. M. genitalium’s diminutive genome made it the first target for Stanford and J. Craig Venter Institute researchers who wanted to simulate an organism in software. The bioengineers, led by Stanford’s Markus Covert, succeeded in modeling the bacterium, and published their work last week in the journal Cell. What’s fascinating is how much horsepower they needed to partially simulate this simple organism. It took a cluster of 128 computers running for 9 to 10 hours to actually generate the data on the 25 categories of molecules that are involved in the cell’s lifecycle processes. This has a direct bearing on one of the big questions about technology over the next 50 years: how successful will biotechnologies be? On the one hand, we’ve made tremendous strides in describing the molecular processes that power life. I’m not just talking about genomics, but whole sets of other molecules and interactions (see: proteomics, metabolomics, epigenomics, transcriptomics). The new work stands as a testament to how far we’ve come. We can now simulate most known interactions within the cell: how the code of its DNA becomes proteins, how those proteins interact, and how the cell uses energy. (via To Model the Simplest Microbe in the World, You Need 128 Computers – Alexis Madrigal – The Atlantic)