Can You Reread My Mind?
PLoS ONE @ Two
Originally posted on Thursday, January 10, 2008.
Just tryin' to keep it in line
You say you wanna move on and
instead of falling behind
Can you read my mind?
Can you read my mind?
Read My Mind
------The Killers
Earlier this year, a study in PLoS ONE (Shinkareva et al., 2008) received some wildly overblown coverage in the media:
Scientists can read your mind... sort of

Very briefly, subjects viewed pictures of 10 different objects: 5 tools (drill, hammer, screwdriver, pliers, saw) and 5 dwellings (apartment, castle, house, hut, and igloo). Previous work had shown that these two object categories activate some unique brain regions (e.g., ventral premotor cortex and parahippocampal gyrus, respectively). Machine learning methods were used to classify the patterns of activity obtained while subjects viewed each of these pictures, with the goal of identifying individual objects (not just the categories) by the distinctive neural activity associated with each.
THOUGHTS are successfully being read for the first time by scientists using nothing but a modified MRI scanner and a special computer program.
But is it humans who are doing the mind-reading, or is it...is it...THE COMPUTERS!! Ahh, they're taking over!
CMU computers seek where thoughts originate
By Allison M. Heinrichs
TRIBUNE-REVIEW
Friday, January 4, 2008
Computers are reading minds at Carnegie Mellon University.
In a small two-year study, computer scientists and cognitive neuroscientists teamed up to teach computers to recognize patterns in brain activity and identify objects that people are looking at.
Scientists call it the first step toward identifying where people's thoughts originate, while ethicists see it as a sign of the need for new public policy.
Colossus - The Forbin Project takes place in the 50s during the height of the cold war. Dr. Charles Forbin, a genius scientist who has lost trust in humanity’s ability to logically address emotional issues, has developed a very special computer to perform the Strategic Air Command and Control functions for the military. This computer, code named Colossus, is developed based on incredible advances in Artificial Intelligence, and has a logical process for determining when to launch the ICBMs. With much fanfare, the President of the US “turns on” Colossus to take over responsibility for the US nuclear armament. [from Cyberpunk Review]
"I want a complete mapping of brain states and thoughts," Dr. Just said. "We're taking tiny baby steps, but anything we can think about is represented in the brain."Unfortunately, shortly after being turned on, Colossus learns the presence of another AI command and control system. It turns out that the Soviet Union, independently has developed their own system call the Guardian. Both computers “insist” that they be linked to ensure no attacks will take place...In coming years, researchers will be able to develop a fairly complex mapping of brain states and thoughts, he said.
"It's a little science fiction-y, and I don't think we'll do it in one year, but five to 10 is plausible," he said.
Wikipedia defines machine learning as a broad subfield of artificial intelligence,
concerned with the design and development of algorithms and techniques that allow computers to "learn". ... Inductive machine learning methods extract rules and patterns out of massive data sets. The major focus of machine learning research is to extract information from data automatically, by computational and statistical methods. Hence, machine learning is closely related not only to data mining and statistics, but also to theoretical computer science.

Things begin to go downhill when Professor Forbin realizes that the rate of learning for the machines is increasing at an exponential rate – he recommends detaching the connection between the two computers. When they attempt to do this, both computers threaten an immediate launch of nuclear weapons. Quickly, the governments realize their situation – the machines are now in power. Worse, they proceed to take complete control of human society.
In the PLoS ONE article, Shinkareva et al. (2008) describe this approach to analyzing functional imaging data as involving
identification of a multivariate pattern of voxels and their characteristic activation levels that collectively identify the neural response to a stimulus. These machine learning methods have the potential to be particularly useful in uncovering how semantic information about objects is represented in the cerebral cortex because they can determine the topographic distribution of the activation and distinguish the content of the information in various parts of the cortex. In the study reported below, the neural patterns associated with individual objects as well as with object categories were identified using a machine learning algorithm applied to activation distributed throughout the cortex. This study also investigated the degree to which objects and categories are similarly represented neurally across different people.
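For the curious, here is roughly what "a machine learning algorithm applied to activation distributed throughout the cortex" can look like in practice. This is a minimal sketch only, not the authors' pipeline: the simulated data, the voxel-selection step, the logistic-regression classifier, and every parameter below are stand-ins invented for illustration (using numpy and scikit-learn), and real fMRI data would need preprocessing (motion correction, normalization, detrending) that is skipped here.

```python
# Minimal sketch of multivariate pattern classification ("mind reading").
# NOT the pipeline from Shinkareva et al. (2008): the fake data, the
# voxel selection, and the classifier are generic stand-ins.

import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

objects = ["drill", "hammer", "screwdriver", "pliers", "saw",
           "apartment", "castle", "house", "hut", "igloo"]
n_presentations = 6      # hypothetical; the real experimental design differs
n_voxels = 2000          # a toy stand-in for "activation throughout the cortex"

# Fake data: each object gets a weak spatial signature buried in noise.
signatures = rng.normal(0.0, 1.0, size=(len(objects), n_voxels))
X, y = [], []
for label, signature in enumerate(signatures):
    for _ in range(n_presentations):
        X.append(0.6 * signature + rng.normal(0.0, 1.0, n_voxels))
        y.append(label)
X, y = np.array(X), np.array(y)

# Select informative voxels, standardize them, fit a linear classifier.
classifier = make_pipeline(
    SelectKBest(f_classif, k=200),
    StandardScaler(),
    LogisticRegression(max_iter=1000),
)

# Cross-validated accuracy at identifying which of the 10 drawings was viewed.
cv = StratifiedKFold(n_splits=3, shuffle=True, random_state=0)
scores = cross_val_score(classifier, X, y, cv=cv)
print(f"10-way identification accuracy: {scores.mean():.2f} (chance = 0.10)")
```

The essential idea is simply that each brain volume becomes a long vector of voxel activations, and an ordinary classifier learns which combinations of voxels distinguish hammer-viewing from igloo-viewing. The "mind reading" in the headlines is nothing more exotic than that.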
And wouldn't you know it, people [Carnegie Mellon students] are people.

CMU finds human brains similarly organized

Carnegie Mellon University has taken an important step in mapping thought patterns in the human brain, and the research has produced an amazing insight: Human brains are similarly organized.
Based on how one person thinks about a hammer, a computer can identify when another person also is thinking about a hammer. It also can differentiate between items in the same category of tools, be it a hammer or screwdriver.
Results revealed the typical-ish distributed activity patterns underlying object representations, and high classification rank accuracies for object exemplars (a rough sketch of the cross-participant scheme follows the table below):

Reliable (p < 0.001) accuracies for the classification of object exemplars within participants were reached for eleven out of twelve participants, and reliable (p < 0.001) accuracies for the classification of object exemplars when training on the union of data from eleven participants were reached for eight out of twelve participants.

From Table 1 (Shinkareva et al., 2008). Anatomical regions (out of 71) that singly produced reliable average classification accuracies across the twelve participants for category identification (L = left hemisphere, R = right, L/R = bilateral):
L Precentral gyrus
L Superior frontal gyrus
L Inferior frontal gyrus, triangular part
L Insula, rolandic operculum
L/R Calcarine fissure
L/R Cuneus, superior occipital, middle occipital gyri
L/R Inferior occipital, lingual gyri
L/R Fusiform gyrus
L Postcentral gyrus
L/R Superior parietal gyrus, precuneus, paracentral lobule
L/R Inferior parietal, supramarginal, angular gyri
L/R Intraparietal sulcus
L/R Posterior superior temporal, posterior middle temporal gyri
L/R Posterior inferior temporal gyrus
L/R Cerebellum
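As promised, here is a rough sketch of that cross-participant analysis: train on the pooled data from eleven people, test on the twelfth, and score the result with a rank-accuracy measure. Again, everything here is a hypothetical stand-in. The fake "shared plus idiosyncratic" data, the generic classifier, and this particular rank-accuracy formula are illustrative choices, not the authors' methods, and real cross-subject decoding also requires warping each brain into a common anatomical space, which is glossed over below.

```python
# Sketch of leave-one-participant-out classification with rank accuracy.
# Hypothetical data and a generic classifier; not the authors' code.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
n_subjects, n_objects, n_reps, n_voxels = 12, 10, 6, 500

# Shared object signatures (a "communal" neural code) plus per-subject noise.
shared = rng.normal(0.0, 1.0, size=(n_objects, n_voxels))
data = []   # list of (X_subject, y_subject)
for s in range(n_subjects):
    subject_shift = rng.normal(0.0, 0.5, n_voxels)   # idiosyncratic offset
    X = np.vstack([0.5 * shared[o] + subject_shift + rng.normal(0.0, 1.0, n_voxels)
                   for o in range(n_objects) for _ in range(n_reps)])
    y = np.repeat(np.arange(n_objects), n_reps)
    data.append((X, y))

def rank_accuracy(probs, y_true):
    """Normalized rank accuracy: 1.0 if the true label always ranks first,
    about 0.5 at chance. One common definition; the paper's metric may differ."""
    n_classes = probs.shape[1]
    ranks = []
    for p, true_label in zip(probs, y_true):
        rank = np.argsort(-p).tolist().index(true_label)   # 0 = top-ranked
        ranks.append(1.0 - rank / (n_classes - 1))
    return float(np.mean(ranks))

# Leave one participant out: train on the union of the other eleven.
for held_out in range(n_subjects):
    X_test, y_test = data[held_out]
    X_train = np.vstack([data[s][0] for s in range(n_subjects) if s != held_out])
    y_train = np.hstack([data[s][1] for s in range(n_subjects) if s != held_out])

    clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
    clf.fit(X_train, y_train)
    probs = clf.predict_proba(X_test)
    print(f"participant {held_out + 1:2d}: rank accuracy = "
          f"{rank_accuracy(probs, y_test):.2f}")
```

Training on eleven brains and testing on the twelfth only works to the extent that different brains encode the same objects in a similar way, which is exactly the "commonality" being claimed in the quote below.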
"This part of the study establishes, as never before, that there is a commonality in how different people's brains represent the same object," said Mitchell, head of the Machine Learning Department in Carnegie Mellon's School of Computer Science and a pioneer in applying machine learning methods to the study of brain activity. "There has always been a philosophical conundrum as to whether one person's perception of the color blue is the same as another person's. Now we see that there is a great deal of commonality across different people's brain activity corresponding to familiar tools and dwellings."
"This first step using computer algorithms to identify thoughts of individual objects from brain activity can open new scientific paths, and eventually roads and highways," added Svetlana Shinkareva, an assistant professor of psychology at the University of South Carolina who is the study's lead author. "We hope to progress to identifying the thoughts associated not just with pictures, but also with words, and eventually sentences."
In contrast to this last statement are the results from a new paper (Sanai et al., 2008) showing that language representation in the brain is highly variable across individuals:

Background: Language sites in the cortex of the brain vary among patients. Language mapping while the patient is awake is an intraoperative technique designed to minimize language deficits associated with brain-tumor resection. ...

During surgery to remove gliomas, the patients in the mapping study performed three different speech/language tasks (including object naming) while various regions of cortex were stimulated to test for language deficits. Guess the neurosurgeons couldn't read their minds...
Results: ...Cortical maps generated with intraoperative language data ...showed surprising variability in language localization within the dominant [left] hemisphere.
References
Sanai N, Mirzadeh Z, Berger MS (2008). Functional outcome after language mapping for glioma resection. N Engl J Med 358:18-27.

Shinkareva SV, Mason RA, Malave VL, Wang W, Mitchell TM, Just MA (2008). Using fMRI Brain Activation to Identify Cognitive States Associated with Perception of Tools and Dwellings. PLoS ONE 3(1): e1394. doi:10.1371/journal.pone.0001394
Abstract: Previous studies have succeeded in identifying the cognitive state corresponding to the perception of a set of depicted categories, such as tools, by analyzing the accompanying pattern of brain activity, measured with fMRI. The current research focused on identifying the cognitive state associated with a 4s viewing of an individual line drawing (1 of 10 familiar objects, 5 tools and 5 dwellings, such as a hammer or a castle). Here we demonstrate the ability to reliably (1) identify which of the 10 drawings a participant was viewing, based on that participant's characteristic whole-brain neural activation patterns, excluding visual areas; (2) identify the category of the object with even higher accuracy, based on that participant's activation; and (3) identify, for the first time, both individual objects and the category of the object the participant was viewing, based only on other participants' activation patterns. The voxels important for category identification were located similarly across participants, and distributed throughout the cortex, focused in ventral temporal perceptual areas but also including more frontal association areas (and somewhat left-lateralized). These findings indicate the presence of stable, distributed, communal, and identifiable neural states corresponding to object concepts.