What’s another word for ‘neuronal map-maker’?


Using a novel technology for obtaining recordings from single neurons, a team of investigators at Harvard-affiliated Massachusetts General Hospital has discovered a microscopic “thesaurus” that reflects how word meanings are represented in the human brain.

The research, which is published in Nature, opens the door to understanding how humans comprehend language and provides insights that could be used to help individuals with medical conditions that affect speech.

“Humans possess an exceptional ability to extract nuanced meaning through language — when we listen to speech, we can comprehend the meanings of up to tens of thousands of words and do so seamlessly across remarkably diverse concepts and themes,” said senior author Ziv Williams, a physician-investigator in the Department of Neurosurgery at MGH and an associate professor of neurosurgery at Harvard Medical School. “Yet, how the human brain processes language at the basic computational level of individual neurons has remained a challenge to understand.”

Williams and his colleagues set out to construct a detailed map of how neurons in the human brain represent word meanings — for example, how we represent the concept of an animal when we hear the words “cat” and “dog,” and how we distinguish between the concepts of a dog and a car.

“We also wanted to find out how humans are able to process such diverse meanings during natural speech, and how we are able to rapidly comprehend the meanings of words across a wide array of sentences, stories, and narratives,” Williams said.

To start addressing these questions, the scientists used a novel technology that allowed them to simultaneously record the activities of up to a hundred neurons from the brain while people listened to sentences (such as, “The child bent down to smell the rose”) and short stories (for example, about the life and times of Elvis Presley).

Using this new technique, the investigators discovered how neurons in the brain map words to meanings and how they distinguish certain meanings from others.

“For example, we found that while certain neurons preferentially activated when people heard words such as ‘ran’ or ‘jumped,’ which reflect actions, other neurons preferentially activated when hearing words that have emotional connotations, such as ‘happy’ or ‘sad,’” said Williams. “Moreover, when looking at all of the neurons together, we could start building a detailed picture of how word meanings are represented in the brain.”

To comprehend language, though, it is not enough to only understand the meaning of words; one must also accurately follow their meanings within sentences. For example, most people can rapidly differentiate between words such as “sun” and “son” or “see” and “sea” when used in a sentence, even though the words sound exactly the same.

“We found that certain neurons in the brain are able to reliably distinguish between such words, and they continuously anticipate the most likely meaning of the words based on the sentence contexts in which they are heard,” said Williams.

Lastly, and perhaps most excitingly, the researchers found that by recording a relatively small number of brain neurons, they could reliably predict the meanings of words as they were heard in real time during speech. That is, based on the activities of the neurons, the team could determine the general ideas and concepts experienced by an individual as they were being comprehended during speech.
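The kind of decoding described above can be illustrated with a toy sketch. Everything here is hypothetical — the neuron counts, firing rates, and semantic categories are invented for illustration, and the nearest-centroid decoder is a deliberately simple stand-in, not the method the researchers actually used:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate firing rates of 100 neurons for words in three semantic
# categories (actions, emotions, objects). Each category gets its own
# mean activity pattern; individual words are noisy samples around it.
n_neurons, n_train = 100, 40
categories = ["action", "emotion", "object"]
centroids = {c: rng.normal(10, 2, n_neurons) for c in categories}

def simulate_word(category):
    """Firing-rate vector evoked by one heard word (pattern + noise)."""
    return centroids[category] + rng.normal(0, 1, n_neurons)

# "Train" the decoder: average the population response to labeled words.
templates = {c: np.mean([simulate_word(c) for _ in range(n_train)], axis=0)
             for c in categories}

def decode(rates):
    """Predict the category whose learned pattern is closest to `rates`."""
    return min(templates, key=lambda c: np.linalg.norm(rates - templates[c]))

# Decode the activity evoked by new, unseen words.
trials = categories * 20
correct = sum(decode(simulate_word(c)) == c for c in trials)
print(f"decoding accuracy: {correct}/{len(trials)}")
```

With cleanly separated category patterns, even this crude distance-based readout recovers the semantic category of a new word from population activity alone — a much-simplified version of the real-time prediction the team reports.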

“By being able to decode word meaning from the activities of small numbers of brain cells, it may be possible to predict, with a certain degree of granularity, what someone is listening to or thinking,” said Williams. “It could also potentially allow us to develop brain-machine interfaces in the future that can enable individuals with conditions such as motor paralysis or stroke to communicate more effectively.”

Among the paper’s authors, Mohsen Jamali is supported by CIHR, a NARSAD Young Investigator Grant, and the Foundations of Human Behavior Initiative; Benjamin Grannan by NREF and an NIH NRSA; Arjun Khanna and William Muñoz by NIH R25NS065743; Angelique Paulk by UG3NS123723, the Tiny Blue Dot Foundation, and P50MH119467; Sydney Cash by R44MH125700 and the Tiny Blue Dot Foundation; Evelina Fedorenko by U01NS121471 and R01DC016950; and Ziv Williams by NIH R01DC019653 and U01NS121616.


