Facebook stops funding brain-reading computer interface

Now the answer is in, and it’s not close at all. Four years after announcing a “crazy incredible” project to create a “silent speech” interface that would use optical technology to read thoughts, Facebook is shelving the project, saying that consumer brain reading is still a long way off.

In a blog post, Facebook said it is ending the project and will instead focus on an experimental wrist controller for virtual reality that reads muscle signals in the arm. “While we still believe in the long-term potential of head-mounted optical [brain-computer interface] technologies, we’ve decided to focus our immediate efforts on a different neural interface approach that has a shorter-term path to market,” the company said.

Facebook’s brain-typing project had taken the company into uncharted territory, including funding brain surgeries at a California hospital and building prototype helmets capable of shooting light through the skull, and into difficult debates over whether tech companies should have access to private information about the brain. Ultimately, however, the company appears to have decided that the research simply won’t lead to a product soon enough.

“We have gained a lot of hands-on experience with these technologies,” says Mark Chevillet, the physicist and neuroscientist who led the silent-speech project until last year, when he changed roles to study how Facebook handles elections. “That’s why we can confidently say that, as a consumer interface, a silent-speech optical headset is still a long way off. Possibly longer than we expected.”

Telepathy

The craze around brain-computer interfaces exists because companies see mind-controlled software as a potential breakthrough as big as the computer mouse, the graphical user interface, or the swipe keyboard. What’s more, researchers have already shown that the results are remarkable when electrodes are placed directly into the brain to tap individual neurons. Paralyzed patients with such implants can deftly move robotic arms and play video games or type via mind control.

Facebook’s goal was to turn those findings into a mainstream technology anyone could use, which meant a helmet or headset you could put on and take off. “We never intended to make a brain-surgery product,” explains Chevillet. Given the social giant’s many regulatory problems, CEO Mark Zuckerberg had once said the last thing the company should do is crack open skulls. “I don’t want to see the congressional hearings on that one,” he joked.

In fact, as brain-computer interfaces advance, they raise serious new concerns. What if big tech companies could know people’s thoughts? In Chile, lawmakers are even considering a human-rights bill to protect brain data, free will, and mental privacy from tech companies. Given Facebook’s poor privacy track record, the decision to halt this research may have the side benefit of putting some distance between the company and growing concerns about “neurorights.”

Facebook’s project specifically targeted a brain controller that could mesh with its ambitions in virtual reality; the company bought Oculus VR in 2014 for $2 billion. To get there, it took a two-pronged approach, explains Chevillet. First, it needed to determine whether a thought-to-speech interface was even possible. For that, it sponsored research at the University of California, San Francisco, where a researcher named Edward Chang placed electrodes on the surface of people’s brains.

While implanted electrodes read data from single neurons, this technique, called electrocorticography, or ECoG, measures fairly large groups of neurons at once. Chevillet says Facebook hoped it would also be possible to detect equivalent signals from outside the head.

The UCSF team has made surprising progress, and today it reports in the New England Journal of Medicine that it used those electrodes to decode speech in real time. The subject was a 36-year-old man the researchers call “Bravo-1,” who, after a severe stroke, lost his ability to form intelligible words and can only grunt or moan. In their report, Chang’s group says that with the electrodes on the surface of his brain, Bravo-1 has been able to form sentences on a computer at a rate of about 15 words per minute. The technology works by measuring neural signals in the part of the motor cortex associated with Bravo-1’s efforts to move his tongue and vocal tract as he imagines speaking.

To achieve this result, Chang’s team asked Bravo-1 to imagine saying one of 50 common words nearly 10,000 times, feeding the patient’s neural signals into a deep-learning model. After training the model to match words with neural signals, the team was able to correctly determine the word Bravo-1 was thinking of saying 40% of the time (chance performance would have been about 2%). Even so, his sentences were full of errors. “Hello, how are you?” might come out as “Hungry how are you.”
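For intuition, guessing at random among 50 equally likely words succeeds 1/50 = 2% of the time, which is where the chance figure comes from. Below is a minimal, hypothetical sketch of this kind of 50-way word classification; the feature shapes and the use of a simple linear model (standing in for the paper’s deep network) are assumptions for illustration, and the random stand-in data will score near the 2% chance floor.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical setup: each attempted word yields a window of neural
# features (e.g., signal power per electrode over time), flattened into
# one vector. The data below is random noise, purely for illustration.
rng = np.random.default_rng(0)
n_trials, n_features, n_words = 10_000, 64, 50

X = rng.normal(size=(n_trials, n_features))   # stand-in neural features
y = rng.integers(0, n_words, size=n_trials)   # which of 50 words was attempted

# A plain linear classifier in place of the paper's deep-learning model.
clf = LogisticRegression(max_iter=200).fit(X[:8000], y[:8000])
accuracy = clf.score(X[8000:], y[8000:])

print(f"accuracy: {accuracy:.1%} (chance with 50 words is 1/50 = 2%)")
```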

But the scientists improved the performance by adding a language model, a program that evaluates which word sequences are most likely in English. This increased the accuracy to 75%. With this cyborg approach, the system could figure out that Bravo-1’s phrase “I right my nurse” actually meant “I love my nurse.”
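The rescoring idea fits in a few lines. In this toy sketch (every probability here is invented), the decoder’s score for each candidate word is multiplied by a language model’s estimate of how likely that word is to follow “I”; the language-model prior can overturn the decoder’s top guess, turning “right” into “love.”

```python
# Toy language-model rescoring: pick the candidate that maximizes
# decoder_prob(word) * lm_prob(word | previous words). All numbers invented.
candidates = ["right", "love", "my"]
decoder_prob = {"right": 0.40, "love": 0.35, "my": 0.25}  # from the neural decoder
lm_prob = {"right": 0.02, "love": 0.30, "my": 0.10}       # P(word follows "I")

best = max(candidates, key=lambda w: decoder_prob[w] * lm_prob[w])
print(best)  # "love": the prior outweighs the decoder's slim lead for "right"
```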

As remarkable as the result is, there are more than 170,000 words in English, so performance would fall off outside Bravo-1’s restricted vocabulary. That means the technique, while potentially useful as a medical aid, is not close to what Facebook had in mind. “We see applications for the foreseeable future in clinical assistive technology, but that’s not where our business is,” says Chevillet. “We are focused on consumer applications, and there is a very long way to go for that.”

Equipment developed by Facebook for diffuse optical tomography, which uses light to measure changes in blood oxygen in the brain. (Image: Facebook)

Optical failure

Facebook’s decision to give up on brain reading comes as no shock to researchers who study these techniques. “I can’t say I’m surprised, because they had hinted that they were looking at a short time frame and were going to reassess things,” says Marc Slutzky, a professor at Northwestern University whose former student Emily Mugler was a key hire for Facebook’s project. “Speaking from experience, the goal of decoding speech is a big challenge. We are still far from a practical, comprehensive solution.”

Still, Slutzky says the UCSF project is an “awesome next step” that demonstrates both the remarkable possibilities and some limits of the science of brain reading. “It remains to be seen whether you can decode free speech,” he says. “A patient saying ‘I want a glass of water’ as opposed to ‘I want my medicine’, well, that’s different.” He says that if artificial-intelligence models could be trained for longer, and on more than one person’s brain, they could improve quickly.

As UCSF’s research continued, Facebook was also paying other centers, such as the Johns Hopkins Applied Physics Lab, to figure out how to beam light through the skull and read neurons non-invasively. Like MRI, these techniques rely on measuring blood flow to regions of the brain, in this case by detecting reflected light.

It is these optical techniques that remain the main stumbling block. Even with recent improvements, some made by Facebook, they cannot pick up neural signals with sufficient resolution. Another problem, says Chevillet, is that the blood flow these methods detect occurs five seconds after a group of neurons fires, making it too slow to control a computer.
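That roughly five-second lag is a general property of blood-flow measurements, not something specific to Facebook’s hardware. As a rough illustration, the widely used canonical double-gamma hemodynamic response function (an assumption here, not a model Facebook published) peaks about five seconds after a burst of neural activity:

```python
import numpy as np
from scipy.stats import gamma

# Canonical double-gamma hemodynamic response: a neural event at t = 0
# produces a blood-flow signal that rises slowly, peaks around 5 s, and
# then dips below baseline. Parameters are the commonly used defaults.
t = np.linspace(0, 20, 2001)
hrf = gamma.pdf(t, 6) - 0.35 * gamma.pdf(t, 16)

print(f"blood-flow response peaks {t[np.argmax(hrf)]:.1f} s after the neurons fire")
```

For a real-time controller, every command would arrive seconds late, which is why the lag alone rules these signals out for fast computer control.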

