Neurotechnological Revolution


The brain contains all that makes us human, but its complexity is the source of both inspiration and frailty. The world’s scientific community is working hard to unravel the secrets of the brain’s computing power and to devise technologies that can heal it when it fails and restore critical functions to patients with neurological conditions.
Neurotechnology is the emerging field that brings together the development of technologies to study the brain and devices that improve and repair brain function. It has been brought into the spotlight by the Brain Research Through Advancing Innovative Neurotechnologies (BRAIN) Initiative in the United States as well as the designation of 2014 as the Year of the Brain by the European Brain Council (EBC), where the mantra is “the national wealth is the brain’s health.” This move has generated much momentum around collaborative efforts to push forward technologies for brain health.
The EBC move is a timely one, given the growing incidence and increasing societal cost of brain-related diseases. Europe’s aging population is increasingly in need of effective care and therapies for brain diseases, including stroke, Parkinson’s disease, and Alzheimer’s disease, which represent 35% of the burden of all diseases in Europe. The EBC estimates that the cost of brain disease in Europe since January 2014 has already reached nearly €750 billion and is rising fast.
The problem is global in nature. For instance, the U.S. National Institute of Mental Health (NIMH) estimates that in any year, around 25% of American adults suffer from a diagnosable mental disorder, with nearly 6% suffering serious disabilities as a result. The NIMH puts the annual cost in lost earnings at around US$317 billion. Research by the U.S. National Institute of Neurological Disorders and Stroke estimated in 2006 that around 50,000 new cases of Parkinson’s disease are diagnosed in the United States each year and that at least 500,000 people in the United States now have the condition.
Alzheimer’s disease, which accounts for up to 70% of cases of dementia, is a key target for designers of innovative therapies. In the United States, the Alzheimer’s Association estimates that 5.4 million people have the disease. Alzheimer’s Disease International predicts that by 2050 some 115.4 million people worldwide will develop the condition. Discoveries that promise deeper insight into the condition prompt great anticipation, as with the work of John O’Keefe, May-Britt Moser, and Edvard Moser, who jointly received the 2014 Nobel Prize for Medicine for the discovery of the brain’s positioning system.
This “inner GPS” enables people to orient themselves in space and demonstrates a cellular basis for higher cognitive function. In 1971, O’Keefe observed a type of nerve cell in the hippocampus of rats. These “place cells” consistently activated at a certain place in a room. In 2005, May-Britt and Edvard Moser identified “grid cells,” which create a coordinate system for precise positioning and path finding.
This research could further our understanding of Alzheimer’s disease because place cells and grid cells lie in the hippocampus and the entorhinal cortex, which control memory and orientation and which break down early in sufferers, causing them to become disorientated. But despite this and other research, an effective treatment for Alzheimer’s is still a long way off.
“In the future, a lot of societal cash will go toward funding Alzheimer’s research, and the governments around the world are backing big projects to tackle dementia, but we don’t yet have all the basic science. What we have to do is kick the ball out of the playing field and attack the problem using many different approaches, including neuroscience and engineering,” remarks Dr. Simon Schultz, a reader in neurotechnology in the Bioengineering Department at Imperial College London’s Faculty of Engineering (below).

Simon Schultz, director of the Centre for Neurotechnology and reader in neurotechnology in the Department of Bioengineering, Imperial College London. (Photo courtesy of Simon Schultz.)

Optogenetics is one area in which different approaches, in this case optics and genetics, combine in the way Schultz suggests. It is a neuromodulation technique that uses light to affect neurons that have been genetically sensitized to light and enables the control and monitoring of activity in individual neurons in living tissue and the real-time study of the effects of those manipulations. (For more on optogenetics, see “Let There Be Light” by Nan Li and Peng Miao in the July/August 2014 issue of IEEE Pulse.) It is just one tool among many that will no doubt play a major role in both understanding brain function and intervening to repair it.
“Optogenetics can now be used to disturb brain activity by affecting neurons of a single type as a group. It gives us the ability to watch patterns of brain activity during certain behaviors. Also, new wavefront technology can now address specific areas of neurons. All of this gives us a new window on brain activity. Most of what we know about brain activity comes from single electrode reading techniques, but now we can record thousands at once,” says Schultz.
It is clear that advances such as these are unlocking great potential for information and communications technology systems that address neurological conditions. The first step, however, is to better understand how the brain works, and that understanding stems from developing computational systems that mimic the brain’s architecture.

Building the Human Brain

Intensive efforts are under way to decipher the brain’s complex functions through computational models. One of the European Union’s biggest research projects is the €1.2-billion Human Brain Project (HBP), a ten-year initiative directed by the École Polytechnique Fédérale de Lausanne (EPFL) that aims to simulate the complete human brain on supercomputers.
The project builds on the work of Prof. Henry Markram, who, in 2005, launched the Blue Brain Project at EPFL to construct a virtual brain in a supercomputer to give neuroscientists new insights into neurological diseases. The key is vast computing power. Each simulated neuron requires the equivalent of a laptop computer, and a model of the whole brain would require billions, but supercomputing technology is nearing a level at which whole-brain simulation is a realistic possibility.

(a) A plot of the SpiNNaker chip and (b) Steve Temple (left) holding a SpiNNaker chip with Steve Furber in front of the plot. (Images courtesy of Steve Furber/University of Manchester.)

In the United Kingdom, similar branches of research are under way, such as the SpiNNaker project, a novel computer architecture inspired by the workings of the human brain. The project is building a massively parallel chip multiprocessor system for modeling large systems of spiking neurons in real time (above). The largest SpiNNaker machine will be capable of simulating a billion simple neurons, or millions of neurons with complex structure and internal dynamics. Steve Furber, ICL professor of computer engineering at the University of Manchester, United Kingdom, is closely involved in both SpiNNaker and the HBP.
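To give a flavor of what “simple neurons” means here, the following minimal sketch implements a leaky integrate-and-fire unit, the kind of spiking-neuron model that platforms such as SpiNNaker simulate in vast numbers; the parameter values are illustrative choices for this example, not SpiNNaker’s own defaults.

```python
import numpy as np

def simulate_lif(input_current, dt=1.0, tau_m=20.0, v_rest=-65.0,
                 v_reset=-65.0, v_threshold=-50.0, r_m=10.0):
    """Integrate a leaky integrate-and-fire neuron; return spike times (ms)."""
    v = v_rest
    spike_times = []
    for step, i_in in enumerate(input_current):
        # Leaky integration: dV/dt = (-(V - V_rest) + R*I) / tau_m
        v += dt * (-(v - v_rest) + r_m * i_in) / tau_m
        if v >= v_threshold:           # threshold crossing -> emit a spike
            spike_times.append(step * dt)
            v = v_reset                # reset the membrane potential
    return spike_times

# Example: 500 ms of constant 2-nA input produces a regular spike train.
print(simulate_lif(np.full(500, 2.0)))
```

Each such unit is cheap to update; the harder engineering problem is routing spikes between enormous numbers of these units in real time, which is where architectures such as SpiNNaker concentrate their effort.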
“My work focuses both on using the computer power now available to improve our understanding of the brain and on using the brain to build better computers. The focus of the HBP is using the brain to develop treatments for neurological conditions [and] there is a strong focus on building new IT platforms now. We don’t need more neuroscience research—there are 100,000 papers published on that every year—but we do need new computational systems,” he remarks (below).

Steve Furber, ICL professor of computer engineering in the School of Computer Science at the University of Manchester, United Kingdom, with a model brain. (Photo courtesy of Steve Furber/University of Manchester.)

SpiNNaker will provide a research tool for neuroscientists, computer scientists, and roboticists. It furthers the investigation of new computer architectures that break the rules of conventional supercomputing. In tandem, the University of Heidelberg is conducting work along similar lines, with brain-inspired multiscale computation in neuromorphic hybrid systems (BrainScaleS). BrainScaleS aims to understand and emulate the function and interaction of multiple spatial and temporal scales in brain information processing using both in vivo experimentation and computational analysis.
“The Heidelberg system operates 10^4-times faster than biological real time, but it casts the models into the circuit, so decisions have to be made about what models to support. SpiNNaker operates at the same speed as biological real time and has a very lightweight, more fragmented communications system and a lower data rate. Our neural model comes from software, so we can support more models,” Furber explains.
“We score on flexibility and, like synaptic learning models, SpiNNaker is arbitrarily adaptable. It requires a huge computing resource, and we have a 19-in rack with 100,000 cores, which is roughly the scale and complexity of a mouse brain. Our commitment to HBP is to deliver a system five times that large, though our goal is to scale up tenfold,” he adds (right).

The SpiNNaker computing resource—a rack with three card frames and 100,000 ARM cores—which is roughly equivalent to the scale of a mouse brain. (Photo courtesy of Steve Furber/University of Manchester.)
A team from Jülich in Germany is already running a cortical microcircuit model on SpiNNaker, having previously used a supercomputer, and a team from Stockholm is using it to run a Bayesian probability network.
“The focus is on the science now. I hope that we can remove the computational limitations on what most computational neuroscience groups have done,” says Furber. “Neural networks are abstracting away from the biological to develop principles but also going deep into detailed biological models. One difference between the HBP and the U.S. BRAIN initiative is that HBP is a science-first approach that is identifying models that can then develop an information and communications technology approach to treating brain conditions. In the United States, there is an application-first approach. I think we need to understand the workings of the brain better first.”
Brain modeling techniques have a long way to go before they can replicate a human brain, but progress is swift.
“We are starting to think about reverse engineering parts of mouse brains, and that could lead to the reverse engineering of an entire human brain, but we need to bridge the scale, and advances in modeling techniques will help,” notes Schultz.

Biological Meets Mechanical

One area that could provide excellent short-term prospects for workable technologies is brain–computer interfaces (BCIs). These rely on noninvasive measurements of brainwave activity at the cranial surface, captured with techniques such as electroencephalography (EEG) or near-infrared spectroscopy (NIRS), which can be interpreted to drive human–machine interaction.
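As a rough illustration of how such surface signals become commands, the sketch below extracts a simple band-power feature from a single EEG epoch and maps it to a binary decision. It is not any particular group’s pipeline: the sampling rate, frequency bands, and threshold are assumptions, and practical BCIs use calibrated, multichannel classifiers rather than a single hand-set rule.

```python
import numpy as np

FS = 256  # assumed sampling rate in Hz

def band_power(epoch, low, high, fs=FS):
    """Mean spectral power of a 1-D EEG epoch in the [low, high] Hz band."""
    spectrum = np.abs(np.fft.rfft(epoch)) ** 2
    freqs = np.fft.rfftfreq(len(epoch), d=1.0 / fs)
    mask = (freqs >= low) & (freqs <= high)
    return spectrum[mask].mean()

def decode_command(epoch, threshold=1.0):
    """Map relative mu-band (8-12 Hz) power to a binary command."""
    mu = band_power(epoch, 8, 12)
    broadband = band_power(epoch, 1, 40)
    # Motor imagery typically suppresses the mu rhythm over sensorimotor
    # cortex, so mu power below the broadband average is read here as "move".
    return "move" if mu < threshold * broadband else "rest"

# Example: decode one simulated 2-s epoch of noise standing in for real EEG.
print(decode_command(np.random.randn(2 * FS)))
```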
Many universities in the United Kingdom have neuroscience teams, but Imperial College alone is working in the area of neuroengineering. Prof. Etienne Burdet, who works alongside Schultz in its bioengineering department, has done extensive work in the field of neuromechanical control and neurorehabilitation technologies (right; photo courtesy of Etienne Burdet). His group integrates neuroscience and robotics to investigate human motor control, design efficient assistive devices and virtual-reality-based training for rehabilitation and surgery, and use existing robotics technology to create novel bioengineering techniques.
One of Burdet’s flagship projects is a brain-controlled wheelchair (BCW) that can navigate familiar environments. It relies on the kind of BCI that enables locked-in patients to communicate. Although BCIs suffer from a low information transfer rate and require concentrated effort from the operator, the BCW is nevertheless a great step forward in enhancing mobility. Users can select a destination from a list of predefined locations to which the BCW moves on virtual guiding paths.
According to Burdet, BCIs have two primary uses in this type of research. The first is the control of appliances such as wheelchairs by individuals unable to control their limbs, whether because of spinal cord injury, degenerative diseases such as amyotrophic lateral sclerosis, or conditions such as multiple sclerosis or cerebral palsy in which the limbs oscillate widely. The second is to aid rehabilitation by deciphering brain activity.
“BCIs can rely on interfaces that are noninvasive, based on EEG or NIRS; minimally invasive, such as electrocorticography (ECoG), with electrodes on the surface of the cortex; or invasive, with electrodes within the brain for animal models or deep brain surgery for Parkinson’s. Tracking eye movement is also becoming a simple and useful means to track motion intention, though more for manipulation than for mobility,” he explains.
So far, BCIs promise much but have limited application. Speed of response is the crucial issue.
“The limits of EEG BCI, with typically one decision every five seconds, and electrodes in the brain, which have low long-term reliability, are known, but when one can command a wheelchair—and using thought—one has a wonderful impression that the computer is reading into one’s own brain,” notes Burdet.
“The next phase of BCI research will look at practical use with context-dependent questions. With the typical low bandwidth of BCI, it is necessary to ask the right questions. For example, with our wheelchair, the only question when moving to a selected destination is whether one wants to stop. Concentrating on few questions can accelerate the search and increase the reliability of the decision detection. If used properly, the big drawback of low bandwidth is not critical. We easily wait one minute for a lift to come, so five seconds to select where we want to stop with an EEG BCI is not much,” Burdet adds.
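The sketch below illustrates this low-bandwidth strategy rather than the actual BCW controller: a destination is chosen from a predefined list through a series of yes/no decisions, the chair then follows its virtual guiding path autonomously, and the interface is polled only for the one question that matters en route, roughly once every five seconds. The destination names and the read_bci_decision stub are placeholders.

```python
import time

DECISION_PERIOD_S = 5  # typical EEG-BCI decision interval cited above
DESTINATIONS = ["kitchen", "bedroom", "bathroom", "front door"]  # assumed map

def read_bci_decision():
    """Placeholder for one binary BCI decision (True = 'yes'); always False here."""
    return False

def select_destination():
    """Step through the destination list, one yes/no decision per option."""
    for place in DESTINATIONS:
        print(f"Go to {place}?")
        if read_bci_decision():
            return place
    return DESTINATIONS[0]  # fall back to the first option if nothing is confirmed

def drive_to(destination, path_steps=3):
    """Follow the guiding path, asking only the stop question along the way."""
    print(f"Driving to {destination} along the guiding path...")
    for _ in range(path_steps):
        time.sleep(DECISION_PERIOD_S)  # the chair moves autonomously meanwhile
        if read_bci_decision():        # the only question asked: stop?
            print("Stopping on user request.")
            return
    print(f"Arrived at {destination}.")

drive_to(select_destination())
```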
ECoG enables greater control than EEG, and recent advances in areas such as optogenetics are opening up the potential to observe activity deep in the brain with good resolution. There is also great promise in combining different kinds of BCIs.
“Neuromechanical control systems could progress immensely in the next five to ten years. We could see portable or home-based systems for rehabilitation and assistance in manipulation and mobility. The key is to consider neuromechanics—not only neural control aspects but also body biomechanics—and to develop dedicated robotics solutions rather than using generic robotics solutions. For example, typical stroke patients requiring rehabilitation can hardly move the arm at first, so complex arm exoskeletons to train complete arm movement could be replaced by small, compact, and portable robotic interfaces,” Burdet says.

Delving Inside the Skull

While BCIs allow the brain to communicate with external systems, researchers are also taking technology into the brain itself. Implantable neurodevices can monitor or regulate brain activity, and some deep brain stimulators can already deliver electrical stimulation to inactive areas, such as the basal ganglia compromised by Parkinson’s disease. Work on neuromodulation combines neurodevices and neurochemistry to regulate brain activity using implanted technology.
For example, Schultz’s work spans both understanding brain function and developing the technologies that stem from that understanding. “We have technologies that can only be used in mice because interfacing with the nervous system of a paralyzed patient, for example, is much more complex. Nevertheless, there are some things we can do that overlap with areas of clinical needs, such as paralysis, locked-in syndrome, or loss of a limb,” says Schultz.
“All of these have relatively small patient populations. Alzheimer’s disease would be a much larger population, but we can do relatively little for it at the moment beyond the basic research. We hope to be able to do something with memory-enhancement technology, but that is beyond our capability now in terms of clinical applications,” he adds.
Advances have been made in techniques such as patch clamping, which allows the study of single or multiple ion channels in cells, particularly excitable cells such as neurons, cardiomyocytes, muscle fibers, and pancreatic beta cells. An automated version of the technique is enabling easier and faster study of the inner workings of brain cells.
“Much of my work is focused on improving the technology used for studying the neural code for sensory and cognitive processes. Patch clamping has now been automated at the Massachusetts Institute of Technology. A robot can now take the pipette into the brain and can vary the pressure in a specific area or even introduce a gene. We can observe these processes using a two-photon microscope positioned on top of the patching robot, which allows us to take a scan of an area of tissue and instruct the robot to target a specific cell for patch clamping,” says Schultz.
“Colleagues here at Imperial College are looking at using EEG to control devices, but you need to get a lot out of the signal, which is like listening to a microchip’s operation with a radio receiver—the signals are jumbled, so we’re hitting a ceiling,” Schultz adds. “My view is longer term than BCIs. One goal my group has is to look at repairing brain circuitry and then to develop neural technologies that interact with the nervous system.”
Schultz has focused on reverse engineering the information processing architecture of the brain and investigating the basic principles of information processing in cortical circuits. He uses an engineering approach that involves both experiments on mice and theoretical work. Using two-photon microscopy, optogenetic manipulation, and electrophysiology, he measures and interferes with patterns of neuronal activity in vivo, and his team develops new algorithms for analyzing the resulting data (right).

An image taken in Simon Schultz’s lab: two-photon imaging of layer-five pyramidal cells in a living mouse brain. (Image courtesy of Simon Schultz.)
His goal is to understand how neural circuits process sensory information in the mammalian cerebral and cerebellar cortices, including its use in elementary cognitive operations. The resulting insights into the cortical circuit could shed light on how it dysfunctions in neurodevelopmental and neurodegenerative disorders as well as aid the design of innovative computational devices.
Ultimately, a range of neuroprosthetics might result from this research, which could unlock the ability to heal, or indeed replace, damaged areas of the brain. What is certain is that the momentum behind neurotechnological research is building, and whether through implants, BCIs, or innovative computational systems inspired by the human brain, more light will be shed on our most complex and most precious organ, which will no doubt lead to effective treatments for many neurological conditions.