Computing in the Human Brain

By Lori Cameron
Published 06/14/2018


Our minds hold so many neuronal circuits that scientists have never dared to capture all the data. Until now, and in 3D, no less. The quest pushes the boundaries of computing itself.

The study of the human brain's astronomical number of neuronal circuits (hundreds of trillions) overwhelms researchers, who want to amass and organize data on our cerebral activity into a single database open to colleagues worldwide.

Until now, such a repository of information has been unthinkable. To make the problem even more daunting, researchers want to map not only the human brain but also the brains of almost every living creature on Earth.

And do it in 3D.

The quest will push the boundaries of computing itself, requiring memory and performance at the highest reaches: zettabytes (one sextillion, or 10²¹, bytes) of storage and zettaflops of computing power.
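
To see why the authors talk in zettabytes, a quick back-of-envelope calculation helps. The sketch below uses ballpark figures for the human brain (roughly 86 billion neurons and on the order of 10¹⁵ synapses) together with assumed storage and sampling parameters that are purely illustrative, not taken from the paper.

```python
# Back-of-envelope estimate with illustrative assumptions (not figures from the paper):
# ~8.6e10 neurons and ~1e15 synapses in a human brain, a few bytes per stored synapse,
# and a hypothetical whole-brain activity recording sampled at 1 kHz per neuron.

NEURONS = 8.6e10           # approximate neuron count in a human brain
SYNAPSES = 1e15            # upper-end estimate of synapse count
BYTES_PER_SYNAPSE = 8      # assumed: connection indices plus a synaptic strength

structural_bytes = SYNAPSES * BYTES_PER_SYNAPSE

SAMPLE_RATE_HZ = 1_000     # assumed sampling rate per neuron
BYTES_PER_SAMPLE = 2       # assumed sample size
SECONDS_PER_DAY = 86_400
functional_bytes_per_day = NEURONS * SAMPLE_RATE_HZ * BYTES_PER_SAMPLE * SECONDS_PER_DAY

print(f"Structural connectome: ~{structural_bytes / 1e15:.0f} PB")
print(f"Whole-brain activity, one day: ~{functional_bytes_per_day / 1e18:.0f} EB")
print(f"Whole-brain activity, one year: ~{functional_bytes_per_day * 365 / 1e21:.1f} ZB")
```

Even with these modest assumptions, a year of hypothetical whole-brain activity recording reaches zettabyte scale, which is the range the authors anticipate.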

To prepare for the avalanche, 14 researchers from both industry and academia propose building massive, open data repositories that use high-performance computing. The platform would allow researchers to store and organize the data and to collaborate with others on what they describe as several of the biggest challenges in neuroscience.

“Following the goals of the Brain Research through Advancing Innovative Neurotechnologies (BRAIN) Initiative and the EU Human Brain Project (HBP), we propose four grand challenge problems in neuroscience for which high-performance computing will likely play an important role: neuroanatomy and structural connectomics; neural population dynamics and functional connectomics; linking sensations, brains, and behaviors; and synthesis through simulations,” say the authors of “International Neuroscience Initiatives through the Lens of High-Performance Computing,” (login may be required for full text) in the April issue of Computer magazine.

They also plan to go one step further.

“While each challenge would provide useful information by itself, integrating them will result in synergistic understanding. Taken together, the results of these challenges will deepen our understanding of network mechanics that generate complex behaviors and the transformation of sensory inputs into neural representations of sensations. Furthermore, the results will provide insight into how brains achieve near-optimal computing capabilities and link structure to function across many spatiotemporal scales,” the authors say.

Figure: Grand challenge problems in neuroscience. The authors pose four grand challenge problems that, at scale, will require high-performance computing (HPC). The figure summarizes how each problem scales with key features of the dataset and gives example inputs (data types) and outputs (insights gained) for each problem. Note that each problem scales approximately as the product of at least two key features (for example, neurons² × time). (Source: Christian Swinehart, Samizdat Drafting Co.)

Neuroanatomy and structural connectomics: A sprawling network of brain signals

Microscopes are now so powerful that they can trace the finest neuronal processes and identify individual synaptic connections, helping researchers reconstruct neural anatomy and better understand how neurons function.

“The result of such anatomical reconstruction could be a full 3D representation of each neuron or a graph of the resulting structural connectivity matrix with some measure of synaptic strength (the matrix Cs, of size N neurons × N neurons). A structural connectome would provide a compact summary sufficient for some analyses, and is required to link structure to function in the nervous system,” the authors say.
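
To make the structural connectivity matrix concrete, here is a minimal sketch, assuming the connectome is held as a sparse N × N matrix whose entries are synaptic strengths. The sizes, the random connectivity, and the variable names (C_s, pre, post) are illustrative, not from the paper.

```python
import numpy as np
from scipy import sparse

# Toy sketch of a structural connectome: an N x N sparse matrix whose entry
# (i, j) holds some measure of synaptic strength from neuron i to neuron j.
# Sizes and connectivity are made up for illustration.

rng = np.random.default_rng(0)
n_neurons = 10_000
n_synapses = 200_000  # far sparser than N^2 entries, as real connectomes are

pre = rng.integers(0, n_neurons, size=n_synapses)    # presynaptic neuron ids
post = rng.integers(0, n_neurons, size=n_synapses)   # postsynaptic neuron ids
strength = rng.exponential(scale=1.0, size=n_synapses)

# C_s: N neurons x N neurons structural connectivity matrix
C_s = sparse.coo_matrix((strength, (pre, post)),
                        shape=(n_neurons, n_neurons)).tocsr()

# Simple structural statistics one might compute from such a matrix
out_degree = np.asarray((C_s > 0).sum(axis=1)).ravel()
print("mean out-degree:", out_degree.mean())
print("stored synapses:", C_s.nnz)
```

At human scale, with tens of billions of neurons and hundreds of trillions of synapses, the same structure would no longer fit on a workstation, which is where HPC enters the picture.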

Neural population dynamics and functional connectomics: Discovering brain patterns in the way we think, feel, and act

High-performance computing allows scientists to record large numbers of brain signals simultaneously for long periods of time, capturing the activity patterns that produce sensations, actions, cognition, and consciousness.

Soon, they will be able to do these recordings non-stop for weeks or even months.

The authors propose a way to get the most out of that data: “It is becoming increasingly important to develop data-analysis methods capable of revealing structure from heterogeneous, nonstationary time-series measurements at scale. Two complementary approaches to this problem are dimensionality reduction and functional connectomics. Dimensionality reduction methods aim to find low-dimensional spaces that concisely summarize high-dimensional spatiotemporal patterns of activity, and can be used to gain insight into network dynamics. Functional connectomics aims to determine time-resolved causal influences among spatially distributed neural recordings.”
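
As a toy illustration of the dimensionality-reduction approach described above, the sketch below applies PCA, computed via the singular value decomposition, to synthetic population activity generated from a few hidden latent signals. All of the data and dimensions are fabricated for illustration; analyses at the scale the authors describe would require HPC and far more sophisticated methods.

```python
import numpy as np

# Minimal sketch of dimensionality reduction on neural population activity,
# assuming the data arrive as a (time bins x neurons) matrix of firing rates.
# The low-dimensional "latent" structure here is synthetic.

rng = np.random.default_rng(1)
n_time, n_neurons, n_latent = 2_000, 300, 3

# Hidden low-dimensional dynamics (e.g., slow oscillations) driving many neurons
t = np.linspace(0, 20 * np.pi, n_time)
latents = np.stack([np.sin(t), np.cos(0.5 * t), np.sin(0.25 * t)], axis=1)
mixing = rng.normal(size=(n_latent, n_neurons))
activity = latents @ mixing + 0.5 * rng.normal(size=(n_time, n_neurons))

# PCA via SVD: project the high-dimensional activity onto a few components
centered = activity - activity.mean(axis=0)
U, S, Vt = np.linalg.svd(centered, full_matrices=False)
explained = (S**2) / (S**2).sum()
low_dim = centered @ Vt[:n_latent].T   # (time bins x n_latent) trajectory

print("variance explained by 3 components:", explained[:3].sum())
```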

Linking sensations, brains, and behaviors

Capturing the structure of a single neuron is important, but researchers want to be able to connect neural activity with the external stimuli that trigger it.

“The stimuli and behaviors used in neuroscience studies have been relatively simple (low-dimensional), which eases data analyses but elicits neural activity far removed from activity underlying naturalistic sensations and behavior. Indeed, it has recently been noted that simple tasks result in simple neural activity patterns. Thus, it is insufficient to record from more neurons without simultaneously monitoring behavior during increasingly complex sensory, motor, and cognitive tasks. This brings with it challenges of acquiring, analyzing, and integrating such multimodal data (for example, visual, audio, haptic, and movement) with the brain data,” say the authors.
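
One small, concrete piece of that multimodal challenge is simply getting different data streams onto a common clock. The sketch below uses made-up signals and assumed sampling rates to resample a behavioral measurement onto the time base of a neural recording so the two can be analyzed jointly.

```python
import numpy as np

# Sketch of one piece of multimodal integration: resampling a behavioral signal
# (e.g., running speed captured at 60 Hz) onto the time base of a neural
# recording (e.g., spike counts binned at 100 Hz). Signals and rates are
# illustrative assumptions, not from the paper.

rng = np.random.default_rng(2)
duration_s = 120.0
neural_rate_hz = 100.0
behavior_rate_hz = 60.0

neural_t = np.arange(0, duration_s, 1 / neural_rate_hz)
behavior_t = np.arange(0, duration_s, 1 / behavior_rate_hz)

spike_counts = rng.poisson(lam=2.0, size=neural_t.size)
running_speed = np.abs(np.sin(0.1 * behavior_t)) * 10 + rng.normal(0, 0.5, behavior_t.size)

# Linear interpolation puts both modalities on a common clock so they can be
# analyzed jointly (correlations, encoding models, and so on).
speed_on_neural_clock = np.interp(neural_t, behavior_t, running_speed)

corr = np.corrcoef(spike_counts, speed_on_neural_clock)[0, 1]
print("spike count vs. running speed correlation:", round(corr, 3))
```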

Figure: Neuroscience datasets will scale exponentially across species. Neuroscientists study brains across several species of varying complexity; the figure depicts the exponential growth in brain size (number of neurons and synapses) and lifespan across C. elegans (worms), D. melanogaster (flies), mice, and humans. (Source: Christian Swinehart, Samizdat Drafting Co.)

Synthesis through simulation

Mapping brains is expensive, and there is little room for error. A mistake at any point can cascade into errors at many other points, undermining the value of the research.

“The goal of neuroscience is to achieve a deeper, broader understanding of the brain that extends across spatial and temporal scales. However, simply acquiring data without simultaneously developing guiding (theoretical) principles will impede extracting understanding from the data, and runs the risk of misguided investment into costly experiments. As other fields have shown, common computational and theoretical frameworks should permeate research directions while scaling up data acquisition and analysis to reduce the challenge of integrating information from very different levels of system granularity,” the authors say.
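
As a toy example of the simulation side of this synthesis, the sketch below integrates a single leaky integrate-and-fire neuron, a standard textbook model. The parameters are generic illustrative values rather than a model from the paper; brain-scale simulations would couple billions of such units through a connectome.

```python
import numpy as np

# Minimal sketch of "synthesis through simulation": a leaky integrate-and-fire
# (LIF) neuron driven by noisy input. Parameter values are illustrative.

rng = np.random.default_rng(3)

dt = 1e-4            # time step (s)
T = 1.0              # total simulated time (s)
tau_m = 20e-3        # membrane time constant (s)
v_rest, v_thresh, v_reset = -70e-3, -50e-3, -65e-3  # volts
R_m = 1e8            # membrane resistance (ohms)

steps = int(T / dt)
v = v_rest
spike_times = []

for i in range(steps):
    I = 2.2e-10 + 5e-11 * rng.normal()       # noisy input current (amps)
    dv = (-(v - v_rest) + R_m * I) * dt / tau_m
    v += dv
    if v >= v_thresh:                         # spike: reset the membrane potential
        spike_times.append(i * dt)
        v = v_reset

print(f"{len(spike_times)} spikes in {T:.1f} s "
      f"(~{len(spike_times) / T:.0f} Hz firing rate)")
```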

Figure: Grand challenge problems in neuroscience will push the boundaries of computing. Schematic of the computational demands (computing power [flops] and memory footprint [bytes]) of the grand challenge problems for four species; the authors project that these problems will scale approximately within the boundaries outlined by the dashed line. GB: gigabytes, TB: terabytes, PB: petabytes, EB: exabytes, ZB: zettabytes, GF: gigaflops, TF: teraflops, PF: petaflops, EF: exaflops, ZF: zettaflops. (Source: Christian Swinehart, Samizdat Drafting Co.)

From biological brains to digital brains: A super-workstation for everyone to use

Simulating brain activity, including the progression of brain diseases, is an area of ongoing interest among neuroscientists. The authors, however, see even broader potential for researchers in high-performance computing itself.

“There is a growing requirement for HPC architectures to be less simulation focused and more data intensive. Many neuroscientists are envisioning interactive supercomputing—using a supercomputer like a super-workstation for exploring large datasets. This requires a supercomputer to be managed more like a telescope: individual research groups would be assigned time, as opposed to a scheme that executes a job when it optimally fits on the system. The price HPC centers will pay for this is a decline in the overall system utilization metric, but the benefit will be a much broader scientific user base,” the authors say.
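
To illustrate the scheduling trade-off the authors describe, here is a deliberately simplified sketch of telescope-style allocation: research groups reserve whole blocks of machine time in request order, and requests that do not fit are deferred, leaving idle capacity. The group names and hours are invented, and real HPC resource managers are far more elaborate.

```python
from dataclasses import dataclass

# Toy illustration of "telescope-style" allocation: research groups reserve
# whole blocks of machine time for interactive use, instead of submitting
# batch jobs that a scheduler packs for maximum utilization.

@dataclass
class Reservation:
    group: str
    start_hour: float
    hours: float

def allocate(requests, day_hours=24.0):
    """Assign contiguous blocks in request order until the day is full."""
    schedule, cursor = [], 0.0
    for group, hours in requests:
        if cursor + hours > day_hours:
            break  # remaining requests wait for another allocation period
        schedule.append(Reservation(group, cursor, hours))
        cursor += hours
    return schedule, cursor / day_hours

requests = [("lab-A", 6), ("lab-B", 10), ("lab-C", 5), ("lab-D", 8)]
schedule, utilization = allocate(requests)
for r in schedule:
    print(f"{r.group}: hours {r.start_hour:.0f}-{r.start_hour + r.hours:.0f}")
print(f"scheduled utilization: {utilization:.0%}")  # idle hours are the price of interactivity
```

The leftover hours at the end of the day are the utilization penalty the authors mention, traded for a broader, more interactive user base.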

The study’s authors hail from industry, nonprofit organizations, national laboratories, and universities:

  • Kristofer E. Bouchard, Lawrence Berkeley National Laboratory and University of California, Berkeley
  • James B. Aimone, Sandia National Laboratories
  • Miyoung Chun, Kavli Foundation
  • Thomas Dean, Google and Stanford University
  • Michael Denker, Jülich Research Center
  • Markus Diesmann, Jülich Research Center
  • David D. Donofrio, Lawrence Berkeley National Laboratory
  • Loren M. Frank, UC San Francisco and Howard Hughes Medical Institute
  • Narayanan Kasthuri, Argonne National Laboratory and University of Chicago
  • Christof Koch, Allen Institute for Brain Science
  • Oliver Rübel, Lawrence Berkeley National Laboratory
  • Horst D. Simon, Lawrence Berkeley National Laboratory
  • F.T. Sommer, UC Berkeley
  • Prabhat Mishra, Lawrence Berkeley National Laboratory


About Lori Cameron

Lori Cameron is a Senior Writer for the IEEE Computer Society and currently writes regular features for Computer magazine, Computing Edge, and the Computing Now and Magazine Roundup websites. Contact her at l.cameron@computer.org. Follow her on LinkedIn.