
Conventional Misconceptions: Infinite Regressions, Computing Power, and Brain Simulations

[Image found on: https://i1.wp.com/www.labgrab.com/files/blue_brain_3.jpg]

Some people are skeptical about the possibility of simulating a brain, ever. *cracks knuckles* It’s rebuttal time. And maybe this post is going to strike an aggressive chord, but point-by-point counterarguments are the only good counterarguments.

SO. Let’s roll.

“There’s no reason to think it will ever be possible to scan the human brain and create a functionally equivalent copy in software. Hanson is confused by the ease with which this sort of thing can be done with digital computers. He fails to grasp that the emulation of one computer by another is only possible because digital computers are the products of human designs, and are therefore inherently easier to emulate than natural systems.”

Barely out of the gate, we’re faced with a statement whose refutation would demand proving a negative. There’s no reason to think the brain can be simulated? “Well, prove that it’s impossible,” says the budding but overeager logician. Of course, proving an impossibility with certainty is itself impossible (when you’re ignorant of a system’s parameters, that is). The real problem with this series of assertions is that no evidence is offered to substantiate what appears to be the thesis. A claim like “brains can never be digitally simulated” at least demands some supporting evidence.

It is obvious from the lines that follow (“Hanson… etc.”) that this author intended to critique someone’s reasoning, not to supply direct evidence for his topic sentence. There is simply an organizational error here; only after explaining Hanson’s alleged confusion is it time to continue with a statement like “brains can’t be emulated on computers”. That way, you can conclude demi-reasonably that brains can’t be simulated because Hanson fails to understand the limitations of computing. After that, you can strengthen your contention against Hanson by adding citations. Of course, you’re still in the dark in terms of credibility, since what some random guy says about whether or not brains can be simulated doesn’t affect whether they actually can be.

Not only is this opening argument feebly constructed; it is presumptuous to boot. Hanson’s belief that brains can someday be simulated virtually isn’t necessarily the product of confusion over the ease of porting. This author is overemphasizing Hanson’s arguable misuse of the verb “port”, when in fact Hanson was probably using it in a general sense to indicate simulation. Hanson may just think the brain is simulatable in the future because… well, because people have simulated parts of it already. And the accuracy with which these simulations resemble real brains can only improve as experimental neuroscience progresses.

But hey, there’s always a chance to redeem the arguments proposed:

“The word “port” doesn’t make any sense in this context because the human brain isn’t software and he’s not proposing to modify it. What [Hanson] means is that we’d emulate the human brain on a digital computer. But that doesn’t really work either. Emulation works because of a peculiar characteristic of digital computers: they were built by a human being based on a top-down specification that explicitly defines which details of their operation are important. The spec says exactly which aspects of the machine must be emulated and which aspects may be safely ignored. This matters because we don’t have anywhere close to enough hardware to model the physical characteristics of digital machines in detail. Rather, emulation involves re-implementing the mathematical model on which the original hardware was based. Because this model is mathematically precise, the original device can be perfectly replicated.”

The first couple of declarations in this segment are fairly on point; emulating the brain in a digital framework isn’t quite the same as porting software from one operating system to another. Hanson probably would be better off using “emulation” rather than “port” as his go-to descriptor, though again, what he probably means to imply is simulation, not emulation. It’s also true that we currently lack the technology to simulate computers in full physical detail, hardware and all. But that’s currently. Still no evidence is provided to suggest that such technology won’t exist in the future. And on the other hand, both theory and technology are constantly advancing in a direction that strongly suggests brain simulation will be within reach someday. Not now, but someday.

Now, let’s address the argument that top-down design simplifies emulation. Yes. Yes it does. Why this precludes emulating a brain remains a mystery, however. Certainly, top-down design lets whoever is porting the software allocate resources in different ways according to functional goals (you want to make a toaster, there are about three hundred godzillion ways, but they will all toast bread, or toast toast… or something). However, the brain follows rules just like any other system. Ever since Hodgkin and Huxley proposed their model of action potential propagation in 1952, flurries of biologically apt mathematical models have populated the field of theoretical neuroscience. These can all be implemented in virtual environs; the only constraint is processing power. And again, technology is constantly advancing processing power.
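For a concrete sense of what “implemented in virtual environs” looks like, here is a minimal sketch of the original Hodgkin-Huxley equations stepped forward with plain Euler integration. The parameter values are the standard textbook set, and the constant injected current and crude spike counter are illustrative choices of mine, not anyone’s production simulator:

```python
# Minimal Hodgkin-Huxley (1952) point-neuron sketch, forward Euler integration.
# Units: mV, ms, uA/cm^2, mS/cm^2, uF/cm^2. Textbook parameters, illustrative only.
import math

C_m = 1.0                              # membrane capacitance
g_Na, g_K, g_L = 120.0, 36.0, 0.3      # peak conductances
E_Na, E_K, E_L = 50.0, -77.0, -54.387  # reversal potentials

# Voltage-dependent rate constants for the gating variables m, h, n
def a_m(V): return 0.1 * (V + 40.0) / (1.0 - math.exp(-(V + 40.0) / 10.0))
def b_m(V): return 4.0 * math.exp(-(V + 65.0) / 18.0)
def a_h(V): return 0.07 * math.exp(-(V + 65.0) / 20.0)
def b_h(V): return 1.0 / (1.0 + math.exp(-(V + 35.0) / 10.0))
def a_n(V): return 0.01 * (V + 55.0) / (1.0 - math.exp(-(V + 55.0) / 10.0))
def b_n(V): return 0.125 * math.exp(-(V + 65.0) / 80.0)

dt, T = 0.01, 50.0    # time step and total duration (ms)
I_ext = 10.0          # constant injected current (uA/cm^2), enough to make it fire

# Start at rest with the gating variables at their steady-state values
V = -65.0
m = a_m(V) / (a_m(V) + b_m(V))
h = a_h(V) / (a_h(V) + b_h(V))
n = a_n(V) / (a_n(V) + b_n(V))

spikes, above = 0, False
for _ in range(int(T / dt)):
    I_Na = g_Na * m**3 * h * (V - E_Na)
    I_K  = g_K * n**4 * (V - E_K)
    I_L  = g_L * (V - E_L)
    m += dt * (a_m(V) * (1.0 - m) - b_m(V) * m)
    h += dt * (a_h(V) * (1.0 - h) - b_h(V) * h)
    n += dt * (a_n(V) * (1.0 - n) - b_n(V) * n)
    V += dt * (I_ext - I_Na - I_K - I_L) / C_m
    # crude spike counter: count upward crossings of 0 mV
    if V > 0.0 and not above:
        spikes, above = spikes + 1, True
    elif V < 0.0:
        above = False

print(f"{spikes} action potentials in {T:.0f} ms at I_ext = {I_ext} uA/cm^2")
```

Forty-odd lines gets you a neuron that fires realistic-looking action potentials; the real cost shows up when you want hundreds of millions of them wired together, which is exactly the processing-power point above.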

Onward!

“You can’t emulate a natural system because natural systems don’t have designers, and therefore weren’t built to conform to any particular mathematical model. Modeling natural systems is much more difficult—indeed, so difficult that we use a different word, “simulation” to describe the process. Creating a simulation of a natural system inherently means making judgment calls about which aspects of a physical system are the most important. And because there’s no underlying blueprint, these guesses are never perfect: it will always be necessary to leave out some details that affect the behavior of the overall system, which means that simulations are never more than approximately right.”

First of all, nature conforms to mathematical models all the time. What are the fundamental laws of physics if not mathematically expressible parameters with which nature complies? Certainly, much theory derives from flawed experimental and observational data and is thus fated never to be altogether “perfect”, but science self-corrects and therefore approaches reality ad infinitum. And there’s plenty of data that can be excluded from many simulations; probability algorithms for quantum tunneling effects, for instance, aren’t exactly essential components for weather simulations. Same goes for brains. That’s a judgment call that can be made without expectation of significant error.

As mentioned before, processing power comes into play here. Omissions of variables are often driven by processing constraints; as processors improve, that issue will dwindle.

The “approximately right” phrase above is used to connote inadequacy, but an approximation is not necessarily inadequate. For instance, most protein simulations neglect the influence of gravity entirely, and doing so doesn’t render the resulting trajectories useless. Adding gravity terms would only make the simulation “more real”, not more valuable. Some variables simply matter more than others in describing a natural phenomenon, and that importance is measurable. If there is no significant loss of accuracy when a particular set of rules is ignored, then those rules don’t need to be implemented algorithmically.
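To put a number on “measurable”: here is a back-of-envelope sketch comparing the energy gravity contributes over a nanometer of protein motion to the thermal energy jostling the molecule anyway. The protein size (~50 kDa) and the 1 nm displacement are my own illustrative assumptions, not figures from any particular study:

```python
# Back-of-envelope check of the gravity-vs-thermal-noise judgment call for a protein.
k_B = 1.380649e-23     # Boltzmann constant, J/K
T = 300.0              # room temperature, K
g = 9.81               # gravitational acceleration, m/s^2
dalton = 1.66054e-27   # atomic mass unit, kg

mass = 50_000 * dalton  # a generic mid-sized ~50 kDa protein (assumption)
height = 1e-9           # a 1 nm conformational displacement (assumption)

thermal_energy = k_B * T            # scale of the thermal fluctuations acting on it
gravity_energy = mass * g * height  # gravitational work over that displacement

print(f"thermal energy ~ {thermal_energy:.2e} J")
print(f"gravity energy ~ {gravity_energy:.2e} J")
print(f"ratio (gravity/thermal) ~ {gravity_energy / thermal_energy:.1e}")
# Gravity comes out roughly ten orders of magnitude weaker than thermal noise at
# this scale, so dropping it costs essentially nothing in accuracy.
```

That’s the judgment call in action: you don’t guess which variables to drop, you estimate their contribution and drop the ones that are lost in the noise.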

And finally:

“Scientists have been trying to simulate the weather for decades, but the vast improvements in computing power in recent decades have produced only modest improvements in our ability to predict the weather. This is because the natural world is much, much more complex than even our most powerful computers. The same is true of our brains. The brain has approximately 100 billion neurons. If each neuron were some kind of simple mathematical construct (in the sense that transistors can be modeled as logic gates) we could imagine computers powerful enough to simulate the brain within a decade or two. But each neuron is itself a complex biological system. I see no reason to think we’ll ever be able to reduce it to a mathematically tractable model. I have no doubt we’ll learn a lot from running computer simulations of neurons in the coming decades. But I see no reason to think these simulations will ever be accurate (or computationally efficient) enough to serve as the building blocks for full-brain emulation.”

This is pretty cogent, actually. Some evidence (though no exact statistics, but there aren’t many of those in this post either) is provided to suggest that advances in computing power don’t translate into dramatic improvements in simulation accuracy. Also true is that logic gates cannot sufficiently describe neurons; they are highly dynamic electrochemical systems coming in myriad different morphologies, all of which influence their behavior. Those facets of neurons are necessary components of a good simulation. However (the pretentious sibling of “but”), the author still sticks to his guns on the basis of “seeing no reason” to think otherwise.

Well, here are a few reasons:

  • BrainGate uses microelectrode arrays to record the activity patterns of sensorimotor neurons. Those findings are used to develop brain-computer interfaces (BCIs), which enable both the translation of stimuli into electrical schemes the brain can interpret and brain control of prosthetic devices. With electrical signals alone, these technologies are already good enough to restore some functionality to impaired individuals. As the data accumulate, the degree to which functionality can be restored will only increase.
  • The Neurally-Controlled Animat is a hybrid system in which living neural tissue effectively inhabits a virtual environment. Neural tissue arranged on a multi-electrode array responds to controlled stimuli while all of its responses, plastic changes, and so on are recorded in real time. In this way, we acquire large volumes of information on the dynamics of real individual neurons, and of neuron clusters, within a digital framework.
  • Hippocampal prostheses are brain implants intended to replace damaged or deteriorating hippocampal tissue. They run on computational models of hippocampal signal processing and have successfully replicated hippocampal function in rats.
  • The Blue Brain Project, first of all, is awesome. Second of all, this enterprise has simulated a rat neocortical column of roughly 10,000 neurons to biological accuracy (admittedly, that accuracy isn’t quantified). Needless to say, the supercomputing muscle behind this is gargantuan. But from the project’s inception in 2005, it took only until 2006 to get the whole column simulated.

When considering the progress and trajectory of current research in neurotechnology, including brain-computer interfaces, neuroprosthetics, hybrots, and computational neuroscience, you must at least acknowledge the possibility that a simulated brain is on the horizon.

Now, while still on topic, this may be a good time to point out a common fallacy attributed to comprehensive brain simulations. Claims flit about here and there proposing that simulating a whole brain is impossible due to an infinite regression of inventory: if one intends to store data on every neuron in the system, then there will have to be storage accommodations made for the storing mechanisms, and for the mechanisms storing that data, and so on. This is simply false; a simulated brain does not have to contain information about itself that real brains don’t. Real brains don’t devote a neuron to accounting for every neuron. That would indeed create an infinite regression, but that’s obviously not how the brain works. If you were simulating a brain, the system in which it ran would store the information about all the neurons, not the virtual brain itself.
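Here is a toy sketch of where that bookkeeping actually lives, with a made-up leaky-integrator “neuron” standing in for a real model. All of the record-keeping sits in the host program’s memory, outside the simulated brain, so nothing in the model has to account for itself:

```python
# Toy illustration: the simulator (host program) holds neuron state and logs;
# the simulated brain contains no inventory of itself, so no regression arises.
from dataclasses import dataclass, field

@dataclass
class Neuron:
    v: float = -65.0                             # membrane potential (mV), the neuron's own state
    inputs: list = field(default_factory=list)   # indices of presynaptic neurons

class Simulator:
    """Host-side container for every neuron's state plus any recorded history."""
    def __init__(self, n):
        self.neurons = [Neuron() for _ in range(n)]  # lives in host memory
        self.log = []                                # observations, also host-side

    def step(self, dt=0.1):
        for cell in self.neurons:
            drive = sum(self.neurons[i].v for i in cell.inputs) * 0.001
            cell.v += dt * (-(cell.v + 65.0) / 10.0 + drive)  # leak toward rest + input
        self.log.append([cell.v for cell in self.neurons])    # recording happens outside the model

sim = Simulator(n=1000)
for _ in range(100):
    sim.step()
print(f"{len(sim.neurons)} model neurons, {len(sim.log)} recorded time steps")
```

The model brain is the list of neurons and their interactions; the logs and storage belong to the machine running the simulation, exactly as a lab notebook belongs to the experimenter and not to the rat.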

[Graph made using http://cheezburger.com/FlashBuilder/GraphJam]

If someone wished to make a really strong argument against the future possibility of brain simulation, the route to take is data acquisition. The author of the blog post being rebutted here touches on this with his mention of snowball effects. Our models are only as informed as we are, and the brain constantly revolts against being imaged well. Sure, we have fMRI, EEG, ERP, PET, SPECT, and other recording techniques at our disposal, but the fact of the matter is that the brain’s activities are so dynamic, multitudinous, and difficult to scan that acquiring such data comprehensively and in real time is pretty daunting. We don’t yet have the device(s) necessary to accomplish that task.

There is another way, however. Genetics is as much an enigma as neuroscience, but if it advances more rapidly, simulating the ground-up genetic construction of the human body in a virtual setting would presumably yield a viable, accurate brain. Such a brain would be built from the same encoding ours are; the trick is ensuring the accuracy of the algorithms describing genetic activity. Since genetics, as a field, isn’t fraught with quite the same investigative quagmires as neuroscience, computational models may flourish sooner in that arena.

