The Neuroscience of SOMA

Cue Christina Aguilera singing “Reflection” from Mulan

In Frictional Games’ sci-fi horror Soma, players assume control of Simon Jarrett, a man who unwittingly assumes control of a body not originally his own.

Simon suffers brain hemorrhaging as a result of a car accident. He signs up for an experimental scan where his entire brain is digitally replicated, allowing for risk-free simulations of potential treatments. When a successful treatment isn’t found, Simon dies, but the copy of his mind remains on file.

Players are then transported to the year 2104, where Simon’s replica awakes in a new body to find himself at the bottom of the ocean in a research facility called PATHOS-II shortly after a comet irradiated Earth’s surface and wiped out nearly the entire population.

And you thought waking up in an unfamiliar bed after a night of debauchery was bad.

Uploading the brain


I think, therefore I am

The notion of portable consciousness has been popular in science fiction ever since the advent of computing; Arthur C. Clarke and Isaac Asimov played around with such ideas as early as the 1950s in stories like The City and the Stars and The Last Question. Television followed suit in select episodes of Doctor Who and Star Trek. Tron, Avatar, Dollhouse, and Transcendence all point to a persisting cultural fascination with the transference of a mind from one container to another.

But how fictional is this science fiction? With the launch of the BRAIN Initiative, comprehensive maps of the human brain are looking steadily more feasible. Consequently, we have steadily better ideas of what those maps might look like.

Neuroscientists call brain maps “connectomes”: models of every connection between every neuron in a brain. As Sebastian Seung contends in his TED Talk, the things that make us who we are—our personalities, our memories—are functions of the differences between individual connectomes. Think of it like a mental fingerprint.

In Soma, an imaging device called the Neurograph Nakajima produces digital copies of minds after taking a “photograph” (or neurograph, as the game calls it) of a person’s brain. This, presumably, is the equivalent of a connectome, including information about individual neurons’ metabolic and electrical properties, their synaptic densities (synapses being the sites where neurons release neurotransmitters), and a whole plethora of other information not necessarily implicit in a circuit diagram of neural pathways.

Brain imaging

So how do those of us in the nonfictional world get that kind of immensely high-resolution information? One significant advance in brain imaging has come in the form of optogenetics, a method that uses light to do things like induce sleep, retrieve seemingly lost memories, and map neural circuits. The method relies on viral delivery of genetic modifiers to populations of neurons, rendering them sensitive to light—certainly relevant to a game whose scientific dogma is based on taking neural photographs.

The optogenetic approach to brain imaging, however, breaks down a little in the context of Soma’s concept of neurography. Our protagonist is instructed to take a “tracer fluid” on the day of his scan, a detail that a) suggests the radioactive tracers required for imaging techniques like PET scans and b) makes no mention of the viral vectors required for optogenetic mapping, nor accounts for the time they take to have an effect.

First of all, PET scans are considered rather outdated when it comes to imaging brain hemorrhage, with MRI scans vastly preferred. Secondly, MRI has the benefit of not relying on radioactive tracers at all. As such, it’s unlikely that Simon would have to take a tracer fluid prior to his brain scan.

As to the possibility of optogenetics, the time frames described in Soma don’t hold up. The game isn’t entirely consistent with its dates (contradictory sections list Simon as being born in 1988 and 1989), but it claims Simon Jarrett sustained his brain trauma on April 9, 2015. According to Simon’s email correspondence, his scan takes place on May 2, 2015. Optogenetic viral vectors take 4–5 weeks to be incorporated into neurons; there simply wouldn’t be enough time for the technique to work.

We can pretend, however, that the dates in the game are fuzzy. Alternatively, maybe Simon took a drug that accelerated his neural metabolism, thereby incorporating genetic modifiers sooner. So, the question is:

Are we able to take pictures of the whole brain using optogenetics?

At present, the short answer is “no”. Optogenetics and innumerable other high-fidelity imaging techniques (e.g. expansion microscopy and robotic whole-cell patch clamp electrophysiology, just to give you a couple thrilling topics for your next kegger) have tons of potential, but scientists have yet to develop the technologies to the point needed to comprehensively image an entire brain.

One problem is the sheer level of detail needed to get an accurate picture of the brain. You need data on the firing habits of individual neurons, their molecular contents, their connections—as close to everything as you can get. Any discrepancies could be significant enough to produce snowball effects that dramatically hamper the fidelity of a mind model. To make matters worse, getting much of this information is simply too invasive; you can’t stick electrodes and micropipettes into all 86 billion neurons in the human brain (not to mention the other 85 billion non-neuronal brain cells that most neuroscientists consider valuable to creating faithful representations of the brain’s function).

However, this need for hyper-accuracy could potentially be relieved by black boxing the brain. Which brings us to…

Brain storage


Taking up space

Researchers are working on an enterprise called the Blue Brain Project. It’s a computer simulation of 30,000 rat neurons. Your computer probably has two or four cores. The simulation runs on 65,536.

If the relationship between cores and neurons is linearly proportional (and I’ll bet you all 12 of my brain cells that it’s far steeper), that means that 85 billion virtual human neurons would run on 186 billion cores. I’m not going to do the math on how many cubic miles that takes up.
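
Taking the naive linear-scaling bet at face value, the back-of-the-envelope arithmetic looks like this (using 65,536, i.e. 2^16, as the Blue Gene core count):

```python
# Back-of-the-envelope scaling, naively assuming cores-per-neuron stays
# constant as the simulation grows (it almost certainly doesn't).
BLUE_BRAIN_NEURONS = 30_000
BLUE_BRAIN_CORES = 65_536        # 2**16
HUMAN_NEURONS = 85_000_000_000   # ~85 billion

cores_per_neuron = BLUE_BRAIN_CORES / BLUE_BRAIN_NEURONS  # ~2.18
cores_needed = HUMAN_NEURONS * cores_per_neuron

print(f"{cores_needed / 1e9:.0f} billion cores")  # → 186 billion cores
```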

In Soma, however, the complete contents of a brain fit neatly in the palm of one’s hand on a device called a cortex chip (take that, 14-nanometer IBM node chip!). Granted, by 2104, we may refine the Pocket Brain for commercial use, but the feat is nigh impossible now.

Black boxes

You can save a lot of time and effort by simplifying complex systems. That’s why complex systems are sometimes treated as black boxes: reductions of internal workings to nothing but input and output. You know when you eat Chipotle that you’ll eventually end up in the bathroom without necessarily knowing how your digestive tract works. That’s a black box for you.

In neuroscience, black boxing is often too reductive; when B.F. Skinner applied the idea to psychology, he was unable to explain the human affinity for language, and his ideas are now largely obsolete.

However, the human brain might be compressible (so to speak). After all, the recipe for the entire human body is stored in 3 billion base pairs of DNA, all of which tidily fit onto about two CDs when represented as two-bit combinations (what’s hard is telling a computer how to interpret that information).
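
The CD arithmetic checks out, assuming two bits per base and a standard 700 MB disc:

```python
# Two bits per base (A, C, G, T → 00, 01, 10, 11), 700 MB per CD.
BASE_PAIRS = 3_000_000_000
BITS_PER_BASE = 2
CD_CAPACITY_MB = 700

genome_mb = BASE_PAIRS * BITS_PER_BASE / 8 / 1_000_000  # bits → bytes → MB
cds_needed = -(-genome_mb // CD_CAPACITY_MB)            # ceiling division

print(f"{genome_mb:.0f} MB → {cds_needed:.0f} CDs")  # → 750 MB → 2 CDs
```

750 MB spills just over one 700 MB disc, hence “about two CDs”.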

In computer science, compression comes in two flavors: lossless and lossy. Audiophiles the world over will tell you that nothing can compete with lossless, bulky sound filetypes like .FLAC and .ALAC. Others are perfectly satisfied with lossy .MP3s, which save disk space at the cost of sound quality that some argue is negligible. By the same token, once you understand how a human brain works, you may in turn know how much you can whittle down its connectome before the differences become noticeable.
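
A minimal sketch of the two flavors, using Python’s zlib for the lossless case and crude bit-truncation as a stand-in for lossy encoding (the “signal” here is invented toy data, not real audio):

```python
import zlib

# A toy "audio" signal: one byte per sample, a repeating ramp.
signal = bytes(range(0, 256, 2)) * 50   # 6,400 bytes

# Lossless: zlib round-trips to the exact original bytes.
compressed = zlib.compress(signal)
assert zlib.decompress(compressed) == signal  # bit-for-bit identical

# Lossy (crudely): throw away the low 4 bits of each sample.
lossy = bytes(b & 0xF0 for b in signal)
# The quantized signal compresses at least as well, but the
# discarded detail is gone for good.
assert len(zlib.compress(lossy)) <= len(compressed)
assert lossy != signal

print(len(signal), len(compressed), len(zlib.compress(lossy)))
```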


In Soma, PATHOS-II has its very own A.I. system called the Warden Unit (WAU), whose primary directives are to maintain the research facility and protect the lives of its workers. After the comet hits Earth, circumstances become dire and the WAU does what it can to preserve “life” by any means possible: uploading cortex chips to robots, hooking up essentially brain-dead human beings to life support, reanimating dead human beings with a substance called “structure gel”—all with zombie-like results hardly resembling what anyone would consider lossless. One could view this as a form of excessive black boxing; obviously, where the benchmarks lie for how much of a mind and body can be damaged before they are forfeit is a nuanced question.

Part human, part machine

“All those simplistic minds we’ve run into. Just reviving a dead person doesn’t seem to work that well. A robot body seems to make people a bit… unreliable. You are the best of both worlds. A sound mind in a sound body.”
– Catherine Chun, Soma


21st century digital boy

When Simon “befriends” (it’s complicated) the digital copy of a researcher named Catherine Chun during his adventures, she informs him that he’s a 100-year-old cortex chip mounted on the dead body of a woman named Imogen Reed.

So basically, he’s a cyborg.

Simon, however, seems to be anomalous in his success as an apparently healthy, functional cyborg. Nearly all other part-human, part-machine entities on PATHOS-II are deranged, dysfunctional, or just flat-out evil. Don’t let Alex Murphy fool you; it’s hard to get cyborgs right.

Brain-computer interfaces

The key ingredient in the construction of a cyborg is a brain-computer interface, or BCI. BCIs transfer information between neural and machine media, enabling control of robotic limbs and compensation for damaged senses (e.g. cochlear implants and bionic eyes).

A mainstay of research conducted in Soma’s PATHOS-II setting involves the use of “pilot seats” to control robots performing tasks too dangerous for direct human contact. While in a pilot seat, a human controller can remotely operate machinery using his or her thoughts alone. Pilot seats are very much BCIs.

The cool thing is, we actually have pilot seats now. Using an electrode cap and a wee bit of training, folks over at the University of Minnesota can control quadcopters from the comfort of a bean bag chair. They simply think about moving the flying robot, and the signals picked up by the electrode cap are transmitted to the machine, changing its flight patterns accordingly. Admittedly, signals across the scalp are dampened, so the system isn’t perfect, but it’s impressive for a technique that only requires you to wear a weird hat.

The old switcheroo

The line of communication from brain to computer isn’t necessarily unidirectional; anyone who has seen Spider-Man 2 knows the importance of a working neural inhibitor. Obviously the prospect of being controlled by a computer is a bit unsettling, but technically, things as tame as cochlear implants represent information traveling from computer to brain. In Soma, this kind of information exchange is certainly more extreme—the WAU uses structure gel to control some pretty hairy characters—but scientists have actually used computers to control neural tissue in ways far beyond cochlear implants (and it’s pretty darn awesome).

A hybrid robot, or “hybrot”, is like a relay for a pair of BCIs. Cultured neurons are placed on a silicon chip, thus interfacing with the robot’s motility components. They also receive input from light sensors and relay it to a computer, which can relay movement commands in response. And while it’s just a small puddle of 1,000 rat neurons essentially acting as microprocessors on a robot, the setup certainly analogizes the WAU issuing commands to the mechanically augmented undead quite well.

Plug and play

One technical issue with Soma’s portrayal of Simon’s “chip on the new block” BCI situation has to do with how the brain and body interact. The chief way, of course, is when blood delivers fuel to the brain in the form of glucose and oxygen. More important, however, is how a normal biological brain can influence the body via the blood by stimulating the release of hormones, notably from the pituitary gland (which hangs from the base of the brain). Pituitary hormones help regulate blood pressure, body temperature, and pain relief: all of which are pretty critical for someone running around in the dark being chased by monsters.

However, throughout most of Soma, Simon doesn’t have a normal biological brain. He has a cortex chip, which from my understanding isn’t capable of squirting out adrenocorticotropic hormone. Without adrenocorticotropic hormone, the brain can’t signal the adrenal glands to respond to situations the brain has deemed fearful. And without the adrenal response, which is to produce cortisol and adrenaline, the body won’t experience stress in the form of an elevated heartbeat. And given that Soma frequently uses a racing heartbeat for diegetic ambience in scary settings, it seems that Simon’s cortex chip is somehow able to trigger the release of stress hormones without the presence of the gland needed to do so.

Soma, however, has a hand-waving solution to this problem. Much like Star Trek and its “red matter” (Latin for “plot device”), Soma features a borderline omnipotent, quasi-intelligent substance called structure gel, which is described as a “cross-linked gel with aligned graphene in a polyunsaturated matrix” that can be encoded with instructions from the WAU. Structure gel can repair damaged electronic, mechanical, and biological things, though often with questionable results. Since Simon’s cortex chip is connected to Imogen Reed’s body with structure gel, the magical goo may be able to detect a lack of normal glandular activity and compensate in its magical ways, either as programmed by the WAU or otherwise.

Brains in vats


Deus deceptor

Back in the fourth century BC, Socrates slung some shade with his Analogy of the Sun and declared that there was no way to prove that our perception of the world reflected reality. He probably wasn’t the first to come up with the idea, but since Plato took the time to write it down, we’ll pretend he was.

Descartes later came up with the notion of an “evil demon” that fools the senses into perceiving an illusory reality. Of course if you’re like me and were born after 1650, you first came across this brain-in-a-vat concept when you saw The Matrix. “If you’re talking about what you can feel, what you can smell, what you can taste and see, then ‘real’ is simply electrical signals interpreted by your brain,” says Morpheus.

Soma opens with a related quote from Philip K. Dick:

“Reality is that which, when you stop believing in it, doesn’t go away.”

Philip K. Dick evidently thought falling trees still made noise when no one was around to hear it.

Living in a virtual world

If you’ve ever played a video game whose world you enjoyed so much that you wanted to actually “be” there (Soma definitely isn’t one of those games), please know that the Oculus Rift isn’t the final frontier. The next level of Smell-O-Vision will be more than possible if you decide to become an animat.

Neurally controlled animats are similar to hybrots, except that none of their sensory experiences come from their physical surroundings. In fact, in every sense, all inputs processed by an animat are the work of Descartes’s evil demon, except that in this case the evil demon is a morally ambivalent computer. Cultured rat neurons are placed in a multi-electrode array, and the array is connected to a computer via both sensory input channels and motor outputs. The virtual world inside the computer generates virtual sense data (transformed into patterned electrical activity) for the neurons to interpret, and lets them move a virtual representation of their “bodies” through that world.


In Soma, the WAU conducts an experiment called the Vivarium in what the PATHOS-II researchers interpret as an attempt to create a consciousness-oriented option for the survival of human life. The Vivarium is discovered when Imogen Reed (whose deceased body later becomes Simon’s “vat”) witnesses the WAU presenting a virtual version of the room she is in on a screen. The WAU gradually adds features to the simulation, culminating in a copy of Imogen herself lying dead on the floor. Imogen gets freaked out by the WAU’s experiment, but Catherine Chun sees potential.

Catherine’s solution to the problem of human extinction in Soma is an advancement of the Vivarium idea: load copies of human minds into a reality simulation called the ARK and shoot it into space away from the harmful radiation on Earth. A small piece of human consciousness then lives happily ever after in a virtual utopia. It might sound a tad dull, but it’s probably better than living at the bottom of the ocean with a bunch of demented cybernetic zombies.
In the context of animats, if you simulate their neurons in a virtual world, then you have a prototypical ARK. Scientists have done this to simulate rats navigating a memory test called the Morris Water Maze. So to say that the ARK is virtually impossible would be literally incorrect.


“Whoa, Dad! You can’t, like, endow a creature with sentience and then rip it away!”
– Summer Smith, Rick and Morty


Please, consider the robots

In “Consider the Lobster”, David Foster Wallace says:

“There happen to be two main criteria that most ethicists agree on for determining whether a living creature has the capacity to suffer… it takes a lot of intellectual gymnastics and behaviorist hairsplitting not to see struggling, thrashing, and lid-clattering as… pain-behavior.”

After waking up on the PATHOS-II, one of Simon’s first interactions is with a damaged robot who says he’s human. Simon is skeptical, but when he removes an auxiliary power cable from the robot, it begins screaming.

It takes a lot of intellectual gymnastics to not see a damaged, screaming robot as pain-behavior.

Suffice it to say, Soma doesn’t shy away from ethical conundrums. When Simon needs codes that only a dead man named Brandon Wan knew, Catherine suggests he load a simulation of Brandon into a virtual world and wheedle the codes out of him. Each time the virtual Brandon realizes he’s in a simulation and starts getting distressed, Simon and Catherine start over, effectively killing a whole series of uncooperative Brandons. Simon is openly uncomfortable with the idea, but Catherine argues it’s more humane to dismantle any simulation that is driven to a point of mental anguish. Eventually Catherine generates (and puppets) a simulation of Brandon’s girlfriend, deceiving him into thinking he’s in the real world and getting him to divulge the codes. As politically charged interrogation techniques go, this one’s newsworthy.

This ability to simply switch on a person’s entire conscious experience at will, manipulate it, and switch it back off would be just one of many reasons simulations of the human mind will be difficult to produce. Even if all technical obstacles are surmounted, we have to deal with the fact that to simulate a human mind is to create an entity identical to a biological human being in its capacity for thought and feeling. That means that every simulation would have rights. And things will be messy.
In a bleak, irradiated 2104, neuroethics will likely have taken a backseat to survival. But in 2015, when Simon gets scanned, a technology that could copy a person’s entire brain and run potentially fatal tests on it would never get approved unless people were simply unaware of how true-to-life their copies were. Even if a patient consented to have a copy of his or her brain used for any means, the decree would be tantamount to slavery, denying all simulations the opportunity for self-actualization.

To wrap it all up:

Soma’s science is pretty solid on many fronts, and thought-provoking to say the least (as if the 3,000+ words I just wrote on the topic didn’t convey my opinion on that matter). For a game initially set in 2015, it’s a tad optimistic, but when technologies like optogenetics, brain-computer interfaces, and neurally-controlled animats actually exist, I’d say optimism is warranted.

All images from Soma and Soma – Transmissions.

Applying the Logogen Model to the West-Gate of Moria

Let’s talk about The Lord of the Rings. Specifically, let’s talk about the scene in front of the West-Gate of Moria from The Fellowship of the Ring.

Maybe you’ve read the book, maybe you’ve seen the movie. Maybe you’ve done both. Either way, you may recall that Gandalf has to rack his brains a bit before figuring out what the West-Gate’s “open sesame” is. His only clue is the Fëanorian inscription at the top of the door, which reads:

“Ennyn Durin Aran Moria: pedo mellon a minno.”

which, in the mode of Beleriand, translates to:

“The Doors of Durin Lord of Moria: [speak/say] friend and enter.”

Note the hinge of Gandalf’s problem: he interprets the script as saying “Speak friend and enter” (which, in English and Tolkien’s Common Tongue vernacular, should take two commas around “friend”). Since “speak” is a traditionally intransitive verb, it makes little sense for it to take the object “friend”, and so the word “friend” appears to be a direct address. Eventually, one of the hobbits (Merry in the book, Frodo in the movie) figures out that the Sindarin word Gandalf interprets as “speak” may actually be a more general term for “speak/talk/say/utter/etc.”, and have both transitive and intransitive forms. Thus, “pedo mellon a minno” can be interpreted as “say ‘friend’ and enter”.

Let’s shift gears now to logogens. John Morton developed the logogen model of word recognition in 1969 to try to explain how human beings recognize words. Notably, his ideas apply to wordstringslikethisone, from which readers can extract words despite the absence of spaces. For example, how do our networks of activation respond to thiswordstringhere as compared with djdjdthisonejdjdj or this: oikansjdwealk? The logogen model essentially proposes that words are tagged with various elements: their sound, orthographic appearance, constituent phonemes, etc. When these elements enter a neural network via the senses, they produce activational effects that eventually allow a word to reach threshold and get recalled. Think of it as a game of charades; in order to get players to guess a particular word, you have to provide enough “tags” to narrow their search space until they figure the word out.
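
As a toy illustration of the charades analogy (the lexicon, tags, and threshold below are all invented; this is not Morton’s actual formulation):

```python
# A toy logogen system: each word accumulates activation from matching
# feature "tags" and is recalled once it crosses a threshold. The lexicon,
# tags, and threshold are invented for illustration.
LEXICON = {
    "speak": {"verb", "intransitive", "utterance"},
    "say": {"verb", "transitive", "utterance"},
    "friend": {"noun", "person"},
}
THRESHOLD = 2  # how many matching tags it takes for a word to "fire"

def recognize(evidence):
    """Return every word whose tag overlap with the evidence reaches threshold."""
    return [word for word, tags in LEXICON.items()
            if len(tags & evidence) >= THRESHOLD]

# Context supplies the cues: a verb that takes an object.
print(recognize({"verb", "transitive"}))  # → ['say']
```

In the same spirit, a cue set favoring intransitive usage would push “speak” over threshold instead; context is just another source of tags.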

Going back to the West-Gate of Moria, we can see that contextual effects, such as the lack of punctuation after pedo (“speak/say”), can serve as logogens to cue activation of a transitive form of speaking (“saying”) that would take mellon, or friend, as an object. Well reasoned, wee little hobbit folk! Of course, it may be a convention of Elvish languages not to use the same punctuation as the Common Tongue, but as Tolkien uses Elvish (anywhere I’ve seen it, anyways), his word order and punctuation are the same as in English. So the hobbits may have assumed correctly by luck or by intuition.

Skynet: An Artificially Unintelligent System

When it comes to evil, self-aware robots in fiction, a Skynet reference is obvious (if not flatly overplayed). Let us not mistake self-awareness for cleverness, however; Skynet may have recognized humans as threats to its existence, but it fails to effectively terminate them. Therefore, it seems inappropriate to dub Skynet an artificially intelligent system; rather, it exemplifies AU: artificial unintelligence.


To elaborate, let’s start with the very basics of Skynet’s failures in the Terminator universe. After gaining self-awareness in 1997, the system fired missiles at Russia, resulting in the death of over 3 billion humans by virtue of MAD. In 1997, there were just under 6 billion people on Earth, meaning Skynet’s play had over a 50% success rate. That’s excellent! Now all Skynet needs to do is continue firing missiles at remaining human civilizations, and use bio-warfare tactics to finish off whatever the missiles miss.

But no. Clearly the way to finish off the rest of humanity is to make robotic human doppelgangers. This means scraping together the resources needed to clone or synthesize organic materials for some skin, blood, sweat glands, and other human traits, in addition to all of the metal and research needed to produce a functional robot. Somehow, this exhaustive feat of engineering is the optimal strategy, rather than just synthesizing some virus or selecting for a particularly virulent, highly antibiotic-resistant strain of bacteria and releasing it upon the world.

Virulent strain of influenza? $200. Cyborg hunter? $2,000,000. Artificial systems caring about making something flashy for the sake of a story's aesthetics? Priceless.

But of course, we know Skynet didn’t stop with the T-600 through T-800 series of Terminators. It came up with pure super-science: the “poly-alloy liquid metal” that basically turns any Terminator into Majin Buu.

Too bad the T-1000 wasn't designed to absorb people, then it could gain all of John Connor's abilities in addition to removing a critical threat.

It’s impossible to say how much outlandish R&D went into developing the T-1000, but it’s a good wager to say designing a bio-weapon would be far easier. This scenario is a bit like designing an atomic bomb to kill a fly when you could just make a fly swatter (or better yet, just use your hand). It’s also redolent of that story about the Russians using a pencil in space while Americans put millions of dollars into designing a zero-gravity space pen (which turned out to be a myth).

“Unintelligence” of this sort, of course, is perfectly natural. Human beings, too, find scads of overly complex, costly solutions to simple problems all the time. We like to call this overthinking, and, ironically, it can be characterized as the sort of poor reasoning that results from abounding intelligence. As any gamer knows (especially when it comes to puzzle games), functional fixedness (not to be confused with analysis paralysis) can become a serious obstacle to progress, or lead to a misguided, impractical approach to a simple puzzle. In the case of Skynet, the computer system may have become fixated on making Terminators early on as a result of bounded rationality or some other source of fallacious reasoning (maybe Skynet’s designers failed to prime its warfare instincts with examples of biowarfare throughout history). Rather than indulging new, simpler solutions to the problem of surviving humans, Skynet opted for a doppelganger strategy, perhaps drawing from military tactics of WWII (consider the impersonation of Polish civilians by German Brandenburgers).

In any case, the Terminator approach was kind of a dumb idea, but by golly does it make for some good visuals. Contagion certainly can’t compete, what with its R0 values and diagrams of proteins. Audiences are just picky like that.

Stroop Da Whoop

You see that color in the Shoop da Whoop mouth? Name it.

It doesn’t take a neuroscientist to recognize that specificity comes into play when reporting the “correct” color inside the mouth. Do we mean the color signified by the meaning of the word “BLUE” or the actual color of the letters making up the word? A phenomenon known as the Stroop Effect emerges when this issue is tested as a cognitive science experiment; when asked to report the color of the letters, experimental subjects tend to take longer when the color of the word doesn’t match its meaning. Basically, through competitive top-down networks of neurons, the brain takes a little extra time resolving the simultaneous signaling of pathways that activate color-related neuronal clusters. One of these pathways relates visual information about color (without relation to words) and the other relates semantic information about what words mean: in this case, the color blue.
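
For the curious, the stimulus design behind a Stroop experiment can be sketched in a few lines (the color list and trial encoding here are invented for illustration):

```python
import itertools

# Build a minimal Stroop stimulus set: every word/ink pairing, labeled
# congruent (word matches ink) or incongruent. Colors are arbitrary.
COLORS = ["red", "blue", "green"]

trials = [
    {"word": word, "ink": ink, "congruent": word == ink}
    for word, ink in itertools.product(COLORS, COLORS)
]

# The Stroop prediction: naming the ink takes longer on incongruent
# trials (the word "blue" printed in red ink) than on congruent ones.
incongruent = [t for t in trials if not t["congruent"]]
print(len(trials), len(incongruent))  # → 9 6
```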

Let’s switch gears for a moment. You’re probably familiar with synesthesia, a condition wherein perceptual sensory modalities and/or semantic networks are switched, linked, or otherwise bizarrely experienced. In the condition, there are stimuli called inducers and resultant synesthetic experiences known as concurrents. A number of famous musicians like Duke Ellington, Franz Liszt, and Olivier Messiaen were synesthetes, and reportedly experienced music-generated perception of colors, which serves as an excellent example of an inducer→concurrent relation. Another example is Daniel Tammet, an autistic savant to whom numbers are perceptually represented as shapes, which lends him extraordinary arithmetic abilities. Synesthesia is even implicated in at least one study as the underlying mechanism behind ordinal linguistic personification, a condition in which personalities are concurrents to ordered sequences, like days of the week.

Technologies, art, and so on enable forms of “artificial” synesthesia. Any medium which relies on the simultaneous presentation of “unnatural” multimodal stimuli hearkens to synesthesia: from Carmen to Fantasia to the light organ and even those animal-themed alphabets we are so accustomed to seeing in elementary schools. The letter ‘Z’ must be forever entangled with semantic networks devoted to zebras across the United States.

But what is meant by an “unnatural” combination of sensory modalities, anyhow? ‘Z’ really is for zebra; it’s not a spontaneous, unbidden association of concepts. Trees really do rustle. Cats meow. Brains form these associations on the basis of learning. But how does this differ from something like grapheme→color synesthesia, where individuals see colors as concurrents to letters? Letters, after all, must be learned, but the colors associated with them are not inherent qualities of letters. In synesthetes, such colors are only seen in the “mind’s eye”, as it is called; for whatever reason, as a concept or sensation is learned and perceived, it becomes inordinately associated with another. Association is a staple hinge of the neurological change underlying learning in all brains, so it can exhibit abnormalities like any other property of the brain. In this case, the abnormality exists in extravagant co-activation of networks without an external precedent. The association is formed largely on the basis of internal inclinations to strongly associate certain networks with others (since synesthesia is familial, it’s fair to call it an internal inclination).

So, let’s switch gears one more time before synthesizing (heh) the Stroop Effect with synesthesia. If you have not already relished the joy of watching Office Space, you should. It’s just one of many reasons 1999 was the best year ever. In any case, Office Space involves a bit o’ hypnotherapy, which is an odd, interesting means of altering “highly suggestible” people’s cognitive processes. It almost seems like it shouldn’t work, but it does.

In fact, according to some studies, hypnosis can completely disrupt synesthetic experiences in synesthetes as well as generate them in non-synesthetes, suggesting that disinhibition of associative pathways may influence synesthesia, not necessarily hyperconnectivity. And here’s the synthesis: the reaction time lag characteristic of the Stroop Effect, too, can be suspended via hypnosis.


"You're beginning to feel less Stroopy."

What’s most fascinating about these studies is the fact that they represent total top-down manipulation of associative networks normally considered unconscious. Daniel Tammet can’t help the fact that he perceives π as visually beautiful; it’s an unconscious, unbidden association. But in these studies, the competition between different pathways and the hyperconnectivity or hyper-disinhibition of networks is arrested through conscious means (certainly, being hypnotized is a different state of consciousness, to say the least, but it is a conscious state). To be able to meddle with such pathway connectivity is… it’s Bene Gesserit shenaniganry, that’s what it is. All we need to do now is convince highly suggestible individuals with HIV to quit having AIDS.

So, when I snap my fingers, you’re going to stop thinking about debauchery and rape when you hear Beethoven’s 9th. You’re also going to leave a nice and insightful comment on this post. And give me my stapler back. Aaaaand buy this shirt. Ooooooh *snap*

Phylomemy of the Synoptic Gospels

Warning: prepare for a hyperlink BONANZA.




phylomemy (noun):

1. the development or evolution of a particular group of memes.
2. the evolutionary history of a group of memes, especially as depicted in a family tree.

Say what you want about Richard Dawkins, but the meme idea he presented in The Selfish Gene is pure genius. Especially astute was his observation that memes mutate and compete in the same ways as genes, even if their modes of propagation differ.

Religion, the greatest meme in recorded history, is prone to the same indels and evolutionary branchings as genes. For example, consider the image below, which exudes a distinct sense of monophyly; relative to the book of Mark, Luke and Matthew exhibit orthology as well as adaptive radiation.

[Image: diagram of the overlap in material among Mark, Matthew, and Luke]

As shown above, while large portions of Mark exist in both Luke and Matthew, the same proportionality does not apply in reverse; Luke and Matthew contain considerable amounts of unique information not present in Mark. Mark, for instance, begins with Jesus’s baptism and makes no mention of his early life. Luke is the only gospel to contain the adoration of the shepherds at the Nativity of Jesus, and only Matthew mentions Herod’s “Massacre of the Innocents”.

Given these features of the synoptic gospels, most biblical scholars agree that Mark was probably the earliest (some also hypothesize that it is the most accurate) biblical rendition of Jesus’s life (a hypothesis known as Markan priority). Three hypothetical “lost” texts/oral traditions have also been proposed to explain unique material in Luke and Matthew; these are called Q, M, and L.

The Two-Source Hypothesis

The Two-Source Hypothesis posits that a lost document (called Q) contained the extra material found in Luke and Matthew. If this were indeed the case, Luke and Matthew are fraternally related as progeny unto two parents: Mark and Q. Using literalistic genetic analogies, alleles of Q got passed to Luke that were not passed to Matthew, and vice versa. The same is true of Mark, though to a lesser extent.

So here’s where things get complicated with regard to the gene-meme analogy. Depending on the source complexity of memes, they can be modeled via genetic or speciation frameworks. Operating within a purely Markan priority model, Luke and Matthew can be seen as derived offshoot species of a common ancestor: Mark. However, by incorporating Q, a species-level representation ceases to be perfectly analogous to traditional, branching cladogenesis. Instead, a form of reticular hybridization presents itself; members of different “species” hybridize to form conglomerate meme offspring: in this case, Luke and Matthew.
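If it helps to see the reticular shape of this, the parent-child relations under the Two-Source Hypothesis can be written as a tiny directed graph. This is just an illustrative sketch (the data structure and function names are my own, not anything from the scholarship):

```python
# Source relations under the Two-Source Hypothesis,
# stored as text -> set of parent texts.
PARENTS = {
    "Luke": {"Mark", "Q"},
    "Matthew": {"Mark", "Q"},
    "Mark": set(),  # earliest gospel under Markan priority
    "Q": set(),     # hypothetical lost sayings source
}

def shared_parents(a: str, b: str) -> set:
    """Common ancestry: the hybridization that makes Luke and Matthew siblings."""
    return PARENTS[a] & PARENTS[b]

print(sorted(shared_parents("Luke", "Matthew")))  # → ['Mark', 'Q']
```

Note that unlike a strictly branching cladogram, two nodes here feed into each child, which is exactly the reticular (net-like) pattern described above.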

The Four Document Hypothesis

[Image: diagram of the Four Document Hypothesis]

A second, more convoluted take on the phylomemy of Luke and Matthew is the Four Document Hypothesis, diagrammed above. Under this model, hypothetical L and M documents or oral traditions contribute unique elements to Luke and Matthew, respectively, in addition to the more common traits derived from Mark and Q. This hypothesis also permits wiggle room for an Antiochian Document and Document of Infancy, which would contribute respectively to Matthew and Luke via reticular hybridization (like everything else here, it seems). When it comes to memes, those that persevere appear to be the amalgamative ones.

Certainly, memes may often survive by hybridizing; however, no single gospel includes all the information contained within Mark, Matthew, and Luke (partly because incredible storytelling discontinuities and other inconsistencies would result). This is where adaptive radiation comes into play; speciation of the parent memes occurs as a result of adaptive necessity.

Back in the day, there wasn’t a single audience for Christian memes, but many, including Romans, Christians, Jews, and other groups. The Gospel of Mark and Gospel of Luke are thought to appeal to Christian audiences, with Mark serving as a baseline of educational tidbits and Luke as a sort of “expansion pack” that also frames Romans in a positive light. Matthew, by contrast, hearkens strongly to the Jewish cause, positioning Jesus as the prophesied Jewish Messiah and portraying his roots in a similar fashion to Moses’s (e.g. the “Massacre of the Innocents” mentioned earlier). The distillation of these three gospels from multiple and reticularly hybridizing parts, it would seem, is a function of broadened environmental adaptability, just as we see in evolutionary biology. The more audiences are appealed to, the more likely meme survival becomes.

Of course, nowadays, memes take on slightly different forms. Evolution is cool like that.


