Tag Archives: neuroscience

Stroop Da Whoop

found on: http://imgur.com/QXnwk

You see that color in the Shoop da Whoop mouth? Name it.

It doesn’t take a neuroscientist to recognize that specificity comes into play when reporting the “correct” color inside the mouth. Do we mean the color signified by the meaning of the word “BLUE” or the actual color of the letters making up the word? A phenomenon known as the Stroop Effect emerges when this issue is tested as a cognitive science experiment: when asked to report the color of the letters, experimental subjects tend to take longer when the color of the word doesn’t match its meaning. Basically, through competitive top-down networks of neurons, the brain takes a little extra time resolving the simultaneous signaling of pathways that activate color-related neuronal clusters. One of these pathways carries visual information about color (without relation to words) and the other carries semantic information about what words mean– in this case, the color blue.
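If you want to picture that competition concretely, here is a toy race model (purely illustrative; every parameter below is invented, and this is not any particular published model): two noisy accumulators gather evidence toward a response, the word-reading pathway drifting faster because reading is the more automatic skill, and incongruent trials pay an extra conflict-resolution cost.

import random

# Toy "race" model of the Stroop task (illustrative only; all numbers are invented).
def trial(congruent, threshold=1.0, dt=0.001, noise=0.01,
          color_drift=2.0, word_drift=3.0, conflict_penalty=0.4):
    """Return a simulated reaction time in seconds for one Stroop trial."""
    color_evidence, word_evidence, t = 0.0, 0.0, 0.0
    while color_evidence < threshold:
        color_evidence += color_drift * dt + random.gauss(0.0, noise)
        word_evidence += word_drift * dt + random.gauss(0.0, noise)
        t += dt
    # If the irrelevant word pathway has already produced a conflicting answer,
    # top-down control needs extra time to suppress it.
    if not congruent and word_evidence >= threshold:
        t += conflict_penalty * random.random()
    return t

def mean_rt(congruent, n=2000):
    return sum(trial(congruent) for _ in range(n)) / n

print("congruent   mean RT: %.3f s" % mean_rt(True))
print("incongruent mean RT: %.3f s" % mean_rt(False))

Run it and the incongruent mean comes out a couple hundred milliseconds slower, which is the qualitative shape of the Stroop Effect.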

Let’s switch gears for a moment. You’re probably familiar with synesthesia, a condition wherein perceptual sensory modalities and/or semantic networks are switched, linked, or otherwise bizarrely experienced. In the condition, there are stimuli called inducers and resultant synesthetic experiences known as concurrents. A number of famous musicians like Duke Ellington, Franz Liszt, and Olivier Messiaen were synesthetes, and reportedly experienced music-generated perception of colors, which serves as an excellent example of an inducer→concurrent relation. Another example is Daniel Tammet, an autistic savant to whom numbers are perceptually represented as shapes, which lends him extraordinary arithmetic abilities. Synesthesia is even implicated in at least one study as the underlying mechanism behind ordinal linguistic personification, a condition in which personalities are concurrents to ordered sequences, like days of the week.

Technologies, art, and so on enable forms of “artificial” synesthesia. Any medium which relies on the simultaneous presentation of “unnatural” multimodal stimuli hearkens to synesthesia– from Carmen to Fantasia to the light organ and even those animal-themed alphabets we are so accustomed to seeing in elementary schools. The letter ‘Z’ must be forever entangled with semantic networks devoted to zebras across the United States.

found on: https://martygumblesworth.files.wordpress.com/2011/05/zisforzebra.png?w=300

So what is meant by “unnatural” combination of sensory modalities, anyhow? ‘Z’ really is for zebra; it’s not a spontaneous, unbidden association of concepts. Trees really do rustle. Cats meow. Brains form these associations on the basis of learning. But how does this differ from something like grapheme→color synesthesia, where individuals see colors as concurrents to letters? Letters, after all, must be learned, but the colors associated are not inherent qualities of letters. In synesthetes, such colors are only seen in the “mind’s eye”, as it is called; for whatever reason, as a concept or sensation is learned and perceived, it becomes inordinately associated with another. Association is a staple hinge of neurological change underlying learning in all brains, so it can exhibit abnormalities like any other property of the brain. In this case, the abnormality exists in extravagant co-activation of networks without an external precedent. The association is formed largely on the basis of internal inclinations to strongly associate certain networks with others (since synesthesia is familial, it’s fair to call it an internal inclination).

So, let’s switch gears one more time before synthesizing (heh) the Stroop Effect with synesthesia. If you have not already relished the joy of watching Office Space, you should. It’s just one of many reasons 1999 was the best year ever. In any case, Office Space involves a bit o’ hypnotherapy, which is an odd, interesting means of altering “highly suggestible” people’s cognitive processes. It almost seems like it shouldn’t work, but it does.

In fact, according to some studies, hypnosis can completely disrupt synesthetic experiences in synesthetes as well as generate them in non-synesthetes, suggesting that disinhibition of associative pathways may influence synesthesia, not necessarily hyperconnectivity. And here’s the synthesis: the reaction time lag characteristic of the Stroop Effect, too, can be suspended via hypnosis.

found on: https://i1.wp.com/animatedviews.com/wp-content/uploads/2007/10/junglebook-10.JPG

"You're beginning to feel less Stroopy."

What’s most fascinating about these studies is the fact that they represent total top-down manipulation of associative networks normally considered unconscious. Daniel Tammet can’t help the fact that he perceives π as visually beautiful; it’s an unconscious, unbidden association. But in these studies, the competition between different pathways and the hyperconnectivity or hyper-disinhibition of networks is arrested through conscious means (certainly, being hypnotized is a different state of consciousness, to say the least, but it is a conscious state). To be able to meddle with such pathway connectivity is… it’s Bene Gesserit shenaniganry, that’s what it is. All we need to do now is convince highly suggestible individuals with HIV to quit having AIDS.

So, when I snap my fingers, you’re going to stop thinking about debauchery and rape when you hear Beethoven’s 9th. You’re also going to leave a nice and insightful comment on this post. And give me my stapler back. Aaaaand buy this shirt. Ooooooh *snap*


Conventional Misconceptions: You Only Use 10% of Your Brain

As far as misguided sayings go, “You only use 10% of your brain” ranks highly. There’s no way to discern exactly what it means, and most interpretations fall victim to weird fallacies. For example, if the brain is a wholly material structure in an emergent deterministic framework (or even a fundamentally deterministic one, as may yet prove true of quantum mechanics), then “you” are your brain and body. Your perceived free will may deceive you into thinking that you control your body, but ultimately all actions you perform are predictable. Therefore, you cannot “use” yourself in any real sense; you merely are yourself. This “usage” of self is illusory.

But let’s set this argument aside for now. Let’s just assume that “you use ___% of your brain” is a phrase that indicates a percentage of brain activity. An upper bound is needed to give fractional activity, so if we assume that activity refers to firing rate, then the upper bound is every neuron in your brain firing at maximum speed. Not only would this result in ludicrous, possibly fatal seizing, but it would also be remarkably discordant, since different neurons have different firing rate maxima. Using 100% of your brain, in this situation, would be profoundly awful. Thus, “using only 10% of your brain” is simply safe and healthy, not a sign of unreached potential.
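To make that bookkeeping concrete, here is a throwaway back-of-envelope sketch (the firing-rate numbers are invented for illustration, not measurements): if “usage” means average firing rate divided by each neuron’s own maximum, a perfectly healthy brain scores in the low single digits, because most neurons idle at a few hertz while their ceilings sit far higher.

import random

# Back-of-envelope "percent of brain in use" if usage = firing rate / max rate.
# All numbers below are invented for illustration, not measurements.
random.seed(0)
n_neurons = 100_000
max_rates = [random.uniform(50, 300) for _ in range(n_neurons)]      # Hz ceilings vary by cell type
actual    = [random.expovariate(1 / 3.0) for _ in range(n_neurons)]  # sparse firing, ~3 Hz on average

usage = sum(a / m for a, m in zip(actual, max_rates)) / n_neurons
print("average 'usage': %.1f%% of theoretical maximum" % (100 * usage))
# Prints a small single-digit percentage -- and that's a brain working just fine.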

Here’s another possibility: maybe 10% brain usage refers to the percent positive change in the brain’s glucose consumption relative to an arbitrary standard, like baseline awake-state alpha rhythm-level glucose consumption. Already some issues arise here, because if we rely on an arbitrary standard, 10% brain usage depends on brain state. In addition, in order for this to make sense, we still need an upper bound. There are simply biological limits to how much glucose a neuron can process within a given time frame, so we can use those as our upper bound. However, as is the case with firing rate, a brain consuming 100% of possible glucose would be utterly dysfunctional. For a small example, you would be simultaneously trying to sit and stand. For a larger example, your prefrontal cortex would literally be overwhelmed by trying to think about everything you possibly could at once. In such a model, you should be thanking goodness that you only use 10% of your brain. And furthermore, at any given moment, you would be using some percentage below 100%, not a consistent 10%.

Since potential’s been brought up, maybe that’s what the saying refers to. In fact, a common variation on “You only use 10% of your brain” is “You only use 10% of your brain’s full potential”. The recent film Limitless attempts to wrangle with this concept, of course on the basis of total logical shenanigans. Whatever do we mean when we wish to remove the brain’s limits, as the film title suggests? Should our brain spontaneously stimulate growth of totally new neural regions to allow itself to be powered via photosynthesis? And if there is no limit to its capabilities, how can there be a fractional standard by which to measure brain usage?

Maybe what people mean is that we only use 10% of the brain’s full potential in terms of a theoretical plastic Hebbian maximum. That is, every circuit in your brain that facilitates some task does so optimally. However, given that brain plasticity is stimulus-dependent, one must practice the skills one wishes to become optimal at. Given time constraints, it would certainly be difficult to become fully proficient at every possible skill that exists. The show Dollhouse suggests in its “doll architecture” the artificial imposition of a Hebbian framework wherein all skills can be imposed onto an already optimized system. However, this is probably impossible because a) the brain is very unlikely to have enough space within the skull to accommodate such elaborated neural clusters, and b) an immediate transformation of long-term potentiation and other lengthy metabotropic processes that give rise to Hebbian learning is, quite frankly, implausible if not outright impossible (unless you’re in a simulated computer environment where variables can be changed hither-thither).

found on: https://i1.wp.com/www.dollhouse-online.net/gallery/albums/cap210/normal_img000995.jpg

Yeah, right.

The fact of the matter is, we use just about all of our brain just about all of the time. In fact, when you don’t call upon certain populations of neurons for long enough, they tend to die or get “reassigned” to work in a nearby network that is getting used so they aren’t wasting energy– the “use it or lose it” principle (there are notable exceptions, but that’s a separate discussion). Even while you sleep, your brain is doing scads of things: maintaining autonomic processes, keeping your limbs paralyzed, consolidating memories, and so on.

So, the next time you reassure yourself about a personal failure by thinking to yourself, “Well, I was only using 10% of my brain,” remember: it was really more like 100%.


This post about perceptual semantics dissembles its “seriousness” with broccoli pictures

One of the great human pastimes is disagreeing about things. You deem broccoli horrible? Well, I think it ROCKS. Broccoli rocks, folks–make no bones about it. It’s an intrinsically great food, possibly even a great humanitarian. And people who say otherwise are simply wrong.

Puns induce laughter about as much as broccoli rocks.

An error in reasoning presents itself here; this argument masquerades as an argument over broccoli, but it pertains to something else entirely. It’s a taste-based squabble in broccoli’s clothing. A difference in primary gustatory area associations hidden in a Trojan broccoli flower. What this argument really pertains to is subjective experience, and as soon as that becomes obvious, there can be no argument. You don’t have to agree to disagree; you instead agree that your perceptions differ from those of your broccoli-hating antagonist.

So, we’ve brought up the “S” word. Through a derisive lens, subjectivity appears the guiding star for people who avoid questioning viewpoints, changing their perspectives, and so on. Defining words becomes an especially muddled process; really staunch subjectivists refuse to do it. Anything can mean anything. You ask what kind of chairs a true subjectivist likes; he says, “Define ‘chair'”. You try, but he questions every word you use to define “chair”, creating a geometrically expanding tree diagram of attempted definitions. You argue that using words to question words is internally inconsistent, but you are denied again. And you never find out what kinds of chairs he likes!

It’s worse when you meander over to fuzzier words: “evil”, “consciousness”, “art”, and “love”, for instance. Words like these get to sit on pedestals so high they are literally hidden from view in the clouds. By extension, it’s hard to discern if there really is meaning sitting up on those pedestals at all, as opposed to a big foofaraw. And yet, these words get used constantly and–most importantly–they are understood.

Translation: BROCCOLI FOR THE BROCCOLI GOD.

Yeah, sure: people will quarrel over what art “really” is, but at the end of the day, the word exists. It exists in most languages, in fact. Same goes for love, consciousness, and evil. These words are cultural tropes, and they reflect massed perception across generations of human beings. These words, alongside all others, function like any categorical perceptions; they are neural designations that enable an intentional agent to formulate hypotheses about his or her environment. I love, therefore I ____________.

The blank, of course, is where things differ from individual to individual. It represents qualia, or the hypotheses people formulate on the basis of perception. And while there are minute differences from person to person, the mere fact that the word exists indicates that it has anthropological relevance and consistency.

Now we’re hitting the ‘P’ word pretty hard. So what is meant by perception, anyway? To illustrate perception in action, the McGurk Effect serves well. Click that hyperlink and watch the video, would you? Just a slovenly young gentleman saying “Da da, da da, da da” or perhaps “Ga ga, ga ga, ga ga”, right? Now close your eyes and play the video again (these instructions are making me feel like a failed magician, so please be awestruck by the video). Sounds… sheep-like, wouldn’t you say? Perhaps like the stuttered lyrics of The Beach Boys’ “Barbara Ann”? Well, that’s because the audio is “Ba ba, ba ba, ba ba”. Now you can happily watch the reactions of excited Japanese people to the McGurk Effect without spoilers.

This recurring broccoli is almost as invasive as a retrotransposon.

There are two interesting facets to the McGurk Effect:

1) Your visual processing stream influences how you perceive sound.

2) Assuming lack of blindness, deafness, feralness, etc., everyone’s visual processing streams influence how they perceive the sounds used in the McGurk Effect in the same way.

In general, brains are pretty much the same. Yeah, they’re plenty complicated. Yeah, no two brains are identical. But they’re built of the same building blocks, with the same receptors, the same transmitters, and the same layouts. γ-Aminobutyric acid inhibits receiving neurons. Purkinje cells populate the cerebellum. Betz cells are found projecting long axons to local circuit neurons in the spinal cord. The posterior superior temporal sulcus preferentially processes human bodily movements. The medial temporal region of the left hemisphere prefers non-human movement. Heschl’s gyrus activates during processing of semantic information. Unpleasant words activate right amygdala and auditory cortex, while pleasant words activate left frontal pole. And for people familiar with broccoli, there’s probably a very special cluster of neurons in the temporal lobe that devotes itself to firing for broccoli. That’s just how things organize themselves. And yeah, brains are plastic, but the mechanisms for that plasticity are the same across different brains, too.

Goodness, that’s a lot of links.

Anyway, the point: perception exists, and it shapes how we describe and think about things, but one can be objective about that subjectivity. After all, perception works pretty much the same way in everyone. There does not have to be a screen between people who perceive things differently. They’re both perceiving, after all. Sure, you can argue for perceiving perception differently, but then we’d just be stuck in a rut, wouldn’t we?

If you haven't read Le Petit Prince, then SHAME ON YOU.

Extending this notion of universal perceptual mechanics to semantic kerfuffles, we come back to the notion of definition. In computer programming, defining is as easy as 1, 2, 3.

1) >>> x=1; y=2; z=3;

2) >>> Wow, turns out it's easy as just 1, in fact.

3)   File "<stdin>", line 1
    Wow, turns out it's easy as just 1, in fact.
               ^
SyntaxError: invalid syntax

>>>

Easy as a nympho on scopolamine, no? Defining art in code, however, is a completely different ballgame. But that’s kind of the point of art, isn’t it? Questioning the very paradigms that conventionally define it? It’s almost like the word has more value when its definition is just out of reach (of course, that itself can be seen as a definition, but that’s a discussion for a separate blog post). But what about something like consciousness? What does that word mean, seriously?

Like art, consciousness gets regular updates and reinterpretations from those investigating it. Like any firmly established word, it has its cultural roots. Just about everyone has a basic sense of what consciousness is. However, as we learn more about what contributes to conscious experience, it becomes valuable to ask more and more exacting questions, questions that enable the building of an experimental paradigm. Questions like: What neural mechanisms give rise to shifting between conscious and unconscious states? Is linguistic confirmation a valid way to assess consciousness? How do attention and sense modalities contribute to awareness? And these inquiries barely scrape the surface of the leviathan that is the meaning of “consciousness”.

Anyway, the point: think about a chair. Take away fragments of that chair. When does it stop being a chair? Neurally, the answer is, “the object in question ceases to be a chair when the object no longer brings to threshold your neural cluster responsible for responding to objects it has learned to discern as chairs”. After all, there are no chairs out in the “real world”. Chair is simply the word we assign to objects perceived as, well, chairs. Now think about the word “is”. What is “is”? A singular third-person verb indicating equivalence? Really? That’s all you’ve got? Sometimes, you can be surprised by words. Sometimes, it’s fun to reduce things to the absence of meaning. Consider it a thought exercise.

Seriously though, broccoli is fantastic. That is the only absolute truth mankind can ever know.


Conventional Misconceptions: Infinite Regressions, Computing Power, and Brain Simulations

found on: https://i1.wp.com/www.labgrab.com/files/blue_brain_3.jpg

Some people are skeptical about the possibility of simulating a brain, ever. *cracks knuckles* It’s rebuttal time. And maybe this post is going to strike an aggressive chord, but point-by-point counterarguments are the only good counterarguments.

SO. Let’s roll.

“There’s no reason to think it will ever be possible to scan the human brain and create a functionally equivalent copy in software. Hanson is confused by the ease with which this sort of thing can be done with digital computers. He fails to grasp that the emulation of one computer by another is only possible because digital computers are the products of human designs, and are therefore inherently easier to emulate than natural systems.”

Barely out of the gate, we’re faced with a statement whose defense demands proving a negative. There’s no reason to think the brain can be simulated? “Well, prove that it’s impossible,” says the budding but overeager logician. Of course, proving the certainty of an impossibility is impossible (when you’re ignorant of a system’s parameters, that is). The problem with this series of assertions is that no evidence is offered to substantiate what appears to be a thesis. The claim “brains can never be digitally simulated” at least demands some follow-up evidence.

It is obvious from the following lines “Hanson… etc.” that this author intended to critique someone’s reasoning, not supply direct evidence for his topic sentence. There is simply an error in organization here; only after explaining Hanson’s alleged confusion is the time right to continue with a statement like “brains can’t be emulated on computers”. That way, you can conclude demi-reasonably that brains can’t be simulated because Hanson fails to understand the limitations of computing. After that, you can strengthen your contention against Hanson by adding citations. Of course, you’re still in the dark in terms of credibility, since what some random guy says about whether or not brains can be simulated doesn’t affect whether they actually can be.

Not only is this opening argument feebly constructed; it is presumptuous to boot. Hanson’s belief that brains can someday be simulated virtually isn’t necessarily the product of confusion over the ease of porting. This author is overemphasizing Hanson’s arguable misuse of the verb “port”, when in fact Hanson was probably using it in a general sense to indicate simulation. Hanson may just think the brain is simulate-able in the future because… well, because people have simulated parts of it already. And the level of accuracy to which these simulations resemble real brains can only improve as experimental neuroscience progresses.

But hey, there’s always a chance to redeem the arguments proposed:

“The word “port” doesn’t make any sense in this context because the human brain isn’t software and he’s not proposing to modify it. What [Hanson] means is that we’d emulate the human brain on a digital computer. But that doesn’t really work either. Emulation works because of a peculiar characteristic of digital computers: they were built by a human being based on a top-down specification that explicitly defines which details of their operation are important. The spec says exactly which aspects of the machine must be emulated and which aspects may be safely ignored. This matters because we don’t have anywhere close to enough hardware to model the physical characteristics of digital machines in detail. Rather, emulation involves re-implementing the mathematical model on which the original hardware was based. Because this model is mathematically precise, the original device can be perfectly replicated.”

The first couple declarations in this segment are fairly on point; emulating the brain in a digital framework isn’t quite the same as porting software from one operating system onto another. Hanson probably would be better off just using “emulation” as his go-to descriptor. However, again, Hanson probably means to imply simulation, not emulation. It’s also true that we currently lack the technology to fully simulate computers–that is, including their hardware aspects. But that’s currently. Still no evidence is provided suggesting that such technology won’t exist in the future. And on the other hand, both theory and technology are constantly advancing in a direction that strongly suggests that brain simulation will be within reach someday. Not now, but someday.

Now, let’s address the argument which states that top-down design simplifies emulation. Yes. Yes it does. Why this precludes emulating a brain remains a mystery, however. Certainly, top-down design enables an individual porting software to allocate resources in different ways according to functional goals (you want to make a toaster, there are about three hundred godzillion ways, but they will all toast bread, or toast toast… or something). However, the brain follows rules just like any system. Ever since Hodgkin and Huxley proposed their model of action potential propagation in 1952, flurries of biologically apt mathematical models have populated the field of theoretical neuroscience. These can all be implemented in virtual environs; the only constraint is processing power. And again, technology is constantly advancing processing power.
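For a sense of what “implemented in virtual environs” looks like in practice, here is a minimal sketch of a single Hodgkin-Huxley membrane patch integrated with forward Euler, using standard textbook squid-axon parameters (this is illustrative scratch code, not any particular simulator’s implementation):

from math import exp

# Minimal single-compartment Hodgkin-Huxley neuron, forward Euler integration.
# Standard squid-axon parameters (membrane potential in mV, time in ms).
C_m  = 1.0                     # membrane capacitance, uF/cm^2
g_Na, E_Na = 120.0, 50.0       # sodium conductance (mS/cm^2) and reversal (mV)
g_K,  E_K  = 36.0, -77.0       # potassium
g_L,  E_L  = 0.3, -54.387      # leak

# Voltage-dependent gating rate functions (1/ms)
a_m = lambda V: 0.1 * (V + 40.0) / (1.0 - exp(-(V + 40.0) / 10.0))
b_m = lambda V: 4.0 * exp(-(V + 65.0) / 18.0)
a_h = lambda V: 0.07 * exp(-(V + 65.0) / 20.0)
b_h = lambda V: 1.0 / (1.0 + exp(-(V + 35.0) / 10.0))
a_n = lambda V: 0.01 * (V + 55.0) / (1.0 - exp(-(V + 55.0) / 10.0))
b_n = lambda V: 0.125 * exp(-(V + 65.0) / 80.0)

V, m, h, n = -65.0, 0.05, 0.6, 0.32   # approximate resting state
dt, I_inj = 0.01, 10.0                # time step (ms), injected current (uA/cm^2)

spikes, above = 0, False
for step in range(int(100.0 / dt)):   # simulate 100 ms
    I_Na = g_Na * m**3 * h * (V - E_Na)
    I_K  = g_K  * n**4 * (V - E_K)
    I_L  = g_L  * (V - E_L)
    V += dt * (I_inj - I_Na - I_K - I_L) / C_m
    m += dt * (a_m(V) * (1.0 - m) - b_m(V) * m)
    h += dt * (a_h(V) * (1.0 - h) - b_h(V) * h)
    n += dt * (a_n(V) * (1.0 - n) - b_n(V) * n)
    if V > 0.0 and not above:          # crude spike detection at 0 mV crossing
        spikes += 1
    above = V > 0.0

print("spikes in 100 ms:", spikes)     # roughly half a dozen spikes at this drive

Drive it with enough current and the virtual membrane spikes repetitively, exactly the behavior the 1952 model was built to capture.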

Onward!

“You can’t emulate a natural system because natural systems don’t have designers, and therefore weren’t built to conform to any particular mathematical model. Modeling natural systems is much more difficult—indeed, so difficult that we use a different word, “simulation” to describe the process. Creating a simulation of a natural system inherently means means making judgment calls about which aspects of a physical system are the most important. And because there’s no underlying blueprint, these guesses are never perfect: it will always be necessary to leave out some details that affect the behavior of the overall system, which means that simulations are never more than approximately right.”

First of all, nature conforms to mathematical models all the time. What are the fundamental laws of physics if not mathematically expressible parameters with which nature complies? Certainly, much theory derives from flawed experimental and observational data and is thus fated never to be altogether “perfect”, but science self-corrects and therefore approaches reality ever more closely. And there’s plenty of data that can be excluded from many simulations; probability algorithms for quantum tunneling effects, for instance, aren’t exactly essential components of weather simulations. Same goes for brains. That’s a judgment call that can be made without expectation of significant error.

As mentioned before, processing power comes into play here.  Omissions of variables are often influenced by processing constraints; however, as processors improve, this issue will dwindle.

The “approximately right” phrase used above to connote inadequacy hardly does so; for instance, most protein simulations neglect the influence of gravity. And guess what, doing so doesn’t render the results of their walks inadequate. Adding in gravity algorithms would only make the simulation “more real”, not more valuable. Some variables simply are more important to describing the components of a natural phenomenon than others, and this is measurable. If there is no significant loss of accuracy when ignoring a particular set of rules, then those rules don’t need to be implemented algorithmically.

And finally:

“Scientists have been trying to simulate the weather for decades, but the vast improvements in computing power in recent decades have produced only modest improvements in our ability to predict the weather. This is because the natural world is much, much more complex than even our most powerful computers. The same is true of our brains. The brain has approximately 100 billion neurons. If each neuron were some kind of simple mathematical construct (in the sense that transistors can be modeled as logic gates) we could imagine computers powerful enough to simulate the brain within a decade or two. But each neuron is itself a complex biological system. I see no reason to think we’ll ever be able to reduce it to a mathematically tractable model. I have no doubt we’ll learn a lot from running computer simulations of neurons in the coming decades. But I see no reason to think these simulations will ever be accurate (or computationally efficient) enough to serve as the building blocks for full-brain emulation.”

This is pretty cogent, actually. Some evidence (though no exact statistics, but there aren’t many of those in this post either) is provided to suggest that technological advancements don’t contribute dramatically to improvements in simulatory accuracy. Also true is that logic gates cannot sufficiently describe neurons; indeed, they are highly electrochemically dynamic systems coming in myriads of different morphologies, all of which influence their behavior. Those facets of neurons are necessary components of a good simulation. However (the pretentious sibling of “but”), the author still sticks to his guns on the basis of “seeing no reason” to think otherwise.

Well, here are a few reasons:

  • BrainGate uses microelectrodes to research activational patterns of sensorimotor neurons. These findings are used to develop brain-computer interfaces (BCIs), which enable translation of stimuli into electrical schemes the brain can interpret as well as brain-control of prosthetic apparatuses. With electrical signals alone, these technologies are already good enough to restore functionality to impaired individuals. As the data accumulates, the degree to which functionality is restored will increase.
  • The Neurally-Controlled Animat is an organism that effectively exists in a virtual environment. Neural tissues arranged on a multi-electrode array respond to controlled stimuli while all their responses, plastic changes, and so on, are recorded in real time. In this way, we acquire large volumes of information on the dynamics of real individual neurons as well as neuron clusters within a digital framework.
  • Hippocampal prostheses are brain implants intended to replace damaged or deteriorating hippocampal tissue. They function using computational models of hippocampal function and have successfully replicated functionality in rats.
  • The Blue Brain Project, first of all, is awesome. Second of all, this enterprise has resulted in 10,000 neurons of a rat neocortical column being simulated to biological accuracy (admittedly, that accuracy isn’t specified). Needless to say, the vault of supercomputers behind this is gargantuan. But from the project’s inception in 2005, it only took until 2006 to get the whole column simulated.

When considering the progress and trajectory of current research in neurotechnology, including brain-computer interfaces, neuroprosthetics, hybrots, and computational neuroscience, you must at least acknowledge the possibility that a simulated brain is on the horizon.
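For a rough sense of the scale involved (every number below is an order-of-magnitude guess, not a measurement), a point-neuron model of a whole human brain pencils out to something in exascale territory: enormous, but the kind of enormous that computing has a habit of eventually reaching.

# Order-of-magnitude estimate of compute for a point-neuron whole-brain model.
# Every number below is a rough assumption, not a measurement.
neurons            = 8.6e10     # ~86 billion neurons
synapses_per       = 7e3        # average synapses per neuron (rough)
flops_per_syn_step = 10         # FLOPs to update one synapse per time step (guess)
flops_per_neu_step = 1e3        # FLOPs to update one neuron's membrane equations (guess)
steps_per_second   = 1e4        # 0.1 ms time steps, running in real time

flops_per_sec = steps_per_second * (neurons * flops_per_neu_step
                                    + neurons * synapses_per * flops_per_syn_step)
print("required: %.1e FLOP/s" % flops_per_sec)   # ~6e19, i.e. tens of exaFLOP/s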

Now, while still on topic, this may be a good time to point out a common fallacy attributed to comprehensive brain simulations. Claims flit about here and there proposing that simulation of a whole brain is impossible due to infinite regressions of inventory. If one intends to store data on every neuron in the system, then there will have to be storage accommodations made for the storing mechanisms, and the mechanisms storing that data, and so on. This is simply false; assuming you can simulate a complete brain, it does not contain information about itself that real brains don’t. Brains in reality don’t have a neuron to account for every neuron. That would indeed create an infinite regression, but that’s obviously not how the brain works. If you were simulating a brain, the system in which it operated would store information about all neurons, not the virtual brain itself.
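If the bookkeeping point seems slippery, a purely schematic sketch dissolves it: the host program keeps one state record per simulated neuron, and no simulated neuron keeps a record of every other neuron, so nothing regresses.

# The simulator (the host) owns the inventory; simulated neurons don't.
# Purely schematic -- a real simulator would store far richer state per cell.
brain = {
    neuron_id: {"voltage": -65.0, "connections": []}   # one record per neuron
    for neuron_id in range(1000)
}

def step(brain):
    """Advance every neuron one time step using only its own record."""
    for state in brain.values():
        state["voltage"] += 0.1    # stand-in for real membrane dynamics

step(brain)
# The dict 'brain' lives in the host's memory. No entry contains a copy of the
# whole dict, so there is no neuron-accounting-for-every-neuron regression.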

made using http://cheezburger.com/FlashBuilder/GraphJam

If someone wished to make a really strong argument against the future possibility of brain simulation, the route that must be driven is one of data acquisition. The author of the blog post railed against here touches upon this with his mention of snowball effects. Our models are only as informed as we are, and the brain constantly revolts against being imaged well. Sure, we have fMRI, EEG, ERP, PET, SPECT, and other imaging techniques at our disposal, but the fact of the matter is that the brain’s activities are so dynamic, multitudinous, and difficult to scan that acquiring said data comprehensively and in real-time is pretty daunting. We don’t have the device(s) necessary to accomplish that task yet.

There is another way, however. While genetics is as much an enigma as neuroscience, if it advances more rapidly, simulating ground-up genetic translation of the human body in a virtual setting would presumably create a viable, accurate brain. It’s based on the same encoding that our brains are; the trick is to ensure the accuracy of the algorithms denoting genetic activity. Since genetics, as a field, isn’t fraught with quite the same investigative quagmires as neuroscience, computational models may flourish sooner in that arena.



The Right and Wrong of Right and Left

found on: https://i1.wp.com/www.ideachampions.com/weblogs/left-brain-right-brain.jpg

Today’s chiroscope:

Dextro (for right-handed sorts)

The Sun moves into the Sky Sign of Dextro today, introducing a season of rationality, stringency, and organization. This is the time to count things, become a computer programmer, or sequence your dimes by mint date. Dextro energy tends to be static and analytical, filled with inflexibility and an obtuse lack of emotion.

Sinistro (for lefties)

Flighty tasks involving arts and crafts may occasion your attention today. Be expressive with your friends, who admire your musical abilities and inability to manage a schedule. Sinistros need surprises and should always be ready to go out dancing, draw a pretty picture, or cry at random.

It’s an amusing dichotomy, that left brain/right brain personality scheme. Instead of divvying people up amongst twelve categories, as is the case with astrology, we place them in one of two. Sure, the online quiz you just took declared you 60% left-brained and 40% right-brained, but what you take away is that you’re a left-brained person, whatever that means.

But what does it mean? And on a related note: how does it relate to handedness?

The overwhelming assumption about right and left brain hemispheres asserts that the left brain is logical and sequential while the right is emotional and creative. Without getting off on too much of a tangent about logic and emotion, it doesn’t take the most captious nitpicker to detect the lack of semantic clarity in this assumption (who knows what creativity is, really?). So let’s get a bit more exact.

In 1975, Kaplan and Tenhouten described three socioculturally determined modes of thought: propositional, appositional, and dialectical. Propositional thought encompasses linear reasoning; solving a basic algebra problem by using a set of steps in sequence is a good example. Language proceeds in a necessarily linear fashion (saying two words simultaneously can pose quite a challenge), and is thus propositional as well. Appositional thought refers more to holistic, synthetic thought processes. The instinctive ability to recognize faces without actively measuring distances between features is one such example. And when these two modes of thought do a little jig together, dialectical thinking results.

Experimental findings on functional hemispheric specificity fall nicely into a model wherein left=propositional and right=appositional. In a majority of cases (more on the exceptions later), language is predominantly governed by the left hemisphere. By contrast, music, which requires far more holistic and therefore appositional processing (being able to discern chords as built up from individual pitches, for instance), largely resides in the right hemisphere. Emotional affect, as construed from holistic data like gestures in conjunction with vocal tone and facial expressions, also lies within the domain of the right hemisphere. As such, when we hear people speak, it is the left brain that assigns meaning to the words but the right brain that evaluates whether their speech is blathering, grievous, heartfelt, excited, gloomy, sarcastic, etc.

 

found on: http://letterstorob.files.wordpress.com/2009/08/dawson-crying.jpg

Dawson may tell you he's happy, but your right brain knows better.

However, don’t let this laterally weighted distribution of function convince you that things are completely clean-cut. Split-brain, multilingual, and other studies have demonstrated that genetics, basic plasticity, developmental conditions, and so on can influence said dispersion in either symmetry- or asymmetry-promoting ways. For example, people who acquire fluency in more than one language before the age of 6 (give or take a few months depending on the person) distribute their linguistic activation across both hemispheres equally. In general, it’s wise to bear in mind the yin and yang, overused as the image may be. Each hemisphere has a hand in the other’s business, sometimes even a whole arm.

Speaking of hands and arms (flawless transition, wouldn’t you say?), how’s about we dabble in the issue of limb dominance. While there are all kinds of interesting combinations of limb dominance–right-handed but left-armed, ambidextrous but left-footed, and so on–on the whole, humans (and other hominids, for that matter) tend to favor a side of their body. And side-favoring maps contralaterally (to the opposite side) onto hemisphere preference. Now, bearing in mind that lateralized dispersion of function isn’t clean-cut, think about a left-handed person you know. Perhaps you’re left-handed. And you’re reading this. And any left-handed people you know are likely able to read this as well. So you’re clearly able to comprehend language. While lefties generally favor their right hemisphere more than do righties, they have perfectly functional left hemispheres.

This may sound like a defensive stream of thought, and that’s because it is. The stigma of left-handedness is fraught with the negative connotations of being a “right-brained” person: that is, emotional, unreliable, disorganized, and mathematically challenged. In fact, leftness itself carries negative connotations just by association with left-handedness; historically, majority rule has cast lefties (a minority) as outcasts. By extension, superstitions like a cat crossing your path towards the left indicating misfortune pop up, perfectly exemplifying leftness hate. The Supreme Court really ought to do something about this OUTRAGE.

Nah, not really. But the point is, lefties aren’t really all that statistically more likely to be emotional, unreliable, blah blah blah than anyone else. In fact, one of the only marked differences that comes to mind is that lefties have lower life expectancies than right-handed people because tools and machinery are usually designed with righties in mind, and freak accidents happen. Oh yeah, and they’re slightly more likely to be supergeniuses. And also, they have a minority advantage in sports; everyone is accustomed to playing against righties, and when lateral dominance comes into play, a lefty comes as a surprise.

 

found on: https://i1.wp.com/www.frontiersin.org/TempImages/imagecache/7878_fphar-01-00137-HTML/images/image_m/fphar-01-00137-g003.jpg

Medals of sensitivity should be awarded to the organic chemists responsible for designating left orientation to the types of chiral molecules found chiefly in the human body.

A last tidbit:

According to some recent studies, inhibitory callosal projections from the left hemisphere tend to dampen creative thought, as measured by the ability to analyze problems according to novel ideas instead of learned frameworks. While completely ignoring standard analytical contingencies is probably not a great thing (it’s a good idea to bear “stop, drop, and roll” in mind in case of fire, for instance), optimal behaviors are probably products of dialectical, bilateral thought. Every situation is different, and organisms can respond to them to their greatest advantage if they analyze them with both knowledge and intuition.

The actual last tidbit:

Apparently, orgasms are coincident with hyperperfusion of the right hemisphere. That’s some important stuff right there.