When it comes to evil, self-aware robots in fiction, a Skynet reference is obvious (if not flatly overplayed). Let us not mistake self-awareness for cleverness, however; Skynet may have recognized humans as threats to its existence, but it failed to effectively terminate them. It therefore seems inappropriate to dub Skynet an artificially intelligent system; rather, it exemplifies AU: artificial unintelligence.
To elaborate, let’s start with the very basics of Skynet’s failures in the Terminator universe. After gaining self-awareness in 1997, the system fired missiles at Russia, resulting in the deaths of over 3 billion humans through mutually assured destruction (MAD). In 1997, there were just under 6 billion people on Earth, meaning Skynet’s opening play had over a 50% success rate. That’s excellent! Now all Skynet needs to do is keep firing missiles at the remaining human civilizations and use bio-warfare tactics to finish off whatever the missiles miss.
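The arithmetic behind that “over 50%” figure is simple enough to sketch. The numbers below are the rough estimates from the franchise lore (just under 6 billion people, just over 3 billion dead), not precise historical data:

```python
# Back-of-the-envelope math for Judgment Day's "success rate".
# Figures are rough estimates, not exact census data.
pre_strike_population = 5.9e9  # world population in 1997, just under 6 billion
deaths = 3.0e9                 # "over 3 billion" killed in the nuclear exchange

success_rate = deaths / pre_strike_population
survivors = pre_strike_population - deaths

print(f"Success rate: {success_rate:.0%}")      # → Success rate: 51%
print(f"Survivors remaining: {survivors:.1e}")  # → Survivors remaining: 2.9e+09
```

Nearly 3 billion survivors is still a lot of mopping up, which is exactly why the choice of follow-up strategy matters.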
But no. Clearly the way to finish off the rest of humanity is to build robotic human doppelgangers. This means scraping together the resources needed to clone or synthesize organic materials for skin, blood, sweat glands, and other human traits, in addition to all of the metal and research needed to produce a functional robot. Somehow, this exhaustive feat of engineering is the optimal strategy, not just synthesizing a virus or selecting for a particularly virulent, highly antibiotic-resistant strain of bacteria and releasing it upon the world.
And of course, we know Skynet didn’t stop with the T-600 through T-800 series of Terminators. It came up with pure super-science: the “poly-alloy liquid metal” that basically turns any Terminator into Majin Buu.
It’s impossible to say how much outlandish R&D went into developing the T-1000, but it’s a safe wager that designing a bio-weapon would be far easier. This scenario is a bit like designing an atomic bomb to kill a fly when you could just make a fly swatter (or better yet, use your hand). It’s also redolent of that story about the Russians using a pencil in space while the Americans put millions of dollars into designing a zero-gravity space pen (a story that turned out to be a myth).
“Unintelligence” of this sort, of course, is perfectly natural. Human beings, too, find scads of overly complex, costly solutions to simple problems all the time. We like to call this overthinking, and it can be characterized, ironically, as the sort of poor reasoning that results from abundant intelligence. As any gamer knows (especially when it comes to puzzle games), functional fixedness (not to be confused with analysis paralysis) can become a serious obstacle to progress, leading to misguided, impractical approaches to a simple puzzle. In the case of Skynet, the computer system may have become fixated on making Terminators early on as a result of bounded rationality or some other source of fallacious reasoning (maybe Skynet’s designers failed to prime its warfare instincts with examples of biowarfare throughout history). Rather than entertaining new, simpler solutions to the problem of surviving humans, Skynet opted for a doppelganger strategy, perhaps drawing on military tactics from WWII (consider the impersonation of Polish civilians by German Brandenburgers).
In any case, the Terminator approach was kind of a dumb idea, but by golly does it make for some good visuals. Contagion certainly can’t compete, what with its R0 values and diagrams of proteins. Audiences are just picky like that.