robotic |rōˈbätik| adjective
1. of or relating to robots: a robotic device for performing surgery.
2. (of a person) mechanical, stiff, or unemotional.
[i am beginning this post with a dictionary definition. this means i am not really motivated to write about this. however, onward.]
the thing is, we are afraid of robots. i mean, you are. i’m fine with them.
there are things we believe to be true about robots (and here i want to be clear that i’m talking about fictionalized, imagined, cinematic robots, not those machines that defuse bombs and the like): they lack autonomy, they lack free will, they lack emotions, they lack morality.
but that’s not a list, it’s actually more like a progression. we start so simply, so clearly: no autonomy, no free will. they work for us. they know only two things, 1s and 0s, and are therefore totally benign and controllable and they do only what it is in their programming to do.
may i introduce the first villain of the piece? the programmer.
wait! miles dyson didn’t turn the AI evil – it just evolved!!! we’ll come to that in a moment, but if i may, sarah connor was pretty convinced that it was dyson’s hubris, his belief in the inherent virtue of technology, that led him to put no meaningful roadblocks in the way of the AI eventually becoming self-aware. he set it free, and in so doing, unleashed the furies. i mean, i don’t have to tell you this. you saw Terminator: The Sarah Connor Chronicles.
but the first two – we think of these as the usual villains, the dr. horrible kind of dude who makes some kind of ray gun and ends up pressing a button and killing his girlfriend. oh shit, spoiler!! his initial reasons for his invention may have been good, or merely benign, and then become totally selfish and depraved, and so we do not blame the monster – for it knows not what it does – we blame the man who made it.
(or in the case of king kong, the woman that it lusted after. now hang on!)
in many ways i see this as the ‘i told you so’ tale we have given back to our Creator. a ‘hey dude, you made us!’ retort to our sins and foibles. man steals fire, sets everything aflame – well maybe you shouldn’t have been leaving fire just lying about the place, eh, gods? it is at turns both a morality play in the ‘careful what you wish for’ vein, and a comforting story we tell ourselves about the evil in our own natures: we’re not bad, we’re just drawn that way.
but then there’s the second source of fear, one that is more frightening than the fallibility of man. if you can believe that.
the second source of fear is the robot that breaks one of the tenets of being robotic – the robot gains autonomy, the robot becomes self-aware. the robot outgrows its creator (yep, back to poor miles dyson again). the robot runs amok. and because it adapts, and learns, and knows everything we taught it, it of course does evil.
but why “of course”? here’s what occurred to me in the wee small hours the other day: one of the embedded truths about robots and droids (think Data, not C3PO and his constant whinging) is that they do not have the capacity for emotion. they cannot ‘feel.’ we take this as a given, that somehow emotions and the soul are closely linked or maybe even the same thing. and maybe they are. but if robots do not feel, if they cannot feel pain or loss or remorse or love or regret or desire, then they have no internal check on their new, autonomous program. they are like little tyrant children – purely selfish, lacking insight. even if they are not malicious, they are making calculations that cannot place meaningful weight on values. we’re afraid of what happens when someone throws out the notion of emotional consequences, stops counting ephemeral notions of love or loyalty or right or wrong as among the factors in the decision set.
and yet we’ve all been there, haven’t we? we’ve all had a moment when our emotions were completely shut off, when we made decisions based entirely on cold utility, when we, to put it bluntly, used someone else for our own gain, or at least with disregard to their experience.
that’s ultimately what we fear – it’s not that an emotionless, amoral robot set free to think for itself is unthinkable or unpredictable. it’s that we know only too well what that looks like and how that goes. what makes it scary is knowing that you can’t reach into a robot’s circuits after it’s gone self-aware (it won’t let you), and you can’t really kill it (it’s immortal, and data doesn’t die, it gets synced and downloaded – thanks BSG!!).
they are what we are, late at night at the bar, working on how we’re getting that girl or guy to go home with us so we can escape for a night, have a bit of fun, knowing that we’ll walk away in the morning, feelings be damned.
it’s just that we imagine that they will never stop doing that, and that as they learn more about what ‘works’ and what doesn’t, they will perfect what we do so sloppily, and eventually destroy us all.
this is where i started the chain of bad thinking that led us here:
“robots are some kind of strange reflection back to ourselves of our own capacities for selfishness, recklessness, violence.
“but … where am i going with this? there is something i want to say about being emotionally shut off, or about turning relationships into game play, or about setting up a lot of rules to cope with things, or about distancing yourself emotionally, or about the feeling of contempt you get when you skip over the important, connective tissue parts of a relationship and go straight for the prurient and functional. what we worry about in robots is the dark parts of our natures – not the parts that are evil, per se, but the parts that are indifferent. what am i capable of doing to you, or doing without you, or doing in spite of you, quite simply because i don’t care about you that much? and worse – what am i capable of doing to myself if it’s me i don’t like?”
but i thought that was too ‘confessional’.