Execute

I still find it fascinating to think how often we tell stories about machines rising up to destroy us. There are probably a few anxieties tangled up together in these narratives, and it’s intriguing to pull on those threads and see where they lead. One leads back to Frankenstein, to the abandoned artificial child growing hard and vengeful absent a father’s love – a story that itself ties together fears of new technological capabilities trespassing in the gods’ domain, and of careless treatment of those we have obligations toward causing tragic consequences far down the line. One thread leads back to the nuclear bomb, the fear that we will invent the means of our collective undoing as a species. One connects to our often-unstated understanding that our prosperity, such as it is, is built on the suffering and exploitation of people all over the world, that this is unjust, and that those people might justly cast us down any day. Another ties to the understanding that we, too, are being exploited, and that technology is giving our exploiters ever more leverage.

All of these anxieties find a home in feverish imagination as killer robots and malicious artificial intelligences. I’ve begun to suspect, though, that the main thing we see when we look into the terrifying robot future is ourselves. Most automation, as things stand, isn’t actually automatic at all – it’s just a way to let people work at a distance, to obscure their presence, to launder and anonymize their labor. On the flip side, most of the decisions that cause the most harm, that cost the most lives, are entirely human in nature, lacking in rigor and data, full of bias and fear and cruelty. What unites these decisions, however, is that they are usually made in the service of some sort of system. Some army or corporation or government sets a priority for something that must be accomplished, and otherwise normal people begin to completely disregard whatever ethics, whatever regard for human life, they might once have had.

We’re already very good at performing automated labor. We’re good at playing roles, fitting ourselves into systems, and avoiding looking up at the overall effect of our actions because it is in aggregate too complex and too horrifying to be countenanced. We can tell terrible parables of gray goo and terminators, but what makes these stories effective is how familiar they all seem, created in our own image.

I do not believe humans are evil – I used to not believe in evil at all, and I still don’t really believe a person can be evil, or even that an action in isolation can be evil. We are very good at performing social roles, and those roles make us capable of doing great harm no matter what our personal beliefs are. We’re scared of the machines because we’ve been them. We’re scared of the machines because we might be them still, just following orders, just acting as we are programmed to act. I believe now that, inasmuch as evil exists, it exists in the space between us, in the way we organize and understand power, in the way we are taught to treat one another. Evil is a machine, and we are its parts, and the duty is ours to see how it might be dismantled.
