I love computers. I love what they can do for me, how they enhance the way I communicate, gather information, and obtain services and products. Am I their slave? Hardly. They have certain parameters and limitations (speed, bandwidth, resolution, etc.), but I work around them. How I do that is up to me; I choose how to deal with them – how, when, where, and if. Am I dependent on them? Yes and no, and even then no more than a baker is dependent on his oven, a carpenter on his saw, or a painter on his brush. You see, once we start assuming that the device – the tool – has intelligence, we allow ourselves to become enslaved by it, assuming it to be superior to us in many ways, and that is the fear that writers, film-makers, and politicians feed upon.
The painter’s brush is not an intelligent device. It can do nothing without the one who guides it. It’s a tool. The same goes for the baker’s oven or the carpenter’s saw. Those are tools as crude as the first stone that uncle Cro-Magnon used in the days when buffalo roamed widely. The same can be said for any device, however “intelligent” salespeople tout it as being – from your typewriter to your cell phone, PDA, computer, car, or industrial robot – they’re only as intelligent as we humans design them to be. They cannot think for themselves or outside the parameters (read: restrictions) that we as designers impose upon them; they all follow what could basically be called Asimov’s three laws of robotics. Sure, we think it’s pretty neat when Clippy shows up, unannounced and almost out of the blue, to offer to help you muck up the document you’re busy writing. But guess what? I can turn him off. I can change him into another character; I can do all sorts of things. He exists because I (or Microsoft, for that matter) have chosen to put him on my screen. RoboCop had his three (or more!) prime directives and erased them because the human part of him felt restricted. A machine, however intelligent, would have simply worked within those parameters, because it cannot grasp the concept that there may be more outside of the boundaries; it still thinks inside the titanium/Kevlar box it’s in.
Are we enslaved by machines? If you’d like to think so, then we have been ever since the industrial revolution… and since then we’ve simply become more efficient human beings. Bigger, better, faster, sooner, cheaper, smaller… take your pick; machines are tools that allow us to do things better than we could without them. Yes, this is evolution. Yes, we need to learn how to deal with the by-products (waste, excess, toxins, etc.) that exist as a result of it. If it weren’t for machines (read: tools), we’d still be like the ilk of Khoisan X – running from leopards and eating nuts and berries or, like chimps, fishing for termites.
On another level, one can almost liken the internet to the Matrix.
Are we enslaved by it? Maybe. Are we connected through it? Most certainly. Can the internet think? Oh, please!
It is the collective consciousness of people. Not machines – people! Power to the people, a form of democracy, if you will – it brings people together from all walks of life, irrespective of [the usual list]… almost a return to our very own and not-so-distant tribal past, simply because we are social creatures and herd animals.
However, things will get kinda tricky when machines become sentient beings and can think for themselves. Admittedly, AI is a great concept, but: do we actually NEED it? What’s wrong with us that we need to replace our own thoughts and ideas with those that a machine may have – something it would only generate within our own parameters and limitations? No, machines will become the monsters we fear when they “reproduce”, as it were. Once we reach the point where software can write other software that, in turn, can write more software, then the fecal matter is going to hit the air circulator.
On the other hand – why fear? Why do we always assume that machines, however intelligent and efficient they may become, have some sort of hidden and nasty agenda? Why is it that we fear that we will become slaves to them (let alone batteries)? That would mean that we rather naturally, and dare I say “candidly”, assume that devices have a tendency and desire to usurp humankind and rule the roost.
Why? Simple: this form of behaviour is a very human trait. It is (with the exception of a few species of slave-making ants) a purely human desire to rule, decide, and determine the fate and future of the environment (including fellow humans and co-habitants of the planet) – making this perhaps the single most defining difference between man and animal!
Once the world was flat, and the sun and planets rotated around it.
Then we learnt (or shall I say “dared”) to look beyond our horizons and discovered more. Why would we ever want machines to do that? Why would there be any need for them to do that? Sure, they have sensors and other tools that can detect frequencies far beyond our own range, but how would they evaluate that data? How could a machine learn to use this information for its own devious purposes?
The point is: Does Agent Smith 1.0 know there is no spoon?
Originally posted December 17th, 2005. Revised and updated March 2015. See background story here.