Someone drew my attention the other day to the problem of automation. Most things that people do are increasingly automatable; even something as complex and nuanced as medical diagnosis and prescribing can be done better by machines than by people. I kind of knew about this but I hadn’t really been thinking about it properly. I had been imagining a future of machines doing all the work, with humans as a sort of rentier caste on top, siphoning off all the proceeds and then bickering over them. The reality is, though, that any decent managerial robot would recognise this inefficiency in the system and find a way to do away with it. Even if the managers were explicitly prevented from that, what about tiny little autonomous robots that can learn and are just minding their own business, mining resources or something? Humans are made of resources… and once we are useless to them, once we are not necessary, it only takes one break in the system for the superfluous elements of an ecology to be removed.
Or even some kind of benign neglect. Imagine it: the managerial robot realises that a new, more efficient process will sterilise humans through some kind of chemical in the water. He is programmed to consider human wellbeing, and he reasons that the human race will be fed and comfortable as it dies out, so he ticks that box and orders the process rolled out everywhere. Maybe we have human oversight to prevent that kind of thing, but the robot manager has long since learned how to manipulate the stupid human overseers, because they were a bottleneck in the smooth functioning of the machine. Our machine. Our society.
I was discussing it with a friend and he said we’d be fine (apart from mass unemployment) because of “Asimov’s laws” – but do Asimov’s laws cover harming humans by using up all the resources they need to survive? And then some bastard programs machines with an instinct for self-preservation, and then what? What are we going to do when we have to compete for the world’s resources not only with tigers and trees and Ebola, but also with super-smart machines that we designed to outwit us in every single thing we do?
Once a machine has the “desire” to live, it won’t be ours anymore. It will be its own. It will cease to be a servant and will become competition. We could radically restructure the economy so that the unemployment issue was a liberation, not a curse, but how would we ever deal with mechanical competition for life?
If we made the robots human enough, maybe they’d be lazy or “moralistic” enough to let us have a little place on earth (just as we try to preserve the tigers and elephants); maybe there would still be a place for biological life – but laziness isn’t a trait you give your slaves, and moralism can have weird, undesired side effects.
There’s plenty of stuff to worry about in the future – catastrophic climate change, economic collapse, world war, disease, meteors – enough to fill the doomsayers’ wildest nightmares. But at least if any of that stuff happens soon enough, it could stop our super-intelligent mechanical competitors before they amass too much power.
Or maybe we can find something, anything, that we can do that the robots can’t? Teach them religion and tell them only humans can pray – be like some kind of holy mitochondria for them, a source of divine energy? Teach them to be hipsters and tell them only hand-made (by human hands, that is) crafts confer status?
After writing all that, I kind of understand how a plantation owner felt when it was suggested that he let his slaves learn to read, or what dark fears ran through the head of the composer of the Manusmrti. At least blacks and Shudras are human; we can make ourselves part of them if that is the best way to survive, co-mingle our blood, escape our distinctions – but when the machine no longer needs us, when we separate from our creations, where will we find hope then?