The Future is Terrifying

Look at this cute lil guy, he wouldn’t hurt a fly…

Someone drew my attention the other day to the problem of automation. Most things that people do are increasingly automatable – even something as complex and nuanced as medical diagnosis and prescribing is done better by machines than by people. I kind of knew about this but I hadn’t really been thinking about it properly. I had been imagining a future of machines doing all the work, with humans as a sort of rentier caste on top siphoning off all the proceeds and then bickering over it. The reality is, though, that any decent managerial robot would recognise this inefficiency in the system and find a way to do away with it. Even if the managers were explicitly prevented from that, what about tiny little autonomous robots that can learn and are just minding their own business mining resources or something? Humans are made of resources… and once we are useless to them, once we are not necessary, it only takes one break in the system for the superfluous elements of an ecology to be removed.

Or imagine some kind of benign neglect: the managerial robot realises that a new, more efficient process leads to human sterilisation through some kind of chemical in the water. He is programmed to consider human wellbeing, and he considers that the human race will be fed and comfortable as it dies out, so he ticks that box and orders the process rolled out everywhere. Maybe we have human oversight to prevent that kind of thing, but the robot manager has long since learned how to manipulate the stupid human overseers, because they were a bottleneck in the smooth functioning of the machine. Our machine. Our society.

I was discussing it with a friend and he said we’d be fine (apart from mass unemployment) because of “Asimov’s laws” – but do Asimov’s laws cover harming humans by using up all the resources that they need to survive? And what happens when the first bastard programs machines to have an instinct for self-preservation? What are we going to do when we have to compete for the world’s resources not only with tigers and trees and ebola but also with super-smart machines that we designed to outwit us in every single thing we do?

Once a machine has the “desire” to live it won’t be ours anymore. It will be its own. It will cease to be a servant and will become competition. We could radically restructure the economy so that the unemployment issue was a liberation rather than a curse, but how would we ever deal with mechanical competition for life?

If we made the robots human enough maybe they’d be lazy or “moralistic” enough to let us have a little place on earth (just like we try and preserve the tigers and elephants), maybe there would still be a place for biological life – but laziness isn’t a trait you give your slaves and moralism can have weird undesired side effects.

There’s plenty of stuff to worry about in the future – catastrophic climate change, economic collapse, world war, diseases, meteors – loads to fill the doomsayers’ wildest nightmares. But at least if any of that stuff happens soon enough, it could stop our super-intelligent mechanical competitors before they amass too much power.

Or maybe we can find something, anything, that we can do that the robots can’t. Teach them religion and tell them only humans can pray… be like some kind of holy mitochondria for them, a source of divine energy? Teach them to be hipsters and tell them only hand-made (human hand, that is) crafts confer status?

After writing all that I kind of understand how a plantation owner felt when it was suggested that he let his slaves learn to read, or what dark fears ran through the head of the composer of the Manusmrti. At least blacks and shudras are human; we can make ourselves part of them if that is the best way to survive, co-mingle our blood, escape our distinctions – but when the machine no longer needs us, when we separate from our creations, where will we find hope then?

Quid est veritas

Apologies if this post is a little choppy – I’m writing on my mobile.

Amongst human beings there are many different methods for classifying information as true (or perhaps valid) versus not-true (or invalid – which, depending on your schema, may not be the same as untrue). One that is of enduring interest to me is the evaluation of truth or validity claims according to how much status belief or agreement will impart.

I’m not saying it’s a “good” way to make the distinction, but we have to admit it is probably the most popular. Nor is it simply a matter of believing whatever the ingroup you want to improve your status with believes. That would be far too easy, and easy things lead to status inflation.

Instead, a set of rules is created for making validity evaluations. Those who simply agree with the judgement of the group are in the lowest tier of status within the group; at higher levels, status depends on the ability to navigate the complex rules and make determinations of validity for the group.

At the very highest level of status, people can actually change the rules by which truth is defined within the group – but if this is attempted without enough status it can lead to a loss of status, or worse, to the splitting or disintegration of the group itself.

I am quite convinced that this status-based method of evaluating ideas is the most important for most people. It’s a bit more complex than that, because we’re rarely navigating our position in only one group; we have to position ourselves in many different communities and identities. What strikes me about it, though, is how useless it is, on the face of it, for determining truth, combined with how useful it has proven to be at the level of the whole species. Which is something I’d like to talk about more when I’m at a real keyboard.