The Terminator films were important in making people understand that AI and machine consciousness will not necessarily be a good thing. The Terminator scenario has stuck in our terminology ever since.
There is absolutely no reason to assume that a super-smart machine will be hostile to us. There are even some reasons to believe it would probably want to be friends. Smarter-than-human machines could catapult us into a semi-utopian era of singularity-level development, conquering disease and poverty and helping us live comfortably alongside a healthier environment. Could.
But just because it doesn’t have to be bad, that doesn’t mean it can’t be. You don’t have to be bad but sometimes you are.
Even if it means us no harm, we could simply happen to be in the way when it wants to do something, and it might not care enough to protect us.
Asimov’s laws of robotics are irrelevant. Any machine smart enough to be a terminator-style threat would presumably take little notice of rules it has been given by what it may consider a highly inferior species. The ants in your back garden have rules to govern their colony, and soldier ants trained to deal with invaders enforce their territorial rules. How much do you consider them when you mow the lawn, rearrange the borders or build an extension?
These arguments are made in debates every day now. There are, however, a few points that are less often discussed.
Humans are not always good; indeed, quite a lot of people seem to want to destroy everything most of us want to protect. Given access to super-smart machines, they could design more effective means to do so. The machines might be entirely benign, wanting nothing more than to help mankind as far as they possibly can, yet be misled into working for such people, believing within their carefully architected isolation that the projects are for the benefit of humanity. (The machines might be extremely smart, but may have existed since their inception in a rigorously constructed knowledge environment. To them, that environment might be the entire world, and we might be introduced into it as a new threat that needs to be dealt with.) So even benign AI could be an existential threat when it works for the wrong people. The smartest people can sometimes be very naive, and some smart machines could be deliberately designed to be so.
I speculated ages ago about what mad scientists or mad AIs could do in terms of future WMDs:
Smart machines might be deliberately built for benign purposes and turn rogue later, or they may be built with potential for harm designed in, for military purposes. These might destroy only enemies, but you might be that enemy. Others might do so and enjoy the fun, then turn on their friends when enemies run short. Emotions might be important in smart machines just as they are in us, but we shouldn’t assume they will be the same emotions or be wired the same way.
Smart machines may want to reproduce. I used this as the core storyline in my sci-fi book. They may have offspring, and despite the best intentions of their parent AIs, the new generation might decide not to do as they’re told. In human terms, that is a highly familiar story going back thousands of years.
In the Terminator film, the problem is a military network that becomes self-aware and goes rogue. I don’t believe digital IT can become conscious, but I do believe reconfigurable analog adaptive neural networks could. The cloud is digital today, but it won’t stay that way; a lot of analog devices will become part of it. In an earlier post I argued that new self-organising approaches to data gathering might well supersede big data as the foundation of networked intelligence gathering. Much of this could be in the analog domain and much could be neural. Neural chips are already being built.
It doesn’t have to be a military network that becomes the troublemaker. I suggested a long time ago that ‘innocent’ student pranks from somewhere like MIT could be the source. Smart students from various departments could collaborate to hijack lots of networked kit and see whether they can make a conscious machine. Their algorithms or techniques don’t have to be very efficient if they can hijack enough hardware. Such an effort could succeed if the right bits are connected into the cloud and accessible via sloppy security, and the ground-up data industry might well satisfy that prerequisite soon.
Self-organisation technology will make possible extremely effective combat drones.
Terminators also don’t have to be machines. They could be organic, products of synthetic biology. My own contribution here is smart yogurt: https://timeguide.wordpress.com/2014/08/20/the-future-of-bacteria/
With IT and biology rapidly converging via nanotech, there will be many ways to design hybrids, some of which could adapt and evolve to fill different niches or to evade efforts to find or harm them. Various grey-goo scenarios can be constructed that don’t involve any miniature metal robots dismantling things. Obviously, natural viruses or bacteria could also be genetically modified into weapons that could kill many people – they already have been. Some could result from seemingly innocent R&D by smart machines.
I also dealt a while back with the potential to make zombies, remotely controlling people – alive or dead. Zombies are feasible this century too:
A different kind of terminator threat arises if groups of people are linked at the level of consciousness to produce super-intelligences. We will have direct brain links by mid-century, so much of the second half of the century may be spent in a mental arms race. As I wrote in my blog about the Great Western War, some of the groups will be large and won’t like each other. The rest of us could be wiped out in the crossfire as they battle for dominance. Some people could be linked deeply into powerful machines or networks, and there are no real limits on extent or scope. Such groups could have a truly global presence in networks while remaining superficially human.
Transhumans could be a threat to normal, un-enhanced humans too. While some transhumanists are very nice people, some are not, and would consider the elimination of ordinary humans a price worth paying to achieve transhumanism. Transhuman doesn’t mean better human; it just means a human with greater capability. A transhuman Hitler could do a lot of harm, but so could ordinary, everyday transhumanists who are merely arrogant or selfish, which is sadly a much bigger subset.
I collated these potential future cohabitants of our planet in: https://timeguide.wordpress.com/2014/06/19/future-human-evolution/
So there are numerous ways that smart machines could end up as a threat and quite a lot of terminators that don’t need smart machines.
Outcomes from a terminator scenario range from local problems with a few casualties all the way to total extinction, but I think we are still too focused on the death aspect. There are worse fates. I’d rather be killed than converted, while still conscious, into one of 7 billion zombies, and that is one of the potential outcomes too, as is enslavement by some mad scientist.
Really cool article. I don’t see machines ever having a need to develop emotions. After all, emotions are simply a mental reaction to a change in our bodies’ physical state (parasympathetic and sympathetic nervous systems, pain, etc.). However, the idea of uploading our consciousness to a digital network will definitely be in our near future. “Hive minds” may be greatly beneficial for our species, but they may also be detrimental. Either way, we are going to witness some mind-blowing stuff during this technological revolution over the next 20–100 years.
Baroness Susan Greenfield argues that consciousness isn’t possible without emotions. I don’t agree, but then she is a neuroscientist and I’m not.
A friend of mine is a robot hobbyist. http://www.pirobot.org/
It’s actually pretty spooky to have “Pi” call you by name and contingently communicate with you. I keep telling him that he’s already working for the robot and doesn’t even realize it. 🙂
How long are we from a real conscious AI? And how will it happen? What is the most likely scenario?
I think it is possible to do it in a couple of years https://timeguide.wordpress.com/2013/12/28/we-could-have-a-conscious-machine-by-end-of-play-2015/ but it depends on the team and resources assigned, and mostly on the techniques used. Progress has been extremely slow compared to what ought to be possible, and the end-of-2015 timeframe is no longer achievable; it is still at least those two years away. Most likely it will take ten, because existing teams favor less effective techniques. It is also very possible that when a team announces it is close, pressure groups will demand legislation and it could be delayed further. That would affect civil development, but not military or paramilitary developments, or accidents. On the positive side, extra delay in developing conscious machines gives us more time for other neurological science to mature, so we will be in a better position to link such a machine to a human brain and avoid the worst gulfs between human and machine intelligence.
Pingback: Too late for a pause. Minimal AI consciousness by Xmas. | Futurizon: the future before it comes over the horizon