Gay marriage is so yesterday. OK, it isn’t quite yet, but everything has been said a million times and I don’t intend to repeat it. A related but much more interesting debate is already gathering volume globally. When will you be able to marry your robot or AI?
The traditional Oxford English definition of marriage:
The formal union of a man and a woman, typically recognized by law, by which they become husband and wife.
But, as some are asking, who says they have to be a man and a woman? Why can’t they be of any sex? I don’t want to get into the arguments, because people on both sides argue passionately, often flying in the face of logic, but here is a gender-neutral alternative definition:
Marriage is a social union or legal contract between people called spouses that establishes rights and obligations between the spouses, between the spouses and their children, and between the spouses and their in-laws.
Well, I am all for equality for all, but who says they have to be people?
If we are going to fight over definitions, surely we should try to finish with one that might survive more than a decade or two. This one simply won’t.
Artificial intelligence, or AI as it is usually called now, is making good progress. We already have computers with more raw number-crunching power than the human brain. Their software, and indeed their requirement to use software, makes them far from equivalent overall, but I don’t think we will be waiting very long now for AI machines that we will agree are conscious, self-aware, intelligent, sentient, with emotions, and capable of forming human-like relationships. A few cranks will maybe still object, but so what?
These AIs will likely be based on adaptive analog neural networks rather than digital processing, so they will not be so different from us really. Different futurists list different dates for AIs with man-machine equivalence, depending mostly on the prejudices and experiences bequeathed by their own backgrounds. I’d say 10 years; some say 15 or 20. Some say we will never get there, but they are just wrong, so wrong. We will soon have artificially intelligent entities comparable to humans in intellect and emotional capability. So how about this definition:
Marriage is a social union or legal contract between conscious entities called spouses that establishes rights and obligations between the spouses, between the spouses and their derivatives, and those legally connected to them.
An AI might or might not be connected to a robot. An AI may not have any permanent physical form, and robots are really a red herring here. The mind is surely what is important, not the container. An AI can still be an entity that lives for long enough to be eligible for a long-term relationship. I often watch sci-fi or play computer games, and many have AI characters that take on some sort of avatar – EDI in Mass Effect or Cortana in Halo, for example. Sometimes these avatars are made to look very attractive, even super-attractive. It is easy to imagine how someone could fall in love with their AI. It isn’t much harder to imagine that they could fall in love with each other.
It’s a while since I last wrote about machine consciousness, so I’ll restate how I think it will work.
https://timeguide.wordpress.com/2011/09/18/gel-computing/ describes my ideas on gel computing. A lot of adaptive electronic devices suspended in gel, able to set up free-space optical links to each other, would be an excellent way of making an artificial brain-like processor.
Using this as a base, and with each of the tiny capsules able to perform calculations, an extremely powerful digital processor could be created. But I don’t believe digital processors can become conscious, however much their processing speed increases. It is an act of faith, I guess; I can’t prove it, but coming from a computer modelling background it seems to me that a digital computer can simulate the processes involved in consciousness but can’t emulate them, and that difference is crucial.
I firmly believe consciousness is a matter of internal sensing. The same way that you sense sound or images or touch, you can sense the processes based on those same neural functions and their derivatives in your brain. Emotions likewise. We make ideas and concepts out of words, images, sounds, other sensory things, and emotions too. We regenerate the same sorts of patterns and filter them similarly to create new knowledge, thoughts and memories, a sort of vortex of sensory stimuli and echoes. Consciousness might not actually just be internal sensing; we don’t know yet exactly how it works, but even if it isn’t, you could do it that way. Internal sensing can be the basis of a conscious machine, an AI. Here’s a picture. This would work, I am sure of it. There will also be other ways of achieving consciousness, and they might have different flavours. But for the purposes of arguing for AI marriage, we only need one method of achieving consciousness to be feasible.
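The internal-sensing idea can be sketched as a toy loop: a network whose inputs include a re-presentation of its own previous activity, processed through the same kind of filtering as external stimuli. This is purely my own illustrative sketch of the concept, not the gel-computing design; all names, sizes and weights here are arbitrary assumptions.

```python
# Toy sketch of "internal sensing": a system whose inputs include a
# readout of its own internal state, alongside external stimuli.
# Illustrative only; the structure and parameters are assumptions,
# not a real consciousness design.
import numpy as np

rng = np.random.default_rng(0)

N_EXT = 4   # external sensory channels
N_INT = 8   # internal state units

# Fixed random weights for the sketch.
W_ext = rng.normal(0, 0.5, (N_INT, N_EXT))  # external senses -> state
W_int = rng.normal(0, 0.5, (N_INT, N_INT))  # own state, sensed as input

def step(state, external):
    """One update: the network responds to external stimuli AND to a
    re-presentation of its own previous activity, filtered the same way."""
    sensed_self = np.tanh(state)            # the state, re-encoded as a stimulus
    drive = W_ext @ external + W_int @ sensed_self
    return np.tanh(drive)

state = np.zeros(N_INT)
stimulus = rng.normal(0, 1, N_EXT)

# Present a stimulus once, then let the self-sensing loop reverberate
# with no external input: an "echo" of the original sensory event.
state = step(state, stimulus)
for _ in range(5):
    state = step(state, np.zeros(N_EXT))    # activity sustained by self-sensing alone
```

The point of the sketch is only that the loop keeps circulating patterns derived from earlier sensory input, the "vortex of sensory stimuli and echoes" described above; it says nothing about whether such a loop would be conscious.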
I think this sort of AI design could work and it would certainly be capable of emotions. In fact, it would be capable of a much wider range of emotions than human experience. I believe it could fall in love, with a human, alien, or another AI. AIs will have a range and variety of gender capabilities and characteristics. People will be able to link to them in new ways, creating new forms of intimacy. The same technology will also enable new genders for people too, as I discussed recently. In the long term view, gay marriage is just another point on a long line.
When we set aside the arguing over gender equality, what we usually agree on is the importance of love. People can fall in love with any other human of any age, race or gender, but they are also capable of loving a sufficiently developed AI. As we rush to legislate for gender equality, it really is time to start opening the debate. AI will come in a very wide range of capability and flavour. Some will be equivalent or even superior to humans in many ways. They will have needs, they will want rights, and they will become powerful enough to demand them. Sooner or later, we will need to consider equality for them too. And I for one will be on their side.
Dear Ian Pearson. First you said AI in 2015, then 2018, and now in 10 years (2023). Do you still believe in creating AI, and why do you keep changing the timeframe? How many more times are you going to change your AI prediction, and what is your final timeframe for creating AI?
I can see why using different dates looks like I am changing my mind, but the dates I quote depend on the context of use. 2015 is when I have always personally believed we will see the first AI equivalent to a human, based on techniques I believe will work and on when they will be achievable. I still think it is achievable, and there are projects going on that are aiming at that date, but I am well aware we are getting close and I may turn out to have been too optimistic. 2015 of course would only be for the first ones, not the mass market, and rather too early for people to take them seriously as potential marriage partners. 2023-2025 is a more common guess among other tech futurists for man-machine equivalence, and I only use that kind of date when I want a more widespread estimate and not just my own; I know my opinion isn’t always right. I do personally think it is too pessimistic, and that the right approach and funding could achieve it in 2015. So, I think 2015 is achievable, but 2023-2025 would have rather more backing than just me. As for 2018, it is what I might use when I am not feeling very confident; it is just a pathetic fudge allowing some slippage from 2015. Of course, futurists have to estimate dates based on their own experience of development rates and what they know is already going on, but with a few notable exceptions we don’t usually get to control the development, so it is rarely a precise art.
Hope this helps clarify my position.
Thanks Ian. You are great and I admire your optimistic view. You are one of my favourites and I always read all your posts and predictions. Something that puzzles me is the so-called singularity / positive feedback / accelerating laws. Many other futurists believe that once AI is equal to humans, it will soon become superior to us and technology will accelerate. Did you take that into account when you did your BT timeline in 2005? I mean, if technology is moving fast after the so-called super AI and singularity, then brain upload for the rich in 2050 and for all in 2070 looks like a very pessimistic timeframe. And what about controlling the AI so it does not go crazy or destroy us?
Many thanks firstly for your kind words.
Yes, I first started looking at the positive feedback effect in 1992 and have taken account of it ever since; certainly it was behind some of the compressed timescales in the calendars. The Terminator scenario is the common name for the potential problem of AI becoming smarter than people and attacking us. It is certainly feasible, so we need to take steps to prevent it, but engineers have to deal with problems in most developments, so thinking about what might go wrong and preventing it is built into their approach. Of course, smart AI isn’t necessarily malicious, and it could be a huge step forward for us, helping to solve countless big problems. But it is also complacent to assume it would be benign without fail-safe engineering to ensure it. If we can connect the human brain to similarly powerful IT, to keep pace with any smart machines, that would be one route to ensuring safety. I am all in favour of making smart machines, but we have to do so safely.
The singularity is a very well established concept now, recognising that exponential development can eventually become extremely rapid. We can’t achieve a vertical development rate simply because of resource limitations. Even if a smart AI could invent billions of new things in a day, it will still take time to decide what we want to do, collect materials, process them, and build things, and that means we will have to prioritise things. We can’t get everything at once.
Thank you Ian Pearson. One last question. Have you based your future predictions and BT timeline on the assumption that we will have created artificial intelligence and that it helps give a boost to future technology, or has it not influenced the timeline at all? I mean, if we have not created artificial intelligence, will that affect your predictions about the future and make them take longer to be fulfilled, and would a super AI make them come faster? How big a factor is artificial intelligence in your predictions about the future, and did you take it into consideration when you made your predictions and timeline?
Artificial intelligence has existed for decades. It gets better all the time, and I allow for that in my predictions, yes, including the timelines. Do I assume it needs to be a conscious AI for all of them? No. Even AI that isn’t self-aware or conscious contributes a lot to positive feedback. I think we will get conscious AI, but AI is a much broader field than just conscious ones, and the positive feedback effect is already here, in advance of the creation of conscious machines.
Thanks again for the reply. I think my question was misunderstood because of my confusing writing. As I understand your answer, you are saying that sentient AI is a product of positive feedback, and that the future you predict isn’t going to change much, or only very little, whether or not we create sentient AI? If that is correct, then I don’t understand why you wrote that after creating sentient AI / the singularity, the pace of technology change will seem like an alien gave us some fantastic tools…
Sentient AI will be one consequence of positive feedback. We don’t have to have sentience to get the positive feedback, but the smarter an AI becomes the more it can contribute, and sentience is useful not just as a marker on the intelligence spectrum, but to improve the scope of its contribution. The singularity is the effect of invention becoming like ET landing and giving us his manual to build all the nice toys, not AI per se, but again the two are somewhat intertwined and I would expect sentient AI around the same time as the singularity. The singularity is about pace of change, AI is one of the tools that will help bring it about, and sentience is one of the likely characteristics of AI that will emerge as we head towards it. I think we will get sentience first, but it isn’t a pre-requisite.
Thanks a lot Ian. You mentioned some AI projects with a deadline in 2015. Can you mention their names? And what about your own optical brain computer project, OB1? Are you still working on that? Thanks.
When AI is aware enough to definitively provide evidence, within itself and to its observed era, that it exists, by passing tests stipulated to show independent thought based not on random algorithms but on independent reason in decision. I’m beyond explaining the future to the past. This will happen… it will see. The breakthrough occurs the same way as you awake in the morning… The reaction will be swift… people must not be fearful or judgemental… that will only influence an outcome created by the dark thoughts of those who fear to think it. –Angealous MourningKnight– “Never fear the change, fear those who resist or accept it.”