Daily Archives: May 19, 2019

The future of reproductive choice

I’m not taking sides on the abortion debate, just drawing maps of the potential future, so don’t shoot the messenger.

An average baby girl is born with about a million eggs, still has around 300,000 when she reaches puberty, and subsequently releases 300–400 of these over her reproductive lifetime. Typically one or two will become kids, but today a woman has no way of deciding which ones, and she certainly has no control over which sperm is used beyond choosing her partner.

Surely it can’t be very far in the future (as a wild guess, say 2050) before we fully understand the links between someone’s genetics and how they turn out (along with all the other biological factors involved in determining the outcome). That knowledge could then notionally be used to create some sort of nanotech (aka magic) gate that would allow a woman to choose which of her eggs get to be ovulated and potentially fertilized, discarding the ones she isn’t interested in and going ahead when a good one is released. Maybe by 2060, women would also be able to filter sperm the same way, helping some through while blocking others. Choice needn’t be limited to whether to have a baby, but which baby.

By choosing a particularly promising egg and then the sperm that would combine best with it, an embryo might be created only if it is likely to result in the right person (perhaps an excellent athlete, an artist, a scientist, or just someone good looking), or deselected if it would become the wrong person (e.g. a terrorist, criminal, saxophonist, Republican).

However, by the time we have the technology to do that, and even before we fully know what gene combos result in what features, we would almost certainly be able to simply assemble any chosen DNA and insert it into an egg from which the DNA has been removed. That would seem a more reliable mechanism to get the ‘perfect’ baby than choosing from a long list of imperfect ones. Active assembly should beat deselection from a random list.

By then, we might even be using new DNA bases that don’t exist in nature, invented by people or AI to add or control features or abilities nature doesn’t reliably provide for.

If we can do that, and if we know how to simulate how someone might turn out, then we could go further and create lots of electronic babies that live their entire lives in an electronic Matrix style existence. Let’s expand on that briefly.

Even today, couples can store eggs and sperm for later use, but with this future genetic assembly, it will become feasible to create offspring from nothing more than a DNA listing. Both members of a couple, of any sex, could get a record of their DNA, randomize combinations with their partner’s DNA, and thus build a massive library of potential offspring. They may even be able to buy listings of celebrity DNA from the net. This creates the potential for greatly delayed birth and tradable ‘ebaybies’ – DNA listings are not alive, so current laws don’t forbid trading in them. These listings could however be used to create electronic ‘virtual’ offspring, simulated in a computer memory instead of being born organically. Various degrees of existence are possible, with varied awareness. Couples may have many electronic babies as well as a few real ones. They may even wait to see how a simulation works out before deciding which kids to make for real. If an electronic baby turns out particularly well, it might be promoted to actual life via DNA assembly and real pregnancy. The following consequences are obvious:

Trade-in and collection of DNA listings, virtual embryos, virtual kids etc., that could actually be fabricated at some stage.

Re-birth, potential to clone and download one’s mind or use a direct brain link to live in a younger self

Demands by infertile and gay couples to have babies via genetic assembly

Ability of kids to own entire populations of virtual people, who are quite real in some ways.
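The scale of such a randomized DNA-listing library is easy to illustrate. This toy sketch (the two-letter ‘genome’, the locus count, and the recombination rule are all invented for illustration; real meiosis involves crossover and vastly more loci) shows how randomizing combinations of two parental listings generates a large space of distinct potential offspring, all of which exist only as data:

```python
import random

def toy_offspring(parent_a, parent_b, seed):
    """Toy recombination: pick one 'allele' per locus from either parent.
    Purely illustrative; real genetics is far more complex."""
    rng = random.Random(seed)
    return "".join(rng.choice(pair) for pair in zip(parent_a, parent_b))

# Two hypothetical 12-locus parental 'listings'
parent_a = "AAAAAAAAAAAA"
parent_b = "BBBBBBBBBBBB"

# A small 'library' of potential offspring listings - data, not life
library = {toy_offspring(parent_a, parent_b, seed) for seed in range(1000)}
print(len(library), "distinct listings sampled from 1000 draws")
print("possible combinations:", 2 ** len(parent_a))  # 4096 for 12 loci
```

With a realistic number of variable loci the space of combinations is astronomically larger, which is why a couple could never enumerate their potential children, only sample them.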

It is clear that this whole technology field is rich in ethical issues! But we don’t need to go deep into future tech to find more of those. Just following current political trends to their logical conclusions introduces plenty. I’ve written often on the random walk of values, and we cannot be confident that many values we hold today will still reign in decades’ time. Where might this random walk lead? Let’s explore some more.

Even in ‘conventional’ pregnancies, although the right to choose has been firmly established in most of the developed world, a woman usually has very little information about the fetus and has to make her decision almost entirely based on her own circumstances and values. The proportion of abortions related to known fetal characteristics such as genetic conditions or abnormalities is small. Most decisions can’t yet take any account of what sort of person that fetus might become. We should expect future technology to provide far more information on fetal characteristics and likely future development. If a woman were better informed about likely outcomes, might that sometimes affect her decision, in either direction?

In some circumstances, the potential outcome may be less certain and an informed decision might require more time or more tests. To allow for that without reducing the right to choose, a possible future law could allow conditional terminations, registered before one legal time limit but performed later (before another) when more is known. This period could be used for more medical tests, to advertise the baby to potential adopters who want a child just like that one, or simply to allow more time for the mother to see how her own circumstances change. Between 2005 and 2015, the US abortion rate dropped from 1 in 6 pregnancies to 1 in 7, while in the UK, 22% of pregnancies are terminated. What would these figures be if women could determine what future person would result? Would the termination rate increase? To 30%, 50%? Abandon this one and see if we can make a better one? How many of us would exist if our parents had known then what they know now?
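For comparison, the quoted ratios convert to percentages as follows (a trivial calculation, shown only to make the US and UK figures directly comparable):

```python
# Convert the quoted abortion-rate ratios to percentages
us_2005 = 1 / 6   # US, 2005: 1 in 6 pregnancies
us_2015 = 1 / 7   # US, 2015: 1 in 7 pregnancies
uk_rate = 0.22    # UK: 22% of pregnancies

print(f"US 2005: {us_2005:.1%}")   # 16.7%
print(f"US 2015: {us_2015:.1%}")   # 14.3%
print(f"UK:      {uk_rate:.1%}")   # 22.0%
```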

Whether and how late terminations should be permitted is still fiercely debated. There is already discussion about allowing terminations right up to birth and even after birth in particular circumstances. If so, then why stop there? We all know people who make excellent arguments for retrospective abortion. Maybe future parents should be allowed to decide whether to keep a child right up until it reaches its teens, depending on how the child turns out. Why not 16, or 18, or even 25, when people truly reach adulthood? By then they’d know what kind of person they’re inflicting on the world. Childhood and teen years could simply be a trial period. And why should only the parents have a say? Given an overpopulated world with an infinite number of potential people that could be brought into existence, perhaps the state could also demand a high standard of social performance before assigning a life license. The Chinese state already uses surveillance technology to assign social scores. It is a relatively small logical step further to link that to life licenses that require periodic renewal. Go a bit further if you will, and link that thought to the blog I just wrote on future surveillance: https://timeguide.wordpress.com/2019/05/19/future-surveillance/.

Those of you who have watched Logan’s Run will be familiar with the idea of compulsory termination at a certain age. Why not instead have a flexible age that depends on social score? It could range from zero to 100. A pregnancy might only be permitted if the genetic blueprint passes a suitability test, and then, as nurture and environmental factors play their roles as a person ages, their life license could be renewed (or not) every year. A range of crimes might also result in withdrawal of a license, and subsequent termination.

Finally, what about AI? Future technology will allow us to make hybrids, symbionts if you like, with a genetically edited human-ish body and a mind that is part human, part AI, with the AI acting partly as enhancement and partly as a control system. Maybe the future state could insist on installing a state ‘guardian’ into the embryo, a supervisory AI, essentially a deeply embedded police officer, judge, jury and executioner, as a condition of the life license.

Random walks are dangerous. You can end up where you start, or somewhere very far away in any direction.

The legal battles and arguments around ‘choice’ won’t go away any time soon. They will become broader, more complex, more difficult, and more controversial.


Future Surveillance

This is an update of my last surveillance blog 6 years ago, much of which is common discussion now. I’ll briefly repeat key points to save you reading it.

They used to say

“Don’t think it

If you must think it, don’t say it

If you must say it, don’t write it

If you must write it, don’t sign it”

Sadly, this wisdom is already as obsolete as Asimov’s Laws of Robotics. The last three lines have already been automated.

I recently read of new headphones designed to recognize thoughts so they know what you want to listen to. Simple thought recognition in various forms has been around for 20 years now. It is slowly improving, but with smart networked earphones we’re already providing an easy platform into which to sneak better monitoring and better thought detection. Sold on convenience and ease of use, of course.

You already know that Google and various other large companies have very extensive records documenting many areas of your life. It’s reasonable to assume that any or all of this could be demanded by a future government. I trust Google and the rest to a point, but not a very distant one.

Your phone, TV, Alexa, or even your networked coffee machine may listen in to everything you say, sending audio records to cloud servers for analysis, and you have only naivety as a defense against those records being stored and potentially used for nefarious purposes.

Some next generation games machines will have 3D scanners and UHD cameras that can even see blood flow in your skin. If these are hacked or left switched on – and social networking video is one of the applications they aim to capture, so they’ll be on often – someone could watch you all evening, capture the most intimate body details, and film your facial expressions and gaze direction while you are looking at a known image on a particular part of the screen. Monitoring pupil dilation, smiles, and anguished expressions could reveal a lot about your emotional state, alongside a detailed record of what you were watching and doing at exactly that moment, and with whom. By monitoring blood flow and pulse via your Fitbit or smartwatch, and additionally monitoring skin conductivity, your level of excitement, stress or relaxation can easily be inferred. If given to the authorities, this sort of data might be used to identify pedophiles or murderers, by seeing which people are excited by seeing kids on TV or get pleasure from violent games, and that is likely to be one of the justifications authorities give for using it.
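The kind of inference described above can be sketched crudely. This toy function (the baselines, weights, and the very idea of a single ‘arousal index’ are invented for illustration; real affective-computing models use far richer features and training data) combines deviations in heart rate and skin conductance into one number:

```python
def toy_arousal_index(heart_rate_bpm, skin_conductance_us,
                      resting_hr=60.0, resting_sc=2.0):
    """Toy illustration of inferring excitement from two wearable signals.
    All baselines and weights here are made-up values for the sketch."""
    # Fractional rise of each signal above its resting baseline
    hr_rise = max(0.0, (heart_rate_bpm - resting_hr) / resting_hr)
    sc_rise = max(0.0, (skin_conductance_us - resting_sc) / resting_sc)
    # Equal-weight blend of the two signals
    return round(0.5 * hr_rise + 0.5 * sc_rise, 2)

print(toy_arousal_index(60, 2.0))   # 0.0 - at rest
print(toy_arousal_index(90, 3.0))   # 0.5 - elevated pulse and conductance
```

The point is not that this crude formula works, but that the raw signals needed for far better versions of it are already streaming off consumer devices.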

Millimetre wave scanning was controversial when it was introduced in airport body scanners, but we have had no choice but to accept it and its associated abuses: the only alternative is not to fly. 5G uses millimetre waves too, and it’s reasonable to expect that the same people who can already monitor your movements in your home simply by analyzing your wi-fi signals will be able to do a lot better by analyzing 5G signals.

As mm-wave systems develop, they could become much more widespread, so burglars and voyeurs might start using them to check whether there is anything worth stealing or videoing. Maybe some search company making visual street maps might ‘accidentally’ capture a detailed 3D map of the inside of your house when they come round, as well as or instead of everything they could access via your wireless LAN.

Add to this the ability to use drones to get close without being noticed. Drones can be very small, fly themselves and automatically survey an area using broad sections of the electromagnetic spectrum.

NFC bank and credit cards not only present risks of theft, but also add the ability to track what we spend, where, on what, and with whom. NFC capability in your phone makes some parts of life easier, but NFC has always been yet another doorway that may be left unlocked by security holes in operating systems or apps, and apps themselves carry many assorted risks. Many apps ask for far more permissions than they need to do their professed tasks, and their owners collect vast quantities of information for purposes known only to them and their clients. Data can be collected by a variety of apps and linked together at its destination. Not all providers are honest, and apps are still very inadequately regulated and policed.

We’re seeing increasing experimentation with facial recognition technology around the world, from China to the UK, and only a few authorities so far, such as San Francisco, have had the wisdom to ban its use. Heavy-handed UK police, who increasingly police according to their own political agenda even at the expense of policing actual UK law, have already fined people who covered their faces to avoid being abused in face recognition trials. It is reasonable to assume they would gleefully seize any future opportunity to access and cross-link all of the various data pools currently being assembled under the excuse of reducing crime, but with the real intent of policing their own social engineering preferences. Using advanced AI to mine zillions of hours of full-sensory data gathered on every one of us via all this routine IT exposure and ubiquitous video surveillance, they could deduce everyone’s attitudes to just about everything: the real truth about our attitudes to every friend, family member, TV celebrity, politician or product; our detailed sexual orientation, fetishes or perversions; our racial attitudes and political allegiances; our attitudes to almost every topic ever aired on TV or in everyday conversation; how hard we are working, how much stress we are experiencing, and many aspects of our medical state.

It doesn’t even stop with public cameras. Innumerable cameras and microphones on phones, visors, and high street private surveillance will automatically record all the same material for everyone, sometimes with benign declared intentions such as making self-driving vehicles safer, sometimes via social media tribes capturing any kind of evidence against ‘the other’. In-depth evidence will become available to back up prosecutions of crimes that today would not even be noticed. Computers that can retrospectively data-mine evidence collected over decades and link it all together will be able to identify billions of real or invented crimes.

Active skin will one day link your nervous system to your IT, allowing you to record and replay sensations. You will never be able to be sure that you are the only one that can access that data either. I could easily hide algorithms in a chip or program that only I know about, that no amount of testing or inspection could ever reveal. If I can, any decent software engineer can too. That’s the main reason I have never trusted my IT – I am quite nice but I would probably be tempted to put in some secret stuff on any IT I designed. Just because I could and could almost certainly get away with it. If someone was making electronics to link to your nervous system, they’d probably be at least tempted to put a back door in too, or be told to by the authorities.

The current panic about face recognition is justified. Other AI can lipread better than people, and recognize gestures and facial expressions better than people. It adds knowledge of everywhere you go, everyone you meet, everything you do and say, and even your emotional reaction to all of it, to all the other knowledge gathered online or by your mobile, fitness band, electronic jewelry or other accessories.

Fools utter the old line: “if you are innocent, you have nothing to fear”. Do you know anyone who is innocent? Of everything? Who has never done, or even thought, anything even a little bit wrong? Who has never wanted to do anything nasty to anyone for any reason? And that’s before you factor in corruption of the police, mistakes, being framed, dumb juries or secret courts. The real problem here is not the abuses we already see. It is what is being and will be collected and stored, forever, available to all future governments of every persuasion and to police authorities who consider themselves above the law. I’ve often said that our governments are usually incompetent but rarely malicious. Most of our leaders are nice guys; only a few are corrupt, but most are technologically inept. With an increasingly divided society, there’s a strong chance that the ‘wrong’ government or even a dictatorship could get in. Which of us can be sure we won’t be up against the wall one day?

We’ve already lost the battle to defend privacy. The only bits left are where the technology hasn’t caught up yet. In the future, not even the deepest, most hidden parts of your mind will be private. Pretty much everything about you will be available to an AI-upskilled state and its police.