Optical computing

A few nights ago I was thinking about the optical fibre memories that we were designing in the late 1980s in BT. The idea was simple: you transmit data into an optical fibre, and if the data rate is high, you can squeeze lots of data into a manageable length. The propagation delay in fibre is about 5 microseconds per km, so 1000km of fibre at a data rate of 2Gb/s would hold 10Mbits of data per wavelength, and if you could multiplex 2 million wavelengths, you’d store 20Tbits of data. You could maintain the data by using a repeater to re-transmit the data arriving at one end back into the other, or modify it at that point simply by changing what you re-transmit. That was all theory then, because the latest ‘hero’ experiments were only just starting to demonstrate the feasibility of such long lengths, such high-density WDM and such data rates.
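The capacity arithmetic is easy to check: the bits ‘stored’ in a fibre delay line at any instant are just the propagation delay multiplied by the data rate. A quick sketch, using the 5µs/km delay and the rates quoted above:

```python
# Fibre delay-line memory: data "stored" in flight equals the end-to-end
# propagation delay multiplied by the data rate, per wavelength.

def fibre_memory_bits(length_km, rate_bps, wavelengths=1, delay_us_per_km=5):
    """Bits in flight along the fibre at any instant."""
    delay_s = length_km * delay_us_per_km * 1e-6
    return delay_s * rate_bps * wavelengths

per_wavelength = fibre_memory_bits(1000, 2e9)            # 10 Mbit per wavelength
total = fibre_memory_bits(1000, 2e9, wavelengths=2e6)    # 20 Tbit across all wavelengths
print(per_wavelength / 1e6, "Mbit per wavelength")
print(total / 1e12, "Tbit total")
```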

Nowadays, that’s ancient history of course, but we also have many new types of fibre, such as hollow fibre with various shaped pores and various dopings to allow a range of effects. And that’s where using it for computing comes in.

If optical fibre is designed for this purpose, with an optimal variable refractive index profile designed to facilitate and maximise non-linear effects, then the photons in one data stream on one wavelength could have enough effect on the photons in another stream to be used for computational interaction. Computers don’t have to be digital of course, so the effects don’t have to be huge. Analog computing has many uses, and analog interactions could certainly work, digital ones might, and hybrid digital/analog computing may also be feasible. Then it gets fun!

Some of the data streams could be programs. Around that time, I was designing protocols with smart packets that contained executable code, as well as other packets that could hold analog or digital data or any mix. We later called the smart packets ANTs – autonomous network telephers, a contrived term if ever there was one, but we badly wanted to call them ANTs. They would scurry around the network doing a wide range of jobs, using a range of biomimetic and basic physics techniques to work like ant colonies and achieve complex tasks using simple means.

If some of these smart packets or ANTs are running along a fibre, changing its properties as they go to interact with other data travelling alongside, then ANTs can interact with one another and with any stored data. ANTs could also move forwards or backwards along the fibre by using ‘sidings’ or physical shortcuts, since they can route themselves or each other. Data produced or changed by the interactions could be digital or analog and still work fine, carried on the smart packet structure.

(If you’re interested, my protocol was called UNICORN – Universal Carrier for an Optical Residential Network – and used the same architectural principles as my previous Addressed Time Slice invention: compressing analog data by a few percent to fit into a packet with a digital address and header, or allowing any digital data rate or structure in a payload while keeping the same header specs for easy routing. That system was invented in 1988 for the late 1990s, when the basic domestic broadband rate should have been 625Mbit/s or more, and we expected to be at 2Gbit/s or even 20Gbit/s soon after that in the early 2000s. The benefit was that we wouldn’t have to change the network switching, because the header overheads would still be only a few percent of total time. None of that happened, because government interference in telecoms industry regulation strongly disincentivised its development, and even today 625Mbit/s ‘basic rate’ access is still a dream, let alone 20Gbit/s.)

Such a system would be feasible. Shortcuts and sidings are easy to arrange. The protocols would work fine. Non-linear effects are already well known and diverse. If it were only used for digital computing, it would have little advantage over conventional computers, and with data stored on long fibre lengths, external interactions would be limited, with long latency. However, it does present a range of potentials for use with external sensors directly interacting with data streams and ANTs to accomplish some tasks associated with modern AI. It ought to be possible to use these techniques to build the adaptive analog neural networks that we’ve known are the best hope of achieving strong AI since Hans Moravec’s insight, coincidentally also around that time. The non-linear effects even enable ideal mechanisms for implementing emotions, biasing the computation in particular directions via the intensity of certain wavelengths of light, in much the same way as chemical hormones and neurotransmitters interact with our own neurons. Implementing up to 2 million different emotions at once is feasible.

So there’s a whole mineful of architectures, tools and techniques waiting to be explored and mined by smart young minds in the IT industry, using custom non-linear optical fibres for optical AI.

AI could use killer drone swarms to attack people while taking out networks

In 1987 I discovered a whole class of security attacks that could knock out networks, which I called correlated traffic attacks: creating particular patterns of data packet arrivals from particular sources at particular times or intervals. We simulated two examples to verify the problem. One example was protocol resonance. I demonstrated that it was possible to push a system into a gross overload state with a single call, by spacing the packets precise intervals apart. Their arrival caused a strong resonance in the bandwidth allocation algorithms, and the result was that network capacity was instantaneously reduced by around 70%. Another example was information waves, whereby a single piece of information appearing at a particular point could interact with particular apps on mobile devices (the assumption was financially relevant data that would trigger AI on the devices to start requesting voluminous data), triggering a highly correlated wave of responses, using up bandwidth and throwing the network into overload, very likely crashing it via initiation of rarely used software. When calls couldn’t get through, the devices would wait until the network recovered, then they would all simultaneously detect recovery and simultaneously try again, killing the net again, and again, until people were asked to turn their devices off and on again, thereby bringing randomness back into the system. Both of these examples could knock out certain kinds of networks, but they are just two of an infinite set of possibilities in the correlated traffic attack class.
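The information-wave dynamic – everyone failing, everyone retrying in lockstep, the network dying again and again until randomness is restored – can be reproduced in a toy model. This is purely illustrative; the device count, capacity and backoff window are invented numbers, not from any real network:

```python
import random

# Toy model of an 'information wave': devices that fail all retry at the
# same instant, so demand stays synchronised and the network keeps dying,
# until random backoff de-correlates the load.

def simulate(devices=1000, capacity=300, steps=10, random_backoff=False):
    """Return the number of time steps in which the network was overloaded."""
    next_try = [0] * devices            # time step at which each device transmits
    overloaded_steps = 0
    for t in range(steps):
        senders = [d for d in range(devices) if next_try[d] == t]
        if len(senders) > capacity:     # overload: every sender fails
            overloaded_steps += 1
            for d in senders:
                # synchronised retry next step, or randomised backoff
                next_try[d] = t + 1 if not random_backoff else t + 1 + random.randint(0, 9)
    return overloaded_steps

random.seed(1)
print(simulate(random_backoff=False))   # overloaded on every step
print(simulate(random_backoff=True))    # recovers once the load is de-correlated
```

With synchronised retries the network is overloaded on all ten steps; adding a random backoff spreads the retries thinly enough that only the initial wave overloads it, which is exactly the “turn it off and on again” effect described above.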

Adversarial AI pits one AI against another, trying things at random or making small modifications until a particular situation is achieved, such as the second AI accepting an image as valid. It is possible, though I don’t believe it has been achieved yet, to use the technique to simulate a wide range of correlated traffic situations, seeing which ones achieve network resonance or overload, or which trigger particular desired responses from network management or control systems, via interactions with the network and its protocols, commonly resident apps on mobile devices, or computer operating systems.

Activists and researchers are already well aware that adversarial AI can be used to find vulnerabilities in face recognition systems and thereby prevent recognition, or to deceive autonomous car AI into seeing fantasy objects or not seeing real ones. As Noel Sharkey, the robotics expert, has been tweeting today, it will be possible to use adversarial AI to corrupt the recognition systems used by killer drones, potentially causing them to attack their controllers or innocents instead of their intended targets. I have to agree with him. But linking that corruption to the whole extended field of correlated traffic attacks greatly extends the range of mechanisms that can be used. It will be possible to exploit highly obscured interactions between network physical architecture, protocols and operating systems, network management, app interactions, and the entire sensor/IoT ecosystem, as well as the software and AI systems using it. It is impossible to check all possible interactions, so no absolute defence is possible, but adversarial AI with enough compute power could randomly explore across these multiple dimensions, stumble across regions of vulnerability and drill down until grand vulnerabilities are found.

This could further be linked to apps used as highly invisible Trojans: attractive to users, with no apparent side effects, quietly gathering data to help identify potential targets, and simply waiting for a particular situation or command before signalling to the attacking system.

A future activist or terrorist group or rogue state could use such tools to make a multidimensional attack. It could initiate an attack, using its own apps to identify and locate targets, control large swarms of killer drones or robots to attack them, simultaneously executing a cyberattack that knocks out selected parts of the network, crashing or killing computers and infrastructure. The vast bulk of this could be developed, tested and refined offline, using simulation and adversarial AI approaches to discover vulnerabilities and optimise exploits.

There is already debate about killer drones, mainly whether we should permit them and in what circumstances, but activists and rogue states won’t care about rules. Millions of engineers are technically able to build such things and some are not on your side. It is reasonable to expect that freely available AI tools will be used in such ways, using their intelligence to design, refine, initiate and control attacks using killer drones, robots and self-driving cars to harm us, while corrupting systems and infrastructure that protect us.

Worrying, especially since the capability is arriving just as everyone is starting to consider civil war.


Pythagoras Sling update

To celebrate the 50th anniversary of the Moon landing mission, I updated my Pythagoras Sling a bit. It now uses floating parachutes so no rockets or balloons are needed at all and the whole thing is now extremely simple.

Introducing the Pythagoras Sling –

A novel means of achieving space flight

Dr I D Pearson & Prof Nick Colosimo

 

Executive Summary

A novel reusable means of accelerating a projectile to sub-orbital or orbital flight is proposed, which we have called the Pythagoras Sling. It was invented by Dr Pearson and developed with the valuable assistance of Professor Colosimo. The principle is to use large parachutes as effective temporary anchors for hoops through which tethers attached to a projectile may be pulled. This system is not feasible for useful sizes of projectile with current materials, but will quickly become feasible, with a growing range of roles, as materials specifications improve with graphene and carbon composite development. Eventually it will be capable of launching satellites into low Earth orbit, and of greatly reducing the rocket size and fuel needed for human space missions.

Specifications for acceleration rates, parachute size and initial parachute altitudes ensure that launch timescales can be short enough that parachute movement is acceptable, while specifications of the materials proposed ensure that the system is lightweight enough to be deployed effectively in the size and configuration required.

Major advantages include (eventually) greatly reduced need for rocket fuel for orbital flight of human cargo or potential total avoidance of fuel for orbital flight of payloads that can tolerate higher g-forces; consequently reduced stratospheric emissions of water vapour that otherwise present an AGW issue; simplicity resulting in greatly reduced costs for launch; and avoidance of risks to expensive payloads until active parts of the system are in place. Other risks such as fuel explosions are removed completely.

The journey comprises two parts: the first, towards the first parachute, builds high vertical speed, while the second converts most of this to horizontal speed while continuing the acceleration. The projectile therefore acquires the very high horizontal speed required for sub-orbital and potentially for orbital missions.

The technique is intended mainly for the mid-term and long-term future, since it only comes into its own once it becomes possible to economically make graphene components such as strings, strong rings and tapes, but short term use is feasible with lower but still useful specifications based on interim materials. While long term launch of people-carrying rockets is feasible, shorter term uses would be limited to smaller payloads or those capable of withstanding higher g-forces. That makes it immediately useful for some satellite or military launches, with others quickly becoming feasible as materials improve.

Either of two mechanisms may be used for drawing the cable – a drum-based reel or a novel electromagnetic cable drive system. The drum variant may be speed-limited by the strength of drum materials, given the very high centrifugal forces involved. The electromagnetic variant uses conventional propulsion techniques, essentially a linear motor, but in a novel arrangement, so is partly unproven.

There are also alternative methods available for parachute deployment and management. One is to make the parachutes from lighter-than-air materials, such as graphene foam, which is capable of making solid forms less dense than helium. The chutes would float up and be pulled into their launch positions. A second option is to use helium balloons to carry them up, again pulling them into launch positions. A third is to use a small rocket or even two to deploy them. Far future variants will probably opt for lighter-than-air parachutes, since they can float up by themselves, carry additional tethers and equipment, and can remain at high altitude to allow easy reuse, floating back up after launch.

There are many potential uses and variants of the system, all using the same principle of temporary high-atmosphere anchors, aerodynamically restricted to useful positions during launch. Not all are discussed here. Although any hypersonic launch system has potential military uses, civil uses to reduce or eliminate fuel requirements for space launch for human or non-human payloads are by far the most exciting potential as the Sling will greatly reduce the currently prohibitive costs of getting people and material into orbit. Without knowing future prices for graphene, it is impossible to precisely estimate costs, but engineering intuition alone suggests that such a simple and re-usable system with such little material requirement ought to be feasible at two or three orders of magnitude less than current prices, and if so, could greatly accelerate mid-century space industry development.

Formal articles in technical journals may follow in due course that discuss some aspects of the sling and catapult systems, but this article serves as a simple publication and disclosure of the overall system concepts into the public domain. Largely reliant on futuristic materials, the systems cannot reasonably be commercialised within patent timeframes, so hopefully the ideas that are freely given here can be developed further by others for the benefit of all.

This is not intended to be a rigorous analysis or technical specification, but hopefully conveys enough information to stimulate other engineers and companies to start their own developments based on some of the ideas disclosed.

Introductory Background

A large number of non-fuel space launch systems have been proposed, from Jules Verne’s 1865 Moon gun through to modern railguns, space hooks and space elevators. Rail guns achieve moderately high speeds in the atmosphere, where drag and heating are significant limitations, but their main drawback is that they require very high accelerations while still achieving too low a muzzle velocity for even sub-orbital trips. Space-based tether systems such as space hooks or space elevators may one day be feasible, but not soon. Current space launches all require rockets, which are still fairly dangerous and highly expensive. They also dump large quantities of water vapour into the high atmosphere where, being fairly persistent, it contributes significantly to the greenhouse effect, especially as it drifts towards the poles. Moving towards using less or no fuel would be a useful step in many regards.

The Pythagoras Sling

In summary, having considered many potential space launch mechanisms based on high altitude platforms or parachutes, by far the best system is the Pythagoras Sling. This uses two high-altitude parachutes attached to rings, offering enough drag to act effectively as temporary slow-moving anchors while a tether is pulled through them quickly to accelerate a projectile upwards and then into a curve towards final high horizontal speed.

 

We called this approach the Pythagoras Sling due to its simplicity and triangular geometry. It comprises some ground equipment, two large parachutes and a length of string. The parachutes would ideally be made using lighter-than-air materials such as graphene foam, a foam of tiny graphene spheres containing vacuum, that is less dense than helium. They could therefore float up to the required altitude, and could be manoeuvred into place immediately prior to launch. During the launch process they would move so it would take a few hours to float back to their launch positions. They could remain at high altitude for long periods, perhaps permanently. In that case, as well as carrying the tether for the launch, additional tethers would be needed to anchor and manoeuvre the parachutes and to feed launch tether through in preparation for a new launch. It is easy to design the system so that these additional maintenance tethers are kept well out of the way of the launch path.

The parachutes could be as large as desired if such lightweight materials are used, but if alternative mechanisms such as rockets or balloons are used to carry them into place, they would probably be around 50m diameter, similar to the Mars landing ones.

Each parachute would carry a ring through which the launch tether is threaded, and the rings would need to be very strong, low friction, heat-resistant and good at dispersing heat. Graphene seems an ideal choice but better materials may emerge in coming years.

The first parachute would float up to a point 60-80km above the launch site and would act as the ‘sky anchor’ for the first phase of launch, where the payload gathers vertical speed. The second parachute would be floated up and then dragged (using the maintenance tether) as far away and as high as feasible, but typically to the same height and 150km away horizontally, to act as the fulcrum for the arc part of the flight, where the speed is both increased and converted to the horizontal speed needed for orbit.

Simulation will be required to determine optimal specifications for both human and non-human payloads.

Another version exists where the second parachute is deployed from a base with winding equipment 150km distant from the initial rocket launch. Although requiring two bases, this variant holds merit. However, using a single ground base for both chute deployments offers many advantages at the cost of using slightly longer and heavier tether. It also avoids the issue that before launch, the tether would be on the ground or sea surface over a long distance unless additional system details are added to support it prior to launch such as smaller balloons. For a permanent launch site, where the parachutes remain at high altitude along with the tethers, this is no longer an issue so the choice may be made on a variety of other factors. The launch principle remains exactly the same.

Launch Process

On launch, with the parachutes, rings and tethers all in place, the tether is pulled by either a jet engine powered drum or an electromagnetic drive, and the projectile accelerates upwards. When it approaches the first parachute, the tether is disengaged from that ring, to avoid collision and to allow the second parachute to act as a fulcrum. The projectile is then forced to follow an arc, while the tether is still pulled, so that acceleration continues during this period. When it reaches the final release position, the tether is disengaged, and the projectile is then travelling at orbital or suborbital velocity, at around 200km altitude. The following diagram summarises the process.

Two-base variant

This variant with two bases and using rocket deployment of the parachutes still qualifies as a Pythagoras Sling because they are essentially the same idea with just minor configurational differences. Each layout has different merits and simulation will undoubtedly show significant differences for different kinds of missions that will make the choice obvious.

Calculations based on graphene materials and their theoretical specifications suggest that this could be quite feasible as a means to achieve sub-orbital launches for humans and up to orbital launches for smaller satellites that can cope with 15g acceleration. Other payloads would still need rockets to achieve orbit, but greatly reduced in size and cost.
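The 15g figure can be sanity-checked with constant-acceleration kinematics. This is only a rough sketch: the ~7.8km/s low-Earth-orbit speed is a standard value, and everything else follows from it:

```python
G = 9.81  # m/s^2

def launch_profile(target_speed_ms, accel_g):
    """Time and path length needed to reach a target speed at constant acceleration."""
    a = accel_g * G
    t = target_speed_ms / a              # duration of the pull
    s = target_speed_ms ** 2 / (2 * a)   # path length swept by the projectile
    return t, s

t, s = launch_profile(7800, 15)          # orbital speed at 15g
print(f"{t:.0f} s over {s/1000:.0f} km of path")
```

The result, a launch lasting under a minute over roughly 200km of path, is consistent with the ~200km release altitude described in the launch process, and with the earlier claim that parachute movement during launch is acceptable because launch timescales are short.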

Exchanges of calculations between the authors, based on the best materials available today suggest that this idea already holds merit for use for microsatellites, even if it falls well below graphene system capabilities. However, graphene technology is developing quickly, and other novel materials are also being created with impressive physical qualities, so it might not be many years before the Sling is capable of launching a wide range of payload sizes and weights.

In closing

The Pythagoras Sling arose after several engineering explorations of high-altitude platform launch systems. As is often the case in engineering, the best solution is also by far the simplest. It is the first space launch system that treats parachutes as temporary aerial anchors, and it uses just a string pulled through two rings held by those temporary anchors, attached to the payload. That string could be pulled by a turbine or an electromagnetic linear motor drive, so the system could be entirely electric. It would be extremely safe, with no risk of fuel explosions, and extremely cheap compared to current systems. It would also avoid dumping large quantities of greenhouse gases into the high atmosphere. The system cannot be built yet, and its full potential won’t be realised until graphene or similarly high-specification strings or tapes are economically available. However, it should be noted that other accepted future systems such as the Space Elevator will also need such materials, but in vastly larger quantities. The Pythagoras Sling will certainly be achievable many years before a space elevator, and once it is, it could well become the safest and cheapest way to put a wide range of payloads into orbit.

Some trees just don’t get barked up

Now and then, someone asks me for an old document and as I search for it, I stumble across others I’d forgotten about. I’ve been rather frustrated that AI progress hasn’t kept up with its development rate in the 90s, so this was fun to rediscover, highlighting some future computing directions that offered serious but uncertain potential exactly 20 years ago – well, 20 years and 3 weeks ago. Here is the text; the Schrodinger’s Computer was only ever intended to be silly (it has since been renamed the Yonck Processor):

Herrings, a large subset of which are probably red

Computers in the future will use a wide range of techniques, not just conventional microprocessors. Problems should be decomposed and the various components streamed to the appropriate processing engines. One of the important requirements is therefore some means of identifying automatically which parts of a problem could best be tackled by which techniques, though sometimes it might be best to use several in parallel with some interaction between them.

Analogs

We have a wider variety of components available for analog computing today than we had when it effectively died out in the 80s. With much higher quality analog and mixed components, plus micro-sensors, MEMS, simple neural network components, and some imminent molecular capability, how can we rekindle the successes of the analog domain? Nature handles the infinite body problem with ease! Things just happen according to the laws of physics. How can we harness them too? Can we build environments with synthetic physics to achieve more effects? The whole field of non-algorithmic computation seems ripe for exploitation.

Neural networks

  • Could we make neural microprocessor suspensions, using spherical chips suspended in gel in a reflective capsule, with optical broadcasting? Couple this with wires grown across the electric field. This would give us both electrical and optical interconnection, which could be ideal for neural networks with high connectivity. We could link this to gene chip technology to add chemical detection and synthesis on the chips too, so that we could have close high-speed replicas of organic neural networks.
  • If we can have quantum entanglement between particles, might this affect the way in which neurons in the brain work? Do we have neural entanglement, and has it anything to do with how our brains work? Could we create neural entanglement, or even virtual entanglement, and would it have any use?
  • Could we make molecular neurons (or similar) using ordinary chemistry? And then form them into networks. Might need nanomachines and bottom-up assembly.
  • Could we use neurons as first-stage filters to narrow down the field and make problems tractable for other techniques?
  • Optical neurons
  • Magnetic neurons

Electromechanical, MEMS etc

  • Micromirror arrays as part of optical computers, perhaps either as data entry, or as part of the algorithm
  • Carbon fullerene balls and tubes as MEM components
  • External fullerene ‘décor’ as a form of information, cf antibodies in immune system
  • Sensor suspensions and gels as analog computers for direct simulation

Interconnects

  • Carbon fullerene tubes as on chip wires
  • Could they act as electron pipes for ultra-high-speed interconnect?
  • Optical or radio beacons on chip

Software

  • Transforms – create a transform of every logic component, spreading the functionality across a wide domain, and construct programs using them instead. Small perturbation is no longer fatal but just reduces accuracy
  • Filters – nature works often using simple physical effects where humans design complex software. We need to look at hard problems to see how we might make simple filters to narrow the field before computing final details and stages conventionally.
  • Interference – is there some form of representation that allows us to compute operations by means of allowing the input data to interact directly, i.e. interference, instead of using tedious linear computation. Obviously only suited to a subset of problems.
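The ‘transforms’ idea above can be illustrated with a plain discrete Fourier transform (a toy sketch, not the scheme the note envisaged): store the transform of the data rather than the data itself, and corrupting one stored coefficient perturbs every recovered value slightly instead of destroying one value completely.

```python
import cmath

def dft(x):
    """Naive discrete Fourier transform: spreads each input across all coefficients."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(X):
    """Inverse DFT, recovering the original values."""
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N) for k in range(N)) / N
            for n in range(N)]

data = [1.0, 5.0, 3.0, 2.0, 4.0, 6.0, 0.0, 2.0]
stored = dft(data)                 # store the transform, not the raw values
stored[3] += 0.5                   # corrupt one stored coefficient
recovered = [v.real for v in idft(stored)]

# The damage is spread thinly: every value is off by at most 0.5/len(data)
max_err = max(abs(a - b) for a, b in zip(data, recovered))
print(max_err)  # ≈ 0.0625
```

Small perturbation is no longer fatal but just reduces accuracy everywhere by a tiny amount, which is exactly the graceful-degradation property the bullet describes.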

And finally, the frivolous

  • Schrodinger’s computer – the design of the computer, and of its software, if any, is not determined until the box is opened. The one constant is that it destroys itself if it doesn’t find the solution. All possible computers and all possible programs exist, and if there is a solution, the computer will pop out alive and well with the answer. Set it the problem of answering all possible questions too, working out which ones have the most valuable answers and using up all the available storage to write the best answers.

Cable-based space launch system

A rail gun is a simple electromagnetic motor that very rapidly accelerates a metal slug by using it as part of an electrical circuit. The current flowing through the rails creates a strong magnetic field, and the current crossing the slug interacts with that field to produce a force that propels it forwards.

EM launch system

An ‘inverse rail gun’ uses the same principle, but rather than a short slug, the force acts on a small section of a long cable, which continues to pass through the system. As that section passes through, another takes its place, passing on the force and acceleration to the remainder of the cable. That also means that each small section only has a short and tolerable time of extreme heating resulting from high current.

This can be used either to accelerate a cable, optionally with a payload on the end, or via Newtonian reaction, to drag a motor along a cable, the motor acting as a sled, accelerating all along the cable. If the cable is very long, high speeds could result in the vacuum of space. Since the motor is little more than a pair of conductive plates, it can easily be built into a simple spacecraft.

A suitable spacecraft could thus use a long length of this cable to accelerate to high speed for a long-distance trip. Graphene being an excellent conductor as well as super-strong, it should be able to carry the high electric currents needed in the motor, and solar panels and capacitors along the way could provide them.

With such a simple structure, made from advanced materials, and with only linear electromagnetic forces involved, extreme speeds could be achieved.

A system could be made for trips to Mars for example. 10,000 tons of sufficiently strong graphene cable to accelerate a 2 ton craft at 5g could stretch 6.7M km through space, and at 5g acceleration (just about tolerable for trained astronauts), would get them to 800km/s at launch, in 4.6 hours. That’s fast enough to get to Mars in 5-12 days, depending where it is, plus a day each end to accelerate and decelerate, 7-14 days total.
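The headline numbers can be checked with constant-acceleration kinematics, using the cable length, craft mass and 5g limit quoted above; the result reproduces the ~800km/s, 4.6-hour, 640TJ and 80GW figures to within rounding:

```python
import math

G = 9.81
a = 5 * G          # 5g, about the limit for trained astronauts
L = 6.7e9          # 6.7M km of cable, in metres
m = 2000.0         # 2 ton craft

v = math.sqrt(2 * a * L)   # speed at the end of the cable
t = v / a                  # time spent accelerating
ke = 0.5 * m * v ** 2      # kinetic energy at release
p_peak = m * a * v         # power drawn at the final moment (force x velocity)

print(f"{v/1000:.0f} km/s after {t/3600:.1f} h")
print(f"KE {ke/1e12:.0f} TJ, peak power {p_peak/1e9:.0f} GW")
```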

10,000 tons is a lot of graphene by today’s standards, but we routinely use 10,000 tons of steel in shipbuilding, and future technology may well be capable of producing bulk carbon materials at acceptable cost (and there would be a healthy budget for a reusable Mars launch system). It’s less than a space elevator.

6.7M km is a huge distance, but space is pretty empty, and even with gravitation forces distorting the cable, the launch phase can be designed to straighten it. A shorter length of cable on the opposite side of an anchor (attached to a Moon tower, or a large mass at a Lagrange point) would be used to accelerate the spacecraft towards the launch end of the cable, at relatively low speed, say 100km/s, a 20 hour journey, and the deceleration phase of that trip applies significant force to the cable, helping to straighten and tension it for the launch immediately following. The craft would then accelerate along the cable, travel to Mars at high speed, and there would need to be an intercept system there to slow it. That could be a mirror of the launch system, or use alternative intercept equipment such as a folded graphene catcher (another blog).

Power requirements would peak at the very last moments, at a very high 80GW. Then again, this is not something we could build next year, so it should be considered in the context of a mature and still fast-developing space industry. And 800km/s is pretty fast, 0.27% of light speed, which would make the system perfect for asteroid defence too, so it has other ways to help justify its cost. Slower systems would have lower power requirements, or longer cable could be used.

Some tricky maths is involved at every stage of the logistics, but no more than any other complex space trip. Overall, this would be a system that would be very long but relatively low in mass and well within scales of other human engineering.

So, I think it would be hard, but not too hard, and a system that could get people to Mars in literally a week or two would presumably be much favoured over one that takes several months, albeit with some serious physical stress at each end. It needs work, of course, and I’ve only hinted superficially at solutions to some of the issues, but I think it offers potential.

On the down-side, the spaceship would have kinetic energy of 640TJ, comparable to a small nuke, and that is mainly limited by the 5g acceleration astronauts can cope with. Scaling up acceleration to the thousands of gs of military levels could make weapons comparable to our largest nukes.
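The figures above hang together, and a few lines of Python make the sanity check explicit. The 2-tonne craft mass is my inference from the 640TJ figure, not something stated in the text:

```python
g = 9.81          # m/s^2
a = 5 * g         # 5g, the acceleration limit quoted for astronauts
v = 800e3         # 800 km/s target launch speed

# Cable length needed under constant acceleration: v^2 = 2*a*s
s = v ** 2 / (2 * a)      # ~6.5e9 m, i.e. ~6.5M km of cable

# Spacecraft mass implied by 640 TJ of kinetic energy: E = m*v^2/2
E = 640e12
m = 2 * E / v ** 2        # 2000 kg, a 2-tonne craft

# Peak power at the very end of the run: P = F*v = m*a*v
P = m * a * v             # ~78 GW, close to the quoted 80 GW

print(f"cable ≈ {s / 1e9:.1f}M km, mass = {m:.0f} kg, peak ≈ {P / 1e9:.0f} GW")
```

The computed cable length comes out a little under the 6.7M km quoted, the difference presumably being rounding or design margin.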

After Brexit: EU RIP

My wife is Swiss so I tend to notice Swiss news. The EU and Switzerland have been fighting lately, with this update today, the Swiss banning EU stock exchanges in retaliation for the EU locking Switzerland out of its exchanges: https://www.telegraph.co.uk/business/2019/06/24/swiss-ban-eu-stock-exchanges-row-brussels-escalates/

The Swiss are a small nation compared to the UK, France or Germany, but they seem to do a hell of a lot with few people: banks, CERN, hosting the Global Economic Forum and acting as a neutral base for very many international negotiations, as well as being famous for chocolate, coffee, coffee machines, cheese, fondues, steel, numerous high tech industries, as well as their winter sports prowess, scenery… And now they’re falling out with the EU, for the severalth time. So when we leave the EU and are making strategic alliances with other nations of compatible cultural values (strong work ethic, freedom, tolerance of others, democracy) with whom we can do great things, Switzerland ought to be pretty high on our natural allies list. Norway also has a not-quite-perfect arrangement with the EU, so they too would make a good nation to invite to a new economic alliance. So, the UK, Norway and Switzerland potentially forming a new Common Market, you know, just like that thing that formed ages ago that everyone wanted to be in, before the idiots-in-residence decided to force us all into a United States of Europe and eradicate democracy.

Holland, Denmark, Sweden, Ireland, probably Finland but I don’t know Finland well (Belgium, who cares?) would also be very tempted to say goodbye to the EU and join us. That would leave Germany to pay for everyone else, and various surveys have suggested most Germans would be happy to leave the EU even before that, which is why they don’t get asked. The French are the same, their leaders boasting about how clever they are not to offer a referendum because they’d get the wrong answer, being even more exity than the Brits. But the pressures would increase too far if these other countries were leaving and joining a better club. So given a few years of the EU heading downhill and the grass on the other side getting greener and greener, the EU might not be able to keep any of its Northern countries.

The new Eastern countries have mixed approaches to life. Some have a very strong work ethic, encouraging hard work and risk-taking to get a better life, and they might well form their own block, or join the new one. The others are more similar to Spain, Portugal, Italy and Greece, and would likely join with them and possibly Turkey too, to make a less prosperous Southern Union. In fact, France might find it hard to decide which of the two to join, the Northern or Southern Unions.

Every time I see another news headline about internal EU problems, relative economic decline, shutting of borders and a more aggressive attitude by un-elected bureaucrats toward forcing a United States of Europe, this end game looks more and more likely. It’s what I predicted before the referendum, and I have even more reason to think that way now.

The EU will die, maybe over 10, 15, 20 years tops. By 2050 we will have some sort of Northern Union and Southern Union, perhaps an Eastern Union too, or they might just divide between the other two. Brexit is just the first domino in the line.

Last one out, turn off the lights.

I won’t publish comments on this article. Write your own blog if you want.

Population Growth is a Good Thing

Many people are worried about world human population, that we are overpopulating the planet and will reap environmental catastrophe. Some suggest draconian measures to limit or even reduce it. I’m not panicking about population at all. I’m not even particularly concerned. I don’t think it is necessarily a bad thing to have a high population. And I think it will be entirely sustainable to have a much higher population.

Nobody sane thinks the Earth’s human population will carry on increasing exponentially forever. Obviously it will level off and it is already starting to do so. I would personally put the maximum carrying capacity of the Earth at around 100 billion people, but population will almost certainly level off between 9 and 10 billion, let’s say 9.5Bn. Further in the future, other planets will one day house some more people, but they will have their own economics.

We aren’t running out of physical resources, just moving them around. Apart from a few spacecraft that have moved some stuff off planet, some excess radioactive decay induced in power stations and weapons, and helium and hydrogen escaping from the atmosphere, all of which is offset by meteorites and dust landing from space, all we have done is convert stuff to other forms. Almost all materials are more plentiful now than they were 40 years ago when the loudest of doom-mongers warned of the world running out imminently. They were simply wrong.

If we do start to run short, we can mine key elements from rubbish tips and use energy to convert back to any form we need, we can engineer substitutes or we can gather them from space. Another way of looking at this issue is that we live on top of 6000km of resources and only have homes a few metres deep. When we fill them we have to dispose of one thing to make room for a new one, and recycling technology is getting better all the time. Meanwhile, material technology development means we need less material to make something, and can do so with a wider range of input elements.

We are slowly depleting some organic resources, such as fossil fuels, but there are several hundred years supply left, and we will not need any more than a tiny fraction of that before we move to other energy sources. We’re also depleting some fish stocks around the world, so fishing needs some work in designing and implementing better practices, but that is not unachievable by any means and some progress is already happening. Forestry is being depleted in some areas and expanding in others. Some areas of forest are being wiped out because environmentalists and other doomsayers have forced policies through that encourage people to burn them down to make the land available for biofuel plantations and carbon offset schemes.

We certainly are not short of space. I live in Southern England, which sometimes feels full when I get stuck in traffic jams or queues for public services, but these are a matter of design, not fundamental limits. Physically, I don’t feel it is terribly overpopulated here yet, even with the second highest population density on Earth, at 470 people per square kilometre. India only has 345, even with its massive population. China has even less at only 140, while Indonesia has 117, Brazil just 22, and Russia a mere 7.4 people per square kilometre. Yet these are the world’s biggest populations today. So there is room for expansion perhaps. If all the inhabitable land in the world were to be occupied at today’s average English density, the world could actually hold 75-80 billion people. There would still be loads of open countryside, still only 1 or 2% covered in concrete and tarmac.

But self-driving vehicles can increase road capacity by a factor of 5, regional rail capacity by a factor of 200. Replacement of most public sector workers by machines, or better still, good system design, would eradicate most queues and improve most services. England isn’t even full yet. So that 75-80Bn could become 100Bn before it feels crowded.
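The headline figure is easy to check with a line or two of Python. I’m assuming Earth’s total land area of roughly 149M km² here, since the text doesn’t say what fraction counts as inhabitable:

```python
english_density = 470     # people per km^2, as quoted in the text
land_area_km2 = 149e6     # Earth's total land area, roughly 149M km^2 (my assumption)

# People the land could hold at English density
population = english_density * land_area_km2
print(f"{population / 1e9:.0f} billion")
```

That lands at about 70 billion, just under the 75-80Bn quoted; the gap comes down to exactly how much land is treated as inhabitable.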

So let’s stop first of all from imagining that we are running out of space any time soon. We just aren’t!

Energy isn’t a problem in the long term either. Shale gas is already reducing costs in the USA at the same time as reducing carbon dioxide emissions. In Europe, doom-mongers and environmentalists have been more successful in influencing policy, so CO2 emissions are increasing while energy costs create fuel poverty and threaten many key areas of the economy. Nuclear energy currently depends on uranium but thorium based power is under development and is very likely to succeed in due course, adding several hundred years of supply. Solar, fusion, geothermal and shale gas will add to this to provide abundant power for even a much greater population, within a few decades, well ahead of the population curve. The only energy shortages we will see will be doomsayer-induced.

Future generations will face debts handed on to them without their consent to pay for this doom-induced folly, but they will also inherit a physical and cultural infrastructure with built-in positive feedbacks that ensure rapid technological development.

Among its many benefits, future technology will greatly reduce the amount of material needed to accomplish a task. It will also expand the global economy to provide enough wealth to buy a decent standard of living for everyone. It will also clean up the environment while producing far more food from less land area, allowing some land to be returned to nature. Food production per hectare has doubled in the last 30 years. The technology promises further gains into the foreseeable future.

The world of the future will be a greener and more pleasant land, with nature in a better state than today, with a larger world population that is richer and better fed, almost certainly no more than 10 billion. Providing that is, that we can stop doom-mongers forcing their policies through – the only thing that would really wreck the environment. A doom-monger-free human population is not a plague but a benefit to the Earth and nature. The doom-mongers and their policies are the greatest proven threat. Environmentalists should focus on making sure we are inspired by nature and care for it, and then get out of the way and let technologists get on with making sure it can flourish in the future.

Let’s compare the outcomes of following the advice of the doom-mongers with the outcome of following a sensible economic development path using high technology.

If everyone wants to live to western standards, the demands on the environment will grow as the poor become richer and able to afford more. If we try to carry on with existing technology, or worse, with yesterday’s, we will not find that easy. Those who consider technology and economic growth to be enemies of the environment, and who therefore would lock us into today’s or yesterday’s technology, would condemn billions of people to poverty and misery and force those extra people to destroy the environment to try to survive. The result would be miserable future for humanity and a wrecked environment. Ironically, these people have the audacity to call themselves environmentalists, but they are actually enemies of both the environment and of humanity.

If we ignore such green lunacy – and we should – and allow progress to continue, we will see steady global economic growth that will result in a far higher average income per capita in 2050 with 9.5Bn people than we have today with only 7.7Bn. The technology meanwhile will develop so much that the same standard of living can be achieved with far less environmental impact. For example, bridges hundreds of years ago used far more material than today’s, because they were built with primitive technology and a poor understanding of materials. Technology is better now, materials are stronger and more consistent, we know their properties accurately as well as all the forces acting on the bridge, so we need less material to build a bridge strong enough for the purpose, which is better for the environment. With nanotechnology and improved materials, we will need even less material to build future bridges. The environmental footprint of each person will certainly be far lower in 2050 if we accept new technology than it will be if we restrict growth and technology development. It will almost certainly be less even than today’s, even though our future lifestyles would be far better. Trying to go back to yesterday’s technologies without greatly reducing population and lifestyle would impose such high environmental impact that the environment would be devastated. We don’t need to, and we shouldn’t.

Take TVs as another example. TVs used to be hugely heavy and bulky monsters that took up half the living room, used lots of electricity, but offered relatively small displays with a choice from just a few channels. Today, thin LCD or LED displays use far less material, consume far less power, take up far less space and offer far bigger and better displays offering access to thousands of channels via satellites and web links. So as far as TV-based entertainment goes, we have a far higher standard of living with far lower environmental impact. The same is true for phones, computers, networks, cars, fridges, washing machines, and most other tools. Better materials and technologies enable lower resource use.

New science and technology has enabled new kinds of materials that can substitute for scarce physical resources. Copper was once in danger of running out imminently. Now you can build a national fibre telecommunication network with a few bucketfuls of sand and some plastic. We have plastic pipes and water tanks too, so we don’t really need copper for plumbing either. Aluminium makes reasonable cables, and future materials such as graphene will make even better cables, still with no copper use. There are few things that can’t be done with alternative materials, especially as quantum materials can be designed to echo the behaviour of many chemicals. So it is highly unlikely that we will ever run out of any element. We will simply find alternative solutions as shortages demand.

Oil will be much the same story. To believe the doom-mongers, our use of oil will continue to grow exponentially until one day there is none left and then we will all be in big trouble, or dead, breathing in 20% CO2 by then of course. Again, this is simply a nonsensical scenario. By 2030, oil will be considered a messy and expensive way of getting energy, and most will be left in the ground. The 6GJ of energy a barrel of oil contains could be made for $30 using solar panels in the deserts, and electricity is clean. Even if solar doesn’t progress that far, shale gas only produces half as much CO2 as oil for the same energy output (another potential environmental improvement held back by green zealots here in the UK and indeed the rest of Europe).
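To see what $30 per barrel-equivalent implies for electricity prices, a quick conversion using the 6GJ-per-barrel figure from the text:

```python
barrel_energy_joules = 6e9    # ~6 GJ of energy per barrel, as quoted
price_usd = 30.0              # target cost per barrel-equivalent of solar

kwh = barrel_energy_joules / 3.6e6    # 1 kWh = 3.6 MJ, so ~1667 kWh
cost_per_kwh = price_usd / kwh        # ~$0.018, i.e. under 2 cents per kWh

print(f"{kwh:.0f} kWh, ${cost_per_kwh:.3f}/kWh")
```

In other words, the claim amounts to desert solar at under 2 cents per kWh, which is the number to judge it by.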

This cheap solar electricity mostly won’t come from UK rooftops as currently incentivised by green-pressured government, but somewhere it is actually sunny, deserts for example, where land is cheap, because it isn’t much use for anything else. The energy will get to us via superconducting or graphene cables. Sure, the technology doesn’t yet exist, but it will. Oil will only cost $30 a barrel because no-one will want to pay more than that for what will be seen as an inferior means of energy production. Shale gas might still be used because it produces relatively little CO2 and will be very cheap, but even that will start declining as the costs of solar and nuclear variants fall.

In the longer term, in our 2050 world of 9.5Bn people, fusion power will be up and running, alongside efficient solar (perhaps some wind) and other forms of energy production, providing an energy glut that will help with water supply and food production as well as our other energy needs. In fact, thanks to the development of graphene desalination technology, clean water will be abundantly available at low cost (not much more than typical tap-water costs today) everywhere.

Our technologies will be so advanced by then that we will be able to control climate better too. We will have environmental models based on science, not models based on the CO2-causes-everything-bad religion, so we will know what we’re doing rather than acting on guesswork and old-wives’ tales. We will have excellent understanding of genetics and biotech and be able to make superior crops and animals, so will be able to make enough food to feed everyone, ensuring not only quantity but nutritional quality too. While today’s crops deliver about 2% of the solar energy landing on their fields to us as food, we will be able to make foods in factories more efficiently, and will have crops that are also more efficient. It is true that we may see occasional short-term food shortages, but in the long term, there is absolutely no need to worry about feeding everyone. And no need to worry about the impact on the environment either, because we will be able to make more food with far less space. No-one needs to be hungry, even if we have 9.5Bn of us, and with steady economic growth, everyone will be able to afford food too.

This is no fanciful techno-utopia. It is entirely deliverable and even expectable. All around the world today, people’s ethical awareness is increasing and we are finally starting to address problems of food and emergency aid distribution, even in failing regimes. The next few decades will not eradicate poverty completely, but they will make starvation much less of a problem, along with clean water availability.

How can we be sure it will be developed? Well, there will be more people for one thing. That means more brains. Those people will be richer, they will be better educated, and many will be scientists and engineers. Many will have been born in countries that value engineers and scientists greatly, and will have a lot of backing, so will get results. Some will be in IT, and will develop computer intelligence to add to the human effort, and provide better, cheaper and faster tools for scientists and engineers in every field to use. So, total intellectual resources will be far greater than they are today.

Therefore we can be certain that technological progress will continue to accelerate. As it does, the environment will become cleaner and healthier, because we will be able to make it so. We will restore nature. Rivers today in the UK are cleaner than 100 years ago. The air is cleaner too. We look after nature better, because that’s what people do when they are affluent and well educated. In 50 years we will see that attitude even more widespread. The rainforests will be flourishing, some species will be being resurrected from extinction via DNA banks. People will be well fed. Water supply will be adequate.

But all this can only happen if we stop following the advice of doom-mongers and technophobes who want to take us backwards.

That really is the key: more people mean more brain power, more solutions, and better technology. For the last million years, that has meant steady improvement of our lot. In the un-technological world of the cavemen hunter-gatherers, the world was capable of supporting around 60 million people. If we try to restrict technology development now, it will be a death sentence. People and the environment would both suffer. No-one wins if we stop progress. That is the fallacy of environmental dogma that is shouted loudly by the doom mongers.

Some extremists in the green movement would have us go back to yesterday, rejecting technology, living on nature and punishing everyone who disagrees with them. They can indulge such silliness when they are only a few and the rest of us support them, but everyone simply can’t live like that. Without technology, the world can only support 60 million, not 7 billion or 9.5 billion or 75 billion. There simply aren’t enough nice fields and forest for us all to live that way.

It is a simple choice. We could have 60 million thoroughly miserable post-environmentalists living in a post eco-catastrophe world where nature has been devastated by the results of daft policies invented by self-proclaimed environmentalists, trying to make a feeble recovery. Or we can ignore their nonsense, get on with our ongoing development, and live in a richer, nicer world where 9.5Bn people (or even far more if we want) can be happy, well fed, well educated, with a good standard of living, and living side by side with a flourishing environment, where our main impacts on the environment are positive.

Technology won’t solve every problem, and will even create some, but without a shadow of a doubt, technology is by far nature’s best friend. Not the lunatic fringe of ‘environmentalists’, many of whom are actually among the environment’s worst enemies – at best, well-meaning fools.

There is one final point that is usually overlooked in this debate. Every new person that is born is another life, living, breathing, loving, hopefully having fun, enjoying life and being happy. Life is a good thing, to be celebrated, not extinguished or prevented from coming into existence just because someone else has no imagination. Thanks to the positive feedbacks in the development loops, 50% more people means probably 100% more total joy and happiness. Population growth is good, we just have to be more creative, but that’s what we do all the time. Now let’s get on with making it work.

Good times lie ahead. We do need to fix some things though. I mentioned that physical resources won’t significantly diminish in quantity, at least in terms of the elements they hold, though those we use for energy (oil, coal and gas) give up their energy when we use them, and that energy is gone.

However, the ecosystem is a different matter. Even with advanced genetic technology we can expect in the far future, it will be difficult to resurrect organisms that have become extinct. It is far better to make sure they don’t. Even though an organism may be brought back, we’d also have to bring back the environment it needs with all the intricately woven inter-species dependencies.

Losing a single organism species might be relatively recoverable, but losing a rain forest will be very hard to fix. Forests are very complex systems. In fact designing and making a synthetic and simpler rainforest is probably easier than trying to regenerate a lost natural one. We really don’t want to have to do that. It would be far better to make sure we preserve the existing forests and other complex ecosystems. Poor countries may reasonably ask for some payment to preserve their forests rather than chopping them down to sell wood. We should certainly make sure to remove current perverse ‘environmental’ incentives to chop them down to make room for palm oil plantations to satisfy the demands of poorly thought out environmental policies in rich countries.

The same goes for ocean ecosystems. We are badly mismanaging many fisheries today, and that needs to be fixed, but there are already some signs of progress. EU regulations that used to cause huge quantities of fish to be caught and thrown back dead into the sea are becoming history. Again, these are a hangover from previous environmental policy designed to preserve fish stocks, but again this was poorly thought out and has had the opposite result to that intended.

Other policies in the EU and in other parts of the world are also causing problems by unbalancing populations and harming or distorting food chains. The bans on seal hunting are good – we love seals – but the explosion in seal populations caused by throwing dead fish back has increased the seal population’s demand for fish to over 100,000 tons a year, from stocks already severely stressed by over-fishing. The dead fish have also helped cause an explosion in lobster populations and in some sea birds. We may appreciate the good side, but we mustn’t forget to look for harmful effects that may also be caused. It is obvious that we could do a far better job, and we must.

A well-managed ocean with properly designed farms should be able to provide all the fish and other seafood we need, but we are well away from that yet and we do need to fix it. With ongoing scientific study, understanding of relationships between species, and especially in food chains, is improving, and regulations are slowly becoming more sensible, so there is hope. Many people are switching their diets to fish with sustainable populations. But these will need to be managed well too. Farming is suitable for many species, and crashes in some fish populations have added up to a loud wake-up call to fix regulations around the world. We may use genetic modification to increase growth and reproduction rates, or otherwise optimise sustainability and ocean capacity. I don’t think there is any room for complacency, but I am confident that we can and will develop good husbandry practices and that our oceans and fish stocks will recover and become sustainable.

Certainly, we have a greater emotional attachment to the organic world than to mere minerals, and we are part of nature too, but we can and will be sustainable in both camps, even with a greatly increased population.

The future of reproductive choice

I’m not taking sides on the abortion debate, just drawing maps of the potential future, so don’t shoot the messenger.

An average baby girl is born with a million eggs, still has 300,000 when she reaches puberty, and subsequently releases 300 – 400 of these over her reproductive lifetime. Typically one or two will become kids but today a woman has no way of deciding which ones, and she certainly has no control over which sperm is used beyond choosing her partner.

Surely it can’t be very far in the future (as a wild guess, say 2050) before we fully understand the links between how someone is and their genetics (and all the other biological factors involved in determining outcome too). That knowledge could then notionally be used to create some sort of nanotech (aka magic) gate that would allow her to choose which of her eggs get to be ovulated and potentially fertilized, wasting ones she isn’t interested in and going for it when she’s released a good one. Maybe by 2060, women would also be able to filter sperm the same way, helping some while blocking others. Choice needn’t be limited to whether to have a baby or not, but which baby.

Choosing a particularly promising egg and then which sperm would combine best with it, an embryo might be created only if it is likely to result in the right person (perhaps an excellent athlete, or an artist, or a scientist, or just good looking), or deselected if it would become the wrong person (e.g. a terrorist, criminal, saxophonist, Republican).

However, by the time we have the technology to do that, and even before we fully know what gene combos result in what features, we would almost certainly be able to simply assemble any chosen DNA and insert it into an egg from which the DNA has been removed. That would seem a more reliable mechanism to get the ‘perfect’ baby than choosing from a long list of imperfect ones. Active assembly should beat deselection from a random list.

By then, we might even be using new DNA bases that don’t exist in nature, invented by people or AI to add or control features or abilities nature doesn’t reliably provide for.

If we can do that, and if we know how to simulate how someone might turn out, then we could go further and create lots of electronic babies that live their entire lives in an electronic Matrix style existence. Let’s expand on that briefly.

Even today, couples can store eggs and sperm for later use, but with this future genetic assembly, it will become feasible to create offspring from nothing more than a DNA listing. Both members of a couple, of any sex, could get a record of their DNA, randomize combinations with their partner’s DNA and thus get a massive library of potential offspring. They may even be able to buy listings of celebrity DNA from the net. This creates the potential for greatly delayed birth and tradable ‘ebaybies’ – DNA listings are not alive, so current laws don’t forbid trading in them. These listings could however be used to create electronic ‘virtual’ offspring, simulated in a computer memory instead of being born organically. Various degrees of existence are possible, with varied awareness. Couples may have many electronic babies as well as a few real ones. They may even wait to see how a simulation works out before deciding which kids to make for real. If an electronic baby turns out particularly well, it might be promoted to actual life via DNA assembly and real pregnancy. The following consequences are obvious:

Trade-in and collection of DNA listings, virtual embryos, virtual kids etc, that could actually be fabricated at some stage

Re-birth, potential to clone and download one’s mind or use a direct brain link to live in a younger self

Demands by infertile and gay couples to have babies via genetic assembly

Ability of kids to own entire populations of virtual people, who are quite real in some ways.

It is clear that this whole technology field is rich in ethical issues! But we don’t need to go deep into future tech to find more of those. Just following current political trends to their logical conclusions introduces a lot more. I’ve written often on the random walk of values, and we cannot be confident that many values we hold today will still reign in decades time. Where might this random walk lead? Let’s explore some more.

Even in ‘conventional’ pregnancies, although the right to choose has been firmly established in most of the developed world, a woman usually has very little information about the fetus and has to make her decision almost entirely based on her own circumstances and values. The proportion of abortions related to known fetal characteristics such as genetic conditions or abnormalities is small. Most decisions can’t yet take any account of what sort of person that fetus might become. We should expect future technology to provide far more information on fetus characteristics and likely future development. Perhaps if a woman is better informed on likely outcomes, might that sometimes affect her decision, in either direction?

In some circumstances, potential outcome may be less certain and an informed decision might require more time or more tests. To allow for that without reducing the right to choose, future law could allow for conditional terminations, registered before a legal time limit but performed later (before another time limit) when more is known. This period could be used for more medical tests, or to advertise the baby to potential adopters who want a child just like that one, or simply to allow more time for the mother to see how her own circumstances change. Between 2005 and 2015, the USA abortion rate dropped from 1 in 6 pregnancies to 1 in 7, while in the UK, 22% of pregnancies are terminated. What would these figures be if women could determine what future person would result? Would the termination rate increase? To 30%, 50%? Abandon this one and see if we can make a better one? How many of us would exist if our parents had known then what they know now?

Whether and how late terminations should be permitted is still fiercely debated. There is already discussion about allowing terminations right up to birth and even after birth in particular circumstances. If so, then why stop there? We all know people who make excellent arguments for retrospective abortion. Maybe future parents should be allowed to decide whether to keep a child right up until it reaches its teens, depending on how the child turns out. Why not 16, or 18, or even 25, when people truly reach adulthood? By then they’d know what kind of person they’re inflicting on the world. Childhood and teen years could simply be a trial period. And why should only the parents have a say? Given an overpopulated world with an infinite number of potential people that could be brought into existence, perhaps the state could also demand a high standard of social performance before assigning a life license. The Chinese state already uses surveillance technology to assign social scores. It is a relatively small logical step further to link that to life licenses that require periodic renewal. Go a bit further if you will, and link that thought to the blog I just wrote on future surveillance: https://timeguide.wordpress.com/2019/05/19/future-surveillance/.

Those of you who have watched Logan’s Run will be familiar with the idea of compulsory termination at a certain age. Why not instead have a flexible age that depends on social score? It could range from zero to 100. A pregnancy might only be permitted if the genetic blueprint passes a suitability test, and then, as nurture and environmental factors play their roles as a person ages, their life license could be renewed (or not) every year. A range of crimes might also result in withdrawal of a license, and subsequent termination.

Finally, what about AI? Future technology will allow us to make hybrids, symbionts if you like, with a genetically edited human-ish body, and a mind that is part human, part AI, with the AI acting partly as enhancement and partly as a control system. Maybe the future state could insist that the installation into the embryo of a state ‘guardian’, a ‘supervisory AI’, essentially a deeply embedded police officer/judge/jury/executioner, be required to get the life license.

Random walks are dangerous. You can end up where you start, or somewhere very far away in any direction.

The legal battles and arguments around ‘choice’ won’t go away any time soon. They will become broader, more complex, more difficult, and more controversial.

Future Surveillance

This is an update of my last surveillance blog 6 years ago, much of which is common discussion now. I’ll briefly repeat key points to save you reading it.

They used to say

“Don’t think it

If you must think it, don’t say it

If you must say it, don’t write it

If you must write it, don’t sign it”

Sadly this wisdom is already as obsolete as Asimov’s Laws of Robotics. The last three lines have already been automated.

I recently read of new headphones designed to recognize thoughts so they know what you want to listen to. Simple thought recognition in various forms has been around for 20 years now. It is slowly improving, but with smart networked earphones we’re already providing an easy platform into which to sneak better monitoring and better thought detection. Sold on convenience and ease of use, of course.

You already know that Google and various other large companies have very extensive records documenting many areas of your life. It’s reasonable to assume that any or all of this could be demanded by a future government. I trust Google and the rest to a point, but not a very distant one.

Your phone, TV, Alexa, or even your networked coffee machine may listen in to everything you say, sending audio records to cloud servers for analysis, and you have only naivety as a defense against those audio records being stored and potentially used for nefarious purposes.

Some next-generation games machines will have 3D scanners and UHD cameras that can even see blood flow in your skin. If these are hacked or left switched on – and social networking video is one of the applications they are aiming to capture, so they’ll be on often – someone could watch you all evening, capture the most intimate body details, and film your facial expressions and gaze direction while you are looking at a known image on a particular part of the screen. Monitoring pupil dilation, smiles, anguished expressions etc. could provide a lot of evidence for your emotional state, with a detailed record of what you were watching and doing at exactly that moment, and with whom. By monitoring blood flow and pulse via your Fitbit or smartwatch, and additionally monitoring skin conductivity, your level of excitement, stress or relaxation can easily be inferred. If given to the authorities, this sort of data might be useful to identify pedophiles or murderers, by seeing which men are excited by seeing kids on TV or which get pleasure from violent games, and that will likely be one of the justifications authorities use for collecting it.

Millimetre wave scanning was once controversial when it was introduced in airport body scanners, but we have had no choice but to accept it and its associated abuses – the only alternative is not to fly. 5G uses millimetre waves too, and it’s reasonable to expect that the same people who can already monitor your movements in your home simply by analyzing your wi-fi signals will be able to do a lot better by analyzing 5G signals.

As mm-wave systems develop, they could become much more widespread, so burglars and voyeurs might start using them to check whether there is anything worth stealing or videoing. Maybe some search company making visual street maps might ‘accidentally’ capture a detailed 3D map of the inside of your house when they come round, as well as, or instead of, everything they could access via your wireless LAN.

Add to this the ability to use drones to get close without being noticed. Drones can be very small, fly themselves and automatically survey an area using broad sections of the electromagnetic spectrum.

NFC bank and credit cards not only present risks of theft, but also the added ability to track what we spend, where, on what, and with whom. NFC capability in your phone makes some parts of life easier, but NFC has always been yet another doorway that may be left unlocked by security holes in operating systems or apps, and apps themselves carry many assorted risks. Many apps ask for far more permissions than they need to do their professed tasks, and their owners collect vast quantities of information for purposes known only to them and their clients. Obviously data can be collected using a variety of apps and linked together at its destination. Not all providers are honest, and apps are still very inadequately regulated and policed.

We’re seeing increasing experimentation with facial recognition technology around the world, from China to the UK, and only a few authorities so far, such as San Francisco’s, have had the wisdom to ban its use. Heavy-handed UK police, who increasingly police according to their own political agenda even at the expense of policing actual UK law, have already fined people who covered their faces to avoid being abused in face recognition trials. It is reasonable to assume they would gleefully seize any future opportunity to access and cross-link all of the various data pools currently being assembled under the excuse of reducing crime, but with the real intent of policing their own social engineering preferences. Using advanced AI to mine zillions of hours of full-sensory data gathered on every one of us via all this routine IT exposure and extensive, ubiquitous video surveillance, they could deduce everyone’s attitudes to just about everything: the real truth about our attitudes to every friend and family member or TV celebrity or politician or product, our detailed sexual orientation, any fetishes or perversions, our racial attitudes, political allegiances, attitudes to almost every topic ever aired on TV or in everyday conversation, how hard we are working, how much stress we are experiencing, and many aspects of our medical state.

It doesn’t even stop with public cameras. Innumerable cameras and microphones on phones, visors, and high-street private surveillance will automatically record all this same stuff for everyone, sometimes with benign declared intentions such as making self-driving vehicles safer, sometimes using social media tribes to capture any kind of evidence against ‘the other’. In-depth evidence will become available to back up prosecutions of crimes that today would not even be noticed. Computers that can retrospectively data-mine evidence collected over decades and link it all together will be able to identify billions of real or invented crimes.

Active skin will one day link your nervous system to your IT, allowing you to record and replay sensations. You will never be able to be sure that you are the only one that can access that data either. I could easily hide algorithms in a chip or program that only I know about, that no amount of testing or inspection could ever reveal. If I can, any decent software engineer can too. That’s the main reason I have never trusted my IT – I am quite nice but I would probably be tempted to put in some secret stuff on any IT I designed. Just because I could and could almost certainly get away with it. If someone was making electronics to link to your nervous system, they’d probably be at least tempted to put a back door in too, or be told to by the authorities.

The current panic about face recognition is justified. Other AI can lipread better than people and recognize gestures and facial expressions better than people. It adds the knowledge of everywhere you go, everyone you meet, everything you do, everything you say and even every emotional reaction to all of that to all the other knowledge gathered online or by your mobile, fitness band, electronic jewelry or other accessories.

Fools utter the old line: “if you are innocent, you have nothing to fear”. Do you know anyone who is innocent? Of everything? Who has never ever done or even thought anything even a little bit wrong? Who has never wanted to do anything nasty to anyone for any reason ever? And that’s before you even start to factor in corruption of the police or mistakes or being framed or dumb juries or secret courts. The real problem here is not the abuses we already see. It is what is being and will be collected and stored, forever, that will be available to all future governments of all persuasions and police authorities who consider themselves better than the law. I’ve said often that our governments are often incompetent but rarely malicious. Most of our leaders are nice guys, only a few are corrupt, but most are technologically inept. With an increasingly divided society, there’s a strong chance that the ‘wrong’ government or even a dictatorship could get in. Which of us can be sure we won’t be up against the wall one day?

We’ve already lost the battle to defend privacy. The only bits left are where the technology hasn’t caught up yet. In the future, not even the deepest, most hidden parts of your mind will be private. Pretty much everything about you will be available to an AI-upskilled state and its police.

The future for women