AI that talks to us could quickly become problematic

Google is in the news again, adding evidence to the unfortunate stereotype of the autistic IT nerd who barely understands normal people, and being astonished at a backlash that normal people would all easily have predicted. (I’m autistic and work mostly in IT too, and am well used to the stereotype, so it doesn’t bother me; in fact it’s a sort of ‘get out of social interactions free’ card.) Last time it was Google Glass, where it apparently didn’t occur to them that people may not want other people videoing them without consent in pubs and changing rooms. This time it is Google Duplex, which makes phone calls on your behalf to arrange appointments using a voice that is almost indistinguishable from a human’s. You could save time making an appointment with a hairdresser, apparently, so the Googlanders decided it must be a brilliant breakthrough and expected everyone to agree. They didn’t.

Some of the objections have been about ethics: e.g. an AI should not present itself as human. Humans have rights and dignity and deserve respectful interactions with other people; an AI doesn’t, and should not masquerade as human to acquire such privileges without the knowledge and consent of the other party.

I would be more offended by the presumed attitude of the user. If someone thinks they are so much better than me that they can demand my time and attention without spending any of their own, delegating instead to a few microseconds of processing time in a server farm somewhere, I’ll treat them with the contempt they deserve. My response will not be favourable. I am already highly irritated by the NHS using simple voice-interaction messaging to check I will attend a hospital appointment. The fact that my health is on the line, and that notices at surgeries say I will be banned if I complain on social media, is sufficient blackmail to ensure my compliance, but it still comes at the expense of my respect and goodwill. AI-backed voice interaction with a better voice wouldn’t be any better, and if it were asking for more interaction, such as actually booking an appointment, it would be extremely annoying.

In any case, most people don’t speak in fully formed, grammatically and logically correct sentences. If you listen carefully to everyday chat, a lot of sentences are poorly pronounced, incomplete, jumbled, full of ums, ers and likes, and they require a great deal of cooperation from the listener to make any sense at all. Speakers also wander off topic frequently. People don’t stick to a rigid vocabulary list or a set of nicely formed sentences. Lots of preamble and verbal meandering is likely in any response, and it is highly likely to add ambiguity. The example used in the demo, “I’d like to make a hairdressing appointment for a client”, sounds fine until you factor in normal everyday humanity. A busy hairdresser or a lazy receptionist is not necessarily going to cooperate fully. “What do you mean, client?”, “404 not found”, “piss off google”, “oh FFS, not another bloody computer”, “we don’t do hairdressing, we do haircuts”, “why can’t your ‘client’ call themselves then?” and a million other responses are more likely than “what time would you like?”

Suppose though that it eventually gets accepted by society. First, call centres beyond the jurisdiction of your nuisance-call-blocking authority will incessantly call you at all hours, asking or telling you all sorts of things, wasting huge amounts of your time and reducing your quality of life. Voice spam from humans in call centres is bad enough. If the owners can multiply productivity by 1000 by using AI instead of people, the result is predictable.

We’ve seen the conspicuous political use of social media AI already. Facebook might have allowed companies to use very limited and inaccurate knowledge of you to target ads or articles that you probably didn’t look at. Voice interaction would be different. It creates a richer emotional connection than text or graphics on a screen. Google knows a lot about you too, and it will know a lot more soon. These big IT companies are also playing with tech to log you on to sites easily without passwords. Some of the gadgets involved might be worn, such as watches, bracelets or rings. They can pick up signals to identify you, but they can also check emotional states such as stress level. Voice gives away emotion too. AI can already tell better than almost all people whether you are telling the truth or lying or hiding something. Tech such as iris scans can also tell emotional states, as well as give health clues. Simple photos can reveal your age quite accurately to AI (check out how-old.net). The AI voice sounds human, but it is better than even your best friends at guessing your age, your stress and other emotions, your health, and whether you are telling the truth or not, and it knows far more about what you like and dislike and what you really do online than anyone you know, including you. It knows a lot of your intimate secrets. It sounds human, but its nearest human equivalent was probably Machiavelli. That’s who will soon be on the other side of the call, not some dumb chatbot. Now re-calculate political interference, and factor in the political leanings and social engineering desires of the companies providing the tools. Google and Facebook and the others are very far from politically neutral. One presidential candidate might get full cooperation, assistance and convenient looking the other way, while their opponent might meet rejection and citation of the official rules on non-interference.
Campaigns on social issues will also be amplified by AI coupled to voice interaction. I looked at some related issues in a previous blog on fake AI (i.e. fake-news-type issues): https://timeguide.wordpress.com/2017/11/16/fake-ai/

I could but won’t write a blog on how this tech could couple well to sexbots to help out incels. It may actually have some genuine uses in providing synthetic companionship for lonely people, or helping or encouraging them in real social interactions with real people. It will certainly have some uses in gaming and chatbot game interaction.

We are not very far from computers that are smarter than people across a very wide spectrum, and probably not very far from conscious machines with superhuman intelligence. If we can’t even rely on IT companies to understand the likely consequences of such obvious stuff as Duplex before they push it, how can we trust them in other upcoming areas of AI development, or even closer-term techs with less obvious consequences? We simply can’t!

There are certainly a few areas where such technology might help us, but most are minor and the rest don’t need any deception, and they all come at great cost or real social and political risk, as well as more abstract risks such as threats to human dignity and other ethical issues. I haven’t given this much thought yet and I am sure there must be very many other consequences I have not touched on. Google should do more thinking before they release stuff. Technology is becoming very powerful, but we all know that great power comes with great responsibility, and since most people aren’t engineers and can’t think through all the potential technology interactions and consequences, engineers such as Google’s must act more responsibly. I had hoped they’d started, and they said they had, but this is not evidence of that.

 


Futurist memories: The leisure society and the black box economy

Things don’t always change as fast as we think. This is a piece I wrote in 1994 looking forward to a fully automated ‘black box economy’, a fly-by-wire society. Not much I’d change if I were writing it new today. Here:

The black box economy is a strictly theoretical possibility, but may result where machines gradually take over more and more roles until the whole economy is run by machines, with everything automated. People could be gradually displaced by intelligent systems, robots and automated machinery. If this were to proceed to the ultimate conclusion, we could have a system with the same or even greater output as the original society, but with no people involved. The manufacturing process could thus become a ‘black box’. Such a system would be so machine controlled that humans would not easily be able to pick up the pieces if it crashed – they would simply not understand how it works, or could not control it. It would be a fly-by-wire society.

The human effort could be reduced to simple requests. When you want a new television, a robot might come and collect the old one, recycling the materials and bringing you a new one. Since no people need be involved and the whole automated system could be entirely self-maintaining and self-sufficient there need be no costs. This concept may be equally applicable in other sectors, such as services and information – ultimately producing more leisure time.

Although such a system is theoretically possible – energy is free in principle, and resources are ultimately a function of energy availability – it is unlikely to go quite this far. We may go some way along this road, but there will always be some jobs that we don’t want to automate, so some people may still work. Certainly, far fewer people would need to work in such a system, and other people could spend their time in more enjoyable pursuits, or in voluntary work. This could be the leisure economy we were promised long ago. Just because futurists predicted it long ago and it hasn’t happened yet does not mean it never will. Some people would consider it Utopian, others possibly a nightmare; it’s just a matter of taste.

Interstellar travel: quantum ratchet drive

Introductory waffle & background state of the art bit

My last blog included a note on my Mars commute system, which can propel spacecraft with people in them at up to 600km/s. Unfortunately, although 1000 times faster than a bullet, that is still only 0.2% of light speed, and it would take about 2000 years to get to our nearest star at that speed, so we need a better solution. Star Trek uses warp drive to go faster than light, and NASA’s Alcubierre drive is the best approximation we have to that so far:

https://en.wikipedia.org/wiki/Alcubierre_drive

but smarter people than me say it probably won’t work, and almost certainly won’t work any time soon:

https://jalopnik.com/the-painful-truth-about-nasas-warp-drive-spaceship-from-1590330763

If it does work, it will need to use negative energy extracted via the Casimir effect, and if that works, so will my own invention, the Space Anchor:

https://timeguide.wordpress.com/2014/06/14/how-the-space-anchor-works/

The Space Anchor would also allow space dogfights like you see in Star Wars. Unless you’re a pedant like me, you probably never think about how space fighters turn in the vacuum of space when you’re watching movies, but wings obviously won’t work with no atmosphere, and otherwise you’d need to eject a lot of fuel out the back at high thrust to turn. The space anchor, however, locks on to a point in space-time, and you can pivot around it to reverse direction without using fuel, thanks to conservation of angular momentum. Otherwise, the anchor drifts with ‘local’ space-time expansion and contraction, which essentially creates relativity-based ‘currents’ that can pull a spacecraft along at high speed. But enough about Space Anchors. Read my novel Space Anchor to see how much fun they could be.

Space anchors might not work, being only semi-firm sci-fi based at least partly on hypothetical physics. If they don’t work, and warp drive won’t work without using massive amounts of dark energy that I don’t believe exists either, then we’re left with solar sails, laser sails and assorted ion drives. Solar sails won’t work well too far from a star. Lasers that could power a spacecraft well outside a star system sound expensive and unworkable, and the light sails that capture their beams mean this could only get to about 10% of light speed. Ion drives work OK for modest speeds if you have an on-board power source and some stuff to thrust out the back to get a Newtonian reaction. Fancy-shaped resonant cavity thrusters try to cheat maths and physics to get a reaction by using special shapes of microwave chamber,

https://en.wikipedia.org/wiki/RF_resonant_cavity_thruster

but I’d personally put these ‘Em-drives’ in the basket with cold fusion and perpetual motion machines. Sure, there have been experiments that supposedly show they work, but so do many experiments for cold fusion and perpetual motion machines, and we know those results are just experimental or interpretational errors. Of the existing techniques that don’t contradict known physics or rely on unverified and debatable hypotheses, the light sails are best and get 10% of light speed at high expense.
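Those speeds translate into daunting journey times. A quick sanity check in Python, taking our nearest star (Proxima Centauri) at 4.24 light years:

```python
# Journey time to Proxima Centauri (~4.24 light years) at sub-light speeds
LIGHT_YEAR_KM = 9.461e12
DISTANCE_KM = 4.24 * LIGHT_YEAR_KM
C_KM_S = 299_792                 # speed of light in km/s
SECONDS_PER_YEAR = 3.156e7

def years_at(speed_km_s):
    """Cruise time in years at a constant speed, ignoring acceleration phases."""
    return DISTANCE_KM / speed_km_s / SECONDS_PER_YEAR

print(round(years_at(600)))            # ~2,100 years at 600 km/s (0.2% of c)
print(round(years_at(0.10 * C_KM_S)))  # ~42 years at 10% of light speed
```

So even the best non-exotic option means a four-decade trip, which is why the rest of this section chases something faster.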

A few proposed thruster-based systems use particles collected from the not-quite-empty space as the fuel source and propellant. Again, if we stretch the Casimir effect theory to near breaking point, it may be possible to use virtual particles popping in and out of existence as propellant by allowing them to appear and thrusting them before they vanish, the quantum thruster drive. My own variant of this solution is to use Casimir combs with oscillating interleaving nano-teeth that separate virtual particles before they can annihilate to prolong that time enough to make it feasible. I frankly have no idea whether this would actually work.

Better still would be if we could use a form of propulsion that doesn’t need to throw matter backwards to get reactionary force forwards. If magical microwave chambers and warp drives are no use, how about this new idea of mine:

The Quantum Ratchet Drive

You can explore other theoretical interstellar drives via Google or Wikipedia, but you won’t find my latest idea there – the Quantum Ratchet Drive. I graduated in Theoretical Physics, but this drive is more in the Hypothetical Physics Department, along with my explanations for inflation, dark matter and novel states of matter. That doesn’t mean it is wrong or won’t work though, just that I can’t prove it will work yet. Anyway, faint heart ne’er won fair maid.

You have seen pics of trains that climb steep slopes using a rack and pinion system, effectively gear wheels on a toothed rail so that they don’t slip (not the ones that use a cable). I originally called my idea the quantum rack and pinion drive because it works in a similar way, but actually, the more I think about it, the more appropriate the ratchet analogy is, using a gear tooth as a sort of anchor to pull against to get the next little bit of progress. It relies on the fact that fields are quantized: any system will exist in one state and then move up or down to the next quantum state; it can’t stay in between. At this point I feel I need another 50 IQ points to grasp a very slippery idea, so be patient – this is an idea in the early stages of development. I’m basically trying to harness the physics that causes particles to switch quantum states, looking at the process in which quantum states change, nature’s ‘snap to grid’ approach, to make a propulsion system out of it.

If we generate an external field that interacts with the field in a nearby microscopic region of space in front of our craft then as the total field approaches a particular quantum threshold, nature will drag that region to the closest quantum state, hopefully creating a tiny force that drags the system to that state. In essence, the local quantum structure becomes a grid onto which the craft can lock. At very tiny scales obviously, but if you add enough tiny distances you eventually get big ones.

But space doesn’t have a fixed grid, does it? If we just generate any old field any which way in front of our craft, no progress will happen, because nature will be quite happy to have those states at any location in space, so no force of movement will be generated. HOWEVER… suppose space did have such a grid, and we could use the interaction between the quantum states in the grid cells and our generated field. Then we could get what we want: a toothed rail with which our gearwheels can engage.

So we just need a system that assigns local quantum states to microscopic space regions and that is our rack, then we apply a field to our pinion that is not quite enough to become that state, but is closer than any other one. At some point, there will be a small thrust towards the next state so that it can reach a local minimum energy level. Those tiny thrusts would add up.

We could use any kind of field that our future tech can generate. Our craft would have two field emitters. One produces a nice tidy waveform that maps quantum states onto the space just in front of our craft. A second emitter produces a second field that creates an interaction so that the system wants to come to rest in a region set slightly ahead of the craft’s current position. It would be like a train laying a toothed track just in front of it as it goes along, always positioning the teeth so that the train will fall into the next location.
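To make the ratchet intuition concrete, here is a purely illustrative toy in Python. It is not real quantum physics, just a classical cartoon of the ‘lay a tooth slightly ahead, fall into it’ cycle: a particle relaxes into the nearest minimum of a periodic potential whose phase we step forward each cycle, and the tiny relaxations add up to net motion. All the numbers are arbitrary illustration values.

```python
import math

WAVELENGTH = 1.0     # spacing of the 'teeth' (the quantized grid pitch)
PHASE_STEP = 0.25    # how far ahead each new tooth is laid (< half a tooth)
RELAX_STEPS = 200    # relaxation iterations per cycle
DT = 0.01            # relaxation step size

def force(x, phase):
    # F = -dU/dx for the toy potential U(x) = -cos(2*pi*(x - phase)/WAVELENGTH)
    k = 2 * math.pi / WAVELENGTH
    return -k * math.sin(k * (x - phase))

def relax(x, phase):
    # Overdamped 'snap to grid': the particle slides to the nearest minimum
    for _ in range(RELAX_STEPS):
        x += DT * force(x, phase)
    return x

x, phase = 0.0, 0.0
for _ in range(20):
    phase += PHASE_STEP   # lay the next tooth slightly ahead of the craft
    x = relax(x, phase)   # the craft falls into it

print(round(x, 3))        # net forward drift of ~5.0 after 20 cycles
```

Real quantum transitions don’t behave like an overdamped classical particle, of course; the toy only shows why offsetting the ‘grid’ by less than half a tooth each cycle produces motion in one direction only.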

We could certainly produce EM fields, making a sort of stepper linear induction motor on a mat created by the ship itself. What about strong or weak nuclear forces? Even if stuck with EM, maybe we could use rotating nuclei or rotating atoms or molecules, which would move like microscopic stepper motors across our pre-quantized space grid, tiny forces acting on individual protons or electrons adding up to macroscopic forces on our spacecraft. If we’re doing it with individual atoms or nuclear particles, the regions of space we impose the fields on would be just ahead of them, not out in front of the spacecraft. If we’re using interacting EM fields, then we’re relying on appropriate phasing and beam intensities to do the job.

As I said, early days. Needs work. Also needs a bigger brain. Intuitively this ought to work. It ought to be capable of up to light speed. The big question is where the energy comes from. It isn’t an impulse drive and doesn’t chuck matter out of a rocket nozzle, but it might collect small particles along the way to convert into energy. Or perhaps nature contributes the energy. If so, then this could get light speed travel without fuel and limited on-board energy supply. Just like gravity pulls a train down a hill, perhaps clever phase design could arrange the grid ahead to be always ‘downhill’ in which case this might turn out to be yet another vacuum energy drive. I honestly don’t know. I’m out of my depth, but intuition suggests this shows promise for someone smarter.

 

Advanced land, sea, air and space transport technologies

I’ll be speaking at the Advanced Engineering conference in Helsinki at the end of May. My topic will be potential solutions for future transport, covering land, sea, air and space. These are all areas where I’ve invented new approaches. In my 1987 BT life as a performance engineer, I studied the potential to increase road capacity by a factor of 5 by using driverless pod technology, mimicking the packet switching approach we were moving towards in telecomms. This is very different from the self-driving systems currently in fashion, because dumb pods would be routed by smart infrastructure rather than having their own AI/sensor systems, so the pods could be extremely cheap and packed very closely together to get a huge performance benefit, using up to 85% of the available space. We’re now seeing a few prototypes of such dumb pod systems being trialled.

It was also obvious even in the 1980s that the same approach could be used on rail, increasing capacity from today’s typical 0.4% occupancy to 80%+, an improvement factor of 200. The same pods could be used on rail and road, and on rail, pods could be clumped together to make virtual trains so that they could mix with existing conventional trains during a long transition to a more efficient system. In the early 2000s, we realised that pods could be powered by induction coils in the road surface. More recently, with the discovery of graphene, it became clear that graphene induction devices could have big advantages over copper or aluminium ones because they would deter metal theft, and that linear induction could be used to actually propel the pods and in due course even to levitate them, so future pods wouldn’t need engines or wheels, let alone on-board AI and sensor systems.

We thus end up with the prospect of a far-future ground transport system with 5-15 times today’s road capacity and up to 200 times rail capacity, virtually free of accidents and congestion.

Advanced under-sea transport could adopt supercavitation technology that is already in use and likely to develop quickly in coming decades. Some sources suggest that it may even be possible to travel underwater more easily than through air. Again, if graphene is available in large quantity at reasonable cost, it would be possible to do away with the need for powerful engines on board, this time by tethering pods together with graphene string.

Above certain speeds, a blunt surface in front of each pod would create a bubble enclosing the entire pod, greatly reducing drag. Unlike Hyperloop-style high-speed rail, tubes would not be required for these pods; together, a continuous stream of many pods tethered together right across an ocean would make a high-capacity under-sea transport system. This would also be more environmentally friendly, using only electricity at the ends.

Another property of graphene is that it can be used to make carbon foam that is lighter than helium. Such material could float high in the stratosphere well above air lanes. With the upper surface used for solar power collection, and the bottom surface used as a linear induction mat, it will be possible to make inter-continental air lines that can propel sleds hypersonically, connected by tethers to planes far below.

High altitude solar array to power IT and propel planes

As well as providing pollution-free hypersonic travel, these air lines could also double as low satellite platforms for comms and surveillance.

As well as land, sea and air travel, we are now seeing rapid development of the space industry, but currently, getting into orbit uses very expensive rockets that dump huge quantities of water vapour into the high atmosphere. A 2017 invention called the Pythagoras Sling solves the problems of expense and pollution. Two parachutes are deployed (by small rockets or balloons) into the very high atmosphere, attached to hoops through which a graphene tether is threaded, one end connected to a ground-based winch and the other to the payload. The large parachutes have high enough drag to act as temporary anchors while the tether is pulled, propelling the payload up to orbital speed via an arc that renders the final velocity horizontal, as is needed to achieve orbit.

With re-usable parts, relatively rapid redeployment and only electricity as power supply, the sling could reduce costs by a factor of 50-100 over current state of the art, greatly accelerating space development without the high altitude water vapour risking climate change effects.
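As a rough illustration of the ‘only electricity’ point, the kinetic energy of a payload at orbital speed is modest in electricity terms. A minimal sketch, assuming a hypothetical 100 kg payload (not a figure from the post) reaching ~7.8 km/s low-orbit speed, and ignoring drag, gravity and winch losses:

```python
# Kinetic energy needed to reach low-orbit speed, expressed as electricity.
# 100 kg is an illustrative payload mass, not a figure from the design.
PAYLOAD_KG = 100.0
ORBITAL_SPEED_M_S = 7800.0

energy_j = 0.5 * PAYLOAD_KG * ORBITAL_SPEED_M_S ** 2   # E = 1/2 m v^2
energy_kwh = energy_j / 3.6e6                          # joules -> kWh

print(round(energy_kwh))   # ~845 kWh of electricity per launch, ideally
```

Even with generous loss factors, that is a domestic-scale electricity bill per launch, which is where the factor of 50-100 cost reduction over rockets comes from.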

The winch design for the Pythagoras Sling uses an ‘inverse rail gun’ electromagnetic puller to avoid the massive centrifugal forces of a rotating drum. The inverse rail gun can be scaled up indefinitely, so it also offers good potential for interplanetary travel. With Mars travel on the horizon, the prospect of journey times measured in months is not appealing, but a system using well-spaced motors pulling a graphene tether millions of km long is viable. A 40,000 ton graphene tether could be laid out in space in a line 6.7M km long and, using solar power, could propel a 2-ton capsule at 5g up to an exit speed of 800km/s, reaching Mars in as little as 5-12 days.
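The constant-acceleration kinematics behind those figures are easy to check. A quick sketch, assuming g = 9.81 m/s² and no engineering margins (which is why it lands slightly under the tether length quoted above):

```python
# Run-up distance and time for a capsule accelerated at 5g to 800 km/s
G = 9.81                    # m/s^2
ACCEL = 5 * G               # 5g, as in the post
EXIT_SPEED = 800e3          # m/s

run_up_m = EXIT_SPEED**2 / (2 * ACCEL)   # v^2 = 2*a*d  ->  d = v^2 / 2a
run_up_Mkm = run_up_m / 1e9              # metres -> millions of km
accel_time_h = EXIT_SPEED / ACCEL / 3600 # t = v / a, in hours

print(round(run_up_Mkm, 1))    # ~6.5 million km of tether
print(round(accel_time_h, 1))  # ~4.5 hours under acceleration
```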

At the far end, a folded graphene net could intercept and slow the capsule at 5g into a chosen orbit around Mars. While not prohibitively expensive, this system would be completely reusable, and since it needs no fuel, it would be a very clean and safe way of getting crew and materials to a Mars colony.

 

Beyond VR: Computer assisted dreaming

I first played with VR in 1983/1984 while working in the missile industry. Back then we didn’t call it VR, we just called it simulation, but it was actually more intensive than VR, just as proper flight simulators are. Our office was a pair of 10m-wide domes onto which video could be projected, built decades earlier, in the 1950s I think. One dome had a normal floor; the other had a hydraulic platform that could simulate being on a ship. The subject would stand on whichever surface was appropriate and would see pretty much exactly what they would see on a real battlefield. The missile launcher used for simulation was identical to a real one and showed exactly the same image as a real one would. The real missile was not present of course, but its weight was simulated, and when the fire button was pressed, a 140dB bang was injected into the headset and weights and pulleys compensated for the 14kg of missile weight suddenly vanishing from the shoulder. The experience was therefore pretty convincing, and with the loud bang and suddenly changing weight, it was almost as hard to stand steady and keep the system on target as it would be in real life – only the presumed fear and knowledge of the reality of the situation was different.

Back then in 1983, as digital supercomputers had only just taken over from analog ones for simulation, it was already becoming obvious that this kind of computer simulation would one day allow ‘computer assisted dreaming’. (That’s one of the reasons I am irritated when Jaron Lanier is credited for inventing VR – highly realistic simulators and the VR ideas that sprung obviously from them had already been around for decades. At best, all he ‘invented’ was a catchy name for a lower cost, lower quality, less intense simulator. The real inventors were those who made the first generation simulators long before I was born and the basic idea of VR had already been very well established.)

‘Computer assisted dreaming’ may well be the next phase of VR. Today, in conventional VR, people are immersed in a computer-generated world produced by a program (usually) written by others. Via trial and feedback, programmers make their virtual worlds better. As AI and sensor technology continue their rapid progress, this is very likely to change to make worlds instantly responsive to the user. By detecting user emotions, reactions, gestures and even thoughts and imagination, it won’t be long before AI can produce a world in real time that depends on those thoughts, imagination and emotions, rather than dropping the user into a pre-designed virtual world. That world would depend largely on your own imagination, upskilled by external AI. You might start off imagining you’re on a beach, then AI might add to it by injecting all sorts of things it knows you might enjoy from previous experiences. As you respond to those, it picks up on the things you like or don’t like, and the scene continues to adapt and evolve, to make it more or less pleasant, exciting or challenging depending on your emotional state, external requirements and what it thinks you want from the experience. It would be very like being in a dream – computer-assisted lucid dreaming, exactly what I wanted to make back in 1983 after playing in that simulator.

Most people enjoy occasional lucid dreams, where they realise they are dreaming and can then decide what happens next. Making VR do exactly that would be better than being trapped in someone else’s world. You could still start off with whatever virtual world you bought, a computer game or training suite perhaps, but it could adapt to you, your needs and desires to make it more compelling and generally better.

Even in shared experiences like social games, experiences could be personalised. Often all players need to see the same enemies in the same locations in the same ways to make it fair, but that doesn’t mean that the situation can’t adapt to the personalities of those playing. It might actually improve the social value if each time you play it looks different because your companions are different. You might tease a friend if every time you play with them, zombies or aliens always have to appear somehow, but that’s all part of being friends. Exploring virtual worlds with friends, where you both see things dependent on your friend’s personality would help bonding. It would be a bit like exploring their inner world. Today, you only explore the designer’s inner world.

This sort of thing would be a superb development and creativity tool. It could allow you to explore a concept you have in your head, automatically feeding in AI upskilling to amplify your own thoughts and ideas, showing you new paths to explore and helping you do so. The results would still be extremely personal to you, but you on a good day. You could accomplish more, have better visions, imagine more creative things, do more with whatever artistic talent you have. AI could even co-create synthetic personas, make virtual friends you can bond with, share innermost thoughts with, in total confidence (assuming the company you bought the tool from is trustworthy and isn’t spying on you or selling your details, so maybe best not to buy it from Facebook then).

And it would have tremendous therapeutic potential too. You could explore and indulge both enjoyable and troublesome aspects of your inner personality, to build on the good and alleviate or dispel the bad. You might become less troubled, less neurotic, more mentally healthy. You could build your emotional and creative skills. You could become happier and more fulfilled. Mental health improvement potential on its own makes this sort of thing worth developing.

Marketers would obviously try to seize control as they always do, and advertising is already adapting to VR and will continue into its next phases of development. Your own wants and desires might help guide the ‘dreaming’, but marketers will inevitably have some control over what else is injected, and will influence algorithms and AI in how it chooses how to respond to your input. You might be able to choose much of the experience, but others will still want and try to influence and manipulate you, to change your mindset and attitudes in their favour. That will not change until the advertising business model changes. You might be able to buy devices or applications that are entirely driven by you and you alone, but it is pretty certain that the bulk of products and services available will be at least partly financed by those who want to have some control of what you experience.

Nevertheless, computer-assisted dreaming could be a much more immersive and personal experience than VR, being more like an echo of your own mind and personality than external vision, more your own creation, less someone else’s. In fact, echo sounds a better term too. Echo reality, ER, or maybe personal reality, pereal, or mental echo, ME. Nah, maybe we need Lanier to invent a catchy name again, he is good at that. That 1983 idea could soon become reality.

 

High speed transatlantic submarine train

In 1863, Jules Verne wrote about the idea of suspended transatlantic tunnels through which trains could be sent using air pressure. Pneumatic tube delivery was a fashionable idea then, and small scale pneumatic delivery systems were commonplace until the late 20th century – I remember a few shops using them to transport change around. In 1935, the film ‘The tunnel’ featured another high speed transatlantic tunnel, as did another film in 1972, ‘Tunnel through the deeps’. Futurists have often discussed high speed mass transit systems, often featuring maglev and vacuums (no, Elon Musk didn’t invent the idea, his Hyperloop is justifiably famous for resurfacing and developing this very old idea and is likely to see its final implementation).

Anyway, I have read quite a bit about supercavitation over the last few years. First developed in 1960 as a military idea for sending torpedoes at high speed, it was successfully implemented in 1972 and has developed somewhat since. Cavitation happens when a surface, such as a propeller blade, moves through water so fast that a cavity is left until the water has a chance to close back in. As it does, the resultant shock wave can damage the propeller surface and cause wear. In supercavitation, the cavity is deliberate, and the system is designed so that the cavity encloses the entire projectile. In 2005, the first proposal for people transport emerged, DARPA’s Underwater Express Program, designed to transport small groups of Navy personnel at speeds of up to 100 knots. Around that time, a German supercavitating torpedo was reaching speeds of 250mph.

Some more promising articles suggest that supersonic speeds are achievable under water, with less friction than travelling through air. Achieving and then maintaining such speeds currently requires sophisticated propulsion mechanisms, but not for much longer. I believe the propulsion problem can be engineered away by pulling capsules with a strong tether. That would be utterly useless for a torpedo of course, but for a transport system it would be absolutely fine.

Transatlantic traffic is quite high, and if a cheaper and more environmentally friendly system than air travel were available, it would undoubtedly increase. My idea is to use a long string of capsules attached to a long graphene cable, pulled in a continuous loop at very high speed. Capsules would be filled at stations, accelerated to speed and attached to the cable for their transatlantic journey, then detached, decelerated and their passengers or freight unloaded. Graphene is around 200 times stronger than steel, so making such a cable should be feasible.
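As a rough sanity check on journey times, here is a back-of-envelope sketch. The distance and capsule speed used are purely illustrative assumptions, not design figures:

```python
# Rough transit figures for the cable-loop concept.
# Distance and speed are illustrative assumptions, not design values.

distance_m = 5_500e3   # approximate transatlantic great-circle distance
speed_m_s = 300.0      # assumed supercavitating capsule speed (~580 knots)

trip_hours = distance_m / speed_m_s / 3600
print(f"Transit time at {speed_m_s:.0f} m/s: about {trip_hours:.1f} hours")
```

Even at a fairly modest 300 m/s, well below the supersonic underwater speeds mentioned above, the crossing would take about five hours, already competitive with transatlantic flight once airport overheads are counted.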

The big benefit of such a system is that no evacuated tube is needed. The cable and capsules would travel through the water directly. Avoiding the need for an expensive and complex tube containing a vacuum, electromagnetic propulsion system and power supply would greatly reduce cost. All of the pulling force for a cable-based system would be applied at the ends.

Graphene cable doesn’t yet exist, but it will one day. I doubt if current supercavitation research is up to the job either, but that’s quite normal for any novel engineering project. Engineers face new problems and solve them every day. By the time the cable is feasible, we will doubtless be more knowledgeable about supercavitation too. So while it’s a bit early to say it will definitely become reality, it is certainly not too early to start thinking about it. Some future Musk might well be able to pull it off.

People are becoming less well-informed

The Cambridge Analytica story has exposed a great deal about our modern society. They allegedly obtained access to 50M Facebook records to enable Trump’s team to target users with personalised messages.

One of the most interesting aspects is that unless they only employ extremely incompetent journalists, the news outlets making the biggest fuss about it must be perfectly aware of reports that Obama appears to have done much the same but on a much larger scale back in 2012, but are keeping very quiet about it. According to Carol Davidsen, a senior Obama campaign staffer, they allowed Obama’s team to suck out the whole social graph – because they were on our side – before closing it to prevent Republican access to the same techniques. Trump’s campaign’s 50M looks almost amateur. I don’t like Trump, and I did like Obama before the halo slipped, but it seems clear to anyone who checks media across the political spectrum that both sides try their best to use social media to target users with personalised messages, and both sides are willing to bend rules if they think they can get away with it.

Of course all competent news media are aware of it. The reason some outlets are talking about the earlier Democrat misuse while others keep quiet is that they too all have their own political biases. Media today is very strongly polarised left or right, and each side will ignore, play down or ludicrously spin stories that don’t align with their own politics. It has become the norm to ignore the log in your own eye but make a big deal of the speck in your opponent’s, though we know that tendency goes back millennia. I watch Channel 4 News (which broke the Cambridge Analytica story) every day, and although I enjoy it, it has a quite shameless lefty bias.

So it isn’t just the parties themselves that will try to target people with politically massaged messages, it is quite the norm for most media too. All sides of politics since Machiavelli have done everything they can to tilt the playing field in their favour, whether it’s use of media and social media, changing constituency boundaries or adjusting the size of the public sector. But there is a third group to explore here.

Facebook of course has full access to all of its 2.2Bn users’ records and social graph and is not squeaky-clean neutral in its handling of them. Facebook has often been in the headlines over the last year or two thanks to its own political biases, with strongly weighted algorithms filtering or prioritising stories according to their political alignment. Like most IT companies, Facebook has a left lean. (I don’t quite know why IT skills should correlate with political alignment, unless it’s that most IT staff tend to be young, so lefty views implanted at school and university have had less time to be tempered by real-world experience.) It isn’t just Facebook of course. While Google has pretty much failed in its attempt at social media, it also has comprehensive records on most of us from search, browsing and Android, and via control of the algorithms that determine what appears in the first pages of a search, it is also able to tailor those results to what it knows of our personalities. Twitter has unintentionally created a whole world of mob-rule politics and justice, but its format is rapidly evolving into a wannabe Facebook. So the IT companies have themselves become major players in politics.

A fourth player is now emerging – artificial intelligence, and it will grow rapidly in importance into the far future. Simple algorithms have already been upgraded to assorted neural network variants, and this is already causing problems with accusations of bias from all directions. I blogged recently about Fake AI: https://timeguide.wordpress.com/2017/11/16/fake-ai/, concerned that when AI analyses large datasets and comes up with politically incorrect insights, this is now being interpreted as something that needs to be fixed – a case not of shooting the messenger, but of forcing the messenger to wear tinted spectacles. I would argue that AI should be allowed to reach whatever insights it can from a dataset, and it is then our responsibility to decide what to do with those insights. If that involves introducing a bias into implementation, that can be debated, but it should at least be transparent, and not hidden inside the AI itself. I am now concerned that by trying to ‘re-educate’ the AI, we may instead be indoctrinating it, locking today’s politics and values into future AI and all the systems that use it. Our values will change, but some foundation-level AI may be too opaque to repair fully.

What worries me most, though, isn’t that these groups try their best to influence us. It could be argued that in free countries, with free speech, anybody should be able to use whatever means they can to try to influence us. No, the real problem is that the evolution of media and social media over the last 25 years, and especially the last 5, has produced a world where most people only ever see one part of a story, and even though many are aware of that, they don’t even try to find the rest and won’t look at it if it is put before them, because they don’t want to see things that don’t align with their existing mindset. We are building a world full of people who only see and consider part of the picture. Social media and its ‘bubbles’ reinforce that trend, but other media are equally guilty.

How can we shake society out of this ongoing polarisation? It isn’t just that politics becomes more aggressive. It also becomes less effective. Almost all politicians claim they want to make the world ‘better’, but they disagree on what exactly that means and how best to do so. But if they only see part of the problem, and don’t see or understand the basic structure and mechanisms of the system in which that problem exists, then they are very poorly placed to identify a viable solution, let alone an optimal one.

Until we can fix this extreme blinkering that already exists, our world cannot get as ‘better’ as it should.

 

Mars trips won’t have to take months

It is exciting seeing the resurgence in interest in space travel, especially the prospect that Mars trips are looking increasingly feasible. Every year, far-future projects come a year closer. Mars has been on the agenda for decades, but now the tech needed is coming over the horizon.

You’ve probably already read about Elon Musk’s SpaceX plans, so I won’t bother repeating them here. The first trips will be dangerous but the passengers on the first successful trip will get to go down in history as the first human Mars visitors. That prospect of lasting fame and a place in history plus the actual experience and excitement of doing the trip will add up to more than enough reward to tempt lots of people to join the queue to be considered. A lucky and elite few will eventually land there. Some might stay as the first colonists. It won’t be long after that before the first babies are born on Mars, and their names will certainly be remembered, the first true Martians.

I am optimistic that the costs and travel times involved in getting to Mars can be reduced enormously. Today’s space travel relies on rockets, but my own invention, the Pythagoras Sling, could reduce the costs of getting materials and people to orbit by a factor of 50 or 100 compared to SpaceX’s rockets, which are already far cheaper than NASA’s. A system introduction paper can be downloaded from:

https://carbondevices.files.wordpress.com/2017/09/pythagoras-sling-article.pdf

Sadly, in spite of the Sling obviously being far more feasible and shorter term than a space elevator, we have not yet been able to get our paper published in a space journal, so that is the only source so far.

This picture shows one implementation for non-human payloads, but tape length and scale could be increased to allow low-g human launches some day, or more likely, early systems would allow space-based anchors to be built with different launch architecture for human payloads.

The Sling needs graphene tape, a couple of parachutes or a floating drag platform and a magnetic drive to pull the tape, using standard linear motor principles as used in linear induction motors and rail guns. The tape is simply attached to the rocket and pulled through two high altitude anchors attached to the platforms or parachutes. Here is a pic of the tape drive designed for another use, but the principle is the same. Rail gun technology works well today, and could easily be adapted into this inverse form to drive a suitably engineered tape at incredible speed.

All the components are reusable and shouldn’t cost much compared to heavy rockets anyway. The required parachutes exist today, but we don’t have the graphene tape or the motor to pull it yet. As the space industry continues to develop, these will come. A space elevator would need millions of tons of graphene; the Sling needs only around 100 kilograms, so it will certainly be possible decades before a space elevator. The Sling configuration can achieve full orbital speeds for payloads using only electrical energy supplied at the ground, so it is also much less environmentally damaging than rocketry.
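To put ‘only electrical energy at the ground’ in perspective, here is a minimal sketch of the ideal energy cost per kilogram at orbital speed, ignoring drag, gravity losses and drive inefficiency, so real figures would be several times higher:

```python
# Ideal specific kinetic energy at low-Earth-orbit speed.
v_orbit = 7_800.0                  # m/s, typical LEO orbital speed
e_per_kg_J = 0.5 * v_orbit ** 2    # specific kinetic energy, J/kg
e_per_kg_kWh = e_per_kg_J / 3.6e6  # joules -> kilowatt-hours

print(f"~{e_per_kg_J / 1e6:.0f} MJ/kg, i.e. ~{e_per_kg_kWh:.2f} kWh/kg")
```

Around 8.5 kWh per kilogram of pure kinetic energy: at typical grid prices that is on the order of a pound per kilogram in raw energy cost, which is why an all-electric ground-based drive is so attractive compared with rocketry.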

Using tech such as the Sling, material can be put into orbit to make space stations and development factories for all sorts of space activity. One project that I would put high on the priority list would be another tape-pulling launch system; an early architecture suggestion follows.

Since it will be in space, laying tape out in a long line would be no real problem, even millions of kms, and with motors arranged periodically along the length, a long tape pointed in the right direction could launch a payload towards a Mars interception system at extreme speeds. We need to think big, since the distances travelled will be big. A launch system weighing 40,000 tons would be large-scale engineering but not exceptional, and although graphene today is very expensive, as with any novel material it will become much cheaper as manufacturing technology catches up (if the graphene filament print heads I suggest work as I hope, graphene filament could be made at 200m/s and woven into yarn by a spinneret as it emerges from multiple heads). In the following pics, carbon atoms are fed through nanotubes with the right timing, speed and charges to combine into graphene as they emerge. The second pic shows why the nanotubes need to be tilted towards each other: otherwise the molecular geometry doesn’t work, and this requirement limits the heads to making thin filaments just two or three carbon rings wide. The second pic also mentions carbon foam, which would be perfect for making stratospheric floating platforms as an alternative to using parachutes in the Sling system.

Graphene filament head, ejects graphene filament at 200m/s.

A large ship is of that magnitude, as are some buildings or bridges. Such a launch system would allow people to get to Mars in 5-12 days, and payloads of g-force-tolerant supplies such as water could be sent to arrive in a day. The intercept system at the Mars end would need to be of similar size to catch and decelerate the payload into Mars orbit. The systems at both ends could be designed to be used for launch or intercept as needed.
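The 5-12 day figure can be sanity-checked from first principles. Assuming a close Mars approach and a sustained 3g launch acceleration (both illustrative assumptions; real trajectories and approach distances vary a lot):

```python
# Sanity check: cruise speed and launch-run length for a one-week trip.
# All numbers are illustrative assumptions, not mission design values.

close_approach_m = 55e9   # Mars at a close opposition, ~55 million km
trip_s = 7 * 86_400       # a one-week journey, in seconds

v_cruise = close_approach_m / trip_s      # required average speed, m/s
a = 3 * 9.81                              # assumed sustained 3g acceleration
t_accel = v_cruise / a                    # time spent accelerating
run_length_m = 0.5 * a * t_accel ** 2     # launch-track length needed

print(f"cruise ~{v_cruise / 1e3:.0f} km/s, "
      f"accel ~{t_accel / 60:.0f} min, "
      f"track ~{run_length_m / 1e3:,.0f} km")
```

The implied cruise speed is around 90 km/s, and the launch run needed to reach it at 3g is roughly 140,000 km, comfortably within the ‘millions of kms’ of tape discussed above.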

I’ve been a systems engineer for 36 years and a futurologist for 27 of those. The system solutions I propose should work if there is no better solution available, but since we’re talking about the far future, it is far more likely that better systems will be invented by smarter engineers or AIs by the time we’re ready to use them. Rocketry will probably get us through to the 2040s but after that, I believe these solutions can be made real and Mars trips after that could become quite routine. I present these solutions as proof that the problems can be solved, by showing that potential solutions already exist. As a futurologist, all I really care about is that someone will be able to do it somehow.

 

So, there really is no need to think in terms of months of travel each way, we should think of rapid supply chains and human travel times around a week or two – not so different from the first US immigrants from Europe.

How can we make a computer conscious?

I found this article in my drafts folder, written 3 years ago as part of my short series on making conscious computers. I thought I’d published it, but hadn’t, so I’m updating and publishing it now. It’s a bit long-winded, thinking out loud, trying to derive some insights from nature on how to make conscious machines. The good news is that actual AI developments are following paths that lead in much the same direction, though some significant re-routing and new architectural features are needed if they are to optimize AI and achieve machine consciousness.

Let’s start with the problem. Today’s AI that plays chess, does web searches or answers questions is digital. It uses algorithms, sets of instructions that the computer follows one by one. All of those are reduced to simple binary actions, toggling bits between 1 and 0. The processor doing that is no more conscious or aware of it, and has no more understanding of what it is doing than an abacus knows it is doing sums. The intelligence is in the mind producing the clever algorithms that interpret the current 1s and 0s and change them in the right way. The algorithms are written down, albeit in more 1s and 0s in a memory chip, but are essentially still just text, only as smart and aware as a piece of paper with writing on it. The answer is computed, transmitted, stored, retrieved, displayed, but at no point does the computer sense that it is doing any of those things. It really is just an advanced abacus. An abacus is digital too (an analog equivalent to an abacus is a slide rule).

A big question springs to mind: can a digital computer ever be any more than an advanced abacus? Until recently, I was certain the answer was no. Surely a digital computer that just runs programs can never be conscious? It can simulate consciousness to some degree; it can in principle describe the movements of every particle in a conscious brain, every electric current, every chemical reaction. But all it is doing is describing them. It is still just an abacus. Once computed, that simulation of consciousness could be printed out, and the printout would be just as conscious as the computer was. A digital ‘stored program’ computer can certainly implement extremely useful AI. With the right algorithms, it can mine data, link things together, create new data from that, generate new ideas by linking together things that haven’t been linked before, make works of art and poetry, compose music, chat to people, recognize faces, emotions and gestures. It might even be able to converse about life, the universe and everything, tell you its history, discuss its hopes for the future, but all of that is just a thin gloss on an abacus. I wrote a chat-bot on my Sinclair ZX Spectrum in 1983, running on a processor with about 8,000 transistors. The chat-bot took all of about 5 small pages of code but could hold a short conversation quite well if you knew what subjects to stick to. It is very easy to simulate conversation. But it is still just a complicated abacus, and still doesn’t even know it is doing anything.

However clever the AI it implements, a conventional digital computer that just executes algorithms can’t become conscious, but an analog computer can, a quantum computer can, and so can a hybrid digital/analog/quantum computer. The question remains whether a digital computer can be conscious if it isn’t just running stored programs. Could it have a different structure, but still be digital, and yet be conscious? Who knows? Not me. I used to know it couldn’t, but now that I am a lot older and slightly wiser, I know I don’t know.

Consciousness debate often starts with what we know to be conscious: the human brain. It isn’t a digital computer, although it has digital processes running in it. It also runs a lot of analog processes. It may also run some quantum processes that are significant in consciousness. It is a conscious hybrid of digital, analog and possibly quantum computing. Consciousness evolved in nature, so it can be evolved in a lab. It may be difficult and time-consuming, and may even be beyond current human understanding, but it is possible. Nature didn’t use magic, and what nature did can be replicated and probably even improved on. Evolutionary AI development may have hit hard times, but that only shows that the techniques used by the engineers doing it didn’t work on that occasion, not that other techniques can’t work. Around 2.6 new human-level, fully conscious brains are made by nature every second without using any magic, and furthermore, they are all slightly different. There are 7.6 billion slightly different implementations of human-level consciousness that work, and all of them resulted from evolution. That’s enough of an existence proof and a technique-plausibility proof for me.

Sensors evolved in nature pretty early on. They aren’t necessary for life, for organisms to move around and grow and reproduce, but they are very helpful. Over time, simple light, heat, chemical or touch detectors evolved further into simple vision and produced advanced sensations such as pain and pleasure, causing an organism to alter its behavior, in other words, feeling something. Detection of an input is not the same as sensation, i.e. feeling an input. Once detection upgrades to sensation, you have the tools to make consciousness. No more upgrades are needed. Sensing that you are sensing something is quite enough to be classified as consciousness. Internally reusing the same basic structure as external sensing of light or heat or pressure or chemical gradient or whatever allows design of thought, planning, memory, learning, and construction and processing of concepts. All those things are just laying out components in different architectures. Getting from detection to sensation is the hard bit.

So the design of conscious machines, and in fact what AI researchers call the hard problem, really can be reduced to the question of what makes the difference between a light switch and something that can feel being pushed or feel the current flowing when it is; the difference between a photocell and feeling whether it is light or dark; the difference between detecting light frequency, looking it up in a database, then pronouncing that it is red, and actually experiencing redness. That is the hard problem of AI. Once that is solved, we will very soon afterwards have a fully conscious, self-aware AI. There are lots of options available, so let’s look at each in turn to extract any insights.

The first stage is easy enough. Detecting presence is easy, measuring it is harder. A detector detects something, a sensor (in its everyday engineering meaning) quantifies it to some degree. A component in an organism might fire if it detects something, it might fire with a stronger signal or more frequently if it detects more of it, so it would appear to be easy to evolve from detection to sensing in nature, and it is certainly easy to replicate sensing with technology.
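The distinction is easy to put in code terms. In this toy sketch (my own illustrative functions, not any real API), a detector reports only that a threshold was crossed, while a sensor quantifies the stimulus, here as a firing rate that rises with intensity:

```python
# Toy illustration of detection vs sensing (hypothetical functions).

def detect(stimulus: float, threshold: float = 0.5) -> bool:
    """Detection: a binary event -- it fired or it didn't."""
    return stimulus > threshold

def firing_rate(stimulus: float, max_rate: float = 200.0) -> float:
    """Sensing: quantifies the stimulus, here as a bounded firing rate in Hz."""
    return max_rate * min(max(stimulus, 0.0), 1.0)

print(detect(0.2), detect(0.9))            # False True
print(firing_rate(0.2), firing_rate(0.9))  # 40.0 180.0
```

The detector’s output is a bare yes/no event; the sensor’s output carries magnitude, which is the raw material the later stages of this argument need.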

Essentially, detection is digital, but sensing is usually analog, even though the quantity sensed might later be digitized. Sensing normally uses real numbers, while detection uses natural numbers (real vs integer, as programmers call them). The handling of analog signals in their raw form allows for biomimetic feedback loops, which I’ll argue are essential. Digitizing them introduces a level of abstraction that is essentially the difference between emulation and simulation, the difference between doing something and reading about someone doing it. Simulation can’t make a conscious machine; emulation can. I used to think that meant digital couldn’t become conscious, but actually it is just algorithmic processing of stored programs that can’t do it. There may be ways of achieving consciousness digitally, or quantumly, but I haven’t yet thought of any.

That engineering description falls far short of what we mean by sensation in human terms. How does that machine-style sensing become what we call a sensation? Logical reasoning says there would probably need to be only a small change in order to have evolved from detection to sensing in nature. Maybe something like recombining groups of components in different structures or adding them together or adding one or two new ones, that sort of thing?

So what about detecting detection? Or sensing detection? Those could evolve in sequence quite easily. Detecting detection is like your alarm system control unit detecting the change of state that indicates that a PIR has detected an intruder: a different voltage or resistance on a line, or a 1 or a 0 in a memory store. An extremely simple AI responds by ringing an alarm. But the alarm system doesn’t feel the intruder, does it? It is just a digital response to a digital input. No good.

How about sensing detection? How do you sense a 1 or a 0? Analog interpretation and quantification of digital states is very wasteful of resources, an evolutionary dead end. It isn’t any more useful than detection of detection. So we can eliminate that.

OK, sensing of sensing? Detection of sensing? They look promising. Let’s run with that a bit. In fact, I am convinced the solution lies in here so I’ll look till I find it.

Let’s do a thought experiment on designing a conscious microphone, and for this purpose, the lowest possible order of consciousness will do, we can add architecture and complexity and structures once we have some bricks. We don’t particularly want to copy nature, but are free to steal ideas and add our own where it suits.

A normal microphone sensor produces an analog signal quantifying the frequencies and intensities of the sounds it is exposed to, and that signal may later be quantified and digitized by an analog to digital converter, possibly after passing through some circuits such as filters or amplifiers in between. Such a device isn’t conscious yet. By sensing the signal produced by the microphone, we’d just be repeating the sensing process on a transmuted signal, not sensing the sensing itself.

Even up close, detecting that the microphone is sensing something could be done by just watching a little LED going on when current flows. Sensing it is harder but if we define it in conventional engineering terms, it could still be just monitoring a needle moving as the volume changes. That is obviously not enough, it’s not conscious, it isn’t feeling it, there’s no awareness there, no ‘sensation’. Even at this primitive level, if we want a conscious mic, we surely need to get in closer, into the physics of the sensing. Measuring the changing resistance between carbon particles or speed of a membrane moving backwards and forwards would just be replicating the sensing, adding an extra sensing stage in series, not sensing the sensing, so it needs to be different from that sort of thing. There must surely need to be a secondary change or activity in the sensing mechanism itself that senses the sensing of the original signal.

That’s a pretty open task, and it could even be embedded in the detecting process or in the production process for the output signal. But even recognizing that we need this extra property narrows the search. It must be a parallel or embedded mechanism, not one in series. The same logical structure would do fine for this secondary sensing, since it is just sensing in the same logical way as the original. This essential logical symmetry would also make its evolution easy: it is easy to imagine how it could happen in nature, and easier still to see how it could be implemented in a synthetic evolution design system. In this approach, we have to feel the sensing, so we need some sort of feedback loop with a high degree of symmetry compared with the main sensing stage. That would be compatible with natural evolution as well as logically sound as an engineering approach.

This starts to look like progress. In fact, it’s already starting to look a lot like a deep neural network, with one huge difference: instead of using feed-forward signal paths for analysis and backward propagation for training, it relies instead on a symmetric feedback mechanism where part of the input for each stage of sensing comes from its own internal and output signals. A neuron is not a full sensor in its own right, and it’s reasonable to assume that multiple neurons would be clustered so that there is a feedback loop. Many in the neural network AI community are already recognizing the limits of relying on feed-forward and back-prop architectures, but web searches suggest few if any are moving yet to symmetric feedback approaches. I think they should. There’s gold in them there hills!

So, the architecture of the notional sensor array required for our little conscious microphone would have a parallel circuit and feedback loop (possibly but not necessarily integrated), and in all likelihood these parallel sensing circuits would be heavily symmetrical, i.e. they would use pretty much the same sorts of components and architectures as the sensing process itself. If the sensation stage is of similar design to the primary sensing circuit, that again makes it easy to evolve in nature, a nice first-principles biomimetic insight. This structure therefore has the elegance of being very feasible for evolutionary development, natural or synthetic: it reuses similarly structured components and principles already designed, just recombining a couple of them in a slightly different architecture.

Another useful insight screams for attention too. The feedback loop ensures that the incoming sensation lingers to some degree. Compared to the nanoseconds we are used to in normal IT, the signals in nature travel fairly slowly (~200m/s), and the processing and sensing occur quite slowly (~200Hz). That means this system would have some inbuilt memory that repeats the essence of the sensation in real time – while it is sensing it. It is inherently capable of memory and recall and leaves the door wide open to introduce real-time interaction between memory and incoming signal. It’s not perfect yet, but it has all the boxes ticked to be a prime contender to build thought, concepts, store and recall memories, and in all likelihood, is a potential building brick for higher level consciousness. Throw in recent technology developments such as memristors and it starts to look like we have a very promising toolkit to start building primitive consciousness, and we’re already seeing some AI researchers going that path so maybe we’re not far from the goal. So, we make a deep neural net with nice feedback from output (of the sensing system, which to clarify would be a cluster of neurons, not a single neuron) to input at every stage (and between stages) so that inputs can be detected and sensed, while the input and output signals are stored and repeated into the inputs in real time as the signals are being processed. Throw in some synthetic neurotransmitters to dampen the feedback and prevent overflow and we’re looking at a system that can feel it is feeling something and perceive what it is feeling in real time.
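The lingering-signal idea can be sketched with a toy model (entirely my own illustration, not a model of a real neuron): a sensing unit whose input at each step includes a damped copy of its own previous output, so a brief stimulus decays gradually instead of vanishing at once:

```python
import math

# Toy sensing unit with a symmetric feedback loop: each step's input
# includes a damped copy of the unit's own previous output.
def step(prev_output: float, stimulus: float, feedback_gain: float = 0.8) -> float:
    # tanh bounds the activity, playing the role of a damping neurotransmitter
    return math.tanh(stimulus + feedback_gain * prev_output)

out = 0.0
trace = []
for t in range(10):
    stimulus = 1.0 if t == 0 else 0.0   # a single brief input pulse
    out = step(out, stimulus)
    trace.append(round(out, 3))

print(trace)  # activity decays gradually: the unit 'remembers' the pulse
```

The pulse arrives only once, but its echo persists for several further steps, the primitive built-in real-time memory described above.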

One further insight that immediately jumps out is that since the sensing relies on real-time processing of the sensations and feedbacks, the speed of signal propagation, storage, processing and repetition timeframes must all be compatible. If it is all speeded up a million-fold, it might still work fine, but if signals travel too slowly or processing is too fast relative to other factors, it won’t work. It will still get a computational result absolutely fine, but it won’t know that it has, it won’t be able to feel it. Therefore, since we have a factor of a million for signal speed (the speed of light compared to nerve signal propagation speed), 50 million for switching speed, and a factor of 50 for effective neuron size (though the sensing system units would be multiple-neuron clusters), we could make a conscious machine that could think up to 50 million times as fast as a natural system (before allowing for any parallel processing, of course). But with architectural variations too, we’d need to tune those performance metrics to make it work at all, and making physically larger nets would require either tuning speeds down or sacrificing connectivity-related intelligence. An evolutionary design system could easily do that tuning for us.
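The ratios quoted above can be checked with rough order-of-magnitude values (all illustrative; nerve conduction and firing rates vary widely in practice, so I use the ~200 m/s and ~200 Hz figures from the text):

```python
# Order-of-magnitude speed ratios between silicon and nature.
nerve_signal_speed = 200.0      # m/s, nerve signal propagation (as above)
chip_signal_speed = 2e8         # m/s, roughly two-thirds of light speed in a conductor
neuron_firing_rate = 200.0      # Hz, neural processing rate (as above)
transistor_switch_rate = 1e10   # Hz, ~10 GHz-class switching

print(chip_signal_speed / nerve_signal_speed)       # signal-speed factor, ~1e6
print(transistor_switch_rate / neuron_firing_rate)  # switching factor, ~5e7 (50 million)
```

Which ratio dominates depends on the architecture, hence the need to tune propagation, storage and processing speeds together rather than simply maximising each.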

What else can we deduce about the nature of this circuit from basic principles? The symmetry of the system demands that the output must be an inverse transform of the input. Why? Well, because the parallel, feedback circuit must generate a form that is self-consistent. We can’t deduce the form of the transform from that, just that the whole system must produce an output mathematically similar to that of the input.

I now need to write another blog on how to use such circuits in neural vortexes to generate knowledge, concepts, emotions and thinking. But I’m quite pleased that it does seem that some first-principles analysis of natural evolution already gives us some pretty good clues on how to make a conscious computer. I am optimistic that current research is going the right way and only needs relatively small course corrections to achieve consciousness.

 

New book: Fashion Tomorrow

I finally finished the book I started 2 years ago on future fashion, or rather future technologies relevant to the fashion industry.

It is a very short book, more of a quick guide at 40k words, less than half as long as my other books. It covers women’s fashion mostly, though some applies to men too. I would never have finished writing a full-sized book on this topic, and I’d rather put out something now, short and packed full of ideas that are (mostly) still novel, than delay until they are commonplace. It is aimed at students and people working in fashion design, who have loads of artistic and design talent but want to know what technology opportunities are coming that they could soon exploit; anyone interested in fashion who isn’t technophobic should also find it interesting. Some sections discussing intimate apparel contain adult comments, so the book is unsuitable for minors.

It started as a blog post, then I realised I had quite a bit more stuff I could link together, so I made a start, then got sidetracked, for 20 months! I threw away 75% of the original contents list and tidied up the rest to release a short guide instead. I wanted to put it out for free, but 99p or 99c seems to be the lowest price you can start at; I doubt that would put anyone off except the least interested readers. As with my other books, I’ll occasionally make it free.

Huge areas I left out include swathes of topics on social, political, environmental and psychological fashions, impacts of AI and robots, manufacturing, marketing, distribution and sales. These are all big topics, but I just didn’t have time to write them all up so I just stuck to the core areas with passing mentions of the others. In any case, much has been written on these areas by others, and my book focuses on things that are unique, embryonic or not well covered elsewhere. It fills a large hole in fashion industry thinking.