Category Archives: technology

Enhanced cellular blockchain

I thought there was a need for a cellular blockchain variant, and for a more sustainable alternative to cryptocurrencies like Bitcoin that depend on unsustainable proof-of-work. So I designed one and gave it the temporary project name Grapevine. I like biomimetics, which I used for both the blockchain itself and its derivative management/application/currency/SW distribution layer. The ANTs were my invention in 1993 when I was with BT, along with Chris Winter. BT never did anything with the idea, and I believe MIT later published some notes on it too. ANTs provide an ideal companion to blockchain, and together they could be the basis of some very secure IT systems.

The following has not been thoroughly checked so may contain serious flaws, but hopefully it contains some useful ideas to push the field a little in the right direction. Hint: if you can’t read the smaller print, hold the control key and use the mouse scroll button to zoom.

With thanks to my good friend Prof Nick Colosimo for letting me bounce the ideas off him.


AI that talks to us could quickly become problematic

Google is making the news again, adding evidence to the unfortunate stereotype of the autistic IT nerd who barely understands normal people, and they have therefore been astonished at a backlash that normal people would all easily have predicted. (I’m autistic and work mostly in IT too, and am well used to the stereotype, so it doesn’t bother me; in fact it is a sort of ‘get out of social interactions free’ card.) Last time it was Google Glass, where it apparently didn’t occur to them that people may not want other people videoing them without consent in pubs and changing rooms. This time it is Google Duplex, which makes phone calls on your behalf to arrange appointments using a voice that is almost indistinguishable from a normal human’s. You could save time making an appointment with a hairdresser, apparently, so the Googlanders decided it must be a brilliant breakthrough and expected everyone to agree. They didn’t.

Some of the objections have been about ethics, e.g. that an AI should not present itself as human. Humans have rights and dignity and deserve respectful interactions with other people, but an AI doesn’t, and it should not masquerade as human to acquire such privilege without the knowledge and consent of the other party.

I would be more offended by the presumed attitude of the user. If someone thinks they are so much better than me that they can demand my time and attention without the expense of any of their own, delegating instead to a few microseconds of processing time in a server farm somewhere, I’ll treat them with the contempt they deserve. My response will not be favourable. I am already highly irritated by the NHS using simple voice-interaction messaging to check that I will attend a hospital appointment. The fact that my health is on the line, and that notices at surgeries say I will be banned if I complain on social media, is sufficient blackmail to ensure my compliance, but it still comes at the expense of my respect and goodwill. AI-backed voice interaction with a better voice wouldn’t be any better, and if it were asking for more interaction, such as actually booking an appointment, it would be extremely annoying.

In any case, most people don’t speak in fully formed, grammatically and logically correct sentences. If you listen carefully to everyday chat, a lot of sentences are poorly pronounced, incomplete, jumbled, full of ums, ers and likes, and they require a great deal of cooperation from the listener to make any sense at all. They also wander off topic frequently. People don’t stick to a rigid vocabulary list or to lists of nicely selected sentences. Lots of preamble and verbal meandering is likely in any response, and it is highly likely to add ambiguity. The example used in a demo, “I’d like to make a hairdressing appointment for a client”, sounds fine until you factor in normal everyday humanity. A busy hairdresser or a lazy receptionist is not necessarily going to cooperate fully. “What do you mean, client?”, “404 not found”, “piss off google”, “oh FFS, not another bloody computer”, “we don’t do hairdressing, we do haircuts”, “why can’t your ‘client’ call themselves then?” and a million other responses are more likely than “what time would you like?”

Suppose though that it eventually gets accepted by society. First, call centres beyond the jurisdiction of your nuisance-call-blocking authority will incessantly call you at all hours, asking or telling you all sorts of things, wasting huge amounts of your time and reducing your quality of life. Voice spam from humans in call centres is bad enough. If the owners can multiply productivity by 1000 by using AI instead of people, the result is predictable.

We’ve seen the conspicuous political use of social media AI already. Facebook might have allowed companies to use very limited and inaccurate knowledge of you to target ads or articles that you probably didn’t look at. Voice interaction would be different. It uses a richer emotional connection than text or graphics on a screen. Google knows a lot about you too, but it will know a lot more soon. These big IT companies are also playing with tech to log you on easily to sites without passwords. Some of the gadgets involved might be worn, such as watches, bracelets or rings. They can pick up signals to identify you, but they can also check emotional states such as stress level. Voice gives away emotion too. AI can already tell better than almost all people whether you are telling the truth, lying or hiding something. Tech such as iris scans can also tell emotional states, as well as give health clues. Simple photos can reveal your age quite accurately to AI (check out how-old.net).

The AI voice sounds human, but it is better than even your best friends at guessing your age, your stress and other emotions, your health, and whether you are telling the truth or not, and it knows far more about what you like and dislike and what you really do online than anyone you know, including you. It knows a lot of your intimate secrets. It sounds human, but its nearest human equivalent was probably Machiavelli. That’s who will soon be on the other side of the call, not some dumb chatbot. Now re-calculate political interference, and factor in the political leanings and social-engineering desires of the companies providing the tools. Google and Facebook and the others are very far from politically neutral. One presidential candidate might get full cooperation, assistance and convenient looking the other way, while their opponent might meet rejection and citation of the official rules on non-interference.
Campaigns on social issues will also be amplified by AI coupled to voice interaction. I looked at some related issues in a previous blog on fake AI (i.e. fake-news-type issues): https://timeguide.wordpress.com/2017/11/16/fake-ai/

I could but won’t write a blog on how this tech could couple well to sexbots to help out incels. It may actually have some genuine uses in providing synthetic companionship for lonely people, or helping or encouraging them in real social interactions with real people. It will certainly have some uses in gaming and chatbot game interaction.

We are not very far from computers that are smarter than people across a very wide spectrum, and probably not very far from conscious machines that have superhuman intelligence. If we can’t even rely on IT companies to understand the likely consequences of such obvious stuff as Duplex before they push it, how can we trust them in other upcoming areas of AI development, or even in closer-term techs with less obvious consequences? We simply can’t!

There are certainly a few areas where such technology might help us, but most are minor and the rest don’t need any deception, and they all come at great cost or real social and political risk, as well as more abstract risks such as threats to human dignity and other ethical issues. I haven’t given this much thought yet and I am sure there must be very many other consequences I have not touched on. Google should do more thinking before they release stuff. Technology is becoming very powerful, but we all know that great power comes with great responsibility, and since most people aren’t engineers and so can’t think through all the potential technology interactions and consequences, engineers such as Google’s must act more responsibly. I had hoped they’d started, and they said they had, but this is not evidence of that.

 

Advanced land, sea, air and space transport technologies

I’ll be speaking at the Advanced Engineering conference in Helsinki at the end of May. My topic will be potential solutions for future transport, covering land, sea, air and space. These are all areas where I’ve invented new approaches. In my 1987 BT life as a performance engineer, I studied the potential to increase road capacity by a factor of 5 by using driverless pod technology, mimicking the packet switching approach we were moving towards in telecomms. This is very different from the self-driving systems currently in fashion, because dumb pods would be routed by smart infrastructure rather than having their own AI/sensor systems, so the pods could be extremely cheap and packed very closely together to get a huge performance benefit, using up to 85% of the available space. We’re now seeing a few prototypes of such dumb pod systems being trialled.

It was also obvious even in the 1980s that the same approach could be used on rail, increasing capacity from today’s typical 0.4% occupancy to 80%+, an improvement factor of 200. The same pods could be used on rail and road, and on rail, pods could be clumped together to make virtual trains, so that they could mix with existing conventional trains during a long transition period to a more efficient system. In the early 2000s, we realised that pods could be powered by induction coils in the road surface. More recently, with the discovery of graphene, it became clear that graphene induction devices could be very advantageous over copper or aluminium ones because they deter metal theft, and also that linear induction could be used to actually propel the pods and in due course even to levitate them, so that future pods wouldn’t need engines or wheels, let alone on-board AI and sensor systems.

We thus end up with the prospect of a far-future ground transport system that offers 5-15 times today’s road capacity and up to 200 times rail capacity, virtually free of accidents and congestion.
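As a quick sanity check on those factors, here is a back-of-envelope calculation. The occupancy figures are the ones quoted above; the implied current road utilisation is my own derived figure, not a measured one.

```python
# Back-of-envelope check of the capacity-improvement factors quoted above.
current_rail_occupancy = 0.004   # ~0.4% of rail track occupied today
pod_rail_occupancy = 0.80        # closely packed dumb pods

rail_factor = pod_rail_occupancy / current_rail_occupancy
print(f"Rail capacity improvement: ~{rail_factor:.0f}x")

# Road: a 5x gain while pods use 85% of available space implies current
# utilisation of roughly 85/5 = 17% (a derived figure, not from any study).
implied_current_road_use = 0.85 / 5
print(f"Implied current road utilisation: {implied_current_road_use:.0%}")
```

The numbers hang together: 80% occupancy over 0.4% is exactly the factor of 200 claimed.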

Advanced under-sea transport could adopt supercavitation technology that is already in use and likely to develop quickly in coming decades. Some sources suggest that it may even be possible to travel more easily under water than through air. Again, if graphene is available in large quantity at reasonable cost, it would be possible to do away with the need for powerful engines on board, this time by tethering pods together with graphene string.

Above certain speeds, a blunt surface in front of each pod would create a bubble enclosing the entire pod, greatly reducing drag. Unlike Hyperloop-style high-speed rail, tubes would not be required for these pods; instead, a continuous stream of many pods tethered together right across an ocean would make a high-capacity under-sea transport system. This would also be more environmentally friendly, using only electricity at the ends.

Another property of graphene is that it can be used to make carbon foam that is lighter than helium. Such material could float high in the stratosphere well above air lanes. With the upper surface used for solar power collection, and the bottom surface used as a linear induction mat, it will be possible to make inter-continental air lines that can propel sleds hypersonically, connected by tethers to planes far below.

High altitude solar array to power IT and propel planes

As well as providing pollution-free hypersonic travel, these air lines could also double as low satellite platforms for comms and surveillance.

As well as land, sea and air travel, we are now seeing rapid development of the space industry, but currently, getting into orbit uses very expensive rockets that dump huge quantities of water vapour into the high atmosphere. A 2017 invention called the Pythagoras Sling solves the problems of expense and pollution. Two parachutes are deployed (by small rockets or balloons) into the very high atmosphere, attached to hoops through which a graphene tether is threaded, one end connected to a ground-based winch and the other to the payload. The large parachutes have high enough drag to act as temporary anchors while the tether is pulled, propelling the payload up to orbital speed via an arc that leaves the final velocity horizontal, as needed to achieve orbit.

With re-usable parts, relatively rapid redeployment and only electricity as the power supply, the sling could reduce costs by a factor of 50-100 over the current state of the art, greatly accelerating space development without high-altitude water vapour risking climate change effects.
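To see why an all-electric launch could be so cheap, a rough energy calculation helps. The orbital speed and electricity price below are my own illustrative assumptions, not figures from the Sling paper, and the result ignores drag, gravity losses and motor inefficiency entirely.

```python
# Illustrative energy cost of accelerating 1 kg to orbital speed electrically.
# Assumed values: LEO speed ~7.8 km/s, electricity at 0.15 GBP/kWh; losses ignored.
v_orbit = 7800.0                       # m/s, approximate low-Earth-orbit speed
price_per_kwh = 0.15                   # assumed electricity price, GBP

ke_joules = 0.5 * 1.0 * v_orbit ** 2   # kinetic energy of a 1 kg payload
ke_kwh = ke_joules / 3.6e6             # 1 kWh = 3.6 MJ
cost = ke_kwh * price_per_kwh

print(f"Kinetic energy: {ke_kwh:.1f} kWh per kg")
print(f"Raw electricity cost: {cost:.2f} GBP per kg")
```

Even allowing a factor of ten for losses, the raw energy bill is pounds per kilogram, not the thousands of dollars per kilogram of chemical rocketry.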

The winch design for the Pythagoras Sling uses an ‘inverse rail gun’ electromagnetic puller to avoid the massive centrifugal forces of a rotating drum. The inverse rail gun can be scaled up indefinitely, so it also offers good potential for interplanetary travel. With Mars travel on the horizon, the prospect of journey times of months is not appealing, but a system using well-spaced motors pulling a graphene tether millions of km long is viable. A 40,000-ton graphene tether could be laid out in space in a line 6.7M km long and, using solar power, could propel a 2-ton capsule at 5g up to an exit speed of 800km/s, reaching Mars in as little as 5-12 days.
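Those figures are easy to sanity-check with basic kinematics: under constant 5g, the distance needed to reach 800km/s is v²/2a, which comes out close to the 6.7M km tether length quoted (the small difference presumably allows some margin).

```python
# Kinematic check of the tether figures: constant 5g up to 800 km/s exit speed.
g = 9.81                       # m/s^2
a = 5 * g                      # 5g acceleration
v_exit = 800e3                 # exit speed, m/s

tether_needed = v_exit**2 / (2 * a)   # from v^2 = 2*a*s
time_under_accel = v_exit / a

print(f"Tether length needed: {tether_needed / 1e9:.2f} million km")
print(f"Time under acceleration: {time_under_accel / 3600:.1f} hours")
```

So the capsule spends only about four and a half hours under acceleration; the rest of the journey is an unpowered coast at 800km/s.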

At the far end, a folded graphene net could intercept and slow the capsule at 5g into a chosen orbit around Mars. As well as not being prohibitively expensive, this system would be completely reusable and, since it needs no fuel, would be a very clean and safe way of getting crew and materials to a Mars colony.

 

Beyond VR: Computer assisted dreaming

I first played with VR in 1983/1984 while working in the missile industry. Back then we didn’t call it VR, we just called it simulation, but it was actually more intensive than VR, just as proper flight simulators are. Our office was a pair of 10m-wide domes onto which video could be projected, built decades earlier, in the 1950s I think. One dome had a normal floor; the other had a hydraulic platform that could simulate being on a ship. The subject would stand on whichever surface was appropriate and would see pretty much exactly what they would see in a real battlefield. The missile launcher used for simulation was identical to a real one and showed exactly the same image as a real one would. The real missile was not present of course, but its weight was simulated, and when the fire button was pressed, a 140dB bang was injected into the headset while weights and pulleys compensated for the 14kg of missile weight suddenly vanishing from the shoulder. The experience was therefore pretty convincing, and with the loud bang and suddenly changing weight, it was almost as hard to stand steady and keep the system on target as it would be in real life – only the presumed fear and knowledge of the reality of the situation were different.

Back then in 1983, as digital supercomputers had only just taken over from analog ones for simulation, it was already becoming obvious that this kind of computer simulation would one day allow ‘computer assisted dreaming’. (That’s one of the reasons I am irritated when Jaron Lanier is credited with inventing VR – highly realistic simulators, and the VR ideas that sprang obviously from them, had already been around for decades. At best, all he ‘invented’ was a catchy name for a lower-cost, lower-quality, less intense simulator. The real inventors were those who made the first-generation simulators long before I was born, and the basic idea of VR had already been very well established.)

‘Computer assisted dreaming’ may well be the next phase of VR. Today in conventional VR, people are immersed in a computer-generated world produced by a computer program (usually) written by others. Via trial and feedback, programmers make their virtual worlds better. As AI and sensor technology continue their rapid progress, this is very likely to change to make worlds instantly responsive to the user. By detecting user emotions, reactions, gestures and even thoughts and imagination, it won’t be long before AI can produce a world in real time that depends on those thoughts, imagination and emotions, rather than putting the user in a pre-designed virtual world. That world would depend largely on your own imagination, upskilled by external AI. You might start off imagining you’re on a beach, then the AI might add to it by injecting all sorts of things it knows you might enjoy from previous experiences. As you respond to those, it picks up on the things you like or don’t like, and the scene continues to adapt and evolve, to make it more or less pleasant, more or less exciting or more or less challenging, depending on your emotional state, external requirements and what it thinks you want from the experience. It would be very like being in a dream – computer assisted lucid dreaming, exactly what I wanted to make back in 1983 after playing in that simulator.
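The loop described above can be sketched in a few lines. Everything here is a hypothetical placeholder – read_emotions() stands in for whatever biometric sensing ends up being used, and the scene is just a dictionary – but it shows the basic sense/adapt/render cycle that distinguishes this from a pre-scripted virtual world.

```python
import random

def read_emotions():
    # Placeholder for real sensing (voice stress, heart rate, gaze, EEG...).
    return {"excitement": random.random(), "stress": random.random()}

def adapt_scene(scene, emotions):
    # The key idea: the world reacts to the user's state, not a fixed script.
    if emotions["stress"] > 0.7:
        scene["intensity"] = max(0.0, scene["intensity"] - 0.1)  # calm it down
    elif emotions["excitement"] < 0.3:
        scene["intensity"] = min(1.0, scene["intensity"] + 0.1)  # liven it up
    return scene

scene = {"setting": "beach", "intensity": 0.5}   # the user's starting imagination
for _ in range(20):                              # one iteration per frame/tick
    scene = adapt_scene(scene, read_emotions())

print(scene)
```

A real system would adapt far more than a single intensity dial, of course, but the shape of the loop – sense, adapt, render, repeat – would be the same.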

Most people enjoy occasional lucid dreams, where they realise they are dreaming and can then decide what happens next. Making VR do exactly that would be better than being trapped in someone else’s world. You could still start off with whatever virtual world you bought, a computer game or training suite perhaps, but it could adapt to you, your needs and desires to make it more compelling and generally better.

Even in shared experiences like social games, experiences could be personalised. Often all players need to see the same enemies in the same locations in the same ways to make it fair, but that doesn’t mean that the situation can’t adapt to the personalities of those playing. It might actually improve the social value if each time you play it looks different because your companions are different. You might tease a friend if every time you play with them, zombies or aliens always have to appear somehow, but that’s all part of being friends. Exploring virtual worlds with friends, where you both see things dependent on your friend’s personality would help bonding. It would be a bit like exploring their inner world. Today, you only explore the designer’s inner world.

This sort of thing would be a superb development and creativity tool. It could allow you to explore a concept you have in your head, automatically feeding in AI upskilling to amplify your own thoughts and ideas, showing you new paths to explore and helping you do so. The results would still be extremely personal to you, but you on a good day. You could accomplish more, have better visions, imagine more creative things, do more with whatever artistic talent you have. AI could even co-create synthetic personas, make virtual friends you can bond with, share innermost thoughts with, in total confidence (assuming the company you bought the tool from is trustworthy and isn’t spying on you or selling your details, so maybe best not to buy it from Facebook then).

And it would have tremendous therapeutic potential too. You could explore and indulge both enjoyable and troublesome aspects of your inner personality, to build on the good and alleviate or dispel the bad. You might become less troubled, less neurotic, more mentally healthy. You could build your emotional and creative skills. You could become happier and more fulfilled. Mental health improvement potential on its own makes this sort of thing worth developing.

Marketers would obviously try to seize control as they always do, and advertising is already adapting to VR and will continue into its next phases of development. Your own wants and desires might help guide the ‘dreaming’, but marketers will inevitably have some control over what else is injected, and will influence algorithms and AI in how it chooses how to respond to your input. You might be able to choose much of the experience, but others will still want and try to influence and manipulate you, to change your mindset and attitudes in their favour. That will not change until the advertising business model changes. You might be able to buy devices or applications that are entirely driven by you and you alone, but it is pretty certain that the bulk of products and services available will be at least partly financed by those who want to have some control of what you experience.

Nevertheless, computer-assisted dreaming could be a much more immersive and personal experience than VR, being more like an echo of your own mind and personality than an external vision, more your own creation, less someone else’s. In fact, echo sounds a better term too. Echo reality, ER; or maybe personal reality, pereal; or mental echo, ME. Nah, maybe we need Lanier to invent a catchy name again; he is good at that. That 1983 idea could soon become reality.

 

High speed transatlantic submarine train

In 1863, Jules Verne wrote about the idea of suspended transatlantic tunnels through which trains could be sent using air pressure. Pneumatic tube delivery was a fashionable idea then, and small-scale pneumatic delivery systems were commonplace until the late 20th century – I remember a few shops using them to transport change around. In 1935, the film ‘The Tunnel’ featured another high-speed transatlantic tunnel, as did 1972’s ‘Tunnel Through the Deeps’. Futurists have often discussed high-speed mass transit systems, often featuring maglev and vacuums (no, Elon Musk didn’t invent the idea; his Hyperloop is justifiably famous for resurfacing and developing this very old idea and may see it finally implemented).

Anyway, I have read quite a bit about supercavitation over the last few years. First developed in 1960 as a military idea for sending torpedoes at high speed, it was successfully implemented in 1972 and has developed somewhat since. Cavitation happens when a surface, such as a propeller blade, moves through water so fast that a cavity is left until the water has a chance to close back in. As it does, the resultant shock wave can damage the propeller surface and cause wear. In supercavitation, the cavity is deliberate, and the system is designed so that the cavity encloses the entire projectile. In 2005, the first proposal for transporting people emerged: DARPA’s Underwater Express Program, designed to transport small groups of Navy personnel at speeds of up to 100 knots. Around that time, a German supercavitating torpedo was reaching speeds of 250mph.

Some promising articles suggest that even supersonic speeds are achievable under water, with less friction than travelling through air. Achieving and then maintaining such high speeds currently requires sophisticated propulsion mechanisms, but not for much longer. I believe the propulsion problem can be engineered away by pulling capsules with a strong tether. That would be utterly useless for a torpedo of course, but for a transport system it would be absolutely fine.
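For a feel of the physics, the standard cavitation number σ = (p − p_v) / (½ρv²) gives the speed at which a full cavity becomes plausible. Taking a rule-of-thumb threshold of σ ≈ 0.1 – my assumption, since real designs also use gas injection and careful nose shaping – the required speed near the surface comes out in the same ballpark as the 100-knot figure DARPA was targeting.

```python
import math

# Speed needed near the surface for a cavitation number sigma of ~0.1.
# sigma = (p_ambient - p_vapour) / (0.5 * rho * v^2), so v = sqrt(2*dp/(rho*sigma)).
p_ambient = 101_325.0    # Pa, atmospheric pressure at the surface
p_vapour = 2_339.0       # Pa, water vapour pressure at ~20 C
rho = 998.0              # kg/m^3, fresh water density
sigma = 0.1              # rough supercavitation threshold (assumed)

v = math.sqrt(2 * (p_ambient - p_vapour) / (rho * sigma))
print(f"Required speed: {v:.0f} m/s (~{v * 1.944:.0f} knots)")
```

Note that ambient pressure rises with depth, so deeper routes would need higher speeds or injected gas to sustain the cavity.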

Transatlantic traffic is quite high, and if a system cheaper and more environmentally friendly than air travel were available, it would undoubtedly increase. My idea is to use a long string of capsules attached to a long graphene cable, pulled in a continuous loop at very high speed. Capsules would be filled at stations, accelerated to speed and attached to the cable for their transatlantic journey, then detached, decelerated and their passengers or freight unloaded. Graphene cable would be 200 times stronger than steel, so making such a cable is feasible.

The big benefit of such a system is that no evacuated tube is needed. The cable and capsules would travel through the water directly. Avoiding the need for an expensive and complex tube containing a vacuum, an electromagnetic propulsion system and a power supply would greatly reduce cost. All of the pulling force for a cable-based system would be applied at the ends.
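The post doesn’t fix a cruise speed, so purely for illustration, here are crossing times for a roughly 5,500km transatlantic route at a few assumed speeds. All three speeds are my own examples, not design figures.

```python
# Illustrative transatlantic crossing times; distance and speeds are assumptions.
distance_km = 5500                     # rough transatlantic route length

for speed_kmh in (500, 1000, 2000):    # assumed cruise speeds
    hours = distance_km / speed_kmh
    print(f"{speed_kmh:>4} km/h -> {hours:.1f} hours")
```

Even the modest end of that range would be competitive with flying once check-in and airport overheads are counted.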

Graphene cable doesn’t yet exist, but it will one day. I doubt if current supercavitation research is up to the job either, but that’s quite normal for any novel engineering project. Engineers face new problems and solve them every day. By the time the cable is feasible, we will doubtless be more knowledgeable about supercavitation too. So while it’s a bit early to say it will definitely become reality, it is certainly not too early to start thinking about it. Some future Musk might well be able to pull it off.

People are becoming less well-informed

The Cambridge Analytica story has exposed a great deal about our modern society. They allegedly obtained access to 50M Facebook records to enable Trump’s team to target users with personalised messages.

One of the most interesting aspects is that unless they only employ extremely incompetent journalists, the news outlets making the biggest fuss about it must be perfectly aware of reports that Obama’s campaign appears to have done much the same, but on a much larger scale, back in 2012, yet they are keeping very quiet about it. According to Carol Davidsen, a senior Obama campaign staffer, Facebook allowed Obama’s team to suck out the whole social graph, because ‘they were on our side’, before closing the access to prevent Republicans using the same techniques. Trump’s campaign’s 50M looks almost amateur by comparison. I don’t like Trump, and I did like Obama before the halo slipped, but it seems clear to anyone who checks media across the political spectrum that both sides try their best to use social media to target users with personalised messages, and both sides are willing to bend rules if they think they can get away with it.

Of course all competent news media are aware of it. The reason some are not talking about earlier Democrat misuse but some others are is that they too all have their own political biases. Media today is very strongly polarised left or right, and each side will ignore, play down or ludicrously spin stories that don’t align with their own politics. It has become the norm to ignore the log in your own eye but make a big deal of the speck in your opponent’s, but we know that tendency goes back millennia. I watch Channel 4 News (which broke the Cambridge Analytica story) every day but although I enjoy it, it has a quite shameless lefty bias.

So it isn’t just the parties themselves that will try to target people with politically massaged messages, it is quite the norm for most media too. All sides of politics since Machiavelli have done everything they can to tilt the playing field in their favour, whether it’s use of media and social media, changing constituency boundaries or adjusting the size of the public sector. But there is a third group to explore here.

Facebook of course has full access to all of its 2.2Bn users’ records and social graph, and it is not squeaky-clean neutral in its handling of them. Facebook has often been in the headlines over the last year or two thanks to its own political biases, with strongly weighted algorithms filtering or prioritising stories according to their political alignment. Like most IT companies, Facebook has a left lean. (I don’t quite know why IT skills should correlate with political alignment, unless it’s that most IT staff tend to be young, so lefty views implanted at school and university have had less time to be tempered by real-world experience.) It isn’t just Facebook of course. While Google has pretty much failed in its attempt at social media, it also has comprehensive records on most of us from search, browsing and Android, and, via control of the algorithms that determine what appears in the first pages of a search, it is also able to tailor those results to what it knows of our personalities. Twitter has unintentionally created a whole world of mob-rule politics and justice, but in format it is rapidly evolving into a wannabe Facebook. So, the IT companies have themselves become major players in politics.

A fourth player is now emerging – artificial intelligence – and it will grow rapidly in importance into the far future. Simple algorithms have already been upgraded to assorted neural network variants, and this is already causing problems, with accusations of bias from all directions. I blogged recently about Fake AI: https://timeguide.wordpress.com/2017/11/16/fake-ai/, concerned that when AI analyses large datasets and comes up with politically incorrect insights, this is now being interpreted as something that needs to be fixed – a case not of shooting the messenger, but of forcing the messenger to wear tinted spectacles. I would argue that AI should be allowed to reach whatever insights it can from a dataset, and it is then our responsibility to decide what to do with those insights. If that involves introducing a bias into implementation, that can be debated, but it should at least be transparent, and not hidden inside the AI itself. I am now concerned that by trying to ‘re-educate’ the AI, we may instead be indoctrinating it, locking today’s politics and values into future AI and all the systems that use it. Our values will change, but some foundation-level AI may be too opaque to repair fully.

What worries me most though isn’t that these groups try their best to influence us. It could be argued that in free countries, with free speech, anybody should be able to use whatever means they can to try to influence us. No, the real problem is that recent (last 25 years, but especially the last 5) evolution of media and social media has produced a world where most people only ever see one part of a story, and even though many are aware of that, they don’t even try to find the rest and won’t look at it if it is put before them, because they don’t want to see things that don’t align with their existing mindset. We are building a world full of people who only see and consider part of the picture. Social media and its ‘bubbles’ reinforce that trend, but other media are equally guilty.

How can we shake society out of this ongoing polarisation? It isn’t just that politics becomes more aggressive. It also becomes less effective. Almost all politicians claim they want to make the world ‘better’, but they disagree on what exactly that means and how best to do so. But if they only see part of the problem, and don’t see or understand the basic structure and mechanisms of the system in which that problem exists, then they are very poorly placed to identify a viable solution, let alone an optimal one.

Until we can fix this extreme blinkering, our world cannot get as ‘better’ as it should.

 

Mars trips won’t have to take months

It is exciting seeing the resurgence in interest in space travel, especially the prospect that Mars trips are looking increasingly feasible. Every year, far-future projects come a year closer. Mars has been on the agenda for decades, but now the tech needed is coming over the horizon.

You’ve probably already read about Elon Musk’s SpaceX plans, so I won’t bother repeating them here. The first trips will be dangerous but the passengers on the first successful trip will get to go down in history as the first human Mars visitors. That prospect of lasting fame and a place in history plus the actual experience and excitement of doing the trip will add up to more than enough reward to tempt lots of people to join the queue to be considered. A lucky and elite few will eventually land there. Some might stay as the first colonists. It won’t be long after that before the first babies are born on Mars, and their names will certainly be remembered, the first true Martians.

I am optimistic that the costs and travel times involved in getting to Mars can be reduced enormously. Today’s space travel relies on rockets, but my own invention, the Pythagoras Sling, could reduce the costs of getting materials and people to orbit by a factor of 50 or 100 compared to SpaceX rockets, which are already far cheaper than NASA’s. A system introduction paper can be downloaded from:

https://carbondevices.files.wordpress.com/2017/09/pythagoras-sling-article.pdf

Sadly, in spite of the Sling obviously being far more feasible and shorter-term than a space elevator, we have not yet been able to get our paper published in a space journal, so that is the only source so far.

This picture shows one implementation for non-human payloads, but tape length and scale could be increased to allow low-g human launches some day, or more likely, early systems would allow space-based anchors to be built with different launch architecture for human payloads.

The Sling needs graphene tape, a couple of parachutes or a floating drag platform and a magnetic drive to pull the tape, using standard linear motor principles as used in linear induction motors and rail guns. The tape is simply attached to the rocket and pulled through two high altitude anchors attached to the platforms or parachutes. Here is a pic of the tape drive designed for another use, but the principle is the same. Rail gun technology works well today, and could easily be adapted into this inverse form to drive a suitably engineered tape at incredible speed.

All the components are reusable, but shouldn’t cost much compared to heavy rockets anyway. The required parachutes exist today, but we don’t have graphene tape or the motor to pull it yet. As space industry continues to develop, these will come. The Space Elevator will need millions of tons of graphene, the Sling only needs around 100 kilograms so will certainly be possible decades before a space elevator. The sling configuration can achieve full orbital speeds for payloads using only electrical energy at the ground, so is also much less environmentally damaging than rocketry.
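To put the ‘only electrical energy at the ground’ claim in perspective, here is a rough back-of-envelope calculation of the ideal energy needed per kilogram to reach low Earth orbit. The numbers are my own illustrative approximations, not figures from the Sling paper:

```python
import math

# Ideal (lossless) energy per kg to reach a 400 km circular orbit,
# assuming full orbital speed is delivered electrically at the ground.
G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
M_EARTH = 5.972e24     # kg
R_EARTH = 6.371e6      # m
LEO_ALT = 400e3        # m, a typical low orbit altitude

r = R_EARTH + LEO_ALT
v_orbit = math.sqrt(G * M_EARTH / r)           # circular orbital speed
kinetic = 0.5 * v_orbit**2                     # J per kg
potential = G * M_EARTH * (1/R_EARTH - 1/r)    # climb energy, J per kg
total_kwh = (kinetic + potential) / 3.6e6      # J -> kWh

print(f"Orbital speed: {v_orbit/1000:.1f} km/s")
print(f"Ideal energy: {total_kwh:.1f} kWh per kg")
print(f"At $0.10/kWh: ${total_kwh * 0.10:.2f} per kg, losses excluded")
```

Even allowing generously for drive and drag losses, a few tens of kWh per kilogram is negligible next to rocket launch prices, which is the point of the Sling.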

Using tech such as the Sling, material can be put into orbit to make space stations and development factories for all sorts of space activity. One project that I would put high on the priority list would be another tape-pulling launch system; an early architecture suggestion follows.

Since it will be in space, laying tape out in a long line would be no real problem, even millions of km, and with motors arranged periodically along the length, a long tape pointed in the right direction could launch a payload towards a Mars interception system at extreme speeds. We need to think big, since the distances traveled will be big. A launch system weighing 40,000 tons would be large-scale engineering but not exceptional, and although graphene today is very expensive, as with any novel material it will become much cheaper as manufacturing technology catches up (if the graphene filament print heads I suggest work as I hope, graphene filament could be made at 200 m/s and woven into yarn by a spinneret as it emerges from multiple heads).

In the following pics, carbon atoms are fed through nanotubes with the right timing, speed and charges to combine into graphene as they emerge. The second pic shows why the nanotubes need to be tilted towards each other: otherwise the molecular geometry doesn’t work, and this requirement limits the heads to making thin filaments just two or three carbon rings wide. The second pic also mentions carbon foam, which would be perfect for making stratospheric floating platforms as an alternative to using parachutes in the Sling system.

Graphene filament head, ejects graphene filament at 200m/s.

A large ship is of that magnitude, as are some buildings or bridges. Such a launch system would allow people to get to Mars in 5-12 days, and payloads of g-force-tolerant supplies such as water could be sent to arrive in a day. The intercept system at the Mars end would need to be of similar size to catch the payload and decelerate it into Mars orbit. The systems at both ends can be designed to be used for launch or intercept as needed.
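A quick sanity check of the 5-12 day figure, assuming straight-line coasting at constant speed and a close Mars approach of about 75 million km. Both are simplifying assumptions of mine that ignore orbital mechanics entirely:

```python
# Implied cruise speeds for the 5-12 day trip times, plus the
# acceleration run needed to reach each speed at a human-tolerable 3 g.
MARS_DIST = 75e9      # m, a typical close-approach distance (assumed)
G_ACCEL = 9.81        # m/s^2

for days in (5, 12):
    v = MARS_DIST / (days * 86400)        # constant cruise speed, m/s
    run_up = v**2 / (2 * 3 * G_ACCEL)     # distance to reach v at 3 g
    print(f"{days:2d}-day trip: cruise {v/1000:,.0f} km/s, "
          f"launch run of {run_up/1e6:,.0f} thousand km at 3 g")
```

The run-up lengths that come out, a few hundred thousand km at a gentle 3 g, sit comfortably within the ‘millions of km’ tape lengths discussed above.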

I’ve been a systems engineer for 36 years and a futurologist for 27 of those. The system solutions I propose should work if there is no better solution available, but since we’re talking about the far future, it is far more likely that better systems will be invented by smarter engineers or AIs by the time we’re ready to use them. Rocketry will probably get us through to the 2040s but after that, I believe these solutions can be made real and Mars trips after that could become quite routine. I present these solutions as proof that the problems can be solved, by showing that potential solutions already exist. As a futurologist, all I really care about is that someone will be able to do it somehow.

 

So, there really is no need to think in terms of months of travel each way, we should think of rapid supply chains and human travel times around a week or two – not so different from the first US immigrants from Europe.

New book: Fashion Tomorrow

I finally finished the book I started 2 years ago on future fashion, or rather future technologies relevant to the fashion industry.

It is a very short book, more of a quick guide at 40k words, less than half as long as my other books. It covers women’s fashion mostly, though some applies to men too. I would never have finished writing a full-sized book on this topic, and I’d rather put out something now, short and packed full of ideas that are (mostly) still novel, than delay until they are commonplace. It is aimed at students and people working in fashion design, who have loads of artistic and design talent but want to know what technology opportunities are coming that they could soon exploit; anyone interested in fashion who isn’t technophobic should also find it interesting. Some sections discussing intimate apparel contain adult comments, so the book is unsuitable for minors.

It started as a blog, then I realised I had quite a bit more stuff I could link together, so I made a start, then got sidetracked, for 20 months! I threw away 75% of the original contents list and tidied it up to release a short guide instead. I wanted to put it out for free, but 99p or 99c seems to be the lowest price you can start at, though I doubt that would put anyone off except the least interested readers. As with my other books, I’ll occasionally make it free.

Huge areas I left out include swathes of topics on social, political, environmental and psychological fashions, impacts of AI and robots, manufacturing, marketing, distribution and sales. These are all big topics, but I just didn’t have time to write them all up so I just stuck to the core areas with passing mentions of the others. In any case, much has been written on these areas by others, and my book focuses on things that are unique, embryonic or not well covered elsewhere. It fills a large hole in fashion industry thinking.

 

Why superhumans are inevitable, and what else comes in the box

Do we have any real choice in the matter of making super-humans? 20 years ago, I estimated 2005 as the point of no return, and nothing since then has changed my mind on that date. By my reckoning, we are already inevitably committed to designer babies, ebaybies, super-soldiers and super-smart autonomous weapons, direct brain-machine links, electronic immortality, new human races, population explosion, inter-species conflicts and wars with massively powerful weaponry, superhuman conscious AI, smart bacteria, and the only real control we have is relatively minor adjustments on timings. As I was discussing yesterday, the technology potential for this is vast and very exciting, nothing less than a genuine techno-utopia if we use the technologies wisely, but optimum potential doesn’t automatically become reality, and achieving a good outcome is unlikely if many barriers are put in its way.

In my estimation, we have already started the countdown to this group of interconnected technologies – we will very likely get all of them, and we must get ready for the decisions and impacts ahead. At the moment, our society is a small child about to open its super-high-tech xmas presents while fighting with its siblings. Those presents will give phenomenal power far beyond the comprehension of the child or its emotional maturity to equip it to deal with the decisions safely. Our leaders have already squandered decades of valuable preparation time by ignoring the big issues to focus on trivial ones. It is not too late to achieve a good ending, but it won’t happen by accident and we do need to make preparations to avoid pretty big problems.

Both hard and soft warfare – the sword and the pen – already use rapidly advancing AI, and the problems are already running ahead of what the owners intended.

Facebook, Twitter, Instagram and other media giants all have lots of smart people and presumably they mean well, but if so, they have certainly been naive. They maybe hoped to eliminate loneliness, inequality, and poverty and create a loving interconnected global society with global peace, but instead created fake news, social division and conflict and election interference. More likely they didn’t intend either outcome, they just wanted to make money and that took priority over due care and attention.

Miniaturised swarming smart-drones are already the subject of a new arms race that will deliver almost un-killable machine adversaries by 2050. AI is separately in other arms races to make super-smart AI and super-smart soldiers. This is key to the 2005 point of no return. It was around 2005 that we reached the level of technology where future AI development, all the way to superhuman machine consciousness, could be done by individuals, mad scientists or rogue states, even if major powers had banned it. Before 2005, there probably wasn’t quite enough knowledge on the net to do that. In 2018, lots of agencies have already achieved superiority to humans in niche areas, and other niches will succumb one by one until the whole field of human capability is covered. The first machines to behave in ways not fully understood by humans arrived in the early 1990s; in 2018, neural nets already make lots of decisions that are at least partly obscure to humans.

This AI development trend will take us to superhuman AI, and it will be able to accelerate development of its own descendants to vastly superhuman AI, fully conscious, with emotions and its own agendas. Humans will then need protection against being wiped out by superhuman AI. There are only three ways we could get that: redesign the brain biologically to be far smarter, essentially impossible in the time-frame; design ways to link our brains to machines, so that we have direct access to the same intelligence as the AIs, so that a gulf doesn’t appear and we can remain relatively safe; or pray for super-smart aliens to come to our aid, not the best prospect.

Therefore we will have no choice but to make direct brain links to super-smart AI. Otherwise we risk extinction. It is that simple. We have some idea how to do that – nanotech devices inside the brain linking to each and every synapse that can relay electrical signals either way, a difficult but not impossible engineering problem. Best guesses for time-frame fall in the 2045-2050 range for a fully working link that not only relays signals between your organic brain and an IT replica, but by doing so essentially makes external IT just another part of your brain. That conveys some of the other technology gifts of electronic immortality, new varieties of humans, smart bacteria (which will be created during the development path to this link) along with human-variant population explosion, especially in cyberspace, with androids as their physical front end, and the inevitable inter-species conflicts over resources and space – trillions of AI and human-like minds in cyberspace that want to do things in the real world cannot be assumed to be willingly confined just to protect the interests of what they will think of as far lesser species.

Super-smart AI, or humans with almost total capability to design whatever synthetic biology is needed to achieve any biological feature, will create genetic listings for infinite potential offspring, simulate them, give some of them cyberspace lives, assemble actual embryos for some of them, and bring us designer babies. Already in 2018, you can pay to get a DNA listing, and blend it in any way you want with the listing of anyone else. It’s already possible to make DNA listings for potential humans and sell them on ebay, hence the term ebaybies. That is perfectly legal, still, but I’ve been writing and lecturing about them since 2004. Today they would just be listings, but we’ll one day have the tech to simulate them, choose ones we like and make them real, even some that were sold as celebrity collector items on ebay. It’s not only too late to start regulating this kind of tech, our leaders aren’t even thinking about it yet.

These technologies are all linked intricately, and their foundations are already in place, with much of the building on those foundations under way. We can’t stop any of these things from happening, they will all come in the same basket. Our leaders are becoming aware of the potential and the potential dangers of the AI positive feedback loop, but at least 15 years too late to do much about it. They have been warned repeatedly and loudly but have focused instead on the minor politics of the day that voters are aware of. The fundamental nature of politics is unlikely to change substantially, so even efforts to slow down the pace of development or to limit areas of impact are likely to be always too little too late. At best, we will be able to slow runaway AI development enough to allow direct brain links to protect against extinction scenarios. But we will not be able to stop it now.

Given inevitability, it’s worth questioning whether there is even any point in trying. Why not just enjoy the ride? Well, the brakes might be broken, but if we can steer the bus expertly enough, it could be exciting and we could come out of it smelling of roses. The weak link is certainly the risk of super-smart AI, whether AI v humans or countries using super-smart AI to fight fiercely for world domination. That risk is alleviated by direct brain linkage, and I’d strongly argue necessitates it, but that brings the other technologies. Even if we decide not to develop it, others will, so one way or another, all these techs will arrive, and our future late century will have this full suite of techs, plus many others of course.

We need as a matter of extreme urgency to fix these silly social media squabbles and over-reactions that are pulling society apart. If we have groups hating each other with access to extremely advanced technology, that can only mean trouble. Tolerance is broken, sanctimony rules, the Inquisition is in progress. We have been offered techno-utopia, but current signs are that most people think techno-hell looks more appetizing and it is their free choice.

AIs of a feather flocking together to create global instability

Hawking and Musk have created a lot of media impact with their warnings about AI, so although terminator scenarios resulting from machine consciousness have been discussed, as have more mundane uses of non-conscious autonomous weapon systems, it’s worth noting that I haven’t yet heard them mention one major category of risks from AI – emergence. AI risks have been discussed frequently since the 1970s, and in the 1990s a lot of work was done in the AI community on emergence. Complex emergent patterns of behavior often result from interactions between entities driven by simple algorithms. Genetic algorithms were demonstrated to produce evolution, and simple neighbor-interaction rules were derived to illustrate flocking behaviors that make lovely screen-saver effects. Cellular automata were played with. In BT we invented ways of self-organizing networks and FPGAs, played with mechanisms that could be used for evolution and consciousness, and demonstrated managing networks via ANTs – autonomous network telephers – smart packets that would run up and down wires sorting things out all by themselves. In 1987 I discovered a whole class of ways of bringing down networks via network resonance, information waves and their much larger class of correlated traffic – still unexploited by hackers apart from simple DoS attacks. These ideas have slowly evolved since, and some have made it into industry or hacker toolkits, but we don’t seem to be joining the dots as far as risks go.
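For readers unfamiliar with emergence, a standard textbook illustration makes the point: a trivially simple local rule can produce complex, chaotic global behaviour. This is Wolfram’s elementary cellular automaton rule 30, not any of the BT systems mentioned above:

```python
# Elementary cellular automaton, rule 30: each cell updates from just
# its own state and its two neighbours, yet the global pattern is
# chaotic - a classic example of emergence from simple local rules.
RULE = 30
WIDTH, STEPS = 31, 15

cells = [0] * WIDTH
cells[WIDTH // 2] = 1  # single seed cell in the middle

for _ in range(STEPS):
    print("".join("#" if c else "." for c in cells))
    # neighbourhood index: left cell * 4 + self * 2 + right cell
    cells = [
        (RULE >> (cells[(i - 1) % WIDTH] * 4
                  + cells[i] * 2
                  + cells[(i + 1) % WIDTH])) & 1
        for i in range(WIDTH)
    ]
```

Nothing in the three-cell rule hints at the chaotic triangle-strewn pattern that emerges, which is precisely the problem with predicting networks of interacting AIs from their individual algorithms.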

I read an amusing article this morning by an ex-motoring-editor who was declined insurance because the AI systems used by insurance companies had labelled him as high risk, perhaps because he may have associated with people like Clarkson. Actually, he had no idea why; that was just his broker’s theory of how it might have happened. It’s a good article, well written, and covers quite a few of the dangers of allowing computers to take control.

http://www.dailymail.co.uk/sciencetech/article-5310031/Evidence-robots-acquiring-racial-class-prejudices.html

The article suggested how AIs in different companies might all come to similar conclusions about people or places or trends or patterns in a nice tidy positive feedback loop. That’s exactly the sort of thing that can drive information waves, which I demonstrated in 1987 can bring down an entire network in less than 3 milliseconds, in such a way that it would continue to crash many times when restarted. That isn’t intended by the algorithms, which individually ought to make good decisions, but which, interacting with one another, create the emergent phenomenon. Automated dealing systems are already pretty well understood in this regard, and mechanisms prevent frequent stock market collapses, but that is only one specific type of behavior in one industry that is protected. There do not seem to be any industry-wide mechanisms to prevent the rest of this infinite class of problems from affecting any or all of the rest, simultaneously.

As we create ever more deep learning neural networks, that essentially teach themselves from huge data pools, human understanding of their ‘mindsets’ decreases. They make decisions using algorithms that are understood at a code level, but the massive matrix of derived knowledge they create from all the data they receive becomes highly opaque. Often, even usually, nobody quite knows how a decision is made. That’s bad enough in a standalone system, but when many such systems are connected, produced and owned and run by diverse companies with diverse thinking, the scope for destructive forms of emergence increases geometrically.

One result could be gridlock. Systems fed with a single new piece of data could crash. My 3 millisecond result from 1987 would still stand, since network latency is the prime limiter. The first AI receives the data, alters its mindset accordingly, processes it, makes a decision and interacts with a second AI. This second one might have a different ‘prejudice’ so makes its own decision based on different criteria, and refuses to respond the way intended. A third one looks at the second’s decision and takes that as evidence that there might be an issue, and with its risk-averse mindset also refuses to act, and that inaction spreads through the entire network in milliseconds. Since the first AI thinks the data is all fine and the transaction should have gone ahead, it now interprets the inaction of the others as evidence that that type of data is somehow ‘wrong’, so itself refuses to process any more of that type, whether from its own operators or other parts of the system. It essentially adds its own outputs to the bad feeling, and the entire system falls into sulk mode. As one part of the infrastructure starts to shut down, it infects other connected parts, and our entire IT could fall into sulk mode – the entire global infrastructure. Since nobody knows how it all works, or what has caused the shutdown, it might be extremely hard to recover.
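The sulk-mode cascade can be sketched as a toy model. Everything here is invented purely for illustration: a ring of agents, each following a locally sensible risk-aversion rule, seeded with a single agent whose ‘prejudice’ differs:

```python
# Toy 'sulk mode' cascade: a ring of AIs, each refusing to act as soon
# as any neighbour refuses. One agent with a different 'prejudice'
# starts the refusal, and it spreads to the whole network.
# Purely illustrative - no real system is being modelled.
N = 50
neighbours = {i: [(i - 1) % N, (i + 1) % N] for i in range(N)}
refusing = {17}            # the one AI that judges the data 'wrong'

changed = True
while changed:             # keep passing over the ring until stable
    changed = False
    for i in range(N):
        if i not in refusing and any(n in refusing for n in neighbours[i]):
            refusing.add(i)
            changed = True

print(f"{len(refusing)}/{N} agents now refusing")
```

Each individual rule is defensible caution, yet the stable global state is total gridlock, which is exactly the emergent character of the scenario described above.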

Another possible result is a direct information wave, almost certainly triggered by a piece of fake news. Imagine our IT world in 5 years’ time, with all these super-smart AIs super-connected. A piece of fake news says a nuke has just been launched somewhere. Stocks would obviously decline, whatever the circumstances, so as the news spreads, everyone’s AIs take it on themselves to start selling shares before the inevitable collapse, which would trigger that collapse, except that the market safeguards won’t let it happen. BUT… the wave does spread, and all those individual AIs still want to dispose of those shares, or at least find out what’s happening, so they all start sending messages to one another, exchanging data, trying to find out what’s going on. That’s the information wave. They can’t sell shares or find out, because the network is going into overload, so they try even harder and force it into severe overload. So it falls over. When it comes back online, they all try again, crashing it again, and so on.
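The self-sustaining overload can also be sketched as a toy model, with all numbers arbitrary: agents send messages at a normal rate, a fake-news burst makes everyone ask what’s going on at once, and failed requests are retried harder, so each restart is immediately crashed again:

```python
# Toy retry storm: a fake-news burst pushes load past capacity, and
# because failed requests are retried harder, the network never
# recovers on its own. All numbers are arbitrary illustrations.
N_AGENTS = 1000
CAPACITY = 5000          # messages per tick the network can carry
rate = 1.0               # messages per agent per tick under normal load

for tick in range(8):
    if tick == 2:
        rate = 6.0       # burst: every agent asks what's going on at once
    load = N_AGENTS * rate
    down = load > CAPACITY
    print(f"tick {tick}: load {load:8.0f} -> {'DOWN' if down else 'ok'}")
    if down:
        rate *= 1.5      # frustrated agents retry harder next tick
    else:
        rate = max(1.0, rate * 0.5)   # traffic decays when requests succeed
```

The feedback only breaks if something external throttles the retries, which is why the restarted network keeps crashing in the scenario above.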

Another potential result is smartass AI. There is always some prat somewhere who sees an opportunity to take advantage and ruins it for everyone else by doing something like exploiting a small loophole in the law, or in this case, most likely, a prejudice our smartass AI has discovered in some other AI that means it can be taken advantage of by doing x, y, or z. Since nobody quite knows how any of their AIs are making their decisions, because their mindsets are too big and too complex, it will be very hard to identify what is going on. Some really unusual behavior is corrupting the system because some AI is going rogue somewhere somehow, but which one, where, how?

That one brings us back to fake news. That will very soon infect AI systems with their own varieties of fake news. Complex networks of AIs will have many of the same problems we are seeing in human social networks. An AI could become a troll just the same as a human, deliberately winding others up to generate attention or to drive a change of some parameter – any parameter – in its own favour. Activist AIs will happen because people make them to push human activist causes, but they will also do it all by themselves. Their analysis of the system will sometimes show them that a good way to get a good result is to cause problems elsewhere.

Then there’s climate change, weather, storms, tsunamis. I don’t mean real ones; I mean the system-wide result of tiny interactions of tiny waves and currents of data and knowledge in neural nets. Tiny effects in one small part of a system can interact in unforeseen ways with other parts of other systems nearby, creating maybe a breeze, which interacts with breezes in nearby regions to create hurricanes. I think that’s a reasonable analogy. Chaos applies to neural-net societies just as it does to climate, and the equivalents of 50-year waves will cause equivalent havoc in IT.

I won’t go on with more examples; long blogs are awful to read. None of these requires any self-awareness, sentience or consciousness, call it what you will. All of these can easily happen through simple interactions of fairly trivial AI deep-learning nets. The level of interconnection already sounds like it may be becoming vulnerable to such emergence effects. Soon it definitely will be. Musk and Hawking have at least joined the party, and they’ll think more and more deeply in coming months. Zuckerberg apparently doesn’t believe in AI threats but now accepts the problems social media is causing. Sorry Zuck, but the kind of AI your company is messing with will also be subject to its own kinds of social media issues, not just in its trivial decisions on what to post or block, but actual inter-AI socializing issues. It might not try to eliminate humanity, but if it brings all of our IT to a halt and prevents rapid recovery, we’re still screwed.