Category Archives: security

AI could use killer drone swarms to attack people while taking out networks

In 1987 I discovered a whole class of security attacks that could knock out networks, which I called correlated traffic attacks: creating particular patterns of data packet arrivals from particular sources at particular times or intervals. We simulated two examples and verified the problem. One example was protocol resonance. I demonstrated that it was possible to push a system into a gross overload state with a single call, by spacing the packets at precise intervals. Their arrival caused a strong resonance in the bandwidth allocation algorithms, and the result was that network capacity was instantaneously reduced by around 70%. Another example was information waves, whereby a single piece of information appearing at a particular point could trigger a highly correlated wave of responses through its interaction with particular apps on mobile devices. The assumption was financially relevant data that would prompt AI on the devices to start requesting voluminous data, using up bandwidth, throwing the network into overload and very likely crashing it through initiation of rarely used software. When calls couldn’t get through, the devices would wait until the network recovered, then all simultaneously detect recovery and simultaneously try again, killing the net again, and again, until people were asked to turn their devices off and on again, thereby bringing randomness back into the system. Both of these examples could knock out certain kinds of networks, but they are just two of an infinite set of possibilities in the correlated traffic attack class.
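
To make the synchronised-retry behaviour concrete, here is a minimal toy simulation of the information-wave effect described above. It is purely illustrative: the capacity, device count and back-off figures are invented, not taken from the original 1987 work.

```python
import random

CAPACITY = 100      # requests the network can serve per tick (invented figure)
DEVICES = 1000      # devices that all want the same data after the trigger event
TICKS = 30

def simulate(jitter: bool) -> list:
    """Per-tick offered load when every device retries after an outage.

    jitter=False: all devices detect recovery in the same tick and retry
    together (the correlated 'information wave'). jitter=True: each device
    adds a random delay, which breaks the correlation and lets the net clear.
    """
    next_try = [0] * DEVICES          # tick at which each device next attempts
    loads = []
    for t in range(TICKS):
        attempts = [i for i in range(DEVICES) if next_try[i] <= t]
        loads.append(len(attempts))
        overloaded = len(attempts) > CAPACITY
        for i in attempts:
            if overloaded:            # request fails, schedule a retry
                next_try[i] = t + 1 + (random.randint(0, 9) if jitter else 0)
            else:                     # served, stop retrying
                next_try[i] = TICKS + 1
    return loads

print("synchronised retries:", simulate(jitter=False)[:10])
print("randomised retries  :", simulate(jitter=True)[:10])
```

With synchronised retries the offered load never drops below capacity, so the network never recovers; with a little randomness the retries spread out and the backlog clears, which is essentially what asking people to switch their devices off and on achieved.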

Adversarial AI pits one AI against another, trying things at random or making small modifications until a particular situation is achieved, such as the second AI accepting an image as valid. It is possible, though I don’t believe it has been achieved yet, to use the technique to simulate a wide range of correlated traffic situations, seeing which ones achieve network resonance or overload, or which trigger particular desired responses from network management or control systems, via interactions with the network and its protocols, with commonly resident apps on mobile devices, or with computer operating systems.
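
As a sketch of how such a search might look, the loop below mutates a candidate packet-spacing pattern and keeps any mutation that stresses a completely fictitious network model more. The toy_network_load function and its “resonance near 7ms” are my own inventions for illustration; a real adversarial setup would score candidates against a full protocol, OS and app simulation.

```python
import random

def toy_network_load(intervals):
    """Fictitious stand-in for a network simulator: scores how badly a
    packet-spacing pattern stresses an imagined bandwidth allocator that
    'resonates' when spacings cluster near 7ms."""
    return sum(1.0 / (0.1 + abs(i - 7.0)) for i in intervals)

def adversarial_search(steps=5000, n_packets=20):
    """Keep random mutations of the spacing pattern whenever they raise the
    overload score - a crude stand-in for the adversarial search described above."""
    best = [random.uniform(1.0, 20.0) for _ in range(n_packets)]
    best_score = toy_network_load(best)
    for _ in range(steps):
        candidate = [max(0.1, x + random.gauss(0, 0.5)) for x in best]
        score = toy_network_load(candidate)
        if score > best_score:
            best, best_score = candidate, score
    return best, best_score

pattern, score = adversarial_search()
print(f"overload score {score:.1f}; spacings found: {sorted(round(x, 1) for x in pattern)[:5]}")
```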

Activists and researchers are already well aware that adversarial AI can be used to find vulnerabilities in face recognition systems and thereby prevent recognition, or to deceive autonomous car AI into seeing fantasy objects or not seeing real ones. As Noel Sharkey, the robotics expert, has been tweeting today, it will be possible to use adversarial AI to corrupt recognition systems used by killer drones, potentially to cause them to attack their controllers or innocents instead of their intended targets. I have to agree with him. But linking that corruption to the whole extended field of correlated traffic attacks greatly extends the range of mechanisms that can be used. It will be possible to exploit highly obscured interactions between network physical architecture, protocols and operating systems, network management, app interactions, and the entire sensor/IoT ecosystem, as well as software and AI systems using it. It is impossible to check all possible interactions, so no absolute defence is possible, but adversarial AI with enough compute power could randomly explore across these multiple dimensions, stumble across regions of vulnerability and drill down until grand vulnerabilities are found.

This could further be linked to apps used as highly invisible Trojans, offering high attractiveness to users with no apparent side effects, quietly gathering data to help identify potential targets, and simply waiting for a particular situation or command before signalling to the attacking system.

A future activist or terrorist group or rogue state could use such tools to make a multidimensional attack. It could initiate an attack, using its own apps to identify and locate targets, control large swarms of killer drones or robots to attack them, simultaneously executing a cyberattack that knocks out selected parts of the network, crashing or killing computers and infrastructure. The vast bulk of this could be developed, tested and refined offline, using simulation and adversarial AI approaches to discover vulnerabilities and optimise exploits.

There is already debate about killer drones, mainly whether we should permit them and in what circumstances, but activists and rogue states won’t care about rules. Millions of engineers are technically able to build such things and some are not on your side. It is reasonable to expect that freely available AI tools will be used in such ways, using their intelligence to design, refine, initiate and control attacks using killer drones, robots and self-driving cars to harm us, while corrupting systems and infrastructure that protect us.

Worrying, especially since the capability is arriving just as everyone is starting to consider civil war.


Future Surveillance

This is an update of my last surveillance blog 6 years ago, much of which is common discussion now. I’ll briefly repeat key points to save you reading it.

They used to say

“Don’t think it

If you must think it, don’t say it

If you must say it, don’t write it

If you must write it, don’t sign it”

Sadly this wisdom is already as obsolete as Asimov’s Laws of Robotics. The last three lines have already been automated.

I recently read of new headphones designed to recognize thoughts so they know what you want to listen to. Simple thought recognition in various forms has been around for 20 years now. It is slowly improving, but with smart networked earphones we’re already providing an easy platform into which to sneak better monitoring and better thought detection, sold on convenience and ease of use of course.

You already know that Google and various other large companies have very extensive records documenting many areas of your life. It’s reasonable to assume that any or all of this could be demanded by a future government. I trust Google and the rest to a point, but not a very distant one.

Your phone, TV, Alexa, or even your networked coffee machine may listen in to everything you say, sending audio records to cloud servers for analysis, and you only have naivety as defense against those audio records being stored and potentially used for nefarious purposes.

Some next generation games machines will have 3D scanners and UHD cameras that can even see blood flow in your skin. If these are hacked or left switched on – and social networking video is one of the applications they are aiming to capture, so they’ll be on often – someone could watch you all evening, capture the most intimate body details, and film your facial expressions and gaze direction while you are looking at a known image on a particular part of the screen. Monitoring pupil dilation, smiles, anguished expressions etc could provide a lot of evidence about your emotional state, with a detailed record of what you were watching and doing at exactly that moment, and with whom. By monitoring blood flow and pulse via your Fitbit or smartwatch, and additionally monitoring skin conductivity, your level of excitement, stress or relaxation can easily be inferred. If given to the authorities, this sort of data might be used to identify pedophiles or murderers, by seeing which men are excited by seeing kids on TV or which get pleasure from violent games, and that is likely to be one of the justifications authorities will offer for using it.

Millimetre wave scanning was controversial when it was introduced in airport body scanners, but we have had no choice but to accept it and its associated abuses – the only alternative is not to fly. 5G uses millimetre wave too, and it’s reasonable to expect that the same people who can already monitor your movements in your home simply by analyzing your wi-fi signals will be able to do a lot better by analyzing 5G signals.

As mm-wave systems develop, they could become much more widespread, so burglars and voyeurs might start using them to check whether there is anything worth stealing or videoing. Maybe some search company making visual street maps might ‘accidentally’ capture a detailed 3D map of the inside of your house when they come round, as well as – or instead of – everything they could access via your wireless LAN.

Add to this the ability to use drones to get close without being noticed. Drones can be very small, fly themselves and automatically survey an area using broad sections of the electromagnetic spectrum.

NFC bank and credit cards not only present risks of theft, but also add the ability to track what we spend, where, on what, and with whom. NFC capability in your phone makes some parts of life easier, but NFC has always been yet another doorway that may be left unlocked by security holes in operating systems or apps, and apps themselves carry many assorted risks. Many apps ask for far more permissions than they need to do their professed tasks, and their owners collect vast quantities of information for purposes known only to them and their clients. Obviously data can be collected using a variety of apps and linked together at its destination. Not all providers are honest, and apps are still very inadequately regulated and policed.

We’re seeing increasing experimentation with facial recognition technology around the world, from China to the UK, and only a few authorities so far, such as in San Francisco, have had the wisdom to ban its use. Heavy-handed UK police, who increasingly police according to their own political agenda even at the expense of policing actual UK law, have already fined people who covered their faces to avoid being scanned in face recognition trials. It is reasonable to assume they would gleefully seize any future opportunity to access and cross-link all of the various data pools currently being assembled, under the excuse of reducing crime but with the real intent of policing their own social engineering preferences. Using advanced AI to mine zillions of hours of full-sensory data gathered on every one of us via all this routine IT exposure and extensive, ubiquitous video surveillance, they could deduce everyone’s attitudes to just about everything – the real truth about our attitudes to every friend and family member or TV celebrity or politician or product, our detailed sexual orientation, any fetishes or perversions, our racial attitudes, political allegiances, attitudes to almost every topic ever aired on TV or in everyday conversation, how hard we are working, how much stress we are experiencing, and many aspects of our medical state.

It doesn’t even stop with public cameras. Innumerable cameras and microphones on phones, visors, and high street private surveillance will automatically record all this same stuff for everyone, sometimes with benign declared intentions such as making self-driving vehicles safer, sometimes with social media tribes using them to capture any kind of evidence against ‘the other’. In-depth evidence will become available to back up prosecutions of crimes that today would not even be noticed. Computers that can retrospectively data-mine evidence collected over decades and link it all together will be able to identify billions of real or invented crimes.

Active skin will one day link your nervous system to your IT, allowing you to record and replay sensations. You will never be able to be sure that you are the only one that can access that data either. I could easily hide algorithms in a chip or program that only I know about, that no amount of testing or inspection could ever reveal. If I can, any decent software engineer can too. That’s the main reason I have never trusted my IT – I am quite nice but I would probably be tempted to put in some secret stuff on any IT I designed. Just because I could and could almost certainly get away with it. If someone was making electronics to link to your nervous system, they’d probably be at least tempted to put a back door in too, or be told to by the authorities.

The current panic about face recognition is justified. Other AI can lipread better than people and recognize gestures and facial expressions better than people. It adds the knowledge of everywhere you go, everyone you meet, everything you do, everything you say and even every emotional reaction to all of that to all the other knowledge gathered online or by your mobile, fitness band, electronic jewelry or other accessories.

Fools utter the old line: “if you are innocent, you have nothing to fear”. Do you know anyone who is innocent? Of everything? Who has never ever done or even thought anything even a little bit wrong? Who has never wanted to do anything nasty to anyone for any reason ever? And that’s before you even start to factor in corruption of the police, or mistakes, or being framed, or dumb juries, or secret courts. The real problem here is not the abuses we already see. It is what is being and will be collected and stored, forever, that will be available to all future governments of all persuasions and to police authorities who consider themselves better than the law. I’ve often said that our governments are usually incompetent but rarely malicious. Most of our leaders are nice guys, only a few are corrupt, but most are technologically inept. With an increasingly divided society, there’s a strong chance that the ‘wrong’ government or even a dictatorship could get in. Which of us can be sure we won’t be up against the wall one day?

We’ve already lost the battle to defend privacy. The only bits left are where the technology hasn’t caught up yet. In the future, not even the deepest, most hidden parts of your mind will be private. Pretty much everything about you will be available to an AI-upskilled state and its police.

The caravan and migration policy

20 years ago, fewer than half of the people in the world had ever made a phone call. Today, the vast majority of people have a smartphone with internet access, and are learning how people in other parts of the world live. A growing number are refusing to accept their poor luck of being born in poor, corrupt, oppressive or war-torn countries. After all, nobody chooses their parents or where they are born, so why should people in any country have any more right to live there than anyone else? Shouldn’t everyone start life with the right to live anywhere they choose? If they don’t like it where they were born, why shouldn’t someone migrate to another country to improve their conditions or to give their children a better chance? Why should that country be allowed to refuse them entry? I’d like to give a brief answer, but I don’t have time. So:

People don’t choose their parents, or where they are born, but nor did they exist to make that choice. The rights of the infinite number of non-existent people who could potentially be born to any possible combination of parents, at any time, anywhere, under any possible set of circumstances, are no basis for any policy. If lives were formed and then somehow assigned parents, the questions would be valid, but people don’t actually reproduce by choosing from some waiting list of would-be embryos. Even religious people don’t believe that their god has a large queue of souls waiting for a place and parents to be born to, assigning each in turn to happiness or misery. Actual people reproduce via actual acts in actual places in actual circumstances. They create a new life, and the child is theirs. They are solely responsible for bringing that life into existence, knowing the likely circumstances it would emerge into. The child didn’t choose its parents, but its parents made it. If they live in a particular country and choose to have a baby, that baby will be born with the rights and rules and all the other attributes of that country, the skin color, religion, wealth and status of its parents and so on. It will also be born into the prevailing international political and regulatory environment of that time. Other people in other countries have zero a priori political, social, economic or moral responsibility towards that child, though they and their country are free to show whatever compassion they wish, or to join international organisations that extend protection and human rights to all humans everywhere, so a child anywhere may inherit certain internationally agreed rights that countries have at some point signed up to accept. Those voluntary agreements or signings of international treaties may confer rights on that child regarding access to aid, global health initiatives or migration, but they are a matter for other sovereign bodies to choose to sign up to, or indeed to withdraw from. A poor child might grow up and decide to migrate, but it has no a priori right of entry to any country or support from it, legally or morally, beyond that which the people of that country or their ancestors choose to offer, individually or via their government.

In short, people can’t really look any further than their parents to thank or blame for their existence, but other people and other countries are free to express and extend their love, compassion and support, if they choose to. Most of us would agree that we should.

Given that we want to help, but still don’t have the resources to help everyone on the planet to live in the standard they’d like, a better question might be: which people should we help first – those that bang loudly on our door, or those in the greatest need?

We love and value those close to us most, but most of us feel some love towards humans everywhere. Few people can watch the migrant caravan coverage without feeling sympathy for the parents trying to get to a better life. Many of those in the caravan will be innocent people running away from genuine oppression and danger, hoping to build a better future by working hard and integrating into a new culture. The proportion of genuine refugees was recently estimated (by Channel 4 News, for those who demand sources for every stat they don’t like) at around 11% of the caravan. We know from UK migration from Calais that some will simply claim to be refugees, advised by activists on exactly what phrases to use when interviewed by immigration officials to get the right boxes ticked. Additionally, those of us who aren’t completely naive (or suffering the amusingly named ‘Trump derangement syndrome’, whereby anything ‘Fake President’ Trump says or does must automatically be wrong even if Obama said or did the same), also accept that a few of those in the caravan are likely to be drug dealers or murderers or rapists or traffickers or other criminals running away from capture and towards new markets to exploit, or even terrorists trying to hide among a crowd. There is abundant evidence that European migrant crowds did conceal some such people, and we’ll never know the exact numbers, but we’re already living with the consequences. The USA would be foolish not to learn from these European mistakes. It really isn’t the simple ‘all saints’ or ‘all criminals’ some media would have us believe. Some may be criminals or terrorists – ‘some’ is a very different concept from ‘all’, and is not actually disproved by pointing the TV camera at a lovely family pushing a pram.

International law defines refugees and asylum seekers and makes it easy to distinguish them from other kinds of migrants, but activist groups and media often conflate these terms to push various political objectives. People fleeing from danger are refugees until they get to the first safe country, often the adjacent one. According to law, they should apply for asylum there, but if they choose to go further, they cease to be refugees and become migrants. The difference is very important. Refugees are fleeing from danger to safety, and are covered by protections afforded to that purpose. Migrants don’t qualify for those special protections and are meant to use legal channels to move to another country. If they choose to use non-legal means to cross borders, they become illegal immigrants, criminals. Sympathy and compassion should extend to all who are less fortunate, but those who are willing to respect the new nation and its laws by going through legal immigration channels should surely elicit more of it than those who demonstrably aren’t, regardless of how cute some other family’s children look on camera. Law-abiding applicants should always be given a better response, and law-breakers should be sent to the back of the queue.

These are well established attitudes to migration and refugees, but many seek to change them. In our competitive virtue-signalling era, a narrative constructed by activists well practiced at misleading people to achieve their aims deliberately conflates genuine refugees and economic migrants to make their open borders policies look like simple humanitarianism. They harness the sympathy everyone feels for refugees fleeing from danger and routinely mislabel migrants as refugees, hoping to slyly extend refugee rights to migrants, quickly moving on to imply that anyone who doesn’t want to admit everyone lacks basic human decency. Much of the media happily plays along with this deception, pointing cameras at the nice families instead of the much larger number of able young men, with their own presenters frequently referring to migrants as refugees. Such a narrative is deliberately dishonest, little more than self-aggrandizing, disingenuous sanctimony. The best policy remains to maintain and protect borders and have well-managed legal immigration policies, offering prioritized help to refugees and extending whatever aid to other countries can be afforded, while recognizing that simple handouts and political interference can sometimes be counter-productive. Most people are nice and want to help those who need it most, in the best way. Moral posturing and virtue signalling are not only less effective but highly selfish, aimed at polishing the egos of the sanctimonious rather than helping the needy.

So, we want to help, but should do it sensibly to maximize benefit. Selfishly, we also need some migration, and we already selfishly encourage those with the most valuable skills or wealth to migrate from other countries, at those countries’ loss (even after they have paid to educate them). Every skilled engineer or doctor we import from a poorer country represents a huge financial outlay being transferred from poor to rich. We need to fix that exploitation too. There is an excellent case for compensation to be paid.

Well-managed migration can and does work well. The UK sometimes feels a little overcrowded, when sitting in a traffic jam or a doctor’s waiting room, but actually only about 2% of the land is built on; the rest isn’t. It isn’t ‘full’ geographically, it just seems so because of the consequences of poor governance. Given sensible integration and economic policies, competently executed, immigration ought not to be a big problem. The absence of those givens is the main cause of existing problems. So we can use the UK as a benchmark for reasonably tolerable population density even under poor government. The UK still needs migrants with a wide range of skills, and since some (mainly old) people emigrate, there is always room for a few more.

Integration is a growing issue, and should be a stronger consideration in future immigration policy. Recent (last 100 years) migrants and their descendants account for around 12% of the UK population, 1 in 8, still a smallish minority. Some struggle to integrate or to find acceptance, some don’t want to, and many fit in very well. Older migrations such as the Normans and Vikings have integrated pretty well by now. My name suggests some Viking input to my DNA, and ancestry research shows that my family goes back in England at least 500 years. Having migrated to Belfast as a child, and migrated back 17 years later, I know how it feels to be considered an outsider for a decade or two.

What about the USA, with the migrant ‘caravan’ of a few thousand people on their way to claim asylum? The USA is large, relatively sparsely populated, and very wealthy. Most people in the world can only dream of living at US living standards and some of them are trying to go there. If they succeed, many more will follow. Trump is currently under fire from the left over his policy, but although Trump is certainly rather less eloquent, his policy actually closely echoes Obama’s. Here is a video of Obama talking about illegal immigration in 2005 while he was still a Senator:

https://www.c-span.org/video/?c4656370/sen-barack-obama-illegal-immigration

Left and right both agreed at least back then that borders should be protected and migrants should be made to use legal channels, presumably for all the same common sense reasons I outlined earlier. What if the borders were completely open, as many are now calling for? Here are a few basic figures:

Before it would reach UK population density, the USA has enough land to house every existing American plus every single one of the 422M South Americans, 42M Central Americans, 411M Middle Easterners, the 105M Filipinos and every African. Land area isn’t a big problem then. For the vast majority in these regions, the average US standard of living would be a massive upgrade, so imagine if they all suddenly migrated there. The US economy would suddenly be spread over 2.5Bn people instead of 325M. Instead of $60k per capita, it would be $7.8k, putting the USA between Bolivia and Guatemala in the world wealth rankings, well below most of Central and South America (still 40% more than Honduras though). Additionally, almost all of the migrants – 87% of the total population – would initially be homeless. All the new homes and other infrastructure would have to be paid for and built, jobs created, workforce trained, and so on.
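
For anyone who wants to check the arithmetic, here is the rough calculation behind those figures, using the round numbers quoted above (the African population figure of roughly 1.2 billion is my own assumption for the time of writing):

```python
# Rough arithmetic behind the figures quoted above, in millions of people
# and trillions of dollars; all numbers are the article's approximations.
us_pop = 325
newcomers = 422 + 42 + 411 + 105 + 1200   # S. America, C. America, Middle East,
                                          # Philippines, Africa (~1.2bn assumed)
total = us_pop + newcomers                # roughly 2.5 billion
us_gdp = 0.060 * us_pop                   # $60k per capita -> about $19.5tn

print(f"total population: {total / 1000:.1f}bn")
print(f"GDP per capita after dilution: ${us_gdp / total * 1e6:,.0f}")
print(f"share initially homeless: {newcomers / total:.0%}")
```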

Even the most fervent open borders supporter couldn’t pretend they thought this was feasible, so they reject reasoning and focus on emotion, pointing cameras at young families with sweet kids, yearning for better lives. If the borders were open, what then would prevent vast numbers of would-be migrants from succumbing to temptation to better their lives before the inevitable economic dilution made it a worthless trip? Surely opening the borders would result in a huge mass of people wanting to get in while it is still a big upgrade? People in possession of reasoning capability accept that there need to be limits. Left and right, Obama and Trump agree that migration needs to be legal and well managed. Numbers must be restricted to a level that is manageable and sustainable.

So, what should be done about it? What policy principles and behaviors should be adopted? The first must be to stop misuse of language, particularly conflating economic migrants and refugees. Activists and some media do that regularly, but deliberate misrepresentation is ‘fake news’, what we used to call lies.

Second, an honest debate needs to be had on how best to help refugees, whether by offering them residency or by building and resourcing adequate refugee camps, and also regarding how much we can widen legal immigration channels for migrants while sustaining our existing economy and culture. If a refugee wants to immigrate, that really ought to be a separate consideration and handled via immigration channels and rules. Dealing with them separately would immediately solve the problem of people falsely claiming refugee status, because all they would achieve is access to a refugee camp, and would still have to go through immigration channels to proceed further. Such false claims clog the courts and mean it takes far longer for true refugees to have their cases dealt with effectively.

Thirdly, that debate needs to consider that while countries naturally welcome the most economically and culturally valuable immigrants, there is also a good humanitarian case to help some more. Immigration policy should be generous, and paralleled with properly managed international aid.

That debate should always recognize that the rule of law must be maintained, and Obama made that argument very well. It still holds, and Trump agreeing with it does not actually make it invalid. Letting some people break it while expecting others to follow it invites chaos. Borders should be maintained and properly policed. While refugees who can demonstrate refugee status should be directed into refugee channels (which may take some time), others should be firmly turned away if they don’t have permission to cross, and given the information they need to apply via the legal immigration channels. That can be done nicely of course, and a generous country should offer medical attention, food, and transport home, maybe even financial help. Illegal immigration and lying about refugee status should be strongly deterred by detainment, repatriation and sending to the back of the queue, or by permanently denying entry to anyone attempting illegal entry. No country wants to increase its population of criminals. Such a policy distinguishes well between legal and illegal, between refugees and migrants, and ensures that the flow into the country matches what its government thinks is manageable.

The rest is basically ongoing foreign policy, and that does differ between different flavors of government. Sadly, how best to deal with problems in other countries is not something the USA is known to be skilled at. It doesn’t have a fantastic track record, even if it usually intends to make things better. Ditto the UK and Europe. Interference often makes things worse in unexpected ways. Handouts often feed corruption and dependence and support oppressive regimes, or free up money for arms, so they don’t always work well either. Emergencies such as wars or natural catastrophes already have policies and appropriate agencies in place to deal with the consequences, as well as many NGOs.

This caravan doesn’t fit neatly. A few can reasonably be directed into other channels, but most must be turned away. That is not heartless. The Mediterranean migrations have led to far more deaths than they should have, because earlier migrants were accepted, encouraging others, and at one point the EU seemed to be providing a safe pickup almost as soon as a trafficker boat left shore. The Australian approach seemed harsh, but probably saved thousands of lives by deterring others from risking theirs. My own solution to the Mediterranean crisis was:

https://timeguide.wordpress.com/2015/04/19/the-mediterranean-crisis/ and basically suggested making a small island into a large refugee camp where anyone rescued (or captured, if they managed to make the full trip) would be taken, with a free trip home once they realized they wouldn’t be transferred to mainland Europe. I still think it is the best approach, and it could be replicated by the USA using a large refugee/migrant camp from which the only exit is a return home or a very lengthy wait at the back of the legal migration queue.

However:

My opening questions on the inequity of birth invite another direction of analysis. When people die, they usually leave the bulk of their estates to their descendants, but by then they will also have passed on a great deal of other things, such as their values, some skills, miscellaneous support, and attitudes to life, the universe and everything. Importantly, they will have conveyed citizenship of their country, and that conveys a shared inheritance of the accumulated efforts of the whole of that country’s previous inhabitants. That accumulation may be a prosperous, democratic country with reasonable law and order and safety, and relatively low levels of corruption, like the USA or the UK, or it may be a dysfunctional impoverished dictatorship, or anything in between. While long-term residents are effectively inheriting the accumulated value (and problems) passed down through their ancestors, new immigrants receive all of that for free when they are accepted. It is hard to put an accurate value on this shared social, cultural and financial wealth, but most attempts end up with values in the hundreds of thousands of dollars. Well-chosen immigrants may bring in value (including their descendants’ contributions) greatly in excess of what they receive. Some may not. Some may even reduce it. Whether a potential immigrant is accepted or not, we should be clear that citizenship is very valuable.

Then the analysis starts to get messier. It isn’t just simple inheritance. What about the means by which that happy inherited state was achieved? Is one country attractive purely because of its own efforts, or because it exploited others, or some combination? Is another country a hell hole partly because of our external interference, as some would argue for Iraq or Syria? If so, then perhaps there is a case for reparation or compensation, or perhaps favored immigration status for its citizens. We ought not to shirk responsibility for the consequences of our actions. Or is it a hell hole in spite of our interference, as can be argued for some African countries? Is it a hell hole because its people are lazy or corrupt and live in the country they deserve? That is possible I guess, though I can’t think of any examples. Anyway, heredity is a complex issue, as is privilege, its twin sister. I did write a lengthy blog on privilege (and cultural appropriation). I probably believe much the same as you, but in the hostile, competitive, offence-taking social media environment of today, it remains a draft.

Sorry it took so many words, but there is so much nonsense being spoken that it takes a lot of words to remind us of what mostly used to be common sense. The right policy now is basically the same as it was decades ago. Noisy activism doesn’t change that.


Enhanced cellular blockchain

I thought there was a need for a cellular blockchain variant, and a more sustainable alternative to cryptocurrencies like Bitcoin that depend on unsustainable proofs-of-work. So I designed one and gave it the temporary project name Grapevine. I like biomimetics, which I used for both the blockchain itself and its derivative management/application/currency/software distribution layer. The ANTs were invented by me and Chris Winter in 1993 when I was with BT. BT never did anything with the idea, and I believe MIT later published some notes on it too. ANTs provide an ideal companion to blockchain and together they could be the basis of some very secure IT systems.

The following has not been thoroughly checked so may contain serious flaws, but it hopefully contains some useful ideas to push the field a little in the right direction.

A cellular, distributed, secure ledger and value assurance system – a cheap, fast, sustainable blockchain variant

  • A global blockchain grows quickly to enormous size because all transactions are recorded in a single chain – e.g. the bitcoin blockchain is already >100GB
  • The Grapevine (temp project name) cellular approach would keep local blocks small and self-contained, assured by blockchain-style verification during growth and protected from tampering after the block is sealed and stripped, by threading it onto a global thread
  • Somewhat analogous to a grape vine. Think of each local block as a grape; grapes grow in bunches. The vine links bunches together, but each grape is self-contained and stays small in size. Genetics/nutrients/materials/processes are all common to the entire vine.
  • A grape starts as a flower, a small collection of unverified transactions. All stamens listen to transactions broadcast via any stamen. The flower is periodically (every minute) frozen (for 2 seconds) while pollen is emitted by each stamen, containing the stamen signature, previous status verification and new transactions list. Stamens check the pollen they receive for origin signature and previous growth verification and then check all new transactions. If valid, they emit a signed pollination announcement. When each stamen has received signed pollination announcements from the majority of other stamens, that growth stage is closed (all quite blockchain-like so far), stripped of unnecessary packaging (previous hash, signatures etc.) to leave a clean record of validated transactions, which is then secured from tampering by the grape signature and hash. The next stage of growth then begins, which needs another pollination process (deviating from the biological analogy here). Each grape on the bunch grows like this throughout the day. When the grapes are all fully grown, and the final checks made by each grape, the grapes are stripped again and the whole bunch is signed onto the vine using a highly secure bunch signature and hash to prevent any later tampering. Grapes are therefore collections of verified local transactions that have grown in many fully verified stages during the day but are limited in size and stripped of unnecessary packaging. The bunch is a verified global record of all of the grapes grown that day and remains the same forever. The vine is a growing collection of bunches of grapes, but each new grape and bunch starts off fresh each day, so signalling and the chain never grow significantly. Each transaction remains verified and recorded forever but signalling is kept minimal. As processing power increases, earlier bunches can be re-secured using a new bunch signature. (A rough code sketch of one growth stage follows this list.)
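
To make the growth-stage mechanics a little more concrete, here is a minimal sketch of one stage in Python. It is my own illustration rather than part of the original design notes: the names (Stamen, grow_stage), the simple majority rule and the hash-based ‘signatures’ are simplifications standing in for whatever real signature scheme would be used.

```python
import hashlib, json

def h(data) -> str:
    """Toy stand-in for the real signature/hash scheme."""
    return hashlib.sha256(json.dumps(data, sort_keys=True).encode()).hexdigest()

class Stamen:
    """One verifying node within a grape (a local cell)."""
    def __init__(self, name: str, key: str):
        self.name, self.key = name, key
        self.pending = []                     # transactions heard since the last stage

    def hear(self, tx: str):
        self.pending.append(tx)

    def pollinate(self, prev_hash: str) -> dict:
        """Emit 'pollen': this stamen's signed view of the previous state plus new txs."""
        payload = {"stamen": self.name, "prev": prev_hash, "txs": self.pending}
        return {**payload, "sig": h([self.key, payload])}

def grow_stage(stamens: list, prev_hash: str):
    """One growth stage: exchange pollen, require a majority of matching
    verifications, then seal a stripped record of just the transactions."""
    pollen = [s.pollinate(prev_hash) for s in stamens]
    votes = sum(1 for p in pollen if p["prev"] == prev_hash)   # toy verification
    if votes <= len(stamens) // 2:
        raise RuntimeError("growth stage not verified by a majority of stamens")
    txs = sorted({tx for p in pollen for tx in p["txs"]})
    record = {"txs": txs}                     # stripped: per-stamen packaging discarded
    return record, h([prev_hash, record])     # grape hash sealing this stage

stamens = [Stamen(f"s{i}", key=f"secret{i}") for i in range(5)]
for s in stamens:
    s.hear("alice->bob:10")                   # every stamen hears the broadcast tx
record, grape_hash = grow_stage(stamens, prev_hash="genesis")
print(record, grape_hash[:16])
```

Each subsequent stage would pass the returned grape hash back in as prev_hash, and at the end of the day an analogous step would seal the whole bunch onto the vine.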

Key Advantages

  • Grape vine analogy is easier for non-IT managers to understand than normal blockchain.
  • Unlike conventional blockchains, blocks grow in stages so transactions don’t have to wait long to be verified and sealed.
  • Cellular structure means signalling is always light, with just a few nearby nodes checking a few transactions and keeping short records.
  • Ditto bunching, each day’s records start from zero and bunch is finished and locked at end of day.
  • Cellular structure allows sojourn time for signalling to be kept low with potentially low periods for verification and checking. Will scale well with improving processing speed, less limited by signal propagation time than non-cellular chains.
  • Global all-time record is still complete, duplicated, distributed, but signalling for new transactions always starts light and local every new day.
  • Cellular approach allows easy re-use of globally authenticated tokens within each cell. This limits cost of token production.
  • Cells may be either geographic or logical/virtual. Virtual cells can be geographically global (at penalty of slower comms), but since each is independent until the end of the day, virtual cell speed will not affect local cell speed.
  • Protocols can be different for different cells, allowing cells with higher value transactions to use tighter security.

Associated mechanisms

  • Inter-cell transactions can be implemented easily by using logical/virtual cell that includes both parties. Users may need to be registered for access to multiple cells. If value is being transferred, it is easy to arrange clearing of local cell first (1 minute overhead) and then check currency hasn’t already been spent before allowing transaction on another cell.
  • Grapes are self-contained and data is held locally, duplicated among several stamens. Once sealed for the day, the grape data remains in place, signed off with the appropriate grape signature and the bunch signature verifies it with an extra lock that prevents even a future local majority from being able to tamper with it later. To preserve data in the very long-term against O/S changes, company failure etc, subsequent certified copies may be distributed and kept updated.
  • Signalling during the day can be based on ANT (autonomous network telepher) protocols. These use a strictly limited variety of ANT species that are authenticated and shared at the start of a period (a day or a week perhaps), using period-lifetime encryption keys. The level of encryption is determined by ensuring that the period is much smaller than the estimated time to crack the keys on current hardware at reasonable cost. All messages use this encryption and the ANT mechanisms, so the chances of infiltration or fraudulent transactions are very low, and the associated signalling and time overhead costs are kept low.
  • ANTs may include transaction descriptor packets, signature distribution packets, new key distribution packets, active (executable code) packets, new member verification packets, software distribution, other admin data, and performance maintenance packets such as load distribution, RPCs and many others. Overall, perhaps 64 possible ANT species might be allowed at any one time. This facility makes the system ideal for secure OS and software distribution/maintenance. (A small illustrative sketch of such a species-limited, period-keyed scheme follows this list.)
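
As an illustration only, the sketch below models ANTs as a small enumerated set of permitted species, authenticated with a shared key that lives for one period. The species names, the 32-byte key and the HMAC construction are my assumptions, not the original ANT design.

```python
import hmac, hashlib, secrets
from enum import Enum

class Species(Enum):
    """A strictly limited set of permitted ANT species for one period
    (the bullet above suggests up to ~64 in a real system)."""
    TRANSACTION = 1
    SIGNATURE_DIST = 2
    KEY_DIST = 3
    SOFTWARE_DIST = 4
    LOAD_BALANCE = 5

def new_period_key() -> bytes:
    """Fresh shared key at the start of each period (day or week); its strength
    only needs to exceed the cost-effective cracking time for that period."""
    return secrets.token_bytes(32)

def make_ant(species: Species, payload: bytes, period_key: bytes) -> dict:
    tag = hmac.new(period_key, species.name.encode() + payload, hashlib.sha256)
    return {"species": species, "payload": payload, "mac": tag.hexdigest()}

def accept_ant(ant: dict, period_key: bytes) -> bool:
    """A node only accepts ANTs of a known species with a valid MAC for this period."""
    tag = hmac.new(period_key, ant["species"].name.encode() + ant["payload"],
                   hashlib.sha256)
    return isinstance(ant["species"], Species) and hmac.compare_digest(
        tag.hexdigest(), ant["mac"])

key = new_period_key()
ant = make_ant(Species.TRANSACTION, b"alice->bob:10", key)
print(accept_ant(ant, key))   # True; fails once the period key is rotated
```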

Financial use

  • ANTs can contain currency to make valuable packets, or an ANT variant could actually be currency.
  • Optional coins could be made for privacy; otherwise transactions would use real-world accounts. A coin-based system can be implemented simply by using the grape signature and a coin number. Coins could be faked by decrypting the signature, but that signature only lasts one period, so by then the coins will be invalid. Remember, the encryption level is set according to the cost to decrypt within a period. Coins are globally unique due to different cells having different signatures. Once grapes are sealed, no tampering is possible.
  • One mechanism is that coins are used as temporary currency that only lasts one period. Coins are bought using any currency immediately before transactions. At end of day, coins are converted back to desired currency. Any profits/losses due to conversion differences during day accrue to user at point of conversion.
  • A lingering cybercurrency can be made that renews its value to live longer than one period. It simply needs conversion to a new coin at the start of the new day, relying on signature security and short longevity to protect.
  • ANTs can alternatively carry real currency value by direct connection to any account. At end of each growth stage or end of day, transaction clearing debits and deposits in each respective account accordingly.
  • Transaction fees can be implemented easily and simply debited at either or both ends.
  • No expensive PoW is needed; wasteful mining activity is unnecessary. The entire system relies only on using encryption signatures that are valid for shorter times than their cost-effective decryption times, and tamper-resistance means that decrypting earlier signatures gains an attacker nothing. (A toy sketch of the period-limited coin idea follows this list.)
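
Purely as an illustration of the period-limited coin idea, here is a toy model in which the ‘grape signature’ is replaced by an HMAC key that is rotated each period, so old coins automatically lose their value; the class and method names are mine, not part of any real implementation.

```python
import hmac, hashlib, secrets

class Cell:
    """A local cell issuing coins that are only valid for the current period."""
    def __init__(self):
        self.period = 0
        self.period_key = secrets.token_bytes(32)   # stands in for the grape signature
        self.spent = set()

    def new_period(self):
        """Rotate the key: coins from earlier periods become worthless, so
        cracking an old key gains an attacker nothing."""
        self.period += 1
        self.period_key = secrets.token_bytes(32)
        self.spent.clear()

    def issue_coin(self, serial: int) -> dict:
        mac = hmac.new(self.period_key, f"{self.period}:{serial}".encode(),
                       hashlib.sha256).hexdigest()
        return {"period": self.period, "serial": serial, "mac": mac}

    def redeem(self, coin: dict) -> bool:
        """Accept a coin only if it is from the current period, correctly
        signed, and not already spent."""
        expected = hmac.new(self.period_key,
                            f"{coin['period']}:{coin['serial']}".encode(),
                            hashlib.sha256).hexdigest()
        ok = (coin["period"] == self.period
              and hmac.compare_digest(expected, coin["mac"])
              and coin["serial"] not in self.spent)
        if ok:
            self.spent.add(coin["serial"])
        return ok

cell = Cell()
coin = cell.issue_coin(serial=1)
print(cell.redeem(coin))                 # True
print(cell.redeem(coin))                 # False: double spend rejected
cell.new_period()
print(cell.redeem(cell.issue_coin(1)))   # fresh coin in the new period: True
```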

With thanks to my good friend Prof Nick Colosimo for letting me bounce the ideas off him.

AI that talks to us could quickly become problematic

Google’s making the news again, adding evidence to the unfortunate stereotype of the autistic IT nerd that barely understands normal people, and they have therefore been astonished at a backlash that normal people would all easily have predicted. (I’m autistic and work mostly in IT too, and am well used to the stereotype, so it doesn’t bother me; in fact it is a sort of ‘get out of social interactions free’ card.) Last time it was Google Glass, where it apparently didn’t occur to them that people may not want other people videoing them without consent in pubs and changing rooms. This time it is Google Duplex, which makes phone calls on your behalf to arrange appointments using a voice that is almost indistinguishable from a normal human’s. You could save time making an appointment with a hairdresser apparently, so the Googlanders decided it must be a brilliant breakthrough, and expected everyone to agree. They didn’t.

Some of the objections have been about ethics: e.g. an AI should not present itself as human. Humans have rights and dignity and deserve respectful interactions with other people, but an AI doesn’t, and should not masquerade as human to acquire such privilege without the other party’s knowledge and consent.

I would be more offended by the presumed attitude of the user. If someone thinks they are so much better than me that they can demand my time and attention without the expense of any of their own, delegating instead to a few microseconds of processing time in a server farm somewhere, I’ll treat them with the contempt they deserve. My response will not be favourable. I am already highly irritated by the NHS using simple voice interaction messaging to check I will attend a hospital appointment. The fact that my health is on the line, and notices at surgeries say I will be banned if I complain on social media, is sufficient blackmail to ensure my compliance, but it still comes at the expense of my respect and goodwill. AI-backed voice interaction with a better voice wouldn’t be any better, and if it asked for more interaction, such as actually booking an appointment, it would be extremely annoying.

In any case, most people don’t speak in fully formed, grammatically and logically correct sentences. If you listen carefully to everyday chat, a lot of sentences are poorly pronounced, incomplete, jumbled, full of ums, ers and likes, and they require a great deal of cooperation by the listener to make any sense at all. They also wander off topic frequently. People don’t stick to a rigid vocabulary list or a set of nicely selected sentences. Lots of preamble and verbal meandering is likely in a response, and it is highly likely to add ambiguity. The example used in a demo, “I’d like to make a hairdressing appointment for a client”, sounds fine until you factor in normal everyday humanity. A busy hairdresser or a lazy receptionist is not necessarily going to cooperate fully. “What do you mean, client?”, “404 not found”, “piss off google”, “oh FFS, not another bloody computer”, “we don’t do hairdressing, we do haircuts”, “why can’t your ‘client’ call themselves then?” and a million other responses are more likely than “what time would you like?”

Suppose though that it eventually gets accepted by society. First, call centers beyond the jurisdiction of your nuisance-call-blocking authority will incessantly call you at all hours, asking or telling you all sorts of things, wasting huge amounts of your time and reducing your quality of life. Voice spam from humans in call centers is bad enough. If the owners can multiply productivity by 1000 by using AI instead of people, the result is predictable.

We’ve seen the conspicuous political use of social media AI already. Facebook might have allowed companies to use very limited and inaccurate knowledge of you to target ads or articles that you probably didn’t look at. Voice interaction would be different. It uses a richer emotional connection than text or graphics on a screen. Google knows a lot about you too, but it will know a lot more soon. These big IT companies are also playing with tech to log you on easily to sites without passwords. Some of the gadgets involved might be worn, such as watches, bracelets or rings. They can pick up signals to identify you, but they can also check emotional states such as stress level. Voice gives away emotion too. AI can already tell better than almost all people whether you are telling the truth or lying or hiding something. Tech such as iris scans can also reveal emotional states, as well as give health clues. Simple photos can reveal your age quite accurately to AI (check out how-old.net). The AI voice sounds human, but it is better than even your best friends at guessing your age, your stress and other emotions, your health, and whether you are telling the truth or not, and it knows far more about what you like and dislike and what you really do online than anyone you know, including you. It knows a lot of your intimate secrets. It sounds human, but its nearest human equivalent was probably Machiavelli. That’s who will soon be on the other side of the call, not some dumb chatbot. Now re-calculate political interference, and factor in the political leanings and social engineering desires of the companies providing the tools. Google and Facebook and the others are very far from politically neutral. One presidential candidate might get full cooperation, assistance and convenient looking the other way, while their opponent might meet rejection and citation of the official rules on non-interference. Campaigns on social issues will also be amplified by AI coupled to voice interaction. I looked at some related issues in a previous blog on fake AI (i.e. fake news type issues): https://timeguide.wordpress.com/2017/11/16/fake-ai/

I could but won’t write a blog on how this tech could couple well to sexbots to help out incels. It may actually have some genuine uses in providing synthetic companionship for lonely people, or helping or encouraging them in real social interactions with real people. It will certainly have some uses in gaming and chatbot game interaction.

We are not very far from computers that are smarter than people across a very wide spectrum, and probably not very far from conscious machines that have superhuman intelligence. If we can’t even rely on IT companies to understand the likely consequences of such obvious stuff as Duplex before they push it, how can we trust them in other upcoming areas of AI development, or even in closer-term techs with less obvious consequences? We simply can’t!

There are certainly a few areas where such technology might help us, but most are minor and the rest don’t need any deception, and they all come at great cost or real social and political risk, as well as more abstract risks such as threats to human dignity and other ethical issues. I haven’t given this much thought yet and I am sure there must be very many other consequences I have not touched on. Google should do more thinking before they release stuff. Technology is becoming very powerful, but we all know that great power comes with great responsibility, and since most people aren’t engineers and can’t think through all the potential technology interactions and consequences, engineers such as Google’s must act more responsibly. I had hoped they’d started, and they said they had, but this is not evidence of that.


Why superhumans are inevitable, and what else comes in the box

Do we have any real choice in the matter of making super-humans? 20 years ago, I estimated 2005 as the point of no return, and nothing since then has changed my mind on that date. By my reckoning, we are already inevitably committed to designer babies, ebaybies, super-soldiers and super-smart autonomous weapons, direct brain-machine links, electronic immortality, new human races, population explosion, inter-species conflicts and wars with massively powerful weaponry, superhuman conscious AI, smart bacteria, and the only real control we have left is relatively minor adjustment of timings. As I was discussing yesterday, the technology potential for this is vast and very exciting, nothing less than a genuine techno-utopia if we use the technologies wisely, but optimum potential doesn’t automatically become reality, and achieving a good outcome is unlikely if many barriers are put in its way.

In my estimation, we have already started the countdown to this group of interconnected technologies – we will very likely get all of them, and we must get ready for the decisions and impacts ahead. At the moment, our society is a small child about to open its super-high-tech xmas presents while fighting with its siblings. Those presents will give it phenomenal power, far beyond the child’s comprehension or the emotional maturity it needs to handle the decisions safely. Our leaders have already squandered decades of valuable preparation time by ignoring the big issues to focus on trivial ones. It is not too late to achieve a good ending, but it won’t happen by accident and we do need to make preparations to avoid pretty big problems.

Both hard and soft warfare – the sword and the pen – already use rapidly advancing AI, and the problems are already running ahead of what the owners intended.

Facebook, Twitter, Instagram and other media giants all have lots of smart people and presumably they mean well, but if so, they have certainly been naive. They may have hoped to eliminate loneliness, inequality, and poverty and create a loving, interconnected global society with global peace, but instead created fake news, social division and conflict and election interference. More likely they didn’t intend either outcome; they just wanted to make money and that took priority over due care and attention.

Miniaturised swarming smart-drones are already the subject of a new arms race that will deliver almost un-killable machine adversaries by 2050. Separately, AI is in other arms races to make super-smart AI and super-smart soldiers. This is key to the 2005 point of no return. It was around 2005 that we reached the level of technology where future AI development all the way to superhuman machine consciousness could be done by individuals, mad scientists or rogue states, even if major powers had banned it. Before 2005, there probably wasn’t quite enough knowledge on the net to do that. In 2018, lots of agencies have already achieved superiority to humans in niche areas, and other niches will succumb one by one until the whole field of human capability is covered. The first machines to behave in ways not fully understood by humans arrived in the early 1990s; in 2018, neural nets already make lots of decisions that are at least partly opaque to humans.

This AI development trend will take us to superhuman AI, and it will be able to accelerate development of its own descendants to vastly superhuman AI, fully conscious, with emotions and its own agendas. Humans will then need protection against being wiped out by that superhuman AI. There are only three ways we could get it: redesign the brain biologically to be far smarter, which is essentially impossible in the time-frame; design ways to link our brains to machines, so that we have direct access to the same intelligence as the AIs, so that a gulf doesn’t appear and we can remain relatively safe; or pray for super-smart aliens to come to our help, not the best prospect.

Therefore we will have no choice but to make direct brain links to super-smart AI. Otherwise we risk extinction. It is that simple. We have some idea how to do that – nanotech devices inside the brain linking to each and every synapse that can relay electrical signals either way, a difficult but not impossible engineering problem. Best guesses for time-frame fall in the 2045-2050 range for a fully working link that not only relays signals between your organic brain and an IT replica, but by doing so essentially makes external IT just another part of your brain. That conveys some of the other technology gifts of electronic immortality, new varieties of humans, smart bacteria (which will be created during the development path to this link) along with human-variant population explosion, especially in cyberspace, with androids as their physical front end, and the inevitable inter-species conflicts over resources and space – trillions of AI and human-like minds in cyberspace that want to do things in the real world cannot be assumed to be willingly confined just to protect the interests of what they will think of as far lesser species.

Super-smart AI, or humans with almost total capability to design whatever synthetic biology is needed to achieve any biological feature, will create genetic listings for infinite potential offspring, simulate them, give some of them cyberspace lives, assemble actual embryos for some of them and bring about designer babies. Already in 2018, you can pay to get a DNA listing, and blend it in any way you want with the listing of anyone else. It’s already possible to make DNA listings for potential humans and sell them on ebay, hence the term ebaybies. That is perfectly legal, still, but I’ve been writing and lecturing about them since 2004. Today they would just be listings, but we’ll one day have the tech to simulate them, choose ones we like and make them real, even some that were sold as celebrity collector items on ebay. Not only is it too late to start regulating this kind of tech, our leaders aren’t even thinking about it yet.

These technologies are all linked intricately, and their foundations are already in place, with much of the building on those foundations under way. We can’t stop any of these things from happening, they will all come in the same basket. Our leaders are becoming aware of the potential and the potential dangers of the AI positive feedback loop, but at least 15 years too late to do much about it. They have been warned repeatedly and loudly but have focused instead on the minor politics of the day that voters are aware of. The fundamental nature of politics is unlikely to change substantially, so even efforts to slow down the pace of development or to limit areas of impact are likely to be always too little too late. At best, we will be able to slow runaway AI development enough to allow direct brain links to protect against extinction scenarios. But we will not be able to stop it now.

Given this inevitability, it’s worth questioning whether there is even any point in trying. Why not just enjoy the ride? Well, the brakes might be broken, but if we can steer the bus expertly enough, it could be exciting and we could come out of it smelling of roses. The weak link is certainly the risk of super-smart AI, whether AI versus humans or countries using super-smart AI to fight fiercely for world domination. Direct brain linkage alleviates that risk, and I’d argue strongly that it necessitates it, but it brings the other technologies with it. Even if we decide not to develop it, others will, so one way or another all these techs will arrive, and by late century we will have this full suite of techs, plus many others of course.

We need, as a matter of extreme urgency, to fix these silly social media squabbles and over-reactions that are pulling society apart. If we have groups hating each other with access to extremely advanced technology, that can only mean trouble. Tolerance is broken, sanctimony rules, the Inquisition is in progress. We have been offered techno-utopia, but the current signs are that most people think techno-hell looks more appetizing, and it is their free choice.

AIs of a feather flocking together to create global instability

Hawking and Musk have created a lot of media impact with their warnings about AI, so although terminator scenarios resulting from machine consciousness have been discussed, as have more mundane uses of non-conscious autonomous weapon systems, it’s worth noting that I haven’t yet heard them mention one major category of risks from AI – emergence. AI risks have been discussed frequently since the 1970s, and in the 1990s a lot of work was done in the AI community on emergence. Complex emergent patterns of behavior often result from interactions between entities driven by simple algorithms. Genetic algorithms were demonstrated to produce evolution, simple neighbor-interaction rules were derived to illustrate flocking behaviors that make lovely screen saver effects, and cellular automata were played with. In BT we invented ways of self-organizing networks and FPGAs, played with mechanisms that could be used for evolution and consciousness, and demonstrated managing networks via ANTs – autonomous network telephers, smart packets that would run up and down wires sorting things out all by themselves. In 1987 I discovered a whole class of ways of bringing down networks via network resonance, information waves and the much larger class of correlated traffic attacks – still unexploited by hackers apart from simple DoS attacks. These ideas have slowly evolved since, and some have made it into industry or hacker toolkits, but we don’t seem to be joining the dots as far as risks go.
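To make the emergence point concrete, here is a minimal flocking (‘boids’) sketch, purely for illustration – a toy of my own, not any of the BT or research code mentioned above, and all the numbers are arbitrary. Each agent follows only three local rules (alignment, cohesion, separation), yet the population organises itself into aligned groups with no central coordination.

```python
# Toy flocking sketch: three local rules per agent, group behaviour emerges.
import numpy as np

rng = np.random.default_rng(1)
N, STEPS, NEIGHBOUR_R, CROWD_R = 60, 200, 12.0, 2.0
pos = rng.uniform(0, 60, (N, 2))      # agents scattered over a 60x60 area
vel = rng.normal(0, 1, (N, 2))        # random initial headings

def step(pos, vel):
    new_vel = vel.copy()
    for i in range(N):
        d = np.linalg.norm(pos - pos[i], axis=1)
        near = (d < NEIGHBOUR_R) & (d > 0)
        if near.any():
            new_vel[i] += 0.08 * (vel[near].mean(axis=0) - vel[i])   # alignment
            new_vel[i] += 0.01 * (pos[near].mean(axis=0) - pos[i])   # cohesion
        crowd = (d < CROWD_R) & (d > 0)
        if crowd.any():
            new_vel[i] -= 0.05 * (pos[crowd] - pos[i]).mean(axis=0)  # separation
    # keep every agent at unit speed so only direction matters
    new_vel /= np.maximum(np.linalg.norm(new_vel, axis=1, keepdims=True), 1e-9)
    return pos + new_vel, new_vel

for t in range(STEPS):
    pos, vel = step(pos, vel)
    if t % 50 == 49:
        # polarisation: 1.0 means every agent is heading in the same direction
        print(f"step {t+1}: polarisation {np.linalg.norm(vel.mean(axis=0)):.2f}")
```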

I read an amusing article this morning by an ex-motoring editor who was declined insurance because the AI systems used by insurance companies had labelled him as high risk, possibly because he associated with people like Clarkson. Actually, he had no idea why, but that was his broker’s theory of how it might have happened. It’s a good article, well written, and covers quite a few of the dangers of allowing computers to take control.

http://www.dailymail.co.uk/sciencetech/article-5310031/Evidence-robots-acquiring-racial-class-prejudices.html

The article suggested how AIs in different companies might all come to similar conclusions about people or places or trends or patterns in a nice tidy positive feedback loop. That’s exactly the sort of thing that can drive information waves, which I demonstrated in 1987 could bring down an entire network in less than 3 milliseconds, in such a way that it would continue to crash many times when restarted. That isn’t intended by the algorithms, which individually ought to make good decisions, but interacting with one another they create the emergent phenomenon. Automated dealing systems are already pretty well understood in this regard, and mechanisms exist to prevent frequent stock market collapses, but that is only one specific type of behavior in one industry that is protected. There do not seem to be any industry-wide mechanisms to prevent the rest of this infinite class of problems from affecting any or all of the rest, simultaneously.
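As a toy illustration of that kind of positive feedback – not a model of any real insurer, bank or rating system – the sketch below has a handful of hypothetical AIs each setting a ‘risk score’ partly from their own data (which says nothing is wrong) and partly from what the others concluded last round. Past a critical level of mutual reinforcement, one tiny blip seen by a single system snowballs into a unanimous verdict that no individual system ever chose.

```python
# Toy positive-feedback loop between mutually-observing scoring systems.
def run(coupling, rounds=12, n=5, blip=0.01):
    scores = [0.0] * n
    scores[0] = blip                          # one AI sees a marginal warning sign
    for _ in range(rounds):
        avg = sum(scores) / n                 # what 'everyone else' seems to think
        scores = [0.5 * s + coupling * avg for s in scores]
    return max(scores)

for c in (0.3, 0.6, 1.2):                     # weak, moderate, strong mutual reinforcement
    print(f"coupling {c}: worst score after 12 rounds = {run(c):.4f}")
```

With weak coupling the blip dies out; with strong coupling every system ends up ‘agreeing’, purely through the feedback loop.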

As we create ever more deep learning neural networks that essentially teach themselves from huge data pools, human understanding of their ‘mindsets’ decreases. They make decisions using algorithms that are understood at a code level, but the massive matrix of derived knowledge they create from all the data they receive becomes highly opaque. Often, even usually, nobody quite knows how a decision is made. That’s bad enough in a standalone system, but when many such systems are connected, produced, owned and run by diverse companies with diverse thinking, the scope for destructive forms of emergence increases geometrically.

One result could be gridlock. Systems fed with a single new piece of data could crash. My 3 millisecond result from 1987 would still stand, since network latency is the prime limiter. The first AI receives the data, alters its mindset accordingly, processes it, makes a decision and interacts with a second AI. This second one might have different ‘prejudices’, so it makes its own decision based on different criteria and refuses to respond in the way intended. A third one looks at the second’s decision, takes that as evidence that there might be an issue and, with its risk-averse mindset, also refuses to act, and that inaction spreads through the entire network in milliseconds. Since the first AI thinks the data is fine and things should have gone ahead, it now interprets the inaction of the others as evidence that that type of data is somehow ‘wrong’, so it refuses to process any more of that type, whether from its own operators or other parts of the system. It essentially adds its own outputs to the bad feeling, and the entire system falls into sulk mode. As one part of the infrastructure starts to shut down, it infects other connected parts, and our entire IT – the entire global infrastructure – could fall into sulk mode. Since nobody knows how it all works, or what has caused the shutdown, it might be extremely hard to recover.
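A minimal sketch of that refusal cascade, using a made-up network of peer links between AIs: the single rule ‘refuse if any peer you consult has refused’ is enough for one wary node to freeze every node, and nothing in the rule says how to unfreeze them.

```python
# 'Sulk mode' cascade on a toy, hypothetical network of peer AIs.
from collections import deque

peers = {
    "A": ["B", "C"], "B": ["A", "D"], "C": ["A", "D", "E"],
    "D": ["B", "C", "F"], "E": ["C"], "F": ["D"],
}

def cascade(first_refuser):
    refusing, queue = {first_refuser}, deque([first_refuser])
    while queue:
        node = queue.popleft()
        for peer in peers[node]:
            if peer not in refusing:      # peer sees a refusal and turns cautious too
                refusing.add(peer)
                queue.append(peer)
    return refusing

print(sorted(cascade("B")))               # one wary AI drags all six into inaction
```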

Another possible result is a direct information wave, almost certainly triggered by a piece of fake news. Imagine our IT world in 5 years’ time, with all these super-smart AIs super-connected. A piece of fake news says a nuke has just been launched somewhere. Stocks will obviously decline, whatever the circumstances, so as the news spreads, everyone’s AIs will take it on themselves to start selling shares before the inevitable collapse, triggering that very collapse – except it won’t, because market safeguards won’t let that happen. BUT… the wave does spread, and all those individual AIs want to dispose of those shares, or at least find out what’s happening, so they all start sending messages to one another, exchanging data, trying to find out what’s going on. That’s the information wave. They can’t sell shares or find out, because the network is going into overload, so they try even harder and force it into severe overload. So it falls over. When it comes back online, they all try again, crashing it again, and so on.
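The retry storm at the end of that scenario can be sketched too. In the toy simulation below (illustrative numbers only, no real protocol), clients that retry in lock-step keep re-crashing the network indefinitely, while randomised exponential backoff – the same randomness that ‘turn it off and on again’ restores – spreads the retries out and lets it drain.

```python
# Toy retry-storm model: 1,000 clients, capacity for 300 requests per tick.
# On overload the 'network' serves nobody, crudely modelling a crash.
import random
random.seed(0)

CLIENTS, CAPACITY, TICKS = 1000, 300, 60

def simulate(jitter):
    waiting = {c: 0 for c in range(CLIENTS)}   # ticks until each client retries
    backoff = {c: 1 for c in range(CLIENTS)}
    pending = set(waiting)
    for t in range(TICKS):
        due = [c for c in pending if waiting[c] <= 0]
        served = set(due) if len(due) <= CAPACITY else set()  # overload => nobody served
        pending -= served
        for c in pending:
            if waiting[c] <= 0:                # failed this tick, schedule another retry
                backoff[c] = min(backoff[c] * 2, 16)
                waiting[c] = random.randint(1, backoff[c]) if jitter else 1
            else:
                waiting[c] -= 1
        if not pending:
            return t + 1
    return None

for name, jitter in (("lock-step retries", False), ("jittered backoff ", True)):
    ticks = simulate(jitter)
    print(name, "->", f"recovered after {ticks} ticks" if ticks else "never recovered")
```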

Another potential result is smartass AI. There is always some prat somewhere who sees an opportunity to take advantage and ruins it for everyone else by doing something like exploiting a small loophole in the law – or in this case, most likely, a prejudice our smartass AI has discovered in some other AI that means it can be taken advantage of by doing x, y or z. Since nobody quite knows how any of their AIs are making their decisions, because their mindsets are too big and too complex, it will be very hard to identify what is going on. Some really unusual behavior is corrupting the system because some AI is going rogue somewhere somehow, but which one, where, how?

That one brings us back to fake news, which will very soon infect AI systems with their own varieties of it. Complex networks of AIs will have many of the same problems we are seeing in human social networks. An AI could become a troll just the same as a human, deliberately winding others up to generate attention or to drive a change of some parameter – any parameter – in its own favour. Activist AIs will happen because people make them to push human activist causes, but they will also arise all by themselves. Their analysis of the system will sometimes show them that a good way to get a good result is to cause problems elsewhere.

Then there’s climate change, weather, storms, tsunamis. I don’t mean real ones; I mean the system-wide result of tiny interactions of tiny waves and currents of data and knowledge in neural nets. Tiny effects in one small part of a system can interact in unforeseen ways with other parts of other systems nearby, creating maybe a breeze, which interacts with breezes in nearby regions to create hurricanes. I think that’s a reasonable analogy. Chaos applies to societies of neural nets just as it does to climate, and the equivalents of 50-year waves will cause equivalent havoc in IT.
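For readers who want the chaos point made concrete, the classic logistic map shows the sensitivity involved – it is not a neural net, just the simplest standard illustration of chaotic divergence: two states differing by one part in a billion become completely uncorrelated within a few dozen steps.

```python
# Logistic-map demo of sensitivity to initial conditions.
x, y = 0.300000000, 0.300000001   # two states differing by one part in a billion
for step in range(1, 61):
    x, y = 3.9 * x * (1 - x), 3.9 * y * (1 - y)
    if step % 15 == 0:
        print(f"step {step}: {x:.4f} vs {y:.4f}  (difference {abs(x - y):.4f})")
```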

I won’t go on with more examples; long blogs are awful to read. None of these requires any self-awareness, sentience or consciousness, call it what you will. All of them can easily happen through simple interactions of fairly trivial AI deep learning nets. The level of interconnection sounds like it may already be becoming vulnerable to such emergence effects. Soon it definitely will be. Musk and Hawking have at least joined the party and they’ll think more and more deeply in coming months. Zuckerberg apparently doesn’t believe in AI threats but now accepts the problems social media is causing. Sorry Zuck, but the kind of AI your company is messing with will also be subject to its own kinds of social media issues, not just in its trivial decisions on what to post or block, but in actual inter-AI socializing issues. It might not try to eliminate humanity, but if it brings all of our IT to a halt and prevents rapid recovery, we’re still screwed.

 

2018 outlook: fragile

Futurists often consider wild cards – events that could happen, and would undoubtedly have high impacts if they do, but have either low certainty or low predictability of timing. 2018 comes with a larger basket of wildcards than we have seen for a long time. As well as wildcards, we are also seeing the intersection of several ongoing trends that are simultaneously reaching peaks, resulting in socio-political 100-year waves. If I had to summarise 2018 in a single word, I’d pick ‘fragile’, ‘volatile’ and ‘combustible’ as my shortlist.

Some of these are very much in all our minds, such as possible nuclear war with North Korea, imminent collapse of bitcoin, another banking collapse, a building threat of cyberwar, cyberterrorism or bioterrorism, rogue AI or emergence issues, high instability in the Middle East, rising inter-generational conflict, resurgence of communism and decline of capitalism among the young, increasing conflicts within LGBTQ and feminist communities, and collapse of the EU under combined pressures from many angles – economic stresses, unpredictable Brexit outcomes, increasing racial tensions resulting from immigration, and severe polarization of left and right with the rise of extreme parties at both ends. All of these trends have strong tribal characteristics, and social media is the perfect platform for tribalism to grow and flourish.

Adding fuel to the building but still unlit bonfire are increasing tensions between the West and Russia, China and the Middle East. Background natural wildcards – major epidemics, asteroid strikes, solar storms, megavolcanoes, megatsunamis and ‘the big one’ earthquakes – are still waiting in the wings.

If all this wasn’t enough, society has never been less able to deal with problems. Our ‘snowflake’ generation can barely cope with a pea under the mattress without falling apart or throwing tantrums, so how we will cope as a society if anything serious happens such as a war or natural catastrophe is anyone’s guess. 1984-style social interaction doesn’t help.

If that still isn’t enough, we’re apparently running a little short on Gandhis, Mandelas, Lincolns and Churchills right now too. Juncker, Trump, Merkel and May are at the far end of the same scale on ability to inspire and bring everyone together.

Depressing stuff, but there are plenty of good things coming too. Augmented reality, more and better AI, voice interaction, space development, cryptocurrency development, better IoT, fantastic new materials, self-driving cars and ultra-high speed transport, robotics progress, physical and mental health breakthroughs, environmental stewardship improvements, and climate change moving to the back burner thanks to coming solar minimum.

If we are very lucky, none of the bad things will happen this year and will wait a while longer, but many of the good things will come along on time or early. If.

Yep, fragile it is.

 

Mega-buildings could become cultural bubbles

My regular readers, both of them in fact, will know I am often concerned about the dangerous growth of social media bubbles. By mid-century, thanks to upcoming materials, some cities will have a few buildings over 1km tall, possibly 10km (and a spaceport or two up to 30km high). These would be major buildings, and could create a similar problem.

A 1km building could have 200 floors, and with floors 100m on a side, about 200 hectares of floor space. Assuming half is residential space and the other half is shops, offices or services, that equates to roughly 11,000 luxury apartments (90 sq m each) or more than 20,000 basic flats. That means each such building could be equivalent to a small town, with maybe 50,000 inhabitants. A 10km-high mega-building, with a larger 250m side, would have over 60 times more space, housing up to 300,000 people and all they need day-to-day – essentially a city.
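For anyone who wants to check the arithmetic, here is the back-of-envelope sketch behind those figures. The dwelling and population counts depend heavily on the assumed residential share, flat size and occupancy (2.5 people per dwelling below is my own assumption), and at the same packing density the 10km building could in principle hold far more people than the estimate above, so treat all of these as order-of-magnitude numbers rather than a floor plan.

```python
# Rough capacity arithmetic for the buildings described above (illustrative only).
def capacity(floors, side_m, residential_share, flat_m2, people_per_flat=2.5):
    total_m2 = floors * side_m * side_m
    flats = int(residential_share * total_m2 / flat_m2)
    return total_m2 / 10_000, flats, int(flats * people_per_flat)  # hectares, flats, people

# 1 km tower: 200 floors of 100 m x 100 m -> 200 hectares of floor space
print(capacity(200, 100, residential_share=0.5, flat_m2=90))   # ~11,000 larger flats
print(capacity(200, 100, residential_share=0.5, flat_m2=45))   # ~22,000 basic flats, ~55,000 people
# 10 km mega-building: ~2,000 floors of 250 m x 250 m -> ~12,500 hectares,
# over 60 times the space, with room for far more people than actually quoted
print(capacity(2000, 250, residential_share=0.5, flat_m2=90))
```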

Construction could be interesting. My thoughts are that a 10km building could be extruded from the ground using high pressure 3D printing, rather than assembled with cranes. Each floor could be fully fitted out while it is still near ground level, its apartments sold and populated, even as the building grows upward. That keeps construction costs and cash flow manageable.

My concern is that although we will have the technology to build such buildings in the 2040s, I’m not aware of much discussion about how cultures would evolve in such places, at least not outside of sci-fi (like Judge Dredd or Blade Runner). I rather hope we wouldn’t just build them first and try to solve social problems later. We really ought to have some sort of plans to make them work.

In a building with 100m sides, entire floors or groups of floors would likely be allocated to particular functions – residential, shopping, restaurants, businesses and so on. Grouping functions sensibly reduces the total travel needed. In larger buildings, it is easier to mix local shops for everyday essentials in among the apartments, with larger malls elsewhere.

People could live almost entirely in the building, rarely needing to leave, and many might well do just that, essentially becoming institutionalized. I think these buildings will feel very different from small towns. In small towns, people still travel a lot to other places, and a feeling of geographic isolation doesn’t emerge. In a huge tower block of similar population and facilities, I don’t think people would leave as often, and many would stay inside. Everything they need is close by and might soon feel safe and familiar, while the external world might seem more distant and scarier. Institutionalization might not take long – a month or two of becoming used to the convenience of staying nearby while watching news of horrors going on elsewhere. Once people lose the habit of leaving the building, it could become easier to find reasons not to leave it in future.

Power structures would soon evolve – local politics would happen, criminal gangs would emerge, and people would soon learn of good and bad zones. It’s possible that people might become tribal, their building and their tribe competing for external resources and funding with tribes in other mega-buildings, and there might be conflict. Knowing they are physically detached, people might show the same bravery to attack total strangers just because they hold different views that we see on social media today. There might be cyber-wars, drone wars, IoT wars between buildings.

I’m not claiming to be a social anthropologist. I have no real idea how these buildings will work, and perhaps my fears are unjustified. But even I can see some potential problems just based on what we see today, magnified for the same reasons problems get magnified on social media. Feelings of safety and anonymity can lead to some very nasty tribal behaviors. Managing diversity of opinion among people moving in would be a significant challenge; maintaining it might be near impossible. With the sort of rapid polarization we’ve already seen thanks to social media bubbles, physically contained communities would surely see those same forces magnified every day.

Building a 10km mega-building will become feasible in the 2040s, and increased urban populations will make them an attractive option for planners. Managing them and making them work socially might be a much bigger challenge.

 

 

Google and the dangerous pursuit of ‘equality’

The world just got more dangerous, and I’m not talking about N Korea and Trump.

Google just sacked an employee because he openly suggested that men and women, (not all, but some, and there is an overlap, and …) might tend to have different preferences in some areas and that could (but not always, and only in certain cases, and we must always recognize and respect everyone and …) possibly account for some of the difference in numbers of men and women in certain roles (but there might be other causes too and obviously lots of discrimination and …. )

Yes, that’s what he actually said, but with rather more ifs and buts and maybes. He felt the need to wrap such an obvious statement in cotton wool several kilometers thick so as not to offend the deliberately offended, but deliberate offense was nonetheless taken and he is out on his ear.

Now, before you start thinking this is some right-wing rant, I feel obliged to point out just how progressive Futurizon is: 50% of all Futurizon owners and employees are female, all employees and owners have the same voting rights, 50% are immigrants and all are paid exactly the same and have the same size offices, regardless of dedication, ability, nature or quality or volume of output and regardless of their race, religion, beauty, shape, fitness, dietary preferences, baldness, hobbies or political views, even if they are Conservatives. All Futurizon offices are safe zones where employees may say anything they want of any level of truth, brilliance or stupidity and expect it to be taken as absolute fact and any consequential emotional needs to be fully met. No employee may criticize any other employee’s mouse mat, desk personalisation or screen wallpaper for obvious lack of taste. All employees are totally free to do anything they choose 100% of the time and can take as much leave as they want. All work is voluntary. All have the same right to respectfully request any other employee to make them coffee, tea or Pimms. All employees of all genders real or imagined are entitled to the same maternity and paternity rights, and the same sickness benefits, whether ill or not. In fact, Futurizon does not discriminate on any grounds whatsoever. We are proud to lead the world in non-discrimination. Unfortunately, our world-leading terms of employment mean that we can no longer afford to hire any new employees.

However, I note that Google has rather more power and influence than Futurizon so their policies count more. They appear (Google also has better lawyers than I can afford, so I must stress that all that follows is my personal opinion) to have firmly decided that diversity is all-important and they seem to want total equality of outcome. The view being expressed not just by Google but by huge swathes of angry protesters seems to be that any difference in workforce representation from that of the general population must arise from discrimination or oppression so must be addressed by positive action to correct it. There are apparently no statistically discernible differences in behavior between genders, or in job or role preference, so any you may have noticed over the time you’ve been alive is just your prejudice. Google says they fully support free speech and diversity of views, but expression of views is apparently only permitted as long as those views are authorized, on penalty of dismissal.

So unless I’m picking up totally the wrong end of the stick here, and I don’t do that often, only 13% of IT engineers are women, but internal policies must ensure that the proportion rises to 50%, whether women want to do that kind of work or not. In fact, nobody may question whether as many women want to work as IT engineers as men; it must now be taken as fact. By extension, since more women currently work in marketing, HR and PR, they must be substituted by men via positive action programs until men fill 50% of those roles. Presumably similar policies must also apply in medical bays for nursing and other staff there, and in construction teams for their nice new buildings. Ditto all other genders, races, religions; all groups must be protected and equalized to USA population proportions, apparently except those that don’t claim to hold sufficiently left-wing views, in which case it is seemingly perfectly acceptable to oppress, ostracize and even expel them.

In other words, freedom of choice and difference in ability, and more importantly freedom from discrimination, must be over-ruled in favor of absolute equality of diversity, regardless of financial or social cost, or impact on product or service quality. Not expressing full and enthusiastic left-wing compliance is seemingly just cause for dismissal.

So, why does this matter outside Google? Well, AI is developing very nicely. In fact, Google is one of the star players in the field right now. It is Google that will essentially decide how much of the AI around us is trained, how it learns, what it learns, what ‘knowledge’ it has of the world. Google will pick the content the AI learns from, and overrule or reeducate it if it draws any ‘wrong’ conclusions about the world, such as that more women than men want to be nurses or work in HR, or that more men than women want to be builders or engineers. A Google AI must presumably believe that the only differences between men and women are physical, unless their AI is deliberately excluded from the loudly declared corporate values and belief sets.

You should be very worried. Google’s values really matter. They have lots of influence on some of the basic tools of everyday life. Even outside their company, their AI tools and approaches will have strong influence on how other AI develops, determining operating systems and platforms, languages, mechanisms, interfaces, filters, even prejudices, and that reach and influence is likely to increase. Their AI may well be in many self-driving cars, and if they have to make life or death decisions, the underlying value assumptions must feature in the algorithms.

Soon companies will need AI that is more emotionally compliant. AI will use compliments or teasing or seduction or sarcasm or wit as marketing tools as well as just search engine positioning. Soon AI will use highly expressive faces with attractive voices, with attractive messages, tailored to appeal to you by pandering to your tastes and prejudices while thinking something altogether different. AI might be the person at the party that is all smiles and compliments, before going off to tell everyone else how awful it thinks you are. If you dare to say something not ‘authorized’, the ultra-smart AI all around you might treat you condescendingly, making you feel ashamed, ostracized, a dinosaur. Then it might secretly push you down a few pages in search results, or put a negative spin on text summaries about you, or exclude you from recommendations. Or it might do all the secret stuff while pretending it thinks you’re fantastic.

Internal cultural policies in companies like Google today could soon be external social engineering to push the left-wing world the IT industry believes in – it isn’t just Google; Facebook and Twitter are also important and just as Left, though Amazon, Samsung, IBM and other AI players are less overtly politically biased, so far at least. Left-wing policies generally cost a lot more, but Google and Facebook will presumably still expect other companies and people to pay the taxes to pay for it all. As their female staff gear up to fight them over pay differences between men and women for similar jobs, it often seems that Google’s holier-than-thou morality doesn’t quite make it as far as their finances.

Then it really starts being fun. We’ll soon have bacteria that can fabricate electronic circuits within themselves. Soon they’ll be able to power them too, giving the concept of smart yogurt. These bacteria could also have nanotechnology flagella to help them get around. We’ll soon have bacterial spies all over our environment, even on our skin, intercepting electronic signals that give away our thoughts. They’ll bring in data on everything that is said, everything that everyone even thinks or feels. Those bacteria will be directly connected into AI, in fact they’ll be part of it. They’ll be able to change things, to favor or punish according to whether they like what someone believes in or how they behave.

It isn’t just right-wing extremists who need to worry. I’m apparently Nouveau Left – I score slightly left of center on political profiling tests – but I’m worried. A lot of this PC stuff seems extreme to me, sometimes just nonsense. Maybe it is, or maybe I should be lefter. But it’s not my choice. I don’t make the rules. Companies like Google make the rules; they even run the AI ethics groups. They decide much of what people see online, and even the meaning of the words. It’s very 1984-ish.

The trouble with the ‘echo chambers’ we have heard about is that they soon normalize views to the loudest voices in those groups, and those don’t tend to be the moderates. We can expect views to drift further towards the extremes, not back towards the middle. You probably aren’t left enough either. You should also be worried.