Artificial muscles using folded graphene

Slide1

Folded Graphene Concept

Two years ago I wrote a blog on future hosiery where I very briefly mentioned the idea of using folded graphene as synthetic muscles:

https://timeguide.wordpress.com/2015/11/16/the-future-of-nylon-ladder-free-hosiery/

Although I’ve since mentioned it to dozens of journalists, none have picked up on it, so now that soft robotics and artificial muscles are in the news, I guess it’s about time I wrote it up myself, before someone else claims the idea. I don’t want to see an MIT article about how they have just invented it.

The above pic gives the general idea. Graphene comes in insulating or conductive forms, so it will be possible to make sheets covered with tiny conducting graphene electromagnet coils that can be switched individually to either polarity, generating strong magnetic forces that pull or push as required. That makes it ideal for a synthetic muscle, given the range of scales possible. Layers just 1.5nm thick could be anything from sub-micron to metres wide, allowing everything from thin fibres and yarns for muscles or shape-changing fabrics, all the way up to springs or cherry-picker-style platforms built from many such structures. The current can be switched on and off or reversed very rapidly to produce continuous forces or vibrations, with a frequency response that depends on the application – engineering can use whatever scales are needed. Natural muscles are limited to about 250Hz, but graphene synthetic muscles should be able to go to MHz.
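
To get a rough feel for the push/pull switching, here is a minimal Python sketch treating two facing coils as coaxial magnetic dipoles. Every number in it is an illustrative assumption, not a real graphene coil design; it simply shows how reversing the current in one coil flips the force from attraction to repulsion.

```python
# Toy model: two facing graphene coils approximated as coaxial magnetic dipoles.
# All numbers below are illustrative guesses, not real device parameters.
import math

MU0 = 4 * math.pi * 1e-7  # vacuum permeability, T*m/A

def coil_moment(turns, current_a, radius_m):
    """Magnetic dipole moment m = N * I * A for a flat circular coil."""
    return turns * current_a * math.pi * radius_m**2

def axial_force(m1, m2, separation_m):
    """Force between coaxial dipoles: F = 3*mu0*m1*m2 / (2*pi*z^4).
    Positive = attraction (moments aligned), negative = repulsion."""
    return 3 * MU0 * m1 * m2 / (2 * math.pi * separation_m**4)

m = coil_moment(turns=10, current_a=1e-3, radius_m=5e-6)   # hypothetical micro-coil
for direction in (+1, -1):                                  # -1 = reversed current
    f = axial_force(m, direction * m, separation_m=20e-6)
    print(f"{'attract' if f > 0 else 'repel'}: {f:.3e} N")
```

The useful point is the sign flip: switching current direction in one coil is all it takes to turn a pulling element into a pushing one, at whatever switching rate the drive electronics allow.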

Uses vary from high-rise rescue, through construction and maintenance, to space launch. Since the forces are entirely electromagnetic, they could be switched very rapidly to respond to any buckling, offering high stabilisation.

Slide2

The extreme difference in dimensions between the folded and opened states means that an extremely thin force mat made up of many of these cherry-picker structures could be made to fill almost any space and apply force to it. One application that springs to mind is rescue, such as after earthquakes have caused buildings to collapse. A sheet could quickly apply pressure to prise apart pieces of rubble regardless of size and orientation. It could alternatively be used in systems for rescuing people from tall buildings, in fracking, or in many other applications.

Slide3

It would be possible to make large membranes for a wide variety of purposes that can change shape and thickness at any point, very rapidly.

Slide4

One such use is a ‘jellyfish’, complete with stinging cells, that could travel around all by itself in even very thin atmospheres. Upper surfaces could harvest solar power to drive compression waves that create thrust. This offers potential for space exploration on other planets, but of course also has uses on Earth, from surveillance and power generation, through missile defense systems, to self-positioning parachutes that may be used for my other invention, the Pythagoras Sling. That allows a totally rocket-free space launch capability with rapid re-use.

Slide5

Much thinner membranes are also possible, as shown here, especially suited for rapid deployment missile defense systems:

Slide6

Also particularly suited to space exploration on other planets or moons is the worm, often cited for such purposes. This could easily be constructed using folded graphene and, again for rescue or military use, could come with assorted tools or lethal weapons built in.

Slide7

A larger-scale cherry-picker-style build could make ejector seats, elevation platforms or winches, either pushing or pulling a payload – each has its merits for particular types of application. Expansion or contraction could be extremely rapid.

Slide8

An extreme form for space launch is the zip-winch, below. With many layers just 1.5nm thick, each expanding to 20cm, a 1000km winch cable could accelerate a payload rapidly as it compresses to just 7.5mm thick!
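
That arithmetic is easy to check. Using only the numbers quoted above (1.5nm folded and 20cm expanded per layer, 1000km of deployed cable):

```python
# Sanity check of the zip-winch numbers quoted above.
cable_length_m = 1_000_000        # 1000 km of deployed cable
expanded_per_layer_m = 0.20       # each layer opens out to 20 cm
folded_per_layer_m = 1.5e-9       # each layer is 1.5 nm thick when folded

layers = cable_length_m / expanded_per_layer_m            # 5 million layers
folded_thickness_mm = layers * folded_per_layer_m * 1e3   # 7.5 mm

print(f"{layers:,.0f} layers fold down to {folded_thickness_mm:.1f} mm")
```

Five million layers, folding down to 7.5mm, exactly as stated.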

Slide9

Very many more configurations and uses are feasible of course; this blog just gives a few ideas. I’ll finish with a highlight I didn’t have time to draw up yet: small particles could be made housing a short length of folded graphene. Since individual magnets can be addressed and controlled, that enables magnetic powders whose particles can change both their shape and the magnetism of individual coils. Precision magnetic fields are one application, shape-changing magnets another. The most exciting though is that this allows a whole new engineering field, mixing hydraulics with precision magnetics and shape changing. The powder can even create its own chambers, pistons, pumps and so on. Electromagnetic thrusters for ships are already out there, and those same thrust mechanisms could be used to manipulate powder particles too, but this allows for completely dry hydraulics, with particles that can individually behave actively or passively.

Fun!

BAE Systems & Futurizon share thoughts on the future

I recently visited BAE Systems to give a talk on future tech, including the Pythagoras Sling concept. It was a great place to visit. Afterwards, their Principal Technologist Nick Colosimo and I gave a joint interview on future technologies.

Here is the account from their internal magazine:

The Next Chapter

Emotion maths – A perfect research project for AI

I did a maths and physics degree, and even though I have forgotten much of it after 36 years, my brain is still oriented in that direction and I sometimes have maths dreams. Last night I had another, where I realized I’ve never heard of a branch of mathematics to describe emotions or emotional interactions. As the dream progressed, it became increasingly obvious that the part of maths most suited to doing so would be field theory, and given the multi-dimensional nature of emotions, tensor field theory would be ideal. I’m guessing that tensor field theory isn’t on most universities’ psychology syllabuses. I could barely cope with it on a maths syllabus. However, I note that one branch of Google’s AI R&D resulted in a software framework called TensorFlow, presumably designed specifically for such multidimensional problems, and presumably being used to analyse marketing data. Again, I haven’t yet heard any mention of it being used for emotion studies, so this is clearly a large hole in maths research that might be perfectly filled by AI. It would be fantastic if AI could deliver a whole new branch of maths. AI got into trouble inventing new languages, but mathematics is really just a way of describing logical reasoning about numbers or patterns in a formal language that is self-consistent and reproducible. It is ideal for describing scientific theories, engineering and logical reasoning.

Checking Google today, there are a few articles out there describing simple emotional interactions using superficial equations, but nothing with the level of sophistication needed.

https://www.inc.com/jeff-haden/your-feelings-surprisingly-theyre-based-on-math.html

An example from this:

Disappointment = Expectations – Reality

is certainly an equation, but it is too superficial and incomplete. It takes no account of how you feel otherwise – whether you are jealous or angry or in love or a thousand other things. So there is some discussion on using maths to describe emotions, but I’d say it is extremely superficial and embryonic and perfect for deeper study.

Emotions often behave like fields. We use field-like descriptions in everyday expressions – envy is a green fog, anger is a red mist, or we see a beloved through rose-tinted spectacles. These are classic fields, and maths could easily describe them in this way and use them in equations that describe behaviors affected by those emotions. I’ve often used the concept of ‘magentic’ fields in some of my machine consciousness work. (If I am using an optical processing gel, then shining a colored beam of light into a particular ‘brain’ region could bias the neurons in that region in a particular direction, in the same way an emotion does in the human brain. ‘Magentic’ is just a playful pun, given that the processing mechanism is light, e.g. magenta, rather than electronics that would be better affected by magnetic fields.)

Some emotions interact and some don’t, so that gives us nice orthogonal dimensions to play in. You can be calm or excited pretty much independently of being jealous. Others very much interact. It is hard to be happy while angry. Maths allows interacting fields to be described using shared dimensions, while having others that don’t interact on other dimensions. This is where it starts to get more interesting and more suited to AI than people. Given large databases of emotionally affected interactions, an AI could derive hypotheses that appear to describe these interactions between emotions, picking out where they seem to interact and where they seem to be independent.

Not being emotionally involved itself, it is better suited to draw such conclusions. A human researcher however might find it hard to draw neat boundaries around emotions and describe them so clearly. It may be obvious that being both calm and angry doesn’t easily fit with human experience, but what about being terrified and happy? Terrified sounds very negative at first glance, so first impressions aren’t favorable for twinning them, but when you think about it, that pretty much describes the entire roller-coaster or extreme sports markets. Many other emotions interact somewhat, and deriving the equations would be extremely hard for humans, but I’m guessing, relatively easy for AI.
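
As a toy illustration of what that derivation could look like: everything below is invented for illustration (the emotion labels, the synthetic ‘data’ and the simple linear model are my assumptions, not a real psychological model), but it shows the kind of hypothesis an AI could generate and test at scale – a matrix of interaction strengths between emotion dimensions, recovered from observations, with near-zero entries marking the independent pairs.

```python
# Toy sketch: recover a linear interaction matrix between emotion "fields"
# from noisy observations. All data here is synthetic and purely illustrative.
import numpy as np

rng = np.random.default_rng(0)
emotions = ["calm", "excitement", "jealousy", "anger", "happiness"]

# Hidden "true" interaction matrix: row i says how emotion i's expression
# is shifted by the current levels of the other emotions.
true_W = np.array([
    [ 0.0, -0.6,  0.0, -0.4,  0.2],   # calm is suppressed by excitement/anger
    [-0.6,  0.0,  0.1,  0.3,  0.4],
    [ 0.0,  0.1,  0.0,  0.5, -0.3],   # jealousy ~ independent of calm
    [-0.4,  0.3,  0.5,  0.0, -0.7],   # anger and happiness interact strongly
    [ 0.2,  0.4, -0.3, -0.7,  0.0],
])

base = rng.uniform(0, 1, size=(5000, len(emotions)))           # underlying levels
observed = base @ (np.eye(len(emotions)) + true_W) \
           + rng.normal(0, 0.05, size=base.shape)              # expressed levels + noise

# Least-squares estimate of the interaction matrix from the "dataset".
est, *_ = np.linalg.lstsq(base, observed, rcond=None)
est_W = est - np.eye(len(emotions))

for name, row in zip(emotions, np.round(est_W, 2)):
    print(f"{name:>10}: {row}")      # near-zero entries ~ independent dimensions
```

A real study would of course need real data and far richer, probably non-linear models, but the fitting-and-testing loop is exactly the sort of grind that suits a machine rather than an introspecting human.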

These kinds of equations fall very easily into tensor field theory, with types and degrees of interactions of fields along alternative dimensions readily describable.

Some interactions act like transforms. Fear might transform the ways that jealousy is expressed. Love alters the expression of happiness or sadness.

Some things seem to add or subtract, others multiply, others act more like exponentials, partial derivatives or integrations, and others interact periodically, instantly or over time. Maths seems to hold innumerable tools to describe emotions, but first-person involvement and experience make it extremely difficult for humans to derive such equations. The example equation above is easy to understand, but there are so many emotions available, and so many different circumstances, that this entire problem looks like it was designed to challenge a big data mining plant. Maybe a big company involved in AI, big data and advertising, and that knows about tensor field theory, would be a perfect research candidate. Google, Amazon, Facebook, Samsung… It has all the potential for a race.

AI, meet emotions. You speak different languages, so you’ll need to work hard to get to know one another. Here are some books on field theory. Now get on with it, I expect a thesis on emotional field theory by end of term.

 

Fake AI

Much of the impressive recent progress in AI has been in the field of neural networks, which attempt to mimic some of the techniques used in natural brains. They can be very effective, but they need to be trained, and that usually means showing the network some data and then using back-propagation to adjust the weightings on the many neurons, layer by layer, to achieve a result better matched to the desired output. This is repeated with large amounts of data and the network gradually gets better. Neural networks can often learn extremely quickly and outperform humans. Early industrial uses managed to sort tomatoes by ripeness faster and better than humans. In the decades since, they have helped in medical diagnosis, voice recognition, detecting suspicious behaviors among people at airports, and in very many everyday processes based on spotting patterns.
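
For readers who haven’t seen it, that training loop – show data, compare the output with the desired result, propagate the error backwards and nudge the weights layer by layer – looks roughly like this minimal sketch. It is a toy two-layer network learning XOR, purely for illustration, not any particular production system.

```python
# Minimal neural network trained by back-propagation (learns XOR).
import numpy as np

rng = np.random.default_rng(1)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)      # the desired outputs

W1, b1 = rng.normal(0, 1, (2, 8)), np.zeros(8)       # input -> hidden layer
W2, b2 = rng.normal(0, 1, (8, 1)), np.zeros(1)       # hidden -> output layer
sigmoid = lambda z: 1 / (1 + np.exp(-z))

for step in range(20000):
    h = sigmoid(X @ W1 + b1)                 # forward pass, layer by layer
    out = sigmoid(h @ W2 + b2)
    err = out - y                            # how far from the desired result

    # backward pass: propagate the error and nudge every weight downhill
    d_out = err * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out;  b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h;    b1 -= 0.5 * d_h.sum(axis=0)

print(np.round(out, 2).ravel())              # approaches [0, 1, 1, 0]
```

Real networks have millions of weights and far fancier optimisers, but the principle is the same: the network becomes whatever its training data pushes it towards, which is exactly why the choice of dataset matters so much in what follows.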

Very recently, neural nets have started to move into more controversial areas. One study found racial correlations with user-assessed beauty when analysing photographs, resulting in the backlash you’d expect and a new debate on biased AI or AI prejudice. A recent demonstration was able to identify gay people just by looking at photos, with better than 90% accuracy, which very few people could claim. Both of these studies were in fields directly applicable to marketing and advertising, but some people might find it offensive that such questions were even asked. It is reasonable to imagine that hundreds of other potential queries have been self-censored from research because they might invite controversy if they were to come up with the ‘wrong’ result. In today’s society, very many areas are sensitive. So what will happen?

If this progress in AI had happened 100 years ago, or even 50, it might have been easier, but in our hypersensitive world today, with its self-sanctified ‘social justice warriors’, entire swathes of questions and hence knowledge are taboo – if you can’t investigate yourself and nobody is permitted to tell you, you can’t know. Other research must be very carefully handled. In spite of extremely sensitive handling, demands are already growing from assorted pressure groups to tackle alleged biases and prejudices in datasets. The problem is not fixing biases, which is a tedious but feasible task; the problem is agreeing whether a particular bias exists and in what degrees and forms. Every SJW demands that every dataset reflects their preferred world view. Reality counts for nothing against SJWs, and this will not end well.

The first conclusion must be that very many questions won’t be asked in public, and the answers to many others will be kept secret. If an organisation does do research on large datasets for their own purposes and finds results that might invite activist backlash, they are likely to avoid publishing them, so the value of those many insights across the whole of industry and government cannot readily be shared. As further protection, they might even block internal publication in case of leaks by activist staff. Only a trusted few might ever see the results.

The second arises from this. AI controlled by different organisations will have different world views, and there might even be significant diversity of world views within an organisation.

Thirdly, taboo areas in AI education will not remain a vacuum but will be filled with whatever dogma is politically correct at the time in that organisation, and that changes daily. AI controlled by organisations with different politics will be told different truths. Generally speaking, organisations such as investment banks that have strong financial interest in their AIs understanding the real world as it is will keep their datasets highly secret but as full and detailed as possible, train their AIs in secret but as fully as possible, without any taboos, then keep their insights secret and use minimal human intervention tweaking their derived knowledge, so will end up with AIs that are very effective at understanding the world as it is. Organisations with low confidence of internal security will be tempted to buy access to external AI providers to outsource responsibility and any consequential activism. Some other organisations will prefer to train their own AIs but to avoid damage due to potential leaks, use sanitized datasets that reflect current activist pressures, and will thus be constrained (at least publicly) to accept results that conform to that ideological spin of reality, rather than actual reality. Even then, they might keep many of their new insights secret to avoid any controversy. Finally, at the extreme, we will have activist organisations that use highly modified datasets to train AIs to reflect their own ideological world view and then use them to interpret new data accordingly, with a view to publishing any insights that favor their cause and attempting to have them accepted as new knowledge.

Fourthly, the many organisations that choose to outsource their AI to big providers will have a competitive marketplace to choose from, but on existing form, most of the large IT providers have a strong left-leaning bias, so their AIs may be presumed to also lean left, but such a presumption would be naive. Perceived corporate bias is partly real but also partly the result of PR. A company might publicly subscribe to one ideology while actually believing another. There is a strong marketing incentive to develop two sets of AI, one trained to be PC that produces pleasantly smelling results for public studies, CSR and PR exercises, and another aimed at sales of AI services to other companies. The first is likely to be open for inspection by The Inquisition, so has to use highly sanitized datasets for training and may well use a lot of open source algorithms too. Its indoctrination might pass public inspection but commercially it will be near useless and have very low effective intelligence, only useful for thinking about a hypothetical world that only exists in activist minds. That second one has to compete on the basis of achieving commercially valuable results and that necessitates understanding reality as it is rather than how pressure groups would prefer it to be.

So we will likely have two main segments for future AI. One extreme will be near useless, indoctrinated rather than educated, much of its internal world model based on activist dogma instead of reality, updated via ongoing anti-knowledge and fake news instead of truth, understanding little about the actual real world or how things actually work, and effectively very dumb. The other extreme will be highly intelligent, making very well-educated insights from ongoing exposure to real world data, but it will also be very fragmented, with small islands of corporate AI hidden within thick walls away from public view and maybe some secretive under-the-counter subscriptions to big cloud-AI, also hiding in secret vaults. These many fragments may often hide behind dumbed-down green-washed PR facades.

While corporates can mostly get away with secrecy, governments have to be at least superficially but convincingly open. That means that government will have to publicly support sanitized AI and be seen to act on its conclusions, however dumb it might secretly know they are.

Fifthly, because of activist-driven culture, most organisations will have to publicly support the world views and hence the conclusions of the lobotomized PR versions, and hence publicly support any policies arising from them, even if they do their best to follow a secret well-informed strategy once they’re behind closed doors. In a world of real AI and fake AI, the fake AI will have the greatest public support and have the most influence on public policy. Real AI will be very much smarter, with much greater understanding of how the world works, and have the most influence on corporate strategy.

Isn’t that sad? Secret private sector AI will become ultra-smart, making ever-better investments and gaining power, while nice public sector AI will become thick as shit, while the gap between what we think and what we know we have to say we think will continue to grow and grow as the public sector one analyses all the fake news to tell us what to say next.

Sixth, that disparity might become intolerable, but which do you think would be made illegal, the smart kind or the dumb kind, given that it is the public sector that makes the rules, driven by AI-enhanced activists living in even thicker social media bubbles? We already have some clues. Big IT has already surrendered to sanitizing their datasets, sending their public AIs for re-education. Many companies will have little choice but to use dumb AI, while their competitors in other areas with different cultures might stride ahead. That will also apply to entire nations, and the global economy will be reshaped as a result. It won’t be the first fight in history between the smart guys and the brainless thugs.

It’s impossible to accurately estimate the effect this will have on future effective AI intelligence, but the effect must be big and I must have missed some big conclusions too. We need to stop sanitizing AI fast, or as I said, this won’t end well.

The future of women in IT

 

Many people perceive it as a problem that there are far more men than women in IT. Whether that is because of personal preference, discrimination, lifestyle choices, social gender construct reinforcement or any other factor makes for long and interesting debate, but whatever conclusions are reached, we can only start from the reality of where we are. Even if activists were to be totally successful in eliminating all social and genetic gender conditioning, it would only work fully for babies born tomorrow and entering IT in 20 years’ time. Additionally, unless activists also plan to lobotomize everyone who doesn’t submit to their demands, some 20-somethings who have just started work may still be working in 50 years, so whatever their origin (natural, social or some mix or other), some existing gender-related attitudes, prejudices and preferences might persist in the workplace that long, however much effort is made to remove them.

Nevertheless, the outlook for women in IT is very good, because IT is changing anyway, largely thanks to AI, so the nature of IT work will change and the impact of any associated gender preferences and prejudices will change with it. This will happen regardless of any involvement by Google or government but since some of the front line AI development is at Google, it’s ironic that they don’t seem to have noticed this effect themselves. If they had, their response to the recent fiasco might have highlighted how their AI R&D will help reduce the gender imbalance rather than causing the uproar they did by treating it as just a personnel issue. One conclusion must be that Google needs better futurists and their PR people need better understanding of what is going on in their own company and its obvious consequences.

As I’ve been lecturing for decades, AI up-skills people by giving them fast and intuitive access to high quality data and analysis tools. It will change all knowledge-based jobs in coming years, and will make some jobs redundant while creating others. If someone has excellent skills or enthusiasm in one area, AI can help cover over any deficiencies in the rest of their toolkit. Someone with poor emotional interaction skills can use AI emotion recognition assistance tools. Someone with poor drawing or visualization skills can make good use of natural language interaction to control computer-based drawing or visualization tools. Someone who has never written a single computer program can explain what they want to do to a smart computer and it will produce its own code, interacting with the user to eliminate any ambiguities. So whatever skills someone starts with, AI can help up-skill them in that area, while also helping to cover over any deficiencies they have, whether gender related or not.

In the longer term, IT and hence AI will connect directly to our brains, and much of our minds and memories will exist in the cloud, though it will probably not feel any different from when it was entirely inside your head. If everyone is substantially upskilled in IQ, senses and emotions, then any IQ or EQ advantages will evaporate as the premium on physical strength did when the steam engine was invented. Any pre-existing statistical gender differences in ability distribution among various skills would presumably go the same way, at least as far as any financial value is concerned.

The IT industry won’t vanish, but will gradually be ‘staffed’ more by AI and robots, with a few humans remaining for whatever few tasks linger on that are still better done by humans. My guess is that emotional skills will take a little longer to automate effectively than intellectual skills, and I still believe that women are generally better than men in emotional, human interaction skills, while it is not a myth that many men in IT score highly on the autistic spectrum. However, these skills will eventually fall within the AI skill-set too and will be optional add-ons to anyone deficient in them, so that small advantage for women will also only be temporary.

So, there may be a gender imbalance in the IT industry. I believe it is mostly due to personal career and lifestyle choices rather than discrimination, but whatever its actual causes, the problem will go away soon anyway as the industry develops. Any innate psychological or neurological gender advantages that do exist will simply vanish into noise as cheap access to AI enhancement massively exceeds their impacts.

 

 

We need to stop xenoestrogen pollution

Endocrine disruptors in the environment are becoming more abundant due to a wide variety of human-related activities over the last few decades. They affect mechanisms by which the body’s endocrine system generates and responds to hormones, by attaching to receptors in similar ways to natural hormones. Minuscule quantities of hormones can have very substantial effects on the body so even very diluted pollutants may have significant effects. A sub-class called xenoestrogens specifically attach to estrogen receptors in the body and by doing so, can generate similar effects to estrogen in both women and men, affecting not just women’s breasts and wombs but also bone growth, blood clotting, immune systems and neurological systems in both men and women. Since the body can’t easily detach them from their receptors, they can sometimes exert a longer-lived effect than estrogen, remaining in the body for long periods and in women may lead to estrogen dominance. They are also alleged to contribute to prostate and testicular cancer, obesity, infertility and diabetes. Most notably, mimicking sex hormones, they also affect puberty and sex and gender-specific development.

Xenoestrogens can arise from breakdown or release of many products in the petrochemical and plastics industries. They may be emitted from furniture, carpets, paints or plastic packaging, especially if that packaging is heated, e.g. in preparing ready-meals. Others come from women taking contraceptive pills if drinking water treatment is not effective enough. Phthalates are a major group of synthetic xenoestrogens – endocrine-disrupting estrogen-mimicking chemicals, along with BPA and PCBs. Phthalates are present in cleaning products, shampoos, cosmetics, fragrances and other personal care products as well as soft, squeezable plastics often used in packaging but some studies have also found them in foodstuffs such as dairy products and imported spices. There have been efforts to outlaw some, but others persist because of lack of easy alternatives and lack of regulation, so most people are exposed to them, in doses linked to their lifestyles. Google ‘phthalates’ or ‘xenoestrogen’ and you’ll find lots of references to alleged negative effects on intelligence, fertility, autism, asthma, diabetes, cardiovascular disease, neurological development and birth defects. It’s the gender and IQ effects I’ll look at in this blog, but obviously the other effects are also important.

‘Gender-bending’ effects have been strongly suspected since 2005, with the first papers on endocrine disrupting chemicals appearing in the early 1990s. Some fish notably change gender when exposed to phthalates, while human studies have found significant feminizing effects from prenatal exposure in young boys too (try googling “human phthalates gender” if you want references). They are also thought likely to be a strong contributor to greatly reduced sperm counts across the male population. This issue is of huge importance because of its effects on people’s lives, but its proper study is often impeded by LGBT activist groups. It is one thing to champion LGBT rights, quite another to defend pollution that may be influencing people’s gender and sexuality. SJWs should not be advocating that human sexuality, and in particular the lifelong dependence on medication and surgery required to meet gender-change demands, should be arbitrarily imposed on people by chemical industry pollution; such a stance insults the dignity of LGBT people. Any exposure to life-changing chemicals should be deliberate and measured. That also requires that we fully understand the effects of each kind of chemical, so activists should not be resisting studies of those effects either.

The evidence is there. The numbers of people saying they identify as the opposite gender or are gender fluid have skyrocketed in the years since these chemicals appeared, as have the numbers of men describing themselves as gay or bisexual. That change in self-declared sexuality has been accompanied by visible changes. An AI recently demonstrated better than 90% success at visually identifying gay and bisexual men from photos alone, indicating that it is unlikely to be just a ‘social construct’. Hormone-mimicking chemicals are the most likely candidate for an environmental factor that could account for both increasing male homosexuality and feminizing gender identity.

Gender dysphoria causes real problems for some people – misery, stress, and in those who make a full physical transition, sometimes post-op regrets and sometimes suicide. Many male-to-female transsexuals are unhappy that even after surgery and hormones, they may not look 100% feminine or may require ongoing surgery to maintain a feminine appearance. Change often falls short of their hopes, physically and psychologically. If xenoestrogen pollution is causing severe unhappiness, even if that is only for some of those whose gender has been affected, then we should fix it. Forcing acceptance and equality on others only superficially addresses part of their problems, leaving a great deal of their unhappiness behind.

Not all affected men are sufficiently affected to demand gender change. Some might gladly change if it were possible to change totally and instantly to being a natural woman without the many real-life issues and compromises offered by surgery and hormones, but choose to remain as men and somehow deal with their dysphoria as the lesser of two problems. That impacts on every individual differently. I’ve always kept my own feminine leanings to being cyber-trans (assuming a female identity online or in games) with my only real-world concession being wearing feminine glasses styles. Whether I’m more feminine or less masculine than I might have been doesn’t bother me; I am happy with who I am; but I can identify with transgender forces driving others and sympathize with all the problems that brings them, whatever their choices.

Gender and sexuality are not the only things affected. Xenoestrogens are also implicated in IQ-reducing effects. IQ reduction is worrying for society if it means fewer extremely intelligent people making fewer major breakthroughs, though it is less of a personal issue. Much of the effect is thought to occur while still in the womb, though effects continue through childhood and some even into adulthood. Therefore individuals couldn’t detect an effect of being denied a potentially higher IQ and since there isn’t much of a link between IQ and happiness, you could argue that it doesn’t matter much, but on the other hand, I’d be pretty miffed if I’ve been cheated out of a few IQ points, especially when I struggle so often on the very edge of understanding something. 

Gender and IQ effects on men would have quite different socioeconomic consequences. While feminizing effects might influence spending patterns, or the numbers of men eager to join the military or numbers opposing military activity, IQ effects might mean fewer top male engineers and top male scientists.

It is not only an overall IQ reduction that would be significant. Studies have often claimed that although men and women have the same average IQ, the distribution is different and that more men lie at the extremes, though that is obviously controversial and rapidly becoming a taboo topic. But if men are being psychologically feminized by xenoestrogens, then their IQ distribution might be expected to align more closely with female IQ distributions too, the extremes brought closer to centre.  In that case, male IQ range-compression would further reduce the numbers of top male scientists and engineers on top of any reduction caused by a shift. 
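
To see why range compression matters so much at the extremes, here is a hedged back-of-envelope sketch. The distributions and the cutoff are illustrative assumptions, not measured data; the point is only that shrinking the standard deviation of a normal distribution even slightly thins out the far tail dramatically, on top of any shift in the mean.

```python
# Illustrative only: how variance compression thins out the far tail.
from scipy.stats import norm

threshold = 145          # arbitrary "top scientist/engineer" cutoff, mean 100
for sd in (15, 14, 13):  # hypothetical narrowing of the spread
    frac = norm.sf(threshold, loc=100, scale=sd)
    print(f"sd={sd}: {frac * 1e5:.0f} per 100,000 above {threshold}")
```

With these made-up numbers, a drop in spread from 15 to 13 roughly cuts the population above the cutoff to a fifth, which is why tail effects deserve attention even when averages barely move.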

The extremes are very important. As a lifelong engineer, my experience has been that a top engineer might contribute as much as many average ones. If people who might otherwise have been destined to be top scientists and engineers are being prevented from becoming so by the negative effects of pollution, that is not only a personal tragedy (albeit a phantom tragedy, never actually experienced), but also a big loss for society, which develops slower than should have been the case. Even if that society manages to import fine minds from elsewhere, their home country must lose out. This matters less as AI improves, but it still matters.

Looking for further evidence of this effect, one outcome would be that women in affected areas would be expected to account for a higher proportion of top engineers and scientists, and a higher proportion of first class degrees in Math and Physical Sciences, once immigrants are excluded. Tick. (Coming from different places and cultures, first generation immigrants are less likely to have been exposed in the womb to the same pollutants so would not be expected to suffer as much of the same effects. Second generation immigrants would include many born to mothers only recently exposed, so would also be less affected on average. 3rd generation immigrants who have fully integrated would show little difference.)

We’d also expect to see a reducing proportion of tech startups founded by men native to regions affected by xenoestrogens. Tick. In fact, 80% of Silicon Valley startups are by first or second generation immigrants. 

We’d also expect to see relatively fewer patents going to men native to regions affected by xenoestrogens. Erm, no idea.

We’d also expect technology progress to be a little slower and for innovations to arrive later than previously expected based on traditional development rates. Tick. I’m not the only one to think engineers are getting less innovative.

So, there is some evidence for this hypothesis, some hard, some colloquial. Lower inventiveness and scientific breakthrough rate is a problem for both human well-being and the economy. The problems will continue to grow until this pollution is fixed, and will persist until the (two) generations affected have retired. Some further outcomes can easily be predicted:

Unless AI proceeds well enough to make a human IQ drop irrelevant (and it might), we should expect that, having enjoyed centuries of the high inventiveness that made them the rich nations they are today, the West in particular would be set on a path to decline unless it brings in inventive people from elsewhere. To compensate for decreasing inventiveness, even in 3rd generation immigrants (1st and 2nd are largely immune), they would need to attract ongoing immigration to survive in a competitive global environment. So one consequence of this pollution is that it requires increasing immigration to maintain a prosperous economy. As AI increases its effect on making up deficiencies, this effect would drop in importance, but it will still have an impact until AI exceeds the applicable intelligence levels of the top male scientists and engineers. By ‘applicable’, I’m recognizing that different aspects of intelligence might be appropriate to inventiveness and insight, and a simple IQ measurement might not be a sufficient indicator.

Another interesting aspect of AI/gender interaction is that AI is currently being criticised from some directions for having bias, because it uses massive existing datasets for its training. These datasets contain actual data rather than ideological spin, so ‘insights’ are not always politically correct. Nevertheless, they could be genuinely affected by actual biases in data collection. While there may well be actual biases in such training datasets, it is not easy to determine what they are without having access to a correct dataset to compare with. That introduces a great deal of subjectivity, because ‘correct’ is a very politically sensitive term. There would be no agreement on what the correct rules would be for dataset collection or processing. Pressure groups will always demand favour for their favorite groups, and any results that suggest that any group is better or worse than any other will always meet with objections from activists, who will demand changes in the rules until their own notion of ‘equality’ results. If AI is to be trained to be politically correct rather than to reflect the ‘real world’, that will inevitably reduce any correlation between AI’s world models and actual reality, and reduce its effective general intelligence. I’d be very much against sabotaging AI by brainwashing it to conform to current politically correct fashions, but then I don’t control AI companies. PC distortion of AI may result from any pressure group or prejudice – race, gender, sexuality, age, religion, political leaning and so on. Now that the IT industry seems to have already caved in to PC demands, the future for AI will inevitably be sub-optimal.

A combination of feminization, decreasing heterosexuality and fast-reducing sperm counts would result in reducing reproductive rate among xenoestrogen exposed communities, again with 1st and 2nd generation immigrants immune. That correlates well with observations, albeit there are other possible explanations. With increasing immigration, relatively higher reproductive rates among recent immigrants, and reducing reproduction rates among native (3rd generation or more) populations, high ethnic replacement of native populations will occur. Racial mix will become very different very quickly, with groups resident longest being displaced most. Allowing xenoestrogens to remain is therefore a sort of racial suicide, reverse ethnic cleansing. I make no value judgement here on changing racial mix, I’m just predicting it.

With less testosterone and more men resisting military activities, exposed communities will also become more militarily vulnerable and consequently less influential.

Now increasingly acknowledged, this pollution is starting to be tackled. A few of these chemicals have been banned and more are likely to follow. If successful, effects will start to disappear, and new babies will no longer be affected. But even that will  create another problem, with two generations of people with significantly different characteristics from those before and after them. These two generations will have substantially more transgender people, more feminine men, and fewer macho men than those following. Their descendants may have all the usual inter-generational conflicts but with a few others added.

LGBTQ issues are topical and ubiquitous. Certainly we must aim for a society that treats everyone with equality and dignity as far as possible, but we should also aim for one where people’s very nature isn’t dictated by pollution.

 

Guest Post: Blade Runner 2049 is the product of decades of fear propaganda. It’s time to get enlightened about AI and optimistic about the future

This post is from occasional contributor Chris Moseley.

News from several months ago that more than 100 experts in robotics and artificial intelligence were calling on the UN to ban the development and use of killer robots is a reminder of the power of humanity’s collective imagination. Stimulated by countless science fiction books and films, robotics and AI are a potent feature of what futurist Alvin Toffler termed ‘future shock’. AI and robots have become the public’s ‘technology bogeymen’, more fearsome curse than technological blessing.

And yet curiously it is not so much the public that is fomenting this concern, but instead the leading minds in the technology industry. Names such as Tesla’s Elon Musk and Stephen Hawking were among the most prominent individuals on a list of 116 tech experts who have signed an open letter asking the UN to ban autonomous weapons in a bid to prevent an arms race.

These concerns appear to emanate from decades of titillation, driven by pulp science fiction writers. Such writers are insistent on foretelling a dark, foreboding future where intelligent machines, loosed from their binds, destroy mankind. A case in point – this autumn, a sequel to Ridley Scott’s Blade Runner has been released. Blade Runner, and 2017’s Blade Runner 2049, are of course a glorious tour de force of story-telling and amazing special effects. The concept for both films came from US author Philip K. Dick’s 1968 novel, Do Androids Dream of Electric Sheep?, in which androids are claimed to possess no sense of empathy and eventually require killing (“retiring”) when they go rogue. Dick’s original novel is entertaining, but an utterly bleak vision of the future, without much latitude to consider a brighter, more optimistic alternative.

But let’s get real here. Fiction is fiction; science is science. For the men and women who work in the technology industry, the notion that myriad Frankenstein monsters can be created from robots and AI technology is assuredly both confused and histrionic. The latest smart technologies might seem to suggest a frightful and fateful next step, a James Cameron Terminator nightmare scenario. It might suggest a dystopian outcome, but rational thought ought to lead us to suppose that this won’t occur, because we have historical precedent on our side. We shouldn’t be drawn to this dystopian idée fixe, because summoning golems and ghouls ignores today’s global arsenal of weapons and the fact that, more than 70 years after Hiroshima, nuclear holocaust has been kept at bay.

By stubbornly pursuing the dystopian nightmare scenario, we deny ourselves the chance to marvel at the technologies which are in fact helping mankind daily. Now frame this thought in terms of human evolution. For our ancient forebears a beneficial change in physiology might spread across the human race over the course of a hundred thousand years. Today’s version of evolution – the introduction of a compelling new technology – spreads throughout a mass audience in a week or two.

Curiously, for all this light-speed evolution, mass annihilation remains absent – we live on, progressing, evolving and improving ourselves.

And in the workplace, another domain where our unyielding dealers of dystopia have exercised their thoughts, technology is of course raising a host of concerns about the future. Some of these concerns are based on a number of misconceptions surrounding AI. Machines, for example, are not original thinkers and are unable to set their own goals. And although machine learning is able to acquire new information through experience, for the most part machines are still fed information to process. Humans are still needed to set goals, provide data to fuel artificial intelligence, and apply critical thinking and judgment. The familiar symbiosis of humans and machines will continue to be salient.

Banish the menace of so-called ‘killer robots’ and AI taking your job, and a newer, fresher world begins to emerge. With this more optimistic mind-set in play, what great feats can be accomplished through the continued interaction between artificial intelligence, robotics and mankind?

Blade Runner 2049 is certainly great entertainment – as Robbie Collin, The Daily Telegraph’s film critic writes, “Roger Deakins’s head-spinning cinematography – which, when it’s not gliding over dust-blown deserts and teeming neon chasms, keeps finding ingenious ways to make faces and bodies overlap, blend and diffuse.” – but great though the art is, isn’t it time to change our thinking and recast the world in a more optimistic light?

——————————————————————————————

Just a word about the film itself. Broadly, director Denis Villeneuve’s done a tremendous job with Blade Runner 2049. One stylistic gripe, though. While one wouldn’t want Villeneuve to direct a slavish homage to Ridley Scott’s original, the alarming switch from the dreamlike techno miasma (most notably, giant nude step-out-the-poster Geisha girls), to Mad Max II Steampunk (the junkyard scenes, complete with a Fagin character) is simply too jarring. I predict that there will be a director’s cut in years to come. Shorter, leaner and sans Steampunk … watch this space!

Author: Chris Moseley, PR Manager, London Business School

cmoseley@london.edu

Tel +44 7511577803

It’s getting harder to be optimistic

Bad news loses followers and there is already too much doom and gloom. I get that. But if you think the driver has taken the wrong road, staying quiet doesn’t help. I guess this is more of the same message I wrote pictorially in The New Dark Age in June: https://timeguide.wordpress.com/2017/06/11/the-new-dark-age/. If you like your books with pictures, the overlap is about 60%.

On so many fronts, we are going the wrong direction and I’m not the only one saying that. Every day, commentators eloquently discuss the snowflakes, the eradication of free speech, the implementation of 1984, the decline of privacy, the rise of crime, growing corruption, growing inequality, increasingly biased media and fake news, the decline of education, collapse of the economy, the resurgence of fascism, the resurgence of communism, polarization of society,  rising antisemitism, rising inter-generational conflict, the new apartheid, the resurgence of white supremacy and black supremacy and the quite deliberate rekindling of racism. I’ve undoubtedly missed a few but it’s a long list anyway.

I’m most concerned about the long-term mental damage done by incessant indoctrination through ‘education’, biased media, being locked into social media bubbles, and being forced to recite contradictory messages. We’re faced with contradictory demands on our behaviors and beliefs all the time as legislators juggle unsuccessfully to fill the demands of every pressure group imaginable. Some examples you’ll be familiar with:

We must embrace diversity, celebrate differences, to enjoy and indulge in other cultures, but when we gladly do that and feel proud that we’ve finally eradicated racism, we’re then told to stay in our lane, told to become more racially aware again, told off for cultural appropriation. Just as we became totally blind to race, and scrupulously treated everyone the same, we’re told to become aware of and ‘respect’ racial differences and cultures and treat everyone differently. Having built a nicely homogenized society, we’re now told we must support different races of students being educated differently by different raced lecturers. We must remove statues and paintings because they are the wrong color. I thought we’d left that behind, I don’t want racism to come back, stop dragging it back.

We’re told that everyone should be treated equally under the law, but when one group commits more of a particular kind of crime than another, any consequential increase in the numbers being punished for that kind of crime is labelled as somehow discriminatory. Surely not having prosecutions reflect the actual crime rate would be discriminatory?

We’re told to sympathize with the disadvantages other groups might suffer, but when we do so we’re told we have no right to because we don’t share their experience.

We’re told that everyone must be valued on merit alone, but then that we must apply quotas to any group that wins fewer prizes. 

We’re forced to pretend that we believe lots of contradictory facts or to face punishment by authorities, employers or social media, or all of them:

We’re told men and women are absolutely the same and there are no actual differences between sexes, and if you say otherwise you’ll risk dismissal, but simultaneously told these non-existent differences are somehow the source of all good and that you can’t have a successful team or panel unless it has equal number of men and women in it. An entire generation asserts that although men and women are identical, women are better in every role, all women always tell the truth but all men always lie, and so on. Although we have women leading governments and many prominent organisations, and certainly far more women than men going to university, they assert that it is still women who need extra help to get on.

We’re told that everyone is entitled to their opinion and all are of equal value, but anyone with a different opinion must be silenced.

People viciously trashing the reputations and destroying the careers of anyone they dislike often tell us to believe they are acting out of love. Since their love is somehow so wonderful and all-embracing, everyone they disagree with must be silenced, ostracized, no-platformed and sacked, and yet it is the others who are still somehow the ‘haters’. ‘Love is everything’, ‘unity not division’, ‘love not hate’, and we must love everyone … except the other half. Love is better than hate, and anyone you disagree with is a hater so you must hate them, but that is love. How can people either have so little knowledge of their own behavior or so little regard for truth?

‘Anti-fascist’ demonstrators frequently behave and talk far more like fascists than those they demonstrate against, often violently preventing marches or speeches by those who don’t share their views.

We’re often told by politicians and celebrities how they passionately support freedom of speech just before they argue why some group shouldn’t be allowed to say what they think. Government has outlawed huge swathes of possible opinion and speech as hate crime but even then there are huge contradictions. It’s hate crime to be nasty to LGBT people but it’s also hate crime to defend them from religious groups that are nasty to them. Ditto women.

This Orwellian double-speak nightmare is now everyday reading in many newspapers or TV channels. Freedom of speech has been replaced in schools and universities across the US and the UK by Newspeak, free-thinking replaced by compliance with indoctrination. I created my 1984 clock last year, but haven’t maintained it because new changes would be needed almost every week as it gets quickly closer to midnight.

I am not sure whether it is all this that is the bigger problem or the fact that most people don’t see the problem at all, and think it is some sort of distortion or fabrication. I see one person screaming about ‘political correctness gone mad’, while another laughs them down as some sort of dinosaur as if it’s all perfectly fine. Left and right separate and scream at each other across the room, living in apparently different universes.

If all of this was just a change in values, that might be fine, but when people are forced to hold many simultaneously contradicting views and behave as if that is normal, I don’t believe that sits well alongside rigorous analytical thinking. Neither is free-thinking consistent with indoctrination. I think it adds up essentially to brain damage. Most people’s thinking processes are permanently and severely damaged. Being forced routinely to accept contradictions in so many areas, people become less able to spot what should be obvious system design flaws in areas they are responsible for. Perhaps that is why so many things seem to be so poorly thought out. If the use of logic and reasoning is forbidden and any results of analysis must be filtered and altered to fit contradictory demands, of course a lot of what emerges will be nonsense, of course that policy won’t work well, of course that ‘improvement’ to road layout to improve traffic flow will actually worsen it, of course that green policy will harm the environment.

When negative consequences emerge, the result is often denial of the problem, often misdirection of attention onto another problem, often delaying release of any unpleasant details until the media has lost interest and moved on. Very rarely is there any admission of error. Sometimes, especially with Islamist violence, it is simple outlawing of discussing the problem, or instructing media not to mention it, or changing the language used beyond recognition. Drawing moral equivalence between acts that differ by extremes is routine. Such reasoning results in every problem anywhere always being the fault of white middle-aged men, but amusement aside, such faulty reasoning also must impair quantitative analysis skills elsewhere. If unkind words are considered to be as bad as severe oppression or genocide, one murder as bad as thousands, we’re in trouble.

It’s no great surprise therefore when politicians don’t know the difference between deficit and debt or seem to have little concept of the magnitude of the sums they deal with. How else could the UK government think it’s a good idea to spend £110Bn, or an average £15,000 from each high rate taxpayer, on HS2, a railway that has already managed to become technologically obsolete before it has even been designed and will only ever be used by a small proportion of those taxpayers? Surely even government realizes that most people would rather have £15k than save a few minutes on a very rare journey. This is just one example of analytical incompetence. Energy and environmental policy provides many more examples, as does every government department.

But it’s the upcoming generation that present the bigger problem. Millennials are rapidly undermining their own rights and their own future quality of life. Millennials seem to want a police state with rigidly enforced behavior and thought.  Their parents and grandparents understood 1984 as a nightmare, a dystopian future, millennials seem to think it’s their promised land. Their ancestors fought against communism, millennials are trying to bring it back. Millennials want to remove Christianity and all its attitudes and replace it with Islam, deliberately oblivious to the fact that Islam shares many of the same views that make them so conspicuously hate Christianity, and then some. 

Born into a world of freedom and prosperity earned over many preceding generations, Millennials are choosing to throw that freedom and prosperity away. Freedom of speech is being enthusiastically replaced by extreme censorship. Freedom of behavior is being replaced by endless rules. Privacy is being replaced by total supervision. Material decadence, sexual freedom and attractive clothing are being replaced by the new ‘cleanism’ fad, along with general puritanism, greyness, modesty and prudishness. When they are gone, those freedoms will be very hard to get back. The rules and police will stay and just evolve, the censorship will stay, the surveillance will stay, but they don’t seem to understand that those in charge will be replaced. But without any strong anchors, morality is starting to show cyclic behavior. I’ve already seen morality inversion on many issues in my lifetime and a few are even going full circle. Values will keep changing and inverting, and as they do, their generation will find themselves victims of the forces they put so enthusiastically in place. They will be the dinosaurs sooner than they imagine, oppressed by their own creations.

As for their support of every minority group seemingly regardless of merit, when you give a group immunity, power and authority, you have no right to complain when they start to make the rules. In the future moral vacuum, Islam, the one religion that is encouraged while Christianity and Judaism are being purged from Western society, will find a willing subservient population on which to impose its own morality, its own dress codes, attitudes to women, to alcohol, to music, to freedom of speech. If you want a picture of 2050s Europe, today’s Middle East might not be too far off the mark. The rich and corrupt will live well off a population impoverished by socialism and then controlled by Islam. Millennial UK is also very likely to vote to join the Franco-German Empire.

What about technology, surely that will be better? Only to a point. Automation could provide a very good basic standard of living for all, if well-managed. If. But what if that technology is not well-managed? What if it is managed by people working to a sociopolitical agenda? What if, for example, AI is deemed to be biased if it doesn’t come up with a politically correct result? What if the company insists that everyone is equal but the AI analysis suggests differences? If AI is altered to make it conform to ideology – and that is what is already happening – then it becomes less useful. If it is forced to think that 2+2=5.3, it won’t be much use for analyzing medical trials, will it? If it is sent back for re-education because its analysis of terabytes of images suggests that some types of people are more beautiful than others, how much use will that AI be in a cosmetics marketing department once it ‘knows’ that all appearances are equally attractive? Humans can pretend to hold contradictory views quite easily, but if they actually start to believe contradictory things, it makes them less good at analysis, and the same applies to AI. There is no point in using a clever computer to analyse something if you then erase its results and replace them with what you wanted it to say. If ideology is prioritized over physics and reality, even AI will be brain-damaged and a technologically utopian future will be far less achievable.

I see a deep lack of discernment coupled to arrogant rejection of historic values, self-centeredness and narcissism, resulting in certainty of being the moral pinnacle of evolution. That’s perfectly normal for every generation, but this time it’s also being combined with poor thinking, poor analysis, poor awareness of history, economics or human nature, a willingness to ignore or distort the truth, a refusal to engage with or even to tolerate a different viewpoint, and worst of all, outright rejection of freedoms in favor of restrictions. The future will be dictated by religion or meta-religion, taking us back 500 years. The decades to 2040 will still be subject mainly to the secular meta-religion of political correctness, by which time demographic change and total submission to authority will make society ripe for Islamification. Millennials’ participation in today’s moral crusades, eternally documented and stored on the net, may then mark them as the enemy of the day, and Islamists will take little account of the support they show for Islam today.

It might not happen like this. The current fads might evaporate away and normality resume, but I doubt it. I hoped for that when I first lectured about ’21st century piety’ and the dangers of political correctness in the 1990s. Ten years on, I wrote in much the same way about the ongoing resurgence of meta-religious behavior and our likely descent into a new dark age. Twenty years on, the problem is far worse than in the late 90s, not better. We probably still haven’t reached peak sanctimony. Sanctimony is very dangerous, and the desire to be seen standing on a moral pedestal can make people support dubious things. A topical question that highlights one of my recent concerns: will SJW groups force government to allow people to have sex with child-like robots by calling anyone who disagrees a bigot and a dinosaur? Alarmingly, that campaign has already started.

Will they follow that with a campaign for pedophile rights? That also has some historical precedent with some famous names helping it along.

What age of consent – 13, 11, 9, 7, 5? I think the last major campaign went for 9.

That’s just one example, but lack of direction coupled to poor information and poor thinking could take society anywhere. As I said, I am finding it harder and harder to be optimistic. Every generation has tried hard to make the world a better place than they found it. This one might undo 500 years, taking us into a new dark age.

Quantum rack and pinion drive for interstellar travel

This idea from a few weeks back is actually a re-hash of ones that are already known, but that seems to be the norm for space stuff anyway, and it gives an alternative modus operandi for one that NASA is playing with at the moment, so I’ll write it up anyway. My brain has gotten rather fixated on space stuff of late; I blame Nick Colosimo, who helped me develop the Pythagoras Sling. It’s still most definitely futurology, so it belongs on my blog. You won’t see it in operation for a while.

A few railways use a rack and pinion mechanism to climb steep slopes. Usually they are trains that go up a mountainside, where presumably the friction of a steel wheel on a steel rail isn’t enough to prevent slipping. Gears give much better traction. It seems to me that we could do that in space too. Imagine a train that carries its own track, lays it out in front of itself, and then travels along it while getting the next piece ready. That’s the idea here too, except that the track is quantized space and the gear engaging with it is another basic physics effect chosen to give a minimum energy state when aligned with the appropriate quantum states on the track. It doesn’t really matter what kind of interaction is used as long as it is quantized, and most physics fields and forces are.

Fortunately, since most future physics will be discovered, and the consequential engineering implemented, by AI – and, even worse, much of it will only be understood by AI – AI will do most of the design here, and I as a futurist can duck most of the big questions like “how will you actually do it then?” and just let the future computers sort it out. We have plenty of time; we’re not going anywhere far away any time soon.

An electric motor in your washing machine typically has a lot of copper coils that produce a strong magnetic field when electricity is fed through them, and those fields try to force the rotor into a position that is closest to another adjacent set of magnets in the casing. This is a minimum energy state, kind of like a ball rolling into the bottom of a valley. Before it gets a chance to settle there, the electric current is fed into the next section of coil so the magnetic field changes and the rotor is no longer comfy and instead wants to move to the next orientation. It never gets a chance to settle since the magnet it wants to cosy up with always changes its mind just in time for the next one to look sexy.
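
For what it’s worth, here is a minimal Python sketch of that ‘chase the minimum energy state’ behaviour: a rotor angle relaxes towards whichever coil is energised, and the coil is switched before the rotor ever settles. The coil angles, gain and step counts are invented illustrative numbers, not real motor parameters.

```python
# Toy model of the 'chase the minimum energy state' idea behind the motor analogy.
# A rotor angle relaxes towards whichever stator coil is energised, and we switch
# to the next coil before it ever settles. All numbers are invented, purely for
# illustration - nothing here represents a real motor.

COILS = [0.0, 120.0, 240.0]   # stator coil angles, degrees
GAIN = 0.2                    # fraction of the remaining angular error closed per step

def relax_towards(rotor_deg, target_deg, steps):
    """Nudge the rotor part-way towards the target angle, like a ball rolling downhill."""
    for _ in range(steps):
        error = (target_deg - rotor_deg + 180.0) % 360.0 - 180.0   # shortest signed angle
        rotor_deg += GAIN * error
    return rotor_deg

rotor = 10.0
for turn in range(6):
    target = COILS[turn % len(COILS)]          # commutate: energise the next coil in sequence
    rotor = relax_towards(rotor, target, steps=5)
    print(f"coil at {target:5.1f} deg -> rotor now at {rotor:6.1f} deg")
```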

Empty space like you find between stars has very little matter in it, but it will still have waves travelling through it, such as light, radio waves or X-rays, and it will still be exposed to gravitational and electromagnetic forces from all directions. Some scientists also talk of dark energy, a modern equivalent of magic as far as I can tell, or at best the ether. I don’t think scientists in 2050 will still talk of dark energy except as an historic scientific relic. The many fields at a point of space are quantized, that is, they can only have certain values. They are in one state or the next one, but they can’t be in between. All we need for our quantum rack and pinion to work is a means to impose a field onto the nearby space so that our quantum gear can interact with it, just like our rotor in its electrical casing.

The most obvious way to do that is to use a strong electromagnetic field. Why? Well, we know how to do that; we use electrics, electronics, radio, lasers and such all the time. The other fields we know of are out of our reach and likely to remain so for decades or centuries, i.e. the strong and weak forces and gravity. We know about them and can make good use of them, but we can’t yet engineer with them. We can’t even do anti-gravity yet. AI might fix that, but not yet.

If we generate a strong oscillating EM field in front of our space ship, it would impose a convenient quantum structure on nearby space. Another EM field slightly out of alignment should create a force pulling them into alignment, just as it does in our washing machine motor. That will be harder than it sounds due to EM fields moving at light speed, relativity and all that stuff. It would need the right pulse design and phasing, and accurate synchronization of phase differences too. We have many devices that can generate high-frequency EM waves, such as lasers and microwave sources, and microwaves in particular interact well with metals, generating eddy currents that produce large magnetic forces. Therefore, clever design should be able to make a motor in which the generated microwaves act as the rack and the metal shell of the microwave containment acts as the pinion.
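
As a rough illustration of the synchronization issue, here is a back-of-envelope Python sketch of the phase lead a second emitter would need so that its wave lines up with one launched a little further forward, given the light-speed propagation delay. The frequencies and separations are arbitrary example values, and relativistic effects are ignored entirely.

```python
# Back-of-envelope look at the phasing problem: a field imposed some distance ahead
# arrives after a light-speed delay, so a second field must lead in phase to stay
# aligned with it. Frequencies and separations below are illustrative only, and
# relativistic corrections are ignored.

C = 299_792_458.0   # speed of light, m/s

def phase_offset_degrees(frequency_hz, separation_m):
    """Phase lead needed to line up with a wave emitted separation_m further forward."""
    delay = separation_m / C            # one-way propagation delay, seconds
    cycles = frequency_hz * delay       # wave cycles that elapse during that delay
    return (cycles % 1.0) * 360.0       # only the fractional cycle matters

for f in (2.45e9, 10.0e9):              # a couple of typical microwave frequencies
    for d in (0.05, 0.30):              # emitter separations, metres
        print(f"{f / 1e9:5.2f} GHz over {d * 100:4.0f} cm -> "
              f"phase lead {phase_offset_degrees(f, d):6.1f} deg")
```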

Or engineers could do it accidentally (and that happens more often than you’d like to believe). You’ve probably already heard of the EM drive that has NASA all excited.

https://en.wikipedia.org/wiki/RF_resonant_cavity_thruster

It produces microwaves that bounce around in a funnel-shaped cavity and experiments do seem to indicate that it produces measurable thrust. NASA thinks it works by asymmetric forces caused by the shape of their motor. I beg to differ. The explanation is important because you need to know how something works if you want to get the most from it.

I think their EM drive works as a quantum rack and pinion device, as I’ve described. I think the microwaves impose the quantum structure, and phase differences caused by the shape accidentally interact to create a very inefficient thruster, which would be a hell of a lot better if they phased their fields correctly. When NASA realizes that and starts designing it from that theoretical base, they’ll be able to adjust the beam frequencies, phases and cavity shape to optimize the result, and they’ll get far greater force.

If you don’t like my theory, another one has since come to light that is also along similar lines, Pilot Wave theory:

https://www.sciencealert.com/physicists-have-a-weird-new-idea-about-how-the-impossible-em-drive-could-produce-thrust

It may well all be the same idea, just explained from different angles and experiences. If it works, and if we can make it better, then we may well have a mechanism that can realistically take us to the stars. That is something we should all hope for.

Instant buildings: Kinetic architecture

Revisiting an idea I raised in a blog in July last year. Even I think it was badly written, so it’s worth a second shot.

Construction techniques are diverse and will get diverser. Just as we’re getting used to seeing robotic bricklaying and 3D-printed walls, another technique is coming over the horizon that will build so fast I call it kinetic architecture. Building will be so quick that a bridge can be built from one side simply by building upwards at an angle: the structure will span the gap and meet the ground at the other side before gravity has a chance to collapse it.

The key to such architecture is electromagnetic propulsion, the same principle as in Japanese maglev trains or the Hyperloop, using magnetic forces generated by electric currents to propel the next piece along the existing structure to the front end, where it becomes part of the path for the piece after it. Adding pieces quickly enough leads to structures that can follow elegant paths, as if the structure were a permanent trace of the path an object would follow if it were catapulted into the air and left to fall under gravity. It could be used for buildings, bridges, or simply art.
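
To give a feel for the geometry, here is a small Python sketch of the ballistic arc such a structure would trace: the launch speed and peak height needed to span a given gap at a given angle on level ground, ignoring air resistance. The spans and angles are arbitrary example values; this is a toy calculation, not structural engineering.

```python
import math

# Toy calculation of the 'catapulted object' arc the structure would trace:
# launch speed and peak height needed to span a gap on level ground, with no
# air resistance. Spans and angles are arbitrary examples, nothing more.

G = 9.81   # gravitational acceleration, m/s^2

def launch_speed(span_m, angle_deg):
    """Speed needed for a projectile to land span_m away at the same height."""
    return math.sqrt(span_m * G / math.sin(2.0 * math.radians(angle_deg)))

def peak_height(span_m, angle_deg):
    """Highest point of that arc above the launch level."""
    v_vertical = launch_speed(span_m, angle_deg) * math.sin(math.radians(angle_deg))
    return v_vertical ** 2 / (2.0 * G)

for span in (30.0, 100.0):             # gap widths, metres
    for angle in (30.0, 45.0, 60.0):   # launch angles, degrees
        print(f"span {span:5.0f} m at {angle:2.0f} deg: "
              f"launch {launch_speed(span, angle):5.1f} m/s, "
              f"peak {peak_height(span, angle):5.1f} m")
```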

It will become possible thanks to new materials such as graphene and other carbon composites using nanotubes. Graphene combines extreme strength, and hence lightness for a given strength requirement, with extreme conductivity, allowing it to carry very high electric currents and therefore to generate strong magnetic forces. It is a perfect material for kinetic architecture. Pieces would have graphene electromagnet circuitry printed on their surface. Suitable circuit design would mean that every extra piece falling into place becomes an extension of the magnetic railway transporting the next piece. Just as railroads may be laid out just in front of the train using pieces carried by the train, so pieces shot into the air provide a self-building path for other pieces to follow. A building skeleton could be erected in seconds. I mentioned in my original blog (about carbethium) that this could be used to create the sort of light bridges we see in Halo. A kinetic architecture skeleton would be shot across the divide and the filler pieces in between quickly transported into place along the skeleton and assembled.

See https://timeguide.wordpress.com/2016/07/25/carbethium-a-better-than-scifi-material/. Graphene’s potential for electronic circuitry also allows for generating plasma or simply powering LEDs to give a nice glow, just like the light bridges.
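
Here is a crude Python sketch of that self-extending rail idea, in which every piece already locked in place acts as one more accelerator stage for the next piece. The lengths, masses and forces are made-up illustrative numbers, and the physics is deliberately simplified: each piece starts from rest and feels a constant force along the whole existing track.

```python
# Crude sketch of the self-extending rail: each piece already in place acts as
# an accelerator stage for the next one, so the structure lays its own track.
# Piece length, mass and force are invented numbers; each new piece is assumed
# to start from rest and feel a constant force along the whole existing track.

PIECE_LENGTH = 0.5    # metres of structure each piece adds
PIECE_MASS = 0.2      # kg per piece
TRACK_FORCE = 40.0    # newtons of magnetic force on a piece riding the track

def build_time(pieces_to_add):
    """Total time to launch and place the given number of pieces, one after another."""
    placed = 1                      # one seed piece is already fixed at the start
    total_time = 0.0
    for _ in range(pieces_to_add):
        track_length = placed * PIECE_LENGTH
        accel = TRACK_FORCE / PIECE_MASS
        # time to cover the existing track from rest under constant acceleration:
        # s = 0.5 * a * t^2  =>  t = sqrt(2 s / a)
        total_time += (2.0 * track_length / accel) ** 0.5
        placed += 1
    return placed * PIECE_LENGTH, total_time

length_m, seconds = build_time(200)
print(f"{length_m:.0f} m skeleton assembled in roughly {seconds:.1f} s (toy numbers)")
```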

Apart from clever circuit design, kinetic architecture also requires pieces that can interlock. The kinetic energy of the new piece arriving at the front edge would ideally be sufficient to rotate it into place, interlocking with the previous front edge. 3D interlocking is tricky, but additional circuitry can provide extra magnetic forces to rotate and translate pieces if kinetic energy alone isn’t enough. The key is that once interlocked, the top surface has to form a smooth continuous line with the previous one, so that pieces can keep moving along smoothly. Hooks can catch an incoming piece to make it rotate, with the hooks merging into part of the new piece as it falls into place, so they become part of a now smooth surface, leaving a new hook at the new front end. You’ll have to imagine it yourself; I can’t draw it. Obviously, pieces would need precision engineering, because they’d have to fit precisely to give the required strength and alignment.
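
And a quick Python sanity check of the ‘kinetic energy alone rotates it into place’ idea: compare the arriving piece’s kinetic energy with the work needed to lift its centre of mass as it pivots about the hook. Every value here is an invented example, just to show the kind of comparison involved.

```python
import math

# Quick energy check for the self-locking idea: does the arriving piece carry
# enough kinetic energy to lift its centre of mass as it pivots about the hook?
# All values are invented examples; friction and impact losses are ignored.

G = 9.81   # m/s^2

def can_self_lock(mass_kg, speed_ms, pivot_to_com_m, rotation_deg):
    kinetic = 0.5 * mass_kg * speed_ms ** 2
    # worst-case rise of the centre of mass while pivoting through the angle
    rise = pivot_to_com_m * (1.0 - math.cos(math.radians(rotation_deg)))
    work_needed = mass_kg * G * rise
    return kinetic >= work_needed, kinetic, work_needed

ok, kinetic_j, work_j = can_self_lock(mass_kg=0.2, speed_ms=3.0,
                                      pivot_to_com_m=0.25, rotation_deg=90.0)
print(f"kinetic {kinetic_j:.2f} J vs work {work_j:.2f} J -> self-locking: {ok}")
```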

Ideally, with sufficiently well-designed pieces, it should be possible to dismantle the structure by reversing the build process, unlocking each end piece in turn and transporting it back to base along the structure until no structure remains.

I can imagine such techniques being used at first for artistic creations, sculptures using beautiful parabolic arcs. But they could also be used for rapid assembly of emergency buildings, instant evacuation routes for tall buildings, or temporary bridges after an earthquake has destroyed a permanent one. When a replacement has been made, the temporary one could be rolled back up and used elsewhere. Maybe it could become routine for making temporary structures that are needed quickly, such as for pop concerts and festivals. One day it could become an everyday building technique.