Category Archives: physics

Optical computing

A few nights ago I was thinking about the optical fibre memories that we were designing in the late 1980s in BT. The idea was simple. You transmit data into an optical fibre, and if the data rate is high you can squeeze lots of data into a manageable length. The propagation delay in fibre is about 5 microseconds per km, so 1000km of fibre at a data rate of 2Gb/s would hold 10Mbits of data per wavelength, and if you could multiplex 2 million wavelengths you’d store 20Tbits of data. You could maintain the data by using a repeater to feed the data arriving at one end back into the other, or modify it at that point simply by changing what you re-transmit. That was all theory then, because the latest ‘hero’ experiments were only just starting to demonstrate the feasibility of such long lengths, such high density WDM and such data rates.
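For anyone who wants to check the sums, here is a quick back-of-envelope sketch in Python using just the figures quoted above; it is arithmetic, not a design.

```python
# Back-of-envelope capacity of a fibre delay-line memory,
# using only the figures quoted above (not a real design).

delay_per_km_s = 5e-6        # propagation delay in fibre, ~5 microseconds per km
fibre_length_km = 1000       # length of the fibre loop
bit_rate_bps = 2e9           # 2 Gb/s per wavelength
num_wavelengths = 2_000_000  # hypothetical WDM channel count

round_trip_s = delay_per_km_s * fibre_length_km      # 5 ms in flight
bits_per_wavelength = bit_rate_bps * round_trip_s    # 10 Mbit stored per wavelength
total_bits = bits_per_wavelength * num_wavelengths   # 20 Tbit in total

print(f"Data in flight per wavelength: {bits_per_wavelength/1e6:.0f} Mbit")
print(f"Total storage across all wavelengths: {total_bits/1e12:.0f} Tbit")
```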

Nowadays, that’s ancient history of course, but we also have many new types of fibre, such as hollow fibre with variously shaped pores and various dopings to allow a range of effects. And that’s where using it for computing comes in.

If optical fibre is designed for this purpose, with an optimal, variable refractive index designed to facilitate and maximise non-linear effects, then the photons in one data stream on one wavelength could have a strong enough effect on the photons in another stream to be used for computational interaction. Computers don’t have to be digital of course, so the effects don’t have to be huge. Analog computing has many uses, and analog interactions could certainly work, digital ones might, and hybrid digital/analog computing may also be feasible. Then it gets fun!

Some of the data streams could be programs. Around that time, I was designing protocols with smart packets that contained executable code, as well as other packets that could hold analog or digital data or any mix. We later called the smart packets ANTs – autonomous network telephers, a contrived term if ever there was one, but we badly wanted to call them ants. They would scurry around the network doing a wide range of jobs, using a range of biomimetic and basic physics techniques to work like ant colonies and achieve complex tasks using simple means.

If some of these smart packets or ANTs are running along a fibre, changing its properties as they go so they can interact with other data being transmitted alongside, then ANTs can interact with one another and with any stored data. ANTs could also move forwards or backwards along the fibre by using ‘sidings’ or physical shortcuts, since they can route themselves or each other. Data produced or changed by the interactions could be digital or analog and still work fine, carried on the smart packet structure.

(If you’re interested, my protocol was called UNICORN, Universal Carrier for an Optical Residential Network, and used the same architectural principles as my previous Addressed Time Slice invention: compressing analog data by a few percent to fit into a packet with a digital address and header, or allowing any digital data rate or structure in a payload while keeping the same header specs for easy routing. That system was invented (in 1988) for the late 1990s, when the basic domestic broadband rate should have been 625Mbit/s or more, but we expected to be at 2Gbit/s or even 20Gbit/s soon after that in the early 2000s, and the benefit was that we wouldn’t have to change the network switching because the header overheads would still only be a few percent of total time. None of that happened because of government interference in telecoms industry regulation that strongly disincentivised its development, and even today 625Mbit/s ‘basic rate’ access is still a dream, let alone 20Gbit/s.)
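I’m not going to reproduce the original UNICORN field layout here, but purely to illustrate the idea of a small fixed digital header for routing plus a payload that carries either lightly compressed analog samples or any digital stream, here is a hypothetical sketch in Python. The field names and sizes are illustrative guesses of mine, not the 1988 design.

```python
# Hypothetical sketch of an addressed-time-slice style packet: a small fixed
# digital header for routing plus an opaque payload that may hold compressed
# analog samples or digital data at any rate. Field names and sizes are
# illustrative guesses, not the original UNICORN specification.

from dataclasses import dataclass

HEADER_BYTES = 8      # assumed header size
SLICE_BYTES = 256     # assumed payload (time slice) size

@dataclass
class Packet:
    address: int       # destination address used by switches
    payload_type: int  # e.g. 0 = analog samples, 1 = digital stream
    payload: bytes     # opaque time slice, up to SLICE_BYTES

    def header_overhead(self) -> float:
        """Fraction of the packet taken up by the header."""
        return HEADER_BYTES / (HEADER_BYTES + len(self.payload))

pkt = Packet(address=0x2A, payload_type=0, payload=bytes(SLICE_BYTES))
print(f"Header overhead: {pkt.header_overhead():.1%}")   # ~3% for these sizes
```

The point of the sketch is only that the header overhead stays at a few percent regardless of what the payload carries, which is why the switching wouldn’t need to change as rates climbed.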

Such a system would be feasible. Shortcuts and sidings are easy to arrange. The protocols would work fine. Non-linear effects are already well known and diverse. If it were only used for digital computing, it would have little advantage over conventional computers, and with data stored on long fibre lengths, external interactions would be limited and latency long. However, it does present a range of potential uses where external sensors interact directly with data streams and ANTs to accomplish some of the tasks associated with modern AI. It ought to be possible to use these techniques to build the adaptive analog neural networks that we’ve known are the best hope of achieving strong AI since Hans Moravec’s insight, coincidentally also around that time. The non-linear effects even enable ideal mechanisms for implementing emotions, biasing the computation in particular directions via the intensity of certain wavelengths of light, in much the same way as chemical hormones and neurotransmitters interact with our own neurons. Implementing up to 2 million different emotions at once is feasible.

So there’s a whole mine full of architectures, tools and techniques waiting to be explored and mined by smart young minds in the IT industry, using custom non-linear optical fibres for optical AI.

Quantum rack and pinion drive for interstellar travel

This idea from a few weeks back is actually a re-hash of ones that are already known, but that seems the norm for space stuff anyway, and it gives an alternative modus operandi for one that NASA is playing with at the moment, so I’ll write it anyway. My brain has become rather fixated on space stuff of late; I blame Nick Colosimo, who helped me develop the Pythagoras Sling. It’s still most definitely futurology, so it belongs on my blog. You won’t see it in operation for a while.

A few railways use a rack and pinion mechanism to climb steep slopes. Usually they are trains that go up a mountainside, where presumably the friction of a steel wheel on a steel rail isn’t enough to prevent slipping. Gears give much better traction. It seems to me that we could do that in space too. Imagine if such a train carries the track, lays it out in front of it, and then travels along it while getting the next piece ready. That’s the idea here too, except that the track is quantized space and the gear engaging with it is another basic physics effect, chosen to give a minimum energy state when aligned with the appropriate quantum states on the track. It doesn’t really matter what kind of interaction is used as long as it is quantized, and most physics fields and forces are.

Fortunately, since most future physics will be discovered and consequential engineering implemented by AI, and even worse, much will only be understood by AI, AI will do most of the design here and I as a futurist can duck most of the big questions like “how will you actually do it then?” and just let the future computers sort it out. We have plenty of time, we’re not going anywhere far away any time soon.

An electric motor in your washing machine typically has a lot of copper coils that produce a strong magnetic field when electricity is fed through them, and those fields try to force the rotor into the position closest to another adjacent set of magnets in the casing. This is a minimum energy state, kind of like a ball rolling into the bottom of a valley. Before it gets a chance to settle there, the electric current is fed into the next section of coil, so the magnetic field changes and the rotor is no longer comfy and instead wants to move to the next orientation. It never gets a chance to settle, since the magnet it wants to cosy up with always changes its mind just in time for the next one to look sexy.
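If that “never let it settle” idea isn’t clear, here is a toy simulation with made-up numbers (nothing to do with real motor engineering): the torque always pulls the rotor towards the currently energised field angle, and the field steps on just before the rotor catches up.

```python
# Toy illustration of commutation: torque pulls the rotor towards the
# energised field angle, and the field is stepped on before the rotor
# can settle, so it keeps chasing. Purely illustrative numbers.

import math

rotor_angle = 0.0
field_angle = math.radians(30)   # first energised coil position
step = math.radians(30)          # how far the field jumps each commutation
gain, dt = 5.0, 0.01             # made-up torque gain and time step

for _ in range(300):
    # torque towards the minimum-energy (aligned) position
    rotor_angle += gain * math.sin(field_angle - rotor_angle) * dt
    # commutate just before the rotor gets comfy
    if abs(field_angle - rotor_angle) < math.radians(5):
        field_angle += step

print(f"Rotor turned {math.degrees(rotor_angle):.0f} degrees in {300*dt:.0f} s")
```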

Empty space like you find between stars has very little matter in it, but it will still have waves travelling through it, such as light, radio waves, or x-rays, and it will still be exposed to gravitational and electromagnetic forces from all directions. Some scientists also talk of dark energy, a modern equivalent of magic as far as I can tell, or at best the ether. I don’t think scientists in 2050 will still talk of dark energy except as an historic scientific relic. The many fields at a point of space are quantized, that is, they can only have certain values. They are in one state or the next one but they can’t be in between. All we need for our quantum rack and pinion to work is a means to impose a field onto the nearby space so that our quantum gear can interact with it just like our rotor in its electrical casing.

The most obvious way to do that is to use a strong electromagnetic field. Why? Well, we know how to do that; we use electrics, electronics, radio and lasers all the time. The other fields we know of are out of our reach and likely to remain so for decades or centuries, i.e. the strong and weak forces and gravity. We know about them and can make good use of them, but we can’t yet engineer with them. We can’t even do anti-gravity yet. AI might fix that, but not yet.

If we generate a strong oscillating EM field in front of our space ship, it would impose a convenient quantum structure on nearby space. Another EM field slightly out of alignment should create a force pulling the two into alignment, just like it does for our washing machine motor. That will be harder than it sounds due to EM fields moving at light speed, relativity and all that stuff. It would need the right pulse design and phasing, and accurate synchronization of phase differences too. We have many devices that can generate high frequency EM waves, such as lasers and microwave sources, and microwaves particularly interact well with metals, generating eddy currents that produce large magnetic forces. Clever design should therefore be able to make a motor in which the generated microwaves act as the rack and the metal shell of the microwave cavity acts as the pinion.

Or engineers could do it accidentally (and that happens more often than you’d like to believe). You’ve probably already heard of the EM drive that has NASA all excited.

https://en.wikipedia.org/wiki/RF_resonant_cavity_thruster

It produces microwaves that bounce around in a funnel-shaped cavity and experiments do seem to indicate that it produces measurable thrust. NASA thinks it works by asymmetric forces caused by the shape of their motor. I beg to differ. The explanation is important because you need to know how something works if you want to get the most from it.

I think their EM drive works as a quantum rack and pinion device as I’ve described. I think the microwaves impose the quantum structure, and phase differences caused by the shape accidentally interact to create a very inefficient thruster, which would be a hell of a lot better if they phased their fields correctly. When NASA realizes that, and starts designing it with that theoretical base, they’ll be able to adjust the beam frequencies and phases and the shape of the cavity to optimize the result, and they’ll get far greater force.

If you don’t like my theory, another one has since come to light that is also along similar lines, Pilot Wave theory:

https://www.sciencealert.com/physicists-have-a-weird-new-idea-about-how-the-impossible-em-drive-could-produce-thrust

It may well all be the same idea, just explained from different angles and experiences. If it works, and if we can make it better, then we may well have a mechanism that can realistically take us to the stars. That is something we should all hope for.

Some anti-futurology on The Age of the Universe

Confession: although I am a futurologist and look forwards most of the time, I also enjoy pre-history. In fact, my father is Dr Gordon Pearson, who won the Pomerance Award for his contributions to archaeology, producing a calibration curve for C14 proportion against the age of a sample, thereby facilitating many other researchers’ work on ancient civilization going back 50,000 years, and who was one of the first to measure accurately the correlation between sunspot activity and climate. I inherited his time-traveler gene and conventional generational inversion was then applied.

I wrote a short piece a month or two back on the acceleration of the universe

Explaining accelerating universe expansion without dark energy

I have been irritated by the bad science that has jumped illogically to the conclusion of dark matter and dark energy as the reason for the acceleration. Occam’s razor needed to be used, so I took it out. I noted that as the universe expands and galaxies move further away from each other, the Higgs particle flux must fall, so the mass of the galaxies must fall, so their speed must increase to conserve energy. Then I moved on to work that pays my bills. So I missed a bit. If my theory above is correct (and in that regard, I should note that I have forgotten much of the physics I learned at university, and some of the rest is now wrong anyway), then it must also be true that the universe was accelerating much more slowly in the past when the galaxies were close together, and their mass must have been much higher.

So assume, as I now do, that when we observe red shifts today we are moving faster than before due to that ongoing acceleration, so we are measuring higher speeds than those light-emitting galaxies had when they emitted that light. If you also assume relatively constant mass, as seems to be the standard assumption, then the earlier speeds must have been far lower, so we must be extrapolating backwards to the beginning along too steep a curve. Therefore the estimate for the age of the universe of 13.82 billion years is too low. I no longer have the maths skills or physics knowledge to calculate an age that takes my theory into account, but engineer’s intuition suggests it would be 15 billion years or possibly even more.

As I’ve cautioned, perhaps you should take my theory with a pinch of salt. There is much I don’t understand. But I do understand enough to know that combinations of group-think and intense focus sometimes mean that scientists overlook gorillas standing right in front of them as they concentrate on their current equations. Unlikely as it is, I might possibly be right.

Just occasionally, everyone else IS wrong.

Trump’s still an idiot but he was right to dump Paris

Climate change has always been in play. It is in play now. Many scientists think that the rise in global temperatures towards the end of the 1990s was largely due to human factors, namely CO2 emissions. Some of it undoubtedly is, but almost certainly nowhere near as much as these scientists believe. Because they put far too much emphasis on CO2 as the driving factor, almost as a meta-religion, they downplay or refuse to acknowledge other important factors such as long term ocean cycles and solar cycles, and they model forests and soil-air interchange poorly. Because they rely on this one-factor-fits-all explanation for the changing climate, they struggle to explain ‘the pause’, whereby temperatures leveled off even as CO2 levels continued to rise, and can’t explain why post-El-Nino temperatures have now returned to that pause level. In short, their ‘science’ is nothing more than a weak set of theories correlating very poorly with observations.

A good scientist, when confronted with real world observations that conflict with their theory, throws that theory in the bin and comes up with a better one. When a scientist’s comfy and lucrative job depends on their theory being correct, their response may not be to try to do better science that risks their project ending, but to hide facts, adjust and distort them, misrepresent them in graphs, and draw false conclusions from falsified data to try to keep their messages of doom and their models’ predictions sounding plausible. Sadly, that does seem to me and very many other scientists to be what has been happening in so-called climate science. Many high quality scientists in the field have been forced to leave it, and many have had their papers rejected and their reputations attacked. The few brave honest scientists left in the field must put up with constant name-calling by peers whose livelihoods are threatened by honesty. Group-think has become established to the point where anyone not preaching the authorized climate change religion must be subjected to the Spanish Inquisition. Natural self-selection of new recruits into the field from greens and environmentalists means that new members of the field will almost all follow the holy book. It is ironic that the Pope is on the side of these climate alarmists. Climate ‘science’ is simply no longer worthy of the name. ‘Climate change’ is now a meta-religion, and its messages of imminent doom and desperate demands for urgent wealth redistribution have merged almost fully into the political left. The right rejects it, the left accepts it. That isn’t science, it’s just politics.

Those of us outside the field have a hard time finding good science. There are plenty of blogs on both sides making scientific-sounding arguments and showing nice graphs, but it is impossible for a scientist or engineer to look at it over time and not notice a pattern. Over the last decades, ‘climate scientists’ have made apocalyptic predictions in rapid succession, none of which seem ever to actually happen. Almost all of their computer models have consistently greatly overestimated the warming we should have seen by now; we should by now rarely see snow, and there should be no ice left in the Arctic. Sea levels should be far higher than they are too. Arctic ice is slightly below average, much the same as a decade ago. Polar bears are more abundant than they have been for several decades. A couple of years ago we had record ice in the Antarctic. Sea level is still rising at about the same rate as it has for the last few hundred years. Greenland is building more ice mass than ever. Every time there is a strong wind we’re told about climate change, but we rarely see any mention of the fastest drop in temperatures on record after the recent El-Nino, the great polar bear recovery, or the record Antarctic ice when that happened. It is a one way street of doom that hides facts that don’t fit the hymn sheet.

In private industry, at least in companies that aren’t making profits from climate change alarmism or renewable energy, like Elon Musk’s car, solar power and battery companies for example (do you think that might be why he is upset with Trump?), scientists that bad would have lost their jobs many years ago. Most climate scientists work in state-funded institutions or universities, and both tend towards left wing politics of course, so it is not surprising that they have a left wing bias distorting their prejudices and consequently their theories and proposed solutions.

Grants are handed out by politicians, who want to look good and win votes, so are always keen to follow policies that are popular in the media. Very few politicians have any scientific understanding, so they are easily hoodwinked by simple manipulation of graphs whereby trends are always shown with the start point at the beginning of the last upwards incline, and where data is routinely changed to fit the message of doom. Few politicians can understand the science and few challenge why data has been changed or hidden. A strong community of religious followers is happy to eagerly and endlessly repeat fraudulent claims such as that “97% of scientists agree…”, mudslinging at anyone who disagrees.

Even if the doom was all true, Paris was still a very bad idea. Even if CO2 were as bad as claimed, the best response would be to work out realistically how much CO2 is likely to be produced in the future, how fast alternative energy sources could become economic, which ones give the best value per unit of CO2 until we get those economic replacements, and to formulate a sensible plan that maximizes bang per buck, keeping the climate OK while spending at the right times to stay on track at the lowest cost. In my 2007 paper, I pointed out that CO2 will decline anyway once photo-voltaic solar becomes cheap enough, as it will even without any government action at all. I pointed out that it makes far more sense to save our pennies until it is cheaper and then put far more in place far faster, for the same spend, thereby still fixing the problem but at far lower cost. Instead, idiotic governments in Europe and especially the UK (with May now vowing to continue such idiocy) have crippled households with massive subsidies to rich landowners to put renewable energy in place while it is still very expensive, with guarantees to those rich investors of high incomes for decades. The fiasco with subsidizing wood burning in Northern Ireland shows the enormous depths of government stupidity in this area, with some farmers making millions by wasting as much heat as they possibly could to maximize their subsidy incomes. That shows without any doubt the numerical and scientific public-sector illiteracy in play. Via other subsidies for wind, solar, wave and tidal systems, every UK household will have to pay several hundred pounds more every year for energy, just so that a negligible impact on temperatures starts to occur negligibly earlier. Large numbers of UK jobs have already been lost overseas from energy intensive industries. Those activities still occur, and the CO2 is still produced, often with far lower environmental and employment standards. No gain, lots of pain.

Enormous economic damage for almost zero benefit is not good government. A good leader would investigate the field until they could at least see there is still a lot of scientific debate about the facts and causes. A good leader would suspect the motivations of those manipulating data and showing misrepresentative graphs. A good leader would tell them to come back with unbiased data, unbiased graphs and honest theories or be dismissed. Trump has already taken the first step by calling a halt to the stupidity of ‘all pain for no gain’. He now needs to tackle NASA and NOAA and find a solution that gets honest science reinstated in what were once credible and respected organisations. That honest science needs to follow up suggestions that, because solar activity is reducing, we may in fact be heading into a prolonged period of cooling, as suggested by teams in Europe and Russia. At the very least, that might stop the idiots currently planning to start geoengineering to reduce temperature to counteract catastrophic global warming, just as nature takes us into a cooling phase. Such mistimed stupidity could kick-start a new ice age. To remind you, climate scientists 45 years ago were warning that we were heading into an ice age and wanted to cover the Arctic with black carbon to prevent runaway ice formation.

CO2 is a greenhouse gas. So is methane. We certainly should keep a watch on emissions and study the climate constantly to check that everything is OK. But that must be done by good scientists practicing actual science, whereby theories are changed to fit the observations, not the other way around. We should welcome development of solar power and storage solutions by companies like Musk’s, but there is absolutely no hurry and no need to subsidize any of that activity. Free market economics will give us cheap renewable energy regardless of government intervention, regardless of subsidy.

We didn’t need Kyoto and we didn’t need Paris. Kyoto didn’t work anyway and Paris causes economic redistribution and a great deal of wastage of money and resources, but no significant climate benefit. We certainly don’t want any more pain for no gain. It is right that we should still help poor countries to the very best of our ability, but we should do that without conflating science with religion and politics.

Trump may still be an idiot, but he was right on this occasion and should now follow on by fixing climate science. May should follow and take the UK out of the climate alarmist damage zone too. Making people poor or jobless for no good reason is not something I can vote for.

Explaining accelerating universe expansion without dark energy

I am not the only ex-physicist who doesn’t believe in dark matter or dark energy, or multiple universes. All of these are theoretically possible interpretations of the maths, but I do not believe they are interpretations appropriate to our universe. Like the concept of the ether, I expect they will be shown to be incorrect and replaced by explanations that don’t need such concepts.

There are already explanations for accelerating expansion that don’t rely on dark energy, such as relativity: https://astronomynow.com/2015/01/05/dark-energy-explained-by-relativistic-time-dilation/ (the title is confusing since the article explains why it isn’t needed).

My theory is even simpler and probably not original, but I can’t find any references to it on the first two pages of Google, so either it’s novel or it’s so wrong that it doesn’t even warrant a mention. Anyway, here it is; make up your own mind. It doesn’t even need equations to explain it:

As galaxies get further apart, the various field fluxes reduce with the square of distance – gravitational, electromagnetic, and so must the intergalactic portion of the Higgs flux. The Higgs field is what gives particles their mass. As the Higgs flux declines, the mass of the particles in each galaxy must therefore drop too. If energy is to be conserved, then as mass declines, galaxy speed must increase linearly with distance, as is observed. QED.
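Spelled out as a toy proportionality (taking the inverse-square flux assumption and conserved kinetic energy above at face value, not as a rigorous cosmological model):

```latex
% A toy reading of the argument above, not a rigorous cosmological model:
% flux (and hence mass) assumed to fall as the inverse square of separation r,
% with kinetic energy held constant.
\Phi_{\text{Higgs}} \propto r^{-2}
\;\Rightarrow\;
m \propto r^{-2},
\qquad
\tfrac{1}{2}\, m v^{2} = \text{const}
\;\Rightarrow\;
v \propto m^{-1/2} \propto r .
```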

Fluorescent microsphere mist displays

A few 3D mist displays have been demonstrated over the last decade. I’ve seen a couple at trade shows and have been impressed. To date, they use mists or curtains of tiny water droplets to make a 3D space onto which to project an image, so you get a walk-through 3D life-sized display. Like this:

Leia Display System Uses A Screen Made Of Water Mist To Display 3D Projections

or check out: http://ixfocus.com/top-10-best-3d-water-projections-ever/

Two years ago, I suggested using a forehead-mounted mist projector:

Forehead 3D mist projector

so you could have a 3D image made right in front of you anywhere.

This week, a holographic display has been doing the rounds on Twitter, called Gatebox:

https://www.geek.com/tech/gatebox-wants-to-be-your-personal-holographic-companion-1682967/

It looks OK, but mist displays might be a better solution for everyday use because they can be made a lot bigger more cheaply. However, nobody really wants water mist causing electrical problems in their PCs or making their notebook paper soggy. You can use smoke as a mist substitute, but then you have a cloud of smoke around you. So…

Suppose that instead of walking around veiled in fog or smoke, or accompanied by electrical crackling and dead PCs, the mist were made not of water droplets but of tiny, dry and obviously non-toxic particles such as fluorescent micro-spheres that are invisible to the naked eye and transparent to visible light, so you can’t see the mist at all and it won’t make stuff damp. Instead of having visible light projected onto them, the particles would be made of fluorescent material, illuminated by a UV projector so that they fluoresce with the right colour to make the visible display. There are plenty of fluorescent materials that could be made into tiny particles, even nano-particles, and made into an invisible mist that produces a bright and high-resolution display. Even if non-toxic is too big an ask, or the fluorescent material is too expensive to waste, a large box that keeps the particles contained and recycles them for the next display could still be bigger, better, brighter and cheaper than a large holographic display.

Remember, you saw it here first. My 101st invention of 2016.

Networked telescopes

A very short one since I am still recovering from a painful trapped nerve that has prevented me writing. Anyway, the best ideas are often the simplest. I re-discovered this one in a 2008 article I wrote but I don’t think it has been done yet and it easily could.

So you buy a telescope for use at home. You point it up at a planet or a star. It probably manages a magnification of a few hundred. Why not add a digital zoom that is linked to networked images from large telescopes such as Hubble? When you reach the limits of your cheaper version, you see images from better, more expensive ones. You could also swap to radio or IR or X-ray images just as easily. Adding that networked function would be fairly simple and cheap, maybe adding a few tens of dollars even to do it well.
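To show how little is involved in the networked part, here is an illustrative sketch; fetch_survey_image and the magnification limit are placeholders I’ve made up for whatever archive and optics a manufacturer actually chose, not a real API.

```python
# Illustrative sketch of a "networked zoom" telescope controller.
# fetch_survey_image() is a placeholder for whatever online image archive
# the manufacturer licenses (optical, radio, IR, X-ray); not a real API.

OPTICAL_LIMIT = 300   # assumed maximum useful magnification of the cheap optics

def fetch_survey_image(ra_deg: float, dec_deg: float, fov_deg: float, band: str):
    """Placeholder: download an archive image centred on (ra, dec)."""
    raise NotImplementedError("hook up to a real sky-survey service here")

def get_view(ra_deg: float, dec_deg: float, magnification: int, band: str = "optical"):
    if band == "optical" and magnification <= OPTICAL_LIMIT:
        return "live image from the eyepiece camera"
    # Beyond the optics' limit, or for non-optical bands, blend in archive data
    fov = 1.0 / magnification   # crude field-of-view estimate in degrees
    return fetch_survey_image(ra_deg, dec_deg, fov, band)

# Zooming past the optical limit while tracking the same coordinates would
# transparently switch the eyepiece display over to archive imagery.
```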

Naturally, you could add networked zoom to cameras too, for landscapes and beauty spots anyway.

You could of course just make a fully digital telescope that has no real telescope function at all, just seeming to be one and working the same way, except that all the images it provides are digital, using direction tracking to pull up the right one.

Ok, my arm hurts again.

 

How to make a Star Wars light saber

A couple of years ago I explained how to make a free-floating combat drone: http://carbonweapons.com/2013/06/27/free-floating-combat-drones/ , like the ones in Halo or Mass Effect. They could realistically be made in the next couple of decades and are very likely to feature heavily in far future warfare, or indeed terrorism. I was chatting to a journalist this morning about light sabers, another sci-fi classic. They could also be made in the next few decades, using derivatives of the same principles. A prototype is feasible this side of 2050.

I’ll ignore the sci-fi wikis that explain how they are meant to work, which mostly approximate to fancy words for using magic or The Force and various fictional crystals. On the other hand, we still want something that will look and sound and behave like the light saber.

The handle bit is pretty obvious. It has to look good and contain a power source and either a powerful laser or plasma generator. The traditional problem with using a laser-based saber is that the saber is only meant to be a metre long but laser beams don’t generally stop until they hit something. Plasma on the other hand is difficult to contain and needs a lot of energy even when it isn’t being used to strike your opponent. A laser can be switched on and off and is therefore better. But we can have some nice glowy plasma too, just for fun.

The idea is pretty simple then. The blade would be made of graphene flakes coated with carbon nanotube electron pipes, suspended using the same technique I outlined in the blog above. These could easily be made to form a long cylinder, and when you want the traditional Star Wars look, they would move about a bit, giving the nice shimmery, blurry, glowy edges we all like. Anyway, with the electron pipe surface facing inwards, these flakes would generate the internal plasma and its nice glow. They would self-organize their cylinder continuously to follow the path of the saber. Easy-peasy. If they strike something, they would just re-organize themselves into the cylinder again once they are free.

For later models, a Katana shaped blade will obviously be preferred. As we know, all ultimate weapons end up looking like a Katana, so we might as well go straight to it, and have the traditional cylindrical light saber blade as an optional cosmetic envelope for show fights. The Katana is a universal physics result in all possible universes.

The hum could be generated by a speaker in the handle if you have absolutely no sense of style, but for everyone else, you could simply activate pulsed magnetic fields between the flakes so that they resonate at the required band to give your particular tone. Graphene flakes can be magnetized so again this is perfectly consistent with physics. You could download and customize hums from the cloud.

Now the fun bit. When the blade gets close to an object, such as your opponent’s arm, or your loaf of bread in need of being sliced, the capacitance of the outer flakes would change, and anyway, they could easily transmit infrared light in every direction and pick up reflections. It doesn’t really matter which method you pick to detect the right moment to activate the laser, the point is that this bit would be easy engineering and with lots of techniques to pick from, there could be a range of light sabers on offer. Importantly, at least a few techniques could work that don’t violate any physics. Next, some of those self-organizing graphene flakes would have reflective surface backings (metals bond well with graphene so this is also a doddle allowed by physics), and would therefore form a nice reflecting surface to deflect the laser beam at the object about to be struck. If a few flakes are vaporized, others would be right behind them to reflect the beam.

So just as the blade strikes the surface of the target, the powerful laser switches on and the beam is bounced off the reflecting flakes onto the target, vaporizing it and cauterizing the ends of the severed blood vessels to avoid unnecessary mess that might cause a risk of slipping. The shape of the beam depends on the locations and angles of the reflecting surface flakes, and they could be in pretty much any shape to create any shape of beam needed, which could be anything from a sharp knife to a single point, severing an arm or drilling a nice neat hole through the heart. Obviously, style dictates that the point of the saber is used for a narrow beam and the edge is used as a knife, also useful for cutting bread or making toast (the latter uses transverse laser deflection at lower aggregate power density to char rather than vaporize the bread particles, and toast is an option selectable by a dial on the handle).

What about fights? When two of these blades hit each other there would be a variety of possible effects. Again, it would come down to personal style. There is no need to have any feel at all; the beams could simply go through each other, but where’s the fun in that? Far better that the flakes also carry high electric currents, so they could create a nice flurry of sparks, and the magnetic interactions between the sabers could also be very powerful. Again, self-organisation would allow circuits to form to carry the currents at the right locations to deflect or disrupt the opponent’s saber. A galactic treaty would be needed to ensure that everyone fights by the rules and doesn’t cheat by having an ethereal saber that just goes right through the other one without any nice show. War without glory is nothing, and there can be no glory without a strong emotional investment and physical struggle mediated by magnetic interactions in the sabers.

This saber would have a very nice glow in any color you like, but not have a solid blade, so would look and feel very like the Star Wars saber (when you just want to touch it, the lasers would not activate to slice your fingers off, provided you have read the safety instructions and have the safety lock engaged). The blade could also grow elegantly from the hilt when it is activated, over a second or so, it would not just suddenly appear at full length. We need an on/off button for that bit, but that could simply be emotion or thought recognition so it turns on when you concentrate on The Force, or just feel it.

The power supply could be a battery, a graphene capacitor bank or a couple of containers of nice chemicals, if you want to build it before we can harness The Force and magic crystals.

A light saber that looks, feels and behaves just like the ones on Star Wars is therefore entirely feasible, consistent with physics, and could be built before 2050. It might use different techniques than I have described, but if no better techniques are invented, we could still do it the way I describe above. One way or another, we will have light sabers.

 

How nigh is the end?

“We’re doomed!” is a frequently recited observation. It is great fun predicting the end of the world and almost as much fun reading about it or watching documentaries telling us we’re doomed. So… just how doomed are we? Initial estimate: Maybe a bit doomed. Read on.

My 2012 blog https://timeguide.wordpress.com/2012/07/03/nuclear-weapons/ addressed some of the possibilities for extinction-level events affecting us. I recently watched a Top 10 list of threats to our existence on TV and it was similar to most you’d read, with the same errors and omissions – nuclear war, global virus pandemic, terminator scenarios, solar storms, comet or asteroid strikes, alien invasions, zombie viruses, that sort of thing. I’d agree that nuclear war is still the biggest threat, so number 1, and a global pandemic of a highly infectious and lethal virus should still be number 2. I don’t even need to explain either of those; we all know why they are in 1st and 2nd place.

The TV list included a couple that shouldn’t be in there.

One inclusion was a mega-eruption of Yellowstone or another super-volcano. A full-sized Yellowstone mega-eruption would probably kill millions of people and destroy much of civilization across a large chunk of North America, but some of us don’t actually live in North America and quite a few might well survive pretty well, so although it would be quite annoying for Americans, it is hardly a TEOTWAWKI threat. It would have big effects elsewhere, just not extinction-level ones. For most of the world it would only cause short-term disruptions, such as economic turbulence; at worst it would start a few wars here and there as regions compete for control in the new world order.

Number 3 on their list was climate change, which is an annoyingly wrong, albeit popularly held, inclusion. The only climate change mechanism proposed for catastrophe is global warming, and the reason it’s called climate change now is that global warming stopped in 1998 and still hasn’t resumed 17 years and 9 months later, so that term has become too embarrassing for doom mongers to use. CO2 is a warming agent and emissions should be treated with reasonable caution, but the net warming contribution of all the various feedbacks adds up to far less than originally predicted and the climate models have almost all proven far too pessimistic. Any warming expected this century is very likely to be offset by a reduction in solar activity, and if and when warming resumes towards the end of the century, we will long since have migrated to non-carbon energy sources, so there really isn’t a longer term problem to worry about. With warming by 2100 pretty insignificant, and less than half a metre of sea level rise, I certainly don’t think climate change deserves to be on any list of threats of any consequence in the next century.

The top 10 list missed two out by including climate change and Yellowstone, and my first replacement candidate for consideration might be the grey goo scenario: self-replicating nanobots manage to convert everything, including us, into grey goo. Take away the silly images of tiny little metal robots cutting things up atom by atom and the laughable presentation of this vanishes. Replace those little bots with bacteria that include electronics and are linked across their own cloud to their own hive AI, which redesigns their DNA to allow them to survive in any niche they find by treating whatever is there as food. When the existing bacteria find a niche they can’t exploit, the next generation adapts to it. That self-evolving smart bacteria scenario is rather more feasible, and still results in bacteria that can conquer any ecosystem they find. We would find ourselves unable to fight back and could be wiped out. This isn’t very likely, but it is feasible, could happen by accident or design on our way to transhumanism, and might deserve a place in the top ten threats.

However, grey goo is only one of the NBIC convergence risks we have already imagined (NBIC = Nano-Bio-Info-Cogno). NBIC is a rich seam for doom-seekers. In there you’ll find smart yogurt, smart bacteria, smart viruses, beacons, smart clouds, active skin, direct brain links, zombie viruses, even switching people off. Zombie viruses featured in the top ten TV show too, but they don’t really deserve their own category any more than many other NBIC derivatives. Anyway, that’s just a quick list of deliberate end-of-world mechanisms – there will be many more I forgot to include and many I haven’t even thought of yet. Then you have to multiply the list by 3. Any of these could also happen by accident, and any could also happen via the unintended consequences of a lack of understanding, which is rather different from an accident but just as serious. So basically, deliberate action, accidents and stupidity are the three primary routes to the end of the world via technology. So instead of just the grey goo scenario, a far bigger collective threat is NBIC generally, and I’d add NBIC collectively to my top ten list, quite high up, maybe 3rd after nuclear war and global virus. AI still deserves to be a separate category of its own, and I’d put it next at 4th.

Another class of technology suitable for abuse is space tech. I once wrote about a solar wind deflector using high atmosphere reflection, and calculated it could melt a city in a few minutes. Under malicious automated control, that is capable of wiping us all out, but it doesn’t justify inclusion in the top ten. One that might is the deliberate deflection of a large asteroid to impact on us. If it makes it in at all, it would be at tenth place. It just isn’t very likely someone would do that.

One I am very tempted to include is drones. Little tiny ones, not the Predators, and not even the ones everyone seems worried about at the moment that can carry 2kg of explosives or anthrax into the midst of football crowds. Tiny drones are far harder to shoot down, and soon we will have a lot of them around. Size-wise, think of midges or fruit flies. They could be self-organizing into swarms, managed by rogue regimes or terrorist groups, or set to auto, terminator style. They could recharge quickly by solar during short breaks, and restock their payloads from secret supplies that distribute with the swarm. They could be distributed globally using the winds and oceans, so they don’t need a plane or missile delivery system that is easily intercepted. Tiny drones can’t carry much, but with nerve gas or viruses, they don’t have to. Defending against such a threat is easy if there is just one: you can swat it. If there is a small cloud of them, you could use a flamethrower. If the sky is full of them and much of the trees and the ground infested, it would be extremely hard to wipe them out. So if they are well designed to pose an extinction level threat, as MAD 2.0 perhaps, then this would be way up in the top ten too, at 5th.

Solar storms could wipe out our modern way of life by killing our IT. That itself would kill many people, via riots and fights for the last cans of beans and bottles of water. The most serious solar storms could be even worse. I’ll keep them in my list, at 6th place.

Global civil war could become an extinction level event, given human nature. We don’t have to go nuclear to kill a lot of people, and once society degrades to a certain level, well we’ve all watched post-apocalypse movies or played the games. The few left would still fight with each other. I wrote about the Great Western War and how it might result, see

Machiavelli and the coming Great Western War

and such a thing could easily spread globally. I’ll give this 7th place.

A large asteroid strike could happen too, or a comet. Ones capable of extinction level events shouldn’t hit for a while, because we think we know all the ones that could do that. So this goes well down the list at 8th.

Alien invasion is entirely possible and could happen at any time. We’ve been sending out radio signals for quite a while, so someone out there might have decided to come and see whether our place is nicer than theirs and take over. It hasn’t happened yet so it probably won’t, but then it doesn’t have to be very probable to be in the top ten. 9th will do.

High energy physics research has also been suggested as capable of wiping out our entire planet via exotic particle creation, but the smart people at CERN say it isn’t very likely. Actually, I wasn’t all that convinced or reassured, and we’ve only just started messing with real physics, so there is plenty of time left to increase the odds of problems. I have a spare place at number 10, so in it goes, with a totally guessed probability of physics research causing a problem once every 4000 years.

My top ten list for things likely to cause human extinction, or pretty darn close:

  1. Nuclear war
  2. Highly infectious and lethal virus pandemic
  3. NBIC – deliberate, accidental or lack of foresight (includes smart bacteria, zombie viruses, mind control etc)
  4. Artificial Intelligence, including but not limited to the Terminator scenario
  5. Autonomous Micro-Drones
  6. Solar storm
  7. Global civil war
  8. Comet or asteroid strike
  9. Alien Invasion
  10. Physics research

Not finished yet though. My title was how nigh is the end, not just what might cause it. It’s hard to assign probabilities to each one, but someone’s got to do it. So, I’ll make an arbitrary wet-finger guess in a dark room wearing a blindfold, with no explanation of my reasoning to reduce arguments, but hey, that’s almost certainly still more accurate than most climate models, and some people actually believe those. I’m feeling particularly cheerful today so I’ll give my most optimistic assessment.

So, with probabilities of occurrence per year:

  1. Nuclear war:  0.5%
  2. Highly infectious and lethal virus pandemic: 0.4%
  3. NBIC – deliberate, accidental or lack of foresight (includes smart bacteria, zombie viruses, mind control etc): 0.35%
  4. Artificial Intelligence, including but not limited to the Terminator scenario: 0.25%
  5. Autonomous Micro-Drones: 0.2%
  6. Solar storm: 0.1%
  7. Global civil war: 0.1%
  8. Comet or asteroid strike: 0.05%
  9. Alien Invasion: 0.04%
  10. Physics research: 0.025%

I hope you agree those are all optimistic. There have been several near misses of number 1 in my lifetime, so my 0.5% could have been 2% or 3% given the current state of the world. Also, 0.25% per year means you’d only expect such a thing to happen once every 4 centuries, so it is a very small chance indeed. However, let’s stick with them and add them up. The cumulative probability of the top ten is 2.015%. Let’s add another arbitrary 0.185% for all the risks that didn’t make it into the top ten, rounding the total up to a nice neat 2.2% per year.

Some of the ones above aren’t possible quite yet, and others will vary in probability from year to year, but I don’t think that changes the guess much overall. If we take a 2.2% probability per year, we get an expectation value of 45.5 years for civilization’s remaining life expectancy. Expectation date for human extinction:

2015.5 + 45.5 years = 2061.

Obviously the probability distribution extends from now to eternity, but don’t get too optimistic, because on these figures there currently is only a 15% chance of surviving past this century.
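If you want to check the sums yourself, the arithmetic behind those figures is trivial:

```python
# Checking the arithmetic above: the annual probabilities, their sum,
# the implied life expectancy, and the chance of surviving to 2100.

annual_risks = {
    "Nuclear war": 0.005, "Virus pandemic": 0.004, "NBIC": 0.0035,
    "AI": 0.0025, "Micro-drones": 0.002, "Solar storm": 0.001,
    "Global civil war": 0.001, "Comet/asteroid": 0.0005,
    "Alien invasion": 0.0004, "Physics research": 0.00025,
}

top_ten = sum(annual_risks.values())      # 0.02015, i.e. 2.015% per year
total = 0.022                             # rounded up with the extra 0.185%

life_expectancy = 1 / total               # ~45.5 years
extinction_year = 2015.5 + life_expectancy            # ~2061
survive_to_2100 = (1 - total) ** (2100 - 2015.5)      # ~15%

print(f"Top ten sum: {top_ten:.3%}")
print(f"Expected extinction year: {extinction_year:.0f}")
print(f"Chance of making it past 2100: {survive_to_2100:.0%}")
```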

If you can think of good reasons why my figures are far too pessimistic, by all means make your own guesses, but make them honestly, with a fair and reasonable assessment of how the world looks socially, religiously, politically, the quality of our leaders, human nature etc, and then add them up. You might still be surprised how little time we have left.

I’ll revise my original outlook upwards from ‘a bit doomed’.

We’re reasonably doomed.

The future of holes

H already in my alphabetic series! I was going to write about happiness, or have/have nots, or hunger, or harassment, or hiding, or health. Far too many options for H. Holes is a topic I have never written about, not even a bit, whereas the others would just be updates on previous thoughts. So here goes, the future of holes.

Holes come in various shapes and sizes. At one extreme, we have great big holes from deep mining, drilling, fracking, and natural holes such as meteor craters, rifts and volcanoes. Some look nice and make good documentaries, but I have nothing to say about them.

At the other extreme, we have long thin holes in optical fibers that increase bandwidth, or holes through carbon nanotubes to make them into electron pipes. And short fat ones that make nice passages through semi-permeable smart membranes.

Electron pipes are an idea I invented in 1992 to increase internet capacity by several orders of magnitude. I’ve written about them in this blog before: https://timeguide.wordpress.com/2015/05/04/increasing-internet-capacity-electron-pipes/

Short fat holes are interesting. If you make a fabric using special polymers that stretch when a voltage is applied across them, then round holes in it would become oval holes, as long as you only stretch it in one direction. Particles that fit through round holes might be too thick to pass through them once they are elongated. If you can do that with a membrane on the skin surface, then you have an electronically controllable means of allowing the right amount of medication through. A dispenser could hold the medication and use the membrane to let the right doses through at the right times.
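As a crude illustration of the geometry (assuming, purely for simplicity, that the hole keeps its area as it is stretched; real polymers will behave differently), it is the minor axis of the elongated hole that decides which particles still fit through.

```python
# Crude geometry of an electrically stretched hole: assume the hole keeps its
# area, so stretching by a factor 'stretch' along one axis shrinks the other.
# Illustrative only; real polymer behaviour depends on the material.

def minor_axis(hole_diameter_um: float, stretch: float) -> float:
    """Minor axis of the elongated hole, under an area-preserving assumption."""
    return hole_diameter_um / stretch

hole = 1.0       # relaxed hole diameter in micrometres (made-up figure)
particle = 0.8   # particle diameter in micrometres (made-up figure)

for stretch in (1.0, 1.2, 1.5):
    fits = particle <= minor_axis(hole, stretch)
    print(f"stretch x{stretch}: minor axis {minor_axis(hole, stretch):.2f} um, "
          f"particle {'passes' if fits else 'blocked'}")
```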

Long thin holes are interesting too. Hollow polyester fiber has served well as duvet and pillow filling for many years. Suppose fibers of more natural materials could be engineered to have holes, and those holes could be filled with chemicals that are highly distasteful to moths. As a moth larva started to eat the fabric, it would very quickly be repelled, protecting the fabric from harm.

Conventional wisdom says when you are in a hole, stop digging. End.

Five new states of matter, maybe.

http://en.wikipedia.org/wiki/List_of_states_of_matter lists the currently known states of matter. I had an idea for five new ones – well, 2 anyway, with 3 variants. They might not be possible, but hey, faint heart ne’er won fair maid, and this is only a blog, not a paper from CERN. Coincidentally though, it is CERN that is most likely to be able to make them.

A helium atom normally has 2 electrons, in a single shell. In a particle model, they go round and round. However… the five new states:

A: I suspect this one may already be known to be impossible and is therefore just another daft idea. It’s just a planar superatom. Suppose, instead of going round and round the same atom, the nuclei were arranged in groups of three in a nice triangle, with 6 electrons going round and round the triplet. They might not be terribly happy doing that unless at high pressure with some helpful EM fields adjusting the energy levels required, but with a little encouragement, who knows, it might last long enough to be classified as matter.

B: An alternative that might be more stable is a quad of nuclei in a tetrahedron, with 8 electrons. This is obviously a variant of A so probably doesn’t really qualify as a separate one. But let’s call it a 3D superatom for now, unless it already has a proper name.

C: Suppose helium nuclei are neatly arranged in a row, a precise distance apart, and two orthogonal electron beams are fired past them at a certain distance on either side, with the electrons spaced and phased very nicely, so that for a short period at least, each of the nuclei has two electrons, and the beam energy and nuclei spacing ensure that the electrons don’t remain captive on one nucleus but are handed on to the next. You can do the difficult sums. To save you a few seconds: since the beams need to be orthogonal, you’ll need multiple beams in the direction orthogonal to the row.

D: Another cheat, a variant of C, call it C1: you could make a few rows for a planar version with a grid of beams. It might be tricky to make the beams stay together for any distance, so you could only make a small flake of such matter, but I can’t see an obvious reason why it would be impossible. Just tricky.

E: A second variant of C really, C2, with a small 3D speck of such nuclei and a grid of beams. Again, it works in my head.

Well, 5 new states of matter for you to play with. But here’s a free bonus idea:

The states don’t have to actually exist to be useful. Even with just the descriptions above, you could do the maths for these. They might not be physically achievable, but that doesn’t stop them existing in a virtual world with a hypothetical future civilization making them. And given that they have specific mathematics, and ergo a whole range of theoretical chemistry and hence hyperelectronics, they could be used as simulated constructs in a Turing machine or actual constructs in quantum computers to achieve particular circuitry with particular virtues. You could certainly emulate it on a Yonck processor (see my blog on that). So you get a whole field of future computing and AI thrown in.

Blogging is all the fun with none of the hard work and admin. Perfect. And just in case someone does build it all, for the record, you saw it here first.