Tag Archives: computing

Your phone is wasted on you

Between 1983 and 1985, the fastest computer on Earth was the Cray X-MP. Its two 105MHz processors and 16MB of memory provided a peak performance of 400MFLOPS. It cost around $15M, plus disks.

The Apple iPhone XS is 1500 times faster and 15000 times cheaper.

In 1985, our division of 50 people ran all of its word processing on a VAX 11/780, which delivered 0.5MIPS (32-bit). On an equivalent instructions-per-second basis, the iPhone XS is 2.5M times faster, so it ought to be able to run word processing for a country of 125M people.
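For anyone who wants to check the arithmetic, here is a quick back-of-envelope sketch in Python using the figures quoted above (the iPhone XS numbers are the rough estimates used in this post, not official benchmarks):

    # Back-of-envelope check of the comparison above. The iPhone XS figures
    # are the rough estimates used in this post, not official benchmarks.
    cray_flops = 400e6            # Cray X-MP peak: 400 MFLOPS
    cray_cost = 15e6              # roughly $15M, excluding disks

    iphone_flops = cray_flops * 1500    # "1500 times faster"
    iphone_cost = cray_cost / 15000     # "15000 times cheaper"
    print(f"iPhone XS: ~{iphone_flops / 1e9:.0f} GFLOPS, ~${iphone_cost:,.0f}")

    vax_users = 50                # one VAX 11/780 served our 50-person division
    speedup_vs_vax = 2.5e6        # iPhone XS vs VAX, instructions per second
    print(f"Word processing for ~{vax_users * speedup_vs_vax / 1e6:.0f} million people")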

Think about that next time you’re typing a text.

 


Future AI: Turing multiplexing, air gels, hyper-neural nets

Just in time to make 2018 a bit less unproductive, I managed to wake in the middle of the night with another few inventions. I’m finishing the year on only a third as many as 2016 and 2017, but better than some years. And I quite like these new ones.

Gel computing is a very old idea of mine, and I’m surprised no company has started doing it yet. Air gel is different. My original used a suspension of processing particles in gel; the idea was that the gel would hold the particles in fixed locations with good free line of sight to neighboring devices for inter-device optical comms, while also acting as a coolant.

Air gel uses the same idea of suspending particles, but does so using ultrasound, with standing waves holding the particles aloft. They would form a semi-gel I suppose, much softer. The intention is that they would be more easily movable than in a gel, and could even be rotated. I imagine using rotating magnetic fields to rotate them, and using that mechanism to implement different configurations of inter-device nets. That would be the first pillar of running multiple neural nets in the same space at the same time, using spin-based TDM (time division multiplexing), or synchronized space multiplexing if you prefer. If a device uses on-board processing that is fast compared to the signal transmission time to other devices (the speed of light may be fast but can still be severely limiting for processing and comms), then the ability to handle processing for several other networks while awaiting a response allows a processing network to be multiplied up several times. A neural net could become a hyper-neural net.

Given that this is intended for mid-century AI, I’m also assuming that true TDM can be used on each net, my second pillar. Signals would carry a stream of slots holding bits for each processing instance. Since this allows a Turing machine to implement many different processes in parallel, I decided to call it Turing multiplexing. Again, it helps alleviate the potential gulf between processing and communication times. Combining Turing and spin multiplexing would allow a single neural net to be multiplied up potentially thousands or millions of times – hyper-neurons seems as good a term as any.
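Here is a loose Python sketch of the Turing multiplexing idea, purely as an illustration: the VirtualNet class, the slot ordering and the bit counts are all invented for the example, not part of any design.

    # Toy sketch of Turing (time-division) multiplexing: one physical link
    # carries a repeating frame of slots, each slot belonging to a different
    # virtual net instance. All names and numbers here are invented.
    from dataclasses import dataclass, field

    @dataclass
    class VirtualNet:
        net_id: int
        outbox: list = field(default_factory=list)   # bits waiting to be sent

        def next_bit(self):
            return self.outbox.pop(0) if self.outbox else 0   # idle slot sends 0

    def build_frame(nets):
        """One TDM frame: one slot per virtual net, always in the same order."""
        return [net.next_bit() for net in nets]

    nets = [VirtualNet(i, outbox=[i % 2] * 4) for i in range(8)]
    for _ in range(4):
        print(build_frame(nets))   # each column is a different net sharing the link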

The third pillar of this system is that the processing particles (each could contain a large number of neurons or other IT objects) could be energized and clocked using very high speed alternating EM fields – radio, microwaves, light, even x-rays. I don’t have any suggestions for processing mechanisms that might operate at such frequencies, though Pauli switches might work at lower speeds, using the Pauli exclusion principle to link electron spin states to make switches. I believe early versions of spin qubits use a similar principle. I’m agnostic about whether conventional Turing machine or quantum processing would be used, or any combination. In any case, it isn’t my problem; I suspect that future AIs will figure out the physics and invent the appropriate IT.

Processing devices operating at high speed could use a lot of energy and generate a lot of heat, and encouraging the system to lase by design would be a good way to cool it as well as to power it.

A processor using such mechanisms need not be bulky. I always assumed a yogurt-pot size for my gel computer, and an air gel processor could be the same, about 100ml. That is enough to suspend a trillion particles with good line of sight for optical interconnections, and each connection could utilise up to millions of alternative wavelengths. Each wavelength could support many TDM channels, and spinning the particles multiplies that up again. A UV laser clock/power source driving processors at 10^16Hz would certainly need high-density multiplexing to make use of such a volume, since transmission distances of up to 10cm (though mostly sub-mm) would otherwise be a strongly limiting performance factor, but 10 million-fold WDM/TDM is attainable.
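The 10 million-fold figure is easy to sanity-check. The wavelength count below comes from the paragraph above; the slots-per-wavelength figure is an assumption for illustration only.

    # Sanity check of the multiplexing factor. The wavelength count comes from
    # the text above; the slots-per-wavelength figure is an assumption.
    wavelengths_per_link = 1e6        # "up to millions of alternative wavelengths"
    tdm_slots_per_wavelength = 10     # assumed: "many TDM channels" per wavelength

    channels_per_link = wavelengths_per_link * tdm_slots_per_wavelength
    print(f"{channels_per_link:.0e} channels per link")   # 1e+07, i.e. 10 million-fold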

A trillion of these hyper-neurons using that multiplexing would act very effectively as 10 million trillion neurons, each operating at 10^16Hz processing speed. That’s quite a lot of zeros (35 of them if you multiply the neuron count by the clock speed), and yet each hyper-neuron could have connections to thousands of others in each of many physical configurations. It would be an obvious platform for supporting a large population of electronically immortal people and AIs who each want a billion replicas, and if it only occupies 100ml of space, the environmental footprint isn’t an issue.
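Counting the zeros explicitly, using the speculative figures from this post rather than anything measured:

    # Counting the zeros, using the speculative figures from this post.
    import math

    particles = 1e12          # a trillion hyper-neurons in ~100ml
    multiplexing = 1e7        # 10 million-fold WDM/TDM/spin multiplexing
    clock_hz = 1e16           # UV-driven clock rate

    effective_neurons = particles * multiplexing        # "10 million trillion"
    total_ops_per_second = effective_neurons * clock_hz

    print(round(math.log10(effective_neurons)))      # 19
    print(round(math.log10(total_ops_per_second)))   # 35 zeros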

It’s hard to know how to talk to a computer that operates like a brain but is 10^22 times faster. I’d suggest ‘Yes Boss’.

 

Ultra-simple computing part 3

Just in time v Just in case

Although the problem isn’t as bad now as it used to be, a lot of software runs on your computer just in case it might be needed. Often it isn’t, and sometimes the PC is shut down or rebooted without it ever having been used. This wastes our time, wastes a little energy, and potentially adds functionality, and with it weaknesses, that hackers can exploit.

If a computer only loaded the essential pieces of software, risks would be minimised and initial delays reduced. There would be a slightly bigger delay once a piece of code is actually needed, because it would have to load then, but since a lot of code is rarely used, the overall result would still be a big win. This would improve security and reliability.

If all I am doing today is typing and checking occasional emails, a lot of the software currently loaded in my PC’s memory is not needed. I don’t even need a firewall running all the time if network access is disabled between my email checks. If networking and the firewall are started when I want to check email or start browsing, and all network access is disabled after I have checked, then security would be a bit better. I also don’t need all the fancy facilities in Office when all I am doing is typing, and I definitely don’t want any part of Office to use any kind of networking in either direction for any reason (I use Thunderbird, not Outlook, for email). So don’t load the code yet; I don’t want it running; it only adds risks, not benefits. If I want to do something fancy in a few weeks’ time, load the code then. If I want to look up a word in a dictionary or check a hyperlink, I can launch a browser and copy and paste it. Why do anything until asked? Forget doing stuff just in case it might occasionally generate a tiny time saving. Just in time is far safer and better than just in case.

So, an ultra-simple computer should only load what is needed, when it is needed. It would only open communications when needed, and then only to the specific destination required. That frees up processors and memory, reduces risks and improves speed.
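As a minimal illustration of just-in-time loading, here is a Python sketch that defers importing a feature module until it is first requested. The module names are placeholders, not real products.

    # Just-in-time loading: nothing is imported or started until the user
    # actually asks for the feature. Module names below are placeholders.
    import importlib

    _loaded = {}

    def feature(module_name):
        """Load a feature module the first time it is requested, then cache it."""
        if module_name not in _loaded:
            _loaded[module_name] = importlib.import_module(module_name)
        return _loaded[module_name]

    # The spell checker, mail client and browser stay unloaded until called for:
    # feature("spellcheck").check(document)
    # feature("mailclient").fetch_new_messages()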

Software distribution

Storing software on hard disks or in memory lets the files be changed, possibly by a virus. Suppose instead that software were distributed on ROM chips. They can be very cheap, so why not? No apps, no downloads. All the software on your machine would be in read-only memory, essentially part of the hardware. This would change a few things in computer design. First, you’d have a board with lots of nice slots in it, into which you plug the memory chips you’ve bought with the programs you want on them. (I’ll get to tablets and phones later; obviously a slightly different approach is needed for portable devices.) Manufacturers would have a huge interest in checking their code first, because they couldn’t put fixes out later except on replacement chips. Updating the software to a new version would simply mean inserting a new chip. Second, since the chips are read-only, the software on them cannot be corrupted. There is no mechanism by which a virus or other malware could get onto the chip.

Apps could be distributed in collections – lifestyle or business collections. You could buy subscriptions to app agencies that issued regular chips with their baskets of apps on them. Or you could access apps online via the cloud. Your machine would stay clean.

It could go further. As well as memory chips, modules could include processing, controller or sensory capabilities. Most processing might still happen in the main part of the computer, but specialist capabilities could be added this way.

So, what about tablets and phones? Obviously you can’t plug lots of extra chips into those; it would be too cumbersome to build them with that many slots. One approach would be to use your PC or laptop to store and keep up to date a single storage chip that goes into your tablet or phone. It could use a re-programmable ROM that can’t be tampered with by the tablet itself. All your apps would live on it, but it would be made clean and fresh every day. Tablets could have a simple slot to take that single chip, just as a few already do for extra memory.

Multi-layered security

If your computer is based on algorithms encoded on read only memory chips or better still, directly as hardware circuits, then it could boot from cold very fast, and would be clean of any malware. To be useful, it would need a decent amount of working memory too, and of course that could provide a short term residence for malware, but a restart would clean it all away. That provides a computer that can easily be reset to a clean state and work properly again right away.

Another layer of defense is to deny programs access to things they don’t need. You don’t open every door and window in your home every time you want to go in or out, so why open every possible entrance that your office automation package might ever want to use just because you want to type an article? Why open the ability to remotely install or run programs on your computer without your knowledge and consent just because you want to read a news article or look at a cute kitten video? Yet we have accepted such appallingly bad practice from the web browser developers because we have had no choice. It seems that the developers’ desire to provide open windows to anyone who wants to use them outweighs the users’ desire for basic security common sense. So the next layer of defense is really pretty obvious: we want a browser that doesn’t open doors and windows until we explicitly tell it to, and even then checks everything that tries to get through.
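To make the idea concrete, here is a minimal sketch of the sort of deny-by-default capability gate such a browser or office package might use internally; the permission names are invented for illustration.

    # Deny-by-default capability gate: every action is refused unless the user
    # has explicitly granted that specific permission. Names are invented.
    class CapabilityGate:
        def __init__(self):
            self._granted = set()

        def grant(self, permission):
            """Called only as a direct result of an explicit user action."""
            self._granted.add(permission)

        def revoke(self, permission):
            self._granted.discard(permission)

        def require(self, permission):
            if permission not in self._granted:
                raise PermissionError(f"'{permission}' has not been granted by the user")

    gate = CapabilityGate()
    gate.grant("network:mail.example.com")     # user chose to check email

    gate.require("network:mail.example.com")   # allowed
    # gate.require("run:remote_executable")    # would raise PermissionError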

It may still be that you occasionally want to run software from a website, maybe to play a game. Another layer of defense that could help then is to restrict remote executables to a limited range of commands with limited scope. It is also easy to arrange a sandbox where code can run but can’t influence anything outside the sandbox. For example, there is no reason a game would need to inspect files on your computer apart from stored games or game-related files. Creating a sandbox that can run a large range of agreed functions to enable games or other remote applications, but is sealed from anything else on the computer, would enable remote benign executables without compromising security. Even if they were less safe, confining activity to the sandbox allows the machine to be sterilized by sweeping that area, without necessitating a full reset. Even without the sandbox, knowing the full capability of the range of permitted commands enables damage limitation and precision cleaning.

The range of commands should be created with the end user as the priority, letting them do what they want with the lowest danger. It should not be created with application writers as the top priority, since that is where the security risk arises. Not all potential application writers are benign; many want to exploit or harm the end user for their own purposes. Everyone in IT really ought to know that, should never forget it for a minute, and it really shouldn’t need to be said.
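A sandbox along those lines could be as simple as an allowlist of commands operating only on sandbox-local state. This sketch is purely illustrative (the command set is hypothetical), but it shows how unknown commands are rejected and how the sandbox can be swept clean without a full reset.

    # Allowlist sandbox: remote code may only invoke the commands listed below,
    # and every command operates only on sandbox-local state. Hypothetical set.
    class Sandbox:
        def __init__(self):
            self.state = {}                     # the only state remote code can touch
            self._commands = {
                "draw_sprite": self._draw_sprite,
                "save_game":   self._save_game,
                "load_game":   self._load_game,
            }

        def execute(self, command, *args):
            if command not in self._commands:
                raise PermissionError(f"command '{command}' is not permitted")
            return self._commands[command](*args)

        def wipe(self):
            """Sterilize the sandbox without resetting the whole machine."""
            self.state.clear()

        # --- allowlisted operations, all confined to self.state ---
        def _draw_sprite(self, name, x, y):
            self.state.setdefault("sprites", []).append((name, x, y))

        def _save_game(self, slot, data):
            self.state[f"save:{slot}"] = data

        def _load_game(self, slot):
            return self.state.get(f"save:{slot}")

    box = Sandbox()
    box.execute("save_game", 1, {"level": 3})
    # box.execute("read_user_files", "/home")   # rejected: not in the allowlist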