A recently announced Chinese supercomputer achieves 2.6 Peta-Instructions Per Second apparently. I once calculated that the human brain has about a third as much power in raw processing terms. However, the computer uses fundamentally different approaches to achieving its task compared to the brain.
Artificial intelligence is already used to create sophisticated virus variants, and autonomous AI entities will eventually become potential threats in their own right. Today, computers act only on instruction from people, but tomorrow, they will become a lot more independent. Assumptions that people will write their software are not valid. It is entirely feasible to develop design techniques that harness evolutionary and random chance principles, which could become much more sophisticated than today’s primitive genetic algorithms. Many people underestimate the potential for AI based threats because they assume that all machines and their software must be designed by people, who have limited knowledge, but that is no longer true and will become increasingly untrue as time goes on. So someone intent on mischief could create a piece of software and release it onto the net, where it could evolve and adapt and take on a life of its own, creating problems for companies while hiding using anonymity, encryption and distribution. It could be very difficult to find and destroy many such entities.
Nightmare AI scenarios do not necessarily require someone to be intent on creating mischief. Student pranks or curiosity could be enough. For example, suppose that some top psychology students, synthetic biology students and a few decent hackers spend some time over a few drinks debating whether it is possible to create a conscious AI entity. Even though none of them has any deep understanding of how human consciousness works, or how to make an alternative kind of consciousness, they may have enough combined insight to start a large scale zombie network, and to seed some crude algorithms as the base for an evolutionary experiment. Their lack of industrial experience also translates into a lack of design prejudice. Putting in some basic start-point ideas, coupled with imaginative thinking, a powerful distributed network of such machines would provide a formidable platform on which to run such an experiment. By making random changes to both algorithms and architecture, and perhaps using a ‘guided evolution’ approach, such an experiment might stumble across some techniques that offer promise, and eventually achieve a crude form of consciousness or advanced intelligence, both of which are dangerous. This might continue its development on its own, out of the direct control of the students. Even if the techniques it uses are very crude by comparison to those used by nature, the processing power and storage available to such a network offers vastly more raw scope than that available even in the human brain, and would perhaps allow an inefficient intelligence to still be superior to that of humans.
Once an AI reaches a certain level of intelligence, it would be capable of hiding, using distribution and encryption to disperse itself around the net. By developing its own techniques to capture more processing resources, it could benefit from a positive feedback loop, accelerating quickly towards a vastly superhuman entity. Although there is no reason to assume that it would necessarily be malicious, there is equally no reason to assume it would be benign. With its own curiosity, perhaps humans would become unintentional victims of its activities, in much the same way as insects on a building site.