
Emotion maths – A perfect research project for AI

I did a maths and physics degree, and even though I have forgotten much of it after 36 years, my brain is still oriented in that direction and I sometimes have maths dreams. Last night I had another, in which I realised I've never heard of a branch of mathematics for describing emotions or emotional interactions. As the dream progressed, it became increasingly obvious that the part of maths best suited to doing so would be field theory, and given the multi-dimensional nature of emotions, tensor field theory would be ideal. I'm guessing that tensor field theory isn't on most universities' psychology syllabuses. I could barely cope with it on a maths syllabus. However, I note that one branch of Google's AI R&D resulted in a software framework called TensorFlow, designed specifically for such multidimensional problems, and presumably being used to analyse marketing data. Again, I haven't yet heard any mention of it being used for emotion studies, so this is clearly a large hole in maths research that might be perfectly filled by AI. It would be fantastic if AI could deliver a whole new branch of maths. AI got into trouble inventing new languages, but mathematics is really just a way of describing logical reasoning about numbers or patterns in a formal language that is self-consistent and reproducible. It is ideal for describing scientific theories, engineering and logical reasoning.

Checking Google today, I found a few articles describing simple emotional interactions using superficial equations, but nothing with the level of sophistication needed.


One example from those articles:

Disappointment = Expectations – Reality

is certainly an equation, but it is too superficial and incomplete. It takes no account of how you feel otherwise – whether you are jealous or angry or in love or a thousand other things. So there is some discussion on using maths to describe emotions, but I’d say it is extremely superficial and embryonic and perfect for deeper study.

Emotions often behave like fields. We use field-like descriptions in everyday expressions – envy is a green fog, anger is a red mist, or we see a beloved through rose-tinted spectacles. These are classic fields, and maths could easily describe them in this way and use them in equations that describe behaviours affected by those emotions. I've often used the concept of 'magentic' fields in some of my machine consciousness work: if I am using an optical processing gel, then shining a coloured beam of light into a particular 'brain' region could bias the neurons in that region in a particular direction, in the same way an emotion does in the human brain. 'Magentic' is just a playful pun, given that the processing mechanism is light (e.g. magenta) rather than the electronics that would be better affected by magnetic fields.

Some emotions interact and some don’t, so that gives us nice orthogonal dimensions to play in. You can be calm or excited pretty much independently of being jealous. Others very much interact. It is hard to be happy while angry. Maths allows interacting fields to be described using shared dimensions, while having others that don’t interact on other dimensions. This is where it starts to get more interesting and more suited to AI than people. Given large databases of emotionally affected interactions, an AI could derive hypotheses that appear to describe these interactions between emotions, picking out where they seem to interact and where they seem to be independent.
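As a toy sketch of what such a derived description might look like (every emotion name, coefficient and the update rule here are invented purely for illustration, not taken from any data or psychology literature), an emotional state can be written as a vector and the pairwise interactions as a coupling matrix, with zero entries encoding the 'orthogonal' pairs:

```python
import numpy as np

# Hypothetical emotional state vector; all values invented for illustration.
emotions = ["calm", "excited", "jealous", "happy", "angry"]
state = np.array([0.7, 0.2, 0.5, 0.6, 0.1])

# Coupling matrix: entry [i, j] says how emotion j pushes on emotion i.
# Zero entries encode orthogonal pairs (calm vs jealous, as in the text);
# negative entries encode conflicts (happy vs angry).
coupling = np.array([
    [ 0.0, -0.8,  0.0,  0.3, -0.6],   # calm
    [-0.8,  0.0,  0.2,  0.4,  0.5],   # excited
    [ 0.0,  0.2,  0.0, -0.3,  0.6],   # jealous
    [ 0.3,  0.4, -0.3,  0.0, -0.9],   # happy
    [-0.6,  0.5,  0.6, -0.9,  0.0],   # angry
])

# One 'interaction step': each emotion is nudged by its coupled neighbours,
# then clipped back into the [0, 1] intensity range.
def interact(state, coupling, rate=0.1):
    return np.clip(state + rate * coupling @ state, 0.0, 1.0)

new_state = interact(state, coupling)
```

The AI's job in the research project proposed here would be the hard part this sketch skips: learning the entries of that matrix (and whether a linear model is even adequate) from large databases of emotionally affected interactions.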

Not being emotionally involved itself, an AI is better suited to drawing such conclusions. A human researcher, however, might find it hard to draw neat boundaries around emotions and describe them so clearly. It may be obvious that being both calm and angry doesn't easily fit with human experience, but what about being terrified and happy? Terrified sounds very negative at first glance, so first impressions aren't favourable for twinning them, but when you think about it, that pairing pretty much describes the entire roller-coaster and extreme sports markets. Many other emotions interact somewhat, and deriving the equations would be extremely hard for humans but, I'm guessing, relatively easy for AI.

These kinds of equations fall very easily into tensor field theory, with types and degrees of interactions of fields along alternative dimensions readily describable.

Some interactions act like transforms. Fear might transform the ways that jealousy is expressed. Love alters the expression of happiness or sadness.

Some things seem to add or subtract, others multiply, others act more like exponentials, partial derivatives or integrations; others interact periodically, or instantly, or over time. Maths seems to hold innumerable tools for describing emotions, but first-person involvement and experience make it extremely difficult for humans to derive such equations. The example equation above is easy to understand, but there are so many emotions available, and so many different circumstances, that this entire problem looks like it was designed to challenge a big data-mining plant. A big company involved in AI, big data and advertising, and that knows about tensor field theory, would be a perfect research candidate. Google, Amazon, Facebook, Samsung… It has all the potential for a race.
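To make the variety of operators concrete, here is a minimal sketch (all function names, coefficients and the half-life are invented for illustration) of three of the types mentioned: a subtractive relation like the disappointment equation above, a multiplicative transform where one emotion scales the expression of another, and simple exponential decay over time:

```python
# Subtractive: the disappointment equation from above, floored at zero.
def disappointment(expectations, reality):
    return max(0.0, expectations - reality)

# Multiplicative transform: love scales how strongly happiness is expressed
# (one emotion altering the expression of another).
def expressed_happiness(happiness, love, gain=0.5):
    return happiness * (1.0 + gain * love)

# Temporal: an emotion decaying exponentially toward a baseline.
def decay(intensity, baseline=0.0, half_life=10.0, t=1.0):
    return baseline + (intensity - baseline) * 0.5 ** (t / half_life)

d = disappointment(0.9, 0.4)       # 0.5
h = expressed_happiness(0.6, 0.8)  # ≈ 0.84
a = decay(1.0, t=10.0)             # 0.5 after one half-life
```

Each of these is a guess at a functional form; the point of handing the problem to AI is precisely that the true forms, and their parameters, would be fitted from data rather than assumed.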

AI, meet emotions. You speak different languages, so you’ll need to work hard to get to know one another. Here are some books on field theory. Now get on with it, I expect a thesis on emotional field theory by end of term.


Five new states of matter, maybe.

http://en.wikipedia.org/wiki/List_of_states_of_matter lists the currently known states of matter. I had an idea for five new ones, well, two anyway, with three variants. They might not be possible, but hey, faint heart ne'er won fair maid, and this is only a blog, not a paper from CERN. Coincidentally, though, it is CERN that would be most likely able to make them.

A helium atom normally has two electrons in a single shell. In a particle model, they go round and round. However… here are the five new states:

A: I suspect this one may already be known to be impossible, and is therefore just another daft idea. It's just a planar superatom. Suppose that, instead of going round and round the same atom, the nuclei were arranged in groups of three in a nice triangle, with six electrons going round and round the triplet. They might not be terribly happy doing that unless at high pressure, with some helpful EM fields adjusting the energy levels required, but with a little encouragement, who knows, it might last long enough to be classified as matter.

B: An alternative that might be more stable is a quad of nuclei in a tetrahedron, with 8 electrons. This is obviously a variant of A so probably doesn’t really qualify as a separate one. But let’s call it a 3D superatom for now, unless it already has a proper name.

C: Suppose helium nuclei are neatly arranged in a row at a precise distance apart, and two orthogonal electron beams are fired past them at a certain distance on either side, with the electrons spaced and phased very nicely, so that for a short period at least, each of the nuclei has two electrons, and the beam energy and nuclei spacing ensure that the electrons don't remain captive on one nucleus but are handed on to the next. You can do the difficult sums. To save you a few seconds: since the beams need to be orthogonal, you'll need multiple beams in the direction orthogonal to the row.

D: Another cheat, a variant of C (call it C1): make a few rows for a planar version, with a grid of beams. It might be tricky to make the beams stay together for any distance, so you could only make a small flake of such matter, but I can't see an obvious reason why it would be impossible. Just tricky.

E: A second variant of C really (C2), with a small 3D speck of such nuclei and a grid of beams. Again, it works in my head.

Well, 5 new states of matter for you to play with. But here’s a free bonus idea:

The states don't have to actually exist to be useful. Even with just the descriptions above, you could do the maths for these. They might not be physically achievable, but that doesn't stop them existing in a virtual world with a hypothetical future civilisation making them. And given that they have that specific mathematics, and ergo a whole range of theoretical chemistry, and therefore hyperelectronics, they could be used as simulated constructs in a Turing machine or actual constructs in quantum computers to achieve particular circuitry with particular virtues. You could certainly emulate it on a Yonck processor (see my blog on that). So you get a whole field of future computing and AI thrown in.

Blogging is all the fun with none of the hard work and admin. Perfect. And just in case someone does build it all, for the record, you saw it here first.

Nuclear weapons + ?

I was privileged and honoured in 2005 to be elected one of the Fellows of the World Academy of Art and Science. It is a mark of recognition and distinction that I wear with pride. The WAAS was set up by Einstein, Oppenheimer, Bertrand Russell and a few other great people, as a forum to discuss the big issues that affect the whole of humanity, especially the potential misuse of scientific discoveries and, by extension, technological developments. Not surprisingly therefore, one of their main programs from the outset has been the pursuit of the abolition of nuclear weapons. It's a subject I have never written about before, so maybe now is a good time to start. Most importantly, I think it's now time to add others to the list.

There are good arguments on both sides of this issue.

In favour of nukes, it can be argued from a pragmatic stance that the existence of nuclear capability has contributed to a reduction in the ferocity of wars. If you know that the enemy could resort to nuclear weapon use if pushed too far, then it may create some pressure to restrict the devastation levied on the enemy.

But this only works if both sides value the lives of their citizens sufficiently. If a leader thinks he may survive such a war, or doesn't mind risking his life for the cause, then the deterrent ceases to work properly. An all-out global nuclear war could kill billions of people and leave the survivors in a rather unpleasant world. As Einstein observed, he wasn't sure what weapons World War 3 would be fought with, but World War 4 would be fought with sticks and stones. Mutually assured destruction may work to some degree as a deterrent, but it is based on second-guessing a madman. It isn't a moral argument, just a pragmatic one: wear a big enough bomb, and people might give you a wide berth.

Against nukes, it can be argued on a moral basis that such weapons should never be used in any circumstances, their capability to cause devastation being beyond the limits that should be tolerated by any civilisation. Furthermore, any resources spent on creating and maintaining them are therefore wasted and could have been put to better, more constructive use.

This argument is appealing, but lacks pragmatism in a world where some people don’t abide by the rules.

Pragmatism and morality often align with the right and left of the political spectrum, but there is a solution that keeps both sides happy, albeit an imperfect one. If all nuclear weapons can be removed, and stay removed, so that no-one has any or can build any, then pragmatically, there could be even more wars, and they may be even more prolonged and nasty, but the damage will be kept short of mutual annihilation. Terrorists and mad rulers wouldn’t be able to destroy us all in a nuclear Armageddon. Morally, we may accept the increased casualties as the cost of keeping the moral high ground and protecting human civilisation. This total disarmament option is the goal of the WAAS. Pragmatic to some degree, and just about morally digestible.

Another argument that is occasionally aired is the 'what if?' WW2 scenario. What if nuclear weapons hadn't been invented? More people would probably have died in a longer WW2. If they had been invented and used earlier, by the other side, and the Germans had won, perhaps we would have ended up with a unified Europe with the Germans in the driving seat. Would that be hugely different from the Europe we actually have 65 years later anyway? Are even major wars just fights over the nature of our lives for a few decades? What if the Romans or the Normans or the Vikings had been defeated? Would Britain be so different today? 'What if?' debates get you little except interesting debate.

The arguments for and against nuclear weapons haven't really moved on much over the years, but now the scope is changing a bit. They are as big a threat as ever, maybe more so with the increasing possibility of rogue regimes and terrorists getting their hands on them, but we are adding other technologies that are potentially just as destructive, in principle anyway, and they could be weaponised if required.

One path to destruction that entered a new phase in the last few years is our messing around with the tools of biology. Biotechnology and genetic modification, synthetic biology, and the linking of external technology into our nervous systems are distinct strands of this threat, but each of them is developing quickly. What links them all is the increasing understanding, harnessing and ongoing development of processes similar to those that nature uses to make life. We start with what nature provides, reverse engineer some of the tools, improve on them, adapt and develop them for particular tasks, and then use these to do stuff that improves on or interacts with natural systems.

Alongside nuclear weapons, we have already become used to the bio-weapons threat based on genetically modified viruses or bacteria, and also to weapons using nerve gases that inhibit neural functioning to kill us. But not far away is biotech designed to change the way our brains work, potentially to control or enslave us. It is starting benignly of course, helping people with disabilities or nerve or brain disorders. But some will pervert it.

Traditional war has been based on causing enough pain to the enemy until they surrender and do as you wish. Future warfare could be based on altering their thinking until it complies with what you want, making an enemy into a willing ally, servant or slave. We don’t want to lose the great potential for improving lives, but we shouldn’t be naive about the risks.

The broad convergence of neurotechnology and IT is a particularly dangerous area. Adding artificial intelligence into the mix opens the possibility of smart, adapting organisms as well as Terminator-style threats: organisms that can survive in multiple niches, or hybrid nature/cyberspace ones that use external AI to redesign their offspring to colonise others; organisms that penetrate your brain and take control.

Another dangerous offspring of our better understanding of biology is that we now have clubs where enthusiasts gather to make genetically modified organisms. At the moment, this is benign novelty stuff, such as transferring a bio-luminescence gene or a fluorescent marker to another organism, just another after-school science club for gifted school-kids and hobbyist adults. But it is, I think, a dangerous hobby to encourage. With better technology and skill developing all the time, some of those enthusiasts will move on to designing and creating synthetic genes, some won't like being constrained by safety procedures, and some may have accidents and release modified organisms into the wild that were developed without observing the safety rules. Some will use the clubs to learn genetic design, modification and fabrication techniques and then work in secret, or teach terrorist groups. Not all the members can be guaranteed to be fine upstanding members of the community, and it should be assumed that some will be people of ill intent trying to learn how to do the most possible harm.

At least a dozen new types of WMD are possible based on this family of technologies, even before we add in nanotechnology. We should not leave it too late to take this threat seriously. Whereas nuclear weapons are hard to build and require large facilities that are hard to hide, much of this new stuff can be done in garden sheds or ordinary office buildings. They are embryonic and even theoretical today, but that won’t last. I am glad to say that in organisations such as the Lifeboat Foundation (lifeboat.com), in many universities and R&D labs, and doubtless in military ones, some thought has already gone into defence against them and how to police them, but not enough. It is time now to escalate these kinds of threats to the same attention we give to the nuclear one.

With a global nuclear war, much of the life on earth could be destroyed, and that will become possible with the release of well-designed organisms. But I doubt if I am alone in thinking that the possibility of being left alive with my mind controlled by others may well be a fate worse than death.