
Thoughts on declining male intelligence

I’ve seen a few citations this week of a study showing a 3 IQ point per decade drop in men’s intelligence levels: https://www.sciencealert.com/iq-scores-falling-in-worrying-reversal-20th-century-intelligence-boom-flynn-effect-intelligence

I’m not qualified to judge the merits of the study, but it is interesting if true, and since it is based on studying 730,000 men and seems to use a sensible methodology, it does sound reasonable.

I wrote last November about the potential effects of environmental exposure to hormone disruptors on intelligence, pointing out that if estrogen-mimicking hormones cause a shift in IQ distribution, this would be very damaging even if mean IQ stays the same. Although male and female IQs are about the same, male IQs are less concentrated around the mean, so there are more men than women at each extreme.

https://timeguide.wordpress.com/2017/11/13/we-need-to-stop-xenoestrogen-pollution/

From a social equality point of view, of course, some might consider it a good thing if men's IQ range were made to align more closely with the female one. I disagree. I suggested some of the consequences we should expect if the male IQ distribution were to compress towards the female one, and I managed to confirm many of them, so it does look as though it is already a problem.

This new study suggests a shift of the whole distribution downwards, which could actually be happening in addition to redistribution, making it even worse. The study doesn't seem to mention distribution, but it does show that the drop in mean IQ must be caused by environmental or lifestyle factors, both of which have changed in recent decades.

IQ distribution matters more than the mean. Those at the very top of the range contribute many times more to progress than those further down, and the magnitude of that contribution is very dependent on those last few IQ points. I can verify that from personal experience. I have a virus that causes occasional periods of nerve inflammation, and as well as causing problems with my peripheral motor control, it seems to strongly affect my thinking ability and comprehension. During those periods I generate very few new ideas or inventions and far fewer worthwhile insights than when I am on form. I sometimes have to wait until I recover before I can understand my own previous ideas and add to them. You'll see it in the number (and probably quality) of blog posts, for example. I really feel a big difference in my thinking ability, and I hate feeling dumber than usual. Perhaps people who have always had the reduced IQ don't notice, never having experienced being smarter, but my own experience is that perceptive ability and level of consciousness are strong contributors to personal well-being.
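To see how sensitive that far tail is, here is a minimal sketch using the standard IQ calibration (mean 100, SD 15) and an illustrative cutoff of 145. The 3-point mean drop matches the headline figure; the 1-point SD compression is an invented example of the redistribution effect, not a number from the study:

```python
# Minimal sketch: how small shifts in the mean or spread of a normal IQ
# distribution change the size of the far right tail. The threshold and
# the SD compression are illustrative numbers, not taken from the study.
from scipy.stats import norm

def fraction_above(mean, sd, threshold=145):
    """Fraction of a normal(mean, sd) population above the threshold."""
    return norm.sf(threshold, loc=mean, scale=sd)

print(f"baseline (100, 15):      {fraction_above(100, 15):.4%}")  # ~0.13%
print(f"mean down 3 (97, 15):    {fraction_above(97, 15):.4%}")   # roughly halved
print(f"compressed SD (100, 14): {fraction_above(100, 14):.4%}")  # also roughly halved
print(f"both (97, 14):           {fraction_above(97, 14):.4%}")   # ~4x fewer
```

A 3-point drop in the mean roughly halves the population above 145, squeezing the spread by a single point does similar damage, and the two effects together cut the top tail to under a quarter of its former size, which is the point: tiny shifts in the distribution produce huge changes at the extremes.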

As for society as a whole, AI might come to the rescue, at least in part. Just in time perhaps, since we're creating the ability for computers to assist us and up-skill us just as the numbers of people in the very highest IQ ranges drop. It's a bit like watching a new generation come on stream and take the reins as we age and take a back seat. On the other hand, it brings forward the time when computers overtake humans, humans become more dependent on machines, and machines become more of an existential threat as well as our babysitters.


AI that talks to us could quickly become problematic

Google's making the news again, adding evidence to the unfortunate stereotype of the autistic IT nerd who barely understands normal people, and being astonished at a backlash that normal people would all easily have predicted. (I'm autistic and work mostly in IT too, and I'm well used to the stereotype, so it doesn't bother me; in fact it's a sort of 'get out of social interactions free' card.) Last time it was Google Glass, where it apparently didn't occur to them that people may not want other people videoing them without consent in pubs and changing rooms. This time it is Google Duplex, which makes phone calls on your behalf to arrange appointments, using a voice that is almost indistinguishable from a human's. You could save time making an appointment with a hairdresser, apparently, so the Googlanders decided it must be a brilliant breakthrough and expected everyone to agree. They didn't.

Some of the objections have been about ethics, e.g. that an AI should not present itself as human. Humans have rights and dignity and deserve respectful interactions with other people; an AI doesn't, and should not masquerade as human to acquire such privilege without the knowledge and consent of the other party.

I would be more offended by the presumed attitude of the user. If someone thinks they are so much better than me that they can demand my time and attention without expending any of their own, delegating instead to a few microseconds of processing time in a server farm somewhere, I'll treat them with the contempt they deserve. My response will not be favourable. I am already highly irritated by the NHS using simple voice-interaction messaging to check that I will attend a hospital appointment. The fact that my health is on the line, and that notices at surgeries say I will be banned if I complain on social media, is sufficient blackmail to ensure my compliance, but it still comes at the expense of my respect and goodwill. AI-backed voice interaction with a better voice wouldn't be any better, and if it were asking for more interaction, such as actually booking an appointment, it would be extremely annoying.

In any case, most people don't speak in fully formed, grammatically and logically correct sentences. If you listen carefully to everyday chat, a lot of sentences are poorly pronounced, incomplete, jumbled, full of ums and ers and likes, and they require a great deal of cooperation from the listener to make any sense at all. They also wander off topic frequently. People don't stick to a rigid vocabulary list or a set of nicely selected sentences. A response is likely to include lots of preamble and verbal meandering, which adds ambiguity. The example used in the demo, "I'd like to make a hairdressing appointment for a client", sounds fine until you factor in normal everyday humanity. A busy hairdresser or a lazy receptionist is not necessarily going to cooperate fully. "What do you mean, client?", "404 not found", "piss off google", "oh FFS, not another bloody computer", "we don't do hairdressing, we do haircuts", "why can't your 'client' call themselves then?" and a million other responses are more likely than "what time would you like?"

Suppose though that it eventually gets accepted by society. First, call centers beyond the jurisdiction of your nuisance call blocker authority will incessantly call you at all hours asking or telling you all sorts of things, wasting huge amounts of your time and reducing quality of life. Voice spam from humans in call centers is bad enough. If the owners can multiply productivity by 1000 by using AI instead of people, the result is predictable.

We've seen the conspicuous political use of social media AI already. Facebook might have allowed companies to use very limited and inaccurate knowledge of you to target ads or articles that you probably didn't look at. Voice interaction would be different: it makes a richer emotional connection than text or graphics on a screen. Google knows a lot about you too, and it will know a lot more soon. These big IT companies are also playing with tech to log you on easily to sites without passwords. Some of the gadgets involved might be worn, such as watches, bracelets or rings. They can pick up signals to identify you, but they can also check emotional states such as stress level. Voice gives away emotion too. AI can already tell better than almost all people whether you are telling the truth, lying or hiding something. Tech such as iris scanning can also reveal emotional states, as well as give health clues. Simple photos can reveal your age quite accurately to AI (check out how-old.net). The AI voice sounds human, but it is better than even your best friends at guessing your age, your stress and other emotions, your health, and whether you are telling the truth, and it knows far more about what you like and dislike and what you really do online than anyone you know, including you. It knows a lot of your intimate secrets. It sounds human, but its nearest human equivalent was probably Machiavelli. That's who will soon be on the other side of the call, not some dumb chatbot. Now re-calculate political interference, and factor in the political leanings and social engineering desires of the companies providing the tools. Google and Facebook and the others are very far from politically neutral. One presidential candidate might get full cooperation, assistance and convenient looking the other way, while their opponent might meet rejection and citation of the official rules on non-interference. Campaigns on social issues will also be amplified by AI coupled to voice interaction. I looked at some related issues in a previous blog on fake AI (i.e. fake-news-type issues): https://timeguide.wordpress.com/2017/11/16/fake-ai/

I could but won’t write a blog on how this tech could couple well to sexbots to help out incels. It may actually have some genuine uses in providing synthetic companionship for lonely people, or helping or encouraging them in real social interactions with real people. It will certainly have some uses in gaming and chatbot game interaction.

We are not very far from computers that are smarter than people across a very wide spectrum, and probably not very far from conscious machines with superhuman intelligence. If we can't even rely on IT companies to understand the likely consequences of such obvious stuff as Duplex before they push it, how can we trust them in other upcoming areas of AI development, or even in closer-term tech with less obvious consequences? We simply can't.

There are certainly a few areas where such technology might help us, but most are minor and the rest don't need any deception, and they all come at great cost or real social and political risk, as well as more abstract risks such as threats to human dignity and other ethical issues. I haven't given this much thought yet and I am sure there must be very many other consequences I have not touched on. Google should do more thinking before they release stuff. Technology is becoming very powerful, but we all know that great power comes with great responsibility, and since most people aren't engineers and can't think through all the potential technology interactions and consequences, engineers such as Google's must act more responsibly. I had hoped they'd started, and they said they had, but this is not evidence of that.


Beyond VR: Computer assisted dreaming

I first played with VR in 1983/1984 while working in the missile industry. Back then we didn't call it VR; we just called it simulation, but it was actually more intense than VR, just as proper flight simulators are. Our office was a pair of 10m-wide domes onto which video could be projected, built decades earlier, in the 1950s I think. One dome had a normal floor; the other had a hydraulic platform that could simulate being on a ship. The subject would stand on whichever surface was appropriate and would see pretty much exactly what they would see on a real battlefield. The missile launcher used for simulation was identical to a real one and showed exactly the same image as a real one would. The real missile was not present of course, but its weight was simulated, and when the fire button was pressed, a 140dB bang was injected into the headset while weights and pulleys compensated for the 14kg suddenly vanishing from the shoulder. The experience was therefore pretty convincing, and with the loud bang and suddenly changing weight it was almost as hard to stand steady and keep the system on target as it would be in real life – only the presumed fear and knowledge of the reality of the situation was different.

Back then in 1983, as digital supercomputers had only just taken over from analog ones for simulation, it was already becoming obvious that this kind of computer simulation would one day allow 'computer assisted dreaming'. (That's one of the reasons I am irritated when Jaron Lanier is credited with inventing VR – highly realistic simulators, and the VR ideas that sprang obviously from them, had already been around for decades. At best, all he 'invented' was a catchy name for a lower-cost, lower-quality, less intense simulator. The real inventors were those who made the first-generation simulators long before I was born, and the basic idea of VR had already been very well established.)

'Computer assisted dreaming' may well be the next phase of VR. Today, in conventional VR, people are immersed in a computer-generated world produced by a program (usually) written by others. Via trial and feedback, programmers make their virtual worlds better. As AI and sensor technology continue their rapid progress, this is very likely to change. By detecting the user's emotions, reactions, gestures and even thoughts and imagination, AI will before long be able to produce a world in real time that depends on those thoughts, imagination and emotions, rather than putting the user in a pre-designed virtual world. That world would depend largely on your own imagination, upskilled by external AI. You might start off imagining you're on a beach, then the AI might add all sorts of things it knows you might enjoy from previous experiences. As you respond, it picks up on the things you like or don't like, and the scene continues to adapt and evolve, becoming more or less pleasant, exciting or challenging depending on your emotional state, external requirements and what it thinks you want from the experience. It would be very like being in a dream – computer-assisted lucid dreaming, exactly what I wanted to make back in 1983 after playing in that simulator.
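As a minimal sketch of that adapt-and-respond loop (everything here is hypothetical: the emotion-reading function stands in for real sensor fusion of gaze, gesture, biosignals and perhaps thought detection, and the print stands in for a real rendering engine):

```python
# Hypothetical sketch of the adaptive dream loop: sense the user's state,
# nudge the scene toward the experience they seem to want, repeat.
import random

def read_emotional_state():
    # Placeholder: returns invented emotion estimates in [0, 1].
    return {"excitement": random.random(), "comfort": random.random()}

def adapt_scene(scene, state, target_excitement=0.6):
    # Nudge scene parameters toward what the user seems to want.
    if state["excitement"] < target_excitement:
        scene["event_rate"] *= 1.1   # inject more of what has excited them before
    else:
        scene["event_rate"] *= 0.95  # ease off before excitement turns into stress
    scene["event_rate"] = min(max(scene["event_rate"], 0.1), 10.0)
    return scene

scene = {"setting": "beach", "event_rate": 1.0}
for _ in range(5):                   # in a real system this would run every frame
    state = read_emotional_state()
    scene = adapt_scene(scene, state)
    print(scene, state)              # render(scene) in a real system
```

The point of the sketch is the closed loop: the user's state drives the world, the world changes the user's state, and the AI mediates between them, exactly as in a lucid dream.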

Most people enjoy occasional lucid dreams, where they realise they are dreaming and can then decide what happens next. Making VR do exactly that would be better than being trapped in someone else’s world. You could still start off with whatever virtual world you bought, a computer game or training suite perhaps, but it could adapt to you, your needs and desires to make it more compelling and generally better.

Even in shared experiences like social games, experiences could be personalised. Often all players need to see the same enemies in the same locations in the same ways to make it fair, but that doesn’t mean that the situation can’t adapt to the personalities of those playing. It might actually improve the social value if each time you play it looks different because your companions are different. You might tease a friend if every time you play with them, zombies or aliens always have to appear somehow, but that’s all part of being friends. Exploring virtual worlds with friends, where you both see things dependent on your friend’s personality would help bonding. It would be a bit like exploring their inner world. Today, you only explore the designer’s inner world.

This sort of thing would be a superb development and creativity tool. It could allow you to explore a concept you have in your head, automatically feeding in AI upskilling to amplify your own thoughts and ideas, showing you new paths to explore and helping you do so. The results would still be extremely personal to you, but you on a good day. You could accomplish more, have better visions, imagine more creative things, do more with whatever artistic talent you have. AI could even co-create synthetic personas, make virtual friends you can bond with, share innermost thoughts with, in total confidence (assuming the company you bought the tool from is trustworthy and isn’t spying on you or selling your details, so maybe best not to buy it from Facebook then).

And it would have tremendous therapeutic potential too. You could explore and indulge both enjoyable and troublesome aspects of your inner personality, to build on the good and alleviate or dispel the bad. You might become less troubled, less neurotic, more mentally healthy. You could build your emotional and creative skills. You could become happier and more fulfilled. Mental health improvement potential on its own makes this sort of thing worth developing.

Marketers would obviously try to seize control as they always do, and advertising is already adapting to VR and will continue into its next phases of development. Your own wants and desires might help guide the ‘dreaming’, but marketers will inevitably have some control over what else is injected, and will influence algorithms and AI in how it chooses how to respond to your input. You might be able to choose much of the experience, but others will still want and try to influence and manipulate you, to change your mindset and attitudes in their favour. That will not change until the advertising business model changes. You might be able to buy devices or applications that are entirely driven by you and you alone, but it is pretty certain that the bulk of products and services available will be at least partly financed by those who want to have some control of what you experience.

Nevertheless, computer-assisted dreaming could be a much more immersive and personal experience than VR, more like an echo of your own mind and personality than an external vision; more your own creation, less someone else's. In fact, echo sounds a better term too. Echo reality, ER, or maybe personal reality, pereal, or mental echo, ME. Nah, maybe we need Lanier to invent a catchy name again; he is good at that. That 1983 idea could soon become reality.


Emotion maths – A perfect research project for AI

I did a maths and physics degree, and even though I have forgotten much of it after 36 years, my brain is still oriented in that direction and I sometimes have maths dreams. Last night I had another, in which I realized I've never heard of a branch of mathematics for describing emotions or emotional interactions. As the dream progressed, it became increasingly obvious that the part of maths most suited to doing so would be field theory, and given the multi-dimensional nature of emotions, tensor field theory would be ideal. I'm guessing tensor field theory isn't on the psychology syllabus at most universities; I could barely cope with it on a maths syllabus. However, I note that one branch of Google's AI R&D resulted in a software framework called TensorFlow, presumably designed specifically for such multidimensional problems, and presumably being used to analyse marketing data. Again, I haven't yet heard any mention of it being used for emotion studies, so this is clearly a large hole in maths research that might be perfectly filled by AI. It would be fantastic if AI could deliver a whole new branch of maths. AI got into trouble inventing new languages, but mathematics is really just a way of describing logical reasoning about numbers or patterns in a formal language that is self-consistent and reproducible. It is ideal for describing scientific theories, engineering and logical reasoning.

Checking Google today, there are a few articles out there describing simple emotional interactions using superficial equations, but nothing with the level of sophistication needed.

https://www.inc.com/jeff-haden/your-feelings-surprisingly-theyre-based-on-math.html

An example from this:

Disappointment = Expectations – Reality

is certainly an equation, but it is too superficial and incomplete. It takes no account of how you feel otherwise – whether you are jealous or angry or in love or a thousand other things. So there is some discussion on using maths to describe emotions, but I’d say it is extremely superficial and embryonic and perfect for deeper study.
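As a toy illustration of the missing context, here is one hedged way the bare equation could be made state-dependent. The modifier emotions and weights are invented purely for illustration; a real model would have to be fitted to data:

```python
# Illustrative only: the expectation gap scaled by the rest of the
# emotional state. Modifiers and weights are invented for this sketch.
def disappointment(expectation, reality, mood):
    """Expectation gap, modulated by the wider emotional state."""
    gap = max(expectation - reality, 0.0)  # no disappointment if reality exceeds hope
    # Invented modifiers: anger amplifies the sting, contentment cushions it.
    modifier = 1.0 + 0.5 * mood.get("anger", 0.0) - 0.4 * mood.get("contentment", 0.0)
    return gap * max(modifier, 0.0)

print(disappointment(0.9, 0.4, {"anger": 0.8}))        # 0.5 * 1.4  = 0.70
print(disappointment(0.9, 0.4, {"contentment": 0.9}))  # 0.5 * 0.64 = 0.32
```

Even this trivial extension shows the original equation is really a slice through a much higher-dimensional object, which is exactly why field theory looks like the right toolbox.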

Emotions often behave like fields. We use field-like descriptions in everyday expressions – envy is a green fog, anger is a red mist or we see a beloved through rose-tinted spectacles. These are classic fields, and maths could easily describe them in this way and use them in equations that describe behaviors affected by those emotions. I’ve often used the concept of magentic fields in some of my machine consciousness work. (If I am using an optical processing gel, then shining a colored beam of light into a particular ‘brain’ region could bias the neurons in that region in a particular direction in the same way an emotion does in the human brain. ‘Magentic’ is just a playful pun given the processing mechanism is light (e.g. magenta, rather than electrics that would be better affected by magnetic fields.

Some emotions interact and some don’t, so that gives us nice orthogonal dimensions to play in. You can be calm or excited pretty much independently of being jealous. Others very much interact. It is hard to be happy while angry. Maths allows interacting fields to be described using shared dimensions, while having others that don’t interact on other dimensions. This is where it starts to get more interesting and more suited to AI than people. Given large databases of emotionally affected interactions, an AI could derive hypotheses that appear to describe these interactions between emotions, picking out where they seem to interact and where they seem to be independent.
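Here is a minimal sketch of that data-driven step, using synthetic data in place of the real survey or sensor data such a study would need; the emotion names and the correlation threshold are illustrative:

```python
# Sketch: given a (people x emotions) matrix of reported intensities,
# flag which emotion pairs co-vary and which look orthogonal.
# The data is synthetic; a real study would use survey or sensor data.
import numpy as np

rng = np.random.default_rng(0)
n = 1000
calm    = rng.uniform(0, 1, n)
jealous = rng.uniform(0, 1, n)                 # independent of calm by construction
angry   = 1 - calm + 0.1 * rng.normal(size=n)  # strongly anti-correlated with calm
ratings = np.column_stack([calm, jealous, angry])
names = ["calm", "jealous", "angry"]

corr = np.corrcoef(ratings, rowvar=False)
for i in range(len(names)):
    for j in range(i + 1, len(names)):
        verdict = "interact" if abs(corr[i, j]) > 0.3 else "look independent"
        print(f"{names[i]} / {names[j]}: r = {corr[i, j]:+.2f} -> {verdict}")
```

A real version would of course need far subtler statistics than pairwise correlation, but the shape of the job, mining interaction structure out of a big emotion dataset, is exactly what AI is good at and humans are not.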

Not being emotionally involved itself, an AI is better suited to drawing such conclusions. A human researcher might find it hard to draw neat boundaries around emotions and describe them so clearly. It may be obvious that being both calm and angry doesn't easily fit with human experience, but what about being terrified and happy? Terrified sounds very negative at first glance, so first impressions aren't favorable for twinning them, but when you think about it, that pretty much describes the entire roller-coaster and extreme sports markets. Many other emotions interact somewhat, and deriving the equations would be extremely hard for humans but, I'm guessing, relatively easy for AI.

These kinds of equations fall very easily into tensor field theory, with types and degrees of interactions of fields along alternative dimensions readily describable.
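Purely as an illustration of the kind of object involved (every symbol here is invented for the sketch), an emotional state could be written as a field over context and time, with linear and nonlinear couplings between emotions:

```latex
% Illustrative only: E^i(x,t) is the intensity of emotion i in context x at time t.
% A couples pairs of emotions linearly, B captures nonlinear interactions,
% S is external stimulus. Repeated indices are summed.
\frac{\partial E^i}{\partial t}
  = A^i{}_j \, E^j
  + B^i{}_{jk} \, E^j E^k
  + S^i(x, t)
```

Orthogonal emotions would simply have zero entries in the coupling tensors; interacting ones would not.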

Some interactions act like transforms. Fear might transform the ways that jealousy is expressed. Love alters the expression of happiness or sadness.
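A hedged sketch of one such transform, with invented components and numbers: fear acting as a linear operator on how jealousy is expressed.

```python
# Illustrative only: an emotion interaction as a linear transform, the kind
# of object tensor field theory would let us define properly.
import numpy as np

# Jealousy expression as a vector: [suspicion, hostility, withdrawal].
jealousy = np.array([0.6, 0.3, 0.1])

# Hypothetical "fear" operator: damps hostility, amplifies withdrawal.
fear = np.array([
    [1.0, 0.0, 0.0],
    [0.0, 0.3, 0.0],
    [0.2, 0.2, 1.5],
])

print(fear @ jealousy)  # jealousy as expressed while afraid
```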

Some things seem to add or subtract, others multiply, others act more like exponentials, partial derivatives or integrals, and others interact periodically, instantly or over time. Maths seems to hold innumerable tools for describing emotions, but first-person involvement and experience make it extremely difficult for humans to derive such equations. The example equation above is easy to understand, but there are so many emotions and so many different circumstances that this entire problem looks like it was designed to challenge a big data-mining plant. Maybe a big company that is involved in AI, big data and advertising, and that knows about tensor field theory, would be the perfect research candidate. Google, Amazon, Facebook, Samsung… it has all the potential for a race.

AI, meet emotions. You speak different languages, so you’ll need to work hard to get to know one another. Here are some books on field theory. Now get on with it, I expect a thesis on emotional field theory by end of term.