Non-batty consciousness

Have you read the paper ‘What is it like to be a bat?’? It is an interesting example of philosophy that is commonly read by philosophy students. However, it illustrates one of the big problems with philosophy: in its desire to assign definitions to make things easier to discuss, it can sometimes exclude perfectly valid examples.

While laudably trying to get a handle on what consciousness is, the second page of that paper asserts that

“… but fundamentally an organism has conscious mental states if and only if there is something that it is like to be that organism – something it is like for the organism. We may call this the subjective character.”

Sounds OK?

No, it’s wrong.

Actually, I didn’t read any further than that paragraph. The rest of the paper may be excellent. It is just that statement I take issue with here.

I understand what it is saying, and why, but the ‘only if’ is wrong. There does not have to be something that it is like to be an organism for consciousness to exist. I would agree it is true of the bat, but not of consciousness generally, so although much of the paper might be correct because it discusses bats, that assertion about the broader nature of consciousness is incorrect. It would have been better to include a phrase limiting it to human or bat consciousness; had it done so, I’d have had no objection. The author has essentially stepped briefly (and unnecessarily) outside the boundary conditions for that definition. It is probably correct for all known animals, including humans, but it is possible to make a synthetic organism or an AI that is conscious for which the assertion would not be correct.

The author of the paper recognizes the difficulty in defining consciousness for good reason: it is not easy to define. In our everyday experience of being conscious, it covers a broad range of things, but the process of defining necessarily constrains and labels those things, and that’s where some things can easily go unlabeled or left out. In a perfectly acceptable everyday (and undefined) understanding of consciousness, at least one manifestation of it could be thought of as the awareness of awareness, or the sensation of sensing, which could notionally be implemented by a sensing circuit with a feedback loop.
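As a purely illustrative toy (my own sketch, not drawn from the paper or from the blogs linked below), the "sensation of sensing" idea can be caricatured as two loops: a first-order sensor that reads an external signal, and a second-order monitor that observes the sensor's own activity and feeds that observation back into the next reading. All class and variable names here are hypothetical:

```python
# Toy feedback-loop caricature of "awareness of awareness":
# a Sensor reads the world; a SelfMonitor senses the sensing,
# and its state is fed back into the sensor's next input.

class Sensor:
    """First-order loop: responds to the external signal."""
    def __init__(self):
        self.last_reading = 0.0

    def sense(self, signal):
        self.last_reading = signal
        return signal


class SelfMonitor:
    """Second-order loop: observes the sensor's activity, not the world."""
    def __init__(self, sensor, feedback_gain=0.1):
        self.sensor = sensor
        self.feedback_gain = feedback_gain
        self.awareness = 0.0  # state tracking the sensor's own activity

    def step(self, external_signal):
        # The monitor's previous state is mixed into the next input,
        # closing the feedback loop.
        reading = self.sensor.sense(
            external_signal + self.feedback_gain * self.awareness
        )
        self.awareness = reading  # "sensing the sensing"
        return self.awareness


monitor = SelfMonitor(Sensor())
trace = [monitor.step(s) for s in [1.0, 0.5, 0.0]]
```

This is, of course, nothing like a claim that such a loop *is* conscious; it only shows that the everyday notion of awareness-of-awareness is mechanistically expressible, which is all the argument above needs.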

That alone (and there may be many other potential forms of consciousness) includes large classes of potential consciousnesses that the assertion would not cover. The assertion assumes that consciousness is static (i.e. that it stays in place, resident in the organism) and limited (contained within a shell), whereas it is possible to make a consciousness that is mobile and dynamic, transient or periodic, and such a consciousness would not be covered by the assertion.

In fact, using that subset of potential consciousness described by awareness of awareness, or experiencing the sensation of sensing, I wrote a blog post describing how we might create a conscious machine:

Biomimetic insights for machine consciousness

Such a machine is entirely feasible and could be built soon – the fundamental technology already exists, so no new invention is needed.

It would also be possible to build another machine that is not static, but that emerges intermittently in various forms in various places, so is neither static, continuous nor contained. I describe an example of that in a 2010 blog post which, although not conscious in this case, could be if the IT platforms it runs on were of a different nature (I do not believe a digital computer can become conscious, but many future machines will not be digital):

https://timeguide.wordpress.com/2010/06/16/consciousness/

That example uses a changing platform of machines, so is quite unlike an organism with its single brain (or two in the case of some dinosaurs). Such a consciousness would have a different ‘feel’ from moment to moment. With parts of it turning on and off all over the world, any part of it would only exist intermittently, and yet collectively it would still be conscious at any moment.

Some forms of ground-up intelligence will contribute to the future smart world. Some elements of that may well be conscious to some small degree, but like simple organisms, we will struggle to define consciousness for them:

Ground up data is the next big data

As we proceed towards direct brain links in pursuit of electronic immortality and transhumanism, we may even change the nature of human consciousness. This blog post describes a few changes:

Future AI: Turing multiplexing, air gels, hyper-neural nets

Another platform that could be conscious, and could host many different forms of consciousness, perhaps even in parallel, is a smart yoghurt:

The future of bacteria

Smart yoghurt could be enormously superhuman, in theory perhaps a billion times smarter than a human. It could be a hive mind with many minds that come and go, changing from instance to instance, sometimes individual, sometimes part of ‘the collective’.

So really, there are very many forms in which consciousness could exist. A bat has one of them, humans have another. But when we talk about the future world, with its synthetic biology, artificial organisms, AIs, robots, and all sorts of hybrids, we should be very careful not to fall into the trap of asserting that all consciousness is like our own. Actually, most of it will be very different.
