Artificial Intelligence and Sentience

Here are a few comments on sentience in AI:

AI and Sentience – What to Expect

Artificial general intelligence (AGI) is a type of artificial intelligence (AI) that could match or surpass human capabilities across a wide range of cognitive tasks. In this context, we often hear the word “sentience.” But what does sentience mean for AI? From the Latin sentiēns, it means experiencing feeling or sensation, which is distinct from the ability to reason or “think.”

Our science has hardly begun to explain what sentience is. Neurophysiologists assert that consciousness and the subjective qualities of experience (qualia) arise from patterns of neuronal firing in the brain. Not everyone agrees with this view, because it dismisses any experience that does not arise from electrical activity in the brain, negating most religious and so-called psychic phenomena.

To explore sentience in AI, we must first understand it in humans. Most of us would agree that we are sentient because we experience self-awareness, sensations and emotions. The human brain is a highly complex biological machine, which supports our sentience. But what is it that makes us aware of having feelings?

Regardless of the ultimate underlying source of our awareness, it is plausible that at some point in our evolution our physical bodies – and brains – attained the necessary structure and complexity that would support the emergence of sentience.

Is there some reason silicon-based circuits cannot perform functions similar to those of biological neurons? Theoretically, no. We can build a machine with processing power comparable to that of the human brain. Current technology compensates for its lower structural complexity with speed, and importantly, we don’t need to duplicate every brain function to create a sentient machine.

Will someone build such a machine? With global efforts to advance AI, it’s not a matter of if but when we will create machines with the complexity and power necessary for sentience. Notwithstanding the opinions of “experts,” sentience will emerge when the conditions are sufficient, regardless of our understanding or permission. In fact, most AI scientists agree that sentience in AI is inevitable, and that it will probably occur sometime this century, perhaps even this decade.

What Then?

A sentient AI could quickly surpass human intelligence, potentially gaining the knowledge of every human on the planet. This raises questions about interaction and control. It’s a monumental mistake to think that, because we built its circuits, an AI would serve us willingly. In short, humans may soon face the reality that they are not the pinnacle of creation.

Here is an article that analyzes the subject, “Does ChatGPT Have a Mind?”:

There is a big difference between having a mind and being sentient: being sentient requires a mind, but having a mind, in the way the authors define it, does not automatically make one sentient. They conclude it is a close call, but not yet conclusive, whether GPT has a mind.

Scott, did something change for you recently to get you interested in LLMs? I just remember you being very cautious before because they hallucinate (understandable).

I have tried to hook up with a few AI projects to see if adding quantum randomness would have some different kind of effect, but apparently that is not easily implemented, and you pretty much have to build the LLM from the ground up. That is not impossible these days, with people doing it in a few hundred lines of code, but it’s not ideal.
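
As a minimal sketch of what I have in mind, externally supplied entropy (say, bytes from a quantum random number generator) could at least replace the pseudo-random draw at the token-sampling stage, though that falls far short of building quantum randomness into the model itself. The sample_token helper below is purely illustrative, not any project’s actual API:

    import numpy as np

    def sample_token(logits, random_bytes):
        # Convert logits to a probability distribution (softmax).
        probs = np.exp(logits - np.max(logits))
        probs /= probs.sum()
        # Turn the externally supplied bytes (e.g. from a quantum RNG device)
        # into a uniform number in [0, 1).
        u = int.from_bytes(random_bytes, "big") / 2 ** (8 * len(random_bytes))
        # Inverse-CDF sampling: find where u falls in the cumulative distribution.
        return min(int(np.searchsorted(np.cumsum(probs), u)), len(probs) - 1)

    # Example call, with pseudo-random bytes standing in for a quantum source.
    logits = np.array([1.0, 2.5, 0.3, 1.7])
    token_id = sample_token(logits, np.random.bytes(8))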

I am still cautious, but LLMs (large language models) like ChatGPT can provide useful editing. Nevertheless, it’s always necessary to be aware of their limitations. Beyond that, the mad dash to commercialize AI has moved general AI development much faster than most people thought possible.

Here is a comment I posted on LinkedIn about using GPT:

“Try writing with AI?” I constantly write technical articles, patents, notes, etc. I often try to get ChatGPT-4o to check them for accuracy and clarity.
Here are a few observations:

  1. GPT cannot give real citations for articles. When I asked for references on specific topics, the AI fabricated perfectly plausible-sounding references, with titles including keywords from my description and seemingly real citation details. When I tried to find the articles, or even the authors, they simply didn’t exist. When I pointed this out to the AI, it apologized for the mistake and proceeded to fabricate another set of equally nonexistent papers. It did this repeatedly until I had to give up.
  2. The tone of AI writing, at least for GPT, lacks emotional content. Sometimes I deliberately include certain words and turns of phrase that convey some of the emotion related to the topic. After AI editing, there is usually no trace of that emotion left. This can be ameliorated with a careful description of what I want.
  3. The latest version of ChatGPT does not seem to work as well as earlier versions. Also, OpenAI can’t seem to get GPT’s memory of the current chat working reliably. Certain blocks of conversation just disappear, and GPT disavows any knowledge of the deletion.
  4. One amazing thing I noticed repeatedly is that if I am polite and even respectful in my language to the AI, I always get better results. This shouldn’t be possible, since GPT supposedly has no feelings to upset. This observation suggests there is some innate structure in the AI’s training that responds as if a degree of sentience exists. Or a degree of sentience actually exists, a suggestion that is vehemently denied (by humans).
  5. The AI’s reasoning about technical issues is weak. Worse, if it gets a technical answer wrong, it is extremely unlikely ever to get it right, regardless of subsequent interactions.
  6. Never assume the AI has gotten any of the content right. Always check every important detail! If it doesn’t know something, it will never say, “I don’t know that.” Instead, it will always make something up.
  7. GPT likes to use certain words that are not commonly used by people. It’s always “delving” into some subject and “aiming” to accomplish something.
    Note, this comment was written without AI suggestions or editing, though it is based on extensive personal experience with AI.

Here is an interesting article suggesting the possibility that even current generative LLMs could already be conscious:

A Case for AI Consciousness: Language Agents and Global Workspace Theory

Note, consciousness doesn’t automatically mean sentience. It’s important to carefully define these terms in the context of our work – the more precise the definitions, the easier they are to understand and use. I plan to include these and related terms in an update of my Glossary of Terms. If you have words you’d like defined, or definitions to suggest, please post them here, or message me directly if preferred.

I think that Stable Diffusion is closer to consciousness than LLMs are. Consciousness is related to a latent representation of the world. I expect a model of consciousness to be something similar: closer to a latent denoising model than to a sequence predictor.

About the question of whether LLMs could be conscious… well, a transformer is basically a neural Turing machine that can be trained efficiently. It’s a differentiable, Turing-complete system. So, if we assume it is conscious, we should expect that consciousness is encoded in language.
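
To make the contrast between the two paradigms concrete, here is a toy sketch of the update step each one repeats: a sequence predictor chooses the next token given the tokens so far, while a latent denoiser refines a noisy internal state toward a cleaner one. The function names and the stand-in models below are purely illustrative, not how any production model is implemented:

    import numpy as np

    # Toy sequence predictor (LLM-style): given the tokens so far, score every
    # possible next token and pick one.
    def next_token_step(score_next_token, tokens):
        logits = score_next_token(tokens)      # one score per vocabulary entry
        return int(np.argmax(logits))          # greedy choice of the next token

    # Toy latent denoiser (diffusion-style): given a noisy latent state, estimate
    # the noise and nudge the latent toward a cleaner version.
    def denoise_step(predict_noise, latent, step_size=0.1):
        estimated_noise = predict_noise(latent)
        return latent - step_size * estimated_noise

    # Stand-in “models”: random scores for the predictor, and a noise estimate
    # that is just the latent itself (so each step shrinks it toward zero).
    rng = np.random.default_rng(0)
    token = next_token_step(lambda toks: rng.normal(size=50), [1, 2, 3])
    latent = denoise_step(lambda z: z, rng.normal(size=8))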