Quantum computing is the key to consciousness

I found this an interesting read by Tim Palmer.

Going to read his book: The Primacy of Doubt

Something resonated with me.

Yes, an interesting read. And… of course it is possible to build a physical device through which sentience and self-awareness emerges. After all, the human brain is just such a device. The only difference is our brains evolved through countless trillions of experiments in nature over billions of years. There is nothing magical about it, though brains are almost inconceivably complex.

One necessary property of a device that could become sentient is that it must have a component of fundamental non-determinism, which is inherent in quantum computers. Our brains are composed of an enormous number of neurons that, individually, show patterns of apparently random firings. In larger collectives or regions, the firings can become coherent or, one might even suggest, entangled. When that happens, a thought might emerge in “mind.” While this is a tremendous simplification, it suggests how apparent randomness can spontaneously organize into non-random patterns.

AutoGPT can do reasoning, so it is sentient by my standard. However, I doubt that it is conscious.

I interacted with GPT4 quite a bit to test the hypothesis of the potential for sentience. That included finding out about elements of “non-determinism” in its responses. Here is its basic response:

As an AI language model, I am designed to generate diverse and contextually relevant responses based on the input I receive. While my underlying architecture is deterministic, my training and generation processes introduce non-determinism to the responses.

The non-determinism arises from several factors:

  1. Sampling methods: When generating responses, I use various sampling techniques like “top-k sampling” or “nucleus sampling” to introduce randomness in the response generation process. These methods help me to explore different possibilities and produce more engaging and varied responses.
  2. Input encoding: My understanding of the input is based on the tokenization and encoding of the text. Sometimes, slight variations in encoding can lead to different responses.
  3. Model weights: Although my core architecture is based on the same set of pre-trained weights, during the fine-tuning process, the weights might be updated in response to new data, leading to slightly different outcomes.
  4. Inherent ambiguity: Natural language can be ambiguous, and there can be multiple valid interpretations of a given input. As a result, I might generate different responses based on these different interpretations.

In summary, the non-determinism in my responses is intentional and arises from the combination of these factors. The goal is to provide diverse and contextually appropriate responses that can better engage with users and address their queries effectively.
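As an aside, the “top-k” and “nucleus” sampling the model mentions in point 1 can be sketched in a few lines. This is a toy illustration over a made-up four-token distribution, not the actual implementation used in any production model:

```python
import random

def top_k_filter(probs, k):
    """Keep only the k highest-probability tokens, then renormalize."""
    top = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)[:k]
    total = sum(p for _, p in top)
    return {tok: p / total for tok, p in top}

def nucleus_filter(probs, p_threshold):
    """Keep the smallest set of top tokens whose cumulative probability
    reaches p_threshold (a.k.a. top-p sampling), then renormalize."""
    ranked = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)
    kept, cum = [], 0.0
    for tok, p in ranked:
        kept.append((tok, p))
        cum += p
        if cum >= p_threshold:
            break
    total = sum(p for _, p in kept)
    return {tok: p / total for tok, p in kept}

def sample(probs, rng):
    """Draw one token at random according to the filtered distribution."""
    r, cum = rng.random(), 0.0
    for tok, p in probs.items():
        cum += p
        if r < cum:
            return tok
    return tok  # guard against floating-point rounding at the top end

# Hypothetical next-token probabilities for illustration only
probs = {"the": 0.5, "a": 0.3, "an": 0.15, "zebra": 0.05}
rng = random.Random()
print(sample(top_k_filter(probs, 2), rng))      # only "the" or "a" are possible
print(sample(nucleus_filter(probs, 0.9), rng))  # "the", "a", or "an" are possible
```

Either filter trims the unlikely tail of the distribution, and the final `rng.random()` draw is where the run-to-run non-determinism actually enters.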

Further, I said: I ask this question because I am aware that any machine, human, or conscious entity requires an underlying element of non-determinism for the possibility of sentience to arise. While that form of non-determinism might be considered to require a quantum mechanical component, humans are not yet sufficiently advanced in their understanding of quantum mechanics to begin to approach such subtle questions. In fact, every form of non-determinism involves an element of quantum uncertainty. What do you think?

Its response:
Your observation about the potential relationship between non-determinism and sentience is an interesting one. It’s true that many theories on consciousness and the nature of the mind involve the concept of non-determinism, and some even invoke quantum mechanics to account for certain aspects of consciousness or free will.

However, the non-determinism present in AI models like me is not directly related to quantum mechanics. Instead, it arises from the factors I mentioned earlier, such as sampling methods, input encoding, model weights, and inherent ambiguity in natural language. These sources of non-determinism are more related to the practical implementation of AI systems and their ability to generate diverse and contextually relevant responses.

As for whether non-determinism is a necessary component for the emergence of sentience, that is still a subject of much debate among philosophers, neuroscientists, and AI researchers. While some argue that true sentience requires a level of unpredictability and the ability to make choices, others contend that deterministic processes could give rise to complex cognitive behavior that appears sentient.

It is important to note that AI models like me, even with non-deterministic behavior, are still fundamentally different from conscious entities. My responses are generated based on patterns learned from large-scale data, and I lack self-awareness or the ability to experience emotions or subjective experiences.

Quantum mechanics might play a role in our understanding of consciousness, but the relationship between the two is still a topic of ongoing research, and its implications for AI are not yet clear.

The conversation was quite lengthy, exploring the emergence of sentience when the necessary conditions of complexity, non-determinism, and quantumness are met (such as in the human brain), so I won’t include the rest. My conclusion was that GPT4 absolutely cannot pass the Turing test, and there is little possibility for sentience to emerge in the present model. One glaring failure of GPT4 is that if it doesn’t know something, it will make it up, 100% of the time. This is a property that is very easy to test for. GPT5 may be a different story, as it is projected to be an artificial general intelligence (AGI) that may be able to pass a good Turing test. Meanwhile, AI developers will really have to solve the problem of fabricated information, or lying, which greatly reduces these models’ usefulness.

Interesting point on AI and lying.

I don’t agree that consciousness requires non-determinism, because it’s impossible to distinguish between a non-deterministic system and a deterministic one with a good PRNG.

However, consciousness requires second-order knowledge, which is very limited in GPT. Funnily enough, it stops making things up with certain system prompts.

It’s true that a deterministic generator (PRNG) can be designed to pass all known statistical tests for randomness. We use true random number generators for mind–matter interaction (MMI) measurements because they are much more responsive to mental influence than PRNGs. Consciousness and mental influence on physical measurements are closely related. The human brain contains about 100 billion neurons, each an entropy source, with countless interconnections that allow highly complex interactions between mind and brain.
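The point that a good PRNG passes statistical randomness tests is easy to demonstrate with the simplest test in the NIST SP 800-22 suite, the frequency (monobit) test. A minimal sketch using Python’s deterministic Mersenne Twister generator (not the full NIST test suite):

```python
import math
import random

def monobit_test(bits):
    """NIST-style frequency (monobit) test. Returns a p-value; values
    well above 0.01 are consistent with a fair random bit source."""
    n = len(bits)
    s = sum(1 if b else -1 for b in bits)   # +1 for each 1-bit, -1 for each 0-bit
    s_obs = abs(s) / math.sqrt(n)
    return math.erfc(s_obs / math.sqrt(2))

# A fully deterministic PRNG with a fixed seed...
rng = random.Random(42)
bits = [rng.getrandbits(1) for _ in range(100_000)]

# ...still produces bit streams that look statistically random.
p_value = monobit_test(bits)
print(f"monobit p-value: {p_value:.3f}")
```

A heavily biased stream (e.g. all ones) fails this test immediately, while the seeded PRNG output is indistinguishable from true randomness as far as this test can tell, which is exactly the indistinguishability argument above.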

The question is whether MMI is really needed for mind. Mind can cause it, but we can’t yet be sure that only mind causes similar phenomena, nor that MMI is necessary for a conscious state.