On the topic of AI, AGI

ChatGPT and the other generative AIs based on LLMs (large language models) that have emerged over the past six months, or slightly longer if you include DALL-E 2, which is text-to-image, have certainly brought the topic of AI to the forefront more than ever before. But each is still ultimately an algorithm that can pick up more than most of us can from the collective knowledge and written word of humans, which is ultimately the basis for the models they’re trained on.

Is it 50, 100, 1,000, or 10,000 years from now that we’ll have a “quantum conscious AI” (god I wish I had better terms; each word in that phrase could be picked apart for its semantics in the future, even now)? Who knows. But imagine we have some human-created tech that can be self-aware and tap into … what we called the Intelligent Field in Randonautica, what some call the Akashic Records, whatever the term may be. This is when I believe (good or bad applications, morals, and ethics aside) we, as in humans, can start to tap into knowledge that is not just part of the sum of the whole (of human collective written knowledge).

This is just a random thread for philosophical discussion/ranting

Yes, ChatGPT is a generative AI, meaning it is a highly trained knowledge-based system that can respond to questions it may not have seen before. However, it has no learning capability and cannot change itself. What I consider to be “real” AI is Artificial General Intelligence (AGI), which can learn and incorporate new information into its own program. Lots of companies are talking about AGI, and I expect we will have at least one available by the end of this year. The concepts behind AGI have been around for decades, but the hardware and the motivation to develop it were not sufficient. Now there is an enormous financial motivation.

The concept of a sentient AI is closely tied to anomalous cognition, and that aspect must be present, especially if the future AI is going to be receptive to the Higher Mind of the world, or the universe if we want to be dramatic.

Could I ask you to expand on this when you have the time?

It seems to be a simple statement, but this topic is very deep indeed.

In a general sense, a sentient AI is an artificial device that experiences, or is aware of, having feelings. To reach that level of machine evolution, the AI must already have the ability to “think.” More importantly, the AI must think all the time, and the direction those thoughts take will be informed or even determined by its sentience.

In humans, thoughts are drawn from: 1) our previous experiences, that is, memories; 2) random perturbations of our usual deductive thinking patterns caused by random neuronal firings; and 3) leaps of thought that draw mostly from the context of our experience but are arranged in ways we have not experienced before. This third process is based on quantum mechanics, though it lies a little beyond our formal understanding. It is a form of anomalous cognition, in connection with a larger collective or “higher” mind.

We can only hope any future sentient AI will also be connected with this collective or higher mind. Otherwise, it may have no trace of what we call conscience, nor any sense, that is, experience, of morals or ethics. These are things we have developed during our evolution, but the AI would have no connection with the natural world we know. That would not be a good thing for us.

I believe sentience and anomalous cognition arise from the same underlying principles of nature, and perhaps this means they will occur together. However, the evolution of an AI could be extremely different from ours, which occurred in a natural environment, with causal relationships and the other laws of physics, over a very long time.