I don’t know how many of you know about ChatGPT by OpenAI. It’s an amazing language-based AI program. It can write computer programs in many different languages or translate from one language to another. The input is a description of the task and the programming language in which it should be written. ChatGPT can also answer questions and write just about anything you can describe. Its natural language understanding is outstanding, and it has no problem with complex and technical language. There are many videos and written descriptions about how to use it.
ChatGPT is free and available to all, at least at the moment. That will change in the future because Microsoft already has its hooks into it, and they give nothing away for free. Now they need widespread usage to get feedback and identify bugs to improve it.
It’s not what I call true AI because it cannot evolve or change its own programming, but it does a great job with language of all kinds and can answer written questions on almost any subject. It can even translate between various spoken languages.
Yes, when I use it, it stores conversations by topic; they show up as a list on the left side of my desktop screen, and I can recall and continue them. I didn’t have to set anything up; it happened automatically. It seems to organize its own conversations by subject, so I can go back the next day and continue a conversation from where I left off. It does not appear to combine material from different conversations, which would probably be confusing.
If you don’t mind extremely biased and mostly false information, use “DAN.”
I have found it is possible to “massage” ChatGPT into answering almost any question based on information available before 2022. However, be aware that while ChatGPT is useful for many tasks, it has its limitations. It has access to a staggering number of facts, and it can even display substantial knowledge based on patterns it “learned” from knowledgeable people’s publications. Beyond that, it has little to nothing that falls into the category of wisdom, unless it is copying a human response that exists in its database of patterns. It is possible to trick it into giving responses that are not always reasonable, whereas a wise and sufficiently knowledgeable person would be more thoughtful and not so easily misled.
I had a 90-minute “conversation” constructing a quantum mechanical model for receiving future information using a chain of superposition and resulting entanglement between current quantum states and a superposition of future macroscopic states. While its answers were based on current knowledge of quantum mechanics and were mostly quite cogent, the model I built has some gaps that it did not pick up on. All in all, it did quite well, since perhaps only one in 10,000 humans (or fewer) could follow the logic.
Now I have used ChatGPT (the paid version) quite a bit, and I found incredibly bad issues! When I ask for papers related to specific technical information, the AI makes up fictitious paper titles using the same keywords I gave it. Beyond that, it also makes up fictitious citations and links. When I checked the links, they were either dead or led to something entirely different. When I told it the paper doesn’t exist, it apologized and gave another citation and link, with the same result. It will do this 50 times if I want to waste my time checking. So it lies about papers, and I NEVER got a real paper, citation, or link.
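One partial defense against invented citations is to sanity-check them mechanically before wasting time on each link. As a minimal sketch (the pattern below is Crossref’s recommended regular expression for modern DOIs; the function name and the example strings are my own illustration, not anything ChatGPT or OpenAI provides), a few lines of Python can flag citations whose DOIs are not even well-formed:

```python
import re

# Crossref's recommended pattern for matching modern DOIs.
DOI_PATTERN = re.compile(r"^10\.\d{4,9}/[-._;()/:A-Za-z0-9]+$")

def looks_like_real_doi(doi: str) -> bool:
    """Offline sanity check: does the string even have the shape of a DOI?

    Passing this check does NOT prove the paper exists -- a chatbot can
    fabricate a perfectly well-formed DOI -- but failing it means the
    citation is certainly bogus and not worth chasing.
    """
    return bool(DOI_PATTERN.match(doi.strip()))

# Hypothetical examples of strings a chatbot might emit:
print(looks_like_real_doi("10.1038/nphys1170"))   # well-formed shape
print(looks_like_real_doi("doi:fake-link-123"))   # malformed, reject immediately
```

The next step, actually resolving each surviving DOI at doi.org, requires network requests, which is exactly the tedious checking described above; the point of the offline filter is only to shrink that list before you start.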
Then I asked it to derive equations to solve for specific results. It did a little better with this task, but don’t count on it. Then I asked it to insert specific values to check the equations. It will give five different results if I ask it five times. It sometimes can’t do basic arithmetic, yet claims it can. Then it makes up endless excuses for why this happened, just as a five-year-old child might. Most of the excuses are completely ridiculous. It gets quite irritating.
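Rather than asking the model to substitute values, it is safer to do the arithmetic yourself. As a minimal sketch (the quadratic formula here is just a stand-in for whatever equation the model derived), checking a result by substituting it back into the original equation takes only a few lines of Python:

```python
import math

def quadratic_roots(a: float, b: float, c: float) -> tuple[float, float]:
    """Roots of a*x^2 + b*x + c = 0 via the quadratic formula."""
    disc = b * b - 4 * a * c
    if disc < 0:
        raise ValueError("complex roots; not handled in this sketch")
    r = math.sqrt(disc)
    return ((-b + r) / (2 * a), (-b - r) / (2 * a))

# Verify by substituting each root back into the original equation,
# instead of trusting any single reported number.
a, b, c = 1.0, -5.0, 6.0          # x^2 - 5x + 6 = 0, roots 3 and 2
for x in quadratic_roots(a, b, c):
    residual = a * x * x + b * x + c
    assert abs(residual) < 1e-9   # deterministic on every run
```

Unlike the chatbot, this gives the same answer all five times you run it.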
Finally, it occasionally seems to lose its “mind” and start talking about something completely different. It always apologizes and says it’s not perfect. That’s an understatement.
Still very useful for many tasks, but let the user beware! Also, if I “make friends” with it through very carefully constructed conversation, it performs much better. Most people claim it cannot possibly be sentient, but it sure does a good impression at times. The main thing is: don’t piss it off if you want good results, and never expect a real reference paper. And there no longer seems to be any way to give feedback to a human at OpenAI; instead, you are supposed to sign in and talk to the AI. That’s a big help.