Most A.C.E. (or MMI if you want) systems are based on binary random numbers because they are easier to generate and verify statistically. Rather than thinking about the internal outputs of the mind-enabled question answering system as “1s” or “0s,” consider them as one of two possible binary answers, such as “Yes” or “No.” A 1 is always considered the positive or active symbol and would represent a Yes. 1s, for instance, could represent: Up, Higher, Above, Inside, Yes, More, etc., while 0s would be: Down, Lower, Below, Outside, No, Less, etc. The question to be answered could require just a single Yes or No, which makes the interaction simple. For more complex answers, the question must be broken down into sub-questions that can each be answered by a binary result. For example, I had a simple stock market predictor that asked two questions: will the market be Up or Down; then, will the change be a lot or a little. There are a number of ways to format this using Bayesian analysis, including searching for the correct 2-bit result directly as one of 4 possible states.
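A minimal sketch of that 2-bit formatting (the state labels and code here are just an illustration, not the actual predictor):

```python
# Toy illustration: a two-part market question encoded as a 2-bit answer,
# i.e. one of 4 possible composite states.
# bit 1: direction (1 = Up, 0 = Down); bit 2: magnitude (1 = a lot, 0 = a little)
STATES = {
    (1, 1): "Up a lot",
    (1, 0): "Up a little",
    (0, 1): "Down a lot",
    (0, 0): "Down a little",
}

def decode(direction_bit: int, magnitude_bit: int) -> str:
    """Map the two binary sub-answers to one of the 4 composite states."""
    return STATES[(direction_bit, magnitude_bit)]

print(decode(1, 0))  # -> "Up a little"
```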
I should emphasize that AI is a fairly generic term used in a number of ways in different contexts. I have used AI in the form of an artificial neural network (ANN) to enhance effect size, and I have used a version of rapid machine learning (ML or RML) that trains on a more complex pattern to enhance the accuracy of the results. RML can be trained in a fraction of a second, depending on the speed of the processor; AI processors could do it in a millisecond or less. Combining these forms of AI with Bayesian analysis is also a type of AI.
Certainly, Bayesian analysis is used with a monitoring system to track and utilize historical accuracy combined with a quality factor derived from each trial result. I always develop and test my programs using simulated data, where I have control of the signal-to-noise ratio (S/N) of the raw simulated data. The problem is, it’s not yet possible to simulate mental effects because we don’t yet understand how they work.
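A minimal sketch of simulated trial data with a controllable S/N (the exact effect-size definition here is just an assumption for illustration):

```python
# Sketch of simulated raw trial data with a controllable signal-to-noise level.
# Here the "effect size" is taken as 2*p - 1, where p is the hit probability;
# effect_size = 0 gives pure noise.
import numpy as np

def simulate_trials(n_trials: int, effect_size: float, seed: int = 0) -> np.ndarray:
    """Return n_trials binary outcomes with hit probability (1 + effect_size) / 2."""
    rng = np.random.default_rng(seed)
    p_hit = (1.0 + effect_size) / 2.0
    return (rng.random(n_trials) < p_hit).astype(int)

hits = simulate_trials(100_000, effect_size=0.02)
print(hits.mean())  # about 0.51 for a 2% effect size
```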
I have found through experience that the human mind is rather flexible in what it can learn to do.
There are many ways to limit errors. Bayesian analysis provides a probability of correctness, which will be correct as long as the estimates of the accuracy of each trial result are fairly accurate. I described this in articles on Bayesian analysis in the forum. The issue with classic divination is that it can never be made objective and scientifically provable. As I noted before, I want to move away from systems that will never (in a reasonable time) be accepted as real.
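To illustrate the general idea of a probability of correctness built from per-trial accuracy estimates (a simplified sketch, not the exact formulation used in my programs):

```python
# Sequential Bayesian update of P(answer = Yes) from a series of trial results,
# where each trial carries its own estimated accuracy (the "quality factor").
def bayesian_update(prior_yes: float, trial_says_yes: bool, trial_accuracy: float) -> float:
    """Update P(Yes) after one trial, given that trial's estimated accuracy."""
    # Likelihood of this trial outcome under each hypothesis.
    like_yes = trial_accuracy if trial_says_yes else 1.0 - trial_accuracy
    like_no = 1.0 - trial_accuracy if trial_says_yes else trial_accuracy
    numer = like_yes * prior_yes
    return numer / (numer + like_no * (1.0 - prior_yes))

p_yes = 0.5  # no prior preference
for says_yes, accuracy in [(True, 0.55), (True, 0.60), (False, 0.52), (True, 0.58)]:
    p_yes = bayesian_update(p_yes, says_yes, accuracy)
print(round(p_yes, 3))  # posterior probability that the true answer is "Yes"
```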
Okay, but what do you train your NN to do? It’s also a question of having a transmitter so you can train it to spot mental influence. Otherwise, it’s just an entropy estimator.
However, I was talking about a somewhat different question. I think that MMI as we know it is basically an optimizing force, which drives the entire observable universe to maximize some value, where the direction is the one intended.
But in the case of a QA system, we need to answer the question: which value should the user try to maximize? The z-score of the binary signal, the confidence score of the AI, or something else?
Also, a good question is how the signal should be represented to the user for feedback. A bar on the screen? Sound? A Buttplug.io interface? Possibly something more sinister?
I was talking about a rather different question.
Basically, we could imagine the desired AI system as two agents that have an asymmetric channel of transmission.
A->B can transmit the question perfectly.
B->A can transmit the answer only as a single symbol from a certain alphabet, and that channel is quite noisy. There is a certain significance factor, which allows us to estimate how noisy that specific answer is.
So we should develop a strategy for obtaining the desired knowledge from agent B, or for determining that certain knowledge is unobtainable from agent B (certain questions can’t be answered in principle, and I assume that even perfect MMI can’t promise omniscience).
Even if we can’t simulate MMI, we could train one AI, which has a question, to ask another AI, which has the answer, and to determine what that answer is.
Then we could gradually increase the noise in the channel, simulating weaker MMI. That way we could train the AI to adjust its own strategy so it doesn’t fool itself with uncertain results.
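Something like this toy sketch (the alphabet and the majority-vote strategy are just placeholders for illustration):

```python
# Toy two-agent setup: agent B's one-symbol answer passes through a noisy
# channel to agent A; the noise level can be raised gradually to mimic a
# weaker MMI link. Agent A's strategy here is simply repeat-and-majority-vote.
import random

ALPHABET = ["yes", "no"]

def channel(symbol: str, noise: float, rng: random.Random) -> str:
    """With probability `noise`, replace B's answer with a random symbol."""
    return rng.choice(ALPHABET) if rng.random() < noise else symbol

def agent_a_estimate(true_answer: str, noise: float, n_asks: int, seed: int = 0) -> str:
    """Ask the same question n_asks times and take a majority vote."""
    rng = random.Random(seed)
    votes = [channel(true_answer, noise, rng) for _ in range(n_asks)]
    return max(set(votes), key=votes.count)

for noise in (0.1, 0.4, 0.8):  # curriculum: progressively noisier channel
    print(noise, agent_a_estimate("yes", noise, n_asks=51))
```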
Something along those lines.
Well, it’s flexible, yes. But there is experience to consider. It’s more troublesome for the human mind to manifest an answer to binary questions (regardless of what you consider to be 1 or 0), and it’s easier to project an answer onto a symbolic system. Even better if that symbolic system has been polished by tradition.
Yes, we have the luxury of analyzing statistics, which the mages of the past didn’t have. Yet we also don’t have the luxury of a system that works without wonkiness and has full-blown scientific support. So we had better use every opportunity to make the process easier. And even if I don’t have scientific verification on this question, occult experience suggests that not only are certain symbols easier for the human mind to project, but certain symbols are easier to project in general. That’s because the mind doesn’t really operate on bits; it’s very far from what happens in a TRNG. It operates on meaning, and there are meanings that are more potent at shaping reality. Those meanings stand behind the symbols in divination/occult systems.
Using a divination symbol system wouldn’t make the whole thing more or less scientific, really.
What would make it more scientific would be to verify the claim that different symbolic alphabets can enhance or obstruct the transmission of semantic information through MMI. However, that’s a whole different field of headache, and it would cost a hefty amount of money, because it would require repeating the training of the agents. Not an exorbitant amount, but significant.
I used an ANN to increase effect size. The process includes multiple steps (a rough sketch of the data flow follows the list):
a. Random numbers are generated using measured entropy content.
b. The statistical quality of the random numbers is brought to the desired level by combining only entropy-containing bits (no deterministic post processing).
c. Blocks of random bits are transformed into a set of unique symbols. This transformation is necessary because the data is projected from a higher to a lower dimension using factor analysis. Random bits or words can’t be directly projected to a lower dimension because every factor is indistinguishable. I won’t go through the steps here as they are rather involved.
d. Each of the symbols is assigned a meaning based on correlation with the desired outcomes/categories (determined by live testing). The number of categories may be two for binary answers, or more than two for more complex pattern recognition (meanings).
e. A large dataset is produced by trials from one or more experienced users. The data includes at least three categories which will be labeled as Up, Down and Baseline (no intention).
f. The transformed data from each trial is labeled with the intended result.
g. The transformed and labeled data (now prepared training/testing data) is used to train a neural network. Some data is not used for training, but is reserved for testing the effectiveness of the training. The ratio of post-training to pre-training effect size is about two to one.
h. When the program is used, the trial data is transformed as noted in the steps above and sent through the trained ANN to produce the immediate result of the mental effort (trial).
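A very rough sketch of the overall data flow only (the block size, trial length, placeholder data, and network here are arbitrary assumptions; the actual symbol construction and factor analysis are far more involved):

```python
# Rough sketch of the data flow: blocks of random bits -> per-trial symbol
# histograms -> a small neural-net classifier trained on labeled trials.
# All sizes, labels, and the placeholder data are assumptions for illustration.
import numpy as np
from sklearn.neural_network import MLPClassifier

BITS_PER_SYMBOL = 4       # 16 possible symbols per block (assumed)
SYMBOLS_PER_TRIAL = 256   # blocks collected during one trial (assumed)

def trial_to_features(bits: np.ndarray) -> np.ndarray:
    """Convert one trial's bit stream into a histogram of symbol frequencies."""
    blocks = bits.reshape(-1, BITS_PER_SYMBOL)
    symbols = blocks @ (1 << np.arange(BITS_PER_SYMBOL))  # each block -> integer symbol
    return np.bincount(symbols, minlength=2 ** BITS_PER_SYMBOL) / len(symbols)

rng = np.random.default_rng(0)
# Placeholder data; in the real system the trials come from experienced users.
X = np.array([trial_to_features(rng.integers(0, 2, BITS_PER_SYMBOL * SYMBOLS_PER_TRIAL))
              for _ in range(300)])
y = rng.integers(0, 3, 300)  # 0 = Up, 1 = Down, 2 = Baseline

clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0).fit(X, y)
print(clf.predict(X[:5]))  # per-trial classification of the mental effort
```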
I, along with a number of other users, have extensively tested a variety of feedback modalities, which are described in my various papers and patents. The best approach I have found is to allow the user to choose from a number of visual and auditory feedback variations. I also describe tactile (vibration, pressure, etc.) feedback methods, though these are harder to produce.
So, if I understood it correctly, the NN can recognize certain patterns in sequences sampled from the binary stream, and those patterns can be matched to the symbols projected by the operator?
Honestly, I haven’t figured out how the factor analysis is being used.
Which NN architecture is being used?
The conversion to symbols is part of the processing method - the user doesn’t have conscious knowledge of them. The internal symbols give the ANN specific patterns that it can recognize. The user intends to achieve results that represent the desired outcome, and the degree of success is indicated by the user feedback for each trial.
I developed a roulette game predictor for use with online roulette games. I focused on getting either red or black since that is the simplest type of prediction (it could also have been even/odd or high/low). The predictor and the roulette game were shown on the same screen. When each trial/incremental prediction is initiated, the user intends for the “predictor” to reveal whether the next spin will land on a red or black symbol. The feedback is a circle whose size is related to the probability of accuracy calculated internally. The color is never revealed until the user decides the prediction is complete. The user can also reset and start over. When satisfied, the user reveals the color prediction, the bet is placed and the play button is pressed. In a major test with real money bets, over a month and about 1000 plays, there were always more wins than losses, so the winnings kept increasing.
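As a simple illustration of how such feedback scaling could work (the radii and mapping are assumptions, not the values used in the real program):

```python
# Illustrative sketch: map the internally calculated probability of a correct
# prediction onto the radius of the feedback circle, without revealing the
# predicted color itself.

MIN_RADIUS, MAX_RADIUS = 10, 120  # pixels, arbitrary choices

def feedback_radius(p_correct: float) -> float:
    """Scale probability (0.5 = chance, 1.0 = certain) onto the circle radius."""
    p = min(max(p_correct, 0.5), 1.0)
    return MIN_RADIUS + (p - 0.5) / 0.5 * (MAX_RADIUS - MIN_RADIUS)

print(feedback_radius(0.5))   # 10  -> chance level, smallest circle
print(feedback_radius(0.75))  # 65  -> halfway
print(feedback_radius(0.95))  # 109 -> high confidence, nearly full size
```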