Most A.C.E. (or MMI if you want) systems are based on binary random numbers because they are easier to generate and verify statistically. Rather than thinking about the internal outputs of the mind-enabled question answering system as “1s” or “0s,” consider them as one of two possible binary answers, such as “Yes” or “No.” 1s are always considered the positive or active symbol, and would represent a Yes. 1s, for instance, could represent: Up, Higher, Above, Inside, Yes, More, etc., while 0s would be: Down, Lower, Below, Outside, No, Less, etc. The question to be answered could require just a single Yes or No, which makes the interaction simple. For more complex answers, the question must be broken down into sub-questions that can each be answered by a binary result. For example, I had a simple stock market predictor that asked two questions: will the market be Up or Down; then, will the change be a lot or a little. There are a number of ways to format this using Bayesian analysis, including searching for the correct 2-bit result directly as one of 4 possible states.
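As a minimal sketch of the two-question idea (the state names and bit convention here are mine for illustration, following 1 = Yes/Up/More), the two binary sub-answers decode into one of four combined states:

```python
# Hypothetical decoding of the two-question market predictor:
# bit 1 answers "Up or Down?", bit 2 answers "a lot or a little?".
STATES = {
    (1, 1): "Up a lot",
    (1, 0): "Up a little",
    (0, 1): "Down a lot",
    (0, 0): "Down a little",
}

def decode(direction_bit: int, magnitude_bit: int) -> str:
    """Map the two binary sub-answers to one of the 4 possible states."""
    return STATES[(direction_bit, magnitude_bit)]

print(decode(1, 0))  # Up a little
```

A Bayesian search could equally treat the four states directly as competing hypotheses rather than answering the two questions sequentially.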
I should emphasize, AI is a fairly generic term used in a number of ways in different contexts. I have used AI in the form of an artificial neural network (ANN) to enhance effect size, and I have used a version of rapid machine learning (ML or RML) that trains on a more complex pattern to enhance accuracy of the results. RML can be trained in a fraction of a second, depending on the speed of the processor. AI processors could do it in a millisecond or less. Combining these forms of AI with Bayesian analysis is also a type of AI.
Certainly, Bayesian analysis is used with a monitoring system to track and utilize historical accuracy combined with a quality factor derived from each trial result. I always develop and test my programs using simulated data, where I have control of the signal-to-noise ratio (S/N) of the raw simulated data. The problem there is, it’s not yet possible to simulate mental effects because we don’t yet understand how they work.
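For what it's worth, the usual way to simulate raw data with a controllable S/N is to generate bits with a small, adjustable bias — this is only a generic sketch, not the author's actual simulator:

```python
import random

def simulated_trial(n_bits: int, bias: float, seed=None) -> float:
    """Generate n_bits with P(bit = 1) = 0.5 + bias and return the fraction of 1s.

    `bias` plays the role of the controllable "signal"; the binomial sampling
    variance is the "noise". bias = 0 gives pure null (no-effect) data.
    """
    rng = random.Random(seed)
    ones = sum(rng.random() < 0.5 + bias for _ in range(n_bits))
    return ones / n_bits
```

What can't be simulated this way is *how* a mental effect would shape the bias, which is the point being made.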
I have found through experience the human mind is rather flexible in what it can learn to do.
There are many ways to limit errors. Bayesian analysis provides a probability of correctness, which will be correct as long as the estimates of the accuracy of each trial result are fairly accurate. I described this in articles on Bayesian analysis in the forum. The issue with classic divination is that it can never be made objective and scientifically provable. As I noted before, I want to move away from systems that will never (in a reasonable time) be accepted as real.
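The "probability of correctness" idea can be sketched with a standard Bayesian update over a Yes/No answer, where each trial carries its own estimated accuracy (this is a textbook formulation, assumed here, not the forum articles' exact method):

```python
def bayes_update(prior: float, trial_says_yes: bool, accuracy: float) -> float:
    """One Bayesian update of P(true answer is Yes).

    `accuracy` is the estimated probability that this trial's result is correct.
    If the estimates are fairly accurate, the posterior is a valid probability
    of correctness.
    """
    like_yes = accuracy if trial_says_yes else 1 - accuracy   # P(result | Yes)
    like_no = 1 - accuracy if trial_says_yes else accuracy    # P(result | No)
    return prior * like_yes / (prior * like_yes + (1 - prior) * like_no)

p = 0.5  # uninformed prior
for says_yes, acc in [(True, 0.6), (True, 0.55), (False, 0.52)]:
    p = bayes_update(p, says_yes, acc)
```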
Okay, but what do you train your NN to do? Training it to spot mental influence is also a question of having a transmitter. Otherwise, it’s just an entropy estimator.
However, I was talking about a somewhat different question. I think that MMI as we know it is basically an optimizing force, one that drives the entire observable universe to maximize some value, where the direction is intended.
But in the case of a QA system, we have to answer the question: which value should the user try to maximize? The Z-score of the binary signal, the confidence score of the AI, or something else?
Also, a good question: how should the signal be represented to the user for feedback? A bar on screen? Sound? A Buttplug.io interface? Possibly something more sinister?
I was talking about a rather different question.
Basically, we could imagine the desired AI system as two agents with an asymmetric transmission channel.
A->B can transmit the question ideally.
B->A can transmit the answer as a single symbol from a certain alphabet, and that channel is quite noisy. There is a significance factor that allows us to estimate how noisy each specific answer is.
So we should develop a strategy for obtaining the desired knowledge from agent B, or for determining that certain knowledge is unobtainable from agent B (some questions can’t be answered in principle, and I assume that even perfect MMI can’t promise omniscience).
Even if we can’t simulate MMI, we could train one AI that has a question to ask another AI that has the answer, and determine what that answer is.
Then we could gradually increase the noise in the channel, simulating weaker MMI. That way we could train the AI to adjust its own strategy so it doesn’t fool itself with uncertain results.
Something along those lines.
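The noisy B->A channel in that setup could be simulated as, say, a binary symmetric channel with a tunable flip probability (an assumption on my part; the real symbol alphabet could be larger than binary):

```python
import random

def channel(answer_bit: int, noise: float, rng) -> int:
    """Binary symmetric channel: flip B's answer with probability `noise`.

    noise = 0 models a perfect MMI link; noise -> 0.5 models no link at all.
    """
    return answer_bit ^ (rng.random() < noise)

def ask_repeatedly(answer_bit: int, noise: float, n: int, seed=0) -> float:
    """Agent A's running estimate of B's answer after n noisy transmissions."""
    rng = random.Random(seed)
    return sum(channel(answer_bit, noise, rng) for _ in range(n)) / n
```

Sweeping `noise` upward during training is exactly the "gradually weaker MMI" curriculum described above.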
Well, it’s flexible, yes. But there is experience to consider. It’s more troublesome for the human mind to manifest an answer to binary questions (regardless of what you consider to be 1 or 0), and it’s easier to project an answer onto a symbolic system. Even better if that symbolic system has been polished by tradition.
Yes, we have the luxury of analyzing statistics, which the mages of the past didn’t. Yet we also don’t have the luxury of a system that works without wonkiness and has full-blown scientific support, either. So we’d better use every opportunity to make the process easier. And even if I don’t have scientific verification for this, occult experience suggests not only that certain symbols are easier for the human mind to project, but that certain symbols are easier to project in general. Because the mind doesn’t really operate on bits; it’s very far from what happens in a TRNG. It operates on meaning. And there are meanings that are more potent at shaping reality. Those meanings stand behind the symbols in divination/occult systems.
Using a divination symbol system wouldn’t make the whole thing more or less scientific, really.
What would make it more scientific would be verifying the claim that different symbolic alphabets can enhance or obstruct the transmission of semantic information through MMI. However, that’s a whole different field of headache, and it would cost a hefty chunk of money, because it would require repeating the training of the agents. Not an exorbitant amount, but significant.
I used an ANN to increase effect size. The process includes multiple steps:
a. Random numbers are generated using measured entropy content.
b. The statistical quality of the random numbers is brought to the desired level by combining only entropy-containing bits (no deterministic post processing).
c. Blocks of random bits are transformed into a set of unique symbols. This transformation is necessary because the data is projected from a higher to a lower dimension using factor analysis. Random bits or words can’t be directly projected to a lower dimension because every factor is indistinguishable. I won’t go through the steps here as they are rather involved.
d. Each of the symbols is assigned a meaning based on correlation with the desired outcomes/categories (determined by live testing). The number of categories may be two for binary answers, or more than two for more complex pattern recognition (meanings).
e. A large dataset is produced by trials from one or more experienced users. The data includes at least three categories which will be labeled as Up, Down and Baseline (no intention).
f. The transformed data from each trial is labeled with the intended result.
g. The transformed and labeled data – now prepared training/testing data – is used to train a neural network. Some data is not used for training, but is reserved for testing the effectiveness of the training. The ratio of effect size after training to before training is about two to one.
h. When the program is used, the trial data is transformed as noted in the steps above and sent through the trained ANN to produce the immediate result of the mental effort (trial).
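To make the data shapes concrete, here is a toy sketch of the preparation steps only (c, e, f, g). The real symbol transform uses factor analysis and is not shown; the block size, label names, and trial sizes here are purely illustrative assumptions:

```python
import random

def bits_to_symbols(bits, block=4):
    """Step c (stand-in): group bits into blocks and map each block to a
    symbol index 0..2**block - 1. The actual method projects to a lower
    dimension via factor analysis; this only shows the resulting data shape."""
    return [int("".join(map(str, bits[i:i + block])), 2)
            for i in range(0, len(bits) - block + 1, block)]

def make_labeled_trial(intention, n_bits=64, seed=None):
    """Steps e-f (stand-in): one trial of raw bits plus its intended label.
    (Real trials would carry an MMI effect; these bits are plain pseudorandom.)"""
    rng = random.Random(seed)
    bits = [rng.randint(0, 1) for _ in range(n_bits)]
    return {"symbols": bits_to_symbols(bits), "label": intention}

# Step e: a large labeled dataset with Up / Down / Baseline categories.
dataset = [make_labeled_trial(lbl, seed=i)
           for i, lbl in enumerate(["Up", "Down", "Baseline"] * 100)]
# Step g: reserve some data for testing, never for training.
train, test = dataset[:240], dataset[240:]
```

Step g's network itself could be any standard classifier over the symbol sequences; the architecture isn't specified in the post.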
I – and a number of other users – have extensively tested a variety of feedback modalities, which are described in my various papers and patents. The best approach I have found is to allow the user to choose from a number of visual and auditory feedback variations. I also describe tactile (vibration, pressure, etc.) feedback methods, though these are harder to produce.
So, if I understood correctly, the NN can recognize certain patterns in series sampled from the binary stream, which can be matched to symbols projected by the operator?
Honestly, I haven’t figured out how the factor analysis is being used.
Which NN architecture is being used?
The conversion to symbols is part of the processing method - the user doesn’t have conscious knowledge of them. The internal symbols give the ANN specific patterns that it can recognize. The user intends to achieve results that represent the desired outcome, and the degree of success is indicated by the user feedback for each trial.
I developed a roulette game predictor for use with online roulette games. I focused on getting either red or black since that is the simplest type of prediction (could also have been even/odd or high/low). The predictor and the roulette game were shown on the same screen. When each trial/incremental prediction is initiated, the user intends for the “predictor” to reveal whether the next spin will land on a red or black symbol. The feedback is a circle whose size is related to the internally calculated probability of accuracy. The color is never revealed until the user decides the prediction is completed. The user can also reset and start over. When satisfied, the user reveals the color prediction, the bet is placed and the play button is pressed. In a major test with real money bets, over a month and about 1000 plays, there were always more wins than losses, so the winnings kept increasing.
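A toy sketch of that predictor's internal state might look like the following — the bit convention (1 = red) and the confidence measure are my assumptions, not the actual product's internals:

```python
def prediction_state(trial_bits):
    """Accumulate per-trial bits into a hidden color call plus a confidence.

    The on-screen circle would scale with `confidence`; `color` stays hidden
    until the user decides the prediction is complete and reveals it.
    """
    n = len(trial_bits)
    ones = sum(trial_bits)
    frac = ones / n if n else 0.5
    confidence = abs(frac - 0.5) * 2          # 0 = no lean, 1 = unanimous
    color = "red" if frac >= 0.5 else "black"
    return {"confidence": confidence, "color": color}

state = prediction_state([1, 1, 0, 1, 1])
```

Resetting would simply clear `trial_bits`; the real system presumably uses the Bayesian trial-by-trial accuracy described earlier rather than this raw majority fraction.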
It’s nice to see the 2 of you discussing the actual ins and outs of how the Q&A system could be implemented. I’ll join in on this at some point when I get the time…
I’ve been, and am still, busy with the move from the old to the new house. I thought I’d give some photographic progress of where the server room is at the moment.
I’m going to be overseas most of December but I hope to get the MED Farm at least up and running on this new hardware before I leave New Zealand for the holidays.
Now that the server room’s lockable, the room adjacent is the kids’ Game Zone (it’s actually the work of the previous owner - they turned the garage into a proper walled/carpeted room and the garage doors don’t function - no windows; perfect game room!)
The Mechanical Turk was a device devised in the 18th century. It was apparently an automaton that played chess with human challengers, though it was a hoax with a chess master inside the cabinet below the Turk controlling its actions.
The term, Mechanical Turk, has been used recently to describe a type of question answering system operating over the Internet. Amazon’s website, MTurk (https://www.mturk.com/), operates a type of Mechanical Turk (Amazon Mechanical Turk - Wikipedia). Specific tasks are requested on MTurk and users (crowdworkers or Turkers) provide individual human efforts – what I call micro-efforts. Many results of these efforts are combined to produce more accurate results.
Some of the ways the Mechanical Turk is set up can be used as a model for an MMI question answering system, though the MMI system can obtain information that is not inferable (not logically solvable from available information), which the MTurk system cannot do.
MTurk is set up to pay Turkers for their individual efforts as the way to “hire” them for specific tasks. Pure money payment may not be effective or successful. I see two motivators for MMI Turkers: obtaining unavailable but fascinating and/or useful information; and, rewarding top performers with money, ranking, special access or other things we come up with.
I remember hearing about AWS’ MTurk many years ago although I’ve never used it. One of the early applications I heard about was classification.
As a model for an MMI Q&A system, do you imagine a system whereby users (the requesters) could post a question, and whoever from the pool of MMI Turkers wanted to give it a shot would answer the question via our system, and then the requester could rate the answers?
I was offline for most of Dec as I was travelling overseas, but now that I’m back over the New Year break I’ve been working on the first part of this whole project - a renewed MED farm. It’s slow going. I’m going to be running it on a FreeBSD server instead of Linux, and it turns out the current MeterFeeder implementation didn’t play nicely with FreeBSD. I’m giving up on attempting to change it from libftd2xx to a direct libusb implementation and am instead going to try rewriting the whole driver in Rust.
From there I’ll actually be building up a proper server platform to host and provide the entropy.
I imagine an MMITurk to have two types of request: 1) as you suggest, a requester can post a question and interested Turkers will provide incremental efforts to obtain the answer, and 2) individuals with questions can provide efforts by themselves or within a specific group to get their answer(s).
There has to be some way to keep score or assess the correctness of each answer. Some answers are obviously right or wrong, but many will not be so clear. Each Turker should be provided with a rating for their contributions to date, and some combination of the number of efforts and their accuracy will give them an overall ranking. My advanced MMI processing methods can include a trial-by-trial p-value that can be used for feedback for each effort. The actual accuracy is generally not available until later, or sometimes not at all. If the accuracy can be determined, that factor will be included in the individual’s rating.
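One simple way to combine effort count and accuracy into an overall ranking — purely a hypothetical scoring rule, with the log-volume weighting being my own choice — could be:

```python
import math

def turker_rating(n_efforts, accuracies):
    """Hypothetical overall ranking combining volume and skill.

    accuracies: per-effort accuracy in [0, 1] for efforts where the true
    outcome later became known (may be empty). Volume contributes
    logarithmically so sheer quantity can't dominate quality.
    """
    volume = math.log1p(n_efforts)
    skill = sum(accuracies) / len(accuracies) if accuracies else 0.5
    return volume * skill
```

Efforts whose accuracy is never determined simply don't enter `accuracies`, matching the point that actual accuracy is sometimes unavailable.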
Turkers who meet a minimum ranking for a specific task may be rewarded by receiving the final results, and/or some other “prize.” We will have to try some ideas and see how they work.