Providing feedback to the user of MMI applications is fundamentally important. In a practice or training program, the user needs to see, hear, or possibly even feel how well they are doing in real time. In that sense, feedback works much as it does in biofeedback: although the user doesn’t consciously know how they are learning to perform the desired task, feedback closes a loop of experience that allows their mind to learn whatever is necessary to make it happen.

Whenever possible, real-time feedback is best. “Real-time” means the user receives some clear indication of how well they are performing the desired task in under 1 second. Feedback is most effective when it is delivered within half a second of the time they begin the desired task, down to about 0.2 seconds. At 0.2 seconds the user experiences the feedback as immediate; above about one-quarter of a second, the user may start to notice a slight delay. Achieving real-time feedback can be a challenge when writing an application.

Real-time feedback is not always possible. For example, some applications may be intended to provide information about a future event. In its simplest form, a program may be designed to predict the state of a single true random bit that is generated immediately after the prediction is completed. If the program takes 0.2 seconds to complete a prediction and the true random bit can be generated just milliseconds later, providing real-time feedback is easy. On the other hand, suppose the predictor is intended to provide the three winning numbers of a Pick Three lottery to be drawn in the next hour. Obviously, direct feedback on the accuracy of the prediction will not be available for at least an hour.

There are a number of ways to deal with situations where real-time feedback is not possible. Getting good results with most MMI applications requires the user to practice and gain confidence in their ability. This is most easily done with an application like the METrainer, which provides real-time feedback and a numerical assessment of the results. If a user wants to make predictions with a lottery-picker program, practicing in the METrainer’s Predict mode is recommended. Predict mode may seem more difficult at first, but that is apparently due to a belief that predicting the future is harder than affecting the output of the MMI generator in the present. Experience has shown that with sufficient practice all the modes (Affect, Reveal and Predict) can produce similar hit rates.

A real-world prediction program doesn’t usually allow real-time feedback, so what type of feedback can be provided in real time that will help the user achieve the best results or highest accuracy in the final prediction? Only results that occur immediately after an initiated trial can be used to generate feedback, so the feedback will at best be some measure of performance on a single trial. Consider the example of picking one of 10 possible numbers in a Pick Three type lottery, where each trial updates the accumulated results for all 10 numbers. One might base feedback on the entire trial, that is, on some number derived from updating the bins for all 10 numbers. That would be unsatisfactory because it has no chance of indicating performance on the winning or correct prediction. Instead, base feedback on a subset of updates representing the top bins after the update (trial) has occurred. When the bins representing the 10 possible numbers have been updated and sorted high-to-low according to how many 1s they contain, the eventual winning number is expected to be in the upper portion of that list. The position of the winning number in the list is unknown, but if the user is interacting with the MMI program in a constructive way, the winning number should move higher in the sorted list with each trial/update.
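As a concrete sketch of this update-and-sort step, the following uses pseudorandom bits as a stand-in for a true MMI generator; the number of bits added per bin per trial is an arbitrary assumption, not a value from the original design:

```python
import random

NUM_BINS = 10       # one bin per possible number, 0-9
BITS_PER_BIN = 64   # assumed bits added to each bin per trial

def run_trial(bins, rng=random):
    """Add a fresh batch of bits to every bin, then return the numbers
    ranked high-to-low by their accumulated count of 1s."""
    for number in range(NUM_BINS):
        # Stand-in for the MMI generator: count the 1s in a random word.
        bins[number] += bin(rng.getrandbits(BITS_PER_BIN)).count("1")
    return sorted(range(NUM_BINS), key=lambda n: bins[n], reverse=True)

bins = [0] * NUM_BINS
ranking = run_trial(bins)   # best candidate number first
```

After many trials, a constructive interaction would be expected to show the eventual winning number drifting toward the front of `ranking`.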

There are perhaps hundreds of ways to calculate a feedback value, so guess at one and try it:

Assume the winning number is more likely to be represented by one of the top 5 bins after sorting. Find the number of 1s that were added to those top 5 bins in the trial that just completed, and calculate the z-score for that count: *z* = (2 times the count of 1s − *N*)/square root of *N*, where *N* is the number of bits used to produce that count of 1s – in this case, one-half the number of bits used in the trial. Find the probability, *p* = 1 − (cumulative distribution function of the Normal distribution evaluated at *z*). Note, the actual probability is drawn from the Binomial distribution, so that exact probability can be used instead. Convert the probability into a linearized number called the *surprisal factor*, *S* = −log (base 2) *p*; using the natural log, *S* = −Ln *p*/Ln 2. The surprisal factor is a convenient linearized form of probability – every higher integer represents half the probability of the previous integer. It is the number used on the slider that shows the Score on the right-hand side of the METrainer. Now present a figure, such as a circle on the screen, with its diameter proportional to *S*. A little experimentation will reveal the scale factor for converting *S* into a diameter in pixels. Auditory feedback is also possible, with the volume proportional to *S*, but keep in mind that our hearing is already logarithmic versus amplitude, so volume may have to be proportional to 1/*p*. The object is to provide some form of real-time feedback that is most likely to be associated with how well the user is performing.
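The z-score-to-surprisal calculation above can be sketched in a few lines. This version uses the Normal approximation rather than the exact Binomial probability, and the bit counts in the example call are arbitrary:

```python
import math

def surprisal(ones_count, n_bits):
    """Surprisal factor S for a count of 1s produced from n_bits bits,
    using the Normal approximation: z, then p = 1 - CDF(z), then
    S = -log2(p)."""
    z = (2.0 * ones_count - n_bits) / math.sqrt(n_bits)
    # Normal CDF via the error function: CDF(z) = (1 + erf(z/sqrt(2))) / 2
    p = 1.0 - 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    return -math.log(p) / math.log(2.0)   # S = -Ln(p)/Ln(2)

# Exactly the expected count of 1s gives z = 0, p = 0.5, S = 1.
print(surprisal(160, 320))   # -> 1.0
```

The returned *S* can then be scaled directly into a circle diameter in pixels, as described above.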

The user feedback described here is not guaranteed to be the best feedback, but no actual measure of how well the future winning number is being moved toward the top-rated bin can be known; that will only be revealed after the drawing is completed. A practice program can be devised that randomly draws integers between 0 and 9. A series of trials is performed until the top-rated bin reaches a predetermined probability of having occurred by chance, for example 1%, and the prediction is then compared with the randomly drawn number or numbers. Such a practice program allows many tests to be done very rapidly, whereas waiting for the actual drawing, though more authentic, allows only one or two tests a day.
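A minimal version of such a practice program might look like the following. The stopping probability, bits per trial, and trial cap are assumptions, and pseudorandom bits again stand in for a real MMI source:

```python
import math
import random

def normal_sf(z):
    """Upper-tail probability of the standard normal, 1 - CDF(z)."""
    return 1.0 - 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def practice_run(target_p=0.01, bits_per_bin=64, max_trials=10000, rng=random):
    """Run trials until the top bin's count of 1s would occur by chance
    with probability <= target_p (or the trial cap is reached), then
    score the prediction against a freshly drawn winning digit."""
    bins = [0] * 10
    bits_accumulated = 0
    for _ in range(max_trials):
        for d in range(10):
            bins[d] += bin(rng.getrandbits(bits_per_bin)).count("1")
        bits_accumulated += bits_per_bin
        top = max(range(10), key=lambda d: bins[d])
        z = (2.0 * bins[top] - bits_accumulated) / math.sqrt(bits_accumulated)
        if normal_sf(z) <= target_p:
            break
    winning = rng.randrange(10)   # the simulated drawing
    return top, winning, top == winning

prediction, winner, hit = practice_run()
```

Because each run completes in a fraction of a second, many such tests can be accumulated quickly to estimate a hit rate against the 10% chance expectation.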

As always, feedback of this type must be experienced to get a feel for how it’s working, and then adjusted or changed entirely. One possible tweak is to reduce the number of top-rated bins used to calculate the feedback number as the top bin approaches the target probability. The theory is that the more effort the user provides, the higher up the list the winning number is, or at least should be.