Regarding the basics of the random walk

I was hoping we could share some information or sample programs about the basics of the random walk. From my understanding, whenever the RNG device is running in the background, you loop over the generation of one byte at a time. One loop is also known as a trial, and in each trial you determine whether the byte contains more 1 bits or more 0 bits.

Trials can either run continuously on their own, or a trial can be initiated with a button press.

We’ve had the REG graph with a line that completes trials continuously on its own: if there are more 1s the line goes up, and if there are more 0s it goes down, by increasing/decreasing a variable. With that approach I assume it’s best to set a threshold, and whenever the line/variable reaches the threshold it’s possible to get a yes/no action.

Or you use a button press to get the action you want the moment you click, as with the METrainer.

I don’t really have a good enough understanding of the basics of how best to create all of this in an actual program, so a couple of basic questions:

  • What is currently the best way of generating an action using a loop/button press?
  • Are there other ways of getting an action from looping over bytes?
  • What is the optimal number of trials to complete to get an action?
  • How many bytes should be generated per trial?
  • At what speed should a given number of trials be completed?
  • Which programming language is most suited for the devices: C, C++ or Python?
  • Are there any sample programs available we can use as an example?

Feel free to correct me about anything I’ve mentioned above

I will make a beginning at replying to your questions:

A trial is a measurement intended to show the effect of mental intention on a material system. The system is most often an electronic random number generator, which derives its randomness from a physical entropy source. While many other material systems are possible and have been experimented with, they are not as easy to duplicate or work with. A single trial is usually composed of a number of bits, each of which is the result of measuring a property of the entropy source, such as the voltage from a thermal noise source (a resistor).

The number of bits in a trial is not set by any physical principle, but there are a couple of well-tested examples. PEAR (Princeton Engineering Anomalies Research) usually used generators that produced 1000 bits per second, from which they took 200 bits for each of their trials, which lasted 1 second. Because their effect size was so low (typically 1 bit affected or switched in 5000), they did not consider the results of single trials. Instead, they looked at the cumulative deviation of bits over many of their trials. Note also, since their trials consisted of an even number of bits, there is a significant probability of a tie between 1s and 0s (ties will occur by chance in over 5.6% of 200-bit trials), producing an indeterminate result for the trial.

I usually used trials that lasted 0.2 seconds, though experiments were made with trials lasting up to 1 second. 0.2 seconds was chosen because that is the approximate time it takes the human mind to respond to neuronal firings and become conscious of a thought or other stimulus. Note, concepts of mind and thought are controversial and are not at all well defined. However, patterns of neuronal firings are directly measurable and we can note when we become “aware” of their presence. Using 0.2 second trials more easily allows what I call real-time feedback, that is, producing user feedback within a fraction of a second of the effect meant to produce the feedback. Real-time feedback is most effective for biofeedback and similar types of training.

The number of bits used in each 0.2 second trial varied from single entropy measurements up to a trillion bits. The MED100Kx8 generates 1.024 billion bits per second and outputs about 100Kbps. About 20 thousand of those bits can be used to produce one 0.2 second trial (over 200 million raw bits). Always use an odd number of bits to produce a trial so there is never a chance of a tie. If an 8-bit word is used as a trial, count the number of 1s in the lower 7 bits, and output 1 if the count is >3, else output 0. There are a number of ways to count 1s in a word, but a lookup table is probably the fastest.
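
To make that 8-bit rule concrete, here is a minimal Python sketch of a single-byte trial; the byte is assumed to have already been read from the generator:

def trial_from_byte(byte_value):
    # Mask off the MSB so only the lower 7 bits are counted; an odd
    # number of bits means a tie is impossible.
    ones = bin(byte_value & 0x7F).count("1")
    # With 7 bits, a count greater than 3 is a majority of 1s.
    return 1 if ones > 3 else 0

print(trial_from_byte(0b10110110))   # the lower 7 bits hold four 1s, so this prints 1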

Reply, part 2:

There are two fundamental ways an MMI system can produce a desired outcome or action. The first is more familiar, that is, a user initiated trial: the user starts a trial by some physical action, such as a key press, mouse click, screen touch or something similar. Note, there are two varieties of touch, especially for mobile device screens. One initiates the action on a “press” or contact, the other initiates on “release,” or when contact is removed. I personally prefer on contact, since it seems more intuitive as initiating an action, versus on release, which may happen after the action is accomplished. An initiated trial is what it sounds like: a trial is started when the user wants it to be and takes an action to start it. The second mode is continuous, where a series of trials is initiated continuously and automatically at some predetermined trial rate. A variation of this mode might be called conditional initiation, where a trial or series of trials is initiated when one or more predetermined conditions is detected. The variety of possible conditions is almost limitless.

The trial initiation mode depends completely on the specific application or intended use of the MMI system. For practice or assessment of skill, user initiated trials seem to be the best. In addition, when gathering information from a number of trials, user initiated trials are also the only method to use. Clearly, when attempting hands-free control or when user initiation is not reasonable or available, continuous initiation is the only possible approach.

Getting good results using continuous trial initiation is much more difficult than with user initiated trials. There are several reasons for this. Real-time feedback is often not available in continuous trial mode. Feedback is an important part of most MMI applications. When feedback is delayed by minutes or even hours, the user must depend on previous training and confidence in his or her ability to affect the MMI system to achieve the desired outcome. In addition, in continuous mode there are statistically expected drifts in the accumulated results. For a simple random walk, the walker will move an average of the square root of the number of trials from the beginning of the walk to the present time. For 100 trials, the walker will move about 10 steps in either direction (positive or negative). Two obvious ways to overcome this are: one, to restart the trials from 0 every second or several seconds; and two, to allow the count or current position to decay toward zero by subtracting a certain fraction of the current count, which causes an exponential decay. Overcoming drift with these or similar filtering methods requires a larger effect size than with initiated trials. The effect size will already be smaller than the initiated trial effect size due to the lack of feedback and of direct trial-by-trial focused intention.
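
A minimal sketch of the decay-filter variant described above, with random.getrandbits standing in for the MMI generator; the decay fraction and threshold are chosen only for illustration:

import random

position = 0.0
DECAY = 0.05        # fraction of the current count removed each trial (illustrative)
THRESHOLD = 10      # trigger level for a yes/no action (illustrative)

for trial in range(1000):
    bit = random.getrandbits(1)        # stand-in for one trial's majority bit
    position += 1 if bit else -1       # simple random walk step
    position -= DECAY * position       # decay toward zero (exponential)
    if abs(position) >= THRESHOLD:
        print("action:", "yes" if position > 0 else "no")
        position = 0.0                 # restart the walk after an action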

Perhaps some more clever method of signal processing will make up for the reduction in effect size for continuous trial applications, or a more responsive MMI system will be required. However, one expects to already be using the most responsive system available for any MMI application. More responsive systems are doubtless possible, but for the moment applications using continuous MMI systems are extremely challenging.

Thank you for the very detailed replies. Feel free to continue posting whenever you have the time. We could eventually make another thread that contains all of the information that you have shared.

I made a small Python script that mimics the above description a bit and is action-based with a keypress. I assume that before starting a trial, the buffer of the QNG device is supposed to be cleared.

Using the MED100K or MED100Kx3, it generates around 2000-2200 bytes per 190-200 ms when generating 1 byte at a time by calling the RandBytes func in a loop after pressing the key. The MED100Kx3 generates a few more bytes this way.

Generating a bigger amount at a time gives about the same number of bytes during those 190-200 ms. So I assume it’s either the maximum output of bytes in 200 ms or the maximum number of bytes my PC can process at a time.

When counting I set the MSB to 0 and count the rest of the bits.

It’s a very basic program, but I’d like to know whether I understood it correctly from the posts you made, whether there are any mistakes, and how it can be improved.

So a couple of questions:

  • Should you call the RandBytes func with 1 byte at a time or with a bigger amount in bulk?
  • Do you count the ones/zeros of a byte immediately after calling the RandBytes func, or do you first fill a (byte) array and only start counting after a set amount of time has passed?
  • Do you use the maximum number of bytes that can be generated within 200 ms, or just a set amount?
  • Also, these trials give us a yes/no action; is there already a way we could extend the number of possible actions?

In order to run it you need to install the keyboard and bitstring modules with pip:

pip install bitstring
pip install keyboard

I noticed the forum didn’t allow many file extensions, so I added: .py/c/h/cpp/js/java/html/rar/zip/txt. The maximum file size is set to 4096 KB by default; if that needs to be changed, a setting on the webserver also needs to be changed.

main.py (5.0 KB)

Yes, to be certain what your timing is relative to a keypress, clearing the buffer is probably a good idea. However, MMI is not like deterministic programming. The effect of time is not always simple to determine, so it’s good not to take anything for granted in that respect.

The MED100K generates 98,765 bps and the MED100Kx3, 99,896 bps. In 200ms you should get close to 2,469 or 2,497 bytes (depending on the generator) give or take a small percentage. You seem to be losing about 10% somewhere.

Taking one byte at a time is perhaps a little inefficient. I suggest taking about 20-40 ms of data per call. That’s about 250 to 500 bytes, but on general principles take whole powers of 2, for example, 256 or 512 bytes. However, there are many ways to lose/gain efficiency due to how a program is constructed. Block data transfer is probably very fast, but processing can present significant computational overhead. If the program uses multi-threading, there is little chance of losing data. Most programmers probably use a single thread because it is much simpler, but then every slower process should be optimized so there are no big bottlenecks that can back up the entire thread. I am not a programmer (except for Mathematica), but I suggest the bitwise handling of data is one of the bottlenecks.

How data is processed depends very much on how it is to be used. To answer a couple of your other questions: if you want to generate a single trial in 200 ms, use all the bits available in that time period to generate the trial. In order to produce a trial output in minimum time, process the data as it becomes available, keeping in mind not to back up the main thread so much that data could be lost. Some timing profiling is advisable (see for example: https://realpython.com/python-timer/). In order to use all the bits, count/accumulate the number of 1s in every byte, rather than using only 7 bits in each byte. To keep the number of bits odd (to prevent ties), adjust only the final byte in the sequence. The simplest way to do that is to count only the LSB and throw away the rest of the byte. If you use a lookup table to accumulate counts of 1s, just mask the MSB (or LSB, it doesn’t matter which) and use the lookup as normal. Note, the most efficient way I know of for accumulating counts of 1s is with a lookup table. The relative address of the 256-location table is just the byte to be counted, while the data field contains the number of 1s in that byte. The table can be generated on the fly when the program starts, or, since it’s only 256 locations, the table can be permanently stored in the program.
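
A sketch of that lookup-table scheme for a single binary trial, counting every bit of every byte except the final one, where only the LSB is used so the total stays odd; os.urandom stands in for a block read from the device:

import os

# Build the 256-entry lookup table once at startup: the index is the
# byte value, the entry is the number of 1s in that byte.
ONES_COUNT = [bin(b).count("1") for b in range(256)]

def count_trial_ones(data):
    # All bits of every byte except the last; the final byte contributes
    # only its LSB, so the total bit count 8*(n-1) + 1 is always odd.
    total = sum(ONES_COUNT[b] for b in data[:-1])
    total += data[-1] & 1
    return total

data = os.urandom(2560)                   # placeholder for device bytes
ones = count_trial_ones(data)
n_bits = 8 * (len(data) - 1) + 1
print("trial output:", 1 if ones > n_bits // 2 else 0)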

To be clear, I recommend using the same number of bytes for each trial to provide the most uniform processing and trial results.

Finally, there are a number of ways to give more than two possible outcomes (1 or 0), but there is no simple answer. Producing multiple outcomes is really saying the information being requested is more complex than can be answered with a simple Yes/No answer.

One can of course use trials, or sub-trials, as bits in a longer word. Two sub-trials would provide 4 possible outcomes in a 2-bit word. The sub-trials can be of equal length to a regular trial, but then the total time to generate one outcome would be doubled. Otherwise, use sub-trials with half the number of bytes. There is no free information: if shorter sub-trials are used (half the length), the effect size of each bit is reduced by about 40% (divided by the square root of 2). In addition, this approach, although the simplest, is not necessarily the best intuitive approach. If one considers using more sub-trials to make even more simultaneous choices, the MSBs will have a much greater effect on the outcome than the LSBs. This is inconsistent if all the choices are expected/desired to have equal weights.

One way to give multiple outcomes equal weight from a single keypress (though this approach also applies with multiple-keypress information acquisition) is to accumulate sub-trial results in bins associated with each possible outcome. Let’s say one wishes to pick the first number in a Pick Three lottery. That type of lottery consists of picking three numbers from 0 to 9, so the first number would have 10 bins, labeled 0 through 9. When a keypress occurs, the results of an equal number of sub-trials are placed in each bin. Then the bins are sorted from high to low based on their sub-trial results, and the label associated with the highest result is the selected outcome.

While this sounds fairly simple, there are many details to consider. The greater the number of possible outcomes desired, the greater the possibility of ties in one or more of the bins containing the highest results. A method must be devised to either prevent ties or provide a tie breaker when one occurs. Another major consideration involves the nature of MMI and how the user interacts with the program/game to achieve desired results.

This post is getting a little long, so I will provide more details in part 2.

Part 2.

As a reminder, these design details are provided exclusively for the members of this forum and cannot be found anywhere else. © 2020 Scott A. Wilber.

The probability of a tie occurring between two or more of the top-rated bins decreases as the number of bits used in a trial increases. However, from a practical perspective one must assume a tie will occur because it can. One way to break a tie is to add more data until there is a clear winner, that is, a single top-rated bin. This is the approach that will be described in the following, but always consider there are many ways of processing MMI data to achieve the desired result.

When a trial is initiated by keypress or other method, accumulate the number of 1s in a preselected number of words from the MMI generator and place that number in the bin labeled “0.” Then proceed the same way with the second bin, and so on until all the bins have counts from an equal number of words. It’s not possible to prevent ties by using an odd number of bits as with a two-bin (binary) selection process. Therefore, use all the bits from the selected number of words for each bin.

The simplest way to proceed would be to take 256 bytes for each bin, requiring a total of 2560 bytes (20480 bits). This is the output of a MED100Kxx generator in about 0.2 seconds. For a number of reasons, the simplest way is not the best, or at least not the better, way. Instead, take 64 bytes for each of the bins and repeat that 4 times, increasing the total count in the bins each time. The first reason for accumulating the counts in this way is to avoid as much as possible any conscious or unconscious bias toward any one of the bins/target numbers. After the trial – which I will call provisional because it may not be completed yet – sort the bins from highest to lowest by the number of counts of 1s in each. If there is a unique highest count (no ties), the trial is completed. Output the label of the highest bin count as the selected number. If one or more counts are tied for the top position, take another 64 bytes of data for each bin, increasing the provisional counts. Check again to see if there is a clear winner. If there is, the trial is completed; output the label of the bin with the highest count of 1s. Again, if a tie still persists for the top bin(s), continue adding data in blocks of 64 bytes per bin for every bin until there is no tie. This is the second reason for adding data in smaller blocks – tie breakers will take less data versus adding the full amount of data (another 0.2 seconds).
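
A minimal sketch of that trial procedure, with os.urandom standing in for the MMI generator; the function and variable names are my own:

import os

ONES_COUNT = [bin(b).count("1") for b in range(256)]
NUM_BINS = 10
BYTES_PER_BLOCK = 64
PASSES = 4

def block_ones():
    # Count of 1s in one 64-byte block; os.urandom is a placeholder
    # for a block read from the MMI generator.
    return sum(ONES_COUNT[b] for b in os.urandom(BYTES_PER_BLOCK))

def run_trial():
    bins = [0] * NUM_BINS
    # Interleave the data: one 64-byte block per bin, repeated 4 times.
    for _ in range(PASSES):
        for i in range(NUM_BINS):
            bins[i] += block_ones()
    # Tie breaking: add another block to every bin until the top is unique.
    while sorted(bins)[-1] == sorted(bins)[-2]:
        for i in range(NUM_BINS):
            bins[i] += block_ones()
    return bins.index(max(bins))

print("selected number:", run_trial())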

Avoiding mental bias is always an issue in MMI systems, and it’s harder than it sounds. Do not display specific results until a trial or final outcome is produced. When future information or a prediction is being made, specific counts of 1s are inverted using a single random bit produced by a true random generator after the MMI data is produced. Inversion means replacing the count of 1s with the count of 0s for the same data block or sub-trial. That prevents the user from intending the numbers in a specific bin to be increased. This user bias can occur consciously or unconsciously. Note, the data is not all inverted; only, for example, the data for a single bin at a single update (64 bytes in this example). A new random bit is used to decide whether to invert any block of data in any bin: if the random bit is a 1, invert the count, else do not invert. As an example: if the count of 1s from a 64-byte (512-bit) block is 250 and the invert/non-invert random bit is 1, the resulting count to be added to the bin would be the inverted count, 512 – 250 = 262. If the random bit is a 0, the count to be added would be unchanged, 250. This is somewhat complicated processing, and at the developmental first pass it can be omitted. It’s also not certain this is the best way to avoid an obvious source of user mental bias.
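
The inversion step in code form, a sketch only; os.urandom stands in for the true random generator that supplies the invert/non-invert bit:

import os

def maybe_invert(ones_count, block_bits=512):
    # One fresh random bit per 64-byte (512-bit) block; if it is 1,
    # report the count of 0s instead of the count of 1s.
    if os.urandom(1)[0] & 1:
        return block_bits - ones_count
    return ones_count

print(maybe_invert(250))   # prints 262 (inverted) or 250 (unchanged)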

When following the process outlined above, it’s not necessarily enough to get a unique high bin count. One may also require the high count to represent a statistically significant outcome. Statistical significance generally means the probability of the null hypothesis test is 5% or less, though one may desire a more stringent 1% or less. The null hypothesis test is: what is the probability that a count of 1s greater than or equal to what actually occurred could have happened by chance, that is, without mental influence? Though it sounds complicated, it’s a straightforward test. The bit count data is from the binomial distribution, but when the number of bits counted is greater than a few hundred, which it is, the normal distribution approximation can be used with very little error. That means a z-score can be calculated and a simple threshold can be used to determine significance, or preferably, the actual probability can be calculated. The approximate z-score is z = (2 × [count of 1s] − N) / √N, that is, 2 times the count of 1s minus N, divided by the square root of N. N is the total number of bits in the words used to accumulate the counts of 1s in that bin, or 2048 for this example. If there are ties, N increases accordingly. The probability of the null hypothesis test is p = 1 − (cumulative normal distribution function at z). Numerical Recipes in C, or other online sources, can provide the code for the cumulative distribution function. I believe I already provided code for an approximation to that function. If only a threshold is desired, z ≥ 1.6448536 is the 5% significance level, and z ≥ 2.3263478 is 1%. Note, for the statisticians in the group, these tests are one-tailed because I don’t consider so-called psi-missing, or getting a significant score in the wrong direction, to be in any way useful.
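
A small sketch of those two formulas in Python; the standard normal CDF is computed here via math.erfc rather than the Numerical Recipes routine:

import math

def null_probability(ones, n_bits):
    # z = (2 * count_of_1s - N) / sqrt(N); one-tailed p = 1 - Phi(z),
    # where Phi is the cumulative normal distribution function.
    z = (2 * ones - n_bits) / math.sqrt(n_bits)
    p = 0.5 * math.erfc(z / math.sqrt(2))
    return z, p

z, p = null_probability(1078, 2048)
print(f"z = {z:.4f}, p = {p:.4f}")   # z ≈ 2.3865, p ≈ 0.0085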

To achieve a particular level of significance will usually require a multi-trial system. That means the user must focus and perform repeated trials until the desired level of significance is reached. When trials are repeated, the counts in the bins are accumulated from trial to trial. In spite of not giving specific feedback during a trial, or a series of trials, some sort of real time feedback is always desirable. In the example, a minimum feedback is the probability or some representation of the probability and whether the desired level of significance has been reached. The bin or number related to the probability should never be displayed until the outcome is reached. There are many possibilities of feedback, and I won’t try to go into them in this message.

I tried to keep this as simple as possible, but the process is not trivial. If anyone is trying to implement this or other similar algorithms, please ask very specific questions.

Thanks for the information, it seems to get really interesting from here.

I’ve made an update to the script. I’m not sure yet where those missing 10% bytes are.

  • When running trials, after a keypress the script loops until 2560 bytes are generated and all bins 0-9 are filled for a single trial, then outputs the bin number after sorting. (Takes around 0.234 s on my end.)
  • Every iteration of the loop gets 64 bytes at a time; after generating those bytes it fills the first bin in the first iteration, the second bin in the second iteration, and so on until the tenth bin is filled, then it starts over with the first bin. This process is repeated 4 times.
  • When applying the mental-bias avoidance, we generate a random bit (0 or 1) from os.urandom() (not sure if TRNG). If that random bit is a 1, we count the ones and zeros after generating and then swap them, so ones become zeros and zeros become ones.
  • For duplicates, let’s say bins 3 and 4 both have the same number of set bits. The script then loops until one bin is left: each iteration adds 64 bytes to bin 3 and then to bin 4, then counts which of the two contains more set bits, so a winner should be determined. If both bins still contain equal numbers of set bits, it continues looping until a winner is decided.
  • I’ve allowed the input of a probability threshold, for example 1.0 as default. If (p*100) <= 1.0 it will print the bin number, otherwise it will print the probability as a percentage.
  • I’ve also added the ability to do multiple sub-trials at once, for example doing 3 trials to get 3 numbers with the bin option, which takes around 0.700 s and outputs 3 numbers.

Is this correct? Or, instead of looping over bins 0-9 with 64 bytes each and starting over four times, should we loop four times over bin 0 before going to bin 1, and so on up to bin 9?

So far I’m a bit unsure how to test it or whether it’s working as intended.

Let’s say someone wrote down three numbers: 1 0 5

When running the program, should I focus on getting the first of those written-down numbers and continue doing trials until the probability is reached, then check the matching numbers? Or should I get around 20-25 numbers from trials, all below a set probability, and then see which number came up most often?

I assume you already have something like this working on your end. I wrote a number between 0 and 200 on a piece of paper lying face down on a desk behind me. Is it then possible to actually guess this number with good accuracy?

I’ll upload the python script a bit later. Below are some screenshots from the command prompt after running the script.

output1.png output2.png output3.png output4.png output5.png output6.png output7.png

First, I want to thank you for your efforts and say I am impressed by your rapid progress.

I believe the first two steps (bullet points) are as I described: every bin (of 10) should have the counts of 1s from 2048 total bits (256 bytes).

In the third step – avoiding mental bias – the algorithm should be applied separately to the counts of 1s from each of the 64-byte blocks. That would be 40 separate applications for one complete trial, assuming no tie breaking is applied. (I believe urandom is always pseudorandom, versus /dev/random, which should be true random. However, the timing of the generation of these numbers is from the past. Not to worry about these details at the moment – just use what you are using.)

With respect to “duplicates,” it is only necessary to break a tie for bins that have the highest counts of 1s. When breaking a tie, new data must be applied to every bin, not just the tied ones. After a cycle of adding 64 bytes of data to every bin, sort the bins again to see if the new highest ones are still tied. Repeat this process until the top bin (the one with the highest count of ones) is the single highest. Note, if avoiding mental bias is applied, each of the new 64 bytes of data (all 10 separately) should have the algorithm applied prior to adding to each bin’s total.

I noticed in the images, the number of used bytes is 2560, but the number of bits used is 20473. The number of bits used should always be 20480, that is, every bit in every word. When multiple bins are used, use all bits in every word. Of course, when tie breaking is applied, the totals of both bytes and bits will increase.

I also note the number of set and unset bits totals 2048, which is correct when only one trial/keypress is used and no tie breaking is applied. If two trials/keypresses are used, the number will double, and if tie breaking is used, the total increases by 640 bytes or 5120 bits for each iteration (all 10 bins) of attempting to break the tie.

The z-scores were correct when a total of 2048 bits per bin are used. Also, the probabilities were exact to the last digit. When multi-trials/keypresses or tie breaking is applied, adjust the totals accordingly. The new z-scores are calculated from the new totals.

If I understand correctly, the ability to do three trials at once is intended to get three separate answers – equivalent to the complete Pick Three drawing – with a single keypress. It’s a good thought, but I will have to explain in more detail how one mentally associates MMI trials with real world information. It’s a pretty abstract process, and the more complicated the expected answer is, the more difficult the visualization or focused intention becomes.

The reason for looping 4 times over each bin is to spread out any potential mental bias that could be imposed by the user knowing the exact structure and process going on within the program. Any clue like that allows more mental influence to bias the output. As I noted, MMI is very different (very slippery to control), and we must consider things that could never happen in any deterministic program.

Definitely start with your suggestion, or simply make a program that picks the three numbers but does not reveal them until you are ready with your revealed answers from your trials. Even simpler, select only one hidden number and try to match that one first – start simple and work up. Yes, focus only on revealing the hidden number and repeat trials until the cumulative probability of the best bin reaches your selected level. I suggest outputting the entire sorted list to see if the first number is in the higher end of the list. As you get better, the hidden number should move up the list until it reaches the top.

The more specific your focus or visualization, the better will be the results. I will go into more detail about methods of concentration and visualization as we progress through developing these programs. Don’t expect the miraculous from this test right away. This is an amazingly difficult task that is impossible without MMI, even with unlimited computational power, including using a quantum computer. Getting this to work as desired can still require improvements in the processing and the available MMI generators.

I used to have an extensive network of devices set up with very sophisticated reveal/prediction programs, but lack of programming support has allowed them to fall into total dysfunction as changes in Internet security made the complex interface programs fail. The hardware I am supplying is the best available for individual use and easy interfacing. This seems to be the appropriate direction of development versus the totally centralized network I had previously in place.

As always thank you for the very detailed replies. I believe I fixed some of these things.

Regarding filling the bins, my apologies for being a bit unclear on this, but I want to get it exactly the way you described. Currently, when looping over the bins, it fills bin 0, then bin 1, up to bin 9, and then the loop starts over at bin 0 through bin 9. This process repeats four times, as you suggested. However, there’s also the possibility of looping four times over filling bin 0, then looping four times over filling bin 1, and so on. I’ve assumed it’s the former.

Next up, I’m going to see if I can create a GUI like the METrainer for this program, which should make using it a bit easier, and then I’ll upload it here. Something like having 3 boxes with hidden random numbers masked as ‘X’: once you press a key it runs the trial, and if the probability is below the set threshold it lists the bin number. As feedback, the probability as a percentage is always shown. Once the number is correct, it unmasks the box and then you have to correctly guess the next number.

Very much like the screenshots of the output from the command prompt.

What I noticed is that when playing with a probability threshold of 1%, the number of losses/mismatches before hitting the right number does seem to decrease compared to, for example, 3% or 5%.

Improvement in these programs does seem to be a bit on the slow side, so hopefully sometime next week I’ll be able to get some better results. I still haven’t progressed beyond a level where I can consistently get a 55-60% hit rate on the METrainer.

From my experience I believe it is better to loop over all the bins and then repeat multiple times until the total number of bits is reached. That approach seems to make it a little harder to accidentally influence the outcome according to conscious or unconscious expectations, versus filling each bin entirely before going on to the next bin in a single pass.

I don’t have time now, but I want to explore some other feedback methods that give more responsive feedback on each trial. Seeing the probability change is a straightforward type of feedback, but since it is cumulative, it is hard to really see how you are doing on each keypress/trial.

Your rate of improvement is extremely fast. Some people have tried for many months and not progressed at all. Remember, the PEAR REGs and their type of feedback produced per trial effect sizes (200-bit trials in one second) of about 0.0002 or hit rates of 0.5001. Over their quarter century of research, they hardly improved that at all. You are talking about getting 10-20% effect sizes in 0.2 second trials. That’s several orders of magnitude better.

These current MMI systems – with a little refining and development – are already responsive enough to produce practical applications, but they will be much more acceptable when they reach the level of responsivity that makes them appear to operate by magic. That will take some real advancements in the MMI generator, and in the processing and user interaction as well. Having a system for comparing two MMI generators will greatly help in the MMI generator development. I am working very hard on making a real leap in the fundamental generator design and you are helping develop a type of user interaction. It takes many people working on a common goal to make these things really happen.

I’ve edited the script a bit and I’m developing a GUI around guessing three hidden randomly generated numbers using the algorithm above. The user can set a probability threshold; when it is reached, the program checks whether the top bin number corresponds to the first/second/third hidden number. If it fails, it prints the bins on the left and counts it as a loss; if it’s a match, it shows the number. At the moment the GUI is still very basic, but features can be added and it can be improved over time.

What I noticed was that when doing repeated trials to get an answer, after enough trials the probability sometimes gets stuck at 10-30% and just won’t decrease enough per trial to reach the set probability threshold, for example 5.0%.

I also just saw that you made a thread about feedback so I will think about it and see if I can incorporate some things from it.

During development I also read a bit of your paper called Intelligence Gathering Using Enhanced Anomalous Cognition on the Core Invention website. At the end of the paper there’s an example about detecting an object or person in a region using a map, which reminded me that we do kind of have a base (although with a bit of a bad history) for developing such an application for mobile phones on iOS/Android, or as a web application using cloud infrastructure.

Let me know if you are still interested in having this type of application developed. I know you spent time writing about the algorithms in a paper you shared with Jamal and us a couple of months ago, but in the end we didn’t get to spend a lot of time with it, which is a bit unfair on my/our side. Right now, though, we should be able to actually create something from it.

gui_1.png gui_2.png gui_3.png

If you find you can’t get the probability below a certain level, increase your threshold so the number of trials is not excessive. If the probability has leveled off, the process has probably reached a point of diminishing returns and may diverge further from the correct answer. It also may indicate the process needs some refinement so it doesn’t get stuck. We can talk more after you have a chance to think about the user feedback post.

I don’t understand what you mean about “not getting to spend a lot of time with it.” My intention is that everyone, at least in this forum, gets what they want/need to make progress with MMI applications and general understanding. What can I do to help?

My bad, what I meant was that at the time that paper (Guidelines and Design Examples for Mind-Enabled Applications, part 1) was written, it was never really explored very well because of circumstances that kept us busy.

After looking it over just now, I also think I personally misunderstood it a bit, as I thought the algorithms described were for generating a random set of coordinates affected by MMI.

As an example of what I thought the idea was: use a 2D random walk like in the first example from the paper, with a latitude of 52.370 and a longitude of 4.895. It starts at (52.370, 4.895), and over 1000 steps each one in the x dimension moves the latitude one step up while each zero moves it one step down, and likewise for the y dimension on the longitude. So let’s say we end up with 540 ones in the x dimension and 490 ones in the y dimension; the net movements are +80 and −20 steps, and with a scale of 0.0005 degrees per step the end coordinates are (52.410, 4.885).
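
Here is a minimal sketch of that walk as I read it; os.urandom stands in for the generator, and the 0.0005-degree scale per net step is inferred from the arithmetic above:

import os

SCALE = 0.0005   # degrees per net step, inferred from the example

def net_steps(steps=1000):
    # +1 per one bit, -1 per zero bit, over the requested number of steps.
    ones = sum(os.urandom(1)[0] & 1 for _ in range(steps))
    return ones - (steps - ones)

lat, lon = 52.370, 4.895
lat += net_steps() * SCALE   # x dimension moves the latitude
lon += net_steps() * SCALE   # y dimension moves the longitude
print((round(lat, 4), round(lon, 4)))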

While I’m going to improve this program a bit over the following weeks with the feedback info you shared earlier, I could also start developing an application that uses trials and gives an MMI-affected coordinate on a map, with your help.

Or if there’s something that you’d like to see developed then that might be a good start for a new project as well.

My purpose for writing that paper was to explain a number of algorithms that could be used to design MMI applications using MMI generators, such as the ones I am providing.

Clearly a random walk can be used to provide a mentally influenced location. I developed an application nearly 15 years ago that used Google Earth as a basis. The user would hover over a specific location that was their best guess about where the target of their search could be located. They would zoom out until the displayed area included the widest possible range of locations of the target. Then the MMI application would begin. The user would intend the cursor to move toward the target, or employ some similar focus such as locating the target object or person. A trial would be initiated by the user, and the MMI generator would output a certain number of bits representing an “x” and a “y” direction. The cursor would be moved in the generated (x, y) direction by an amount determined by a scale factor in the program. At the same time the position would zoom in a small increment. By repeated trials the cursor location would move toward the target (assuming the MMI effect was working as desired) and get closer to the surface. Eventually the cursor would end up as close to the target location as desired.
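
One possible reading of a single step of that search in code; the movement scale and zoom increment here are illustrative, not values from the original application:

import os

def search_step(lat, lon, span, x_bits, y_bits, scale=0.25, zoom=0.9):
    # One user-initiated trial: step the cursor by a fraction of the
    # current view span in the direction of each bit majority, then
    # zoom in a small increment. Bit lists have odd length (no ties).
    def direction(bits):
        return 1 if 2 * sum(bits) > len(bits) else -1
    lat += direction(y_bits) * scale * span
    lon += direction(x_bits) * scale * span
    return lat, lon, span * zoom

def mmi_bits(n):
    return [b & 1 for b in os.urandom(n)]   # stand-in for generator bits

print(search_step(52.4, 4.9, 1.0, mmi_bits(101), mmi_bits(101)))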

The recent phenomenon of Randonautica, essentially the same idea I demonstrated many years ago, shows an MMI application can generate a massive public response. However, they didn’t really have an understanding of MMI and how to use it. They used random bits that were provided by a pseudorandom generator seeded by quantum random bits – that is not an MMI generator. They have other issues now, but I don’t want to be critical. In fact, the huge public interest they achieved, though momentary, was a great boost to public awareness of MMI.

There are a number of ways to use MMI to effect a search of a geographic or physical location. Exact design details would depend on the specific application, for example, searching for a lost item in one’s home. Or, one might wish to find a lost pet, or law enforcement might want to find a lost or hidden person.

I want to provide techniques and examples of MMI applications. The best way I know to do that is with a specific design that can be tested by a number of people. The exact application is not so important because many design principles will be common to many applications. But, the application must be of real interest to a number of people so it gets enough exposure. I don’t know if the Pick Three predictor is that example. Perhaps a finding program would be of greater interest, but I don’t want to come too close to Randonautica’s app either.

Hi David,
I noted in gui_3 the number of bytes used was 228,480. What sort of generator did you use to get that many bytes in 0.23 seconds? Or was this a very large number of trials (more than about 80), with the 0.23 seconds applying only to the final trial? If the number of trials was that high, it’s an indication the threshold probability was too low or the effect size was too low, or both.

I also calculated the probability for bin 3 and got p = 0.00608. I assume the probability you displayed, 0.61, was 100 times p. Probability of the null hypothesis test would normally be displayed as 0.0061 instead of in the form of percent.

By the way, for comparison, the surprisal factor for p = 0.00608 is 7.36 or just 7.4 for most practical purposes.
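
For anyone reproducing that figure, it assumes the surprisal factor is defined as −log₂(p):

import math
print(-math.log2(0.00608))   # ≈ 7.36 bits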

Yeah, absolutely: those bytes were from all trials combined. It seems it’s still very messy with the naming and how things get listed, so I’ll change the text a bit, fix the p listing, and add some things like the number of matches and the bytes generated/used per trial.

Also, I’ve started experimenting with the feedback post a bit and used the top 5 bins to generate the surprisal factor as it described, and will add that as well.

I assumed that displaying p times 100 was better for user convenience, so the number appears when a probability of < 1.0 is reached, rather than 0.01.

Sometimes the probability gets stuck between 10-25% (in this example), and the change in probability per trial is very small, which causes the high trial count; sometimes you’d even have to reset.

Sometimes it is desirable to give the user more control of the data that is used to produce final results. After all, it’s the user’s mind that is generating the intended outcome. The MMI generator and the user application provide an objective platform to register the results of the mental influence of the user. In the example application of picking 3 numbers from 0-9 for a Pick Three lottery, multiple efforts or trials are virtually always required in the selection of each number. During the course of those efforts, the user may have a period when focus drifts or just feels off. That may be reflected in too small an effect size causing too many trials to be required. Sometimes it can just be a truly random statistical trend that goes against the intended result. In such cases, the user can be given the option to reset or cancel the interim results.

To accomplish such a reset requires a two-step process when performing trials. After each trial or group of trials, the user can decide whether the system was responsive enough to keep those results. If yes, the user presses a button to tell the program to keep the data and update the bins. If no, the user presses a button to reset all the data from that trial or group of trials. The program automatically keeps track of the trials since the last time either the keep or the reset key was pressed, so all the new data is processed according to the button pressed.

While this or similar type of user control adds some additional complexity to the program, it does give the user a little more control than just restarting the entire process. Restarting from the beginning is also a valid approach but sometimes a selection takes considerable user effort and starting over is a little tiresome. Deciding to add this additional user interaction depends on how a typical user reacts or feels about starting over versus having the option of essentially going back to the previously recorded (kept) cumulative data.
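
A structural sketch of that keep/reset bookkeeping; the class and method names are my own:

class TrialSession:
    # Committed counts persist; provisional counts hold everything
    # accumulated since the last keep/reset decision.
    def __init__(self, num_bins=10):
        self.committed = [0] * num_bins
        self.provisional = [0] * num_bins

    def add_trial(self, bin_counts):
        for i, count in enumerate(bin_counts):
            self.provisional[i] += count

    def keep(self):
        # The user judged the system responsive: commit the new data.
        for i, count in enumerate(self.provisional):
            self.committed[i] += count
        self.provisional = [0] * len(self.provisional)

    def reset(self):
        # The user rejected the recent trials: discard the new data.
        self.provisional = [0] * len(self.provisional)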

I found a really dumb, yet surprisingly effective solution.
It’s like a regular random walker, but it uses a pre-amplification method: I take 10,000 bytes from the RNG buffer, apply the regular Python hash function to that sequence, and add sgn(hash) * log(abs(hash), 100) to the random walker’s coordinate. So the random walker’s coordinate is a floating-point value.
That works far better for me, but today I can’t influence the RW to go in the positive direction. Probably I’m just in a really bad mood.
Next I’ll try a multidimensional random walker on the same principle; I can use several different hash functions for the different axes.
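
A minimal sketch of that pre-amplified walker as described, with os.urandom standing in for the RNG device buffer; note that Python’s built-in hash of a bytes object is salted per process unless PYTHONHASHSEED is fixed:

import math
import os

def walk_step(position, chunk):
    # Hash the whole 10,000-byte chunk, then step by sgn(h) * log_100(|h|),
    # giving a floating-point walker coordinate.
    h = hash(chunk)
    if h == 0:
        return position                  # skip the rare zero hash
    return position + math.copysign(math.log(abs(h), 100), h)

position = 0.0
for _ in range(100):
    chunk = os.urandom(10000)            # stand-in for device buffer bytes
    position = walk_step(position, chunk)
print(f"walker position: {position:.3f}")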