Fatum Project (MMI-based spatial search)

Recently I tested a bit-rotation method of coordinate generation. It helps to eliminate position-significance bias from the binary word by shifting all of its bits and adding one new bit from the RNG on every iteration, so every bit is used in every position. It also decreases entropy consumption, since 10 bits of entropy produce 10 random numbers.
The only problem I found is autocorrelation between the x and y coordinates, but it can easily be solved by generating all values for the x axis first and then all values for the y axis separately. This way the point distribution looks uniform.

C# code for random number generation

public int rotstack = -1;

 public override int Next(int maxValue)
 {
     //improved method which tries to prevent modulo bias
     //how many bits do we need to store maxValue?
     int reqBits = Convert.ToInt32(Math.Ceiling(Math.Log(maxValue, 2)));
     int randMax = Convert.ToInt32(Math.Pow(2, reqBits));

     while (true)
     {
         int n = maxValue;

         //rotate the previous value by one bit, or fill it completely on the first call
         if (rotstack > -1)
         { rotstack = BitRot(rotstack, 1, reqBits); }
         else
         { rotstack = BitRot(0, reqBits, reqBits); }

         //reject values that would introduce modulo bias
         if (!(rotstack >= (randMax - n) && rotstack >= n))
         {
             return rotstack % n;
         }
     }
 }
		
 public int BitRot(int val, int rotations, int reqBits)
 {
     int msb = reqBits - 1;
     for (int i = 0; i < rotations; i++)
     {
         //shift right and insert one fresh random bit at the most significant position
         val = val >> 1;
         if (GetRandomBit()) { val |= (1 << msb); }
     }

     return val;
 }

It is also important to generate all x and y values separately, because if you use one value for X and the next value for Y, it will look like this:


But if you generate all x values first, and all y values after that, the distribution will look uniform.

It is possible to generate a second independent bitstream from a single input stream. That should take care of the autocorrelation (or cross-correlation) issue. The method is to use an autocorrelation-to-bias converter that I devised. The 1st-order autocorrelation of the original sequence transforms to a bias in the resulting sequence, where AC(1) = 2 (count of 1s / N) - 1 (N is the length of the sequence). What's important is that bias and autocorrelation are statistically independent, so the new sequence should not be correlated with the original sequence.
From the original bit stream, take the current bit and the previous bit and output their XNOR as the current bit in the new bitstream. For each subsequent bit, advance one bit along the original stream and XNOR the current bit with the previous one. Only the first bit is undefined because there is no previous bit. You can start with either a 1 or a 0; it will not disturb the outcome after a few bits have been processed.

The operation can be done bit-wise or word-wise. If XNOR is not built into the programming language, the truth table is simply

previous bit    current bit    output
      0               0           1
      0               1           0
      1               0           0
      1               1           1


If the bits are equal, output 1, else output 0.
Alternatively, XOR will produce the same output sequence, but with every bit in the sequence inverted. That would be fine for your application. You get a second bitstream for (almost) free. I usually use the autocorrelation (derived stream) for the imaginary or "y" axis and the bias (original stream) for the real or "x" axis.
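
A minimal sketch of this derived-stream idea in C# (the method name and the bool[] representation of the stream are placeholder assumptions, not from the post above):

 public static bool[] DeriveXnorStream(bool[] source, bool seed = false)
 {
     // Derive a second bitstream by XNORing each bit with the previous bit.
     // The "previous" bit for the first position is an arbitrary seed; its
     // effect washes out after a few bits, as noted above.
     bool[] derived = new bool[source.Length];
     bool previous = seed;
     for (int i = 0; i < source.Length; i++)
     {
         derived[i] = (source[i] == previous);   // XNOR: 1 when the bits are equal
         previous = source[i];
     }
     return derived;
 }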

Note, this simple algorithm assumes the bias is fairly small; that is, (2Pout - 1) ≈ AC(1). I also have a full algorithm for when bias is significant, but that should not be the case with any reasonably good random generator.

I tried to modify my bit-rotation method so it could calculate the Y value by XNORing bits from the X value. It still generates X by shifting bits in the previous X value and adding a new random one. It turned out that this method doesn't work.

public int rotstack = -1;
public bool firstbit = false;
 
 
 public Coord XYCorr(int dx)
 {
     int yval = 0;
     Coord result = new Coord();
     int reqBits = Convert.ToInt32(Math.Ceiling(Math.Log(dx, 2)));
     int randMax = Convert.ToInt32(Math.Pow(2, reqBits));
     while (true)
     {
         int n = dx;
         yval = 0;

         //remember the previous low bit, then rotate fresh random bits into rotstack
         if (rotstack > -1)
         { firstbit = GetBit(rotstack, 0); rotstack = BitRot(rotstack, 1, reqBits); }
         else
         { firstbit = GetRandomBit(); rotstack = BitRot(0, reqBits, reqBits); }

         if (!(rotstack >= (randMax - n) && rotstack >= n))
         {
             result.x = rotstack % n;

             //bit 0 of y: XNOR of the discarded previous low bit with the new low bit
             if (firstbit == GetBit(rotstack, 0)) { yval |= (1 << 0); }
             //remaining y bits: XNOR of each pair of adjacent bits of the x value
             for (int i = 0; i < reqBits - 1; i++)
             {
                 if (GetBit(rotstack, i) == GetBit(rotstack, i + 1)) { yval |= (1 << (i + 1)); }
             }

             if (!(yval >= (randMax - dx) && yval >= dx))
             {
                 result.y = yval % dx;
                 return result;
             }
         }
     }
 }

 public int BitRot(int val, int rotations, int reqBits)
 {
     int msb = reqBits - 1;
     for (int i = 0; i < rotations; i++)
     {
         //shift right and insert one fresh random bit at the most significant position
         val = val >> 1;
         if (GetRandomBit()) { val |= (1 << msb); }
     }

     return val;
 }

 public static bool GetBit(int input, int bitNumber)
 {
     return ((input >> bitNumber) & 1) == 1;
 }

The XNOR method is meant to produce an independent source of random bits in parallel with your previous random sequence that you use to feed your "x" generating function. There should be no interaction between the actual coordinate-generating functions; rather, there should be two entirely separate x and y generating functions.

This may be what you are doing, but I expect the method to work since the bias and autocorrelation sequences are statistically independent. However, it is also possible your method of coordinate generation does something funny with the derived AC bits.

Ah, I got it. It is impossible to generate a Y-axis value from the X-axis value for the same coordinate without creating a pattern. The idea of making 1-bit coordinates failed.

Some thoughts on which amplification methods are most suitable for generating points using the binary word method. Unlike the random walker, where the ratio of zeros and ones has a decisive influence on the result, a binary word relies on a bit sequence representing a number. Therefore, when Psi affects the RNG, the deviation may have a different nature. When trying to create a signal that could be decoded into a number, Psi will generate repeating patterns similar to the target number's pattern. The patterns may not reproduce exactly, but they will resemble the target pattern. In this case, an algorithm capable of finding the region of the bit stream with the largest number of repeated fragments could be optimal for amplification.

However, there is a second option, where Psi does not reproduce the pattern instantly but instead increases the probability of the number's digits appearing one after another. Then we will see many biases in the stream with a classical predominance of zeros or ones, which, put together, form the desired pattern. The problem here is that such biases can be separated by stretches of noise of variable length, which means that, falling into the amplifier, two biases can be combined into one output bit, thereby breaking the number's pattern.

The picture shows both cases using the example of the number 218 (11011010)

Approximately the same problem is addressed by the bi-entropy algorithm (BiEntropy Thread) with its division into channels by bit shift. Each channel is shifted relative to the original by 1 bit. Moreover, if we know the entropy level of the initial sequence for each bit, then by comparing these values with each other we can select bits from those channels where the bias was captured in the amplification window in an optimal way.

I have done a lot of testing that converts runs of 1s or 0s into a bias that can be amplified. The testing included runs of various lengths from a run of one to longer runs. I clearly observed that during strong mental influence all the longer runs in the intended direction (more 1s or more 0s) increased beyond the expected numbers. This is described in one of my papers, Intelligence Gathering using Enhanced Anomalous Cognition, Figure 1, page 4, https://coreinvention.com/files/papers/Intelligence_Gathering_Using_Enhanced_Anomalous_Cognition.pdf. Interestingly, the excess of 1s or 0s that show up in the enhanced runs have to come from somewhere, so the number of runs of length one in the intended direction actually decreased.

Testing of the shorter runs, but looking for the opposite result (decrease rather than increase in the intended direction) in runs of length one, did show a significant MMI effect. Runs of length two seemed to be a crossover and always showed little effect, while runs of increasing lengths occurred less often and therefore imparted less information about the effect. Runs of length one and three had the most information, but neither of these were as good as the usual bias or 1st-order autocorrelation measurements.

Beyond this approach I have not tried any method that successfully identified short segments of a bit sequence showing increased MMI effects. A big part of the difficulty in this regard is that the entropy of such short sequences is not actually measurable in a meaningful way. It would be possible to look for arbitrary patterns in 8-bit words, for example. I would suggest looking for a pattern that would be output as a "1" and outputting a "0" if its complement is seen. For example, 10110001 might translate to a 1 output and 01001110 would translate into a 0 output. A mapping for the rest of the 128 pairs of words would have to be defined so each 8-bit block (word) would produce either a 1 or a 0 output. To me it seems a little arbitrary to allow different lengths of data blocks in this analysis, but I suppose it is possible. Also, I think this analysis will have a similar result to simpler bias amplification algorithms.
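
A purely illustrative sketch of such a mapping (the majority-vote rule below is an arbitrary placeholder, not taken from the paper; it only guarantees that a word and its complement always produce opposite outputs, consistent with the 10110001 / 01001110 example above):

 public static int WordToBit(byte word)
 {
     // count the 1 bits in the 8-bit word
     int ones = 0;
     for (int i = 0; i < 8; i++) { if (((word >> i) & 1) == 1) ones++; }

     if (ones > 4) return 1;              // majority of 1s
     if (ones < 4) return 0;              // majority of 0s
     return (word & 0x80) != 0 ? 1 : 0;   // 4/4 tie: complements differ in the MSB
 }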

Reflecting on why MED does not provide significant advantages over ANU in the Fatum algorithm, I came to the conclusion that bias-amplification algorithms may simply be incompatible with the conversion to numbers by the Binary Word method, since compressing a series can destroy complex patterns and preserve only information about the overall deviation.

Therefore, it occurred to me to create word-amplification algorithms that would amplify a complex pattern. I settled on two promising options:

  1. The simplest approach: feed a series of 32 bits to the input of the amplifier. Each of these 32 bits goes to the input of a separate random-walker bias amplifier. The output bits of all 32 amplifiers then form the final binary word. Thus, each bit-significance position is amplified separately, and if a repeating pattern occurs in the input stream, it will be reflected in the output pattern (see the sketch after this list).

  2. Advanced version: feed all input bits to a random walker with a fixed number of bits. In this case, the output bit is determined by the maximum deviation during the walk. The deviation itself is converted into a z-score and associated with the output bit. The entire array of output bits can then be divided into 32 groups by z-score, and the bits with higher z-scores can be used in the more significant positions of the binary word. Since there will be more less-significant bits than more-significant ones, they can additionally be compressed into one with a bias amplifier.
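
A minimal sketch of option 1 (the walker bound, the uint[] input format, and the error handling are placeholder assumptions, not a finished design):

 public static uint AmplifyWordStream(uint[] words, int bound = 16)
 {
     // One random-walk bias amplifier per bit position: each input word
     // contributes one step to each of the 32 walkers. A walker finishes
     // when it reaches +bound or -bound; its sign becomes that position's
     // output bit.
     int[] walkers = new int[32];
     bool[] done = new bool[32];
     uint result = 0;
     int finished = 0;

     foreach (uint w in words)
     {
         for (int pos = 0; pos < 32; pos++)
         {
             if (done[pos]) continue;
             walkers[pos] += ((w >> pos) & 1) == 1 ? 1 : -1;   // step up on 1, down on 0
             if (Math.Abs(walkers[pos]) >= bound)
             {
                 if (walkers[pos] > 0) { result |= 1u << pos; } // amplified bit for this position
                 done[pos] = true;
                 finished++;
             }
         }
         if (finished == 32) { return result; }
     }
     throw new InvalidOperationException("ran out of input words before all 32 positions converged");
 }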

I think the biggest problem is that the algorithm is trying to obtain an enormous amount of information in a fraction of a second. Each trial in a real-world MMI system produces just one bit with a 60-70% probability of being correct. Trying to get 32 bits of correct information requires a true data rate on the order of 30-50 bits per second, depending on how long a trial lasts. The highest or peak data rate I have ever seen was about 1 bit per second, but 0.1 bit per second was more normal. To be clear, that is about 10,000 times higher than what PEAR researchers reported, and no doubt it will be possible to increase that rate another 10 to 100 times with real advances in MMI technology.

This information rate issue is the reason MMI applications are not yet running the world. It is possible now to get a few accurate bits of information in several seconds with training and properly combining trials. Obtaining 32 bits of accurate information in a single keypress is beyond the current state-of-the-art. MMI generators must be made much more responsive, which is possible by some methods I know of and many I don't know.

May I suggest that the idea of creating a complex pattern of voids and attractors in response to each keypress requires an enormous amount of entropy. At the same time, only a tiny amount of that provides information that points to a relevant target area. If too much is asked from the generator/algorithm, little difference will be seen between entropy sources, as they will all fail to yield the desired result.

The challenge is to find the most efficient way to use the information provided from the MMI generator so the user's intention is not washed out by spreading it too thin. I will give this some more thought, but no matter what the algorithm, it may still require multiple user efforts (trials) to converge to a usable result.

I designed an MMI-based geographical search program using Google Earth about 15 years ago. I think the same principle could be applied to your program.

It's based on an incremental convergence approach. The user zooms out to the maximum area in which the target could be located; in your program that would simply be a square of the desired dimensions. Then the user intends (in whatever way they choose) for the program to reveal the location of their target. The user initiates a mental effort (trial) with a keypress or mouse click and the MMI generator produces 1 or 2 results. The results consist of a bit (two bits for 2D). The 2 results can be made from two consecutive blocks of data or by taking every other bit (or word) to produce the x and y results. The positive x direction is represented by 1 and the negative direction by 0, and the same for the positive and negative y directions.

If a 1 is produced for the x axis, reduce the search area by cropping the left side of the search area from bottom to top. One might expect to just take off the left one-half of the area, but if the target is close to the center, this will likely cause an unrecoverable error. Instead, crop off the left 38.2% of the area and keep the right 61.8%. These are Fibonacci ratios and provide a balance between convergence time and likelihood of accuracy. Then, if the y result is 0, crop off the top 38.2% of the area. The cropped area becomes the new area and the process is repeated until the desired resolution is reached.

After 20 efforts or trials, the resolution is 0.0000661 times the beginning dimension. That's 66.1 cm over 10 km (about a 2 foot square in a 6 mile square search area). Of course, the 20 results could be produced with a single mental effort in 1 second or less by dividing the data into 40 blocks (40 random walks, 20 for each dimension). However, as I mentioned before, that is really trying to get too much out of the current state of the technology.
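
A rough sketch of one cropping step in C# (the Area struct and the bit-to-direction convention are placeholder assumptions; the 61.8% keep ratio follows the description above):

 public struct Area { public double X0, Y0, X1, Y1; }

 public static Area CropStep(Area a, bool xBit, bool yBit)
 {
     // One convergence step: each axis keeps 61.8% of its extent on the side
     // indicated by the MMI bit (1 = positive direction, 0 = negative).
     const double keep = 0.618;
     double w = (a.X1 - a.X0) * keep;
     double h = (a.Y1 - a.Y0) * keep;

     if (xBit) { a.X0 = a.X1 - w; } else { a.X1 = a.X0 + w; }  // keep right / left part
     if (yBit) { a.Y0 = a.Y1 - h; } else { a.Y1 = a.Y0 + h; }  // keep top / bottom part
     return a;
 }

After 20 such steps each dimension shrinks to 0.618^20 ≈ 6.61 x 10^-5 of its original size, which matches the 66.1 cm over 10 km figure above.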

Feedback can be produced for each mental effort from the surprisal factor generated from the size of the z-scores of each random walk: either two, one for each dimension, or one from the two-dimensional results combined. The less probable the results, the greater the feedback.

This approach addresses the issues I raised in my previous post - not perfect, but closer to an actually possible solution.

Another interesting idea

I think it's important to reduce MMI technology to hard science whenever possible. Information theory allows just such a presentation for a geographical search task.

For example, the first question to be answered is: how many bits of information are required to localize a 1 meter square location in a 1.024 km square search area? That particular area is used because it simplifies the math for the example, and 1 meter square is chosen because there is not likely any benefit to achieving higher resolution in a practical search. There are two ways of looking at this question in two dimensions. First, there are 1024 one-meter segments in each of the two dimensions. That is 2^10, or exactly 10 bits in each dimension, for a total of 20 bits. The second way is to first find the probability of hitting a given 1 meter square in the search area. That would be 1 in 1024^2 = 1,048,576, or p = 9.536743 x 10^-7. The surprisal factor in bits is log2(1/p) = 20, and the surprisal factor is equivalent to the number of bits of information. Either approach gives the same 20 bits of (error-free) information.

The second question is: what would it take to produce those 20 bits of information? According to Von Neumann, the error-free information rate is (1 - H) x bit rate, where H is the entropy of the measured bits and the bit rate is the number of bits measured per second. The entropy of the MMI results is calculated from the hit rate (the number of correct results divided by the total number of trials), which is equal to the predictability P (the probability of correctly predicting the result). The entropy is H = -(1/ln 2)(P ln P + (1 - P) ln(1 - P)). The information rate gets higher as the hit rate increases, up to a maximum when the hit rate is perfect, or 100%. In that case, if the trial rate is 1 per second, the error-free information rate is 1 bit/second. The very best information rate achieved in highly controlled MMI testing with immediate feedback was about 1 bit per second, and that was not sustained for a full 20 seconds. Ultimately, information rates that high, or even higher with improved MMI technology, may be achievable.
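
The two formulas can be combined into a small calculation (a sketch; the method names and the example inputs are placeholders):

 public static double BinaryEntropy(double p)
 {
     // Entropy in bits of a single result with predictability p
     if (p <= 0.0 || p >= 1.0) return 0.0;
     return -(p * Math.Log(p, 2) + (1 - p) * Math.Log(1 - p, 2));
 }

 public static double SecondsToLocate(double searchSizeM, double targetSizeM,
                                      double hitRate, double trialsPerSecond)
 {
     // Bits required: log2 of the number of target-sized cells per axis, for two axes
     double bitsNeeded = 2 * Math.Log(searchSizeM / targetSizeM, 2);
     // Error-free information rate per Von Neumann: (1 - H) x bit rate
     double infoRate = (1 - BinaryEntropy(hitRate)) * trialsPerSecond;
     return bitsNeeded / infoRate;
 }

 // Example from the text: SecondsToLocate(1024, 1, 1.0, 1.0) returns 20 -
 // 20 bits at a perfect hit rate and 1 trial per second takes 20 seconds.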

The objective answer to these questions is that it would take 20 seconds at an average information rate of 1 bps to locate a 1 meter square target in a 1.024 km square search area. These numbers are fundamental and do not depend on how the search is performed or how the data is represented. This represents the ideal or best result; the search algorithm may be inefficient, and then the results will take longer.

I wrote an article about the Fatum experiment for the Randonaut community: https://www.reddit.com/r/randonauts/comments/s7oyfb/mindmatter_interaction/

I will definitely try to implement this approach too. Not sure if we're able to provide real-time feedback for millions of users though; right now the number of requests from a single user is limited. We're now working on an experiment that will test whether plants are able to produce MMI. It will be a growbox with lamps of program-controlled intensity that will change according to the deviation in a REG.

Over the years I have developed a standardized method of preprocessing MMI data and sending it to users. The preprocessing and data format is flexible enough to accommodate almost any potential application. Data is processed in 40ms blocks and combined with supplemental target information, which can be used for obscuring results from the user in certain applications. Five of these blocks, or 200 ms of data (a recommended trial duration), are sent in packets to the user application. The user can request as many of these blocks as needed for the application.

Most of the processing is done at the server location, but a small amount is more conveniently done by the application at the user's location using simple programs. That processing includes producing feedback, which takes the burden off the server side so it doesn't have to keep track of the users.

Next week I'm going to start testing word-amplification algorithms. The algorithm I developed works according to the following principle: the binary word is converted to a decimal number in accordance with the Gray code system, to reduce the impact of an error. Then an array of numbers is created whose length is equal to the amplification factor. A loop is started in which, at each iteration, the arithmetic mean of the numbers from the array is calculated within the current value bounds. The bounds are then shifted in the direction of the arithmetic mean, which becomes the center of the range. This continues until the bounds converge on one number, where the density of values is maximal; this number is fed to the output of the amplifier. Such an algorithm turned out to be extremely sensitive to the slightest statistical deviations in the distribution of numbers.
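
A rough sketch of that amplifier (the Gray decode is the standard one; the shrink factor and the stopping rule are placeholder assumptions - only the idea of re-centering the window on the in-window mean comes from the description above):

 public static int GrayToBinary(int gray)
 {
     // standard Gray-code decode
     int b = 0;
     for (; gray != 0; gray >>= 1) { b ^= gray; }
     return b;
 }

 public static int ConvergeOnDensestValue(int[] samples, int maxValue)
 {
     // Repeatedly take the mean of the samples inside the current window,
     // re-center the window on that mean and shrink it, until the window
     // closes on a single value - the point of maximum density.
     double lo = 0, hi = maxValue;
     while (hi - lo > 1.0)
     {
         double sum = 0; int count = 0;
         foreach (int s in samples)
         {
             if (s >= lo && s <= hi) { sum += s; count++; }
         }
         if (count == 0) { break; }
         double mean = sum / count;

         double half = (hi - lo) * 0.45;          // shrink factor is an assumption
         lo = Math.Max(0, mean - half);
         hi = Math.Min(maxValue, mean + half);
     }
     return (int)Math.Round((lo + hi) / 2);
 }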

However, the problem is still what we take to generate a number; for example, segments of 10 bits. This means that a repeating pattern in such a series of binary words can be detected only if it repeats at regular intervals that are multiples of 10 bits. Since in real life the intervals between repeating patterns are likely to be chaotic, the following solution was proposed:
Rotation-based number generation.
The bit stream is not divided into 10-bit segments; instead, a 10-bit window moves through it, shifting 1 bit to the left on each iteration. Thus, each 1-bit shift creates a new number, and every bit participates in every possible position of the binary word. For such a sequence, the intervals between patterns do not matter. However, another problem arose: autocorrelation. Some bit patterns have a higher regularity, and when shifted they generate values that are close to themselves, or even repeat exactly. Because of this, the probability of numbers such as 682 is slightly higher than the rest.

If you simply draw points with coordinates based on such numbers, then no bias is visible, since the bias is quite small. However, the word amplifier is so sensitive that the points obtained with its help reflect the bias quite clearly. If you use a word amplifier on numbers obtained from segments, as usual, then the bias disappears.

It is also planned to test the hypothesis that Psi generates a limited amount of information. To do this, two methods for finding attractors will be compared. One of them looks for clusters of points in two-dimensional space; the other looks for clusters of points along each axis separately and then takes the intersection of those axis values. The second algorithm should technically require less information, since it excludes the correlation between x and y values from consideration.

Yesterday I conducted an experiment to evaluate the effectiveness of word-amplification algorithms. The methodology of the experiment is as follows: a target is randomly drawn on the screen. When I click a button, a random dot is generated on the screen. My task is to hit the target with the random point. Also, depending on whether the point is in the screen area closest to the target or closer to the area where the target's mirror image would be, a Boolean success value is computed, from which the z-score is calculated. The points are generated by the binary word method and converted directly into numerical coordinates, which means that in order to hit the target, an exact bit pattern of the number must be produced in the random data.

Two sources of entropy were compared: ANU and a MED QWR4E004, since I wanted to answer the question of whether information about complex patterns is destroyed during random-walk bias amplification.

For each source, the following word amplification algorithms were compared:
No amp - no amplification; the series of bits is simply converted to a number that corresponds to the value on the x or y axis.
RWA - Rotary Word Amplifier: shifts a series of 10 bits to the left 1 bit at a time, each time getting a new number. After 100 iterations, it counts how many times each value has been repeated and returns the most frequent one. Thus, it can detect a repeating pattern regardless of the size of the intervals between occurrences (a sketch follows this list).
SWA - Similarity Word Amplifier: reads a series of 10 bits from the stream and converts it to a number using Gray code, after which it adds the number to an array. After 100 iterations, it searches the array for the densest cluster of numbers with similar values. Thus, it can find the "center" of a cluster of similar patterns with slight differences.
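
A minimal sketch of the RWA counting step (the bool[] input format, the left-shift window handling, and the default parameters are simplifying assumptions, not the exact implementation):

 public static int RotaryWordAmplify(bool[] bits, int wordBits = 10, int iterations = 100)
 {
     // Slide a 10-bit window along the bit stream one bit at a time,
     // record the window value after every shift, and return the value
     // that occurred most often.
     int mask = (1 << wordBits) - 1;
     int[] counts = new int[1 << wordBits];
     int window = 0;

     // preload the first word
     for (int i = 0; i < wordBits; i++)
     {
         window = ((window << 1) | (bits[i] ? 1 : 0)) & mask;
     }
     counts[window]++;

     // shift in one new bit per iteration and count the resulting words
     int limit = Math.Min(bits.Length, wordBits + iterations - 1);
     for (int i = wordBits; i < limit; i++)
     {
         window = ((window << 1) | (bits[i] ? 1 : 0)) & mask;
         counts[window]++;
     }

     // return the most frequent window value
     int best = 0;
     for (int v = 1; v < counts.Length; v++)
     {
         if (counts[v] > counts[best]) { best = v; }
     }
     return best;
 }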

Each source/WA combination was tested in a separate experiment.

Experiment progress:
After a series of experiments, it began to seem that the points follow my intention but do not hit the target. After 100 clicks, I calculated the relative density of the scatter of points in the area where they lay the densest. Later it turned out that the longer I ran the experiment, the denser the dots lay in certain separate places; that is, they are not evenly distributed. The z-score, however, did not reflect this at all and showed either zero or negative values.

I increased the number of points generated on the map per click to 500. Now, with each click of the button, 500 points were plotted on the map and the densest cluster among them was automatically found. Despite the fact that many more repetitions of the pattern in the bitstream were now required, the statistics began to look much more interesting.

The densest clusters of points still missed the target, but the z-score of a series of 10 clicks of 500 points showed the following results:

ANU (no amp) -0.013498
ANU (RWA) 0.674200
ANU (SWA) 2.481958

V5 (no amp) -0.067475
V5 (RWA) 1.024784
V5 (SWA) 2.550099

It can be seen that the z-score values are approximately the same in the experiments carried out with both sources of entropy and show the same growth when the word amplifiers are turned on. This indicates that the induction of complex bit patterns that repeat many times in a random data stream is possible and does not depend on the preliminary amplification of the bit stream. It is also obvious that the skill of creating such patterns increases with practice.

Thus, the SWA algorithm looks more efficient than RWA, but it is worth considering that RWA creates a binary word using a series of bit shifts of the same bit sequence, which is why it needs 10 times less entropy to achieve the same word-amplification factor. So it is possible that its lower efficiency can be explained by the lower amount of entropy at the input.
Initially, both algorithms were supposed to be parts of the same word amplifier, but they turned out to be incompatible due to the sensitivity of SWA to the autocorrelation produced by RWA. I plan to do some more research to understand how to combine them for maximum effectiveness.

In fact, one of the premises of the Fatum project was another theory of mine, with a more esoteric content. According to this theory, the world perceived by people is limited by their methodological space. That is, each person decides where to go, how to behave and what to pay attention to with a fair amount of determinism. Such decisions form a tree with a limited space of possible outcomes. Thus, a person can interact only with those fragments of reality that lie within this space of probable outcomes. In other words, somewhere there may exist a place that everyone passes by without even looking in its direction, simply because none of the logical chains in their behavior leads to looking in the direction of this particular place. We called such places Blindspots.

In theory, you can find a blindspot simply by generating random coordinates on the map, so that the choice does not depend on any methodology. In the Randonautica project, many people confirmed this hypothesis, reporting that when visiting such points they discovered places near their homes that they had never seen before. However, the found places, although outside the methodological space of these people, were still inside the methodological space of other people. Apparently, to find a blindspot that is a blindspot for all people, it is not enough just to be in an uncharacteristic place; you also need to be in an uncharacteristic sequence of events. This, however, requires working with a map of future events, which only psychics can do. This is where MMI comes in handy, a technology that can use our unconscious psychic abilities to generate locations on a map. Then, if we're lucky, we might find something that no one has ever seen before.