Fatum Project (MMI-based spatial search)

I would like to share my thoughts on our ongoing research and hear your ideas on how to improve it:

We will use the name Fatum system for a spatial search system based on MMI-induced anomalies in the distribution of coordinates. The purpose of such a system is to detect the locations of objects that are initially unknown. The geographic coordinate system is usually used as the coordinate space, although the IP address space, or any other space, could potentially be used.

Intention Driven Anomalies (IDA)

The main essence of such a system is the IDA: a deviation in the density of coordinates randomly scattered over the map. Such a deviation is believed to make it possible to detect, with high sensitivity, a trend in the displacement of points away from a uniform distribution and in favor of a certain location.
Thus, even if the points do not fall inside the desired location, their displacement in its direction generates an area of increased point density around the location.

It is assumed that the deviation patterns carry two channels of information: the superposition of all point displacements contains information about the location of the anomaly, while the size of the deviation indicates the intensity of the MMI influence.
The direction of the deviation does not matter here; however, anomalies are conventionally divided into two types. Attractor points (Psi-Hit) are areas of increased point density toward which the displacement tends. Void points (Psi-Missing) are areas of anomalously sparse points. It has been found that some people may have a predisposition to Psi-Missing, that is, it is easier for them to produce such anomalies than Psi-Hit.
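
For illustration, here is a minimal sketch of grid-based IDA detection (not a production detector; the grid size, z-score threshold, and unit-square coordinates are assumptions chosen for clarity): count the points in each cell, compare each count with the binomial expectation for a uniform scatter, and flag cells whose deviation exceeds the threshold as Attractors or Voids.

import math

def detect_ida(points, grid=20, z_threshold=3.0):
    # Flag attractor/void cells by comparing per-cell point counts
    # against the uniform expectation. `points` are (x, y) pairs in [0, 1).
    counts = [[0] * grid for _ in range(grid)]
    for x, y in points:
        counts[int(y * grid)][int(x * grid)] += 1
    n = len(points)
    p = 1.0 / (grid * grid)          # chance of one point landing in a given cell
    mean = n * p
    sd = math.sqrt(n * p * (1 - p))  # binomial SD of a cell count
    attractors, voids = [], []
    for gy in range(grid):
        for gx in range(grid):
            z = (counts[gy][gx] - mean) / sd
            if z >= z_threshold:
                attractors.append((gx, gy, z))   # anomalously dense cell
            elif z <= -z_threshold:
                voids.append((gx, gy, z))        # anomalously sparse cell
    return attractors, voids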

The advantage of this methodology relative to a simple Random Walker with a single point is the ability to recognize, from the intensity of the arising anomalies, the moment to stop the search. In the case of an RW, we cannot say with certainty when the wandering point has reached its target. In addition, this methodology is resistant to Psi-Missing and allows multiple targets to be found at once.

Types of objects to be searched

Objects for search can be conditionally divided into categories:
Literal objects - when you need to find a specific object that can be visualized, such as a red ball.
Abstract objects - when you need to find an object whose nature and properties are unknown, for example: an object with paranormal properties, a source of novelty, something that will help make a scientific discovery.
Causal objects - in this case, visiting a location should provoke an event, or start a cascade of events, that will lead to the desired outcome.
Blind spots - an object or location that cannot be found methodically, that is, a visit to this location lies outside the space of outcomes of our logical thinking. In other words, none of the chains of decisions and associations we make leads to a visit to this particular place. The existence of such places has been experimentally confirmed in the Randonautica experiment, and although a simple randomizer is enough to find them, the MMI effect can increase the likelihood of their detection.

Calibration methods

The visible-target technique is used for experimental testing of the method. A target is drawn on a graphic canvas, then thousands of points are randomly scattered on the canvas and anomalies are searched for. The run is considered a success if the operator manages to provoke an anomaly inside the target area. However, this method differs from real operating conditions by the presence of visual feedback, since in real conditions the location of the target is unknown.
For this reason, it is also recommended to pay attention to additional factors accompanying a successful hit, such as the intensity of the anomalies, the rate of change of the deviation of the mean coordinate from the center, etc.
In this case, the target itself is generated pseudo-randomly at a given time. During the experiments, it was found that an anomaly sometimes appears at the location of the target a second before the target itself appears, which also suggests that the method does not depend on the presence of real-time feedback.

Point Generation Methods

  1. Direct conversion method

This method is used in the Randonautica experiment and is based on directly converting a stream of bits into random numbers (with protection against problems such as modulo bias). Groups of 4 bytes are taken from the RNG data stream and converted into numbers, from which the coordinates of random points are generated (a sketch of this step appears at the end of this subsection). Typically, the entropy for such generation is collected at the moment the user clicks a button, converted into an array of points, and analyzed for the presence of IDA. This method has two significant flaws:

a) Bit positioning - when a sequence of bits is read and converted to a number, the first bits affect the result much more than the subsequent ones. Thus, an element of temporality is introduced into the operation of the system, since the result depends strongly on the moment at which the reading of a series of bits is started. This methodology also assumes that the MMI effect can generate bit patterns complex enough to determine the correct point locations (which should not be a problem if the effect does not depend on the complexity of the algorithm).

b) Low exposure time - since the entropy is collected instantly, the psi exposure time on the system is short, on the order of milliseconds. Thus, the MMI dose absorbed by the entropy turns out to be too small for high-quality detection. To compensate for this problem, it is proposed to split the required amount of entropy into several chunks and request them at intervals of 200 ms to 1 s. It is assumed that in this way the absorbed MMI dose will increase in proportion to the number of chunks.

Experiments show that the effectiveness of this method is rather low, and hits on the target do not occur in every iteration. However, for mass use this method has a significant advantage: because it consumes entropy at one specific point in time, it avoids conflicts between the intentions of different users. This is essential if the system is used by thousands of people at the same time.
It is not known whether the temporality effect in the reading of positional bits decreases the MMI sensitivity or, on the contrary, provides it, but in individual tests the results are sometimes very accurate, yet poorly reproduced across a series of tests.
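
To make the conversion concrete, here is a minimal sketch of the direct conversion step in Python; the byte-source callable, the unit-square output, and the chunking interval are illustrative assumptions, not the actual Randonautica implementation.

import os
import time

def uniform_u32(next_bytes, bound):
    # Draw an integer in [0, bound) by rejection sampling to avoid modulo
    # bias: values at or above the largest multiple of `bound` below 2^32
    # are discarded and redrawn.
    limit = (2**32 // bound) * bound
    while True:
        n = int.from_bytes(next_bytes(4), "big")
        if n < limit:
            return n % bound

def random_points(next_bytes, count, chunks=1, interval=0.2):
    # Generate `count` points in the unit square (scale to the map's
    # bounding box as needed). With chunks > 1, entropy is requested in
    # portions `interval` seconds apart to lengthen the psi exposure time.
    points = []
    for c in range(chunks):
        if c > 0:
            time.sleep(interval)
        for _ in range(count // chunks):
            x = uniform_u32(next_bytes, 10**6) / 10**6
            y = uniform_u32(next_bytes, 10**6) / 10**6
            points.append((x, y))
    return points

# Example, with the OS generator standing in for a true entropy source:
points = random_points(lambda k: os.urandom(k), 1000, chunks=5, interval=1.0)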

  2. Randomly Walking Points Method

This method is well suited to laboratory use, as it provides high sensitivity and does not share the problems of the previous method.
Initially, an array of points is generated randomly using the direct conversion method, which avoids having to wait for the points to become evenly distributed across the map. Then, at each iteration, every point is randomly displaced according to the Random Walker principle. For greater sensitivity, the offset along each coordinate axis is determined by the deviation in a series of 100 bits (calculated as the sum of 100 bits, where true = +1 and false = -1).
As they move, the points regroup, forming an IDA, whose presence is checked at each iteration.
The advantage of such a system is the maximum exposure time and the absolutely equal influence of all bits on the result, and therefore independence from the temporal factor.
The disadvantage of this method is high inertia and vulnerability to conflicts of intentions. It was found that, depending on the average density of the points in the array and the size of the bit series, the points need a certain number of iterations to regroup from one state to another. That is, when an IDA already exists and the system is acted on with a new intention, it takes time for the current anomaly to dissociate and a new one to form as the points migrate to the new area.
This problem rules out simultaneous use of the system by a large number of users; however, experimental data show greater accuracy and reproducibility of the result. (A sketch of one iteration follows.)
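
A minimal sketch of one such iteration, under the 100-bit-series rule described above (the step scale and the bit-source callable are arbitrary assumptions):

def bit_series_offset(bits):
    # Deviation of a series of bits: sum with true = +1 and false = -1.
    return sum(1 if b else -1 for b in bits)

def walk_iteration(points, next_bits, series=100, step=1e-4):
    # Displace every point along each axis by the deviation of its own
    # `series`-bit block, scaled by `step` map units per unit of deviation.
    new_points = []
    for x, y in points:
        dx = bit_series_offset(next_bits(series)) * step
        dy = bit_series_offset(next_bits(series)) * step
        new_points.append((x + dx, y + dy))
    return new_points

After each call, the resulting array would be passed to an IDA detector (as sketched earlier) to check whether an anomaly has formed.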

Result detection problems

One of the main problems in detecting a result is the tendency of the system to converge statistically. Just as a rubber-band effect often occurs in conventional MMI measurements, in Fatum searches the appearance of Attractor points is observed to be compensated by the appearance of Void points nearby. To ensure statistically reliable anomaly detection, arrays with a high density of points (several thousand per square kilometer) are used, but at such densities convergence can make anomalies harder to detect or reduce their statistical significance.
This becomes a problem when the IDAs hitting the target are less "anomalous" than similar patterns that occur randomly.
Hence the need to detect secondary signs of a hit, which would help to pinpoint the moment of the hit and thus narrow the circle of candidates for the target IDA. This moment is assumed to be the period of peak intensity of MMI exposure.
The possibility of assessing this intensity is offered by two candidate markers: the rate of change of the deviation (the sharpest drops) and the amount of entropy calculated by the BiEntropy method.
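
For reference, here is a minimal sketch of BiEntropy in the weighted form published by Croll (whether this exact variant is the one used here is an assumption): it averages the Shannon entropies of the successive binary derivatives of a string, weighting later derivatives exponentially, and is intended for fairly short bit strings.

import math

def bientropy(bits):
    # BiEntropy (Croll): weighted mean of the Shannon entropies of the
    # binary derivatives of `bits`; near 1.0 for disordered strings,
    # near 0.0 for strings that are periodic or constant.
    s = list(bits)
    total, norm = 0.0, 0.0
    for k in range(len(bits) - 1):
        p = sum(s) / len(s)      # proportion of 1s in the k-th derivative
        h = 0.0
        if 0 < p < 1:
            h = -p * math.log2(p) - (1 - p) * math.log2(1 - p)
        total += h * 2**k
        norm += 2**k
        s = [s[i] ^ s[i + 1] for i in range(len(s) - 1)]  # next binary derivative
    return total / norm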

Here are some thoughts on your message. If I repeat something you already said, I'm just trying to tie together the flow of information in a complex thread.

Geographic coordinates will likely be the best choice because of the universal availability of imaging and mapping programs, and because the resolution is high enough to uniquely identify positions to tens of centimeters or a foot, or even less if desired.

There are three important aspects of MMI-based applications:

  1. The entropy source or MMI generator.
    The design of a good MMI generator has been described elsewhere in the forum. To summarize, the true entropy of each bit should be close to 1.0 bits in every bit used. Preferably the statistical properties of the output bits should be sufficiently good so that deterministic post-processing is not required. Finally, it is highly desirable that processing methods be applied to enhance the responsivity (effect size) of the output bits.

The random numbers used by Randonautica were provided by a service at the Australian National University (ANU). My understanding of their service is that they used some form of what they call a quantum random number generator (QRNG) to periodically seed a pseudorandom generator for the final output of random numbers. This system lacked every desirable property of a good MMI generator:

a) The true entropy of each output bit is the number of seed bits since the last reseeding divided by the number of output bits. The entropy per bit is likely a tiny fraction, near zero bits per output bit.
b) Deterministic post-processing is inherent in the generation of their output bits.
c) They do not provide any processing that would enhance the responsivity to mental influence.

  2. The processing of the MMI data.
    There are many ways to process MMI data to create the desired results of the MMI application. The processing is very much related to the specific purpose, and it is generally the most complex part of the software. As was noted, using binary words to represent location information is an extremely noisy approach, since the MSB (most significant bit) will have hundreds to thousands of times the weight of the LSB (least significant bit). This makes that approach unusable for most MMI applications.
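
A quick illustration of that weight imbalance (hypothetical word sizes; the ratio is simply 2^(b-1) for a b-bit word):

# Relative weight of the MSB versus the LSB in a b-bit binary word.
for b in (10, 16, 32):
    print(b, 2 ** (b - 1))
# 10 -> 512          (hundreds of times, as noted above)
# 16 -> 32768
# 32 -> 2147483648   (for full 32-bit coordinate words the imbalance is astronomical)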

Most ways of getting responsive MMI results involve taking a large number of individual MMI bits and converting them into the form used by the application. A random walk is one way to perform the initial processing. It should be noted that the specific sequence or timing of each output bit does not affect the final position. That position can be converted to a z-score and then to a probability in order to produce a linearized result, which is usually the best form of output. Or, the MMI bits can be added together and converted to a z-score (by subtracting the mean and dividing by the standard deviation) and finally to a probability.
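
A minimal sketch of the second variant (bit sum to z-score to probability), assuming a fair-coin bit source:

import math

def bits_to_probability(bits):
    # Sum the bits, convert to a z-score (mean n/2, SD sqrt(n)/2 for fair
    # bits), then map through the standard normal CDF to get a linearized
    # output that is uniform in (0, 1) under the null hypothesis.
    n = len(bits)
    z = (2 * sum(bits) - n) / math.sqrt(n)
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))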

MMI data is not like other binary data because it is produced in response to mental influence. A user can produce a biased result, either consciously or unconsciously, by intending the results toward one direction or another. One way to overcome this is to add another layer of processing that randomizes the results. When this approach is used, it may be desirable to break the responses down into sub-blocks of data, randomizing them separately and then combining them into a final output.

Two results are required for each trial when 2-dimensional position information is produced. These can be obtained from a single MMI data stream by taking alternating bits or bytes to produce each output, or they can be produced one after the other. However, the latter method may not be the best approach, since there can be a significant time shift between the outputs for the two dimensions.

  3. User interaction and feedback.
    User interaction is what the user sees and controls during the MMI application operation. This must be engaging, interesting and easy to understand (intuitive operation). In addition, I have found the best MMI results are obtained when the user receives some form of real-time feedback for their effort or trial. Real-time means less than 1 second from initiating a trial, but 0.2 seconds is better if possible.

Real-time feedback is not possible when a result is built up over time and/or from input by a number of different people. In that case, feedback can be generated from the size of the z-score, or from the probability converted to an ME Score (the surprisal factor, i.e., the log base 2 of the reciprocal of the probability). One of these quality factors is also a good way to rate individual trials for quality.

Results from multiple trials from one person or from multiple people can be combined simply by adding their z-scores and dividing by the square root of the number of trials combined. Or, a more sophisticated method of combining data uses Bayesian analysis. This approach can include individual accuracy rating for each user and/or quality factors from each trial. The way trials are combined depends on the flow of the application and how much effort or time is invested in each incremental update.
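
Minimal sketches of both formulas mentioned above (the Bayesian variant is omitted; these are just the simple combining rule and the ME Score definition):

import math

def combine_z(z_scores):
    # Stouffer-style combination: add the z-scores and divide by the
    # square root of the number of trials combined.
    return sum(z_scores) / math.sqrt(len(z_scores))

def me_score(p):
    # ME Score as the surprisal factor: log base 2 of 1/p.
    return math.log2(1.0 / p)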

Thank you for the response. I agree for the most part, but can you give some quotes that prove ANU is seeding a pseudo-RNG with QRNG data? I read their abstracts and there is some post-processing mentioned in there, but they also talk about "random numbers that can be made arbitrarily correlated with a subset of the quantum fluctuations".
I'm trying to prove to the Randonautica team that ANU should be replaced with an MED, but, as you mentioned, binary words are too noisy to show a difference that would look obvious enough, and this method is only suitable for this particular experiment because of its crowdsourced nature, which requires millions of people using it without intent conflicts.
Feedback from ANU gave some positive results in my tests though; I will describe some of them in this topic soon.

The Random Walker definitely looks more accurate, and we're planning to build a separate research site based on it in the near future, but with a different experiment structure, since it cannot be shared widely through the app: its inertia would cause noise from conflicting intents.

In this test I tried to study the dynamics of the binary word method. For this purpose I reduced the map from a 2D surface to a 1D line. I generated 1 line per second, filling it with 100 random points. For each line I calculated Attractor points (small orange circles) and psi-missing Void points (small blue circles). After 100 lines were generated, I calculated attractors and voids for the whole X-Time surface (big yellow and blue circles), and also meta-attractors (green circles), which are points of maximum density of regular anomalies. The green bars in the middle show the change in the deviation between the average point and the center (I hoped to see some correlation between its value and successful hits).

To determine a successful hit I concentrated on the target (red vertical line) and tried to make anomalies hit it. The target is generated pseudorandomly after 20 seconds of the experiment; that moment is shown by the red horizontal line. The blue and green vertical lines represent the average value and the anomaly clustering point.

The experiment was done with the following entropy sources: Pseudo, ANU, Psyleron REG, V2 (QWR4A004), V4 (QWR4B003), and V4 amplified with the BiEntropy algorithm.

Results:

  1. Regular IDAs hit the target poorly in all cases, but the X-Time surface IDA appears near the target in all tests for both MEDs and ANU, except for the Psyleron, where it missed the target entirely.

  2. A very interesting observation was made for V4. In both the V4 and bi-V4 experiments an attractor point appeared exactly on the target at the exact moment the target was created. In the case of bi-V4 it was even a second earlier. Interestingly, there is no significant difference in the general statistics, apart from that insanely accurate hit at the single moment when the target became visible.

  3. The V4 experiment showed an interesting psi-missing effect, where an X-Time surface Void appeared near the target instead of an Attractor, and attractors mirrored the target on the other side of the line.

In the second experiment I tried to test whether collecting entropy in chunks at 1 s intervals would increase the accuracy. The experiment is done the same way, by generating 360 1D lines and trying to focus on the red vertical line (this time it is present from the beginning). The difference is that IDAs are calculated for a virtual line that is the sum of the 20 lines between every pair of green bars.


Yellow anomalies are attractors, and blue are voids.

Results:
It is plain to see that the hit rate increased, up to 4 hits in 18 trials. The green circle also shows that the anomalies in general cluster very close to the target line.
X-Time surface attractors were close to the target in the ANU test again.
Yet no significant difference was spotted between the V4 and ANU sources.

My theory is also that a temporal factor may play into the MMI effect somehow, since the binary word method is very sensitive to the time when the entropy reading is started.

I have only made 1 test with the Random Walker so far. Here I randomly generated 100 points on a map. On every iteration I random-walked each point by the deviation in two series of 100 bits (one series per axis).
In the picture I didn't delete the previous positions of the points, so their trails are visible, but for the program there are only 100 points on the map at any moment, and IDAs are calculated at each iteration.
I used ANU as the source for this test.

Result:

In the red squares are clouds of attractors that appeared initially, when the points were randomly generated. Those don't count, since they are not related to the RW and were just detected over several iterations.

The big blue circle is the target that I tried to focus on. There were absolutely no attractors in the target area in the initial point set.

After 100 iterations, a new attractor cloud appeared directly in the target area. That showed the RW to be very accurate, but it takes time for the points to regroup to the target position.

My information may not be up to date. I have had interactions with them over the years, since they wanted me to collaborate with them on their generator (which I didn't do).

I will do some more research into ANU’s online service, including reaching out to them directly.

Even if the entropy is very high (which is still unknown), it is also necessary that the MMI bits be generated and used in real time, that is, within a fraction of a second of the trial being initiated. This is a requirement I didn't mention before. Otherwise, there will be a temporal disconnect between the user's focus during the trial and the generation of the bits that are supposed to be influenced. If the bits are stored even for several seconds before being provided by the service, that gets more into retrocausality, which has a weaker foundation and demonstration of effect.

I just tried entropy from the QWR4E004 and it shows incredible results with random points (binary word method).
I can see attractors appear in the target area with the slightest attempt to focus on it.

The strange thing is that it only works that well for single iterations. When I merge several point arrays, created at 1-second intervals, the effect disappears.
Random Walker performance is not that impressive either. Sometimes the RW cannot hit the target at all; sometimes it hits it, but only for a short period of time and after lots of iterations.

Do you have any thoughts on why the noisiest method, with the lowest exposure time frame, works best with it?


Earlier I noticed that the QWR4B003 quite often guesses, very accurately, the place where the target will appear. However, the attractor appears at the target's future location a second before its appearance and immediately disappears. After that, hardly any serious deviations in point density are observed in the target area, or they too appear briefly at the moments of greatest concentration of attention.
In the case of the QWR4E004, anomalies appear even with slight concentration of attention, but as soon as you increase the exposure time, all sensitivity disappears, as if the Fatum algorithm works effectively with MED entropy only over short periods of time.

I also noticed an interesting pattern: when concentrating on the target line, attractors clearly alternated with voids.

My hypothesis is that it is high-frequency rubber-band oscillations that neutralize the statistical effect over long exposure times. If we sum up the entropy over two seconds, for example, then attractors and voids manage to overlap and annihilate several times.

Statistically, a single word (or two words for 2D information) is so noisy that it would take many samples (10s to 100s) to significantly average out the noise. Certain ways of combining the positions will produce ties if the number of samples is even. Ties are indeterminate results and will cause a bias if not avoided by design. The average of very many random point positions is just the center of the field, depending on exactly how they are averaged. What is your algorithm for combining multiple point arrays?

By the way, make sure to use an odd number of bits for any random walk to avoid ties: the sum of an odd number of ±1 steps is always odd, so it can never be zero.
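
A quick exhaustive check of that parity argument for a small odd walk length (illustrative only):

from itertools import product

# Every possible 7-step walk of +/-1 steps has a nonzero sum: no ties.
assert all(sum(steps) != 0 for steps in product((-1, 1), repeat=7))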

Finding the best, or nearly best, sampling and processing method is probably the hardest problem in the application. One caution: expect to do many test runs of each approach. This is necessary to avoid being misled by statistical variations that are actually random.

I have noticed many times over the years with MMI that initial tests of a new system often seem to produce even more anomalous results. Perhaps a kind of "beginner's luck" that is actually possible with a system designed to respond to mental influence. Sometimes this effect is so strong it really looks like something is broken, when it is really working perfectly. This is another reason to run a lot of tests.

This is in no way meant to be a complete answer to your question, which is harder than it may seem.


What is your algorithm for combining multiple point arrays?

I just merge the collections of coordinates and use the result as a single map with twice as many points.

I think this illustrates the higher effect size of real-time results during peak mental effort (focus or concentration), but it also depends on exactly how the data is processed and combined for longer runs. It also matters whether the trials are initiated or the sampling is continuous. Initiated trials or measurements are usually more responsive than continuous sampling. This is due in large part to the fact that the user can initiate the trial during, or at the exact time of, peak mental effort. In addition, when the user is not exerting focused effort, random or uninfluenced data is being added, diluting the intended results.

MMI results are non-stationary, meaning they are momentary deviations from the baseline random statistics. In the long term the MMI data will look purely random. The rubber band effect is very real. After concentrating for some time and pushing the results as much as possible, the output will very quickly return to its random average when the effort or focus is allowed to slip even a little. The larger the effect size, the faster the snap back.

I contacted ANU about how they generate their random numbers for their service and this was their reply:

Our random numbers are 100% generated with quantum entropy.
CQC2T@ANU
Centre of Excellence for Quantum Computation and Communication Technology

To be clear, they likely don't understand entropy sufficiently to make that assertion. They do apply deterministic post-processing because their raw bits have significant statistical defects. This automatically means the raw bits do not contain 100% quantum entropy. More precisely, they are using an entropy source they consider to be 100% quantum in nature. The statistical defects (bias and autocorrelation, the most common in any raw sampling of entropy) are certainly deterministic in nature.

I have not yet gotten a reply as to the latency between generation and sending out data on their service. Their documentation suggests it is available immediately, but a real-time server is hard to build, and they almost certainly have some local buffering or storage.

The entropy of the ANU service is unlikely to be close to 1.0 bits per bit, but is more likely to be in the range of 0.5 to 0.95 based on general knowledge of QRNG systems. There is no effect size enhancement in their system. My analysis suggests their source is fair for MMI purposes. The latency would be a real concern, but they may not even know what it is because they don't consider it important. The Internet is inherently not a real-time communication system.

Have you actually sampled some data from ANU and found these bias and autocorrelation statistical defects in the stream?

ANU only streams post-processed data, not the raw samples of their entropy source. Therefore, testing the data on their service will only indicate how good their post-processing is. The post-processing appears to be an AES128 algorithm, which would provide statistically very good results if they implemented it properly, so I don't expect to see significant defects. I have not tested this myself, but I am sure they and other people have.

They say in their documentation they process the data and I know from experience it will contain at least bias and probably autocorrelation (AC) since they are trying to maximize bit rate, which always results in additional autocorrelation. Even without trying to maximize bit rate, it would theoretically take infinite circuit bandwidth to obtain zero AC. But again, these defects are expected in the raw data and will be obscured in the post processed output. See a block diagram of their system: https://journals.aps.org/prapplied/abstract/10.1103/PhysRevApplied.3.054004 Click on the Block Diagram image below the abstract.

If they really understand entropy, which very few people do, they could obtain processed data with theoretically full entropy by outputting fewer bits than the input entropy rate. However, the output entropy would still not be fully quantum mechanical because the raw sampled entropy is a mixture of quantum and classical entropy. There is absolutely no way to algorithmically separate different types of entropy in a mixture. Randomness can be ensured by quantum mechanics with the proper processing, but MMI depends on more than just randomness.

I'm really interested to see how the Randonautica server will improve the user experience when fed with MED entropy. My tests showed awesome results with an algorithm similar to the one used on the Randonautica server, fed from the QWR4E004. I could ask their devs to switch to an MED feed, but since the project is partly commercial, we would need to purchase an MED to be able to share its entropy with the Randonautica server. How much will it cost and how can we buy it?

I'm not selling any MMI generators at this time, but you have my permission to use the loaned MED as a feed for Randonautica. I want to promote acceptance and development of MMI applications because it will become important in the future. My only condition is that what is learned is shared with us. We will all benefit from seeing how a real MMI generator performs in that application.

If it works out well we can discuss a more permanent solution. I have the capability to provide 1Mbps MMI generators when higher bandwidth is needed.


I'm having trouble designing the RandomWalker method. I have a square map with 100-1000 points. Each point random-walks, and at every iteration I detect anomalies in point density. The trouble begins when I look at the distribution of anomalies from all iterations. A random-walking array of points seems to be incredibly sensitive to the initial distribution. If I generate the initial positions of the points randomly and the set contains some density deviation, this deviation persists through the entire session. I decided to make the initial points evenly distributed, as a square grid containing no density anomalies, so that all anomalies could only appear from the random walker itself. After several iterations I found that the anomalies were distributed as a square grid.

If a large number of random walks start with their points of origin distributed over a roughly square grid, the resulting terminal points of the walks will be roughly grid-patterned as well. Walks that all start at the center or origin (0, 0 of an x, y plane) will give a circular distribution. The density of terminal points will be distributed normally (Gaussian distributed) in two dimensions around the origin.
From one of my papers (see the full paper on Google Drive):

Fig. 2. Example of 2D bounded random walk terminal coordinates for 10,000 separate walks, each of 10,000 steps (N).

The terminal coordinates are normally distributed in each dimension, with a mean of 0.0 and a standard deviation SD = sqrt(N). Since the mean is 0.0, the terminal coordinates can be converted to z-scores by dividing by the standard deviation:
z-scores = (x, y)/SD
The z-score of each coordinate can be converted to a uniform variate by a simple inverse approximation*.

*A Mathematica program for converting z-Scores to uniformly distributed variables is in Appendix 1.

Fig. 3. Example of 10,000 2D 10,000-step random walks with each set of coordinates converted to uniform variates. Note, the coordinates of each terminal point can easily be scaled so they cover a symmetrical range of -1 to +1 in each dimension. That scaling is
(x', y') = (2x - 1, 2y - 1)
Further scaling may be accomplished by multiplying the coordinates by a constant equal to the maximum range needed by an application. This type of transformed random walk is useful when the terminal location is meant to be equally probable in any part of a square area.

In addition to the linearized 2D walk, a 1D walk is prepared the same way, either in the x or y dimension. A 1D walk, ranging from 0 to 1 for example, is useful as a simple slider to provide linear control of one or more values.

The walks of Figures 1-3 are examples of bounded walks that end after 10,000 steps (in these examples).
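
For anyone implementing this outside Mathematica, here is a minimal Python sketch of the pipeline just described, using the exact normal CDF (math.erf) in place of the curve-fit approximation of Appendix 1. The pseudorandom bit source stands in for a real MMI generator, and with a live source the bits for the two axes would preferably be taken alternately rather than sequentially.

import math
import random

def bounded_walk_2d(n_steps, rand_bit=lambda: random.getrandbits(1)):
    # 2D bounded random walk: n_steps of +/-1 per axis.
    x = sum(2 * rand_bit() - 1 for _ in range(n_steps))
    y = sum(2 * rand_bit() - 1 for _ in range(n_steps))
    return x, y

def to_uniform(coord, n_steps):
    # Terminal coordinate -> z-score (SD = sqrt(N)) -> uniform variate in (0, 1).
    z = coord / math.sqrt(n_steps)
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

x, y = bounded_walk_2d(10_000)
u, v = to_uniform(x, 10_000), to_uniform(y, 10_000)
print(2 * u - 1, 2 * v - 1)   # scaled to the symmetrical range -1 to +1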

APPENDIX 1 - UTILITY PROGRAMS

Z-Score to probability conversion program.

The following Mathematica program employs a curve fit to approximate the cumulative probability of the normal distribution. It can be used to transform normally distributed numbers into uniformly distributed numbers. The accuracy is ±0.05% for z-scores up to ±4, and ±1% for z-scores up to ±7.5.

cdf[z_] := Module[{c1, c2, c3, c4, c5, c6, c7, w, t},
  (* constants of the curve fit *)
  c1 = 2.506628275; c2 = 0.31938153; c3 = -0.356563782;
  c4 = 1.781477937; c5 = -1.821255978; c6 = 1.330274429; c7 = 0.2316419;
  If[z >= 0, w = 1, w = -1];
  t = 1. + c7 w z;
  (* return the approximate cumulative probability *)
  0.5 + w (0.5 - (c2 + (c6 + c5 t + c4 t^2 + c3 t^3)/t^4)/(c1 Exp[0.5 z^2] t))
]


You said it will become important in the future. Can you expand on that, please? What do you think will happen in the future that will raise the importance of promoting and accepting this technology?