Fatum Project (MMI-based spatial search)

Yes, it looks fine now. It was my bad, sorry for the inconvenience.

Excellent!

No problem. I find that while debugging these issues everything gets tested more extensively, giving more confidence the program is working as expected. I see I have to enhance the random generator output in Mathematica since it seems to fail as more numbers are used in my simulations. Wolfram thinks their generator is so good, but I have seen it fail in a simulation before.

By the way, I looked at your table of Lx difference values. I get exactly the same numbers. You are correct, the derivative of the cumulative normal distribution function is maximum around 0.0 and decreases as |x| gets larger.

@ScottWilber

I have another question from the field of statistics.

Since Randonautica has a fairly large user base, it is possible to select a team of espers with high psi-influence from it and invite them to a separate chat. However, the selection methodology itself is still in question.

The only indicator that can reflect psi abilities in Randonautica at the moment is the z-score value returned when calculating the attractor/void point. It is important to emphasize that attractors have positive z-scores and voids negative ones, but since both are treated equally as anomalies, the app most often returns to the user whichever anomaly has the highest absolute z-score.

Thus, a user’s psi influence on the system is characterized not by a predominance of positive values over negative ones, but by leptokurtosis, that is, by heavy tails in the distribution. Complicating the task a little is the fact that, due to the peculiarities of the algorithm, voids have on average slightly higher z-score values than attractors, which is why they should probably be counted separately.

The question is how to determine the Psi-rating of a particular user, so that later they can be sorted and recognized as top 10 espers, for example.
The problem here is that each user may have a different number of generated points in their application history, and the ratio of attractors to voids may also vary. If we just take the average z-score, then a user who generated only 10 points has a higher probability of having their psi level overestimated by chance and overtaking a more effective applicant who generated 100 points.

At the same time, the distribution of users by the number of generated points will most likely be such that small series will have an order of magnitude more users than large ones.

The question is how to organize the count more reliably.


This is a very important subject, since combining results from a select pool of talented users is a way of arriving at very accurate results for unconstrained requests (no predetermined boundaries) for information. It should be noted that, if done properly, every user can have a weighting factor, which is used when combining results. This allows every user’s data to be combined without introducing error into the results. If their average weight is 0.50, they provide no better than a random guess and their input will not contribute to or change the combined results.

Randonautica may have a large user base, but how many of them have a large number of trials (uses of the algorithm)? It’s not possible to learn anything from a single trial, regardless of the internal statistics. Even 10 uses/trials, as you suggest, will have a significant uncertainty, since the error decreases as one over the square root of the number of samples (trials).

One of the big issues here is, what is the true assessment of mental influence based on a single trial? As I understand it, the algorithm requires only a single initiation/keypress by the user. This is a single trial and should have only a single probability, p. That is the probability of the null hypothesis, i.e., what is the probability the results could have occurred by chance in the absence of any mental influence. To arrive at that number requires knowledge of the probability distribution function (PDF) of the Randonautica search algorithm. Given the PDF, the cumulative distribution function (CDF) can be calculated, which will yield the desired probability, p, for each trial. Trials can be combined, at least approximately, by converting the p value to an estimated z-score (this assumes the PDF is close to a normal distribution), adding all z-scores, dividing by the square root of the number of trials combined, and finally converting the normalized z-score back to probability. That is the probability of the null hypothesis for the collection of trials.
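
For concreteness, here is a minimal Mathematica sketch of that combination step (Stouffer’s method), assuming each trial already has a one-tailed p value for the null hypothesis; the p values below are made up for illustration:

(* combine per-trial null-hypothesis p values via Stouffer's method *)
pValues = {0.30, 0.04, 0.12, 0.50, 0.02}; (* hypothetical one-tailed p values, one per trial *)
zScores = InverseCDF[NormalDistribution[0, 1], 1 - #] & /@ pValues; (* convert each p to a z-score *)
zCombined = Total[zScores]/Sqrt[Length[zScores]]; (* normalized sum of the z-scores *)
pCombined = 1 - CDF[NormalDistribution[0, 1], zCombined] (* probability of the null hypothesis for the collection of trials *)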

This leaves the big open question: how does one calculate the probability or z-score of a single use or trial of the Randonautica algorithm? Again, as I understand the algorithm, a large number of points are generated in response to a user’s initiation. These are arranged in a 2-dimensional plot wherein exist areas of higher point density (attractors) and lower point density (voids). I would guess there is no actual equation that yields a single probability or single z-score. That is, there is no proper PDF or CDF for this algorithm. If I were approaching the problem, I would make an educated guess about how to combine your internal z-scores into a single value. Or, as you do, take only the single maximum value z-score as the result. If there is a difference between attractor and void z-scores, the simplest approach is to find their ratio and scale one of them so they are equal. Then run the algorithm 1,000-10,000 times and use the resulting values to make a trial distribution. After I saw the results, I would be able to see how to normalize them into a usable distribution, or make another attempt at combining the internal algorithm results and make a new trial distribution. All this to be done without mental influence (run without supervision or observation) since that is the definition of the null hypothesis.
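
If it helps, here is one way such a trial distribution could be collected in Mathematica. The function fatumMaxZ[] is a hypothetical stand-in for a single unattended, unobserved run of the search algorithm returning its maximum-|z| result; it is not part of any existing code:

(* build an empirical null distribution from repeated unattended runs *)
nullSamples = Table[fatumMaxZ[], {10000}]; (* fatumMaxZ[] is hypothetical: one unobserved run, returning its max-|z| value *)
nullDist = EmpiricalDistribution[nullSamples]; (* empirical distribution of the null results *)
pOfResult[z_] := 1 - CDF[nullDist, z] (* null-hypothesis probability of a result at least this extreme *)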

A user’s rating would then be related to their average probability of “being correct.” That’s difficult with the Randonautica algorithm since there is really no correct or incorrect result. The Mind-Enabled Trainer is designed to directly measure that value as a Hit Rate. Experience suggests a user’s hit rate (HR) for unconstrained trials (real world efforts) is no more than half their results using the METrainer. If the average HR is 0.55, the estimated unconstrained rate would be 0.525. From that, each user has a rating that can be used as a weighting factor to combine multiple results from them or results from multiple other users using Bayesian analysis or some probabilistic algorithm. (I prefer the Bayesian approach because it is easier to combine data from users with different skill levels.) Note, a person’s “rating” is conveniently the Mind-Enabled Rating, or Log (base 2) of 1/p, where p is calculated from the average hit rate under a defined set of testing parameters.
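
To make the weighting idea concrete, here is a small sketch of how such a rating could be computed. The trial count and the normal approximation to the binomial are my own assumptions for illustration, not a specification of the METrainer:

(* turn an average hit rate into a null-hypothesis p and a log2(1/p) rating *)
hitRate = 0.55; nTrials = 500; pChance = 0.5; (* assumed testing parameters *)
z = (hitRate - pChance) Sqrt[nTrials]/Sqrt[pChance (1 - pChance)]; (* normal approximation to the binomial *)
p = 1 - CDF[NormalDistribution[0, 1], z]; (* probability of doing this well by chance *)
rating = Log2[1/p] (* Mind-Enabled Rating *)
estimatedUnconstrainedHR = pChance + (hitRate - pChance)/2 (* roughly half the gain carries over to unconstrained trials *)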

Note, the suggested approach only provides a test for the null hypothesis; it does not say anything about how the Randonautica algorithm responds when there is a mental influence, that is, the alternative hypothesis.

I should add, a distribution function constructed from measured data is called an Empirical Distribution Function (EDF) and is often made directly as a cumulative distribution function or ECDF because it is often easier to do. The more points available, the better. From experience, the PDF made from picking the max value of a collection of z-scores, where several are available for each sample, does not result in a normal distribution. Rather it is more peaked (higher kurtosis) and is not likely symmetrical. In your example, you may notice “voids” and “attractors” are two different processes: hence the different sizes.

One can obtain an ECDF for virtually any process. The harder task is finding a continuous function to accurately represent the function for immediate, real-time use. This can be done by direct curve fitting the empirical data or by curve fitting the difference between the empirical curve and a similar one with known shape.
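
As an illustration of the curve-fitting route, here is a rough sketch that fits a logistic-shaped curve to sampled ECDF values; the normal test data and the logistic model form are just assumptions, and any function family with a plausible shape could be tried:

(* fit a smooth curve to an empirical CDF *)
samples = RandomVariate[NormalDistribution[0, 1], 5000]; (* stand-in for measured data *)
ecdfPts = Table[{x, Mean[Boole[# <= x] & /@ samples]}, {x, -4, 4, 0.1}]; (* sampled ECDF values *)
fit = FindFit[ecdfPts, 1/(1 + Exp[-(x - m)/s]), {{m, 0}, {s, 1}}, x]; (* logistic approximation *)
smoothCDF[x_] = 1/(1 + Exp[-(x - m)/s]) /. fit (* continuous function, usable in real time *)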

At the moment, we have used a simplified method where we divided users into groups by the number of generated coordinates and looked at the z-score statistics for the anomalies they generated. A z-score is calculated separately for each attractor point to show the degree of psi influence on its generation. Since the attractor point is actually a density fluctuation in a field of 10,000 points, its z-score is computed by renormalizing the Poisson distribution to a standard normal distribution. Z-scores in the range 1-3 are found in all users. The value 5 is considered to be the “definitely Psi” threshold.

The target group for the study was users who generated 10-15 points, which is around 80k users. Among them, a number of people were found whose every generation attempt had a z-score above 4, reaching 6 at the maximum. I have personally never scored a 6, so I believe these randonauts have high psi ability.

I’m currently testing a random-walk-based Fatum algorithm. Interestingly, as I reduce the accuracy by using a smaller number of steps per point, it appears even more responsive. At least 8 of 10 tries had point-density anomalies close to the target.


It’s great you have such a large database to work with, and this type of screening could be very valuable.

I believe your z-scores are inflated based on what is reasonable to achieve. A z-score of 6 would be associated with odds of one in a billion, while 3.1 is one in a thousand – considered statistically quite significant (the usual minimum threshold is one in twenty). If the z-scores are linearly scaled, an arbitrary threshold of z = 5 will still work to reveal the best performers, but it would be useful to know the actual probabilities.
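
For reference, those odds follow directly from the one-tailed normal probability; a quick way to check any threshold:

(* one-tailed probability expressed as odds against chance for a given z-score *)
oddsAgainst[z_] := 1/(1 - CDF[NormalDistribution[0, 1], z])
oddsAgainst[3.1] (* about one in a thousand *)
oddsAgainst[6.0] (* about one in a billion *)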

Very nice. Could you say what entropy source you are using? Sorry, I see in the test screen you are using the MED hardware.

Yeah, “MED Hardware” stands for a PQ128MS plugged into my computer. I usually connect this program to an entropy source remotely, and the model name is shown in a menu, but this method needs large amounts of entropy, so I created a separate menu item for a device to which I have direct hardware access.

In the pictures, yellow circles and red markers (depending on the calculation method) are the areas of highest point density. Blue circles are the areas of lowest point density. The circle with a cross is the target.

I believe (but have to confirm by simulation) I can suggest a method of calculating the z-score in an area of interest, which can conveniently be either a square or a circle.

  1. Calculate the expected number of points in the area of interest: nexpect = N × Ai / At. That is, the expected number is the total number of points used (N) times the area of the area of interest (Ai) divided by the total area (At). The units of area are not important since the ratio is dimensionless.
  2. Calculate the standard deviation for the area of interest: SD = sqrt(nexpect).
  3. Calculate the z-score: z = (nactual - nexpect) / SD. That is, the z-score is the actual number of points in the area of interest less the expected number, with the difference divided by the standard deviation.
  4. Calculate the probability of observing nactual in the area of interest by converting the z-score to probability (see the sketch after this list).
    This approach works for both higher and lower density areas of interest (positive and negative z-scores), yielding a two-tailed calculation, which I think it should be. For positive z-scores, use 1 - p to get, for example, a probability of 0.01 instead of 0.99, which is the actual value of the cumulative distribution function for z = 2.326. In addition, for a two-tailed test, always multiply the probability by 2, since there are twice as many possibilities when counting both high-density and low-density areas as being of interest. (It’s not obvious from basic principles that this assumption is correct; only a lot of real-world data could confirm it.)
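
Here is a minimal Mathematica sketch of steps 1-4; the counts and areas are made-up example numbers:

(* z-score and two-tailed probability for one area of interest *)
nTotal = 10000; aTotal = 100.^2; aInterest = Pi 5.^2; (* example: 10,000 points, 100 x 100 field, circle of radius 5 *)
nActual = 14; (* observed count in the area of interest *)
nExpect = nTotal aInterest/aTotal; (* step 1: expected count *)
sd = Sqrt[nExpect]; (* step 2: standard deviation *)
z = (nActual - nExpect)/sd; (* step 3: z-score *)
pOneTail = If[z >= 0, 1 - CDF[NormalDistribution[0, 1], z], CDF[NormalDistribution[0, 1], z]];
pTwoTail = 2 pOneTail (* step 4: two-tailed probability *)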

I tested this algorithm by simulation and it appears to be correct. I used square areas of interest because that was easier to program, but circles (or any shape) should work. The standard deviation of the simulated results converged to 1.0 and the mean was 0.0, both as expected for normally distributed values. The high- and low-density areas produced symmetrical z-scores. Now the actual probabilities can be calculated.

I am pretty confident about the z-score calculation method I developed.

What is left is developing an efficient way to search the field of data for the most and least dense areas. Obviously you already have a search algorithm, but I seem to recall the more-dense and less-dense searches produce asymmetrical results, which would seem undesirable.

I am not sure what your criteria are for finding these areas. For example, are you searching for just one most and one least dense area per attempt? Is there a predefined radius for the areas of interest, or do you want to find the maximum z-scores (most improbable results) regardless of radius? This may be accomplished by a clustering algorithm, but the computational overhead could be considerable. A two-stage search starting with square areas and ending with circles would be somewhat straightforward, but with the possibility of not finding the absolute best areas. The search algo will depend on the exact search criteria chosen.
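
One possible shape for such a two-stage search, sketched under my own assumptions (fixed-size square cells first, then fixed-radius circles refined around the most extreme cell); it is not meant to reproduce the existing Randonautica search:

(* two-stage search: coarse square cells, then circles around the most extreme cell *)
nTotal = 10000; fieldSize = 100.; cellSize = 10.; radius = 5.;
pts = RandomReal[fieldSize, {nTotal, 2}];
nf = Nearest[pts];

(* stage 1: z-score of every square cell, keep the most extreme one *)
cellExpect = nTotal cellSize^2/fieldSize^2;
cellZ[{i_, j_}] := (Count[pts, {x_, y_} /; i <= x < i + cellSize && j <= y < j + cellSize] - cellExpect)/Sqrt[cellExpect];
cells = Flatten[Table[{i, j}, {i, 0, fieldSize - cellSize, cellSize}, {j, 0, fieldSize - cellSize, cellSize}], 1];
bestCell = First[MaximalBy[cells, Abs[cellZ[#]] &]];

(* stage 2: refine with circle centers on a finer grid over the best cell *)
circExpect = nTotal Pi radius^2/fieldSize^2;
circZ[c_] := (Length[nf[c, {All, radius}]] - circExpect)/Sqrt[circExpect];
centers = Flatten[Table[bestCell + {dx, dy}, {dx, 0, cellSize, 1.}, {dy, 0, cellSize, 1.}], 1];
bestCenter = First[MaximalBy[centers, Abs[circZ[#]] &]];
{bestCenter, circZ[bestCenter]}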

Hi, Scott! Can you elaborate on step 4? To my understanding, random points in an area have a Poisson distribution, which is asymmetrical in the discrete case: [0…Nexpect] for rarefactions (left side) and [Nexpect…Ntotal] for clusters (right side). Nexpect for this distribution equals the average point density for At. I follow your logic up to step 3 and came to the same ideas; however, renormalizing the distribution so that z would behave as if the distribution were normal led me to a few integrals based on an old physics lecture transcript. At present I only have the Wolfram code I used, and it nevertheless results in z values that behave oddly (left-side z values are larger). Can you explain how you deal with the asymmetrical nature of the distribution?

(* Poisson z-score (Wolfram code) *)
f = x^n/n!*Exp[-x]
vss = {0.382924922548026, 0.682689492137086, 0.866385597462284, 0.954499736103642,
0.987580669348448, 0.997300203936740, 0.999534741841929, 0.999936657516334, 0.999993204653751,
0.999999426696856, 0.999999962020875, 0.999999998026825, 0.999999999919680, 0.999999999997440}
sigma = 14
vs = Table[(1 - vss[[l]])/2, {l, 1, sigma}]
iter = 150
ta1 = Join[{Table[0.0, {l, 1, sigma}]}, Table[SetPrecision[m - (x /. FindRoot[Sum[f, {n, 0, m - 1}] == 1 - vs[[l]], {x, m - Sqrt[m]*l/2}]), 15], {m, 1, iter}, {l, 1, sigma}]]
ta2 = Table[SetPrecision[(x /. FindRoot[Sum[f, {n, 0, m}] == vs[[l]], {x, m + Sqrt[m]*l/2}]) - m, 15], {m, 0, iter}, {l, 1, sigma}]

fit1 = FindFit[ta1, (x + 1) - a*Sqrt[x + 1] - b*Exp[center*(x + 1)], {a, b, center}, x]
ap1 = Table[(x + 1) - a*Sqrt[x + 1] - b*Exp[center*(x + 1)] /. fit1, {x, iter}]
fit2 = FindFit[ta2, x + a*Sqrt[x] + b*Exp[center*x], {a, b, center}, x]
ap2 = Table[(x + 1) - a*Sqrt[x + 1] - b*Exp[center*x] /. fit2, {x, iter}]

My calculations assume the z-scores for the areas of interest approach the normal distribution, given large enough N. Mathematica code follows:

src = Table[{RandomReal[99.9999999999], RandomReal[99.9999999999]}, {i, 100000}]; (* generate 100,000 point pairs; staying just under 100 avoids hitting 100 exactly *)
ListPlot[src, Frame -> True, AspectRatio -> 1., PlotRange -> {{0, 100}, {0, 100}}] (* plot the points *)

[plot of the points]

cl = Sort[Ceiling[src]]; (* assign each point to one of the 100 x 100 unit cells *)
sd = (Flatten[Table[Count[cl, {i, j}], {i, 1, 100}, {j, 1, 100}]] - 10)/Sqrt[10.] (* count points in each area of interest and convert to z-scores *)

StandardDeviation[sd] (* calculate SD and Mean of the z-scores *)
Mean[sd]
1.00587
2.55795*10^-17

I also tested the symmetry (and normality) of the distribution of z-scores using the Kolmogorov-Smirnov (K-S) test. Here is the plot of the data:

[plot: K-S test distribution]

Note, z-scores are quantized due to the limited range of points in each area of interest (which skews the plot somewhat), but they follow the expected pattern for normally distributed data.

I don’t try to determine the distribution within a single area of interest, as it is not needed for this calculation. As long as the total number of points is large enough that the smallest area of interest has, on average, about 5 points, the calculation will be correct regardless of the size or shape of the area(s) of interest. Even the entire area will give a correct z-score/probability.

I did consider that there must be a lower limit to the average number of points in each area of interest, so I simulated to as few as 0.5 points per area of interest. The standard deviation and mean remained 1.0 and 0.0 respectively at all numbers of points per area of interest. However, the probability distribution function becomes noticeably less normal below about 2 points per area of interest. At 5 points or above, the distribution seems to be adequately normal, with relatively small errors in probability, but the left tail is cut off a little as you suggested. With around 10 points per area of interest, the deviation from normality is not meaningful in the application.

AFAIR my code was based on the same presupposition: a Poisson distribution with a large mean behaves like the normal distribution.

I think I understand now. I was somehow convinced that the distribution inside a small area IS important, but I was missing the division by SD. It seems like you may have solved the problem that has been bugging me for 3 years. I asked a number of educated people for advice but didn’t get close to an answer.
I even have my question published on SE as well; zero answers were given.

To be sure, my suggested solution is an approximation. What’s important is, what is the average number of points in the areas of interest, nexpect, that gives an acceptable answer within the context of the application? I estimate that 10 points is adequate, but down to 5 points may give usable results.

An exact probability for any integer number of points will likely be calculable using the binomial distribution, but the shortening of the left tail will still be present. That means the number of different probabilities for reduced densities is fewer than for increased densities. The total number of different probabilities (both lower and higher densities) is roughly 6 SD, or about 18 for nexpect = 10. Also, the minimum z-score is -sqrt(nexpect) = -3.16 (p = 0.00158, two-tailed).
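
If an exact value is ever wanted, the binomial calculation mentioned above is direct to do; a sketch using the same nexpect = 10 example (N total points, cell probability Ai/At = nexpect/N):

(* tail probabilities from the binomial distribution rather than the normal approximation *)
nTotal = 10000; nExpect = 10;
pCell = N[nExpect/nTotal]; (* Ai/At for this area of interest *)
dist = BinomialDistribution[nTotal, pCell];
pLow[k_] := CDF[dist, k]; (* probability of k or fewer points *)
pHigh[k_] := 1 - CDF[dist, k - 1]; (* probability of k or more points *)
pLow[0] (* minimum possible count, zero points: about 4.5*10^-5 *)
pHigh[20] (* a high-density count of 20 or more *)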

I recently tried my hand at replicating the Fatum algorithm using DBSCAN.

Personally, I am a fan of a method where you visualize the points being put on the graph in real time and measure the clusters as they get tighter and tighter, adjusting the epsilon and sample values dynamically so that the attractors don’t spread too large. This seems to produce the effect of a bunch of points hovering in one approximate location for a pretty long time. Interesting thread!
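
I can’t speak for that exact DBSCAN setup, but here is a rough Mathematica sketch of the “tighten the radius as the cluster forms” idea, using plain neighbor counts rather than a full DBSCAN; the radii, spread threshold, and shrink factor are arbitrary choices:

(* shrink the neighborhood radius until the densest spot stays compact *)
pts = RandomReal[100., {10000, 2}]; (* stand-in point field *)
nf = Nearest[pts];
densestCenter[eps_] := First[MaximalBy[pts, Length[nf[#, {All, eps}]] &]]; (* point with the most neighbors within eps *)
eps = 10.; maxSpread = 3.;
While[
 center = densestCenter[eps];
 cluster = nf[center, {All, eps}];
 Max[Norm[# - center] & /@ cluster] > maxSpread && eps > 1.,
 eps = 0.8 eps (* tighten epsilon so the attractor doesn't spread too wide *)
];
{center, eps, Length[cluster]}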