Fatum Project (MMI-based spatial search)

What matters is what the technology can do and how that will affect social evolution and commercial development. It is possible to obtain information that is not available by any computational means, including quantum computers. In addition, communication methods are possible that surpass any available today.

I once worked in a secret government program, and I became aware of certain information and how it is controlled. To be clear, I’m not saying anything specific about my job.

You may have heard of the Stargate Project (now partially declassified) and possibly some of its successors (some SRI International projects and others). These were first attempts, at least in the US, at using non-classical properties of mind for practical purposes – though I wouldn’t say the purposes were strictly for the public good. Work in some other countries was not as limited by fear of the unknown or of sounding silly, but their programs are even more hidden.

Another example of things the public is/was not aware of is demonstrated by the recent disclosure of information concerning UAP (Unidentified Aerial Phenomena) – now acknowledged as real. This represents technology that far surpasses what we think we know and understand.

These are highly controversial topics, but they are related to MMI and its future importance, IMO.

I have a math problem with linearization.

I tried to make C# code from your Mathematica formula (I don’t understand Mathematica syntax well) and came up with something like this:

public Coord Linearize(Coord Nrm, int steps, int dx, int dy)
{
    Coord result = new Coord();
    // Convert the walk's net displacement to a z-score, then map it
    // through the normal CDF to a uniform coordinate on the dx x dy grid
    double zx = Nrm.x / Math.Sqrt(steps);
    result.x = (int)Math.Round(dx * Phi(zx));
    double zy = Nrm.y / Math.Sqrt(steps);
    result.y = (int)Math.Round(dy * Phi(zy));

    return result;
}

public double Phi(double x)
{
    // constants (A&S formula 26.2.17; c1 = sqrt(2*pi))
    double c1 = 2.506628275;
    double c2 = 0.31938153;
    double c3 = -0.356563782;
    double c4 = 1.781477937;
    double c5 = -1.821255978;
    double c6 = 1.330274429;
    double c7 = 0.2316419;

    // Save the sign of x
    int sign = 1;
    if (x < 0)
        sign = -1;

    double t = 1 + c7 * sign * x;
    t = 1 / t;
    double y = 0.5 + sign * (0.5 - (c2 + (c6 + c5 * t + c4 * Math.Pow(t, 2) + c3 * Math.Pow(t, 3)) / Math.Pow(t, 4)) / (c1 * Math.Exp(0.5 * x * x) * t));

    return y;
}

But it doesn’t work: points are not distributed evenly, so I googled the inverse approximation and found this code for Phi:

public double Phi(double x)
{
    // constants
    double a1 = 0.254829592;
    double a2 = -0.284496736;
    double a3 = 1.421413741;
    double a4 = -1.453152027;
    double a5 = 1.061405429;
    double p = 0.3275911;

    // Save the sign of x
    int sign = 1;
    if (x < 0)
        sign = -1;
    x = Math.Abs(x) / Math.Sqrt(2.0);

    // A&S formula 7.1.26
    double t = 1.0 / (1.0 + p * x);
    double y = 1.0 - (((((a5 * t + a4) * t) + a3) * t + a2) * t + a1) * t * Math.Exp(-x * x);

    return 0.5 * (1.0 + sign * y);
}

It does work, but it looks like points in the middle have less resolution than points in the corners.
Here I made a composite from 45 linearized random walks of 100 points with 1001 steps each, and it’s plain to see that points are more dense on the edges and look more random in the corners.


Here is the same with 10x more steps per iteration

Try replacing the last two equations with:

double y = 1.0 / t;
double cdf = 0.5 + sign * (0.5 - (c2 + (c6 + c5 * t + c4 * Math.Pow(t, 2) + c3 * Math.Pow(t, 3)) / Math.Pow(t, 4)) / (c1 * Math.Exp(0.5 * x * x) * t));
return cdf;

There was an issue with y and the returned variable, plus a parenthesis.

The nonlinearity is likely due to trying to stretch the results over much too large a range (dx and dy) with so few steps in the RWs. If dx, dy are 100, the number of steps in the walks should be at least 10,000 each, but my modeling suggests even that may be too low for good linearity.
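A rough way to see the scale: after N steps, adjacent possible endpoints of a walk differ by 2, so their z-scores differ by 2/√N. Near z = 0 the normal CDF has slope 1/√(2π) ≈ 0.4, so neighboring endpoints land about 0.8/√N apart on the unit interval. To resolve dx = 100 cells, that spacing must be below 1/dx, which gives N > (0.8·dx)² ≈ 6,400 steps as a bare minimum.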

I am testing a multi-stage equation that gives better results (better statistics) while using fewer steps.

public double Phi(double x)
{
    // constants (A&S formula 26.2.17; c1 = sqrt(2*pi))
    double c1 = 2.506628275;
    double c2 = 0.31938153;
    double c3 = -0.356563782;
    double c4 = 1.781477937;
    double c5 = -1.821255978;
    double c6 = 1.330274429;
    double c7 = 0.2316419;

    // Save the sign of x
    int sign = 1;
    if (x < 0)
        sign = -1;

    double t = 1 + c7 * sign * x;
    double y = 1 / t;
    double cdf = 0.5 + sign * (0.5 - (c2 + (c6 + c5 * t + c4 * Math.Pow(t, 2) + c3 * Math.Pow(t, 3)) / Math.Pow(t, 4)) / (c1 * Math.Exp(0.5 * x * x) * t));
    return cdf;
}
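As a quick sanity check (an illustrative test, not part of the program), the routine can be compared against known values of the standard normal CDF; the underlying A&S approximation is accurate to about 7.5e-8:

Console.WriteLine(Phi(0.0));    // expected 0.5000
Console.WriteLine(Phi(1.96));   // expected ~0.9750
Console.WriteLine(Phi(-1.96));  // expected ~0.0250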

Yeah, that can be an issue; I’m trying to stretch it over a 900x900 square with 1,000-10,000 steps.

Where does y go? The method only returns the cdf value here, and the y variable is never used.

Looks good with 100,000 steps, but the calculation takes 62 seconds.

You are correct, the variable y is not used in this routine. That line can simply be removed.

I have written and translated so many programs in Mathematica, it must have been a leftover from some earlier version. I guess I never noticed because its presence doesn’t change the results.

It appears to be working now. How do the results look with 10,000 steps?

I am getting closer to an algorithm I can share that uses fewer steps, perhaps a few thousand for the 100-index grid, but I like to perform many tests and variations so I am confident what I present is doing what I think it is. It’s very easy to be confused when processing what are essentially random numbers. Plus, this approach has never been tried before.

A little bit grid-like in the middle.

My walker works like this:

public int NextBitRW(int startvalue)
{
    int result = startvalue;
    for (int t = 0; t < RWtrials; t++)
    {
        // One "step" whose size is the net deviation of a batch of bits
        int cdev = 0;
        for (int i = 0; i < RWbatch; i++)
        {
            if (GetRandomBit()) { cdev++; } else { cdev--; }
        }
        result += cdev;
    }
    return result;
}

Here I set RWbatch to 1001 and RWtrials to 10, so it makes 10 steps, each with a size equal to the net deviation of a series of 1001 bits. Since each batch step has variance 1001, ten of them accumulate the same variance as 10,010 single-bit steps.

There is also an option I need to test. Currently I reset the RW-points array on every request and create a new one; I want to try pre-generating the RW array with 100,000 steps and advancing it by 1001 steps on each request, without a reset.

Technically, we may even pre-generate random points on the map and convert them to a normal distribution before the random walk starts. That way we would already have the needed number of steps for optimal resolution at the start, and would only need to random-walk for as many steps as it takes to move the points on the linearized map far enough to completely rearrange them from their initial positions.
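A minimal sketch of the no-reset option (walkers, totalSteps and NextFrame are illustrative names; Coord, Linearize and GetRandomBit are the routines from above): keep one persistent array of raw walk positions, advance each walker by one batch per request, and linearize with the accumulated step count.

private Coord[] walkers;   // persistent raw random-walk positions
private int totalSteps;    // steps accumulated by every walker so far

public Coord[] NextFrame(int batch, int dx, int dy)
{
    var frame = new Coord[walkers.Length];
    for (int i = 0; i < walkers.Length; i++)
    {
        // Advance this walker by `batch` raw ±1 steps per axis
        for (int s = 0; s < batch; s++)
        {
            walkers[i].x += GetRandomBit() ? 1 : -1;
            walkers[i].y += GetRandomBit() ? 1 : -1;
        }
        // Convert to the uniform grid using the total accumulated steps
        frame[i] = Linearize(walkers[i], totalSteps + batch, dx, dy);
    }
    totalSteps += batch;
    return frame;
}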

Some thoughts about IDAs (intention-driven anomalies)

So far, in experiments with 2D coordinates, we have considered two types of anomalies: Attractors (Psi-Hit) and Voids (Psi-Missing), actively experimenting with their properties, dynamics and ways to enhance the sensitivity of their detection.

During the experiments, three types of dynamics were discovered in such anomalies:

  1. Short - when an attractor or a void appears exactly at the moment of peak intention and precisely hits the target. Such anomalies appear for a very short time and can be obtained either at the moment the button is pressed, when the user initiates the measurement himself, or at the moment the target appears, if it is generated by a timer.
    It is noteworthy that short anomalies dissipate immediately after appearing and are not visible in statistics over a long scan time.

  2. Long - anomalies in statistics collected over a certain period of time. Such anomalies accumulate in proportion to the time the intention is held. Several ways have been found to find such anomalies:

  • Summation of entropy batches. If we generate entropy in parts at a certain time interval, then the sum gives a distribution of points in which attractors and voids are located, on average, closer to the target and fall into it more often.
  • Accounting for persistence. If we generate an array of points on the map every second, we can introduce a time axis and make the array 3-dimensional. Then attractors are characterized not only by the density of points, but also by the time of continuous existence. Experiments show that such anomalies are most often very close to the target.

  3. Meta-anomalies. If, over many iterations, the locations of all attractors and voids are memorized, the meta-anomaly will be located at the coordinates around which their density is maximal.
    Meta-anomalies also help to capture the recurrent appearance of short anomalies during repeated attempts to focus attention.

From such differences in the dynamics of anomalies, I assume the possibility of another type of anomaly: oscillatory. Perhaps when we focus on a target, the statistics of the point distribution in this area begins to “vibrate” with rubber-band oscillations whose amplitude is proportional to the strength of intention. During the measurements it was repeatedly observed that the appearance of an attractor was immediately followed by the appearance of a void in the same place. In this respect IDAs behave like virtual particles that appear in pairs and immediately annihilate; likewise, the attractor-void pair annihilates statistically. Thus, a new type of anomaly could be a persistent attractor-void oscillation with high deviation in one specific place on the map. However, registering such anomalies will require high measurement resolution in time and a system capable of rapidly changing the distribution of point density.

Some further information from ANU concerning latency of their RNG server:

Hi Scott,
Currently we are not buffering our generated random numbers. Please have a look at our FAQ. We do not, however, guarantee low latency because of the fluctuating demands and the limited internet speed.
CQC2T@ANU
Centre of Excellence for Quantum Computation and Communication Technology

Can anyone explain how the idea of attractors and voids got started and/or what is the theoretical basis for this type of search? It would seem to be simpler to just produce a 2D coordinate, or a number of 2D coordinates and look for a clustering, perhaps using some type of weighting based on strength of the MMI signal.

An attractor is clustering. We produce dozens of 2D points and look for places where those points are more dense than average (attractor) or less dense than average (void). The idea is simple: a single 2D point can easily miss the target, but if many points do not hit the target yet deviate slightly towards it, an area of higher point density will appear around the target. So we can not only locate the target without a single point hitting it directly, but also measure the intensity of the signal by measuring the anomaly’s density and probability. If the z-score is negative and points are “avoiding” the target, we get a Void, an empty area around the target.
Also, the ability to measure the probability of a clustering with a given density and radius makes it possible to separate random clusterings from MMI-induced ones.

The power of an IDA is how many times it is more/less dense than average.
Also, since an attractor can be formed simply by multiple points simultaneously deviating towards a certain location, it can potentially be created in an array of random-walking points in very few steps.
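As an illustration of the density measure (a sketch only; DensityZScore is a made-up name and the actual detector may differ), a candidate location can be scored by comparing the point count inside a circle with the count expected for a uniform map:

public double DensityZScore(Coord[] points, double cx, double cy,
                            double r, double mapW, double mapH)
{
    int k = 0;
    foreach (var p in points)
    {
        double ddx = p.x - cx, ddy = p.y - cy;
        if (ddx * ddx + ddy * ddy <= r * r) k++;
    }
    // Under uniformity each point falls in the circle with probability pIn,
    // so the count is Binomial(n, pIn); compare the observed count to its mean
    double pIn = Math.PI * r * r / (mapW * mapH);
    double mean = points.Length * pIn;
    double sd = Math.Sqrt(points.Length * pIn * (1 - pIn));
    return (k - mean) / sd;   // strongly positive: attractor; strongly negative: void
}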

In the future we’re planning to improve the method in several ways:

  1. Addition of a time axis - we will generate a points array every second and search for 3D density anomalies in the resulting space (see the sketch after this list). In that case, anomalies that persist in time will have bigger volume and can easily be separated from random fluctuations of density that exist only in a single moment.

  2. The calculated IDA power can also be multiplied by the average BiEntropy weight of the points inside that anomaly.

  3. We assume that some anomalies can oscillate between attractor and void over time due to the rubber-band effect, so if we find a way to locate areas of density disturbance, it will probably become a third IDA type.
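For the time axis in item 1, one possible form of the 3D search (a sketch under the same assumptions as the 2D scoring above; frames[t] is the point array generated at second t):

public double CylinderZScore(Coord[][] frames, int t0, double cx, double cy,
                             double r, int depth, double mapW, double mapH)
{
    // Count points inside a space-time cylinder: radius r in space,
    // `depth` consecutive one-second frames in time
    int k = 0, n = 0;
    for (int t = t0; t < t0 + depth && t < frames.Length; t++)
    {
        n += frames[t].Length;
        foreach (var p in frames[t])
        {
            double ddx = p.x - cx, ddy = p.y - cy;
            if (ddx * ddx + ddy * ddy <= r * r) k++;
        }
    }
    // Same binomial statistics as in 2D; anomalies that persist in time
    // integrate counts across frames and stand out from single-frame noise
    double pIn = Math.PI * r * r / (mapW * mapH);
    double mean = n * pIn;
    double sd = Math.Sqrt(n * pIn * (1 - pIn));
    return (k - mean) / sd;
}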

Thanks for your explanation. I understand the idea of clustering and how it might be used to analyze the random data. My biggest concern would be related to how well a user’s mental intention is linked to the collection of data and its analysis. If the process takes many seconds or even a minute to complete, the user’s attention will undoubtedly have drifted. In addition, the amount of mental effort or number of initiated trials is related to the accuracy or significance of the results. A single keypress and intention held for a second or two is a rather small amount of effort. It’s hard to get amazing results with such little effort.

The way data is processed is often critically important to getting the best results with current equipment (MMI generator or entropy source). This part of an MMI system is clearly in an area I would call art. So many possibilities…

It’s true that MMI tends to overcome abstractions in analysis and jump directly to the final intended outcome. However, this result is greatly assisted by providing real-time feedback of a user’s performance. In that way a user can learn when their effort is being effective, even though there may be no conscious understanding of the underlying process (which is neither possible nor necessary). Providing feedback is the primary way to link the MMI generator output to the user’s intention.

I suggest a user practice program to allow some development of skill and confidence they can actually “make it work.” The process should look and feel as close as possible to the real-world application.

In order to avoid large entropy consumption during the random walk of thousands of points, we decided to generate the initial points with a distribution as if the required number of steps had already been taken.
For example, if we have a thousand points at coordinates (0,0) and random-walk each of them, we get a 2D map with a normal distribution of points. This map we can convert into a uniform distribution with the inverse approximation of the points’ z-scores, in order to search for clusters.
But we first need a lot of steps to reach the threshold of accuracy required by the map resolution, so we decided to mathematically generate a pseudorandom 2D normally-distributed map corresponding to the needed number of steps, and then random-walk the points on it with real MMI entropy.
Do you know any formulas or algorithms to generate such a map without doing millions of iterations?
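One standard possibility (a sketch, not a tested recommendation; PreGenerateWalkers and its parameters are illustrative): the net displacement of an N-step ±1 walk is approximately normal with standard deviation √N per axis, so the initial state can be sampled directly, for example with the Box-Muller transform.

public Coord[] PreGenerateWalkers(int count, int steps, Random rng)
{
    // Seed walkers as if each had already taken `steps` ±1 steps per axis:
    // after N steps the endpoint is approximately Normal(0, sqrt(N)).
    // A pseudorandom source is fine here, since only the walk itself
    // is supposed to use real MMI entropy.
    var result = new Coord[count];
    double sigma = Math.Sqrt(steps);
    for (int i = 0; i < count; i++)
    {
        // Box-Muller: two independent standard normal samples from two uniforms
        double u1 = 1.0 - rng.NextDouble();   // in (0,1], avoids Log(0)
        double u2 = rng.NextDouble();
        double mag = Math.Sqrt(-2.0 * Math.Log(u1));
        var c = new Coord();
        c.x = (int)Math.Round(sigma * mag * Math.Cos(2.0 * Math.PI * u2));
        c.y = (int)Math.Round(sigma * mag * Math.Sin(2.0 * Math.PI * u2));
        result[i] = c;
    }
    return result;
}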

Can you please specify the desired resolution or average spacing of points in the map, and the overall dimensions of the map?

Do you really need a thousand seed or starting points from which to make a walk?

Without fully analyzing the process, I can suggest the possibility of using every step in each random walk instead of just the terminal or end point of a walk. That will give hundreds to thousands of points for each walk. Of course, each of the points in the walk would have to be converted to a uniformly distributed point from its z-score. The conversion from Normal to uniform requires a certain minimal number of steps for the Normal approximation to hold: about 50-100. Perhaps skip the first 50 or so points in each walk before starting to plot every point, or use the Binomial distribution to better convert the first steps to the uniform distribution.

Or, plot a point every 50 steps in a walk and calculate the z-score as if the walk had started from the previous plotted point. Every walk from the previous “starting point” can be considered independent, so this will surely work.
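For example (a sketch only, reusing the Phi, Coord and GetRandomBit routines from earlier; SegmentPoints, k and the grid size are placeholders):

public Coord[] SegmentPoints(int totalSteps, int k, int dx, int dy)
{
    // Emit one uniformly-distributed point per k-step segment of a single
    // 2D walk, treating each segment as an independent sample
    var points = new Coord[totalSteps / k];
    int x = 0, y = 0, prevX = 0, prevY = 0, n = 0;
    for (int s = 1; s <= totalSteps; s++)
    {
        x += GetRandomBit() ? 1 : -1;
        y += GetRandomBit() ? 1 : -1;
        if (s % k == 0)
        {
            // z-score of this segment alone: the walk "restarts" at the
            // previously plotted point, so segments are independent
            double zx = (x - prevX) / Math.Sqrt(k);
            double zy = (y - prevY) / Math.Sqrt(k);
            var c = new Coord();
            c.x = (int)Math.Round(dx * Phi(zx));
            c.y = (int)Math.Round(dy * Phi(zy));
            points[n++] = c;
            prevX = x; prevY = y;
        }
    }
    return points;
}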

Just a first thought. I haven’t tried this to see how it looks.

For a Fatum-like search I use a 2D map with several thousand random points with uniform distribution. To obtain such a map I use the inverse approximation from the random-walk map, as you previously recommended. To find clusters on the map I need a significant number of points, so I have thousands of points simultaneously random-walking from the center, and then convert their z-scores to a uniform distribution.

Technically it doesn’t require a lot of steps to rearrange the points on a uniform map to form a new cluster with abnormal density in some location. But it does require a lot of steps to achieve an accurate conversion. I test this on a 900x900 map, and even there it takes an enormous number of steps before the resolution of points stops looking grid-like. So we decided this might be skipped if we just mathematically pre-generate random-walk coordinates in positions that statistically correspond to step #100,000 or so, and proceed from there.

So the question was how to pre-generate that “initial state” of points, as if they had already made 100,000 steps. Maybe by determining the maximum randomness of their coordinates from the normal distribution function, according to how many steps are supposed to have been accumulated, or something similar.

In the desired system we hope to use it dynamically, by continuously random-walking the array of points and projecting their uniform conversion onto the geo-map to determine locations where points are clustering with abnormal density. We expect that areas of clustering will be able to appear at any location immediately after we focus intention on it.