There is plenty of evidence to support your ideas about determinism in thinking and the inevitable resulting “blind spots.” Based on historical and psychological analysis, as well as direct testing, it is quite clear human decisions are much more deterministic than we like to believe. Our thinking is programmed by evolution in the form of “instinct,” by social rules and expectations, as well as by a lifetime of personal experience that only reinforces our belief systems (mental programming).
About 45 years ago I observed the operation of a simple AI program that was designed to learn and predict the next number selected when a subject was instructed to generate a series of random numbers. The program was very successful at learning and predicting the deterministic patterns in each subject's thinking as they tried to pick the next number at random.
While experimenting with the design of artificial consciousness devices, it became clear that a component of true randomness is absolutely required for consciousness to emerge from a physical device (or living organism). Our brains contain vast numbers of neurons that produce electrical outputs or spikes, which usually seem to occur at random intervals. Only under the influence of something we (scientists, neurophysiologists, etc.) don’t currently understand, do these signals become correlated in such a way as to produce a non-random outcome, such as having a thought. While this may be an extreme oversimplification of the process of thought, it is the underlying organizing energy, which I will call mind, that causes significant patterns to appear out of what is usually random. Mind is an anti-entropic influence, bringing order out of chaos.
The appearance of significant patterns out of physical randomness is one way to define the operation of MMI. For myself, the purpose of developing practical MMI systems is to have a way to gain information that is otherwise unavailable. Such information is not limited by either space or time, meaning we may learn something about a future event or something hidden from us by circumstance or limitations in our thinking patterns. The almost unlimited nature of possibly available information could make it extremely valuable to us, individually or collectively.
Can this formula be used to generate radial coordinates instead of square ones? Our attractor search program works with a circular area that should contain a certain number of evenly distributed points.
Yes, a circular area can be filled with uniformly distributed random points. There are a number of variations, but I haven't done any testing to make a good guess which one might work best in an MMI setting. One might consider just taking the uniform numbers returned by the equation and applying them directly to theta and r to give the polar coordinates, but this gives results that are more densely populated toward the center of the area. Instead, an adjustment is required for the r value: r = Sqrt[U2], where U2 is a uniform number, 0 < U2 < 1, returned by the equation cdf[z_], and Sqrt is the square root.
The simplest way to generate theta is: theta = 2 x Pi x U1, where the result is in radians.
The radius is automatically 1.0. To adjust the size, simply multiply r by the desired constant.
Each point takes two sequences of bits long enough to keep the points from showing obvious quantization. A second sequence can be produced from the first one by converting to autocorrelation using the autocorrelation to bias converter described above in this thread. This is not optimal if bias amplification is already applied to the sequence before the autocorrelation conversion, but it may still work okay.
Other ways of using shorter sequences of numbers have been discussed at length on the forum, but as I noted before, MMI is not simply an algorithm and it's not possible to get "something for nothing" by using too little effort and too few trials or MMI bits. The algorithm for making uniformly distributed random numbers in a circular area could take two 32-bit integers from the MMI generator and divide each of them by 2^32 = 4,294,967,296, which provides uniform numbers in [0, 1), meaning ranging from 0 to just below 1.0. This might be a good way to test your algorithm or computer code, but the MSB will always overwhelm the results in real MMI trials.
If you want to move the center of the circular area, it's probably easiest to convert to Cartesian coordinates. This is simply: x = r Cos[theta], y = r Sin[theta]
Then add the desired offset to (x, y).
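To make the recipe above concrete, here is a minimal Python sketch (untested in any MMI setting) that maps two uniform [0, 1) numbers, e.g. two 32-bit integers divided by 2^32, to a point in a disk. The name disk_point is just illustrative, and random.getrandbits is only a stand-in for the MMI generator:

```python
import math
import random

def disk_point(u1, u2, radius=1.0, cx=0.0, cy=0.0):
    """Map two uniform [0,1) numbers to a point uniformly distributed
    in a disk of the given radius centered at (cx, cy)."""
    theta = 2.0 * math.pi * u1     # uniform angle in radians
    r = radius * math.sqrt(u2)     # Sqrt corrects the bunching toward the center
    x = cx + r * math.cos(theta)
    y = cy + r * math.sin(theta)
    return x, y

# Two 32-bit integers divided by 2^32 give uniform numbers in [0, 1);
# random.getrandbits stands in for the MMI source here.
u1 = random.getrandbits(32) / 2**32
u2 = random.getrandbits(32) / 2**32
x, y = disk_point(u1, u2, radius=450.0, cx=450.0, cy=450.0)
```

Skipping the Sqrt on u2 and using r = radius * u2 directly reproduces the center-heavy distribution described above.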
I tried to combine Fatum's algorithm with RW technology. With the help of the PQ128MS, I generated a bit stream and converted it into walks of 1000 points. Then I converted the coordinates of the points to a uniform distribution with your linearization algorithm. When using a map of 900 x 900 pixels, it took 170,000 steps for the average point step of one bit walk to become equal to one pixel. I repeated the generation several times and merged the results on a single map. However, for some reason there is an uneven accumulation of points in the corners of the map. Probably it's because the map resolution at the corners is higher.
Here I’m trying to combine two methods of finding a coordinate using MMI: accumulative and iterative-emergent.
Fatum's method is iterative-emergent and was originally developed for coordinates generated according to the Binary Word principle. A single BW coordinate has a very low chance of hitting the target exactly, since all 64 bits in a chunk would have to be psi-flipped, so the generation of such points is repeated many times. Thus, even if none of them ends up hitting the target, some of the dots may deviate in the direction of the target, creating an area around it with an anomalous dot density. The accumulation of psi information therefore happens over multiple iterations.
Random-Walker here refers to accumulative methods, since information is accumulated directly in the coordinate itself. This method is good because you can feed an unlimited amount of entropy into one point, and all psi-flipped bits will contribute to determining its position. However, since the psi signal level is below the noise level and we do not know exactly how many flipped bits it takes for the point to reach the desired location, a certain level of error arises.
But if we combine the accumulative and iterative-emergent methods, we can reduce the error to a minimum. We have many wandering points, each of which accumulates psi information in itself, and the target location is determined as the center of the zone where the density of points turns out to be anomalously increased. That is, even if none of the walkers reached the target destination, we can determine what goal many of them were striving for. It's like repeating the walk 10,000 times and finding an anomaly in the distribution of the results.
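As a sketch of the density-anomaly step, assuming the walker endpoints are already available as (x, y) pairs, one could bin them into a grid and take the fullest cell. The name densest_cell and the choice of cell size are illustrative, not part of the Fatum algorithm itself:

```python
from collections import Counter

def densest_cell(points, cell_size):
    """Bin (x, y) points into a square grid and return the center of the
    cell with the highest point count, i.e. the candidate anomaly zone."""
    counts = Counter((int(x // cell_size), int(y // cell_size)) for x, y in points)
    (cx, cy), n = counts.most_common(1)[0]
    # Report the center of the winning cell and how many points landed in it.
    return ((cx + 0.5) * cell_size, (cy + 0.5) * cell_size), n

# A cluster near (450, 450) plus a few stray points; the cluster cell wins
# even though no single walker needs to land exactly on the target.
points = [(450 + 0.1 * i, 450 + 0.1 * i) for i in range(10)]
points += [(10, 10), (800, 800), (100, 700)]
center, n = densest_cell(points, cell_size=30)
```

A real implementation would also need a significance test (is the fullest cell fuller than chance predicts?), which this sketch omits.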
At the moment, the problem is that when converting coordinates from a normal distribution to a uniform distribution, we get an uneven distribution of map resolution. The map resembles something like a hyperbolic space, where 1 bit shifts a point in the center more than at the edge of the map. As a result, the points sit more densely at the edges, which means the places of anomalous density will always be found at the edge. But I don't know how to solve this problem.
I derived an equation for calculating the number of bits needed in a random walk to provide the resolution to hit every pixel. The equation first needs the derivative of the cumulative normal distribution function versus z-score, that is, the normal density. Its value at z = 0.0 is the maximum, so that is the only value needed: 0.3989. In addition, note that the terminal values of random walks only fall on every other integer, so the step size is twice what it would be if a walker could end on every integer. The equation for the pixel resolution is 1/n = (2 x 0.3989)/Sqrt[N], where n is the number of pixels per side (900 in your example) and N is the number of bits in each walk. Solving for the number of bits, N = (2 x 0.3989 x n)^2. In your example, N = 515,553 bits. For convenience, that is 64,444 bytes. I thoroughly tested this equation by simulation and found that every pixel had the expected number of walk terminations. Even half that number of bits clearly showed pixels with no walks ending there. The plot below shows the distribution of 2-D points in the 900 x 900 pixel grid. (10,000 walks, 5,000 (x, y) points; each walk, 516,000 bits)
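The bit-count equation can be checked with a few lines of Python. The 0.3989 constant is the rounded value of the normal density at z = 0, i.e. 1/Sqrt[2 Pi]:

```python
import math

PDF_MAX = 0.3989  # normal density at z = 0, 1/sqrt(2*pi), rounded as above

def bits_per_walk(n_pixels):
    """Bits N per walk so adjacent walk endpoints (spaced 2/sqrt(N) in z)
    map to steps no larger than one pixel: 1/n = (2 * 0.3989)/sqrt(N)."""
    return math.ceil((2.0 * PDF_MAX * n_pixels) ** 2)

n900 = bits_per_walk(900)  # the 900 x 900 example: 515,553 bits per walk
```

Each (x, y) point needs two such walks, which is where the "over a million bits per point" figure for the 900-pixel grid comes from.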
The bunching of points around the edges is from some other cause. There are two equivalent equations for the walker:
Using N bits, convert each bit from (0, 1) to (-1, 1). Then add them together. Convert that sum to a z-score by dividing the total by Sqrt[N].
Using N Bits, count the number of 1s and calculate the z-score = (2 x count of 1s - N)/Sqrt[N].
Convert the z-score to probability using the cnd function (z to p equation). The second z-score calculation is probably simpler, but 1 and 2 produce the same answer.
The uniformly distributed results are partitioned into (x, y) pairs and plotted in a graph.
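The two z-score forms and the z-to-p step can be sketched in Python, using math.erf for the cumulative normal; the function names here are illustrative:

```python
import math
import random

def phi(z):
    """Cumulative normal distribution (the cnd / z-to-p function)."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def walk_to_uniform(bits):
    """Convert a list of 0/1 bits to a uniform (0, 1) number.
    Both z-score forms from the text are computed; they are identical."""
    n = len(bits)
    z1 = sum(2 * b - 1 for b in bits) / math.sqrt(n)  # form 1: (0,1) -> (-1,1), sum
    z2 = (2 * sum(bits) - n) / math.sqrt(n)           # form 2: from the count of 1s
    assert abs(z1 - z2) < 1e-12
    return phi(z1)

bits = [random.getrandbits(1) for _ in range(1000)]
u = walk_to_uniform(bits)  # pair two of these for one (x, y) point
```

Multiplying u by the grid size (900 in the example) and rounding down gives the pixel index.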
I realize over a million bits per point is more than may be desired, but the resolution is also quite high. For a 100 x 100 pixel grid, only 12,800 bits are required. It would take the PQ128MS about 0.1 seconds to generate enough data for 1000 points, or 1 second for 10,000 points. Note, I tested these parameters and found it actually takes 4 times as many bits before the distribution of points is random. That is, 51,200 bits for each (x, y) point. Apparently my “number of bits” equation is not exact for lower grid sizes. Plot of 5,000 points in a 100 x 100 pixel grid below:
There may be a way to use a two-step search process to achieve equivalent resolution using a fraction of the number of bits, but its use depends on the specifics of the application.
For some reason, the resolution of the map resembles a convex lens. This is clearly visible with a small number of steps, when the resolution is insufficient.
To get the number of steps = 170,000 I just ran a simulation where, on every step, I calculated the mean distance between the previous and current positions of all points on the linearized map. 170,000 steps is when the mean step became 1. But since something is wrong in my algorithm and the steps differ between the middle and the edges, that may explain why my step count is different.
To make sure we are on the same page, please test your Phi(x) equation.
x = 2.1, Phi(x) = 0.982136
x= -0.8, Phi(x) = 0.211855
Or, give me one positive x and one negative x and the corresponding Phi(x), and I will test.
These are exact to every digit. I will take another look at the rest of the algorithm, but I didn’t see anything on first glance. I may not get back to you before tomorrow.
Can you tell me the number of steps and the coordinates before and after linearization? Maybe it's something with type conversion in C#?
I tried to set the number of steps to 170,000 and calculate the difference between the linearized values for x and x+1.
Here's what I've got:
x = 45 d = 0.86553140217193913
x = 123 d = 0.83261773578453813
x = 636 d = 0.26450813720327915
x = 998 d = 0.046389165459117976
x = 7000 d = 0
So at bigger x-values, 1 step gives a shorter shift on the linearized map. So it isn't a Round function error.
For example, x0 = 45, x1 = 46, Lx = 900 * Phi(x/sqrt(170000)); then Lx1 - Lx0 = 0.86
and for the pair 998, 999 it's 0.04
Looks like the resolution depends on the z-score magnitude.
I measured the last one without the Round function. Looks like the problem is somewhere in Phi, but if you don't have it, and your Phi works the same way, I have no clue.
The PQ128MS can supply random integers, uniform random numbers and Gaussian (normally distributed) numbers. I suggest you test your algorithms as follows:
1. Take normally distributed numbers instead of generating them from a random walk. Feed these into the z-to-p (linearizing) algorithm and observe the results in the plot.
2. If number 1 doesn't produce the expected results, take uniform numbers, [0, 1), and use them directly (multiplied by 900) to produce the (x, y) coordinates and make the plot.
The next step in troubleshooting will be based on the results of one or both of these tests.
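Test number 1 can even be sketched locally without the MMI hardware by substituting pseudo-random Gaussian numbers; random.gauss below is only a stand-in for the PQ128MS output, and the bin count of 900 matches the example map width. After linearization, every pixel bin should hold roughly the same count, with no pile-up at the edges:

```python
import math
import random

def phi(z):
    """Standard normal CDF (z-to-p linearization)."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

random.seed(1)  # reproducible stand-in data
n_bins, n_samples = 900, 90000
counts = [0] * n_bins
for _ in range(n_samples):
    z = random.gauss(0.0, 1.0)               # stand-in for MMI normal output
    pixel = min(int(n_bins * phi(z)), n_bins - 1)
    counts[pixel] += 1
# Each bin should hold roughly n_samples / n_bins = 100 points; a convex-lens
# pattern or edge pile-up here would point at the linearization code itself.
```

If this histogram is flat but the random-walk pipeline still bunches at the edges, the problem lies upstream of the z-to-p step.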
No, I have no direct access to the PQ128MS server at the moment and am using only the meter-feeder API from it. But I will give an update on the issue after a closer look at the normal distribution itself; maybe it is somehow biased by API errors. Still, I tried to feed the Phi function with z-scores of adjacent values. For example, for normal x1 = 45 and x2 = 46, the linearized values are 0.86 apart. The same distance for x1 = 998 and x2 = 999 is 0.04 (the number of steps is 170,000).
I think I found the bug. I wasn't resetting the array between iterations, so the map probably has several merged copies of the same set of points, but with different numbers of steps.