Original Question and Answer Collection

This is a set of questions collected by Jamal about a year ago and my answers at the time. Note, there is always a learning process over time, so some answers may have evolved a bit since then.

● What amount of mind-matter correlation did you get at best in the faster-than-light experiment Jamal mentioned? Any amount of detail is appreciated.

Note first, the experiment I mentioned to Jamal was an experimental design that would use currently available hardware. I have not yet carried out that experiment. However, all my mind-matter interaction systems are built to test three time relationships: 1) revealing already determined but hidden random bits, 2) affecting bits as they are generated and 3) predicting bits that will be generated exclusively after the prediction is made. It can be shown by relativity that receiving information from the future (predicting) may always be interpreted as “faster than light” communication. We achieved peak performance of an 80% hit rate (60% effect size). Importantly, the predict mode initially seemed harder. I learned through much experience that there is little difference in the ultimate hit rate for the three time modes. I believe the initial difference was psychological – thinking or believing that predicting the future was harder than affecting present generator output.
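
As a point of reference for the numbers above: a common convention (an assumption here, not stated explicitly in the answer) is that effect size for binary trials is twice the hit rate minus one, which maps an 80% hit rate to a 60% effect size and a 50% (chance) hit rate to zero.

```python
# Illustrative convention relating hit rate and effect size for binary trials.
# Assumption: effect_size = 2 * hit_rate - 1 (consistent with 80% -> 60% above).

def effect_size(hit_rate: float) -> float:
    """Map a hit rate in [0, 1] to an effect size in [-1, 1]."""
    return 2.0 * hit_rate - 1.0

print(effect_size(0.80))  # 0.6 -> the 60% effect size quoted above
print(effect_size(0.50))  # 0.0 -> chance expectation, no effect
```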

● Have you ever seen/read of the Russian scientist Simon Shnoll and his work?

○ Summary: Simon Shnoll has found a pattern that appears when using radioactive decay, suggesting it may not be random but may instead follow some universal, predictable pattern.

YouTube playlist in Russian (English subtitles)

(1 of 7) Simon Shnoll on the Gordon show "The Faces of Time" 2003 [English captions] - YouTube

Article http://21sci-tech.com/articles/time.html

Discussion on Simon Shnoll via PEAR research group:

http://noosphere.princeton.edu/shnoll.html

http://noosphere.princeton.edu/shnoll2.html

If anyone found a pattern in nuclear decay that could be duplicated by anyone else, it would poke a big hole in what we believe about quantum mechanics. While we only know a small fraction of what’s to be known, these should be readily repeatable experiments. I have read of experiments suggesting certain types of nuclear decay rate, specifically the half-lives of certain unstable elements, appear to change as the earth is slightly closer or farther from the sun during the course of its annual revolution. This is generally related to Shnoll’s ideas, but he asserts more universal and complex effects.

The scientific method requires us to “question everything” and verify beliefs by experiment. When a new theory is proposed that could disrupt a long-held and well-tested one, the highest standards of proof are required by the scientific community. Extreme care and many independent repetitions are required to verify any of his results.

● Do you know about the following unpublished PEAR experiment with plants? Have you tried replicating this, or something similar:

A plant in a dark room with a light source that points to different regions in that room. The premise of the experiment was to see if the plant (a living organism) would influence where the light pointed in the room. When there was no plant in the room, the light rotated randomly around the room; when a plant was placed in the room, the light pointed more toward the region where the plant was. This implies living organisms will always influence random numbers.

I have not read about this experiment. As a general principle, nature produces random results unless acted on by consciousness or mind. Then a reduction of entropy occurs (life or its associated mind is anti-entropic) and patterns will appear.

(somewhat related to previous) What else, aside from people, can influence RNGs in your experience?

Strictly hypothetically, one may infer from the reduction of entropy in all living things, that whatever form of mind is present may affect probabilities to advance any organism’s existence. This proposal is based on controversial assumptions, which are beyond our scope at the moment.

(again, related to previous) Have you ever tried spatial experiments with RNG arrays? For example, does shielding of any kind or placing them at high altitude/distance impair mind-matter correlation, and to what extent?

We had an extensive online system in which an operator/user was separated by up to thousands of miles from the hardware. The latency (time between the user’s keypress and feedback generated by remote hardware) was typically 50-150ms, depending on separation distance. We developed our own communication protocol to achieve this.

I have also used both shielded and unshielded hardware. Neither shielding nor distance appeared to have any influence on the results.

● What’s the golden throughput/bandwidth of an RNG for mind-matter interaction (after what bandwidth do you see little-to-no improvement in correlation)?

If you are asking about random number generation rate, generally the faster the better. But the processing methods are very important as well. I have tested systems with nearly 1 trillion bits in each trial of 200ms duration. No doubt, as one approaches very high effect size, theoretical analysis indicates the incremental improvement progressively decreases. But this reduction is related to the effect size of the trials, not to the underlying random generation rate.

The one variable that is easiest to control is the random generation rate. My very first thought after determining to make this technology practical (a long time ago) was to build a faster generator. The signal/noise for a linear system would have increased by the square root of the frequency multiple. I jumped immediately to 16Mbps and learned a fundamental fact about mind-matter interaction: the response of each generated bit to mental influence is not linear.
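
To illustrate the square-root scaling mentioned above (a sketch under the stated linear-system assumption; the 1 kbps base rate is a hypothetical example, not a figure from the answer):

```python
import math

# For a linear system, signal-to-noise improves as the square root of the
# increase in generation rate (the assumption stated in the paragraph above).

def snr_gain(base_rate_bps: float, new_rate_bps: float) -> float:
    """Expected S/N improvement factor if the response were linear."""
    return math.sqrt(new_rate_bps / base_rate_bps)

# Hypothetical example: moving from a 1 kbps generator to 16 Mbps.
print(snr_gain(1_000, 16_000_000))  # ~126x, if bit responses added linearly
```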

● What’s your theory about why we never see 100% correlation?

One of my papers, Advances in Mind-Matter Interaction Technology: Is 100 Percent Effect Size Possible?, deals extensively with this question. Testing with faster and faster generation rates (up to terabits per second) suggests the effect size generally follows my theoretical model. At the start of a series of tests, with maximum focus on every trial, the results were occasionally 100% for the first 12 or so trials. Longer sequences of 30-50 trials reached 85% hit rates, but not always. Note, most people cannot focus clearly enough for that long, or the mind just gets bored.

The user/machine system (as these become inseparable) is complex. Psychological factors can and do limit the results at times. When a person starts a series and gets the first 10 in a row correct (1 in a thousand probability), I have seen them stop and say, “the system is broken.” Actually, the system was working, and baseline (unobserved) tests produced the expected 50% probability. Meanwhile, the user had blocked himself from further 100% results because it seemed impossible or just seemed wrong.

To be clear, 10-20% effect size has become routine for a practiced user interacting with advanced hardware, but there are variables we may not know of or understand that make very high hit rates extremely difficult. The “brute force” approach of increasing random generation rate goes a long way, but that will never be sufficient by itself. Better methods of generating and processing random sequences, improved user interaction and feedback and more effective training will be required. Skill level with mind-matter interaction is a little like a sport: practice and constant desire to improve are very important to achieving better results. Skill levels are distributed similarly to IQ. An individual may have lower, medium or higher than average potential. However, in a select group with particular interests – such as this one – the average capability will likely be higher than in the general population.

● Any experiments with altered mind states, like hypnosis?

Better results (higher hit rates) depend on the user’s unwavering focus. In that sense, every series of trials requires an alteration from typical day-to-day states of mind. We have experienced that such practiced focus and feedback from the device tends to induce a noticeable shift in mental states, like a mild trance state or self-hypnosis. I have not experimented with trying to use hypnosis to precondition a user to perform better.

● Does hashing/debiasing impair mind-matter interaction, and to what extent, given there’s no entropy dilution (e.g. 64 bytes of entropy hashed to a 64-byte result)? Is there a quantitative measure of the impairment?

This is a hard question because it is very difficult to eliminate nonstationary variables, i.e., those that change during a series of measurements. Most important are factors that affect the user’s performance. I developed a method for comparing different processing methods or generator types. The method takes data from two systems to be compared, and randomly records the output of only one, which is also used to provide feedback to the user for that trial. Then the statistics from each method are compared to see which one produced the highest hit rate. This can be a little tedious because at least one of the methods must produce a statistically significant result. For example, p≤0.05 or p≤0.01 may be chosen as the preselected significance level for testing the null hypothesis – p being the probability that the results could have occurred by chance, without mental influence.
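
A minimal sketch of that comparison protocol, as I read it from the description above (the trial structure, names and hit criterion here are illustrative assumptions, not the actual code):

```python
import random

def run_comparison(system_a, system_b, intended_bits):
    """Compare two generators/processing methods over a series of trials.

    system_a / system_b: callables returning one measured bit per trial.
    intended_bits: the user's intended outcome for each trial.
    On each trial one system is chosen at random; only its output is recorded
    and (in a real setup) used to give the user feedback for that trial.
    """
    hits = {"A": 0, "B": 0}
    trials = {"A": 0, "B": 0}
    for intent in intended_bits:
        label, system = random.choice([("A", system_a), ("B", system_b)])
        measured = system()            # only the chosen system is recorded
        trials[label] += 1
        if measured == intent:         # a "hit": measurement matches intent
            hits[label] += 1
    return {k: hits[k] / trials[k] for k in hits if trials[k]}
```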

● Have you ever tried experimenting with influencing radioactive decay? (It’s impossible from the current-physics point of view, and thus very interesting; moreover, it’s a pure quantum process, independent of temperature, etc.) It has very low output – or rather, high-output radioactivity is just dangerous – but correlating multiple sources is possible, and mind-matter effects are probably noticeable with your advanced algorithms.

I built a number of random generators using nuclear decay. The simplest used the 1μCi (1 micro Curie = 3.7 x 10^4 decays per second) of Americium-241 (241Am) from a smoke detector and a photodiode as a detector. The photodiode must have little material shielding the junction because the alpha radiation is easily blocked by thin layers. The resulting generator can only produce about 10Kbps due to the limited number of decays and the geometry that only allows at most 50% of the decay events to be detected by a flat detector on one side of the source. The bias must be kept very low by careful design since it is undesirable to possibly obscure the quantum properties by some form of postprocessing.
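
The rough arithmetic behind the ~10 kbps figure, as I understand it (an illustrative sketch; real detector efficiency and dead time would reduce the usable rate further):

```python
# 1 uCi of Am-241 gives 3.7e4 decays per second; a flat detector on one side
# of the source can intercept at most ~50% of the decay events.
decays_per_second = 3.7e4       # 1 micro Curie
geometric_efficiency = 0.5      # upper bound for a one-sided flat photodiode

detected_events_per_second = decays_per_second * geometric_efficiency
print(detected_events_per_second)  # ~1.85e4 events/s, on the order of 10 kbps
```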

10 Kbps is relatively slow, and, as suggested, the only way to increase it is by using higher levels of radiation, which is clearly a hazard, or multiple generators, which is perhaps a little awkward but doable. Note, the Nuclear Regulatory Commission (NRC) licenses smoke detector manufacturers to use Americium sources only in smoke detectors. It’s not strictly legal to remove the source and use it for any other reason, and definitely not for an alternative commercial use.

My experience strongly suggests a properly designed experiment using nuclear decay will produce results at least as good as any other type of entropy source. That assumes an equal entropy sampling rate and the same type of signal processing. Significant results can be achieved with even lower entropy rates.

● Maybe look at the everlasting math question we’re struggling to solve :)? I suspect it has a Bayesian solution.

-to-represent-initial-poisson-distribution

There are many ways to look for patterns in what would normally be “random” data. By definition, a sequence of random numbers will not contain any statistically significant patterns. Of course, significant patterns may briefly appear in any truly random sequence. I process the data to look for a number of independent patterns in the data. The processed data can be combined in various ways to produce a single output to correlate with the intended result.

The most important properties of random bits are bias and various orders of autocorrelation (the first order contains the most information). These properties are orthogonal, meaning in this discussion that they are statistically independent. In addition, I look at various lengths of runs of 1s and 0s. The results of these measurements of different properties may be combined by simple linear algorithms, by an artificial neural network (ANN), or by rapid machine-learning programs (real-time learning).
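
The three properties mentioned can be computed directly from a bit sequence; the sketch below is illustrative only and does not reproduce the actual processing pipeline:

```python
def bias(bits):
    """Deviation of the fraction of 1s from the ideal 0.5."""
    return sum(bits) / len(bits) - 0.5

def autocorrelation(bits, lag=1):
    """Lag-1 (first-order) autocorrelation: excess agreement of each bit with the next."""
    pairs = list(zip(bits, bits[lag:]))
    agreements = sum(1 for a, b in pairs if a == b)
    return agreements / len(pairs) - 0.5   # ~0 for an ideal random sequence

def run_lengths(bits):
    """Lengths of consecutive runs of identical bits."""
    runs, count = [], 1
    for prev, cur in zip(bits, bits[1:]):
        if cur == prev:
            count += 1
        else:
            runs.append(count)
            count = 1
    runs.append(count)
    return runs
```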

● Have you ever tried to devise a pure-quantum algorithm for bias amplification? (Given that IBM provides real hardware if the experiment is properly described, it’s possible to test.)

My algorithm for bias amplification is theoretically near 100% efficient as long as the effect size is not too high. It’s not possible to do better than that in nearly every case. However, it may be possible for a quantum computer to look for a larger number of patterns that may result from sequences being altered by mental influence. I don’t have a reference at the moment, but there have recently been a number of news articles declaring that a quantum computer has proven a sequence of random numbers is completely random. What we would want is nearly the inverse of that problem: using a quantum computer to show a nominally random sequence is not entirely random, and to what degree it deviates from “perfect” randomness.
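
For readers unfamiliar with the idea, the simplest classical form of bias amplification is majority voting over blocks of bits. This is not the algorithm referred to above, only a sketch of the general concept:

```python
def majority_amplify(bits, block_size=33):
    """Collapse each odd-sized block into its majority value (illustration only).

    A small per-bit bias toward 1 becomes a much larger bias in the majority
    outcome, at the cost of dividing the output rate by block_size.
    """
    out = []
    for i in range(0, len(bits) - block_size + 1, block_size):
        block = bits[i:i + block_size]
        out.append(1 if 2 * sum(block) > block_size else 0)
    return out
```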

● Have you devised any measure or protocol to assess RNG-setup (like hw + algo) sensitivity to mind-matter? e.g. PSIbit/bit entropy?

Yes. This may be considered two ways. The first is to simply measure the effect size from a number of testing series by several different operators. This involves a rather lengthy data collection and analysis and is generally not useful for comparing multiple systems to see which one is “best.” The second provides a method for directly comparing two systems to assess which one is the most responsive to mental influence.

(This paragraph is repeated from a previous answer.)

I developed a method for comparing different processing methods and/or generator types. The method takes data from two systems to be compared, and randomly uses the output of only one to provide feedback to the user for each trial. Then the statistics from each method are compared to see which one produced the highest hit rate. This can be a little tedious because at least one of the methods must produce a statistically significant result. For example, p≤0.05 or p≤0.01 may be chosen as the preselected significance level for testing the null hypothesis – p being the probability that the results could have occurred by chance, without mental influence (the null hypothesis).

One may think that results from both systems could be stored for every trial, but that doesn’t work. Part of every system includes a method for providing feedback to the user of a “hit” (measurement matches intended result) or “miss” (measurement does not match intended result). The specific method used to produce feedback almost always shows a higher effect size. Therefore, the method chosen randomly for any particular trial is also used to provide user feedback for that trial.

● Do you observe an “anti-intent” effect in your setups? Like when you frequently get “-” when intent is “+” and this happens even after rotating the gauge?

This effect is sometimes called “psi missing.” Yes, I have seen it in some experiments, especially when the user is bored, tired or hungry (detracting factors). I don’t look for or give weight to “negative” or anti results, because they cloud or block practical applications. The user will learn to produce positive, intended results given a responsive system with good feedback.

● Why has the military, e.g. DARPA, missed the opportunity to use the effect? Are they not satisfied with existing levels of correlation? Or is it used secretly, without public record?

The CIA ran a secret program called Stargate (now mostly declassified), that used remote viewers to spy on various targets. However, Congress pulled funding many years ago, primarily because the results were not objective, repeatable or accurate enough. Any current proposals would be highly politicized and limited by the imaginations of a few old white guys.

The military and NSA are aware of my work. If I knew of such a secret program, I couldn’t comment.

● How effective is pre-recorded and stored entropy for information gathering and mind-matter interaction compared to a high-bitrate REG?

In a sense, prerecorded bits are pseudorandom. All bits are predetermined: they may be viewed at will and cannot be changed. I am familiar with “retro-causality” experiments that seem to open a pathway to using prerecorded data. In addition, I have used pseudorandom sequences for mind-machine interaction tests. The only possibility of “affecting” the outcome of such tests is through the timing of the user’s keypress that initiates a trial. This will only work if the pseudorandom generator is running continuously in the background. Surprisingly, I was able to achieve quite high effect sizes in those experiments. This suggests I had learned unconsciously how to press the key at the right time to get more hits than misses, further illustrating how strange and complex mind-matter interaction can be.

Prerecorded data could at least give results similar to a pseudorandom generator. I believe this puts a higher burden on the user and may therefore reduce the effect size of any testing. I have not done so, but it would be possible to compare two systems, one live (using an REG) and one using prerecorded data.

● Could you elaborate a bit more on the experiment you did to get answers for questions about the future?

I did many different experiments. The simplest one was to use a mind-matter interaction system to predict the output of a true random generator. The prediction had to be complete and recorded prior to the beginning of the random bit generation process. If the prediction and the subsequently generated bit matched, it was a “hit.” Another test used an MMI system to predict red or black for betting on roulette (when online gaming was still legal). The user interacted a number of times with the mind-matter system and the results were incrementally combined using Bayesian analysis and displayed on the screen. The user decided when the results were reliable enough to bet (or to start over without betting). Then a bet was placed through the roulette app on the same screen. Tests over a month with real-money bets showed profits consistently increasing over about 1000 spins. Note, the gaming company shut us out at that point – I guess they didn’t like the statistics.
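
A minimal sketch of the kind of incremental Bayesian combination described for the roulette test (my own illustration; the per-trial accuracy, prior and stopping rule are assumptions, not figures from the experiment):

```python
def update_probability(p_red, vote_red, per_trial_accuracy=0.55):
    """Combine one binary MMI 'vote' with the running probability of red.

    Assumes each interaction is a noisy vote that is correct with
    probability per_trial_accuracy (an assumed value, slightly above chance).
    """
    likelihood = per_trial_accuracy if vote_red else 1.0 - per_trial_accuracy
    numerator = likelihood * p_red
    denominator = numerator + (1.0 - likelihood) * (1.0 - p_red)
    return numerator / denominator

p = 0.5                       # start at chance
for vote in [1, 1, 0, 1, 1]:  # hypothetical sequence of MMI outputs
    p = update_probability(p, vote)
print(p)                      # bet only when p has moved far enough from 0.5
```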

● Have you ever conducted experiments with analyzing the spectrum of analog noise instead of using random walks from sampled and digitized noise? E.g. a German radionics company called Quantec utilizes the analog noise spectrum of a Zener diode divided into equal-sized frequency bands; these bands are then automatically numbered by their software and assigned to the number of choices available for information gathering. Once the equally distributed noise becomes biased towards one of the frequency bands, an answer is found.

I never duplicated their exact method, but I devised many systems that achieved similar desired results. That is, building up a histogram of measurements associated with numbered labels to select one or more that are intended by a user to match certain real-world events from the past, present or future. Numbered bins corresponding to the possible outcomes can be presented one at a time and be updated by the MMI system output, or all bins can be updated at once by a single user measurement initiation. Usually many measurements/user interactions were combined to increase accuracy.
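
A sketch of the histogram approach just described (illustrative only; here the MMI measurement is stood in for by an ordinary RNG call):

```python
import random
from collections import Counter

def gather_answer(num_bins=8, num_measurements=1000):
    """Accumulate measurements into numbered bins and return the most-hit bin."""
    counts = Counter()
    for _ in range(num_measurements):
        bin_index = random.randrange(num_bins)   # stand-in for one MMI measurement
        counts[bin_index] += 1
    best_bin, best_count = counts.most_common(1)[0]
    return best_bin, best_count

print(gather_answer())
```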

● Have you experimented with or even found any other ways or protocols to interact with a REG besides conscious focused intent? What are your thoughts on the William Tiller experiments of “storing intent”?

I provided advice, equipment and data analysis to Rick Leskowitz for his Joy of Sox movie. The measurements seemed to show significant deviation from expected random behavior at highly emotional times during a Red Sox home game.

“Stored intent” is another way of saying “thought form” or perhaps something like an ancient Egyptian tomb “curse.” A talisman is supposedly an object that has been imbued by a thought form to create favorable or protective conditions for the bearer of the object. We (humans) don’t have an easily testable theory to explain what these phenomena might be, to say nothing of understanding the physics involved, which are also unknown to us.

All these examples involve the intentional association of a specific suggestion with an object or material at a location. The suggestion persists so it may interact with or affect the consciousness of others, typically at a later time or different location in the case of a portable object. These ideas also describe what is called sympathetic magic. For example, certain symbolic religious objects are believed to provide protection or a connection with higher mind. Holy water is another example. The water has to be blessed (imbued with the intention of a certain positive outcome or connection), then it is supposed to confer a special connection or power to those using it.

If these phenomena are real and we learn to understand and use them, imagine the amazing possibilities.

● How much do practices like meditation or preparatory rituals influence the efficiency of mind-matter interaction?

Meditation – in its many varieties – is generally beneficial. With respect to mind-matter interaction, meditation helps one learn to relax, clear the mind and focus. All are helpful to achieving better results.

A preparatory ritual may include creating a conducive environment for exercising MMI, preparing mentally by being mindful of the desired outcome, and minimizing typical impediments to peak performance. Impediments may be emotional, such as feeling angry, upset or distracted, or physical, such as feeling unwell, hungry or in pain. These factors can have a dramatic effect on performance.

● Are there significant differences in efficiency for mind-matter interaction or information gathering between REG sources that utilize thermal/avalanche/shot/phase noise, laser partition noise, or spontaneous radiation emissions?

These variations have not been sufficiently tested to give an exact answer. It is very difficult to control other variables that may easily swamp possible differences due to the entropy source. I previously provided a protocol for comparing MMI systems, including entropy sources, that minimizes other variables.

Many years of experience suggest there may be some differences in the final effect size between various entropy sources in properly designed REGs. However, there are a lot of ways to design and use REGs (engineering factors) that will likely decrease or increase responsivity. Often it is other, human variables that have the greatest effect.

● What’s your theory on how the “information field” works? That is, regarding an RNG’s ability to “learn” complex associations linked to its output.

This is a very important question when it comes to devising practical applications. The first principle in MMI systems is, it’s the mind of the user that establishes associations between the hardware output and specific ideas, information or events, or even abstract concepts. The user learns in a way similar to biofeedback to produce these correlated results. It’s not necessary to know how biofeedback or MMI systems work, which is good since it would be impossible to consciously grasp the hidden processes that produce the intended results.

This underscores the importance of good feedback when learning and practicing mind-machine interaction. As a user learns and experiences producing intended results, the same skill can be used in everyday life to achieve similar results without an MMI system.

● You said that the PureQuantum RNGs don’t need any post-processing. But according to the ComScire API documentation, there are three levels of processing applied to PureQuantum RNGs and four levels of processing applied to CryptoStrong RNGs. What are they, then? At which level of processing are mind-machine interaction (MMI) experiments optimal?

Note, the CryptoStrong RNG does include a cryptographic postprocessing section as required by the relevant standards. I do not recommend that type of generator for MMI systems. The other ComScire TRNGs only combine entropic, i.e., truly random bits, to produce their output.

Postprocessing, in the context of random number generators, is the combining of entropic bits with deterministic or pseudorandom bits. While that postprocessing, when properly designed, will greatly decrease statistical defects, such as bias, it may also obscure or reduce the measurement of the influence of mind that is desired. Whenever possible, I suggest avoiding deterministic postprocessing.

When combining entropic bits to reduce statistical defects, the bit rate can decrease significantly. Bias amplification and other types of data processing work best with high bit rates. Therefore, there is a balance between maintaining maximum bit rate and achieving statistical defects low enough that they will not bias the results. PQ generators are designed to be perfectly unpredictable for cryptographic and other critical applications, meaning more entropic bits are combined than would be optimal for MMI use.
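
To make the trade-off concrete, here is the standard arithmetic for the simplest case, XOR-combining two independent entropic bits (an illustration of the principle, not a description of the PQ design): the bias shrinks quadratically while the bit rate is halved.

```python
def xor_bias(e: float) -> float:
    """Output bias magnitude after XORing two independent bits of bias e.

    If P(1) = 0.5 + e for each input bit, the XOR of two such bits has
    P(1) = 0.5 - 2*e*e, so the bias magnitude becomes 2*e*e.
    """
    return 2.0 * e * e

print(xor_bias(0.01))  # 0.0002 -> a 1% bias becomes 0.02%, at half the bit rate
```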

● For simple MMI experiments such as influencing the output bits, you said that thermal noise based RNGs perform as well as quantum/shot noise based RNGs. What about for information gathering? Will thermal noise perform as well as quantum noise?

These are not black-or-white considerations. If, for example, a pure quantum entropy source were shown to be somewhat more responsive to mental influence than a thermal noise source of equal generation rate, that doesn’t mean the quantum source is better in real-world applications. I can build a CMOS thermal and shot noise generator 100 or even 1000 times faster than a typical photonic generator. Using bias amplification and other processing methods, the faster generator will provide superior results in most cases.

Experience suggests if an MMI system works better during simple experiments producing correlations with bits, it will also work better in more complex setups for gathering information.

● I made an RNG based on camera thermal noise. I’ve tested the raw unprocessed data. Using NIST SP 800-90B, it has a min-entropy of 0.999941 bits/bit; using NIST SP 800-22, it passed all of the tests at the 0.01 level; but using the ComScire QNGmeter, it failed with a score of 25.6. However, if I apply the basic von Neumann debiasing method, which is very simple and does not involve any PRNG, it hasn’t failed the QNGmeter test so far. Is my RNG good enough for MMI experiments?

Passing statistical tests is necessary, but not always sufficient for a good REG. Generally speaking, I agree von Neumann debiasing is less deterministic than other postprocessing algorithms. However, consider what property of the random sequence one is trying to measure in MMI experiments. The most commonly measured property is bias. In the absence of autocorrelation or other statistical defects, von Neumann’s algorithm will theoretically remove any bias. In that case, the very property of interest may or will also be removed.
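
For concreteness, this is the classic von Neumann procedure referred to in the question; note how, by construction, it strips out exactly the bias that most MMI experiments try to measure.

```python
def von_neumann(bits):
    """Classic von Neumann debiasing (illustration).

    Look at non-overlapping bit pairs: output the first bit of each unequal
    pair ('10' -> 1, '01' -> 0) and discard equal pairs. For independent bits
    this removes bias entirely, including any bias produced by intention.
    """
    out = []
    for i in range(0, len(bits) - 1, 2):
        a, b = bits[i], bits[i + 1]
        if a != b:
            out.append(a)
    return out
```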

Note, depending on the exact design, a camera-based RNG can produce more quantum noise or more thermal noise. In addition, bias may likely be reducible to acceptable levels using designs without von Neumann’s or other deterministic algorithms.

This is a simple answer to a rather more complex question – other variables in the full MMI system also affect the results.

(optional, theoretical) I’m more skeptical of the possibility of quantum psychokinesis than classical psychokinesis. Isn’t influencing quantum uncertainty breaking the speed of light and making FTL communication possible? How isn’t this breaking causality and creating a grandfather paradox?

Regardless of the type of entropy used in a mind-machine interaction system, predicting a future bit of information can be shown by relativity theory to be equivalent to faster-than-light communication, i.e., receiving information before it is sent. Seemingly no one has reported observing a temporal paradox, so one may infer they simply don’t exist, or something happens with our observations that makes them invisible to us. For example, one may suppose some form of quantum multiverse may exist, in which a potential paradox is prevented by the observer following a branch in which the paradox never occurred.

Can we design an experiment to look for a paradox? Probably, but I doubt very much we would see one since one has apparently never been objectively observed. Or, it happens all the time and we cannot be consciously aware of it. When humans have an inexplicable experience – which does happen often enough – our brains usually present an alternate reality that is more acceptable.

In any case, this falls in the category of “hard” questions that philosophers have not answered, so my reply is purely speculative.