Sharing MMI program demo/code

@nplonka @JoshuaLengfelder @Jonf fyi

Thanks @ScottWilber for responding. To add a bit more detail, I’ll first note the following from the GitHub readme:

  • It uses Scott Wilber’s Random Walk Bias Amplification (RWBA) method with advanced processing methods for an increased effect size in anomalous cognition. By default we use a bound of 31, which is the distance to the upper or lower bound. Basically, every 1 that’s generated is a step to the right and every 0 generated is a step to the left. This runs until either the upper bound of 31 or the lower bound of -31 is hit. We track how many steps it took to hit the bound and which bound was hit.
  • Results are then grouped into subtrials, trials, windows, window groups, and supertrials. A subtrial consists of one attempt to hit an upper or lower bound. By default there are 21 subtrials per trial, 5 trials per window, and 10 windows per window group. A window lasts roughly 1 second. A supertrial equals one overall program execution, which can be configured either to execute indefinitely or for a configured number of seconds.
  • For each trial, window, and window group we calculate p and z values to determine the probability that the results occurred randomly. Both one-tailed (targeting either just more 1s or just more 0s) and two-tailed (targeting both more 1s and more 0s) analysis are supported in a given supertrial.
  • Calculated probability values are used to control a graphic window that displays a spinning cube, a bar-chart, and if two-tailed is selected then an on/off light. Behavior of objects and displayed values is controlled by window group probabilities. The goal is to influence the output of the random number generator in such a way so as to reach targets in the graphical window.
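The subtrial random walk described above can be sketched in a few lines of Python. This is only an illustration, using Python’s own RNG as a stand-in for whatever entropy source the real program reads; the bound of 31 and the 21-subtrials-per-trial default come from the description above, while the function names are mine:

```python
import random

def run_subtrial(bound=31, rng=random.SystemRandom()):
    """One RWBA subtrial: +1 step for each 1 bit, -1 for each 0 bit,
    until the walk reaches +bound or -bound.
    Returns which bound was hit and how many steps it took."""
    position = 0
    steps = 0
    while abs(position) < bound:
        bit = rng.randint(0, 1)           # stand-in for the hardware RNG
        position += 1 if bit == 1 else -1
        steps += 1
    return ("upper" if position > 0 else "lower"), steps

# A trial aggregates subtrials (21 per trial by default)
trial = [run_subtrial() for _ in range(21)]
upper_hits = sum(1 for which, _ in trial if which == "upper")
```

With an unbiased bit stream, upper and lower bounds are hit equally often on average; a persistent excess in one direction is what the later statistics try to detect.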

As Scott noted, z-value, p-value, and surprisal value are all just different ways of viewing the probability of the selected outcome occurring randomly. You can convert between them with algorithms. In the context of probability theory, z-value and p-value are very common.

  • z-value (also known as a z-score) represents the number of standard deviations an observation or statistic is from the mean of a standard normal distribution.
  • p-value is the probability of observing a value as extreme as, or more extreme than, the z-score under the null hypothesis. This is a value between 0 and 1 and as such can be easily translated to a percentage (e.g. p=0.05 means 5% probability of results being random).
  • surprisal value is less commonly used. In the context of Scott’s paper here, the surprisal value (SV) is calculated from the p-value and uses weights to provide a more uniform range of probabilistic outcomes.
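The z-to-p conversion is standard and easy to sketch with the complementary error function (this is textbook statistics, not a claim about the program’s exact internals; function names are mine):

```python
from math import erfc, sqrt

def z_to_p_one_tailed(z):
    """P(standard normal >= z): probability of a result at least
    this extreme in the targeted direction."""
    return 0.5 * erfc(z / sqrt(2))

def z_to_p_two_tailed(z):
    """P(|standard normal| >= |z|): at least this extreme in
    either direction."""
    return erfc(abs(z) / sqrt(2))

# Example: z = 1.96 corresponds to the classic two-tailed p = 0.05 threshold
p = z_to_p_two_tailed(1.96)
```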

The thing you’ll see used most in other studies, and perhaps the simplest to understand, is the p-value. A commonly used threshold to denote statistical significance is p < 0.05 (5% probability of being random). p < 0.005 (0.5% chance of being random) is considered very significant. It’s important to note that in the code I’ve shared above I made a choice to invert the p-value (1-p). So, instead of targeting p < 0.05 we instead target p > 0.95. I found this to be more intuitive since I wanted the cube to spin faster when the results were less probable, and it felt more natural to have the faster spinning speed associated with a higher numeric value.

It may seem confusing that we are using subtrials, trials, windows, and window groups. In effect, each of these is an aggregation of the thing before it (a group of subtrials is a trial, a group of trials is a window, a group of windows is a window group). Studies have shown the effects of psi to be small, but statistically significant over large data-sets. It takes a lot of data to ensure that an anomalous effect is really occurring (e.g. if I flip a coin 4 times, and get heads 3 times, the probability of this happening randomly is quite high).
As such, tests require a lot of data, and subsequent statistical analysis of that data. The general purpose and intent of Scott’s approach here is to amplify and identify the impact of any anomalous (aka psychokinetic, or psi) effects. If we get a positive result in a single subtrial, or even trial, in isolation it doesn’t tell us much, but looking at large volumes of subtrial outcomes over time (via windows or window groups) gives us much more data and thus we can evaluate the probability of an anomalous effect (psi) occurring with much greater confidence.
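The coin-flip example can be made concrete with the standard binomial tail probability (again, just textbook statistics, not the program’s own calculation):

```python
from math import comb

def p_at_least(k, n, p=0.5):
    """One-tailed binomial p-value: P(X >= k heads) in n fair flips."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# 3 or more heads in 4 flips happens by chance 31.25% of the time --
# nowhere near significant, which is why large aggregates are needed.
p_at_least(3, 4)   # 0.3125
```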

Lastly, to reference the two other values returned in the console during program execution…

  • Last window was hit? (yes/no) - This only applies to one-tailed analysis (when doing uni-directional targeting - only more 1s, or alternatively only more 0s). For one-tailed analysis, a window hit is determined by whether the z-value for that window has a polarity (positive or negative) corresponding to the target direction (if targeting more 1s then a positive z-value for the window is a hit/success; if targeting more 0s then a negative z-value is a hit). In one-tailed analysis we use the number of hits within a window group to calculate the window group p and z values. For two-tailed analysis (where we watch for both more 1s and more 0s) window hits are not used to calculate window group p or z values. (Note: I’ve just now modified the code to not return whether the last window was a hit if we are doing two-tailed analysis)
  • Running overall window bound tracker - For both one-tailed and two-tailed, in effect this tracks consecutive z-value polarity for successive window outcomes. So, if a window has a z-value that’s positive then our bound tracker adds 1. If a window has a z-value that’s negative then we subtract 1. The tracker maxes out at +5 and -5 (e.g. if it’s at -5 and the next window’s z-value is negative then we do not subtract 1). This running bound isn’t actually used for anything meaningful, neither in terms of probability calculations nor visual effects in the graphical window.
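The two console values above can be sketched as small helpers (hypothetical names; the real code may be organized differently, and the z = 0 case isn’t specified in the description, so I leave the tracker unchanged there):

```python
def window_hit(window_z, target_more_ones=True):
    """One-tailed window hit: the window's z polarity matches
    the targeted direction."""
    return window_z > 0 if target_more_ones else window_z < 0

def update_bound_tracker(tracker, window_z, limit=5):
    """Running bound tracker: +1 for a positive window z, -1 for a
    negative one, clamped to [-limit, +limit]."""
    if window_z > 0:
        step = 1
    elif window_z < 0:
        step = -1
    else:
        step = 0  # z == 0 unspecified in the description; leave unchanged
    return max(-limit, min(limit, tracker + step))
```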

Hope this is all helpful and understandable. I’ve also gone ahead and added this description under a new heading of Understanding console output in the GitHub readme for both the database and no-database versions of the code. Happy to answer questions.

1 Like

Thank you! Your explanation for windows and subtrials was very enlightening for me; in fact, the entire repo is so awesome for introducing people to the topic. It would make a cool web app.

@fluidfcs1 @ScottWilber thank you! That helps clarify the results for me.

One observation I noted when using the game is that the act of “forcing” the cube to spin does little to influence the results. I’ve noticed that having a feeling of anticipation of a positive result somehow “blocks” the influence on the number generator.

About a year ago, I spoke with a researcher from JHU who told me that during the PEAR experiments, the best results came from participants who had no expectations coming into the experiments and that when the participants were attempting to “force” a result, they ended up having no influence on the RNGs.

Is this true in all of your observations as well?

Yes. In magical circles this is called a “lust for results.” I find my best scores come when I just go with the flow.

1 Like

How one uses intention is very important. I would say one of the more important ways to get better results is to always maintain focus on the desired outcome. To be sure, gritting your teeth and trying to “force” something to happen is more likely to cause a headache than the intended result. The part of mind that causes results is apparently not the outer layer of the conscious mind, which, on the contrary, seems to stand in the way if that is the primary way of focusing. However, nothing will happen if focus is not maintained either.

The problem with comparing with PEAR results is that their system was so insensitive, good user feedback was not possible. That makes it difficult for a subject to learn what state of mind gives best results. Besides the necessity to constantly focus, it is also important to realize it’s not my surface layer of consciousness, what might be called the outer self or ego, that causes the desired shift in probability. Then I can release that outer effort to control, and let a deeper level of mind cause the effect. This takes some practice, which will be facilitated with a good training tool providing good user feedback.

To answer your question, my experience in many thousands of trials is that it’s not black or white. Good results can be obtained with a wide range of concentration approaches. The more the focus is from or in the outer mind, the harder and more tiring it is. However, I have not reached the point in learning where I can get significant results 100% of the time. At least that is true with the previous systems. My goal is to provide systems (hardware and processing) that allow most people, with some practice, to get good results.

2 Likes

I think there’s also good reason to believe that successful influence is a result of more than just the right type of focus, but which variables have input and to what degree is not well understood. The version of the code that uses a database stores information for many different types of variables, which I deemed to have the best potential based on review of the admittedly limited research that’s available. These include:

  • Participant age - Entered manually

  • Participant overall feeling - Subjective evaluation manually entered

  • Participant energy level - Subjective evaluation manually entered

  • Participant focus level - Subjective evaluation manually entered

  • Whether participant ate in the last 90 minutes - Manually entered

  • Whether participant meditated recently - Manually entered

  • Current environment temperature - Manually entered

  • Current environment humidity - Manually entered

  • Solar DST Index - A measure of the disturbance level in the Earth’s magnetosphere caused by solar wind variations. It is used to quantify the intensity of geomagnetic storms and their effects on Earth’s magnetic field. This is pulled automatically in real-time from the World Data Center for Geomagnetism.

  • The Kp geomagnetic index - A measurement of the global geomagnetic activity level. It quantifies the disturbances in the Earth’s magnetic field caused by solar activity, specifically related to coronal mass ejections (CMEs) and solar flares. This is pulled automatically in real-time from the German Research Centre for Geosciences.

  • Galvanic Skin Response - Galvanic skin response (GSR) is a method of measuring the electrical conductance of the skin. Strong emotion can stimulate your sympathetic nervous system, resulting in more sweat being secreted by the sweat glands. Code is written to use the Grove GSR Sensor connected to a Raspberry Pi. (Note: In the current code values are captured and printed but not stored to the database; that still needs to be done.)

  • Lots of EEG Data - Captures real-time brainwave data from the user. Code is written to read data from an Emotiv EPOC EEG using a node-red server and Mosquitto (in theory it would work the same with any Emotiv device). It’s all working, and includes not just the ability to measure raw brainwave signals (alpha, beta-high, beta-low, gamma, theta) but, interestingly, also uses Emotiv’s algorithms to assign objective values to level of excitement, focus, interest, engagement, stress, relaxation, long-term excitement, attention, and cognitive stress. Similar to the GSR portion, the current code is capturing data but not yet storing it to the database. This produces an immense amount of data and I never quite decided if I wanted to store it all in raw form or average/aggregate it somehow before storage.

  • Technique to be used during session - Options include:
    1. Visualization
    2. Attempt to identify/merge with the device or process
    3. Affirmation or assertion-based approach
    4. Passive attention
    5. Generation of intense energy or emotion
    6. Focus on feelings of a successful outcome
    7. General mind-quieting and focus
    8. Focus on feeling love
    9. General focus / other

My initial intent, which I never fully realized, was to do a large number of experiments while measuring all of this data. Since it’s being stored to the database it can then be analyzed later, likely using machine learning or even “AI”, to find correlations between positive psi outcomes and values of specific variables (e.g. perhaps we find a certain value for the Solar DST index that is conducive to psi). In theory, after identifying the variables most conducive to psi, these could be fed back into algorithms to further enhance our ability to measure psi influence (e.g. perhaps we only “count” a measurement when variables are most conducive to successful outcomes). If someone ever wanted to carry this effort forward I still think it could be really useful.

2 Likes

Have you considered ephemeris data? There is a fascinating book called Cosmic Patterns by John Nelson. Nelson worked as a radio analyst for RCA and devised a method to forecast radio blackouts by observing the relationship between planetary alignments and sunspot activity. During high sunspot activity, solar flares and CMEs are more prevalent, which is what causes radio blackouts. Nelson found that sunspot activity on the Sun, and consequently solar flares and CMEs, increased when the planets aligned at specific angles. Activity was highest when the planets formed either a Conjunction (0-degree angle), a Square (90-degree angle), or an Opposition (180-degree angle) to the Sun.

By knowing when planets would form these angles, he was able to accurately forecast radio blackouts - with a success rate of something like 90% if I remember correctly.

The correlation between planets and sunspot activity is pretty interesting and really makes you wonder about the validity of the influence that planets supposedly have on human consciousness.

I wonder if ephemeris data would be a useful variable to consider in this program.

I hadn’t considered this. It sounds interesting and potentially worth including, assuming that 1) there is a data source that I could pull from, and 2) someone expresses interest in actually doing something with this code (otherwise I would conserve my time).

1 Like