Just saw an interesting idea in a 4chan post (https://archive.4plebs.org/x/thread/38711593)
In short: they assume that if MMI happens because observation collapses quantum probabilities, then its effect may be increased by observing at a higher framerate. The post suggested training the operator to perceive at a higher FPS, but what actually interests me is whether we could detect MMI better if the RNG measurements and the visual feedback ran at a higher FPS. Technically, even without any training the human eye can already register frames with an exposure as short as 13 ms. Has anyone here tried to experiment with that?
I did hundreds of experiments varying the interval between initiating a trial and producing an output/giving feedback. The interval ranged from 50 ms to 2 seconds. I found the highest effect sizes were produced with the interval set to 200-250 ms (1/5 to 1/4 second). 200 ms is about the minimum time it takes for the brain/mind to form conscious awareness of a new thought. This result is for initiated trials.
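If anyone wants to replicate this, below is a minimal sketch of one initiated-trial loop under my own assumptions: run_trial and sample_rng are names I made up, the outcome is a simple count of 1-bits, and os.urandom merely stands in for a real hardware/quantum RNG.

```python
import os
import time

# Minimal sketch of an initiated-trial loop. The names and the
# bit-counting outcome are my assumptions; os.urandom stands in
# for a real hardware/quantum RNG.

FEEDBACK_DELAY_S = 0.225  # the 200-250 ms window reported above

def sample_rng(n_bits: int = 200) -> int:
    """Count the 1-bits in n_bits of (stand-in) RNG output."""
    data = os.urandom((n_bits + 7) // 8)
    return sum(bin(b).count("1") for b in data)

def run_trial() -> int:
    # Trial is initiated here; output/feedback follows after the delay.
    time.sleep(FEEDBACK_DELAY_S)
    ones = sample_rng()
    print(f"ones in 200 bits: {ones}")  # feedback to the operator
    return ones

if __name__ == "__main__":
    results = [run_trial() for _ in range(10)]
    print("mean:", sum(results) / len(results))
```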
Some results from the classified Stargate remote viewing project were used to show that accuracy (effect size) is related to the entropy of the target. Here, entropy means the magnitude of changes in the visual appearance of the target during the observation: if the target is changing visually, it is more “observable” than a still or static scene. This seems related to the idea of increased frames per second, if each frame is scored by how much it changes from the previous frame.
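To make that concrete, here is one crude way to score frame-to-frame change, using the mean absolute difference between consecutive grayscale frames. This is my own toy proxy (assuming numpy), not the entropy measure the Stargate analyses actually used.

```python
import numpy as np

# Toy proxy for "visual entropy": mean absolute difference between
# consecutive grayscale frames. My own crude measure, not the one
# used in the Stargate analyses.

def change_score(frames: np.ndarray) -> float:
    """frames: (n_frames, height, width) array of grayscale values 0-255."""
    diffs = np.abs(np.diff(frames.astype(np.int16), axis=0))
    return float(diffs.mean())

# A static scene scores ~0; pure noise scores high.
static = np.full((60, 64, 64), 128, dtype=np.uint8)
noisy = np.random.randint(0, 256, size=(60, 64, 64), dtype=np.uint8)
print(change_score(static), change_score(noisy))
```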
It would be interesting to experiment with a continuous high-FPS animation where, for example, a point on the screen moves according to data from the RNG, updated every 13 ms. Technically it would be observed many times per second, so the MMI effect might accumulate.
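Here is a rough sketch of that display, assuming pygame and again using os.urandom as a stand-in for a real RNG; 13 ms per frame works out to roughly 75 Hz.

```python
import os
import pygame

# Sketch of the proposed display: a dot nudged by fresh RNG output on
# every frame, redrawn roughly every 13 ms (~75 Hz). os.urandom stands
# in for a real hardware/quantum RNG.

pygame.init()
screen = pygame.display.set_mode((640, 480))
clock = pygame.time.Clock()
x, y = 320, 240

running = True
while running:
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            running = False
    # One byte per axis, mapped to a small step in [-8, 7].
    x = max(0, min(639, x + (os.urandom(1)[0] - 128) // 16))
    y = max(0, min(479, y + (os.urandom(1)[0] - 128) // 16))
    screen.fill((0, 0, 0))
    pygame.draw.circle(screen, (255, 255, 255), (x, y), 5)
    pygame.display.flip()
    clock.tick(75)  # ~13 ms per frame

pygame.quit()
```

With clock.tick(75) the dot refreshes about every 13 ms, so each redraw is a fresh "observation" of RNG output.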
Prior to visual or auditory observation, interaction with some measuring device must occur. In a quantum system this will naturally cause decoherence unless extreme mitigation measures are taken (e.g. near-zero temps, electromagnetic shielding, vacuum, etc). Framerate is not going to matter here.
I did a bit of research into how much the human eye can take in, and although the concept of FPS doesn’t apply to the eye as such, since it perceives a continuous stream of data, there are limits here and there. The two that stood out for me were:
- The flicker fusion threshold (the point where flickering light appears continuous) is typically around 60-90 Hz for most people in normal conditions. This is why standard displays often have refresh rates of 60 Hz or higher.
- Under optimal conditions, humans can detect very rapid changes, such as flashes of light, at rates up to 200-300 Hz. For example, fighter pilots in controlled experiments have been shown to distinguish brief flashes at rates as high as 220 Hz.
But this whole thread spun me onto a different idea. All the apps and digital forms of experimenting and training for remote viewing are based on static photos. If I get the time I’ll prototype a system that takes a few words from a QRNG, feeds them into a Sora-like GenAI text-to-video generator, and then provides the output as a target to the remote viewer under a randomly assigned target number. It’ll be interesting to see the results in that a) the target is not static but quasi-moving (to the extent GenAI videos allow at the moment), and b) no human will have seen the output until after the RV session (so no collapse of the probability wave due to conscious observation).
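A skeleton of that pipeline might look like the sketch below. Everything in it is a placeholder of my own: fetch_qrng_bytes should call whatever quantum RNG is actually available (secrets.token_bytes is just a stand-in), WORDLIST is a toy list, and generate_video is left unimplemented because no particular text-to-video API is assumed.

```python
import secrets

# Skeleton only; every name here is a placeholder of mine.
WORDLIST = ["harbor", "glacier", "market", "forest", "tower", "desert"]

def fetch_qrng_bytes(n: int) -> bytes:
    # Replace with a call to an actual quantum RNG (device or service).
    return secrets.token_bytes(n)

def pick_words(k: int = 3) -> list:
    raw = fetch_qrng_bytes(k)
    # Modulo bias is ignored for this sketch.
    return [WORDLIST[b % len(WORDLIST)] for b in raw]

def generate_video(prompt: str) -> str:
    # Stand-in for whatever text-to-video API ends up being used;
    # should return a path to the rendered clip, unviewed by anyone.
    raise NotImplementedError("hook up your text-to-video service here")

def make_target() -> dict:
    prompt = " ".join(pick_words())
    target_number = int.from_bytes(fetch_qrng_bytes(4), "big") % 10**8
    return {"target_number": f"{target_number:08d}", "prompt": prompt}

target = make_target()
print("target number for the viewer:", target["target_number"])
# The prompt and the rendered video stay unseen until after the session.
```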