Bias amplification and modulation

I’ve got an idea.
For now, we have three more or less reliable methods to process MMI: random walker (RW), majority voting (MV) and Bayesian analysis.
What do MV and RW have in common? Both are, in effect, slightly different ways of decoding a PWM-modulated signal into a continuous one.

However, when we decode our PWM signal into a numerical time series, MV and RW apply slightly different filtering functions. MV gives us 1 when the difference quotient (f(iτ+τ) − f(iτ))/τ is bigger than a threshold. RW has a slightly more complex filtering function, yet I bet it can be formulated the same way.
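
To make the comparison concrete, here is a minimal Python sketch of the two filters (the function names, the 0.5 threshold and the RW bound are my own illustrative choices, not anyone's actual implementation). With f(i) taken as the cumulative count of 1s, the MV difference quotient over a window of length τ is just the window mean:

```python
import random

def majority_vote(bits):
    """MV: output 1 when the window mean exceeds 1/2.  With f(i) = cumulative
    count of 1s, the mean over a window of length tau is (f(i+tau) - f(i)) / tau."""
    return 1 if sum(bits) / len(bits) > 0.5 else 0

def random_walker(bits, bound=16):
    """RW: integrate +1/-1 steps; the first boundary hit decides the output."""
    position = 0
    for b in bits:
        position += 1 if b else -1
        if position >= bound:
            return 1
        if position <= -bound:
            return 0
    return 1 if position > 0 else 0  # no boundary hit: fall back to the sign

# Both amplifiers applied to the same slightly biased source (p = 0.52 here).
stream = [1 if random.random() < 0.52 else 0 for _ in range(10_000)]
print(majority_vote(stream), random_walker(stream))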

I must add that another widely used method is multidimensional kernel density estimation (KDE), which is what Randonautica uses. If we look at the principle of what the Randonautica algorithm does, density estimation methods essentially rely on integration of pulse signals; multidimensional integration, in that case.
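
As a toy illustration of that integration view (the uniform point cloud, grid size and "densest point" readout are my own stand-ins, not Randonautica's actual parameters), each random point contributes a pulse that the Gaussian kernel integrates into a smooth density:

```python
import numpy as np
from scipy.stats import gaussian_kde

# Scatter random points in 2-D, smooth them with a Gaussian kernel
# (multidimensional integration of pulses), and report the densest spot.
rng = np.random.default_rng()
points = rng.uniform(0, 1, size=(2, 500))           # 500 random (x, y) pulses
kde = gaussian_kde(points)                          # multidimensional KDE

grid = np.mgrid[0:1:100j, 0:1:100j].reshape(2, -1)  # evaluation grid
density = kde(grid)
peak = grid[:, density.argmax()]
print("densest point (attractor-like):", peak)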

I can’t tell you anything clever about Bayesian analysis, because I haven’t understood it well yet. Tell me if you can find any similarities, because it might be very important for our understanding of the process.

But signal processing doesn’t stop at PWM and filtering! We could also try wavelets, entropy analysis of the sequence (which I’m trying right now) and loads and loads of other approaches. I bet it’s possible to find better filters.
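
For the entropy-analysis direction, a simple starting point is a sliding-window Shannon entropy profile; the window and step sizes below are arbitrary guesses, not tuned values:

```python
import math

def shannon_entropy(bits):
    """Entropy (bits/symbol) of a 0/1 sequence from its empirical p(1)."""
    p = sum(bits) / len(bits)
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def sliding_entropy(bits, window=256, step=64):
    """Entropy profile over the sequence; dips below 1.0 flag stretches
    that are less random than a fair coin."""
    return [shannon_entropy(bits[i:i + window])
            for i in range(0, len(bits) - window + 1, step)]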

Thank you for taking the time to consider alternative methods of bias amplification. I have found that it is, or may be, possible to use more advanced methods of pattern recognition in MMI processing.

We often observe that mind can produce a detectable output even when the MMI signal is processed in a way that would ordinarily remove or obscure it. That is, the complexity of the processing doesn’t seem to block the ability of mind to produce the intended result. In addition, I have observed that mentally-influenced signals tend to get results by utilizing less energy than would be expected from a “random” influence on the bits. One may infer from this that the mental influence is entering at a higher level or mathematical dimension.

My research has shown that artificial neural networks (ANN) or other types of machine learning can provide measurable improvement in detecting hits or misses in MMI signals. This type of processing does not require knowledge of the underlying patterns, which may be difficult to identify.
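
As a rough sketch of the general approach (not the actual models behind that result): extract a few summary features per window and fit any off-the-shelf classifier. The features, window size and the placeholder labels below are illustrative only; real labels would come from recorded trial outcomes.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def features(bits):
    """Handcrafted summary features; an actual ANN would learn its own."""
    b = np.asarray(bits, dtype=float)
    bias = b.mean() - 0.5
    lag1 = np.corrcoef(b[:-1], b[1:])[0, 1]        # lag-1 autocorrelation
    runs = 1 + np.count_nonzero(np.diff(b))        # number of runs of 1s/0s
    return [bias, lag1, runs / len(b)]

# Placeholder training data: in practice X comes from recorded MMI windows
# and y from the hit/miss outcome of each trial; these are random stand-ins.
rng = np.random.default_rng(0)
X = np.array([features(rng.integers(0, 2, 512)) for _ in range(200)])
y = rng.integers(0, 2, 200)
clf = LogisticRegression().fit(X, y)
# With real labels, clf.predict_proba(...) would score new windows.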

I have tried other, simpler methods of getting better results. These include trying to remove blocks of data that seem to contain less information, as identified by clustering toward the center of a distribution. This is similar to what you suggest. However, much statistical modeling suggests this approach will not provide any advantage.
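
For reference, a minimal version of that block-removal idea might look like the following (block size and cutoff are arbitrary; as noted above, modeling suggests it gives no real advantage):

```python
import numpy as np

def drop_central_blocks(bits, block=128, z_cut=0.5):
    """Discard blocks whose bias z-score sits near the center of the null
    distribution (|z| < z_cut), keeping only the more 'informative' tails."""
    blocks = np.asarray(bits[:len(bits) // block * block]).reshape(-1, block)
    z = (blocks.sum(axis=1) - block / 2) / np.sqrt(block / 4)  # binomial z-score
    return blocks[np.abs(z) >= z_cut].ravel()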

In conclusion, some processing methods may improve results, though they are rather complex and require massive amounts of real MMI data to test and perfect. That would take an organized and concerted project with standardized equipment and data-gathering protocols.

My point was that all existing methods of MMI processing have something in common in their mathematics: they are based on PWM processing. It might be important.

The three methods of bias amplification are similar in that they all compress information. Data compression is their common mathematical basis. The type of information compressed is very specific: they compress information encoded as an excess of 1s (or 0s) in the input into a shorter output sequence. The information can be compressed until the output is either all 1s or all 0s, or a single 1 or a single 0.
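
A quick calculation shows how strongly this compression concentrates a small excess of 1s into the single output bit. Assuming a per-bit probability p and a simple majority decision (my own framing of the endpoint, using scipy's binomial tail):

```python
from scipy.stats import binom

p = 0.51                              # assumed per-bit probability of a 1
for n in (101, 1001, 10001):          # odd N avoids ties
    p_out = binom.sf(n // 2, n, p)    # P(more than half of the N bits are 1)
    print(f"N={n}: output bit is 1 with probability {p_out:.3f}")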

Other types of MMI amplification, different from bias amplification, would depend on higher-order patterns in the MMI data sequences. As I have tested previously, there are relationships between different types of patterns, such as bias, autocorrelation and runs of 1s (or 0s).

Beyond that, there seems to be a positive autocorrelation in trial results. If the user has just gotten a hit, there is a slightly higher probability the next trial will also produce a hit.
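
A direct way to check this on recorded data is to compare the hit rate immediately following a hit against the overall hit rate. A minimal sketch, assuming trials are simply recorded as a 0/1 list:

```python
import numpy as np

def hit_after_hit(trials):
    """P(hit at t+1 | hit at t) versus the overall hit rate."""
    t = np.asarray(trials, dtype=int)
    after_hit = t[1:][t[:-1] == 1]    # outcomes that immediately follow a hit
    return after_hit.mean(), t.mean()

trials = [1, 1, 0, 1, 1, 1, 0, 0, 1, 1]   # toy data
print(hit_after_hit(trials))              # -> (approx. 0.67, 0.7) here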

All these methods are based on probabilities or information theory. We search for occurrences in the data that deviate from purely random behavior. But beyond that, some possible deviations caused by mental influence have specific properties that can give an even greater advantage. Entropy is reduced, either at the point of measurement of the entropy source, or as the trial result is determined. MMI measurements tend to produce these results with minimum use of energy. This gives a hint about how best to search for MMI effects, but the underlying mechanism is neither clear nor simple.

1. It has a very obscure connection to compression of information, because we can’t reconstruct the original sequence.
2. I’m researching the question of autocorrelation right now, but in my opinion word counting would be a better approach.

1. I understand your observation; let me explain better. Most people are most familiar with compression methods for digital data that encode images, sounds and text. There are two types of compression, lossless and lossy. Lossy compression loses some data to reach greater compression, so it is not entirely reversible.

The three bias amplification methods compress data by removing redundant or unnecessary information. That is primarily the original random sequence, which contains no (or little) usable information. Random sequences are also not compressible by usual methods. When only the final output (1 or 0) is required, one bit of information can encode it. If the input probability and sequence length are needed, 16-32 bits can encode that information. This is clearly a lossy compression and reconstruction of the original sequence is not possible.
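
As an illustration of that 16-32 bit encoding, here is one purely hypothetical packing of the output bit, a quantized input probability and the sequence length into a single 32-bit word (the field widths are my own choice):

```python
def pack_result(out_bit, p, length):
    """Pack outcome (1 bit), probability (15-bit fixed point) and sequence
    length (16 bits) into one 32-bit integer -- lossy by design."""
    q = round(p * (2**15 - 1))            # probability -> 15-bit fixed point
    assert 0 <= length < 2**16
    return (out_bit << 31) | (q << 16) | length

def unpack_result(word):
    return (word >> 31) & 1, ((word >> 16) & 0x7FFF) / (2**15 - 1), word & 0xFFFF

word = pack_result(1, 0.51, 10_000)
print(unpack_result(word))    # (1, ~0.51, 10000): the probability is quantized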

If higher-order patterns are important to the MMI algorithm, such a simple compression cannot be used. This is especially true if we don’t know the properties of the patterns of interest. However, some type of compression, data reduction or dimensional reduction must be applied. One cannot easily take a 20,000 bit sequence and map it to an output space. That would be a 20,000-dimension space, which would be a challenging computational problem for most computers. A quantum computer could probably handle this problem, and the concept is not unreasonable. Simple dimensional reduction methods, such as factor analysis, are not effective with nearly random sequences. The signal-to-noise ratio must first be increased, which is one direct effect of bias amplification.
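
The signal-to-noise point can be made concrete: the z-score of an excess of 1s grows as the square root of the sequence length, z = 2(p − 0.5)√N. A small numeric sketch (p = 0.51 is just an example value):

```python
import math

def bias_z(p, n):
    """z-score (an SNR proxy) of an excess of 1s after n bits."""
    return 2 * (p - 0.5) * math.sqrt(n)

for n in (100, 10_000, 1_000_000):
    print(n, round(bias_z(0.51, n), 2))   # 0.2, 2.0, 20.0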