Increasing MMI Effect Size by LFSR Processing of MMI Bits

MMI bits are usually processed to remove statistical defects, primarily bias (imbalance between 1s and 0s) and autocorrelation (relationship between bits in the sequence). The processing can consume many bits to produce a single output bit with the necessary statistical quality. The processing described here can return statistically acceptable bits while using only one input bit for each output bit. In addition, MMI bits produced by a true random generator or MMI generator can be extended or “stretched” to provide more bits than come from the generator.

One might assume that if each bit with entropy HB is extended to N bits, the resulting entropy in each extended bit will be HB/N bits. This is untrue: if each extended bit carried only HB/N bits of entropy, recombining all the extended bits would yield less than the entropy in the original bit, HB. Assume instead that entropy is conserved in a reversible transformation: that is, recombining all the extended bits results in a single bit with the original entropy, HB. To make this true, the relative predictability, PR, of each extended bit is the relative predictability of the original bit raised to the power 1/N. When the bits are recombined, the relative predictability of the recombined bit is the product of the relative predictabilities of the N bits. This gives a power of (1/N) x N = 1, restoring the original PR and hence the original entropy, HB.
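
In symbols (writing PR = 2P - 1 for relative predictability, consistent with the worked numbers below), the extension and recombination are:

\[
PR_{ext} = PR_B^{1/N}, \qquad P_{ext} = \frac{1 + PR_{ext}}{2}, \qquad
H_{ext} = -P_{ext} \log_2 P_{ext} - (1 - P_{ext}) \log_2 (1 - P_{ext}),
\]
\[
\prod_{i=1}^{N} PR_{ext} = \left( PR_B^{1/N} \right)^{N} = PR_B ,
\]

so recombining the N extended bits restores the original relative predictability and hence the original entropy, HB.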

How are bits extended for use in MMI?

Bits are commonly extended for use in encrypting information or messages. A key or seed, which is usually only 128 or 256 bits long, is transmitted from one party to another to enable sharing secret information. The key is used as a seed for a pseudorandom generator, which then generates a large number of bits – hundreds to thousands of times as many bits as were in the seed. The seed itself should be a high-quality true random number, although the true entropy of the seed is rarely known since entropy is so little understood.

The following example illustrates the relationship between original or input bits and extended bits. The entropy of the original bit(s), HB, must be known. Assume HB = 0.999999. What is the entropy of the extended bits, whose relative predictability is PRB^(1/N)? The predictability is PB = 0.5005887049 and PRB = 0.0011774099. If the input bit is extended by a factor of 10 (10 bits out for each bit in), PR10 = 0.50943968 and P10 = 0.75471984. This gives H10 = 0.80371129 bits per extended bit. In each case, the subscript 10 indicates the variable applies to bits extended by a factor of 10.

As the input entropy gets closer to 1.0, the resulting extended-bit entropy also increases. A high-quality random generator without deterministic postprocessing may produce bits with H = 1.0 - ε, where ε = 10^-50. Then PR10 = 0.003214347 and H10 = 0.999992547. As the number of extended bits increases, the extended-bit entropy decreases: with N = 100, the extended-bit entropy is H100 = 0.75718. Note, the total amount of extended-bit entropy also increases to over 75 bits.
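
These numbers are easy to check. The following Python fragment uses the relations above (P = (1 + PR)/2 and the binary entropy function); for the 1.0 - 10^-50 case it recovers PR from the small-PR approximation H ≈ 1 - PR^2/(2 ln 2), which is my addition rather than something stated above.

import math

def entropy(p):
    # Binary entropy (bits per bit) for P(most likely bit) = p
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def p_from_pr(pr):
    # Predictability from relative predictability: P = (1 + PR) / 2
    return (1 + pr) / 2

# Example with HB = 0.999999: PB = 0.5005887049, PRB = 0.0011774099
pr_b = 2 * 0.5005887049 - 1
pr_10 = pr_b ** (1 / 10)                 # ~0.50943968
print(p_from_pr(pr_10))                  # ~0.75471984
print(entropy(p_from_pr(pr_10)))         # ~0.80371129 bits per extended bit

# High-quality source, H = 1 - 1e-50; recover PR from H ~ 1 - PR^2 / (2 ln 2)
pr_hq = math.sqrt(2 * math.log(2) * 1e-50)
print(entropy(p_from_pr(pr_hq ** (1 / 10))))    # ~0.999992547
print(entropy(p_from_pr(pr_hq ** (1 / 100))))   # ~0.75718, times 100 bits > 75 bits total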

MMI is a very different type of application from encryption, which is purely algorithmic. MMI applications cannot be simulated, so only real-world testing can confirm methods that actually work. Two variations for extending bits for MMI use have been tried. They use a set of linear-feedback shift registers (LFSRs) constantly fed by a source of entropic bits. Entropic bits are generated by sampling a property of a physical entropy source, typically represented by a voltage. The figure shows an example layout (from US pat. 6,862,605, Wilber, fig. 5).

The registers in the LFSRs must have lengths relatively prime to each other; that is, their lengths can have no common factors other than 1. Lengths that are all distinct prime numbers are automatically relatively prime. The relative predictability of the output bits can be calculated as the generator runs through a number of self-seeding initialization cycles. The PR can be estimated for the example generator after 330 bits have been input, assuming the entropy of each input bit is 0.25746 bits. Predictability at the input is P = 0.956647, so PR = 0.913294. The PR at the point labeled output is roughly 0.913294^96 = 0.0001654314, giving P = 0.5000827157, and finally the entropy is H = 1 – ε, where ε = 1.9741521 x 10^-8. Note, these are only estimated results.

The calculations assume no output bits are taken during the self-seeding period. When bits are taken at the output, the effect is to split or extend the bitstream into two streams; one is the output stream and the other is fed as usual into the LFSRs. For clarity, take the output from line 131, which also feeds the LFSRs. Initially the output bits and the bits fed into register 128 have a relative predictability equal to the square root of 0.913294 (from each input bit) times 0.0001654314 (the initial value of the unsplit bits), which is 0.0122918. While register 129 is being filled, each output bit and each bit fed into the register has PR = 0.00507654. While register 130 is being filled, each output bit and each bit fed into the register has PR = 0.00161539. Note, for subtle reasons related to calculating the entropy, the longest register should be filled first, progressing to the shortest.
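
For reference, this short Python fragment just reproduces the arithmetic behind the rough estimates above (as the update note below explains, the estimates themselves are provisional):

import math

def entropy(p):
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

pr_in = 0.913294                      # from input bits with H = 0.25746, P = 0.956647
pr_out = pr_in ** 96                  # ~0.0001654314 at the point labeled output
print((1 + pr_out) / 2)               # ~0.5000827157
print(1 - entropy((1 + pr_out) / 2))  # epsilon ~1.97e-8

# Splitting the stream: output bits and the bits fed into register 128
print(math.sqrt(pr_in * pr_out))      # ~0.0122918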

UPDATE NOTE: more rigorous calculations suggest these estimates are incorrect, so the results in the previous two paragraphs are provisional. The LFSR approach does work to produce statistically high-quality output sequences, and it can be used to split or extend the number of input bits if their entropy is very close to 1.0. However, exact modeling of the output entropy, which is quite complex, suggests the output bits will have the same entropy as the input bits when one output is taken for each input. More study is required.

I think we should think in a different direction about extending MMI bits.
What do we have, in a nutshell? We have a bitstream produced by a QRNG. It is supposed to be truly random. But some strange effect involving conscious intention infuses a signal into our sequence and decreases its entropy.
We should note that retrocausality is involved, because that signal is not some special kind of code; rather, it leads to a kind of future that is desired by the operator.
However, it looks like the MMI effect has a lot of noise, because I can’t make it work at certain hours. And one forum member said that he tried to choose a link with MMI, but the result resembled his son’s thoughts more than his own.
Despite the fact that MMI can surpass the complexity of scramblers, what if different intentions have different abilities to surpass the complexity of certain transformations of the signal?
So, different transforms of the initial bitstream might be used to make MMI more responsive to certain thoughts, or to the thoughts of certain people.

Processing of MMI sequences is really a way of looking for patterns. Bias (an excess of 1s or 0s – the most common pattern observed) and autocorrelation (correlation between bits in a sequence) are the simplest forms of patterns that can be detected.

I think that MMI is not so repeatable that a specific intention by different users would have the same response in the MMI generator. That makes looking for more specific patterns very difficult. What pattern in the MMI generator output could the processing be looking for? One could consider some type of rapid machine learning to be trained to look for such a specific pattern, but it would only be responsive to the one intention and the one individual.

MMI is not generally responsive enough yet to detect such a specific output in a single trial, so multiple trials would be required in real MMI systems. I have found it is more successful to train the user to get the kind of response desired. Our minds are much more flexible than any pattern recognition algorithm we can design.

I have learned a great deal in the past year about ways to make MMI systems more responsive to mental influence. Any user can learn to get specific results with more responsive systems. However, I designed an MMI generator that outputs an analog stream of bits that can be processed using analog filters to generate a number of tones. It may be possible to hear a pattern that relates to a specific individual’s intention. Alternately, the tones could be processed by an algorithm to recognize patterns. Even so, that approach would work better if the MMI generator is more responsive to mental influence.

I’ll try to describe it better.
MMI can surpass the complexity of a scrambler. But what if different scramblers need different amounts of effort to be surpassed? And different intentions have different power to surpass certain algorithms.
So we can try to use different scramblers and use their outputs to produce a single stream of MMI events, which will be used for feedback. I think it’s logical that if there is any difference in influence, we’ll see a more distorted distribution on the scramblers that are easier to influence.
I guess that kind of MMI footprint could be analysed.

However, I’m not sure what kind of scramblers we should use as the base for such a footprint. I would like to hear ideas. I’ve tried to look for theoretical works about calculations in temporal loops, but so far I don’t have a very good idea of what could be used.

Speaking about increasing the effect size, I think we should look for materials that can be more responsive to psychic powers.
Natural crystals often change their clearness and color when we charge them with our intention. It’s measurable. And possibly we can think about an artificial crystal that could be more responsive.

Possibly it might be a luminophore. If natural crystals can change their clearness and color, it means the power of mind can relocate defects in their structure. Defects in luminophore crystals are the source of light emission, so it might work.

To focus psychic power I use an alchemical alloy made of bone, blood and metal (yes, it’s possible). But I won’t elaborate here, because the process is much more in the field of magic than science and requires sacrifice. However, possibly some less obscure coatings could be used to channel psychic power into a sensor.

The LFSR approach to altering or “scrambling” the MMI generator output is something I have tested and modeled extensively. Therefore I know it has real potential to enhance the resulting effect size of the original sequence. The researchers at Princeton (PEAR) scrambled their generator outputs by two methods: the first was to invert every other bit and the second was to randomly invert bits. These were accomplished by XORing the output with a series of alternating 1s and 0s, and secondly by XORing the output with a pseudorandom sequence. Their reason for doing this was not to enhance the effect size, but to mask the effect of poor statistical properties – primarily bias – in their generator output. They had to do this because their generators had such a large bias it overwhelmed the very intention-caused effect they were trying to observe.
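
As a concrete illustration of those two masking operations (a minimal sketch, not PEAR's actual code): XORing with an alternating 0101... pattern inverts every other bit, and XORing with a reproducible pseudorandom sequence inverts bits at pseudorandom positions. Either way, a fixed bias in the source is hidden in the masked output.

import random

def mask_alternating(bits):
    # XOR with 0,1,0,1,...: inverts every other bit
    return [b ^ (i & 1) for i, b in enumerate(bits)]

def mask_pseudorandom(bits, seed=1):
    # XOR with a reproducible pseudorandom sequence (the seed value here is arbitrary)
    rng = random.Random(seed)
    return [b ^ rng.getrandbits(1) for b in bits]

# A heavily biased source looks balanced after either mask
biased = [1 if random.random() < 0.6 else 0 for _ in range(10000)]
print(sum(biased) / 10000)                      # ~0.60
print(sum(mask_alternating(biased)) / 10000)    # ~0.50
print(sum(mask_pseudorandom(biased)) / 10000)   # ~0.50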

I think you are suggesting literal ways of scrambling an original sequence, such as applying a hash function to blocks of data. There may be a number of different hash functions that could be tried and compared. This is an approach I have not tested, so I don’t know what the result will be.

There is also the inverse of the bias amplifier: a bias canceller. I believe I have described the algorithm elsewhere in the forum. The algorithm takes pairs of non-overlapping bits from the original sequence. If the pair of bits is (0, 1), output a 1; else, if the pair of bits is (1, 0), output a 0. If the bits are the same, do not output anything and take another pair of bits to process. Algorithmically, this will cancel any bias present in the sequence. On average, the number of output bits is one-quarter the number of input bits, just like the simplest bias amplifier, which has a very similar structure. To provide complete information, the algorithm for the simplest bias amplifier is: take pairs of non-overlapping bits from the original sequence; if they are both 1, output a 1; if they are both 0, output a 0; if the bits are different, do not output anything and take another pair of bits to process.

I think it would be interesting to compare the MMI effect sizes of the bias-amplified output versus the bias-cancelled output of the same original sequence. One could even subtract the bias-cancelled result from the bias-amplified result, perhaps enhancing their difference, in line with your thought of comparing different transformation methods. This is something that could be done with the output sequence from the MED100Kx8 MMI generator, so it could be available for immediate testing.
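
For concreteness, here is a minimal Python sketch of both algorithms as just described, with a quick check on a slightly biased test sequence (the 0.52 bias is only an example value):

import random

def bias_cancel(bits):
    # Bias canceller: (0,1) -> 1, (1,0) -> 0, equal pairs are discarded (non-overlapping pairs)
    out = []
    for a, b in zip(bits[0::2], bits[1::2]):
        if a != b:
            out.append(1 if a == 0 else 0)
    return out

def bias_amplify(bits):
    # Simplest bias amplifier: (1,1) -> 1, (0,0) -> 0, unequal pairs are discarded
    out = []
    for a, b in zip(bits[0::2], bits[1::2]):
        if a == b:
            out.append(a)
    return out

bits = [1 if random.random() < 0.52 else 0 for _ in range(100000)]
print(sum(bits) / len(bits))                          # ~0.52
c = bias_cancel(bits);  print(sum(c) / len(c))        # ~0.50, about 1/4 as many bits out
a = bias_amplify(bits); print(sum(a) / len(a))        # ~0.54, small bias roughly doubled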

I have tested a number of methods of transforming properties in the original sequence to a bias that could be easily amplified and observed. These included autocorrelation of at least the first three orders, crosscorrelation of multiple sequences, and runs of 1 up to runs of 4 or 5. I found that first-order autocorrelation is just as responsive as bias, and it is completely independent of bias as well. Using both bias and first-order autocorrelation produces two sequences from one original, each containing the same amount of mentally imposed information, that is, deviation from the normally expected random behavior. Counting runs of various lengths also contained a lesser amount of information. I found the runs transformation to be not particularly useful, and there were issues of autocorrelation in the converted sequences. In addition to these methods of analysis, I used factor analysis on blocks of MMI data and artificial neural networks to approximately double the effect size. This was a great deal of data analysis during development and computational overhead when running, all for a limited gain.
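
One simple way first-order autocorrelation can be converted to a bias (my illustration, not necessarily the exact conversion used in the tests above) is to XOR non-overlapping adjacent pairs: positive autocorrelation then appears as an excess of 0s, negative autocorrelation as an excess of 1s, and the derived sequence can be bias-amplified like any other.

def autocorr_to_bias(bits):
    # XOR adjacent non-overlapping pairs; first-order autocorrelation shows up as bias
    return [a ^ b for a, b in zip(bits[0::2], bits[1::2])]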

Can you tell me more about neural nets?

The most important thing to know about neural nets is they are just pattern recognition algorithms, though they are sometimes used to project data into the near future based on prior patterns, or even to filter a noisy signal. Still, these and many other variations are all based on finding real patterns that exist in the signal plus noise. If there is a lot of noise in the patterns, the algorithm needs more data to be able to learn them. That is one of the challenges in machine learning: it can take a truly enormous number of examples to generalize its recognition of the underlying patterns.

There are many types that have evolved over the years. See Types of neural networks or machine learning. The latest type I used with MMI is a type of rapid machine learning that can be “trained” in a few steps. The algorithms available online (which rarely work, by the way) are not good enough for MMI, because one of the steps involves using a set of random numbers as part of the input. The performance of the trained network depends somewhat on the specific random numbers used, so the training is rarely optimal. I developed a two-stage method that trains from the output back to the random numbers to pick the optimum ones.
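
To show why the particular random numbers matter, here is a generic Python sketch of a rapid, random-feature style network (my illustration, not the two-stage method mentioned above): the input weights are random, the output weights are solved in a single least-squares step, and since the result depends on the random draw, the sketch simply tries several draws and keeps the best one.

import numpy as np

def train_random_feature_net(X, y, hidden=50, candidates=20, seed=0):
    master = np.random.default_rng(seed)
    best = None
    for _ in range(candidates):
        rng = np.random.default_rng(int(master.integers(2**32)))
        W = rng.standard_normal((X.shape[1], hidden))   # random input weights
        H = np.tanh(X @ W)                              # hidden-layer outputs
        beta, *_ = np.linalg.lstsq(H, y, rcond=None)    # output weights in one step
        err = float(np.mean((H @ beta - y) ** 2))       # training error for this draw
        if best is None or err < best[0]:
            best = (err, W, beta)                       # keep the best random draw
    return best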

Anyway, learning about machine learning and using it is rather involved, both mathematically and in terms of writing programs. At some point, I may go into that more deeply, but it is a little off topic at the moment.

Here is a simplified design for LFSR statistical correction and/or stretching (more bits out than put in) of MMI bits. The LFSR uses only entropy-containing bits, that is, no deterministic patterns are used. This is the best way I know of for preprocessing MMI generator bits prior to any subsequent operations (conversion algorithms, bias amplification, etc.).

This simple design can take input bits with as little as 0.1 bits of entropy per bit and provide acceptable output sequences for most MMI use. However, always use input bits with entropy as close to 1.0 as possible. Output sequences are stretched by taking a number of output bits while holding the input constant. Stretched sequences require higher entropy in the input bits: the more stretching, the better the input sequence must be.

The following schematic is for a hardware design that uses only 3 logic ICs.

U1 is a 9-bit parity generator (type 74HC280), where the odd parity output is equivalent to the result of XORing all the input bits. Six of the bits are used for feedback from the shift register and one is used as the raw data input. U2 and U3 are each 8-bit serial-in, parallel-out shift registers (type 74HC164) connected in series to make a 15-bit shift register (the 16th bit is not used). The output of the parity generator is the corrected output and also the input to the shift register. The 3 capacitors are 0.1uF bypass capacitors for the high-speed logic ICs. On the rising edge of the clock, the output bit – actually the previous input bit – is available for use by subsequent circuitry. In addition, the random generator starts making a new random bit, which must be finished before the next rising clock. It is possible to get the output from the current bit that was just initiated, but it takes a more complex clocking circuit and careful attention to timing. If the generator is running continuously at high speed, the additional complexity is not warranted. The 6 feedback taps are located at outputs 1, 7, 9, 11, 13 and 15 of the shift register. This configuration is optimum for a 15-bit register. This LFSR circuit will work up to tens of MHz; the actual rate depends on the supply voltage. Note, 74ACxxx type chips are about twice as fast if needed.

The LFSR function can be implemented in software using only a 16-bit word as the register, along with adding, shifting, masking and parity functions. In machine code it can be made quite fast, but it still operates in a bit-by-bit mode. I won’t take the time to describe the pseudo-code unless someone is actually going to use it in an application.
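
For anyone who does want to experiment in software, here is a minimal Python sketch of one possible bit-by-bit implementation. It assumes stage 1 is the most recently shifted-in bit of the register, and the stretch option reflects my reading of the stretching description above (holding the input bit constant while clocking out several output bits); treat both as assumptions rather than the exact hardware behavior.

def lfsr_correct(input_bits, stretch=1):
    TAPS = (1, 7, 9, 11, 13, 15)   # feedback taps on the 15-bit register
    reg = 0                        # register held in an int, stage 1 = least significant bit
    out = []
    for bit in input_bits:
        for _ in range(stretch):   # stretch > 1: several output bits per input bit
            fb = 0
            for t in TAPS:
                fb ^= (reg >> (t - 1)) & 1          # XOR (parity) of the six tap bits
            new_bit = bit ^ fb                      # parity of taps plus the raw input bit
            out.append(new_bit)                     # corrected output bit
            reg = ((reg << 1) | new_bit) & 0x7FFF   # shift in, keep 15 bits
    return out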

Is LFSR whitening used on PQ128MS output data?

The PQ128MS samples and collects a very large amount of entropy, which is combined to produce the output sequence. There is no post processing involved. This generator was designed as a source of true randomness, so efficiency of using the entropy was not a consideration.

The goal of an MMI generator is slightly different, where the maximum number of bits is desired, while the true entropy doesn’t have to be exactly 1.0 for each bit. There is little difference in the MMI effect between 0.9, 0.99 or 0.99999999 bits of entropy per bit, while the TRNG provides bits with entropy of 1.0 - epsilon, where epsilon may be 10^-100 – so close to 1.0 it is indistinguishable from perfect randomness.

I should add some explanation of how the described LFSR processing works. Its operation may be called IIR, or infinite impulse response. That means there is, at least theoretically, an effect on the output from the earliest time. Not infinity, obviously, but from the time the system was turned on. In practical terms, the effect on the output decreases approximately exponentially over time. The shorter the LFSR register, the shorter the time constant of influence. My MMI generators and all my RNGs use basic building blocks (entropy samplers) that operate at 128MHz. None of these devices currently use any LFSR components. If they did, the current bit being produced would be influenced by previous bits over a time period of less than one microsecond. After about 3 cycles through the LFSR the effect is insignificant. That equates to 45 previous bits, or about 350 nanoseconds. Most of the influence comes from the previous 15 bits, or only 116 nanoseconds. For practical purposes, that 116 ns can be considered a single output that is subsequently processed by bias amplification.

The higher the entropy of each input bit, the less influence earlier bits will have on the immediate bit. If the entropy is near 1.0, previous bits have essentially no effect. If the input entropy is as low as 0.1 bits, many of the 45 previous bits will have some effect.