Amazing new algorithm makes MMI practical now

I just discovered a new bias amplifier algorithm that will make MMI applications effective and practical now. It does something I didn’t think was possible. The amplification factor is a linear function of N (the number of bits used as input), compared with the previous best of the square root of N. That means the effect size increases in direct proportion to the number of input bits. If the effect size is 0.001 for each input bit (hit rate = 0.5005), using only 10 of those bits will result in an output bit with an effect size of 0.01 (HR = 0.505). With only 1000 bits, the effect size approaches 1.0. It doesn’t work exactly that way – the input cannot really be just 10 bits; this is meant as an example of how powerful it is. Note that previously the effect size could only approach 1.0 asymptotically, even theoretically, but this algorithm can produce outputs right up to 100%.
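For concreteness, here is a minimal sketch (my own illustration using the numbers in the example above, not the algorithm itself) comparing the claimed linear scaling of effect size with the previous square-root-of-N scaling. The mapping effect size = 2 × (hit rate − 0.5) and the cap at 1.0 are assumptions.

```python
import math

# Illustration of the scaling claim only, not the new algorithm.
# Assumed mapping: effect size es = 2 * (hit_rate - 0.5), capped at 1.0.

def hit_rate(es):
    return 0.5 + min(es, 1.0) / 2.0

es_per_bit = 0.001  # per-bit effect size (hit rate 0.5005)
for n in (10, 100, 1000):
    es_linear = es_per_bit * n            # claimed new scaling (linear in N)
    es_sqrt = es_per_bit * math.sqrt(n)   # previous best (about sqrt(N))
    print(n, round(hit_rate(es_linear), 4), round(hit_rate(es_sqrt), 4))
```

With these assumptions, 10 bits give HR = 0.505 under linear scaling versus about 0.5016 under square-root scaling, and 1000 bits reach the cap of 1.0 versus about 0.516.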

I have only been testing the new algo for a couple of days, so I have to continue to run every conceivable test and variation. Startling results require overwhelming evidence (and testing). This is very exciting, and my mind is still in awe at the possibilities.

Awesome! Will it be able to run on MED devices with previous versions of your bias amplification algorithm baked in, or will we need input bits from non-amplified sources?

It should work with any MMI sequence, with or without upstream bias amplification, but would likely replace the old version of bias amplification in the hardware (if I can get the necessary hardware support).

How does it work? Can we reproduce it?

I’m not ready to describe this yet. It has only been three days. I want to fully understand it and come up with implementations that can be coded in firmware (for FPGAs) or written easily in common programming languages. Then it will be reproducible.

We really want to hear at least the idea behind the algorithm. We want to reproduce it.

Did my ideas help at least a bit, btw?

I am working about 6 hours a day to develop a complete mathematical model and finally an easily implementable form. I have been at this too long to release incomplete or possibly incorrect information.

This is a huge breakthrough in the field of MMI. I plan to let everyone know in about two to three weeks when I am sure about it.

I always allow the possibility of discovering something unexpected or even amazing. You suggested looking at extending MMI bits “in a different way.” I thought about what you said and explored different ways to make MMI more responsive. To my surprise, I discovered a method that makes MMI amazingly more responsive.

This is exciting, we’re all keen to help out and learn more.

So, you used a mass of different scramblers to extend random bits and tried to find correlations in bias-amplified signals?

It’s a variation of bias amplification that takes fewer input bits. But, I am testing the algorithm, not real MMI signals. MMI cannot be simulated. If I were a programmer I would find a way to test with real MMI sequences. I know you want me to tell everything now, but I just don’t know everything yet.

What I have learned in this exercise is how important it is to have a generator that produces a larger effect size for each generated bit. So far, there is a strict limit on how many bits must be processed to reach a final output hit rate near 100%. This is true regardless of the processing method. More effective variations of bias amplification allow the generator rate to be much lower – millions or even hundreds of thousands of bits per second versus trillions.
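As a rough illustration of that limit (a sketch under my own assumptions, not the exact equation discussed later in the thread), the standard majority-vote approximation HR_out ≈ Φ(2·(p_in − 0.5)·√N) can be inverted to estimate how many input bits each output bit needs for a target hit rate:

```python
from statistics import NormalDist

# Assumption: fixed-N majority voting with HR_out ~ Phi(2*(p_in - 0.5)*sqrt(N)).
# Inverting gives the input bits required per output bit for a target hit rate.

def bits_needed(p_in, hr_target):
    z = NormalDist().inv_cdf(hr_target)
    return (z / (2.0 * (p_in - 0.5))) ** 2

# Example per-bit hit rates of 0.5005 and 0.500005, target output hit rate 99%
for p in (0.5005, 0.500005):
    print(p, int(bits_needed(p, 0.99)))
```

Under this approximation, a per-bit hit rate of 0.5005 needs roughly 5 million input bits per 99% output, while 0.500005 needs on the order of tens of billions, which is why the per-bit effect size of the generator matters so much.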

Is there any news? We are eager to see something about your breakthrough.

Let’s give him time.
Scott’s been doing this for, I think, almost two decades, so he knows when not to rush.
(But I concur on the eagerness ;-)

So far I have had to invent or develop a number of basic equations (no one has ever done this before, so there are no clues in the literature), write about 10 simulation programs, and run many hundreds of simulations. Believe me, I am eager to get through this development myself and will be happy to share whatever is actually useful.

I won’t describe the new bias amplification algorithm yet, because it is hard to implement and doesn’t overcome a fundamental limitation as I first thought it did. I still have to fully understand and test one more important aspect of the mathematics of bias amplification so I can provide correct information.

If a particular block of input (un-amplified) data has a statistical variation that gives more 0s than 1s, no type of bias amplification method will change the output to have more 1s than 0s. Bias amplification only compresses the information in the input sequence into a much shorter output sequence. I can now calculate exactly how probable it is to have more bits in the “right” or intended direction in the block of data used. A number of measurements of different data blocks will reveal the actual versus theoretical hit rate.
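Here is a minimal sketch of that block calculation (my own illustration, assuming independent bits with a fixed per-bit probability p of falling in the intended direction):

```python
from math import lgamma, log, exp

# Exact binomial probability that a block of n independent input bits contains
# strictly more bits in the intended direction than against it, given per-bit
# probability p. Terms are computed in log space to stay stable for large n.

def p_block_majority(n, p):
    logp, logq = log(p), log(1.0 - p)
    total = 0.0
    for k in range(n // 2 + 1, n + 1):
        log_term = (lgamma(n + 1) - lgamma(k + 1) - lgamma(n - k + 1)
                    + k * logp + (n - k) * logq)
        total += exp(log_term)
    return total

print(p_block_majority(10001, 0.5005))  # ~10k-bit block with p = 0.5005 -> about 0.54
```

No amplifier can push the output hit rate above this block probability, which is why repeated measurements over many blocks are needed to compare actual and theoretical hit rates.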

So, if I understand you correctly, the new algorithm can’t provide a significant improvement, as you first thought?

It allows the highest possible hit rate given the input data, plus it outputs many more bits than the original bias amplifier algorithm. I plan to describe it this week.

Implemented correctly and used with the MED100Kx8 MMI generator, I believe it can provide hit rates of 55 to 65%. Of course, the actual performance depends in large part on the skill level of the user.

There is no one piece of the whole that makes everything work as we want it to, but I now understand each part enough to outline an MMI system that will perform many practical tasks.

I want to give a brief update since this is taking longer than I expected. I have confirmed with high confidence that I have an exact equation for predicting the hit rate given the input probability of the bits, Pin, and the number of bits used, N. Every bias amplifier method, including Majority Voting, Bayes’ Updating, and all types related to Random Walk Bias Amplifiers, produces exactly the same amplification. It is very valuable to have an exact equation to predict the resulting hit rate when designing an MMI system. It can save a great deal of time in building and testing hardware, which may only work if the equation indicates it’s possible. The equation also gave me a theoretical best to try to surpass.
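For reference, a common closed-form estimate for the fixed-N case (a sketch under my own assumptions; the exact equation itself is not reproduced in this thread) is HR_out ≈ Φ(2·(Pin − 0.5)·√N):

```python
from statistics import NormalDist

# Standard normal approximation for a fixed-N majority-vote bias amplifier.
# An illustrative stand-in, not the exact equation described in the post above.

def predicted_hit_rate(p_in, n):
    return NormalDist().cdf(2.0 * (p_in - 0.5) * n ** 0.5)

print(round(predicted_hit_rate(0.5005, 100_000), 3))  # Pin = 0.5005, N = 100k -> ~0.624
```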

My simulation results were so uniform and exact that I thought this was not only the best possible result but the only possible one. But there was one other variation in approach I had not tried, because it is harder to model: not requiring the number of input bits, N, to be constant for each output. I was a little surprised to find on my first attempt that the resulting hit rate surpassed the theoretical value when I used the average number of bits per output as N in the equation. So it does seem possible to do better, but how much better is to be determined.

Now I have to test this idea with the whole range of bias amplification methods and find the design parameters that produce the best results.
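As an illustration of the variable-N idea described above, here is a toy simulation (my own sketch, not the author’s design) of a random-walk amplifier with absorbing boundaries, where the number of input bits consumed per output varies; its hit rate and average N can then be compared against a fixed-N prediction evaluated at that average:

```python
import random

# Toy variable-N bias amplifier: a random walk steps +1 with probability p_in
# and -1 otherwise, emitting one output bit when it first reaches +bound or
# -bound. The number of input bits consumed, n, differs from output to output.

def random_walk_output(p_in, bound, rng):
    pos, n = 0, 0
    while abs(pos) < bound:
        pos += 1 if rng.random() < p_in else -1
        n += 1
    return pos > 0, n  # (output in the intended direction, input bits used)

rng = random.Random(1)
p_in, bound, trials = 0.51, 50, 2000
results = [random_walk_output(p_in, bound, rng) for _ in range(trials)]
hit_rate = sum(hit for hit, _ in results) / trials
avg_n = sum(n for _, n in results) / trials
print(round(hit_rate, 3), round(avg_n, 1))  # compare with a fixed-N estimate at N = avg_n
```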

I just uploaded a paper to Google Drive: Bias Amplification Algorithms for MMI_110921.pdf

Please let me know if something is unclear or if you have any questions.

Is your new algorithm in this pdf?

The paper includes what I consider the best and most usable information and algorithms for MMI applications.

I discovered an algorithm that could have any level of gain. Variations of the algo can output as many bits as are put in if desired. I did not include an example because increasing the gain does not result in a higher hit rate. The practical hit rate is determined by fundamental statistical principles that cannot be “broken.”

The two curves shown in the figure on page 4 are the best that can be done with fixed and variable numbers of input bits. It is possible to make amplifier designs with limited variability of output generation times – a hybrid of the two configurations of bias amplifiers. The resulting hit rates fall between the two curves, as would be expected. It might even be possible to find a design with constant output generation times but an output probability equal to that of the variable-input-bits configuration. That would be the best of both configurations, and something I have not yet discovered.

Since I spent about 3 full months developing and testing bias amplifier models, I thought it was time to move on to other topics that are important to the growth of MMI technology. MMI systems consist of a number of important components, each of which must be made as well as we can. I am fairly confident I have described the best bias amplification methods and some simple algorithms for implementing them.

It seems that the MMI effect behaves somewhat like the observer effect: as soon as we notice anomalies in the data, they immediately converge back through the rubberband effect. That led me to think: what if we make several interconnected measurement processes that influence each other in such a way that a sharp convergence in one stream (its accumulated deviation drops to zero) causes an increase in the deviation of the conjugate streams? Then, when the statistics normalize in one stream, the total deviation across all streams increases, surpassing the system’s gain from the rubberband effect and making it inexpedient.

I’m not sure how to implement this algorithmically, though.