So, one of the popular statistical tests for RNGs is run-length analysis. The run-length approach decomposes a bitstream into chunks of consecutive zeros and ones, and the probability distribution of the chunk lengths is often used to test the quality of entropy sources.

I’m investigating the possibility of using the entropy of that distribution to detect influences on an RNG. The first problem is scaling the entropy: the Shannon formula gives entropy per chunk, not per bit. In theory that can be fixed by multiplying by the number of chunks and dividing by the length in bits.

So, the entropy of the RLE should equal the entropy of the original sequence. I’ve tried it, and it works nicely, and it scales nicely. For a completely random sequence the run length k is geometric, P(k) = 2^(−k), so the entropy is

H = −Σ_{k≥1} 2^(−k) · log2(2^(−k)) = Σ_{k≥1} k·2^(−k) = 2 bit/chunk,

and the mean chunk length is also Σ_{k≥1} k·2^(−k) = 2, which gives 2/2 = 1 bit/bit.
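A minimal sketch of that estimator (function names are mine; a rough illustration, not a calibrated test):

```python
import math
import random

def run_lengths(bits):
    """Split a bit sequence into maximal runs of equal bits; return run lengths."""
    runs, count = [], 1
    for prev, cur in zip(bits, bits[1:]):
        if cur == prev:
            count += 1
        else:
            runs.append(count)
            count = 1
    runs.append(count)
    return runs

def entropy_per_bit(bits):
    """Shannon entropy of the empirical run-length distribution (bit/chunk),
    rescaled to bit/bit by dividing by the mean chunk length."""
    runs = run_lengths(bits)
    freq = {}
    for r in runs:
        freq[r] = freq.get(r, 0) + 1
    n = len(runs)
    h_chunk = -sum(c / n * math.log2(c / n) for c in freq.values())
    mean_len = len(bits) / n  # sum of run lengths == number of bits
    return h_chunk / mean_len

random.seed(0)
bits = [random.getrandbits(1) for _ in range(100_000)]
print(entropy_per_bit(bits))  # close to 1 bit/bit for a decent PRNG
```

On a long sample from a good PRNG this lands near 1 bit/bit, matching the 2-bit/chunk-over-mean-length-2 argument above.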

Also, it performs well at estimating randomness and can rank sequences: ‘aaa’*1000 < ‘bitsbitsbits’*1000 < faceroll on keyboard < texts from that forum < random taps on keyboard < numpy prng < qrng. Sounds sensible to me.
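To illustrate the low end of that ranking (my own sketch, with LSB-first byte unpacking as an assumption about how text becomes bits), a periodic byte string scores well below PRNG output:

```python
import math
import random

def to_bits(data):
    """Unpack bytes into a flat list of bits, LSB first."""
    return [(b >> i) & 1 for b in data for i in range(8)]

def entropy_per_bit(bits):
    """Run-length entropy in bit/chunk divided by mean chunk length -> bit/bit."""
    runs, count = [], 1
    for prev, cur in zip(bits, bits[1:]):
        if cur == prev:
            count += 1
        else:
            runs.append(count)
            count = 1
    runs.append(count)
    freq = {}
    for r in runs:
        freq[r] = freq.get(r, 0) + 1
    n = len(runs)
    h_chunk = -sum(c / n * math.log2(c / n) for c in freq.values())
    return h_chunk / (len(bits) / n)

random.seed(0)
h_const = entropy_per_bit(to_bits(b'aaa' * 1000))
h_rand = entropy_per_bit([random.getrandbits(1) for _ in range(24_000)])
print(h_const, h_rand)  # the periodic string scores well below the PRNG
```

The byte 'a' yields a repeating run-length cycle (1, 4, 2, 1), so its empirical distribution has only three symbols and the per-bit entropy stays well under 1.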

However, that approach doesn’t take into account distribution shifts between runs of 1s and runs of 0s of the same length, which is quite unfortunate, because one of the most well-developed methods for MMI amplification, the random walker, obviously detects precisely that property. That’s where things go wild. The RLE encoding where 1-runs and 0-runs of the same length are distinct alphabet symbols is a redundant encoding: its entropy for a completely random sequence is 3 bit/chunk. A possible way to deal with the redundancy is subtracting the mutual information, but that doesn’t help; it actually makes things worse, because it ranks random sources more poorly. Also, even if I normalize (divide the result by the analytically calculated value for a completely random sequence), it sometimes gives values greater than 1 bit/bit.
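The 3 bit/chunk figure for the redundant encoding can be checked empirically. In this sketch (my own naming) each chunk is the symbol (bit value, run length); for a random source the value contributes 1 bit and the geometric length 2 bits, so the empirical per-chunk entropy lands near 3:

```python
import math
import random

def signed_run_entropy_per_chunk(bits):
    """Entropy per chunk when 1-runs and 0-runs of the same length are
    distinct alphabet symbols (value, length)."""
    runs = []
    cur, count = bits[0], 1
    for b in bits[1:]:
        if b == cur:
            count += 1
        else:
            runs.append((cur, count))
            cur, count = b, 1
    runs.append((cur, count))
    freq = {}
    for r in runs:
        freq[r] = freq.get(r, 0) + 1
    n = len(runs)
    return -sum(c / n * math.log2(c / n) for c in freq.values())

random.seed(1)
bits = [random.getrandbits(1) for _ in range(100_000)]
print(signed_run_entropy_per_chunk(bits))  # near 3 bit/chunk for a random source
```

The extra bit over the plain run-length encoding is exactly the redundancy: once one chunk’s value is known, all following values alternate deterministically.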

For now, all my attempts to come up with something better than the first approach have performed worse at randomness ranking.

I’d like to hear thoughts and advice on this topic.