Article (BiEntropy – The Approximate Entropy of a Finite Binary String) - https://arxiv.org/ftp/arxiv/papers/1305/1305.0954.pdf

Some code - GitHub - winternewt/bientropy

So, here’s a new entropy bias amplifier to try. The idea is somewhat similar to the random walker, but I hope it’ll be more sensitive.

http://v2-api-json.fatumproject.com/amplirand?out_length=10&l_bits=13&precision=2

http://v4-api-json.fatumproject.com/amplirand?out_length=10&l_bits=13&precision=2

The random walker takes K bits as input and returns 1 or 0 (a tie is possible if K is even).

Takes inputs:

ampl_bits - K, i.e. the number of input bits

out_length - length of output in bytes

&print_raw=1 - option to print the raw unprocessed entropy; it’s printed in eight variants, each shifted by one bit (since the byte framing of a random bit-stream is actually arbitrary).

This one produces 3 outputs:

data_out - Hamming weight (see Wikipedia): similar to the random walker, but it doesn’t take the order of bits into account, only the number of 1s and 0s; for even K the last bit is used as a tie-breaker.
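The Hamming-weight reduction can be sketched like this (the exact tie-breaking rule is my reading of the description above, not the API’s actual code):

```python
def hamming_decide(bits):
    """Reduce a list of K bits (0/1) to one output bit by Hamming weight.

    Majority of 1s -> 1, majority of 0s -> 0; a tie is only possible
    for even K, in which case the last input bit breaks the tie
    (assumed from the description above).
    """
    ones = sum(bits)             # Hamming weight: count of 1s
    zeros = len(bits) - ones
    if ones > zeros:
        return 1
    if ones < zeros:
        return 0
    return bits[-1]              # tie-breaker for even K
```

Unlike the random walker, any permutation of the same input bits gives the same answer here.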

data_bd_out - this one is based on the binary derivative notion - https://journals.sagepub.com/doi/abs/10.1177/003754978905300307: XOR the bit sequence with itself shifted left by one bit, reducing the length by one, and repeat until only one bit is left. It has the nice observed property that for any number of input bits K, when all variants are tested, it returns equal numbers of 1s and 0s, with no ties.
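The repeated binary-derivative reduction is a few lines (a minimal sketch of the rule described above, not the API’s implementation):

```python
def binary_derivative_decide(bits):
    """Reduce a bit sequence to a single bit by taking successive
    binary derivatives: XOR each bit with its right neighbour, which
    shortens the sequence by one; repeat until one bit remains."""
    seq = list(bits)
    while len(seq) > 1:
        seq = [a ^ b for a, b in zip(seq, seq[1:])]  # next derivative
    return seq[0]
```

For example, 0101 reduces as 0101 → 111 → 00 → 0, while 100 reduces as 100 → 10 → 1.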

data_weights - this one is what it’s all about: TBiEntropy weights ([1305.0954] BiEntropy - The Approximate Entropy of a Finite Binary String), which let us assess bit sequences of length K by, well, how ordered they are.

Sequences like 11111111 and 00000000 are perfectly ordered and trigger the random walker. But sequences like 01010101 and 10101010 are well ordered too, yet to the random walker they are “transparent”, because their numbers of 0s and 1s are nearly equal. The TBiEntropy weight function, based on the TBiEntropy mean value, scores the input bit sequence independently of the result (0 or 1) and assigns a “weight” to each output bit:

[0.0…1.0] - input sequence is unordered (~0.8–0.9 is the perfect-chaos zone)

[1.0…2.0] - ordered input with low local entropy.
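For reference, the raw TBiEn value from the paper can be computed as below: the Shannon entropy of each successive binary derivative, weighted by log2(k + 2) and normalised. Note this is the raw score in [0, 1] (near 1 = disordered, near 0 = ordered); the exact mapping to the [0.0…2.0] weight scale above isn’t spelled out here, so I don’t attempt it.

```python
from math import log2

def tbien(bits):
    """TBiEn per arXiv:1305.0954: for the k-th binary derivative
    (k = 0 is the string itself), take the Shannon entropy of its
    proportion of 1s, weight it by log2(k + 2), and normalise by
    the sum of the weights."""
    seq = list(bits)
    num = den = 0.0
    k = 0
    while len(seq) > 1:
        p = sum(seq) / len(seq)  # proportion of 1s in k-th derivative
        h = 0.0 if p in (0.0, 1.0) else -p * log2(p) - (1 - p) * log2(1 - p)
        w = log2(k + 2)
        num += h * w
        den += w
        seq = [a ^ b for a, b in zip(seq, seq[1:])]  # next derivative
        k += 1
    return num / den
```

On 01010101 the first derivative is 1111111 and everything after that is constant, so the score is tiny: the function sees right through the alternating pattern that fools a plain bit count.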

Having these weights assigned to each bit, we can also average them over the output bit sequence (weight per byte, or per nibble, i.e. per hex digit) with the same properties: the weight function is based on the mean value, so statistically such an average converges to 1.0.

That is what we have in the output: weight values for each nibble. Based on these, we should be able to discriminate MMI signal from noise independently of the output values. This TBiEntropy function is O(n^2) by default, as CPU-hungry as any other entropy analysis, so making it run on the fly took a while. I finally got it to ~30–100 MB/s on a single thread.

Each of the outputs consists of K arrays, “channels”, each shifted by one bit. That is because we operate on bytes and the results of these functions depend on bit order. But as I said, how the infinite random bitstream is framed into bytes is arbitrary, so if we scan all variants of the input shifted by one bit, we have a higher chance of capturing the signal. To grasp this better, think of tuning a radio to the right frequency: these channels are not independent outputs, they’re correlated, and the MMI “signal” can most probably be caught on multiple adjacent ones, but the “bands” might be different each time.
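The channel idea can be illustrated like this (a sketch of the re-framing; the API’s actual channel layout and bit order are assumptions on my part):

```python
def bit_channels(data, k=8):
    """Re-frame a byte stream into k 'channels': channel s drops the
    first s bits of the bitstream before regrouping into bytes, so
    channel 0 is the original alignment and each subsequent channel
    is the same stream shifted by one extra bit."""
    bits = []
    for byte in data:
        bits.extend((byte >> i) & 1 for i in range(7, -1, -1))  # MSB first
    channels = []
    for s in range(k):
        shifted = bits[s:]
        usable = len(shifted) - len(shifted) % 8        # whole bytes only
        channels.append(bytes(
            int("".join(map(str, shifted[i:i + 8])), 2)
            for i in range(0, usable, 8)
        ))
    return channels
```

Running any of the three reduction functions over every channel is what lets a bit-order-sensitive score “scan the dial” instead of being stuck on one arbitrary byte boundary.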

(by Newton Winter)