Developing the Meter Feeder

Discussing a tool for comparing the output of various MED devices.


I am so pleased to see this group formed and to know so many interested and motivated members are/will be joining in. Now we have an array of MED drives with a variety of processing methods and internal generation rates. Jamal has a large collection of these devices for development, and one of the first projects will be developing software that can compare two devices at the same time. This is a truly important step in improving the general responsivity (effect size) for more advanced MMI generators. Please feel free to ask any question about this topic, especially if you are interested in being part of this project.


I currently have V1 and V2 devices in my possession. I’ve got the initial MEDMeterFeeder C++ stub project I received running on my Mac, but am currently modifying it so it can actually read from multiple devices at once.

Initially I was just thinking of some basic raw text output from whatever devices it detected. Maybe later on we could graph it.

I wanted to ask if there was some format/type of output you had in mind to facilitate the comparison we’re aiming for?


Here is a description I have shared before:
From much experience I learned it is always necessary to test two generators effectively at the same time. There is too much variability of concentration and effort, as well as user bias, to test one at a time and compare later. I also learned feedback must be given for the active generator, even as the active generator is being switched in a random way that cannot be tracked by the user. From the user’s point of view, it’s just a test to demonstrate the maximum MMI effect she/he can produce.

Thus, the testing program selects the active generator (an equal number of trials for each is best) and keeps statistics for each generator that are to be displayed after the test sequence is completed. The user will inject a bias towards the “favored” generator if results are displayed for each generator separately during the series, so individual generator results must wait until the end of the series. MMI presents dynamics not normally encountered in traditional or “linear” systems.

There are two modes of operation:

  1. User initiated - each trial is initiated by a key press or mouse click. Trials should take about 0.2-0.25 seconds each. I used 0.2 seconds because that is the amount of time the human brain takes to recognize it is having a thought. The series ends either when a preselected number of trials has been reached or when the user chooses to end the test. Eventually everyone becomes tired and the hit rate begins to decline. It’s a good idea to end at or before that happens.

  2. Continuous - a series of trials, for example, 200, is initiated by the user. After that the trials are generated automatically at a preselected rate (for example 2-5 per second) until the series is completed.

The MED generators are all designed to output roughly 100Kbps so they are most easily used/compared. (Note, there are a few generators around that are simply 4Mbps without any bias amplification. These were supplied early on to quickly get something out to get people started. A full description of the types of generators and how to identify each by serial number will be provided shortly.) Some type of processing must be used between the generator output and the tester display. The simplest processing is majority voting, though bias amplification may also be used. If bias amplification is used, the last 5 bits (at least) must be majority voted to average out the unequal time intervals produced by the bias amplifier. If majority voting is used, take 2499 bytes (19,992 bits), plus 1 LSB from one more byte to make the count uneven (19,993 bits). This ensures the MV never has a tie. Note, there are a few ways of making sure there is never a tie. One efficient way to count 1s in words is to use an 8-bit lookup table: the word is the relative address of the table and the data is the number of 1s in the word. Just accumulate the counts of 1s until the end of the trial is reached. If the count is > 1/2 the number of bits used in the MV, output 1, else 0.
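As a concreteness check, the majority-voting step described above can be sketched in Python. The device read is mocked with a pseudorandom source here; `majority_vote`, `fake_read`, and `ONES` are illustrative names, not part of MeterFeeder:

```python
import random

# Build the 256-entry lookup table: the byte value is the index,
# the entry is the number of 1 bits in that byte.
ONES = [bin(b).count("1") for b in range(256)]

def majority_vote(read_bytes):
    """One MV trial as described above: 2499 full bytes (19,992 bits)
    plus the LSB of one extra byte, for an odd 19,993 bits so there is
    never a tie. `read_bytes` is any callable returning n raw bytes."""
    data = read_bytes(2500)
    count = sum(ONES[b] for b in data[:2499])  # accumulate 1s via the table
    count += data[2499] & 1                    # LSB of the 2500th byte
    return 1 if count > 19993 // 2 else 0

# Stand-in for the device (a real implementation reads from the MED hardware).
rng = random.Random(1)
fake_read = lambda n: bytes(rng.getrandbits(8) for _ in range(n))
bit = majority_vote(fake_read)
```

With unbiased input the output bit is 0 or 1 with equal probability; any bias in the source is amplified toward a deterministic output.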

Hits are to be displayed on the screen starting at the left center of the screen, and advancing toward the right. The vertical scale is the cumulative number of 1s (hits). If the intention is for “low” or more 0s to be produced, the scale will simply show a decreasing line (assuming a majority of hits or 0s). The horizontal and vertical distances should be scaled so the results more or less fill the screen during a series of trials.

Statistics to be displayed at the end of a series should include the hit rate, the effect size (2·HR − 1), and the probability of the null hypothesis test. That probability is the Binomial probability P(X ≥ x) for the given number of trials, with the number of successes x = total number of 1s (or 0s for low intention) and probability of success 0.5. The Binomial probability can be approximated by the Normal distribution when the number of trials is high; 200 trials comes fairly close. Then p is about equal to 1 − CDF[NormalDistribution[0, 1], z], where the z-score is (2 × number of 1s − N)/Sqrt[N] and N is the number of trials.
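The end-of-series statistics can be sketched in Python; `series_stats` and `exact_tail` are illustrative names, and the exact Binomial tail is included only to show how close the normal approximation gets at 200 trials:

```python
import math

def series_stats(n_trials, n_hits):
    """Hit rate, effect size (2*HR - 1), and the one-sided p-value
    P(X >= hits) under the null p = 0.5, via the normal approximation."""
    hr = n_hits / n_trials
    es = 2 * hr - 1
    z = (2 * n_hits - n_trials) / math.sqrt(n_trials)
    p = 0.5 * math.erfc(z / math.sqrt(2))  # 1 - Phi(z)
    return hr, es, p

def exact_tail(n, x):
    """Exact Binomial tail P(X >= x) at p = 0.5, for comparison."""
    return sum(math.comb(n, k) for k in range(x, n + 1)) / 2 ** n

hr, es, p = series_stats(200, 115)  # e.g. 115 hits in 200 trials
```

For 115 hits in 200 trials the approximation and the exact tail agree to within a few thousandths, consistent with the remark that 200 trials comes fairly close.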

There are several ways to randomize the active generator selection:
The easiest one is to alternate generator selection on each trial, but randomly pick which generator goes first. Even better, randomize the generator selection throughout the series (with equal numbers of each). This latter approach is a little trickier when the number of trials is not fixed in advance.
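For a fixed number of trials, the “equal numbers of each, random order” scheme can be sketched as below (`generator_schedule` is an illustrative name, not part of MeterFeeder):

```python
import random

def generator_schedule(n_trials, seed=None):
    """Balanced random schedule for two generators: each appears in
    exactly half the trials, in an order the user cannot track.
    n_trials must be even."""
    rng = random.Random(seed)
    order = [0] * (n_trials // 2) + [1] * (n_trials // 2)
    rng.shuffle(order)
    return order

sched = generator_schedule(200, seed=42)
```

When the series length is open-ended, the simpler alternate-with-random-start rule avoids having to commit to a schedule up front.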


Thank you very much for that detailed response. That gives me a lot to continue with.

Not that it’s needed right now, but at some point I’d like to build the description of the generators and how to identify them by serial number into the application. That way anybody with any kind of MED device can just plug it in and the software will be able to accurately identify what hardware the trials are based on and display that visually.

Progress may be slow, but I’ll post updates here as I go.

Here is a list of the various models of MED devices. They may be provided in a black or a translucent blue enclosure about the size of a thumb drive. They are all designed to output near 100 KHz for easy comparison, except the 4 MHz random generators I sent out originally (without labels) and the MED1Kx3.

MED100K S/N QWR4Axxx Internal 128 MHz, bias amplified 36x, output rate 98.765 KHz

MED100Kx3 S/N QWR4Bxxx Internal 384 MHz, bias amplified 62x, output rate 99.896 KHz

MED100Kx4 S/N QWR4Dxxx Internal 512 MHz, bias amplified 72x, output rate 98.765 KHz

MED100Kx8 S/N QWR4Exxx Internal 1.024 GHz, bias amplified 101x, output rate 100.382 KHz

MED100KP S/N QWR4Pxxx Purely Pseudorandom 100KHz output. Baseline generator – expected to be least responsive.

MED100KR S/N QWR4Rxxx Internal 128 MHz, no bias amplification, just random output with no deterministic postprocessing, output rate 100 KHz

MED100KX S/N QWR4Xxxx Internal 384 MHz, no bias amplification, special XOr processing – experimental, output rate 100 KHz

PQ4000KM S/N QWR4Mxxx Internal 128 MHz, no bias amplification, a random generator with no deterministic postprocessing, output rate 4 MHz.

The distinguishing feature is the 5th digit of the serial number, which is a unique alpha character for each model type. The serial numbers can be read by a program since they are used to register the device to be read by the USB interface. The last 3 digits of the S/N are just sequential numbers from 1 to 999 to make each one unique. QWR4 is part of the S/N that is recognized by the interface program so it doesn’t connect to some other device that uses FTDI USB interface chips.

To complete this list, there is a Model MED1Kx3. I believe the S/N is QWR4Cxxx. Internal rate 384 MHz, bias amplified 320x, output rate 999.0 Hz. This is intended to compare with other generators used in the Global Consciousness Project (GCP), which output 1 Kbps. So far only one person has this generator.
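Based on the serial-number scheme above, model identification could be sketched like this (the `MODEL_BY_LETTER` dict and `identify_model` function are illustrative, not part of MeterFeeder; the actual serial is read through the FTDI USB registration as described):

```python
# Map the distinguishing 5th serial-number character to the model name,
# following the list above (C per the MED1Kx3 note).
MODEL_BY_LETTER = {
    "A": "MED100K",
    "B": "MED100Kx3",
    "C": "MED1Kx3",
    "D": "MED100Kx4",
    "E": "MED100Kx8",
    "M": "PQ4000KM",
    "P": "MED100KP",
    "R": "MED100KR",
    "X": "MED100KX",
}

def identify_model(serial):
    """Return the model name for a serial like 'QWR4A003', or None if
    the QWR4 prefix or the model letter is unrecognized."""
    if len(serial) == 8 and serial.startswith("QWR4"):
        return MODEL_BY_LETTER.get(serial[4])
    return None

model = identify_model("QWR4A003")  # the 5th character selects the model
```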

I’ve released an alpha version of the C++ MeterFeeder library with a Python wrapper that graphs the output of 2 devices.
Info and code is available at https://github.com/vfp2/MeterFeeder/

I should first emphasize I am not a programmer (except in Mathematica), though I am quite familiar with general principles of programming. I would like to try your program and provide meaningful feedback, but I think I would need a version wrapped in an installer that works on Windows 7. There are installer-maker programs that take care of most details of creating an installation version, such as the one shown at https://www.actualinstaller.com/. Being a skilled programmer yourself, you may be very familiar with these installer makers. Some of them claim to be free, but I am always a little skeptical.

It would be of great help if you would provide your program in an installable form. If necessary, I am willing to pay for a commercial version of an installer maker if the “free” versions are not adequate or too much trouble to use. (Please contact me by direct email if this is the case.)

I’ll try and get something wrapped up into an installer for everyone.

It turns out the implementation needs a bit more optimization first as it’s slow (it was just pulling one byte at a time).

What is the maximum bytes FT_Read can read in per call?

After a bit of trial and error it seems to be 1696 bytes before my program segfaults (signal 11).

I don’t know if there is a limit on the number of bytes that can be called at once. Data will continue to be output until the requested number of bytes has been delivered, unless there is a timeout due to an excessively large request. The interface was designed for considerably faster generation rates of random number generators, the slowest of which is 4Mbps and up to 128Mbps. At 100Kbps everything takes much longer.

Taking one byte at a time will definitely slow things down. I can suggest 256 or 512 bytes per call, taking about 20 to 40ms to generate at 100Kbps. The internal buffer in the interface is a circular FIFO and it drops off the oldest bytes when it is full and new data comes in. New data is automatically sent in as it is generated, which is a continuous process. This buffer should perhaps be cleared at the beginning of a trial to ensure the data is most recent.

At 1696 I was getting much closer to the expected speed (~99.5 Kbps). For now I’ve set it to 512 but haven’t measured the timing yet.

I’ve also been working on my first practical implementation of MeterFeeder, albeit just using one device (MED100K S/N QWR4A003 - what I believe is referred to as V2) at the moment. I’m running a version of PacMan with his movements controlled by the device’s output: every odd iteration of the loop controls up/down movement, and every even one controls left/right. For now I’ve disabled the ghosts so we can focus on just experimenting with the effect of intention on PacMan’s movement. I personally find this a lot more motivating and fun than trying to roll a ball or move a graph, but I’m still disheartened that I haven’t had even a close moment of thinking I’ve had any effect. Then again, I’m far from confident my implementation is as it should be.

I’m starting my first attempt at an application-side implementation of the bias algorithm. I still have to check the first two ifs, which are based on the “Note” part of Step 5) of the Bias Amplification algorithm outlined in your paper. Currently PacMan is running this in MeterFeeder with ampFactor set to 12 (just an arbitrary number I chose for now).

	int j = 0;
	for (j = 0; j < 512; j++) {
		counter += lookup(buffer[j]);
		if (counter > (ampFactor - 1 - 8) && counter < (ampFactor - 1)) {
			return buffer[j] << 7;
		} else if (counter > (1 - ampFactor) && counter < (1 - ampFactor + 8)) {
			return buffer[j] << 7;
		} else if (counter > (ampFactor - 1)) {
			counter = 0;
			return 1;
		} else if (counter < (1 - ampFactor)) {
			counter = 0;
			return -1;
		}
	}
	return 0; // do nothing

Practically speaking because of the delay (~2.3 secs when manually timing the lag between the local app and the stream) it’s far from ideal for anybody who is remote, but I’m streaming the game running live on Twitch (a gamers’ YouTube-like platform for streaming themselves playing games) so you can at least get an idea.

Thanks for providing a testable app. I can only vaguely follow the code so I could easily miss something important. It’s not clear to me how you parse the bytes into bits that would be used as input to the bias amplifier, which is a bit-wise operation. The counter should increment and/or decrement 8 times per byte unless the count reaches a bound.
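For reference, a minimal bit-by-bit version of the core random walk (the counter moving once per bit, as the note above describes) might look like this in Python. The function name and `bound` parameter are illustrative, and this sketch deliberately omits the byte-at-a-time shortcut:

```python
def bias_amplify_bits(byte_stream, bound=12):
    """Bit-wise bias amplification: walk the counter +1 per 1-bit and
    -1 per 0-bit, MSB first, so the counter moves 8 times per byte.
    Emit +1/-1 each time a bound is reached, then reset the counter.
    Returns the list of amplified outputs."""
    outputs = []
    counter = 0
    for byte in byte_stream:
        for shift in range(7, -1, -1):        # 8 counter moves per byte
            counter += 1 if (byte >> shift) & 1 else -1
            if counter >= bound:
                outputs.append(1)
                counter = 0
            elif counter <= -bound:
                outputs.append(-1)
                counter = 0
    return outputs

# All-1s input must hit the +12 bound every 12 bits: 24 bytes = 192 bits
# should yield exactly 16 outputs of +1.
outs = bias_amplify_bits([0xFF] * 24)
```

Because the counter is inspected after every bit, the bound can never be overshot, which is exactly the issue the byte-wise version has to handle near the bounds.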

Sorry, I should’ve explained the code a bit. It’s all inside a function that the game calls regularly: if the returned value is 1, PacMan moves left or up; if it’s -1, he moves right or down; and he does nothing if it’s 0.

buffer is an array and holds 512 bytes read from the device.
Further descriptions are in-line as comments:

int j = 0; int counter = 0; // Step 2)

// Loop over every byte in the 512 byte buffer
for (j = 0; j < 512; j++) {

	// lookup() is a function that implements the following from the white paper's Bias
	// Amplification description in the Appendix:
	// Step 1) a) & b) The data at each address is the number of 1s minus the number
	// of 0s in the input word; -8 ≤ data ≤ 8. Example, the input word is 10111010, the
	// data is +2. The order of 1s and 0s does not matter. The first address and data
	// are: 00000000, -8, and the last entry in the table is: 11111111, 8.
	//
	// This line below is finding the number of 1s minus the number of 0s in the current
	// byte of the loop and adds it to the counter. (Steps 3 & 4)
	counter += lookup(buffer[j]);

	// The following lines implement the "Note:" section from Step 5):
	// Note: strictly speaking, when the counter value is within 8 counts of either bound,
	// the process should shift to bit-by-bit processing (described above) until the bound
	// is reached or the walk steps away from the bound more than 8 counts. That is
	// because the excursion of the counter depends on the sequence of bits in the
	// word.
	// (I need to review these lines a bit more to make sure they're right)
	if (counter > (ampFactor - 1 - 8) && counter < (ampFactor - 1)) {
		return buffer[j] << 7;
	} else if (counter > (1 - ampFactor) && counter < (1 - ampFactor + 8)) {
		return buffer[j] << 7;

	// Step 5)
	} else if (counter > (ampFactor - 1)) {
		counter = 0;
		return 1;
	} else if (counter < (1 - ampFactor)) {
		counter = 0;
		return -1;
	}
}

return 0; // do nothing

Am I implementing it correctly?

Here’s a basic Python program for Windows containing the random walk, with single-trial and sub-trial output, which might help.

Once I’ve got it cleaned up a bit and have a Linux/Mac version, I’ll update this post.

RandomWalk.py (9.1 KB)

When making new code I like to make a test program or simulator to see if the output is exactly as expected. Bias amplification has a few testable properties:

  1. When no bias is present in the input data (just random bits), the average number of bits required to reach a bound is the bound squared. In your example, that is 12 squared, or 144. Input a large number of test sequences from a random number generator and determine the average number of bits until the bound is reached. You can usually use the built-in random integer call, or threshold at 0.5 to make binary bits if only a [0, 1) float is available. To get an average result with 1% accuracy takes about 10,000 tries.
  2. The bias amplifier increases the bias by a calculable amount. To test this result, make input bits with a known bias and average the output bias over a large number of tries. The easiest way (though not a perfect way) to make biased bits is to take the output bytes from a random number generator and change the LSB of every 4th word to a “1.” The resulting bias, or probability of a 1, is the count of all 1s divided by the total number of bits. The theoretical bias is 1/64 + 0.5 = 0.515625. It’s a good idea to test about a million bits (2 to the 17th power bytes, or 2 to the 20th power bits). After running these biased bits through the bias amplifier with a bound at ±12, the output bias will be about 0.679. The average number of steps to a bound is 137.65. Note, as the bias increases, the number of steps to a bound decreases until it reaches the minimum of the 3rd test. The exact equations are found in Advances in Mind-Matter Interaction Technology: Is 100 Percent Effect Size Possible? Page 2: equation 1a gives the average number of bits to produce an output and equation 2 gives the output probability after bias amplification. https://coreinvention.com/files/papers/Advances-in-Mind-Machine-Interaction.pdf
  3. When the input bits are all 0s or all 1s, the number of steps to a bound will be exactly 12 since every bit decrements/increments the counter. Only a few trials are required to test this trivial result.
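The first two properties can be checked with a small Monte Carlo simulation of the counter walk itself (the third is trivial by inspection). This is a sketch under the stated assumptions, with `walk_to_bound` as an illustrative name; the expected values (144 steps unbiased, ~0.679 output bias and ~137.65 steps at input bias 0.515625) come from the description above:

```python
import random

def walk_to_bound(rng, bound=12, bit_p=0.5):
    """One random walk: step +1 with probability bit_p, else -1, until
    a bound is hit. Returns (steps taken, True if +bound was hit)."""
    counter = steps = 0
    while abs(counter) < bound:
        counter += 1 if rng.random() < bit_p else -1
        steps += 1
    return steps, counter > 0

rng = random.Random(2024)
N = 10_000

# Property 1: unbiased input -> average steps to a bound ~ 12**2 = 144.
avg_steps = sum(walk_to_bound(rng)[0] for _ in range(N)) / N

# Property 2: input biased to 0.515625 -> output bias ~0.679 and
# average steps ~137.65 (equations 1a and 2 in the cited paper).
biased = [walk_to_bound(rng, bit_p=0.515625) for _ in range(N)]
out_bias = sum(hit for _, hit in biased) / N
avg_biased_steps = sum(s for s, _ in biased) / N
```

With 10,000 tries the averages land within roughly 1% of the theoretical values, as the 1%-accuracy remark in point 1 suggests.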

After further thought, I realized if the data is not handled exactly right, the method of making every 32nd bit a 1 may cause some systematic error in the bias-amplified probability. That is because of the position of the changed bit versus the beginning and end of each new bias amplification test. If the word selected to have its LSB set to 1 is independent of the start of the test, the results will be closer. That is, the preparation of biased data and its consumption in the test should be independent processes. Ideally, the bits should be biased randomly so there is no correlation with the start and end of any test. One way is to prepare a large number of bits (for example, 2 to the 20th or 1 Mbit) with every 32nd bit set to 1, and then randomize those bits and format them back into bytes.

Biased bits can be generated one at a time from a uniform [0, 1) random number with a constant added (producing a sum), and thresholding at 0.5 to make each sum into a bit (thresholding algorithm: If sum < 0.5 output 0, else output 1). The bits still have to be formatted into bytes for use by the bias amplification algorithm. The constant to be added to each random number before thresholding at 0.5 is exactly 1/64 = 0.015625. A very large number of the resulting bits will average to a probability of 0.515625. This last approach is effectively exact and data can be generated as it is used without making a large pool first. It’s also easy to make any bias amount desired: just select the constant to be added to the random number and (0.5 + bias amount) will be the probability of the biased bits. Finally, threshold uniform random numbers directly at (0.5 - bias amount) to produce biased random bits. (Thresholding: If uniform random number < (0.5 - bias amount) output 0, else output 1) 0.5 - 1/64 = 0.484375 is the direct threshold amount to give bits biased at 0.515625.
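The direct-thresholding method above can be sketched as follows, including the packing into bytes that the bias amplification algorithm consumes (`biased_bit` and `biased_bytes` are illustrative names):

```python
import random

def biased_bit(rng, bias=1 / 64):
    """One biased bit by direct thresholding of a uniform [0, 1) number:
    output 0 if u < 0.5 - bias, else 1, so P(1) = 0.5 + bias."""
    return 0 if rng.random() < 0.5 - bias else 1

def biased_bytes(rng, n, bias=1 / 64):
    """Pack biased bits MSB-first into n bytes for the amplifier."""
    out = bytearray()
    for _ in range(n):
        byte = 0
        for _ in range(8):
            byte = (byte << 1) | biased_bit(rng, bias)
        out.append(byte)
    return bytes(out)

rng = random.Random(7)
data = biased_bytes(rng, 2 ** 17)           # 2**20 bits, ~1 Mbit
ones = sum(bin(b).count("1") for b in data)
p_hat = ones / (8 * len(data))              # should be near 0.515625
```

Because each bit is drawn independently, the bias has no correlation with trial boundaries, which avoids the systematic error discussed for the every-32nd-bit method.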

I should have mentioned, when the game flow is controlled by mental influence, feedback must be received/seen before the next move of the game piece occurs. Otherwise the player can’t know what mental effort is actually causing the latest move to occur. If the game piece moves twice a second and feedback occurs over two seconds after the effort, the game piece is 4-5 moves behind the effort. That makes it almost impossible for the player to “learn” what works and what doesn’t. This analysis depends on my understanding of the relative timing between effort and game piece movement. Please let me know if I am mistaken.

After trying the game, I see the game piece is in constant motion, meaning it uses continuous versus initiated trials. The processing for this is a little more complicated, but I can provide a more meaningful example of how it may work. That will also minimize the effects of latency between MMI generation and game flow because the results will be averaged/low-pass filtered for several moves. This modifies my previous message about the timing of feedback versus MMI generation. When I finish my analysis I will edit previous messages to avoid confusion.

I will add a detailed description when I have more time, perhaps tomorrow. Ensuring the bias amplification is working correctly is still necessary, or majority voting (MV) could be used instead, which is simpler though not quite as effective. Even with MV, testing the output bias versus input bias is always a good verification that the algorithm is correct.