Developing the Meter Feeder

With one device connected, the data does not always plot, or it just plots an initial block as if it had only just started, even though it is running continuously (when this happens the buttons are not responsive – nothing has any effect except closing the program):

Does this occur after just opening the program with only one device connected? (I.e. the program will still be in continuous mode as you haven’t pressed any keys/clicked any of the buttons).

I have found one bug that seems to happen sporadically and may be interrupting the flow of data from the read thread to the plot thread. I will fix that.

The issue with the left/right columns appearing in different windows is indeed weird. I can’t imagine under what conditions that would happen. I can re-format the output a bit to try and avoid that. The right-hand column isn’t that important.

Meanwhile I did some timing tests to try and account for the variation:

Doing a loop of 256 bytes per call using just the MeterFeeder C++ library yields a consistent 29-30 ms per call on my MacBook Pro (a late-2013 model).

Doing the same thing in Python (i.e. Parking Warden reading in via the C++ MeterFeeder library) seems to add about another 10 ms on top of that (+/- 1-2 ms). It's when I also start the graphing thread in the Python program that we begin to see much more variation. As I feared at the beginning, the performance issues I suspected might come with choosing Python are proving troublesome.

Once the new devices have arrived, I'll continue fine-tuning what we have to work better with multiple devices plugged in. However, I'm now thinking I'll have to go back to the original approach I considered (doing the graphing in C++ as well). I chose Python because it was a quicker implementation method.

Oh, I see you posted another reply just as I did; I'll go over that.

I’ve decided to try a different approach here in terms of implementation. The issues above are due to what I think is a mixture of my lack of Python skills and performance issues with trying to graph multiple outputs at once in the current implementation of Parking Warden.

So I’m thinking of trying to use the same MeterFeeder library but use Unity/C# to do the graphical interface side of things. This will have the added benefit of being able to run the program on multiple platforms. I’m interested in running it on my Android smartphone so we can do mobile tests with the MED devices one day.

I’d like to take a step back and start off with a high-level approach - what are our basic requirements? What features do we want in the program, and how do we want them displayed? Let me kickstart the listing:

  1. A graphical representation of the output of each connected device. Do we only need a random walk output (where the Y-axis is the cumulative value, incremented/decremented by each bit, and the X-axis is the timescale), or do we also want a graph of the probability distributions so experiments could easily be compared visually against the normal bell curve? In that case the Y-axis would be the number of runs/trials and the X-axis would be the number of 1s (or 0s) counted per trial. There would be a way to toggle between graph modes (a rough sketch of the two data series follows this list).
  2. Ability to perform continuous and user-initiated trials.
  3. In the case we also graph the probability distributions, what stats do we also want displayed? I’m using this as an opportunity to refresh my knowledge of statistics and probability theory. Z-score and χ² (chi-squared) info, I believe, may be of interest. Does anything else come to mind?
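To make the two graph modes concrete, here’s a rough C# sketch (illustrative only, not actual Parking Warden code) of the data each mode would accumulate from a device’s raw bytes:

```csharp
// Illustrative only: the two proposed graph modes.
// Mode 1 (random walk): each bit moves the cumulative value up (+1) or down (-1).
// Mode 2 (distribution): the number of 1s per trial is binned into a histogram.
using System;
using System.Collections.Generic;

class GraphModes
{
    public List<int> Walk { get; } = new List<int> { 0 };        // random-walk series
    public Dictionary<int, int> Histogram { get; } = new Dictionary<int, int>();

    // Feed one trial's worth of raw bytes from a device.
    public void AddTrial(byte[] trialBytes)
    {
        int ones = 0;
        int position = Walk[Walk.Count - 1];
        foreach (byte b in trialBytes)
        {
            for (int i = 0; i < 8; i++)
            {
                int bit = (b >> i) & 1;
                position += bit == 1 ? 1 : -1;   // random-walk mode
                ones += bit;                     // distribution mode
                Walk.Add(position);
            }
        }
        Histogram.TryGetValue(ones, out int count);
        Histogram[ones] = count + 1;             // one more trial with this count of 1s
    }
}
```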

Another question I wanted to ask: is there a way to disable the bias amplification on the MED devices that have had the algorithm burnt into them? Maybe an FTDI command I’m not aware of? I was thinking of cases like generating control sequences of random numbers for experiments, or other use cases that might arise in the future, like testing other algorithms running on the machine the MED is plugged into, etc.

With respect to the user interface:

  1. The overall design is highly dependent on the specific application, with too many variations to try to describe all in one. The visual and/or auditory output should most simply provide the user with feedback that shows success (or failure) to achieve the specific intended outcome or goal. In a game where a piece is meant to move in a direction or faster/slower, the motion of the game piece can be good feedback. Also, one might want to see/hear something for each single trial that indicates greater or lesser magnitude of success. A single trial with highly improbable results would receive a larger/brighter/louder/etc. feedback bar or ball or tone, etc. This type of additional feedback would seem most appropriate for user-initiated trials rather than continuous operation, but that would not always be the case.
  2. Yes, continuous and user-initiated trial modes are very useful for many applications.
  3. The most direct and simplest statistic is the probability of the null-hypothesis test, that is, the probability that the outcome could have been produced at random if no mental effect is operative in the experiment. Alternatively, or in addition to the p value, a Mind-Enabled Score can be useful. It is just the log base 2 of the reciprocal of the p value (a rough sketch follows).
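As an illustration only (not code from any existing program, and assuming a one-tailed test), the conversion from z-score to p value and then to the Mind-Enabled Score could look like this in C#:

```csharp
// Illustrative sketch: null-hypothesis probability p from a z-score (one-tailed,
// an assumption), and the Mind-Enabled Score MES = log2(1/p).
using System;

static class MmiStats
{
    // P(Z >= z) for a standard normal Z, using the Abramowitz & Stegun 7.1.26
    // approximation of erf (maximum error around 1.5e-7).
    public static double PValue(double z)
    {
        double x = z / Math.Sqrt(2.0);
        double t = 1.0 / (1.0 + 0.3275911 * Math.Abs(x));
        double poly = ((((1.061405429 * t - 1.453152027) * t + 1.421413741) * t
                        - 0.284496736) * t + 0.254829592) * t;
        double erf = 1.0 - poly * Math.Exp(-x * x);
        if (x < 0) erf = -erf;
        return 0.5 * (1.0 - erf);                // upper-tail probability
    }

    public static double MindEnabledScore(double p) => Math.Log(1.0 / p, 2.0);
}
// Example: z = 1.645 gives p ~ 0.05 and MES ~ 4.3.
```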

No, there is no built-in command to disable bias amplification in MEDs. A simple source of relatively non-responsive data is just the output of the OS’s built-in pseudorandom generator. There is a version of the MED100 that is just a random generator without bias amplification, but that requires more hardware, which is undesirable.

Just for interest, the output of the bias amplifier can be nullified or de-biased using an algorithm attributed to von Neumann. This de-biasing algorithm is as follows: take input bits in pairs, if the bits are (0,1) output a 1, else if input is (1,0), output a 0, else provide no output (when the two bits are the same). This algorithm is a sort of inverse of the simplest bias amplification algorithm – it removes all bias in a sequence where bias is the only statistical deviation. The number of output bits is on average one-quarter the number of input bits. Note, this algorithm assumes the sequence is stationary, meaning simplistically, the statistics do not change during the period of the sequence. MMI effects may be expected to be non-stationary, or to change the statistics of a MMI generator during brief periods of mental effect. That means the de-biasing algorithm will not always produce a “perfectly” de-biased result. In any case, there is always a statistically expected residual bias due to sampling a finite length sequence.
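A minimal C# sketch of that de-biasing step, purely for illustration:

```csharp
// Von Neumann de-biasing as described above: process bits in non-overlapping
// pairs; (0,1) -> output 1, (1,0) -> output 0, equal pairs are discarded.
// On average about one quarter of the input bits produce an output bit.
using System.Collections.Generic;

static class VonNeumann
{
    public static List<int> Debias(IReadOnlyList<int> bits)
    {
        var output = new List<int>();
        for (int i = 0; i + 1 < bits.Count; i += 2)
        {
            int a = bits[i], b = bits[i + 1];
            if (a == 0 && b == 1) output.Add(1);
            else if (a == 1 && b == 0) output.Add(0);
            // (0,0) and (1,1): no output
        }
        return output;
    }
}
```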

Thanks for the info on disabling/de-biasing the bias-amplified output.

Continuing on the user interface discussion - sorry, it sounds like I may have mixed things up and caused some confusion. The question above - what are our requirements? - was not in the generic context of making MMI applications/games etc., but purely in the context of the tool we’ve been talking about for running multiple MED devices simultaneously and comparing their output/responsiveness.

No problem at all; I just didn’t connect the right context for my previous reply.

I should suggest that the testing program be primarily set up to compare two generators. A mode that looked at only one generator may be useful for program testing and development. Comparing MMI generators is very challenging – partly because of the nature of MMI – so I further suggest focusing on comparing only two generators. There is nothing in principle against comparing more than two, but I believe that may be confusing and would take too many trials to achieve significant results.

MMI has strange properties in this type of comparison. When comparing two generators, only one output is used to update the feedback plot for each trial. The active generator is the one whose output is used to update the display and the statistics of that generator for a given trial. The two generators are selected at random or in a way that makes the active generator unknown to the test subject. The comparison results are only displayed separately after the end of the test. The reason for this careful hiding of the active generator is that feedback has a strong influence on the effect size of the MMI generator. Also, if the test subject knows which generator is producing feedback at any particular trial, personal bias or “favoritism” for a particular generator can show up in the results. This must be avoided to achieve a fair comparison of the generators.

  1. I have found a simple plot of the cumulative results for both generators (from each one when it is the active generator) seems to work the best. The x axis is usually the trial number and the y axis is the cumulative random walk. The walk starts from zero (the left, middle of the screen). Due to high screen resolution, it sometimes makes sense to plot multiple pixels for each trial, and the same for each step of the random walk position. The number of pixels updated per trial is determined by how the display appears during the test period.

If it is desired to plot a probability, the usual approach is just to add an envelope of fixed probability above and below the random walk results. The fixed probabilities are typically set at 5%, possibly with another at 1%. If results are very strong, a probability envelope at 0.1% could also be used.
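For illustration, an envelope like that could be computed as below (a sketch assuming an unbiased ±1 random walk and two-sided significance levels; the exact z multipliers are a choice, not a requirement):

```csharp
// After n unbiased +/-1 trials the random walk has standard deviation sqrt(n),
// so a fixed-probability envelope at significance level p is +/- z_p * sqrt(n).
// The z values below assume two-sided levels.
using System;

static class Envelope
{
    public const double Z5pct  = 1.960;   // p = 5%
    public const double Z1pct  = 2.576;   // p = 1%
    public const double Z01pct = 3.291;   // p = 0.1%

    public static (double Upper, double Lower) Bounds(int trialNumber, double z)
    {
        double half = z * Math.Sqrt(trialNumber);
        return (half, -half);
    }
}
// Example: at trial 400 the 5% envelope is +/- 1.960 * 20 = +/- 39.2 steps.
```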

When a comparison test is completed (when results are considered significant, or after a fixed number of trials), the display can be toggled to display the two generator results separately, probably in different colors on one plot. Note, since only half the data is produced by each generator, the x axis plot will only be half the length of the combined plot. Or, use double the number of pixels for each individual generator plot to make them the same length on the x axis. Some experimentation is required to choose the best way to display the individual generator plots. One may wish to toggle between the two plots (combined versus individual) before deciding the current test is completed, just to see how things are progressing.

I’ve ported the FTDI stub/MeterFeeder library to Java for running on Android (to be used in conjunction with the FTDI Java Android driver). This will allow us to use the Unity replacement I’m making for Parking Warden not only on PCs but also on the go with Android phones. I’ll put the source link up once it’s ready.

I’ve started to redo Parking Warden in Unity. I had a Unity project called MedBots which has kind of been my playground for any MMI projects done in C#. You can also find the Pacman and Pong games there, posted about a few months ago elsewhere on the forum (but they need a lot more work before they could even possibly be used for experimenting, so don’t expect anything).

You can download the Windows exe here:

And see a brief demo:

You’re able to scroll out a bit and toggle between random walking and displaying the number of one bits read. It’s reading 4 bytes at a time per device.

The top-left button is an unfinished implementation of trying to also graph the output of one of these EEG scanners I got recently:

The EVP button doesn’t do anything at the moment, but I figured it could be a placeholder for an EVP playback feature, perhaps useful once I’ve got the Java FTDI driver completed, which would allow mobile investigations with MEDs plugged into an Android device.

@ScottWilber Have you been able to run the MedBots.exe above? Just wondering if it runs on your PC.

Yes, 3 apps run on my computer. Pacman seems to have a bias toward the bottom of the screen and the paddles on Pong – and the ball – move too fast to allow mental effort to affect the play. Parking Warden runs, but it needs some work before it is usable for comparing device performance. Happy to discuss any of these details whenever you want. EVP doesn’t seem to do anything (no response to clicking on the button), but perhaps it’s not active yet or has to be linked to audio output.

Great job getting these to run! Thank you.

Thanks. Pacman and Pong are still in the far-from-complete state left from a few months ago when we discussed them briefly in separate threads.

EVP doesn’t do anything yet. It’s a placeholder button for hopefully something interesting one day.

I’d like to focus on working on Parking Warden for now.

Please let me know what changes/features you’d like.

As far as types of graphing go, so far we have a toggle between:

  • Simple +/- random walk, each bit sequentially processed
  • Counting the number of 1 bits per read (4 bytes at a time at the moment) - this read size could be made adjustable on the screen

Anything else we’d like to be able to toggle? z-scores etc?

Okay. The number of 1 bits read is probably not a variable we need to follow. However, the z-score, or, even better, the probability of the null-hypothesis test (from the z-score), would be useful.

When comparing two generators, it is essential that the user does not know which generator is actively providing the latest update for the displayed output. That means only one plot will be displayed during the test, updated each trial from whichever generator is active. Each generator provides data for that one plot, but never both generators at the same time. When the user ends the test, the results from each generator – accumulated only from the trials when it was the active one, and stored separately – will be displayed separately. To be clear, the single combined display will have twice as many points as either of the two separate plots revealed only at the end of the test. Then the results can be compared.

It has been suggested that the active generator be selected by alternating from one to the other, with the generator that goes first selected at the beginning of the run by a random bit. The alternative is to select the active generator randomly as the program is running, but this is more complex, since we want the total number of trials for each generator to be equal (or differ by at most 1) when the test is ended. Note, it’s probably not useful to try to compare more than two generators at one time. It’s hard enough to get good results with only two.
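A sketch of that scheduling scheme (illustrative only; the starting bit here comes from an ordinary pseudorandom source, which is an assumption, not a requirement):

```csharp
// The first active generator is chosen by a single random bit at the start of
// the run, then the two generators strictly alternate, so their trial counts
// are equal (or differ by at most 1) whenever the test is ended.
using System;

class GeneratorScheduler
{
    private int _active;                    // 0 or 1: index of the active generator
    public int TrialsA { get; private set; }
    public int TrialsB { get; private set; }

    public GeneratorScheduler(Random rng)
    {
        _active = rng.Next(2);              // random starting generator, hidden from the user
    }

    // Returns which generator supplies the next trial, then flips for the one after.
    public int NextActiveGenerator()
    {
        int chosen = _active;
        if (chosen == 0) TrialsA++; else TrialsB++;
        _active = 1 - _active;              // strict alternation keeps the counts balanced
        return chosen;
    }
}
```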

  • In continuous mode, the numbers should probably be updated no more than 5 times per second, and even slower might be better – not fewer than once per second and not more than 5 times per second.
  • A user-initiated mode is essential: one update per keypress (or mouse click).
  • The number of bits in each update should represent about 0.2 seconds of MMI generator data and provide a single 1 or 0 result. In any case, the total number of bits used should always be the same for every trial, and the number should be odd to prevent a tie, which is an indeterminate result that could bias the test results (see the sketch below).
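As a sketch of that per-trial reduction (the bit rate and trial size below are assumptions chosen for illustration, not a specification):

```csharp
// Each trial uses the same fixed, odd number of bits (roughly 0.2 s of generator
// output) and is reduced to a single 1 or 0 by majority vote.  An odd count
// makes a tie impossible.
using System;

static class TrialReducer
{
    // Assumption for illustration: ~100 kbits/s of raw data gives ~20,000 bits
    // in 0.2 s; the nearest odd number is used.
    public const int BitsPerTrial = 19_999;

    public static int Reduce(int[] bits)
    {
        if (bits.Length != BitsPerTrial)
            throw new ArgumentException("Every trial must use the same (odd) number of bits.");

        int ones = 0;
        foreach (int b in bits) ones += b;
        return 2 * ones > bits.Length ? 1 : 0;   // majority vote; no tie since the count is odd
    }
}
```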

Let me know anything that is not clear.

@ScottWilber
It’s been a while since this thread was active. I was wondering if you could please explain what “special XOr processing - experimental” is in some more detail, and whether it was for MMI purposes or something else?

This was one of several versions of internal processing methods for MED100K MMI generators I designed while I still had the support to get them programmed. I sent a large number of every type of those generators to Jamal for distribution to others for experimental use, though I have not heard from him in quite a long time. I don’t know what happened to most of those generators.

The XOr processing is based on some theoretical modeling that suggested it could be more responsive than other types of processing. The idea is that if a large number of bits are XOred together, switching any one of them by mental influence will switch the output from 1 to 0 or vice versa.
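In code terms that reduction is just the parity of the block of bits; a minimal sketch (not the actual MED100K firmware):

```csharp
// XOR a block of bits down to a single bit (its parity): flipping any one
// input bit flips the output bit.
static class XorReduce
{
    public static int Parity(int[] bits)
    {
        int result = 0;
        foreach (int b in bits) result ^= b;
        return result;
    }
}
```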

Every new type of processing must be tested to see if it is actually more responsive (or less, or unchanged). A lot of work was done in this forum on a project to compare various generator designs, but I don’t know the current status of that project.

Thank you.
Jamal did distribute some of the devices to others, and then he shipped the rest to me a few months ago. I’ve made those available to everyone for use via the API server detailed in the thread titled the MED Farm.

The original version of the comparison tool (titled Parking Warden), which you and I tested together, ended up being too laggy, so I gave up on the Python implementation and started working on a new version in Unity. I think we discussed some of the points that needed to be improved in that one, but sorry, I haven’t had any time for that project for a while now.

I’m glad those MMI generators made it somewhere they might be used. That’s about $30,000 of hardware, which I provided for the development of MMI applications because it will be important at some point.

So far we are just scratching the surface, but there is enough understanding to do something really interesting. I will try to provide a more complete system design for one or more applications.

It’s been over half a year since I last worked on this, but I’m now trying to spend some of my Saturdays on MMI work.

My first task will be making the changes mentioned here:

but first I just wanted to make sure that anybody with Windows and one or more MED devices plugged in can run the attached application.

Plug in your MEDs, unzip the file and run MedBots.exe → then choose Parking Warden from the main menu to see the graphed output of the connected devices in real time.