Developing the Meter Feeder

I’ve started to redo Parking Warden in Unity. I had a Unity project called MedBots, which has become my playground for any MMI projects done in C#. You can also find the Pacman and Pong games there, posted about a few months ago elsewhere on the forum (but they need a lot more work before they could even possibly be used for experimenting, so don’t expect anything).

You can download the Windows exe here:

And see a brief demo:

You’re able to scroll out a bit and toggle between random walking and displaying the number of one bits read. It’s reading 4 bytes at a time per device.
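To illustrate the two statistics mentioned above, here is a minimal sketch (the function names are mine, not from the MedBots code) of counting the 1-bits in a 4-byte read and of the corresponding +/- random-walk step:

```python
def ones_count(data: bytes) -> int:
    """Count the 1-bits across all bytes of a read (e.g. 4 bytes = 32 bits)."""
    return sum(bin(b).count("1") for b in data)

def random_walk_delta(data: bytes) -> int:
    """Net random-walk movement for a read: +1 per 1-bit, -1 per 0-bit."""
    ones = ones_count(data)
    zeros = len(data) * 8 - ones
    return ones - zeros
```

For example, a 4-byte read with 13 ones and 19 zeros moves the walk by -6.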

The top-left button is a not-finished implementation of trying to also graph the output of one of these EEG scanners I got recently:

The EVP button doesn’t do anything at the moment but I figured it could be a placeholder for an EVP playback feature, perhaps useful for when I’ve got the Java FTDI driver completed and that would allow mobile investigations with MEDs plugged into an Android device.

@ScottWilber Have you been able to run the MedBot.exe above? Just wondering if it runs on your PC.

Yes, 3 apps run on my computer. Pacman seems to have a bias toward the bottom of the screen and the paddles on Pong – and the ball – move too fast to allow mental effort to affect the play. Parking Warden runs, but it needs some work before it is usable for comparing device performance. Happy to discuss any of these details whenever you want. EVP doesn’t seem to do anything (no response to clicking on the button), but perhaps it’s not active yet or has to be linked to audio output.

Great job getting these to run! Thank you.

Thanks. Pacman and Pong are still in the far-from-complete state left from a few months ago when we discussed them briefly in separate threads.

EVP doesn’t do anything yet. It’s a placeholder button for hopefully something interesting one day.

I’d like to focus on working on Parking Warden for now.

Please let me know what changes/features you’d like.

As far as types of graphing go, so far we have a toggle between

  • Simple +/- random walk, each bit sequentially processed
  • Counting the number of 1 bits per read (4 bytes at a time at the moment) - this read size could be made adjustable on the screen

Anything else we’d like to be able to toggle? z-scores etc?

Okay. The number of 1 bits read is probably not a variable we need to follow. However, the z-score, or even better the probability of the null hypothesis test (from the z-score) would be useful.
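A small sketch of the statistics suggested above (function names are illustrative, not from the project): the z-score of an observed ones count against the fair-coin null hypothesis, and the two-tailed probability of the null hypothesis derived from it:

```python
import math

def z_score(ones: int, total_bits: int) -> float:
    """Z-score of the ones count under the null hypothesis p = 0.5:
    mean = n/2, standard deviation = sqrt(n)/2."""
    mean = total_bits / 2
    sd = math.sqrt(total_bits) / 2
    return (ones - mean) / sd

def null_probability(z: float) -> float:
    """Two-tailed probability of observing |Z| >= |z| under the null."""
    return math.erfc(abs(z) / math.sqrt(2))
```

For instance, 60 ones in 100 bits gives z = 2.0, and a z of about 1.96 corresponds to the familiar p ≈ 0.05.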

When comparing two generators, it is essential that the user does not know which generator is actively providing the latest update for the displayed output. That means only one output will be displayed, showing the latest update from whichever generator is active. Each generator will contribute data to that one plot, but never both at the same time. When the user ends the test, the results from each generator (recorded only while it was the active one and stored separately) will be displayed separately. To be clear, the one live display will have twice as many points as either of the two separate plots that are revealed only at the end of the test. Then the results can be compared.

It has been suggested that the active generator be selected by alternating from one to the other, but the generator that goes first will be selected at the beginning of the run by a random bit. The alternative is to select the active generator randomly as the program is running, but this is more complex, as we want the total number of trials for each generator to be equal (or plus or minus 1) when the test is ended. Note, it’s probably not useful to try to compare more than two generators at one time. It’s hard enough to get good results with only two.
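One way to satisfy the "random selection with equal totals" constraint described above is to pre-build a balanced schedule of generator assignments and shuffle it. This is only an illustrative sketch (names and approach are my assumptions, not the project's code):

```python
import random

def balanced_schedule(n_trials: int, seed=None):
    """Random sequence of generator indices (0 or 1) whose counts are
    equal, or differ by 1 when n_trials is odd."""
    half = n_trials // 2
    schedule = [0] * half + [1] * (n_trials - half)
    random.Random(seed).shuffle(schedule)
    return schedule
```

Drawing the next active generator from such a schedule keeps selection unpredictable to the user while guaranteeing the trial counts come out equal (or plus or minus 1) when the test ends.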

In continuous mode, the numbers should probably be updated no more than 5 times per second, and even slower might be better, but not less than once per second.
A user-initiated mode is essential. One update per keypress (or mouse click).
The number of bits in each update should represent about 0.2 seconds of MMI generator data, and provide a single 1 or 0 result. In any case, the total number of bits used should always be the same for every trial, and the number should be odd to prevent a tie, which is an indeterminate result that could bias the test results.
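The requirement above (a fixed, odd number of bits per trial reduced to a single 1 or 0) can be sketched as a majority vote; the odd length guarantees a tie is impossible. Names here are illustrative, not from the project:

```python
def trial_result(bits):
    """Majority vote over an odd-length bit sequence -> a single 1 or 0."""
    if len(bits) % 2 == 0:
        raise ValueError("bit count must be odd to prevent ties")
    return 1 if sum(bits) > len(bits) // 2 else 0
```

At 0.2 seconds of generator data per trial, the bit count would be set by the device's bit rate, rounded to the nearest odd number.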

Let me know anything that is not clear.

@ScottWilber
It’s been a while since this thread was active. I was wondering if you could please explain what “special XOr processing - experimental” is in some more detail and if it was for MMI purposes or something else?

This was one of several versions of internal processing methods for MED100K MMI generators I designed while I still had the support to get them programmed. I sent a large number of every type of those generators to Jamal for distribution to others for experimental use, though I have not heard from him in quite a long time. I don’t know what happened to most of those generators.

The XOr processing is based on some theoretical modeling that suggested it could be more responsive than other types of processing. The idea is that if a large number of bits are XOred together, switching any one of them by mental influence will switch the output from 1 to 0 or vice versa.
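The XOr idea described above can be sketched in a few lines: fold a block of bits together with XOR, so the output is the parity of the ones count and flipping any single input bit flips the result. This is just an illustration of the principle, not the MED100K's internal implementation:

```python
from functools import reduce

def xor_fold(bits):
    """XOR all bits together; returns the parity of the ones count."""
    return reduce(lambda acc, b: acc ^ b, bits, 0)
```

For example, flipping exactly one bit of any input always inverts the output, which is the claimed source of the method's responsiveness.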

Every new type of processing must be tested to see if it is actually more responsive (or less, or unchanged). A lot of work was done in this forum on a project to compare various generator designs, but I don’t know the current status of that project.

Thank you.
Jamal did distribute some of the devices to others, and then he shipped the rest to me a few months ago. I’ve made them available to everyone for use via the API server detailed in the thread titled The MED Farm.

The original version of the comparison tool (titled Parking Warden), which you and I tested together, ended up being too laggy, so I abandoned the Python implementation and started working on a new version in Unity. I think we discussed some of the points that needed improvement in that one, but sorry, I haven’t had any time for that project for a while now.

I’m glad those MMI generators made it somewhere they might be used. That’s about $30,000 of hardware, which I provided for the development of MMI applications because it will be important at some point.

So far we are just scratching the surface, but there is enough understanding to do something really interesting. I will try to provide a more complete system design for one or more applications.

It’s been over half a year since I’ve worked on this, but I’m now trying to spend some of my Saturdays on MMI work.

My first work will be making the changes mentioned here:

but I just wanted to make sure anybody with Windows and one or more MED devices plugged in can run this attached application.

Plug in your MEDs, unzip the file and run MedBots.exe → then choose Parking Warden from the main menu to see the graphed output in realtime of the connected devices.

Okay, runs on my Win7 computer. Tried reset and start/stop as well as the counts of 1s button. All seem to work. Not sure what the two modes of counts of 1s mean. The MED serial number is displayed in the lower left corner.

I also ran it on a win10 machine. Of course microsoft “defender” tries to block it several different ways and report my activities to their headquarters, but I got it to work anyway.

Thanks for checking. I’ll go ahead and make the changes. I can’t remember myself what Ones Count is. I need to see what I was doing there…

Lol on Windows 10…

@ScottWilber
I’d like to get some clarification on the above requirements.
Assuming for the moment we’re doing a single continuous trial, where the user presses start and then stop to do the trial run. We have 2 MEDs connected.

  1. We select one of the 2 MEDs to be the active generator for this trial
  2. We graph the random walk output of that active generator only, but we also record the random walk output of the other non-active generator
  3. When the trial stops, we will then show both random walk outputs on the graph and then label each one with their corresponding device ID

Is that understanding correct?

It’s not possible to do a proper comparison with a single trial; the best comparisons are made with a number of short trials (a fraction of a second each), each initiated by a keypress or mouse click.

If you are determined to use continuous mode, the active generator is still chosen at random, without the user knowing which generator is active. When the user ends that trial, only the results from the active generator are displayed, without revealing which generator was active. Then another such trial (with another random selection of the active generator) is initiated and ended by the user, and so on until a sufficient number of trials has been accumulated. The data displayed during the whole series of trials is from the active generator, which means it is a composite of the data from both generators while each was active.

Only when the series – a number of separate trials – is finished does the data for each of the generators get compiled separately and displayed with the identity of each labeled. The data from the inactive generator can be saved, but it is not really used in the comparison. It can be useful to also display the separate inactive generator results after the series is finished, but only to show that the results for each inactive generator are not as good as for the active generators which were used to provide user feedback.
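The bookkeeping described above (one live composite plot during the series, per-generator results compiled and revealed only at the end) might be sketched like this. Everything here is an illustrative assumption: the function names, the per-trial random choice, and the stand-in `read_a`/`read_b` callables are not from the project:

```python
import random

def run_series(read_a, read_b, n_trials, seed=None):
    """Run a blinded series: each trial randomly picks the active generator,
    shows only its result live, and compiles per-generator results for the
    reveal at the end of the series."""
    rng = random.Random(seed)
    feedback = []                   # the single composite plot shown live
    compiled = {"A": [], "B": []}   # revealed only after the series ends
    for _ in range(n_trials):
        active = rng.choice("AB")   # the user never sees this choice
        result = read_a() if active == "A" else read_b()
        feedback.append(result)
        compiled[active].append(result)
    return feedback, compiled
```

The inactive generator’s data is omitted here for brevity; per the discussion above it can also be recorded and saved, but it is not used in the comparison itself.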

Thank you.
This is just an initial mockup of the UI I’m thinking about.

  • Mode buttons are separated.
  • User can start and end their trials, with the current status showing in the button.
  • The X shows the number of trials (either the number of started and stopped continuous mode trials, or the number of mouse clicks/space bar presses for instantly measured trials)
  • The graphs and legends will be displayed as per your explanation in your last message.

Any questions/suggestions on this approach?

Sorry, I always try to respond to every post I have information about – I guess I overlooked this one.

The organization is improved, but it’s hard to say much since real comparison testing is so dynamic. The single most important point when comparing two generators is that the user cannot know which one is the active one, that is, until the series of trials is completed. For continuous testing the same applies.

There should be only one plot displayed during the trials (or series of trials for continuous mode), composed of points from both generators, alternately selected at random for each trial while the test is ongoing. When the user decides to end the series (or a pre-selected number of trials is reached), the results from each of the two generators are then assembled and displayed as two separate plots, as you show in your message. Only the points from each generator when it was the active generator are used to produce these plots and the statistical results for the two separate generators (see additional clarifications below).

There should be two basic modes: Test (user feedback results: one plot composed of randomly selected results, only from the active generator) and Comparison (results: two plots composed of all the active results from each of the two generators). Note, the active generator is the one used to provide the user feedback point in the plot and the accumulated statistical results in Test mode. To be very clear, there is only one active generator at a time: only a single trial result (from one of the generators) is displayed or added to the display at a time.

The reason for this bit of extra housekeeping is because the best results are always achieved from the generator that produces the output used to give real-time user feedback. If both results are displayed, the user will choose one or the other, or switch back and forth, to focus intention. This will never give an unbiased comparison of two entropy sources/processing methods. This conclusion is based on thousands of test series trying to decide which of two generators is working better.

Realizing that the effect of user feedback was overwhelming any attempted comparison, I designed a method to allow truly independent comparisons. If one generator/processing method produced an effect 10 times as large as the other, it might be enough to work with both results displayed at the same time. However, there is never that much difference between two concurrently available technologies. Usually it is more like 50% more or less, which requires special care to determine.

Thank you. I’ll work on the above.

I am working on both better processing methods and more responsive generator types. A colleague and I are working on an MMI generator comparison tool, but we cannot get multiple generators connected on the same computer.

Could you please point out your best code for connecting to multiple MED100Kxx generators. That would be a great help, thanks.

Great to hear. I must apologize that I’ve hardly had any progress on this particular project for a long time…

The source code of the simplest implementation (in C++) using the MeterFeeder library is here: MeterFeeder/meterfeeder.cpp at master · vfp2/MeterFeeder · GitHub
This is also the repository where the source code for the library itself resides.

You can find pre-compiled executable binaries (along with library binaries) for Mac, Linux and Windows here:

An abandoned Python implementation (called Parking Warden) with graphed output is here:

(I abandoned it because the graph rendering in Python was horrendously slow).

On and off for the past year or so I’ve been working on a C#/Unity implementation (also called Parking Warden), example here:

All it’s really capable of doing at the moment is displaying output from any connected MED in realtime, along with a PRNG for “comparison”. None of the discussed features (user-initiated mode, keeping the active generator unknown to the user, etc.) have been implemented yet. “Output” was the latest thing I was working on a few months ago; that was to graph the cumulative z-score from each device. (I don’t think the Random Walk/Ones Count toggle button does anything at the moment.)

Related source code: