Using MMI to assist in training large language models

Hi all, I just signed up and was recommended to check out this forum and the work of Scott Wilber from the HeartMath Institute.

I first became interested in this topic by reading the research conducted by the PEAR group out of Princeton and by the Global Consciousness Project. I’ve always been fascinated by psychic phenomena, and with the recent developments in generative AI I have become very interested in exploring whether these models can be made more efficient. A huge problem AI faces is how to scale without needing ever more GPUs and data. Perhaps incorporating MMI into the training could help.

Admittedly, I am not a machine learning expert; however, from my research into how LLMs work, it seems that MMI techniques could be incorporated into LLMs either to help train them better or to produce more efficient outputs.

Injecting randomness into the training of language models is a well-established way to improve generalization. Traditionally, pseudo-RNGs serve this purpose well. However, experiments with true RNGs and quantum RNGs have sometimes shown significant improvements in model output for certain tasks.

Recently, I came across a fine-tuning technique called NEFTune that injects random noise, generated with a pRNG, into the embedding vectors of the training data during fine-tuning. The result has been nearly a doubling of performance on some conversational benchmarks for the models that used this approach.
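To make the idea concrete, here is a minimal numpy sketch of the NEFTune-style noise step, adding small uniform noise to a sequence of token embeddings (the real method operates inside the model's forward pass during fine-tuning; `alpha` is the noise-scale hyperparameter from the paper):

```python
import numpy as np

def neftune_noise(embeddings: np.ndarray, alpha: float = 5.0) -> np.ndarray:
    """Add NEFTune-style uniform noise to a (seq_len, dim) batch of embeddings.

    The noise is uniform in [-1, 1], scaled by alpha / sqrt(seq_len * dim),
    as described in the NEFTune paper.
    """
    seq_len, dim = embeddings.shape
    scale = alpha / np.sqrt(seq_len * dim)
    noise = np.random.uniform(-1.0, 1.0, size=embeddings.shape) * scale
    return embeddings + noise
```

In an actual training run this perturbation would be applied to the embedding layer's output on each training step, not to the raw data files.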

Now, the experimenters used a pRNG to generate the random noise for this technique. I propose a novel twist: involve a human “trainer” engaged in a creative activity, like writing, during the model’s fine-tuning phase, and use a qRNG to generate the random noise.

  • The human trainer would perform a creative task throughout the fine-tuning process.
  • A quantum RNG placed near the trainer would capture data, feeding it directly into the model as random noise during training.

This approach aims to see if the trainer’s presence and creative output can directly influence the model, potentially enhancing its performance in tasks such as creative writing.
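If the pRNG were swapped for a hardware source, the only change needed in the noise step is where the random bytes come from. A minimal sketch, using `os.urandom` purely as a stand-in for the qRNG device (an actual setup would read bytes from the device's driver near the trainer instead):

```python
import os
import numpy as np

def hardware_noise(shape: tuple, scale: float) -> np.ndarray:
    """Draw noise from an external entropy source instead of numpy's pRNG.

    os.urandom is a placeholder here; in the proposed experiment the bytes
    would come from the quantum RNG placed near the human trainer.
    """
    n = int(np.prod(shape))
    raw = os.urandom(n * 2)  # two bytes per sample
    ints = np.frombuffer(raw, dtype=np.uint16).astype(np.float64)
    # Map the uint16 range [0, 65535] onto [-1, 1], then apply the scale
    return ((ints / 65535.0) * 2.0 - 1.0).reshape(shape) * scale
```

The training loop would then call this in place of the pRNG-based noise, leaving the rest of the fine-tuning procedure unchanged.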

The alternative approach could be building a game, similar to Psyleron’s software, where the participant’s interaction with the software directly affects the random noise being injected into the fine-tuning data.

These are just my initial thoughts on this experiment - I need to work out more of the details - including how to perform this experiment utilizing cloud-based GPUs. I also do not have access to a qRNG and was hoping someone on this forum would have leads to one that I could purchase at a reasonable price?

I welcome any feedback or anyone interested in collaboration on this project. There are a lot of gaps in my knowledge, but I feel strongly that MMI can enhance generative AI in a major way.

Welcome Jon. Glad you are here.

I think it’s likely that an MMI generator, feeding data in the right way into a machine learning task, can have a notable, positive effect. My thought is that the subject’s mental effort should be an intention to increase the speed and/or accuracy of the ML task; that is, the most positive and direct outcome.

If you are located in the USA, I can readily lend you an MMI generator, Model MED100Kx8. When you are ready to implement your ideas, send me a personal email and we can work out the details – it will be a no-stress process.

In the meantime, this forum is a good venue to discuss and develop your ideas further.


I’m already performing experiments in that direction, though I mostly use the MMI signal to influence the sampler. Thank you for the hint about NEFTune.
An LLM driven by MMI is VERY responsive. I assume this is because MMI interacts with the connection between idea and matter, and an LLM is a sort of embodiment of that connection.
However, there is a problem: it requires huge effort to reliably induce a psychic imprint into the model. It’s very draining, even for a small-scale fine-tune of a small model.
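For reference, driving the sampler with an external random stream can be as simple as inverse-CDF token sampling where the uniform draw `u` comes from the MMI device instead of the default pRNG. A minimal sketch (how `u` is read from the device is left abstract):

```python
import numpy as np

def sample_token(logits: np.ndarray, u: float) -> int:
    """Pick the next token by inverse-CDF sampling.

    u is a value in [0, 1) supplied by an external random source
    (e.g. an MMI generator) rather than the model's built-in pRNG.
    """
    # Numerically stable softmax over the vocabulary logits
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    # Invert the cumulative distribution at u to select a token id
    cdf = np.cumsum(probs)
    return int(np.searchsorted(cdf, u))
```

Any bias in the external stream then shows up directly as a bias in token selection, which is presumably the channel an MMI effect would act through.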


Thanks Scott! Will do.

Interesting - can you elaborate on the effort needed to induce a psychic imprint onto the model? What are you focusing on in order to induce it?

And glad the NEFTune article is helpful! Will be interesting to try MMI techniques with this fine tuning approach.

I’m focusing on producing a correct answer, one that is relevant to the question asked.

Welcome to the forum! I’ve been thinking along these lines myself too and have been wanting to get some time aside to experiment. Please share any of your findings here :smiling_face:


Welcome jonf. LLMs and MMI are intersecting interests for me so I must try this.


Have you tried reaching out to the authors of the experiment to see if they would be interested in collaborating?


Hey Joshua - not yet - I want to get a better understanding of the process and try some experiments myself before reaching out.

Good idea. Definitely keep us updated; this is a super interesting idea.


I’ll be REALLY surprised if it works, because the entire NEFTune principle defies the purpose of MMI.
Basically, it adds a partial denoising task to the training of the neural network, and denoising helps the network generalise.
However, if we want to encode some sort of mental imprint into the learning process, that means the noise would have to be quite significantly biased.