Hi all, I just signed up and was recommended to check out this forum and the work of Scott Wilber from the HeartMath Institute.
I first became interested in this topic through the research conducted by the PEAR group at Princeton and the Global Consciousness Project. I’ve always been fascinated by psychic phenomena, and with the recent developments in generative AI I’ve become very interested in whether MMI could make these models more efficient. A huge problem AI faces is how to scale without needing ever more GPUs and more data. Perhaps incorporating MMI into the training could help.
Admittedly, I am not a machine learning expert, but from my reading on how LLMs work it seems that MMI techniques could be incorporated either to improve training or to produce more efficient outputs.
Injecting randomness during the training of language models (data shuffling, dropout, noise-based regularization) is a standard way to improve generalization and efficiency. Traditionally, pseudo-RNGs serve this purpose well. However, experiments with true RNGs and quantum RNGs have sometimes shown significant improvements in model output for certain tasks.
Recently, I came across a fine-tuning technique called NEFTune, which injects random noise (generated with a pRNG) into the token embeddings throughout instruction fine-tuning. On some instruction-following benchmarks, models fine-tuned this way roughly doubled their performance.
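To make that concrete, here is a rough sketch (in PyTorch) of how I understand the NEFTune-style noise injection: uniform noise is added to the token embeddings, scaled by alpha over the square root of sequence length times embedding dimension. The alpha value and exactly where to hook this into a training loop are my assumptions from reading about the technique, not a definitive implementation.

```python
import torch

def neftune_noise(embeddings: torch.Tensor, alpha: float = 5.0) -> torch.Tensor:
    """Add NEFTune-style uniform noise to token embeddings during fine-tuning.

    embeddings: (batch, seq_len, dim) output of the model's embedding layer.
    alpha: noise scale hyperparameter (5.0 is one value reported for NEFTune).
    """
    batch, seq_len, dim = embeddings.shape
    # Uniform noise in [-1, 1], scaled by alpha / sqrt(seq_len * dim)
    noise = torch.empty_like(embeddings).uniform_(-1.0, 1.0)
    scale = alpha / (seq_len * dim) ** 0.5
    return embeddings + scale * noise
```

As I understand it, this would be applied to the embedding output on every forward pass during fine-tuning (e.g., via a forward hook on the embedding layer), and turned off at inference time.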
Now, the experimenters used a pRNG to generate the random noise. I propose a variation: involve a human “trainer” engaged in a creative activity, like writing, during the model’s fine-tuning phase, and use a qRNG as the source of the random noise.
- The human trainer would perform a creative task throughout the fine-tuning process.
- A quantum RNG placed near the trainer would capture data, feeding it directly into the model as random noise during training.
This approach aims to see if the trainer’s presence and creative output can directly influence the model, potentially enhancing its performance in tasks such as creative writing.
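As a sketch of what I mean, the only change to the NEFTune-style function above would be where the noise comes from: instead of the framework’s pseudo-RNG, read bytes from the quantum device sitting next to the trainer. The device path and the 16-bit output format below are pure placeholders, since I don’t yet know which qRNG I’d be using.

```python
import numpy as np
import torch

QRNG_DEVICE = "/dev/qrng0"  # hypothetical device path; depends on the actual qRNG hardware

def qrng_noise(embeddings: torch.Tensor, alpha: float = 5.0) -> torch.Tensor:
    """Same scaling as the NEFTune sketch above, but the noise is sourced
    from a quantum RNG placed near the human trainer instead of a pRNG."""
    batch, seq_len, dim = embeddings.shape
    n_bytes = embeddings.numel() * 2  # assume the device streams 16-bit samples
    with open(QRNG_DEVICE, "rb") as dev:
        raw = dev.read(n_bytes)
    samples = np.frombuffer(raw, dtype=np.uint16).astype(np.float32)
    # Map the 16-bit samples onto [-1, 1] so the scaling matches the pRNG version
    noise = torch.from_numpy(samples / 65535.0 * 2.0 - 1.0)
    noise = noise.view(batch, seq_len, dim).to(embeddings.device, embeddings.dtype)
    scale = alpha / (seq_len * dim) ** 0.5
    return embeddings + scale * noise
```

In practice I’d probably buffer the device reads in a background thread, since pulling bytes synchronously on every training step would be slow, but that’s a detail to work out later.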
An alternative approach would be to build a game, similar to Psyleron’s software, where the participant’s interaction with the software directly shapes the random noise being injected during fine-tuning.
These are just my initial thoughts on this experiment - I need to work out more of the details, including how to run it on cloud-based GPUs. I also do not have access to a qRNG and was hoping someone on this forum might have leads on one I could purchase at a reasonable price?
I welcome any feedback, and I’d be glad to hear from anyone interested in collaborating on this project. There are a lot of gaps in my knowledge, but I feel strongly that MMI can enhance generative AI in a major way.