Neuroscientific Vipassanā-ñāṇas
Author, meditation nerd, and retired emergency physician Daniel Ingram has been participating in new research studies at Cambridge University, producing EEG data from his meditation practice to better understand the correlates between brain activity and the stages of the progress of insight described in certain Buddhist traditions.
Daniel was doing this in person pre-COVID-19, but is now, to my knowledge, stashed away safely in semi-suburban America with a very expensive EEG headset and some university lab software.
Well, as you may or may not expect, there isn't a whole lot going on in an EEG during meditation: meditation tends to decrease activity across much of the brain while a few more specific regions become more active. Looking at a graph of EEG activity after a two-hour sit, Daniel found it difficult to know exactly what he was doing or experiencing at any given point on the readout.
Doing some research, he found that his streaming platform, Lab Streaming Layer (LSL), had an application that could convert game controller inputs into an LSL stream, so that they could be overlaid on the EEG data, allowing Daniel to make markers with small, undistracting movements as he navigated the stages of insight meditation. However, the app wouldn't launch, and though the C++ source code was available, he didn't have any luck getting it going.
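For anyone curious what that looks like on the wire, here's a minimal sketch of pushing a marker into LSL from C++ using liblsl; the stream and marker names are illustrative, not taken from the actual app.

```cpp
// Minimal sketch (liblsl C++ API): push a text marker into an LSL stream.
// Stream and marker names here are illustrative, not from the real app.
#include <lsl_cpp.h>
#include <string>
#include <vector>

int main() {
    // Discrete events are usually represented in LSL as an irregular-rate,
    // single-channel string stream.
    lsl::stream_info info("MeditationMarkers", "Markers", 1,
                          lsl::IRREGULAR_RATE, lsl::cf_string,
                          "controller-marker-example");
    lsl::stream_outlet outlet(info);

    // In the real app a marker like this would be pushed whenever the
    // controller registers an input.
    std::vector<std::string> sample{"stage_transition"};
    outlet.push_sample(sample);
    return 0;
}
```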
Windows 10 has moved at quite a rapid pace, and five-year-old Qt 4 code and DirectX 8-era input libraries didn't seem to be cutting it for this particular use case. I'd never used Qt before, but the sensible thing seemed to be to use the Qt Visual Studio plugin to upgrade the project to Qt 5, which, after a bit of include fiddling, started working.
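For anyone porting similar code, the main breakage is usually that the widget classes moved out of QtGui into their own QtWidgets module in Qt 5; the snippet below is an illustrative example of that kind of change, not the exact diff from this project.

```cpp
// Illustrative Qt 4 -> Qt 5 fix-up, not the exact diff from this project.
// In Qt 5 the widget classes moved from QtGui into the QtWidgets module, so
// the qmake project file needs "QT += widgets" and the includes change too.

// Qt 4 style:
//   #include <QtGui/QApplication>
//   #include <QtGui/QMainWindow>

// Qt 5 style:
#include <QtWidgets/QApplication>
#include <QtWidgets/QMainWindow>

int main(int argc, char *argv[]) {
    QApplication app(argc, argv);
    QMainWindow window;
    window.show();
    return app.exec();
}
```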
I then replaced the input code, which on Daniel's side wasn't reporting anything into Lab Streaming Layer, with the newer DirectX Tool Kit (DirectXTK) API for controller input. This made the code much more modular and allowed each specific feature of the Xbox One controller to be mapped to its own individually marked LSL output.
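The rough shape of that polling loop looks something like the sketch below, assuming liblsl and DirectXTK's GamePad class; the channel layout and names are illustrative, and the real code splits things out further into individually marked outputs.

```cpp
// Sketch of the approach, assuming liblsl and DirectXTK's GamePad class.
// Channel layout and names are illustrative; the real code maps every
// button, stick axis, and trigger to its own marked LSL output.
#include <lsl_cpp.h>
#include <GamePad.h>   // DirectX Tool Kit
#include <memory>
#include <vector>

int main() {
    // Depending on the DirectXTK backend, COM/WinRT initialisation may be
    // needed before constructing GamePad on Windows 10.
    auto gamepad = std::make_unique<DirectX::GamePad>();

    // One float channel per control, polled at a nominal 60 Hz.
    lsl::stream_info info("XboxOneController", "ControllerInput", 4, 60.0,
                          lsl::cf_float32, "xbox-one-pad-example");
    lsl::stream_outlet outlet(info);

    while (true) {
        auto state = gamepad->GetState(0);
        if (!state.IsConnected())
            continue;

        std::vector<float> sample{
            state.buttons.a ? 1.0f : 0.0f,  // A button
            state.buttons.b ? 1.0f : 0.0f,  // B button
            state.thumbSticks.leftX,        // left stick X axis
            state.triggers.right            // right trigger
        };
        outlet.push_sample(sample);
        // A real loop would sleep here to hold the 60 Hz polling rate.
    }
}
```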
The good news is that Daniel can now see the controller input appear inside his LSL network, hooray!
Open source, academic software can be indecipherable at times, but with patience we got there!
The bad news is that the overall analysis platform, Neuropype, doesn't seem to be getting all the LSL streams in sync with the EEG input, and Daniel is still seeking technical support for that problem. Not having access to Neuropype, which is expensive academic software, I can't test this myself, but interpolating and aligning the timestamps seems like something it should be able to do by default.
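For what it's worth, LSL itself timestamps every sample and exposes per-stream clock offsets, so aligning the controller markers against the EEG stream should be possible on the consumer side; a minimal sketch, with the stream type assumed for illustration:

```cpp
// Sketch of consumer-side alignment using LSL's own clock offsets.
// The stream type used to resolve the inlet is an assumption.
#include <lsl_cpp.h>
#include <string>
#include <vector>

int main() {
    // Find the marker stream on the network (e.g. the controller outlet).
    auto results = lsl::resolve_stream("type", "Markers");
    lsl::stream_inlet inlet(results.at(0));

    // time_correction() returns the offset between the sender's clock and
    // the local clock, so timestamps can be mapped into one timebase.
    double offset = inlet.time_correction();

    std::vector<std::string> sample;
    double ts = inlet.pull_sample(sample);
    double aligned_ts = ts + offset;  // timestamp in the local LSL timebase
    (void)aligned_ts;
    return 0;
}
```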