July 26, 2015

Paris Debriefing

C-day + 62

I just returned a few days ago from the MNE-Python coding sprint in Paris. It was an invigorating experience to work alongside more than a dozen of the core contributors to our Python package for an entire week. Putting a face and personality to all of the GitHub accounts I have come to know would have made the trip worthwhile on its own, but it was also a great experience to make some strides toward improving the code library during the sprint. Although I was able to have some planning conversations with my GSoC mentors in Paris (discussed later), my main focus for the week was on goals tangential to my SSS project.

Along with a bright student in my GSoC mentor’s lab, I helped write code to simulate raw data files. These files typically contain the measurement data directly as it comes off the MEEG sensors, and our code will allow the generation of a raw file for an arbitrary cortical activation, with the option to include artifacts from the heart (ECG), eye blinks, and head movement. Generating data where the ground truth is known is especially important for evaluating the accuracy of source localization and artifact rejection methods – a focus for many researchers in the MEEG field. Luckily, the meat of this code had already been written by a post-doc in my lab for an in-house project – we worked on extending and molding it into a form suitable for the MNE-Python library.
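The basic idea can be sketched in a few lines of numpy. This is not our actual simulation code – the matrix sizes, artifact shapes, and amplitudes below are all made up for illustration – but it shows the structure: project a known source time course through a forward (gain) matrix, then add an artifact with a fixed spatial pattern and some sensor noise.

```python
import numpy as np

rng = np.random.default_rng(42)
n_sensors, n_sources, n_times, sfreq = 306, 20, 1000, 1000.0

# Hypothetical forward (gain) matrix mapping source activity to sensors.
G = rng.standard_normal((n_sensors, n_sources))

# Ground-truth cortical activation: a single source oscillating at 10 Hz.
times = np.arange(n_times) / sfreq
s = np.zeros((n_sources, n_times))
s[3] = np.sin(2 * np.pi * 10 * times)

# Cardiac artifact: a fixed spatial pattern driven by periodic "beats".
ecg_pattern = rng.standard_normal((n_sensors, 1))
ecg_tc = np.zeros(n_times)
ecg_tc[::850] = 1.0  # roughly 70 bpm at a 1 kHz sampling rate

# Simulated raw data = brain signal + artifact + sensor noise.
data = G @ s + 0.3 * ecg_pattern * ecg_tc + 0.01 * rng.standard_normal((n_sensors, n_times))
print(data.shape)  # (306, 1000)
```

Because `s` and the artifact are known exactly, a rejection method's output can be scored against ground truth – the point of the whole exercise.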

The trip to Paris was also great because I was able to meet my main GSoC mentor and discuss the path forward for the SSS project. We both agreed that my time would be best spent fleshing out all the add-on features associated with SSS (tSSS, fine-calibration, etc.), which are all iterative improvements on the original SSS technique. The grand vision is to eventually create an open-source implementation of SSS that can completely match Elekta’s proprietary version. It will provide more transparency, and, because our project is open source, we have the agility to implement future improvements immediately since we are not selling a product subject to regulation. Fulfilling this aim would also add one more brick to the wall of features in our code library.

July 12, 2015

Opening up a can of moths

C-day + 48

After remedying the coil situation (and numerous other bugs), my filtering method finally seems to maybe possibly work. When comparing my method to the proprietary one, the RMS of the error is, on average, about 1000 times smaller than the RMS of the magnetometer and gradiometer signals themselves.
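For the curious, that comparison boils down to a relative RMS error. Here is a toy numpy version with synthetic stand-ins for the two reconstructions (not the actual benchmark data):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins: proprietary output as reference, mine differing by a tiny residual.
reference = rng.standard_normal((306, 1000))
mine = reference + 1e-3 * rng.standard_normal((306, 1000))

def rms(x):
    return np.sqrt(np.mean(x ** 2))

error_ratio = rms(mine - reference) / rms(reference)
print(error_ratio)  # on the order of 1e-3: error ~1000x smaller than the signal
```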

It turns out that many of the problems imitating the proprietary MaxFilter method stemmed from how the geometry of the MEG sensors was defined in my model. Bear with me here, as you have to understand some background about the physical layout of the sensors to comprehend the problem. When measuring brain activity, each sensor site takes three measurements: two concerning the gradient of the magnetic field (the gradiometers) and one sensing the absolute magnetic field (a magnetometer). The MEG scanner itself is made up of ~100 of these triplets. The gradiometers and magnetometers are manufactured with different geometries, but they are all similar in that they contain one (or a set) of wire coils (i.e., loops). The signal recorded by these sensors arises when the magnetic field threads the coil loops and induces a current within the wire itself, which can then be measured. When modeling this on a computer, however, that measurement has to be discretized, as we can’t analytically calculate how a magnetic field will influence any given sensor coil. Therefore, we break up the area enclosed by the coil into a number of “integration points.” Instead of integrating across the entire rectangular area enclosed by a coil, we calculate the magnetic field at, say, 9 points within that plane, which allows a computer to estimate the signal any given coil would pick up.

For an analogy, imagine you had to measure the air flowing through a window. One practical way might be to buy 5 or 10 flowmetry devices, hang them so they’re evenly distributed over the open area, and model how air was flowing through using those discrete point sensors. Only here, the airflow is a magnetic field and the flow sensors are extremely expensive and sensitive SQUIDs bathed in liquid helium – other than that, very similar.
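A toy numpy sketch of that discretization (the field, coil size, and 3×3 equally weighted midpoint grid are all made up for illustration – this is not the coil definition either package actually uses):

```python
import numpy as np

# Toy field: the normal component of B varies smoothly over the coil plane.
def b_normal(x, y):
    return np.cos(x) * np.cos(y)

half = 0.01  # half-width of a hypothetical 2 cm square coil (meters)

# 3x3 grid of equally weighted "integration points" (midpoints of subcells).
pts = np.linspace(-half, half, 3, endpoint=False) + half / 3
xs, ys = np.meshgrid(pts, pts)
area = (2 * half) ** 2

# Flux estimate: area times the average sampled field.
flux_est = area * b_normal(xs, ys).mean()

# Exact flux for this separable field: (2 sin(half))**2.
flux_exact = (2 * np.sin(half)) ** 2
print(abs(flux_est - flux_exact) / flux_exact)  # tiny relative error
```

Nine samples already approximate the true integral very well here because the field varies slowly over a 2 cm coil.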

The hang-up I’ve been dealing with is largely because there are different ways to define those discrete points for the numerical integration. You can use more or fewer points (trading off accuracy against computational cost), and there are certain optimizations for how to place them. As for placement, all points could be evenly spaced with equal weighting, but there are big fat engineering books that recommend more optimal (and uneven) placements and weightings depending on the shape in use. It turns out the proprietary SSS software used one of these optimized arrangements, while MNE-Python uses an evenly distributed and weighted arrangement. Fixing the coil definitions has brought my custom implementation much closer to the black box I’m trying to replicate.
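To see why the weighting matters, here is a toy comparison on a unit square using Gauss-Legendre points (one of the optimized, unevenly weighted schemes those engineering books tabulate – not necessarily the exact arrangement either package uses):

```python
import numpy as np

# Integrate f(x, y) = x**2 * y**2 over the square [-1, 1] x [-1, 1].
# Exact value: (2/3) * (2/3) = 4/9.
f = lambda x, y: x ** 2 * y ** 2
exact = 4.0 / 9.0

# Scheme 1: 3x3 evenly spaced, equally weighted points (midpoint-style).
p = np.array([-2 / 3, 0.0, 2 / 3])
xs, ys = np.meshgrid(p, p)
uniform = 4.0 * f(xs, ys).mean()  # area * mean sampled value

# Scheme 2: 3x3 Gauss-Legendre points with optimized (uneven) weights.
nodes, weights = np.polynomial.legendre.leggauss(3)
gx, gy = np.meshgrid(nodes, nodes)
wx, wy = np.meshgrid(weights, weights)
gauss = np.sum(wx * wy * f(gx, gy))

print(abs(uniform - exact))  # noticeable error
print(abs(gauss - exact))    # essentially zero for this polynomial
```

With the same number of points, the optimized scheme integrates this field exactly (3-point Gauss-Legendre is exact for polynomials up to degree 5 in each variable), while the uniform scheme misses by a wide margin – the same kind of discrepancy that separated my coil model from MaxFilter’s.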


In the process I’ve also been forced to learn the dedication it takes to produce high-quality code. Before last week, I felt pretty high and mighty because I was religiously following PEP8 standards and making sure my code had something more than zero documentation. With some light nudging from my mentors, I feel like I’ve made the next solid leap forward: unit tests, markup, extensive references, and comments have all been a theme since my last blog post. It can be frustrating to get all of that right, but I’m sure the minor annoyance is a small price to pay to make this esoteric algorithm easier on the poor soul who inherits the SSS workload :)