July 12, 2015

Opening up a can of moths

C-day + 48

After remedying the coil situation (and numerous other bugs), my filtering method finally seems to maybe possibly work. When I compare my method against the proprietary one, the RMS of the difference between them is, on average, about 1000 times smaller than the RMS of the magnetometer and gradiometer signals themselves.
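To make that comparison concrete, here is a rough sketch of the kind of ratio I mean. The arrays below are made-up placeholders (random numbers, not real MEG recordings or my actual results); the point is just how the "error RMS vs. signal RMS" number is computed.

    # Rough sketch of the comparison above, with placeholder data rather than
    # real MEG recordings: RMS of the disagreement between two cleaned
    # datasets, relative to the RMS of the signal itself.
    import numpy as np


    def rms(x):
        """Root-mean-square over all elements."""
        return np.sqrt(np.mean(np.asarray(x) ** 2))


    rng = np.random.RandomState(0)
    proprietary_clean = rng.randn(306, 1000)                    # placeholder "MaxFilter" output
    my_clean = proprietary_clean + 1e-3 * rng.randn(306, 1000)  # small disagreement added

    # A ratio of ~1e-3 corresponds to "error RMS ~1000x smaller than signal RMS"
    print(rms(my_clean - proprietary_clean) / rms(proprietary_clean))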

It turns out that many of the problems imitating the proprietary MaxFilter method stemmed from how the geometry of the MEG sensors was defined in my model. Bear with me here, as you have to understand some background about the physical layout of the sensors to comprehend the problem. When measuring brain activity, each sensor site takes three measurements: two concerning the gradient of the magnetic field (the gradiometers) and one sensing the absolute magnetic field (the magnetometer). The MEG scanner itself is made up of ~100 of these triplets. The gradiometers and magnetometers are manufactured with different geometries, but they are all similar in that they contain one (or a set) of wire coils (i.e., loops). The signal recorded by these sensors comes from the magnetic field that threads these coil loops and induces a current in the wire itself, which can then be measured. When modeling this on a computer, however, that measurement has to be discretized, as we can't exactly calculate how a magnetic field will influence any given sensor coil. Therefore, we break up the area enclosed by the coil into a number of "integration points." Now, instead of integrating across the entire rectangular area enclosed by a coil, we calculate the magnetic field at 9 points within that plane and combine them. This allows a computer to estimate the signal any given coil would pick up.

For an analogy, imagine you had to measure the air flowing through a window. One practical approach might be to buy 5 or 10 flow-measuring devices, hang them so they're evenly distributed over the open area, and model how air flows through using those discrete point sensors. Only here, the airflow is a magnetic field and the flow sensors are extremely expensive and sensitive SQUIDs bathed in liquid helium – other than that, very similar.
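As a toy illustration of that discretization (this is not MNE-Python's actual coil-definition code; the field function, point layout, and dimensions are invented for the example), the signal a coil picks up can be approximated as a weighted sum of the field sampled at a handful of integration points:

    # Toy sketch of the discretization described above (not MNE-Python's real
    # coil definitions): approximate the flux through a coil as a weighted sum
    # of the magnetic field sampled at discrete integration points.
    import numpy as np


    def coil_signal(field_fn, points, weights, normal):
        """Weighted sum of the field component normal to the coil plane."""
        fields = np.array([field_fn(p) for p in points])  # B-field at each point
        return np.sum(weights * (fields @ normal))        # approximate flux


    # Illustrative numbers only: a 3x3 grid of equally weighted points on a
    # 2 cm x 2 cm coil in the x-y plane, threaded by a uniform ~100 fT field.
    side = 0.02
    grid = np.linspace(-side / 2, side / 2, 3)
    points = np.array([(x, y, 0.0) for x in grid for y in grid])
    weights = np.full(len(points), side ** 2 / len(points))  # equal weighting
    field = lambda p: np.array([0.0, 0.0, 1e-13])             # uniform field along z
    normal = np.array([0.0, 0.0, 1.0])

    print(coil_signal(field, points, weights, normal))  # flux in weber (T * m^2)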

The hang-up I’ve been dealing with is largely because there are different ways to define those discrete points for the numerical integration. You can have more or fewer points (trading off accuracy against computational cost), and there are certain optimizations for how to place them. As for placement, all points could be evenly spaced with equal weighting, but there are big fat engineering books that recommend more optimal (and uneven) placements and weightings depending on the shape in use. It turns out the proprietary SSS software uses one of these optimized arrangements, while MNE-Python uses an evenly distributed, equally weighted arrangement (see the sketch below). Fixing the coil definitions has made my custom implementation much closer to the black box I’m trying to replicate.
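Here is a small demonstration of why placement and weighting matter. It uses a generic smooth function over a square, not real coil geometries and not the actual MaxFilter or MNE-Python point tables, and compares 3x3 evenly spaced, equally weighted points against 3x3 Gauss-Legendre points (one of those "engineering book" optimized arrangements):

    # Why point placement/weighting matters: integrate a smooth function over
    # a square using evenly spaced, equally weighted points vs. Gauss-Legendre
    # points. The integrand is a stand-in, not a real field or coil geometry.
    import numpy as np

    f = lambda x, y: np.exp(x) * np.cos(y)           # stand-in for a smooth field
    exact = (np.e - 1.0 / np.e) * 2.0 * np.sin(1.0)  # analytic integral on [-1, 1]^2


    def integrate_2d(pts, wts):
        """Tensor-product quadrature of f over the square [-1, 1] x [-1, 1]."""
        X, Y = np.meshgrid(pts, pts)
        return np.sum(np.outer(wts, wts) * f(X, Y))


    even_pts = np.linspace(-1, 1, 7)[1::2]                     # three evenly spaced cell centers
    even_wts = np.full(3, 2.0 / 3.0)                           # equal weights
    gauss_pts, gauss_wts = np.polynomial.legendre.leggauss(3)  # optimized placement/weights

    print('evenly spaced error :', abs(integrate_2d(even_pts, even_wts) - exact))
    print('Gauss-Legendre error:', abs(integrate_2d(gauss_pts, gauss_wts) - exact))

With the same number of samples, the optimized placement is far more accurate for a smooth integrand. The takeaway is simply that two reasonable-looking point sets can give noticeably different answers for the same coil, which is why the coil definitions had to match before my results could.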


In the process I’ve also been forced to learn the dedication it takes to produce high-quality code. Before last week, I felt pretty high and mighty because I was religiously following PEP8 standards and making sure my code had something more than zero documentation. With some light nudging from my mentors, I feel like I’ve made the next solid leap forward: unit tests, markup, extensive references, and comments have all been a theme since my last blog post. It can be frustrating to get all of that right, but I’m sure the minor annoyance is a small price to pay to make this esoteric algorithm easier for the poor soul who inherits the SSS workload :)
