August 27, 2015

Wrapping Up / Winding Down

C-day + ~10 weeks

By the end of the project, I finished or nearly finished all of the major improvements to the signal space separation (SSS) algorithm. The initial SSS implementation took about six weeks to understand and then implement to a degree that matched the proprietary version of the software. Beyond the fundamental algorithm, there are three major “expansion packs” that make the algorithm more attractive for real-world noise rejection. The first is the ability to reconstruct bad channels: since we are oversampling the spherical harmonic bases, we can actually reconstruct the signals at bad sensors where we were forced to throw out that sensor’s data. The other two extensions – fine calibration and temporospatial SSS (tSSS) – are both underway but not yet complete. As described before, channel reconstruction, fine calibration, and tSSS are all methods to remove more noise from the MEG signals. Our fine calibration procedure even includes improvements that go beyond the published, proprietary SSS algorithm. Of course, I’ll keep working on these extensions until they are finished so that our library has a powerful and complete implementation of the SSS functionality.


Now that the GSoC project is over, I’m glad I followed a post-doc’s advice to use the program as a chance to explore concepts outside my thesis area that manifest in code that’s helpful to our source imaging community. I gained a number of technical skills that I didn’t have before and probably wouldn’t have gained without this opportunity. I’m much more familiar with the concepts behind Git and GitHub, and I’m a better collaborator for having worked with a number of MNE-Python’s other coders. To understand and implement the SSS algorithm, I also had to (heavily) brush up on my physics – in particular, Maxwell’s equations and spherical harmonics. Having a practical application of these concepts served as a great carrot to stay motivated, as many of the necessary mathematical concepts were in impressively dense papers from before the 1980s. At the very least, I’ll have one fancy physics concept to draw on a cocktail napkin many years from now.

School’s out for the (remaining) summer.

August 7, 2015

Fine calibration: one of many SSS improvements

With SSS itself implemented, I’m now working through a backlog of improvements made since the algorithm’s initial publication in the mid-2000s. There are four or five of these modifications, some of which boost noise rejection by an order of magnitude or more. The first set of three improvements goes under the umbrella term “fine calibration.”

The SSS algorithm depends heavily on the location and geometry of the MEG sensors, so it’s not surprising that any small error in the assumed location or behavior of these sensors introduces significant error in the filter’s output. Fine calibration consists of three modifications to correct for these sensor inconsistencies; for all of them, we record empty-room data and construct a “fine calibration” file. The first fix updates the orientation of each sensor coil. Because the sensor coils pick up the magnetic flux through their coil loops, a more accurate estimate of the true orientation yields a more accurate representation in the multipolar moment space. The second fix concerns the gradiometers only. Again, there are small imperfections in the MEG coils, and gradiometers measure small signal differences between pairs of loops; if one gradiometer loop has any physical differences from its twin, a substantial error is introduced into the recorded signal. Therefore, we simulate small point-like magnetometers at the center of each gradiometer to account for this gradiometer “imbalance.” The third and final fix addresses imperfections in the magnetometers. Again, we’re dealing with physical devices, so we measure whether any of these sensors have readings that are slightly too high or low in amplitude and correct for this with a calibration coefficient from that same fine calibration file. This final improvement has a relatively small effect compared to the first two.
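As a toy illustration of the third fix, here is a minimal NumPy sketch of applying per-sensor calibration coefficients to magnetometer data. The array shapes, variable names, and coefficient values are made up for illustration; this is not MNE-Python’s actual fine calibration code.

```python
import numpy as np

# Hypothetical sketch: scale each magnetometer's reading by a per-sensor
# calibration coefficient, as would be read from a fine-calibration file.
rng = np.random.default_rng(0)
n_mags, n_times = 102, 1000
data = rng.standard_normal((n_mags, n_times))  # stand-in magnetometer data

# Coefficients near 1.0: a value of 1.02 means the sensor reads 2% high,
# so we divide the recording by it to correct the amplitude.
cal_coeffs = 1.0 + 0.02 * rng.standard_normal(n_mags)

data_calibrated = data / cal_coeffs[:, np.newaxis]

# Sanity check: multiplying the coefficients back in recovers the raw data.
assert np.allclose(data_calibrated * cal_coeffs[:, np.newaxis], data)
```

The first two fixes (orientation updates and point-like virtual magnetometers for gradiometer imbalance) modify the modeled coil geometry itself rather than rescaling the data.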


I’ve finished the code for the fine calibration implementation, but the filtered results aren’t yet a close enough match with the proprietary code. On the bright side, the undiscovered bug isn’t causing the filter to fail completely. Once I find the issue, I’ll be on a sprint to implement temporal SSS before the end of summer!

July 26, 2015

Paris Debriefing

C-day + 62

I just returned a few days ago from the MNE-Python coding sprint in Paris. It was an invigorating experience to work alongside over a dozen of the core contributors to our Python package for an entire week. Putting a face and personality to all of the GitHub accounts I have come to know would have made the trip worthwhile on its own, but it was also a great experience to participate in the sprint by making some strides toward improving the code library. Although I was able to have some planning conversations with my GSoC mentors in Paris (discussed later), my main focus for the week was on goals tangential to my SSS project.

Along with a bright student in my GSoC mentor’s lab, I helped write code to simulate raw data files. These typically contain the measurement data directly as they come off the M/EEG sensors, and our code will allow the generation of a raw file for an arbitrary cortical activation. It has the option to include artifacts from the heart (ECG), eye blinks, and head movement. Generating this type of data, where the ground truth is known, is especially important for creating controlled data to evaluate the accuracy of source localization and artifact rejection methods – a focus for many researchers in the M/EEG field. Luckily, the meat of this code was previously written by a post-doc in my lab for an in-house project – we worked on extending and molding it into a form suitable for the MNE-Python library.

The trip to Paris was also great because I was able to meet my main GSoC mentor and discuss the path forward for the SSS project. We both agreed that my time would be best spent fleshing out all the add-on features associated with SSS (tSSS, fine-calibration, etc.), which are all iterative improvements on the original SSS technique. The grand vision is to eventually create an open-source implementation of SSS that can completely match Elekta’s proprietary version. It will provide more transparency, and, because our project is open source, we have the agility to implement future improvements immediately since we are not selling a product subject to regulation. Fulfilling this aim would also add one more brick to the wall of features in our code library.

July 12, 2015

Opening up a can of moths

C-day + 48

After remedying the coil situation (and numerous other bugs), my filtering method finally seems to maybe, possibly work. When comparing my method to the proprietary one, the RMS of the error is on average 1000 times smaller than the RMS of the magnetometer and gradiometer signals themselves.

It turns out that many of the problems imitating the proprietary MaxFilter method stemmed from how the geometry of the MEG sensors was defined in my model. Bear with me here, as you have to understand some background about the physical layout of the sensors to comprehend the problem. When measuring brain activity, each sensor site takes three measurements: two concerning the gradient of the magnetic field (the gradiometers) and one sensing the absolute magnetic field (a magnetometer). The MEG scanner itself is made up of ~100 of these triplets. The gradiometers and magnetometers are manufactured with different geometries, but they are all similar in that they contain one (or a set) of wire coils (i.e., loops). The signal recorded by these sensors results from the magnetic field that threads these coil loops and induces a current within the wire itself, which can then be measured. When modeling this on a computer, however, that measurement has to be discretized, as we can’t analytically calculate how a magnetic field will influence any given sensor coil. Therefore, we break up the area enclosed by the coil into a number of “integration points.” Now, instead of integrating across the entire rectangular area enclosed by a coil, we calculate the magnetic field at 9 points within the plane, which allows a computer to estimate the signal any given coil would pick up. For an analogy, imagine you had to measure the air flowing through a window. One practical way might be to buy 5 or 10 flow-measurement devices, hang them so they’re evenly distributed over the open area, and model how air was flowing through using those discrete point sensors. Only here, the airflow is a magnetic field and the flow sensors are extremely expensive and sensitive SQUIDs bathed in liquid helium – other than that, very similar.

The hang-up I’ve been dealing with is largely because there are different ways to define those discrete points for the numerical integration. You can have more or fewer points (trading off accuracy vs. computational cost), and there are certain optimizations for how to place them. All points could be evenly spaced with equal weighting, but there are big fat engineering books that recommend more optimal (and uneven) placements and weightings depending on the shape in use. It turns out the proprietary SSS software used one of these optimized arrangements, while MNE-Python uses an evenly distributed and weighted arrangement. Fixing the coil definitions has brought my custom implementation much closer to the black box I’m trying to replicate.
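To make the integration-point trade-off concrete, here is a small NumPy sketch comparing an evenly spaced, equally weighted scheme against Gauss-Legendre points and weights for a 1-D stand-in “coil.” The field function and point counts are illustrative; real coil definitions integrate over a 2-D surface.

```python
import numpy as np

# Estimate the flux of a smooth field B(x) through a 1-D "coil" on [-1, 1]
# using two 3-point schemes: uniform equal-weight points vs. Gauss-Legendre.

def field(x):
    return np.exp(x)  # stand-in for a smooth magnetic field component

exact = np.exp(1) - np.exp(-1)  # analytic integral of exp(x) over [-1, 1]

# Uniform scheme: 3 evenly spaced points, each weighted by interval/3.
xs_uni = np.array([-2 / 3, 0.0, 2 / 3])
flux_uni = np.sum(field(xs_uni)) * (2 / 3)

# Optimized scheme: 3-point Gauss-Legendre nodes and weights.
xs_gl, w_gl = np.polynomial.legendre.leggauss(3)
flux_gl = np.sum(w_gl * field(xs_gl))

# The optimized (uneven) placement is far more accurate for the same
# number of field evaluations.
assert abs(flux_gl - exact) < abs(flux_uni - exact)
```

The same idea applies in 2-D: for a fixed number of integration points per coil, the optimized node placements and weights from the quadrature literature squeeze out much more accuracy than an even grid.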


In the process, I’ve also been forced to learn the dedication it takes to produce high-quality code. Before last week, I felt pretty high and mighty because I was religiously following PEP8 standards and making sure my code had more than zero documentation. With some light nudging from my mentors, I feel like I’ve made the next solid leap forward: unit tests, markup, and extensive references and comments have all been a theme since my last blog post. It can be frustrating to get all of that right, but I’m sure the minor annoyance is a small price to pay to make this esoteric algorithm easier for the poor soul who inherits the SSS workload :)

June 28, 2015

Bug Hunt



C-Day plus 34

For over a week now, the name of the game has been bug hunting. I had finished a first draft by the last blog post, so I’ve been trying to get the output of my custom SSS filter to match the proprietary version on sample data. One issue that took a couple of days to track down was a simple but erroneous swap of two angles in a spherical-to-Cartesian gradient transformation matrix. I can’t say this is a new class of obstacle – spherical coordinates have thrown wrench after wrench into my code, since different mathematicians regularly define these coordinates in different ways. (Is it just me, or is having seven separately accepted conventions for the spherical coordinate system a bit absurd?) My project crosses a couple of domains of mathematics, so wrestling with these different conventions has helped me deeply appreciate the mathematical concepts that do have a single accepted formulation.
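As a sketch of how these convention mix-ups bite, here is a minimal spherical-to-Cartesian conversion in the physics convention (theta = polar angle from +z, phi = azimuth from +x); the function name and test points are my own illustration, not code from the project.

```python
import numpy as np

# Physics convention: theta is the polar angle measured from +z,
# phi is the azimuthal angle measured from +x in the x-y plane.
def sph_to_cart(r, theta, phi):
    return np.array([r * np.sin(theta) * np.cos(phi),
                     r * np.sin(theta) * np.sin(phi),
                     r * np.cos(theta)])

# A point on the +z axis: polar angle 0, azimuth irrelevant.
assert np.allclose(sph_to_cart(1.0, 0.0, 1.234), [0.0, 0.0, 1.0])

# Swapping the two angles -- the bug described above -- silently yields a
# perfectly valid-looking but wrong point, which is hard to spot downstream.
assert not np.allclose(sph_to_cart(1.0, 1.234, 0.0), [0.0, 0.0, 1.0])
```

A unit test like the first assertion, pinned to a known point, is a cheap way to lock in whichever convention a codebase has chosen.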

Regardless, weeding out the spherical coordinate issue and a menagerie of other bugs has left me with a filter that produces data similar to (but not exactly matching) the proprietary code’s output (see some example output below). Luckily, I have several checkpoints in the filter’s processing chain, and I know the problem lies between the last checkpoint and the final output. My mentors have been fantastic so far, and we have a potential bead on the last issue: the weak magnetic signals produced by the brain are measured with two major classes of MEG pickup coils, magnetometers and gradiometers. In a very simple sense, one measures the magnetic field while the other measures its spatial derivative, and (because of this difference) they provide readings on very different scales that I have yet to normalize. Given some luck, this last patch could fix the issue and yield a working solution to the first half of my GSoC project! (Knock on wood.)

Exemplar data showing raw unfiltered MEG signal and the same data after the benchmark SSS filter and my own custom filtering (top). Difference between the benchmark and my custom implementation (bottom). The filter in progress is close to, but not quite the same as, the benchmark, implying some bugs remain to be fixed.
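The magnetometer/gradiometer scale mismatch can be sketched with a toy NumPy example: basis-matrix rows living on very different scales hurt the conditioning of the pseudoinverse, and rescaling by channel type helps. The magnitudes here are purely illustrative, not the actual normalization MNE-Python uses.

```python
import numpy as np

# Stand-in basis matrix: magnetometer rows (fields, ~T) and gradiometer
# rows (field gradients, ~T/m) differ by orders of magnitude.
rng = np.random.default_rng(1)
n_mag, n_grad, n_moments = 102, 204, 95

S = np.vstack([1e-12 * rng.standard_normal((n_mag, n_moments)),    # magnetometers
               1e-10 * rng.standard_normal((n_grad, n_moments))])  # gradiometers

# Rescale each channel type to a comparable magnitude before inverting.
scales = np.concatenate([np.full(n_mag, 1e-12), np.full(n_grad, 1e-10)])
S_scaled = S / scales[:, np.newaxis]

# The rescaled basis is better conditioned, so its pseudoinverse is
# less sensitive to noise on the poorly scaled channels.
assert np.linalg.cond(S_scaled) < np.linalg.cond(S)
```

Without such a normalization, the pseudoinverse effectively ignores whichever channel type carries the numerically smaller values.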

June 14, 2015

Inner workings of the Maxwell filter

C-Day plus 20

This week, I finished a first draft of the Maxwell filtering project. Remember my goal here is to implement an open-source version of this filter that uses physics to separate brain signal from environmental garbage picked up by the MEG sensors. Now comes the fun part of this project: trying to add all the small tweaks required to precisely match the proprietary Maxwell filter, which I cannot access. I’m sure this will devolve into a tedious comparison between what I’ve implemented and the black box version, so here’s to hoping the proprietary code follows the original white papers.


Most of the Maxwell filter work up until this point was focused on enabling the calculation of the multipolar moment space (comprised of the spherical harmonics), which is the foundation of this Maxwell filter. These multipolar moments are the basis set I’ve mentioned earlier that allows the brain signals to be divided into two parts: those coming from within a sphere and those originating from outside a slightly larger sphere (to see this graphically, cf. Fig. 6 in Taulu et al., 2005). In essence, representing the brain signals as a sum of multipolar moments permits the separation of brain signal from external noise sources like power lines, large moving objects, the Earth’s magnetic field, etc. My most recent code actually projects the brain signals onto this multipolar moment space (i.e., represents the MEG data as a sum of these moments) and then reconstructs the signal of interest. These calculations are all standard linear algebra. From Taulu and Simola, 2006 (p. 1762):


Takeaway: The equations below show how Maxwell filtering is accomplished once the appropriate space has been calculated. We take brain signals recorded using MEG, represent them in a different space (the truncated multipolar moment space), and then reconstruct the MEG signal to apply the Maxwell filter and greatly reduce the presence of environmental noise.

ϕ represents the MEG recordings
S represents the multipolar moment space (each basis vector is a spherical harmonic)
x represents the ideal weight of each multipolar moment (how much of each basis vector is present)
_hat denotes an estimate
_in and _out refer to the internal space (brain signals) and the external space (noise), respectively
S_pinv is the pseudoinverse of S


In the ideal case, the signal we recorded can also be represented as a weighted combination of our multipolar moments: 
ϕ = S * x
The S matrix contains multipolar moments but only up to a certain complexity (or degree), so it has been truncated. See my first post (end of 3rd paragraph) about why we cut out the very complex signals. 

Since we can break up the multipolar moments and their weights into an internal and an external space (comprising brain signal and noise, respectively), this is equivalent to the previous equation:
ϕ = [S_in, S_out] * [x_in, x_out]^T

However, we're not in an ideal world, so we need to estimate these multipolar moment weights. x is the unknown, so we isolate it by taking the pseudoinverse of S and solving for an estimate of the multipolar moment weights:

S_pinv * ϕ = S_pinv * S * x
S_pinv * ϕ = x_hat 
x_hat = S_pinv * ϕ
or equivalently,
[x_in_hat, x_out_hat]^T = S_pinv * ϕ

With the multipolar weight estimates in hand, we can finally reconstruct our original MEG recordings, which effectively applies the Maxwell filter. Again, since S_in and S_out have been truncated, they only recreate signals up to a certain spatial complexity, cutting out the noise.

ϕ_in_hat = S_in * x_in_hat
ϕ_out_hat = S_out * x_out_hat

The above ϕ matrices are a cleaner version of the brain signal we started with and the world is now a much better place to live in.
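The whole derivation can be sketched numerically with NumPy. The basis here is a random stand-in, not real spherical harmonics; the moment counts follow the typical SSS defaults (internal order 8 → 80 moments, external order 3 → 15 moments), and everything else is illustrative.

```python
import numpy as np

# Project simulated MEG data onto a stand-in basis S = [S_in, S_out],
# estimate moment weights with the pseudoinverse, and reconstruct the
# internal (brain) part -- the linear algebra behind Maxwell filtering.
rng = np.random.default_rng(42)
n_chan, n_in, n_out = 306, 80, 15

S_in = rng.standard_normal((n_chan, n_in))
S_out = rng.standard_normal((n_chan, n_out))
S = np.hstack([S_in, S_out])

# Simulate data that truly lies in the internal + external space.
x_true = rng.standard_normal(n_in + n_out)
phi = S @ x_true                       # phi = S * x

# x_hat = S_pinv * phi
x_hat = np.linalg.pinv(S) @ phi
x_in_hat, x_out_hat = x_hat[:n_in], x_hat[n_in:]

# phi_in_hat = S_in * x_in_hat : the "Maxwell filtered" signal.
phi_in_hat = S_in @ x_in_hat

# Because phi lies exactly in the span of S, the weights are recovered
# and the internal + external parts sum back to the original data.
assert np.allclose(x_hat, x_true)
assert np.allclose(phi_in_hat + S_out @ x_out_hat, phi)
```

With real data, phi also contains components outside the span of S (sensor noise and very high-order signals), which is exactly what the truncated reconstruction discards.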