Sunday, February 20, 2011

Lecture 5 - Spike Decoding

Lecture 5 was a lively one! This week we started talking about neural decoding algorithms: methods for estimating what large numbers of neurons are 'thinking' about, often in real time, by mapping their firing rates back onto the stimulus or movement that drove them. We started with an old standard, Georgopoulos 1986, which introduced the concept of population vectors: each neuron 'votes' for its preferred movement direction with a weight proportional to its firing rate, and the vector sum of those votes estimates the actual movement direction. The article was relatively easy for everyone to understand, although we did note that most of the results validating the method were not published in that article. Still, the paper represents a landmark bit of thinking, and we gave it its due.
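For anyone who wants to play with the idea, here's a minimal Python sketch of a population-vector decoder. The cosine tuning curve and all the numbers (baseline rate, modulation depth, population size) are my own toy assumptions for illustration, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy population: each neuron gets a random preferred direction (a unit
# vector in 2D) and fires according to a cosine tuning curve.
n_neurons = 100
angles = rng.uniform(0.0, 2.0 * np.pi, n_neurons)
preferred = np.column_stack([np.cos(angles), np.sin(angles)])

def firing_rates(direction, baseline=10.0, modulation=8.0):
    # Rate = baseline + modulation * cos(angle between movement and PD).
    return baseline + modulation * preferred @ direction

def population_vector(rates, baseline=10.0):
    # Each neuron 'votes' for its preferred direction, weighted by its
    # rate above baseline; the vector sum estimates the true direction.
    return (rates - baseline) @ preferred

true_dir = np.array([np.cos(0.7), np.sin(0.7)])   # movement at ~40 degrees
estimate = population_vector(firing_rates(true_dir))
estimate /= np.linalg.norm(estimate)
print("true:", np.degrees(0.7),
      "decoded:", np.degrees(np.arctan2(estimate[1], estimate[0])))
```

With enough neurons and roughly uniform preferred directions, the decoded angle lands very close to the true one, which is the core of the population-vector result.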

The next article we discussed was at the request of one of our classmates, who was in the process of reviewing a Schwartz paper for another journal club she attends. The paper, Jarosiewicz 2008, examined the strategies monkeys use to compensate when the preferred directions used in the population vector decoder are rotated out of sync with the monkeys' true preferred directions. It turns out the monkeys use a combination of three strategies: (1) changing their 'aim' to offset the mis-tuned neurons, (2) reducing the contribution of the affected neurons to BMI control, and (3) retuning those neurons to match the rotated preferred directions. It was an interesting and well-done paper, but there was some lively debate about whether the experiment revealed anything that wasn't already fairly obvious. Someone raised the point that what would really be interesting is to understand the mechanisms that underlie the observed changes (as opposed to just knowing what those changes were).
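To make the perturbation concrete, here's a toy continuation of the sketch above, in which the decoder's preferred directions for a random subset of neurons are rotated while the neurons themselves keep firing according to their true tuning. The rotation angle, subset size, and tuning numbers are invented for illustration and aren't taken from Jarosiewicz 2008:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy perturbation: neurons fire according to their TRUE preferred
# directions (PDs), but the decoder's copy of the PDs for a random 25%
# of neurons has been rotated by 90 degrees. All numbers invented.
n_neurons, rot = 100, np.radians(90)
angles = rng.uniform(0.0, 2.0 * np.pi, n_neurons)
true_pds = np.column_stack([np.cos(angles), np.sin(angles)])

decoder_pds = true_pds.copy()
subset = rng.choice(n_neurons, n_neurons // 4, replace=False)
R = np.array([[np.cos(rot), -np.sin(rot)],
              [np.sin(rot),  np.cos(rot)]])
decoder_pds[subset] = decoder_pds[subset] @ R.T   # rotate each PD by `rot`

intended = np.array([1.0, 0.0])             # the monkey aims straight right
rates = 10.0 + 8.0 * true_pds @ intended    # firing reflects the TRUE tuning
decoded = (rates - 10.0) @ decoder_pds      # decoder uses the rotated PDs
decoded /= np.linalg.norm(decoded)
print("decoded angle (deg):",
      np.degrees(np.arctan2(decoded[1], decoded[0])))
```

The decoded direction comes out biased partway toward the rotation, and that bias is exactly the error the monkey has to cancel by re-aiming, re-weighting, or re-tuning.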

The final paper we discussed was Quiroga 2009, which gives a solid introduction to the fundamental statistical issues underlying neural decoding. Everyone praised this paper for its readability and scope. The paper explained the difference between neural decoding and information theory. The take-home message was that even a successful neural decoder in a BMI can discard a great deal of the information the neurons actually encode. Information theory (specifically, mutual information) lets us calculate exactly how much information the decoder misses, and such an objective measure is a vital tool for comparing the efficacy of different decoders. The Wikipedia page on mutual information gives a nice introduction to the math underlying the technique. We also discussed a paper by my old lab mate Debbie Won, in which mutual information was used to quantify how much information is lost when spikes are mis-detected and mis-sorted. It's a great (albeit quite dense) paper. We finished off the class by creating an Excel spreadsheet that calculated some basic information-theoretic quantities for a simple example; we changed the prior probabilities and tried to predict how the information content would change.
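Since not everyone has the spreadsheet, here's a small Python stand-in for that exercise: computing the mutual information between a binary stimulus and a binary response straight from a joint probability table. The tables themselves are made up for illustration:

```python
import numpy as np

def mutual_information(joint):
    # I(S;R) = sum over (s, r) of p(s,r) * log2( p(s,r) / (p(s) * p(r)) ).
    joint = np.asarray(joint, dtype=float)
    joint = joint / joint.sum()                # normalize to a distribution
    ps = joint.sum(axis=1, keepdims=True)      # marginal over rows: p(s)
    pr = joint.sum(axis=0, keepdims=True)      # marginal over cols: p(r)
    nz = joint > 0                             # skip zero cells (0*log0 = 0)
    return float(np.sum(joint[nz] * np.log2(joint[nz] / (ps @ pr)[nz])))

# Rows = stimulus, columns = response. A 'neuron' that reports a uniform
# binary stimulus with 90% reliability carries about 0.53 bits:
print(mutual_information([[0.45, 0.05],
                          [0.05, 0.45]]))

# Same 90% reliability, but with a skewed (90/10) stimulus prior: the
# mutual information drops to about 0.21 bits.
print(mutual_information([[0.81, 0.09],
                          [0.01, 0.09]]))
```

Changing the prior while holding the response reliability fixed changes the answer quite a bit, which was exactly the intuition the in-class exercise was meant to build.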

Next week we will tackle two more spike decoding papers that go into a lot more mathematical depth than anything we've dealt with so far. Stay tuned for the report!
