This week we started to get into specific signal processing issues. Our reading was the excellent technical BMI summary by Linderman et al. (published in IEEE Signal Processing Magazine) from Dr. Shenoy's lab at Stanford University. The article covers, in some technical depth, the signal processing stages of a BMI and then describes their efforts to implement those stages in wireless and/or implantable electronics.
Our main topics of discussion were (a) electrode longevity, (b) spike sorting, (c) neuron tuning, and (d) statistical models of neural behavior. The electrode longevity question was especially interesting since it's such a substantial obstacle and there are very few concrete ideas about how to extend the working life of electrodes in the brain. The primary issue is that the brain eventually treats electrodes as foreign objects and mounts an immune response that leaves them coated with microglia and other scar-like tissue. We also discussed electrode movement in the brain, its ramifications for the stability of recorded neurons (they're not good), and how it might be overcome. We had a good discussion about whether larger electrodes might be the solution, since they would have a wider "listening" radius and would therefore be more resilient to micro-movements. The flip side is that larger electrodes would record from more neurons, which would increase the incidence of overlapping spikes. Decoding overlapping spikes is certainly possible, but perhaps not in a computationally efficient manner.
Finally, we spent a solid hour discussing spike sorting. We started by introducing the concept of neural tuning functions, since this motivates the need for spike sorting in the first place. We took a quick look at Georgopoulos' landmark 1982 paper, in which he discovered that motor cortical neurons are tuned to arm movement direction in a roughly cosinusoidal manner. After that, we went through some demo Matlab code I put together to illustrate the concepts and the math behind spike sorting. We started with some very simple methods (thresholding and windowing) and moved on to more sophisticated ones: feature extraction (we tried spike amplitude and width) followed by principal component analysis (which is basically just another feature extraction technique). We then discussed clustering, in which spikes with similar features are automatically grouped together. In particular, we examined k-means clustering (a built-in Matlab function!), which works pretty well provided you tell it ahead of time how many clusters you're looking for. We discussed some of the pros and cons, including the need for at least partial supervision in choosing thresholds and cluster shapes and numbers. The figure below shows our sample data set after sorting with PCA and k-means clustering.
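For anyone who wants to poke at the tuning idea themselves, here's a minimal Matlab sketch (this is not the demo code from the session, and the firing rate parameters are made up) that simulates a cosine-tuned neuron and recovers its preferred direction with ordinary least squares:

% Cosine tuning demo: simulate a direction-tuned neuron and fit
% f(theta) = b0 + b1*cos(theta - thetaPref) by linear least squares.
% All parameter values below are invented for illustration.
thetaPref = pi/3;            % "true" preferred direction (rad)
b0 = 20; b1 = 15;            % baseline and modulation depth (spikes/s)

theta = linspace(0, 2*pi, 16)';          % movement directions tested
rates = b0 + b1*cos(theta - thetaPref);  % ideal cosine tuning
rates = rates + 2*randn(size(rates));    % add measurement noise

% Rewrite as f = b0 + a*cos(theta) + b*sin(theta), which is linear in
% [b0 a b], so the backslash operator solves it directly.
X = [ones(size(theta)) cos(theta) sin(theta)];
coef = X \ rates;

b1hat        = hypot(coef(2), coef(3));   % fitted modulation depth
thetaPrefHat = atan2(coef(3), coef(2));   % fitted preferred direction

fprintf('True: b1 = %.1f, pref = %.2f rad; fit: b1 = %.1f, pref = %.2f rad\n', ...
        b1, thetaPref, b1hat, thetaPrefHat);
plot(theta, rates, 'ko', theta, X*coef, 'b-');
xlabel('Movement direction (rad)'); ylabel('Firing rate (spikes/s)');

Running it a few times gives a feel for how reliably the preferred direction comes back out of noisy rates.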
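And here's a rough sketch of the sorting pipeline itself, again just illustrative rather than the actual demo code. It assumes you already have a band-pass filtered extracellular trace in a column vector v sampled at fs Hz, and it uses the Statistics Toolbox functions pca, kmeans, and gscatter:

% Spike sorting sketch: threshold detection -> waveform extraction ->
% feature extraction -> PCA -> k-means clustering.
% The threshold rule and cluster count below are illustrative choices.
thresh = -4 * median(abs(v)) / 0.6745;   % robust noise-based threshold
nClusters = 3;                           % k-means must be told this up front
pre  = round(0.5e-3 * fs);               % samples kept before the crossing
post = round(1.0e-3 * fs);               % samples kept after

% Detect negative-going threshold crossings, away from the record edges
idx = find(v(2:end) < thresh & v(1:end-1) >= thresh) + 1;
idx = idx(idx > pre & idx <= length(v) - post);

% Cut out one waveform per detected spike (one row each)
spikes = zeros(length(idx), pre + post + 1);
for k = 1:length(idx)
    seg = v(idx(k) - pre : idx(k) + post);
    spikes(k, :) = seg(:)';
end

% Simple hand-picked features: trough amplitude and trough-to-peak width
% (you could cluster on [amp width] directly)
[amp, troughIdx] = min(spikes, [], 2);
width = zeros(size(amp));
for k = 1:length(amp)
    [~, peakOffset] = max(spikes(k, troughIdx(k):end));
    width(k) = peakOffset / fs * 1000;   % trough-to-peak time in ms
end

% PCA as an automatic alternative: keep the first two principal components
[~, score] = pca(spikes);
features = score(:, 1:2);

% Cluster in PC space; k-means needs the number of clusters in advance
labels = kmeans(features, nClusters);

gscatter(features(:, 1), features(:, 2), labels);
xlabel('PC 1'); ylabel('PC 2'); title('Spikes sorted by PCA + k-means');

The thing to notice is that nClusters is baked in at the top, which is exactly the partial-supervision issue we talked about: k-means has no idea how many units are actually on the electrode unless you tell it.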
Next week we'll be looking at neural decoding. I'm off now to find a good reading for next week and to develop a good chunk of Matlab demo code!