Monday, February 28, 2011

Chronic Traumatic Encephalopathy

It looks like the tipping point for Chronic Traumatic Encephalopathy (CTE) has arrived, thanks to a number of recent events that have garnered public attention. CTE is a degenerative brain disease caused by repeated blows to the head, such as those sustained by contact-sport athletes like football players and boxers. The symptoms vary, but appear to include memory loss, headaches, depression, and aggression. Former professional football players in their 40s are experiencing symptoms that might otherwise be expected from Alzheimer's patients.

My first introduction to this area was this excellent Malcolm Gladwell article in the New Yorker, which described the lives of some former football players, as well as the work of Dr. McKee at Boston University, who studies chronic brain trauma. Based on some conversations I had with colleagues after reading this article, I recently submitted a grant proposal to NIH to study how EEG can be used to track chronic brain trauma (our model will use rats, not people). It seems that very little is known at the neuronal level about how minute head impacts accumulate over time to produce serious damage. We are hoping to develop a rat model that will (a) allow us to understand the underlying biological processes and (b) determine whether the progression of injury can be correlated with EEG markers.

The real story, however, has been the recent suicide of former Chicago Bear Dave Duerson. In case you've somehow missed this story, Mr. Duerson recently shot himself in the chest after starting to experience symptoms consistent with chronic traumatic encephalopathy. His shot to the chest ensured that his brain would remain intact for post-mortem scientific study by Dr. McKee's lab, as was his final wish. This story seems to have been the tipping point - the past few weeks have been rife with stories in the mass media about the long-term cognitive effects of contact sports such as football, including former athletes with obvious brain issues as well as the NFL's announcement of a formal sideline testing policy following potentially concussive events.

The Neural Instrumentation Lab is following these developments with great interest.

Friday, February 25, 2011

iPad Programming

A while ago I brought a new face to the lab to start developing some iPad programming infrastructure. Our eventual goal is to integrate the iPad into some neural engineering applications. For now we are learning some of the low-level odds-and-ends associated with creating code in the iOS environment. Today, Vince finally had a nice breakthrough and managed to get his first 'hello world' program installed on the iPad. We were quite pleased!

Here's a pic of Vince with his creation:

And here's a video of Vince's program running on his iPad:



We're finally making progress!

Temple Engineering Poster Day

Congratulations to Neural Instrumentation Lab students Allie Tierney, Yuri Apel, and Alessandro Napoli for competing in this year's Temple College of Engineering poster contest. Our lab has an excellent track record in this contest, with lab members Karthikeyan Balasubramanian and John Mountney taking first prize in the graduate division last year and the year before.

Here are pictures of Allie and Alessandro with their posters.

Tuesday, February 22, 2011

Fuzzy Logic-Based Spike Sorting

We've just received word that a manuscript authored by my graduate student Karthikeyan Balasubramanian will be published in the Journal of Neuroscience Methods. The paper, titled "Fuzzy Logic-based Spike Sorting System," looks at how fuzzy logic can be used as an autonomous feature extraction algorithm for spike sorting. It's a pretty neat concept: spike features are measured and fuzzified, and then fuzzy logic is used to calculate a "fuzziness" index for each spike that identifies how similar that spike is to an ideal spike waveform. The fuzziness indices can be clustered directly for a complete spike sorting solution. Our system has several advantages. The first is that the fuzzy rules never need to be modified, meaning the system doesn't require channel-by-channel calibration every day. The second is that the sorter does not require that spikes be spatially aligned, as principal component analysis does; spike alignment is computationally expensive. Finally, our system carries a negligible computational cost and can be built in an FPGA with hundreds of channels in parallel for a nice, clean, low-power solution.
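For the curious, here's a rough sketch of the fuzzification idea in Python. To be clear, this is not the paper's actual rule base - the Gaussian membership functions, the feature choices, and all the numbers below are just illustrative:

```python
import math

def fuzziness_index(features, ideal, widths):
    """Fuzzify each spike feature against an 'ideal' spike template and
    combine the per-feature memberships into one fuzziness index.
    Gaussian membership functions and a product combination are
    assumptions here, chosen for simplicity."""
    m = 1.0
    for f, c, w in zip(features, ideal, widths):
        m *= math.exp(-((f - c) ** 2) / (2 * w ** 2))
    return m

# Made-up feature vectors: [peak amplitude (uV), spike width (ms)]
spikes = [[80.0, 0.9], [82.0, 1.0], [40.0, 1.5], [41.0, 1.4]]
ideal = [80.0, 1.0]    # assumed ideal spike features
widths = [10.0, 0.3]   # assumed membership function widths

indices = [fuzziness_index(s, ideal, widths) for s in spikes]
# Spikes close to the ideal template get indices near 1; dissimilar
# spikes get indices near 0, so the indices themselves can be clustered.
```

The nice part is that the per-spike computation is just a handful of multiplies and exponentials, which is what makes a massively parallel FPGA implementation plausible.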

Sunday, February 20, 2011

Lecture 5 - Spike Decoding

Lecture 5 was a lively one! This week we started talking about neural decoding algorithms. These are methods for trying to estimate what large numbers of neurons are 'thinking' about in real time. We started with an old standard, Georgopoulos 1986, which presented the concept of population vectors. The article was relatively easy for everyone to understand although we did note that most of the results validating their method were not published in that article. Still, the paper represents a landmark bit of thinking and we gave it its due.
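The population vector concept is simple enough to sketch in a few lines. In this toy Python example (our class demos are in Matlab, and all the numbers here are invented), each neuron 'votes' for its preferred direction, weighted by its normalized firing rate:

```python
import math

def population_vector(rates, baselines, max_rates, pref_dirs_deg):
    """Georgopoulos-style population vector: sum each neuron's
    preferred-direction unit vector, weighted by its normalized
    firing-rate modulation, and return the resulting angle."""
    px = py = 0.0
    for r, b, m, d in zip(rates, baselines, max_rates, pref_dirs_deg):
        w = (r - b) / (m - b)                # normalized rate modulation
        px += w * math.cos(math.radians(d))
        py += w * math.sin(math.radians(d))
    return math.degrees(math.atan2(py, px))  # decoded movement direction

# Four hypothetical cosine-tuned neurons with preferred directions at
# 0, 90, 180, and 270 degrees; the firing rates below were generated
# from cosine tuning for a true movement direction of 45 degrees.
rates = [17.07, 17.07, 2.93, 2.93]
decoded = population_vector(rates, [10] * 4, [20] * 4, [0, 90, 180, 270])
# decoded comes out at essentially 45 degrees
```

Note how neurons whose rates drop below baseline end up voting *against* their preferred direction, which is exactly what makes the vector sum work.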

The next article we discussed was at the request of one of our classmates, who was in the process of reviewing a Schwartz paper for another journal club she attends. The paper, Jarosiewicz 2008, discussed the strategies monkeys adopt to compensate when the preferred directions used in their brain machine interface's population vector are rotated out of sync with the monkeys' true preferred directions. It turns out the monkeys use a combination of three strategies: (1) changing their 'aim' to offset the mis-tuned neurons, (2) reducing the influence of the affected neurons on the BMI control, and (3) retuning those neurons to match the rotated preferred directions. It was an interesting and well-done paper, but there was some lively debate about whether the experiment revealed anything that wasn't already fairly obvious. Someone raised the point that what would really be interesting is to understand the mechanisms that underlie the observed changes (as opposed to just knowing what those changes were).

The final paper we discussed was Quiroga 2009, which gives a solid background on the fundamental statistical issues underlying neural decoding. Everyone seemed to praise this paper for its readability and scope. The paper explained the difference between neural decoding and information theory. The take-home message was that, while a BMI can implement a successful neural decoder, there is still a lot of information encoded in the neurons that is simply being discarded. Information theory (i.e., mutual information) lets us calculate exactly how much information the decoder misses, and such an objective measure is a vital tool for comparing the efficacy of different decoders. The Wikipedia page on mutual information gives a nice introduction to the underlying math. We also discussed a paper written by my old lab mate Debbie Won, in which mutual information was used to quantify how much information is lost when neurons are mis-detected and mis-sorted. It's a great (albeit quite dense) paper. We finished off the class by creating an Excel spreadsheet that calculated some basic information theory numbers for a simple example; we changed the prior probabilities and tried to predict how the information content would change.
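To give a flavor of what the spreadsheet computed, here's the same sort of calculation in Python. The joint probability table is made up for illustration - two equally likely stimuli decoded correctly 90% of the time:

```python
import math

def mutual_information(joint):
    """Mutual information I(S;R) in bits from a joint probability table
    joint[s][r] over stimuli s and decoder responses r."""
    ps = [sum(row) for row in joint]           # marginal p(s)
    pr = [sum(col) for col in zip(*joint)]     # marginal p(r)
    mi = 0.0
    for s, row in enumerate(joint):
        for r, p in enumerate(row):
            if p > 0:
                mi += p * math.log2(p / (ps[s] * pr[r]))
    return mi

# Two equally likely stimuli, each decoded correctly 90% of the time
joint = [[0.45, 0.05],
         [0.05, 0.45]]
bits = mutual_information(joint)
# With equal priors there is 1 bit available, but the 10% error rate
# means the decoder conveys only about 0.53 bits of it.
```

Re-running this with skewed priors (say, 0.8/0.2) is exactly the kind of exercise we did in the spreadsheet.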

Next week we will tackle two more spike decoding papers that go into a lot more mathematical depth than anything we've dealt with so far. Stay tuned for the report!

Saturday, February 12, 2011

Lecture 4 - Neural Engineering

This week we started to get into specific signal processing issues. Our reading was the excellent technical BMI summary by Linderman et al. (published in IEEE Signal Processing Magazine) from Dr. Shenoy's lab at Stanford University. The article discusses in some technical depth the signal processing stages of a BMI and then describes their efforts to implement these steps as wireless and/or implantable electronics.

Our main topics of discussion were (a) electrode longevity, (b) spike sorting, (c) neuron tuning, and (d) statistical models of neural behavior. The electrode longevity question was especially interesting, since it's such a substantial obstacle and there are very few concrete ideas about how to extend the life of electrodes in the brain. The primary issue is that the brain eventually treats electrodes like foreign objects and initiates an immune response that leaves the electrodes coated with microglia and other scar-like tissue. We also discussed electrode movement in the brain, what the ramifications are for the stability of recorded neurons (it's not good), and how it might be overcome. We had some good discussion about whether larger electrodes might be the solution, since they would have a wider "listening" radius and would therefore be more resilient to micro-movements. The flip side of the equation is that larger electrodes would record from more neurons, which would increase the incidence of overlapped spikes. Decoding overlapped spikes is certainly possible, but perhaps not in a computationally efficient manner.

Finally, we spent a solid hour discussing spike sorting. We started by introducing the concept of neural tuning functions, since this motivates the need for spike sorting in the first place. We quickly looked at Georgopoulos' landmark 1982 paper, in which he discovered that motor neurons are tuned to arm movement direction in a roughly cosinusoidal manner. Following that, we looked at some demo Matlab code I put together to demonstrate the concept and the math behind spike sorting. We started with some very simple methods (thresholding and windowing) and moved on to more sophisticated approaches such as feature extraction (we tried spike amplitude and width) followed by principal component analysis (which is basically just another feature extraction technique). We discussed the concept of clustering, in which spikes with similar features are automatically grouped together. In particular, we examined the k-means clustering method (a built-in Matlab function!), which works pretty well provided you tell it ahead of time how many clusters you are looking for. We discussed some of the pros and cons, including the need for at least partial supervision in determining thresholds and cluster shapes and numbers. The figure below shows our sample data set after it's been sorted using PCA and k-means clustering.
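For anyone who wants to play along without Matlab, here's a bare-bones version of the clustering step in Python. The (amplitude, width) feature values are made up, and this is a toy k-means rather than the polished built-in:

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Minimal k-means on 2-D feature points (e.g. spike amplitude and
    width). As with Matlab's kmeans, you must supply k up front."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    labels = [0] * len(points)
    for _ in range(iters):
        # Assign each point to its nearest center (squared distance)
        for i, p in enumerate(points):
            labels[i] = min(range(k), key=lambda c: (p[0] - centers[c][0]) ** 2
                                                    + (p[1] - centers[c][1]) ** 2)
        # Move each center to the mean of its assigned points
        for c in range(k):
            members = [p for i, p in enumerate(points) if labels[i] == c]
            if members:
                centers[c] = (sum(p[0] for p in members) / len(members),
                              sum(p[1] for p in members) / len(members))
    return labels, centers

# Made-up spikes from two units in (amplitude, width) feature space
points = [(80, 1.0), (82, 1.1), (79, 0.9), (40, 1.5), (42, 1.6), (41, 1.4)]
labels, centers = kmeans(points, 2)
# The first three spikes end up in one cluster, the last three in the other.
```

The partial-supervision caveat from class shows up right in the signature: you have to pick k, and with real (messier) features you'd also want multiple random restarts.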

Next week we'll be looking at neuron decoding. I'm off now to find a good reading for next week and to develop a good chunk of Matlab demo code!

Monday, February 7, 2011

Lecture 3 - Brain Machine Interfaces

In Week 3 we started to get into the specifics of neural engineering - in this case we looked at brain machine interfaces, focusing on the work of the Nicolelis Lab at Duke University. In the first half of the class, we finished discussing Chapter 3 of my dissertation (see my Lecture 2 post). This chapter gives a good general overview of biomedical data acquisition. We discussed technical details such as input impedance, gain, filtering, analog-to-digital conversion, bit rates, and spike detection.
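Spike detection, the last item on that list, is the easiest to illustrate. Here's a toy amplitude-threshold detector in Python - the trace, threshold, and refractory window are all invented for illustration, not taken from the chapter:

```python
def detect_spikes(samples, threshold, dead_samples=30):
    """Simple amplitude-threshold spike detector: report the sample
    index of each threshold crossing, then skip a short refractory
    window so one spike isn't detected twice."""
    crossings = []
    i = 0
    while i < len(samples):
        if samples[i] >= threshold:
            crossings.append(i)
            i += dead_samples   # skip past the rest of this spike
        else:
            i += 1
    return crossings

# A made-up trace: flat baseline with two spikes at samples 100 and 400
trace = [0.0] * 1000
trace[100] = trace[400] = 5.0
spikes = detect_spikes(trace, threshold=3.0)  # -> [100, 400]
```

Real systems typically set the threshold from the noise statistics (some multiple of the baseline standard deviation) rather than hard-coding it.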

Following that, we delved into the very helpful review paper by Lebedev and Nicolelis (2006, Trends in Neurosciences). This paper outlines the different types of brain machine interfaces, discusses their relative strengths, and goes into some detail about the roadblocks moving forward. These roadblocks are summarized as: building implantable data acquisition devices, developing real-time computational algorithms, designing realistic artificial prostheses, and incorporating sensorimotor feedback.

Upcoming this week, we will be discussing "Signal Processing Challenges for Neural Prostheses" by Linderman et al., in which we will finally start to delve into some mathematics.

Wednesday, February 2, 2011

Neural Interfaces Fiction!

A student put a copy of "Fools' Experiments" by Edward Lerner in my hands. It's a sci-fi novel about neural interfaces and artificial intelligence. The book is horribly written, with cheesy dialog and weakly developed characters, but there are a few interesting ideas bouncing around in there. In the book, the characters have developed a brain machine interface helmet that reads your thoughts and allows you to subconsciously interact with a computer. The helmets include a neural network layer that adapts them on the fly to optimize the bi-directional flow of information between the user and the computer. The fun part came when the computer became infected with a worm-type virus: the virus saw the user's brain as just another computer to infect, effectively bricking the user's brain. Of all the reactionary arguments I've heard for avoiding brain computer interface research, that is definitely a first! Still, kinda interesting to think about...