Monday, September 19, 2016

3D Printing

I've been having some fun getting to know the 3D printers in Temple's College of Engineering. I've been using a Stratasys Objet 3D printer to create parts for EEG headsets for a hackathon we're running this weekend (more details on that soon). We've been using headset designs from OpenBCI. We bought the electronics from them and we're printing our own headsets. The first part of the print took some 60 hours, but man are the parts nice. The parts come off the printer embedded in a flimsy scaffolding:

The scaffolding is removed manually with a pressure washer that strips it all away:

The finished products are firm and very cleanly articulated:


I'll post some pictures of the headsets once they're completely put together. Overall though, it's a really neat process!

Thursday, March 10, 2016

Brain Efficiency

I've been thinking a lot recently about how efficient the brain is. I like to spend time thinking about how neural interfaces will change the nature of humanity. Presumably, at some point, it will be possible to create computers that have intelligence that is on par with that of humans. Does that mean the end of humanity? Maybe, but maybe not.

Computers were designed to crunch numbers, and they are ruthlessly efficient at it. Unfortunately for them, most of the tasks we associate with "intelligence" are not number crunching operations. Human intelligence is essentially a feat of pattern recognition - when we recognize patterns, we learn to predict the future based on previous experience. We can teach computers to perform pattern recognition tasks, but first we have to convert those tasks into number crunching operations. This is a pretty inefficient way of solving those problems, but we make up for that inefficiency by using super fast computers. Think of it as trying to drive a square peg into a round hole: it's a bad idea from the start, but you might be able to make some progress if you just agree to use a humongous hammer.

So, number crunching machines are inherently inefficient at recognizing patterns. Is there another type of computing system that would be more efficient? Yes! Millions of years of evolution have placed a very efficient pattern recognition system right between your ears: your brain. Brains are insanely efficient at pattern recognition tasks. Let's see how efficient:

  • The average adult consumes about 2,000 calories per day
  • Of those, about 1,300 are the "resting metabolic rate," which is basically how much energy you'd burn if you just lay in bed all day and didn't move - it's what you burn to keep your organs running to stay alive
  • Of those, about 20%, or 260 calories, are consumed by your brain
  • 260 calories in 24 hours converts to about 1.1 million joules per 86,400 seconds, which reduces to about 12.6 joules per second - call it 13 watts.

That's right. 13 watts to keep the universe's most sophisticated intelligence machine operational. Astounding. By comparison, the fancypants laptop I'm using to type this blog post consumes about 45W. The Watson computer that succeeded at playing Jeopardy reportedly uses something like 200,000W, a factor of over 15,000x more. Perhaps a more impressive feat than Watson beating Ken Jennings would have been Watson beating 15,000 Ken Jenningses! And let's remember, Watson didn't 'have fun' playing Jeopardy, or parlay its experience into planning for its future: Ken did. Even supercomputers like Watson, with all their power, are inferior to the wonder of the human brain.
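The calorie-to-watt arithmetic above can be sanity-checked in a few lines. The calorie figures and Watson's 200 kW are the post's own numbers; 4,184 joules per food calorie is the standard conversion:

```python
# Back-of-the-envelope check on the brain-power numbers.
CAL_PER_DAY = 260       # kcal/day burned by the brain (20% of the 1,300 kcal RMR)
J_PER_KCAL = 4184       # joules per food calorie (kcal)
SEC_PER_DAY = 86_400

brain_watts = CAL_PER_DAY * J_PER_KCAL / SEC_PER_DAY
print(f"Brain: {brain_watts:.1f} W")              # ≈ 12.6 W, call it 13

WATSON_WATTS = 200_000  # reported figure for Watson
print(f"Watson/brain: {WATSON_WATTS / brain_watts:,.0f}x")  # ≈ 15,900x
```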

So, will a computer ever become as smart as a person? While it's hard to say, I believe that it will be damn near impossible for a computer to become as smart as a person using only 13 watts of power. I suspect that the only material that can be made to operate as efficiently as a human brain is ... a human brain. You'll never get down to 13 watts with transistors, memristors, or whatever the next great innovation is. Nothing beats neurons with respect to efficiency.

A separate question worth asking is whether a computer that can think as fast as a person (regardless of the wattage) can compete with humanity in terms of collective intelligence. I'll save that question for another day.

Wednesday, February 3, 2016


I spent the past two days at the Proposer's Day meeting for the DARPA Neural Engineering System Design (NESD) program. It was ... interesting. The program manager wants teams to create technology that can record from 1 million neurons, stimulate 100,000 neurons, and do full duplex (read and write simultaneously) with 1,000 neurons. And he wants it done in four years. And he wants this done in the context of addressing a real neuroprosthetics application such as prosthetic vision or audition. And he wants it done wirelessly. And don't forget to do your FDA IDE application, or to come up with a financial model for bringing this to market that isn't nonsense. Oh, it can't be larger than 1 cm^3, either. Never mind that the science of cortical stimulation for prosthetic sensory input is basically in its infancy. Or that no one can seem to work out how to keep neural electrodes viable in the brain for more than a couple of years reliably.


On the plus side, DARPA is willing to throw up to $60M at the problem. So there's that.

My sense was that very few of the people in the room actually thought it was technically viable to do all these things in the allotted time (even though it'd still be a major accomplishment if only a subset of the desired outcomes were achieved). This sets up an interesting Catch-22: in order to be a successful proposer, you have to propose a project which you claim will meet the program's goals, even if you don't actually believe those goals are realistic. That only seems like a logical conundrum until you remind yourself that $60M is an insane amount of money.

To be fair, it's _up to_ $60M, and that's split among all winning teams. And each winning team will likely have a large number of teammates in order to have a prayer of addressing all the program's requirements. So the money will have to divide down a lot. But, hey, you can divide $60M a lot of times and still have real money left.

DARPA is an interesting part of the funding ecosystem. It's pretty great that someone is willing to throw big money at over-the-horizon technology. Not all technology development should necessarily be practical if we (the US? the world?) are to make real progress. And that's actually what bugged me most about this program. The emphasis on 'addressing a real problem', jumping through the various FDA hoops, and/or trying to figure out how any of this could be turned into an end product pretty much misses the point. This research is worth doing just because it's worth doing. If there were a business case to be made for any of this stuff, some company would already be on it.

Final thought: there was a lecture on ethics this morning. The speaker brought up some interesting points: most notably about the need to deal head-on with the tin-foil-hat crowd. But the bigger point seemed lost: the time to have an ethical debate is before you start a sustained, decades-long, multi-agency research portfolio on brain interfaces. The best we can do now is to make sure we design systems that are therapeutic, safe, and secure. Discussing the bigger questions of "should we engage in this research" is largely moot at this point.

Anyways, the full DARPA call for proposals (or Broad Agency Announcement - BAA in the DARPA parlance) can be found here.

Monday, January 25, 2016

Remembering Marvin Minsky

MIT Professor Marvin Minsky has died. This is very sad news - Prof. Minsky was perhaps the single most seminal pioneer of Artificial Intelligence research. I was fortunate enough to take his graduate course "Society of Mind" in the spring of 1998. It was pretty mind blowing. I'm not sure how much I understood, but it was fairly self-evident that we were in the presence of genius. If I remember correctly, much of what we discussed in the class was a series of logic exercises designed to help us reverse-engineer the brain. I loved the idea of studying the brain by conceptualizing it as a complex interconnection of simple components - using engineering to forward neuroscience?! Like I said: mind blowing. A unique, quirky, and brilliant individual.

The Washington Post obit:

The New York Times obit:

His textbook was pretty excellent, too. I recommend it highly!

Compressive Sensing

So I've kinda had my mind blown over the past couple days by the discovery of a signal processing technique called compressive sensing. Compressive sensing allows you to skirt the Nyquist sampling theorem in certain cases, which means effectively sampling a signal at rates lower than twice the maximum signal frequency. Whaaaaaat?

The idea seems to be based on a couple of important assumptions and certainly isn't applicable to most signal sampling cases. The most important assumption is that the signal being sampled must be sufficiently sparse, meaning that most of its coefficients are zero in some representation (not necessarily the raw samples themselves).
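Here's a minimal sketch of the idea in NumPy. The signal, measurement matrix, and dimensions are all made up for illustration; recovery is done with Orthogonal Matching Pursuit, one of the standard (greedy) reconstruction algorithms - not the only way to do it:

```python
import numpy as np

rng = np.random.default_rng(0)

# A k-sparse signal of length n, "sampled" with only m < n random measurements.
n, m, k = 256, 100, 5
x = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
x[support] = rng.standard_normal(k)

A = rng.standard_normal((m, n)) / np.sqrt(m)   # random sensing matrix
y = A @ x                                      # 100 measurements of a length-256 signal

def omp(A, y, k):
    """Orthogonal Matching Pursuit: greedily pick the column most
    correlated with the residual, then least-squares refit on that support."""
    residual, idx = y.copy(), []
    for _ in range(k):
        idx.append(int(np.argmax(np.abs(A.T @ residual))))
        coef, *_ = np.linalg.lstsq(A[:, idx], y, rcond=None)
        residual = y - A[:, idx] @ coef
    x_hat = np.zeros(A.shape[1])
    x_hat[idx] = coef
    return x_hat

x_hat = omp(A, y, k)
print("relative error:", np.linalg.norm(x_hat - x) / np.linalg.norm(x))
```

With a sufficiently sparse signal and enough random measurements, the recovery is essentially exact - despite taking far fewer measurements than Nyquist would demand for an arbitrary length-256 signal.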

I may update this post in the coming days with more details, but right now, the best resource I've found so far to explain things is here: Other decent resources appear to be here, here, and here.

Wednesday, December 2, 2015

High Performance Computing Cluster

This past summer and fall, my research partners and I built our own personal high performance computing cluster. Temple has its own cluster (Owls Nest), but it's always in heavy use by others around the university, so we're always scrapping for resources. So we built our own. First we built a testbed cluster by lashing together a handful of surplus PCs, and then we used that to spec out a formal HPC cluster that we paid for with about $27k from a grant.

The cluster is pretty awesome. Our student, Devin Trejo, put together a very comprehensive blog post on how the cluster was designed and built. You can read all about it here: