So I've kind of had my mind blown over the past couple of days by the discovery of a signal processing technique called compressive sensing. Compressive sensing allows you to skirt the Nyquist sampling theorem in certain cases, which means effectively sampling a signal at rates lower than twice the maximum signal frequency. Whaaaaaat?
The idea seems to be based on a couple of important assumptions and certainly isn't applicable to most signal sampling cases. The most important assumption is that the signal being sampled must be sufficiently sparse in some domain, meaning that most of its coefficients in some basis (e.g., Fourier or wavelet) are zero. Given that sparsity, you can recover the full signal from far fewer random measurements than Nyquist would suggest, typically by solving an optimization problem that seeks the sparsest signal consistent with the measurements.
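To make that concrete, here's a minimal sketch in Python/NumPy (not from the tutorial linked below, just my own illustration). It builds a sparse signal, takes far fewer random measurements than the signal's length, and recovers it with Orthogonal Matching Pursuit, one common greedy recovery algorithm; the sizes `n`, `m`, and `k` are arbitrary choices for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)

n, m, k = 128, 64, 5   # signal length, number of measurements (m < n), sparsity

# A sparse signal: only k of its n entries are nonzero.
x = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
x[support] = rng.normal(size=k)

# Random Gaussian measurement matrix: each measurement is a random
# weighted sum of the whole signal, not a point sample.
A = rng.normal(size=(m, n)) / np.sqrt(m)
y = A @ x              # only m = 64 measurements of a length-128 signal

def omp(A, y, k):
    """Orthogonal Matching Pursuit: greedily pick the column of A most
    correlated with the residual, then least-squares re-fit on the
    chosen support."""
    residual = y.copy()
    idx = []
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))
        idx.append(j)
        coef, *_ = np.linalg.lstsq(A[:, idx], y, rcond=None)
        residual = y - A[:, idx] @ coef
    x_hat = np.zeros(A.shape[1])
    x_hat[idx] = coef
    return x_hat

x_hat = omp(A, y, k)
print("recovery error:", np.linalg.norm(x_hat - x))
```

With these sizes the recovery is essentially exact with high probability, even though we took only half as many measurements as there are signal entries. In practice you'd often use an L1-minimization solver (basis pursuit) instead of OMP, but the greedy version keeps the demo dependency-free.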
I may update this post in the coming days with more details, but the best resource I've found so far to explain things is here: http://www.codeproject.com/Articles/852910/Compressed-Sensing-Intro-Tutorial-w-Matlab. Other decent resources appear to be here, here, and here.