The concept of “information” permeates our lives, as we continuously create, process, communicate and consume information in our interactions with other people or simply for entertainment. Information is represented in terms of signals such as audio, single- and multi-view video, and so on. This richness has spurred the development of efficient ways to acquire, process, communicate and store information. From Shannon’s celebrated sampling theorem, it is known that signals can be reconstructed exactly if they are sampled densely enough; this has led to the prevalent paradigm of representing signals through a set of their sample values. This is also very convenient in a signal processing sense, as the samples are readily available in their original form for any kind of subsequent operation, e.g., filtering, analysis, compression, and encryption, to mention just a few. While this approach has worked well for a long time, it is starting to show severe limitations. New applications such as video, hyperspectral and magnetic resonance imaging acquire and use vast amounts of information, whose handling is becoming increasingly difficult.
Compressive sensing (CS) theory provides an attractive and innovative alternative. CS is a new sampling paradigm that exploits the intuitive notion that most natural signals, such as speech, images and video sequences, are highly correlated. The concept of correlation implies that there exists a transform domain in which the signal is sparse, or approximately sparse, i.e., only a small fraction of the transform coefficients of the signal are significantly different from zero. Compression algorithms such as JPEG and MPEG exploit sparsity by representing only a few significant coefficients in the transform domain. However, CS takes a different view of this problem, as it attempts to avoid acquiring a huge number of signal samples only to later retain just a few of them. In particular, it deals with the problem of acquiring a set of measurements comparatively much smaller than dictated by Shannon’s theorem, from which the signal can be reconstructed exactly or almost exactly, under the assumption that the signal is sparse in some domain, not necessarily known a priori. In practice, CS can be carried out using very simple techniques. The acquisition of “measurements” (the equivalent of Shannon’s “samples”) is extraordinarily easy: a measurement is obtained as the scalar product of the signal with a pseudorandom sequence (e.g., Gaussian). The reconstruction of the signal from its measurements requires solving a constrained minimization problem, i.e., finding the sparsest signal (or transform thereof) that matches the available measurements. This can be performed approximately using, amongst others, linear programming techniques.
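The acquisition and reconstruction steps described above can be illustrated with a minimal sketch in Python (NumPy/SciPy are assumptions of this example, not part of the text). Measurements are scalar products of a sparse signal with Gaussian pseudorandom sequences, and reconstruction is cast as a linear program: minimizing the ℓ1 norm of the signal subject to agreement with the measurements (basis pursuit), by splitting the unknown into nonnegative parts. The problem sizes and the random seed below are illustrative choices only.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)

# Illustrative sizes: signal length n, measurements m << n, sparsity k.
n, m, k = 100, 40, 5

# A k-sparse signal (already sparse in the identity domain, for simplicity).
x = np.zeros(n)
x[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)

# Each measurement is the scalar product of x with a Gaussian pseudorandom
# sequence; stacking the sequences as rows gives the measurement matrix A.
A = rng.standard_normal((m, n)) / np.sqrt(m)
y = A @ x

# Reconstruction: find the sparsest-looking signal matching the measurements.
# The convex surrogate  min ||x||_1  s.t.  A x = y  becomes a linear program
# by writing x = u - v with u, v >= 0 and minimizing sum(u) + sum(v).
c = np.ones(2 * n)
A_eq = np.hstack([A, -A])
res = linprog(c, A_eq=A_eq, b_eq=y, bounds=[(0, None)] * (2 * n))
x_hat = res.x[:n] - res.x[n:]

print("max recovery error:", np.max(np.abs(x_hat - x)))
```

With m = 40 measurements for a 5-sparse length-100 signal, the linear program typically recovers the signal essentially exactly, far below the 100 samples a sample-by-sample representation would use.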