# A Nyquist–Shannon Sampling Theorem misunderstanding

The Nyquist–Shannon sampling theorem establishes that sampling a continuous-time signal $x(t)$ is perfectly reversible if the sampling frequency is at least the Nyquist rate (twice the bandwidth of the sampled signal). However, I have noticed that some very experienced engineers hold a slightly distorted interpretation of the theorem: they believe that the quality of a reconstructed signal increases with the sampling frequency, and that frequencies much higher than the Nyquist rate are needed to perform a satisfactory reconstruction.

A likely reason for this misunderstanding is the assumption that the signal is reconstructed by linear interpolation of the samples. From that perspective, a higher sampling rate really does bring the reconstructed signal closer to its original shape. But the point is that the signal is not simply reconstructed by interpolation. Instead, the process is composed of two stages:

• A train of impulses is generated, each one multiplied by its respective sample value;

$x_i(t)=\sum\limits_{n=-\infty}^{\infty}x[n]\delta(t-nT)$

• Then the resulting signal passes through a low-pass filter, which discards all frequencies above the original signal's bandwidth.
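The two stages above can be sketched numerically: the sample-weighted impulse train convolved with the ideal low-pass filter's impulse response, $\mathrm{sinc}(t/T)$, reduces to a sum of shifted sincs. A minimal NumPy sketch (the 5 Hz sine, the 50 Hz rate, and the evaluation window are my own illustrative choices, not from the post):

```python
import numpy as np

def reconstruct(samples, T, t):
    """Ideal reconstruction: the sample-weighted impulse train passed
    through the ideal low-pass filter, i.e. a sum of shifted sincs.
    np.sinc(x) is sin(pi*x)/(pi*x), the ideal filter's impulse response."""
    n = np.arange(len(samples))
    return np.sum(samples[:, None] * np.sinc((t[None, :] - n[:, None] * T) / T),
                  axis=0)

# Illustrative example: a 5 Hz sine sampled at 50 Hz (its Nyquist rate is 10 Hz).
fs = 50.0
T = 1.0 / fs
n = np.arange(100)                        # 2 s of samples
samples = np.sin(2 * np.pi * 5.0 * n * T)

# Evaluate away from the edges, where truncating the infinite sum matters least.
t = np.linspace(0.5, 1.5, 401)
x_hat = reconstruct(samples, T, t)
err = np.max(np.abs(x_hat - np.sin(2 * np.pi * 5.0 * t)))
```

Because only a finite number of samples is available, the sum is truncated and the reconstruction is only near-exact in the interior of the window; with infinitely many samples it would be exact.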

It is important to note that, in practical terms, the reconstruction is not perfect, because the theorem is only valid for bandlimited signals: the signal must be prefiltered, which distorts it, and the ideal low-pass filter is unrealizable because its impulse response is not causal. So, although the mathematical behavior described by the theorem is not achieved physically, the limiting factor is not the sampling rate, as long as the Nyquist rate is respected.
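To see where the linear-interpolation mental model goes wrong, here is a small numerical sketch (the 1 Hz sine and the two sampling rates are my own illustrative choices): rebuilding the signal by linear interpolation is visibly inaccurate at a rate just above Nyquist and only improves as the rate grows, even though the Nyquist criterion was already satisfied.

```python
import numpy as np

f0 = 1.0                                  # 1 Hz sine; its Nyquist rate is 2 Hz
t = np.linspace(0.0, 2.0, 2001)           # dense grid for measuring the error
x = np.sin(2 * np.pi * f0 * t)

def linear_interp_error(fs):
    """Max error when the sampled sine is rebuilt by linear interpolation."""
    n = np.arange(0, int(2.0 * fs) + 1)   # samples covering the 2 s window
    tn = n / fs
    xn = np.sin(2 * np.pi * f0 * tn)
    return np.max(np.abs(np.interp(t, tn, xn) - x))

err_coarse = linear_interp_error(2.5)     # just above the Nyquist rate
err_fine = linear_interp_error(10.0)      # heavily oversampled
```

Under linear interpolation the error really does shrink as the rate rises, which is exactly the intuition behind the misconception; the two-stage reconstruction described above needs no such margin.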

### 3 responses to “A Nyquist–Shannon Sampling Theorem misunderstanding”

1. pseudoanonymous

For the same bandwidth, it's easier to design the filter for an oversampled signal than for one sampled at the Nyquist frequency (shorter filter length: less complexity, fewer components, smaller size). That seems to be the main reason for oversampling. But as you point out, oversampling doesn't necessarily imply more quality; a poorly designed oversampled system can be worse than a properly designed 2× one. Oversampling also reduces quantization error, because the error gets spread over a bigger bandwidth.

2. dilsonlira

That’s right. Your comment is very pertinent and constructive, but in this post I was not considering implementation and quantization aspects, only the sampling theorem itself. Thanks.

3. Petre