Modern control systems are typically implemented in computers that work with periodic samples of signals, that is, discrete-time signals, rather than continuous-time signals. A system that produces samples of a given signal is a linear system, albeit a time-varying one. In this post, which starts a series exploring sampling and discretization, we delve into the nature of a system that can produce such signal samples. More will follow.

## Sampling

Let $u(t)$, $t \geq 0$, be a signal from which one periodically draws samples

$$u[n] = u(n T_s), \quad n = 0, 1, \ldots$$

that is, $n \in \mathbb{N}$. The *sampling period*, $T_s$, dictates how often samples are taken. Recall that one can obtain a single sample of a continuous-time signal by the sifting property of the impulse

$$u(t) = \int_{-\infty}^{\infty} u(\tau) \, \delta(t - \tau) \, d\tau$$

from which

$$u[n] = u(n T_s) = \int_{-\infty}^{\infty} u(\tau) \, \delta(n T_s - \tau) \, d\tau.$$

As will soon become clear, impulses play a central role in the analysis of sampled signals.
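Periodic sampling itself is trivial to carry out numerically. A minimal sketch in Python (the helper `sample`, the sinusoidal signal, and the numbers below are our illustrative choices, not from the text):

```python
import math

def sample(u, Ts, N):
    """Draw N periodic samples u[n] = u(n*Ts) from a continuous-time signal u."""
    return [u(n * Ts) for n in range(N)]

# Example: sample a 1 Hz sine wave every Ts = 0.25 s.
Ts = 0.25
u = lambda t: math.sin(2 * math.pi * t)
samples = sample(u, Ts, 5)
# n = 0..4 hits t = 0, 0.25, 0.5, 0.75, 1.0 -> sin values 0, 1, 0, -1, 0 (up to rounding)
```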

Before we go ahead and start working with the newly obtained samples, that is with the sampled discrete-time signal, it is helpful to seek a continuous-time representation of the sampled signal as the result of the action of a continuous-time causal but possibly time-varying linear system with impulse response $g(t, \tau)$ such that

$$

u_s(t) = \int_{0}^{t} g(t, \tau) \, u(\tau) \, d\tau, \quad u_s(n T_s) = u(n T_s) = u[n].

$$

Of course there are infinitely many such systems for which $u_s(n T_s) = u(n T_s)$, each one leading to a particular $u_s(t)$ that generally differs from $u(t)$ when $t \neq n T_s$.
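To make the non-uniqueness concrete, here is a small Python sketch of two such systems, a piecewise-constant hold and a piecewise-linear interpolator (both are illustrative choices of ours): they agree with $u$ at every sampling instant yet differ in between.

```python
import math

Ts = 0.25
u = lambda t: math.sin(2 * math.pi * t)

# Two different reconstructions that agree with u at the sampling instants:
hold = lambda t: u(math.floor(t / Ts) * Ts)          # piecewise constant
def linear(t):                                       # piecewise linear
    n = math.floor(t / Ts)
    lam = t / Ts - n
    return (1 - lam) * u(n * Ts) + lam * u((n + 1) * Ts)

# Both match u exactly at t = n*Ts ...
for n in range(8):
    t = n * Ts
    assert abs(hold(t) - u(t)) < 1e-12 and abs(linear(t) - u(t)) < 1e-12
# ... but away from the sampling instants they generally differ.
assert abs(hold(0.6) - linear(0.6)) > 1e-3
```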

## Sample-and-hold

One popular choice is that of a system for which

$$

u_s(t) = u(n T_s) = u[n], \quad n T_s \leq t < (n+1) T_s.

$$

Such sampled signals are produced by virtually all *digital-to-analog* and *analog-to-digital converters* (D/A and A/D converters) one can buy. The operation performed by one such device is known as a *sample-and-hold* operation.

Even though it is relatively easy to design and manufacture a sample-and-hold device, characterizing its impulse response $g(t, \tau)$ is not obvious! In order to make progress, start from the basics: what happens if a delayed step signal $u(t) = 1(t - \tau)$ is applied to a sample-and-hold device? The answer is the following output:

$$

u_s(t, \tau) = 1(t - \lceil \tau/T_s \rceil T_s) = \begin{cases}

1, & t \geq \lceil \tau/T_s \rceil T_s \\

0, & t < \lceil \tau/T_s \rceil T_s

\end{cases}

$$

The notation $\lceil x \rceil$ denotes the smallest integer greater than or equal to $x$, that is, the *integer ceiling* of $x$. At this point, invoking linearity, the sample-and-hold impulse response must be

$$g(t, \tau) = \delta(t - \lceil \tau/T_s \rceil T_s).$$

One conclusion is that the sample-and-hold system is time-varying, because $g$ depends on both $t$ and $\tau$ and not only on their difference. A more intuitive way of thinking about it is that different samples will be obtained if $u$ is shifted in time by anything other than an integer multiple of the sampling period $T_s$.
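This time-varying behavior is easy to verify numerically. A quick sketch (the signal and the shift values are arbitrary choices of ours):

```python
import math

Ts = 0.25
u = lambda t: math.sin(2 * math.pi * t)

def samples(u, shift, N=8):
    """Samples of the time-shifted input u(t - shift), taken at t = n*Ts."""
    return [u(n * Ts - shift) for n in range(N)]

# Shifting the input by a full sampling period merely delays the sample sequence...
assert samples(u, Ts)[1:] == samples(u, 0)[:-1]
# ...but shifting by a fraction of Ts yields a genuinely different sequence.
assert samples(u, 0.1)[1:] != samples(u, 0)[:-1]
```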

## Separating the sampling from the holding

Despite the nuisance of the time-varying nature of the sampling operation, it is possible to obtain the Laplace or Fourier transform of the signal $u_s(t)$ in terms of the transform of $u(t)$. The easiest available path seems to be to separate the *sampling* from the *holding* in the sample-and-hold operation described above.

The key is to realize that holding can be thought of as the result of applying a causal linear time-invariant filter, $g_h$, to an artificial input, $u_t$, that is

$$u_s(t) = \int_{0}^{t} g_h(t - \tau) \, u_t(\tau) \, d\tau,$$

in which $g_h$ is the filter’s impulse response

$$g_h(t) = 1(t) - 1(t - T_s)$$

and the input $u_t$ is the *modulated train of impulses*

$$

u_t(t) = \sum_{n = 0}^{\infty} u(n T_s) \, \delta(t - n T_s) = \sum_{n = 0}^{\infty} u[n] \, \delta(t - n T_s).

$$

Of course there exists no physical device that can produce $u_t$ by itself, since $u_t$ consists of impulses, but the above abstraction helps by keeping what is time-invariant, i.e. the holding, separate from what is time-varying, i.e. the sampling. Also note that ultimately we will not be manipulating the train of impulses, or any impulse for that matter, outside of an integral. All manipulations of impulses outside integrals are for convenience rather than necessity.
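Although no device generates $u_t$, it can be approximated numerically: concentrate the area $u[n]$ of each impulse in a single cell of a fine grid, then convolve with the box $g_h$. A sketch (the grid resolution and the test signal are our choices):

```python
import math

Ts = 0.25
dt = Ts / 100                    # fine grid resolution (our choice)
u = lambda t: math.sin(2 * math.pi * t)

# Impulse train on the grid: all the area u(n*Ts) concentrated in one cell
Ngrid = 800                      # grid covers [0, 2) seconds
ut = [0.0] * Ngrid
for n in range(8):               # sampling instants inside the grid
    ut[n * 100] = u(n * Ts) / dt

# Hold filter g_h(t) = 1(t) - 1(t - Ts): a unit-height box of width Ts
gh = [1.0] * 100

# Causal convolution u_s = g_h * u_t, evaluated at grid point i
def us(i):
    return dt * sum(gh[k] * ut[i - k] for k in range(min(i + 1, 100)))

# The convolution reproduces the sample-and-hold staircase:
assert abs(us(150) - u(Ts)) < 1e-9     # t = 1.5*Ts holds the sample u(Ts)
```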

Start by rewriting

$$

\begin{aligned}

u_t(t) &= \sum_{n = 0}^{\infty} \int_{-\infty}^{\infty} u(\tau) \, \delta(t - \tau) \, \delta(\tau - n T_s) \, d\tau \\

&= \int_{-\infty}^{\infty} \sum_{n = 0}^{\infty} u(\tau) \, \delta(\tau - n T_s) \, \delta(t - \tau) \, d\tau \\

&= \sum_{n = 0}^{\infty} u(t) \, \delta(t - n T_s) \\

&= u(t) \, \sum_{n = 0}^{\infty} \delta(t - n T_s)

\end{aligned}

$$

by using the sifting property of the impulse repeatedly. Note a subtlety in the above manipulation: it makes use of the property

$$

\begin{aligned}

f(\sigma) \, \delta(t - \sigma) &= \int_{-\infty}^{\infty} f(\tau) \, \delta(t - \tau) \, \delta(\tau - \sigma) \, d\tau \\

&= \int_{-\infty}^{\infty} f(\tau) \, \delta(\tau - \sigma) \, \delta(t - \tau) \, d\tau = f(t) \, \delta(t - \sigma)

\end{aligned}

$$

without having to appeal to intuitive notions of what an impulse is. See Chapter 3 for more on the impulse.

It is to the expression

$$

\begin{aligned}

u_t(t) = u(t) \, \sum_{n = 0}^{\infty} \delta(t - n T_s)

\end{aligned}

$$

that the Laplace transform will be applied. Indeed, proceed as in P3.48 to calculate

$$\mathcal{L} \left \{ \sum_{n = 0}^{\infty} \delta(t - n T_s) \right \} = \sum_{n = 0}^{\infty} e^{- s n T_s} = \frac{1}{1 - e^{-s T_s}}, \quad \operatorname{Re}\{s\} > 0,$$

and reach out to the product property of the Laplace transform

$$\mathcal{L} \{ f(t) \, g(t) \} = \frac{1}{2 \pi j} \lim_{\rho \rightarrow \infty} \int_{\alpha - j \rho}^{\alpha + j \rho} F(s - z) \, G(z) \, dz$$

where $\alpha$ is taken in the region of convergence of $G$. A formal proof of this property is technically very demanding, as it requires a careful treatment of the convergence properties of the Fourier and Laplace integrals, which is why it is not even mentioned in the book! It is, however, a key property used in the study of modulated signals, frequently appearing in books and papers on signals and communications, where it is often introduced in the (much better behaved) context of the Fourier transform.
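Before moving on, the impulse-train transform computed above is easy to sanity-check numerically at a point with $\operatorname{Re}\{s\} > 0$ (the particular $s$ and $T_s$ below are arbitrary choices of ours):

```python
import cmath

Ts = 0.25
s = 0.5 + 2.0j                  # any point with Re{s} > 0

# Truncated transform of the impulse train vs. the closed form 1/(1 - e^{-s*Ts})
partial = sum(cmath.exp(-s * n * Ts) for n in range(5000))
closed = 1 / (1 - cmath.exp(-s * Ts))
assert abs(partial - closed) < 1e-9
```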

By taking $f(t) = u(t)$ and $g(t) = \sum_{n = 0}^{\infty} \delta(t - n T_s)$ one obtains

$$

\begin{aligned}

U_t(s) &= \mathcal{L} \left \{ u(t) \sum_{n = 0}^{\infty} \delta(t - n T_s) \right \} \\

&= \frac{1}{2 \pi j} \lim_{\rho \rightarrow \infty} \int_{\alpha - j \rho}^{\alpha + j \rho} \frac{U(s - z)}{1 - e^{-z T_s}} \, dz

\end{aligned}

$$

Note that all poles of $G(z)=(1-e^{-z T_s})^{-1}$ are simple and located at $z = j k \omega_s$, $k \in \mathbb{Z}$, where

$$\omega_s = 2 \pi/T_s$$

is the angular sampling frequency.

Before proceeding, some assumptions need to be made. Assume that $U(s)$ converges for $\operatorname{Re}\{s\} > \sigma_u$ and that $\lim_{|s| \rightarrow \infty} |U(s)| = 0$. The first assumption means that all poles of $U(s-z)$ lie in the region $\operatorname{Re}\{s-z\} < \sigma_u$, that is $\operatorname{Re}\{z\} > \operatorname{Re}\{s\} - \sigma_u$. The second assumption means that $|U(s)|$ goes to zero as $|s|$ grows, e.g. when $U(s)$ is strictly proper.

With the assumptions in place, the above integral can be evaluated using Cauchy’s Residue Theorem (Theorem 3.1) on the positively oriented contour $\Gamma_-^{\alpha}$ as in Fig. 3.2 with $s$ and $\alpha > 0$ taken so as not to include any poles of $U(s-z)$, that is $\operatorname{Re}\{s\} > \sigma_u$. Summing the residues at the poles of $G(z)=(1 - e^{-z T_s})^{-1}$ results in

$$

\begin{aligned}

U_t(s) &= \frac{1}{2 \pi j} \lim_{\rho \rightarrow \infty} \int_{\alpha - j \rho}^{\alpha + j \rho} \frac{U(s - z)}{1 - e^{-z T_s}} \, dz \\

&= \sum_{k = -\infty}^{\infty} \operatorname{Res}_{z \rightarrow j k \omega_s} \left ( \frac{U(s - z)}{1 - e^{-z T_s}} \right ) \\

&= \frac{1}{T_s} \sum_{k = -\infty}^{\infty} U(s - j k \omega_s), \quad \operatorname{Re}\{s\} > \sigma_u.

\end{aligned}

$$

In the above calculation we have used the standard residue formula

$$

\operatorname{Res}_{z \rightarrow j k \omega_s} \left ( \frac{U(s - z)}{1 - e^{-z T_s}} \right ) = \lim_{z \rightarrow j k \omega_s} (z - j k \omega_s) \frac{U(s-z)}{1 - e^{-z T_s}} = \frac{U(s - j k \omega_s)}{T_s}

$$

because each pole has multiplicity one, and used L’Hôpital’s rule to evaluate the limit.
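The residue value $1/T_s$ can be checked numerically by evaluating the limit expression close to one of the poles (the pole index $k$ and the offset $10^{-6}$ below are arbitrary choices of ours):

```python
import cmath
import math

Ts = 0.25
ws = 2 * math.pi / Ts

# Near the pole z = j*k*ws, the product (z - j*k*ws) / (1 - e^{-z*Ts})
# should approach the residue value 1/Ts.
k = 3
z = 1j * k * ws + 1e-6
ratio = (z - 1j * k * ws) / (1 - cmath.exp(-z * Ts))
assert abs(ratio - 1 / Ts) < 1e-4
```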

The formula for $U_t(s)$ reveals that the effect of the time-varying part of the sample-and-hold operation is to superimpose multiple *replicas* of the original $U(s)$, shifted by $j k \omega_s$, $k \in \mathbb{Z}$. We will explore this phenomenon in more depth later when discussing *aliasing*.
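The replica formula can also be verified numerically for a concrete signal. Below we pick $u(t) = t e^{-t}$, so that $U(s) = 1/(s+1)^2$ is strictly proper and $u(0) = 0$ (a choice of ours that makes the truncated sums converge cleanly), and compare $U_t(s)$ computed directly from the samples against a symmetric truncation of the sum of replicas:

```python
import cmath
import math

Ts = 0.25
ws = 2 * math.pi / Ts            # angular sampling frequency
s = 0.5 + 1.0j                   # evaluation point, Re{s} > sigma_u = -1

# u(t) = t * exp(-t), whose transform U(s) = 1/(s + 1)^2 is strictly proper
# and which satisfies u(0) = 0.
U = lambda z: 1 / (z + 1) ** 2

# U_t(s) computed directly from the samples u(n*Ts)
lhs = sum(n * Ts * cmath.exp(-(s + 1) * n * Ts) for n in range(500))

# The replica formula, truncated symmetrically in k
rhs = sum(U(s - 1j * k * ws) for k in range(-20000, 20001)) / Ts

assert abs(lhs - rhs) < 1e-5
```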

When the region of convergence includes the imaginary axis, that is when $\sigma_u < 0$, the Fourier transform of $u_t$ can be obtained by evaluating $U_t(s)$ at $s = j \omega$, that is

$$

\begin{aligned}

U_t(j \omega) &= \frac{1}{T_s} \sum_{k = -\infty}^{\infty} U(j(\omega - k \omega_s))

\end{aligned}

$$

It is this formula that appears in most works dealing with sampling using the Fourier transform.
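A direct consequence of the overlapping replicas is aliasing: two sinusoids whose frequencies differ by $\omega_s$ are indistinguishable after sampling. A quick numerical illustration (the frequency $\omega_0 = 3$ rad/s is an arbitrary choice of ours):

```python
import math

Ts = 0.25
ws = 2 * math.pi / Ts

# Sinusoids whose frequencies differ by ws produce exactly the same samples:
# sin((w0 + ws) * n * Ts) = sin(w0 * n * Ts + 2*pi*n) = sin(w0 * n * Ts).
w0 = 3.0
x1 = [math.sin(w0 * n * Ts) for n in range(20)]
x2 = [math.sin((w0 + ws) * n * Ts) for n in range(20)]
assert all(abs(p - q) < 1e-9 for p, q in zip(x1, x2))
```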

The above derivation is remarkable! It amounts to the calculation of the transform of the output of a time-varying linear system in terms of the transform of the input, a feat possible only under very special circumstances. From this point on, the transform of the output of the sample-and-hold system can be calculated by multiplying $U_t(s)$ by

$$

G_h(s)=\mathcal{L}\{g_h(t)\}=\frac{1-e^{-s T_s}}{s}

$$

to obtain

$$

U_s(s) = G_h(s) \, U_t(s) = \frac{1 - e^{-s T_s}}{T_s \, s} \sum_{k = -\infty}^{\infty} U(s - j k \omega_s), \quad \operatorname{Re}\{s\} > \sigma_u

$$

since $G_h$ is entire (see P3.43).
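As a final numerical sanity check of $U_s(s) = G_h(s) U_t(s)$, the sketch below (with our arbitrary choices $u(t) = e^{-t}$ and $s = 1 + j$) integrates the held staircase directly against $e^{-st}$ and compares the result with $G_h(s)$ times the transform of the impulse train computed from the samples:

```python
import cmath
import math

Ts = 0.25
s = 1.0 + 1.0j
u = lambda t: math.exp(-t)                     # u(t) = e^{-t}, t >= 0

# Sample-and-hold staircase built from the samples of u
u_s = lambda t: u(math.floor(t / Ts) * Ts)

# Left side: integrate u_s(t) * e^{-s*t} over [0, 20] with the midpoint rule
h = Ts / 200
N = 80 * 200                                   # 80 hold intervals, 200 steps each
lhs = h * sum(u_s((i + 0.5) * h) * cmath.exp(-s * (i + 0.5) * h) for i in range(N))

# Right side: G_h(s) * U_t(s), with U_t(s) summed directly from the samples
Gh = (1 - cmath.exp(-s * Ts)) / s
Ut = sum(u(n * Ts) * cmath.exp(-s * n * Ts) for n in range(200))

assert abs(lhs - Gh * Ut) < 1e-5
```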

As you might have realized by now, working with sampled signals in continuous time is tedious and complicated. In future posts we will use the above formulas to provide a much simpler treatment of sampled signals and of discrete-time systems that can deal with sampled signals. We shall also discuss how to reconstruct a continuous-time signal from its samples.
