A continuous function on a closed interval is uniformly continuous
The notion of uniform continuity is a ``stronger'' version of (ordinary) continuity. If a function is uniformly continuous, it is continuous, but the converse does not hold in general (that is, a continuous function need not be uniformly continuous). However, a function that is continuous on a closed interval is always uniformly continuous on that interval.
Definition (Uniform continuity)
The function \(f(x)\) on an interval \(I\) is said to be uniformly continuous on \(I\) if it satisfies the following condition.
- For any \(\varepsilon > 0\), there exists \(\delta > 0\) such that, for all \(x, y\in I\), \(|x - y| < \delta\) implies \(|f(x) - f(y)| < \varepsilon\).
In a logical form, this condition is expressed as
\[ \forall \varepsilon > 0, \exists \delta > 0, \forall x,y\in I ~ (|x-y| < \delta \implies |f(x) - f(y)| < \varepsilon).\label{eq:unifcont} \]
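As a quick sanity check on the definition (a numerical illustration added here, not part of the original text), consider \(f(x) = 2x\): the single choice \(\delta = \varepsilon/2\) works at every point, since \(|f(x) - f(y)| = 2|x - y| < 2\delta = \varepsilon\) whenever \(|x - y| < \delta\).

```python
# For f(x) = 2x, one delta = eps / 2 works uniformly at every point:
# |f(x) - f(y)| = 2 |x - y| < 2 * delta = eps whenever |x - y| < delta.
eps = 0.01
delta = eps / 2

# Check at widely spread base points y, with offsets strictly inside (-delta, delta).
for y in (-1e6, -1.0, 0.0, 3.14, 1e6):
    for u in (-0.9 * delta, -0.5 * delta, 0.0, 0.5 * delta, 0.9 * delta):
        x = y + u
        assert abs(2 * x - 2 * y) < eps

print("delta =", delta, "works at every sampled point")
```

The same \(\delta\) passes the check no matter how large \(|y|\) is, which is exactly what uniformity means.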
Remark. Compare the above condition for uniform continuity with the condition for continuity on \(I\):
\[ \forall y\in I, \forall \varepsilon > 0, \exists \delta > 0, \forall x \in I ~ (|x - y| < \delta \implies |f(x) - f(y)| < \varepsilon). \label{eq:contI} \]
What is the difference? In uniform continuity, \(\delta\) does not depend on \(x\) or \(y\), whereas in (ordinary) continuity on an interval, \(\delta\) may depend on \(y\). The latter means that we may need to choose different values of \(\delta\) depending on where we are in the interval \(I\). In uniform continuity, on the other hand, we can choose a single \(\delta\) that works irrespective of where we are in \(I\). In this sense, uniform continuity imposes a more stringent condition on the function. □
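The dependence of \(\delta\) on location can be made concrete numerically. The sketch below is my own illustration (the helper `max_delta` and the grid search are not from the original text): it estimates, on a grid over \([0.001, 1]\), the largest \(\delta\) that works at a point \(y\) for \(f(x) = 1/x\) with \(\varepsilon = 0.1\). The admissible \(\delta\) shrinks as \(y\) approaches \(0\), so no single \(\delta\) serves the whole interval \((0, 1]\) and \(1/x\) is not uniformly continuous there.

```python
def max_delta(f, y, eps, lo, hi, n=200_000):
    """Largest delta (estimated on a grid of n+1 points in [lo, hi]) such that
    |x - y| < delta implies |f(x) - f(y)| < eps for all grid points x."""
    delta = hi - lo
    for i in range(n + 1):
        x = lo + (hi - lo) * i / n
        if abs(f(x) - f(y)) >= eps:
            # x violates the eps-bound, so delta must keep x out of reach of y.
            delta = min(delta, abs(x - y))
    return delta

eps = 0.1
for y in (0.5, 0.1, 0.01):
    # For f(x) = 1/x the workable delta shrinks rapidly as y -> 0.
    print(f"y = {y}: delta ~ {max_delta(lambda x: 1 / x, y, eps, 0.001, 1.0):.6f}")
```

By contrast, running the same search with \(f(x) = x\) returns \(\delta \approx \varepsilon\) at every \(y\), a constant independent of the location, as in the remark above.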
Theorem
If the function \(f(x)\) is continuous on the closed interval \([a,b]\), then \(f(x)\) is uniformly continuous on \([a,b]\).
Proof. Suppose, to the contrary, that \(f(x)\) is not uniformly continuous on \([a,b]\). Negating the condition in the definition of uniform continuity, we have:
- There exists some \(\varepsilon > 0\) such that, for all \(\delta > 0\), there exist \(x,y \in [a,b]\) such that \(|x - y| < \delta\) and \(|f(x) - f(y)| \geq \varepsilon\).
Taking \(\delta = \frac{1}{n}\) for each \(n = 1, 2, \cdots\), we obtain \(x_n, y_n \in [a,b]\) such that \(|x_n - y_n| < \frac{1}{n}\) and \(|f(x_n) - f(y_n)| \geq \varepsilon\). The sequences \(\{x_n\}\) and \(\{y_n\}\) so defined are bounded since they lie in \([a,b]\).
By the Bolzano-Weierstrass theorem, \(\{x_n\}\) contains a subsequence \(\{x_{n_k}\}\) that converges to some \(\alpha \in [a,b]\). Using the same indices \(n_1 < n_2 < \cdots\), we can define a subsequence \(\{y_{n_k}\}\) of \(\{y_n\}\). By the Bolzano-Weierstrass theorem again, \(\{y_{n_k}\}\) contains a further subsequence that converges to some \(\beta\in[a,b]\); by reindexing, let us denote this convergent subsequence by \(\{y_{n_k}\}\). After the reindexing, \(\{x_{n_k}\}\) still converges to \(\alpha\) (Theorem: a subsequence of a convergent sequence converges to the same limit). Then \(x_{n_k} - y_{n_k}\) converges to \(\alpha - \beta\). However, by construction, \(-\frac{1}{n_k} < x_{n_k} - y_{n_k} < \frac{1}{n_k}\) for every \(k\), and \(n_k \to \infty\) as \(k \to \infty\), so that \(\alpha - \beta = 0\), that is, \(\alpha = \beta\).
On the one hand, \(f(x)\) is continuous, so \(\lim_{k\to\infty}f(x_{n_k}) = \lim_{k \to\infty}f(y_{n_k}) = f(\alpha)\). On the other hand, \(|f(x_{n_k}) - f(y_{n_k})| \geq \varepsilon\) for every \(k\), so \(\lim_{k\to \infty}|f(x_{n_k}) - f(y_{n_k})| \geq \varepsilon\). Hence we have
\[0 = |f(\alpha) - f(\alpha)| = \lim_{k\to \infty}|f(x_{n_k}) - f(y_{n_k})| \geq \varepsilon > 0\]
which is a contradiction. ■
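To see where closedness of the interval enters, it may help to run the proof's contradiction sequences on an interval that is not closed. The sketch below is my own illustration (the specific choices \(f(x) = 1/x\), \(\varepsilon = 1\), \(x_n = 1/n\), \(y_n = 1/(n+1)\) are not from the original text): the hypotheses of the contradiction hold for every \(n\), but both sequences converge to \(0\), which lies outside \((0, 1]\), so the Bolzano-Weierstrass step cannot produce a limit point \(\alpha\) inside the interval and no contradiction arises.

```python
# f(x) = 1/x on (0, 1] with eps = 1: the interval is not closed, so the
# theorem does not apply, and these sequences witness the failure of
# uniform continuity.
f = lambda x: 1 / x
eps = 1.0

for n in (1, 10, 100, 1000):
    x_n, y_n = 1 / n, 1 / (n + 1)
    assert abs(x_n - y_n) < 1 / n        # the points get arbitrarily close...
    assert abs(f(x_n) - f(y_n)) >= eps   # ...yet the values stay eps apart
    # indeed |f(x_n) - f(y_n)| = |n - (n + 1)| = 1 for every n

# Both sequences converge to 0, which is NOT a point of (0, 1].
print("all checks passed; the common limit 0 lies outside (0, 1]")
```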