Introductory university-level calculus, linear algebra, abstract algebra, probability, statistics, and stochastic processes.
Limit of a univariate function
Let \(f(x)\) be a function. Suppose we move \(x\in\mathbb{R}\) towards \(a\) while keeping \(x \neq a\). If \(f(x)\) approaches a constant value \(\alpha\) irrespective of how \(x\) approaches \(a\), we say that \(f(x)\) converges to \(\alpha\) as \(x \to a\) and write \[\lim_{x\to a}f(x) = \alpha\] or \[f(x) \to \alpha \text{ as } x \to a.\]
Remark. \(a\) need not belong to \(\text{dom}(f)\) (the domain of \(f\)) as long as \(x\) can approach \(a\) arbitrarily closely. □
But what does this mean exactly? Here's a rigorous definition in terms of what is called the \(\varepsilon\)-\(\delta\) argument.
Definition (Limit of a function)
We say that the function \(f(x)\) converges to \(\alpha\) as \(x \to a\) and write
\[\lim_{x\to a}f(x) = \alpha\]
if the following condition is satisfied.
For any \(\varepsilon > 0\), there exists \(\delta > 0\) such that, for all \(x\in \text{dom}(f)\), if \(0 < |x - a| < \delta\) then \(|f(x) - \alpha| < \varepsilon\).
Remark. We are implicitly assuming \(\varepsilon, \delta \in\mathbb{R}\). □
Here's how this definition works. Suppose \(f(x) \to \alpha\) as \(x \to a\). Let us pick any positive real number \(\varepsilon\). However small this \(\varepsilon\) may be, if \(x\) is sufficiently close to \(a\), we always have \(|f(x) - \alpha| < \varepsilon\). Here ``sufficiently close'' means that we can pick some sufficiently small positive real number \(\delta\) such that \(0 < |x - a| < \delta\) implies \(|f(x) - \alpha| < \varepsilon\). In other words, we move \(x\) closer and closer to \(a\) until \(|f(x) - \alpha| < \varepsilon\) holds. Conversely, if this operation is possible for any \(\varepsilon\), it makes sense to say that \(f(x)\) converges to \(\alpha\) as \(x \to a\).
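The procedure described above, namely picking \(\varepsilon\) and then shrinking \(\delta\) until the implication holds, can be sketched numerically. The following is a minimal illustration (not a proof: it only checks the condition on a finite sample grid), with the function and helper name `find_delta` chosen here for illustration.

```python
def find_delta(f, a, alpha, eps, delta0=1.0, shrink=0.5, n=2001):
    """Halve delta until 0 < |x - a| < delta implies |f(x) - alpha| < eps
    on a sample grid over the open interval (a - delta, a + delta).
    This gives numerical evidence only, not a proof."""
    delta = delta0
    while delta > 1e-12:
        # sample points strictly inside (a - delta, a + delta); since n is
        # odd, 2*k/n - 1 is never exactly 0, so x = a is never sampled
        xs = (a + delta * (2 * k / n - 1) for k in range(1, n))
        if all(abs(f(x) - alpha) < eps for x in xs):
            return delta
        delta *= shrink
    return None

# Hypothetical usage with the example below: f(x) = x^2 + 1, a = 1, alpha = 2
print(find_delta(lambda x: x**2 + 1, a=1.0, alpha=2.0, eps=0.1))  # → 0.03125
```

Halving \(\delta\) mirrors the informal description: we keep moving \(x\) closer to \(a\) (by shrinking the allowed window) until \(|f(x) - \alpha| < \varepsilon\) holds everywhere in the window.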
Example. Consider \(f(x) = x^2 + 1\). We have
\[\lim_{x \to 1}f(x) = 2.\]
Let \(\varepsilon = 0.1\). Let us find \(\delta > 0\) such that \(0 < |x - 1| < \delta\) implies \(|f(x) - 2| < \varepsilon\). Note that \(|f(x) - 2| = |x^2 - 1| < 0.1\) is equivalent to \(0.9 < x^2 < 1.1\), that is, \(\sqrt{0.9} < x < \sqrt{1.1}\) (for \(x > 0\)).
Since \(\sqrt{0.9} - 1 = -0.0513\cdots\) and \(\sqrt{1.1} - 1 = 0.0488\cdots\), if
\[|x - 1| < \sqrt{1.1} - 1,\]
then we have
\[|f(x) - 2| < 0.1 = \varepsilon.\]
So \(\delta\) can be any positive number less than \(\sqrt{1.1} - 1\). For example, let \(\delta = 0.04\). Then \(|x - 1| < 0.04\) implies \(0.96 < x < 1.04\), which implies \(-0.0784 < x^2 - 1 < 0.0816\), so \(|f(x) - 2| < 0.1 = \varepsilon\). □
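The specific claim in this example can be spot-checked numerically. The sketch below (with the hypothetical helper `holds`) samples the punctured interval \((a - \delta, a + \delta)\setminus\{a\}\) and tests whether \(|f(x) - \alpha| < \varepsilon\) at every sampled point; again, this is evidence, not a proof.

```python
def holds(f, a, alpha, eps, delta, n=2001):
    """On a sample grid over (a - delta, a + delta) excluding a,
    check whether |f(x) - alpha| < eps at every sampled point."""
    xs = (a + delta * (2 * k / n - 1) for k in range(1, n))
    return all(abs(f(x) - alpha) < eps for x in xs)

f = lambda x: x**2 + 1
print(holds(f, a=1.0, alpha=2.0, eps=0.1, delta=0.04))  # → True
print(holds(f, a=1.0, alpha=2.0, eps=0.1, delta=0.05))  # → False
```

The second call fails because \(0.05 > \sqrt{1.1} - 1 \approx 0.0488\): some sampled \(x\) satisfy \(|x - 1| < 0.05\) yet \(x^2 - 1 > 0.1\), exactly as the computation above predicts.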