Let \(f(x)\) and \(g(x)\) be functions that are continuous on \([a,b]\) and differentiable on \((a,b)\). Suppose that \(g'(x) \neq 0\) for all \(x \in (a,b)\). Then, there exists a \(c\in (a,b)\) such that
\[\frac{f(b) - f(a)}{g(b) - g(a)} = \frac{f'(c)}{g'(c)}.\]
(Note that \(g(b) \neq g(a)\): otherwise Rolle's theorem would give a point where \(g'\) vanishes.)
Proof. For convenience we set \(f(a) = g(a) = 0\) so that \(f(x)\) and \(g(x)\) are defined on \([a, b)\). Assume conditions 1, 2, and 3 hold.
By condition 1, \(f(x)\) and \(g(x)\) are continuous on \([a,b)\). For any \(x\in(a,b)\), \(f(x)\) and \(g(x)\) are continuous on \([a,x]\) and differentiable on \((a,x)\). By condition 2, for all \(t \in (a,x)\), \(g'(t) \neq 0\). Thus, by Cauchy's mean value theorem, there exists a \(c_x \in (a,x)\) such that
\[\frac{f(x)}{g(x)} = \frac{f(x) - f(a)}{g(x) - g(a)} = \frac{f'(c_x)}{g'(c_x)},\]
where the first equality uses \(f(a) = g(a) = 0\).
\(c_x \to a + 0\) as \(x \to a +0\) and, by condition 3, \(\lim_{x\to a+0}\frac{f'(c_x)}{g'(c_x)}\) exists. Therefore \(\lim_{x\to a+0}\frac{f(x)}{g(x)}\) also exists and
\[\lim_{x\to a+0}\frac{f(x)}{g(x)} = \lim_{x\to a+0}\frac{f'(c_x)}{g'(c_x)} = \lim_{x\to a+0}\frac{f'(x)}{g'(x)}.\]
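As a quick numerical sanity check of this \(0/0\) case (a sketch of ours, not part of the proof): take \(f(x) = \sin x\) and \(g(x) = x\) at \(a = 0\). Both \(f(x)/g(x)\) and \(f'(x)/g'(x)\) should approach the same right limit, namely \(1\).

```python
import math

# Illustration (our example) of L'Hopital's rule in the 0/0 case at a = 0,
# with f(x) = sin(x) and g(x) = x; both tend to 0 as x -> 0+.
f = math.sin
g = lambda x: x
fp = math.cos          # f'(x) = cos(x)
gp = lambda x: 1.0     # g'(x) = 1

for x in [0.1, 0.01, 0.001]:
    ratio = f(x) / g(x)          # f(x)/g(x)
    deriv_ratio = fp(x) / gp(x)  # f'(x)/g'(x)
    print(f"x={x}: f/g={ratio:.6f}, f'/g'={deriv_ratio:.6f}")

# Both ratios approach the common limit 1 as x -> 0+.
```

The printed rows show the two ratios agreeing more and more closely as \(x \to 0+\), as the proof predicts.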
Let \(L = \lim_{x\to a+0}\frac{f'(x)}{g'(x)}\), which exists by condition 3. By the definition of the right limit, for any \(\varepsilon > 0\), there exists a \(\delta_1 > 0\) such that \(a < x < a + \delta_1\) implies \(\left|\frac{f'(x)}{g'(x)} - L\right| < \varepsilon\).
Since \(\lim_{x\to a+0}g(x) = \pm\infty\), there exists \(\delta_2 > 0\) such that \(a < x < a + \delta_2\) implies \(|g(x)| > 1\).
Let \(\delta' = \min\{\delta_1, \delta_2\}\) and \(d = a + \delta'\). By Cauchy's mean value theorem, for all \(x \in (a, d)\), there exists a \(c_x \in (x, d)\) such that
\[\frac{f(x) - f(d)}{g(x) - g(d)} = \frac{f'(c_x)}{g'(c_x)}.\]
Rearranging, we obtain
\[\frac{f(x)}{g(x)} = \frac{f'(c_x)}{g'(c_x)} + r(x), \quad \text{where } r(x) = \frac{f(d) - g(d)\frac{f'(c_x)}{g'(c_x)}}{g(x)}.\tag{eq:rx}\]
Here, \(f(d)\) and \(g(d)\) are finite constants, and \(\lim_{x\to a+0}\frac{f'(x)}{g'(x)}\) converges to a finite value (by condition 3). Hence, by condition 1' (\(\lim_{x\to a + 0}g(x) = \pm\infty\)), \(\lim_{x\to a+0}r(x) = 0\). In other words, for any \(\varepsilon > 0\), there exists a \(\delta_3 > 0\) such that \(a < x < a + \delta_3\) implies \(|r(x)| < \varepsilon\).
Let \(\delta = \min\{\delta', \delta_3\}\). By Eq. (eq:rx), we have
\[\frac{f(x)}{g(x)} - L = \frac{f'(c_x)}{g'(c_x)} - L + r(x).\]
Hence, for all \(x\) with \(a < x < a + \delta\), we have \(c_x \in (x, d) \subset (a, a + \delta_1)\), so
\[\left|\frac{f(x)}{g(x)} - L\right| \leq \left|\frac{f'(c_x)}{g'(c_x)} - L\right| + |r(x)| < \varepsilon + \varepsilon = 2\varepsilon.\]
Since \(\varepsilon > 0\) is arbitrary, \(\lim_{x\to a+0}\frac{f(x)}{g(x)} = L\).
Proof. We prove the case when \(x \to \infty\) and condition 1 holds.
Since the open interval \((b, \infty)\) may be replaced by \((b', \infty)\) for any real number \(b' > b\), we may assume without loss of generality that \(b > 0\).
Let \(x = \frac{1}{t}\). As \(x \to \infty\), \(t\to +0\). By conditions 1 and 2,
\[\frac{\frac{d}{dt}f\left(\frac{1}{t}\right)}{\frac{d}{dt}g\left(\frac{1}{t}\right)} = \frac{-\frac{1}{t^2}f'\left(\frac{1}{t}\right)}{-\frac{1}{t^2}g'\left(\frac{1}{t}\right)} = \frac{f'\left(\frac{1}{t}\right)}{g'\left(\frac{1}{t}\right)}\]
so that, by condition 3, the right limit \(\lim_{t\to +0}\frac{\frac{d}{dt}f\left(\frac{1}{t}\right)}{\frac{d}{dt}g\left(\frac{1}{t}\right)}\) exists. Therefore, by L'Hôpital's rule (1), we have the limit
\[\lim_{x\to\infty}\frac{f(x)}{g(x)} = \lim_{t\to +0}\frac{f\left(\frac{1}{t}\right)}{g\left(\frac{1}{t}\right)} = \lim_{t\to +0}\frac{f'\left(\frac{1}{t}\right)}{g'\left(\frac{1}{t}\right)} = \lim_{x\to\infty}\frac{f'(x)}{g'(x)}.\]
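A numerical sketch of the \(x \to \infty\) case (our example, not part of the proof): for \(f(x) = \ln x\) and \(g(x) = x\), an \(\frac{\infty}{\infty}\) form, both \(f(x)/g(x)\) and \(f'(x)/g'(x) = \frac{1}{x}\) tend to \(0\).

```python
import math

# Illustration (our example) of L'Hopital's rule as x -> infinity,
# with f(x) = ln(x) and g(x) = x (an infinity/infinity form).
for x in [1e2, 1e4, 1e6]:
    ratio = math.log(x) / x        # f(x)/g(x)
    deriv_ratio = (1.0 / x) / 1.0  # f'(x)/g'(x) = (1/x)/1
    print(f"x={x:.0e}: f/g={ratio:.2e}, f'/g'={deriv_ratio:.2e}")

# Both ratios tend to the common limit 0 as x grows.
```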
Open sets

In \(\mathbb{R}\), we have the notion of an open interval such as \((a, b) = \{x \in \mathbb{R} | a < x < b\}\). We want to extend this idea to \(\mathbb{R}^n\). We also introduce the notions of bounded sets and closed sets in \(\mathbb{R}^n\).

Recall that the \(\varepsilon\)-neighborhood of a point \(x\in\mathbb{R}^n\) is defined as \(N_{\varepsilon}(x) = \{y \in \mathbb{R}^n | d(x, y) < \varepsilon \}\) where \(d(x,y)\) is the distance between \(x\) and \(y\).

Definition (Open set). A subset \(U\) of \(\mathbb{R}^n\) is said to be an open set if the following holds: \[\forall x \in U ~ \exists \delta > 0 ~ (N_{\delta}(x) \subset U).\tag{Eq:OpenSet}\] That is, for every point in an open set \(U\), we can always find an open ball centered at that point that is included in \(U\). See the following figure.

Perhaps it is instructive to see what is not an open set. Negating (Eq:OpenSet), we have \[\exists x \in U ~ \forall \delta > 0 ~ (N_{\delta}(x) \not\subset U).\] That is, \(U\) fails to be open if it contains a point \(x\) such that every \(\delta\)-neighborhood of \(x\) sticks out of \(U\).
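To make (Eq:OpenSet) concrete, here is a small numerical sketch (the function names and the choice of set are ours): for the open unit disk \(U = \{y \in \mathbb{R}^2 : d(0, y) < 1\}\), every point of \(U\) admits an explicit \(\delta > 0\) with \(N_{\delta}(x) \subset U\), namely \(\delta = 1 - d(0, x)\).

```python
import math

def dist(x, y):
    """Euclidean distance between points x and y in R^2."""
    return math.hypot(x[0] - y[0], x[1] - y[1])

def delta_for_unit_disk(x):
    """For a point x of the open unit disk U = {y : d(0, y) < 1},
    return a delta > 0 with N_delta(x) contained in U; return None
    if x is not in U (so the open-set condition does not apply)."""
    r = dist((0.0, 0.0), x)
    if r < 1.0:
        return 1.0 - r  # any point within this distance of x stays inside U
    return None

print(delta_for_unit_disk((0.6, 0.0)))  # interior point of U: delta = 1 - 0.6
print(delta_for_unit_disk((1.0, 0.0)))  # boundary point, not in U: None
```

By the triangle inequality, every \(y\) with \(d(x, y) < 1 - d(0, x)\) satisfies \(d(0, y) < 1\), so the returned \(\delta\) indeed works.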
We would like to study multivariate functions (i.e., functions of many variables), continuous multivariate functions in particular. To define continuity, we need a measure of "closeness" between points. One such measure is the Euclidean distance. The set \(\mathbb{R}^n\) (with \(n \in \mathbb{N}\)) equipped with the Euclidean distance function is called a Euclidean space. This is the space where our functions of interest live.

The real line is a geometric representation of \(\mathbb{R}\), the set of all real numbers. That is, each \(a \in \mathbb{R}\) is represented as the point \(a\) on the real line. The coordinate plane, or the \(x\)-\(y\) plane, is a geometric representation of \(\mathbb{R}^2\), the set of all pairs of real numbers. Each pair of real numbers \((a, b)\) is visualized as the point \((a, b)\) in the plane.

Remark. Recall that \(\mathbb{R}^2 = \mathbb{R}\times\mathbb{R} = \{(x, y) | x, y \in \mathbb{R}\}\) is the Cartesian product of \(\mathbb{R}\) with itself.
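The Euclidean distance on \(\mathbb{R}^n\) is \(d(x, y) = \sqrt{\sum_{i=1}^{n}(x_i - y_i)^2}\); a minimal sketch (the function name is ours):

```python
import math

def euclidean_distance(x, y):
    """Euclidean distance d(x, y) between two points of R^n,
    given as equal-length sequences of coordinates."""
    if len(x) != len(y):
        raise ValueError("points must have the same dimension")
    return math.sqrt(sum((xi - yi) ** 2 for xi, yi in zip(x, y)))

print(euclidean_distance((0, 0), (3, 4)))        # 5.0: a 3-4-5 triangle in R^2
print(euclidean_distance((1, 2, 3), (1, 2, 3)))  # 0.0: identical points in R^3
```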
Newton's method (the Newton-Raphson method) is a very powerful numerical method for solving nonlinear equations. Suppose we'd like to solve a nonlinear equation \(f(x) = 0\), where \(f(x)\) is a (twice) differentiable nonlinear function. Newton's method generates a sequence of numbers \(c_1, c_2, c_3, \cdots\) that converges to a solution of the equation. That is, if \(\alpha\) is a solution (i.e., \(f(\alpha) = 0\)), then \[\lim_{n\to\infty}c_n = \alpha,\] and this sequence \(\{c_n\}\) is generated by a series of linear approximations of the function \(f(x)\).

Theorem (Newton's method). Let \(f(x)\) be a function that is twice differentiable on an open interval \(I\) that contains the closed interval \([a, b]\) (i.e., \([a,b]\subset I\)) and satisfies the following conditions: \(f(a) < 0\) and \(f(b) > 0\); for all \(x\in [a, b]\), \(f'(x) > 0\) and \(f''(x) > 0\). Let us define the sequence \(\{c_n\}\) by \[c_1 = b, \qquad c_{n+1} = c_n - \frac{f(c_n)}{f'(c_n)} \quad (n = 1, 2, \cdots).\] Then \(\{c_n\}\) converges to the unique solution \(\alpha\) of \(f(x) = 0\) in \([a, b]\).
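The iteration \(c_{n+1} = c_n - \frac{f(c_n)}{f'(c_n)}\) is easy to implement; here is a minimal sketch (the function names and stopping tolerance are our choices). For \(f(x) = x^2 - 2\) on \([1, 2]\), the hypotheses of the theorem hold (\(f(1) < 0\), \(f(2) > 0\), and \(f' > 0\), \(f'' > 0\) on \([1,2]\)), so the iteration starting at \(c_1 = b = 2\) converges to \(\alpha = \sqrt{2}\).

```python
def newton(f, fprime, c, tol=1e-12, max_iter=100):
    """Newton's method: iterate c_{n+1} = c_n - f(c_n)/f'(c_n)
    starting from c until |f(c)| <= tol (or max_iter steps)."""
    for _ in range(max_iter):
        if abs(f(c)) <= tol:
            return c
        c = c - f(c) / fprime(c)
    return c

# Solve x^2 - 2 = 0 on [1, 2], starting from c_1 = b = 2.
f = lambda x: x * x - 2.0
fprime = lambda x: 2.0 * x
root = newton(f, fprime, 2.0)
print(root)  # approximately sqrt(2) = 1.41421356...
```

Each step replaces \(f\) by its tangent line at \(c_n\) and takes the tangent's zero as the next iterate, which is exactly the "series of linear approximations" described above.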