Introductory university-level calculus, linear algebra, abstract algebra, probability, statistics, and stochastic processes.
Partial and total differentiation of multivariate functions
A multivariate function may be differentiated with respect to each variable, which is called partial differentiation. By combining all the partial differentiations, we define total differentiation. The essence of (total) differentiation is a linear approximation. In the case of a univariate function, we approximate the function \(y = f(x)\) in a neighborhood of a point, say \(x = a\), by the tangent line \(y = f'(a)(x - a) + f(a)\). In the case of a multivariate function, we approximate the function \(y = f(x_1, x_2, \cdots, x_n)\) in a neighborhood of a point, say \(a = (a_1, a_2, \cdots, a_n)\), by the tangent hyperplane at the point \(a\).
Partial differentiation
Let \(f(x,y)\) be a function on an open region \(U\subset \mathbb{R}^2\) and \((a,b) \in U\). If we fix \(y = b\) in \(f(x,y)\), we have a univariate function \(g(x) = f(x,b)\). Since \(U\) is open, there exists \(\delta > 0\) such that \(N_{\delta}(a,b) \subset U\). Therefore \(g(x)\) is defined on the open interval \((a - \delta, a + \delta)\). In other words, the function \(g(x)\) is defined in a neighborhood of \(x = a\).
Remark. We write \(N_{\delta}(a,b)\) (rather than \(N_{\delta}((a,b))\), to save keystrokes!) to mean the \(\delta\)-neighborhood of the point \((a,b)\in \mathbb{R}^2\). □
If \(g(x)\) is differentiable at \(x = a\), its differential coefficient \(\frac{dg}{dx}(a)\) is called the partial differential coefficient with respect to \(x\) (at \((a,b)\)) and is denoted by \(\frac{\partial f}{\partial x}(a,b)\) or \(f_x(a,b)\).
Remark. Here is one way to understand the partial differential coefficient. We have a surface \(z = f(x,y)\) in \(\mathbb{R}^3\). Find its cross-section with the plane \(y = b\). This cross-section is a curve defined by \(z = g(x) = f(x,b)\). The partial differential coefficient \(\frac{\partial f}{\partial x}(a,b) = \frac{dg}{dx}(a)\) is the slope of the tangent line of the curve at \(x = a\). □
Similarly, if we fix \(x = a\) in \(f(x,y)\), we have a univariate function \(h(y) = f(a,y)\) which is defined in a neighborhood of \(y = b\). If \(\frac{dh}{dy}(b)\) exists, it is called the partial differential coefficient with respect to \(y\) (at \((a,b)\)) and denoted \(\frac{\partial f}{\partial y}(a,b)\) or \(f_y(a,b)\).
Example. Let \(f(x,y) = x^2y + 2xy^2 - y^3\). Let us find the partial differential coefficients \(f_x(a,b)\) and \(f_y(a,b)\). Letting \(y = b\), we have \(f(x,b) = x^2b + 2xb^2 - b^3\). Differentiating the right-hand side with respect to \(x\), we have \(2xb + 2b^2\). Setting \(x=a\), we have \[f_x(a,b) = 2ab + 2b^2.\]
Similarly, we have \[f_y(a,b) = a^2 + 4ab - 3b^2.\] □
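As a quick numerical sanity check (not part of the original argument), we can approximate these partial differential coefficients by central differences; the step size \(h = 10^{-6}\) and the test point \((a,b) = (1,2)\) are arbitrary choices. At that point, the formulas above give \(f_x(1,2) = 12\) and \(f_y(1,2) = -3\).

```python
# Central-difference check of the partial differential coefficients of
# f(x, y) = x^2 y + 2xy^2 - y^3 at the (arbitrary) point (a, b) = (1, 2).

def f(x, y):
    return x**2 * y + 2 * x * y**2 - y**3

def fx(a, b, h=1e-6):
    # partial w.r.t. x: differentiate g(x) = f(x, b) at x = a
    return (f(a + h, b) - f(a - h, b)) / (2 * h)

def fy(a, b, h=1e-6):
    # partial w.r.t. y: differentiate h(y) = f(a, y) at y = b
    return (f(a, b + h) - f(a, b - h)) / (2 * h)

a, b = 1.0, 2.0
print(fx(a, b), 2*a*b + 2*b**2)       # both close to 12
print(fy(a, b), a**2 + 4*a*b - 3*b**2)  # both close to -3
```

Fixing one variable and differencing in the other is exactly the "fix \(y=b\), differentiate \(g(x)=f(x,b)\)" recipe from the definition.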
Partial derivatives
If the partial differential coefficient \(\frac{\partial f}{\partial x}(a,b)\) exists at every \((a,b)\in U\), then it defines a function on \(U\). This function is called the partial derivative of \(f(x,y)\) with respect to \(x\) and is denoted \(\frac{\partial f}{\partial x}(x,y)\) or \(f_x(x,y)\). The partial derivative \(\frac{\partial f}{\partial y}(x,y) = f_y(x,y)\) with respect to \(y\) is defined similarly.
Total differentiation
Let us review the notion of differentiation of univariate functions. We defined the differential coefficient of a univariate function \(f(x)\) at \(x=a\) by
\[f(x) = f(a) + f'(a)(x - a) + o(|x - a|) \text{ as $x \to a$},\]
where \(o\) is Landau's little-o. This equation suggests that the function \(y = f(x)\) is approximated by a linear function, namely the tangent of \(y=f(x)\) at \(x = a\),
\[y = f(a) + f'(a)(x - a).\]
Conversely, suppose that the function \(y = f(x)\) can be approximated by a linear function in a neighborhood of \(x = a\):
\[f(x) = f(a) + m(x - a) + o(|x - a|).\]
From this equation, we can see that
\[\lim_{x \to a}\frac{f(x) - f(a)}{x - a} = m.\]
This means that \(y = f(x)\) is differentiable at \(x=a\) and \(f'(a) = m\).
In summary, \(f(x)\) is approximated by the linear function \(f(a) + f'(a)(x-a)\) in a neighborhood of \(x = a\), and its slope is the differential coefficient \(f'(a)\) itself. Such linear approximation is the essence of differentiation.
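The little-o condition can be seen numerically: dividing the remainder of the linear approximation by \(|x - a|\) gives a ratio that tends to zero. Here is a minimal sketch with the (arbitrarily chosen) function \(f(x) = x^3\) at \(a = 1\), where \(f'(1) = 3\).

```python
# The remainder f(x) - [f(a) + f'(a)(x - a)] is o(|x - a|):
# the ratio remainder/|x - a| tends to 0 as x -> a.

def f(x):
    return x**3

A = 1.0
FPRIME = 3.0  # f'(x) = 3x^2, so f'(1) = 3

def remainder_ratio(h):
    """(f(A + h) - linear approximation) / |h|; tends to 0 as h -> 0."""
    x = A + h
    remainder = f(x) - (f(A) + FPRIME * (x - A))
    return remainder / abs(h)

for k in range(1, 6):
    print(10**(-k), remainder_ratio(10**(-k)))  # ratio shrinks with h
```

Algebraically the remainder here is \(3h^2 + h^3\), so the ratio is \(3h + h^2 \to 0\), matching the printed values.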
The same argument applies to multivariate functions. Differentiating the function \(z = f(x,y)\) at the point \(P= (a,b)\) is to approximate it by a linear function
\[z = f(a,b) + m(x-a) + n(y-b).\]
That is, for the point \(X= (x,y)\) in a neighborhood of \(P = (a,b)\), we consider the linear approximation
\[f(x,y) = f(a,b) + m(x-a) + n(y-b) + o(\|X - P\|) \text{ as $X \to P$}, \tag{Eq:LA}\]
where \(\|X - P\| = \sqrt{(x-a)^2 + (y-b)^2} = d(X,P)\) is the distance between the points \(X\) and \(P\). Setting \(y = b\) in this equation, we have
\[\lim_{x \to a}\frac{f(x, b) - f(a,b)}{x - a} = m.\]
That is, \(f_x(a,b) = m.\) Similarly, we can show that \(f_y(a,b) = n\). In summary, if the linear approximation (Eq:LA) holds, it must be that
\[m = f_x(a,b), \quad n = f_y(a,b).\]
Let \(U\) be an open region in \(\mathbb{R}^2\) and \(P=(a,b) \in U\). The function \(f(x,y)\) on \(U\) is said to be (totally) differentiable at \((a,b)\) if there exist constants \(m\) and \(n\) such that
\[f(x,y) = f(a,b) + m(x-a) + n(y-b) + o(\|X-P\|) \text{ as $X = (x,y)\to P = (a,b)$}\]
\(f(x,y)\) is said to be (totally) differentiable on \(U\) if it is (totally) differentiable at every point in \(U\).
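Total differentiability can also be illustrated numerically (a sketch with an arbitrary test function and point, not part of the original argument): for \(f(x,y) = x^2 y\) at \(P = (1,2)\), take \(m = f_x(1,2) = 4\) and \(n = f_y(1,2) = 1\) and watch the remainder divided by \(\|X - P\|\) tend to zero as \(X \to P\).

```python
import math

# Total differentiability of f(x, y) = x^2 y at P = (1, 2):
# f(X) - [f(P) + m(x - a) + n(y - b)] is o(||X - P||).

def f(x, y):
    return x**2 * y

A, B = 1.0, 2.0
M, N = 2 * A * B, A**2  # f_x = 2xy and f_y = x^2, evaluated at (1, 2)

def remainder_ratio(t):
    """Approach P along the (arbitrary) direction (1, 2) with step t;
    return (f(X) - linear approximation) / ||X - P||."""
    x, y = A + t, B + 2 * t
    dist = math.hypot(x - A, y - B)
    remainder = f(x, y) - (f(A, B) + M * (x - A) + N * (y - B))
    return remainder / dist

for k in range(1, 6):
    print(10**(-k), remainder_ratio(10**(-k)))  # ratio tends to 0
```

A single direction of approach does not prove differentiability, of course; the definition requires the ratio to vanish however \(X\) approaches \(P\).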
Remark. The word "totally" in "totally differentiable" is used in contrast to "partially differentiable." However, "totally" may be omitted. If we simply say, "a multivariate function is differentiable," it means the function is totally differentiable. □
From the above discussion, if the function \(f(x,y)\) is totally differentiable at \((a,b)\), then it is partially differentiable at \((a,b)\), and \(m = f_x(a,b)\) and \(n = f_y(a,b)\). (The converse is not necessarily true; we will see such an example in a later post.) The linear function
\[z = f(a,b) + f_x(a,b)(x - a) + f_y(a,b)(y - b)\]
defines the tangent plane of \(z = f(x,y)\) at \((a,b)\).
Remark. More generally, when the domain is in \(\mathbb{R}^n\), for the function \(y = f(x) = f(x_1, x_2,\cdots, x_n)\) at the point \(a = (a_1, a_2, \cdots, a_n)\), we have the linear function
\[y = f(a) + \sum_{i=1}^{n}\frac{\partial f}{\partial x_i}(a)(x_i - a_i)\]
that is the tangent hyperplane of \(y = f(x_1, x_2, \cdots, x_n)\) at \(a = (a_1, a_2, \cdots, a_n)\). □
Example. Let us find the equation of the tangent plane of the surface defined by the function \(z = 2x^3 + y^2\) at \((-1, 2, 2)\) (make sure this point indeed belongs to the given surface). Let \(f(x,y) = 2x^3 + y^2\). Then \(f_x(x,y) = 6x^2\) and \(f_y(x,y) = 2y\), so \(f_x(-1,2) = 6\) and \(f_y(-1,2) = 4\). Hence the tangent plane is
\[z = 2 + 6(x + 1) + 4(y - 2) = 6x + 4y.\] □
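Since \(f_x(x,y) = 6x^2\) and \(f_y(x,y) = 2y\) give \(f_x(-1,2) = 6\) and \(f_y(-1,2) = 4\), the tangent plane works out to \(z = 6x + 4y\). As a numerical sanity check (a sketch, not part of the original post), the gap between the surface and this plane should be \(o\) of the distance to \((-1, 2)\).

```python
import math

# The gap between z = 2x^3 + y^2 and the plane z = 6x + 4y near
# (-1, 2) should shrink faster than the distance to (-1, 2).

def f(x, y):
    return 2 * x**3 + y**2

def plane(x, y):
    return 6 * x + 4 * y

def gap_ratio(t):
    """Approach (-1, 2) along an (arbitrarily chosen) direction."""
    x, y = -1.0 + t, 2.0 - t
    dist = math.hypot(x + 1.0, y - 2.0)
    return (f(x, y) - plane(x, y)) / dist

for k in range(1, 6):
    print(10**(-k), gap_ratio(10**(-k)))  # ratio tends to 0
```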