Partial and total differentiation of multivariate functions
A multivariate function may be differentiated with respect to each variable separately, which is called partial differentiation. By combining all the partial differentiations, we define total differentiation. The essence of (total) differentiation is linear approximation. In the case of a univariate function, we approximate the function \(y = f(x)\) in a neighborhood of a point, say \(x = a\), by the tangent line \(y = f'(a)(x - a) + f(a)\). In the case of a multivariate function, we approximate the function \(y = f(x_1, x_2, \cdots, x_n)\) in a neighborhood of a point, say \(a = (a_1, a_2, \cdots, a_n)\), by the tangent hyperplane at the point \(a\).
Partial differentiation
Let \(f(x,y)\) be a function on an open region \(U\subset \mathbb{R}^2\) and \((a,b) \in U\). If we fix \(y = b\) in \(f(x,y)\), we have a univariate function \(g(x) = f(x,b)\). Since \(U\) is open, there exists \(\delta > 0\) such that \(N_{\delta}(a,b) \subset U\). Therefore \(g(x)\) is defined on the open interval \((a - \delta, a + \delta)\). In other words, the function \(g(x)\) is defined in a neighborhood of \(x = a\).
Remark. We write \(N_{\delta}(a,b)\) (rather than \(N_{\delta}((a,b))\), to save keystrokes!) to mean the \(\delta\)-neighborhood of the point \((a,b)\in \mathbb{R}^2\). □
If \(g(x)\) is differentiable at \(x = a\), its differential coefficient is called the partial differential coefficient with respect to \(x\) (at \((a,b)\)) and \(\frac{dg}{dx}(a)\) is denoted as \(\frac{\partial f}{\partial x}(a,b)\) or \(f_x(a,b)\).
Remark. Here is one way to understand the partial differential coefficient. We have a surface \(z = f(x,y)\) in \(\mathbb{R}^3\). Find its cross-section with the plane \(y = b\). This cross-section is a curve defined by \(z = g(x) = f(x,b)\). The partial differential coefficient \(\frac{\partial f}{\partial x}(a,b) = \frac{dg}{dx}(a)\) is the slope of the tangent line of the curve at \(x = a\). □
Similarly, if we fix \(x = a\) in \(f(x,y)\), we have a univariate function \(h(y) = f(a,y)\) which is defined in a neighborhood of \(y = b\). If \(\frac{dh}{dy}(b)\) exists, it is called the partial differential coefficient with respect to \(y\) (at \((a,b)\)) and denoted \(\frac{\partial f}{\partial y}(a,b)\) or \(f_y(a,b)\).
Example. Let \(f(x,y) = x^2y + 2xy^2 - y^3\). Let us find the partial differential coefficients \(f_x(a,b)\) and \(f_y(a,b)\). Letting \(y = b\), we have \(f(x,b) = x^2b + 2xb^2 - b^3\). Differentiating the right-hand side with respect to \(x\), we have \(2xb + 2b^2\). Setting \(x=a\), we have \[f_x(a,b) = 2ab + 2b^2.\]
Similarly, we have \[f_y(a,b) = a^2 + 4ab - 3b^2.\] □
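The partial differential coefficients computed in the example above can be checked numerically with finite differences. Here is a minimal sketch (the helper names `partial_x` and `partial_y`, the test point \((a,b) = (1,2)\), and the step size \(h\) are my own choices for illustration, not from the post):

```python
# Numerically verify f_x(a,b) = 2ab + 2b^2 and f_y(a,b) = a^2 + 4ab - 3b^2
# for f(x,y) = x^2 y + 2xy^2 - y^3, using central differences.

def f(x, y):
    return x**2 * y + 2 * x * y**2 - y**3

def partial_x(f, a, b, h=1e-6):
    # Fix y = b and differentiate in x (central difference).
    return (f(a + h, b) - f(a - h, b)) / (2 * h)

def partial_y(f, a, b, h=1e-6):
    # Fix x = a and differentiate in y (central difference).
    return (f(a, b + h) - f(a, b - h)) / (2 * h)

a, b = 1.0, 2.0
print(partial_x(f, a, b))  # close to 2ab + 2b^2 = 12
print(partial_y(f, a, b))  # close to a^2 + 4ab - 3b^2 = -3
```

The central difference agrees with the closed-form coefficients to within the discretization error \(O(h^2)\).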
Partial derivatives
If the partial differential coefficient \(\frac{\partial f}{\partial x}(a,b)\) exists at every \((a,b)\in U\), then it defines a function on \(U\). This function is called the partial derivative of \(f(x,y)\) with respect to \(x\) and is denoted \(\frac{\partial f}{\partial x}\) or \(f_x\). The partial derivative of \(f(x,y)\) with respect to \(y\), denoted \(\frac{\partial f}{\partial y}\) or \(f_y\), is defined similarly.
Total differentiation
Let us review the notion of differentiation of univariate functions. We defined the differential coefficient \(f'(a)\) of a univariate function \(f(x)\) at \(x=a\) by
\[f(x) = f(a) + f'(a)(x - a) + o(|x - a|) \text{ as $x \to a$},\]
where \(o\) is Landau's little-o. This equation suggests that the function \(y = f(x)\) is approximated by a linear function, namely the tangent line of \(y=f(x)\) at \(x = a\),
\[y = f(a) + f'(a)(x - a).\]
Conversely, suppose that the function \(y = f(x)\) can be approximated by a linear function in a neighborhood of \(x = a\):
\[f(x) = f(a) + m(x - a) + o(|x - a|).\]
From this equation, we can see that
\[\lim_{x \to a}\frac{f(x) - f(a)}{x - a} = m.\]
This means that \(y = f(x)\) is differentiable at \(x=a\) and \(f'(a) = m\).
In summary, \(f(x)\) is approximated by the linear function \(f(a) + f'(a)(x-a)\) in a neighborhood of \(x = a\), and the slope of this linear function is the differential coefficient \(f'(a)\) itself. Such linear approximation is the essence of differentiation.
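The defining property above, that the approximation error is \(o(|x-a|)\), can be observed numerically. A small sketch (the choice \(f(x) = x^2\) at \(a = 1\) is my own illustrative example):

```python
# For f(x) = x^2 at a = 1, the tangent line is y = 1 + 2(x - 1).
# The approximation error divided by |x - a| should tend to 0 as x -> a,
# i.e., the error is o(|x - a|).

def f(x):
    return x**2

a, fa, fpa = 1.0, 1.0, 2.0   # a, f(a), f'(a)

for h in [1e-1, 1e-2, 1e-3]:
    x = a + h
    err = abs(f(x) - (fa + fpa * (x - a)))
    print(h, err / abs(x - a))   # the ratio equals h here, shrinking to 0
```

Here the error is exactly \(h^2\), so the ratio is \(h\), which vanishes as \(x \to a\).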
The same argument applies to multivariate functions. Differentiating the function \(z = f(x,y)\) at the point \(P= (a,b)\) means approximating it by a linear function
\[z = f(a,b) + m(x-a) + n(y-b).\]
That is, for points \(X= (x,y)\) in a neighborhood of \(P = (a,b)\), we consider the linear approximation
\[f(x,y) = f(a,b) + m(x-a) + n(y-b) + o(\|X - P\|) \text{ as $X \to P$}, \tag{Eq:LA}\]
where \(\|X - P\| = \sqrt{(x-a)^2 + (y-b)^2} = d(X,P)\) is the distance between the points \(X\) and \(P\). Setting \(y = b\) in this equation, we have
\[\lim_{x \to a}\frac{f(x, b) - f(a,b)}{x - a} = m.\]
That is, \(f_x(a,b) = m.\) Similarly, we can show that \(f_y(a,b) = n\). In summary, if the linear approximation (Eq:LA) holds, it must be
\[m = f_x(a,b), \quad n = f_y(a,b).\]
This motivates the following definition.
Let \(U\) be an open region in \(\mathbb{R}^2\) and \(P=(a,b) \in U\). The function \(f(x,y)\) on \(U\) is said to be (totally) differentiable at \((a,b)\) if there exist constants \(m\) and \(n\) such that
\[f(x,y) = f(a,b) + m(x-a) + n(y-b) + o(\|X-P\|) \text{ as $X = (x,y)\to P = (a,b)$}\]
\(f(x,y)\) is said to be (totally) differentiable on \(U\) if it is (totally) differentiable at every point in \(U\).
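The definition of total differentiability can also be observed numerically: the approximation error divided by \(\|X - P\|\) should shrink to 0 as \(X \to P\). A sketch, reusing the polynomial example from earlier (the point \((a,b) = (1,2)\) and the diagonal approach path are my own choices):

```python
# For f(x,y) = x^2 y + 2xy^2 - y^3 at P = (1, 2), we have f(P) = 2,
# f_x(P) = 12, f_y(P) = -3. The ratio
#   |f(X) - (f(P) + m(x-a) + n(y-b))| / ||X - P||
# should tend to 0 as X -> P, witnessing total differentiability.
import math

def f(x, y):
    return x**2 * y + 2 * x * y**2 - y**3

a, b = 1.0, 2.0
m, n = 12.0, -3.0   # f_x(a,b) and f_y(a,b)

for t in [1e-1, 1e-2, 1e-3, 1e-4]:
    x, y = a + t, b + t          # approach P along the diagonal
    err = abs(f(x, y) - (f(a, b) + m * (x - a) + n * (y - b)))
    dist = math.hypot(x - a, y - b)
    print(t, err / dist)         # ratio shrinks toward 0 with t
```

For this polynomial the error works out to \(8t^2 + 2t^3\) while \(\|X-P\| = t\sqrt{2}\), so the ratio is proportional to \(t\) and indeed vanishes.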
Remark. The word "totally" in "totally differentiable" is used in contrast to "partially differentiable." However, "totally" may be omitted. If we simply say, "a multivariate function is differentiable," it means the function is totally differentiable. □
From the above discussion, if the function \(f(x,y)\) is totally differentiable at \((a,b)\), then it is partially differentiable at \((a,b)\), with \(m = f_x(a,b)\) and \(n = f_y(a,b)\). (The converse is not necessarily true; we will see such an example in a later post.) The linear function
\[z = f(a,b) + f_x(a,b)(x-a) + f_y(a,b)(y-b)\]
is the tangent plane of \(z = f(x,y)\) at \((a,b)\).
Remark. More generally, when the domain is in \(\mathbb{R}^n\), for the function \(y = f(x) = f(x_1, x_2,\cdots, x_n)\) at the point \(a = (a_1, a_2, \cdots, a_n)\), we have the linear function
\[y = f(a) + \sum_{i=1}^{n}\frac{\partial f}{\partial x_i}(a)(x_i - a_i),\]
which is the tangent hyperplane of \(y = f(x_1, x_2, \cdots, x_n)\) at \(a = (a_1, a_2, \cdots, a_n)\). □
Example. Let us find the equation of the tangent plane of the surface defined by the function \(z = 2x^3 + y^2\) at \((-1, 2, 2)\) (make sure this point indeed belongs to the given surface). Let \(f(x,y) = 2x^3 + y^2\). Then \(f_x(x,y) = 6x^2\) and \(f_y(x,y) = 2y\), so \(f_x(-1,2) = 6\) and \(f_y(-1,2) = 4\). Hence the tangent plane is
\[z = 2 + 6(x+1) + 4(y-2),\]
that is, \(z = 6x + 4y\). □
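As a sanity check of this tangent-plane example (a sketch; the nearby test point is an arbitrary choice of mine), note that \(f_x = 6x^2\) and \(f_y = 2y\) give \(f_x(-1,2) = 6\) and \(f_y(-1,2) = 4\), so the plane is \(z = 2 + 6(x+1) + 4(y-2) = 6x + 4y\):

```python
# Check the tangent plane of z = 2x^3 + y^2 at the point (-1, 2, 2):
# z = 2 + 6(x + 1) + 4(y - 2), i.e., z = 6x + 4y.

def f(x, y):
    return 2 * x**3 + y**2

def tangent_plane(x, y):
    return 2 + 6 * (x + 1) + 4 * (y - 2)   # equivalently, 6x + 4y

# The point indeed lies on both the surface and the plane.
print(f(-1, 2))              # 2
print(tangent_plane(-1, 2))  # 2

# Near the point, the plane approximates the surface closely.
print(abs(f(-0.99, 2.01) - tangent_plane(-0.99, 2.01)))  # small
```

The gap between the surface and the plane at a point a distance \(\approx 0.014\) away is about \(5 \times 10^{-4}\), consistent with the \(o(\|X-P\|)\) error of total differentiation.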