Geometric meaning of vectors
So far, we have been treating vectors purely algebraically. We can also give a geometric interpretation of vectors. Geometrically, vectors can be visualized as ``arrows'' in space, and two arrows are considered ``equivalent'' as long as their lengths and directions are the same, irrespective of where they are located in space.
Think of an ``arrow'' in the 2-dimensional space (Figure fig:arrow). An arrow can be defined by its source (tail) and target (head) points. The source of an arrow is the point where the arrow starts; the target is the point where the arrow ends. So, an arrow \(\mathbf{a}\) can be considered as a pair of 2-dimensional points: \(\mathbf{a} = (s, t)\) where \(s = (x_s,y_s)\in\mathbb{R}^2\) and \(t = (x_t,y_t) \in\mathbb{R}^2\) represent the source and target, respectively. That is, an arrow is a pair of pairs of real numbers: \(\mathbf{a} = ((x_s, y_s), (x_t, y_t))\in \mathbb{R}^2\times\mathbb{R}^2 (\simeq \mathbb{R}^4)\).
Figure fig:arrow. An arrow in the 2D space.
We introduce a relation on the set \(\mathbb{R}^2\times\mathbb{R}^2\). Let \(\mathbf{a} = (s,t) = ((x_s,y_s), (x_t,y_t))\) and \(\mathbf{b} = (u,v) = ((x_u,y_u), (x_v,y_v))\) be two arrows. Then we write \(\mathbf{a} \sim \mathbf{b}\) if \(x_t-x_s = x_v-x_u\) and \(y_t-y_s = y_v - y_u\). This relation \(\sim\) is an equivalence relation. In fact,
(Reflexivity) For each arrow \(\mathbf{a} = (s,t) = ((x_s,y_s),(x_t,y_t))\), clearly we have \(x_t - x_s = x_t - x_s\) and \(y_t - y_s = y_t - y_s\). Thus \(\mathbf{a} \sim \mathbf{a}\).
(Symmetry) Suppose \(\mathbf{a} \sim \mathbf{b}\). Then \(x_t - x_s = x_v - x_u\) and \(y_t - y_s = y_v - y_u\), which is the same as \(x_v - x_u = x_t - x_s\) and \(y_v - y_u = y_t - y_s\), so \(\mathbf{b} \sim \mathbf{a}\).
(Transitivity) Suppose \(\mathbf{a} \sim \mathbf{b}\) and \(\mathbf{b} \sim \mathbf{c}\) where \(\mathbf{c} = (p, q) = ((x_p,y_p), (x_q, y_q))\). Then \(x_t - x_s = x_v - x_u\) and \(y_t - y_s = y_v - y_u\) as well as \(x_q - x_p = x_v - x_u\) and \(y_q - y_p = y_v - y_u\) so that \(x_q - x_p = x_t - x_s\) and \(y_q - y_p = y_t - y_s\). Hence \(\mathbf{a} \sim \mathbf{c}\).
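To make this concrete, here is a minimal Python sketch (not part of the original post; the names `equivalent`, `a`, and `b` are illustrative) that stores an arrow as a pair of points and tests the relation \(\sim\):

```python
# A 2D arrow is a pair (source, target) of points, each a pair of coordinates.
def equivalent(a, b):
    """a ~ b iff the coordinate differences (target minus source) agree."""
    (xs, ys), (xt, yt) = a
    (xu, yu), (xv, yv) = b
    return (xt - xs == xv - xu) and (yt - ys == yv - yu)

# Two arrows with the same length and direction but different locations:
a = ((0.0, 0.0), (1.0, 2.0))
b = ((3.0, 1.0), (4.0, 3.0))
print(equivalent(a, b))  # True: b is a translated copy of a
```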
Geometrically, two arrows are equivalent if and only if one can be made to overlap exactly with the other by a translation (moving it without rotating it). Another way of saying this is that two arrows are equivalent if and only if they have the same ``direction'' and the same length.
We define each equivalence class of \(\mathbb{R}^2\times\mathbb{R}^2\) under \(\sim\) to be a (2-dimensional) vector. Note that each arrow \(\mathbf{a} = ((x_s, y_s), (x_t, y_t))\) is equivalent to the arrow \(((0,0), (x',y'))\) with \(x' = x_t - x_s\) and \(y'= y_t - y_s\). So we can represent each arrow by an equivalent one whose source is the origin, which means that each vector is determined by the target of that representative alone. That is, writing \([\mathbf{a}] = (x',y')\) is a perfectly good representation of the vector (i.e., the equivalence class) \([\mathbf{a}]\in (\mathbb{R}^2\times\mathbb{R}^2) /{\sim}\). In other words, when dealing with a vector, we do not care where it starts or ends as long as its direction and length are unchanged. In the end, every 2-dimensional vector can be expressed as an element of \(\mathbb{R}^2\). This means that the set of equivalence classes of arrows, \((\mathbb{R}^2\times\mathbb{R}^2) / {\sim}\), is more or less the ``same'' as \(\mathbb{R}^2\).
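Continuing the illustrative sketch above, the origin-anchored representative is obtained by subtracting the source coordinates from the target coordinates; the helper name `to_vector` is an assumption of this sketch, not notation from the post:

```python
def to_vector(a):
    """Return the representative of [a] whose source is the origin,
    i.e. the pair (x_t - x_s, y_t - y_s) that identifies the vector."""
    (xs, ys), (xt, yt) = a
    return (xt - xs, yt - ys)

# Equivalent arrows collapse to the same representative:
print(to_vector(((0.0, 0.0), (1.0, 2.0))))  # (1.0, 2.0)
print(to_vector(((3.0, 1.0), (4.0, 3.0))))  # (1.0, 2.0)
```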
Vector addition
Let \(\mathbf{a} = (a_1, a_2)\) and \(\mathbf{b} = (b_1, b_2)\) be elements of \(Arrow/{\sim}\), where \(Arrow = \mathbb{R}^2\times\mathbb{R}^2\) denotes the set of arrows (from now on, we omit the brackets for equivalence classes for simplicity). We define the addition of these two vectors by the following construction:
Let representatives of \(\mathbf{a}\) and \(\mathbf{b}\) be \(((0, 0), (a_1, a_2))\) and \(((0,0), (b_1, b_2))\), respectively.
Translate \(\mathbf{b}\) so its tail is at the head of \(\mathbf{a}\). That is, the representative of \(\mathbf{b}\) is now \(((a_1, a_2), (a_1 + b_1, a_2 + b_2))\). Note that translation does not change the equivalence class of the arrow: \([((a_1, a_2), (a_1 + b_1, a_2 + b_2))] = [((0,0), (b_1, b_2))] = (b_1, b_2).\)
Make an arrow from the tail of \(\mathbf{a}\) to the head of \(\mathbf{b}\). This arrow is \(((0, 0), (a_1 + b_1, a_2 + b_2))\).
The resulting arrow is (a representative of) \(\mathbf{a} + \mathbf{b}\). Thus, \(\mathbf{a} + \mathbf{b} = (a_1 + b_1, a_2 + b_2)\).
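As a quick numerical check of this head-to-tail construction, here is a small sketch in the same illustrative style, with vectors written as origin-anchored pairs and a hypothetical helper `add`:

```python
def add(a, b):
    """Head-to-tail addition of two vectors given as pairs (a1, a2) and (b1, b2)."""
    a1, a2 = a
    b1, b2 = b
    # Translating b so its tail sits at the head of a, the arrow from the
    # origin (the tail of a) to the new head of b represents the sum.
    return (a1 + b1, a2 + b2)

print(add((1.0, 2.0), (3.0, -1.0)))  # (4.0, 1.0)
```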
Scalar multiplication
Let \(\mathbf{a} = (a_1, a_2) \in Arrow/{\sim}\) and \(\lambda \in \mathbb{R}\). We define the scalar multiplication by \[\lambda\mathbf{a} = (\lambda a_1, \lambda a_2).\]
The resulting vector is parallel to \(\mathbf{a}\) but scaled by \(\lambda\) (pointing in the opposite direction if \(\lambda < 0\)).
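A corresponding sketch for scalar multiplication, again with an illustrative helper name (`scale`) rather than anything from the post:

```python
def scale(lam, a):
    """Multiply the vector a = (a1, a2) by the scalar lam, componentwise."""
    a1, a2 = a
    return (lam * a1, lam * a2)

print(scale(2.0, (1.0, 2.0)))   # (2.0, 4.0): same direction, twice the length
print(scale(-1.0, (1.0, 2.0)))  # (-1.0, -2.0): same length, opposite direction
```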
Higher dimensional vectors
We can extend the same argument to arrows of any dimension. An arrow in the \(n\)-dimensional space can be represented as a pair of points in \(\mathbb{R}^n\): \(((s_1, s_2, \cdots, s_n), (t_1, t_2, \cdots, t_n)) \in \mathbb{R}^n\times \mathbb{R}^n \simeq \mathbb{R}^{2n}\). An equivalence relation is introduced between two arrows, \(\mathbf{a} = ((s_1, s_2, \cdots, s_n), (t_1, t_2, \cdots, t_n))\) and \(\mathbf{b} = ((u_1, u_2, \cdots, u_n), (v_1, v_2, \cdots, v_n))\): \[\mathbf{a} \sim \mathbf{b} \iff t_i - s_i = v_i - u_i \quad \text{for all } i = 1, 2, \cdots, n.\]
Then, the equivalence classes of \(\mathbb{R}^n\times \mathbb{R}^n\) under \(\sim\) are defined to be vectors. That is, vectors are elements of \((\mathbb{R}^n\times \mathbb{R}^n)/{\sim} \simeq \mathbb{R}^n\). Addition and scalar multiplication are defined in the same manner as in the 2-dimensional case.
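The 2-dimensional sketches above carry over to \(n\) dimensions by replacing pairs of coordinates with length-\(n\) tuples; the following is one possible version, with illustrative names:

```python
def to_vector_n(source, target):
    """Origin-anchored representative of the arrow (source, target) in R^n."""
    return tuple(t - s for s, t in zip(source, target))

def add_n(a, b):
    """Componentwise addition of two n-dimensional vectors."""
    return tuple(x + y for x, y in zip(a, b))

def scale_n(lam, a):
    """Componentwise scalar multiplication."""
    return tuple(lam * x for x in a)

v = to_vector_n((1.0, 0.0, 2.0), (2.0, 2.0, 5.0))  # (1.0, 2.0, 3.0)
print(add_n(v, (0.5, 0.5, 0.5)))                   # (1.5, 2.5, 3.5)
print(scale_n(3.0, v))                             # (3.0, 6.0, 9.0)
```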