\(\log\) and \(e\)

 The mathematical constant \(e\), sometimes called Euler's number or Napier's constant, is an irrational number that appears throughout mathematics and the sciences; its value is \(e=2.71828\cdots\). In this post, we see how this constant is defined and how it is related to complex numbers. We assume some familiarity with calculus.



Definition (Natural logarithm)

The natural logarithm function \(\log: (0, \infty) \to \mathbb{R}\) is defined by

\[\log x = \int_1^{x}\frac{1}{t}dt.\]

From the definition, we can immediately derive a few important properties of \(\log\).

  1. \(\log(1) = 0\).
  2. \(\log\) is a strictly increasing function as \(1/t > 0\) for all \(t > 0\). This means, for any \(x_1, x_2 \in (0,\infty)\), \[x_1 < x_2 \implies   \log x_1 < \log x_2.\]
  3. \(\log\) is a continuous function. This means that for any \(a \in (0, \infty)\), we have \[\lim_{x\to a}\log x = \log a.\]
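The integral definition can be checked numerically. Below is a minimal Python sketch (the helper name `log_integral` is ours, not part of the text) that approximates \(\int_1^x dt/t\) with the midpoint rule and compares the result with the library's `math.log`.

```python
import math

def log_integral(x, n=100_000):
    """Approximate log(x) = ∫_1^x (1/t) dt with the midpoint rule.

    Works for 0 < x < 1 as well: the step h is then negative, which
    flips the orientation of the integral automatically.
    """
    h = (x - 1.0) / n
    return sum(h / (1.0 + (i + 0.5) * h) for i in range(n))

for x in (0.5, 2.0, 10.0):
    print(f"x = {x}: integral = {log_integral(x):.8f}, math.log = {math.log(x):.8f}")
```

With \(n = 10^5\) subintervals the midpoint rule already agrees with `math.log` to many decimal places.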

Lemma (The logarithm of a product is the sum of logarithms)

For real numbers \(y_1, y_2 > 0\), we have
\[\log(y_1y_2) = \log y_1 + \log y_2.\]
Proof
\[\begin{eqnarray} \log(y_1y_2) &=& \int_1^{y_1y_2}\frac{1}{x}dx\\ &=&\int_1^{y_1}\frac{1}{x}dx + \int_{y_1}^{y_1y_2}\frac{1}{x}dx\\ &=& \log y_1 + \int_1^{y_2}\frac{1}{v}dv \end{eqnarray}\]
where we changed the variables using \(v = x/y_1\) (and hence, \(dv = dx/y_1\); \(x = y_1 \mapsto v=1\); \(x=y_1y_2\mapsto v=y_2\)). But the name of the variable doesn't matter, so we have
\[\log(y_1y_2) = \log(y_1) + \log(y_2).\] ■
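A quick numerical spot-check of the lemma, using Python's `math.log` (an illustration, not part of the proof):

```python
import math

# Spot-check log(y1*y2) = log(y1) + log(y2) for a few positive pairs.
for y1, y2 in [(3.7, 0.25), (2.0, 5.0), (0.1, 0.1)]:
    lhs = math.log(y1 * y2)
    rhs = math.log(y1) + math.log(y2)
    assert math.isclose(lhs, rhs)
    print(y1, y2, lhs, rhs)
```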

Corollary

For all \(x\in(0, \infty)\) and \(n\in\mathbb{N}\), we have
\[\log(x^n) = n\log x.\]
Proof. Exercise. (Use mathematical induction.) ■
From this corollary, we have, in particular,
\[\log(2^m) = m\log 2.\]
Since \(\log 1 = 0\) and \(\log\) is a strictly increasing function, it follows that \(\log 2 > 0\). Thus, as \(m\) increases, \(\log(2^m)\) can take arbitrarily large values.
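The corollary, and the unboundedness of \(\log(2^m)\), can be observed numerically (a sketch):

```python
import math

# log(2**m) = m*log(2), and the right-hand side grows without bound with m.
for m in (1, 10, 100):
    assert math.isclose(math.log(2**m), m * math.log(2))
print(math.log(2**100))  # already over 69
```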

Corollary

For any \(x>0\), we have
\[\log(1/x) = -\log x.\]
Proof. Using the above lemma and the property of \(\log\), we have
\[0 = \log(1) = \log(x\cdot(1/x)) = \log x + \log(1/x)\]
from which the desired result follows. ■

If \(x>1\), then \(1/x < 1\). As \(x\) increases, \(\log(1/x)\) can take arbitrarily large negative values.
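Again, a quick numerical illustration of \(\log(1/x) = -\log x\):

```python
import math

# log(1/x) = -log(x); for large x, log(1/x) is a large negative number.
for x in (2.0, 100.0, 1e6):
    assert math.isclose(math.log(1.0 / x), -math.log(x))
print(math.log(1.0 / 1e6))  # about -13.8
```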

Let's summarize the properties of \(\log: (0, \infty) \to \mathbb{R}\).
  1. It is strictly increasing, hence injective.
  2. It is continuous and takes arbitrarily large positive and negative values, so by the intermediate value theorem it attains every real value. Hence it is surjective (note the domain and codomain).
Thus \(\log\) is bijective and therefore has an inverse, which we call \(\exp: \mathbb{R} \to (0, \infty)\).

Definition (Exponential \(\exp\))

The exponential function \(\exp: \mathbb{R} \to (0,\infty)\) is defined as the inverse of the natural logarithm function \(\log: (0, \infty) \to \mathbb{R}\).

Note, in particular, \(\exp(0) = 1\) as \(\log(1) = 0\).

Definition (\(e\))

The number \(e\) is defined by
\[e = \exp(1).\]
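Since \(\exp\) is defined as the inverse of \(\log\), we can compute \(e = \exp(1)\) by solving \(\log x = 1\), for instance by bisection. Below is a minimal sketch; the helper name `inverse_log` and the bracketing interval are our own illustrative choices.

```python
import math

def inverse_log(target, lo=1e-9, hi=100.0, tol=1e-12):
    """Solve log(x) = target for x by bisection.

    Valid as long as the solution lies in (lo, hi); log is strictly
    increasing, so the bisection bracket shrinks onto the unique root.
    """
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if math.log(mid) < target:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

print(inverse_log(1.0))  # close to 2.71828...
```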

In the following, we show that \(\exp\) is indeed the exponential function with base \(e\), that is, \(\exp(x) = e^x\) for all \(x \in \mathbb{R}\).

Lemma (The exponential of a sum is the product of exponentials)

For any \(x_1, x_2\in\mathbb{R}\),
\[\exp(x_1 + x_2) = \exp(x_1)\exp(x_2).\]
Proof. Let \(y_1 = \exp(x_1)\) and \(y_2 = \exp(x_2)\). Then, \(x_1 = \log y_1\) and \(x_2 = \log y_2\). By the property of the natural logarithm, we have
\[\log(y_1y_2) = \log y_1 + \log y_2 = x_1 + x_2.\]
Thus, 
\[y_1y_2 = \exp(x_1 + x_2).\]
Substituting the definitions of \(y_1\) and \(y_2\), we conclude that
\[\exp(x_1 + x_2) = \exp(x_1)\exp(x_2).\] ■
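A numerical spot-check of the lemma with `math.exp` (illustrative only):

```python
import math

# exp(x1 + x2) = exp(x1) * exp(x2), checked at a few sample points.
for x1, x2 in [(0.3, 1.7), (-2.0, 2.0), (5.0, -0.5)]:
    assert math.isclose(math.exp(x1 + x2), math.exp(x1) * math.exp(x2))
    print(x1, x2, math.exp(x1 + x2))
```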

It follows that, for any \(n \in \mathbb{N}\),

\[\exp(n) = \{\exp(1)\}^n = e^n.\]
Also, for any \(n \in \mathbb{N}\),
\[\exp(-n)\exp(n) = \exp(0) = 1\]
so that
\[\exp(-n) = e^{-n}.\]
Thus, for any \(z \in \mathbb{Z}\), we have
\[\exp(z) = \{\exp(1)\}^z = e^z.\]

Next, let \(n \in \mathbb{N}\). Note that \(\exp(1/n)\) is real and positive, and
\[(\exp(1/n))^n = \exp(n/n) = \exp(1) = e.\]
Thus \(\exp(1/n)\) is the unique positive real \(n\)-th root of \(e\):
\[\exp(1/n) = e^{1/n}.\]

Next, consider the rational \(m/n\) where \(m \in \mathbb{Z}, n\in\mathbb{N}\).
\[\exp(m/n) = (\exp(1/n))^m = (e^{1/n})^m = e^{m/n}.\]
So now we know that
\[\exp(x) = e^x, ~ \forall x \in \mathbb{Q}.\]

Furthermore, \(\exp(x)\) and \(e^x\) are both continuous, and every real number can be approximated arbitrarily well by rational numbers, so we have
\[\exp(x) = e^x, ~ \forall x\in\mathbb{R}.\]
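We can observe this agreement numerically, comparing `math.exp(x)` with the power `math.e ** x` (a sketch):

```python
import math

# exp(x) agrees with e**x for rational and (floating-point) real x.
for x in (-3.0, 2.0 / 3.0, 0.5, 4.25):
    assert math.isclose(math.exp(x), math.e ** x)
    print(x, math.exp(x))
```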
Now we know that \(\exp(x)\) is the exponential function with base \(e\), namely \(e^x\). The next lemma states one of the most notable properties of \(\exp(x)\).

Lemma

\[\frac{d}{dx}e^x = e^x.\]
Proof. Let \(y = \exp(x)\). Then \(x = \log(y)\). Differentiating, we have
\[1 = \frac{1}{y}\frac{dy}{dx},\]
so
\[\frac{dy}{dx} = y.\]
That is,
\[\frac{d}{dx}\exp(x) = \exp(x).\] ■
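The lemma can be checked numerically with a central-difference approximation of the derivative (the step size \(h\) below is an illustrative choice):

```python
import math

# d/dx exp(x) = exp(x): central-difference approximation at a few points.
h = 1e-6
for x in (-1.0, 0.0, 1.3):
    deriv = (math.exp(x + h) - math.exp(x - h)) / (2 * h)
    assert math.isclose(deriv, math.exp(x), rel_tol=1e-8)
    print(x, deriv, math.exp(x))
```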

Next, we would like to define \(e^z\) for \(z \in\mathbb{C}\).

Definition (\(e^z\) on the complex domain)

Let \(z = u + i\theta\) where \(u, \theta \in \mathbb{R}\). We define \(e^z\) by
\[e^z = e^u(\cos\theta + i \sin\theta).\]
Remark. Do not confuse the two notations: "\(e^u\)" with \(u\in\mathbb{R}\) denotes the function \(\exp: \mathbb{R} \to (0,\infty)\) as originally defined above, whereas "\(e^z\)" with \(z\in\mathbb{C}\) is a new function \(\mathbb{C} \to \mathbb{C}\) with a different domain and codomain. □

To see that this definition is consistent with the previous definition of \(e^x\) for \(x\in\mathbb{R}\), let \(\theta = 0\) so that the "complex number" is purely real. Since \(\cos 0 = 1\) and \(\sin 0 = 0\), we have \(e^{u + i\cdot 0} = e^u\), so the new definition matches the old one.
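Python's `cmath.exp` implements exactly this definition, so the consistency with the real exponential can be observed directly (a sketch):

```python
import cmath
import math

# With θ = 0, the complex definition e^{u+iθ} = e^u(cos θ + i sin θ)
# reduces to the real exponential.
for u in (-1.0, 0.0, 2.5):
    z = complex(u, 0.0)
    assert cmath.isclose(cmath.exp(z), complex(math.exp(u), 0.0))
    print(u, cmath.exp(z))
```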

Example. The famous equality
\[e^{i\pi} = -1\]
is known as Euler's identity. It is the special case \(u = 0, \theta = \pi\) of the definition above (cf. Euler's formula \(e^{i\theta} = \cos\theta + i\sin\theta\)). □
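The equality \(e^{i\pi} = -1\) can be verified numerically with `cmath` (the tiny imaginary residue is floating-point rounding):

```python
import cmath
import math

# e^{iπ} = -1, up to floating-point rounding in the imaginary part.
val = cmath.exp(1j * math.pi)
print(val)
assert abs(val - (-1.0)) < 1e-12
```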

We can further show that \(e^z\) on the complex domain behaves in the same manner as \(e^x\) on the real domain.

Lemma

Let \(z_1, z_2\in \mathbb{C}\) and \(k \in \mathbb{Z}\). The following equations hold.
  1. \(e^{z_1}e^{z_2} = e^{z_1+z_2}\).
  2. \((e^{z_1})^k = e^{kz_1}\).
Proof. We may assume that \(z_1 = u_1 + i\theta_1\) and \(z_2 = u_2 + i\theta_2\) for some \(u_1,u_2,\theta_1, \theta_2\in\mathbb{R}\).
  1. \[\begin{eqnarray*} e^{z_1}e^{z_2} &=& e^{u_1}(\cos\theta_1 + i\sin\theta_1)e^{u_2}(\cos\theta_2 + i\sin\theta_2)\\ &=&e^{u_1}e^{u_2}(\cos\theta_1 + i\sin\theta_1)(\cos\theta_2 + i\sin\theta_2)\\ &=& e^{u_1+u_2}(\cos(\theta_1 + \theta_2) + i\sin(\theta_1 + \theta_2))\\ &=& e^{z_1 + z_2} \end{eqnarray*}\] as \(z_1 + z_2 = (u_1 + u_2) + i(\theta_1 + \theta_2)\).
  2. If \(k > 0\), this follows directly from De Moivre's theorem. If \(k = 0\), both sides equal \(1\). For \(k < 0\), note that \((e^z)^{-1} = e^{-z}\) for all \(z\in\mathbb{C}\) (use Part 1 with \(z_2 = -z_1\)). Let \(k' = -k\), so \(k' > 0\). Then \[\begin{eqnarray}(e^{z_1})^k &=& (e^{z_1})^{-1\cdot k'}\\ &=& \underbrace{(e^{z_1})^{-1}\cdots(e^{z_1})^{-1}}_{\text{$k'$ times}}\\ &=& e^{-z_1}\cdots e^{-z_1}\\ & = &e^{-k'z_1} = e^{kz_1}.\end{eqnarray}\] ■
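Both parts of the lemma can be spot-checked with `cmath` (the sample points below are arbitrary):

```python
import cmath

# Part 1: e^{z1} e^{z2} = e^{z1+z2}; Part 2: (e^{z1})^k = e^{k z1}.
z1, z2 = 1.0 + 2.0j, -0.5 + 0.3j
assert cmath.isclose(cmath.exp(z1) * cmath.exp(z2), cmath.exp(z1 + z2))
for k in (-3, 0, 4):
    assert cmath.isclose(cmath.exp(z1) ** k, cmath.exp(k * z1))
print("both parts hold at the sample points")
```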

Polar form revisited

We know that, for \(\theta\in\mathbb{R}\),
\[e^{i\theta} = \cos\theta + i\sin\theta.\]
So, the polar form of a non-zero complex number
\[r(\cos\theta + i\sin\theta)\]
is expressed simply as
\[re^{i\theta}.\]
That is, any complex number \(z \in \mathbb{C}\) can be represented as
\[z = re^{i\theta}\]
where \(r = |z|\) and \(\theta \in \arg z\).
We have
\[(re^{i\theta})^{-1} = (r^{-1})e^{-i\theta}\]
and
\[re^{i\theta}\cdot se^{i\psi} = (rs)e^{i(\theta + \psi)}.\]
It follows that
\[(re^{i\theta})/(se^{i\psi}) = (rs^{-1})e^{i(\theta - \psi)}.\]
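These polar-form rules for inverses, products, and quotients can be checked numerically (sample moduli and angles below are arbitrary):

```python
import cmath

# Products, inverses, and quotients in polar form.
r, theta = 2.0, 0.6
s, psi = 0.5, -1.1
z = r * cmath.exp(1j * theta)
w = s * cmath.exp(1j * psi)
assert cmath.isclose(1 / z, (1 / r) * cmath.exp(-1j * theta))
assert cmath.isclose(z * w, (r * s) * cmath.exp(1j * (theta + psi)))
assert cmath.isclose(z / w, (r / s) * cmath.exp(1j * (theta - psi)))
print(z * w, z / w)
```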
Since \(e^{i\theta} = \cos\theta + i\sin\theta\) and \(e^{-i\theta} = \cos\theta - i\sin\theta\), we have
\[\begin{eqnarray} \cos\theta &=& \frac{e^{i\theta} + e^{-i\theta}}{2},\\ \sin\theta &=& \frac{e^{i\theta} - e^{-i\theta}}{2i}. \end{eqnarray}\]
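These two formulas can also be verified numerically, recovering \(\cos\theta\) and \(\sin\theta\) from complex exponentials (a sketch):

```python
import cmath
import math

# Recover cos and sin from complex exponentials.
for theta in (0.0, 0.7, 2.0, -1.3):
    c = (cmath.exp(1j * theta) + cmath.exp(-1j * theta)) / 2
    s = (cmath.exp(1j * theta) - cmath.exp(-1j * theta)) / (2j)
    assert cmath.isclose(c, complex(math.cos(theta), 0.0))
    assert cmath.isclose(s, complex(math.sin(theta), 0.0))
    print(theta, c.real, s.real)
```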
