Introductory university-level calculus, linear algebra, abstract algebra, probability, statistics, and stochastic processes.
Fourier series: Introduction
The theory of the Fourier series is based on a wild assumption: any "well-behaved" periodic function can be represented as a linear combination of sine and cosine functions, and that expression is unique for the given function. This theory (eventually) provided much of the foundations of modern mathematics. But it is also of tremendous practical importance.
Trigonometric functions are periodic. But what is a periodic function in general?
Definition (Periodic function)
A function $f$ is said to be a periodic function if there exists a non-zero real number $T$ such that
$$f(x + T) = f(x) \quad \text{for all } x.$$
In this case, $T$ is called a period of the function. Note that if $T$ is a period of $f$, then $2T$, $3T$, $\dots$ ($nT$, $n \in \mathbb{N}$) are also periods of $f$. In fact,
$$f(x + nT) = f(x + (n-1)T) = \cdots = f(x + T) = f(x).$$
The smallest positive period is called the fundamental period.
Remark. If we say simply a period, we usually mean the fundamental period. □
Example. For any $n \in \mathbb{N}$, $\sin nx$ and $\cos nx$ have a period of $2\pi$; their fundamental periods are $2\pi/n$ for each $n$. □
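As a quick numerical sanity check (not a proof), we can verify the claimed fundamental period $2\pi/n$ on a grid of sample points; the choice $n = 3$ and the grid size are arbitrary:

```python
import numpy as np

# Check numerically that sin(nx) and cos(nx) repeat with period 2*pi/n
# (and hence also with period 2*pi). n = 3 is an arbitrary choice.
n = 3
period = 2 * np.pi / n
x = np.linspace(0.0, 2 * np.pi, 1000)

max_dev_sin = np.max(np.abs(np.sin(n * (x + period)) - np.sin(n * x)))
max_dev_cos = np.max(np.abs(np.cos(n * (x + period)) - np.cos(n * x)))
print(max_dev_sin, max_dev_cos)  # both ~0 up to floating-point rounding
```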
Definition (Fourier series)
A Fourier series is a series of the form
$$\frac{a_0}{2} + \sum_{n=1}^{\infty}\left(a_n \cos nx + b_n \sin nx\right),$$
where $a_n$ ($n = 0, 1, 2, \dots$) and $b_n$ ($n = 1, 2, \dots$) are (usually) real constants.
The factor $\frac{1}{2}$ in the constant term ($\frac{a_0}{2}$) is by convention as well as for convenience. Each term of the series has a period of $2\pi$, so the domain of the above function of $x$ may be $\mathbb{R}$, or $[-\pi, \pi]$, or $[0, 2\pi]$.
Definition (Fourier expansion of a function)
Let $f$ be a function on $\mathbb{R}$ that has a period of $2\pi$. If
$$f(x) = \frac{a_0}{2} + \sum_{n=1}^{\infty}\left(a_n \cos nx + b_n \sin nx\right) \tag{eq:FF}$$
holds for all $x$ except for finitely many $x$ in each period, the right-hand side is said to be
a Fourier expansion or Fourier series expansion of the function $f$.
If term-wise integration is allowed, the coefficients $a_n$ and $b_n$ are readily determined. Note the following formulae: for any $m, n \in \mathbb{N}$,
$$\int_{-\pi}^{\pi} \cos mx \cos nx \, dx = \pi\delta_{mn}, \quad \int_{-\pi}^{\pi} \sin mx \sin nx \, dx = \pi\delta_{mn}, \quad \int_{-\pi}^{\pi} \sin mx \cos nx \, dx = 0,$$
and
$$\int_{-\pi}^{\pi} \cos nx \, dx = \int_{-\pi}^{\pi} \sin nx \, dx = 0, \quad \int_{-\pi}^{\pi} 1 \, dx = 2\pi,$$
where $\delta_{mn}$ is Kronecker's delta. That is, the functions $1, \cos x, \sin x, \cos 2x, \sin 2x, \dots$ are orthogonal on $[-\pi, \pi]$. By multiplying (eq:FF) by $\cos mx$ or $\sin mx$ and then integrating, we have
$$\int_{-\pi}^{\pi} f(x)\cos mx \, dx = \pi a_m, \quad \int_{-\pi}^{\pi} f(x)\sin mx \, dx = \pi b_m.$$
Thus, we have
$$a_n = \frac{1}{\pi}\int_{-\pi}^{\pi} f(x)\cos nx \, dx \ \ (n = 0, 1, 2, \dots), \quad b_n = \frac{1}{\pi}\int_{-\pi}^{\pi} f(x)\sin nx \, dx \ \ (n = 1, 2, \dots). \tag{eq:fab}$$
Since we are assuming the period of $2\pi$ for $f$, the range of integration can be $[0, 2\pi]$ instead of $[-\pi, \pi]$.
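Both the orthogonality relations and the coefficient formulae can be checked numerically. The sketch below uses the composite trapezoidal rule and the standard example $f(x) = x$ on $(-\pi, \pi]$, whose coefficients are $a_n = 0$ (since $f$ is odd) and $b_n = 2(-1)^{n+1}/n$; the grid size and the sampled frequencies are arbitrary choices.

```python
import numpy as np

x = np.linspace(-np.pi, np.pi, 200001)

def integral(y):
    """Composite trapezoidal rule over [-pi, pi] on the fixed grid x."""
    dx = x[1] - x[0]
    return dx * (y.sum() - 0.5 * (y[0] + y[-1]))

# Orthogonality: same frequency gives pi, different frequencies give 0.
assert abs(integral(np.cos(2 * x) * np.cos(2 * x)) - np.pi) < 1e-6
assert abs(integral(np.cos(2 * x) * np.cos(3 * x))) < 1e-6
assert abs(integral(np.sin(2 * x) * np.cos(2 * x))) < 1e-6

# Fourier coefficients of f(x) = x via the formulae above.
f = x
a = [integral(f * np.cos(n * x)) / np.pi for n in range(4)]
b = [integral(f * np.sin(n * x)) / np.pi for n in range(1, 4)]
print(a)  # ~[0, 0, 0, 0] since f is odd
print(b)  # ~[2, -1, 2/3], matching b_n = 2 (-1)^{n+1} / n
```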
The sequences $\{a_n\}$ and $\{b_n\}$ defined by (eq:fab) are called the Fourier coefficients of $f$. The Fourier coefficients of $f$ can be determined if $f$ is integrable on $[-\pi, \pi]$. Given the Fourier coefficients of $f$, we can formally define the following series, which is called the Fourier series of $f$, denoted by $Sf$:
$$Sf(x) = \frac{a_0}{2} + \sum_{n=1}^{\infty}\left(a_n \cos nx + b_n \sin nx\right).$$
In this case, we also write
$$f(x) \sim \frac{a_0}{2} + \sum_{n=1}^{\infty}\left(a_n \cos nx + b_n \sin nx\right).$$
Note that, in this case, $Sf$ may not be the same function as $f$. In fact, whether and/or when $f = Sf$ holds (note it's "$\sim$", not "$=$", in the relation above) is a fundamental question in the theory of Fourier series.
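A concrete instance where $Sf$ differs from $f$: take the $2\pi$-periodic extension of $f(x) = x$ on $(-\pi, \pi]$, whose Fourier series is $\sum_{n \ge 1} \frac{2(-1)^{n+1}}{n}\sin nx$ (a standard example). At the jump $x = \pi$ we have $f(\pi) = \pi$, yet every partial sum vanishes there:

```python
import numpy as np

# Partial sums of the Fourier series of the 2*pi-periodic sawtooth
# f(x) = x on (-pi, pi].
def partial_sum(xval, N):
    n = np.arange(1, N + 1)
    return np.sum(2 * (-1.0)**(n + 1) / n * np.sin(n * xval))

# At the jump x = pi, every term sin(n*pi) vanishes, so S_N f(pi) = 0
# for all N, while f(pi) = pi: the series converges to the midpoint of
# the jump, (pi + (-pi)) / 2 = 0, not to the value of f.
vals = [partial_sum(np.pi, N) for N in (1, 10, 100)]
print(vals)  # ~[0, 0, 0] up to rounding
```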
The fundamental problems in the theory of the Fourier series are
Under what conditions on $f$ does $Sf$ converge?
What does the sum $Sf(x)$ represent?
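The first question can at least be explored numerically. The sketch below evaluates partial sums $S_N f$ for $f(x) = x^2$ on $[-\pi, \pi]$, whose Fourier coefficients $a_0/2 = \pi^2/3$, $a_n = 4(-1)^n/n^2$, $b_n = 0$ are standard; the evaluation point and the values of $N$ are arbitrary choices.

```python
import numpy as np

# Partial sums S_N f of the Fourier series of f(x) = x^2 on [-pi, pi].
def partial_sum(xval, N):
    n = np.arange(1, N + 1)
    return np.pi**2 / 3 + np.sum(4 * (-1.0)**n / n**2 * np.cos(n * xval))

# The error |S_N f(x0) - f(x0)| shrinks as N grows, suggesting
# pointwise convergence at this (interior, continuity) point.
x0 = 1.0
errors = [abs(partial_sum(x0, N) - x0**2) for N in (10, 100, 1000)]
print(errors)
```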
The continuity of $f$ alone is known to be insufficient. To state a sufficient condition, we need the language of the Lebesgue integral, or measure theory, which is far beyond the scope of this post. We give it below anyway without proof.
Theorem (Carleson (1966))
Let $f$ be a function that is measurable on $(-\pi, \pi)$ and $L^2$-integrable, i.e.,
$$\int_{-\pi}^{\pi} |f(x)|^2 \, dx < \infty;$$
then the Fourier series $Sf(x)$ converges to $f(x)$ almost everywhere.
Remark. The technical terms such as measurable, $L^2$-integrable, and almost everywhere come from the theory of the Lebesgue integral. The Lebesgue integral is a generalization of the Riemann integral. Much of modern mathematics depends on the Lebesgue integral. □