Poisson process: Introduction
We consider stochastic processes with continuous time. For example,
- The number of births (or deaths) in a population by time $t$.
- The number of phone calls at an office by time $t$.
- The number of radioactive particles detected by a Geiger counter by time $t$.
The Poisson process is a basic stochastic process that can be used to model the above phenomena. Let's define it.
The Poisson process
Definition
Let $N(t)$ be an integer-valued random variable depending on time, and $\lambda > 0$ a constant. Here, $t \geq 0$ represents time. Suppose the probability mass function of $N(t)$ is given by

$$P(N(t) = n) = \frac{(\lambda t)^n}{n!}e^{-\lambda t}, \quad n = 0, 1, 2, \cdots \tag{Eq:PoissonProc}$$
A stochastic process characterized by the probability distribution (Eq:PoissonProc) is called a Poisson process.
Example: Let $N(t)$ be the number of phone calls at an office by time $t$. Then $N(t)$ can be modeled as a Poisson process. (The same applies to the other examples above.)
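As a quick numerical illustration, here is a minimal Python sketch that evaluates the probability mass function (Eq:PoissonProc) for the phone-call example, both directly and via `scipy.stats.poisson`. The rate $\lambda = 2$ (calls per hour) and the time $t = 3$ (hours) are hypothetical values chosen only for this example.

```python
import math
from scipy.stats import poisson

lam = 2.0     # hypothetical rate: 2 calls per hour
t = 3.0       # hypothetical time horizon: 3 hours
mu = lam * t  # parameter of the Poisson distribution of N(t)

for n in range(6):
    # Direct evaluation of (Eq:PoissonProc): P(N(t) = n) = (lam*t)^n / n! * exp(-lam*t)
    p_direct = mu**n / math.factorial(n) * math.exp(-mu)
    # The same quantity via scipy's Poisson pmf
    p_scipy = poisson.pmf(n, mu)
    print(f"P(N({t}) = {n}) = {p_direct:.4f}  (scipy: {p_scipy:.4f})")
```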
Why is it called the "Poisson" process? Because the probability distribution (Eq:PoissonProc) is the Poisson distribution with parameter $\lambda t$ (the mean of $N(t)$). You should check that

$$E[N(t)] = \lambda t, \tag{Eq:Mean}$$

where $E[\cdot]$ represents the expectation value of the random variable in the argument. The variance is

$$V[N(t)] = \lambda t. \tag{Eq:Var}$$
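For completeness, here is one standard way to check (Eq:Mean) directly from the definition (a sketch of the computation, not part of the original post):

$$E[N(t)] = \sum_{n=0}^{\infty} n\,\frac{(\lambda t)^n}{n!}e^{-\lambda t} = \lambda t\, e^{-\lambda t}\sum_{n=1}^{\infty} \frac{(\lambda t)^{n-1}}{(n-1)!} = \lambda t\, e^{-\lambda t} e^{\lambda t} = \lambda t.$$

(Eq:Var) follows similarly: one finds $E[N(t)(N(t)-1)] = (\lambda t)^2$, so that $V[N(t)] = E[N(t)^2] - E[N(t)]^2 = (\lambda t)^2 + \lambda t - (\lambda t)^2 = \lambda t$.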
What can we learn from this?
- According to (Eq:Mean), the mean of $N(t)$ is proportional to time $t$. Thus, $N(t)$ is a non-decreasing function of time, on average.
- According to (Eq:Var), the variance of $N(t)$ is also proportional to time $t$. Thus, for a larger $t$, the difference between different samples of $N(t)$ becomes larger, on average.
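A quick numerical sanity check of these two observations, sketched with NumPy's built-in Poisson sampler (the rate $\lambda = 1.5$ and the time points are assumptions for this check; this is not the simulation method discussed below):

```python
import numpy as np

rng = np.random.default_rng(0)
lam = 1.5  # assumed rate for this check

for t in (1.0, 5.0, 20.0):
    # Draw many independent samples of N(t) ~ Poisson(lam * t)
    samples = rng.poisson(lam * t, size=100_000)
    print(f"t = {t:5.1f}: sample mean = {samples.mean():6.2f}, "
          f"sample variance = {samples.var():6.2f}, lam*t = {lam*t:6.2f}")
```

Both the sample mean and the sample variance should come out close to $\lambda t$, growing linearly with $t$.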
Here is a sample path of a Poisson process (for a fixed value of $\lambda$).
Simulating Poisson processes
(See this page for details.)
How can we simulate a Poisson process with parameter $\lambda$, such as the one in the above figure? Here, I explain a rough, approximate method without going into details. In later posts, we will see why this method correctly (yet approximately) simulates the Poisson process and why it is a rough approximation. Since we cannot represent continuous variables on a computer, we must discretize them. In the present case, it is the time variable that needs to be discretized. So, we consider discrete time steps:

$$t_i = i\,\Delta t, \quad i = 0, 1, 2, \cdots,$$

where $\Delta t > 0$ is a small time interval.
Now, suppose we have the value of $N(t_i)$ for some $i$. We determine the value of $N(t_{i+1})$ in the following manner:
- Generate a random number $u$ from the uniform distribution on $[0, 1)$ (i.e., between 0 and 1, including 0, excluding 1).
- If $u < \lambda\Delta t$, then we set $N(t_{i+1}) = N(t_i) + 1$; otherwise, we set $N(t_{i+1}) = N(t_i)$.
By this procedure, the value of $N(t)$ is incremented by 1 with probability $\lambda\Delta t$, and stays the same ($N(t_{i+1}) = N(t_i)$) with probability $1 - \lambda\Delta t$. Let's see an example.
- Suppose we have $\lambda = 5$ and $\Delta t = 0.01$. Thus, $\lambda\Delta t = 0.05$.
- We start from $N(t_0) = N(0) = 0$.
- Generate a random number from $[0, 1)$. Say, we obtained $u = 0.63$. Since this is greater than $\lambda\Delta t = 0.05$, we set $N(t_1) = N(t_0) = 0$.
- Generate another random number. Say we obtained $u = 0.02$. Since this is less than $0.05$, we set $N(t_2) = N(t_1) + 1 = 1$.
- Generate another random number. Say we obtained $u = 0.71$. Since this is greater than $0.05$, we set $N(t_3) = N(t_2) = 1$.
- We repeat this procedure over and over. (A code sketch implementing these steps is given below.)
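Here is a minimal Python sketch of this discretized procedure (the function name and the values of `lam`, `dt`, and the number of steps are arbitrary choices for illustration):

```python
import numpy as np

def simulate_poisson_approx(lam, dt, n_steps, rng=None):
    """Approximate simulation of a Poisson process on the grid t_i = i * dt.

    At each step, N increases by 1 with probability lam * dt
    (valid only when lam * dt <= 1).
    """
    if rng is None:
        rng = np.random.default_rng()
    N = np.zeros(n_steps + 1, dtype=int)  # N[i] = N(t_i), starting from N(0) = 0
    for i in range(n_steps):
        u = rng.uniform(0.0, 1.0)         # uniform random number in [0, 1)
        N[i + 1] = N[i] + 1 if u < lam * dt else N[i]
    return np.arange(n_steps + 1) * dt, N

# Example: lam = 5, dt = 0.01, simulate up to t = 10
times, counts = simulate_poisson_approx(lam=5.0, dt=0.01, n_steps=1000)
print(counts[-1], "events by time", times[-1])  # around lam * t = 50 on average
```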
Since the quantity $\lambda\Delta t$ serves as a probability, this method is applicable only when $\lambda\Delta t \leq 1$ (otherwise $\lambda\Delta t$ cannot be interpreted as a probability). For example, if we have $\lambda\Delta t > 1$, then $u < 1 < \lambda\Delta t$, so that $u < \lambda\Delta t$ for any random number $u \in [0, 1)$. This means $N(t)$ is always incremented by 1 regardless of the random numbers, which is nonsense. To use a larger value of $\lambda\Delta t$, we replace $\lambda\Delta t$ with $1 - e^{-\lambda\Delta t}$ in Step 2. Note that when $\lambda\Delta t$ is sufficiently small, the Maclaurin expansion gives

$$1 - e^{-\lambda\Delta t} = \lambda\Delta t - \frac{(\lambda\Delta t)^2}{2} + \cdots \approx \lambda\Delta t.$$
That is, $\lambda\Delta t$ can be obtained as the first-order approximation of $1 - e^{-\lambda\Delta t}$. In a later post, we will see why $1 - e^{-\lambda\Delta t}$ is indeed the correct probability of increment. (Hint: $1 - e^{-\lambda t}$ is the cumulative distribution function of the exponential distribution of the waiting time with parameter $\lambda$.)
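To get a feel for how close the two probabilities are, here is a small sketch (the grid of $\lambda\Delta t$ values is an arbitrary choice):

```python
import numpy as np

for x in (0.01, 0.05, 0.2, 0.5, 1.0, 2.0):  # x plays the role of lam * dt
    exact = 1.0 - np.exp(-x)                # exact increment probability, 1 - e^{-lam*dt}
    print(f"lam*dt = {x:4.2f}: approx = {x:4.2f}, exact = {exact:.4f}")
```

For small $\lambda\Delta t$ the two agree closely, while for $\lambda\Delta t \geq 1$ only $1 - e^{-\lambda\Delta t}$ remains a valid probability.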