\documentclass[12pt]{article}
\setlength{\oddsidemargin}{0in}
\setlength{\evensidemargin}{0in}
\setlength{\topmargin}{-0.3in}
\setlength{\headheight}{0in}
\setlength{\headsep}{0in}
\setlength{\textwidth}{6.5in}
\setlength{\textheight}{9.3in}
\setlength{\parindent}{0in}
\setlength{\parskip}{0.2in}
\begin{document}
\title{Discrete-Time Markov Chains}
\author{Norman Matloff \\
(adapted from \\
{\it Probability Modeling and Computer
Simulation}\\
N.S. Matloff, PWS, 1988)}
\date{}
\maketitle
One of the most commonly used stochastic models is that of a {\bf Markov
chain}. To motivate this discussion, we will start with a simple
example: Consider a {\bf random walk} on the set of integers between
1 and 5, moving randomly through that set, say one move per second,
according to the following scheme. If we are currently at position i,
then one time period later we will be at either i-1, i or i+1,
according to the outcome of rolling a fair die---we move to i-1 if
the die comes up 1 or 2, stay at i if the die comes up 3 or 4, and
move to i+1 in the case of a 5 or 6. For the special cases of i = 1
and i = 5, we simply move to 2 or 4, respectively.

The integers 1 through 5 form the {\bf state space} for this process;
if we are currently at 4, for instance, we say we are in state 4.
Let $X_t$ denote the position of the particle at time t, t = 0,
1, 2, \ldots; $X_t$ is called the {\bf state} of the process at time t.

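This transition scheme is easy to simulate. Here is a minimal Python
sketch (the function name {\tt step} and the starting position are our
own choices for illustration, not part of the model):

```python
import random

def step(i):
    """One move of the random walk on the states 1 through 5."""
    if i == 1:
        return 2                  # boundary rule: from 1 we always move to 2
    if i == 5:
        return 4                  # boundary rule: from 5 we always move to 4
    roll = random.randint(1, 6)   # roll a fair die
    if roll <= 2:
        return i - 1              # die shows 1 or 2: move down
    elif roll <= 4:
        return i                  # die shows 3 or 4: stay put
    else:
        return i + 1              # die shows 5 or 6: move up

# one sample path X_0, X_1, ..., X_20, starting at position 3
path = [3]
for _ in range(20):
    path.append(step(path[-1]))
```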
The random walk is a {\bf Markov process}. The term {\it Markov}
here has meaning similar to that of the term {\it memoryless} used
for the exponential distribution, in that we can ``forget the past'':
\begin{equation}
P(X_{t+1} = s_{t+1} | X_t = s_t, X_{t-1} = s_{t-1},
\ldots, X_0 = s_0) =
P(X_{t+1} = s_{t+1} | X_t = s_t)
\end{equation}
Although this equation looks complex, its meaning is simple: the
distribution of our next position, given our current position and
all our past positions, depends only on the current position.
Clearly the random walk process above
does have this property; for instance, if we are now at position 4,
the probability that our next state will be 3 is 1/3---no matter
where we were in the past.

Continuing this example, let $p_{ij}$ denote the probability of going
from position i to position j in one step. For example, $p_{21} =
p_{23} = \frac{1}{3}$, while $p_{24} = 0$ (we can reach position 4
from position 2 in two steps, but not in one step). The numbers
$p_{ij}$ are called the {\bf one-step transition probabilities} of
the process. Denote by P the matrix whose entries are the $p_{ij}$.

In typical applications we are interested in the long-run distribution
of the process, for example, the long-run proportion of the time that
we are at position 4. For each state i, define
\begin{equation}
{\pi}_i = \lim_{t \rightarrow \infty} \frac{N_{it}}{t}
\end{equation}
where $N_{it}$ is the number of visits the process makes to state i
among times 1, 2,..., t. In most practical cases, this proportion
will exist and be independent of our initial position $X_0$. (There
are mathematical conditions under which this is guaranteed to occur,
but they will not be stated here.)
Intuitively, the existence of $\pi_i$ implies that as t approaches
infinity, the system approaches steady-state, in the sense that
\begin{equation}
\lim_{t \rightarrow \infty} P(X_t = i) = \pi_i
\end{equation}
Though we will again avoid discussing mathematical conditions for
this to occur, the point here is that this last equation suggests
a way to calculate the values $\pi_i$, as follows.
First note that
\begin{equation}
P(X_{t+1} = i) = \sum_k P(X_t = k) p_{ki}
\end{equation}
Then as $t \rightarrow \infty$ in this equation, intuitively we would have
\begin{equation}
\pi_i = \sum_k \pi_k p_{ki}
\end{equation}
Letting $\pi$ denote the row vector of the elements $\pi_i$, these
equations (one for each i) then have the matrix form
\begin{equation}
\pi = \pi P
\end{equation}
Note that there is also the constraint
\begin{equation}
\sum_i \pi_i = 1
\end{equation}
Together, these equations determine the $\pi_i$. For the random walk
problem above, for instance, the solution is $[ \frac{1}{11},
\frac{3}{11},\frac{3}{11},\frac{3}{11},\frac{1}{11} ]$.
Thus in the long run we will spend 1/11 of our time at position 1,
3/11 of our time at position 2, and so on.
In the above example, the labels for the states consisted of single
integers i. In some other examples, convenient labels may be r-tuples,
for example 2-tuples (i,j).
{\bf Example:}
Consider a serial communication line. Let $B_1, B_2, B_3, ...$
denote the sequence of bits transmitted on this line. It is
reasonable to assume that the $B_i$ are independent, and that
$P(B_i = 0)$ and $P(B_i = 1)$ are both equal to 0.5.
Suppose that the receiver will eventually fail, with the type of failure
being {\bf stuck at 0}, meaning that after failure it will report all
future received bits to be 0, regardless of their true value. Once
failed, the receiver stays failed, and should be replaced. Eventually
the new receiver will also fail, and we will replace it; we continue
this process indefinitely.
However, the problem is that we will not know whether a receiver has
failed (unless we test it once in a while, which we are not including
in this example). If the receiver reports a long string of 0s, we
should suspect that the receiver has failed, but of course we cannot
be sure that it has; it is still possible that the message being
transmitted just happened to contain a long string of 0s.
Suppose we adopt the policy that, if we receive k consecutive 0s, we
will replace the receiver with a new unit. Here k is a design
parameter; what value should we choose for it? If we use a very small
value, then we will incur great expense, due to the fact that we will
be replacing receiver units at a very high rate. On the other hand, if
we make k too large, then we will often wait too long to replace the
receiver, and the resulting error rate in received bits will be sizable.
Resolution of this tradeoff between expense and accuracy depends on
the relative importance of the two. (There are also other possibilities,
involving the addition of redundant bits for error detection, such as
parity bits. For simplicity, we will not consider such refinements
here. However, the analysis of more complex systems would be similar
to the one below.)
A natural state space in this example would be
\begin{equation}
\{ (i,j): i = 0,1,\ldots,k-1; j = 0,1; i+j \neq 0 \}
\end{equation}
where i represents the number of consecutive 0s that we have received
so far, and j represents the state of the receiver (0 for failed, 1
for nonfailed). Suppose the lifetime of the receiver, that is, the
time to failure, is geometrically distributed with ``success''
probability $\rho$, i.e. the probability of failing on receipt of the
i-th bit after the receiver is installed is ${(1 - \rho)}^{i-1} \rho$,
for i = 1,2,3,...
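As a quick sanity check on this lifetime model, the probabilities
${(1-\rho)}^{i-1} \rho$ sum to 1 and have mean $1/\rho$; a small
numerical sketch (the value $\rho = 0.2$ is an arbitrary choice for
illustration):

```python
rho = 0.2               # failure probability per bit, chosen only for illustration
terms = range(1, 400)   # truncate the infinite sum; the tail is negligible here
pmf = [(1 - rho) ** (i - 1) * rho for i in terms]

total = sum(pmf)                                    # should be (nearly) 1
mean_life = sum(i * p for i, p in zip(terms, pmf))  # should be near 1/rho = 5
```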
Then calculation of the transition matrix P is straightforward. For
example, suppose the current state is (2,1), and that we are
investigating the expense and accuracy corresponding to a policy
having k = 5. What can happen upon receipt of the next bit? The
next bit will have a true value of either 0 or 1, with probability 0.5
each. The receiver will change from working to failed status with
probability $\rho$. Thus our next state could be:
\begin{itemize}
\item (3,1), if a 0 arrives, and the receiver does not fail;
\item (0,1), if a 1 arrives, and the receiver does not fail; or
\item (3,0), if the receiver fails
\end{itemize}
The probabilities of these three transitions out of state (2,1) are:
\begin{equation}
p_{(2,1),(3,1)} = 0.5 (1-\rho)
\end{equation}
\begin{equation}
p_{(2,1),(0,1)} = 0.5 (1-\rho)
\end{equation}
\begin{equation}
p_{(2,1),(3,0)} = \rho
\end{equation}
Other entries of the matrix P can be computed similarly.
Formally specifying the matrix P using the 2-tuple notation would
be very cumbersome. In this case, it would be much easier to map
to a one-dimensional labeling. For example, if k = 5, the nine states
(1,0),...,(4,0),(0,1),(1,1),...,(4,1) could be renamed states 1,2,...,9.
Then we could form P under this labeling, and the transition probabilities
above would appear as
\begin{equation}
p_{78} = 0.5 (1-\rho)
\end{equation}
\begin{equation}
p_{75} = 0.5 (1-\rho)
\end{equation}
\begin{equation}
p_{73} = \rho
\end{equation}
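In fact the whole matrix can be generated mechanically. The sketch below
builds P under this one-dimensional labeling (stored 0-based, so rows and
columns 0 through 8 correspond to states 1 through 9); the text derives
only the row for state (2,1), so the remaining rows are our own
application of the same reasoning:

```python
from fractions import Fraction

def build_P(rho):
    """Transition matrix for k = 5; states 1..9 are (1,0),...,(4,0),
    (0,1),...,(4,1), stored 0-based (row r = state r+1)."""
    q = Fraction(1, 2) * (1 - rho)   # P(a given true bit arrives, no failure)
    P = [[Fraction(0)] * 9 for _ in range(9)]
    # failed receiver: every reported bit is 0
    for r in range(3):               # states (1,0), (2,0), (3,0)
        P[r][r + 1] = Fraction(1)    # (i,0) -> (i+1,0)
    P[3][4] = Fraction(1)            # (4,0): fifth 0, replace -> (0,1)
    # working receiver with fewer than 4 consecutive 0s: states (0,1)..(3,1)
    for r in range(4, 8):
        P[r][r + 1] = q              # true 0 arrives, no failure: -> (i+1,1)
        P[r][4] += q                 # true 1 arrives, no failure: -> (0,1)
        P[r][r - 4] = rho            # receiver fails: -> (i+1,0)
    # state (4,1): either the fifth 0 is reported and we install a new
    # unit, or a true 1 gets through; both lead to (0,1)
    P[8][4] = Fraction(1)
    return P

P = build_P(Fraction(1, 10))         # arbitrary illustration value of rho
assert P[6][7] == Fraction(9, 20)    # p_{78} = 0.5(1 - rho)
assert P[6][4] == Fraction(9, 20)    # p_{75} = 0.5(1 - rho)
assert P[6][2] == Fraction(1, 10)    # p_{73} = rho
assert all(sum(row) == 1 for row in P)
```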
After the $\pi_i$ are determined, we can find the error rate $\epsilon$,
and the mean time (i.e., the mean number of bit receptions) between
receiver replacements, $\mu$. We can find both $\epsilon$ and $\mu$
in terms of the $\pi_i$, in the following manner.
The quantity $\epsilon$ is the proportion of the time that the true
value of the received bit is 1 but the receiver is down, which is
0.5 times the proportion of the time spent in states of the form (i,0):
\begin{equation}
\epsilon = 0.5 (\pi_1 + \pi_2 + \pi_3 + \pi_4)
\end{equation}
Now to get $\mu$ in terms of the $\pi_i$ note that since $\mu$ is the
mean number of bits between receiver replacements, it is then the
reciprocal of the proportion of bits that result in replacements.
For example, if 5\% of the received bits result in replacement of the
receiver, then (speaking on an intuitive level) the ``average'' set
of 20 bits will contain one bit which makes us replace the receiver,
and there will thus be an average of 20 bits between replacements. A
replacement will occur only from states of the form (4,j), and even
then only under the condition that the next reported bit is a 0. In
other words, there are three possible ways in which replacement can
occur:
\begin{itemize}
\item [(a)] We are in state (4,0). Here, since the receiver has
failed, the next reported bit will definitely be a 0, regardless of
that bit's true value. We will then have a total of k = 5
consecutive received 0s, and therefore will replace the receiver.
\item [(b)] We are in the state (4,1), and the next bit to arrive is
a true 0. It then will be reported as a 0, our fifth consecutive 0,
and we will replace the receiver, as in (a).
\item [(c)] We are in the state (4,1), and the next bit to arrive is
a true 1, but the receiver fails at that time, resulting in the
reported value being a 0. Again we have five consecutive reported 0s,
so we replace the receiver.
\end{itemize}
Therefore,
\begin{equation}
\mu^{-1} = \pi_4 + \pi_9 (0.5 + 0.5 \rho)
\end{equation}
This kind of analysis could be used as the core of a cost-benefit
tradeoff investigation to determine a good value of k. (Note that
the $\pi_i$ are functions of k, and that the above equations for
the case k = 5 must be modified for other values of k.)
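Putting the pieces together, the k = 5 analysis can be sketched end to
end in Python: build P for a given $\rho$, approximate $\pi$ by power
iteration (repeatedly applying $\pi \leftarrow \pi P$, which converges
here because the chain is irreducible and aperiodic), and read off
$\epsilon$ and $\mu$. The matrix rows beyond the one derived in the text,
and the value of $\rho$, are our own illustrative choices:

```python
def build_P(rho):
    """9x9 transition matrix for k = 5; states 1..9 are
    (1,0),...,(4,0),(0,1),...,(4,1), stored 0-based."""
    q = 0.5 * (1 - rho)
    P = [[0.0] * 9 for _ in range(9)]
    for r in range(3):
        P[r][r + 1] = 1.0       # failed receiver keeps reporting 0s
    P[3][4] = 1.0               # (4,0): fifth 0, replace -> (0,1)
    for r in range(4, 8):       # working, fewer than 4 consecutive 0s
        P[r][r + 1] = q         # true 0 arrives, no failure
        P[r][4] += q            # true 1 arrives, no failure
        P[r][r - 4] = rho       # receiver fails
    P[8][4] = 1.0               # from (4,1) we always go to (0,1)
    return P

def stationary(P, iters=20000):
    """Approximate pi by power iteration: repeatedly set pi <- pi P."""
    n = len(P)
    pi = [1.0 / n] * n
    for _ in range(iters):
        pi = [sum(pi[k] * P[k][i] for k in range(n)) for i in range(n)]
    return pi

rho = 0.001                     # arbitrary illustration value
pi = stationary(build_P(rho))
eps = 0.5 * sum(pi[0:4])        # error rate: true bit is 1, receiver down
mu = 1.0 / (pi[3] + pi[8] * (0.5 + 0.5 * rho))  # mean bits between replacements
```

Repeating this for a range of k and $\rho$ values would produce the
cost-versus-accuracy figures needed for the tradeoff study described above.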
\end{document}