\documentclass[12pt]{article}
\begin{document}
\title{Markov Chains}
\author{Anne Greenbaum \thanks{Math/AMath 381, Winter quarter, 1998.}}
\maketitle
A Markov chain is a special kind of stochastic process, in which a
system can be in a finite number of states and the probabilities
governing changes between the states do not depend on the history
of the system. That is, knowing the current state of the system
we can give the probabilities of future states, without regard to
how the system reached its current state.
\section{Examples}
\begin{itemize}
\item
Drunkard's walk. A drunkard starts at position $x$ below, and moves
to the right one space with probability .5 and to the left one space
with probability .5. If he reaches the bar, he stays there, and if
he reaches home, he stays there. What is the probability $p(x)$ that
he reaches home before reaching the bar?
\vspace{1in}
\noindent
Observe that $p(0) = 0$ and $p(5) = 1$, since the drunk does not
leave either the bar or his home. For $x=1,2,3,4$,
$p(x) = .5 \, p(x-1) + .5 \, p(x+1)$, since he moves left with
probability $.5$ and right with probability $.5$.

One could solve this problem by Monte Carlo simulation. Start, say,
10000 drunks at position $x$ and determine their movements via a
random number generator. For example, one could use a uniform random
number generator such as \verb+rand+ and move left if the number is
less than .5, right if it is greater than .5. Count the fraction
of drunks that reach home before they reach the bar. This is not
the most efficient way to solve the problem, however.
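In Python rather than MATLAB, the Monte Carlo experiment just described might look like the following sketch (the function name, walker count, and seed are illustrative choices, not part of the original notes):

```python
import random

def monte_carlo_walk(start, n_walkers=10000, bar=0, home=5, seed=1):
    """Estimate the probability of reaching home before the bar
    by simulating many independent drunkards."""
    random.seed(seed)
    reached_home = 0
    for _ in range(n_walkers):
        x = start
        while x != bar and x != home:
            # move left if the random number is less than .5, else right
            x += -1 if random.random() < 0.5 else 1
        if x == home:
            reached_home += 1
    return reached_home / n_walkers

print(monte_carlo_walk(2))   # close to the exact answer p(2) = 2/5
```

With $10000$ walkers the estimate is typically within about $.01$ of the true value, which illustrates the slow $O(1/\sqrt{N})$ convergence that makes this approach inefficient.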

We can think of this problem as having six states, $0$ through $5$,
corresponding to the current position of our walker. Define the
{\em transition matrix} $P$ so that
\[
P_{ij} = \mbox{probability of going from state $j$ to state $i$} .
\]
\noindent
[Note: Sometimes the transition matrix is defined as the {\em transpose}
of the one I have given here. Then $P_{ij}$ is the probability of
going from state $i$ to state $j$. To use this definition, one must
apply row vectors to $P$ from the left, whereas I will put $P$ on the
left and apply it to column vectors on the right.]
The transition matrix for this problem is:
\[
P = \left( \begin{array}{cccccc}
1 & 1/2 & 0 & 0 & 0 & 0 \\
0 & 0 & 1/2 & 0 & 0 & 0 \\
0 & 1/2 & 0 & 1/2 & 0 & 0 \\
0 & 0 & 1/2 & 0 & 1/2 & 0 \\
0 & 0 & 0 & 1/2 & 0 & 0 \\
0 & 0 & 0 & 0 & 1/2 & 1 \end{array} \right) .
\]
\noindent
Note that the entries in a transition matrix are all greater than or
equal to zero and the sum of the entries in any column is one:
$\sum_i P_{ij} = 1~\forall j$. Such a matrix may also be called
a {\em probability matrix}.

Suppose the walker starts at position $x=2$. The initial state is
described by the vector
\[
x^{(0)} = \left( \begin{array}{c}
0 \\ 0 \\ 1 \\ 0 \\ 0 \\ 0 \end{array} \right) .
\]
To see the possible positions and their probabilities after one step,
apply the transition matrix $P$ to $x^{(0)}$ to obtain
\[
x^{(1)} = \left( \begin{array}{c}
0 \\ 1/2 \\ 0 \\ 1/2 \\ 0 \\ 0 \end{array} \right) ,
\]
indicating that with probability $1/2$ the walker is in position $1$,
while with probability $1/2$ he is in position $3$. Continuing in this
way, we wish to compute $( \lim_{n \rightarrow \infty} P^n ) x^{(0)}$.
All entries of this vector except the first and the last will be $0$,
since the walker will eventually end up either at home or at the bar.
(These are called {\em absorbing states}.) The last entry of this vector
is the desired answer $p(x)$.

We could do this computation with MATLAB by entering the matrix $P$ and
the vector $x^{(0)}$ and applying $P$ to $x^{(0)}$ many, many times until
a pattern starts to emerge. Try it if you are curious.
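The same experiment can be carried out in a few lines of Python, as a sketch (plain lists, no libraries; the iteration count of $200$ is an arbitrary "many, many times"):

```python
# The transition matrix P for the drunkard's walk, states 0 through 5.
P = [
    [1, 0.5, 0,   0,   0,   0],
    [0, 0,   0.5, 0,   0,   0],
    [0, 0.5, 0,   0.5, 0,   0],
    [0, 0,   0.5, 0,   0.5, 0],
    [0, 0,   0,   0.5, 0,   0],
    [0, 0,   0,   0,   0.5, 1],
]

def apply(P, x):
    """Matrix-vector product P x."""
    return [sum(P[i][j] * x[j] for j in range(6)) for i in range(6)]

x = [0, 0, 1, 0, 0, 0]      # walker starts at position 2
for _ in range(200):        # apply P many, many times
    x = apply(P, x)

print([round(v, 4) for v in x])   # → [0.6, 0.0, 0.0, 0.0, 0.0, 0.4]
```

The pattern that emerges is exactly the claimed limit: all the probability ends up in the two absorbing states, with $p(2) = .4$ in the last entry.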

Another approach is the following. We could compute the
{\em eigendecomposition} of $P$: $P = V \Lambda V^{-1}$, where
$\Lambda$ is the diagonal matrix of eigenvalues, and $V$ is the matrix
of eigenvectors. Then $P^n = V \Lambda^n V^{-1}$ and
$\lim_{n \rightarrow \infty} P^n = V ( \lim_{n \rightarrow \infty} \Lambda^n )
V^{-1}$. It is easy to compute $\lim_{n \rightarrow \infty} \Lambda^n$,
since a diagonal matrix raised to a power just consists of the individual
diagonal entries raised to that power. The eigenvalues and eigenvectors
of our matrix $P$ are known, but we could also compute (approximations to)
them using MATLAB. Entering the matrix $P$ and typing
\verb+[V,Lam] = eig(P)+, MATLAB returns
\begin{verbatim}
V =
1.0000 0 0.5721 -0.1017 0.3707 -0.2185
0 0 -0.2185 0.3679 -0.5122 0.5721
0 0 -0.3536 -0.5952 -0.3166 -0.3536
0 0 -0.3536 0.5952 0.3166 -0.3536
0 0 -0.2185 -0.3679 0.5122 0.5721
0 1.0000 0.5721 0.1017 -0.3707 -0.2185
Lam =
1.0000 0 0 0 0 0
0 1.0000 0 0 0 0
0 0 0.8090 0 0 0
0 0 0 -0.8090 0 0
0 0 0 0 0.3090 0
0 0 0 0 0 -0.3090
\end{verbatim}
\noindent
and the inverse of \verb+V+ can be computed by typing \verb+Vinv = inv(V)+:
\begin{verbatim}
Vinv =
1.0000 0.8000 0.6000 0.4000 0.2000 0
0 0.2000 0.4000 0.6000 0.8000 1.0000
0 -0.6325 -1.0233 -1.0233 -0.6325 0
0 0.3757 -0.6078 0.6078 -0.3757 0
0 -0.7063 -0.4365 0.4365 0.7063 0
0 0.6325 -0.3909 -0.3909 0.6325 0
\end{verbatim}
\noindent
The matrix $P$ has two eigenvalues equal to one; the others are
less than one in absolute value, so when they are raised to higher
and higher powers they approach zero. Thus
\[
\lim_{n \rightarrow \infty} \Lambda^n = \left( \begin{array}{cccccc}
1 & 0 & 0 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 \end{array} \right) ,
\]
\noindent
and
\[
\lim_{n \rightarrow \infty} P^n = \left( \begin{array}{cccccc}
1 & .8 & .6 & .4 & .2 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 \\
0 & .2 & .4 & .6 & .8 & 1 \end{array} \right) .
\]
\noindent
From this expression it is easy to determine the value of $p(x)$ for
any $x$; it is the entry in the last row of the column corresponding
to $x$. The probability $p(x)$ of reaching home before reaching the
bar is $x/5$, and the probability of reaching the bar first is $1 - x/5$.

An obvious generalization of this problem is to assign different
probabilities for moving right and moving left, say, the walker
moves right with probability $3/5$ and left with probability $2/5$.
Now determine the probability $p(x)$ of reaching home before reaching
the bar. The same techniques can be applied.
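As a sketch of one such technique in Python (the function name and iteration count are illustrative), one can propagate the probability distribution of the biased walk until essentially all the mass is absorbed, and compare against the classical closed-form gambler's-ruin answer $p(x) = (1 - r^x)/(1 - r^5)$ with $r = (2/5)/(3/5) = 2/3$:

```python
p_right, p_left = 3/5, 2/5   # biased walk: right w.p. 3/5, left w.p. 2/5

def prob_home(start, home=5, steps=2000):
    """Probability of reaching `home` before position 0,
    found by iterating the biased chain."""
    dist = [0.0] * (home + 1)   # dist[k] = probability of being at k
    dist[start] = 1.0
    for _ in range(steps):
        new = [0.0] * (home + 1)
        new[0] = dist[0]        # the bar is absorbing
        new[home] = dist[home]  # home is absorbing
        for k in range(1, home):
            new[k - 1] += p_left * dist[k]
            new[k + 1] += p_right * dist[k]
        dist = new
    return dist[home]

r = p_left / p_right
for x in range(6):
    exact = (1 - r**x) / (1 - r**5)   # classical gambler's-ruin formula
    print(x, round(prob_home(x), 4), round(exact, 4))
```

The two columns agree to within rounding, confirming that the matrix-iteration technique carries over unchanged to the biased walk.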
\item
Electrical Networks. The ``silly''-sounding problem above actually
has applications in many areas. Consider an electrical network with
equal resistors in series and a unit voltage across the ends.
\vspace{1.4in}
Voltages $v(x)$ will be established at points $x=0,1,2,3,4,5$.
We have grounded the point $x=0$ so that $v(0) = 0$, and there
is no resistor between the source and point $x=5$, so that $v(5)=1$.
By Kirchhoff's laws, the current flowing into $x$ must be equal to
the current flowing out. By Ohm's Law, if points $x$ and $y$ are
separated by a resistor of strength $R$, then the current $i_{xy}$
that flows from $x$ to $y$ is
\[
i_{xy} = \frac{v(x) - v(y)}{R} .
\]
Thus for $x=1,2,3,4$, we have
\[
\frac{v(x-1) - v(x)}{R} = \frac{v(x) - v(x+1)}{R} .
\]
Multiplying by $R$ and combining like terms, we see that
$v(x) = .5 \, v(x-1) + .5 \, v(x+1)$. This is exactly the same formula
(with the same boundary conditions) that we found for $p(x)$ in the
previous problem.

Suppose the resistors are not of equal strength, say, the resistor
between points $x$ and $x+1$ has strength $R_{x+1/2}$. Then
\[
\frac{v(x-1) - v(x)}{R_{x-1/2}} = \frac{v(x) - v(x+1)}{R_{x+1/2}} ,
\]
so
\[
v(x) = v(x-1) \frac{R_{x+1/2}}{R_{x-1/2} + R_{x+1/2}} + v(x+1)
\frac{R_{x-1/2}}{R_{x-1/2} + R_{x+1/2}} .
\]
\noindent
This is the same as the problem in which the walker goes right
with probability $R_{x-1/2}/( R_{x-1/2} + R_{x+1/2} )$ and left
with probability $R_{x+1/2}/( R_{x-1/2} + R_{x+1/2} )$.
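A short Python sketch of the unequal-resistor case (the resistor values here are made up for illustration): repeatedly sweeping the averaging formula above drives the voltages to their steady values, just as repeatedly applying $P$ did for the walker.

```python
# Hypothetical resistor strengths: R[k] joins points k and k+1.
R = [1.0, 2.0, 1.0, 4.0, 2.0]

v = [0.0] * 6
v[5] = 1.0                     # boundary conditions: v(0) = 0, v(5) = 1
for _ in range(10000):         # Gauss-Seidel-style sweeps of the averaging rule
    for x in range(1, 5):
        left, right = R[x - 1], R[x]   # R_{x-1/2} and R_{x+1/2}
        v[x] = (v[x - 1] * right + v[x + 1] * left) / (left + right)

print([round(val, 4) for val in v])   # → [0.0, 0.1, 0.3, 0.4, 0.8, 1.0]
```

As a check, for resistors in series the voltage at point $x$ is the cumulative resistance up to $x$ divided by the total ($10$ here), which gives the same values.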
\item
Winning a Game. The following is yet another problem analogous
to the previous two. Jack and Jill play rummy for \$1 a hand.
Jack starts with \$2 and Jill starts with \$3. There are six possible
states: Jack has no money and Jill has \$5, Jack has \$1 and Jill has \$4,
etc. The game continues until one of the players has no money.
If the players are equally skilled, then each wins a hand with
probability $.5$. The probability that Jack wins the game is then
the same as the probability that the drunk of the first example,
starting at position $x=2$, makes it home.
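One can check this equivalence with a short Python simulation (the trial count and seed are arbitrary choices for this sketch):

```python
import random

def jack_wins(trials=20000, seed=2):
    """Fraction of games Jack wins, starting with $2 of the $5 total."""
    random.seed(seed)
    wins = 0
    for _ in range(trials):
        jack = 2
        while 0 < jack < 5:            # play hands until someone is broke
            jack += 1 if random.random() < 0.5 else -1
        wins += (jack == 5)
    return wins / trials

print(jack_wins())   # close to 2/5, the drunkard's-walk answer p(2)
```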
\end{itemize}
\end{document}