Probability Theory

First Principles

Conditional Probability and Independence

Discrete Random Variables

Expectation and More with Discrete Random Variables

More Discrete Distributions

Continuous Probability

Continuous Distributions

Limits

Random Walks and Markov Chains

Random walks on graphs are a special case of Markov chains. A Markov chain is a sequence of random variables $X_0, X_1, X_2, \dots$ with the property that for all $n$, the conditional distribution of $X_{n+1}$ given the past history $X_0, \dots, X_n$ is equal to the conditional distribution of $X_{n+1}$ given $X_n$ alone. This is sometimes stated as: the distribution of the future, given the past, depends only on the present. The set of possible values of the Markov chain is called the state space.
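For a chain on a discrete state space, the defining Markov property can be written explicitly: for all $n \geq 0$ and all states $i_0, \dots, i_{n-1}, i, j$,

$$P(X_{n+1} = j \mid X_0 = i_0, \dots, X_{n-1} = i_{n-1}, X_n = i) = P(X_{n+1} = j \mid X_n = i).$$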

A simple random walk on a graph moves at each step to a neighbor of the current vertex chosen uniformly at random. It is a Markov chain because the distribution of the walk's next position depends only on the current vertex, not on the previous locations of the walk; this is illustrated in the sketch below. The state space is the vertex set of the graph, so the state space is discrete. Extensions exist to continuous state spaces.
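As an illustration, here is a minimal Python sketch of a simple random walk, assuming the graph is stored as an adjacency list (a dict mapping each vertex to a list of its neighbors); the function name and the example graph are hypothetical.

```python
import random

def simple_random_walk(graph, start, steps, seed=None):
    """Simulate a simple random walk on a graph given as an
    adjacency list: at each step, move to a uniformly chosen
    neighbor of the current vertex."""
    rng = random.Random(seed)
    path = [start]
    current = start
    for _ in range(steps):
        # The next vertex depends only on the current one
        # (the Markov property), not on the earlier path.
        current = rng.choice(graph[current])
        path.append(current)
    return path

# Example: a walk on a 4-cycle with vertices 0, 1, 2, 3.
cycle = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
print(simple_random_walk(cycle, start=0, steps=10, seed=1))
```

The adjacency-list representation makes the Markov structure visible in the code: the only state the loop carries forward is the current vertex.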

Markov chains are remarkably useful models. They are used extensively in virtually every applied field to model random processes that exhibit some dependency structure between successive outcomes.