One approach is to consider the sequence $(Y_n)_n$ of the successive different states visited by $(X_t)_t$. You could try to show that $(Y_n)_n$ is a discrete Markov chain, to compute its transition probabilities (these are very simple and, hint, they do not depend on the numbers $q_i$) and, finally, to compute the probability that there exists an integer time $n$ such that $Y_n=i$.
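The hint above can be explored numerically. Below is a minimal sketch (on a hypothetical 4-state chain, not the original exercise's chain) showing that the jump chain $(Y_n)_n$ has transition probabilities $P_{ij} = q_{ij}/q_i$, and estimating a hitting probability by Monte Carlo:

```python
import random

# Hypothetical example (not the original exercise's chain): a CTMC on
# states {0, 1, 2, 3} with generator Q.  The successive *distinct* states
# (Y_n) form a discrete-time chain with transition probabilities
# P[i][j] = Q[i][j] / (-Q[i][i]) -- the holding rates cancel out.
Q = [
    [-3.0, 1.0, 2.0, 0.0],
    [4.0, -6.0, 1.0, 1.0],
    [0.0, 2.0, -2.0, 0.0],
    [1.0, 0.0, 1.0, -2.0],
]

P = [[Q[i][j] / -Q[i][i] if i != j else 0.0 for j in range(4)]
     for i in range(4)]

def hits(start, target, max_steps=10_000):
    """Simulate the jump chain; report whether it ever visits `target`."""
    state = start
    for _ in range(max_steps):
        if state == target:
            return True
        state = random.choices(range(4), weights=P[state])[0]
    return False

random.seed(0)
estimate = sum(hits(0, 3) for _ in range(2000)) / 2000
print(f"estimated P(exists n: Y_n = 3 | Y_0 = 0) ~ {estimate:.2f}")
```

Since this illustrative chain is irreducible on a finite state space, the estimated hitting probability comes out as 1, consistent with the hint that the answer does not depend on the holding rates.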
1) draws a parallel between the probability of a Markov chain, as defined in basic textbooks, and its measure-theoretic counterpart (and likewise for other quantities), thus explaining how to pass from one to the other
We formulate some simple conditions under which a Markov chain may be approximated by the solution to a differential equation, with quantifiable error probabilities. The role of a choice of coordinate functions for the Markov chain is emphasised. The general theory is illustrated in three examples: the classical stochastic epidemic, a population process model with fast and slow variables, and core-finding algorithms for large random hypergraphs.
Presence: highly recommended. Module subject: Discrete time Markov chains: definition, communicating classes, absorbing states, irreducibility, absorption probabilities. Stopping times, strong Markov property, transience, recurrence. Stationarity, reversibility, periodicity, convergence to equilibrium, ergodic theorem, examples of applications. Continuous time Markov chains: definition, infinitesimal generator, backward/forward Kolmogorov equations, explosion. Poisson process, birth and death processes. Brownian motion and Gaussian processes. Suggested reading:
J. R. Norris, Markov Chains. Cambridge Series in Statistical and Probabilistic Mathematics, Cambridge, 1997.
P. Brémaud, Markov Chains, Gibbs Fields, Monte Carlo Simulation, and Queues. Springer-Verlag, New York, 1999.
G. R. Grimmett and D. R. Stirzaker, Probability and Random Processes. Third edition, Oxford University Press, New York, 2001. Issue: Updated version of the solutions to exercises in Norris, Markov Chains. Type of course: standard. Exercises: Suggested exercises from Chapter 6, Grimmett and Stirzaker: 6.1 - n. 2, 3, 8, 9, 10, 11 and 12; 6.2 - 2, 4; 6.3 - 1 and 3 (without mean recurrence time), 2, 4 (simplify the chain), 5, 8 and 10 (first part). Examination tests:
Topics. This is an introductory course on stochastic processes that takes a computational approach to the subject, with an emphasis on developing operational skills in modelling random phenomena. Here are some of the topics we plan to cover:
These two definitions are equivalent. If the event $\{X_n = t\}$ occurs infinitely many times almost surely, then in particular it occurs at least once almost surely. Conversely, if it occurs at least once almost surely, then by the strong Markov property the chain restarts afresh from $t$ at the time of the first visit, so it returns to $t$ again almost surely, and by induction it visits $t$ infinitely many times.
A continuous-time Markov chain (CTMC) is a continuous-time stochastic process in which, for each state, the process holds for an exponentially distributed random time and then moves to a different state as specified by the probabilities of a stochastic matrix. An equivalent formulation describes the process as changing state according to the least value of a set of exponential random variables, one for each possible state it can move to, with the parameters determined by the current state.
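The second formulation can be sketched directly in code. The following is a minimal simulation on a hypothetical 3-state chain with illustrative rates: from the current state $i$, one exponential clock is drawn per reachable state $j$ with rate $q_{ij}$, and the process jumps to whichever state's clock rings first.

```python
import random

# Hypothetical 3-state chain: rates[i][j] = q_ij (illustrative values).
rates = {
    0: {1: 2.0, 2: 1.0},
    1: {0: 3.0, 2: 1.0},
    2: {0: 0.5, 1: 0.5},
}

def step(state, rng):
    # One exponential clock per reachable state; the first to ring wins.
    clocks = {j: rng.expovariate(q) for j, q in rates[state].items()}
    nxt = min(clocks, key=clocks.get)
    return nxt, clocks[nxt]          # next state and holding time

rng = random.Random(42)
t, state, path = 0.0, 0, [(0.0, 0)]
while t < 5.0:
    state, hold = step(state, rng)
    t += hold
    path.append((t, state))
print(path[:4])
```

The equivalence with the first formulation comes from a standard fact: the minimum of independent exponentials with rates $q_{ij}$ is exponential with rate $q_i = \sum_j q_{ij}$, and, independently of its value, the minimum is attained at $j$ with probability $q_{ij}/q_i$.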
Each non-diagonal entry $q_{i,j}$ can be computed as the probability that the jump chain moves from state $i$ to state $j$, divided by the expected holding time of state $i$. The diagonal entries are chosen so that each row sums to 0.
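This recipe for assembling the generator is easy to make concrete. A minimal sketch with illustrative numbers (the jump-chain matrix and holding times below are made up for the example):

```python
import numpy as np

# Recover the generator Q from the jump-chain matrix J and the expected
# holding times h[i]: q_ij = J[i][j] / h[i] for i != j, with diagonal
# entries set so that each row of Q sums to zero.
J = np.array([[0.0, 0.7, 0.3],
              [0.5, 0.0, 0.5],
              [1.0, 0.0, 0.0]])   # jump-chain transition probabilities
h = np.array([0.5, 2.0, 1.0])    # expected holding time in each state

Q = J / h[:, None]                           # off-diagonal entries q_ij
Q[np.diag_indices_from(Q)] = -Q.sum(axis=1)  # rows now sum to 0
print(Q)
```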
A CTMC satisfies the Markov property, that its behavior depends only on its current state and not on its past behavior, thanks to the memorylessness of the exponential holding times together with the Markov property of the embedded discrete-time jump chain.
The image to the right describes a discrete-time Markov chain modeling Pac-Man, with state space {1, 2, 3, 4, 5, 6, 7, 8, 9}. The player controls Pac-Man through a maze, eating pac-dots, while being hunted by ghosts. For convenience, the maze is a small 3x3 grid and the monsters move randomly in the horizontal and vertical directions. A secret passageway between states 2 and 8 can be used in both directions. Entries with probability zero are removed in the following transition matrix:
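One plausible reconstruction of that transition matrix (assuming, as the description suggests, that a monster picks uniformly among the moves available from its square) can be built programmatically. States 1..9 are laid out row by row on the 3x3 grid, and the secret passage adds an extra edge between 2 and 8:

```python
import numpy as np

# Build the adjacency of the 3x3 grid (horizontal/vertical moves only),
# plus the secret passageway between states 2 and 8.
edges = set()
for s in range(1, 10):
    r, c = divmod(s - 1, 3)
    if c < 2:
        edges.add((s, s + 1))   # horizontal neighbour
    if r < 2:
        edges.add((s, s + 3))   # vertical neighbour
edges.add((2, 8))               # secret passageway, usable both ways

P = np.zeros((9, 9))
for a, b in edges:
    P[a - 1, b - 1] = P[b - 1, a - 1] = 1.0
P /= P.sum(axis=1, keepdims=True)   # uniform over the available moves
print(np.round(P, 3))
```

For instance, from corner state 1 the chain moves to 2 or 4 with probability 1/2 each, while from state 2 it moves to 1, 3, 5, or 8 with probability 1/4 each.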
A chain is said to be reversible if the reversed process is the same as the forward process. Kolmogorov's criterion states that the necessary and sufficient condition for a process to be reversible is that the product of transition rates around a closed loop must be the same in both directions.
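Kolmogorov's criterion is straightforward to check numerically. A minimal sketch on a hypothetical 3-state chain (using transition probabilities; the same product condition applies to transition rates in continuous time), comparing the loop 0 → 1 → 2 → 0 in both directions:

```python
import numpy as np

# Hypothetical 3-state transition matrix (this one happens to be reversible).
P = np.array([[0.0, 0.5, 0.5],
              [0.25, 0.5, 0.25],
              [0.25, 0.25, 0.5]])

# Kolmogorov's criterion on the loop 0 -> 1 -> 2 -> 0: the product of
# transition probabilities must be the same in both directions.
forward = P[0, 1] * P[1, 2] * P[2, 0]
backward = P[0, 2] * P[2, 1] * P[1, 0]
print(forward, backward, np.isclose(forward, backward))
```

In general the criterion must hold for every closed loop, though on a small fully connected state space like this one the triangle loops already capture them all.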
We will follow the book of Norris, beginning with a recap of basic probability. Then we pass to the definition of Markov chains and the definition of irreducibility. We analyze the notions of recurrence and transience, particularly for irreducible chains. We then define positive recurrence and stationary distributions before proving the convergence theorem for aperiodic positive recurrent Markov chains. The last two topics are continuous-time Markov chains and renewal theorems.
This is a whole book just on Markov processes, including some more detailed material that goes beyond this module. Its coverage of both discrete and continuous time Markov processes is very thorough. Chapter 1, on discrete time Markov chains, is available online.
The problem of estimating an unknown discrete distribution from its samples is a fundamental tenet of statistical learning. Over the past decade, it attracted significant research effort and has been solved for a variety of divergence measures. Surprisingly, an equally important problem, estimating an unknown Markov chain from its samples, is still far from understood. We consider two problems related to the min-max risk (expected loss) of estimating an unknown $k$-state Markov chain from its $n$ sequential samples: predicting the conditional distribution of the next sample with respect to the KL-divergence, and estimating the transition matrix with respect to a natural loss induced by KL or a more general $f$-divergence measure. For the first measure, we determine the min-max prediction risk to within a linear factor in the alphabet size, showing it is $\Omega(k \log \log n / n)$ and $O(k^2 \log \log n / n)$. For the second, if the transition probabilities can be arbitrarily small, then only trivial uniform risk upper bounds can be derived. We therefore consider transition probabilities that are bounded away from zero, and resolve the problem for essentially all sufficiently smooth $f$-divergences, including KL-, $L_2$-, chi-squared, Hellinger, and alpha-divergences.
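As a much simpler baseline than the min-max estimators discussed in the abstract (and not one of the estimators analyzed there), the transition matrix of a $k$-state chain can be estimated from one sample path by counting transitions with add-one (Laplace) smoothing, which keeps every row a valid distribution even for unseen transitions:

```python
import random
import numpy as np

def estimate_transition_matrix(path, k):
    """Add-one-smoothed empirical transition matrix from one sample path."""
    counts = np.ones((k, k))                 # Laplace smoothing
    for a, b in zip(path, path[1:]):
        counts[a, b] += 1
    return counts / counts.sum(axis=1, keepdims=True)

# Sanity check against a known 2-state chain (illustrative values).
true_P = np.array([[0.9, 0.1],
                   [0.4, 0.6]])
rng = random.Random(1)
path, s = [0], 0
for _ in range(20_000):
    s = 0 if rng.random() < true_P[s, 0] else 1
    path.append(s)

est = estimate_transition_matrix(path, 2)
print(np.round(est, 2))
```

With 20,000 sequential samples the estimate lands close to the true matrix; the abstract's point is precisely that quantifying such convergence uniformly over chains, especially with small transition probabilities, is delicate.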