Matrix Analytic Techniques
General Distributions
- Recall the exponential distribution Exp(μ): E[X] = 1/μ and Var(X) = 1/μ²
- The squared coefficient of variation is C² = Var(X)/E[X]² = 1
- We can combine exponential random variables in series and in parallel:
  - A series of k identical Exp(kμ) stages has C² = 1/k (Erlang-k distribution); this can be generalized to stages with different rates
  - A parallel (probabilistic) combination of Exp(μ1) and Exp(μ2), chosen with probabilities p and 1−p, respectively (hyperexponential distribution), has C² ≥ 1, with strict inequality unless μ1 = μ2
- Phase-type (PH) distributions: arbitrary combinations in series and parallel (plus an absorbing state); any non-negative distribution can be approximated arbitrarily closely by a PH distribution
- Coxian distributions: a series of exponential stages with an arbitrary probability of exiting after each stage; also "dense" in the class of non-negative distributions
- PH distributions are built from exponential stages, so systems that use them still give rise to Markov chains
- Can we leverage the PH structure to solve the corresponding Markov chains?
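As a quick sanity check on these C² values, the following sketch (helper names `erlang_c2` and `hyperexp_c2` are ours) computes the squared coefficient of variation of an Erlang-k and a two-phase hyperexponential directly from their first two moments:

```python
def erlang_c2(k, mu):
    """C^2 of an Erlang-k: k i.i.d. Exp(k*mu) stages in series."""
    mean = k / (k * mu)          # = 1/mu, same mean as a single Exp(mu)
    var = k / (k * mu) ** 2      # variances of independent stages add
    return var / mean ** 2       # = 1/k

def hyperexp_c2(p, mu1, mu2):
    """C^2 of a hyperexponential: Exp(mu1) w.p. p, Exp(mu2) w.p. 1-p."""
    m1 = p / mu1 + (1 - p) / mu2                 # E[X]
    m2 = 2 * p / mu1**2 + 2 * (1 - p) / mu2**2   # E[X^2]; Exp(mu) has E[X^2] = 2/mu^2
    return m2 / m1**2 - 1                        # Var(X)/E[X]^2

print(erlang_c2(4, 1.0))           # -> 0.25, i.e. 1/k
print(hyperexp_c2(0.3, 1.0, 5.0))  # > 1, since the two rates differ
```

With equal rates (μ1 = μ2) the hyperexponential collapses to a single exponential and C² returns to exactly 1.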
Two-Stage Coxian Distribution
- Stage 1 is Exp(μ1); with probability p the customer continues to stage 2, Exp(μ2), and with probability 1−p it exits
- E[S] = 1/μ1 + p/μ2
- E[S²] = 2/μ1² + 2p/(μ1μ2) + 2p/μ2² (the middle term is the cross term 2p·E[X1]E[X2] from E[(X1 + X2)²])
- C² = E[S²]/E[S]² − 1 = 2(μ2² + pμ1μ2 + pμ1²)/(μ2 + pμ1)² − 1
- Can be used to obtain distributions with both C² > 1 and C² < 1:
  - μ1 = 1 and μ2 = 2 yields C² = (4 + 2p − p²)/(p + 2)² < 1 for all p > 0
  - μ1 = 1 and μ2 = 1/2 yields C² = (1 + 8p − 4p²)/(1 + 2p)² > 1 for 0 < p < 1/2
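To see both regimes concretely, this sketch (helper names are ours) computes E[S], E[S²], and C² for the two-stage Coxian from first principles, writing S = X1 + I·X2 with X1 ~ Exp(μ1), X2 ~ Exp(μ2), and I an independent Bernoulli(p) indicator:

```python
def coxian2_moments(p, mu1, mu2):
    """First two moments of a two-stage Coxian:
    S = X1 + I*X2, X1 ~ Exp(mu1), X2 ~ Exp(mu2), I = 1 w.p. p."""
    m1 = 1 / mu1 + p / mu2
    # E[S^2] = E[X1^2] + 2p*E[X1]*E[X2] + p*E[X2^2]
    m2 = 2 / mu1**2 + 2 * p / (mu1 * mu2) + 2 * p / mu2**2
    return m1, m2

def coxian2_c2(p, mu1, mu2):
    m1, m2 = coxian2_moments(p, mu1, mu2)
    return m2 / m1**2 - 1

print(coxian2_c2(0.5, 1.0, 2.0))  # -> 0.76, below 1 (series-like behavior)
print(coxian2_c2(0.3, 1.0, 0.5))  # -> 1.1875, above 1 (slow stage taken rarely)
```

The second stage being slow but rarely visited is what pushes the variability above that of a single exponential.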
The M/E2/1 Queue as an Example
- Poisson arrivals of rate λ
- Service time with two exponential stages in series: Exp(μ1) followed by Exp(μ2)
- Markov chain representation: states (i, j), where j ∈ {1, 2} is the current service stage and i is the number of customers waiting in the queue, plus the empty state (0,0); transitions are λ (arrival), μ1 (stage-1 completion), and μ2 (service completion)
Balance Equations – (1)
Balance around state (0,0): λP0,0 = μ2P0,2
Equivalently −λP0,0 + μ2P0,2 = 0
Balance Equations – (2)
Balance around state (0,1): (λ+μ1)P0,1 = λP0,0 + μ2P1,2
Equivalently a1P0,1 + λP0,0 + μ2P1,2 = 0, where a1 = −(λ+μ1)
Balance Equations – (3)
Balance around state (0,2): (λ+μ2)P0,2 = μ1P0,1
Equivalently a2P0,2 + μ1P0,1 = 0, where a2 = −(λ+μ2)
Balance Equations & Generator Matrix
Ordering the states as (0,0), (0,1), (0,2), (1,1), (1,2), (2,1), (2,2), (3,1), (3,2), …, the balance equations are, in general:
−λP0,0 + μ2P0,2 = 0
a1P0,1 + λP0,0 + μ2P1,2 = 0
a2P0,2 + μ1P0,1 = 0
a1P1,1 + λP0,1 + μ2P2,2 = 0
a2P1,2 + μ1P1,1 = 0
…
where a1 = −(λ+μ1) and a2 = −(λ+μ2)
Generator Matrix
With the same state ordering, the generator matrix Q has a block-tridiagonal structure:

Q = [ L0  F0  0   0   … ]
    [ B0  L   F   0   … ]
    [ 0   B   L   F   … ]
    [ 0   0   B   L   … ]
    [ ⋮               ⋱ ]

The boundary blocks cover level 0 (states (0,0), (0,1), (0,2)) and its interaction with level 1:
L0 = [[−λ, λ, 0], [0, a1, μ1], [μ2, 0, a2]] (3×3, local transitions within level 0)
F0 = [[0, 0], [λ, 0], [0, λ]] (3×2, forward: level 0 → level 1)
B0 = [[0, 0, 0], [0, μ2, 0]] (2×3, backward: level 1 → level 0)
The repeating blocks cover each level i ≥ 1 (states (i,1), (i,2)):
L = [[a1, μ1], [0, a2]] (local), F = λI (forward), B = [[0, 0], [μ2, 0]] (backward)
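The block structure can be checked mechanically. The following sketch (our numpy encoding of the transition structure of the M/E2/1 chain; function name is ours) builds the blocks and verifies that every row of Q sums to zero, as a generator matrix requires:

```python
import numpy as np

def m_e2_1_blocks(lam, mu1, mu2):
    """Block components of Q for the M/E2/1 chain, states ordered
    (0,0),(0,1),(0,2) | (1,1),(1,2) | (2,1),(2,2) | ..."""
    a1, a2 = -(lam + mu1), -(lam + mu2)
    L0 = np.array([[-lam, lam, 0.0],   # (0,0): arrival starts service stage 1
                   [0.0,  a1,  mu1],   # (0,1): stage 1 -> stage 2
                   [mu2,  0.0, a2]])   # (0,2): completion empties the system
    F0 = np.array([[0.0, 0.0],
                   [lam, 0.0],
                   [0.0, lam]])        # arrivals from level 0 to level 1
    B0 = np.array([[0.0, 0.0, 0.0],
                   [0.0, mu2, 0.0]])   # (1,2) -> (0,1): next customer starts stage 1
    L = np.array([[a1, mu1],
                  [0.0, a2]])          # within-level: stage 1 -> stage 2
    F = lam * np.eye(2)                # arrival preserves the service stage
    B = np.array([[0.0, 0.0],
                  [mu2, 0.0]])         # completion: down one level, restart at stage 1
    return L0, F0, B0, L, F, B

L0, F0, B0, L, F, B = m_e2_1_blocks(1.0, 3.0, 3.0)
# every row of Q sums to 0: boundary rows are [L0 F0], level-1 rows [B0 L F],
# and deeper rows [B L F]
print(np.hstack([L0, F0]).sum(axis=1))    # [0. 0. 0.]
print(np.hstack([B0, L, F]).sum(axis=1))  # [0. 0.]
```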
πQ = 0
Partition π by level: π = [π(0,0), π(0,1), π(0,2), π(1,1), π(1,2), …, π(i,1), π(i,2), …], with π0 = [π(0,0), π(0,1), π(0,2)] and πi = [π(i,1), π(i,2)] for i ≥ 1
Then πQ = 0 reads, block by block:
π0L0 + π1B0 = 0
π0F0 + π1L + π2B = 0
π1F + π2L + π3B = 0
…
Scope of Matrix Analytic Methods
- Numerical in nature, for chains that
  - are unbounded in at most one dimension
  - have a structure that repeats after some point (similar to what we obtained for M/E2/1)
- The solution is based on guessing that πi+1 = πiR, where πi is the state-probability vector for "level" i in the unbounded dimension, and R is a matrix that can be computed iteratively
- Convergence is usually fast
- Accurate except for very high (unbalanced) C² values
Matrix Balance Equations
Using the previous notation π0 = [π(0,0), π(0,1), π(0,2)] and πi = [π(i,1), π(i,2)], the matrix balance equations πQ = 0 are of the form
0 = π0L0 + π1B0
0 = π0F0 + π1L + π2B
0 = π1F + π2L + π3B
0 = π2F + π3L + π4B
…
Guessing a solution of the form πi+1 = π1Rⁱ, or equivalently πi+1 = πiR, the balance equations give
0 = π0L0 + π1B0
0 = π0F0 + π1L + π1RB
0 = π1F + π1RL + π1R²B
0 = π2F + π2RL + π2R²B
…
Hence 0 = F + RL + R²B, or R = −(R²B + F)L⁻¹
Solving for R is then based on the recursion Rn+1 = −(Rn²B + F)L⁻¹
Start with R = 0 (or some other guess) and stop when ||Rn+1 − Rn|| < ε, where usually ||M|| = max |mij|
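The recursion can be sketched in a few lines. This is a minimal implementation under the assumptions above (the function name is ours), applied to the repeating M/E2/1 blocks; we verify the fixed point by checking that F + RL + R²B is (numerically) zero and that R has spectral radius below 1, as required for the geometric tail to be summable:

```python
import numpy as np

def solve_R(B, L, F, eps=1e-12, max_iter=100_000):
    """Iterate R_{n+1} = -(R_n^2 B + F) L^{-1} from R_0 = 0
    until max |entry| of R_{n+1} - R_n falls below eps."""
    Linv = np.linalg.inv(L)
    R = np.zeros_like(L)
    for _ in range(max_iter):
        R_next = -(R @ R @ B + F) @ Linv
        if np.abs(R_next - R).max() < eps:
            return R_next
        R = R_next
    raise RuntimeError("R iteration did not converge")

# repeating blocks of the M/E2/1 chain with lam=1, mu1=mu2=3 (utilization 2/3)
lam, mu1, mu2 = 1.0, 3.0, 3.0
L = np.array([[-(lam + mu1), mu1], [0.0, -(lam + mu2)]])
F = lam * np.eye(2)
B = np.array([[0.0, 0.0], [mu2, 0.0]])

R = solve_R(B, L, F)
print(np.abs(F + R @ L + R @ R @ B).max())  # residual, ~0 at the fixed point
print(np.abs(np.linalg.eigvals(R)).max())   # spectral radius < 1 for a stable queue
```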
Finding π0 and π1
The first two balance equations give [π0, π1]M = 0, where

M = [ L0  F0     ]
    [ B0  L + RB ]

The normalization condition Σi πi·1 = 1 gives π0·1 + π1(I − R)⁻¹·1 = 1, i.e., [π0, π1]·n = 1, where n = [1, 1, 1, (I − R)⁻¹·1]ᵀ
Combining the two (replacing one of the redundant balance equations with the normalization condition) gives a system of 5 linear equations with a unique solution, yielding π0 and π1 so that we can obtain all remaining probabilities from πi+1 = πiR
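Putting the pieces together, here is a self-contained sketch for the M/E2/1 example (our block encoding and function names; we replace the first balance equation with the normalization condition). As a cross-check, for λ = 1 and μ1 = μ2 = 3 the utilization is ρ = λ·E[S] = 2/3, and M/G/1 theory says the empty-system probability P(0,0) must equal 1 − ρ = 1/3:

```python
import numpy as np

def solve_R(B, L, F, eps=1e-12, max_iter=100_000):
    """Fixed-point iteration R_{n+1} = -(R_n^2 B + F) L^{-1} from R_0 = 0."""
    Linv = np.linalg.inv(L)
    R = np.zeros_like(L)
    for _ in range(max_iter):
        R_next = -(R @ R @ B + F) @ Linv
        if np.abs(R_next - R).max() < eps:
            return R_next
        R = R_next
    raise RuntimeError("R iteration did not converge")

def m_e2_1_stationary(lam, mu1, mu2):
    """Boundary vectors pi0, pi1 (and R) for the M/E2/1 chain."""
    a1, a2 = -(lam + mu1), -(lam + mu2)
    L0 = np.array([[-lam, lam, 0.0], [0.0, a1, mu1], [mu2, 0.0, a2]])
    F0 = np.array([[0.0, 0.0], [lam, 0.0], [0.0, lam]])
    B0 = np.array([[0.0, 0.0, 0.0], [0.0, mu2, 0.0]])
    L = np.array([[a1, mu1], [0.0, a2]])
    F = lam * np.eye(2)
    B = np.array([[0.0, 0.0], [mu2, 0.0]])
    R = solve_R(B, L, F)
    # boundary system: [pi0, pi1] [[L0, F0], [B0, L + R B]] = 0
    M = np.zeros((5, 5))
    M[:3, :3], M[:3, 3:] = L0, F0
    M[3:, :3], M[3:, 3:] = B0, L + R @ B
    # normalization pi0.1 + pi1 (I - R)^{-1} 1 = 1 replaces the first equation
    v = np.linalg.solve(np.eye(2) - R, np.ones(2))  # (I - R)^{-1} 1
    M[:, 0] = np.concatenate([np.ones(3), v])
    rhs = np.array([1.0, 0.0, 0.0, 0.0, 0.0])
    x = np.linalg.solve(M.T, rhs)  # solves x M = rhs
    return x[:3], x[3:], R         # pi0 = [P00, P01, P02], pi1 = [P11, P12]

pi0, pi1, R = m_e2_1_stationary(1.0, 3.0, 3.0)
print(pi0[0])  # P(0,0), should be about 1/3 = 1 - rho
```

Deeper levels follow as pi1 @ np.linalg.matrix_power(R, i - 1) for level i, which is how performance measures such as the mean queue length are then accumulated.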