Irreducible Markov Chain

Control of Networks: Mathematical Background

Jean Walrand , Pravin Varaiya , in High-Performance Communication Networks (Second Edition), 2000

Theorem 9.1.1

An irreducible Markov chain has at most one invariant distribution. It certainly has one if its state space is finite. The Markov chain is said to be positive recurrent if it has an invariant distribution.

We noted earlier that the leftmost Markov chain of Figure 9.1 has a unique invariant distribution that is given by (9.7). The theorem tells us that the Markov chain in the center of Figure 9.1 also has a unique invariant distribution.

The invariant distribution, when it exists, measures the fraction of time that the Markov chain spends in the various states. This relationship is expressed in the following theorem.
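Since Figure 9.1 and Eq. (9.7) are not reproduced in this excerpt, here is a minimal Python sketch with an assumed 3-state irreducible matrix P (illustrative only, not the chain of the figure). It estimates the fraction of time spent in each state by simulation and compares it with the invariant distribution obtained from πP = π:

```python
import numpy as np

# Illustrative irreducible transition matrix (not the chain of Figure 9.1).
P = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.6, 0.2],
              [0.3, 0.3, 0.4]])

# Invariant distribution: normalized left eigenvector of P for eigenvalue 1.
eigvals, eigvecs = np.linalg.eig(P.T)
pi = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
pi /= pi.sum()

# Long-run fraction of time spent in each state, by simulation.
rng = np.random.default_rng(0)
n_steps = 200_000
counts = np.zeros(3)
state = 0
for _ in range(n_steps):
    state = rng.choice(3, p=P[state])
    counts[state] += 1

print("invariant distribution:", np.round(pi, 4))
print("empirical fractions:   ", np.round(counts / n_steps, 4))
```

The two printed vectors agree to within simulation noise, which is exactly the relationship the theorem referred to above expresses.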


URL:

https://www.sciencedirect.com/science/article/pii/B9780080508030500149

Principles and Methods for Data Science

David A. Spade , in Handbook of Statistics, 2020

3.1 Discrete-state Markov chains

Let $S = \{x_1, x_2, \dots\}$ be a discrete state space. Then transition probabilities are of the form $P_{ij}(t) = P[X_t = j \mid X_{t-1} = i]$. If $(X_t)_{t \ge 0}$ is to converge to a stationary distribution, $(X_t)_{t \ge 0}$ has to satisfy three conditions. First, the chain must be irreducible, which means that any state $j$ can be reached from any state $i$ in a finite number of steps. Second, the chain must be positive recurrent, meaning that, on average, the chain starting in state $i$ returns to state $i$ in a finite number of steps, for all $i \in S$. Third, the chain must be aperiodic, which means that it does not oscillate between states in a regular cycle. These terms are formalized below.

Definition 10

$(X_t)_{t \ge 0}$ is irreducible if for all $i, j$, there exists an integer $t > 0$ such that $P_{ij}(t) > 0$.

Definition 11

An irreducible Markov chain $(X_t)_{t \ge 0}$ is recurrent if the first return time $\tau_{ii} = \min\{t > 0 : X_t = i \mid X_0 = i\}$ to state $i$ has the property that $P(\tau_{ii} < \infty) = 1$ for all $i$.

Definition 12

An irreducible, recurrent Markov chain is positive recurrent if $E[\tau_{ii}] < \infty$ for all $i$.

Definition 13

A Markov chain $(X_t)_{t \ge 0}$ has stationary distribution $\pi(\cdot)$ if for all $j$ and for all $t \ge 0$,

$$\sum_i \pi(i) P_{ij}(t) = \pi(j).$$

For an irreducible chain, the existence of a stationary distribution is equivalent to the chain being positive recurrent.

Definition 14

An irreducible Markov chain is aperiodic if for all $i$,

$$\gcd\{t > 0 : P_{ii}(t) > 0\} = 1.$$

Definition 15

$(X_t)_{t \ge 0}$ is reversible if it is positive recurrent with stationary distribution $\pi(\cdot)$ and, for all $i, j$, $\pi(i) P_{ij} = \pi(j) P_{ji}$.

The discrete-state Markov chain $(X_t)_{t \ge 0}$ has a unique stationary distribution if it is irreducible, aperiodic, and reversible. The next example illustrates some of these properties.

Example 9

Consider the Markov chain $(X_t)_{t \ge 0}$ with state space $S = \{0, 1, 2\}$. Let the transition probability matrix be given by

$$P = \begin{pmatrix} 1/8 & 3/8 & 1/2 \\ 1/2 & 1/8 & 3/8 \\ 3/8 & 1/2 & 1/8 \end{pmatrix},$$

so that $P_{ij} = P(X_t = j \mid X_{t-1} = i)$. Since $P_{00} = P_{11} = P_{22} = 1/8 > 0$, the chain is aperiodic, and since all transition probabilities are positive, the chain is clearly irreducible. Now we try to find a stationary distribution. To do this, we solve the following system of equations:

$$\begin{aligned} \pi(0) &= \pi(0)P_{00} + \pi(1)P_{10} + \pi(2)P_{20} = \tfrac{1}{8}\pi(0) + \tfrac{1}{2}\pi(1) + \tfrac{3}{8}\pi(2) \\ \pi(1) &= \pi(0)P_{01} + \pi(1)P_{11} + \pi(2)P_{21} = \tfrac{3}{8}\pi(0) + \tfrac{1}{8}\pi(1) + \tfrac{1}{2}\pi(2) \\ \pi(2) &= \pi(0)P_{02} + \pi(1)P_{12} + \pi(2)P_{22} = \tfrac{1}{2}\pi(0) + \tfrac{3}{8}\pi(1) + \tfrac{1}{8}\pi(2). \end{aligned}$$

Solving this system gives

$$\pi(0) = \pi(1) = \pi(2) = \tfrac{1}{3}.$$

Note that $P$ is doubly stochastic (each column also sums to 1), which is why the uniform distribution is stationary. $P$ is not symmetric, however ($P_{01} = 3/8$ while $P_{10} = 1/2$), so $\pi(i)P_{ij} \neq \pi(j)P_{ji}$ in general and the chain is not reversible. Uniqueness of the stationary distribution follows instead from irreducibility together with positive recurrence, which is automatic for a finite irreducible chain; conversely, the existence of the stationary distribution confirms that the chain is positive recurrent.
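As a quick numerical check of Example 9 (a sketch, not part of the original text), one can solve the stationary equations directly and test the detailed-balance condition:

```python
import numpy as np

P = np.array([[1/8, 3/8, 1/2],
              [1/2, 1/8, 3/8],
              [3/8, 1/2, 1/8]])

# Solve pi P = pi together with the normalization sum(pi) = 1.
A = np.vstack([P.T - np.eye(3), np.ones(3)])
b = np.array([0.0, 0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)
print(np.round(pi, 6))  # [1/3, 1/3, 1/3]

# Detailed balance pi(i) P_ij == pi(j) P_ji would require a symmetric P here;
# this P is doubly stochastic but not symmetric, so the test fails.
flows = pi[:, None] * P
print(np.allclose(flows, flows.T))  # False: the chain is not reversible
```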


URL:

https://www.sciencedirect.com/science/article/pii/S0169716119300379

COMPENDIUM OF THE FOUNDATIONS OF CLASSICAL STATISTICAL PHYSICS

Jos Uffink , in Philosophy of Physics, 2007

Ad (iii).

If there is a unique stationary distribution P*, will T t P converge to P*, for every choice of P? Again, the answer is not necessarily affirmative. (Even if (201) is valid!) For example, there are homogeneous and irreducible Markov chains for which P t can be divided into two pieces: P t = Q t + R t with the following properties [Mackey, 1992, p. 71]:

1.

Q t is a term with ||Q t || → 0. This is a transient term.

2.

The remainder $R_t$ is periodic, i.e. after some finite time $\tau$ the evolution repeats itself: $R_{t+\tau} = R_t$.

These processes are called asymptotically periodic. They may very well occur in conjunction with a unique stationary distribution P*, and show strict monotonic increase of entropy, but still not converge to P*. In this case, the monotonic increase of relative entropy H (P t , P*) is entirely due to the transient term. For the periodic piece R t , the transition probabilities are permutation matrices, which, after τ repetitions, return to the unit operator.

Besides, if we arrange that P* is uniform, we can say even more in this example: the various forms $R_t$ attained during the cycle of permutations with period $\tau$ all have the same value of the relative entropy $H(R_t, P^*)$, but this entropy is strictly lower than $H(P^*, P^*) = 0$. In fact, $P^*$ is the average of the $R_t$'s, i.e. $P^* = \frac{1}{\tau}\sum_{t=1}^{\tau} R_t$, in correspondence with (201).
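As a concrete illustration (a sketch added here, not from Mackey's text), take the simplest purely periodic case: a cyclic permutation chain on three states. It has the unique uniform stationary distribution P*, the relative entropy H(P_t, P*) is constant along the cycle and strictly below H(P*, P*) = 0 for any non-uniform start, and the cycle average of the distributions recovers P*:

```python
import numpy as np

# Pure permutation chain on 3 states: a cyclic shift with period tau = 3.
T = np.array([[0., 1., 0.],
              [0., 0., 1.],
              [1., 0., 0.]])
p_star = np.ones(3) / 3          # unique stationary distribution (uniform)

def rel_entropy(p, q):
    """H(p, q) = -sum p log(p/q); equals 0 iff p == q, negative otherwise."""
    mask = p > 0
    return -np.sum(p[mask] * np.log(p[mask] / q[mask]))

p = np.array([0.7, 0.2, 0.1])    # an arbitrary non-uniform initial distribution
cycle = []
for t in range(3):
    cycle.append(p)
    print(f"t={t}: P_t={p}, H(P_t, P*)={rel_entropy(p, p_star):.4f}")
    p = p @ T                     # one step of the chain

# The distributions never converge to P*, but their cycle average is P*.
print("cycle average:", np.mean(cycle, axis=0))
```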

Further technical assumptions can be introduced to block examples of this kind, and thus enforce a strict convergence towards the unique stationary distribution, e.g. by imposing a condition of 'exactness' [Mackey, 1992]. However, it would take us too far afield to discuss this in detail.

In conclusion, it seems that a weak aspect of "irreversible behaviour", i.e. the monotonic non-decrease of relative entropy, is a general feature of all homogeneous Markov processes (and indeed of all stochastic processes), and non-trivially so when the transition probabilities are non-invertible. Stronger versions of that behaviour, in the sense of affirmative answers to questions (i), (ii) and (iii), can be obtained too, but at the price of additional technical assumptions.


URL:

https://www.sciencedirect.com/science/article/pii/B9780444515605500129

Mutation-Selection Algorithm: a Large Deviation Approach

Paul Albuquerque , Christian Mazza , in Foundations of Genetic Algorithms 6, 2001

2 MUTATION–SELECTION ALGORITHM

We now describe a two-operator mutation-selection algorithm on the search space $\Omega^p$ of populations consisting of $p$ individuals, where $\Omega = \{0,1\}^l$.

Mutation acts as in classical GAs. Each bit of an individual in $\Omega$ independently flips with probability $0 < \tau < 1$. At the population level, all individuals mutate independently of each other. Mutation is fitness-independent and operates as a blind search over $\Omega^p$.

We consider a modified version of the selection procedure of classical GAs. We begin by adding some noise $g(\xi,\cdot)$ to $\log(F(\xi))$ for technical reasons: it helps lift the degeneracy over the set of global maxima of $F$. The real-valued random variables $g(\xi,\cdot)$, indexed by $\xi \in \Omega$, are defined on a sample space $I$ (e.g. a subinterval of $\mathbb{R}$). They are independent and identically distributed (i.i.d.) with mean zero and satisfy

(1) $\quad |g(\xi,\omega)| < \tfrac{1}{2} \min\{|\log(F(\xi_1)/F(\xi_2))| : F(\xi_1) \neq F(\xi_2),\ \xi_1, \xi_2 \in \Omega\} \quad \text{for all } \omega \in I.$

Hence the random variables $f(\xi,\cdot) = \log(F(\xi)) + g(\xi,\cdot)$ have mean $\log(F(\xi))$, and by assumption (1) the function $f(\cdot,\omega)$ has the same set of optima as $F$, but a unique global maximum for almost every sample point $\omega \in I$.

Fix a point $\omega$ in the sample space $I$. Given a population $x = (x_1, \dots, x_p) \in \Omega^p$ of size $p$, individual $x_i$ is selected under a Gibbs distribution (Boltzmann selection) with probability

(2) $\quad \dfrac{\exp(\beta f(x_i, \omega))}{\sum_{j=1}^{p} \exp(\beta f(x_j, \omega))}$

from population $x$. The parameter $\beta \ge 0$ corresponds to an inverse temperature, as in simulated annealing, and controls the selective pressure. Note that if we remove the noise and set $\beta = 1$, the above selection procedure reduces to classical GA fitness-proportional selection.

The algorithm is run only after the fitness function has been perturbed by the noise component. For any given sample point $\omega$, and with $\tau, \beta$ fixed, successively applying mutation and selection yields an irreducible Markov chain on the search space $\Omega^p$. Denote by $\mu^{\omega}_{\tau,\beta}$ its stationary probability distribution and by $\mu_{\tau,\beta}$ the probability distribution obtained by averaging the $\mu^{\omega}_{\tau,\beta}$'s over the sample space.
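A minimal sketch of one sweep of this chain (mutation followed by Boltzmann selection) is given below. The fitness F, string length l, population size p, and noise amplitude are placeholders chosen for illustration, not values from the paper; in particular, the amplitude eps must be kept below the bound (1) for the chosen F, which holds here.

```python
import numpy as np

rng = np.random.default_rng(1)
l, p = 8, 10                      # string length and population size (placeholders)
tau, beta = 0.05, 2.0             # mutation probability and inverse temperature

def F(x):                         # placeholder fitness: ONEMAX + 1, so F > 0
    return 1.0 + x.sum()

noise = {}                        # g(xi, .): one mean-zero draw per genotype
def f(x, eps=1e-4):               # eps is assumed small enough to satisfy (1)
    key = x.tobytes()
    if key not in noise:
        noise[key] = rng.uniform(-eps, eps)
    return np.log(F(x)) + noise[key]

def mutate(pop):                  # each bit flips independently with prob. tau
    flips = rng.random(pop.shape) < tau
    return np.where(flips, 1 - pop, pop)

def select(pop):                  # resample the population with Gibbs weights, Eq. (2)
    w = np.exp(beta * np.array([f(x) for x in pop]))
    idx = rng.choice(len(pop), size=len(pop), p=w / w.sum())
    return pop[idx]

pop = rng.integers(0, 2, size=(p, l))
for _ in range(200):              # iterate mutation followed by selection
    pop = select(mutate(pop))
print(pop)                        # most rows end up near the all-ones string
```

With β fixed and τ small, long runs spend most of their time near uniform populations, in line with Theorem 1 below.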

Before stating results, we introduce some notations and terminology. We define the set of uniform populations

(3) $\quad \Omega_= = \{(x_1, \dots, x_p) \in \Omega^p : x_1 = \cdots = x_p\}$

and the set of populations consisting only of maximal fitness individuals

(4) $\quad F_{\max} = \{(x_1, \dots, x_p) \in \Omega^p : F(x_i) = \max_{\xi \in \Omega} F(\xi) \text{ for all } i\}.$

We also recall that the support of a probability distribution on Ω p consists of all populations having positive probability.

Each theorem stated below corresponds to a specific way of running the mutation-selection algorithm.

Theorem 1 Let $\beta > 0$ be fixed. Then, as $\tau \to 0$, the probability distribution $\mu_{\tau,\beta}$ converges to a probability distribution $\mu_{0,\beta}$ with $\operatorname{support}(\mu_{0,\beta}) = \Omega_=$. Moreover, the limit probability distribution $\lim_{\beta \to \infty} \mu_{0,\beta}$ concentrates on $\Omega_= \cap F_{\max}$.

The first assertion in Theorem 1 was already obtained by Davis (Davis and Principe, 1991) and the second by Suzuki (Suzuki, 1997), in both cases by directly analyzing the transition probabilities of the Markov chain. However, their algorithm did not include the added noise component. We give a different proof in the next section.

Theorem 1 implies that, for $\beta, \tau$ both fixed and $\tau \approx 0$, the mutation-selection algorithm concentrates on a neighborhood of $\Omega_=$ in $\Omega^p$. Hence a run has a positive probability of ending on any population in $\Omega_=$. The hope remains that the stationary probability distribution has a peak on $\Omega_= \cap F_{\max}$. This is actually the case, since the probability distribution $\lim_{\beta \to \infty} \mu_{0,\beta}$ concentrates on $\Omega_= \cap F_{\max}$. The latter statement can be obtained as a consequence of Theorem 3.

Notice that GAs are usually run with $\tau \approx 0$. We believe that the crossover operator improves the convergence speed, but probably not the shape of the stationary probability distribution.

Theorem 2 Let $0 < \tau < 1$ be fixed. Then, as $\beta \to \infty$, the probability distribution $\mu_{\tau,\beta}$ converges to a probability distribution $\mu_{\tau,\infty}$ with $\operatorname{support}(\mu_{\tau,\infty}) = \Omega_=$. Moreover, the limit probability distribution $\lim_{\tau \to 0} \mu_{\tau,\infty}$ concentrates on $\Omega_=$.

Theorem 2 shows that, in terms of the support of the stationary probability distribution, increasing the selective pressure is equivalent to diminishing the mutation probability. However, $\lim_{\tau \to 0} \mu_{\tau,\infty}$ remains concentrated on the whole of $\Omega_=$, not just on $\Omega_= \cap F_{\max}$.

Consequently, it is a natural idea to link the mutation probability τ to the inverse temperature β. The algorithm becomes a simulated annealing process: the intensity of mutation is decreased, while selection becomes stronger. This actually ensures convergence of the algorithm to an optimal solution.

Theorem 3 Let $\tau = \tau(\epsilon, \kappa, \beta) = \epsilon \exp(-\kappa\beta)$ with $0 < \epsilon < 1$ and $\kappa > 0$. Then, for $\kappa$ large enough, the probability distribution $\mu_{\tau,\beta}$ converges, as $\beta \to \infty$, to the uniform probability distribution over $\Omega_= \cap F_{\max}$. Asymptotically, the algorithm behaves like simulated annealing on $\Omega_=$ with energy function $-p \log F$.

Notice that the initial mutation probability ϵ does not influence the convergence.

The first assertion in Theorem 3 was obtained by Cerf (Cerf, 1996a; Cerf, 1996b) in a much more general setting, but again with a mutation-selection algorithm that did not include the added noise component. However, we hope that our proof, presented below in the simple case of binary strings, is more intuitive and easier to grasp. Perhaps it will illustrate the importance of Cerf's work and the richness of the Freidlin-Wentzell theory.


URL:

https://www.sciencedirect.com/science/article/pii/B9781558607347500950

Stochastic Processes

J. MEDHI , in Stochastic Models in Queueing Theory (Second Edition), 2003

1.2.2.5 Transience and Recurrence

Define

(1.2.14) $\quad \{f_{ij}^{(n)}\},\ i, j = 1, 2, \dots: \quad f_{ij}^{(0)} = 0, \quad f_{ij}^{(1)} = p_{ij}, \quad \text{and} \quad f_{ij}^{(k+1)} = \sum_{r \neq j} p_{ir} f_{rj}^{(k)}, \quad k \ge 1.$

The quantity $f_{ij}^{(k)}$ is the probability of a transition from state $i$ to state $j$ in $k$ steps without visiting state $j$ in the meantime. (It is called a taboo probability, $j$ being the taboo state.) Here $\{f_{ij}^{(k)}\}$ gives the distribution of the first passage time from state $i$ to state $j$. We can write

$$f_{ij}^{(n)} = \Pr\{X_n = j,\ X_r \neq j,\ r = 1, 2, \dots, n-1 \mid X_0 = i\}.$$

The relation (1.2.14) can also be written as

(1.2.15) $\quad p_{ij}^{(n)} = \sum_{r=0}^{n} f_{ij}^{(r)} p_{jj}^{(n-r)} = \sum_{r=0}^{n} f_{ij}^{(n-r)} p_{jj}^{(r)}, \quad n \ge 1.$

The relations (1.2.14) and (1.2.15) are known as first entrance formulas.
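The recursion (1.2.14) and the first entrance formula (1.2.15) are easy to verify numerically. The following sketch (with an arbitrary illustrative 3-state TPM, not from the text) computes the taboo probabilities and checks that their convolution with the return probabilities reproduces the n-step transition probabilities:

```python
import numpy as np

P = np.array([[0.2, 0.5, 0.3],
              [0.4, 0.1, 0.5],
              [0.6, 0.2, 0.2]])      # illustrative TPM
N, m = 12, 3

# f[k][i, j] = P(first visit to j occurs at step k | X_0 = i), via (1.2.14)
f = np.zeros((N + 1, m, m))
f[1] = P
for k in range(1, N):
    for j in range(m):
        mask = np.arange(m) != j     # sum over r != j (j is the taboo state)
        f[k + 1][:, j] = P[:, mask] @ f[k][mask, j]

# n-step transition probabilities p_ij^(n)
pn = [np.eye(m)]
for n in range(N):
    pn.append(pn[-1] @ P)

# First entrance formula (1.2.15): p_ij^(n) = sum_r f_ij^(r) p_jj^(n-r)
n, i, j = 7, 0, 2
lhs = pn[n][i, j]
rhs = sum(f[r][i, j] * pn[n - r][j, j] for r in range(1, n + 1))
print(lhs, rhs)                      # the two values coincide
```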

Let

$$P_{ij}(s) = \sum_n p_{ij}^{(n)} s^n, \qquad F_{ij}(s) = \sum_n f_{ij}^{(n)} s^n, \qquad |s| < 1.$$

Then from the convolution structure,

(1.2.16) $\quad P_{ij}(s) = P_{jj}(s) F_{ij}(s),\ j \neq i; \qquad P_{ii}(s) - 1 = P_{ii}(s) F_{ii}(s).$

Definition 1.2

A state $i$ is said to be persistent if $F_{ii} = F_{ii}(1-0) = 1$ and is said to be transient if $F_{ii}(1-0) < 1$.

A persistent state is null or nonnull according as $\mu_{ii} = F'_{ii}(1) = \infty$ or $< \infty$, respectively.

Equivalent criteria of persistence and transience are as follows.

A state $i$ is persistent iff (if and only if) $\sum_n p_{ii}^{(n)} = \infty$ and is transient iff $\sum_n p_{ii}^{(n)} < \infty$.

The relationship between these two types of classification of states and chains can be given as follows.

An inessential state is transient and a persistent state is essential. In the case of a finite chain, i is transient iff it is inessential; otherwise it is nonnull persistent.

All the states of an irreducible chain, whether finite or denumerable, are of the same type: all transient, all null persistent, or all nonnull persistent.

A finite Markov chain contains at least one persistent state. Further, a finite irreducible Markov chain is nonnull persistent. The ergodic theorem for a Markov chain with a denumerable infinite number of states is stated below.

Theorem 1.3

General Ergodic Theorem

Let P be the TPM of an irreducible aperiodic (i.e., primitive) Markov chain with a countable state space S (which may have a finite or a denumerably infinite number of states). If the Markov chain is transient or null persistent, then for each i, j ∈ S,

(1.2.17a) $\quad \lim_{n \to \infty} p_{ij}^{(n)} = 0.$

If the chain is nonnull persistent, then for each i, j ∈ S,

(1.2.17b) $\quad \lim_{n \to \infty} p_{ij}^{(n)} = \nu_j$

exists and is independent of $i$. The probability vector $V = (\nu_1, \nu_2, \dots)$ is the unique invariant measure of $P$, that is,

(1.2.18a) $\quad VP = V, \qquad V\mathbf{e} = 1;$

and further if μjj is the mean recurrence time of state j, then

(1.2.18b) $\quad \nu_j = (\mu_{jj})^{-1}.$

The result is general and holds for a chain with a countable state space S. In case the chain is finite, irreducibility ensures nonnull persistence, so that irreducibility and aperiodicity (i.e., primitivity) constitute a set of sufficient conditions for ergodicity of a finite chain. The sufficient conditions for ergodicity ($\lim_{n\to\infty} p_{ij}^{(n)} = \nu_j$) of a chain with a denumerably infinite number of states involve, besides irreducibility and aperiodicity, nonnull persistence of the chain. For such a chain, the number of equations given by (1.2.18a) is infinite, and it is sometimes more convenient to find V in terms of the generating function of $\{\nu_j\}$ than to attempt to solve Eqn. (1.2.18a) as such. We shall consider two such Markov chains that arise in queueing theory. See the Notes below.

Example 1.3

Consider a Markov chain with state space S = {0,1,2,…} having a denumerable number of states and having TPM

(1.2.19) $\quad P = \begin{bmatrix} p_0 & p_1 & p_2 & p_3 & \cdots \\ p_0 & p_1 & p_2 & p_3 & \cdots \\ 0 & p_0 & p_1 & p_2 & \cdots \\ 0 & 0 & p_0 & p_1 & \cdots \\ \vdots & & & & \ddots \end{bmatrix}$

where $\sum_k p_k = 1$. Let

$$P(s) = \sum_k p_k s^k \quad \text{and} \quad V(s) = \sum_k \nu_k s^k, \qquad |s| < 1,$$

be the probability-generating functions (PGFs) of $\{p_k\}$ and $\{\nu_k\}$, respectively. Clearly, the chain is irreducible and aperiodic; since it is a denumerable chain, we need to consider transience and persistence of the chain to study its ergodic property.

It can be shown that the states of the chain (which are all of the same type because of the irreducibility of the chain) are transient, persistent null, or persistent nonnull according as

$$P'(1) > 1, \qquad P'(1) = 1, \qquad P'(1) < 1,$$

respectively (see Prabhu, 1965). Assume that P′(1) < 1, so that the states are persistent nonnull; then from (1.2.18a), we get

(1.2.20) $\quad \nu_k = p_k \nu_0 + p_k \nu_1 + p_{k-1} \nu_2 + \cdots + p_0 \nu_{k+1}, \quad k \ge 0.$

Multiplying both sides of (1.2.20) by sk and adding over k = 0,1,2,…, we get

$$V(s) = \nu_0 P(s) + \nu_1 P(s) + \nu_2 s P(s) + \cdots + \nu_{k+1} s^k P(s) + \cdots = P(s)\left[\nu_0 + (V(s) - \nu_0)/s\right];$$

whence

$$V(s) = \frac{\nu_0 (1 - s) P(s)}{P(s) - s}.$$

Since V (1) = 1, we have

$$1 = \lim_{s \to 1} V(s) = \nu_0 \left[\lim_{s \to 1} \frac{(1-s) P(s)}{P(s) - s}\right] = \nu_0 \left[\frac{1}{1 - P'(1)}\right].$$

Thus,

(1.2.21) $\quad V(s) = \dfrac{(1 - P'(1))(1 - s) P(s)}{P(s) - s}.$
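As a numerical sanity check of (1.2.21) (a sketch with an assumed distribution for the p_k, not part of the text), take p_k Poisson with mean P′(1) = 0.6 < 1, solve VP = V on a truncated matrix, and compare with the closed form; in particular ν_0 should equal 1 − P′(1):

```python
import numpy as np
from scipy.stats import poisson

lam, m = 0.6, 60                      # P'(1) = lam < 1; truncation size m
pk = poisson.pmf(np.arange(m), lam)

# Truncated TPM of (1.2.19): rows 0 and 1 are {p_k}; row i >= 1 shifts right.
P = np.zeros((m, m))
P[0] = pk
for i in range(1, m):
    P[i, i - 1:] = pk[: m - i + 1]
P /= P.sum(axis=1, keepdims=True)     # renormalize away the truncation error

v = np.ones(m) / m                    # stationary vector by power iteration
for _ in range(5000):
    v = v @ P
print("nu_0 =", v[0], "  1 - P'(1) =", 1 - lam)

# Compare V(s) from the vector with the closed form (1.2.21) at s = 0.5.
s = 0.5
Vs_numeric = np.sum(v * s ** np.arange(m))
Ps = np.exp(lam * (s - 1))            # PGF of the Poisson distribution
Vs_formula = (1 - lam) * (1 - s) * Ps / (Ps - s)
print(Vs_numeric, Vs_formula)         # agree up to truncation error
```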

Example 1.4

Consider a Markov chain with state space S = {0,1,2,…} and having TPM

(1.2.22) $\quad P = \begin{bmatrix} h_0 & g_0 & 0 & 0 & 0 & \cdots \\ h_1 & g_1 & g_0 & 0 & 0 & \cdots \\ h_2 & g_2 & g_1 & g_0 & 0 & \cdots \\ \vdots & & & & & \ddots \end{bmatrix}$

where $h_i = g_{i+1} + g_{i+2} + \cdots$, $i \ge 0$, $g_i > 0$, and $\sum_{i=0}^{\infty} g_i = 1$. Here $p_{i0} = h_i$, $i \ge 0$, and

$$p_{ij} = g_{i+1-j}, \quad 1 \le j \le i+1,\ i \ge 0; \qquad p_{ij} = 0, \quad j > i+1.$$

The chain is irreducible and aperiodic. It can be shown that it is persistent nonnull when $\alpha = \sum_j j g_j > 1$. Then the chain is ergodic, and the limits $\nu_j = \lim_{n \to \infty} p_{ij}^{(n)}$ exist and are given as a solution of (1.2.18a); these lead to

(1.2.23a) $\quad \nu_0 = \sum_r \nu_r h_r$

(1.2.23b) $\quad \nu_j = \sum_r \nu_{r+j-1} g_r, \quad j \ge 1$

(1.2.23c) $\quad \sum_{j=0}^{\infty} \nu_j = 1.$

Let $G(s) = \sum_r g_r s^r$ be the PGF of $\{g_r\}$. Denote the displacement operator by $E$, so that

$$E^r(\nu_k) = \nu_{k+r}, \quad r = 0, 1, 2, \dots$$

Then we can write (1.2.23b) in symbols as

(1.2.24) $\quad E(\nu_{j-1}) = \nu_j = \sum_r g_r E^r(\nu_{j-1}), \quad j \ge 1, \quad \text{or} \quad \Big\{E - \sum_r g_r E^r\Big\}\nu_j = 0, \quad j \ge 0, \quad \text{or} \quad \{E - G(E)\}\nu_j = 0, \quad j \ge 0.$

The characteristic equation of the above difference equation is given by

(1.2.25) $\quad r(z) \equiv z - G(z) = 0.$

It can be shown that when $\alpha = G'(1) > 1$, there is exactly one real root of $r(z) = 0$ between 0 and 1. Denote this root by $r_0$ and the other roots by $r_1, r_2, \dots$, with $|r_i| > 1$, $i \ge 1$. The solution of (1.2.24) can thus be put as

$$\nu_j = c_0 r_0^j + \sum_{i \ge 1} c_i r_i^j, \quad j \ge 0,$$

where the c's are constants. Since Σ νj = 1,

$$c_i \equiv 0 \text{ for } i \ge 1, \qquad \nu_j = c_0 r_0^j, \ j \ge 0, \qquad \text{and} \qquad c_0 = 1 - r_0,$$

so that

(1.2.26) $\quad \nu_j = (1 - r_0) r_0^j, \quad j \ge 0,$

r 0 being the root lying between 0 and 1 of (1.2.25) (provided α = G′(1) > 1). The distribution is geometric.
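The geometric form (1.2.26) can likewise be checked numerically (again a sketch with an assumed distribution for the g_r): take g_r Poisson with mean α = G′(1) = 1.5 > 1, find the root r_0 of z − G(z) = 0 in (0, 1), and compare (1 − r_0)r_0^j with the stationary vector of a truncated version of (1.2.22):

```python
import numpy as np
from scipy.optimize import brentq
from scipy.stats import poisson

alpha, m = 1.5, 80                     # G'(1) = alpha > 1; truncation size m
g = poisson.pmf(np.arange(m), alpha)
G = lambda z: np.exp(alpha * (z - 1))  # PGF of the Poisson distribution

# Unique root of r(z) = z - G(z) in (0, 1), cf. (1.2.25).
r0 = brentq(lambda z: z - G(z), 1e-9, 1 - 1e-9)

# Truncated TPM of (1.2.22): p_ij = g_{i+1-j} for 1 <= j <= i+1, p_i0 = h_i.
P = np.zeros((m, m))
for i in range(m):
    for j in range(1, min(i + 2, m)):
        P[i, j] = g[i + 1 - j]
    P[i, 0] = 1 - P[i, 1:].sum()       # h_i, absorbing the tail mass

v = np.ones(m) / m                     # stationary vector by power iteration
for _ in range(20000):
    v = v @ P

j = np.arange(6)
print("numeric:  ", np.round(v[:6], 5))
print("geometric:", np.round((1 - r0) * r0 ** j, 5))
```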

Notes:

(1)

The equation VP = V is quite well known in matrix theory. It follows from the well-known Perron-Frobenius theorem that there exists a solution $V = (\nu_1, \nu_2, \dots)$ of the matrix equation $VP = V$ subject to the constraints $\nu_i \ge 0$, $\sum_i \nu_i = 1$.

(2)

When the order of P is not large, the equations can be solved fairly easily to get $V = (\nu_1, \nu_2, \dots)$. When the order of P is large (or infinite), the number of equations is also large (or infinite) and solving them directly becomes troublesome. In Example 1.3 we considered such a chain and obtained the solution in terms of the generating function $V(s) = \sum_j \nu_j s^j$. This method may not always be applicable.

See also Remarks (4) in Section 1.3.5.


URL:

https://www.sciencedirect.com/science/article/pii/B9780124874626500011

Dependable and Secure Systems Engineering

Kishor Trivedi , ... Fumio Machida , in Advances in Computers, 2012

3.1.1 Two-Board System

Many techniques have been proposed to capture multistate system availability. In Ref. [48], we used three analytic model types (CTMC, SRN, and FT) and compared the results among them. For the comparative study, we adopted the example of a two-board system shown in Fig. 6. The system consists of two boards (B1 and B2), each of which has a processor (P1 or P2) and a memory (M1 or M2). The state of each board is (1) both P and M are down, (2) P is working properly but M is down, (3) M is working properly but P is down, or (4) both P and M are functional. We assumed that the times to failure of the processor and the memory are exponentially distributed with rates λ p and λ m, respectively. Common cause failure, in which both the processor and the memory on the same board fail together, is also taken into account by assuming an exponential distribution with rate λ mp.

Fig. 6. Two-board system example.

Figure 8 presents the CTMC reliability model of the two-board system. The states of the CTMC are represented by a binary vector showing the states of P1, M1, P2, and M2, where 1 represents the up state of the device and 0 represents its down state. Figures 7 and 9 depict the SHARPE input file for the CTMC. First of all, any character after "*" (asterisk) is considered to be a comment and is ignored during SHARPE execution. On line 1, format 8 sets the number of digits printed after the decimal point in results. On lines 4 through 8, the variables used as parameters are given values. The failure rates of the processor and the memory (λ p and λ m) are set to 1/1000 and 1/2000 failures per hour, respectively. The common cause failure rate λ mp is set to 1/3000 failures per hour, i.e., the mean time to common cause failure, 1/λ mp, is 3000 h. When a group of parameters is given values, the block must start with the keyword bind and finish with the keyword end.

Fig. 7. SHARPE input for the CTMC example.

Fig. 8. CTMC for a two-board system.

Fig. 9. SHARPE input for the CTMC example (continuation).

The model specification begins with a model type and a name (see line 10). In this case, the model type is markov, which denotes a Markov chain (CTMC), and the name is PM. SHARPE allows three kinds of Markov chains: irreducible, acyclic, and PH type. Lines 10 through 50 define the states and state transitions of the Markov chain. From lines 50 through 68, we define the reward configuration, where a reward rate is assigned to each state of the CTMC. Note that the keyword reward (see line 51 in Fig. 9) denotes that in the next group of lines, SHARPE will assign reward rates to the model states. For the two-board system, we adopted the CTMC model to compute the expected reward rate at time t for the case that at least one processor and both of the memories are operational. For that, we assigned the reward rate 1 to the UP states ((1,1,1,1), (1,1,0,1), and (0,1,1,1)) and 0 to the other states (down states). From lines 70 through 86, the initial state probabilities are specified; these denote the probability that the chain starts in a given state. On lines 88 through 91, we define a function (func) to compute the expected reward rate at t = 100 and t = 200. The keyword exrt is a built-in function which gives the expected reward rate at time t; it takes as arguments a variable t and a Markov model name. The keyword expr says to evaluate an expression. Finally, line 93 contains the keyword end, which marks the end of the input file. The outputs for this example are shown in Fig. 10.
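Since the SHARPE input appears only in the figures, the following Python sketch reproduces the same computation outside SHARPE (this is not SHARPE syntax, and enabling the common cause transition only when both devices are up is a modeling assumption here): build the 16-state generator from the three rates and compute the expected reward rate at t = 100 and t = 200 with reward 1 in the three UP states listed above.

```python
import numpy as np
from itertools import product
from scipy.linalg import expm

lam_p, lam_m, lam_mp = 1/1000, 1/2000, 1/3000   # failure rates per hour

# State = (P1, M1, P2, M2), 1 = up, 0 = down; reliability model, no repair.
states = list(product([0, 1], repeat=4))
index = {s: k for k, s in enumerate(states)}
Q = np.zeros((16, 16))

def add(frm, to, rate):
    Q[index[frm], index[to]] += rate

for s in states:
    for (pi, mi) in [(0, 1), (2, 3)]:           # positions of P and M per board
        p, m_ = s[pi], s[mi]
        if p:                                    # processor failure
            t = list(s); t[pi] = 0; add(s, tuple(t), lam_p)
        if m_:                                   # memory failure
            t = list(s); t[mi] = 0; add(s, tuple(t), lam_m)
        if p and m_:                             # common cause (assumed enabled
            t = list(s); t[pi] = 0; t[mi] = 0    # only when both are up)
            add(s, tuple(t), lam_mp)
np.fill_diagonal(Q, -Q.sum(axis=1))

# Reward 1 iff at least one processor is up and both memories are up.
r = np.array([1.0 if (s[0] or s[2]) and s[1] and s[3] else 0.0 for s in states])
p0 = np.zeros(16)
p0[index[(1, 1, 1, 1)]] = 1.0                    # everything up at t = 0

for t in (100.0, 200.0):
    print(f"E[X({t:.0f})] =", p0 @ expm(Q * t) @ r)
```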

Fig. 10. SHARPE output for CTMC example.

The SRN model for the same two-board system is shown in Fig. 11. Figure 11A describes the failure behavior of the processor P1 and the memory M1, while Fig. 11B depicts the failure behavior of the processor P2 and the memory M2. Tokens in the places M1U and M2U represent that the memories are operational. Otherwise they are down. Likewise, tokens in the places P1U and P2U represent that the processors are operational. Otherwise, they are down. The SHARPE input file for the SRN model is shown in Fig. 13. From lines 4 through 8, the variables are given values. Note that the parameter values are the same as the ones used for the CTMC model. On lines 10 through 16, we define a reward function to compute the probability that at least one processor and both of the memories are operational. That is, the places P1U or P2U must have at least one token and the places M1U and M2U must have exactly two tokens.

Fig. 11. SRN for a two-board system.

Fig. 12. SHARPE output for SRN example.

Fig. 13. SHARPE input for the SRN example.

On line 18, we begin the specification of the model with the keyword srn, which denotes SRNs, and a name BS. The SRN specification is divided into the following basic blocks: places, timed transitions, immediate transitions, input arcs, output arcs, and inhibitor arcs. Lines 20 through 28 specify the places; each line contains the name of a place and the number of tokens in the place. Lines 30 through 36 comprise the timed transitions; each line contains the name of a timed transition followed by the keyword ind and the value/variable assigned to it. Lines 41 through 49 define the input arcs; each line consists of a place name followed by a transition name and the multiplicity of the arc. Lines 51 through 59 specify the output arcs; each line consists of a transition name followed by a place name and the multiplicity of the arc. Note that this SRN model has no immediate transitions and no inhibitor arcs. On lines 63 through 66, we define a function to compute the expected reward rate at times t = 100 and t = 200. The keyword srn_exrt is a built-in function which computes the expected reward rate at time t; it takes as arguments a variable t, an SRN model name, and a reward function, as multiple reward functions can be defined for an SRN. The outputs for the specified model are presented in Fig. 12. One should note that the results from the SRN model are identical to those from the CTMC model.

Finally, the multistate FT model for the condition that at least one processor and both of the memories are operational is depicted in Fig. 14. The SHARPE input file for the FT model is shown in Fig. 15. On line 4, we begin the specification of the FT with the keyword mstree, which denotes a multistate FT, and a name BS100. On lines 5 through 8, we define the events. An event begins with the keyword basic. For example, line 5 defines the event B1:4 and assigns to it the transient probability π B1,4 (t) of being in component state 4, where B1,4 denotes that board B1 is in state 4. Each board is considered as a component with four states, as stated above. The probability is obtained by solving the Markov chain in Fig. 16, whose states are represented by a binary vector showing the states of P (processor) and M (memory), where 1 denotes up and 0 denotes down for each device. One should note that the probability is computed at t = 100 but can be assigned a variable value of t as a parameter. From lines 9 to 12, the structure of the multistate FT is defined. For instance, on line 9, the gate or is defined, followed by the name gor321 and its inputs B2:3 and B2:4. On line 15, the system (failure) probability is computed. Note that for the second part of the specification (lines 17 through 30), we consider t = 200 h. The results are depicted in Fig. 17. The solutions of the three models (CTMC, SRN, and FT) yield the same results within numerical accuracy [48]. Note that the 4-state CTMC of a single board can also be included in the input file and its state probabilities at time t can be passed directly to the multistate FT, making much better use of the capability of SHARPE.
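The multistate FT computation can be mimicked in a few lines as well (again a sketch, not SHARPE syntax; the 0-3 state coding below is mine, not the chapter's 1-4 numbering): solve the 4-state single-board CTMC of Fig. 16 for its state probabilities at time t, then combine the two independent boards through the structure 'both memories up and at least one processor up'. Under the same modeling assumptions this agrees with the 16-state CTMC sketch above.

```python
import numpy as np
from scipy.linalg import expm

lam_p, lam_m, lam_mp = 1/1000, 1/2000, 1/3000   # same rates as above

# Single-board CTMC, state (P, M) coded as 0=(0,0), 1=(0,1), 2=(1,0), 3=(1,1).
Q = np.zeros((4, 4))
Q[3, 1] = lam_p                  # processor fails
Q[3, 2] = lam_m                  # memory fails
Q[3, 0] = lam_mp                 # common cause: both fail together
Q[1, 0] = lam_m                  # remaining memory fails
Q[2, 0] = lam_p                  # remaining processor fails
np.fill_diagonal(Q, -Q.sum(axis=1))
p0 = np.array([0.0, 0.0, 0.0, 1.0])   # board starts with both devices up

for t in (100.0, 200.0):
    pb = p0 @ expm(Q * t)        # board state probabilities pi_{B,k}(t)
    m_up = pb[1] + pb[3]         # memory up: states (0,1) and (1,1)
    pm_up = pb[3]                # processor and memory both up: state (1,1)
    # Both memories up AND at least one processor up (boards independent):
    sys_up = m_up**2 - (m_up - pm_up)**2
    print(f"t = {t:.0f} h: system probability = {sys_up}")
```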

Fig. 14. FT for a two-board system.

Fig. 15. SHARPE input for the multistate FT example.

Fig. 16. CTMC model for a single board.

Fig. 17. SHARPE output for the multistate FT example.


URL:

https://www.sciencedirect.com/science/article/pii/B9780123965257000010