Markov processes provide very flexible, powerful, and efficient means for the description and analysis of dynamic (computer) system properties. Performance and dependability measures can be easily derived. Moreover, Markov processes constitute the fundamental theory underlying the concept of queueing systems. In fact, the notation of queueing systems has sometimes been viewed as a high-level specification technique for (a sub-class of) Markov processes.
Single Station Queueing Systems
A single station queueing system, as shown in Fig. 6.1, consists of a queueing buffer of finite or infinite size and one or more identical servers. Such an elementary queueing system is also referred to as a service station or, simply, as a node.
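As a concrete illustration of such a node, the following is a minimal sketch of the standard steady-state formulas for the simplest case, an M/M/1 station (Poisson arrivals, a single exponential server, infinite buffer); the function name and parameter names are illustrative, not from the text:

```python
def mm1_measures(lam, mu):
    """Steady-state measures of an M/M/1 queue: Poisson arrivals at
    rate lam, exponential service at rate mu, one server, infinite buffer."""
    assert lam < mu, "stability requires utilization rho < 1"
    rho = lam / mu              # server utilization
    N = rho / (1.0 - rho)       # mean number of jobs in the system
    T = 1.0 / (mu - lam)        # mean response time (Little's law: N = lam * T)
    Q = rho * rho / (1.0 - rho) # mean queue length (excluding the job in service)
    return rho, N, T, Q

rho, N, T, Q = mm1_measures(lam=3.0, mu=5.0)
# rho = 0.6, N = 1.5, T = 0.5, Q = 0.9
```

Note how Little's law ties the measures together: N = lam * T holds exactly for these formulas.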
Algorithms for Non-Product-Form Networks
Although many algorithms are available for solving product-form queueing networks (see Chapters 8 and 9), most practical queueing problems lead to non-product-form networks. If the network is Markovian (or can be Markovized), automated generation and solution of the underlying CTMC via stochastic Petri nets (SPNs) is an option provided the number of states is fewer than a million. Instead of the costly alternative of a discrete-event simulation, approximate solution may be considered.
Queueing networks consisting of several service stations are more suitable for representing the structure of many systems with a large number of resources than models consisting of a single service station. In a queueing network at least two service stations are connected to each other. A station, i.e., a node, in the network represents a resource in the real system. Jobs in principle can be transferred between any two nodes of the network; in particular, a job can be directly returned to the node it has just left. A queueing network is called open when jobs can enter the...
Approximation Algorithms for Product-Form Networks
In Chapter 8, several efficient algorithms for the exact solution of queueing networks are introduced. However, the memory requirements and computation time of these algorithms grow exponentially with the number of job classes in the system. For networks with a large number of job classes, where exact computation becomes impractical, we resort to approximation methods. In Sections 9.1, 9.2, and 9.3 we introduce methods for obtaining such approximate results. The first group of methods is based on the MVA.
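Since the first group of approximate methods builds on the MVA, a sketch of the exact single-class MVA recursion may be useful as a baseline. The restriction to single-server FCFS stations and all identifiers here are assumptions of this sketch, not taken from the text:

```python
def mva(service_demands, n_jobs):
    """Exact single-class MVA for a closed network of single-server
    FCFS stations. service_demands[k] = visit ratio * mean service
    time at station k; n_jobs >= 1 is the network population."""
    K = len(service_demands)
    n_at = [0.0] * K                                 # mean queue lengths, population 0
    for n in range(1, n_jobs + 1):
        # arrival theorem: a job arriving with population n sees the
        # mean queue lengths of the network with population n - 1
        R = [service_demands[k] * (1.0 + n_at[k]) for k in range(K)]
        X = n / sum(R)                               # throughput via Little's law
        n_at = [X * R[k] for k in range(K)]          # queue lengths at population n
    return X, R, n_at

X, R, n_at = mva([0.5, 0.5], 2)
# X = 4/3, R = [0.75, 0.75], n_at = [1.0, 1.0]
```

The recursion runs population by population; the approximate MVA variants replace this exact recursion by fixed-point iterations to avoid the cost of stepping through all populations and classes.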
This chapter considers several large applications, organized into three sections. In Section 13.1, we present case studies of queueing network applications. In Section 13.2, we present case studies of Markov chains and stochastic Petri nets. In Section 13.3, case studies of hierarchical models are presented.
Algorithms for Product-Form Networks
Although product-form solutions can be expressed very easily as formulae, the computation of state probabilities in a closed queueing network is very time consuming if a straightforward computation of the normalization constant using Eq. (7.3.5) is carried out. As seen in Example 7.7, considerable computation is needed to analyze even a single class network with a small number of jobs, primarily because the formula makes a pass through all the states of the underlying CTMC.
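A classical remedy is the convolution algorithm of Buzen, which computes the normalization constant without enumerating the states of the CTMC. The following is a minimal sketch for the simplest setting (single class, fixed-rate single-server stations); the function and parameter names are illustrative:

```python
def buzen_G(rel_utilizations, N):
    """Convolution (Buzen) algorithm: normalization constants G(0..N)
    for a closed single-class network of fixed-rate single-server
    stations. rel_utilizations[k] = visit ratio * mean service time
    of station k (relative utilization)."""
    G = [1.0] + [0.0] * N            # G(0) = 1 for the empty network
    for x in rel_utilizations:       # fold in one station at a time
        for n in range(1, N + 1):
            G[n] += x * G[n - 1]     # in-place convolution step
    return G

G = buzen_G([1.0, 1.0], 2)   # G = [1.0, 2.0, 3.0]
X = G[1] / G[2]              # throughput with 2 jobs: 2/3
```

The cost is O(K * N) for K stations and N jobs, as opposed to a pass through all states of the CTMC; performance measures such as throughput follow directly from ratios of normalization constants, X(N) = G(N-1)/G(N).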
Analytic performance models are very well suited as kernels in optimization problems. Two major categories of optimization problems are static and dynamic optimization. In the former, performance measures are computed separately from an analytic queueing or CTMC model and treated simply as functions (generally complex and non-linear) of the control (decision) variables. In the latter class of problems, decision variables are integrated with the analytic performance model and hence optimization is intimately connected with performance evaluation.
Steady-State Solutions of Markov Chains
In this chapter, we restrict ourselves to the computation of the steady-state probability vector ν of ergodic Markov chains. Most of the literature on solution techniques of Markov chains assumes ergodicity of the underlying model. A comprehensive source on algorithms for steady-state solution techniques is the book by Stewart [Stew94]. From Eq. (2.15) and Eq. (2.58), we have ν = νP and 0 = νQ, respectively, as points of departure for the study of steady-state solution techniques. Eq. (2.15) can be transformed so that: 0 = ν(P − I).
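As a minimal illustration of solving ν = νP, the following power-iteration sketch repeatedly multiplies an initial distribution by P until it stops changing; it is illustrative only (the chapter's direct and iterative methods are more refined), and the names are not from the text:

```python
def steady_state(P, tol=1e-12, max_iter=100000):
    """Power iteration for the steady-state vector nu of an ergodic
    DTMC with (row-stochastic) transition matrix P, i.e. nu = nu P."""
    n = len(P)
    nu = [1.0 / n] * n                    # start from the uniform distribution
    for _ in range(max_iter):
        new = [sum(nu[i] * P[i][j] for i in range(n)) for j in range(n)]
        if max(abs(a - b) for a, b in zip(new, nu)) < tol:
            return new
        nu = new
    return nu

P = [[0.9, 0.1],
     [0.5, 0.5]]
nu = steady_state(P)   # converges to [5/6, 1/6]
```

For this 2-state chain the balance equation 0.1*nu[0] = 0.5*nu[1] together with normalization gives nu = (5/6, 1/6), which the iteration reproduces.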
Information processing system designers need methods for the quantification of system design factors such as performance and reliability. Modern computer, communication, and production line systems process complex workloads with random service demands. Probabilistic and statistical methods are commonly employed for the purpose of performance and reliability evaluation. The purpose of this book is to explore major probabilistic modeling techniques for the performance analysis of information processing systems....
In this section we introduce an efficient method for the steady-state analysis of Markov chains. Whereas direct and iterative techniques can be used for the exact analysis of Markov chains as previously discussed, the method of Courtois [Cour75, Cour77] is mainly applied to approximate computations of the desired state probability vector ν. Courtois's approach is based on decomposability properties of the models under consideration.
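The following is a rough sketch of the aggregation/disaggregation idea behind such decomposition-based analysis, not Courtois's exact procedure; the block partition, names, and the plain power iteration used internally are all assumptions for illustration. Each nearly isolated block is solved on its own, the blocks are coupled through a small aggregate chain, and the results are combined:

```python
def aggregate_steady_state(P, blocks):
    """Aggregation/disaggregation sketch for a nearly completely
    decomposable DTMC: P is a row-stochastic matrix, blocks is a
    partition of the state indices into weakly coupled groups."""
    def power(M):  # steady-state of a small stochastic matrix
        n = len(M)
        v = [1.0 / n] * n
        for _ in range(10000):
            v = [sum(v[i] * M[i][j] for i in range(n)) for j in range(n)]
        s = sum(v)
        return [x / s for x in v]

    # 1. per-block analysis: renormalize each diagonal block to a
    #    stochastic matrix and solve it in isolation
    block_pis = []
    for B in blocks:
        sub = [[P[i][j] for j in B] for i in B]
        rows = [[x / sum(r) for x in r] for r in sub]
        block_pis.append(power(rows))

    # 2. coupling: aggregate transition probabilities between blocks
    K = len(blocks)
    C = [[sum(block_pis[a][u] * sum(P[blocks[a][u]][j] for j in blocks[b])
              for u in range(len(blocks[a])))
          for b in range(K)]
         for a in range(K)]
    xi = power(C)                      # aggregate (block) probabilities

    # 3. disaggregation: weight each block solution by its aggregate mass
    approx = {}
    for a, B in enumerate(blocks):
        for u, i in enumerate(B):
            approx[i] = xi[a] * block_pis[a][u]
    return approx
```

The approximation is good precisely when the inter-block transition probabilities are small compared to the intra-block ones, which is the decomposability property the method exploits.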
In this chapter, we evaluate the impact of long-range dependence on a single network element (multiplexer or router) of a data network, this element being modeled as a fluid queueing system. Such a system has constant output rate C and is fed by a number N of i.i.d. on/off traffic sources. The case where the number of sources N is fixed is treated, for instance, in Boxma and Dumas.
Since the seminal study of Leland et al. on the self-similar nature of network traffic, significant advances have been made in understanding the statistical properties of measured network traffic (in particular, Internet workloads); why self-similar burstiness is a ubiquitous phenomenon present in diverse networking contexts; mathematical models for their description and performance analysis based on queueing; and traffic control and resource management under self-similar traffic conditions. Chapter 1 gives a comprehensive overview including a...
Transient Solution of Markov Chains
Transient solution is more meaningful than steady-state solution when the system under investigation needs to be evaluated with respect to its short-term behavior. Using steady-state measures instead of transient measures could lead to substantial errors in this case. Furthermore, applying transient analysis is the only choice if non-ergodic models are investigated. Transient analysis of Markov chains has been attracting increasing attention and is of particular importance in dependability modeling. ...
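One widely used transient technique is uniformization (also known as Jensen's method or randomization), which expresses pi(t) as a Poisson-weighted sum of DTMC powers. A compact sketch, assuming a small generator matrix Q given as nested lists (names and the crude truncation rule are illustrative):

```python
import math

def transient_probs(Q, pi0, t, eps=1e-10):
    """Transient state probabilities pi(t) of a CTMC with generator Q
    via uniformization: pi(t) = sum_k Pois(k; q*t) * pi0 * P^k,
    where P = I + Q/q and q >= max_i |Q[i][i]|."""
    n = len(Q)
    q = max(-Q[i][i] for i in range(n)) * 1.02   # uniformization rate
    P = [[(1.0 if i == j else 0.0) + Q[i][j] / q for j in range(n)]
         for i in range(n)]
    term = math.exp(-q * t)                      # Poisson weight for k = 0
    v = list(pi0)                                # running vector pi0 * P^k
    result = [term * x for x in v]
    k = 0
    while term > eps or k < q * t:               # truncate once weights are tiny
        k += 1
        v = [sum(v[i] * P[i][j] for i in range(n)) for j in range(n)]
        term *= q * t / k
        result = [r + term * x for r, x in zip(result, v)]
    return result

pi = transient_probs([[-1.0, 1.0], [1.0, -1.0]], [1.0, 0.0], 1.0)
```

For this symmetric two-state chain the exact solution is pi_0(t) = 0.5*(1 + exp(-2t)), so the computed pi approaches (0.5, 0.5) as t grows; uniformization is numerically attractive because it involves only non-negative quantities.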
Performance Analysis Tools
Performance analysis tools have acquired increased importance due to increased complexity of modern systems. It is often the case that system measurements are not available or are very difficult to get. In such cases the development and the solution of a system model is an effective method of performance assessment. Software tools that support performance modeling studies provide one or more of the following solution methods:
Recently, there has been much interest in the behavior of queues with heavy-tailed service time distributions. This interest has been triggered by a large number of traffic measurements on modern communication network traffic (e.g., see Willinger et al. for Ethernet LAN traffic, Paxson and Floyd for WAN traffic, and Beran et al. for VBR video traffic; see also various chapters in this volume).
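To see the effect of heavy tails, the following is a small illustrative simulation (all names are assumptions of this sketch) of waiting times in an M/G/1 FCFS queue with Pareto service times, using Lindley's recursion. For 1 < alpha < 2 the service time has finite mean but infinite variance, so the Pollaczek-Khinchine formula gives an infinite mean wait and the sample mean keeps drifting upward with the run length:

```python
import random

def pareto_service(alpha, xm=1.0):
    """Sample a Pareto(alpha, xm) service time by inversion; for
    1 < alpha < 2 the mean is finite but the variance is infinite."""
    return xm / (1.0 - random.random()) ** (1.0 / alpha)

def mg1_mean_wait(lam, alpha, n=200_000, seed=1):
    """Sample-mean waiting time in an M/G/1 FCFS queue via Lindley's
    recursion W_{k+1} = max(0, W_k + S_k - A_k), with Poisson arrivals
    at rate lam and Pareto(alpha) service times."""
    random.seed(seed)
    W, total = 0.0, 0.0
    for _ in range(n):
        S = pareto_service(alpha)         # heavy-tailed service time
        A = random.expovariate(lam)       # interarrival time
        W = max(0.0, W + S - A)
        total += W
    return total / n
```

With, say, lam = 0.2 and alpha = 1.5 the mean service time is 3, so the queue is stable (rho = 0.6), yet repeated runs with increasing n show the estimated mean wait growing without bound, which is exactly the qualitative behavior that motivates the heavy-tail analyses in this chapter.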