- Title
The Role of Quantitative Models in Building Scalable Cloud Infrastructures
- Abstract
Planetary-scale cloud computing requires massively scalable infrastructures for compute, storage, networking, and services that support programming models such as MapReduce. Many design choices arise in the construction of cloud infrastructures. Examples include scheduling policies for compute clusters, caching and replication policies for storage, and approaches to integrating bandwidth management with application requirements for quality of service (QoS). These design choices must be evaluated in terms of their impact on QoS considerations such as throughput, latency, and jitter, as well as the consumption of power, compute, storage, and network bandwidth. The scale of cloud infrastructures typically makes it impractical or ineffective to perform these evaluations on test systems, and, for the most part, it is too costly and time-consuming to evaluate designs by building and deploying multiple implementations. This talk discusses ways in which Google uses quantitative models to evaluate design decisions for cloud infrastructures. In some cases, we incorporate quantitative models into production systems to improve the quality of on-line decision making. Since the effectiveness of these quantitative models depends on the type and accuracy of workload characteristics, the talk also addresses workload characterization. Finally, the talk discusses a number of research challenges that arise in employing performance models in practice.
- Brief Bio
Joseph L. Hellerstein (jlh@google.com) is at Google, Inc., where he manages the Performance Analytics Department, which develops scalable resource management algorithms and tools for performance prediction and analysis of the Google Cloud. From 2006 to 2008, he was a Principal Architect in the Microsoft Developer Division, where he developed scheduling optimizations for .NET. From 1984 through 2006, he was a Senior Manager at the IBM Thomas J. Watson Research Center in Hawthorne, New York, where he founded the Adaptive Systems Department that contributed control technologies to IBM products. Dr. Hellerstein received his undergraduate degree from the University of Michigan, Ann Arbor, and his M.S. and Ph.D. in Computer Science from the University of California, Los Angeles. He has published over 100 peer-reviewed papers and two books, and has taught at Columbia University and the University of Washington. Dr. Hellerstein is a Fellow of the IEEE and received the IEEE/IFIP Stokesberry Award for outstanding contributions to the network management community.
- Website
- Title
Reflections on the Numerical Solution of Markov Chains
- Abstract
It has been 45 years since Vic Wallace introduced RQA1, the first software system to generate Markov chains and then compute their stationary distributions. RQA1 proved successful for a period of time but eventually ran up against a type of problem that it simply could not solve, namely the nearly-completely-decomposable (NCD) Markov chain. In the intervening years, our understanding of NCD systems has grown immensely, which in turn has led to the development of numerical techniques specifically designed for, and very effective at, solving such problems.
An examination of the history of solving Markov chains in the years since shows that this phenomenon continues to this day. Problems that are not amenable to resolution by existing methods arise, often unexpectedly; they are subsequently analyzed and give rise to the development of novel solution procedures. The application of Markov chains to model and analyze emerging technology (the development of search engines such as Google comes to mind) is particularly prone to creating such problems and opportunities.
In this talk, we shall examine how problems concerning generating state spaces, storing huge matrices, and analyzing their composition have led to important theoretical and practical results. We shall examine what the lessons of the past offer for the future, asking where Markov chain technology is heading and, in particular, why, given its potential, it is not as widely employed as one might expect.
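As a minimal, self-contained illustration of the core computation referred to above (not taken from the talk), the following Python sketch computes the stationary distribution of a small, hypothetical discrete-time Markov chain by power iteration; the transition matrix is an invented example, and practical solvers for large or nearly-completely-decomposable chains use far more sophisticated methods.

```python
# Sketch: stationary distribution of a small Markov chain via power iteration.
# The 3-state transition matrix P is an arbitrary illustrative example.
import numpy as np

P = np.array([
    [0.90, 0.05, 0.05],
    [0.10, 0.80, 0.10],
    [0.05, 0.15, 0.80],
])  # row-stochastic: each row sums to 1

pi = np.full(P.shape[0], 1.0 / P.shape[0])  # start from the uniform distribution
for _ in range(10_000):
    nxt = pi @ P                 # one step of the chain: pi_{k+1} = pi_k P
    if np.abs(nxt - pi).sum() < 1e-12:
        break
    pi = nxt

print(pi)  # stationary vector: satisfies pi = pi P and sums to 1
```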
- Brief Bio
William J. Stewart is a professor of computer science at North Carolina State University. He received his Ph.D. degree in 1974 from The Queen's University of Belfast, Northern Ireland, under the direction of C.A.R. Hoare. Thereafter he spent almost five years at the University of Rennes, France, before accepting a position at North Carolina State University, where he has remained ever since.
His primary research area is the development, analysis, and solution of systems that are represented by Markov chains. He has published extensively in this area and has been involved in several software implementations, notably MARCA (Markov Chain Analyzer), QNAP (Queueing Network Analysis Package), and PEPS (Performance Evaluation of Parallel Systems). He also developed a testbed of Markov chain problems to provide a framework for the comparative analysis of different solution procedures.
In 1990, he initiated a series of meetings dedicated to the numerical solution of Markov chains; the current meeting is the latest incarnation of this series.
Among his publications, two textbooks are worthy of note: An Introduction to the Numerical Solution of Markov Chains (1994) and Probability, Markov Chains, Queues, and Simulation: The Mathematical Basis of Performance Modeling (2009), both published by Princeton University Press.
- Website