

Quantitative analysis of heap memory requirements of Java/.Net-like programs

Diego Garbervetsky (Universidad de Buenos Aires, AR)


There is increasing interest in understanding and analyzing the use of resources in software and hardware systems. Certifying memory consumption is vital to ensure safety in embedded systems as well as proper administration of their power consumption; understanding the number of messages sent through a network is useful to detect performance bottlenecks or to reduce communication costs. Assessing resource usage is indeed a cornerstone in a wide variety of software-intensive systems, ranging from embedded devices to Cloud computing. It is well known that inferring, and even checking, quantitative bounds is difficult (in general, undecidable).

Memory consumption is a particularly challenging case of resource-usage analysis due to its non-accumulative nature. Inferring memory consumption requires not only computing bounds for allocations but also taking into account the memory recovered by the garbage collector (GC). In this tutorial I will provide an overview of the existing approaches to compute bounds on heap memory consumption. Then I will present some of the work our group has been performing to automatically analyze heap memory requirements of both Java and .Net programs. Finally, I will explain some limitations of our approaches and discuss some key challenges and directions for future research.
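The non-accumulative nature of memory consumption can be illustrated with a small, hypothetical Java fragment (the class and method names below are for illustration only). Total allocations grow linearly with the loop bound, yet the peak live heap stays constant, because each temporary buffer is dead before the next one is allocated and may be reclaimed by the GC; a sound heap-requirement analysis must bound the peak, not the sum.

```java
// Hypothetical example: total allocation is proportional to n,
// but the peak live-heap requirement is a single buffer.
public class NonAccumulative {
    static int sumOfSquares(int n) {
        int total = 0;
        for (int i = 1; i <= n; i++) {
            int[] buf = new int[1024]; // a fresh buffer every iteration
            buf[0] = i * i;
            total += buf[0];
            // buf is unreachable after this point, so the GC may
            // reclaim it before the next iteration allocates again:
            // peak live heap = one buffer, not n buffers.
        }
        return total;
    }

    public static void main(String[] args) {
        System.out.println(sumOfSquares(10)); // 1 + 4 + ... + 100 = 385
    }
}
```

An analysis that merely sums allocation sites would report a bound of roughly `n * 1024` words here, while a GC-aware analysis can certify a constant-size requirement.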

Benchmarking the Dependability of Computer Systems

Marco Vieira (University of Coimbra, PT)


Computer benchmarks are standard tools that allow evaluating and comparing different systems or components according to specific characteristics (e.g., performance, robustness, dependability). The computer systems industry holds a reputed infrastructure for performance evaluation, and the benchmarks managed by the TPC (Transaction Processing Performance Council) and by SPEC (Standard Performance Evaluation Corporation) are recognized as two of the most successful benchmarking initiatives in the computer industry. However, dependability evaluation and comparison have been absent from benchmarking efforts for a long time.

The concept of benchmarking can be summarized in three words: representativeness, usefulness, and agreement. A benchmark must be as representative as possible of a given domain but, as an abstraction of that domain, it will always be an imperfect representation of reality. The objective is to find a useful representation that captures the essential elements of the application domain and provides practical ways to characterize the computer features that help the vendors/integrators to improve their products and help the users in their purchase decisions.

Dependability benchmarking has gained ground in recent years. In fact, several dependability benchmarks have been proposed, covering several different application domains (e.g., general-purpose operating systems, real-time kernel applications, engine control applications for automotive systems, on-line transaction processing systems, and web servers).

The purpose of this tutorial is to present the state of the art in dependability benchmarking of computer systems. During the tutorial we will discuss different approaches to this problem and present in detail the most important works in the field, thereby helping to disseminate possible paths to benchmark the dependability of computer systems and to foster the technical discussion required to create the conditions for the use of dependability benchmarks by the computer systems industry.

