


PhD defence: On the Challenges of Software Performance Optimization with Statistical Methods

Time: 2023-06-15, 15:15 to 18:00 (doctoral defence)

Thesis title: On the Challenges of Software Performance Optimization with Statistical Methods

Author: Noric Couderc, Department of Computer Science, Lund University

Faculty opponent: Assistant Professor Diego Costa, Université du Québec à Montréal, Canada

Examination Committee:

  • Professor Emeritus Nahid Shahmehri, Linköping University

  • Professor Welf Löwe, Linnaeus University

  • Professor Tobias Wrigstad, Uppsala University

  • Deputy: Professor Björn Regnell, Lund University

Session chair: Professor Jacek Malec, Lund University


Supervisors:

  • Senior Lecturer Christoph Reichenbach, Lund University

  • Senior Lecturer Emma Söderberg, Lund University

Location: E:A, E-building, John Ericssons väg 2 / Ole Römers väg 3, Lund, Sweden

For download: The thesis PDF will be available for download here later.


Most modern programming languages, such as Java, Python, and Ruby, include a collection framework as part of their standard library (or runtime). The Java Collection Framework provides a number of collection classes, some of which implement the same abstract data type, making them interchangeable. Developers can therefore choose between several functionally equivalent options. Since collections have different performance characteristics and can be allocated at thousands of program locations, the choice of collection has a significant impact on performance. Unfortunately, programmers often make sub-optimal choices when selecting their collections.
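The interchangeability described above can be illustrated with a short sketch (the class and variable names here are ours, not from the thesis): two implementations of the `List` abstract data type produce the same result, while their asymptotic costs differ.

```java
import java.util.ArrayList;
import java.util.LinkedList;
import java.util.List;

// Two functionally equivalent implementations of the List ADT:
// swapping one for the other changes performance, not behavior.
public class CollectionChoice {
    // The caller depends only on the List interface, so any
    // implementation can be passed in.
    static long sumFirstAndLast(List<Integer> xs) {
        return (long) xs.get(0) + xs.get(xs.size() - 1);
    }

    public static void main(String[] args) {
        List<Integer> arrayBacked = new ArrayList<>();
        List<Integer> nodeBacked  = new LinkedList<>();
        for (int i = 0; i < 100_000; i++) {
            arrayBacked.add(i);
            nodeBacked.add(i);
        }
        // Same result from both implementations...
        System.out.println(sumFirstAndLast(arrayBacked)); // 99999
        System.out.println(sumFirstAndLast(nodeBacked));  // 99999
        // ...but get(i) is O(1) for ArrayList and O(n) for LinkedList,
        // so the choice matters at every allocation site.
    }
}
```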

In this thesis, we consider the problem of building automated tools that would help the programmer choose between different collection implementations. We divide this problem into two sub-problems. First, we need to measure the performance of a collection, and use relevant statistical methods to make meaningful comparisons. Second, we need to predict the performance of a collection with as little benchmarking as possible.

To measure and analyze the performance of Java collections, we identify problems with the established methods and suggest more appropriate statistical methods borrowed from Bayesian statistics. We use these statistical methods in a reproduction of two state-of-the-art dynamic collection selection approaches: CoCo and CollectionSwitch. Our Bayesian approach allows us to make sound comparisons between the previously reported results and our own experimental evidence. We find that we cannot reproduce the original results, and report on possible causes for the discrepancies between our results and theirs.
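As a toy illustration of the kind of probabilistic comparison such methods enable, the sketch below uses a generic Bayesian bootstrap to estimate the posterior probability that one implementation is faster than another. This is not the thesis's actual model, and all timings are made up.

```java
import java.util.Random;

// Toy probabilistic comparison of two benchmark timing samples.
// Illustrative only: a plain Bayesian bootstrap, not the models
// used in the thesis; the sample values are invented.
public class BayesCompare {
    // One Bayesian-bootstrap replicate of a weighted mean:
    // weights ~ Dirichlet(1,...,1), drawn as normalized Exp(1) variates.
    static double weightedMean(double[] xs, Random rng) {
        double[] w = new double[xs.length];
        double total = 0;
        for (int i = 0; i < xs.length; i++) {
            w[i] = -Math.log(1 - rng.nextDouble()); // Exp(1) draw
            total += w[i];
        }
        double mean = 0;
        for (int i = 0; i < xs.length; i++) mean += w[i] / total * xs[i];
        return mean;
    }

    // Posterior probability that implementation A has a smaller
    // mean running time than implementation B.
    static double probAFaster(double[] a, double[] b, int draws, long seed) {
        Random rng = new Random(seed);
        int wins = 0;
        for (int d = 0; d < draws; d++)
            if (weightedMean(a, rng) < weightedMean(b, rng)) wins++;
        return (double) wins / draws;
    }

    public static void main(String[] args) {
        double[] arrayListMs  = {1.1, 1.2, 1.0, 1.3, 1.1}; // hypothetical timings
        double[] linkedListMs = {2.0, 2.4, 1.9, 2.2, 2.1};
        System.out.printf("P(A faster) ~ %.2f%n",
                probAFaster(arrayListMs, linkedListMs, 10_000, 42));
    }
}
```

Unlike a bare comparison of sample means, this yields a probability statement, which is the kind of "sound comparison" the abstract refers to.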

To predict the performance of a collection, we consider an existing tool called Brainy. Brainy suggests collections to developers for C++ programs, using machine learning. One particularity of Brainy is that it generates its own training data by synthesizing programs and benchmarking them. As a result, Brainy can automatically learn about new collections and new CPU architectures, whereas other approaches require an expert to teach the system about collection performance. We adapt Brainy to the Java context and investigate whether Brainy's adaptability also holds for Java. We find that Brainy's benchmark synthesis methods do not apply well to the Java context, as they introduce significant biases. We propose a new generative model for collection benchmarks and present the challenges that porting Brainy to Java entails.
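The idea of generating one's own training data can be sketched as follows: synthesize a random workload, then benchmark each candidate collection on it. This is a heavily simplified, hypothetical illustration; the real tool synthesizes whole programs and is far more elaborate.

```java
import java.util.ArrayList;
import java.util.Collection;
import java.util.HashSet;
import java.util.Random;
import java.util.function.Supplier;

// Sketch of benchmark synthesis in the style of Brainy: generate a
// random mix of add/contains/remove operations (the "synthesized
// program"), then time each candidate collection running it.
public class SyntheticBenchmark {
    enum Op { ADD, CONTAINS, REMOVE }

    // Generate a random operation sequence from a fixed seed.
    static Op[] synthesizeWorkload(int length, long seed) {
        Random rng = new Random(seed);
        Op[] ops = new Op[length];
        for (int i = 0; i < length; i++)
            ops[i] = Op.values()[rng.nextInt(Op.values().length)];
        return ops;
    }

    // Run the workload against one candidate implementation and
    // return the elapsed time in nanoseconds.
    static long run(Supplier<Collection<Integer>> candidate, Op[] ops, long seed) {
        Collection<Integer> c = candidate.get();
        Random rng = new Random(seed);
        long start = System.nanoTime();
        for (Op op : ops) {
            int v = rng.nextInt(1_000);
            switch (op) {
                case ADD:      c.add(v);                      break;
                case CONTAINS: c.contains(v);                 break;
                case REMOVE:   c.remove(Integer.valueOf(v));  break; // remove(Object)
            }
        }
        return System.nanoTime() - start;
    }

    public static void main(String[] args) {
        Op[] workload = synthesizeWorkload(50_000, 7);
        System.out.println("ArrayList: " + run(ArrayList::new, workload, 7) + " ns");
        System.out.println("HashSet:   " + run(HashSet::new, workload, 7) + " ns");
    }
}
```

Timings gathered this way can then serve as training data for a model that maps workload features to the best-performing collection, without an expert hand-labeling anything.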



noric [dot] couderc [at] cs [dot] lth [dot] se