Code review is the manual assessment of source code by human reviewers. It is a widely recommended practice in software engineering, adopted both in industrial settings and in open source projects. What can we learn if we compare code review in OSS projects and at Microsoft? In this talk, I will present and compare the results of two studies about code review: one conducted at Microsoft and one conducted in open source settings. We learn that, despite the differences in environments, incentives, languages, and tools used, code review at Microsoft and in the analyzed open source projects shares one common trait: the outcome. Unfortunately, this outcome does not match the main reason developers say they do code reviews: finding defects. I will discuss why this happens and what we can do about it.
In summer 2012, we investigated the expectations, outcomes, and challenges of code review at Microsoft. I conducted face-to-face interviews with several professional developers, performed a deep analysis of code review data across many products at Microsoft (e.g., Excel and Xbox), and administered a company-wide survey answered by more than 1,000 Microsoft developers and managers. We found that outcomes clearly do not match expectations, from various angles and across all product groups. The study was published at the International Conference on Software Engineering in 2013 (see attached file). After this, we investigated the outcomes of code review in two active open source projects with several years of development (ConQAT and GROMACS). We found a staggering similarity to the outcomes of code review at Microsoft, despite the great differences in the environments in which code reviews are conducted and in the incentives behind them. The study was published at the Working Conference on Mining Software Repositories in 2014 (see attached file). In this talk, I will present the details of these studies and their results, the explanation we gave for this discrepancy between code review expectations and outcomes, and how my research is trying to make code review work as expected.
Speaker: Alberto Bacchelli