Towards empirical answers to important software engineering questions
(Adapted from a two-part article on the Communications of the ACM blog.)
1 The rise of empirical software engineering
One of the success stories of software engineering research in recent decades has been the rise of empirical studies. Visionaries such as Vic Basili, Marvin Zelkowitz and Walter Tichy advocated empirical techniques early [1, 2, 3]; what enabled the field to take off was the availability of software repositories for such long-running projects as Apache, Linux and Eclipse [4], which researchers started mining using modern data analysis techniques.
These studies have yielded many insights, including surprises. More experienced developers can produce more buggy code (Schröter, Zimmermann, Premraj, Zeller). To predict whether a module has bugs, intrinsic properties such as complexity seem to matter less than how many changes it went through (Moser, Pedrycz, Succi). Automatic analysis of user reports seems much better at identifying bugs than at spotting feature requests (Panichella, Di Sorbo, Guzman, Visaggio, Canfora, Gall). More extensively tested modules tend to have more bugs (Mockus, Nagappan, Dinh-Trong). Eiffel programmers do use contracts (Estler, Furia, Nordio, Piccioni and me). Geographical distance between team members negatively affects the amount of communication in projects (Nordio, Estler, Tschannen, Ghezzi, Di Nitto and me). And so on.
The basic observation behind empirical software engineering is simple: if software products and processes are worthy of discussion, they must be worthy of quantitative discussion just like any natural artifact or human process. Usually at that point the advocacy cites Lord Kelvin: “If you cannot measure it, you cannot improve it” [5].
Not that advocacy is much needed today, at least for publishing research in software engineering and in computer science education. The need for empirical backing of conceptual proposals has achieved consensus. The so-called “Marco Polo paper” [6] (I traveled far and saw wonderful things, thank you very much for your attention) no longer suffices for program committees; today they want numbers (and also, thankfully, a “threats to validity” section, which protects you against suspicions that the numbers are bogus by stating why they might be). Some think this practice of demanding empirical backing for anything you propose has gone too far; see Jeff Ullman’s complaint [7], which pertains to database research rather than software engineering but reflects some of the same discussions. Here we can counter Kelvin with another quote (attributed to Einstein for greater effect, albeit falsely): not everything that can be counted counts, and not everything that counts can be counted.
2 Limits of empirical research
There can indeed be too much of a good thing. Still, no one would seriously deny the fundamental role that empirical research has gained in modern software engineering. Which does not prevent us from considering the limits of what it has achieved; not in a spirit of criticism for its own sake, but to help researchers define an effective agenda for the next steps. There are in my opinion two principal limitations of current empirical results in software engineering.
The first has to do with the distinction introduced above between the two kinds of possible targets for empirical assessment: products (artifacts) versus processes.
Both aspects are important, but one is much easier to investigate than the other. For software products, the material of study is available in the form of the repositories mentioned above, with their wealth of information about lines of code, control and data structures, commits, editing changes, bug reports and bug fixes. Processes are harder to grasp. You gain some information on processes from the repositories (for example, patterns and delays of bug fixing), but processes deserve studies of their own. For example, agile teams practice iterations (sprints) of widely different durations, from a few days to a few weeks; what is the ideal length? A good empirical answer would help many practitioners. But this example illustrates how difficult empirical studies of processes can be: you would need to try many variations with teams of professional programmers (not students) in different projects, different application areas and different companies; for the results to be believable, the projects should be real ones with business results at stake, there should be enough samples in each category to ensure statistical significance, and the companies should agree to publication of the outcomes in some form, possibly anonymized. The difficulties are formidable.
This issue of how to obtain project-oriented metrics is related to the second principal limitation of some of the initial empirical software engineering work: the risk of indulging in lamppost research. The term refers to the well-known joke about the drunkard who, in the dark of the night, searches for his lost keys next to the lamp post, not because he has lost them there but because it is the only place where one can see anything. To a certain extent all research is lamppost research: by definition, if you succeed in studying something, it will be because it can be studied. But the risk is to choose to work on a problem only, or principally, because it is easy to set up an empirical study — regardless of its actual importance. To cite an example that I have used elsewhere, one may suspect that the reason there are so many studies of pair programming is not that it’s of momentous relevance but that it is not hard to set up an experiment.
3 Beyond the lamppost
As long as empirical software engineering was a young, fledgling discipline, it made good sense to start with problems that naturally lent themselves to empirical investigation. But now that the field has matured, it may be time to reverse the viewpoint and start from the consumer’s perspective: for practitioners of software engineering, what problems, not yet satisfactorily answered by software engineering theory, could benefit, in the search for answers, from empirical studies?
Indeed, this is what we are entitled to expect from empirical studies: guidance. The slogan of empirical software engineering is that software is worthy of study just like geological strata, photons, and lilies-of-the-valley; OK, sure, but we are talking about human artifacts rather than wonders of the natural world, and the idea should be to help us produce better software and produce software better.
4 A horror story
Whenever we call for guidance from empirical studies, we should immediately include a caveat: every empirical study has its limitations (politely called “threats to validity”) and one must be careful about any generalization. The following horror story serves as a caution [9]. The fashion today in programming language design is to use the semicolon not as a separator, in the Algol tradition (instruction1 ; instruction2), but as a terminator, in the C tradition (instruction1; instruction2;). The original justification, particularly in the case of Ada [10], is an empirical paper by Gannon and Horning [11], which purported to show that the terminator convention led to fewer errors. (The authors themselves not only give their experimental results but, departing from the experimenter’s reserve, explicitly jump to the conclusion that terminators are better.) This view defies reason: witness, among others, the ever-recurring tragedy of if c then a; else; b, where the semicolon after else is an error (a natural one, since one gets into the habit of adding semicolons just in case) but the code compiles, with the result that b will be executed in all cases rather than (as intended) just when c is false [12].
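For concreteness, here is a minimal C sketch of the trap just described; it is only an illustration of mine (the names c, a and b are placeholders), with C standing in for the terminator tradition in which the stray semicolon compiles silently:

    #include <stdio.h>

    /* The "stray semicolon after else" trap, in a terminator language. */

    void a(void) { printf("a: c was true\n"); }
    void b(void) { printf("b: meant to run only when c is false\n"); }

    int main(void) {
        int c = 1;                /* c is true: the intent is to execute a() only */

        if (c) a(); else; b();    /* the ';' after else is an (empty) else branch;
                                     b() becomes a separate statement, executed always */

        if (c) a(); else b();     /* intended version: b() runs only when c is false */
        return 0;
    }

With c true, the first form still ends up calling b: exactly the silent misbehavior described above.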
How in the world could an empirical study come up with such a bizarre conclusion? Go back to the original Gannon-Horning paper and the explanation becomes clear: the experiments used subjects who were familiar with the PL/I programming language, where semicolons are used generously and an extra semicolon is harmless, as it is in all practical languages (two successive semicolons being simply interpreted as the insertion of an empty instruction, causing no harm); but the separator-based language and compiler used in the experiment treated an extra semicolon as an error! As if this were not enough, checking the details of the article reveals that the terminator language is terminator-based for both declarations and instructions, whereas the separator language is separator-based only for instructions and terminator-based for declarations. Talk about a biased experiment! The experiment was bogus and so were the results.
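Here is, for comparison, what the “harmless extra semicolon” behavior of practical languages looks like, again in a small illustrative C fragment of my own rather than anything from the study:

    #include <stdio.h>

    int main(void) {
        printf("one\n");;    /* the second ';' is just an empty statement: accepted, no effect */
        ;                    /* so is a lone ';' on a line of its own */
        printf("two\n");
        return 0;
    }

The separator-based experimental language, in contrast, rejected such an extra semicolon outright, which is precisely what slanted the error counts against the separator convention.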
One should not be too harsh on a paper from 1975, when the very idea of systematic experimental studies of programming was novel, and some of its other results are worthy of consideration. But the sad terminator story, even though it affected only a syntax property, should serve as a reminder that we should not blindly accept a view just because someone invokes some empirical study to justify it. We should assess the study itself, its methods and its credibility.
5 Addressing the issues that matter
With this warning in mind, we should still expect empirical software engineering to help us practitioners by addressing important software engineering problems.
Ideally, I should now list the open issues of software engineering, but I am in no position even to start such a list. All I can do is to give a few examples. They may not be important to you, but they give an idea:
- What are the respective values of upfront design and refactoring? How best can we combine these approaches?
- Specification and testing are complementary techniques. Specifications are in principle superior to testing, but testing remains necessary. What combination of specification and testing works best?
- What is the best commit/release technique, and in particular should we use RTC (Review Then Commit, as with Apache originally and then at Google) or CTR (Commit Then Review, as with Apache later) [13]?
- What measure of code properties best correlates with effort? Many fancy metrics have appeared in the literature over the years, but there is still a nagging feeling among many of us that, for all its obvious limitations, the vulgar SLOC metric (Source Lines Of Code) remains the least bad.
- When can a manager decide to stop testing? We did some work on the topic [14], but it is only a start.
- Is test coverage a good measure of test quality [15] (spoiler: it is not, but again we need more studies)?
And so on. These examples may not be the cases that you consider most important; indeed what we need is input from many software engineers to help steer empirical software engineering towards the topics that truly matter to the community.
To provide a venue for that discussion, a workshop will take place on 10-12 September 2018 (provisional dates) in the Toulouse area, involving many of the leading figures in empirical software engineering, with the same title as this article: Empirical Answers to Important Software Engineering Questions. The key idea is to start not from the solutions side (the lamppost) but from the actual challenges facing software engineers. It will not just be a traditional publication-oriented meeting but will also include ample time for discussions and joint work.
If you would like to contribute your example “important questions”, please use any appropriate channel (responses to this blog, email to me, Facebook, LinkedIn, anything as long as one can find it). Suggestions will be taken into consideration for the workshop. Empirical software engineering has already established itself as a core area of research; it is time to feed that research with problems that actually matter to software developers, managers and users.
Acknowledgments
These reflections originated in a keynote that I gave at ESEM in Bolzano in 2010 (I am grateful to Barbara Russo and Giancarlo Succi for the invitation). I never wrote up the talk but I dug up the slides [8] since they might contain a few relevant observations. I used some of these ideas in a short panel statement at ESEC/FSE 2013 in Saint Petersburg, and I am grateful to Moshe Vardi for suggesting I should write them up for Communications of the ACM, which I never did.
References and notes
[1] Victor R. Basili: The role of experimentation in software engineering: past, present and future, in 18th ICSE (International Conference on Software Engineering), 1996, see here.
[2] Marvin V. Zelkowitz and Dolores Wallace: Experimental validation in software engineering, International Conference on Empirical Assessment and Evaluation in Software Engineering, March 1997, see here.
[3] Walter F. Tichy: Should computer scientists experiment more?, in IEEE Computer, vol. 31, no. 5, pages 32-40, May 1998, see here.
[4] And EiffelStudio, whose repository goes back to the early 90s and has provided a fertile ground for numerous empirical studies, some of which appear in my publication list.
[5] This compact sentence is how the Kelvin statement is usually abridged, but his thinking was more subtle.
[6] Raymond Lister: After the Gold Rush: Toward Sustainable Scholarship in Computing, in Proceedings of the 10th Australasian Computing Education Conference, pages 3-17, see here.
[7] Jeffrey D. Ullman: Experiments as research validation: have we gone too far?, in Communications of the ACM, vol. 58, no. 9, pages 37-39, 2015, see here.
[8] Bertrand Meyer, slides of a talk at ESEM (Empirical Software Engineering and Measurement), Bozen/Bolzano, 2010, available here. (Provided as background material only, they are not a paper but just slide support for a 45-minute talk, and from several years ago.)
[9] This matter is analyzed in more detail in section 26.5 of my book Object-Oriented Software Construction, 2nd edition, Prentice Hall. No offense to the memory of Jim Horning, a great computer scientist and a great colleague. Even great computer scientists can be wrong once in a while.
[10] I know this from the source: Jean Ichbiah, the original designer of Ada, told me explicitly that this was the reason for his choice of the terminator convention for semicolons, a significant decision since it was expected that the language syntax would be based on Pascal, a delimiter language.
[11] John D. Gannon and James J. Horning: Language Design for Programming Reliability, in IEEE Transactions on Software Engineering, vol. SE-1, no. 2, June 1975, pages 179-191, see here.
[12] This quirk of C and similar languages is not unlike the source of the Apple SSL/TLS bug discussed earlier in this blog under the title Code matters.
[13] Peter C. Rigby, Daniel M. German, Margaret-Anne Storey: Open Source Software Peer Review Practices: A Case Study of the Apache Server, in ICSE (International Conference on Software Engineering) 2008, pages 541-550, see here.
[14] Carlo A. Furia, Bertrand Meyer, Manuel Oriol, Andrey Tikhomirov and Yi Wei: The Search for the Laws of Automatic Random Testing, in Proceedings of the 28th ACM Symposium on Applied Computing (SAC 2013), Coimbra (Portugal), ACM Press, 2013, see here.
[15] Yi Wei, Bertrand Meyer and Manuel Oriol: Is Coverage a Good Measure of Testing Effectiveness?, in Empirical Software Engineering and Verification (LASER 2008-2010), eds. Bertrand Meyer and Martin Nordio, Lecture Notes in Computer Science 7007, Springer, February 2012, see here.