Archive for the ‘Testing’ Category.

The French School of Programming

July 14 (still here for 15 minutes) is not a bad opportunity to announce the publication of a new book: The French School of Programming.

The book is a collection of chapters, thirteen of them, by rock stars of programming and software engineering research (plus me), preceded by a Foreword by Jim Woodcock and a Preface by me. The chapters are all by a single author, reflecting the importance that the authors attached to the project. Split into four sections after chapter 1, the chapters are, in order:

1. The French School of Programming: A Personal View, by Gérard Berry (serving as a general presentation of the subsequent chapters).

Part I: Software Engineering

2. “Testing Can Be Formal Too”: 30 Years Later, by  Marie-Claude Gaudel

3. A Short Visit to Distributed Computing Where Simplicity Is Considered a First-Class Property, by Michel Raynal

4. Modeling: From CASE Tools to SLE and Machine Learning, by Jean-Marc Jézéquel

5. At the Confluence of Software Engineering and Human-Computer Interaction: A Personal Account,  by Joëlle Coutaz

Part II:  Programming Language Mechanisms and Type Systems

6. From Procedures, Objects, Actors, Components, Services, to Agents, by  Jean-Pierre Briot

7. Semantics and Syntax, Between Computer Science and Mathematics, by Pierre-Louis Curien

8. Some Remarks About Dependent Type Theory, by Thierry Coquand

Part III: Theory

9. A Personal Historical Perspective on Abstract Interpretation, by Patrick Cousot

10. Tracking Redexes in the Lambda Calculus, by  Jean-Jacques Lévy

11. Confluence of Terminating Rewriting Computations, by  Jean-Pierre Jouannaud

Part IV: Language Design and Programming Methodology

12. Programming with Union, Intersection, and Negation Types, by Giuseppe Castagna

13. Right and Wrong: Ten Choices in Language Design, by Bertrand Meyer

What is the “French School of Programming”? As discussed in the Preface (although Jim Woodcock’s Foreword does not entirely agree) it is not anything defined in a formal sense, as the variety of approaches covered in the book amply demonstrates. What could be more different (for example) than Coq, OCaml (extensively referenced by several chapters) and Eiffel? Beyond the differences, however, there is a certain je ne sais quoi of commonality; to some extent, in fact, je sais quoi: reliance on mathematical principles, a constant quest for simplicity, a taste for elegance. It will be for the readers to judge.

Being single authors of their chapters, the authors felt free to share some of their deepest insights and thoughts. See for example Thierry Coquand’s discussion of the concepts that led to the widely successful Coq proof system, Marie-Claude Gaudel’s new look at her seminal testing work of 30 years ago, and Patrick Cousot’s detailed recounting of the intellectual path that led him and Radhia to invent abstract interpretation.


The French School of Programming
Edited by Bertrand Meyer
Springer, 2024. xxiv + 439 pages

Book page on Springer site
Amazon US page
Amazon France page
Amazon Germany page

The book is expensive (I tried hard to do something about it, and failed). But many readers should be able to download it, or individual chapters, for free through their institutions.

It was a privilege for me to take this project to completion and work with such extraordinary authors who produced such a collection of gems.


New paper: optimization of test cases generated from failed proofs

Li Huang (PhD student at SIT) will be presenting at an ISSRE workshop the paper Improving Counterexample Quality from Failed Program Verification, written with Manuel Oriol and me. One can find the text on arXiv here. (I will update this reference with the official publication link when I have it.)

The result being presented is part of a more general effort at combining proofs and tests (with other papers in the pipeline). The idea of treating proofs and tests as complementary rather than competing methods of software verification is an old pursuit of mine (which among other consequences resulted in the creation with Yuri Gurevich of the Tests and Proofs conference, which I see is continuing to run). A particular observation is that failure means a different thing for proofs and tests.

A failed test provides interesting information (in fact it is a successful proof — of incorrectness). A successful proof is, of course, also interesting (in principle it should be the end of the story), whereas a successful test tells us very little. But in the practice of program proving the common occurrence is failure to prove a program element correct. You are typically left with no clue as to the source of the failure. In the AutoProof verification system for Eiffel, we are able to rely on the underlying technology (Boogie and Z3) to extract a counterexample which gives concrete evidence: as with a failed test, a programmer can in general quickly understand what is wrong.

In other words, the useless negative result (the failed proof attempt) can be turned into a useful one: a counterexample that, like a failed test, points the programmer to the bug.

The general approach is the subject of another article; this one focuses on producing counterexamples that are actually meaningful to the programmer. A counterexample involving huge, arbitrary-looking integer values is hard to relate to; hence the need for a process of minimization, described in the article. The results on our examples are encouraging, making it possible to evidence the bug on very small integer values.
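The minimization procedure itself is described in the paper; purely as an illustration of the general idea (and not of the paper’s actual algorithm or tool interface; the names below are invented for this sketch), here is what shrinking an integer counterexample can look like: keep trying values closer to zero as long as they still trigger the failure.

# Illustration only: a generic integer-shrinking loop in the spirit of
# counterexample minimization. Names are invented for this sketch; this is
# not the interface of the tool described in the paper.

def shrink_int(value, still_fails):
    # Return an integer closer to 0 for which still_fails remains true.
    current = value
    changed = True
    while changed:
        changed = False
        step_down = current - 1 if current > 0 else current + 1
        for candidate in (0, current // 2, step_down):
            if abs(candidate) < abs(current) and still_fails(candidate):
                current = candidate
                changed = True
                break
    return current

# Example: a contrived failure that occurs for any value >= 3.
print(shrink_int(1_000_003, lambda v: v >= 3))   # prints 3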

Reference

Li Huang, Bertrand Meyer and Manuel Oriol: Improving Counterexample Quality from Failed Program Verification, 6th International Workshop on Software Faults, October 2022. Preprint available on arXiv here. The workshop program is available here; the presentation is on Monday, 31 October, 15:55 CET (7:55 AM Los Angeles, 10:55 New York).

 


New book: the Requirements Handbook


I am happy to announce the publication of the Handbook of Requirements and Business Analysis (Springer, 2022).

It is the result of many years of thinking about requirements and how to do them right, taking advantage of modern principles of software engineering. While programming, languages, design techniques, process models and other software engineering disciplines have progressed considerably, requirements engineering remains the sick cousin. With this book I am trying to help close the gap.

The Handbook introduces a comprehensive view of requirements including four elements or PEGS: Project, Environment, Goals and System. One of its principal contributions is the definition of a standard plan for requirements documents, consisting of the four corresponding books and replacing the obsolete IEEE 1998 structure.

The text covers both classical requirements techniques and novel topics such as object-oriented requirements and the use of formal methods.

The successive chapters address: fundamental concepts and definitions; requirements principles; the Standard Plan for requirements; how to write good requirements; how to gather requirements; scenario techniques (use cases, user stories); object-oriented requirements; how to take advantage of formal methods; abstract data types; and the place of requirements in the software lifecycle.

The Handbook is suitable both as a practical guide for industry and as a textbook, with over 50 exercises and supplementary material available from the book’s site.

You can find here a book page with the preface and sample chapters.

To purchase the book, see the book page at Springer and the book page at Amazon US.


PhD and postdoc positions in verification in Switzerland

The Chair of Software Engineering, my group at the Schaffhausen Institute of Technology in Switzerland (SIT), has open positions for both PhD students and postdocs. We are looking for candidates with a passion for reliable software and a mix of theoretical knowledge and practical experience in software engineering. Candidates should have degrees in computer science or related fields: a doctorate for postdoc positions, a master’s degree for PhD positions. Postdoc candidates should have a substantial publication record. Experience is expected in one or more of the following fields:

  • Software verification (axiomatic, model-checking, abstract interpretation etc.).
  • Advanced techniques of software testing.
  • Formal methods, semantics of programming languages.
  • Concurrent programming.
  • Design by Contract, Eiffel, techniques of correctness-by-construction.

Some of the work involves the AutoProof framework, under development at SIT (earlier at ETH), although other topics are also available, particularly in static analysis.

Compensation is attractive. Candidates must have the credentials to work in Switzerland (typically, citizenship or residence in Switzerland or the EU). Although we work in part remotely like everyone else these days, the positions are residential.

Interested candidates should send a CV and relevant documents or links (and any questions) to bm@sit.org.


Time to resurrect PSP?

Let us assume for the sake of the argument that software quality matters. There are many ingredients to software quality, of which one must be the care that every programmer devotes to the job. The Personal Software Process, developed by Watts Humphrey in the 1990s [1], prescribes a discipline that software developers should apply to produce good software and improve their professional ability over their careers. It has enjoyed moderate success but was never a mass movement and rarely gets mentioned nowadays; few software developers, in my experience, even know the name. Those who do often think of it as passé, a touching memory from the era of Monica Lewinsky and the Roseanne show.

Once cleaned of a few obsolete elements, PSP deserves to be known and applied.

PSP came out of Watts Humphrey’s earlier work on the Capability Maturity Model (see my earlier article on this blog, What is wrong with CMMI), a collection of recommended practices and assessment criteria for software processes, originally developed in the mid-eighties for the U.S. military contractor community but soon thereafter embraced by software outsourcing companies (initially, Indian ones) and later by other industries. Responding to complaints that CMM/CMMI, focused on processes in large companies, ignored the needs of smaller ones, and lacked individual guidance for developers, Humphrey developed TSP, the Team Software Process, and PSP.

The most visible part of PSP is a six-step process pictured in the middle of this diagram:
[Figure: the six-step PSP process, with its supporting logs]

The most visible and also the most corny. Who today wants to promise always to follow such a strict sequence of steps? Always to write the code for a module in full before compiling it? (Notice there is no backward arrow, the process is sequential.) Always to test at the end only? Come on. This is the third decade of the 21st century.

Today we compile as we code, using the development environment (IDE) as a brilliant tool to check everything we do or plan to do. For my part, whenever I am writing code and have not compiled my current draft for more than a few minutes I start feeling like an addict in need of a fix; my fix is the Compile button of EiffelStudio. At some eventual stage the compiler becomes a tool to generate executable code, but long before that it has been my friend, coach, mentor, and doppelgänger, helping me get things (types, null references, inheritance…) right and gently chiding me when I wander off the rails.

As to tests, even if you do not buy into the full dogma of Test-Driven Development (I don’t), they get written and exercised right from the start, as you are writing the code, not afterwards. Compile all the time, test all the time.

It’s not just that a process such as the above ignores the contributions of agile methods, which are largely posterior to PSP. As analyzed in [2], agile is a curious mix of good ideas and a few horrendous ones. But among its durable contributions is the realization that development must be incremental, not a strict succession of separate activities.

This old-style flavor of PSP is probably the reason why it has fallen out of favor. But (like the agile rejection of upfront lifecycle activities) such a reaction is a case of criticism gone too far, ignoring the truly beneficial contributions. Ignore PSP’s outmoded sequence of activities and you will find that PSP’s core message is as relevant today as it ever was. That message is: we should learn from the practices of traditional engineers and apply a strict professional discipline. For example:

  • Keep a log of all activities. (See “Logs” in the above figure.) Engineers are taught to record everything they do; many programmers don’t bother. This practice, however, is essential to self-improvement.
  • Keep measurements of everything you do. (There are lots of things to measure, from hours spent on every kind of task to bugs found, time to fix them etc.)
  • Estimate and plan your work.
  • Clearly define commitments, and meet them.
  • Resist pressure to make unreasonable commitments (something that agile approaches also emphasize).
  • Understand your current performance.
  • Understand your programming style and how it affects various measures. (As an example, code size, as a function of the number of routines, depends on whether you are more concise or more verbose in style).
  • Continually improve your expertise as a professional.

PSP does not limit itself to such exhortations but gives concrete tools to apply the principles, with a view to: measuring, tracking and analyzing your work; learning from your performance variations; and incorporating the lessons learned into your professional practices. On the topic of measurement, for example, PSP includes precise guidelines on what to measure and how to measure it, and how to rely on proxies for quantities that are hard to assess directly. On this last point, PSP includes PROBE (PROxy-Based Estimating, you cannot have a method coming out of the world of US government organizations without cringeworthy acronyms), a general framework for estimating size and resource parameters from directly measurable proxies.
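As a flavor of the proxy idea only (PROBE’s actual procedure, with its size categories and statistical checks, is spelled out in Humphrey’s texts; the code below is my own bare-bones simplification with made-up numbers), one can fit a linear relation between a proxy, such as estimated size, and the quantity to estimate, such as effort, from one’s own historical log:

# A deliberately simplified sketch of proxy-based estimation: fit
# effort = b0 + b1 * size on historical data, then predict.
# (Not the PSP-defined PROBE procedure; the numbers are made up.)

def linear_fit(xs, ys):
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
            / sum((x - mean_x) ** 2 for x in xs)
    intercept = mean_y - slope * mean_x
    return intercept, slope

past_sizes = [120, 300, 250, 600, 450]   # proxy: estimated lines of code
past_hours = [10, 22, 19, 45, 33]        # actual effort recorded in the log

b0, b1 = linear_fit(past_sizes, past_hours)
print(f"predicted effort for a 400-line task: {b0 + b1 * 400:.1f} hours")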

This is what PSP is about: a discipline of personal productivity and growth, emphasizing personal discipline, tracking and constant improvement. It is not hard to learn; a technical report by Humphrey available online [3] provides a sufficient basis to understand the concepts and start a process of self-improvement.

Watts Humphrey himself, as all who had the privilege to meet him can testify, was a model of seriousness and professionalism, the quintessential engineer. (I also remember him as the author of what may be the best pun I ever heard — ask me sometime.) PSP directly reflects these qualities and — ignoring its visible but in the end unimportant remnants from outdated technical choices — should be part of every software engineering curriculum and every software engineer’s collection of fundamental practices.

References

[1] Watts Humphrey, Introduction to the Personal Software Process, Addison-Wesley, 1996.

[2] Bertrand Meyer: Agile! The Good, the Hype and the Ugly, Springer, 2014, see here.

[3] Watts Humphrey, The Personal Software Process, Software Engineering Institute Technical Report CMU/SEI-2000-TR-022, available (in PDF, free) here.

 

A version of this article was first published in the Communications of the ACM blog.



PhD and postdoc positions in verification in Switzerland

My group, the Chair of Software Engineering, at the newly created Schaffhausen Institute of Technology has open positions for both PhD students and postdocs. We are looking for candidates with a passion for reliable software and a mix of theoretical knowledge and practical experience in software engineering. Candidates should have degrees in computer science or related fields: a doctorate for postdoc positions, a master’s degree for PhD positions. Postdoc candidates should have a substantial publication record. Experience in one or more of the following fields is a plus:

  • Software verification (axiomatic, model-checking, abstract interpretation etc.).
  • Advanced techniques of software testing.
  • Formal methods, semantics of programming languages, type theory.
  • Design by Contract, Eiffel, techniques of correctness-by-construction.
  • Cybersecurity.

 Compensation at both levels is attractive. The PhD program is conducted in cooperation with partner universities. 

 Interested candidates should send a CV and relevant documents or links to bm@sit.org. They are also welcome to contact me for details.


Getting a program right, in nine episodes

About this article: it originated as a series of posts on the Communications of the ACM blog. I normally repost such articles here. (Even though copy-paste is usually not good, there are three reasons for this duplication: the readership seems to be largely disjoint; I can use better formatting, since their blog software is more restrictive than WordPress; and it is good to have a single repository for all my articles, including both those that originated on CACM and those that did not.) The series took the form of nine articles, where each of the first few ended with a quiz, to which the next one, published a couple of days later, provided an answer. Since all these answers are now available it would make no sense to use the same scheme, so I am instead publishing the whole thing as a single article with nine sections, slightly adapted from the original.

I was too lazy so far to collect all the references into a single list, so numbers such as [1] refer to the list at the end of the corresponding section.


A colleague recently asked me to present a short overview of  axiomatic semantics as a guest lecture in one of his courses. I have been teaching courses on software verification for a long time (see e.g. here), so I have plenty of material; but instead of just reusing it, I decided to spend a bit of time on explaining why it is good to have a systematic approach to software verification. Here is the resulting tutorial.


 

1. Introduction and attempt #1

Say “software verification” to software professionals, or computer science students outside of a few elite departments, and most of them will think  “testing”. In a job interview, for example, show a loop-based algorithm to a programmer and ask “how would you verify it?”: most will start talking about devising clever test cases.

Far be it from me to berate testing [1]; in fact, I have always thought that the inevitable Dijkstra quote about testing — that it can only show the presence of errors, not their absence [2] — which everyone seems to take as an indictment and dismissal of testing (and which its author probably intended that way) is actually a fantastic advertisement for testing: a way to find bugs? Yes! Great! Where do I get it? But that is not the same as verifying the software, which means attempting to ascertain that it has no bugs.

Until listeners realize that verification cannot just mean testing, the best course material on axiomatic semantics or other proof techniques will not attract any interest. In fact, there is somewhere a video of a talk by the great testing and public-speaking guru James Whittaker where he starts by telling his audience not to worry, this won’t be a standard boring lecture, he will not start talking about loop invariants [3]! (Loop invariants are coming in this article, in fact they are one of its central concepts, but in later sections only, so don’t bring the sleeping bags yet.) I decided to start my lecture by giving an example of what happens when you do not use proper verification. More than one example, in fact, as you will see.

A warning about this article: there is nothing new here. I am using an example from my 1990 book Introduction to the Theory of Programming Languages (exercise 9.12). Going even further back, a 1983 “Programming Pearls” Communications of the ACM article by Jon Bentley [4] addresses the same example with the same basic ideas. Yet almost forty years later these ideas are still not widely known among practitioners. So consider these articles as yet another tutorial on fundamental software engineering stuff.

The tutorial is a quiz. We start with a program text:

from

i := 1 ; j := n              — Result initialized to 0.

until i = j loop

m := (i + j) // 2         — Integer division

if t [m] ≤ x then i := m  else  j := m end

end

if x = t [i] then Result := i end

All variables are of integer type. t is an up-sorted array of integers, indexed from 1 to n . We do not let any notation get between friends. A loop from p until e loop q end executes p then, repeatedly: stops if e (the exit condition) is true, otherwise executes q. (Like {p ; while not e do {q}} in some other notations.) “:=” is assignment, “=” equality testing.  “//” is integer division, e.g. 6 //3 = 7 //3 = 2. Result is the name of a special variable whose final value will be returned by this computation (as part of a function, but we only look at the body). Result is automatically initialized to zero like all integer variables, so if execution does not assign anything to Result the function will return zero.
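If you would like to execute the attempts while following the quiz, here is my transcription of attempt #1 into Python (the pseudocode above remains the reference). The helper keeps the 1-based indexing of the text, and a step budget ensures that a run always comes back with an answer of some kind.

# My Python transcription of attempt #1, for experimentation only.
# t is treated as indexed from 1 to n, as in the pseudocode above.

def attempt_1(t, n, x, max_steps=1000):
    def elem(k):
        assert 1 <= k <= n, f"out-of-bounds access t [{k}]"
        return t[k - 1]
    Result = 0                       # Result initialized to 0, as in the text
    i, j = 1, n
    steps = 0
    while i != j:
        steps += 1
        if steps > max_steps:
            return None              # gave up: the loop shows no sign of terminating
        m = (i + j) // 2             # integer division
        if elem(m) <= x:
            i = m
        else:
            j = m
    if x == elem(i):
        Result = i
    return Result

print(attempt_1([5], 1, 5))          # prints 1: x found at position 1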

First question: what is this program trying to do?

OK, this is not the real quiz. I assume you know the answer: it is an attempt at “binary search”, which finds an element in the array, or determines its absence, in a sequence of about log₂ (n) steps, rather than n if we were to use sequential search. (Remember we assume the array is sorted.) Result should give us a position where x appears in the array, if it does, and otherwise be zero.

Now for the real quiz: does this program meet this goal?

The answer should be either yes or no. (If no, I am not asking for a correct version, at least not yet, and in any case you can find some in the literature.) The situation is very non-symmetric, we might say Popperian:

  • To justify a no answer a single example suffices: a particular array t and a particular value x for which the program fails to set Result as it should.
  • To justify a yes answer we need to provide a credible argument that for every t and  x the program sets Result as it should.

Notes to section 1

[1] The TAP conference series (Tests And Proofs), which Yuri Gurevich and I started, explores the complementarity between the two approaches.

[2] Dijkstra first published his observation in 1969. He did not need to consider the case of infinite input sets: even for a trivial finite program that multiplies two 32-bit integers, the number of cases to be examined, 2^64, is beyond human reach. More so today with 64-bit integers. Looking at this from a 2020 perspective, we may note that exhaustive testing of a finite set of cases, which Dijkstra dismissed as impossible in practice, is in fact exactly what the respected model checking verification technique does; not on the original program, but on a simplified — abstracted — version precisely designed to keep the number of cases tractable. Dijkstra’s argument remains valid, of course, for the original program if non-trivial. And model-checking does not get us out of the woods: while we are safe if its “testing” finds no bug, if it does find one we have to ensure that the bug is a property of the original program rather than an artifact of the abstraction process.

[3] It is somewhere on YouTube, although I cannot find it right now.

[4] Jon Bentley: Programming Pearls: Writing Correct Programs, in Communications of the ACM, vol. 26, no. 12, pp. 1040-1045, December 1983, available for example here.


2. Attempt #2

Was program #1 correct? If so it should yield the correct answer. (An answer is correct if either Result is the index in t of an element equal to x, or Result = 0 and x does not appear in t.)

This program is not correct. To prove that it is not correct, a single example (test case) for which the program does not “yield the correct answer” suffices. Assume x = 1 and the array t has two elements both equal to zero (n = 2, remember that arrays are indexed from 1):

t = [0   0]

The successive values of the variables and expressions are:

                                       m        i        j        i + j

After initialization:                           1        2        3

i ≠ j, so enter loop:                  1        1        2        3        — First branch of “if” since t [1] ≤ x,
                                                                             so i gets assigned the value of m

But then neither of the values of i and j has changed, so the loop will repeat its body identically (taking the first branch) forever. It is not even that the program yields an incorrect answer: it does not yield an answer at all!

Note (in reference to the famous Dijkstra quote mentioned in the first article), that while it is common to pit tests against proofs, a test can actually be a proof: a test that fails is a proof that the program is incorrect. As valid as the most complex mathematical proof. It may not be the kind of proof we like most (our customers tend to prefer a guarantee that the program is correct), but it is a proof all right.

We are now ready for the second attempt:

—  Program attempt #2.

from

i := 1 ; j := n

until i = j or Result > 0  loop

m := (i + j) // 2         — Integer division

if t [m] ≤ x then

i := m  + 1

elseif t [m] = x then

Result := m

else                         — In this case t [m] > x

j := m – 1

end

end

Unlike the previous one this version always changes i or j, so we may hope it does not loop forever. It has a nice symmetry between i and j.

Same question as before: does this program meet its goal?


3. Attempt #3

The question about program #2 was the same as about program #1: is it right?

Again no.  A trivial example disproves it: n = 1, the array t contains a single element t [1] = 0, x = 0. Then the initialization sets both i and j to 1, i = j holds on entry to the loop which stops immediately, but Result is zero whereas it should be 1 (the place where x appears).

Here now is attempt #3, let us see if it fares better:

—  Program attempt #3.

from

i := 1 ; j := n

until i = j loop

m := (i + j + 1) // 2

if t [m] ≤ x then

i := m  + 1

else

j := m

end

end

if 1  ≤ i  and i ≤ n then Result := i end
       — If not, Result remains 0.

What about this one?


4. Attempt #4 (also includes 3′)

The first two program attempts were wrong. What about the third?

I know, you have every right to be upset at me, but the answer is no once more.

Consider a two-element array t = [0 0] (so n = 2, remember that our arrays are indexed from 1 by convention) and a search value x = 1. The successive values of the variables and expressions are:

                                       m        i        j        i + j + 1

After initialization:                           1        2        4

i ≠ j, so enter loop:                  2        3        2        6        — First branch of “if” since t [2] ≤ x

i ≠ j, enter loop again:               3        ⚠                          — Out-of-bounds memory access!
                                                                             (trying to access non-existent t [3])

Oops!

Note that we could hope to get rid of the array overflow by initializing i to 0 rather than 1. This variant (version #3′) is left as a bonus question to the patient reader. (Hint: it is also not correct. Find a counter-example.)
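To replay the run above, or to hunt for the #3′ counter-example yourself, here is my Python transcription of attempt #3; it prints m together with the values of i and j at the start of each iteration (change the initialization of i to 0 to obtain version #3′).

# My Python transcription of attempt #3, printing a trace as it goes.
# For version #3', start i at 0 instead of 1.

def attempt_3(t, n, x, max_steps=20):
    def elem(k):
        assert 1 <= k <= n, f"out-of-bounds access t [{k}]"
        return t[k - 1]
    Result = 0
    i, j = 1, n
    for _ in range(max_steps):
        if i == j:
            break
        m = (i + j + 1) // 2
        print(f"m = {m}   i = {i}   j = {j}")
        if elem(m) <= x:
            i = m + 1
        else:
            j = m
    else:
        print("step budget exhausted: probable non-termination")
    if 1 <= i <= n:
        Result = i
    return Result

print(attempt_3([0, 0], 2, 1))   # triggers the out-of-bounds access shown above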

OK, this has to end at some point. What about the following version (#4): is it right?

—  Program attempt #4.

from

i := 0 ; j := n + 1

until i = j loop

m := (i + j) // 2

if t [m] ≤ x then

i := m  + 1

else

j := m

end

end

if 1 ≤ i  and i ≤ n then Result := i end


5. Attempt #5

Yes, I know, this is dragging on. But that’s part of the idea: witnessing how hard it is to get a program right if you are just judging by the seat of your pants. Maybe we can get it right this time?

Are we there yet? Is program attempt #4 finally correct?

Sorry to disappoint, but no. Consider again a two-element array t = [0 0], so n = 2, but this time a search value x = 0, which does appear in the array (the trace below is the same for x = 1, but then the final result happens to be correct). The successive values of the variables and expressions are:

                                       m        i        j        i + j

After initialization:                           0        3        3

i ≠ j, so enter loop:                  1        2        3        5        — First branch of “if”

i ≠ j, enter loop again:               2        3        3        6        — First branch again

i = j, exit loop

On exit i = 3, so the condition of the final “if” (1 ≤ i and i ≤ n) does not hold and Result keeps its value zero. This is quite wrong: x does appear in t (at positions 1 and 2), so Result should be one of those positions.

But we are so close! Something like this should work, should it not?

So patience, patience, let us tweak it just one trifle more, OK?

—  Program attempt #5.

from

i := 1 ; j := n + 1

until i ≥ j or Result > 0 loop

m := (i + j) // 2

if t [m] < x then

i := m + 1

elseif  t [m] > x then

j := m

else

Result := m

end

end

Does it work now?


6. Attempt #6

The question about program #5  was the same as before: is it right, is it wrong?

Well, I know you are growing more upset at me with each section, but the answer is still that this program is wrong. But the way it is wrong is somewhat specific; and it applies, in fact, to all previous variants as well.

This particular wrongness (fancy word for “bug”) has a history. As I pointed out in the first article, there is a long tradition of using binary search to illustrate software correctness issues. A number of versions were published and proved correct, including one in the justly admired Programming Pearls series by Jon Bentley. Then in 2006 Joshua Bloch, then at Google, published a now legendary blog article [2] which showed that all these versions suffered from a major flaw: to obtain m, the approximate mid-point between i and j, they compute

(i + j) // 2

which, working on computer integers rather than mathematical integers, might overflow! This in a situation in which both i and j, and hence m as well, are well within the range of the computer’s representable integers, −2^n to 2^n (give or take 1) where n is typically 31 or, these days, 63, so that there is no conceptual justification for the overflow.

In the specification that I have used for this article, i starts at 1, so the problem will only arise for an array that occupies half of the memory or more, which is a rather extreme case (but still should be handled properly). In the general case, it is often useful to use arrays with arbitrary bounds (as in Eiffel), so we can have even a small array, with high indices, for which the computation will produce an overflow and bad results.

The Bloch gotcha is a stark reminder that in considering the correctness of programs we must include all relevant aspects and consider programs as they are executed on a real computer, not as we wish they were executed in an ideal model world.

(Note that Jon Bentley alluded to this requirement in his original article: while he did not explicitly mention integer overflow, he felt it necessary to complement his proof by the comment that “As laborious as our proof of binary search was, it is still unfinished by some standards. How would you prove that the program is free of runtime errors (such as division by zero, word overflow, or array indices out of bounds)?” Prescient words!)

It is easy to correct the potential arithmetic overflow bug: instead of (i + j) // 2, Bloch suggested we compute the average as

i + (j – i) // 2

which is the same from a mathematician’s viewpoint, and indeed will compute the same value if both variants compute one, but will not overflow if both i and j are within range.
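As a concrete illustration (mine; Python integers never overflow, so 32-bit two’s-complement wrap-around is simulated explicitly below), here is the difference between the two midpoint computations on values that are individually well within range:

# Simulate 32-bit signed arithmetic to exhibit the overflow (Python's own
# integers are unbounded, hence the explicit wrap-around).

def to_int32(v):
    v &= 0xFFFFFFFF
    return v - 0x100000000 if v >= 0x80000000 else v

def mid_naive(i, j):             # (i + j) // 2, with 32-bit wrap-around
    return to_int32(to_int32(i + j) // 2)

def mid_safe(i, j):              # i + (j - i) // 2, the corrected form
    return to_int32(i + to_int32(j - i) // 2)

i, j = 1_500_000_000, 2_000_000_000     # both representable as 32-bit integers
print(mid_naive(i, j))           # -397483648: the sum overflowed, negative garbage
print(mid_safe(i, j))            # 1750000000: the intended midpoint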

So we are ready for version 6, which is the same as version 5 save for that single change:

—  Program attempt #6.

from

i := 1 ; j := n + 1

until i ≥ j or Result > 0 loop

m := i + (j – i) // 2

if t [m] < x then

i := m + 1

elseif  t [m] > x then

j := m

else

Result := m

end

end

Now is probably the right time to recall the words by which Donald Knuth introduces binary search in the original 1973 tome on Sorting and Searching of his seminal book series The Art of Computer Programming:

Although the basic idea of binary search is comparatively straightforward, the details can be surprisingly tricky, and many good programmers have done it wrong the first few times they tried.

Do you need more convincing? Be careful what you answer, I have more variants up my sleeve and can come up with many more almost-right-but-actually-wrong program attempts if you nudge me. But OK, even the best things have an end. This is not the last section yet, but that was the last program attempt. To the naturally following next question in this running quiz,  “is version 6 right or wrong”, I can provide the answer: it is, to the best of my knowledge, a correct program. Yes! [3].
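For empirical reassurance, which (as the next section will stress) is not an argument, here is my Python transcription of attempt #6 together with a brute-force check of the specification on all small sorted arrays:

# My Python transcription of attempt #6, plus an exhaustive check on small
# cases. Passing these tests is reassurance, not a proof.

from itertools import combinations_with_replacement

def attempt_6(t, n, x):
    def elem(k):
        return t[k - 1]              # 1-based access
    Result = 0
    i, j = 1, n + 1
    while not (i >= j or Result > 0):
        m = i + (j - i) // 2
        if elem(m) < x:
            i = m + 1
        elif elem(m) > x:
            j = m
        else:
            Result = m
    return Result

def meets_spec(t, n, x, r):
    if r == 0:
        return x not in t            # absent: zero is the right answer
    return 1 <= r <= n and t[r - 1] == x

for n in range(0, 5):
    for t in combinations_with_replacement(range(4), n):   # all small sorted arrays
        for x in range(-1, 5):
            assert meets_spec(t, n, x, attempt_6(t, n, x))
print("all small cases pass")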

But the quiz continues. Since answers to the previous questions were all  that the programs were not correct, it sufficed in each case to find one case for which the program did not behave as expected. Our next question is of a different nature: can you find an argument why version #6 is correct?

References for section 6

[1] (In particular) Jon Bentley: Programming Pearls — Writing Correct Programs, in Communications of the ACM, vol. 26, no. 12, December 1983, pages 1040-1045, available here.

[2] Joshua Bloch: Extra, Extra — Read All About It: Nearly All Binary Searches and Mergesorts are Broken, blog post, on the Google AI Blog, 2 June 2006, available here.

[3] A caveat: the program is correct barring any typos or copy-paste errors — I am starting from rigorously verified programs (see the next posts), but the blogging system’s UI and text processing facilities are not the best possible for entering precise technical text such as code. However carefully I check, I cannot rule out a clerical mistake, which of course would be corrected as soon as it is identified.


7. Using a program prover

Preceding sections presented candidate binary search algorithms and asked whether they are correct. “Correct” means something quite precise: that for an array t and a value x, the final value of the variable Result is a valid index of t (that is to say, is between 1 and n, the size of t) if and only if x appears at that index in t.

The last section boldly stated that program attempt #6 was correct. The question was: why?

In the case of the preceding versions, which were incorrect, you could prove that incorrectness, and I do mean prove, simply by exhibiting a single counter-example: a single t and x for which the program does not correctly set Result. Now that I am asserting the program to be correct, one example, or a million examples, do not suffice. In fact they are almost irrelevant. Test as much as you like and get correct results every time, you cannot get rid of the gnawing fear that if you had just tested one more time after the millionth test you would have produced a failure. Since the set of possible tests is infinite there is no solution in sight [1].

We need a proof.

I am going to explain that proof in the next section, but before that I would like to give you an opportunity to look at the proof by yourself. I wrote in one of the earlier articles that most of what I have to say was already present in Jon Bentley’s 1983 Programming Pearls contribution [2], but a dramatic change did occur in the four decades since: the appearance of automated proof systems that can handle significant, realistic programs. One such system, AutoProof, was developed at the Chair of Software Engineering at ETH Zurich [3] (key project members were Carlo Furia, Martin Nordio, Nadia Polikarpova and Julian Tschannen, with initial contributions by Bernd Schoeller) on the basis of the Boogie proof technology from Microsoft Research.

AutoProof is available for online use, and it turns out that one of the basic tutorial examples is binary search. You can go to the corresponding page and run the proof.

I am going to let you try this out (and, if you are curious, other online AutoProof examples as well) without too many explanations; those will come in the next section. Let me simply name the basic proof technique: loop invariant. A loop invariant is a property INV associated with a loop, such that:

  • A. After the loop’s initialization, INV will hold.
  • B. One execution of the loop’s body, if started with INV satisfied (and the loop’s exit condition not satisfied, otherwise we wouldn’t be executing the body!), satisfies INV again when it terminates.

This idea is of course the same as that of a proof by induction in mathematics: the initialization corresponds to the base step (proving that P (0) holds) and the body property to the induction step (proving that from P (n) follows P (n + 1)). With a traditional induction proof we deduce that the property (P (n)) holds for all integers. For the loop, we deduce that when the loop finishes its execution:

  • The invariant still holds, since executing the loop means executing the initialization once then the loop body zero or more times.
  • And of course the exit condition also holds, since otherwise we would still be looping.

That is how we prove the correctness of a loop: the conjunction of the invariant and the exit condition must yield the property that we seek (in the example, the property, stated above of Result relative to t and x).
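In the Hoare-triple notation of axiomatic semantics (the topic that prompted this tutorial), the reasoning can be summarized as follows, writing Pre for the precondition and Post for the property we seek; this is the standard presentation of the loop rule, not anything specific to AutoProof:

{Pre}  initialization  {INV}                       (property A: the initialization establishes the invariant)

{INV and not exit}  body  {INV}                    (property B: the body preserves the invariant)

(INV and exit)  ⇒  Post                            (on termination, invariant plus exit condition give the goal)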

We also need to prove that the loop does terminate. This part involves another concept, the loop’s variant, which I will explain in the next section.

For the moment I will not say anything more and let you look at the AutoProof example page (again, you will find it here), run the verification, and read the invariant and other formal elements in the code.

To “run the verification” just click the Verify button on the page. Let me emphasize (and emphasize again and again and again) that clicking Verify will not run the code. There is no execution engine in AutoProof, and the verification does not use any test cases. It processes the text of the program as it appears on the page (and is reproduced below). It applies mathematical techniques to perform the proof; the core property to be proved is that the proposed loop invariant is indeed invariant (i.e. satisfies properties A and B above).

The program being proved on the AutoProof example page is version #6 from the last section, with different variable names. So far for brevity I have used short names such as i, j and m but the program on the AutoProof site applies good naming practices with variables called low, up, middle and the like. So here is that version again with the new variable names:

—  Program attempt #7  (identical to #6 with different variable names) .

from

low := 0 ; up := n

until low ≥ up or Result > 0 loop

middle := low + ((up – low) // 2)

if a [middle] < value then      — The array is now called a rather than t

low := middle + 1

elseif  a [middle] > value then

up := middle

else

Result := middle

end

end

This is exactly the algorithm text on the AutoProof page, the one that you are invited to let AutoProof verify for you. I wrote “algorithm text” rather than “program text” because the actual program text (in Eiffel) includes variant and invariant clauses which do not affect the program’s execution but make the proof possible.

Whether or not these concepts (invariant, variant, program proof) are completely new to you, do try the prover and take a look at the proof-supporting clauses. In the next article I will remove any remaining mystery.

Note and references for section 7

[1] Technically the set of possible [array, value] pairs is finite, but of a size defying human abilities. As I pointed out in the first section, the “model checking” and “abstract interpretation” verification techniques actually attempt to perform an exhaustive test anyway, after drastically reducing the size of the search space. That will be for some other article.

[2]  Jon Bentley: Programming Pearls: Writing Correct Programs, in Communications of the ACM, vol. 26, no. 12, pp. 1040-1045, December 1983, available for example here.

[3] The AutoProof page contains documentation and numerous article references.


8. Understanding the proof

The previous section invited you to run the verification on the AutoProof tutorial page dedicated to the example. AutoProof is an automated proof system for programs. This is just a matter of clicking  “Verify”, but more importantly, you should read the annotations added to the program text, particularly the loop invariant, which make the verification possible. (To avoid any confusion let me emphasize once more that clicking “Verify” does not run the program, and that no test cases are used; the effect is to run the verifier, which attempts to prove the correctness of the program by working solely on the program text.)

Here is the program text again, reverting for brevity to the shorter identifiers (the version on the AutoProof page has more expressive ones):

from

i := 1 ; j := n + 1

until i ≥ j or Result > 0 loop

m := i + (j – i) // 2

if t [m] < x then

i := m + 1

elseif  t [m] > x then

j := m

else

Result := m

end

end

Let us now see what makes the proof possible. The key property is the loop invariant, which reads

A:   1  ≤ i  ≤ j  ≤ n + 1
B:   0  ≤ Result  ≤ n
C:   ∀ k: 1 .. i –1  |  t [k] < x
D:   ∀ k: j .. n  |  t [k] > x
E:    (Result > 0)   ⇒   (t [Result] = x)

The notation is slightly different on the Web page to adapt to the Eiffel language as it existed at the time it was produced; in today’s Eiffel you can write the invariant almost as shown above. Long live Unicode, allowing us to use symbols such as ∀ (obtained not by typing them but by using smart completion, e.g. you start typing “forall” and you can select the symbol that pops up), ⇒ for “implies”, and many others.

Remember that the invariant has to be established by the loop’s initialization and preserved by every iteration. The role of each of its clauses is as follows:

  • A: keep the indices in range.
  • B: keep the variable Result, whose final value will be returned by the function, in range.
  • C and D: eliminate index intervals in which we have determined that the sought value, x, does not appear. Before i, array values are smaller; starting at j, they are greater. So these two intervals, 1 .. i – 1 and j .. n, cannot contain the sought value. The overall idea of the algorithm (and most other search algorithms) is to extend one of these two intervals, so as to narrow down the remaining part of 1 .. n where x may appear.
  • E: express that as soon as we find a positive (non-zero) Result, its value is an index in the array (see B) where x does appear.

Why is this invariant useful? The answer is that on exit it gives us what we want from the algorithm. The exit condition, recalled above, is

i ≥ j or Result > 0

Combined with the invariant, it tells us that on exit one of the following will hold:

  • Result > 0, but then because of E we know that x appears at position Result.
  • i ≥ j, in which case clause A gives i = j, so C and D together imply that x does not appear anywhere in t. In that case Result > 0 cannot hold (by E, x would then appear at position Result), so because of B Result must be zero.

What AutoProof proves, mechanically, is that under the function’s precondition (that the array is sorted):

  • The initialization ensures the invariant.
  • The loop body, assuming that the invariant is satisfied but the exit condition is not, ensures the loop invariant again after it executes.
  • The combination of the invariant and the exit condition ensures, as just explained, the postcondition of the function (the property that Result will either be positive and the index of an element equal to x, or zero with the guarantee that x appears nowhere in t).

Such a proof guarantees the correctness of the program if it terminates. We (and AutoProof) must prove separately that it does terminate. The technique is simple: find a “loop variant”, an integer quantity v  which remains non-negative throughout the loop (in other words, the loop invariant includes or implies v ≥ 0) and decreases on each iteration, so that the loop cannot continue executing forever. An obvious variant here is j – i + 1 (where the + 1 is needed because j – i may go down to -1 on the last iteration if x does not appear in the array). It reflects the informal idea of the algorithm: repeatedly decrease an interval i .. j – 1 (initially, 1 .. n) guaranteed to be such that x appears in t if and only if it appears at an index in that interval. At the end, either we already found x or the interval is empty, implying that x does not appear at all.

A great reference on variants and the techniques for proving program termination is a Communications of the ACM article of 2011: [3].

The variant gives an upper bound on the number of iterations that remain at any time. In sequential search, j – i + 1 would be our best bet; but for binary search it is easy to show that  log(j – i + 1) is also a variant, extending the proof of correctness with a proof of performance (the key goal of binary search being to ensure a logarithmic rather than linear execution time).
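None of this replaces the mechanical proof, but if you would like to see the clauses at work, here is an instrumented Python transcription (my own) that checks the precondition, the invariant clauses A to E and the decrease of the variant at run time on whatever inputs you give it; a dynamic check on chosen inputs, not a proof.

# Instrumented transcription of the program above: asserts the invariant
# (clauses A to E) and the variant at every iteration. Still only testing.

def binary_search_checked(t, n, x):
    def elem(k):
        return t[k - 1]                          # 1-based access

    def invariant(i, j, Result):
        A = 1 <= i <= j <= n + 1
        B = 0 <= Result <= n
        C = all(elem(k) < x for k in range(1, i))
        D = all(elem(k) > x for k in range(j, n + 1))
        E = Result <= 0 or elem(Result) == x
        return A and B and C and D and E

    assert all(elem(k) <= elem(k + 1) for k in range(1, n)), "precondition: t sorted"
    Result = 0
    i, j = 1, n + 1
    assert invariant(i, j, Result)               # established by the initialization
    while not (i >= j or Result > 0):
        variant_before = j - i + 1
        m = i + (j - i) // 2
        if elem(m) < x:
            i = m + 1
        elif elem(m) > x:
            j = m
        else:
            Result = m
        assert invariant(i, j, Result)           # preserved by the body
        assert Result > 0 or j - i + 1 < variant_before   # variant strictly decreases
    return Result

print(binary_search_checked([2, 3, 5, 7, 11], 5, 7))     # prints 4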

This example is, I hope, enough to highlight the crucial role of loop invariants and loop variants in reasoning about loops. How did we get the invariant? It looks like I pulled it out of a hat. But in fact if we go the other way round (as advocated in classic books [1] [2]) and develop the invariant and the loop together the process unfolds itself naturally and there is nothing mysterious about the invariant.

Here I cannot resist quoting (thirty years on!) from my own book Introduction to the Theory of Programming Languages [4]. It has a chapter on axiomatic semantics (also known as Hoare logic, the basis for the ideas used in this discussion), which I just made available: see here [5]. Its exercise 9.12 is the starting point for this series of articles. Here is how the book explains how to design the program and the invariant [6]:

In the general case [of search, binary or not] we aim for a loop body of the form

m := “Some value in 1 .. n such that i ≤ m < j”;

if t [m] ≤ x then

i := m + 1

else

j := m

end

It is essential to get all the details right (and easy to get some wrong):

  • The instruction must always decrease the variant j – i, by increasing i or decreasing j. If the definition of m specified just m ≤ j rather than m < j, the second branch would not meet this goal.
  •  This does not transpose directly to i: requiring i < m < j would lead to an impossibility when j – i is equal to 1. So we accept i ≤ m but then we must take m + 1, not m, as the new value of i in the first branch.
  •  The conditional’s guards are tests on t [m], so m must always be in the interval 1 .. n. This follows from the clause 1 ≤ i ≤ j ≤ n + 1 which is part of the invariant.
  •  If this clause is satisfied, then m ≤ n and m > 0, so the conditional instruction indeed leaves this clause invariant.
  • You are invited to check that both branches of the conditional also preserve the rest of the invariant.
  • Any policy for choosing m is acceptable if it conforms to the above scheme. Two simple choices are i  and j – 1; they lead to variants of the sequential search algorithm [which the book discussed just before binary search].

For binary search, m will be roughly equal to the average of i and j.

“Roughly” because we need an integer, hence the // (integer division).
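To make the quoted scheme concrete, here is a sketch of mine in Python: the choice of m is a parameter, constrained only by i ≤ m < j; the midpoint policy gives binary search, while the policy m = i gives a sequential search. (The final extraction of Result is my own completion of the scheme, not part of the quoted text.)

# A sketch of the general scheme: any policy choosing m with i <= m < j
# yields a correct search; the policy only affects the number of iterations.

def general_search(t, n, x, choose_m):
    def elem(k):
        return t[k - 1]                         # 1-based access
    i, j = 1, n + 1
    while i != j:
        m = choose_m(i, j)
        assert i <= m < j                       # the requirement on the policy
        if elem(m) <= x:
            i = m + 1
        else:
            j = m
    # On exit t [k] <= x for k < i and t [k] > x for k >= i,
    # so x can only appear at position i - 1.
    return i - 1 if i > 1 and elem(i - 1) == x else 0

sequential = lambda i, j: i                     # one of the "simple choices"
midpoint   = lambda i, j: i + (j - i) // 2      # roughly the average of i and j

print(general_search([2, 3, 5, 7, 11], 5, 7, sequential))   # prints 4
print(general_search([2, 3, 5, 7, 11], 5, 7, midpoint))     # prints 4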

In the last section, I will reflect further on the lessons we can draw from this example, and the practical significance of the key concept of invariant.

References and notes for section 8

[1] E.W. Dijkstra: A Discipline of Programming, Prentice Hall, 1976.

[2] David Gries: The Science of Programming, Springer, 1989.

[3] Byron Cook, Andreas Podelski and Andrey Rybalchenko: Proving Program Termination, in Communications of the ACM, vol. 54, no. 5, May 2011, pages 88-98, available here.

[4] Bertrand Meyer, Introduction to the Theory of Programming Languages, Prentice Hall, 1990. The book is out of print but can be found used, e.g. on Amazon. See the next entry for an electronic version of two chapters.

[5] Bertrand Meyer: Axiomatic Semantics, chapter 9 from [4], available here. Note that the PDF was reconstructed from an old text-processing system (troff); the figures could not be recreated and are missing. (One of these days I might have the patience of scanning them from a book copy and adding them. Unless someone wants to help.) I also put online, with the same caveat, chapter 2 on notations and mathematical basis: see here.

[6] Page 383 of [4] and [5]. The text is verbatim except a slight adaptation of the programming notation and a replacement of the variables: i in the book corresponds to i – 1 here, and j to j – 1. As a matter of fact I prefer the original conventions from the book (purely as a matter of taste, since the two are rigorously equivalent), but I changed here to the conventions of the program as it appears in the AutoProof page, with the obvious advantage that you can verify it mechanically. The text extract is otherwise exactly as in the 1990 book.

9. Lessons learned

What was this journey about?

We started with a succession of attempts that might have “felt right” but were in fact all wrong, each in its own way: giving the wrong answer in some cases, crashing (by trying to access an array outside of its index interval) in some cases, looping forever in some cases. Always “in some cases”,  evidencing the limits of testing, which can never guarantee that it exercises all the problem cases. A correct program is one that works in all cases. The final version was correct; you were able to prove its correctness with an online tool and then to understand (I hope) what lies behind that proof.

To show how to prove such correctness properties, I have referred throughout the series to publications from the 1990s (my own Introduction to The Theory of Programming Languages), the 1980s (Jon Bentley’s Programming Pearls columns, Gries’s Science of Programming), and even the 1970s (Dijkstra’s Discipline of Programming). I noted that the essence of my argument appeared in a different form in one of Bentley’s Communications articles. What is the same and what has changed?

The core concepts have been known for a long time and remain applicable: assertion, invariant, variant and a few others, although they are much better understood today thanks to decades of theoretical work to solidify the foundation. Termination also has a more satisfactory theory.

On the practical side, however, the progress has been momentous. Considerable engineering has gone into making sure that the techniques scaled up. At the time of Bentley’s article, binary search was typical of the kind of programs that could be proved correct, and the proof had to proceed manually. Today, we can tackle much bigger programs, and use tools to perform the verification.

Choosing binary search again as an example today has the obvious advantage that everyone can understand all the details, but should not be construed as representative of the state of the art. Today’s proof systems are far more sophisticated. Entire operating systems, for example, have been mechanically (that is to say, through a software tool) proved correct. In the AutoProof case, a major achievement was the proof of correctness [1] of an entire data structure (collections) library, EiffelBase 2. In that case, the challenge was not so much size (about 8,000 source lines of code), but the complexity of both:

  • The scope of the verification, involving the full range of mechanisms of a modern object-oriented programming language, with classes,  inheritance (single and multiple), polymorphism, dynamic binding, generics, exception handling etc.
  • The code itself, using sophisticated data structures and algorithms, involving in particular advanced pointer manipulations.

In both cases, progress has required advances on both the science and engineering sides. For example, the early work on program verification assumed a bare-bones programming language, with assignments, conditionals, loops, routines, and not much more. But real programs use many other constructs, growing ever richer as programming languages develop. To cover exception handling in AutoProof required both theoretical modeling of this construct (which appeared in [2]) and implementation work.

More generally, scaling up verification capabilities from the small examples of 30 years ago to the sophisticated software that can be verified today required the considerable effort of an entire community. AutoProof, for example, sits at the top of a tool stack relying on the Boogie environment from Microsoft Research, itself relying on the Z3 theorem prover. Many person-decades of work make the result possible.

[Figure: the verification tool stack: AutoProof on top of Boogie, itself relying on the Z3 prover]

Beyond the tools, the concepts are essential. One of them, loop invariants, has been illustrated in the final version of our program. I noted in the first article the example of a well-known expert and speaker on testing who found no better way to announce that a video would not be boring than “relax, we are not going to talk about loop invariants.” Funny perhaps, but unfair. Loop invariants are one of the most beautiful concepts of computer science. Not so surprisingly, because loop invariants are the application to programming of the concept of mathematical induction. According to the great mathematician Henri Poincaré, all of mathematics rests on induction; maybe he exaggerated, maybe not, but who would think of teaching mathematics without explaining induction? Teaching programming without explaining loop invariants is no better.

Below is an illustration (if you will accept my psychedelic diagram) of what a loop is about, as a problem-solving technique. Sometimes we can get the solution directly. Sometimes we identify several steps to the solution; then we use a sequence (A ; B; C). Sometimes we can find two (or more) different ways of solving the problem in different cases; then we use a conditional (if c then A else B end). And sometimes we can only get a solution by getting closer repeatedly, not necessarily knowing in advance how many times we will have to advance towards it; then, we use a loop.

[Figure: problem-solving strategies: direct solution, sequence, conditional, loop]

We identify an often large (i.e. very general) area where we know the solution will lie; we call that area the loop invariant. The solution or solutions (there may be more than one) will have to satisfy a certain condition; we call it the exit condition. From wherever we are, we shoot into the invariant region, using an appropriate operation; we call it the initialization. Then we execute as many times as needed (maybe zero if our first shot was lucky) an operation that gets us closer to that goal; we call it the loop body. To guarantee termination, we must have some kind of upper bound of the distance to the goal, decreasing each time discretely; we call it the loop variant.

This explanation is only an illustration, but I hope it makes the ideas intuitive. The key to a loop is its invariant. As the figure suggests, the invariant is always a generalization of the goal. For example, in binary search (and many other search algorithms, such as sequential search), our goal is to find a position where either x appears or, if it does not, we can be sure that it appears nowhere. The invariant says that we have an interval with the same properties (either x appears at a position belonging to that interval or, if it does not, it appears nowhere). It obviously includes the goal as a special case: if the interval has length 1, it defines a single position.

An invariant should be:

  1. Strong enough that we can devise an exit condition which in the end, combined with the invariant, gives us the goal we seek (a solution).
  2. Weak enough that we can devise an initialization that ensures it (by shooting into the yellow area) easily.
  3. Tuned so that we can devise a loop body that, from a state satisfying the invariant, gets us to a new one that is closer to the goal.

In the example:

  1. The exit condition is simply that the interval’s length is 1. (Technically, that we have computed Result as the single interval element.) Then from the invariant and the exit condition, we get the goal we want.
  2. Initialization is easy, since we can just take the initial interval to be the whole index range of the array, which trivially satisfies the invariant.
  3. The loop body simply decreases the length of the interval (which can serve as loop variant to ensure termination). How we decrease the length depends on the search strategy: sequential search decreases it by 1 at each iteration (correct, although not fast), while binary search decreases it by about half.

The general scheme always applies. Every loop algorithm is characterized by an invariant. The invariant may be called the DNA of the algorithm.
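To make all this concrete, here is a minimal sketch of binary search, in Python rather than Eiffel for brevity, with the invariant and the variant written as executable assertions; the function and variable names are mine, for illustration only:

def binary_search(a, x):
    # Return an index at which x occurs in the sorted list a, or None if x occurs nowhere.
    low, high = 0, len(a)                 # Initialization: shoot into the invariant region.
    while high > low:                     # Exit when the interval is empty (or on a direct hit below).
        # Invariant: if x occurs anywhere in a, it occurs at some index in [low, high).
        assert all(a[i] != x for i in range(0, low))
        assert all(a[i] != x for i in range(high, len(a)))
        variant_before = high - low       # Variant: non-negative, must decrease at each iteration.
        mid = (low + high) // 2
        if a[mid] < x:
            low = mid + 1                 # Loop body: shrink the interval, preserving the invariant.
        elif a[mid] > x:
            high = mid
        else:
            return mid                    # Goal reached early.
        assert 0 <= high - low < variant_before
    return None                           # Empty interval plus invariant: x occurs nowhere.

The three requirements listed above are all visible: the invariant generalizes the goal, the initialization establishes it trivially, and each iteration re-establishes it while decreasing the variant.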

To demonstrate the relevance of this principle, my colleagues Furia, Velder, and I published a survey paper [5] in ACM Computing Surveys describing the invariants of important algorithms in many areas of computer science, from search algorithms to sorting (all major algorithms), arithmetic (long integer addition, squaring), optimization and dynamic programming [6] (Knapsack, Levenshtein/Edit distance), computational geometry (rotating calipers), Web (Page Rank)… I find it pleasurable and rewarding to go deeper into the basis of loop algorithms and understand their invariants, like a geologist who does not stop at admiring the mountain but seeks to understand how it came to be.

Such techniques are indispensable if we want to get our programs right, the topic of this article. Even putting aside the Bloch average-computation overflow issue, I started with 5 program attempts, all of them friendly-looking but wrong in different ways. I could have continued fiddling with the details, following my gut feeling to fix the flaws and running more and more tests. Such an approach can be reasonable in some cases (if you have an algorithm covering a well-known and small set of cases), but will not work for non-trivial algorithms.

Newcomers to the concept of loop invariant sometimes panic: “this is all fine, you gave me the invariants in your examples, how do I find my own invariants for my own loops?” I do not have a magic recipe (nor does anyone else), but there is no reason to be scared. Once you have understood the concept and examined enough examples (just a few of those in [5] should be enough), writing the invariant at the same time as you are devising a loop will come as second nature to you.

As the fumbling attempts in the first few sections should show, there is not much of an alternative. Try this approach. If you are reaching these final lines after reading what preceded them, allow me to thank you for your patience, and to hope that this rather long chain of reflections on verification will have brought you some new insights into the fascinating challenge of writing correct programs.

References

[1] Nadia Polikarpova, Julian Tschannen, and Carlo A. Furia: A Fully Verified Container Library, in Proceedings of 20th International Symposium on Formal Methods (FM 15), 2015. (Best paper award.)

[2] Martin Nordio, Cristiano Calcagno, Peter Müller and Bertrand Meyer: A Sound and Complete Program Logic for Eiffel, in Proceedings of TOOLS 2009 (Technology of Object-Oriented Languages and Systems), Zurich, June-July 2009, eds. M. Oriol and B. Meyer, Springer LNBIP 33, June 2009.

[3] Boogie page at MSR, see here for publications and other information.

[4] Z3 also originated at MSR and has been open-sourced; publications and other information are accessible from its Wikipedia page.

[5] Carlo Furia, Bertrand Meyer and Sergey Velder: Loop invariants: Analysis, Classification and Examples, in ACM Computing Surveys, vol. 46, no. 3, February 2014. Available here.

[6] Dynamic programming is a form of recursion removal, turning a recursive algorithm into an iterative one by using techniques known as “memoization” and  “bottom-up computation” (Berry). In this transformation, the invariant plays a key role. I will try to write this up some day as it is a truly elegant and illuminating explanation.


Soundness and completeness: with precision

Over breakfast at your hotel you read an article berating banks about the fraudulent credit card transactions they let through. You proceed to check out and bang! Your credit card is rejected because (as you find out later) the bank thought [1] it couldn’t possibly be you in that exotic place. Ah, those banks! They accept too much. Ah, those banks! They reject too much. Finding the right balance is a case of soundness versus precision.

Similar notions are essential to the design of tools for program analysis, looking for such suspicious cases as  dead code (program parts that will never be executed). An analysis can be sound, or not; it can be complete, or not.

These widely used concepts are sometimes misunderstood.  The first answer I get when innocently asking people whether the concepts are clear is yes, of course, everyone knows! Then, as I bring up such examples as credit card rejection or dead code detection, assurance quickly yields to confusion. One sign that things are not going well is when people start throwing in terms like “true positive” and “false negative”. By then any prospect of reaching a clear conclusion has vanished. I hope that after reading this article you will never again (in a program analysis context) be tempted to use them.

Now the basic idea is simple. An analysis is sound if it reports all errors, and complete if it only reports errors. If it is not complete, the fewer non-errors it reports, the more precise it is.

You can stop here and not be too far off [2]. But a more nuanced and precise discussion helps.

1. A relative notion

As an example of common confusion, one often encounters attempts to help through something like Figure 1, which cannot be right since it implies that all sound methods are complete. (We’ll have better pictures below.)

Figure 1: Naïve (and wrong) illustration

Perhaps this example can be dismissed as just a bad use of illustrations [3] but consider the example of looking for dead code. If the analysis wrongly determines that some reachable code is unreachable, is it unsound or incomplete?

With this statement of the question, the only answer is: it depends!

It depends on the analyzer’s mandate:

  • If it is a code checker that alerts programmers to cases of bad programming style, it is incomplete: it reports as an error a case that is not. (Reporting that unreachable code is reachable would cause unsoundness, by missing a case that it should have reported.)
  • If it is the dead-code-removal algorithm of an optimizing compiler, which will remove unreachable code, it is unsound: the compiler will remove code that it should not. (Reporting that unreachable code is reachable would cause incompleteness, by depriving the compiler of an optimization.)

As another example, consider an analyzer that finds out whether a program will terminate. (If you are thinking “but that can’t be done!”, see the section “Appendix: about termination” at the very end of this article.) If it says a program does not terminate when in fact it does, is it unsound or incomplete?

Again, that depends on what the analyzer seeks to establish. If it is about the correctness of a plain input-to-output program (a program that produces results and then is done), we get incompleteness: the analyzer wrongly flags a program that is actually OK. But if it is about verifying that continuously running programs, such as the control system for a factory, will not stop (“liveness”), then the analyzer is unsound.

Examples are not limited to program analysis. A fraud-identification process that occasionally rejects a legitimate credit card purchase is, from the viewpoint of protecting the bank from fraudulent purchases, incomplete. From the viewpoint of the customer, who understands a credit card as an instrument enabling payments as long as you have sufficient credit, it is unsound.

These examples suffice to show that there cannot be absolute definitions of soundness and precision: the determination depends on which version of a boolean property we consider desirable. This decision is human and subjective. Dead code is desirable for the optimizing compiler and undesirable (we will say it is a violation) for the style checker. Termination is desirable for input-output programs and a violation for continuously running programs.

Once we have decided which cases are desirable and which are violations, we can define the concepts without any ambiguity: soundness means rejecting all violations, and completeness means accepting all desirables.
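In set terms the two definitions take one line each; here is a sketch (with names of my own choosing), where an analysis is represented by the set of cases it passes or rejects:

def is_sound(passed, violations):
    # Sound: every violation is rejected, i.e. no violation is among the passed cases.
    return passed.isdisjoint(violations)

def is_complete(rejected, desirables):
    # Complete: every desirable is accepted, i.e. no desirable is among the rejected cases.
    return rejected.isdisjoint(desirables)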

While this definition is in line with the unpretentious, informal one in the introduction, it makes two critical aspects explicit:

  • Relativity. Everything depends on an explicit decision of what is desirable and what is a violation. Do you want customers always to be able to use their credit cards for legitimate purchases, or do you want to detect all fraud attempts?
  • Duality. If you reverse the definitions of desirable and violation (they are the negation of each other), you automatically reverse the concepts of soundness and completeness and the associated properties.

We will now explore the consequences of these observations.

2. Theory and practice

For all sufficiently interesting problems, theoretical limits (known as Rice’s theorem) ensure that it is impossible to obtain both soundness and completeness.

But it is not good enough to say “we must be ready to renounce either soundness or completeness”. After all, it is very easy to obtain soundness if we forsake completeness: reject every case. A termination-enforcement analyzer can reject every program as potentially non-terminating. A bank that is concerned with fraud can reject every transaction (this seems to be my bank’s approach when I am traveling) as potentially fraudulent. Dually, it is easy to ensure completeness if we just sacrifice soundness: accept every case.

These extreme theoretical solutions are useless in practice; here we need to temper the theory with considerations of an engineering nature.

The practical situation is not as symmetric as the concept of duality theoretically suggests. If we have to sacrifice one of the two goals, it is generally better to accept some incompleteness: getting false alarms (spurious reports about cases that turn out to be harmless) is less damaging than missing errors. Soundness, in other words, is essential.

Even on the soundness side, though, practice tempers principle. We have to take into account the engineering reality of how tools get produced. Take a program analyzer. In principle it should cover the entire programming language. In practice, it will be built step by step: initially, it may not handle advanced features such as exceptions, or dynamic mechanisms such as reflection (a particularly hard nut to crack). So we may have to trade soundness for what has been called “soundiness” [4], meaning soundness outside of cases that the technology cannot handle yet.

If practical considerations lead us to more tolerance on the soundness side, on the completeness side they drag us (duality strikes again) in the opposite direction. Authors of analysis tools have much less flexibility than the theory would suggest. Actually, close to none. In principle, as noted, false alarms do not cause catastrophes, as missed violations do; but in practice they can be almost as bad. Anyone who has ever worked on or with a static analyzer, going back to the venerable Lint analyzer for C, knows the golden rule: false alarms kill an analyzer. When people discover the tool and run it for the first time, they are thrilled to discover how it spots some harmful pattern in their program. What counts is what happens in subsequent runs. If the useful gems among the analyzer’s diagnostics are lost in a flood of irrelevant warnings, forget about the tool. People just do not have the patience to sift through the results. In practice any analysis tool has to be darn close to completeness if it is to stand any chance of adoption.

Completeness, the absence of false alarms, is an all-or-nothing property. Since in the general case we cannot achieve it if we also want soundness, the engineering approach suggests using a numerical rather than boolean criterion: precision. We may define the precision pr as 1 – im where im is the imprecision:  the proportion of false alarms.

The theory of classification defines precision differently: as pr = tp / (tp + fp), where tp is the number of true positives and fp the number of false positives. (Then im would be fp / (tp + fp).) We will come back to this definition, which requires some tuning for program analyzers.

From classification theory also comes the notion of recall: tp / (tp + fn) where fn is the number of false negatives. In the kind of application that we are looking at, recall corresponds to soundness, taken not as a boolean property (“is my program sound?”) but as a quantitative one (“how sound is my program?”). The degree of unsoundness un would then be fn / (tp + fn).

3. Rigorous definitions

With the benefit of the preceding definitions, we can illustrate the concepts, correctly this time. Figure 2 shows two different divisions of the set U of all cases (the universe):

  • Some cases are desirable (D) and others are violations (V).
  • We would like to know which are which, but we have no way of finding out the exact answer, so instead we run an analysis which passes some cases (P) and rejects some others (R).

Figure 2: All cases, classified

The first classification, left versus right columns in Figure 2, is how things are (the reality). The second classification, top versus bottom rows, is how we try to assess them. Then we get four possible categories:

  • In two categories, marked in green, the assessment hits the nail on the head: accepted desirables (A), rightly passed, and caught violations (C), rightly rejected.
  • In the other two, marked in red, the assessment is off the mark: missed violations (M), wrongly passed; and false alarms (F), wrongly rejected.

The following properties hold, where U (Universe) is the set of all cases and  ⊕ is disjoint union [5]:

— Properties applicable to all cases:
U = D ⊕ V
U = P ⊕ R
D = A ⊕ F
V = C ⊕ M
P = A ⊕ M
R = C ⊕ F
U = A ⊕ M ⊕ F ⊕ C

We also see how to define the precision pr: as the proportion of actual violations to reported violations, that is, the size of C relative to R. With the convention that u is the size of U and so on, then  pr = c / r, that is to say:

  • pr = c / (c + f)      — Precision
  • im = f / (c + f)      — Imprecision

We can similarly define soundness in its quantitative variant (recall):

  • so = a / (a + m)      — Soundness (quantitative)
  • un = m / (a + m)   — Unsoundness
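The following sketch (illustrative only, with invented names) derives the four categories of Figure 2 from the two classifications and computes precision and quantitative soundness accordingly:

def assess(universe, desirables, passed):
    # Reality: desirables versus violations. Assessment: passed versus rejected.
    violations = universe - desirables
    rejected = universe - passed
    A = passed & desirables        # accepted desirables (rightly passed)
    M = passed & violations        # missed violations   (wrongly passed)
    C = rejected & violations      # caught violations   (rightly rejected)
    F = rejected & desirables      # false alarms        (wrongly rejected)
    assert A | M | F | C == universe              # U = A ⊕ M ⊕ F ⊕ C
    pr = len(C) / len(C | F) if C | F else 1.0    # precision: c / (c + f)
    so = len(A) / len(A | M) if A | M else 1.0    # quantitative soundness: a / (a + m)
    return A, M, C, F, pr, so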

These properties reflect the full duality of soundness and completeness. If we reverse our (subjective) criterion of what makes a case desirable or a violation, everything else gets swapped too, as follows:

Figure 3: Duality

We will say that properties paired this way “dual” each other [6].

It is just as important (perhaps as a symptom that things are not as obvious as sometimes assumed) to note which properties do not dual. The most important examples are the concepts of “true” and “false” as used in “true positive” etc. These expressions are all the more confusing in that the concepts of True and False do dual each other in the standard duality of Boolean algebra (where True duals False, Or duals And, and an expression duals its negation). In “true positive” or “false negative”, “true” and “false” do not mean True and False: they mean cases in which (see figure 2 again) the assessment respectively matches or does not match the reality. Under duality we reverse the criteria in both the reality and the assessment; but matching remains matching! The green areas remain green and the red areas remain red.

The dual of positive is negative, but the dual of true is true and the dual of false is false (in the sense in which those terms are used here: matching or not). So the dual of true positive is true negative, not false negative, and so on. Herein lies the source of the endless confusion.

The terminology of this article removes these confusions. Desirable duals violation, passed duals rejected, the green areas dual each other and the red areas dual each other.

4. Sound and complete analyses

If we define an ideal world as one in which assessment matches reality [7], then figure 2 would simplify to just two possibilities, the green areas:

Figure 4: Perfect analysis (sound and complete)

This scheme has the following properties:

— Properties of a perfect (sound and complete) analysis as in Figure 4:
M = ∅              — No missed violations
F = ∅               — No false alarms
P = D                — Identify desirables exactly
R = V                — Identify violations exactly

As we have seen, however, the perfect analysis is usually impossible. We can choose to build a sound solution, potentially incomplete:

Figure 5: Sound desirability analysis, not complete

In this case:

— Properties of a sound analysis (not necessarily complete) as in Figure 5:
M = ∅              — No missed violations
P = A                — Accept only desirables
V = C                — Catch all violations
P ⊆ D               — Under-approximate desirables
R ⊇ V               — Over-approximate violations

Note the last two properties. In the perfect solution, the properties P = D and R = V mean that the assessment, yielding P and R, exactly matches the reality, D and V. From now on we settle for assessments that approximate the sets of interest: under-approximations, where the assessment is guaranteed to compute no more than the reality, and over-approximations, where it computes no less. In all cases the assessed sets are either subsets or supersets of their counterparts. (Non-strict, i.e. ⊆ and ⊇ rather than ⊂ and ⊃; “approximation” means possible approximation. We may on occasion be lucky and capture reality exactly.)
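As a toy illustration of Figure 5 (a sketch with invented names, not a real analyzer): a checker that passes a division only when it can see that the divisor is a non-zero literal. It never passes a violation, hence is sound; but it rejects plenty of harmless cases, hence is incomplete, under-approximating the desirables and over-approximating the violations.

def division_is_safe(divisor_expr):
    # Sound but incomplete: pass only divisors that are provably non-zero literals;
    # conservatively reject anything the checker does not understand.
    try:
        return float(divisor_expr) != 0.0   # non-zero literal: certainly safe
    except ValueError:
        return False                        # not a literal: reject, possibly a false alarm

# division_is_safe("2")      -> True   (accepted desirable)
# division_is_safe("0")      -> False  (caught violation)
# division_is_safe("x + 1")  -> False  (a false alarm whenever x + 1 can in fact never be zero)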

We can go dual and reach for completeness at the price of possible unsoundness:

Figure 6: Complete desirability analysis, not sound

The properties are dualled too:

— Properties of a complete analysis (not necessarily sound), as in Figure 6:
F = ∅              — No false alarms
R = C               — Reject only violations
D = A               — Accept all desirables
P ⊇ D               — Over-approximate desirables
R ⊆ V              — Under-approximate violations

5. Desirability analysis versus violation analysis

We saw above why the terms “true positives”, “false negatives” etc., which do not cause any qualms in classification theory, are deceptive when applied to the kind of pass/fail analysis (desirables versus violations) of interest here. The definition of precision provides further evidence of the damage. Figure 7 takes us back to the general case of Figure 2 (for analysis that is guaranteed neither sound nor complete)  but adds these terms to the respective categories.

Figure 7: Desirability analysis (same as fig. 2 with added labeling)

The analyzer checks for a certain desirable property, so if it wrongly reports a violation (F) that is a false negative, and if it misses a violation (M) it is a false positive. In the  definition from classification theory (section 2, with abbreviations standing for True/False Positives/Negatives): TP = A, FP = M, FN =  F, TN = C, and similarly for the set sizes: tp = a, fp = m, fn = f, tn = c.

The definition of precision from classification theory was pr = tp / (tp + fp), which here gives a / (a + m). This cannot be right! Precision has to do with how close the analysis is to completeness, that is to say, reporting only actual violations.

Is classification theory wrong? Of course not. It is simply that, just as Alice stepped on the wrong side of the mirror, we stepped on the wrong side of duality. Figures 2 and 7 describe desirability analysis: checking cases for a desirable property. We assess non-fraud from the bank’s viewpoint, not the stranded customer’s; termination of input-to-output programs, not continuously running ones; code reachability for a static checker, not an optimizing compiler. Then, as seen in section 3, a / (a + m) describes not precision but soundness (in its quantitative interpretation, the parameter called “so” above).

To restore the link with classification theory, we simply have to go dual and take the viewpoint of violation analysis. If we are looking for possible violations, the picture looks like this:

Figure 8: Violation analysis (same as fig. 7 with different positive/negative labeling)

Then everything falls into place:  tp = c, fp = f, fn =  m, tn = a, and the classical definition of  precision as pr = tp / (tp + fp) yields c / (c + f) as we are entitled to expect.

In truth there should have been no confusion since we always have the same picture, going back to Figure 2, which accurately covers all cases and supports both interpretations: desirability analysis and violation analysis. The confusion, as noted, comes from using the duality-resistant “true”/”false” opposition.

To avoid such needless confusion, we should use the four categories of the present discussion: accepted desirables, false alarms, caught violations and missed violations [8]. Figure 2 and its variants clearly show the duality, given explicitly in Figure 3, and sustain interpretations both for desirability analysis and for violation analysis. Soundness and completeness are simply special cases of the general framework, obtained by ruling out one of the two cases of incorrect analysis, as in Figures 5 and 6 respectively. The set-theoretical properties listed after Figure 2 express the key concepts and remain applicable in all variants. Precision c / (c + f) and quantitative soundness a / (a + m) have unambiguous definitions matching intuition.

The discussion is, I hope, sound. I have tried to make it complete. Well, at least it is precise.

Notes and references

[1] Actually it’s not your bank that “thinks” so but its wonderful new “Artificial Intelligence” program.

[2] For a discussion of these concepts as used in testing see Mauro Pezzè and Michal Young, Software Testing and Analysis: Process, Principles and Techniques, Wiley, 2008.

[3] Edward E. Tufte: The Visual Display of Quantitative Information, 2nd edition, Graphics Press, 2001.

[4] Michael Hicks,What is soundness (in static analysis)?, blog article available here, October 2017.

[5] The disjoint union property X = Y ⊕ Z means that Y ∩ Z = ∅ (Y and Z are disjoint) and X = Y ∪ Z (together, they yield X).

[6] I thought this article would mark the introduction into the English language of “dual” as a verb, but no, it already exists in the sense of turning a road from one-lane to two-lane (dual).

[7] As immortalized in a toast from the cult movie The Prisoner of the Caucasus: “My great-grandfather says: I have the desire to buy a house, but I do not have the possibility. I have the possibility to buy a goat, but I do not have the desire. So let us drink to the matching of our desires with our possibilities.” See 6:52 in the version with English subtitles.

[8] To be fully consistent we should replace the term “false alarm” by rejected desirable. I have retained it because it is so well established and, with the rest of the terminology as presented, does not cause confusion.

[9] Byron Cook, Andreas Podelski, Andrey Rybalchenko: Proving Program Termination, in Communications of the ACM, May 2011, Vol. 54 No. 5, Pages 88-98.

Background and acknowledgments

This reflection arose from ongoing work on static analysis of OO structures, when I needed to write formal proofs of soundness and completeness and found that the definitions of these concepts are more subtle than commonly assumed. I almost renounced writing the present article when I saw Michael Hicks’s contribution [4]; it is illuminating, but I felt there was still something to add. For example, Hicks’s set-based illustration is correct but still in my opinion too complex; I believe that the simple 2 x 2 pictures used above convey the ideas  more clearly. On substance, his presentation and others that I have seen do not explicitly mention duality, which in my view is the key concept at work here.

I am grateful to Carlo Ghezzi for enlightening discussions, and benefited from comments by Alexandr Naumchev and others from the Software Engineering Laboratory at Innopolis University.

Appendix: about termination

With apologies to readers who have known all of the following from kindergarten: a statement such as (section 1): “consider an analyzer that finds out whether a program will terminate” can elicit no particular reaction (the enviable bliss of ignorance) or the shocked rejoinder that such an analyzer is impossible because termination (the “halting” problem) is undecidable. This reaction is just as incorrect as the first. The undecidability result for the halting problem says that it is impossible to write a general termination analyzer that will always provide the right answer, in the sense of both soundness and completeness, for any program in a realistic programming language. But that does not preclude writing termination analyzers that answer the question correctly, in finite time, for given programs. After all it is not hard to write an analyzer that will tell us that the program from do_nothing until True loop do_nothing end will terminate and that the program from do_nothing until False loop do_nothing end will not terminate. In the practice of software verification today, analyzers can give such sound answers for very large classes of programs, particularly with some help from programmers who can obligingly provide variants (loop variants, recursion variants). For a look into the state of the art on termination, see the beautiful survey by Cook, Podelski and Rybalchenko [9].
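As a sketch of the “obligingly provided variant” idea (illustrative only, not an actual termination prover): if the programmer supplies an integer variant, running the loop under a check that the variant stays non-negative and strictly decreases either completes or exposes a broken termination argument, since a non-negative integer cannot decrease forever.

def run_with_variant(state, exit_condition, body, variant):
    # Execute a loop while checking a programmer-supplied termination variant.
    while not exit_condition(state):
        before = variant(state)
        assert isinstance(before, int) and before >= 0   # variant is a non-negative integer
        state = body(state)
        assert variant(state) < before                   # and strictly decreases at each step
    return state

# Example: counting down from 10; the variant is simply the current value.
print(run_with_variant(10, lambda n: n == 0, lambda n: n - 1, lambda n: n))   # prints 0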

Also appears in the Communications of the ACM blog


Festina retro

We “core” computer scientists and software engineers always whine that our research themes forever prevent us, to the delight of our physicist colleagues but unjustly, from reaching the gold standard of academic recognition: publishing in Nature. I think I have broken this barrier now by disproving the old, dusty laws of physics! Brace yourself for my momentous discovery: I have evidence of negative speeds.

My experimental setup (as a newly self-anointed natural scientist I am keen to offer the possibility of replication) is the Firefox browser. I was downloading an add-on, with a slow connection, and at some point got this in the progress bar:

Negative download speed

Negative speed! Questioning accepted wisdom! Nobel in sight! What next, cold fusion?

I fear I have to temper my enthusiasm in deference to more mundane explanations. There’s the conspiracy explanation: the speed is truly negative (more correctly, it is a “velocity”, a vector of arbitrary direction, hence in dimension 1 possibly negative); Firefox had just reversed the direction of transfer, surreptitiously dumping my disk drive to some spy agency’s server.

OK, that is rather far-fetched. More likely, it is a plain bug. A transfer speed cannot be negative; this property is not just wishful thinking but should be expressed as an integral part of the software. Maybe someone should tell Firefox programmers about class invariants.
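A class invariant would catch the bug at the source rather than in the progress bar. A minimal sketch (an invented class, with Python assertions standing in for Eiffel’s invariant clause):

class Download:
    # Invariant: bytes received never exceed the total and the reported speed is never negative.

    def __init__(self, total_bytes):
        self.total_bytes = total_bytes
        self.received = 0
        self.speed = 0.0                  # bytes per second
        self.check_invariant()

    def record_progress(self, new_received, elapsed_seconds):
        assert elapsed_seconds > 0 and new_received >= self.received   # precondition
        self.speed = (new_received - self.received) / elapsed_seconds
        self.received = new_received
        self.check_invariant()

    def check_invariant(self):
        assert 0 <= self.received <= self.total_bytes
        assert self.speed >= 0.0, "a transfer speed cannot be negative"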


The end of software engineering and the last methodologist

(Reposted from the CACM blog [*].)

Software engineering was never a popular subject. It started out as “programming methodology”, evoking the image of bearded middle-aged men telling you with a Dutch, Swiss-German or Oxford accent to repent and mend your ways. Consumed (to paraphrase Mark Twain) by the haunting fear that someone, somewhere, might actually enjoy coding.

That was long ago. With a few exceptions including one mentioned below, to the extent that anyone still studies programming methodology, it’s in the agile world, where the decisive argument is often “I always say…”. (Example from a consultant’s page: “I always tell teams: ‘I’d like a [user] story to be small, to fit in one iteration but that isn’t always the way.’”) Dijkstra did appeal to gut feeling, but he backed it with strong conceptual arguments.

The field of software engineering, of which programming methodology is today just a small part, has enormously expanded in both depth and width. Conferences such as ICSE and ESEC still attract a good crowd, the journals are buzzing, the researchers are as enthusiastic as ever about their work, but… am I the only one to sense frustration? It is not clear that anyone outside of the community is interested. The world seems to view software engineering as something that everyone in IT knows because we all develop software or manage people who develop software. In the 2017 survey of CS faculty hiring in the U.S., software engineering accounted, in top-100 Ph.D.-granting universities, for 3% of hires! (In schools that stop at the master’s level, the figure is 6%; not insignificant, but not impressive either given that these institutions largely train future software engineers.) From an academic career perspective, the place to go is obviously  “Artificial Intelligence, Data Mining, and Machine Learning”, which in those top-100 universities got 23% of hires.

Nothing against our AI colleagues; I always felt “AI winter” was an over-reaction [1], and they are entitled to their spring. Does it mean software engineering now has to go into a winter of its own? That is crazy. Software engineering is more important than ever. The recent Atlantic “software apocalypse” article (stronger on problems than solutions) is just the latest alarm-sounding survey. Or, for just one recent example, see the satellite loss in Russia [2] (juicy quote, which you can use the next time you teach a class about the challenges of software testing: “this revealed a hidden problem in the algorithm, which was not uncovered in decades of successful launches of the Soyuz-Frigate bundle”).

Such cases, by the way, illustrate what I would call the software professor’s dilemma, much more interesting in my opinion than the bizarre ethical brain-teasers (you see what I mean, trolley levers and the like) on which people in philosophy departments spend their days: is it ethical for a professor of software engineering, every morning upon waking up, to go to cnn.com in the hope that a major software-induced disaster has occurred,  finally legitimizing the profession? The answer is simple: no, that is not ethical. Still, if you have witnessed the actual state of ordinary software development, it is scary to think about (although not to wish for) all the catastrophes-in-waiting that you suspect are lying out there just waiting for the right circumstances .

So yes, software engineering is more relevant than ever, and so is programming methodology. (Personal disclosure: I think of myself as the very model of a modern methodologist [3], without a beard or a Dutch accent, but trying to carry, on today’s IT scene, the torch of the seminal work of the 1970s and 80s.)

What counts, though, is not what the world needs; it is what the world believes it needs. The world does not seem to think it needs much software engineering. Even when software causes a catastrophe, we see headlines for a day or two, and then nothing. Radio silence. I have argued to the point of nausea, including at least four times in this blog (five now), for a simple rule that would require a public auditing of any such event; to quote myself: airline transportation did not become safer by accident but by accidents. Such admonitions fall on deaf ears. As another sign of waning interest, many people including me learned much of what they understand of software engineering through the ACM Risks Forum, long a unique source of technical information on software troubles. The Forum still thrives, and still occasionally reports about software engineering issues, but most of the traffic is about privacy and security (with a particular fondness for libertarian rants against any reasonable privacy rule that the EU passes). Important topics indeed, but where do we go for in-depth information about what goes wrong with software?

Yet another case in point is the evolution of programming languages. Language creation is abuzz again with all kinds of fancy new entrants. I can think of one example (TypeScript) in which the driving force is a software engineering goal: making Web programs safer, more scalable and more manageable by bringing some discipline into the JavaScript world. But that is the exception. The arguments for many of the new languages tend to be how clever they are and what expressive new constructs they introduce. Great. We need new ideas. They would be even more convincing if they addressed the old, boring problems of software engineering: correctness, robustness, extendibility, reusability.

None of this makes software engineering less important, or diminishes in the least the passion of those of us who have devoted our careers to the field. But it is time to don our coats and hats: winter is upon us.

Notes

[1] AI was my first love, thanks to Jean-Claude Simon at Polytechnique/Paris VI and John McCarthy at Stanford.

[2] Thanks to Nikolay Shilov for alerting me to this information. The text is in Russian but running it through a Web translation engine (maybe this link will work) will give the essentials.

[3] This time borrowing a phrase from James Noble.

[*] I am reposting these CACM blog articles rather than just putting a link, even though as a software engineer I do not like copy-paste. This is my practice so far, and it might change since it raises obvious criticism, but here are the reasons: (A) The audiences for the two blogs are, as experience shows, largely disjoint. (B) I like this site to contain a record of all my blog articles, regardless of what happens to other sites. (C) I can use my preferred style conventions.


Devops (the concept, and a workshop announcement)

One of the most significant recent developments in software engineering is the concept of Devops*. Dismissing the idea as “just the latest buzzword” would be wrong. It may be a buzzword but it reflects a fundamental change in the way we structure system development; with web applications in particular the traditional distinctions between steps of development, V&V** and deployment fade out. If you are using Microsoft Word, you know or can easily find out the version number; but which version of your search engine are you using?

With the new flexibility indeed come new risks, as when a bug in the latest “devopsed”  version of Google Docs caused me to lose a whole set of complex diagrams irretrievably; an earlier article on this blog (“The Cloud and Its Risks“, October 2010) told the story.

In the new world of continuous integrated development/V&V/deployment, software engineering principles are more necessary than ever, but their application has to undergo a profound adaptation.

With Jean-Michel Bruel (Toulouse), Elisabetta Di Nitto (Milan) and Manuel Mazzara (Innopolis), we are organizing a workshop on the topic, DEVOPS 18, on 5-6 March 2018 near Toulouse. The Call for Papers is available here, with Springer LNCS proceedings. The submission deadline is January 15, but for that date a 2-page extended abstract is sufficient. I hope that the event will help the community get a better grasp of the software engineering techniques and practices applicable to this new world of software development.

Notes

*I know, it’s supposed to be DevOps (I am not a great fan of upper case in the middle of words).
** Validation & Verification.


AutoProof workshop: Verification As a Matter of Course

The AutoProof technology pursues the goal of “Verification As a Matter Of Course”, integrated into the EVE development environment. (The AutoProof project page is here; see particularly the online interactive tutorial.) A one-day workshop devoted to the existing AutoProof technology and to its current development will take place on October 1 near Toulouse in France. It is an informal event (no proceedings planned at this point, although based on the submissions we might decide to produce a volume), on a small scale, designed to bring together people interested in making the idea of practical verification a reality.

The keynote will be given by Rustan Leino from Microsoft Research, the principal author of the Boogie framework on which the current implementation of AutoProof relies.

For submissions (or to attend without submitting) see the workshop page here. You are also welcome to contact me for more information.


New article: contracts in practice

For almost anyone programming in Eiffel, contracts are just a standard part of daily life; Patrice Chalin’s pioneering study of a few years ago [1] confirmed this impression. A larger empirical study is now available to understand how developers actually use contracts when available. The study, to be published at FM 2014 [2], covers 21 programs, not just in Eiffel but also in JML and in Code Contracts for C#, totaling 830,000 lines of code, and following the programs’ revision histories for a grand total of 260 million lines of code over 7700 revisions. It analyzes in detail whether programmers use contracts, how they use them (in particular, which kinds, among preconditions, postconditions and invariants), how contracts evolve over time, and how inheritance interacts with contracts.

The paper is easy to read so I will refer you to it for the detailed conclusions, but one thing is clear: anyone who thinks contracts are for special development or special developers is completely off-track. In an environment supporting contracts, especially as a native part of the language, programmers understand their benefits and apply them as a matter of course.

References

[1] Patrice Chalin: Are practitioners writing contracts?, in Fault-Tolerant System, eds. Butler, Jones, Romanovsky, Troubitsyna, Springer LNCS, vol. 4157, pp. 100–113, 2006.

[2] H.-Christian Estler, Carlo A. Furia, Martin Nordio, Marco Piccioni and Bertrand Meyer: Contracts in Practice, to appear in proceedings of 19th International Symposium on Formal Methods (FM 2014), Singapore, May 2014, draft available here.


Reading notes: strong specifications are well worth the effort

 

This report continues the series of ICSE 2013 article previews (see the posts of these last few days, other than the DOSE announcement), but is different from its predecessors since it talks about a paper from our group at ETH, so you should not expect any dangerously delusional,  disingenuously dubious or downright deceptive declaration or display of dispassionate, disinterested, disengaged describer’s detachment.

The paper [1] (mentioned on this blog some time ago) is entitled What Good Are Strong Specifications? and will be presented on Wednesday by Nadia Polikarpova. The basic result: stronger specifications, which capture a more complete part of program functionality, cause only a modest increase in specification effort, but the benefits are huge; in particular, automatic testing finds twice as many faults (“bugs” as recently reviewed papers call them).

Strong specifications are specifications that go beyond simple contracts. A straightforward example is a specification of a push operation for stacks; in EiffelBase, the basic Eiffel data structure library, the contract’s postcondition will read

item = x                                        /A/
count = old count + 1

where x is the element being pushed, item the top of the stack and count the number of elements. It is of course sound, since it states that the element just pushed is now the new top of the stack, and that there is one more element; but it is also incomplete since it says nothing about the other elements remaining as they were; an implementation could satisfy the contract and still mess with these elements. Using “complete” or “strong” postconditions, we associate with the underlying domain a theory [2], or “model”, represented by a specification-only feature in the class, model, denoting a sequence of elements; then it suffices (with the convention that the top is the first element of the model sequence, and that “+” denotes concatenation of sequences) to use the postcondition

model = <x> + old model         /B/

which says all there is to say and implies the original postconditions /A/.
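To see the difference in executable form, here is a sketch (in Python, with the model kept as an explicit specification-only view; all names mine) of a push operation checking both the /A/-style and the /B/-style postconditions:

class Stack:
    def __init__(self):
        self.items = []                       # implementation
    def count(self):
        return len(self.items)
    def item(self):
        return self.items[-1]                 # top of the stack
    def model(self):
        return list(reversed(self.items))     # specification-only view: top element first

    def push(self, x):
        old_count, old_model = self.count(), self.model()
        self.items.append(x)
        # /A/  partial postcondition: right top, right count...
        assert self.item() == x and self.count() == old_count + 1
        # ... but /A/ would also hold if push scrambled the remaining elements.
        # /B/  strong, model-based postcondition: says everything, and implies /A/.
        assert self.model() == [x] + old_model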

Clearly, the strong contracts, in the  /B/ style, are more expressive [3, 4], but they also require more specification effort. Are they worth the trouble?

The paper explores this question empirically, and the answer, at least according to the criteria used in the study, is yes. The work takes advantage of AutoTest [5], an automatic testing framework which relies on the contracts already present in the software to serve as test oracles, and generates test cases automatically. AutoTest was applied both to the classic EiffelBase, with classic partial contracts in the /A/ style, and to the more recent EiffelBase+ library, with strong contracts in the /B/ style. AutoTest is for Eiffel programs; to check for any language-specificity in the results the work also included testing a smaller set of classes from a C# library, DSA, for which a student developed a version (DSA+) equipped with strong model-based contracts. In that case the testing tool was Microsoft Research’s Pex [6]. The results are similar for both languages: citing from the paper, “the fault rates are comparable in the C# experiments, respectively 6 · 10⁻³ and 3 · 10⁻³. The fault complexity is also qualitatively similar.”

The verdict on the effect of strong specifications as captured by automated testing is clear: the same automatic testing tools applied to the versions with strong contracts yield twice as many real faults. The term “real fault” comes from excluding spurious cases, such as specification faults (wrong specification, right implementation), which are a phenomenon worth studying but should not count as a benefit of the strong specification approach. The paper contains a detailed analysis of the various kinds of faults and the corresponding empirically determined measures. This particular analysis is for the Eiffel code, since in the C#/Pex case “it was not possible to get an evaluation of the faults by the original developers“.

In our experience the strong specifications are not that much harder to write. The paper contains a precise measure: about five person-weeks to create EiffelBase+, yielding an “overall benefit/effort ratio of about four defects detected per person-day“. Such a benefit more than justifies the effort. More study of that effort is needed, however, because the “person” in the person-weeks was not just an ordinary programmer. True, Eiffel experience has shown that most programmers quickly get the notion of contract and start applying it; as the saying goes in the community, “if you can write an if-then-else, you can write a contract”. But we do not yet have significant evidence of whether that observation extends to model-based contracts.

Model-based contracts (I prefer to call them “theory-based” because “model” means so many other things, but I do not think I will win that particular battle) are, in my opinion, a required component of the march towards program verification. They are the right compromise between simple contracts, which have proved to be attractive to many practicing programmers but suffer from incompleteness, and full formal specification à la Z, which say everything but require too much machinery. They are not an all-or-nothing specification technique but a progressive one: programmers can start with simple contracts, then extend and refine them as desired to yield exactly the right amount of precision and completeness appropriate in any particular context. The article shows that the benefits are well worth the incremental effort.

According to the ICSE program the talk will be presented in the formal specification session, Wednesday, May 22, 13:30-15:30, Grand Ballroom C.

References

[1] Nadia Polikarpova, Carlo A. Furia, Yu Pei, Yi Wei and Bertrand Meyer: What Good Are Strong Specifications?, to appear in ICSE 2013 (Proceedings of 35th International Conference on Software Engineering), San Francisco, May 2013, draft available here.

[2] Bertrand Meyer: Domain Theory: the forgotten step in program verification, article on this blog, see here.

[3] Bernd Schoeller, Tobias Widmer and Bertrand Meyer: Making Specifications Complete Through Models, in Architecting Systems with Trustworthy Components, eds. Ralf Reussner, Judith Stafford and Clemens Szyperski, Lecture Notes in Computer Science, Springer-Verlag, 2006, available here.

[4] Nadia Polikarpova, Carlo Furia and Bertrand Meyer: Specifying Reusable Components, in Verified Software: Theories, Tools, Experiments (VSTTE ‘ 10), Edinburgh, UK, 16-19 August 2010, Lecture Notes in Computer Science, Springer Verlag, 2010, available here.

[5] Bertrand Meyer, Ilinca Ciupa, Andreas Leitner, Arno Fiva, Yi Wei and Emmanuel Stapf: Programs that Test Themselves, IEEE Computer, vol. 42, no. 9, pages 46-55, September 2009, also available here.

[6] Nikolai Tillmann and Peli de Halleux: Pex: White-Box Test Generation for .NET, in Tests and Proofs (TAP 2008), pp. 134-153.

 


How good are strong specifications? (New paper, ICSE 2013)

 

A core aspect of our verification work is the use of “strong” contracts, which express sophisticated specification properties without requiring a separate specification language: even for advanced properties, there is no need for special notations such as those of first-order logic; instead, one can continue to rely, in the tradition of Design by Contract, on the built-in notations of the programming language, Eiffel.

This is the idea of domain theory, as discussed in earlier posts on this blog, in particular [1]. An early description of the approach, part of Bernd Schoeller’s PhD thesis work, was [2]; the next step was [3], presented at VSTTE in 2010.

A new paper to be presented at ICSE in May [4], part of an effort led by Nadia Polikarpova for her own thesis in progress, shows new advances in using strong specifications, demonstrating their expressive power and submitting them to empirical evaluation. The results show that strong specifications justify the extra effort; in particular they enable automatic tests to find significantly more bugs.

A byproduct of this work is to show again the complementarity between various forms of verification, including not only proofs but (particularly in the contribution of two of the co-authors, Yi Wei and Yu Pei, as well as Carlo Furia) tests.

References

[1] Bertrand Meyer: Domain Theory: the forgotten step in program verification, article on this blog, see here.

[2] Bernd Schoeller, Tobias Widmer and Bertrand Meyer: Making Specifications Complete Through Models, in Architecting Systems with Trustworthy Components, eds. Ralf Reussner, Judith Stafford and Clemens Szyperski, Lecture Notes in Computer Science, Springer-Verlag, 2006, available here.

[3] Nadia Polikarpova, Carlo Furia and Bertrand Meyer: Specifying Reusable Components, in Verified Software: Theories, Tools, Experiments (VSTTE ‘ 10), Edinburgh, UK, 16-19 August 2010, Lecture Notes in Computer Science, Springer Verlag, 2010, available here.

[4] Nadia Polikarpova, Carlo A. Furia, Yu Pei, Yi Wei and Bertrand Meyer: What Good Are Strong Specifications?, to appear in ICSE 2013 (Proceedings of 35th International Conference on Software Engineering), San Francisco, May 2013, draft available here.


A fundamental duality of software engineering

A couple of weeks ago I proposed a small quiz. (I also stated that the answer would come “on Wednesday” — please understand any such promise as “whenever I find the time”. Sorry.) Here is the answer.

The quiz was:

I have a function:

  • For 0 it yields 0.
  • For 1 it yields 1.
  • For 2 it yields 4.
  • For 3 it yields 9.
  • For 4 it yields 16.

What is the value for 5?

Shortly thereafter I added a hint: the value for 5 is 25, and changed the question to: “What is the value for 6?”. For good measure we can also ask about the value for 1000. Now compare your answer to  what follows.

A good answer for the value at 6 is 34. The function in this case is −10 + 5x + |2x − 3| + |2x − 7|. It matches the values for the given inputs.
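A two-line check (Python) confirms that this formula reproduces the given values and yields 34 at 6 and 8980 at 1000:

f = lambda x: -10 + 5 * x + abs(2 * x - 3) + abs(2 * x - 7)
print([f(x) for x in range(7)], f(1000))    # [0, 1, 4, 9, 16, 25, 34] 8980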

Linear, small values

The value for 1000 is 8980:

Linear function, full range

Another good answer at position 6 is 35.6. It comes up if we assume the function is over reals rather than integers; then a possible formula, which correlates very well (R-square of 0.9997) with the values at the given inputs, is:

869.42645566111 · (1 − 0.4325853145802 · e^(−0.0467615868913719 (x − 17.7342512233011)))^2.3116827277657443

Exponential function, initial range

with a quite different asymptotic behavior, giving the value 869.4 at position 1000:

Exponential, full range

Some readers might have thought of another possibility, the square function x², which again matches all the given values:

Square function, initial range

So which of these answers is right? Each is as good as the others, and as bad. There is in particular no reason to believe that the values given in the quiz’s statement suggest the square function. Any function that fits the given values, exactly (if we stick to integers) or approximately (with reals as simulated on a computer) is an equally worthy candidate. Six inputs, or six thousand, do not resolve the question. At best they are hints.

This difference between a hint and a solution is at the core of software engineering. It is, for example, the difference between a test and a specification. A test tells us that the program works for some values; as Dijkstra famously pointed out, and anyone who has developed a serious program has experienced, it does not tell us that it will work for others. The more successful tests, the more hints; but they are still only hints. I have always wondered whether Dijkstra was explicitly thinking of the Popperian notion of falsifiability: no number of experiments will prove a physical theory (although a careful experiment may boost the confidence in the theory, especially if competing theories fail to explain it, as the famous Eddington expedition did for relativity in 1919 [1]); but a single experiment can disprove a theory. Similarly, being told that our function’s value at 6 is 34 disqualifies the square function and the last one (the exponential), but does not guarantee that the first function (the linear combination) is the solution.

The specification-testing duality is the extension to computer science of the basic duality of logic. It starts with the elementary boolean operators: to prove a or b it suffices to establish a or to establish b; and to disprove a and b it suffices to show that a does not hold or to show that b does not hold. The other way round, to disprove a or b we have to show that a does not hold and to show that b does not hold; to prove that a and b holds, we have to show that a holds and to show that b holds.

Predicate calculus generalizes or to ∃, “there exists”, and and to ∀, “for all”. To prove ∃ x | p (x) (there is an x of which p holds) it suffices to find one value a such that p (a); let’s be pretentious and say we have “skolemized” x. To disprove ∀ x | p (x) (p holds of all x) it suffices to find one value for which p does not hold.

In software engineering the corresponding duality is between proofs and tests, or (equivalently) specifications and use cases. A specification is like a “for all”: it tells us what must happen for all envisioned inputs. A test is like a “there exists”: it tells us what happens for a particular input and hence, as in predicate calculus, it is interesting as a disproof mechanism:

  • A successful test brings little information (like learning the value for 5 when trying to figure out what a function is, or finding one true value in trying to prove a ∀ property, or a false value in trying to prove an ∃ property).
  • An unsuccessful test brings us decisive information (like a false value for a ∀ property): the program is definitely not correct. It skolemizes incorrectness.

A proof, for its part, brings the discussion to an end when it is successful. In practice, testing may still be useful in this case, but only testing that addresses issues not covered by the proof:

  • Correctness of the compiler and platform, if not themselves proved correct.
  • Correctness of the proof tools themselves, since most practical proofs require software support.
  • Aspects not covered by the specification such as, typically, performance and usability.

But for the properties it does cover the proof is final.

It is as foolish, then, to use tests in lieu of specifications as it would be to ignore the limitations of a proof. Agile approaches have caused much confusion here; as often happens in the agile literature [2], the powerful insight is mixed up with harmful advice. The insight, which has significantly improved the practice of software development, is that the regression test suite is a key asset of a project and that tests should be run throughout. The bad advice is to ditch upfront requirements and specifications in favor of tests. The property that tests lack and specifications possess is generality. A test is an instance; a thousand tests can never be more than a thousand instances. As I pointed out in a short note in EiffelWorld (the precursor to this blog) a few years ago [3], the relationship is not symmetric: one can generate tests from a specification, but not the other way around.
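Here is the asymmetry in miniature: a sketch of contract-based test generation (in the spirit of AutoTest, but not its actual interface; all names and values invented). From a precondition and a postcondition one can generate and run as many tests as desired, using the postcondition as oracle; no finite set of test cases will give the specification back.

import random

def test_from_spec(routine, precondition, postcondition, trials=1000):
    # Draw random inputs, keep those allowed by the precondition, use the postcondition as oracle.
    failures = []
    for _ in range(trials):
        n = random.randint(-1000, 1000)
        if not precondition(n):
            continue
        result = routine(n)
        if not postcondition(n, result):
            failures.append((n, result))      # each failure skolemizes incorrectness
    return failures

# Example: a deliberately broken integer square root, specified only by its contract.
broken_isqrt = lambda n: int(n ** 0.5) + (1 if n > 50 else 0)
pre = lambda n: n >= 0
post = lambda n, r: r * r <= n < (r + 1) * (r + 1)
print(test_from_spec(broken_isqrt, pre, post)[:3])    # a few counter-examples for n > 50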

The same relationship holds between use cases and requirements. It is stunning to see how many people think that use cases (scenarios) are a form of requirements. As requirements they are as useless as one or ten values are to defining a function. Use cases are a way to complement the requirements by describing the system’s behavior in selected important cases. A kind of reality check, to ensure that whatever abstract aims have been defined for the system it still covers the cases known to be of immediate interest. But to rely on use cases as requirements means that you will get a system that will satisfy the use cases — and possibly little else.

When I use systems designed in recent years, in particular Web-based systems, I often find myself in a stranglehold: I am stuck with the cases that the specifiers thought of. Maybe it’s me, but my needs tend, somehow, to fall outside of these cases. Actually it is not just me. Not long ago, I was sitting close to a small-business owner who was trying to find her way through an insurance site. Clearly the site had a planned execution path for employees, and another for administrators. Problem: she was both an employee and the administrator. I do not know how the session ended, but it was a clear case of misdesign: a system built in terms of standard scenarios. Good specification performs an extra step of abstraction (for example using object-oriented techniques and contracts, but this is for another article). Skipping this step means forsaking the principal responsibility of the requirements phase: to generalize from an analysis of the behavior in known cases to a definition of the desired behaviors in all relevant cases.

Once more, as everywhere else in computer science [4], abstraction is the key to solid results that stand the test of time. Definitely better than judging a book by its cover, inferring a function by its first few values, verifying a program by its tests, or specifying a system by its use cases.

References

[1] See e.g. a blog article: Einstein and Eddington, here.

[2] Bertrand Meyer: Agile! The Good, the Hype and the Ugly, 2013, to appear.

[3] Bertrand Meyer: Test or spec? Test and spec? Test from spec!, EiffelWorld column, 2004, available here.

[4] Jeff Kramer: Is Abstraction the Key to Computer Science?, in Communications of the ACM, vol. 50, no. 4, April 2007, pages 36-42, available from CiteSeer here.


New LASER proceedings

Springer has just published in the tutorial sub-series of Lecture Notes in Computer Science a new proceedings volume for the LASER summer school [1]. The five chapters are notes from the 2008, 2009 and 2010 schools (a previous volume [2] covered earlier schools). The themes range over search-based software engineering (Mark Harman and colleagues), replication of software engineering experiments (Natalia Juristo and Omar Gómez), integration of testing and formal analysis (Mauro Pezzè and colleagues), and, in two papers by our ETH group, Is branch coverage a good measure of testing effectiveness (with Yi Wei and Manuel Oriol — answer: not really!) and a formal reference for SCOOP (with Benjamin Morandi and Sebastian Nanz).

The idea of these LASER tutorial books — which are now a tradition, with the volume from the 2011 school currently in preparation — is to collect material from the presentations at the summer school, prepared by the lecturers themselves, sometimes in collaboration with some of the participants. Reading them is not quite as fun as attending the school, but it gives an idea.

The 2012 school is in full preparation, on the theme of “Advanced Languages for Software Engineering” and with once again an exceptional roster of speakers, or should I say an exceptional roster of exceptional speakers: Guido van Rossum (Python), Ivar Jacobson (from UML to Semat), Simon Peyton-Jones (Haskell), Roberto Ierusalimschy (Lua), Martin Odersky (Scala), Andrei Alexandrescu (C++ and D), Erik Meijer (C# and LINQ), plus me on the design and evolution of Eiffel.

The preparation of LASER 2012 is under way, with registration now open [3]; the school will take place from Sept. 2 to Sept. 8 and, like its predecessors, in the wonderful setting on the island of Elba, off the coast of Tuscany, with a very dense technical program but time for enjoying the beach, the amenities of a 4-star hotel and the many treasures of the island. On the other hand not everyone likes Italy, the sun, the Mediterranean etc.; that’s fine too, you can wait for the 2013 proceedings.

References

[1] Bertrand Meyer and Martin Nordio (eds.): Empirical Software Engineering and Verification, International Summer Schools LASER 2008-2010, Elba Island, Italy, Revised Tutorial Lectures, Lecture Notes in Computer Science 7007, Springer-Verlag, 2012, see here.

[2] Peter Müller (ed.): Advanced Lectures on Software Engineering, LASER Summer School 2007-2008, Lecture Notes in Computer Science 6029, Springer-Verlag, 2010, see here.

[3] LASER summer school information and registration form, http://se.ethz.ch/laser.


R&D positions in software engineering

In the past couple of weeks, three colleagues separately sent me announcements of PhD and postdoc positions, asking me to circulate them. This blog is as good a place as any other (as a matter of fact, when I asked Oge de Moor if it was appropriate, he replied: “Readers of your blog are just the kind of applicants we look for…” — I hope you appreciate the compliment). So here they are:

  • Daniel Jackson, for work on Alloy, see here. (“Our plan now is to take Alloy in new directions, and dramatically improve the usability and scalability of the analyzer. We are looking for someone who is excited by these possibilities and will be deeply involved not only in design and implementation but also in strategic planning. There are also opportunities to co-advise students and participate in research proposals.”)
  • University College London (Anthony Finkelstein), a position as lecturer (similar to assistant professor), see here. The title is “Lecturer in software testing” (which, by the way, seems a bit restrictive, as I am more familiar with an academic culture in which the general academic discipline is set, say software engineering or possibly software verification, rather than a narrow specialty, but I hope the scope can be broadened).
  • Semmle, Oge de Moor’s company, see here.

These are all excellent environments, so if you are very, very good, please consider applying. If you are very, very, very good don’t, as I am soon going to post a whole set of announcements for PhD and postdoc positions at ETH in connection with a large new grant.

 


TOOLS 2012, “The Triumph of Objects”, Prague in May: Call for Workshops

Workshop proposals are invited for TOOLS 2012, “The Triumph of Objects” (tools.ethz.ch), to be held in Prague from May 28 to June 1. TOOLS is a federated set of conferences:

  • TOOLS EUROPE 2012: 50th International Conference on Objects, Models, Components, Patterns.
  • ICMT 2012: 5th International Conference on Model Transformation.
  • Software Composition 2012: 10th International Conference.
  • TAP 2012: 6th International Conference on Tests And Proofs.
  • MSEPT 2012: International Conference on Multicore Software Engineering, Performance, and Tools.

Workshops, which normally last one or two days, provide organizers and participants with an opportunity to exchange opinions, advance ideas, and discuss preliminary results on current topics. The focus can be on in-depth research topics related to the themes of the TOOLS conferences, on best practices, on applications and industrial issues, or on some combination of these.

SUBMISSION GUIDELINES

Submission of a proposal implies the organizers’ commitment to organize and lead the workshop personally if it is accepted. The proposal should include:

  • Workshop title.
  • Names and short bio of organizers.
  • Proposed duration.
  • Summary of the topics, goals and contents (guideline: 500 words).
  • Brief description of the audience and community to which the workshop is targeted.
  • Plans for publication, if any.
  • Tentative Call for Papers.

Acceptance criteria are:

  • Organizers’ track record and ability to lead a successful workshop.
  • Potential to advance the state of the art.
  • Relevance of topics and contents to the topics of the TOOLS federated conferences.
  • Timeliness and interest to a sufficiently large community.

Please send the proposals to me (Bertrand.Meyer AT inf.ethz.ch), with a Subject header including the words “TOOLS WORKSHOP”. Feel free to contact me if you have any questions.

DATES

  • Workshop proposal submission deadline: 17 February 2012.
  • Notification of acceptance or rejection: as promptly as possible, and no later than 24 February 2012.
  • Workshops: 28 May to 1 June 2012.

 


All Bugs Great and Small

(Acknowledgment: this article came out of a discussion with Manuel Oriol, Carlo Furia and Yi Wei. The material is largely theirs but the opinions are mine.)

A paper on automatic testing, submitted some time ago, received the following referee comment:

The case study seems unrealistic and biased toward the proposed technique. 736 unique faults found in 92 classes means at least 8 unique faults per class at the same time. I have never seen in all my life a published library with so many faults …

This would be a good start for a discussion of what is wrong with refereeing in computer science today (on the negativism of our field see [1]); we have a referee who mistakes experience for expertise, prejudice for truth, and refuses to accept carefully documented evidence because “in all his life”, presumably a rich and rewarding life, he has never seen anything of the sort. That is not the focus of the present article, however; arrogant referees eventually retire and good papers eventually get published. The technical problems are what matters. The technical point here is about testing.

Specifically, what bugs are worth finding, and are high bug rates extraordinary?

The paper under review was a step in the work around the automatic testing tool AutoTest (see [2] for a slightly older overall description and [3] for the precise documentation). AutoTest applies a fully automatic strategy, exercising classes and their routines without the need to provide test cases or test oracles. What makes such automation possible is the combination of random generation of tests and reliance on contracts to determine the success of tests.
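
The principle is easy to illustrate. The following is a minimal Python sketch, not a description of AutoTest itself (which works on Eiffel classes and is far more elaborate); the toy routine, the contract functions and the numeric ranges are all invented for the example. The precondition filters the randomly generated inputs and the postcondition serves as the oracle:

    import random

    def withdraw(balance, amount):
        # Toy routine under test; returns the new balance.
        # Deliberately buggy when the full balance is withdrawn.
        if amount == balance:
            return balance            # bug: should return 0
        return balance - amount

    def pre(balance, amount):
        # Precondition: can only withdraw what is available.
        return balance >= 0 and 0 <= amount <= balance

    def post(balance, amount, result):
        # Postcondition: the balance decreases by exactly the amount withdrawn.
        return result == balance - amount

    def random_test(trials=10000):
        # Random testing with the contract as oracle: no hand-written test
        # cases, no hand-written expected outputs.
        for _ in range(trials):
            balance = random.randint(0, 1000)
            amount = random.randint(0, 1000)
            if not pre(balance, amount):
                continue                          # input outside the specification
            if not post(balance, amount, withdraw(balance, amount)):
                return (balance, amount)          # a counterexample: test failed
        return None                               # no fault found in this session

    print(random_test())    # typically prints a pair such as (7, 7)

Because a violated postcondition is itself the verdict, no human needs to predict the expected output of each test; that is what makes the process fully automatic.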

For several years we have regularly subjected libraries, in particular the EiffelBase data structure library, to long AutoTest sessions, and we keep finding bugs (the better term is faults). The fault counts are significant; they are what caught the referee’s eye. In fact we have had such comments before: I don’t believe your fault counts for production software; your software must be terrible!

Well, maybe.

My guess is that in fact EiffelBase has no more bugs, and possibly far fewer bugs, than other “production” code. The difference is that the AutoTest framework performs far more exhaustive testing than is usually practiced.

This is only a conjecture; unlike the referee I do not claim any special powers that make my guesses self-evident. Until we get test harnesses comparable to AutoTest for environments other than Eiffel and, just as importantly, libraries that are fully equipped with contracts, enabling the detection of bugs that otherwise might not come to light, we will not know whether the explanation is the badness of EiffelBase or the goodness of AutoTest.

What concrete, incontrovertible evidence demonstrates is that systematic random testing does find faults that human testers typically do not. In a 2008 paper [4] with Ilinca Ciupa, Manuel Oriol and Alexander Pretschner, we ran AutoTest on some classes and compared the results with those of human testers (as well as actual bug reports from the field, since this was released software). We found that the two categories are complementary: human testers find faults that are still beyond the reach of automated tools, but they typically never find certain faults that AutoTest, with its stubborn dedication to leaving no stone unturned, routinely uncovers. We keep getting surprised at bugs that AutoTest detects and which no one had sought to test before.

A typical set of cases that human programmers seldom test, but which frequently leads to uncovering bugs, involves boundary values. AutoTest, in its “random-plus” strategy, always exercises special values of every type, such as MAXINT, the maximum representable integer. Programmers don’t. They should — all testing textbooks tell them so — but they just don’t, and perhaps they can’t, as the task is often too tedious for a manual process. It is remarkable how many routines using integers go berserk when you feed them MAXINT or its negative counterpart. Some of the fault counts that seem so outrageous to our referee come directly from trying such values.
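
The generation strategy itself is simple to sketch. Here is an illustrative Python fragment, not AutoTest’s actual generator (the list of special values and the 25% probability are arbitrary choices for the example):

    import random
    import sys

    # Boundary values that plain random generation would almost never produce,
    # yet frequently expose faults.
    SPECIAL_INTEGERS = [0, 1, -1, sys.maxsize, -sys.maxsize - 1]

    def random_plus_integer(special_probability=0.25):
        # With some probability pick a boundary value, otherwise an ordinary one.
        if random.random() < special_probability:
            return random.choice(SPECIAL_INTEGERS)
        return random.randint(-10**6, 10**6)

    print([random_plus_integer() for _ in range(10)])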

Some would say the cases are so extreme as to be insignificant. Wrong. Many documented software failures and catastrophes are due to untested extreme values. Perhaps the saddest is the case of the Patriot anti-missile system, which at the beginning of the first Gulf war was failing to catch Scud missiles, resulting in one case in the killing of twenty-eight American soldiers in an army barracks. It was traced to a software error [5]. To predict the position of the incoming missile, the computation multiplied time by velocity. The time computation used multiples of the time unit, a tenth of a second, stored in a 24-bit register and hence approximated. After enough time, long enough to elapse on the battlefield, but longer than what the tests had exercised, the accumulated error became so large as to cause a significant — and in the event catastrophic — deviation. The unique power of automatic testing is that unlike human testers it is not encumbered by a priori notions of a situation being extreme or unlikely. It tries all the possibilities it can.
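
The arithmetic behind that failure is easy to reproduce. The sketch below uses the figures commonly cited from the GAO report (about 100 hours of uptime, a Scud velocity of roughly 1676 m/s) as rough parameters; it illustrates the accumulation effect, and is not a reconstruction of the actual Patriot code:

    # 0.1 second has no exact binary representation; the Patriot software in effect
    # used a 24-bit truncation, i.e. floor(0.1 * 2**24) / 2**24.
    true_tick = 0.1
    stored_tick = int(0.1 * 2**24) / 2**24        # about 0.0999999046 s
    error_per_tick = true_tick - stored_tick      # about 9.5e-8 s

    hours_up = 100                                # approximate uptime at Dhahran
    ticks = hours_up * 3600 * 10                  # ten ticks per second
    time_error = ticks * error_per_tick           # about 0.34 s

    scud_speed = 1676                             # approximate velocity in m/s
    position_error = time_error * scud_speed      # several hundred meters

    print(f"time error: {time_error:.3f} s, position error: {position_error:.0f} m")

Run as is, the sketch reports a time error of about 0.34 seconds and a position error of over 500 meters: an “extreme case” only in the sense that no test had run long enough to reach it.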

The following example, less portentous in its consequences but just as instructive, is directly related to AutoTest. For his work on model-based contracts [6] performed as part of his PhD completed in 2008 at ETH, Bernd Schoeller developed classes representing the mathematical notion of set. There were two implementations; it turned out that one of them, say SET1, uses data structures that make the subset operation ⊆ easy to program efficiently; in the corresponding class, the superset operation a ⊇ b is then simply implemented as b ⊆ a. In the other implementation, say SET2, it is the other way around: ⊇ is directly implemented, and a ⊆ b is implemented as b ⊇ a. This all uses a nice object-oriented structure, with a general class SET defining the abstract notion and the two implementations inheriting from it.

Now you may see (if you have developed a hunch for automated testing) where this is heading: AutoTest knows about polymorphism and dynamic binding, and tries all the type combinations that make sense. One of the generated test cases has two variables s1 and s2 of type SET, and tries out s2 ⊆ s1; in one of the combinations that AutoTest tries, s1 is dynamically and polymorphically of type SET1 and s2 of type SET2. The version of ⊆ that it will use is from SET2, so it actually calls s1 ⊇ s2; but this is the SET1 version of ⊇, which goes back to SET2’s ⊆. The process would go on forever, were it not for a timeout in AutoTest that uncovers the fault. Bernd Schoeller had tried AutoTest on these classes not in the particular expectation of finding bugs, but more as a favor to the then incipient development of AutoTest, to see how well the tool could handle model-based contracts. The uncovering of the fault, testament to the power of relentless, systematic automatic testing, surprised us all.
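
The pitfall is easy to reproduce in miniature. The following Python transliteration keeps only the structure described above (the real classes were Eiffel, inherited from a common deferred class SET, and were of course far richer; the class and method names here are made up):

    class Set1:
        # Subset is the "native" operation; superset is defined by symmetry.
        def __init__(self, elements):
            self.elements = set(elements)

        def is_subset(self, other):
            return all(x in other.elements for x in self.elements)

        def is_superset(self, other):
            return other.is_subset(self)          # delegate to the other object

    class Set2:
        # The mirror image: superset is native, subset delegates.
        def __init__(self, elements):
            self.elements = set(elements)

        def is_superset(self, other):
            return all(x in self.elements for x in other.elements)

        def is_subset(self, other):
            return other.is_superset(self)        # delegate to the other object

    s1 = Set1({1, 2})
    s2 = Set2({1, 2, 3})
    print(s2.is_superset(s1))    # fine: Set2's native implementation
    print(s2.is_subset(s1))      # s2 -> s1.is_superset(s2) -> s2.is_subset(s1) -> ...
                                 # RecursionError: the call bounces between the two classes

The first call terminates through Set2’s native implementation; the second bounces forever between the two objects, which is exactly what AutoTest’s timeout detected.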

In this case no contract was violated; the problem was infinite recursion, due to a use of O-O techniques that, for all its elegance, had failed to notice a pitfall. In most cases, AutoTest finds the faults through violated postconditions or class invariants. This is one more reason to be cautious about sweeping generalizations of the kind “I do not believe these bug rates, no serious software that I have seen shows anything of the sort!”. Contracts express semantic properties of the software, which the designer takes care of stating explicitly. In run-of-the-mill code that does not benefit from such care, lots of things can go wrong but remain undetected during testing, only to cause havoc much later during some actual execution.

When you find such a fault, it is irrelevant that the case is extreme, or special, or rare, or trivial. When a failure happens it no longer matters that the fault was supposed to be rare; and you will only know how harmful it is when you deal with the consequences. Testing, single-mindedly devoted to the uncovering of faults [7], knows no such distinction: it hunts all bugs large and small.

References

[1] The nastiness problem in computer science, article on the CACM blog, 22 August 2011, available here.

[2] Bertrand Meyer, Ilinca Ciupa, Andreas Leitner, Arno Fiva, Yi Wei and Emmanuel Stapf: Programs that Test Themselves, IEEE Computer, vol. 42, no. 9, pages 46-55, September 2009, also available here.

[3] Online AutoTest documentation, available here at docs.eiffel.com.

[4] Ilinca Ciupa, Bertrand Meyer, Manuel Oriol and Alexander Pretschner: Finding Faults: Manual Testing vs. Random+ Testing vs. User Reports, in ISSRE ’08, Proceedings of the 19th IEEE International Symposium on Software Reliability Engineering, Redmond, November 2008, available here.

[5] US General Accounting Office: GAO Report: Patriot Missile Defense: Software Problem Led to System Failure at Dhahran, Saudi Arabia, February 4, 1992, available here.

[6] Bernd Schoeller, Tobias Widmer and Bertrand Meyer: Making Specifications Complete Through Models, in Architecting Systems with Trustworthy Components, eds. Ralf Reussner, Judith Stafford and Clemens Szyperski, Lecture Notes in Computer Science, Springer-Verlag, 2006, available here.

[7] Bertrand Meyer: Seven Principles of Software Testing, in IEEE Computer, vol. 41, no. 10, pages 99-101, August 2008, available here.


Testing insights

Lionel Briand and his group at the Simula Research Laboratory in Oslo have helped raise the standard for empirical research in testing and other software engineering practices by criticizing work that in their opinion relies on wrong assumptions or insufficiently supported evidence. In one of their latest papers [1] they take aim at “Adaptive Random Testing” (ART); one of the papers they criticize is from our group at ETH, on the ARTOO extension [2] to this testing method. Let’s examine the criticism!

We need a bit of background on random testing, ART, and ARTOO:

  • Random testing tries inputs based on a random process rather than attempting a more sophisticated strategy; it was once derided as silly [3], but has emerged in recent years as a useful technique. Our AutoTest tool [4], now integrated in EiffelStudio, has shown it to be particularly effective when applied to code equipped with contracts, which provide built-in test oracles. As a result of this combination, testing can be truly automatic: the two most tedious tasks of traditional testing, test case preparation and test oracle definition, can be performed without human intervention.
  • ART, developed by Chen and others [5], makes random testing not entirely random by ensuring that the inputs are spread reasonably evenly in the input domain.
  • ARTOO, part of Ilinca Ciupa’s PhD thesis on testing defended in 2008, generalized ART to object-oriented programs by defining a notion of distance between objects; the ARTOO strategy avoids choosing objects that are too close to each other. The distance formula, which you can find in [2], combines three elementary distances: between the types of the objects involved, between the values in their primitive fields (integers etc.), and, recursively, between the objects to which they have references. (A small sketch of such a distance appears right after this list.)
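
To give a flavor of what such an object distance looks like, here is a minimal Python sketch; the weights, the depth bound, the crude treatment of types and primitive fields, and the example Account class are all invented for the illustration — the actual formula is the one given in [2]:

    def object_distance(a, b, depth=2, w_type=1.0, w_field=1.0, w_ref=0.5):
        # Combine three elementary distances: type, primitive fields, references.
        if depth == 0 or a is None or b is None:
            return 0.0 if a is b else 1.0

        # 1. Type distance: 0 if same dynamic type, 1 otherwise (crude).
        type_d = 0.0 if type(a) is type(b) else 1.0

        # 2. Field distance over primitive attributes common to both objects.
        # 3. Reference distance, computed recursively on object-valued attributes.
        field_d, ref_d, n_fields, n_refs = 0.0, 0.0, 0, 0
        common = set(vars(a)) & set(vars(b)) if hasattr(a, "__dict__") and hasattr(b, "__dict__") else set()
        for name in common:
            va, vb = getattr(a, name), getattr(b, name)
            if isinstance(va, (int, float, bool, str)) and isinstance(vb, (int, float, bool, str)):
                field_d += 0.0 if va == vb else 1.0
                n_fields += 1
            else:
                ref_d += object_distance(va, vb, depth - 1)
                n_refs += 1

        field_d = field_d / n_fields if n_fields else 0.0
        ref_d = ref_d / n_refs if n_refs else 0.0
        return w_type * type_d + w_field * field_d + w_ref * ref_d

    class Account:
        def __init__(self, owner, balance, linked=None):
            self.owner, self.balance, self.linked = owner, balance, linked

    a = Account("alice", 100)
    b = Account("bob", 100, linked=a)
    print(object_distance(a, b))

An ART-style generator then keeps a pool of candidate objects and, at each step, picks the candidate farthest (by such a distance) from the objects already used, which is where the computational overhead discussed below comes from.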

Arcuri and Briand dispute the effectiveness of ART and criticize arguments that various papers have used to show its effectiveness. About the ARTOO paper they write

The authors concluded that ART was better than random testing since it needed to sample less test cases before finding the first failure. However, ART was also reported as taking on average 1.6 times longer due to the distance calculations!

To someone not having read our paper, the comment and the exclamation mark would seem to suggest that the paper somehow downplays this property of random testing, but in fact it stresses it repeatedly. The property appears for example in boldface as part of the caption to Table 2: “In most cases ARTOO requires significantly less tests to find a fault, but entails a time overhead”, and again in boldface in the caption to Table 3: “The overhead that the distance calculations introduce in the testing process causes ARTOO to require on average 1.6 times more time than RAND to find the first fault”.

There is no reason, then, to criticize the paper on this point. It reports the results clearly and fairly.

If we move the focus from the paper to the method, however, Arcuri and Briand have a point. As they correctly indicate, the number of tests to first fault is not a particularly useful criterion. In fact I argued against it in another paper on testing [6]:

The number of tests is not that useful to managers, who need help deciding when to stop testing and ship, or to customers, who need an estimate of fault densities. More relevant is the testing time needed to uncover the faults. Otherwise we risk favoring strategies that uncover a failure quickly but only after a lengthy process of devising the test; what counts is total time. This is why, just as flies get out faster than bees, a seemingly dumb strategy such as random testing might be better overall.

(To understand the mention of flies and bees you need to read [6].) The same article states, as its final principle:

Principle 7: Assessment criteria. A testing strategy’s most important property is the number of faults it uncovers as a function of time.

The ARTOO paper, which appeared early in our testing work, used “time to first failure” because it has long been a standard criterion in the testing literature, but it should have applied our own advice and focused on more important properties of testing strategies.

The “principles” paper [6] also warned against a risk awaiting anyone looking for new test strategies:

Testing research is vulnerable to a risky thought process: You hit upon an idea that seemingly promises improvements and follow your intuition. Testing is tricky; not all clever ideas prove helpful when submitted to objective evaluation.

The danger is that the clever ideas may result in so much strategy setup time that any benefit on the rest of the testing process is lost. This danger threatens testing researchers, including those who are aware of it.

The idea of ARTOO and object distance remains attractive, but more work is needed to make it an effective contributor to automated random testing and demonstrate that effectiveness. We can be grateful to Arcuri and Briand for their criticism, and I hope they continue to apply their iconoclastic zeal to empirical software engineering work, ours included.

I have objections of my own to their method. They write that “all the work in the literature is based either on simulations or case studies with unreasonably high failure rates”. This is incorrect for our work, which does not use simulations, relying instead on actual, delivered software, where AutoTest routinely finds faults in an automatic manner.

In contrast, however, Arcuri and Briand rely on fault seeding (also known as fault introduction or fault injection):

To obtain more information on how shapes appear in actual SUT, we carried out a large empirical analysis on 11 programs. For each program, a series of mutants were generated to introduce faults in these programs in a systematic way. Faults generated through mutation [allow] us to generate a large number of faults, in an unbiased and varied manner. We generated 3727 mutants and selected the 780 of them with lower detection probabilities to carry out our empirical analysis of faulty region shapes.

In the absence of objective evidence attesting to the realism of fault seeding, I do not believe any insights into testing obtained from such a methodology. In fact we adopted, from the start of our testing work, the principle that we would never rely on fault seeding. The problem with seeded faults is that there is no guarantee they reflect the true faults that programmers make, especially the significant ones. Techniques for fault seeding are understandably good at introducing typographical mistakes, such as a misspelling or the replacement of a “+” by a “-”; but these are not interesting kinds of fault, as they are easily caught by the compiler, by inspection, by low-tech static tools, or by simple tests. Interesting faults are those resulting from a logical error in the programmer’s mind, and in my experience (I do not know of good empirical studies on this topic) seeding techniques do not generate them.
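
For readers who have not seen mutation in action, a typical seeding operator boils down to something like the following Python sketch; real mutation tools work on parsed programs and apply many operators, but this deliberately naive text-based version shows the kind of change involved:

    import re

    def arithmetic_operator_mutants(source):
        # Generate one mutant per occurrence of "+" by replacing it with "-":
        # exactly the kind of typographical fault that seeding produces easily.
        mutants = []
        for match in re.finditer(r"\+", source):
            i = match.start()
            mutants.append(source[:i] + "-" + source[i + 1:])
        return mutants

    original = "def balance_after(balance, deposit): return balance + deposit"
    for m in arithmetic_operator_mutants(original):
        print(m)   # one mutant: "... return balance - deposit"

Such a change is precisely the “replacement of a ‘+’ by a ‘-’” mentioned above: easy to generate in large numbers, but not obviously representative of the logical errors that matter.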

For these reasons, all our testing research has worked on real software, and all the faults that AutoTest has found were real faults, resulting from a programmer’s mistake.

We can only apply this principle because we work with software equipped with contracts, where faults will be detected through the automatic oracle of a violated assertion clause. The principle is essential, however, to the credibility and practicality of any testing strategy; until I see evidence to the contrary, I will continue to disbelieve any testing insights resulting from studies based on artificial fault injection.

References

[1] Andrea Arcuri and Lionel Briand: Adaptive Random Testing: An Illusion of Effectiveness, in ISSTA 2011 (International Symposium on Software Testing and Analysis), available here.

[2] Ilinca Ciupa, Andreas Leitner, Manuel Oriol and Bertrand Meyer: ARTOO: Adaptive Random Testing for Object-Oriented Software, in ICSE 2008: Proceedings of 30th International Conference on Software Engineering, Leipzig, 10-18 May 2008, IEEE Computer Society Press, 2008, also available here.

[3] Glenford J. Myers. The Art of Software Testing. Wiley, New York, 1979. Citation:

Probably the poorest methodology of all is random-input testing: the process of testing a program by selecting, at random, some subset of all possible input values. In terms of the probability of detecting the most errors, a randomly selected collection of test cases has little chance of being an optimal, or close to optimal, subset. What we look for is a set of thought processes that allow one to select a set of test data more intelligently. Exhaustive black-box and white-box testing are, in general, impossible, but a reasonable testing strategy might use elements of both. One can develop a reasonably rigorous test by using certain black-box-oriented test-case-design methodologies and then supplementing these test cases by examining the logic of the program (i.e., using white-box methods).

[4] Bertrand Meyer, Ilinca Ciupa, Andreas Leitner, Arno Fiva, Yi Wei and Emmanuel Stapf: Programs that Test Themselves, IEEE Computer, vol. 42, no. 9, pages 46-55, September 2009, available here. For practical uses of AutoTest within EiffelStudio see here.

[5] T. Y. Chen, H. Leung and I. K. Mak: Adaptive Random Testing, in Advances in Computer Science, ASIAN 2004, Higher-Level Decision Making, ed. M. J. Maher, Lecture Notes in Computer Science 3321, Springer-Verlag, pages 320-329, 2004, available here.

[6] Bertrand Meyer: Seven Principles of Software Testing, in IEEE Computer, vol. 41, no. 10, pages 99-101, August 2008, also available here.
