Archive for the ‘Software verification’ Category.

Agile methods: the good, the bad and the ugly

It was a bit imprudent last Monday to announce the continuation of the SCOOP discussion for this week; with the TOOLS conference happening now and many satellite events such as the Eiffel Design Feast of the past weekend and today’s “New Eiffel Technology Community” workshop, there is not enough time for a full article. Next week might also be problematic. The SCOOP series will resume, but in the meantime I will report on other matters.

As something that can be conveniently typed in while sitting in the back of the TOOLS room during fascinating presentations, here is a bit of publicity for the next round of one-day seminars for industry — “Compact Course” is the official terminology — that I will be teaching at ETH in Zurich next November (with one in October), some of them with colleagues. It is the most extensive series of sessions we have ever offered; you can see the full programs and registration information here.

  • Software Engineering for Outsourced and Distributed Development, 27 October 2011
    Taught with Peter Kolb and Martin Nordio
  • Requirements Engineering, 17 November
  • Software Testing and Verification: state of the art, 18 November
    With Carlo Furia and Sebastian Nanz
  • Agile Methods: the Good, the Bad and the Ugly, 23 November
  • Concepts and Constructs of Concurrent Computation, 24 November
    With Sebastian Nanz
  • Design by Contract, 25 November

The agile methods course is new; its summary reads almost like a little blog article, so here it is.

Agile methods: the Good, the Bad and the Ugly

Agile methods are wonderful. They’ll give you software in no time at all, turn your customers and users into friends, catch bugs before they catch you, change the world, and boost your love life. Do you believe these claims (even excluding the last two)? It’s really difficult to form an informed opinion, since most of the presentations of eXtreme Programming and other agile practices are intended to promote them (and the consultants to whom they provide a living), not to deliver an objective assessment.

If you are looking for a guru-style initiation to the agile religion, this is not the course for you. What it does is to describe in detail the corpus of techniques covered by the “agile” umbrella (so that you can apply them effectively to your developments), and assess their contribution to software engineering. It is neither “for” nor “against” agile methods but fundamentally descriptive, pedagogical, objective and practical. The truth is that agile methods include some demonstrably good ideas along with some whose benefits are at best dubious. In addition (and this should not be a surprise) they cannot make the fundamental laws of software engineering go away.

Agile methods have now been around for more than a decade, during which many research teams, applying proven methods of experimental science, have performed credible empirical studies of how well the methods really work and how they compare to more traditional software engineering practices. This important body of research results, although not widely known, is critical to managers and developers in industry for deciding whether and how to use agile development. The course surveys these results, emphasizing the ones most directly relevant to practitioners.

A short discussion session will enable participants with experience in agile methods to share their results.

Taking this course will give you a strong understanding of agile methods, and a clear view of when, where and how to apply them.

Schedule

Morning session: A presentation of agile methods

  • eXtreme Programming, pair programming, Scrum, Test-Driven Development, continuous integration, refactoring, stakeholder involvement, feature-driven development etc.
  • The agile lifecycle.
  • Variants: lean programming etc.

Afternoon session (I): Assessment of agile methods

  • The empirical software engineering literature: review of available studies. Assessment of their value. Principles of empirical software engineering.
  • Agile methods under the scrutiny of empirical research: what helps, what harms, and what has no effect? How do agile methods fare against traditional techniques?
  • Examples: pair programming versus code reviews; tests versus specifications; iterative development versus “Big Upfront Everything”.

Afternoon session (II): Discussion and conclusion

This final part of the course will present, after a discussion session involving participants with experience in agile methods, a summary of the contribution of agile methods to software engineering.

It will conclude with advice for organizations involved in software development and interested in applying agile methods in their own environment.

Target groups

CIOs; software project leaders; software developers; software testers and QA engineers.


Publish no loop without its invariant

 

There may be no more blatant example of  the disconnect between the software engineering community and the practice of programming than the lack of widespread recognition for the fundamental role of loop invariants. 

Let’s recall the basics, as they are taught in the fourth week or so of the ETH introductory programming course [1], from the very moment the course introduces loops. A loop is a mechanism to compute a result by successive approximations. To describe the current approximation, there is a loop invariant. The invariant must be:

  1. Weak enough that we can easily ensure it on a subset, possibly trivial, of our data set. (“Easily” means that this task is substantially easier than the full problem we are trying to solve.)
  2. Versatile enough that if it holds on some subset of the data we can easily (in the same sense) make it hold on a larger subset — even if only slightly larger.
  3. Strong enough that, when it covers the entire data, it yields the result we seek.

As a simple example, assume we seek the maximum of an array a of numbers, indexed from 1. The invariant states that Result is the maximum of the array slice from 1 to i. Indeed:

  1. We can trivially obtain the invariant by setting Result to be a [1]. (It is then the maximum of the slice a [1..1].)
  2. If the invariant holds, we can extend it to a slightly larger slice — larger by just one element — by increasing i by 1 and updating Result to be the greater of the previous Result and the element a [i] (for the new  i).
  3. When the slice covers the entire array — that is, i = n — the invariant tells us that Result is the maximum of the slice a [1..n], giving us the result we seek.

You cannot understand the corresponding program text

    from
        i := 1; Result := a [1]
    until i = n loop
        i := i + 1
        if Result < a [i] then Result := a [i] end
    end

without understanding the loop invariant. That is true even of people who have never heard the term: they will somehow form a mental image of the intermediate situation that justifies the algorithm. With the formal notion, the reasoning becomes precise and checkable. The difference is the same as between a builder who has no notion of theory, and one who has learned the laws of mechanics and construction engineering.
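
Eiffel lets you write the invariant directly into the loop. Here is a sketch of the same program text with the invariant spelled out; the slice property, which would require a helper query to state as a boolean expression, appears as a comment clause:

    from
        i := 1; Result := a [1]
    invariant
        index_in_range: 1 <= i and i <= n
        -- Result is the maximum of the slice a [1 .. i]
    until i = n loop
        i := i + 1
        if Result < a [i] then Result := a [i] end
    end

When assertion monitoring is enabled, the boolean clauses of such a loop invariant are checked at run time after the initialization and after every iteration.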

As another example, take Levenshtein distance (also known as edit distance): the length of the shortest sequence of operations (insert, delete or replace a character) that will transform one string into another. The algorithm (a form of dynamic programming) fills in a matrix top to bottom and left to right, each entry being one plus the minimum of the three neighboring ones to the top and left, except if the corresponding characters in the strings are the same, in which case it keeps the top-left neighbor’s value. The basic operation in the loop body reads

      if source [i] = target [j] then
           dist [i, j] := dist [i - 1, j - 1]
      else
           dist [i, j] := min (dist [i, j - 1], dist [i - 1, j - 1], dist [i - 1, j]) + 1
      end

You can run this and see it work, filling the array cell after cell, then delivering the result in dist [M, N] (the bottom-right entry, M and N being the lengths of the source and target strings). Or just watch the animation on page 60 of [2]. It works, but why it works remains a total mystery until someone tells you the invariant:

Every value of dist filled so far, say at position [p, q], is the minimum distance from the initial substring of the source, made of its characters at positions 1 to p, to the initial substring of the target, made of its characters at positions 1 to q.

This is the rationale for the above code: we want to compute the next value, at position [i, j]. If the corresponding characters in the source and target are the same, no operation is needed to extend the result we had in the top-left neighbor (position [i-1, j-1]). If not, the best we can do is one more than the best result obtained for our three neighbors: through the insertion of target [j] if the minimum comes from the neighbor to the left, [i, j-1]; through the deletion of source [i] if it comes from the neighbor above, [i-1, j]; or through a replacement if it comes from the top-left neighbor, [i-1, j-1].

With this explanation, a mysterious, almost hermetic algorithm instantly becomes crystal-clear. 
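
To follow the post’s own advice, here is a sketch of the full computation in the same style, with its invariants written out as comment clauses. It assumes that dist is indexed from 0, that row 0 and column 0 already hold the distances to and from empty prefixes (dist [p, 0] = p and dist [0, q] = q), and that min is the three-argument minimum used above:

    from
        i := 1
            -- Row 0 and column 0 of dist are assumed already filled.
    invariant
        -- Column 0 of dist, and all of its rows 0 through i - 1, hold correct values:
        -- dist [p, q] is the distance from the first p characters of the source
        -- to the first q characters of the target.
    until i > M loop
        from
            j := 1
        invariant
            -- The outer invariant holds; in addition, in row i the entries
            -- dist [i, 0] through dist [i, j - 1] are also correct.
        until j > N loop
            if source [i] = target [j] then
                dist [i, j] := dist [i - 1, j - 1]
            else
                dist [i, j] := min (dist [i, j - 1], dist [i - 1, j - 1], dist [i - 1, j]) + 1
            end
            j := j + 1
        end
        i := i + 1
    end

At exit of the outer loop i = M + 1, so the invariant guarantees in particular that dist [M, N] is the sought distance.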

Yet another example is in-place linked list reversal. The body of the loop is a pointer ballet:

temp := previous
previous := next
next := next.right
previous.put_right (temp)

with proper initialization (set next to the value of first and previous to Void) and finalization (set first to the value of previous). This is not the only possible implementation, but all variants of the algorithm use a very similar scheme.

The code again looks pretty abstruse, and hard to get right if you do not remember it exactly. As in the other examples, the only way to understand it is to see the invariant, describing the intermediate situation after a typical loop iteration. If the original situation was this:

[Figure: list reversal, initial state]

then after a few iterations the algorithm yields this intermediate situation: 

[Figure: list reversal, intermediate state]

 The figure illustrates the invariant:

Starting from previous and repeatedly following right links yields the elements of some initial part of the list, but in the reverse of their original order; starting from next and following right links yields the remaining elements, in their original order. 

Then it is clear what the loop body with its pointer ballet is about: it moves by one position to the right the boundary between the two parts, making sure that the invariant holds again in the new state, with one more element in the first (yellow) part and one fewer in the second (pink) part. At the end the second part will be empty and the first part will encompass all elements, so that (after resetting first to the value of previous) we get the desired result.
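
Putting the pieces together, here is a sketch of the complete reversal loop, with the invariant stated as comment clauses (it involves list-wide properties that a plain boolean expression cannot capture directly); the variables, as well as right and put_right, are those of the code above:

    from
        next := first; previous := Void
    invariant
        -- Following right links from previous yields some initial elements
        -- of the list, in the reverse of their original order; following
        -- right links from next yields the remaining elements, in their
        -- original order.
    until next = Void loop
        temp := previous
        previous := next
        next := next.right
        previous.put_right (temp)
    end
    first := previous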

This example is particularly interesting because list reversal is a standard interview question for programmers seeking a job; as a result, dozens of pages around the Web helpfully present algorithms for the benefit of job candidates. I ran a search on “List reversal algorithm” [3], which yields many such pages. It is astounding to see that among the first fifteen hits or so, which include pages from programming courses at both Stanford and MIT, not a single one mentions invariants, or (even without using the word) gives the above explanation. The situation is all the more bizarre in that many of these pages — read them for yourself! — go into intricate details about variants of the pointer manipulations. There are essentially no correctness arguments.

If you go a bit further down the search results, you will find some papers that do reference invariants, but here is the catch: rather than programming or algorithms papers, they are papers about software verification, such as one by Richard Bornat which uses a low-level (C) version of the example to illustrate separation logic [4]. These are good papers but they are completely distinct from those directed at ordinary programmers, who simply wish to learn a basic algorithm, understand it in depth, and remember it on the day of the interview and beyond.

This chasm is wrong. Software verification techniques are not just good for the small phalanx of experts interested in formal proofs. The basic ideas have potential applications to the daily business of programming, as the practice of Eiffel has shown (this is the concept of  “Verification As a Matter Of Course” briefly discussed in an earlier post [5]). Absurdly, the majority of programmers do not know them.

It’s not that they cannot do their job: somehow they eke out good enough results, most of the time. After all, the European cathedrals of the Middle Ages were built without the benefit of sophisticated mathematical models, and they still stand. But today we would not hire a construction engineer who had not studied the appropriate mathematical techniques. Why should we make things different for software engineering, and deprive practitioners of the benefits of solid, well-accepted theory?

As a modest first step, there is no excuse, ever, for publishing a loop without the basic evidence of its adequacy: the loop invariant.

References

[1] Bertrand Meyer: Touch of Class: Learning to Program Well, Using Objects and Contracts, Springer, 2009. See course page (English version) here.

[2] Course slides on control structures,  here in PowerPoint (or here in PDF, without the animation); see example starting on page 51, particularly the animation on page 54. More recent version in German here (and in PDF here), animation on page 60.

[3] For balance I ran the search using Qrobe, which combines results from Ask, Bing and Google.

[4] Richard Bornat, Proving Pointer Programs in Hoare Logic, in  MPC ’00, 5th International Conference on Mathematics of Program Construction, 2000, available here.

[5] Bertrand Meyer, Verification as a Matter of Course, a post on this blog.


Analyzing a software failure

More than once I have emphasized here [1] [2] the urgency of rules requiring systematic a posteriori analysis of software mishaps that have led to disasters. I have a feeling that many more posts will be necessary before the idea registers.

Some researchers are showing the way. In a June 2009 article [4], Tetsuo Tamai from the University of Tokyo published a fascinating dissection of the 2005 Mizuho Securities incident at the Tokyo Stock Exchange, where market havoc resulted from a software fault that prevented proper execution of the cancel command after an employee who wanted to sell one share at 610,000 yen mistakenly switched the two numbers, entering an order to sell 610,000 shares at 1 yen.

I found out about the article only recently, while browsing Dines Bjørner’s page and hitting upon an unpublished paper [3] in which Bjørner proposes a mathematical model for the trading rules. Tamai’s article deserves to be widely read.

References

[1] The one sure way to advance software engineering: this blog, see here.
[2] Dwelling on the point: this blog, see here.
[3] Dines Bjørner: The TSE Trading Rules, version 2, unpublished report, 22 February 2010, available online.
[4] Tetsuo Tamai: Social Impact of Information System Failures, in IEEE Computer, vol. 42, no. 6, June 2009, pages 58-65, available online (with registration); the article’s text is also included in [3].


Programming on the cloud?

I am blogging live from the “Cloud Futures” conference organized by Microsoft in Redmond [1]. We had two excellent keynotes today, by Ed Lazowska [2] and David Patterson.

Lazowska emphasized the emergence of a new kind of science — eScience — based on analysis of enormous amounts of data. His key point was that this approach is a radical departure from “computational science” as we know it, based mostly on large simulations. With the eScience paradigm, the challenge is to handle the zillions of bytes of data that are available, often through continuous streams, in such fields as astronomy, oceanography or biology. It is unthinkable in his view to process such data through super-computing architectures specific to an institution; the Cloud is the only solution. One of the reasons (developed more explicitly in Patterson’s talk) is that cloud computing supports scaling down as well as scaling up. If your site experiences sudden bursts of popularity — say you get slashdotted — followed by downturns, you just cannot size the hardware right.

Lazowska also noted that it is impossible to convince your average  university president that Cloud is the way to go, as he will get his advice from the science-by-simulation  types. I don’t know who the president is at U. of Washington, but I wonder if the comment would apply to Stanford?

The overall argument for cloud computing is compelling. Of course the history of IT is a succession of swings of the pendulum between centralization and delocalization: mainframes, minis, PCs, client-server, “thin clients”, “The Network Is The Computer” (Sun’s slogan in the late eighties), smart clients, Web services and so on. But this latest swing seems destined to define much of the direction of computing for a while.

Interestingly, no speaker so far has addressed issues of how to program reliably for the cloud, even though cloud computing seems only to add orders of magnitude to the classical opportunities for messing up. Eiffel and contracts have a major role to play here.

More generally the opportunity to improve quality should not be lost. There is a widespread feeling (I don’t know of any systematic studies) that a non-negligible share of the results generated by computational science are just bogus, the product of old Fortran programs built by generations of graduate students with little understanding of software principles. At the very least, moving to cloud computing should encourage the use of 21st-century tools, languages and methods. Availability on the cloud should also enhance a critical property of good scientific research: reproducibility.

Software engineering is remarkably absent from the list of scientific application areas that speaker after speaker listed for cloud computing. Maybe software engineering researchers are timid, and do not think of themselves as deserving large computing resources; consider, however, all the potential applications, for example in program verification and empirical software engineering. The cloud is a big part of our own research in verification; in particular the automated testing paradigm pioneered by AutoTest [3] fits ideally with the cloud and we are actively working in this direction.

Lazowska mentioned that development environments are the ultimate application of cloud computing. Martin Nordio at ETH has developed, with the help of Le Minh Duc, a Master’s student at Hanoi University of Technology, a cloud-based version of EiffelStudio: CloudStudio, which I will present in my talk at the conference tomorrow. I’ll write more about it in later posts; just one note for the moment: no one should ever be forced again to update or commit.

References

[1] Program of the Cloud Futures conference.

[2] Keynote by Ed Lazowska. You can see his slides here.

[3] Bertrand Meyer, Arno Fiva, Ilinca Ciupa, Andreas Leitner, Yi Wei, Emmanuel Stapf: Programs That Test Themselves. IEEE Computer, vol. 42, no. 9, pages 46-55, September 2009; online version here.


Verification As a Matter Of Course

At the ACM Symposium on Applied Computing (SAC) in Sierre last week, I gave a talk entitled “How you will be programming in 10 years”, describing a number of efforts by various people, with a special emphasis on our work at both ETH and Eiffel Software, which I think point to the future of software development. Several people have asked me for the slides, so I am making them available [1].

It occurred to me after the talk that the slogan “Verification As a Matter Of Course” (VAMOC) characterizes the general idea well. The world needs verified software, but the software development community is reluctant to use traditional heavy-duty verification techniques. While some of the excuses are unacceptable, other sources of resistance are justified and it is our job to make verification part of the very fabric of everyday software development.

My bet, and the basis of a large part of both the Eiffel and the ETH verification work, is that it is possible to bring verification to practicing developers as a natural, unobtrusive component of the software development process, through the tools they use.

The talk also touches on concurrency, where many of the same ideas apply; CAMOC is the obvious next slogan.

Reference

[1] Slides of “How you will be programming in 10 years” talk (PDF).


The theory and calculus of aliasing

In a previous post I briefly mentioned some work that I am doing on aliasing. There is a draft paper [1] describing the theory, calculus, graphical notation (alias diagrams) and implementation. Here I will try to give an idea of what it’s about, with the hope that you will be intrigued enough to read the article. Even before you read it you might try out the implementation [2], a simple interactive interface with all the examples of the article.

What the article does not describe in detail — that will be for a companion paper — is how the calculus will be used as part of a general framework for developing object-oriented software proved correct from the start, the focus of our overall “Trusted Components” project [3]. Let me simply state that the computation of aliases is the key missing step in the effort to make correctness proofs a standard part of software development. This is a strong claim which requires some elaboration, but not here.

The alias calculus asks a simple question: given two expressions representing references (pointers),  can their values, at a given point in the program, ever reference the same object during execution?

As an example  application, consider two linked lists x and y, which can be manipulated with operations such as extend, which creates a new cell and adds it at the end of the list:

[Figure: the two linked lists x and y]

 The calculus makes it possible to prove that if  x and y are not aliased to each other, then none of the pointers in any of the cells in either of the lists can point to  (be aliased to) any cell of  the other. If  x is initially aliased to y, the property no longer holds. You can run the proof (examples 18 and 19) in the downloadable implementation.

The calculus gives a set of rules, each applying to a particular construct of the language, and listed below.

The rule for a construct p is of the form

          a |= p      =   a’
 

where a and a’ are alias relations; this states that executing p  in a state where the alias relation is a will yield the alias relation a’ in the resulting state. An alias relation is a symmetric, irreflexive relation; it indicates which expressions and variables can be aliased to each other in a given state.

The constructs p considered in the discussion are those of a simplified programming language; a modern object-oriented language such as Eiffel can easily be translated into that language. Some precision will be lost in the process, meaning that the alias calculus (itself precise) can find aliases that would not exist in the original program; if this prevents proofs of desired properties, the cut instruction discussed below serves to correct the problem.

The first rule is for the forget instruction (Eiffel: x := Void):

          a |= forget x       =   a \- {x}

where the \- operator removes from a relation all the pairs involving an element of a given set A (here {x}). In the case of object-oriented programming, with multidot expressions x.y.z, the application of this rule must remove all elements whose first component, here x, belongs to A.

The rule for creation is the same as for forget:

         a |= create x          =   a \- {x}

The two instructions have different semantics, but the same effect on aliasing.

The calculus has a rule for the cut instruction, which removes the connection between two expressions:

        a |= cut x, y       =   a − <x, y>

where − is set difference and <x, y> includes the pairs [x, y] and [y, x] (this is a special case of a general notation defined in the article, using the overline symbol). The cut instruction corresponds, in Eiffel, to check x /= y end: a hint given to the alias calculus (and proved through some other means, such as standard axiomatic semantics) that some references will not be aliased.

The rule for assignment is

      a |= (x := y)      =   given  b = a \- {x}   then   <b ∪ {x} × (b / y)> end

where b / y (“quotient”), similar to an equivalence class in an equivalence relation, is the set of elements aliased to y in b, plus y itself (remember that the relation is irreflexive). This rule works well for object-oriented programming thanks to the definition of the \- operator: in x := x.y, we must not alias x to x.y, although we must alias it to any z that was aliased to x.y.
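
As a small worked example (not from the article; x, y and z are just illustrative variables), suppose that before the assignment the only possible aliasing is between y and z, so that a = {<y, z>}, and consider the instruction x := y. The rule gives:

      b        =   a \- {x}    =   {<y, z>}          (x does not occur in a)
      b / y    =   {y, z}
      a |= (x := y)    =   <b ∪ {x} × (b / y)>    =   {<y, z>, <x, y>, <x, z>}

After the assignment, x can be aliased to y and also, through the previous aliasing of y, to z; the possible aliasing between y and z is unchanged.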

The paper introduces a graphical notation, alias diagrams, which makes it possible to reason effectively about such situations. Here for example is a diagram illustrating the last comment:

[Figure: alias diagram for a multidot assignment]

(The grayed elements are for explanation and not part of the final alias relation.)

For the compound instruction, the rule is:

           a |= (p ; q)      =   (a |= p) |= q

For the conditional instruction, we get:

           a |= (then p else q end)      =   (a |= p) ∪ (a |= q)

Note the form of the instruction: it carries no condition, since the alias calculus ignores the information from the conditional expression present in the source language. The union operator is the reason why alias relations, irreflexive and symmetric, are not necessarily transitive.

The loop instruction, which also ignores the test (exit or continuation condition), is governed by the following rule:

           a |= (loop p end)       =   tN

where N is the first value such that tN = tN+1 (in other words, tN is the fixpoint) in the following sequence:

            t0          =    a
            tn+1       =   (tn ∪ (tn |= p))

The existence of a fixpoint and the correctness of this formula to compute it are the result of a theorem in the paper, the “loop aliasing theorem”; the proof is surprisingly elaborate (maybe I missed a simpler one).

For procedures, the rule is

         a |= call p        =   a |= p.body

where p.body is the body of the procedure. In the presence of recursion, this gives rise to a set of equations, whose solution is the fixpoint; again a theorem is needed to demonstrate that the fixpoint exists. The implementation directly applies fixpoint computation (see examples 11 to 13 in the paper and implementation).

The calculus does not directly consider routine arguments but treats them as attributes of the corresponding class; so a call is considered to start with assignments of the form f := a for every pair of formal and actual arguments f and a. Like the omission of conditions in loops and conditionals, this is a source of possible imprecision in translating from an actual programming language into the calculus, since values passed to recursive activations of the same routine will be conflated.

Particularly interesting is the last rule, which generalizes the previous one to qualified calls of the form x. f (…)  as they exist in object-oriented programming. The rule involves the new notion of inverse variable, written x’ where x is a variable. Laws of the calculus (with Current denoting the current object, one of the fundamental notions of object-oriented programming) are

        Current.x            = x   
        x.Current            = x
        x.x’                      = Current
        x’.x                      = Current

In other words, Current plays the role of zero element for the dot operator, and variable inversion is the inverse operation. In a call x.f, where x denotes the supplier object (the target of the call), the inverse variable provides a back reference to the client object (the caller), indispensable to interpret references in the original context. This property is reflected by the qualified client rule, which uses the auxiliary operator n (where x n a, for a relation a and a variable x, is the set of pairs [x.u, x.v] such that the pair [u, v] is in a). The rule is:

         a |= call x.r       =   x n ((x’ n a ) |= call r)

You need to read the article for the full explanation, but let me offer the following quote from the corresponding section (maybe you will note a familiar turn of phrase):

Thus we are permitted to prove that the unqualified call creates certain aliasings, on the assumption that it starts in its own alias environment but has access to the caller’s environment through the inverted variable, and then to assert categorically that the qualified call has the same aliasings transposed back to the original environment. This change of environment to prove the unqualified property, followed by a change back to the original environment to prove the qualified property, explains well the aura of magic which attends a programmer’s first introduction to object-oriented programming.

I hope you will enjoy the calculus and try the examples in the implementation. It is fun to apply, and there seems to be no end to the potential applications.

Reference

[1] Bertrand Meyer: The Theory and Calculus of Aliasing, draft paper, first published 12 January 2009 (revised 21 January 2010), available here and also at arxiv.org.
[2] Implementation (interactive version to try all the examples of the paper): downloadable Windows executable here.
[3] Bertrand Meyer: The Grand Challenge of Trusted Components, in 2003 International Conference on Software Engineering, available here.


Dwelling on the point

Once again, and we are not learning!

La Repubblica of last Thursday [1] and other Italian newspapers have reported on a “computer” error that temporarily brought thousands of accounts at the national postal service bank into the red. It is a software error, due to a misplacement of the decimal points in some transactions.

As usual the technical details are hazy; La Repubblica writes that:

“Because of a software change that did not succeed, the computer system did not always read the decimal point during transactions.”

As a result, it could for example happen that a 15.00-euro withdrawal was understood as 1500 euros.

I have no idea what “reading the decimal point” means. (There is no mention of OCR, and the affected transactions seem purely electronic.) Only some of the 12 million checking or “Postamat” accounts were affected; the article cites a number of customers who could not withdraw money from ATMs because the system wrongly treated their accounts as overdrawn. It says that this was the only damage and that the postal service will send a letter of apology. The account leaves many questions unanswered, for example whether the error could actually have favored some customers, by allowing them to withdraw money they did not have, and if so what will happen.

The most important unanswered question is the usual one: what was the software error? As usual, we will probably never know. The news items will soon be forgotten, the postal service will somehow fix its code, life will go on. Nothing will be learned; the next time around similar causes will produce similar effects.

I criticized this lackadaisical attitude in an earlier column [2] and have to hammer its conclusion again: any organization using public money should be required, when it encounters a significant software malfunction, to let experts investigate the incident in depth and report the results publicly. As long as we keep forgetting our errors we will keep repeating them. Where would airline safety be in the absence of thorough post-accident reports? That a software error did not kill anyone is not a reason to ignore it. Whether it is the Italian post messing up, a US agency’s space vehicle crashing on the moon or any other software fault causing systems to fail, it is not enough to fix the symptoms: we must have a professional report and draw the lessons for the future.

Reference

[1] Luisa Grion: Poste in tilt per una virgola — conti gonfiati, stop ai prelievi. In La Repubblica, 26 November 2009, page 18 of the print version. (At the time of writing it does not appear at repubblica.it,  but see  the TV segment also titled “Poste in tilt per una virgola” on Primocanale Web TV here, and other press articles e.g. in Il Tempo here.)

[2] On this blog: The one sure way to advance software engineering (post of 21 August 2009).


The one sure way to advance software engineering

Airplanes today are incomparably safer than 20, 30, 50 years ago: 0.05 deaths per billion kilometers. That’s not by accident.

Rather, it’s by accidents. What has turned air travel from a game of chance into one of the safest modes of traveling is the relentless study of crashes and other mishaps. In the US the National Transportation Safety Board has investigated more than 110,000 accidents since it began its operations in 1967. Any accident must, by law, be investigated thoroughly; airplanes themselves carry the famous “black boxes” whose only purpose is to provide evidence in the case of a catastrophe. It is through this systematic and obligatory process of dissecting unsafe flights that the industry has made almost all flights safe.

Now consider software. No week passes without the announcement of some debacle due to “computers” — meaning, in most cases, bad software. The indispensable Risks forum [1] and many pages around the Web collect software errors; several books have been devoted to the topic.

A few accidents have been investigated thoroughly; two examples are Nancy Leveson’s milestone study of the Therac-25 patient-killing medical device [2], and Gilles Kahn’s analysis of the Ariane 5 crash (which Jean-Marc Jézéquel and I used as a basis for our 1997 article [3]). Both studies improved our understanding of software engineering. But these are exceptions. Most of what we have elsewhere is made of hearsay and partial information, and plain urban legends (like the endlessly repeated story about the Venus probe that supposedly failed because a period was typed instead of a comma — most likely a canard).

Software disasters continue; they attract attention when they arise, and inevitably some kind of announcement is made that the problem is being corrected, or that a committee will study the causes; almost as inevitably, that is the last we hear of it. In the latest issue of Risks alone, you can find several examples (such as [4]). In the past months, breakdowns at Skype, Google and Twitter made headlines; we all learned about the failures, but have you seen precise analyses of what actually happened?

As another typical example, we heard a few months ago from the French press [5] that an “IT error” (une erreur informatique) led to overestimating the pensions of about a million people; since (strangely!) no one was suggesting that they would be asked to pay the money back, the cost to taxpayers will be over 300 million euros. I looked in vain for any follow-up story: what happened? What was the actual error? Were the tools at fault? The quality assurance procedures? The programmers’ qualifications? Or was it a matter of bad deployment? Of erroneous data, and if so, what was the process for validating inputs? And so on. Most likely we will never know.

But we should know. Especially with public money, any such incident should have a post-mortem, with experts called in (surely at a fraction of the cost of the failure) to analyze what happened and produce a public report.

At least this was a public project, for which some disclosure was inevitable. The software engineering community buzzes with unconfirmed reports of huge software-induced errors, that go unreported because they happen in private companies eager to avoid bad publicity. It’s as if we had allowed aircraft manufacturers, decade after decade, to keep mum about accidents. Where then would air travel safety be today?

Progress in software engineering will come from many sources. Research is critical, including on topics which today appear exotic. But if anyone is looking for one practical, low-tech idea that has an iron-clad guarantee of improving software engineering, here it is: pass a law that requires extensive professional analysis of any large software failure.

The details are not so hard to refine. The initiative would probably have to start at the national level; any industrialized country could be the pioneer. (Or what about Europe as a whole?) The law would have to define what constitutes a “large” failure; for example it could be any failure that may be software-related and has resulted in loss either of human life or of property beyond a certain threshold, say $50 million. In the latter case, to avoid accusations of government meddling in private matters, the law could initially be limited to cases involving public money; when it has shown its value, it could then be extended to private failures as well. Even with some limitations, such a law would have a tremendous effect. Only with a thorough investigation of software projects gone wrong can we help the majority of projects to go right.

We can no longer afford to let the IT industry get away with covering up its failures. Lobbying for a Software Incident Full Disclosure Law is the single most important step we can take today to make the world’s software better.

Note (2011)

Later articles have come back to the theme discussed here, and there will probably be more in the future as it remains ever current. They can be found by selecting the tag “Advance”.

References

[1] Peter G. Neumann, moderator: The Risks Digest Forum on Risks to the Public in Computers and Related Systems, available online (going back to 1985!).

[2] Nancy Leveson: Medical Devices: The Therac-25, extract from her book Safeware: System Safety and Computers, Addison-Wesley, 1995, available online.

[3] Jean-Marc Jézéquel and Bertrand Meyer: Design by Contract: The Lessons of Ariane, in Computer (IEEE), vol. 30, no. 1, January 1997, pages 129-130, also available here.

[4] Monty Solomon: Computer Error Caused Rent Troubles for Public Housing Tenants, in Risks 25.76, 15 August 2009, see here.

[5] Une erreur informatique à 300 millions d’euros, in Le Point, 12 May 2009, available here.

 


Specifying user interfaces

Many blogs including this one rely on the WordPress software. In previous states of the present page you may have noticed a small WordPress bug, which I find interesting.

“Tags” are a nifty WordPress feature. When you post a message, you can specify one or more informative “tags”. The tags of all messages appear in the right sidebar, each with a smaller or bigger font size depending on the number of messages that specified it. You can click such a tag in the sidebar and get, on the left, a page containing all the associated messages.

Now assume that many posts use a particular tag; in our example it is “Design by Contract”, not unexpected for this blog. Assume further that the tag name is long. It is indeed in this case: 18 characters. As a side note, no problem would arise if I used normal spaces in the name, which would then appear on two or three lines; precisely to avoid this  I use HTML “non-breaking spaces”. This is probably not in the WordPress spirit, but any other long tag without spaces would create the same problem. That problem is a garbled display:

[Screenshot: the long “Design by Contract” tag overflowing its sidebar area]

The long tag overflows the bluish browser area assigned to tags, producing an ugly effect. This behavior is hard to defend: either the tag should have been rejected as too long when the poster specified it or it should fit in its zone, whether by truncation or by applying a smaller font.

I quickly found a workaround, not nice but good enough: make sure that some short tag  (such as “Hoare”) appears much more often than the trouble-making tag. Since font size indicates the relative frequency of tags, the long one will be scaled down to a smaller font which fits.

Minor as it is, this WordPress glitch raises some general questions. First, is it really a bug? Assume, by a wild stretch of imagination, that a jury had to resolve this question; it could easily find an expert to answer positively, by stating that the behavior does not satisfy reasonable user expectations, and another who notes that it is not buggy behavior since it does not appear to violate any expressly stated property of the specification. (At least I did not find in the WordPress documentation any mention of either the display size of tags or a limit on tag length; if I missed it please indicate the reference.)

Is it a serious matter? Not in this particular example; uncomely Web display does not kill.   But the distinction between “small matter of esthetics” and software fault can be tenuous. We may note in particular that the possibility for large data to overflow its assigned area is a fundamental source of security risks; and even pure user interface issues can become life-threatening in the case of critical applications such as air-traffic control.

Our second putative expert is right, however: no behavior is buggy unless it contradicts a specification. Where will the spec be in such an example? There are three possibilities, each with its limitations.

The first solution is to expect that in a carefully developed system every such property will have to be specified. This is conceivable, but hard, and the question arises of how to make sure nothing has been forgotten. Past  some threshold of criticality and effort, the only specifications that count are formal; there is not that much literature on specifying user interfaces formally, since much of the work on formal specifications has understandably concentrated on issues thought to be more critical.

Because of the tediousness of specifying such general properties again and again for each case, it might be better  — this is the second solution — to specify them globally, for an entire system, or for the user interfaces of an entire class of systems. Like any serious effort at specification, if it is worth doing, it is worth doing formally.

In either of these approaches the question remains of how we know we have specified everything of interest. This question, specification completeness, is not as hopeless as most people think; I plan to write an entry about it sometime (hint:  bing for “guttag horning”). Still, it is hard to be sure you did not miss anything relevant. Remember this the next time formal methods advocates — who should know better — tell you that with their techniques there “no longer is a need to test”, or when you read about the latest OS kernel that is “guaranteed correct and secure”. However important formal methods and proofs are, they can only guarantee satisfaction of the properties that the specifier has considered and stated. To paraphrase someone [1], I would venture that

Proofs can only show the absence of envisaged bugs, never rule out the presence of unimagined ones.

This is one of the reasons why tests will always, regardless of the progress of proofs, remain an indispensable part of the software development landscape [2]. Whatever you have specified and proved, you will still want to run the system (or, for certain classes of embedded software, some simulation of it) and see the results for yourself.

What then if we do not explicitly specify the desired property (as we did in the two approaches considered so far) but testing or actual operation still reveals some behavior that is clearly unsatisfactory? On what basis do we complain to the software’s producer? A solution here, the third in our list, might be to rely on generally accepted standards of professional development. This is common in other kinds of engineering: if you commission someone to build you a house, the contract may not explicitly state that the roof should not fall on your head while you are asleep; when this happens, you will still sue and accuse the builder of malpractice. Such remedies can work for software too, but the rules are murkier because we have not accepted, or even just codified, a set of general professional practices that would cover such details as “no display of information should overflow its assigned area”.

Until then I will remember to use one short tag a lot.

References

[1] Edsger W. Dijkstra, Notes on Structured Programming, in Dahl, Dijkstra, Hoare, Structured Programming, Academic Press, 1972.

[2] See the Tests And Proofs (TAP) conference series, running since 2007. The next conference, program-chaired by Angelo Gargantini and Gordon Fraser, will take place as part of the TOOLS Federated Conferences in Málaga, Spain, in the week of June 28, 2010.


What is the purpose of testing?

Last year I published in IEEE Computer a short paper entitled “Seven Principles of Software Testing” [1]. Although technical, it was an opinion piece and the points were provocative enough to cause a reader, Gerald Everett, to express strong disagreement. Robert Glass, editor of the “Point/Counterpoint” rubric of the sister publication, IEEE Software, invited both of us to a debate in the form of a critique by Mr. Everett, my answer to the critique, his rejoinder to the answer, and my rejoinder to his rejoinder. The result appeared recently [2].

Other than a matter of terminology (Mr. Everett wants “testing” to cover static as well as dynamic techniques of quality assurance), the main point of disagreement was my very first principle: to test a program is to make it fail. Indeed this flies in the face of some established wisdom, which holds that testing serves to increase one’s confidence in the software; see for example the Wikipedia entry [3]. My article explains that this is a delusion and that it is more productive to limit the purpose of the testing process to what it does well, finding faults, rather than letting it claim goals of quality assurance that are beyond its scope. Finding faults is no minor feat already. In this view — the practical view, for example as seen by a software project manager — Dijkstra’s famous dismissal of testing (it can prove the presence of bugs, never their absence) is the greatest compliment to testing, and the most powerful advertisement one can think of for taking testing seriously.

What do you think? What is testing good for?

I should add that in terms of research this debate is a bit of a sideline. The real goal of our work is to build completely automatic testing tools. An article on this topic [4] will appear in the next issue of Computer (September); I will post a link to it when the issue is out.

References

[1] Bertrand Meyer: Seven Principles of Software Testing, in IEEE Computer, vol. 41, no. 8, pages 99-101, Aug. 2008; available on the IEEE site, and also in draft form here. (An earlier version, without the beautiful picture of bees and flies in the bottle drawn by Computer‘s artist, appeared as an EiffelWorld column.)

[2] Gerald D. Everett and Bertrand Meyer: Point/Counterpoint, in IEEE Software, Vol. 26, No. 4, pages 62-64, July/August 2009.  Available on the IEEE site and also here.

[3] Wikipedia entry on Software Testing.

[4] Bertrand Meyer, Ilinca Ciupa, Andreas Leitner, Arno Fiva, Emmanuel Stapf and Yi Wei: Programs that test themselves, to appear in IEEE Computer, September 2009.
