Analyzing a software failure

More than once I have emphasized here [1] [2] the urgency of rules requiring systematic a posteriori analysis of software mishaps that have led to disasters. I have a feeling that many more posts will be necessary before the idea registers.

Some researchers are showing the way. In a June 2009 article [4], Tetsuo Tamai from the University of Tokyo published a fascinating dissection of the 2005 Mizuho Securities incident at the Tokyo Stock Exchange, where market havoc resulted from a software fault that prevented proper execution of the cancel command after an employee who wanted to sell one share at 610,000 yen mistakenly switched the two numbers.

I found out only recently about the article while browsing Dines Bjørner’s page and hitting on an unpublished paper [3] where Bjørner proposes a mathematical model for the trading rules. Tamai’s article deserves to be widely read.

References

[1] The one sure way to advance software engineering: this blog, see here.
[2] Dwelling on the point: this blog, see here.
[3] Dines Bjørner: The TSE Trading Rules, version 2, unpublished report, 22 February 2010, available online.
[4] Tetsuo Tamai: Social Impact of Information System Failures, in IEEE Computer, vol. 42, no. 6, June 2009, pages 58-65, available online (with registration); the article’s text is also included in [3].

From programming to software engineering: ICSE keynote slides available

In response to many requests, I have made available [1] the slides of my education keynote at ICSE earlier this month. The theme was “From programming to software engineering: notes of an accidental teacher”. Some of the material has been presented before, notably at the Informatics Education Europe conference in Venice in 2009. (In research you can give a new talk every month, but in education things move at a more senatorial pace.) Still, part of the content is new. The talk is a summary of my experience teaching programming and software engineering at ETH.

The usual caveats apply: these are only slides (I did not write a paper), and not all may be understandable independently of the actual talk.

Reference

[1] From programming to software engineering: notes of an accidental teacher, slides from a keynote talk at ICSE 2010.

The other impediment to software engineering research

In the decades since structured programming, many of the advances in software engineering have come out of non-university sources, mostly of four kinds:

  • Start-up technology companies  (who played a large role, for example, in the development of object technology).
  • Industrial research labs, starting with Xerox PARC and Bell Labs.
  • Independent (non-university-based) author-consultants. 
  • Independent programmer-innovators, who start open-source communities (and often start their own businesses after a while, joining the first category).

 Academic research has had its part, honorable but limited.

Why? In earlier posts [1] [2] I analyzed one major obstacle to software engineering research: the absence of any obligation of review after major software disasters. I will come back to that theme, because the irresponsible attitude of political authorities hinders progress by depriving researchers of some of their most important potential working examples. But for university researchers there is another impediment: the near-impossibility of developing serious software.

If you work in theory-oriented parts of computer science, the problem is less significant: as part of a PhD thesis or in preparation of a paper you can develop a software prototype that will support your research all the way to the defense or the publication, and can be left to wither gracefully afterwards. But software engineering studies issues that arise for large systems, where  “large” encompasses not only physical size but also project duration, number of users, number of changes. A software engineering researcher who only ever works on prototypes will be denied the opportunity to study the most significant and challenging problems of the field. The occasional consulting job is not a substitute for this hands-on experience of building and maintaining large software, which is, or should be, at the core of research in our field.

The bodies that fund research in other sciences understood this long ago for physics and chemistry with their huge labs, for mechanical engineering, for electrical engineering. But in computer science or any part of it (and software engineering is generally viewed as a subset of computer science) the idea that we would actually do something, rather than talk about someone else’s artifacts, is alien to the funding process.

The result is an absurd situation that blocks progress. Researchers in experimental physics or mechanical engineering employ technicians: often highly qualified personnel who help researchers set up experiments and process results. In software engineering the equivalent would be programmers, software engineers, testers, technical writers; in the environments that I have seen, getting financing for such positions from a research agency is impossible. If you have requested a programmer position as part of a successful grant request, you can be sure that this item will be the first to go. Researchers quickly understand the situation and learn not even to bother including such requests. (I have personally never seen a counter-example. If you have a different experience, I will be interested to learn who the enlightened agency is. )

The result of this attitude of funding bodies is a catastrophe for software engineering research: the only software we can produce, if we limit ourselves to official guidelines, is demo software. The meaningful products of software engineering (large, significant, usable and useful open-source software systems) are theoretically beyond our reach. Of course many of us work around the restrictions and do manage to produce working software, but only by spending considerable time away from research on programming and maintenance tasks that would be far more efficiently handled by specialized personnel.

The question indeed is efficiency. Software engineering researchers should program as part of their normal work:  only by writing programs and confronting the reality of software development can we hope to make relevant contributions. But in the same way that an experimental physicist is helped by professionals for the parts of experimental work that do not carry a research value, a software engineering researcher should not have to spend time on porting the software to other architectures, performing configuration management, upgrading to new releases of the operating system, adapting to new versions of the libraries, building standard user interfaces, and all the other tasks, largely devoid of research potential, that software-based innovation requires.

Until  research funding mechanisms integrate the practical needs of software engineering research, we will continue to be stymied in our efforts to produce a substantial effect on the quality of the world’s software.

References

[1] The one sure way to advance software engineering: this blog, see here.
[2] Dwelling on the point: this blog, see here.

Programming on the cloud?

I am blogging live from the “Cloud Futures” conference organized by Microsoft in Redmond [1]. We had two excellent keynotes today, by Ed Lazowska [2] and David Patterson.

Lazowska emphasized the emergence of a new kind of science — eScience — based on analysis of enormous amounts of data. His key point was that this approach is a radical departure from “computational science” as we know it, based mostly on large simulations. With the eScience paradigm, the challenge is to handle the zillions of bytes of data that are available, often through continuous streams, in such fields as astronomy, oceanography or biology. It is unthinkable in his view to process such data through super-computing architectures specific to an institution; the Cloud is the only solution. One of the reasons (developed more explicitly in Patterson’s talk) is that cloud computing supports scaling down as well as scaling up. If your site experiences sudden bursts of popularity — say you get slashdotted — followed by downturns, you just cannot size the hardware right.

Lazowska also noted that it is impossible to convince your average  university president that Cloud is the way to go, as he will get his advice from the science-by-simulation  types. I don’t know who the president is at U. of Washington, but I wonder if the comment would apply to Stanford?

The overall argument for cloud computing is compelling. Of course the history of IT is a succession of swings of the pendulum between centralization and delocalization: mainframes, minis, PCs, client-server, “thin clients”, “The Network Is The Computer” (Sun’s slogan in the late eighties), smart clients, Web services and so on. But this latest swing seems destined to define much of the direction of computing for a while.

Interestingly, no speaker so far has addressed issues of how to program reliably for the cloud, even though cloud computing seems only to add orders of magnitude to the classical opportunities for messing up. Eiffel and contracts have a major role to play here.

More generally the opportunity to improve quality should not be lost. There is a widespread feeling (I don’t know of any systematic studies) that a non-negligible share of the results generated by computational science is just bogus, the product of old Fortran programs built by generations of graduate students with little understanding of software principles. At the very least, moving to cloud computing should encourage the use of 21st-century tools, languages and methods. Availability on the cloud should also enhance a critical property of good scientific research: reproducibility.

Software engineering is remarkably absent from the list of scientific application areas that speaker after speaker listed for cloud computing. Maybe software engineering researchers are timid, and do not think of themselves as deserving large computing resources; consider, however, all the potential applications, for example in program verification and empirical software engineering. The cloud is a big part of our own research in verification; in particular the automated testing paradigm pioneered by AutoTest [3] fits ideally with the cloud and we are actively working in this direction.

Lazowska mentioned that development environments are the ultimate application of cloud computing. Martin Nordio at ETH has developed, with the help of Le Minh Duc, a Master’s student at Hanoi University of Technology, a cloud-based version of EiffelStudio: CloudStudio, which I will present in my talk at the conference tomorrow. I’ll write more about it in later posts; just one note for the moment: no one should ever be forced again to update or commit.

References

[1] Program of the Cloud Futures conference.

[2] Keynote by Ed Lazowska. You can see his slides here.

[3] Bertrand Meyer, Arno Fiva, Ilinca Ciupa, Andreas Leitner, Yi Wei, Emmanuel Stapf: Programs That Test Themselves. IEEE Computer, vol. 42, no. 9, pages 46-55, September 2009; online version here.

Reflexivity, and other pillars of civilization

Let me start, dear reader of this blog, by probing your view of equality, and also of assignment. Two questions:  

  • Is a value x always equal to itself? (For the seasoned programmers in the audience: I am talking about a value, as in mathematics, not an expression, as in programming, which could have a side effect.)
  • In programming, if we consider an assignment

       x := y

and v is the value of y before that assignment (again, this little detour is to avoid bothering with side effects), is the value of x always equal to v after the assignment?  

Maybe I should include here one of these Web polls that one finds on newspaper sites, so that you can vote and compare your answer to the Wisdom of Crowds. My own vote is clear: yes to both. Equality is reflexive (every value is equal to itself, at any longitude and temperature, no excuses and no exceptions); and the purpose of assignment is to make the value of the target equal to the value of the source. Such properties are some of the last ramparts of civilization. If they go away, what else is left?  

754 enters the picture

Now come floating-point numbers and the famous IEEE “754” floating-point standard [1]. Because not all floating point operations yield a result usable as a floating-point number, the standard introduces a notion of “NaN”, Not a Number; certain operations involving floating-point numbers may yield a NaN. The term NaN does not denote a single value but a set of values, distinguished by their “payloads”.  

Now assume that the value of x is a NaN. If you use a programming language that supports IEEE 754 (as they all do, I think, today) the test in  

        if x = x then …  

is supposed to yield False. Yes, that is specified in the standard: NaN is never equal to NaN (even with the same payload); nor is it equal to anything else; the result of an equality comparison involving NaN will always be False.  

Assignment behavior is consistent with this: if y (a variable, or an expression with no side effect) has a NaN value, then after  

        x := y  

the test x = y will yield False.
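
To make this concrete, here is a minimal sketch (mine, purely for illustration), assuming a compiler that applies IEEE 754 semantics to REAL_64 and run-time assertion monitoring turned on:

        nan_check
                -- Illustrate the effect of IEEE 754 equality on reflexivity and assignment.
            local
                x, y: REAL_64
            do
                y := 0.0 / 0.0
                    -- Yields a NaN under IEEE 754 rules.
                x := y
                check reflexive: y = y end
                    -- Violated: a NaN is not equal to anything, itself included.
                check assigned: x = y end
                    -- Would also be violated: the assignment no longer guarantees equality.
            end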

Before commenting further let me note the obvious: I am by no means a numerics expert; I know that IEEE 754 was a tremendous advance, and that it was designed by some of the best minds in the field, headed by Velvel Kahan who received a Turing Award in part for that success. So it is quite possible that I am about to bury myself in piles of ridicule. On the other hand I have also learned that (1) ridicule does not kill, so I am game; and more importantly (2) experts are often right but not always, and it is always proper to question their reasoning. So without fear let me not stop at the arguments that “the committee must have voted on this point and they obviously knew what they were doing” and “it is the standard and implemented on zillions of machines, you cannot change it now”. Both are true enough, but not an excuse to censor questions.  

What are the criteria?

The question is: compatibility with an existing computer standard is great, but what about compatibility with a few hundred years of mathematics? Reflexivity of equality  is something that we expect for any data type, and it seems hard to justify that a value is not equal to itself. As to assignment, what good can it be if it does not make the target equal to the source value?  

The question of assignment is particularly vivid in Eiffel because we express the expected abstract properties of programs in the form of contracts. For example, the following “setter” procedure may have a postcondition (expressed by the ensure clause):  

        set_x (v: REAL)
                        — Set the value of x (an attribute, also of type REAL) to the value of v.
                do
                        …
                        x := v  
                ensure
                        x = v
                end  

   
If you call this procedure with a NaN argument for a compiler that applies IEEE 754 semantics, and monitor contracts at run time for testing and debugging, the execution will report a contract violation. This is very difficult for a programmer to accept.  

A typical example arises when you have an assignment to an item of an array of REAL values. Assume you are executing a [i] := x. In an object-oriented view of the world (as in Eiffel), this is considered simplified syntax  for the routine call a.put (x, i). The postcondition is that a [i] = x. It will be violated!  
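
Spelled out as a small illustrative sketch (the wrapper routine is mine, with contract monitoring assumed on):

        put_and_check (a: ARRAY [REAL]; x: REAL; i: INTEGER)
                -- Store x at position i of a and check the result.
            require
                valid: a.valid_index (i)
            do
                a [i] := x
                    -- Simplified syntax for the call a.put (x, i).
                check stored: a [i] = x end
                    -- Mirrors the postcondition of put; under IEEE 754 equality
                    -- it is violated whenever x is a NaN.
            end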

The experts’ view

I queried a number of experts on the topic. (This is the opportunity to express my gratitude to members of the IFIP working group 2.5 on numerical software [2], some of the world’s top experts in the field, for their willingness to respond quickly and with many insights.) A representative answer, from Stuart Feldman, was:  

If I remember the debate correctly (many moons ago), NaN represents an indefinite value, so there is no reason to believe that the result of one calculation with unclear value should match that of another calculation with unclear value. (Different orders of infinity, different asymptotic approaches toward 0/0, etc.)  

Absolutely correct! Only one little detail, though: this is an argument against using the value True as a result of the test; but it is not an argument for using the value False! The exact same argument can be used to assert that the result should not be False:  

… there is no reason to believe that the result of one calculation with unclear value should not match that of another calculation with unclear value.  

Just as convincing! Both arguments complement each other: there is no compelling reason for demanding that the values be equal; and there is no compelling argument either to demand that they be different. If you ignore one of the two sides, you are biased.  

What do we do then?

The conclusion is not that the result should be False. The rational conclusion is that True and False are both unsatisfactory solutions. The reason is very simple: in a proper theory (I will sketch it below) the result of such a comparison should be some special undefined value; in the same way that IEEE 754 extends the set of floating-point numbers with NaN, a satisfactory solution would extend the set of booleans with some NaB (Not a Boolean). But there is no NaB, probably because no one (understandably) wanted to bother, and also because being able to represent a value of type BOOLEAN on a single bit is, if not itself a pillar of civilization, one of the secrets of a happy life.

If both True and False are unsatisfactory solutions, we should use the one that is the “least” bad, according to some convincing criterion. That is not the attitude that 754 takes; it seems to consider (as illustrated by the justification cited above) that False is somehow less committing than True. But it is not! Stating that something is false is just as much of a commitment as stating that it is true. False is no closer to NaB than True is. A better criterion is: which of the two possibilities is going to be least damaging to time-honored assumptions embedded in mathematics? One of these assumptions is the reflexivity of equality: come rain or shine, x is equal to itself. Its counterpart for programming is that after an assignment the target will be equal to the original value of the source. This applies to numbers, and it applies to a NaN as well.

Note that this argument does not address equality between different NaNs. The standard as it is states that a specific NaN, with a specific payload, is not equal to itself.  

What do you think?

A few of us who had to examine the issue recently think that — whatever the standard says at the machine level — a programming language should support the venerable properties that equality is reflexive and that assignment yields equality.

Every programming language should decide this on its own; for Eiffel we think this should be the specification. Do you agree?  

Some theory

For readers who like theory, here is a mathematical restatement of the observations above. There is nothing fundamentally new in this section, so if you do not like strange symbols you can stop here.  

The math helps explain the earlier observation that neither True nor False is more “committing” than the other. A standard technique (coming from denotational semantics) for dealing with undefinedness in basic data types is to extend every data type into a lattice, with a partial order relation meaning “less defined than”. The lattice includes a bottom element, traditionally written ⊥ (pronounced “Bottom”), and a top element written ⊤ (“Top”). ⊥ represents an unknown value (not enough information) and ⊤ an error value (too much information). Pictorially, the lattice for natural numbers would look like this:

[Figure: the lattice of integers]

On basic types, we always use very simple lattices of this form, with three kinds of element: ⊥, less than every other element; ⊤, larger than all other elements; and, in between, all the normal values of the type, which for the partial order of interest are all equal. (No, this is not a new math in which all integers are equal. The order in question simply means “is less defined than”. Every integer is as defined as all other integers, more defined than ⊥, and less defined than ⊤.) Such lattices are not very exciting, but they serve as a starting point; lattices with more interesting structures are those applying to functions on such spaces — including functions of functions — which represent programs.

The modeling of floating-point numbers with NaN involves such a lattice; introducing NaN means introducing a ⊥ value. (Actually, one might prefer to interpret NaN as ⊤, but the reasoning transposes immediately.) More accurately, since there are many NaN values, the lattice will look more like this:

[Figure: the lattice of floats in IEEE 754]

For simplicity we can ignore the variety of NaNs and consider a single ⊥.

Functions on lattices — as implemented by programs — should satisfy a fundamental property: monotonicity. A function f is monotone (as in ordinary analysis) if, whenever x ≤ y, then f (x) ≤ f (y). Monotonicity is a common-sense requirement: we cannot get more information from less information. If we know less about x than about y, we cannot expect that any function (with a computer, any program) f will, out of nowhere, restore the missing information.

Floating-point operations in IEEE 754 respect this exigency of monotonicity: any arithmetic operation — addition, multiplication … — that has a NaN as one of its arguments must yield a NaN as its result. Great. Now for soundness we should also have such a property for boolean-valued operations, such as equality. If we had a NaB as the ⊥ of booleans, just like NaN is the ⊥ of floating-point numbers, then the result of NaN = NaN would be NaB. But the world is not perfect and the IEEE 754 standard does not govern booleans. Somehow (I think) the designers thought of False as less defined than True. But it is not! False is just as defined as True in the very simple lattice of booleans; for the relevant partial order, True and False are equal:

[Figure: the lattice of booleans]

Because any solution that cannot use a NaB will violate monotonicity and hence will be imperfect, we must resort to heuristic criteria. A very strong criterion in favor of choosing True is reflexivity — remaining compatible with a fundamental property of equality. I do not know of any argument for choosing False. 

The Spartan approach

There is, by the way, a technique that accepts booleans as we love them (with two values and no NaB) and achieves mathematical rigor. If operations involving NaNs truly give us pimples, we can make any such operation trigger an exception. In the absence of ⊥ values, this is an acceptable programming technique for representing undefinedness. The downside, of course, is that just about everywhere the program must be ready to handle the exception in some way.
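
Here is a sketch of what such a Spartan style could look like (the routine and its contracts are mine, purely for illustration; they rely on the IEEE 754 property that x = x is False exactly when x is a NaN):

        spartan_sum (a, b: REAL): REAL
                -- Sum of a and b, refusing to traffic in NaNs.
            require
                a_is_a_number: a = a
                b_is_a_number: b = b
                    -- Under IEEE 754 equality these clauses fail, triggering an exception,
                    -- precisely when the corresponding operand is a NaN.
            do
                Result := a + b
            ensure
                result_is_a_number: Result = Result
                    -- Fails if the operation itself produced a NaN
                    -- (for example by adding opposite infinities).
            end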

It is unlikely that in practice many people would be comfortable with such a solution. 

Final observations

Let me point out two objections that I have seen. Van Snyder wrote: 

NaN is not part of the set of floating-point numbers, so it doesn’t behave as if “bottom” were added to the set. 

Interesting point, but, in my opinion, not applicable: ⊥ is indeed not part of the mathematical set of floating-point numbers, but in the form of NaN it is part of the corresponding type (float in C, REAL in Eiffel); and the operations of the type are applicable to all values, including NaN if, as noted, we have not taken the extreme step of triggering an exception every time an operation uses a NaN as one of its operands. So we cannot free ourselves from the monotonicity concern by just a sleight of hand.

Another comment, also by Van Snyder (slightly abridged): 

Think of [the status of NaN] as a variety of dynamic run-time typing. With static typing, if x is an integer variable and y a floating-point variable,

        x := y 

does not inevitably lead to 

        x = y

 True; and a compelling argument to require that conversions satisfy equality as a postcondition! Such  reasoning — reflexivity again — was essential in the design of the Eiffel conversion mechanism [3], which makes it possible to define conversions between various data types (not just integers and reals and the other classical examples, but also any other user types as long as the conversion does not conflict with inheritance). The same conversion rules apply to assignment and equality, so that yes, whenever the assignment x := y is permitted, including when it involves a conversion, the property x = y  is afterwards always guaranteed to hold.

It is rather dangerous indeed to depart from the fundamental laws of mathematics. 

References

[1] IEEE floating-point standard, 754-2008; see summary and references in the Wikipedia entry.  

[2] IFIP Working Group 2.5 on numerical software: home page

[3] Eiffel standard (ECMA and ISO), available on the ECMA site.

More expressive loops for Eiffel

New variants of the loop construct have been introduced into Eiffel, allowing a safer, more concise and more abstract style of programming. The challenge was to remain compatible with the basic loop concept, in particular correctness concerns (loop invariant and loop variant), to provide a flexible mechanism that would cover many different cases of iteration, and to keep things simple.

Here are some examples of the new forms. Consider a structure s, such as a list, which can be traversed in sequence. The following loop prints all elements of the list:

      across s as c loop print (c.item) end

(The procedure print is available on all types, and can be adapted to any of them by redefining the out feature; item gives access to individual values of the list.) More about c in just a moment; in the meantime you can just consider that using “as c” and manipulating the structure through c rather than directly through s is a simple idiom to be learned and applied systematically for such across loops.

The above example is an instruction. The all and some variants yield boolean expressions, as in

across s as c all c.item > 0 end

which denotes a boolean value, true if and only if all elements of the list are positive. To find out if at least one is positive, use

across s as c some c.item > 0 end

Such expressions could appear, for example, in a class invariant, but they may be useful in many different contexts.
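
For example (a small sketch of mine, not taken from any library), a class whose instances are meant to hold only positive values can state the property once and for all in its invariant:

        class
                POSITIVE_LIST
        create
                make
        feature
                values: LINKED_LIST [INTEGER]
                        -- The values maintained by this container.

                make
                        -- Start with an empty list.
                    do
                        create values.make
                    end

                extend (v: INTEGER)
                        -- Add v at the end.
                    require
                        positive: v > 0
                    do
                        values.extend (v)
                    end
        invariant
                all_positive: across values as c all c.item > 0 end
        end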

In some cases, a from clause is useful, as in

        across s as c from sum := 0  loop sum := sum + c.index * c.item end
— Computes Σ i * s [i]

The original form of loop in Eiffel is more explicit, and remains available; you can achieve the equivalent of the last example, on a suitable structure, as

[Figure: a list and a cursor]

      from
              sum := 0 ; s.start
      until
              s.after
      loop
              sum := sum + s.index * s.item
              s.forth
      end

which directly manipulates a cursor through s, using start to move it to the beginning, forth to advance it, and after to test if it is past the last element. The forms with across achieve the same purpose in a more concise manner. More important than concision is abstraction: you do not need to worry about manipulating the cursor through start, forth and after. Under the hood, however, the effect is the same. More precisely, it is the same as in a loop of the form

      from
              sum := 0 ; c.start
      until
              c.after
      loop
              sum := sum + c.index * c.item
              c.forth
      end

where c is a cursor object associated with the loop. The advantage of using a cursor is clear: rather than keeping the state of the iteration in the object itself, you make it external, part of a cursor object that, so to speak, looks at the list. This means in particular that many traversals can be active on the same structure at the same time; with an internal cursor, they would mess each other up, unless you manually took the trouble to save and restore cursor positions. With an external cursor, each traversal has its own cursor object, and so does not interfere with other traversals — at least as long as they don’t change the structure (I’ll come back to that point).

With the across variant, you automatically use a cursor; you do not have to declare or create it: it simply comes as a result of the “as c” part of the construct, which introduces c as the cursor.

On what structures can you perform such iterations? There is no limitation; you simply need a type based on a class that inherits, directly or indirectly, from the library class ITERABLE. All relevant classes from the EiffelBase library have been updated to provide this inheritance, so that you can apply the across scheme to lists of all kinds, hash tables, arrays, intervals etc.

One of these structures is the integer interval. The notation  m |..| n, for integers m and n, denotes the corresponding integer interval. (This is not a special language notation, simply an operator, |..|, defined with the general operator mechanism as an alias for the feature interval of INTEGER classes.) To iterate on such an interval, use the same form as in the examples above:

        across m |..| n  as c from sum := 0  loop sum := sum + a [c.item] end
— Computes Σ a [i], for i ranging from m to n, for an array (or other structure) a

The key feature in ITERABLE is new_cursor, which returns a freshly created cursor object associated with the current structure. By default it is an ITERATION_CURSOR, the most general cursor type, but every descendant of ITERABLE can redefine the result type to something more specific to the current structure. Using a cursor — c in the above examples —, rather than manipulating the structure s directly, provides considerable flexibility thanks to the property that ITERATION_CURSOR itself inherits from ITERABLE   and hence has all the iteration mechanisms. For example you may write

across s.new_cursor.reversed as c loop print (c.item) end

to print elements in reverse order. (Using Eiffel’s operator mechanism, you may write - s.new_cursor, with a minus operator, as a synonym for s.new_cursor.reversed.) The function reversed gives you a new cursor on the same target structure, enabling you to iterate backwards. Or you can use

        across s.new_cursor + 3 as c loop print (c.item) end

(using s.new_cursor.incremented (3) rather than s.new_cursor + 3 if you are not too keen on operator syntax) to iterate over every third item. A number of other possibilities are available in the cursor classes.

A high-level iteration mechanism is only safe if you have the guarantee that the structure remains intact throughout the iteration. Assume you are iterating through a structure

across s as c loop some_routine end

and some_routine changes the structure s: the whole process could be messed up. Thanks to Design by Contract mechanisms, the library protects you against such mistakes. Features such as item and index, accessing properties of the structure during the iteration, are equipped with a precondition clause

require
is_valid

and every operation that changes the structure sets is_valid to false. So as soon as any change happens, you cannot continue the iteration; all you can do is restart a new one; the command start, used internally to start the operation, does not have the above precondition.
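
Here is a sketch of the kind of mistake that these contracts catch, assuming s is a list of integers under the updated library:

        across s as c loop
                s.extend (0)
                        -- Changes the structure: the library sets the cursor’s is_valid to False.
                print (c.item)
                        -- Precondition violation on item: the iteration cannot legitimately continue.
        end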

Sometimes, of course, you do want to change a structure while traversing it; for example you may want to add an element just to the right of the iteration position. If you know what you are doing that’s fine. (Let me correct this: if you know what you are doing, express it through precise contracts, and you’ll be fine.) But then you should not use the abstract forms of the loop, across; you should control the iteration itself through the basic form from … until with explicit cursor manipulation, protected by appropriate contracts.

The two styles, by the way, are not distinct constructs. Eiffel has always had only one form of loop and this continues to be the case. The across forms are simply new possibilities added to the classical loop construct, with obvious constraints stating for example that you may not have both a some or all form and an explicit loop body. In particular, an across loop can still have an invariant clause, specifying the correctness properties of the loop, as in

        across s as c from sum := 0  invariant sum = sigma (s, index)  loop sum := sum + c.index * c.item end

EiffelStudio 6.5 already included the language update; the library update (not yet including the is_valid preconditions for data structure classes) will be made available this week.

These new mechanisms should increase the level of abstraction and the reliability of loops, a fundamental element of  almost all programs.

The theory and calculus of aliasing

In a previous post I briefly mentioned some work that I am doing on aliasing. There is a draft paper [1], describing the theory, calculus, graphical notation (alias diagrams) and implementation. Here I will try to give an idea of what it’s about, with the hope that you will be intrigued enough to read the article. Even before you read it you might try out the implementation [2], a simple interactive interface with all the examples of the article.

What the article does not describe in detail — that will be for a companion paper — is how the calculus will be used as part of a general framework for developing object-oriented software proved correct from the start, the focus of our overall “Trusted Components” project [3]. Let me simply state that the computation of aliases is the key missing step in the effort to make correctness proofs a standard part of software development. This is a strong claim which requires some elaboration, but not here.

The alias calculus asks a simple question: given two expressions representing references (pointers),  can their values, at a given point in the program, ever reference the same object during execution?

As an example  application, consider two linked lists x and y, which can be manipulated with operations such as extend, which creates a new cell and adds it at the end of the list:

[Figure: the two linked lists x and y]

 The calculus makes it possible to prove that if  x and y are not aliased to each other, then none of the pointers in any of the cells in either of the lists can point to  (be aliased to) any cell of  the other. If  x is initially aliased to y, the property no longer holds. You can run the proof (examples 18 and 19) in the downloadable implementation.

The calculus gives a set of rules, each applying to a particular construct of the language, and listed below.

The rule for a construct p is of the form

          a |= p      =   a’
 

where a and a’ are alias relations; this states that executing p  in a state where the alias relation is a will yield the alias relation a’ in the resulting state. An alias relation is a symmetric, irreflexive relation; it indicates which expressions and variables can be aliased to each other in a given state.

The constructs p considered in the discussion are those of a simplified programming language; a modern object-oriented language such as Eiffel can easily be translated into that language. Some precision will be lost in the process, meaning that the alias calculus (itself precise) can find aliases that would not exist in the original program; if this prevents proofs of desired properties, the cut instruction discussed below serves to correct the problem.

The first rule is for the forget instruction (Eiffel: x := Void):

          a |= forget x       =   a \- {x}

where the \- operator removes from a relation all the elements belonging to a given set A. In the case of object-oriented programming, with multidot expressions x.y.z, the application of this rule must remove all elements whose first component, here x, belongs to A.
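
As a small worked example (mine, not the paper’s): if a is {[x, u], [u, x], [x.y, z], [z, x.y]}, then a |= forget x yields the empty relation, since every pair involves either x itself or the multidot expression x.y, whose first component is x.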

The rule for creation is the same as for forget:

         a |= create x          =   a \- {x}

The two instructions have different semantics, but the same effect on aliasing.

The calculus has a rule for the cut instruction, which removes the connection between two expressions:

        a |= cut x, y       =   a — <x, y>

where — is set difference and <x, y> includes the pairs [x, y] and [y, x] (this is a special case of a general notation defined in the article, using the overline symbol). The cut instruction corresponds, in Eiffel, to check x /= y end: a hint given to the alias calculus (and proved through some other means, such as standard axiomatic semantics) that some references will not be aliased.

The rule for assignment is

      a |= (x := y)      =   given  b = a \- {x}   then   <b ∪ {x} × (b / y)> end

where b /y (“quotient”), similar to an equivalence class in an equivalence relation, is the set of elements aliased to y in b, plus y itself (remember that the relation is irreflexive). This rule works well for object-oriented programming thanks to the definition of the \- operator: in x := x.y, we must not alias x to x.y, although we must alias it to any z that was aliased to x.y.
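
As a small worked example (again mine): assume that before the assignment the alias relation is a = {[y, z], [z, y]}, meaning that y and z may be aliased, and consider x := y. Then:

          b   =   a \- {x}   =   {[y, z], [z, y]}
          b / y   =   {y, z}
          a |= (x := y)   =   <b ∪ {x} × {y, z}>   =   {[y, z], [z, y], [x, y], [y, x], [x, z], [z, x]}

After the assignment, x may be aliased to y and to anything that y may be aliased to, but to nothing else.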

The paper introduces a graphical notation, alias diagrams, which makes it possible to reason effectively about such situations. Here for example is a diagram illustrating the last comment:

[Figure: alias diagram for a multidot assignment]

(The grayed elements are for explanation and not part of the final alias relation.)

For the compound instruction, the rule is:

           a |= (p ;  q)      =   (a |= p) |= q

For the conditional instruction, we get:

           a |= (then p else  q end)      =   (a |= p) ∪  (a |= q)

Note the form of the instruction: it has no condition, because the alias calculus ignores the boolean test that the source language would have between if and then. The union operator is the reason why alias relations, irreflexive and symmetric, are not necessarily transitive.

The loop instruction, which also ignores the test (exit or continuation condition), is governed by the following rule:

           a |= (loop p end)       =   tN

where N is the first value such that tN = tN+1 (in other words, tN is the fixpoint) in the following sequence:

            t0          =    a
            tn+1       =   tn ∪ (tn |= p)

The existence of a fixpoint and the correctness of this formula to compute it are the result of a theorem in the paper, the “loop aliasing theorem”; the proof is surprisingly elaborate (maybe I missed a simpler one).

For procedures, the rule is

         a |= call p        =   a |= p.body

where p.body is the body of the procedure. In the presence of recursion, this gives rise to a set of equations, whose solution is the fixpoint; again a theorem is needed to demonstrate that the fixpoint exists. The implementation directly applies fixpoint computation (see examples 11 to 13 in the paper and implementation).

The calculus does not directly consider routine arguments but treats them as attributes of the corresponding class; so a call is considered to start with assignments of the form f := a for every pair of formal and actual arguments f and a. Like the omission of conditions in loops and conditionals, this is a source of possible imprecision in translating from an actual programming language into the calculus, since values passed to recursive activations of the same routine will be conflated.

Particularly interesting is the last rule, which generalizes the previous one to qualified calls of the form x. f (…)  as they exist in object-oriented programming. The rule involves the new notion of inverse variable, written x’ where x is a variable. Laws of the calculus (with Current denoting the current object, one of the fundamental notions of object-oriented programming) are

        Current.x            = x   
        x.Current            = x
        x.x’                      = Current
        x’.x                      = Current

In other words, Current plays the role of zero element for the dot operator, and variable inversion is the inverse operation. In a call x.f, where x denotes the supplier object (the target of the call), the inverse variable provides a back reference to the client object (the caller), indispensable to interpret references in the original context. This property is reflected by the qualified client rule, which uses the auxiliary operator n (where x n a, for a relation a and a variable x, is the set of pairs [x.u, x.v] such that the pair [u, v] is in a). The rule is:

         a |= call x.r       =   x n ((x’ n a ) |= call r)

You need to read the article for the full explanation, but let me offer the following quote from the corresponding section (maybe you will note a familiar turn of phrase):

Thus we are permitted to prove that the unqualified call creates certain aliasings, on the assumption that it starts in its own alias environment but has access to the caller’s environment through the inverted variable, and then to assert categorically that the qualified call has the same aliasings transposed back to the original environment. This change of environment to prove the unqualified property, followed by a change back to the original environment to prove the qualified property, explains well the aura of magic which attends a programmer’s first introduction to object-oriented programming.

I hope you will enjoy the calculus and try the examples in the implementation. It is fun to apply, and there seems to be no end to the potential applications.

Reference

[1] Bertrand Meyer: The Theory and Calculus of Aliasing, draft paper, first published 12 January 2009 (revised 21 January 2010), available here and also at arxiv.org.
[2] Implementation (interactive version to try all the examples of the paper): downloadable Windows executable here.
[3] Bertrand Meyer: The Grand Challenge of Trusted Components, in 2003 International Conference on Software Engineering, available here.

Just another day at the office

In the past few weeks I wrote a program to compute the aliases of variables and expressions in an object-oriented program (based on a new theory [1]).

For one of the data structures, I needed a specific notion of equality, so I did the standard thing in Eiffel: redefine the is_equal function inherited from the top class ANY, to implement the desired variant.

The first time I ran the result, I got a postcondition violation. The violated postcondition clause was not even one that I had written: it was an original postcondition of is_equal (other: like Current) in ANY, which my redefinition inherited as per the rules of Design by Contract; it reads

symmetric: Result implies other ~ Current

meaning: equality is symmetric, so if Result is true, i.e. the Current object is equal to other, then other must also be equal to Current. (~ is object equality, which applies the local version of is_equal.) What was I doing wrong? The structure is a list, so the code iterates on both the current list and the other list:

from
    start ; other.start ; Result := True
until
    (not Result) or after
loop
    if other.after then
        Result := False
    else
        Result := (item ~ other.item)
        forth ; other.forth
    end
end

Simple enough: at each position check whether the item in the current list is equal to the item in the other list, and if so move forth in both the current list and the other one; stop whenever we find two unequal elements, or exhaust either list, as told by after. (Since is_equal is a function and must not produce any side effect, the actual code saves the cursors before the iteration and restores them afterwards. Thanks to Ian Warrington for asking about this point in a comment to this post. The new across loop variant described in two later postings uses external cursors and manages them automatically, so this business of maintaining the cursor manually goes away.)

The problem is that with this algorithm it is possible to return True if the first list was exhausted but not the second, so that the first list is a subset of the other rather than identical. The correction is immediate: add

Result := Result and other.after

after the loop; alternatively, enclose the loop in a conditional so that it is only executed if count = other.count (this solution is  better since it saves much computation in cases of lists of different sizes, which cannot be equal).
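
For the record, here is a sketch of the corrected function using the second variant (the saving and restoring of cursors is again omitted):

        is_equal (other: like Current): BOOLEAN
                -- Do the current list and other contain equal items in the same order?
            do
                if count = other.count then
                    from
                        start ; other.start ; Result := True
                    until
                        (not Result) or after
                    loop
                        Result := (item ~ other.item)
                        forth ; other.forth
                    end
                end
            end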

The lesson (other than that I need to be more careful) is that the error was caught immediately, thanks to a postcondition violation — and one that I did not even have to write. Just another day at the office; and let us shed a tear for the poor folks who still program without this kind of capability in their language and development environment.

Reference

[1] Bertrand Meyer: The Theory and Calculus of Aliasing, draft paper, available here.

Dwelling on the point

Once again, and we are not learning!

La Repubblica of last Thursday [1] and other Italian newspapers have reported on a “computer” error that temporarily brought thousands of accounts at the national postal service bank into the red. It is a software error, due to a misplacement of the decimal points in some transactions.

As usual the technical details are hazy; La Repubblica writes that:

“Because of a software change that did not succeed, the computer system did not always read the decimal point during transactions.”

As a result, it could for example happen that a 15.00-euro withdrawal was understood as 1500 euros.

I have no idea what “reading the decimal point” means. (There is no mention of OCR, and the affected transactions seem purely electronic.) Only some of the 12 million checking or “Postamat” accounts were affected; the article cites a number of customers who could not withdraw money from ATMs because the system wrongly treated their accounts as over-drawn. It says that this was the only damage and that the postal service will send a letter of apology. The account leaves many questions unanswered, for example whether the error could actually have favored some customers, by allowing them to withdraw money they did not have, and if so what will happen.

The most important unanswered question is the usual one: what was the software error? As usual, we will probably never know. The news items will soon be forgotten, the postal service will somehow fix its code, life will go on. Nothing will be learned; the next time around similar causes will produce similar effects.

I criticized this lackadaisical attitude in an earlier column [2] and have to hammer its conclusion again: any organization using public money should be required, when it encounters a significant software malfunction, to let experts investigate the incident in depth and report the results publicly. As long as we keep forgetting our errors we will keep repeating them. Where would airline safety be in the absence of thorough post-accident reports? That a software error did not kill anyone is not a reason to ignore it. Whether it is the Italian post messing up, a US agency’s space vehicle crashing on the moon or any other software fault causing systems to fail, it is not enough to fix the symptoms: we must have a professional report and draw the lessons for the future.

Reference

[1] Luisa Grion: Poste in tilt per una virgola — conti gonfiati, stop ai prelievi. In La Repubblica, 26 November 2009, page 18 of the print version. (At the time of writing it does not appear at repubblica.it,  but see  the TV segment also titled “Poste in tilt per una virgola” on Primocanale Web TV here, and other press articles e.g. in Il Tempo here.)

[2] On this blog: The one sure way to advance software engineering (post of 21 August 2009).

SEAFOOD 2010

The next SEAFOOD (Software Engineering Advances For Offshore and Outsourced Development) conference will take place in Saint Petersburg, Russia, on 17 and 18 June 2010. Andrey Terekhov, from Saint Petersburg State University and Lanit-Tercom, and Martin Nordio, from ETH, are the conference co-chairs. Mathai Joseph from Tata Consultancy Services and I will be co-chairing the PC. The Call for Papers will be issued soon; information about this year’s conference is at seafood.ethz.ch.