In praise of Knuth and Liskov

In November of 2005, as part of the festivities of its 150th anniversary, the ETH Zurich bestowed honorary doctorates on Don Knuth and Barbara Liskov. I gave the speech (the “laudatio”). It was published in Informatik Spektrum, the journal of Gesellschaft für Informatik, the German Computer Society, vol. 29, no. 1, February 2006, pages 74-76; I came across it recently and thought others might be interested in this homage to two great computer scientists. The beginning was in German; I translated it into English. I also replaced a couple of German expressions by their translations: “ETH commencement” for ETH-Tag (the official name of the annual ceremony) and “main building” for Hauptgebäude.

I took this picture of Wirth, Liskov and Knuth (part of my gallery of computer scientists)  later that same day.

 

Laudatio

In an institution, Ladies and Gentlemen, which so proudly celebrates its hundred-and-fiftieth anniversary, a relatively young discipline sometimes has cause for envy. We computer scientists are still the babies, or at least the newest kids on the block. Outside of this building, for example, you will see streets bearing such names as Clausius, yet there is neither a Von Neumann Lane nor a Wirth Square. Youth, however, also has its advantages; perhaps the most striking is that we still can, in our own lifetime, meet in person some of the very founders of our discipline. No living physicist has seen Newton; no chemist has heard Lavoisier. For us, it works. Today, Ladies and Gentlemen, we have the honor of introducing two of the undisputed pioneers of informatics.

Barbara Liskov

The first of our honorees today is Professor Barbara Liskov. To understand her contributions it is essential to realize the unfair competition in which the so-called Moore’s law pits computer software against computing hardware. To match the astounding progress of computing speed and memory over the past five decades, all that we have on the software side is our own intelligence which, it is safe to say, doesn’t double every eighteen months at constant price. The key to scaling up is abstraction; all advances in programming methodology have relied on new abstraction techniques. Perhaps the most significant is data abstraction, which enables us to organize complex systems on the basis of the types of objects they manipulate, defined in completely abstract terms. This is the notion of abstract data type, a staple component today of every software curriculum, including the very first programming course here at ETH. It was introduced barely thirty years ago in a seemingly modest article in SIGPLAN Notices — the kind of publication that hardly registers a ripple in science indexes — by Barbara Liskov and Stephen Zilles. Few papers have had a more profound impact on the theory and practice of software development than this contribution, “Programming with Abstract Data Types”.

The idea of abstract data types, or ADTs, is one of those Egg of Christopher Columbus moments: a seemingly simple intuition that changes the course of things. An ADT is a class of objects described in terms not of their internal properties, but of the operations applicable to them, and the abstract properties of these operations. Not by what they are, but by what they have. A rather capitalistic view of the world, but well suited to the description of complex systems where each part knows as little as possible about the others, to protect itself against their future changes.

An abstraction such as ETH Commencement could be described in a very concrete way: it happens in a certain place, consists of one event after another, gathers so many people. This is what we computer scientists call an implementation-oriented view, and relying on it means that we can’t change any detail without endangering the consistency of other processes, such as the daily planning of room allocation in the Main Building, which use it. In an ADT view, the abstraction “ETH Commencement” is characterized not by what it is but by what it has: a start, an end, an audience, and operations such as “Schedule the ETH Commencement”, “Reschedule it”, “Start it”, “End it”. They provide to the rest of the world a clean, precisely specified interface which enables every ADT to use every other based on the minimum properties it requires, thus isolating them from irrelevant internal changes, and providing an irreplaceable weapon in the incessant task of software engineering: battling complexity.

Barbara Liskov didn’t stay with the theoretical concepts but implemented the ideas in the CLU language, one of the most influential of the set of programming languages that in the nineteen-seventies changed our perspective of how to develop good software.

She went on to seminal work on operating systems and distributed computing, introducing several widely applied concepts such as guardians, and always backing her theoretical innovations by building practical systems, from the CLU language and compiler to the Argus and Mercury distributed operating systems. Distributed systems, such as those which banks, airlines and other global enterprises run on multiple machines across multiple networks, raise particularly challenging issues. To quote from the introduction of her article on Argus:

A centralized system is either running or crashed, but a distributed system may be partly running and partly crashed. Distributed programs must cope with failures of the underlying hardware. Both the nodes and the network may fail. The goal of Argus is to provide mechanisms that make it easier for programmers to cope with these problems.

Barbara Liskov’s work introduced seminal concepts to deal with these extremely difficult problems.

Now Ford Professor of Engineering at MIT, she received not long ago the prestigious John von Neumann Medal of the IEEE; she has been one of the most influential people in software engineering. We are grateful for how Professor Barbara Liskov has helped shape the field and are honored to have her at ETH today.

 Donald Knuth

In computer science and beyond, the name of Donald Knuth carries a unique aura. A professor at Stanford since 1968, now emeritus, he is the only person on record whose job title is the title of his own book: Professor of the Art of Computer Programming. This is for his eponymous multi-volume treatise, which established the discipline of algorithm analysis, and has had more effect than any other computer science publication. The Art of Computer Programming is a marvel of breadth, depth, completeness, mathematical rigor and clarity, not to forget humor. In that legendary book you will find exposed in detail the algorithms and data structures that lie at the basis of all software applications today. A Monte Carlo simulation, as a physicist may use, requires a number sequence that is both very long and very random-looking, even though the computer is a deterministic machine; if the simulation is any good, it almost certainly relies on the devious techniques which The Art of Computer Programming presents for making a perfectly deterministic sequence appear to have no order or other recognizable property. If you are running complex programs on your laptop, and they keep creating millions of software objects without clogging up gigabytes of memory, chances are the author of the garbage collector program is using techniques he learned from Knuth, with such delightful names as “the Buddy System”. If your search engine can in the blink of an eye find a needle of useful information in a haystack of tens of billions of Web pages, it’s most likely because they’ve been indexed using finely tuned data structures, such as hash tables, for which Knuth has been the reference for three decades through volume three, Sorting and Searching.

Knuth is famous for his precision and attention to detail, going so far as to offer a financial reward for every error found in his books, although one suspects this doesn’t cost him too much since people are so proud that instead of cashing the check they have it framed for display. The other immediately striking characteristic of Knuth is how profoundly he is driven by esthetics. This applies to performing arts, as anyone who was in the Fraumünster this morning and found out who the organist was can testify, but even more to his scientific work. The very title “The Art of Computer Programming” betrays this. Algorithms and data structures for Knuth are never dull code for computers, but objects of intense esthetic pleasure and friendly discussion. This concern with beauty led to a major turn in his career, which delayed the continuation of the book series by many years but resulted in a development that has affected anyone who publishes scientific text. As he received the page proofs of the second edition of one of the volumes in the late seventies he was so repelled by its physical appearance, resulting from newly introduced computer typesetting technology, that he decided to build a revolutionary font design and text processing system, all by himself, from the ground up. This resulted in a number of publications such as a long and fascinating paper in the Bulletin of the American Mathematical Society entitled “The Letter S”, but even more importantly in widely successful and practical software programs which he wrote himself, TeX and Metafont, which have today become standards for scientific publishing. Here too he has shown the way in quality and rigor, being one of the very few people in the world who promise their software to be free of bugs, and back that promise by giving a small financial reward for any counter-example.

His numerous other contributions are far too diverse to allow even a partial mention here; they have ranged across wide areas of computer science and mathematics.

To tell the truth, we are a little embarrassed that by bringing Professor Knuth here we are delaying by a bit more the long-awaited release of volume 4. But we overcome this embarrassment in time to express our pride in having Donald Ervin Knuth at ETH for this anniversary celebration.

Publish no loop without its invariant

 

There may be no more blatant example of  the disconnect between the software engineering community and the practice of programming than the lack of widespread recognition for the fundamental role of loop invariants. 

Let’s recall the basics, as they are taught in the fourth week or so of the ETH introductory programming course [1], from the very moment the course introduces loops. A loop is a mechanism to compute a result by successive approximations. To describe the current approximation, there is a loop invariant. The invariant must be:

  1. Weak enough that we can easily ensure it on a subset, possibly trivial, of our data set. (“Easily” means that this task is substantially easier than the full problem we are trying to solve.)
  2. Versatile enough that if it holds on some subset of the data we can easily (in the same sense) make it hold on a larger subset — even if only slightly larger.
  3. Strong enough that, when it covers the entire data, it yields the result we seek.

As a simple example, assume we seek the maximum of an array a of numbers, indexed from 1. The invariant states that Result is the maximum of the array slice from 1 to i. Indeed:

  1. We can trivially obtain the invariant by setting Result to be a [1]. (It is then the maximum of the slice a [1..1].)
  2. If the invariant holds, we can extend it to a slightly larger slice — larger by just one element — by increasing i by 1 and updating Result to be the greater of the previous Result and the element a [i] (for the new  i).
  3. When the slice covers the entire array — that is, i = n — the invariant tells us that Result is the maximum of the slice a [1..n], giving us the result we seek.

You cannot understand the corresponding program text

    from
        i := 1; Result := a [1]
    until i = n loop
        i := i + 1
        if Result < a [i] then Result := a [i] end
    end

without understanding the loop invariant. That is true even of people who have never heard the term: they will somehow form a mental image of the intermediate situation that justifies the algorithm. With the formal notion, the reasoning becomes precise and checkable. The difference is the same as between a builder who has no notion of theory, and one who has learned the laws of mechanics and construction engineering.
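
Eiffel lets you write the invariant directly into the loop. Here is the same loop again with the invariant spelled out; the tag name is mine, and the slice property appears as an informal assertion in comment form, since expressing it as a boolean expression would require a helper function:

    from
        i := 1; Result := a [1]
    invariant
        index_in_range: 1 <= i and i <= n
        -- Result is the maximum of the slice a [1 .. i]
    until i = n loop
        i := i + 1
        if Result < a [i] then Result := a [i] end
    end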

As another example, take Levenshtein distance (also known as edit distance): the minimum number of operations (insert, delete or replace a character) needed to transform one string into another. The algorithm (a form of dynamic programming) fills in a matrix top to bottom and left to right, each entry being one plus the minimum of the three neighboring ones to the top and left, except if the corresponding characters in the strings are the same, in which case it keeps the top-left neighbor’s value. The basic operation in the loop body reads

      if source [i] = target [j] then
           dist [i, j] := dist [i -1, j -1]
      else
           dist [i, j] := min (dist [i, j-1], dist [i-1, j-1], dist [i-1, j]) + 1
      end

You can run this and see it work, filling the array cell after cell, then delivering the result at dist [M, N] (the bottom-right entry, M and N being the lengths of the source and target strings). Or just watch the animation on page 60 of [2]. It works, but why it works remains a total mystery until someone tells you the invariant:

Every value of dist filled so far, say at position [p, q], is the minimum distance from the initial substring of the source (characters 1 to p) to the initial substring of the target (characters 1 to q).

This is the rationale for the above code: we want to compute the next value, at position [i, j]; if the corresponding characters in the source and target are the same, no operation is needed to extend the result we had in the top-left neighbor (position [i-1, j-1]); if not, the best we can do is the minimum we can get by extending the results obtained for our three neighbors: through the insertion of source [i] if the minimum comes from the neighbor to the left, [i-1, j]; through the deletion of target [j] if it comes from the neighbor above, [i, j-1]; or through a replacement if it comes from the top-left neighbor, [i-1, j-1].

With this explanation, a mysterious, almost hermetic algorithm instantly becomes crystal-clear. 
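
For the record, here is a sketch of the full loop in the same style, with the invariants written as informal (comment) assertions. The border initialization (dist [p, 0] := p and dist [0, q] := q) and the nested loop structure are the standard ones for this dynamic-programming scheme; min is the three-argument helper used in the fragment above, and the declarations of dist (a matrix indexed from 0 to M and 0 to N), i and j are omitted.

      from
          across 0 |..| M as r loop dist [r.item, 0] := r.item end
          across 0 |..| N as t loop dist [0, t.item] := t.item end
          i := 0
      invariant
          -- For every p up to i and every q up to N, dist [p, q] is the
          -- Levenshtein distance from source [1 .. p] to target [1 .. q].
      until i = M loop
          i := i + 1
          from
              j := 0
          invariant
              -- dist [i, q] is the Levenshtein distance from source [1 .. i]
              -- to target [1 .. q] for every q up to j.
          until j = N loop
              j := j + 1
              if source [i] = target [j] then
                  dist [i, j] := dist [i - 1, j - 1]
              else
                  dist [i, j] := min (dist [i, j - 1], dist [i - 1, j - 1], dist [i - 1, j]) + 1
              end
          end
      end
      -- The result is dist [M, N].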

Yet another example is in-place linked list reversal. The body of the loop is a pointer ballet:

      temp := previous
      previous := next
      next := next.right
      previous.put_right (temp)

with proper initialization (set next to the value of first and previous to Void) and finalization (set first to the value of previous). This is not the only possible implementation, but all variants of the algorithm use a very similar scheme.

The code looks again pretty abstruse, and hard to get right if you do not remember it exactly. As in the other examples, the only way to understand it is to see the invariant, describing the intermediate assumption after a typical loop iteration. If the original situation was this:

List reversal: initial state

then after a few iterations the algorithm yields this intermediate situation: 

List reversal: intermediate state

 The figure illustrates the invariant:

Starting from previous and repeatedly following right links yields the elements of some initial part of the list, but in the reverse of their original order; starting from next and following right links yields the remaining elements, in their original order. 

Then it is clear what the loop body with its pointer ballet is about: it moves by one position to the right the boundary between the two parts, making sure that the invariant holds again in the new state, with one more element in the first (yellow) part and one fewer in the second (pink) part. At the end the second part will be empty and the first part will encompass all elements, so that (after resetting first to the value of previous) we get the desired result.
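
Putting the pieces together, here is a sketch of the whole routine, with the invariant as an informal assertion. The names are mine: first_element plays the role of first in the description above, the cells are assumed to be of a LINKABLE-like type (with right and put_right), as in EiffelBase, and void-safety checks are omitted for brevity.

    reverse
            -- Reverse the order of the list elements, in place.
        local
            previous, next, temp: detachable LINKABLE [G]
        do
            from
                next := first_element
                previous := Void
            invariant
                -- Following right links from previous yields the elements already
                -- traversed, in the reverse of their original order; following
                -- them from next yields the remaining elements, in their
                -- original order.
            until next = Void loop
                temp := previous
                previous := next
                next := next.right
                previous.put_right (temp)
            end
            first_element := previous
        end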

This example is particularly interesting because list reversal is a standard interview question for programmers seeking a job; as a result, dozens of pages around the Web helpfully present algorithms for the benefit of job candidates. I ran a search on “List reversal algorithm” [3], which yields many such pages. It is astounding to see that of the first fifteen hits or so, which include pages from programming courses at both Stanford and MIT, not a single one mentions invariants, or (even without using the word) gives the above explanation. The situation is all the more bizarre in that many of these pages — read them for yourself! — go into intricate details about variants of the pointer manipulations. There are essentially no correctness arguments.

If you go a bit further down the search results, you will find some papers that do reference invariants, but here is the catch: rather than programming or algorithms papers, they are papers about software verification, such as one by Richard Bornat which uses a low-level (C) version of the example to illustrate separation logic [4]. These are good papers but they are completely distinct from those directed at ordinary programmers, who simply wish to learn a basic algorithm, understand it in depth, and remember it on the day of the interview and beyond.

This chasm is wrong. Software verification techniques are not just good for the small phalanx of experts interested in formal proofs. The basic ideas have potential applications to the daily business of programming, as the practice of Eiffel has shown (this is the concept of  “Verification As a Matter Of Course” briefly discussed in an earlier post [5]). Absurdly, the majority of programmers do not know them.

It’s not that they cannot do their job: somehow they eke out good enough results, most of the time. After all, the European cathedrals of the Middle Ages were built without the benefit of sophisticated mathematical models, and they still stand. But today we would not hire a construction engineer who had not studied the appropriate mathematical techniques. Why should we make things different for software engineering, and deprive practitioners of the benefits of solid, well-accepted theory?

As a modest first step, there is no excuse, ever, for publishing a loop without the basic evidence of its adequacy: the loop invariant.

References

[1] Bertrand Meyer: Touch of Class: Learning to Program Well, Using Objects and Contracts, Springer, 2009. See course page (English version) here.

[2] Course slides on control structures,  here in PowerPoint (or here in PDF, without the animation); see example starting on page 51, particularly the animation on page 54. More recent version in German here (and in PDF here), animation on page 60.

[3] For balance I ran the search using Qrobe, which combines results from Ask, Bing and Google.

[4] Richard Bornat, Proving Pointer Programs in Hoare Logic, in  MPC ’00, 5th International Conference on Mathematics of Program Construction, 2000, available here.

[5] Bertrand Meyer, Verification as a Matter of Course, a post on this blog.

Another DOSE of distributed software development

The software world is not flat; it is multipolar. Gone are the days of one-site, one-team developments. The increasingly dominant model today is a distributed team; the place where the job gets done is the place where the appropriate people reside, even if it means that different parts of the job get done in different places.

This new setup, possibly the most important change to have affected the practice of software engineering in this early part of the millennium, has received little attention in the literature; and even less in teaching techniques. I got interested in the topic several years ago, initially by looking at the phenomenon of outsourcing from a software engineering perspective [1]. At ETH, since 2004, Peter Kolb and I, aided by Martin Nordio and Roman Mitin, have taught a course on the topic [2], initially called “software engineering for outsourcing”. As far as I know it was the first course of its kind anywhere; not the first course about outsourcing, but the first to explore the software engineering implications, rather than business or political issues. We also teach an industry course on the same issues [3], attended since 2005 by several hundred participants, and started, with Mathai Joseph from Tata Consultancy Services, the SEAFOOD conference [4], Software Engineering Advances For Outsourced and Offshore Development, whose fourth edition starts tomorrow in Saint Petersburg.

After a few sessions of the ETH course we realized that the most important property of the mode of software development explored in the course is not that it involves outsourcing but that it is distributed. In parallel I became directly involved with highly distributed development in the practice of Eiffel Software’s development. In 2007 we renamed the ETH course “Distributed and Outsourced Software Engineering” (DOSE) to acknowledge the broadened scope. The topic is still new; each year we learn a little more about what to teach and how to teach it.

The 2007 session saw another important addition. We felt it was no longer sufficient to talk about distributed development, but that students should practice it. Collaboration between groups in Zurich and other groups in Zurich was not good enough. So we contacted colleagues around the world interested in similar issues, and received an enthusiastic response. The DOSE project is itself distributed: teams from students in different universities collaborate in a single development. Typically, we have two or three geographically distributed locations in each project group. The participating universities have been Politecnico di Milano (where our colleagues Carlo Ghezzi and Elisabetta di Nitto have played a major role in the current version of the project), University of Nijny-Novgorod in Russia, University of Debrecen in Hungary, Hanoi University of Technology in Vietnam, Odessa National Polytechnic in the Ukraine and (across town for us) University of Zurich. For the first time in 2010 a university from the Western hemisphere will join: University of Rio Cuarto in Argentina.

We have extensively studied how the projects actually fare (see publications [4-8]). For students, the job is hard. Often, after a couple of weeks, many want to give up: they have trouble reaching their partner teams, understanding their accents on Skype calls, agreeing on modes of collaboration, finalizing APIs, devising a proper test plan. Yet they hang on and, in most cases, succeed. At the end of the course they tell us how much they have learned about software engineering. For example I know of few better ways of teaching the importance of carefully documented program interfaces — including contracts — than to ask the students to integrate their modules with code from another team halfway around the globe. This is exactly what happens in industrial software development, when you can no longer rely on informal contacts at the coffee machine or in the parking lot to smooth out misunderstandings: software engineering principles and techniques come into full swing. With DOSE, students learn and practice these fundamental techniques in the controlled environment of a university project.

An example project topic, used last year, was based on an idea by Martin Nordio. He pointed out that in most countries there are some card games played in that country only. The project was to program such a game, where the team in charge of the game logic (what would be the “business model” in an industrial project) had to explain enough of their country’s game, and abstractly enough, to enable the other team to produce the user interface, based on a common game engine started by Martin. It was tough, but some of the results were spectacular, and these are students who will not need more preaching on the importance of specifications.

We are currently preparing the next session of DOSE, in collaboration with our partner universities. The more the merrier: we’d love to have other universities participate, including from the US. Adding extra spice to the project, the topic will be chosen among those from the ICSE SCORE competition [9], so that winning students have the opportunity to attend ICSE in Hawaii. If you are teaching a suitable course, or can organize a student group that will fit, please read the project description [10] and contact me or one of the other organizers listed on the page. There is a DOSE of madness in the idea, but no one, teacher or student,  ever leaves the course bored.

References

[1] Bertrand Meyer: Offshore Development: The Unspoken Revolution in Software Engineering, in Computer (IEEE), January 2006, pages 124, 122-123. Available here.

[2] ETH course page: see here for last year’s session (description of Fall 2010 session will be added soon).

[3] Industry course page: see here for the latest (June 2010) session (description of November 2010 session will be added soon).

[4] SEAFOOD 2010 home page.

[5] Bertrand Meyer and Marco Piccioni: The Allure and Risks of a Deployable Software Engineering Project: Experiences with Both Local and Distributed Development, in Proceedings of IEEE Conference on Software Engineering & Training (CSEE&T), Charleston (South Carolina), 14-17 April 2008, ed. H. Saiedian, pages 3-16. Preprint version  available online.

[6] Bertrand Meyer:  Design and Code Reviews in the Age of the Internet, in Communications of the ACM, vol. 51, no. 9, September 2008, pages 66-71. (Original version in Proceedings of SEAFOOD 2008 (Software Engineering Advances For Offshore and Outsourced Development,  Lecture Notes in Business Information Processing 16, Springer Verlag, 2009.) Available online.

[7] Martin Nordio, Roman Mitin, Bertrand Meyer, Carlo Ghezzi, Elisabetta Di Nitto and Giordano Tamburelli: The Role of Contracts in Distributed Development, in Proceedings of SEAFOOD 2009 (Software Engineering Advances For Offshore and Outsourced Development), Zurich, June-July 2009, Lecture Notes in Business Information Processing 35, Springer Verlag, 2009. Available online.

[8] Martin Nordio, Roman Mitin and Bertrand Meyer: Advanced Hands-on Training for Distributed and Outsourced Software Engineering, in ICSE 2010: Proceedings of 32nd International Conference on Software Engineering, Cape Town, May 2010, IEEE Computer Society Press, 2010. Available online.

[9] ICSE SCORE 2011 competition home page.

[10] DOSE project course page.

From programming to software engineering: ICSE keynote slides available

In response to many requests, I have made available [1] the slides of my education keynote at ICSE earlier this month. The theme was “From programming to software engineering: notes of an accidental teacher”. Some of the material has been presented before, notably at the Informatics Education Europe conference in Venice in 2009. (In research you can give a new talk every month, but in education things move at a more senatorial pace.) Still, part of the content is new. The talk is a summary of my experience teaching programming and software engineering at ETH.

The usual caveats apply: these are only slides (I did not write a paper), and not all may be understandable independently of the actual talk.

Reference

[1] From programming to software engineering: notes of an accidental teacher, slides from a keynote talk at ICSE 2010.

Programming on the cloud?

I am blogging live from the “Cloud Futures” conference organized by Microsoft in Redmond [1]. We had two excellent keynotes today, by Ed Lazowska [2] and David Patterson.

Lazowska emphasized the emergence of a new kind of science — eScience — based on analysis of enormous amounts of data. His key point was that this approach is a radical departure from “computational science” as we know it, based mostly on large simulations. With the eScience paradigm, the challenge is to handle the zillions of bytes of data that are available, often through continuous streams, in such fields as astronomy, oceanography or biology. It is unthinkable in his view to process such data through super-computing architectures specific to an institution; the Cloud is the only solution. One of the reasons (developed more explicitly in Patterson’s talk) is that cloud computing supports scaling down as well as scaling up. If your site experiences sudden bursts of popularity — say you get slashdotted — followed by downturns, you just cannot size the hardware right.

Lazowska also noted that it is impossible to convince your average  university president that Cloud is the way to go, as he will get his advice from the science-by-simulation  types. I don’t know who the president is at U. of Washington, but I wonder if the comment would apply to Stanford?

The overall argument for cloud computing is compelling. Of course the history of IT is a succession of swings of the pendulum between centralization and delocalization: mainframes, minis, PCs, client-server, “thin clients”, “The Network Is The Computer” (Sun’s slogan in the late eighties), smart clients, Web services and so on. But this latest swing seems destined to define much of the direction of computing for a while.

Interestingly, no speaker so far has addressed issues of how to program reliably for the cloud, even though cloud computing seems only to add orders of magnitude to the classical opportunities for messing up. Eiffel and contracts have a major role to play here.

More generally the opportunity to improve quality should not be lost. There is a widespread feeling (I don’t know of any systematic studies) that a non-negligible share of the results generated by computational science is just bogus, the product of old Fortran programs built by generations of graduate students with little understanding of software principles. At the very least, moving to cloud computing should encourage the use of 21st-century tools, languages and methods. Availability on the cloud should also enhance a critical property of good scientific research: reproducibility.

Software engineering is remarkably absent from the list of scientific application areas that speaker after speaker listed for cloud computing. Maybe software engineering researchers are timid, and do not think of themselves as deserving large computing resources; consider, however, all the potential applications, for example in program verification and empirical software engineering. The cloud is a big part of our own research in verification; in particular the automated testing paradigm pioneered by AutoTest [3] fits ideally with the cloud and we are actively working in this direction.

Lazowska mentioned that development environments are the ultimate application of cloud computing. Martin Nordio at ETH has developed, with the help of Le Minh Duc, a Master’s student at Hanoi University of Technology, a cloud-based version of EiffelStudio: CloudStudio, which I will present in my talk at the conference tomorrow. I’ll write more about it in later posts; just one note for the moment: no one should ever be forced again to update or commit.

References

[1] Program of the Cloud Futures conference.

[2] Keynote by Ed Lazowska. You can see his slides here.

[3] Bertrand Meyer, Arno Fiva, Ilinca Ciupa, Andreas Leitner, Yi Wei, Emmanuel Stapf: Programs That Test Themselves. IEEE Computer, vol. 42, no. 9, pages 46-55, September 2009; online version here.

Reflexivity, and other pillars of civilization

Let me start, dear reader of this blog, by probing your view of equality, and also of assignment. Two questions:  

  • Is a value x always equal to itself? (For the seasoned programmers in the audience: I am talking about a value, as in mathematics, not an expression, as in programming, which could have a side effect.)
  • In programming, if we consider an assignment

       x := y

and v is the value of y before that assignment (again, this little detour is to avoid bothering with side effects), is the value of x always equal to v after the assignment?  

Maybe I should include here one of these Web polls that one finds on newspaper sites, so that you can vote and compare your answer to the Wisdom of Crowds. My own vote is clear: yes to both. Equality is reflexive (every value is equal to itself, at any longitude and temperature, no excuses and no exceptions); and the purpose of assignment is to make the value of the target equal to the value of the source. Such properties are some of the last ramparts of civilization. If they go away, what else is left?  

754 enters the picture

Now come floating-point numbers and the famous IEEE “754” floating-point standard [1]. Because not all floating point operations yield a result usable as a floating-point number, the standard introduces a notion of “NaN”, Not a Number; certain operations involving floating-point numbers may yield a NaN. The term NaN does not denote a single value but a set of values, distinguished by their “payloads”.  

Now assume that the value of x is a NaN. If you use a programming language that supports IEEE 754 (as they all do, I think, today) the test in  

        if x = x then …  

is supposed to yield False. Yes, that is specified in the standard: NaN is never equal to NaN (even with the same payload); nor is it equal to anything else; the result of an equality comparison involving NaN will always be False.  

Assignment behavior is consistent with this: if y (a variable, or an expression with no side effect) has a NaN value, then after  

        x := y  

the test x = y will yield False.

Before commenting further let me note the obvious: I am by no means a numerics expert; I know that IEEE 754 was a tremendous advance, and that it was designed by some of the best minds in the field, headed by Velvel Kahan who received a Turing Award in part for that success. So it is quite possible that I am about to bury myself in piles of ridicule. On the other hand I have also learned that (1) ridicule does not kill, so I am game; and more importantly (2) experts are often right but not always, and it is always proper to question their reasoning. So without fear let me not stop at the arguments that “the committee must have voted on this point and they obviously knew what they were doing” and “it is the standard and implemented on zillions of machines, you cannot change it now”. Both are true enough, but not an excuse to censor questions.  

What are the criteria?

The question is: compatibility with an existing computer standard is great, but what about compatibility with a few hundred years of mathematics? Reflexivity of equality  is something that we expect for any data type, and it seems hard to justify that a value is not equal to itself. As to assignment, what good can it be if it does not make the target equal to the source value?  

The question of assignment is particularly vivid in Eiffel because we express the expected abstract properties of programs in the form of contracts. For example, the following “setter” procedure may have a postcondition (expressed by the ensure clause):  

        set_x (v: REAL)
                        — Set the value of x (an attribute, also of type REAL) to the value of v.
                do
                        …
                        x := v  
                ensure
                        x = v
                end  

   
If you call this procedure with a NaN argument for a compiler that applies IEEE 754 semantics, and monitor contracts at run time for testing and debugging, the execution will report a contract violation. This is very difficult for a programmer to accept.  

A typical example arises when you have an assignment to an item of an array of REAL values. Assume you are executing a [i] := x. In an object-oriented view of the world (as in Eiffel), this is considered simplified syntax  for the routine call a.put (x, i). The postcondition is that a [i] = x. It will be violated!  
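
Here is a small sketch of the effect, for readers who want to see it concretely; the routine name and local names are mine, and the example assumes a compiler that applies IEEE 754 semantics to = on REAL_64, as discussed above:

    try_nan
            -- Show the problem: with IEEE 754 equality the check below fails.
        local
            a: ARRAY [REAL_64]
            nan: REAL_64
        do
            nan := 0.0 / 0.0                   -- Yields a NaN under IEEE 754
            create a.make_filled (0.0, 1, 10)
            a [1] := nan                       -- Shorthand for a.put (nan, 1)
            check stored_value_unchanged: a [1] = nan end
                -- Violated: a NaN is never equal to a NaN
        end

With run-time assertion monitoring on, the check instruction reports a violation, which is exactly the situation described above for postconditions.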

The experts’ view

I queried a number of experts on the topic. (This is the opportunity to express my gratitude to members of the IFIP working group 2.5 on numerical software [2], some of the world’s top experts in the field, for their willingness to respond quickly and with many insights.) A representative answer, from Stuart Feldman, was:  

If I remember the debate correctly (many moons ago), NaN represents an indefinite value, so there is no reason to believe that the result of one calculation with unclear value should match that of another calculation with unclear value. (Different orders of infinity, different asymptotic approaches toward 0/0, etc.)  

Absolutely correct! Only one little detail, though: this is an argument against using the value True as a result of the test; but it is not an argument for using the value False! The exact same argument can be used to assert that the result should not be False:  

… there is no reason to believe that the result of one calculation with unclear value should not match that of another calculation with unclear value.  

Just as convincing! Both arguments complement each other: there is no compelling reason for demanding that the values be equal; and there is no compelling argument either to demand that they be different. If you ignore one of the two sides, you are biased.  

What do we do then?

The conclusion is not that the result should be False. The rational conclusion is that True and False are both unsatisfactory solutions. The reason is very simple: in a proper theory (I will sketch it below) the result of such a comparison should be some special undefined value; the same way that IEEE 754 extends the set of floating-point numbers with NaN, a satisfactory solution would extend the set of booleans with some NaB (Not a Boolean). But there is no NaB, probably because no one (understandably) wanted to bother, and also because being able to represent a value of type BOOLEAN on a single bit is, if not itself a pillar of civilization, one of the secrets of a happy life.

If both True and False are unsatisfactory solutions, we should use the one that is the “least” bad, according to some convincing criterion. That is not the attitude that 754 takes; it seems to consider (as illustrated by the justification cited above) that False is somehow less committing than True. But it is not! Stating that something is false is just as much of a commitment as stating that it is true. False is no closer to NaB than True is. A better criterion is: which of the two possibilities is going to be least damaging to time-honored assumptions embedded in mathematics? One of these assumptions is the reflexivity of equality: come rain or shine, x is equal to itself. Its counterpart for programming is that after an assignment the target will be equal to the original value of the source. This applies to numbers, and it applies to a NaN as well.

Note that this argument does not address equality between different NaNs. The standard as it is states that a specific NaN, with a specific payload, is not equal to itself.  

What do you think?

A few of us who had to examine the issue recently think that — whatever the standard says at the machine level — a programming language should support the venerable properties that equality is reflexive and that assignment yields equality.

Every programming language should decide this on its own; for Eiffel we think this should be the specification. Do you agree?  

Some theory

For readers who like theory, here is a mathematical restatement of the observations above. There is nothing fundamentally new in this section, so if you do not like strange symbols you can stop here.  

The math helps explain the earlier observation that neither True nor False is more “committing” than the other. A standard technique (coming from denotational semantics) for dealing with undefinedness in basic data types is to extend every data type into a lattice, with a partial order relation meaning “less defined than”. The lattice includes a bottom element, traditionally written “⊥” (pronounced “Bottom”) and a top element written “⊤” (“Top”). ⊥ represents an unknown value (not enough information) and ⊤ an error value (too much information). Pictorially, the lattice for natural numbers would look like this:

The lattice of integers

On basic types, we always use very simple lattices of this form, with three kinds of element: ⊥, less than every other element; ⊤, larger than all other elements; and, in between, all the normal values of the type, which for the partial order of interest are all equal. (No, this is not a new math in which all integers are equal. The order in question simply means “is less defined than”. Every integer is as defined as all other integers, more defined than ⊥, and less defined than ⊤.) Such lattices are not very exciting, but they serve as a starting point; lattices with more interesting structures are those applying to functions on such spaces — including functions of functions — which represent programs.

The modeling of floating-point numbers with NaN involves such a lattice; introducing NaN means introducing a ⊤ value. (Actually, one might prefer to interpret NaN as ⊥, but the reasoning transposes immediately.) More accurately, since there are many NaN values, the lattice will look more like this:

The lattice of floats in IEEE 754

For simplicity we can ignore the variety of NaNs and consider a single ⊤.

Functions on lattices — as implemented by programs — should satisfy a fundamental property: monotonicity. A function f is monotone (as in ordinary analysis) if, whenever x ≤ y, then f (x) ≤ f (y). Monotonicity is a common-sense requirement: we cannot get more information from less information. If we know less about x than about y, we cannot expect that any function (with a computer, any program) f will, out of nowhere, restore the missing information.

Demanding monotonicity of all floating-point operations reflects this exigency: indeed, in IEEE 754, any arithmetic operation — addition, multiplication … — that has a NaN as one of its arguments must yield a NaN as its result. Great. Now for soundness we should also have such a property for boolean-valued operations, such as equality. If we had a NaB as the ⊤ of booleans, just like NaN is the ⊤ of floating-point numbers, then the result of NaN = NaN would be NaB. But the world is not perfect and the IEEE 754 standard does not govern booleans. Somehow (I think) the designers thought of False as somehow less defined than True. But it is not! False is just as defined as True in the very simple lattice of booleans; for the relevant partial order, True and False are equal:

The lattice of booleans

Because any solution that cannot use a NaB will violate monotonicity and hence will be imperfect, we must resort to heuristic criteria. A very strong criterion in favor of choosing True is reflexivity — remaining compatible with a fundamental property of equality. I do not know of any argument for choosing False. 

The Spartan approach

There is, by the way, a technique that accepts booleans as we love them (with two values and no NaB) and achieves mathematical rigor. If operations involving NaNs truly give us pimples, we can make any such operation trigger an exception. In the absence of ⊥ or ⊤ values, this is an acceptable programming technique for representing undefinedness. The downside, of course, is that just about everywhere the program must be ready to handle the exception in some way.

It is unlikely that in practice many people would be comfortable with such a solution. 

Final observations

Let me point out two objections that I have seen. Van Snyder wrote: 

NaN is not part of the set of floating-point numbers, so it doesn’t behave as if “bottom” were added to the set. 

Interesting point, but, in my opinion, not applicable: ⊥ is indeed not part of the mathematical set of floating-point numbers, but in the form of NaN it is part of the corresponding type (float in C, REAL in Eiffel); and the operations of the type are applicable to all values, including NaN if, as noted, we have not taken the extreme step of triggering an exception every time an operation uses a NaN as one of its operands. So we cannot free ourselves from the monotonicity concern by a mere sleight of hand.

Another comment, also by Van Snyder (slightly abridged): 

Think of [the status of NaN] as a variety of dynamic run-time typing. With static typing, if x is an integer variable and y a floating-point variable, then

        x := y 

does not inevitably lead to 

        x = y

 True; and a compelling argument to require that conversions satisfy equality as a postcondition! Such  reasoning — reflexivity again — was essential in the design of the Eiffel conversion mechanism [3], which makes it possible to define conversions between various data types (not just integers and reals and the other classical examples, but also any other user types as long as the conversion does not conflict with inheritance). The same conversion rules apply to assignment and equality, so that yes, whenever the assignment x := y is permitted, including when it involves a conversion, the property x = y  is afterwards always guaranteed to hold.
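
For readers who have not seen the conversion mechanism, here is a minimal sketch; the class and feature names are mine. The class declares, through its convert clause, that a REAL_64 converts to it via the creation procedure make:

    class LENGTH
            -- Sketch only: a length in meters, convertible from REAL_64.
    create
        make
    convert
        make ({REAL_64})
    feature
        make (v: REAL_64)
                -- Initialize with v meters.
            do
                meters := v
            end
        meters: REAL_64
                -- Value in meters.
    end

With this declaration, an assignment such as x := 3.5, for x of type LENGTH, is valid and triggers the conversion; as noted above, the same rules then govern the corresponding equality.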

It is rather dangerous indeed to depart from the fundamental laws of mathematics. 

References

[1] IEEE floating-point standard, 754-2008; see summary and references in the Wikipedia entry.  

[2] IFIP Working Group 2.5 on numerical software: home page

[3] Eiffel standard (ECMA and ISO), available on the ECMA site.

More expressive loops for Eiffel

New variants of the loop construct have been introduced into Eiffel, allowing a safer, more concise and more abstract style of programming. The challenge was to remain compatible with the basic loop concept, in particular correctness concerns (loop invariant and loop variant), to provide a flexible mechanism that would cover many different cases of iteration, and to keep things simple.

Here are some examples of the new forms. Consider a structure s, such as a list, which can be traversed in sequence. The following loop prints all elements of the list:

      across s as c loop print (c.item) end

(The procedure print is available on all types, and can be adapted to any of them by redefining the out feature; item gives access to individual values of the list.) More about c in just a moment; in the meantime you can just consider that using “as c” and manipulating the structure through c rather than directly through s is a simple idiom to be learned and applied systematically for such across loops.

The above example is an instruction. The all and some variants yield boolean expressions, as in

across s as c all c.item > 0 end

which denotes a boolean value, true if and only if all elements of the list are positive. To find out if at least one is positive, use

across s as c some c.item > 0 end

Such expressions could appear, for example, in a class invariant, but they may be useful in many different contexts.
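
As a small illustration of the expression forms in contracts, here is a sketch of a routine whose precondition uses the all form and whose body uses the instruction form; the routine name and the choice of LINKED_LIST are mine:

    sum_of_positives (s: LINKED_LIST [INTEGER]): INTEGER
            -- Sum of the elements of s, all of which are required to be positive.
        require
            all_positive: across s as c all c.item > 0 end
        do
            across s as c loop Result := Result + c.item end
        ensure
            positive_unless_empty: s.is_empty or Result > 0
        end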

In some cases, a from clause is useful, as in

        across s as c from sum := 0 loop sum := sum + c.index * c.item end
— Computes Σ i * s [i]

The original form of loop in Eiffel is more explicit, and remains available; you can achieve the equivalent of the last example, on a suitable structure, as

A list and a cursor

      from
          sum := 0 ; s.start
      until
          s.after
      loop
          sum := sum + s.index * s.item
          s.forth
      end
which directly manipulates a cursor through s, using start to move it to the beginning, forth to advance it, and after to test if it is past the last element. The forms with across achieve the same purpose in a more concise manner. More important than concision is abstraction: you do not need to worry about manipulating the cursor through start, forth and after. Under the hood, however, the effect is the same. More precisely, it is the same as in a loop of the form

      from
          sum := 0 ; c.start
      until
          c.after
      loop
          sum := sum + c.index * c.item
          c.forth
      end

where c is a cursor object associated with the loop. The advantage of using a cursor is clear: rather than keeping the state of the iteration in the object itself, you make it external, part of a cursor object that, so to speak, looks at the list. This means in particular that many traversals can be active on the same structure at the same time; with an internal cursor, they would mess up with each other, unless you manually took the trouble to save and restore cursor positions. With an external cursor, each traversal has its own cursor object, and so does not interfere with other traversals — at least as long as they don’t change the structure (I’ll come back to that point).

With the across variant, you automatically use a cursor; you do not have to declare or create it: it simply comes as a result of the “as c” part of the construct, which introduces c as the cursor.

On what structures can you perform such iterations? There is no limitation; you simply need a type based on a class that inherits, directly or indirectly, from the library class ITERABLE. All relevant classes from the EiffelBase library have been updated to provide this inheritance, so that you can apply the across scheme to lists of all kinds, hash tables, arrays, intervals etc.
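
Making one of your own classes iterable takes little: inherit from ITERABLE and provide the feature new_cursor, discussed in more detail below. Here is a minimal sketch (the class and feature names are mine) that simply delegates to an underlying list:

    class TEAM
            -- Sketch only: a group of members over which clients can iterate.
    inherit
        ITERABLE [STRING]
    create
        make
    feature
        make
                -- Initialize with no members.
            do
                create members.make
            end
        members: LINKED_LIST [STRING]
                -- Names of the members.
        new_cursor: ITERATION_CURSOR [STRING]
                -- Fresh cursor over the members.
            do
                Result := members.new_cursor
            end
    end

A client may then write, for example, across my_team as c loop print (c.item) end for my_team of type TEAM.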

One of these structures is the integer interval. The notation  m |..| n, for integers m and n, denotes the corresponding integer interval. (This is not a special language notation, simply an operator, |..|, defined with the general operator mechanism as an alias for the feature interval of INTEGER classes.) To iterate on such an interval, use the same form as in the examples above:

        across m |..| n  as c from sum := 0  loop sum := sum + a [c.item] end
— Computes Σ a [i], for i ranging from m to n, for an array (or other structure) a

The key feature in ITERABLE is new_cursor, which returns a freshly created cursor object associated with the current structure. By default it is an ITERATION_CURSOR, the most general cursor type, but every descendant of ITERABLE can redefine the result type to something more specific to the current structure. Using a cursor — c in the above examples —, rather than manipulating the structure s directly, provides considerable flexibility thanks to the property that ITERATION_CURSOR itself inherits from ITERABLE   and hence has all the iteration mechanisms. For example you may write

across s.new_cursor.reversed as c loop print (c.item) end

to print elements in reverse order. (Using Eiffel’s operator mechanism, you may write - s.new_cursor, with a minus operator, as a synonym for s.new_cursor.reversed.) The function reversed gives you a new cursor on the same target structure, enabling you to iterate backwards. Or you can use

        across s.new_cursor + 3 as c loop print (c.item) end

(using s.new_cursor.incremented (3) rather than s.new_cursor + 3 if you are not too keen on operator syntax) to iterate over every third item. A number of other possibilities are available in the cursor classes.

A high-level iteration mechanism is only safe if you have the guarantee that the structure remains intact throughout the iteration. Assume you are iterating through a structure

across s as c loop some_routine end

and some_routine changes the structure s: the whole process could be messed up. Thanks to Design by Contract mechanisms, the library protects you against such mistakes. Features such as item and index, accessing properties of the structure during the iteration, are equipped with a precondition clause

require
is_valid

and every operation that changes the structure sets is_valid to false. So as soon as any change happens, you cannot continue the iteration; all you can do is restart a new one; the command start, used internally to start the operation, does not have the above precondition.

Sometimes, of course, you do want to change a structure while traversing it; for example you may want to add an element just to the right of the iteration position. If you know what you are doing that’s fine. (Let me correct this: if you know what you are doing, express it through precise contracts, and you’ll be fine.) But then you should not use the abstract forms of the loop, across; you should control the iteration itself through the basic form from … until with explicit cursor manipulation, protected by appropriate contracts.

The two styles, by the way, are not distinct constructs. Eiffel has always had only one form of loop and this continues to be the case. The across forms are simply new possibilities added to the classical loop construct, with obvious constraints stating for example that you may not have both a some or all form and an explicit loop body. In particular, an across loop can still have an invariant clause, specifying the correctness properties of the loop, as in

        across s as c from sum := 0 invariant sum = sigma (s, index) loop sum := sum + c.index * c.item end

EiffelStudio 6.5 already included the language update; the library update (not yet including the is_valid preconditions for data structure classes) will be made available this week.

These new mechanisms should increase the level of abstraction and the reliability of loops, a fundamental element of  almost all programs.

“Touch of Class” published

My textbook Touch of Class: An Introduction to Programming Well Using Objects and Contracts [1] is now available from Springer Verlag [2]. I have been told of many bookstores in Europe that have it by now; for example Amazon Germany [3] offers immediate delivery. Amazon US still lists the book as not yet published [4], but I think this will be corrected very soon.


The book results from six years of teaching introductory programming at ETH Zurich. It is richly illustrated in full color (not only with technical illustrations but with numerous photographs of people and artefacts). It is pretty big, but designed so that a typical one-semester introductory course can cover most of the material.

Many topics are addressed (see table of contents below), including quite a few that are seldom seen at the introductory level. Some examples, listed here in random order: a fairly extensive introduction to software engineering including things like requirements engineering (not usually mentioned in programming courses, with results for everyone to see!) and CMMI, a detailed discussion of how to implement recursion, polymorphism and dynamic binding and their role for software architecture, multiple inheritance, lambda calculus (at an introductory level of course), a detailed analysis of the Observer and Visitor patterns, event-driven programming, the lure and dangers of references and aliasing, topological sort as an example of both algorithm and API design, high-level function closures, software tools, properties of computer hardware relevant for programmers, undecidability etc.

The progression uses an object-oriented approach throughout; the examples are in Eiffel, and four appendices present the details of Java, C#, C++ and C. Concepts of Design by Contract and rigorous development are central to the approach; for example, loops are presented as a technique for computing a result by successive approximation, with a central role for the concept of loop invariant. This is not a “formal methods” book in the sense of inflicting on the students a heavy mathematical apparatus, but it uses preconditions, postconditions and invariants throughout to alert them to the importance of reasoning rigorously about programs. The discussion introduces many principles of sound design, in line with the book’s subtitle, “Learning to Program Well”.

The general approach is “Outside-In” (also known as “Inverted Curriculum” and described at some length in some of my articles, see e.g. [5]): students have, right from the start, the possibility of working with real software, a large (150,000-line) library that has been designed specifically for that purpose. Called Traffic, this library simulates traffic in a city; it is graphical and of good enough visual quality to be attractive to today’s “Wii generation” students, something that traditional beginners’ exercises, like computing the 7th Fibonacci number, cannot do (although we have those too). Using the Traffic software through its API, students can right from the first couple of weeks produce powerful applications, without understanding the internals of the library. But they do not stop there: since the whole thing is available in open source, students learn little by little how the software is made internally. Hence the name “Outside-In”: understand the interface first, then dig into the internals. Two advantages of the approach are particularly worth noting:

  • It emphasizes the value of abstraction, and in particular of contracts, not by preaching but by showing students that abstraction helps them master a large body of professional-level software, doing things that would otherwise be unthinkable at an introductory level.
  • It addresses what is probably today the biggest obstacle to teaching introductory programming: the wide diversity of initial student backgrounds. The risk with traditional approaches is either to fly too high and lose the novices, or to stay too low and bore those who already have programming experience. With the Outside-In method the novices can follow the exact path charted for them, from external API to internal implementation; those who already know something about programming can move ahead of the lectures and start digging into the code by themselves for information and inspiration.

(We have pretty amazing data on students’ prior programming knowledge, as  we have been surveying students for the past six years, initially at ETH and more recently at the University of York in England thanks to our colleague Manuel Oriol; some day I will post a blog entry about this specific topic.)

The book has been field-tested in its successive drafts since 2003 at ETH, for the Introduction to Programming course (which starts again in a few weeks, for the first time with the benefit of the full text in printed form). Our material, such as a full set of slides, plus exercises, video recordings of the lectures etc. is available to any instructor selecting the text. I must say that Springer did an outstanding job with the quality of the printing and I hope that instructors, students, and even some practitioners already in industry will like both form and content.

Table of contents

Front matter: Community resource, Dedication (to Tony Hoare and Niklaus Wirth), Prefaces, Student_preface, Instructor_preface, Note to instructors: what to cover?, Contents

PART I: Basics
1 The industry of pure ideas
2 Dealing with objects
3 Program structure basics
4 The interface of a class
5 Just Enough Logic
6 Creating objects and executing systems
7 Control structures
8 Routines, functional abstraction and information hiding
9 Variables, assignment and references
PART II: How things work
10 Just enough hardware
11 Describing syntax
12 Programming languages and tools
PART III: Algorithms and data structures
13 Fundamental data structures, genericity, and algorithm complexity
14 Recursion and trees
15 Devising and engineering an algorithm: Topological Sort
PART IV: Object-Oriented Techniques
16 Inheritance
17 Operations as objects: agents and lambda calculus
18 Event-driven design
PART V: Towards software engineering
19 Introduction to software engineering
PART VI: Appendices
A An introduction to Java (from material by Marco Piccioni)
B An introduction to C# (from material by Benjamin Morandi)
C An introduction to C++ (from material by Nadia Polikarpova)
D From C++ to C
E Using the EiffelStudio environment
Picture credits
Index

References

[1] Bertrand Meyer, Touch of Class: An Introduction to Programming Well Using Objects and Contracts, Springer Verlag, 2009, 876+lxiv pages, Hardcover, ISBN: 978-3-540-92144-8.

[2] Publisher page for [1]: see  here. List price: $79.95. (The page says “Ships in 3 to 4 weeks” but I think this is incorrect as the book is available; I’ll try to get the mention corrected.)

[3] Amazon.de page: see here. List price: EUR 53.45 (with offers starting at EUR 41.67).

[4] Amazon.com page: see here. List price: $63.96.

[5] Michela Pedroni and Bertrand Meyer: The Inverted Curriculum in Practice, in Proceedings of SIGCSE 2006, ACM, Houston (Texas), 1-5 March 2006, pages 481-485; available online.