Archive for the ‘Eiffel’ Category.

Tomorrow (Thursday) noon EDT: ACM talk on requirements

In the software engineering family, requirements engineering is in my experience the poor cousin, lagging behind the progress of other parts (such as design). I have been devoting attention to the topic in recent months and am completing a book on it.

Tomorrow (Thursday), I will be covering some of the material in a one-hour Tech Talk for ACM, with the title

The Four PEGS of Requirements Engineering

The time is Thursday, 4 March 2021, at noon EDT (New York) and 18 CET (Paris, Zurich etc.). Attendance is free but requires registration, on the event page here.

Abstract:

Bad software requirements can jeopardize projects. There is a considerable literature on requirements, but practice is far behind: what passes for requirements in industry usually consists of a few use cases or user stories, which are useful but not sufficient as a solution. Can we fix requirements engineering (known in other circles as business analysis) so that it is no longer the weak link in software engineering?

I will present ongoing work intended to help industry produce more useful requirements. It includes precise definitions of requirements concepts and a standard plan for requirements specifications, intended to replace the venerable but woefully obsolete IEEE standard from 1998. The plan contains four books covering the four “PEGS” of requirements engineering (which I will explain). The approach builds on existing knowledge to define a practical basis for requirements engineering and provide projects with precise and helpful guidelines.

This is I think the fourth time I am giving talks in this venue (previous talks were about Design by Contract, Agile Methods and Concurrency).


Some contributions

Science progresses through people taking advantage of others’ insights and inventions. One of the conditions that makes the game possible is that you acknowledge what you take. For the originator, it is rewarding to see one’s ideas reused, but frustrating when that happens without acknowledgment, especially when you are yourself punctilious about citing your own sources of inspiration.

I have started to record some concepts that are widely known and applied today and which I believe I originated in whole or in part, whether or not their origin is cited by those who took them. The list below is not complete and I may update it in the future. It is not a list of all the ideas I contributed, only of those fulfilling two criteria:

  • Others have built upon them.  (If there is an idea that I think is great but no one paid attention to it, the list does not include it.)
  • They have gained wide visibility.

There is a narcissistic aspect to this exercise, and if people want to dismiss it as just showing that I am full of myself, so be it. I am just a little tired of being given papers to referee that state that genericity was invented by Java, that no one ever thought of refactoring before agile methods, and so on. It is finally time to state some facts.

Facts indeed: I back every assertion by precise references. So if I am wrong — i.e. someone preceded me — the claims of precedence can be refuted; if so I will update or remove them. All articles by me cited in this note are available (as downloadable PDFs) on my publication page. (The page is up to date until 2018; I am in the process of adding newer publications.)

Post-publication note: I have started to receive some comments and added them in a Notes section at the end; references to those notes are in the format [A].

Final disclaimer (about the narcissistic aspect): the exercise of collecting this kind of information was new for me, as I do not usually spend time reflecting on the past. I am much more interested in the future and definitely hope that my next contributions will eclipse any of the ones listed below.

Programming concepts: substitution principle

Far from me any wish to under-represent the seminal contributions of Barbara Liskov, particularly her invention of the concept of abstract data type on which so much relies. As far as I can tell, however, what has come to be known as the “Liskov Substitution Principle” is essentially contained in the discussion of polymorphism in section 10.1 of the first edition (Prentice Hall, 1988) of my book Object-Oriented Software Construction (hereafter OOSC1); for example, “the type compatibility rule implies that the dynamic type is always a descendant of the static type” (10.1.7) and “if B inherits from A, the set of objects that can be associated at run time with an entity [generalization of variable] includes instances of B and its descendants”.

Perhaps most tellingly, a key aspect of the substitution principle, as listed for example in the Wikipedia entry, is the rule on assertions: in a proper descendant, keep the invariant, keep or weaken the precondition, keep or strengthen the postcondition. This rule was introduced in OOSC1, over several pages in section 11.1. There is also an extensive discussion in the article Eiffel: Applying the Principles of Object-Oriented Design published in the Journal of Systems and Software, May 1986.

The original 1988 Liskov article cited (for example) in the Wikipedia entry on the substitution principle says nothing about this and does not in fact include any of the terms “assertion”, “precondition”, “postcondition” or “invariant”. To me this absence means that the article misses a key property of substitution: that the abstract semantics remain the same. (Also cited is a 1994 Liskov article in TOPLAS, but that was many years after OOSC1 and other articles explaining substitution and the assertion rules.)

Liskov’s original paper states that “if for each object o1 of type S there is an object o2 of type T such that for all programs P defined in terms of T, the behavior of P is unchanged when o1 is substituted for o2, then S is a subtype of T.” As stated, this property is impossible to satisfy: if the behavior is identical, then the implementations are the same, and the two types are identical (or differ only by name). Of course the concrete behaviors are different: applying the operation rotate to two different figures o1 and o2, whose types are subtypes of FIGURE and in some cases of each other, will trigger different algorithms — different behaviors. Only with assertions (contracts) does the substitution idea make sense: the abstract behavior, as characterized by preconditions, postconditions and the class invariants, is the same (modulo respective weakening and strengthening to preserve the flexibility of the different versions). Realizing this was a major step in understanding inheritance and typing.
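In Eiffel the rule takes a direct syntactic form: require else introduces a weakened precondition in a redeclared feature, ensure then a strengthened postcondition, and the parent’s invariant is inherited. Here is a toy sketch (class and feature names invented for the occasion, not taken from the cited texts):

    class ACCOUNT
    feature
        balance: INTEGER
                -- Current balance.

        withdraw (sum: INTEGER)
                -- Remove `sum' from the account.
            require
                small_enough: sum <= balance
            do
                balance := balance - sum
            ensure
                debited: balance = old balance - sum
            end
    end

    class CREDIT_ACCOUNT
    inherit
        ACCOUNT
            redefine withdraw end
    feature
        credit_limit: INTEGER
                -- Amount by which the balance may go negative.

        withdraw (sum: INTEGER)
                -- Remove `sum', drawing on the credit line if needed.
            require else
                within_credit: sum <= balance + credit_limit    -- Weakened precondition
            do
                balance := balance - sum
            ensure then
                debited: balance = old balance - sum            -- Postcondition kept (it may also be strengthened)
            end
    end

A client that goes through an entity of static type ACCOUNT, and hence satisfies the original precondition, automatically satisfies the new one; this is what makes the substitution safe.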

I do not know of any earlier (or contemporary) exposition of this principle and it would be normal to get the appropriate recognition.

Software design: design patterns

Two of the important patterns in the “Gang of Four” Design Patterns book (GoF) by Gamma et al. (1995) are the Command Pattern and the Bridge Pattern. I introduced them (under different names) in the following publications:

  • The command pattern appears in OOSC1 under the name “Undo-Redo” in section 12.2. The solution is essentially the same as in GoF. I do not know of any earlier exposition of the technique. See also notes [B] and [C].
  • The bridge pattern appears under the name “handle technique” in my book Reusable Software: The Base Component Libraries (Prentice Hall, 1994). It had been described several years earlier in manuals for Eiffel libraries. I do not know of an earlier reference. (The second edition of Object-Oriented Software Construction — Prentice Hall, 1997, “OOSC2” –, which also describes it, states that a similar technique is described in an article by Josef Gil and Ricardo Szmit at the TOOLS USA conference in the summer of 1994, i.e. after the publication of Reusable Software.)

Note that it is pointless to claim precedence over GoF since that book explicitly states that it is collecting known “best practices”, not introducing new ones. The relevant questions are: who, pre-GoF, introduced each of these techniques first; and which publications the GoF book cites as “prior art” for each pattern. In the cases at hand, Command and Bridge, it does not cite OOSC1.

To be concrete: unless someone can point to an earlier reference, anytime anyone anywhere using an interactive system enters a few “CTRL-Z” to undo commands, possibly followed by some “CTRL-Y” to redo them (or uses other UI conventions to achieve these goals), the software is most likely relying on a technique that I first described in the place mentioned above.

Software design: Open-Closed Principle

Another contribution of OOSC1 (1988), section 2.3, reinforced in OOSC2 (1997), is the Open-Closed principle, which explained one of the key aspects of inheritance: the ability to keep a module both closed (immediately usable as is) and open to extension (through inheritance), preserving the basic semantics. I am mentioning this idea only in passing since in this case my contribution is usually recognized, for example in the Wikipedia entry.

Software design: OO for reuse

Reusability: the Case for Object-Oriented Design (1987) is, I believe, the first publication that clearly explained why object-oriented concepts were (and still are today — in Grady Booch’s words, “there is no other game in town”) the best answer to realize the goal of software construction from software components. In particular, the article:

  • Explains the relationship between abstract data types and OO programming, showing the former as the theoretical basis for the latter. (The CLU language at MIT originated from Liskov’s pioneering work on abstract data types, but was not OO in the full sense of the term, missing in particular a concept of inheritance.)
  • Shows that reusability implies bottom-up development. (Top-down refinement was the mantra at the time, and promoting bottom-up was quite a shock for many people.)
  • Explains the role of inheritance for reuse, as a complement to Parnas’s interface-based modular construction with information hiding.

Software design: Design by Contract

The contribution of Design by Contract is one that is widely acknowledged so I don’t have any point to establish here — I will just recall the essentials. The notion of assertion goes back to the work of Floyd, Hoare and Dijkstra in the sixties and seventies, and correctness-by-construction to Dijkstra, Gries and Wirth, but Design by Contract is a comprehensive framework providing:

  • The use of assertions in an object-oriented context. (The notion of class invariant was mentioned in a paper by Tony Hoare published back in 1972.)
  • The connection of inheritance with assertions (as sketched above). That part as far as I know was entirely new.
  • A design methodology for quality software: the core of DbC.
  • Language constructs carefully seamed into the fabric of the language. (There were precedents there, but in the form of research languages such as Alphard, a paper design only, not implemented, and Euclid.)
  • A documentation methodology.
  • Support for testing.
  • Support for a consistent theory of exception handling (see next).

Design by Contract is sometimes taken to mean simply the addition of a few assertions here and there. What the term actually denotes is a comprehensive methodology with all the above components, tightly integrated into the programming language. Note in particular that preconditions and postconditions are not sufficient; in an OO context class invariants are essential.
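As a minimal illustration of how the three kinds of assertions work together (a toy class invented for the occasion, not taken from the cited texts):

    class COUNTER
    feature
        value: INTEGER
                -- Current count.

        increment
                -- Increase the count by one.
            require
                not_at_maximum: value < Maximum
            do
                value := value + 1
            ensure
                one_more: value = old value + 1
            end

        Maximum: INTEGER = 100
                -- Largest permitted count.
    invariant
        in_range: 0 <= value and value <= Maximum
    end

The precondition binds the client, the postcondition binds the implementation, and the invariant is a consistency condition that every exported feature must preserve, from creation on.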

Software design: exceptions

Prior to the Design by Contract work, exceptions were defined very vaguely, as something special you do outside of “normal” cases, but without defining “normal”. Design by Contract brings a proper perspective by defining these concepts precisely. This was explained in a 1987 article, Disciplined Exceptions ([86] in the list), rejected by ECOOP but circulated as a technical report; the ideas appear again in detail in OOSC1 (sections 7.10.3 to 7.10.5).
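In Eiffel the approach translates into the rescue/retry mechanism: a routine either fulfills its contract, possibly after retrying, or fails and passes the exception on to its caller. A sketch (the low-level transmit routine is a placeholder invented for the example):

    class TRANSMITTER
    feature
        transmit (message: STRING)
                -- Low-level send; may trigger an exception on failure.
            do
                -- (Actual communication code would go here.)
            end

        attempt_transmission (message: STRING)
                -- Try to send `message', giving up after three attempts.
            local
                failures: INTEGER
            do
                transmit (message)
            rescue
                failures := failures + 1
                if failures < 3 then
                    retry
                end
            end
    end

If the rescue clause terminates without executing retry, the routine fails and the exception propagates; “normal” and “abnormal” are thereby defined precisely, in terms of the contract.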

Other important foundational work on exceptions, to which I know no real precursor (as usual I would be happy to correct any omission), addressed what happens to the outcome of an exception in a concurrent or distributed context. This work was done at ETH, in particular in the PhD theses  of B. Morandi and A. Kolesnichenko, co-supervised with S. Nanz. See the co-authored papers [345] and [363].

On the verification aspect of exceptions, see below.

Software design: refactoring

I have never seen a discussion of refactoring that refers to the detailed discussion of generalization in both of the books Reusable Software (1994, chapter 3) and Object Success (Prentice Hall, 1995, from page 122 to the end of chapter 6). These discussions describe in detail how, once a program has been shown to work, it should be subject to a posteriori design improvements. They present several of the refactoring techniques (as they were called when the idea gained traction several years later), such as moving common elements up in the class hierarchy, and adding an abstract class as parent to concrete classes ex post facto.

These ideas are an integral part of the design methodology presented in these books (and again in OOSC2 a few years later). It is beyond me why people would present refactoring (or its history, as in the Wikipedia entry on the topic) without referring to these publications, which were widely circulated and are available for anyone to inspect.

Software design: built-in documentation and Single-Product principle

Another original contribution was the idea of including documentation in the code itself and relying on tools to extract the documentation-only information (leaving implementation elements aside). The idea, described in detail in OOSC1 in 1988 (sections 9.4 and 9.5) and already mentioned in the earlier Eiffel papers, is that code should be self-complete, containing elements of various levels of abstraction; some of them describe implementation, but the higher-level elements describe specification, and are distinguished syntactically in such a way that tools can extract them to produce documentation at any desired level of abstraction.

The ideas were later applied through such mechanisms as JavaDoc (with no credit as far as I know). They were present in Eiffel from the start, along with the underlying principles, in particular the “Single-Product principle” (sometimes called the “Self-Documentation principle”, and also generalized by J. Ostroff and R. Paige as the “Single-Model principle”). Eiffel is the best realization of these principles thanks to:

  • Contracts (as mentioned above): the “contract view” of a class (called “short form” in earlier descriptions) removes the implementations but shows the relevant preconditions, postconditions and class invariants, giving a precise and abstract specification of the class (see the sketch after this list).
  • Eiffel syntax has a special place for “header comments”, which describe high-level properties and remain in the contract view.
  • Eiffel library class documentation has always been based on specifications automatically extracted from the actual text of the classes, guaranteeing adequacy of the documentation. Several formats are supported (including, from 1995 on, HTML, so that documentation can be automatically deployed on the Web).
  • Starting with the EiffelCase tool in the early 90s, and today with the Diagram Tool of EiffelStudio, class structures (inheritance and client relationships) are displayed graphically, again in an automatically extracted form, using either the BON or UML conventions.
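As a sketch of the contract view (a toy stack feature invented for the occasion, assuming the usual count, capacity, item and storage queries of a STACK [G] class), the full text of a feature might read

    put (x: G)
            -- Push `x' on top of the stack.
        require
            not_full: count < capacity
        do
            count := count + 1
            storage [count] := x
        ensure
            pushed: item = x
            one_more: count = old count + 1
        end

and its contract view keeps the header comment, precondition and postcondition but drops the body:

    put (x: G)
            -- Push `x' on top of the stack.
        require
            not_full: count < capacity
        ensure
            pushed: item = x
            one_more: count = old count + 1

Clients get exactly the abstract information they need, extracted from the code itself rather than maintained separately.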

One of the core benefits of the Single-Product principle is to guard against what some of my publications called the “Dorian Gray” syndrome: divergence of an implementation from its description, a critical problem in software because of the ease of modifying stuff. Having the documentation as an integral part of the code helps ensure that when information at some level of abstraction (specification, design, implementation) changes, the other levels will be updated as well.

Crucial in the approach is the “roundtripping” requirement: specifiers or implementers can make changes in any of the views, and have them reflected automatically in the other views. For example, you can graphically draw an arrow between two bubbles representing classes B and A in the Diagram Tool, and the code of B will be updated with “inherit A”; or you can add this Inheritance clause textually in the code of class B, and the diagram will be automatically updated with an arrow.

It is important to note how contrarian and subversive these ideas were at the time of their introduction (and still to some extent today). The wisdom was that you do requirements then design then implementation, and that code is a lowly product entirely separate from specification and documentation. Model-Driven Development perpetuates this idea (you are not supposed to modify the code, and if you do there is generally no easy way to propagate the change to the model.) Rehabilitating the code (a precursor idea to agile methods, see below) was a complete change of perspective.

I am aware of no precedent for this Single Product approach. The closest earlier ideas I can think of are in Knuth’s introduction of Literate Programming in the early eighties (with a book in 1984). As in the Single-product approach, documentation is interspersed with code. But the literate programming approach is (as presented) top-down, with English-like explanations progressively being extended with implementation elements. The Single Product approach emphasizes the primacy of code and, in terms of the design process, is very much yoyo, alternating top-down (from the specification to the implementation) and bottom-up (from the implementation to the abstraction) steps. In addition, a large part of the documentation, and often the most important one, is not informal English but formal assertions. I knew about Literate Programming, of course, and learned from it, but Single-Product is something else.

Software design: from patterns to components

Karine Arnout’s thesis at ETH Zurich, resulting in two co-authored articles ([255] and [257]), showed that contrary to conventional wisdom a good proportion of the classical design patterns, including some of the most sophisticated, can be transformed into reusable components (indeed part of an Eiffel library). The agent mechanism (see below) was instrumental in achieving that result.

Programming, design and specification concepts: abstract data types

Liskov’s and Zilles’s ground-breaking 1974 abstract data types paper presented the concepts without a mathematical specification, using programming language constructs instead. A 1976 paper (number [3] in my publication list, La Description des Structures de Données, i.e. the description of data structures) was as far as I know one of the first to present a mathematical formalism, as  used today in presentations of ADTs. John Guttag was taking a similar approach in his PhD thesis at about the same time, and went further in providing a sound mathematical foundation, introducing in particular (in a 1978 paper with Jim Horning) the notion of sufficient completeness, to which I devoted a full article in this blog  (Are My Requirements Complete?) about a year ago. My own article was published in a not very well known journal and in French, so I don’t think it had much direct influence. (My later books reused some of the material.)

The three-level description approach of that article (later presented in English for an ACM workshop in the US in 1981, Pingree Park, reference [28]) is not well known but still applicable, and would be useful to avoid frequent confusions between ADT specifications and more explicit descriptions.

When I wrote my 1976 paper, I was not aware of Guttag’s ongoing work (only of the Liskov and Zilles paper), so the use of a mathematical framework with functions and predicates on them was devised independently. (I remember being quite happy when I saw what the axioms should be for a queue.) Guttag and I both gave talks at a workshop organized by the French programming language interest group in 1977 and it was fun to see that our presentations were almost identical. I think my paper still reads well today (well, if you read French). Whether or not it exerted direct influence, I am proud that it independently introduced the modern way of thinking of abstract data types as characterized by mathematical functions and their formal (predicate calculus) properties.
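To give an idea of the style of specification involved (the notation here is schematic and mine, not exactly that of the 1976 paper), here is the gist of it for first-in-first-out queues over an arbitrary set G of values:

    new: → QUEUE
    put: QUEUE × G → QUEUE
    item: QUEUE ⇸ G
    remove: QUEUE ⇸ QUEUE
    empty: QUEUE → BOOLEAN

    empty (new) = True
    empty (put (q, x)) = False
    item (put (q, x)) = x if empty (q), item (q) otherwise
    remove (put (q, x)) = new if empty (q), put (remove (q), x) otherwise

The crossed arrows mark partial functions, applicable only to non-empty queues (their preconditions). The last two axioms are what make the structure first-in-first-out; a stack differs only in those two axioms.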

Language mechanisms: genericity with inheritance

Every once in a while I get to referee a paper that starts “Generics, as introduced in Java…” Well, let’s get some perspective here. Eiffel from its introduction in 1985 combined genericity and inheritance. Initially, C++ users and designers claimed that genericity was not needed in an OO context and the language did not have it; then they introduced templates. Initially, the designers of Java claimed (around 1995) that genericity was not needed, and the language did not have it; a few years later Java got generics. Initially, the designers of C# (around 1999) claimed that genericity was not needed, and the language did not have it; a few years later C# and .NET got generics.

Genericity existed before Eiffel of course; what was new was the combination with inheritance. I had been influenced by work on generic modules by a French researcher, Didier Bert, which I believe influenced the design of Ada as well; Ada was the language that brought genericity to a much broader audience than the somewhat confidential languages that had such a mechanism before. But Ada was not object-oriented (it only had modules, not classes). I was passionate about object-oriented programming (at a time when it was generally considered, by the few people who had heard of it, an esoteric academic pursuit). I started — in the context of an advanced course I was teaching at UC Santa Barbara — an investigation of how the two mechanisms relate to each other. The results were a paper at the first OOPSLA in 1986, Genericity versus Inheritance, and the design of the Eiffel type system, with a class mechanism, inheritance (single and multiple), and genericity, carefully crafted to complement each other.

With the exception of Trellis-Owl, a design from Digital Equipment Corporation also presented at the same OOPSLA (which never gained significant usage), there were no other OO languages with both mechanisms for several years after the Genericity versus Inheritance paper and the implementation of genericity with inheritance in Eiffel, available from 1986 on. Eiffel also introduced, as far as I know, the concept of constrained genericity, the second basic mechanism for combining genericity with inheritance, described in Eiffel: The Language (Prentice Hall, 1992, section 10.8) and discussed again in OOSC2 (section 16.4 and throughout). Similar mechanisms are present in many languages today.
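Constrained genericity, in sketch form (a toy class invented for the occasion):

    class MINIMUM_FINDER [G -> COMPARABLE]
    feature
        smaller (a, b: G): G
                -- The smaller of `a' and `b'.
            do
                if a < b then
                    Result := a
                else
                    Result := b
                end
            end
    end

MINIMUM_FINDER [STRING] is a valid type because STRING inherits from COMPARABLE and hence provides “<”; an actual generic parameter whose type does not conform to COMPARABLE would be rejected at compile time.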

It was not always so. I distinctly remember people bringing their friends to our booth at some conference in the early nineties, for the sole purpose of having a good laugh with them at our poster advertising genericity with inheritance. (“What is this thing they have and no one else does? Generi-sissy-tee? Hahaha.”). A few years later, proponents of Java were pontificating that no serious language needs generics.

It is undoubtedly part of the cycle of invention (there is a Schopenhauer citation on this, actually the only thing from Schopenhauer’s philosophy that I ever understood [D]) that people at some point will laugh at you; if it did brighten their day, why would the inventor deny them one of the little pleasures of life? But in terms of who laughs last, along the way C++ got templates, Java got generics, C# finally did too, and nowadays all typed OO languages have something of the sort.

Language mechanisms: multiple inheritance

Some readers will probably have been told that multiple inheritance is a bad thing, and hence will not count it as a contribution, but if done properly it provides a major abstraction mechanism, useful in many circumstances. Eiffel showed how to do multiple inheritance right by clearly distinguishing between features (operations) and their names, defining a class as a finite mapping between names and features, and using renaming to resolve any name clashes.
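In sketch form (class names invented for the occasion):

    class FRENCH_TAXPAYER
    feature
        tax_id: INTEGER
    end

    class US_TAXPAYER
    feature
        tax_id: INTEGER
    end

    class DUAL_TAXPAYER
    inherit
        FRENCH_TAXPAYER
            rename tax_id as french_tax_id end
        US_TAXPAYER
            rename tax_id as us_tax_id end
    end

The two inherited features remain distinct in DUAL_TAXPAYER; the name clash simply disappears, with no need for ambiguity rules or silent overriding.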

Multiple inheritance was made possible by an implementation innovation: discovering a technique (widely imitated since, including in single-inheritance contexts) to implement dynamic binding in constant time. It was universally believed at the time that multiple inheritance had a strong impact on performance, because dynamic binding implied a run-time traversal of the class inheritance structure, already bad enough for single inheritance where the structure is a tree, but prohibitive with multiple inheritance for which it is a directed acyclic graph. From its very first implementation in 1986 Eiffel used what is today known as a virtual table technique which guarantees constant-time execution of routine (method) calls with dynamic binding.

Language mechanisms: safe GC through strong static typing

Simula 67 implementations did not have automatic garbage collection, and neither did implementations of C++. The official excuse in the C++ case was methodological: C programmers are used to exerting manual control of memory usage. But the real reason was a technical impossibility resulting from the design of the language: compatibility with C precludes the provision of a good GC.

More precisely, of a sound and complete GC. A GC is sound if it will only reclaim unreachable objects; it is complete if it will reclaim all unreachable objects. With a C-based language supporting casts (e.g. between integers and pointers) and pointer arithmetic, it is impossible to achieve soundness if we aim at a reasonable level of completeness: a pointer can masquerade as an integer, only to be cast back into a pointer later on, but in the meantime the garbage collector, not recognizing it as a pointer, may have wrongly reclaimed the corresponding object. Catastrophe.

It is only possible in such a language to have a conservative GC, meaning that it renounces completeness. A conservative GC will treat as a pointer any integer whose value could possibly be a pointer (because it lies between the bounds of the program’s data addresses in memory). Then, out of precaution, the GC will refrain from reclaiming the objects at these addresses even if they appear unreachable.

This approach makes the GC sound, but it is only a heuristic, and it inevitably loses completeness: every once in a while it will fail to reclaim some dead (unreachable) objects. The result is a program with memory leaks — usually unacceptable in practice, particularly for long-running or continuously running programs where the leaks inexorably accumulate until the program starts thrashing and then runs out of memory.

Smalltalk, like Lisp, made garbage collection possible, but was not a typed language and missed out on the performance benefits of treating simple values like integers as a non-OO language would. Although in this case I do not at the moment have a specific bibliographic reference, I believe that it is in the context of Eiffel that the close connection between strong static typing (avoiding mechanisms such as casts and pointer arithmetic) and the possibility of sound and complete garbage collection was first clearly explained. It was explained in particular around 1990 in a meeting with some of the future designers of Java, which uses a similar approach, also taken over later on by C#.

By the way, no one will laugh at you today for considering garbage collection as a kind of basic human right for programmers, but for a long time the very idea was quite sulfurous, and advocating it subjected you to a lot of scorn. Here is an extract of the review I got when I submitted the first Eiffel paper to IEEE Transactions on Software Engineering:

Systems that do automatic garbage collection and prevent the designer from doing his own memory management are not good systems for industrial-strength software engineering.

Famous last words. Another gem from another reviewer of the same paper:

I think time will show that inheritance (section 1.5.3) is a terrible idea.

Wow! I wish the anonymous reviewers would tell us what they think today. Needless to say, the paper was summarily rejected. (It later appeared in the Journal of Systems and Software — as [82] in the publication list — thanks to the enlightened views of Robert Glass, the founding editor.)

Language mechanisms: void safety

Void safety is a property of a language design that guarantees the absence of the plague of null pointer dereferencing.

The original idea came (as far as I know) from work at Microsoft Research that led to the design of a research language called C-omega; the techniques were not transferred to a full-fledged programming language. Benefiting from the existence of this proof of concept, the Eiffel design was reworked to guarantee void safety, starting from my 2005 ECOOP keynote paper (Attached Types) and reaching full type safety a few years later. This property of the language was mechanically proved in a 2016 ETH thesis by A. Kogtenkov.

Today all significant Eiffel development produces void-safe code. As far as I know this was a first among production programming languages and Eiffel remains the only production language to provide a guarantee of full void-safety.

This mechanism, carefully crafted (hint: the difficult part is initialization), is among those of which I am proudest, because in the rest of the programming world null pointer dereferencing is a major plague, threatening at any moment to crash the execution of any program that uses pointers or references. For Eiffel users it is gone.
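To give the flavor of the mechanism (a minimal sketch; the actual rules, in particular those governing initialization of attributes, are more involved):

    class VOID_SAFETY_DEMO
    feature
        print_length (s: detachable STRING)
                -- Print the length of `s', or a marker if there is no string.
            do
                if attached s as existing then
                    print (existing.count)
                else
                    print ("(no string)")
                end
            end
    end

A direct call s.count would be rejected at compile time, since s is declared detachable and might be void; within the object test the compiler knows that existing is attached.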

Language mechanisms: agents/delegates/lambdas

For a long time, OO programming languages did not have a mechanism for defining objects wrapping individual operations. Eiffel’s agent facility was the first such mechanism, or among the very first together with the roughly contemporaneous but initially much more limited delegates of C#. The 1999 paper From calls to agents (with P. Dubois, M. Howard, M. Schweitzer and E. Stapf, [196] in the list) was as far as I know the first description of such a construct in the scientific literature.
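A minimal sketch of the mechanism (class and feature names invented for the occasion; the exact generic parameters of the agent types have varied across language versions):

    class AGENT_DEMO
    feature
        show (i: INTEGER)
                -- Print `i' followed by a space.
            do
                print (i)
                print (" ")
            end

        apply_to_all (values: LIST [INTEGER]; action: PROCEDURE [TUPLE [INTEGER]])
                -- Call `action' on every element of `values'.
            do
                across values as c loop
                    action.call ([c.item])
                end
            end

        show_all (values: LIST [INTEGER])
                -- Print every element of `values'.
            do
                apply_to_all (values, agent show)
            end
    end

The expression agent show is an object wrapping the operation show; it can be stored, passed around and called later with whatever arguments become available.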

Language mechanisms: concurrency

The 1993 Communications of the ACM paper on Systematic Concurrent Object-Oriented Programming [136] was certainly not the first concurrency proposal for OO programming (there had been pioneering work reported in particular in the 1987 book edited by Tokoro and Yonezawa), but it innovated in offering a completely data-race-free model, still a rarity today (think for example of the multi-threading mechanisms of dominant OO languages).

SCOOP, as it came to be called, was implemented a few years later and is today a standard part of Eiffel.
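A typical sketch (assuming a class ACCOUNT with the obvious balance, deposit and withdraw features):

    class BANK
    feature
        transfer (source, target: separate ACCOUNT; amount: INTEGER)
                -- Move `amount' from `source' to `target'.
            require
                enough: source.balance >= amount
            do
                source.withdraw (amount)
                target.deposit (amount)
            end
    end

Because the two arguments are declared separate, the body executes only once both objects have been reserved for this call, and the precondition serves as a wait condition; no other processor can interleave operations on source and target during the transfer, which is how the model excludes data races.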

Language mechanisms: selective exports

Information hiding, as introduced by Parnas in his two seminal 1972 articles, distinguishes between public and secret features of a module. The first OO programming language, Simula 67, had only these two possibilities for classes and so did Ada for modules.

In building libraries of reusable components I realized early on that we need a more fine-grained mechanism. For example if class LINKED_LIST uses an auxiliary class LINKABLE to represent individual cells of a linked list (each with a value field and a “right” field containing a reference to another LINKABLE), the features of LINKABLE (such as the operation to reattach the “right” field) should not be secret, since LINKED_LIST needs them; but they should also not be generally public, since we do not want arbitrary client objects to mess around with the internal structure of the list. They should be exported selectively to LINKED_LIST only. The Eiffel syntax is simple: declare these operations in a clause of the class labeled “feature {LINKED_LIST}”.
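In sketch form (simplified; the real library classes have more features and a slightly more general export status):

    class LINKABLE [G]
    feature -- Access
        item: detachable G
                -- Value held by this cell.
        right: detachable LINKABLE [G]
                -- Next cell, if any.
    feature {LINKED_LIST} -- Implementation
        put_right (other: detachable LINKABLE [G])
                -- Attach `other' as the next cell.
            do
                right := other
            ensure
                chained: right = other
            end
    end

Any class other than LINKED_LIST (and its descendants) that tries to call put_right gets a compile-time error.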

This mechanism, known as selective exports, was introduced around 1989 (it is specified in full in Eiffel: The Language, from 1992, but was in the Eiffel manuals earlier). I think it predated the C++ “friends” mechanism, which serves a similar purpose (maybe someone with knowledge of the history of C++ has the exact date). Selective exports are more general than the friends facility and similar ones in other OO languages: specifying a class as a friend means it has access to all your internals. This solution is too coarse-grained. Eiffel’s selective exports make it possible to define the export rights of each individual operation (including attributes/fields).

Language mechanisms and implementation: serialization and schema evolution

I did not invent serialization. As a student at Stanford in 1974 I had the privilege, at the AI lab, of using SAIL (Stanford Artificial Intelligence Language). SAIL was not object-oriented but included many innovative ideas; it was far ahead of its time, especially in terms of the integration of the language with (what was not yet called) its IDE. One feature of SAIL with which one could fall in love at first sight was the possibility of selecting an object and having its full dependent data structure (the entire subgraph of the object graph reached by following references from the object, recursively) stored into a file, for retrieval at the next session. After that, I never again wanted to live without such a facility, but no other language and environment had it.

Serialization was almost the first thing we implemented for Eiffel: the ability to write object.store (file) to have the entire structure from object stored into file, and the corresponding retrieval operation. OOSC1 (section 15.5) presents these mechanisms. Simula and (I think) C++ did not have anything of the sort; I am not sure about Smalltalk. Later on, of course, serialization mechanisms became a frequent component of OO environments.

Eiffel remained innovative by tackling the difficult problems: what happens when you try to retrieve an object structure and some classes have changed? Only with a coherent theoretical framework as provided in Eiffel by Design by Contract can one devise a meaningful solution. The problem and our solutions are described in detail in OOSC2 (the whole of chapter 31, particularly the section entitled “Schema evolution”). Further advances were made by Marco Piccioni in his PhD thesis at ETH and published in joint papers with him and M. Oriol, particularly [352].

Software engineering: primacy of code

Agile methods are widely and properly lauded for emphasizing the central role of code, against designs and other non-executable artifacts. By reading the agile literature you might be forgiven for believing that no one brought up that point before.

Object Success (1995) makes the argument very clearly. For example, chapter 3, page 43:

Code is to our industry what bread is to a baker and books to a writer. But with the waterfall code only appears late in the process; for a manager this is an unacceptable risk factor. Anyone with practical experience in software development knows how many things can go wrong once you get down to code: a brilliant design idea whose implementation turns out to require tens of megabytes of space or minutes of response time; beautiful bubbles and arrows that cannot be implemented; an operating system update, crucial to the project which comes five weeks late; an obscure bug that takes ages to be fixed. Unless you start coding early in the process, you will not be able to control your project.

Such discourse was subversive at the time; the wisdom in software engineering was that you need to specify and design a system to death before you even start coding (otherwise you are just a messy “hacker” in the sense this word had at the time). No one else in respectable software engineering circles was, as far as I know, pushing for putting code at the center, the way the above extract does.

Several years later, agile authors started making similar arguments, but I don’t know why they never referenced this earlier exposition, which still today I find not too bad. (Maybe they decided it was more effective to have a foil, the scorned Waterfall, and to claim that everyone else before was downplaying the importance of code, but that was not in fact everyone.)

Just to be clear, Agile brought many important ideas that my publications did not anticipate; but this particular one I did.

Software engineering: the roles of managers

Extreme Programming and Scrum have brought new light on the role of managers in software development. Their contributions have been important and influential, but here too they were for a significant part prefigured by a long discussion, altogether two chapters, in Object Success (1995).

To realize this, it is enough to read the titles of some of the sections in those chapters, describing roles for managers (some universal, some for a technical manager): “risk manager”, “interface with the rest of the world” (very scrummy!), “protector of the team’s sanity”, “method enforcer” (think Scrum Master), “mentor and critic”. Again, as far as I know, these were original thoughts at the time; the software engineering literature for the most part did not talk about these issues.

Software engineering: outsourcing

As far as I know the 2006 paper Offshore Development: The Unspoken Revolution in Software Engineering was the first to draw attention, in the software engineering community, to the peculiar software engineering challenges of distributed and outsourced development.

Software engineering: automatic testing

The AutoTest project (with many publications, involving I. Ciupa, A. Leitner, Y. Wei, M. Oriol, Y. Pei, M. Nordio and others) was not the first to generate tests automatically by creating numerous instances of objects and calling applicable operations (it was preceded by Korat at MIT), but it was the first one to apply this concept with Design by Contract mechanisms (without which it is of little practical value, since one must still produce test oracles manually) and the first to be integrated in a production environment (EiffelStudio).

Software engineering: make-less system building

One of the very first decisions in the design of Eiffel was to get rid of Make files.

Feldman’s Make had of course been a great innovation. Before Make, programmers had to produce executable systems manually by executing sequences of commands to compile and link the various source components. Make enabled them instead to define dependencies between components in a declarative way, resulting in a partial order; it then performed a topological sort to produce the sequence of commands. But preparing the list of dependencies remains a tedious task, particularly error-prone for large systems.

I decided right away in the design of Eiffel that we would never force programmers to write such dependencies: they would be automatically extracted from the code, through an exhaustive analysis of the dependencies between modules. This idea was present from the very first Eiffel report in 1985 (reference [55] in the publication list): Eiffel programmers never need to write a Make file or equivalent (other than for non-Eiffel code, e.g. C or C++, that they want to integrate); they just click a Compile button and the compiler figures out the steps.

Behind this approach was a detailed theoretical analysis of possible relations between modules in software development (in many programming languages), published as the “Software Knowledge Base” at ICSE in 1985. That analysis was also quite instructive and I would like to return to this work and expand it.

Educational techniques: objects first

Towards an Object-Oriented Curriculum ( TOOLS conference, August 1993, see also the shorter JOOP paper in May of the same year) makes a carefully argued case for what was later called the Objects First approach to teaching programming. I would be interested to know if there are earlier publications advocating starting programming education with an OO language.

The article also advocated for the “inverted curriculum”, a term borrowed from work by Bernie Cohen about teaching electrical engineering. It was the first transposition of this concept to software education. In the article’s approach, students are given program components to use, then little by little discover how they are made. This technique met with some skepticism and resistance since the standard approach was to start from the very basics (write trivial programs), then move up. Today, of course, many introductory programming courses similarly provide students from day one with a full-fledged set of components enabling them to produce significant programs.

More recent articles on similar topics, taking advantage of actual teaching experience, are The Outside-In Method of Teaching Programming (2003) and The Inverted Curriculum in Practice (at ICSE 2006, with Michela Pedroni). The culmination of that experience is the textbook Touch of Class from 2009.

Educational techniques: Distributed Software Projects

I believe our team at ETH Zurich (including among others M. Nordio, J. Tschannen, P. Kolb and C. Estler and in collaboration with C. Ghezzi, E. Di Nitto and G. Tamburrelli at Politecnico di Milano, N. Aguirre at Rio Cuarto and many others in various universities) was the first to devise,  practice and document on a large scale (see publications and other details here) the idea of an educational software project conducted in common by student groups from different universities. It yielded a wealth of information on distributed software development and educational issues.

Educational techniques: Web-based programming exercises

There are today a number of cloud-based environments supporting the teaching of programming by enabling students to compile and test their programs on the Web, benefiting from a prepared environment (so that they don’t have to download any tools or prepare control files) and providing feedback. One of the first — I am not sure about absolute precedence — and still a leading one, used by many universities and applicable to many programming languages, is Codeboard.

The main developer, in my chair at ETH Zurich, was Christian Estler, supported in particular by M. Nordio and M. Piccioni, so I am only claiming a supporting role here.

Educational techniques: key CS/SE concepts

The 2001 paper Software Engineering in the Academy did a good job, I think, of defining the essential concepts to teach in a proper curriculum (part of what Jeannette Wing’s 2006 paper called Computational Thinking).

Program verification: agents (delegates etc.)

Reasoning about Function Objects (ICSE 2010, with M. Nordio, P. Müller and J. Tschannen) introduced verification techniques for objects representing functions (such as agents, delegates etc., see above) in an OO language. Not sure whether there were any such techniques before.

Specification languages: Z

The Z specification language has been widely used for formal development, particularly in the UK. It is the design of J-R Abrial. I may point out that I was a coauthor of the first publication on Z in English (1980),  describing a version that preceded the adaptation to a more graphical-style notation done later at Oxford. The first ever published description of Z, pertaining to an even earlier version, was in French, in my book Méthodes de Programmation (with C. Baudoin), Eyrolles, 1978, running over 15 pages (526-541), with the precise description of a refinement process.

Program verification: exceptions

Largely coming out of the PhD thesis of Martin Nordio, A Sound and Complete Program Logic for Eiffel (TOOLS 2009) introduces rules for dealing with exceptions in a Hoare-style verification framework.

Program verification: full library, and AutoProof

Nadia Polikarpova’s thesis at ETH, aided by the work of Carlo Furia and Julian Tschannen (they were the major contributors and my participation was less important), was as far as I know the first to produce a full functional verification of an actual production-quality reusable library. The library is EiffelBase 2, covering fundamental data structures.

The verification relied on the AutoProof prover, built by the same team and itself based on Microsoft Research’s Boogie and Z3 engines; AutoProof is available today, as a still experimental tool, through its Web interface (see here).

More

There are more concepts worthy of being included here, but I will stop for today.

Notes

[A] One point of divergence between usual presentations of the substitution principle and the view in OOSC and my other publications is the covariance versus contravariance of routine argument types. It reflects a difference of views as to what the proper policy (both mathematically sound and practically usable) should be.

[B]  The GoF book does not cite OOSC for the command or bridge patterns. For the command pattern it cites (thanks to Adam Kosmaczewski for digging up the GoF text!) a 1985 SIGGRAPH paper by Henry Lieberman (There’s More to Menu Systems than Meets the Screen). Lieberman’s paper describes the notion of command object and mentions undoing in passing, but does not include the key elements of the command pattern (as explained in full in OOSC1), i.e. an abstract (deferred) command class with deferred procedures called (say) do_it and undo_it, then specific classes for each kind of command, each providing a specific implementation of those procedures, then a history list of commands supporting multiple-level undo and redo as explained in OOSC1. (Reading Lieberman’s paper with a 2021 perspective shows that it came tantalizingly close to the command pattern, but doesn’t get to it. The paper does talk about inheritance between command classes, but only to “define new commands as extensions to old commands”, not in the sense of a general template that can be implemented in many specific ways. And it does mention a list of objects kept around to enable recovery from accidental deletions, and states that the application can control its length, as is the case with a history list; but the objects in the list are not command objects, they are graphical and other objects that have been deleted.)
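In sketch form, the structure just described (the outline of the deferred class follows the description above; the concrete command is a toy example invented for this note):

    deferred class COMMAND
    feature
        do_it
                -- Carry out this command.
            deferred
            end

        undo_it
                -- Cancel the effect of the latest `do_it'.
            deferred
            end
    end

    class LINE_DELETION
    inherit
        COMMAND
    feature
        deleted_line: detachable STRING
                -- Saved text of the deleted line, for undoing.

        do_it
                -- Delete the current line, remembering its text in `deleted_line'.
            do
                -- (Record the line, then remove it from the document.)
            end

        undo_it
                -- Put the deleted line back at its original position.
            do
                -- (Reinsert `deleted_line'.)
            end
    end

The interactive system keeps a history list of COMMAND objects; CTRL-Z applies undo_it to the latest command not yet undone, CTRL-Y re-applies do_it, and the list can be traversed as far back as desired, giving multiple-level undo-redo.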

[C] Additional note on the command pattern: I vaguely remember seeing something similar to the OOSC1 technique in an article from a supplementary volume of the OOPSLA proceedings in the late eighties or early nineties, i.e. at the same time or slightly later, possibly from authors from Xerox PARC, but I have lost the reference.

[D] Correction: I just checked the source and learned that the actual Schopenhauer quote (as opposed to the one that is usually quoted) is different; it does not include the part about laughing. So much for my attempts at understanding philosophy.

 


The right forms of expression

If you want to know whether your_string has at least one upper-case character, you will write this in Eiffel:

if  ∃ c: your_string ¦ c.is_upper then

Such predicate-calculus boolean expressions, using a quantifier ∀ (“for all”) or ∃ (“there exists”) are becoming common in Eiffel code. They are particularly useful in Design by Contract assertions, making it possible to characterize deep semantic properties of the code and its data structures. For example a class invariant clause in a class I wrote recently states

from_lists_exist: ∀ tf: triples_from ¦ tf ≠ Void                        — [1]

meaning that all the elements, if any, of the list triples_from  are non-void (non-null). The notation is the exact one from mathematics. (Mathematical notation sometimes uses a dot in place of the bar, but the bar is clearer, particularly in an OO context where the dot has another use.)

Programming languages should support time-honored notations from mathematics. Reaching this goal has been a driving force in the evolution of Eiffel, but not as a concession to “featurism” (the gratuitous piling up of language feature upon feature). The language must remain simple and consistent; any new feature must find its logical place in the overall edifice.

The design of programming languages is a constant search for the right balance between rigor, simplicity, consistency, formal understanding, preservation of existing code, innovation and expressiveness. The design of Eiffel has understood the last of these criteria as implying support for established notations from mathematics, not through feature accumulation but by re-interpreting these notations in terms of the language’s fundamental concepts. A typical example is the re-interpretation of the standard mathematical notation a + b as simply an operator-based form for the object-oriented call a.plus (b), obtained by declaring “+” as an operator alias for the function plus in the relevant classes. There are many more such cases in today’s Eiffel. Quantifier expressions using ∀ and ∃ are the latest example.
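Going back for a moment to the operator-alias example: a toy class (invented for the occasion) shows how “+” acquires its meaning:

    class AMOUNT
    create
        make
    feature
        cents: INTEGER
                -- Value in cents.

        make (c: INTEGER)
                -- Initialize to `c' cents.
            do
                cents := c
            end

        plus alias "+" (other: AMOUNT): AMOUNT
                -- Sum of this amount and `other'.
            do
                create Result.make (cents + other.cents)
            ensure
                summed: Result.cents = cents + other.cents
            end
    end

With this declaration, a + b for two amounts is just another way of writing a.plus (b); no new language concept is involved, only established notation mapped onto an existing concept.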

These quantifier expressions are not a one-of-a-kind trick but just a different syntactic form for loops. Expressed in a more verbose form, the only one previously available, [1] would be:

across triples_from is tf all tf /= Void end                         — [2]

It is interesting to walk back the history further. [2] is itself a simplification of

across triples_from as tf all tf.item /= Void end               — [3]

where the “.item” has a good reason for being there, but that reason is irrelevant to a beginner. The earlier use of as in [3] is also the reason for the seemingly bizarre use of is in [2], which is only explainable by the backward compatibility criterion (code exists that uses as , which has a slightly different semantics from is), and will go away. But a few years ago the across loop variant did not exist and you would have had to write the above boolean expressions as

all_non_void (triples_from)

after defining a function

all_non_void (l: LIST [T]): BOOLEAN                                    — [4]
        — Are all the elements of `l’, if any, non-void?
    local
        pos: INTEGER
    do
        from
            pos := l.index
            l.start
            Result := True
        until
            not Result or l.after
        loop
            Result := l.item /= Void
            l.forth
        end
        l.go_i_th (pos)
    end

The road traveled from [4] to [1] is staggering. As we introduced new notations in the history of Eiffel, the reaction of the user community has sometimes ranged from cautious to negative. With the exception of a couple of quickly discarded ideas (such as the infamous and short-lived “!!” for creation), the new notations were generally adopted widely because they simplify people’s lives without adding undue complexity to the language. The key has been to avoid featurism and choose instead to provide two kinds of innovation:

  • Major conceptual additions, which elevate the level of abstraction of the language. A typical example was the introduction of agents, which provide the full power of functional programming in an object-oriented context; another was the SCOOP concurrency mechanism. There have been only a few such extensions, all essential.
  • Syntactical variants for existing concepts, allowing more concise forms obtained from traditional mathematical notation. The use of quantifier expressions as in [1] is the latest example.

Complaints of featurism still occasionally happen when people first encounter the new facilities, but they fade away quickly as people start using them. After writing a few expressions such as [1], no one wants to go back to any of the other forms.

These quantifier expressions using ∀ and ∃, as well as the “≠” not-equal sign for what used to be (and still commonly is) written “/=”, rely on Unicode. Eiffel started out when ASCII was the law of the land. (Or 8-bit extended ASCII, which does not help much since the extensions are rendered differently in different locales, i.e. the same 8-bit character code may mean something different in French and Swedish texts.) In recent years, Eiffel has made a quiet transition to full Unicode support. (Such support extends to manifest strings and operators, not to identifiers. The decision, which could be revisited, has been to keep the ASCII-only policy for identifiers to favor compatible use by programmers regardless of their mother tongues.) The use of Unicode considerably extends the expressive power of the language, in particular for scientific software which can — thanks to Eiffel’s mechanism for defining free operators — rely on advanced mathematical notations.

Unicode is great, but I hear the question: how in the world can we enter the corresponding symbols, since our keyboards are still ASCII plus some extensions?

It would be tedious to have to select from a list of special symbols (as you do when inserting a mathematical symbol in Microsoft Word or, for that matter, as I did when inserting the phrase “∀ and ∃” in the preceding paragraph using WordPress).

The answer lies in the interplay between the language and the development environment. EiffelStudio, like other modern IDEs, includes an automatic completion mechanism which lets you enter the beginning of a construct and will take care of filling in the rest. Already useful for complex structures (if you type “if” the tools will create the entire “if then else end” conditional structure for you to fill in), automatic completion will take care of inserting the appropriate Unicode symbols for you. Type for example “across”, then CTRL-Space to trigger completion, and the choices will include the “∀” and “∃” forms. You can see below how this works:

[Animated illustration: automatic completion of “across”, offering the ∀ and ∃ forms.]

Programming languages can be at the same time simple, easy to learn, consistent, and expressive. Start using quantifiers now!

Acknowledgments to the Ecma Technical Committee on Eiffel and the Eiffel Software team, particularly Alexander Kogtenkov (see his blog post here) and (for the completion mechanism and its animated illustration above) Jocelyn Fiat.


New video lecture: distances, invariants and recursion

I have started a new series of video lectures, which I call “Meyer’s Object-Oriented Classes” (MOOC). The goal is to share insights I have gained over the years on various aspects of programming and software engineering. Many presentations focus on a single area, such as coding, design, analysis, theoretical computer science (where you even find a division between “Theory A”, i.e. complexity, Turing machines and the like, and “Theory B”, i.e. semantics, type theory etc.), software project management, concurrency… I have an interest in all of these areas and try to explain the connections between them.

 

The first lecture describes the edit distance (Levenshtein) algorithm, explains its correctness by introducing the loop invariant, expands on that notion, then shows a recursive version, explores the connection with the original version (it’s the invariant), and probes further into another view of recursive computations, leading to the concept of dynamic programming.
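For readers who like to see code alongside the video, here is a minimal iterative version in Eiffel; it is my own transcription, not the text used in the lecture, and the comment in the outer loop states the invariant idea that the lecture develops:

levenshtein (source, target: STRING): INTEGER
        -- Levenshtein (edit) distance between source and target.
    local
        previous_row, current_row: ARRAY [INTEGER]
        i, j, deletion, insertion, substitution: INTEGER
    do
        create previous_row.make_filled (0, 0, target.count)
        create current_row.make_filled (0, 0, target.count)
        from j := 0 until j > target.count loop
            previous_row [j] := j
            j := j + 1
        end
        from i := 1 until i > source.count loop
            current_row [0] := i
                -- Invariant idea: previous_row [j] is the distance between the first
                -- i - 1 characters of source and the first j characters of target;
                -- current_row [j], once computed, plays the same role for i characters.
            from j := 1 until j > target.count loop
                deletion := previous_row [j] + 1
                insertion := current_row [j - 1] + 1
                if source [i] = target [j] then
                    substitution := previous_row [j - 1]
                else
                    substitution := previous_row [j - 1] + 1
                end
                current_row [j] := deletion.min (insertion).min (substitution)
                j := j + 1
            end
            previous_row := current_row.twin
            i := i + 1
        end
        Result := previous_row [target.count]
    end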

The videos are on YouTube and can be accessed from bertrandmeyer.com/levenshtein. (The general page for all lectures is at bertrandmeyer.com/mooc.)

The lecture is recorded in four segments of about 15 minutes each. In the future I will limit myself to 8-10 minutes. In fact I may record this lecture again; for example it would be better to have a live audience rather than talking to my screen, and in general the recording is somewhat low-tech, but circumstances dictate otherwise for now. If I do record it again I will also correct a few hiccups (at some point in the recording I notice a typo on a slide and fix it on the fly), but the content will remain the same.

Feedback is of course welcome. I hope to record about a lecture a week from now on.


Fan mail

Received this today from a heretofore unknown correspondent (I don’t often check Facebook Messenger but just happened to). Name removed (I am not sure he would want me to identify him), text translated from another language into English.

Hello, thanks for your book “Object-Oriented Software Construction” [read in a translation]. I read it after a horrible failure of a project on which I was a consultant. Another consultant was my technical leader. He was truly insufferable but I appreciated him for one reason: his code! I had never seen such “beautiful” program code; he was using principles of genericity, dynamic binding and others, which were totally unknown to me after the lousy programming education I had received. He had insulted me, telling me that I was no developer at all; I was deeply offended since I could feel that he was right. In spite of his unbearable personality I wanted to learn at his side, but he was far too selfish, seeing me just as a competitor, even if a pathetic one. He had a book on the side of his desk… and it’s that book that enabled me to understand where he had learned all those OO design methods. That book, obviously, was yours, and I acquired a copy for myself. I sincerely think that it should be used as textbook in educational institutions. And I really wanted to thank you for writing it. I hope to become a real developer thanks to you. So, thank you.

Note 1: Thanks to you.

Note 2: There is also the intro programming text, Touch of Class (Amazon page).

Note 3 (to my fan club): You are welcome to take advantage of the ideas and there is actually no compelling requirement to be, in addition, “insufferable”.


Getting a program right, in nine episodes

About this article: it originated as a series of posts on the Communications of the ACM blog. I normally repost such articles here. (Even though copy-paste is usually not good, there are three reasons for this duplication: the readerships seem to be largely disjoint; I can use better formatting, since their blog software is more restrictive than WordPress; and it is good to have a single repository for all my articles, including both those that originated on CACM and those that did not.) The series took the form of nine articles, where each of the first few ended with a quiz, to which the next one, published a couple of days later, provided an answer. Since all these answers are now available it would make no sense to use the same scheme, so I am instead publishing the whole thing as a single article with nine sections, slightly adapted from the original.

I have so far been too lazy to collect all the references into a single list, so numbers such as [1] refer to the list at the end of the corresponding section.


A colleague recently asked me to present a short overview of axiomatic semantics as a guest lecture in one of his courses. I have been teaching courses on software verification for a long time (see e.g. here), so I have plenty of material; but instead of just reusing it, I decided to spend a bit of time explaining why it is good to have a systematic approach to software verification. Here is the resulting tutorial.


 

1. Introduction and attempt #1

Say “software verification” to software professionals, or computer science students outside of a few elite departments, and most of them will think  “testing”. In a job interview, for example, show a loop-based algorithm to a programmer and ask “how would you verify it?”: most will start talking about devising clever test cases.

Far be it from me to berate testing [1]; in fact, I have always thought that the inevitable Dijkstra quote about testing (that it can only show the presence of errors, not their absence [2]), which everyone seems to take as an indictment and dismissal of testing, and which its author probably intended that way, is actually a fantastic advertisement for testing: a way to find bugs? Yes! Great! Where do I get it? But that is not the same as verifying the software, which means attempting to ascertain that it has no bugs.

Until listeners realize that verification cannot just mean testing, the best course material on axiomatic semantics or other proof techniques will not attract any interest. In fact, there is somewhere a video of a talk by the great testing and public-speaking guru James Whittaker where he starts by telling his audience not to worry, this won’t be a standard boring lecture, he will not start talking about loop invariants [3]! (Loop invariants are coming in this article, in fact they are one of its central concepts, but in later sections only, so don’t bring the sleeping bags yet.) I decided to start my lecture by giving an example of what happens when you do not use proper verification. More than one example, in fact, as you will see.

A warning about this article: there is nothing new here. I am using an example from my 1990 book Introduction to the Theory of Programming Languages (exercise 9.12). Going even further back, a 1983 “Programming Pearls” Communications of the ACM article by Jon Bentley [4] addresses the same example with the same basic ideas. Yet almost forty years later these ideas are still not widely known among practitioners. So consider this article as yet another tutorial on fundamental software engineering matters.

The tutorial is a quiz. We start with a program text:

from
    i := 1 ; j := n              -- Result initialized to 0.
until i = j loop
    m := (i + j) // 2            -- Integer division
    if t [m] ≤ x then i := m else j := m end
end
if x = t [i] then Result := i end

All variables are of integer type. t is an array of integers, sorted in non-decreasing order and indexed from 1 to n. We do not let any notation get between friends. A loop of the form from p until e loop q end executes p and then, repeatedly, stops if e (the exit condition) is true and otherwise executes q. (Like {p ; while not e do {q}} in some other notations.) “:=” is assignment, “=” equality testing. “//” is integer division, e.g. 6 // 3 = 7 // 3 = 2. Result is the name of a special variable whose final value will be returned by this computation (as part of a function, but we only look at the body). Result is automatically initialized to zero like all integer variables, so if execution does not assign anything to Result the function will return zero.

First question: what is this program trying to do?

OK, this is not the real quiz. I assume you know the answer: it is an attempt at “binary search”, which finds an element in the array, or determines its absence, in a sequence of about log₂ (n) steps, rather than n if we were to use sequential search. (Remember we assume the array is sorted.) Result should give us a position where x appears in the array, if it does, and otherwise be zero.

Now for the real quiz: does this program meet this goal?

The answer should be either yes or no. (If no, I am not asking for a correct version, at least not yet; in any case you can find some in the literature.) The situation is very much asymmetric; we might say it is Popperian:

  • To justify a no answer, a single example suffices: a particular array t and a particular value x for which the program fails to set Result as it should.
  • To justify a yes answer, we need to provide a credible argument that for every t and x the program sets Result as it should.

Notes to section 1

[1] The TAP conference series (Tests And Proofs), which Yuri Gurevich and I started, explores the complementarity between the two approaches.

[2] Dijkstra first published his observation in 1969. He did not need to consider the case of infinite input sets: even for a trivial finite program that multiplies two 32-bit integers, the number of cases to be examined, 2^64, is beyond human reach. More so today with 64-bit integers. Looking at this from a 2020 perspective, we may note that exhaustive testing of a finite set of cases, which Dijkstra dismissed as impossible in practice, is in fact exactly what the respected model checking verification technique does; not on the original program, but on a simplified (abstracted) version precisely designed to keep the number of cases tractable. Dijkstra’s argument remains valid, of course, for the original program if it is non-trivial. And model checking does not get us out of the woods: while we are safe if its “testing” finds no bug, if it does find one we have to ensure that the bug is a property of the original program rather than an artifact of the abstraction process.

[3] It is somewhere on YouTube, although I cannot find it right now.

[4] Jon Bentley: Programming Pearls: Writing Correct Programs, in Communications of the ACM, vol. 26, no. 12, pp. 1040-1045, December 1983, available for example here.


2. Attempt #2

Was program #1 correct? If so it should yield the correct answer. (An answer is correct if either Result is the index in t of an element equal to x, or Result = 0 and x does not appear in t.)

This program is not correct. To prove that it is not correct, a single example (test case) for which the program does not “yield the correct answer” suffices. Assume x = 1 and the array t has two elements both equal to zero (n = 2; remember that arrays are indexed from 1):

t = [0   0]

The successive values of the variables and expressions are:

                             m     i     j     i + j
After initialization:              1     2     3
i ≠ j, so enter loop:        1     1     2     3     -- First branch of the "if", since t [1] ≤ x,
                                                     -- so i gets assigned the value of m

But then neither of the values of i and j has changed, so the loop will repeat its body identically (taking the first branch) forever. It is not even that the program yields an incorrect answer: it does not yield an answer at all!
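If you would like to watch the non-termination rather than take my word for it, here is a small test harness; it is mine, not part of the original exercise, and it simply runs attempt #1 on this test case with a cap on the number of iterations:

attempt_1_demo
        -- Run attempt #1 on t = [0 0] with x = 1, giving up after 10 iterations.
    local
        t: ARRAY [INTEGER]
        i, j, m, n, x, steps: INTEGER
    do
        t := <<0, 0>> ; n := 2 ; x := 1
        from
            i := 1 ; j := n
        until
            i = j or steps = 10
        loop
            m := (i + j) // 2
            if t [m] <= x then i := m else j := m end
            steps := steps + 1
        end
        io.put_string ("After " + steps.out + " iterations: i = " + i.out + ", j = " + j.out + "%N")
            -- Prints: After 10 iterations: i = 1, j = 2
            -- No progress: without the cap, the loop would never terminate.
    end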

Note, in reference to the famous Dijkstra quote mentioned in the first section, that while it is common to pit tests against proofs, a test can actually be a proof: a test that fails is a proof that the program is incorrect. As valid as the most complex mathematical proof. It may not be the kind of proof we like most (our customers tend to prefer a guarantee that the program is correct), but it is a proof all right.

We are now ready for the second attempt:

--  Program attempt #2.

from
    i := 1 ; j := n
until i = j or Result > 0 loop
    m := (i + j) // 2            -- Integer division
    if t [m] ≤ x then
        i := m + 1
    elseif t [m] = x then
        Result := m
    else                         -- In this case t [m] > x
        j := m - 1
    end
end

Unlike the previous one this version always changes i or j, so we may hope it does not loop forever. It has a nice symmetry between i and j.

Same question as before: does this program meet its goal?


3. Attempt #3

The question about program #2, as about program #1, was: is it right?

Again no. A trivial example disproves it: n = 1, the array t contains a single element t [1] = 0, and x = 0. The initialization then sets both i and j to 1, so i = j holds on entry to the loop, which stops immediately; but Result is zero whereas it should be 1 (the place where x appears).

Here now is attempt #3; let us see if it fares better:

--  Program attempt #3.

from
    i := 1 ; j := n
until i = j loop
    m := (i + j + 1) // 2
    if t [m] ≤ x then
        i := m + 1
    else
        j := m
    end
end
if 1 ≤ i and i ≤ n then Result := i end
    -- If not, Result remains 0.

What about this one?


4. Attempt #4 (also includes 3′)

The first two program attempts were wrong. What about the third?

I know, you have every right to be upset at me, but the answer is no once more.

Consider a two-element array t = [0 0] (so n = 2, remember that our arrays are indexed from 1 by convention) and a search value x = 1. The successive values of the variables and expressions are:

                             m     i     j     i + j + 1
After initialization:              1     2     4
i ≠ j, so enter loop:        2     3     2     6     -- First branch of the "if", since t [2] < x
i ≠ j, enter loop again:     3     ⚠                 -- Out-of-bounds memory access!
                                                     -- (trying to access the non-existent t [3])

Oops!

Note that we could hope to get rid of the array overflow by initializing i to 0 rather than 1. This variant (version #3′) is left as a bonus question to the patient reader. (Hint: it is also not correct. Find a counter-example.)
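An aside on how the failure of attempt #3 manifests itself: with contracts enabled, accessing t [3] is not a silent memory error but a violation of the precondition of the array access routine, triggering an exception. The following sketch, again mine and purely illustrative, runs attempt #3 on the same test case and reports the failure through a rescue clause:

attempt_3_demo
        -- Run attempt #3 on t = [0 0] with x = 1; report the out-of-bounds access.
    local
        t: ARRAY [INTEGER]
        i, j, m, n, x: INTEGER
        failed: BOOLEAN
    do
        if failed then
            io.put_string ("Attempt #3 stopped by an out-of-bounds array access.%N")
        else
            t := <<0, 0>> ; n := 2 ; x := 1
            from i := 1 ; j := n until i = j loop
                m := (i + j + 1) // 2
                if t [m] <= x then i := m + 1 else j := m end
            end
        end
    rescue
        -- The invalid index makes t [m] violate its precondition; we note the
        -- failure and retry, which takes the reporting branch above.
        failed := True
        retry
    end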

OK, this has to end at some point. What about the following version (#4): is it right?

--  Program attempt #4.

from
    i := 0 ; j := n + 1
until i = j loop
    m := (i + j) // 2
    if t [m] ≤ x then
        i := m + 1
    else
        j := m
    end
end
if 1 ≤ i and i ≤ n then Result := i end


5. Attempt #5

Yes, I know, this is dragging on. But that’s part of the idea: witnessing how hard it is to get a program right if you just go by the seat of your pants. Maybe we can get it right this time?

Are we there yet? Is program attempt #4 finally correct?

Sorry to disappoint, but no. Consider a two-element array t = [0 0], so n = 2, and a search value x = 1 (yes, same counter-example as last time, although here we could also use x = 0). The successive values of the variables and expressions are:

                             m     i     j     i + j
After initialization:              0     3     3
i ≠ j, so enter loop:        1     2     3     5     -- First branch of the "if"
i ≠ j, enter loop again:     2     3     3     6     -- First branch again
i = j, exit loop

The condition of the final “if” is true, so Result gets the value 3. This is quite wrong, since there is no element at position 3, and in any case x does not appear in t.

But we are so close! Something like this should work, should it not?

So patience, patience, let us tweak it just one trifle more, OK?

--  Program attempt #5.

from
    i := 0 ; j := n
until i ≥ j or Result > 0 loop
    m := (i + j) // 2
    if t [m] < x then
        i := m + 1
    elseif t [m] > x then
        j := m
    else
        Result := m
    end
end

Does it work now?


6. Attempt #6

The question about program #5  was the same as before: is it right, is it wrong?

Well, I know you are growing more upset at me with each section, but the answer is still that this program is wrong. But the way it is wrong is somewhat specific; and it applies, in fact, to all previous variants as well.

This particular wrongness (a fancy word for “bug”) has a history. As I pointed out in the first section, there is a long tradition of using binary search to illustrate software correctness issues. A number of versions were published and proved correct, including one in the justly admired Programming Pearls series by Jon Bentley. Then in 2006 Joshua Bloch, then at Google, published a now legendary blog article [2] which showed that all these versions suffered from a major flaw: to obtain m, the approximate mid-point between i and j, they compute

(i + j) // 2

which, working on computer integers rather than mathematical integers, might overflow! This happens in a situation in which both i and j, and hence m as well, are well within the range of the computer’s representable integers, -2^n to 2^n (give or take 1), where n is typically 31 or, these days, 63, so that there is no conceptual justification for the overflow.

In the specification that I have used for this article, i starts at 1, so the problem will only arise for an array that occupies half of the memory or more, which is a rather extreme case (but still should be handled properly). In the general case, it is often useful to use arrays with arbitrary bounds (as in Eiffel), so we can have even a small array, with high indices, for which the computation will produce an overflow and bad results.

The Bloch gotcha is a stark reminder that in considering the correctness of programs we must include all relevant aspects and consider programs as they are executed on a real computer, not as we wish they were executed in an ideal model world.

(Note that Jon Bentley alluded to this requirement in his original article: while he did not explicitly mention integer overflow, he felt it necessary to complement his proof by the comment that “As laborious as our proof of binary search was, it is still unfinished by some standards. How would you prove that the program is free of runtime errors (such as division by zero, word overflow, or array indices out of bounds)?” Prescient words!)

It is easy to correct the potential arithmetic overflow bug: instead of (i + j) // 2, Bloch suggested we compute the average as

i + (j - i) // 2

which is the same from a mathematician’s viewpoint, and indeed will compute the same value if both variants compute one, but will not overflow if both i and j are within range.
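To put concrete numbers on this, here is a small sketch (the values are arbitrary and mine) that uses 64-bit arithmetic to show which of the two formulas stays within the 32-bit range:

midpoint_overflow_demo
        -- Compare the two midpoint computations for indices that are individually
        -- representable on 32 bits but whose sum is not.
    local
        i, j, naive, safe, max_32: INTEGER_64
    do
        max_32 := 2_147_483_647          -- Largest 32-bit signed integer (2^31 - 1).
        i := 1_500_000_000
        j := 2_000_000_000
        naive := i + j                   -- 3_500_000_000: already out of 32-bit range,
                                         -- so (i + j) // 2 on 32-bit integers would
                                         -- overflow before the division takes place.
        safe := i + (j - i) // 2         -- 1_750_000_000: comfortably in range.
        io.put_string ("i + j fits in 32 bits? " + (naive <= max_32).out + "%N")
        io.put_string ("i + (j - i) // 2 fits in 32 bits? " + (safe <= max_32).out + "%N")
    end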

So we are ready for version 6, which is the same as version 5 save for that single change:

--  Program attempt #6.

from
    i := 0 ; j := n
until i ≥ j or Result > 0 loop
    m := i + (j - i) // 2
    if t [m] < x then
        i := m + 1
    elseif t [m] > x then
        j := m
    else
        Result := m
    end
end

Now is probably the right time to recall the words by which Donald Knuth introduces binary search in the original 1973 tome on Sorting and Searching of his seminal book series The Art of Computer Programming:

Although the basic idea of binary search is comparatively straightforward, the details can be somewhat tricky, and many good programmers have done it wrong the first few times they tried.

Do you need more convincing? Be careful what you answer: I have more variants up my sleeve and can come up with many more almost-right-but-actually-wrong program attempts if you nudge me. But OK, even the best things must come to an end. This is not the last section yet, but that was the last program attempt. To the naturally following question in this running quiz, “is version 6 right or wrong”, I can provide the answer: it is, to the best of my knowledge, a correct program. Yes! [3].

But the quiz continues. Since the answers to the previous questions were all negative (the programs were not correct), it sufficed in each case to find one case for which the program did not behave as expected. Our next question is of a different nature: can you find an argument why version #6 is correct?

References for section 6

[1] (In particular) Jon Bentley: Programming Pearls — Writing Correct Programs, in Communications of the ACM, vol. 26, no. 12, December 1983, pages 1040-1045, available here.

[2] Joshua Bloch: Extra, Extra — Read All About It: Nearly All Binary Searches and Mergesorts are Broken, blog post, on the Google AI Blog, 2 June 2006, available here.

[3] A caveat: the program is correct barring any typos or copy-paste errors. I am starting from rigorously verified programs (see the next sections), but the blogging system’s UI and text processing facilities are not the best possible for entering precise technical text such as code. However carefully I check, I cannot rule out a clerical mistake, which of course would be corrected as soon as it is identified.


7. Using a program prover

Preceding sections presented candidate binary search algorithms and asked whether they are correct. “Correct” means something quite precise: that for an array t and a value x, the final value of the variable Result is a valid index of t (that is to say, is between 1 and n, the size of t) if and only if x appears at that index in t.
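In Eiffel this requirement can be written down as a routine contract. Here is a sketch; the names and assertion tags are mine, the array is assumed to be indexed from 1 as in the rest of this article, and the body shown is a plain sequential search, used only to make the contract concrete:

index_of (t: ARRAY [INTEGER]; x: INTEGER): INTEGER
        -- Index at which x appears in t, if it does; 0 otherwise.
    require
        sorted: across 1 |..| (t.count - 1) as k all t [k.item] <= t [k.item + 1] end
    local
        i: INTEGER
    do
        from i := 1 until i > t.count or Result > 0 loop
            if t [i] = x then Result := i end
            i := i + 1
        end
    ensure
        in_range: 0 <= Result and Result <= t.count
        found_if_positive: Result > 0 implies t [Result] = x
        zero_means_absent: Result = 0 implies
            across 1 |..| t.count as k all t [k.item] /= x end
    end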

The last section boldly stated that program attempt #6 was correct. The question was: why?

In the case of the preceding versions, which were incorrect, you could prove that they were incorrect, and I do mean prove, simply by exhibiting a single counter-example: a single t and x for which the program does not correctly set Result. Now that I am asserting the program to be correct, one example, or a million examples, do not suffice. In fact they are almost irrelevant. Test as much as you like and get correct results every time: you cannot get rid of the gnawing fear that if you had just tested one more time after the millionth test you would have produced a failure. Since the set of possible tests is infinite there is no solution in sight [1].

We need a proof.

I am going to explain that proof in the next section, but before that I would like to give you an opportunity to look at the proof by yourself. I wrote in one of the earlier sections that most of what I have to say was already present in Jon Bentley’s 1983 Programming Pearls contribution [2], but a dramatic change did occur in the four decades since: the appearance of automated proof systems that can handle significant, realistic programs. One such system, AutoProof, was developed at the Chair of Software Engineering at ETH Zurich [3] (key project members were Carlo Furia, Martin Nordio, Nadia Polikarpova and Julian Tschannen, with initial contributions by Bernd Schoeller) on the basis of the Boogie proof technology from Microsoft Research.

AutoProof is available for online use, and it turns out that one of the basic tutorial examples is binary search. You can go to the corresponding page and run the proof.

I am going to let you try this out (and, if you are curious, other online AutoProof examples as well) without too many explanations; those will come in the next section. Let me simply name the basic proof technique: loop invariant. A loop invariant is a property INV associated with a loop, such that:

  • A. After the loop’s initialization, INV will hold.
  • B. One execution of the loop’s body, if started with INV satisfied (and the loop’s exit condition not satisfied, otherwise we wouldn’t be executing the body!), satisfies INV again when it terminates.

This idea is of course the same as that of a proof by induction in mathematics: the initialization corresponds to the base step (proving that P (0) holds) and the body property to the induction step (proving that P (n + 1) follows from P (n)). With a traditional induction proof we deduce that the property (P (n)) holds for all integers. For the loop, we deduce that when the loop finishes its execution:

  • The invariant still holds, since executing the loop means executing the initialization once then the loop body zero or more times.
  • And of course the exit condition also holds, since otherwise we would still be looping.

That is how we prove the correctness of a loop: the conjunction of the invariant and the exit condition must yield the property that we seek (in the example, the property of Result relative to t and x stated above).

We also need to prove that the loop does terminate. This part involves another concept, the loop’s variant, which I will explain in the next section.

For the moment I will not say anything more and let you look at the AutoProof example page (again, you will find it here), run the verification, and read the invariant and other formal elements in the code.

To “run the verification” just click the Verify button on the page. Let me emphasize (and emphasize again and again and again) that clicking Verify will not run the code. There is no execution engine in AutoProof, and the verification does not use any test cases. It processes the text of the program as it appears on the page and below. It applies mathematical techniques to perform the proof; the core property to be proved is that the proposed loop invariant is indeed invariant (i.e. satisfies properties A and B above).

The program being proved on the AutoProof example page is version #6 from the last section, with different variable names. So far for brevity I have used short names such as i, j and m but the program on the AutoProof site applies good naming practices with variables called low, up, middle and the like. So here is that version again with the new variable names:

--  Program attempt #7 (identical to #6, with different variable names).

from
    low := 0 ; up := n
until low ≥ up or Result > 0 loop
    middle := low + ((up - low) // 2)
    if a [middle] < value then          -- The array is now called a rather than t
        low := middle + 1
    elseif a [middle] > value then
        up := middle
    else
        Result := middle
    end
end

This is exactly the algorithm text on the AutoProof page, the one that you are invited to let AutoProof verify for you. I wrote “algorithm text” rather than “program text” because the actual program text (in Eiffel) includes variant and invariant clauses which do not affect the program’s execution but make the proof possible.

Whether or not these concepts (invariant, variant, program proof) are completely new to you, do try the prover and take a look at the proof-supporting clauses. In the next section I will remove any remaining mystery.

Note and references for section 7

[1] Technically the set of possible [array, value] pairs is finite, but of a size defying human abilities. As I pointed out in the first section, the “model checking” and “abstract interpretation” verification techniques actually attempt to perform an exhaustive test anyway, after drastically reducing the size of the search space. That will be for some other article.

[2]  Jon Bentley: Programming Pearls: Writing Correct Programs, in Communications of the ACM, vol. 26, no. 12, pp. 1040-1045, December 1983, available for example here.

[3] The AutoProof page contains documentation and numerous article references.


8. Understanding the proof

The previous section invited you to run the verification on the AutoProof tutorial page dedicated to the example. AutoProof is an automated proof system for programs. This is just a matter of clicking  “Verify”, but more importantly, you should read the annotations added to the program text, particularly the loop invariant, which make the verification possible. (To avoid any confusion let me emphasize once more that clicking “Verify” does not run the program, and that no test cases are used; the effect is to run the verifier, which attempts to prove the correctness of the program by working solely on the program text.)

Here is the program text again, reverting for brevity to the shorter identifiers (the version on the AutoProof page has more expressive ones):

from
    i := 0 ; j := n
until i ≥ j or Result > 0 loop
    m := i + (j - i) // 2
    if t [m] < x then
        i := m + 1
    elseif t [m] > x then
        j := m
    else
        Result := m
    end
end

Let us now see what makes the proof possible. The key property is the loop invariant, which reads

A:   1  ≤ i  ≤ j  ≤ n + 1
B:   0  ≤ Result  ≤ n
C:   ∀ k: 1 .. i - 1  |  t [k] < x
D:   ∀ k: j .. n  |  t [k] > x
E:    (Result > 0)   ⇒   (t [Result] = x)

The notation is slightly different on the Web page, to adapt to the Eiffel language as it existed at the time the page was produced; in today’s Eiffel you can write the invariant almost as shown above. Long live Unicode, allowing us to use symbols such as ∀ (obtained not by typing them but through smart completion: you start typing “forall” and select the symbol that pops up), ⇒ for “implies”, and many others.

Remember that the invariant has to be established by the loop’s initialization and preserved by every iteration. The role of each of its clauses is as follows:

  • A: keep the indices in range.
  • B: keep the variable Result, whose final value will be returned by the function, in range.
  • C and D: eliminate index intervals in which we have determined that the sought value, x, does not appear. Before i, array values are smaller; starting at j, they are greater. So these two intervals, 1..i and j..n, cannot contain the sought value. The overall idea of the algorithm (and most other search algorithms) is to extend one of these two intervals, so as to narrow down the remaining part of 1..n where x may appear.
  • E: express that as soon as we find a positive (non-zero) Result, its value is an index in the array (see B) where x does appear.

Why is this invariant useful? The answer is that on exit it gives us what we want from the algorithm. The exit condition, recalled above, is

i ≥ j or Result > 0

Combined with the invariant, it tells us that on exit one of the following will hold:

  • Result > 0, but then because of E we know that x appears at position Result.
  • i ≥ j, in which case (because of A) i = j, and then C and D imply that x does not appear anywhere in t. In that case, because of E, it cannot be true that Result > 0; then because of B, Result must be zero.

What AutoProof proves, mechanically, is that under the function’s precondition (that the array is sorted):

  • The initialization ensures the invariant.
  • The loop body, assuming that the invariant is satisfied but the exit condition is not, ensures the loop invariant again after it executes.
  • The combination of the invariant and the exit condition ensures, as just explained, the postcondition of the function (the property that Result will either be positive and the index of an element equal to x, or zero with the guarantee that x appears nowhere in t).

Such a proof guarantees the correctness of the program if it terminates. We (and AutoProof) must prove separately that it does terminate. The technique is simple: find a “loop variant”, an integer quantity v  which remains non-negative throughout the loop (in other words, the loop invariant includes or implies v ≥ 0) and decreases on each iteration, so that the loop cannot continue executing forever. An obvious variant here is j – i + 1 (where the + 1 is needed because j – i may go down to -1 on the last iteration if x does not appear in the array). It reflects the informal idea of the algorithm: repeatedly decrease an interval i .. j – 1 (initially, 1 .. n) guaranteed to be such that x appears in t if and only if it appears at an index in that interval. At the end, either we already found x or the interval is empty, implying that x does not appear at all.
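To see how such clauses attach to the loop in Eiffel syntax, here is my own annotated rendition. It is not the exact text verified on the AutoProof page; in particular, to match clause A literally, it initializes i to 1 and j to n + 1, a slight departure from the transcription of attempt #6 given earlier:

from
    i := 1 ; j := n + 1
invariant
    index_range: 1 <= i and i <= j and j <= n + 1                       -- Clause A
    result_range: 0 <= Result and Result <= n                           -- Clause B
    smaller_on_left: across 1 |..| (i - 1) as k all t [k.item] < x end  -- Clause C
    greater_from_j: across j |..| n as k all t [k.item] > x end         -- Clause D
    found_if_set: Result > 0 implies t [Result] = x                     -- Clause E
until
    i >= j or Result > 0
loop
    m := i + (j - i) // 2
    if t [m] < x then
        i := m + 1
    elseif t [m] > x then
        j := m
    else
        Result := m
    end
variant
    (n - Result) + (j - i)
end

With assertion monitoring turned on, such clauses are checked at run time on every iteration; with a prover such as AutoProof, they are what gets established once and for all.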

A great reference on variants and the techniques for proving program termination is a Communications of the ACM article of 2011: [3].

The variant gives an upper bound on the number of iterations that remain at any time. In sequential search, j – i + 1 would be our best bet; but for binary search it is easy to show that  log(j – i + 1) is also a variant, extending the proof of correctness with a proof of performance (the key goal of binary search being to ensure a logarithmic rather than linear execution time).

This example is, I hope, enough to highlight the crucial role of loop invariants and loop variants in reasoning about loops. How did we get the invariant? It looks like I pulled it out of a hat. But in fact if we go the other way round (as advocated in classic books [1] [2]) and develop the invariant and the loop together the process unfolds itself naturally and there is nothing mysterious about the invariant.

Here I cannot resist quoting (thirty years on!) from my own book Introduction to the Theory of Programming Languages [4]. It has a chapter on axiomatic semantics (also known as Hoare logic, the basis for the ideas used in this discussion), which I just made available: see here [5]. Its exercise 9.12 is the starting point for this series of articles. Here is how the book explains how to design the program and the invariant [6]:

In the general case [of search, binary or not] we aim for a loop body of the form

m := "Some value in 1 .. n such that i ≤ m < j" ;
if t [m] ≤ x then
    i := m + 1
else
    j := m
end

It is essential to get all the details right (and easy to get some wrong):

  • The instruction must always decrease the variant j - i, by increasing i or decreasing j. If the definition of m specified just m ≤ j rather than m < j, the second branch would not meet this goal.
  •  This does not transpose directly to i: requiring i < m < j would lead to an impossibility when j – i is equal to 1. So we accept i ≤ m but then we must take m + 1, not m, as the new value of i in the first branch.
  •  The conditional’s guards are tests on t [m], so m must always be in the interval 1 .. n. This follows from the clause 0 ≤ i ≤ j ≤ n + 1 which is part of the invariant.
  •  If this clause is satisfied, then m ≤ n and m > 0, so the conditional instruction indeed leaves this clause invariant.
  • You are invited to check that both branches of the conditional also preserve the rest of the invariant.
  • Any policy for choosing m is acceptable if it conforms to the above scheme. Two simple choices are i  and j – 1; they lead to variants of the sequential search algorithm [which the book discussed just before binary search].

For binary search, m will be roughly equal to the average of i and j.

“Roughly” because we need an integer, hence the // (integer division).

In the last section, I will reflect further on the lessons we can draw from this example, and the practical significance of the key concept of invariant.

References and notes for section 8

[1] E.W. Dijkstra: A Discipline of Programming, Prentice Hall, 1976.

[2] David Gries: The Science of Programming, Springer, 1989.

[3] Byron Cook, Andreas Podelski and Andrey Rybalchenko: Proving Program Termination, in Communications of the ACM, vol. 54, no. 5, May 2011, pages 88-98, available here.

[4] Bertrand Meyer, Introduction to the Theory of Programming Languages, Prentice Hall, 1990. The book is out of print but can be found used, e.g. on Amazon. See the next entry for an electronic version of two chapters.

[5] Bertrand Meyer: Axiomatic Semantics, chapter 9 from [4], available here. Note that the PDF was reconstructed from an old text-processing system (troff); the figures could not be recreated and are missing. (One of these days I might have the patience to scan them from a book copy and add them. Unless someone wants to help.) I also put online, with the same caveat, chapter 2 on notations and mathematical basis: see here.

[6] Page 383 of [4] and [5]. The text is verbatim except a slight adaptation of the programming notation and a replacement of the variables: i in the book corresponds to i – 1 here, and j to j – 1. As a matter of fact I prefer the original conventions from the book (purely as a matter of taste, since the two are rigorously equivalent), but I changed here to the conventions of the program as it appears in the AutoProof page, with the obvious advantage that you can verify it mechanically. The text extract is otherwise exactly as in the 1990 book.

9. Lessons learned

What was this journey about?

We started with a succession of attempts that might have “felt right” but were in fact all wrong, each in its own way: giving the wrong answer in some cases, crashing (by trying to access an array outside of its index interval) in some cases, looping forever in some cases. Always “in some cases”,  evidencing the limits of testing, which can never guarantee that it exercises all the problem cases. A correct program is one that works in all cases. The final version was correct; you were able to prove its correctness with an online tool and then to understand (I hope) what lies behind that proof.

To show how to prove such correctness properties, I have referred throughout the series to publications from the 1990s (my own Introduction to The Theory of Programming Languages), the 1980s (Jon Bentley’s Programming Pearls columns, Gries’s Science of Programming), and even the 1970s (Dijkstra’s Discipline of Programming). I noted that the essence of my argument appeared in a different form in one of Bentley’s Communications articles. What is the same and what has changed?

The core concepts have been known for a long time and remain applicable: assertion, invariant, variant and a few others, although they are much better understood today thanks to decades of theoretical work to solidify the foundation. Termination also has a more satisfactory theory.

On the practical side, however, the progress has been momentous. Considerable engineering has gone into making sure that the techniques scaled up. At the time of Bentley’s article, binary search was typical of the kind of programs that could be proved correct, and the proof had to proceed manually. Today, we can tackle much bigger programs, and use tools to perform the verification.

Choosing binary search again as an example today has the obvious advantage that everyone can understand all the details, but should not be construed as representative of the state of the art. Today’s proof systems are far more sophisticated. Entire operating systems, for example, have been mechanically (that is to say, through a software tool) proved correct. In the AutoProof case, a major achievement was the proof of correctness [1] of an entire data structure (collections) library, EiffelBase 2. In that case, the challenge was not so much size (about 8,000 source lines of code), but the complexity of both:

  • The scope of the verification, involving the full range of mechanisms of a modern object-oriented programming language, with classes,  inheritance (single and multiple), polymorphism, dynamic binding, generics, exception handling etc.
  • The code itself, using sophisticated data structures and algorithms, involving in particular advanced pointer manipulations.

In both cases, progress has required advances on both the science and engineering sides. For example, the early work on program verification assumed a bare-bones programming language, with assignments, conditionals, loops, routines, and not much more. But real programs use many other constructs, growing ever richer as programming languages develop. To cover exception handling in AutoProof required both theoretical modeling of this construct (which appeared in [2]) and implementation work.

More generally, scaling up verification capabilities from the small examples of 30 years ago to the sophisticated software that can be verified today required the considerable effort of an entire community. AutoProof, for example, sits at the top of a tool stack relying on the Boogie environment from Microsoft Research, itself relying on the Z3 theorem prover. Many person-decades of work make the result possible.

[Figure: the AutoProof tool stack, built on the Boogie environment, itself built on the Z3 theorem prover]

Beyond the tools, the concepts are essential. One of them, loop invariants, has been illustrated in the final version of our program. I noted in the first section the example of a well-known expert and speaker on testing who found no better way to announce that a video would not be boring than “relax, we are not going to talk about loop invariants.” Funny perhaps, but unfair. Loop invariants are one of the most beautiful concepts of computer science. Not so surprisingly, because loop invariants are the application to programming of the concept of mathematical induction. According to the great mathematician Henri Poincaré, all of mathematics rests on induction; maybe he exaggerated, maybe not, but who would think of teaching mathematics without explaining induction? Teaching programming without explaining loop invariants is no better.

Below is an illustration (if you will accept my psychedelic diagram) of what a loop is about, as a problem-solving technique. Sometimes we can get the solution directly. Sometimes we identify several steps to the solution; then we use a sequence (A ; B; C). Sometimes we can find two (or more) different ways of solving the problem in different cases; then we use a conditional (if c then A else B end). And sometimes we can only get a solution by getting closer repeatedly, not necessarily knowing in advance how many times we will have to advance towards it; then, we use a loop.

[Figure: a loop as a problem-solving strategy, compared with sequences and conditionals]

We identify an often large (i.e. very general) area where we know the solution will lie; we call that area the loop invariant. The solution or solutions (there may be more than one) will have to satisfy a certain condition; we call it the exit condition. From wherever we are, we shoot into the invariant region, using an appropriate operation; we call it the initialization. Then we execute as many times as needed (maybe zero if our first shot was lucky) an operation that gets us closer to that goal; we call it the loop body. To guarantee termination, we must have some kind of upper bound of the distance to the goal, decreasing each time discretely; we call it the loop variant.

This explanation is only an illustration, but I hope it makes the ideas intuitive. The key to a loop is its invariant. As the figure suggests, the invariant is always a generalization of the goal. For example, in binary search (and many other search algorithms, such as sequential search), our goal is to find a position where either x appears or, if it does not, we can be sure that it appears nowhere. The invariant says that we have an interval with the same properties (either x appears at a position belonging to that interval or, if it does not, it appears nowhere). It obviously includes the goal as a special case: if the interval has length 1, it defines a single position.

An invariant should be:

  1. Strong enough that we can devise an exit condition which in the end, combined with the invariant, gives us the goal we seek (a solution).
  2. Weak enough that we can devise an initialization that ensures it (by shooting into the yellow area) easily.
  3. Tuned so that we can devise a loop body that, from a state satisfying the invariant, gets us to a new one that is closer to the goal.

In the example:

  1. The exit condition is simply that the interval’s length is 1. (Technically, that we have computed Result as the single interval element.) Then from the invariant and the exit condition, we get the goal we want.
  2. Initialization is easy, since we can just take the initial interval to be the whole index range of the array, which trivially satisfies the invariant.
  3. The loop body simply decreases the length of the interval (which can serve as loop variant to ensure termination). How we decrease the length depends on the search strategy; in sequential search, each iteration decreases the length by 1, correct although not fast, and binary search decreases it by about half.

The general scheme always applies. Every loop algorithm is characterized by an invariant. The invariant may be called the DNA of the algorithm.

To demonstrate the relevance of this principle, my colleagues Furia, Velder, and I published a survey paper [5] in ACM Computing Surveys describing the invariants of important algorithms in many areas of computer science, from search algorithms to sorting (all major algorithms), arithmetic (long integer addition, squaring), optimization and dynamic programming [6] (Knapsack, Levenshtein/Edit distance), computational geometry (rotating calipers), Web (Page Rank)… I find it pleasurable and rewarding to go deeper into the basis of loop algorithms and understand their invariants; like a geologist who does not stop at admiring the mountain, but gets to understand how it came to be.

Such techniques are inevitable if we want to get our programs right, the topic of this article. Even putting aside the Bloch average-computation overflow issue, I started with five program attempts, all of them friendly-looking but wrong in different ways. I could have continued fiddling with the details, following my gut feeling to fix the flaws and running more and more tests. Such an approach can be reasonable in some cases (if you have an algorithm covering a well-known and small set of cases), but will not work for non-trivial algorithms.

Newcomers to the concept of loop invariant sometimes panic: “this is all fine, you gave me the invariants in your examples, but how do I find my own invariants for my own loops?” I do not have a magic recipe (nor does anyone else), but there is no reason to be scared. Once you have understood the concept and examined enough examples (just a few of those in [5] should be enough), writing the invariant at the same time as you are devising a loop will come as second nature to you.

As the fumbling attempts in the first few sections should show, there is not much of an alternative. Try this approach. If you are reaching these final lines after reading what preceded them, allow me to thank you for your patience, and to hope that this rather long chain of reflections on verification will have brought you some new insights into the fascinating challenge of writing correct programs.

References

[1] Nadia Polikarpova, Julian Tschannen, and Carlo A. Furia: A Fully Verified Container Library, in Proceedings of 20th International Symposium on Formal Methods (FM 15), 2015. (Best paper award.)

[2] Martin Nordio, Cristiano Calcagno, Peter Müller and Bertrand Meyer: A Sound and Complete Program Logic for Eiffel, in Proceedings of TOOLS 2009 (Technology of Object-Oriented Languages and Systems), Zurich, June-July 2009, eds. M. Oriol and B. Meyer, Springer LNBIP 33, June 2009.

[3] Boogie page at MSR, see here for publications and other information.

[4] Z3 was also originally from MSR and has been open-sourced; one can get access to publications and other information from its Wikipedia page.

[5] Carlo Furia, Bertrand Meyer and Sergey Velder: Loop invariants: Analysis, Classification and Examples, in ACM Computing Surveys, vol. 46, no. 3, February 2014. Available here.

[6] Dynamic programming is a form of recursion removal, turning a recursive algorithm into an iterative one by using techniques known as “memoization” and  “bottom-up computation” (Berry). In this transformation, the invariant plays a key role. I will try to write this up some day as it is a truly elegant and illuminating explanation.


Notations you didn’t even know you could use

Consider the following expression:

∃ c: s   ¦   moisture (c) = soft

This is obviously mathematics. To express such a property in a programming language, you have to write a function containing a loop that iterates through the elements of s. Right?

Wrong. The above construct is valid Eiffel. It is a consequence of recent syntax extensions that retain all the simplicity and consistency of the language but take full advantage of Unicode. Of course you do not have Unicode characters such as ∃ on your keyboard, but EiffelStudio’s completion mechanism inserts them for you.

To see how this works, just read Alexander Kogtenkov’s recent blog post on the topic.

Note added 24 December 2020: you will find a longer exposition in a later article on this blog.


Talk on requirements at UC Santa Barbara tomorrow

I am giving a “distinguished lecture” at the University of California, Santa Barbara, January 10 (Friday, tomorrow) at 14. The title is A Comprehensive Approach to Requirements Engineering.

The abstract and rest of the information are here.

I will spend the last few minutes of the talk discussing other current developments (verification, concurrency).


This Wednesday in Nice: survey talk on the Eiffel method

The “Morgenstern Colloquium” at the University of Nice / INRIA Sophia Antipolis invited me to give a talk, next Wednesday (18 December) at 11 in Sophia Antipolis, in the aptly named* “Kahn Building”. The announcement appears here. I proposed various topics but (pleasant surprise) the organizers explicitly asked me to lecture about what I really want to talk about: the Eiffel approach. I will give a general presentation describing not specifically the language but the unified view of software construction embodied in Eiffel, from modeling to requirements to design, implementation and verification. Here is the abstract:

With society’s growing reliance on IT systems, the ability to write high-quality software is ever more critical. While a posteriori verification techniques have their role, there is no substitute for methods and tools that provide built-in quality (“correctness by construction”) and scale up to very large systems. For several decades my colleagues and I have been building such a method, based in particular on the concept of Design by Contract, the associated tools and the supporting language, Eiffel. The scope is wide, encompassing all aspects of the software development process, from requirements and design to implementation and verification. I will present an overview of the approach, show what it can yield, and discuss remaining open issues.

This talk is meant for everyone, whether from industry or academia, with an interest in practical techniques for engineering high-quality software.

No registration is required. The presentation will be in English.

Note

*Gilles Kahn, a brilliant computer scientist who died too young, was for a while director of INRIA.


Publications on CS/SE/informatics education

Recently I had a need to collect my education-related publications, so I went through my publication list and extracted items devoted to issues of learning computer science (informatics) and software engineering. There turned out to be far more than I expected; I did not think of myself as primarily an education researcher but it seems I am that too. (Looking around in my fields and institutions I don’t immediately see other research computer scientists with a comparable record at SIGCSE, ITiCSE and other top CS education publications.)

Without presuming that the list will be of interest, I am reproducing it below for the record. Everything comes from my publication list here, which contains more information, in particular a descriptive paragraph or two for every single publication. (Note: that consolidated list is up to date until 2017 only, because it is produced by a script and one of the supporting system tools (not an Eiffel program, thanks for asking) has stopped functioning on the latest OS version on the server. I have not found the time to fix it yet. There are a good dozen publications missing for 2018 and 2019.)

I have also included PhD theses in education. (Whole list of PhD theses supervised here.)

The topics include among others, in approximate chronological order (although the list below is in the reverse order):

    • Early experience teaching modern programming concepts in both industry and universities.
    • In the nineties, when I was full time at Eiffel Software, the development of a general framework for teaching programming. This was written from the safe position of someone in industry advising academic colleagues on what to do (usually the advice goes the other way). I did have, however, the opportunity to practice my preaching in short stints at the University of Technology, Sydney and particularly Monash University. The concept of the Inverted Curriculum (also known as “Outside-In”) dates back to that period, with objects first (actually classes) and contracts first too.
    • When I joined ETH, a general paper on the fundamental goals and concepts of software engineering education, “Software Engineering in the Academy”, published in IEEE Computer.
    • At ETH, putting the Inverted Curriculum into practice, with 14 consecutive sessions of the introductory programming course for all computer science students, resulting in the Touch of Class textbook and a number of papers coming out of our observations. An estimated 6000 students took the course. A variant of it has also been given several times at Innopolis University.
    • A theory of how to structure knowledge for educational purposes, leading to the notion of “Truc” (Teachable, Reusable Unit of Cognition).
    • The development by Michela Pedroni of the Trucstudio environment, similar in its form to an IDE but devoted, instead of the development of programs, to the visual development of courses, textbooks, curricula etc.
    • Empirical work by Marie-Hélène Ng Cheong Vee (Nienaltowski) and Michela Pedroni on what beginners understand easily and what they do not, for example depending on the phrasing of compiler error messages.
    • Other empirical work, by Michela Pedroni and Manuel Oriol, on the prior knowledge of entering computer science students.
    • The DOSE course (Distributed and Outsourced Software Engineering) ran for several years a student project done by joint student teams from several cooperating universities, including Politecnico di Milano which played a key role along with us. It enabled many empirical studies on the effect on software development of having geographically distributed teams. People who played a major role in this effort are, at ETH, Martin Nordio, Julian Tschannen and Christian Estler and, at Politecnico, Elisabetta di Nitto, Giordano Tamburrelli and Carlo Ghezzi.
    • Several MOOCs, among the first at ETH, on introductory computing and agile methods. They do not appear below because they are not available at the moment on the EdX site (I do not know why and will try to get them reinstated). The key force there was Marco Piccioni. MOOCs are interesting for many reasons; they are a substitute neither for face-to-face teaching nor for textbooks, but an interesting complement offering novel educational possibilities. Our programming MOOCs are particularly innovative since (thanks to Codeboard, see below) they provide the opportunity to compile and run programs directly from the course exercise pages, compare the run’s result to correct answers for prepared tests, and get immediate feedback.
    • A comparative study of teaching effectiveness of two concurrency models, Eiffel SCOOP and JavaThreads (Sebastian Nanz, Michela Pedroni).
    • The development (Christian Estler with Martin Nordio) of the Codeboard system and site, still as far as I know the most advanced system for cloud support to teach programming, enabling students to compile, correct and run programs on the web, with support for various languages. Codeboard is used in the programming MOOCs.
    • A hint system (Paolo Antonucci, Michela Pedroni) to help students get progressive help, as in video games, when they stumble trying to write a program, e.g. with Codeboard.

Supervised PhD theses on education

The following three theses are devoted to educational topics (although many of the  other theses have educational aspects too):

Christian Estler, 2014, Understanding and Improving Collaboration in Distributed Software Development, available here.

Michela Pedroni, 2009, Concepts and Tools for Teaching Programming, available here.

Markus Brändle, 2006: GraphBench: Exploring the Limits of Complexity with Educational Software, available here. (The main supervisor in this case was Jürg Nievergelt.)

MOOCs (Massive Online Open Courses)

Internal MOOCs, and three courses on EdX (links will be added when available):

  • Computing: Art, Magic, Science? Part 1 (CAMS 1), 2013.
  • Computing: Art, Magic, Science? Part 2 (CAMS 2), 2014.
  • Agile Software Development, 2015.

Publications about education

1. Paolo Antonucci, Christian Estler, Durica Nikolic, Marco Piccioni and Bertrand Meyer: An Incremental Hint System For Automated Programming Assignments, in ITiCSE ’15, Proceedings of 2015 ACM Conference on Innovation and Technology in Computer Science Education, 6-8 July 2015, Vilnius, ACM Press, pages 320-325. (The result of a master’s thesis, a system for helping students solve online exercises, through successive hints.) Available here.

2. Jiwon Shin, Andrey Rusakov and Bertrand Meyer: Concurrent Software Engineering and Robotics Education, in 37th International Conference on Software Engineering (ICSE 2015), Florence, May 2015, IEEE Press, pages 370-379. (Describes our innovative Robotics Programming Laboratory course, where students from 3 departments, CS, Mechanical Engineering and Electrical Engineering learned how to program robots.) Available here.

3. Cristina Pereira, Hannes Werthner, Enrico Nardelli and Bertrand Meyer: Informatics Education in Europe: Institutions, Degrees, Students, Positions, Salaries — Key Data 2008-2013, Informatics Europe report, October 2014. (Not a scientific publication but a report. I also collaborated in several other editions of this yearly report series, which I started, from 2011 on. A unique source of information about the state of CS education in Europe.) Available here.

4. (One of the authors of) Informatics education: Europe cannot afford to miss the boat, edited by Walter Gander, joint Informatics Europe and ACM Europe report, April 2013. An influential report which was instrumental in the introduction of computer science in high schools and primary schools in Europe, particularly Switzerland. Emphasized the distinction between “digital literacy” and computer science. Available here.

5. Sebastian Nanz, Faraz Torshizi, Michela Pedroni and Bertrand Meyer: Design of an Empirical Study for Comparing the Usability of Concurrent Programming Languages, in Information and Software Technology (Elsevier), volume 55, 2013. (Journal version of conference paper listed next.) Available here.

6. Bertrand Meyer: Knowledgeable beginners, in Communications of the ACM, vol. 55, no. 3, March 2012, pages 10-11. (About a survey of prior knowledge of entering ETH CS students, over many years. Material from tech report below.) Available here.

7. Sebastian Nanz, Faraz Torshizi, Michela Pedroni and Bertrand Meyer: Design of an Empirical Study for Comparing the Usability of Concurrent Programming Languages, in ESEM 2011 (ACM/IEEE International Symposium on Empirical Software Engineering and Measurement), 22-23 September 2011 (best paper award). Reports on a carefully designed empirical study to assess the teachability of various approaches to concurrent programming. Available here.

8. Martin Nordio, H.-Christian Estler, Julian Tschannen, Carlo Ghezzi, Elisabetta Di Nitto and Bertrand Meyer: How do Distribution and Time Zones affect Software Development? A Case Study on Communication, in Proceedings of the 6th International Conference on Global Software Engineering (ICGSE), IEEE Computer Press, 2011, pages 176-184. (A study of the results of our DOSE distributed course, which involved students from different universities in different countries collaborating on a common software development project.) Available here.

9. Martin Nordio, Carlo Ghezzi, Elisabetta Di Nitto, Giordano Tamburrelli, Julian Tschannen, Nazareno Aguirre, Vidya Kulkarni and Bertrand Meyer: Teaching Software Engineering using Globally Distributed Projects: the DOSE course, in Collaborative Teaching of Globally Distributed Software Development – Community Building Workshop (CTGDSD), Hawaii (at ICSE), May 2011. (Part of the experience of our Distributed Outsourced Software Engineering course, taught over many years with colleagues from Politecnico di Milano and elsewhere, see paper in previous entry.) Available here.

10. Bertrand Meyer: From Programming to Software Engineering (slides only), material for education keynote at International Conference on Software Engineering (ICSE 2010), Cape Town, South Africa, May 2010. Available here.

11. Michela Pedroni and Bertrand Meyer: Object-Oriented Modeling of Object-Oriented Concepts, in ISSEP 2010, Fourth International Conference on Informatics in Secondary Schools, Zurich, January 2010, eds. J. Hromkovic, R. Královic, J. Vahrenhold, Lecture Notes in Computer Science 5941, Springer, 2010. Available here.

12. Michela Pedroni, Manuel Oriol and Bertrand Meyer: What Do Beginning CS Majors Know?, ETH Technical Report, 2009. (Unpublished report about the background of 1st-year ETH CS students surveyed over many years. See shorter 2012 CACM version above.) Available here.

13. Bertrand Meyer: Touch of Class: Learning to Program Well Using Object Technology and Design by Contract, Springer, 2009 (also translated into Russian). (Introductory programming textbook, used for many years at ETH Zurich and Innopolis University for the first programming course. The book contains a long discussion of pedagogical issues of teaching programming and CS.) Book page and text of several chapters here.

14. Michela Pedroni, Manuel Oriol, Lukas Angerer and Bertrand Meyer: Automatic Extraction of Notions from Course Material, in Proceedings of SIGCSE 2008 (39th Technical Symposium on Computer Science Education), Portland (Oregon), 12-15 March 2008, ACM SIGCSE Bulletin, vol. 40, no. 1, ACM Press, 2008, pages 251-255. (As the title indicates, tools for automatic analysis of course material to extract the key pedagogical notions or “Trucs”.) Available here.

15. Marie-Hélène Nienaltowski, Michela Pedroni and Bertrand Meyer: Compiler Error Messages: What Can Help Novices?, in Proceedings of SIGCSE 2008 (39th Technical Symposium on Computer Science Education), Portland (Oregon), 12-15 March 2008, ACM SIGCSE Bulletin, vol. 40, no. 1, ACM Press, 2008, pages 168-172. (Discusses the results of experiments with different styles of compiler error messages, which can be baffling to beginners, to determine what works best.) Available here.

16. Bertrand Meyer and Marco Piccioni: The Allure and Risks of a Deployable Software Engineering Project: Experiences with Both Local and Distributed Development, in Proceedings of IEEE Conference on Software Engineering & Training (CSEE&T), Charleston (South Carolina), 14-17 April 2008, ed. H. Saiedian, pages 3-16. (Paper associated with a keynote at an SE education conference. See other papers on the DOSE distributed project experience below.) Available here.

17. Marie-Hélène Ng Cheong Vee (Marie-Hélène Nienaltowski), Keith L. Mannock and Bertrand Meyer: Empirical study of novice error paths, Proceedings of workshop on educational data mining at the 8th international conference on intelligent tutoring systems (ITS 2006), 2006, pages 13-20. (An empirical study of the kind of programming mistakes learners make.) Available here.

18. Bertrand Meyer: Testable, Reusable Units of Cognition, in Computer (IEEE), vol. 39, no. 4, April 2006, pages 20-24. (Introduced a general approach for structuring knowledge for teaching purposes: “Trucs”. Served as the basis for some other work listed, in particular papers with Michela Pedroni on the topics of her PhD thesis.) Available here.

19. Michela Pedroni and Bertrand Meyer: The Inverted Curriculum in Practice, in Proceedings of SIGCSE 2006, Houston (Texas), 1-5 March 2006, ACM Press, 2006, pages 481-485. (Develops the idea of inverted curriculum which served as the basis for our teaching of programming at ETH, Innopolis etc. and led to the “Touch of Class” textbook.) Available here.

20. Bertrand Meyer: The Outside-In Method of Teaching Introductory Programming, in Perspectives of System Informatics, Proceedings of fifth Andrei Ershov Memorial Conference, Akademgorodok, Novosibirsk, 9-12 July 2003, eds. Manfred Broy and Alexandr Zamulin, Lecture Notes in Computer Science 2890, Springer, 2003, pages 66-78. (An early version of the ideas presented in the previous entry.) Available here.

21. Bertrand Meyer: Software Engineering in the Academy, in Computer (IEEE), vol. 34, no. 5, May 2001, pages 28-35. Translations: Russian in Otkrytye Systemy (Open Systems Publications), #07-08-2001, October 2001. (A general discussion of the fundamental concepts to be taught in software engineering. Served as a blueprint for my teaching at ETH.) Available here.

22. Bertrand Meyer: Object-Oriented Software Construction, second edition, Prentice Hall, 1296 pages, January 1997. Translations: Spanish, French, Russian, Serbian, Japanese. (Not a publication on education per se but cited here since it is a textbook that has been widely used for teaching and has many comments on pedagogy.)

23. Bertrand Meyer: The Choice for Introductory Software Education, Guest editorial in Journal of Object-Oriented Programming, vol. 7, no. 3, June 1994, page 8. (A discussion of the use of Eiffel for teaching software engineering topics.)

24. Bertrand Meyer: Towards an Object-Oriented Curriculum, in Journal of Object-Oriented Programming, vol. 6, no. 2, May 1993, pages 76-81. (Journal version of paper cited next.) Available here.

25. Bertrand Meyer: Towards an Object-Oriented Curriculum, in TOOLS 11, Technology of Object-Oriented Languages and Systems, Santa Barbara, August 1993, eds. Raimund Ege, Madhu Singh and B. Meyer, Prentice Hall 1993, pages 585-594. (Early advocacy for using OO techniques in teaching programming – while I was not in academia. Much of my subsequent educational work relied on those ideas.) Available here.

26. Bertrand Meyer: Object-Oriented Software Construction, Prentice Hall, 592 pages, 1988. (First edition, translated into German, Italian, French, Dutch, Romanian, Chinese. As noted for second edition above, not about education per se, but widely used textbook with pedagogical implications.)

27. Bertrand Meyer: Initiation à la programmation en milieu industriel (Teaching Modern Programming Methodology in an Industrial Environment), in RAIRO, série bleue (informatique), vol. 11, no. 1, pages 21-34, 1977. (Early paper on teaching advanced programming techniques in industry.) Available here.

28. Claude Kaiser, Bertrand Meyer and Etienne Pichat: L’Enseignement de la Programmation à l’IIE (Teaching Programming at the IIE engineering school), in Zéro-Un Informatique, 1977. (A paper on my first teaching experience, written when I was barely out of school myself.) Available here.

The Formal Picnic approach to requirements

Requirements engineering (studying and documenting what a software system should do, independently of how it will do it) took some time to be recognized as a key part of software engineering, since the early focus was, understandably, on programming. It is today a recognized sub-discipline and has benefited in the last decades from many seminal concepts. An early paper of mine, On Formalism in Specifications [1], came at the beginning of this evolution; it made the case for using formal (mathematics-based) approaches. One of the reasons it attracted attention is its analysis of the “seven sins of the specifier”: a list of pitfalls into which authors of specifications and requirements commonly fall.

One of the techniques presented in the paper has not made it into the standard requirements-engineering bag of tricks. I think it deserves to be known, hence the present note. There really will not be anything here that is not in the original article; in fact I will be so lazy as to reuse its example. (Current requirements research with colleagues should lead to the publication of new examples.)

Maybe the reason the idea did not register is that I did not give it a name. So here goes: formal picnic.

The usual software engineering curriculum includes, regrettably, no room for  field trips. We are jealous of students and teachers of geology or zoology and their occasional excursions: once in a while you put on your boots, harness your backpack, and head out to quarries or grasslands to watch pebbles or critters in flagrante, after a long walk with the other boys and girls and before all having lunch together in the wild. Yes, scientific life in these disciplines really is a picnic. What I propose for the requirements process is a similar excursion; not into muddy fields, but into the dry pastures of mathematics.

The mathematical picnic process starts with a natural-language requirements document. It continues, for some part of the requirements, with a translation into a mathematical version. It terminates with a return trip into natural language.

The formal approach to requirements, based on mathematical notations (as discussed in my paper), is still controversial; a common objection is that requirements must be understandable by ordinary project stakeholders, many of whom do not have advanced mathematical skills. I am not entering this debate here, but there can be little doubt that formalizing delicate system properties can be a useful step, if only for the requirements engineers themselves. Mathematical notation forces precision.

What, then, if we want to end up with natural language for clarity, but also to take advantage of the precision of mathematics? The formal picnic answer is that we can use mathematics as a tool to improve the requirements. The three steps are:

  • Start: a natural-language requirements document. Typically too vague and deficient in other ways (the seven sins) to serve as an adequate basis for the rest of the software process, as a good requirements document should.
  • Picnic: an excursion into mathematics. One of the main purposes of a requirements process is to raise and answer key questions about the system’s properties. Using mathematics helps raise the right questions and obtain precise answers. You do not need to apply the mathematical picnic to the entire system: even if the overall specification remains informal, some particularly delicate aspects may benefit from a more rigorous analysis.
  • Return trip: thinking of the non-formalist stakeholders back home, we translate the mathematical descriptions into a new natural-language version.

This final version is still in (say) English, but typically not the kind of English that most people naturally write. It may in fact “sound funny”. That is because it is really just mathematical formulae translated back into English. It retains the precision and objectivity of mathematics, but is expressed in terms that anyone can understand.

Let me illustrate the mathematical picnic idea with the example from my article. For reasons that do not need to be repeated here (they are in the original), it discussed a very elementary problem of text processing: splitting a text across lines. The original statement of the problem, from a paper by Peter Naur, read:

Given a text consisting of words separated by BLANKS or by NL (new line) characters, convert it to a line-by-line form in accordance with the following rules: (1) line breaks must be made only where the given text has BLANK or NL; (2) each line is filled as far as possible as long as  (3) no line will contain more than MAXPOS characters.

My article then cited an alternative specification proposed in a paper by testing experts John Goodenough and Susan Gerhart. G&G criticized Naur’s work (part of the still relevant debate between proponents of tests and proponents of proofs such as Naur). They pointed out deficiencies in his simple problem statement above; for example, it says nothing about the case of a text containing a word of more than MAXPOS characters. G&G stated that the issue was largely one of specification (requirements) and went on to propose a new problem description, four times as long as Naur’s. In my own article, I had a field day taking aim at their own endeavor. (Sometime later I met Susan Gerhart, who was incredibly gracious about my critique of her work, and became an esteemed colleague.) I am not going to cite the G&G replacement specification here; you can find it in my article.

Since that article’s topic was formal approaches, it provided a mathematical statement of Naur’s problem. It noted that  the benefit of mathematical formalization is not just to gain precision but also to identify important questions about the problem, with a view to rooting out dangerous potential bugs. Mathematics means not just formalization but proofs. If you formalize the Naur problem, you soon realize that — as originally posed — it does not always have a solution (because of over-MAXPOS words). The process forces you to specify the conditions under which solutions do exist. This is one of the software engineering benefits of a mathematical formalization effort: if such conditions are not identified at the requirements level, they will take their revenge in the program, in the form of erroneous results and crashes.
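
To give a flavor of the kind of property such a formalization brings out, here is a sketch of the central one, in my own notation reconstructed for this note (the article's actual specification is different and more complete):

\[ \exists\, o :\ \mathit{acceptable}(i, o) \;\Longleftrightarrow\; \forall\, w \in \mathit{words}(i) :\ |w| \le \mathit{MAXPOS} \]

where words (i) denotes the maximal runs of non-break characters of the input i, and acceptable (i, o) is the conjunction of the three conditions of the specification given below: word preservation, the MAXPOS bound on lines, and minimality of the number of newlines. Having to write acceptable down explicitly is what forces the existence question into the open.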

You can find the mathematical specification (only one of several possibilities) in the article.  The discussion also noted that one could start again from that spec and go back to English. That was, without the name, the mathematical picnic. The result’s length is in-between the other two versions: twice Naur’s, but half G&G’s. Here it is:

Given are a non-negative integer MAXPOS and a character set including two “break characters” blank and newline. The program shall accept as input a finite sequence of characters and produce as output a sequence of characters satisfying the following conditions:
• It only differs from the input by having a single break character wherever the input has one or more break characters;
• Any MAXPOS + 1 consecutive characters include a newline;
• The number of newline characters is minimal.
If (and only if) an input sequence contains a group of MAXPOS + 1 consecutive nonbreak characters, there exists no such output. In this case, the program shall produce the output associated with the initial part of the sequence, up to and including the MAXPOS-th character of the first such group, and report the error.

This post-picnic version is the result of a quasi-mechanical retranscription from the mathematical specification in the paper.

It uses the kind of English that one gets after a mathematical excursion. I wrote above that this style might sound funny; not to me in fact, because I am used to mathematical picnics, but probably to others (does it sound funny to you?).

The picnic technique provides a good combination of the precision of mathematics and the readability of English. English requirements as ordinarily written are subject to the seven sins described in my article, from ambiguity and contradiction to overspecification and noise. A formalization effort can correct these issues, but yields a mathematical text. Whether we like it or not, many people react negatively to such texts. We might wish they learn, but that is often not an option, and if they are important stakeholders we need their endorsement or correction of the requirements. With a mathematical picnic we translate the formal text back into something they will understand, while avoiding the worst problems of natural-language specifications.

Practicing the Formal Picnic method also has a long-term benefit for a software team. Having seen first-hand that better natural-language specifications (noise-free and more precise) are possible, team members little by little learn to apply the same style to the English texts they write, even without a mathematical detour.

If the goal is high-quality requirements, is there any alternative? What I have seen in many requirements documents is a fearful attempt to avoid ambiguity and imprecision by leaving no stone unturned: adding information and redundancy over and again. This was very much what I criticized in the G&G statement of requirements, which attempted to correct the deficiencies of the Naur text by throwing in ever more details, causing ever more risks of entanglement. It is fascinating to see how every explanation added in the hope of filling a possible gap creates more sources of potential confusion and a need for even more explanations. In industrial projects, this is the process that leads to thousands-of-pages documents, so formidable that they end up (as in the famous Ariane-5 case) on a shelf where no one will consult them when they would provide critical answers.

Mathematical specifications yield the precision and uncover the contradictions, but they also avoid noise and remain terse. Translating them back into English yields a reasonable tradeoff. Try a formal picnic one of these days.

Acknowledgments

For numerous recent discussions of these and many other related topics, I am grateful to my colleagues from the Innopolis-Toulouse requirements research group: Jean-Michel Bruel, Sophie Ebersold, Florian Galinier, Manuel Mazzara and Alexander Naumchev. I remain grateful to Axel van Lamsweerde (beyond his own seminal contributions to requirements engineering) for telling me, six years after I published a version of [1] in French, that I should take the time to produce a version in English too.

Reference

1. Bertrand Meyer: On Formalism in Specifications, in IEEE Software, vol. 2, no. 1, January 1985, pages 6-25. PDF available via IEEE Xplore with account, and also from here. Adapted translation of an original article in French (AFCET Software Engineering newsletter, no. 1, pages 81-122, 1979).

(This article was originally published on the Communications of the ACM blog.)

Why not program right?

(Originally published on CACM blog.)

Most of the world programs in a very strange way. Strange to me. I usually hear the reverse question: people ask us, the Eiffel community, to explain why we program our way. I hardly understand the question, because the only mystery is how anyone can even program in any other way.

The natural reference is the beginning of One Flew Over the Cuckoo’s Nest: when entering an insane asylum and wondering who is an inmate and who a doctor, you may feel at a loss for objective criteria. Maybe the rest of the world is right and we are the nut cases. Common sense suggests it.

But sometimes one can go beyond common sense and examine the evidence. So lend me an ear while I explain my latest class invariant. Here it is, in Figure 1. (Wait, do not just run away yet.)

Figure 1: From the invariant of class MULTIGRAPH

This is a program in progress and by the time you read this note the invariant and enclosing class will have changed. But the ideas will remain.

Context: multigraphs

The class is called MULTIGRAPH and describes a generalized notion of graph, illustrated in Figure 2. The differences are that: there can be more than one edge between two nodes, as long as they have different tags (like the spouse and boss edges between 1 and 2); and there can be more than one edge coming out of a given node and with a given tag (such as the two boss edges out of 1, reflecting that 1’s boss might be 2 in some cases and 3 in others). Some of the nodes, just 1 here, are “roots”.

The class implements the notion of multigraph and provides a wide range of operations on multigraphs.

Figure 2: A multigraph

Data structures

Now we turn to the programming and software engineering aspects. I am playing with various ways of accessing multigraphs. For the basic representation of a multigraph, I have chosen a table of triples:

                triples_table: HASH_TABLE [TRIPLE, TUPLE [source: INTEGER; tag: INTEGER; target: INTEGER]]  — Table of triples, each retrievable through its `source’, `tag’ and `target’.

where the class TRIPLE describes [source, tag, target] triples, with a few other properties, so they are not just tuples. It is convenient to use a hash table, where the key is such a 3-tuple. (In an earlier version I used just an ARRAY [TRIPLE], but a hash table proved more flexible.)

Sources and targets are nodes, also called “objects”; we represent both objects and tags by integers for efficiency. It is easy to have structures that map symbolic tag names such as “boss” to integers.
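
For concreteness, here is a minimal sketch of what the TRIPLE class might contain, as inferred from this discussion (the actual class has more features and contracts; is_inoperative is explained further below):

        class TRIPLE create make feature

                source, tag, target: INTEGER
                                -- Source object, tag and target object of the edge, all coded as integers.

                is_inoperative: BOOLEAN
                                -- Has the corresponding edge been (logically) removed?

                make (s, t, o: INTEGER)
                                -- Initialize a triple with source `s', tag `t' and target `o'.
                        do
                                source := s
                                tag := t
                                target := o
                        end

        end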

triples_table is the core data structure but it turns out that for the many needed operations it is convenient to have others. This technique is standard: for efficiency, provide different structures to access and manipulate the same underlying information, with some redundancy. So I also have:

 triples_from:  ARRAYED_LIST [LIST [TRIPLE]]
               — Triples starting from a given object. Indexed by object numbers.

  triples_with:  HASH_TABLE [LIST [TRIPLE], INTEGER]
               — Triples labeled by a given tag. Key is tag number.

 triples_to:  ARRAYED_LIST [LIST [TRIPLE]]
               — Triples leading into a given object. Indexed by object numbers.

Figure 3 illustrates triples_from and Figure 4 illustrates triples_with. triples_to is similar.

Figure 3: The triples_from array of lists and the triples_table

Figure 4: The triples_with array of lists and the triples_table

It is also useful to access multigraphs through yet another structure, which gives us the targets associated with a given object and tag:

successors: ARRAY [HASH_TABLE [LIST [TRIPLE], INTEGER]]
               — successors [obj] [t] includes all o such that there is a t-reference from obj to o.

For example in Figure 2 successors [1] [boss] is {2, 3}, and in Figures 3 and 4 successors [26] [t] is {22, 55, 57}. Of course we can obtain the “successors” information through the previously defined structures, but since this is a frequently needed operation I decided to include a specific data structure (implying that every operation modifying the multigraph must update it). I can change my mind later on and decide to make “successors” a function rather than a data structure; it is part of the beauty of OO programming, particularly in Eiffel, that such changes are smooth and hardly impact client classes.
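
As an illustration of that flexibility, here is what a query form of successors could look like, computed from triples_from instead of stored (a hypothetical alternative sketched for this note, not the version actually used in the class):

        successors_of (obj, t: INTEGER): LINKED_SET [INTEGER]
                        -- Targets of the triples starting from `obj' and labeled by tag `t'.
                do
                        create Result.make
                        across triples_from [obj] as tp loop
                                if tp.item.tag = t then
                                        Result.extend (tp.item.target)
                                end
                        end
                end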

There is similar redundancy in representing roots:

                roots:  LINKED_SET [INTEGER]
                              — Objects that are roots.

                is_root:  ARRAY [BOOLEAN]
                              — Which objects are roots? Indexed by object numbers.

If o is a root, then it appears in the “roots” set and is_root [o] has value True.

Getting things right

These are my data structures. Providing such a variety of access modes is a common programming technique. From a software engineering perspective ― specification, implementation, verification… ― it courts disaster. How do we maintain their consistency? It is very easy for a small mistake to slip into an operation modifying the graph, causing one of the data structures to be improperly updated, but in a subtle and rare enough way that it will not manifest itself during testing, coming back later to cause strange behavior that will be very hard to debug.

For example, one of the reasons I have a class TRIPLE and not just 3-tuples is that a triple is not exactly the same as an edge in the multigraph. I have decided that by default the operation that removes an edge would not remove the corresponding triple from the data structure, but leave it in and mark it as “inoperative” (so class TRIPLE has an extra “is_inoperative” boolean field). There is an explicit GC-like mechanism to clean up deleted edges occasionally. This approach brings efficiency but makes the setup more delicate since we have to be extremely careful about what a triple means and what removal means.

This is where I stop understanding how the rest of the world can work at all. Without some rigorous tools I just do not see how one can get such things right. Well, sure, spend weeks trying out test cases, printing out the structures, manually checking everything (in the testing world this is known as writing lots of “oracles”), trying at great pains to find out the reason for wrong results, guessing what program change will fix the problem, and starting again. Stop when things look OK. When, as Tony Hoare once wrote, there are no obvious errors left.

Setting aside the minuscule share of projects (typically in embedded life-critical systems) that use some kind of formal verification, this process is what everyone practices. One can only marvel that systems, including many successful ones, get produced at all. To take an analogy from another discipline, this does not compare to working like an electrical engineer. It amounts to working like an electrician.

For a short time I programmed like that too (one has to start somewhere, and programming methodology was not taught back then). I no longer could today. Continuing with the Hoare citation, the only acceptable situation is to stop when there are obviously no errors left.

How? Certainly not, in my case, by always being right the first time. I make mistakes like everyone else does. But I have the methodology and tools to avoid some, and, for those that do slip through, to spot and fix them quickly.

Help is available

First, the type system. Lots of inconsistencies, some small and some huge, which in an untyped language would only hit during execution, do not make it past compilation. We are not just talking here about using REAL instead of INTEGER. With a sophisticated type system involving multiple inheritance, genericity, information hiding and void safety, a compiler error message can reflect a tricky logical mistake. You are using a SET as if it were a LIST (some operations are common, but others not). You are calling an operation on a reference that may be void (null) at run time. And so on.

By the way, about void-safety: for a decade now, Eiffel has been void-safe, meaning a compile-time guarantee of no run-time null pointer dereferencing. It is beyond my understanding how the rest of the world can still live with programs that run under myriad swords of Damocles: x.op (…) calls that might any minute, without any warning or precedent, hit a null x and crash.

Then there is the guarantee of logical consistency, which is where my class invariant (Figure 1) comes in. Maybe it scared you, but in reality it is all simple concepts, intended to make sure that you know what you are doing, and rely on tools to check that you are right. When you are writing your program, you are positing all kinds of logical assumptions, large and (mostly) small, all the time. Here, for the structure triples_from [o] to make sense, it must be a list such that:

  • It contains all the triples t in the triples_table such that t.source = o.
  •  It contains only those triples!

You know this when you write the program; otherwise you would not be having a “triples_from” structure. Such gems of knowledge should remain an integral part of the program. Individually they may not be rocket science, but accumulated over the lifetime of a class design, a subsystem design or a system design they collect all the intelligence that makes the software possible.  Yet in the standard process they are gone the next minute! (At best, some programmers may write a comment, but that does not happen very often, and a comment has no guarantee of precision and no effect on testing or correctness.)

Anyone who takes software development seriously must record such fundamental properties. Here we need the following invariant clause:

across triples_from as tf all
        across tf.item as tp all tp.item.source = tf.cursor_index end
end

(It comes in the class, as shown in Figure 1, with the label “from_list_consistent”. Such labels are important for documentation and debugging purposes. We omit them here for brevity.)

What does that mean? If we could use Unicode (more precisely, if we could type it easily with our keyboards) we would write things like “∀ x: E | P (x)” to say that for all x in E, property P holds of x. We need programming-language syntax and write this as across E as x all P (x.item) end. The only subtlety is the .item part, which gives us generality beyond the ∀ notation: x in the across is not an individual element of E but a cursor that moves over E. The actual element at cursor position is x.item, one of the properties of that cursor. The advantage is that the cursor has more properties, for example x.cursor_index, which gives its position in E. You do not get that with the plain ∀ of mathematics.

If instead of ∀ you want ∃ (there exists), use some instead of all. That is pretty much all you need to know to understand all the invariant clauses of class MULTIGRAPH as given in Figure 1.
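
For example, an existential query over one of these structures might read (a made-up illustration, not one of the actual clauses of the class):

        across triples_from [o] as tp some tp.item.tag = t end

asking whether some object o has at least one outgoing edge labeled by some tag t.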

So what the above invariant clause says is: take every position tf in triples_from; its position is tf.cursor_index and its value is tf.item. triples_from is declared as ARRAYED_LIST [LIST [TRIPLE]], so tf.cursor_index is an integer representing an object o, and tf.item is a list of triples. That list should  consist of the triples having tf.cursor_index as their source. This is the very property that we are expressing in this invariant clause, where the innermost across says: for every triple tp.item in the list, the source of that triple is the cursor index (of the outside across). Simple and straightforward, I think (although such English explanations are so much more verbose than formal versions, such as the Eiffel one here, and once you get the hang of it you will not need them any more).

How can one ever include a structure such as triples_from without expressing such a property? To put the question slightly differently: am I inside the asylum looking out, or outside the asylum looking in? Any clue would be greatly appreciated.

More properties

For the tag (triples_with) and target (triples_to) lists, the properties are similar:

across triples_with as tw all across tw.item as tp all tp.item.tag = tw.key end end

across triples_to as tt all across tt.item as tp all tp.item.target = tt.cursor_index end end 

We also have some properties of array bounds:

 is_root.lower = 1 and is_root.upper = object_count

triples_from.lower = 1 and triples_from.upper = object_count

triples_to.lower = 1 and triples_to.upper = object_count

where object_count is the number of objects (nodes), and for an array a (whose bounds in Eiffel are arbitrary, not necessarily 0 or 1, and set on array creation), a.lower and a.upper are the bounds. Here we number the arrays from 1.

There are, as noted, two ways to represent rootness. We must express their consistency (or risk trouble). Two clauses of the invariant do the job:

across roots as t all is_root [t.item] end

across is_root as t all (t.item = roots.has (t.cursor_index)) end

The first one says that if we go through the list roots we only find elements whose is_root value is true; the second, that if we go through the array “is_root” we find values that are true where and only where the corresponding object, given by the cursor index, is in the roots set. Note that the = in that second property is between boolean values (if in doubt, check the type instantly in the EiffelStudio IDE!), so it means “if and only if”.

Instead of these clauses, a more concise version, covering them both, is just

roots ~ domain (is_root)

with a function domain that gives the domain of a function represented by a boolean array. The ~ operator denotes object equality, redefined in many classes, and in particular in the SET classes (roots is a LINKED_SET) to cover equality between sets, i.e. the property of having the same elements.
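
Such a domain function is easy to provide; here is one possible sketch (an assumption of mine for this note; the class may define it differently):

        domain (a: ARRAY [BOOLEAN]): LINKED_SET [INTEGER]
                        -- Indexes at which `a' holds True (assuming, as here, an array numbered from 1).
                do
                        create Result.make
                        across a as c loop
                                if c.item then
                                        Result.extend (c.cursor_index)
                                end
                        end
                end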

The other clauses are all similarly self-explanatory. Let us just go through the most elaborate one, successors_consistent, involving three levels of across:

across successors as httpl all                                        — httpl.item: hash table of lists of triples
        across httpl.item as tpl all                                  — tpl.item: list of triples; tpl.key: key (i.e. tag) in the hash table
                across tpl.item as tp all                             — tp.item: triple
                        tp.item.tag = tpl.key
                        and tp.item.source = httpl.cursor_index
                end
        end
end

You can see that I struggled a bit with this one and made provisions for not having to struggle again when I would look at the code again 10 minutes, 10 days or 10 months later. I chose (possibly strange but consistent) names such as httpl for hash-table triple, and wrote comments (I do not usually need any in invariant and other contract clauses) to remind me of the type of everything. That was not strictly needed since once again the IDE gives me the types, but it does not cost much and could help.

What this says: go over successors, which as you remember is an ARRAY, indexed by objects, of HASH_TABLE, where each entry of such a hash table has an element of type LIST [TRIPLE] and a key of type INTEGER, representing the tag of a number of outgoing edges from the given object. Go over each hash table httpl. Go over the associated list of triples tpl. Then for each triple tp in this list: the tag of the triple must be the key in the hash table entry (remember, the key does denote a tag); and the source of the triple must be the object under consideration, which is the current iteration index in the array of the outermost iteration.

I hope I am not scaring you at this point. Although the concepts are simple, this invariant is more sophisticated than most of those we typically write. Many invariant clauses (and preconditions, and postconditions) are very simple properties, such as x > 0 or x ≠ y. The reason this one is more elaborate is not that I am trying to be fussy but that without it I would be the one scared to death. What is elaborate here is the data structure and programming technique. Not rocket science, nothing beyond what programmers typically do, but elaborate. The only way to get it right is to buttress it by the appropriate logical properties. As noted, these properties are there anyway, in the back of your head, when you write the program. If you want to be more like an electrical engineer than an electrician, you have to write them down.

There is more to contracts

Invariants are not the only kind of such “contract” properties. Here for example, from the same class, is a (slightly abbreviated) part of the postcondition (output property) of the operation that tells us, through a boolean Result, if the multigraph has an edge of given components osource, t (the tag) and otarget:

Result =
        (across successors [osource] [t] as tp some
                not tp.item.is_inoperative and tp.item.target = otarget
        end)

In words, this clause expresses the compatibility of the operation with the successors view: it must answer yes if and only if otarget appears in the successor set of osource for t, and the corresponding triple is not marked inoperative.

The concrete benefits

And so? What do we get out of making these logical properties explicit? Just the intellectual satisfaction of doing things right, and the methodological guidance? No! Once you have done this work, it is all downhill. Turn on the run-time assertion monitoring option (tunable separately for preconditions, postconditions, invariants etc., and on by default in development mode), and watch your tests run. If you are like almost all of us, you will have made a few mistakes, some of which will seem silly when, or rather if, you find them in time (but there is nothing funny about a program that crashes during operation), and some more subtle. Sit back, and just watch your contracts be violated. For example if I change <= to < in the invariant property tw.key <= max_tag, I get the result of Figure 5. I see the call stack that I can traverse, the object run-time structure that I can explore, and all the tools of a modern debugger for an OO language. Finding and correcting the logical flaw will be a breeze.

Figure 5: An invariant violation brings up the debugger

The difference

It will not be a surprise that I did not get all the data structures and algorithms of the class MULTIGRAPH  right the first time. The Design by Contract approach (the discipline of systematically expressing, whenever you write any software element, the associated logical properties) does lead to fewer mistakes, but everyone occasionally messes up. Everyone also looks at initial results to spot and correct mistakes. So what is the difference?

Without the techniques described here, you execute your software and patiently examine the results. In the example, you might output the content of the data structures, e.g.

List of outgoing references for every object:

        1: 1-1->1|D, 1-1->2|D, 1-1->3|D, 1-2->1|D, 1-2->2|D, 1-25->8|D, 1-7->1|D, 1-7->6|D,
           1-10->8|D, 1-3->1|D, 1-3->2|D, 1-6->3|D, 1-6->4|D, 1-6->5|D
        3: 3-6->3, 3-6->4, 3-6->5, 3-9->14, 3-9->15, 3-9->16, 3-1->3, 3-1->2, 3-2->3, 3-2->2,
           3-25->8, 3-7->3, 3-7->6, 3-10->8, 3-3->3, 3-3->2

and so on for all the structures. You check the entries one by one to ascertain that they are as expected. The process nowadays has some automated support, with tools such as JUnit, but it is still essentially manual, tedious and partly haphazard: you write individual test oracles for every relevant case. (For a more automated approach to testing, taking advantage of contracts, see [1].) Like the logical properties appearing in contracts, these oracles are called assertions but the level of abstraction is radically different: an oracle describes the desired result of one test, where a class invariant, or routine precondition, or postcondition expresses the properties desired of all executions.

Compared to the cost of writing up such contract properties (simply a matter of formalizing what you are thinking anyway when you write the code), their effect on testing is spectacular. Particularly when you take advantage of across iterators. In the example, think of all the checks and crosschecks automatically happening across all the data structures, including the nested structures as in the 3-level across clause. Even with a small test suite, you immediately get, almost for free, hundreds or thousands of such consistency checks, each decreasing the likelihood that a logical flaw will survive this ruthless process.

Herein lies the key advantage. Not that you will magically stop making mistakes; but that the result of such mistakes, in the form of contract violations, directly points to logical properties, at the level of your thinking about the program. A wrong entry in an output, whether you detect it visually or through a JUnit clause, is a symptom, which may be far from the cause. (Remember Dijkstra’s comment, the real point of his famous Goto paper, about the core difficulty of programming being to bridge the gap between the static program text, which is all that we control, and its effect: the myriad possible dynamic executions.) Since the cause of a bug is always a logical mistake, with a contract violation, which expresses a logical inconsistency, you are much closer to that cause.

(About those logical mistakes: since a contract violation reflects a discrepancy between intent, expressed by the contract, and reality, expressed by the code, the mistake may be on either side. And yes, sometimes it is the contract that is wrong while the implementation in fact did what is informally expected. There is partial empirical knowledge [2] of how often this is the case. Even then, however, you have learned something. What good is a piece of code of which you are not able to say correctly what it is trying to do?)

The experience of Eiffel programmers reflects these observations. You catch the mistakes through contract violations; much of the time, you find and correct the problem easily. When you do get to producing actual test output (which everyone still does, of course), often it is correct.

This is what has happened to me so far in the development of the example. I had mistakes, but converging to a correct version was a straightforward process of examining invariant violations and other contract violations, and fixing the underlying logical problem each time.

By the way, I believe I do have a correct version (in the sense of the second part of the Hoare quote), on the basis not of gut feeling or wishful thinking but of solid evidence. As already noted it is hard to imagine, if the code contains any inconsistencies, a test suite surviving all the checks.

Tests and proofs

Solid evidence, not perfect; hard to imagine, not impossible. Tests remain only tests; they cannot exercise all cases. The only way to achieve demonstrable correctness is to rely on mathematical proofs performed mechanically. We have this too, with the AutoProof proof system for Eiffel, developed in recent years [4]. I cannot overstate my enthusiasm for this work (look up the Web-based demo), its results (automated proof of correctness of a full-fledged data structures and algorithms library [2]) and its potential, but it is still a research effort. The dynamic approach (meaning test-based rather than proof-based) presented above is production technology, perfected over several decades and used daily for large-scale mission-critical applications. Indeed (I know you may be wondering) it scales up without difficulty:

  • The approach is progressive. Unlike fully formal methods (and proofs), it does not require you to write down every single property down to the last quantifier. You can start with simple stuff like x > 0. The more you write, the more you get, but it is the opposite of an all-or-nothing approach.
  • On the practical side, if you are wondering about the consequences on performance of a delivered system: there are none. Run-time contract monitoring is a compilation option, tunable for different kinds of contracts (invariants, postconditions etc.) and different parts of a system. People use it, as discussed here, for development, testing and debugging. Most of the time, when you deliver a debugged system, you turn it off.
  • It is easy to teach. As a colleague once mentioned, if you can write an if-then-else you can write a precondition. Our invariants in the above example were a bit more sophisticated, but programmers do write loops (in fact, the Eiffel loop for iterating over a structure also uses across, with loop and instructions instead of all or some and boolean expressions). If you can write a loop over an array, you can write a property of the array’s elements.
  • A big system is an accumulation of small things. In a blog article [5] I recounted how I lost a full day’s work producing a series of technical diagrams of increasing complexity, using one of the major Web-based collaborative development tools. A bug of the system caused all the diagrams to reproduce the first, trivial one. I managed to get through to the developers. My impression (no more than an educated guess resulting from this interaction) is that the data structures involved were far simpler than the ones used in the above discussion. One can surmise that even simple invariants would have uncovered the bug during testing rather than after deployment.
  • Talking about deployment and tools used directly on the cloud: the action in software engineering today is in DevOps, a rapid develop-deploy loop scheme. This is where my perplexity becomes utter cluelessness. How can anyone even consider venturing into that kind of exciting but unforgiving development model without the fundamental conceptual tools outlined above?

We are back then to the core question. These techniques are simple, demonstrably useful, practical, validated by years of use, explained in professional books (e.g. [6]), introductory programming textbooks (e.g. [7]), EdX MOOCs (e.g. [8]), YouTube videos, online tutorials at eiffel.org, and hundreds of articles cited thousands of times. On the other hand, most people reading this article are not using Eiffel. On reflection, a simple quantitative criterion does exist to identify the inmates: there are far more people outside the asylum than inside. So the evidence is incontrovertible.

What, then, is wrong with me?

References

(Nurse to psychiatrist: these are largely self-references. Add narcissism to list of patient’s symptoms.)

1.    Ilinca Ciupa, Andreas Leitner, Bertrand Meyer, Manuel Oriol, Yu Pei, Yi Wei and others: AutoTest articles and other material on the AutoTest page.

2. Bertrand Meyer, Ilinca Ciupa, Lisa (Ling) Liu, Manuel Oriol, Andreas Leitner and Raluca Borca-Muresan: Systematic evaluation of test failure results, in Workshop on Reliability Analysis of System Failure Data (RAF 2007), Cambridge (UK), 1-2 March 2007, available here.

3.    Nadia Polikarpova, Ilinca Ciupa and Bertrand Meyer: A Comparative Study of Programmer-Written and Automatically Inferred Contracts, in ISSTA 2009: International Symposium on Software Testing and Analysis, Chicago, July 2009, available here.

4.    Carlo Furia, Bertrand Meyer, Nadia Polikarpova, Julian Tschannen and others: AutoProof articles and other material on the AutoProof page. See also interactive web-based online tutorial here.

5.    Bertrand Meyer, The Cloud and Its Risks, blog article, October 2010, available here.

6.    Bertrand Meyer: Object-Oriented Software Construction, 2nd edition, Prentice Hall, 1997.

7.    Bertrand Meyer: Touch of Class: Learning to Program Well Using Objects and Contracts, Springer, 2009, see touch.ethz.ch and Amazon page.

8.    MOOCs (online courses) on EdX: Computing: Art, Magic, Science?, Part 1 and Part 2. (Go to archived versions to follow the courses.)

Mainstream enough for me

Every couple of weeks or so, I receive a message such as the one below; whenever I give a talk on any computer science topic anywhere in the world, strangers come to me to express similar sentiments. While I enjoy compliments as much as anyone else, I am not the right recipient for such comments. In fact there are 7,599,999,999  more qualified recipients. For me, Eiffel is “mainstream” enough.

What strikes me is why so many commenters, after the compliment, stop at the lament. Eiffel is not some magical dream, it is a concrete technology available for download at eiffel.org. Praising Eiffel will not change the world. Using EiffelStudio might.

When one answers the compliments with “Thanks! Then use it for your work“, the variety of excuses is amusing, or sad depending on the perspective, from “my boss would not allow it” (variant: “my subordinates would not accept it”) to “does it work with [library that does not work with anything else]?”.

Well, you might have some library wrapping to do (EiffelStudio easily interfaces with C, C++ and others). Also, you should not stop at the first hurdle: it might be due to a bug (surprise! The technology is not perfect!), but it might also just be that Eiffel and EiffelStudio are different and you have to shed some long-held assumptions and practices. What matters is that the technology does work; companies large and small use Eiffel all the time for long-running projects, some into the millions of lines and tens of thousands of classes, and refuse to switch to anything else.

What follows is a literal translation of the original message into English (it was written in another language). Since the author, whom I do not know, did not state the email was a public comment, I removed identifying details.

 

Subject: Eiffel is fantastic! But why is it not mainstream?

Dear Professor Meyer:

Greetings from [the capital of a country on another continent].

I graduated from [top European university] in 1996 and completed a master’s in physics from [institute on another continent] in 2006.

I have worked for twenty years in the industry, from application engineer to company head. In my industry career I have been able to be both CEO and CTO at the same time, thanks to the good education I received originally.

Information systems were always a pillar of my business strategy. Unfortunately, I was disappointed every single time I commissioned the development of a new system. This led me to study further and to investigate why the problem is not solved. That’s how I found your book Object-Oriented Software Construction and became enthusiastic about Design by Contract, Eiffel and EiffelStudio. To me your method is the only method for developing “correct” software. The Eiffel programming language is, in my view, the only true object-oriented language.

However it befuddles me — I cannot understand —  why the “big” players in this industry (Apple, Google, Microsoft etc.) do not use Design by Contract. .NET has a Visual Studio extension with the name “Code Contracts” but it is no longer supported in the latest Visual Studio 2017. Big players, why don’t you promote Design by Contract?

Personally, after 20 years in industry, I found out that my true calling is in research. It would be a great pleasure to be able to work in research. My dream job is Data Scientist and I had thought to apply to Google for a job. Studying the job description, I noted that “Python” is one of the desired languages. Python is dynamically typed and does not support good encapsulation. No trace of Design by Contract…

What’s wrong with the software industry?

With best regards,

Festina retro

We “core” computer scientists and software engineers always whine that our research themes forever prevent us, to the delight of our physicist colleagues but unjustly, from reaching the gold standard of academic recognition: publishing in Nature. I think I have broken this barrier now by disproving the old, dusty laws of physics! Brace yourself for my momentous discovery: I have evidence of negative speeds.

My experimental setup (as a newly self-anointed natural scientist I am keen to offer the possibility of replication) is the Firefox browser. I was downloading an add-on, with a slow connection, and at some point got this in the progress bar:

Negative download speed

Negative speed! Questioning accepted wisdom! Nobel in sight! What next, cold fusion?

I fear I have to temper my enthusiasm in deference to more mundane explanations. There’s the conspiracy explanation: the speed is truly negative (more correctly, it is a “velocity”, a vector of arbitrary direction, hence in dimension 1 possibly negative); Firefox had just reversed the direction of transfer, surreptitiously dumping my disk drive to some spy agency’s server.

OK, that is rather far-fetched. More likely, it is a plain bug. A transfer speed cannot be negative; this property is not just wishful thinking but should be expressed as an integral part of the software. Maybe someone should tell Firefox programmers about class invariants.
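
In Eiffel terms the property would be a one-line class invariant in whatever class tracks the transfer; a minimal sketch, with class and feature names invented for the example:

        class DOWNLOAD_STATUS feature
                bytes_per_second: REAL_64
                                -- Current estimate of the transfer speed.
        invariant
                speed_non_negative: bytes_per_second >= 0.0
        end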

Split the Root: a little design pattern

Many programs take “execution arguments” which the program users provide at the start of execution. In EiffelStudio you can enter them under Execution -> Execution parameters.

The program can access them through the Kernel Library class ARGUMENTS. Typically, the root class of the system inherits from ARGUMENTS and its creation procedure will include something like

if argument_count /= N then
……..print (“XX expects exactly N arguments: AA, BB, …%N”)
else
……..u := argument (1) ; v := argument (2) ; …
……..“Proceed with normal execution, using u, v, …”
end

where N is the number of expected arguments, XX is the name of the program, and AA, … are the roles of the arguments. u, v, … are local variables. The criterion for acceptance could be “at least N” instead of exactly N. The features argument_count and argument come from class ARGUMENTS.

In all but trivial cases this scheme (which was OK years ago, in a less sophisticated state of the language) does not work! The reason is that the error branch will fail to initialize attributes. Typically, the “Proceed with…” part in the other branch is of the form

               attr1 := u
                attr2 := v
                …
                create obj1.make (attr1, …)
                create obj2.make (attr2, …)
                “Work with obj1, obj2, …”

If you try to compile code of this kind, you will get a compilation error:

Compiler error message

Eiffel is void-safe: it guarantees that no execution will ever produce null-pointer dereference (void call). To achieve this guarantee, the compiler must make sure that all attributes are “properly set” to an object reference (non-void) at the end of the creation procedure. But the error branch fails to initialize obj1 etc.

You might think of replacing the explicit test by a precondition to the creation procedure:

               require
                                argument_count = N

but that does not work; the language definition explicitly prohibits preconditions in a root creation procedure. The Ecma-ISO standard (the official definition of the language, available here) explains the reason for the corresponding validity rule (VSRP, page 32):

A routine can impose preconditions on its callers if these callers are other routines; but it makes no sense to impose a precondition on the external agent (person, hardware device, other program…) that triggers an entire system execution, since there is no way to ascertain that such an agent, beyond the system’s control, will observe the precondition.

The solution is to separate the processing of arguments from the rest of the program’s work. Add a class CORE which represents the real core of the application and separate it from the root class, say APPLICATION. In APPLICATION, all the creation procedure does is to check the arguments and, if they are fine, pass them on to an instance of the core class:

                note
                                description: “Root class, processes execution arguments and starts execution”
                class APPLICATION create make feature
                                core: CORE
                                                — Application’s core object
                                make
……..……..……..……..……..……..— Check arguments and proceed if they make sense.
                                                do
                                                             if argument_count /= N then
                                                                                print (“XX expects exactly N arguments: AA, BB, …%N”)
                                                                else
                                                                                create core.make (argument (1), argument (2), …)
                                                                                                — By construction the arguments are defined!
                                                                                core.live
                                                                                                — Perform actual work
                                                                                               — (`live’ can instead be integrated with `make’ in CORE.)

                                                                end
                                                end
                 end
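
For completeness, here is one possible shape of the corresponding CORE class. The original text leaves it implicit, so treat this as a hypothetical sketch (feature names u, v and live are just examples):

note
	description: "Core of the application, created only with validated arguments"
class CORE create make feature

	u, v: STRING
			-- Execution arguments, already checked by the root class.

	make (a_u, a_v: STRING)
			-- Record the validated arguments.
		do
			u := a_u
			v := a_v
		end

	live
			-- Perform the actual work of the application, using u, v, ...
		do
			-- Normal processing goes here.
		end

end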
 
We may call this little design pattern “Split the Root”. Nothing earth-shattering; it is simply a matter of separating concerns (keeping the Model apart from the View). It assumes a system that includes text-based output, whereas many applications are graphical. It is still worth documenting, for two reasons.

First, in its own modest way, the pattern is useful for simple programs; beginners, in particular, may not immediately understand why the seemingly natural way of processing and checking arguments gets rejected by the compiler.

The second reason is that Split the Root illustrates the rules that preside over a carefully designed language meant for carefully designed software. At first it may be surprising and even irritating to see code rejected because, in a first attempt, the system’s root procedure has a precondition, and in a second attempt because some attributes are not initialized — in the branch where they do not need to be initialized. But there is a reason for these rules, and once you understand them you end up writing more solid software.

 


Concurrency/verification positions at Politecnico di Milano

As part of the continuation of the ERC Advanced Investigator Grant project “Concurrency Made Easy” (started at ETH Zurich, see the project pages at cme.ethz.ch), I have positions at Politecnico di Milano for:

  • Postdocs (having a doctoral degree)
  • Research associates (officially: “Assegno di Ricerca”, with the requirement of having a master’s degree), which can lead to a PhD position.

The deadline for applications is October 11. Please contact me directly if interested. What I expect:

  • The requisite degrees as stated above.
  • Innovative and enterprising spirit, passion for quality work in software engineering.
  • Either or both of excellent programming abilities and strong CS theoretical background.
  • Knowledge of as many as possible of: object-oriented programming, concurrency/parallelism, software verification/formal methods, Eiffel.
  • Familiarity with the basics of the project as described in the project pages at the URL above.

LASER summer school on software for robotics: last call for registration

Much of the progress in robotics is due to software advances, and software issues remain at the heart of the formidable challenges that remain. The 2017 LASER summer school, held in September in Elba, brings together some of the most prestigious international experts in the area.

The LASER school has established itself as one of the principal forums to discuss advanced software issues. The 2017 school takes place from 9 to 17 September in the idyllic setting of the Hotel del Golfo in Procchio, Elba Island, Italy.

Robotics is progressing at an amazing pace, bringing improvements to almost all areas of human activity. Today’s robotics systems rely ever more fundamentally on complex software, raising difficult issues. The LASER 2017 summer school covers both the current state of robotics software technology and open problems. The lecturers are top international experts with both theoretical contributions and major practical achievements in developing robotics systems.

The LASER school is intended for professionals from industry (engineers and managers) as well as university researchers, including PhD students. Participants learn about the most important software technology advances from the pioneers in the field. The school’s focus is applied, although theory is welcome to establish solid foundations. The format of the school favors extensive interaction between participants and speakers.

We have lined up an impressive roster of speakers from the leading edge of both industry and academia:

Rodolphe Gélin, Aldebaran Robotics
Ashish Kapoor, Microsoft Research
Davide Brugali, University of Bergamo, on Managing software variability in robotic control systems
Nenad Medvidovic, University of Southern California, on Software Architectures of Robotics Systems
Bertrand Meyer, Politecnico di Milano & Innopolis University, on Concurrent Object-Oriented Robotics Software
Issa Nesnas, NASA Jet Propulsion Laboratory, on Experiences from robotic software development for research and planetary flight robots
Hiroshi (“Gitchang”) Okuno, Waseda University & Kyoto University, on Open-Sourced Robot Audition Software HARK: Capabilities and Applications

The school takes place at the magnificent Hotel del Golfo in the Gulf of Procchio, Elba. Along with an intensive scientific program, participants will have time to enjoy the countless natural and cultural riches of this wonderful, history-laden jewel of the Mediterranean.

For more information about the school, the speakers and registration see the LASER site.


The perils of feature interaction

One of the most delicate aspects of design is feature interaction. As users, we suffer daily from systems offering features that individually make sense but clash with each other. In my agile book [1] I explained in detail, building on the work of Pamela Zave, why this very problem makes one of the key ideas of agile methods,  the reliance on “user stories” for requirements, worthless and damaging.

A small recent incident reminded me of the perils of feature interaction. I used my Lenovo W540 laptop without power for a short while, then reached a sedentary location and plugged it in. Hence my surprise when, some hours later, it started beeping to alert me that it was running out of battery. The natural reactions — check the outlet and the power cord — had no effect. I found the solution, but just in time: otherwise, including if I had not heard the warning sound, I would have been unable to use the laptop any further. That’s right: I would not have been able to restart the computer at all, even with access to a power outlet, and even though it was perfectly functional and so was its (depleted) battery. The reason is that the problem arose from a software setting, which (catch-22 situation) I could not correct without starting the computer [2].

The only solution would have been to find another, non-depleted battery. That is not a trivial matter if you have traveled with your laptop outside of a metropolis: the W540 has a special battery which ordinary computer shops do not carry [3].

The analysis of what made such a situation possible must start with the list of relevant hardware and software product features.

Hardware:

  • HA. This Lenovo W series includes high-end laptops with high power requirements, which the typical 65-watt airplane power jack does not satisfy.
  • HB. With models prior to the W540, if you tried to connect a running laptop to the power supply in an airplane, it would not charge, and the power indicator would start flickering.  But you could still charge it if you switched it off.
  • HC. The W540 effectively requires 135 watts and will not take power from a 65-watt power source under any circumstances.

Software:

  • SA. The operating system (this discussion assumes Windows) directly reflects HC by physically disabling charging if the laptop is in the “Airplane” power mode.
  • SB. If you disable wireless, the operating system automatically goes into the “Airplane” power mode.
  • SC. In the “Airplane” power mode, the laptop, whether or not connected through a charger to a power outlet of any wattage, will not charge. The charging function is just disabled.
  • SD. One can edit power modes to change parameters, such as time to automatic shutoff, but the no-charging property in Airplane mode is not editable and not even mentioned in the corresponding UI dialog. It seems to be a behind-the-scenes property magically attached to the power-mode name “Airplane”.
  • SE. There is a function key for disabling wireless: F8. As a consequence of SB it also has the effect of switching to “Airplane” mode.
  • SF. Next to F8 on the keyboard is F7.
  • SG. F7 serves to display the screen content on another monitor (Windows calls it a “projector”). F7 offers a cyclic set of choices: laptop only, laptop plus monitor etc.
  • SH. In the old days (like five years ago), such function keys setting important operating system parameters on laptops used to be activated only if you held them together with a special key labeled “Fn”. For some reason (maybe the requirement was considered too complicated for ordinary computer users) the default mode on Lenovo laptops does not use the “Fn” key anymore: you just press the desired key, such as F7 or F8.
  • SI. You can revert to the old mode, which requires pressing “Fn”, by going into the BIOS and performing some not-absolutely-trivial steps, making this possibility the preserve of techies. (Helpfully, this earlier style is called “Legacy mode”, as a way to remind you that you are an old-timer, probably barely graduated from MS-DOS and still using obsolete conventions. In reality, the legacy mode is the right one to use, whether for techies or novices: it is all too easy to hit a function key by mistake and get totally unexpected results. The novice, not the techie, is the one who will be completely confused and panicked as a result. The first thing I do with a new laptop is to go to the BIOS and set legacy mode.)

By now you have guessed what happened in my case, especially once you know that I had connected the laptop to a large monitor and had some trouble getting that display to work. In the process I hit Fn-F7 (feature SG) several times.  I must have mistakenly (SF) pressed F8 instead of F7 at some point. Normally, Legacy mode (SI) should have made me immune to the effects of hitting a function key by mistake, but I did use the neighboring key F7 for another purpose. Hitting F8 disabled wireless (SE) and switched on Airplane power mode (SB). At that point the laptop, while plugged in correctly, stopped charging (SC, SD).

How did I find out? Since I was looking for a hardware problem I could have missed the real cause entirely and ended up with a seemingly dead laptop. Fortunately I opened the Power Options dialog to see what it said about the battery. I noticed that among the two listed power plans the active one was not “Power Saver”, to which I am used, but “Airplane”. I did not immediately pay  attention to that setting; since I had not used the laptop for a while I just thought that maybe the last time around I had switched on “Airplane”, even though that made little sense since I was not even aware of the existence of that option. After trying everything else, though, I came back to that intriguing setting, changed to the more usual “Power Saver”, and the computer started to charge again. I was lucky to have a few percent of battery still left at that point.

Afterwards I found a relevant discussion thread on a Lenovo user forum.

As is often the case in such feature-interaction mishaps, most of the features make sense individually [4]. What causes trouble is some unforeseen combination of features.

There is no sure way to avoid such trouble, but there is a sure way to cause it: design a system feature by feature, as with user stories in agile development. The system must do this and it must do that. Oh, by the way, it must also do that. And that. User stories have one advantage: everyone understands them. But that is also their limitation. Good requirements and design require professionals who can see the whole beyond the parts.

A pernicious side of this situation is that many people believe that use cases and user stories are part of object-oriented analysis, whereas the OO approach to requirements and design is the reverse: rise above individual examples to uncover the fundamental abstractions.

As to my laptop, it is doing well, thanks. And I will be careful with function keys.

Reference and notes

[1] Bertrand Meyer: Agile! The Good, the Hype and the Ugly, Springer, 2014,  Amazon page: here, book page: here. A description of the book appeared here on this blog at the time of publication.

[2] Caveat: I have not actually witnessed this state in which a plugged-in laptop will not restart. The reason is simply that I do not have an alternate battery at the moment so I cannot perform the experiment with the almost certain result of losing the use of my laptop. I will confirm the behavior as soon as I have access to a spare battery.

[3] It has been my systematic experience over the past decade and a half that Lenovo seems to make a point, every couple of years, to introduce new models with incompatible batteries and docking stations. (They are also ever more incredibly bulky, with the one for the W540 almost as heavy as the laptop itself. On the other hand the laptops are good, otherwise I would not be bothering with them.)

[4] One exception here is feature SB: switching wireless off does not necessarily mean you want to select a specific power mode! It is a manifestation of the common syndrome of software tools that think they are smarter than you, and are not. Another exception is SE: to let a simple key press change fundamental system behavior is to court disaster. But I had protected myself by using legacy mode and was hit anyway.


AutoProof workshop: Verification As a Matter of Course

The AutoProof technology pursues the goal of “Verification As a Matter Of Course”, integrated into the EVE development environment. (The AutoProof project page is here; see particularly the online interactive tutorial.) A one-day workshop devoted to the existing AutoProof technology and its current development will take place on October 1 near Toulouse in France. It is an informal event (no proceedings planned at this point, although based on the submissions we might decide to produce a volume), on a small scale, designed to bring together people interested in making the idea of practical verification a reality.

The keynote will be given by Rustan Leino from Microsoft Research, the principal author of the Boogie framework on which the current implementation of AutoProof relies.

For submissions (or to attend without submitting) see the workshop page here. You are also welcome to contact me for more information.


Design by Contract: ACM Webinar this Thursday

A third ACM webinar this year (after two on agile methods): I will be providing a general introduction to Design by Contract. The date is this coming Thursday, September 17, and the time is noon New York (18 Paris/Zurich, 17 London, 9 Los Angeles, see here for hours elsewhere). Please tune in! The event is free but requires registration here.


Framing the frame problem (new paper)

Among the open problems of verification, particularly the verification of object-oriented programs, one of the most vexing is framing: how to specify and verify what program elements do not change. Continuing previous work, this article presents a “double frame inference” method, automatic on both the specification and verification sides. There is no need to write frame specifications: they will be inferred from routine postconditions. For verification, the method computes the set of actually changed properties through a “change calculus”, itself based on the previously developed alias calculus.

Some verification techniques, such as Hoare-style proofs, require significant annotation effort and potentially yield full functional verification; others, such as model checking and abstract interpretation, have more limited goals but seek full automation. Framing, in my opinion, should be automatic, freeing the programmer-verifier to devote the annotation effort to truly interesting properties.
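
To make the idea concrete, here is a small invented illustration (not an example from the paper): the postcondition of deposit mentions balance as the only changed query, so a frame-inference scheme of this kind can conclude that everything else, such as owner_id, is left unchanged, without any explicit frame annotation.

class ACCOUNT feature

	balance: INTEGER
			-- Current balance.

	owner_id: INTEGER
			-- Identifier of the account owner; not affected by deposit.

	deposit (amount: INTEGER)
			-- Add amount to the balance.
		require
			positive: amount > 0
		do
			balance := balance + amount
		ensure
			added: balance = old balance + amount
			-- No frame clause is written: from this postcondition the frame
			-- {balance} can be inferred, so all other queries of the class
			-- are implicitly unchanged.
		end

end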

Reference

[1] Bertrand Meyer: Framing the Frame Problem, in Dependable Software Systems, Proceedings of August 2014 Marktoberdorf summer school, eds. Alexander Pretschner, Manfred Broy and Maximilian Irlbeck, NATO Science for Peace and Security, Series D: Information and Communication Security, Springer, 2015 (to appear), pages 174-185; preprint available here.


Detecting deadlock automatically? (New paper)

To verify sequential programs, we have to prove that they do the right thing, but also that they do it within our lifetime — that they terminate. The termination problem is considerably harder with concurrent programs, since they add a new form of non-termination: deadlock. A set of concurrent processes or threads will deadlock if they end up each holding a resource that another wants and wanting a resource that another holds.

There is no general solution to the deadlock problem, not even a “good enough” general solution. (“Good enough” is the best we can hope for, since like many important problems deadlock is undecidable.) It is already hard enough to provide run-time deadlock detection, to be able at least to cancel execution when deadlock happens. The research reported in this new paper [1] pursues the harder goal of static detection. It applies to an object-oriented context (specifically the SCOOP model of concurrent OO computation) and relies fundamentally on the alias calculus, a static alias analysis technique developed in previous publications.

The approach is at its inception and considerable work remains to be done. Still, the example handled by the paper is encouraging: analyzing two versions of the dining philosophers problem and proving — manually — that one can deadlock and the other cannot.
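
As a purely illustrative sketch (invented code, not the paper’s example or notation), here is the deadlock-prone shape in SCOOP-style Eiffel. Each philosopher reserves one fork and, while still holding it, tries to reserve the other; two neighbors doing this with the same forks in opposite roles can block each other forever.

class FORK
end

class PHILOSOPHER create make feature

	left, right: separate FORK
			-- The two forks this philosopher needs.

	make (l, r: separate FORK)
			-- Remember the forks.
		do
			left := l
			right := r
		end

	eat
			-- Deadlock-prone scheme: nested reservations, one fork at a time.
		do
			pick_first (left)
		end

feature {NONE} -- Implementation

	pick_first (f: separate FORK)
			-- With the region of f reserved, go on to reserve the other fork.
		do
			pick_second (right)
		end

	pick_second (f: separate FORK)
			-- Both forks are reserved here; this is where eating would happen.
		do
		end

end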

References

[1] Bertrand Meyer: An automatic technique for static deadlock prevention, in PSI 2014 (Ershov Informatics Conference), eds. Irina Virbitskaite and Andrei Voronkov, Lecture Notes in Computer Science, Springer, 2015, to appear; draft available here.


Lampsort

 

In support of his view of software methodology, Leslie Lamport likes to use the example of non-recursive Quicksort. Independently of the methodological arguments, his version of the algorithm should be better known. In fact, if I were teaching “data structures and algorithms” I would consider introducing it first.

As far as I know he has not written down his version in an article, but he has presented it in lectures; see [1]. His trick is to ask the audience to give a non-recursive version of Quicksort, and of course everyone starts trying to remove the recursion, for example by making the stack explicit or looking for invertible functions in calls. But his point is that recursion is not at all fundamental in Quicksort. The recursive version is a specific implementation of a more general idea.

Lamport’s version — let us call it Lampsort — is easy to express in Eiffel. We may assume the following context:

a: ARRAY [G -> COMPARABLE]        — The array to be sorted.
pivot: INTEGER                                      —  Set by partition.
picked: INTEGER_INTERVAL            — Used by the sorting algorithm, see below.
partition (i, j: INTEGER)
……..require      — i..j is a sub-interval of the array’s legal indexes:
……..……..i < j
……..……..i >= a.lower
……..……..j <= a.upper
……..do
……..……..… Usual implementation of partition
……..ensure     — The expected effect of partition:
……..……..pivot >= i
……..……..pivot < j
……..……..a [i..j] has been reshuffled so that elements in i..pivot are less than
……..……..or equal to those in pivot+1 .. j.
……..end

We do not write the implementation of partition since the point of the present discussion is the overall algorithm. In the usual understanding, that algorithm consists of doing nothing if the array has no more than one element, otherwise performing a partition and then recursively calling itself on the two resulting intervals. The implementation can take advantage of parallelism by forking the recursive calls out to different processors. That presentation, says Lamport, describes only a possible implementation. The true Quicksort is more general. The algorithm works on a set not_sorted of integer intervals i..j such that the corresponding array slices a [i..j] are the only ones possibly not sorted; the goal of the algorithm is to make not_sorted empty, since then we know the entire array is sorted. In Eiffel we declare this set as:

not_sorted: SET [INTEGER_INTERVAL]

The algorithm initializes not_sorted to contain a single element, the entire interval; at each iteration, it removes an interval from the set, partitions it if that makes sense (i.e. the interval has more than one element), and inserts the resulting two intervals into the set. It ends when not_sorted is empty. Here it is:

……..from                                 — Initialize interval set to contain a single interval, the array’s entire index range:
……..…..create not_sorted.make_one (a.lower |..| a.upper)
……..invariant
……..…..— See below
……..until
……..…..not_sorted.is_empty                                                          — Stop when there are no more intervals in set.
……..loop
……..…..picked := not_sorted.item                                                   — Pick an interval from (non-empty) interval set.
……..…..if picked.count > 1 then                                                     — (The precondition of partition holds, see below.)
……..…..…..partition (picked.lower, picked.upper)                          — Split, moving small items before & large ones after pivot.
……..…..…..not_sorted.extend (picked.lower |..| pivot)                     — Insert new intervals into the set of intervals: first
……..…..…..not_sorted.extend (pivot + 1 |..| picked.upper)               — and second.
……..…..end
……..…..not_sorted.remove (picked)                                                — Remove interval that was just partitioned.
……..end

Eiffel note: the function yielding an integer interval is declared in the library class INTEGER using the operator |..| (rather than just  ..).

The query item from SET, with the precondition not is_empty,  returns an element of the set. It does not matter which element. In accordance with the Command-Query Separation principle, calling item does not modify the set; to remove the element you have to use the command remove. The command extend adds an element to the set.

The abstract idea behind Lampsort, explaining why it works at all, is the following loop invariant (see [2] for a more general discussion of how invariants provide the basis for understanding loop algorithms). We call “slice” of an array a non-empty contiguous sub-array; for adjacent slices we may talk of concatenation; also, for slices s and t s <= t means that every element of s is less than or equal to every element of t. The invariant is:

a is the concatenation of the members of a set of disjoint slices, called slices, such that:
– The elements of a are a permutation of its original elements.
– The index range of any member  of slices having more than one element is in not_sorted.
– For any adjacent slices s and t (with s before t), s <= t.

The first condition (conservation of the elements modulo permutation) is a property of partition, the only operation that can modify the array. The rest of the invariant is true after initialization (from clause) with slices made of a single slice, the full array. The loop body maintains it since it either removes a one-element interval from not_sorted (slices loses the corresponding slice) or performs partition with the effect of partitioning one slice into two adjacent ones satisfying s <= t, whose intervals replace the original one in not_sorted. On exit, not_sorted is empty, so slices is a set of one-element slices, each less than or equal to the next, ensuring that the array is sorted.

The invariant also ensures that the call to partition satisfies that routine’s precondition.

The Lampsort algorithm is a simple loop; it does not use recursion, but relies on an interesting data structure, a set of intervals. It is not significantly longer or more difficult to understand than the traditional recursive version:

sort (i, j: INTEGER)
……..require
……..……..i <= j
……..……..i >= a.lower
……..……..j <= a.upper
……..do
……..……if j > i then                    — Note that precondition of partition holds.
……..……..…..partition (i, j)         — Split into two slices s and t such that s <= t.
……..……..…..sort (i, pivot)          — Recursively sort first slice.
……..……..…..sort (pivot+1, j)      — Recursively sort second slice.
……..……end
……..end

Lampsort, in its author’s view, captures the true idea of Quicksort; the recursive version, and its parallelized variants, are only examples of possible implementations.

I wrote at the start that the focus of this article is Lampsort as an algorithm, not issues of methodology. Let me, however, give an idea of the underlying methodological debate. Lamport uses this example to emphasize the difference between algorithms and programs, and to criticize the undue attention being devoted to programming languages. He presents Lampsort in a notation which he considers to be at a higher level than programming languages, and it is for him an algorithm rather than a program. Programs will be specific implementations guided in particular by efficiency considerations. One can derive them from higher-level versions (algorithms) through refinement. A refinement process may in particular remove or restrict non-determinism, present in the above version of Lampsort through the query item (whose only official property is that it returns an element of the set).

The worldview underlying the Eiffel method is almost the reverse: treating the whole process of software development as a continuum; unifying the concepts behind activities such as requirements, specification, design, implementation, verification, maintenance and evolution; and working to resolve the remaining differences, rather than magnifying them. Anyone who has worked in both specification and programming knows how similar the issues are. Formal specification languages look remarkably like programming languages; to be usable for significant applications they must meet the same challenges: defining a coherent type system, supporting abstraction, providing good syntax (clear to human readers and parsable by tools), specifying the semantics, offering modular structures, allowing evolution while ensuring compatibility. The same kinds of ideas, such as an object-oriented structure, help on both sides. Eiffel as a language is the notation that attempts to support this seamless, continuous process, providing tools to express both abstract specifications and detailed implementations. One of the principal arguments for this approach is that it supports change and reuse. If everything could be fixed from the start, maybe it could be acceptable to switch notations between specification and implementation. But in practice specifications change and programs change, and a seamless process relying on a single notation makes it possible to go back and forth between levels of abstraction without having to perform repeated translations between levels. (This problem of change is, in my experience, the biggest obstacle to refinement-based approaches. I have never seen a convincing description of how one can accommodate specification changes in such a framework without repeating the whole process. Inheritance, by the way, addresses this matter much better.)

The example of Lampsort in Eiffel suggests that a good language, equipped with the right abstraction mechanisms, can be effective at describing not only final implementations but also abstract algorithms. It does not hurt, of course, that these abstract descriptions can also be executable, at the possible price of non-optimal performance. The transformation to an optimal version can happen entirely within the same method and language.

Quite apart from these discussions of software engineering methodology, Lamport’s elegant version of Quicksort deserves to be known widely.
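
For readers who want to execute the two versions above, here is one conventional way to implement partition: a sketch only, using a Hoare-style scheme with a [i] as the pivot value; it is not Lamport’s code, and any implementation satisfying the postcondition stated earlier would do.

partition (i, j: INTEGER)
		-- Possible implementation, shown only to make the examples runnable.
	require
		i < j
		i >= a.lower
		j <= a.upper
	local
		p, swapped: G
		low, high: INTEGER
		done: BOOLEAN
	do
		from
			p := a [i]
			low := i - 1
			high := j + 1
		until
			done
		loop
				-- Advance low to the next element not less than the pivot value:
			from low := low + 1 until a [low] >= p loop low := low + 1 end
				-- Move high back to the previous element not greater than the pivot value:
			from high := high - 1 until a [high] <= p loop high := high - 1 end
			if low >= high then
				pivot := high
				done := True
			else
				swapped := a [low]
				a [low] := a [high]
				a [high] := swapped
			end
		end
	end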

References

[1] Lamport video here, segment starting at 0:32:34.
[2] Carlo Furia, Bertrand Meyer and Sergey Velder: Loop invariants: Analysis, Classification and Examples, in ACM Computing Surveys, September 2014, preliminary text here.


New MOOC opens Tuesday

Our online course Computing: Art, Magic, Science, available from EdX, opens this Tuesday (tomorrow, 30 September) at 9 AM Zurich time (and at this time in your area).

An earlier article on this blog described the course, which integrates ten years of experience teaching introductory programming at ETH, and takes advantage of remote-compilation and remote-execution technology from our distributed development research.

You can find the course here.


The Eiffel Documentation Drive

EiffelStudio releases are semi-annual, end of May and end of November. Release 14-05 just came out. The next release (14-11) is entirely devoted to documentation. We are hoping for extensive community involvement in this first-time Eiffel Documentation Drive.

Many people regularly comment that there is not enough Eiffel and EiffelStudio documentation, and some of what exists is not good enough. We have decided to tackle the problem seriously, hence the dedication of an entire release cycle to documentation. The term is taken here in a broad sense: “documentation” means what is at http://docs.eiffel.com, but also everything else that can help understand Eiffel, for example updating Wikipedia entries on topics for which Eiffel has something to offer.

Anyone with an understanding of an Eiffel-related topic can help. We particularly need help from two (non-disjoint) categories of contributors:

  • Those with a good understanding of one or more Eiffel-related topics.
  • Those with good writing skills.

The process will involve reviewing, so if you are an Eiffelist with moderate taste for writing, or a good writer with incomplete knowledge of Eiffel, we need your help anyway; someone else will compensate for the missing side. In particular, a common criticism is that some of the documentation was written by developers who do not have English as their mother tongue; if you can help improve it everyone will benefit. Of course if you are good at both technology and writing it’s even better.

We are mentioning English because it is the first target, but documentation in other languages, either original or a translation of existing English pages, is needed too.

Here is how the Eiffel Documentation Drive works:

  • Here you will find a form to report missing or unsatisfactory documentation. Please fill it on every applicable occasion.
  • The entries will be read by a member of the Eiffel Software team, who in applicable cases will add a row to the Eiffel Documentation Drive spreadsheet here. You can not only read that spreadsheet but also edit it yourself, so as to keep it as accurate and up-to-date as possible.
  • An email will be sent to the user list, with “Eiffel Documentation Drive” in the header (so that people not interested in the topic can filter them out), requesting help.
  • Those willing to help can enter their names in the corresponding row, indicating a planned date of completion.

Each row includes among its fields the following: topic, link to existing documentation, volunteer writer(s), planned completion, volunteer reviewer(s).

The full Eiffel Software team will participate – as noted above, improving the documentation is the strategic goal for the release – but we hope for considerable community participation. Please help make EiffelStudio documentation shine as much as the environment itself.


Programming language features

 

InfoWorld is currently publishing a series of programming language assessments:

  • 9 Things We Hate About Objective-C, 4 June.
  • 15 Things We Hate About Java, 6 March.
  • 10 Features Apple Stole for the Swift Programming Language, 9 June.

Notable in these articles is what they do not mention: Eiffel has most of what the author misses in Objective-C and Java; and most of what Swift “stole” it stole from Eiffel.

In this article let us concentrate on the nine Objective-C complaints, by Peter Wayner [1]; subsequent articles will examine the Java “hates” and the Swift “steals”.

Criticism 1: “It is a little too different

“Objective-C lovers tout that Objective-C is a strict superset of C: If you can do it in C, you should be able to do it in Objective-C. But it doesn’t go the other way, so you’re stuck wondering, “Should I use an Objective-C method description or a C one?” Achieving portability to C programs requires constant vigilance and forethought.”

This is what happens when you mix language paradigms. Eiffel has a close relationship with C, but the two sides are clearly separated. You can call C from Eiffel, and the other way around. You can declare an Eiffel routine as “external C” and even include the C code inline: in other words an Eiffel “method description” can have a C implementation. The structure is always object-oriented (no need to fear that a novice programmer will revert to a C style for the design) but for access to low-level system mechanisms and small functions that should be optimized to the byte and microsecond you use C directly, in its ideal role.
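
As a hedged sketch of the mechanism (the exact form of the external clause has varied slightly between EiffelStudio versions), a routine can present an ordinary Eiffel interface while delegating its implementation to inline C:

c_square_root (x: REAL_64): REAL_64
		-- Square root of x, computed by the C library.
	external
		"C inline use <math.h>"
	alias
		"return sqrt ($x);"
	end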

Criticism 2: “It’s still mostly just plain old C

“For all its object-oriented coolness, you don’t get much else from Objective-C. It’s more of a way to organize your code for large systems than a way to write better code. You’re still responsible for pointers. You’re still responsible for keeping track of memory.

Eiffel is object-oriented all the way. You are not “responsible for pointers”. References are tame: no pointer arithmetic. You are not “responsible for keeping track of memory”: objects are garbage-collected.

“The C programmers loved to call their software a ‘portable assembly code’, and the same is true for Objective-C … except it’s only portable from the Mac to the iPad.”

“Portable assembly code” is exactly what C provides, and hence an excellent target for an Eiffel compiler. As to Eiffel, it runs on all platforms, from Windows to Linux to Solaris to VMS to the Mac.

Criticism 3: “Stuck in the ’80s

“Parachute pants, big hair, ‘The Breakfast Club’ — and the NeXT machine: Objective-C is like a time machine in programming-language land.”

Eiffel has undergone constant evolution, innovating on all fronts of programming constructs and integrating the best of known techniques.

“The primitives aren’t first-class citizens. Garbage collection, that wonderful idea that sustained Lisp, was adopted by Java ages ago. Objective-C got it in 2006. The same goes for properties and closures.”

All this has been in Eiffel forever. Agents (closures) were introduced in 1999, long before Java, C# and other OO languages had anything of the sort. Eiffel’s assigner commands are vastly superior to properties (no need to write all these boring getter functions).
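
For readers who have not seen assigner commands, here is a tiny invented sketch (not from the article): the attribute remains encapsulated, clients get assignment syntax, and no getter boilerplate is needed.

class THERMOSTAT feature

	celsius: REAL_64 assign set_celsius
			-- Target temperature; clients may write t.celsius := 21.5,
			-- which the compiler translates into t.set_celsius (21.5).

	set_celsius (a_value: REAL_64)
			-- Set celsius to a_value.
		do
			celsius := a_value
		ensure
			set: celsius = a_value
		end

end

A client can also create a closure from the setter, agent t.set_celsius, and pass it to other components to be invoked later through its call feature.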

 Criticism 4: “Punctuation

“The cool modern kids writing Python, Ruby, and CoffeeScript can craft billion-dollar companies without using brackets, braces, and parentheses. You’ll be wearing out your punctuation keys writing Objective-C. Colons, at-signs, asterisks? Is there any character that the language doesn’t use?”

Come on. How can one be so misinformed? The semicolon has been optional in Eiffel for fifteen years. The high-priest style of C, Objective-C, Java, C# and so many others, with its piling up of strange symbols, is something that Eiffel users never had to suffer.

Criticism 5: “Modern syntax

Not modern syntax, that is:

“Objective-C’s syntax is like Coke: They tried to modernize it in the ’90s, but it never stuck.”

Eiffel’s syntax is clear and simple. Total beginners, including high-school students, pick it up just as easily and naturally as advanced programmers, and as application experts who want to concentrate on their problem, not on learning strange language conventions going back to the nineteen-sixties.

Criticism 6: “No namespaces

Here Eiffel does not provide what the journalist wants: it is “post-namespaces” (as in “postmodern”). The Eiffel community has decided that the complexity of namespaces was not worth the trouble (what happens when you move packages around?) and prefers simple mechanisms for resolving class name clashes.

Criticism 7: “It only runs in Apple’s corner of the universe

” Variety is the spice of life. It’s even more important in a world where not everything is an iPhone. If a Windows or Linux shop recruits you, you can forget all of those extra Objective-C extensions you learned because they’ll be of no use.”

Eiffel is not tied to any manufacturer, computer architecture or operating system. If a new processor comes out, or a user needs an exotic platform, a port can usually be produced in a matter of hours. The compiler and the entire environment to which it belongs, EiffelStudio, are written in Eiffel; the supporting runtime is in a highly portable form of C, which requires very little customization, if any, for a new platform. (Here “the compiler” means the Eiffel Software implementation, but other implementations also put a strong emphasis on portability.)

Criticism 8: “XCode is your only choice

“In the Objective-C world, you get really only one choice. Why do you need to be different, comrade?”

Besides EiffelStudio other compilers and tools are available for Eiffel.

Criticism 9: “Apple’s benevolent dictatorship

“Do you want to give out more than 100 copies of your iPhone app? Forget it. Do you want to “think different” with your UI? Please go back and read the user interface guidelines. You can’t do anything without Apple’s permission because Apple uses strong crypto to lock down everything — and fanatically tyrannical policies to lock down the rest.”

The Eiffel language definition is steered by a standards committee under Ecma (the organization behind many of the major standards in IT), which anyone can join. EiffelStudio itself is available in open source. The Eiffel world knows nothing like the close control Apple exerts over its product; it welcomes all contributors.

Maybe someone should talk to Mr. Wayner and help him broaden his scope of programming language knowledge.

References

[1] Peter Wayner, 9 Things We Hate About Objective-C, InfoWorld, 4 June 2014, available here.


Attached by default?

 

Opinions requested! See at end.

A void call, during the execution of an object-oriented program, is a call of the standard OO form

x.some_routine (…)                                                /CALL/

where x, a reference, happens to be void (null) instead of denoting, as expected, an object. The operation is not possible; it leads to an exception and, usually, a crash of the program. Void calls are also called “null pointer dereferencing”.

One of the major advances in Eiffel over the past years has been the introduction of attached types, entirely removing the risk of void calls. The language mechanisms, extending the type system, make void-call avoidance a static property, part of type checking: just as the compiler will prevent you from assigning a boolean value to an integer variable, so will it flag your program if it sees a risk of void call. Put the other way around, if your program passes compilation, you have the guarantee that its executions will never produce a void call. Attached types thus remove one of the major headaches of programming, what Tony Hoare [1] called his “one-billion-dollar mistake”:

I call it my billion-dollar mistake. It was the invention of the null reference in 1965. At that time, I was designing the first comprehensive type system for references in an object oriented language (ALGOL W) [2]. My goal was to ensure that all use of references should be absolutely safe, with checking performed automatically by the compiler. But I couldn’t resist the temptation to put in a null reference, simply because it was so easy to implement. This has led to innumerable errors, vulnerabilities, and system crashes, which have probably caused a billion dollars of pain and damage in the last forty years.

Thanks to attached types, Eiffel programmers can sleep at night: their programs will not encounter void calls.

To benefit from this advance, you must declare variables accordingly, as either attached (never void after initialization) or detachable (possibly void). You must also write the program properly:

  • If you declare x attached, you must ensure in the rest of the program that before its first use x will have been attached to an object, for example through a creation instruction create x.
  • If you declare x detachable, you must make sure that any call of the above form /CALL/ happens in a context where x is guaranteed to be non-void; for example, you could protect it by a test if x /= Void then or, better, an “object test”.

Code satisfying these properties is called void-safe.
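
A small invented sketch showing both kinds of declaration and the use of an object test (assuming the void-safety option is on, so that plain declarations mean attached):

class PERSON create make feature

	name: STRING
			-- The person's name.

	make (a_name: STRING)
			-- Set name to a_name.
		do
			name := a_name
		end

end

class REGISTRY create make feature

	owner: PERSON
			-- Attached (the default): guaranteed non-void after creation.

	assistant: detachable PERSON
			-- May be void; every use must be protected.

	make (p: PERSON)
			-- Record the owner; leave assistant unset (void) for now.
		do
			owner := p
		end

	print_names
			-- Print the owner's name, and the assistant's if one is set.
		do
			print (owner.name)
				-- Always safe: owner is statically known to be attached.
			if attached assistant as a then
					-- Within this branch, a is attached.
				print (a.name)
			end
		end

end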

Void safety is the way to go: who wants to worry about programs, even after they have been thoroughly tested and have seemingly worked for a while, crashing at unpredictable times? The absence of null-pointer dereferencing can be a statically enforced property, as the experience of Eiffel now demonstrates; and that is what it should be. One day, children will think void-safely from the most tender age, and their great-grandparents will tell them, around the fireplace during long and scary winter nights, about the old days when not everyone was programming in Eiffel and even those who did were worried about the sudden null-pointer-dereferencing syndrome. To get void safety through ordinary x: PERSON declarations, you had (children, hold your breath) to turn on a compiler option!

The transition to void safety was neither fast nor easy; in fact, it has taken almost ten years. Not everyone was convinced from the beginning, and we have had to improve and simplify the mechanism along the way to make void-safe programming practical. Compatibility has been a key issue throughout: older classes are generally not void-safe, but in a language that has been around for many years and has a large code base of operational software it is essential to ensure a smooth transition. Void safety has, from its introduction, been controlled by a compiler option:

  • With the option off, old code will compile as it used to do, but you do not get any guarantee of void safety. At execution time, a void call can still cause your program to go berserk.
  • With the option on, you get the guarantee: no void calls. To achieve this goal, you have to make sure the classes obey the void safety rules; if they do not, the compiler will reject them until you fix the problem.

In the effort to reconcile the compatibility imperative with the inexorable evolution to void safety, the key decisions have affected default values for compiler options and language conventions. Three separate decisions, in fact. Two of the defaults have already been switched; the question asked at the end of this article addresses the switching of the last remaining one.

The first default governed the void-safety compiler option. On its introduction, void-safety was off by default; the mechanism had to be turned on explicitly, part of the “experimental” option that most EiffelStudio releases offer for new, tentative mechanisms. That particular decision changed a year ago, with version 7.3 (May 2013): now void safety is the default. To include non-void-safe code you must mark  it explicitly.

The second default affects a language convention: the meaning of a standard declaration. A typical declaration, such as

x: PERSON                                                                                      /A/

says that at run time x denotes a reference which, if not void, will be attached to an object of type PERSON.  In pre-void-safety Eiffel, as in today’s other typed OO languages,  the reference could occasionally become void at run time; in other words, x was detachable. With the introduction of void safety, you could emphasize this property by specifying it explicitly:

x: detachable PERSON                                                             /B/

You could also specify that x would never be void by declaring it attached, asking the compiler to guarantee this property for you (through its application of the void-safety rules to all operations involving x). The explicit form in this case is

x: attached PERSON                                                               /C/

In practical programming, of course, you do not want to specify attached or detachable all the time: you want to use the simple form /A/ as often as possible. Originally, since we were starting from a non-void-safe language, compatibility required /A/ to mean /B/ by default. But it turns out that “attached” really is the dominant case: most references should remain attached at all times and Void values should be reserved for important but highly specialized cases such as terminating linked data structures. So the simple form should, in the final state of the language, mean /C/. That particular default was indeed switched early (version 7.0, November 2011) for people using the void-safety compiler option. As a result, the attached keyword is no longer necessary for declarations such as the above, although it remains available. Everything is attached by default; when you want a reference that could be void (and are prepared to bear the responsibility for convincing the compiler that it won’t when you actually use it in a call), you declare it as detachable; that keyword remains necessary.

There remains one last step in the march to all-aboard-for-void-safety: removing the “detachable by default” option, that is to say, the compiler option that will make /A/ mean /B/ (rather than /C/). It is only an option, and not the default; but still it remains available. Do we truly need it? The argument for removing it  is that it simplifies the specification (the fewer options the better) and encourages everyone, even more than before, to move to the new world. The argument against is to avoid disturbing existing projects, including their compiler control files (ECFs).

The question looms: when do we switch the defaults? Some of us think the time is now; specifically, the November release (14.11) [4].

Do you think the option should go? We would like your opinion. Please participate in the Eiffelroom poll [5].

 

References and note

[1] C.A.R. Hoare: Null References: The Billion Dollar Mistake , abstract of talk at QCon London, 9-12 March 2009, available here.

[2] (BM note) As a consolation, before Algol W, LISP already had NIL, which is the null pointer.

[3] Bertrand Meyer, Alexander Kogtenkov and Emmanuel Stapf: Avoid a Void: The Eradication of Null Dereferencing, in Reflections on the Work of C.A.R. Hoare, eds. C. B. Jones, A.W. Roscoe and K.R. Wood, Springer-Verlag, 2010, pages 189-211, available here.

[4] EiffelStudio version numbering changed in 2014: from a classic major_number.minor_number to a plain year.month, with two principal releases, 5 and 11 (May and November).

[5] Poll on switching the attachment defaults: at the bottom of the Eiffelroom page here (direct access here).


New article: contracts in practice

For almost anyone programming in Eiffel, contracts are just a standard part of daily life; Patrice Chalin’s pioneering study of a few years ago [1] confirmed this impression. A larger empirical study is now available to understand how developers actually use contracts when available. The study, to be published at FM 2014 [2], covers 21 programs, not just in Eiffel but also in JML and in Code Contracts for C#, totaling 830,000 lines of code, and following the programs’ revision histories for a grand total of 260 million lines of code over 7700 revisions. It analyzes in detail whether programmers use contracts, how they use them (in particular, which kinds, among preconditions, postconditions and invariants), how contracts evolve over time, and how inheritance interacts with contracts.

The paper is easy to read so I will refer you to it for the detailed conclusions, but one thing is clear: anyone who thinks contracts are for special development or special developers is completely off-track. In an environment supporting contracts, especially as a native part of the language, programmers understand their benefits and apply them as a matter of course.

References

[1] Patrice Chalin: Are practitioners writing contracts?, in Fault-Tolerant System, eds. Butler, Jones, Romanovsky, Troubitsyna, Springer LNCS, vol. 4157, pp. 100–113, 2006.

[2] H.-Christian Estler, Carlo A. Furia, Martin Nordio, Marco Piccioni and Bertrand Meyer: Contracts in Practice, to appear in proceedings of 19th International Symposium on Formal Methods (FM 2014), Singapore, May 2014, draft available here.


Eiffel as an expression language

A functional-programming style, or more generally a style involving more expressions and fewer instructions, is possible in Eiffel. In particular, Eiffel’s agent mechanism embeds a full functional-programming mechanism in the object-oriented framework of the language.

To make the notations simpler, we are discussing and tentatively implementing a number of proposed extensions. They involve no fundamental new language mechanisms, but provide new, more concise notations for existing mechanisms. Examples (a small combined sketch appears after this list) include:

  • Conditional expressions.
  • Implicit tuple, a rule allowing the omission of brackets for an actual argument when it is a tuple and the last argument, e.g. f (x, y, z) as an abbreviation for f ([x, y, z]) (an example involving just one argument). Tuples already provided the equivalent of a variable-argument (“varargs”) facility, but it is made simpler to use with this convention.
  • Parenthesis alias, making it possible to write just f (x, y) when f is an agent (closure, lambda expression, delegate etc. in other terminologies), i.e. treating f as if it were a function; the notation is simply an abbreviation for f.item ([x, y]) (an example that also takes advantage of implicit tuples). It has many other applications since a “parenthesis alias” can be defined for a feature of any class.
  • Avoiding explicit assignments to Result.
  • Type inference (to avoid explicitly specifying the type when it can be deduced from the context). This is a facility for the programmer, useful in particular for local variables, but does not affect the type system: Eiffel remains strongly typed, it is just that you can be lazy about writing the type when there is no ambiguity.
  • In the same vein, omitting the entire list of generic parameters when it can be inferred.
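
A schematic sketch of how some of these notations might combine; the syntax is tentative, taken from the working document cited in [1], the example is invented, and the FUNCTION type is written in a shortened form for readability:

abs (x: INTEGER): INTEGER
		-- Absolute value, using a conditional expression rather than an instruction.
	do
		Result := if x >= 0 then x else - x end
	end

sum (f: FUNCTION [INTEGER, INTEGER]; n: INTEGER): INTEGER
		-- f (1) + f (2) + ... + f (n).
	do
		across 1 |..| n as i loop
			Result := Result + f (i.item)
				-- Parenthesis alias: abbreviation for f.item ([i.item]),
				-- relying also on the implicit-tuple rule.
		end
	end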

The description of the mechanism (see the link in [1]) is in the form of a set of slides explaining the concepts and presenting examples. This is a working document and feedback is welcome.

References

[1] Eiffel as an expression language, Eiffel Software working document, 2012-2014, see here.


Negative variables: new version

I have mentioned this paper before (see the earlier blog entry here) but it is now going to be published [1] and has been significantly revised, both to take referee comments into account and because we found better ways to present the concepts.

We have endeavored to explain better than in the draft why the concept of negative variable is necessary and why the usual techniques for modeling object-oriented programs do not work properly for the fundamental OO operation, qualified call x.r (…). These techniques are based on substitution and are simply unable to express certain properties (let alone verify them). The affected properties are those involving the calling context or the global project structure.

The basic idea (repeated in part from the earlier post) is as follows. In modeling OO programs, we have to take into account the unique “general relativity” property of OO programming: all the operations you write are expressed relative to a “current object” which changes repeatedly during execution. More precisely at the start of a call x.r (…) and for the duration of that call the current object changes to whatever x denotes — but to determine that object we must again interpret x in the context of the previous current object. This raises a challenge for reasoning about programs; for example in a routine the notation f.some_reference, if f is a formal argument, refers to objects in the context of the calling object, and we cannot apply standard rules of substitution as in the non-OO style of handling calls.

We introduced a notion of negative variable to deal with this issue. During the execution of a call x.r (…) the negation of x, written x’, represents a back pointer to the calling object; negative variables are characterized by axiomatic properties such as x.x’ = Current and x’.(old x) = Current.

Negative variable as back pointer

The paper explains why this concept is necessary, describes the associated formal rules, and presents applications.

Reference

[1] Bertrand Meyer and Alexander Kogtenkov: Negative Variables and the Essence of Object-Oriented Programming, in Specification, Algebra, and Software, eds. Shusaku Iida, Jose Meseguer and Kazuhiro Ogata, Springer Lecture Notes in Computer Science, 2014, to appear. See text here.
