Archive for the ‘Concurrency’ Category.

The French School of Programming

July 14 (still here for 15 minutes) is not a bad opportunity to announce the publication of a new book: The French School of Programming.

The book is a collection of chapters, thirteen of them, by rock stars of programming and software engineering research (plus me), preceded by a Foreword by Jim Woodcock and a Preface by me. The chapters are all by a single author, reflecting the importance that the authors attached to the project. Split into four parts after chapter 1, the chapters are, in order:

1. The French School of Programming: A Personal View, by Gérard Berry (serving as a general presentation of the subsequent chapters).

Part I: Software Engineering

2. “Testing Can Be Formal Too”: 30 Years Later, by Marie-Claude Gaudel

3. A Short Visit to Distributed Computing Where Simplicity Is Considered a First-Class Property, by Michel Raynal

4. Modeling: From CASE Tools to SLE and Machine Learning, by Jean-Marc Jézéquel

5. At the Confluence of Software Engineering and Human-Computer Interaction: A Personal Account, by Joëlle Coutaz

Part II: Programming Language Mechanisms and Type Systems

6. From Procedures, Objects, Actors, Components, Services, to Agents, by Jean-Pierre Briot

7. Semantics and Syntax, Between Computer Science and Mathematics, by Pierre-Louis Curien

8. Some Remarks About Dependent Type Theory, by Thierry Coquand

Part III: Theory

9. A Personal Historical Perspective on Abstract Interpretation, by Patrick Cousot

10. Tracking Redexes in the Lambda Calculus, by Jean-Jacques Lévy

11. Confluence of Terminating Rewriting Computations, by Jean-Pierre Jouannaud

Part IV: Language Design and Programming Methodology

12. Programming with Union, Intersection, and Negation Types, by Giuseppe Castagna

13. Right and Wrong: Ten Choices in Language Design, by Bertrand Meyer

What is the “French School of Programming”? As discussed in the Preface (although Jim Woodcock’s Foreword does not entirely agree) it is not anything defined in a formal sense, as the variety of approaches covered in the book amply demonstrates. What could be more different (for example) than Coq, OCaml (extensively referenced by several chapters) and Eiffel? Beyond the differences, however, there is a certain je ne sais quoi of commonality; to some extent, in fact, je sais quoi: reliance on mathematical principles, a constant quest for simplicity, a taste for elegance. It will be for the readers to judge.

Being sole authors of their chapters, the authors felt free to share some of their deepest insights and thoughts. See for example Thierry Coquand’s discussion of the concepts that led to the widely successful Coq proof system, Marie-Claude Gaudel’s new look at her seminal testing work of 30 years ago, and Patrick Cousot’s detailed recounting of the intellectual path that led him and Radhia to invent abstract interpretation.


The French School of Programming
Edited by Bertrand Meyer
Springer, 2024. xxiv + 439 pages

Book page on Springer site
Amazon US page
Amazon France page
Amazon Germany page

The book is expensive (I tried hard to do something about it, and failed). But many readers should be able to download it, or individual chapters, for free through their institutions.

It was a privilege for me to take this project to completion and work with such extraordinary authors who produced such a collection of gems.

OOSC-2 available online (officially)

My book Object-Oriented Software Construction, 2nd edition (see the Wikipedia page) has become hard to get. There are various copies floating around the Web but they often use bad typography (wrong colors) and are unauthorized.

In response to numerous requests and in anticipation of the third edition I have been able to make it available electronically (with the explicit permission of the original publisher).

You can find the link on another page on this site. (In sharing or linking please use that page, not the URL of the actual PDF which might change.)

I hope having the text freely available proves useful.

 

Some contributions

Science progresses through people taking advantage of others’ insights and inventions. One of the conditions that makes the game possible is that you acknowledge what you take. For the originator, it is rewarding to see one’s ideas reused, but frustrating when that happens without acknowledgment, especially when you are yourself punctilious about citing your own sources of inspiration.

I have started to record some concepts that are widely known and applied today and which I believe I originated in whole or in part, whether or not their origin is cited by those who took them. The list below is not complete and I may update it in the future. It is not a list of ideas I contributed, only of those fulfilling two criteria:

  • Others have built upon them.  (If there is an idea that I think is great but no one paid attention to it, the list does not include it.)
  • They have gained wide visibility.

There is a narcissistic aspect to this exercise and if people want to dismiss it as just showing I am full of myself so be it. I am just a little tired of being given papers to referee that state that genericity was invented by Java, that no one ever thought of refactoring before agile methods, and so on. It is finally time to state some facts.

Facts indeed: I back every assertion by precise references. So if I am wrong — i.e. someone preceded me — the claims of precedence can be refuted; if so I will update or remove them. All articles by me cited in this note are available (as downloadable PDFs) on my publication page. (The page is up to date until 2018; I am in the process of adding newer publications.)

Post-publication note: I have started to receive some comments and added them in a Notes section at the end; references to those notes are in the format [A].

Final disclaimer (about the narcissistic aspect): the exercise of collecting such information was new for me, as I do not usually spend time reflecting on the past. I am much more interested in the future and definitely hope that my next contributions will eclipse any of the ones listed below.

Programming concepts: substitution principle

Far from me any wish to under-represent the seminal contributions of Barbara Liskov, particularly her invention of the concept of abstract data type on which so much relies. As far as I can tell, however, what has come to be known as the “Liskov Substitution Principle” is essentially contained in the discussion of polymorphism in section 10.1 of the first edition (Prentice Hall, 1988) of my book Object-Oriented Software Construction (hereafter OOSC1); for example, “the type compatibility rule implies that the dynamic type is always a descendant of the static type” (10.1.7) and “if B inherits from A, the set of objects that can be associated at run time with an entity [generalization of variable] includes instances of B and its descendants”.

Perhaps most tellingly, a key aspect of the substitution principle, as listed for example in the Wikipedia entry, is the rule on assertions: in a proper descendant, keep the invariant, keep or weaken the precondition, keep or strengthen the postcondition. This rule was introduced in OOSC1, over several pages in section 11.1. There is also an extensive discussion in the article Eiffel: Applying the Principles of Object-Oriented Design published in the Journal of Systems and Software, May 1986.
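To make the rule concrete, here is a minimal sketch in Eiffel (the classes and features are invented for illustration, not taken from the cited texts). A redefined routine may only weaken its precondition, through require else, and strengthen its postcondition, through ensure then; any invariant clauses of the parent are inherited and may only be added to:

   class ACCOUNT
   feature
      balance: INTEGER

      withdraw (sum: INTEGER)
            -- Remove `sum' from the balance.
         require
            small_enough: sum <= balance
         do
            balance := balance - sum
         ensure
            withdrawn: balance = old balance - sum
         end
   end

   class CREDIT_ACCOUNT
   inherit
      ACCOUNT
         redefine withdraw end
   feature
      credit_limit: INTEGER

      withdraw (sum: INTEGER)
            -- Remove `sum', possibly drawing on the credit line.
         require else
            within_credit: sum <= balance + credit_limit
               -- Weakened: or-ed with the original precondition.
         do
            balance := balance - sum
         ensure then
            credit_respected: balance >= - credit_limit
               -- Strengthened: and-ed with the original postcondition.
         end
   end

With these rules an instance of CREDIT_ACCOUNT can safely be used wherever an ACCOUNT is expected: any call that satisfies the original contract also satisfies the new one.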

The original 1988 Liskov article cited (for example) in the Wikipedia entry on the substitution principle says nothing about this and does not in fact include any of the terms “assertion”, “precondition”, “postcondition” or “invariant”. To me this absence means that the article misses a key property of substitution: that the abstract semantics remain the same. (Also cited is a 1994 Liskov article in TOPLAS, but that was many years after OOSC1 and other articles explaining substitution and the assertion rules.)

Liskov’s original paper states that “if for each object o1 of type S there is an object o2 of type T such that for all programs P defined in terms of T, the behavior of P is unchanged when o1 is substituted for o2, then S is a subtype of T.” As stated, this property is impossible to satisfy: if the behavior is identical, then the implementations are the same, and the two types are identical (or differ only by name). Of course the concrete behaviors are different: applying the operation rotate to two different figures o1 and o2, whose types are subtypes of FIGURE and in some cases of each other, will trigger different algorithms — different behaviors. Only with assertions (contracts) does the substitution idea make sense: the abstract behavior, as characterized by preconditions, postconditions and class invariants, is the same (modulo the respective weakening and strengthening that preserve the flexibility of the different versions). Realizing this was a major step in understanding inheritance and typing.

I do not know of any earlier (or contemporary) exposition of this principle, and it would seem normal for that work to receive the appropriate recognition.

Software design: design patterns

Two of the important patterns in the “Gang of Four” Design Patterns book (GoF) by Gamma et al. (1995) are the Command Pattern and the Bridge Pattern. I introduced them (under different names) in the following publications:

  • The command pattern appears in OOSC1 under the name “Undo-Redo” in section 12.2. The solution is essentially the same as in GoF. I do not know of any earlier exposition of the technique. See also notes [B] and [C].
  • The bridge pattern appears under the name “handle technique” in my book Reusable Software: The Base Component Libraries (Prentice Hall, 1994). It had been described several years earlier in manuals for Eiffel libraries. I do not know of an earlier reference. (The second edition of Object-Oriented Software Construction (Prentice Hall, 1997, “OOSC2”), which also describes it, states that a similar technique is described in an article by Josef Gil and Ricardo Szmit at the TOOLS USA conference in the summer of 1994, i.e. after the publication of Reusable Software.)

Note that it is pointless to claim precedence over GoF, since that book explicitly states that it is collecting known “best practices”, not introducing new ones. The relevant questions are: who, pre-GoF, introduced each of these techniques first; and which publications the GoF book cites as “prior art” for each pattern. In the cases at hand, Command and Bridge, it does not cite OOSC1.

To be concrete: unless someone can point to an earlier reference, anytime anyone anywhere using an interactive system enters a few “CTRL-Z” to undo commands, possibly followed by some “CTRL-Y” to redo them (or uses other UI conventions to achieve these goals), the software is most likely relying on a technique that I first described in the place mentioned above.
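For readers who want to see the shape of the scheme, here is a bare-bones Eiffel sketch in the spirit of the OOSC1 presentation (class and feature names are mine; the book's own example is more complete):

   deferred class COMMAND
   feature
      execute
            -- Carry out the command.
         deferred end

      undo
            -- Cancel the effect of `execute'.
         deferred end
   end

   class APPEND_CHARACTER
         -- One concrete command: append a character to a text buffer.
   inherit
      COMMAND
   create
      make
   feature
      buffer: STRING
      char: CHARACTER

      make (b: STRING; c: CHARACTER)
            -- Record the target buffer and the character to append.
         do
            buffer := b
            char := c
         end

      execute
         do
            buffer.append_character (char)
         end

      undo
         do
            buffer.remove_tail (1)
         end
   end

An interactive system then keeps a history list of COMMAND objects (for example an ARRAYED_LIST [COMMAND] with a cursor); CTRL-Z moves the cursor back and calls undo, CTRL-Y moves it forward and calls execute again.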

Software design: Open-Closed Principle

Another contribution of OOSC1 (1988), section 2.3, reinforced in OOSC2 (1997), is the Open-Closed principle, which explained one of the key aspects of inheritance: the ability to keep a module both closed (immediately usable as is) and open to extension (through inheritance, preserving the basic semantics). I am mentioning this idea only in passing since in this case my contribution is usually recognized, for example in the Wikipedia entry.
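A minimal Eiffel sketch of the principle (hypothetical classes, in the spirit of the book's polygon example): POLYGON is closed, compiled and usable by clients as it stands, yet open, since a descendant can extend or adapt it without touching its text.

   class POLYGON
   feature
      vertex_count: INTEGER

      perimeter: REAL_64
            -- Sum of edge lengths (general computation, omitted here).
         do
         end
   end

   class RECTANGLE
   inherit
      POLYGON
         redefine perimeter end
   feature
      side1, side2: REAL_64

      perimeter: REAL_64
            -- Faster version, valid for rectangles only.
         do
            Result := 2.0 * (side1 + side2)
         end
   end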

Software design: OO for reuse

Reusability: the Case for Object-Oriented Design (1987) is, I believe, the first publication that clearly explained why object-oriented concepts were (and still are today — in Grady Booch’s words, “there is no other game in town”) the best answer to realize the goal of software construction from software components. In particular, the article:

  • Explains the relationship between abstract data types and OO programming, showing the former as the theoretical basis for the latter. (The CLU language at MIT originated from Liskov’s pioneering work on abstract data types, but was not OO in the full sense of the term, missing in particular a concept of inheritance.)
  • Shows that reusability implies bottom-up development. (Top-down refinement was the mantra at the time, and promoting bottom-up was quite a shock for many people.)
  • Explains the role of inheritance for reuse, as a complement to Parnas’s interface-based modular construction with information hiding.

Software design: Design by Contract

The contribution of Design by Contract is one that is widely acknowledged so I don’t have any point to establish here — I will just recall the essentials. The notion of assertion goes back to the work of Floyd, Hoare and Dijkstra in the sixties and seventies, and correctness-by-construction to Dijkstra, Gries and Wirth, but Design by Contract is a comprehensive framework providing:

  • The use of assertions in an object-oriented context. (The notion of class invariant was mentioned in a paper by Tony Hoare published back in 1972.)
  • The connection of inheritance with assertions (as sketched above). That part as far as I know was entirely new.
  • A design methodology for quality software: the core of DbC.
  • Language constructs carefully seamed into the fabric of the language. (There were precedents there, but in the form of research languages such as Alphard, a paper design only, not implemented, and Euclid.)
  • A documentation methodology.
  • Support for testing.
  • Support for a consistent theory of exception handling (see next).

Design by Contract is sometimes taken to mean simply the addition of a few assertions here and there. What the term actually denotes is a comprehensive methodology with all the above components, tightly integrated into the programming language. Note in particular that preconditions and postconditions are not sufficient; in an OO context class invariants are essential.
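As a reminder of what the integrated form looks like, here is a minimal contracted class in Eiffel (a generic textbook-style example, not taken from any of the publications cited): the precondition binds the client, the postcondition binds the implementation, and the invariant must hold after creation and after every qualified call.

   class COUNTER
   feature
      value: INTEGER

      increase (step: INTEGER)
            -- Add `step' to `value'.
         require
            step_positive: step > 0
         do
            value := value + step
         ensure
            increased: value = old value + step
         end

      reset
            -- Bring `value' back to zero.
         do
            value := 0
         ensure
            back_to_zero: value = 0
         end

   invariant
      never_negative: value >= 0
   end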

Software design: exceptions

Prior to the Design by Contract work, exceptions were defined very vaguely, as something special you do outside of “normal” cases, but without defining “normal”. Design by Contract brings a proper perspective by defining these concepts precisely: a routine call succeeds if it fulfills the routine’s contract; an exception is an abnormal event that prevents it from doing so, leaving only two reasonable responses, retrying in a different way or failing and passing the problem on to the caller. This was explained in a 1987 article, Disciplined Exceptions ([86] in the list), rejected by ECOOP but circulated as a technical report; the concepts appear again in detail in OOSC1 (sections 7.10.3 to 7.10.5).
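A minimal Eiffel sketch of the resulting discipline (`transmit' and `Max_attempts' are hypothetical names, not from the cited texts):

   attempt_transmission (message: STRING)
         -- Transmit `message', trying at most `Max_attempts' times.
      local
         attempts: INTEGER
      do
         transmit (message)
      rescue
         attempts := attempts + 1
         if attempts < Max_attempts then
            retry
         end
         -- Otherwise the routine fails; the exception is passed on to the caller,
         -- which faces the same choice: retry or fail in turn.
      end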

Other important foundational work on exceptions, to which I know no real precursor (as usual I would be happy to correct any omission), addressed what happens to the outcome of an exception in a concurrent or distributed context. This work was done at ETH, in particular in the PhD theses of B. Morandi and A. Kolesnichenko, co-supervised with S. Nanz. See the co-authored papers [345] and [363].

On the verification aspect of exceptions, see below.

Software design: refactoring

I have never seen a discussion of refactoring that refers to the detailed discussion of generalization in both of the books Reusable Software (1994, chapter 3) and Object Success (Prentice Hall, 1995, from page 122 to the end of chapter 6). These discussions describe in detail how, once a program has been shown to work, it should be subjected to a posteriori design improvements. They present several of the refactoring techniques (as they were called when the idea gained traction several years later), such as moving common elements up in the class hierarchy, and adding an abstract class as parent to concrete classes ex post facto.

These ideas are an integral part of the design methodology presented in these books (and again in OOSC2 a few years later). It is beyond me why people would present refactoring (or its history, as in the Wikipedia entry on the topic) without referring to these publications, which were widely circulated and are available for anyone to inspect.

Software design: built-in documentation and Single-Product principle

Another original contribution was the idea of including documentation in the code itself and relying on tools to extract the documentation-only information (leaving implementation elements aside). The idea, described in detail in OOSC1 in 1988 (sections 9.4 and 9.5) and already mentioned in the earlier Eiffel papers, is that code should be self-complete, containing elements of various levels of abstraction; some of them describe implementation, but the higher-level elements describe specification, and are distinguished syntactically in such a way that tools can extract them to produce documentation at any desired level of abstraction.

The ideas were later applied through such mechanisms as JavaDoc (with no credit as far as I know). They were present in Eiffel from the start, along with the underlying principles, in particular the “Single-Product principle” (sometimes called the “Self-Documentation principle”, and generalized by J. Ostroff and R. Paige as the “Single-Model principle”). Eiffel is the best realization of these principles thanks to:

  • Contracts (as mentioned above): the “contract view” of a class (called “short form” in earlier descriptions) removes the implementations but shows the relevant preconditions, postconditions and class invariants, giving a precise and abstract specification of the class (see the sketch after this list).
  • Eiffel syntax has a special place for “header comments”, which describe high-level properties and remain in the contract view.
  • Eiffel library class documentation has always been based on specifications automatically extracted from the actual text of the classes, guaranteeing adequacy of the documentation. Several formats are supported (including, from 1995 on, HTML, so that documentation can be automatically deployed on the Web).
  • Starting with the EiffelCase tool in the early 90s, and today with the Diagram Tool of EiffelStudio, class structures (inheritance and client relationships) are displayed graphically, again in an automatically extracted form, using either the BON or UML conventions.
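To illustrate the contract view (with a hypothetical feature of a stack class; actual library texts differ in detail), here is a routine as written and, below it, roughly what the extracted short form keeps:

   -- Full text, as written by the developer:
   put (x: G)
         -- Push `x' on top.
      require
         not_full: not is_full
      do
         count := count + 1
         storage [count] := x
      ensure
         on_top: item = x
         one_more: count = old count + 1
      end

   -- Contract view (short form), as extracted by tools:
   put (x: G)
         -- Push `x' on top.
      require
         not_full: not is_full
      ensure
         on_top: item = x
         one_more: count = old count + 1
      end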

One of the core benefits of the Single-Product principle is to guard against what some of my publications called the “Dorian Gray” syndrome: divergence of an implementation from its description, a critical problem in software because of the ease of modifying stuff. Having the documentation as an integral part of the code helps ensure that when information at some level of abstraction (specification, design, implementation) changes, the other levels will be updated as well.

Crucial in the approach is the “roundtripping” requirement: specifiers or implementers can make changes in any of the views, and have them reflected automatically in the other views. For example, you can graphically draw an arrow between two bubbles representing classes B and A in the Diagram Tool, and the code of B will be updated with “inherit A”; or you can add this Inheritance clause textually in the code of class B, and the diagram will be automatically updated with an arrow.

It is important to note how contrarian and subversive these ideas were at the time of their introduction (and still to some extent today). The wisdom was that you do requirements then design then implementation, and that code is a lowly product entirely separate from specification and documentation. Model-Driven Development perpetuates this idea (you are not supposed to modify the code, and if you do there is generally no easy way to propagate the change to the model.) Rehabilitating the code (a precursor idea to agile methods, see below) was a complete change of perspective.

I am aware of no precedent for this Single Product approach. The closest earlier ideas I can think of are in Knuth’s introduction of Literate Programming in the early eighties (with a book in 1984). As in the Single-product approach, documentation is interspersed with code. But the literate programming approach is (as presented) top-down, with English-like explanations progressively being extended with implementation elements. The Single Product approach emphasizes the primacy of code and, in terms of the design process, is very much yoyo, alternating top-down (from the specification to the implementation) and bottom-up (from the implementation to the abstraction) steps. In addition, a large part of the documentation, and often the most important one, is not informal English but formal assertions. I knew about Literate Programming, of course, and learned from it, but Single-Product is something else.

Software design: from patterns to components

Karine Arnout’s thesis at ETH Zurich, resulting in two co-authored articles ([255] and [257]), showed that contrary to conventional wisdom a good proportion of the classical design patterns, including some of the most sophisticated, can be transformed into reusable components (indeed part of an Eiffel library). The agent mechanism (see below) was instrumental in achieving that result.

Programming, design and specification concepts: abstract data types

Liskov’s and Zilles’s ground-breaking 1974 abstract data types paper presented the concepts without a mathematical specification, using programming language constructs instead. A 1976 paper (number [3] in my publication list, La Description des Structures de Données, i.e. the description of data structures) was as far as I know one of the first to present a mathematical formalism, as  used today in presentations of ADTs. John Guttag was taking a similar approach in his PhD thesis at about the same time, and went further in providing a sound mathematical foundation, introducing in particular (in a 1978 paper with Jim Horning) the notion of sufficient completeness, to which I devoted a full article in this blog  (Are My Requirements Complete?) about a year ago. My own article was published in a not very well known journal and in French, so I don’t think it had much direct influence. (My later books reused some of the material.)

The three-level description approach of that article (later presented in English for an ACM workshop in the US in 1981, Pingree Park, reference [28]) is not well known but still applicable, and would be useful to avoid frequent confusions between ADT specifications and more explicit descriptions.

When I wrote my 1976 paper, I was not aware of Guttag’s ongoing work (only of the Liskov and Zilles paper), so the use of a mathematical framework with functions and predicates on them was devised independently. (I remember being quite happy when I saw what the axioms should be for a queue.) Guttag and I both gave talks at a workshop organized by the French programming language interest group in 1977 and it was fun to see that our presentations were almost identical. I think my paper still reads well today (well, if you read French). Whether or not it exerted direct influence, I am proud that it independently introduced the modern way of thinking of abstract data types as characterized by mathematical functions and their formal (predicate calculus) properties.
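For the curious, here is the kind of specification at issue, for first-in-first-out queues (my own informal rendering of the standard modern presentation, not a quotation from the 1976 paper). With functions new, put, item, remove and empty, where item and remove carry the precondition not empty (q), the characteristic axioms are:

   empty (new) = True
   empty (put (q, x)) = False
   item (put (q, x)) = x                       if empty (q)
   item (put (q, x)) = item (q)                if not empty (q)
   remove (put (q, x)) = new                   if empty (q)
   remove (put (q, x)) = put (remove (q), x)   if not empty (q)

The last four equations capture first-in-first-out behavior: putting x at the end of a queue does not change its front unless the queue was empty.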

Language mechanisms: genericity with inheritance

Every once in a while I get to referee a paper that starts “Generics, as introduced in Java…” Well, let’s get some perspective here. Eiffel from its introduction in 1985 combined genericity and inheritance. Initially, C++ users and designers claimed that genericity was not needed in an OO context and the language did not have it; then they introduced templates. Initially, the designers of Java claimed (around 1995) that genericity was not needed, and the language did not have it; a few years later Java got generics. Initially, the designers of C# (around 1999) claimed that genericity was not needed, and the language did not have it; a few years later C# and .NET got generics.

Genericity existed before Eiffel of course; what was new was the combination with inheritance. I had been influenced by work on generic modules by a French researcher, Didier Bert, which I believe influenced the design of Ada as well; Ada was the language that brought genericity to a much broader audience than the somewhat confidential languages that had such a mechanism before. But Ada was not object-oriented (it only had modules, not classes). I was passionate about object-oriented programming (at a time when it was generally considered, by the few people who had heard of it, an esoteric, academic pursuit). I started — in the context of an advanced course I was teaching at UC Santa Barbara — an investigation of how the two mechanisms relate to each other. The results were a paper at the first OOPSLA in 1986, Genericity versus Inheritance, and the design of the Eiffel type system, with a class mechanism, inheritance (single and multiple), and genericity, carefully crafted to complement each other.

With the exception of Trellis-Owl, a design from Digital Equipment Corporation also presented at the same OOPSLA (which never gained significant usage), there were no other OO languages with both mechanisms for several years after the Genericity versus Inheritance paper and the implementation of genericity with inheritance in Eiffel, available from 1986 on. Eiffel also introduced, as far as I know, the concept of constrained genericity, the second basic mechanism for combining genericity with inheritance, described in Eiffel: The Language (Prentice Hall, 1992, section 10.8) and discussed again in OOSC2 (section 16.4 and throughout). Similar mechanisms are present in many languages today.
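In today's notation (a minimal sketch with invented class names), the two mechanisms and their combination look like this; the constraint “-> COMPARABLE” is what lets the second class compare elements of type G:

   class PAIR [G]
         -- Unconstrained genericity: G may be any type.
   feature
      first, second: detachable G

      set (a, b: detachable G)
            -- Record `a' and `b'.
         do
            first := a
            second := b
         end
   end

   class SORTED_PAIR [G -> COMPARABLE]
         -- Constrained genericity, combined with inheritance.
   inherit
      PAIR [G]
   feature
      in_order: BOOLEAN
            -- Is `first' no greater than `second'?
         do
            if attached first as f and then attached second as s then
               Result := f <= s
            end
         end
   end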

It was not always so. I distinctly remember people bringing their friends to our booth at some conference in the early nineties, for the sole purpose of having a good laugh with them at our poster advertising genericity with inheritance. (“What is this thing they have and no one else does? Generi-sissy-tee? Hahaha.”). A few years later, proponents of Java were pontificating that no serious language needs generics.

It is undoubtedly part of the cycle of invention (there is a Schopenhauer citation on this, actually the only thing from Schopenhauer’s philosophy that I ever understood [D]) that people at some point will laugh at you; if it brightens their day, why would the inventor deny them one of the little pleasures of life? But in terms of who laughs last: along the way C++ got templates, Java got generics, C# finally did too, and nowadays all typed OO languages have something of the sort.

Language mechanisms: multiple inheritance

Some readers will probably have been told that multiple inheritance is a bad thing, and hence will not count it as a contribution, but if done properly it provides a major abstraction mechanism, useful in many circumstances. Eiffel showed how to do multiple inheritance right by clearly distinguishing between features (operations) and their names, defining a class as a finite mapping between names and features, and using renaming to resolve any name clashes.
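A small sketch (classes invented for the example): both parents of the heir below have a feature called annual_fee; the heir keeps both versions, under distinct names, so there is no clash and clients can refer to either one unambiguously.

   class CAR_OWNER
   feature
      annual_fee: REAL_64
   end

   class TAXPAYER
   feature
      annual_fee: REAL_64
   end

   class RESIDENT
   inherit
      CAR_OWNER
         rename annual_fee as registration_fee end
      TAXPAYER
         rename annual_fee as tax_due end
   feature
      total_due: REAL_64
            -- Combined amount owed this year.
         do
            Result := registration_fee + tax_due
         end
   end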

Multiple inheritance was made possible by an implementation innovation: discovering a technique (widely imitated since, including in single-inheritance contexts) to implement dynamic binding in constant time. It was universally believed at the time that multiple inheritance had a strong impact on performance, because dynamic binding implied a run-time traversal of the class inheritance structure, already bad enough for single inheritance where the structure is a tree, but prohibitive with multiple inheritance for which it is a directed acyclic graph. From its very first implementation in 1986 Eiffel used what is today known as a virtual table technique which guarantees constant-time execution of routine (method) calls with dynamic binding.

Language mechanisms: safe GC through strong static typing

Simula 67 implementations did not have automatic garbage collection, and neither did implementations of C++. The official excuse in the C++ case was methodological: C programmers are used to exerting manual control of memory usage. But the real reason was a technical impossibility resulting from the design of the language: compatibility with C precludes the provision of a good GC.

More precisely, of a sound and complete GC. A GC is sound if it will only reclaim unreachable objects; it is complete if it will reclaim all unreachable objects. With a C-based language supporting casts (e.g. between integers and pointers) and pointer arithmetic, it is impossible to achieve soundness if we aim at a reasonable level of completeness: a pointer can masquerade as an integer, only to be cast back into a pointer later on, but in the meantime the garbage collector, not recognizing it as a pointer, may have wrongly reclaimed the corresponding object. Catastrophe.

It is only possible in such a language to have a conservative GC, meaning that it renounces completeness. A conservative GC will treat as a pointer any integer whose value could possibly be a pointer (because it lies between the bounds of the program’s data addresses in memory). Then, out of precaution, the GC will refrain from reclaiming the objects at these addresses even if they appear unreachable.

This approach makes the GC sound, but it is only a heuristic and it inevitably loses completeness: every once in a while it will fail to reclaim some dead (unreachable) objects. The result is a program with memory leaks — usually unacceptable in practice, particularly for long-running or continuously running programs where the leaks inexorably accumulate until the program starts thrashing and then runs out of memory.

Smalltalk, like Lisp, made garbage collection possible, but it was not a typed language and missed out on the performance benefits of treating simple values like integers as a non-OO language would. Although in this case I do not at the moment have a specific bibliographic reference, I believe that it is in the context of Eiffel that the close connection between strong static typing (avoiding mechanisms such as casts and pointer arithmetic) and the possibility of sound and complete garbage collection was first clearly explained. The explanation was given in particular around 1990 in a meeting with some of the future designers of Java, which uses a similar approach, also taken over later on by C#.

By the way, no one will laugh at you today for considering garbage collection as a kind of basic human right for programmers, but for a long time the very idea was quite sulfurous, and advocating it subjected you to a lot of scorn. Here is an extract of the review I got when I submitted the first Eiffel paper to IEEE Transactions on Software Engineering:

Systems that do automatic garbage collection and prevent the designer from doing his own memory management are not good systems for industrial-strength software engineering.

Famous last words. Another gem from another reviewer of the same paper:

I think time will show that inheritance (section 1.5.3) is a terrible idea.

Wow! I wish the anonymous reviewers would tell us what they think today. Needless to say, the paper was summarily rejected. (It later appeared in the Journal of Systems and Software — as [82] in the publication list — thanks to the enlightened views of Robert Glass, the founding editor.)

Language mechanisms: void safety

Void safety is a property of a language design that guarantees the absence of the plague of null pointer dereferencing.

The original idea came (as far as I know) from work at Microsoft Research that led to the design of a research language called C-omega; the techniques were not transferred to a full-fledged programming language. Benefiting from the existence of this proof of concept, the Eiffel design was reworked to guarantee void safety, starting from my 2005 ECOOP keynote paper (Attached Types) and reaching full void safety a few years later. This property of the language was mechanically proved in a 2016 ETH thesis by A. Kogtenkov.

Today all significant Eiffel development produces void-safe code. As far as I know this was a first among production programming languages and Eiffel remains the only production language to provide a guarantee of full void-safety.

This mechanism, carefully crafted (hint: the difficult part is initialization), is among those of which I am proudest, because in the rest of the programming world null pointer dereferencing is a major plague, threatening at any moment to crash the execution of any program that uses pointers or references. For Eiffel users it is gone.
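In concrete terms, here is a minimal sketch (with invented names) of how void-safe Eiffel code handles a reference that may legitimately be void: the declaration says so explicitly with detachable, and the compiler only accepts a call on it inside a construct, such as the object test below, that guarantees the target is attached at that point.

   class SEARCH_RESULT
   feature
      found_item: detachable STRING
            -- Item found by the last search; void if there was none.

      report
            -- Print the length of the result, if any.
         do
            if attached found_item as l_item then
               print (l_item.count)
            else
               print ("Nothing found")
            end
         end
   end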

Language mechanisms: agents/delegates/lambdas

For a long time, OO programming languages did not have a mechanism for defining objects that wrap individual operations. Eiffel’s agent facility was the first such mechanism, or among the very first together with the roughly contemporaneous but initially much more limited delegates of C#. The 1999 paper From calls to agents (with P. Dubois, M. Howard, M. Schweitzer and E. Stapf, [196] in the list) was as far as I know the first description of such a construct in the scientific literature.
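For illustration, here is a minimal sketch (the surrounding class is invented; do_all is the list iteration feature taking an agent argument): the agent expression wraps a routine into an object that can be stored, passed around and called later.

   class REPORTER
   feature
      print_all (words: LINKED_LIST [STRING])
            -- Print every element of `words'.
         do
            words.do_all (agent (s: STRING) do print (s) end)
         end
   end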

Language mechanisms: concurrency

The 1993 Communications of the ACM paper on Systematic Concurrent Object-Oriented Programming [136] was certainly not the first concurrency proposal for OO programming (there had been pioneering work reported in particular in the 1987 book edited by Tokoro and Yonezawa), but it innovated in offering a completely data-race-free model, still a rarity today (think for example of the multi-threading mechanisms of dominant OO languages).

SCOOP, as it came to be called, was implemented a few years later and is today a standard part of Eiffel.
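To give the flavor of the model as present in Eiffel today (a standard illustrative sketch; class ACCOUNT and its features are hypothetical): declaring formal arguments as separate reserves the corresponding objects for the duration of the call, and a precondition involving a separate argument becomes a wait condition rather than a correctness violation, so the classic bank-transfer example needs no explicit locks and cannot produce a data race.

   transfer (source, target: separate ACCOUNT; amount: INTEGER)
         -- Move `amount' from `source' to `target'.
      require
         enough_funds: source.balance >= amount
            -- In SCOOP this is a wait condition: the call proceeds
            -- only when it holds, with exclusive access to `source' and `target'.
      do
         source.withdraw (amount)
         target.deposit (amount)
      end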

Language mechanisms: selective exports

Information hiding, as introduced by Parnas in his two seminal 1972 articles, distinguishes between public and secret features of a module. The first OO programming language, Simula 67, had only these two possibilities for classes and so did Ada for modules.

In building libraries of reusable components I realized early on that we need a more fine-grained mechanism. For example if class LINKED_LIST uses an auxiliary class LINKABLE to represent individual cells of a linked list (each with a value field and a “right” field containing a reference to another LINKABLE), the features of LINKABLE (such as the operation to reattach the “right” field) should not be secret, since LINKED_LIST needs them; but they should also not be generally public, since we do not want arbitrary client objects to mess around with the internal structure of the list. They should be exported selectively to LINKED_LIST only. The Eiffel syntax is simple: declare these operations in a clause of the class labeled “feature {LINKED_LIST}”.

This mechanism, known as selective exports, was introduced around 1989 (it is specified in full in Eiffel: The Language, from 1992, but was in the Eiffel manuals earlier). I think it predated the C++ “friends” mechanism which serves a similar purpose (maybe someone with knowledge of the history of C++ has the exact date). Selective exports are more general than the friends facility and similar ones in other OO languages: specifying a class as a friend means it has access to all your internals. This solution is too coarse-grained. Eiffel’s selective exports make it possible to define the specific export rights of individual operations (including attributes/fields) individually.
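In Eiffel syntax the example reads as follows (a simplified version, not the actual library text):

   class LINKABLE [G]
   feature -- Access (available to all clients)
      item: detachable G
            -- Value stored in this cell.
      right: detachable LINKABLE [G]
            -- Next cell, if any.
   feature {LINKED_LIST} -- Implementation (exported to LINKED_LIST only)
      put_right (other: detachable LINKABLE [G])
            -- Attach `other' as the next cell.
         do
            right := other
         ensure
            chained: right = other
         end
   end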

Language mechanisms and implementation: serialization and schema evolution

I did not invent serialization. As a student at Stanford in 1974 I had the privilege, at the AI lab, of using SAIL (Stanford Artificial Intelligence Language). SAIL was not object-oriented but included many innovative ideas; it was far ahead of its time, especially in terms of the integration of the language with (what was not yet called) its IDE. One feature of SAIL with which one could fall in love at first sight was the possibility of selecting an object and having its full dependent data structure (the entire subgraph of the object graph reached by following references from the object, recursively) stored into a file, for retrieval at the next session. After that, I never wanted again to live without such a facility, but no other language and environment had it.

Serialization was almost the first thing we implemented for Eiffel: the ability to write object.store (file) to have the entire structure from object stored into file, and the corresponding retrieval operation. OOSC1 (section 15.5) presents these mechanisms. Simula and (I think) C++ did not have anything of the sort; I am not sure about Smalltalk. Later on, of course, serialization mechanisms became a frequent component of OO environments.

Eiffel remained innovative by tackling the difficult problems: what happens when you try to retrieve an object structure and some classes have changed? Only with a coherent theoretical framework as provided in Eiffel by Design by Contract can one devise a meaningful solution. The problem and our solutions are described in detail in OOSC2 (the whole of chapter 31, particularly the section entitled “Schema evolution”). Further advances were made by Marco Piccioni in his PhD thesis at ETH and published in joint papers with him and M. Oriol, particularly [352].

Software engineering: primacy of code

Agile methods are widely and properly lauded for emphasizing the central role of code, against designs and other non-executable artifacts. By reading the agile literature you might be forgiven for believing that no one brought up that point before.

Object Success (1995) makes the argument very clearly. For example, chapter 3, page 43:

Code is to our industry what bread is to a baker and books to a writer. But with the waterfall code only appears late in the process; for a manager this is an unacceptable risk factor. Anyone with practical experience in software development knows how many things can go wrong once you get down to code: a brilliant design idea whose implementation turns out to require tens of megabytes of space or minutes of response time; beautiful bubbles and arrows that cannot be implemented; an operating system update, crucial to the project which comes five weeks late; an obscure bug that takes ages to be fixed. Unless you start coding early in the process, you will not be able to control your project.

Such discourse was subversive at the time; the wisdom in software engineering was that you need to specify and design a system to death before you even start coding (otherwise you are just a messy “hacker” in the sense this word had at the time). No one else in respectable software engineering circles was, as far as I know, pushing for putting code at the center, the way the above extract does.

Several years later, agile authors started making similar arguments, but I don’t know why they never referenced this earlier exposition, which I still find not too bad today. (Maybe they decided it was more effective to have a foil, the scorned Waterfall, and to claim that everyone before them was downplaying the importance of code; but in fact not everyone was.)

Just to be clear, Agile brought many important ideas that my publications did not anticipate; but this particular one I did.

Software engineering: the roles of managers

Extreme Programming and Scrum have brought new light on the role of managers in software development. Their contributions have been important and influential, but here too they were for a significant part prefigured by a long discussion, altogether two chapters, in Object Success (1995).

To realize this, it is enough to read the titles of some of the sections in those chapters, describing roles for managers (some universal, some for a technical manager): “risk manager”, “interface with the rest of the world” (very scrummy!), “protector of the team’s sanity”, “method enforcer” (think Scrum Master), “mentor and critic”. Again, as far as I know, these were original thoughts at the time; the software engineering literature for the most part did not talk about these issues.

Software engineering: outsourcing

As far as I know the 2006 paper Offshore Development: The Unspoken Revolution in Software Engineering was the first to draw attention, in the software engineering community, to the peculiar software engineering challenges of distributed and outsourced development.

Software engineering: automatic testing

The AutoTest project (with many publications, involving I. Ciupa, A. Leitner, Y. Wei, M. Oriol, Y. Pei, M. Nordio and others) was not the first to generate tests automatically by creating numerous instances of objects and calling applicable operations (it was preceded by Korat at MIT), but it was the first one to apply this concept with Design by Contract mechanisms (without which it is of little practical value, since one must still produce test oracles manually) and the first to be integrated in a production environment (EiffelStudio).

Software engineering: make-less system building

One of the very first decisions in the design of Eiffel was to get rid of Make files.

Feldman’s Make had of course been a great innovation. Before Make, programmers had to produce executable systems manually by executing sequences of commands to compile and link the various source components. Make enabled them instead to define dependencies between components in a declarative way, resulting in a partial order, on which it performs a topological sort to produce the sequence of commands. But preparing the list of dependencies remains a tedious task, particularly error-prone for large systems.

I decided right away in the design of Eiffel that we would never force programmers to write such dependencies: they would be automatically extracted from the code, through an exhaustive analysis of the dependencies between modules. This idea was present from the very first Eiffel report in 1985 (reference [55] in the publication list): Eiffel programmers never need to write a Make file or equivalent (other than for non-Eiffel code, e.g. C or C++, that they want to integrate); they just click a Compile button and the compiler figures out the steps.

Behind this approach was a detailed theoretical analysis of possible relations between modules in software development (in many programming languages), published as the “Software Knowledge Base” at ICSE in 1985. That analysis was also quite instructive and I would like to return to this work and expand it.

Educational techniques: objects first

Towards an Object-Oriented Curriculum (TOOLS conference, August 1993; see also the shorter JOOP paper in May of the same year) makes a carefully argued case for what was later called the Objects First approach to teaching programming. I would be interested to know if there are earlier publications advocating starting programming education with an OO language.

The article also advocated for the “inverted curriculum”, a term borrowed from work by Bernie Cohen about teaching electrical engineering. It was the first transposition of this concept to software education. In the article’s approach, students are given program components to use, then little by little discover how they are made. This technique met with some skepticism and resistance since the standard approach was to start from the very basics (write trivial programs), then move up. Today, of course, many introductory programming courses similarly provide students from day one with a full-fledged set of components enabling them to produce significant programs.

More recent articles on similar topics, taking advantage of actual teaching experience, are The Outside-In Method of Teaching Programming (2003) and The Inverted Curriculum in Practice (at ICSE 2006, with Michela Pedroni). The culmination of that experience is the textbook Touch of Class from 2009.

Educational techniques: Distributed Software Projects

I believe our team at ETH Zurich (including among others M. Nordio, J. Tschannen, P. Kolb and C. Estler and in collaboration with C. Ghezzi, E. Di Nitto and G. Tamburrelli at Politecnico di Milano, N. Aguirre at Rio Cuarto and many others in various universities) was the first to devise,  practice and document on a large scale (see publications and other details here) the idea of an educational software project conducted in common by student groups from different universities. It yielded a wealth of information on distributed software development and educational issues.

Educational techniques: Web-based programming exercises

There are today a number of cloud-based environments supporting the teaching of programming by enabling students to compile and test their programs on the Web, benefiting from a prepared environment (so that they don’t have to download any tools or prepare control files) and providing feedback. One of the first — I am not sure about absolute precedence — and still a leading one, used by many universities and applicable to many programming languages, is Codeboard.

The main developer, in my chair at ETH Zurich, was Christian Estler, supported in particular by M. Nordio and M. Piccioni, so I am only claiming a supporting role here.

Educational techniques: key CS/SE concepts

The 2001 paper Software Engineering in the Academy did a good job, I think, of defining the essential concepts to teach in a proper curriculum (part of what Jeannette Wing’s 2006 paper called Computational Thinking).

Program verification: agents (delegates etc.)

Reasoning about Function Objects (ICSE 2010, with M. Nordio, P. Müller and J. Tschannen) introduced verification techniques for objects representing functions (such as agents, delegates etc., see above) in an OO language. Not sure whether there were any such techniques before.

Specification languages: Z

The Z specification language has been widely used for formal development, particularly in the UK. It is the design of J-R Abrial. I may point out that I was a coauthor of the first publication on Z in English (1980),  describing a version that preceded the adaptation to a more graphical-style notation done later at Oxford. The first ever published description of Z, pertaining to an even earlier version, was in French, in my book Méthodes de Programmation (with C. Baudoin), Eyrolles, 1978, running over 15 pages (526-541), with the precise description of a refinement process.

Program verification: exceptions

Largely coming out of the PhD thesis of Martin Nordio, A Sound and Complete Program Logic for Eiffel (TOOLS 2009) introduces rules for dealing with exceptions in a Hoare-style verification framework.

Program verification: full library, and AutoProof

Nadia Polikarpova’s thesis at ETH, aided by the work of Carlo Furia and Julian Tschannen (they were the major contributors and my participation was less important), was as far as I know the first to produce a full functional verification of an actual production-quality reusable library. The library is EiffelBase 2, covering fundamental data structures.

The verification relied on the AutoProof prover — available today, as a still experimental tool, through its Web interface, see here — built by the same team, and itself based on Microsoft Research’s Boogie and Z3 engines.

More

There are more concepts worthy of being included, but I will stop here for today.

Notes

[A] One point of divergence between usual presentations of the substitution principle and the view in OOSC and my other publications is the covariance versus contravariance of routine argument types. It reflects a difference of views as to what the proper policy (both mathematically sound and practically usable) should be.

[B]  The GoF book does not cite OOSC for the command or bridge patterns. For the command pattern it cites (thanks to Adam Kosmaczewski for digging up the GoF text!) a 1985 SIGGRAPH paper by Henry Lieberman (There’s More to Menu Systems than Meets the Screen). Lieberman’s paper describes the notion of command object and mentions undoing in passing, but does not include the key elements of the command pattern (as explained in full in OOSC1), i.e. an abstract (deferred) command class with deferred procedures called (say) do_it and undo_it, then specific classes for each kind of command, each providing a specific implementation of those procedures, then a history list of commands supporting multiple-level undo and redo as explained in OOSC1. (Reading Lieberman’s paper with a 2021 perspective shows that it came tantalizingly close to the command pattern, but doesn’t get to it. The paper does talk about inheritance between command classes, but only to “define new commands as extensions to old commands”, not in the sense of a general template that can be implemented in many specific ways. And it does mention a list of objects kept around to enable recovery from accidental deletions, and states that the application can control its length, as is the case with a history list; but the objects in the list are not command objects, they are graphical and other objects that have been deleted.)

[C] Additional note on the command pattern: I vaguely remember seeing something similar to the OOSC1 technique in an article from a supplementary volume of the OOPSLA proceedings in the late eighties or early nineties, i.e. at the same time or slightly later, possibly from authors from Xerox PARC, but I have lost the reference.

[D] Correction: I just checked the source and learned that the actual Schopenhauer quote (as opposed to the one that is usually quoted) is different; it does not include the part about laughing. So much for my attempts at understanding philosophy.

 

This Wednesday in Nice: survey talk on the Eiffel method

The “Morgenstern Colloquium” at the University of Nice / INRIA Sophia Antipolis invited me to give a talk, next Wednesday (18 December) at 11 in Sophia Antipolis, in the aptly named* “Kahn Building”. The announcement appears here. I proposed various topics but (pleasant surprise) the organizers explicitly asked me to lecture about what I really want to talk about: the Eiffel approach. I will give a general presentation describing not specifically the language but the unified view of software construction embodied in Eiffel, from modeling to requirements to design, implementation and verification. Here is the abstract:

With society’s growing reliance on IT systems, the ability to write high-quality software is ever more critical. While a posteriori verification techniques have their role, there is no substitute for methods and tools that provide built-in quality (“correctness by construction”) and scale up to very large systems. For several decades my colleagues and I have been building such a method, based in particular on the concept of Design by Contract, the associated tools and the supporting language, Eiffel. The scope is wide, encompassing all aspects of the software development process, from requirements and design to implementation and verification. I will present an overview of the approach, show what it can yield, and discuss remaining open issues.

This talk is meant for everyone, whether from industry or academia, with an interest in practical techniques for engineering high-quality software.

No registration is required. The presentation will be in English.

Note

*Gilles Kahn, a brilliant computer scientist who died too young, was for a while director of INRIA.

Publications on CS/SE/informatics education

Recently I had a need to collect my education-related publications, so I went through my publication list and extracted items devoted to issues of learning computer science (informatics) and software engineering. There turned out to be far more than I expected; I did not think of myself as primarily an education researcher but it seems I am that too. (Not so many research computer scientists take the trouble to publish in SIGCSE, ITiCSE and other top CS education venues.)

Without presuming that the list will be of interest I am reproducing it below for the record. All comes from my publication list here, which contains more information, in particular a descriptive paragraph or two for every single publication.

I have also included PhD theses in education. (Whole list of PhD theses supervised here.)

The topics include among others, in approximate chronological order (although the list below is in the reverse order):

    • Early experience teaching modern programming concepts in both industry and universities.
    • In the nineties, when I was full time at Eiffel Software, the development of a general framework for teaching programming. This was written from the safe position of someone in industry advising academic colleagues on what to do (usually the advice goes the other way). I did have, however, the opportunity to practice my preaching in short stints at the University of Technology, Sydney and particularly Monash University. The concept of the Inverted Curriculum (also known as “Outside-In”) dates back to that period, with objects first (actually classes) and contracts first too.
    • When I joined ETH, a general paper on the fundamental goals and concepts of software engineering education, “Software Engineering in the Academy”, published in IEEE Computer.
    • At ETH, putting the Inverted Curriculum in practice, with 14 consecutive sessions of the introductory programming courses for all computer science students, resulting in the Touch of Class textbook and a number of papers coming out of our observations. An estimated 6000 students took the course. A variant of it has also been given several times at Innopolis University.
    • A theory of how to structure knowledge for educational purposes, leading to the notion of “Truc” (Teachable, Reusable Unit of Cognition).
    • The development by Michela Pedroni of the Trucstudio environment, similar in its form to an IDE but devoted, instead of the development of programs, to the visual development of courses, textbooks, curricula etc.
    • Empirical work by Marie-Hélène Ng Cheong Vee (Nienaltowski) and Michela Pedroni on what beginners understand easily, and not, for example according to the phrasing of compiler error messages.
    • Other empirical work, by Michela Pedroni and Manuel Oriol, on the prior knowledge of entering computer science students.
    • The DOSE course (Distributed and Outsourced Software Engineering), which for several years ran a student project carried out by joint teams from several cooperating universities, including Politecnico di Milano, which played a key role along with us. It enabled many empirical studies on the effect of geographically distributed teams on software development. People who played a major role in this effort are, at ETH, Martin Nordio, Julian Tschannen and Christian Estler and, at Politecnico, Elisabetta di Nitto, Giordano Tamburrelli and Carlo Ghezzi.
    • Several MOOCs, among the first at ETH, on introductory computing and agile methods. They do not appear below because they are not available at the moment on the EdX site (I do not know why and will try to get them reinstated). The key force there was Marco Piccioni. MOOCs are interesting for many reasons; they are a substitute neither for face-to-face teaching nor for textbooks, but an interesting complement offering novel educational possibilities. Thanks to Codeboard (see below), our programming MOOCs provide the opportunity to compile and run programs directly from the course exercise pages, compare the run’s results to the correct answers for prepared tests, and get immediate feedback.
    • A comparative study of the teaching effectiveness of two concurrency models, Eiffel SCOOP and Java Threads (Sebastian Nanz, Michela Pedroni).
    • The development of the EiffelMedia multimedia library at ETH, which served as a basis for dozens of student projects over many years. Credit for both the idea and its realization, including student supervision, goes to Till Bay and Michela Pedroni.
    • The development (Christian Estler with Martin Nordio) of the Codeboard system and site, an advanced system for cloud support to teach programming, enabling students to compile, correct and run programs on the web, with support for various languages. Codeboard is used in the programming MOOCs.
    • A hint system (Paolo Antonucci, Michela Pedroni) to help students get progressive help, as in video games, when they stumble trying to write a program, e.g. with Codeboard.

Supervised PhD theses on education

The following three theses are devoted to educational topics (although many of the  other theses have educational aspects too):

Christian Estler, 2014, Understanding and Improving Collaboration in Distributed Software Development, available here.

Michela Pedroni, 2009, Concepts and Tools for Teaching Programming, available here.

Markus Brändle, 2006: GraphBench: Exploring the Limits of Complexity with Educational Software, available here. (The main supervisor in this case was Jürg Nievergelt.)

MOOCs (Massive Online Open Courses)

Internal MOOCs, and three courses on EdX (links will be added when available):

  • Computing: Art, Magic, Science? Part 1 (CAMS 1), 2013.
  • Computing: Art, Magic, Science? Part 2 (CAMS 2), 2014.
  • Agile Software Development, 2015.

Publications about education

1. Paolo Antonucci, Christian Estler, Durica Nikolic, Marco Piccioni and Bertrand Meyer: An Incremental Hint System For Automated Programming Assignments, in ITiCSE ’15, Proceedings of 2015 ACM Conference on Innovation and Technology in Computer Science Education, 6-8 July 2015, Vilnius, ACM Press, pages 320-325. (The result of a master’s thesis, a system for helping students solve online exercises, through successive hints.) Available here.

2. Jiwon Shin, Andrey Rusakov and Bertrand Meyer: Concurrent Software Engineering and Robotics Education, in 37th International Conference on Software Engineering (ICSE 2015), Florence, May 2015, IEEE Press, pages 370-379. (Describes our innovative Robotics Programming Laboratory course, where students from three departments (CS, Mechanical Engineering and Electrical Engineering) learned how to program robots.) Available here.

3. Cristina Pereira, Hannes Werthner, Enrico Nardelli and Bertrand Meyer: Informatics Education in Europe: Institutions, Degrees, Students, Positions, Salaries — Key Data 2008-2013, Informatics Europe report, October 2014. (Not a scientific publication but a report. I also collaborated in several other editions of this yearly report series, which I started, from 2011 on. A unique source of information about the state of CS education in Europe.) Available here.

4. (One of the authors of) Informatics education: Europe cannot afford to miss the boat, edited by Walter Gander, joint Informatics Europe and ACM Europe report, April 2013. An influential report which was instrumental in the introduction of computer science in high schools and primary schools in Europe, particularly Switzerland. Emphasized the distinction between “digital literacy” and computer science. Available here.

5. Sebastian Nanz, Faraz Torshizi, Michela Pedroni and Bertrand Meyer: Design of an Empirical Study for Comparing the Usability of Concurrent Programming Languages, in Information and Software Technology (Elsevier), vol. 55, 2013. (Journal version of the conference paper listed below.) Available here.

6. Bertrand Meyer: Knowledgeable beginners, in Communications of the ACM, vol. 55, no. 3, March 2012, pages 10-11. (About a survey of prior knowledge of entering ETH CS students, over many years. Material from tech report below.) Available here.

7. Sebastian Nanz, Faraz Torshizi, Michela Pedroni and Bertrand Meyer: Design of an Empirical Study for Comparing the Usability of Concurrent Programming Languages, in ESEM 2011 (ACM/IEEE International Symposium on Empirical Software Engineering and Measurement), 22-23 September 2011 (best paper award). Reports on a carefully designed empirical study to assess the teachability of various approaches to concurrent programming. Available here.

8. Martin Nordio, H.-Christian Estler, Julian Tschannen, Carlo Ghezzi, Elisabetta Di Nitto and Bertrand Meyer: How do Distribution and Time Zones affect Software Development? A Case Study on Communication, in Proceedings of the 6th International Conference on Global Software Engineering (ICGSE), IEEE Computer Press, 2011, pages 176-184. (A study of the results of our DOSE distributed course, which involved students from different universities in different countries collaborating on a common software development project.) Available here.

9. Martin Nordio, Carlo Ghezzi, Elisabetta Di Nitto, Giordano Tamburrelli, Julian Tschannen, Nazareno Aguirre, Vidya Kulkarni and Bertrand Meyer: Teaching Software Engineering using Globally Distributed Projects: the DOSE course, in Collaborative Teaching of Globally Distributed Software Development – Community Building Workshop (CTGDSD), Hawaii (at ICSE), May 2011. (Part of the experience of our Distributed Outsourced Software Engineering course, taught over many years with colleagues from Politecnico di Milano and elsewhere, see paper in previous entry.) Available here.

10. Bertrand Meyer: From Programming to Software Engineering (slides only), material for education keynote at International Conference on Software Engineering (ICSE 2010), Cape Town, South Africa, May 2010. Available here.

11. Michela Pedroni and Bertrand Meyer: Object-Oriented Modeling of Object-Oriented Concepts, in ISSEP 2010, Fourth International Conference on Informatics in Secondary Schools, Zurich, January 2010, eds. J. Hromkovic, R. Královic, J. Vahrenhold, Lecture Notes in Computer Science 5941, Springer, 2010. Available here.

12. Michela Pedroni, Manuel Oriol and Bertrand Meyer: What Do Beginning CS Majors Know?, ETH Technical Report, 2009. (Unpublished report about the background of 1st-year ETH CS students surveyed over many years. See shorter 2012 CACM version above.) Available here.

13. Bertrand Meyer: Touch of Class: Learning to Program Well Using Object Technology and Design by Contract, Springer, 2009 (also translated into Russian). (Introductory programming textbook, used for many years at ETH Zurich and Innopolis University for the first programming course. The book contains a long discussion of the pedagogical issues of teaching programming and CS.) Book page and text of several chapters here.

14. Michela Pedroni, Manuel Oriol, Lukas Angerer and Bertrand Meyer: Automatic Extraction of Notions from Course Material, in Proceedings of SIGCSE 2008 (39th Technical Symposium on Computer Science Education), Portland (Oregon), 12-15 March 2008, ACM SIGCSE Bulletin, vol. 40, no. 1, ACM Press, 2008, pages 251-255. (As the title indicates, tools for automatic analysis of course material to extract the key pedagogical notions or “Trucs”.) Available here.

15. Marie-Hélène Nienaltowski, Michela Pedroni and Bertrand Meyer: Compiler Error Messages: What Can Help Novices?, in Proceedings of SIGCSE 2008 (39th Technical Symposium on Computer Science Education), Portland (Oregon), 12-15 March 2008, ACM SIGCSE Bulletin, vol. 40, no. 1, ACM Press, 2008, pages 168-172. (Discusses the results of experiments with different styles of compiler error messages, which can be baffling to beginners, to determine what works best.) Available here.

16. Bertrand Meyer and Marco Piccioni: The Allure and Risks of a Deployable Software Engineering Project: Experiences with Both Local and Distributed Development, in Proceedings of IEEE Conference on Software Engineering & Training (CSEE&T), Charleston (South Carolina), 14-17 April 2008, ed. H. Saiedian, pages 3-16. (Paper associated with a keynote at an SE education conference. See other papers on the DOSE distributed project experience below.) Available here.

17. Till Bay, Michela Pedroni and Bertrand Meyer: By students, for students: a production-quality multimedia library and its application to game-based teaching, in JOT (Journal of Object Technology), vol. 7, no. 1, pages 147-159, January 2008. Available here (PDF) and here (HTML).

18. Marie-Hélène Ng Cheong Vee (Marie-Hélène Nienaltowski), Keith L. Mannock and Bertrand Meyer: Empirical study of novice error paths, Proceedings of workshop on educational data mining at the 8th international conference on intelligent tutoring systems (ITS 2006), 2006, pages 13-20. (An empirical study of the kind of programming mistakes learners make.) Available here.

19. Bertrand Meyer: Testable, Reusable Units of Cognition, in Computer (IEEE), vol. 39, no. 4, April 2006, pages 20-24. (Introduced a general approach for structuring knowledge for teaching purposes: “Trucs”. Served as the basis for some other work listed, in particular papers with Michela Pedroni on the topics of her PhD thesis.) Available here.

20. Michela Pedroni and Bertrand Meyer: The Inverted Curriculum in Practice, in Proceedings of SIGCSE 2006, Houston (Texas), 1-5 March 2006, ACM Press, 2006, pages 481-485. (Develops the idea of the inverted curriculum which served as the basis for our teaching of programming at ETH, Innopolis etc. and led to the “Touch of Class” textbook.) Available here.

21. Bertrand Meyer: The Outside-In Method of Teaching Introductory Programming, in Perspective of System Informatics, Proceedings of fifth Andrei Ershov Memorial Conference, Akademgorodok, Novosibirsk, 9-12 July 2003, eds. Manfred Broy and Alexandr Zamulin, Lecture Notes in Computer Science 2890, Springer, 2003, pages 66-78. (An early version of the ideas presented in the previous entry.) Available here.

22. Bertrand Meyer: Software Engineering in the Academy, in Computer (IEEE), vol. 34, no. 5, May 2001, pages 28-35. Translations: Russian in Otkrytye Systemy (Open Systems Publications), #07-08-2001, October 2001. (A general discussion of the fundamental concepts to be taught in software engineering. Served as a blueprint for my teaching at ETH.) Available here.

23. Bertrand Meyer: Object-Oriented Software Construction, second edition, Prentice Hall, 1296 pages, January 1997. Translations: Spanish, French, Russian, Serbian, Japanese. (Not a publication on education per se but cited here since it is a textbook that has been widely used for teaching and has many comments on pedagogy.)

24. Bertrand Meyer: The Choice for Introductory Software Education, Guest editorial in Journal of Object-Oriented Programming, vol. 7, no. 3, June 1994, page 8. (A discussion of the use of Eiffel for teaching software engineering topics.)

25. Bertrand Meyer: Towards an Object-Oriented Curriculum, in Journal of Object-Oriented Programming, vol. 6, no. 2, May 1993, pages 76-81. (Journal version of paper cited next.) Available here.

26. Bertrand Meyer: Towards an Object-Oriented Curriculum, in TOOLS 11, Technology of Object-Oriented Languages and Systems, Santa Barbara, August 1993, eds. Raimund Ege, Madhu Singh and B. Meyer, Prentice Hall 1993, pages 585-594. (Early advocacy for using OO techniques in teaching programming – while I was not in academia. Much of my subsequent educational work relied on those ideas.) Available here.

27. Bertrand Meyer: Object-Oriented Software Construction, Prentice Hall, 592 pages, 1988. (First edition, translated into German, Italian, French, Dutch, Romanian, Chinese. As noted for second edition above, not about education per se, but widely used textbook with pedagogical implications.)

28. Bertrand Meyer: Initiation à la programmation en milieu industriel (Teaching Modern Programming Methodology in an Industrial Environment), in RAIRO, série bleue (informatique), vol. 11, no. 1, pages 21-34, 1977. (Early paper on teaching advanced programming techniques in industry.) Available here.

29. Claude Kaiser, Bertrand Meyer and Etienne Pichat, L’Enseignement de la Programmation à l’IIE (Teaching Programming at the IIE engineering school), in Zéro-Un Informatique, 1977. (A paper on my first teaching experience barely out of school myself.) Available here.


Blockchains, bitcoin and distributed trust: LASER school lineup complete

The full lineup of speakers at the 2018 LASER summer school on Software for Blockchains, Bitcoin and Distributed Trust is now ready, with the announcement of a new speaker, Primavera De Filippi from CNRS and Harvard on social and legal aspects.

The other speakers are Christian Cachin (IBM), Maurice Herlihy (Brown), Christoph Jentzsch (slock.it), me, Emin Gün Sirer (Cornell) and Roger Wattenhofer (ETH).

The school is the 14th in the LASER series and takes place June 2-10, 2018, on the island of Elba in Italy.

Early-fee registration deadline is February 10. The school’s page is here.


Concurrency/verification positions at Politecnico di Milano

As part of the continuation of the ERC Advanced Investigator Grant project “Concurrency Made Easy” (started at ETH Zurich, see the project pages at cme.ethz.ch), I have positions at Politecnico di Milano for:

  • Postdocs (having a doctoral degree)
  • Research associates (officially: “Assegno di Ricerca”, with the requirement of holding a master’s degree), which can lead to a PhD position.

The deadline for applications is October 11. Please contact me directly if interested. What I expect:

  • The requisite degrees as stated above.
  • Innovative and enterprising spirit, passion for quality work in software engineering.
  • Either or both of excellent programming abilities and strong CS theoretical background.
  • Knowledge of as many as possible of: object-oriented programming, concurrency/parallelism, software verification/formal methods, Eiffel.
  • Familiarity with the basics of the project as described in the project pages at the URL above.

LASER summer school on software for robotics: last call for registration

Much of the progress in robotics is due to software advances, and software issues are at the heart of the formidable challenges that remain. The 2017 LASER summer school, held in September in Elba, brings together some of the most prestigious international experts in the area.

The LASER school has established itself as one of the principal forums to discuss advanced software issues. The 2017 school takes place from 9 to 17 September in the idyllic setting of the Hotel del Golfo in Procchio, Elba Island, Italy.

Robotics is progressing at an amazing pace, bringing improvements to almost all areas of human activity. Today’s robotics systems rely ever more fundamentally on complex software, raising difficult issues. The LASER 2017 summer school covers both the current state of robotics software technology and open problems. The lecturers are top international experts with both theoretical contributions and major practical achievements in developing robotics systems.
The LASER school is intended for professionals from the industry (engineers and managers) as well as university researchers, including PhD students. Participants learn about the most important software technology advances from the pioneers in the field. The school’s focus is applied, although theory is welcome to establish solid foundations. The format of the school favors extensive interaction between participants and speakers.

We have lined up an impressive roster of speakers from the leading edge of both industry and academia:

Rodolphe Gélin, Aldebaran Robotics
Ashish Kapoor, Microsoft Research
Davide Brugali, University of Bergamo, on Managing software variability in robotic control systems
Nenad Medvidovic, University of Southern California, on Software Architectures of Robotics Systems
Bertrand Meyer, Politecnico di Milano & Innopolis University, on Concurrent Object-Oriented Robotics Software
Issa Nesnas, NASA Jet Propulsion Laboratory, on Experiences from robotic software development for research and planetary flight robots
Hiroshi (“Gitchang”) Okuno, Waseda University & Kyoto University, on Open-Sourced Robot Audition Software HARK: Capabilities and Applications

The school takes place at the magnificent Hotel del Golfo in the Gulf of Procchio, Elba. Along with an intensive scientific program, participants will have time to enjoy the countless natural and cultural riches of this wonderful, history-laden jewel of the Mediterranean.

For more information about the school, the speakers and registration see the LASER site.


Robotics and concurrency

Many robotics applications are by nature concurrent; in his ongoing PhD work, Andrey Rusakov [1] is building a comprehensive concurrent robot programming framework, Roboscoop [2], based on the SCOOP model for simple concurrent object-oriented programming [3] and the ROS operating system. As part of this work it is important to know how much robotics applications use concurrency, how much they stay away from it — perhaps because programmers are afraid of the risks — and how much they could benefit from more of it. Rusakov has prepared a questionnaire [4] to help find out. If you have experience in robot programming, please help him by answering the questionnaire, which takes only a few minutes.

References

[1] Rusakov’s home page here.

[2] Roboscoop project page, here.

[3] Simple Concurrent Object-Oriented Programming, see here.

[4] The questionnaire is here.


Software for Robotics: 2016 LASER summer school, 10-18 September, Elba

The 2016 session of the LASER summer school, now in its 13th edition, has just been announced. The theme is new for the school, and timely: software for robotics. Below is the announcement.

School site: here

The 2016 LASER summer school will be devoted to Software for Robotics. It takes place from 10 to 18 September in the magnificent setting of the Hotel del Golfo in Procchio, Elba Island, Italy.

Robotics is progressing at an amazing pace, bringing improvements to almost all areas of human activity. Today’s robotics systems rely ever more fundamentally on complex software, raising difficult issues. The LASER 2016 summer school covers both the current state of robotics software technology and open problems. The lecturers are top international experts with both theoretical contributions and major practical achievements in developing robotics systems.
The LASER school is intended for professionals from the industry (engineers and managers) as well as university researchers, including PhD students. Participants learn about the most important software technology advances from the pioneers in the field. The school’s focus is applied, although theory is welcome to establish solid foundations. The format of the school favors extensive interaction between participants and speakers.
The speakers include:

  • Joydeep Biswas, University of Massachusetts, on Development, debugging, and maintenance of deployed robots
  • Davide Brugali, University of Bergamo, on Managing software variability in robotic control systems
  • Nenad Medvidovic, University of Southern California, on Software Architectures of Robotics Systems
  • Bertrand Meyer, Politecnico di Milano and Innopolis University, with Jiwon Shin, on Concurrent Object-Oriented Robotics Software: Concepts, Framework and Applications
  • Issa Nesnas, NASA Jet Propulsion Laboratory, on Experiences from robotic software development for research and planetary flight robots
  • Richard Vaughan, Simon Fraser University

Organized by Politecnico di Milano, the school takes place at the magnificent Hotel del Golfo (http://www.hoteldelgolfo.it/) in Golfo di Procchio, Elba. Along with an intensive scientific program, participants will have time to enjoy the natural and cultural riches of this history-laden jewel of the Mediterranean.

For more information about the school, the speakers and registration see here.


— Bertrand Meyer


Design by Contract: ACM Webinar this Thursday

A third ACM webinar this year (after two on agile methods): I will be providing a general introduction to Design by Contract. The date is this coming Thursday, September 17, and the time is noon New York (18 Paris/Zurich, 17 London, 9 Los Angeles, see here for hours elsewhere). Please tune in! The event is free but requires registration here.


New paper: Theory of Programs

Programming, wrote Dijkstra many years ago, is a branch of applied mathematics. That is only half of the picture: the other half is engineering, and this dual nature of programming is part of its attraction.

Descriptions of the mathematical side are generally, in my view, too complicated. This article [1] presents a mathematical theory of programs and programming based on concepts taught in high school: elementary set theory. The concepts covered include:

  • Programming.
  • Specification.
  • Refinement.
  • Non-determinism.
  • Feasibility.
  • Correctness.
  • Programming languages.
  • Kinds of programs: imperative, functional, object-oriented.
  • Concurrency (small-step and large-step).
  • Control structures (compound, if-then-else and Dijkstra-style conditional, loop).
  • State, store and environment.
  • Invariants.
  • Notational conventions for building specifications and programs incrementally.
  • Loop invariants and variants.

One of the principal ideas is that a program is simply the description of a mathematical relation. The program text is a rendering of that relation. As a consequence, one may construct programming languages simply as notations to express certain kinds of mathematics. This approach is the reverse of the usual one, where the program text and its programming languages are the starting point and the center of attention: theoreticians develop techniques to relate them to mathematical concepts. It is more effective to start from the mathematics (“unparsing” rather than parsing).
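
To convey the flavor of this set-theoretic style, here is a minimal sketch (my own illustrative notation and definitions, not necessarily the exact ones of the paper): a program and a specification are both relations over a set S of states, and refinement is essentially relation inclusion.

\[ p \subseteq S \times S, \qquad \mathit{spec} \subseteq S \times S \]

\[ p \text{ refines } \mathit{spec} \;\Longleftrightarrow\; \operatorname{dom}(\mathit{spec}) \subseteq \operatorname{dom}(p) \;\wedge\; \operatorname{dom}(\mathit{spec}) \lhd p \subseteq \mathit{spec} \]

where \( A \lhd r \) denotes the restriction of relation \( r \) to pairs whose first element belongs to \( A \). With definitions of this kind, basic properties of refinement (for example reflexivity and transitivity) follow from elementary set-theoretic reasoning about the subset relation, which is exactly the kind of economy the approach aims at.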

All the results (74 properties expressed formally, a number of others in the text) are derived as theorems from rules of elementary set theory; there are no new axioms whatsoever.

The paper also has a short version [2], omitting proofs and many details.

References

[1] Theory of Programs, available here.
[2] Theory of Programs, short version of [1] (meant for quick understanding of the ideas, not for publication), available here.

 


Detecting deadlock automatically? (New paper)

To verify sequential programs, we have to prove that they do the right thing, but also that they do it within our lifetime — that they terminate. The termination problem is considerably harder with concurrent programs, since they add a new form of non-termination: deadlock. A set of concurrent processes or threads will deadlock if they end up each holding a resource that another wants and wanting a resource that another holds.

There is no general solution to the deadlock problem, not even a “good enough” general solution. (“Good enough” is the best we can hope for, since like many important problems deadlock is undecidable.) It is already hard enough to provide run-time deadlock detection, to be able at least to cancel execution when deadlock happens. The research reported in this new paper [1] pursues the harder goal of static detection. It applies to an object-oriented context (specifically the SCOOP model of concurrent OO computation) and relies fundamentally on the alias calculus, a static alias analysis technique developed in previous publications.

The approach is at its inception and considerable work remains to be done. Still, the example handled by the paper is encouraging: analyzing two versions of the dining philosophers problem and proving — manually — that one can deadlock and the other cannot.

References

[1] Bertrand Meyer: An automatic technique for static deadlock prevention, in PSI 2014 (Ershov Informatics Conference), eds. Irina Virbitskaite and Andrei Voronkov, Lecture Notes in Computer Science, Springer, 2015, to appear; draft available here.


Lampsort

 

In support of his view of software methodology, Leslie Lamport likes to use the example of non-recursive Quicksort. Independently of the methodological arguments, his version of the algorithm should be better known. In fact, if I were teaching “data structures and algorithms” I would consider introducing it first.

As far as I know he has not written down his version in an article, but he has presented it in lectures; see [1]. His trick is to ask the audience to give a non-recursive version of Quicksort, and of course everyone starts trying to remove the recursion, for example by making the stack explicit or looking for invertible functions in calls. But his point is that recursion is not at all fundamental in Quicksort. The recursive version is a specific implementation of a more general idea.

Lamport’s version — let us call it Lampsort — is easy to express in Eiffel. We may assume the following context:

a: ARRAY [G -> COMPARABLE]        — The array to be sorted.
pivot: INTEGER                                      —  Set by partition.
picked: INTEGER_INTERVAL            — Used by the sorting algorithm, see below.
partition (i, j: INTEGER)
……..require      — i..j is a sub-interval of the array’s legal indexes:
……..……..i < j
……..……..i >= a.lower
……..……..j <= a.upper
……..do
……..……..… Usual implementation of partition
……..ensure     — The expected effect of partition:
……..……..pivot >= i
……..……..pivot < j
……..……..a [i..j] has been reshuffled so that elements in i..pivot are less than
……..……..or equal to those in pivot+1 .. j.
……..end

We do not write the implementation of partition since the point of the present discussion is the overall algorithm. In the usual understanding, that algorithm consists of doing nothing if the array has no more than one element, otherwise performing a partition and then recursively calling itself on the two resulting intervals. The implementation can take advantage of parallelism by forking the recursive calls out to different processors. That presentation, says Lamport, describes only a possible implementation. The true Quicksort is more general. The algorithm works on a set not_sorted of integer intervals i..j such that the corresponding array slices a [i..j] are the only ones possibly not sorted; the goal of the algorithm is to make not_sorted empty, since then we know the entire array is sorted. In Eiffel we declare this set as:

not_sorted: SET [INTEGER_INTERVAL]

The algorithm initializes not_sorted to contain a single element, the entire interval; at each iteration, it removes an interval from the set, partitions it if that makes sense (i.e. the interval has more than one element), and inserts the resulting two intervals into the set. It ends when not_sorted is empty. Here it is:

……..from                                 — Initialize interval set to contain a single interval, the array’s entire index range:
……..…..create not_sorted.make_one (a.lower |..| a.upper)
……..invariant
……..…..— See below
……..until
……..…..not_sorted.is_empty                                                            — Stop when there are no more intervals in set
……..loop
……..…..picked := not_sorted.item                                                     — Pick an interval from (non-empty) interval set.
……..……if picked.count > 1 then                                                      — (The precondition of partition holds, see below.)
……..……..…..partition (picked.lower, picked.upper)                 — Split, moving small items before & large ones after pivot.
……..……..…..not_sorted.extend (picked.lower |..| pivot)            — Insert new intervals into the set of intervals: first
……..……....not_sorted.extend (pivot + 1 |..| picked.upper)     — and second.
……..……end
……..…...not_sorted.remove (picked)                                               — Remove interval that was just partitioned.
…….end

Eiffel note: the function yielding an integer interval is declared in the library class INTEGER using the operator |..| (rather than just  ..).

The query item from SET, with the precondition not is_empty,  returns an element of the set. It does not matter which element. In accordance with the Command-Query Separation principle, calling item does not modify the set; to remove the element you have to use the command remove. The command extend adds an element to the set.

The abstract idea behind Lampsort, explaining why it works at all, is the following loop invariant (see [2] for a more general discussion of how invariants provide the basis for understanding loop algorithms). We call a “slice” of an array a non-empty contiguous sub-array; for adjacent slices we may talk of concatenation; also, for slices s and t, s <= t means that every element of s is less than or equal to every element of t. The invariant is:

a is the concatenation of the members of a set slices of disjoint slices, such that:
– The elements of a are a permutation of its original elements.
– The index range of any member  of slices having more than one element is in not_sorted.
– For any adjacent slices s and t (with s before t), s <= t.
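
In symbols, one possible formalization of this invariant reads as follows (my own rendering of the clauses above, with a_0 denoting the initial value of the array, bag(a) its multiset of elements and range(s) the index interval of a slice s):

\[
\begin{aligned}
& a = s_1 \mathbin{+\!\!+} s_2 \mathbin{+\!\!+} \cdots \mathbin{+\!\!+} s_k && \text{for the members } s_1, \dots, s_k \text{ of } \mathit{slices}, \text{ in index order}\\
& \operatorname{bag}(a) = \operatorname{bag}(a_0) && \text{(the elements are a permutation of the original ones)}\\
& \forall s \in \mathit{slices}:\; |s| > 1 \;\Rightarrow\; \operatorname{range}(s) \in \mathit{not\_sorted}\\
& \forall \text{ adjacent } s, t \in \mathit{slices} \text{ with } s \text{ before } t:\; s \le t
\end{aligned}
\]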

The first condition (conservation of the elements modulo permutation) is a property of partition, the only operation that can modify the array. The rest of the invariant is true after initialization (from clause) with slices made of a single slice, the full array. The loop body maintains it since it either removes a one-element interval from not_sorted (slices loses the corresponding slice) or performs partition with the effect of partitioning one slice into two adjacent ones satisfying s <= t, whose intervals replace the original one in not_sorted. On exit, not_sorted is empty, so slices is a set of one-element slices, each less than or equal to the next, ensuring that the array is sorted.

The invariant also ensures that the call to partition satisfies that routine’s precondition.

The Lampsort algorithm is a simple loop; it does not use recursion, but relies on an interesting data structure, a set of intervals. It is not significantly longer or more difficult to understand than the traditional recursive version:

sort (i, j: INTEGER)
……..require
……..……..i <= j
……..……..i >= a.lower
……..……..j <= a.upper
……..do
……..……if j > i then                    — Note that precondition of partition holds.
……..……..…..partition (i, j)         — Split into two slices s and t such that s <= t.
……..……..…..sort (i, pivot)          — Recursively sort first slice.
……..……..…..sort (pivot+1, j)      — Recursively sort second slice.
……..……end
……..end

Lampsort, in its author’s view, captures the true idea of Quicksort; the recursive version, and its parallelized variants, are only examples of possible implementations.

I wrote at the start that the focus of this article is Lampsort as an algorithm, not issues of methodology. Let me, however, give an idea of the underlying methodological debate. Lamport uses this example to emphasize the difference between algorithms and programs, and to criticize the undue attention being devoted to programming languages. He presents Lampsort in a notation which he considers to be at a higher level than programming languages, and it is for him an algorithm rather than a program. Programs will be specific implementations guided in particular by efficiency considerations. One can derive them from higher-level versions (algorithms) through refinement. A refinement process may in particular remove or restrict non-determinism, present in the above version of Lampsort through the query item (whose only official property is that it returns an element of the set).

The worldview underlying the Eiffel method is almost the reverse: treating the whole process of software development as a continuum; unifying the concepts behind activities such as requirements, specification, design, implementation, verification, maintenance and evolution; and working to resolve the remaining differences, rather than magnifying them. Anyone who has worked in both specification and programming knows how similar the issues are. Formal specification languages look remarkably like programming languages; to be usable for significant applications they must meet the same challenges: defining a coherent type system, supporting abstraction, providing good syntax (clear to human readers and parsable by tools), specifying the semantics, offering modular structures, allowing evolution while ensuring compatibility. The same kinds of ideas, such as an object-oriented structure, help on both sides. Eiffel as a language is the notation that attempts to support this seamless, continuous process, providing tools to express both abstract specifications and detailed implementations. One of the principal arguments for this approach is that it supports change and reuse. If everything could be fixed from the start, maybe it could be acceptable to switch notations between specification and implementation. But in practice specifications change and programs change, and a seamless process relying on a single notation makes it possible to go back and forth between levels of abstraction without having to perform repeated translations between levels. (This problem of change is, in my experience, the biggest obstacle to refinement-based approaches. I have never seen a convincing description of how one can accommodate specification changes in such a framework without repeating the whole process. Inheritance, by the way, addresses this matter much better.)

The example of Lampsort in Eiffel suggests that a good language, equipped with the right abstraction mechanisms, can be effective at describing not only final implementations but also abstract algorithms. It does not hurt, of course, that these abstract descriptions can also be executable, at the possible price of non-optimal performance. The transformation to an optimal version can happen entirely within the same method and language.

Quite apart from these discussions of software engineering methodology, Lamport’s elegant version of Quicksort deserves to be known widely.

References

[1] Lamport video here, segment starting at 0:32:34.
[2] Carlo Furia, Bertrand Meyer and Sergey Velder: Loop invariants: Analysis, Classification and Examples, in ACM Computing Surveys, September 2014, preliminary text here.


New article: passive processors

 

The SCOOP concurrency model has a clear division of objects into “regions”, improving the clarity and reliability of concurrent programs by establishing a close correspondence between the object structure and the process structure. Each region has an associated “processor”, which executes operations on the region’s objects. A literal application of this rule implies, however, a severe performance penalty. As part of the work for his PhD thesis (defended two weeks ago), Benjamin Morandi found out that a mechanism for specifying certain processors as “passive” yields a considerable performance improvement. The paper, to be published at COORDINATION, describes the technique and its applications.

Reference

Benjamin Morandi, Sebastian Nanz and Bertrand Meyer: Safe and Efficient Data Sharing for Message-Passing Concurrency, to appear in proceedings of COORDINATION 2014, 16th International Conference on Coordination Models and Languages, Berlin, 3-6 June 2014, draft available here.


LASER 2014 (Elba, September)

2014 marks the 10th anniversary (11th edition) of the LASER summer school. The school will be held September 7-14, 2014, and the detailed information is here.

LASER (the name means Laboratory for Applied Software Engineering Research) is dedicated to practical software engineering. The roster of speakers since we started is a who’s who of innovators in the field. Some of the flavor of the school can be gathered from the three proceedings volumes published in Springer LNCS (more on the way) or simply by browsing the pages of the schools from previous years.

Usually we have a theme, but to mark this anniversary we decided to go for speakers first; we do have a title, “Leading-Edge Software Engineering”, but one broad enough to encompass a wide range of topics presented by star speakers: Harald Gall, Daniel Jackson, Michael Jackson, Erik Meijer (appearing at LASER for the third time!), Gail Murphy and Moshe Vardi. With such a cast you can expect to learn something important regardless of your own primary specialty.

LASER is unique in its setting: a 5-star hotel in the island paradise of Elba, with outstanding food and countless opportunities for exploring the marvelous land, the beaches, the sea, the geology (since antiquity Elba has been famous for its stones and minerals) and the history, from the Romans to Napoleon, who in the 9 months of his reign changed the island forever. The school is serious stuff (8:30 to 13 and 17 to 20 every day), but with enough time to enjoy the surroundings.

Registration is open now.


PhD positions in concurrency/distribution/verification at ETH

As part of our “Concurrency Made Easy” ERC Advanced Investigator Grant project (2012-2017), we are offering PhD positions at the Chair of Software Engineering of ETH Zurich. The goal of the project is to build a sophisticated programming and verification architecture to make concurrent and distributed programming simple and reliable, based on the ideas of Eiffel and particularly the SCOOP concurrency model. Concurrency in its various forms (particularly multithreading) as well as distributed computing are required for most of today’s serious programs, but programming concurrent applications remains a challenge. The CME project is determined to break this complexity barrier.  Inevitably, achieving simplicity for users (in this case, application programmers) requires, under the hood, a sophisticated infrastructure, both conceptual (theoretical models) and practical (the implementation). We are building that infrastructure.

ETH offers an outstanding research and education environment and competitive salaries for “assistants” (PhD students), who are generally expected in addition to their research to participate in teaching, in particular introductory programming, and other activities of the Chair.  The candidates we seek have: a master’s degree in computer science or related field from a recognized institution (as required by ETH); a strong software engineering background, both practical and theoretical, and more generally a strong computer science and mathematical culture; a good knowledge of verification techniques (e.g. Hoare-style, model-checking, abstract interpretation); some background in concurrency or distribution; and a passion for high-quality software development. Prior publications, and experience with Eiffel, are pluses. In line with ETH policy, particular attention will be given to female candidates.

Before applying, you should become familiar with our work; see in particular the research pages at se.ethz.ch including the full description of the CME project at cme.ethz.ch.

Candidates should send (in PDF or text) to se-open-positions@lists.inf.ethz.ch a CV and a short cover letter describing their view of the CME project and ideas about their possible contribution.


Concurrency video

Our Concurrency Made Easy project, the result of an ERC Advanced Investigator Grant, is trying to solve the problem of making concurrent programming simple, reliable and effective. It has spurred related efforts, in particular the Roboscoop project applying concurrency to robotics software.

Sebastian Nanz and other members of the CME project at ETH have just produced a video that describes the aims of the project and presents some of the current achievements. The video is available on the CME project page [1] (also directly on YouTube [2]).

References

[1] Concurrency Made Easy project, here.

[2] YouTube CME video, here.


LASER summer school: Software for the Cloud and Big Data

The 2013 LASER summer school, organized by our chair at ETH, will take place September 8-14, once more in the idyllic setting of the Hotel del Golfo in Procchio, on the island of Elba in Italy. This is already the 10th conference; the roster of speakers so far reads like a who’s who of software engineering.

The theme this year is Software for the Cloud and Big Data and the speakers are Roger Barga from Microsoft, Karin Breitman from EMC, Sebastian Burckhardt from Microsoft, Adrian Cockcroft from Netflix, Carlo Ghezzi from Politecnico di Milano, Anthony Joseph from Berkeley, Pere Mato Vila from CERN and I.

LASER always has a strong practical bent, but this year it is particularly pronounced as you can see from the list of speakers and their affiliations. The topic is particularly timely: exploring the software aspects of game-changing developments currently redefining the IT scene.

The LASER formula is by now well-tuned: lectures over seven days (Sunday to Saturday), about five hours in the morning and three in the early evening, by world-class speakers; free time in the afternoon to enjoy the magnificent surroundings; 5-star accommodation and food in the best hotel of Elba, made affordable as we come towards the end of the season (and are valued long-term customers). The group picture below is from last year’s school.

Participants are from both industry and academia and have ample opportunities for interaction with the speakers, who typically attend each others’ lectures and engage in in-depth discussions. There is also time for some participant presentations; a free afternoon to discover Elba and brush up on your Napoleonic knowledge; and a boat trip on the final day.

Information about the 2013 school can be found here.

LASER 2012, Procchio, Hotel del Golfo


Negative variables and the essence of object-oriented programming (new paper)

In modeling object-oriented programs, for purposes of verification (proofs) or merely for a better understanding, we are faced with the unique “general relativity” property of OO programming: all the operations you write (excluding non-OO mechanisms such as static functions) are expressed relative to a “current object” which changes repeatedly during execution. More precisely at the start of a call x.r (…) and for the duration of that call the current object changes to whatever x denotes — but to determine that object we must again interpret x in the context of the previous current object. This raises a challenge for reasoning about programs; for example in a routine the notation f.some_reference, if f is a formal argument, refers to objects in the context of the calling object, and we cannot apply standard rules of substitution as in the non-OO style of handling calls.

In earlier work [1, 2], initially motivated by the development of the Alias Calculus, I introduced a notion of negative variable to deal with this issue. During the execution of a call x.r (…) the negation of x, written x’, represents a back pointer to the calling object; negative variables are characterized by axiomatic properties such as x.x’ = Current and x’.(old x) = Current. Alexander Kogtenkov has implemented these ideas and refined them.

Negative variable as back pointer

In a recent paper under submission [3], we review the concepts and applications of negative variables.

References

[1] Bertrand Meyer: Steps Towards a Theory and Calculus of Aliasing, in International Journal of Software and Informatics, 2011, available here.

[2] Bertrand Meyer: Towards a Calculus of Object Programs, in Patterns, Programming and Everything, Judith Bishop Festschrift, eds. Karin Breitman and Nigel Horspool, Springer-Verlag, 2012, pages 91-128, available here.

[3] Bertrand Meyer and Alexander Kogtenkov: Negative Variables and the Essence of Object-Oriented Programming, submitted for publication, 2012. [Updated 13 January 2014: I have removed the link to the draft mentioned in this post since it is now superseded by the new version, soon to be published, and available here.]


The manhood test

 

I came across an obscure and surprisingly interesting article by Cliff Jones [1] about the history of rely-guarantee reasoning, which contains the following extract:

It was perhaps not fully appreciated at the time of [Hoare’s 1969 axiomatic semantics paper] that the roles of pre and post conditions differ in that a pre condition gives permission to a developer to ignore certain possibilities; the onus is on a user to prove that a component will not be initiated in a state that does not satisfy its pre condition. In contrast a post condition is an obligation on the code that is created according to the specification. This Deontic view carries over [to rely-guarantee reasoning].

I use words more proletarian than “deontic”, but this view is exactly what stands behind the concepts of Design by Contract and has been clearly emphasized in all Eiffel literature ever since the first edition of OOSC. It remains, however, misunderstood outside of the Eiffel community; many people confuse Design by Contract with its opposite, defensive programming. The criterion is simple: if you have a precondition to a routine, are you willing entirely to forsake the corresponding checks (conditionals, exceptions…) in the routine body? If not, you may be using the word “contract” as a marketing device, but that’s all. The courage to remove the checks is the true test of adulthood.
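
To make the criterion concrete, here is a minimal sketch in Eiffel (a toy class of my own, not taken from any library or paper discussed here): the precondition of put states the caller’s obligation, and the body performs the insertion and nothing else, with no defensive re-check of the contract.

class
	BOUNDED_BUFFER

create
	make

feature {NONE} -- Initialization

	make (n: INTEGER)
			-- Create an empty buffer of capacity `n'.
		require
			positive_capacity: n > 0
		do
			capacity := n
			create items.make_filled (0, 1, n)
		ensure
			empty: count = 0
		end

feature -- Access

	count: INTEGER
			-- Number of elements currently in the buffer.

	capacity: INTEGER
			-- Maximum number of elements.

	is_full: BOOLEAN
			-- Is the buffer full?
		do
			Result := (count = capacity)
		end

feature -- Element change

	put (x: INTEGER)
			-- Add `x' to the buffer.
			-- The precondition, not a run-time test in the body, rules out full buffers.
		require
			not_full: not is_full
		do
			count := count + 1
			items [count] := x
		ensure
			one_more: count = old count + 1
		end

feature {NONE} -- Implementation

	items: ARRAY [INTEGER]
			-- Storage for the elements.

end

A defensive version would test is_full again inside put and trigger an exception; under the view described above, keeping such a test while advertising the precondition means the contract is only decoration.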

The application of Microsoft’s “Code Contracts” mechanism to the .NET libraries fails that test: a precondition may say “buffer not full” or “insertions allowed”, but the code still checks the condition and triggers an exception. The excuse I have heard is that one cannot trust those unwashed developers. But the methodological discipline is lost. Now let me repeat this using clearer terminology: it’s not deontic.

Reference

[1] Cliff Jones: The role of auxiliary variables in the formal development of concurrent programs, in Reflections on the work of C. A. R. Hoare, eds. Jones, Roscoe and Wood, Springer Lecture Notes in Computer Science,  2009, technical report version available here.


New LASER proceedings

Springer has just published in the tutorial sub-series of Lecture Notes in Computer Science a new proceedings volume for the LASER summer school [1]. The five chapters are notes from the 2008, 2009 and 2010 schools (a previous volume [2] covered earlier schools). The themes range over search-based software engineering (Mark Harman and colleagues), replication of software engineering experiments (Natalia Juristo and Omar Gómez), integration of testing and formal analysis (Mauro Pezzè and colleagues), and, in two papers by our ETH group, Is branch coverage a good measure of testing effectiveness (with Yi Wei and Manuel Oriol — answer: not really!) and a formal reference for SCOOP (with Benjamin Morandi and Sebastian Nanz).

The idea of these LASER tutorial books — which are now a tradition, with the volume from the 2011 school currently in preparation — is to collect material from the presentations at the summer school, prepared by the lecturers themselves, sometimes in collaboration with some of the participants. Reading them is not quite as fun as attending the school, but it gives an idea.

The 2012 school is in full preparation, on the theme of “Advanced Languages for Software Engineering” and with once again an exceptional roster of speakers, or should I say an exceptional roster of exceptional speakers: Guido van Rossum (Python), Ivar Jacobson (from UML to Semat), Simon Peyton-Jones (Haskell), Roberto Ierusalimschy (Lua), Martin Odersky (Scala), Andrei Alexandrescu (C++ and D), Erik Meijer (C# and LINQ), plus me on the design and evolution of Eiffel.

The preparation of LASER 2012 is under way, with registration now open [3]; the school will take place from Sept. 2 to Sept. 8 and, like its predecessors, in the wonderful setting on the island of Elba, off the coast of Tuscany, with a very dense technical program but time for enjoying the beach, the amenities of a 4-star hotel and the many treasures of the island. On the other hand not everyone likes Italy, the sun, the Mediterranean etc.; that’s fine too, you can wait for the 2013 proceedings.

References

[1] Bertrand Meyer and Martin Nordio (eds): Empirical Software Engineering and Verification, International Summer Schools LASER 2008-2010, Elba Island, Italy, Revised Tutorial Lectures, Lecture Notes in Computer Science 7007, Springer-Verlag, 2012, see here.

[2] Peter Müller (ed.): Advanced Lectures on Software Engineering, LASER Summer School 2007-2008, Lecture Notes in Computer Science, Springer-Verlag, see here.

[3] LASER summer school information and registration form, http://se.ethz.ch/laser.


ERC Advanced Investigator Grant: Concurrency Made Easy

In April we will be starting the “Concurrency Made Easy” research project, the result of a just-announced Advanced Investigator Grant from the European Research Council. Such ERC grants are awarded to a specific person, rather than a consortium of research organizations as in the usual EU funding scheme. The usual amount, which applies in my case, is 2.5 million euros (currently almost 3.3 million dollars) over five years, on a specific theme. According to the ERC’s own description [1],

ERC Advanced Grants allow exceptional established research leaders of any nationality and any age to pursue ground-breaking, high-risk projects that open new directions in their respective research fields or other domains.

This is the most sought-after research funding instrument of the EU, with a success rate of about 12% [2], out of a group already preselected by the host institutions. What makes ERC Advanced Investigator Grants so coveted is the flexibility of the scheme (no constraints on the topic, light administrative baggage) and the trust that an award implies in a particular researcher and his ability to carry out advanced research.

The name of the CME project clearly signals its ambition: to turn concurrent programming into a normal, unheroic part of programming. Today adding concurrency to a program, usually in the form of multithreading, is very hard, fraught with complexity and risks of all kinds. Everyone is telling us that we must rethink programming, retrain programmers and revamp curricula to put the specific reasoning modes of concurrent programming at the center. I don’t think this can work; thinking concurrently is just too hard to become the default mode. Instead, we should adapt programming languages, theories and tools so that programmers can continue to apply the reasoning schemes that have proved so successful in classical programming, especially object-oriented programming with the benefit of Design by Contract.

The starting point is the SCOOP model, to which I started an introduction in an earlier article of this blog [3], with a sequel yet to come. SCOOP is a minimal extension to the O-O framework to support concurrency, yielding very simple (the S in the acronym) solutions to concurrent programming problems. As part of the CME project we plan to develop it in many different directions and establish a sound and effective formal basis.

I have put the project description — the scientific part of the actual proposal text accepted by the ERC — online [4].

In the next few weeks I will be publishing here specific announcements for the positions we are seeking to fill very quickly; they include postdocs, PhD students, and one research engineer. We are looking for candidates with excellent knowledge and practice of concurrency, Eiffel, formal techniques etc. The formal application procedure will be Web-based and is not in place yet but you can contact me if you fit the profile and are interested.

We can defeat the curse: concurrent programming (an obligatory condition of any path towards a successful future for information technology) does not have to be black magic. It can be made simple and efficient. Such is the challenge of the CME project.

References

[1] European Research Council: Advanced Grants, available here.

[2] European Research Council: Press release on 2011 Advanced Investigator Grants, 24 January 2012, available here.

[3] Concurrent Programming is Easy, article from this blog, available here.

[4] CME Advanced Investigator Grant project description, available here.


Concurrency seminars

Ever more people are realizing that concurrency is at the center of IT challenges for the next decades. Concurrent programming remains as hard as ever; we have put together a one-day seminar that helps understand the concepts and build successful concurrent applications. The sessions for the first few months of 2012 are:

  • Palo Alto (February 15)
  • Zurich (March 2)
  • London (March 22)
  • Paris (May 10)
  • Stockholm (June 15)
  • Seattle (July 20)

and the seminar program is available here.

 

 


The story of our field, in a few short words

 

(With all dues to [1], but going up from four to five as it is good to be brief yet not curt.)

At the start there was Alan. He was the best of all: built the right math model (years ahead of the real thing in any shape, color or form); was able to prove that no one among us can know for sure if his or her loops — or their code as a whole — will ever stop; got to crack the Nazis’ codes; and in so doing kind of saved the world. Once the war was over he got to build his own CPUs, among the very first two or three of any sort. But after the Brits had used him, they hated him, let him down, broke him (for the sole crime that he was too gay for the time or at least for their taste), and soon he died.

There was Ed. Once upon a time he was Dutch, but one day he got on a plane and — voilà! — the next day he was a Texan. Yet he never got the twang. The first topic that had put him on  the map was the graph (how to find a path, as short as can be, from a start to a sink); he also wrote an Algol tool (the first I think to deal with all of Algol 60), and built an OS made of many a layer, which he named THE in honor of his alma mater [2]. He soon got known for his harsh views, spoke of the GOTO and its users in terms akin to libel, and wrote words, not at all kind, about BASIC and PL/I. All this he aired in the form of his famed “EWD”s, notes that he would xerox and send by post along the globe (there was no Web, no Net and no Email back then) to pals and foes alike. He could be kind, but often he stung. In work whose value will last more, he said that all we must care about is to prove our stuff right; or (to be more close to his own words) to build it so that it is sure to be right, and keep it so from start to end, the proof and the code going hand in hand. One of the keys, for him, was to use as a basis for ifs and loops the idea of a “guard”, which does imply that the very same code can in one case print a value A and in some other case print a value B, under the watch of an angel or a demon; but he said this does not have to be a cause for worry.

At about that time there was Wirth, whom some call Nick, and Hoare, whom all call Tony. (“Tony” is short for a list of no less than three long first names, which makes for a good quiz at a party of nerds — can you cite them all from rote?) Nick had a nice coda to Algol, which he named “W”; what came after Algol W was also much noted, but the onset of Unix and hence of C cast some shade over its later life. Tony too did much to help the field grow. Early on, he had shown a good way to sort an array real quick. Later he wrote that for every type of unit there must be an axiom or a rule, which gives it an exact sense and lets you know for sure what will hold after every run of your code. His fame also comes from work (based in part on Ed’s idea of the guard, noted above) on the topic of more than one run at once, a field that is very hot today as the law of Moore nears its end and every maker of chips has moved to  a mode where each wafer holds more than one — and often many — cores.

Dave (from the US, but then at work under the clime of the North) must not be left out of this list. In a paper pair, both from the same year and both much cited ever since,  he told the world that what we say about a piece of code must only be a part, often a very small part, of what we could say if we cared about every trait and every quirk. In other words, we must draw a clear line: on one side, what the rest of the code must know of that one piece; on the other, what it may avoid to know of it, and even not care about. Dave also spent much time to argue that our specs must not rely so much on logic, and more on a form of table.  In a later paper, short and sweet, he told us that it may not be so bad that you do not apply full rigor when you chart your road to code, as long as you can “fake” such rigor (his own word) after the fact.

Of UML, MDA and other such TLAs, the less be said, the more happy we all fare.

A big step came from the cold: not just one Norse but two, Ole-J (Dahl) and Kris, came up with the idea of the class; not just that, but all that makes the basis of what today we call “O-O”. For a long time few would heed their view, but then came Alan (Kay), Adele and their gang at PARC, who tied it all to the mouse and icons and menus and all the other cool stuff that makes up a good GUI. It still took a while, and a lot of hit and miss, but in the end O-O came to rule the world.

As to the math basis, it came in part from MIT — think Barb and John — and the idea, known as the ADT (not all TLAs are bad!), that a data type must be known at a high level, not from the nuts and bolts.

There also is a guy with a long first name (he hates it when they call him Bert) but a short last name. I feel a great urge to tell you all that he did, all that he does and all that he will do, but much of it uses long words that would seem hard to fit here; and he is, in any case, far too shy.

It is not all about code and we must not fail to note Barry (Boehm), Watts, Vic and all those to whom we owe that the human side (dear to Tom and Tim) also came to light. Barry has a great model that lets you find out, while it is not yet too late, how much your tasks will cost; its name fails me right now, but I think it is all in upper case.  At some point the agile guys — Kent (Beck) and so on — came in and said we had got it all wrong: we must work in pairs, set our goals to no more than a week away, stand up for a while at the start of each day (a feat known by the cool name of Scrum), and dump specs in favor of tests. Some of this, to be fair, is very much like what comes out of the less noble part of the male of the cow; but in truth not all of it is bad, and we must not yield to the urge to throw away the baby along with the water of the bath.

I could go on (and on, and on); who knows, I might even come back at some point and add to this. On the other hand I take it that by now you got the idea, and even on this last day of the week I have other work to do, so ciao.

Notes

[1] Al’s Famed Model Of the World, In Words Of Four Signs Or Fewer (not quite the exact title, but very close): find it on line here.

[2] If not quite his alma mater in the exact sense of the term, at least the place where he had a post at the time. (If we can trust this entry, his true alma mater would have been Leyde, but he did not stay long.)


PhD position: concurrent programming (SCOOP) for robotics

The ETH Chair of Software Engineering has won a grant from the Hasler Foundation, in a joint project with the Technical University of Lucerne and the Autonomous Systems Lab of ETH, to develop a robotics framework involving concurrent computation. The project, called Roboscoop, will produce a demonstrator system: a “SmartWalker” robot — a robotic version of the “walkers” used by elderly people and others with reduced mobility. The research proposal is available [2].

One of the major goals of the Roboscoop framework is to provide robotics applications with the full possibilities of concurrent programming. Many robotics developments use little or no concurrency, because of the tricky programming involved in using threads and the difficulty of getting such applications right. With SCOOP [1] programmers have a simple, high-level mechanism that removes the risk of data races and other plagues of concurrent programming.

We are looking for someone with a strong background in both software engineering and robotics. If you are interested, please see the position announcement [3].

References

[1] On SCOOP see here and here. See also a YouTube video of a small robot programmed with  SCOOP as part of an earlier student project (by Ganesh Ramanathan).

[2] Roboscoop research proposal, here.

[3] Position announcement: here.


The charming naïveté of an IEEE standard

The IEEE Standard for Requirements Specifications [1], a short and readable text providing concrete and useful advice, is a valuable guide for anyone writing requirements. In our course projects we always require students to follow its recommended structure.

Re-reading it recently, I noticed the following extract  in the section that argues that a  requirements specification should be verifiable (sentence labels in brackets are my addition):

[A] Nonverifiable requirements include statements such as “works well,” “good human interface,” and “shall usually happen.” [B] These requirements cannot be verified because it is impossible to define the terms “good,” “well,” or “usually.”

[C] The statement that “the program shall never enter an infinite loop” is nonverifiable because the testing of this quality is theoretically impossible.

[D] An example of a verifiable statement is
      [E] “Output of the program shall be produced within 20 s of event 60% of the time; and shall be produced within 30 s of event 100% of the time.”
[F] This statement can be verified because it uses concrete terms and measurable quantities.

[A] and [B] are good advice, deserving to be repeated in every software engineering course and to anyone writing requirements. [C], however, is puzzling.

One might initially understand that the authors are telling us that it is impossible to devise a finite set of tests guaranteeing that a program terminates. But on closer examination this cannot be what they mean. Such a statement, although correct, would be uninteresting since it can be applied to any functional requirement: if I say “the program shall accept an integer as input and print out that same integer on the output”, I also cannot test that (trivial) requirement finitely since I would have to try all integers. The same observation applies to the example given in [D, E, F]: the property [E] they laud as an example of a “verifiable” requirement is just as impossible to test exhaustively [2].

Since the literal interpretation of [C] is trivial and applies to essentially all possible requirements, whether bad or good in the authors’ eyes, they must mean something else when they cite loop termination as their example of a nonverifiable requirement. The word “theoretically” suggests what they have in mind: the undecidability results of computation theory, specifically the undecidability of the Halting Problem. It is well known that no general mechanism exists to determine whether an arbitrary program, or even just an arbitrary loop, will terminate. This must be what they are referring to.

Except, of course, that they are wrong. And a very good thing too that they are wrong, since “The program shall never enter an infinite loop” is a pretty reasonable requirement for any system [3].

If we were to accept [C], we would also accept that it is OK for any program to enter an infinite loop every once in a while, because the authors of its requirements were not permitted to specify otherwise! Fortunately for users of software systems, this particular sentence of the standard is balderdash.

What the undecidability result states, of course, is that no general mechanism exists that could examine an arbitrary program or loop and tell us whether it will always terminate. This result in no way excludes the possibility of verifying (although not through “testing”) that a particular program or loop will terminate. If the text of a program shows that it will print “Hello World” and do nothing else, we can safely determine that it will terminate. If a loop is of the form

from i := 1 until i > 10 loop
        print (i)
        i := i + 1
end

there is also no doubt about its termination. More complex examples require the techniques of modern program verification, such as exhibiting a loop variant in the sense of Hoare logic, but they can still be practically tractable.
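
As a small illustration (my own sketch, not an example from the standard or from this article), here is a loop whose termination is less obvious than that of a simple counter but still easy to establish with a variant argument, given here as a comment; Eiffel also lets you state such a variant explicitly so that verification tools can check it:

greatest_common_divisor (a, b: INTEGER): INTEGER
                -- Greatest common divisor of a and b, both assumed positive.
        require
                positive: a > 0 and b > 0
        local
                x, y: INTEGER
        do
                from
                        x := a ; y := b
                until
                        x = y
                loop
                                -- Variant: x.max (y). It remains positive and strictly
                                -- decreases at each iteration, so the loop terminates.
                        if x > y then
                                x := x - y
                        else
                                y := y - x
                        end
                end
                Result := x
        end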

Like many fundamental results of modern science (think of Heisenberg’s uncertainty principle), Turing’s demonstration of the undecidability of the Halting Problem is at the same time simple to state, striking, deep, and easy to misunderstand. It is touchingly refreshing to find such a misunderstanding in an IEEE standard.

Do not let it discourage you from applying the excellent advice of the rest of IEEE 830-1998; but when you write a program, do make sure — whether or not the requirements specify this property explicitly — that all its loops terminate.

References and notes

[1] IEEE Computer Society: IEEE Recommended Practice for Software Requirements Specifications, IEEE Standard 830-1998, revised 1998; available here (with subscription).

[2] The property [E] is actually more difficult to test, even non-exhaustively, than the authors acknowledge, if only because it is a probabilistic requirement, which can only be tested after one has defined appropriate probabilistic hypotheses.

[3] In requesting that all programs must terminate we must of course take note of the special case of systems that are non-terminating by design, such as most embedded systems. Such systems, however, are still made out of components representing individual steps that must terminate. The operating system on your smartphone may need to run forever (or until the next reboot), but the processing of an incoming text message is still, like a traditional program, required to terminate in finite time.

 


Agile methods: the good, the bad and the ugly

It was a bit imprudent last Monday to announce the continuation of the SCOOP discussion for this week; with the TOOLS conference happening now, with many satellite events such as the Eiffel Design Feast of the past week-end and today’s “New Eiffel Technology Community” workshop, there is not enough time for a full article. Next week might also be problematic. The SCOOP series will resume, but in the meantime I will report on other matters.

As something that can be conveniently typed in while sitting in the back of the TOOLS room during fascinating presentations, here is a bit of publicity for the next round of one-day seminars for industry — “Compact Course” is the official terminology — that I will be teaching at ETH in Zurich next November (one of them in October), some with colleagues. It is the most extensive series of sessions we have ever offered; you can see the full programs and registration information here.

  • Software Engineering for Outsourced and Distributed Development, 27 October 2011
    Taught with Peter Kolb and Martin Nordio
  • Requirements Engineering, 17 November
  • Software Testing and Verification: state of the art, 18 November
    With Carlo Furia and Sebastian Nanz
  • Agile Methods: the Good, the Bad and the Ugly, 23 November
  • Concepts and Constructs of Concurrent Computation, 24 November
    With Sebastian Nanz
  • Design by Contract, 25 November

The agile methods course is new; its summary reads almost like a little blog article, so here it is.

Agile methods: the Good, the Bad and the Ugly

Agile methods are wonderful. They’ll give you software in no time at all, turn your customers and users into friends, catch bugs before they catch you, change the world, and boost your love life. Do you believe these claims (even excluding the last two)? It’s really difficult to form an informed opinion, since most of the presentations of eXtreme Programming and other agile practices are intended to promote them (and the consultants to whom they provide a living), not to deliver an objective assessment.

If you are looking for a guru-style initiation to the agile religion, this is not the course for you. What it does is to describe in detail the corpus of techniques covered by the “agile” umbrella (so that you can apply them effectively to your developments), and assess their contribution to software engineering. It is neither “for” nor “against” agile methods but fundamentally descriptive, pedagogical, objective and practical. The truth is that agile methods include some demonstrably good ideas along with some whose benefits are at best dubious. In addition (and this should not be a surprise) they cannot make the fundamental laws of software engineering go away.

Agile methods have now been around for more than a decade, during which many research teams, applying proven methods of experimental science, have performed credible empirical studies of how well the methods really work and how they compare to more traditional software engineering practices. This important body of research results, although not widely known, is critical to managers and developers in industry for deciding whether and how to use agile development. The course surveys these results, emphasizing the ones most directly relevant to practitioners.

A short discussion session will enable participants with experience in agile methods to share their results.

Taking this course will give you a strong understanding of agile development, and a clear view of when, where and how to apply agile methods.

Schedule

Morning session: A presentation of agile methods

  • eXtreme Programming, pair programming, Scrum, Test-Driven Development, continuous integration, refactoring, stakeholder involvement, feature-driven development etc.
  • The agile lifecycle.
  • Variants: lean programming etc.

Afternoon session (I): Assessment of agile methods

  • The empirical software engineering literature: review of available studies. Assessment of their value. Principles of empirical software engineering.
  • Agile methods under the scrutiny of empirical research: what helps, what harms, and what has no effect? How do agile methods fare against traditional techniques?
  • Examples: pair programming versus code reviews; tests versus specifications; iterative development versus “Big Upfront Everything”.

Afternoon session (II): Discussion and conclusion

This final part of the course will present, after a discussion session involving participants with experience in agile methods, a summary of the contribution of agile methods to software engineering.

It will conclude with advice for organizations involved in software development and interested in applying agile methods in their own environment.

Target groups

CIOs; software project leaders; software developers; software testers and QA engineers.


Concurrent programming is easy

EiffelStudio 6.8, released last month, contains the first official implementation of the SCOOP programming model for concurrent programming. This is an important milestone; let me try to explain why.

Concurrency challenging us

Concurrency is the principal stumbling block in the progress of programming. Do not take just my word for it:

  • Intel: “Multi-core processing is taking the industry on a fast-moving and exciting ride into profoundly new territory. The defining paradigm in computing performance has shifted inexorably from raw clock speed to parallel operations and energy efficiency” [1].
  • Rick Rashid (head of Microsoft Research):  “Multicore processors represent one of the largest technology transitions in the computing industry today, with deep implications for how we develop software.” [2].
  • Bill Gates: “Multicore: This is the one which will have the biggest impact on us. We have never had a problem to solve like this. A breakthrough is needed in how applications are done on multicore devices.” [3]
  • David Patterson: “Industry has basically thrown a Hail Mary. The whole industry is betting on parallel computing. They’ve thrown it, but the big problem is catching it.” [4]
  • Gordon Bell: “I’m skeptical until I see something that gives me some hope…  the machines are here and we haven’t got it right.” [4].

What has happened? Concurrency  used to be a highly specialized domain of interest to a small minority of programmers building operating systems and networking systems and database engines. Just about everyone else could live comfortably pretending that the world was sequential. And then suddenly we all need to be aware of concurrency. The principal reason is the end of Moore’s law as we know it [5].

The end of Moore's law as we know it

This chart shows that we can no longer rely on the automatic and regular improvement to our programs’ performance, roughly by a factor of two every two years, thanks to faster chips. The free lunch is over; continued performance increases require taking advantage of concurrency, in particular through multithreading.

Performance is not the only reason for getting into concurrency. Another one is user convenience: ever since the first browser showed that one could write an email and load a Web page in the same window, users have been clamoring for multithreaded applications. Yet another source of concurrency requirements is the need to produce Internet and Web applications.

How do programmers write these applications? The almost universal answer relies on threading mechanisms, typically offered through some combination of language and library mechanisms: Java Threads, .NET threading, POSIX threads, EiffelThreads. The underlying techniques are semaphores and mutexes: nineteen-sixties vintage concepts, rife with risks of data races (access conflicts to a variable or resource, leading to crashes or incorrect computations) and deadlocks (where the system hangs). These risks are worse than the classical bugs of sequential programs because they are very difficult to detect through testing.

Ways to tame the beast

Because the need is so critical, the race is on — a “frantic” race in the words of a memorable New York Times article by John Markoff [4] — to devise a modern programming framework that will bring concurrent programming under control. SCOOP is a contender in this battle. In this post and the next I will try to explain why we think it is exactly what the world needs to tame concurrency.

The usual view, from which SCOOP departs, is that concurrent programming is intrinsically hard and requires a fundamental change in the way programmers think. Indeed some of the other approaches that have attracted attention imply radical departures from accepted programming paradigms:

  • Concurrency calculi such as CSP [6, 7], CCS [8] and the π-Calculus [9] define high-level mathematical frameworks addressing concurrency, but they are very far from the practical concerns of programmers. An even more serious problem is that they focus on only some aspects of programming; being concurrent is only one property of a program, among many others (needing a database, relying on a graphical user interface, using certain data structures, performing certain computations…). We need mechanisms that integrate concurrency with all the other mechanisms that a program uses.
  • Functional programming languages have also offered interesting idioms for concurrency, taking advantage of the non-imperative nature of functional programming. Advocacy papers have argued for Haskell [10] and Erlang [11] in this role. But should the world renounce other advances of modern software engineering, in particular object-oriented programming, for the sake of these mechanisms? Few people are prepared to take that step, and (as I have discussed in a detailed article [12]) the advantages of functional programming are counterbalanced by the superiority of the object-oriented model in its support for the modular construction of realistic systems.

What if we did not have to throw away everything and relearn programming from the ground up for concurrency? What if we could retain the benefits of five decades of software progress, as crystallized in modern object-oriented programming? This is the conjecture behind SCOOP: that we can benefit from all the techniques we have learned to make our software reliable, extendible and reusable, and add concurrency to the picture in an incremental way.

From sequential to concurrent

A detailed presentation of SCOOP will have to wait until next Monday, but let me give you a hint and, I hope, whet your appetite by describing how to move a typical example from sequential to concurrent. Here is a routine for transferring money between two accounts:

transfer (amount: INTEGER ; source, target: ACCOUNT)
               -- Transfer amount dollars from source to target.
        require
               enough: source·balance >= amount
        do
         source·withdraw (amount)
         target·deposit (amount)
        ensure
               removed: source·balance = old source·balance - amount
               added: target·balance = old target·balance + amount
        end

The caller must satisfy the precondition, requiring the source account to have enough money to withdraw the requested amount; the postcondition states that the source account will then be debited, and the target account credited, by that amount.

Now assume that we naïvely apply this routine in a concurrent context, with concurrent calls

        if acc1·balance >= 100 then transfer (100, acc1, acc2) end

and

        if acc1·balance >= 100 then transfer (100, acc1, acc3) end

If the original balance on acc1 is 100, it would be perfectly possible in the absence of a proper concurrency mechanism that both calls, as they reach the test acc1·balance >= 100, find the property to be true and proceed to do the transfer — but incorrectly since they cannot both happen without bringing the balance of acc1 below zero, a situation that the precondition of transfer and the tests were precisely designed to rule out. This is the classic data race. To avoid it in the traditional approaches, you need complicated and error-prone applications of semaphores or conditional critical regions (the latter with their “wait-and-signal” mechanism, just as clumsy and low-level as the operations on semaphores).
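
To make the “complicated and error-prone” part concrete, here is a rough sketch of what a thread-based fix might look like. It is only an illustration under explicit assumptions: the ACCOUNT class of the example has no id or mutex features, and lock and unlock stand for whatever operations the threading library provides. The point is that the programmer must reserve both accounts explicitly, and must always do so in a consistent global order, since two transfers locking the same accounts in opposite orders can deadlock:

transfer_with_locks (amount: INTEGER ; source, target: ACCOUNT)
                -- Hypothetical thread-based version: all reservation logic is manual.
        do
                if source.id < target.id then
                                -- Lock in increasing id order to avoid a deadlock between
                                -- two concurrent transfers over the same pair of accounts.
                        source.mutex.lock
                        target.mutex.lock
                else
                        target.mutex.lock
                        source.mutex.lock
                end
                if source.balance >= amount then
                                -- The balance test must be made inside the locked region,
                                -- otherwise the data race described above reappears.
                        source.withdraw (amount)
                        target.deposit (amount)
                end
                source.mutex.unlock
                target.mutex.unlock
        end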

In SCOOP, such data races, and data races of any other kind, cannot occur. If the various objects involved are to run in separate threads of control, the declaration of the routine will be of the form

transfer (amount: INTEGER ; source, target: separate ACCOUNT)
               -- The rest of the routine exactly as before.

where separate is the only language keyword specific to SCOOP. This addition of the separate marker does the trick; it will result in the following behavior:

  • Every call to transfer is guaranteed exclusive access to both separate arguments (the two accounts).
  • This simultaneous reservation of multiple objects (a particularly tricky task when programmers must take care of it through their own programs, as they must in traditional approaches) is automatically guaranteed by the SCOOP scheduler. The calls wait as needed.
  • As a consequence, the conditional instructions (if then) are no longer needed. Just call transfer and rely on SCOOP to do the synchronization and guarantee correctness.
  • As part of this correctness guarantee, the calls may have to wait until the preconditions hold, in other words until there is enough money on the account.

This is the desired behavior in the transition from sequential to concurrent. It is achieved here not by peppering the code with low-level concurrent operations, nor by moving to a completely different programming scheme, but by simply declaring which objects are “separate” (potentially running elsewhere).
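
To see what “simply declaring” means in code, here is a minimal sketch of a possible client; the class name, the creation details and the use of three accounts are illustrative, not part of this article, but the transfer routine is the one shown above with its separate arguments:

class BANK_CLIENT
create
        make
feature
        acc1, acc2, acc3: separate ACCOUNT
                        -- Accounts that may be handled by other SCOOP processors.

        make
                        -- Create the accounts, each on its own processor.
                do
                        create acc1 ; create acc2 ; create acc3
                end

        pay_both
                        -- Issue both transfers with no explicit synchronization:
                        -- each call waits, if needed, until the precondition of
                        -- transfer holds, and then has exclusive access to both
                        -- accounts for the duration of the call.
                do
                        transfer (100, acc1, acc2)
                        transfer (100, acc1, acc3)
                end

        transfer (amount: INTEGER ; source, target: separate ACCOUNT)
                        -- Body and contract exactly as in the sequential version.
                require
                        enough: source.balance >= amount
                do
                        source.withdraw (amount)
                        target.deposit (amount)
                ensure
                        removed: source.balance = old source.balance - amount
                        added: target.balance = old target.balance + amount
                end
end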

The idea of SCOOP is indeed that we reuse all that we have come to enjoy in modern object-oriented programming, and simply declare what needs to be parallel, expecting things to work (“principle of least surprise”).

This is not how most of the world sees concurrency. It’s supposed to be hard. Indeed it is; very hard, in fact. But the view of the people who built SCOOP is that as much as possible of the difficulty should fall on the implementers. Hence the title of this article: for programmers, concurrency should be easy. And we think SCOOP demonstrates that it can be.

SCOOP in practice

A few words of caution: we are not saying that SCOOP as provided in EiffelStudio 6.8 is the last word. (Otherwise it would be called 7.0.) In fact, precisely because implementation is very hard, a number of details are still not properly handled; for example, as discussed in recent exchanges on the EiffelStudio user group [13], just printing out the contents of a separate string is non-trivial. We are working to provide all the machinery that will make everything work well, covering both the ambitious goals and the practical details. But the basics of the mechanism are there, with a solid implementation designed to scale properly for large applications and in distributed settings.

In next week’s article I will describe in a bit more detail what makes up the SCOOP mechanisms. To get a preview, you are welcome to look at the documentation [14, 15]; I hope it will convince you that, despite what everyone else says, concurrent programming can be easy.

References

[1] Official Intel statement, see e.g. here.

[2] Rick Rashid, Microsoft Faculty Summit, 2008.

[3] This statement was cited at the Microsoft Faculty Summit in 2008 and is part of the official transcript; hence it can be assumed to be authentic, although I do not know the original source.

[4] Patterson and Bell citations from John Markoff, Faster Chips Are Leaving Programmers in Their Dust, New York Times, 17 December 2007, available here.

[5] The chart is from the course material of Tryggve Fossum at the LASER summer school in 2008.

[6] C.A.R. Hoare: Communicating Sequential Processes, Prentice Hall, 1985, also available online.

[7] Bill Roscoe: The Theory and Practice of Concurrency, revised edition, Prentice Hall, 2005, also available online.

[8] Robin Milner: Communication and Concurrency, Prentice Hall, 1989.

[9] Robin Milner: Communicating and Mobile Systems: The π-calculus, Cambridge University Press, 1999.

[10] Simon Peyton-Jones: Beautiful Concurrency, in Beautiful Code, ed. Greg Wilson, O’Reilly, 2007, also available online.

[11] Joe Armstrong: Erlang, in Communications of the ACM, vol. 53, no. 9, September 2010, pages 68-75.

[12] Bertrand Meyer: Software Architecture: Functional vs. Object-Oriented Design, in Beautiful Architecture, eds. Diomidis Spinellis and Georgios Gousios, O’Reilly, 2009, pages 315-348, available online.

[13] EiffelStudio user group; see here for a link to current discussions and to join the group.

[14] SCOOP project documentation at ETH, available here.


Assessing concurrency models

By describing a poorly conceived hypothetical experiment, last week’s article introduced the “Professor Smith syndrome”, consisting of four risks that threaten the validity of empirical software engineering experiments relying on students in a course:

  • Professor Smith Risk 1: possible bias if the evaluator has a stake in the ideas or tools under assessment.
  • Professor Smith Risk 2: creating different levels of motivation in the different groups (Hawthorne effect).
  • Professor Smith Risk 3: extrapolating from students to professionals.
  • Professor Smith Risk 4: violation of educational ethics if the experiment may cause some students to learn better than others.

If you have developed a great new method or tool and would like to assess it, the best way to address Risk 1 is to find someone else to do the assessment. What if this solution is not practical? Recently we wanted to get some empirical evidence on the merits of the SCOOP (Simple Concurrent Object-Oriented Programming) approach to concurrency [1, 2], on which I have worked for a long time and which has been part of EiffelStudio since the release of 6.8 a couple of weeks ago. We wanted to see if, despite the Professor Smith risks, we could do a credible study ourselves.

The ETH Software Architecture course [3], into which we added some introductory material on concurrency last year (as part of a general effort to push more concurrency into software courses at ETH), looked like a good place to try an evaluation; it is a second-year course, where students, or so we thought, would have little prior experience in concurrent software design.

The study’s authors — Sebastian Nanz, Faraz Torshizi and Michela Pedroni — paid special attention to the methodological issues. To judge for yourself whether we addressed them properly, you can read the current version of our paper to be presented at ESEM 2011 [4]. Do note that it is a draft and that we will improve the paper for final publication.

Here is some of what we did. I will not address the Professor Smith Risk 3, the use of students, which (as Lionel Briand has pointed out in a comment on the previous article) published work has studied; in a later article I will give  references to some of that work. But we were determined to tackle the other risks explicitly, so as to obtain credible results.

The basic experiment was a session in which the students were exposed to two different design methods for concurrent software: multithreaded programming in Java, which I’ll call “Java Threads”, and SCOOP. We wanted to explore whether it is easier to program in SCOOP than in Java. This is too general a hypothesis, so it was refined into three concrete hypotheses: is it easier to understand a SCOOP program? Is it easier to find errors in SCOOP programs? Do programmers using SCOOP make fewer errors?

A first step towards reducing the effect — Professor Smith Risk 1 — of any emotional attachment of the experimenters to one of the approaches, SCOOP in our case, was to generalize the study. Although what directly interested us was to compare SCOOP against Java Threads, we designed the study as a general scheme to compare concurrency approaches; SCOOP and Java Threads are just an illustration, but anyone else interested in assessing concurrency techniques — say Erlang versus C# concurrency — can apply the same methodology. This decision had two benefits: it freed the study from dependency on the particular techniques, hence, we hope, reducing bias; and, as a side attraction of the kind that is hard for researchers to resist, it increased the publishability of the results.

Circumstances unexpectedly afforded us another protection against any for-SCOOP bias: unbeknownst to us at the time of the study’s design, a first-year course had newly added (in 2009, whereas our study was performed in 2010) an introduction to concurrent programming — using Java Threads! While we had thought that concurrency in any form would be new to most students, in fact almost all of them had now seen Java Threads before. (The new material in the first-year course was taken by ETH students only, but many transfer students had also already had an exposure to Java Threads.) On the other hand, students had not had any prior introduction to SCOOP. So any advantage that one of the approaches may have had because of students’ prior experience would work against our hypotheses. This unexpected development would not help if the study’s results heavily favored Java Threads, but if they favored SCOOP it would reinforce their credibility.

A particular pedagogical decision was made regarding the teaching of our concurrency material: it started with a self-study rather than a traditional lecture. One of the reasons for this decision was purely pedagogical: we felt (and the course evaluations confirmed) that at that stage of the semester the students would enjoy a break in the rhythm of the course. But another reason was to avoid any bias that might have arisen from any difference in the lecturers’ levels of enthusiasm and effectiveness in teaching the two approaches. In the first course session devoted to concurrency, students were handed study materials presenting Java Threads and SCOOP and containing a test to be taken; the study’s results are entirely based on their answers to these tests. The second session was a traditional lecture presenting both approaches again and comparing them. The purpose of this lecture was to make sure the students got the full picture with the benefit of a teacher’s verbal explanations.

The study material was written carefully and with a tone as descriptive and neutral as possible. To make comparisons meaningful, it does not follow a structure specific to Java Threads or  SCOOP  (as we might have used had we taught only one of these approaches); instead it relies in both cases on the same overall plan  (figure 2 of the paper), based on a neutral analysis of concurrency concepts and issues: threads, mutual exclusion, deadlock etc. Each section then presents, for one such general concurrency question, the solution proposed by Java Threads or SCOOP.

This self-study material, as well as everything else about the study, is freely available on the Web; see the paper for the links.

In the self-study, all students studied both the Java Threads and SCOOP materials. They were randomly assigned to two groups, for which the only difference was the order of studying the approaches. We feel that this decision addresses the ethical issue (Professor Smith Risk 4): any pedagogical effect of reading about A before B rather than the reverse, in the course of a few hours, has to be minimal if you end up reading about the two of them, and on the next day follow a lecture that also covers both.

Having all students study both approaches — a crossover approach in the terminology of [5] — should also address the Hawthorne effect (Professor Smith Risk 2): students have no particular incentive to feel that one of the approaches is more hip than the other. While they are not told that SCOOP is partly the work of the instructors, some of them may know or guess this information; the consequences, positive or negative, are limited, since they are asked in both cases to do as well as they can in answering the assessment questions.

The design of that evaluation is another crucial element in trying to avoid bias. We tried, to the extent possible, to base the assessment on objective criteria. For the first hypothesis (program understanding) the technique was to ask the students to predict the output of some simple concurrent programs. To address the risk of a binary correct/incorrect assessment, and get a more fine-grained view, we devised the programs so that they would produce output strings and measured the Levenshtein (edit) distance to the correct result. For the second hypothesis (ease of program debugging), we gave students programs exhibiting typical errors in both approaches and asked them to tell us both the line number of any error they found and an explanation. Assessing the explanation required human analysis; the idea of also assigning partial credit for pointing out a line number without providing a good explanation is that it may be meaningful that a student found that something is amiss even without being quite able to define what it is. The procedure for the third hypothesis (producing programs with fewer errors) was more complex and required two passes over the result; it requires some human analysis, as you will see in the article, but we hope that the two-pass process removes any bias.
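
For readers who have not encountered the metric used for the first hypothesis, here is a small sketch (mine, not taken from the study’s materials) of the kind of function involved: the Levenshtein distance between the student’s predicted output and the correct output counts the minimal number of single-character insertions, deletions and substitutions separating the two strings, so an almost-correct prediction scores much better than one that is completely off.

edit_distance (expected, given: STRING): INTEGER
                -- Levenshtein distance between expected and given.
        local
                previous, current: ARRAY [INTEGER]
                i, j, cost: INTEGER
        do
                create previous.make_filled (0, 0, given.count)
                create current.make_filled (0, 0, given.count)
                from j := 0 until j > given.count loop
                        previous [j] := j
                        j := j + 1
                end
                from i := 1 until i > expected.count loop
                        current [0] := i
                        from j := 1 until j > given.count loop
                                if expected [i] = given [j] then
                                        cost := 0
                                else
                                        cost := 1
                                end
                                        -- Minimum over deletion, insertion and substitution:
                                current [j] := (previous [j] + 1).min (current [j - 1] + 1).min (previous [j - 1] + cost)
                                j := j + 1
                        end
                        previous := current.twin
                        i := i + 1
                end
                Result := previous [given.count]
        end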

This description of the study is only partial and you should read the article [4] for the full details of the procedure.

So what did we find in the end? Does SCOOP really make concurrency easier to learn, concurrent programs easier to debug, and concurrent programmers less error-prone? Here too I will refer you to the article. Let me simply mention that the results held some surprises.

In obtaining these results we tried very hard to address the Professor Smith syndrome and its four risks. Since all of our materials, procedures and data are publicly accessible, described in some detail in the paper, you can determine for yourself how well we met this objective, and whether it is possible to perform credible assessments even of one’s own work.

References

Further reading: for general guidelines on how to conduct empirical research see [5]; for ethical guidelines, applied to psychological research but generalizable, see [6].

[1] SCOOP Eiffel documentation, available here.

[2] SCOOP project documentation at ETH, available here.

[3] Software Architecture course at ETH, course page (2011).

[4] Sebastian Nanz, Faraz Torshizi, Michela Pedroni and Bertrand Meyer: Design of an Empirical Study for Comparing the Usability of Concurrent Programming Languages, to appear in ESEM 2011 (ACM/IEEE International Symposium on Empirical Software Engineering and Measurement), 22-23 September 2011. Draft available here.

[5] Barbara A. Kitchenham, Shari L. Pfleeger, Lesley M. Pickard, Peter W. Jones, David C. Hoaglin, Khaled El-Emam and Jarrett Rosenberg: Preliminary Guidelines for Empirical Research in Software Engineering, National Research Council Canada (NRC-CNRC), Report ERB-1082, 2001, available here.

[6] Robert Rosenthal: Science and ethics in conducting, analyzing, and reporting psychological research, in Psychological Science, 5, 1994, pages 127-134. I found a copy cached by a search engine here.
