
Software for Robotics: 2016 LASER summer school, 10-18 September, Elba

The 2016 session of the LASER summer school, now in its 13th edition, has just been announced. The theme is new for the school, and timely: software for robotics. Below is the announcement.

School site: here

The 2016 LASER summer school will be devoted to Software for Robotics. It takes place from 10 to 18 September in the magnificent setting of the Hotel del Golfo in Procchio, Elba Island, Italy.

Robotics is progressing at an amazing pace, bringing improvements to almost all areas of human activity. Today's robotics systems rely ever more fundamentally on complex software, raising difficult issues. The LASER 2016 summer school covers both the current state of robotics software technology and open problems. The lecturers are top international experts with both theoretical contributions and major practical achievements in developing robotics systems.
The LASER school is intended for professionals from industry (engineers and managers) as well as university researchers, including PhD students. Participants learn about the most important software technology advances from the pioneers in the field. The school's focus is applied, although theory is welcome to establish solid foundations. The format of the school favors extensive interaction between participants and speakers.
The speakers include:

  • Joydeep Biswas, University of Massachusetts, on Development, debugging, and maintenance of deployed robots
  • Davide Brugali, University of Bergamo, on Managing software variability in robotic control systems
  • Nenad Medvidovic, University of Southern California, on Software Architectures of Robotics Systems
  • Bertrand Meyer, Politecnico di Milano and Innopolis University, with Jiwon Shin, on Concurrent Object-Oriented Robotics Software: Concepts, Framework and Applications
  • Issa Nesnas, NASA Jet Propulsion Laboratory, on Experiences from robotic software development for research and planetary flight robots
  • Richard Vaughan, Simon Fraser University

Organized by Politecnico di Milano, the school takes place at the magnificent Hotel del Golfo (http://www.hoteldelgolfo.it/) in Golfo di Procchio, Elba. Along with an intensive scientific program, participants will have time to enjoy the natural and cultural riches of this history-laden jewel of the Mediterranean.

For more information about the school, the speakers and registration see here.


— Bertrand Meyer


Danke sehr!

(A version of this article also appeared on the CACM blog.)

Miracles happen!

Many months ago I accepted, perhaps too fast, a kind invitation to talk at the European Computer Science Summit, the annual conference of Informatics Europe, this week in Vienna. It came with a catch: I was not to choose my own topic but to talk on an imposed theme, ethics in relation to computer science.

As the summer advanced, I became increasingly worried about the talk. What was I going to say? For a nerd, an invited lecture on a free topic is easy: talk about how alias analysis makes automatic frame inference possible, explain the bizarre mixture of the best and worst in agile methods, present a simple method of concurrent programming, describe an environment enabling common mortals to prove programs correct, reflect on our 13-year experience of teaching programming and the supporting MOOCs, and so on. All straightforward stuff which one can present in one’s sleep. But ethics!

The summer receded, but the worry did not. What in the world would I talk about?

And then!

From the deepest of my heart: thank you, Volkswagen.


Design by Contract: ACM Webinar this Thursday

A third ACM webinar this year (after two on agile methods): I will be providing a general introduction to Design by Contract. The date is this coming Thursday, September 17, and the time is noon New York (18 Paris/Zurich, 17 London, 9 Los Angeles, see here for hours elsewhere). Please tune in! The event is free but requires registration here.


New paper: Theory of Programs

Programming, wrote Dijkstra many years ago, is a branch of applied mathematics. That is only half of the picture: the other half is engineering, and this dual nature of programming is part of its attraction.

Descriptions of the mathematical side are generally, in my view, too complicated. This article [1] presents a mathematical theory of programs and programming based on concepts taught in high school: elementary set theory. The concepts covered include:

  • Programming.
  • Specification.
  • Refinement.
  • Non-determinism.
  • Feasibility.
  • Correctness.
  • Programming languages.
  • Kinds of programs: imperative, functional, object-oriented.
  • Concurrency (small-step and large-step).
  • Control structures (compound, if-then-else and Dijkstra-style conditional, loop).
  • State, store and environment.
  • Invariants.
  • Notational conventions for building specifications and programs incrementally.
  • Loop invariants and variants.

One of the principal ideas is that a program is simply the description of a mathematical relation. The program text is a rendering of that relation. As a consequence, one may construct programming languages simply as notations to express certain kinds of mathematics. This approach is the reverse of the usual one, where program texts and programming languages are the starting point and the center of attention, and theoreticians develop techniques to relate them to mathematical concepts. It is more effective to start from the mathematics (“unparsing” rather than parsing).
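To give the flavor of the approach, here is a small sketch, in LaTeX notation, of the kind of definition involved. It is my informal rendering for this blog entry, not the paper's own notation, and the paper's exact definitions may differ in detail; the point is only that everything reduces to elementary set theory over a set S of states.

\[
  \begin{array}{ll}
    \textit{Specification or program:} & (p,\ \mathit{Pre}) \quad\text{with}\quad p \subseteq S \times S,\ \ \mathit{Pre} \subseteq S \\[4pt]
    \textit{Feasibility:} & \mathit{Pre} \subseteq \mathrm{dom}\,(p) \quad\text{(every required input has at least one outcome)} \\[4pt]
    \textit{Refinement:} & (p',\ \mathit{Pre}') \text{ refines } (p,\ \mathit{Pre}) \ \text{ iff } \ \mathit{Pre} \subseteq \mathit{Pre}' \ \text{ and } \ \{(s,s') \in p' \mid s \in \mathit{Pre}\} \subseteq p
  \end{array}
\]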

All the results (74 properties expressed formally, a number of others in the text) are derived as theorems from rules of elementary set theory; there are no new axioms whatsoever.

The paper also has a short version [2], omitting proofs and many details.

References

[1] Theory of Programs, available here.
[2] Theory of Programs, short version of [1] (meant for quick understanding of the ideas, not for publication), available here.

 


Agile methods: follow-up webinar and discussion

After my earlier ACM Webinar on Agile Methods! The Good, the Hype and the Ugly there were so many questions from the audience, left unanswered for lack of time, that a follow-up session has been set up. It will take place tomorrow (Friday, 27 March 2015) at noon New York time (18 Paris/Berlin/Zurich, 5 PM London etc.). Like the first one it is free and you can find the registration information here.


Framing the frame problem (new paper)

Among the open problems of verification, particularly the verification of object-oriented programs, one of the most vexing is framing: how to specify and verify what program elements do not change. Continuing previous work, this article presents a “double frame inference” method, automatic on both the specification and verification sides. There is no need to write frame specifications: they will be inferred from routine postconditions. For verification, the method computes the set of actually changed properties through a “change calculus”, itself based on the previously developed alias calculus.
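As a hypothetical illustration of the idea (my own example, not one taken from the paper), consider a routine whose postcondition mentions a single query. Double frame inference removes the need for an explicit "modifies only balance" clause: the frame is read off the postcondition, and the change calculus is what checks the implementation against it.

class ACCOUNT
feature
    balance: INTEGER
            -- Current balance.

    owner_id: INTEGER
            -- Identifier of the account holder.

    deposit (amount: INTEGER)
            -- Add `amount' to the balance.
        require
            positive: amount > 0
        do
            balance := balance + amount
        ensure
            -- The postcondition mentions only `balance'; the inferred frame
            -- is that everything else, such as `owner_id', stays unchanged,
            -- with no explicit frame ("modifies") clause to write.
            added: balance = old balance + amount
        end
end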

Some verification techniques, such as Hoare-style proofs, require significant annotation effort and potentially yield full functional verification; others, such as model checking and abstract interpretation, have more limited goals but seek full automation. Framing, in my opinion, should be automatic, freeing the programmer-verifier to devote the annotation effort to truly interesting properties.

Reference

[1] Bertrand Meyer: Framing the Frame Problem, in Dependable Software Systems, Proceedings of August 2014 Marktoberdorf summer school, eds. Alexander Pretschner, Manfred Broy and Maximilian Irlbeck, NATO Science for Peace and Security, Series D: Information and Communication Security, Springer, 2015 (to appear), pages 174-185; preprint available here.


Detecting deadlock automatically? (New paper)

To verify sequential programs, we have to prove that they do the right thing, but also that they do it within our lifetime — that they terminate. The termination problem is considerably harder with concurrent programs, since they add a new form of non-termination: deadlock. A set of concurrent processes or threads will deadlock if they end up each holding a resource that another wants and wanting a resource that another holds.

There is no general solution to the deadlock problem, not even a good-enough general solution. (“Good enough” is the best we can hope for, since, like many important problems, deadlock is undecidable.) It is already hard enough to provide run-time deadlock detection, so as at least to be able to cancel execution when deadlock happens. The research reported in this new paper [1] pursues the harder goal of static detection. It applies to an object-oriented context (specifically the SCOOP model of concurrent OO computation) and relies fundamentally on the alias calculus, a static alias analysis technique developed in previous publications.

The approach is at its inception and considerable work remains to be done. Still, the example handled by the paper is encouraging: analyzing two versions of the dining philosophers problem and proving — manually — that one can deadlock and the other cannot.
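For readers who do not have the paper at hand, here is a schematic sketch (mine, not the paper's code) of the two contrasting designs in SCOOP-style Eiffel. The relevant SCOOP rule is that a routine reserves the processors (regions) of all its separate arguments for the duration of its execution.

class PHILOSOPHER

create
    make

feature
    left, right: separate FORK
            -- The two forks assigned to this philosopher.

    make (l, r: separate FORK)
            -- Remember the forks.
        do
            left := l
            right := r
        end

    eat_safely
            -- Deadlock-free design: reserve both forks in a single call.
        do
            use (left, right)
        end

    use (l, r: separate FORK)
            -- With both `l' and `r' reserved for the duration of this call, eat.
        do
        end

    eat_riskily
            -- Deadlock-prone design: reserve one fork, then, while still
            -- holding it, ask for the other. Two neighbors doing this can
            -- each end up holding one fork and waiting forever for the other.
        do
            hold_first (left)
        end

    hold_first (l: separate FORK)
            -- Holding `l', try to get the other fork.
        do
            hold_second (right)
        end

    hold_second (r: separate FORK)
            -- Holding both forks, eat.
        do
        end

end

class FORK
        -- A fork: only its identity (and its region) matters here.
end

In the first design the two reservations happen together; in the second the classic hold-and-wait pattern appears, which is exactly the kind of situation a static technique has to detect.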

References

[1] Bertrand Meyer: An automatic technique for static deadlock prevention, in PSI 2014 (Ershov Informatics Conference), eds. Irina Virbitskaite and Andrei Voronkov, Lecture Notes in Computer Science, Springer, 2015, to appear; draft available here.


Lampsort

 

In support of his view of software methodology, Leslie Lamport likes to use the example of non-recursive Quicksort. Independently of the methodological arguments, his version of the algorithm should be better known. In fact, if I were teaching “data structures and algorithms” I would consider introducing it first.

As far as I know he has not written down his version in an article, but he has presented it in lectures; see [1]. His trick is to ask the audience to give a non-recursive version of Quicksort, and of course everyone starts trying to remove the recursion, for example by making the stack explicit or looking for invertible functions in calls. But his point is that recursion is not at all fundamental in Quicksort. The recursive version is a specific implementation of a more general idea.

Lamport’s version — let us call it Lampsort — is easy to express in Eiffel. We may assume the following context:

a: ARRAY [G -> COMPARABLE]    -- The array to be sorted.
pivot: INTEGER                -- Set by partition.
picked: INTEGER_INTERVAL      -- Used by the sorting algorithm, see below.

partition (i, j: INTEGER)
        require    -- i..j is a sub-interval of the array's legal indexes:
            i < j
            i >= a.lower
            j <= a.upper
        do
            -- ... Usual implementation of partition ...
        ensure    -- The expected effect of partition:
            pivot >= i
            pivot < j
            -- a [i..j] has been reshuffled so that elements in i..pivot are
            -- less than or equal to those in pivot+1 .. j.
        end

We do not write the implementation of partition since the point of the present discussion is the overall algorithm. In the usual understanding, that algorithm consists of doing nothing if the array has no more than one element, otherwise performing a partition and then recursively calling itself on the two resulting intervals. The implementation can take advantage of parallelism by forking the recursive calls out to different processors. That presentation, says Lamport, describes only a possible implementation. The true Quicksort is more general. The algorithm works on a set not_sorted of integer intervals i..j such that the corresponding array slices a [i..j] are the only ones possibly not sorted; the goal of the algorithm is to make not_sorted empty, since then we know the entire array is sorted. In Eiffel we declare this set as:

not_sorted: SET [INTEGER_INTERVAL]

The algorithm initializes not_sorted to contain a single element, the entire interval; at each iteration, it removes an interval from the set, partitions it if that makes sense (i.e. the interval has more than one element), and inserts the resulting two intervals into the set. It ends when not_sorted is empty. Here it is:

from    -- Initialize the interval set to contain a single interval, the array's entire index range:
    create not_sorted.make_one (a.lower |..| a.upper)
invariant
    -- See below
until
    not_sorted.is_empty                                 -- Stop when there are no more intervals in the set.
loop
    picked := not_sorted.item                           -- Pick an interval from the (non-empty) interval set.
    if picked.count > 1 then                            -- (The precondition of partition holds, see below.)
        partition (picked.lower, picked.upper)          -- Split: move small items before and large ones after the pivot.
        not_sorted.extend (picked.lower |..| pivot)     -- Insert the two new intervals into the set of intervals: first
        not_sorted.extend (pivot + 1 |..| picked.upper) -- and second.
    end
    not_sorted.remove (picked)                          -- Remove the interval that was just partitioned.
end

Eiffel note: the function yielding an integer interval is declared in the library class INTEGER using the operator |..| (rather than just  ..).

The query item from SET, with the precondition not is_empty,  returns an element of the set. It does not matter which element. In accordance with the Command-Query Separation principle, calling item does not modify the set; to remove the element you have to use the command remove. The command extend adds an element to the set.

The abstract idea behind Lampsort, explaining why it works at all, is the following loop invariant (see [2] for a more general discussion of how invariants provide the basis for understanding loop algorithms). We call “slice” of an array a non-empty contiguous sub-array; for adjacent slices we may talk of concatenation; also, for slices s and t, “s <= t” means that every element of s is less than or equal to every element of t. The invariant is:

a is the concatenation of the members of a set, slices, of disjoint slices, such that:
– The elements of a are a permutation of its original elements.
– The index range of any member  of slices having more than one element is in not_sorted.
– For any adjacent slices s and t (with s before t), s <= t.

The first condition (conservation of the elements modulo permutation) is a property of partition, the only operation that can modify the array. The rest of the invariant is true after initialization (from clause) with slices made of a single slice, the full array. The loop body maintains it since it either removes a one-element interval from not_sorted (slices loses the corresponding slice) or performs partition with the effect of partitioning one slice into two adjacent ones satisfying s <= t, whose intervals replace the original one in not_sorted. On exit, not_sorted is empty, so slices is a set of one-element slices, each less than or equal to the next, ensuring that the array is sorted.

The invariant also ensures that the call to partition satisfies that routine’s precondition.

The Lampsort algorithm is a simple loop; it does not use recursion, but relies on an interesting data structure, a set of intervals. It is not significantly longer or more difficult to understand than the traditional recursive version:

sort (i, j: INTEGER)
        require
            i <= j
            i >= a.lower
            j <= a.upper
        do
            if j > i then                -- Note that the precondition of partition holds.
                partition (i, j)         -- Split into two slices s and t such that s <= t.
                sort (i, pivot)          -- Recursively sort first slice.
                sort (pivot + 1, j)      -- Recursively sort second slice.
            end
        end

Lampsort, in its author’s view, captures the true idea of Quicksort; the recursive version, and its parallelized variants, are only examples of possible implementations.

I wrote at the start that the focus of this article is Lampsort as an algorithm, not issues of methodology. Let me, however, give an idea of the underlying methodological debate. Lamport uses this example to emphasize the difference between algorithms and programs, and to criticize the undue attention being devoted to programming languages. He presents Lampsort in a notation which he considers to be at a higher level than programming languages, and it is for him an algorithm rather than a program. Programs will be specific implementations guided in particular by efficiency considerations. One can derive them from higher-level versions (algorithms) through refinement. A refinement process may in particular remove or restrict non-determinism, present in the above version of Lampsort through the query item (whose only official property is that it returns an element of the set).

The worldview underlying the Eiffel method is almost the reverse: treating the whole process of software development as a continuum; unifying the concepts behind activities such as requirements, specification, design, implementation, verification, maintenance and evolution; and working to resolve the remaining differences, rather than magnifying them. Anyone who has worked in both specification and programming knows how similar the issues are. Formal specification languages look remarkably like programming languages; to be usable for significant applications they must meet the same challenges: defining a coherent type system, supporting abstraction, providing good syntax (clear to human readers and parsable by tools), specifying the semantics, offering modular structures, allowing evolution while ensuring compatibility. The same kinds of ideas, such as an object-oriented structure, help on both sides. Eiffel as a language is the notation that attempts to support this seamless, continuous process, providing tools to express both abstract specifications and detailed implementations. One of the principal arguments for this approach is that it supports change and reuse. If everything could be fixed from the start, maybe it could be acceptable to switch notations between specification and implementation. But in practice specifications change and programs change, and a seamless process relying on a single notation makes it possible to go back and forth between levels of abstraction without having to perform repeated translations between levels. (This problem of change is, in my experience, the biggest obstacle to refinement-based approaches. I have never seen a convincing description of how one can accommodate specification changes in such a framework without repeating the whole process. Inheritance, by the way, addresses this matter much better.)

The example of Lampsort in Eiffel suggests that a good language, equipped with the right abstraction mechanisms, can be effective at describing not only final implementations but also abstract algorithms. It does not hurt, of course, that these abstract descriptions can also be executable, at the possible price of non-optimal performance. The transformation to an optimal version can happen entirely within the same method and language.

Quite apart from these discussions of software engineering methodology, Lamport’s elegant version of Quicksort deserves to be known widely.

References

[1] Lamport video here, segment starting at 0:32:34.
[2] Carlo Furia, Bertrand Meyer and Sergey Velder: Loop invariants: Analysis, Classification and Examples, in ACM Computing Surveys, September 2014, preliminary text here.


A gold medal

The French National Research Center (CNRS) has just awarded [1] its annual gold medal to Gérard Berry, a great recognition for an outstanding computer scientist. I first discovered Berry’s work through a brilliant 1976 article, Bottom-Up Computation of Recursive Programs [2], which explained recursion using methods of denotational semantics, and should really figure in collections of classic CS articles. One can hardly understand dynamic programming algorithms, for example, without realizing that they are bottom-up versions of recursive algorithms, through transformations described in the article. I relied extensively on its techniques for my own textbooks, from the advanced concepts of Introduction to the Theory of Programming Languages all the way down to the introductory presentation of recursion in Touch of Class.

Berry then went on to play a founding role in synchronous languages, explaining (in a paper whose reference I don’t have right now) that he went through a revelation of sorts after realizing that the automatic-control industry needed languages completely different from those computer scientists typically love. He created the Estérel language and shepherded it all the way to industrialization, serving as Chief Scientist of Estérel Technologies in the 2000s, before coming back to research.

He is also well known in France as a popularizer of informatics (computer science) through a number of successful books aiming at a large audience. He was appointed the first professor of informatics at the Collège de France, and in his inaugural lecture series presented a sweeping description of how information technology advances, based on powerful scientific concepts, permeate today’s world and raise ever new challenges. Hardly could the CNRS have chosen to distinguish a better representative of our discipline.

References

[1] CNRS announcement: here.

[2] Gérard Berry, Bottom-Up Computation of Recursive Programs, in RAIRO (Revue Française d’Automatique, Informatique, Recherche Opérationnelle, Informatique Théorique), tome 10, no R1 (1976), pp. 47-82, available here.


Record enrollment

First week of semester at ETH. The number of incoming informatics (computer science) students has reached an all-time high: 345; see the (bad-quality) picture from day one of “Introduction to Programming”. (And no, the gender distribution has not changed.) So much for fears that MOOCs and such will displace universities; in fact we have not one but two MOOCs to support the course. Computer science is back in fashion!

Introduction to Programming, ETH Zurich, September 2014, first day of class

 

 

 

 

 


Non-morganatic specifications

I have temporarily withdrawn this article because the specific case it used as an example has changed. I will re-publish it as soon as the situation has stabilized.


Computing: the Art, the Magic, the Science

 

My colleagues and I have just finished recording our new MOOC (online course), an official ETH offering on the EdX platform. The preview is available [1] and the course will run starting in September.

As readers of this blog know, I have enthusiastically, at the urging of Marco Piccioni at ETH, embraced MOOC technology to support and spread our courses. The particular target has been the introduction to programming that I have taught for over a decade at ETH based on the Touch of Class textbook [2]. In February this blog announced [3] the release of our first MOOC, embodying the essentials of our ETH course and making it available not only to ETH students but to the whole world. The course does not just include video lectures: it also supports active student participation through online exercises and programs that can be compiled and tested on the cloud, with no software installation. These advanced features result from our research on support for distributed software development (by Christian Estler and Martin Nordio, with Carlo Furia and others).

This first course was a skunkworks project, which we did entirely on our own without any endorsement from ETH or any of the main MOOC players. We and our students have very much benefited from the consequent flexibility, and the use of homegrown technology relying on the MOODLE framework. We will keep this course for our own students and for any outside participant who prefers a small-scale, “boutique” version. But the EdX brand and EdX’s marketing power will enable us to reach a much broader audience. We want to provide the best introductory computing course on the market and the world needs to know about it. In addition, the full support of media services at ETH  helped us reach a higher standard on the technical side. (For our first course, the home-brewed one, we did not have a studio, so that every time an ambulance drove by — our offices are close to the main Zurich hospital — we had to restart the current take.)

The course’s content is not exactly the same: we have broadened the scope from just programming to computing, although it retains a strong programming component. We introduced additional elements such as an interview with Professor Peter Widmayer of ETH on the basics of computer science theory. For both new material and the topics retained from the first version we have adapted to the accepted MOOC practice of short segments, although we did not always exactly meet the eight-minute upper limit that was suggested to us.

We hope that you, and many newcomers, will like the course and benefit from it.

References

[1] EdX course: Computing: Art, Magic, Science, preview available here.

[2] Bertrand Meyer: Touch of Class: Learning how to Program Well, with Objects and Contracts, Springer Verlag, revised printing, 2013, book page here.

[3] Learning to Program, Online: article from this blog, 3 February 2014, available here.

 

 


Attached by default?

 

Opinions requested! See at end.

A void call, during the execution of an object-oriented program, is a call of the standard OO form

x.some_routine (…)                                                /CALL/

where x, a reference, happens to be void (null) instead of denoting, as expected, an object. The operation is not possible; it leads to an exception and, usually, a crash of the program. Void calls are also called “null pointer dereferencing”.

One of the major advances in Eiffel over the past years has been the introduction of attached types, entirely removing the risk of void calls. The language mechanisms, extending the type system, make void-call avoidance a static property, part of type checking: just as the compiler will prevent you from assigning a boolean value to an integer variable, so will it flag your program if it sees a risk of a void call. Put the other way around, if your program passes compilation, you have the guarantee that its executions will never produce a void call. Attached types thus remove one of the major headaches of programming, what Tony Hoare [1] called his “billion-dollar mistake”:

I call it my billion-dollar mistake. It was the invention of the null reference in 1965. At that time, I was designing the first comprehensive type system for references in an object oriented language (ALGOL W) [2]. My goal was to ensure that all use of references should be absolutely safe, with checking performed automatically by the compiler. But I couldn’t resist the temptation to put in a null reference, simply because it was so easy to implement. This has led to innumerable errors, vulnerabilities, and system crashes, which have probably caused a billion dollars of pain and damage in the last forty years.

Thanks to attached types, Eiffel programmers can sleep at night: their programs will not encounter void calls.

To benefit from this advance, you must declare variables accordingly, as either attached (never void after initialization) or detachable (possibly void). You must also write the program properly:

  • If you declare x attached, you must ensure in the rest of the program that before its first use x will have been attached to an object, for example through a creation instruction create x.
  • If you declare x detachable, you must make sure that any call of the above form /CALL/ happens in a context where x is guaranteed to be non-void; for example, you could protect it by a test if x /= Void then or, better, an “object test” (a sketch appears below).

Code satisfying these properties is called void-safe.
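For instance, here is a minimal sketch of the second case (the class GREETER and the feature print_name are made up for the illustration). The object test both checks for voidness and introduces a local name that the compiler knows to be attached within the branch:

class GREETER

feature
    greet (p: detachable PERSON)
            -- Say hello to `p', if there is anyone to greet.
        do
            if attached p as q then
                    -- Here `q' is statically guaranteed to be non-void,
                    -- so the qualified call is void-safe.
                q.print_name
            else
                print ("Nobody there.%N")
            end
        end

end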

Void safety is the way to go: who wants to worry about programs, even after they have been thoroughly tested and have seemingly worked for a while, crashing at unpredictable times? The absence of null-pointer dereferencing can be a statically enforced property, as the experience of Eiffel now demonstrates; and that is what it should be. One day, children will think void-safely from the most tender age, and their great-grandparents will tell them, around the fireplace during long and scary winter nights, about the old days when not everyone was programming in Eiffel and even those who did were worried about the sudden null-pointer-dereferencing syndrome. To get void safety through ordinary x: PERSON declarations, you had (children, hold your breath) to turn on a compiler option!

The transition to void safety was neither fast nor easy; in fact, it has taken almost ten years. Not everyone was convinced from the beginning, and we have had to improve and simplify the mechanism along the way to make void-safe programming practical. Compatibility has been a key issue throughout: older classes are generally not void-safe, but in a language that has been around for many years and has a large code base of operational software it is essential to ensure a smooth transition. Void safety has, from its introduction, been controlled by a compiler option:

  • With the option off, old code will compile as it used to do, but you do not get any guarantee of void safety. At execution time, a void call can still cause your program to go berserk.
  • With the option on, you get the guarantee: no void calls. To achieve this goal, you have to make sure the classes obey the void safety rules; if they do not, the compiler will reject them until you fix the problem.

In the effort to reconcile the compatibility imperative with the inexorable evolution to void safety, the key decisions have affected default values for compiler options and language conventions. Three separate decisions, in fact. Two of the defaults have already been switched; the question asked at the end of this article addresses the switching of the last remaining one.

The first default governed the void-safety compiler option. On its introduction, void-safety was off by default; the mechanism had to be turned on explicitly, part of the “experimental” option that most EiffelStudio releases offer for new, tentative mechanisms. That particular decision changed a year ago, with version 7.3 (May 2013): now void safety is the default. To include non-void-safe code you must mark  it explicitly.

The second default affects a language convention: the meaning of a standard declaration. A typical declaration, such as

x: PERSON                                                                                      /A/

says that at run time x denotes a reference which, if not void, will be attached to an object of type PERSON.  In pre-void-safety Eiffel, as in today’s other typed OO languages,  the reference could occasionally become void at run time; in other words, x was detachable. With the introduction of void safety, you could emphasize this property by specifying it explicitly:

x: detachable PERSON                                                             /B/

You could also specify that x would never be void by declaring it attached, asking the compiler to guarantee this property for you (through its application of the void-safety rules to all operations involving x). The explicit form in this case is

x: attached PERSON                                                               /C/

In practical programming, of course, you do not want to specify attached or detachable all the time: you want to use the simple form /A/ as often as possible. Originally, since we were starting from a non-void-safe language, compatibility required /A/ to mean /B/ by default. But it turns out that “attached” really is the dominant case: most references should remain attached at all times and Void values should be reserved for important but highly specialized cases such as terminating linked data structures. So the simple form should, in the final state of the language, mean /C/. That particular default was indeed switched early (version 7.0, November 2011) for people using the void-safety compiler option. As a result, the attached keyword is no longer necessary for declarations such as the above, although it remains available. Everything is attached by default; when you want a reference that could be void (and are prepared to bear the responsibility for convincing the compiler that it won’t when you actually use it in a call), you declare it as detachable; that keyword remains necessary.

There remains one last step in the march to all-aboard-for-void-safety: removing the “detachable by default” option, that is to say, the compiler option that will make /A/ mean /B/ (rather than /C/). It is only an option, and not the default; but still it remains available. Do we truly need it? The argument for removing it  is that it simplifies the specification (the fewer options the better) and encourages everyone, even more than before, to move to the new world. The argument against is to avoid disturbing existing projects, including their compiler control files (ECFs).

The question looms: when do we switch the defaults? Some of us think the time is now; specifically, the November release (14.11) [4].

Do you think the option should go? We would like your opinion. Please participate in the Eiffelroom poll [5].

 

References and note

[1] C.A.R. Hoare: Null References: The Billion Dollar Mistake , abstract of talk at QCon London, 9-12 March 2009, available here.

[2] (BM note) As a consolation, before Algol W, LISP already had NIL, which is the null pointer.

[3] Bertrand Meyer, Alexander Kogtenkov and Emmanuel Stapf: Avoid a Void: The Eradication of Null Dereferencing, in Reflections on the Work of C.A.R. Hoare, eds. C. B. Jones, A.W. Roscoe and K.R. Wood, Springer-Verlag, 2010, pages 189-211, available here.

[4] EiffelStudio version numbering changed in 2014: from a classic major_number.minor_number to a plain year.month, with two principal releases, 5 and 11 (May and November).

[5] Poll on switching the attachment defaults: at the bottom of the Eiffelroom page here (direct access here).


Code matters

(Adapted from an article previously published on the CACM blog.)

Often, you will be told that programming languages do not matter much. What actually matters more is not clear; maybe tools, maybe methodology, maybe process. It is a pretty general rule that people arguing that language does not matter are defending bad languages.

Let us consider the Apple bug of a few weeks ago. Only a few weeks; the world has already moved to Heartbleed (to be discussed in a subsequent article), but that is not a reason to sweep away the memory of the Apple bug and the language design that it reflects.

In late February, users of  iPhones, iPads and iPods were enjoined to upgrade their devices immediately because  “an attacker with a privileged network position may capture or modify data in sessions protected by SSL/TLS.” The bug was traced [1] to code of the following form:

if (error_of_first_kind)
    goto fail;
if (error_of_second_kind)
    goto fail;
if (error_of_third_kind)
    goto fail;
if (error_of_fourth_kind)
    goto fail;
if (error_of_fifth_kind)
    goto fail;
    goto fail;
if (error_of_sixth_kind)
    goto fail;
The_truly_important_code_handling_non_erroneous_case

In other words: just a duplicated line! (The duplicate is the second of the two consecutive “goto fail;” lines above.) But the excess “goto” is beyond the scope of the preceding “if“, so it is executed unconditionally: all executions go directly to the “fail” label, so that The_truly_important_code_handling_non_erroneous_case never gets executed.

Critics have focused their ire on the  goto instruction, but it is of little relevance. What matters, language-wise, is the C/C++-Java-C# convention of delimiting the scope of conditional instructions, loops and other kinds of composite structures. Every component of such structures in these languages is syntactically a single instruction, so that:

  • If you want the branch to consist of an atomic instruction, you write that instruction by itself, as in: if (c) a = b;
  • If you want a sequence of instructions, you write it as a compound, enclosed by the ever so beautiful braces: if (c) {a = b; x = y;}

Although elegant in principle (after all, it comes from Algol), this convention is disastrous from a software engineering perspective because software engineering means understanding that programs change. One day, a branch of a conditional or loop has one atomic instruction; sometime later, a maintainer realizes that the corresponding case requires more sophisticated treatment, and adds an instruction, but fails to add the braces.

The proper language solution is to do away with the notion of compound instruction as a separate concept and simply to treat every branch of a composite instruction as a sequence, which may consist of several instructions, just one, or none at all. In Eiffel, you will write

if  c then
   x := y
end

or

 if  c then
   a := b
   x := y
else
   u := v
end

or

from i := 1 until c loop
   a := b
   i := i + 1
end

or

across my_list as l loop
   l.add (x)
end

and so on. This syntax also gets rid of all the noise that pollutes programs in languages retaining C’s nineteen-sixties conventions: parentheses around the conditions, semicolons for instructions on different lines; these small distractions accumulate into serious impediments to program readability.

With such a modern language design, the Apple bug could not have arisen. A duplicated line is either:

  • A keyword such as end, immediately caught as a syntax error.
  • An actual instruction such as an assignment, whose duplication causes either no effect or an effect limited to the particular case covered by the branch, rather than catastrophically disrupting all cases, as in the Apple bug.

Some people, however, find it hard to accept the obvious responsibility of language design. Take this comment derisively entitled  “the goto squirrel” by Dennis Hamilton in the ACM Risks forum [2]:

It is amazing to me that, once the specific defect is disclosed (and the diff of the actual change has also been published), the discussion has devolved into one of coding style and whose code is better.  I remember similar distractions around the Ariane 501 defect too, although in that case there was nothing wrong with the code—the error was that it was being run when it wasn’t needed and it was not simulation tested with new launch parameters under the mistaken assumption that if the code worked for Ariane 4, it should work for Ariane 5.

It is not about the code.  It is not about the code.  It is not about goto. It is not about coming up with ways to avoid introducing this particular defect by writing the code differently.

Such certainty! Repeating a wrong statement (“it is not about the code“) does not make it right. Of course “it” is about the code! If the code had been different the catastrophe would not have happened, so one needs some gall to state that the code is not the issue — and just as much gall, given that the catastrophe would also not have happened if the programming language had been different, to state that it is not about the programming language.

When Mr. Hamilton dismisses as “distractions” the explanations pointing to programming-related causes for the Ariane-5 disaster, I assume he has in mind the analysis which I published at the time with Jean-Marc Jézéquel [3], which explained in detail how the core issue was the absence of proper specifications (contracts). At that time too, we heard dismissive comments; according to one of the critics, the programming aspects did not count, since the whole thing was really a social problem: the French engineers in Toulouse did not communicate properly with their colleagues in England! What is great with such folk explanations is that they sound just right and please people because they reinforce existing stereotypes. They are by nature as impossible to refute as they are impossible to prove. And they avoid raising the important but disturbing questions: were the teams using the right programming language, the right specification method (contracts, as our article suggested), appropriate tools? In both the Ariane-5 and Apple cases, they were not.

If you want to be considered polite, you are not supposed to point out that the use of programming languages designed for the PDP-8 or some other long-gone machine is an invitation to disaster. The more terrible the programming language people use, and the more they know it is terrible (even if they will not admit it), the more scandalized they will be that you point out that it is, indeed, terrible. It is as if you had said something about their weight or the pimples on their cheeks. Such reactions do not make the comment less true. The expression of outrage is particularly inappropriate when technical choices are not just matters for technical argument, but have catastrophic consequences on society.

The usual excuse, in response to language criticisms, is that better tools, better quality control (the main recommendation of the Ariane-5 inquiry committee back in 1997), better methodology would also have avoided the problem. Indeed, a number of the other comments in the comp.risks discussion that includes Hamilton’s dismissal of code [2] point in this direction, noting for example that static analyzers could have detected code duplication and unreachable instructions. These observations are all true, but they change nothing about the role of programming languages and coding issues. One of the basic lessons from the study of software and other industrial disasters — see for example the work of Nancy Leveson — is that a disaster results from a combination of causes. This property is in fact easy to understand: a disaster coming from a single cause would most likely have been avoided. Consider the hypothetical example of a disastrous flaw in Amazon’s transaction processing. It seems from various sources that Amazon processes something like 300 transactions a second. Now let us assume three independent factors, each occurring with a probability of a thousandth (10⁻³), which could contribute to a failure. Then:

  • One factor alone cannot be the cause of the disaster: it would make a transaction fail after around 3 seconds and would be caught even in the most trivial unit testing. No one but the developer would ever know about it.
  • If two of the factors together cause failure, they will occur every million transactions, meaning about once an hour. Any reasonable testing will discover the problem before a release is ever deployed.
  • If all three factors are required, the probability is 10⁻⁹ per transaction, meaning that a failure will occur roughly once a month (see the quick check below). Only in that case will a real problem exist: a flaw that goes undetected for a long time, during which everything seems normal, until disaster strikes.
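A quick check of that last figure, in LaTeX form (straightforward arithmetic from the 300-transactions-per-second rate assumed above):

\[
  300 \,\tfrac{\text{transactions}}{\text{second}} \times 3.15 \times 10^{7} \,\tfrac{\text{seconds}}{\text{year}}
  \;\approx\; 9.5 \times 10^{9} \,\tfrac{\text{transactions}}{\text{year}},
  \qquad
  9.5 \times 10^{9} \times 10^{-9} \;\approx\; 10 \,\tfrac{\text{failures}}{\text{year}}
\]

That is, roughly one failure a month: rare enough to escape ordinary testing, frequent enough to strike in production.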

These observations explain why post-mortem examinations of catastrophes always point to a seemingly impossible combination of unfortunate circumstances. The archduke went to Sarajevo and he insisted on seeing the wounded and someone forgot to tell the drivers about the prudent decision to bypass the announced itinerary and the convoy stalled and the assassin saw it and he hit Franz Ferdinand right in the neck and there was nationalistic resentment in various countries and the system of alliances required countries to declare war [4]. Same thing for industrial accidents. Same thing for the Apple bug: obviously, there were no good code reviews and no static analysis tools applied and no good management; and, obviously, a programming language that blows innocent mistakes up into disasters of planetary import.

So much for the accepted wisdom, heard again and again in software engineering circles, that code does not matter, syntax does not count, typos are caught right away, and that all we should care about is process or agility or requirements or some other high-sounding concern more respectable than programming. Code? Programming languages? Did we not take care of those years ago? “I remember similar distractions.”

There is a  positive conclusion to the “and” nature (in probabilistic terms, the multiplicative nature) of causes necessary to produce a catastrophe in practice: it suffices to get rid of one of the operands of the “and” to falsify its result, hence avoiding the catastrophe. When people tell you that code does not matter or that language does not matter, just understand the comment for what it really means, “I am ashamed of the programming language and techniques I use but do not want to admit it so I prefer to blame problems on the rest of the world“, and make the correct deduction: use a good programming language.

References

[1] Paul Ducklin: Anatomy of a “goto fail” – Apple’s SSL bug explained, plus an unofficial patch for OS X!, Naked Security blog (Sophos), 24 February 2014, available here.

[2] Dennis E. Hamilton: The Goto Squirrel, ACM Risks Forum, 28 February 2014, available here.

[3] Jean-Marc Jézéquel and Bertrand Meyer: Design by Contract: The Lessons of Ariane, in Computer (IEEE), vol. 30, no. 1, January 1997, pages 129-130, available online here and, with reader responses, here.

[4] Assassination of Archduke Franz Ferdinand of Austria: here.


New article: contracts in practice

For almost anyone programming in Eiffel, contracts are just a standard part of daily life; Patrice Chalin’s pioneering study of a few years ago [1] confirmed this impression. A larger empirical study is now available to understand how developers actually use contracts when available. The study, to be published at FM 2014 [2], covers 21 programs, not just in Eiffel but also in JML and in Code Contracts for C#, totaling 830,000 lines of code and following the programs’ revision histories for a grand total of 260 million lines of code over 7700 revisions. It analyzes in detail whether programmers use contracts, how they use them (in particular, which kinds, among preconditions, postconditions and invariants), how contracts evolve over time, and how inheritance interacts with contracts.
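As a reminder of what gets counted, here is a generic Eiffel sketch (my own, not an excerpt from the study's code base) showing the three kinds of contract elements:

class COUNTER

feature
    value: INTEGER
            -- Current count.

    increment (step: INTEGER)
            -- Increase `value' by `step'.
        require                -- Precondition.
            positive_step: step > 0
        do
            value := value + step
        ensure                 -- Postcondition.
            increased: value = old value + step
        end

invariant                      -- Class invariant.
    non_negative: value >= 0

end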

The paper is easy to read so I will refer you to it for the detailed conclusions, but one thing is clear: anyone who thinks contracts are for special development or special developers is completely off-track. In an environment supporting contracts, especially as a native part of the language, programmers understand their benefits and apply them as a matter of course.

References

[1] Patrice Chalin: Are practitioners writing contracts?, in Fault-Tolerant System, eds. Butler, Jones, Romanovsky, Troubitsyna, Springer LNCS, vol. 4157, pp. 100–113, 2006.

[2] H.-Christian Estler, Carlo A. Furia, Martin Nordio, Marco Piccioni and Bertrand Meyer: Contracts in Practice, to appear in proceedings of 19th International Symposium on Formal Methods (FM 2014), Singapore, May 2014, draft available here.


New article: passive processors

 

The SCOOP concurrency model has a clear division of objects into “regions”, improving the clarity and reliability of concurrent programs by establishing a close correspondence between the object structure and the process structure. Each region has an associated “processor”, which executes operations on the region’s objects. A literal application of this rule implies, however, a severe performance penalty. As part of the work for his PhD thesis (defended two weeks ago), Benjamin Morandi found out that a mechanism for specifying certain processors as “passive” yields a considerable performance improvement. The paper, to be published at COORDINATION, describes the technique and its applications.

Reference

Benjamin Morandi, Sebastian Nanz and Bertrand Meyer: Safe and Efficient Data Sharing for Message-Passing Concurrency, to appear in proceedings of COORDINATION 2014, 16th International Conference on Coordination Models and Languages, Berlin, 3-6 June 2014, draft available here.


Learning to program, online

The ETH introductory programming course, which I have taught since 2003 and used as the basis for the Springer Touch of Class textbook, is now available as a MOOC: an online course, open to anyone interested [1]. The project was started and led by Marco Piccioni.

The MOOC was released in September; although it was “open” (the other “O”) from the start, we did not publicize it widely until now, since we used it first for the benefit of students taking the course at ETH and took advantage of this experience to polish it. If you follow the acronym buzz you may say it was first a “SPOC” (Small Private Online Course). The experience with our students has been extremely encouraging: they took it as a supplement to the lectures and widely praised its value. We hope that many others will find it useful as well.

MOOCs are hot but they have attracted as much criticism as hype. We have seen the objections: low completion rates, lack of direct human contact, threats to traditional higher education. Two things are clear, though: MOOCs are more than a passing fad; and they have their own pedagogical advantages.

MOOCs are here to stay; one ignores them at one’s own peril. For courses on a popular topic, I believe that in a few years almost everyone will be teaching from a MOOC. Not in the sense of telling students “just follow this course on the Web and come back for the exam“, but as a basis for individual institutions’ courses. For example the students might be told to take the lessons online, then come to in-person lectures (“Flip The Classroom“) or discussion sessions. For any given topic, such as introduction to programming, only a handful of MOOCs will emerge in this role. We would like the ETH course to be one of them.

The question is not just to replace courses and textbooks with an electronic version. MOOCs enrich the learning experience. Introduction to Programming is a particularly fertile application area for taking advantage of technology: the presentation of programming methods and techniques becomes even more effective if students can immediately try out the ideas, compile the result, run it, and see the results on predefined tests. Our course offers many such interactive exercises, thanks to the E4Mooc (Eiffel for MOOCs) framework developed by Christian Estler [2]. This feature has proved to be a key attraction of the course for ETH students. Here for example is an exercise asking you to write a function that determines whether a string is a palindrome (reads identically in both directions):
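(The original post shows a screenshot of the exercise at this point. The following sketch, my own rather than the course's reference solution, gives an idea of the kind of answer a participant might type into the answer area.)

is_palindrome (s: STRING): BOOLEAN
        -- Does `s' read the same from left to right as from right to left?
    local
        i, j: INTEGER
    do
        Result := True
        from
            i := 1
            j := s.count
        until
            i >= j or not Result
        loop
            Result := s [i] = s [j]
            i := i + 1
            j := j - 1
        end
    end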

The program area is pre-filled with a class skeleton where all that remains for you to enter is the algorithm for the relevant function. Then you can click “Compile” and, if there are no compilation errors, “Run” to test your candidate code against a set of predefined test values. One of the benefits for users taking the course is that there is no software to install: everything runs in the cloud, accessible from the browser. Here we see the MOOC not just as a technique for presenting standard material but as an innovative learning tool, opening up pedagogical techniques that were not previously possible.

Besides E4Mooc, our course relies on the Moodle learning platform. Our experience with Moodle has been pleasant; we noticed, for example, that students really liked the Moodle feature enabling them to gain virtual “badges” for good answers, to the point of repeating exercises until they got the badges. For instructors preparing the course, building a MOOC is a huge amount of work (that was not a surprise, people had told us); but it is worth the effort.

We noted that attendance at the lectures increased as compared to previous years. The matter is a natural concern: other than the cold November mornings in Zurich (one of the lectures is at 8 AM) there are many reasons not to show up in class: the textbook covers much of the material; all the slides are online; so are the slides for exercise sessions (tutorials) and texts of exercises and some earlier exams; lectures were video-recorded in previous years, and students can access the old recordings. Our feeling is that the MOOC makes the course more exciting and makes students want to come to class and hear more.

The MOOC course is not tied to a particular period; you can take it whenever you like. (The current practice of offering MOOCs at fixed times is disappointing: what is the point of putting a course online if participants are forced to fit to a fixed schedule?)

Marco Piccioni and I are now off to our second MOOC, which will be a generalization of the first, covering the basics not just of programming but of computer science and IT overall, and will be available on one of the major MOOC platforms. We will continue to develop the existing MOOC, which directly supports our in-person course, and which we hope will be of use to many other people.

Take the MOOC, or tell a beginner near you to take it, and tell us what you think.

References

[1] Bertrand Meyer, Marco Piccioni and other members of the ETH Chair of Software Engineering: ETH Introduction to Programming MOOC, available here.

[2] H-Christian Estler, E4Mooc demo, available on YouTube: here.


LASER 2014 (Elba, September)

2014 marks the 10th anniversary (11th edition) of the LASER summer school. The school will be held September 7-14, 2014, and the detailed information is here.

LASER (the name means Laboratory for Applied Software Engineering Research) is dedicated to practical software engineering. The roster of speakers since we started is a who’s who of innovators in the field. Some of the flavor of the school can be gathered from the three proceedings volumes published in Springer LNCS (more on the way) or simply by browsing the pages of the schools from previous years.

Usually we have a theme, but to mark this anniversary we decided to go for speakers first; we do have a title, “Leading-Edge Software Engineering”, but one broad enough to encompass a wide range of topics presented by star speakers: Harald Gall, Daniel Jackson, Michael Jackson, Erik Meijer (appearing at LASER for the third time!), Gail Murphy and Moshe Vardi. With such a cast you can expect to learn something important regardless of your own primary specialty.

LASER is unique in its setting: a 5-star hotel in the island paradise of Elba, with outstanding food and countless opportunities for exploring the marvelous land, the beaches, the sea, the geology (since antiquity Elba has been famous for its stones and minerals) and the history, from the Romans to Napoleon, who in the 9 months of his reign changed the island forever. The school is serious stuff (8:30 to 13 and 17 to 20 every day), but with enough time to enjoy the surroundings.

Registration is open now.


Negative variables: new version

I have mentioned this paper before (see the earlier blog entry here) but it is now going to be published [1] and has been significantly revised, both to take referee comments into account and because we found better ways to present the concepts.

We have endeavored to explain better than in the draft why the concept of negative variable is necessary and why the usual techniques for modeling object-oriented programs do not work properly for the fundamental OO operation, qualified call x.r (…). These techniques are based on substitution and are simply unable to express certain properties (let alone verify them): those involving the calling context or the global project structure.

The basic idea (repeated in part from the earlier post) is as follows. In modeling OO programs, we have to take into account the unique “general relativity” property of OO programming: all the operations you write are expressed relative to a “current object” which changes repeatedly during execution. More precisely at the start of a call x.r (…) and for the duration of that call the current object changes to whatever x denotes — but to determine that object we must again interpret x in the context of the previous current object. This raises a challenge for reasoning about programs; for example in a routine the notation f.some_reference, if f is a formal argument, refers to objects in the context of the calling object, and we cannot apply standard rules of substitution as in the non-OO style of handling calls.

We introduced a notion of negative variable to deal with this issue. During the execution of a call x.r (…), the negation of x, written x’, represents a back pointer to the calling object; negative variables are characterized by axiomatic properties such as x.x’ = Current and x’.(old x) = Current.

Negative variable as back pointer
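To make the idea concrete, here is a toy simulation in Python (an illustration only, not the paper’s formal treatment; the names Obj and qualified_call are invented for the example). It shows the two ingredients discussed above: the target x of a qualified call is interpreted relative to the old current object, the current object then switches to that target, and a back pointer to the caller plays the role that the negative variable x’ formalizes.

class Obj:
    def __init__(self, name):
        self.name = name
        self.x = None                      # reference used as the call target

def qualified_call(current, attribute, routine):
    # Run `routine` with the current object switched to current.attribute,
    # remembering the caller through a back pointer (the role of x').
    target = getattr(current, attribute)   # interpret x in the caller's context
    back = current                         # x': how to get back to the caller
    return routine(target, back)

a, b = Obj("a"), Obj("b")
a.x = b

# Inside the call the current object is b; the back pointer recovers a,
# mirroring the property x.x' = Current as seen from the caller.
print(qualified_call(a, "x", lambda current, back: (current.name, back.name)))
# prints ('b', 'a')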

The paper explains why this concept is necessary, describes the associated formal rules, and presents applications.

Reference

[1] Bertrand Meyer and Alexander Kogtenkov: Negative Variables and the Essence of Object-Oriented Programming, in Specification, Algebra, and Software, eds. Shusaku Iida, Jose Meseguer and Kazuhiro Ogata, Springer Lecture Notes in Computer Science, 2014, to appear. See text here.


Niklaus Wirth birthday symposium, 20 February, Zurich

In honor of Niklaus Wirth’s 80th birthday we are organizing a symposium at ETH on February 20, 2014. This is a full-day event with invited talks by:

  • Vint Cerf
  • Hans Eberlé
  • Michael Franz
  • me
  • Carroll Morgan
  • Martin Odersky
  • Clemens Szyperski
  • Niklaus Wirth himself

From the symposium’s web page:

Niklaus Wirth was a Professor of Computer Science at ETH Zürich, Switzerland, from 1968 to 1999. His principal areas of contribution were programming languages and methodology, software engineering, and design of personal workstations. He designed the programming languages Algol W, Pascal, Modula-2, and Oberon, was involved in the methodologies of structured programming and stepwise refinement, and designed and built the workstations Lilith and Ceres. He published several text books for courses on programming, algorithms and data structures, and logical design of digital circuits. He has received various prizes and honorary doctorates, including the Turing Award, the IEEE Computer Pioneer Award, and the Award for outstanding contributions to Computer Science Education.

Participation is free (including breaks, lunch and the concluding “Apéro”) but space is strictly limited and we expect to run out of seats quickly. So if you are interested (but only if you are certain to attend) please register right away.

Symposium page and access to registration form: here.


Hungarian rotation

The 2013 Informatics Europe “Best Practices in Education” award was devoted, this year, to initiatives for teaching informatics in schools [1].  It was given out last week at the European Computer Science Summit in Amsterdam [2]. Two teams shared it, one from Poland and the other from Romania. Both teams showed excellent projects, but the second was beyond anything I expected.

The project comes from the Hungarian-speaking Sapientia University in Transylvania and is devoted to teaching algorithms visually and “at the same time enhancing intercultural communication” in the region. It illustrates the classical sorting algorithms through folk dances. Quicksort is Hungarian, selection sort is gypsy, merge sort is “Transylvanian-saxon”. I think my favorite is Shell sort [3]. For more, see their YouTube channel [4].

Now if only they could act out the loop invariants [5].

References

[1] 2013 Best Practices in Education award, see here.

[2] 2013 European Computer Science Summit, here.

[3] “Shell sort with Hungarian (Székely) folk dance”, see here.

[4] YouTube Algorythmics channel, here.

[5] Carlo Furia, Bertrand Meyer and Sergey Velder: Loop invariants: Analysis, Classification and Examples, in ACM Computing Surveys, September 2014, to appear, available here.


The laws of branching (part 2): Tichy and Joy

Recently I mentioned the first law of branching (see earlier article) to Walter Tichy, famed creator of RCS, the system that established modern configuration management. He replied with the following anecdote, which is worth reproducing in its entirety (in his own words):

I started work on RCS in 1980, because I needed an alternative for SCCS, for which the license cost would have been prohibitive. Also, I wanted to experiment with reverse deltas. With reverse deltas, checking out the latest version is fast, because it is stored intact. For older ones, RCS applied backward deltas. So the older revisions took longer to extract, but that was OK, because most accesses are to the newest revision anyway.

At first, I didn’t know how to handle branches in this scheme. Storing each branch tip in full seemed like a waste. So I simply left out the branches.

It didn’t take long and people were using RCS. Bill Joy, who was at Berkeley at the time and working on Berkeley Unix, got interested. He gave me several hints about unpleasant features of SCCS that I should correct. For instance, SCCS didn’t handle identification keywords properly under certain circumstances, the locking scheme was awkward, and the commands too. I figured out a way to solve these issues. Bill was actually my toughest critic! When I was done with all the modifications, Bill came back and said that he was not going to use RCS unless I put in branches. So I figured out a way. In order to reconstruct a branch tip, you start with the latest version on the main trunk, apply backward deltas up to the branch point, and then apply forward deltas out to the branch tip. I also implemented a numbering scheme for branches that is extensible.

When discussing the solution, Bill asked me whether this scheme meant that it would take longer to check in and out on branches. I had to admit that this was true. With the machines at that time (VAXen) efficiency was important. He thought about this for a moment and then said that that was actually great. It would discourage programmers from using branches! He felt they were a necessary evil.
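The scheme Tichy describes lends itself to a small sketch. The following Python fragment is a toy illustration only (it has nothing to do with RCS’s actual file format or delta algorithms; deltas are represented as plain functions from one text to another): the newest trunk revision is stored in full, older trunk revisions are reached by applying backward deltas, and a branch tip is reached by going backward to the branch point and then forward along the branch, which is exactly why branch operations cost more.

def reconstruct_trunk(latest_text, backward_deltas, steps_back):
    # Apply `steps_back` backward deltas, newest first, to the stored latest text.
    text = latest_text
    for delta in backward_deltas[:steps_back]:
        text = delta(text)
    return text

def reconstruct_branch_tip(latest_text, backward_deltas, branch_point, forward_deltas):
    # Go back to the branch point, then forward along the branch's own deltas.
    text = reconstruct_trunk(latest_text, backward_deltas, branch_point)
    for delta in forward_deltas:
        text = delta(text)
    return text

latest = "1.3: latest trunk text"
backward = [lambda t: "1.2: older trunk text", lambda t: "1.1: oldest trunk text"]
branch_forward = [lambda t: "1.2.1.1: branch text", lambda t: "1.2.1.2: branch tip text"]

print(reconstruct_trunk(latest, backward, 2))                        # 1.1: oldest trunk text
print(reconstruct_branch_tip(latest, backward, 1, branch_forward))   # 1.2.1.2: branch tip text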


Smaller, better textbook

A new version of my Touch of Class [1] programming textbook is available. It is not quite a new edition but more than just a new printing. All the typos that had been reported as of a few months ago have been corrected.

The format is also significantly smaller. This change is more than a trifle. When a reader told me for the first time “really nice book, pity it is so heavy!”, I commiserated and did not pay much attention. After twenty people said that, and many more after them, including professors looking for textbooks for their introductory programming classes, I realized it was a big deal. The reason the book was big and heavy was not so much the size of the contents (876 pages is not small, but not outrageous for a textbook introducing all the fundamental concepts of programming). Rather, it is a technical matter: the text is printed in color, and Springer really wanted to do a good job, choosing paper thick enough that the colors would not seep through. In addition I chose a big font to make the text readable, resulting in a large format. In fact I overdid it; the font is bigger than necessary, even for readers who do not have the good near-reading sight of a typical 19-year-old student.

We kept the color and the good paper, but reduced the font size and hence the length and width. The result is still very readable, and much more portable. I am happy to make my contribution to reducing energy consumption (at ETH alone, think of the burden on Switzerland’s global energy bill of 200+ students carrying the book — as I hope they do — every morning on the buses, trains and trams crisscrossing the city!).

Springer also provides electronic access.

Touch of Class is the textbook developed on the basis of the Introduction to Programming course [2], which I have taught at ETH Zurich for the last ten years. It provides a broad overview of programming, starting at an elementary level (zeros and ones!) and encompassing topics not usually covered in introductory courses, even a short introduction to lambda calculus. You can get an idea of the style of coverage of such topics by looking up the sample chapter on recursion at touch.ethz.ch. Examples of other topics covered include a general introduction to software engineering and software tools. The presentation uses full-fledged object-oriented concepts (including inheritance, polymorphism, genericity) right from the start, and Design by Contract throughout. Based on the “inverted curriculum” ideas on which I published a number of articles, it presents students with a library of reusable components, the Traffic library for graphical modeling of traffic in a city, and builds on this infrastructure both to teach students abstraction (reusing code through interfaces including contracts) and to provide them models of high-quality code for imitation and inspiration.

For more details see the article on this blog that introduced the book when it was first published [3].

References

[1] Bertrand Meyer, Touch of Class: An Introduction to Programming Well Using Objects and Contracts, Springer Verlag, 2nd printing, 2013. The Amazon page is here. See the book’s own page (with slides and other teaching materials, sample chapter etc.) here. (Also available in Russian, see here.)

[2] Einführung in die Programmierung (Introduction to Programming) course, English course page here.

[3] Touch of Class published, article on this blog, 11 August 2009, see [1] here.


The invariants of key algorithms (new paper)

 

I have mentioned this paper before but as a draft. It has now been accepted by ACM’s Computing Surveys and is scheduled to appear in September 2014; the current text, revised from the previous version, is available [1].

Here is the abstract:

Software verification has emerged as a key concern for ensuring the continued progress of information technology. Full verification generally requires, as a crucial step, equipping each loop with a “loop invariant”. Beyond their role in verification, loop invariants help program understanding by providing fundamental insights into the nature of algorithms. In practice, finding sound and useful invariants remains a challenge. Fortunately, many invariants seem intuitively to exhibit a common flavor. Understanding these fundamental invariant patterns could therefore provide help for understanding and verifying a large variety of programs.

We performed a systematic identification, validation, and classification of loop invariants over a range of fundamental algorithms from diverse areas of computer science. This article analyzes the patterns, as uncovered in this study, governing how invariants are derived from postconditions; it proposes a taxonomy of invariants according to these patterns, and presents its application to the algorithms reviewed. The discussion also shows the need for high-level specifications based on “domain theory”. It describes how the invariants and the corresponding algorithms have been mechanically verified using an automated program prover; the proof source files are available. The contributions also include suggestions for invariant inference and for model-based specification.
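As a minimal illustration of the idea, not taken from the paper: for a loop computing the maximum of an array, the invariant is the postcondition weakened to the part of the array examined so far. The Python sketch below makes the invariant and the postcondition explicit as assertions.

def maximum(a):
    # Precondition: a is a non-empty list.
    assert len(a) > 0
    result = a[0]
    i = 1
    while i < len(a):
        # Invariant: result is the maximum of the prefix a[0..i-1], i.e. the
        # postcondition restricted to the part processed so far.
        assert result == max(a[:i])
        if a[i] > result:
            result = a[i]
        i += 1
    # Postcondition: result is the maximum of the whole array.
    assert result == max(a)
    return result

print(maximum([3, 7, 2, 9, 4]))   # prints 9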

Reference

[1] Carlo Furia, Bertrand Meyer and Sergey Velder: Loop invariants: Analysis, Classification and Examples, in ACM Computing Surveys, to appear in September 2014, preliminary text available here.


Reading notes: strong specifications are well worth the effort

 

This report continues the series of ICSE 2013 article previews (see the posts of these last few days, other than the DOSE announcement), but is different from its predecessors since it talks about a paper from our group at ETH, so you should not expect any dangerously delusional,  disingenuously dubious or downright deceptive declaration or display of dispassionate, disinterested, disengaged describer’s detachment.

The paper [1] (mentioned on this blog some time ago) is entitled What Good Are Strong Specifications? and will be presented on Wednesday by Nadia Polikarpova. The basic result: stronger specifications, which capture a more complete part of program functionality, cause only a modest increase in specification effort, but the benefits are huge; in particular, automatic testing finds twice as many faults (“bugs”, as recently reviewed papers call them).

Strong specifications are specifications that go beyond simple contracts. A straightforward example is a specification of a push operation for stacks; in EiffelBase, the basic Eiffel data structure library, the contract’s postcondition will read

item = x                                        /A/
count = old count + 1

where x is the element being pushed, item the top of the stack and count the number of elements. It is of course sound, since it states that the element just pushed is now the new top of the stack, and that there is one more element; but it is also incomplete, since it says nothing about the other elements remaining as they were; an implementation could satisfy the contract and still mess up those elements. Using “complete” or “strong” contracts, we associate with the underlying domain a theory [2], or “model”, represented by a specification-only feature in the class, model, denoting a sequence of elements; then it suffices (with the convention that the top is the first element of the model sequence, and that “+” denotes concatenation of sequences) to use the postcondition

model = <x> + old model         /B/

which says all there is to say and implies the original postconditions /A/.
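For readers who do not use Eiffel, the following Python sketch is an analogue (an illustration of the idea only, not the EiffelBase or EiffelBase+ text; the class and feature names are invented) contrasting the two styles: the partial /A/ postcondition and the model-based /B/ postcondition, with the model being the sequence of elements, top first.

class Stack:
    def __init__(self):
        self._items = []                   # last element of the list is the top

    @property
    def count(self):
        return len(self._items)

    @property
    def item(self):
        return self._items[-1]             # top of the stack

    @property
    def model(self):
        # Specification-only view: the sequence of elements, top first.
        return list(reversed(self._items))

    def push(self, x):
        old_count = self.count
        old_model = self.model
        self._items.append(x)
        # /A/: sound but incomplete; says nothing about the other elements.
        assert self.item == x and self.count == old_count + 1
        # /B/: complete; the new model is x followed by the old model,
        # which implies /A/ and rules out tampering with the other elements.
        assert self.model == [x] + old_model

s = Stack()
s.push(1)
s.push(2)
print(s.model)   # prints [2, 1]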

Clearly, the strong contracts, in the  /B/ style, are more expressive [3, 4], but they also require more specification effort. Are they worth the trouble?

The paper explores this question empirically, and the answer, at least according to the criteria used in the study, is yes. The work takes advantage of AutoTest [5], an automatic testing framework which relies on the contracts already present in the software to serve as test oracles, and generates test cases automatically. AutoTest was applied both to the classic EiffelBase, with classic partial contracts in the /A/ style, and to the more recent EiffelBase+ library, with strong contracts in the /B/ style. AutoTest is for Eiffel programs; to check for any language-specificity in the results, the work also included testing a smaller set of classes from a C# library, DSA, for which a student developed a version (DSA+) equipped with strong model-based contracts. In that case the testing tool was Microsoft Research’s Pex [7]. The results are similar for both languages: citing from the paper, “the fault rates are comparable in the C# experiments, respectively 6·10⁻³ and 3·10⁻³. The fault complexity is also qualitatively similar.”

The verdict on the effect of strong specifications as captured by automated testing is clear: the same automatic testing tools applied to the versions with strong contracts yield twice as many real faults. The term “real fault” comes from excluding spurious cases, such as specification faults (wrong specification, right implementation), which are a phenomenon worth studying but should not count as a benefit of the strong specification approach. The paper contains a detailed analysis of the various kinds of faults and the corresponding empirically determined measures. This particular analysis is for the Eiffel code, since in the C#/Pex case “it was not possible to get an evaluation of the faults by the original developers”.

In our experience the strong specifications are not that much harder to write. The paper contains a precise measure: about five person-weeks to create EiffelBase+, yielding an “overall benefit/effort ratio of about four defects detected per person-day”. Such a benefit more than justifies the effort. More study of that effort is needed, however, because the “person” in the person-weeks was not just an ordinary programmer. True, Eiffel experience has shown that most programmers quickly get the notion of contract and start applying it; as the saying goes in the community, “if you can write an if-then-else, you can write a contract”. But we do not yet have significant evidence of whether that observation extends to model-based contracts.

Model-based contracts (I prefer to call them “theory-based” because “model” means so many other things, but I do not think I will win that particular battle) are, in my opinion, a required component of the march towards program verification. They are the right compromise between simple contracts, which have proved attractive to many practicing programmers but suffer from incompleteness, and full formal specification à la Z, which says everything but requires too much machinery. They are not an all-or-nothing specification technique but a progressive one: programmers can start with simple contracts, then extend and refine them as desired to yield exactly the right amount of precision and completeness appropriate in any particular context. The article shows that the benefits are well worth the incremental effort.

According to the ICSE program the talk will be presented in the formal specification session, Wednesday, May 22, 13:30-15:30, Grand Ballroom C.

References

[1] Nadia Polikarpova, Carlo A. Furia, Yu Pei, Yi Wei and Bertrand Meyer: What Good Are Strong Specifications?, to appear in ICSE 2013 (Proceedings of 35th International Conference on Software Engineering), San Francisco, May 2013, draft available here.

[2] Bertrand Meyer: Domain Theory: the forgotten step in program verification, article on this blog, see here.

[3] Bernd Schoeller, Tobias Widmer and Bertrand Meyer: Making Specifications Complete Through Models, in Architecting Systems with Trustworthy Components, eds. Ralf Reussner, Judith Stafford and Clemens Szyperski, Lecture Notes in Computer Science, Springer-Verlag, 2006, available here.

[4] Nadia Polikarpova, Carlo Furia and Bertrand Meyer: Specifying Reusable Components, in Verified Software: Theories, Tools, Experiments (VSTTE ‘ 10), Edinburgh, UK, 16-19 August 2010, Lecture Notes in Computer Science, Springer Verlag, 2010, available here.

[5] Bertrand Meyer, Ilinca Ciupa, Andreas Leitner, Arno Fiva, Yi Wei and Emmanuel Stapf: Programs that Test Themselves, IEEE Computer, vol. 42, no. 9, pages 46-55, September 2009, also available here.


[7] Nikolai Tillman and Peli de Halleux: Pex: White-Box Test Generation for .NET, in Tests and Proofs (TAP 2008), pp. 134-153.

 


Reading notes: misclassified bugs

 

(Please note the general disclaimer [1].)

How Misclassification Impacts Bug Prediction [2], an article to be presented on Thursday at ICSE, is the archetype of today’s successful empirical software engineering research, deriving significant results from the mining of publicly available software project repositories — in this case Tomcat5 and three others from Apache, as well as Rhino from Mozilla. The results are in some sense meta-results, because many studies have already mined the bug records of such repositories to draw general lessons about bugs in software development; what Herzig, Just and Zeller now tell us is that the mined data is highly questionable: many problems classified as bugs are not bugs.

The most striking results (announced in a style a bit stentorian to my taste, but indeed striking) are that every third bug report does not describe a bug but a request for a new feature, an improvement, better documentation or tests, code cleanup or refactoring; and that out of five program files marked as defective, two do not in fact contain any bug.

Both of these findings concern false positives. The repositories signal very few misclassifications the other way: only a small subset of enhancement and improvement requests (around 5%) should have been classified as bugs, and even fewer faulty files are missed (8%, but in fact less than 1% if one excludes an outlier, tomcat5 with 38%, a discrepancy that the paper does not discuss).

In the light of this analysis, the authors have a field day questioning the validity of the many studies in recent years — including some, courageously cited, by Zeller himself and coauthors — that start from bug repositories to derive general lessons about bugs and their properties.

The methodology is interesting if a bit scary. The authors (actually, just the two non-tenured authors, probably just a coincidence) analyzed 7401 issue reports manually; more precisely, one of them analyzed all of them and the second one took a second look at the reports that came out from the first step as misclassified, without knowing what the proposed reclassification was, then the results were merged. At 4 minutes per report this truly stakhanovite effort took 90 working days. I sympathize, but I wonder what the rules are in Saarland for experiments involving living beings, particularly graduate students.

Precise criteria were used for the reclassification; for example a report describes a bug, in the authors’ view, if it mentions a null pointer exception (I will skip the opportunity of a pitch for Eiffel’s void safety mechanism), says that the code has to be corrected to fix the semantics, or if there is a “memory issue” or infinite loop. These criteria are reasonable if a bit puzzling (why null pointer exceptions and not other crashes such as arithmetic overflows?); but, more worryingly, no justification is given for them. I wonder how much of the huge discrepancy found by the authors — a third of reported bugs are not bugs, and 40% of supposedly defective program files are not defective — can simply be explained by different classification criteria applied by the software projects under examination. The authors give no indication that they interacted with the people in charge of these projects. To me this is the major question hovering over this paper and its spectacular results. If you are in the room and get the chance, don’t hesitate to ask this question on my behalf or yours!

Another obvious question is how much the results depend on the five projects selected. If there ever was room for replicating a study (a practice whose rarity in software engineering we lament, but whose growth prospects are limited by the near-impossibility of convincing selective software engineering venues to publish confirmatory empirical studies), this would be it. In particular it would be good to see some of the results for commercial products.

The article offers an explanation for the phenomena it uncovered: in its view, the reason why so many bug reports end up misclassified is the difference of perspective between users of the software, who complain about the problems they encounter,  and the software professionals  who prepare the actual bug reports. The explanation is plausible but I was surprised not to see any concrete evidence that supports it. It is also surprising that the referees did not ask the authors to provide more solid arguments to buttress that explanation. Yet another opportunity to raise your hand and ask a question.

This (impressive) paper will call everyone’s attention to the critical problem of data quality in empirical studies. It is very professionally prepared, and could, in addition to its specific contributions, serve as a guide on how to get an empirical software engineering paper accepted at ICSE: take a critical look at an important research area; study it from a viewpoint that has not been considered much so far; perform an extensive study, with reasonable methodological assumptions; derive a couple of striking results, making sure they are both visibly stated and backed by the evidence; and include exactly one boxplot.

Notes and references

[1] This article review is part of the “Reading Notes” series. General disclaimer here.

[2] Kim Herzig, Sascha Just and Andreas Zeller: It’s not a Bug, it’s a Feature: How Misclassification Impacts Bug Prediction, in ICSE 2013, available here. According to the ICSE program the paper will be presented on May 23 in the Bug Prediction session, 16 to 17:30.


Presentations at ICSE and VSTTE

 

The following presentations from our ETH group in the ICSE week (International Conference on Software Engineering, San Francisco) address important issues of software specification and verification, describing new techniques that we have recently developed as part of our work building EVE, the Eiffel Verification Environment. One is at ICSE proper and the other at VSTTE (Verified Software: Tools, Theories, Experiments). If you are around please attend them.

Julian Tschannen will present Program Checking With Less Hassle, written with Carlo A. Furia, Martin Nordio and me, at VSTTE on May 17 in the 15:30-16:30 session (see here in the VSTTE program). The draft is available here. I will write a blog article about this work in the coming days.

Nadia Polikarpova will present What Good Are Strong Specifications?, written with Carlo A. Furia, Yu Pei, Yi Wei and me, at ICSE on May 22 in the 13:30-15:30 session (see here in the ICSE program). The draft is available here. I wrote about this paper in an earlier post: see here. It describes the systematic application of theory-based modeling to the full specification and verification of advanced software.


Bringing C code to the modern world

The C2Eif translator developed by Marco Trudel takes C code and translates it into Eiffel; it produces not just a literal translation but a re-engineered version exhibiting object-oriented properties. Trudel defended his PhD thesis last Friday at ETH (the examiners were Hausi Muller from Victoria University, Manuel Oriol from ABB, Richard Paige from the University of York, and me as the advisor). The thesis is not yet available online but earlier papers describing C2Eif are, all reachable from the project’s home page [1].

At issue is what we do with legacy code. “J’ai plus de souvenirs que si j’avais mille ans” (“I have more memories than if I were a thousand years old”), wrote Charles Baudelaire in one of the “Spleen” poems of Les Fleurs du Mal. The software industry is not a thousand years old, but has accumulated even more “souvenirs” than

A heavy chest of drawers cluttered with balance-sheets,
Poems, love letters, lawsuits, romances
And heavy locks of hair wrapped in invoices.

We are suffocating under layers of legacy code heaped up by previous generations of programmers using languages that no longer meet our scientific and engineering standards. We cannot get rid of this heritage; how do we bring it to the modern world? We need automatic tools to wrap it in contemporary code, or, better, translate it into contemporary code. The thesis and the system offer a way out through translation to a modern object-oriented language. It took courage to choose such a topic, since there have been many attempts in the past, leading to conventional wisdom consisting of two strongly established opinions:

  • Plain translation: it has been tried, and it works. Not interesting for a thesis.
  • Object-oriented reengineering: it has been tried, and it does not work. Not realistic for a thesis.

Both are wrong. For translation, many of the proposed solutions “almost work”: they are good enough to translate simple programs, or even some large programs but on the condition that the code avoids murky areas of C programming such as signals, exceptions (setjmp/longjmp) and library mechanisms. In practice, however, most useful C programs need these facilities, so any tool that ignores them is bound to be of conceptual value only. The basis for Trudel’s work has been to tackle C to OO translation “beyond the easy stuff” (as stated in the title of one of the published papers). This effort has been largely successful, as demonstrated by the translation of close to a million lines of actual C code, including some well-known and representative tools such as the Vim editor.

As to OO reengineering, C2Eif makes a serious effort to derive code that exhibits a true object-oriented design and hence resembles, in its structure at least, what a programmer in the target language might produce. The key is to identify the right data abstractions, yielding classes, and specialization properties, yielding inheritance. In this area too, many people have tried to come up with solutions, with little success. Trudel had the good sense to avoid grandiose goals and stick to a number of heuristics that work, such as looking at the signatures of a set of functions to see if they all involve a common argument type. Clearly there is more to be done in this direction but the result is already significant.
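As a toy illustration of that particular heuristic (this is not C2Eif’s actual analysis; the function names and the minimal representation of signatures are made up for the example), here is how grouping C functions by a shared first-parameter struct type suggests candidate classes:

from collections import defaultdict

# Each entry is (function name, type of its first parameter).
c_functions = [
    ("stack_push",  "struct stack *"),
    ("stack_pop",   "struct stack *"),
    ("stack_count", "struct stack *"),
    ("list_append", "struct list *"),
    ("list_length", "struct list *"),
    ("print_usage", "const char *"),       # no obvious receiver type
]

def propose_classes(functions, min_group_size=2):
    # Group functions by first-parameter struct type; groups that are large
    # enough become candidate classes, with the functions as candidate methods.
    groups = defaultdict(list)
    for name, first_param in functions:
        if first_param.startswith("struct "):
            groups[first_param].append(name)
    return {t: names for t, names in groups.items() if len(names) >= min_group_size}

for struct_type, methods in propose_classes(c_functions).items():
    print("class candidate from", struct_type, "with methods", methods)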

Since Eiffel has a sophisticated C interface it is also possible to wrap existing code; some tools are available for that purpose, such as Andreas Leitner’s EWG (Eiffel Wrapper Generator). Wrapping and translating each have their advantages and limitations; wrapping may be more appropriate for C libraries that someone else is still actively updating  (so that you do not have to redo a translation with every new release), and translation for legacy code that you want to take over and bring up to par with the rest of your software. C2Eif is engineered to support both. More generally, this is a practitioner’s tool, devoting considerable attention to the many details that make all the difference between a nice idea and a tool that really works. The emphasis is on full automation, although more parametrization has been added in recent months.

C2Eif will make a big mark on the Eiffel developer community. Try it yourself — and don’t be shy about telling its author about the future directions in which you think the tool should evolve.

Reference

[1] C2Eif project page, here.


LASER summer school: Software for the Cloud and Big Data

The 2013 LASER summer school, organized by our chair at ETH, will take place September 8-14, once more in the idyllic setting of the Hotel del Golfo in Procchio, on the island of Elba in Italy. This is already the school’s 10th edition; the roster of speakers so far reads like a who’s who of software engineering.

The theme this year is Software for the Cloud and Big Data and the speakers are Roger Barga from Microsoft, Karin Breitman from EMC, Sebastian Burckhardt from Microsoft, Adrian Cockcroft from Netflix, Carlo Ghezzi from Politecnico di Milano, Anthony Joseph from Berkeley, Pere Mato Vila from CERN and I.

LASER always has a strong practical bent, but this year it is particularly pronounced as you can see from the list of speakers and their affiliations. The topic is particularly timely: exploring the software aspects of game-changing developments currently redefining the IT scene.

The LASER formula is by now well-tuned: lectures over seven days (Sunday to Saturday), about five hours in the morning and three in the early evening, by world-class speakers; free time in the afternoon to enjoy the magnificent surroundings; 5-star accommodation and food in the best hotel of Elba, made affordable as we come towards the end of the season (and are valued long-term customers). The group picture below is from last year’s school.

Participants are from both industry and academia and have ample opportunities for interaction with the speakers, who typically attend each others’ lectures and engage in in-depth discussions. There is also time for some participant presentations; a free afternoon to discover Elba and brush up on your Napoleonic knowledge; and a boat trip on the final day.

Information about the 2013 school can be found here.

LASER 2012, Procchio, Hotel del Golfo


The ABC of software engineering

Lack of a precise context can render discussions of software engineering and particularly of software quality meaningless. Take for example the (usually absurd) statement “We cannot expect that programmers will equip their programs with contracts”. Whom do you mean? A physicist who writes 50 lines of Matlab code to produce a graph illustrating his latest experiment? A member of the maintenance team for Microsoft Word? A programmer on the team for a flight control system? These are completely different constituencies, and the answer is also different. In the last case, the answer is probably that we do not care what the programmers like and do not like. When you buy an electrical device that malfunctions, would you accept from the manufacturer the excuse that differential equations are, really, you see, too hard for our electrical engineers?

In discussing the evolution of software methods and tools we must first specify what and whom we are talking about. The following ABC characterization is sufficient for most cases.

C is for Casual. Programs in that category do all kinds of useful things, and like anything else they should work properly, but if they are not ideal in software engineering terms of reliability, reusability, extendibility and so on — if sometimes they crash, sometimes produce not-quite-right results,  cannot be easily understood or maintained by anyone other than their original developers, target just one platform, run too slowly, eat up too much memory, are not easy to change, include duplicated code — it is not the end of the world. I do not have any scientific figures, but I suspect that most of the world’s software is actually in that category, from JavaScript or Python code that runs web sites to spreadsheet macros. Obviously it has to be good enough to serve its needs, but “good enough” is good enough.

B is for Business. Programs in that category run key processes in the organization. While often far from impeccable, they must satisfy strict quality constraints; if they do not, the organization will suffer significantly.

A is for Acute. This is life-critical software: if it does not work — more precisely, if it does not work exactly right — someone will get killed, someone will lose huge amounts of money, or something else will go terribly wrong. We are talking transportation systems, software embedded in critical devices, make-or-break processes of an organization.

Even in a professional setting, and even within a single company, the three categories usually coexist. Take for example a large engineering or scientific organization.  Some programs are developed to support experiments or provide an answer to a specific technical question. Some programs run the organization, both on the information systems side (enterprise management) and on the technical side (large scientific simulations, experiment set-up). And some programs play a critical role in making strategy decisions, or run the organization’s products.

The ABC classification is independent of the traditional division between enterprise and technical computing. Organizations often handle these two categories separately, whereas in fact they raise issues of similar difficulty and are subject to solutions of a similar nature. It is more important to assess the criticality of each software project along the ABC scale.

It is surprising that few organizations make that scale explicit.  It is partly a consequence of that neglect that many software quality initiatives and company-wide software engineering policies are ineffective: they lump everything together, and since they tend to be driven by A-grade applications, for which the risk of bad quality is highest, they create a burden that can be too high for C- and even B-grade developments. People resent the constraints where they are not justified, and as a consequence ignore them where they would be critical. Whether your goal for the most demanding projects is to achieve CMMI qualification or to establish an effective agile process, you cannot impose the same rules on everyone. Sometimes the stakes are high; and sometimes a program is just a program.

The first step in establishing a successful software policy is to separate levels of criticality, and require every development to position itself along the resulting scale. The same observation qualifies just about any discussion of software methodology. Acute, Business or Casual: you must know your ABC.
