Archive for the ‘Agile’ Category.

New book: the Requirements Handbook


I am happy to announce the publication of the Handbook of Requirements and Business Analysis (Springer, 2022).

It is the result of many years of thinking about requirements and how to do them right, taking advantage of modern principles of software engineering. While programming languages, design techniques, process models and other software engineering disciplines have progressed considerably, requirements engineering remains the sick cousin. With this book I am trying to help close the gap.

The Handbook introduces a comprehensive view of requirements including four elements or PEGS: Project, Environment, Goals and System. One of its principal contributions is the definition of a standard plan for requirements documents, consisting of the four corresponding books and replacing the obsolete structure of the 1998 IEEE standard.

The text covers both classical requirements techniques and novel topics such as object-oriented requirements and the use of formal methods.

The successive chapters address: fundamental concepts and definitions; requirements principles; the Standard Plan for requirements; how to write good requirements; how to gather requirements; scenario techniques (use cases, user stories); object-oriented requirements; how to take advantage of formal methods; abstract data types; and the place of requirements in the software lifecycle.

The Handbook is suitable both as a practical guide for industry and as a textbook, with over 50 exercises and supplementary material available from the book’s site.

You can find here a book page with the preface and sample chapters.

To purchase the book, see the book page at Springer and the book page at Amazon US.


A standard plan for modern requirements

Requirements documents for software projects in industry, agile or not, typically follow a plan defined in a 1998 IEEE standard (IEEE 830-1998 [1]), “reaffirmed” in 2009. IEEE 830 has the merit of simplicity, as it fits in 37 pages of which just a few (competently) describe basic requirements concepts and fewer than 10 are devoted to explaining the standard’s recommended plan, which itself consists of 3 sections with subsections. Simplicity is good, but the elementary nature of the IEEE 830 plan is just not up to the challenges of modern information technology. It is time to retire this venerable precursor and move to a structure that works for the kind of ambitious, multi-faceted IT systems we build today.

For the past few years I have worked on defining a systematic approach to requirements, culminating in a book to be published in the Fall, Handbook of Requirements and Business Analysis. One of the results of this effort is a standard plan, based on the “PEGS” view of requirements where the four parts cover Project, Environment, Goals and System. The details are in the book (for some of the basic concepts see a paper at TOOLS 2019, [2]). Here I will introduce some of the key principles, since they can already be used — as various people have done since I first started presenting the ideas in courses and seminars (particularly an ACM Webinar, organized by Will Tracz last March, whose recording is available on YouTube, and another hosted by Grady Booch for IBM).

[Figure: the four PEGS (Project, Environment, Goals, System)]

The starting point, which gives its name to the approach, is that requirements should cover the four aspects mentioned, the four “PEGS”, defined as follows:

  • A Goal is a result desired by an organization.
  • A System is a set of related artifacts, devised to help meet certain goals.
  • A Project is the set of human processes involved in the planning, construction, revision and  operation of a system.
  • An Environment is the set of entities (such as people, organizations, devices and other material objects, regulations and other systems) external to the project and system but with the potential to affect the goals, project or system or to be affected by them.

The recommended standard plan consequently consists of four parts or books.

This proposed standard does not prescribe any particular approach to project management, software development, project lifecycle or requirements expression, and is applicable in particular to both traditional (“waterfall”) and agile projects. It treats requirements as a project activity, not necessarily a lifecycle step. One of the principles developed in the book is that requirements should be treated as a dynamic asset of the project, written in a provisional form (more or less detailed depending on the project methodology) at the beginning of the project, and then regularly extended and updated.

Similarly, the requirements can be written using any appropriate notation and method, from the most informal to the most mathematical.  In a recently published ACM Computing Surveys paper [3], my colleagues and I reviewed the various levels of formalism available  in today’s requirements approaches. The standard plan is agnostic with respect to this matter.

The books may reference each other but not arbitrarily. The permitted relations are as follows:

[Figure: permitted reference relations between the four books]

Note in particular that the description of the Goals should leave out technical details and hence may not refer to Project and System elements, although it may need to explain the properties of the Environment that influence or constrain the business goals. The Environment exists independently of the IT effort, and hence the Environment book should not reference any of the others, with the possible exception (dotted arrow in the figure) of effects of the System if it is to change the environment. (For example, a payroll IT system may affect the payroll process; a heating system may affect the ambient temperature.)

The multi-book structure of the recommended PEGS standard plan already goes beyond the traditional view of a single, linear “requirements document”. The books themselves are not necessarily written as linear texts; with today’s technology, requirements parts can and generally should be stored in a requirements repository which includes all requirements-relevant elements.  A linear form remains necessary; it can be either written as such or produced by tools from elements in the repository.

The standard plan defines the structure of the four PEGS books down to one more level, chapters. For any further levels (sections), each organization can define its own rules. Books can also include front and back matter, including for example  tables of contents, legal disclaimers, revision history etc., not covered by the standard structure. Here is that structure:

[Figure: chapter-level structure of the four PEGS books]

It is meant to be self-explanatory, but here are a few comments on each of the books:

  • One of the products of the requirements effort should be to help plan and manage  the rest of the Project. This is the goal of the Project book; note in particular P.4 and P.5 covering tasks and deadlines. P.7 starts out at the beginning of the project as a blueprint for the requirements effort, and as this effort proceeds (stakeholder interviews, stakeholder workshops, competitive analysis, requirements writing …) can be regularly updated to report on how it went. (This feature is an example of treating the requirements repository and documents as a living, dynamic asset, as noted above.)
  • In the Environment book, constraints (E.3) are properties of the environment (the problem domain) imposed on the project and system. Effects (E.5) go the other way around, describing how the system may affect the environment. Invariants (E.6) do both. Assumptions (E.4) are properties that are taken for granted to simplify the construction of the system (for example, a maximum number of simultaneous users), as distinct from actual constraints.
  • The Goals book is intended for a less technical audience than the other books: it must be understandable to decision makers and non-IT-savvy stakeholders. It includes a short summary (G.4) of functionality, a kind of capsule version of the System book trimmed down to the essentials. Note the importance of specifying (in G.6) what aspects the system is not intended to address. The Goals book can include some (G.5) usage scenarios expressed in terms meaningful to the book’s constituencies and hence remaining at a high level of generality.
  • Detailed usage scenarios will appear in the System book (S.4).  It is important to prioritize the functions (S.5), allowing a reasoned approach (rather than decisions made in a panic) if the project hits roadblocks and must sacrifice some of the functionality.

A naïve but still widely encountered view of requirements is that they serve to  “describe what the system will do” (independently of how it will do it). In the structure above, it corresponds to just one-fourth of the requirements effort, the System part. Work on requirements engineering in the past few decades has emphasized, among other concepts, the need to separate system and environment properties (Michael Jackson, Pamela Zave) and the importance of goals (Axel van Lamsweerde).

The plan reflects this richness of the requirements concept and I hope it can help many projects produce better requirements for better software.

References

[1] IEEE 830-1998, available here.

[2] Bertrand Meyer, Jean-Michel Bruel, Sophie Ebersold, Florian Galinier and Alexandr Naumchev: The Anatomy of Software Requirements, in TOOLS 2019, Springer Lecture Notes in Computer Science 11771, 2019, pages 10-40.

[3] Jean-Michel Bruel, Sophie Ebersold, Florian Galinier, Manuel Mazzara, Alexander Naumchev and Bertrand Meyer:  The Role of Formalism in System Requirements, in  Computing Surveys (ACM), vol. 54, no. 5, June 2021, pages 1-36, DOI: https://doi.org/10.1145/3448975, preprint available here.

A version of this article appeared earlier in the Communications of the ACM blog.

 


Some contributions

Science progresses through people taking advantage of others’ insights and inventions. One of the conditions that makes the game possible is that you acknowledge what you take. For the originator, it is rewarding to see one’s ideas reused, but frustrating when that happens without acknowledgment, especially when you are yourself punctilious about citing your own sources of inspiration.

I have started to record some concepts that are widely known and applied today and which I believe I originated in whole or in part, whether or not their origin is cited by those who took them. The list below is not complete and I may update it in the future. It is not a list of ideas I contributed, only of those fulfilling two criteria:

  • Others have built upon them.  (If there is an idea that I think is great but no one paid attention to it, the list does not include it.)
  • They have gained wide visibility.

There is a narcissistic aspect to this exercise and if people want to dismiss it as just showing I am full of myself so be it. I am just a little tired of being given papers to referee that state that genericity was invented by Java, that no one ever thought of refactoring before agile methods, and so on. It is finally time to state some facts.

Facts indeed: I back every assertion by precise references. So if I am wrong — i.e. someone preceded me — the claims of precedence can be refuted; if so I will update or remove them. All articles by me cited in this note are available (as downloadable PDFs) on my publication page. (The page is up to date until 2018; I am in the process of adding newer publications.)

Post-publication note: I have started to receive some comments and added them in a Notes section at the end; references to those notes are in the format [A].

Final disclaimer (about the narcissistic aspect): the exercise of collecting this kind of information was new for me, as I do not usually spend time reflecting on the past. I am much more interested in the future and definitely hope that my next contributions will eclipse any of the ones listed below.

Programming concepts: substitution principle

Far from me any wish to under-represent the seminal contributions of Barbara Liskov, particularly her invention of the concept of abstract data type on which so much relies. As far as I can tell, however, what has come to be known as the “Liskov Substitution Principle” is essentially contained in the discussion of polymorphism in section 10.1 of the first edition (Prentice Hall, 1988) of my book Object-Oriented Software Construction (hereafter OOSC1); for example, “the type compatibility rule implies that the dynamic type is always a descendant of the static type” (10.1.7) and “if B inherits from A, the set of objects that can be associated at run time with an entity [generalization of variable] includes instances of B and its descendants”.

Perhaps most tellingly, a key aspect of the substitution principle, as listed for example in the Wikipedia entry, is the rule on assertions: in a proper descendant, keep the invariant, keep or weaken the precondition, keep or strengthen the postcondition. This rule was introduced in OOSC1, over several pages in section 11.1. There is also an extensive discussion in the article Eiffel: Applying the Principles of Object-Oriented Design published in the Journal of Systems and Software, May 1986.

The original 1988 Liskov article cited (for example) in the Wikipedia entry on the substitution principle says nothing about this and does not in fact include any of the terms “assertion”, “precondition”, “postcondition” or “invariant”. To me this absence means that the article misses a key property of substitution: that the abstract semantics remain the same. (Also cited is a 1994 Liskov article in TOPLAS, but that was many years after OOSC1 and other articles explaining substitution and the assertion rules.)

Liskov’s original paper states that “if for each object o1 of type S there is an object o2 of type T such that for all programs P defined in terms of T, the behavior of P is unchanged when o1 is substituted for o2, then S is a subtype of T.” As stated, this property is impossible to satisfy: if the behavior is identical, then the implementations are the same, and the two types are identical (or differ only by name). Of course the concrete behaviors are different: applying the operation rotate to two different figures o1 and o2, whose types are subtypes of FIGURE and in some cases of each other, will trigger different algorithms — different behaviors. Only with assertions (contracts) does the substitution idea make sense: the abstract behavior, as characterized by preconditions, postconditions and the class invariants, is the same (modulo respective weakening and strengthening to preserve the flexibility of the different versions). Realizing this was a major step in understanding inheritance and typing.
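To make the rule concrete, here is a minimal Eiffel-style sketch (the classes and the bonus formula are invented for illustration, not taken from any library) of a redeclaration that satisfies it: the precondition may only be weakened, through require else; the postcondition may only be strengthened, through ensure then; the inherited invariant is kept.

   class ACCOUNT
   feature
      balance: INTEGER

      deposit (sum: INTEGER)
            -- Credit `sum' to the account.
         require
            positive: sum > 0
         do
            balance := balance + sum
         ensure
            credited: balance >= old balance + sum
         end

   invariant
      non_negative: balance >= 0
   end

   class PROMOTIONAL_ACCOUNT
   inherit
      ACCOUNT
         redefine deposit end

   feature
      bonus (sum: INTEGER): INTEGER
            -- Bonus granted for a deposit of `sum'.
         do
            Result := sum // 100
         end

      deposit (sum: INTEGER)
            -- Credit `sum' plus a promotional bonus.
         require else
            non_negative: sum >= 0
               -- Weakened: accepts every call the original accepted, and more.
         do
            balance := balance + sum + bonus (sum)
         ensure then
            with_bonus: balance = old balance + sum + bonus (sum)
               -- Strengthened: implies the original `credited' clause.
         end
   end

A client programmed against ACCOUNT can be handed a PROMOTIONAL_ACCOUNT without noticing: the abstract contract it relies on still holds.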

I do not know of any earlier (or contemporary) exposition of this principle and it would be normal to get the appropriate recognition.

Software design: design patterns

Two of the important patterns in the “Gang of Four” Design Patterns book (GoF) by Gamma et al. (1995) are the Command Pattern and the Bridge Pattern. I introduced them (under different names) in the following publications:

  • The command pattern appears in OOSC1 under the name “Undo-Redo” in section 12.2. The solution is essentially the same as in GoF. I do not know of any earlier exposition of the technique. See also notes [B] and [C].
  • The bridge pattern appears under the name “handle technique” in my book Reusable Software: The Base Component Libraries (Prentice Hall, 1994). It had been described several years earlier in manuals for Eiffel libraries. I do not know of an earlier reference. (The second edition of Object-Oriented Software Construction — Prentice Hall, 1997, “OOSC2” —, which also describes it, states that a similar technique is described in an article by Josef Gil and Ricardo Szmit at the TOOLS USA conference in the summer of 1994, i.e. after the publication of Reusable Software.)

Note that it is pointless to claim precedence over GoF since that book explicitly states that it is collecting known “best practices”, not introducing new ones. The relevant questions are: who, pre-GoF, introduced each of these techniques first; and which publications the GoF book cites as “prior art” for each pattern. In the cases at hand, Command and Bridge, it does not cite OOSC1.

To be concrete: unless someone can point to an earlier reference, then anytime anyone anywhere using an interactive system enters a few “CTRL-Z” to undo commands, possibly followed by some “CTRL-Y” to redo them (or uses other UI conventions to achieve these goals), the software is most likely relying on a technique that I first described in the place mentioned above.
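For concreteness, here is a minimal sketch of that scheme in today’s Eiffel syntax, following the description recalled in note [B] below; the class HISTORY and its feature names are invented for the example, and a real application would add details (grouping of commands, persistence and so on).

   deferred class COMMAND
         -- One user-level command; each kind of command (line deletion,
         -- text insertion...) is a concrete descendant of this class.
   feature
      do_it
            -- Carry out this command.
         deferred
         end

      undo_it
            -- Cancel the effect of a previously executed `do_it'.
         deferred
         end
   end

   class HISTORY
         -- History list supporting multiple-level undo and redo.
   create
      make

   feature {NONE} -- Initialization
      make
         do
            create commands.make (0)
         end

   feature
      commands: ARRAYED_LIST [COMMAND]
            -- Commands executed in this session, oldest first.

      done: INTEGER
            -- How many of them are currently in effect.

      execute (c: COMMAND)
            -- Carry out `c' and record it; commands that were undone
            -- and not redone can no longer be redone.
         do
            from until commands.count = done loop
               commands.finish
               commands.remove
            end
            c.do_it
            commands.extend (c)
            done := done + 1
         end

      undo
            -- Cancel the latest command still in effect.
         require
            can_undo: done > 0
         do
            commands.i_th (done).undo_it
            done := done - 1
         end

      redo
            -- Re-execute the latest undone command.
         require
            can_redo: done < commands.count
         do
            done := done + 1
            commands.i_th (done).do_it
         end

   invariant
      done_in_range: 0 <= done and done <= commands.count
   end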

Software design: Open-Closed Principle

Another contribution of OOSC1 (1988), section 2.3, reinforced in OOSC2 (1997), is the Open-Closed principle, which explained one of the key aspects of inheritance: the ability to keep a module both closed (immediately usable as is) and open to extension (through inheritance), preserving the basic semantics. I am mentioning this idea only in passing since in this case my contribution is usually recognized, for example in the Wikipedia entry.

Software design: OO for reuse

Reusability: the Case for Object-Oriented Design (1987) is, I believe, the first publication that clearly explained why object-oriented concepts were (and still are today — in Grady Booch’s words, “there is no other game in town”) the best answer to realize the goal of software construction from software components. In particular, the article:

  • Explains the relationship between abstract data types and OO programming, showing the former as the theoretical basis for the latter. (The CLU language at MIT originated from Liskov’s pioneering work on abstract data types, but was not OO in the full sense of the term, missing in particular a concept of inheritance.)
  • Shows that reusability implies bottom-up development. (Top-down refinement was the mantra at the time, and promoting bottom-up was quite a shock for many people.)
  • Explains the role of inheritance for reuse, as a complement to Parnas’s interface-based modular construction with information hiding.

Software design: Design by Contract

The contribution of Design by Contract is one that is widely acknowledged so I don’t have any point to establish here — I will just recall the essentials. The notion of assertion goes back to the work of Floyd, Hoare and Dijkstra in the sixties and seventies, and correctness-by-construction to Dijkstra, Gries and Wirth, but Design by Contract is a comprehensive framework providing:

  • The use of assertions in an object-oriented context. (The notion of class invariant was mentioned in a paper by Tony Hoare published back in 1972.)
  • The connection of inheritance with assertions (as sketched above). That part as far as I know was entirely new.
  • A design methodology for quality software: the core of DbC.
  • Language constructs carefully woven into the fabric of the language. (There were precedents, but in the form of research languages such as Alphard, a paper design only that was never implemented, and Euclid.)
  • A documentation methodology.
  • Support for testing.
  • Support for a consistent theory of exception handling (see next).

Design by Contract is sometimes taken to mean simply the addition of a few assertions here and there. What the term actually denotes is a comprehensive methodology with all the above components, tightly integrated into the programming language. Note in particular that preconditions and postconditions are not sufficient; in an OO context class invariants are essential.
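As a small self-contained illustration (a sketch with invented names, not an excerpt from any actual library): the precondition states what clients must guarantee before calling a routine, the postcondition what the routine guarantees in return, and the class invariant a consistency condition that every exported feature must preserve.

   class GAUGE
         -- A value constrained to lie between 0 and 100.
   create
      make

   feature -- Initialization
      make (v: INTEGER)
            -- Set up with initial value `v'.
         require
            valid: is_valid (v)
         do
            value := v
         ensure
            set: value = v
         end

   feature -- Access
      value: INTEGER

      is_valid (v: INTEGER): BOOLEAN
            -- Is `v' an acceptable gauge value?
         do
            Result := v >= 0 and v <= 100
         ensure
            definition: Result = (v >= 0 and v <= 100)
         end

   feature -- Element change
      increase (delta: INTEGER)
            -- Raise the value by `delta'.
         require
            positive: delta > 0
            no_overflow: is_valid (value + delta)
         do
            value := value + delta
         ensure
            raised: value = old value + delta
         end

   invariant
      in_range: is_valid (value)
   end

The same assertions feed the other components listed above: they are what the contract view extracts for documentation, what AutoTest (discussed below) uses as test oracles, and what gives exceptions a precise meaning, a routine failing when it cannot fulfill its contract.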

Software design: exceptions

Prior to the Design by Contract work, exceptions were defined very vaguely, as something special you do outside of “normal” cases, but without defining “normal”. Design by Contract brings a proper perspective by defining these concepts precisely. This was explained in a 1987 article, Disciplined Exceptions ([86] in the list), rejected by ECOOP but circulated as a technical report; the ideas appear again in detail in OOSC1 (sections 7.10.3 to 7.10.5).

Other important foundational work on exceptions, to which I know no real precursor (as usual I would be happy to correct any omission), addressed what happens to the outcome of an exception in a concurrent or distributed context. This work was done at ETH, in particular in the PhD theses  of B. Morandi and A. Kolesnichenko, co-supervised with S. Nanz. See the co-authored papers [345] and [363].

On the verification aspect of exceptions, see below.

Software design: refactoring

I have never seen a discussion of refactoring that refers to the detailed discussion of generalization in both of the books Reusable Software (1994, chapter 3) and Object Success (Prentice Hall, 1995, from page 122 to the end of chapter 6). These discussions describe in detail how, once a program has been shown to work, it should be subject to a posteriori design improvements. They present several of the refactoring techniques (as they were called when the idea gained traction several years later), such as moving common elements up in the class hierarchy, and adding an abstract class as parent to concrete classes ex post facto.

These ideas are an integral part of the design methodology presented in these books (and again in OOSC2 a few years later). It is beyond me why people would present refactoring (or its history, as in the Wikipedia entry on the topic) without referring to these publications, which were widely circulated and are available for anyone to inspect.

Software design: built-in documentation and Single-Product principle

Another original contribution was the idea of including documentation in the code itself and relying on tools to extract the documentation-only information (leaving implementation elements aside). The idea, described in detail in OOSC1 in 1988 (sections 9.4 and 9.5) and already mentioned in the earlier Eiffel papers, is that code should be self-complete, containing elements of various levels of abstraction; some of them describe implementation, but the higher-level elements describe specification, and are distinguished syntactically in such a way that tools can extract them to produce documentation at any desired level of abstraction.

The ideas were later applied through such mechanisms as JavaDoc (with no credit as far as I know). They were present in Eiffel from the start, along with the underlying principles, in particular the “Single-Product principle” (sometimes called the “Self-Documentation principle”, and also generalized by J. Ostroff and R. Paige as the “Single-Model principle”). Eiffel is the best realization of these principles thanks to:

  • Contracts (as mentioned above): the “contract view” of a class (called “short form” in earlier descriptions) removes the implementations but shows the relevant preconditions, postconditions and class invariants, giving a precise and abstract specification of the class (see the sketch after this list).
  • Eiffel syntax has a special place for “header comments”, which describe high-level properties and remain in the contract view.
  • Eiffel library class documentation has always been based on specifications automatically extracted from the actual text of the classes, guaranteeing adequacy of the documentation. Several formats are supported (including, from 1995 on, HTML, so that documentation can be automatically deployed on the Web).
  • Starting with the EiffelCase tool in the early 90s, and today with the Diagram Tool of EiffelStudio, class structures (inheritance and client relationships) are displayed graphically, again in an automatically extracted form, using either the BON or UML conventions.
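As an illustration of the first of these points, here is, schematically, the contract view that the tools extract from a class such as the GAUGE sketch given in the Design by Contract section above: signatures, header comments and assertions remain; routine bodies and other implementation details are gone.

   class interface GAUGE

   create
      make

   feature -- Initialization
      make (v: INTEGER)
            -- Set up with initial value `v'.
         require
            valid: is_valid (v)
         ensure
            set: value = v

   feature -- Access
      value: INTEGER

      is_valid (v: INTEGER): BOOLEAN
            -- Is `v' an acceptable gauge value?
         ensure
            definition: Result = (v >= 0 and v <= 100)

   feature -- Element change
      increase (delta: INTEGER)
            -- Raise the value by `delta'.
         require
            positive: delta > 0
            no_overflow: is_valid (value + delta)
         ensure
            raised: value = old value + delta

   invariant
      in_range: is_valid (value)

   end

This text is not written by hand; it is generated on demand from the class itself, so it cannot drift away from it (the “Dorian Gray” issue discussed next).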

One of the core benefits of the Single-Product principle is to guard against what some of my publications called the “Dorian Gray” syndrome: divergence of an implementation from its description, a critical problem in software because of the ease of modifying stuff. Having the documentation as an integral part of the code helps ensure that when information at some level of abstraction (specification, design, implementation) changes, the other levels will be updated as well.

Crucial in the approach is the “roundtripping” requirement: specifiers or implementers can make changes in any of the views, and have them reflected automatically in the other views. For example, you can graphically draw an arrow between two bubbles representing classes B and A in the Diagram Tool, and the code of B will be updated with “inherit A”; or you can add this Inheritance clause textually in the code of class B, and the diagram will be automatically updated with an arrow.

It is important to note how contrarian and subversive these ideas were at the time of their introduction (and still to some extent today). The wisdom was that you do requirements then design then implementation, and that code is a lowly product entirely separate from specification and documentation. Model-Driven Development perpetuates this idea (you are not supposed to modify the code, and if you do there is generally no easy way to propagate the change to the model.) Rehabilitating the code (a precursor idea to agile methods, see below) was a complete change of perspective.

I am aware of no precedent for this Single Product approach. The closest earlier ideas I can think of are in Knuth’s introduction of Literate Programming in the early eighties (with a book in 1984). As in the Single-product approach, documentation is interspersed with code. But the literate programming approach is (as presented) top-down, with English-like explanations progressively being extended with implementation elements. The Single Product approach emphasizes the primacy of code and, in terms of the design process, is very much yoyo, alternating top-down (from the specification to the implementation) and bottom-up (from the implementation to the abstraction) steps. In addition, a large part of the documentation, and often the most important one, is not informal English but formal assertions. I knew about Literate Programming, of course, and learned from it, but Single-Product is something else.

Software design: from patterns to components

Karine Arnout’s thesis at ETH Zurich, resulting in two co-authored articles ([255] and [257]), showed that contrary to conventional wisdom a good proportion of the classical design patterns, including some of the most sophisticated, can be transformed into reusable components (indeed part of an Eiffel library). The agent mechanism (see below) was instrumental in achieving that result.

Programming, design and specification concepts: abstract data types

Liskov’s and Zilles’s ground-breaking 1974 abstract data types paper presented the concepts without a mathematical specification, using programming language constructs instead. A 1976 paper (number [3] in my publication list, La Description des Structures de Données, i.e. the description of data structures) was as far as I know one of the first to present a mathematical formalism, as  used today in presentations of ADTs. John Guttag was taking a similar approach in his PhD thesis at about the same time, and went further in providing a sound mathematical foundation, introducing in particular (in a 1978 paper with Jim Horning) the notion of sufficient completeness, to which I devoted a full article in this blog  (Are My Requirements Complete?) about a year ago. My own article was published in a not very well known journal and in French, so I don’t think it had much direct influence. (My later books reused some of the material.)

The three-level description approach of that article (later presented in English for an ACM workshop in the US in 1981, Pingree Park, reference [28]) is not well known but still applicable, and would be useful to avoid frequent confusions between ADT specifications and more explicit descriptions.

When I wrote my 1976 paper, I was not aware of Guttag’s ongoing work (only of the Liskov and Zilles paper), so the use of a mathematical framework with functions and predicates on them was devised independently. (I remember being quite happy when I saw what the axioms should be for a queue.) Guttag and I both gave talks at a workshop organized by the French programming language interest group in 1977 and it was fun to see that our presentations were almost identical. I think my paper still reads well today (well, if you read French). Whether or not it exerted direct influence, I am proud that it independently introduced the modern way of thinking of abstract data types as characterized by mathematical functions and their formal (predicate calculus) properties.
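For illustration, here is the kind of characterization involved, written in today’s style (a sketch, not the exact notation of the 1976 paper), for first-in-first-out queues of elements of a type G:

\[
\begin{aligned}
&\textbf{Functions:}\\
&\quad \mathit{new}\colon\ \rightarrow \mathit{QUEUE}[G] \qquad
 \mathit{put}\colon\ \mathit{QUEUE}[G] \times G \rightarrow \mathit{QUEUE}[G]\\
&\quad \mathit{item}\colon\ \mathit{QUEUE}[G] \nrightarrow G \qquad
 \mathit{remove}\colon\ \mathit{QUEUE}[G] \nrightarrow \mathit{QUEUE}[G] \qquad
 \mathit{empty}\colon\ \mathit{QUEUE}[G] \rightarrow \mathit{BOOLEAN}\\[4pt]
&\textbf{Preconditions:}\quad \mathit{item}(q) \text{ and } \mathit{remove}(q) \text{ are defined only if } \lnot\,\mathit{empty}(q)\\[4pt]
&\textbf{Axioms (for all } q\colon \mathit{QUEUE}[G],\ x\colon G\textbf{):}\\
&\quad \mathit{empty}(\mathit{new}) \qquad \lnot\,\mathit{empty}(\mathit{put}(q, x))\\
&\quad \mathit{item}(\mathit{put}(q, x)) = \textbf{if } \mathit{empty}(q) \textbf{ then } x \textbf{ else } \mathit{item}(q)\\
&\quad \mathit{remove}(\mathit{put}(q, x)) = \textbf{if } \mathit{empty}(q) \textbf{ then } \mathit{new} \textbf{ else } \mathit{put}(\mathit{remove}(q), x)
\end{aligned}
\]

The crossed arrows mark partial functions, whose preconditions are given separately; the last two axioms are what make this a FIFO queue rather than, say, a stack.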

Language mechanisms: genericity with inheritance

Every once in a while I get to referee a paper that starts “Generics, as introduced in Java…” Well, let’s get some perspective here. Eiffel from its introduction in 1985 combined genericity and inheritance. Initially, C++ users and designers claimed that genericity was not needed in an OO context and the language did not have it; then they introduced templates. Initially, the designers of Java claimed (around 1995) that genericity was not needed, and the language did not have it; a few years later Java got generics. Initially, the designers of C# (around 1999) claimed that genericity was not needed, and the language did not have it; a few years later C# and .NET got generics.

Genericity existed before Eiffel of course; what was new was the combination with inheritance. I had been influenced by work on generic modules by a French researcher, Didier Bert, which I believe influenced the design of Ada as well; Ada was the language that brought genericity to a much broader audience than the somewhat confidential languages that had such a mechanism before. But Ada was not object-oriented (it only had modules, not classes). I was passionate about object-oriented programming (at a time when it was generally considered, by the few people who had heard of it, as an esoteric, academic pursuit). I started — in the context of an advanced course I was teaching at UC Santa Barbara — an investigation of how the two mechanisms relate to each other. The results were a paper at the first OOPSLA in 1986, Genericity versus Inheritance, and the design of the Eiffel type system, with a class mechanism, inheritance (single and multiple), and genericity, carefully crafted to complement each other.

With the exception of Trellis-Owl, a design from Digital Equipment Corporation also presented at the same OOPSLA (which never gained significant usage), there were no other OO languages with both mechanisms for several years after the Genericity versus Inheritance paper and the implementation of genericity with inheritance in Eiffel, available from 1986 on. Eiffel also introduced, as far as I know, the concept of constrained genericity, the second basic mechanism for combining genericity with inheritance, described in Eiffel: The Language (Prentice Hall, 1992, section 10.8) and discussed again in OOSC2 (section 16.4 and throughout). Similar mechanisms are present in many languages today.
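A minimal sketch of the two mechanisms (class and feature names invented for the example):

   class STACK [G]
         -- Unconstrained genericity: G can be any type.
   feature
      put (x: G)
            -- Push `x' on top. (Implementation omitted in this sketch.)
         do
         end
   end

   class SORTED_LIST [G -> COMPARABLE]
         -- Constrained genericity: an actual generic parameter
         -- must conform to COMPARABLE, so items can be compared.
   feature
      sorted_before (a, b: G): BOOLEAN
            -- Should `a' appear before `b'?
         do
            Result := a < b
         end
   end

The combination with inheritance is what gives these mechanisms their power: a STACK [ACCOUNT] may at run time hold instances of any descendant of ACCOUNT, and SORTED_LIST accepts INTEGER, STRING or any other descendant of COMPARABLE as its actual generic parameter.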

It was not always so. I distinctly remember people bringing their friends to our booth at some conference in the early nineties, for the sole purpose of having a good laugh with them at our poster advertising genericity with inheritance. (“What is this thing they have and no one else does? Generi-sissy-tee? Hahaha.”). A few years later, proponents of Java were pontificating that no serious language needs generics.

It is undoubtedly part of the cycle of invention (there is a Schopenhauer citation on this, actually the only thing from Schopenhauer’s philosophy that I ever understood [D]) that people at some point will laugh at you; if it did brighten their day, why would the inventor deny them one of the little pleasures of life? But in terms of who laughs last, along the way C++ got templates, Java got generics, C# finally did too, and nowadays all typed OO languages have something of the sort.

Language mechanisms: multiple inheritance

Some readers will probably have been told that multiple inheritance is a bad thing, and hence will not count it as a contribution, but if done properly it provides a major abstraction mechanism, useful in many circumstances. Eiffel showed how to do multiple inheritance right by clearly distinguishing between features (operations) and their names, defining a class as a finite mapping between names and features, and using renaming to resolve any name clashes.
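A hypothetical sketch (the classes are invented for the example) of how renaming removes a clash:

   class TEACHER
   feature
      id: INTEGER
            -- Staff identification number.
   end

   class STUDENT
   feature
      id: INTEGER
            -- Student identification number.
   end

   class TEACHING_ASSISTANT
   inherit
      TEACHER
         rename id as teacher_id end
      STUDENT
         rename id as student_id end
         -- Both inherited features survive under distinct names:
         -- clients see `teacher_id' and `student_id', with no ambiguity
         -- for type checking or dynamic binding.
   end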

Multiple inheritance was made possible by an implementation innovation: discovering a technique (widely imitated since, including in single-inheritance contexts) to implement dynamic binding in constant time. It was universally believed at the time that multiple inheritance had a strong impact on performance, because dynamic binding implied a run-time traversal of the class inheritance structure, already bad enough for single inheritance where the structure is a tree, but prohibitive with multiple inheritance for which it is a directed acyclic graph. From its very first implementation in 1986 Eiffel used what is today known as a virtual table technique which guarantees constant-time execution of routine (method) calls with dynamic binding.

Language mechanisms: safe GC through strong static typing

Simula 67 implementations did not have automatic garbage collection, and neither did implementations of C++. The official excuse in the C++ case was methodological: C programmers are used to exerting manual control of memory usage. But the real reason was a technical impossibility resulting from the design of the language: compatibility with C precludes the provision of a good GC.

More precisely, of a sound and complete GC. A GC is sound if it will only reclaim unreachable objects; it is complete if it will reclaim all unreachable objects. With a C-based language supporting casts (e.g. between integers and pointers) and pointer arithmetic, it is impossible to achieve soundness if we aim at a reasonable level of completeness: a pointer can masquerade as an integer, only to be cast back into a pointer later on, but in the meantime the garbage collector, not recognizing it as a pointer, may have wrongly reclaimed the corresponding object. Catastrophe.

It is only possible in such a language to have a conservative GC, meaning that it renounces completeness. A conservative GC will treat as a pointer any integer whose value could possibly be a pointer (because it lies between the bounds of the program’s data addresses in memory). Then, out of precaution, the GC will refrain from reclaiming the objects at these addresses even if they appear unreachable.

This approach makes the GC sound but it is only a heuristic, and it inevitably loses completeness: every once in a while it will fail to reclaim some dead (unreachable) objects. The result is a program with memory leaks — usually unacceptable in practice, particularly for long-running or continuously running programs in which the leaks inexorably accumulate until the program starts thrashing and then runs out of memory.

Smalltalk, like Lisp, made garbage collection possible, but was not a typed language and missed out on the performance benefits of treating simple values like integers as a non-OO language would. Although in this case I do not at the moment have a specific bibliographic reference, I believe that it is in the context of Eiffel that the close connection between strong static typing (avoiding mechanisms such as casts and pointer arithmetic) and the possibility of sound and complete garbage collection was first clearly explained. Explained in particular around 1990 in a meeting with some of the future designers of Java, which uses a similar approach, also taken over later on by C#.

By the way, no one will laugh at you today for considering garbage collection as a kind of basic human right for programmers, but for a long time the very idea was quite sulfurous, and advocating it subjected you to a lot of scorn. Here is an extract of the review I got when I submitted the first Eiffel paper to IEEE Transactions on Software Engineering:

Systems that do automatic garbage collection and prevent the designer from doing his own memory management are not good systems for industrial-strength software engineering.

Famous last words. Another gem from another reviewer of the same paper:

I think time will show that inheritance (section 1.5.3) is a terrible idea.

Wow! I wish the anonymous reviewers would tell us what they think today. Needless to say, the paper was summarily rejected. (It later appeared in the Journal of Systems and Software — as [82] in the publication list — thanks to the enlightened views of Robert Glass, the founding editor.)

Language mechanisms: void safety

Void safety is a property of a language design that guarantees the absence of the plague of null pointer dereferencing.

The original idea came (as far as I know) from work at Microsoft Research that led to the design of a research language called C-omega; the techniques were not transferred to a full-fledged programming language. Benefiting from the existence of this proof of concept, the Eiffel design was reworked to guarantee void safety, starting from my 2005 ECOOP keynote paper (Attached Types) and reaching full type safety a few years later. This property of the language was mechanically proved in a 2016 ETH thesis by A. Kogtenkov.

Today all significant Eiffel development produces void-safe code. As far as I know this was a first among production programming languages and Eiffel remains the only production language to provide a guarantee of full void-safety.

This mechanism, carefully crafted (hint: the difficult part is initialization), is among those of which I am proudest, because in the rest of the programming world null pointer dereferencing is a major plague, threatening at any moment to crash the execution of any program that uses pointers or references. For Eiffel users it is gone.
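In today’s Eiffel the mechanism rests on the distinction between attached types (the default: never void) and detachable types, the compiler accepting a call only on an expression it can prove attached. A small sketch with invented names:

   class GREETER
   feature
      nickname: detachable STRING
            -- Optional nickname; the type states explicitly that it may be Void.

      greeting (full_name: STRING): STRING
            -- Greeting based on `nickname' if present, on `full_name' otherwise.
            -- `full_name' has an attached type: it can never be Void,
            -- so using it requires no run-time check.
         do
            if attached nickname as n then
                  -- In this branch `n' is statically guaranteed to be non-void,
                  -- so the call below cannot cause a null-pointer failure.
               Result := "Hi, " + n
            else
               Result := "Hello, " + full_name
            end
         end
   end

Code that applies a feature to nickname directly, without such an “object test”, is simply rejected at compile time.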

Language mechanisms: agents/delegates/lambdas

For a long time, OO programming languages did not have a mechanism for defining objects wrapping individual operations. Eiffel’s agent facility was the first such mechanism, or among the very first together with the roughly contemporaneous but initially much more limited delegates of C#. The 1999 paper From calls to agents (with P. Dubois, M. Howard, M. Schweitzer and E. Stapf, [196] in the list) was as far as I know the first description of such a construct in the scientific literature.
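A sketch of the idea on an invented publish-subscribe example (the agent type declarations follow recent Eiffel conventions and have varied across language versions):

   class SENSOR
   create
      make

   feature {NONE} -- Initialization
      make
         do
            create subscribers.make (0)
         end

   feature
      subscribers: ARRAYED_LIST [PROCEDURE [INTEGER]]
            -- Routines, wrapped as objects, to be called on every new reading.

      subscribe (action: PROCEDURE [INTEGER])
            -- Register `action'.
         do
            subscribers.extend (action)
         end

      report (reading: INTEGER)
            -- Notify every subscriber of `reading'.
         local
            i: INTEGER
         do
            from i := 1 until i > subscribers.count loop
               subscribers.i_th (i).call ([reading])
               i := i + 1
            end
         end
   end

A client with a procedure display (n: INTEGER) of its own can then write sensor.subscribe (agent display), or wrap a routine of another object as in sensor.subscribe (agent logger.record): the routine becomes an object, to be called later through call with a tuple of arguments.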

Language mechanisms: concurrency

The 1993 Communications of the ACM paper on Systematic Concurrent Object-Oriented Programming [136] was certainly not the first concurrency proposal for OO programming (there had been pioneering work reported in particular in the 1987 book edited by Tokoro and Yonezawa), but it innovated in offering a completely data-race-free model, still a rarity today (think for example of the multi-threading mechanisms of dominant OO languages).

SCOOP, as it came to be called, was implemented a few years later and is today a standard part of Eiffel.
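The central language element is the separate type qualifier. A sketch, assuming an ACCOUNT class with balance, withdraw and deposit features:

   class TRANSFER_AGENT
   feature
      transfer (source, target: separate ACCOUNT; amount: INTEGER)
            -- Move `amount' from `source' to `target'.
            -- `source' and `target' may be handled by other processors; because
            -- they are separate formal arguments, the routine has exclusive
            -- access to both objects for the duration of the call, so no data
            -- race on them is possible.
         require
            enough_funds: source.balance >= amount
               -- On a separate target a precondition acts as a wait condition:
               -- the body executes only once it holds.
         do
            source.withdraw (amount)
            target.deposit (amount)
         end
   end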

Language mechanisms: selective exports

Information hiding, as introduced by Parnas in his two seminal 1972 articles, distinguishes between public and secret features of a module. The first OO programming language, Simula 67, had only these two possibilities for classes and so did Ada for modules.

In building libraries of reusable components I realized early on that we need a more fine-grained mechanism. For example if class LINKED_LIST uses an auxiliary class LINKABLE to represent individual cells of a linked list (each with a value field and a “right” field containing a reference to another LINKABLE), the features of LINKABLE (such as the operation to reattach the “right” field) should not be secret, since LINKED_LIST needs them; but they should also not be generally public, since we do not want arbitrary client objects to mess around with the internal structure of the list. They should be exported selectively to LINKED_LIST only. The Eiffel syntax is simple: declare these operations in a clause of the class labeled “feature {LINKED_LIST}”.

This mechanism, known as selective exports, was introduced around 1989 (it is specified in full in Eiffel: The Language, from 1992, but was in the Eiffel manuals earlier). I think it predated the C++ “friends” mechanism which serves a similar purpose (maybe someone with knowledge of the history of C++ has the exact date). Selective exports are more general than the friends facility and similar ones in other OO languages: specifying a class as a friend means it has access to all your internals. This solution is too coarse-grained. Eiffel’s selective exports make it possible to define the specific export rights of individual operations (including attributes/fields) individually.
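In sketch form, along the lines of the LINKED_LIST example just described (simplified; the actual library classes differ in details):

   class LINKABLE [G]
   create
      make

   feature {NONE} -- Initialization
      make (v: G)
         do
            item := v
         end

   feature -- Access (available to all clients)
      item: G
            -- Value held by this cell.

   feature {LINKED_LIST} -- Available to LINKED_LIST (and its descendants) only
      right: detachable LINKABLE [G]
            -- Next cell, if any.

      put_right (other: detachable LINKABLE [G])
            -- Chain `other' after this cell.
         do
            right := other
         ensure
            chained: right = other
         end
   end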

Language mechanisms and implementation: serialization and schema evolution

I did not invent serialization. As a student at Stanford in 1974 I had the privilege, at the AI lab, of using SAIL (Stanford Artificial Intelligence Language). SAIL was not object-oriented but included many innovative ideas; it was far ahead of its time, especially in terms of the integration of the language with (what was not yet called) its IDE. One feature of SAIL with which one could fall in love at first sight was the possibility of selecting an object and having its full dependent data structure (the entire subgraph of the object graph reached by following references from the object, recursively) stored into a file, for retrieval at the next session. After that, I never wanted again to live without such a facility, but no other language and environment had it.

Serialization was almost the first thing we implemented for Eiffel: the ability to write object.store (file) to have the entire structure from object stored into file, and the corresponding retrieval operation. OOSC1 (section 15.5) presents these mechanisms. Simula and (I think) C++ did not have anything of the sort; I am not sure about Smalltalk. Later on, of course, serialization mechanisms became a frequent component of OO environments.

Eiffel remained innovative by tackling the difficult problems: what happens when you try to retrieve an object structure and some classes have changed? Only with a coherent theoretical framework as provided in Eiffel by Design by Contract can one devise a meaningful solution. The problem and our solutions are described in detail in OOSC2 (the whole of chapter 31, particularly the section entitled “Schema evolution”). Further advances were made by Marco Piccioni in his PhD thesis at ETH and published in joint papers with him and M. Oriol, particularly [352].

Software engineering: primacy of code

Agile methods are widely and properly lauded for emphasizing the central role of code, against designs and other non-executable artifacts. By reading the agile literature you might be forgiven for believing that no one brought up that point before.

Object Success (1995) makes the argument very clearly. For example, chapter 3, page 43:

Code is to our industry what bread is to a baker and books to a writer. But with the waterfall code only appears late in the process; for a manager this is an unacceptable risk factor. Anyone with practical experience in software development knows how many things can go wrong once you get down to code: a brilliant design idea whose implementation turns out to require tens of megabytes of space or minutes of response time; beautiful bubbles and arrows that cannot be implemented; an operating system update, crucial to the project which comes five weeks late; an obscure bug that takes ages to be fixed. Unless you start coding early in the process, you will not be able to control your project.

Such discourse was subversive at the time; the wisdom in software engineering was that you need to specify and design a system to death before you even start coding (otherwise you are just a messy “hacker” in the sense this word had at the time). No one else in respectable software engineering circles was, as far as I know, pushing for putting code at the center, the way the above extract does.

Several years later, agile authors started making similar arguments, but I don’t know why they never referenced this earlier exposition, which still today I find not too bad. (Maybe they decided it was more effective to have a foil, the scorned Waterfall, and to claim that everyone else before was downplaying the importance of code, but that was not in fact everyone.)

Just to be clear, Agile brought many important ideas that my publications did not anticipate; but this particular one I did.

Software engineering: the roles of managers

Extreme Programming and Scrum have brought new light on the role of managers in software development. Their contributions have been important and influential, but here too they were for a significant part prefigured by a long discussion, altogether two chapters, in Object Success (1995).

To realize this, it is enough to read the titles of some of the sections in those chapters, describing roles for managers (some universal, some for a technical manager): “risk manager”, “interface with the rest of the world” (very scrummy!), “protector of the team’s sanity”, “method enforcer” (think Scrum Master), “mentor and critic”. Again, as far as I know, these were original thoughts at the time; the software engineering literature for the most part did not talk about these issues.

Software engineering: outsourcing

As far as I know the 2006 paper Offshore Development: The Unspoken Revolution in Software Engineering was the first to draw attention, in the software engineering community, to the peculiar software engineering challenges of distributed and outsourced development.

Software engineering: automatic testing

The AutoTest project (with many publications, involving I. Ciupa, A. Leitner, Y. Wei, M. Oriol, Y. Pei, M. Nordio and others) was not the first to generate tests automatically by creating numerous instances of objects and calling applicable operations (it was preceded by Korat at MIT), but it was the first one to apply this concept with Design by Contract mechanisms (without which it is of little practical value, since one must still produce test oracles manually) and the first to be integrated in a production environment (EiffelStudio).

Software engineering: make-less system building

One of the very first decisions in the design of Eiffel was to get rid of Make files.

Feldman’s Make had of course been a great innovation. Before Make, programmers had to produce executable systems manually by executing sequences of commands to compile and link the various source components. Make enabled them instead to define dependencies between components in a declarative way, resulting in a partial order, and then performed a topological sort to produce the sequence of commands. But preparing the list of dependencies remains a tedious task, particularly error-prone for large systems.

I decided right away in the design of Eiffel that we would never force programmers to write such dependencies: they would be automatically extracted from the code, through an exhaustive analysis of the dependencies between modules. This idea was present from the very first Eiffel report in 1985 (reference [55] in the publication list): Eiffel programmers never need to write a Make file or equivalent (other than for non-Eiffel code, e.g. C or C++, that they want to integrate); they just click a Compile button and the compiler figures out the steps.

Behind this approach was a detailed theoretical analysis of possible relations between modules in software development (in many programming languages), published as the “Software Knowledge Base” at ICSE in 1985. That analysis was also quite instructive and I would like to return to this work and expand it.

Educational techniques: objects first

Towards an Object-Oriented Curriculum (TOOLS conference, August 1993; see also the shorter JOOP paper in May of the same year) makes a carefully argued case for what was later called the Objects First approach to teaching programming. I would be interested to know if there are earlier publications advocating starting programming education with an OO language.

The article also advocated for the “inverted curriculum”, a term borrowed from work by Bernie Cohen about teaching electrical engineering. It was the first transposition of this concept to software education. In the article’s approach, students are given program components to use, then little by little discover how they are made. This technique met with some skepticism and resistance since the standard approach was to start from the very basics (write trivial programs), then move up. Today, of course, many introductory programming courses similarly provide students from day one with a full-fledged set of components enabling them to produce significant programs.

More recent articles on similar topics, taking advantage of actual teaching experience, are The Outside-In Method of Teaching Programming (2003) and The Inverted Curriculum in Practice (at ICSE 2006, with Michela Pedroni). The culmination of that experience is the textbook Touch of Class from 2009.

Educational techniques: Distributed Software Projects

I believe our team at ETH Zurich (including among others M. Nordio, J. Tschannen, P. Kolb and C. Estler and in collaboration with C. Ghezzi, E. Di Nitto and G. Tamburrelli at Politecnico di Milano, N. Aguirre at Rio Cuarto and many others in various universities) was the first to devise,  practice and document on a large scale (see publications and other details here) the idea of an educational software project conducted in common by student groups from different universities. It yielded a wealth of information on distributed software development and educational issues.

Educational techniques: Web-based programming exercises

There are today a number of cloud-based environments supporting the teaching of programming by enabling students to compile and test their programs on the Web, benefiting from a prepared environment (so that they don’t have to download any tools or prepare control files) and providing feedback. One of the first — I am not sure about absolute precedence — and still a leading one, used by many universities and applicable to many programming languages, is Codeboard.

The main developer, in my chair at ETH Zurich, was Christian Estler, supported in particular by M. Nordio and M. Piccioni, so I am only claiming a supporting role here.

Educational techniques: key CS/SE concepts

The 2001 paper Software Engineering in the Academy did a good job, I think, of defining the essential concepts to teach in a proper curriculum (part of what Jeannette Wing’s 2006 paper called Computational Thinking).

Program verification: agents (delegates etc.)

Reasoning about Function Objects (ICSE 2010, with M. Nordio, P. Müller and J. Tschannen) introduced verification techniques for objects representing functions (such as agents, delegates etc., see above) in an OO language. Not sure whether there were any such techniques before.

Specification languages: Z

The Z specification language has been widely used for formal development, particularly in the UK. It is the design of J-R Abrial. I may point out that I was a coauthor of the first publication on Z in English (1980),  describing a version that preceded the adaptation to a more graphical-style notation done later at Oxford. The first ever published description of Z, pertaining to an even earlier version, was in French, in my book Méthodes de Programmation (with C. Baudoin), Eyrolles, 1978, running over 15 pages (526-541), with the precise description of a refinement process.

Program verification: exceptions

Largely coming out of the PhD thesis of Martin Nordio, A Sound and Complete Program Logic for Eiffel (TOOLS 2009) introduces rules for dealing with exceptions in a Hoare-style verification framework.

Program verification: full library, and AutoProof

Nadia Polikarpova’s thesis at ETH, aided by the work of Carlo Furia and Julian Tschannen (they were the major contributors and my participation was less important), was as far as I know the first to produce a full functional verification of an actual production-quality reusable library. The library is EiffelBase 2, covering fundamental data structures.

This verification relied on the AutoProof prover — available today, as a still experimental tool, through its Web interface, see here — built by the same team and itself based on Microsoft Research’s Boogie and Z3 engines.

More

There are more concepts worthy of being included, but I will stop here for today.

Notes

[A] One point of divergence between usual presentations of the substitution principle and the view in OOSC and my other publications is the covariance versus contravariance of routine argument types. It reflects a difference of views as to what the proper policy (both mathematically sound and practically usable) should be.

[B]  The GoF book does not cite OOSC for the command or bridge patterns. For the command pattern it cites (thanks to Adam Kosmaczewski for digging up the GoF text!) a 1985 SIGGRAPH paper by Henry Lieberman (There’s More to Menu Systems than Meets the Screen). Lieberman’s paper describes the notion of command object and mentions undoing in passing, but does not include the key elements of the command pattern (as explained in full in OOSC1), i.e. an abstract (deferred) command class with deferred procedures called (say) do_it and undo_it, then specific classes for each kind of command, each providing a specific implementation of those procedures, then a history list of commands supporting multiple-level undo and redo as explained in OOSC1. (Reading Lieberman’s paper with a 2021 perspective shows that it came tantalizingly close to the command pattern, but doesn’t get to it. The paper does talk about inheritance between command classes, but only to “define new commands as extensions to old commands”, not in the sense of a general template that can be implemented in many specific ways. And it does mention a list of objects kept around to enable recovery from accidental deletions, and states that the application can control its length, as is the case with a history list; but the objects in the list are not command objects, they are graphical and other objects that have been deleted.)

[C] Additional note on the command pattern: I vaguely remember seeing something similar to the OOSC1 technique in an article from a supplementary volume of the OOPSLA proceedings in the late eighties or early nineties, i.e. at the same time or slightly later, possibly from authors from Xerox PARC, but I have lost the reference.

[D] Correction: I just checked the source and learned that the actual Schopenhauer quote (as opposed to the one that is usually quoted) is different; it does not include the part about laughing. So much for my attempts at understanding philosophy.

 


Time to resurrect PSP?

Let us assume for the sake of the argument that software quality matters. There are many ingredients to software quality, of which one must be the care that every programmer devotes to the job. The Personal Software Process, developed by Watts Humphrey in the 1990s [1], prescribes a discipline that software developers should apply to produce good software and improve their professional ability over their careers. It has enjoyed moderate success but was never a mass movement and rarely gets mentioned nowadays; few software developers, in my experience, even know the name. Those who do often think of it as passé, a touching memory from the era of Monica Lewinsky and the Roseanne show.

Once cleaned of a few obsolete elements, PSP deserves to be known and applied.

PSP came out of Watts Humphrey’s earlier work on the Capability Maturity Model (see my earlier article on this blog, What is wrong with CMMI), a collection of recommended practices and assessment criteria for software processes, originally developed in the mid-eighties for the U.S. military contractor community but soon thereafter embraced by software outsourcing companies (initially, Indian ones) and later by other industries. Responding to complaints that CMM/CMMI, focused on processes in large companies, ignored the needs of smaller ones, and lacked individual guidance for developers, Humphrey developed TSP, the Team Software Process, and PSP.

The most visible part of PSP is a six-step process pictured in the middle of this diagram:
[Figure: the six-step PSP process, with its associated logs]

The most visible and also the most corny. Who today wants to promise always to follow such a strict sequence of steps? Always to write the code for a module in full before compiling it? (Notice there is no backward arrow; the process is sequential.) Always to test at the end only? Come on. This is the third decade of the 21st century.

Today we compile as we code, using the development environment (IDE) as a brilliant tool to check everything we do or plan to do. For my part, whenever I am writing code and have not compiled my current draft for more than a few minutes I start feeling like an addict in need of a fix; my fix is the Compile button of EiffelStudio. At some point the compiler becomes a tool to generate executable code, but long before that it has been my friend, coach, mentor, and doppelgänger, helping me get things (types, null references, inheritance…) right and gently chiding me when I wander off the rails.

As to tests, even if you do not buy into the full dogma of Test-Driven Development (I don’t), they get written and exercised right from the start, as you are writing the code, not afterwards. Compile all the time, test all the time.

It’s not just that a process such as the above ignores the contributions of agile methods, which are largely posterior to PSP. As analyzed in [2], agile is a curious mix of good ideas and a few horrendous ones. But among its durable contributions is the realization that development must be incremental, not a strict succession of separate activities.

This old-style flavor of PSP is probably the reason why it has fallen out of favor. But (like the agile rejection of upfront lifecycle activities) such a reaction is a case of criticism gone too far, ignoring the truly beneficial contributions. Ignore PSP’s outmoded sequence of activities and you will find that PSP’s core message is as relevant today as it ever was. That message is: we should learn from the practices of traditional engineers and apply a strict professional discipline. For example:

  • Keep a log of all activities. (See “Logs” in the above figure.) Engineers are taught to record everything they do; many programmers don’t bother. This practice, however, is essential to self-improvement.
  • Keep measurements of everything you do. (There are lots of things to measure, from hours spent on every kind of task to bugs found, time to fix them, etc.; a sketch of such a log appears after this list.)
  • Estimate and plan your work.
  • Clearly define commitments, and meet them.
  • Resist pressure to make unreasonable commitments (something that agile approaches also emphasize).
  • Understand your current performance.
  • Understand your programming style and how it affects various measures. (As an example, code size, as a function of the number of routines, depends on whether you are more concise or more verbose in style).
  • Continually improve your expertise as a professional.
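As a toy illustration of the first two points (the structure and names below are invented for this sketch; PSP itself prescribes specific forms and logs), a personal activity log can start out as simple as this:

from collections import defaultdict
from dataclasses import dataclass, field

@dataclass
class LogEntry:
    task: str              # e.g. "design", "code", "test"
    minutes: int           # time spent on the task
    defects_found: int = 0
    defects_fixed: int = 0

@dataclass
class PersonalLog:
    entries: list = field(default_factory=list)

    def record(self, entry: LogEntry) -> None:
        self.entries.append(entry)

    def minutes_by_task(self) -> dict:
        # Basis for understanding where the time actually goes.
        totals = defaultdict(int)
        for e in self.entries:
            totals[e.task] += e.minutes
        return dict(totals)

    def total_defects_found(self) -> int:
        return sum(e.defects_found for e in self.entries)

log = PersonalLog()
log.record(LogEntry("design", 45))
log.record(LogEntry("code", 120, defects_found=3, defects_fixed=2))
print(log.minutes_by_task())        # {'design': 45, 'code': 120}
print(log.total_defects_found())    # 3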

PSP does not limit itself to such exhortations but gives concrete tools to apply the principles, with a view to: measuring, tracking and analyzing your work; learning from your performance variations; and incorporating the lessons learned into your professional practices. On the topic of measurement, for example, PSP includes precise guidelines on what to measure and how to measure it, and how to rely on proxies for quantities that are hard to assess directly. On this last point, PSP includes PROBE (PROxy-Based Estimating, you cannot have a method coming out of the world of US government organizations without cringeworthy acronyms), a general framework for estimating size and resource parameters from directly measurable proxies.
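As a rough sketch of the underlying idea (the real PROBE procedure is considerably more detailed; the names and numbers below are invented), one can fit a simple linear regression from historical proxy sizes to actual effort, then apply it to the proxy estimate of a new task:

# Illustrative sketch only: the core regression idea behind proxy-based
# estimation, not the official PROBE procedure.

def fit_linear(xs, ys):
    # Least-squares fit y = b0 + b1 * x from historical data.
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sxx = sum((x - mean_x) ** 2 for x in xs)
    b1 = sxy / sxx
    b0 = mean_y - b1 * mean_x
    return b0, b1

# Historical data: estimated proxy size (for example a weighted count of
# routines) versus actual effort in hours; the values are made up.
proxy_sizes = [12, 30, 18, 45, 25]
actual_hours = [10, 26, 15, 40, 22]

b0, b1 = fit_linear(proxy_sizes, actual_hours)
new_proxy_estimate = 28
predicted_hours = b0 + b1 * new_proxy_estimate
print(f"Predicted effort: {predicted_hours:.1f} hours")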

This is what PSP is about: a discipline of personal productivity and growth, emphasizing personal discipline, tracking and constant improvement. It is not hard to learn; a technical report by Humphrey available online [3] provides a sufficient basis to understand the concepts and start a process of self-improvement.

Watts Humphrey himself, as all who had the privilege to meet him can testify, was a model of seriousness and professionalism, the quintessential engineer. (I also remember him as the author of what may be the best pun I ever heard — ask me sometime.) PSP directly reflects these qualities and — ignoring its visible but in the end unimportant remnants from outdated technical choices — should be part of every software engineering curriculum and every software engineer’s collection of fundamental practices.

References

[1] Watts Humphrey, Introduction to the Personal Software Process, Addison-Wesley, 1996.

[2] Bertrand Meyer: Agile! The Good, the Hype and the Ugly, Springer, 2014, see here.

[3] Watts Humphrey, The Personal Software Process, Software Engineering Institute Technical Report CMU/SEI-2000-TR-022, available (in PDF, free) here.

 

(A version of this article was first published in the Communications of the ACM blog.)



This Wednesday in Nice: survey talk on the Eiffel method

The “Morgenstern Colloquium” at the University of Nice / INRIA Sophia Antipolis invited me to give a talk, next Wednesday (18 December) at 11 in Sophia Antipolis, in the aptly named* “Kahn Building”. The announcement appears here. I proposed various topics but (pleasant surprise) the organizers explicitly asked me to lecture about what I really want to talk about: the Eiffel approach. I will give a general presentation describing not specifically the language but the unified view of software construction embodied in Eiffel, from modeling to requirements to design, implementation and verification. Here is the abstract:

With society’s growing reliance on IT systems, the ability to write high-quality software is ever more critical. While a posteriori verification techniques have their role, there is no substitute for methods and tools that provide built-in quality (“correctness by construction”) and scale up to very large systems. For several decades my colleagues and I have been building such a method, based in particular on the concept of Design by Contract, the associated tools and the supporting language, Eiffel. The scope is wide, encompassing all aspects of the software development process, from requirements and design to implementation and verification. I will present an overview of the approach, show what it can yield, and discuss remaining open issues.

This talk is meant for everyone, whether from industry or academia, with an interest in practical techniques for engineering high-quality software.

No registration is required. The presentation will be in English.

Note

*Gilles Kahn, a brilliant computer scientist who died too young, was for a while director of INRIA.


New paper: making sense of agile methods

Bertrand Meyer: Making Sense of Agile Methods, in IEEE Software, vol. 35, no. 2, March 2018, pages 91-94. IEEE article page here (may require membership or purchase). Draft available here.

An assessment of agile methods, based on my book Agile! The Good, the Hype and the Ugly. It discusses, beyond the hype, the benefits and dangers of agile principles and practices, focusing on concrete examples of what helps and what hurts.


The end of software engineering and the last methodologist

(Reposted from the CACM blog [*].)

Software engineering was never a popular subject. It started out as “programming methodology”, evoking the image of bearded middle-aged men telling you with a Dutch, Swiss-German or Oxford accent to repent and mend your ways. Consumed (to paraphrase Mark Twain) by the haunting fear that someone, somewhere, might actually enjoy coding.

That was long ago. With a few exceptions including one mentioned below, to the extent that anyone still studies programming methodology, it’s in the agile world, where the decisive argument is often “I always say…”. (Example from a consultant’s page: “I always tell teams: ‘I’d like a [user] story to be small, to fit in one iteration but that isn’t always the way.’”) Dijkstra did appeal to gut feeling but he backed it through strong conceptual arguments.

The field of software engineering, of which programming methodology is today just a small part, has enormously expanded in both depth and breadth. Conferences such as ICSE and ESEC still attract a good crowd, the journals are buzzing, the researchers are as enthusiastic as ever about their work, but… am I the only one to sense frustration? It is not clear that anyone outside of the community is interested. The world seems to view software engineering as something that everyone in IT knows because we all develop software or manage people who develop software. In the 2017 survey of CS faculty hiring in the U.S., software engineering accounted, in top-100 Ph.D.-granting universities, for 3% of hires! (In schools that stop at the master’s level, the figure is 6%; not insignificant, but not impressive either given that these institutions largely train future software engineers.) From an academic career perspective, the place to go is obviously “Artificial Intelligence, Data Mining, and Machine Learning”, which in those top-100 universities got 23% of hires.

Nothing against our AI colleagues; I always felt “AI winter” was an over-reaction [1], and they are entitled to their spring. Does it mean software engineering now has to go into a winter of its own? That is crazy. Software engineering is more important than ever. The recent Atlantic “software apocalypse” article (stronger on problems than solutions) is just the latest alarm-sounding survey. Or, for just one recent example, see the satellite loss in Russia [2] (juicy quote, which you can use the next time you teach a class about the challenges of software testing: “this revealed a hidden problem in the algorithm, which was not uncovered in decades of successful launches of the Soyuz-Frigate bundle”).

Such cases, by the way, illustrate what I would call the software professor’s dilemma, much more interesting in my opinion than the bizarre ethical brain-teasers (you see what I mean, trolley levers and the like) on which people in philosophy departments spend their days: is it ethical for a professor of software engineering, every morning upon waking up, to go to cnn.com in the hope that a major software-induced disaster has occurred, finally legitimizing the profession? The answer is simple: no, that is not ethical. Still, if you have witnessed the actual state of ordinary software development, it is scary to think about (although not to wish for) all the catastrophes-in-waiting that you suspect are lying out there, just waiting for the right circumstances.

So yes, software engineering is more relevant than ever, and so is programming methodology. (Personal disclosure: I think of myself as the very model of a modern methodologist [3], without a beard or a Dutch accent, but trying to carry, on today’s IT scene, the torch of the seminal work of the 1970s and 80s.)

What counts, though, is not what the world needs; it is what the world believes it needs. The world does not seem to think it needs much software engineering. Even when software causes a catastrophe, we see headlines for a day or two, and then nothing. Radio silence. I have argued to the point of nausea, including at least four times in this blog (five now), for a simple rule that would require a public auditing of any such event; to quote myself: “airline transportation did not become safer by accident but by accidents”. Such admonitions fall on deaf ears. As another sign of waning interest, many people including me learned much of what they understand of software engineering through the ACM Risks Forum, long a unique source of technical information on software troubles. The Forum still thrives, and still occasionally reports about software engineering issues, but most of the traffic is about privacy and security (with a particular fondness for libertarian rants against any reasonable privacy rule that the EU passes). Important topics indeed, but where do we go for in-depth information about what goes wrong with software?

Yet another case in point is the evolution of programming languages. Language creation is abuzz again with all kinds of fancy new entrants. I can think of one example (TypeScript) in which the driving force is a software engineering goal: making Web programs safer, more scalable and more manageable by bringing some discipline into the JavaScript world. But that is the exception. The arguments for many of the new languages tend to be how clever they are and what expressive new constructs they introduce. Great. We need new ideas. They would be even more convincing if they addressed the old, boring problems of software engineering: correctness, robustness, extendibility, reusability.

None of this makes software engineering less important, or diminishes in the least the passion of those of us who have devoted our careers to the field. But it is time to don our coats and hats: winter is upon us.

Notes

[1] AI was my first love, thanks to Jean-Claude Simon at Polytechnique/Paris VI and John McCarthy at Stanford.

[2] Thanks to Nikolay Shilov for alerting me to this information. The text is in Russian but running it through a Web translation engine (maybe this link will work) will give the essentials.

[3] This time borrowing a phrase from James Noble.

[*] I am reposting these CACM blog articles rather than just putting a link, even though as a software engineer I do not like copy-paste. This is my practice so far, and it might change since it raises obvious criticism, but here are the reasons: (A) The audiences for the two blogs are, as experience shows, largely disjoint. (B) I like this site to contain a record of all my blog articles, regardless of what happens to other sites. (C) I can use my preferred style conventions.


Devops (the concept, and a workshop announcement)

One of the most significant recent developments in software engineering is the concept of Devops*. Dismissing the idea as “just the latest buzzword” would be wrong. It may be a buzzword but it reflects a fundamental change in the way we structure system development; with web applications in particular the traditional distinctions between steps of development, V&V** and deployment fade out. If you are using Microsoft Word, you know or can easily find out the version number; but which version of your search engine are you using?

With the new flexibility indeed come new risks, as when a bug in the latest “devopsed”  version of Google Docs caused me to lose a whole set of complex diagrams irretrievably; an earlier article on this blog (“The Cloud and Its Risks“, October 2010) told the story.

In the new world of continuous integrated development/V&V/deployment, software engineering principles are more necessary than ever, but their application has to undergo a profound adaptation.

With Jean-Michel Bruel (Toulouse), Elisabetta Di Nitto (Milan) and Manuel Mazzara (Innopolis), we are organizing a workshop on the topic, DEVOPS 18, on 5-6 March 2018 near Toulouse. The Call for Papers is available here, with Springer LNCS proceedings. The submission deadline is January 15, but for that date a 2-page extended abstract is sufficient. I hope that the event will help the community get a better grasp of the software engineering techniques and practices applicable to this new world of software development.

Notes

*I know, it’s supposed to be DevOps (I am not a great fan of upper case in the middle of words).
** Validation & Verification.


New session of online Agile course starts now

Just about a year ago I posted this announcement about my just released Agile course:

In spite of all the interest in both agile methods and MOOCs (Massive Open Online Courses) there are few courses on agile methods; I know only of some specialized MOOCs focused on a particular language or method.

I produced for EdX, with the help of Marco Piccioni, a new MOOC entitled Agile Software Development. It starts airing today and is supported by exercises and quizzes. The course uses some of the material from my Agile book.

The course is running again! You can find it on EdX here.

Such online courses truly “run”: they are not just canned videos but include exercises and working material on which you can get feedback.

Like the book (“Agile! The Good, the Hype and the Ugly”, Springer), the course is a tutorial on agile methods, presenting an unbiased analysis of their benefits and limits.


Feature interactions, continued

Microsoft Office tools offer  features for (1) spelling correction and (2) multi-language support. They are not very good at working together, another example of the perils of feature interaction.

Spelling correction will by default flag a word when misspelled [image in the original post], but in the case of common misspellings known to the tools it will simply correct words without bothering the user. For example if you type “bagage” it will change it silently to “baggage”. This feature can be turned off, and the list of known misspellings can be edited, but most people use the defaults, as I am assuming here.

Multi-language support enables you to “install” several languages. Then when you type a text it will, after a few words, guess the relevant language. From then on it can apply the proper spell checks and corrections.

These features are both useful, and by and large they both work. (The second one is not always reliable. I regularly end up, particularly in Microsoft Word, with entire paragraphs underlined in red because for some inscrutable reason the tool assigns them the wrong language. In such cases you must tell it manually what language it should apply.)

Their combination can lead to funny results. Assume your default language is English but you also have French installed, and you are typing an email in French under Outlook. Your email will say “Le bagage est encore dans l’avion”, meaning “The baggage is still in the plane”. The word “baggage” has one more “g” in English than in French. You start typing  “Le bagage”, but because at that point the tool assumes English it corrects it silently:

[Image: “Le bagage” silently corrected to “Le baggage”]

Next you type “est encore”:

[Image: “est” flagged as a misspelling]

The  word “est” (is) gets flagged because (unlike “encore”, here meaning “still”) it does not exist in English. When you add the next word, “dans” (in), the tool is still assuming an English text, so it flags it too:

[Image: “dans” flagged as well]

Now you type “l” and when you add the apostrophe, you can almost hear a “silly me, I see now, that’s French!”. Outlook  switches languages and unflags the previously flagged words, removing the red squiggle under the ones that are correct in French:

[Image: the earlier flags removed after the switch to French]

But that is too late for “baggage”: the automatic respelling of “bagage”, coming from the default assumption that the text was in English, no longer makes sense now that we know it is in French. So the word gets flagged as a misspelling. You have to go back and correct it yourself. That is frustrating, since you typed the correct spelling in the first place (“bagage”), and it is the tool that messed it up.

This bug hits me often. It is indeed a bug, which can introduce misspellings into a text that the user typed correctly. When the tool recognizes that the text is in another language than the one assumed so far, and performs a second pass over the part already analyzed, it should reconsider not only the words it previously flagged as misspellings but also those it silently corrected. There is no justification for doing one and not the other.
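In pseudo-code terms (all names below are invented for illustration; this is not the interface of any actual spell checker), such a re-analysis pass could look like this:

def reanalyze(tokens, new_language, in_dictionary):
    # tokens: list of (typed, shown, flagged) triples, where `shown` may be
    # a silent auto-correction of `typed`; in_dictionary(word, lang) -> bool.
    result = []
    for typed, shown, flagged in tokens:
        # Undo a silent correction if what the user actually typed is valid
        # in the newly detected language ("bagage" in French).
        if shown != typed and in_dictionary(typed, new_language):
            shown = typed
        # Re-evaluate the misspelling flag against the new language, for
        # previously flagged words and previously corrected words alike.
        flagged = not in_dictionary(shown, new_language)
        result.append((typed, shown, flagged))
    return result

# Toy run, mirroring the example above:
french_words = {"le", "bagage", "est", "encore", "dans"}
def in_french(word, lang):
    return lang == "fr" and word.lower() in french_words

tokens = [("Le", "Le", False), ("bagage", "baggage", False),
          ("est", "est", True), ("encore", "encore", False)]
print(reanalyze(tokens, "fr", in_french))
# [('Le', 'Le', False), ('bagage', 'bagage', False),
#  ('est', 'est', False), ('encore', 'encore', False)]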

Among the world’s most momentous problems, this one does not rank very high. It is only a small annoyance, and only a tiny set of people will ever notice it. But it provides another illustration of how tricky it is to go from good individual features to a good overall design.




Agile MOOC starts this week

In spite of all the interest in both agile methods and MOOCs (Massive Open Online Courses) there are few courses on agile methods; I know only of some specialized MOOCs focused on a particular language or method.

I produced for EdX, with the help of Marco Piccioni, a new MOOC entitled Agile Software Development. It starts airing today and is supported by exercises and quizzes. The course uses some of the material from my Agile book.

Registration is free and open to anyone at this address.


Agile methods: follow-up webinar and discussion

After my earlier ACM Webinar on Agile Methods! The Good, the Hype and the Ugly there were so many questions from the audience, left unanswered for lack of time, that a follow-up session has been set up. It will take place tomorrow (Friday, 27 March 2015) at noon New York time (18 Paris/Berlin/Zurich, 5 PM London etc.). Like the first one it is free and you can find the registration information here.


Understanding and assessing Agile: free ACM webinar next Wednesday

ACM is offering this coming Wednesday a one-hour webinar entitled Agile Methods: The Good, the Hype and the Ugly. It will air on February 18 at 1 PM New York time (10 AM West Coast, 18 London, 19 Paris, see here for more cities). The event is free and the registration link is here.

The presentation is based on my recent book with an almost identical title [1]. It will be a general discussion of agile methods, analyzing both their impressive contributions to software engineering and their excesses, some of them truly damaging. It is often hard to separate the beneficial from the indifferent and the plain harmful, because most of the existing presentations are of the hagiographical kind, gushing in admiration of the sacred word. A bit of critical distance does not hurt.

As you can see from the Amazon page, the first readers (apart from a few dissenters, not a surprise for such a charged topic) have relished this unprejudiced, no-nonsense approach to the presentation of agile methods.

Another characteristic of the standard agile literature is that it exaggerates the contrast with classic software engineering. This slightly adolescent attitude is not helpful; in reality, many of the best agile ideas are the direct continuation of the best classic ideas, even when they correct or adapt them, a normal phenomenon in technology evolution. In the book I tried to re-place agile ideas in this long-term context, and the same spirit will also guide the webinar. Ideological debates are of little interest to software practitioners; what they need to know is what works and what does not.

References

[1] Bertrand Meyer, Agile! The Good, the Hype and the Ugly, Springer, 2014, see Amazon page here, publisher’s page here and my own book page here.


Accurately Analyzing Agility

  
Book announcement:

Agile! The Good, the Hype and the Ugly
Bertrand Meyer
Springer, 2014 (just appeared)
Book page: here.
Amazon page: here.
Publisher’s page: here

A few years ago I became fascinated with agile methods: with the unique insights they include; with the obvious exaggerations and plainly wrong advice they also promote; and perhaps most of all with the constant intermingling of these two extremes.

I decided to play the game seriously: I read a good part of the agile literature, including all the important books; I sang the song, became a proud certified Scrum Master; I applied many agile techniques in my own work.

The book mentioned above is the result of that study and experience. It is both a tutorial and a critique.

The tutorial component was, I felt, badly needed. Most of the agile presentations I have seen are partisan texts, exhorting you to genuflect and adopt some agile method as the secret to a better life. Such preaching has a role but professionals know there is no magic in software development.  Agile! describes the key agile ideas objectively, concretely, and as clearly as I could present them. It does not introduce them in a vacuum, like the many agile books that pretend software engineering did not exist before (except for a repulsive idea, the dreaded “waterfall”). Instead, it relates them to many other concepts and results of software engineering, to which they bring their own additions and improvements.

Unfortunately, not all the additions are improvements. Up to now, the field has largely been left (with the exception of Boehm’s and Turner’s 2004 “Guide for the Perplexed“) to propaganda pieces and adoring endorsements. I felt that software developers would benefit more from a reasoned critical analysis. All the more so as agile methods are a remarkable mix of the best and the worst; the book carefully weeds out — in the terminology of the title — the ugly from the hype and the truly good.

Software developers and managers need to know about the “ugly”: awful agile advice that is guaranteed to harm your project. The “hype” covers ideas that have been widely advertised as shining agile contributions but have little relevance to the core goals of software development. The reason it was so critical to identify agile ideas belonging to these two categories is that they detract from the “good”, some of it remarkably good. I would not have devoted a good part of the last five years to studying agile methods if I did not feel they included major contributions to software engineering. I also found that some of these contributions do not get, in the agile literature itself, the explanations and exposure they deserve; I made sure they got their due in the book. An example is the “closed-window rule”, a simple but truly brilliant idea, of immediate benefit to any project.

Software methodology is a difficult topic, on which we still have a lot to learn. I expect some healthy discussions, but I hope readers will appreciate the opportunity to discuss agile ideas in depth for the greater benefit of quality software development.

I also made a point of writing a book that (unlike my last two) is short: 190 pages, including preface, index and everything else.

The table of contents follows; more details and sample chapters can be found on the book page listed above.

Preface
1 OVERVIEW
     1.1 VALUES
     1.2 PRINCIPLES
          Organizational principles
          Technical principles
     1.3 ROLES
     1.4 PRACTICES
          Organizational practices
          Technical practices
     1.5 ARTIFACTS
          Virtual artifacts
          Material artifacts
     1.6 A FIRST ASSESSMENT
          Not new and not good
          New and not good
          Not new but good
          New and good!

2 DECONSTRUCTING AGILE TEXTS
     2.1 THE PLIGHT OF THE TRAVELING SEMINARIST
          Proof by anecdote
          When writing beats speaking
          Discovering the gems
          Agile texts: reader beware!
     2.2 THE TOP SEVEN RHETORICAL TRAPS
          Proof by anecdote
          Slander by association
          Intimidation
          Catastrophism
          All-or-nothing
          Cover-your-behind
          Unverifiable claims
          Postscript: you have been ill-served by the software industry!

3 THE ENEMY: BIG UPFRONT ANYTHING
     3.1 PREDICTIVE IS NOT WATERFALL
     3.2 REQUIREMENTS ENGINEERING
          Requirements engineering techniques
          Agile criticism of upfront requirements
          The waste criticism
          The change criticism
          The domain and the machine
     3.3 ARCHITECTURE AND DESIGN
          Is design separate from implementation?
          Agile methods and design
     3.4 LIFECYCLE MODELS
     3.5 RATIONAL UNIFIED PROCESS
     3.6 MATURITY MODELS
          CMMI in plain English
          The Personal Software Process
          CMMI/PSP and agile methods
          An agile maturity scale

4 AGILE PRINCIPLES
     4.1 WHAT IS A PRINCIPLE?
     4.2 THE OFFICIAL PRINCIPLES
     4.3 A USABLE LIST
     4.4 ORGANIZATIONAL PRINCIPLES
          Put the customer at the center
          Let the team self-organize
          Maintain a sustainable pace
          Develop minimal software
          Accept change
     4.5 TECHNICAL PRINCIPLES
          Develop iteratively
          Treat tests as a key resource
          Do not start any new development until all tests pass
          Test first
          Express requirements through scenarios

5 AGILE ROLES
     5.1 MANAGER
     5.2 PRODUCT OWNER
     5.3 TEAM
          Self-organizing
          Cross-functional
     5.4 MEMBERS AND OBSERVERS
     5.5 CUSTOMER
     5.6 COACH, SCRUM MASTER
     5.7 SEPARATING ROLES

6 AGILE PRACTICES: MANAGERIAL
     6.1 SPRINT
          Sprint basics
          The closed-window rule
          Sprint: an assessment
     6.2 DAILY MEETING
     6.3 PLANNING GAME
     6.4 PLANNING POKER
     6.5 ONSITE CUSTOMER
     6.6 OPEN SPACE
     6.7 PROCESS MINIATURE
     6.8 ITERATION PLANNING
     6.9 REVIEW MEETING
     6.10 RETROSPECTIVE
     6.11 SCRUM OF SCRUMS
     6.12 COLLECTIVE CODE OWNERSHIP
          The code ownership debate
          Collective ownership and cross-functionality

7 AGILE PRACTICES: TECHNICAL
     7.1 DAILY BUILD AND CONTINUOUS INTEGRATION
     7.2 PAIR PROGRAMMING
          Pair programming concepts
          Pair programming versus mentoring
          Mob programming
          Pair programming: an assessment
     7.3 CODING STANDARDS
     7.4 REFACTORING
          The refactoring concept
          Benefits and limits of refactoring
          Incidental and essential changes
          Combining a priori and a posteriori approaches
     7.5 TEST-FIRST AND TEST-DRIVEN DEVELOPMENT
          The TDD method of software development
          An assessment of TFD and TDD

8 AGILE ARTIFACTS
     8.1 CODE
     8.2 TESTS
     8.3 USER STORIES
     8.4 STORY POINTS
     8.5 VELOCITY
     8.6 DEFINITION OF DONE
     8.7 WORKING SPACE
     8.8 PRODUCT BACKLOG, ITERATION BACKLOG
     8.9 STORY CARD, TASK CARD
     8.10 TASK AND STORY BOARDS
     8.11 BURNDOWN AND BURNUP CHARTS
     8.12 IMPEDIMENT
     8.13 WASTE, TECHNICAL DEBT, DEPENDENCY, DEPENDENCY CHARTS

9 AGILE METHODS
     9.1 METHODS AND METHODOLOGY
          Terminology
          The fox and the hedgehog
     9.2 LEAN SOFTWARE AND KANBAN
          Lean Software’s Big Idea
          Lean Software’s principles
          Lean Software: an assessment
          Kanban
     9.3 EXTREME PROGRAMMING
          XP’s Big Idea
          XP: the unadulterated source
          Key XP techniques
          Extreme Programming: an assessment
     9.4 SCRUM
          Scrum’s Big Idea
          Key Scrum practices
          Scrum: an assessment
     9.5 CRYSTAL
          Crystal’s Big Idea
          Crystal principles
          Crystal: an assessment

10 DEALING WITH AGILE TEAMS
     10.1 GRAVITY STILL HOLDS
     10.2 THE EITHER-WHAT-OR-WHEN FALLACY

11 THE UGLY, THE HYPE AND THE GOOD: AN ASSESSMENT OF THE AGILE APPROACH
     11.1 THE BAD AND THE UGLY
          Deprecation of upfront tasks
          User stories as a basis for requirements
          Feature-based development and ignorance of dependencies
          Rejection of dependency tracking tools
          Rejection of traditional manager tasks
          Rejection of upfront generalization
          Embedded customer
          Coach as a separate role
          Test-driven development
          Deprecation of documents
     11.2 THE HYPED
     11.3 THE GOOD
     11.4 THE BRILLIANT
Bibliography
Index

 


Agile book announced

My book “Agile! The Good, the Hype and the Ugly” will be published in a few weeks by Springer. The announced date is April 30 and there is a preview Amazon page: here.


Doing it right or doing it over?

(Adapted from an article in the Communications of the ACM blog.)

I have become interested in agile methods because they are all the rage now in industry and, upon dispassionate examination, they appear to be a pretty amazing mix of good and bad ideas. I am finishing a book that tries to sort out the nuggets from the gravel [1].

An interesting example is the emphasis on developing a system by successive increments covering expanding slices of user functionality. This urge to deliver something that can actually be shown — “Are we shipping yet?” — is excellent. It is effective in focusing the work of a team, especially once the foundations of the software have been laid. But does it have to be the only way of working? Does it have to exclude the time-honored engineering practice of building the infrastructure first? After all, when a building gets constructed, it takes many months before any  “user functionality” becomes visible.

In a typical exhortation [2], the Poppendiecks argue that:

The right the first time approach may work for well-structured problems, but the try-it, test-it, fix-it approach is usually the better approach for ill-structured problems.

Very strange. It is precisely ill-structured problems that require deeper analysis before you jump into wrong architectural decisions which may require complete rework later on. Building prototypes can be a great way to evaluate potential solutions, but a prototype is an experiment, something quite different from an increment (an early version of the future system).

One of the problems with the agile literature is that its enthusiastic admonitions to renounce standard software engineering practices are largely based on triumphant anecdotes of successful projects. I am willing to believe all these anecdotes, but they are only anecdotes. In the present case systematic empirical evidence does not seem to support the agile view. Boehm and Turner [3] write:

Experience to date indicates that low-cost refactoring cannot be depended upon as projects scale up.

and

The only sources of empirical data we have encountered come from less-expert early adopters who found that even for small applications the percentage of refactoring and defect-correction effort increases with [the size of requirements].

They do not cite references here, and I am not aware of any empirical study that definitely answers the question. But their argument certainly fits my experience. In software as in engineering of any kind, experimenting with various solutions is good, but it is critical to engage in the appropriate Big Upfront Thinking to avoid starting out with the wrong decisions. Some of the worst project catastrophes I have seen were those in which the customer or manager was demanding to see something that worked right away — “it doesn’t matter if it’s not the whole thing, just demonstrate a piece of it!” — and criticized the developers who worked on infrastructure that did not produce immediately visible results (in other words, were doing their job of responsible software professionals). The inevitable result: feel-good demos throughout the project, a reassured customer, and nothing to deliver at the end because the difficult problems have been left to rot. System shelved and never to be heard of again.

When the basis has been devised right, perhaps with nothing much to show for months, then it becomes critical to insist on regular visible releases. Doing it prematurely is just sloppy engineering.

The problem here is extremism. Software engineering is a difficult balance between conflicting criteria. The agile literature’s criticism of teams that spend all their time on design or on foundations and never deliver any usable functionality is unfortunately justified. It does not mean that we have to fall into the other extreme and discard upfront thinking.

In the agile tradition of argument by anecdote, here is an extract from James Surowiecki’s “Financial Page” article in last month’s New Yorker [4]. It’s not about software but about the current Boeing 787 “Dreamliner” debacle:

Determined to get the Dreamliners to customers quickly, Boeing built many of them while still waiting for the Federal Aviation Administration to certify the plane to fly; then it had to go back and retrofit the planes in line with the FAA’s requirements. “If the saying is check twice and build once, this was more like build twice and check once”, [an industry analyst] said to me. “With all the time and cost pressures, it was an alchemist’s recipe for trouble.”

(Actually, the result is “build twice and check twice”, or more, since every time you rebuild you must also recheck.) Does that ring a bell?

Erich Kästner’s ditty about reaching America, cited in a previous article [5], is once again the proper commentary here.

References

[1] Bertrand Meyer: Agile! The Good, the Hype and the Ugly, Springer, 2013, to appear.

[2] Mary and Tom Poppendieck: Lean Software Development — An Agile Toolkit, Addison-Wesley, 2003.

[3] Barry W. Boehm and Richard Turner: Balancing Agility with Discipline — A Guide for the Perplexed, Addison-Wesley, 2004. (Second citation slightly abridged.)

[4] James Surowiecki, in the New Yorker, 4 February 2013, available here.

[5] Hitting on America, article from this blog, 5 December 2012, available here.
