Technology plus

As the name indicates, this blog is particularly devoted to technology, and even more particularly to software technology. My technical persona is not, however, completely cut off from my personal persona and I increasingly include non-software topics.

Whether there is any readership equally interested in learning about using domain theory for specification and finding out what is the most beautiful monument of Europe remains to be seen, but from now on this will be my “technology+” blog.


Habit, happiness, and programming languages

One of the occupational hazards of spreading the word about Eiffel is the frequent answer “yes, it’s much better than the language I use now, I would like to switch, but…”, followed by some sheepish excuse.

Last night I went to see Eugene Onegin once more (I still hope some day to land the part of Monsieur Triquet). Towards the beginning of the first act [1], Tatiana’s mother (Larina), reflecting in a melancholic tone on the vicissitudes of her (long ago) arranged marriage (and (amazingly) anticipating the very fate (as sketched in the last act) of her own daughter (talking about (amazing) anticipation, is there any other similarly hair-raising case of an author (here of the text behind the libretto) so presciently staging the (exact (down to the very last details)) story of his own future tragic death?) but enough digressions (sorry (this is supposed (after all) to be (although it is not the first time (and probably not the last either) it strays from the script) a technology blog))), sings

From above, we were given habit:
It is a substitute for happiness

Is this not exactly the excuse?

Reference

[1] Libretto of Onegin, in English here, in the original there.

 

 


Handshake with a clown

In the circus business, the Zavatta family is a legend. From father to son and grandson, a Zavatta has for decades been the foremost clown of his generation, first in Italy, then in French North Africa, then in France proper. None was ever more famous than Achille Zavatta, who carried the family name through the sixties and seventies.

Legend or not, everyone has to make ends meet; the Zavatta troupe toured the beach resorts of Brittany in the summer, and was not above resorting to the occasional publicity gimmick to lure vacationers to the evening show.

I think I was 15, so the year must have been 1966. I was, like every year, spending the summer in Trébeurden in Northern Brittany, to which (after reading how it was forever spoiled, a decade or two later, by the unbridled development that has disgraced much of the French coast) I shall never return. Then it was paradise, if a wintry and rainy kind of paradise. We were told that at three in the afternoon a swimming competition would take place and — supreme enticement! — the winner would receive a prize of fifty francs from the very hands of Monsieur Achille Zavatta, the great clown. 50 francs was a not inconsiderable sum for a 15-year-old (with inflation it might be something like fifty dollars or euros today), but the name of the prize-giver was an even stronger attraction. So at the appointed time I was at the harbor, together with a dozen or so other boys, in our swimming suits. I don’t remember any girls; they probably had their own race.

No one has ever called me athletic. I could swim pretty well, and was good at staying in the water for a long time — I am still amazed that my parents once let me do a tour of several kilometers and several hours, far away from the coast, just by myself — but I certainly was not fast. Still, I wanted to try. At school we were always told what Pierre de Coubertin said when he founded the modern Olympic games: l’important, c’est de participer. What counts is to be in the game.

The swimming contest was a simple affair. A man on the pier told us: “See the small boat out there, where a boy is standing? You go swim around it, then you come back”. Trois, deux, un, partez (three, two, one, go): we jumped in and started moving towards the boat. I was not last, but definitely was not first; three or four boys were ahead of me, and maybe as many behind, a fair reflection of my place in the order of the world. I would never have thought of winning anyway.

Then I saw something interesting. The first swimmer, truly fast and modestly triumphant, held his hand out to the boy in the boat, who obligingly extended his own and helped him jump inside. The second followed, then the third. All were in the boat, looking quite happy with themselves.

Now I may not be the fastest swimmer in the former kingdom of Brittany, but when it comes to carrying out a clearly stated specification I do not let myself be influenced by the first guy or two, or three, who just did not pay attention. I knew what we had been told to do, and I was going to do it whatever anyone else was thinking. I went all the way to the boat, ignored the hand stretched out to me as it had been to my predecessors, swam around, and started going back towards the shore.

It did not take long for the others to realize their mistake. They jumped back. By now, however, I was far ahead. For the remainder of the race I quietly enjoyed the position of the one with whom everyone else is trying to catch up, rather than the other way around, to which I was more accustomed. They were still faster, but by then I had secured my advantage: I reached the shore first, gaining my first ever victory in a sports competition. Regrettably my last one too, so far.

In the years since, I have many times been in the company of people faster and — in science — brighter than I; often, as in the harbor of Trébeurden in the summer of 1966, they did not prevail in the end. Knowing your limitations does not mean you should let yourself be intimidated by the smart guys. How often have I seen the students whom everyone thought the most brilliant of all collapse on the day of the exam; the “most promising researcher of his generation” peter out; the author of a breakthrough paper succumb to the comfortable laziness of tenured life! Although outside of fables the race never goes to the turtle, the hare does not always win either; neither does the frog (or froggy, as I have been called a few times to my face and no doubt more behind my back); but the patient donkey, having memorized the instructions and never forgetting the destination, may well finish ahead of them all.

On that summer evening I received the fifty francs from the hands of Monsieur Achille Zavatta. In the following days I made good use of them. From the prize-giving event I remember the handshake, and the envious look of the other boys. I remember, too, my mother’s comment; at least this is the only reaction I remember from her: “Did you make sure to say ‘thank you, sir’?”.


Word of the day

Word

Umlottery [ˈʊm.lɑːɾɚɹi] (fr. umloterie, f.; ger. Umlotterie, f.; it. umlotteria, f.; rus. умлотерия, f.).

Definition

A spur-of-the-moment and generally random decision, by a hesitant speaker of German talking in front of a (typically large) native-speaker audience, to pronounce the next intended word with, or without, an umlaut on the decisive syllable.

Example use

My programming teacher seems to play the umlottery a lot recently, and most of the time he loses.

Discussion

In the German language some vowels change their pronunciation under the effect of the “umlaut”, a diacritical mark appearing as two dots, as in “ä” (pronounced like the “e” in “Ted”) versus a plain “a” (pronounced “Ah”). Many nouns and verbs add umlauts in some but not all of their inflections (declensions or conjugations). Non-native speakers find it a challenge to remember when to umlaut and when not. The consequences of incorrect umlauting can be dramatic; for example “drucken” means to print and “drücken” to press. Imagine the consequences of a police chief telling a subordinate “Print a new copy of the witness’s statement” and being understood as “Pressure the witness into making a different statement”. The stress is consequently high on the public speaker who suddenly cannot remember, in the middle of a talk, whether the first syllable of his next chosen word should be umlauted or not. Umlottery is the widely practiced technique of taking a big breath, silently praying to some higher power for protection, and taking a chance (as if betting at the lottery) one way or the other.

Etymology

Composite word made up from “umlaut” and “lottery”.

Origin

Precise origin unknown. First attested written occurrence in “Bertrand Meyer’s Technology Blog“, an obscure publication of dubious circulation, in October of 2012. Rumored, without independently confirmed evidence, to have been in common use in the early years of the 21st century among foreign-born professors at the ETH Zurich.

Alternative hypotheses

Some researchers have hypothesized that the word is related to the “Unified Modeling Language” (or “UML“, hence the suggestion), under the argument that using UML for a project is akin to betting on its success at the lottery. There is, however, no scholarly consensus in favor of such a connection.


A fundamental duality of software engineering

A couple of weeks ago I proposed a small quiz. (I also stated that the answer would come “on Wednesday” — please understand any such promise as “whenever I find the time”. Sorry.) Here is the answer.

The quiz was:

I have a function:

  • For 0 it yields 0.
  • For 1 it yields 1.
  • For 2 it yields 4.
  • For 3 it yields 9.
  • For 4 it yields 16.

What is the value for 5?

Shortly thereafter I added a hint: the value for 5 is 25, and changed the question to: “What is the value for 6?”. For good measure we can also ask about the value for 1000. Now compare your answer to what follows.

A good answer for the value at 6 is 34. The function in this case is –10 + 5x + |2x – 3| + |2x – 7|. It matches the values for the given inputs.
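
As a quick check (plain arithmetic on this formula): at x = 6 it gives –10 + 30 + |9| + |5| = –10 + 30 + 9 + 5 = 34, and at x = 5 it gives –10 + 25 + |7| + |3| = 25, consistent with the hint.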

[Figure: Linear function, small values]

The value for 1000 is 8980:

[Figure: Linear function, full range]

Another good answer at position 6 is 35.6. It comes up if we assume the function is over reals rather than integers; then a possible formula, which correlates very well (R-square of 0.9997) with the values at the given inputs, is:

869.42645566111 (1 – 0.4325853145802 e^(–0.0467615868913719 (x – 17.7342512233011)))^2.3116827277657443

[Figure: Exponential function, initial range]

with a quite different asymptotic behavior, giving the value 869.4 at position 1000:

[Figure: Exponential function, full range]

Some readers might have thought of another possibility, the square function x², which again matches all the given values:

[Figure: Square function, initial range]

So which of these answers is right? Each is as good as the others, and as bad. There is in particular no reason to believe that the values given in the quiz’s statement suggest the square function. Any function that fits the given values, exactly (if we stick to integers) or approximately (with reals as simulated on a computer) is an equally worthy candidate. Six inputs, or six thousand, do not resolve the question. At best they are hints.

This difference between a hint and a solution is at the core of software engineering. It is, for example, the difference between a test and a specification. A test tells us that the program works for some values; as Dijkstra famously pointed out, and anyone who has developed a serious program has experienced, it does not tell us that it will work for others. The more successful tests, the more hints; but they are still only hints. I have always wondered whether Dijkstra was explicitly thinking of the Popperian notion of falsifiability: no number of experiments will prove a physical theory (although a careful experiment may boost the confidence in the theory, especially if competing theories fail to explain it, as the famous Eddington expedition did for relativity in 1919 [1]); but a single experiment can disprove a theory. Similarly, being told that our function’s value at 6 is 34 disqualifies the square function and the last one (the exponential), but does not guarantee that the first function (the linear combination) is the solution.

The specification-testing duality is the extension to computer science of the basic duality of logic. It starts with the elementary boolean operators: to prove a or b it suffices to establish a or to establish b; and to disprove a and b it suffices to show that a does not hold or to show that b does not hold. The other way round, to disprove a or b we have to show that a does not hold and to show that b does not hold; to prove that a and b holds, we have to show that a holds and to show that b holds.

Predicate calculus generalizes or to ∃, “there exists”, and and to ∀, “for all”. To prove ∃ x | p (x) (there is an x of which p holds) it suffices to find one value a such that p (a); let’s be pretentious and say we have “skolemized” x. To disprove ∀ x | p (x) (p holds of all x) it suffices to find one value for which p does not hold.
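
As a tiny illustration: to prove ∃ x | x² = x it suffices to exhibit the witness x = 1; to disprove ∀ x | x² > x it suffices to exhibit the counterexample x = 1, since 1² = 1.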

In software engineering the corresponding duality is between proofs and tests, or (equivalently) specifications and use cases. A specification is like a “for all”: it tells us what must happen for all envisioned inputs. A test is like a “there exists”: it tells us what happens for a particular input and hence, as in predicate calculus, it is interesting as a disproof mechanism:

  • A successful test brings little information (like learning the value for 5 when trying to figure out what a function is, or finding one true value in trying to prove a ∀ property, or one false value in trying to prove an ∃ property).
  • An unsuccessful test brings us decisive information (like a false value for a ∀ property): the program is definitely not correct. It skolemizes incorrectness.
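
To make the duality concrete, here is a minimal Eiffel sketch (the routine and test are invented for illustration): the postcondition is a universal claim over all inputs, whereas each test supplies a single witness.

square (x: INTEGER): INTEGER
        -- Square of `x`.
    do
        Result := x * x
    ensure
        definition: Result = x * x
            -- A “for all”: must hold for every possible `x`.
    end

test_square
        -- A “there exists”: one input, one expected output.
    do
        check square (5) = 25 end
            -- Passing brings little information; failing would be decisive.
    end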

A proof, for its part, brings the discussion to an end when it is successful. In practice, testing may still be useful in this case, but only testing that addresses issues not covered by the proof:

  • Correctness of the compiler and platform, if not themselves proved correct.
  • Correctness of the proof tools themselves, since most practical proofs require software support.
  • Aspects not covered by the specification such as, typically, performance and usability.

But for the properties it does cover the proof is final.

It is as foolish, then, to use tests in lieu of specifications as it would be to ignore the limitations of a proof. Agile approaches have caused much confusion here; as often happens in the agile literature [2], the powerful insight is mixed up with harmful advice. The insight, which has significantly improved the practice of software development, is that the regression test suite is a key asset of a project and that tests should be run throughout. The bad advice is to ditch upfront requirements and specifications in favor of tests. The property that tests lack and specifications possess is generality. A test is an instance; a thousand tests can never be more than a thousand instances. As I pointed out in a short note in EiffelWorld (the precursor to this blog) a few years ago [3], the relationship is not symmetric: one can generate tests from a specification, but not the other way around.

The same relationship holds between use cases and requirements. It is stunning to see how many people think that use cases (scenarios) are a form of requirements. As requirements they are as useless as one or ten values are to defining a function. Use cases are a way to complement the requirements by describing the system’s behavior in selected important cases. A kind of reality check, to ensure that, whatever abstract aims have been defined for the system, it still covers the cases known to be of immediate interest. But to rely on use cases as requirements means that you will get a system that will satisfy the use cases — and possibly little else.

When I use systems designed in recent years, in particular Web-based systems, I often find myself in a stranglehold: I am stuck with the cases that the specifiers thought of. Maybe it’s me, but my needs tend, somehow, to fall outside of these cases. Actually it is not just me. Not long ago, I was sitting close to a small-business owner who was trying to find her way through an insurance site. Clearly the site had a planned execution path for employees, and another for administrators. Problem: she was both an employee and the administrator. I do not know how the session ended, but it was a clear case of misdesign: a system built in terms of standard scenarios. Good specification performs an extra step of abstraction (for example using object-oriented techniques and contracts, but this is for another article). Skipping this step means forsaking the principal responsibility of the requirements phase: to generalize from an analysis of the behavior in known cases to a definition of the desired behaviors in all relevant cases.

Once more, as everywhere else in computer science [4], abstraction is the key to solid results that stand the test of time. Definitely better than judging a book by its cover, inferring a function by its first few values, verifying a program by its tests, or specifying a system by its use cases.

References

[1] See e.g. a blog article: Einstein and Eddington, here.

[2] Bertrand Meyer: Agile! The Good, the Hype and the Ugly, 2013, to appear.

[3] Bertrand Meyer: Test or spec? Test and spec? Test from spec!, EiffelWorld column, 2004 available here.

[4] Jeff Kramer: Is Abstraction the Key to Computer Science?, in Communications of the ACM, vol. 50, no. 4, April 2007, pages 36-42, available from CiteSeer here.


Precedent

Alexander Kogtenkov pointed out to me that precursor work to my papers on the Alias Calculus [1] [2] had been published by John Whaley and Martin Rinard [3]. There are some significant differences; in particular my rules are simpler, and their work is not explicitly presented as a calculus. But many of the basic ideas are the same. The reason I did not cite that paper is simply that I was not aware of it; I am happy to correct the omission.

References

[1] Bertrand Meyer: Towards a Theory and Calculus of Aliasing, in Journal of Object Technology, vol. 9, no. 2, March-April 2010, pages 37-74, available here (superseded by [2]).
[2] Bertrand Meyer: Steps Towards a Theory and Calculus of Aliasing, in International Journal of Software and Informatics, 2011, available here (revised and improved version of [1]).
[3] John Whaley and Martin Rinard: Compositional Pointer and Escape Analysis for Java Programs, in POPL 1999, available here.


Quiz (2): hint

Here is a hint: the value for 5 is 25. The next question is: what is the value for 6?


Quiz (1): What is this function?

For various reasons there have been no articles in recent weeks; now we are restarting on a regular basis!

As the first topic for this new season, here is a little quiz. I have a function:

  • For 0 it yields 0.
  • For 1 it yields 1.
  • For 2 it yields 4.
  • For 3 it yields 9.
  • For 4 it yields 16.

The question: what is the value for 5?

Answer next Wednesday (at least it will be Wednesday in some time zone).


Salad requirements, requirements salad

 

You know what salad is.

Salad is made of green leaves. Actually no, there are lots of other colors, lots of other kinds; and many, such as rice salad, pasta salad, potato salad, include no leaves at all.

In any case, salad is made of vegetables. Actually no: fruit salad.

I meant vegetal, as in non-animal. Actually no: salads often contain cheese, meat, fish, seafood.

In any case, salad is a cold dish. Actually no: did you never try a warm goat cheese salad?

Salad has dressing. Actually no: I know quite a few people who shun dressings.

Salads are consumed at the beginning of a meal. Actually no: in France, the normal place of a salad is after the main course.

At least they are only part of a meal. Actually no: have you not heard of the dinner salad?

Salads have something to do with salt. Actually no: although you are right etymologically, as the word comes through the French salade from the Latin salata, salted, in our blood-pressure-conscious world the cook often does not put in any salt.

Salads are only consumed at lunch. At dinner too. And maybe… I take that back.

I know a salad when I see one. Or maybe when I taste one. Although I have never tried blindfolded.

Then explain to us what it is.

Well, if it says “salad” on the menu it must be a salad.

Can you do better?

I will have to come back to you on that one.

If it is so hard to come up with a convincing definition for such a banal notion (and it is real fun to look at good dictionaries and see the contortions they go through in trying to make some sense of it), no wonder software requirements specifications (SRS) are so hard. One of the obligatory steps in a requirements process — “agile” or not — is to build up a glossary for the project [1]: a set of definitions for the terms of the trade, those words from the problem domain that the stakeholders confidently throw around in discussions, assuming that everyone else understands; except that when you try to understand too, you realize there is no clear definition and even, in some cases, that different people understand them in different ways.

If definitions are so hard, are requirements then impossible? The trick is that we often do not need a dictionary-style definition of what things are; we only need to know what they have, in other words their properties and operations. This is the abstract data type approach, also known as object technology. But it is still hard to convince the stakeholders to explain what they mean.
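
To illustrate (a deliberately frivolous sketch, with invented names): in the abstract data type style we do not say what a salad is, only what it has and what can be done to it.

deferred class SALAD
feature
    ingredients: LIST [INGREDIENT]
            -- What the salad is made of.
        deferred end

    is_dressed: BOOLEAN
            -- Has a dressing been added?
        deferred end

    add (i: INGREDIENT)
            -- Include `i` among the ingredients.
        deferred
        ensure
            one_more: ingredients.count = old ingredients.count + 1
        end
end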

The German language has one more use of salads: the affectionate term to describe the jumble of wires that mars the back of your desk (I am guessing) and also the front of mine (in this case I know) is Kabelsalat, cable salad [2]. More than a few SRS are like that too: requirements salads.

References

[1] IEEE: Standard 830-1998, Recommended Practice for Software Requirements Specifications, available (for a fee) here.

[2] German Wikipedia: Kabelsalat entry, available here.

 


The manhood test

 

I came across an obscure and surprisingly interesting article by Cliff Jones [1] about the history of rely-guarantee, which contains the following extract:

It was perhaps not fully appreciated at the time of [Hoare’s 1969 axiomatic semantics paper] that the roles of pre and post conditions differ in that a pre condition gives permission to a developer to ignore certain possibilities; the onus is on a user to prove that a component will not be initiated in a state that does not satisfy its pre condition. In contrast a post condition is an obligation on the code that is created according to the specification. This Deontic view carries over [to rely-guarantee reasoning].

I use words more proletarian than “deontic”, but this view is exactly what stands behind the concepts of Design by Contract and has been clearly emphasized in all Eiffel literature ever since the first edition of OOSC. It remains, however, misunderstood outside of the Eiffel community; many people confuse Design by Contract with its opposite, defensive programming. The criterion is simple: if you have a precondition to a routine, are you willing entirely to forsake the corresponding checks (conditionals, exceptions…) in the routine body? If not, you may be using the word “contract” as a marketing device, but that’s all. The courage to remove the checks is the true test of adulthood.
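
As a minimal sketch of the criterion (class and feature names invented), consider a bounded buffer. In the first version the body relies on the precondition and contains no check; the second, defensive version re-tests the condition and gives up the discipline:

put (v: G)
        -- Add `v`. Design by Contract: the body trusts the precondition.
    require
        not_full: count < capacity
    do
        count := count + 1
        storage [count] := v
    ensure
        added: count = old count + 1
    end

defensive_put (v: G)
        -- Add `v`. Defensive programming: the stated “contract” is not trusted.
    do
        if count >= capacity then
            -- Log, raise an exception, return silently…
            -- whichever we choose, the methodological discipline is lost.
        else
            count := count + 1
            storage [count] := v
        end
    end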

The application of Microsoft’s “Code Contracts” mechanism to the .NET libraries fails that test: a precondition may say “buffer not full” or “insertions allowed”, but the code still checks the condition and triggers an exception. The excuse I have heard is that one cannot trust those unwashed developers. But the methodological discipline is lost. Now let me repeat this using clearer terminology: it’s not deontic.

Reference

[1] Cliff Jones: The role of auxiliary variables in the formal development of concurrent programs, in Reflections on the work of C. A. R. Hoare, eds. Jones, Roscoe and Wood, Springer Lecture Notes in Computer Science, 2009; technical report version available here.


EIS: Putting into Practice the Single Model Principle

Since release 6.2 (November 2008) EiffelStudio has included the EIS system, Eiffel Information System. It has been regularly revised, and significantly improved for the recent 7.1 release.

For us EIS is a key contribution with far-reaching software engineering implications, but many users seem unaware of it, perhaps because we have not been explicit enough about why we think it is important. We would love to have more people try it and give us their feedback. (Please make sure to use the 7.1 version.) Information on EIS can be found in the documentation [1] and also in a blog entry by Tao Feng [2].

EIS connects an Eiffel system with external documents in arbitrary formats; examples of formats currently supported are Microsoft Word and PDF, but you can easily add protocols. Such a connection links an element of the Eiffel text, such as a feature, with an element of the external document, such as a paragraph. Then clicking the Eiffel element in EiffelStudio will open the document at the corresponding place in the external tool (Word, Acrobat etc.); this is the EIS “outgoing” mechanism. Conversely the external element has a back link: clicking in the external tool will open EiffelStudio at the right place; this is the EIS “incoming” mechanism.

For the outgoing mechanism, the link will appear as part of a note clause (with attributes filled by default, you need only edit the URL and any option that you wish to change):

[Figure: EIS note clause]

The fundamental idea behind EIS is to support the seamless form of software development promoted and permitted by Eiffel, where all phases of a project’s lifecycle are closely linked and the code provides the ultimate reference. Since other documents are often involved, in particular a requirements document (SRS, Software Requirements Specification), it is essential to record their precise associations with elements of the software text. For example a paragraph in the SRS could state that “Whenever the tank temperature reaches 50 degrees, the valve shall be closed”. In the software text, there will be some feature, for example monitor_temperature in the class TANK, reflecting this requirement. The two elements should be linked, in particular to ensure that dependencies appear clearly and that any change in either the requirements or the code triggers the corresponding update to the other side. This is what EIS provides.
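
Concretely, the link could take a form like the following (a sketch only: the document location is invented, and the exact attribute list is the one described in the documentation [1]):

monitor_temperature
        -- Close the valve when the tank temperature reaches 50 degrees.
    note
        EIS: "name=Tank temperature requirement", "protocol=PDF",
            "src=file://$(system_path)/docs/srs.pdf"
                -- Hypothetical location of the SRS; clicking the entry opens it there.
    do
        -- …
    end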

We envision further tools to track dependencies and in particular to warn users if an element of a connection (e.g. requirement or code) changes, alerting them to the need to check the linked elements on the other side. One of the key goals here is traceability: effective project management, particularly during the evolution of a system, requires that all dependencies between the project’s artifacts be properly recorded, so that it is possible to find out the consequences of any change, proposed or carried out.

The general approach reflects the essential nature of Eiffel development, with its Single Product Principle linking all elements of a software system and minimizing, rather than exaggerating, the inevitable differences of levels of abstraction between requirements, design, code, test plans, test logs, schedules and all the other products of a software project. The core problem of software engineering is change: if we use different tools and notations at each step, and keep the documents separate, we constantly run the risk of divergence between intent and reality. Eiffel by itself offers a good part of the solution by providing a single method (with all its principles, from Design by Contract to open-closed etc.), a single notation (the Eiffel language itself) and a single integrated set of tools (the EiffelStudio IDE) supporting the entire lifecycle; the language, in particular, is meant for requirements and design as much as for implementation. The graphical forms (BON and UML, as produced by the Diagram Tool of EiffelStudio in a roundtrip style, i.e. changes to the diagram immediately generate code and changes to the code are reflected in the diagram) directly support these ideas. Of course documents in other formalisms, for example SRS, remain necessary for human consumption; but they should be closely linked to the core project asset, the Eiffel code; hence the need for EIS and its connection mechanisms.

This approach, as I have often noted when presenting it in public, is hard to convey to people steeped in the mindset of the past (UML as separate from code, model-driven development) which magnify the differences between software levels, hence introducing the risk of divergence and making change painful. The Eiffel approach is innovative enough to cause incomprehension or even rejection. (“What, you are not model-driven, but everyone says model-driven is good!” – well, models are bad if they are inaccurate. In the Eiffel approach the model and the program are the same thing, or more precisely the model is the abstract view of the program, obtained through abstraction mechanisms such as deferred classes with contracts and the “contract view” tool of EiffelStudio.)

To be effective, these ideas require proper tool support, for which EIS is a start. But we would like to know if we are on the right track and hence need feedback. We would be grateful if you could try out EIS and tell us what you think, both about the current state of the mechanism and its long-term prospects in the general framework of high-quality, sustainable software development.

References

[1] EIS documentation, here.

[2] Tao Feng, Start using Eiffel Information System, Eiffelroom blog entry of 17 April 2008, available here.


SP software engineering seminar (web-streamed): talks by H. Gall and L. Baresi, Thursday, 5 July

On Thursday, July 5 at 15:00 Saint Petersburg time (7 AM New York, noon London, 13:00 Paris/Brussels/Zurich/Milan), the Saint Petersburg software engineering seminar presents two talks, streamed over the Internet:

  • First hour: Luciano Baresi, Politecnico di Milano: A-3: A Middleware for Self-organizing Pervasive Systems
  • Second hour: Harald Gall, University of Zurich: Software Assessment with Software Sensing and Bug Smelling

See abstracts and other details on the seminar page at http://sel.ifmo.ru/seminar/.

The seminar will be streamed over the Internet at the usual address, given on the seminar page. Please join us for two exciting presentations!

 

 


Positions open at ETH in concurrency and verification

We have positions open at both the PhD and postdoc levels at the Chair of Software Engineering at ETH Zurich. As noted in an earlier article, I recently received an Advanced Investigator Grant from the European Research Council (5 years, 2.5 million euros) on the theme “Concurrency Made Easy”; see [1] for the project description. We are also recruiting for our effort of building an advanced verification environment around Eiffel (EVE, Eiffel Verification Environment), under the slogan “Verification As a Matter of Course”; see e.g. the slides at [2], although the details are no longer up to date.

Requirements:

  • Excellent background in verification.
  • For the concurrency project, excellent background in concurrency.
  • Good publications on relevant topics (particularly for the postdoc positions).
  • Excellent mastery of object-oriented programming, including Eiffel concepts and Design by Contract.
  • Mix of theoretical and practical (software development) talents.
  • Passion for research and determination to advance the state of software engineering.

Please send a CV to this address; also include a position statement describing (briefly, half a page to two pages) the topics on which you would like to work and what you think you can contribute. Obviously you should take some time to become familiar with our work, starting from the research pages at [3].

References

[1] Concurrency Made Easy project description, here.

[2] Slides of a talk at SAC, here.

[3] Home page of ETH Chair of Software Engineering, here.


The most beautiful monument of Europe

 

The most beautiful of all monuments in Europe is not the palace of Versailles, notwithstanding the Hall of Mirrors with its endless reflections of chandeliers and pillars, notwithstanding the fairy-tale grace of the Trianons, notwithstanding the sumptuous Hall of Congresses where the 1919 peace conference put a formal end … read the entire text.

Le plus beau des monuments d’Europe n’est pas Versailles, malgré sa Galerie des Glaces où se reflètent à l’infini les lustres et les pilastres, malgré ses Trianons, malgré son imposante Salle du Congrès où prit officiellement fin, en 1919, … lire le texte complet en français.

 

Yes, I know, this is supposed to be a technology blog.

There are, however, times like right now when intellectuals should not remain silent — especially engineers and scientists.

I wrote the text referenced above several years ago; I don’t remember the exact date but it sounds very much Maastricht-aftermath. I have circulated it to a few friends, but think the time has come to publish it.

I am quite aware that unfolding events may make it look ridiculous. And then what? I will have done my tiny bit to bring people back to reason.

Note: I do not remember the provenance of the photograph. If informed, I would be happy to add the proper acknowledgment.


Talks in coming months

 

Here is a list of some presentations I am scheduled to give in the near future. Time for the faithful followers of this blog to start organizing groupie trips and get ready to haggle with the ticket scalpers and queue up at 3 AM for the best seats.

On May 9 I will give a talk in Paris, at Valtech, on the topic “Eiffel: Objects, Languages, Concurrency” [1]. It will be a general overview talk (in French) describing the key concepts of the Eiffel method and new developments.

On May 17 I will give a keynote at the Russian conference on IT education [2]. I haven’t sent a title and abstract yet but will talk about our experience of teaching introductory programming at ETH, now for 9 years, supported by the Touch of Class textbook which is now available in Russian. The talk itself will be in Russian.

On Tuesday, 29 May, I will give a keynote at TOOLS EUROPE in Prague [3]. This is the 50th TOOLS conference, a milestone, and I will talk on the theme of the conference, “The Triumph of Objects”, to assess the impact of object technology on the field of IT.

I am also giving a keynote at MSEPT (Multicore Software Engineering, Performance and Tools) the same week, on May 31. MSEPT [4] is co-located with TOOLS in Prague. The title of my presentation is “Concurrent Programming is Easy” and I will in particular present new developments in the SCOOP model and our first steps in the Concurrency Made Easy ERC Advanced Investigator Project. In addition I will be participating in two Eiffel-related workshops at TOOLS, WAVE on Advances in Verification for Eiffel, May 29 [5], where we are submitting several papers, and the third “Eiffel Web Design Feast” on May 30, part of a community project that is building an Eiffel-based Web development framework [6].

On June 5 I am giving an invited talk at the New Faculty Symposium of ICSE (International Conference on Software Engineering) in Zurich [7]. The title (assigned by the organizers) is “Promoting your ideas”. I did warn the organizers that this would be a contrarian talk as I find the current computer science publication culture in need of a reboot — this is the goal of the November Dagstuhl workshop mentioned below — but they said it was OK; I might even have heard the word “welcome” at some point.

On June 12 I will deliver a keynote at the International Conference on Reliable Software Technologies, also known as Ada-Europe, in Stockholm [8]. The Ada community remains significant and is becoming interested in contracts, hence the subject of my talk: Life with Contracts. I will summarize the experience gained in applying Design by Contract as a core principle throughout development, and the next steps in developing the approach.

On June 24 I am on one of the two panels at the Alan Turing Centenary Conference in Manchester [9]; the panel is entitled The Big Questions in Computation, Intelligence and Life.

In Seattle, 16-20 July, I look forward to presenting our latest verification ideas to the other members of the IFIP Working Group 2.3 on programming methodology [10]; this is the toughest and most unforgiving audience I know, but their feedback has always proved invaluable.

The next set of talks (apart from a possible presentation at the Snowbird conference in July, which I haven’t confirmed yet) is at our LASER summer school in Elba, September 2-8 [11], where I will deliver a set of lectures entitled Eiffel: a study in language design and evolution; it will be an in-depth discussion of issues that arise in devising a quality-focused programming language and managing its continued refinement over a long period, focusing on a few key design principles.

A few weeks later, on September 26, in Natal, I will present a keynote at the Brazilian software engineering conference, SBES [12]. I will talk about concurrency again, hoping of course to have new results to showcase by then.

Another event in which I am involved and expect to give a presentation is a Dagstuhl “Perspectives” workshop on the Publication Culture in Computer Science, November 6-9 [13]. The workshop was set up on the initiative of Moshe Vardi and I am one of the organizers. There is a widespread feeling that the publication model of computer science is broken; a number of articles in this blog have discussed the issues. At Dagstuhl we hope to be able to start fixing the process. Stay tuned.

References

[1] Talk at Valtech, 9 May 2012, information here.

[2] 10th All-Russian conference on IT education, Moscow, 16-18 May 2012, conference page here.

[3] TOOLS Europe 2012, Prague, 28 May – 1 June 2012, conference page here.

[4] MSEPT: International Conference on Multicore Software Engineering, Performance, and Tools, Prague, 31 May – June 1, 2012, conference page here.

[5] Workshop on Advances in Verification for Eiffel (WAVE), Prague, 29 May 2012, workshop page here.

[6] Eiffel Web Design Feast, Prague, 30 May 2012, call for participation available here.

[7] New Faculty Symposium at the International Conference on Software Engineering (ICSE), Zurich, 5 June 2012, symposium page here.

[8] 17th International Conference on Reliable Software Technologies (Ada Europe 2012), Stockholm, 11-15 June 2012, conference page here.

[9] Alan Turing Centenary Conference, Manchester, 11-25 June 2012, conference page here.

[10] IFIP TC2-WG2.3 (Working group on Programming Methodology), group page here (meetings by invitation only).

[11] LASER summer school 2012, Innovative Languages for Software Engineering, Elba, 2-8 September 2012 (other speakers are Andrei Alexandrescu on D, Roberto Ierusalimschy on Lua, Ivar Jacobson on UML and SEMAT, Eric Meijer on C# and Linq, Martin Odersky on Scala, Simon Peyton-Jones on Haskell, and Guido van Rossum on Python); school page here.

[12] XXVI Brazilian Symposium on Software Engineering (SBES), part of CBSoft (3rd Brazilian Conference on Software: Theory and Practice), Natal, 23-28 September 2012, symposium page here.

[13] Perspectives Workshop: Publication Culture in Computing Research (by invitation), Dagstuhl, 6-9 November 2012, workshop page here.


Domain Theory: precedents

Both Gary Leavens and Jim Horning commented (partly here, partly on Facebook) about my Domain Theory article [1] to mention that Larch had mechanisms for domain modeling and specification reuse. As Horning writes:

The Larch Shared Language was really all about creating reusable domain theories, including theorems about the domains.  See, for example [2] and [3].

I am honored that they found the time to write about the article and happy to acknowledge Larch, one of the most extensive efforts, over several decades, to provide serious notations and tools for specification. Leavens’s and Horning’s messages gave me the opportunity to re-read some Larch papers and discover a couple I did not know.

My article did not try to provide exhaustive references; if it had, Larch would have been among them. I would probably have cited my own paper on M [4], earlier than [3], which introduces a notation for composing specifications; see section 1.4 (“Features of the M method and the associated notation have thus been devised to allow for modular descriptions of systems. A system description may include an interface paragraph that describes the connection of the current specification with others, existing or yet to be written”) and the presentation of these mechanisms in section 5.

Larch traits, described in [3], pursue a similar aim, but the earlier article cited by Horning [2] is a general, informal discussion of formal specification; it does not mention traits, and in fact does not cite Larch, stating instead “We have experimented with the use of two very different tools, PIE and Affirm, in constructing modest sized algebraic specifications”. Its general observations about the specification task remain useful today, and it does mention reuse in passing.

If we were to look for precedents, the basic source would have to be the Clear specification language of Goguen and Burstall, for which the citations [5, 6, 7] all appear in my M paper [4] and go back further: 1977-1981. Clear made a convincing case for modularizing specifications, and defined supporting language constructs.

Since these early publications, many people have come to realize that reuse and composition can be as useful on the specification side as they are for programming. Typical specification and verification techniques, however, do not take advantage of this idea and tend to make us restart every time from the lowest level. Domain Theory, as outlined in [1], is intended to bring abstraction, which has proved so beneficial in other parts of software engineering, to the world of specification.

References

[1] Domain Theory: The Forgotten step in program verification, an article in this blog, see here.

[2] John V. Guttag, James J. Horning, Jeannette M. Wing: Some Notes on Putting Formal Specifications to Productive Use, in Science of Computer Programming, vol. 2, no. 1, 1982, pages 53-68. (BM note: I found a copy here.)

[3] John V. Guttag, James J. Horning: A Larch Shared Language Handbook, in Science of Computer Programming, vol. 6, no. 2, 1986, pages 135-157. (BM note: I found a copy here, which also has a link to the Larch report.)

[4] Bertrand Meyer: M: A System Description Method, Technical Report TR CS 85-15, University of California, Santa Barbara, 1985, available here.

[5] Rod M. Burstall and Joe A. Goguen: Putting Theories Together to Make Specifications, in Proceedings of 5th International Joint Conference on Artificial Intelligence, Cambridge (Mass.), 1977, pages 1045-1058.

[6] Rod M. Burstall and Joe A. Goguen: The Semantics of Clear, a Specification Language, in Proceedings of Advanced Course on Abstract Software Specifications, Copenhagen, Lecture Notes in Computer Science 86, Springer-Verlag, 1980, pages 292-332, available here.

[7] Rod M. Burstall and Joe A. Goguen: An Informal Introduction to Specifications using Clear, in The Correctness Problem in Computer Science, eds. R. S. Boyer and J. S. Moore, Springer-Verlag, 1981, pages 185-213.


Domain Theory: the forgotten step in program verification

 

Program verification is making considerable progress but is hampered by a lack of abstraction in specifications. A crucial step is, almost always, absent from the process; this omission is the principal obstacle to making verification a standard component of everyday software development.

1. Steps in software verification

In the first few minutes of any introduction to program verification, you will be told that the task requires two artifacts: a program, and a specification. The program describes what executions will do; the specification, what they are supposed to do. To verify software is to ascertain that the program matches the specification: that it does what it should.

The consequence usually drawn is that verification consists of three steps: write a specification, write a program, prove that the program satisfies the specification. The practical process is of course messier, if only because the first two steps may occur in the reverse order and, more generally, all three steps are often intertwined: the specification and the program influence each other, in particular through the introduction of “verification conditions” into the program; and initial proof attempts will often lead to changes in both the specification and the program. But by and large these are the three accepted steps.

Such a description misses a fourth step, a prerequisite to specification that is essential to a scalable verification process: Domain Theory. Any program addresses a specific domain of discourse, be it the domain of network access and communication for a mobile phone system, the domain of air travel for a flight control system, of companies and shares for a stock exchange system and so on. Even simple programs with a limited scope, such as the computation of the maximum of an array, use a specific domain beyond elementary mathematics. In this example, it is the domain of arrays, with their specific properties: an array has a range, with minimum and maximum indexes, and an associated sequence of values; we may define a slice a [i..j], ask for the value associated with a given index, replace an element at a given index and so on. The Domain Theory provides a formal model for any such domain, with the appropriate mathematical operations and their properties. In the example the operations are the ones just mentioned, and the properties will include the axiom that if we replace an element at a certain index i with a value v and then access the element at an index j, the value we get is v if i = j, and otherwise the earlier value at j.
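
Writing, say, a.replaced (i, v) for the array after such a replacement (notation invented for this discussion), the axiom reads:

(a.replaced (i, v)) [j] = v        if i = j
(a.replaced (i, v)) [j] = a [j]    if i ≠ j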

2. The role of a Domain Theory

The task of devising a Domain Theory is to describe such a domain of reference, in the spirit of abstract data types: by listing the applicable operations and their properties. If we do not treat this task as a separate step, we end up with the kind of specification that works for toy examples but quickly becomes unmanageable for real-life applications. Most of the verification literature, unfortunately, relies on such specifications. They lack abstraction since they keep using the lowest-level mathematical objects and constructs, such as numbers and quantified expressions. They are to specification what assembly language is to modern programming.

Dines Bjørner has for a long time advocated a closely related idea, domain engineering; see for example his book in progress [1]. Unfortunately, he does not take advantage of modularization through abstract data types; the book is an example of always-back-to-the-basics specification, resorting time and again to fully explicit specifications based on a small number of mathematical primitives, and as a consequence making formal specification look difficult.

3. Maximum computed from both ends

As a simple example of modeling through an abstract theory consider an algorithm for computing the maximum of an array. We could use the standard technique that goes through the array one way, but for variety let us take the algorithm that works from both ends, moving two integer cursors towards each other until they meet. (This example was used in a verification competition at a recent conference; I forget which one.) The code looks like this:

[Figure: Two-way maximum routine]
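
(The routine appears above only as a figure; the following Eiffel sketch, reconstructed from the description, shows its structure, with the abstract invariant clause discussed below given as a comment.)

two_way_max (a: ARRAY [INTEGER]): INTEGER
        -- Maximum of `a`, computed by moving two cursors towards each other.
    require
        non_empty: a.count > 0
    local
        i, j: INTEGER
    do
        from
            i := a.lower
            j := a.upper
        invariant
            in_bounds: a.lower <= i and i <= j and j <= a.upper
            -- Abstract clause: a.maximum = (a [i..j]).maximum
        until
            i = j
        loop
            if a [i] > a [j] then
                -- The maximum of a [i..j] is also that of a [i..j - 1].
                j := j - 1
            else
                -- The maximum of a [i..j] is also that of a [i + 1..j].
                i := i + 1
            end
            -- Termination: j - i decreases at each iteration.
        end
        Result := a [i]
    end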

The specification, expressed by the postcondition (ensure), should state that Result is the maximum of the array; the loop invariant will be closely related to it. How do we express these properties? The obvious way is not the right way. It states the postcondition as something like

∀ k: ℤ | (k ≥ a.lower ∧ k ≤ a.upper) ⇒ a [k] ≤ Result

∃ k: ℤ | k ≥ a.lower ∧ k ≤ a.upper ∧ a [k] = Result

In words, Result is at least as large as every element of the array, and is equal to at least one of the elements of the array. The invariant can also be expressed in this style (try it).

The preceding specification expresses the desired property, but it is at an outrageously lower level than called for. The notion of maximum is a general one for arrays over an ordered type. It can be computed through many different algorithms in addition to the one shown above, and exists independently of these algorithms. The detailed, assembly-language-like definition of its properties should not have to be repeated in every case. It should be part of the Domain Theory for the underlying notion, arrays.

4. A specification at the right level of abstraction

In a Domain Theory for arrays of elements from an ordered set, one of the principal operations is maximum, satisfying the above properties. The definition of maximum through these properties belongs at the Domain Theory level. The Domain Theory should include that definition, independent of any particular computational technique such as two_way_max. Then the routine’s postcondition, relying on this notion from the Domain Theory, becomes simply

Result = a.maximum

The application of this approach to the loop invariant is particularly interesting. If you tried to write it at the lowest level, as suggested above, you should have produced something like this:

a.lower ≤ i ≤ j ≤ a.upper

∃ k: ℤ | k ≥ i ∧ k ≤ j ∧ (∀ l: ℤ | l ≥ a.lower ∧ l ≤ a.upper ⇒ a [l] ≤ a [k])

The first clause is appropriate but the rest is horrible! With its nested quantified expressions it gives an impression of great complexity for a property that is in fact straightforward, simple enough to be explained to a 10-year-old: the maximum of the entire array can be found between indexes i and j. In other words, it is also the maximum of the array slice going from i to j. The Domain Theory will define the notion of slice and enable us to express the invariant as just

a.lower ≤ i ≤ j ≤ a.upper — This bounding clause remains

a.maximum = (a [i..j ]).maximum

(where we will write the slice a [i..j ] as a.slice (i, j ) if we do not have mechanisms for defining special syntax). Verifying the routine becomes trivial: on loop exit the invariant still holds and i = j, so the maximum of the entire array is given by the maximum of the single-element slice a [i..i ], which is the value of its single element a [i ]. This last property — the maximum of a single-element array is its single value — is independent of the verification of any particular program and should be proved as a little theorem of the Domain Theory for arrays.

The comparison between the two versions is striking: without Domain Theory, we are back to the most tedious mathematical manipulations again and again; simple, clear properties look complicated and obscure. This is just for a small example on basic data structures; now think what it will be like for a complex application domain. Without a first step of formal modeling to develop a Domain Theory, no realistic specification and verification process is possible.

Although the idea is illustrated here through examples of individual routines, the construction of a Domain Theory should usually occur, in an object-oriented development process, at the level of a class: the embodiment of an abstract data type, which is at the appropriate level of granularity. The theory applies to objects of a given type, and hence will be used for the verification of all operations of that type. This observation justifies the effort of devising a Domain Theory, since it will benefit a whole set of software elements.
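
As a sketch of what such a class-level theory might look like in Eiffel (all names invented; a real version would belong to a library of reusable theories):

deferred class ARRAY_THEORY [G -> COMPARABLE]
        -- Domain Theory for arrays over an ordered set:
        -- operations as functions, axioms and theorems as contracts.

feature

    maximum (a: ARRAY [G]): G
            -- Largest value in `a`.
        require
            non_empty: a.count > 0
        deferred
        ensure
            upper_bound: across a as e all e.item <= Result end
            membership: a.has (Result)
        end

    slice (a: ARRAY [G]; i, j: INTEGER): ARRAY [G]
            -- Elements of `a` from index `i` to index `j`.
        require
            in_range: a.lower <= i and i <= j and j <= a.upper
        deferred
        ensure
            size: Result.count = j - i + 1
        end

end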

5. Components of a Domain Theory

The Domain Theory should include the three ingredients illustrated in the example:

  • Operations, modeled as mathematical functions (no side effects of course, we are in the world of specification).
  • Axioms characterizing the defining properties of these operations.
  • Theorems, characterizing other important properties.

This approach is of course nothing else than abstract data types (the same thing, although few people realize it, as object-oriented analysis). Even though ADTs are a widely popularized notion, supported for example by tools such as CafeOBJ [2] and Maude [3], it is generally not taken to its full conclusions; in particular there is too often a tendency to define every new ADT from scratch, rather than building up libraries of reusable high-level mathematical components in the O-O spirit of reuse.

6. Results, not just definitions

In devising a Domain Theory with the three kinds of ingredient listed above, we should not forget the last one, the theorems! The most depressing characteristic of much of the work on formal specification is that it is long on definitions and short on results, while good mathematics is supposed to be the reverse. I think people who have seriously looked at formal methods and do not adopt them are turned off not so much by the need to use mathematics as by the impression that they get little value for it.

That is why Eiffel contracts do get adopted: even if it’s just for testing and debugging, people see immediate returns. It suffices for a programmer to have caught one bug as the violation of a simple postcondition to be convinced for life and lose any initial math-phobia.

7. Quantifiers are evil

As we go beyond simple contract properties — this argument must be positive, this reference will not be void — the math needs to be at the level of abstraction to which, as modern programmers, we are accustomed. For example, one should always be wary of program specifications relying directly on quantified expressions, as in the low-level variants of the postcondition and loop invariant of the two_way_max routine.

This is not just a matter of taste, as in the choice in logic [4] between lambda expressions (more low-level but also more immediately understandable) and combinators (more abstract but, for many, more abstruse). We are talking here about the fundamental software engineering problem of scalability; more generally, of the understandability, extendibility and reusability of programs, and the same criteria for their specification and verification. Quantifiers are of course needed to express fundamental properties of a structure but in general should not directly appear in program assertions: as the example illustrated, their level of abstraction is lower than the level of discourse of a modern object-oriented program. If the rule — Quantifiers Considered Harmful — is not absolute, it must be pretty close.

Quantified expressions (“All elements of this structure possess this property”, “Some element of this structure possesses this property”) belong in the description of the structure and not in the program. They should appear in the Domain Theory, not in the verification. If you want to express that a hash table search found an element of key K, you should not write

(Result = Void ∧ (∀ i: Z | i ≥ a.lower ∧ i ≤ a.upper ⇒ a.item (i).key ≠ K))
∨
(Result ≠ Void ∧ (∃ i: Z | i ≥ a.lower ∧ i ≤ a.upper ∧ a.item (i).key = K ∧ Result = a.item (i)))

but

Result /= Void  ⇒  (Result ∈ a.elements_of_key (K))

The quantified expressions will appear in the Domain Theory for the corresponding structure, in the definition of such domain properties as elements_of_key. Then the program’s specification — the contracts to be verified — can rely on concepts that make sense to the programmer; the verification will take advantage of theorems that have been proved independently since they belong to the Domain Theory and do not depend on individual programs.
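
As an illustration, here is how the relevant theory fragment might read in Eiffel. This is a sketch only: it assumes a surrounding deferred theory class that also defines the queries lower, upper and item, and the defining property appears as a comment since it is pure mathematics:

    elements_of_key (k: K): LIST [G]
            -- Elements of the table whose key is k.
        deferred
        ensure
            -- Defining property (the quantifier lives here, in the theory):
            --   x ∈ Result ⇔ (∃ i: Z | i ≥ lower ∧ i ≤ upper ∧ item (i).key = k ∧ x = item (i))
        end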

8. Even the simplest examples…

Practical software verification requires Domain Theory even in the simplest cases, including those often used as purely academic examples. Perhaps the most common (and convenient) way to explain the notion of loop invariant is Euclid’s algorithm to compute the greatest common divisor (gcd) of two numbers (with a structure remarkably similar to that of two_way_max):
[Figure: Euclid’s algorithm for the gcd, with its postcondition and loop invariant]

I have expressed the postcondition using a concept from an assumed Domain Theory for the underlying problem: gcd, the mathematical function that yields the greatest common divisor of two integers. Many specifications I have seen go back to the basics, with something like this (using \\ for integer remainder):

(a \\ Result = 0) ∧ (b \\ Result = 0) ∧ (∀ i: N | (a \\ i = 0) ∧ (b \\ i = 0) ⇒ i ≤ Result)

This is indeed the definition of what it means for Result to be the gcd of a and b (it divides a, it divides b, and is greater than any other integer that also has these two properties). But it makes no sense to include such a detailed mathematical property in the specification of a program element. It belongs in the Domain Theory, where it will serve as the definition of a function gcd, which we can then use directly in the specification of the program.
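
In Eiffel terms, the Domain Theory could introduce gcd along the following lines (a sketch; the quantified “greatest” clause appears as a comment, since it is the mathematical part of the definition):

    gcd (a, b: INTEGER): INTEGER
            -- Greatest common divisor of a and b.
        require
            a > 0
            b > 0
        deferred
        ensure
            divides_a: a \\ Result = 0
            divides_b: b \\ Result = 0
            -- greatest: ∀ i: N | (a \\ i = 0) ∧ (b \\ i = 0) ⇒ i ≤ Result
        end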

Note how the invariant makes the necessity of the Domain Theory approach even clearer: try to express it in basic mathematical form, without using the function gcd. It can be done, but the result is typical of the high complexity-to-usefulness ratio of traditional formal specifications mentioned above. Instead, the invariant that I have included in the program text above says exactly what there is to say, clearly and concisely: at each iteration, the gcd of our two temporary values, i and j, is the result that we are seeking, the gcd of the original values a and b. On exit from the loop, when i and j are equal, their common value is that result.

It is also thanks to the Domain Theory modeling that the verification of the program — consisting of proving that the stated property is indeed invariant — will be so simple: as part of the theory, we should have the two little theorems

i > j > 0  ⇒  gcd (i, j) = gcd (i - j, j)
gcd (i, i) = i

which immediately show the implementation to be correct.
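
For reference, here is one way the routine sketched in the figure above might read (my reconstruction, using the subtraction form of the algorithm; gcd in the assertions denotes the Domain Theory function, not a recursive call):

    euclid (a, b: INTEGER): INTEGER
            -- Greatest common divisor of a and b, by Euclid's algorithm.
        require
            a > 0
            b > 0
        local
            i, j: INTEGER
        do
            from
                i := a
                j := b
            invariant
                positive: i > 0 and j > 0
                same_gcd: gcd (i, j) = gcd (a, b)  -- gcd: the Domain Theory function
            until
                i = j
            loop
                if i > j then
                    i := i - j
                else
                    j := j - i
                end
            end
            Result := i
        ensure
            definition: Result = gcd (a, b)
        end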

Inside of any big, fat, messy, quantifier-ridden specification there is a simple, elegant and clear Domain-Theory-based specification desperately trying to get out. Find it and use it.

9. From Domain Theory to domain library

One of the reasons most people working on program verification have not used the division into levels of discourse described here, with a clear role for developing a Domain Theory, is that they lack the appropriate notational support. Mathematical notation is of course available, but we are talking about programs: a general verification framework cannot resort to a new special notation for every new application domain.

This is one of the places where Eiffel provides a consistent solution, through its seamless approach to integrating programs and specifications in a single notation. Thanks to mechanisms such as deferred classes (classes that describe concepts through detailed specifications without committing to an implementation), Eiffel is as much for specification as for design and implementation; a Domain Theory can be expressed through a set of deferred Eiffel classes, which we may call a domain library. The classes in a domain library should not just be deferred, meaning devoid of implementation; they should in addition describe stateless operations only — queries, not commands — since they are modeling purely mathematical concepts.

An earlier article in this blog [5] outlined the context of our verification work: the EVE project (Eiffel Verification Environment), a practical approach to integrating software verification in the day-to-day practice of modern software development, with the slogan “Verification As a Matter Of Course”. In this project we have applied the idea of Domain Theory by building a domain library covering fundamental concepts of set theory, including functions and relations. This is the Mathematical Model Library (MML) [6, 7], which we use to verify the new data structure library EiffelBase 2 using specifications at the appropriate level of abstraction.

MML is in fact useful for the specification of a wide variety of programs, since almost every application area can benefit from the general concepts of set, subset, relation and such. But to cover a specific application domain, say flight traffic control, MML will generally not suffice; you will need to devise a Domain Theory that mathematically models the target domain, and may express it in the form of a domain library written in the same general spirit as MML: all deferred, stateless, focused on high-level abstractions.
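
As a hint of the flavor, here is a tiny invented fragment of what such a flight-traffic domain library might contain (all names here are hypothetical, chosen only for the example):

    deferred class TRAFFIC_THEORY

    feature -- Domain queries (deferred, stateless)

        minimum_separation: REAL
                -- Smallest acceptable distance between two aircraft.
            deferred
            end

        separation (p1, p2: POSITION): REAL
                -- Distance between the positions of two aircraft.
            deferred
            ensure
                symmetric: Result = separation (p2, p1)
            end

        safely_separated (p1, p2: POSITION): BOOLEAN
                -- Do p1 and p2 respect the minimum separation?
            deferred
            ensure
                definition: Result = (separation (p1, p2) >= minimum_separation)
            end

    end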

It is one of the attractions of Eiffel that you can express such a theory and library in the same notation as the programs that will use it — more precisely in a subset of that notation, since the specification classes do not need the imperative constructs of the language such as instructions and attributes. Then both the development process and the verification use a seamlessly integrated set of notations and techniques, and all use the same tools from a modern IDE, in our case EiffelStudio, for browsing, editing, working with graphical representations, metrics etc.

10. DSL libraries for specifications

A mechanism to express Domain Theories is to a general specification mechanism essentially like a Domain Specific Language (DSL) is to a general programming language: a specialization for a particular domain. Domain libraries make the approach practical by:

  • Embedding the specification language in the programming language.
  • Fundamentally relying on reuse, in the best spirit of object technology.

This approach is in line with the one I presented for handling DSLs in an earlier article of this blog [8] (thanks, by the way, for the many comments received, some of them posted here and some on Facebook and LinkedIn where the post triggered long discussions). It is usually a bad idea to invent a new language for a new application domain. A better solution is to rely on libraries, by taking advantage of the power of object-oriented mechanisms to model (in domain libraries) and implement (for DSLs) the defining features of such a domain, and to make the result widely reusable. The resulting libraries are purely descriptive in the case of a domain library expressing a Domain Theory, and directly usable by programs in the case of a library embodying a DSL, but the goal is the same.

11. A sound and necessary engineering practice

Many ideas superficially look similar to Domain Theory: domain engineering as mentioned above, “domain analysis” as widely discussed in the requirements literature, model-driven development, abstract data type specification… They all start from some of the same observations, but Domain Theory as described in this article is something different: a systematic approach to modeling an arbitrary application domain mathematically, which:

  • Describes the concepts through applicable operations, axioms and (most importantly) theorems.
  • Expresses these elements in an applicative (side-effect free, i.e. equivalent to pure mathematics) subset of the programming language, for direct embedding in program specifications.
  • Relies on the class mechanism to structure the results.
  • Collects the specifications into specification libraries and promotes the reuse of specifications in the same way we promote software reuse.
  • Uses the combination of these techniques to ensure that program specifications are at a high level of abstraction, compatible with the programmers’ view of their software.
  • Promotes a clear and effective verification process.

The core idea is in line with standard engineering practices in disciplines other than software: to build a bridge, a car or a chip you need first to develop a sound model of the future system and its environment, using any useful models developed previously rather than always going back to elementary textbook mathematics.

It seems in fact easier to justify developing a Domain Theory than to justify not doing it. The power of expression and abstraction of our programs has grown by leaps and bounds; it’s time for our specifications to catch up.

References

[1] Dines Bjørner: From Domains to Requirements — The Triptych Approach to Software Engineering, “to be submitted to Springer”, available here.

[2] Kokichi Futatsugi and others: CafeOBJ page, here.

[3] José Meseguer and others: Maude publication page, here.

[4] J. Roger Hindley, J. P. Seldin: Introduction to Combinators and λ-Calculus, Cambridge University Press, 1986.

[5] Verification As a Matter Of Course, earlier article on this blog (March 2010), available here.

[6] Bernd Schoeller, Tobias Widmer and Bertrand Meyer. Making specifications complete through models, in Architecting Systems with Trustworthy Components, eds. Ralf Reussner, Judith Stafford and Clemens Szyperski, Lecture Notes in Computer Science, Springer-Verlag, 2006, pages 48-70, available here.

[7] Nadia Polikarpova, Carlo A. Furia and Bertrand Meyer: Specifying Reusable Components, in VSTTE’10: Verified Software: Theories, Tools and Experiments, Edinburgh, August 2010, Lecture Notes in Computer Science, Springer-Verlag, available here.

[8] Never Design a Language, earlier article on this blog (January 2012), available here.


Aliasing and framing: Saint Petersburg seminar next week

In last Thursday’s session of the seminar, Kokichi Futatsugi’s talk took longer than planned (and it would have been a pity to stop him), so I postponed my own talk on Automatic inference of frame conditions through the alias calculus to next week (Thursday, local date). As usual it will be broadcast live.

Seminar page: here, including the link to follow the webcast.

Time and date: 5 April 2012, 18:00 Saint Petersburg time; you can see the local time at your location here.

Abstract:

Frame specifications, the description of what does not change in a routine call, are one of the most annoying components of verification, in particular for object-oriented software. Ideally frame conditions should be inferred automatically. I will present how the alias calculus, described in recent papers, can address this need.

There may be a second talk, on hybrid systems, by Sergey Velder.


Seminar sessions in Saint Petersburg: CafeOBJ and the frame issue

The Saint Petersburg software engineering seminar has two sessions today (29 March 2012, 18:00 local time; see here for the date and time in your area), broadcast live:

  • By Kokichi Futatsugi from JAIST (Japan): Combining Inference and Search in Verification with CafeOBJ.
  • By me: Automatic inference of frame conditions through the alias calculus.

See details including the link for the live webcast on the seminar page. The page also includes links to video recordings of recent sessions.


A carefully designed Result

 

In the Eiffel user discussion group [1], Ian Joyner recently asked:

A lot of people are now using Result as a variable name for the return value in many languages. I believe this first came from Eiffel, but can’t find proof. Or was it adopted from an earlier language?

Proof I cannot offer, but certainly my recollection is that the mechanism was an original design and not based on any previous language. (Many of Eiffel’s mechanisms were inspired by other languages, which I have always acknowledged as precisely as I could, but this is not one of them. If there is any earlier language with this convention — in which case a reader will certainly tell me — I was and so far am not aware of it.)

The competing conventions are a return instruction, as in C and languages based on it (C++, Java, C#), and Fortran’s practice, also used in Pascal, of using the function name as a variable within the function body. Neither is satisfactory. The return instruction suffers from two deficiencies:

  • It is an extreme form of goto, jumping out of a function from anywhere in its control structure. The rest of the language sticks to one-entry, one-exit structures, as I think all languages should.
  • In most non-trivial cases the return value is not just a simple formula but has to be computed through some algorithm, requiring the declaration of a local variable just to denote that result. In every case the programmer must invent a name for that variable and, in a typed language, include a declaration. This is tedious and suggests that the language should take care of the declaration for the programmer.

The Fortran-Pascal convention does not combine well with recursion (which Fortran for a long time did not support). In the body of the function, an occurrence of the function’s name can denote the result, or it can denote a recursive call; conventions can be defined to remove the ambiguity, but they are messy, especially for a function without arguments: in function f, does the instruction

f := f + 1

add one to the value of the function’s result as computed so far, as it would if f were an ordinary variable, or to the result of calling f recursively?

Another problem with the Fortran-Pascal approach is that in the absence of a language-defined rule for variable initialization a function can return an undefined result, if some path has failed to initialize the corresponding variable.

The Eiffel design addresses these problems. It combines several ideas:

  • No nesting of routines. This condition is essential because without it the name Result would be ambiguous. In all Algol- and Pascal-like languages it was considered really cool to be able to declare routines within routines, without limitation on the depth of nesting. I realized that in an object-oriented language such a mechanism was useless and in fact harmful: a class should be a collection of features — services offered to the rest of the world — and it would be confusing to define features within features. Simula 67 offered such a facility; I wrote an analysis of inter-module relations in Simula, including inheritance and all the mechanisms retained from Algol such as nesting (I am trying to find that document, and if I do I will post it in this blog); my conclusion was that the result was too complicated and that the main culprit was nesting. Requiring classes to be flat structures was, in my opinion, one of the most effective design decisions for Eiffel.
  • Language-defined initialization. Even a passing experience with C and C++ shows that uninitialized variables are one of the major sources of bugs. Eiffel introduced a systematic rule for all variables, including Result, and it is good to see that some subsequent languages such as Java have retained that convention. For a function result, it is common to ignore the default case, relying on the standard initialization, as in if “interesting case” then Result := “interesting value” end, without an else clause (I like this convention, but some people prefer to make all cases explicit).
  • One-entry, one-exit blocks; no goto in overt or covert form (break, continue etc.).
  • Design by Contract mechanisms: postconditions usually need to refer to the result computed by a function.

The convention is then simple: in any function, the language defines a local variable Result for you, of the type that you declared for the function result; you can use it as a normal variable, and the result returned by any particular call will be the final value of the variable on exit from the function body.
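
A trivial example (mine, not from any library) showing the convention at work, including the reliance on language-defined initialization and the availability of Result in the postcondition:

    positive_count (a: ARRAY [INTEGER]): INTEGER
            -- Number of positive entries in a.
        do
            -- No declaration needed for Result; language-defined
            -- initialization has already set it to 0.
            across a as c loop
                if c.item > 0 then
                    Result := Result + 1
                end
            end
        ensure
            bounded: Result >= 0 and Result <= a.count
        end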

The convention has been widely imitated, starting with Delphi and most recently in Microsoft’s “code contracts”, a kind of poor-man’s Design by Contract emulation, achieved through libraries; it requires a Result notation to denote the function result in a postcondition, although this notation is unrelated to the mechanisms in the target languages such as C#. As the example of Eiffel’s design illustrates, a programming language is a delicate construction where all elements should fit together; the Result convention relies on many other essential concepts of the language, and in turn makes them possible.

Reference

[1] Eiffel Software discussion group, here.


New LASER proceedings

Springer has just published in the tutorial sub-series of Lecture Notes in Computer Science a new proceedings volume for the LASER summer school [1]. The five chapters are notes from the 2008, 2009 and 2010 schools (a previous volume [2] covered earlier schools). The themes range over search-based software engineering (Mark Harman and colleagues), replication of software engineering experiments (Natalia Juristo and Omar Gómez), integration of testing and formal analysis (Mauro Pezzè and colleagues), and, in two papers by our ETH group, Is branch coverage a good measure of testing effectiveness (with Yi Wei and Manuel Oriol — answer: not really!) and a formal reference for SCOOP (with Benjamin Morandi and Sebastian Nanz).

The idea of these LASER tutorial books — which are now a tradition, with the volume from the 2011 school currently in preparation — is to collect material from the presentations at the summer school, prepared by the lecturers themselves, sometimes in collaboration with some of the participants. Reading them is not quite as fun as attending the school, but it gives an idea.

The 2012 school is in full preparation, on the theme of “Advanced Languages for Software Engineering” and with once again an exceptional roster of speakers, or should I say an exceptional roster of exceptional speakers: Guido van Rossum (Python), Ivar Jacobson (from UML to Semat), Simon Peyton-Jones (Haskell), Roberto Ierusalimschy (Lua), Martin Odersky (Scala), Andrei Alexandrescu (C++ and D), Erik Meijer (C# and LINQ), plus me on the design and evolution of Eiffel.

The preparation of LASER 2012 is under way, with registration now open [3]; the school will take place from Sept. 2 to Sept. 8 and, like its predecessors, in the wonderful setting on the island of Elba, off the coast of Tuscany, with a very dense technical program but time for enjoying the beach, the amenities of a 4-star hotel and the many treasures of the island. On the other hand not everyone likes Italy, the sun, the Mediterranean etc.; that’s fine too, you can wait for the 2013 proceedings.

References

[1] Bertrand Meyer and Martin Nordio (eds): Empirical Software Engineering and Verification, International Summer Schools LASER 2008-2010, Elba Island, Italy, Revised Tutorial Lectures, Lecture Notes in Computer Science 7007, Springer-Verlag, 2012, see here.

[2] Peter Müller (ed.): Advanced Lectures on Software Engineering, LASER Summer School 2007-2008, Lecture Notes in Computer Science 6029, Springer-Verlag, 2010, see here.

[3] LASER summer school information and registration form, http://se.ethz.ch/laser.


ERC Advanced Investigator Grant: Concurrency Made Easy

In April we will be starting the “Concurrency Made Easy” research project, the result of a just announced Advanced Investigator Grant from the European Research Council. Such ERC grants are awarded to a specific person, rather than a consortium of research organizations as in the usual EU funding scheme. The usual amount, which applies in my case, is 2.5 million euros (currently almost 3.3 million dollars) over five years, on a specific theme. According to the ERC’s own description [1],

ERC Advanced Grants allow exceptional established research leaders of any nationality and any age to pursue ground-breaking, high-risk projects that open new directions in their respective research fields or other domains.

This is the most sought-after research funding instrument of the EU, with a success rate of about 12% [2], out of a group already preselected by the host institutions. What makes ERC Advanced Investigator Grants so coveted is the flexibility of the scheme (no constraints on the topic, light administrative baggage) and the trust that an award implies in a particular researcher and his ability to carry out advanced research.

The name of the CME project clearly signals its ambition: to turn concurrent programming into a normal, unheroic part of programming. Today adding concurrency to a program, usually in the form of multithreading, is very hard, bringing complexity and risks of all kinds. Everyone is telling us that we must rethink programming, retrain programmers and revamp curricula to put the specific reasoning modes of concurrent programming at the center. I don’t think this can work; thinking concurrently is just too hard to become the default mode. Instead, we should adapt programming languages, theories and tools so that programmers can continue to apply the reasoning schemes that have proved so successful in classical programming, especially object-oriented programming with the benefit of Design by Contract.

The starting point is the SCOOP model, to which I started an introduction in an earlier article of this blog [3], with a sequel yet to come. SCOOP is a minimal extension to the O-O framework to support concurrency, yielding very simple (the S in the acronym) solutions to concurrent programming problems. As part of the CME project we plan to develop it in many different directions and establish a sound and effective formal basis.

I have put the project description — the scientific part of the actual proposal text accepted by the ERC — online [4].

In the next few weeks I will be publishing here specific announcements for the positions we are seeking to fill very quickly; they include postdocs, PhD students, and one research engineer. We are looking for candidates with excellent knowledge and practice of concurrency, Eiffel, formal techniques etc. The formal application procedure will be Web-based and is not in place yet but you can contact me if you fit the profile and are interested.

We can defeat the curse: concurrent programming (an obligatory condition of any path towards a successful future for information technology) does not have to be black magic. It can be made simple and efficient. Such is the challenge of the CME project.

References

[1] European Research Council: Advanced Grants, available here.

[2] European Research Council: Press release on 2011 Advanced Investigator Grants, 24 January 2012, available here.

[3] Concurrent Programming is Easy, article from this blog, available here.

[4] CME Advanced Investigator Grant project description, available here.


R&D positions in software engineering

In the past couple of weeks, three colleagues separately sent me announcements of PhD and postdoc positions, asking me to circulate them. This blog is as good a place as any other (as a matter of fact when I asked Oge de Moor if it was appropriate he replied: “Readers of your blog are just the kind of applicants we look for…” — I hope you appreciate the compliment). So here they are:

  • Daniel Jackson, for work on Alloy, see here. (“Our plan now is to take Alloy in new directions, and dramatically improve the usability and scalability of the analyzer. We are looking for someone who is excited by these possibilities and will be deeply involved not only in design and implementation but also in strategic planning. There are also opportunities to co-advise students and participate in research proposals.”)
  • University College London (Anthony Finkelstein), a position as lecturer (similar to assistant professor), see here. The title is “Lecturer in software testing” (which, by the way, seems a bit restrictive, as I am more familiar with an academic culture in which the general academic discipline is set, say software engineering or possibly software verification, rather than a narrow specialty, but I hope the scope can be broadened).
  • Semmle, Oge de Moor’s company, see here.

These are all excellent environments, so if you are very, very good, please consider applying. If you are very, very, very good don’t, as I am soon going to post a whole set of announcements for PhD and postdoc positions at ETH in connection with a large new grant.

 


TOOLS 2012, “The Triumph of Objects”, Prague in May: Call for Workshops

Workshop proposals are invited for TOOLS 2012, The Triumph of Objects (tools.ethz.ch), to be held in Prague May 28 to June 1. TOOLS is a federated set of conferences:

  • TOOLS EUROPE 2012: 50th International Conference on Objects, Models, Components, Patterns.
  • ICMT 2012: 5th International Conference on Model Transformation.
  • Software Composition 2012: 10th International Conference.
  • TAP 2012: 6th International Conference on Tests And Proofs.
  • MSEPT 2012: International Conference on Multicore Software Engineering, Performance, and Tools.

Workshops, which are normally one- or two-day long, provide organizers and participants with an opportunity to exchange opinions, advance ideas, and discuss preliminary results on current topics. The focus can be on in-depth research topics related to the themes of the TOOLS conferences, on best practices, on applications and industrial issues, or on some combination of these.

SUBMISSION GUIDELINES

Submitting a proposal implies the organizers’ commitment to organize and lead the workshop personally if it is accepted. The proposal should include:

  • Workshop title.
  • Names and short bio of the organizers.
  • Proposed duration.
  • Summary of the topics, goals and contents (guideline: 500 words).
  • Brief description of the audience and community to which the workshop is targeted.
  • Plans for publication, if any.
  • Tentative Call for Papers.

Acceptance criteria are:

  • Organizers’ track record and ability to lead a successful workshop.
  • Potential to advance the state of the art.
  • Relevance of topics and contents to the topics of the TOOLS federated conferences.
  • Timeliness and interest to a sufficiently large community.

Please send the proposals to me (Bertrand.Meyer AT inf.ethz.ch), with a Subject header including the words “TOOLS WORKSHOP”. Feel free to contact me if you have any question.

DATES

  • Workshop proposal submission deadline: 17 February 2012.
  • Notification of acceptance or rejection: as promptly as possible and no later than February 24.
  • Workshops: 28 May to 1 June 2012.

 


Never design a language

It is a common occurrence in software development. Someone says: “We should design a language”. The usual context is that some part of the development requires a rich functionality set, and it appears appropriate to provide a flexible solution through a specialized language. As an example, in the development of an airline’s frequent flyer program on which I once worked, the suggestion came to design a “Flyer Award Language”, with instructions appropriate for that application domain: record a trip, redeem an award, provide a statement of available miles and so on. A common term for such notations is DSL, for Domain-Specific Language.

Designing a language in such a context is almost always a bad idea (and I am not sure why I wrote “almost”). Languages are endless objects of discussion, usually on the least important aspects, which are also the most visible and those on which everyone has a strong opinion: concrete syntactic properties. People might pretend otherwise (“let’s not get bogged down on syntax, this is just one possible form”) but syntax is what the discussions will get bogged down to — keywords or symbols, this order or that order of operands, one instruction with several variants vs. several instructions… — at the expense of discussing the fundamental issues of functionality.

Worse yet, even if a language will be part of the solution it is usually just one facet of the solution. As was already explained in detail in [1], any useful functionality set will naturally be useful through several interfaces: a textual notation with concrete syntax may be one of them, but other possible ones include an API (Abstract Program Interface) for use from other software elements, a Graphical User Interface, a web user interface, and yet another for web services (typically WSDL or some other XML or JSON format).

In such cases, starting with a concrete textual language is pretty silly, since it cannot yield the others directly (it would have to be parsed and further analyzed, which does not make sense). Of all the kinds of interface listed, the most fundamental one is the API: it describes the raw functionality, excluding any choice of syntax but including, thanks to contracts, elements of semantics. For example, a class AWARD in our frequent flyer application might include the feature


    redeem_for_upgrade (c: CUSTOMER; f: FLIGHT)
            -- Upgrade c to next class of service on f.
        require
            c /= holder implies holder.allowed_substitute (c)
            f.permitted_for_upgrade (Current)
            c.booked (f)
        ensure
            c.class_of_service (f) = old c.class_of_service (f) + 1

There is of course no implementation as this declaration only specifies an interface, but it says what needs to be said: to redeem the award for an upgrade, the intended customer must be either the holder of the award or an allowed substitute; the flight must be available for an upgrade with the current award (including the availability of enough miles); the intended customer must already be booked on the flight; and the upgrade will be for the next class of service.

These details are the kind of things that need to be discussed and agreed before the API is finalized. Then one can start discussing a textual form (a DSL), a graphical interface, a web services interface. They all consist of relatively simple layers to be superimposed on a solidly defined and precisely specified basis. Once you have that basis, you can have all the fun you like arguing over everyone’s favorite forms of concrete syntax; it cannot hurt the project any more. Having these discussions early, at the expense of the more fundamental issues, is a great danger.

One of the key rules for successful software construction — as for many other ventures of course, especially in science and technology — is to distinguish the essential from the auxiliary, and consequently to devote proper attention to the essential issues while avoiding disputations of auxiliary issues. To define functionality, API is essential; language is auxiliary.

So when should you design a language? Never. Well, hardly ever.

Reference

[1] Bertrand Meyer: Introduction to the Theory of Programming Languages, Prentice Hall, 1990.


Concurrency seminars

Ever more people are realizing that concurrency is at the center of IT challenges for the next decades. Concurrent programming remains as hard as ever; we have put together a one-day seminar that helps understand the concepts and build successful concurrent applications. The sessions for the first few months of 2012 are:

  • Palo Alto (February 15)
  • Zurich (March 2)
  • London (March 22)
  • Paris (May 10)
  • Stockholm (June 15)
  • Seattle (July 20)

and the seminar program is available here.

 

 


Webinar today: the Varieties of Loop Invariants

I did not have time to complete my Monday post this week; it will be for next Monday (title: Never design a language). In the meantime, here is the announcement for today’s Saint Petersburg Software Engineering seminar, which can be followed live at http://sel.ifmo.ru/seminar/live (19:30 Saint Petersburg time, meaning 16:30 Zurich/Paris, 7:30 PST on 12 January 2012), duration about one hour.

I will be talking today; the topic is “The varieties of loop invariants”, reporting on joint work with Carlo Furia and Sergey Velder. The abstract appears below.

A recording of previous talks, starting from those of last week, will soon be available on the seminar page.

 

Abstract

The key practical issue in verifying software is to come up with the right loop invariants. We are performing an extensive analysis of loop invariants in important algorithms across all major areas of computer science, and have developed a taxonomy. I will present some of the results of this ongoing work, performed with Sergey Velder (ITMO) and Carlo Furia (ETH).


Various interviews

Over the past few months I have given a few interviews to Russian news outlets on technology- and software-related issues. Here are the links I have.

In September, Mikhail Saprykin interviewed me [1] for Kommersant (the main Russian business daily) on a question that worries everyone in technology and academia: the brain drain.

In early November, at the SECR conference in Moscow where I gave a keynote [2], Natalia Dubova from Open Systems, the principal applied publication on software issues (which published translations of many of my articles over the years), interviewed me on the theme of software reliability and Eiffel [3].

On the same occasion, Internet University, which recently published the translation [4] of my introductory programming textbook Touch of Class [5], recorded a video conversation [6] between Prof. Vladimir Billig from Tver Technical University and me. Vladimir (pictured here a few weeks later)

[Photo: Vladimir Billig]

is the book’s translator; he had already translated the second edition of Object-Oriented Software Construction.

On December 19 I was interviewed, together with Dmitry Grishin, head of mail.ru — one of the biggest Russian internet companies — by Alexander Belanovskiy at the radio station “Echo Moskvy” in Moscow, for Echonet, the station’s technology program. The interview will air, I was told, on January 15.

On the occasion of a talk I gave on December 19 at the Technical University of Tver, a historic city at the junction of the Volga (appearing on the far right in the picture) and the Tvertsa,

[Photo: Tver, November 2011]

I was interviewed on two separate TV stations (one of them Russia 1); I didn’t get to see the broadcasts, but if anyone finds them on the Web I will be grateful for the links.

[Photo: Tver house]

[Photo: Tver church]

References

[1] Interview by Mikhail Saprykin in Kommersant, 20 September 2011, available here.

[2] Keynote at Software Engineering Conference Russia, available here.

[3] Interview by Natalia Dubova in Otkrytye Systemy (Open Systems), vol. 10, no. 21, December 2011, available here.

[4] Potchustvuj Klass: translation by Vladimir Billig of Touch of Class [5], book page available here.

[5] Bertrand Meyer: Touch of Class: Learning to Program Well, Using Objects and Contracts, Springer Verlag, 2009, book page available here.

[6] Video interview with Vladimir Billig, available here.


Webinars Dec. 29: (1) Model-based contracts (2) Assessing agile methods

The Saint Petersburg Software Engineering seminar (organized jointly by ITMO and SPbSPU universities) takes place every Thursday, normally 18:00-21:00. You can find the program at http://sel.ifmo.ru/seminar/.

Starting with the Dec. 29 seminar, the talks can now be attended remotely. You can follow them live (i.e. starting at 18:00 Saint Petersburg time, 15:00 Zurich/Paris, 9 AM EST) at

http://sel.ifmo.ru/seminar/live

Warning: this is an experimental setup and it may not work perfectly the first time around.

The talks on Dec. 29 are the following (see the seminar page for the abstracts):

Nadia Polikarpova (ETH): API design with strong specifications 18-19
Bertrand Meyer: Agile Methods: The Good, The Hype and The Ugly 19-20



Guest article: funding great research

In a blog article posted in its original version on this blog [1] and in a revised version on the Communications of the ACM blog [2], I emphasized the relevance of incremental research. Recently Mikkel Thorup sent me some interesting comments, which I am publishing here as the first Guest Column of this blog.

References

[1] Bertrand Meyer: One Cheer for Incremental Research, in the present blog, 10 August 2009, available here.

[2] Bertrand Meyer: Long Live Incremental Research, in Communications of the ACM Blog, 13 June 2011, available here.

Guest article by Mikkel Thorup: Funding Great Research

Research foundations want great research projects. However, a while back Bertrand Meyer wrote an interesting blog post: Long Live Incremental Research [2]. With examples he showed that many of the greatest results of research could not possibly have been the projected results of great-sounding project descriptions. His conclusion is that we should drop the high-flying ambitions from project descriptions, and instead support more incremental research proposals, hoping that great stuff will happen on the way. Indeed incremental research is perfect for research projects with predictable deliverables. However, I suggest the opposite conclusion: namely that, for some of the funding, we drop the project description.

The basic idea is that foundations should encourage researchers to look for results far better than those that can reasonably be projected. In particular, researchers should be free to follow their inspiration when they see new exciting opportunities. This is not done by tying researchers to incremental projects. Instead we can sometimes switch to result-based funding, that is, funding based on results already achieved (with emphasis on the more recent past). Such result-based funding is more like a reward for great results, and it offers researchers the perfect incentive to do their very best so as to secure future funding.

Consider a researcher with a history of brilliant ideas taking research in surprising new directions. If we try casting this as a project, the referees will rightly complain: “It is not clear how the applicant will come up with a brilliant idea, nor is it clear what the surprise will be”. With such lack of focus and feasibility, a low project score is expected. If the project description has a predefined weight of, say, 40%, then the overall score will be too low for funding, regardless of the researcher’s established track record of succeeding in unlikely situations. However, research needs great new ideas. Therefore we need some result-based funding so that we can support researchers with a proven talent for generating great new ideas even if we do not quite understand how it will happen.

The above problem is often very real in my field of theoretical computer science. Like in other fields, theoretical research is only interesting if it contains surprises (otherwise it is more like development). A project plan would make sense if the starting point was a surprising idea or approach that it would take years to develop, but in theory, the most exciting ideas are often strikingly simple. When first you have such an idea, you are typically close to done, ready to start writing a paper. Thus, if you have a great idea when you apply for a grant, you will typically be done long before you get the grant. The essence of the research is thus the unpredictable search for powerful ideas and insights. The most appropriate project description is therefore just a description of the importance of the area to be researched and the type of results aimed for. The track record shows which researchers have the talent to succeed.

Dropping the how-part of the project description will greatly increase methodological diversity, allowing researchers to use the strategy that has proved most suitable for their area and their own talent and skills. As a simple example, Bertrand suggested funding incremental research, hoping that great surprising things would turn up on the way. My strategy is the opposite. I try to spend as much time as possible on overly ambitious targets. Most of the time I fail, but I rarely come home empty-handed, for by studying the unknown I nearly always discover something new, sometimes even more interesting than the original target. From the perspective of ambition, I see it as an advantage that I minimize the time spent on easy targets, but foundations seem to prefer that you take a planned path with some guaranteed targets on the way. The point here is not to argue whether one strategy is superior to the other, but rather to embrace the diversity of strategies that may work depending on the area and the individual researcher.

Perhaps more seriously, if a target is hard to achieve, it may be because it requires a crazy approach that would not look reasonable to anyone else, but which may work for a researcher thanks to his special talents and intuition. Indeed I have often been positively surprised seeing how others succeeded using an approach I had myself dismissed.  As a project, such crazy approaches would fail on perceived feasibility, but the point in result based funding is that researchers are free to use whatever approach they find most efficient. Funding is given to those who prove successful. This gives the perfect incentive to do great work, securing future funding.

Result-based funding would also reduce resources needed to evaluate applications. It is very hard for a general panel to evaluate the methodology and success probability of a project.  Moreover, it requires an intimate knowledge of a field to evaluate how big a difference a result would make relative to what is already known. However, handling published results, we know what happened and we can rely on peer-review for the difference it made to the field. All the panel has to do is to evaluate how the successes meet with the objectives of the foundation.

Let us, as an example, take something like the ERC Advanced Investigator Grant, which welcomes high-risk, high-gain research. It would seem that aiming for surprising breakthroughs in an important area would fall well within this scope. Having researchers with proven skills explore the area and follow their inspiration may be the optimal strategy. Uncertainty about what they would find should not be worse than high risk. In fact, based on past performance, it may be safe to assume that they will discover something interesting if not ground-breaking. However, when projects are scored on focus and feasibility, such projects will fail even if their expected return is very high. It has to be possible to get a high overall score for promising research even if standard project parameters like focus and feasibility would be counterproductive. At the end of the day, what we want are results, not project descriptions, so what should determine the overall score is which proposal is expected to yield the greatest results.

Long live great research!

Mikkel Thorup
