LASER 2020 in Elba Island: DevOps, Microservices and more, first week of June

The page for the 2020 LASER summer school (31 May to 7 June) now has the basic elements (some additions still forthcoming) and registration at the early-bird price is open. The topic is DevOps, Microservices and Software Development for the Age of the Web, combining conceptual lectures with contributions from industry by technology leaders from Amazon, Facebook and ServiceNow. The confirmed speakers are:

  • Fabio Casati, ServiceNow and University of Trento, and Kannan Govindarajan from ServiceNow on Taking AI from research to production – at scale.
  • Adrian Cockcroft, Amazon Web Services, on Building and Operating Modern Applications.
  • Elisabetta Di Nitto, Politecnico di Milano.
  • Valérie Issarny, INRIA, on The Web for the age of the IoT.
  • Erik Meijer, Facebook, on Software Development At Scale.
  • Me, on Software from beginning to end: a comprehensive method.

As always, the setting is the incomparable environment of the Hotel del Golfo in Procchio, on Elba Island off the coast of Tuscany, ideal at that time of year (normally good weather, warm but not hot, few tourists). The school is intensive but there is time to enjoy the beach, the hotel’s amenities and the wonderful environment of Elba (wake up your inner Napoleon). The school is fairly small and everyone lives under the same (beautiful) roof, so there is plenty of time for interaction with the speakers and other participants.

About these participants: the school is intended for engineers and managers in industry as well as researchers and PhD students. In fact it’s a mix that one doesn’t find that often, allowing for much cross-learning.

Another way to put it is that this is now the 16th edition of the school (it started in 2004 but we skipped one year), so it cannot be doing everything wrong.

 

Defining and classifying requirements (new publication)

Software engineering has improved a lot in the past couple of decades, but there remains an area where the old doomsday style of starting a software engineering paper (software crisis, everything is rotten…) still fits: requirements engineering. Just see the chasm between textbook advice and the practice of most projects.

I have written on requirements in this blog, including very recently, and will continue in forthcoming installments. For today I  want to point to a recent article [1],  presented at the newly revived TOOLS conference in October. It attempts to bring some order and rigor to the basic definitions in the field.

From the abstract:

Requirements engineering is crucial to software development but lacks a precise definition of its fundamental concepts. Even the basic definitions in the literature and in industry standards are often vague and verbose.

To remedy this situation and provide a solid basis for discussions of requirements, this work provides precise definitions of the fundamental requirements concepts and two systematic classifications: a taxonomy of requirement elements (such as components, goals, constraints…); and a taxonomy of possible relations between these elements (such as “extends”, “excepts”, “belongs”…).

The discussion evaluates the taxonomies on published requirements documents; readers can test the concepts in two online quizzes.

The intended result of this work is to spur new advances in the study and practice of software requirements by clarifying the fundamental concepts.

This version is a first step; we are aware of its limitations and are already revising the definitions and taxonomy. The project is aimed at providing a solid foundation for a delicate area of software engineering and it will take some time to get it completely right. Still, I think the paper as it is already introduces important concepts. I will within the next two weeks write a more detailed blog article summarizing some of them.

References

[1] Bertrand Meyer, Jean-Michel Bruel, Sophie Ebersold, Florian Galinier, Alexandr Naumchev: The Anatomy of Requirements, in TOOLS 51, Software Technology: Methods and Tools, Innopolis, Russia, October 15–17, 2019, pages 10-40, available here (Springer site, paywall) and here (arXiv draft).

What happened to the kilogram? Schaffhausen, 16 December

On December 16 (next Monday), the newly created Schaffhausen Institute of Technology organizes an entire day of events around three (no less) talks by the physics Nobel prize winner and MIT professor Wolfgang Ketterle.

The culmination of the day is a talk by Prof. Ketterle in the evening on “What happened to the kilogram?”. From the abstract:

For 130 years, a cylinder made of a platinum-iridium alloy stored in Saint-Cloud near Paris was the official definition of a kilogram, the basic unit of mass. This all changed on May 20 of this year: a kilo is now defined by a fundamental constant of nature known as the Planck constant, which relates the energy of a photon to its frequency: 6.62607015 times 10⁻³⁴ kilograms times square meters per second. Try that the next time you buy a kilo of asparagus.
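In symbols (my own rendering of the relation just quoted, not part of the abstract): the redefinition fixes the exact value of the Planck constant in the photon energy-frequency relation,

E = h\nu, \qquad h = 6.626\,070\,15 \times 10^{-34}\ \mathrm{kg \cdot m^{2} \cdot s^{-1}}

and since the second and the metre are already defined through the caesium frequency and the speed of light, fixing the numerical value of h pins down the kilogram.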

Sounds complicated? For MIT’s Wolfgang Ketterle, a Nobel Prize winner, “Conceptually, the definition is very simple”.

Simple? Really? Come to Schaffhausen and hear for yourself whether Prof. Ketterle can make the new kilogram crystal-clear to common mortals.

Earlier in the day, he will give a talk in German on new forms of materials that appear at temperatures near absolute zero, complete with demonstrations.

More generally, there is a full set of talks throughout the day about various aspects of advanced physics and computer science, and even a “quantum magician”, plus music and food.

Schaffhausen is about 40 minutes from Zurich (or Zurich airport) by train or car.

Attendance is free but registration is recommended. One can register for the full day or for some events only. See further information and registration form here.

A theorem of software engineering

Some of the folk wisdom going around in software engineering, often cluelessly repeated for decades, is just wrong. It can be particularly damaging when it affects key aspects of software development and is contradicted by solid scientific evidence. The present discussion covers a question that meets both of these conditions: whether it makes sense to add staff to a project to shorten its delivery time.

My aim is to popularize a result that is well known in the software engineering literature, going back to the early work of Barry Boehm [1], and explained with great clarity by Steve McConnell in his 2006 book on software cost estimation [2] under the name “Shortest Possible Schedule”. While an empirical rather than a logical result, I believe it deserves to be called a theorem (McConnell stops short of using the term) because it is as close as we come, in the area of software engineering management, to a universal property, confirmed by numerous experimental studies.

This article contributes no new concept since McConnell’s chapter 20 says all there is to say about the topic;  my aim is simply to make the Shortest Possible Schedule Theorem better known, in particular to practitioners.

The myth about shortening project times begins with an observation that is clearly correct, at least in an extreme form. Everyone understands that if our project has been evaluated, through accepted cost estimation techniques, to require three developers over a year we cannot magically hire 36 people to complete it in one month. Productivity does not always scale up.

But neither does common sense. Too often the conclusion from the preceding trivial observation takes the form of an old saw, “Brooks’ Law”: adding people to a late project delays it further. The explanation is that the newcomers cost more through communication overhead than they bring through actual contributions. While a few other sayings of Brooks’ Mythical Man-Month have stood the test of time, this one has always struck me as describing, rather than any actual law, a definition of bad management. Of course if you keep haplessly throwing people at deadlines you are just going to add communication problems and make things worse. But if you are a competent manager, expanding the team size is one of the tools at your disposal to improve the state of a project, and it would be foolish to deprive yourself of it. A definitive refutation of the supposed law, also by McConnell, was published 20 years ago [3].

For all the criticism it deserves, Brooks’s pronouncement was at least limited in its scope: it addressed the addition of staff to a project that is already late. It is even more wrong to apply it to the more general issue of cost-estimating and staffing software projects, at any stage of their progress. Forty-year-old platitudes have even less weight here. As McConnell’s book shows, cost estimation is no longer a black art. It is not an exact science either, but techniques exist for producing solid estimates.

The Shortest Possible Schedule theorem is one of the most interesting results. Much more interesting than Brooks’s purported law, because it is backed by empirical studies (rather than asking us to believe one person’s pithy pronouncement), and instead of just a general negative view it provides a positive result complemented by a limitation of that result; and both are expressed quantitatively.

Figure 1 gives the general idea of the SPS theorem. General idea only; Figure 2 will provide a more precise view.


Figure 1: General view of the Shortest Possible Schedule theorem.

The  “nominal project” is the result of a cost and schedule estimation yielding the optimum point. The figure and the theorem provide project managers with both a reason to rejoice and a reason to despair:

  • Rejoice: by putting in more money, i.e. more people (in software engineering, project costs are essentially people costs [4]), you can bring the code to fruition faster.
  • Despair: whatever you do, there is a firm limit to the time you can gain: 25%. It seems to be a kind of universal constant of software engineering.

The “despair” part typically gets the most attention at first, since it sets an absolute limit on what money can buy (so to speak) in software: try as hard as you like, you will never get below 75% of the nominal (optimal) value. The “impossible zone” in Figure 1 expresses the fundamental limitation. This negative result is the reasoned and precise modern replacement for the older folk “law”.
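To make the arithmetic concrete (my own illustrative numbers, simply applying the 25% figure):

t_{\min} \;\approx\; 0.75 \times t_{\mathrm{nominal}}, \qquad \text{for example } 0.75 \times 12 \text{ months} = 9 \text{ months};

a project nominally estimated at a year cannot, by staffing alone, be reliably brought in under about nine months.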

The positive part, however, is just as important. A 75%-empty glass is also 25%-full. It may be disappointing for a project manager to realize that no amount of extra manpower will make it possible to guarantee to higher management more than a 25% reduction in time. But it is just as important to know that such a reduction, not at all insignificant, is in fact reachable given the right funding, the right people, the right tools and the right management skills. The last point is critical: money by itself does not suffice, you need management; Brooks’ law, as noted, is mostly an observation of the effects of bad management.

Figure 1 only carries the essential idea, and is not meant to provide precise numerical values. Figure 2, the original figure from McConnell’s book, is. It plots effort against time rather than the reverse but, more importantly, it shows several curves, each corresponding to a published empirical study or cost model surveyed by the book.


Figure 2: Original illustration of the Shortest Possible Schedule
(figure 2-20 of [2], reproduced with the author’s permission)

On the left of the nominal point, the curves show how, according to each study, increased cost leads to decreased time. They differ on the details: how much the project needs to spend, and which maximal reduction it can achieve. But they all agree on the basic Shortest Possible Schedule result: spending can decrease time, and the maximal reduction will not exceed 25%.

The figure also provides an answer, although a disappointing one, to another question that arises naturally. So far this discussion has assumed that time was the critical resource and that we were prepared to spend more to get a product out sooner. But sometimes it is the other way around: the critical resource is cost, or, concretely, the number of developers. Assume that nominal analysis tells us that the project will take four developers for a year and, correspondingly, cost 600K (choose your currency).  We only have a budget of 400K. Can we spend less by hiring fewer developers, accepting that it will take longer?

On that side, right of the nominal point in Figure 2, McConnell’s survey of surveys shows no consensus. Some studies and models do lead to decreased costs, others suggest that with the increase in time the cost will actually increase too. (Here is my interpretation, based on my experience rather than on any systematic study: you can indeed achieve the original goal with a somewhat smaller team over a longer period; but the effect on the final cost can vary. If the new time is t’= t + T and the new team size s’= s – S, t and s being the nominal values, the cost difference is proportional to  Ts – t’S. It can be positive as well as negative depending on the values of the original t and s and the precise effect of reduced team size on project duration.)
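Spelling out that last expression (a sketch, under the simplifying assumption that cost is proportional to team size times duration, with t’ = t + T and s’ = s – S as above):

\mathrm{cost}' - \mathrm{cost} \;\propto\; t's' - ts \;=\; (t+T)(s-S) - ts \;=\; Ts - tS - TS \;=\; Ts - t'S

which is why the sign of the difference depends on how the schedule slip T and the staff reduction S compare with the nominal values t and s.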

The firm result, however, is the left part of the figure. The Shortest Possible Schedule theorem confirms what good project managers know: you can, within limits, shorten delivery times by bringing all hands on deck. The precise version deserves to be widely known.

References and note

[1] Barry W. Boehm: Software Engineering Economics, Prentice Hall, 1981.

[2] Steve McConnell: Software Estimation ― Demystifying the Black Art, Microsoft Press, 2006.

[3] Steve McConnell: Brooks’ Law Repealed, in IEEE Software, vol. 16, no. 6, pp. 6–8, November-December 1999, available here.

[4] This is the accepted view, even though one might wish that the industry paid more attention to investment in tools in addition to people.

A version of this article was first published on the Comm. ACM blog under the title The Shortest Possible Schedule Theorem: Yes, You Can Throw Money at Software Deadlines.

Ten traits of exceptional innovators

Imagine having had coffee, over the years, with each of Euclid, Galileo, Descartes, Marie Curie, Newton, Einstein, Lise Meitner, Planck and de Broglie. For a computer scientist, if we set aside the founding generation (the Turings and von Neumanns), the equivalent is possible. I have had the privilege of meeting and in some cases closely interacting with pioneer scientists, technologists and entrepreneurs, including Nobel, Fields and Turing winners, Silicon-Valley-type founders and such. It is only fair that I should share some of the traits I have observed in them.

Clarification and disclaimer:

  • This discussion is abstract and as a result probably boring because I am not citing anyone by name (apart from a few famous figures, most of whom are dead and none of whom I have met). It would be more concrete and lively if I buttressed my generalities with actual examples, of which I have many. The absence of any name-dropping is a matter of courtesy and respect for people who have interacted with me unguardedly as a colleague, not a journalist preparing a tell-all book. I could of course cite the names for positive anecdotes only, but that would bias the story (see point 4). So, sorry, no names (and I won’t relent even if you ask me privately — mum like a fish).
  • I am looking at truly exceptional people. They are drawn from a more general pool of brilliant, successful scientists and technologists, of which they form only a small subset. Many of their traits also apply to this more general community and to highly successful people in any profession. What interests me is the extra step from brilliant to exceptional. It would not be that difficult to identify fifty outstanding mathematics researchers in, say, 1900, and analyze their psychological traits. The question is: why are some of them Hilbert and Poincaré, and others not?
  • Of course I do not even begin to answer that question. I only offer a few personal remarks.
  • More generally, cargo cult does not work. Emulating every one of the traits listed below will not get you a Nobel prize. You will not turn into a great composer by eating lots of Tournedos Rossini. (Well, you might start looking like the aging Rossini.) This note presents some evidence; it does not present any conclusion, let alone advice. Any consequence is for you to draw, or not.
  • The traits obviously do not universally characterize the population observed. Not all of the people exhibit all of the traits. On the other hand, my impression is that most exhibit most.

1 Idiosyncratic

“Idiosyncratic” is a high-sounding synonym for “diverse,” used here to deflect the ridicule of starting a list of what is common to those people by stating that they are different from each other. The point is important, though, and reassuring. Those people come in all stripes, from the stuffy professor to the sandals-shorts-and-Hawaiian-shirt surfer.  Their ethnic backgrounds vary. And (glad you asked) some are men and some are women.

Consideration of many personality and lifestyle features yields no pattern at all. Some of the people observed are courteous, a delight to deal with, but there are a few jerks too. Some are voluble, some reserved. Some boastful, some modest. Some remain for their full life married to the same person, some have been divorced many times, some are single. Some become CEOs and university presidents, others prefer the quieter life of a pure researcher. Some covet honors, others are mostly driven by the pursuit of knowledge. Some wanted to become very rich and did, others care little about money.  It is amazing to see how many traits appear irrelevant, perhaps reinforcing the value of those that do make a difference.

2 Lucky

In trying to apply a cargo-cult-like recipe, this one would be the hardest to emulate. We all know that Fleming came across penicillin thanks to a petri dish left uncleaned on the window sill; we also know that luck favors only the well-prepared: someone other than Fleming would have grumbled at the dirtiness of the place and thrown the dish into the sink. But I am not just talking about that kind of luck. You have to be at the right place at the right time.

Read the biographies, and you will see that almost always the person happened to study with a professor who just then was struggling with a new problem, or did an internship in a group that had just invented a novel technique, or heard about recent results before everyone else did.

Part of what comes under “luck” is luck in obtaining the right education. Sure, there are a few autodidacts, but most of the top achievers studied in excellent institutions.

Success comes from a combination of nature and nurture. The perfect environment, such as a thriving laboratory or world-class research university, is not enough; but neither is individual brilliance. In most cases it is their combination that produces the catalysis.

3 Smart

Laugh again if you wish, but I do not just mean the obvious observation that those people were clever in what they did. In my experience they are extremely intelligent in other ways too. They often possess deep knowledge beyond their specialties and have interesting conversations.

You approach them because of the fame they gained in one domain, and learn from them about topics far beyond it.

4 Human

At first, the title of this section is another cause for ridicule: what did you expect, extraterrestrials? But “human” here means human in their foibles too. You might expect, if not an extraterrestrial, someone of the oracle-of-Delphi or wizard-on-a-mountain type, who after a half-hour of silence makes a single statement perfect in its concision and exactitude.

Well, no. They are smart, but they say foolish things too. And wrong things. Not only do they say them, they even publish them. (Newton wasted his brilliance on alchemy. Voltaire — who was not a scientist but helped promote science, translating Newton and supporting the work of Madame du Châtelet — wasted his powerful wit mocking the nascent study of paleontology: so-called fossils are just shells left over by picnicking tourists! More recently, a very famous computer scientist wrote a very silly book — of which I once wrote, fearlessly, a very short and very disparaging review.)

So what? It is the conclusion of the discussion that counts, not the meandering path to it, or the occasional hapless excursion into a field where your wisdom fails you. Once you have succeeded, no one will care how many wrong comments you made in the process.

It is fair to note that the people under consideration probably say fewer stupid things than most. (The Erich Kästner ditty from an earlier article applies.) But no human, reassuringly perhaps, is right 100% of the time.

What does set them apart from many people, and takes us back to the previous trait (smart), is that even those who are otherwise vain have no qualms about recognizing mistakes in their previous thinking. They accept the evidence and move on.

5 Diligent

Of two people, one an excellent, top-ranked academic, the other a world-famous pioneer, who is the more likely to answer an email? In my experience, the latter.

Beyond the folk vision of the disheveled, disorganized, absent-minded professor lies the reality of a lifetime of rigor and discipline.

This should not be a surprise. There is inspiration, and there is perspiration.  Think of it as the dual of the  broken-windows theory, or of the judicial view that a defendant who lies in small things probably lies in big things: the other way around, if you do huge tasks well, you probably do small tasks well too.

6 Focused

Along with diligence comes focus, carried over from big matters to small matters. It is the lesser minds that pretend to multiplex. Great scientists, in my experience, do not hack away at their laptops during talks, and they turn off their cellphones. They choose carefully what they do (they are deluged with requests and learn early to say no), but what they agree to do they do. Seriously, attentively, with focus.

A fascinating spectacle is a world-famous guru sitting in the first row at a conference presentation by a beginning Ph.D. student, and taking detailed notes. Or visiting an industrial lab and quizzing a junior engineer about the details of the latest technology.

For someone who in spite of the cargo cult risk is looking for one behavior to clone, this would be it. Study after study has shown that we only delude ourselves in thinking we can multiplex. Top performers understand this. In the seminar room, they are not the ones doing email. If they are there at all, they watch and listen.

7 Eloquent

Top science and technology achievers are communicators. In writing, in speaking, often in both.

This quality is independent of their personal behavior, which can cover the full range from shy to boisterous. It is the quality of being articulate. They know how to convey their results — and often do not mind crossing the line to self-advertising. It is not automatically the case that true value will out: even the most impressive advances need to be pushed to the world.

The alternative is to become Gregor Mendel: he single-handedly discovered the laws of genetics, and was so busy observing the peas in his garden that no one heard about his work until some twenty years after his death. Most of us prefer to get the recognition earlier. (Mendel was a monk, so maybe he believed in an afterlife; then again maybe he, like everyone else, might have enjoyed attracting interest in this world first.)

In computer science it is not surprising that many of the names that stand out are of people who have written seminal books that are a pleasure to read. Many of them are outstanding teachers and speakers as well.

8 Open

Being an excellent communicator does not mean that you insist on talking. The great innovators are excellent listeners too.

Some people keep talking about themselves. They exist in all human groups, but this particular trait is common among scientists, particularly junior scientists, who corner you and cannot stop telling you about their ideas and accomplishments. That phenomenon is understandable, and in part justified by an urge to avoid the Mendel syndrome. But in a conversation involving some less and some more recognized professionals it is often the most accomplished members of the group who talk least. They are eager to learn. They never forget that the greatest insights can start with a casual observation from an improbable source. They know when to talk, and when to shut up and listen.

Openness also means intellectual curiosity, willingness to have your intellectual certainties challenged, focus on the merit of a comment rather than the commenter’s social or academic status, and readiness to learn from disciplines other than your own.

9 Selfish

People who have achieved exceptional results are generally obsessed with the chase and the prey. They are as driven as an icebreaker in the Barents Sea. They have to get through; the end justifies the means; anything in the way is collateral damage.

So it is not surprising, in the case of academics, to hear colleagues from their institutions mumble that X never wanted to do his share, leaving it to others to sit on committees, teach C++ to biology majors and take their turn as department chair. There are notable exceptions, such as the computer architecture pioneer who became provost then president at Stanford before receiving the Turing Award. But you do not achieve breakthroughs by doing what everyone else is doing. When the rest of the crowd is being sociable and chatty at the conference party long into the night, they go back to their hotel to be alert for tomorrow’s session. A famous if extreme case is Andrew Wiles, whom colleagues in the department considered a has-been while he did the minimum necessary to avoid trouble, all along working secretly and obsessively to prove Fermat’s last theorem.

This trait is interesting in light of the soothing discourse in vogue today. Nothing wrong with work-life balance, escaping the rat race, perhaps even changing your research topic every decade (apparently the rule in some research organizations). Sometimes a hands-off, zen-like attitude will succeed where too much obstinacy would get stuck. But let us not fool ourselves: the great innovators never let go of the target.

10 Generous

Yes, selfishness can go with generosity. You obsess over your goals, but it does not mean you forget other people.

Indeed, while there are a few solo artists in the group under observation, a striking feature of the majority is that in addition to their own achievements they led to the creation of entire communities, which often look up to them as gurus. (When I took the comprehensive exam at Stanford, the first question was what the middle initial “E.” of a famous professor stood for. It was a joke question, counting for maybe one point out of a hundred, helpfully meant to defuse students’ tension in preparation for the hard questions that followed. But what I remember is that every fellow student whom I asked afterwards knew the answer. Me too. Such was the personality cult.) The guru effect can lead to funny consequences, as with the famous computer scientist whose disciples you could spot right away in conferences by their sandals and beards (I do not remember how the women coped), carefully patterned after the master’s.

The leader is often good at giving every member of that community flattering personal attention. At a retirement symposium for a famous professor, almost every person I talked to was proud of having developed a long-running, highly personal and of course unique relationship with the honoree. One prestigious computer scientist who died in the 1980s encouraged and supported countless young people in his country; 30 years later, you keep running into academics, engineers and managers who tell you that they owe their career to him.

Some of this community-building can be self-serving and part of a personal strategy for success. There has to be more to it, however. It is not just that community-building will occur naturally as people discover the new ideas: since these ideas are often controversial at first, those who understood their value early band together to defend them and support their inventor. But there is something else as well in my observation: the creators’ sheer, disinterested generosity.

These people are passionate in their quest for discovery and creation and genuinely want to help others. Driven and self-promoting they may be, but the very qualities that led to their achievements — insight, intellectual courage, ability to think beyond accepted ideas — are at the antipodes of pettiness and narrow-mindedness. A world leader cannot expect any significant personal gain from spotting and encouraging a promising undergraduate, telling a first-time conference presenter that her idea is great and worth pushing further, patiently explaining elementary issues to a beginning student, or responding to an unknown correspondent’s emails. And still, as I have observed many times, they do all of this and more, because they are in the business of advancing knowledge.

These are some of the traits I have observed. Maybe there are more but, sorry, I have to go now. The pan is sizzling and I don’t like my tournedos too well-done.

(Originally published on the CACM blog.)

The end of software engineering and the last methodologist

(Reposted from the CACM blog [*].)

Software engineering was never a popular subject. It started out as “programming methodology”, evoking the image of bearded middle-aged men telling you with a Dutch, Swiss-German or Oxford accent to repent and mend your ways. Consumed (to paraphrase Mencken) by the haunting fear that someone, somewhere, might actually enjoy coding.

That was long ago. With a few exceptions, including one mentioned below, to the extent that anyone still studies programming methodology, it’s in the agile world, where the decisive argument is often “I always say…”. (Example from a consultant’s page: “I always tell teams: ‘I’d like a [user] story to be small, to fit in one iteration, but that isn’t always the way.’”) Dijkstra did appeal to gut feeling, but he backed it with strong conceptual arguments.

The field of software engineering, of which programming methodology is today just a small part, has enormously expanded in both depth and width. Conferences such as ICSE and ESEC still attract a good crowd, the journals are buzzing, the researchers are as enthusiastic as ever about their work, but… am I the only one to sense frustration? It is not clear that anyone outside of the community is interested. The world seems to view software engineering as something that everyone in IT knows because we all develop software or manage people who develop software. In the 2017 survey of CS faculty hiring in the U.S., software engineering accounted, in top-100 Ph.D.-granting universities, for 3% of hires! (In schools that stop at the master’s level, the figure is 6%; not insignificant, but not impressive either given that these institutions largely train future software engineers.) From an academic career perspective, the place to go is obviously  “Artificial Intelligence, Data Mining, and Machine Learning”, which in those top-100 universities got 23% of hires.

Nothing against our AI colleagues; I always felt “AI winter” was an over-reaction [1], and they are entitled to their spring. Does it mean software engineering now has to go into a winter of its own? That is crazy. Software engineering is more important than ever. The recent Atlantic  “software apocalypse” article (stronger on problems than solutions) is just the latest alarm-sounding survey. Or, for just one recent example, see the satellite loss in Russia [2] (juicy quote, which you can use the next time you teach a class about the challenges of software testing: this revealed a hidden problem in the algorithm, which was not uncovered in decades of successful launches of the Soyuz-Frigate bundle).

Such cases, by the way, illustrate what I would call the software professor’s dilemma, much more interesting in my opinion than the bizarre ethical brain-teasers (you see what I mean, trolley levers and the like) on which people in philosophy departments spend their days: is it ethical for a professor of software engineering, every morning upon waking up, to go to cnn.com in the hope that a major software-induced disaster has occurred, finally legitimizing the profession? The answer is simple: no, that is not ethical. Still, if you have witnessed the actual state of ordinary software development, it is scary to think about (although not to wish for) all the catastrophes-in-waiting that you suspect are lying out there, just waiting for the right circumstances.

So yes, software engineering is more relevant than ever, and so is programming methodology. (Personal disclosure: I think of myself as the very model of a modern methodologist [3], without a beard or a Dutch accent, but trying to carry, on today’s IT scene, the torch of the seminal work of the 1970s and 80s.)

What counts, though, is not what the world needs; it is what the world believes it needs. The world does not seem to think it needs much software engineering. Even when software causes a catastrophe, we see headlines for a day or two, and then nothing. Radio silence. I have argued to the point of nausea, including at least four times in this blog (five now), for a simple rule that would require a public auditing of any such event; to quote myself: airline transportation did not become safer by accident but by accidents. Such admonitions fall on deaf ears. As another sign of waning interest, many people including me learned much of what they understand of software engineering through the ACM Risks Forum, long a unique source of technical information on software troubles. The Forum still thrives, and still occasionally reports about software engineering issues, but most of the traffic is about privacy and security (with a particular fondness for libertarian rants against any reasonable privacy rule that the EU passes). Important topics indeed, but where do we go for in-depth information about what goes wrong with software?

Yet another case in point is the evolution of programming languages. Language creation is abuzz again with all kinds of fancy new entrants. I can think of one example (TypeScript) in which the driving force is a software engineering goal: making Web programs safer, more scalable and more manageable by bringing some discipline into the JavaScript world. But that is the exception. The arguments for many of the new languages tend to be how clever they are and what expressive new constructs they introduce. Great. We need new ideas. They would be even more convincing if they addressed the old, boring problems of software engineering: correctness, robustness, extendibility, reusability.

None of this makes software engineering less important, or diminishes in the least the passion of those of us who have devoted our careers to the field. But it is time to don our coats and hats: winter is upon us.

Notes

[1] AI was my first love, thanks to Jean-Claude Simon at Polytechnique/Paris VI and John McCarthy at Stanford.

[2] Thanks to Nikolay Shilov for alerting me to this information. The text is in Russian but running it through a Web translation engine (maybe this link will work) will give the essentials.

[3] This time borrowing a phrase from James Noble.

[*] I am reposting these CACM blog articles rather than just putting a link, even though as a software engineer I do not like copy-paste. This is my practice so far, and it might change since it raises obvious criticism, but here are the reasons: (A) The audiences for the two blogs are, as experience shows, largely disjoint. (B) I like this site to contain a record of all my blog articles, regardless of what happens to other sites. (C) I can use my preferred style conventions.

Before I start screaming once again…

… at my would-be coauthors, would someone please tell them, and every non-native-English-speaker-but-aspiring-English-author, to read this? Please, please, please, please, please.

In English the verb “allow” cannot take an infinitive as a complement. Ever. You may not write “my method allows to improve productivity” (even if it’s true, which it probably isn’t, but never mind). Ever. You may write the equivalent in French, German, Russian, Italian and whatever, but not in English. Ever. In English you do not “allow to” do something. Ever. You allow someone or something to do something. Maybe, or maybe not, your method allows its users to improve productivity. That’s correct English. It is also OK to use a gerund [1]: your method allows improving productivity. Actually that sounds clumsy but at least it is grammatically correct.

The reason the gerund does not sound quite right here is that in situations where foreign speakers instinctively think “allow to…” in their mother tongues and transport it directly to English, the native English speaker instinctively comes up with something different. Typically, one of:

  • Allow someone to, using a specific word instead of “someone”. The English language has a concrete slant and favors expressing all details, including some that in other languages remain implicit.
  • Make it possible to:  a bit wordy, but common and convenient, and definitely correct when followed by an infinitive (“my method makes it possible to improve productivity”). We politely leave it unsaid what the “it” is that is being made possible. This turn of phrase is the easiest if you want to remain as close to the original “allow to…” in your native language. Consider “make it possible to” as a mechanical translation of “allow to”. It works.
  • Support something. Remember this word. It is used more widely in English than its typical translations in other languages. Often it fits just where you initially would come up with “allow to”. Your method may support policies for improving productivity.
  • The gerund. It will sound less clumsy if what you are “allowing” is truly a process, and you are using “allow” in its direct sense of giving permission [2], rather than in the more general and weaker sense of supporting. The rules of tennis allow playing in either singles or doubles.
  • Generalizing the gerund, a plain noun (substantive). You can, in fact, allow something. Your methodology allows productivity improvements. Like the gerund, it does not sound as good as the other forms (“support” is better unless there truly is a notion of permission), but it is correct.
  • Or… nothing at all. Paraphrased from a text seen recently: “some techniques only allow to model internal properties, others allow to model external properties too”. So much better (in any language): some techniques only model internal properties, others also cover external ones. Whoever wrote the first variant should not, in the next three years, be allowed anywhere near the word “allow”.

Some people get around the issue by using “allow for doing something”. That usage is acceptable in American English (less so in British English), but by default “allow for” means something else: tolerating some possible variation in an estimate, as in “plan two hours for your drive, allowing for traffic”. As a substitute for “allowing to” this phrase has no advantage over the solutions listed above.

At last count, I had corrected “allow to” in drafts from coworkers, using one of these solutions, approximately 5,843,944,027 times (allowing for a few cases that I might have forgotten). Enough! Please, please, please, please, please, please, please. Make a note of this. It will allow me to live better, it will allow you to avoid my wrath, it will make it possible for us to work together again, it will support a better understanding among the people in the world, it will allow faster refereeing and a better peer review process, it covers all needs, and it still allows for human imperfection.

Notes

[1] Or gerundive, or present participle: a word form resulting from the addition of the suffix “-ing” to a verb stem.

[2] Note that beyond “allow” this discussion also applies to the verb “permit”. You permit someone to do something.

[3] Post-publication, Oscar Nierstrasz mentioned on Facebook that he has a Web page addressing the same point.

Blockchains, bitcoin and distributed trust: LASER school lineup complete

The full lineup of speakers at the 2018 LASER summer school on Software for Blockchains, Bitcoin and Distributed Trust is now ready, with the announcement of a new speaker, Primavera De Filippi from CNRS and Harvard, on social and legal aspects.

The other speakers are Christian Cachin (IBM), Maurice Herlihy (Brown), Christoph Jentzsch (slock.it), me, Emin Gün Sirer (Cornell) and Roger Wattenhofer (ETH).

The school is the 14th in the LASER series and takes place June 2-10, 2018, on the island of Elba in Italy.

Early-fee registration deadline is February 10. The school’s page is here.

Small and big pleasures

(Reproduced from my CACM blog.)

One of the small pleasures of life is to win a technical argument with a graduate student. You feel good, as well you should. It is only human to want to be right. Besides, if you ended up being wrong all or most of the time, you should start questioning your sanity: why are they the students and you the supervisor, rather than the other way around?

One of the big pleasures of life is to lose an argument with a graduate student. Then you have learned something.

LASER summer school on software for robotics: last call for registration

Much of the progress in robotics is due to software advances, and software issues are at the heart of the formidable challenges that remain. The 2017 LASER summer school, held in September in Elba, brings together some of the most prestigious international experts in the area.

The LASER school has established itself as one of the principal forums to discuss advanced software issues. The 2017 school takes place from 9 to 17 September in the idyllic setting of the Hotel del Golfo in Procchio, Elba Island, Italy.

Robotics is progressing at an amazing pace, bringing improvements to almost all areas of human activity. Today’s robotics systems rely ever more fundamentally on complex software, raising difficult issues. The LASER 2017 summer school covers both the current state of robotics software technology and open problems. The lecturers are top international experts with both theoretical contributions and major practical achievements in developing robotics systems.

The LASER school is intended for professionals from industry (engineers and managers) as well as university researchers, including PhD students. Participants learn about the most important software technology advances from the pioneers in the field. The school’s focus is applied, although theory is welcome to establish solid foundations. The format of the school favors extensive interaction between participants and speakers.

We have lined up an impressive roster of speakers from the leading edge of both industry and academia:

Rodolphe Gélin, Aldebaran Robotics
Ashish Kapoor, Microsoft Research
Davide Brugali, University of Bergamo, on Managing software variability in robotic control systems
Nenad Medvidovic, University of Southern California, on Software Architectures of Robotics Systems
Bertrand Meyer, Politecnico di Milano & Innopolis University, on Concurrent Object-Oriented Robotics Software
Issa Nesnas, NASA Jet Propulsion Laboratory, on Experiences from robotic software development for research and planetary flight robots
Hiroshi (“Gitchang”) Okuno, Waseda University & Kyoto University, on Open-Sourced Robot Audition Software HARK: Capabilities and Applications

The school takes place at the magnificent Hotel del Golfo in the Gulf of Procchio, Elba. Along with an intensive scientific program, participants will have time to enjoy the countless natural and cultural riches of this wonderful, history-laden jewel of the Mediterranean.

For more information about the school, the speakers and registration see the LASER site.