Archive for the ‘General technology’ Category.

Why we love computers and software

A 1949 book I just bought* has the following printer’s label, hand-glued on the title page:

[Image: the printer’s correction label]

Because of a change in pagination, you will need to add the digit 9 [actually, the number, not the digit] to references to page numbers between 1 and 114, and the digit 8 to those between 116 and the end of the volume.

Examples: The reference that says “see page 92” should be read as “see page 101”
The reference that says “see page 150” should be read as “see page 158”

*Précis d’histoire de la langue et du vocabulaire français (handbook of the history of French language and vocabulary) by Albert Dauzat, Larousse, Paris, 1949.
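In software terms the printer shipped a patch, to be applied by each reader at run time. The correction is mechanical enough to code up; here is a throwaway sketch (TypeScript; the function name is mine, and note that the label says nothing about old page 115, which the repagination leaves unaccounted for):

```typescript
// The printer's correction as a function. Old page 115 is left
// unmapped: the label gives no rule for it.
function correctedPage(oldPage: number): number {
  if (oldPage >= 1 && oldPage <= 114) return oldPage + 9;
  if (oldPage >= 116) return oldPage + 8;
  throw new Error(`no correction given for page ${oldPage}`);
}

correctedPage(92);  // 101, the label's first example
correctedPage(150); // 158, the second
```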


Things to do to an algorithm

What can you do to or with an algorithm? In other words, what is a good verb to substitute for the hyphen in “— the algorithm”?

You can learn an algorithm. Discovering classical algorithms is a large part of the Bildungsroman of a computer scientist. Sorting algorithms, graph algorithms, parsing algorithms, numerical algorithms, matrix algorithms, graphical algorithms…

You can teach an algorithm. Whether a professor or just a team leader, you explain to others why the obvious solution is not always the right one. As when I saw that someone had implemented the traversal part of a garbage collection scheme (the “mark” of mark-and-sweep) using a recursive algorithm. Recursion is a great tool, but not here: it needs a stack of unpredictable size, and garbage collection, which you trigger when you run out of space, is not precisely the moment to start wildly allocating memory. In comes the Deutsch-Schorr-Waite algorithm, which improbably (as if tightrope-walking) subverts the structure itself to find its way forth and back.
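For the curious, here is the idea in executable form (a sketch in TypeScript; field and variable names are mine, and a real collector would pack the mark and flag bits into the nodes rather than use separate fields):

```typescript
// Schorr-Waite pointer-reversal marking for nodes with two successors.
// Instead of a stack, the path back to the root is stored in temporarily
// reversed child pointers; `flag` records which pointer of a trail node
// is currently reversed.
class Node {
  left: Node | null = null;
  right: Node | null = null;
  marked = false; // set on first visit
  flag = false;   // true if the reversed pointer is `right`
}

function mark(root: Node | null): void {
  let p = root;              // current node
  let t: Node | null = null; // trail: chain of reversed pointers
  if (p === null || p.marked) return;
  p.marked = true;
  while (true) {
    if (p.left !== null && !p.left.marked) {
      const q = p.left;      // descend left, reversing the pointer
      p.left = t; p.flag = false;
      t = p; p = q; p.marked = true;
    } else if (p.right !== null && !p.right.marked) {
      const q = p.right;     // descend right, reversing the pointer
      p.right = t; p.flag = true;
      t = p; p = q; p.marked = true;
    } else {
      // Retreat: climb the trail, restoring pointers as we go.
      while (t !== null && t.flag) {
        const q = t.right; t.right = p; p = t; t = q;
      }
      if (t === null) return;   // back at the root: done
      const q = t.left;         // came down t's left side: restore it,
      t.left = p; p = t; t = q; // then the loop will try t's right side
    }
  }
}
```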

To teach it, you can dance an algorithm. Sounds strange, but Informatics Europe gave its 2013 education award to the “AlgoRhythmics” group at Sapientia University in Romania, which demonstrates algorithms using central-European dances; see their rendering of Merge Sort:

(Their page has more examples. I see that recently they expanded to other kinds of dance and will let you discover binary search as flamenco and backtracking as classical ballet.) More generally you can simulate or animate an algorithm.
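For readers who prefer code to choreography, here is the algorithm the dancers perform, in a standard formulation (my transcription, TypeScript):

```typescript
// Merge sort: split, sort each half recursively, merge the sorted halves.
function mergeSort(a: number[]): number[] {
  if (a.length <= 1) return a;
  const mid = Math.floor(a.length / 2);
  const left = mergeSort(a.slice(0, mid));
  const right = mergeSort(a.slice(mid));
  // Merge the two sorted halves, always taking the smaller front element.
  const merged: number[] = [];
  let i = 0, j = 0;
  while (i < left.length && j < right.length)
    merged.push(left[i] <= right[j] ? left[i++] : right[j++]);
  return merged.concat(left.slice(i), right.slice(j));
}
```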

You can admire an algorithm. Many indeed are a source of wonder. The inner beauty of topological sort, Levenshtein or AVL can leave no one indifferent.

You can improve an algorithm. At least you can try.

You can invent an algorithm. Small or large, ambitious or mundane, but not imagined yet by anyone. Devising a new algorithm is a sort of rite of passage in our profession. If it does prove useful and elegant, you’ll get a real kick (trust me). Then you can publish the algorithm.

You can prove an algorithm, that is to say, mathematically establish its correctness. It is indeed increasingly unreasonable to publish an algorithm without correctness arguments. Maybe I have an excuse here to advertise an article that examines important algorithms across a wide variety of fields and showcases their main claim to correctness: their loop invariants.
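To give the flavor on a small example (mine, not taken from that article): the invariant of a binary search is precisely what carries its correctness argument.

```typescript
// Binary search over a sorted (ascending) array, with its correctness
// apparatus stated as comments.
function binarySearch(a: number[], x: number): number {
  let low = 0;
  let high = a.length; // the slice still to search is a[low..high)
  // Invariant: if x occurs in a at all, it occurs in a[low..high).
  // Variant: high - low decreases at each iteration, so the loop terminates.
  while (low < high) {
    const mid = low + Math.floor((high - low) / 2);
    if (a[mid] < x) low = mid + 1;   // x cannot be in a[low..mid]
    else if (a[mid] > x) high = mid; // x cannot be in a[mid..high)
    else return mid;                 // found
  }
  return -1; // empty slice plus invariant: x does not occur in a
}
```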

You can implement an algorithm. That is much of what we do in software engineering, even if as an OO guy I would immediately add “as part of the associated data structure”.

Of late, algorithms have come to be associated with yet another verb; one that I would definitely not have envisioned when first learning about algorithms in Knuth (the book) and from Knuth (the man who most certainly does not use foul language).

You can fuck an algorithm.

Thousands of British students marched recently to that slogan:

They were demonstrating against a formula (the Guardian gives the details) that decided on university admissions. The starting point for these events was a ministerial decision to select students not from their grades at exams (“A-level”), which could not take place because of Covid, but instead from their assessed performance in their schools. So far so good, but the authorities decided to calibrate these results with parameters deduced from each school’s past performance. Your grade is no longer your grade: if Jill and Joan both got a B, but Jill’s school has been better at getting students into (say) Oxford in the past, then Jill’s B is worth more than Joan’s B.
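In code-caricature form (a deliberate simplification of my own, certainly not Ofqual’s actual formula; the grades and factors are invented):

```typescript
// The same assessed grade converts to different admission scores
// depending on the school's historical record.
type Grade = 'A' | 'B' | 'C' | 'D';
const basePoints: Record<Grade, number> = { A: 4, B: 3, C: 2, D: 1 };

function adjustedScore(grade: Grade, schoolHistoricalFactor: number): number {
  // schoolHistoricalFactor > 1 for schools with strong past results
  return basePoints[grade] * schoolHistoricalFactor;
}

adjustedScore('B', 1.1); // Jill, from a school with a good Oxford record: ≈ 3.3
adjustedScore('B', 0.9); // Joan, same grade, different school: ≈ 2.7
```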

The outcry was easy to predict, or should have been for a more savvy government. Students want to be judged by their own performance, not by the results of some other students they do not even know. Arguments that the sole concern was a legitimate one (an effort to compensate for possible grade inflation in some schools) ceased to be credible when it came out that on average the algorithm boosted grades from private schools by 4.7 percentage points. No theoretical justification was going to be of much comfort anyway to the many students who had been admitted to the universities of their dreams on the basis of their raw grades, and after the adjustment found themselves rejected.

In the end, “Fuck the Algorithm!” worked. The government withdrew the whole scheme; it tried to lay the blame for the fiasco on the regulatory authority (Ofqual), fooling no one.

These U.K. events of August 2020 will mark a turning point in the relationship between computer science and society. Not for the revelation that our technical choices have human consequences; that is old news, even if we often pretend to ignore it. Not for the use of Information Technology as an excuse; it is as old (“Sorry, the computer does not allow that!”) as IT itself. What “Fuck the Algorithm!” highlights is the massive danger of the current rush to apply machine learning to everything.

As long as we are talking marketing campaigns (“customers who bought the product you just ordered also bought …”) or image recognition, the admiring mood remains appropriate. But now, ever more often, machine learning (usually presented as “Artificial Intelligence” to sound more impressive) gets applied to decisions affecting human lives. In the US, for example, machine-learning algorithms increasingly help judges make decisions, or make the decisions themselves. Following this slippery path is crazy and unethical. It is also dangerous, as the U.K. students’ reaction indicates.

Machine learning does what the name indicates: it reproduces and generalizes the dominant behaviors of the past. The algorithms have no notion of right and wrong; they just learn. When they affect societal issues, the potential for societal disaster is everywhere.
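A toy illustration (data and code mine, not any real system): a predictor that merely generalizes past decisions faithfully reproduces whatever bias those decisions had.

```typescript
// Six historical admission records, biased in favor of school A.
type Application = { school: 'A' | 'B'; admitted: boolean };
const history: Application[] = [
  { school: 'A', admitted: true },  { school: 'A', admitted: true },
  { school: 'A', admitted: false },
  { school: 'B', admitted: false }, { school: 'B', admitted: false },
  { school: 'B', admitted: true },
];

// "Learning": admit if most past applicants from the same school got in.
function predict(school: 'A' | 'B'): boolean {
  const same = history.filter(r => r.school === school);
  return 2 * same.filter(r => r.admitted).length > same.length;
}

predict('A'); // true  -- with no notion of whether the past was right
predict('B'); // false -- the past, faithfully generalized
```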

Amid all the enthusiasm generated by the elegant techniques invented by machine-learning pioneers over the last two decades, one barely encounters any serious reservation. Codes of ethics (from ACM and others) have little to contribute.

We should be careful, though. Either we get our act together and define exacting controls on the use of machine learning for matters affecting people’s fates, or we will see a massive rejection of algorithmic technology, the right parts along with the wrong ones.

The British students of the year 2020’s weird summer will not be the last ones to tell us to fuck the algorithm.

This article was first published in the Communications of the ACM blog.


A novel concept for success in science

No one seems until now [1] to have identified a key element of any scientific article being submitted or revised for publication. It is guaranteed to increase, if not the quality of your articles, at least their chances of publication, which after all is what counts.

We are told to include a “related work” section, but just as important is the unrelated work section. For example:

Unrelated work

The following publications have no attested relevance to the topic of this paper but, as pointed out by an anonymous reviewer [2], they are breathtakingly brilliant: [Meyer 1997], [Meyer 2005], [Meyer et al. 2009], [Al et Meyer 2011]. In addition, having taken a look at the composition of the Editorial Board, we would like to point out the pioneering results introduced by [Meyer 2017] and [Meyer 2019].

This insight is shared with the sole selfless purpose of helping the community, particularly young and aspiring researchers.

Notes

[1] I did find a 2018 Twitter thread started by Arvind Narayanan, with the insightful (if dejected) observation that “‘related work’ sections exclusively cite unrelated work”.

[2] Example only, for the sake of an example, since for my part I actually refuse to be an anonymous reviewer; I always sign my reviews, so if I want to tell the authors “I think you should cite my such-and-such paper here” I can do so without any qualms.


New master program at SIT: Webinar tomorrow

The Schaffhausen Institute of Technology (SIT) is holding a Webinar tomorrow with a set of three talks by: Serguei Beloussov, founder of Acronis and president of SIT; Michael Widenius, CTO of MariaDB and creator of MySQL Server; and Mauro Pezzè, my colleague at SIT, who will present the new master program that we have just announced, combining CS/SE topics with management and marketing courses to train future technology leaders.

The talks are in the form of a Webinar, starting at 9 AM this Tuesday (9 June). You can find all the details on the corresponding SIT page here.


How to protect from the coronavirus

In the current state of the pandemic and for many more months until a vaccine is found, there is exactly one way to fight the coronavirus, protecting yourself and protecting others.

It is not a mask.

It is two masks. You wear a mask, I wear a mask.

Many people still believe that they can only get the virus if an infected person coughs or sneezes on them. This is a tragic myth. Droplets are carried by breath in conversation, by food particles from someone eating near you, or simply by air flowing your way.

Anyone today who goes out without wearing a mask is irresponsible (or suicidal, but that is not an excuse, since he harms others too). Your mask is not enough, though. I must wear one too.

Then we are safe from each other. Remember: we do not have definitive figures, but at least one carrier in five is asymptomatic.

Everything else (and I am not even considering quack solutions and unproven treatments) is pointless. Disinfectant (or, better, soap) helps, but as a complement. Gloves help medical professionals, who know how to use them properly, but for the general public they can do more harm than good: look at people in shops, once they have gloves they touch everything, moving the virus everywhere. Testing will be critical, of course, but here is another sobering statistic: while there are no false positives (if you test positive, you are infected), around 20% of negative tests are wrong (people have the virus, and it is not detected).
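To see why that figure is sobering, a back-of-the-envelope computation (the prevalence is an illustrative assumption of mine, not a real estimate):

```typescript
// If 20% of infected people test negative, what does a negative test mean?
const prevalence = 0.10;    // assumed: 10% of those tested are infected
const falseNegative = 0.20; // an infected person tests negative
const pNegative = prevalence * falseNegative + (1 - prevalence);
const pInfectedGivenNegative = (prevalence * falseNegative) / pNegative;
console.log(pInfectedGivenNegative);
// ≈ 0.022: even after a negative test, about a 2% chance of carrying the virus.
```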

I know: in many places, including some of the most technologically advanced nations on earth, there are no masks to be found. This may be the greatest scandal of the modern era. But in the meantime makeshift masks are an acceptable palliative. There are guides all over the web as to how to make them, and if nothing else is available a tightly bound scarf or equivalent, cleaned thoroughly and regularly, will do.

Wear a mask and tell others to do the same.


Time to live up to the boasting

The decent media is not modest these days. “Democracy Dies in Darkness” says the excellent Washington Post, intimating, if I understand it right, that the only way for the US to avoid dictatorship is that I pay subscription fees. Maybe I would if they just stopped devoting every single one of their articles to King Ubu. La Repubblica tells us that it will “always fight for the defense of freedom of information, for its readers and for all those who have in their hearts the principles of democracy and of civil coexistence.” Beautiful (and behind a paywall).

The epidemic expert Jonathan Quick, interviewed by the Guardian, had this remarkable observation, talking about Covid-19: news tends to be behind paywalls, while fake news is free. The Guardian is in a way the right place to make this comment, since it remains, admirably, free-access with voluntary subscription (and all the same does not seem to be doing too poorly). But everywhere else there has been no change of policy. Look at the New York Times, the Washington Post, Le Monde, Le Figaro, Libération, the Neue Zürcher Zeitung, Tages Anzeiger (“Dieser Abo+ Artikel ist exklusiv für Abonnenten”: this Abo+ article is for subscribers only), La Repubblica or La Stampa: all these reputable press organs, to which we would naturally turn, reserve their more in-depth analyses for subscribers. (The Russian Vedomosti seems to be an exception.)

Granted, every company (except maybe the Washington Post, since I have a feeling I am ordering enough from Amazon already) is entitled to earn money. But not all companies claim that their business model is about saving the world. My dear self-praising press, if you are really as generously public-minded as you say, here is a good way to demonstrate it. People around the world are genuinely worried about the Coronavirus epidemic and eager for serious information, if only to counter rumors and conspiracy theories. They seek credible, validated information that has gone through professional vetting, but many of them cannot afford to subscribe to all the relevant sources.

A few days before and after major elections, outlets such as the NYT and WaPo generally make their political articles free-access. The current health scare is an even more serious occasion.

This is the time for all serious news media around the world to show that their grand declarations of philanthropy are not just words.

We, the readers, should vociferously demand that as a public service these press organs immediately make all Covid-19 news, reports and analyses free-access.


In the scary land of irrational discourse

A chemistry researcher published a paper in Science with two junior collaborators and, a few months later, found flaws and retracted the article.

She commented “I am totally bummed to announce that we have retracted last year’s paper on enzymatic synthesis of beta-lactams” and “it is painful to admit, but important to do so” and “the work has not been reproducible” and “I apologize to all” and “I was a bit busy when this was submitted, and did not do my job well”.

Not very unusual news; this kind of thing happens all the time as part of the normal process of research and publication. (This just in! Scientists are human! They make mistakes once in a while! Full story at 11!)

Perhaps this one is slightly more worthy of notice because the lead author is a Nobel prize winner. Time for some rejoicing (Schadenfreude as it is called in good English) for anyone who is not a Nobel prize winner: haha, you think you are so smart but you mess up too. It never hurts to have an occasional reminder that we should not deify anyone. But hardly prime-time news.

Well, it is prime-time news for Fox News, which devotes a whole article to the matter. OK, I know, Fox News. And yes, it does pain me to include a hyperlink to a foxnews.com page in this otherwise perfectly decent, civilized, family-safe blog. But in fact that particular article is not by itself outrageous. Suspicious, yes: why such a sudden focus on a minor scientific episode in a news source not particularly famous (I hope you admire my gift for euphemism) for its extensive coverage of the frontlines of scientific research? But whatever its ultimate agenda the article itself is factual, not judgmental.

What is striking is the avalanche of reader comments on that page. If you go and take a look at them, be prepared; put on your parka. Reading these comments will be, for many of us, a peek into a completely different world. A world that we vaguely know exists, but do not actually visit.

It is not a nice world to venture into: full of bile, frustration, resentment, jealousy, conspiracy theories, slander, attacks on anyone trying to take a rational approach to issues, with hardly a pleasant or optimistic note ever. It is not a world one wants to visit often, but reading such a page is an eye-opener for anyone who accepts the premises of rational thinking and might believe that they are universally accepted.

“Striking”, I wrote. Scary is a more apposite word. With the kind of nonsense-spouting and science-bashing that appears in countless messages in the comments section of the page, one can fear the worst regarding questions that face our society, for which rational, science-based advice is critical. (Yes, coronavirus, I am looking at you!)

Very few of the comments on the page say the obvious: it is not good to make errors, but errors will occur, and the scientist should be commended for checking further and coming out with the admission that her study had flaws. As far as we know the initiative came from her, spontaneously. It is one of the signs of the healthiness of science that we always question results. We question those of other people (there are plenty of sites, such as pubpeer and forbetterscience, entirely devoted to tracking and debunking flawed research). We also question our own: partly to avoid the humiliation of having someone else report one of our mistakes before we do; but also because of the good scientist’s natural search for intellectual honesty.

Most of the article commenters do not mention this key lesson of the incident: the Nobel prize winner’s integrity. For them, the article retraction demonstrates that… the entire edifice of science is flawed! For example:

She’s a liberal… I thought her being wrong was understood.

Now we need to find an honest Climate Change researcher to admit that their computer models are faulty and much of their “data” is fake.

Integrity! Now if the “scientists” who have fabricated Global Warming/ Climate Change, whatever, “research” would come forward with admissions about their flawed, fallacious “research” we would be golden.

Now if we could get the climate change “scientists” to do the same maybe some credibility could be restored to the field.

and so on ad nauseam. (Not just a figure of speech — reading these comments is truly nauseating.) In reality the retraction demonstrates, or rather illustrates (one example is not a demonstration), the reverse of these assertions: that the scientific process includes its own correction mechanisms. To use a computer scientist’s terminology, it is not fault-free (no scientist ever claimed anything like that) but fault-tolerant.

Of course the reason the Fox News crowd is suddenly so interested in science is not (one imagines) science per se but the science of climate change. Comment after comment uses the article, as illustrated by the above examples, to dismiss the scientific consensus on the reports of the United Nations’ Intergovernmental Panel on Climate Change. In other words: the retraction of one three-author paper on beta-lactams proves that the work of hundreds of scientists producing thousands of articles on climatology over several decades is flawed? The logic of such a deduction is… shaky.

The modern world is based, through technology, on science. To post on the Web their absurd rejections of scientifically established facts, the Fox News readers couldn’t do without relying on mobile phones, mobile networks, software systems, computers and other extraordinary achievements of human intelligence, the result of centuries of patient cumulative application of the same scientific principles and techniques that these posts ridicule. They are stuck in a pre-scientific mindset, dominated by the kind of magical thinking that the founders of modern thought had to overcome between the 16th and 18th century, as brilliantly analyzed by Gaston Bachelard’s Formation of the Scientific Mind.

Somehow they skipped what the rest of us learn in grade school (that two plus two equals four, cause precedes effect and so on). They are many, they vote, they think they are right and the rest of the world is wrong, hold these beliefs very strongly (Dunning-Kruger effect), and put the world at risk.


Call for suggestions: beauty

On April 29 in the early evening at the Schaffhausen Institute of Technology I will give a talk on “The Beauty of Software”, exploring examples of what makes some concepts, algorithms, data structures etc. produce a sense of esthetics. (Full abstract below.) I gave a first version at TOOLS last year but am revising and expanding the talk extensively.

I obviously have my own examples but am interested in more. If you have some that you feel should be considered for inclusion, perhaps because you experienced a “Wow!” effect when you encountered them, please tell me. I am only asking for names or general pointers, not an in-depth analysis (that’s my job). To avoid having my thunder stolen I would prefer that you alert me by email. I will give credit for examples not previously considered.

Thanks!

Abstract of the talk as published:

Scientists often cite the search for beauty as one of their primary guiding forces. Programming and software engineering offer an inexhaustible source of astoundingly beautiful ideas, from strikingly elegant algorithms and data structures to powerful principles of methodology and language design.

Defining beauty is elusive, but true beauty imposes itself in such a way as to remove any doubt. Drawing comparisons from art, literature and other endeavours, he will show a sample of ideas from all walks of software, directly understandable to a wide audience of non-software-experts, offering practical applications in technology that we use daily, and awe-inspiring in their simplicity and elegance.


An annoying practice from another age

When you want to contact academic researchers, particularly computer scientists, you often find their email addresses on their Web pages in a mildly obfuscated form such as “albert dot einstein at princeton dot edu”.

If you try to copy-paste such a pseudo-address into an email client so as to fix it there, you often have to spend some time fighting the email client’s knowledge of what an email address looks like. It can result in errors and bounced mail. Not the world’s worst scandal but an annoying waste of time.

An address written out in that form is a way for the page owner to announce to the cognoscenti: “I am a computer scientist and hence very knowledgeable about the ways of the Internet; I know that spammers run bots to harvest addresses. See how I defeat them.”

So 1995!

Both spam and defenses against it are no longer what they were back then. Anyone who manages to use email effectively is protected, even without knowing it, by spam blockers, which have become quite good. (According to a specialized site, 14.5 billion spam emails are sent every day, so without these protections we would all be drowning in spam, which for most people is not the case.)

As to any spam harvesters who are really interested in computer science researchers, they are most likely able anyway to write a little regular expression analyzer that captures the vast majority of the supposedly obfuscated addresses.
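To make the point concrete, here is the kind of thing a mildly motivated harvester could write in a few minutes (a sketch of my own; real obfuscation patterns vary, but most follow this template):

```typescript
// De-obfuscate "user dot name at host dot tld" forms back into addresses.
const part = String.raw`[\w-]+(?:\s+dot\s+[\w-]+)*`;
const obfuscated = new RegExp(String.raw`(${part})\s+at\s+(${part})`, 'gi');

function harvest(text: string): string[] {
  return [...text.matchAll(obfuscated)].map(
    m => `${m[1]}@${m[2]}`.replace(/\s+dot\s+/gi, '.')
  );
}

harvest('albert dot einstein at princeton dot edu');
// -> ['albert.einstein@princeton.edu']
```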

If you really want strangers to be able to email you without making your address visible to the spammers, and are a CS person, just include in your Web page a few lines of Javascript that, without revealing the email address in the HTML code, will display something like “Here is my email address”, in such a way that a visitor who clicks on Here gets a new-email window with your email address pre-filled. Not very hard — I use this trick on my own home page and am certainly not a Javascript expert.
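In outline, the trick looks like this (a sketch of the idea, not the exact code on my page; the address is a stand-in):

```typescript
// The address never appears assembled in the static HTML;
// it is built only when a visitor actually clicks.
const user = 'albert.einstein'; // stand-in address, split into parts
const host = 'princeton.edu';

const link = document.createElement('a');
link.textContent = 'Here is my email address';
link.href = '#';
link.addEventListener('click', event => {
  event.preventDefault();
  window.location.href = `mailto:${user}@${host}`; // assembled on click only
});
document.body.appendChild(link);
```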

But I suspect that as long as you are prepared to let people email you, even just letting your email address appear in the clear is not going to result in catastrophe. Your organization’s or ISP’s spam filter is protecting you.

Come on. This is 2020. Windows 95 and the OJ Simpson trial are no longer the news of the day.  Time to stop worrying about what no longer matters, and stop bothering people who are just trying to reach you.

Down with corny address obfuscation!



LASER 2020 in Elba Island: DevOps, Microservices and more, first week of June

The page for the 2020 LASER summer school (31 May to 7 June) now has the basic elements (some additions still forthcoming) and registration at the early price is open. The topic is DevOps, Microservices and Software Development for the Age of the Web, with both conceptual lectures and contributions from industry, by technology leaders from Amazon, Facebook and ServiceNow. The confirmed speakers are:

  • Fabio Casati, ServiceNow and University of Trento, and Kannan Govindarajan from ServiceNow on Taking AI from research to production – at scale.
  • Adrian Cockcroft, Amazon Web Services, on Building and Operating Modern Applications.
  • Elisabetta Di Nitto, Politecnico di Milano.
  • Valérie Issarny, INRIA, on The Web for the age of the IoT.
  • Erik Meijer, Facebook, on Software Development At Scale.
  • Me, on Software from beginning to end: a comprehensive method.

As always, the setting is the incomparable Hotel del Golfo in Procchio, on Elba Island off the coast of Tuscany, ideal at that time of year (normally good weather, warm but not hot, few tourists). The school is intensive but there is time to enjoy the beach, the hotel’s amenities and the wonderful environment of Elba (wake up your inner Napoleon). The school has a fairly small size and everyone lives under the same (beautiful) roof, so there is plenty of time for interaction with the speakers and other participants.

About these participants: the school is intended for engineers and managers in industry as well as researchers and PhD students. In fact it’s a mix that one doesn’t find that often, allowing for much cross-learning.

Another way to put it is that this is now the 16th edition of the school (it started in 2004 but we skipped one year), so it cannot be doing everything wrong.


Defining and classifying requirements (new publication)

Software engineering has improved a lot in the past couple of decades, but there remains an area where the old doomsday style of starting a software engineering paper (software crisis, everything is rotten…) still fits: requirements engineering. Just see the chasm between textbook advice and the practice of most projects.

I have written on requirements in this blog, including very recently, and will continue in forthcoming installments. For today I want to point to a recent article [1], presented at the newly revived TOOLS conference in October. It attempts to bring some order and rigor to the basic definitions in the field.

From the abstract:

Requirements engineering is crucial to software development but lacks a precise definition of its fundamental concepts. Even the basic definitions in the literature and in industry standards are often vague and verbose.

To remedy this situation and provide a solid basis for discussions of requirements, this work provides precise definitions of the fundamental requirements concepts and two systematic classifications: a taxonomy of requirement elements (such as components, goals, constraints…); and a taxonomy of possible relations between these elements (such as “extends”, “excepts”, “belongs”…).

The discussion evaluates the taxonomies on published requirements documents; readers can test the concepts in two online quizzes.

The intended result of this work is to spur new advances in the study and practice of software requirements by clarifying the fundamental concepts.
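As a rough idea of what such taxonomies enable in practice (a toy modeling of my own for this post, not the paper’s notation; the element and relation names are quoted from the abstract):

```typescript
// Requirement elements and relations as data, so that a requirements
// document can be checked and queried rather than just read.
type ElementKind = 'component' | 'goal' | 'constraint'; // among others
type RelationKind = 'extends' | 'excepts' | 'belongs';  // among others

interface RequirementElement { id: string; kind: ElementKind; text: string; }
interface RequirementRelation {
  kind: RelationKind;
  source: string; // id of the element the relation starts from
  target: string; // id of the element it points to
}
```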

This version is a first step; we are aware of its limitations and are already revising the definitions and taxonomy. The project is aimed at providing a solid foundation for a delicate area of software engineering and it will take some time to get it completely right. Still, I think the paper as it is already introduces important concepts. I will within the next two weeks write a more detailed blog article summarizing some of them.

References

[1] Bertrand Meyer, Jean-Michel Bruel, Sophie Ebersold, Florian Galinier, Alexandr Naumchev: The Anatomy of Requirements, in TOOLS 51, Software Technology: Methods and Tools, Innopolis, Russia, October 15–17, 2019, pages 10–40, available here (Springer site, paywall) and here (arXiv draft).


What happened to the kilogram? Schaffhausen, 16 December

On December 16 (next Monday), the newly created Schaffhausen Institute of Technology organizes an entire day of events around three (no less) talks by the physics Nobel prize winner and MIT professor Wolfgang Ketterle.

The culmination of the day is a talk by Prof. Ketterle in the evening on “What happened to the kilogram?”. From the abstract:

For 130 years, a cylinder made of a platinum-iridium alloy stored in Saint-Cloud near Paris was the official definition of a kilogram, the basic unit of mass. This all changed on May 20 of this year: a kilo is now defined by a fundamental constant of nature known as the Planck constant, which relates the energy of a photon to its frequency: 6.62607015 × 10⁻³⁴ kilograms times square meters per second. Try that the next time you buy a kilo of asparagus.

Sounds complicated? For MIT’s Wolfgang Ketterle, a Nobel Prize winner, “Conceptually, the definition is very simple”.

Simple? Really? Come to Schaffhausen and hear for yourself whether Prof. Ketterle can make the new kilogram crystal-clear to common mortals.

Earlier in the day, he will give a talk in German on new forms of materials that appear at temperatures near absolute zero, complete with demonstrations.

More generally, there is a full set of talks throughout the day about various aspects of advanced physics and computer science, and even a “quantum magician”, plus music and food.

Schaffhausen is about 40 minutes from Zurich (or Zurich airport) by train or car.

Attendance is free but registration is recommended. One can register for the full day or for some events only. See further information and registration form here.


A theorem of software engineering

Some of the folk wisdom going around in software engineering, often cluelessly repeated for decades, is just wrong. It can be particularly damaging when it affects key aspects of software development and is contradicted by solid scientific evidence. The present discussion covers a question that meets both of these conditions: whether it makes sense to add staff to a project to shorten its delivery time.

My aim is to popularize a result that is well known in the software engineering literature, going back to the early work of Barry Boehm [1], and explained with great clarity by Steve McConnell in his 2006 book on software cost estimation [2] under the name “Shortest Possible Schedule”. While an empirical rather than a logical result, I believe it deserves to be called a theorem (McConnell stops short of using the term) because it is as close as we have in the area of software engineering management to a universal property, confirmed by numerous experimental studies.

This article contributes no new concept since McConnell’s chapter 20 says all there is to say about the topic; my aim is simply to make the Shortest Possible Schedule Theorem better known, in particular to practitioners.

The myth about shortening project times begins with an observation that is clearly correct, at least in an extreme form. Everyone understands that if our project has been evaluated, through accepted cost estimation techniques, to require three developers over a year, we cannot magically hire 36 people to complete it in one month. Productivity does not always scale up.

But neither does common sense. Too often the conclusion from the preceding trivial observation takes the form of an old saw, “Brooks’ Law”: adding people to a late project delays it further. The explanation is that the newcomers cost more through communication overhead than they bring through actual contributions. While a few other sayings of Brooks’ Mythical Man-Month have stood the test of time, this one has always struck me as describing, rather than any actual law, a definition of bad management. Of course if you keep haplessly throwing people at deadlines you are just going to add communication problems and make things worse. But if you are a competent manager expanding the team size is one of the tools at your disposal to improve the state of a project, and it would be foolish to deprive yourself of it. A definitive refutation of the supposed law, also by McConnell, was published 20 years ago [3].

For all the criticism it deserves, Brooks’s pronouncement was at least limited in its scope: it addressed addition of staff to a project that is already late. It is even wronger to apply it to the more general issue of cost-estimating and staffing software projects, at any stage of their progress.  Forty-year-old platitudes have even less weight here. As McConnell’s book shows, cost estimation is no longer a black art. It is not an exact science either, but techniques exist for producing solid estimates.

The Shortest Possible Schedule theorem is one of the most interesting results. Much more interesting than Brooks’s purported law, because it is backed by empirical studies (rather than asking us to believe one person’s pithy pronouncement), and instead of just a general negative view it provides a positive result complemented by a limitation of that result; and both are expressed quantitatively.

Figure 1 gives the general idea of the SPS theorem. General idea only; Figure 2 will provide a more precise view.


Figure 1: General view of the Shortest Possible Schedule theorem.

The “nominal project” is the result of a cost and schedule estimation yielding the optimum point. The figure and the theorem provide project managers with both a reason to rejoice and a reason to despair:

  • Rejoice: by putting in more money, i.e. more people (in software engineering, project costs are essentially people costs [4]), you can bring the code to fruition faster.
  • Despair: whatever you do, there is a firm limit to the time you can gain: 25%. It seems to be a kind of universal constant of software engineering.

The “despair” part typically gets the most attention at first, since it sets an absolute value on how much money can buy (so to speak) in software: try as hard as you like, you will never get below 75% of the nominal (optimal) value. The “impossible zone” in Figure 1 expresses the fundamental limitation. This negative result is the reasoned and precise modern replacement for the older folk “law”.

The positive part, however, is just as important. A 75%-empty glass is also 25%-full. It may be disappointing for a project manager to realize that no amount of extra manpower will make it possible to guarantee to higher management more than a 25% reduction in time. But it is just as important to know that such a reduction, not at all insignificant, is in fact reachable given the right funding, the right people, the right tools and the right management skills. The last point is critical: money by itself does not suffice, you need management; Brooks’ law, as noted, is mostly an observation of the effects of bad management.

Figure 1 only carries the essential idea, and is not meant to provide precise numerical values. Figure 2, the original figure from McConnell’s book, is. It plots effort against time rather than the reverse but, more importantly, it shows several curves, each corresponding to a published empirical study or cost model surveyed by the book.


Figure 2: Original illustration of the Shortest Possible Schedule
(figure 2-20 of [2], reproduced with the author’s permission)

On the left of the nominal point, the curves show how, according to each study, increased cost leads to decreased time. They differ on the details: how much the project needs to spend, and which maximal reduction it can achieve. But they all agree on the basic Shortest Possible Schedule result: spending can decrease time, and the maximal reduction will not exceed 25%.

The figure also provides an answer, although a disappointing one, to another question that arises naturally. So far this discussion has assumed that time was the critical resource and that we were prepared to spend more to get a product out sooner. But sometimes it is the other way around: the critical resource is cost, or, concretely, the number of developers. Assume that nominal analysis tells us that the project will take four developers for a year and, correspondingly, cost 600K (choose your currency).  We only have a budget of 400K. Can we spend less by hiring fewer developers, accepting that it will take longer?

On that side, right of the nominal point in Figure 2, McConnell’s survey of surveys shows no consensus. Some studies and models do lead to decreased costs, others suggest that with the increase in time the cost will actually increase too. (Here is my interpretation, based on my experience rather than on any systematic study: you can indeed achieve the original goal with a somewhat smaller team over a longer period; but the effect on the final cost can vary. If the new time is t’ = t + T and the new team size s’ = s – S, t and s being the nominal values, the cost difference is proportional to Ts – t’S. It can be positive as well as negative depending on the values of the original t and s and the precise effect of reduced team size on project duration.)
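Spelling out the algebra behind that parenthetical claim (taking cost as proportional to time multiplied by team size):

```latex
\mathit{cost}' - \mathit{cost} \;\propto\; t's' - ts
  = (t+T)(s-S) - ts
  = Ts - tS - TS
  = Ts - (t+T)\,S
  = Ts - t'S
```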

The firm result, however, is the left part of the figure. The Shortest Possible Schedule theorem confirms what good project managers know: you can, within limits, shorten delivery times by bringing all hands on deck. The precise version deserves to be widely known.

References and note

[1] Barry W. Boehm: Software Engineering Economics, Prentice Hall, 1981.

[2] Steve McConnell: Software Estimation ― Demystifying the Black Art, Microsoft Press, 2006.

[3] Steve McConnell: Brooks’ Law Repealed, in IEEE Software, vol. 16, no. 6, pp. 6–8, November-December 1999, available here.

[4] This is the accepted view, even though one might wish that the industry paid more attention to investment in tools in addition to people.

A version of this article was first published on the Comm. ACM blog under the title “The Shortest Possible Schedule Theorem: Yes, You Can Throw Money at Software Deadlines”.


Ten traits of exceptional innovators

Imagine having had coffee, over the years, with each of Euclid, Galileo, Descartes, Marie Curie, Newton, Einstein, Lise Meitner, Planck and de Broglie. For a computer scientist, if we set aside the founding generation (the Turings and von Neumanns), the equivalent is possible. I have had the privilege of meeting and in some cases closely interacting with pioneer scientists, technologists and entrepreneurs, including Nobel, Fields and Turing winners, Silicon-Valley-type founders and such. It is only fair that I should share some of the traits I have observed in them.

Clarification and disclaimer:

  • This discussion is abstract and as a result probably boring because I am not citing anyone by name (apart from a few famous figures, most of whom are dead and none of whom I have met). It would be more concrete and lively if I buttressed my generalities by actual examples, of which I have many. The absence of any name-dropping is a matter of courtesy and respect for people who have interacted with me unguardedly as a colleague, not a journalist preparing a tell-all book. I could of course cite the names for positive anecdotes only, but that would bias the story (see point 4). So, sorry, no names (and I won’t relent even if you ask me privately — mum like a fish).
  • I am looking at truly exceptional people. They are drawn from a more general pool of brilliant, successful scientists and technologists, of which they form only a small subset. Many of their traits also apply to this more general community and to highly successful people in any profession. What interests me is the extra step from brilliant to exceptional. It would not be that difficult to identify fifty outstanding mathematics researchers in, say, 1900, and analyze their psychological traits. The question is: why are some of them Hilbert and Poincaré, and others not?
  • Of course I do not even begin to answer that question. I only offer a few personal remarks.
  • More generally, cargo cult does not work. Emulating every one of the traits listed below will not get you a Nobel prize. You will not turn into a great composer by eating lots of Tournedos Rossini. (Well, you might start looking like the aging Rossini.) This note presents some evidence; it does not present any conclusion, let alone advice. Any consequence is for you to draw, or not.
  • The traits obviously do not universally characterize the population observed. Not all of the people exhibit all of the traits. On the other hand, my impression is that most exhibit most.

1 Idiosyncratic

“Idiosyncratic” is a high-sounding synonym for “diverse,” used here to deflect the ridicule of starting a list of what is common to those people by stating that they are different from each other. The point is important, though, and reassuring. Those people come in all stripes, from the stuffy professor to the sandals-shorts-and-Hawaiian-shirt surfer.  Their ethnic backgrounds vary. And (glad you asked) some are men and some are women.

Consideration of many personality and lifestyle features yields no pattern at all. Some of the people observed are courteous, a delight to deal with, but there are a few jerks too. Some are voluble, some reserved. Some boastful, some modest. Some remain for their full life married to the same person, some have been divorced many times, some are single. Some become CEOs and university presidents, others prefer the quieter life of a pure researcher. Some covet honors, others are mostly driven by the pursuit of knowledge. Some wanted to become very rich and did, others care little about money.  It is amazing to see how many traits appear irrelevant, perhaps reinforcing the value of those that do make a difference.

2 Lucky

In trying to apply a cargo-cult-like recipe, this one would be the hardest to emulate. We all know that Fleming came across penicillin thanks to a petri dish left uncleaned on the window sill; we also know that luck favors only the well-prepared: someone other than Fleming would have grumbled at the dirtiness of the place and thrown the dish into the sink. But I am not just talking about that kind of luck. You have to be at the right place at the right time.

Read the biographies, and you will see that almost always the person happened to study with a professor who just then was struggling with a new problem, or did an internship in a group that had just invented a novel technique, or heard about recent results before everyone else did.

Part of what comes under “luck” is luck in obtaining the right education. Sure, there are a few autodidacts, but most of the top achievers studied in excellent institutions.

Success comes from a combination of nature and nurture. The perfect environment, such as a thriving laboratory or world-class research university, is not enough; but neither is individual brilliance. In most cases it is their combination that produces the catalysis.

3 Smart

Laugh again if you wish, but I do not just mean the obvious observation that those people were clever in what they did. In my experience they are extremely intelligent in other ways too. They often possess deep knowledge beyond their specialties and have interesting conversations.

You approach them because of the fame they gained in one domain, and learn from them about topics far beyond it.

4 Human

At first, the title of this section is another cause for ridicule: what did you expect, extraterrestrials? But “human” here means human in their foibles too. You might expect, if not an extraterrestrial, someone of the oracle-of-Delphi or wizard-on-a-mountain type, who after a half-hour of silence makes a single statement perfect in its concision and exactitude.

Well, no. They are smart, but they say foolish things too. And wrong things. Not only do they say them, they even publish them. (Newton wasted his brilliance on alchemy. Voltaire — who was not a scientist but helped promote science, translating Newton and supporting the work of Madame du Châtelet — wasted his powerful wit to mock the nascent study of paleontology: so-called fossils are just shells left over by picnicking tourists! More recently, a very famous computer scientist wrote a very silly book — of which I once wrote, fearlessly, a very short and very disparaging review.)

So what? It is the conclusion of the discussion that counts, not the meandering path to it, or the occasional hapless excursion into a field where your wisdom fails you. Once you have succeeded, no one will care how many wrong comments you made in the process.

It is fair to note that the people under consideration probably say fewer stupid things than most. (The Erich Kästner ditty from an earlier article applies.) But no human, reassuringly perhaps, is right 100% of the time.

What does set them apart from many people, and takes us back to the previous trait (smart), is that even those who are otherwise vain have no qualms recognizing mistakes in their previous thinking. They accept the evidence and move on.

5 Diligent

Of two people, one an excellent, top-ranked academic, the other a world-famous pioneer, who is the more likely to answer an email? In my experience, the latter.

Beyond the folk vision of the disheveled, disorganized, absent-minded professor lies the reality of a lifetime of rigor and discipline.

This should not be a surprise. There is inspiration, and there is perspiration. Think of it as the dual of the broken-windows theory, or of the judicial view that a defendant who lies in small things probably lies in big things: the other way around, if you do huge tasks well, you probably do small tasks well too.

6 Focused

Along with diligence comes focus, carried over from big matters to small matters. It is the lesser minds that pretend to multiplex. Great scientists, in my experience, do not hack away at their laptops during talks, and they turn off their cellphones. They choose carefully what they do (they are deluged with requests and learn early to say no), but what they agree to do, they do. Seriously, attentively, with focus.

A fascinating spectacle is a world-famous guru sitting in the first row at a conference presentation by a beginning Ph.D. student, and taking detailed notes. Or visiting an industrial lab and quizzing a junior engineer about the details of the latest technology.

For someone who in spite of the cargo cult risk is looking for one behavior to clone, this would be it. Study after study has shown that we only delude ourselves in thinking we can multiplex. Top performers understand this. In the seminar room, they are not the ones doing email. If they are there at all, they watch and listen.

7 Eloquent

Top science and technology achievers are communicators. In writing, in speaking, often in both.

This quality is independent from their personal behavior, which can cover the full range from shy to boisterous.  It is the quality of being articulate. They know how to convey their results — and often do not mind crossing the line to self-advertising. It is not automatically the case that true value will out: even the most impressive advances need to be pushed to the world.

The alternative is to become Gregor Mendel: he single-handedly discovered the laws of genetics, and was so busy observing the peas in his garden that no one heard about his work until some twenty years after his death. Most of us prefer to get the recognition earlier. (Mendel was a monk, so maybe he believed in an afterlife; then again, maybe he, like everyone else, might have enjoyed attracting interest in this world first.)

In computer science it is not surprising that many of the names that stand out are of people who have written seminal books that are a pleasure to read. Many of them are outstanding teachers and speakers as well.

8 Open

Being an excellent communicator does not mean that you insist on talking. The great innovators are excellent listeners too.

Some people keep talking about themselves. They exist in all human groups, but this particular trait is common among scientists, particularly junior scientists, who corner you and cannot stop telling you about their ideas and accomplishments. That phenomenon is understandable, and in part justified by an urge to avoid the Mendel syndrome. But in a conversation involving some less and some more recognized professionals it is often the most accomplished members of the group who talk least. They are eager to learn. They never forget that the greatest insights can start with a casual observation from an improbable source. They know when to talk, and when to shut up and listen.

Openness also means intellectual curiosity, willingness to have your intellectual certainties challenged, focus on the merit of a comment rather than the commenter’s social or academic status, and readiness to learn from disciplines other than your own.

9 Selfish

People who achieve exceptional results are generally obsessed with the chase and the prey. They are as driven as an icebreaker in the Barents Sea. They have to get through; the end justifies the means; anything in the way is collateral damage.

So it is not surprising, in the case of academics, to hear colleagues from their institutions mumble that X never wanted to do his share, leaving it to others to sit in committees, teach C++ to biology majors and take their turn as department chair. There are notable exceptions, such as the computer architecture pioneer who became provost then president at Stanford before receiving the Turing Award. But you do not achieve breakthroughs by doing what everyone else is doing. When the rest of the crowd is being sociable and chatty at the conference party long into the night, they go back to their hotel to be alert for tomorrow’s session. A famous if extreme case is Andrew Wiles, whom colleagues in the department considered a has-been while he did the minimum necessary to avoid trouble, all the while working secretly and obsessively to prove Fermat’s last theorem.

This trait is interesting in light of the soothing discourse in vogue today. Nothing wrong with work-life balance, escaping the rat race, perhaps even changing your research topic every decade (apparently the rule in some research organizations). Sometimes a hands-off, zen-like attitude will succeed where too much obstinacy would get stuck. But let us not fool ourselves: the great innovators never let go of the target.

10 Generous

Yes, selfishness can go with generosity. You obsess over your goals, but it does not mean you forget other people.

Indeed, while there are a few solo artists in the group under observation, a striking feature of the majority is that in addition to their own achievements they led to the creation of entire communities, which often look up to them as gurus. (When I took the comprehensive exam at Stanford, the first question was what the middle initial “E.” of a famous professor stood for. It was a joke question, counting for maybe one point out of a hundred, helpfully meant to defuse students’ tension in preparation for the hard questions that followed. But what I remember is that every fellow student whom I asked afterwards knew the answer. Me too. Such was the personality cult.) The guru effect can lead to funny consequences, as with the famous computer scientist whose disciples you could spot right away in conferences by their sandals and beards (I do not remember how the women coped), carefully patterned after the master’s.

The leader is often good at giving every member of that community flattering personal attention. In a retirement symposium for a famous professor, almost every person I talked to was proud of having developed a long-running, highly personal and of course unique relationship with the honoree. One prestigious computer scientist who died in the 80s encouraged and supported countless young people in his country; 30 years later, you keep running into academics, engineers and managers who tell you that they owe their career to him.

Some of this community-building can be self-serving and part of a personal strategy for success. There has to be more to it, however. It is not just that community-building will occur naturally as people discover the new ideas: since these ideas are often controversial at first, those who understood their value early band together to defend them and support their inventor. But there is something else as well in my observation: the creators’ sheer, disinterested generosity.

These people are passionate in their quest for discovery and creation and genuinely want to help others. Driven and self-promoting they may be, but the very qualities that led to their achievements — insight, intellectual courage, ability to think beyond accepted ideas — are at the antipodes of pettiness and narrow-mindedness. A world leader cannot expect any significant personal gain from spotting and encouraging a promising undergraduate, telling a first-time conference presenter that her idea is great and worth pushing further, patiently explaining elementary issues to a beginning student, or responding to an unknown correspondent’s emails. And still, as I have observed many times, they do all of this and more, because they are in the business of advancing knowledge.

These are some of the traits I have observed. Maybe there are more but, sorry, I have to go now. The pan is sizzling and I don’t like my tournedos too well-done.

(Originally published on the CACM blog.)


The end of software engineering and the last methodologist

(Reposted from the CACM blog [*].)

Software engineering was never a popular subject. It started out as “programming methodology”, evoking the image of bearded middle-aged men telling you with a Dutch, Swiss-German or Oxford accent to repent and mend your ways. Consumed (to paraphrase Mark Twain) by the haunting fear that someone, somewhere, might actually enjoy coding.

That was long ago. With a few exceptions including one mentioned below, to the extent that anyone still studies programming methodology, it’s in the agile world, where the decisive argument is often “I always say…”. (Example from a consultant’s page:  “I always tell teams: `I’d like a [user] story to be small, to fit in one iteration but that isn’t always the way.’“) Dijkstra did appeal to gut feeling but he backed it through strong conceptual arguments.

The field of software engineering, of which programming methodology is today just a small part, has enormously expanded in both depth and breadth. Conferences such as ICSE and ESEC still attract a good crowd, the journals are buzzing, the researchers are as enthusiastic as ever about their work, but… am I the only one to sense frustration? It is not clear that anyone outside of the community is interested. The world seems to view software engineering as something that everyone in IT knows because we all develop software or manage people who develop software. In the 2017 survey of CS faculty hiring in the U.S., software engineering accounted, in top-100 Ph.D.-granting universities, for 3% of hires! (In schools that stop at the master’s level, the figure is 6%; not insignificant, but not impressive either given that these institutions largely train future software engineers.) From an academic career perspective, the place to go is obviously “Artificial Intelligence, Data Mining, and Machine Learning”, which in those top-100 universities got 23% of hires.

Nothing against our AI colleagues; I always felt “AI winter” was an over-reaction [1], and they are entitled to their spring. Does it mean software engineering now has to go into a winter of its own? That is crazy. Software engineering is more important than ever. The recent Atlantic  “software apocalypse” article (stronger on problems than solutions) is just the latest alarm-sounding survey. Or, for just one recent example, see the satellite loss in Russia [2] (juicy quote, which you can use the next time you teach a class about the challenges of software testing: this revealed a hidden problem in the algorithm, which was not uncovered in decades of successful launches of the Soyuz-Frigate bundle).

Such cases, by the way, illustrate what I would call the software professor’s dilemma, much more interesting in my opinion than the bizarre ethical brain-teasers (you see what I mean, trolley levers and the like) on which people in philosophy departments spend their days: is it ethical for a professor of software engineering, every morning upon waking up, to go to cnn.com in the hope that a major software-induced disaster has occurred, finally legitimizing the profession? The answer is simple: no, that is not ethical. Still, if you have witnessed the actual state of ordinary software development, it is scary to think about (although not to wish for) all the catastrophes-in-waiting that you suspect are out there, just waiting for the right circumstances.

So yes, software engineering is more relevant than ever, and so is programming methodology. (Personal disclosure: I think of myself as the very model of a modern methodologist [3], without a beard or a Dutch accent, but trying to carry, on today’s IT scene, the torch of the seminal work of the 1970s and 80s.)

What counts, though, is not what the world needs; it is what the world believes it needs. The world does not seem to think it needs much software engineering. Even when software causes a catastrophe, we see headlines for a day or two, and then nothing. Radio silence. I have argued to the point of nausea, including at least four times in this blog (five now), for a simple rule that would require a public audit of any such event; to quote myself: airline transportation did not become safer by accident but by accidents. Such admonitions fall on deaf ears. As another sign of waning interest, many people including me learned much of what they understand of software engineering through the ACM Risks Forum, long a unique source of technical information on software troubles. The Forum still thrives, and still occasionally reports about software engineering issues, but most of the traffic is about privacy and security (with a particular fondness for libertarian rants against any reasonable privacy rule that the EU passes). Important topics indeed, but where do we go for in-depth information about what goes wrong with software?

Yet another case in point is the evolution of programming languages. Language creation is abuzz again with all kinds of fancy new entrants. I can think of one example (TypeScript) in which the driving force is a software engineering goal: making Web programs safer, more scalable and more manageable by bringing some discipline into the JavaScript world. But that is the exception. The arguments for many of the new languages tend to be how clever they are and what expressive new constructs they introduce. Great. We need new ideas. They would be even more convincing if they addressed the old, boring problems of software engineering: correctness, robustness, extendibility, reusability.

None of this makes software engineering less important, or diminishes in the least the passion of those of us who have devoted our careers to the field. But it is time to don our coats and hats: winter is upon us.

Notes

[1] AI was my first love, thanks to Jean-Claude Simon at Polytechnique/Paris VI and John McCarthy at Stanford.

[2] Thanks to Nikolay Shilov for alerting me to this information. The text is in Russian but running it through a Web translation engine (maybe this link will work) will give the essentials.

[3] This time borrowing a phrase from James Noble.

[*] I am reposting these CACM blog articles rather than just putting a link, even though as a software engineer I do not like copy-paste. This has been my practice so far; it invites obvious criticism and might change, but here are the reasons: (A) The audiences for the two blogs are, as experience shows, largely disjoint. (B) I like this site to contain a record of all my blog articles, regardless of what happens to other sites. (C) I can use my preferred style conventions.


Before I start screaming once again…

… at my would-be coauthors, would someone please tell them, and every non-native-English-speaker-but-aspiring-English-author, to read this? Please, please, please, please, please.

In English the verb “allow” cannot take an infinitive as a complement. Ever. You may not write “my method allows to improve productivity” (even if it’s true, which it probably isn’t, but never mind). Ever. You may write the equivalent in French, German, Russian, Italian and whatever, but not in English. Ever. In English you do not “allow to” do something. Ever. You allow someone or something to do something. Maybe, or maybe not, your method allows its users to improve productivity. That’s correct English. It is also OK to use a gerund [1]: your method allows improving productivity. Actually that sounds clumsy but at least it is grammatically correct.

The reason the gerund does not sound quite right here is that in situations where foreign speakers instinctively think “allow to…” in their mother tongues and transport it directly to English, the native English speaker comes up with something different. Typically, one of:

  • Allow someone to, using a specific word instead of “someone”. The English language has a concrete slant and favors expressing all details, including some that in other languages remain implicit.
  • Make it possible to:  a bit wordy, but common and convenient, and definitely correct when followed by an infinitive (“my method makes it possible to improve productivity”). We politely leave it unsaid what the “it” is that is being made possible. This turn of phrase is the easiest if you want to remain as close to the original “allow to…” in your native language. Consider “make it possible to” as a mechanical translation of “allow to”. It works.
  • Support something. Remember this word. It is used more widely in English than its typical translations in other languages. Often it fits just where you initially would come up with “allow to”. Your method may support policies for improving productivity.
  • The gerund. It will sound less clumsy if what you are “allowing” is truly a process, and you are using “allow” in its direct sense of giving permission [2], rather than in the more general and weaker sense of supporting. The rules of tennis allow playing in either singles or doubles.
  • Generalizing the gerund, a plain noun (substantive). You can, in fact, allow something. Your methodology allows productivity improvements. Like the gerund, it does not sound as good as the other forms (“support” is better unless there truly is a notion of permission), but it is correct.
  • Or… nothing at all. Paraphrased from a text seen recently: “some techniques only allow to model internal properties, others allow to model external properties too”. So much better (in any language): some techniques only model internal properties, others also cover external ones. Whoever wrote the first variant should not, in the next three years, be allowed anywhere near the word “allow”.

Some people get around the issue by using “allow for doing something”. That usage is acceptable in American English (less so in British English), but by default “allow for” means something else: tolerating some possible variation in an estimate, as in “plan two hours for your drive, allowing for traffic”. As a substitute for “allow to” this phrase has no advantage over the solutions listed above.
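
If you correct drafts as often as I do, you can even mechanize the nagging. Here is a toy Python sketch (my own illustration, not any standard tool; all names made up) that flags the offending construct:

    import re

    # Matches "allow to", "allows to", "allowed to", "allowing to" followed
    # by another word. "Allow someone to" and "allow for" pass unmolested.
    # Toy heuristic only: it will also flag legitimate passives such as
    # "you are allowed to enter", so treat its output as hints, not verdicts.
    ALLOW_TO = re.compile(r"\ballow(?:s|ed|ing)?\s+to\s+\w+", re.IGNORECASE)

    def complaints(draft):
        return [match.group(0) for match in ALLOW_TO.finditer(draft)]

    print(complaints("My method allows to improve productivity."))
    # ['allows to improve']  -- rewrite before your coauthor starts screaming
    print(complaints("The rules of tennis allow playing in singles."))
    # []  -- the gerund is fine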

At last count, I had corrected “allow to” in drafts from coworkers, using one of these solutions, approximately 5,843,944,027 times (allowing for a few cases that I might have forgotten). Enough! Please, please, please, please, please, please, please. Make a note of this. It will allow me to live better, it will allow you to avoid my wrath, it will make it possible for us to work together again, it will support a better understanding among the people in the world, it will allow faster refereeing and a better peer review process, it covers all needs, and it still allows for human imperfection.

Notes

[1] Or gerundive, or present participle: a word form resulting from addition of the suffix “-ing” to a verb stem.

[2] Note that beyond “allow” this discussion also applies to the verb “permit”. You permit someone to do something.

[3] Post-publication, Oscar Nierstrasz mentioned on Facebook that he has a Web page addressing the same point.


Blockchains, bitcoin and distributed trust: LASER school lineup complete

The full lineup of speakers at the 2018 LASER summer school on Software for Blockchains, Bitcoin and Distributed Trust is now ready, with the announcement of a new speaker, Primavera De Filippi from CNRS and Harvard on social and legal aspects.

The other speakers are Christian Cachin (IBM), Maurice Herlihy (Brown), Christoph Jentzsch (slock.it), me, Emin Gün Sirer (Cornell) and Roger Wattenhofer (ETH).

The school is the 14th in the LASER series and takes place June 2-10, 2018, on the island of Elba in Italy.

Early-fee registration deadline is February 10. The school’s page is here.


Small and big pleasures

(Reproduced from my CACM blog.)

One of the small pleasures of life is to win a technical argument with a graduate student. You feel good, as well you should. It is only human to want to be right. Besides, if you ended up being wrong all or most of the time, you should start questioning your sanity: why are they the students and you the supervisor, rather than the other way around?

One of the big pleasures of life is to lose an argument with a graduate student. Then you have learned something.


LASER summer school on software for robotics: last call for registration

Much of the progress in robotics is due to software advances, and software issues are at the heart of the formidable challenges that remain. The 2017 LASER summer school, held in September in Elba, brings together some of the most prestigious international experts in the area.

The LASER school has established itself as one of the principal forums to discuss advanced software issues. The 2017 school takes place from 9 to 17 September in the idyllic setting of the Hotel del Golfo in Procchio, Elba Island, Italy.

Robotics is progressing at an amazing pace, bringing improvements to almost all areas of human activity. Today’s robotics systems rely ever more fundamentally on complex software, raising difficult issues. The LASER 2017 summer school covers both the current state of robotics software technology and open problems. The lecturers are top international experts with both theoretical contributions and major practical achievements in developing robotics systems.
The LASER school is intended for professionals from the industry (engineers and managers) as well as university researchers, including PhD students. Participants learn about the most important software technology advances from the pioneers in the field. The school’s focus is applied, although theory is welcome to establish solid foundations. The format of the school favors extensive interaction between participants and speakers.

We have lined up an impressive roster of speakers from the leading edge of both industry and academia:

Rodolphe Gélin, Aldebaran Robotics
Ashish Kapoor, Microsoft Research
Davide Brugali, University of Bergamo, on Managing software variability in robotic control systems
Nenad Medvidovic, University of Southern California, on Software Architectures of Robotics Systems
Bertrand Meyer, Politecnico di Milano & Innopolis University, on Concurrent Object-Oriented Robotics Software
Issa Nesnas, NASA Jet Propulsion Laboratory, on Experiences from robotic software development for research and planetary flight robots
Hiroshi (“Gitchang”) Okuno, Waseda University & Kyoto University, on Open-Sourced Robot Audition Software HARK: Capabilities and Applications

The school takes place at the magnificent Hotel del Golfo in the Gulf of Procchio, Elba. Along with an intensive scientific program, participants will have time to enjoy the countless natural and cultural riches of this wonderful, history-laden jewel of the Mediterranean.

For more information about the school, the speakers and registration see the LASER site.


Feature interactions, continued

Microsoft Office tools offer  features for (1) spelling correction and (2) multi-language support. They are not very good at working together, another example of the perils of feature interaction.

Spelling correction will by default flag misspelled words, underlining them with a red squiggle [screenshot], but in the case of common misspellings known to the tools it will simply correct words without bothering the user. For example if you type “bagage” it will change it silently to “baggage”. This feature can be turned off, and the list of known misspellings can be edited, but most people use the defaults, as I am assuming here.

Multi-language support enables you to “install” several languages. Then when you type a text it will, after a few words, guess the relevant language. From then on it can apply the proper spell checks and corrections.

These features are both useful, and by and large they both work. (The second one is not always reliable. I regularly end up, particularly in Microsoft Word, with entire paragraphs underlined in red because for some inscrutable reason the tool assigns them the wrong language. In such cases you must tell it manually what language it should apply.)

Their combination can lead to funny results. Assume your default language is English but you also have French installed, and you are typing an email in French under Outlook. Your email will say “Le bagage est encore dans l’avion”, meaning “The baggage is still in the plane”. The word “baggage” has one more “g” in English than in French. You start typing  “Le bagage”, but because at that point the tool assumes English it corrects it silently:

[Screenshot: “Le bagage” silently corrected to “Le baggage”]

Next you type “est encore”:

[Screenshot: “Le baggage est encore”, with “est” underlined in red]

The  word “est” (is) gets flagged because (unlike “encore”, here meaning “still”) it does not exist in English. When you add the next word, “dans” (in), the tool is still assuming an English text, so it flags it too:

[Screenshot: “Le baggage est encore dans”, with “est” and “dans” underlined in red]

Now you type “l” and when you add the apostrophe, you can almost hear a “silly me, I see now, that’s French!”. Outlook  switches languages and unflags the previously flagged words, removing the red squiggle under the ones that are correct in French:

[Screenshot: the full sentence, with the flags on “est” and “dans” removed]

But that is too late for “baggage”: the automatic respelling of “bagage”, coming from the default assumption that the text was in English, no longer makes sense now that we know it is in French. So the word gets flagged as a misspelling. You have to go back and correct it yourself. That is frustrating, since you typed the correct spelling in the first place (“bagage”), and it is the tool that messed it up.

This bug hits me often. It is indeed a bug, one that can introduce misspellings into a text when the user typed it correctly. When the tool recognizes that the text is in another language than the one assumed so far, and performs a second pass over the part already analyzed, it should reconsider not only the words previously flagged as misspellings but also those it silently corrected. There is no justification for doing one and not the other.
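
In code, the repair is a symmetric second pass. A minimal sketch of the idea (my reconstruction of what the tool should do, not Outlook’s actual logic; all names hypothetical):

    # Dictionaries reduced to the words of the example.
    LEXICONS = {"en": {"the", "baggage", "is", "still", "in"},
                "fr": {"le", "bagage", "est", "encore", "dans"}}

    def second_pass(decisions, new_lang):
        # decisions: list of (typed_word, action) pairs, where action is
        # "ok", "flagged", or ("corrected", replacement). On a language
        # switch, re-evaluate every earlier decision symmetrically.
        lexicon = LEXICONS[new_lang]
        result = []
        for typed, action in decisions:
            if typed.lower() in lexicon:
                result.append((typed, "ok"))   # unflag AND undo corrections
            else:
                result.append((typed, "flagged"))
        return result

    # Under the English assumption, "bagage" was silently corrected while
    # "est" and "dans" were flagged:
    before = [("Le", "ok"), ("bagage", ("corrected", "baggage")),
              ("est", "flagged"), ("dans", "flagged")]
    print(second_pass(before, "fr"))
    # All four words come back "ok": the silent correction is undone too.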

Among the world’s most momentous problems, this one does not rank very high. It is only a small annoyance, and only a tiny set of people will ever notice it. But it provides another illustration of how tricky it is to go from good individual features to a good overall design.



The perils of feature interaction

One of the most delicate aspects of design is feature interaction. As users, we suffer daily from systems offering features that individually make sense but clash with each other. In my agile book [1] I explained in detail, building on the work of Pamela Zave, why this very problem makes one of the key ideas of agile methods,  the reliance on “user stories” for requirements, worthless and damaging.

A small recent incident reminded me of the perils of feature interaction. I used my Lenovo W540 laptop without power for a short while, then reached a sedentary location and plugged it in. Hence my surprise when, some hours later, it started beeping to alert me that it was running out of battery. The natural reactions — check the outlet and the power cord — had no effect. I found the solution, but just in time: otherwise, including if I had not heard the warning sound, I would have been unable to use the laptop any further. That’s right: I would not have been able to restart the computer at all, even with access to a power outlet, and even though it was perfectly functional and so was its (depleted) battery. The reason is that the problem arose from a software setting, which (catch-22 situation) I could not correct without starting the computer [2].

The only solution would have been to find another, non-depleted battery. That is not a trivial matter if you have traveled with your laptop outside of a metropolis: the W540 has a special battery which ordinary computer shops do not carry [3].

The analysis of what made such a situation possible must start with the list of relevant hardware and software product features.

Hardware:

  • HA. This Lenovo W series includes high-end laptops with high power requirements, which the typical 65-watt airplane power jack does not satisfy.
  • HB. With models prior to the W540, if you tried to connect a running laptop to the power supply in an airplane, it would not charge, and the power indicator would start flickering.  But you could still charge it if you switched it off.
  • HC. The W540 effectively requires 135 watts and will not take power from a 65-watt power source under any circumstances.

Software:

  • SA. The operating system (this discussion assumes Windows) directly reflects HC by physically disabling charging if the laptop is in the “Airplane” power mode.
  • SB. If you disable wireless, the operating system automatically goes into the “Airplane” power mode.
  • SC. In the “Airplane” power mode, the laptop, whether or not connected through a charger to a power outlet of any wattage, will not charge. The charging function is just disabled.
  • SD. One can edit power modes to change parameters, such as time to automatic shutoff, but the no-charging property in Airplane mode is not editable and not even mentioned in the corresponding UI dialog. It seems to be a behind-the-scenes property magically attached to the power-mode name “Airplane”.
  • SE. There is a function key for disabling wireless: F8. As a consequence of SB it also has the effect of switching to “Airplane” mode.
  • SF. Next to F8 on the keyboard is F7.
  • SG. F7 serves to display the screen content on another monitor (Windows calls it a “projector”). F7 offers a cyclic set of choices: laptop only, laptop plus monitor etc.
  • SH. In the old days (like five years ago), such function keys setting important operating system parameters on laptops used to be activated only if you held them together with a special key labeled “Fn”. For some reason (maybe the requirement was considered too complicated for ordinary computer users) the default mode on Lenovo laptops does not use the “Fn” key anymore: you just press the desired key, such as F7 or F8.
  • SI. You can revert to the old mode, requiring pressing “Fn”, by going into the BIOS and performing some not-absolutely-trivial steps, making this possibility the preserve of techies. (Helpfully, this earlier style is called “Legacy mode”, as a way to remind you that you are an old-timer, probably barely graduated from MS-DOS and still using obsolete conventions. In reality, the legacy mode is the right one to use, whether for techies or novices: it is all too easy to hit a function key by mistake and get totally unexpected results. The novice, not the techie, is the one who will be completely confused and panicked as a result. The first thing I do with a new laptop is to go to the BIOS and set legacy mode.)

By now you have guessed what happened in my case, especially once you know that I had connected the laptop to a large monitor and had some trouble getting that display to work. In the process I hit Fn-F7 (feature SG) several times.  I must have mistakenly (SF) pressed F8 instead of F7 at some point. Normally, Legacy mode (SI) should have made me immune to the effects of hitting a function key by mistake, but I did use the neighboring key F7 for another purpose. Hitting F8 disabled wireless (SE) and switched on Airplane power mode (SB). At that point the laptop, while plugged in correctly, stopped charging (SC, SD).
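
The causal chain is compact enough to express as a toy model. A sketch (mine, not Lenovo’s or Windows’ code; the feature labels refer to the list above):

    state = {"wireless": True, "power_mode": "Power Saver",
             "plugged_in": True, "charging_enabled": True}

    def press_f8(state):
        state["wireless"] = False              # SE: one keystroke, no Fn (SH)
        state["power_mode"] = "Airplane"       # SB: silent side effect
        if state["power_mode"] == "Airplane":
            state["charging_enabled"] = False  # SC, SD: hidden, not editable
        return state

    press_f8(state)  # meant F7, the external-display key; hit its neighbor (SF)
    assert state["plugged_in"] and not state["charging_enabled"]
    # Plugged into the wall, yet quietly running down the battery.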

How did I find out? Since I was looking for a hardware problem I could have missed the real cause entirely and ended up with a seemingly dead laptop. Fortunately I opened the Power Options dialog to see what it said about the battery. I noticed that of the two listed power plans the active one was not “Power Saver”, to which I am used, but “Airplane”. I did not immediately pay attention to that setting; since I had not used the laptop for a while I just thought that maybe the last time around I had switched on “Airplane”, even though that made little sense since I was not even aware of the existence of that option. After trying everything else, though, I came back to that intriguing setting, changed to the more usual “Power Saver”, and the computer started to charge again. I was lucky to have a few percent of battery still left at that point.

Afterwards I found a relevant discussion thread on a Lenovo user forum.

As is often the case in such feature-interaction mishaps, most of the features make sense individually [4]. What causes trouble is some unforeseen combination of features.

There is no sure way to avoid such trouble, but there is a sure way to cause it: design a system feature by feature, as with user stories in agile development. The system must do this and it must do that. Oh, by the way, it must also do that. And that. User stories have one advantage: everyone understands them. But that is also their limitation. Good requirements and design require professionals who can see the whole beyond the parts.

A pernicious side of this situation is that many people believe that use cases and user stories are part of object-oriented analysis, whereas the OO approach to requirements and design is the reverse: rise above individual examples to uncover the fundamental abstractions.

As to my laptop, it is doing well, thanks. And I will be careful with function keys.

Reference and notes

[1] Bertrand Meyer: Agile! The Good, the Hype and the Ugly, Springer, 2014,  Amazon page: here, book page: here. A description of the book appeared here on this blog at the time of publication.

[2] Caveat: I have not actually witnessed this state in which a plugged-in laptop will not restart. The reason is simply that I do not have an alternate battery at the moment so I cannot perform the experiment with the almost certain result of losing the use of my laptop. I will confirm the behavior as soon as I have access to a spare battery.

[3] It has been my systematic experience over the past decade and a half that Lenovo seems to make a point, every couple of years, to introduce new models with incompatible batteries and docking stations. (They are also ever more incredibly bulky, with the one for the W540 almost as heavy as the laptop itself. On the other hand the laptops are good, otherwise I would not be bothering with them.)

[4] One exception here is feature SB: switching wireless off does not necessarily mean you want to select a specific power mode! It is a manifestation of the common syndrome of software tools that think they are smarter than you, and are not. Another exception is SE: to let a simple key press change fundamental system behavior is to court disaster. But I had protected myself by using legacy mode and was hit anyway.


The mythical Brooks law

(First published on the CACM blog.)

A book by Laurent Bossavit [1] lists what he calls “leprechauns” of software engineering: pearls of conventional wisdom that do not necessarily survive objective analysis. Whether or not we agree with him on every specific example, his insights are fruitful and the general approach commendable: it is healthy to question revered truths.

A revered truth not cited in his book but worth questioning is “Brooks’ Law” from The Mythical Man-Month [2]. Disclosure: I never cared much for that book, even when I read it at the time of its first publication. I know it is supposed to be a font of wisdom, but with one exception (the “second-system effect”, which actually contradicts some of the book’s other precepts) I find its advice either trivial or wrong. For those readers still with me after this admission of sacrilege, one of the most quoted pronouncements is the modestly titled “Brooks’ Law” according to which adding manpower to a late project makes it later.

Like many unwarranted generalizations, this supposed law can hold in special cases, particularly at the extremes: you cannot do with thirty programmers in one day what one programmer would do in a month. That’s why, like many urban legends, it may sound right at first. But an extreme example is not a general argument. Applied in meaningful contexts, the law is only valid as a description of bad project management. If you just add people without adapting the organization accordingly, you will run into disaster. True, but not a momentous discovery.

The meaningful observation is that when a team’s size grows, communication and collaboration issues grow too, and the manager must put in place the appropriate mechanisms for communication and collaboration. Also not a strikingly original idea. Good managers know how to set up these mechanisms. Such an ability is almost the definition of  “good manager”: the good manager is the one to whom Brooks’ Law does not apply. Anyone with experience in the software industry has seen, along with disasters, cases in which a good manager was able to turn around a failing project by, among other techniques, adding people. The tools and methods of modern software engineering and modern project management are of great help in such an effort. Pithy, simplistic, superficial generalizations are not.
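
The underlying arithmetic is easy to check. A back-of-the-envelope sketch (illustrative numbers only): n people working with no structure have n(n-1)/2 potential communication channels; a manager who splits them into teams with designated leads replaces most of those channels with a hierarchy:

    def flat_channels(n):
        return n * (n - 1) // 2

    def structured_channels(n, team_size):
        teams = n // team_size                     # assume exact division
        within = teams * flat_channels(team_size)  # channels inside each team
        across = flat_channels(teams)              # lead-to-lead channels
        return within + across

    print(flat_channels(30))           # 435: thirty people, no structure
    print(structured_channels(30, 6))  # 85: five teams of six, leads coordinating

The explosion is real, but so is the remedy; which one prevails is a property of the management, not a law of nature.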

I thought of the matter recently when chancing upon Nathan Fielder’s Maid Service video [3]. (Warning: Fielder is sometimes funny — and sometimes not — but always obnoxious.) While programming is not quite the same as house cleaning, there is still a lesson there. With good organization, a competent manager can indeed reduce time, perhaps not linearly but close, by multiplying the number of workers.

One leprechaun dispatched, many to go.

References

[1] Laurent Bossavit:  The Leprechauns of Software Engineering – How folklore turns into fact and what to do about it, e-book available here (for a fee, with a free preview).

[2] Fred Brooks: The Mythical Man-Month – Essays on Software Engineering, Addison-Wesley, 1975, new editions 1982 and 1995.

[3] Nathan for You: Maid Service, video available here.


Software for Robotics: 2016 LASER summer school, 10-18 September, Elba

The 2016 session of the LASER summer school, now in its 13th edition, has just been announced. The theme is new for the school, and timely: software for robotics. Below is the announcement.

School site: here

The 2016 LASER summer school will be devoted to Software for Robotics. It takes place from 10 to 18 September in the magnificent setting of the Hotel del Golfo in Procchio, Elba Island, Italy.

Robotics is progressing at an amazing pace, bringing improvements to almost all areas of human activity. Today’s robotics systems rely ever more fundamentally on complex software, raising difficult issues. The LASER 2016 summer school covers both the current state of robotics software technology and open problems. The lecturers are top international experts with both theoretical contributions and major practical achievements in developing robotics systems.
The LASER school is intended for professionals from the industry (engineers and managers) as well as university researchers, including PhD students. Participants learn about the most important software technology advances from the pioneers in the field. The school’s focus is applied, although theory is welcome to establish solid foundations. The format of the school favors extensive interaction between participants and speakers.
The speakers include:

  • Joydeep Biswas, University of Massachusetts, on Development, debugging, and maintenance of deployed robots
  • Davide Brugali, University of Bergamo, on Managing software variability in robotic control systems
  • Nenad Medvidovic, University of Southern California, on Software Architectures of Robotics Systems
  • Bertrand Meyer, Politecnico di Milano and Innopolis University, with Jiwon Shin, on Concurrent Object-Oriented Robotics Software: Concepts, Framework and Applications
  • Issa Nesnas, NASA Jet Propulsion Laboratory, on Experiences from robotic software development for research and planetary flight robots
  • Richard Vaughan, Simon Fraser University

Organized by Politecnico di Milano, the school takes place at the magnificent Hotel del Golfo (http://www.hoteldelgolfo.it/) in Golfo di Procchio, Elba. Along with an intensive scientific program, participants will have time to enjoy the natural and cultural riches of this history-laden jewel of the Mediterranean.

For more information about the school, the speakers and registration see here.


— Bertrand Meyer


Design by Contract: ACM Webinar this Thursday

A third ACM webinar this year (after two on agile methods): I will be providing a general introduction to Design by Contract. The date is this coming Thursday, September 17, and the time is noon New York (18 Paris/Zurich, 17 London, 9 Los Angeles, see here for hours elsewhere). Please tune in! The event is free but requires registration here.


How to learn languages

Most people in technology, trade, research or education work in an international environment and need to use a foreign language which they learned at some earlier stage [1]. It is striking to see how awfully most of us perform. International conferences are a particular pain; many speakers are impossible to understand. You just want to go home and read the paper — or, often, not.

Teachers — English teachers in the case of the most commonly used international language — often get the blame, as in “They teach us Shakespeare’s English instead of what we need for today’s life“, but such complaints are  unfounded: look at any contemporary language textbook and you will see that it is all about some Svetlanas or Ulriches or Natsukos meeting or tweeting their friends Cathy and Bill.

It is true, though, that everyone teaches languages the wrong way.

There is only one way to teach languages right: start with the phonetics. Languages were spoken [2] millennia before they ever got written down. The basis of all natural languages is vocal. If you do not pronounce a language right you do not speak that language. It is unconscionable for example that most of us non-native speakers, when using English, still have an accent. We should have got rid of its last traces by age 12.

I cannot understand why people who are otherwise at the vanguard of intellectual achievement make a mess of their verbal expression, seemingly not even realizing there might be a problem. Some mistakes seem to be handed down from generation to generation. Most French speakers of English, for example, pronounce the “ow” of “allow” as in “low”, not “cow” (it took a long time before a compassionate colleague finally rid me of that particular mistake, and I don’t know how many more I may still be making); Italians seem to have a particular fondness for pronouncing as a “v” the “w” of “write”; and so on.

The only place I ever saw that taught languages right was a Soviet school for interpreters. Graduating students of French, having had no exposure to the language before their studies, spoke it like someone coming out of a Métro station. (Actually they spoke more like a grandmother coming out of the Métro, since they had little access to contemporary materials, but that would have been easy to fix.) The trick: they spent their entire first year doing phonetics, getting the “r” and the “u” and so on right, shedding the intonation of their native tongue. That year was solely devoted to audio practice in a phonetics lab. At the end of it they did not know the meaning of what they were saying, but they said it perfectly. Then came a year of grammar, then a year of conversation. Then came the Métro result. (This is not an apologia for the Soviet Union. Someone there just happened to get that particular thing right.)

We should teach everyone this way. There is no reason to tolerate phonetic deviations. If you do not get the sounds exactly as they should be, everything else will be flawed. Take, for example, the “r”. If, like me, you cannot roll your “r”s, then when you try to speak Russian or Italian, even if you think you can get the other sounds right you don’t, because your tongue or palate or teeth are in the wrong place. Another example is the “th” sound in English (two distinct sounds in fact), which I never got right. I can fake it but then something else comes out wrong and I still sound foreign. My high-school teachers — to whom I owe gratitude for so much else — should have tortured me until my “th”s were perfect. True, teaching time is a fixed-pie problem, but I am sure something else could have been sacrificed. Since, for example, I can answer in a blink that seven times nine is sixty-three, I must at some stage have taken the time to memorize it. In retrospect I would gladly sacrifice that element of knowledge, which I can reconstruct when needed, for the ability to roll my “r”s.

Age is indeed critical. While we humans can learn anything at any time, it is a well-known fact (although the reasons behind it remain mysterious) that until puberty we are malleable and can learn languages perfectly. Witness bilingual and trilingual children; they do not have any accent. But around the time we develop new abilities and desires our brain shuts itself off to that particular skill; from then on we can only learn languages with great pain, and with only remote hopes of reaching the proficiency of natives. The time to learn the phonetics of a foreign language, and learn it perfectly, is around the age of nine or ten at the latest. Then, at the age of reason, we should learn the structures — the grammar. Declensions in German, the use of tenses in English, the perfective and imperfective aspects of Russian. Conversation — Svetlana greeting Cathy — can come later if there is time left. Once you have the basics wired into your head, the rest is trivial.

Focusing children on phonetics as the crucial part of learning a language will also help them shine. Like physical appearance, verbal clarity is an enormous advantage. I must not be the only one in conferences who pays far more attention to the content of an article if the speaker has impeccable pronunciation, innate or learned. Syntax and choice of words come next. Of course substance matters; we have all heard top scientists with accents thicker than a Humvee tire and grammar thinner than a summer dress.  Everyone else needs fluency.

Conceivably, someone might object that a year of phonetic drilling is not the most amusing pastime for a 10-year-old. Without even noting that it’s not worse than having to learn to play the violin — where did we ever get the idea that learning should be fun?

As to me, like those who before they die want to get into space, visit the capitals of all countries on earth or reach the top of Mount Everest, I have my dream; it has lesser impact on the environment and depends on me, not on the help of others: just once, I’d like to roll an “r” like a Polish plumber.

Notes

[1] Many native English speakers provide the exception to this observation, since they often do not learn any foreign language beyond “Buon giorno” and “melanzane alla parmigiana”, and hence will probably not see the point of this article.

[2] And motioned. Sign language as practiced by deaf people (informally, before it was codified starting in the 17th century) is also a potential teaching start.


When pictures lie

 

One of the most improvable characteristics of scientific papers is the graphical presentation of numerical data. It is sad to see that thirty years after Tufte published the first edition of his masterpiece [1] many authors are still including grossly inaccurate graphics. Sadder still when the authors are professional graphic designers, who should know better. Take this chart [2] from the latest newsletter of Migros, Switzerland’s largest supermarket chain. To convince Swiss people that they should not worry about their food bills, it displays the ratio of food expenses to income in various countries. There would be many good ways to represent this information graphically, but someone thought it clever to draw variable-size coins of the respective currencies. According to the text, “the bigger the circle, the larger the income’s share devoted to food“.

Just a minor problem: the visual effect is utterly misleading. Taking three examples from the numbers given, the ratio is roughly 10% for Switzerland, 30% for Russia, 40% for Morocco. And, sure enough,  compared to the Swiss coin in the figure, the Russian coin is about three times bigger and the Moroccan coin four times… in diameter! What the eye sees, of course, is the area. Since the area varies as the square of the diameter, one gets the impression that Russians spend nine times, not three, and Moroccans sixteen times, not four, as much as the Swiss.

To convey the correct suggestion, the diameter of the Russian coin should have been about 73% larger than the Swiss coin’s diameter (the square root of three is about 1.73), and the diameter of the Moroccan coin twice as large, that is to say half of what it is in the figure.
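
The general rule, since the eye reads area, is that a coin standing for a ratio v, next to a base coin standing for a ratio b, must have its diameter scaled by the square root of v/b, not by v/b itself. A few lines of Python confirm the figures above (using the rounded percentages):

    import math

    swiss = 10.0   # rough percentage quoted above
    for country, share in [("Russia", 30.0), ("Morocco", 40.0)]:
        scale = math.sqrt(share / swiss)   # correct diameter multiplier
        print(country, round(scale, 2))
    # Russia 1.73 (diameter 73% larger), Morocco 2.0 (twice the diameter)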

The impression is particularly misleading for countries where the ratio, unlike in Russia or Morocco, is close to Switzerland’s. Most interestingly, although no doubt by accident, for neighboring countries, where Swiss people are prone to go shopping in search of a bargain, a practice that possibly does not enthuse Migros. The extra percentage devoted to food (using this time no longer rough approximations but precise values from the figures given in the Migros page) is 4% for Austria (10.9 ratio vs Switzerland’s 10.2), 8% for Germany (11.1), 30% for France (13.3) and 43% for Italy (14.6). But if you look at the picture the circles suggest much bigger differences; for example the Italian circle is obviously computed from the ratio of the squares, 14.6² / 10.2², showing an increase of 104%. In other words, Italians proportionally devote to food a little over two-fifths more than the Swiss, but the graph suggests they spend twice as much.

On the premise that one should not ascribe to malevolence what can be explained by ignorance, I hope the Migros graphic designers will get a copy of Tufte’s book for their future endeavors.

Read Tufte too if you want the pictures in your papers to be not just attractive but accurate.

References

[1] Edward R. Tufte: The Visual Display of Quantitative Information, Graphics Press, second edition, 2001. See his site here.

[2] “How much do we spend to feed ourselves?” on the Migros site, available here for the French version (replace “fr” in the URL by “de” for German and “it” for Italian, I did not see an English version). Click on the figure for a readable version.


Crossing the Is and doting on the Ts

 

Last week at the CSEE&T conference in Klagenfurt (the conference page is here; I gave a keynote), a panel discussed how universities should prepare students for software engineering. Barry Boehm, one of the panelists, stated the following principle, which afterwards he said he had learned from Simon Ramo, co-founder of TRW. In hiring people, he stated, it is better to avoid candidates with an I-shaped profile: narrowly specialized in one topic that they have explored to exhaustion. Better look for a T: someone who has mastered an area in depth and then branched out to learn about many others.

I started playing with the variants. One should avoid the hyphens, or em-dashes (“—”): people with a smattering of everything but no detailed knowledge of anything. Boehm said that this is the reason he always argued against establishing such undergraduate majors as systems engineering. A variant of the hyphen is the overline (“‾”): graduates who supposedly are so smart that they can learn anything, but whose actual knowledge is limited to abstract notions.

Along with the T we should consider the “bottom” symbol of denotational semantics: “⊥”. It corresponds to people who have a broad educational base, for example in mathematics, and have deepened it by focusing on a particular topic. The T and the ⊥ can be combined into an H turned on its side: acquiring a solid foundation, specializing, then using that experience to become familiar with new areas.

Extending the permutation group, I am not sure what a “+” profile would be, but in a discussion last night Rustan Leino and Peter Müller suggested the “O”, the ability to circle around topics, and the umlaut, knowing a thing or two; in fact, exactly two.

 


Reuse of another kind

This is a plug for a family member, but for a good cause: reuse and the environment. There are many reasons for promoting reuse in software; in other fields some of those reasons apply too, plus many others, economic and environmental. The core concern is to reduce waste of all kinds.

One of the most appalling sources of waste is the growing use of throw-away food containers. It is also avoidable.

A new company, bizeebox [1], has devised an ingenious scheme to let restaurants use containers that can be washed and reused many times. It is easy to use by both restaurants and customers, saves money, and avoids heaping tons of waste on landfills. Their slogan: “Take the Waste Out of Takeout“.

The founders of bizeebox have run successful pilot projects and are now starting for real with restaurants in the Ann Arbor area. They have launched a campaign on Indiegogo [2], with a $30,000 funding goal. Reuse deserves a chance; so do they.

References

[1] bizeebox page, at bizeebox.com.

[2] bizeebox Indiegogo campaign: here.


The laws of branching (part 2): Tichy and Joy

Recently I mentioned the first law of branching (see earlier article) to Walter Tichy, famed creator of RCS, the system that established modern configuration management. He replied with the following anecdote, which is worth reproducing in its entirety (in his own words):

I started work on RCS in 1980, because I needed an alternative to SCCS, for which the license cost would have been prohibitive. Also, I wanted to experiment with reverse deltas. With reverse deltas, checking out the latest version is fast, because it is stored intact. For older ones, RCS applied backward deltas. So the older revisions took longer to extract, but that was OK, because most accesses are to the newest revision anyway.

At first, I didn’t know how to handle branches in this scheme. Storing each branch tip in full seemed like a waste. So I simply left out the branches.

It didn’t take long and people were using RCS. Bill Joy, who was at Berkeley at the time and working on Berkeley Unix, got interested. He gave me several hints about unpleasant features of SCCS that I should correct. For instance, SCCS didn’t handle identification keywords properly under certain circumstances, the locking scheme was awkward, and the commands too. I figured out a way to solve these issues. Bill was actually my toughest critic! When I was done with all the modifications, Bill came back and said that he was not going to use RCS unless I put in branches. So I figured out a way. In order to reconstruct a branch tip, you start with the latest version on the main trunk, apply backwards deltas up to the branch point, and then apply forward deltas out to the branch tip. I also implemented a numbering scheme for branches that is extensible.

When discussing the solution, Bill asked me whether this scheme meant that it would take longer to check in and out on branches. I had to admit that this was true. With the machines at that time (VAXen) efficiency was important. He thought about this for a moment and then said that that was actually great. It would discourage programmers from using branches! He felt they were a necessary evil.
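
The scheme Tichy describes is easy to sketch in a few lines. In this toy model (mine, not RCS source code), a delta is simply a function from one revision’s text to another’s; the trunk tip is stored in full, and every other revision is reached by composing deltas:

    # Trunk: r1 -> r2 -> r3, with only r3 stored intact.
    # Branch from r2: r2.1.1, reached by a forward delta out of r2.

    def reconstruct(stored_tip, backward, forward=()):
        # Apply backward deltas from the trunk tip to the branch point,
        # then forward deltas out to the requested branch revision.
        text = stored_tip
        for delta in backward:
            text = delta(text)
        for delta in forward:
            text = delta(text)
        return text

    r3 = "r3: latest trunk text"
    r3_to_r2 = lambda _: "r2: older trunk text"          # backward delta
    r2_to_branch = lambda _: "r2.1.1: branch tip text"   # forward delta

    print(reconstruct(r3, [r3_to_r2]))                   # an older trunk revision
    print(reconstruct(r3, [r3_to_r2], [r2_to_branch]))   # the branch tip

The extra delta applications are exactly the check-out cost that, in Bill Joy’s view, would usefully discourage branching.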


Another displaced business

Front-page notice in yesterday’s Tages-Anzeiger (one of the principal Swiss newspapers):

Dear Readers:

From today the employment-ads section will no longer appear as a separate supplement, but directly as a section of the Tuesday and Thursday editions. The reason is the ever smaller number of position offerings.

It seems clear that what has decreased is not the number of positions offered — the job market in Switzerland is healthy — but the number of those offered through newspapers. End of an era.
