The path wrongly taken

The dominant discourse right now is “Calm down, this is just the normal game of democracy”. Actually, “this” is not the normal course of democracy. Everyone has experienced the disappointment of a favored candidate losing. The result of Tuesday is something else, not seen before in our lifetime: the triumph of indecency and the rout of decency.

There is in the world a general category of decent people, who as one of their characteristics seek out the company of other decent people. (“Elective affinities”.) They have been massively and perhaps decisively defeated.

What makes people decent is not that they never do bad things (although they perhaps strive not to do more of them than necessary), but that as much as possible they prefer certain things over their obverses. For example, they prefer:

  • Telling the truth over lying.
  • Elegance over vulgarity.
  • Education over arrogant ignorance.
  • Arguments over insults.
  • Beauty over ugliness.
  • Joy over gloom.
  • Progress over regress.
  • Health over disease.
  • Financial well-being over widespread poverty.
  • Reason over mania.
  • Science over fables.
  • Helping others over hurting them.
  • Encouraging others over denigrating them.
  • Peace over war.
  • Respect over contempt.
  • Calm over violence.
  • Tolerance over intolerance.
  • Honesty over dishonesty.
  • Democracy over totalitarianism.
  • Freedom over slavery.
  • As an example of the last pair, women’s freedom over their submission to hateful men.
  • Kindness over cruelty.
  • Fairness over injustice.
  • Sanity over insanity.

(Again) those preferences do not mean that decent people never indulge in any of the second terms of these pairs, but that given a choice they will lean towards the first terms, that they prefer the world to evolve in the direction of these first terms, and that they naturally associate with other people with similar preferences. The first terms all go well with each other (after all, what is science if not the dogged pursuit of truth? What is democracy if not the reign of tolerance?), and all the second terms go well with each other too, but until now it was exceedingly rare to see a widely popular leader in a civilized country, and his zealots, deliberately embrace everything indecent and reject everything decent. At worst they would on the sly adopt a few indecencies here and there.

The pair elected yesterday is unique in the history of the United States in having deliberately, ostentatiously and proudly chosen every second term. Every single one, many times, in the public’s full view, and under the cheers of their supporters.

That is why all decent people are desperate today. The desperation has nothing to do with matters of left versus right, or Democrat versus Republican, or higher taxes versus tax cuts, or the price of eggs, or any other political issue of substance. It has everything to do with decency over indecency.

And particularly with truth over falsehood. The first of the above pairs largely subsumes the others: when society starts tolerating constant, blatant, enormous lies as if they were part of expected discourse, everything else falls apart. Dictators understand this process well.

We hear that “no one knows what is going to happen”. Not so. We know something with certainty: catastrophes are coming our way. The only unknown is how many of them will hit us. For one thing, the fight against climate change is doomed: all experts tell us that the change is not linear and that we have (we had) at best a few years to avoid the worst. As the US, historically the biggest source of emissions (although by no means the only large one), turns away from climate action, everyone else, beginning with China, will have an excellent excuse to do nothing. The consequences are horrendous to contemplate, and will be with us soon. Another certain catastrophe is chaos in the US, merrily encouraged by its enemies. The part of the country that voted for sanity is defeated and despondent but not gone; come the first round of anti-constitutional measures, we may expect no end to clashes. Tens of millions of Americans are almost certainly going to lose their health insurance, reverting to a situation unique among developed countries. Women, denied abortion and resorting to back-alley substitutes, will die by the thousands. It is better not to think too much of what will happen to Ukraine now (and through a possible ricochet effect to Poland and the Baltic states). Or of what would ensue in the case of a new health crisis, with loony anti-vaccine, anti-mask activists at the helm. Or of what will take place at all levels of government, with none of the “adults in the room” around: the cool-headed conservative professionals who saved us from some disasters the first time around (and this time exhorted the country to vote for the sane candidate). We are back to the dark years of 2016 to 2020, when we would wake up almost every morning to the news of the latest crazy initiative, except that now there will be a rock-solid majority (presidency, Senate, Supreme Court, with the House still not decided as of this writing) and the entire party’s total subservience to the whims, however extreme, of one man.

The founders of the Republic had warned against exactly the kind of outrageous demagogues that will now assume power, but they could never have imagined such a combination of circumstances as has now overwhelmed the country; if they had, they would surely have put in more checks and balances. (For one thing, convicted felons cannot vote; why in the world can they be elected?)

The USA is, or was until now, the world’s oldest continuously functioning democracy. Does it have enough resilience to continue as a democracy? Do not hold your breath. For one thing, there is no democracy without civilized debate. Yet another certain and unprecedented catastrophe is the debasement of public discourse, carried out step by step over the past few years. Everyone now seems to have accepted that it is OK for a major party candidate, a past and now future president, to resort again and again and again to personal insults, to mocking disabled persons for their disability, war heroes for having been heroes, soldiers for having been soldiers, and opponents for being supposedly stupid. The press calls these insults “schoolyard bullying”, but a 12-year-old who says any of these things in an actual schoolyard promptly gets a dressing-down from the principal and a suspension.

We in the West have been living, whether we realized it or not, through a wonderful 80 years. We have suffered traumas (the repeated Paris attacks, 9/11, October 7, February 2022) but we have also enjoyed peace and prosperity. We are at the end of an era. Particularly those among us who aspire to decency.

 


Europe asleep (a key-not)

This week, Informatics Europe, the association of European computer science departments and industry research centers, is holding its annual ECSS event, bizarrely billed as “20 years of Informatics Europe”. (Informatics Europe was created at the end of 2006 and incorporated officially in 2011. The first ever mention of the name appeared in an email from Jan van Leeuwen to me with cc to Christine Choppy, received on 23 October 2006 at 21:37 — we were working late. Extract from Jan’s message: “The name `Informatics Europe’ has emerged as a name that several people find appealing (and www.informatics-europe.org seems free).” So this year is at most the 18th anniversary.)

I would have liked to speak at this week’s event but was rejected, as explained at the end of this note. I am jotting down here a partial sketch of what I would have said, at least the introduction. (Engaging in a key-not since I was not granted a keynote.) Some of the underlying matters are of great importance and I hope to have the opportunity to talk or write about them in a more organized form in the future.

Informatics Europe came out of a need to support and unite Europe’s computer science (informatics) community. In October 2004 (funny how much seems to happen in October) Willy Zwaenepoel, chair of CS at EPFL (ETH Lausanne) wrote to me as the CS department head at ETH Zurich with an invitation to meet and discuss ways to work together towards making the discipline more visible in Switzerland. We met shortly thereafter, for a pleasant Sunday dinner on November 14. I liked his idea but suggested that any serious effort should happen at the European level rather than just Switzerland. We agreed to try to convince all the department heads that we could find across Europe and invite them to a first meeting. In the following weeks a frantic effort took place to identify, by going through university web sites and personal contacts, as many potential participants as possible. The meeting, dubbed ECSS for European Computer Science Summit, took place at ETH Zurich on (you almost guessed it) 20-21 October 2005. The call for participation started with:

The departments of computer science at EPF Lausanne and ETH Zurich are taking the initiative of a first meeting of heads of departments in Europe.

Until now there hadn’t been any effort, comparable to the Computing Research Association in the US with its annual “Snowbird” conference, to provide a forum where they could discuss these matters and coordinate their efforts. We feel it’s time to start.

The event triggered enormous enthusiasm and in the following years we created the association (first with another name, pretty ridiculous in retrospect, but fortunately Jan van Leeuwen intervened) and developed it. For many years the association was hosted at ETH in my group, with a fantastic Executive Board (in particular its two initial vice presidents, Jan van Leeuwen and Christine Choppy) and a single employee (worth many), Cristina Pereira, who devoted an incredible amount of energy to developing services for the members, who are not individuals but organizations (university departments and industry research labs). One of the important benefits of the early years was to bring together academics from the Eastern and Western halves of the continent, the former having only recently emerged from communism and eager to make contacts with their peers from the West.

This short reminder is just to situate Informatics Europe for those who do not know about the organization. I will talk more about it at the end because the true subject of this note is not the institution but European computer science. The common concern of the founders was to bring the community together and enable it to speak with a single voice to advance the discipline. The opening paragraphs of a paper that Zwaenepoel and I published in Communications of the ACM to announce the effort (see here for the published version, or here for a longer one, pre-copy-editing) reflect this ambition:

Europe’s contribution to computer science, going back seventy years with Turing and Zuse, is extensive and prestigious; but the European computer science community is far from having achieved the same strength and unity as its American counterpart. On 20 and 21 October 2005, at ETH Zurich, the “European Computer Science Summit” brought together, for the first time, heads of computer science departments throughout Europe and its periphery. This landmark event was a joint undertaking of the CS departments of the two branches of the Swiss Federal Institute of Technology: EPFL (Lausanne) and ETH (Zurich).

The initiative attracted interest far beyond its original scope. Close to 100 people attended, representing most countries of the European Union, plus Switzerland, Turkey, Ukraine, Russia, Israel, a delegate from South Africa, and a representative of the ACM, Russ Shackelford, from the US. Eastern Europe was well represented. The program consisted of two keynotes and a number of panels and workshops on such themes as research policy, curriculum harmonization, attracting students, teaching CS to non-CS students, existing national initiatives, and plans for a Europe-wide organization. The reason our original call for participation attracted such immediate and widespread interest is that computer science in Europe faces a unique set of challenges as well as opportunities. There were dozens of emails in the style “It’s high time someone took such an initiative”; at the conference itself, the collective feeling of a major crystallizing event was palpable.

The challenges include some old and some new. Among the old, the fragmentation of Europe and its much treasured cultural diversity have their counterparts in the organization of the educational and research systems. To take just three examples from the education side, the UK has a system that in many ways resembles the US standard, although with significant differences (3- rather than 4-year bachelor’s degree, different hierarchy of academic personnel with fewer professors and more lecturers); German universities have for a long time relied on a long (9-semester) first degree, the “Diplom”; and France has a dual system of “Grandes Écoles”, engineering schools, some very prestigious and highly competitive, but stopping at a Master’s-level engineering degree, and universities with yet another sequence of degrees including a doctorate.

And so on. The immediate concerns in 2024 are different (Bologna adoption woes are a thing of the past) but the basic conundrum remains: the incredible amount of talent and creativity present in Europe remains dormant; research in academia (and industry) fails to deliver anywhere close to its potential. The signs are everywhere; as this note is only a sketch let me just mention a handful. The following picture shows the provenance of papers in this year’s International Conference on Software Engineering (ICSE), the premier event in the field. Even if you cannot read all the details (it’s a photo taken quickly from a back row in the opening session, sorry for the bad quality), the basic message is unmistakable: China above all, then the US, then some papers from Singapore, Australia and Canada. A handful from Germany and Switzerland, not a single accepted paper from France! In a discipline that is crucial for the future of every European nation.

[Figure: country provenance of papers accepted at ICSE 2024, as shown at the opening session.]

Venture capital? There is a bit more than twenty years ago, but it is still limited, avaricious and scared of risks. Government support? Horizon and other EU projects have helped many, with ERC grants in particular (a brilliant European exclusive) leading to spectacular successes, but the bulk of the funding is unbelievably bureaucratic, forcing marriages of reason between institutions that have nothing in common (other than the hope of getting some monies from Brussels) and feeding a whole industry of go-between companies which claim to help applicants but contribute exactly zero to science and innovation. These programs have also had the perverse effect of limiting national sources of funding. (In one national research agency on whose evaluation committee I sat, the acceptance rate is 11%. In another, where I recently served on the expert panel, it is more like 8%. Such institutions are the main source of non-EU research funding in their respective countries.)

The result? Far less innovation than we deserve and a brain drain that every year gets worse. Some successes do occur, and we like to root for Dassault, SAP, Amadeus and more recently companies like Mistral, but almost all of the top names in technology — like them or loathe them — are US-based (except for their Chinese counterparts): Amazon, Microsoft, Google, OpenAI, Apple, Meta, X, or (to name another software company) Tesla. They benefit from European talent and European education: some have key research centers in Europe, and all have European engineers and researchers. So do non-European universities; not a few of the ICSE papers labelled above as “American” or “Canadian” are actually by European authors. Talk to a brilliant young researcher or bright-eyed entrepreneur in Europe: in most cases, you will hear that he wants to find a position or create a company in the US, because that is where the action is.

Let me illustrate the situation with a vivid example. In honor of Niklaus Wirth’s 80th birthday I co-organized a conference in 2014 where at the break a few of us were chatting with one of the speakers, Vint Cerf. Someone asked him a question which was popping up everywhere at that time, right in the middle of the Snowden affair: “if you were a sysadmin for a government organization, would you buy a Huawei router?”. Cerf’s answer was remarkable: I don’t know, he said, but there is one thing I do not understand: why in the world doesn’t Europe develop its own cloud solution? So honest, coming from an American — a Vice President at Google! — and so true. So true today still: we are all putting all our data on Amazon’s AWS and Cerf’s employer’s Google Cloud and IBM Cloud and Microsoft Azure. Total madness. (A recent phenomenon that appears even worse is something I have seen happening at European university after university: relinquishing email and other fundamental solutions to Microsoft! More and more of us now have our professional emails at outlook.com. Even aside from the technical issues, such en-masse surrender is demented.) Is Europe so poor or so backward that it cannot build local cloud or email solutions? Of course not. In fact, some of the concepts were invented here!

This inability to deliver on our science and technology potential is one of the major obstacles to social and economic improvement in Europe. (Case in point: there is an almost one-to-one correspondence between the small set of countries that are doing better economically than the rest of Europe, often much better, and the small set of countries that take education and science seriously, giving them enough money and freeing them from overreaching bureaucracy. Did I mention Switzerland?) The brain drain should be a major source of worry; some degree of it is of course normal — enterprising people move around, and there are objective reasons for the magnetic attraction of the US — but the phenomenon is dangerously growing and is too unidirectional. Europe should offer its best and brightest a local choice commensurate with the remote one.

Many cases seem to suggest that Europe has simply given up on its ambitions. One specific example — academia-related but important — adds to the concerns raised apropos ICSE above. With a group of software engineering pioneers from across Europe (including some who would later help with Informatics Europe) we started the European Software Engineering Conference in 1987. I was the chair of the first conference, in Strasbourg that year, and the chair of the original steering committee for the following years (I later organized the 2013 session). The conference blossomed, reflecting the vibrant life of the European software engineering community, and open of course to researchers from all over the world. (The keynote speaker in Strasbourg was David Parnas, who joked that we had invited him, an American, because the French and Germans would never agree to a speaker from the other country. That quip was perhaps funny but as unfair as it was wrong: founders from different countries, notably including Italy and Belgium, even the UK, were working together in a respectful and friendly way without any national preferences.) Having done my job I stepped aside but was flabbergasted to learn some years later that ESEC had attached itself to a US-based event, FSE (the symposium on Foundations of Software Engineering). The inevitable and predictable happened: FSE was supposed to be ESEC-FSE every other year, but that practice soon lapsed and now ESEC is no more. FSE is not the culprit here: it is an excellent conference (I had a paper in the last edition), it is just not European. My blood boils each time I think about how the people who should have nurtured and developed ESEC, the result of many years of discussions and of excellent Europe-wide cooperation, betrayed their mission and let the whole thing disappear. Pathetic and stupid, and terrible for Europe, which no longer has an international conference in this fundamental area of modern technology.

The ESEC story prompts the inevitable question: who is responsible? Governments are not blameless; they are good at speeches but less good at execution. When they do intervene, it is often with haste (reacting to hype with pharaonic projects that burn heaps of money before falling out of favor and delivering nothing). In France, the tendency is sometimes to let the state undertake technical projects that it cannot handle; the recipes that led to the TGV or Ariane do not necessarily work for IT. (A 2006 example was an attempt to create a homegrown search engine, which lasted just long enough to elicit stinging mockery in the Wall Street Journal, “Le Google”, unfortunately behind a paywall.)

It is too easy, however, to cast all the blame on outsiders. Perhaps the most important message that I would have wanted to convey to the department heads, deans, rectors and other academic decision-makers attending ECSS this week is that we should stop looking elsewhere and start working on the problems for which we are responsible. Academia is largely self-governed. Even in centralized countries where many decisions are made at the national level in ministries, the staff in those ministries largely consists of academics on secondment to the administration. European academia — except in the more successful countries, already alluded to, and by the way not exempt either from some of the problems of their neighbors — is suffocating under the weight of absurd rules. It is fashionable to complain about the bureaucracy, but many of the people complaining have the power to make and change these rules.

The absurdities are everywhere. In country A, a PhD must take exactly three years. (Oh yes? I thought it was the result that mattered.) By the way, if you have funding for 2.5 years, you cannot hire a PhD student (you say you will find the remaining funding in due time? What? You mean you are taking a risk?) In country B, you cannot be in the thesis committee of the student you supervised. (This is something bequeathed from the British system. After Brexit!) Countries C, D, E and F (with probably G, H, I, J and K to follow) have adopted the horrendous German idea of a “habilitation”, a second doctorate-like process after the doctorate, a very effective form of infantilization which maintains scientists in a subservient state until their late thirties, preventing them during their most productive years from devoting their energy to actual work. Universities everywhere subject each other to endless evaluation schemes in which no one cares about what you actually do in education and research but the game is about writing endless holier-than-thou dissertations on inclusiveness, equality etc. with no connection to any actual practice. In country L, politicized unions are represented in all the decision-making bodies and impose a political agenda, censoring important areas of research and skewing scientist hires on the basis of political preferences. In country M, there is a rule for every elementary event of academic life and the rule suffers no exception (even when you discover that it was made up two weeks earlier with the express goal of preventing you from doing something sensible). In country N, students who fail an exam have the right to a retake, and then a second retake, and then a third retake, in oral form of course. In country O, where all university presidents make constant speeches about the benefits of multidisciplinarity, a student passionate about robotics but with a degree in mechanical engineering cannot enroll in a master’s program in robotics in the computer science department. In country P (and Q and R and S and T) students and instructors alike must, for any step of academic life, struggle with a poorly designed IT system, to which there is no alternative. In country U, expenses for scientific conferences are reimbursed six months later, when not rejected as non-conformant. In country V, researchers and educators are hired through a protracted committee process which succeeds in weeding out candidates with an original profile. In country W, the prime criterion for hiring researchers is the H-index. In country X, it is the number of publications. In country Y, management looks at your research topics and forces you to change them every five years. I would need other alphabets but could go on.

When we complain about the difficulty of getting things done, we are very much like the hero of Kafka’s Before the Law, who grows old waiting in front of a gate, only to learn in his final moments that he could just have entered by pushing it. We need to push the gate of European academia. No one but we ourselves is blocking it. Simplifying everything would be a start, but there are more ways to enter; they are what I would have liked to present at ECSS and they will have to wait for another day.

Which brings me back to the ECSS conference. I wrote to its organizers asking for the opportunity to give a talk. Naïvely, I thought the request would be granted as a matter of course. After all, while Informatics Europe was at every step a group effort, with an outstanding group of colleagues from across Europe (I mentioned a few at the beginning, but there were many more, including all the members of the initial Executive Board), I played the key role as one of the two initiators of the idea, the organizer of the initial meeting and several of the following ECSS, the founding president for two terms (8 years), the prime writer of the foundational documents, the host of the first secretariat for many years in my ETH chair, the lead author of several reports, the marketer recruiting members, and the jack-of-all-trades for Informatics Europe. It may be exaggerated to say that for the first few years I carried the organization on my shoulders, but it is a fact that I found the generous funding (from ETH, industry partners and EPFL thanks to Zwaenepoel) that enabled us to get started and enabled me, when I passed the baton to my successor, to give him an organization in a sound financial situation, some 80 dues-paying members, and a strong record of achievements. Is it outrageous, after two decades, to ask for a microphone to talk about the future for 45 minutes? The response I got from the Informatics Europe management was as surprising as it was boorish: in our program (they said in February 2024!) there is no place left. To add injury to insult they added that if I really wanted I could participate in some kind of panel discussion. (Sure, fly to Malta in the middle of the semester, cancel 4 classes and meetings, miss paper deadlines, all for 5 minutes of trying to put in a couple of words. By the way, one of the principles we had for the organization of ECSS was always to be in a big city with an important local community and an airport with lots of good connections to the principal places in Europe — and beyond for our US guests.) When people inherit a well-functioning organization, the result of hard work by a succession of predecessors, it is hard to imagine what pleasure they can take in telling those predecessors to go to hell. Pretty sick.

For me Informatics Europe was the application to my professional life of what remains a political passion: a passion for Europe and democracy. On this same blog in 2012 I published an article entitled “The most beautiful monument of Europe”, a vibrant hymn to the European project. While I know that some of it may appear naïve or even ridiculous, I still adhere to everything it says and I believe it is worth reading. While I have not followed the details of the activities of Informatics Europe since I stopped my direct involvement, I am saddened not to see any trace of European sentiment in it. We used to have Ukrainian members, from Odessa Polytechnic, who participated in the first ECSS meetings; today there is no member from Ukraine listed. One would expect to see prominent words of solidarity with the country, which is defending our European values, including academic ones. Is that another sign of capitulation?

I am also surprised to see few new in-depth reports. Our friends from the US Computing Research Association, who were very helpful at the beginning of Informatics Europe (they included in particular Andy Bernat and Ed Lazowska, and Willy Zwaenepoel himself, who had been a CRA officer during his years in the US), told us that one of the keys to success was to provide the community with factual information. Armed with that advice, we embarked on successive iterations of the “Informatics in Europe: Key Data” reports, largely due to the exhaustive work of Cristina Pereira, which provided unique data on salaries (something that we often do not discuss in Europe, but it is important to know how much a PhD student, postdoc, assistant professor or full professor makes in every surveyed country), student numbers, degrees, gender representation etc., with the distinctive quality that — at Cristina’s insistence — we favored exactness over coverage: we included only the countries for which we could get reliable data, but for those we guaranteed full correctness and accuracy. From the Web site it seems these reports — which indeed required a lot of effort, but are they not the kind of thing the membership expects? — were discontinued some years ago. While the site shows some other interesting publications (“recommendations”), it seems regrettable to walk away from hard foundational work.

New management is entitled to its choices (as previous management is entitled to raise concerns). Beyond such differences of appreciation, the challenges facing European computer science are formidable. The enemies are outside, but they are also within ourselves. The people in charge are asleep at the wheel. I regret not having had the opportunity to try to wake them up in person, but I do hope for a collective jolt that will enable our discipline to bring Europe the informatics benefits it deserves.


The power and terror of imagination

Reading notes. From: Quelques éléments d’histoire des nombres négatifs (Elements of a history of negative numbers) by Anne Boyé, Proyecto Pénélope, 2002, revision available here; On Solving Equations, Negative Numbers, and Other Absurdities: Part II by Ralph Raimi, available here; Note sur l’histoire des nombres entiers négatifs (Note on the History of Negative Numbers) by Rémi Lajugie, 2016, here; The History of Negative Numbers by Leo Rogers, here; Historical Objections against the Number Line by Albrecht Heeffer, here; Making Sense of Negative Numbers by Cecilia Kilhamn, 2011 PhD thesis at the University of Gothenburg, here. Also the extensive book by Gert Schubring on Number Concepts Underlying the Development of Analysis in 17-19th Century France and Germany, here. Translations are mine (including from Maclaurin and De Morgan, retranslated from Lajugie’s and Boyé’s French citations). This excursion was spurred by a side remark in the article How to Take Advantage of the Blur Between the Finite and the Infinite by the recently deceased mathematician Pierre Cartier, available here.

[Figure: the number line, with numbers spreading away from zero towards both the right and the left.]

At dinner recently, with non-scientists, discussion revolved around ages and a very young child, not even able to read yet, volunteered about his forthcoming little brother that “when he comes out his age will be zero”. An adult remarked “indeed, and right now his age is minus five months”, which everyone, young and old, seemingly found self-evident. How remarkable!

From an elite concept to grade school topic

It is a characteristic of potent advances in human understanding that for a while they are understandable to a few geniuses only, or, if not geniuses, to a handful of forward-thinking luminaries, and a generation later, sometimes less, they are taught in grade school. When I came across object-oriented programming, those of us who had seen the light, so to speak, were very few. Feeling very much like plotting Carbonari, we would excitedly meet once in a while in exotic locations (for my Simula-fueled band usually in Scandinavia, although for the Smalltalk crowd it must have been California) to share our passion and commiserate about the decades it would take for the rest of humankind to see the truth. Then at some point, almost overnight, without any noticeable harbinger, the whole thing exploded and from then on it was object-oriented everything. Nowadays every beginning programmer talks objects — I did not write “understands”, they do not, but that will be for another article.

Zero too was a major invention. Its first recorded use as a number (not just a marker for absent entities) was in India in the first centuries of our era. It is not hard to imagine the mockeries. “Manish here has twenty sheep, Rahul has twelve sheep, and look at that nitwit Shankar, he sold all his sheep and still claims he has some, zero of them he says! Can you believe the absurdity? Ha ha ha.”

That dialog is imaginary, but for another momentous concept, negative numbers, we have written evidence of the resistance. From the best quarters!

The greatest minds on the attack

The great Italian mathematician Cardan (Gerolamo Cardano), in his Ars Magna from 1545, was among the skeptics. As told in a 1758 French History of Mathematics by Montucla (this quote and the next few are from Boyé):

In his article 7 Cardan proposes an equation which in our language would be x² + 4x = 21 and observes that the value of x can equally be +3 or -7, and that by changing the sign of the second term it becomes -3 or +7. The name he gives to such values is “fake”.
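(A quick check in modern notation: 3² + 4·3 = 9 + 12 = 21 and (-7)² + 4·(-7) = 49 − 28 = 21; after the sign change, x² − 4x = 21 is indeed satisfied by both 7 and -3.)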

The words I am translating here as “fake values” are, in Montucla, valeurs feintes, where feint in French means feigned, or pretended (“pretend values”). Although I have not seen the text of Ars Magna, which is in Latin anyway, I like to think that Cardan was thinking of the Italian word finto. (Used for example in the title of an opera composed by Mozart at the age of 18, La finta giardiniera, the fake girl gardener — English has no feminine for “gardener”. The false gardenerette in question is a disguised marchioness.) It is fun to think of negative roots as feigned.

Cardan also uses terms like “abundant” versus “failing” quantities (abondantes and défaillantes in French texts) for positive and negative:

Simple advice: do not confuse failing quantities with abundant quantities. One must add the abundant quantities between themselves, also subtract failing quantities between themselves, and subtract failing quantities from abundant quantities but only by taking species into account, that is to say, only operate same with same […]

There is a recognition of negative values, but with a lot of apprehension. Something strange, the author seems to feel, is at play here. Boyé cites the precedent of Chinese accountants who could manipulate positive values through red sticks and negative ones through black sticks and notes that it resembles what Cardan seems to be thinking here. In the fifteenth century, Nicolas Chuquet “used negative numbers as exponents but referred to them as `absurd numbers’”.

For all his precautions, Cardan did consider negative quantities. No less a mind than Descartes, a century later (La Géométrie, 1637), is even more circumspect. In discussing roots of equations he writes:

Often it turns out that some of those roots are false, or less than nothing [“moindres que rien”] as if one supposes that x can also denote the lack of a quantity, for example 5, in which case we have x + 5 = 0, which, if we multiply it by x³ − 9x² + 26x − 24 = 0, yields x⁴ − 4x³ − 19x² + 106x − 120 = 0, an equation for which there are four roots, as follows: three true ones, namely 2, 3, 4, and a false one, namely 5.

Note the last value: “5”. Not a -5, but a 5 dismissed as “false”. The list of exorcising adjectives continues to grow: negative values are no longer “failing”, or “fake”, or “absurd”, now for Descartes they are “false”!  To the modern mind they are neither more nor less true than the “true” ones, but to him they are still hot potatoes, to be handled with great suspicion.
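Descartes’ multiplication is easy to replay with a modern computer algebra system. Here is a quick sketch in Python with the SymPy library (an illustration only, and of course nothing Descartes could have dreamed of):

    # Replaying Descartes' product: (x + 5) times the cubic with roots 2, 3, 4.
    from sympy import symbols, expand, roots

    x = symbols('x')
    product = expand((x + 5) * (x**3 - 9*x**2 + 26*x - 24))
    assert product == x**4 - 4*x**3 - 19*x**2 + 106*x - 120
    print(roots(product))   # {2: 1, 3: 1, 4: 1, -5: 1}: three "true" roots, one "false"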

Carnot cannot take the heat

More than a century and a half later we are actually taking a step back with Lazare Carnot. Not the one of the thermodynamic cycle — that would be his son, as both were remarkable mathematicians and statesmen. Lazare in 1803 cannot hide his fear of negative numbers:

If we really were to obtain a negative quantity by itself, we would have to deduct an effective quantity from zero, that is to say, remove something from nothing: an impossible operation. How then can one conceive of a negative quantity by itself?

(Une quantité négative isolée: an isolated negative quantity, meaning a negative quantity considered in isolation.) How indeed! What a scary thought!

The authors of all these statements, even when they consider negative values, cannot bring themselves to talk of negative numbers, only of negative quantities. Numbers, of course, are positive: who has ever heard of a shepherd who is guarding a herd of minus 7 lambs? Negative quantities are a slightly crazy concoction to be used only reluctantly as a kind of kludge.

Lajugie mentions another example, mental arithmetic: to compute 19 × 31 in your head, it is clever to multiply (20 − 1) by (30 + 1), but then, as you expand the product by applying the law of distributivity, you get negative values.
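Spelled out: 19 × 31 = (20 − 1) × (30 + 1) = 600 + 20 − 30 − 1 = 589; the intermediate terms -30 and -1, the products involving the negative term -1, are the unsettling values in question.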

De Morgan too

We move on by three decades to England and Augustus De Morgan, yes, the one who came up with the two famous duality laws of logic. De Morgan in 1831, as cited by Raimi:

8-3 is easily understood; 3 can be taken from 8 and the remainder is 5; but 3-8 is an impossibility; it requires you to take from 3 more than there is in 3, which is absurd. If such an expression as 3-8 should be the answer to a problem, it would denote either that there was some absurdity inherent in the problem itself, or in the manner of putting it into an equation.

Raimi points out that “De Morgan is not naïve” but wants to caution students about possible errors. Maybe, but we are back to fear and to words such as “absurd”, as used by Chuquet three centuries before. De Morgan (cited by Boyé) doubles down in his reluctance to accept negatives as numbers:

0 − a is just as inconceivable as -a.

Here is an example. A father is 56 years old and his [son] is 29 years old. In how many years will the father’s age be twice his son’s age? Let x be that number of years; x satisfies 56 +x = 2 (29 + x). We find x = -2.

Great, we say, he got it! This simple result is screaming at De Morgan but he has to reject it:

This result is absurd. However if we change x into -x and correspondingly solve 56 − x = 2 (29 − x), we find x = 2. The [previous] negative answer shows that we had made an error in the initial phrasing of the equation.

In other words, if you do not like the solution, change the problem! I too can remember a few exam situations in which I would have loved to make an equation more sympathetic by replacing a plus sign with a minus. Too bad no one told me I could.

De Morgan’s comment is remarkable, as the “phrasing of the equation” contained no “error” whatsoever. The equation correctly reflected the problem as posed. One could find the statement of the problem mischievous (“in how many years” suggests a solution in the future whereas there is only one in the past), but the equation is meaningful and has a solution — one, however, that horrifies De Morgan. As a result, when discussing the quadratic (second-degree) equation ax² + bx + c = 0, instead of accepting that a, b and c can be negative, he distinguishes no fewer than 6 cases, such as ax² − bx + c = 0, ax² + bx − c = 0 etc. The coefficients are always non-negative; it is the operators that change between + and −. As a consequence, the discriminant actually has two possible values, the one familiar to us, b² − 4ac, but also b² + 4ac for some of the cases. According to Raimi, many American textbooks of the 19th century taught that approach, forcing students to remember all six cases. (For a report about a current teaching distortion of the same topic, see a recent article on the present blog, “Mathematics Is Not a Game of Hit and Miss”, here.)
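(For instance, in the case ax² + bx − c = 0, with a, b and c all non-negative, the root formula becomes x = (-b ± √(b² + 4ac)) / 2a, whence the second form of the discriminant.)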

De Morgan (cited here by Boyé) feels the need to turn this reluctance to use negative numbers into a general rule:

When the answer to a problem is negative, by changing the sign of x in the equation that produced the result, we can discover that an error was made in the method that served to form this equation, or show that the question asked by the problem is too limited.

Sure! It is not just “if the facts do not fit the theory, change the facts” (a sarcastic definition of bad science), but also “if you do not like the solution, change the problem”. All the more unnecessary since, if we keep the original problem, the computed solution, x = -2, makes perfect sense to a modern reader (who, thanks to the work of countless mathematicians over centuries, learned negative numbers in grade school, and does not spend time wondering whether they mean something): the father was twice his son’s age two years ago. The past is a negative future. But to see things this way, and to accept that there is nothing fishy here, requires a mindset for which an early-19th-century mathematician was obviously not ready.

And Pascal, and Maclaurin

De Morgan was not just a mathematician but a great mathematical innovator. What is remarkable in all such statements against negative numbers is that they do not emanate from little minds, unable to grasp abstractions. Quite the contrary! These negative-number-skeptics are outstanding mathematicians. Lajugie gives more examples from the very top. Blaise Pascal in 1670:

Too much truth surprises; I know people who cannot understand that when you deduct 4 from zero, what remains is zero.

(Oh yes? one is tempted to tell the originator of probability theory, who was fascinated by betting and games of chance: then I put the 4 back and get 4? Quick way to get rich. Give me the address of that casino please.) A friend of Pascal, skeptical about the equality -1 / 1 = 1 / -1: “How could a smaller number be to a larger one as a larger one to a smaller one?”. An English contemporary, John Wallis, one of the creators of infinitesimal calculus — again, not a nitwit! — complains that a / 0 is infinity, but since in a / -1 the denominator is less than zero it must follow that a / -1, which is less than zero (since it is negative by the rule of signs), must also be greater than infinity! Nice one actually.

This apparent paradox also bothered the great scientist D’Alembert, the 18th-century co-editor of the Encyclopédie, who resolves it, so to speak, by stating (as cited by Heeffer) that “One can only go from positive to negative through either zero or through infinity”; so unlike Wallis he accepts that 1 / -a is negative, but only because it becomes negative when it passes through infinity. D’Alembert concludes (I am again following Heeffer) that it is wrong to say that negative numbers are always smaller than zero. Euler was similarly bothered and similarly looking for explanations through infinity: what does Leibniz’s expansion of 1 / (1 − x) into 1 + x + x² + x³ + … become for x = 2? Well, the sum 1 + 2 + 4 + 8 + … diverges, so 1 / -1 is infinity! (To a modern eye the expansion is valid only for |x| < 1, but the theory of convergence did not exist yet.)

We all know the name “Maclaurin” from the eponymous series. Colin Maclaurin wrote in 1742, decades after Pascal (Boyé):

The use of the negative sign in algebra leads to several consequences that one initially has trouble accepting and has led to ideas that seem not to have any real foundation.

Again the supposed trouble is the absence of an immediately visible connection to everyday reality (a “real foundation”). And again Maclaurin accepts that quantities can be negative, but numbers cannot:

While abstract quantities can be both negative and positive, concrete quantities are not always capable of being the opposite of each other.

(cited by Kilhamn). Apparently Colin’s wife Anne never thought of buying him a Réaumur thermometer (see below) for his birthday.

Yes, two negatives make a positive

We may note that the authors cited above, and many of their contemporaries, had no issue manipulating negative quantities in some contexts and accepting the law of signs, brilliantly expressed by the Indian mathematician Brahmagupta in the early 7th century (not a typo); as cited by Rogers:

A debt minus zero is a debt.
A fortune minus zero is a fortune.
Zero minus zero is a zero.
A debt subtracted from zero is a fortune.
A fortune subtracted from zero is a debt.
The product of zero multiplied by a debt or fortune is zero.
The product of zero multiplied by zero is zero.
The product or quotient of two fortunes is one fortune.
The product or quotient of two debts is one fortune.
The product or quotient of a debt and a fortune is a debt.
The product or quotient of a fortune and a debt is a debt.

That view must have been clear to accountants. Whatever Pascal may have thought, 4 francs removed from nothing do not vanish; they become a debt. What the great mathematicians cited above could not fathom was that there is such a thing as a negative number. You can count up as far as your patience will let you; you can then count down, but you will inevitably stop. Everyone knows that, and even Pascal or Euler have trouble going beyond. (Old mathematical joke: “Do you know about the mathematician who was afraid of negative numbers? He will stop at nothing to avoid them”.)

The conceptual jump that took centuries to achieve was to accept that there are not only negative quantities, but negative numbers: numbers in their own right, not just temporarily negated positive numbers (the only ones on which we commonly rely in everyday life), prefaced with a minus sign because we want to use them as “debts”, but with the firm intention to move them back to the other side so as to restore their positivity — their supposed naturalness — at the end of the computation. We have seen superior minds “stopping at nothing” to avoid that step.

Others were bolder; Schubring has a long presentation of how Fontenelle, an 18th-century French scientist and philosopher who contributed to many fields of knowledge, made the leap.

Not everyone may yet get it

While I implied above that today even small children understand the concept, we may note in passing that there may still be people for whom it remains a challenge. Lajugie notes that the Fahrenheit temperature scale frees people from having to think about negative temperatures in ordinary circumstances, but since the 18th century the (much more reasonable) Réaumur thermometer and the Celsius scale have gone below as well as above zero, helping people get familiar with negative values as something quite normal and not scary. (Will the US ever switch?)

Maybe the battle is not entirely won. Thanks to Rogers I learned about the 2007 lottery incident in the United Kingdom of Great Britain and Northern Ireland, where players could win by scratching away, on a card, a temperature lower than the displayed figure. Some temperatures were below freezing. The game had to be pulled after less than a week as a result of player confusion. Example complaints included this one from a 23-year-old who was adamant she should have won:

On one of my cards it said I had to find temperatures lower than -8. The numbers I uncovered were -6 and -7 so I thought I had won, and so did the woman in the shop. But when she scanned the card the machine said I hadn’t. I phoned Camelot [the lottery office] and they fobbed me off with some story that -6 is higher – not lower – than -8 but I’m not having it. I think Camelot are giving people the wrong impression – the card doesn’t say to look for a colder or warmer temperature, it says to look for a higher or lower number. Six is a lower number than 8. Imagine how many people have been misled.

Again, quantities versus numbers. As we have seen, she could claim solid precedent for this reasoning. Most people, of course, have figured out that while 8 is greater than 6 (actually, because of that), -6 is greater than -8. But as Lajugie points out, the modern, rigorous definition of negative numbers is (in the standard approach) far from the physical intuition (which typically looks like the two-directional line pictured at the beginning of this article, with numbers spreading away from zero towards both the right and the left). The picture helps, but it is only a picture.

Away from the perceptible world

If we ignore the intuition coming from observing a Réaumur or Celsius thermometer (which does provide a “real world” guide), the early deniers of negative numbers were right that this concept does not directly reflect the experiential understanding of numbers, readily accessible to everyone. The general progress of science, however, has involved moving away from such immediate intuition. Everyday adventures (such as falling on the floor) absolutely do not suggest to us that matter is made of sparse atoms interacting through electrical and magnetic phenomena. This march towards abstraction has guided the evolution of modern science — most strikingly, the evolution of modern mathematics.

Some lament this trend; think of the negative reactions to the so-called “new math”. (Not from me. I was caught by the breaking of the wave and loved every minute of it.) But there is no going back; in addition, it is well known that some of the most abstract mathematical developments, initially pursued without any perceived connection with reality, found momentous unexpected applications later on; two famous examples are Minkowski’s space-time formalism, which provided the mathematical framework for specifying relativity, and number-theoretical research about factoring large numbers into primes, which made modern cryptography (and hence e-commerce) possible.

Negative numbers too required abstraction to acquire mathematical legitimacy. That step required setting aside the appeal to intuition and considering the concept solely through its posited properties. We computer scientists would say “applying the abstract data type approach”. The switch took place sometime in the middle of the 19th century, spurred among others by Évariste Galois. The German mathematician Hermann Hankel — who lived only a little longer than Galois — explained clearly how this transition occurred for negative numbers (cited by Boyé among others):

The [concept of] number is no longer today a thing, a substance that is supposed to exist outside of the thinking subject or the objects that lead to it being considered; it is no longer an independent principle, as the Pythagoreans thought. […] The mathematician considers as impossible only that which is logically impossible, in the sense of implying a contradiction. […] But if the numbers under study are logically possible, if the underlying concept is defined clearly and distinctly, the question can no longer be whether a substrate exists in the world of reality.

A very modern view: if you can dream it, and you can make it free of contradiction (well, Hankel lived in the blissful times before Gödel), then you can consider it exists. An engineer might replace the second of these conditions by: if you can build it. And a software engineer, by: if you can compile and run it. In the end it is all the same idea.

Formally: a general integer is an equivalence class

In modern mathematics, while no one forbids you from clinging for help to some concrete intuition such as the Celsius scale, it is not part of the definition. Negative numbers are formally defined members of the zoo.

For those interested (and not remembering the details), the rigorous definition goes like this. We start from zero-or-positive integers (the set N of “natural” numbers) and consider pairs [a, b] of such numbers (as we would do to define rationals, but the sequel quickly diverges). We define an equivalence relation: [a, b] is equivalent to another pair [a’, b’] if a + b’ = a’ + b. Then we can define the set Z of all integers (positive, zero, negative) as the quotient of N x N by that relation. The intuition is that the characteristic property of an equivalence class, such as [1, 4], [2, 5], [3, 6]…, is that b − a, the difference between the second and first values, is the same for all pairs: 3 in this example (4 − 1, 5 − 2, 6 − 3 etc.). At least that property holds for b >= a; since we are starting from N, subtraction is defined only in that case. But then if we take that quotient as the definition of Z, we call members of that set “negative”, by pure convention, whenever b < a (if this property holds for one of the pairs in an equivalence class it holds for all of them), and positive if b > a. Zero is obtained for a = b.

We reestablish the connection with our good old natural integers by identifying N with the subset of Z for which b >= a. (This is an informal statement; the correct technical phrasing is that there is a “bijection” — a one-to-one correspondence, in fact an isomorphism — between that subset and N.) So we have plunged, or “embedded”, N into something bigger, to which most of its treasured properties (associativity and commutativity of addition etc.) immediately spread, while some limitations disappear; in particular, unlike in N, we can now subtract any Z integer from any other.

We also get the opposites of numbers as a result: for any m in Z, we can easily prove that there is another one n such that m + n = 0. That n can be written -m. The property is true for both positive and negative numbers, concepts that are also easy to define: we show that “>” is one of those operations that extend from N to Z, and the positive numbers are those m such that m > 0. Then if m is positive -m is negative, and conversely; 0 is the only number for which m = -m.
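For the computationally minded, the whole construction fits in a few lines of executable code. Here is a minimal sketch in Python (a mere illustration of the definitions above, with names of my own choosing, using the convention that the pair [a, b] stands for the difference b − a):

    # Integers represented by pairs (a, b) of natural numbers, standing for b - a.

    def equivalent(p, q):
        # [a, b] ~ [a2, b2]  if and only if  a + b2 = a2 + b
        (a, b), (a2, b2) = p, q
        return a + b2 == a2 + b

    def add(p, q):
        # componentwise addition; the result is compatible with the equivalence
        (a, b), (a2, b2) = p, q
        return (a + a2, b + b2)

    def opposite(p):
        # swapping the components yields the opposite: (a, b) + (b, a) ~ (0, 0)
        (a, b) = p
        return (b, a)

    def is_negative(p):
        (a, b) = p
        return b < a

    three = (1, 4)                  # one representative of the class of 3
    minus_three = opposite(three)   # (4, 1), a representative of -3
    assert equivalent(add(three, minus_three), (0, 0))   # m + (-m) = 0
    assert is_negative(minus_three) and not is_negative(three)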

Remarkably, Z too is in one-to-one correspondence with N. (It is one of the definitions of an infinite set that it can be in one-to-one correspondence with one of its strict subsets, something that is obviously not possible for a finite set. To shine at cocktail parties you can refer to this property as “Dedekind-infinite”.) In other words, we have uncovered yet a new attraction of Hilbert’s Grand Hotel: the hotel has an annex, ready for the case of a guest coming with an unannounced companion. The companion will be hosted in the annex, in a room uniquely paired with the original guest’s room. The annex is a second hotel, but it is not exactly like the first: it does not have an annex of its own in the form of yet another hotel. It does have an annex, but that is the original hotel (the hotel of which it itself is the annex).
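One such correspondence, among many possible ones, enumerates Z as 0, -1, 1, -2, 2, …; as a small sketch:

    def nat_to_int(n):
        # even naturals map to 0, 1, 2, ...; odd naturals to -1, -2, -3, ...
        return n // 2 if n % 2 == 0 else -(n + 1) // 2

    assert [nat_to_int(n) for n in range(7)] == [0, -1, 1, -2, 2, -3, 3]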

If you were not aware of the construction through equivalence classes of pairs and your reaction is “so much ado about so little! I do not need any of this to understand negative numbers and to know that m + -m = 0”, well, maybe, but you are missing part of the story: the observation that even the “natural” numbers are not that natural. Those we can readily apprehend as part of “natural” reality are the ones from 1 to something like 1000, denoting quantities that we can reasonably count. If you really have extraordinary patience and time, make this 100,000 or even 1 million; that does not change the argument.

Even zero, as noted, took millennia to be recognized as a number. Beyond the numbers that we can readily fathom in relation to our experience at human scale, the set of natural integers is also an intellectual fiction. (Its official construction in the modern mathematical canon is seemingly even more contorted than the extension to Z sketched above: in the so-called Zermelo-Fraenkel theory (more pickup lines for cocktail parties!), N starts with the empty set for 0, and each subsequent number is obtained by adding to the previous set one new element: that previous set itself. It is clearer with symbols: ø, {ø}, {ø, {ø}}, {ø, {ø}, {ø, {ø}}}, ….)
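That construction too can be played with directly; here is a small sketch (using Python frozensets, since the sets involved must themselves be members of sets):

    from itertools import islice

    def von_neumann_naturals():
        # 0 is the empty set; the successor of n is the union of n and {n}
        n = frozenset()
        while True:
            yield n
            n = n | frozenset([n])

    first_four = list(islice(von_neumann_naturals(), 4))
    assert len(first_four[3]) == 3   # the number 3, as a set, has exactly 3 elements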

Coming back to negative numbers, Riemann (1861, cited by Schubring) held their construction as a fundamental step in the generalization process that characterizes mathematics, beautifully explaining the process:

The original object of mathematics is the integer number; the field of study increases only gradually. This extension does not happen arbitrarily, however; it is always motivated by the fact that the initially restricted view leads toward a need for such an extension. Thus the task of subtraction requires us to seek such quantities, or to extend our concept of quantity in such a way that its execution is always possible, thus guiding us to the concept of the negative.

Nature and nurture

The generalization process is also a process of abstraction. The move away from the “natural” and “intuitive” is inevitable to understand negative numbers. All the misunderstandings and fears by great minds, reviewed above, were precisely caused by an exaggerated, desperate attempt to cling to supposedly natural concepts. And we only talked about negative numbers! Similar or worse resistance met the introduction of imaginary and complex numbers (the names themselves reflect the trepidation!), quaternions and other fruitful but artificial creations of mathematics. Millennia before, the Greeks had experienced a similar shock when they realized that quantities such as the square root of 2 (and, as was proved much later, π) could not be expressed as ratios of integers.

Innovation occurs when someone sets out to disprove a statement of impossibility. (This technique also lies behind one approach to solving puzzles and riddles: you despair that there is no way out; then try to prove that there is no solution. Failing to complete that proof might end up opening for you the path to one.)

Parallels exist between innovators and children. Children do not know yet that some things are impossible; they make up ways. Right now I am sitting next to the Rhine and I would gladly take a short walk on the other bank, but I do not want to go all the way to the bridge and back. If I were 4 years old, I would dream up some magic carpet or other fancy device, inferred from bedtime stories, that would instantly transport me there. We grow up and learn that there are no magic carpets, but true innovators who see an unsolved problem refuse to accept that state of affairs.

In their games, children often use the conditional: “I would be a princess, and you would be a magician!”. Innovators do this too when they refuse to be stopped by conventional-wisdom statements of impossibility. They set out to disprove the statements. The French expression “prouver le mouvement en marchant”, prove movement by walking, refers to the Greek philosophers Diogenes of Sinope and  Zeno of Elea. Zeno, the story goes, used the paradox of Achilles and the tortoise to claim he had proved that movement is impossible. Diogenes proved the reverse by starting to walk.

In mathematics and in computer science, we are even more like children because we can in fact summon our magic carpets — build anything we dream of, provided we can define it properly. Mathematics and computer science are among the best illustrations of Yuval Noah Harari’s thesis that a defining characteristic of the human race is our ability to tell ourselves stories, including very large and complex stories. A mathematical theory is a story that we tell ourselves and to which we can convert other mathematicians (plus, if the theory is really successful, generations of future students). Computer programs are the same, with the somewhat lateral extra condition that we must also enable some computing system to execute them, although that system is itself a powerful story that has undergone the same process. You can find variants of these observations in such famous pronouncements as Butler Lampson’s “in computer science, we can solve any problem by introducing an extra level of indirection” and Alan Kay’s “the best way to predict the future is to invent it”.

There is a difference, however, with children’s role-playing; and it can have dramatic effects. Children can indulge in make-believe for quite some time, continuing to live their illusions until they grow up and become reasonable. Normally they will not experience bad consequences (well, apart from the child who believes a little too hard, or from a window a little too high, that his arms really are wings). In adult innovation, sooner or later you have to reconcile the products of your imagination with the world. It may be the physical world (your autonomous robot was fantastic in the lab but requires heavy batteries that make it impractical), but things are just as bad with the virtual worlds of mathematics and software. It is great to define and extend your own freaky artificial worlds, but at some point you have to make sure they are consistent not just with already defined worlds but with themselves. As noted earlier, a mathematical concoction, however audacious, should be free of contradictions; and a software concept, however powerful, should be implementable. (Efficiently implementable.)

By any measure the most breathtaking virtual construction of modern mathematics is Cantor’s set theory, which scared many mathematicians, the way negative numbers had terrified their predecessors. (Case in point: the editor of a journal to which Cantor had submitted a paper wrote that it was “a hundred years too soon”. Cantor did not want to wait until 1984. The great mathematician Kronecker described him as “a corrupter of youth”. And so on.) More enlightened colleagues, however, soon recognized the work as ushering in a new era. Hilbert, in particular, was a great supporter, as were many of the top names in several countries. Then intellectual disaster struck.

Cantor himself and others, most famously Russell in a remark included in a letter to Frege, noticed a problem. If sets can contain other sets, and even contain themselves (the set of infinite sets must be infinite), what do we make of the set of all sets that do not contain themselves? Variants of this simple question so shook the mathematical edifice that it took half a century to put things back in order.

Dream, check, build

Cantor, for his part, went into depression and illness. He died destitute and desperate. There may not have been a direct cause-and-effect relationship, but certainly the intellectual rejection and crisis did not help.

All the sadder that in the end set theory, after significant cleanup, turned out to be one of the biggest successes in the history of mathematics. We still discuss the paradoxes, but it is unlikely that today they prevent anyone from sleeping soundly at night.

Unlike those genuinely disturbing paradoxes of set theory, the paradoxes that led mathematicians of previous centuries to reject negative numbers were apparent only. They were not paradoxes but tokens of intellectual timidity.

The sole reason for fearing and skirting negative numbers was an inability to accept a construction that contradicted a simplistic view of physical reality. As with object-oriented programming and many other bold advances, all that was required was the audacity to take imagined abstractions seriously.

Dream it; check it; build it.

 


Freely accessible books

Recently I prepared some of my books for free access on the Web (after gaining agreement from the publishers). Here are the corresponding links. They actually point to pages that present the respective books and have further links to the actual PDF versions.

Although the texts are essentially those of the books as published, I was able in most cases to make some improvements, in particular to the formatting, and to introduce some hyperlinking, for example in tables of contents, to facilitate online navigation.

If you cite any of these books please use the links given here; then you know that you are referring your readers to a legal and up-to-date version. In particular, there is a plethora of pirated copies of Object-Oriented Software Construction on various sites, with bad formatting, no copyright acknowledgment, and none of the improvements. academia.edu hosts one of them, downloadable. I wrote to them and they did not even answer.

Here are the books and the links.

  • Introduction to the Theory of Programming Languages (Prentice Hall, 1990): A general introduction to formal reasoning about programs and programming languages. Written without heavy formal baggage so as to be understandable by programmers who do not have a special mathematical background. Full text freely available from here.
  • Object Success (Prentice Hall, 1995): A general presentation of object technology, meant in particular for managers and decision-makers, presenting the essential OO ideas and their effect on project management and corporate culture. Full text freely available from here.
  • Object-Oriented Software Construction, 2nd edition (Prentice Hall, 1997): The best-known of my books, providing an extensive (and long!) presentation of object technology, with particular emphasis on software engineering aspects, including Design by Contract. Introduced many ideas, including some of the now classic design patterns (Command, called “undo-redo”; Bridge, called “handle”; etc.). Full text freely available from here.

In addition, let me include links to recent books published by Springer; they are not freely available, but many people can gain free access through their institutions:

  • Touch of Class: An Introduction to Programming Well Using Objects and Contracts. My introductory programming textbook, used for many years for the introductory programming course at ETH Zurich, taken altogether by something like 6000 students over 14 years (and nourished by that experience). The Springer page with the text (paywall) is here. There is also my own freely accessible book page with substantial extracts (read for example the chapter on recursion): here.
  • Agile! The Good, the Hype and the Ugly. A widely used presentation of agile methods, serving both as tutorial and as critique. The Springer page with the text (paywall) is here. There is also my own freely accessible book page with substantial extracts: here.
  • Handbook of Requirements and Business Analysis (Springer, 2022). A concise but comprehensive textbook on requirements engineering. The Springer page with the text (paywall) is here. My own book page, which will soon have substantial extracts and supplementary material, is here.

Also note the volume which I recently edited, The French School of Programming (Springer, 2024), with 13 chapters by top French computer scientists (and a chapter by me). The Springer page is here.

My full list of books is here. Full publication list in chronological order: here.

 


And what if everything went well?

I do not have a crystal ball and disaster may still strike: a terrorist attack, or disruption by the hateful scoundrels of the extreme left. (Meaning I would have to eat the words below, since they will be here for the record; but then we would have worse things to deplore.)

After initial doubts I have had an increasingly good feeling, as we got closer to the event, about the Olympic games. A few months ago I feared that unions would stage irresponsible strikes, but that does not seem to be happening; if peace was bought, it was worth it.

It looks like the organization has been truly efficient and professional, with the right dose of controlled craziness (for the opening ceremony). After all, for the first time in decades France has had a competent government, in place since 2017 and still there even if on the way out; and it shows.

What if everything went according to plan and beyond expectations? What if the unimaginable just happened now?

A skillfully orchestrated production, national unity even if temporary, smiles and welcomes — two weeks of bliss?

It is permitted to hold one’s breath and cross one’s fingers.

Bienvenue à Paris.


The French School of Programming

July 14 (still here for 15 minutes) is not a bad opportunity to announce the publication of a new book: The French School of Programming.

The book is a collection of thirteen chapters by rock stars of programming and software engineering research (plus me), preceded by a Foreword by Jim Woodcock and a Preface by me. Each chapter has a single author, reflecting the importance that the authors attached to the project. Split into four parts after chapter 1, the chapters are, in order:

1. The French School of Programming: A Personal View, by Gérard Berry (serving as a general presentation of the subsequent chapters).

Part I: Software Engineering

2. “Testing Can Be Formal Too”: 30 Years Later, by Marie-Claude Gaudel

3. A Short Visit to Distributed Computing Where Simplicity Is Considered a First-Class Property, by Michel Raynal

4. Modeling: From CASE Tools to SLE and Machine Learning, by Jean-Marc Jézéquel

5. At the Confluence of Software Engineering and Human-Computer Interaction: A Personal Account, by Joëlle Coutaz

Part II: Programming Language Mechanisms and Type Systems

6. From Procedures, Objects, Actors, Components, Services, to Agents, by Jean-Pierre Briot

7. Semantics and Syntax, Between Computer Science and Mathematics, by Pierre-Louis Curien

8. Some Remarks About Dependent Type Theory, by Thierry Coquand

Part III: Theory

9. A Personal Historical Perspective on Abstract Interpretation, by Patrick Cousot

10. Tracking Redexes in the Lambda Calculus, by Jean-Jacques Lévy

11. Confluence of Terminating Rewriting Computations, by Jean-Pierre Jouannaud

Part IV: Language Design and Programming Methodology

12. Programming with Union, Intersection, and Negation Types, by Giuseppe Castagna

13. Right and Wrong: Ten Choices in Language Design, by Bertrand Meyer

What is the “French School of Programming”? As discussed in the Preface (although Jim Woodcock’s Foreword does not entirely agree) it is not anything defined in a formal sense, as the variety of approaches covered in the book amply demonstrates. What could be more different (for example) than Coq, OCaml (extensively referenced by several chapters) and Eiffel? Beyond the differences, however, there is a certain je ne sais quoi of commonality; to some extent, in fact, je sais quoi: reliance on mathematical principles, a constant quest for simplicity, a taste for elegance. It will be for the readers to judge.

As sole authors of their chapters, the authors felt free to share some of their deepest insights and thoughts. See for example Thierry Coquand’s discussion of the concepts that led to the widely successful Coq proof system, Marie-Claude Gaudel’s new look at her seminal testing work of 30 years ago, and Patrick Cousot’s detailed recounting of the intellectual path that led him and Radhia to invent abstract interpretation.


The French School of Programming
Edited by Bertrand Meyer
Springer, 2024. xxiv + 439 pages

Book page on Springer site
Amazon US page
Amazon France page
Amazon Germany page

The book is expensive (I tried hard to do something about it, and failed). But many readers should be able to download it, or individual chapters, for free through their institutions.

It was a privilege for me to take this project to completion and work with such extraordinary authors who produced such a collection of gems.


Descente aux enfers

What can one do? A country a millennium and a half old is in the process of committing suicide. However tempting despair may be, there is still time to act.

The worst scenario is the threat from the left. What remained of the social democrats has prostrated itself before a band of extremists determined to destroy every social structure, openly defending the most bloodthirsty terrorists, and led by an apprentice dictator thirsting for absolute power and revenge in the purest Stalinist tradition. Countering them is the absolute priority: block the left.

Those on the other side, while less immediately dangerous, are hardly any better. Barely free of their Pétainist origins, they are bankrolled by Moscow, and their arrogance is matched only by their incompetence. By capsizing France they risk dragging Europe down in the shipwreck, opening the door to Russian aggression. First the Baltic states, then Poland, and who next?

For the first time in decades France had a devoted, honest and competent president and government. Serious, educated people, driven by concern for the public good and determined to solve the country’s structural problems; people who in just a few years had defeated the cancer of unemployment, rebalanced a pension system headed for catastrophe, restored France’s international credibility, made the country attractive to investors, managed the health crisis effectively, cleaned up the conditions of immigration, attacked Islamism and avoided the terrorist attacks of the preceding presidential terms… The list could go on. Faced with this massively successful record, the private and public news media, far worse than the much-maligned social networks, unleashed themselves against this president and his government year after year, month after month, day after day. The historians who analyze the debacle will know what blame to assign to the so-called newspapers of record, and to a large part of the intellectuals, the very people who should have been the rampart of reason and managed only to be the actors of an unforgivable trahison des clercs.

Aberrant, unthinkable and abominable.

Any indulgence towards the extremists of the worse stripe, or of the other one, would make you complicit in the inevitable historic debacle that would follow their election. To avoid absolute disaster, every sensible person must vote on Sunday for the local candidate of the Renaissance list.

 


Upside down

What is going on?

In the US, the leading presidential candidate is a vulgar crook, a serial business failure and convicted business fraudster; more ominously, he acts like a vassal to Putin. His first term was an endless string of catastrophes, including the deaths of hundreds of thousands of his compatriots through gross mismanagement. And yet he mesmerizes the entire Republican party and half of the population, which despises his adversary, one of the most skilled presidents ever, surrounded by an A-team of aides, who brought back financial stability — taking the Dow to unheard-of levels —, defended Israel’s right to exist against the extremists in his own camp, and re-established respect for the US. But no, the electorate is ready to elect the sinister buffoon again and thereby bring to an end the longest democratic run in the history of the world.

Have the American people gone mad?

France has its best government in 50 years; a young, energetic, smart president, he too surrounded by an incredible team of passionate men and women dedicated to the public good and to solving the country’s ills, one at a time. And whom do the common folk, for once united with a large segment of the educated class, deeply hate? That president and his team. Whom do they idolize? The extreme right, led by Kremlin-funded ignorant demagogues, unable to manage anything but quick to fan any discontent anywhere. Also the extreme left, which has turned into the official antisemitic party in the hope of winning the vote of the banlieues by pronouncements that seem to come out of Der Stürmer. In between, the moderate left and the moderate right are representatives of the governments which for decades have not dared to address any of France’s structural problems. The press and mass media, including the previously neutral references of record, eager to prove their independence, savage the government day in and day out, good initiatives and bad. (Mostly good actually, but who cares? Nasty headlines make you look cool.) For the European elections of next Sunday, Macron has fielded an outstanding slate of determined professionals in his image; and yet all the polls suggest a landslide for the extreme-right list, led by a know-nothing who in his years at the European parliament missed most sessions and did not produce a single law, report or result.

Have the French people gone mad?

Meanwhile top universities in Western Europe, the US and Australia fall prey to supporters of terrorism, defenders of the rapists and killers and butchers of women and children. The oh-so-nice bourgeois leftist press publishes ignoble articles glorifying the enemies of peace who advocate the destruction of the only democracy in the Middle East. (The Guardian, the favorite reading of intellectuals in the English-speaking world, deserves a special mention in abjection. Its uppity journalists cannot let Rishi Sunak state that two plus three equals five without firing a volley of attacks and mockery. And as soon as an anti-Israel bigot makes a statement, they religiously amplify it, shedding any semblance of a critical mindset and rational analysis.) Young people are being brainwashed with words like Apartheid (they apparently do not know that one fifth of Israeli citizens are Arabs, most of them Muslim or Christian, with a strong place in society, representatives in Parliament and at the Supreme Court) and Genocide (they apparently do not know that Israel voluntarily relinquished Gaza, removing every reluctant Israeli by force, and that the Palestinian population has grown by a factor of five since 1950). Disinformation generously fanned by authoritarian regimes relentlessly tries to convince us that the aggressor is the victim and the victim is the aggressor. To make us forget that the terrorists immerse themselves in the civilian population, so as to maximize casualties which they then attribute to Israel. That they bar those civilians from their immense underground network, reserving it for combatants and hostages. That in cold blood and out of sheer hatred they tortured and murdered hundreds of innocent civilians, gang-raping the women with proud sadism. That they refuse to release those they are still holding. That they relied on the world’s compassion and subsidies to plan and implement their murderous rampage. They hide the fate of the hundreds of thousands of Jews who were forcibly expelled from Arab countries (any “right of return” there?) and had to find new countries and build new lives. And yet from Columbia in New York to Sciences Po in Paris, activist students insult democrats and promote obscurantism. (One of the most extreme examples, which would be funny if it were not tragic, is the “LGBTQ for Gaza” movement, apparently oblivious to what happens to homosexuals in Gaza: torture first, then usually being thrown from the roof. As someone wrote, the slogan evokes notions of “Turkeys for Thanksgiving”.) The truth is that the Israelis, by defending themselves, are defending us from fanatics who want to bring the Western world back one thousand years, to a society of religious absolutism, power of the warlords, constant fear of violence and abuse, subjugation of women, and absence of any form of freedom.

Have the supposed future elites of the West gone mad?

Others too are defending us by defending themselves: the Ukrainians. Resisting the savage onslaught of a neighbor many times bigger and richer, they are shedding their blood to defend their right to freedom and democracy, values that we in the West have taken for granted. And yet many people in that same West grumble about the money that we are giving them and the risk of provoking Putin. (As if he needed provocation to launch what we thought would never happen again in Europe, an imperialistic attack motivated only by a thirst for power and domination.) The West’s mixed reaction is emboldening China’s own tyrant, intent on destroying a thriving democracy. Republicans in the US, egged on by Trump, delayed by half a year the provision of supplies needed as a matter of survival (even though much of that money comes back to the US in the form of weapon purchases!). Here too Macron, today’s European statesman in the lineage of Adenauer, Monnet, Schuman and de Gaulle, is showing the way, along with the leaders of Eastern European countries (the Baltic republics, Czechia, Poland), who on top of all their existential issues have to cope with the systematic obstruction of Hungary. The miserable German chancellor is, for his part, scared of his own shadow. Germany, with its addiction to Russian oil stemming from an idiotic and criminal rejection of nuclear power two decades ago, was a significant enabler of Putin’s ability to start this monstrous war, but today it refuses to play its part in coping with the consequences.

Have the Germans gone mad?

The world seems to be upside down.

This blog started out as a “technology blog” and branched into “technology+” as I started including topics from other domains, but mostly I have stayed away from politics. One major exception was an extensive article about Europe twelve years ago, in which I would not change anything today, especially days before crucial European elections. I prefer to write about what I know best: programming languages, programming methodology, software engineering, with occasional incursions into music, and once in a while some observation about the little ironies of life. But there are circumstances under which anyone who has had the benefit of learning to think — we do not even need the word “intellectual” — has to raise the alarm and explain that we risk losing everything.

Yes, we are at risk of losing everything that we have gained in the past millennium and which (along with economic progress, which it has enabled) makes life worth living: freedom of thought and action, tolerance, respect, democracy, generosity, protection of the weakest members of society, the prevalence of reason over arbitrary might, checks and balances on every kind of power, gender equality and other forms of giving everyone a chance. In the 1930s Julien Benda talked of La Trahison des Clercs, the treason of the educated, when he saw his peers endorse authoritarian (and ultimately murderous) theories from the left and the right. Something similar is happening today. We have been spoiled by those very advances of freedom, spoiled into thinking that we can show off by smugly promoting contrarian ideas, without realizing that they are not clever retorts in fancy conversations but part of a demolition process. Something like this happened in a previous generation: in 1968, it was fashionable for bourgeois youth to advocate Trotskyist or Maoist precepts. That was a lot of fun and made you look cool for a few years, before you became a professor, a middle manager or a capitalist. Today the stakes are much higher because the ruthless adversaries are at the door, with considerable means of physical destruction, threatening the very basis of modern, stable, pleasant society. They do not tolerate us, actually they despise us, but they have noticed that we tolerate them and they take every advantage of our cherished tolerance.

Let us not help them. If you ever feel tempted to forget our own collective interest, please remember that the surest feature of rational thinking (I do not even need to say “intelligence”) is the ability to distinguish the auxiliary from the essential. Today:

  • Biden is old: auxiliary. (He is as sharp as ever and has a brilliant team to support him.) Trump is unhinged and eager to become a dictator: essential.
  • Macron is arrogant: auxiliary. (Also, not true. He is just smarter than most and does not quite know how to hide it.) Le Pen, Bardella and co. are incompetent and nefarious: essential.
  • You do not agree with everything that Macron or Biden does: auxiliary; in a way, comforting. (Only in a dictatorship is the Supreme Leader always right, supposedly.) Trump wants to ban abortion to please the most extreme religious absolutists in his camp: essential.
  • The clever columnists from the Guardian and Le Monde find something awful in every carefully thought-out government initiative: auxiliary. The French extreme left and extreme right want to jeopardize the incredibly successful European project and pave the way for hostile, autocratic foreign powers: essential.

We cannot stay away. You cannot stay away. If you are in the US, a vote for Trump (as I have heard otherwise serious people advocate, on the basis of absurd arguments seemingly meant to make them sound cutely contrarian), or for some boutique competitor, is a catastrophe; it is crucial that you go cast your ballot for Biden and for other rational candidates. If you are in France, go vote for the Macron list this Sunday. In those countries and everywhere else, support politicians who are not subservient to an authoritarian regime.

Do your part. Vote for the competent and level-headed candidates against the crazies of all hues. Explain patiently to less educated and less informed people what is at stake and where right and wrong, evil and good truly lie.

Treat the defense of reason and freedom as if it were a matter of life and death, because it is.

 


Horribly transparent

A few years ago I was driving on a freeway in France and turned on the radio, chancing on France-Culture. (In passing it is fair to note the abundance of quality programs on that station. It has its share of empty Parisian intellectual chit-chat but much of the time I learn something interesting.) I was lucky: it was the start of a long discussion with Daniel Barenboim. Ever since, I have wanted to listen to it again but had forgotten the details, including the name of the program. I did remember that at some point the interviewer had found Barenboim in his hotel room, smoking a cigar and rooting for Argentina in its game against Switzerland at the beginning of the FIFA World Cup that it almost won; the latter detail helped find the date (thanks, Wikipedia) and, from it, the recording: here for part 1 and there for part 2.

On the side (again), Barenboim’s French is amazing. Even more so as YouTube has a multitude of interviews of him in seemingly just as perfect Italian, German, Spanish (his native language) and English; and he is also fluent in Hebrew. Hearing him in French, one needs a while to realize that he is not a native speaker; his almost imperceptible accent could be just that of some province. At some point he reveals himself through a trifling mistake that a French person would normally not make, like using “opéra” in the feminine as in Italian. (As an aside in the aside, I may be deluding myself in thinking that by default native French speakers know the word “opéra”, other than maybe as the moniker for a metro station in Paris. For one thing, under-40 Italians I meet usually know the latest Taylor Swift “song” but could not name a single Rossini aria, assuming they have even heard the name “Rossini”, other than maybe as the moniker for a meat dish. But let us not get dejected.) Ignoring these rare and small slips, his French is elegant if slightly passé (who says “peu importe” nowadays?).

(For an earlier article in this blog involving Barenboim — as well as Arthur Rubinstein — see here.)

The most fascinating part of the interview is the beginning, where the interviewer quizzes him on Mozart, of whom Barenboim is one of the best performers of modern times. He quotes Arthur Schnabel: “Mozart is too easy for children and too hard for adults”. (Schnabel’s actual quip has “artists” for “adults”, and there is this variant: “Children are given Mozart because of the small quantity of the notes; grown-ups avoid Mozart because of the great quality of the notes”.) Professional artists, explains Barenboim, strive to reconcile the depth that they now perceive with the naïve pleasure they found in the same music as children. Mozart’s music “weeps when it laughs and laughs when weeping”. Barenboim has this formula, which would be worth a treatise: Mozart’s music is “horriblement transparente”, horribly transparent.

Later in the recording he states that the 20th century distinguished itself by a tendency to deconstruction and fragmentation, and expresses the hope that the 21st will reconstruct and reunify. It is not taking that road.


Mathematics is not a game of hit and miss

I was recently looking at the math exercises of a 14-year-old, having to do with quadratic (second-degree) equations.

The first thing that caught my eye is not a surprise: the difference between school and life. The quadratic polynomials appearing in the exercises, such as x² – x – 6, all happen to have integer roots (3 and -2 in this case). Also, they all have 1 as the second-degree coefficient (the `a’ in a x² + b x + c). There are good pedagogical reasons for these choices: with more general parameters, solving the equation becomes a task of numerical computation, which has no connection to the topic. But in any real-life application (say, the computation of where a ball thrown into the air will hit the ground) the solution will not come ready-made as in these school exercises.

I found one of the exercises very good. It reads this way (this is a Zurich school, I am translating from the German): A pencil factory has two machines. To produce 100 pencils, the older machine would take 10 minutes more than both machines running together. If in a minute the newer machine produces 32 pencils, how many does the older one produce in the same time? It provides a good opportunity to practice how to model a problem by defining the appropriate mathematical parameters. Then — surprise — you get a quadratic equation whose solutions — surprise — are integers (only one of which makes sense). Useful gymnastics.
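For the record, here is one way the modeling can go (my own worked sketch, with my choice of unknown; the numbers are those of the exercise). Let x be the number of pencils the older machine produces per minute, so that the two machines together produce x + 32 per minute:

```latex
\frac{100}{x} = \frac{100}{x + 32} + 10
\;\Longrightarrow\;
10x^2 + 320x - 3200 = 0
\;\Longrightarrow\;
x^2 + 32x - 320 = (x - 8)(x + 40) = 0
```

Of the two roots, only x = 8 makes physical sense: the older machine produces 8 pencils per minute, taking 12.5 minutes for 100 pencils against 2.5 minutes for both machines together, a difference of 10 as required.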

Another interesting example is the following equation:

[equation shown as an image in the original: a fractional equation involving a square root, whose left-side denominator is the polynomial x² – x – 6 and whose right-side denominator contains the factor 5 (x – 3)]

Interesting because if you do not use a bit of insight you will not get anywhere; after all, there is a range of 4 for the exponents (from -1 for the square root to 0 for the constants, 1 for the terms in x, and 2 for the terms in x²), so squaring both sides to get rid of the square root, for example, would be hopeless. Now if you remember that this exercise goes with a lecture about second-degree equations and apply what you have learned about them, you get the roots of the polynomial in the denominator on the left, x² – x – 6, and can rewrite it as (x – 3) (x + 2). Then you notice that x – 3 also appears, multiplied by 5, in the right-side denominator; so you remove it on both sides, and from then on it is all downhill: you get another quadratic equation which — surprise — has simple roots.

Throughout these exercises I see, introduced from the start, an idea that not all people having learned quadratic equations remember: the rule that any second-degree polynomial a x² + b x + c can be written a (x – x0) (x – x1), where x0 and x1 are the roots (including when they are the same). It is often useful to turn such a polynomial into this form (particularly simple when a = 1).

Then I probed further and asked the student what he would do if the roots were not such simple integers. It turns out that he had no idea, since that is not something they are taught at that level! They have not heard about the notion of discriminant (b² – 4 a c), and how it gives the solutions through the standard formula: (-b ± d) / (2 a), where d is the square root of the discriminant.
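In code form the formula takes a few lines (a minimal sketch of my own, simply transcribing it; the function name is mine):

```python
import math

def solve_quadratic(a, b, c):
    """Real solutions of a*x**2 + b*x + c = 0, for a != 0."""
    disc = b * b - 4 * a * c          # the discriminant b^2 - 4ac
    if disc < 0:
        return []                     # no real solutions
    d = math.sqrt(disc)
    roots = {(-b + d) / (2 * a), (-b - d) / (2 * a)}   # a set merges a double root
    return sorted(roots)

print(solve_quadratic(1, -1, -6))   # the textbook example: [-2.0, 3.0]
print(solve_quadratic(1, -1, 6))    # discriminant 1 - 24 < 0: [] (no real roots)
```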

I realized that the way they are taught to “solve” the equation (I have to put the word in quotes) is to try to guess some integer values that will make x² + b x + c (good thing that a = 1 for all the examples!) zero. Specifically, you try out values u and v whose product is c and check whether (x – u) (x – v) works out to the given polynomial. If not, you continue guessing values until you find a pair that works. Mathematics as a game of hit-and-miss.

In the x² – x – 6 example, let’s see… -3 and +2? Oh no, they do not fit. 1 and -6? Bummer. 6 and -1? Also not. Maybe -2 and +3? Bingo!

This method of poking around until you find something that clicks seems to me a strange thing to teach. I should include a caveat here: I am not an expert in mathematical pedagogy (and not even a mathematician). So I am asking questions, not passing judgment. Also, the place being Switzerland, where processes are usually thought out carefully, especially in education, I have to assume that whoever designed this curriculum had some hunch of what he was doing. But the result is puzzling. The kind of mathematics that helps people (and on which today’s world rests, whether directly or through physics and computer science) is not about guessing results. It is about establishing rules, in the form of axioms and theorems, proving the latter (from the former), and then relying on them to derive whatever specific results you need. The edifice of rules has a deep and elaborate structure, devised over centuries by giants standing on the shoulders of giants standing on the shoulders of other giants. But once you have a rule you no longer have to go to the underlying layers; you directly use the result of the combined work of countless smart people. It does not matter how many times they tried to derive them and how many mistakes they made along the way: their work has been vetted many times, and you can use its outcome in full confidence.

If I use my guessing powers to try values that might make a x² + b x + c zero, I will succeed once in a while, particularly for schoolbook exercises. When I do not succeed, I have no clue whether the reason is my insufficient intellectual agility, the absence of simple solutions (integers or simple fractions), or the absence of any solution (students at the level under consideration have not heard of imaginary numbers yet). If I learn the formula (meaning, with a good teacher, not only learning it as a recipe but discovering why the recipe works), I am equipped to solve any quadratic equation, with or without simple integer solutions. When solving such equations I do not need to apply ingenuity to guess solutions, a remarkably pointless exercise (except maybe the first few times). I can reserve my ingenuity for more interesting pursuits.

Indeed there would be so much more to teach. I looked at the textbook with its many pages of examples of quadratic equations to solve; more accurately, to guess. All piecemeal, example by example; no attempt at generality (isn’t the quest for general results one of the prime characteristics of the mathematical spirit?). The students are not even taught that sometimes there are no solutions among the numbers they know; good luck to the enterprising student who, having found the roots of x² – x – 6 through the recommended approach of trial and error, confidently embarks on solving x² – x + 6 = 0 (it is almost the same and so cannot be much harder, right?).

All that space and student attention are wasted, at the expense of the theoretical and practical properties of quadratic equations, second-degree polynomials and parabolas. We could direct students to web sites such as this one where they can play with parabolas, try different parameters and animate the results. We could explain how quadratic equations serve as a fundamental tool for computing trajectories of projectiles. We could even introduce the notion of a parabolic mirror and its very practical benefit, which follows from a basic geometrical property. And we could put all this into a form that a 14-year-old will readily understand.

Mathematics is not just about applications. I also feel sorry that these children, by just being shown haphazard tricks, miss the sheer beauty of the underlying ideas, which struck many of us when we learned them at the same age. What a pity.

There may be a grand plan behind the way these topics get taught, but I do not see it. I only see an approach that seems poised to drill into young minds the conviction that mathematics is but a sad, boring and pointless game of tricks.

 

Thanks to Manuel Oriol for useful observations.

 

 


A new scientific index

The CF-Index, or Conference Frustration index, is an integer n (n ≥ 1) defined as follows. You are at a conference where your paper submission was rejected, sitting in the session devoted to that paper’s very topic. You think to yourself: “My paper was at least n times better than the average here”. That n is your CF-index.

It is a law of nature (like speed never exceeding that of light, or temperature never going below absolute zero) that n < 1 is impossible. (The reason is obvious: if you were not the kind to believe your work is at least as good as anyone else’s, you would have gone for another profession, one calling for modesty, realism and timidity — such as, say, politician.)  Values of n = 3 or 4 are normal. Beyond 10 you might consider seeking professional advice. (These observations have nothing to do with my being at ICSE right now.)


A remarkable group photo

On 13-15 September 1999 a symposium took place at St Catherine’s College in Oxford, in honor of Tony Hoare’s “retirement” from Oxford (the word is in quotes because he has had several further productive careers since). The organizers were Jim Woodcock, Bill Roscoe and Jim Davies. The proceedings are available as Millennial Perspectives in Computer Science, MacMillan Education UK, edited by Davies, Roscoe and Woodcock. The Symposium was a milestone event.

As part of a recent conversation on something else, YuQian Zhou (who was also there) sent me a group photo from the event, which I did not even know existed. I am including it below; it is actually a photo of a paper photo, but the resolution is good. It is a fascinating gallery of outstanding people in programming and verification. (How many Turing award winners can you spot? I see 7.)

Many thanks to YuQian Zhou, Jim Woodcock and Bill Roscoe for insights into the picture in discussions of the past two weeks.

[Group photo of the symposium participants]


Niklaus Wirth and the Importance of Being Simple

[This is a verbatim copy of a post in the Communications of the ACM blog, 9 January 2024.]

I am still in shock from the unexpected death of Niklaus Wirth eight days ago. If you allow a personal note (not the last one in this article): January 11, two days from now, was inscribed in my mind as the date of the next time he was coming to my home for dinner. Now it is the date set for his funeral.


Niklaus Wirth at the ACM Turing centenary celebration
San Francisco, 16 June 2012
(all photographs in this article are by B. Meyer)

A more composed person would wait before jotting down thoughts about Wirth’s contributions but I feel I should do it right now, even at the risk of being biased by fresh emotions.

Maybe I should first say why I have found myself, involuntarily, writing obituaries of computer scientists: Kristen Nygaard and Ole-Johan Dahl, Andrey Ershov, Jean Ichbiah, Watts Humphrey, John McCarthy, and most recently Barry Boehm (the last three in this very blog). You can find the list with comments and links to the eulogy texts on the corresponding section of my publication page. The reason is simple: I have had the privilege of frequenting giants of the discipline, tempered by the sadness of seeing some of them go away. (Fortunately many others are still around and kicking!) Such a circumstance is almost unbelievable: imagine someone who, as a student and young professional, discovered the works of Galileo, Descartes, Newton, Ampère, Faraday, Einstein, Planck and so on, devouring their writings and admiring their insights — and later on in his career got to meet all his heroes and conduct long conversations with them, for example in week-long workshops, or driving from a village deep in Bavaria (Marktoberdorf) to Munich airport. Not possible for a physicist, of course, but exactly the computer science equivalent of what happened to me. It was possible for someone of my generation to get to know some of the giants in the field, the founding fathers and mothers. In my case they included some of the heroes of programming languages and programming methodology (Wirth, Hoare, Dijkstra, Liskov, Parnas, McCarthy, Dahl, Nygaard, Knuth, Floyd, Gries, …) whom I idolized as a student without ever dreaming that I would one day meet them. It is natural then that I should share some of my appreciation for them.

My obituaries are neither formal, nor complete, nor objective; they are colored by my own experience and views. Perhaps you object to an author inserting himself into an obituary; if so, I sympathize, but then you should probably skip this article and its companions and go instead to Wikipedia and official biographies. (In the same vein, spurred at some point by Paul Halmos’s photographic record of mathematicians, I started my own picture gallery. I haven’t updated it recently, and the formatting shows the limits of my JavaScript skills, but it does provide some fresh, spontaneous and authentic snapshots of famous people and a few less famous but no less interesting ones. You can find it here. The pictures of Wirth accompanying this article are taken from it.)


Niklaus Wirth, Barbara Liskov, Donald Knuth
(ETH Zurich, 2005, on the occasion of conferring honorary doctorates to Liskov and Knuth)

A peculiarity of my knowledge of Wirth is that unlike his actual collaborators, who are better qualified to talk about his years of full activity, I never met him during that time. I was keenly aware of his work, avidly getting hold of anything he published, but from a distance. I only got to know him personally after his retirement from ETH Zurich (not surprisingly, since I joined ETH because of that retirement). In the more than twenty years that followed I learned immeasurably from conversations with him. He helped me in many ways to settle into the world of ETH, without ever imposing or interfering.

I also had the privilege of organizing in 2014, together with his longtime colleague Walter Gander, a symposium in honor of his 80th birthday, which featured a roster of prestigious speakers including some of the most famous of his former students (Martin Odersky, Clemens Szyperski, Michael Franz…) as well as Vint Cerf. Like all participants in this memorable event (see here for the program, slides, videos, pictures…) I learned more about his intellectual rigor and dedication, his passion for doing things right, and his fascinating personality.

Some of his distinctive qualities are embodied in a book published on the occasion of an earlier event, School of Niklaus Wirth: The Art of Simplicity (put together by his close collaborator Jürg Gutknecht with Laszlo Boszormenyi and Gustav Pomberger; see the Amazon page). The book, with its stunning white cover, is itself a model of beautiful design achieved through simplicity. It contains numerous reports and testimonials from his former students and colleagues about the various epochs of Wirth’s work.


Niklaus Wirth (right)
with F.L. Bauer, one of the founders of German computer science
Zurich, 22 June 2005

Various epochs and many different topics. Like a Renaissance man, or one of those 18th-century “philosophers” who knew no discipline boundaries, Wirth straddled many subjects. It was in particular still possible (and perhaps necessary) in his generation to pay attention to both hardware and software. Wirth is most remembered for his software work but he was also a hardware builder. The influence of his PhD supervisor, computer design pioneer and UC Berkeley professor Harry Huskey, certainly played a role.

Stirred by the discovery of a new world through two sabbaticals at Xerox PARC (Palo Alto Research Center, the mother lode of invention for many of today’s computer techniques) but unable to bring the innovative Xerox machines to Europe, Wirth developed his own modern workstations, Ceres and Lilith. (Apart from the Xerox stays, Wirth spent significant time in the US and Canada: Université Laval for his master’s degree, UC Berkeley for his PhD, then Stanford, but only as an assistant professor, which turned out to be Switzerland’s and ETH’s gain, as he returned in 1968.)

 


Lilith workstation and its mouse
(Public display in the CAB computer science building at ETH Zurich)

One of the Xerox contributions was the generalized use of the mouse (the invention of Doug Engelbart at the nearby SRI, then the Stanford Research Institute). Wirth immediately seized on the idea and helped found the Logitech company, which soon became, and remains today, a world leader in mouse technology.
Wirth returned to hardware-software codesign late in his career, in his last years at ETH and beyond, to work on self-driving model helicopters (one might say big drones) with a StrongARM-based hardware core. He was fascinated by the goal of maintaining stability, a challenge involving physics, mechanical engineering and electronic engineering in addition to software engineering.
These developments showed that Wirth was as talented an electronics engineer and designer as he was a software one. He retained his interest in hardware throughout his career; one of his maxims was indeed that the field remains driven by hardware advances, which make software progress possible. For all my pride as a software guy, I must admit that he was largely right: object-oriented programming, for example, became realistic once we had faster machines and more memory.

Software is of course what brought him the most fame. I struggle not to forget any key element of his list of major contributions. (I will come back to this article when emotions abate, and will add a proper bibliography of the corresponding Wirth publications.) He showed that it was possible to bring order to the world of machine-level programming through his introduction of the PL/360 structured assembly language for the IBM 360 architecture. He explained top-down design (“stepwise refinement“), as no one had done before, in a beautiful article that forever made the eight-queens problem famous. While David Gries had in his milestone book Compiler Construction for Digital Computers established compiler design as a systematic discipline, Wirth showed that compilers could be built simply and elegantly through recursive descent. That approach had a strong influence on language design, as will be discussed below in relation to Pascal.

The emphasis on simplicity and elegance carried over to his book on compiler construction. Another book, with the stunning title Algorithms + Data Structures = Programs, presented a clear and readable compendium of programming and algorithmic wisdom, collecting the essentials of what was known at the time.

And then, of course, the programming languages. Wirth’s name will forever remain tied to Pascal, a worldwide success thanks in particular to its early implementations (UCSD Pascal, as well as Borland Pascal by his former student Philippe Kahn) on microcomputers, a market that was exploding at just that time. Pascal’s dazzling spread was also helped by another of Wirth’s trademark concise and clear texts, the Pascal User Manual and Report, written with Kathleen Jensen. Another key component of Pascal’s success was the implementation technique, using a specially designed intermediate language, P-Code, the ancestor of today’s virtual machines. Back then the diversity of hardware architectures was a major obstacle to the spread of any programming language; Wirth’s ETH compiler produced P-Code, enabling anyone to port Pascal to a new computer type by writing a translator from P-Code to the appropriate machine code, a relatively simple task.
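To give a flavor of the approach, here is a toy sketch of my own (with made-up opcodes; the actual P-Code instruction set was considerably richer). The compiler targets one small abstract machine; porting then means rewriting only the part that translates, or as in this sketch directly executes, these instructions on the new computer:

```python
# A toy stack machine in the spirit of an intermediate language.
# (Illustrative only: these opcodes are invented, not Wirth's P-Code.)

def run(program):
    stack = []
    for op, *args in program:
        if op == "push":                  # push a constant
            stack.append(args[0])
        elif op == "add":                 # pop two values, push their sum
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "mul":                 # pop two values, push their product
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
        elif op == "print":
            print(stack.pop())

# Compute and print 3 * (4 + 5):
run([("push", 3), ("push", 4), ("push", 5), ("add",), ("mul",), ("print",)])
```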

Here I have a confession to make: other than the clear and simple keyword-based syntax, I never liked Pascal much. I even have a snide comment in my PhD thesis about Pascal being as small, tidy and exciting as a Swiss chalet. In some respects, cheekiness aside, I was wrong, in the sense that the limitations and exclusions of the language design were precisely what made compact implementations possible and widely successful. But the deeper reason for my lack of enthusiasm was that I had fallen in love with earlier designs from Wirth himself, who for several years, pre-Pascal, had been regularly churning out new language proposals, some academic, some (like PL/360) practical. One of the academic designs I liked was Euler, but I was particularly keen about Algol W, an extension and simplification of Algol 60 (designed by Wirth with the collaboration of Tony Hoare, and implemented in PL/360). I got to know it as a student at Stanford, which used it to teach programming. Algol W was a model of clarity and elegance. It is through Algol W that I started to understand what programming really is about; it had the right combination of freedom and limits. To me, Pascal, with all its strictures, was a step backward. As an Algol W devotee, I felt let down.
Algol W played, or more precisely almost played, a historical role. Once the world realized that Algol 60, a breakthrough in language design, was too ethereal to achieve practical success, experts started to work on a replacement. Wirth proposed Algol W, which the relevant committee at IFIP (International Federation for Information Processing) rejected in favor of a competing proposal by a group headed by the Dutch computer scientist (and somewhat unrequited Ph.D. supervisor of Edsger Dijkstra) Aad van Wijngaarden.

Wirth recognized Algol 68 for what it was, a catastrophe. (An example of how misguided the design was: Algol 68 promoted the concept of orthogonality, roughly stating that any two language mechanisms could be combined. Very elegant in principle, and perhaps appealing to some mathematicians, but suicidal: to make everything work with everything, you have to complicate the compiler to unbelievable extremes, whereas many of these combinations are of no use whatsoever to any programmer!) Wirth was vocal in his criticism and the community split for good. Algol W was a casualty of the conflict, as Wirth seems to have decided in reaction to the enormity of Algol 68 that simplicity and small size were the cardinal virtues of a language design, leading to Pascal, and then to its modular successors Modula and Oberon.

Continuing with my own perspective, I admired these designs, but when I saw Simula 67 and object-oriented programming I felt that I had come across a whole new level of expressive power, with the notion of class unifying types and modules, and stopped caring much for purely modular languages, including Ada as it was then. A particularly ill-considered feature of all these languages always irked me: the requirement that every module should be declared in two parts, interface and implementation. An example, in my view, of a good intention poorly realized and leading to nasty consequences. One of these consequences is that the information in the interface part inevitably gets repeated in the implementation part. Repetition, as David Parnas has taught us, is (particularly in the form of copy-paste) the programmer’s scary enemy. Any change needs to be checked and repeated in both the original and the duplicate. Any bug needs to be fixed in both. The better solution, instead of the interface-implementation separation, is to write everything in one place (the class of object-oriented programming) and then rely on tools to extract the interface view, as well as many other interesting views, from that single text.

In addition, modular languages offer one implementation for each interface. How limiting! With object-oriented programming, you use inheritance to provide a general version of an abstraction and then as many variants as you like, adding them as you see fit (Open-Closed Principle) and not repeating the common information. These ideas took me towards a direction of language design completely different from Wirth’s.

One of his principles in language design was that it should be easy to write a compiler — an approach that paid off magnificently for Pascal. I mentioned above the beauty of recursive-descent parsing (an approach which means roughly that you parse a text by seeing how it starts, deducing the structure that you expect to follow, then applying the same technique recursively to the successive components of the expected structure). Recursive descent will only work well if the language is LL (1) or very close to it. (LL (1) means, again roughly, that the first element of a textual component unambiguously determines the syntactic type of that component. For example the instruction part of a language is LL (1) if an instruction is a conditional whenever it starts with the keyword if, a loop whenever it starts with the keyword while, and an assignment variable := expression whenever it starts with a variable name. Only with a near-LL (1) structure is recursive descent recursive-decent.) Pascal was designed that way.
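As a toy illustration (a sketch of mine, not Wirth’s code), here is recursive descent applied to the three-keyword instruction structure just described; because the grammar is LL (1), the first token alone selects the parsing routine and the parser never backtracks:

```python
# Recursive-descent parsing of a toy LL(1) instruction grammar:
#   instruction = "if" var "then" instruction
#               | "while" var "do" instruction
#               | var ":=" var

def parse_instruction(toks):
    if toks[0] == "if":
        cond = toks[1]
        assert toks[2] == "then", "expected 'then'"
        body, rest = parse_instruction(toks[3:])     # recursive call
        return ("if", cond, body), rest
    if toks[0] == "while":
        cond = toks[1]
        assert toks[2] == "do", "expected 'do'"
        body, rest = parse_instruction(toks[3:])     # recursive call
        return ("while", cond, body), rest
    assert toks[1] == ":=", "expected ':='"          # otherwise: an assignment
    return ("assign", toks[0], toks[2]), toks[3:]

tree, rest = parse_instruction("while flag do x := y".split())
print(tree)   # ('while', 'flag', ('assign', 'x', 'y'))
```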

A less felicitous application of this principle was Wirth’s insistence on one-pass compilation, which resulted in Pascal requiring any use of indirect recursion to include an early announcement of the element — procedure or data type — being used recursively. That is the kind of thing I disliked in Pascal: transferring (in my opinion) some of the responsibilities of the compiler designer onto the programmer. Some of those constraints remained long after advances in hardware and software made the insistence on one-pass compilation seem obsolete.

What most characterized Wirth’s approach to design — of languages, of machines, of software, of articles, of books, of curricula — was his love of simplicity and dislike of gratuitous featurism. He most famously expressed this view in his Plea for Lean Software article. Even if hardware progress drives software progress, he could not accept what he viewed as the lazy approach of using hardware power as an excuse for sloppy design. I suspect that was the reasoning behind the one-compilation-pass stance: sure, our computers now enable us to use several passes, but if we can do the compilation in one pass we should since it is simpler and leaner.
As in the case of Pascal, this relentless focus could be limiting at times; it also led him to distrust artificial intelligence, partly because of the grandiose promises its proponents were making at the time. For many years, indeed, AI never made it into ETH computer science. I am talking here of the classical, logic-based form of AI; I never had the opportunity to ask Niklaus what he thought of the modern, statistics-based form. Perhaps the engineer in him would have softened his attitude, attracted by the practicality and well-defined scope of today’s AI methods. I will never know.

As to languages, I was looking forward to more discussions; while I wholeheartedly support his quest for simplicity, size to me is less important than simplicity of the structure and reliance on a small number of fundamental concepts (such as data abstraction for object-oriented programming), taken to their full power, permeating every facet of the language, and bringing consistency to a powerful construction.

Disagreements on specifics of language design are normal. Design — of anything — is largely characterized by decisions of where to be dogmatic and where to be permissive. You cannot be dogmatic all over, or you will end up with a stranglehold. You cannot be permissive all around, or you will end up with a mess. I am not dogmatic about things like the number of compiler passes: why care about having one, two, five or ten passes if they are fast anyway? I care about other things, such as keeping the number of basic concepts small. There should be, for example, only one conceptual kind of loop, accommodating variants. I also don’t mind adding various forms of syntax for the same thing (such as, in object-oriented programming, x.a := v as an abbreviation for the conceptually sound x.set_a (v)). Wirth probably would have balked at such diversity.

In the end Pascal largely lost to its design opposite, C, the epitome of permissiveness, where you can (for example) add anything to almost anything. Recent languages went even further, discarding notions such as static types as dispensable and obsolete burdens. (In truth C is more a competitor to P-Code, since it provides a good target for compilers: its abstraction level is close to that of the computer and operating system, humans can still with some effort decipher C code, and a C implementation is available by default on most platforms. A kind of universal assembly language. Somehow, somewhere, the strange idea crept into people’s minds that it could also be used as a notation for human programmers.)

In any case I do not think Niklaus followed closely the evolution of the programming language field in recent years, away from principles of simplicity and consistency; sometimes, it seems, away from any principles at all. The game today is mostly “see this cute little feature in my language, I bet you cannot do as well in yours!” “Oh yes I can, see how cool my next construct is!“, with little attention being paid to the programming language as a coherent engineering construction, and even less to its ability to produce correct, robust, reusable and extendible software.

I know Wirth was horrified by the repulsive syntax choices of today’s dominant languages; he could never accept that a = b should mean something different from b = a, or that a = a + 1 should even be considered meaningful. The folly of straying away from conventions of mathematics carefully refined over several centuries (for example by distorting “=” to mean assignment and resorting to a special symbol for equality, rather than the obviously better reverse) depressed him. I remain convinced that the community will eventually come back to its senses and start treating language design seriously again.

One of the interesting features of meeting Niklaus Wirth the man, after decades of studying from the works of Professor Wirth the scientist, was to discover an unexpected personality. Niklaus was an affable and friendly companion, and most strikingly an extremely down-to-earth person. On the occasion of the 2014 symposium we were privileged to meet some of his children, all successful in various walks of life: well-known musician in the Zurich scene, specialty shop owner… I do not quite know how to characterize in words his way of speaking (excellent) English, but it is definitely impossible to forget its special character, with its slight but unmistakable Swiss-German accent (also perceptible in German). To get an idea, just watch one of the many lecture videos available on the Web. See for example the videos from the 2014 symposium mentioned above, or this full-length interview recorded in 2018 as part of an ACM series on Turing Award winners.

On the “down-to-earth” part: computer scientists, especially of the first few generations, tend to split into the mathematician types and the engineer types. He was definitely the engineer kind, as illustrated by his hardware work. One of his maxims for a successful career was that there are a few things that you don’t want to do because they are boring or feel useless, but if you don’t take care of them right away they will come back and take even more of your time; so you should devote 10% of your time to discharging them promptly. (I wish I could limit that part to 10%.)

He had a witty, subtle — sometimes caustic — humor. Here is a Niklaus Wirth story. On the seventh day of creation God looked at the result. (Side note: Wirth was an atheist, which adds spice to the choice of setting for the story.) He (God) was pretty happy about it. He started looking at the list of professions and felt good: all — policeman, minister, nurse, street sweeper, interior designer, opera singer, personal trainer, supermarket cashier, tax collector… — had some advantages and some disadvantages. But then He got to the University Professor row. The Advantages entry was impressive: long holidays, decent salary, you basically get to do what you want, and so on; but the Disadvantages entry was empty! Such a scandalous discrepancy could not be tolerated. For a moment, a cloud obscured His face. He thought and thought and finally His smile came back. At that point, He had created colleagues.

When the computing world finally realizes that design needs simplicity, it will do well to go back to Niklaus Wirth’s articles, books and languages. I can think of only a handful of people who have shaped the global hardware and software industry in a comparable way. Niklaus Wirth is, sadly, sadly gone — and I still have trouble accepting that he will not show up for dinner, on Thursday or ever again — but his legacy is everywhere.


AI will move mountains

In August I was planning for my participation in the ICTSS conference in Bergamo, Italy, and wanted to find some accommodation within walking distance of the conference place. Bergamo has a medieval “città alta”, high city, at the top of a hill, and a “città bassa”, low city, down in the valley, where modern expansion happens. I had only passed through Bergamo once before but enough to know that it is not that easy or fast to commute between the two parts, so it is better to plan your accommodation properly.

It was not immediately clear from the online map where the conference venue belonged, so I thought that maybe this was an opportunity to find some actual use for ChatGPT. (So far I am not a great fan, see here, but one has to keep one’s mind open.) I asked my question:

 

[Screenshot: the question put to ChatGPT]

and received an answer (here is the first part):

[Screenshot: the first part of ChatGPT’s answer]

Good that I did not stop here because the answer is plain wrong; the Piazzale in question (the main site of the university, and a former convent, as I later found out) is in the high city. Even more interesting was the second part of the answer:

[Screenshot: the second part of ChatGPT’s answer]

Now this is really good. With my Southern California experience I am not that easily surprised: it is a common joke in Santa Barbara (an area prone to mudslides, particularly when it rains after a fire) that you might go to bed in your house at the top of a hill and wake up the next morning in the same house but with a whole new set of neighbors at the bottom of a valley. The other way around, though, is quite new for me.

AI-induced levitation! Of an entire city area! Since September 2021, the Piazzale San Agostino and its historic university buildings might have moved up 250 meters from low to high city. Artificial Intelligence is so amazing.

As a codicil to this little report: at that point I had decided to drop this absurd tool and look for a reliable source, but noticed that I had made a mistake in the Italian phrase: the name of the high city is “città alta”, whereas I had put the words in the reverse order (as shown above). Since I like to do things right I asked the question again with the proper order, not changing anything else, not questioning the previous results, just repeating the question with a correct phrasing:

 

[Screenshot: the corrected question]

and got this:

[Screenshot: ChatGPT’s second, contradictory answer]

The amazement continues. I had not complained, not questioned the answer, not emitted any doubt or criticism, and here is this tool apologizing again. And leaving me with two exactly contradictory answers. Which one am I supposed to believe? If I ask again, am I going to get a new set of excuses and a reversal to the original answer? (I did not try.)

I will continue my quest to find out whatever this thing might be good for.


A writing exercise

I recently wrote a working paper (on academic careers) and since it was urgently needed I did not want to spend time on style issues but instead to keep things simple. So I preceded it with the comment “Using ‘he’ as an abbreviation for ‘he or she’.” I received the helpful suggestion that I could have used “they” instead.

I have a great style exercise for you. Rewrite the following text (a fictitious description of a fictitious interview session) using the “they” style. Keep the rest of the content as it is of course.

All the candidates are in the room. Each in turn gives his presentation to the committee, in the presence of the other candidates, who may use the opportunity to revise their own presentations. It can make for an awkward situation because they are actually competing with him and with each other for the position. At the end of his presentation, the committee members ask him the questions that they have prepared during his talk; he engages in a free discussion with them. He then steps outside so that they can discuss his performance in his absence; when they are done, they call him back into the room and they tell him the result of their assessment of him, giving him the opportunity to prompt them for more detailed comments about his presentation and more generally about what they think of his profile. Afterwards, he will in turn listen to the other candidates’ presentations, which in spite of the competitive situation give him an opportunity to learn from them and network with them for his own benefit.

 

 


The “NATO expansion” canard

Are you not tired, too, of those endlessly repeated arguments that, sure, it was not very polite of Putin to invade Ukraine, but you have to understand the situation, it’s all the fault of NATO’s aggressive westward expansion which, you know, was provoking the Russians!

You see this argument everywhere on social networks and also from people such as the former French prime minister Jospin (in March of 2022!). Plus of course Noam Chomsky, for whom there is no atrocity committed by a dictator anywhere that cannot be justified by some real or imagined American turpitude, most recently in an opinion piece in the New York Times. (Evidence that (1) a great scientist is not immune to shameful delusions and (2) Chomsky, the kind of person who would not last two weeks in one of the regimes he praises, is really fortunate that his family landed in a country where he can safely spew out whatever theory he likes, however outrageous.)

Come on. NATO is a defensive alliance. It has no offensive designs on any part of the world. It does not gobble up any countries: its members all decided to join NATO for their own security.

As to the supposed provocation: if I have an aggressive neighbor with attack dogs and my other neighbors have built a fence to shield themselves from him, am I “provoking” him if I ask them to extend the fence to encompass my house?

It is obvious to all who is aggressive and who is aggressed. Shame on those who insinuate otherwise.


New article: scenarios versus OO requirements

Maria Naumcheva, Sophie Ebersold, Alexandr Naumchev, Jean-Michel Bruel, Florian Galinier and Bertrand Meyer: Object-Oriented Requirements: a Unified Framework for Specifications, Scenarios and Tests, in JOT (Journal of Object Technology), vol. 22, no. 1, pages 1:1-19, 2023. Available here with link to PDF  (the journal is open-access).

From the abstract:

A paradox of requirements specifications as dominantly practiced in the industry is that they often claim to be object-oriented (OO) but largely rely on procedural (non-OO) techniques. Use cases and user stories describe functional flows, not object types.

To gain the benefits provided by object technology (such as extendibility, reusability, and reliability), requirements should instead take advantage of the same data abstraction concepts – classes, inheritance, information hiding – as OO design and OO programs.

Many people find use cases and user stories appealing because of the simplicity and practicality of the concepts. Can we reconcile requirements with object-oriented principles and get the best of both worlds?

This article proposes a unified framework. It shows that the concept of class is general enough to describe not only “object” in a narrow sense but also scenarios such as use cases and user stories and other important artifacts such as test cases and oracles. Having a single framework opens the way to requirements that enjoy the benefits of both approaches: like use cases and user stories, they reflect the practical views of stakeholders; like object-oriented requirements, they lend themselves to evolution and reuse.

The article builds in part on material from chapter 7 of my requirements book (Handbook of Requirements and Business Analysis, Springer).


Statement Considered Harmful

I harbor no illusion about the effectiveness of airing this particular pet peeve; complaining about it has about the same chance of success as protesting against split infinitives or music in restaurants. Still, it is worth mentioning that the widespread use of the word “statement” to denote a programming language element, such as an assignment, that directs a computer to perform some change, is misleading. “Instruction” is the better term.

A “statement” is “something stated, such as a single declaration or remark, or a report of fact or opinions” (Merriam-Webster).

Why does it matter? The use of “statement” to mean “instruction” obscures a fundamental distinction of software engineering: the duality between specification and implementation. Programming produces a solution to a problem; success requires expressing both the problem, in the form of a specification, and the devised solution, in the form of an implementation. It is important at every stage to know exactly where we stand: on the problem side (the “what”) or the solution side (the “how”). In his famous Goto Statement Considered Harmful of 1968, Dijkstra beautifully characterized this distinction as the central issue of programming:

Our intellectual powers are rather geared to master static relations and our powers to visualize processes evolving in time are relatively poorly developed. For that reason we should do (as wise programmers aware of our limitations) our utmost to shorten the conceptual gap between the static program and the dynamic process, to make the correspondence between the program (spread out in text space) and the process (spread out in time) as trivial as possible.

Software verification, whether conducted through dynamic means (testing) or static techniques (static analysis, proofs of correctness), relies on having separately expressed both a specification of the intent and a proposed implementation intended to realize that intent. They have to remain distinct; otherwise we cannot even define what it means that the program should be correct (correct with respect to what?), and even less what it means to validate the program (validate it against what?).

In many approaches to verification, the properties against which we validate programs are called assertions. An assertion expresses a property that should hold at some point of program execution. For example, after the assignment instruction a := b + 1, the assertion a > b will hold. This notion of assertion is used both in testing frameworks, such as JUnit for Java or PyUnit for Python, and in program proving frameworks; see, for example, the interactive Web-based version of the AutoProof program-proving framework for Eiffel at autoproof.sit.org, and of course the entire literature on axiomatic (Floyd-Hoare-Dijkstra-style) verification.

The difference between the instruction and the assertion is critical: a := b + 1 tells the computer to do something (change the value of a), as emphasized here by the “:=” notation for assignment; a > b does not direct the computer or the computation to do anything, but simply states a property that should hold at a certain stage of the computation if everything went fine so far.

In the second case, the word “states” is indeed appropriate: an assertion states a certain property. The expression of that property, a > b, is a “statement” in the ordinary English sense of the term. The command to the computer, a := b + 1, is an instruction whose effect is to ensure the satisfaction of the statement a > b. So if we use the word “statement” at all, we should use it to mean an assertion, not an instruction.
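To see the contrast in executable form, here is a minimal sketch (in Python, which writes assignment as “=” rather than “:=”; the variable names are mine):

```python
b = 7          # instruction: commands the computer to give b a value
a = b + 1      # instruction: commands the computer to change a
assert a > b   # assertion: states a property that should hold at this point;
               # it changes nothing, and fails loudly if the property is false
```

The first two lines direct the computation; the last only makes a claim about it, which a testing or proof framework can then check.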

If we start calling instructions “statements” (a usage that Merriam-Webster grudgingly accepts in its last entry for the term, although it takes care to define it as “an instruction in a computer program,” emphasis added), we lose this key distinction.

There is no reason for this usage, however, since the word “instruction” is available, and entirely appropriate.

So, please stop saying “an assignment statement” or “a print statement“; say “an assignment instruction” and so on.

Maybe you won’t, but at least you have been warned.

(This article was first published in the “Communications of the ACM” blog.)


All the rage of Le Monde

A head of state gives a press conference; for example Emmanuel Macron, president of the French Republic, on April 17. The newspapers will publish laudatory or critical commentary, but first of all, if they are news outlets, they will report what was said. Their headline will be in the style of the Guardian’s that day:

[Image: Guardian front-page headline on Macron’s press conference]

In its other articles and editorials the Guardian, firmly on the left and much exercised against Macron, does not refrain from criticism. But it starts by doing its job of reporting: Mr. Macron gave a press conference on such a day on such a theme, explained this and announced that. What a difference with the formerly authoritative daily of Macron’s own country, Le Monde. No use looking at its front page to learn what was said; instead, the reader is treated to the journalists’ opinion, a definitive and scathing verdict:

[Image: Le Monde front-page headline dismissing the announced projects]

What did he say? What are these projects? A mystery. And of no importance. The reader, no doubt, would be incapable of forming his own opinion on the novelty, or lack of it, of Macron’s announcements. Or would waste too much time doing so. The great experts of Le Monde spare him the effort by interpreting the speech for him rather than describing it. All that counts is their judgment.

Day after day, instead of informing, Le Monde wages a campaign of demolition against the current government that would not have been out of place in the best (or worst) days of the communist daily l’Humanité. Let us stick to a few examples taken at random from April 2023, showing how a once-serious newspaper composes its front pages today. On April 5, the government having announced its intention to dissolve a violent fringe group, “Les Soulèvements de la Terre”, responsible for millions of euros of destruction and damage over the preceding months and permanently seeking confrontation with the forces of order, here is the best headline Le Monde could come up with:

[Image: Le Monde front-page headline on the dissolution of “Les Soulèvements de la Terre”]

No nuance, no perspective. The term “méga-bassine” (mega-basin) is itself tendentious. The thing is a water reservoir, meant to preserve that resource for the ever hotter summers we now experience. One may be for it or against it, but it must be noted that in no other Western European country does this kind of debate take the form of riots of such violence (47 gendarmes wounded on that day). The main “battle” is not a battle of arguments but a battle in the literal sense, between the forces of order and rampaging extremists. Nothing of this in the headline and summary, only the announcement that the movement has “refuted point by point” (case closed, judgment rendered!) the government’s reasons. As if this were a polite discussion of ideas (in which one side is right by definition) and not the control of a subversive organization (debatable or not, the decision to build the reservoir was voted by the duly elected regional authorities).

The headline published two days earlier is, for its part, rather amusing in its obsessive criticism:

[Image: Le Monde opinion headline invoking Bourdieu’s “State Nobility”]

A populist on top of his other vices, he is the representative of the State Nobility! Really? And François Hollande, whom Le Monde always treated with great deference, was not part of it? The son of a doctor, raised in Neuilly, educated at Saint-Jean-Baptiste de la Salle, then HEC, Sciences Po and ENA, starting his career at the Cour des Comptes, then an activist and professional politician ever after: yes, Hollande is on the left, so he is the People, the real one! And Macron the horrid representative of the System.

But let us not worry too much; in that particular case it was an opinion piece (“Tribune”), and presented as such. Back to the news, or what should be the news. On April 10, Le Monde’s front page takes up ecology, a subject you might think serious but on which your favorite daily picks its side: like the comedians Pierre Dac and Francis Blanche, the side of laughter:

[Image: Le Monde front-page headline mocking the government’s environmental plans]

Absolutely. Nothing serious about this government; everything it does, when not scandalous, must be laughable. The next day, back to the scandalous:

[Image: Le Monde front-page headline accusing Macron of trying to stifle popular anger]

If Macron really wanted to “stifle” anything, good luck in a country where the mass media (Le Monde being only one of them) are leagued against him. (A lexical note: “people”, as used here, is an abbreviation for “rioters and arsonists”. As for the “legitimacy” of the unions, let us talk about it: 10% of French employees are unionized, fewer than 8% in the private sector and, even in the public sector, fewer than 20%. Besides, which “unions” are we talking about exactly? In other countries, the employees of a company or industry band together in a union to defend their interests. One union, of course. In France there are 4 or 5 rival unions in any one company, small subsidized political parties competing for the votes of the few employees who bother to vote.)

On April 13, a truly illuminating overview of the situation:

[Image: Le Monde front-page headline on injustice and anger]

No hint that the supposed feeling of injustice and anger belongs to only part of the population, whipped into fury by the extremists of left and right. As for the “49.3”, it is hard to see how this mechanism provided by the Constitution (precisely for difficult cases such as this one, in which part of the traditional right was intimidated, indeed terrorized, by threats coming from all sides) is unjust or apt to provoke anger. It is, after all, subject to a vote of no confidence (which took place, and failed). Besides, the mechanism was used above all by the left under Mitterrand: 3 times by Pierre Bérégovoy, 8 times by Édith Cresson and 28 times by Michel Rocard (twenty-eight times!). I do not recall that during its 6 uses by Manuel Valls, under Hollande, Le Monde cried injustice and commiserated with the legitimate anger of the People. What strikes one in this headline is, once more, the daily drumbeat: scandal and wrongdoing are always on one side, and the injustice suffered and the justified anger always on the other.

On April 13, more on the repeated strikes. A newspaper with even a minimal concern for the daily life of its readers would describe the incessant complications, the interminable waits in stations and airports, the feats of improvisation forced on those who need child care, the new and colossal losses to the country’s economy, the cancellation of the first state visit that the new British king had chosen to reserve for France. (Having little admiration for the current monarchy, and even less for Brexit England, does not prevent one from feeling the monumental slap that this cancellation represented.) No: only the glorious struggles of the People in revolt matter:

[Image: Le Monde front-page headline denouncing the government’s intransigence]

Intransigence? What exactly is expected: that a government elected on the promise of a reform, having passed it through Parliament, should suddenly decide to cancel it? Perhaps to reassure the Booksellers in Anger (if one correctly guesses the truncated word on the banner in the photo)? This Le Monde front page, and dozens of others like it, are pure calls to demonstrate; day after day the paper kindly tells its readers when and where to show up. As if they had nothing better to do.

The same day, another summit of the intellectual elite enlightening the world:

[Image: Le Monde headline, with a threatening illustration]

In other times Le Monde was attached to constitutional principles. Note the threatening illustration. On the constitutional front, with its usual sagacity, the paper had announced as early as March 26 what was going to happen:

[Image: Le Monde headline of March 26 on the Constitutional Council]

The opponents of the reform, having lost at every stage, clung to the hope that the Constitutional Council would annul everything. Of course it had no reason to do so. Its role is not to substitute the will of the day’s demonstrators for that of the elected Parliament. Perhaps it was indeed playing “in a way its destiny”, but at last report it still exists. On March 26 there could still be debate, but an objective and serious newspaper would have published a factual and prudent analysis.

None of this prevents Le Monde from continuing to shoot at anything that moves on the government’s side. On April 21, Macron having met with teachers:

[Image: Le Monde headline on Macron antagonizing the teachers’ unions]

If anyone is getting worked up, it would seem to be Le Monde rather, but never mind. What counts, of course, is not the government’s necessarily tainted advances but the reaction of the 18.4%, the unions. An idle conjecture: without the “teachers’ pact”, would anything still have “tarnished” the overflowing joy of said unions and their newly enthusiastic support for the government’s educational projects?

After the passage of the pension reform (to the great relief of many), Macron and Borne announced their intention to continue with further reforms. What a pity, according to Le Monde, that they should be in such a difficult position! April 24, poor Macron:

[Image: Le Monde headline on Macron beset by doubt]

For Borne it is no better (April 26):

[Image: Le Monde headline on the specter hanging over Borne]

At this point of weakness nothing could be worse; and yet, weaken further one can:

[Image: Le Monde headline on the government weakening further]

On May 1, a report on the demonstrations, in the same vein as the preceding ones, for example:

[Image: Le Monde headline excusing the violence with a “but”]

The “but” is truly adorable. A “but” in the well-known style of “I am not a racist, but…”. In reality, for months (in fact ever since the gilets jaunes crisis) Le Monde has displayed an attitude of almost affectionate indulgence toward the worst excesses. Macron, for anyone who actually listens to him, is in no way contemptuous, and his attitude is the opposite of that of someone who takes people for fools. His speeches are of very high quality (as were, for that matter, those of François Hollande); he explains and he justifies. The only ones who feel despised are those who in fact despise him, for reasons not hard to imagine (he went through the Rothschild bank, as Pompidou did by the way; he speaks well; he plays the piano; he does not need to look down on anyone for people to detect in him the first in the class). And even if he were contemptuous, how would that justify setting fire to the brasserie La Rotonde? Among developed countries only France is prey to these regularly violent demonstrations that degenerate. The activists of Le Monde find nothing wrong there; they prefer to reserve their indignation for those who try to modernize the country.

The anti-Macron and anti-Borne rage rampages day after day through what was once the respected newspaper of Beuve-Méry and (despite its flaws) a source of often reliable news and measured commentary. It seems to have been taken hostage by a handful of propagandists with little concern for journalism. Its more responsible members are visibly embarrassed; Sylvie Kauffmann publishes reasoned and reasonable analyses in the New York Times, Françoise Fressoz writes balanced editorials. One wonders whether the point is to maintain a respectable facade for foreign readers, who do not see the daily torrent of anti-Establishment bile that has replaced basic reporting.

A real pity that things have come to this. I do not know what is taught nowadays in French journalism schools, but all the other large democratic countries have their newspapers of record, which apply (or try to apply, with inevitable lapses) the fundamental distinction between news and opinion. What would it take for French readers to have, once again, a serious, objective and credible newspaper?

 


“Object Success” now available

A full, free online version of Object Success (1995).

[Image: cover of Object Success]

I am continuing the process of releasing some of my earlier books. Already available: Introduction to the Theory of Programming Languages (see here) and Object-Oriented Software Construction, 2nd edition (see here). The latest addition is Object Success, a book that introduced object technology to managers and more generally emphasized the management and organizational consequences of OO ideas.

The text (3.3 MB) is available here for download.

Copyright notice: The text is not in the public domain. It is copyrighted material (© Bertrand Meyer, 1995, 2023), made available free of charge on the Web for the convenience of readers, with the permission of the original publisher (Prentice Hall, now Pearson Education, Inc.). You are not permitted to copy it or redistribute it. Please refer others to the present version at bertrandmeyer.com/success.

(Please do not bookmark or share the above download link as it may change; use the present page instead: https://bertrandmeyer.com/success.) The text is republished identically, with minor reformatting and the addition of some color. (There is only one actual change, a mention of the evolution of hardware resources, on page 136, plus a reference to a later book added to a bibliography section on page 103.) This electronic version is fully hyperlinked: clicking entries in the table of contents and index, and any element in dark red such as the page number above, will take you to the corresponding place in the text.

The book is a presentation of object technology for managers and a discussion of management issues of modern projects. While it is almost three decades old and inevitably contains some observations that will sound naïve  by today’s standards, I feel  it retains some of its value. Note in particular:

  • The introduction of a number of principles that went radically against conventional software engineering wisdom and were later included in agile methods. See Agile! The Good, the Hype and the Ugly, Springer, 2014, book page at agile.ethz.ch.
  • As an important example, the emphasis on the primacy of code. Numerous occurrences of the argument throughout the text. (Also, warnings about over-emphasizing analysis, design and other products, although unlike “lean development” the text definitely does not consider them to be “waste”. See the “bubbles and arrows of outrageous fortune”, page 80.)
  • In the same vein, the emphasis on incremental development.
  • Yet another agile-before-agile principle: Less-Is-More principle (in “CRISIS REMEDY”, page 133).
  • An analysis of the role of managers (chapters 7 to 9) which remains largely applicable, and I believe more realistic than the agile literature’s reductionist view of managers.
  • A systematic analysis of what “prototyping” means for software (chapter 4), distinguishing between desirable and less good forms.
  • Advice on how to salvage projects undergoing difficulties or crises (chapters 7 and 9).
  • A concise exposition of OO concepts (chapter 1 and appendix).
  • A systematic discussion of software lifecycle models (chapter 3), including the “cluster model”. See new developments on this topic in my recent “Handbook of Requirements and Business Analysis”, Springer, 2022, book page at bertrandmeyer.com/requirements.
  • More generally, important principles from which managers (and developers) can benefit today just as much as at the time of publication.

The download link again (3.3 MB): here it is.


The mathematics of the seven messengers

In my previous article I referred to the short story The Seven Messengers by Dino Buzzati, of which I have written a translation. Here is a quantitative analysis. I will also refer the reader to a very nice article published in 2009 on this topic: The Seven Messengers and the “Buzzati Sequence” by Giorgio D’Abramo from the National Institute of Astrophysics in Rome. It is available here on arXiv. I discovered it a few years ago after working out my own “sequence” and had a short and pleasant correspondence with Dr. D’Abramo. You can compare our respective derivations, which I think are equivalent. Here is mine.

Although Buzzati gives absolute values (40 leagues per day), all that matters is the ratio m between the messengers’ and caravan’s speeds (m > 1). The relevant measures of time are:

  • The messenger-day, which we take as the unit of time.
  • The caravan-day, which is m times a messenger-day.

If as the unit of distance we take the ground covered in one day by a messenger, then time is equal to distance.

So if T_n is the time when a messenger rejoins the caravan after his n-th trip back home, we have

T_n + T_{n+1} = m (T_{n+1} - T_n)                  [1]

Justification of [1]: both sides measure the time from when the messenger leaves (for the n-th time) to when he next rejoins the caravan. Note that the messenger goes back for his (n+1)-st trip on the very day he completes the n-th one. On the left we have the time/distance covered by the messenger (T_n to go home, plus T_{n+1} to catch up). On the right, T_{n+1} - T_n is the time/distance covered by the caravan in caravan units, which we multiply by m to get messenger-days.

The equality can be rewritten

T_{n+1} = ((m + 1) / (m - 1)) T_n

yielding a geometric progression

T_n = K^n T_0                  [2]

where T_0 is when the messenger leaves for his first trip, and the constant K is (m + 1) / (m - 1).

The Prince, who is as bad at horses as he (unlike Buzzati) is at math, had initially expected m = 2. Then K is 3 / 1, that is to say, 3. In that case the progression [2] would have been T_n = 3^n T_0. Even then, he would have found the result disappointing: while the first messenger returns the first time after six days, the third messenger, for example, returns the fifth time after almost 1000 days (3^5 is 243, to be multiplied by 4), i.e. close to three years, and the last messenger returns for the sixth time after 16 years (3^6 × 8 / 365).

The way things actually happen in the story, the Prince determines after a while that m = 3/2 (the messengers go faster by half than the caravan), so K is 5. (In the text: Soon enough, I realized that it sufficed to multiply by five the number of days passed so far to know when the messenger would be back with us.) The unit travel times (K^n) of the messengers are as follows; they give actual return times if multiplied by two for the first messenger (since he first leaves on the second day), by three for the second messenger, and so on:

 

(1)  5 days: as stated in the story, the first return is after 10 days for Alexander, 15 for Bartholomew, 20 for Cameron…
(2)  25 days: Alexander returns for the second time after almost one month.
(3)  4 months.
(4)  Close to two years (20 months).
(5)  8 years and a half.
(6)  43 years.
(7)  214 years.
(8)  About a millennium.

Buzzati was a journalist by trade; I do not know what mathematical education he had, but find his ingenuity and mastery impressive.

(By the way, there might be a good programming exercise here, with a graphical interface showing the caravan and the messengers going about their (opposite) business, and controls to vary the parameters and see what happens.)
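In the meantime, here is a minimal non-graphical start (a Python sketch; the function and parameter names are mine), which simply computes the successive return days from [2]:

```python
# Return days of Buzzati's messengers, from T_n = K^n * T_0 with
# K = (m + 1) / (m - 1). Messenger number i (1-based) first leaves
# on day i + 1: Alexander on day 2, Bartholomew on day 3, and so on.

def return_days(messenger: int, trips: int, m: float = 1.5) -> list[float]:
    """Days at which the given messenger rejoins the caravan, trip after trip."""
    k = (m + 1) / (m - 1)   # 5 when messengers go half again as fast as the caravan
    t0 = messenger + 1      # day of the messenger's first departure
    return [k ** n * t0 for n in range(1, trips + 1)]

for name, i in [("Alexander", 1), ("Bartholomew", 2), ("Gregory", 7)]:
    print(name, [f"{d:,.0f}" for d in return_days(i, 8)])
# Alexander returns after 10 days, then 50, 250, ...; his eighth return
# would fall after 5^8 * 2 = 781,250 days, some two millennia out.
```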

Another point on which the Prince is delusional is his suspicion that he would have fared better by selecting more than 7 messengers, a number he now finds “ridiculously low”. It would have cost him more money but not helped him much, since the number of messengers only affects the initial value in the geometric progression: T_0 in [2]. What truly matters is the exponential multiplier K^n, where the constant K — defined as (m + 1) / (m - 1) — is always greater than 1, inexorably making the T_n values take off to dazzling heights by the law of compound interest (the delight of investors and curse of borrowers).

Obviously, as m goes to infinity that constant K = (m + 1) / (m - 1) approaches its limit 1. Concretely, what messenger speed would it take for the Prince’s scheme to work to his satisfaction? The story indicates that the caravan covers 40 leagues a day; that is about 160 kilometers (see here). Ambitious but feasible (8 hours a day excluding the inevitable stops, horses at a trot); in any case, I would trust Buzzati, not just because people in the 1930s had a much more direct informal understanding of horse-based travel than we do, but mostly because of his own incredible attention to detail. So they are going at about 20 kilometers per hour. Now assume that for the messengers, instead of horses that only go 50% faster than the caravan, he has secured a small fleet of Cessna-style individual planes. They might fly at 180 km/h. That’s m = 9, nine times faster. Hence now K = 10 / 8 = 1.25. So we only lose 25% on each return trip; planes or no planes, the law of compound interest takes its revenge on the Prince all the same, only a bit later.


The Seven Messengers

A number of years ago I discovered the short stories of the Italian writer Dino Buzzati (most famous for his novel translated as The Desert of the Tartars). They have a unique haunting quality, for which the only equivalent I can summon is Mahler’s Des Knaben Wunderhorn or perhaps the last variation of the Tema con Variazioni in Mozart’s Gran Partita. I was particularly fascinated by the first one, I Sette Messaggeri (The Seven Messengers) in a collection entitled La Boutique del Mistero (The Mystery Boutique, Mondadori, first published in 1968 although I have a later softcover edition). I resolved to translate it. I completed the translation only now. It starts like this:

Day after day, having set out to explore my father’s realm, I am moving further away from the city, and the dispatches that reach me become ever more infrequent.

I began the journey not long after my thirtieth birthday and more than eight years have since passed; to be exact, eight years, six months and fifteen days of unceasing travel. I believed, when I departed, that within a few weeks I would easily have reached the confines of the kingdom, but instead I have continued to encounter new people and new lands, and, everywhere, men who spoke my own language and claimed to be my subjects.

At times I think that my geographer’s compass has gone awry and that while always believing to be heading south we may in reality have gone into circles, stepping back into our tracks without increasing the distance from the capital city; such might be the reason why we have not yet reached the outer frontier.

More often, though, I am tormented by a suspicion that the frontier may not exist, that the realm spreads out without any limit whatsoever, and that no matter how far I advance I will never arrive at its end. I set off on my journey when I was already past thirty years old, too late perhaps. My friends, and even my family, were mocking my project as a pointless sacrifice of the best years of my life. In truth, few of my faithful followers consented to leave with me. Insouciant as I was – so much more than now! – I was anxious to maintain communication, during the journey, with those dear to me, and among the knights in my escort I chose the seven best ones to serve as my messengers.

I believed, without having given it more thought, that seven would be more than enough. With the passing of time I have realized that this number was, to the contrary, ridiculously low; this even though none among them has fallen ill, or run into bandits, or exhausted his mounts. All seven have served with a tenacity and a devotion that I will find it hard ever to recompense.

To distinguish more easily between them, I assigned them names with initial letters in alphabetical order: Alexander, Bartholomew, Cameron, Dominic, Emilian, Frederic, Gregory.

Not being used to straying so far away from my home, I dispatched the first, Alexander, at the end of the second evening of our journey, when we had already traveled some eighty leagues. The next evening, to ensure the continuity of communications, I sent out the second one, then […]

That is only the beginning. The full text appears here but it is password-protected. Here is why: in 2010 I managed to locate the right holders and wrote to them asking for permission to publish an English translation and put it on the web. I received a polite, negative answer. So I gave up. Browsing around more recently, though, I found two freely available translations on the Web. (I also found the original Italian text here, although with a few differences from the published version.) All for the better, you would say, except that one of the translations is in my opinion awful and the other not that much better. Buzzati is a stylist in the tradition of Flaubert, in whose texts you quickly notice (especially when translating) that every word is exactly the right one, the only possible one, at the only possible place in the only possible sentence. You cannot translate a Buzzati story as you would an article in today’s paper. You have at least to try to respect the music of the text. So I completed my own attempt after all, but I still don’t want to violate anyone’s copyright. (Perhaps I am being silly.) In any case, though, I can certainly publish a fair-use extract as above and use the text for myself and my colleagues and friends. So if you want access just ask me.

One unique feature of the Seven Messengers is that it is a geek’s delight: it is actually based on a mathematical series. I wrote an analysis of the underlying math, but to avoid spoiling your pleasure if you want to look at it by yourself first I put it in a separate entry of this blog. Click here only if you do want the spoiler.


Macron and Borne: profiles in courage

The French president, Emmanuel Macron, and prime minister, Élisabeth Borne, are showing incredible political courage in promoting an indispensable reform of the pension system. The international press (with the exception of one recent reasonable Washington Post editorial) has largely taken the side of the strikers, explaining sententiously that the proper answer would be to tax companies more (as to the efficiency of that approach, here is an old but still valid example, from a left-wing paper). The unions have vowed, in the words of one of their leaders, to “bring the country to its knees” and seem intent on reaching this goal literally. (It may be useful to point out that unions in France are not what the term suggests. In other countries a union represents the workers at a company or administration. In France every organization has several unions, usually 4 or 5, competing for, typically, a small minority of the workers, but with a role enshrined in the constitution. They are really state-supported political organizations, of various political hues, several of them openly hostile to employers and to capitalism. Interesting approach.)

The reform of the pension system was part of Macron’s electoral program and has been amended repeatedly to take into account the special characteristics of manual or otherwise difficult work. Months of attempted negotiations took place with those union representatives who were willing to talk. The extreme left and extreme right united to defeat the reform and at the last minute, after innumerable debates in Parliament which had resulted in a majority-backed solution, intimidated enough moderate-right deputies to force the government to use a special constitutional mechanism (“article 49-3”) to ram it through. Who knows how many disruptions of basic services the country will have to endure in the coming months as saboteurs of various kinds try to make good on their promise to prevent the country from functioning. The attitude of the international bien-pensant press, which fans the flames (as it did with the Gilets Jaunes protests 5 years ago) while castigating the January 6 Washington rioters, who are of the same ilk, is unconscionable.

The entire political class knows that a reform is indispensable and that it has been delayed far too long, out of the cowardice of previous governments. Macron’s and Borne’s goal is simple: to preserve France’s pension system (the very system that the opponents deceitfully accuse them of destroying), based on solidarity between generations, workers paying for retirees, as opposed to a capitalization-based system with its dependence on the ups and downs of the stock market. Thanks in particular to a generous health service, people live ever longer; the new plan makes them work a couple of years more to help ensure the sustainability of the approach. Macron is in his second, non-renewable term and has decided that he would not leave office without having carried out this part of his duty. Borne, an outstanding manager with a distinguished record, has taken the risk of sacrificing her political career by bringing the reform through. (In the Fifth Republic’s mixed presidential system, the conventional wisdom is that the prime minister is the president’s “fuse”, an expendable resource for implementing difficult tasks. Cynical and tough, but a direct consequence of the constitution designed by De Gaulle and his deputy Debré 60 years ago.)

In the meantime, Macron and Borne are showing Europe and the world what true dedication and leadership mean.


Macron’s courage

(An English variant will appear tomorrow.)

The national and international press is in a frenzy against Borne and Macron. Extremists and agitators of all stripes vow to “bring the country to its knees” (how, incidentally, can one accept that kind of language from a “union” official?).

The entire political class knows, of course, that the reform is indispensable. It is the only way to protect the French pay-as-you-go pension system. It takes into account the arduousness of physically demanding jobs. It brings France back into line with its neighbors. It is common sense itself. It comes after years of procrastination by skittish previous governments, and months of consultation with the “social partners”, if one can speak of dialogue about an attempted conversation with people who seek nothing but political uproar.

What courage, what determination on the part of the president and the prime minister, who, amid the insults, sacrifice their personal interest to the public good. The rioters, in the tradition of the leagues of the 1930s, the gilets jaunes, and the thugs of January 6, 2021 in Washington, are trying to force them to back down, but reason and the rule of law will prevail.


The legacy of Barry Boehm

August of last year brought the sad news of Barry Boehm’s passing away on August 20. If software engineering deserves at all to be called engineering today, it is in no small part thanks to him.

“Engineer” is what Boehm was, even though his doctorate and other degrees were all in mathematics. He looked the part (you might almost expect him to carry a slide rule in his shirt pocket, until you realized that as a software engineer he did not need one) and more importantly he exuded the seriousness, dedication, precision, respect for numbers, no-nonsense attitude and practical mindset of outstanding engineers. He was employed as an engineer or engineering manager in the first part of his career, most notably at TRW, a large aerospace company (later purchased by Northrop Grumman), turning to academia (USC) afterwards, but even as a professor he retained that fundamental engineering ethos.

 

[Photograph]

 

LASER Summer School, Elba Island (Italy), September 2010
From left: Walter Tichy, Barry Boehm, Vic Basili (photograph by Bertrand Meyer)

Boehm’s passion was to turn the study of software away from intuition and over to empirical enquiry, rooted in systematic objective studies of actual projects. He was not the only one advocating empirical methods (others from the late seventies on included Basili, Zelkowitz, Tichy, Gilb, Rombach, McConnell…) but he had an enormous asset: access to mines of significant data—not student experiments, as most researchers were using!—from numerous projects at TRW. (Basili and Zelkowitz had similar sources at NASA.) He patiently collected huge amounts of project information, analyzed them systematically, and started publishing paper after paper about what works for software development; not what we wish would work, but what actually does on the basis of project results.

Then in 1981 came his magnum opus, Software Engineering Economics (Prentice Hall), still useful reading today (many people inquired over the years about plans for a second edition, but I guess he felt it was not warranted). Full of facts and figures, the book also popularized the Cocomo model for cost prediction, still in use nowadays in a revised version developed at USC (Cocomo II, 1995, directly usable through a simple Web interface at softwarecost.org/tools/COCOMO/).

Cocomo provides a way to estimate both the cost and the duration of a project from the estimated number of lines of code (alternatively, in Cocomo II, from the estimated number of function points), and some auxiliary parameters to account for each project’s specifics. Boehm derived the formula by fitting it to data from thousands of projects.

When people first encounter the idea of Cocomo (even in a less-rudimentary form than the simplified one I just gave), their first reaction is often negative: how can one use a single formula to derive an estimate for any project? Isn’t the very concept ludicrous anyway, since by definition we do not know the number of lines of code (or even of function points) before we have developed the project? With lines of code, how do we distinguish between different languages? There are answers to all of these questions (the formula is weighted by a whole set of criteria capturing project specifics; lines of code, calibrated by programming language level, do correlate better than most other measures with actual development effort; a good project manager will know in advance the order of magnitude of the code size; and so on). Cocomo II is not a panacea and only gives a rough order of magnitude, but remains one of the best available estimation tools.
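For concreteness, here is a minimal sketch (in Python) of the 1981 “basic” flavor of the model, with its published coefficients for the three classic project classes; the full model, and a fortiori Cocomo II, adjust the result with the cost-driver criteria just mentioned:

```python
# Basic Cocomo (Boehm, Software Engineering Economics, 1981): effort in
# person-months and schedule in months, from size in KLOC (thousands of
# lines of code). A simplified illustration, not the full weighted model.

COEFFICIENTS = {              # (a, b, c, d) for each project class
    "organic":      (2.4, 1.05, 2.5, 0.38),  # small teams, familiar setting
    "semidetached": (3.0, 1.12, 2.5, 0.35),  # intermediate
    "embedded":     (3.6, 1.20, 2.5, 0.32),  # tight constraints
}

def basic_cocomo(kloc: float, mode: str = "organic") -> tuple[float, float]:
    """Return (effort in person-months, duration in months)."""
    a, b, c, d = COEFFICIENTS[mode]
    effort = a * kloc ** b        # person-months
    duration = c * effort ** d    # elapsed calendar months
    return effort, duration

effort, months = basic_cocomo(50, "semidetached")  # a 50,000-line project
print(f"{effort:.0f} person-months over {months:.0f} months "
      f"(average staff: {effort / months:.1f})")
```

For a 50,000-line semi-detached project this gives about 240 person-months spread over about 17 months; the result discussed next says that no amount of added staff would compress those months below roughly three quarters of their nominal value.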

Software Engineering Economics and the discussion of Cocomo also introduced important laws of software engineering, not folk wisdom as was too often (and sometimes remains) prevalent, but firm results. I covered one in an article in this blog some time ago, calling it the “Shortest Possible Schedule Theorem”: if a serious estimation method, for example Cocomo, has determined an optimal cost and time for a project, you can reduce the time by devoting more resources to the project, but only down to a certain limit, which is about 75% of the original. In other words, you can throw money at a project to make things happen faster, but the highest time reduction you will ever be able to gain is by a quarter. Such a result, confirmed by many studies (by Boehm and many others after him), is typical of the kind of strong empirical work that Boehm favored.

The CMM and CMMI models  of technical management are examples of important developments that clearly reflect Boehm’s influence. I am not aware that he played any direct role (the leader was Watts Humphrey, about whom I wrote a few years ago), but the models’ constant emphasis on measurement, feedback and assessment are in line with the principles  so persuasively argued in his articles and books.

Another of his famous contributions is the Spiral model of the software lifecycle. His early work and Software Engineering Economics had made Boehm a celebrity in the field, one of its titans in fact, but also gave him the reputation, deserved or not, of representing what may be called big software engineering, typified by the TRW projects from which he drew his initial results: large projects with large budgets, armies of programmers of variable levels of competence, strong quality requirements (often because of the mission- and life-critical nature of the projects) leading to heavy quality assurance processes, active regulatory bodies, and a general waterfall-like structure (analyze, then specify, then design, then implement, then verify). Starting in the eighties other kinds of software engineering blossomed, pioneered by the personal computer revolution and Unix, and often typified by projects, large or small but with high added value, carried out iteratively by highly innovative teams and sometimes by just one brilliant programmer. The spiral model is a clear move towards flexible modes of software development. I must say I was never a great fan (for reasons not appropriate for discussion here) of taking the Spiral literally, but the model was highly influential and made Boehm a star again for a whole new generation of programmers in the nineties. It also had a major effect on agile methods, whose notion of “sprint” can be traced directly to the spiral. It is a rare distinction to have influenced both the CMM and agile camps of software engineering with all their differences.

This effort not to remain wrongly identified with the old-style massive-project software culture, together with his natural openness to new ideas and his intellectual curiosity, led Boehm to take an early interest in agile methods; he was obviously intrigued by the iconoclasm of the first agile publications and eager to understand how they could be combined with the timeless laws of software engineering. The result of this enquiry was his 2004 book with Richard Turner, Balancing Agility and Discipline: A Guide for the Perplexed, which must have been the first non-hagiographic presentation of agile approaches (still measured, maybe a bit too respectful out of a fear of being considered old-guard).

Barry Boehm was an icon of the software engineering movement, with the unique position of having been, in essence, present at the creation (from the predecessor conference of ICSE in 1975) and of having accompanied, as an active participant, the stupendous growth and change of the field over half a century.

 


Barry Boehm at a dinner at ICSE 2006, Shanghai (photograph by Bertrand Meyer)

I was privileged to meet Barry very early, as we were preparing a summer school in 1978 on Programming Methodology where the other star was Tony Hoare. It was not clear how the mix of two such different personalities, the statistics-oriented American engineer trained at UCLA and the logic-driven British professor classically trained at Oxford, would turn out.

Boehm could be impatient with cryptic academic pursuits; one exercise in Software Engineering Economics (one of the few cases I know of sarcasm finding refuge in a textbook exercise) presents a problem in software project management and asks for an answer in multiple-choice form. All the proposed choices are sensible management decisions, except for one, which goes something like this: “Remember that Bob Floyd [Turing-Award-winning pioneer of algorithms and formal verification] published in Communications of the ACM, vol. X, no. Y, pages 658-670, that scheduling of the kind required can be performed in O(n³ log log n) instead of O(n³ log n) as previously known; take advantage of this result to spend 6 months writing an undecipherable algorithm, then discover that customers do not care a bit about the speed.” (Approximate paraphrase from memory [1].)

He could indeed be quite scathing of what he viewed as purely academic pursuits removed from the reality of practical projects. Anyone who attended ICSE 1979 a few months later in Munich will remember the clash between him and Dijkstra; the organizers had probably engineered it (if I can use that term), having assigned them the topics “Software Engineering as It Is” and “Software Engineering as It Should Be”, but it certainly was spectacular. There had been other such displays of the divide before. Would we experience something of the kind at the summer school?

No clash happened; rather the reverse, a meeting of minds. The two sets of lectures (such summer schools lasted three weeks at that time!) complemented each other marvelously, participants were delighted, and the two lecturers got along very well. They were, I think, the only native English speakers in that group; they turned out to have many things in common (such as spouses who were also brilliant software engineers in their own right), and I believe they remained in contact for many years. (I wish I had a photo from that school—if anyone reading this has one, please contact me!)

Barry was indeed a friendly, approachable, open person, aware of his contributions but deeply modest.

Few people leave a profound personal mark on a field. A significant part of software engineering as it is today is a direct consequence of Barry’s foresight.

 

Note

[1] The full text of the exercise will appear shortly as a separate article on this blog.

 

(A version of this article appeared previously in the Communications of the ACM blog.)


Logical beats sequential

Often,  “we do this and then we do that” is just a lazy way of stating “to do that, we must have achieved this.” The second form is more general than the first, since there may be many things you can “do” to achieve a certain condition.

The extra generality is welcome for software requirements, which should describe essential properties without over-specifying, in particular without prescribing a specific ordering of operations  when it is only one possible sequence among several, thereby restricting the flexibility of designers and implementers.

This matter of logical versus sequential constraints is at the heart of the distinction between scenario-based techniques — use cases, user stories… — and object-oriented requirements. This article analyzes the distinction. It is largely extracted from my recent textbook, the Handbook of Requirements and Business Analysis [1], which contains a more extensive discussion.

1. Scenarios versus OO

Scenario techniques, most significantly use cases and user stories, have become dominant in requirements. They obviously fill a need and are intuitive to many people. As a general requirement technique, however, they lack abstraction. Assessed against object-oriented requirements techniques, they suffer from the same limitations as procedural (pre-OO)  techniques against their OO competitors in the area of design and programming. The same arguments that make object technology subsume non-OO approaches in those areas transpose to requirements.

Scenario techniques describe system properties in terms of a particular sequence of interactions with the system. A staple example of a use case is ordering a product through an e-commerce site, going through a number of steps. In contrast, an OO specification presents a certain number of abstractions and operations on them, characterized by their logical properties. This description may sound vague, so we move right away to examples.

2. Oh no, not stacks again

Yes, stacks. This example is rather computer-sciency so it is not meant to convince anyone but just to explain the ideas. (An example more similar to what we deal with in the requirements of industry projects is coming next.)

A stack is a LIFO (Last-In, First-Out) structure. You insert and remove elements at the same end.

 

Think of a stack of plates, where you can deposit one plate at a time, at the top, and retrieve one plate at a time, also at the top. We may call the two operations put and remove. Both are commands (often known under the alternative names push and pop). We will also use an integer query count giving the number of elements.

Assume we wanted to specify the behavior of a stack through use cases. Possible use cases (all starting with an empty stack) are:

/1/

put
put ; put
put ; put ; put       
— etc.: any number of successive put (our stacks are not bounded)

put ; remove
put ; put ; remove
put ; put ; remove ; remove
put ; put ; remove ; remove ; put ; remove

We should also find a way to specify that the system does not support such use cases as

/2/

remove ; put

or even just

/3/

remove

We could keep writing such use cases forever — some expressing normal sequences of operations, others describing erroneous cases — without capturing the fundamental rule that at any stage, the number of put so far has to be no less than the number of remove.

A simple way to capture this basic requirement is through logical constraints, also known as contracts, relying on assertions: preconditions which state the conditions under which an operation is permitted, and postconditions which describe properties of its outcome. In the example we can state that:

  • put has no precondition, and the postcondition

          count = old count + 1

using the old notation to refer to the value of an expression before the operation (here, the postcondition states that put increases count by one).

  • remove has the precondition

count > 0

and the postcondition

count = old count – 1

since it is not possible to remove an element from an empty stack. More generally, the LIFO discipline implies that we cannot remove more than we have put. (Such illegal usage sequences are sometimes called “misuse cases.”)

(There are other properties, but the ones just given suffice for this discussion.)
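These contracts translate directly into an interface specification. Here is a minimal sketch in Eiffel notation (used for a larger example below), with a hypothetical class name STACK_SPEC and only the properties just listed:

deferred class STACK_SPEC feature

    count: INTEGER
            — Number of elements in the stack.

    put
            — Push an element on top.
        deferred
        ensure
            count = old count + 1
        end

    remove
            — Pop the top element.
        require
            count > 0
        deferred
        ensure
            count = old count - 1
        end

end

(The routines are deferred: at this stage we specify what stacks do, not how they do it.)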

The specification states what can be done with stacks (and what cannot) at a sufficiently high level of abstraction to capture all possible use cases. It enables us to keep track of the value of count in the successive steps of a use case; it tells us for example that all the use cases under /1/ above observe the constraints: with count starting at 0, taking into account the postconditions of put and remove, the precondition of every operation will be satisfied prior to all of its calls. For /2/ and /3/ that is not the case, so we know that these use cases are incorrect.

Although this example covers a data structure, not  requirements in the general sense, it illustrates how logical constraints are more general than scenarios:

  • Use cases, user stories and other  forms of scenario only describe specific instances of behavior.
  • An OO model with contracts yields a more abstract specification, to which individual scenarios can be shown to conform, or not.

3. Avoiding premature ordering decisions

As the stack example illustrates, object-oriented specifications stay away from premature time-order decisions by focusing on object types (classes) and their operations (queries and commands), without making an early commitment to the order of executing these operations.

In the book, I use in several places a use-case example from one of the best books about use cases (along with Ivar Jacobson’s original one of course): Alistair Cockburn’s Writing Effective Use Cases (Pearson Education, 2001). A simplified form of the example is:

1. A reporting party who is aware of the event registers a loss to the insurance company.

2. A clerk receives the claim and assigns it to a claims agent.

3. The assigned claims adjuster:

3.1 Conducts an investigation.
3.2 Evaluates damages.
3.3 Sets reserves.
3.4 Negotiates the claim.
3.5 Resolves the claim and closes it.

(A reserve, in the insurance business, is an amount that an insurer, upon receiving a claim, sets aside to cover the financial liability that may result from the claim.)

As a specification, this scenario is trying to express useful things; for example, you must set reserves before starting to negotiate the claim. But it expresses them in the form of a strict sequence of operations, a temporal constraint which does not cover the wide range of legitimate scenarios. As in the stack example, describing a few such scenarios is helpful as part of requirements elicitation, but to specify the resulting requirements it is more effective to state the logical constraints.

Here is a sketch (in Eiffel) of how a class INSURANCE_CLAIM could specify them in the form of contracts. Note the use of require to introduce a precondition and ensure for postconditions.

class INSURANCE_CLAIM feature

        — Boolean queries (all with default value False):
    is_investigated, is_evaluated, is_reserved, is_agreed, is_imposed, is_resolved: BOOLEAN

    investigate
                — Conduct investigation on validity of claim. Set is_investigated.
        deferred
        ensure
            is_investigated
        end

    evaluate
                — Assess monetary amount of damages.
        require
            is_investigated
        deferred
        ensure
            is_evaluated
            — Note: is_investigated still holds (see the invariant at the end of the class text).
        end

    set_reserve
                — Set aside the reserve for the claim. Set is_reserved.
        require
            is_investigated
            — Note: we do not require is_evaluated.
        deferred
        ensure
            is_reserved
        end
 

    negotiate
                — Negotiate the claim. Set is_agreed only if negotiation
                — leads to an agreement with the claim originator.
        require
            is_reserved
            is_evaluated
        deferred
        ensure
            is_reserved
            — See the invariant for is_evaluated and is_investigated.
        end

    impose (amount: INTEGER)
                — Determine amount of claim if negotiation fails. Set is_imposed.
        require
            not is_agreed
            is_reserved
        deferred
        ensure
            is_imposed
        end

    resolve
                — Finalize handling of claim. Set is_resolved.
        require
            is_agreed or is_imposed
        deferred
        ensure
            is_resolved
        end

invariant                    — “⇒” is logical implication.

    is_evaluated ⇒ is_investigated
    is_reserved ⇒ is_investigated
    is_resolved ⇒ is_agreed or is_imposed
    is_agreed ⇒ is_evaluated
    is_imposed ⇒ is_evaluated
    is_imposed ⇒ not is_agreed

                          — Hence, by laws of logic, is_agreed ⇒ not is_imposed

end

Notice the interplay between the preconditions, postconditions and class invariant, and the various boolean-valued queries they involve (is_investigated, is_evaluated, is_reserved…). You can specify a strict order of operations o₁, o₂, …, as in a use case, by having a sequence of assertions p₁, p₂, … such that each operation oᵢ has the contract clauses require pᵢ and ensure pᵢ₊₁; but assertions also enable you to specify a much broader range of acceptable orderings.
The class specification as given is only a first cut and leaves many aspects untouched. It will be important in practice, for example, to include a query payment describing the amount to be paid for the claim; then impose has the postcondition payment = amount, and negotiate sets a certain amount for payment.
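A sketch of that extension, in the same style (the query payment and the amended postcondition are hypothetical, following the sentence above):

    payment: INTEGER
            — Amount to be paid for the claim.

    impose (amount: INTEGER)
                — Determine amount of claim if negotiation fails. Set is_imposed.
        require
            not is_agreed
            is_reserved
        deferred
        ensure
            is_imposed
            payment = amount
        end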
Even in this simplified form, the specification includes a few concepts that the original use case left unspecified, in particular the notion of imposing a payment (through the command impose) if negotiation fails. Using a logical style typically uncovers such important questions and provides a framework for answering them, helping to achieve one of the principal goals of requirements engineering.

4. Logical constraints are more general than sequential orderings

The specific sequence of actions described in the original use case (“main success scenario”) is compatible with the logical constraints: you can check that in the sequence

investigate
evaluate
set_reserve
negotiate
resolve

the postcondition of each step implies the precondition of the next one (the first has no precondition). In other words, the temporal specification satisfies the logical one. But you can also see that prescribing this order is a case of overspecification: other orderings also satisfy the logical specification. It may be possible for example — subject to confirmation by Subject-Matter Experts — to change the order of evaluate and set_reserve, or to perform these two operations in parallel.
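One way to make this check concrete is to replay the scenario as code and let contract monitoring (or a prover such as AutoProof) validate every step; a minimal sketch, assuming an effective descendant of INSURANCE_CLAIM:

    replay_main_scenario (c: INSURANCE_CLAIM)
            — Replay the use case’s main path; with assertion checking
            — enabled, each call verifies its precondition against the
            — state established by the previous postconditions.
        do
            c.investigate          — No precondition.
            c.evaluate             — Requires is_investigated.
            c.set_reserve          — Requires is_investigated.
            c.negotiate            — Requires is_reserved and is_evaluated.
            if c.is_agreed then    — “Main success” case: negotiation succeeded.
                c.resolve          — Requires is_agreed or is_imposed.
            end
        end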

The specification does cover the fundamental sequencing constraints; for example, the pre- and postcondition combinations imply that investigation must come before evaluation, and that resolution must be preceded by either negotiation or imposition. But it avoids the non-essential constraints which, in the use case, were only an artifact of the sequential style of specification, not a true feature of the problem.

The logical style is also more conducive to conducting a fruitful dialogue with domain experts and stakeholders:

  • With a focus on use cases, the typical question from a requirements engineer (business analyst) is “do you do A before doing B?” Often the answer will be contorted, as in “usually yes, but only if C, oh and sometimes we might start with B if D holds, or we might work on A and B in parallel…“, leading to vagueness and to more complicated requirements specifications.
  • With logic-based specifications, the two fundamental question types are: “what conditions do you need before doing B?” and “does doing A ensure condition C?”. They force stakeholders to assess their own practices and specify precisely the relations between operations of interest.

5. What use for scenarios?

Use cases, and more generally scenarios, while more restrictive than logical specifications, remain important as complements to them. They serve as both input and output to more abstract requirements specifications (such as OO specifications with contracts):

  • As input to requirements: initially at least, stakeholders and Subject-Matter Experts often find it intuitive to describe typical system interactions, and their own activities, in the form of scenarios. Collecting such scenarios is an invaluable requirements elicitation technique. The requirements engineer must remember that any such scenario is just one example walk through the system, and must abstract from these examples to derive general logical rules.
  • As output from requirements: from an OO specification with its contracts, the requirements engineers can produce valid use cases. “Valid” means that the operation at every step satisfies the applicable precondition, as a consequence of the previous steps’ postconditions and of the class invariant. The requirements engineers can then submit these use cases to the SMEs and through them to stakeholders to confirm that they make sense, update the logical conditions if they do not (to rule out bad use cases), and check the results they are expected to produce.

6. Where do scenarios fit?

While many teams will prefer to write scenarios (for the purposes just described) in natural language, it is possible to go one step further and, in an object-oriented approach to requirements, gather scenarios in classes. But that point exceeds the scope of the present sketch. We will limit ourselves here to the core observation: logical constraints subsume sequential specifications; you can deduce the latter from the former, but not the other way around; and focusing on abstract logical specifications leads to a better understanding of the requirements.

Reference

Bertrand Meyer: Handbook of Requirements and Business Analysis, Springer, 2022. See the book page with sample chapters and further material here.

(This article was first published on the Communications of the ACM blog.)


New paper: optimization of test cases generated from failed proofs

Li Huang (PhD student at SIT) will be presenting at an ISSRE workshop the paper Improving Counterexample Quality from Failed Program Verification, written with Manuel Oriol and me. One can find the text on arXiv here. (I will update this reference with the official publication link when I have it.)

The result being presented is part of a more general effort at combining proofs and tests (with other papers in the pipeline). The idea of treating proofs and tests as complementary rather than competing methods of software verification is an old pursuit of mine (which among other consequences resulted in the creation with Yuri Gurevich of the Tests and Proofs conference, which I see is continuing to run). A particular observation is that failure means a different thing for proofs and tests.

A failed test provides interesting information (in fact it is a successful proof — of incorrectness). A successful proof is, of course, also interesting (in principle it should be the end of the story), whereas a successful test tells us very little. But in the practice of program proving the common occurrence is failure to prove a program element correct, and you are typically left with no clue as to the source of the failure. In the AutoProof verification system for Eiffel, we are able to rely on the underlying technology (Boogie and Z3) to extract a counterexample giving concrete evidence: as with a failed test, a programmer can in general quickly understand what is wrong.

In other words, the useless negative result of the bottom-left entry of the above picture can produce a useful result:

[Figure: the four combinations of success and failure for tests and proofs, with the failed-proof entry, previously useless, now yielding a counterexample]

The general approach is the subject of another article; this one focuses on producing tests that are actually significant for the programmer. If the extracted counterexample involves very large values, you will not immediately be able to relate them to the underlying bug. Hence the need for a process of minimization, described in the article. The results on our examples are encouraging, making it possible to exhibit the bug with very small integer values.
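To give a flavor of what minimization means (a naive sketch with hypothetical names, not the actual algorithm of the paper): starting from a failing input value, one can repeatedly try a smaller one for as long as the failure persists.

    shrink (v: INTEGER; fails: PREDICATE [INTEGER]): INTEGER
            — A value no larger than `v’ that still triggers the failure,
            — obtained by halving while the failure persists.
        require
            v >= 0
            fails.item ([v])
        do
            from
                Result := v
            until
                Result = 0 or else not fails.item ([Result // 2])
            loop
                Result := Result // 2
            end
        ensure
            fails.item ([Result])
            Result <= v
        end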

Reference

Li Huang, Bertrand Meyer and Manuel Oriol: Improving Counterexample Quality from Failed Program Verification, 6th International Workshop on Software Faults, October 2022. Preprint available on arXiv here. The workshop program is available here; the presentation is on Monday, 31 October, at 15:55 CET (7:55 AM Los Angeles, 10:55 AM New York).

 


New book: the Requirements Handbook


I am happy to announce the publication of the Handbook of Requirements and Business Analysis (Springer, 2022).

It is the result of many years of thinking about requirements and how to do them right, taking advantage of modern principles of software engineering. While programming languages, design techniques, process models and other software engineering disciplines have progressed considerably, requirements engineering remains the sick cousin. With this book I am trying to help close the gap.

The Handbook introduces a comprehensive view of requirements, including four elements or PEGS: Project, Environment, Goals and System. One of its principal contributions is the definition of a standard plan for requirements documents, consisting of the four corresponding books and replacing the obsolete IEEE 1998 structure.

The text covers both classical requirements techniques and novel topics such as object-oriented requirements and the use of formal methods.

The successive chapters address: fundamental concepts and definitions; requirements principles; the Standard Plan for requirements; how to write good requirements; how to gather requirements; scenario techniques (use cases, user stories); object-oriented requirements; how to take advantage of formal methods; abstract data types; and the place of requirements in the software lifecycle.

The Handbook is suitable both as a practical guide for industry and as a textbook, with over 50 exercises and supplementary material available from the book’s site.

You can find here a book page with the preface and sample chapters.

To purchase the book, see the book page at Springer and the book page at Amazon US.


Winter will be warm

It is easy to engage in generalities; it is risky to make firm predictions. In the first case there is no reckoning; in the second one the actual events can prove you wrong for everyone to see.

I am taking the risk. Here is my prediction: Putin’s energy blackmail (Western Europe will freeze this winter!) will fail. We’ll have some trouble but by and large we’ll be OK.

The basic reason is simple: great idea (from the blackmailer’s viewpoint), terrible execution. (Do we see a pattern there?) If you are going to freeze Europe by cutting off gas, you keep the suspense until the last minute and shut off the valves in October, leaving your targets no time to react.

Instead they did it all wrong! They started making noises in the Spring and cutting off supplies in August. The result: people listened. Governments and technocrats got to work, with some time to get organized. A company such as EDF in France is sometimes criticized as too big and monolithic, but they know their business, which is to provide energy, and are pretty good at it. I would bet that they and their counterparts in the electricity and gas industries all over the continent are working day and night to find alternative sources.

In addition, no day passes without some announcement of new energy-saving measures. Some may seem to be for show only, but the accumulated result will be significant. Recently everyone (for example the usually better-inspired Guardian) was mocking Macron’s prime minister, Borne, and her ministers for showing up to work in padded jeans and sweaters to save on heating, but that kind of message can be influential. (Almost half a century ago, Jimmy Carter was telling Americans that instead of turning the temperature to 19 degrees C in summer and 21 in winter they should do the reverse. He too was derided. But he was right, and that kind of advice will finally come to pass. One of the few positive outcomes of the current tragedy.)

So yes, you succeeded in making yourself a big nuisance. And no, it won’t destroy us. It will make us stronger — also warmer.

 


Introduction to the Theory of Programming Languages: full book now freely available

Short version: the full text of my Introduction to the Theory of Programming Languages book (second printing, 1991) is now available. This page has more details, including the table of chapters and a link to the PDF (3.3 MB, 448 + xvi pages).

The book is a survey of methods for language description, particularly semantics (operational, translational, denotational, axiomatic, complementary) and also serves as an introduction to formal methods. Obviously it would be written differently today but it may still have its use.

A few days ago I released the Axiomatic Semantics chapter of the book, and the chapter introducing mathematical notations. It looked at the time as if I could not easily release the rest in a clean form, because it is impossible, or very hard, to use the original text-processing tools (troff and such). I could do it for these two chapters because I had converted them years ago for my software verification classes at ETH.

By perusing old files, however, I realized that around the same time (early 2000s) I had actually been able to produce PDF versions of the other chapters as well, even integrating corrections to errata reported after publication. (How I managed to do it then I have no idea, but the result looks identical, save for the corrections, to the printed version.)

The figures were missing from that reconstructed version (I think they had been produced with Brian Kernighan’s PIC graphical description language, which is even more forgotten today than troff), but I scanned them from a printed copy and reinserted them into the PDFs.

Some elements were missing from my earlier resurrection: front matter, preface, bibliography, index. I was able to reconstruct them from the original troff source using plain MS Word. The downside is that they are not hyperlinked; the index has the page numbers (which may be off by 1 or 2 in some cases because of reformatting) but not hyperlinks to the corresponding occurrences as we would expect for a new book. Also, I was not able to reconstruct the table of contents; there is only a chapter-level table of contents which, however, is hyperlinked (in other words, chapter titles link to the actual chapters). In the meantime I obtained the permission of the original publisher (Prentice Hall, now Pearson Education Inc.).

Here again is the page with the book’s description and the link to the PDF:

bertrandmeyer.com/ITPL

 

 
