Archive for the ‘Writing and style’ Category.

A writing exercise

I recently wrote a working paper (on academic careers) and, since it was urgently needed, I did not want to spend time on style issues but instead to keep things simple. So I preceded it with the comment “Using ‘he’ as an abbreviation for ‘he or she’.” I received the helpful suggestion that I could have used “they” instead.

I have a great style exercise for you. Rewrite the following text (a fictitious description of a fictitious interview session) using the “they” style. Keep the rest of the content as it is, of course.

All the candidates are in the room. Each in turn gives his presentation to the committee, in the presence of the other candidates, who may use the opportunity to revise their own presentations. It can make for an awkward situation because they are actually competing with him and with each other for the position. At the end of his presentation, the committee members ask him the questions that they have prepared during his talk; he engages in a free discussion with them. He then steps outside so that they can discuss his performance in his absence; when they are done, they call him back into the room and they tell him the result of their assessment of him, giving him the opportunity to prompt them for more detailed comments about his presentation and more generally about what they think of his profile. Afterwards, he will in turn listen to the other candidates’ presentations, which in spite of the competitive situation give him an opportunity to learn from them and network with them for his own benefit.

 

 


Statement Considered Harmful

I harbor no illusion about the effectiveness of airing this particular pet peeve; complaining about it has about the same chance of success as protesting against split infinitives or music in restaurants. Still, it is worth mentioning that the widespread use of the word “statement” to denote a programming language element, such as an assignment, that directs a computer to perform some change, is misleading. “Instruction” is the better term.

A “statement” is “something stated, such as a single declaration or remark, or a report of facts or opinions” (Merriam-Webster).

Why does it matter? The use of “statement” to mean “instruction” obscures a fundamental distinction of software engineering: the duality between specification and implementation. Programming produces a solution to a problem; success requires expressing both the problem, in the form of a specification, and the devised solution, in the form of an implementation. It is important at every stage to know exactly where we stand: on the problem side (the “what”) or the solution side (the “how”). In his famous Go To Statement Considered Harmful letter of 1968, Dijkstra beautifully characterized this distinction as the central issue of programming:

Our intellectual powers are rather geared to master static relations and our powers to visualize processes evolving in time are relatively poorly developed. For that reason we should do (as wise programmers aware of our limitations) our utmost to shorten the conceptual gap between the static program and the dynamic process, to make the correspondence between the program (spread out in text space) and the process (spread out in time) as trivial as possible.

Software verification, whether conducted through dynamic means (testing) or static techniques (static analysis, proofs of correctness), relies on having separately expressed both a specification of the intent and a proposed implementation intended to realize that intent. They have to remain distinct; otherwise we cannot even define what it means that the program should be correct (correct with respect to what?), and even less what it means to validate the program (validate it against what?).

In many approaches to verification, the properties against which we validate programs are called assertions. An assertion expresses a property that should hold at some point of program execution. For example, after the assignment instruction a := b + 1, the assertion a > b will hold. This notion of assertion is used both in testing frameworks, such as JUnit for Java or PyUnit for Python, and in program-proving frameworks; see, for example, the interactive Web-based version of the AutoProof program-proving framework for Eiffel at autoproof.sit.org, and of course the entire literature on axiomatic (Floyd-Hoare-Dijkstra-style) verification.

The difference between the instruction and the assertion is critical: a := b + 1 tells the computer to do something (change the value of a), as emphasized here by the “:=” notation for assignment; a > b does not direct the computer or the computation to do anything, but simply states a property that should hold at a certain stage of the computation if everything went fine so far.
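To make the duality concrete, here is a minimal Eiffel sketch (a hypothetical class written for this discussion, not taken from any actual library). The assignment in the routine body is an instruction; the postcondition states the corresponding assertion, of the kind that testing or proof tools can then check:

class
	EXAMPLE
feature
	a: INTEGER
			-- Value set by `increment_from'.

	increment_from (b: INTEGER)
			-- Set `a' to one more than `b'.
		do
			a := b + 1 -- Instruction: directs the computation to change `a'.
		ensure
			greater: a > b -- Assertion: states a property that must hold on exit.
		end
end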

In the second case, the word “states” is indeed appropriate: an assertion states a certain property. The expression of that property, a > b, is a “statement” in the ordinary English sense of the term. The command to the computer, a := b + 1, is an instruction whose effect is to ensure the satisfaction of the statement a > b. So if we use the word “statement” at all, we should use it to mean an assertion, not an instruction.

If we start calling instructions “statements” (a usage that Merriam-Webster grudgingly accepts in its last entry for the term, although it takes care to define it as “an instruction in a computer program,” emphasis added), we lose this key distinction.

There is no reason for this usage, however, since the word “instruction” is available, and entirely appropriate.

So, please stop saying “an assignment statement” or “a print statement”; say “an assignment instruction” and so on.

Maybe you won’t, but at least you have been warned.

(This article was first published in the “Communications of the ACM” blog.)


New book: the Requirements Handbook

[Figure: book cover]

I am happy to announce the publication of the Handbook of Requirements and Business Analysis (Springer, 2022).

It is the result of many years of thinking about requirements and how to do them right, taking advantage of modern principles of software engineering. While programming languages, design techniques, process models and other software engineering disciplines have progressed considerably, requirements engineering remains the sick cousin. With this book I am trying to help close the gap.

The Handbook introduces a comprehensive view of requirements covering four elements or PEGS: Project, Environment, Goals and System. One of its principal contributions is the definition of a standard plan for requirements documents, consisting of the four corresponding books and replacing the obsolete IEEE 830-1998 structure.

The text covers both classical requirements techniques and novel topics such as object-oriented requirements and the use of formal methods.

The successive chapters address: fundamental concepts and definitions; requirements principles; the Standard Plan for requirements; how to write good requirements; how to gather requirements; scenario techniques (use cases, user stories); object-oriented requirements; how to take advantage of formal methods; abstract data types; and the place of requirements in the software lifecycle.

The Handbook is suitable both as a practical guide for industry and as a textbook, with over 50 exercises and supplementary material available from the book’s site.

You can find here a book page with the preface and sample chapters.

To purchase the book, see the book page at Springer and the book page at Amazon US.


A standard plan for modern requirements

Requirements documents for software projects in industry, agile or not, typically follow a plan defined in a 1998 IEEE standard (IEEE 830-1998 [1]), “reaffirmed” in 2009. IEEE 830 has the merit of simplicity: it fits in 37 pages, of which just a few (competently) describe basic requirements concepts and fewer than 10 explain the recommended plan, itself consisting of 3 sections with subsections. Simplicity is good, but the elementary nature of the IEEE-830 plan is simply not up to the challenges of modern information technology. It is time to retire this venerable precursor and move to a structure that works for the kind of ambitious, multi-faceted IT systems we build today.

For the past few years I have worked on defining a systematic approach to requirements, culminating in a book to be published in the Fall, Handbook of Requirements and Business Analysis. One of the results of this effort is a standard plan, based on the “PEGS” view of requirements, whose four parts cover Project, Environment, Goals and System. The details are in the book (for some of the basic concepts see a paper at TOOLS 2019, [2]). Here I will introduce some of the key principles, since they can already be used, as various people have done since I first started presenting the ideas in courses and seminars (particularly an ACM Webinar organized by Will Tracz last March, whose recording is available on YouTube, and another hosted by Grady Booch for IBM).

[Figure: the four PEGS: Project, Environment, Goals, System]

The starting point, which gives its name to the approach, is that requirements should cover the four aspects mentioned, the four “PEGS”, defined as follows:

  • A Goal is a result desired by an organization.
  • A System is a set of related artifacts, devised to help meet certain goals.
  • A Project is the set of human processes involved in the planning, construction, revision and  operation of a system.
  • An Environment is the set of entities (such as people, organizations, devices and other material objects, regulations and other systems) external to the project and system but with the potential to affect the goals, project or system or to be affected by them.

The recommended standard plan consequently consists of four parts or books.

This proposed standard does not prescribe any particular approach to project management, software development, project lifecycle or requirements expression, and is applicable in particular to both traditional (“waterfall”) and agile projects. It treats requirements as a project activity, not necessarily a lifecycle step. One of the principles developed in the book is that requirements should be treated as a dynamic asset of the project, written in a provisional form (more or less detailed depending on the project methodology) at the beginning of the project, and then regularly extended and updated.

Similarly, the requirements can be written using any appropriate notation and method, from the most informal to the most mathematical.  In a recently published ACM Computing Surveys paper [3], my colleagues and I reviewed the various levels of formalism available  in today’s requirements approaches. The standard plan is agnostic with respect to this matter.

The books may reference each other but not arbitrarily. The permitted relations are as follows:

[Figure: permitted reference relations among the four books]

Note in particular that the description of the Goals should leave out technical details and hence may not refer to Project and System elements, although it may need to explain the properties of the Environment that influence or constrain the business goals. The Environment exists independently of the IT effort, and hence the Environment book should not reference any of the others, with the possible exception (dotted arrow in the figure) of effects of the System if it is to change the environment. (For example, a payroll IT system may affect the payroll process; a heating system may affect the ambient temperature.)

The multi-book structure of the recommended PEGS standard plan already goes beyond the traditional view of a single, linear “requirements document”. The books themselves are not necessarily written as linear texts; with today’s technology, requirements parts can and generally should be stored in a requirements repository which includes all requirements-relevant elements.  A linear form remains necessary; it can be either written as such or produced by tools from elements in the repository.

The standard plan defines the structure of the four PEGS books down to one more level, that of chapters. For any further levels (sections), each organization can define its own rules. Books can also include front and back matter, for example tables of contents, legal disclaimers and revision histories, which are not covered by the standard structure. Here is that structure:

[Figure: chapter-level structure of the four books]

It is meant to be self-explanatory, but here are a few comments on each of the books:

  • One of the products of the requirements effort should be to help plan and manage  the rest of the Project. This is the goal of the Project book; note in particular P.4 and P.5 covering tasks and deadlines. P.7 starts out at the beginning of the project as a blueprint for the requirements effort, and as this effort proceeds (stakeholder interviews, stakeholder workshops, competitive analysis, requirements writing …) can be regularly updated to report on how it went. (This feature is an example of treating the requirements repository and documents as a living, dynamic asset, as noted above.)
  • In the Environment book, constraints (E.3) are properties of the environment (the problem domain) imposed on the project and system. Effects (E.5) go the other way around, describing how the system may affect the environment. Invariants (E.6) do both. Assumptions (E.4) are properties that are taken for granted to simplify the construction of the system (for example, a maximum number of simultaneous users), as distinct from actual constraints.
  • The Goals book is intended for a less technical audience than the other books: it must be understandable to decision makers and non-IT-savvy stakeholders. It includes a short summary (G.4) of functionality, a kind of capsule version of the System book trimmed down to the essentials. Note the importance of specifying (in G.6) what aspects the system is not intended to address. The Goals book can include some (G.5) usage scenarios expressed in terms meaningful to the book’s constituencies and hence remaining at a high level of generality.
  • Detailed usage scenarios will appear in the System book (S.4).  It is important to prioritize the functions (S.5), allowing a reasoned approach (rather than decisions made in a panic) if the project hits roadblocks and must sacrifice some of the functionality.

A naïve but still widely encountered view of requirements is that they serve to “describe what the system will do” (independently of how it will do it). In the structure above, that view covers just one-fourth of the requirements effort, the System part. Work on requirements engineering in the past few decades has emphasized, among other concepts, the need to separate system and environment properties (Michael Jackson, Pamela Zave) and the importance of goals (Axel van Lamsweerde).

The plan reflects this richness of the requirements concept and I hope it can help many projects produce better requirements for better software.

References

[1] IEEE 830-1998, available here.

[2] Bertrand Meyer, Jean-Michel Bruel, Sophie Ebersold, Florian Galinier and Alexandr Naumchev: The Anatomy of Software Requirements, in TOOLS 2019, Springer Lecture Notes in Computer Science 11771, 2019, pages 10-40.

[3] Jean-Michel Bruel, Sophie Ebersold, Florian Galinier, Manuel Mazzara, Alexander Naumchev and Bertrand Meyer: The Role of Formalism in System Requirements, in ACM Computing Surveys, vol. 54, no. 5, June 2021, pages 1-36, DOI: https://doi.org/10.1145/3448975, preprint available here.

(A version of this article appeared earlier in the Communications of the ACM blog.)

 


Sense and sensibility of systematically soliciting speaker slides

There is a fateful ritual to keynote invitations. The first message reads (I am paraphrasing): “Respected peerless luminary of this millennium and the next, will your excellency ever forgive me for the audacity of asking if you would deign to leave for a short interlude the blessed abodes that habitually beget your immortal insights, and (how could I summon the gall of even forming the thought!) consider the remote possibility of honoring our humble gathering with your august presence, condescending upon that historic occasion to share some minute fragment of your infinite wisdom with our undeserving attendees?”. The subsequent email, a few months later, is more like: “Hi Bertrand, looks like we don’t have your slides yet, the conference is coming up and the deadline was last week, did I miss anything?”. Next: “Hey you! Do you ever read your email? Don’t try the old line about your spam filter. Where are your slides? Our sponsors are threatening to withdraw funding. People are cancelling their registrations. Alexandria Ocasio-Cortez is tweeting about the conference. Send the damn stuff!” Last: “Scum, listen. We have athletic friends and they know where you live. The time for sending your PDF is NOW.”

Actually I don’t have my slides. I will have them all right for the talk. Maybe five minutes before, if you insist nicely. The talk will be bad or it will be less bad, but the slides are meant for the talk, they are not the talk. Even if I were not an inveterate procrastinator, rising at five on the day of the presentation to write them, I would still like to attend the talks before mine and refer to them. And why in the world would I let you circulate my slides in advance and steal my thunder?

This whole business of slides has become bizarre, a confusion of means and ends. Cicero did not use slides. Lincoln did not have PowerPoint. Their words struck home all the same.

We have become hooked on slides. We pretend that they help the talk, but that is a blatant lie except for the maybe 0.1% of speakers who use slides wisely. Slides are not for the benefit of the audience (are you joking? What slide user cares about the audience? Hahaha) but for the sole, exclusive and utterly selfish benefit of the speaker.

Good old notes (“cheat sheets”) would be more effective. Or writing down your speech and reading it from the lectern as historians still do (so we are told) at their conferences.

If we use slides at all, we should reserve them for illustration: to display a photograph, a graph, a table. The way politicians and police chiefs do when they bring a big chart to support their point at a press conference.

I am like everyone else and still use slides as crutches. It’s so tempting. But don’t ask me to provide them in advance.

What about afterwards? It depends. Writing down the talk in the form of a paper is better. If you do not have the time, the text of the slides can serve as a simplified record. But only if the speaker wants to spread them that way. There is no rule that slides should be published. If you see your slides as ephemeral artifacts supporting an evanescent event (a speech at a conference) and wish them to remain only in the memory of those inspired enough to have attended it, that is your privilege.

Soliciting speaker slides: somewhat sane, slightly strange, or simply silly?


Creative writing

(This article is about the kind of French that you find on e-commerce sites these days.)

The amazon.fr site carries more and more products aimed at international markets and “benefiting” from a presentation in French produced by approximate algorithms. I am not absolutely sure about the “benefit” in question, nor for that matter about the exact relation of these texts to the language of Molière. An example (kept in the original):

Service de longue vie: La lumière de la chaîne est en parallèle connecté qui empêche un échec ampoule influence à d’autres bulbs.Even avec les ampoules cassées ou enlevées, restant ampoules continuera à éclairer. Si une ampoule sortir, vous remplacez tout simplement.

The first two or three times this may have a certain charm, but it wears off. Although I have no precise statistics, my impression is that, books aside, this style is now that of the majority of the products offered on Amazon France. One can laugh at it. But in view of the general degradation of the language visible, for example, on forums (no longer the old “French mistakes” we were taught to avoid, which now look quite innocent, but a complete dissolution, in which one no longer distinguishes, say, the past participle parlé from the infinitive parler or from the second-person plural present parlez), it is rather worrying.

Apparently the French market is not big enough to justify anything better than this contempt. It does not seem to bother Amazon either. But even with all the tolerance in the world, customers will in the end, as the text above puts it, “remplacez tout simplement”: quite simply replace it.


Infinite loop in Microsoft Word

 

I write a sentence, but Microsoft Word’s indefatigable grammar checker catches me:

[Screenshot: Word underlines “well-argued” and suggests “well argued”]

I see! Word underlined “well-argued” in blue because the hyphen is bad style. Thanks! I choose the obligingly provided correction (just click the first menu entry): “well argued”. Then:

[Screenshot: Word underlines “well argued” and suggests “well-argued”]

Sigh.


Computing: the Art, the Magic, the Science

 

My colleagues and I have just finished recording our new MOOC (online course), an official ETH offering on the EdX platform. The preview is available [1] and the course will run starting in September.

As readers of this blog know, I have enthusiastically, under the impetus of Marco Piccioni at ETH, embraced MOOC technology to support and spread our courses. The particular target has been the introduction to programming that I have taught for over a decade at ETH, based on the Touch of Class textbook [2]. In February this blog announced [3] the release of our first MOOC, embodying the essentials of our ETH course and making it available not only to ETH students but to the whole world. The course does not just include video lectures: it also supports active student participation through online exercises and programs that can be compiled and tested on the cloud, with no software installation. These advanced features result from our research on support for distributed software development (by Christian Estler and Martin Nordio, with Carlo Furia and others).

This first course was a skunkworks project, which we did entirely on our own without any endorsement from ETH or any of the main MOOC players. We and our students have very much benefited from the consequent flexibility, and from the use of homegrown technology relying on the Moodle framework. We will keep this course for our own students and for any outside participant who prefers a small-scale, “boutique” version. But the EdX brand and EdX’s marketing power will enable us to reach a much broader audience. We want to provide the best introductory computing course on the market, and the world needs to know about it. In addition, the full support of media services at ETH helped us reach a higher standard on the technical side. (For our first course, the home-brewed one, we did not have a studio, so that every time an ambulance drove by — our offices are close to the main Zurich hospital — we had to restart the current take.)

The course’s content is not exactly the same: we have broadened the scope from just programming to computing, although it retains a strong programming component. We introduced additional elements such as an interview with Professor Peter Widmayer of ETH on the basics of computer science theory. For both new material and the topics retained from the first version we have adapted to the accepted MOOC practice of short segments, although we did not always exactly meet the eight-minute upper limit that was suggested to us.

We hope that you, and many newcomers, will like the course and benefit from it.

References

[1] EdX course: Computing: Art, Magic, Science, preview available here.

[2] Bertrand Meyer: Touch of Class: Learning how to Program Well, with Objects and Contracts, Springer Verlag, revised printing, 2013, book page here.

[3] Learning to Program, Online: article from this blog, 3 February 2014, available here.

 

 


When pictures lie

 

One of the most improvable characteristics of scientific papers is the graphical presentation of numerical data. It is sad to see that thirty years after Tufte published the first edition of his masterpiece [1], many authors still include grossly inaccurate graphics. Sadder still when the authors are professional graphic designers, who should know better. Take this chart [2] from the latest newsletter of Migros, Switzerland’s largest supermarket chain. To convince Swiss people that they should not worry about their food bills, it displays the ratio of food expenses to income in various countries. There would be many good ways to represent this information graphically, but someone thought it clever to draw variable-size coins of the respective currencies. According to the text, “the bigger the circle, the larger the income’s share devoted to food”.

Just a minor problem: the visual effect is utterly misleading. Taking three examples from the numbers given, the ratio is roughly 10% for Switzerland, 30% for Russia, 40% for Morocco. And, sure enough,  compared to the Swiss coin in the figure, the Russian coin is about three times bigger and the Moroccan coin four times… in diameter! What the eye sees, of course, is the area. Since the area varies as the square of the diameter, one gets the impression that Russians spend nine times, not three, and Moroccans sixteen times, not four, as much as the Swiss.

To convey the correct suggestion, the diameter of the Russian coin should have been about 73% larger than the Swiss coin’s diameter (the square root of three is about 1.73), and the diameter of the Moroccan coin twice as large, that is to say half of what it is in the figure.
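In symbols (a worked restatement of the computation just made): the eye reads area, and the area of a disc grows as the square of its diameter, so coins whose areas are to be proportional to the ratios r must have diameters proportional to the square root of r:

\[
d \propto \sqrt{A} \propto \sqrt{r}, \qquad
\frac{d_{\mathrm{Russia}}}{d_{\mathrm{Switzerland}}} = \sqrt{\frac{30}{10}} \approx 1.73, \qquad
\frac{d_{\mathrm{Morocco}}}{d_{\mathrm{Switzerland}}} = \sqrt{\frac{40}{10}} = 2.
\]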

The impression is particularly misleading for countries where the ratio, unlike in Russia or Morocco, is close to Switzerland’s. Most interestingly, although no doubt by accident, for neighboring countries, where Swiss people are prone to go shopping in search of a bargain, a practice that possibly does not enthuse Migros. The extra percentage devoted to food (using this time no longer rough approximations but precise values from the figures given on the Migros page) is 4% for Austria (10.6 ratio vs Switzerland’s 10.2), 8% for Germany (11.1), 30% for France (13.3) and 43% for Italy (14.6). But if you look at the picture the circles suggest much bigger differences; for example the Italian circle is obviously computed from the ratio of the squares, 14.6² / 10.2², showing an increase of about 105%. In other words, Italians proportionally devote to food a little over two-fifths more than the Swiss, but the graph suggests they spend twice as much.

On the premise that one should not ascribe to malevolence what can be explained by ignorance, I hope the Migros graphic designers will get a copy of Tufte’s book for their future endeavors.

Read Tufte too if you want the pictures in your papers to be not just attractive but accurate.

References

[1] Edward R. Tufte: The Visual Display of Quantitative Information, Graphics Press, second edition, 2001. See his site here.

[2] “How much do we spend to feed ourselves?” on the Migros site, available here for the French version (replace “fr” in the URL by “de” for German and “it” for Italian, I did not see an English version). Click on the figure for a readable version.


The secret of success

In the process of finishing a book right now, I have come to realize that writing is really a simple matter. There are only three issues to address:

  1. How to start.
  2. How to finish.
  3. How to take care of the stuff in-between.

In the early stages  (1) you face the anguish of the “empty paper, defended by its whiteness” (Mallarmé) and do not know where to begin. If you overcome it, you have all the grunt work to do (3), step after step. At the end (2), while deep down you feel you have done enough and should be permitted to  click “Publish”, you still have to fill the holes, which may be small in number but have remained holes for a reason: you have again and again delayed filling them because in reality you do not know what to write there.

All things considered, then, it is not such a big deal.


Adult entertainment

 

I should occasionally present examples of the strange reasons people sometimes invoke for not using Eiffel. In an earlier article [1] I gave the basic idea common to all these reasons, but there are many variants, in the general style “I am responsible for IT policy and purchases for IBM, the US Department of Defense and Nike, and was about to sign the PO for the triple site license when I noticed that an article about Eiffel was published in 1997. How dare you! I had a tooth removed that year and it hurt a lot. I would really have liked to use Eiffel but you just made it impossible”.

While going through old email I found one of these carefully motivated strategic policy decisions: a missing “L” in a class name. Below is, verbatim [2], a message posted on the EiffelStudio developers list in 2006, and my answer. It also provides an interesting glimpse of what supposedly grown-up people find it worthwhile to spend their days on.

Original message

From: es-devel-bounces@origo.ethz.ch [mailto:es-devel-bounces@origo.ethz.ch] On Behalf Of Peter Gummer
Sent: Tuesday, 29 August, 2006 14:01
To: es-devel@origo.ethz.ch
Subject: [Es-devel] Misspelling as a naming convention

Today I submitted a problem report that one of the EiffelVision classes has misspelt “tabbable” as “tabable“. Manu replied that the EiffelVision naming convention is that class or feature names ending in “able” will not double the preceding consonant, regardless of whether this results in wrong spelling.

Looking at the latest Es-changes Digest email, I see various changes implementing this naming convention. For example, the comment for revision 63043 is, “Changed from controllable to controlable to meet naming convention”.

This is lunacy! “Controlable” (implying the existence of some verb “to controle“) might look quite ok to French eyes, but it looks utterly unprofessional to me. It does have a sort of Chaucerian, Middle English, pre-Gutenberg charm I suppose. Is this part of a plot for a Seconde Invasion Normande of the Langue Anglaise?

We are about to embark on some GUI work. Although we are probably going to use .NET WinForms, EiffelVision was a possible choice. But bad spelling puts me in a bad mood. I’d be very reluctant to work with EiffelVision because of this ridiculous naming convention.

– Peter Gummer

Answer

From: Bertrand Meyer
Sent: Wednesday, 30 August, 2006 00:52
To: Peter Gummer
Cc: es-devel@origo.ethz.ch
Subject: Re: [Es-devel] Misspelling as a naming convention

This has nothing to do with French. If anything, French practices the doubling of consonants before a suffix more than English does; an example (extracted from reports of users’ attitudes towards EiffelVision) is English “passionate“, French “passionné“. For the record, there’s no particular French dominance in the Eiffel development team, either at Eiffel Software or elsewhere. The recent discussion on EiffelVision’s “-able” class names involved one native English speaker out of three people, invalidating at the 33% level Kristen Nygaard’s observation that the language of science is English as spoken by foreigners.

The problem in English is that the rules defining which consonants should be doubled before a suffix such as “able” are not obvious. See for example this page from the University of Ottawa:

Double the final consonant before a suffix beginning with a vowel if both of the following are true: the consonant ends a stressed syllable or a one-syllable word, and the consonant is preceded by a single vowel.

Now close your eyes and repeat this from memory. I am sure that won’t be hard because you knew the rule all along, but can we expect this from all programmers using EiffelVision?

Another Web page, from a school in Oxfordshire, England, says:

Rule: Double the last consonant when adding a vowel suffix to a single syllable word ending in one vowel and one consonant.

Note that this is not quite the same rule; it doesn’t cover multi-syllable words with the stress (tonic accent) on the last syllable; and it would suggest “GROUPPABLE” (“group” is a one-syllable word ending in one vowel and one consonant), whereas the first rule correctly prescribes “GROUPABLE“. But apparently this is what is being taught to Oxfordshire pupils, whom we should stand ready to welcome as Eiffel programmers in a few years.

Both rules yield “TRANSFERABLE” because the stress is on the first syllable of “transfer“. But various dictionaries we have consulted also list “TRANSFERRABLE” and “TRANSFERRIBLE“.

As another example consider “FORMATING“. Both rules suggest a single “t“. The Solaris spell checker indeed rejects the form with two “t“s and accepts the version with one; but — a case of Unix-Windows incompatibility that seems so far to have escaped the attention of textbook authors — Microsoft Word does the reverse! In fact in default mode if you type “FORMATING” it silently corrects it to “FORMATTING“. It’s interesting in this example to note a change of tonic accent between the original and derived words: “fórmat” (both noun and verb) but “formáting“. Maybe the Word convention follows the “Ottawa” rule but by considering the stress in the derivation rather than the root? There might be a master’s thesis topic in this somewhere.

Both rules imply “MIXXABLE” and “FIXXABLE“, but we haven’t found a dictionary that accepts either of these forms.

Such rules cannot cover all cases anyway (they are “UNFATHOMMABLE“) because “consonant” vs “vowel” is a lexical distinction that doesn’t reflect the subtleties of English pronunciation. For example either rule would lead to “DRAWWABLE” because the word “draw” ends with “w“, a letter that you’ll find everywhere characterized as a consonant. Lexically it is a consonant, but phonetically it is sometimes a consonant and sometimes not, in particular at the end of a word. In “Wow“, the first “w” is a consonant, but not the second one. A valid rule would need to take into account not only spelling but also pronunciation. This is probably the reason behind the examples involving words ending in “x“: phonetically “X” can be considered two consonants, “KS“. But then the rule becomes more tricky, forcing the inquirer, who is understandably getting quite “PERPLEXXED” at this stage, to combine lexical and phonetic reasoning in appropriate doses.

No wonder then a page from the Oxford Dictionaries site states:

One of the most common types of spelling error is a mistake over whether a word is spelled with a double or a single consonant.

and goes on to list many examples.

You can find a list of words ending in “able” here. Here are a few cases involving derivations from a word ending with “p”:

Single consonant
DEVELOPABLE
GRASPABLE
GROUPABLE
HELPABLE
KEEPABLE
REAPABLE
RECOUPABLE

Doubled consonant
DIPPABLE
DROPPABLE (but: DRAPABLE)
FLOPPABLE
MAPPABLE
RECAPPABLE (but: CAPABLE)
RIPPABLE (but: ROPABLE)
SHIPPABLE
SKIPPABLE
STOPPABLE
STRIPPABLE
TIPPABLE

There are also differences between British and American usage.

True, the “Ottawa” rule could be amended to take into account words ending in “w“, “x“, “h” and a few other letters, and come reasonably close to matching dictionary practice. But should programmers have to remember all this? Will they?

Since we are dealing in part with artificial words, there is also some doubt as to what constitutes a “misspelt” word as you call it (or a “misspelled” one as Eiffel conventions — based on American English — would have it). Applying the rule yields “MAPPABLE“, which is indeed found in dictionaries. But in the world of graphics we have the term “bitmap“, where the stress is on the first syllable. The rule then yields “BITMAPABLE“. That’s suspicious but “GOOGLABLE“; a search produces 31 “BITMAPPABLE” and two “BITMAPABLE“, one of which qualified by “(Is that a word?)”. So either EiffelVision uses something that looks inconsistent (“BITMAPABLE” vs “MAPPABLE“) but follows the rule; or we decide for consistency.

In our view this case can be generalized. The best convention is the one that doesn’t require programmers to remember delicate and sometimes fuzzy rules of English spelling, but standardizes on a naming convention that will be as easy to remember as possible. To achieve this goal the key is consistency. A simple rule for EiffelVision classes is:

  • For an “-able” name derived from a word ending with “e“, drop the “e“: REUSABLE. There seems to be no case of words ending with another vowel.
  • If the name is derived from a word ending with a consonant, just add “able“: CONTROLABLE, TOOLTIPABLE, GROUPABLE.

Some of these might look strange the first couple of times but from then on you will remember the convention.

While we are flattered that EiffelVision should be treated as literature, we must admit that there are better recommendations for beach reading, and that Eiffel is not English (or French).

The above rule is just a convention and someone may have a better suggestion.

With best regards,

— Bertrand Meyer, Ian King, Emmanuel Stapf

Reference and note

[1] Habit, happiness and programming languages, article in this blog, 22 October 2012, see here.

[2] I checked the URLs, found that two pages have disappeared since 2006, and replaced them with others having the same content. The formatting (fonts, some of the indentation) is added. Peter Gummer asked me to make sure that his name always appears with two “m”s.


What is wrong with CMMI

 

The CMMI model of process planning and assessment has been very successful in some industry circles, essentially as a way for software suppliers to establish credibility. It is far, however, from having achieved the influence it deserves. It is, for example, not widely taught in universities, which in turn limits its attractiveness to industry. The most tempting explanation involves the substance of CMMI: that it prescribes processes that are too heavy. But in fact the basic ideas are elegant, they are not so complicated, and they have been shown to be compatible with flexible approaches to development, such as agile methods.

I think there is a simpler reason, of form rather than substance: the CMMI defining documents are atrociously written.  Had they benefited from well-known techniques of effective technical writing, the approach would have been adopted much more widely. The obstacles to comprehension discourage many people and companies which could benefit from CMMI.

Defining the concepts

One of the first qualities you expect from a technical text is that it defines the basic notions. Take one of the important concepts of CMMI, “process area”. Not just important, but fundamental; you cannot understand anything about CMMI if you do not understand what a process area is. The glossary of the basic document ([1], page 449) defines it as

A cluster of related practices in an area that, when implemented collectively, satisfies a set of goals considered important for making improvement in that area.

The mangled syntax is not a good omen: contrary to what the sentence states, it is not the area that should be “implemented collectively”, but the practices. Let us ignore it and try to understand the intended definition. A process area is a collection of practices? A bit strange, but could make sense, provided the notion of “practice” is itself well defined. Before we look at that, we note that these are practices “in an area”. An area of what? Presumably, a process area, since no other kind of area is ever mentioned, and CMMI is about processes. But then a process area is… a collection of practices in a process area? Completely circular! (Not recursive: a meaningful recursive definition is one that defines simple cases directly and builds complex cases from them. A circular definition defines nothing.)

All that this is apparently saying is that if we already know what a process area is, CMMI adds the concept that a process area is characterized by a set of associated practices. This is actually a useful idea, but it does not give us a definition.

Let’s try to see if the definition of “practice” helps. The term itself does not have an entry in the glossary, a bit regrettable but not too worrying given that in CMMI there are two relevant kinds of practices: specific and generic. “Specific practice” is defined (page 461) as

An expected model component that is considered important in achieving the associated specific goal. (See also “process area” and “specific goal.”)

This statement introduces the important observation that in CMMI a practice is always related to a “goal” (another one of the key CMMI concepts); it is one of the ways to achieve that goal. But this is not a definition of “practice”! As to the phrase “an expected model component”, it simply tells us that practices, along with goals, are among the components of CMMI (“the model”), but that is a side remark, not a definition: we cannot define “practice” as meaning “model component”.

What is happening here is that the glossary does not give a definition at all; it simply relies on the ordinary English meaning of “practice”. Realizing this also helps us understand the definition of “process area”: it too is not a definition, but assumes that the reader already understands the words “process” and “area” from their ordinary meanings. It simply tells us that in CMMI a process area has a set of associated practices. But that is not what a glossary is for: the reader expects it to give precise definitions of the technical terms used in the document.

This misuse of the glossary is typical of what makes CMMI documents so ineffective. In technical discourse it is common to hijack words from ordinary language and give them a special meaning: the mathematical use of such words as “matrix” or “edge” (of a graph) denotes well-defined objects. But you have to explain such technical terms precisely, and be clear at each step whether you are using the plain-language meaning or the technical meaning. If you mix them up, you completely confuse the reader.

In fact any decent glossary should make the distinction explicit by underlining, in definitions, terms that have their own entries (as in: a cluster of related practices, assuming there is an entry for “practice”); then it is clear to the reader whether a word is used in its ordinary or technical sense. In an electronic version the underlined words can be links to the corresponding entries. It is hard to understand why the CMMI documents do not use this widely accepted convention.

Towards suitable definitions

Let us try our hand at what suitable definitions could have been for the two concepts just described; not a vacuous exercise since process area and practice are among the five or six core concepts of CMMI. (Candidates for the others are process, goal, institutionalization and assessment).

“Practice” is the more elementary concept. In fact it retains its essential meaning from ordinary language, but in the CMMI context a practice is related to a process and, as noted, is intended to satisfy a goal. What distinguishes a practice from a mere activity is that the practice is codified and repeatable. If a project occasionally decides to conduct a design review, that is not a practice; a systematically observed daily Scrum meeting is a practice. Here is my take on defining “practice” in CMMI:

Practice: A process-related activity, repeatable as part of a plan, that helps achieve a stated goal.

CMMI has both generic practices, applicable to the process as a whole, and specific practices, applicable to a particular process area. From this definition we can easily derive definitions for both variants.

Now for “process area”. In discussing this concept above, there is one point I did not mention: the reason the CMMI documents can get away with the bizarre definition (or rather non-definition) cited is that if you ask what a process area really is you will immediately be given examples: configuration management, project planning, risk management, supplier agreement management… Then  you get it. But examples are not a substitute for a definition, at least in a standard that is supposed to be precise and complete. Here is my attempt:

Process area: An important aspect of the process, characterized by a clearly identified set of issues and activities, and in CMMI by a set of applicable practices.

The definition of “specific practice” follows simply: a practice that is associated with a particular process area. Similarly, a “generic practice” is a practice not associated with any process area.

I’ll let you be the judge: which definitions do you prefer, these or the ones in the CMMI documents?

By the way, I can hear the cynical explanation: that the jargon and obscurity are intentional, the goal being to justify the need for experts that will interpret the sacred texts. Similar observations have been made to explain the complexity of certain programming languages. Maybe. But bad writing is common enough that we do not need to invoke a conspiracy in this case.

Training for the world championship of muddy writing

The absence of clear definitions of basic concepts is inexcusable. But the documents in their entirety are written in a government-committee-speak that erects barriers against comprehension. As an example among hundreds, take the following extract, the entire description of the generic practice GP2.2, “Establish and maintain the plan for performing the organizational training process”, part of the Software CMM (a 729-page document!) [2], page 360:

This plan for performing the organizational training process differs from the tactical plan for organizational training described in a specific practice in this process area. The plan called for in this generic practice would address the comprehensive planning for all of the specific practices in this process area, from the establishment of strategic training needs all the way through to the assessment of the effectiveness of the organizational training effort. In contrast, the organizational training tactical plan called for in the specific practice would address the periodic planning for the delivery of individual training offerings.

Even to a good-willed reader the second and third sentences sound like gibberish. One can vaguely intuit that the practice just introduced is being distinguished from another, but which one, and how? Why the conditional phrases, “would address”? A plan either does or does not address its goals. And if it addresses them, what does it mean that a plan addresses a planning? Such tortured tautologies, in a high-school essay, would result in a firm request to clean up and resubmit.

In fact the text is trying to say something simple, which should have been expressed like this:

This practice is distinct from practice SP1.3, “Establish an Organizational Training Tactical Plan” (page 353). The present practice is strategic: it covers planning the overall training process. SP1.3 is tactical: it covers the periodic planning of individual training activities.

(In the second sentence we could retain “from defining training needs all the way to assessing the effectiveness of training”, simplified of course from the corresponding phrase in the original, although I do not think it adds much.)

Again, which version do you prefer?

The first step in producing something decent was not even a matter of style but simple courtesy to the reader. The text compares a practice to another, but it is hard to find the target of the comparison: it is identified as the “tactical plan for organizational training” but that phrase does not appear anywhere else in the document!  You have to guess that it is listed elsewhere as the “organizational training tactical plan”, search for that string, and cycle through its 14 occurrences to see which one is the definition.  (To make things worse, the phrase “training tactical plan” also appears in the document — not very smart for what is supposed to be a precisely written standard.)

The right thing to do is to use the precise name, here SP1.3 — what good is it to introduce such code names throughout a document if it does not use them for reference? — and for good measure list the page number, since this is so easy to do with text processing tools.

In the CMMI chapter of my book Touch of Class (yes, an introductory programming textbook does contain an introduction to CMMI) I cited another extract from [2] (page 326):

The plan for performing the organizational process focus process, which is often called “the process-improvement plan,” differs from the process action plans described in specific practices in this process area. The plan called for in this generic practice addresses the comprehensive planning for all of the specific practices in this process area, from the establishment of organizational process needs all the way through to the incorporation of process-related experiences into the organizational process assets.

In this case the translation into text understandable by common mortals is left as an exercise for the reader.

With such an uncanny ability to say in fifty words what would better be expressed in ten, it is not surprising that basic documents run to 729 pages and that understanding of CMMI by companies that feel compelled to adopt it requires an entire industry of commentators of the sacred word.

Well-defined concepts have simple names

The very name of the approach, “Capability Maturity Model Integration”, is already a sign of WMD (Word-Muddying Disease) at the terminal stage. I am not sure if anyone anywhere knows how to parse it correctly: is it the integration of a model of maturity of capability (right-associative interpretation)? Of several models? These and other variants do not make much sense, if only because in CMMI “capability” and “maturity” are alternatives, used respectively for the Continuous and Staged versions.

In fact the only word that seems really useful is “model”; the piling up of tetrasyllabic words with very broad meanings does not help suggest what the whole thing is about. “Integration”, for example, only makes sense if you are aware of the history of CMMI, which generalized the single CMM model to a family of models, but this history is hardly interesting to a newcomer. A name, especially a long one, should carry the basic notion.

A much better name would have been “Catalog of Assessable Process Practices”, which is even pronounceable as an acronym, and defines the key elements: the approach is based on recognized best practices; these practices apply to processes (of an organization); they must be subject to assessment (the most visible part of CMMI — the famous five levels — although not necessarily the most important one); and they are collected into a catalog. If “catalog” is felt too lowly, “collection” would also do.

Catalog of Assessable Process Practices: granted, it sounds less impressive than the accumulation of pretentious words making up the actual acronym. As is often the case in software engineering methods and tools, once you remove the hype you may be disappointed at first (“So that’s all that it was after all!”), and then you realize that the idea, although brought back down to more modest size, remains a good idea, and can be put to effective use.

At least if you explain it in English.

References

[1] CMMI Product Team: CMMI for Development, Version 1.3, Improving processes for developing better products and services, Technical Report CMU/SEI-2010-TR-033, Software Engineering Institute, Carnegie Mellon University, November 2010, available here.

[2] CMMI Product Team: CMMI for Systems Engineering/Software Engineering/Integrated Product and Process Development/Supplier Sourcing, Version 1.1, Staged Representation (CMMI-SE/SW/IPPD/SS, V1.1, Staged), Technical Report CMU/SEI-2002-TR-012, Software Engineering Institute, Carnegie Mellon University, 2002, available here.


Interesting functionalities

Admittedly the number of people who will find the following funny, or of any interest at all, is pretty limited, so it is a good bet that about 99.86% of the faithful readers of this blog can safely wait for the next post. To appreciate the present one you must be a French-speaking nerd with an interest in both Pushkin’s poetry and Bayesian language translation. Worse, you must have a sense of the comic that somehow matches mine. (Then let’s make that 99.98%.)

Anyway, if you are still with me: there is a famous Pushkin love poem entitled Я помню чудное мгновенье…: “I remember the magic moment…” (the moment when he first saw her). I have no idea why I fed it into Google Translate; sheer idleness I suppose. In the line

И снились милые черты

the poet states that the delicate features of his beloved came to him in his sleep. Literally: “And [your] gentle [face’s] features came to my dreams”.

Google renders this into French as “Et rêvé de fonctionnalités intéressantes”: And dreamed of interesting functionalities. (The direct translation to English is less fun.)

Kudos to Google for capturing the subtle mood of 21st-century  passion. When the true nerd — I know what I am talking about — feels romantic, falls asleep, and starts dreaming, we now know what he dreams of: interesting functionalities.


Word of the day

Word

Umlottery [ˈʊm.lɑːɾɚɹi] (fr. umloterie, f.; ger. Umlotterie, f.; it. umlotteria, f.; rus. умлотерия, f.).

Definition

A spur-of-the-moment and generally random decision, by a hesitant speaker of German talking in front of a (typically large) native-speaker audience, to pronounce the next intended word with, or without, an umlaut on the decisive syllable.

Example use

My programming teacher seems to play the umlottery a lot recently, and most of the time he loses.

Discussion

In the German language some vowels change their pronunciation under the effect of the “umlaut”, a diacritical mark appearing as a double period, as in “ä” (pronounced like an “e” in “Ted“) versus a plain “a” (pronounced “Ah“). Many nouns and verbs add umlauts in some but not all of their inflections (declinations or conjugations).  Non-native speakers find it a challenge to remember when to umlaut and when not. The consequences of incorrect umlauting can be dramatic; for example “drucken” means to print and “drücken” to press. Imagine the consequences of a police chief telling a subordinate “Print a new copy of the witness’s statement” and being understood as “Pressure the witness into making a different statement“. The stress is consequently high on the public speaker who suddenly cannot remember, in the crux of a talk, whether the first syllable of his next chosen word should be umlauted  or not.  Umlottery is the widely practiced technique of taking a big breath, silently praying to some higher power for protection, and taking a chance (as if betting at the lottery) one way or the other.

Etymology

Composite word made up from “umlaut” and “lottery”.

Origin

Precise origin unknown. First attested written occurrence in “Bertrand Meyer’s Technology Blog“, an obscure publication of dubious circulation, in October of 2012. Rumored, without independently confirmed evidence, to have been in common use in the early years of the 21st century among foreign-born professors at the ETH Zurich.

Alternative hypotheses

Some researchers have hypothesized that the word is related to the “Unified Modeling Language” (or “UML“, hence the suggestion), under the argument that using UML for a project is akin to betting on its success at the lottery. There is, however, no scholarly consensus in favor of such a connection.


The Modes and Uses of Scientific Publication


(This article was first published on the CACM blog in September 2011.)
Publication is about helping the advancement of humankind. Of course.

Let us take this basis for granted and look at the other, possibly less glamorous aspects.

Publication has four modes: Publicity; Exam; Business; and Ritual.

1. Publication as Publicity

The first goal of publication is to tell the world that you have discovered something: “See how smart I am!” (and how much smarter than all the others out there!). In a world devoid of material constraints for science, or where the material constraints are handled separately, as in 19th-century German universities where professors were expected to fund their own labs, this would be the only mode and use of publication. Science today is a more complex edifice.

A good sign that Publication as Publicity is only one of the modes is that with today’s technology we could easily skip all the others. If all we cared about were to make our ideas and results known, we would simply put out our papers on ArXiv or just our own Web page. But almost no one stops there; researchers submit to conferences and journals, demonstrating how crucial the other three modes are to the modern culture of science.

2. Publication as Exam

Academic careers depend on a publication record. Actually this is not supposed to be the case; search and tenure committees are officially interested in “impact,” but any candidate is scared of showing a short publication list where competitors have tens or (commonly) hundreds of items.

We do not just publish; we want to be chosen for publication. Authors are proud of the low acceptance rates of conferences at which their papers have been accepted; in the past few years it has in fact become common practice, in publication lists attached to CVs, to list this percentage next to each accepted article. Acceptance rates are carefully tracked; see for example [2] for software engineering.

As Jeff Naughton has pointed out [1], this mode of working amounts to giving researchers the status of students forced to take exams again and again. Maybe that part is inevitable; the need to justify ourselves anew every morning may be an integral part of being a scientist, especially one funded by other people’s money. Two other consequences of this phenomenon are, I believe, more damaging.

The first risk directly affects the primary purpose of publication (remember the advancement of humankind?): a time-limited review process with low acceptance rates implies that some good papers get rejected and some flawed ones accepted. Everyone in software engineering knows (and recent PC chairs have admitted) that getting a paper accepted at the International Conference on Software Engineering is in part a lottery; with an acceptance rate hovering around 13%, this is inevitable. The mistakes occur both ways: papers accepted or even getting awards, then shown a few months later to be inaccurate; and innovative papers getting rejected because some sentence rubbed the referees the wrong way, or some paper was not cited. With a 4-month review cycle, and the next deadline coming several months later, the publication of a truly important result can be delayed significantly.

The second visible damage is publication inflation. Today’s research environment channels productive research teams towards an LPU (Least Publishable Unit) publication practice, causing an explosion of small contributions and the continuous decrease of the ratio of readers to writers. When submitting a paper I have always had, as my personal goal, to be read; but looking at the overall situation of computer science publication today suggests that this is not the dominant view: the overwhelming goal of publication is publication.

3. Publication as Business

Publishing requires an infrastructure, and money plays a role. Conferences in particular are a business. They have a budget to balance, not always an easy task, although a truly successful conference can be a big money-maker for its sponsor, commercial or non-profit. The financial side of conference publication has its consequences on authors: if you do not pay your fees, not only will you be unable to participate, but your paper will not be published.

One can deplore these practices, in particular their effect on authors from less well-endowed institutions, but they result from today’s computer science publication culture with its focus on the conference, what Lance Fortnow has called “A Journal in a Hotel”.

Sometimes the consequences border on the absurd. The ASE conference (Automated Software Engineering) accepts some contributions as “short papers”. Fair enough. At ASE 2009, “short paper” did not mean a shorter conference presentation but the permission to put up a poster and stand next to it for a while and answer passersby’s questions. For that privilege — and the real one: a publication in the conference volume — one had to register for the conference. ASE 2009 was in New Zealand, the other end of the world for a majority of authors. I yielded to the injunction: who was I to tell the PhD student whose work was the core of the submission, and who was so happy to have a paper accepted at a well-ranked conference, that he was not going to be published after all? But such practices are dubious. It would be more transparent to set up an explicit pay-for-play system, with page charges: at least the money would go to a scientific society or a university. Instead we ended up funding (in addition to the conference, which from what I heard was an excellent experience) airlines and hotels.

What makes such an example remarkable is that a reasonable justification exists for every one of its components: a highly selective refereeing process to maintain the value of the publication venue; limiting the number of papers selected for full presentation, to avoid a conference with multiple parallel tracks (and the all too frequent phenomenon of conference sessions whose audience consists of the three presenters plus the session chair); making sure that authors of published papers actually attend the event, so that it is a real conference with personal encounters, not just an opportunity to increment one’s publication count. The concrete result, however, is that authors of short papers have the impression of being ransomed without getting the opportunity to present their work in a serious way. Literally seconds before I was going to hit the “publish” button for the present article, an author of an accepted short paper for ASE 2012 (where the process appears similar) sent an email to complain, triggering a new discussion. We clearly need to find better solutions to resolve the conflicting criteria.

4. Publication as Ritual

Many of the seminal papers in science, including some of the most influential in computer science, defy classification; each used a distinctive, one-of-a-kind style. Would they stand a chance in one of today’s highly ranked conferences, such as ICSE in software or VLDB in databases? It’s hard to guess. Each community has developed its own standard look-and-feel, so that after a while all papers start looking the same. They are like a classical mass with its Te Deum, Agnus Dei and Kyrie Eleison. (The “Te Deum” part is, in a conference submission, spread throughout the paper, in the form of adoring citations of the program committee members’ own divinely inspired articles, good for their H-indexes if they bless your own offering.)

All empirical software engineering papers, for example, have the obligatory “Threats to Validity” section, which has developed into a true art form. The trick is the same as in the standard interview question “What can you say about your own deficiencies?”, to which every applicant knows the key: describe a personality trait so that you superficially appear self-critical but in reality continue boasting, as in “sometimes I take my work too much to heart” [3]. The “Threats to Validity” section follows the same pattern: you try to think of all possible referee objections, the better to refute them.

Another part of the ritual is the “related work” section, treacherous because you have to make sure not to omit anything that a PC member finds important; also, you must walk a fine line between criticizing existing research too much, which could offend someone, and not enough, which enables the referee to say that you are not bringing anything significantly new. I often wonder who, besides the referees, reads those sections. But here too it is easier to lament than to fault the basic idea or propose better solutions. We do want to avoid wasting our time on papers whose authors are not aware of previous work. The related work section allows referees to perform this check. Its importance in the selection process has, however, grown out of proportion. It is one thing to make sure that a paper is state-of-the-art, but another to reject it (as often happens) because it fails to cite a particular contribution whose results would not directly affect its own. Here we move from the world of the rational to the world of the ritual. An extreme and funny recent example — funny to me, not necessarily to the coauthors — is a rejection from APSEC 2011, the Asia-Pacific Software Engineering Conference, based on one review (the others were positive) that stated: “How novel is this? Are [there] not any cloud-based IDEs out there that have [a] similar awareness model integrated into their CM? This is something the related work [section] fails to describe precisely.” [4] The ritual here becomes bizarre: as far as we know, no existing system discusses a similar model; the reviewer too does not know of any; but he blasts the paper all the same for not citing work that he thinks must have been done by someone, somehow, somewhere. APSEC is a fine conference — it has to be, from the totally unbiased criterion that it accepted another one of our submissions this year! — and this particular paper may or may not have been ready for publication; judge it for yourself [5]. Such examples suggest, however, that the ritual of computer science publication has its limits.

Publicity, Exam, Business, Ritual: to which one of the four modes of publication are you most attuned? Oh, sorry, I forgot: in your case, it is solely for the advancement of humankind.

References and notes

[1] Jeffrey F. Naughton, DBMS Research: First 50 Years, Next 50 Years, slides of keynote at 26th IEEE International Conference on Data Engineering, 2010, available at lazowska.cs.washington.edu/naughtonicde.pdf .

[2] Tao Xie, Software Engineering Conferences, at people.engr.ncsu.edu/txie/seconferences.htm .

[3] I once saw on French TV a hilarious interview with an entrepreneur who had started a software company in Vietnam, where job candidates simply did not know “the code” and would, in response to such a question, go on to tell the interviewer about being rude to their mothers and all the other horrible things they had done in their lives.

[4] The words in brackets were not in the review but I added them for clarity.

[5] Martin Nordio, H.-Christian Estler, Carlo A. Furia and Bertrand Meyer: Collaborative Software Development on the Web, available at arxiv.org/abs/1105.0768 .



The story of our field, in a few short words

 

(With all dues to [1], but going up from four to five as it is good to be brief yet not curt.)

At the start there was Alan. He was the best of all: built the right math model (years ahead of the real thing in any shape, color or form); was able to prove that no one among us can know for sure if his or her loops — or their code as a whole — will ever stop; got to crack the Nazis’ codes; and in so doing kind of saved the world. Once the war was over he got to build his own CPUs, among the very first two or three of any sort. But after the Brits had used him, they hated him, let him down, broke him (for the sole crime that he was too gay for the time or at least for their taste), and soon he died.

There was Ed. Once upon a time he was Dutch, but one day he got on a plane and — voilà! — the next day he was a Texan. Yet he never got the twang. The first topic that had put him on the map was the graph (how to find a path, as short as can be, from a start to a sink); he also wrote an Algol tool (the first I think to deal with all of Algol 60), and built an OS made of many a layer, which he named THE in honor of his alma mater [2]. He soon got known for his harsh views, spoke of the GOTO and its users in terms akin to libel, and wrote words, not at all kind, about BASIC and PL/I. All this he aired in the form of his famed “EWD”s, notes that he would xerox and send by post round the globe (there was no Web, no Net and no Email back then) to pals and foes alike. He could be kind, but often he stung. In work whose value will last more, he said that all we must care about is to prove our stuff right; or (to be more close to his own words) to build it so that it is sure to be right, and keep it so from start to end, the proof and the code going hand in hand. One of the keys, for him, was to use as a basis for ifs and loops the idea of a “guard”, which does imply that the very same code can in one case print a value A and in some other case print a value B, under the watch of an angel or a demon; but he said this does not have to be a cause for worry.

At about that time there was Wirth, whom some call Nick, and Hoare, whom all call Tony. (“Tony” is short for a list of no less than three long first names, which makes for a good quiz at a party of nerds — can you cite them all by rote?) Nick had a nice coda to Algol, which he named “W”; what came after Algol W was also much noted, but the onset of Unix and hence of C cast some shade over its later life. Tony too did much to help the field grow. Early on, he had shown a good way to sort an array real quick. Later he wrote that for every type of unit there must be an axiom or a rule, which gives it an exact sense and lets you know for sure what will hold after every run of your code. His fame also comes from work (based in part on Ed’s idea of the guard, noted above) on the topic of more than one run at once, a field that is very hot today as the law of Moore nears its end and every maker of chips has moved to a mode where each wafer holds more than one — and often many — cores.

Dave (from the US, but then at work under the clime of the North) must not be left out of this list. In a paper pair, both from the same year and both much cited ever since, he told the world that what we say about a piece of code must only be a part, often a very small part, of what we could say if we cared about every trait and every quirk. In other words, we must draw a clear line: on one side, what the rest of the code must know of that one piece; on the other, what it may avoid to know of it, and even not care about. Dave also spent much time to argue that our specs must not rely so much on logic, and more on a form of table. In a later paper, short and sweet, he told us that it may not be so bad that you do not apply full rigor when you chart your road to code, as long as you can “fake” such rigor (his own word) after the fact.

Of UML, MDA and other such TLAs, the less said, the more happy we all fare.

A big step came from the cold: not just one Norse but two, Ole-J (Dahl) and Kris, came up with the idea of the class; not just that, but all that makes the basis of what today we call “O-O”. For a long time few would heed their view, but then came Alan (Kay), Adele and their gang at PARC, who tied it all to the mouse and icons and menus and all the other cool stuff that makes up a good GUI. It still took a while, and a lot of hit and miss, but in the end O-O came to rule the world.

As to the math basis, it came in part from MIT — think Barb and John — and the idea, known as the ADT (not all TLAs are bad!), that a data type must be known at a high level, not from the nuts and bolts.

There also is a guy with a long first name (he hates it when they call him Bert) but a short last name. I feel a great urge to tell you all that he did, all that he does and all that he will do, but much of it uses long words that would seem hard to fit here; and he is, in any case, far too shy.

It is not all about code and we must not fail to note Barry (Boehm), Watts, Vic and all those to whom we owe that the human side (dear to Tom and Tim) also came to light. Barry has a great model that lets you find out, while it is not yet too late, how much your tasks will cost; its name fails me right now, but I think it is all in upper case. At some point the agile guys — Kent (Beck) and so on — came in and said we had got it all wrong: we must work in pairs, set our goals to no more than a week away, stand up for a while at the start of each day (a feat known by the cool name of Scrum), and dump specs in favor of tests. Some of this, to be fair, is very much like what comes out of the less noble part of the male of the cow; but in truth not all of it is bad, and we must not yield to the urge to throw away the baby along with the water of the bath.

I could go on (and on, and on); who knows, I might even come back at some point and add to this. On the other hand I take it that by now you got the idea, and even on this last day of the week I have other work to do, so ciao.

Notes

[1] Al’s Famed Model Of the World, In Words Of Four Signs Or Fewer (not quite the exact title, but very close): find it on line here.

[2] If not quite his alma mater in the exact sense of the term, at least the place where he had a post at the time. (If we can trust this entry, his true alma mater would have been Leyde, but he did not stay long.)


Fun with Bayes

 

Try this: go to translate.google.com, choose Russian as the source and English as the target language. In the input field, type “Андрей Иванович мне писал” or, if you do not have a Cyrillic keyboard, the transliteration into the Latin alphabet, “Andrej Ivanovich mne pisal”, which (unless you uncheck the default option) will be automatically transcribed into Cyrillic as you go. The correct English translation appears: “Andrey wrote to me”.

Correct yes, but partial: the input did not read “Andrey” but “Andrey Ivanovich”. Russians have a first name, a last name and also a “patronymic” based on the father’s name: our Andrey’s father is or was called Ivan. Following the characters in, say, War and Peace, would be next to impossible without patronymics (it’s hard enough with them). On the other hand, English usually omits the patronymic, so if you are translating something simpler than a Tolstoy novel it is reasonable for an automated tool to yield “Andrey” as the translation of “Andrey Ivanovich”. In some cases, depending on the context, it gives “Andrei”, and in some others the anglicized “Andrew”.

Google Translate has yet another translation for “Andrey Ivanovich”. Assume that you want to be specific; maybe you know two people called Andrey and use the patronymic to distinguish between them. You want to say, for example,

       I have in mind not Andrey Nikolaevich, but Andrey Petrovich.

You can enter this as “Ja imeju v vidu ne Andreja Nikolaevicha, no Andreja Petrovicha”, or copy-paste the Cyrillic: Я имею в виду не Андрея Николаевича, но Андрея Петровича. (Note for Russian speakers: the word expected after the comma is of course а, but Google Translate, knowing that а can mean “and” in other contexts, translates it here with the opposite of the intended meaning. This is why I use но, which sounds strange but is understandable, and is correctly translated.) Try it now and see what comes out on the English side:

      I do not mean Kolmogorov, but Andrei Petrovich.

Google Translate, in other words, has another translation for “Andrey Nikolaevich”:

       Kolmogorov

The great mathematician A.N. Kolmogorov (1903-1987) indeed had this first name and this patronymic; to conclude that anyone with these names is also called Kolmogorov is not, however, a step that most of us are prepared to take, especially those of us with a (living) friend called Andrey Nikolaevich.

A favorite of Russians is “Dobroje Utro, Dmitri Anatolevich”, meaning “Good morning, Dmitri Anatolevich”, which Google translates into “Good morning, Mr. President”. I will let you figure this one out.

All the translations cited were, by the way, obtained on the date of this post; algorithms can change. For other examples, see this page, in Russian (thanks to Sergey Velder for bringing it to my attention).

What happened? Automatic translation has made great progress in recent years, largely as a result of the switch from structural, precise techniques based on linguistic theory to approximate methods based on statistics. These methods rely on an immense corpus of existing human translations, accessible on the Internet, and apply Bayesian techniques to match every text element to the most frequently encountered translation of a similar phrase in existing translations. This switch has caused a revolution in translation, making it possible to get approximate equivalents. Personally I find them most useful for a language I do not know at all: if I want to read a Web page in Korean, I can get its general idea, which I could not have done fifteen years ago without finding a native speaker. For a language that I know imperfectly, the help is less clear, because the translations are almost never entirely right; in fact they are almost always, beyond the level of simple phrases, grammatically incorrect.
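
To make the statistical idea concrete, here is a toy sketch in Python. It is entirely mine: the corpus is invented and has nothing to do with Google’s actual data or algorithms. In its simplest flavor the approach reduces to: for each source phrase, output the translation most often paired with it in a corpus of existing human translations.

    # Toy frequency-based translation: for each source phrase, pick the
    # target most often paired with it in a corpus of human translations.
    # The corpus below is invented for this example.
    from collections import Counter

    corpus = [
        ("Андрей Николаевич", "Andrey Nikolaevich"),
        ("Андрей Николаевич", "Kolmogorov"),
        ("Андрей Николаевич", "Kolmogorov"),  # famous bearers dominate
        ("Андрей Петрович", "Andrei Petrovich"),
    ]

    def most_frequent_translation(phrase):
        counts = Counter(target for source, target in corpus if source == phrase)
        return counts.most_common(1)[0][0] if counts else phrase

    print(most_frequent_translation("Андрей Николаевич"))  # Kolmogorov

The real systems, of course, work with millions of pairs, fragments of phrases rather than whole ones, and probabilities rather than raw counts; but the Kolmogorov effect follows from the same principle.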

With Bayesian techniques it is understandable why “Andrey Nikolaevich” sometimes comes out in English as “Kolmogorov”: he is probably the most famous of all Andrey Nikolaevichs in Google’s database of Russian-English translation pairs. If you do not know the database, the behavior is mysterious, as you cannot usually guess whether the translation in a particular context will be “Andrey”, “Andrei”, “Andrew”, “Andrey N.” or “Kolmogorov”, the five variants that I have seen (try your own experiments!). Some cases are predictable once you know that the techniques are statistical: if you include the word “Teorema” (theorem) anywhere close, you are sure to get “Kolmogorov”. But usually there is no obvious clue.

Statistical techniques are great but such examples, beyond the fun, show their limits. I truly hope that in the future they can be combined with more exact techniques based on sound linguistics.

Postscript: are you bytypal?

Perhaps I should explain why I use Google Translate with Russian as the source language. I do not use it for translation, but I do need it to type texts in Russian. I could use a Cyrillic keyboard, but I don’t because I am a very fast touch typist on the English (QWERTY) keyboard. (Learning to type at a professional level early in life was one of the most useful skills I ever acquired — not as useful as grammar, set theory or axiomatic semantics, but far more useful than separation logic.) So it is convenient for me to type in Latin letters, say “Dostojevsky”, and rely on a tool that immediately transliterates into the Cyrillic equivalent, here Достоевский. Then I can copy-paste the result into, for example, an email to a Russian-speaking recipient.
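
(The transliteration side, by the way, is simple in principle. Here is a toy sketch of my own, with a deliberately tiny hand-made table; it is nothing like the real tools, which also use context and correct typos.)

    # Toy Latin-to-Cyrillic transliteration: longest-match substitution
    # over a small table. The table is mine and far from complete.
    TABLE = {
        "shch": "щ", "zh": "ж", "ch": "ч", "sh": "ш", "ja": "я", "ju": "ю",
        "a": "а", "b": "б", "d": "д", "e": "е", "i": "и", "k": "к",
        "l": "л", "m": "м", "n": "н", "o": "о", "p": "п", "r": "р",
        "s": "с", "t": "т", "u": "у", "v": "в",
    }

    def transliterate(text):
        result, i = "", 0
        while i < len(text):
            for size in (4, 2, 1):  # try the longest match first
                chunk = text[i:i + size].lower()
                if chunk in TABLE:
                    result += TABLE[chunk]
                    i += size
                    break
            else:  # unknown character (space, punctuation): keep it
                result += text[i]
                i += 1
        return result

    print(transliterate("mne pisal"))  # мне писал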

I used to rely on a tool that does exactly this, Translit (www.translit.ru); I have of late found Google Translate generally more convenient because it does not just transliterate but relies on its database to correct some typos. I do not need the translation (except possibly to check that what I wrote makes sense), but I see it anyway; that is how I ran into the Bayesian fun described above.

As a matter of fact the transliteration tool is good but, as often with software from Google, only “almost” right. Sometimes it simply refuses to transliterate what I wrote, because it insists on its own misguided idea of what I meant. The Auto Correct option of Microsoft Office has the good sense, when it wrongly corrects your input and you retype it, to obey you the second time around; but Google Translate’s transliteration facility does not seem to have any such policy: it sticks to its own view, right or wrong. As a consequence it has occasionally taken me a good five minutes of fighting the tool to enter a single word. Such glitches might be removed over time, but at the moment they are sufficiently annoying that I am thinking of teaching myself to touch-type in Cyrillic.

Is this possible? Initially I learned to type on the French (AZERTY) keyboard and I had to unlearn it, since otherwise a Q would occasionally come out as an A, a Z as a W and a semicolon as an M. I know bilingual people, but none who have programmed themselves to touch-type on different keyboards. Anyone out there willing to comment on the experience of bytypalism?


Stendhal on abstraction

This week we step away from our usual sources of quotations — the Hoares and Dijkstras and Knuths — in favor of an author who might seem like an unlikely inspiration for a technology blog: Stendhal. A scientist may, like anyone else, be fascinated by Balzac, Flaubert, Tolstoy or Dostoevsky, but they live in an entirely different realm; Stendhal is the mathematician’s novelist. Not particularly through the themes of his works (as could be the case with Borges or Eco), but because of their clear structure and elegant style, impeccable in its conciseness and razor-like in its precision. Undoubtedly his writing was shaped by his initial education; he prepared for the entrance exam of the then very young École Polytechnique, although at the last moment he yielded instead to the call of the clarion.

The scientific way of thinking was not just an influence on his writing; he understood the principles of scientific reasoning and knew how to explain them. Witness the following text, which explains just about as well as anything I know the importance of abstraction. In software engineering (see for example [1]), abstraction is the key talent, a talent of a paradoxical nature: the basic ideas take a few minutes to explain, and a lifetime to master. In this effort, going back to the childhood memories of Henri Beyle (Stendhal’s real name) is not a bad start.

Stendhal’s Life of Henri Brulard is an autobiography, with only the thinnest of disguises into a novel (compare the hero’s name with the author’s). In telling the story of his morose childhood in Grenoble, the narrator grumbles about the incompetence of his first mathematics teacher, a Mr. Dupuy, who taught mathematics “as a set of recipes to make vinegar” (comme une suite de recettes pour faire du vinaigre) and tells how his father found a slightly better one, Mr. Chabert. Here is the rest of the story, already cited in [2]. The translation is mine; you can read the original below, as well as a German version. Instead of stacks and circles — or a university’s commencement day, see last week’s posting — the examples invoke eggs and cheese, but wouldn’t you agree that this paragraph is as good a definition of abstraction, directly applicable to software abstractions, and specifically to abstract data types and object abstractions (yes, it does discuss “objects”!), as any other?

So I went to see Mr. Chabert. Mr. Chabert was indeed less ignorant than Mr. Dupuy. Through him I discovered Euler and his problems on the number of eggs that a peasant woman brings to the market where a scoundrel steals a fifth of them, then she leaves behind the entire half of the remainder and so forth. This opened my mind, I glimpsed what it means to use the tool called algebra. I’ll be damned if anyone had ever explained it to me; endlessly Mr. Dupuy spun pompous sentences on the topic, but never did he say this one simple thing: it is a division of labor, and like every division of labor it creates wonders by allowing the mind to concentrate all its forces on just one side of objects, on just one of their qualities. What difference it would have made if Mr. Dupuy had told us: This cheese is soft or it is hard; it is white, it is blue; it is old, it is young; it is mine, it is yours; it is light or it is heavy. Of so many qualities, let us only consider the weight. Whatever that weight is, let us call it A. And now, no longer thinking of cheese, let us apply to A everything we know about quantities. Such a simple thing; and yet no one was explaining it to us in that far-away province [3]. Since that time, however, the influence of the École Polytechnique and Lagrange’s ideas may have trickled down to the provinces.
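
To make the connection with software explicit, here is a small sketch of my own (in Python, and of course nowhere in Stendhal or in [1] and [2]): of all the qualities of the cheese we retain only the weight, call it A, and from then on reason about quantities alone.

    # Stendhal's division of labor, in code: of all the cheese's qualities
    # (soft or hard, white or blue, old or young, mine or yours),
    # the abstraction retains only the weight.
    class Cheese:
        def __init__(self, texture, color, age, owner, weight):
            self.texture, self.color = texture, color
            self.age, self.owner = age, owner
            self.weight = weight  # the one quality we care about here

    def total_weight(items):
        # "No longer thinking of cheese": from here on, only quantities.
        return sum(item.weight for item in items)

    lot = [Cheese("soft", "white", "young", "mine", 1.2),
           Cheese("hard", "blue", "old", "yours", 0.8)]
    print(total_weight(lot))  # 2.0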

References

[1] Jeff Kramer: Is abstraction the key to computing?, in Communications of The ACM, vol. 50, 2007, pages 36-42.
[2] Bertrand Meyer and Claude Baudoin: Méthodes de Programmation, Eyrolles, 1978, third edition, 1982.
[3] No doubt readers from Grenoble, site of great universities and specifically one of the shrines of French computer science, will appreciate how Stendhal calls it “that backwater” (cette province reculée).

Original French text

J’allai donc chez M. Chabert. M. Chabert était dans le fait moins ignare que M. Dupuy. Je trouvai chez lui Euler et ses problèmes sur le nombre d’œufs qu’une paysanne apportait au marché lorsqu’un méchant lui en vole un cinquième, puis elle laisse toute la moitié du reste, etc., etc. Cela m’ouvrit l’esprit, j’entrevis ce que c’était que se servir de l’instrument nommé algèbre. Du diable si personne me l’avait jamais dit ; sans cesse M. Dupuy faisait des phrases emphatiques sur ce sujet, mais jamais ce mot simple : c’est une division du travail qui produit des prodiges comme toutes les divisions du travail et permet à l’esprit de réunir toutes ses forces sur un seul côté des objets, sur une seule de leurs qualités. Quelle différence pour nous si M. Dupuy nous eût dit : Ce fromage est mou ou il est dur ; il est blanc, il est bleu ; il est vieux, il est jeune ; il est à moi, il est à toi ; il est léger ou il est lourd. De tant de qualités ne considérons absolument que le poids. Quel que soit ce poids, appelons-le A. Maintenant, sans plus penser absolument au fromage, appliquons à A tout ce que nous savons des quantités. Cette chose si simple, personne ne nous la disait dans cette province reculée ; depuis cette époque, l’École polytechnique et les idées de Lagrange auront reflué vers la province.

German translation (by Benjamin Morandi)

Deshalb ging ich zu Herrn Chabert. In der Tat war Herr Chabert weniger ignorant als Herr Dupuy. Bei ihm fand ich Euler und seine Probleme über die Zahl von Eiern, die eine Bäuerin zum Markt brachte, als ein Schurke ihr ein Fünftel stahl, sie dann die Hälfte des Restes hinterliest u.s.w. Es hat mir die Augen geöffnet. Ich sah was es bedeutet, das Algebra genannte Werkzeug zu benutzen. Unaufhörlich machte Herr Dupuy emphatische Sätze über dieses Thema, aber niemals dieses einfache Wort: Es ist eine Arbeitsteilung, die wie alle Arbeitsteilungen Wunder herstellt und dem Geist ermöglicht seine Kraft ganz auf eine einzige Seite von Objekten zu konzentrieren, auf eine Einzige ihrer Qualitäten. Welch Unterschied für uns, wenn uns Herr Dupuy gesagt hätte: Dieser Käse ist weich oder er ist hart; er ist weiss, er ist blau; er ist alt, er ist jung; er gehört dir, er gehört mir; er ist leicht oder er ist schwer. Bei so vielen Qualitäten betrachten wir unbedingt nur das Gewicht. Was dieses Gewicht auch sei, nennen wir es A. Jetzt, ohne unbedingt weiterhin an Käse denken zu wollen, wenden wir auf A alles an, was wir über Mengen wissen. Diese einfach Sache sagte uns niemand in dieser zurückgezogenen Provinz; von dieser Epoche an werden die École Polytechnique und die Ideen von Lagrange in die Provinz zurückgeflossen sein.


From duplication to duplicity: a short history of the technology of knowledge transfer

1440:

Gutenberg discovers the secret of producing, out of one text, many books for the benefit of many people.

2007:

Von und Zu Guttenberg discovers the secret of producing, out of many texts, one book for the benefit of one person.

 

 


The great programming haiku competition

In a few weeks I will again be teaching my Introductory Programming course at ETH, based for the first time on the published “Touch of Class” textbook [1]. For fun (mine if no one else’s) every lecture will conclude with a haiku summarizing the topic.

I made up a few, given below, and am opening a competition for more. Every proposal should be submitted in the form of a comment to this post. Every winner’s haiku and name will appear in the course slides, and in the special Programming Haiku page which will be added to the book’s site. There are four rules:

  • The contribution has to be a proper haiku: “three unrhymed lines of five, seven, and five syllables”.
  • It must summarize the principal concept of a chapter or main section of the textbook or, better yet, of one of the course’s lectures; see [2] for the lecture plan.
  • It must give the book reference (chapter or section) or lecture number or both.
  • The prize committee’s members are secret and its judgments final.

Here, for a start, are my own examples.

Proof of the undecidability of the halting problem

Section 7.5 of the book; lecture 5.2.

If it stops, it loops,
Yet if it looped, it would stop.
Sad contradiction.
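
(For the curious, the haiku’s contradiction takes only a few lines to write out. The sketch below is mine, not the book’s, and assumes a hypothetical function halts, which is precisely what the proof shows cannot exist.)

    # Sketch of the classic argument, assuming a hypothetical oracle
    # halts(f) that returns True exactly when calling f() terminates.
    def paradox():
        if halts(paradox):  # if it stops, it loops...
            while True:
                pass
        # ...yet if it looped, halts(paradox) would be False,
        # and it would stop. Sad contradiction: no such oracle exists.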

Recursion

Chapter 14, especially section 14.3; lecture 9.1.


Often, I call you.
But when the going gets tough,
I will call myself.
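
(In code, with my example rather than the book’s, the haiku might confess as follows.)

    def factorial(n):
        # Often, I call others; when the going gets tough, I call myself.
        return 1 if n <= 1 else n * factorial(n - 1)

    print(factorial(5))  # 120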

Topological sort

Chapter 15; lectures 11.1 and 11.2.


Partial to total?
With the right data structures,
O of m plus n.
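
(A sketch of mine, not the book’s, of the O(m + n) scheme the haiku celebrates: with a queue of vertices that have no remaining predecessors, a partial order becomes a total one in time linear in the m edges and n vertices.)

    # Topological sort in O(m + n), the queue-based scheme often credited
    # to Kahn: n vertices, edges given as (u, v) pairs meaning u before v.
    from collections import deque

    def topological_sort(n, edges):
        successors = [[] for _ in range(n)]
        in_degree = [0] * n
        for u, v in edges:
            successors[u].append(v)
            in_degree[v] += 1
        ready = deque(v for v in range(n) if in_degree[v] == 0)
        order = []
        while ready:
            u = ready.popleft()
            order.append(u)
            for v in successors[u]:
                in_degree[v] -= 1
                if in_degree[v] == 0:
                    ready.append(v)
        return order if len(order) == n else None  # None: there is a cycle

    print(topological_sort(4, [(0, 1), (0, 2), (1, 3), (2, 3)]))  # [0, 1, 2, 3]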

Dynamic binding

Section 16.3; lecture 8.1.


O-O programmers:
How many to screw a bulb?
None whatsoever.
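
(The joke spelled out, in a sketch of mine: no one screws in the bulb explicitly; you make the call, and each bulb applies its own version.)

    # Dynamic binding: the call is the same everywhere; which version
    # runs depends on the object's actual type at execution time.
    class Bulb:
        def screw_in(self):
            raise NotImplementedError

    class BayonetBulb(Bulb):
        def screw_in(self):
            print("push and twist")

    class ScrewBulb(Bulb):
        def screw_in(self):
            print("turn clockwise")

    for bulb in [BayonetBulb(), ScrewBulb()]:
        bulb.screw_in()  # no case analysis: the bulb "screws itself in"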

Deferred classes

Section 16.5; lecture 8.1.


Do not implement!
Though for a truly Zen spec
You need a contract.
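
(One more sketch of mine, using Python’s abstract classes, with a plain assertion standing in for a contract; the book itself works with Eiffel’s deferred classes and contracts.)

    # Deferred class: area is specified but deliberately not implemented;
    # the assertion plays the role of a (very modest) contract.
    from abc import ABC, abstractmethod

    class Shape(ABC):
        @abstractmethod
        def area(self):
            """Do not implement! Descendants must provide this."""

        def describe(self):
            a = self.area()
            assert a >= 0  # the contract every descendant must honor
            return f"area: {a}"

    class Square(Shape):
        def __init__(self, side):
            self.side = side

        def area(self):
            return self.side ** 2

    print(Square(3).describe())  # area: 9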

References

[1] Touch of Class: An Introduction to Programming Well Using Objects and Contracts, Springer Verlag, 2009. See Amazon page (still wrongly says the book is not yet published).

[2] “Introduction to Programming” course at ETH Zurich, Fall 2009: course page. This does not have the slides yet, but you can see last year’s slides in last year’s page.


Long AND clear?

(Originally a Risks forum posting, 1998.)

Although complaints about Microsoft Word’s eagerness to correct what it sees as mistakes are not new in the Risks forum, I think it is still useful to protest vehemently the way recent versions of Word promote the dumbing down of English writing by flagging (at least when you use their default options) any sentence that, according to some mysterious criterion, it deems too long, even if the sentence is made of several comma- or semicolon-separated clauses, and even though it is perfectly obvious to anyone, fan of Proust or not, that clarity is not a direct function of length, since it is just as easy to write obscurely with short sentences as with longish ones and, conversely, quite possible to produce an absolutely limpid sentence that is very, very long.


Computer technology: making mozzies out of betties

Are you a Beethoven or a Mozart? If you’ll pardon the familiarity, are you more of a betty or more of a mozzy? I am a betty. I am not referring to my musical abilities but to my writing style; actually, not the style of my writings (I haven’t completed any choral fantasies yet) but the style of my writing process. Mozart is famous for impeccable manuscripts; he could be writing in a stagecoach bumping its way through the Black Forest, on the kitchen table in the miserable lodgings of his second, ill-fated Paris trip, or in the antechamber of Archbishop Colloredo — no matter: the score comes out immaculate, not reflecting any of the doubts, hesitations and remorse that torment mere mortals.

 Mozart

Beethoven’s music, note-perfect in its final form, came out of a very different process. Manuscripts show notes overwritten, lines struck out in rage, pages torn apart. He wrote and rewrote and gave up and tried again and despaired and came back until he got it the way it had to be.

Beethoven

How I sympathize! I seldom get things right the first time, and when I had to use a pen and paper I almost never could produce a clean result; there always was one last detail to change. As soon as I could, I got my hands on typewriters, which removed the effects of ugly handwriting, but did not solve the problem of second thoughts followed by third thoughts and many more. Only with computers did it become possible to work sensibly. Even with a primitive text editor, the ability to try out ideas then correct and correct and correct is a profound change of the creation process. Once you have become used to the electronic medium, using a pen and paper seems as awkward and insufferable as, for someone accustomed to driving a car, being forced to travel in an ox cart.

This liberating effect, the ability to work on your creations as a sculptor kneading an infinitely malleable material, is one of the greatest contributions of computer technology. Here we are talking about text, but the effect is just as profound on other media, as any architect or graphic artist will testify.

The electronic medium does not just give us more convenience; it changes the nature of writing (or composing, or designing). With paper, for example, there is a great practical difference between introducing new material at the end of the existing text, which is easy, and inserting it at some unforeseen position, which is cumbersome and sometimes impossible. With computerized tools, it doesn’t matter. The change of medium changes the writing process and ultimately the writing itself: with paper the author ends up censoring himself to avoid revisions that would be painful in practice; with software tools, you work in whatever order suits you.

Technical texts, with their numbered sections and subsections, are another illustration of the change: with a text processor you do not need to come up with the full plan first, in an effort to avoid tedious renumbering later. You will use such a top-down scheme if it fits your natural way of working, but you can use any other one you like, and renumber the existing sections at the press of a key. And just think of the pain it must have been to produce an index in the old days: add a page (or, worse, a paragraph, since it moves the following ones in different ways) and you would have to recheck every single entry.

Recent Web tools have taken this evolution one step further, by letting several people revise a text collaboratively and concurrently (and, thanks to the marvels of longest-common-subsequence algorithms and the resulting diff tools, retreat to an earlier version if in our enthusiasm to change our design we messed it up). Wikis and Google Docs are the most impressive examples of these new techniques for collective revision.
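
Those diff tools rest on a classic dynamic-programming idea; here is a minimal sketch of my own, over sequences of lines (real tools add many refinements):

    # Core of a diff tool: the length of the longest common subsequence
    # of two sequences of lines, computed by dynamic programming.
    def lcs_length(a, b):
        table = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
        for i, line_a in enumerate(a):
            for j, line_b in enumerate(b):
                if line_a == line_b:
                    table[i + 1][j + 1] = table[i][j] + 1
                else:
                    table[i + 1][j + 1] = max(table[i][j + 1], table[i + 1][j])
        return table[len(a)][len(b)]

    old = ["the betty writes", "and rewrites", "and despairs"]
    new = ["the betty writes", "and despairs", "and tries again"]
    print(lcs_length(old, new))  # 2 shared lines; everything else is the diff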

Whether used by a single writer or in a collaborative development, computer tools have changed the very process of creation by freeing us from the tyranny of physical media and driving to zero the logistic cost of  one or a million changes of mind. For the betties among us, not blessed with an inborn ability to start at A, smoothly continue step by step, and end at Z, this is a life-changer. We can start where we like, continue where we like, and cover up our mistakes when we discover them. It does not matter how messy the process is, how many virtual pages we tore away, how much scribbling it took to bring a paragraph to a state that we like: to the rest of the world, we can present a result as pristine as the manuscript of a Mozart concerto.

These advances are not appreciated enough; more importantly, we do not take enough advantage of them. It is striking, for example, to see that blogs and other Web pages too often remain riddled with typos and easily repairable mistakes. This is undoubtedly because the power of computer technology tempts us to produce ever more documents and, in the euphoria, to neglect the old ones. But just as importantly, that technology empowers us to go back and improve. The old schoolmaster’s advice — revise and revise again [1] — can no longer be dismissed as an invitation to fruitless perfectionism; it is right, it is fun to apply, and at long last it is feasible.

Reference

 

[1] “Vingt fois sur le métier remettez votre ouvrage” (Twenty times back to the loom shall you bring your design), Nicolas Boileau
