Archive for the ‘Conference’ Category.

Europe asleep (a key-not)

This week, Informatics Europe, the association of European computer science departments and industry research centers, is holding its annual ECSS event, bizarrely billed as “20 years of Informatics Europe”. (Informatics Europe was created at the end of 2006 and incorporated officially in 2011. The first ever mention of the name appeared in an email from Jan van Leeuwen to me with cc to Christine Choppy, received on 23 October 2006 at 21:37 — we were working late. Extract from Jan’s message: “The name `Informatics Europe’ has emerged as a name that several people find appealing (and www.informatics-europe.org seems free).” So this year is at most the 18th anniversary.)

I would have liked to speak at this week’s event but was rejected, as explained at the end of this note. I am jotting down here a partial sketch of what I would have said, at least the introduction. (Engaging in a key-not since I was not granted a keynote.) Some of the underlying matters are of great importance and I hope to have the opportunity to talk or write about them in a more organized form in the future.

Informatics Europe came out of a need to support and unite Europe’s computer science (informatics) community. In October 2004 (funny how much seems to happen in October) Willy Zwaenepoel, chair of CS at EPFL (ETH Lausanne) wrote to me as the CS department head at ETH Zurich with an invitation to meet and discuss ways to work together towards making the discipline more visible in Switzerland. We met shortly thereafter, for a pleasant Sunday dinner on November 14. I liked his idea but suggested that any serious effort should happen at the European level rather than just Switzerland. We agreed to try to convince all the department heads that we could find across Europe and invite them to a first meeting. In the following weeks a frantic effort took place to identify, by going through university web sites and personal contacts, as many potential participants as possible. The meeting,  dubbed ECSS for European Computer Science Summit, took place at ETH Zurich on (you almost guessed it) 20-21 October 2005. The call for participation started with:

The departments of computer science at EPF Lausanne and ETH Zurich are taking the initiative of a first meeting of heads of departments in Europe.

Until now there hadn’t been any effort, comparable to the Computing Research Association in the US with its annual “Snowbird” conference, to provide a forum where they could discuss these matters and coordinate their efforts. We feel it’s time to start.

The event triggered enormous enthusiasm and in the following years we created the association (first with another name, pretty ridiculous in retrospect, but fortunately Jan van Leeuwen intervened) and developed it. For many years the association was hosted at ETH in my group, with a fantastic Executive Board (in particular its two initial vice presidents, Jan van Leeuwen and Christine Choppy) and a single employee (worth many), Cristina Pereira, who devoted an incredible amount of energy to developing services for the members, who are not individuals but organizations (university departments and industry research labs). One of the important benefits of the early years was to bring together academics from the Eastern and Western halves of the continent, the former having only recently emerged from communism and eager to make contacts with their peers from the West.

This short reminder is just to situate Informatics Europe for those who do not know about the organization. I will talk more about it at the end because the true subject of this note is not the institution but European computer science. The common concern of the founders was to bring the community together and enable it to speak with a single voice to advance the discipline. The opening paragraphs of a paper that Zwaenepoel and I published in Communications of the ACM to announce the effort (see here for the published version, or here for a longer one, pre-copy-editing) reflect this ambition:

Europe’s contribution to computer science, going back seventy years with Turing and Zuse, is extensive and prestigious; but the European computer science community is far from having achieved the same strength and unity as its American counterpart. On 20 and 21 October 2005, at ETH Zurich, the “European Computer Science Summit” brought together, for the first time, heads of computer science departments throughout Europe and its periphery. This landmark event was a joint undertaking of the CS departments of the two branches of the Swiss Federal Institute of Technology: EPFL (Lausanne) and ETH (Zurich).

The initiative attracted interest far beyond its original scope. Close to 100 people attended, representing most countries of the European Union, plus Switzerland, Turkey, Ukraine, Russia, Israel, a delegate from South Africa, and a representative of the ACM, Russ Shackelford, from the US. Eastern Europe was well represented. The program consisted of two keynotes and a number of panels and workshops on such themes as research policy, curriculum harmonization, attracting students, teaching CS to non-CS students, existing national initiatives, and plans for a Europe-wide organization. The reason our original call for participation attracted such immediate and widespread interest is that computer science in Europe faces a unique set of challenges as well as opportunities. There were dozens of emails in the style “It’s high time someone took such an initiative”; at the conference itself, the collective feeling of a major crystallizing event was palpable.

The challenges include some old and some new. Among the old, the fragmentation of Europe and its much treasured cultural diversity have their counterparts in the organization of the educational and research systems. To take just three examples from the education side, the UK has a system that in many ways resembles the US standard, although with significant differences (3- rather than 4-year bachelor’s degree, different hierarchy of academic personnel with fewer professors and more lecturers); German universities have for a long time relied on a long (9-semester) first degree, the “Diplom”; and France has a dual system of “Grandes Écoles”, engineering schools, some very prestigious and highly competitive, but stopping at a Master’s-level engineering degree, and universities with yet another sequence of degrees including a doctorate.

And so on. The immediate concerns in 2024 are different (Bologna adoption woes are a thing of the past) but the basic conundrum remains: the incredible amount of talent and creativity present in Europe remains dormant; research in academia (and industry) fails to deliver anywhere close to its potential. The signs are everywhere; as this note is only a sketch let me just mention a handful. The following picture shows the provenance of papers in this year’s International Conference on Software Engineering (ICSE), the premier event in the field. Even if you cannot read all the details (it’s a photo taken quickly from a back row in the opening session, sorry for the bad quality), the basic message is unmistakable: overwhelmingly China and the US, then some papers from Singapore, Australia and Canada. A handful from Germany and Switzerland, not a single accepted paper from France! In a discipline that is crucial for the future of every European nation.

[Photo: provenance of accepted papers at ICSE 2024, taken at the opening session.]

Venture capital? There is a bit more than twenty years ago, but it is still limited, avaricious and scared of risks. Government support? Horizon and other EU projects have helped many, with ERC grants  in particular (a brilliant European exclusive) leading to spectacular successes, but the bulk of the funding is unbelievably bureaucratic, forcing marriages of reason between institutions that have nothing in common (other than the hope of getting some monies from Brussels) and feeding a whole industry of go-between companies which claim to help applicants but contribute exactly zero to science and innovation. They have also had the perverse effect of limiting national sources of funding. (In one national research agency on whose evaluation committee I sat,  the acceptance rate is 11%. In another, where I recently was on the expert panel, it’s more like 8%. Such institutions are the main source of non-EU research funding in their respective countries.)

The result? Far less innovation than we deserve and a brain drain that every year gets worse. Some successes do occur, and we like to root for Dassault, SAP, Amadeus and more recently companies like Mistral, but almost all of the top names in technology — like them or loathe them — are US-based (except for their Chinese counterparts): Amazon, Microsoft, Google, OpenAI, Apple, Meta, X, or (to name another software company) Tesla. They benefit from European talent and European education: some have key research centers in Europe, and all have European engineers and researchers. So do non-European universities; not a few of the ICSE papers labelled above as “American” or “Canadian” are actually by European authors. Talk to a brilliant young researcher or bright-eyed entrepreneur in Europe: in most cases, you will hear that he wants to find a position or create a company in the US, because that is where the action is.

Let me illustrate the situation with a vivid example. In honor of Niklaus Wirth’s 80th birthday I co-organized a conference in 2014 where at the break a few of us were chatting with one of the speakers, Vint Cerf. Someone asked him a question which was popping up everywhere at that time, right in the middle of the Snowden affair: “if you were a sysadmin for a government organization, would you buy a Huawei router?”. Cerf’s answer was remarkable: I don’t know, he said, but there is one thing I do not understand: why in the world doesn’t Europe develop its own cloud solution? So honest, coming from an American — a Vice President at Google! — and so true. So true today still: we are all putting all our data on Amazon’s AWS and Cerf’s employer’s Google Cloud and IBM Cloud and Microsoft Azure. Total madness. (A recent phenomenon that appears even worse is something I have seen happening at European university after university: relinquishing email and other fundamental solutions to Microsoft! More and more of us now have our professional emails at outlook.com. Even aside from the technical issues, such en-masse surrender is demented.) Is Europe so poor or so retarded that it cannot build local cloud or email solutions? Of course not. In fact, some of the concepts were invented here!

This inability to deliver on our science and technology potential is one of the major obstacles to social and economic improvement in Europe. (Case in point: there is an almost one-to-one correspondence between the small set of countries that are doing better economically than the rest of Europe, often much better, and the small set of countries that take education and science seriously, giving them enough money and freeing them from overreaching bureaucracy. Did I mention Switzerland?) The brain drain should be a major source of worry; some degree of it is of course normal — enterprising people move around, and there are objective reasons for the magnetic attraction of the US — but the phenomenon is dangerously growing and is too unidirectional. Europe should offer its best and brightest a local choice commensurate with the remote one.

Many cases seem to suggest that Europe has simply given up on its ambitions. One specific example — academia-related but important — adds to the concerns raised apropos ICSE above. With a group of software engineering pioneers from across Europe (including some who would later help with Informatics Europe) we started the European Software Engineering Conference in 1987. I was the chair of the first conference, in Strasbourg that year, and the chair of the original steering committee for the following years (I later organized the 2013 session). The conference blossomed, reflecting the vibrant life of the European software engineering community, and open of course to researchers from all over the world. (The keynote speaker in Strasbourg was David Parnas, who joked that we had invited him, an American, because the French and Germans would never agree to a speaker from the other country. That quip was perhaps funny but as unfair as it was wrong: founders from different countries, notably including Italy and Belgium, even the UK, were working together in a respectful and friendly way without any national preferences.) Having done my job I stepped aside but was flabbergasted to learn some years later that ESEC had attached itself to a US-based event, FSE (the symposium on Foundations of Software Engineering). The inevitable and predictable happened: FSE was supposed to be ESEC-FSE every other year, but soon that practice fell by the wayside and now ESEC is no more. FSE is not the culprit here: it’s an excellent conference (I had a paper in the last edition), it is just not European. My blood boils each time I think about how the people who should have nurtured and developed ESEC, the result of many years of discussions and of excellent Europe-wide cooperation, betrayed their mission and let the whole thing disappear. Pathetic and stupid, and terrible for Europe, which no longer has an international conference in this fundamental area of modern technology.

The ESEC story helps in thinking about the inevitable question: who is responsible? Governments are not blameless; they are good at speeches but less at execution. When they do intervene, it’s often with haste (reacting to hype with pharaonic projects that burn heaps of money before running out of favor and delivering nothing). In France, the tendency is sometimes to let the state undertake technical projects that it cannot handle; the recipes that led to the TGV or Ariane do not necessarily work for IT. (A 2006 example was an attempt to create a homegrown search engine, which lasted just long enough to elicit stinging mockery in the Wall Street Journal, “Le Google”, unfortunately behind a paywall.)

It is too easy, however, to cast all the blame on outsiders. Perhaps the most important message that I would have wanted to convey to the department heads, deans, rectors and other academic decision-makers attending ECSS this week is that we should stop looking elsewhere and start working on the problems for which we are responsible. Academia is largely self-governed. Even in centralized countries where many decisions are made at the national level in ministries, the staff in those ministries largely consists of academics on secondment to the administration. European academia — except in the more successful countries, already alluded to, and by the way not exempt either from some of the problems of their neighbors — is suffocating under the weight of absurd rules. It is fashionable to complain about the bureaucracy, but many of the people complaining have the power to make and change these rules.

The absurdities are everywhere. In country A, a PhD must take exactly three years. (Oh yes? I thought it was the result that mattered.) By the way, if you have funding for 2.5 years, you cannot hire a PhD student (you say you will find the remaining funding in due time? What? You mean you are taking a risk?) In country B, you cannot be in the thesis committee of the student you supervised. (This is something bequeathed from the British system. After Brexit!) Countries C, D, E and F (with probably G, H, I, J and K to follow) have adopted the horrendous German idea of a “habilitation”, a second doctorate-like process after the doctorate, a very effective form of infantilization which maintains scientists in a subservient state until their late thirties, preventing them during their most productive years from devoting their energy to actual work. Universities everywhere subject each other to endless evaluation schemes in which no one cares about what you actually do in education and research but the game is about writing endless holier-than-thou dissertations on inclusiveness, equality etc. with no connection to any actual practice. In country L, politicized unions are represented in all the decision-making bodies and impose a political agenda, censoring important areas of research and skewing scientist hires on the basis of political preferences. In country M, there is a rule for every elementary event of academic life and the rule suffers no exception (even when you discover that it was made up two weeks earlier with the express goal of preventing you from doing something sensible). In country N, students who fail an exam have the right to a retake, and then a second retake, and then a third retake, in oral form of course. In country O, where all university presidents make constant speeches about the benefits of multidisciplinarity, a student passionate about robotics but with a degree in mechanical engineering cannot enroll in a master’s program in robotics in the computer science department. In country P (and Q and R and S and T) students and instructors alike must, for any step of academic life, struggle with a poorly designed IT system, to which there is no alternative. In country U, expenses for scientific conferences are reimbursed six months later, when not rejected as non-conformant. In country V, researchers and educators are hired through a protracted committee process which succeeds in weeding out candidates with an original profile. In country W, the prime criterion for hiring researchers is the H-index. In country X, it is the number of publications. In country Y, management looks at your research topics and forces you to change them every five years. I would need other alphabets but could go on.

When we complain about the difficulties of getting things done, we are very much like the hero of Kafka’s Before the Law, who grows old waiting in front of a gate, only to learn in his final moments that he could just have entered by pushing it. We need to push the gate of European academia. No one but we ourselves is blocking it. Simplifying everything would be a start, but there are more ways to enter; they are what I would have liked to present at ECSS and will have to wait for another day.

Which brings me back to the ECSS conference. I wrote to its organizers asking for the opportunity to give a talk. Naïvely, I thought the request would be obvious. After all, while Informatics Europe was at every step a group effort, with an outstanding group of colleagues from across Europe (I mentioned a few at the beginning, but there were many more, including all the members of the initial Executive Board), I played the key role as one of the two initiators of the idea, the organizer of the initial meeting and several of the following ECSS, the founding president for two terms (8 years), the prime writer of the foundational documents, the host of the first secretariat for many years in my ETH chair, the lead author of several reports, the marketer recruiting members, and the jack-of-all-trades for Informatics Europe. It may be exaggerated to say that for the first few years I carried the organization on my shoulders, but it is a fact that I found the generous funding (from ETH, industry partners and EPFL thanks to Zwaenepoel) that enabled us to get started and enabled me, when I passed the baton to my successor, to give him an organization in a sound financial situation, some 80 dues-paying members, and a strong record of achievements. Is it outrageous, after two decades, to ask for a microphone to talk about the future for 45 minutes? The response I got from the Informatics Europe management was as surprising as it was boorish: in our program (they said in February 2024!) there is no place left. To add injury to insult, they said that if I really wanted I could participate in some kind of panel discussion. (Sure, fly to Malta in the middle of the semester, cancel 4 classes and meetings, miss paper deadlines, all for 5 minutes of trying to put in a couple of words. By the way, one of the principles we had for the organization of ECSS was always to be in a big city with an important local community and an airport with lots of good connections to the principal places in Europe — and beyond for our US guests.) When people inherit a well-functioning organization, the result of hard work by a succession of predecessors, it is hard to imagine what pleasure they can take in telling them to go to hell. Pretty sick.

For me Informatics Europe was the application to my professional life of what remains a political passion: a passion for Europe and democracy. On this same blog in 2012 I published an article entitled “The most beautiful monument of Europe”, a vibrant hymn to the European project. While I know that some of it may appear naïve or even ridiculous, I still adhere to everything it says and I believe it is worth reading. While I have not followed the details of the activities of Informatics Europe since I stopped my direct involvement, I am saddened not to see any trace of European sentiment in it. We used to have Ukrainian members, from Odessa Polytechnic, who participated in the first ECSS meetings; today there is no member from Ukraine listed. One would  expect to see prominent words of solidarity with the country, which is defending our European values, including academic ones. Is that another sign of capitulation?

I am also surprised to see few new in-depth reports. Our friends from the US Computing Research Association, who were very helpful at the beginning of Informatics Europe (they included in particular Andy Bernat and Ed Lazowska, and Willy Zwaenepoel himself who had been a CRA officer during his years in the US), told us that one of the keys to success was to provide the community with factual information. Armed with that advice, we embarked on successive iterations of the “Informatics in Europe: Key Data” reports, largely due to the exhaustive work of Cristina Pereira, which provided unique data on salaries (something that we often do not discuss in Europe, but it is important to know how much a PhD student, postdoc, assistant professor or full professor makes in every surveyed country), student numbers, degrees, gender representation etc. etc., with the distinctive quality that — at Cristina’s insistence — we favored exactness over coverage: we included only the countries for which we could get reliable data, but for those we guaranteed full correctness and accuracy. From the Web site it seems these reports — which indeed required a lot of effort, but are they not the kind of thing the membership expects? — were discontinued some years ago. While the site shows some other interesting publications (“recommendations”), it seems regrettable to walk away from hard foundational work.

New management is entitled to its choices (as previous management is entitled to raise concerns). Beyond such differences of appreciation, the challenges facing European computer science are formidable. The enemies are outside, but they are also within ourselves. The people in charge are asleep at the wheel. I regret not having had the opportunity to try to wake them up in person, but I do hope for a collective jolt to enable our discipline to bring Europe the informatics benefits it deserves.


A new scientific index

The CF-Index, or Conference Frustration index, is an integer n (n ≥ 1) defined as follows. You are at a conference where your paper submission was rejected, and sitting in the session devoted to that paper’s very topic. You think to yourself: “My paper was at least n times better than the average here”. That n is your CF-index.

It is a law of nature (like speed never exceeding that of light, or temperature never going below absolute zero) that n < 1 is impossible. (The reason is obvious: if you were not the kind to believe your work is at least as good as anyone else’s, you would have gone for another profession, one calling for modesty, realism and timidity — such as, say, politician.)  Values of n = 3 or 4 are normal. Beyond 10 you might consider seeking professional advice. (These observations have nothing to do with my being at ICSE right now.)


A remarkable group photo

On 13-15 September 1999 a symposium took place at St Catherine’s College in Oxford, in honor of Tony Hoare’s “retirement” from Oxford (the word is in quotes because he has had several further productive careers since). The organizers were Jim Woodcock, Bill Roscoe and Jim Davies. The proceedings are available as Millennial Perspectives in Computer Science, MacMillan Education UK, edited by Davies, Roscoe and Woodcock. The symposium was a milestone event.

As part of a recent conversation on something else, YuQian Zhou (who was also there) sent me a group photo from the event, which I did not even know existed. I am including it below; it is actually a photo of a paper photo but the resolution is good. It is a fascinating gallery of outstanding people in programming and verification. (How many Turing award winners can you spot? I see 7.)

Many thanks to YuQian Zhou, Jim Woodcock and Bill Roscoe for insights into the picture in discussions of the past two weeks.

[Group photo from the September 1999 Oxford symposium in honor of Tony Hoare.]


New master program at SIT: Webinar tomorrow

The Schaffhausen Institute of Technology (SIT) is holding a Webinar tomorrow with a set of three talks by: Serguei Beloussov, founder of Acronis and president of SIT; Michael Widenius, CTO of MariaDB and creator of MySQL Server; and Mauro Pezzè, my colleague at SIT, who will present the new master program that we have just announced, combining CS/SE topics with management and marketing courses to train future technology leaders.

The talks are in the form of a Webinar, starting at 9 AM this Tuesday (9 June). You can find all the details on the corresponding SIT page here.


LASER 2020 in Elba Island: DevOps, Microservices and more, first week of June

The page for the 2020 LASER summer school (31 May to 7 June) now has the basic elements (some additions still forthcoming) and registration at the early price is open. The topic is DevOps, Microservices and Software Development for the Age of the Web with both conceptual lectures and contributions from industry, by technology leaders from Amazon, Facebook and ServiceNow. The confirmed speakers are:

  • Fabio Casati, ServiceNow and University of Trento, and Kannan Govindarajan from ServiceNow on Taking AI from research to production – at scale.
  • Adrian Cockcroft, Amazon Web Services, on Building and Operating Modern Applications.
  • Elisabetta Di Nitto, Politecnico di Milano.
  • Valérie Issarny, INRIA, on The Web for the age of the IoT.
  • Erik Meijer, Facebook, on Software Development At Scale.
  • Me, on Software from beginning to end: a comprehensive method.

As always, the setup is the incomparable environment of the Hotel del Golfo in Procchio, Elba Island off the coast of Tuscany, ideal at that time of year (normally good weather, warm but not hot, few tourists). The school is intensive but there is time to enjoy the beach, the hotel’s amenities and the wonderful environment of Elba (wake up your inner Napoleon). The school has a fairly small size and everyone lives under the same (beautiful) roof, so there is plenty of time for interaction with the speakers and other participants.

About these participants: the school is intended for engineers and managers in industry as well as researchers and PhD students. In fact it’s a mix that one doesn’t find that often, allowing for much cross-learning.

Another way to put it is that this is now the 16th edition of the school (it started in 2004 but we skipped one year), so it cannot be doing everything wrong.

 


What happened to the kilogram? Schaffhausen, 16 December

On December 16 (next Monday), the newly created Schaffhausen Institute of Technology is organizing an entire day of events around three (no less) talks by the physics Nobel prize winner and MIT professor Wolfgang Ketterle.

The culmination of the day is a talk by Prof. Ketterle in the evening on “What happened to the kilogram?”. From the abstract:

For 130 years, a cylinder made of a platinum-iridium alloy stored in Saint-Cloud near Paris was the official definition of a kilogram, the basic unit of mass. This all changed on May 20 of this year: a kilo is now defined by a fundamental constant of nature known as the Planck constant, which relates the energy of a photon to its frequency: 6.62607015 times 10⁻³⁴ kilograms times square meters per second. Try that the next time you buy a kilo of asparagus.

Sounds complicated? For MIT’s Wolfgang Ketterle, a Nobel Prize winner, “Conceptually, the definition is very simple”.

Simple? Really? Come to Schaffhausen and hear for yourself whether Prof. Ketterle can make the new kilogram crystal-clear to common mortals.
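
For readers who like to see the arithmetic spelled out, here is a minimal sketch of how fixing the Planck constant pins down the kilogram, assuming (as the SI does) that the second and the meter are already defined through the caesium frequency and the speed of light:

```latex
% Planck constant, fixed exactly by the 2019 SI redefinition:
%   h = 6.62607015 x 10^-34 kg m^2 s^-1
% With the meter and the second already defined, solving for kg
% gives the new definition of the kilogram:
\[
  h = 6.62607015 \times 10^{-34}\ \mathrm{kg\,m^2\,s^{-1}}
  \quad\Longrightarrow\quad
  1\ \mathrm{kg} = \frac{h}{6.62607015 \times 10^{-34}}\ \mathrm{m^{-2}\,s}.
\]
```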

Earlier in the day, he will give a talk in German on new forms of materials that appear at temperatures near absolute zero, complete with demonstrations.

More generally, there is a full set of talks throughout the day about various aspects of advanced physics and computer science, and even a “quantum magician”, plus music and food.

Schaffhausen is about 40 minutes from Zurich (or Zurich airport) by train or car.

Attendance is free but registration is recommended. One can register for the full day or for some events only. See further information and registration form here.


June LASER school, Elba, on Devops, Microservices…

The 2020 LASER summer school has been announced. It will take place June 1 to 6*, as always on Elba Island, this year with the theme DevOps, Microservices and Software Development for the Age of the Web. The first five speakers are listed on the conference page, with more to come, from both academia and industry.

This is the 16th edition of the school (already) and, as always, rests on the LASER recipe of “Sea, Sun and Software”: densely packed lectures by top experts with the opportunity to enjoy the extraordinary surroundings of the Island of Elba and the Hotel del Golfo’s unique food, beach and facilities, with lots of time devoted to interactions between speakers and attendees.

This year’s theme is devoted to advances in the newest Web technologies and the corresponding software engineering issues and development models.

*Arrival on May 31st, departure on June 7th.


Software Engineering Education: FISEE coming up

Over the past few days I have come across several people who told me they want to attend the Frontiers In Software Engineering Education (FISEE) workshop in Villebrumier, 11-13 November, but have not registered yet. If that’s your case please register right now because:

  • The number of spots is limited (it’s a residential event, everyone is hosted onsite, and there is a set number of rooms).
  • We need a preliminary program. The format of the event is flexible (Springer LNCS proceedings come after the meeting, and we make room for impromptu presentations and discussions), but we still need a basic framework and we need to finalize it now.

So please go ahead and fill in the registration form.

From the previous posting about FISEE:

The next event at the LASER center in Villebrumier (Toulouse area, Southwest France) is FISEE, Frontiers in Software Engineering Education, see the web site. This small-scale workshop, 11 to 13 November, is devoted to what Software Engineering needs, what should be changed, and how new and traditional institutions can adapt to the fast pace of technology.

Workshops at the Villebrumier center favor a friendly, informal and productive interaction between participants, who are all hosted on site. There are no formal submissions, but post-event proceedings will be published as part of the LASER sub-series of Springer Lecture Notes in Computer Science.

Like other events there, FISEE is by invitation; if you are active in the field of software engineering education as an educator, as a potential employer of software engineering graduates, or as a researcher, you can request an invitation by writing to me or one of the other organizers. Attendance is limited to 15-20 participants.

Among already scheduled talks: a keynote by Alexander Tormasov, rector of Innopolis University, and a talk by me on “the 15 concepts of software engineering”.


Software engineering education: Villebrumier LASER center, November

The next event at the LASER center in Villebrumier (Toulouse area, Southwest France) is FISEE, Frontiers in Software Engineering Education, see the web site. This small-scale workshop, 11 to 13 November, is devoted to what Software Engineering needs, what should be changed, and how new and traditional institutions can adapt to the fast pace of technology.

Workshops at the Villebrumier center favor a friendly, informal and productive interaction between participants, who are all hosted on site. There are no formal submissions, but post-event proceedings will be published as part of the LASER sub-series of Springer Lecture Notes in Computer Science.

Like other events there, FISEE is by invitation; if you are active in the field of software engineering education as an educator, as a potential employer of software engineering graduates, or as a researcher, you can request an invitation by writing to me or one of the other organizers. Attendance is limited to 15-20 participants.


Schedule and last deadline for LASER AI + ML + SE, Elba, June

The lecture schedule has now been posted for the 2019 LASER summer school on artificial intelligence, machine learning and software engineering. The speakers are Shai Ben-David (Waterloo), Lionel Briand (Luxembourg), Pascal Fua (EPFL), Erik Meijer (Facebook), Tim Menzies (NC State) and I.

The last deadline for registration is May 20.

The school takes place June 1-9 in the magnificent Hotel del Golfo in Elba Island, Italy.

All details at www.laser-foundation.org/school/2019.


Gail Murphy to speak at Devops 19

The DEVOPS 2019 workshop (6-8 May 2019) follows a first 2018 workshop whose proceedings [1] have just been published in the special LASER-Villebrumier subseries of Springer Lecture Notes in Computer Science. It is devoted to software engineering aspects of continuous development and new paradigms of software production and deployment, including but not limited to DevOps.

The keynote will be delivered by Gail Murphy, Vice-President Research & Innovation at the University of British Columbia and one of the leaders in the field of empirical software engineering.

The workshop is held at the LASER conference center in Villebrumier near Toulouse. It is by invitation; if you would like to receive an invitation please contact one of the organizers (Jean-Michel Bruel, Manuel Mazzara and me) with a short description of your interest in the field.

Reference

Jean-Michel Bruel, Manuel Mazzara and Bertrand Meyer (eds.), Software Engineering Aspects of Continuous Development and New Paradigms of Software Production and Deployment, First International Workshop, DEVOPS 2018, Chateau de Villebrumier, France, March 5-6, 2018, Revised Selected Papers, see here.


AI+ML+SE — Reminder about LASER school, coming up in June

A reminder about this year’s LASER school, taking place in Elba, Italy, June 1 to 9. The theme is

               AI + ML + SE

and the speakers:

  • Shai Ben-David, University of Waterloo
  • Lionel C. Briand, University of Luxembourg
  • Pascal Fua, EPFL
  • Erik Meijer, Facebook
  • Tim Menzies, NC State University
  • Me

Details at https://www.laser-foundation.org/school/.  From that page:

The 15th edition of the prestigious LASER summer school, in the first week of June 2019, will be devoted to the complementarity and confluence of three major areas of innovation in IT: Artificial Intelligence, Machine Learning and of course Software Engineering.

The school takes place in the outstanding environment of the Hotel del Golfo in Procchio, Elba, off the coast of Tuscany.

 


Towards empirical answers to important software engineering questions

(Adapted from a two-part article on the Communications of the ACM blog.)

1 The rise of empirical software engineering

One of the success stories of software engineering research in recent decades has been the rise of empirical studies. Visionaries such as Vic Basili, Marvin Zelkowitz and Walter Tichy advocated empirical techniques early [1, 2, 3]; what enabled the field to take off was the availability of software repositories for such long-running projects as Apache, Linux and Eclipse [4], which researchers started mining using modern data analysis techniques.

These studies have yielded many insights including surprises. More experienced developers can produce more buggy code (Schröter, Zimmermann, Premraj, Zeller). To predict whether a module has bugs, intrinsic properties such as complexity seem to matter less than how many changes it went through (Moser, Pedrycz, Succi). Automatic analysis of user reports seems much better at identifying bugs than spotting feature requests (Panichella, Di Sorbo, Guzman, Visaggio, Canfora, Gall). More extensively tested modules tend to have more bugs (Mockus, Nagappan, Dinh-Trong). Eiffel programmers do use contracts (Estler, Furia, Nordio, Piccioni and me). Geographical distance between team members negatively affects the amount of communication in projects (Nordio, Estler, Tschannen, Ghezzi, Di Nitto and me). And so on.

The basic observation behind empirical software engineering is simple: if software products and processes are worthy of discussion, they must be worthy of quantitative discussion just like any natural artifact or human process. Usually at that point the advocacy cites Lord Kelvin: “If you cannot measure it, you cannot improve it” [5].

Not that advocacy is much needed today, at least for publishing research in software engineering and in computer science education. The need for empirical backing of conceptual proposals has achieved consensus. The so-called “Marco Polo paper” [6] (I traveled far and saw wonderful things, thank you very much for your attention) no longer suffices for program committees; today they want numbers (and also, thankfully, a “threats to validity” section which protects you against suspicions that the numbers are bogus by stating why they might be). Some think this practice of demanding empirical backing for anything you propose has gone too far; see Jeff Ullman’s complaint [7], pertaining to database research rather than software engineering, but reflecting some of the same discussions. Here we can counter Kelvin with another quote (often attributed to Einstein, falsely but for greater effect): not everything that can be counted counts, and not everything that counts can be counted.

2 Limits of empirical research

There can indeed be too much of a good thing. Still, no one would seriously deny the fundamental role that empirical research has gained in modern software engineering. Which does not prevent us from considering the limits of what it has achieved; not in a spirit of criticism for its own sake, but to help researchers define an effective agenda for the next steps. There are in my opinion two principal limitations of current empirical results in software engineering.

The first has to do with the distinction introduced above between the two kinds of possible targets for empirical assessment: products (artifacts) versus processes.

Both aspects are important, but one is much easier to investigate than the other. For software products, the material of study is available in the form of repositories mentioned above, with their wealth of information about lines of code, control and data structures, commits, editing changes, bug reports and bug fixes. Processes are harder to grasp. You gain some information on processes from the repositories (for example, patterns and delays of bug fixing), but processes deserve studies of their own. For example, agile teams practice iterations (sprints) of widely different durations, from a few days to a few weeks; what is the ideal length? A good empirical answer would help many practitioners. But this example illustrates how difficult empirical studies of processes can be: you would need to try many variations with teams of professional programmers (not students) in different projects, different application areas, different companies; for the results to be believable the projects should be real ones with business results at stake, there should be enough samples in each category to ensure statistical significance, and the companies should agree to publication of some form, possibly anonymized, of the outcomes. The difficulties are formidable.

This issue of how to obtain project-oriented metrics is related to the second principal limitation of some of the initial empirical software engineering work: the risk of indulging in lamppost research. The term refers to the well-known joke about the drunkard who, in the dark of the night, searches for his lost keys next to the lamp post, not because he has lost them there but because it is the only place where one can see anything. To a certain extent all research is lamppost research: by definition, if you succeed in studying something, it will be because it can be studied. But the risk is to choose to work on a problem only, or principally, because it is easy to set up an empirical study — regardless of its actual importance. To cite an example that I have used elsewhere, one may suspect that the reason there are so many studies of pair programming is not that it’s of momentous relevance but that it is not hard to set up an experiment.

3 Beyond the lamppost

As long as empirical software engineering was a young, fledgling discipline, it made good sense to start with problems that naturally lent themselves to empirical investigation. But now that the field has matured, it may be time to reverse the approach and start from the consumer’s perspective: for practitioners of software engineering, what problems, not yet satisfactorily answered by software engineering theory, could benefit, in the search for answers, from empirical studies?

Indeed, this is what we are entitled to expect from empirical studies: guidance. The slogan of empirical software engineering is that software is worthy of study just like geological strata, photons, and lilies-of-the-valley; OK, sure, but we are talking about human artifacts rather than wonders of the natural world, and the idea should be to help us produce better software and produce software better.

4 A horror story

Whenever we call for guidance from empirical studies, we should immediately include a caveat: every empirical study has its limitations (politely called “threats to validity”) and one must be careful about any generalization. The following horror story serves as caution [9]. The fashion today in programming language design is to use the semicolon not as separator in the Algol tradition (instruction1 ; instruction2) but as a terminator in the C tradition (instruction1; instruction2;). The original justification, particularly in the case of Ada [10], is an empirical paper by Gannon and Horning [11], which purported to show that the terminator convention led to fewer errors. (The authors themselves not only give their experimental results but, departing from the experimenter’s reserve, explicitly jump to the conclusion that terminators are better.) This view defies reason: witness, among others, the ever-recommenced tragedy of if c then a; else; b where the semicolon after else is an error (a natural one, since one gets into the habit of adding semicolons just in case) but the code compiles, with the result that b will be executed in all cases rather than (as intended) just when c is false [12].
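
To make the trap concrete, here is a small illustration in C-style syntax (a hedged sketch, not the language used in the original experiment), showing how a single stray semicolon after else silently changes the meaning of the program:

```c
#include <stdio.h>

int main(void) {
    int c = 1;           /* the condition is true */

    if (c)
        printf("a\n");
    else;                /* stray semicolon: the else branch is now an empty statement */
        printf("b\n");   /* the indentation is misleading; this line always executes */

    return 0;            /* output is "a" then "b", although b was meant only for the case !c */
}
```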

How in the world could an empirical study come up with such a bizarre conclusion? Go back to the original Gannon-Horning paper and the explanation becomes clear: the experiments used subjects who were familiar with the PL/I programming language, where semicolons are used generously and an extra semicolon is harmless, as it is in all practical languages (two successive semicolons being simply interpreted as the insertion of an empty instruction, causing no harm); but the separator-based language and compiler used in the experiment treated an extra semicolon as an error! As if this were not enough, checking the details of the article reveals that the terminator language is terminator-based for both declarations and instructions, whereas the experiment’s delimiter language is only delimiter-based for instructions, but terminator-based for declarations. Talk about a biased experiment! The experiment was bogus and so are the results.

One should not be too harsh about a paper from 1975, when the very idea of systematic experimental studies of programming was novel, and some of its other results are worthy of consideration. But the sad terminator story, even though it only affected a syntax property, should serve as a reminder that we should not accept a view blindly just because someone invokes some empirical study to justify it. We should assess the study itself, its methods and its credibility.

5 Addressing the issues that matter

With this warning in mind, we should still expect empirical software engineering to help us practitioners. It should help address important software engineering problems.

Ideally, I should now list the open issues of software engineering, but I am in no position even to start such a list. All I can do is to give a few examples. They may not be important to you, but they give an idea:

  • What are the respective values of upfront design and refactoring? How best can we combine these approaches?
  • Specification and testing are complementary techniques. Specifications are in principle superior to testing in general, but testing remains necessary. What combination of specification and testing works best?
  • What is the best commit/release technique, and in particular should we use RTC (Review Then Commit, as with Apache originally then Google) or CTR (Commit To Review, as Apache later) [13]?
  • What measure of code properties best correlates with effort? Many fancy metrics have appeared in the literature over the years, but there is still a nagging feeling among many of us that for all its obvious limitations the vulgar SLOC metric (Source Lines Of Code) still remains the least bad.
  • When can a manager decide to stop testing? We did some work on the topic [14], but it is only a start.
  • Is test coverage a good measure of test quality [15] (spoiler: it is not, but again we need more studies)?

And so on. These examples may not be the cases that you consider most important; indeed what we need is input from many software engineers to help steer empirical software engineering towards the topics that truly matter to the community.

To provide a venue for that discussion, a workshop will take place 10-12 September 2018 (provisional dates) in the Toulouse area, involving many of the leading figures in empirical software engineering, with the same title as these two articles: Empirical Answers to Important Software Engineering Questions. The key idea is to start not from the solutions side (the lamppost) but from the actual challenges facing software engineers. It will not just be a traditional publication-oriented meeting but will also include ample time for discussions and joint work.

If you would like to contribute your example “important questions”, please use any appropriate support (responses to this blog, email to me, Facebook, LinkedIn, anything as long as one can find it). Suggestions will be taken into consideration for the workshop. Empirical software engineering has already established itself as a core area of research; it is time to feed that research with problems that actually matter to software developers, managers and users.

Acknowledgments

These reflections originated in a keynote that I gave at ESEM in Bolzano in 2010 (I am grateful to Barbara Russo and Giancarlo Succi for the invitation). I never wrote up the talk but I dug up the slides [8] since they might contain a few relevant observations. I used some of these ideas in a short panel statement at ESEC/FSE 2013 in Saint Petersburg, and I am grateful to Moshe Vardi for suggesting I should write them up for Communications of the ACM, which I never did.

References and notes

[1] Victor R. Basili: The role of experimentation in software engineering: past, present and future,  in 18th ICSE (International Conference on Software Engineering), 1996, see here.

[2] Marvin V. Zelkowitz and Dolores Wallace: Experimental validation in software engineering, International Conference on Empirical Assessment and Evaluation in Software Engineering, March 1997, see here.

[3] Walter F. Tichy: Should computer scientists experiment more?, in IEEE Computer, vol. 31, no. 5, pages 32-40, May 1998, see here.

[4] And EiffelStudio, whose repository goes back to the early 90s and has provided a fertile ground for numerous empirical studies, some of which appear in my publication list.

[5] This compact sentence is how the Kelvin statement is usually abridged, but his thinking was more subtle.

[6] Raymond Lister: After the Gold Rush: Toward Sustainable Scholarship in Computing, Proceedings of the 10th Australasian Computing Education Conference, pages 3-17, see here.

[7] Jeffrey D. Ullman: Experiments as research validation: have we gone too far?, in Communications of the ACM, vol. 58, no. 9, pages 37-39, 2015, see here.

[8] Bertrand Meyer, slides of a talk at ESEM (Empirical Software Engineering and Measurement), Bozen/Bolzano, 2010, available here. (Provided as background material only, they are  not a paper but just slide support for a 45-minute talk, and from several years ago.)

[9] This matter is analyzed in more detail in section 26.5 of my book Object-Oriented Software Construction, 2nd edition, Prentice Hall. No offense to the memory of Jim Horning, a great computer scientist and a great colleague. Even great computer scientists can be wrong once in a while.

[10] I know this from the source: Jean Ichbiah, the original designer of Ada, told me explicitly that this was the reason for his choice of  the terminator convention for semicolons, a significant decision since it was expected that the language syntax would be based on Pascal, a delimiter language.

[11] Gannon & Horning, Language Design for Programming Reliability, IEEE Transactions on Software Engineering, vol. SE-1, no. 2, June 1975, pages 179-191, see here.

[12] This quirk of C and similar languages is not unlike the source of the Apple SSL/TLS bug discussed earlier in this blog under the title Code matters.

[13] Peter C. Rigby, Daniel M. German, Margaret-Anne Storey: Open Source Software Peer Review Practices: a Case study of the Apache Server, in ICSE (International Conference on Software Engineering) 2008, pages 541-550, see here.

[14] Carlo A. Furia, Bertrand Meyer, Manuel Oriol, Andrey Tikhomirov and Yi Wei: The Search for the Laws of Automatic Random Testing, in Proceedings of the 28th ACM Symposium on Applied Computing (SAC 2013), Coimbra (Portugal), ACM Press, 2013, see here.

[15] Yi Wei, Bertrand Meyer and Manuel Oriol: Is Coverage a Good Measure of Testing Effectiveness?, in Empirical Software Engineering and Verification (LASER 2008-2010), eds. Bertrand Meyer and Martin Nordio, Lecture Notes in Computer Science 7007, Springer, February 2012, see here.


Blockchains, bitcoin and distributed trust: LASER school lineup complete

The full lineup of speakers at the 2018 LASER summer school on Software for Blockchains, Bitcoin and Distributed Trust is now ready, with the announcement of a new speaker, Primavera De Filippi from CNRS and Harvard on social and legal aspects.

The other speakers are Christian Cachin (IBM), Maurice Herlihy (Brown), Christoph Jentzsch (slock.it), me, Emin Gün Sirer (Cornell) and Roger Wattenhofer (ETH).

The school is the 14th in the LASER series and takes place June 2-10, 2018, on the island of Elba in Italy.

Early-fee registration deadline is February 10. The school’s page is here.


Devops (the concept, and a workshop announcement)

One of the most significant recent developments in software engineering is the concept of Devops*. Dismissing the idea as “just the latest buzzword” would be wrong. It may be a buzzword but it reflects a fundamental change in the way we structure system development; with web applications in particular the traditional distinctions between steps of development, V&V** and deployment fade out. If you are using Microsoft Word, you know or can easily find out the version number; but which version of your search engine are you using?

With the new flexibility indeed come new risks, as when a bug in the latest “devopsed”  version of Google Docs caused me to lose a whole set of complex diagrams irretrievably; an earlier article on this blog (“The Cloud and Its Risks“, October 2010) told the story.

In the new world of continuous integrated development/V&V/deployment, software engineering principles are more necessary than ever, but their application has to undergo a profound adaptation.

With Jean-Michel Bruel (Toulouse), Elisabetta Di Nitto (Milan) and Manuel Mazzara (Innopolis), we are organizing a workshop on the topic, DEVOPS 18, on 5-6 March 2018 near Toulouse. The Call for Papers is available here, with Springer LNCS proceedings. The submission deadline is January 15, but for that date a 2-page extended abstract is sufficient. I hope that the event will help the community get a better grasp of the software engineering techniques and practices applicable to this new world of software development.

Notes

*I know, it’s supposed to be DevOps (I am not a great fan of upper case in the middle of words).
** Validation & Verification.


LASER summer school on software for robotics: last call for registration

Much of the progress in robotics is due to software advances, and software issues remain at the heart of the formidable challenges that remain. The 2017 LASER summer school, held in September in Elba, brings together some of the most prestigious international experts in the area.

The LASER school has established itself as one of the principal forums to discuss advanced software issues. The 2017 school takes place from 9 to 17 September in the idyllic setting of the Hotel del Golfo in Procchio, Elba Island, Italy.

Robotics is progressing at an amazing pace, bringing improvements to almost all areas of human activity. Today’s robotics systems rely ever more fundamentally on complex software, raising difficult issues. The LASER 2017 summer school covers both the current state of robotics software technology and open problems. The lecturers are top international experts with both theoretical contributions and major practical achievements in developing robotics systems.

The LASER school is intended for professionals from the industry (engineers and managers) as well as university researchers, including PhD students. Participants learn about the most important software technology advances from the pioneers in the field. The school’s focus is applied, although theory is welcome to establish solid foundations. The format of the school favors extensive interaction between participants and speakers.

We have lined up an impressive roster of speakers from the leading edge of both industry and academia:

Rodolphe Gélin, Aldebaran Robotics
Ashish Kapoor, Microsoft Research
Davide Brugali, University of Bergamo, on Managing software variability in robotic control systems
Nenad Medvidovic, University of Southern California, on Software Architectures of Robotics Systems
Bertrand Meyer, Politecnico di Milano & Innopolis University, on Concurrent Object-Oriented Robotics Software
Issa Nesnas, NASA Jet Propulsion Laboratory, on Experiences from robotic software development for research and planetary flight robots
Hiroshi (“Gitchang”) Okuno, Waseda University & Kyoto University, on Open-Sourced Robot Audition Software HARK: Capabilities and Applications

The school takes place at the magnificent Hotel del Golfo in the Gulf of Procchio, Elba. Along with an intensive scientific program, participants will have time to enjoy the countless natural and cultural riches of this wonderful, history-laden jewel of the Mediterranean.

For more information about the school, the speakers and registration see the LASER site.


AutoProof workshop: Verification As a Matter of Course

The AutoProof technology pursues the goal of “Verification As a Matter Of Course”, integrated into the EVE development environment. (The AutoProof project page is here; see particularly the online interactive tutorial.) A one-day workshop devoted to the existing AutoProof technology and to current developments will take place on October 1 near Toulouse in France. It is an informal event (no proceedings planned at this point, although based on the submissions we might decide to produce a volume), on a small scale, designed to bring together people interested in making the idea of practical verification a reality.

The keynote will be given by Rustan Leino from Microsoft Research, the principal author of the Boogie framework on which the current implementation of AutoProof relies.

For submissions (or to attend without submitting) see the workshop page here. You are also welcome to contact me for more information.


Harlan Mills award 2015: nominations sought

The IEEE’s Harlan Mills award is the principal prize in software engineering. The 2014 recipients are Patrick and Radhia Cousot, recognized for their groundbreaking work on abstract interpretation; Patrick will receive the award at ICSME 2014 on Oct. 1st. The list of previous recipients is here.

I have the privilege of serving as the current committee chair; the deadline for nomination is October 15. Please nominate your favorite software engineering grandee! You can find more information and the nomination form here.


Crossing the Is and doting on the Ts

 

Last week at the CSEE&T conference in Klagenfurt (the conference page is here; I gave a keynote), a panel discussed how universities should prepare students for software engineering. Barry Boehm, one of the panelists, stated the following principle, which, he said afterwards, he had learned from Simon Ramo, co-founder of TRW. In hiring people, he explained, it is better to avoid candidates with an I-shaped profile: narrowly specialized in one topic that they have explored to exhaustion. Better to look for a T: someone who has mastered an area in depth and then branched out to learn about many others.

I started playing with the variants. One should avoid the hyphens, or em-dashes, “—”: people with a smattering of everything but no detailed knowledge of anything. Boehm said that this is the reason he always argued against establishing such undergraduate majors as systems engineering. A variant of the hyphen is the overline “‾”: graduates who supposedly are so smart that they can learn anything, but whose actual knowledge is limited to abstract notions.

Along with the T we should consider the “bottom” symbol of denotational semantics: “⊥”. It corresponds to people who have a broad educational base, for example in mathematics, and have deepened it by focusing on a particular topic. The T and ⊥ can be combined into an H turned on its side: acquiring a solid foundation, specializing, then using that experience to become familiar with new areas.

Extending the permutation group, I am not sure what a “+” profile would be, but in a discussion last night Rustan Leino and Peter Müller suggested the “O”, the ability to circle around topics, and the umlaut, knowing a thing or two; in fact, exactly two.

 


New article: passive processors

 

The SCOOP concurrency model has a clear division of objects into “regions”, improving the clarity and reliability of concurrent programs by establishing a close correspondence between the object structure and the process structure. Each region has an associated “processor”, which executes operations on the region’s objects. A literal application of this rule implies, however, a severe performance penalty. As part of the work for his PhD thesis (defended two weeks ago), Benjamin Morandi found out that a mechanism for specifying certain processors as “passive” yields a considerable performance improvement. The paper, to be published at COORDINATION, describes the technique and its applications.
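
As a reminder of the basic model, here is a minimal sketch of my own (not an excerpt from the paper; the class and feature names are hypothetical). A call on a separate target is handled by the processor of the target’s region, and a separate formal argument gives the routine exclusive access to that region for the duration of the call. The paper’s contribution, not shown below, is a way to mark certain processors as passive so that, as I understand it, operations on their regions can be carried out directly on behalf of the client, avoiding a costly handoff.

    class SHARED_LIST_CLIENT
    feature
        data: separate ARRAYED_LIST [INTEGER]
                -- Shared structure in another region, handled by its own processor.

        record (x: INTEGER)
                -- Add `x' to the shared structure.
            do
                store (data, x)
            end

    feature {NONE} -- Implementation
        store (l: separate ARRAYED_LIST [INTEGER]; x: INTEGER)
                -- The separate argument `l' gives this routine exclusive access
                -- to the region of `l'; the call `l.extend' is executed by
                -- that region's processor (the source of the performance cost
                -- that passive processors are meant to remove).
            do
                l.extend (x)
            end
    end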

Reference

Benjamin Morandi, Sebastian Nanz and Bertrand Meyer: Safe and Efficient Data Sharing for Message-Passing Concurrency, to appear in proceedings of COORDINATION 2014, 16th International Conference on Coordination Models and Languages, Berlin, 3-6 June 2014, draft available here.

LASER 2014 (Elba, September)

2014 marks the 10th anniversary (11th edition) of the LASER summer school. The school will be held September 7-14, 2014, and the detailed information is here.

LASER (the name means Laboratory for Applied Software Engineering Research) is dedicated to practical software engineering. The roster of speakers since we started is a who’s who of innovators in the field. Some of the flavor of the school can be gathered from the three proceedings volumes published in Springer LNCS (more on the way) or simply by browsing the pages of the schools from previous years.

Usually we have a theme, but to mark this anniversary we decided to go for speakers first; we do have a title, “Leading-Edge Software Engineering”, but one broad enough to encompass a wide range of topics presented by star speakers: Harald Gall, Daniel Jackson, Michael Jackson, Erik Meijer (appearing at LASER for the third time!), Gail Murphy and Moshe Vardi. With such a cast you can expect to learn something important regardless of your own primary specialty.

LASER is unique in its setting: a 5-star hotel in the island paradise of Elba, with outstanding food and countless opportunities for exploring the marvelous land, the beaches, the sea, the geology (since antiquity Elba has been famous for its stones and minerals) and the history, from the Romans to Napoleon, who in the 9 months of his reign changed the island forever. The school is serious stuff (8:30 to 13 and 17 to 20 every day), but with enough time to enjoy the surroundings.

Registration is open now.


Niklaus Wirth birthday symposium, 20 February, Zurich

In honor of Niklaus Wirth’s 80th birthday we are organizing a symposium at ETH on February 20, 2014. This is a full-day event with invited talks by:

  • Vint Cerf
  • Hans Eberlé
  • Michael Franz
  • me
  • Carroll Morgan
  • Martin Odersky
  • Clemens Szyperski
  • Niklaus Wirth himself

From the symposium’s web page:

Niklaus Wirth was a Professor of Computer Science at ETH Zürich, Switzerland, from 1968 to 1999. His principal areas of contribution were programming languages and methodology, software engineering, and design of personal workstations. He designed the programming languages Algol W, Pascal, Modula-2, and Oberon, was involved in the methodologies of structured programming and stepwise refinement, and designed and built the workstations Lilith and Ceres. He published several text books for courses on programming, algorithms and data structures, and logical design of digital circuits. He has received various prizes and honorary doctorates, including the Turing Award, the IEEE Computer Pioneer, and the Award for outstanding contributions to Computer Science Education.

Participation is free (including breaks, lunch and the concluding “Apéro”) but space is strictly limited and we expect to run out of seats quickly. So if you are interested (but only if you are certain to attend) please register right away.

Symposium page and access to registration form: here.


Informatics education in Europe: Just the facts

 

In 2005 a number of us started Informatics Europe [1], the association of university departments and industrial research labs in computer science in Europe. The association has now grown to 80 members across the entire continent; it organizes the annual European Computer Science Summit and has published a number of influential reports. The last one just came out: Informatics Education in Europe: Institutions, Degrees, Students, Positions, Salaries — Key Data 2008-2012 [2]. The principal author is Cristina Pereira, who collected and organized the relevant data over more than a year; I helped with the preparation of the final text.

At the beginning of Informatics Europe we considered with particular attention the model of the Computing Research Association [3], which played a crucial role in giving computer science (informatics) its due place in the US academic landscape. Several past and current officers of the CRA, such as Willy Zwaenepoel, Ed Lazowska, Bob Constable, Andy Bernat, Jeannette Wing, Moshe Vardi and J Strother Moore, gave keynotes at our early conferences and we of course asked them for the secrets of their organization’s success. One answer that struck us was the central role played by data collection. Just gathering the facts, such as degrees and salaries, established for the first time a solid basis for serious discussions. We took this advice to heart and the report is the first result.

Gathering the information is particularly difficult for Europe given the national variations and the absence of centralized statistical data. Even the list of names under which institutions teach informatics in Europe fills a large table in the report. Cristina’s decision was, from the start, to favor quality over quantity: to focus on impeccable data for countries for which we could get it, rather than trying to cover the whole continent with data of variable credibility.

The result is the first systematic repository of basic information on informatics education in Europe: institutions, degrees offered and numbers awarded, student numbers, position titles and definitions, and (a section which will not please everyone) salaries for PhD students, postdocs and professors of various ranks.

The report is a first step; it only makes sense if we can regularly continue to update it and particularly extend it to other countries. But even in its current form (and with the obvious observation that my opinion is not neutral) I see it as a major step forward for the discipline in Europe. We need an impeccable factual basis to convince the public at large and political decision-makers to give informatics the place it deserves in today’s educational systems.

References

[1] Informatics Europe site, see here.

[2] Cristina Pereira and Bertrand Meyer: Informatics Education in Europe: Institutions, Degrees, Students, Positions, Salaries — Key Data 2008-2012, Informatics Europe report, 30 September 2013, available here.

[3] Computing Research Association (US), see here.


Reading notes: misclassified bugs

 

(Please note the general disclaimer [1].)

How Misclassification Impacts Bug Prediction [2], an article to be presented on Thursday at ICSE, is the archetype of today’s successful empirical software engineering research, deriving significant results from the mining of publicly available software project repositories — in this case Tomcat5 and three others from Apache, as well as Rhino from Mozilla. The results are in some sense meta-results, because many studies have already mined the bug records of such repositories to draw general lessons about bugs in software development; what Herzig, Just and Zeller now tell us is that the mined data is highly questionable: many problems classified as bugs are not bugs.

The most striking results (announced in a style a bit stentorian to my taste, but indeed striking) are that: every third bug report does not describe a bug, but a request for a new feature, an improvement, better documentation or tests, code cleanup or refactoring; and that out of five program files marked as defective, two do not in fact contain any bug.

These are both false positive results. The repositories signal very few misclassifications the other way: only a small subset of enhancement and improvement requests (around 5%) should have been classified as bugs, and even fewer faulty files are missed (8%, but in fact less than 1% if one excludes an outlier, tomcat5 with 38%, a discrepancy that the paper does not discuss).

The authors have a field day, in the light of this analysis, questioning the validity of the many studies in recent years — including some, courageously cited, by Zeller himself and coauthors — that start from bug repositories to derive general lessons about bugs and their properties.

The methodology is interesting if a bit scary. The authors (actually, only the two non-tenured authors, probably just a coincidence) analyzed 7401 issue reports manually; more precisely, one of them analyzed all of them and the second one took a second look at the reports that came out of the first step as misclassified, without knowing what the proposed reclassification was; then the results were merged. At 4 minutes per report this truly Stakhanovite effort took 90 working days. I sympathize, but I wonder what the rules are in Saarland for experiments involving living beings, particularly graduate students.

Precise criteria were used for the reclassification; for example a report describes a bug, in the authors’ view, if it mentions a null pointer exception (I will skip the opportunity of a pitch for Eiffel’s void safety mechanism), says that the code has to be corrected to fix the semantics, or if there is a “memory issue” or infinite loop. These criteria are reasonable if a bit puzzling (why null pointer exceptions and not other crashes such as arithmetic overflows?); but more worryingly there is no justification for them. I wonder how much of the huge discrepancy found by the authors — a third of reported bugs are not bugs, and 40% of supposedly defective program files are not defective — can be simply explained by different classification criteria applied by the software projects under examination. The authors give no indication that they interacted with the people in charge of these projects. To me this is the major question hovering over this paper and its spectacular results. If you are in the room and get the chance, don’t hesitate to ask this question on my behalf or yours!
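
Purely as an illustration (my own hedged sketch in Eiffel, not the authors’ procedure or code; the keyword list is my guess at a rendering of their criteria), the flavor of such a rule-based reclassification can be conveyed by a simple keyword check:

    is_probably_bug (description: STRING): BOOLEAN
            -- Rough, illustrative approximation of the reclassification criteria:
            -- null pointer exceptions, semantic corrections, memory issues, infinite loops.
        do
            Result :=
                description.has_substring ("NullPointerException") or
                description.has_substring ("fix the semantics") or
                description.has_substring ("memory issue") or
                description.has_substring ("infinite loop")
        end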

Another obvious question is how much the results depend on the five projects selected. If there ever was room for replicating a study (a practice whose rarity in software engineering we lament, but whose growth prospects are limited by the near-impossibility of convincing selective software engineering venues to publish confirmatory empirical studies), this would be it. In particular it would be good to see some of the results for commercial products.

The article offers an explanation for the phenomena it uncovered: in its view, the reason why so many bug reports end up misclassified is the difference of perspective between users of the software, who complain about the problems they encounter, and the software professionals who prepare the actual bug reports. The explanation is plausible but I was surprised not to see any concrete evidence that supports it. It is also surprising that the referees did not ask the authors to provide more solid arguments to buttress that explanation. Yet another opportunity to raise your hand and ask a question.

This (impressive) paper will call everyone’s attention to the critical problem of data quality in empirical studies. It is very professionally prepared, and could, in addition to its specific contributions, serve as a guide on how to get an empirical software engineering paper accepted at ICSE: take a critical look at an important research area; study it from a viewpoint that has not been considered much so far; perform an extensive study, with reasonable methodological assumptions; derive a couple of striking results, making sure they are both visibly stated and backed by the evidence; and include exactly one boxplot.

Notes and references

[1] This article review is part of the “Reading Notes” series. General disclaimer here.

[2] Kim Herzig, Sascha Just and Andreas Zeller: It’s not a Bug, it’s a Feature: How Misclassification Impacts Bug Prediction, in ICSE 2013, available here. According to the ICSE program the paper will be presented on May 23 in the Bug Prediction session, 16 to 17:30.


Reading notes: the design of bug fixes

 

To inaugurate the “Reading Notes” series [1] I will take articles from the forthcoming International Conference on Software Engineering. Since I am not going to ICSE this year I am instead spending a little time browsing through the papers, obligingly available on the conference site. I’ll try whenever possible to describe a paper before it is presented at the conference, to alert readers to interesting sessions. I hope in July and August to be able to do the same for some of the papers to be presented at ESEC/FSE [2].

Please note the general disclaimer [1].

The Design of Bug Fixes [3] caught my attention partly for selfish reasons, since we are working, through the AutoFix project [4], on automatic bug fixing, but also out of sheer interest and because I have seen previous work by some of the authors. There have been articles about bug patterns before, but not so much is known with credible empirical evidence about bug fixes (corrections of faults). When a programmer encounters a fault, what strategies does he use to correct it? Does he always produce the best fix he can, and if not, why not? What is the influence of the project phase on such decisions (e.g. will you fix a bug the same way early in the process and close to shipping)? These are some of the questions addressed by the paper.

The most interesting concrete result is a list of properties of bug fixes, classified along two criteria: nature of a fix (the paper calls it “design space”), and reasoning behind the choice of a fix. Here are a few examples of the “nature” classification:

  • Data propagation: the bug arises in one component but gets fixed in another, for example a library class.
  • More or less accuracy: are we fixing the symptom or the cause? (See the sketch after this list.)
  • Behavioral alternatives: rather than directly correcting the reported problem, change the user-experienced behavior (evoking the famous quip that “it’s not a bug, it’s a feature”). The authors were surprised to see that developers (belying their geek image) seem to devote a lot of effort trying to understand how users actually use the products, but also found that even so developers do not necessarily gain a solid, objective understanding of these usage patterns. It would be interesting to know if the picture is different for traditional locally-installed products and for cloud-based offerings, since in the latter case it is possible to gather more complete, accurate and timely usage data.
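
To make the “symptom vs. cause” distinction concrete, here is a minimal hedged sketch of my own, not an example from the paper (the PERSON class, its name query and the feature names are hypothetical), in Eiffel:

    owner: detachable PERSON
            -- Hypothetical supplier object; `detachable' means it may be void.

    print_name
            -- Symptom-level fix: guard the call that was reported
            -- to crash on a void target.
        do
            if attached owner as o then
                print (o.name)
            end
        end

    make (an_owner: PERSON)
            -- Cause-level fix: set `owner' at creation time, making it
            -- possible to declare it as attached and drop the guard above.
        do
            owner := an_owner
        ensure
            owner_set: owner = an_owner
        end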

On the “reasoning” side, the issue is why and how programmers decide to adopt a particular approach. For example, bug fixes tend to be more audacious (implying redesign if appropriate) at the beginning of a project, and more conservative as delivery nears and everyone is scared of breaking something. Another object of the study is how deeply developers understand the cause rather than just the symptom; the paper reports that 18% “did not have time to figure out why the bug occurred“. Surprising or not, I don’t know, but scary! Yet another dimension is consistency: there is a tension between providing what might ideally be the best fix and remaining consistent with the design decisions that underlie a software system throughout its architecture.

I was more impressed by the individual categories of the classification than by that classification as a whole; some of the categories appear redundant (“interface breakage“, “data propagation” and “internal vs external“, for example, seem to be pretty much the same; ditto for “cause understanding” and “accuracy“). On the other hand the paper does not explicitly claim that the categories are orthogonal. If they turn this conference presentation into a journal article I am pretty sure they will rework the classification and make it more robust. It does not matter that it is a bit shaky at the moment since the main insights are in the individual kinds of fix and fix-reasoning uncovered by the study.

The authors are from Microsoft Research (one of them was visiting faculty) and interviewed numerous programmers from various Microsoft product groups to find out how they fix bugs.

The paper is nicely written and reads easily. It includes some audacious syntax, as in “this dimension” [internal vs external] “describes how much internal code is changed versus external code is changed as part of a fix“. It has a discreet amount of humor, some of which may escape non-US readers; for example the authors explain that when approaching programmers out of the blue for the survey they tried to reassure them through the words “we are from Microsoft Research, and we are here to help“, a wry reference to the celebrated comment by Ronald Reagan (or his speechwriter) that the most dangerous words in the English language are “I am from the government, and I am here to help“. To my taste the authors include too many details about the data collection process; I would have preferred the space to be used for a more detailed discussion of the findings on bug fixes. On the other hand we all know that papers to selective conferences are written for referees, not readers, and this amount of methodological detail was probably the minimum needed to get past the reviewers (by avoiding the typical criticism, for empirical software engineering research, that the sample is too small, the questions biased etc.). Thankfully, however, there is no pedantic discussion of statistical significance; the authors openly present the results as dependent on the particular population surveyed and on the interview technique. Still, these results seem generalizable in their basic form to a large subset of the industry. I hope their publication will spawn more detailed studies.

According to the ICSE program the paper will be presented on May 23 in the Debugging session, 13:30 to 15:30.

Notes and references

[1] This article review is part of the “Reading Notes” series. General disclaimer here.

[2] European Software Engineering Conference 2013, Saint Petersburg, Russia, 18-24 August, see here.

[3] Emerson Murphy-Hill, Thomas Zimmerman, Christian Bird and Nachiappan Nagapan: The Design of Bug Fixes, in ICSE 2013, available here.

[4] AutoFix project at ETH Zurich, see project page here.

[5] Ronald Reagan speech extract on YouTube.


Specify less to prove more

Software verification is progressing slowly but surely. Much of that progress is incremental: making the fundamental results applicable to real programs as they are built every day by programmers working in standard circumstances. A key condition is to minimize the amount of annotations that they have to provide.

The article mentioned in my previous post, “Program Checking With Less Hassle” [1], to be presented at VSTTE in San Francisco on Friday by its lead author, Julian Tschannen, introduces several interesting contributions in this direction. One of the surprising conclusions is that sometimes it pays to specify less. That goes against intuition: usually, the more specification information (correctness annotations) you provide, the more you help the prover. But in fact partial specifications can hurt rather than help. Consider for example a swap routine with a partial specification, which actually stands in the way of a proof. If modularity is not a concern, for example if the routine is part of the code being verified rather than of a library, it may be more effective to ignore the specification and use the routine’s implementation. This is particularly appropriate for small helper routines such as the swap example.
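
Here is a hedged sketch of my own (not the paper’s example) of how a partial specification of swap can stand in the way: the postcondition below states what happens to the two swapped positions but says nothing about the rest of the array, so a caller that relies on the other elements being untouched cannot be verified modularly; using the routine’s body instead gives the prover what it needs.

    swap (a: ARRAY [INTEGER]; i, j: INTEGER)
            -- Exchange the elements of `a' at positions `i' and `j'.
        require
            valid_i: a.valid_index (i)
            valid_j: a.valid_index (j)
        local
            t: INTEGER
        do
            t := a [i]
            a [i] := a [j]
            a [j] := t
        ensure
            exchanged: a [i] = old (a [j]) and a [j] = old (a [i])
            -- Partial: nothing here says that elements at other positions
            -- keep their values, which is exactly what some callers need.
        end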

This inlining technique is applicable in other cases, for example to make up for a missing precondition: assume that a helper routine will only work for x > 0 but does not state that precondition, or maybe states only the weaker one x ≥ 0; in the code, however, it is only called with positive arguments. If we try to verify the code modularly we will fail, as indeed we should, since the routine is incorrect as a general-purpose primitive. But within the context of the code there is nothing wrong with it. Forgetting the contract of the routine, if any, and instead using its actual implementation, we may be able to show that everything is fine.
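
A hedged sketch of my own (again not taken from the paper, with made-up routine names) of that situation: the helper below states x >= 0 although its body needs x > 0; a modular proof of the client fails, but inlining the helper’s body in the client, where the argument is a positive constant, lets the proof go through.

    scaled (total, x: INTEGER): INTEGER
            -- `total' divided by `x'.
        require
            non_negative: x >= 0
            -- Too weak: the body actually needs x > 0.
        do
            Result := total // x
        end

    client (total: INTEGER): INTEGER
            -- Use `scaled' with an argument known to be positive.
        do
            Result := scaled (total, 4)
            -- Modular proof: fails, since x >= 0 does not rule out x = 0.
            -- Proof with the body inlined: succeeds, since here x is 4.
        end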

Another component of the approach is to fill in preconditions that programmers have omitted because they are somehow obvious to them. For example it is tempting and common to write just a [1] > 0 rather than a /= Void and then a [1] > 0 for a detachable array a. The tool takes care of interpreting the simpler precondition as the more complete one.

The resulting “two-step verification”, integrated into the AutoProof verification tool for Eiffel, should turn out to be an important simplification towards the goal of “Verification As a Matter Of Course” [2].

References

[1] Julian Tschannen, Carlo A. Furia, Martin Nordio and Bertrand Meyer: Program Checking With Less Hassle, in VSTTE 2013, Springer LNCS, to appear, draft available here; presentation on May 17 in the 15:30-16:30 session.

[2] Verification As a Matter Of Course, article in this blog, 29 March 2010, see here.


Presentations at ICSE and VSTTE

 

The following presentations from our ETH group in the ICSE week (International Conference on Software Engineering, San Francisco) address important issues of software specification and verification, describing new techniques that we have recently developed as part of our work building EVE, the Eiffel Verification Environment. One is at ICSE proper and the other at VSTTE (Verified Software: Tools, Theories, Experiments). If you are around please attend them.

Julian Tschannen will present Program Checking With Less Hassle, written with Carlo A. Furia, Martin Nordio and me, at VSTTE on May 17 in the 15:30-16:30 session (see here in the VSTTE program). The draft is available here. I will write a blog article about this work in the coming days.

Nadia Polikarpova will present What Good Are Strong Specifications?, written with Carlo A. Furia, Yu Pei, Yi Wei and me, at ICSE on May 22 in the 13:30-15:30 session (see here in the ICSE program). The draft is available here. I wrote about this paper in an earlier post: see here. It describes the systematic application of theory-based modeling to the full specification and verification of advanced software.


LASER summer school: Software for the Cloud and Big Data

The 2013 LASER summer school, organized by our chair at ETH, will take place September 8-14, once more in the idyllic setting of the Hotel del Golfo in Procchio, on the island of Elba in Italy. This is already the school’s 10th edition; the roster of speakers so far reads like a who’s who of software engineering.

The theme this year is Software for the Cloud and Big Data and the speakers are Roger Barga from Microsoft, Karin Breitman from EMC, Sebastian Burckhardt from Microsoft, Adrian Cockcroft from Netflix, Carlo Ghezzi from Politecnico di Milano, Anthony Joseph from Berkeley, Pere Mato Vila from CERN and I.

LASER always has a strong practical bent, but this year it is particularly pronounced as you can see from the list of speakers and their affiliations. The topic is particularly timely: exploring the software aspects of game-changing developments currently redefining the IT scene.

The LASER formula is by now well-tuned: lectures over seven days (Sunday to Saturday), about five hours in the morning and three in the early evening, by world-class speakers; free time in the afternoon to enjoy the magnificent surroundings; 5-star accommodation and food in the best hotel of Elba, made affordable as we come towards the end of the season (and are valued long-term customers). The group picture below is from last year’s school.

Participants are from both industry and academia and have ample opportunities for interaction with the speakers, who typically attend each other’s lectures and engage in in-depth discussions. There is also time for some participant presentations; a free afternoon to discover Elba and brush up on your Napoleonic knowledge; and a boat trip on the final day.

Information about the 2013 school can be found here.

LASER 2012, Procchio, Hotel del Golfo


Public lecture at ITMO

I am giving my “inaugural lecture” at ITMO in Saint Petersburg tomorrow (Thursday, 28 February 2013) at 14 (2 PM) local time, meaning e.g. 11 AM in Western Europe and 2 AM (ouch!) in California. See here for the announcement. The title is “Programming: Magic, Art, Routine or Science?“. The talk will be streamed live: see here.


Conferences: Publication, Communication, Sanction

(This article was first published in the Communications of the ACM blog.)

A healthy discussion is taking place in the computer science community on our publication culture. It was spurred by Lance Fortnow’s 2009 article [1]; now Moshe Vardi has taken the lead to prepare a report on the topic, following a workshop in Dagstuhl in November [2]. The present article and one that follows (“The Waves of Publication”) are intended as contributions to the debate.

One of the central issues is what to do with conferences. Fortnow had strong words for the computer science practice of using conferences as its selective publication venues, instead of relying on journals as traditional scientific disciplines do. The criticism is correct, but if we look at the problem from a practical perspective it is unlikely that top conferences will lose their role as certifiers of quality. This is not a scientific matter but one of power. People in charge of POPL or OOPSLA have decisive sway over the careers (one is tempted to say the lives) of academics, particularly young academics, and it is a rare situation in human affairs that people who have critical power voluntarily renounce it. Maybe the POPL committee will see the light: maybe starting in 2014 it will accept all reasonable papers somehow related to “principles of programming languages”, turn the event itself into a pleasant multi-track community affair where everyone in the field can network, and hand over the selection and stamp-of-approval job to a journal such as TOPLAS. Dream on; it is not going to happen.

We should not, however, remain stuck with the status quo and all its drawbacks. That situation is unsustainable. As a single illustration, consider the requirement, imposed by all conferences, that having a paper pass the refereeing process is not enough: you must also register. A couple of months before the conference, authors of accepted papers (at least, they thought their paper was accepted) receive a threatening email telling them that unless they register and pay, their paper will not be published after all. Now assume an author, in a field where a conference is the top token of recognition, has his visa application rejected by the country of the conference — a not so uncommon situation — and does not register. (Maybe he does not mind paying the fee, but he does not want to lie by pretending he is going to attend whereas he knows he will not.) He has lost his opportunity for publication and perhaps severely harmed his career. What have such requirements to do with science?

To understand what can be done, we need to analyze the role of conferences. In an earlier article [3] I described four “modes and uses” of publication: Publication, Exam, Business and Ritual. From the organizers’ viewpoint, ignoring the Business and Ritual aspects although they do play a significant role, a conference has three roles: Publication, Communication and Sanction. The publication part corresponds to the proceedings of the conference, which makes articles available to the community at large, not just the conference attendees. The communication part only addresses the attendees: it includes the presentation of papers as well as all other interactions made possible by being present at a conference. The sanction part (corresponding to the “exam” part of the more general classification) is the role of a renowned conference as a stamp of approval for the best work of the moment.

What we should do is separate these roles. A conference can play all three roles, but it can also select two of them, or even just one. A well-established, prestigious conference will want to retain its sanctioning role: accepted papers get the stamp of approval. It will also remain an event, where people meet. And it may distribute proceedings. But the three roles can also be untied:

  • Publication is the least critical, and can easily be removed from the other two, since everything will be available on the Web. In fact the very notion of proceedings is quickly becoming fuzzy: more and more conferences save money by not distributing printed proceedings to attendees, sometimes not printing any proceedings at all; and some even spare themselves the production of a proceedings-on-a-stick, putting the material on the Web instead. A conference may still decide to have its own proceedings, or it might outsource that part to a journal. Each conference will make these decisions based on its own culture, tradition, ambition and constraints. For authors, the decision does not particularly matter: what counts are the sanction, which is provided by the refereeing process, and the availability of their material to the world, which will be provided in any scenario (at least in computer science where we have, thankfully, the permission to put our papers on our own web sites, an acquired right that our colleagues from other disciplines do not all enjoy).
  • Separating sanction from communication is a natural step. Acceptance and participation are two different things.

Conference organizers should not be concerned about lost revenue: most authors will still want to participate in the conference, and will get the funding, since institutions are used to paying for travel to present accepted papers; some new participants might come, attracted by more interaction-oriented conference styles; and organizers can replace the requirement to register by a choice between registering and paying a publication fee.

Separating the three roles does not mean that any established conference renounces its sanctioning status, acquired through the hard work of building the conference’s reputation, often over decades. But everyone gets more flexibility. Several combinations are possible, such as:

  • Sanction without communication or publication: papers are submitted for certification through peer-review, they are available on the Web anyway, and there is no need for a conference.
  • Publication without sanction or communication: an author puts a paper on his web page or on a self-publication site such as ArXiv.
  • Sanction and communication without publication: a traditional selective conference, which does not bother to produce proceedings.
  • Communication without sanction: a working conference whose sole aim is to advance the field through presentations and discussions, and accepts any reasonable submission. It may be by invitation (a kind of advance sanction). It may have proceedings (publication) or not.

Once we understand that the three roles are not inextricably tied, the stage is clear for the removal of some impediments to a more effective publication culture. Some, not all. The more general problem is the rapidly changing nature of scientific publication, what may be called the concentric waves of publication. That will be the topic of the next article.

References

[1] Lance Fortnow: Time for Computer Science to Grow Up, in Communications of the ACM, Vol. 52, no. 8, pages 33-35, 2009, available here.

[2] Dagstuhl: Perspectives Workshop: Publication Culture in Computing Research, see here.

[3] Bertrand Meyer: The Modes and Uses of Scientific Publication, article on this blog, 22 November 2011, see here.
