Archive for August 2011

Nastiness in computer science

 

(This article was originally published in the CACM blog.)

Are we malevolent grumps? Nothing personal, but as a community, computer scientists sometimes seem to succumb to negativism.

They admit it themselves. A common complaint in the profession (at least in academia) is that instead of taking a cue from our colleagues in more cogently organized fields such as physics, who band together for funds, promotion, and recognition, we are incurably fractious. In committees, for example, we damage everyone’s chances by badmouthing colleagues with approaches other than ours. At least this is a widely perceived view (“Circling the wagons and shooting inward,” as Greg Andrews put it in a recent discussion). Is it accurate?

One statistic that I have heard cited is that in 1-to-5 evaluations of projects submitted to the U.S. National Science Foundation the average grade of computer science projects is one full point lower than the average for other disciplines. This is secondhand information, however, and I would be interested to know if readers with direct knowledge of the situation can confirm or disprove it.

More such examples can be found in the material from a recent keynote by Jeffrey Naughton, full of fascinating insights (see his PowerPoint slides). Naughton, a database expert, mentions that only one paper out of 350 submissions to SIGMOD 2010 received a unanimous “accept” from its referees, and only four had an average recommendation of “accept”. As he writes, “either we all suck or something is broken!”

Much of the other evidence I have seen and heard is anecdotal, but persistent enough to make one wonder if there is something special with us. I am reminded of a committee for a generously funded CS award some time ago, where we came close to not giving the prize at all because we only had “good” proposals, and none that a committee member was willing to die for. The committee did come to its senses, and afterwards several members wondered aloud what was the reason for this perfectionism that almost made us waste a great opportunity to reward successful initiatives and promote the discipline.

We come across such cases so often—the research project review that gratuitously but lethally states that you have “less than a 10% chance” of reaching your goals, the killer argument “I didn’t hear anything that surprised me” after a candidate’s talk—that we consider such nastiness normal without asking any more whether it is ethical or helpful. (The “surprise” comment is particularly vicious. Its real purpose is to make its author look smart and knowledgeable about the ways of the world, since he is so hard to surprise; and few people are ready to contradict it: Who wants to admit that he is naïve enough to have been surprised?)

A particular source of evidence is refereeing, as in the SIGMOD example.  I keep wondering at the sheer nastiness of referees in CS venues.

We should note that the large number of rejected submissions is not by itself the problem. Naughton complains that researchers spend their entire careers being graded, as if passing exams again and again. Well, I too like acceptance better than rejection, but we have to consider the reality: with acceptance rates in the 8%-20% range at good conferences, much refereeing is bound to be negative. Nor can we angelically hope for higher acceptance rates overall; research is a competitive business, and we are evaluated at every step of our careers, whether we like it or not. One could argue that most papers submitted to ICSE and ESEC are pretty reasonable contributions to software engineering, and hence that these conferences should accept four out of five submissions; but the only practical consequence would be that some other venue would soon replace ICSE and ESEC as the publication place that matters in software engineering. In reality, rejection remains a frequent occurrence even for established authors.

Rejecting a paper, however, is not the same thing as insulting the author under the convenient cover of anonymity.

The particular combination of incompetence and arrogance that characterizes much of what Naughton calls “bad refereeing” always stings when you are on the receiving end, although after a while it can be retrospectively funny; one day I will publish some of my own inventory, collected over the years. As a preview, here are two comments on the first paper I wrote on Eiffel, rejected in 1987 by the IEEE Transactions on Software Engineering (it was later published, thanks to a more enlightened editor, Robert Glass, in the Journal of Systems and Software, 8, 1988, pp. 199-246). The IEEE rejection was on the basis of such review gems as:

  • I think time will show that inheritance (section 1.5.3) is a terrible idea.
  • Systems that do automatic garbage collection and prevent the designer from doing his own memory management are not good systems for industrial-strength software engineering.

One of the reviewers also wrote: “But of course, the bulk of the paper is contained in Part 2, where we are given code fragments showing how well things can be done in Eiffel. I only read 2.1 arrays. After that I could not bring myself to waste the time to read the others.” This is sheer boorishness passing itself off as refereeing. I wonder if editors in other, more established disciplines tolerate such attitudes. I also have the impression that in non-CS journals the editor has more personal leverage. How can the editor of IEEE-TSE have based his decision on such a biased and unprofessional review? Quis custodiet ipsos custodes?

“More established disciplines”: Indeed, the usual excuse is that we are still a young field, suffering from adolescent aggressiveness. If so, it may be, as Lance Fortnow has argued in a more general context, “time for computer science to grow up.” After some 60 or 70 years we are not so young any more.

What is your experience? Is the grass greener elsewhere? Are we just like everyone else, or do we truly have a nastiness problem in computer science?


All Bugs Great and Small

(Acknowledgment: this article came out of a discussion with Manuel Oriol, Carlo Furia and Yi Wei. The material is largely theirs but the opinions are mine.)

A paper on automatic testing, submitted some time ago, received the following referee comment:

The case study seems unrealistic and biased toward the proposed technique. 736 unique faults found in 92 classes means at least 8 unique faults per class at the same time. I have never seen in all my life a published library with so many faults …

This would be a good start for a discussion of what is wrong with refereeing in computer science today (on the negativism of our field see [1]); we have a referee who mistakes experience for expertise, prejudice for truth, and refuses to accept carefully documented evidence because “in all his life”, presumably a rich and rewarding life, he has never seen anything of the sort. That is not the focus of the present article, however; arrogant referees eventually retire and good papers eventually get published. The technical problems are what matters. The technical point here is about testing.

Specifically, what bugs are worth finding, and are high bug rates extraordinary?

The paper under review was a step in the work around the automatic testing tool AutoTest (see [2] for a slightly older overall description and [3] for the precise documentation). AutoTest applies a fully automatic strategy, exercising classes and their routines without the need to provide test cases or test oracles. What makes such automation possible is the combination of random generation of tests and reliance on contracts to determine the success of tests.
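
To make the mechanism concrete, here is a minimal sketch of the kind of contract-equipped Eiffel class on which AutoTest operates; this ACCOUNT class is a hypothetical example of mine, not part of EiffelBase. AutoTest creates objects, calls routines such as deposit with random arguments, discards calls whose preconditions do not hold, and reports a fault whenever a call violates a postcondition or the class invariant:

   class
      ACCOUNT

   create
      make

   feature -- Initialization

      make
            -- Start with an empty account.
         do
            balance := 0
         end

   feature -- Access

      balance: INTEGER
            -- Current balance.

   feature -- Operations

      deposit (amount: INTEGER)
            -- Add `amount' to the balance.
         require
            amount_positive: amount > 0
         do
            balance := balance + amount
         ensure
            added: balance = old balance + amount
         end

   invariant
      balance_non_negative: balance >= 0

   end

Note in passing that even this toy class harbors a boundary fault of the kind discussed further below: a deposit large enough to overflow balance will, on typical implementations, wrap around to a negative value and violate the invariant.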

For several years we have regularly subjected libraries, in particular the EiffelBase data structure library, to long AutoTest sessions, and we keep finding bugs (the better term is faults). The fault counts are significant; that is what caught the referee’s eye. In fact we have had such comments before: I don’t believe your fault counts for production software; your software must be terrible!

Well, maybe.

My guess is that in fact EiffelBase has no more bugs, and possibly far fewer bugs, than other “production” code. The difference is that the AutoTest framework performs far more exhaustive tests than usually practiced.

This is only a conjecture; unlike the referee I do not claim any special powers that make my guesses self-evident. Until we get test harnesses comparable to AutoTest for environments other than Eiffel and, just as importantly, libraries that are fully equipped with contracts, enabling the detection of bugs that otherwise might not come to light, we will not know whether the explanation is the badness of EiffelBase or the goodness of AutoTest.

What concrete, incontrovertible evidence demonstrates is that systematic random testing does find faults that human testers typically do not. In a 2008 paper [4] with Ilinca Ciupa, Manuel Oriol and Alexander Pretschner, we ran AutoTest on some classes and compared the results with those of human testers (as well as actual bug reports from the field, since this was released software). We found that the two categories are complementary: human testers find faults that are still beyond the reach of automated tools, but they typically never find certain faults that AutoTest, with its stubborn dedication to leaving no stone unturned, routinely uncovers. We keep getting surprised at bugs that AutoTest detects and which no one had sought to test before.

A typical set of cases that human programmers seldom test, but which frequently lead to uncovering bugs, involves boundary values. AutoTest, in its “random-plus” strategy, always exercises special values of every type, such as MAXINT, the maximum representable integer. Programmers don’t. They should — all testing textbooks tell them so — but they just don’t, and perhaps they can’t, as the task is often too tedious for a manual process. It is remarkable how many routines using integers go berserk when you feed them MAXINT or its negative counterpart. Some of the fault counts that seem so outrageous to our referee directly come from trying such values.
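
As an illustration (my own contrived example, not one of the actual EiffelBase faults), consider a routine computing the integer average of two values, with a postcondition stating the obvious bracketing property:

   average (a, b: INTEGER): INTEGER
         -- Average of `a' and `b'.
      do
         Result := (a + b) // 2
      ensure
         in_range: Result >= a.min (b) and Result <= a.max (b)
      end

For almost all inputs the routine works; but call it with both arguments equal to the largest representable INTEGER and the addition overflows (wrapping around on typical implementations), making the result negative and the postcondition false. Few human testers think of trying that pair of values; the random-plus strategy tries it as a matter of course.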

Some would say the cases are so extreme as to be insignificant. Wrong. Many documented software failures and catastrophes are due to untested extreme values. Perhaps the saddest is the case of the Patriot anti-missile system, which at the beginning of the first Gulf war was failing to catch Scud missiles, resulting in one case in the killing of twenty-eight American soldiers in an army barracks. It was traced to a software error [5]. To predict the position of the incoming missile, the computation multiplied time by velocity. The time computation used multiples of the time unit, a tenth of a second, stored in a 24-bit register and hence approximated. After enough time, long enough to elapse on the battlefield, but longer than what the tests had exercised, the accumulated error became so large as to cause a significant — and in the event catastrophic — deviation. The unique power of automatic testing is that unlike human testers it is not encumbered by a priori notions of a situation being extreme or unlikely. It tries all the possibilities it can.
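
The arithmetic behind the failure is easy to reproduce. The following sketch is my own reconstruction from the figures in the GAO report [5], not the actual Patriot code:

   patriot_drift
         -- Accumulated clock error from storing the 0.1-second
         -- time unit in a 24-bit fixed-point representation.
      local
         stored_tenth, error_per_tick, ticks, total_error: REAL_64
      do
         stored_tenth := 1677721.0 / 16777216.0
            -- 0.1 truncated to 24 fractional bits:
            -- floor (0.1 * 2^24) / 2^24.
         error_per_tick := 0.1 - stored_tenth
            -- About 9.5e-8 second lost at every tick.
         ticks := 100.0 * 3600.0 * 10.0
            -- Number of tenths of a second in 100 hours of uptime.
         total_error := ticks * error_per_tick
         print (total_error)
            -- About 0.34 second; at a closing speed of roughly
            -- 1700 m/s, over half a kilometer of tracking error.
      end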

The following example, less portentous in its consequences but just as instructive, is directly related to AutoTest. For his work on model-based contracts [6] performed as part of his PhD completed in 2008 at ETH, Bernd Schoeller developed classes representing the mathematical notion of set. There were two implementations; it turned out that one of them, say SET1, uses data structures that make the subset operation ⊆ easy to program efficiently; in the corresponding class, the superset operation a ⊇ b is then simply implemented as b ⊆ a. In the other implementation, say SET2, it is the other way around: ⊇ is directly implemented, and a ⊆ b is implemented as b ⊇ a. This all uses a nice object-oriented structure, with a general class SET defining the abstract notion and the two implementations inheriting from it.

Now you may see (if you have developed a hunch for automated testing) where this is heading: AutoTest knows about polymorphism and dynamic binding, and tries all the type combinations that make sense. One of the generated test cases has two variables s1 and s2 of type SET, and tries out s2 ⊆ s1; in one of the combinations that AutoTest tries, s1 is dynamically and polymorphically of type SET1 and s2 of type SET2. The version of ⊆ that it will use is from SET2, so it actually evaluates s1 ⊇ s2; but this calls the SET1 version of ⊇, which goes back to the SET2 version of ⊆. The process would go on forever, were it not for a timeout in AutoTest that uncovers the fault. Bernd Schoeller had tried AutoTest on these classes not in the particular expectation of finding bugs, but more as a favor to the then incipient development of AutoTest, to see how well the tool could handle model-based contracts. The uncovering of the fault, testament to the power of relentless, systematic automatic testing, surprised us all.
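
Here is a schematic reconstruction of the structure that caused the mutual recursion, with is_subset and is_superset standing for ⊆ and ⊇; the class and feature names and the placeholder bodies are mine, not Bernd Schoeller’s actual code:

   deferred class
      SET

   feature

      is_subset (other: SET): BOOLEAN
            -- Is every element of the current set also in `other'? (Current ⊆ other)
         deferred
         end

      is_superset (other: SET): BOOLEAN
            -- Is every element of `other' also in the current set? (Current ⊇ other)
         deferred
         end

   end

   class
      SET1

   inherit
      SET

   feature

      is_subset (other: SET): BOOLEAN
            -- Computed directly on SET1's representation.
         do
            Result := True
               -- Placeholder for the representation-specific algorithm.
         end

      is_superset (other: SET): BOOLEAN
            -- Current ⊇ other, computed as other ⊆ Current.
         do
            Result := other.is_subset (Current)
         end

   end

   class
      SET2

   inherit
      SET

   feature

      is_superset (other: SET): BOOLEAN
            -- Computed directly on SET2's representation.
         do
            Result := True
               -- Placeholder for the representation-specific algorithm.
         end

      is_subset (other: SET): BOOLEAN
            -- Current ⊆ other, computed as other ⊇ Current.
         do
            Result := other.is_superset (Current)
         end

   end

With s1 of dynamic type SET1 and s2 of dynamic type SET2, the call s2.is_subset (s1) dispatches to SET2’s version, which calls s1.is_superset (s2); SET1’s version calls s2.is_subset (s1) in turn, and the mutual recursion never terminates.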

In this case no contract was violated; the problem was infinite recursion, due to a use of O-O techniques that, for all its elegance, conceals a pitfall. In most cases, AutoTest finds the faults through violated postconditions or class invariants. This is one more reason to be cautious about sweeping generalizations of the kind “I do not believe these bug rates, no serious software that I have seen shows anything of the sort!”. Contracts express semantic properties of the software, which the designer takes care of stating explicitly. In run-of-the-mill code that does not benefit from such care, lots of things can go wrong but remain undetected during testing, only to cause havoc much later during some actual execution.

When you find such a fault, it is irrelevant that the case is extreme, or special, or rare, or trivial. When a failure happens it no longer matters that the fault was supposed to be rare; and you will only know how harmful it is when you deal with the consequences. Testing, single-mindedly devoted to the uncovering of faults [7], knows no such distinction: it hunts all bugs large and small.

References

[1] The nastiness problem in computer science, article on the CACM blog, 22 August 2011, available here.

[2] Bertrand Meyer, Ilinca Ciupa, Andreas Leitner, Arno Fiva, Yi Wei and Emmanuel Stapf: Programs that Test Themselves, IEEE Computer, vol. 42, no. 9, pages 46-55, September 2009, also available here.

[3] Online AutoTest documentation, available here at docs.eiffel.com.

[4] Ilinca Ciupa, Bertrand Meyer, Manuel Oriol and Alexander Pretschner: Finding Faults: Manual Testing vs. Random+ Testing vs. User Reports, in ISSRE ’08, Proceedings of the 19th IEEE International Symposium on Software Reliability Engineering, Redmond, November 2008, available here.

[5] US General Accounting Office: GAO Report: Patriot Missile Defense: Software Problem Led to System Failure at Dhahran, Saudi Arabia, February 4, 1992, available here.

[6] Bernd Schoeller, Tobias Widmer and Bertrand Meyer: Making Specifications Complete Through Models, in Architecting Systems with Trustworthy Components, eds. Ralf Reussner, Judith Stafford and Clemens Szyperski, Lecture Notes in Computer Science, Springer-Verlag, 2006, available here.

[7] Bertrand Meyer: Seven Principles of Software Testing, in IEEE Computer, vol. 41, no. 10, pages 99-101, August 2008, available here.


European Computer Science Summit 2011

The program for ECSS 2011 (Milan, 7-9 November) has just been put online [1]. The European Computer Science Summit, held yearly since 2005, is the annual conference of Informatics Europe and a unique opportunity to discuss issues of interest to the computer science / informatics research and education community; much of the audience is made up of deans, department heads, lab directors, researchers and senior faculty. Keynote speakers this year include Stefano Ceri, Mary Fernández, Monika Henzinger, Willem Jonker, Miron Livny, John Mylopoulos, Xavier Serra and John White.

ECSS is not a typical scientific conference; like Snowbird, its counterpart in the US, it is focused on professional and policy issues, and also a place to hear from technology leaders about their research visions. For me it is one of the most interesting events of the year.

References

[1] ECSS home page including advance program, here.


A safe and stable solution

Reading about the latest hullabaloo around Android’s use of Java, and more generally following the incessant flow of news about X suing Y in the software industry (with many combinations of X and Y) over Java and other object-oriented technologies, someone with an Eiffel perspective can only smile. Throughout its history, suggestions to use Eiffel have often been met with objections of proprietariness (along with “Will Eiffel still be around next year?”, a question that becomes truly riotous after 25 years), apparently because Eiffel initially came from a startup company. In contrast, many other approaches, from C++ to Smalltalk and Java, somehow managed to get favorable vibes from the media; the respective institutions, from AT&T to Xerox and Sun, must be disinterested benefactors of humanity.

Now many who believed this are experiencing a next-morning surprise, discovering under daylight that the person next to whom they wake up is covered with patents and lawsuits.

For their part, people who adopted Eiffel over the years and went on to develop project after project do not have to stay awake worrying about legal issues and the effects of corporate takeovers; they can instead devote their time to building the best software possible with adequate methods, notations and tools.

This is a good time to recall the licensing and standards situation of Eiffel. First, the Eiffel Software implementation (EiffelStudio): the product can be used under either an open-source or a commercial license. Under both licenses the software is exactly the same; what differs is the status of the code users generate: with the open-source license, they are required to make their own programs open-source; to keep their code proprietary, they need the commercial license. This is a fair and symmetric requirement. It is made even more attractive by the absence of any run-time fees or royalties of the kind typically charged by database vendors.

The open-source availability of the entire environment, over 2.5 million lines of (mostly Eiffel) code, has spurred the development of countless community contributions, with many more in progress.

Now for the general picture on the language, separate from any particular implementation. Java’s evolution has always been tightly controlled by Sun and now by its successor Oracle. There may actually be technical arguments in favor of a language’s designers retaining a strong say in its evolution, but they hardly apply now that most of the Java creators have left the company. Contrast this with Eiffel, which is entirely under the control of an international standards committee at ECMA International, the oldest and arguably the most prestigious international standards body for information technology. The standard is freely available online from the ECMA site [1]. It is also an ISO standard [2].

The standardization process is the usual ECMA setup, enabling any interested party to participate. This is not just a statement of principle but the reality, to which I can personally testify: in spite of being the language’s original designer and the author of the reference book, I lost countless battles in the discussions that led to the current standard and that continue in preparation of the next version. While I was not always pleased at the time, the committee’s collegial approach has led to a much more solid result than any single person could have achieved.

The work of ECMA TC49-TG4 (the Eiffel standard committee) has disproved the conventional view that committees can only design camels. In fact TC49-TG4 has constantly worked to keep the language simple and manageable, not hesitating to remove features deemed obsolete or problematic, while extending the range of the language and increasing the Eiffel programmer’s power of expression. As a result, Eiffel today is an immensely better language than when we started our work in 2002. Without a strong community-based process we would never, for example, have made Eiffel the first widespread language to guarantee void-safety (the compile-time removal of null-pointer-dereferencing errors), a breakthrough for software reliability.
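
For readers who have not seen void-safety in action, here is a minimal illustration (a contrived example of mine). A variable declared detachable may be Void; the compiler rejects any call on it except within a scope, such as the body of an object test, that statically guarantees attachment:

   print_count (s: detachable STRING)
         -- Print the length of `s', if it is attached.
      do
         -- The call `s.count' by itself would be rejected at compile
         -- time: `s' is declared detachable and may thus be Void.
         if attached s as l_s then
            -- In this branch `l_s' is statically guaranteed non-Void.
            print (l_s.count)
         end
      end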

Open, fair, free from lawsuits and commercial fights, supported by an enthusiastic community: for projects that need a modern quality-focused software framework, Eiffel is a safe and stable solution.

References

[1] ECMA International: Standard ECMA-367: Eiffel: Analysis, Design and Programming Language, 2nd edition (June 2006), available here (free download).

[2] International Organization for Standardization: ISO/IEC 25436:2006: Information technology — Eiffel: Analysis, Design and Programming Language, available here (for a fee; same text as [1], different formatting).
