
The charming naïveté of an IEEE standard

The IEEE Standard for Requirements Specifications [1], a short and readable text providing concrete and useful advice, is a valuable guide for anyone writing requirements. In our course projects we always require students to follow its recommended structure.

Re-reading it recently, I noticed the following extract in the section that argues that a requirements specification should be verifiable (sentence labels in brackets are my addition):

[A] Nonverifiable requirements include statements such as “works well,” “good human interface,” and “shall usually happen.” [B] These requirements cannot be verified because it is impossible to define the terms “good,” “well,” or “usually.”

[C] The statement that “the program shall never enter an infinite loop” is nonverifiable because the testing of this quality is theoretically impossible.

[D] An example of a verifiable statement is
      [E] “Output of the program shall be produced within 20 s of event 60% of the time; and shall be produced within 30 s of event 100% of the time.”
[F] This statement can be verified because it uses concrete terms and measurable quantities.

[A] and [B] are good advice, deserving to be repeated in every software engineering course and to anyone writing requirements. [C], however, is puzzling.

One might initially understand that the authors are telling us that it is impossible to devise a finite set of tests guaranteeing that a program terminates. But on closer examination this cannot be what they mean. Such a statement, although correct, would be uninteresting since it can be applied to any functional requirement: if I say “the program shall accept an integer as input and print out that same integer on the output”, I also cannot test that (trivial) requirement finitely, since I would have to try all integers. The same observation applies to the example given in [D, E, F]: the property [E] they laud as an example of a “verifiable” requirement is just as impossible to test exhaustively [2].

Since the literal interpretation of [C] is trivial and applies to essentially all possible requirements, whether bad or good in the authors’ eyes, they must mean something else when they cite loop termination as their example of a nonverifiable requirement. The word “theoretically” suggests what they have in mind: the undecidability results of computation theory, specifically the undecidability of the Halting Problem. It is well known that no general mechanism exists to determine whether an arbitrary program, or even just an arbitrary loop, will terminate. This must be what they are referring to.

Except, of course, that they are wrong. And a very good thing too that they are wrong, since “The program shall never enter an infinite loop” is a pretty reasonable requirement for any system [3].

If we were to accept [C], we would also accept that it is OK for any program to enter an infinite loop every once in a while, because the authors of its requirements were not permitted to specify otherwise! Fortunately for users of software systems, this particular sentence of the standard is balderdash.

What the undecidability of the Halting Problem states, of course, is that no general mechanism exists that could examine an arbitrary program or loop and tell us whether it will always terminate. This result in no way excludes the possibility of verifying (although not through “testing”) that a particular program or loop will terminate. If the text of a program shows that it will print “Hello World” and do nothing else, we can safely determine that it will terminate. If a loop is of the form

from i := 1 until i > 10 loop
   print (i)
   i := i + 1
end

there is also no doubt about its termination. More complex examples require the techniques of modern program verification, such as exhibiting a loop variant in the sense of Hoare logic, but they can still be practically tractable.
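To make the idea concrete, here is a sketch of my own, in the same Eiffel-style notation as the loop above (it is not an example from the standard): Euclid’s algorithm, whose loop is not a simple counter running to a bound, yet whose termination follows from exhibiting a variant, an integer expression that remains non-negative and strictly decreases at every iteration.

gcd (a, b: INTEGER): INTEGER
      -- Greatest common divisor of non-negative a and b
      -- (with the convention that gcd (n, 0) = n).
   require
      a >= 0 and b >= 0
   local
      x, y, r: INTEGER
   do
      from
         x := a
         y := b
      until
         y = 0
      loop
         -- Loop variant: y. When y > 0 we have 0 <= x \\ y < y,
         -- so y strictly decreases at each iteration while staying
         -- non-negative; the loop therefore terminates.
         r := x \\ y
         x := y
         y := r
      end
      Result := x
   end

Eiffel even lets you record such an expression in an explicit variant clause, which the environment can check at run time when assertion monitoring is enabled, and which verification tools can discharge statically.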

Like many fundamental results of modern science (think of Heisenberg’s uncertainty principle), Turing’s demonstration of the undecidability of the Halting Problem is at the same time simple to state, striking, deep, and easy to misunderstand. It is touchingly refreshing to find such a misunderstanding in an IEEE standard.

Do not let it discourage you from applying the excellent advice of the rest of IEEE 830-1998; but when you write a program, do make sure — whether or not the requirements specify this property explicitly — that all its loops terminate.

Reference and notes

[1] IEEE Computer Society: IEEE Recommended Practice for Software Requirements Specifications, IEEE Standard 830-1998, revised 1998; available from the IEEE (with subscription).

[2] The property [E] is actually more difficult to test, even non-exhaustively, than the authors acknowledge, if only because it is a probabilistic requirement, which can only be tested after one has defined appropriate probabilistic hypotheses.

[3] In requiring that all programs terminate we must of course take note of the special case of systems that are non-terminating by design, such as most embedded systems. Such systems, however, are still made out of components representing individual steps that must terminate. The operating system on your smartphone may need to run forever (or until the next reboot), but the processing of an incoming text message is still, like a traditional program, required to terminate in finite time.

 


The good and the ugly

Once in a while one hits a tool that is just right. An example worth publicizing is the EasyChair system for conference management [1], which — after a first experience as a reviewer — I have selected whenever I was in a position to make the choice for a new conference in recent years.

At first sight, a conference management system does not seem so hard to put together; it is in fact a traditional project topic for software engineering courses. But this apparent simplicity is deceptive, as a usable system must accommodate countless small and large needs. To take just one example, you can be a member of a program committee for a conference and also submit a paper to it; this implies strict rules about what you may see, for example reviews of other people’s papers with the referees’ names, and what you must not see, such as anything concerning the evaluation of your own submission. Taking care of myriad such rules and requirements requires in-depth domain knowledge about conferences, and a thorough analysis.
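To give an idea of what just one such rule looks like once it has been captured in a model, here is a purely hypothetical sketch in Eiffel-like notation (the classes and features are mine for illustration, not EasyChair’s actual design):

can_see_review (viewer: PERSON; r: REVIEW): BOOLEAN
      -- May viewer see review r, including the referee’s name?
      -- (PERSON, REVIEW and their features are illustrative only.)
   do
      Result := viewer.is_pc_member
         and then not r.paper.authors.has (viewer)
         and then not r.paper.has_conflict (viewer)
   end

Multiply this by the many roles, paper states and transitions of a real conference and the need for a serious domain analysis becomes clear.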

EasyChair is based on such an analysis. It knows what a conference is, and understands what its users need. Here for example is my login screen on EasyChair:

[Screenshot: the EasyChair login screen]

EasyChair knows about me: I only have one user name and one password. It knows the conferences in which I have been involved (and found them by itself). It knows about my various roles: chair, author, etc., and will let me do different things depending on the role I choose.

The rest of the tool is up to the standards set by this initial screen. Granted, the Web design is very much vintage 1994; a couple of hours on the site by a professional graphics designer would not hurt, but, really, who cares? What matters is the functionality, and it is not by accident that EasyChair’s author is a brilliant logician [2]. Here is someone who truly understands the business of organizing and refereeing a conference, has translated this understanding into a solid logical model, and has at every step put himself in the shoes of the participants in the process. As a user you feel that everything has been done to make you feel comfortable and perform efficiently, while protecting you from hassle.

Because this is all so simple and natural, you might forget that the system required extensive design. If you need proof, it suffices to consider, by contrast, the ScholarOne system, which, as punishment for our sins, both ACM and IEEE use for their journals.

Even after the last user still alive has walked away, ScholarOne will remain in the annals of software engineering as a textbook illustration of how not to design a system and its user interface. Not the visuals; no doubt that site had a graphics designer. But everything is designed to make the system as repellent as possible for its users. You keep being asked for information that you have already entered. If you are a reviewer for Communications of the ACM and submit a paper to an IEEE Computer Society journal, the system does not remember you, since CACM has its own sub-site; you must re-enter everything. Since your identifier is your email address, you will have two passwords with the same ID, which confuses the browser. (I keep forgetting the appropriate password, which the site obligingly emails me, in the clear.) IEEE publications have a common page, but here is how it looks:

[Screenshot: the ScholarOne page for IEEE publications, with its truncated journal menu on the right]

See the menu on the right? It is impossible to see the full name of any of the “Transactio…” entries. (No tooltips, of course.) Assume you just want to know what one of them is, for example “th-cs”: if you select it you are prompted to provide all kinds of information (which you have entered before for other publications) before you can even proceed.

This user interface design (the minuscule menu, an example of what Scott Meyers calls the “Keyhole problem” [3]) is only a small part of the usability flaws that plague the system. The matter is one of design: the prevailing viewpoint is that of the designers and administrators, not the users. I was not really surprised when I found out that the system comes from the same source as the ISI Web of Science system (which should never be used for computer science; see [4]).

It is such a pleasure, in contrast, to see a system like EasyChair — for all I know a one-man effort — with its attention to user needs, its profound understanding of the problem domain, and its constant improvements over the years.

References

[1] EasyChair system, at http://www.easychair.org.

[2] Andrei Voronkov, http://www.voronkov.com/.

[3] Scott Meyers, The Keyhole Problem, at http://www.aristeia.com/TKP/draftPaper.pdf; see also slides at http://se.ethz.ch/~meyer/publications/OTHERS/scott_meyers/keyhole.pdf

[4] Bertrand Meyer, Christine Choppy, Jan van Leeuwen, Jørgen Staunstrup: Research Evaluation for Computer Science, in Communications of the ACM, vol. 52, no. 4, pages 131-134, online at http://portal.acm.org/citation.cfm?id=1498765.1498780 (requires subscription). Longer version available at http://www.informatics-europe.org/docs/research_evaluation.pdf (free access).
