Archive for December 2011

Webinars Dec. 29: (1) Model-based contracts (2) Assessing agile methods

The Saint Petersburg Software Engineering seminar (organized jointly by the ITMO and SPbSPU universities) takes place every Thursday, normally from 18:00 to 21:00. You can find the program at


http://sel.ifmo.ru/seminar/

Starting with the Dec. 29 seminar, the talks can now be attended remotely. You can follow them live (i.e. starting at 18:00 St. Petersburg time, 15:00 Zurich/Paris time, 9 AM EST) at

http://sel.ifmo.ru/seminar/live

Warning: this is an experimental setup and it may not work perfectly the first time around.

The talks on Dec. 29 are the following (see the seminar page for the abstracts):

  • Nadia Polikarpova (ETH): API design with strong specifications (18:00-19:00)
  • Bertrand Meyer: Agile Methods: The Good, The Hype and The Ugly (19:00-20:00)



Guest article: funding great research

In an article posted in its original version on this blog [1] and in a revised version on the Communications of the ACM blog [2], I emphasized the relevance of incremental research. Recently Mikkel Thorup sent me some interesting comments, which I am publishing here as the first Guest Column of this blog.

References

[1] Bertrand Meyer: One Cheer for Incremental Research, in the present blog, 10 August 2009, available here.

[2] Bertrand Meyer: Long Live Incremental Research, in Communications of the ACM Blog, 13 June 2011, available here.

Guest article by Mikkel Thorup: Funding Great Research

Research foundations want great research projects. However, a while back Bertrand Meyer wrote an interesting blog post: Long Live Incremental Research [2]. With examples he showed that many of the greatest results of research could not possibly have been the projected results of great-sounding project descriptions. His conclusion is that we should drop the high-flying ambitions from project descriptions and instead support more incremental research proposals, hoping that great things will happen along the way. Indeed incremental research is perfect for research projects with predictable deliverables. However, I suggest the opposite conclusion: namely, that for some of the funding we drop the project description altogether.

The basic idea is that foundations should encourage researchers to look for results far better than those that can reasonably be projected. In particular, researchers should be free to follow their inspiration when they see exciting new opportunities. This is not achieved by tying researchers to incremental projects. Instead we can sometimes switch to result-based funding, that is, funding based on results already achieved (with emphasis on the more recent past). Such result-based funding is more like a reward for great results, and it offers researchers the perfect incentive to do their very best so as to secure future funding.

Consider a researcher with a history of brilliant ideas taking research in surprising new directions. If we try casting this as a project, the referees will rightly complain: “It is not clear how the applicant will come up with a brilliant idea, nor is it clear what the surprise will be”. With such a lack of focus and feasibility, a low project score is to be expected. If the project description has a predefined weight of, say, 40%, then the overall score will be too low for funding, regardless of the researcher’s established track record of succeeding in unlikely situations: with that weighting, even a perfect track-record score cannot lift a near-zero project score to a fundable total. However, research needs great new ideas. Therefore we need some result-based funding, so that we can support researchers with a proven talent for generating great new ideas even if we do not quite understand how it will happen.

The above problem is often very real in my field, theoretical computer science. As in other fields, theoretical research is only interesting if it contains surprises (otherwise it is more like development). A project plan would make sense if the starting point were a surprising idea or approach that would take years to develop; but in theoretical work, the most exciting ideas are often strikingly simple. Once you have such an idea, you are typically close to done, ready to start writing a paper. Thus, if you have a great idea when you apply for a grant, you will typically be done long before you get the grant. The essence of the research is thus the unpredictable search for powerful ideas and insights. The most appropriate project description is therefore just a description of the importance of the area to be researched and the type of results aimed for. The track record shows which researchers have the talent to succeed.

Dropping the how-part of the project description will greatly increase methodological diversity, allowing researchers to use the strategy that has proved most suitable for their area and their own talent and skills. As a simple example, Bertrand suggested funding incremental research, hoping that great surprising things would turn up along the way. My strategy is the opposite: I try to spend as much time as possible on overly ambitious targets. Most of the time I fail, but I rarely come home empty-handed, for by studying the unknown I nearly always discover something new, sometimes even more interesting than the original target. From the perspective of ambition, I see it as an advantage that I minimize the time spent on easy targets; foundations, however, seem to prefer that you take a planned path with some guaranteed targets along the way. The point here is not to argue that one strategy is superior to the other, but rather to embrace the diversity of strategies that may work depending on the area and the individual researcher.

Perhaps more seriously, if a target is hard to achieve, it may be because it requires a crazy approach that would not look reasonable to anyone else, but which may work for a particular researcher thanks to special talents and intuition. Indeed I have often been positively surprised to see how others succeeded using an approach I had myself dismissed. As a project, such crazy approaches would fail on perceived feasibility; but the point of result-based funding is that researchers are free to use whatever approach they find most effective. Funding is given to those who prove successful. This gives the perfect incentive to do great work, securing future funding.

Result-based funding would also reduce the resources needed to evaluate applications. It is very hard for a general panel to evaluate the methodology and success probability of a project. Moreover, it requires intimate knowledge of a field to evaluate how big a difference a result would make relative to what is already known. With published results, however, we know what happened, and we can rely on peer review to assess the difference they made to the field. All the panel has to do is evaluate how well the successes meet the objectives of the foundation.

Take as an example something like the ERC Advanced Investigator Grant, which welcomes high-risk, high-gain research. It would seem that aiming for surprising breakthroughs in an important area falls well within this scope. Having researchers with proven skills explore the area and follow their inspiration may be the optimal strategy. Uncertainty about what they will find should count as no worse than high risk. In fact, based on past performance, it may be safe to assume that they will discover something interesting, if not ground-breaking. However, when projects are scored on focused feasibility, such proposals will fail even if their expected return is very high. It must be possible to give a high overall score to promising research even when standard project parameters such as focus and feasibility would be counterproductive. At the end of the day, what we want are results, not project descriptions; so what should determine the overall score is which proposal is expected to yield the greatest results.

Long live great research!

Mikkel Thorup


How to design software


I think I recently understood how software should be designed, or at least, since I have informally practiced the method for some time, how to explain it. Maybe not absolutely all types of software, but the most important kind: APIs (application programming interfaces). The key task in software design is to define proper interfaces; if it is done right, everything else will fall into place, and if it is done wrong, there will be no end of problems everywhere else.

To give the description a Swiss theme, I may call this approach the Gotthard method. You are building a tunnel, starting from both sides at the same time: the northern, rainy, German-speaking side and the southern, sunny, Italian-speaking side. (Actual languages and climates may vary.)

Tunnel through a mountain

For the method to work it is really important that the two crews should meet somewhere in the middle:

Gotthard crews meeting

In software design we are typically confronted with two views:

  • The client view: application software needs certain abstractions and functionalities that will make it easy to produce clear, simple, extendible, reusable client programs.
  • The supplier view: the necessary mechanisms are usually available in a raw form directly reflecting the underlying platform, a combination of hardware and software facilities.

Good design is a negotiation and iteration process that tries to reconcile the two views, working top-down from the client side and bottom-up from the supplier side, just as you would when digging a tunnel between Uri and Ticino.

As an example, consider a Web-oriented API. On the supplier side, we have a stateless protocol with essentially one mechanism: processing a request and sending a response. On the client side, we want to enable the building of applications, such as an e-commerce site, which need to pretend that they are working with stateful sessions, just as with a classical client-server GUI setup. The task of building software is to provide what the client application needs, in terms that make sense to the client and with all the abstractions it requires: in our example, SESSION, STATE, USER and so on.
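To make the example concrete, here is a minimal sketch in Python (all names are hypothetical, not taken from any actual framework) of how a SESSION-like abstraction can be provided on top of a purely stateless request/response mechanism: each response hands the client a token, and the supplier side uses that token to recover the state it pretends to have kept all along.

  import uuid

  class SessionStore:
      """Provides a stateful-session illusion over stateless requests."""

      def __init__(self):
          self._sessions = {}  # token -> per-session state

      def handle(self, request):
          # Recover the session named by the token, or open a new one.
          token = request.get("token") or str(uuid.uuid4())
          state = self._sessions.setdefault(token, {"cart": []})
          # Apply the request to the recovered state.
          if request.get("action") == "add_to_cart":
              state["cart"].append(request["item"])
          # Return the token so the client can present it again:
          # continuity exists only in this lookup, not in the protocol.
          return {"token": token, "cart": list(state["cart"])}

  store = SessionStore()
  r1 = store.handle({"action": "add_to_cart", "item": "book"})
  r2 = store.handle({"token": r1["token"], "action": "add_to_cart", "item": "pen"})
  assert r2["cart"] == ["book", "pen"]  # state survived two independent requests

The client program reasons in terms of sessions and carts; the supplier side still sees nothing but self-contained request/response pairs.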

Since these higher-level abstractions are not directly provided by the supplier side, they need to be implemented or, to use a more appropriate term, faked. After all, everything in computer science is about faking: pretending that we have machines that we really do not have, simply by building them conceptually, in the form of APIs, in terms of machines that we have already built (bottom-up approach) or hope to build (top-down approach). “Building” a machine here means faking it again, in terms of simpler ones; the only exceptions are the bottom-most machines, down at the level of the hardware, which very few programmers ever use directly anyway. Fakes all the way down.
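As a toy illustration of this layering (my example, not part of the original argument): a first-in-first-out queue machine can be faked in terms of two stack machines, which are themselves faked in terms of Python's built-in lists.

  class Stack:
      """A LIFO 'machine', faked in terms of a built-in list."""

      def __init__(self):
          self._items = []

      def push(self, x):
          self._items.append(x)

      def pop(self):
          return self._items.pop()

      def is_empty(self):
          return not self._items

  class Queue:
      """A FIFO 'machine', faked in terms of two LIFO stacks."""

      def __init__(self):
          self._inbox, self._outbox = Stack(), Stack()

      def enqueue(self, x):
          self._inbox.push(x)

      def dequeue(self):
          # Reversing one stack onto another restores arrival order.
          if self._outbox.is_empty():
              while not self._inbox.is_empty():
                  self._outbox.push(self._inbox.pop())
          return self._outbox.pop()

  q = Queue()
  q.enqueue("a")
  q.enqueue("b")
  assert q.dequeue() == "a"  # FIFO behavior, built entirely from LIFO parts

Each level presents a machine that the level below does not actually provide; the queue's clients never see the stacks, just as the stacks' clients never see the lists.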

The process of software design then consists of developing intermediate levels of abstraction until we reach a compromise: a set of abstractions that satisfy the needs of application programmers and are efficiently implementable (or better yet, already implemented as part of this negotiation process) on the basis of what was available in the first place.

A poorly functioning software process looks more like yo-yo design: trying something too abstract, then something too low-level, and so on, converging too late if at all. Effective design is like boring a tunnel using modern engineering techniques, which rely on a clear understanding of where the crews start on both sides and make sure that they end up meeting in the right place.

 

Photo reference: Herrenknecht AG, www.herrenknecht.com.
