A somewhat open letter to Ben Kuipers
Dear Ben,
you speculated why theorists are tempted to regard experimental CS as a second-class effort. I think there is more to it. By virtue of the calling of the universities, academic disciplines are teachable disciplines and the primary purpose of academic research is to contribute to the improvement of teachable disciplines. A major objection to "experimental CS" is that it is a very expensive and inefficient way of improving our teachable discipline: the equipment and its maintenance are expensive, the programs to be written cost an inordinate amount of effort, and the contribution to what we can and should teach is meagre. [Lest you misunderstand me: I did not start as a "theorist" and still refuse to be characterized as such. For 15 years I was in the fortunate situation that I could try out novelties using each time the most powerful equipment then available in my country. But a quarter of a century ago, I saw funding problems lurking around the corner, and I bought my intellectual freedom by concentrating on work that would not require equipment or assisting staff. My now not using machines is a direct consequence of my decision to reduce the expense and increase the effectiveness of my research.]
The whole idea of experimental CS has one of its seeds in the position paper by Newell, Simon and Perlis, the tenor of which was "Facts breed science; computers exist, ergo!". The weakness of their argument is that one can substitute "postage stamps" for "computers" and then use it to defend a Department of Philately. Having listened to so-called "experimental computer scientists", I have come to the conclusion that the term "experimental CS" is a gross misnomer.
In the natural sciences, experiments are indispensable to check whether informal nature provides an acceptable model for your theory. With modern technology we can easily build artefacts that, though formally defined, are of such a structure that a derivation of their relevant properties is way beyond our analytical abilities, but can such artefacts play the role of "informal nature", as mentioned in the previous sentence?
Hardly. You see, experiments involve measuring, but to think that the measurements constitute the experiment is like equating composing with the writing of the score. It is always the theory at stake that lends the experiment its significance. Think of the experiment of Michelson and Morley, refuting a "stationary aether", or the one of Millikan, which to all intents and purposes only admits models with discrete electrons. That, in passing, the speed of light and the charge of the electron were measured was of secondary significance. (This "secondary" is no exaggeration: you can teach the whole of the Theory of Relativity, complete with all the experimental evidence, without ever mentioning the actual value of the speed of light: just c will do.) What I have seen so far in "experimental CS" is measuring without any theory that could be refuted or supported by the outcome of the measurements; consequently, the latter are at most of very local significance and hardly contribute to "the improvement of our teachable discipline". In all cases I remember, "performance measurement" would have been a much more appropriate term than "experimental CS".
The answer to the question to what extent we should try to teach our students how to measure performance probably depends on the extent to which we think that a first-class university should be involved in vocational training.