NOTE: This transcription was contributed by Martin P.M. van der Burgt, who has devised a process for producing transcripts automatically. Although its markup is incomplete, we believe it serves a useful purpose by virtue of its searchability and its accessibility to text-reading software. It will be replaced by a fully marked-up version when time permits. —HR
26th November 1975
A sequel to EWD535.
I just realized that in EWD535 I have failed to touch upon the second point raised in your letter, where you write that you feel compelled to replace
if B → S1 ▯ non B → S2 fi                                        (1)
by
if B then S1 else S2 fi                                          (2)
In my book “A Discipline of Programming”, and in all the theory about semantics that underlies the notion of the guarded commands, all expressions —not only the guards, but also all arithmetic expressions at the right-hand side of an assignment operator— are regarded as (possibly partial) functions of the “current state”, which is supposed to change only as a result of explicit assignments. Furthermore I have restricted myself to a programming language that trivially admits a sequential implementation. I have given no further prescriptions about that implementation; in particular I have not prescribed that the execution of (1) must imply a separate evaluation of B and another one of non B. On the contrary! One can defend the point of view that for any boolean expression B the evaluation of B by definition implies the concurrent evaluation of non B, because both answers give exactly the same information about the current state. From that point of view —I still talk about sequential uniprogramming— one can appreciate (2) as a hint to that part of the compiler that optimizes boolean expressions: it saves it the trouble of recognizing that the two guards are each other's complement. Needless to say, all forms of side-effects are ruled out: they are regarded by me as invalid implementations, as they would violate the axiom of assignment, etc.
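In the weakest-precondition semantics of that book, and under the assumption stated above that B is side-effect free and defined in the states of interest, the two forms are indeed equivalent; the following derivation is only a sketch of why the complementary guards add nothing:

\begin{align*}
wp(\text{if } B \rightarrow S_1 \;[\!]\; \text{non } B \rightarrow S_2 \text{ fi},\, R)
  &= (B \lor \lnot B) \land (B \Rightarrow wp(S_1, R)) \land (\lnot B \Rightarrow wp(S_2, R))\\
  &= (B \Rightarrow wp(S_1, R)) \land (\lnot B \Rightarrow wp(S_2, R)),
\end{align*}

and the last line is exactly the classical meaning of “if B then S1 else S2 fi”: the choice between (1) and (2) is a matter of notation and of compiler hints, not of semantics.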
The awkward point is how to transfer this pattern of reasoning in order to describe the semantics of a number of mutually unsynchronized programs that —at a certain grain of interleaving, say: a memory cycle— fool around in the same store. I know of only one way (and it is not very attractive!). I shall sketch it nevertheless; the unattractiveness is probably a consequence of the sad fact that these problems are inherently ugly.
Consider programs A and B with the shared variables x and y; consider then separately program A in its private state space extended with x and y, and program B in its private state space extended with x and y. When considering program A we now must admit that at each semicolon of A, so to speak, the total state of A (i.e. including x and y) may change non-deterministically, bound only by the limitations of what B may do. If, for instance, B has the trivial form
do true → x:= random od
or
do true → x:= 1 od                                               (3)
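One way to make “may change at each semicolon” precise, offered here only as a sketch in the wp-notation of the book: every assertion P on which the reasoning about A relies at a semicolon must remain true under whatever B may do there. For B of form (3) that requirement reads

\[ P \;\Rightarrow\; wp(\text{“}x := 1\text{”},\, P), \]

i.e. P must still hold with every occurrence of x replaced by 1; for the cruder form “do true → x:= random od” it becomes “P must hold for every value of x”, i.e. P may not mention x at all.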
The problem, of course, is that program B has more structure than (3), and that, when studying program A, we have to take that into account. If, in program A, we have
do x > 0 → s:= x; x:= s - 1 od                                   (4)
while program B has the form
do true → do y > 0 → x:= 1 od od                                 (5)
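The following is a minimal executable sketch, in present-day notation, of the interference model described above; the names, the initial value x = 3, and the schedule chosen for B are merely illustrative. Each guard evaluation and each assignment of A is taken as one atomic step, and at every “semicolon” of A program B is given the opportunity to act.

    # A:  do x > 0 -> s := x; x := s - 1 od        (4)
    # B:  do true  -> do y > 0 -> x := 1 od od     (5)
    def run(y0, max_steps=30):
        """Interleave A's atomic steps with one possible schedule for B."""
        x, y, s = 3, y0, 0          # initial values chosen only for the demonstration
        pc = "guard"                # A's next atomic step
        for step in range(max_steps):
            # one possible schedule: B interferes at every semicolon while y > 0
            if y > 0:
                x = 1               # B's inner loop body: x := 1
            if pc == "guard":       # A evaluates its guard x > 0
                if x > 0:
                    pc = "read"
                else:
                    return "A terminated after %d atomic steps" % step
            elif pc == "read":      # A: s := x
                s = x
                pc = "write"
            else:                   # A: x := s - 1
                x = s - 1
                pc = "guard"
        return "A still running after %d atomic steps" % max_steps

    print(run(y0=0))   # y <= 0: B never touches x, A terminates as when studied in isolation
    print(run(y0=1))   # y > 0: under this schedule B keeps resetting x to 1, A never terminates

In this sketch, whether A makes progress depends on a variable, y, that A itself never touches: exactly the kind of possibility that the isolated study of A must admit at each of its semicolons.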
It was the experience of studying her thesis, and the moral of a number of my own exercises, that caused me —first thing I did!— when I tried to understand your solution, to do away with the boolean procedure INITIALIZERESPONSIBILITY: the value that its call returns is not a function of the state, but a (very complicated) function of past history. But that implies that I want to see at this level the semicolons, the sequencing to be more precise. One can read your observation as a plea for the if-then-else-fi construct; another conclusion can be that a function procedure, the evaluation of which references more than one shared variable, is a misleading construct that we had better regard as “against the rules”. Susan Owicki has made the latter choice, and she has my blessing. Unless new arguments emerge I think that I shall stick to my guarded commands: I am still quite happy with them!
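The danger can be shown with a deliberately small, hypothetical example (the names below are not those of the procedure discussed above, and the interfering assignment is written in the middle of the function body only to pin down one possible interleaving): a function that reads two shared variables may return a combination of values that the store never contained.

    shared = {"x": 0, "y": 0}             # shared store; both variables initially 0

    def snapshot_with_interference():
        a = shared["x"]                   # first read: sees x = 0
        shared["x"], shared["y"] = 1, 1   # another program's step, interleaved here
        b = shared["y"]                   # second read: sees y = 1
        return (a, b)                     # (0, 1): the store held (0, 0) and then (1, 1),
                                          # but never (0, 1)

    print(snapshot_with_interference())   # prints (0, 1)

Its result is, indeed, not a function of the current state but of the history of the interleaving.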
The most effective way of mastering complexity is avoiding the introduction of complexity in the first place! I would love to know how I could put more “meat” into that observation.
Burroughs                               prof.dr.Edsger W.Dijkstra
Plataanstraat 5                         Burroughs Research Fellow
NL-4565 NUENEN
The Netherlands
Transcribed by Martin P.M. van der Burgt
Last revision