Major Section: MISCELLANEOUS
Useful Forms:
(accumulated-persistence t)              ; activate statistics gathering

(show-accumulated-persistence :frames)   ; display statistics ordered by
(show-accumulated-persistence :tries)    ; frames built, times tried,
(show-accumulated-persistence :ratio)    ; or their ratio

(accumulated-persistence nil)            ; deactivate
Generally speaking, the more ACL2 knows, the slower it runs. That is because the search space grows with the number of alternative rules. Often, the system tries to apply rules that you have forgotten were even there, if you knew about them in the first place! ``Accumulated-persistence'' is a statistic (originally developed for Nqthm) that helps you identify the rules that are causing ACL2's search space to explode.
Accumulated persistence tracking can be turned on or off. It is generally off. When on, the system runs about 4 times slower than otherwise! But some useful numbers are collected. When it is turned on, by

  ACL2 !>(accumulated-persistence t)

an accumulation site is initialized and henceforth data about which rules are being tried is accumulated into that site. That accumulated data can be displayed with show-accumulated-persistence, as described in detail below. When accumulated persistence is turned off, with (accumulated-persistence nil), the accumulation site is wiped out and the data in it is lost.

The ``accumulated persistence'' of a rune is the number of runes the system has attempted to apply (since accumulated persistence was last activated) while the given rune was being tried.
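For concreteness, here is the shape of a typical session; the thm form is only a placeholder conjecture (ACL2 proves the associativity of append on its own), and the output of each form, omitted here, depends on your current rules and books.

  ACL2 !>(accumulated-persistence t)              ; start gathering data
  ACL2 !>(thm (equal (append (append a b) c)      ; any proof attempt will do
                     (append a (append b c))))
  ACL2 !>(show-accumulated-persistence :frames)   ; most expensive runes first
  ACL2 !>(show-accumulated-persistence :ratio)    ; same data, sorted by ratio
  ACL2 !>(accumulated-persistence nil)            ; stop and discard the data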
Consider a :rewrite rule named rune. For simplicity, let us imagine that rune is tried only once in the period during which accumulated persistence is being monitored. Recall that to apply a rewrite rule we must match the left-hand side of the conclusion to some term we are trying to rewrite, establish the hypotheses of rune by rewriting, and, if successful, then rewrite the right-hand side of the conclusion. We say rune is ``being tried'' from the time we have matched its left-hand side to the time we have either abandoned the attempt or finished rewriting its right-hand side. (By ``match'' we mean to include any loop-stopper requirement; see loop-stopper.) During that period of time other rules might be tried, e.g., to establish the hypotheses. The rules tried while rune is being tried are ``billed'' to rune in the sense that they are being considered here only because of the demands of rune. Thus, if no other rules are tried during that period, the accumulated persistence of rune is 1 -- we ``bill'' rune once for its own application attempt. If, on the other hand, we tried 10 rules on behalf of that application of rune, then rune's accumulated persistence would be 11.
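For instance, consider a rule of the following shape (the name and formula are merely illustrative). While its rune is being tried, every rule tried in order to relieve the (true-listp x) hypothesis, and every rule tried while rewriting the instantiated right-hand side, is billed to that rune.

  (defthm append-nil-right                ; hypothetical rule, for illustration
    (implies (true-listp x)
             (equal (append x nil) x)))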
One way to envision accumulated persistence is to imagine that every time a rune is tried it is pushed onto a stack. The rules tried on behalf of a given application of a rune are thus pushed and popped on the stack above that rune. A lot of work might be done on its behalf -- the stack above the rune grows and shrinks repeatedly as the search continues for a way to use the rune. All the while, the rune itself ``persists'' in the stack, until we finish with the attempt to apply it, at which time we pop it off. The accumulated persistence of a rune is thus the number of stack frames built while the given rune was on the stack.
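As a minimal sketch of that accounting (not ACL2's actual implementation), suppose each application attempt is recorded as a cons of its rune and the list of attempts made on its behalf. The frames billed to the outermost attempt are then just the number of nodes in that tree:

  (mutual-recursion
   (defun frames (attempt)       ; one frame for the attempt itself, plus every
     (if (atom attempt)          ; frame built while it remains on the stack
         1
       (+ 1 (frames-lst (cdr attempt)))))
   (defun frames-lst (attempts)
     (if (atom attempts)
         0
       (+ (frames (car attempts))
          (frames-lst (cdr attempts))))))

For example, (frames '(rune1 (rune2 (rune3) (rune4)))) is 4: rune1 is billed for its own frame, for rune2's frame, and for the two frames built on rune2's behalf.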
Note that accumulated persistence is not concerned with whether the attempt to apply a rune is successful. Each of the rules tried on its behalf might have failed and the attempt to apply the rune might have also failed. The ACL2 proof script would make no mention of the rune or the rules tried on its behalf because they did not contribute to the proof. But time was spent pursuing the possible application of the rune and accumulated persistence is a measure of that time.
A high accumulated persistence might come about in two extreme ways. One is that the rule causes a great deal of work every time it is tried. The other is that the rule is ``cheap'' but is tried very often. We therefore keep track of the number of times each rule is tried as well as its persistence. The ratio between the two is the average amount of work done on behalf of the rule each time it is tried.
When the accumulated persistence totals are displayed by the function show-accumulated-persistence we sort them so that the most expensive runes are shown first. We can sort according to one of three keys:

  :frames - the number of frames built on behalf of the rune
  :tries  - the number of times the rune was tried
  :ratio  - frames built per try

The key simply determines the order in which the information is presented. If no argument is supplied to show-accumulated-persistence, :frames is used.
Note that a rune with high accumulated persistence may not actually be the ``culprit.'' For example, suppose rune1 is reported to have a :ratio of 101, meaning that on the average a hundred and one frames were built each time rune1 was tried. Suppose rune2 has a :ratio of 100. It could be that the attempt to apply rune1 resulted in the attempted application of rune2 and no other rune. Thus, in some sense, rune1 is ``cheap'' and rune2 is the ``culprit'' even though it costs less than rune1.
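To make the arithmetic concrete, here is one hypothetical accounting consistent with those numbers, assuming each rune is tried exactly once:

  ; frames billed to rune2:  1 (its own frame) +  99 built on its behalf  = 100
  ; frames billed to rune1:  1 (its own frame) + 100 (all of rune2's work) = 101
  ; tries:   rune1 = 1,       rune2 = 1
  ; :ratio   rune1 = 101/1,   rune2 = 100/1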
Users are encouraged to think about other meters we could install in
ACL2 to help diagnose performance problems.