Hi,
This looks like a bug to me, yet I'm not really confident about it, because I'm not sure how Spin handles deadlock states in the model when checking a never claim.
Here are two never claims that express the same property, GF(ap0) && GF(!ap0):
% cat never1
never {
T0_init:
    if
    :: (1) -> goto T0_init
    :: ((ap0)) -> goto accept_S1
    fi;
accept_S1:
    if
    :: ((!(ap0))) -> goto T0_init
    :: (1) -> goto T0_S2
    fi;
T0_S2:
    if
    :: ((!(ap0))) -> goto T0_init
    fi;
}
% cat never2
never {
T0_init:
    if
    :: ((ap0)) -> goto accept_S1
    :: (1) -> goto T0_init
    fi;
accept_S1:
    if
    :: ((!(ap0))) -> goto T0_init
    :: (1) -> goto T0_S2
    fi;
T0_S2:
    if
    :: ((!(ap0))) -> goto T0_init
    fi;
}
So, just to make everything clear: the only difference is the order of the two transitions in T0_init:
% diff -u never1 never2
@@ -1,8 +1,8 @@
 never {
 T0_init:
     if
-    :: (1) -> goto T0_init
     :: ((ap0)) -> goto accept_S1
+    :: (1) -> goto T0_init
     fi;
 accept_S1:
     if
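(A note on ap0: it is an atomic proposition over the state of the model. Judging from the trail further down, where the corresponding claim move is labeled ((leaders_num==1)), the model presumably defines it along these lines -- this is an assumption inferred from the trail, not copied from extinction.2.pm:

#define ap0 (leaders_num == 1)   /* assumed definition, inferred from the trail */

Both claims use the state naming produced by, e.g., ltl2ba for the formula []<>ap0 && []<>!ap0.)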
Now here is what I do not understand:
% spin -N never1 -search -a -DNOREDUCE extinction.2.pm | grep -A4 error
State-vector 168 byte, depth reached 292, errors: 0
946765 states, stored (1.0977e+06 visited)
3342247 states, matched
4439942 transitions (= visited+matched)
1577441 atomic steps
% spin -N never2 -search -a -DNOREDUCE extinction.2.pm | grep -A4 error
State-vector 168 byte, depth reached 255, errors: 1
9418 states, stored (9420 visited)
24162 states, matched
33582 transitions (= visited+matched)
9170 atomic steps
In the latter case, the trail ends with:
[...]
170: proc 0 (Node_0:1) extinction.2.pm:50 (state 39) [((j==1))]
172: proc 0 (Node_0:1) extinction.2.pm:54 (state 45) [lrec = (lrec+1)]
172: proc 0 (Node_0:1) extinction.2.pm:54 (state 44) [win = ((m%(10*10))/10)]
174: proc 0 (Node_0:1) extinction.2.pm:30 (state 14) [(((lrec==1)&&(win==0)))]
174: proc 0 (Node_0:1) extinction.2.pm:30 (state 13) [leaders_num = (leaders_num+1)]
<<<<<START OF CYCLE>>>>>
Never claim moves to line 4 [((leaders_num==1))]
spin: trail ends after 176 steps
#processes: 6
leaders_num = 1
176: proc 5 (chnlnel_ch_2:1) extinction.2.pm:304 (state 2)
176: proc 4 (chnlnel_ch_1:1) extinction.2.pm:292 (state 2)
176: proc 3 (chnlnel_ch_0:1) extinction.2.pm:280 (state 2)
176: proc 2 (Node_2:1) extinction.2.pm:205 (state 18)
176: proc 1 (Node_1:1) extinction.2.pm:114 (state 23)
176: proc 0 (Node_0:1) extinction.2.pm:95 (state 106)
176: proc - (never_0:1) never2:8 (state 11)
6 processes created
but I do not understand how to read the cycle here. I believe procs 0 to 5 are
in a deadlock, but the never claim definitely is not. If the system stutters in its
deadlock state (this is what should happen, right?), then the never claim should
be able to move to T0_S2 and (silently) die there.
The property GF(ap0) && GF(!ap0) should not match runs ending in a dead state if
the dead state is made stuttering, since ap0 can no longer change.
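To illustrate the reasoning on a toy example (a hypothetical sketch, not the extinction model): consider a single process that makes ap0 true and then blocks. In the stuttering extension of this deadlocked run, ap0 stays true forever, so from accept_S1 the claim can only take (1) to T0_S2 and then block; no acceptance cycle should exist:

/* toy.pm -- hypothetical sketch, not part of the benchmark */
int leaders_num = 0;
#define ap0 (leaders_num == 1)

active proctype P()
{
    leaders_num = 1;   /* ap0 becomes true ... */
    false              /* ... then the process blocks: a deadlock */
}

With a correct implementation of claim stutter, spin -N never2 -search -a toy.pm should report no acceptance cycle on this model.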
I've put all the files at https://gist.github.com/adl/1904a55e6b4b1f71ef9a/download
if you want to reproduce.
I found the cause of the bug. It is a very interesting case that should occur only very rarely,
related to the implementation of claim stutter and the specific order in which states are
discovered in the first and second halves of a nested depth-first search.
Glad you reported this example.
It will be fixed in the next release of Spin, and with the fix the two runs will produce
identical output -- as they should, of course.
Great. We experienced a handful of such discrepancies in a benchmark we are building to explore how different never claims impact Spin.
Can you confirm that we are safe from this bug if we use the current version with -DNOSTUTTER? This is our current workaround.
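Concretely, the workaround passes the extra compile-time directive through to the generated verifier, as in the runs above:

% spin -N never2 -search -a -DNOSTUTTER -DNOREDUCE extinction.2.pm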
That is correct -- you will not get that type of error, but technically you are then introducing a different bug, because claim stutter is a required feature of the automata-theoretic verification method...
If you want to fix the source in version 6.4.3, though, you can do so.
Find line 2726 in file pangen1.h (the line reading " #endif",) and add right below that line:
" if (depth > A_depth)",
Recompile Spin, and you should have a correct version.
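(Expressed as a unified diff against pangen1.h, with the hunk header and surrounding lines elided -- both lines are C string literals, since pangen1.h stores the source of the generated verifier as strings:

[...]
 " #endif",
+" if (depth > A_depth)",
[...]
)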
The same change will also be in the next release, 6.4.4.