A forum for Spin users
Hi all.
I'm tackling a verification that requires a large amount of memory, and I'm exploring the possibilities offered by the swarm preprocessor available on the Spin website.
However, it seems to me that the swarm tool provides only an approximate bitstate search.
Is there a tool that enables a full verification on a cluster of machines? Such a tool would be a way to overcome the memory limitations of a single machine.
B.R., Martino Fornasa.
there is a way to compile the model checker so that it can take advantage of a network of machines, but splitting memory across those machines in an exhaustive search would add significant overhead. so this currently is not attractive, or even practical.
swarm does exploit bitstate hashing and has the advantage of being quite fast while still giving high confidence -- you effectively also split the memory over a larger number of computers, performing a different search on each of them, with the combined effect giving very high confidence in exhaustive coverage
for a large application this currently is the best option available to perform a thorough search (unless you'd have more than a lifetime available to wait for other types of searches to complete :-)
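for reference, here is a minimal sketch of the kind of thing swarm automates: independent bitstate runs, each using a different hash function, executed in parallel (model.pml is a placeholder name; swarm itself also varies search orders, depth limits, and more, and generates the script for you):

[code]
spin -a model.pml                # generate the verifier source pan.c
cc -O2 -DBITSTATE -o pan pan.c   # compile for bitstate (supertrace) search
for h in 1 2 3 4
do
    ./pan -w28 -h$h > out.$h &   # -hN selects one of pan's hash functions
done
wait                             # wait for all runs to finish
grep "errors:" out.*             # each run reports its own verdict
[/code]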
[quote=spinroot]there is a way to compile the model checker so that it can take advantage of a network of machines, but splitting memory across those machines in an exhaustive search would add significant overhead. so this currently is not attractive, or even practical.
[/quote]
Thank you for your clarification.
I'm a Spin novice, and my main trouble is estimating the complexity/scalability of the problem I've coded.
Your book explains a number of complexity-reduction approaches in detail, and I have tried to apply them. However, it is difficult for me to estimate the approximate number of reachable states when I scale up my problem.
Without going (for now) into the boring details of my specific problem, I wonder whether it is possible to outline a strategy for estimating the number of reachable states of a Promela model, in order to understand the amount of work required to make the problem tractable. In other words, it is hard for me to tell whether I'm covering 50% or 0.000001% of the reachable states... Such an estimate would be a great help in defining a verification strategy, also taking advantage of the swarm tool.
the best way to come up with such an estimate is to do a series of bitstate runs.
you start with, say, a 1 MB hash array: 2^20 bytes is 2^23 bits, which you specify with the runtime flag -w23 in bitstate mode
you note the number of states reached
then increase the parameter by one and repeat: -w24, -w25, -w26, etc.
keep doing this until the number of states no longer doubles
at that point you can get a reliable estimate of the statespace size
by considering the hash factor (the number of bits in the hash array divided by the number of states stored; when it is well above 100, coverage is likely close to 100%) and the corresponding estimate of the number of states missed.
if you cannot get to that point, i.e., if the number of states keeps doubling with each increment until you run out of memory, then the statespace is intractably large, and there's not much hope of searching it exhaustively.
at that point it's best to consider abstraction techniques etc.
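for example, the whole sequence can be scripted like this (a minimal sketch, assuming the model is in a file called model.pml, a placeholder name):

[code]
spin -a model.pml                 # generate the verifier source pan.c
cc -O2 -DBITSTATE -o pan pan.c    # compile for bitstate search
for w in 23 24 25 26 27 28
do
    echo "=== -w$w ==="
    ./pan -w$w | egrep "states, stored|hash factor"
done
[/code]

when the number of stored states stops doubling from one -w to the next, that count approximates the true statespace size; the hash factor reported by pan indicates how complete the coverage of each run was.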
Thanks.
The outlined method works well.