I have a question about Spin's memory usage. I am attempting to verify some basic LTL properties on models with 100+ variables. At the moment, all non-determinism in the model occurs during initialization; after initialization, each path is deterministic (the rest of the model is enclosed in a d_step) and fairly short (depth < 20). However, when I run the models I run out of memory (5 GB is available to Spin). The only explanation I can see is that Spin is storing every reachable state in the model.
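For concreteness, the models have roughly this shape (a minimal sketch with hypothetical variable names, not my actual model — the real one has 100+ variables):

```promela
byte x, y;   /* ... 100+ variables in the real model ... */

init {
    /* all non-determinism happens here, at initialization */
    if
    :: x = 0
    :: x = 1
    fi;
    if
    :: y = 0
    :: y = 1
    fi;

    /* the remainder is deterministic and wrapped in a d_step,
       so it executes as a single indivisible transition */
    d_step {
        y = x + y;
        x = y - x;
    }
}
```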
My understanding of Spin is that once a path is fully explored and fails to produce a counter-example, the path is discarded. This should make memory usage linear in the search depth rather than in the number of reachable states. Is this the behavior I should be observing? If so, is there a reason I am not seeing it?
Thank you,
Tiberius