
Exponential search / Reasoning horizon #37

Open
wants to merge 21 commits into base: main
Conversation

@danielkocher (Collaborator) commented Jan 9, 2023

Implements the stride option (-l, --stride), which performs an exponential search until the first bad state is encountered; we call this depth the reasoning horizon.

Outline

  • Wrapped the existing implementation (unroll, prune, optimize/query with SMT solver) in a loop that exponentially expands the search space using base 2, i.e., the search steps are 2^0, 2^1, 2^2, ...
  • Once a bad state is satisfiable, the loop terminates and the bad state is extracted from bad_states_initial in constant time using a helper function print_reasoning_horizon. Since the final search step subsumes all previous search steps, we do not need a binary search between the last and second-to-last step. However, it should be easy to extend the implementation with a binary search if required.
  • Currently, the stride option only works if a max. unrolling depth (-u option) is specified; this depth is used as an upper bound for the exponential search. This seems reasonable, but I can also change this to work without unrolling if desired. In addition, we discussed that we want to specify a time budget, which would then serve as the max. unrolling depth (assuming the time budget is a number of steps, not real time). I am happy to discuss (and change) this before we merge into main.
  • If the stride option is not specified, the loop terminates after the first iteration and uses the unrolling depth as before (i.e., it unrolls up to the given depth once).
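The loop described above can be sketched as follows. This is a minimal stand-alone model, not the actual unicorn code: `check_stride` is a stand-in for the unroll/prune/SMT-query step on one stride, and all names are illustrative.

```rust
/// Stand-in for unroll + prune + SMT query on one stride of `stride` steps
/// starting at absolute depth `base`. In unicorn this would call the solver;
/// here, `first_bad` simulates the depth of the first satisfiable bad state.
/// Returns the offset of the first bad state within the stride, if any.
fn check_stride(base: u64, stride: u64, first_bad: u64) -> Option<u64> {
    (0..stride).find(|off| base + off >= first_bad)
}

/// Exponential search up to `max_depth`: strides of size 2^0, 2^1, 2^2, ...
/// Returns the absolute depth of the first bad state (the reasoning horizon).
fn reasoning_horizon(max_depth: u64, first_bad: u64) -> Option<u64> {
    let mut base = 0u64; // total depth already covered by previous strides
    let mut k = 0u32;    // exponent of the next stride
    while base < max_depth {
        // -u acts as an upper bound: the last stride is clipped to max_depth.
        let stride = (1u64 << k).min(max_depth - base);
        if let Some(off) = check_stride(base, stride, first_bad) {
            // No binary search needed: the first SAT offset within the
            // final stride already pins down the exact absolute depth.
            return Some(base + off);
        }
        base += stride;
        k += 1;
    }
    None
}

fn main() {
    // Mirrors the factorize_35 example below: strides 1,2,...,64 cover
    // depths 0..=126 (sum 127); depth 189 is found at offset 62 within
    // the 2^7 = 128 stride, since 127 + 62 = 189.
    assert_eq!(reasoning_horizon(580, 189), Some(189));
}
```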

Tested with the gcc-compiled RISC-V binary of lion/factorize_35.c:

  1. Pruning enabled, striding disabled (behavior remains unchanged; at depth n=189 a division-by-zero is satisfiable):
./target/release/unicorn beator lion/factorize_35.m -u 190 -p -s boolector -v debug
  2. Pruning and striding enabled (rather fast; a division-by-zero is satisfiable at relative depth n=62 within the search step of size 2^7 = 128, which is correct: the previous steps cover 2^0 + 2^1 + ... + 2^6 = 127 steps, and 127 + 62 = 189; the final warning prints the derived depth: 189):
./target/release/unicorn beator lion/factorize_35.m -u 580 -p -s boolector -l -v debug
  3. Pruning disabled, striding enabled (rather slow since pruning is missing; again at depth n=62):
./target/release/unicorn beator lion/factorize_35.m -u 580 -s boolector -l -v debug

@danielkocher danielkocher changed the title Feat/exp search Exponential search / Reasoning horizon Jan 9, 2023
@mstarzinger (Collaborator) left a comment

Looking good. Just two comments/suggestions.

src/main.rs Outdated
let mut model_copy = model.clone();

for m in 0..n {
unroll_model(&mut model_copy, m);
Collaborator:

The second parameter to unroll_model (i.e. the m) is only used to construct the names of bad-state nodes and input nodes in the current unrolling step. Would it make the algorithm simpler if we were to pass the correct absolute depth at this point already? I think this could make print_reasoning_horizon simpler.

Collaborator (Author):

Good pointer, thanks.
I adapted the implementation accordingly and print_reasoning_horizon did simplify significantly (as expected).
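For illustration, the naming change suggested above can be sketched like this. All names (`name_relative`, `name_absolute`, the `bad-` prefix) are hypothetical, not the actual unicorn identifiers.

```rust
/// Relative naming: every stride restarts at 0, so recovering the absolute
/// depth later requires summing all previous stride lengths.
fn name_relative(step_in_stride: u64) -> String {
    format!("bad-{}", step_in_stride)
}

/// Absolute naming: pass the depth already covered by earlier strides, so
/// the node name itself encodes the absolute depth and
/// print_reasoning_horizon can read it off directly.
fn name_absolute(base_depth: u64, step_in_stride: u64) -> String {
    format!("bad-{}", base_depth + step_in_stride)
}

fn main() {
    // In the stride of size 2^7 starting at absolute depth 127, step 62:
    assert_eq!(name_relative(62), "bad-62"); // base 127 must be added later
    assert_eq!(name_absolute(127, 62), "bad-189"); // horizon directly visible
}
```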


pub type Nid = u64;
pub type NodeRef = Rc<RefCell<Node>>;

-#[derive(Debug)]
+#[derive(Clone,Debug)]
Collaborator:
I understand that we need to clone Model for now. But from what I can tell we are not cloning Node objects. I might very well be missing something. Could you double-check whether deriving Clone on Node is still necessary here?

Collaborator (Author):

True, this is a leftover that I will fix - thanks!
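The reviewer's point can be illustrated with a minimal, self-contained example: cloning a container of `Rc<RefCell<T>>` only bumps reference counts, so `T` itself never needs to derive `Clone`. The `Node`/`Model` types below are simplified stand-ins, not the actual unicorn definitions.

```rust
use std::cell::RefCell;
use std::rc::Rc;

#[derive(Debug)] // note: no Clone needed on Node
struct Node {
    nid: u64,
}

type NodeRef = Rc<RefCell<Node>>;

#[derive(Clone, Debug)]
struct Model {
    // Rc<T> is Clone for any T, so Vec<NodeRef> (and thus Model)
    // can derive Clone without Node being Clone.
    nodes: Vec<NodeRef>,
}

fn main() {
    let model = Model {
        nodes: vec![Rc::new(RefCell::new(Node { nid: 1 }))],
    };
    let copy = model.clone(); // compiles even though Node is not Clone
    // Both models share the same node; two strong references now exist.
    assert_eq!(Rc::strong_count(&model.nodes[0]), 2);
    assert_eq!(copy.nodes[0].borrow().nid, 1);
}
```

Note the trade-off: such a clone is shallow, so mutations through one model's nodes are visible through the other.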

3 participants