Merge pull request #1 from fluentverification/isubspace
Iterative Subspace Reduction
ifndefJOSH authored Jan 25, 2024
2 parents b7a2a16 + 2683179 commit 619e8db
Showing 11 changed files with 1,283 additions and 184 deletions.
4 changes: 4 additions & 0 deletions .gitignore
Original file line number Diff line number Diff line change
@@ -1,3 +1,7 @@
__pycache__
ragtimer/
ideas.md
tmp
.python-version
.old-python-version
TODO.txt
35 changes: 14 additions & 21 deletions README.md
@@ -10,31 +10,24 @@ Wayfarer treats a CRN as a VASS and computes a vectoral distance to the "boundar

#### Single-order priority

These methods explore a partial state space using a priority queue, expanding states according to a priority metric. When a satisfying state is found, a "traceback" method finds a certain number of traces back to the initial state. The main difference between the methods is the priority metric used. The following can be incorporated into the priority.
Single-order priority assumes all solution states reside in a closed subspace, $S_s$, which in the RAGTIMER format is always true. It then computes the shortest distance to that closed subspace. In the RAGTIMER format, since the solution space is orthogonal to one or more species dimensions, there is no need to construct a projection matrix; it suffices to measure the distance on each non-"don't care" species. However, this method extends to subspaces for which a projection matrix $P_s$ can be constructed, prioritizing states with low $\epsilon_s = |P_s s_{adj} - s_{adj}|_2$.

- **Vectoral Distance to Boundary**: takes the shortest vector to the boundary of satisfying states and uses its L2 norm as the priority, which we seek to minimize.
- **Angles**: There are a number of angle elements that can be included:
+ **Angle between flow and transition vector:** We create a flow vector from the available transitions, weighted by their rates. We wish to minimize the angle between the flow vector and the transition vector; in practice, we can simply maximize the transition rate instead.
+ **Angle between transition vector and vector to boundary:** Are we going in the right direction?
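The projection-based distance $\epsilon_s$ above can be sketched in a few lines of NumPy. This is an illustrative sketch, not code from the Wayfarer source; the helper name `subspace_distance` and the example matrix are hypothetical.

```python
import numpy as np

# Illustrative sketch (not Wayfarer code): distance from an adjusted state
# to the subspace spanned by the columns of M_s, following
#   P_s = M_s (M_s^T M_s)^+ M_s^T,   epsilon_s = |P_s s_adj - s_adj|_2
def subspace_distance(M_s: np.ndarray, s_adj: np.ndarray) -> float:
    # Orthogonal projector onto col(M_s); pinv handles rank deficiency
    P_s = M_s @ np.linalg.pinv(M_s.T @ M_s) @ M_s.T
    return float(np.linalg.norm(P_s @ s_adj - s_adj, 2))

# Subspace spanned by the first two species axes in a 3-species system
M = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [0.0, 0.0]])
print(subspace_distance(M, np.array([3.0, 4.0, 0.0])))  # in the span -> 0.0
print(subspace_distance(M, np.array([0.0, 0.0, 2.0])))  # two units off -> 2.0
```

A state with $\epsilon_s = 0$ lies in the solution subspace; the search favors states with small $\epsilon_s$.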
#### Iterative Subspace Reduction

#### Multiple-order priority
Iterative subspace reduction takes the reactions in the dependency graph and constructs a set of nested subspaces $S_0 \supseteq S_1 \supseteq \cdots \supseteq S_n$ where $S_n \cap S_s \neq \emptyset$. Note that $S_i = \{\, M_i x + s_0 + f \mid x \in \mathbb{R}^m \,\}$, with $f$ ensuring that $S_n \cap S_s \neq \emptyset$. The search prioritizes states in smaller (higher-index) subspaces to approach $S_s$. Priorities are calculated from the distances $\epsilon_i = |P_i s_{adj} - s_{adj}|_2$, where $s_{adj} = s - (s_0 + f)$ and $P_i = M_i (M_i^T M_i)^+ M_i^T$. Since $\epsilon_i = 0$ implies $\epsilon_j = 0$ for all $j < i$, we start calculating at $n$ and short-circuit as soon as a zero distance is encountered.

*Not yet implemented*
From there, it is a priority-first search, ordering states lexicographically by:

This method makes use of Landon's dependency graph, but takes the reactions from the root node and assigns each an "order", which corresponds to its level in the graph, counted from the bottom. The root node is order 1, and every reaction beyond that is order 2, 3, etc. This gives us an ordering of reaction vectors:
1. Higher indices $i$ for which $\epsilon_i = 0$
2. Lower values of $\epsilon_{i + 1}$

R1, R2, R3, ..., RN
It creates a partial state graph and seeks $K$ satisfying states.
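The short-circuited distance computation and the two-level priority described above can be sketched as follows. This is a hedged illustration of the formulas given here, not Wayfarer's `State` implementation; `epsilons` and `priority_key` are hypothetical helpers.

```python
import numpy as np

# Illustrative sketch (not Wayfarer's State class). Ms[i] holds the basis
# vectors of S_i as columns, with S_0 ⊇ S_1 ⊇ ... ⊇ S_n.
def epsilons(Ms, s_adj, tol=1e-9):
    # eps_i = |P_i s_adj - s_adj|_2, computed from i = n downward;
    # eps_i = 0 forces eps_j = 0 for every j < i, so we can stop early.
    n = len(Ms) - 1
    eps = [0.0] * (n + 1)
    for i in range(n, -1, -1):
        M = Ms[i]
        P = M @ np.linalg.pinv(M.T @ M) @ M.T  # projector onto col(M_i)
        eps[i] = float(np.linalg.norm(P @ s_adj - s_adj, 2))
        if eps[i] <= tol:  # all smaller indices are zero as well
            break
    return eps

def priority_key(eps, tol=1e-9):
    # Min-queue key: (1) prefer a higher index i with eps_i == 0,
    # (2) break ties by a lower eps_{i+1}.
    zero = [i for i, e in enumerate(eps) if e <= tol]
    i = max(zero) if zero else -1
    next_eps = eps[i + 1] if i + 1 < len(eps) else 0.0
    return (-i, next_eps)
```

Tuple keys compare lexicographically, so a min-ordered priority queue keyed by `priority_key` pops states deepest in the nesting first, breaking ties by closeness to the next subspace.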

This ordering is crucial: we take the reaction vectors in this order and create subspaces with these vectors as basis vectors. If a particular state, shifted by the init-state vector, is not in this space, it cannot reach the satisfying state and is not explored. (If reactions are already restricted to those in the graph, this check is unnecessary.) We then create a subspace:
## Usage

span(R2, R3, ..., RN)

and prioritize low distances to that subspace. Once in that subspace, we create another subspace

span(R3, R4, ..., RN)

and prioritize low distances to that subspace. We continue until we are within a subspace of rank 1, and from there simply prioritize the distance to the satisfying state.
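The chain of successively smaller spans described above can be built directly from the ordered reaction vectors. The helper below is hypothetical, for illustration only.

```python
import numpy as np

# Illustrative sketch: given reaction vectors R1..RN in dependency order,
# build basis matrices for span(R2..RN), span(R3..RN), ..., span(RN).
# Each matrix's columns are the remaining reaction vectors.
def nested_bases(reactions):
    return [np.column_stack(reactions[i:]) for i in range(1, len(reactions))]

# Three unit reaction vectors in a 3-species system
r1, r2, r3 = np.eye(3)
bases = nested_bases([r1, r2, r3])
print([b.shape for b in bases])  # [(3, 2), (3, 1)]
```

Dropping one basis vector per level is what shrinks each span, so membership in later spans acts as an increasingly strong reachability filter.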

### Upper bound probability

```bash
# For single-order priority
./main.py -r $RAGTIMER_FILE -V -n $NUMBER_DESIRED_SATISFYING_STATES
# For iterative subspace reduction
./main.py -r $RAGTIMER_FILE -S -n $NUMBER_DESIRED_SATISFYING_STATES
```
101 changes: 91 additions & 10 deletions src/counterexample.py
@@ -1,10 +1,20 @@
from distance import *
from crn import *
from subspace import *

import sys

# from numpy import

# Nagini currently does not support numpy (or floats, or later versions of python)
# from nagini_contracts.contracts import *

import queue
import random

TRACEBACK_BOUND=10
sys.setrecursionlimit(sys.getrecursionlimit() * 50)

TRACEBACK_BOUND=1 # 10 # 10000 # 100000
DESIRED_NUMBER_COUNTEREXAMPLES=2

force_end_traceback=False
@@ -41,18 +51,21 @@ def traceback(c_state, tail=[], tail_probability = 1.0):
# If our tail probability is too small to handle or is zero we shouldn't continue exploring
if tail_probability == 0.0:
# We've probably found all of the counterexamples we can
print("Found tail probability of zero. Stopping traceback")
force_end_traceback=True
return
# We can afford a copy of the tail on every call, since traceback is only called from
# satisfying states and works BACKWARDS
new_tail = tail.copy()
new_tail.append(c_state)
backward_pointers[c_state].sort(reverse=True)
for normalized_rate, state in backward_pointers[c_state]:
if not state in backward_pointers:
new_tail.append(state)
counterexamples.append((tail_probability * normalized_rate, new_tail))
num_counterexamples = len(counterexamples)
# print(f"Found counterexample! Now we have {len(counterexamples)}")
print(f"Found counterexample! Now we have {len(counterexamples)}")
force_end_traceback = True
else:
traceback(state, new_tail, tail_probability * normalized_rate)
# print(f"ERROR: found no counterexample in traceback!")
@@ -83,7 +96,7 @@ def find_counterexamples(crn, number=1, print_when_done=False, include_flow_angl
curr_priority, curr_state = pq.get()
# print(f"Got state {curr_state} at priority {curr_priority}")
if satisfies(curr_state, boundary):
print(f"Found satisfying state {curr_state}")
print(f"Found satisfying state {curr_state} (explored {num_explored} states)")
force_end_traceback = False
traceback(curr_state)
# else:
@@ -94,6 +107,8 @@
for rate, _ in transitions:
total_rate += rate
for rate, next_state in transitions:
if rate == 0.0:
continue
if not tuple(next_state) in backward_pointers:
# print(f"State {next_state} has priority {priority}")
next_state_tuple = tuple(next_state)
@@ -103,13 +118,63 @@
priority = vass_priority(next_state, boundary, crn, include_flow_angle=include_flow_angle) # , curr_reach)
# reaches[next_state_tuple] = curr_reach
pq.put((priority, next_state_tuple))
backward_pointers[next_state_tuple] = [(rate / total_rate, curr_state)]
backward_pointers[next_state_tuple] = [((rate) / (total_rate), curr_state)]
else:
backward_pointers[tuple(next_state)].append((rate / total_rate, curr_state))
backward_pointers[tuple(next_state)].append(((rate) / (total_rate), curr_state))
if print_when_done:
print(f"Explored {num_explored} states")
print_counterexamples()

def find_counterexamples_subsp(crn, dep, number=1, print_when_done=False, write_when_done=False):
reset()
State.initialize_static_vars(crn, dep)
global DESIRED_NUMBER_COUNTEREXAMPLES
global backward_pointers
global num_counterexamples
global force_end_traceback
DESIRED_NUMBER_COUNTEREXAMPLES = number
# Min queue
boundary = crn.boundary
num_explored = 0
pq = queue.PriorityQueue()
curr_state = None
init_state = crn.init_state
reaches[tuple(init_state)] = 1.0
pq.put((State(init_state)))
while (not pq.empty()) and num_counterexamples < number: # and not force_end_traceback:
# Invariant(not pq.empty() or MustTerminate(num_counterexamples < number))
# print(pq.qsize())
num_explored += 1
if num_explored % 20000 == 0:
print(f"Explored {num_explored} states")
# print("=============================================")
curr_state_data = pq.get()
curr_state = curr_state_data.vec
# print(curr_state, curr_state_data.order)
# print(f"\tEpsilon: {curr_state_data.epsilon}") # [len(curr_state_data.epsilon) - 1]}")
if satisfies(curr_state, boundary):
print(f"Found satisfying state {tuple(curr_state)}")
force_end_traceback = False
traceback(tuple(curr_state))
# else:
# print(f"{curr_state} does NOT satisfy")
successors, total_rate = curr_state_data.successors()
for s, rate in successors:
next_state = s.vec
if not tuple(next_state) in backward_pointers:
# print(f"State {next_state} has priority {priority}")
next_state_tuple = tuple(next_state)
# Only explore new states
pq.put(s)
backward_pointers[next_state_tuple] = [((rate) / (total_rate), tuple(curr_state))]
else:
backward_pointers[tuple(next_state)].append(((rate) / (total_rate), tuple(curr_state)))
if print_when_done:
print(f"Explored {num_explored} states")
print_counterexamples()
if write_when_done:
print_counterexamples(True, True, True)

def find_counterexamples_randomly(crn, number=1, print_when_done=False, trace_length=100):
reset()
global DESIRED_NUMBER_COUNTEREXAMPLES
@@ -150,13 +215,22 @@ def find_counterexamples_randomly(crn, number=1, print_when_done=False, trace_le
print_counterexamples()


def print_counterexamples(show_entire_trace=False):


def print_counterexamples(show_entire_trace=False, write_when_done=False, single_line=False):
global counterexamples
print(f"Finished finding {len(counterexamples)} counterexamples")
out_file = sys.stdout
if write_when_done:
out_file = open("traces.wayfarer", "w")
print("Writing to traces.wayfarer")
end="\n"
if single_line:
end=" "
print(f"Finished finding {len(counterexamples)} counterexamples", file=out_file)
lower_bound = 0.0
for est_prob, ce in counterexamples:
lower_bound += est_prob
print(f"Counterexample size {len(ce)} (estimated probability {est_prob})")
print(f"Counterexample size {len(ce)} (estimated probability {est_prob})", file=out_file, end=end)
# print(ce)
if show_entire_trace:
for i in range(len(ce)):
@@ -166,5 +240,12 @@ def print_counterexamples(show_entire_trace=False):
additional_message = " (satisfying state)"
elif i == len(ce) - 1:
additional_message = " (initial state)"
print(f"\tState: {state}{additional_message}")
print(f"Total lower bound probability: {lower_bound}")
if single_line:
print(f"State: {state}{additional_message}", file=out_file, end=end)
else:
print(f"\tState: {state}{additional_message}", file=out_file, end=end)
if single_line:
print(file=out_file)
print(f"Total lower bound probability: {lower_bound}", file=out_file)
if write_when_done:
out_file.close()
25 changes: 22 additions & 3 deletions src/crn.py
@@ -13,19 +13,31 @@ def __init__(self, bound, bound_type):
self.bound = bound
self.bound_type = bound_type

def to_num(self):
return -1 if self.bound_type == BoundTypes.DONT_CARE else self.bound

def to_mask(self):
return 0.0 if self.bound_type == BoundTypes.DONT_CARE else 1.0

class Transition:
# vector #: np.vector
# enabled #: lambda
# rate_finder #: lambda
def __init__(self, vector, enabled, rate_finder):
def __init__(self, vector, enabled, rate_finder, name=None, rate_constant=None):
self.vector = np.array(vector)
self.vec_as_mat = np.matrix(vector).T
self.enabled_lambda = enabled
self.rate_finder = rate_finder
self.can_eliminate = False
self.name = name
self.rate_constant = rate_constant

def enabled(self, state):
return self.enabled_lambda(state)

def __str__(self):
return self.name

class Crn:
def __init__(self, transitions=None, boundary=None, init_state=None, all_trans_always_enabled=False, all_rates_const=False):
self.boundary = boundary
Expand All @@ -42,9 +54,16 @@ def __init__(self, transitions=None, boundary=None, init_state=None, all_trans_a
def add_transition(self, transition : Transition):
self.transitions.append(transition)

def find_transition_by_name(self, name):
for t in self.transitions:
if t.name == name:
return t
print(f"[WARNING] transition by name '{name}' not found!")
return None

def prune_transitions(self):
'''
Unused: prunes transitions
Unused: prunes transitions
'''
for i in range(len(self.boundary)):
bound = self.boundary[i]
@@ -53,7 +72,7 @@ def __prune_transitions_at_index(self, index):

def __prune_transitions_at_index(self, index):
'''
Unused
Unused
'''
for transition in self.transitions:
can_eliminate = True
