Abstract: Despite considerable progress in model checking techniques,
large concurrent systems have more states than can be effectively
model checked. Other verification techniques, such as theorem
proving, require significant human expertise. The talk will present
research in three techniques we have developed at Illinois to reason
about concurrent systems: concolic testing, predictive monitoring, and
learning-based verification. Concolic testing of concurrent systems
improves testing efficiency by combining concrete execution with
symbolic execution, and by using partial order reduction to guide the
exploration of thread interleavings. Random concrete values are
substituted to simplify constraints that the underlying solver cannot
handle, thus maintaining soundness.
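As a rough illustration of the random-value fallback, consider the following toy sketch. All names here are hypothetical, and the "solver" is a stand-in; a real concolic tester instruments execution and calls an SMT solver:

```python
import random

def program(x):
    """Hypothetical program under test."""
    if x * x == 2500:      # nonlinear branch condition
        if x > 0:
            return "bug"
    return "ok"

def toy_solve(constraint):
    """Stand-in for a constraint solver: it only handles linear
    constraints of the form x == c and gives up on anything else."""
    op, c = constraint
    return c if op == "linear_eq" else None

def concolic_search(trials=5000):
    """On each run, try to solve the next branch constraint symbolically
    to steer execution down an unexplored path; when the solver gives up
    on the nonlinear constraint x*x == 2500, substitute a concrete
    random value instead, as the abstract describes."""
    x = 0
    for _ in range(trials):
        if program(x) == "bug":
            return x
        target = toy_solve(("nonlinear_eq", 2500))  # solver returns None
        x = target if target is not None else random.randint(-60, 60)
    return None
```

The random fallback sacrifices some completeness (the search may miss the branch), but never reports a path that was not actually executed, which is the sense in which soundness is preserved.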
Predictive monitoring improves the efficiency of testing by using
observed traces to predict other traces that may occur. Computational
learning-based verification uses learning to reach fixed points rather
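A minimal sketch of trace prediction, using hypothetical event names and brute-force enumeration (real predictive monitors use vector clocks rather than enumerating permutations):

```python
from itertools import permutations

def predicted_traces(observed, happens_before):
    """Enumerate interleavings of the observed events that respect the
    happens-before partial order; each is a trace that could also occur."""
    for trace in permutations(observed):
        if all(trace.index(a) < trace.index(b) for a, b in happens_before):
            yield trace

# Thread 1 writes under a lock; thread 2 reads without synchronization,
# so only thread 1's internal order constrains the prediction.
observed = ("acquire", "write", "release", "read")
hb = {("acquire", "write"), ("write", "release")}

def violates(trace):
    # safety property: the read must never happen before the write
    return trace.index("read") < trace.index("write")

assert not violates(observed)   # the observed run itself looks fine
racy = [t for t in predicted_traces(observed, hb) if violates(t)]
# prediction exposes interleavings where the read precedes the write
```

Even though the single observed execution satisfies the property, two of its predicted reorderings violate it, which is exactly the kind of error predictive monitoring is meant to surface.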
than explore the entire state space. I will illustrate these
techniques by means of examples from software, and discuss their
benefits and limitations.
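To give a flavor of the fixed-point idea, here is a toy sketch over interval invariants; the hypothesis class, the counterexample loop, and all names are illustrative assumptions, not the actual algorithm from the talk:

```python
def learn_interval(samples):
    """Learner: propose the tightest interval covering the samples."""
    return min(samples), max(samples)

def verify(init, step, bad_states, rounds=50):
    """Learn an inductive invariant instead of enumerating all states:
    propose an interval from the samples seen so far, look for a
    counterexample to inductiveness, and widen until a fixed point is
    reached; then check that the invariant excludes the bad states."""
    samples = [init]
    for _ in range(rounds):
        lo, hi = learn_interval(samples)
        # counterexample: a successor state that escapes the interval
        cex = next((step(x) for x in range(lo, hi + 1)
                    if not lo <= step(x) <= hi), None)
        if cex is None:  # inductive fixed point reached
            safe = not any(lo <= b <= hi for b in bad_states)
            return (lo, hi) if safe else None
        samples.append(cex)
    return None

# A counter that wraps modulo 10 never reaches the bad state 15;
# the learned inductive invariant is the interval [0, 9].
```

The point of the sketch is that the learner converges on the invariant [0, 9] after a handful of counterexamples, without ever enumerating the state space as a whole.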