Formal verification plays a crucial role in making cyber-physical systems safe: it provides strong guarantees about a system with respect to a model of its behavior and its interaction with the environment. Runtime safety in cyber-physical systems is challenging because it requires predicting the physical dynamics in order to act ahead of time, before physics makes a violation of the desired safety properties inevitable. Building and verifying formal predictive models of intertwined computational and physical behavior, however, becomes increasingly difficult as systems operate (semi-)autonomously and often rely on poorly understood learning-enabled components in unconstrained operating environments. This poses a double challenge: validating formal models from observations while simultaneously providing safety guarantees about unverified learned components and the true runtime behavior.
The PROVE-Lab researches logical foundations and tools for theorem proving, verified runtime monitoring, model validation, code synthesis, and safe learning. Together, these techniques provide true runtime safety guarantees by transferring proofs about models of cyber-physical systems to their actual runtime behavior.

The goal of this project is to enable LLMs to lead a conversation with theorem provers and SMT solvers in order to construct correctness proofs of cyber-physical and intelligent autonomous systems. The role of the LLM is to provide expert insight that bridges decidability gaps in the logic and makes reasoning tractable in decidable fragments; the role of the theorem prover and SMT solvers is to check the LLM-generated output, ask for continuation scripts, and give feedback in the form of counterexamples.
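The counterexample-guided conversation described above can be sketched in a few lines of Python. This is a minimal illustration, not the project's implementation: a brute-force checker over a tiny finite system stands in for the SMT solver, and a fixed candidate list stands in for the LLM's proposals. All names (`check`, `llm_propose`, `cegis_loop`) and the saturating-counter example are hypothetical.

```python
def transitions(x):
    # Toy system: a saturating counter that increments until 10, then stays.
    return [x + 1 if x < 10 else x]

def check(inv, states):
    """Solver stand-in: search for a transition (s, s') where the candidate
    invariant holds at s but not at s'. Returns a counterexample or None."""
    for s in states:
        if inv(s):
            for t in transitions(s):
                if not inv(t):
                    return (s, t)  # counterexample to inductiveness
    return None

def llm_propose(history):
    """LLM stand-in: propose the next candidate invariant from a fixed list,
    skipping any candidate already refuted by a past counterexample."""
    candidates = [("x < 5", lambda x: x < 5),
                  ("x <= 10", lambda x: x <= 10)]
    for name, inv in candidates:
        if all(not (inv(s) and not inv(t)) for (s, t) in history):
            return name, inv
    return None

def cegis_loop(states, max_rounds=5):
    """The conversation: propose, check, feed counterexamples back, repeat."""
    history = []
    for _ in range(max_rounds):
        proposal = llm_propose(history)
        if proposal is None:
            return None
        name, inv = proposal
        cex = check(inv, states)
        if cex is None:
            return name          # checker accepts the candidate
        history.append(cex)      # feedback to the proposer
    return None

print(cegis_loop(range(11)))     # -> x <= 10
```

In the first round the checker refutes `x < 5` with the transition 4 → 5; that counterexample steers the next proposal, and `x <= 10` is then accepted as inductive. In the actual project, the checker's role is played by theorem provers and SMT solvers over continuous dynamics rather than a finite enumeration.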
SafeRails is a verified protection system for autonomous systems that safeguards operations by keeping motion within a safe envelope. The protection system is formally verified: it is mathematically proven safe, with a computer-checked proof, ensuring high levels of safety and reliability. SafeRails provides a safe path to deploying new technologies that promise many advantages but are difficult to test, such as machine-learning-based control. The project performs customer discovery and industry interviews toward a potential transition into practice.
This project centers on integrating formal modeling and theorem proving for cyber-physical systems with testing. In partnership with the University of Utah and Carnegie Mellon University, the project aims to develop a PLC-style syntax for formal models, extend formal modeling to multi-PLC and multi-task configurations, integrate generated monitors with PLC and SCADA monitoring infrastructure, and extend the HyPLC translation approach to more general program shapes and configuration artifacts.
Thuan Cao Python code synthesis for the Gymnasium reinforcement learning environment
Caileigh Edstrom Control and monitoring of industrial training robots
Antonio Gaspari Risk-aware planning and control reinforcement learning
Tessa Hall Falsification of nondeterministic hybrid programs
[ pdf | poster ]
Aditi Kabra Logic-guided proof and monitor synthesis
Nishma Marur Nagendra LLM-assisted QE benchmark generation
Azizbek Shokosimov Python code synthesis for industrial training robots
Sam Sami Monitor parameter fitting from simulations, drone flight path stability monitoring
[ pdf | poster ]
Hari Hara Sudhan Kannan VS Code extension for formal CPS models and proofs
[ git ]
Qubaisuddin Mohammed Compilation of formal models to ROS
Ismail Patel SMT proof replay, theorem prover benchmarking
[ poster ]
Marian Qian Actor-critic reinforcement learning with stability proofs
Keegen Vogel Stability proofs of CRN++ operations
Jakub Zywicky SMT-LIB parser frontend for a verified QE procedure