ProVE-Lab

ProVE ≡ Prove, Validate, Execute

Formal verification plays a crucial role in making cyber-physical systems safe: it provides strong guarantees about a system under a model of the system's behavior and its interaction with the environment. Runtime safety in cyber-physical systems is challenging because it requires predicting physical dynamics in order to act ahead of time, before physics makes violating the desired safety properties inevitable. Building and verifying formal predictive models of intertwined computational and physical behavior, however, becomes increasingly difficult as systems operate (semi-)autonomously and rely on poorly understood learning-enabled components in unconstrained operating environments. This poses a double challenge: validating formal models against observations while simultaneously providing safety guarantees about unverified learned components and the true runtime behavior.
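
As a minimal textbook-style illustration of acting ahead of time (the scenario is illustrative, not specific to this lab): a vehicle at distance d from an obstacle, moving at speed v with maximum braking power b, can still avoid a collision only while d > v²/(2b). A runtime safety mechanism must therefore intervene while this stopping-distance condition still holds, rather than react once a collision has become physically unavoidable.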

The ProVE-Lab researches logical foundations and tools for theorem proving, verified runtime monitoring, model validation, code synthesis, and safe learning. Together, these techniques provide true runtime safety guarantees by transferring proofs about models of cyber-physical systems to their actual runtime behavior.
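
As a rough sketch of how such a proof transfer can look at runtime (all names, dynamics, and constants below are illustrative assumptions, not the lab's actual tooling): a compliance condition derived from a verified model is checked on every observed state transition, and control falls back to a verified safe action whenever the check fails.

    # Illustrative sandbox around an unverified controller. In practice the
    # compliance check would be a formula synthesized from a verified hybrid
    # systems model; here it is hand-written for a 1D speed example.
    EPS = 0.1    # assumed maximum controller cycle time (s)
    A_MAX = 3.0  # assumed bound on acceleration magnitude (m/s^2)

    def transition_compliant(v_prev: float, v_curr: float) -> bool:
        """The observed speed change is explainable by the model only if it
        stays within what bounded acceleration allows in one cycle."""
        return abs(v_curr - v_prev) <= A_MAX * EPS

    def monitored_step(controller, v_prev: float, v_curr: float) -> float:
        """Run the unverified controller while observations match the model;
        otherwise hand control to the verified fallback (hard braking)."""
        if transition_compliant(v_prev, v_curr):
            return controller(v_curr)  # model proofs apply to this run
        return -A_MAX                  # verified fallback action

    # Example: a (possibly learned) controller that always accelerates.
    accel = monitored_step(lambda v: A_MAX, v_prev=5.0, v_curr=5.2)

The point of verified monitoring is that the implication "if the check passes, the model's safety proof applies to the actual run" is itself machine-checked rather than trusted.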

Selected Publications

  1. HyTwin: Hybrid Program Semantics for Digital Twin-based Security Interventions in Industrial Control Systems (NFM 2025)
     Jainta Paul, Stefan Mitsch, Luis Garcia
     [ pdf ]
  2. Towards Verification-Driven Control Learning (ICAA 2024)
     Stefan Mitsch
     [ pdf | poster ]
  3. Provably Safe Neural Network Controllers via Differential Dynamic Logic (NeurIPS 2024)
     Samuel Teuber, Stefan Mitsch, André Platzer
     [ pdf ]
  4. CESAR: Control Envelope Synthesis via Angelic Refinements (TACAS 2024)
     Aditi Kabra, Jonathan Laurent, Stefan Mitsch, André Platzer
     [ pdf ]
  5. Reward Shaping from Hybrid Systems Models in Reinforcement Learning (NFM 2023)
     Marian Qian, Stefan Mitsch
     [ pdf ]

External Funding

Proof-producing Conversations (NVIDIA, 2025-2026)

The goal of this project is to enable LLMs to lead a conversation with theorem provers and SMT solvers to construct correctness proofs of cyber-physical and intelligent autonomous systems. The role of the LLM is to provide expert insight that bridges decidability gaps in the logic and makes reasoning tractable in decidable fragments; the role of the theorem provers and SMT solvers is to check the LLM-generated output, ask for continuation scripts, and give feedback in the form of counterexamples.
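
A hedged sketch of this conversation loop, using Z3's Python API on a toy transition system (llm_propose is a hypothetical stand-in for the LLM; in the project, its output would come from the model and be treated as untrusted until the solver certifies it):

    from z3 import Ints, And, Implies, Not, Solver, sat

    x, x1 = Ints("x x1")
    init = x == 0          # toy system: start at 0,
    step = x1 == x + 2     # increase by 2 each step,
    unsafe = x < 0         # and never become negative

    def llm_propose(feedback):
        # Hypothetical stand-in for the LLM; a real system would send the
        # counterexample back to the model and parse a refined candidate.
        return x >= 0, x1 >= 0

    def check_invariant(inv, inv_next):
        """Return None if inv is inductive and excludes unsafe states,
        otherwise a counterexample model to feed back to the LLM."""
        s = Solver()
        s.add(Not(And(Implies(init, inv),
                      Implies(And(inv, step), inv_next),
                      Implies(inv, Not(unsafe)))))
        return s.model() if s.check() == sat else None

    feedback = None
    for _ in range(5):  # converse: propose, check, refine on counterexamples
        inv, inv_next = llm_propose(feedback)
        feedback = check_invariant(inv, inv_next)
        if feedback is None:
            print("candidate invariant certified by the solver")
            break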

SafeRails (NSF National I-Corps, 2024-2026)

SafeRails is a verified protection system for autonomous systems that safeguards operations by keeping motion within a safe envelope. The protection system is formally verified: it is mathematically proven safe, with computer-checked proofs, ensuring a high level of safety and reliability. SafeRails thus provides a safe path to deploying new technologies that promise many advantages but are difficult to test, such as machine-learning-based control. The project performs customer discovery and industry interviews towards a potential transition into practice.
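
A minimal sketch of the safe-envelope idea under assumed 1D braking dynamics (the constants and functions below are illustrative; SafeRails' actual envelopes come with machine-checked proofs):

    B = 2.0  # assumed maximum braking deceleration (m/s^2)

    def inside_envelope(d: float, v: float) -> bool:
        """At distance d from an obstacle with speed v, motion is inside the
        envelope iff braking can still stop before the obstacle."""
        return d > v * v / (2 * B)

    def protect(proposed_accel: float, d: float, v: float) -> float:
        """Execute a proposed action (e.g., from a learned controller) only
        inside the envelope; otherwise override with braking. A fuller check
        would also require the envelope to hold after one control cycle."""
        return proposed_accel if inside_envelope(d, v) else -B

    a = protect(1.5, d=20.0, v=5.0)  # stopping distance 6.25 < 20: passes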

A Formal Verification and Implementation Stack for PLCs (NSF FMitF, 2023-2026)

This project studies the integration of formal modeling and theorem proving for cyber-physical systems with testing. In partnership with the University of Utah and Carnegie Mellon University, the project aims to develop PLC-style syntax for formal models, extend formal modeling to multi-PLC and multi-task configurations, integrate generated monitors with PLC and SCADA monitoring infrastructure, and extend the HyPLC translation approach to more general program shapes and configuration artifacts.
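
As a rough illustration of what monitor integration can look like in a PLC-style scan cycle (Python is used here only for readability; the actual targets are IEC 61131-3 languages, and the monitor condition below is hand-written rather than generated):

    def monitor(tank_level: float, valve_open: bool) -> bool:
        """Illustrative safety condition: the inflow valve must be closed
        once the tank level approaches capacity."""
        return tank_level < 95.0 or not valve_open

    def scan_cycle(read_inputs, control_logic, write_outputs):
        """One PLC scan: read inputs, run the control logic, evaluate the
        monitor, and force a safe output if the check fails."""
        level, valve = read_inputs()
        valve = control_logic(level, valve)
        if not monitor(level, valve):
            valve = False  # safe fallback: close the valve
        write_outputs(valve)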

Lab Members (alphabetical)

Thuan Cao: Python code synthesis for the Gymnasium reinforcement learning environment

Caileigh Edstrom: Control and monitoring of industrial training robots

Antonio Gaspari: Risk-aware planning and control via reinforcement learning

Tessa Hall: Falsification of nondeterministic hybrid programs
[ pdf | poster ]

Aditi Kabra: Logic-guided proof and monitor synthesis

Nishma Marur Nagendra: LLM-assisted QE benchmark generation

Azizbek Shokosimov: Python code synthesis for industrial training robots

Lab Alumni

Sam Sami: Monitor parameter fitting from simulations, drone flight path stability monitoring
[ pdf | poster ]

Hari Hara Sudhan Kannan: VS Code extension for formal CPS models and proofs
[ git ]

Qubaisuddin Mohammed: Compilation of formal models to ROS

Ismail Patel: SMT proof replay, theorem prover benchmarking
[ poster ]

Marian Qian: Actor-critic reinforcement learning with stability proofs

Keegen Vogel: Stability proofs of CRN++ operations

Jakub Zywicky: SMT-LIB parser frontend for a verified QE procedure