Location
NCS 120
Event Description

Title: Verified Autonomy

Abstract: Designers of self-driving cars and other autonomous cyber-physical systems must provide compelling evidence that their use of learning algorithms is safe and robust. Formal verification provides a high degree of confidence in safe system operation, but only if reality matches the verified model. Although a good model will be accurate most of the time, even the best models are incomplete. This is especially true for autonomous vehicles and other cyber-physical systems, because high-fidelity physical models of these systems are expensive to develop and often intractable to verify. Conversely, reinforcement learning-based controllers are lauded for their flexibility in unmodeled environments but do not provide guarantees of safe operation. This talk presents an approach to provably safe learning that provides the best of both worlds: the exploration and optimization capabilities of learning along with the safety guarantees of formal verification.
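
The core idea can be sketched in a few lines. The following Python fragment is a minimal illustration, not the speaker's implementation: a hypothetical is_safe monitor (in the approach described, such a check would be justified against a formally verified model) restricts an epsilon-greedy learner to actions the model admits, falling back to a hypothetical safe_fallback action when nothing admissible remains. The toy invariant inside is_safe is purely illustrative.

import random

def is_safe(state, action):
    # Stand-in for a monitor derived from a verified model. Toy invariant:
    # never command an input that could carry the system past the bound 10.0.
    position, velocity = state
    return position + velocity + action < 10.0

def safe_fallback(state):
    # Stand-in for a provably safe fallback action (e.g., maximal braking).
    return -1.0

def choose_action(state, candidate_actions, q_values, epsilon=0.1):
    # Epsilon-greedy selection restricted to actions the monitor admits.
    admissible = [a for a in candidate_actions if is_safe(state, a)]
    if not admissible:
        return safe_fallback(state)  # no admissible action: take the safe fallback
    if random.random() < epsilon:
        return random.choice(admissible)  # explore, but only among safe actions
    return max(admissible, key=lambda a: q_values.get((state, a), 0.0))  # exploit

Under this sketch, learning proceeds as usual (updating q_values from observed rewards), while every executed action is screened by the monitor, so exploration never leaves the region the verified model certifies as safe.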

Bio: Nathan Fulton is a Ph.D. student in Computer Science at Carnegie Mellon University. He works in André Platzer's Logical Systems Lab, where he develops symbolic techniques for analyzing hybrid dynamical systems, contributes to the development of the KeYmaera X theorem prover, and designs models of safety-critical cyber-physical systems. His current research focuses on the safety of control systems that leverage reinforcement learning.

Event Title
Verified Autonomy - Nathan Fulton, Carnegie Mellon University