Workshop on “Data-Driven Verification and Control with Provable Guarantees”
A co-located workshop of the 22nd European Control Conference (ECC 2024), Stockholm, Sweden, June 25-28, 2024. Registration information is available here, with an early bird registration deadline for this workshop set for April 3rd, 2024.
Organizers
Scope
Formal verification and controller synthesis for dynamical systems have garnered remarkable attention over the past two decades, driven by their extensive applications in safety-critical systems. These systems, whose failure can potentially result in severe consequences such as loss of life, injuries, or substantial financial losses, span wide-ranging domains, including aerospace, automotive, transportation systems, robotics, chemical processes, critical infrastructure, energy, and healthcare.
Given a property of interest for a dynamical model, formal verification is concerned with soundly checking whether the desired specification is satisfied. If the underlying model is stochastic,
the goal translates into formally quantifying the probability of satisfying the property of interest. A synthesis problem instead concerns dynamical models with control inputs: the goal is to formally design a controller (also known in different areas as a policy, strategy, or scheduler), typically a state-feedback architecture, that enforces the property of interest. This procedure is also called "correct-by-construction control design",
since every step in the controller synthesis procedure comes with a formal guarantee. In a stochastic setting, the key objective is to synthesize a controller that optimizes (e.g., maximizes) the probability of satisfying the given specification.
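Schematically, and only to fix notation for the terminology above (the symbols below are ours and serve purely as an illustration): for a stochastic system with state trajectory \(\mathbf{x}\), initial state \(x_0\), and specification \(\varphi\), verification asks to compute or bound
\[
\mathbb{P}\big(\mathbf{x} \models \varphi\big),
\]
while correct-by-construction synthesis over controllers (policies) \(\pi\) seeks
\[
\pi^{\ast} \in \arg\max_{\pi}\; \mathbb{P}^{\pi}_{x_0}\big(\mathbf{x} \models \varphi\big).
\]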
As a result of their intrinsic soundness, formal methods approaches do not require the costly, exhaustive, and possibly unsuccessful post-facto validation that would otherwise be needed in many safety-critical, real-world applications.
While formal verification and controller synthesis have become indispensable across numerous applications, they often necessitate closed-form mathematical models of dynamical systems. However, these models might either be unavailable or too complex to construct in real-world scenarios. Consequently, one often cannot employ model-based techniques to analyze and design complex dynamical systems. Hence, the use of data-driven techniques becomes essential in enabling formal analysis of systems with unknown dynamics.
Over the past decade, several data-driven techniques have been proposed for the formal verification and controller synthesis of unknown dynamical systems. One may classify them into two types: indirect and direct approaches. More specifically, indirect data-driven techniques are those which leverage system identification to learn approximate models of unknown systems, followed by model-based analysis and controller synthesis approaches. Their advantage is that, once the identification phase is complete, one can rely on the powerful arsenal of techniques available in model-based control. On the downside, most identification techniques are limited to linear or specific classes of nonlinear systems; accordingly, acquiring an accurate model of a complex system via these indirect techniques can be complicated, time-consuming, and expensive. In comparison, direct data-driven techniques are those that bypass the system identification phase and directly employ system measurements for the verification and controller design of unknown systems.
Objectives of the workshop
In this workshop, we bring together a number of researchers active in the area of data-driven verification and control with provable guarantees. The contributions will describe recently developed efforts towards new frontiers on the subject. Along with fostering the exchange of ideas between researchers in the field, by gathering a number of key talks we aim to achieve the following goals for the audience attending the workshop:
Provide researchers new to these topics with an up-to-date view of the state of the art in data-driven verification and control, including indirect and direct data-driven techniques.
Discuss scalable directions and ways to mitigate the so-called sample complexity.
Suggest novel applications of these data-driven techniques.
Additionally, we hope that the active discussions among the participants will lead to fruitful collaborations.
Speakers
Speaker: Timm Faulwasser
Affiliation: Professor at TU Hamburg, Germany
Title: Data-driven stochastic control with formal guarantees - Recent results and open problems
Abstract: Data-driven control of linear and nonlinear systems is receiving considerable research attention. In this talk we discuss recent progress on data-driven approaches to stochastic systems. We present and compare different formulations of stochastic extensions of the fundamental lemma by Willems et al., and we explore distributionally robust formulations of stochastic optimal control which rely on the outer approximation of Wasserstein balls by Gelbrich sets. We discuss how these reformulations make it possible to give formal guarantees. The talk concludes with a brief outlook on open problems.
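As background for the fundamental lemma mentioned in the abstract, the following is a minimal numerical sketch of its deterministic version only (the talk concerns stochastic extensions, which this sketch does not cover); the function names are ours, and the persistent-excitation requirement on the input data is assumed rather than checked.

```python
import numpy as np

def block_hankel(traj, depth):
    """Block-Hankel matrix with `depth` block rows from a (T, channels) array."""
    T, ch = traj.shape
    cols = T - depth + 1
    H = np.zeros((depth * ch, cols))
    for i in range(depth):
        H[i * ch:(i + 1) * ch, :] = traj[i:i + cols, :].T
    return H

def is_trajectory(u_data, y_data, u_new, y_new, tol=1e-8):
    """Fundamental lemma (deterministic LTI case): a length-L input/output pair
    is a system trajectory iff it lies in the column span of the data Hankel
    matrices, provided u_data is persistently exciting of sufficiently high order."""
    L = u_new.shape[0]
    H = np.vstack([block_hankel(u_data, L), block_hankel(y_data, L)])
    b = np.concatenate([u_new.reshape(-1), y_new.reshape(-1)])
    g, *_ = np.linalg.lstsq(H, b, rcond=None)
    return np.linalg.norm(H @ g - b) < tol
```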
Biography: Timm Faulwasser studied Engineering Cybernetics at the University of Stuttgart, with majors in control systems and philosophy. From 2008 to 2012 he was a member of the International Max Planck Research School for Analysis, Design and Optimization in Chemical and Biochemical Process Engineering Magdeburg. In 2012 he obtained his PhD from the Department of Electrical Engineering and Information Engineering, Otto-von-Guericke University Magdeburg, Germany. From 2013 to 2016 he was with the Laboratoire d’Automatique, École Polytechnique Fédérale de Lausanne (EPFL), Switzerland, and from 2015 to 2019 he led the Optimization and Control Group at the Institute for Automation and Applied Informatics at the Karlsruhe Institute of Technology (KIT), where he successfully completed his habilitation in the Department of Informatics in 2020. From November 2019 to March 2024 he held the Professorship for Energy Efficiency at TU Dortmund University, Germany. Since April 2024 he has been Full Professor of Control Systems at TU Hamburg, where he leads the Institute of Control Systems. Dr. Faulwasser received the 2021-2023 Automatica Paper Prize.
Speaker: Riccardo Scattolini
Affiliation: Professor at Politecnico di Milano, Italy
Title: Towards neural networks for modeling and control: stability, safety verification, adaptivity
Abstract: Despite their potential in describing complex and nonlinear dynamics, Neural Networks (NNs) are still regarded with some suspicion in many fields of engineering, and particularly in control. This is mainly due to the lack of theoretical guarantees; for this reason, in recent years significant effort has been devoted to new analysis and synthesis methods for NNs guaranteeing stability and robustness properties. In particular, the focus has been on recurrent NNs, such as Long Short-Term Memory networks (LSTMs), Gated Recurrent Units (GRUs), and NARX networks, which have gained wide popularity in Computer Science.
With the previous objectives in mind, two main topics are discussed in this presentation. The first concerns the stability properties of NNs, which can guarantee robustness even in the case of modeling errors, while the second concerns computationally affordable safety verification procedures. Specifically, stability is analyzed by enforcing incremental input-to-state stability properties on the estimated models, while the safety verification issue is addressed by means of two procedures: the first consists of analytically estimating reachable sets, while the second relies on randomized, scenario-based procedures.
Finally, reference will also be made to techniques that improve NNs’ ability to adapt in the face of new or changed operating conditions, according to the so-called lifelong learning approach.
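As a rough illustration of the randomized, scenario-based flavour of the second verification procedure, here is a minimal Monte Carlo sketch under our own assumptions (a generic one-step model `model(x, u)` and user-supplied samplers); the procedures discussed in the talk additionally come with scenario-theoretic confidence guarantees on such empirical bounds, which this sketch omits.

```python
import numpy as np

def empirical_reach_box(model, x0_sampler, u_sampler, horizon, n_scenarios=1000):
    """Axis-aligned box covering the sampled terminal states after `horizon` steps."""
    finals = []
    for _ in range(n_scenarios):
        x = x0_sampler()                     # sample an initial condition
        for t in range(horizon):
            x = model(x, u_sampler(t))       # propagate one sampled scenario
        finals.append(x)
    finals = np.asarray(finals)
    return finals.min(axis=0), finals.max(axis=0)
```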
Biography: Riccardo Scattolini was born in Milano, Italy, in 1956. He received the Laurea degree in electronic engineering from the Politecnico di Milano, Italy, in 1979. From 1983 to 1994 he was Assistant Professor and Associate Professor with the Università di Pavia and the Politecnico di Milano. Since 1994 he has been Full Professor of Automatic Control at the Politecnico di Milano. From 1984 to 1985, he was a Visiting Researcher at the Department of Engineering Science, Oxford University, U.K. He also spent one year working in industry on the simulation and control of chemical plants. His current research interests include the modeling, identification, simulation, and control of industrial plants and distribution networks, with emphasis on distributed model predictive control of large-scale systems. Prof. Scattolini received the Heaviside Premium from the Institution of Electrical Engineers, U.K. He has been Associate Editor of the IFAC journal Automatica and of the International Journal of Adaptive Control. He is the author of more than 130 papers published in the main journals in the field of control.
Speaker: Lars Lindemann
Affiliation: Assistant Professor at University of Southern California, USA
Title: Safe Control of Learning-Enabled Autonomous Systems using Conformal Prediction
Abstract: Learning-enabled autonomous systems promise to enable many future technologies such as autonomous driving, intelligent transportation, and robotics. Accelerated by algorithmic and computational advances in machine learning and AI, there has been tremendous success in the design of learning-enabled autonomous systems. However, these exciting developments are accompanied by new fundamental challenges regarding the safety and reliability of these increasingly complex control systems, in which sophisticated algorithms interact with unknown dynamic environments. Imperfect learning and unknowns in the environment require control techniques to rigorously account for such uncertainties. I advocate for the use of conformal prediction (CP), a statistical tool for uncertainty quantification, due to its simplicity, generality, and efficiency as opposed to existing optimization techniques that are either conservative or subject to scalability issues. I first provide an accessible introduction to CP for the non-expert. My goal is then to show how we can use CP to design probabilistically safe motion planning algorithms in dynamic environments. Specifically, we will design a model predictive controller in conjunction with (i) learning-enabled trajectory predictors to obtain predictions of the environment, and (ii) conformal prediction regions quantifying uncertainty of these predictions. We will also discuss how to deal with distribution shifts that arise when the deployed learning-enabled system changes. While existing approaches quantify uncertainty heuristically, we quantify uncertainty in a distribution-free manner with probabilistic safety guarantees. Finally, we provide an extension that enables the consideration of high-level formal system specifications via mixed integer linear programming.
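For the non-expert, a minimal sketch of split conformal prediction, the uncertainty-quantification tool named in the abstract; the function name and the planner usage below are illustrative assumptions, not the specific algorithms of the talk.

```python
import numpy as np

def conformal_radius(calib_errors, delta):
    """Radius that a fresh prediction error exceeds with probability at most
    delta, assuming calibration and test data are exchangeable."""
    scores = np.sort(np.asarray(calib_errors))
    n = len(scores)
    k = int(np.ceil((n + 1) * (1 - delta)))  # conformal quantile rank
    return scores[k - 1] if k <= n else np.inf

# Illustrative usage: the planner keeps the robot at least `radius` away from
# each predicted obstacle position.
# radius = conformal_radius(prediction_errors, delta=0.05)
```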
Biography: Lars Lindemann is an Assistant Professor in the Thomas Lord Department of Computer Science at the University of Southern California where he leads the Safe Autonomy and Intelligent Distributed Systems (SAIDS) lab. There, he is also a member of the Ming Hsieh Department of Electrical and Computer Engineering (by courtesy), the Robotics and Autonomous Systems Center, and the Center for Autonomy and Artificial Intelligence. Between 2020 and 2022, he was a Postdoctoral Fellow in the Department of Electrical and Systems Engineering at the University of Pennsylvania. He received the Ph.D. degree in Electrical Engineering from KTH Royal Institute of Technology in 2020. Prior to that, he received the M.Sc. degree in Systems, Control and Robotics from KTH in 2016 and two B.Sc. degrees in Electrical and Information Engineering and in Engineering Management from the Christian-Albrechts University of Kiel in 2014. His research interests include systems and control theory, formal methods, and autonomous systems. Professor Lindemann received the Outstanding Student Paper Award at the 58th IEEE Conference on Decision and Control and the Student Best Paper Award (as a co-advisor) at the 60th IEEE Conference on Decision and Control. He was a finalist for the Best Paper Award at the 2022 Conference on Hybrid Systems: Computation and Control and for the Best Student Paper Award at the 2018 American Control Conference.
Speaker: Jun Liu
Affiliation: Associate Professor at University of Waterloo, Canada
Title: Physics-Informed and Data-Driven Learning of Lyapunov Functions with Formal Guarantees
Abstract: One of the longstanding challenges in systems and control is the effective construction of Lyapunov functions for formal system analysis. In this talk, we will discuss a framework for learning Lyapunov certificates to address stability and safety problems in systems and control using physics-informed neural networks and data-driven approaches. Specifically, we obtain neural Lyapunov functions by solving the characteristic PDEs associated with these problems using neural networks and data-driven methods, and we derive sufficient conditions for the efficient verification of the learned certificates with satisfiability modulo theories (SMT) solvers. We address approximation error bounds, convergence guarantees of neural approximations, and the formal correctness of neural Lyapunov certificates. The framework is illustrated with examples from nonlinear systems and control, ranging from low- to high-dimensional systems.
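For intuition, the following is a minimal, generic sketch of learning a neural Lyapunov candidate from sampled states, with assumed toy (damped-pendulum) dynamics; the framework presented in the talk additionally solves characteristic PDEs and formally verifies the learned certificate with SMT solvers, neither of which this sketch includes.

```python
import torch
import torch.nn as nn

def f(x):                                    # illustrative damped pendulum dynamics
    th, om = x[:, 0:1], x[:, 1:2]
    return torch.cat([om, -torch.sin(th) - 0.5 * om], dim=1)

V = nn.Sequential(nn.Linear(2, 32), nn.Tanh(), nn.Linear(32, 1))
opt = torch.optim.Adam(V.parameters(), lr=1e-3)

for _ in range(2000):
    x = 4.0 * torch.rand(256, 2) - 2.0       # sample the region of interest
    x.requires_grad_(True)
    v = V(x) - V(torch.zeros(1, 2))          # shift so that V(0) = 0
    grad_v = torch.autograd.grad(v.sum(), x, create_graph=True)[0]
    lie = (grad_v * f(x)).sum(dim=1, keepdim=True)   # Lie derivative of V along f
    margin = 1e-3 * x.norm(dim=1, keepdim=True) ** 2
    # penalize violations of V(x) >= margin and of dV/dt <= -margin
    loss = torch.relu(margin - v).mean() + torch.relu(lie + margin).mean()
    opt.zero_grad(); loss.backward(); opt.step()
```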
Biography: Jun Liu received the B.S. degree in Applied Mathematics from Shanghai Jiao Tong University in 2002, the M.S. degree in Mathematics from Peking University in 2005, and the Ph.D. degree in Applied Mathematics from the University of Waterloo in 2011. Following an NSERC Postdoctoral Fellowship in Control and Dynamical Systems at Caltech, he became a Lecturer in Control and Systems Engineering at the University of Sheffield in 2012. He joined the Faculty of Mathematics at the University of Waterloo in 2015, where he currently holds a Canada Research Chair in Hybrid Systems and Control and is an Associate Professor of Applied Mathematics. His main research interests are in the theory and applications of hybrid systems and control, including rigorous computational methods for control design with applications in cyber-physical systems and robotics.
Speaker: Rafael Wisniewski
Affiliation: Professor at Aalborg University, Denmark
Title: Stochastic Safety Guarantees and Learning
Abstract: The design of control policies for safety-critical systems must ensure safe operation. On the other hand, finding the mathematical model of a system can be too tedious, or knowledge about the operating environment may not even be available. Therefore, we will present methods for safety assessment based on observed data.
We take as a starting point a Markov decision process (MDP) with a policy that, for each state, provides the probability of choosing a particular control action. Nonetheless, we face a challenge: to compute the optimal path to the target states, we need to reason about the random time at which the process reaches the target set before hitting the forbidden set. Our solution to this challenge is to use the evolution equation, which relates the occupation measure to the hitting probability. The occupation measure corresponds to the expected number of visits to the states. When examining the hitting probability, we consider two sets: the set of target states and the set of forbidden states. Consequently, a safety function is derived from the evolution equation. The safety function provides the probability of hitting the forbidden set. We will relate the safety function to the solution of Bellman's equation; it can be computed by an iterative procedure analogous to the one used for computing the value function in dynamic programming.
Subsequently, we will present an online data-driven method to assess the safety of an MDP. Since the discussed method is online, we do not allow the process to hit the set of forbidden states, because, in practice, once the process hits the forbidden states the system might be severely damaged. Consequently, we define a set of states, a proxy set, by visiting which we find lower and upper bounds on the safety function. To this end, we present an online algorithm based on one-step temporal difference learning, TD(0).
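For concreteness, a textbook-style TD(0) sketch for estimating such a safety function on a finite MDP simulator; unlike the online method described above, this sketch lets episodes terminate in the forbidden set and does not use a proxy set, and the interface `step(s)` (sampling the next state under the fixed policy) is our assumption.

```python
import numpy as np

def td0_safety(step, n_states, target, forbidden, episodes=5000, alpha=0.05):
    """S[s] approximates the probability of hitting `forbidden` before `target`."""
    S = np.zeros(n_states)
    S[list(forbidden)] = 1.0                 # boundary condition on forbidden states
    free = [s for s in range(n_states) if s not in target and s not in forbidden]
    for _ in range(episodes):
        s = int(np.random.choice(free))
        while s not in target and s not in forbidden:
            s_next = step(s)                         # sample next state from the MDP
            S[s] += alpha * (S[s_next] - S[s])       # one-step temporal-difference update
            s = s_next
    return S
```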
Biography: Rafael Wisniewski is a professor and deputy head for research of the Department of Electronic Systems, Aalborg University. He received his Ph.D. in Electrical Engineering in 1997 and his Ph.D. in Mathematics in 2005. In 2007-2008, he was a control specialist at Danfoss A/S. His research interests are in systems theory, particularly in safety, security, and learning for decision-making.
Speaker: Navid Azizan
Affiliation: Assistant Professor at Massachusetts Institute of Technology (MIT), USA
Title: Data-Driven Control with Inherent Lyapunov Stability
Abstract: Recent advances in learning-based control leverage deep function approximators, such as neural networks, to model the evolution of controlled dynamical systems over time. However, the problem of learning a dynamics model and a stabilizing controller persists, since the synthesis of a stabilizing feedback law for known nonlinear systems is a difficult task, let alone for complex parametric representations that must be fit to data. To this end, we propose Control with Inherent Lyapunov Stability (CoILS), a method for jointly learning parametric representations of a nonlinear dynamics model and a stabilizing controller from data. To do this, our approach simultaneously learns a parametric Lyapunov function which intrinsically constrains the dynamics model to be stabilizable by the learned controller. In addition to the stabilizability of the learned dynamics guaranteed by our novel construction, we show that the learned controller stabilizes the true dynamics under certain assumptions on the fidelity of the learned dynamics. Finally, we demonstrate the efficacy of CoILS on a variety of simulated nonlinear dynamical systems.
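Schematically, such a joint learning problem can be phrased as follows (a generic discrete-time formulation in our own notation, not necessarily the exact construction used by CoILS): over a dataset \(\mathcal{D}\) of transitions \((x,u,x^{+})\), learn dynamics \(\hat f_\theta\), controller \(\pi_\phi\), and Lyapunov function \(V_\psi\) via
\[
\min_{\theta,\phi,\psi}\; \sum_{(x,u,x^{+})\in\mathcal{D}} \big\|\hat f_\theta(x,u) - x^{+}\big\|^2
\quad \text{s.t.} \quad
V_\psi\big(\hat f_\theta(x,\pi_\phi(x))\big) - V_\psi(x) \le -\alpha\, V_\psi(x),\qquad
V_\psi(0)=0,\;\; V_\psi(x)>0 \ \text{for } x\neq 0,
\]
so that, by construction, \(V_\psi\) certifies stability of the learned closed loop.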
Biography: Navid Azizan is the Esther & Harold E. Edgerton (1927) Assistant Professor at MIT, where he is a Principal Investigator in the Laboratory for Information & Decision Systems (LIDS) and holds dual appointments in the Department of Mechanical Engineering (Control, Instrumentation, & Robotics) and the Schwarzman College of Computing's Institute for Data, Systems, & Society (IDSS). His research interests broadly lie in machine learning, systems and control, mathematical optimization, and network science. His research lab focuses on various aspects of enabling large-scale intelligent systems, with an emphasis on principled learning and optimization algorithms for autonomous systems and societal networks. He obtained his PhD in Computing and Mathematical Sciences (CMS) from the California Institute of Technology (Caltech), co-advised by Babak Hassibi and Adam Wierman, in 2020, his MSc in electrical engineering from the University of Southern California in 2015, and his BSc in electrical engineering with a minor in physics from Sharif University of Technology in 2013. Prior to joining MIT, he completed a postdoc in the Autonomous Systems Laboratory (ASL) at Stanford University in 2021. Additionally, he was a research scientist intern at Google DeepMind in 2019. His work has been recognized by several awards, including the 2020 Information Theory and Applications (ITA) Gold Graduation Award and the 2016 ACM GREENMETRICS Best Student Paper Award. He was named in the list of Outstanding Academic Leaders in Data from the CDO Magazine in 2024 and 2023, named an Amazon Fellow in Artificial Intelligence in 2017, and a PIMCO Fellow in Data Science in 2018. He was also the first-place winner and a gold medalist at the 2008 National Physics Olympiad in Iran.
Speaker: Sergio Lucia
Affiliation: Professor at TU Dortmund, Germany
Title: Efficient approximation of model predictive control via neural networks with guarantees
Abstract: Model predictive control still faces significant computational challenges, particularly for nonlinear systems under uncertain conditions. A promising approach involves using neural networks to approximate optimal control inputs based on the system's current state. Despite its growing popularity, this strategy faces two major challenges: the substantial data requirement for training and the fact that the safety and stability properties of the original MPC controller are not automatically transferred to the closed-loop system. In this talk, we address these challenges. Firstly, we demonstrate how integrating parametric sensitivities into the training process can enhance data efficiency, making the neural network training more effective with less data. Secondly, we introduce both probabilistic and deterministic verification techniques to guarantee that important closed-loop properties are maintained despite the approximation errors, even when additional uncertainty is present. The effectiveness of these techniques is illustrated in several simulation studies for linear and nonlinear systems.
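As a minimal sketch of sensitivity-augmented imitation of a control law, the snippet below uses a linear feedback u*(x) = Kx as a stand-in for the MPC solver (so its sensitivity is simply K); the network architecture and the weighting of the sensitivity term are arbitrary choices of ours, not those of the talk.

```python
import torch
import torch.nn as nn

K = torch.tensor([[1.0, 0.5, 0.2, 0.1]])    # "expert" gain, stand-in for an MPC law
xs = torch.randn(512, 4)                     # sampled states
us = xs @ K.T                                # expert control inputs u*(x)

net = nn.Sequential(nn.Linear(4, 64), nn.Tanh(), nn.Linear(64, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for _ in range(500):
    x = xs.clone().requires_grad_(True)
    u = net(x)
    jac = torch.autograd.grad(u.sum(), x, create_graph=True)[0]    # d u_net / d x
    loss = ((u - us) ** 2).mean() + 0.1 * ((jac - K) ** 2).mean()  # fit values and sensitivities
    opt.zero_grad(); loss.backward(); opt.step()
```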
Biography: Sergio Lucia received the M.Sc. degree in Electrical Engineering from the University of Zaragoza, Spain, in 2010 and the Dr.-Ing. degree in Optimization and Automatic Control from TU Dortmund University, Germany, in 2014. He then joined the Otto-von-Guericke University Magdeburg and visited the Massachusetts Institute of Technology as a postdoctoral fellow. Between 2017 and 2020 he was an Assistant Professor at TU Berlin. Since 2020, he has been a Professor at TU Dortmund University and head of the Chair of Process Automation Systems. His research interests include decision-making under uncertainty and the interplay between machine learning techniques and control theory. Sergio Lucia is currently an Associate Editor of the Journal of Process Control.
Speaker: Riccardo Bonalli
Affiliation: Associate Professor at Université Paris-Saclay, France
Title: Formally Guaranteed Learning- and Optimization-based Methods for Risk-Averse Control of Autonomous Systems
Abstract: From energy networks to space systems, complex Autonomous Systems (AS) have become pervasive in our society. In this context, the design of increasingly sophisticated methods for the modeling and control of AS is of utmost relevance, given that they regularly operate in uncertain and dynamic circumstances. On the one hand, to mitigate hazardous and possibly catastrophic uncertain perturbations during the decision-making (or planning) process, one is led to reliably infuse Learning-based Models (LM) in the control pipeline. LM offer numerous advantages, including accurate representations of sophisticated systems which accomplish complex tasks. Nevertheless, due to the high degree of uncertainty in which AS operate, one must devise LM capable of offering formal guarantees of reliability. On the other hand, beneficially leveraging the aforementioned LM for safe-against-uncertainty deployment of AS may come only under specific optimal planning and control processes. In particular, the best trade-off is offered by risk-averse Stochastic Optimal Control Problems (SOCP), providing controls which optimize sophisticated stochastic worst-case-averse costs known as risk measures. This talk aims at introducing two promising techniques which start bridging the aforementioned gaps. Specifically, the first part of the talk will show how general, i.e., non-linear stochastic differential equations may be estimated through appropriate sampling- and RKHS-based LM, offering high-order error bounds in law. The second part of the talk will address the design of conditions for optimality which can then be leveraged to solve general, i.e., non-smooth risk-averse SOCP through efficient and provably reliable numerical computations.
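As a representative example of a risk measure (not necessarily the one adopted in the talk), the conditional value-at-risk of a random cost \(Z\) at level \(\alpha \in (0,1)\) admits the variational form
\[
\mathrm{CVaR}_{\alpha}(Z) \;=\; \min_{t \in \mathbb{R}} \Big\{\, t + \tfrac{1}{1-\alpha}\, \mathbb{E}\big[(Z - t)_{+}\big] \Big\},
\]
which interpolates between the expected cost (as \(\alpha \to 0\)) and the worst case (as \(\alpha \to 1\)); a risk-averse SOCP replaces the expectation in the objective with such a measure.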
Biography: Riccardo Bonalli obtained his PhD in applied mathematics from Sorbonne Université in 2018, in collaboration with ONERA-The French Aerospace Lab. He was a postdoctoral researcher with the Department of Aeronautics and Astronautics, Stanford University. Currently, Riccardo is a tenured CNRS researcher with the Laboratory of Signals and Systems (L2S), at Université Paris-Saclay. His main research interests concern theoretical and numerical robust optimal control with techniques from differential geometry, statistical analysis, and machine learning and applications in aerospace systems and robotics.
Workshop Program on June 25 (Local Time)
8:30 – 8:40 | Welcome and opening remarks |
8:40 – 9:20 | Timm Faulwasser, TU Hamburg, Germany – Data-driven stochastic control with formal guarantees - Recent results and open problems (slides) |
9:20 – 10:00 | Riccardo Scattolini, Politecnico di Milano, Italy – Towards neural networks for modeling and control: stability, safety verification, adaptivity (slides) |
10:00 – 10:30 | Coffee break |
10:30 – 11:10 | Lars Lindemann, University of Southern California, USA – Safe Control of Learning-Enabled Autonomous Systems using Conformal Prediction (slides) |
11:10 – 11:50 | Jun Liu, University of Waterloo, Canada – Physics-Informed and Data-Driven Learning of Lyapunov Functions with Formal Guarantees (slides) |
11:50 – 13:30 | Lunch break |
13:30 – 14:10 | Rafael Wisniewski, Aalborg University, Denmark – Stochastic Safety Guarantees and Learning (slides) |
14:10 – 14:50 | Navid Azizan, MIT, USA – Data-Driven Control with Inherent Lyapunov Stability (slides) |
14:50 – 15:20 | Coffee break |
15:20 – 16:00 | Sergio Lucia, TU Dortmund, Germany – Efficient approximation of model predictive control via neural networks with guarantees (slides) |
16:00 – 16:40 | Riccardo Bonalli, Université Paris-Saclay, France – Formally Guaranteed Learning- and Optimization-based Methods for Risk-Averse Control of Autonomous Systems (slides) |
16:40 – 17:00 | Discussions and closing remarks |