Hauptseminar "Numerik von Kontrollsystemen" im WS 07/08

Prof. Junge

Control systems are dynamical systems that additionally depend on a control parameter. A typical question for such a system is how to find a control (or feedback law) that steers the system into a desired target set. In many cases, a cost function is to be optimized as well, typically a functional on the solution trajectories of the system. The topic of the seminar is a unified numerical approach to these and related questions, based on solving a fixed-point equation, the Bellman equation, or a partial differential equation, the Hamilton-Jacobi-Bellman equation. The focus will be on recent developments, presented by way of selected examples, but more classical approaches as well as methods from neuro-dynamic programming and model predictive control will also be covered.
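
As general background (standard formulations in generic notation, not taken from specific seminar materials): for a discrete-time system x_{k+1} = f(x_k, u_k) with running cost g, the optimal value function V is a fixed point of the Bellman equation, and for a continuous-time system dx/dt = f(x, u) it solves, in the undiscounted infinite-horizon case, the Hamilton-Jacobi-Bellman equation:

\[
V(x) \;=\; \min_{u \in U} \bigl\{ g(x,u) + V(f(x,u)) \bigr\},
\qquad
0 \;=\; \min_{u \in U} \bigl\{ g(x,u) + \nabla V(x) \cdot f(x,u) \bigr\}.
\]

A minimizing u in either equation yields an optimal feedback law, which is what the numerical methods discussed in the seminar ultimately aim to compute.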

If you are interested, please send an email with your desired talk topic from the list below and a preferred date from the following list to junge@ma.tum.de. Talks will be assigned in the order in which the emails arrive.

Dates

Thu, 14:15-15:45, room MI 02.08.011

still available: 18.10., 25.10., 8.11., 29.11., 6.12., 10.1., 17.1., 24.1., 31.1., 7.2.

assigned:
Date   Topic   Speaker
15.11.   Relaxing Dynamic Programming   Michael Felux
29.11.   An efficient algorithm for Hamilton-Jacobi equations in high dimension   Karen Tichmann
13.12.   Error estimation and adaptive discretization for the discrete stochastic Hamilton-Jacobi-Bellman equation   Martin Major
20.12.   A globalization procedure for locally stabilizing controllers   Raphael Boll

Literature

Lars Grüne
Error estimation and adaptive discretization for the discrete stochastic Hamilton-Jacobi-Bellman equation
Generalizing an idea from deterministic optimal control, we construct a posteriori error estimates for the spatial discretization error of the stochastic dynamic programming method based on a discrete Hamilton-Jacobi-Bellman equation. These error estimates are shown to be efficient and reliable; furthermore, a priori bounds on the estimates depending on the regularity of the approximate solution are derived. Based on these error estimates we propose an adaptive space discretization scheme whose performance is illustrated by two numerical examples.
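
To make the dynamic programming side of this concrete, here is a minimal sketch of grid-based value iteration with a crude residual-based refinement indicator, in the spirit of (but much simpler than) the adaptive scheme of the paper; the one-dimensional dynamics, costs, discount factor, and all names are invented for illustration:

    import numpy as np

    # Toy 1-D discounted problem: dynamics x+ = f(x,u), running cost g(x,u).
    f = lambda x, u: np.clip(x + 0.1 * u, 0.0, 1.0)   # hypothetical dynamics
    g = lambda x, u: 0.1 * (x**2 + 0.1 * u**2)        # hypothetical running cost
    controls = np.linspace(-1.0, 1.0, 11)
    beta = 0.9                                        # discount factor
    grid = np.linspace(0.0, 1.0, 51)

    def bellman(V, pts):
        """Discrete Bellman operator with linear interpolation between grid nodes."""
        return np.array([min(g(x, u) + beta * np.interp(f(x, u), grid, V)
                             for u in controls) for x in pts])

    V = np.zeros_like(grid)
    for _ in range(500):                  # value iteration towards the fixed point
        V_new = bellman(V, grid)
        if np.max(np.abs(V_new - V)) < 1e-10:
            break
        V = V_new

    # Crude a posteriori indicator: Bellman residual at cell midpoints; large
    # values flag cells where refining the grid should reduce the spatial error.
    mid = 0.5 * (grid[:-1] + grid[1:])
    indicator = np.abs(bellman(V, mid) - np.interp(mid, grid, V))

Cells where the residual is large are the natural candidates for refinement in an adaptive scheme.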

Bo Lincoln and Anders Rantzer
Relaxing Dynamic Programming
The idea of dynamic programming is general and very simple, but the “curse of dimensionality” is often prohibitive and restricts the fields of application. This paper introduces a method to reduce the complexity by relaxing the demand for optimality. The distance from optimality is kept within prespecified bounds and the size of the bounds determines the computational complexity. Several computational examples are considered. The first is optimal switching between linear systems, with application to design of a dc/dc voltage converter. The second is optimal control of a linear system with piecewise linear cost with application to stock order control. Finally, the method is applied to a partially observable Markov decision problem (POMDP).
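
Stated schematically in generic notation (the paper's precise formulation differs in its details): instead of solving the Bellman equation exactly, one accepts any function V satisfying the relaxed inequalities

\[
\min_{u}\bigl\{\ell(x,u) + V(f(x,u))\bigr\} \;\le\; V(x) \;\le\; \alpha \,\min_{u}\bigl\{\ell(x,u) + V(f(x,u))\bigr\},
\qquad \alpha \ge 1,
\]

which keeps V within the factor \alpha of the optimal cost while permitting much coarser, and hence cheaper, representations of V.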

J. Behrens and F. Wirth
A globalization procedure for locally stabilizing controllers
For a nonlinear system with a singular point that is locally asymptotically nullcontrollable we present a class of feedbacks that globally asymptotically stabilizes the system on the domain of asymptotic nullcontrollability. The design procedure is twofold. In a neighborhood of the singular point we use linearization arguments to construct a sampled (or discrete) feedback that yields a feedback invariant neighborhood of the singular point and locally exponentially stabilizes without the need for vanishing sampling rate as the trajectory approaches the equilibrium. On the remainder of the domain of controllability we construct a piecewise constant patchy feedback that guarantees that all Carathéodory solutions of the closed loop system reach the previously constructed neighborhood.

Fritz Colonius, Tobias Gayer and Wolfgang Kliemann
Near invariance for Markov diffusion systems
A concept of 'near invariance' is developed starting from sets that are actually invariant under smaller perturbations. This is based on a theory for system dynamics of Markov diffusion processes illuminating the idea of 'large' noise perturbations turning invariant sets for smaller noise ranges into transient sets. The controllability behavior of associated deterministic systems plays a crucial role. This setup also allows for numerical computation of nearly invariant sets, the exit times from these sets, and the exit locations under varying perturbation ranges. Three examples with additive perturbations are included: a one-degree-of-freedom system with double-well potential and the escape equation without and with periodic excitation.

Vijay R. Konda and John N. Tsitsiklis
On Actor-Critic Algorithms
In this article, we propose and analyze a class of actor-critic algorithms. These are two-time-scale algorithms in which the critic uses temporal difference learning with a linearly parameterized approximation architecture, and the actor is updated in an approximate gradient direction, based on information provided by the critic. We show that the features for the critic should ideally span a subspace prescribed by the choice of parameterization of the actor. We study actor-critic algorithms for Markov decision processes with Polish state and action spaces. We state and prove two results regarding their convergence.
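
As an illustration of the two-time-scale structure, here is a minimal sketch of an actor-critic iteration on an invented two-state MDP, with a linear (here tabular) critic trained by TD(0) and a softmax actor updated along an approximate gradient; the toy model, step sizes, and all names are assumptions, not taken from the paper:

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy 2-state, 2-action MDP (made up for illustration).
    P = np.array([[[0.9, 0.1], [0.2, 0.8]],     # P[s, a, s']: transition kernel
                  [[0.8, 0.2], [0.1, 0.9]]])
    R = np.array([[1.0, 0.0], [0.0, 1.0]])      # R[s, a]: rewards
    gamma = 0.95

    phi = np.eye(2)            # critic features: one-hot over states
    w = np.zeros(2)            # critic weights (linear value approximation)
    theta = np.zeros((2, 2))   # actor parameters (softmax policy)

    def policy(s):
        p = np.exp(theta[s] - theta[s].max())
        return p / p.sum()

    s = 0
    for t in range(1, 20001):
        pi = policy(s)
        a = rng.choice(2, p=pi)
        s2 = rng.choice(2, p=P[s, a])
        r = R[s, a]

        # Critic: TD(0) update on the faster time scale.
        delta = r + gamma * phi[s2] @ w - phi[s] @ w
        w += (0.5 / t**0.6) * delta * phi[s]

        # Actor: approximate policy-gradient step on the slower time scale,
        # using the critic's TD error as the advantage signal.
        grad_log = -pi
        grad_log[a] += 1.0
        theta[s] += (0.1 / t**0.9) * delta * grad_log

        s = s2

The essential point is the step-size schedule: the actor's step sizes decay faster than the critic's, so the critic effectively tracks the value of the current policy while the policy drifts slowly.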

M. Falcone
Numerical Methods for Differential Games based on Partial Differential Equations
In this paper we present some numerical methods for the solution of two-person zero-sum deterministic differential games. The methods are based on the dynamic programming approach. We first solve the Isaacs equation associated to the game to get an approximate value function and then we use it to reconstruct approximate optimal feedback controls and optimal trajectories. The approximation schemes also have an interesting control interpretation since the time-discrete scheme stems from a dynamic programming principle for the associated discrete time dynamical system. The general framework for convergence results to the value function is the theory of viscosity solutions. Numerical experiments are presented solving some classical pursuit-evasion games. This paper is based on the lectures given at the Summer School on "Differential Games and Applications", held at GERAD, Montreal (June 14-18, 2004).
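
Schematically, in generic notation (not quoted from the paper): for a time step h, the time-discrete scheme stems from a dynamic programming principle of the form

\[
v_h(x) \;=\; \max_{b \in B}\,\min_{a \in A}\,\bigl\{\, h\,g(x,a,b) \;+\; v_h\bigl(x + h\,f(x,a,b)\bigr) \,\bigr\},
\]

i.e. the discrete value function is a fixed point of a min-max (Isaacs) operator for the associated discrete-time game, which is exactly the control interpretation mentioned in the abstract.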

Elisabetta Carlini, Maurizio Falcone, Roberto Ferretti
An efficient algorithm for Hamilton–Jacobi equations in high dimension
In this paper we develop a new version of the semi-Lagrangian algorithm for first order Hamilton–Jacobi equations. This implementation is well suited to deal with problems in high dimension, i.e. in R^m with m ≥ 3, which typically arise in the study of control problems and differential games. Our model problem is the evolutive Hamilton–Jacobi equation related to the optimal control finite horizon problem. We will give a step-by-step description of the algorithm focusing our attention on two critical routines: the interpolation in high dimension and the search for the global minimum. We present some numerical results on test problems which range from m = 3 to m = 5 and deal with applications to front propagation, aerospace engineering, economy and biology.
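
For concreteness, here is a minimal sketch of a single semi-Lagrangian step in R^3, with multilinear interpolation at the feet of the characteristics and a brute-force search over a finite control set; the dynamics, cost, and grid are toy assumptions, and the paper's two critical routines (high-dimensional interpolation and global minimization) are replaced by off-the-shelf, unoptimized counterparts:

    import numpy as np
    from scipy.interpolate import RegularGridInterpolator

    # One semi-Lagrangian time step for an evolutive HJ equation in R^3:
    # v_new(x) = min_u [ dt * g(x,u) + v(x + dt * f(x,u)) ]   (schematic, toy data)
    axes = tuple(np.linspace(-1.0, 1.0, 21) for _ in range(3))
    X = np.stack(np.meshgrid(*axes, indexing="ij"), axis=-1)  # grid, shape (21,21,21,3)
    v = np.linalg.norm(X, axis=-1)                            # toy initial value function
    dt = 0.05
    controls = [np.array(u) for u in
                [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]]

    def step(v):
        interp = RegularGridInterpolator(axes, v, bounds_error=False, fill_value=None)
        best = np.full(v.shape, np.inf)
        for u in controls:                    # brute-force search over the control set
            feet = X + dt * u                 # feet of the characteristics (f(x,u) = u here)
            cand = dt * 1.0 + interp(feet.reshape(-1, 3)).reshape(v.shape)  # g ≡ 1 (toy)
            best = np.minimum(best, cand)
        return best

    v = step(v)

The cost of the control search and of the interpolation grows quickly with the dimension m, which is precisely why the paper's specialized routines matter.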

Rolf Findeisen, Lars Imsland, Frank Allgöwer, and Bjarne A. Foss
State and Output Feedback Nonlinear Model Predictive Control: An Overview
The purpose of this paper is twofold. In the first part we give a review on the current state of nonlinear model predictive control (NMPC). After a brief presentation of the basic principle of predictive control we outline some of the theoretical, computational, and implementational aspects of this control strategy. Most of the theoretical developments in the area of NMPC are based on the assumption that the full state is available for measurement, an assumption that does not hold in the typical practical case. Thus, in the second part of this paper we focus on the output feedback problem in NMPC. After a brief overview on existing output feedback NMPC approaches we derive conditions that guarantee stability of the closed-loop if an NMPC state feedback controller is used together with a full state observer for the recovery of the system state.
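
To illustrate the basic principle of predictive control mentioned in the first part, here is a minimal state-feedback NMPC sketch: at every sampling instant a finite-horizon optimal control problem is solved for the current state, the first control of the optimized sequence is applied, and the horizon moves on. The double-integrator model, weights, and horizon are invented for illustration:

    import numpy as np
    from scipy.optimize import minimize

    dt, N = 0.1, 10                       # sampling time and prediction horizon

    def f(x, u):                          # toy discrete-time model: x+ = f(x, u)
        return np.array([x[0] + dt * x[1], x[1] + dt * u])

    def cost(useq, x):                    # finite-horizon cost along the prediction
        J = 0.0
        for u in useq:
            x = f(x, u)
            J += x @ x + 0.1 * u**2
        return J + 10.0 * (x @ x)         # terminal penalty (a common stability ingredient)

    x = np.array([1.0, 0.0])
    useq = np.zeros(N)
    for k in range(50):                   # closed loop: re-solve, apply, shift, repeat
        res = minimize(cost, useq, args=(x,))
        useq = res.x
        x = f(x, useq[0])                 # in output-feedback NMPC, x would come from an observer
        useq = np.roll(useq, -1)          # warm start for the next problem

The output-feedback question of the second part of the paper enters exactly at the marked line: in practice the state x is not measured but reconstructed by an observer, and the paper's conditions concern when the observer-controller combination preserves closed-loop stability.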