📄️ Dynamic Programming and Bellman Optimality Principle
Dynamic programming provides a systematic approach to solving optimal control problems by decomposing complex problems into simpler subproblems. This document introduces the fundamental concepts and demonstrates the method through a practical drone control example.
📄️ Dynamic Programming for Continuous-Time Systems (HJB Equation)
This document derives the Hamilton-Jacobi-Bellman (HJB) equation, which is the continuous-time analog of the discrete-time Bellman equation. The HJB equation provides the foundation for solving optimal control problems in continuous time.
📄️ Dynamic Programming for Discrete-Time Systems
This document presents the mathematical framework of dynamic programming for discrete-time optimal control problems. We derive the recursive Bellman equations that form the foundation of optimal control theory.
📄️ Linear Time-Invariant Systems
Linear time-invariant (LTI) systems form the foundation of modern control theory and optimal control. This document covers the mathematical representation, solution methods, and key properties of LTI systems.
📄️ Linear Quadratic Regulator for Continuous-Time Systems
The continuous-time Linear Quadratic Regulator (LQR) extends optimal control theory to systems described by differential equations. This document presents the complete derivation using the Hamilton-Jacobi-Bellman (HJB) equation approach.
📄️ Linear Quadratic Regulator for Discrete-Time Systems
The Linear Quadratic Regulator (LQR) is one of the most important results in optimal control theory. This document presents the complete derivation and implementation of LQR for discrete-time linear systems using dynamic programming.
📄️ LQR for Set-Point Regulation
The Linear Quadratic Regulator (LQR) for set-point regulation extends the basic LQR framework to track constant reference signals. This document presents the complete formulation using state augmentation and demonstrates a practical implementation.
📄️ LQR for Time-Varying Reference Tracking
The Linear Quadratic Regulator (LQR) for tracking time-varying reference signals extends the set-point regulation framework to handle dynamic references. This approach uses incremental input optimization to achieve smooth tracking performance.
📄️ LQR Weight Matrix Design and Testing
This document demonstrates the effect of different weight matrix selections in Linear Quadratic Regulator (LQR) design through practical examples and simulations. Understanding how to tune the weight matrices $Q$ and $R$ is crucial for achieving desired system performance.
📄️ Matrix and Vector Calculus
Matrix and vector calculus forms the mathematical foundation for optimal control theory. This document covers the essential differentiation rules and formulas needed for gradient-based optimization methods in control systems.
📄️ Introduction to Model Predictive Control
Model Predictive Control (MPC), also known as Receding Horizon Control, is an advanced control strategy that solves an optimal control problem over a finite prediction horizon at each sampling time. This document provides a comprehensive introduction to MPC fundamentals and implementation.
📄️ Optimal Control Problem Formulation
This document introduces the fundamental concepts of optimal control problems using the unicycle model as a practical example. We explore different control objectives, ranging from basic parking problems to advanced collision-avoidance scenarios.
📄️ Unconstrained Model Predictive Control
Unconstrained Model Predictive Control (MPC) provides a foundation for understanding the core principles of predictive control without the complexity of constraint handling. This document presents the complete mathematical derivation and implementation of unconstrained MPC for linear systems.