Linear Time-Invariant Systems
Linear time-invariant (LTI) systems form the foundation of modern control theory and optimal control. This document covers the mathematical representation, solution methods, and key properties of LTI systems.
State-Space Representation
The standard form of a linear time-invariant system is expressed as:
State equation:
$$\dot{x}(t) = Ax(t) + Bu(t)$$
Output equation:
$$y(t) = Cx(t) + Du(t)$$
Where:
$x(t) \in \mathbb{R}^n$ is the state vector
$u(t) \in \mathbb{R}^m$ is the input vector
$y(t) \in \mathbb{R}^p$ is the output vector
$A \in \mathbb{R}^{n \times n}$ is the system matrix
$B \in \mathbb{R}^{n \times m}$ is the input matrix
$C \in \mathbb{R}^{p \times n}$ is the output matrix
$D \in \mathbb{R}^{p \times m}$ is the feedthrough matrix
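In Python, such a system can be assembled with `scipy.signal.StateSpace`. The matrices below are a hypothetical second-order example, not taken from the text:

```python
import numpy as np
from scipy.signal import StateSpace

# Hypothetical example: n = 2 states, m = 1 input, p = 1 output
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])   # system matrix, n x n
B = np.array([[0.0],
              [1.0]])          # input matrix, n x m
C = np.array([[1.0, 0.0]])     # output matrix, p x n
D = np.array([[0.0]])          # feedthrough matrix, p x m

sys = StateSpace(A, B, C, D)
print(sys)
```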
Matrix Exponential
Scalar Case
For a scalar differential equation:
$$\dot{x} = ax \implies x(t) = x(0)e^{at}$$
where $a$ is a scalar constant.
Matrix Case
For the matrix differential equation:
$$\dot{x} = Ax \implies x(t) = e^{At}x(0)$$
where $A$ is an $n \times n$ matrix and $e^{At}$ is the matrix exponential. Note that the matrix exponential multiplies the initial state from the left, since matrix-vector products do not commute.
Taylor Series Expansion
The matrix exponential is defined using the Taylor series:
Scalar exponential:
$$e^{at} = 1 + at + \frac{1}{2!}(at)^2 + \frac{1}{3!}(at)^3 + \cdots$$
Matrix exponential:
$$e^{At} = I + At + \frac{1}{2!}(At)^2 + \frac{1}{3!}(At)^3 + \cdots$$
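The series can be checked numerically by summing terms and comparing against SciPy's `expm` (the 2x2 matrix and the 20-term cutoff are arbitrary choices for illustration; `expm` itself uses more robust algorithms than direct truncation):

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])   # example matrix
t = 0.5

approx = np.eye(2)             # the leading I term
term = np.eye(2)
for k in range(1, 20):         # accumulate (At)^k / k! term by term
    term = term @ (A * t) / k
    approx += term

print(np.max(np.abs(approx - expm(A * t))))
```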
Derivative of Matrix Exponential
The derivative of the matrix exponential is:
$$\frac{d}{dt}e^{At} = 0 + A + \frac{2}{2!}A^2t + \frac{3}{3!}A^3t^2 + \cdots = Ae^{At}$$
Properties of Matrix Exponential
$e^{A \cdot 0} = I$ (identity matrix)
$\frac{d}{dt}e^{At} = Ae^{At} = e^{At}A$
$(e^{At})^{-1} = e^{-At}$
$e^{A(t_1+t_2)} = e^{At_1}e^{At_2}$ (always true for a single matrix $A$; in general $e^{A+B} = e^Ae^B$ holds only when $AB = BA$)
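These properties are easy to verify numerically. A quick sketch with NumPy/SciPy, using an arbitrary random 3x3 matrix as the test case:

```python
import numpy as np
from scipy.linalg import expm

A = np.random.default_rng(0).standard_normal((3, 3))  # arbitrary test matrix
t1, t2 = 0.3, 0.7

E0 = expm(A * 0.0)   # e^{A*0}, should be the identity
inv_ok = np.allclose(np.linalg.inv(expm(A * t1)), expm(-A * t1))
semigroup_ok = np.allclose(expm(A * (t1 + t2)), expm(A * t1) @ expm(A * t2))
commute_ok = np.allclose(A @ expm(A * t1), expm(A * t1) @ A)
print(inv_ok, semigroup_ok, commute_ok)
```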
Solution of State-Space Equations
Method 1: Laplace Transform
Applying the Laplace transform to the state equation:
$$sX(s) - x(0) = AX(s) + BU(s)$$
Rearranging:
$$(sI - A)X(s) = x(0) + BU(s)$$
Solving for $X(s)$:
$$X(s) = (sI - A)^{-1}x(0) + (sI - A)^{-1}BU(s)$$
Taking the inverse Laplace transform:
$$x(t) = \mathcal{L}^{-1}[X(s)] = e^{At}x(0) + \int_0^t e^{A(t-\tau)}Bu(\tau)\,d\tau$$
where:
$e^{At} = \mathcal{L}^{-1}[(sI-A)^{-1}]$ (the state transition matrix)
The convolution integral represents the forced response
The Laplace transform method assumes a zero initial time ($t_0 = 0$), since the transform's differentiation property is defined relative to $t = 0$.
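The identity $e^{At} = \mathcal{L}^{-1}[(sI-A)^{-1}]$ can be carried out symbolically. A sketch with SymPy, using a hypothetical 2x2 matrix with eigenvalues $-1$ and $-2$:

```python
import sympy as sp

s = sp.symbols('s')
t = sp.symbols('t', positive=True)
A = sp.Matrix([[0, 1], [-2, -3]])     # example matrix, eigenvalues -1 and -2

resolvent = (s * sp.eye(2) - A).inv()  # (sI - A)^{-1}
# Invert the Laplace transform entry by entry to recover e^{At}
Phi = resolvent.applyfunc(lambda F: sp.inverse_laplace_transform(F, s, t))
print(sp.simplify(Phi))
```

For this matrix the (1,1) entry works out to $2e^{-t} - e^{-2t}$, which can be confirmed by hand via partial fractions of $(s+3)/((s+1)(s+2))$.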
Method 2: Direct Integration
Multiply both sides of the state equation by $e^{-At}$:
$$e^{-At}\frac{d}{dt}x(t) = e^{-At}Ax(t) + e^{-At}Bu(t)$$
Rearranging:
$$e^{-At}\frac{d}{dt}x(t) - e^{-At}Ax(t) = e^{-At}Bu(t)$$
Using the product rule, the left side becomes:
d d t ( e − A t x ( t ) ) = e − A t B u ( t ) \frac{d}{dt}(e^{-At}x(t)) = e^{-At}Bu(t) d t d ( e − A t x ( t )) = e − A t B u ( t )
Integrating from $t_0$ to $t$:
$$\int_{t_0}^t \frac{d}{d\tau}\left(e^{-A\tau}x(\tau)\right)d\tau = \int_{t_0}^t e^{-A\tau}Bu(\tau)\,d\tau$$
This gives:
$$e^{-At}x(t) - e^{-At_0}x(t_0) = \int_{t_0}^t e^{-A\tau}Bu(\tau)\,d\tau$$
Final solution:
$$x(t) = e^{A(t-t_0)}x(t_0) + \int_{t_0}^t e^{A(t-\tau)}Bu(\tau)\,d\tau$$
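This closed-form solution can be cross-checked against a numerical ODE integrator. For a constant (step) input $u$ and invertible $A$, the convolution integral evaluates to $A^{-1}(e^{AT} - I)Bu$; the matrices below are a hypothetical example:

```python
import numpy as np
from scipy.linalg import expm, solve
from scipy.integrate import solve_ivp

A = np.array([[0.0, 1.0], [-2.0, -3.0]])   # example system
B = np.array([[0.0], [1.0]])
x0 = np.array([1.0, 0.0])                  # initial state, t0 = 0
u = 1.0                                    # constant (step) input
T = 2.0

# Closed form: e^{AT} x0 + A^{-1}(e^{AT} - I) B u
x_closed = expm(A * T) @ x0 + solve(A, (expm(A * T) - np.eye(2)) @ B).ravel() * u

# Direct numerical integration of xdot = Ax + Bu for comparison
sol = solve_ivp(lambda t, x: A @ x + (B * u).ravel(),
                [0.0, T], x0, rtol=1e-9, atol=1e-12)
x_num = sol.y[:, -1]
print(x_closed, x_num)
```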
Solution Components
The general solution consists of two parts:
Zero-input response (homogeneous solution):
$$x_{zi}(t) = e^{A(t-t_0)}x(t_0)$$
Zero-state response (particular solution):
$$x_{zs}(t) = \int_{t_0}^t e^{A(t-\tau)}Bu(\tau)\,d\tau$$
The matrix $\Phi(t, t_0) = e^{A(t-t_0)}$ is called the state transition matrix and describes how the state evolves from time $t_0$ to time $t$ in the absence of inputs.
Key Properties
Stability
The system is asymptotically stable if and only if all eigenvalues of $A$ have negative real parts.
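A stability check then reduces to inspecting the eigenvalue real parts; a minimal sketch with an assumed example matrix:

```python
import numpy as np

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])          # example: eigenvalues -1 and -2
eigs = np.linalg.eigvals(A)
is_stable = bool(np.all(eigs.real < 0))
print(eigs.real, is_stable)
```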
Controllability
The system is completely controllable if the controllability matrix:
$$\mathcal{C} = \begin{bmatrix} B & AB & A^2B & \cdots & A^{n-1}B \end{bmatrix}$$
has full rank $n$.
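The rank test is straightforward to implement; a sketch using an assumed 2x2 example:

```python
import numpy as np

A = np.array([[0.0, 1.0], [-2.0, -3.0]])  # example system
B = np.array([[0.0], [1.0]])
n = A.shape[0]

# Stack [B, AB, ..., A^{n-1}B] column-wise
ctrb = np.hstack([np.linalg.matrix_power(A, k) @ B for k in range(n)])
print(np.linalg.matrix_rank(ctrb))
```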
Observability
The system is completely observable if the observability matrix:
$$\mathcal{O} = \begin{bmatrix} C \\ CA \\ CA^2 \\ \vdots \\ CA^{n-1} \end{bmatrix}$$
has full rank $n$.
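The dual check for observability, with the same assumed example system:

```python
import numpy as np

A = np.array([[0.0, 1.0], [-2.0, -3.0]])  # example system
C = np.array([[1.0, 0.0]])
n = A.shape[0]

# Stack [C; CA; ...; CA^{n-1}] row-wise
obsv = np.vstack([C @ np.linalg.matrix_power(A, k) for k in range(n)])
print(np.linalg.matrix_rank(obsv))
```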
Applications in Optimal Control
LTI systems are fundamental to optimal control because:
Linear Quadratic Regulator (LQR) problems are naturally formulated for LTI systems
Model Predictive Control (MPC) often uses linearized models
Dynamic programming solutions have closed-form expressions for LTI systems
Kalman filtering is optimal for LTI systems with Gaussian noise
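As a taste of the LQR connection, the infinite-horizon gain follows from the continuous-time algebraic Riccati equation, which SciPy solves directly. The double-integrator plant and weights below are an assumed example; for this choice the gain is known to be $K = [1, \sqrt{3}]$:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Double-integrator plant (assumed example) and quadratic weights Q, R
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2)
R = np.array([[1.0]])

P = solve_continuous_are(A, B, Q, R)  # solves A'P + PA - PBR^{-1}B'P + Q = 0
K = np.linalg.solve(R, B.T @ P)       # optimal state feedback u = -Kx
print(K)                              # [[1.0, 1.732...]] for this plant
```

The closed-loop matrix $A - BK$ is guaranteed stable under the standard stabilizability/detectability conditions, which hold here.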