National Instruments NI MATRIXx Xmath User Manual

NI MATRIXx
Xmath™ Model Reduction Module

April 2007
370755C-01

Worldwide Technical Support and Product Information

ni.com

National Instruments Corporate Headquarters

11500 North Mopac Expressway Austin, Texas 78759-3504 USA Tel: 512 683 0100

Worldwide Offices

Australia 1800 300 800, Austria 43 662 457990-0, Belgium 32 (0) 2 757 0020, Brazil 55 11 3262 3599, Canada 800 433 3488, China 86 21 5050 9800, Czech Republic 420 224 235 774, Denmark 45 45 76 26 00, Finland 385 (0) 9 725 72511, France 33 (0) 1 48 14 24 24, Germany 49 89 7413130, India 91 80 41190000, Israel 972 3 6393737, Italy 39 02 413091, Japan 81 3 5472 2970, Korea 82 02 3451 3400, Lebanon 961 (0) 1 33 28 28, Malaysia 1800 887710, Mexico 01 800 010 0793, Netherlands 31 (0) 348 433 466, New Zealand 0800 553 322, Norway 47 (0) 66 90 76 60, Poland 48 22 3390150, Portugal 351 210 311 210, Russia 7 495 783 6851, Singapore 1800 226 5886, Slovenia 386 3 425 42 00, South Africa 27 0 11 805 8197, Spain 34 91 640 0085, Sweden 46 (0) 8 587 895 00, Switzerland 41 56 2005151, Taiwan 886 02 2377 2222, Thailand 662 278 6777, Turkey 90 212 279 3031, United Kingdom 44 (0) 1635 523545
For further support information, refer to the Technical Support and Professional Services appendix. To comment on National Instruments documentation, refer to the National Instruments Web site at ni.com/info and enter the info code feedback.
© 2007 National Instruments Corporation. All rights reserved.

Important Information

Warranty

The media on which you receive National Instruments software are warranted not to fail to execute programming instructions, due to defects in materials and workmanship, for a period of 90 days from date of shipment, as evidenced by receipts or other documentation. National Instruments will, at its option, repair or replace software media that do not execute programming instructions if National Instruments receives notice of such defects during the warranty period. National Instruments does not warrant that the operation of the software shall be uninterrupted or error free.
A Return Material Authorization (RMA) number must be obtained from the factory and clearly marked on the outside of the package before any equipment will be accepted for warranty work. National Instruments will pay the shipping costs of returning to the owner parts which are covered by warranty.
National Instruments believes that the information in this document is accurate. The document has been carefully reviewed for technical accuracy. In the event that technical or typographical errors exist, National Instruments reserves the right to make changes to subsequent editions of this document without prior notice to holders of this edition. The reader should consult National Instruments if errors are suspected. In no event shall National Instruments be liable for any damages arising out of or related to this document or the information contained in it.
EXCEPT AS SPECIFIED HEREIN, NATIONAL INSTRUMENTS MAKES NO WARRANTIES, EXPRESS OR IMPLIED, AND SPECIFICALLY DISCLAIMS ANY WARRANTY OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. CUSTOMER'S RIGHT TO RECOVER DAMAGES CAUSED BY FAULT OR NEGLIGENCE ON THE PART OF NATIONAL INSTRUMENTS SHALL BE LIMITED TO THE AMOUNT THERETOFORE PAID BY THE CUSTOMER. NATIONAL INSTRUMENTS WILL NOT BE LIABLE FOR DAMAGES RESULTING FROM LOSS OF DATA, PROFITS, USE OF PRODUCTS, OR INCIDENTAL OR CONSEQUENTIAL DAMAGES, EVEN IF ADVISED OF THE POSSIBILITY THEREOF. This limitation of
the liability of National Instruments will apply regardless of the form of action, whether in contract or tort, including negligence. Any action against National Instruments must be brought within one year after the cause of action accrues. National Instruments shall not be liable for any delay in performance due to causes beyond its reasonable control. The warranty provided herein does not cover damages, defects, malfunctions, or service failures caused by owner’s failure to follow the National Instruments installation, operation, or maintenance instructions; owner’s modification of the product; owner’s abuse, misuse, or negligent acts; and power failure or surges, fire, flood, accident, actions of third parties, or other events outside reasonable control.

Copyright

Under the copyright laws, this publication may not be reproduced or transmitted in any form, electronic or mechanical, including photocopying, recording, storing in an information retrieval system, or translating, in whole or in part, without the prior written consent of National Instruments Corporation.
National Instruments respects the intellectual property of others, and we ask our users to do the same. NI software is protected by copyright and other intellectual property laws. Where NI software may be used to reproduce software or other materials belonging to others, you may use NI software only to reproduce materials that you may reproduce in accordance with the terms of any applicable license or other legal restriction.

Trademarks

MATRIXx™, National Instruments™, NI™, ni.com™, and Xmath™ are trademarks of National Instruments Corporation. Refer to the Terms of Use section on ni.com/legal for more information about National Instruments trademarks.
Other product and company names mentioned herein are trademarks or trade names of their respective companies. Members of the National Instruments Alliance Partner Program are business entities independent from National Instruments and have no agency,
partnership, or joint-venture relationship with National Instruments.

Patents

For patents covering National Instruments products, refer to the appropriate location: Help»Patents in your software, the patents.txt file on your CD, or
ni.com/patents.

WARNING REGARDING USE OF NATIONAL INSTRUMENTS PRODUCTS

(1) NATIONAL INSTRUMENTS PRODUCTS ARE NOT DESIGNED WITH COMPONENTS AND TESTING FOR A LEVEL OF RELIABILITY SUITABLE FOR USE IN OR IN CONNECTION WITH SURGICAL IMPLANTS OR AS CRITICAL COMPONENTS IN ANY LIFE SUPPORT SYSTEMS WHOSE FAILURE TO PERFORM CAN REASONABLY BE EXPECTED TO CAUSE SIGNIFICANT INJURY TO A HUMAN.
(2) IN ANY APPLICATION, INCLUDING THE ABOVE, RELIABILITY OF OPERATION OF THE SOFTWARE PRODUCTS CAN BE IMPAIRED BY ADVERSE FACTORS, INCLUDING BUT NOT LIMITED TO FLUCTUATIONS IN ELECTRICAL POWER SUPPLY, COMPUTER HARDWARE MALFUNCTIONS, COMPUTER OPERATING SYSTEM SOFTWARE FITNESS, FITNESS OF COMPILERS AND DEVELOPMENT SOFTWARE USED TO DEVELOP AN APPLICATION, INSTALLATION ERRORS, SOFTWARE AND HARDWARE COMPATIBILITY PROBLEMS, MALFUNCTIONS OR FAILURES OF ELECTRONIC MONITORING OR CONTROL DEVICES, TRANSIENT FAILURES OF ELECTRONIC SYSTEMS (HARDWARE AND/OR SOFTWARE), UNANTICIPATED USES OR MISUSES, OR ERRORS ON THE PART OF THE USER OR APPLICATIONS DESIGNER (ADVERSE FACTORS SUCH AS THESE ARE HEREAFTER COLLECTIVELY TERMED “SYSTEM FAILURES”). ANY APPLICATION WHERE A SYSTEM FAILURE WOULD CREATE A RISK OF HARM TO PROPERTY OR PERSONS (INCLUDING THE RISK OF BODILY INJURY AND DEATH) SHOULD NOT BE RELIANT SOLELY UPON ONE FORM OF ELECTRONIC SYSTEM DUE TO THE RISK OF SYSTEM FAILURE. TO AVOID DAMAGE, INJURY, OR DEATH, THE USER OR APPLICATION DESIGNER MUST TAKE REASONABLY PRUDENT STEPS TO PROTECT AGAINST SYSTEM FAILURES, INCLUDING BUT NOT LIMITED TO BACK-UP OR SHUT DOWN MECHANISMS. BECAUSE EACH END-USER SYSTEM IS CUSTOMIZED AND DIFFERS FROM NATIONAL INSTRUMENTS' TESTING PLATFORMS AND BECAUSE A USER OR APPLICATION DESIGNER MAY USE NATIONAL INSTRUMENTS PRODUCTS IN COMBINATION WITH OTHER PRODUCTS IN A MANNER NOT EVALUATED OR CONTEMPLATED BY NATIONAL INSTRUMENTS, THE USER OR APPLICATION DESIGNER IS ULTIMATELY RESPONSIBLE FOR VERIFYING AND VALIDATING THE SUITABILITY OF NATIONAL INSTRUMENTS PRODUCTS WHENEVER NATIONAL INSTRUMENTS PRODUCTS ARE INCORPORATED IN A SYSTEM OR APPLICATION, INCLUDING, WITHOUT LIMITATION, THE APPROPRIATE DESIGN, PROCESS AND SAFETY LEVEL OF SUCH SYSTEM OR APPLICATION.

Conventions

The following conventions are used in this manual:
[ ] Square brackets enclose optional items—for example, [response]. Square brackets also cite bibliographic references.
» The » symbol leads you through nested menu items and dialog box options
to a final action. The sequence File»Page Setup»Options directs you to pull down the File menu, select the Page Setup item, and select Options from the last dialog box.
This icon denotes a note, which alerts you to important information.
bold Bold text denotes items that you must select or click in the software, such
as menu items and dialog box options. Bold text also denotes parameter names.
italic Italic text denotes variables, emphasis, a cross-reference, or an introduction
to a key concept. Italic text also denotes text that is a placeholder for a word or value that you must supply.
monospace Text in this font denotes text or characters that you should enter from the
keyboard, sections of code, programming examples, and syntax examples. This font is also used for the proper names of disk drives, paths, directories, programs, subprograms, subroutines, device names, functions, operations, variables, filenames, and extensions.
monospace bold Bold text in this font denotes the messages and responses that the computer
automatically prints to the screen. This font also emphasizes lines of code that are different from the other examples.
monospace italic
Italic text in this font denotes text that is a placeholder for a word or value that you must supply.

Contents

Chapter 1 Introduction
Using This Manual.................................................................................................. .......1-1
Document Organization...................................................................................1-1
Bibliographic References ................................................................................1-2
Commonly Used Nomenclature..................................... .................................1-2
Conventions.....................................................................................................1-2
Related Publications........................................................................................1-3
MATRIXx Help...............................................................................................1-3
Overview........................................................................................................................1-3
Functions .........................................................................................................1-4
Nomenclature .................................................................................................1-6
Commonly Used Concepts ........................................................... .................................1-7
Controllability and Observability Grammians ................................................1-7
Hankel Singular Values...................................................................................1-8
Internally Balanced Realizations.....................................................................1-10
Singular Perturbation.......................................................................................1-11
Spectral Factorization......................................................................................1-13
Low Order Controller Design Through Order Reduction .............................................1-15
Chapter 2 Additive Error Reduction
Introduction....................................................................................................................2-1
Truncation of Balanced Realizations.............................................................................2-2
Reduction Through Balanced Realization Truncation...................................................2-4
Singular Perturbation of Balanced Realization..............................................................2-5
Hankel Norm Approximation........................................................................................2-6
balmoore( )..................................................................................................................... 2-8
Related Functions............................................................................................2-11
truncate( ).......................................................................................................................2-11
Related Functions............................................................................................2-11
redschur( )......................................................................................................................2-12
Algorithm ........................................................................................................2-12
Related Functions............................................................................................2-14
ophank( )........................................................................................................................2-14
Restriction........................................................................................................2-14
Algorithm ........................................................................................................2-15
Behaviors.........................................................................................................2-15
Onepass Algorithm .........................................................................................2-18
Multipass Algorithm ......................................................................................2-20
Discrete-Time Systems ...................................................................................2-21
Impulse Response Error..................................................... .............................2-22
Unstable System Approximation....................................................................2-23
Related Functions............................................................................................2-23
Chapter 3 Multiplicative Error Reduction
Selecting Multiplicative Error Reduction...................................................................... 3-1
Multiplicative Robustness Result.................................................................... 3-2
bst( )...............................................................................................................................3-3
Restrictions...................................................................................................... 3-4
Algorithm........................................................................................................3-4
Algorithm with the Keywords right and left...................................................3-5
Securing Zero Error at DC.............................................................................. 3-8
Hankel Singular Values of Phase Matrix of Gr ................................................3-9
Further Error Bounds ......................................................................................3-9
Reduction of Minimum Phase, Unstable G ....................................................3-9
Imaginary Axis Zeros (Including Zeros at ∞)................................................. 3-10
Related Functions............................................................................................3-14
mulhank( ) ..................................................................................................................... 3-14
Restrictions...................................................................................................... 3-14
Algorithm........................................................................................................3-14
right and left.................................................... ................................................3-15
Consequences of Step 5 and Justification of Step 6........................................ 3-18
Error Bounds...................................................................................................3-20
Imaginary Axis Zeros (Including Zeros at ∞)................................................. 3-21
Related Functions............................................................................................3-24
Chapter 4 Frequency-Weighted Error Reduction
Introduction ...................................................................................................................4-1
Controller Reduction......................... ........................................................... ...4-2
Controller Robustness Result............................ ..............................................4-2
Fractional Representations..............................................................................4-5
wtbalance( ) ................................................................................................................... 4-10
Algorithm........................................................................................................4-12
Related Functions............................................................................................4-15
fracred( ) ........................................................................................................................4-15
Restrictions......................................................................................................4-15
Defining and Reducing a Controller................................................................4-16
Algorithm ........................................................................................................4-18
Additional Background ...................................................................................4-20
Related Functions............................................................................................4-21
Chapter 5 Utilities
hankelsv( )......................................................................................................................5-1
Related Functions............................................................................................5-2
stable( ) ..........................................................................................................................5-2
Algorithm ........................................................................................................5-2
Related Functions............................................................................................5-4
compare( )......................................................................................................................5-4
Chapter 6 Tutorial
Plant and Full-Order Controller.....................................................................................6-1
Controller Reduction......................................................................................................6-5
ophank( )..........................................................................................................6-9
wtbalance.........................................................................................................6-12
fracred..............................................................................................................6-20
Appendix A Bibliography
Appendix B Technical Support and Professional Services
Index
Introduction
This chapter starts with an outline of the manual and some useful notes. It also provides an overview of the Model Reduction Module, describes the functions in this module, and introduces nomenclature and concepts used throughout this manual.

Using This Manual

This manual describes the Model Reduction Module (MRM), which provides a collection of tools for reducing the order of systems.
Readers who are not familiar with Parameter Dependent Matrices (PDMs) should consult the Xmath User Guide before using MRM functions and tools. Although several MRM functions accept both PDMs and matrices as input parameters, PDMs are preferable because they can include additional information that is useful for simulation, plotting, and signal labeling.

Document Organization

This manual includes the following chapters:
Chapter 1, Introduction, starts with an outline of the manual and some useful notes. It also provides an overview of the Model Reduction Module, describes the functions in this module, and introduces nomenclature and concepts used throughout this manual.
Chapter 2, Additive Error Reduction, describes additive error reduction including discussions of truncation of, reduction by, and perturbation of balanced realizations.
Chapter 3, Multiplicative Error Reduction, describes multiplicative error reduction presenting considerations for using multiplicative rather than additive error reduction.
Chapter 4, Frequency-Weighted Error Reduction, describes frequency-weighted error reduction problems, including controller reduction and fractional representations.
Chapter 5, Utilities, describes three utility functions: hankelsv( ),
stable( ), and compare( ).
Chapter 6, Tutorial, illustrates a number of the MRM functions and their underlying ideas.

Bibliographic References

Throughout this document, bibliographic references are cited with bracketed entries. For example, a reference to [VODM1] corresponds to a paper published by Van Overschee and De Moor. For a table of bibliographic references, refer to Appendix A, Bibliography.

Commonly Used Nomenclature

This manual uses the following general nomenclature:
Matrix variables are generally denoted with capital letters; vectors are represented in lowercase.
G(s) is used to denote a transfer function of a system where s is the Laplace variable. G(q) is used when both continuous and discrete systems are allowed.
H(s) is used to denote the frequency response, over some range of frequencies of a system where s is the Laplace variable. H(q) is used to indicate that the system can be continuous or discrete.
A single apostrophe following a matrix variable, for example, x’, denotes the transpose of that variable. An asterisk following a matrix variable, for example, A*, indicates the complex conjugate, or Hermitian, transpose of that variable.

Conventions

This publication makes use of the following types of conventions: font, format, symbol, mouse, and note. These conventions are detailed in Chapter 2, MATRIXx Publications, Online Help, and Customer Support, of the MATRIXx Getting Started Guide.

Related Publications

For a complete list of MATRIXx publications, refer to Chapter 2,
MATRIXx Publications, Online Help, and Customer Support, of the MATRIXx Getting Started Guide. The following documents are particularly
useful for topics covered in this manual:
• MATRIXx Getting Started Guide
• Xmath User Guide
• Control Design Module
• Interactive Control Design Module
• Interactive System Identification Module, Part 1
• Interactive System Identification Module, Part 2
• Model Reduction Module
• Optimization Module
• Robust Control Module
• Xμ Module

MATRIXx Help

Model Reduction Module function reference information is available in the MATRIXx Help. The MATRIXx Help includes all Model Reduction functions. Each topic explains a function's inputs, outputs, and keywords in detail. Refer to Chapter 2, MATRIXx Publications, Online Help, and Customer Support, of the MATRIXx Getting Started Guide for complete instructions on using the help feature.

Overview

The Xmath Model Reduction Module (MRM) provides a collection of tools for reducing the order of systems. Many of the functions are based on state-of-the-art algorithms developed in conjunction with researchers at the Australian National University, who were responsible for the original development of some of the algorithms. A common theme throughout the module is the use of Hankel singular values and balanced realizations, although considerations of numerical accuracy often dictate that these tools be used implicitly rather than explicitly. The tools are particularly suitable when, as generally here, quality of approximation is measured by closeness of frequency domain behavior.

Functions

As shown in Figure 1-1, functions are provided to handle four broad tasks:
• Model reduction with additive errors
• Model reduction with multiplicative errors
• Model reduction with frequency weighting of an additive error, including controller reduction
• Utility functions
Figure 1-1. MRM Functions
(Block diagram grouping the functions: Additive Error Model Reduction: balmoore, redschur, ophank, truncate, balance, mreduce. Multiplicative Model Reduction: bst, mulhank. Frequency Weighted Model Reduction: wtbalance, fracred. Utility Functions: hankelsv, stable, compare.)
Certain restrictions regarding minimality and stability are required of the input data, and are summarized in Table 1-1.
Table 1-1. MRM Restrictions

balance( ): A stable, minimal system
balmoore( ): A state-space system must be stable and minimal, having at least one input, output, and state
bst( ): A state-space system must be linear, continuous-time, and stable, with full rank along the jω-axis, including infinity
compare( ): Must be a state-space system
fracred( ): A state-space system must be linear and continuous
hankelsv( ): A system must be linear and stable
mreduce( ): A submatrix of a matrix must be nonsingular for continuous systems, and variant for discrete systems
mulhank( ): A state-space system must be linear, continuous-time, stable and square, with full rank along the jω-axis, including infinity
ophank( ): A state-space system must be linear, continuous-time and stable, but can be nonminimal
redschur( ): A state-space system must be stable and linear, but can be nonminimal
stable( ): No restriction
truncate( ): Any full-order state-space system
wtbalance( ): A state-space system must be linear and continuous. Interconnection of controller and plant must be stable, and/or weight must be stable.
Documentation of the individual functions sometimes indicates how the restrictions can be circumvented. There are a number of model reduction methods not covered here. These include:
• Padé Approximation
• Methods based on interpolating, or matching at discrete frequencies
• L2 approximation, in which the L2 norm of impulse response error (or, by Parseval's theorem, the L2 norm of the transfer-function error along the imaginary axis) serves as the error measure
• Markov parameter or impulse response matching, moment matching, covariance matching, and combinations of these, for example, q-COVER approximation
• Controller reduction using canonical interactions, balanced Riccati equations, and certain balanced controller reduction algorithms

Nomenclature

This manual uses standard nomenclature. The user should be familiar with the following:
• sup denotes supremum, the least upper bound.
• The acute accent (´) denotes matrix transposition.
• A superscripted asterisk (*) denotes matrix transposition and complex conjugation.
• λmax(A) for a square matrix A denotes the maximum eigenvalue, presuming there are no complex eigenvalues.
• Reλi(A) and |λi(A)| for a square matrix A denote an arbitrary real part and an arbitrary magnitude of an eigenvalue of A.
• ||X(jω)||∞ for a transfer function X(·) denotes sup over –∞ < ω < ∞ of λmax^1/2[X*(jω)X(jω)].
• An all-pass transfer function W(s) is one where |W(jω)| = 1 for all ω; to each pole, there corresponds a zero which is the reflection through the jω-axis of the pole, and there are no jω-axis poles.
• An all-pass transfer-function matrix W(s) is a square matrix where W′(–jω)W(jω) = I.
• P > 0 and P ≥ 0 for a symmetric or hermitian matrix denote positive and nonnegative definiteness.
• P1 > P2 and P1 ≥ P2 for symmetric or hermitian P1 and P2 denote that P1 – P2 is positive definite and nonnegative definite.
• A superscripted number sign (#) for a square matrix A denotes the Moore-Penrose pseudo-inverse of A.
An inequality or bound is tight if it can be met in practice; for example, log x ≤ x – 1 is tight because the inequality becomes an equality for x = 1. Again, if F(jω) denotes the Fourier transform of some f(t) in L2, the Heisenberg inequality states

   ∫ f²(t) dt ≤ 2 [∫ t² f²(t) dt]^1/2 [(1/2π) ∫ ω² |F(jω)|² dω]^1/2

and the bound is tight, since it is attained for f(t) = exp(–kt²).

Commonly Used Concepts

This section outlines some frequently used standard concepts.

Controllability and Observability Grammians

Suppose that G(s) = D + C(sI – A)⁻¹B is a transfer-function matrix with Reλi(A) < 0. Then there exist symmetric matrices P, Q defined by:

   PA′ + AP = –BB′
   QA + A′Q = –C′C
These are termed the controllability and observability grammians of the realization defined by {A,B,C,D}. (Sometimes in the code, WC is used for P and WO for Q.) They have a number of properties:
• P ≥ 0, with P > 0 if and only if [A,B] is controllable; Q ≥ 0, with Q > 0 if and only if [A,C] is observable.
• P = ∫₀^∞ e^At BB′ e^A′t dt and Q = ∫₀^∞ e^A′t C′C e^At dt.
• With vec P denoting the column vector formed by stacking column 1 of P on column 2 on column 3, and so on, and ⊗ denoting Kronecker product, [I ⊗ A + A ⊗ I] vec P = –vec(BB′).
The controllability grammian can be thought of as measuring the difficulty of controlling a system. More specifically, if the system is in the zero state initially, the minimum energy (as measured by the L2 norm of u) required to bring it to the state x0 is x0′P⁻¹x0; so small eigenvalues of P correspond to systems that are difficult to control, while zero eigenvalues correspond to uncontrollable systems.
The controllability grammian is also E[x(t)x′(t)] when the system ẋ = Ax + Bw has been excited from time –∞ by zero mean white noise with E[w(t)w′(s)] = δ(t – s)I.

The observability grammian can be thought of as measuring the information contained in the output concerning an initial state. If ẋ = Ax, y = Cx with x(0) = x0, then:

   ∫₀^∞ y′(t)y(t) dt = x0′Qx0

Systems that are easy to observe correspond to Q with large eigenvalues, and thus large output energy (when unforced).
lyapunov(A,B*B') produces P and lyapunov(A',C'*C) produces Q.

For a discrete-time G(z) = D + C(zI – A)⁻¹B with |λi(A)| < 1, P and Q are:

   P – APA′ = BB′
   Q – A′QA = C′C

The first dot point above remains valid. Also,

   P = Σk≥0 A^k BB′ (A′)^k   and   Q = Σk≥0 (A′)^k C′C A^k

with the sums being finite in case A is nilpotent (which is the case if the transfer-function matrix has a finite impulse response).

• [I – A ⊗ A] vec P = vec(BB′)

lyapunov( ) can be used to evaluate P and Q.
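As an illustration of these grammian definitions (independent of the Xmath lyapunov( ) function), the following sketch computes P and Q with SciPy's Lyapunov solvers; the particular A, B, C matrices are arbitrary assumptions chosen only for the example.

```python
# Illustrative sketch (Python/SciPy), not the Xmath lyapunov( ) function:
# controllability and observability grammians of a stable system.
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, solve_discrete_lyapunov

# Example stable continuous-time realization (arbitrary illustration)
A = np.array([[-1.0, 0.5], [0.0, -2.0]])
B = np.array([[1.0], [1.0]])
C = np.array([[1.0, 0.0]])

# Continuous time: AP + PA' = -BB'  and  A'Q + QA = -C'C
P = solve_continuous_lyapunov(A, -B @ B.T)
Q = solve_continuous_lyapunov(A.T, -C.T @ C)

# Discrete time (|eig(Ad)| < 1): P - Ad P Ad' = BB'  and  Q - Ad' Q Ad = C'C
Ad = np.array([[0.5, 0.1], [0.0, 0.3]])
Pd = solve_discrete_lyapunov(Ad, B @ B.T)
Qd = solve_discrete_lyapunov(Ad.T, C.T @ C)

print(np.linalg.eigvalsh(P))   # > 0 iff [A, B] is controllable
print(np.linalg.eigvalsh(Q))   # > 0 iff [A, C] is observable
```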

Hankel Singular Values

If P, Q are the controllability and observability grammians of a transfer-function matrix (in continuous or discrete time), the Hankel singular values are the quantities λi^1/2(PQ). Notice the following:
• All eigenvalues of PQ are nonnegative, and so are the Hankel singular values.
• The Hankel singular values are independent of the realization used to calculate them: when A, B, C, D are replaced by TAT⁻¹, TB, CT⁻¹ and D, then P and Q are replaced by TPT′ and (T⁻¹)′QT⁻¹; then PQ is replaced by TPQT⁻¹, and the eigenvalues are unaltered.
• The number of nonzero Hankel singular values is the order or McMillan degree of the transfer-function matrix, or the state dimension in a minimal realization.
• Suppose the transfer-function matrix corresponds to a discrete-time system, with state variable dimension n. Then the infinite Hankel matrix,

   H = [ CB     CAB    CA²B   ···
         CAB    CA²B   ···
         CA²B   ···
         ···                       ]

has for its singular values the n nonzero Hankel singular values, together with an infinite number of zero singular values.
• The Hankel singular values of a (stable) all-pass system (or all-pass matrix) are all 1.
Slightly different procedures are used for calculating the Hankel singular values (and so-called weighted Hankel singular values) in the various functions. These procedures are summarized in Table 1-2.
Table 1-2. Calculating Hankel Singular Values

balance( ): For a discussion of the balancing algorithm, refer to the Internally Balanced Realizations section; the Hankel singular values are given by diag(R^1/2) = HSV.
balmoore( ): For a discussion of the balancing algorithm, refer to the Internally Balanced Realizations section; the matrix SH yields the Hankel singular values through diag(SH).
hankelsv( ): real(sqrt(eig(p*q)))
ophank( ): Calls hankelsv( ).
redschur( ): Computes a Schur decomposition of P*Q and then takes the square roots of the diagonal entries.
bst( ), mulhank( ), wtbalance( ), fracred( ): Same as redschur( ), except either P or Q can be a weighted grammian.
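The hankelsv( )-style computation real(sqrt(eig(p*q))) can be mirrored outside Xmath; the following is a minimal sketch in Python/SciPy, using an arbitrary stable realization as the assumed example.

```python
# Sketch: Hankel singular values as square roots of the eigenvalues of P*Q.
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

A = np.array([[-1.0, 0.2], [0.0, -3.0]])
B = np.array([[1.0], [2.0]])
C = np.array([[1.0, 1.0]])

P = solve_continuous_lyapunov(A, -B @ B.T)      # controllability grammian
Q = solve_continuous_lyapunov(A.T, -C.T @ C)    # observability grammian

eigs = np.linalg.eigvals(P @ Q).astype(complex)
hsv = np.sort(np.real(np.sqrt(eigs)))[::-1]
print(hsv)   # invariant under state transformations T A T^-1, T B, C T^-1
```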

Internally Balanced Realizations

Suppose that a realization of a transfer-function matrix has the controllability and observability grammian property that P = Q = Σ for some diagonal Σ. Then the realization is termed internally balanced. Notice that the diagonal entries σi of Σ are square roots of the eigenvalues of PQ, that is, they are the Hankel singular values. Often the entries of Σ are assumed ordered with σi ≥ σi+1.

As noted in the discussion of grammians, systems with small (eigenvalues of) P are hard to control and those with small (eigenvalues of) Q are hard to observe. Now a state transformation T = αI will cause P = Q to be replaced by α²P, α⁻²Q, implying that ease of control can be obtained at the expense of difficulty of observation, and conversely. Balanced realizations are those where ease of control has been balanced against ease of observation.

Given an arbitrary realization, there are a number of ways of finding a state-variable coordinate transformation bringing it to balanced form. A good survey of the available algorithms for balancing is in [LHPW87]. One of these is implemented in the Xmath function balance( ).
The one implemented in balmoore( ) as part of this module is more sophisticated, but more time consuming. It proceeds as follows:
1. Singular value decompositions of P and Q are defined. Because P and Q are symmetric, this is equivalent to diagonalizing P and Q by orthogonal transformations:

      P = Uc Sc Uc′      Q = Uo So Uo′

2. The matrix

      H = So^1/2 Uo′ Uc Sc^1/2

   is constructed, and from it, a singular value decomposition is obtained:

      H = UH SH VH′

3. The balancing transformation is given by:

      T = Uo So^–1/2 UH SH^1/2

   The balanced realization is T⁻¹AT, T⁻¹B, CT.
This is almost the algorithm set out in Section II of [LHPW87]. The one difference (and it is minor) is that in [LHPW87], lower triangular Cholesky factors of P and Q are used in place of Uc Sc^1/2 and Uo So^1/2 in forming H in step 2. The grammians themselves need never be formed, as these Cholesky factors can be computed directly from A, B, and C in both continuous and discrete time; this, however, is not done in balmoore.

The algorithm has the property that:

   T′QT = T⁻¹P(T⁻¹)′ = SH

Thus the diagonal entries of SH are the Hankel singular values.

The algorithm implemented in balance( ) is older; refer to [Lau80]. A lower triangular Cholesky factor Lc of P is found, so that LcLc′ = P. Then the symmetric matrix Lc′QLc is diagonalized (through a singular value decomposition), thus Lc′QLc = VRU′, with actually V = U. Finally, the coordinate basis transformation is given by T = LcUR^–1/4, resulting in

   T′QT = T⁻¹P(T⁻¹)′ = R^1/2
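The three steps of the balmoore-style procedure can be sketched numerically as follows. This is only an illustration of the structure of the algorithm (Python/NumPy), not the balmoore( ) or balance( ) implementation, and the example realization is an arbitrary assumption.

```python
# Sketch of a balancing transformation built from the grammians P and Q.
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, svd

A = np.array([[-1.0, 1.0], [0.0, -2.0]])
B = np.array([[1.0], [1.0]])
C = np.array([[1.0, 0.5]])

P = solve_continuous_lyapunov(A, -B @ B.T)
Q = solve_continuous_lyapunov(A.T, -C.T @ C)

# Step 1: orthogonal diagonalization of the symmetric grammians
Sc, Uc = np.linalg.eigh(P)      # P = Uc diag(Sc) Uc'
So, Uo = np.linalg.eigh(Q)      # Q = Uo diag(So) Uo'

# Step 2: H = So^{1/2} Uo' Uc Sc^{1/2} and its SVD
H = np.diag(np.sqrt(So)) @ Uo.T @ Uc @ np.diag(np.sqrt(Sc))
UH, SH, VHt = svd(H)

# Step 3: balancing transformation T = Uo So^{-1/2} UH SH^{1/2}
T = Uo @ np.diag(1.0 / np.sqrt(So)) @ UH @ np.diag(np.sqrt(SH))
Ti = np.linalg.inv(T)

Ab, Bb, Cb = Ti @ A @ T, Ti @ B, C @ T      # balanced realization T^-1AT, T^-1B, CT
print(T.T @ Q @ T)                          # both prints should equal diag(SH),
print(Ti @ P @ Ti.T)                        # the Hankel singular values
```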

Singular Perturbation

A common procedure for approximating systems is the technique of Singular Perturbation. The underlying heuristic reasoning is as follows. Suppose there is a system with the property that the unforced responses of the system contain some modes which decay to zero extremely fast. Then an approximation to the system behavior may be attained by setting state variable derivatives associated with these modes to zero, even in the forced case. The phrase “associated with the modes” is loose: exactly what occurs is shown below. The phrase “even in the forced case” captures a logical flaw in the argument: smallness in the unforced case associated with initial conditions is not identical with smallness in the forced case.
Suppose the system is defined by:
   ẋ1 = A11x1 + A12x2 + B1u
   ẋ2 = A21x1 + A22x2 + B2u                                    (1-1)
   y  = C1x1 + C2x2 + Du
and also Reλi(A22) < 0 and Reλi(A11 – A12A22⁻¹A21) < 0.
Usually, we expect that

   Reλi(A22) « Reλi(A11 – A12A22⁻¹A21)

in the sense that the intuitive argument hinges on this, but it is not necessary.
Then a singular perturbation is obtained by replacing ẋ2 by zero; this means that:

   A21x1 + A22x2 + B2u = 0

or

   x2 = –A22⁻¹A21x1 – A22⁻¹B2u
Accordingly,

   ẋ1 = (A11 – A12A22⁻¹A21)x1 + (B1 – A12A22⁻¹B2)u
   y  = (C1 – C2A22⁻¹A21)x1 + (D – C2A22⁻¹B2)u                  (1-2)
Equation 1-2 may be an approximation for Equation 1-1. This means that:
• The transfer-function matrices may be similar.
• If Equation 1-2 is excited by some u(·), with initial condition x1(t0), and if Equation 1-1 is excited by the same u(·) with initial condition x1(t0) and x2(t0) = –A22⁻¹A21x1(t0) – A22⁻¹B2u(t0), then x1(·) and y(·) computed from Equation 1-1 and from Equation 1-2 should be similar.
• If Equation 1-1 and Equation 1-2 are excited with the same u(·), have the same x1(t0), and Equation 1-1 has arbitrary x2(t0), then x1(·) and y(·) computed from Equation 1-1 and Equation 1-2 should be similar after a possible initial transient.
As far as the transfer-function matrices are concerned, it can be verified that they are actually equal at DC.
Similar considerations govern the discrete-time problem, where

   x1(k+1) = A11x1(k) + A12x2(k) + B1u(k)
   x2(k+1) = A21x1(k) + A22x2(k) + B2u(k)
   y(k)    = C1x1(k) + C2x2(k) + Du(k)

can be approximated by:

   x1(k+1) = [A11 + A12(I – A22)⁻¹A21]x1(k) + [B1 + A12(I – A22)⁻¹B2]u(k)
   y(k)    = [C1 + C2(I – A22)⁻¹A21]x1(k) + [D + C2(I – A22)⁻¹B2]u(k)

mreduce( ) can carry out singular perturbation. For further discussion, refer to Chapter 2, Additive Error Reduction. If Equation 1-1 is balanced, singular perturbation is provably attractive.
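Equation 1-2 is a direct matrix computation; the following hedged sketch (Python/NumPy, independent of the Xmath mreduce( ) implementation) applies it to an assumed two-block example and checks the DC-gain match noted above.

```python
# Sketch: continuous-time singular perturbation (Equation 1-2).
import numpy as np

def singular_perturbation(A, B, C, D, k):
    """Retain the first k states; set derivatives of the remaining states to zero."""
    A11, A12 = A[:k, :k], A[:k, k:]
    A21, A22 = A[k:, :k], A[k:, k:]
    B1, B2 = B[:k, :], B[k:, :]
    C1, C2 = C[:, :k], C[:, k:]
    A22i = np.linalg.inv(A22)                  # A22 must be nonsingular
    Ar = A11 - A12 @ A22i @ A21
    Br = B1 - A12 @ A22i @ B2
    Cr = C1 - C2 @ A22i @ A21
    Dr = D - C2 @ A22i @ B2
    return Ar, Br, Cr, Dr

# Arbitrary example with a fast mode in the second state
A = np.array([[-1.0, 0.5], [0.2, -100.0]])
B = np.array([[1.0], [1.0]])
C = np.array([[1.0, 1.0]])
D = np.array([[0.0]])

Ar, Br, Cr, Dr = singular_perturbation(A, B, C, D, k=1)
# DC gains agree exactly: D - C A^-1 B  versus  Dr - Cr Ar^-1 Br
print(D - C @ np.linalg.inv(A) @ B, Dr - Cr @ np.linalg.inv(Ar) @ Br)
```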

Spectral Factorization

Let W(s) be a stable transfer-function matrix, and suppose a system S with transfer-function matrix W(s) is excited by zero mean unit intensity white noise. Then the output of S is a stationary process with a spectrum Φ(s) related to W(s) by:
   Φ(s) = W(s)W′(–s)                                           (1-3)
Evidently,

   Φ(jω) = W(jω)W*(jω)

so that Φ(jω) is nonnegative hermitian for all ω; when W(jω) is a scalar, so is Φ(jω), with Φ(jω) = |W(jω)|².

In the matrix case, Φ is singular for some ω only if W does not have full rank there, and in the scalar case only if W has a zero there.

Spectral factorization, as shown in Example 1-1, seeks a W(jω), given Φ(jω). In the rational case, a W(jω) exists if and only if Φ(jω) is nonnegative hermitian for all ω. If Φ is scalar, then Φ(jω) ≥ 0 for all ω. Normally one restricts attention to Φ(·) with limω→∞ Φ(jω) < ∞. A key result is that, given a rational, nonnegative hermitian Φ(jω) with limω→∞ Φ(jω) < ∞, there exists a rational W(s) where:
• W(∞) < ∞.
• W(s) is stable.
• W(s) is minimum phase, that is, the rank of W(s) is constant in Re[s] > 0. In the scalar case, all zeros of W(s) lie in Re[s] ≤ 0, or in Re[s] < 0 if Φ(jω) > 0 for all ω. In the matrix case, and if Φ(jω) is nonsingular for some ω, it means that W(s) is square and W⁻¹(s) has all its poles in Re[s] ≤ 0, or in Re[s] < 0 if Φ(jω) is nonsingular for all ω.

Moreover, the particular W(s) previously defined is unique, to within right multiplication by a constant orthogonal matrix. In the scalar case, this means that W(s) is determined to within a ±1 multiplier.
Example 1-1   Example of Spectral Factorization

Suppose:

   Φ(jω) = (ω² + 1)/(ω² + 4)

Then Equation 1-3 is satisfied by W(s) = (s + 1)/(s + 2), which is stable and minimum phase. Also, Equation 1-3 is satisfied by ±(s – 1)/(s + 2) and e^–sT(s + 1)/(s + 2), and so forth, but none of these is minimum phase.

bst( ) and mulhank( ) both require execution within the program of a spectral factorization; the actual algorithm for achieving the spectral factorization depends on a Riccati equation. The concepts of a spectrum and spectral factor also underpin aspects of wtbalance( ).

Low Order Controller Design Through Order Reduction

The Model Reduction Module is particularly suitable for achieving low order controller design for a high order plant. This section explains some of the broad issues involved.
Most modern controller design methods, for example, LQG and H∞, yield controllers of order roughly comparable with that of the plant. It follows that, to obtain a low order controller using such methods, one must either follow a high order controller design by a controller reduction step, or reduce an initially given high order plant model, and then design a controller using the resulting low order plant, with the understanding that the controller will actually be used on the high order plant. Refer to Figure 1-2.

Figure 1-2. Low Order Controller Design for a High Order Plant
(Block diagram: the high order plant leads either to a high order controller, which is reduced (controller reduction) to a low order controller, or, through plant reduction, to a low order plant from which a low order controller is designed.)

Generally speaking, in any design procedure, it is better to postpone approximation to a late step of the procedure: if approximation is done early, the subsequent steps of the design procedure may have unpredictable effects on the approximation errors. Hence, the scheme based on high order controller design followed by reduction is generally to be preferred.
Controller reduction should aim to preserve closed-loop properties as far as possible. Hence the controller reduction procedures advocated in this module reflect the plant in some way. This leads to the frequency weighted reduction schemes of wtbalance( ) and fracred( ), as described in Chapter 4, Frequency-Weighted Error Reduction. Plant reduction logically should also seek to preserve closed-loop properties, and thus should involve the controller. With the controller unknown however, this is impossible. Nevertheless, it can be argued, on the basis of the high loop gain property within the closed-loop bandwidth that is typical of many systems, that
multiplicative reduction, as described in Chapter 3, Multiplicative Error Reduction, is a sound approach. Chapter 3, Multiplicative Error Reduction, and Chapter 4, Frequency-Weighted Error Reduction, develop these arguments more fully.
Additive Error Reduction
This chapter describes additive error reduction including discussions of truncation of, reduction by, and perturbation of balanced realizations.

Introduction

Additive error reduction focuses on errors of the form

   ||G(jω) – Gr(jω)||

where G is the originally given transfer function, or model, and Gr is the reduced one. Of course, in discrete-time, one works instead with:

   ||G(e^jω) – Gr(e^jω)||

As is argued in later chapters, if one is reducing a plant that will sit inside a closed loop, or if one is reducing a controller that again is sitting in a closed loop, focus on additive error model reduction may not be appropriate. It is, however, extremely appropriate in considering reducing the transfer function of a filter. One pertinent application comes specifically from digital filtering: a great many design algorithms lead to a finite impulse response (FIR) filter which can have a very large number of coefficients when poles are close to the unit circle. Model reduction provides a means to replace an FIR design by a much lower order infinite impulse response (IIR) design, with close matching of the transfer function at all frequencies.

Truncation of Balanced Realizations

A group of functions can be used to achieve a reduction through truncation of a balanced realization. This means that if the original system is
   ẋ1 = A11x1 + A12x2 + B1u
   ẋ2 = A21x1 + A22x2 + B2u                                    (2-1)
   y  = C1x1 + C2x2 + Du
and the realization is internally balanced, then a truncation is provided by
   ẋ1 = A11x1 + B1u
   y  = C1x1 + Du                                              (2-2)
The functions in question are:
balmoore( )
balance( ) (refer to the Xmath Help)
truncate( )
redschur( )
One only can speak of internally balanced realizations for systems which are stable; if the aim is to reduce a transfer function matrix G(s) which contains unstable poles, one must additively decompose it into a stable part and unstable part, reduce the stable part, and then add the unstable part back in. The function stable( ), described in Chapter 5, Utilities, can be used to decompose G(s). Thus:

   G(s) = Gs(s) + Gu(s)        (Gs(s) stable, Gu(s) unstable)
   Gsr(s) = found by algorithm (reduction of Gs(s))
   Gr(s) = Gsr(s) + Gu(s)      (reduction of G(s))
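The decomposition G(s) = Gs(s) + Gu(s) can be illustrated in state-space terms with an ordered Schur form; the sketch below (Python/SciPy) is only an illustration of the idea, not the stable( ) algorithm, and the example matrices are assumptions.

```python
# Sketch: additive decomposition of a realization into stable and unstable parts.
import numpy as np
from scipy.linalg import schur, solve_sylvester

def stable_unstable_split(A, B, C):
    # Real Schur form with the left-half-plane (stable) eigenvalues ordered first.
    T, Z, k = schur(A, output='real', sort='lhp')
    T11, T12, T22 = T[:k, :k], T[:k, k:], T[k:, k:]
    # Decouple the blocks: find X with T11 X - X T22 + T12 = 0.
    X = solve_sylvester(T11, -T22, -T12)
    # State transformation V = Z [[I, X], [0, I]] block-diagonalizes A.
    n = A.shape[0]
    S = np.eye(n)
    S[:k, k:] = X
    V = Z @ S
    Vi = np.linalg.inv(V)
    At, Bt, Ct = Vi @ A @ V, Vi @ B, C @ V
    stable_part = (At[:k, :k], Bt[:k, :], Ct[:, :k])
    unstable_part = (At[k:, k:], Bt[k:, :], Ct[:, k:])
    return stable_part, unstable_part

A = np.array([[-1.0, 2.0], [0.0, 3.0]])     # one stable, one unstable eigenvalue
B = np.array([[1.0], [1.0]])
C = np.array([[1.0, 1.0]])
(As, Bs, Cs), (Au, Bu, Cu) = stable_unstable_split(A, B, C)
print(np.linalg.eigvals(As), np.linalg.eigvals(Au))
```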
A very attractive feature of the truncation procedure is the availability of an error bound. More precisely, suppose that the controllability and observability grammians for [Enn84] are

   P = Q = Σ = diag(Σ1, Σ2)

with the diagonal entries of Σ in decreasing order, that is, σ1 ≥ σ2 ≥ ···. Then the key result is

   ||G(jω) – Gr(jω)||∞ ≤ 2 tr Σ2

with G, Gr the transfer function matrices of Equation 2-1 and Equation 2-2, respectively. This formula shows that small singular values can, without great cost, be thrown away. It also is valid in discrete time, and can be improved upon if there are repeated Hankel singular values. Provided that the smallest diagonal entry of Σ1 strictly exceeds the largest diagonal entry of Σ2, the reduced order system is guaranteed to be stable.
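The truncation step and the 2 tr Σ2 bound can be exercised end to end; the following sketch (Python/SciPy) parallels, but does not reproduce, the balmoore( )/truncate( ) route, and the third-order example is an arbitrary assumption.

```python
# Sketch: balanced truncation of a stable system and the 2*trace(Sigma2) bound.
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, svd

def balance(A, B, C):
    P = solve_continuous_lyapunov(A, -B @ B.T)
    Q = solve_continuous_lyapunov(A.T, -C.T @ C)
    Sc, Uc = np.linalg.eigh(P)
    So, Uo = np.linalg.eigh(Q)
    H = np.diag(np.sqrt(So)) @ Uo.T @ Uc @ np.diag(np.sqrt(Sc))
    UH, SH, _ = svd(H)
    T = Uo @ np.diag(1.0 / np.sqrt(So)) @ UH @ np.diag(np.sqrt(SH))
    Ti = np.linalg.inv(T)
    return Ti @ A @ T, Ti @ B, C @ T, SH       # SH holds the Hankel singular values

A = np.diag([-1.0, -2.0, -30.0]) + np.triu(np.ones((3, 3)), 1)
B = np.array([[1.0], [1.0], [1.0]])
C = np.array([[1.0, 1.0, 1.0]])                # D carries over unchanged (zero here)

Ab, Bb, Cb, hsv = balance(A, B, C)
k = 2                                          # retain the k largest Hankel singular values
Ar, Br, Cr = Ab[:k, :k], Bb[:k, :], Cb[:, :k]  # truncation (Equation 2-2)
bound = 2.0 * np.sum(hsv[k:])                  # 2 * trace(Sigma2)

w = np.logspace(-2, 3, 400)
err = [abs(C @ np.linalg.inv(1j*wi*np.eye(3) - A) @ B
           - Cr @ np.linalg.inv(1j*wi*np.eye(k) - Ar) @ Br) for wi in w]
print(float(np.max(err)), bound)               # actual error <= bound
```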
Several other points concerning the error can be made:
• The error G(jω) – Gr(jω) as a function of frequency is not flat; it is zero at ω = ∞, and may take its largest value at ω = 0, so that there is in general no matching of DC gains of the original and reduced system.
• The actual error may be considerably less than the error bound at all frequencies, so that the error bound formula can be no more than an advance guide. However, the bound is tight when the dimension reduction is 1 and the reduction is of a continuous-time transfer-function matrix.
• With g(·) and gr(·) denoting the impulse responses of G and Gr, and with Gr of degree k, the following L1 bound holds [GCP88]:

      ||g – gr||1 ≤ 4(2k + 1) tr Σ2

  This bound also will apply for the L∞ error on the step response.
It is helpful to note one situation where reduction is likely to be difficult (so that Σ will contain few diagonal entries which are, relatively, very small). Suppose G(s), strictly proper, has degree n and has (n – 1) unstable zeros. Then as ω runs from zero to infinity, the phase of G(s) will change by (2n – 1)π/2. Much of this change may occur in the passband. Suppose Gr(s) has degree n – 1; it can have no more than (n – 2) zeros, since it is strictly proper. So, even if all zeros are unstable, the maximum phase shift when ω moves from 0 to ∞ is (2n – 3)π/2. It follows that if G(jω) remains large in magnitude at frequencies when the phase shift has moved past (2n – 3)π/2, approximation of G by Gr will necessarily be poor. Put another way, good approximation may depend somehow on removing roughly cancelling pole-zero pairs; when there are no left half plane zeros, there can be no rough cancellation, and so approximation is unsatisfactory.
As a working rule of thumb, if there are p right half plane zeros in the passband of a strictly proper G(s), reduction to a Gr(s) of order less than p + 1 is likely to involve substantial errors. For non-strictly proper G(s), having p right half plane zeros means that reduction to a Gr(s) of order less than p is likely to involve substantial errors.

An all-pass function exemplifies the problem: there are n stable poles and n unstable zeros. Since all singular values are 1, the error bound formula indicates for a reduction to order n – 1 (when it is not just a bound, but exact) a maximum error of 2.

Another situation where poor approximation can arise is when a highly oscillatory system is to be replaced by a system with a real pole.

Reduction Through Balanced Realization Truncation

This section briefly describes the functions balmoore( ), balance( ), truncate( ), and redschur( ), which use balancing and truncation to achieve reduction.
balmoore( ): Computes an internally balanced realization of a system and optionally truncates the realization to form an approximation.
balance( ): Computes an internally balanced realization of a system.
truncate( ): This function truncates a system. It allows examination of a sequence of different reduced order models formed from the one balanced realization.
redschur( ): These functions in theory function almost the same as the two features of balmoore( ). That is, they produce a state-variable realization of a reduced order model, such that the transfer function matrix of the model could have resulted by truncating a balanced realization of the original full order transfer function matrix. However, the initially given realization of the original transfer function matrix is never actually balanced, which can be a numerically hazardous step. Moreover, the state-variable realization of the reduced order model is not one in general obtainable by truncation of an internally-balanced realization of the full order model.
Figure 2-1 sets out several routes to a reduced-order realization. In continuous time, a truncation of a balanced realization is again balanced. This is not the case for discrete time, but otherwise it looks the same.
Figure 2-1. Different Approaches for Obtaining the Same Reduced Order Model
(Block diagram relating the full order realization and the functions balmoore (with both steps, or with only its first step), balance, truncate, and redschur, which lead either to a balanced realization of the reduced order model (in continuous time) or to a nonbalanced realization of the reduced order model; both yield the same reduced order model transfer function.)

Singular Perturbation of Balanced Realization

Singular perturbation of a balanced realization is an attractive way to produce a reduced order model. Suppose G(s) is defined by,
   ẋ1 = A11x1 + A12x2 + B1u
   ẋ2 = A21x1 + A22x2 + B2u
   y  = C1x1 + C2x2 + Du
with controllability and observability grammians given by

   P = Q = Σ = diag(Σ1, Σ2)

in which the diagonal entries of Σ are in decreasing order, that is, σ1 ≥ σ2 ≥ ···, and such that the last diagonal entry of Σ1 exceeds the first diagonal entry of Σ2. It turns out that Reλi(A22) < 0 and Reλi(A11 – A12A22⁻¹A21) < 0, and a reduced order model Gr(s) can be defined by:

   ẋ = (A11 – A12A22⁻¹A21)x + (B1 – A12A22⁻¹B2)u
   y = (C1 – C2A22⁻¹A21)x + (D – C2A22⁻¹B2)u
The attractive feature [LiA89] is that the same error bound holds as for balanced truncation. For example,

   ||G(jω) – Gr(jω)||∞ ≤ 2 tr Σ2
Although the error bounds are the same, the actual frequency pattern of the errors, and the actual maximum modulus, need not be the same for reduction to the same order. One crucial difference is that balanced truncation provides exact matching at ω = ∞, but does not match at DC, while singular perturbation is exactly the other way round. Perfect matching at DC can be a substantial advantage, especially if input signals are known to be band-limited.
Singular perturbation can be achieved with mreduce( ). Figure 2-1 shows the two alternative approaches. For both continuous-time and discrete-time reductions, the end result is a balanced realization.

Hankel Norm Approximation

In Hankel norm approximation, one relies on the fact that if one chooses an approximation to exactly minimize one norm (the Hankel norm) then the infinity norm will be approximately minimized. The Hankel norm is defined in the following way. Let G(s) be a (rational) stable transfer function matrix. Consider the way the associated impulse response maps inputs defined over (–∞, 0] in L2 into outputs, and focus on the output over [0, ∞). Define the input as u(t) for t < 0, and set v(t) = u(–t). Define the output as y(t) for t > 0. Then the mapping is

   y(t) = ∫₀^∞ C exp[A(t + r)]B v(r) dr

if G(s) = C(sI – A)⁻¹B. The norm of the associated operator is the Hankel norm ||G||H of G. A key result is that if σ1 ≥ σ2 ≥ ··· are the Hankel singular values of G(s), then

   ||G||H = σ1
To avoid minor confusion, suppose that all Hankel singular values of G are distinct. Then consider approximating G by some stable Ĝ of prescribed degree k such that ||G – Ĝ||H is minimized. It turns out that

   inf over Ĝ of degree k of ||G – Ĝ||H = σk+1(G)

and there is an algorithm available for obtaining Ĝ. Further, the optimum Ĝ which is minimizing ||G – Ĝ||H does a reasonable job of minimizing ||G – Ĝ||∞, because it can be shown that

   ||G – Ĝ||∞ ≤ Σ (j = k+1 to n) σj

where n = deg G, with this bound subject to the proviso that G and Ĝ are allowed to be nonzero and different at s = ∞.

The bound on ||G – Ĝ||∞ is one half that applying for balanced truncation. However:
• It is actual error that is important in practice (not bounds).
• The Hankel norm approximation does not give zero error at ω = ∞ or at ω = 0. Balanced realization truncation gives zero error at ω = ∞, and singular perturbation of a balanced realization gives zero error at ω = 0.

There is one further connection between optimum Hankel norm approximation and L∞ error. If one seeks to approximate G by a sum Ĝ + F, with Ĝ stable and of degree k and with F unstable, then:

   inf over Ĝ of degree k and F unstable of ||G – Ĝ – F||∞ = σk+1(G)
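The quantities in this section are all computable from the Hankel singular values; a short sketch (Python/SciPy) with an assumed example realization prints the Hankel norm, the best achievable Hankel norm error σk+1, and the two infinity-norm bounds discussed above.

```python
# Sketch: Hankel norm and the error levels quoted for optimal Hankel norm approximation.
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

A = np.diag([-1.0, -3.0, -10.0]) + np.tril(0.5 * np.ones((3, 3)), -1)
B = np.array([[1.0], [1.0], [1.0]])
C = np.array([[1.0, 0.5, 0.2]])

P = solve_continuous_lyapunov(A, -B @ B.T)
Q = solve_continuous_lyapunov(A.T, -C.T @ C)
eigs = np.linalg.eigvals(P @ Q).astype(complex)
hsv = np.sort(np.real(np.sqrt(eigs)))[::-1]

k = 1                                  # degree of the approximation G_hat
print(hsv[0])                          # Hankel norm of G
print(hsv[k])                          # best achievable Hankel norm error, sigma_{k+1}
print(np.sum(hsv[k:]))                 # infinity-norm bound for the optimal G_hat
print(2 * np.sum(hsv[k:]))             # corresponding balanced-truncation bound (twice as large)
```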