Communications and Control Engineering

Ai-Guo Wu

Ying Zhang

Complex Conjugate Matrix Equations for Systems and Control

Communications and Control Engineering

Series editors

Alberto Isidori, Roma, Italy

Jan H. van Schuppen, Amsterdam, The Netherlands

Eduardo D. Sontag, Piscataway, USA

Miroslav Krstic, La Jolla, USA

More information about this series at http://www.springer.com/series/61

Ai-Guo Wu • Ying Zhang

Complex Conjugate Matrix Equations for Systems and Control

Ai-Guo Wu

Harbin Institute of Technology, Shenzhen

University Town of Shenzhen

Shenzhen

China

Ying Zhang

Harbin Institute of Technology, Shenzhen

University Town of Shenzhen

Shenzhen

China

ISSN 0178-5354

ISSN 2197-7119 (electronic)

Communications and Control Engineering

ISBN 978-981-10-0635-7

ISBN 978-981-10-0637-1 (eBook)

DOI 10.1007/978-981-10-0637-1

Library of Congress Control Number: 2016942040

Mathematics Subject Classiﬁcation (2010): 15A06, 11Cxx

© Springer Science+Business Media Singapore 2017

This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part

of the material is concerned, speciﬁcally the rights of translation, reprinting, reuse of illustrations,

recitation, broadcasting, reproduction on microﬁlms or in any other physical way, and transmission

or information storage and retrieval, electronic adaptation, computer software, or by similar or

dissimilar methodology now known or hereafter developed.

The use of general descriptive names, registered names, trademarks, service marks, etc. in this

publication does not imply, even in the absence of a speciﬁc statement, that such names are exempt

from the relevant protective laws and regulations and therefore free for general use.

The publisher, the authors and the editors are safe to assume that the advice and information in this

book are believed to be true and accurate at the date of publication. Neither the publisher nor the

authors or the editors give a warranty, express or implied, with respect to the material contained

herein or for any errors or omissions that may have been made.

Printed on acid-free paper

This Springer imprint is published by Springer Nature

The registered company is Springer Science+Business Media Singapore Pte Ltd.

To our supervisor, Prof. Guang-Ren Duan

To Hong-Mei, and Yi-Tian

To Rui, and Qi-Yu

(Ai-Guo Wu)

(Ying Zhang)

Preface

The theory of matrix equations is an important branch of mathematics and has broad

applications in many engineering ﬁelds, such as control theory, information theory,

and signal processing. Speciﬁcally, algebraic Lyapunov matrix equations play vital

roles in stability analysis for linear systems, and coupled Lyapunov matrix equations appear in the analysis of Markovian jump linear systems; algebraic Riccati equations are encountered in optimal control. For these reasons, matrix equations have been extensively investigated by scholars from various fields, and the literature on matrix equations is now very rich. Matrix equations are often covered

in some books on linear algebra, matrix analysis, and numerical analysis. We list

several books here, for example, Topics in Matrix Analysis by R.A. Horn and

C.R. Johnson [143], The Theory of Matrices by P. Lancaster and M. Tismenetsky

[172], and Matrix Analysis and Applied Linear Algebra by C.D. Meyer [187]. In

addition, there are some books on special matrix equations, for example, Lyapunov

Matrix Equations in System Stability and Control by Z. Gajic [128], Matrix Riccati

Equations in Control and Systems Theory by H. Abou-Kandil [2], and Generalized

Sylvester Equations: Uniﬁed Parametric Solutions by Guang-Ren Duan [90]. It

should be pointed out that all the matrix equations investigated in the aforementioned books are in the real domain. To date, there seems to be no book on complex

matrix equations with the conjugate of unknown matrices. For convenience, this

class of equations is called complex conjugate matrix equations.

The ﬁrst author of this book and his collaborators began to consider complex

matrix equations with the conjugate of unknown matrices in 2005, inspired by the work [155] of Jiang published in Linear Algebra and its Applications. Since then, he

and his collaborators have published many papers on complex conjugate matrix

equations. Recently, the second author of this book joined this ﬁeld, and has

obtained some interesting results. In addition, some complex conjugate matrix

equations have found applications in the analysis and design of antilinear systems.

This book aims to provide a relatively systematic introduction to complex conjugate

matrix equations and their applications in discrete-time antilinear systems.


The book has 12 chapters. In Chap. 1, ﬁrst a survey is given on linear matrix

equations, and then recent development on complex conjugate matrix equations is

summarized. Some mathematical preliminaries to be used in this book are collected

in Chap. 2. Besides these two chapters, the rest of this book is partitioned into three

parts. The ﬁrst part contains Chaps. 3–5, and focuses on the iterative solutions for

several types of complex conjugate matrix equations. The second part consists of

Chaps. 6–10, and focuses on explicit closed-form solutions for some complex

conjugate matrix equations. In the third part, including Chaps. 11 and 12, several

applications of complex conjugate matrix equations are considered. In Chap. 11,

stability analysis of discrete-time antilinear systems is investigated, and some stability criteria are given in terms of anti-Lyapunov matrix equations, which are

special complex conjugate matrix equations. In Chap. 12, some feedback design

problems are solved for discrete-time antilinear systems by using several types of

complex conjugate matrix equations. Except for part of Chap. 2 and Sect. 6.1.1, the material of this book is based on our own research work, including some unpublished results.

The intended audience of this monograph includes students and researchers in

areas of control theory, linear algebra, communication, numerical analysis, and so

on. An appropriate background for this monograph would be a first course in

linear algebra and linear systems theory.

Since the 1980s, many researchers have devoted much effort to complex conjugate matrix equations, and many contributions have been made to this area. Owing to space limitations and the organization of the book, many of their published results are not included or even cited. We extend our apologies to these researchers.

It was under the supervision of our Ph.D. advisor, Prof. Guang-Ren Duan at Harbin Institute of Technology (HIT), that we entered the field of matrix equations and their applications in control systems design. Moreover, Prof. Duan has also made many contributions to the investigation of complex conjugate matrix equations, and has coauthored many papers with the first author. Some results in these papers have been included in this book. Therefore, when we began preparing the manuscript, we intended to list Prof. Duan as the first author of this book due to his contributions to complex conjugate matrix equations. However, he felt that, having not contributed to the writing of this book itself, he should not be one of its authors. Here, we wish to express our sincere gratitude and appreciation to Prof. Duan for his magnanimity and selflessness. We also would like to express our profound gratitude to Prof. Duan for his careful guidance, wholehearted support, insightful comments, and great contribution.

We also would like to give appreciation to our colleague, Prof. Bin Zhou of HIT

for his help. The ﬁrst author has coauthored some papers included in this book with

Prof. Gang Feng when he visited City University of Hong Kong as a Research

Fellow. The ﬁrst author would like to express his sincere gratitude to Prof. Feng for

his help and contribution. Dr. Yan-Ming Fu, Dr. Ming-Zhe Hou, Mr. Yang-Yang

Qian, and Dr. Ling-Ling Lv have also coauthored with the ﬁrst author a few papers

included in this book. The first author would like to extend his great thanks to all of them

for their contribution.


Great thanks also go to Mr. Yang-Yang Qian and Mr. Ming-Fang Chang, Ph.D.

students of the ﬁrst author, who have helped us in typing a few sections of the

manuscripts. In addition, Mr. Fang-Zhou Fu, Miss Dan Guo, Miss Xiao-Yan He,

Mr. Zhen-Peng Zeng, and Mr. Tian-Long Qin, Master students of the ﬁrst author,

and Mr. Yang-Yang Qian and Mr. Ming-Fang Chang have provided tremendous

help in ﬁnding errors and typos in the manuscripts. Their help has signiﬁcantly

improved the quality of the manuscripts, and is much appreciated.

The first author would like to thank his wife, Ms. Hong-Mei Wang, and the second author her husband, Dr. Rui Zhang, for their constant support in every

aspect. Part of the book was written when the ﬁrst author visited the University of

Western Australia (UWA) from July 2013 to July 2014. The ﬁrst author would like

to thank Prof. Victor Sreeram at UWA for his help and invaluable suggestions.

We would like to gratefully acknowledge the ﬁnancial support kindly provided

by the National Natural Science Foundation of China under Grant Nos.

60974044 and 61273094, by Program for New Century Excellent Talents in

University under Grant No. NCET-11-0808, by Foundation for the Author of

National Excellent Doctoral Dissertation of China under Grant No. 201342, by

Specialized Research Fund for the Doctoral Program of Higher Education under

Grant Nos. 20132302110053 and 20122302120069, by the Foundation for Creative

Research Groups of the National Natural Science Foundation of China under Grant

Nos. 61021002 and 61333003, by the National Program on Key Basic Research

Project (973 Program) under Grant No. 2012CB821205, by the Project for

Distinguished Young Scholars of the Basic Research Plan in Shenzhen City under

Contract No. JCJ201110001, and by Key Laboratory of Electronics Engineering,

College of Heilongjiang Province (Heilongjiang University).

Lastly, we thank all the readers in advance for choosing to read this book. We would much appreciate it if readers could provide feedback about any problems found via email: agwu@163.com.

July 2015

Ai-Guo Wu

Ying Zhang

Contents

1 Introduction ..... 1
  1.1 Linear Equations ..... 2
  1.2 Univariate Linear Matrix Equations ..... 5
    1.2.1 Lyapunov Matrix Equations ..... 5
    1.2.2 Kalman-Yakubovich and Normal Sylvester Matrix Equations ..... 9
    1.2.3 Other Matrix Equations ..... 13
  1.3 Multivariate Linear Matrix Equations ..... 16
    1.3.1 Roth Matrix Equations ..... 16
    1.3.2 First-Order Generalized Sylvester Matrix Equations ..... 18
    1.3.3 Second-Order Generalized Sylvester Matrix Equations ..... 24
    1.3.4 High-Order Generalized Sylvester Matrix Equations ..... 25
    1.3.5 Linear Matrix Equations with More Than Two Unknowns ..... 26
  1.4 Coupled Linear Matrix Equations ..... 27
  1.5 Complex Conjugate Matrix Equations ..... 30
  1.6 Overview of This Monograph ..... 33

2 Mathematical Preliminaries ..... 35
  2.1 Kronecker Products ..... 35
  2.2 Leverrier Algorithms ..... 42
  2.3 Generalized Leverrier Algorithms ..... 46
  2.4 Singular Value Decompositions ..... 49
  2.5 Vector Norms and Operator Norms ..... 52
    2.5.1 Vector Norms ..... 52
    2.5.2 Operator Norms ..... 56
  2.6 A Real Representation of a Complex Matrix ..... 63
    2.6.1 Basic Properties ..... 64
    2.6.2 Proof of Theorem 2.7 ..... 68
  2.7 Consimilarity ..... 73
  2.8 Real Linear Spaces and Real Linear Mappings ..... 75
    2.8.1 Real Linear Spaces ..... 76
    2.8.2 Real Linear Mappings ..... 81
  2.9 Real Inner Product Spaces ..... 83
  2.10 Optimization in Complex Domain ..... 87
  2.11 Notes and References ..... 90

Part I Iterative Solutions

3 Smith-Type Iterative Approaches ..... 97
  3.1 Infinite Series Form of the Unique Solution ..... 98
  3.2 Smith Iterations ..... 103
  3.3 Smith (l) Iterations ..... 105
  3.4 Smith Accelerative Iterations ..... 108
  3.5 An Illustrative Example ..... 115
  3.6 Notes and References ..... 116

4 Hierarchical-Update-Based Iterative Approaches ..... 119
  4.1 Extended Con-Sylvester Matrix Equations ..... 121
    4.1.1 The Matrix Equation AXB + CX̄D = F ..... 121
    4.1.2 A General Case ..... 126
    4.1.3 Numerical Examples ..... 133
  4.2 Coupled Con-Sylvester Matrix Equations ..... 135
    4.2.1 Iterative Algorithms ..... 137
    4.2.2 Convergence Analysis ..... 139
    4.2.3 A More General Case ..... 146
    4.2.4 A Numerical Example ..... 147
  4.3 Complex Conjugate Matrix Equations with Transpose of Unknowns ..... 149
    4.3.1 Convergence Analysis ..... 151
    4.3.2 A Numerical Example ..... 157
  4.4 Notes and References ..... 158

5 Finite Iterative Approaches ..... 163
  5.1 Generalized Con-Sylvester Matrix Equations ..... 163
    5.1.1 Main Results ..... 164
    5.1.2 Some Special Cases ..... 172
    5.1.3 Numerical Examples ..... 175
  5.2 Extended Con-Sylvester Matrix Equations ..... 179
    5.2.1 The Matrix Equation AXB + CX̄D = F ..... 179
    5.2.2 A General Case ..... 192
    5.2.3 Numerical Examples ..... 195
  5.3 Coupled Con-Sylvester Matrix Equations ..... 198
    5.3.1 Iterative Algorithms ..... 198
    5.3.2 Convergence Analysis ..... 199
    5.3.3 A More General Case ..... 206
    5.3.4 Numerical Examples ..... 207
    5.3.5 Proofs of Lemmas 5.15 and 5.16 ..... 209
  5.4 Notes and References ..... 221

Part II Explicit Solutions

6 Real-Representation-Based Approaches ..... 225
  6.1 Normal Con-Sylvester Matrix Equations ..... 226
    6.1.1 Solvability Conditions ..... 226
    6.1.2 Uniqueness Conditions ..... 230
    6.1.3 Solutions ..... 233
  6.2 Con-Kalman-Yakubovich Matrix Equations ..... 241
    6.2.1 Solvability Conditions ..... 241
    6.2.2 Solutions ..... 243
  6.3 Con-Sylvester Matrix Equations ..... 250
  6.4 Con-Yakubovich Matrix Equations ..... 259
  6.5 Extended Con-Sylvester Matrix Equations ..... 267
  6.6 Generalized Con-Sylvester Matrix Equations ..... 270
  6.7 Notes and References ..... 273

7 Polynomial-Matrix-Based Approaches ..... 275
  7.1 Homogeneous Con-Sylvester Matrix Equations ..... 276
  7.2 Nonhomogeneous Con-Sylvester Matrix Equations ..... 284
    7.2.1 The First Approach ..... 285
    7.2.2 The Second Approach ..... 293
  7.3 Con-Yakubovich Matrix Equations ..... 294
    7.3.1 The First Approach ..... 295
    7.3.2 The Second Approach ..... 305
  7.4 Extended Con-Sylvester Matrix Equations ..... 307
    7.4.1 Basic Solutions ..... 308
    7.4.2 Equivalent Forms ..... 311
    7.4.3 Further Discussion ..... 316
    7.4.4 Illustrative Examples ..... 318
  7.5 Generalized Con-Sylvester Matrix Equations ..... 321
    7.5.1 Basic Solutions ..... 322
    7.5.2 Equivalent Forms ..... 324
    7.5.3 Special Solutions ..... 329
    7.5.4 An Illustrative Example ..... 332
  7.6 Notes and References ..... 334

8 Unilateral-Equation-Based Approaches ..... 335
  8.1 Con-Sylvester Matrix Equations ..... 336
  8.2 Con-Yakubovich Matrix Equations ..... 343
  8.3 Nonhomogeneous Con-Sylvester Matrix Equations ..... 349
  8.4 Notes and References ..... 354

9 Conjugate Products ..... 355
  9.1 Complex Polynomial Ring (C[s], +, ⊛) ..... 355
  9.2 Division with Remainder in (C[s], +, ⊛) ..... 359
  9.3 Greatest Common Divisors in (C[s], +, ⊛) ..... 362
  9.4 Coprimeness in (C[s], +, ⊛) ..... 365
  9.5 Conjugate Products of Polynomial Matrices ..... 366
  9.6 Unimodular Matrices and Smith Normal Form ..... 371
  9.7 Greatest Common Divisors ..... 377
  9.8 Coprimeness of Polynomial Matrices ..... 379
  9.9 Conequivalence and Consimilarity ..... 382
  9.10 An Example ..... 385
  9.11 Notes and References ..... 385

10 Con-Sylvester-Sum-Based Approaches ..... 389
  10.1 Con-Sylvester Sum ..... 389
  10.2 Con-Sylvester-Polynomial Matrix Equations ..... 394
    10.2.1 Homogeneous Case ..... 394
    10.2.2 Nonhomogeneous Case ..... 397
  10.3 An Illustrative Example ..... 400
  10.4 Notes and References ..... 402

Part III Applications in Systems and Control

11 Stability for Antilinear Systems ..... 405
  11.1 Stability for Discrete-Time Antilinear Systems ..... 407
  11.2 Stochastic Stability for Markovian Antilinear Systems ..... 410
  11.3 Solutions to Coupled Anti-Lyapunov Equations ..... 423
    11.3.1 Explicit Iterative Algorithms ..... 424
    11.3.2 Implicit Iterative Algorithms ..... 428
    11.3.3 An Illustrative Example ..... 432
  11.4 Notes and References ..... 435
    11.4.1 Summary ..... 435
    11.4.2 A Brief Overview ..... 436

12 Feedback Design for Antilinear Systems ..... 439
  12.1 Generalized Eigenstructure Assignment ..... 439
  12.2 Model Reference Tracking Control ..... 442
    12.2.1 Tracking Conditions ..... 443
    12.2.2 Solution to the Feedback Stabilizing Gain ..... 445
    12.2.3 Solution to the Feedforward Compensation Gain ..... 446
    12.2.4 An Example ..... 447
  12.3 Finite Horizon Quadratic Regulation ..... 450
  12.4 Infinite Horizon Quadratic Regulation ..... 461
  12.5 Notes and References ..... 467
    12.5.1 Summary ..... 467
    12.5.2 A Brief Overview ..... 468

References ..... 471

Index ..... 485

Notation

Notation Related to Subspaces

Z — set of all integers
R — set of all real numbers
C — set of all complex numbers
R^n — set of all real vectors of dimension n
C^n — set of all complex vectors of dimension n
R^{m×n} — set of all real matrices of dimension m × n
C^{m×n} — set of all complex matrices of dimension m × n
R^{m×n}[s] — set of all polynomial matrices of dimension m × n with real coefficients
C^{m×n}[s] — set of all polynomial matrices of dimension m × n with complex coefficients
Ker — the kernel of a mapping
Image — the image of a mapping
rdim — the real dimension of a real linear space

Notation Related to Vectors and Matrices

0_n — zero vector in R^n
0_{m×n} — zero matrix in R^{m×n}
I_n — identity matrix of order n
[a_ij]_{m×n} — matrix of dimension m × n with the i-th row and j-th column element being a_ij
A^{−1} — inverse of matrix A
A^T — transpose of matrix A
Ā — complex conjugate of matrix A
A^H — transposed complex conjugate of matrix A
diag_{i=1}^{n} A_i — the block diagonal matrix whose i-th diagonal block is A_i
Re(A) — real part of matrix A
Im(A) — imaginary part of matrix A
det(A) — determinant of matrix A
adj(A) — adjoint of matrix A
tr(A) — trace of matrix A
rank(A) — rank of matrix A
vec(A) — vectorization of matrix A
⊗ — Kronecker product of two matrices
ρ(A) — spectral radius of matrix A
λ(A) — set of the eigenvalues of matrix A
λ_min(A) — the minimal eigenvalue of matrix A
λ_max(A) — the maximal eigenvalue of matrix A
σ_max(A) — the maximal singular value of matrix A
‖A‖_2 — 2-norm of matrix A
‖A‖ — Frobenius norm of matrix A
Ak — the k-th right alternating power of matrix A
Ak — the k-th left alternating power of matrix A
λ(E, A) — set of the finite eigenvalues of the matrix pair (E, A)

Other Notation

E — mathematical expectation
i — the imaginary unit
I[m, n] — the set of integers from m to n
min — the minimum value in a set
max — the maximum value in a set
⊛ — conjugate product of two polynomial matrices
F — con-Sylvester sum
⟺ — if and only if
∅ — empty set

Chapter 1

Introduction

The theory of matrix equations is an active research topic in matrix algebra, and has

been extensively investigated by many researchers. Different matrix equations have

wide applications in various areas, such as communication, signal processing, and

control theory. Specifically, Lyapunov matrix equations are often encountered in stability analysis of linear systems [160]; the homogeneous continuous-time Lyapunov

equation in block companion matrices plays a vital role in the investigation of factorizations of Hermitian block Hankel matrices [228]; generalized Sylvester matrix

equations are often encountered in eigenstructure assignment of linear systems [90].

For a matrix equation, three basic problems need to be considered: solvability conditions, solving approaches, and expressions of the solutions. For real matrix equations, a considerable number of results have been obtained for these problems. In addition, some other problems have also been considered for some special matrix equations. For example, geometric properties of continuous-time Lyapunov matrix equations were investigated in [286]; bounds of the solution were studied for discrete-time algebraic Lyapunov equations in [173, 227] and for continuous-time Lyapunov equations in [173]. However, only a few results on complex matrix equations with the conjugate of unknown matrices have been reported in the literature. For convenience, this type of matrix equation is called the complex conjugate matrix equation.

Recently, complex conjugate matrix equations have found some applications in

discrete-time antilinear systems. In this book, some recent results are summarized

for several kinds of complex conjugate matrix equations and their applications in

analysis and feedback design of antilinear systems. In this chapter, the main aim is to

first provide a survey on real linear matrix equations, and then give recent progress

on complex conjugate matrix equations. The recent progress on antilinear systems

and related problems will be given in the “Notes and References” sections of Chaps. 11

and 12. At the end of this chapter, an overview of this monograph is presented.

Symbols used in this chapter are now introduced. It should be pointed out that these

symbols are also adopted throughout this book. For two integers m ≤ n, the notation


I[m, n] denotes the set {m, m + 1, . . . , n}. For a square matrix A, we use det A,

ρ(A), λ(A), λ_min(A), and λ_max(A) to denote the determinant, the spectral radius, the set of eigenvalues, and the minimal and maximal eigenvalues of A, respectively. The notations Ā, A^T, and A^H denote the conjugate, transpose, and conjugate transpose of the matrix A, respectively. Re(A) and Im(A) denote the real part and imaginary part of the matrix A, respectively. In addition, diag_{i=1}^{n} A_i is used to denote the block diagonal matrix whose elements in the main block-diagonal are A_i, i ∈ I[1, n]. The symbol “⊗” is used to denote the Kronecker product of two matrices.

1.1 Linear Equations

The most common linear equation may be the following real equation

Ax = b,

(1.1)

where A ∈ R^{m×n} and b ∈ R^m are known, and x ∈ R^n is the vector to be determined.

If A is a square matrix, it is well-known that the linear equation (1.1) has a unique

solution if and only if the matrix A is invertible, and in this case, the unique solution

can be given by x = A−1 b. In addition, this unique solution can also be given by

x_i = (det A_i) / (det A),  i ∈ I[1, n],

where xi is the i-th element of the vector x, and Ai is the matrix formed by replacing

the i-th column of A with the column vector b. This is the celebrated Cramer’s rule.
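As a quick numerical check of this rule, Cramer's rule can be coded directly. The following Python sketch is illustrative only (the function name, singularity tolerance, and test system are our own choices, not from the book); it agrees with a standard solver on a small invertible system:

```python
import numpy as np

def cramer_solve(A, b):
    """Solve Ax = b for square invertible A by Cramer's rule:
    x_i = det(A_i) / det(A), where A_i is A with its i-th column
    replaced by b."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    d = np.linalg.det(A)
    if abs(d) < 1e-12:
        raise ValueError("A is (numerically) singular")
    n = A.shape[0]
    x = np.empty(n)
    for i in range(n):
        Ai = A.copy()
        Ai[:, i] = b                      # replace the i-th column by b
        x[i] = np.linalg.det(Ai) / d
    return x

A = np.array([[2.0, 1.0], [1.0, 3.0]])
b = np.array([3.0, 5.0])
print(cramer_solve(A, b))                 # [0.8 1.4], same as np.linalg.solve(A, b)
```

Cramer's rule is mainly of theoretical interest: computing n + 1 determinants is far more expensive and less stable than the Gaussian elimination underlying `np.linalg.solve`.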

For the general case, it is well-known that the matrix equation (1.1) has a solution if

and only if

rank [A  b] = rank A.

In addition, the solvability of the general equation (1.1) can be characterized in terms

of generalized inverses, and the general expression of all the solutions to the equation

(1.1) can also be given in terms of generalized inverses.

Definition 1.1 ([206, 208]) Given a matrix A ∈ R^{m×n}, if a matrix X ∈ R^{n×m} satisfies

AXA = A,

then X is called a generalized inverse of the matrix A.

A generalized inverse may not be unique. An arbitrary generalized inverse of

the matrix A is denoted by A− .

Theorem 1.1 ([208, 297]) Given a matrix A ∈ R^{m×n}, let A− be an arbitrary generalized inverse of A. Then, the vector equation (1.1) has a solution if and only

if


AA− b = b.

(1.2)

Moreover, if the condition (1.2) holds, then all the solutions of the vector equation

(1.1) can be given by

x = A−b + (I − A−A)z,

where z is an arbitrary n-dimensional vector.
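Theorem 1.1 can be illustrated numerically. The Moore-Penrose pseudoinverse is one particular generalized inverse, so in the sketch below `np.linalg.pinv` plays the role of A− (the matrices and the vector z are arbitrary illustrative choices of ours); the script checks condition (1.2) and then builds a solution of the general form above:

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0]])          # rank-1 matrix, so (1.1) is underdetermined
b = np.array([1.0, 2.0])                 # b lies in the column space of A

A_pinv = np.linalg.pinv(A)               # Moore-Penrose inverse: one generalized inverse

# Solvability condition (1.2): A A^- b = b
solvable = np.allclose(A @ A_pinv @ b, b)

# General solution x = A^- b + (I - A^- A) z for an arbitrary z
z = np.array([1.0, -2.0, 0.5])
x = A_pinv @ b + (np.eye(3) - A_pinv @ A) @ z
print(solvable, np.allclose(A @ x, b))   # True True
```

Varying z sweeps out the whole solution set, since (I − A−A)z ranges over the null space of A.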

The analytical solutions of the equation (1.1) given by inverses or generalized

inverses have neat expressions, and play important roles in theoretical analysis. However, it has been recognized that the operation of matrix inverses is not numerically

reliable. Therefore, many numerical methods are applied in practice to solve linear

vector equations. These methods can be classified into two types. One is the transformation approach, in which the matrix A needs to be transformed into some special

canonical forms, and the other is the iterative approach which generates a sequence

of vectors that approach the exact solution. An iterative process may be stopped as

soon as an approximate solution is sufficiently accurate in practice.

For the equation (1.1) with m = n, the celebrated iterative methods include the Jacobi iteration and the Gauss-Seidel iteration. Let

A = [aij]_{n×n},    b = [b1  b2  · · ·  bn]^T,    x = [x1  x2  · · ·  xn]^T.

Then, the vector equation (1.1) can be explicitly written as

  a11 x1 + a12 x2 + · · · + a1n xn = b1
  a21 x1 + a22 x2 + · · · + a2n xn = b2
  · · ·
  an1 x1 + an2 x2 + · · · + ann xn = bn.

The Gauss-Seidel and Jacobi iterative methods require that the vector equation (1.1) has a unique solution, and that all the entries in the main diagonal of A are nonzero, that is, aii ≠ 0, i ∈ I[1, n]. It is assumed that the initial values xi(0) of xi, i ∈ I[1, n], are given. Then, the Jacobi iterative method obtains the unique solution of (1.1) by the following iteration [132]:

xi(k + 1) = (1/aii) ( bi − Σ_{j=1}^{i−1} aij xj(k) − Σ_{j=i+1}^{n} aij xj(k) ),    i ∈ I[1, n],

and the Gauss-Seidel iterative method obtains the unique solution of (1.1) by the following forward substitution [132]:

xi(k + 1) = (1/aii) ( bi − Σ_{j=1}^{i−1} aij xj(k + 1) − Σ_{j=i+1}^{n} aij xj(k) ),    i ∈ I[1, n].
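The two elementwise sweeps can be sketched as follows; the strictly diagonally dominant test matrix is hypothetical and chosen so that both iterations converge:

```python
import numpy as np

def jacobi_step(A, b, x):
    """One Jacobi sweep: every x_i(k+1) uses only the old iterate x(k)."""
    n = len(b)
    x_new = np.empty(n)
    for i in range(n):
        s = A[i, :i] @ x[:i] + A[i, i+1:] @ x[i+1:]
        x_new[i] = (b[i] - s) / A[i, i]
    return x_new

def gauss_seidel_step(A, b, x):
    """One Gauss-Seidel sweep: x_i(k+1) reuses the already updated entries."""
    x = x.copy()
    n = len(b)
    for i in range(n):
        s = A[i, :i] @ x[:i] + A[i, i+1:] @ x[i+1:]
        x[i] = (b[i] - s) / A[i, i]
    return x

A = np.array([[4.0, 1.0, 1.0],
              [1.0, 5.0, 2.0],
              [1.0, 2.0, 6.0]])   # strictly diagonally dominant
b = np.array([6.0, 8.0, 9.0])
x_j = x_gs = np.zeros(3)
for _ in range(50):
    x_j = jacobi_step(A, b, x_j)
    x_gs = gauss_seidel_step(A, b, x_gs)
print(np.allclose(A @ x_j, b), np.allclose(A @ x_gs, b))
```

The only difference between the two sweeps is whether the partial sum over j < i uses the old values xj(k) or the freshly updated xj(k + 1).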

Now, let

D = diag(a11, a22, · · · , ann),

and let L and U be, respectively, the strictly lower and the strictly upper triangular parts of A:

L = [  0    0    0   · · ·    0      0
      a21   0    0   · · ·    0      0
      a31  a32   0   · · ·    0      0
      · · ·
      an1  an2  · · ·      an,n−1    0 ],

U = [ 0  a12  a13  · · ·   a1n
      0   0   a23  · · ·   a2n
      · · ·
      0   0    0   · · ·  an−1,n
      0   0    0   · · ·    0 ].

Then, the Gauss-Seidel iterative algorithm can be written in the following compact form:

x(k + 1) = −(D + L)^{−1} U x(k) + (D + L)^{−1} b,

and the Jacobi iterative algorithm can be written as

x(k + 1) = −D^{−1}(L + U) x(k) + D^{−1} b.

It is easily known that the Gauss-Seidel iteration is convergent if and only if

ρ((D + L)^{−1} U) < 1,

and the Jacobi iteration is convergent if and only if

ρ(D^{−1}(L + U)) < 1.

In general, the Gauss-Seidel iteration converges faster than the Jacobi iteration since the most recent estimates are used in the Gauss-Seidel iteration. However, there exist examples where the Jacobi method is faster than the Gauss-Seidel method.
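The two spectral-radius conditions can be checked directly from the splitting A = D + L + U; the test matrix below is the same kind of hypothetical diagonally dominant example used above:

```python
import numpy as np

A = np.array([[4.0, 1.0, 1.0],
              [1.0, 5.0, 2.0],
              [1.0, 2.0, 6.0]])     # hypothetical test matrix

D = np.diag(np.diag(A))             # diagonal part
L = np.tril(A, k=-1)                # strictly lower triangular part
U = np.triu(A, k=1)                 # strictly upper triangular part

def spectral_radius(M):
    return np.max(np.abs(np.linalg.eigvals(M)))

rho_gs = spectral_radius(-np.linalg.inv(D + L) @ U)    # Gauss-Seidel
rho_j = spectral_radius(-np.linalg.inv(D) @ (L + U))   # Jacobi
print(rho_gs < 1.0, rho_j < 1.0)    # both iterations converge here
```

The spectral radius also gives the asymptotic convergence rate: the error shrinks roughly by a factor of ρ per sweep.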

In 1950, David M. Young, Jr. and S. P. Frankel proposed a variant of the Gauss-Seidel iterative method for solving the equation (1.1) with m = n [156]. This is the so-called successive over-relaxation (SOR) method, by which the elements xi, i ∈ I[1, n], of x can be computed sequentially by forward substitution:

xi(k + 1) = (1 − ω)xi(k) + (ω/aii) ( bi − Σ_{j=1}^{i−1} aij xj(k + 1) − Σ_{j=i+1}^{n} aij xj(k) ),    i ∈ I[1, n],

where the constant ω > 1 is called the relaxation factor. In compact form, this algorithm can be written as

x(k + 1) = (D + ωL)^{−1} [ωb − (ωU + (ω − 1)D) x(k)].

The choice of the relaxation factor is not necessarily easy, and depends on the properties of A. It has been proven that if A is symmetric and positive definite, the SOR method is convergent for 0 < ω < 2.
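An elementwise sketch of the SOR sweep, again with a hypothetical symmetric positive definite test matrix (for which any 0 < ω < 2 converges):

```python
import numpy as np

def sor_step(A, b, x, omega):
    """One SOR forward-substitution sweep with relaxation factor omega."""
    x = x.copy()
    n = len(b)
    for i in range(n):
        s = A[i, :i] @ x[:i] + A[i, i+1:] @ x[i+1:]
        gs = (b[i] - s) / A[i, i]               # the Gauss-Seidel value
        x[i] = (1.0 - omega) * x[i] + omega * gs
    return x

A = np.array([[4.0, 1.0, 1.0],
              [1.0, 5.0, 2.0],
              [1.0, 2.0, 6.0]])    # symmetric positive definite
b = np.array([6.0, 8.0, 9.0])
x = np.zeros(3)
for _ in range(50):
    x = sor_step(A, b, x, omega=1.2)
print(np.allclose(A @ x, b))
```

With ω = 1 the sweep reduces exactly to Gauss-Seidel; ω > 1 over-relaxes each update toward the Gauss-Seidel value.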

If A is symmetric and positive definite, the equation (1.1) can be solved by the conjugate gradient method proposed by Hestenes and Stiefel. This method is given in the following theorem.

Theorem 1.2 Given a symmetric and positive definite matrix A ∈ R^{n×n}, the solution of the equation (1.1) can be obtained by the following iteration:

  α(k) = r^T(k)p(k) / (p^T(k)Ap(k))
  x(k + 1) = x(k) + α(k)p(k)
  r(k + 1) = b − Ax(k + 1)
  s(k) = −p^T(k)Ar(k + 1) / (p^T(k)Ap(k))
  p(k + 1) = r(k + 1) + s(k)p(k)

where the initial conditions x(0), r(0), and p(0) are given as x(0) = x0 and p(0) = r(0) = b − Ax(0).
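The iteration of Theorem 1.2 translates almost line by line into code; the 2×2 symmetric positive definite matrix below is a hypothetical example:

```python
import numpy as np

def conjugate_gradient(A, b, x0, iters):
    """Conjugate gradient iteration of Theorem 1.2 for SPD A."""
    x = x0.astype(float)
    r = b - A @ x                   # r(0) = b - A x(0)
    p = r.copy()                    # p(0) = r(0)
    for _ in range(iters):
        Ap = A @ p
        alpha = (r @ p) / (p @ Ap)
        x = x + alpha * p
        r = b - A @ x
        s = -(p @ (A @ r)) / (p @ Ap)
        p = r + s * p
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])   # symmetric positive definite
b = np.array([1.0, 2.0])
x = conjugate_gradient(A, b, np.zeros(2), iters=2)
print(np.allclose(A @ x, b))
```

In exact arithmetic the method terminates in at most n steps, which is why two iterations already solve this 2×2 system up to round-off.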

1.2 Univariate Linear Matrix Equations

In this section, a simple survey is provided for linear matrix equations with only one

unknown matrix variable. Let us start with the Lyapunov matrix equations.

1.2.1 Lyapunov Matrix Equations

The most celebrated univariate matrix equations may be the continuous-time and discrete-time Lyapunov matrix equations, which play vital roles in stability analysis [75, 160] and in controllability and observability analysis of linear systems [3]. The continuous-time and discrete-time Lyapunov matrix equations are, respectively, of the forms

A^T X + XA = −Q,    (1.3)

X − A^T XA = Q,    (1.4)

where A ∈ R^{n×n} and the positive semidefinite matrix Q ∈ R^{n×n} are known, and X is the matrix to be determined. In [103, 104], the robust stability analysis was investigated for linear continuous-time and discrete-time systems, respectively, and the admissible perturbation bounds of the system matrices were given in terms of the solutions


of the corresponding Lyapunov matrix equations. In [102], the robust stability was

considered for linear continuous-time systems subject to unmodeled dynamics, and

an admissible bound was given for the nonlinear perturbation function based on the

solution to the Lyapunov matrix equation of the nominal linear system. In linear

systems theory, it is well-known that the controllability and observability of a linear system can be checked by the existence of a positive definite solution to the

corresponding Lyapunov matrix equation [117].

In [145], the continuous-time Lyapunov matrix equation was used to analyze the weighted logarithmic norm of matrices, while in [106] this equation was employed to investigate the so-called generalized positive definite matrices.

solution of the discrete-time Lyapunov equation was applied to generate q-Markov

covers for single-input-single-output discrete-time systems. In [317], a relationship

between the weighted norm of a matrix and the corresponding discrete-time Lyapunov matrix equation was first established, and then an iterative algorithm was

presented to obtain the spectral radius of a matrix by the solutions of a sequence of

discrete-time Lyapunov matrix equations.

For the solutions of Lyapunov matrix equations with special forms, many results

have been reported in literature. When A is in the Schwarz form, and Q is in a special

diagonal form, the solution of the continuous-time Lyapunov matrix equation (1.3)

was explicitly given in [12]. When A^T is in the following companion form:

A^T = [  0    1   · · ·    0
        · · ·
         0    0   · · ·    1
        −a0  −a1  · · ·  −an−1 ],

and Q = bb^T with b = [0  0  · · ·  1]^T, it was shown in [221] that the solution of the Lyapunov matrix equation (1.3) with A Hurwitz stable can be given by using the entries of a Routh table. In [165], a simple algorithm was proposed for a closed-form solution to the continuous-time Lyapunov matrix equation (1.3) by using the

Routh array when A is in a companion form. In [19], the solutions for the above two

Lyapunov matrix equations, which are particularly suitable for symbolic implementation, were proposed for the case where the matrix A is in a companion form. In

[24], the following special discrete-time Lyapunov matrix equation was considered:

X − FXF^T = GQG^T,

where the matrix pair (F, G) is in a controllable canonical form. It was shown in [24]

that the solution to this equation is the inverse of a Schur-Cohn matrix associated

with the characteristic polynomial of F.

When A is Hurwitz stable, the unique solution to the continuous-time Lyapunov matrix equation (1.3) can be given in the following integral form [28]:

X = ∫_0^∞ e^{A^T t} Q e^{At} dt.    (1.5)
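For a diagonalizable Hurwitz-stable A, the integral in (1.5) can be evaluated numerically to confirm that it solves (1.3); the sketch below uses a hypothetical 2×2 example, an eigendecomposition-based matrix exponential, and a simple trapezoidal quadrature truncated at a finite horizon:

```python
import numpy as np

A = np.array([[-1.0, 1.0],
              [0.0, -2.0]])       # Hurwitz stable: eigenvalues -1 and -2
Q = np.array([[1.0, 0.0],
              [0.0, 2.0]])        # positive definite

# e^{At} via the eigendecomposition A = V diag(lam) V^{-1} (A diagonalizable).
lam, V = np.linalg.eig(A)
V_inv = np.linalg.inv(V)

def expm_At(t):
    return (V @ np.diag(np.exp(lam * t)) @ V_inv).real

# Trapezoidal quadrature of X = int_0^inf e^{A^T t} Q e^{At} dt,
# truncated at T = 40, where the integrand has decayed to round-off level.
dt, T = 0.01, 40.0
ts = np.arange(0.0, T + dt, dt)
F = np.array([expm_At(t).T @ Q @ expm_At(t) for t in ts])
X = dt * (0.5 * F[0] + F[1:-1].sum(axis=0) + 0.5 * F[-1])

print(np.allclose(A.T @ X + X @ A, -Q, atol=1e-3))   # X solves (1.3)
```

The truncation is harmless here because the integrand decays like e^{-2t}; in practice one would solve (1.3) directly rather than by quadrature.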

Further, let Q = BB^T with B ∈ R^{n×r}, and let the matrix exponential function e^{At} be expressed as a finite sum of the powers of A:

e^{At} = a1(t)I + a2(t)A + · · · + an(t)A^{n−1}.

Then, it was shown in [193] that the unique solution of (1.3) can also be expressed by

X = Ctr(A, B) H Ctr^T(A, B),    (1.6)

where

Ctr(A, B) = [ B  AB  · · ·  A^{n−1}B ]

is the controllability matrix of the matrix pair (A, B), and H = G ⊗ Ir with G = G^T = [gij]_{n×n},

gij = ∫_0^∞ ai(t) aj(t) dt.

The expression in (1.6) may bring much convenience for the analysis of linear systems

due to the appearance of the controllability matrix. In addition, with the help of the

expression (1.6) some eigenvalue bounds of the solution to the equation (1.3) were

given in [193]. In [217], an infinite series representation of the unique solution to

the continuous-time Lyapunov matrix equation (1.3) was also given by converting it

into a discrete-time Lyapunov matrix equation.

When A is Schur stable, the following theorem summarizes some important properties of the discrete-time Lyapunov matrix equation (1.4).

Theorem 1.3 ([212]) If A is Schur stable, then the solution of the discrete-time Lyapunov matrix equation (1.4) exists for any matrix Q, and is given as

X = (1/(2π)) ∫_0^{2π} (A^T − e^{iθ}I)^{−1} Q (A − e^{−iθ}I)^{−1} dθ,

or equivalently by

X = Σ_{i=0}^{∞} (A^T)^i Q A^i.

Many numerical algorithms have been proposed to solve the Lyapunov matrix equations. In view of the fact that the solution of the Lyapunov matrix equation (1.3) is at least positive semidefinite, Hammarling [136] found an ingenious way to compute the Cholesky factor of X directly. The basic idea is to exploit the triangular structure to solve the equation iteratively. By constructing a new rank-1 updating scheme, an improved Hammarling method was proposed in [220] to accommodate a more general case of Lyapunov matrix equations. In [284], by using a dimension-reduced method, an algorithm was proposed to solve the continuous-time Lyapunov matrix equation (1.3) in controllable canonical forms. In [18], the Smith iteration presented for the discrete-time Lyapunov matrix equation (1.4) was in the form of

X(k + 1) = A^T X(k) A + Q

with X(0) = Q.
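The Smith iteration simply accumulates the partial sums of the series solution in Theorem 1.3; a sketch with a hypothetical Schur-stable A:

```python
import numpy as np

def smith_iteration(A, Q, iters=200):
    """Smith iteration X(k+1) = A^T X(k) A + Q, X(0) = Q, for equation (1.4).
    X(k) equals the k-th partial sum of the series sum_i (A^T)^i Q A^i."""
    X = Q.copy()
    for _ in range(iters):
        X = A.T @ X @ A + Q
    return X

A = np.array([[0.5, 0.2],
              [0.0, 0.4]])         # Schur stable: eigenvalues 0.5 and 0.4
Q = np.eye(2)
X = smith_iteration(A, Q)
print(np.allclose(X - A.T @ X @ A, Q))   # X solves (1.4)
```

Convergence is geometric with rate ρ(A)^2, so for this example a few dozen iterations already reach machine precision.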

Besides the solutions of the Lyapunov matrix equations, the bounds of the solutions have also been extensively investigated. In [191], the following result was given on the eigenvalue bounds of the discrete-time Lyapunov matrix equation

X − AXA^T = BB^T,    (1.7)

where A ∈ R^{n×n} and B ∈ R^{n×r} are known matrices, and X is the matrix to be determined.

Theorem 1.4 Given matrices A ∈ R^{n×n} and B ∈ R^{n×r}, for the solution X to the discrete-time Lyapunov matrix equation (1.7) there holds

λmin(Ctr(A, B) Ctr^T(A, B)) P ≤ X ≤ λmax(Ctr(A, B) Ctr^T(A, B)) P,

where P is the solution to the Lyapunov matrix equation

P − A^n P (A^n)^T = I.

In [227], upper bounds for the norms and trace of the solution to the discrete-time

Lyapunov matrix equation (1.7) were presented in terms of the resolvent of A. In

[124], lower and upper bounds for the trace of the solution to the continuous-time

Lyapunov matrix equation (1.3) were given in terms of the logarithmic norm of A.

In [116], lower bounds were established for the minimal and maximal eigenvalues

of the solution to the discrete-time Lyapunov equation (1.7).

Recently, parametric Lyapunov matrix equations were extensively investigated.

In [307, 315], some properties of the continuous-time parametric Lyapunov matrix

equations were given. In [307], the solution of the parametric Lyapunov equation

was applied to semiglobal stabilization for continuous-time linear systems subject to

actuator saturation; while in [315] the solution was used to design a state feedback

stabilizing law for linear systems with input delay. The discrete-time parametric Lyapunov matrix equations were investigated in [313, 314], and some elegant properties

were established.


1.2.2 Kalman-Yakubovich and Normal Sylvester Matrix Equations

A general form of the continuous-time Lyapunov matrix equation is the so-called

normal Sylvester matrix equation

AX − XB = C.

(1.8)

A general form of the discrete-time Lyapunov matrix equation is the so-called

Kalman-Yakubovich matrix equation

X − AXB = C.

(1.9)

In the matrix equations (1.8) and (1.9), A ∈ Rn×n , B ∈ Rp×p , and C ∈ Rn×p are the

known matrices, and X ∈ Rn×p is the matrix to be determined. On the solvability of

the normal Sylvester matrix equation (1.8), there exists the following result which

has been well-known as Roth’s removal rule.

Theorem 1.5 ([210]) Given matrices A ∈ R^{n×n}, B ∈ R^{p×p}, and C ∈ R^{n×p}, the normal Sylvester matrix equation (1.8) has a solution if and only if the following two partitioned matrices are similar:

[ A  C ]      [ A  0 ]
[ 0  B ] ,    [ 0  B ] ;

or equivalently, the following two polynomial matrices are equivalent:

[ sI − A    −C    ]      [ sI − A     0    ]
[   0     sI − B  ] ,    [   0      sI − B ] .

This matrix equation has a unique solution if and only if λ(A) ∩ λ(B) = ∅.
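The unique-solvability condition λ(A) ∩ λ(B) = ∅ can be checked directly, and for small problems the equation (1.8) can be solved by vectorization, since with the column-major vec operator, vec(AX − XB) = (I_p ⊗ A − B^T ⊗ I_n) vec(X). A sketch with hypothetical data:

```python
import numpy as np

def solve_normal_sylvester(A, B, C):
    """Solve AX - XB = C by vectorization (small problems only)."""
    n, p = A.shape[0], B.shape[0]
    # Unique solvability: A and B must share no eigenvalue.
    eigs_A, eigs_B = np.linalg.eigvals(A), np.linalg.eigvals(B)
    if np.min(np.abs(eigs_A[:, None] - eigs_B[None, :])) < 1e-12:
        raise ValueError("lambda(A) and lambda(B) intersect: no unique solution")
    # vec(AX - XB) = (I_p kron A - B^T kron I_n) vec(X), column-major vec.
    K = np.kron(np.eye(p), A) - np.kron(B.T, np.eye(n))
    x = np.linalg.solve(K, C.flatten(order="F"))
    return x.reshape((n, p), order="F")

A = np.array([[1.0, 2.0], [0.0, 3.0]])     # eigenvalues 1, 3
B = np.array([[-1.0, 0.0], [1.0, -2.0]])   # eigenvalues -1, -2 (disjoint)
C = np.array([[1.0, 0.0], [2.0, 1.0]])
X = solve_normal_sylvester(A, B, C)
print(np.allclose(A @ X - X @ B, C))
```

The Kronecker system has dimension np × np, so this approach is only sensible for small matrices; the Bartels-Stewart method discussed below is the numerically preferred route.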

The result in the preceding theorem was generalized to the Kalman-Yakubovich matrix equation (1.9) in [238]. This is the following theorem.

Theorem 1.6 Given matrices A ∈ R^{n×n}, B ∈ R^{p×p}, and C ∈ R^{n×p}, the Kalman-Yakubovich matrix equation (1.9) has a solution if and only if there exist nonsingular real matrices S and R such that

S [ sI + A     C    ] R = [ sI + A     0    ] .
  [   0      sB + I ]     [   0      sB + I ]

On the numerical solutions to the normal Sylvester matrix equations, a number of results have been reported in the literature over the past 30 years. The Bartels-Stewart method [13] may be the first numerically stable approach to systematically solving small-to-medium scale Lyapunov and normal Sylvester matrix equations. The basic idea of this method is to apply the Schur decomposition to transform the


herein or for any errors or omissions that may have been made.

Printed on acid-free paper

This Springer imprint is published by Springer Nature

The registered company is Springer Science+Business Media Singapore Pte Ltd.

To our supervisor, Prof. Guang-Ren Duan

To Hong-Mei, and Yi-Tian
(Ai-Guo Wu)

To Rui, and Qi-Yu
(Ying Zhang)

Preface

Theory of matrix equations is an important branch of mathematics, and has broad

applications in many engineering ﬁelds, such as control theory, information theory,

and signal processing. Speciﬁcally, algebraic Lyapunov matrix equations play vital

roles in stability analysis for linear systems, and coupled Lyapunov matrix equations appear in the analysis for Markovian jump linear systems; algebraic Riccati

equations are encountered in optimal control. Due to these reasons, matrix equations are extensively investigated by many scholars from various ﬁelds, and the

content on matrix equations has been very rich. Matrix equations are often covered

in some books on linear algebra, matrix analysis, and numerical analysis. We list

several books here, for example, Topics in Matrix Analysis by R.A. Horn and

C.R. Johnson [143], The Theory of Matrices by P. Lancaster and M. Tismenetsky

[172], and Matrix Analysis and Applied Linear Algebra by C.D. Meyer [187]. In

addition, there are some books on special matrix equations, for example, Lyapunov

Matrix Equations in System Stability and Control by Z. Gajic [128], Matrix Riccati

Equations in Control and Systems Theory by H. Abou-Kandil [2], and Generalized

Sylvester Equations: Uniﬁed Parametric Solutions by Guang-Ren Duan [90]. It

should be pointed out that all the matrix equations investigated in the aforementioned books are in real domain. By now, it seems that there is no book on complex

matrix equations with the conjugate of unknown matrices. For convenience, this

class of equations is called complex conjugate matrix equations.

The ﬁrst author of this book and his collaborators began to consider complex

matrix equations with the conjugate of unknown matrices in 2005, inspired by the work [155] of Jiang published in Linear Algebra and its Applications. Since then, he

and his collaborators have published many papers on complex conjugate matrix

equations. Recently, the second author of this book joined this ﬁeld, and has

obtained some interesting results. In addition, some complex conjugate matrix

equations have found applications in the analysis and design of antilinear systems.

This book aims to provide a relatively systematic introduction to complex conjugate

matrix equations and their applications in discrete-time antilinear systems.

vii

viii

Preface

The book has 12 chapters. In Chap. 1, ﬁrst a survey is given on linear matrix

equations, and then recent development on complex conjugate matrix equations is

summarized. Some mathematical preliminaries to be used in this book are collected

in Chap. 2. Besides these two chapters, the rest of this book is partitioned into three

parts. The ﬁrst part contains Chaps. 3–5, and focuses on the iterative solutions for

several types of complex conjugate matrix equations. The second part consists of

Chaps. 6–10, and focuses on explicit closed-form solutions for some complex

conjugate matrix equations. In the third part, including Chaps. 11 and 12, several

applications of complex conjugate matrix equations are considered. In Chap. 11,

stability analysis of discrete-time antilinear systems is investigated, and some stability criteria are given in terms of anti-Lyapunov matrix equations, which are

special complex conjugate matrix equations. In Chap. 12, some feedback design

problems are solved for discrete-time antilinear systems by using several types of

complex conjugate matrix equations. Except part of Chap. 2 and Subsection 6.1.1,

the other materials of this book are based on our own research work, including

some unpublished results.

The intended audience of this monograph includes students and researchers in

areas of control theory, linear algebra, communication, numerical analysis, and so

on. An appropriate background for this monograph would be the ﬁrst course on

linear algebra and linear systems theory.

Since the 1980s, many researchers have devoted much effort to complex conjugate matrix equations, and many contributions have been made to this area. Owing to

space limitation and the organization of the book, many of their published results

are not included or even not cited. We extend our apologies to these researchers.

It is under the supervision of our Ph.D. advisor, Prof. Guang-Ren Duan at

Harbin Institute of Technology (HIT), that we entered the ﬁeld of matrix equations

with their applications in control systems design. Moreover, Prof. Duan has also

made much contribution to the investigation of complex conjugate matrix equations, and has coauthored many papers with the ﬁrst author. Some results in these

papers have been included in this book. Therefore, at the beginning of preparing the

manuscript, we intended to get Prof. Duan as the ﬁrst author of this book due to his

contribution on complex conjugate matrix equations. However, he thought that he did not contribute to the writing of this book, and thus should not be an

author of this book. Here, we wish to express our sincere gratitude and appreciation

to Prof. Duan for his magnanimity and selflessness. We also would like to express

our profound gratitude to Prof. Duan for his careful guidance, wholehearted support, insightful comments, and great contribution.

We also would like to give appreciation to our colleague, Prof. Bin Zhou of HIT

for his help. The ﬁrst author has coauthored some papers included in this book with

Prof. Gang Feng when he visited City University of Hong Kong as a Research

Fellow. The ﬁrst author would like to express his sincere gratitude to Prof. Feng for

his help and contribution. Dr. Yan-Ming Fu, Dr. Ming-Zhe Hou, Mr. Yang-Yang

Qian, and Dr. Ling-Ling Lv have also coauthored with the ﬁrst author a few papers

included in this book. The ﬁrst author would extend his great thanks to all of them

for their contribution.

Preface

ix

Great thanks also go to Mr. Yang-Yang Qian and Mr. Ming-Fang Chang, Ph.D.

students of the ﬁrst author, who have helped us in typing a few sections of the

manuscripts. In addition, Mr. Fang-Zhou Fu, Miss Dan Guo, Miss Xiao-Yan He,

Mr. Zhen-Peng Zeng, and Mr. Tian-Long Qin, Master students of the ﬁrst author,

and Mr. Yang-Yang Qian and Mr. Ming-Fang Chang have provided tremendous

help in ﬁnding errors and typos in the manuscripts. Their help has signiﬁcantly

improved the quality of the manuscripts, and is much appreciated.

The first author and the second author would like to thank his wife, Ms. Hong-Mei Wang, and her husband, Dr. Rui Zhang, respectively, for their constant support in every

aspect. Part of the book was written when the ﬁrst author visited the University of

Western Australia (UWA) from July 2013 to July 2014. The ﬁrst author would like

to thank Prof. Victor Sreeram at UWA for his help and invaluable suggestions.

We would like to gratefully acknowledge the ﬁnancial support kindly provided

by the National Natural Science Foundation of China under Grant Nos.

60974044 and 61273094, by Program for New Century Excellent Talents in

University under Grant No. NCET-11-0808, by Foundation for the Author of

National Excellent Doctoral Dissertation of China under Grant No. 201342, by

Specialized Research Fund for the Doctoral Program of Higher Education under

Grant Nos. 20132302110053 and 20122302120069, by the Foundation for Creative

Research Groups of the National Natural Science Foundation of China under Grant

Nos. 61021002 and 61333003, by the National Program on Key Basic Research

Project (973 Program) under Grant No. 2012CB821205, by the Project for

Distinguished Young Scholars of the Basic Research Plan in Shenzhen City under

Contract No. JCJ201110001, and by Key Laboratory of Electronics Engineering,

College of Heilongjiang Province (Heilongjiang University).

Lastly, we thank in advance all the readers for choosing to read this book. It is

much appreciated if readers could possibly provide, via email: agwu@163.com,

feedback about any problems found.

July 2015

Ai-Guo Wu

Ying Zhang

Contents

1 Introduction . . . 1
   1.1 Linear Equations . . . 2
   1.2 Univariate Linear Matrix Equations . . . 5
       1.2.1 Lyapunov Matrix Equations . . . 5
       1.2.2 Kalman-Yakubovich and Normal Sylvester Matrix Equations . . . 9
       1.2.3 Other Matrix Equations . . . 13
   1.3 Multivariate Linear Matrix Equations . . . 16
       1.3.1 Roth Matrix Equations . . . 16
       1.3.2 First-Order Generalized Sylvester Matrix Equations . . . 18
       1.3.3 Second-Order Generalized Sylvester Matrix Equations . . . 24
       1.3.4 High-Order Generalized Sylvester Matrix Equations . . . 25
       1.3.5 Linear Matrix Equations with More Than Two Unknowns . . . 26
   1.4 Coupled Linear Matrix Equations . . . 27
   1.5 Complex Conjugate Matrix Equations . . . 30
   1.6 Overview of This Monograph . . . 33

2 Mathematical Preliminaries . . . 35
   2.1 Kronecker Products . . . 35
   2.2 Leverrier Algorithms . . . 42
   2.3 Generalized Leverrier Algorithms . . . 46
   2.4 Singular Value Decompositions . . . 49
   2.5 Vector Norms and Operator Norms . . . 52
       2.5.1 Vector Norms . . . 52
       2.5.2 Operator Norms . . . 56
   2.6 A Real Representation of a Complex Matrix . . . 63
       2.6.1 Basic Properties . . . 64
       2.6.2 Proof of Theorem 2.7 . . . 68
   2.7 Consimilarity . . . 73
   2.8 Real Linear Spaces and Real Linear Mappings . . . 75
       2.8.1 Real Linear Spaces . . . 76
       2.8.2 Real Linear Mappings . . . 81
   2.9 Real Inner Product Spaces . . . 83
   2.10 Optimization in Complex Domain . . . 87
   2.11 Notes and References . . . 90

Part I  Iterative Solutions

3 Smith-Type Iterative Approaches . . . 97
   3.1 Infinite Series Form of the Unique Solution . . . 98
   3.2 Smith Iterations . . . 103
   3.3 Smith (l) Iterations . . . 105
   3.4 Smith Accelerative Iterations . . . 108
   3.5 An Illustrative Example . . . 115
   3.6 Notes and References . . . 116

4 Hierarchical-Update-Based Iterative Approaches . . . 119
   4.1 Extended Con-Sylvester Matrix Equations . . . 121
       4.1.1 The Matrix Equation AXB + CXD = F . . . 121
       4.1.2 A General Case . . . 126
       4.1.3 Numerical Examples . . . 133
   4.2 Coupled Con-Sylvester Matrix Equations . . . 135
       4.2.1 Iterative Algorithms . . . 137
       4.2.2 Convergence Analysis . . . 139
       4.2.3 A More General Case . . . 146
       4.2.4 A Numerical Example . . . 147
   4.3 Complex Conjugate Matrix Equations with Transpose of Unknowns . . . 149
       4.3.1 Convergence Analysis . . . 151
       4.3.2 A Numerical Example . . . 157
   4.4 Notes and References . . . 158

5 Finite Iterative Approaches . . . 163
   5.1 Generalized Con-Sylvester Matrix Equations . . . 163
       5.1.1 Main Results . . . 164
       5.1.2 Some Special Cases . . . 172
       5.1.3 Numerical Examples . . . 175
   5.2 Extended Con-Sylvester Matrix Equations . . . 179
       5.2.1 The Matrix Equation AXB + CXD = F . . . 179
       5.2.2 A General Case . . . 192
       5.2.3 Numerical Examples . . . 195
   5.3 Coupled Con-Sylvester Matrix Equations . . . 198
       5.3.1 Iterative Algorithms . . . 198
       5.3.2 Convergence Analysis . . . 199
       5.3.3 A More General Case . . . 206
       5.3.4 Numerical Examples . . . 207
       5.3.5 Proofs of Lemmas 5.15 and 5.16 . . . 209
   5.4 Notes and References . . . 221

Part II  Explicit Solutions

6 Real-Representation-Based Approaches . . . 225
   6.1 Normal Con-Sylvester Matrix Equations . . . 226
       6.1.1 Solvability Conditions . . . 226
       6.1.2 Uniqueness Conditions . . . 230
       6.1.3 Solutions . . . 233
   6.2 Con-Kalman-Yakubovich Matrix Equations . . . 241
       6.2.1 Solvability Conditions . . . 241
       6.2.2 Solutions . . . 243
   6.3 Con-Sylvester Matrix Equations . . . 250
   6.4 Con-Yakubovich Matrix Equations . . . 259
   6.5 Extended Con-Sylvester Matrix Equations . . . 267
   6.6 Generalized Con-Sylvester Matrix Equations . . . 270
   6.7 Notes and References . . . 273

7 Polynomial-Matrix-Based Approaches . . . 275
   7.1 Homogeneous Con-Sylvester Matrix Equations . . . 276
   7.2 Nonhomogeneous Con-Sylvester Matrix Equations . . . 284
       7.2.1 The First Approach . . . 285
       7.2.2 The Second Approach . . . 293
   7.3 Con-Yakubovich Matrix Equations . . . 294
       7.3.1 The First Approach . . . 295
       7.3.2 The Second Approach . . . 305
   7.4 Extended Con-Sylvester Matrix Equations . . . 307
       7.4.1 Basic Solutions . . . 308
       7.4.2 Equivalent Forms . . . 311
       7.4.3 Further Discussion . . . 316
       7.4.4 Illustrative Examples . . . 318
   7.5 Generalized Con-Sylvester Matrix Equations . . . 321
       7.5.1 Basic Solutions . . . 322
       7.5.2 Equivalent Forms . . . 324
       7.5.3 Special Solutions . . . 329
       7.5.4 An Illustrative Example . . . 332
   7.6 Notes and References . . . 334

8 Unilateral-Equation-Based Approaches
   8.1 Con-Sylvester Matrix Equations

8.2 Con-Yakubovich Matrix Equations. . . . . . . . . . . .

8.3 Nonhomogeneous Con-Sylvester Matrix Equations.

8.4 Notes and References . . . . . . . . . . . . . . . . . . . . .

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

335

336

343

349

354

9

Conjugate Products . . . . . . . . . . . . . . . . . . . . . . .

9.1 Complex Polynomial Ring ðC½s; þ ; ~Þ. . . . .

9.2 Division with Remainder in ðC½s; þ ; ~Þ . . . .

9.3 Greatest Common Divisors in ðC½s; þ ; ~Þ . .

9.4 Coprimeness in ðC½s; þ ; ~Þ . . . . . . . . . . . .

9.5 Conjugate Products of Polynomial Matrices. . .

9.6 Unimodular Matrices and Smith Normal Form.

9.7 Greatest Common Divisors . . . . . . . . . . . . . .

9.8 Coprimeness of Polynomial Matrices . . . . . . .

9.9 Conequivalence and Consimilarity . . . . . . . . .

9.10 An Example . . . . . . . . . . . . . . . . . . . . . . . .

9.11 Notes and References . . . . . . . . . . . . . . . . . .

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

355

355

359

362

365

366

371

377

379

382

385

385

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

389

389

394

394

397

400

402

11 Stability for Antilinear Systems . . . . . . . . . . . . . . . . . . .

11.1 Stability for Discrete-Time Antilinear Systems. . . . . .

11.2 Stochastic Stability for Markovian Antilinear Systems

11.3 Solutions to Coupled Anti-Lyapunov Equations . . . . .

11.3.1 Explicit Iterative Algorithms . . . . . . . . . . . .

11.3.2 Implicit Iterative Algorithms . . . . . . . . . . . .

11.3.3 An Illustrative Example . . . . . . . . . . . . . . .

11.4 Notes and References . . . . . . . . . . . . . . . . . . . . . . .

11.4.1 Summary . . . . . . . . . . . . . . . . . . . . . . . . .

11.4.2 A Brief Overview . . . . . . . . . . . . . . . . . . .

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

405

407

410

423

424

428

432

435

435

436

....

....

....

....

....

Gain

....

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

439

439

442

443

445

446

447

10 Con-Sylvester-Sum-Based Approaches . . . . . . .

10.1 Con-Sylvester Sum. . . . . . . . . . . . . . . . . .

10.2 Con-Sylvester-Polynomial Matrix Equations

10.2.1 Homogeneous Case . . . . . . . . . . .

10.2.2 Nonhomogeneous Case . . . . . . . . .

10.3 An Illustrative Example . . . . . . . . . . . . . .

10.4 Notes and References . . . . . . . . . . . . . . . .

Part III

.

.

.

.

.

.

.

.

.

.

.

.

.

.

Applications in Systems and Control

12 Feedback Design for Antilinear Systems . . . . . . . . . . .

12.1 Generalized Eigenstructure Assignment. . . . . . . . .

12.2 Model Reference Tracking Control. . . . . . . . . . . .

12.2.1 Tracking Conditions . . . . . . . . . . . . . . . .

12.2.2 Solution to the Feedback Stabilizing Gain .

12.2.3 Solution to the Feedforward Compensation

12.2.4 An Example . . . . . . . . . . . . . . . . . . . . .

Contents

12.3 Finite Horizon Quadratic Regulation. .

12.4 Inﬁnite Horizon Quadratic Regulation.

12.5 Notes and References . . . . . . . . . . . .

12.5.1 Summary . . . . . . . . . . . . . .

12.5.2 A Brief Overview . . . . . . . .

xv

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

450

461

467

467

468

References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 471

Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 485

Notation

Notation Related to Subspaces

Z              Set of all integers
R              Set of all real numbers
C              Set of all complex numbers
R^n            Set of all real vectors of dimension n
C^n            Set of all complex vectors of dimension n
R^{m×n}        Set of all real matrices of dimension m × n
C^{m×n}        Set of all complex matrices of dimension m × n
R^{m×n}[s]     Set of all polynomial matrices of dimension m × n with real coefficients
C^{m×n}[s]     Set of all polynomial matrices of dimension m × n with complex coefficients
Ker            The kernel of a mapping
Image          The image of a mapping
rdim           The real dimension of a real linear space

Notation Related to Vectors and Matrices

0_n            Zero vector in R^n
0_{m×n}        Zero matrix in R^{m×n}
I_n            Identity matrix of order n
[a_ij]_{m×n}   Matrix of dimension m × n with the i-th row and j-th column element being a_ij
A^{-1}         Inverse matrix of matrix A
A^T            Transpose of matrix A
A̅              Complex conjugate of matrix A
A^H            Transposed complex conjugate of matrix A
diag_{i=1}^{n} A_i   The matrix whose i-th block diagonal is A_i
Re(A)          Real part of matrix A
Im(A)          Imaginary part of matrix A
det(A)         Determinant of matrix A
adj(A)         Adjoint of matrix A
tr(A)          Trace of matrix A
rank(A)        Rank of matrix A
vec(A)         Vectorization of matrix A
⊗              Kronecker product of two matrices
ρ(A)           Spectral radius of matrix A
λ(A)           Set of the eigenvalues of matrix A
λ_min(A)       The minimal eigenvalue of matrix A
λ_max(A)       The maximal eigenvalue of matrix A
σ_max(A)       The maximal singular value of matrix A
‖A‖_2          2-norm of matrix A
‖A‖            Frobenius norm of matrix A
A^{→k}         The k-th right alternating power of matrix A
A^{←k}         The k-th left alternating power of matrix A
λ(E, A)        Set of the finite eigenvalues of matrix pair (E, A)

Other Notation

E              Mathematical expectation
i              The imaginary unit
I[m, n]        The set of integers from m to n
min            The minimum value in a set
max            The maximum value in a set
~              Conjugate product of two polynomial matrices
F              Con-Sylvester sum
⟺              If and only if
∅              Empty set

Chapter 1

Introduction

The theory of matrix equations is an active research topic in matrix algebra, and has

been extensively investigated by many researchers. Different matrix equations have

wide applications in various areas, such as communication, signal processing, and

control theory. Specifically, Lyapunov matrix equations are often encountered in stability analysis of linear systems [160]; the homogeneous continuous-time Lyapunov

equation in block companion matrices plays a vital role in the investigation of factorizations of Hermitian block Hankel matrices [228]; generalized Sylvester matrix

equations are often encountered in eigenstructure assignment of linear systems [90].

As to a matrix equation, three basic problems need to be considered: the solvability conditions, solving approaches and expressions of the solutions. For real matrix

equations, a considerable number of results have been obtained for these problems.

In addition, some other problems have also been considered for some special matrix

equations. For example, geometric properties of continuous-time Lyapunov matrix

equations were investigated in [286]; bounds of the solution were studied for discrete-time algebraic Lyapunov equations in [173, 227] and for continuous-time Lyapunov

equations in [173]. However, there are only a few results on complex matrix equations with the conjugate of unknown matrices reported in literature. For convenience,

this type of matrix equation is called the complex conjugate matrix equation.

Recently, complex conjugate matrix equations have found some applications in

discrete-time antilinear systems. In this book, some recent results are summarized

for several kinds of complex conjugate matrix equations and their applications in

analysis and feedback design of antilinear systems. In this chapter, the main aim is to

first provide a survey on real linear matrix equations, and then give recent progress

on complex conjugate matrix equations. The recent progress on antilinear systems

and related problems will be given in the part of “Notes and References” of Chaps. 11

and 12. At the end of this chapter, an overview of this monograph is presented.

Symbols used in this chapter are now introduced. It should be pointed out that these

symbols are also adopted throughout this book. For two integers m ≤ n, the notation


I[m, n] denotes the set {m, m + 1, . . . , n}. For a square matrix A, we use det A,

ρ (A), λ (A), λmin (A) , and λmax (A) to denote the determinant, the spectral radius,

the set of eigenvalues, the minimal and maximal eigenvalues of A, respectively. The

notations A̅, A^T, and A^H denote the conjugate, transpose, and conjugate transpose of

the matrix A, respectively. Re (A) and Im (A) denote the real part and imaginary part
of the matrix A, respectively. In addition, diag_{i=1}^{n} A_i is used to denote the block diagonal
matrix whose elements in the main block-diagonal are Ai , i ∈ I[1, n]. The symbol

“⊗” is used to denote the Kronecker product of two matrices.

1.1 Linear Equations

The most common linear equation may be the following real equation

Ax = b,

(1.1)

where A ∈ Rm×n and b ∈ Rm are known, and x ∈ Rn is the vector to be determined.

If A is a square matrix, it is well-known that the linear equation (1.1) has a unique

solution if and only if the matrix A is invertible, and in this case, the unique solution

can be given by x = A−1 b. In addition, this unique solution can also be given by

xi = det Ai / det A,   i ∈ I[1, n],

where xi is the i-th element of the vector x, and Ai is the matrix formed by replacing

the i-th column of A with the column vector b. This is the celebrated Cramer’s rule.
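As a quick numerical check of the column-replacement construction (the 3 × 3 system below is made up for illustration, not from the text), Cramer's rule can be coded directly:

```python
import numpy as np

# Hypothetical 3x3 system (invertible A, so a unique solution exists).
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
b = np.array([1.0, 2.0, 3.0])

det_A = np.linalg.det(A)
x = np.empty(3)
for i in range(3):
    Ai = A.copy()
    Ai[:, i] = b                      # replace the i-th column of A with b
    x[i] = np.linalg.det(Ai) / det_A  # x_i = det(A_i) / det(A)

# Cramer's rule agrees with a direct solve.
assert np.allclose(x, np.linalg.solve(A, b))
```

Cramer's rule is a theoretical device; for computation a factorization-based solve is preferred, as the next paragraphs discuss.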

For the general case, it is well-known that the matrix equation (1.1) has a solution if

and only if

rank [ A  b ] = rank A.

In addition, the solvability of the general equation (1.1) can be characterized in terms

of generalized inverses, and the general expression of all the solutions to the equation

(1.1) can also be given in terms of generalized inverses.

Definition 1.1 ([206, 208]) Given a matrix A ∈ Rm×n , if a matrix X ∈ Rn×m satisfies

AXA = A,

then X is called a generalized inverse of the matrix A.

The generalized inverse may not be unique. An arbitrary generalized inverse of

the matrix A is denoted by A− .

Theorem 1.1 ([208, 297]) Given a matrix A ∈ Rm×n , let A− be an arbitrary generalized inverse of A. Then, the vector equation (1.1) has a solution if and only

if


AA− b = b.

(1.2)

Moreover, if the condition (1.2) holds, then all the solutions of the vector equation

(1.1) can be given by

x = A− b + (I − A− A) z,

where z is an arbitrary n-dimensional vector.
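Theorem 1.1 can be tried out numerically. In the sketch below, the Moore-Penrose inverse computed by `numpy.linalg.pinv` serves as one particular generalized inverse A−, and the underdetermined test system is hypothetical:

```python
import numpy as np

# Hypothetical underdetermined system: 2 equations, 3 unknowns.
A = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 1.0]])
b = np.array([3.0, 2.0])

# The Moore-Penrose inverse satisfies AXA = A, so it is a generalized inverse.
A_minus = np.linalg.pinv(A)
assert np.allclose(A @ A_minus @ A, A)

# Condition (1.2): the equation is solvable.
assert np.allclose(A @ A_minus @ b, b)

# Every x = A^- b + (I - A^- A) z solves Ax = b, for arbitrary z.
rng = np.random.default_rng(0)
for _ in range(3):
    z = rng.standard_normal(3)
    x = A_minus @ b + (np.eye(3) - A_minus @ A) @ z
    assert np.allclose(A @ x, b)
```

Here (I − A−A) projects onto the null space of A, which is why adding its image to a particular solution sweeps out the whole solution set.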

The analytical solutions of the equation (1.1) given by inverses or generalized

inverses have neat expressions, and play important roles in theoretical analysis. However, it has been recognized that the operation of matrix inverses is not numerically

reliable. Therefore, many numerical methods are applied in practice to solve linear

vector equations. These methods can be classified into two types. One is the transformation approach, in which the matrix A needs to be transformed into some special

canonical forms, and the other is the iterative approach which generates a sequence

of vectors that approach the exact solution. An iterative process may be stopped as

soon as an approximate solution is sufficiently accurate in practice.

For the equation (1.1) with m = n, the celebrated iterative methods include Jacobi

iteration and Gauss-Seidel iteration. Let

A = [aij]_{n×n},   b = [b1 b2 · · · bn]^T,   x = [x1 x2 · · · xn]^T.

Then, the vector equation (1.1) can be explicitly written as

a11 x1 + a12 x2 + · · · + a1n xn = b1
a21 x1 + a22 x2 + · · · + a2n xn = b2
        · · ·
an1 x1 + an2 x2 + · · · + ann xn = bn

The Gauss-Seidel and Jacobi iterative methods require that the vector equation (1.1)

has a unique solution, and all the entries in the main diagonal of A are nonzero, that

is, aii ≠ 0, i ∈ I[1, n]. It is assumed that the initial values xi (0) of xi , i ∈ I[1, n],

are given. Then, the Jacobi iterative method obtains the unique solution of (1.1) by

the following iteration [132]:

xi (k + 1) = (1/aii) ( bi − Σ_{j=1}^{i−1} aij xj (k) − Σ_{j=i+1}^{n} aij xj (k) ),   i ∈ I[1, n],

and the Gauss-Seidel iterative method obtains the unique solution of (1.1) by the

following forward substitution [132]:


xi (k + 1) = (1/aii) ( bi − Σ_{j=1}^{i−1} aij xj (k + 1) − Σ_{j=i+1}^{n} aij xj (k) ),   i ∈ I[1, n].

Now, let D = diag_{i=1}^{n} aii, and let L and U be the strictly lower and strictly upper
triangular parts of A:

        ⎡ 0    0    0    ···  0 ⎤          ⎡ 0  a12  a13  ···  a1n     ⎤
        ⎢ a21  0    0    ···  0 ⎥          ⎢ 0  0    a23  ···  a2n     ⎥
    L = ⎢ a31  a32  0    ···  0 ⎥ ,    U = ⎢ ···               ···     ⎥ .
        ⎢ ···            ···    ⎥          ⎢ 0  0    0   ···  an−1,n   ⎥
        ⎣ an1  an2  ··· an,n−1 0⎦          ⎣ 0  0    0   ···  0        ⎦

Then, the Gauss-Seidel iterative algorithm can be written in the following compact

form:

x (k + 1) = − (D + L)−1 Ux (k) + (D + L)−1 b,

and the Jacobi iterative algorithm can be written as

x (k + 1) = −D−1 (L + U) x (k) + D−1 b.

It is easily known that the Gauss-Seidel iteration is convergent if and only if

ρ((D + L)−1 U) < 1,

and the Jacobi iteration is convergent if and only if

ρ(D−1 (L + U)) < 1.

In general, the Gauss-Seidel iteration converges faster than the Jacobi iteration since

the recent estimation is used in the Gauss-Seidel iteration. However, there exist

examples where the Jacobi method is faster than the Gauss–Seidel method.
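The two compact iterations above can be run side by side. The 3 × 3 system below is a hypothetical, strictly diagonally dominant example, which guarantees that both spectral-radius conditions hold:

```python
import numpy as np

# Hypothetical strictly diagonally dominant system; both iterations converge.
A = np.array([[4.0, 1.0, 1.0],
              [1.0, 5.0, 2.0],
              [1.0, 2.0, 6.0]])
b = np.array([6.0, 8.0, 9.0])
D = np.diag(np.diag(A))
L = np.tril(A, -1)   # strictly lower triangular part
U = np.triu(A, 1)    # strictly upper triangular part

def iterate(M, N, b, steps=200):
    # Fixed-point iteration x(k+1) = M^{-1} (b - N x(k)).
    x = np.zeros_like(b)
    for _ in range(steps):
        x = np.linalg.solve(M, b - N @ x)
    return x

x_jacobi = iterate(D, L + U, b)    # Jacobi:       M = D,     N = L + U
x_gs = iterate(D + L, U, b)        # Gauss-Seidel: M = D + L, N = U
x_exact = np.linalg.solve(A, b)
assert np.allclose(x_jacobi, x_exact)
assert np.allclose(x_gs, x_exact)

# Both spectral radii are below 1, the convergence conditions stated in the text.
rho = lambda M: max(abs(np.linalg.eigvals(M)))
assert rho(np.linalg.solve(D, L + U)) < 1
assert rho(np.linalg.solve(D + L, U)) < 1
```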

In 1950, David M. Young, Jr. and H. Frankel proposed a variant of the Gauss-Seidel iterative method for solving the equation (1.1) with m = n [156]. This is

the so-called successive over-relaxation (SOR) method, by which the elements xi ,

i ∈ I[1, n], of x can be computed sequentially by forward substitution:

xi (k + 1) = (1 − ω) xi (k) + (ω/aii) ( bi − Σ_{j=1}^{i−1} aij xj (k + 1) − Σ_{j=i+1}^{n} aij xj (k) ),   i ∈ I[1, n],

where the constant ω > 1 is called the relaxation factor. Analytically, this algorithm

can be written as


x (k + 1) = (D + ωL)−1 [ωb − (ωU + (ω − 1) D) x (k)] .

The choice of relaxation factor is not necessarily easy, and depends on the properties

of A. It has been proven that if A is symmetric and positive definite, the SOR method

is convergent with 0 < ω < 2.
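A minimal sketch of the SOR recursion in its compact matrix form, on a hypothetical symmetric positive definite system with ω = 1.2:

```python
import numpy as np

# Hypothetical symmetric positive definite system; SOR converges for 0 < w < 2.
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 4.0, 1.0],
              [0.0, 1.0, 4.0]])
b = np.array([1.0, 2.0, 3.0])
D = np.diag(np.diag(A))
L = np.tril(A, -1)
U = np.triu(A, 1)
w = 1.2                                   # relaxation factor

x = np.zeros(3)
for _ in range(100):
    # x(k+1) = (D + wL)^{-1} [wb - (wU + (w-1)D) x(k)]
    x = np.linalg.solve(D + w * L, w * b - (w * U + (w - 1) * D) @ x)

assert np.allclose(x, np.linalg.solve(A, b))
```

Setting ω = 1 recovers Gauss-Seidel; tuning ω trades off between under- and over-relaxation.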

If A is symmetric and positive definite, the equation (1.1) can be solved by the

conjugate gradient method proposed by Hestenes and Stiefel. This method is given

in the following theorem.

Theorem 1.2 Given a symmetric and positive definite matrix A ∈ Rn×n , the solution

of the equation (1.1) can be obtained by the following iteration

α(k) = (r^T(k) p(k)) / (p^T(k) A p(k))
x(k + 1) = x(k) + α(k) p(k)
r(k + 1) = b − A x(k + 1)
s(k) = − (p^T(k) A r(k + 1)) / (p^T(k) A p(k))
p(k + 1) = r(k + 1) + s(k) p(k)

where the initial conditions x (0), r (0) , and p (0) are given as x (0) = x0 ,

and p (0) = r (0) = b − Ax (0).
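The iteration of Theorem 1.2 translates almost line by line into code. The symmetric positive definite matrix below is hypothetical; in exact arithmetic the method terminates in at most n = 3 steps:

```python
import numpy as np

# Hypothetical symmetric positive definite system for conjugate gradient.
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
b = np.array([1.0, 2.0, 3.0])

x = np.zeros(3)                     # x(0) = x0
r = b - A @ x                       # r(0) = b - A x(0)
p = r.copy()                        # p(0) = r(0)
for _ in range(3):                  # at most n steps in exact arithmetic
    Ap = A @ p
    alpha = (r @ p) / (p @ Ap)      # alpha(k) = r^T(k) p(k) / p^T(k) A p(k)
    x = x + alpha * p
    r = b - A @ x                   # r(k+1) = b - A x(k+1)
    s = -(p @ (A @ r)) / (p @ Ap)   # s(k) = -p^T(k) A r(k+1) / p^T(k) A p(k)
    p = r + s * p                   # p(k+1) = r(k+1) + s(k) p(k)

assert np.allclose(A @ x, b)
```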

1.2 Univariate Linear Matrix Equations

In this section, a simple survey is provided for linear matrix equations with only one

unknown matrix variable. Let us start with the Lyapunov matrix equations.

1.2.1 Lyapunov Matrix Equations

The most celebrated univariate matrix equations may be the continuous-time and

discrete-time Lyapunov matrix equations, which play vital roles in stability analysis [75, 160], controllability and observability analysis of linear systems [3]. The

continuous-time and discrete-time Lyapunov matrix equations are respectively in

the forms as

A^T X + XA = −Q,                        (1.3)
X − A^T XA = Q,                         (1.4)

where A ∈ Rn×n and the positive semidefinite matrix Q ∈ Rn×n are known, and X is the

matrix to be determined. In [103, 104], the robust stability analysis was investigated

for linear continuous-time and discrete-time systems, respectively, and the admissible perturbation bounds of the system matrices were given in terms of the solutions


of the corresponding Lyapunov matrix equations. In [102], the robust stability was

considered for linear continuous-time systems subject to unmodeled dynamics, and

an admissible bound was given for the nonlinear perturbation function based on the

solution to the Lyapunov matrix equation of the nominal linear system. In linear

systems theory, it is well-known that the controllability and observability of a linear system can be checked by the existence of a positive definite solution to the

corresponding Lyapunov matrix equation [117].

In [145], the continuous-time Lyapunov matrix equation was used to analyze the

weighted logarithmic norm of matrices. While in [106], this equation was employed

to investigate the so-called generalized positive definite matrix. In [222], the inverse

solution of the discrete-time Lyapunov equation was applied to generate q-Markov

covers for single-input-single-output discrete-time systems. In [317], a relationship

between the weighted norm of a matrix and the corresponding discrete-time Lyapunov matrix equation was first established, and then an iterative algorithm was

presented to obtain the spectral radius of a matrix by the solutions of a sequence of

discrete-time Lyapunov matrix equations.

For the solutions of Lyapunov matrix equations with special forms, many results

have been reported in literature. When A is in the Schwarz form, and Q is in a special

diagonal form, the solution of the continuous-time Lyapunov matrix equation (1.3)

was explicitly given in [12]. When AT is in the following companion form:

        ⎡  0    1               ⎤
        ⎢  ⋮    ⋱    ⋱          ⎥
A^T  =  ⎢  0    0   · · ·   1   ⎥ ,
        ⎣ −a0  −a1  · · ·  −an−1⎦

and Q = bb^T with b = [0 0 · · · 1]^T, it was shown in [221] that the solution of the

Lyapunov matrix equation (1.3) with A Hurwitz stable can be given by using the

entries of a Routh table. In [165], a simple algorithm was proposed for a closed-form solution to the continuous-time Lyapunov matrix equation (1.3) by using the

Routh array when A is in a companion form. In [19], the solutions for the above two

Lyapunov matrix equations, which are particularly suitable for symbolic implementation, were proposed for the case where the matrix A is in a companion form. In

[24], the following special discrete-time Lyapunov matrix equation was considered:

X − FXF^T = GQG^T,

where the matrix pair (F, G) is in a controllable canonical form. It was shown in [24]

that the solution to this equation is the inverse of a Schur-Cohn matrix associated

with the characteristic polynomial of F.

When A is Hurwitz stable, the unique solution to the continuous-time Lyapunov

matrix equation (1.3) can be given by the following integration form [28]:


X = ∫_0^∞ e^{A^T t} Q e^{At} d t.                        (1.5)

Further, let Q = BBT with B ∈ Rn×r , and let the matrix exponential function eAt be

expressed as a finite sum of the power of A:

e^{At} = a1 (t) I + a2 (t) A + · · · + an (t) A^{n−1}.

Then, it was shown in [193] that the unique solution of (1.3) can also be expressed by

X = Ctr (A, B) H Ctr^T (A, B),                           (1.6)

where

Ctr (A, B) = [B  AB  · · ·  A^{n−1}B]

is the controllability matrix of the matrix pair (A, B), and H = G ⊗ Ir with G = G^T = [gij]_{n×n},

gij = ∫_0^∞ ai (t) aj (t) d t.

The expression in (1.6) may bring much convenience for the analysis of linear systems

due to the appearance of the controllability matrix. In addition, with the help of the

expression (1.6) some eigenvalue bounds of the solution to the equation (1.3) were

given in [193]. In [217], an infinite series representation of the unique solution to

the continuous-time Lyapunov matrix equation (1.3) was also given by converting it

into a discrete-time Lyapunov matrix equation.
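The integral representation (1.5) can be checked numerically. The sketch below uses a hypothetical diagonal Hurwitz-stable A (so that e^{At} is computed elementwise) and compares a truncated Riemann sum of (1.5) with the solution obtained by vectorizing the equation through the Kronecker product:

```python
import numpy as np

# Hypothetical Hurwitz-stable diagonal A, so expm reduces to elementwise exp.
A = np.diag([-1.0, -2.0])
Q = np.array([[2.0, 1.0],
              [1.0, 2.0]])
n = 2

# Exact solution via vectorization: (I (x) A^T + A^T (x) I) vec X = -vec Q.
K = np.kron(np.eye(n), A.T) + np.kron(A.T, np.eye(n))
X = np.linalg.solve(K, -Q.flatten(order="F")).reshape(n, n, order="F")
assert np.allclose(A.T @ X + X @ A, -Q)

# Riemann-sum approximation of X = int_0^inf e^{A^T t} Q e^{A t} dt.
ts = np.linspace(0.0, 20.0, 20001)
dt = ts[1] - ts[0]
X_int = sum(np.diag(np.exp(np.diag(A) * t)) @ Q @ np.diag(np.exp(np.diag(A) * t)) * dt
            for t in ts)
assert np.allclose(X_int, X, atol=1e-2)
```

The tail beyond t = 20 is negligible here because the slowest mode decays like e^{-2t}.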

When A is Schur stable, the following theorem summarizes some important properties of the discrete-time Lyapunov matrix equation (1.4).

Theorem 1.3 ([212]) If A is Schur stable, then the solution of the discrete-time

Lyapunov matrix equation (1.4) exists for any matrix Q, and is given as

X = (1/(2π)) ∫_0^{2π} (A^T − e^{iθ} I)^{−1} Q (A − e^{−iθ} I)^{−1} d θ,

or equivalently by

X = Σ_{i=0}^{∞} (A^T)^i Q A^i.

Many numerical algorithms have been proposed to solve the Lyapunov matrix

equations. In view that the solution of the Lyapunov matrix equation (1.3) is at least
positive semidefinite, Hammarling [136] found an ingenious way to compute the Cholesky

factor of X directly. The basic idea is to apply triangular structure to solve the

equation iteratively. By constructing a new rank-1 updating scheme, an improved

Hammarling method was proposed in [220] to accommodate a more general case


of Lyapunov matrix equations. In [284], by using a dimension-reduced method an

algorithm was proposed to solve the continuous-time Lyapunov matrix equation

(1.3) in controllable canonical forms. In [18], the presented Smith iteration for the

discrete-time Lyapunov matrix equation (1.4) was in the form of

X (k + 1) = A^T X (k) A + Q

with X (0) = Q.
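A sketch of this Smith iteration on a hypothetical Schur-stable A; its fixed point solves (1.4) and coincides with the series representation of Theorem 1.3:

```python
import numpy as np

# Hypothetical Schur-stable A (spectral radius 0.5 < 1), so the iteration converges.
A = np.array([[0.5, 0.2],
              [0.0, 0.3]])
Q = np.eye(2)

X = Q.copy()                        # X(0) = Q
for _ in range(200):
    X = A.T @ X @ A + Q             # X(k+1) = A^T X(k) A + Q

# The limit solves X - A^T X A = Q, matching X = sum_i (A^T)^i Q A^i.
assert np.allclose(X - A.T @ X @ A, Q)
```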

Besides the solutions to Lyapunov matrix equations, the bounds of the solutions

have also been extensively investigated. In [191], the following result was given on

the eigenvalue bounds of the discrete-time Lyapunov matrix equation

X − AXA^T = BB^T,

(1.7)

where A ∈ Rn×n and B ∈ Rn×r are known matrices, and X is the matrix to be

determined.

Theorem 1.4 Given matrices A ∈ Rn×n and B ∈ Rn×r , for the solution X to the

discrete-time Lyapunov matrix equation (1.7) there holds

λmin(Ctr (A, B) Ctr^T (A, B)) P ≤ X ≤ λmax(Ctr (A, B) Ctr^T (A, B)) P,

where P is the solution to the Lyapunov matrix equation

P − A^n P (A^n)^T = I.

In [227], upper bounds for the norms and trace of the solution to the discrete-time

Lyapunov matrix equation (1.7) were presented in terms of the resolvent of A. In

[124], lower and upper bounds for the trace of the solution to the continuous-time

Lyapunov matrix equation (1.3) were given in terms of the logarithmic norm of A.

In [116], lower bounds were established for the minimal and maximal eigenvalues

of the solution to the discrete-time Lyapunov equation (1.7).

Recently, parametric Lyapunov matrix equations were extensively investigated.

In [307, 315], some properties of the continuous-time parametric Lyapunov matrix

equations were given. In [307], the solution of the parametric Lyapunov equation

was applied to semiglobal stabilization for continuous-time linear systems subject to

actuator saturation; while in [315] the solution was used to design a state feedback

stabilizing law for linear systems with input delay. The discrete-time parametric Lyapunov matrix equations were investigated in [313, 314], and some elegant properties

were established.


1.2.2 Kalman-Yakubovich and Normal Sylvester Matrix Equations

A general form of the continuous-time Lyapunov matrix equation is the so-called

normal Sylvester matrix equation

AX − XB = C.

(1.8)

A general form of the discrete-time Lyapunov matrix equation is the so-called

Kalman-Yakubovich matrix equation

X − AXB = C.

(1.9)

In the matrix equations (1.8) and (1.9), A ∈ Rn×n , B ∈ Rp×p , and C ∈ Rn×p are the

known matrices, and X ∈ Rn×p is the matrix to be determined. On the solvability of

the normal Sylvester matrix equation (1.8), there exists the following result which

has been well-known as Roth’s removal rule.

Theorem 1.5 ([210]) Given matrices A ∈ Rn×n , B ∈ Rp×p , and C ∈ Rn×p , the

normal Sylvester matrix equation (1.8) has a solution if and only if the following two

partitioned matrices are similar

⎡ A  C ⎤        ⎡ A  0 ⎤
⎣ 0  B ⎦ ,      ⎣ 0  B ⎦ ;

or equivalently, the following two polynomial matrices are equivalent:

⎡ sI − A   −C     ⎤        ⎡ sI − A   0      ⎤
⎣ 0        sI − B ⎦ ,      ⎣ 0        sI − B ⎦ .

This matrix equation has a unique solution if and only if λ (A) ∩ λ (B) = ∅.
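For small sizes, the unique solution of (1.8) can be computed by vectorization, using vec(AX − XB) = (I ⊗ A − B^T ⊗ I) vec X. The matrices below are hypothetical, with disjoint spectra so that the uniqueness condition λ(A) ∩ λ(B) = ∅ holds:

```python
import numpy as np

# Hypothetical triangular test matrices with disjoint eigenvalue sets.
A = np.array([[1.0, 2.0],
              [0.0, 3.0]])
B = np.array([[-1.0, 0.0],
              [1.0, -2.0]])
C = np.array([[1.0, 0.0],
              [2.0, 1.0]])

# lambda(A) = {1, 3}, lambda(B) = {-1, -2}: a unique solution exists.
assert not set(np.linalg.eigvals(A)) & set(np.linalg.eigvals(B))

n, p = 2, 2
K = np.kron(np.eye(p), A) - np.kron(B.T, np.eye(n))   # vec form of AX - XB
X = np.linalg.solve(K, C.flatten(order="F")).reshape(n, p, order="F")
assert np.allclose(A @ X - X @ B, C)
```

This Kronecker approach costs O((np)^3) and is only a conceptual check; numerically robust solvers use the Bartels-Stewart idea described next.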

The result in the preceding theorem was generalized to the Kalman-Yakubovich

matrix equation (1.9) in [238]. This is the following theorem.

Theorem 1.6 Given matrices A ∈ Rn×n , B ∈ Rp×p , and C ∈ Rn×p , the Kalman-Yakubovich matrix equation (1.9) has a solution if and only if there exist nonsingular

real matrices S and R such that

S ⎡ sI + A   C      ⎤ R  =  ⎡ sI + A   0      ⎤ .
  ⎣ 0        sB + I ⎦       ⎣ 0        sB + I ⎦

On the numerical solutions to the normal Sylvester matrix equations, there have

been a number of results reported in literature over the past 30 years. The Bartels-Stewart method [13] may be the first numerically stable approach to systematically

solving small-to-medium scale Lyapunov and normal Sylvester matrix equations.

The basic idea of this method is to apply the Schur decomposition to transform the
