\chapter{Synthesis}
\label{chap:synth}
As we have shown in the previous chapters, the bilateral teleoperation problem can be carried over to the more general
framework of IQCs. Doing so not only preserves the existing results but also opens up substantial options for
generalization. Moreover, the shortcomings of the classical results, and the relations underlying them, become clearly
visible.
Hence, we can proceed to the controller design problem and obtain controllers using results from the IQC synthesis
literature. Since the results here can be considered established, we adopt a \enquote{running example} style
instead of repeating results already available in the literature as a sequence of theorems and proofs.
\section{Robust Controller Design using IQCs}
LMI-based robust control synthesis problems are in general nonconvex, and it is not even known whether a convexification step
exists. The $\mu$-synthesis and Lyapunov-based synthesis methods are the two main avenues for obtaining conditions
for the design of a robust controller.
Clearly, the situation is not much different for IQC synthesis, which is closely related to the $\mu$-
tools. As we have shown in \cref{chap:analysis}, dynamic multipliers are of utmost importance when it comes to
reducing the conservatism of frequency-domain stability analysis techniques. However, including dynamics
in the multipliers makes the robust control synthesis problem harder. Since no convex
robust control design method is known, we resort to a nonconvex multiplier/controller iteration, as is the case with
the classical $\mu$-tools and the so-called $D$-$K$ iteration. However, it has recently been shown that a large class
of robust control design problems is convex, depending on how the uncertainty enters certain entries of the plant
data \cite{scherer2009}.
%\enlargethispage*{1.5\baselineskip}
\subsection{Notation and Definitions}
We start with state-space descriptions of systems in the following form
\begin{equation}
\pmatr{\dot{x}\\ z\\ y} =
\underbrace{\left(
\begin{array}{ccc}
A &B_w &B_u\\
C_z &D_{wz} &D_{uz}\\
C_y &D_{wy} &D_{uy}
\end{array}
\right)}_{G}
\pmatr{x\\ w\\ u}
\label{eq:gennomplant}
\end{equation}
Here, $w$ denotes the generalized disturbance signals (perturbations, reference signals, etc.), $z$ denotes the controlled
output signals selected as the objectives of the control design (error signals, control actions, etc.),
$y$ denotes the measurements, and $u$ the control inputs produced by the controller. Typically, the second row is called the \emph{performance
channel} and the last row, similarly, the \emph{control channel}. The integers $n,n_u,n_w,m_z,m_y$ denote the numbers of
states, control inputs, disturbance inputs, controlled outputs, and measurements, respectively. For the performance characterization,
we also use a quadratic form
\begin{equation}
\int_0^\infty{\pmatr{w(t) \\ z(t)}^TP_p \pmatr{w(t) \\ z(t)} dt} \leq -\epsilon\norm{w}^2.
\label{eq:quadperf}
\end{equation}
with $\epsilon>0$. As a typical example, the robust $\mathcal{L}_2$ gain from $w$ to $z$ can be encoded in this form by choosing
\[
P_p = \pmatr{-\gamma^2I_{n_w} &0\\ 0&I_{m_z}}.
\]
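With this choice of $P_p$, the quadratic form \eqref{eq:quadperf} is just $\norm{z}^2-\gamma^2\norm{w}^2$ up to the $\epsilon$ margin. This reduction is easy to confirm numerically; the sketch below uses discrete sums standing in for the integrals, with entirely hypothetical random data:

```python
import numpy as np

# The quadratic performance index with the L2-gain multiplier P_p reduces to
# ||z||^2 - gamma^2 ||w||^2; discrete-time random signals stand in for the
# continuous-time integrals here (hypothetical data, for illustration only).
rng = np.random.default_rng(5)
n_w, m_z, T, gamma = 2, 3, 100, 2.0

w = rng.normal(size=(T, n_w))
z = rng.normal(size=(T, m_z))
P_p = np.block([[-gamma**2 * np.eye(n_w), np.zeros((n_w, m_z))],
                [np.zeros((m_z, n_w)), np.eye(m_z)]])

# sum_t (w_t; z_t)^T P_p (w_t; z_t)  ==  ||z||^2 - gamma^2 ||w||^2
wz = np.hstack([w, z])
lhs = np.einsum('ti,ij,tj->', wz, P_p, wz)
rhs = np.sum(z**2) - gamma**2 * np.sum(w**2)
assert np.isclose(lhs, rhs)
```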
We will also be referring to the state-space representation of a controller $K$ via
\begin{equation}
\pmatr{\dot{x}_K\\ u} =
\pmatr{
A_K &B_K\\
C_K &D_K
}
\pmatr{x_K\\ y}
\label{eq:contmats}
\end{equation}
with $n_K$ being the number of states of the controller. The common assumption $D_{uy}=0$ is also adopted here. This causes no
loss of generality as long as the $G$-$K$ interconnection is well-posed, i.e., $(I-D_{uy}D_K)$ is nonsingular. One
can see the rationale behind this by computing $\tilde{K}$ in \Cref{fig:synth:d22zero}.
\begin{figure}%
\centering
\begin{tikzpicture}[>=stealth,scale=0.9,transform shape,baseline=-1.25cm]
\draw[loosely dashed,fill=black!5] (-1.4,-0.7) rectangle (1.3,1.1);
\draw[loosely dashed,fill=black!5] (-1.4,1.3) rectangle (1.3,3.2);
\node[draw,inner sep=2mm] (delay) at (0,2.5) {$G$};
\node[draw,inner sep=2mm] (plant) at (0,0) {$K$};
\draw[<-] (delay.east) -| ++(0.5,-0.7) |- (plant.east);
\draw[<-] (plant.west) -| ++(-0.5,0.7)
node[draw,
fill=white,
circle,
inner sep=2pt,
label={[inner sep=0mm]above right:\tiny$+$},
label={[inner sep=0mm]above left:\tiny$+$}] (junc1) {};
\draw[<-] (junc1) -- ++(0,1)
node[draw,
fill=white,
circle,
inner sep=2pt,
label={[inner sep=0mm]above right:\tiny$+$},
label={[inner sep=0mm]below right:\tiny$-$}] (junc2) {};
\draw[<-] (junc2) |- (delay.west);
\node[draw] (d1) at (delay|-junc1) {$D_{uy}$};
\node[draw] (d2) at (plant|-junc2) {$D_{uy}$};
\draw[->] (d1) -- (junc1);
\draw[->] (d2) -- (junc2);
\draw (d2) -- (d2-| {$(delay.east)+(0.5cm,-0.7cm)$});
\draw (d1) -- (d1-| {$(delay.east)+(0.5cm,-0.7cm)$});
\node[above left=2mm and 3.0mm of delay.west] (tildelay) {$\tilde{G}$};
\node[below left=2mm and 3mm of plant.west] (tilplant) {$\tilde{K}$};
\end{tikzpicture}
\caption[Zeroing out the $D_{uy}$ term.]{Zeroing out the $D_{uy}$ term. Feedforward path
zeroes the plant feedthrough term while it is absorbed by the controller provided that the
control loop is still well-posed.}%
\label{fig:synth:d22zero}%
\end{figure}
Then, the state-space matrices of the closed-loop plant $(G\star K)(s)$ are obtained as
\begin{align}
\left[\begin{array}{c|c}
\mathcal{A} &\mathcal{B} \\\hline
\mathcal{C} &\mathcal{D}
\end{array}
\right] &= \left[\begin{array}{cc|c}
A + B_uD_KC_y & B_uC_K & B_w + B_uD_KD_{wy} \\
B_KC_y & A_K & B_KD_{wy}\\ \hline
C_z + D_{uz}D_KC_y & D_{uz}C_K & D_{wz} + D_{uz}D_KD_{wy}
\end{array}\right] \\
&= \left[
\begin{array}{cc|c}
A & 0 & B_w \\
0 & 0 & 0\\ \hline
C_z & 0 & D_{wz}
\end{array}
\right] +
\left[
\begin{array}{cc}
0 & B_u \\
I & 0\\ \hline
0 & D_{uz}
\end{array}
\right]
\left[
\begin{array}{cc}
A_K & B_K \\
C_K & D_K
\end{array}
\right]
\left[
\begin{array}{cc|c}
0 & I & 0\\
C_y & 0 & D_{wy}
\end{array}
\right].
\label{eq:nominalclinc}
\end{align}
Here, a two-letter subscript $\cdot_{ab}$ denotes the term representing the contribution from input $a$ to output $b$, which
hopefully gives the reader some relief, since the manipulations quickly get unwieldy.
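As a sanity check on the affine form in \eqref{eq:nominalclinc}, the following sketch builds both sides from random data (all dimensions and values hypothetical) and verifies that they agree:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions for a small generalized plant and controller.
n, n_w, n_u, m_z, m_y, n_K = 3, 2, 2, 2, 2, 3

# Random plant data with D_uy = 0, and random controller data.
A, B_w, B_u = rng.normal(size=(n, n)), rng.normal(size=(n, n_w)), rng.normal(size=(n, n_u))
C_z, D_wz, D_uz = rng.normal(size=(m_z, n)), rng.normal(size=(m_z, n_w)), rng.normal(size=(m_z, n_u))
C_y, D_wy = rng.normal(size=(m_y, n)), rng.normal(size=(m_y, n_w))
A_K, B_K = rng.normal(size=(n_K, n_K)), rng.normal(size=(n_K, m_y))
C_K, D_K = rng.normal(size=(n_u, n_K)), rng.normal(size=(n_u, m_y))

# Closed-loop matrices written out entrywise, as in the first line of the display.
direct = np.block([
    [A + B_u @ D_K @ C_y,    B_u @ C_K,  B_w + B_u @ D_K @ D_wy],
    [B_K @ C_y,              A_K,        B_K @ D_wy],
    [C_z + D_uz @ D_K @ C_y, D_uz @ C_K, D_wz + D_uz @ D_K @ D_wy],
])

# The same matrices in the affine "open loop + outer factors * controller" form.
zero = np.zeros
open_loop = np.block([
    [A,              zero((n, n_K)),    B_w],
    [zero((n_K, n)), zero((n_K, n_K)),  zero((n_K, n_w))],
    [C_z,            zero((m_z, n_K)),  D_wz],
])
left = np.block([[zero((n, n_K)),   B_u],
                 [np.eye(n_K),      zero((n_K, n_u))],
                 [zero((m_z, n_K)), D_uz]])
ctrl = np.block([[A_K, B_K], [C_K, D_K]])
right = np.block([[zero((n_K, n)), np.eye(n_K),      zero((n_K, n_w))],
                  [C_y,            zero((m_y, n_K)), D_wy]])
affine = open_loop + left @ ctrl @ right

assert np.allclose(direct, affine)
```

The affine form is exactly what makes the variable transformation below possible: the controller data enters the closed loop only through the middle factor.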
\begin{define}[Generalized Plant] The plant $G$ is said to be a generalized plant if there exists at least one controller
$K$ such that the interconnection of $G$ and $K$ is stable. Equivalently, $G$ is a generalized plant if $(A,B_u)$ is stabilizable
and $(A,C_y)$ is detectable.
\end{define}
\subsection{Solving Analysis LMIs for Controller Matrices}
Once we have the generalized plant description, we can design a controller that achieves closed-loop stability and a performance level
characterized by $P_p$. Before we state the nominal controller synthesis conditions, we provide a \enquote{trick}, the so-called
Linearization Lemma, which can be found in \cite{lmibook99}, on whose style we base ours. It is also important to follow the rationale
behind the arguments given in \cite{scherermulti} for general quadratic performance synthesis.
\begin{lem}\label{lem:linlemma} Let $A,S$ be constant matrices, and let $B(v),Q(v)$
and $R(v)\succeq 0$ be matrix functions of some decision variables denoted by $v$, for which the quadratic constraint
\begin{equation}
\pmatr{A\\B(v)}^T\pmatr{Q(v) &S\\S^T &R(v)}\pmatr{A\\B(v)} \prec 0
\label{eq:quadraticconst}
\end{equation}
is to be verified. Assume further that there exists a decomposition
\[R(v) = T\inv{(U(v))}T^T\] where $U(v)\succ 0$ is affine in $v$. Then
\eqref{eq:quadraticconst} is equivalent to the LMI problem
\begin{equation}
\pmatr{A^TQ(v)A + A^TSB(v) + B^T(v)S^TA &B^T(v)T\\T^TB(v) &-U(v)}\prec 0
\label{eq:linearizedconst}
\end{equation}
\end{lem}
\begin{proof} Applying the Schur complement formula with respect to the lower-right block of the LMI and carrying out the block
multiplication in the quadratic inequality shows the equivalence.
\end{proof}
Note that even if $R(v)=R$ is a constant matrix, this step is still needed to resolve the quadratic dependence on $B(v)$.
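The equivalence in \Cref{lem:linlemma} can also be checked numerically. The sketch below builds a random instance (with $v$ absorbed into fixed matrices, a hypothetical setup) and confirms that the quadratic constraint and its Schur-complement linearization are negative definite simultaneously:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical instance of the lemma data: fixed B, Q, S, and R = T U^{-1} T^T
# with U positive definite, so that R is positive semidefinite by construction.
k = 3
A = rng.normal(size=(k, k))
B = rng.normal(size=(k, k))
Q = rng.normal(size=(k, k)); Q = Q + Q.T
S = rng.normal(size=(k, k))
T = rng.normal(size=(k, k))
M = rng.normal(size=(k, k)); U = M @ M.T + np.eye(k)    # U > 0
R = T @ np.linalg.inv(U) @ T.T

def is_neg_def(X):
    return np.max(np.linalg.eigvalsh((X + X.T) / 2)) < 0

# Quadratic constraint (A; B)^T [Q S; S^T R] (A; B) < 0 ...
quad = A.T @ Q @ A + A.T @ S @ B + B.T @ S.T @ A + B.T @ R @ B

# ... and its linearized counterpart from the lemma.
lin = np.block([[A.T @ Q @ A + A.T @ S @ B + B.T @ S.T @ A, B.T @ T],
                [T.T @ B, -U]])

# The Schur complement argument says they are feasible simultaneously.
assert is_neg_def(quad) == is_neg_def(lin)
```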
Following our analysis results from the previous chapter, for a stable closed-loop system $G\star K$ to achieve
the performance level $P_p$, the condition
\[
\pmatr{I&0\\\mathcal{A}&\mathcal{B}\\\hline 0&I\\\mathcal{C}&\mathcal{D}}^T
\left(
\begin{array}{cc|c}
0&\mathcal{X}&0\\\mathcal{X}&0&0\\\hline 0&0&P_p
\end{array}
\right)
\pmatr{I&0\\\mathcal{A}&\mathcal{B}\\\hline 0&I\\\mathcal{C}&\mathcal{D}}\prec 0
\]
should hold for some symmetric matrix $\mathcal{X}$. However, in the absence of a known stabilizing controller we encounter
an immediate problem. If we treat the controller matrices inside the calligraphic closed-loop matrices as unknowns, then they multiply
the unknown matrix variable $\mathcal{X}$, destroying the affine dependence on the unknowns and rendering the constraint a Bilinear Matrix
Inequality (BMI). Moreover, in the analysis case stability was assumed at the outset, whereas here the positivity constraint
$\mathcal{X}\succ 0$ must be included to guarantee closed-loop stability. Thus, the synthesis problem is more involved. The resolution of
this problem appeared mainly in \cite{scherermulti,izumi} (and, in a strict $\mathcal{H}_\infty$ context, in \cite{gahapk,gahinet96}, with a
two-step procedure that first eliminates the controller parameters, obtains some of the LMI variables, and then resolves the LMIs for the
controller parameters).
The essential trick is to manufacture a new set of derived variables from the original ones such that the problem is unaltered but the
conditions become LMIs again. For this purpose, suppose we partition the matrices
\[
\mathcal{X} = \pmatr{X &U\\U^T &\bullet},\quad \inv{\mathcal{X}} = \pmatr{Y &V\\V^T &\bullet},
\]
where $\bullet$ denotes the entries that we are not interested in. We further define the transformation matrices
\[
\mathcal{Y} = \pmatr{Y&I\\V^T&0},\quad \mathcal{Z} = \pmatr{I&0\\X&U},
\]
which, by the relation $\mathcal{X}\inv{\mathcal{X}}=I$, satisfy $\mathcal{X}\mathcal{Y}=\mathcal{Z}^T$.
Then by a congruence transformation the analysis LMI
\[
\pmatr{\mathcal{Y}&0\\0&I}^T\pmatr{I&0\\\mathcal{A}&\mathcal{B}\\\hline 0&I\\\mathcal{C}&\mathcal{D}}^T
\left(
\begin{array}{cc|c}
0&\mathcal{X}&0\\\mathcal{X}&0&0\\\hline 0&0&P_p
\end{array}
\right)
\pmatr{I&0\\\mathcal{A}&\mathcal{B}\\\hline 0&I\\\mathcal{C}&\mathcal{D}}\pmatr{\mathcal{Y}&0\\0&I}\prec 0
\]
becomes
\begin{equation}
\pmatr{I&0\\\mathbf{A}&\mathbf{B}\\\hline 0&I\\\mathbf{C}&\mathbf{D}}^T
\left(
\begin{array}{cc|c}
0&I&0\\I&0&0\\\hline 0&0&P_p
\end{array}
\right)
\pmatr{I&0\\\mathbf{A}&\mathbf{B}\\\hline 0&I\\\mathbf{C}&\mathbf{D}} \prec 0
\label{eq:synthnom}
\end{equation}
where boldface variables are defined as
\[
\left(\begin{array}{c|c}
\mathbf{A} &\mathbf{B}\\\hline\mathbf{C} &\mathbf{D}
\end{array}\right) \coloneqq \left(
\begin{array}{cc|c}
AY+B_uM &A+B_uNC_y &B_w+B_uND_{wy}\\
K &AX+LC_y &XB_w+LD_{wy}\\\hline
C_zY+D_{uz}M &C_z+D_{uz}NC_y &D_{wz}+D_{uz}ND_{wy}
\end{array}
\right)
\]
together with
\begin{equation}
\pmatr{K&L\\M&N}\coloneqq \pmatr{U&XB_u\\0&I}\pmatr{A_K & B_K \\C_K & D_K}
\pmatr{V^T&0\\C_yY&I}+\pmatr{XAY&0\\0&0}
\label{eq:contmatvartrafo}
\end{equation}
It is admittedly not easy to follow the inner workings of this transformation; however, after a tedious multiplication exercise
it can be seen that this bijective transformation does the job. Thus, the boldface variables are now functions of the matrices
$X,Y,K,L,M,N$, and the constraint is once again an LMI (after the application of \Cref{lem:linlemma}). For the
stability characterization we also need $\mathcal{X}\succ 0$; using the same transformation,
$\mathcal{Y}^T\mathcal{X}\mathcal{Y}\succ 0$, we obtain
\[
\mathbf{X} \coloneqq \pmatr{X&I\\I&Y}\succ 0.
\]
Therefore, instead of BMIs we have obtained LMI conditions in which the boldface variables enter affinely. This set of variables
is typically denoted by $v=\{X,Y,K,L,M,N\}$. Unfortunately, $K(s)$ and $K$ create a naming clash; we do not resolve it, so as
to comply with the literature, and instead rely on the reader for the distinction, stating which is meant in case of
ambiguity.
With all this preparation, we arrive at the nominal synthesis problem:
\begin{thm}\label{thm:nomsynthLMI} The nominal synthesis problem is solvable if there exists a feasible set of
variables $v$ such that
\begin{equation}
\mathbf{X}\succ 0,\pmatr{I&0\\\mathbf{A}&\mathbf{B}\\\hline 0&I\\\mathbf{C}&\mathbf{D}}^T
\left(
\begin{array}{cc|c}
0&I&0\\I&0&0\\\hline 0&0&P_p
\end{array}
\right)
\pmatr{I&0\\\mathbf{A}&\mathbf{B}\\\hline 0&I\\\mathbf{C}&\mathbf{D}} \prec 0
\label{eq:nomsynthLMI}
\end{equation}
hold. Once a feasible solution is found with the aid of \Cref{lem:linlemma}, the controller can be obtained by first finding
invertible $U$ and $V$ matrices solving $I-XY= UV^T$ and then back-substituting the variables into \eqref{eq:contmatvartrafo}.
\end{thm}
Once the feasible variables are obtained, a possible controller realization follows by back substitution as
\begin{equation}
\pmatr{A_K & B_K \\C_K & D_K} = \pmatr{U&XB_u\\0&I}^{-1}\pmatr{K-XAY&L\\M&N}
\pmatr{V^T&0\\C_yY&I}^{-1}
\label{eq:contmatvarbacktrafo}
\end{equation}
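The transformation \eqref{eq:contmatvartrafo} and its inverse \eqref{eq:contmatvarbacktrafo} can be verified to round-trip on random data; a sketch with hypothetical dimensions (here $n_K=n$, and $X,Y$ are built so that $I-XY$ is guaranteed nonsingular):

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical dimensions: n plant states (= controller states here), etc.
n, m_y, n_u = 3, 2, 2

A   = rng.normal(size=(n, n))
B_u = rng.normal(size=(n, n_u))
C_y = rng.normal(size=(m_y, n))
M = rng.normal(size=(n, n)); Y = M @ M.T + np.eye(n)    # Y > 0
X = np.linalg.inv(Y) + np.eye(n)                        # makes I - XY = -Y nonsingular
U = rng.normal(size=(n, n))                             # invertible w.p. 1
V = np.linalg.solve(U, np.eye(n) - X @ Y).T             # so that I - XY = U V^T
A_K, B_K = rng.normal(size=(n, n)), rng.normal(size=(n, m_y))
C_K, D_K = rng.normal(size=(n_u, n)), rng.normal(size=(n_u, m_y))

ctrl  = np.block([[A_K, B_K], [C_K, D_K]])
left  = np.block([[U, X @ B_u], [np.zeros((n_u, n)), np.eye(n_u)]])
right = np.block([[V.T, np.zeros((n, m_y))], [C_y @ Y, np.eye(m_y)]])
shift = np.block([[X @ A @ Y, np.zeros((n, m_y))],
                  [np.zeros((n_u, n)), np.zeros((n_u, m_y))]])

# Forward transformation giving (K, L; M, N) from the controller matrices ...
KLMN = left @ ctrl @ right + shift

# ... and the back-substitution, which must recover the controller exactly.
ctrl_back = np.linalg.solve(left, KLMN - shift) @ np.linalg.inv(right)
assert np.allclose(ctrl, ctrl_back)
```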
In summary, we have obtained a controller that is nominally stabilizing and guarantees the performance level $P_p$. Next, we include
uncertainty channels in our plant $G$ and add a robustness constraint to the design. Unfortunately, this breaks our
\enquote{LMIzation} step, and no method is known for obtaining LMI solutions from the BMI constraints given below.
\subsection{Adding Uncertainty Channels}
We now extend our plant representation via the following relations
\begin{equation}
\pmatr{\dot{x}\\ q\\ z\\ y} =
\underbrace{\left(
\begin{array}{cccc}
A &B_p &B_w &B_u\\
C_q &D_{pq} &D_{wq} &D_{uq}\\
C_z &D_{pz} &D_{wz} &D_{uz}\\
C_y &D_{py} &D_{wy} &0
\end{array}
\right)}_{G}
\pmatr{x\\ p\\ w\\ u}
\label{eq:genplant}
\end{equation}
with $p=\Delta q$. The positive integers $n_p,m_q$ are the row and column dimensions of the uncertainty operator, respectively. This extra
channel is often called the \enquote{uncertainty channel}. Once again we start off with the robustness/performance analysis LMI,
for which we also need to connect our stabilizing controller. The new closed-loop matrices are obtained as
\begin{equation}
\begin{multlined}[][0.9\textwidth]
\left[\begin{array}{c|cc}
\mathcal{A} &\mathcal{B}_p &\mathcal{B}_w \\\hline
\mathcal{C}_q &\mathcal{D}_{pq} &\mathcal{D}_{wq}\\
\mathcal{C}_z &\mathcal{D}_{pz} &\mathcal{D}_{wz}
\end{array}
\right] =
%\left[\begin{array}{cc|cc}
%A + B_uD_KC_y & B_uC_K & B_p + B_uD_KD_{py} & B_w + B_uD_KD_{wy} \\
%B_KC_y & A_K & B_KD_{py} & B_KD_wy\\ \hline
%C_q + D_{uq}D_KC_y & D_{uq}C_K & D_{pq} + D_{uq}D_KD_{py} & D_{wq} + D_{uq}D_KD_{wy}\\
%C_z + D_{uz}D_KC_y & D_{uz}C_K & D_{pz} + D_{uz}D_KD_{pz} & D_{wz} + D_{uz}D_KD_{wy}
%\end{array}\right] \\
\left[
\begin{array}{cc|cc}
A & 0 & B_p & B_w \\
0 & 0 & 0 & 0 \\ \hline
C_q & 0 & D_{pq} & D_{wq} \\
C_z & 0 & D_{pz} & D_{wz}
\end{array}
\right] +\\
\left[
\begin{array}{cc}
0 & B_u \\
I & 0\\ \hline
0 & D_{uq}\\
0 & D_{uz}
\end{array}
\right]
\left[
\begin{array}{cc}
A_K & B_K \\
C_K & D_K
\end{array}
\right]
\left[
\begin{array}{cc|cc}
0 & I & 0 & 0 \\
C_y & 0 & D_{py} & D_{wy}
\end{array}
\right].
\end{multlined}
\label{eq:uncclinc}
\end{equation}
Suppose a block-diagonal set of uncertainty operators $\bm{\Delta}$, characterized by a family of constant multipliers $\mathbf{P}$,
is known. Moreover, assume that $\Delta\star(G\star K)$ is well-posed for all $\Delta\in\bm{\Delta}$. For notational convenience we also
introduce the partitions
\[
P=\begin{pmatrix}Q&S\\ S^T&R\end{pmatrix},\quad P_p=\begin{pmatrix}Q_p&S_p\\ S^T_p&R_p\end{pmatrix}.
\]
Again, as given in \Cref{chap:analysis}, the closed loop system is robustly stable for all $\Delta\in\bm{\Delta}$ and achieves performance
characterized by the multiplier $P_p$ if there exist a $P\in\mathbf{P}$ and a symmetric matrix $\mathcal{X}$ such that
\begin{equation}
\pmatr{
I&0&0\\\mathcal{A}&\mathcal{B}_p&\mathcal{B}_w\\
0&I&0\\\mathcal{C}_q&\mathcal{D}_{pq}&\mathcal{D}_{wq}\\
0&0&I\\\mathcal{C}_z&\mathcal{D}_{pz}&\mathcal{D}_{wz}
}^T
\left(
\begin{array}{cc|cc|cc}
0&\mathcal{X}&&&&\\
\mathcal{X}&0&&&&\\\hline
&&Q&S&&\\
&&S^T&R&&\\\hline
&&&&Q_p&S_p\\
&&&&S_p^T&R_p\\
\end{array}
\right)
\pmatr{
I&0&0\\\mathcal{A}&\mathcal{B}_p&\mathcal{B}_w\\
0&I&0\\\mathcal{C}_q&\mathcal{D}_{pq}&\mathcal{D}_{wq}\\
0&0&I\\\mathcal{C}_z&\mathcal{D}_{pz}&\mathcal{D}_{wz}
}\prec 0
\label{eq:analysisLMI}
\end{equation}
%
%\begin{thm}
%
%\end{thm}
For the controller synthesis case, unfortunately, the previous transformation does not resolve the additional bilinear terms, and in fact
it is not even known whether such a transformation is possible. We can either attempt to solve the BMIs directly with nonconvex
optimization techniques or use another nonconvex approach known as the multiplier-controller iteration.
Notice that if we have a stabilizing controller, then the outer factors are constant matrices and we have an LMI problem. Conversely, if
we have a feasible multiplier such that the inequality is satisfied, then it is a matter of applying the aforementioned transformation and
the Linearization Lemma to obtain a controller. Hence, we can iterate by alternately fixing the multiplier or the controller.
\subsection{The Multiplier-Controller Iteration with Static Multipliers}
If we start with an uncertain generalized plant representation, we have neither the controller nor the robustness multipliers, and we
have no method to search for both simultaneously. Thus, we first consider the nominal control design problem and obtain a nominally
stabilizing controller as given above. By the well-posedness assumption and a simple continuity argument, it can be shown that the
closed loop has some, possibly very limited, robustness against the uncertainty set we would like to consider. However, there is no
reason for the controller to be robustly stabilizing against the full uncertainty region that we originally
modeled, since we did not enforce this with any constraint.
\begin{rem}Repeating what we have touched upon in \Cref{remiqc1}, this is briefly the rationale behind the common assumption of a
star-shaped uncertainty region, i.e., every scaled version of the uncertainty set is contained in the full-sized uncertainty set,
e.g., $[0,1]\bm{\Delta}\subseteq\bm{\Delta}$.
However, it is important to note that the scaling need not be a simple scalar $r\in[0,1]$ such that the uncertainty is scaled as
$r\Delta$. This point is often implicitly assumed and rarely mentioned; thus, the notation $r\Delta$ should be taken conceptually. As an
example, the delay uncertainty cannot be scaled as $re^{-s\tau}$ for any $r\in[0,1]$, since that would scale the unit circle. What
we actually want to scale is the $\tau$ variable, as in $e^{-sr\tau}$, so that for $r=0$ we have $e^0=1$, i.e., no delay, and for $r=1$ we
have $e^{-s\tau}$, i.e., the full duration of the maximum allowed delay. Similarly, saturation and dead-zone nonlinearities are examples of
such nontrivial scaling cases. Hence, star-shapedness in this context becomes a parametrization of the uncertainty size from the
nominal case up to the full-sized uncertainty. Nevertheless, most uncertainty types are amenable to scaling by $r$-multiplication, and
there is no need to invent yet another notation for an already complicated procedure. Therefore, although every uncertainty needs to be
scaled in its own way, for notational convenience we will still use $r\bm{\Delta}$ to denote the scaled uncertainty set.
\end{rem}
Summarizing our current situation: we have obtained a nominally stabilizing controller and have parameterized a custom scaling method
for each of our uncertainty subblocks. Now we search, via analysis, for the maximum $r$ such that the scaled closed loop is robustly
stable for all $\Delta\in r\bm{\Delta}$.
Formally, we have the following two theorems, which constitute the initialization and the two steps of the iteration. We assume that
the uncertain LTI plant
\[
G:\mathcal{L}_{2e}^{n_p+n_w+n_u}\to\mathcal{L}_{2e}^{m_q+m_z+m_y},
\]
is a generalized plant, i.e., there exists an LTI controller
\[
K:\mathcal{L}_{2e}^{m_y}\to\mathcal{L}_{2e}^{n_u},
\]
such that the nominal closed loop plant $G_{nom}\star K$ where
\[
G_{nom}(s) = 0_{n_p\times m_q}\star G = \pmatr{0_{(m_z+m_y)\times m_q} &I_{m_z+m_y}}G(s)\pmatr{0_{n_p\times(n_w+n_u)}\\I_{n_w+n_u}}
\]
is stable, i.e., $G_{nom}\star K\in\mathcal{RH}_\infty^{m_z\times n_w}$.
%Additionally $\mathbf{P}$ denotes the set of all suitable multipliers that all $\Delta\in\bm{\Delta}$ satisfy the
%quadratic constraint.
\begin{figure}%
\centering
\begin{tikzpicture}[>=stealth]
\node[draw,minimum size=7mm] (d) {$\Delta$};
\node[draw,below = 1cm of d,minimum size=9mm] (g) {$G$};
\node[draw,below = 9mm of g,minimum size=7mm] (k) {$K$};
\draw[->] (g.150) -| ++(-8mm,8mm) |- (d);
\draw[->] (d) -| ++(1.2cm,-8mm) |- (g.30);
\draw[<-] (g) --++(1.5cm,0) node[right] {$w$};
\draw[->] (g) --++(-1.5cm,0) node[left] {$z$};
\draw[->] (g.-150) -| ++(-8mm,-8mm) |- (k);
\draw[->] (k) -| ++(1.2cm,8mm) |- (g.-30);
\end{tikzpicture}
\caption{The uncertain interconnection}%
\label{fig:uncicsynth}%
\end{figure}
\begin{thm}[Analysis Step] The interconnection of an uncertain LTI plant $G(s)$ given by \eqref{eq:genplant}
with a nominally stabilizing controller $K$ given by \eqref{eq:contmats}, depicted in \Cref{fig:uncicsynth}
and admitting the closed-loop realization \eqref{eq:uncclinc}, is robustly stable in the face of all
$\Delta\in r\bm{\Delta}$ and achieves the performance level characterized by $P_p$ if there exist a symmetric matrix
$\mathcal{X}$ and a $P\in\mathbf{P}$ such that \eqref{eq:analysisLMI} holds.
\end{thm}
One can simply perform a line search for the largest possible $r\in[0,1]$ such that the conditions are numerically
verified and $P_p$ is optimized. Then the resulting multiplier $P$ is fixed and we switch to the controller design step.
\begin{thm}[Synthesis Step]\label{thm:synthesis} Let an uncertain LTI plant $G(s)$ given by \eqref{eq:genplant}
be given. There exists an LTI controller $K$ such that the closed loop is stable for all $\Delta\in r\bm{\Delta}$ and achieves
the performance level characterized by $P_p$ if there exists a set of variables $v=\{X,Y,K,L,M,N\}$ such that the linearized
version of the constraint (omitted for brevity)
\begin{equation}
\pmatr{
I&0&0\\\mathbf{A}&\mathbf{B}_p&\mathbf{B}_w\\\hline
0&I&0\\\mathbf{C}_q&\mathbf{D}_{pq}&\mathbf{D}_{wq}\\\hline
0&0&I\\\mathbf{C}_z&\mathbf{D}_{pz}&\mathbf{D}_{wz}
}^T
\left(
\begin{array}{cc|cc|cc}
0&I&&&&\\
I&0&&&&\\\hline
&&Q&S&&\\
&&S^T&R&&\\\hline
&&&&Q_p&S_p\\
&&&&S_p^T&R_p\\
\end{array}
\right)
\pmatr{
I&0&0\\\mathbf{A}&\mathbf{B}_p&\mathbf{B}_w\\\hline
0&I&0\\\mathbf{C}_q&\mathbf{D}_{pq}&\mathbf{D}_{wq}\\\hline
0&0&I\\\mathbf{C}_z&\mathbf{D}_{pz}&\mathbf{D}_{wz}
}\prec 0
\label{eq:thmsynthLMI}
\end{equation}
and $\mathbf{X}\succ 0$ hold.
\end{thm}
Proofs of both theorems are given in \cite{lmibook99} in detail.
Once again, in this step we optimize over $P_p$ while performing a line search over $r$. Theoretically, since the analysis result
is feasible for some $r_a$, the controller step should be feasible at least for $r=r_a$, and possibly for larger values too.
Numerically, however, this is not always the case. One might have to step down a little to actually obtain
feasible results; in our cases, we allowed a maximum retreat to $0.99r$ of the previous step. We have also
observed that this can actually improve the conditioning of the LMI solution, though this is by no means guaranteed.
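The overall alternation can be sketched as follows. Here \texttt{analysis\_feasible} and \texttt{synthesis\_feasible} are hypothetical stand-ins for the respective LMI solves (each takes an uncertainty size $r$ and reports feasibility), and the tolerances are illustrative:

```python
def multiplier_controller_iteration(analysis_feasible, synthesis_feasible,
                                    r_tol=1e-3, max_iter=50):
    """Alternate analysis (controller fixed, multiplier sought) and synthesis
    (multiplier fixed, controller sought), enlarging the uncertainty size r.

    Each oracle takes r in [0, 1] and returns True when the corresponding LMI
    step is feasible; the line searches over r are done by bisection."""
    def largest_feasible_r(oracle, lo=0.0, hi=1.0):
        if not oracle(lo):
            return None                      # even the smallest r is infeasible
        while hi - lo > r_tol:
            mid = 0.5 * (lo + hi)
            lo, hi = (mid, hi) if oracle(mid) else (lo, mid)
        return lo

    r_prev = 0.0
    for _ in range(max_iter):
        r_a = largest_feasible_r(analysis_feasible)          # analysis step
        if r_a is None:
            return None
        # Start the synthesis line search slightly below r_a (retreat to
        # 0.99 * r_a), as discussed in the text.
        r_s = largest_feasible_r(synthesis_feasible, lo=0.99 * r_a)
        if r_s is None:
            return None
        if r_s >= 1.0 - r_tol or abs(r_s - r_prev) < r_tol:  # converged
            return min(r_s, 1.0)
        r_prev = r_s
    return r_prev

# Toy oracles: analysis certifies r <= 0.8, synthesis certifies r <= 0.85.
r = multiplier_controller_iteration(lambda r: r <= 0.8, lambda r: r <= 0.85)
assert r is not None and abs(r - 0.85) < 5e-3
```

With real LMI solves, the performance level $P_p$ would of course also be tracked at each step, per the termination rule above.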
\begin{rem}
Note that the same theorem can be formulated for all $\Delta\in\bm{\Delta}$ and a closed-loop plant $(G(r)\star K)(s)$;
in other words, we can also modify the plant data to scale the uncertainty by subsuming the $r$ parameter suitably into the
plant.
To demonstrate the numerical problem, we assume that the uncertainties are of unstructured LTI type and, for simplicity, that the
multipliers are constant. For parametric uncertainties, it suffices to scale down the respective uncertainty channels as shown in
\Cref{fig:uncscale}, since
\[
\pmatr{r\Delta\\I}^T\pmatr{Q&S\\S^T&R}\pmatr{r\Delta\\I} = \pmatr{\Delta\\I}^T\pmatr{r^2 Q&rS\\rS^T&R}\pmatr{\Delta\\I}\succeq 0.
\]
From our test experience, we have found that reflecting the scaling onto the plant is better for numerical stability. Seemingly, the
reason is the numerical noise introduced when $r$ is small in the early iteration steps; notice how the square of $r$ drives the
$Q$ block to zero when $0<r\ll 1$. When a feasible multiplier is found, the factor $r$ must be stripped from it, since the multiplier will
be used again in the controller step; due to the numerical inaccuracies, wild changes are possible and the resulting multiplier
information is usually contaminated. We have more freedom in the scaled-plant case, since one can mitigate the effect of
small numbers via balanced realizations, cleaning up the state-space matrices, etc. Moreover, as we show later, in the frequency-dependent
multiplier case the $r$ variable becomes garbled in the factorizations, minimal realizations, etc. Hence, in this work it is recommended to
scale the plant instead of the multipliers.
\end{rem}
\begin{figure}%
\centering%
\begin{tikzpicture}[>=stealth]
\node[draw,minimum size=7mm] (d) {$\Delta$};
\node[draw,below = 1cm of d,minimum size=7mm] (g) {$G$};
\draw[->] (g.150) -| ++(-8mm,8mm) node[draw,fill=white] {$r$} |- (d);
\draw[->] (d) -| ++(1.2cm,-8mm) |- (g.30);
\draw[<-] (g.-30) --++(1cm,0) node[right] {$w$};
\draw[->] (g.-150) --++(-1cm,0) node[left] {$z$};
\end{tikzpicture}
\caption{Reflecting the uncertainty scaling to the plant}%
\label{fig:uncscale}%
\end{figure}
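The multiplier-scaling identity in the remark above is a purely algebraic rearrangement and is easy to confirm numerically (all data random and hypothetical):

```python
import numpy as np

rng = np.random.default_rng(3)
m, r = 4, 0.3

# A random "uncertainty" block and a random symmetric multiplier [Q S; S^T R].
Delta = rng.normal(size=(m, m))
Q = rng.normal(size=(m, m)); Q = Q + Q.T
S = rng.normal(size=(m, m))
R = rng.normal(size=(m, m)); R = R + R.T

def quad_form(D, Q, S, R):
    # (D; I)^T [Q S; S^T R] (D; I)
    return D.T @ Q @ D + D.T @ S + S.T @ D + R

# Scaling the uncertainty as r*Delta is the same as scaling the multiplier
# blocks as [r^2 Q, r S; r S^T, R].
lhs = quad_form(r * Delta, Q, S, R)
rhs = quad_form(Delta, r**2 * Q, r * S, R)
assert np.allclose(lhs, rhs)
```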
Hence, we have a theoretically nondecreasing sequence of analysis and synthesis uncertainty sizes
\[
r_{s0} = 0< r_{a1} \leq r_{s1} \leq \ldots \leq r_{an} \simeq r_{sn}.
\]
Similarly, the performance objective worsens or at best stays constant after each successful iteration,
revealing a nondecreasing sequence of real scalars such as the robust $\mathcal{L}_2$ gain or a similar
functional.
The iteration terminates when the last approximate equality is satisfied to the desired accuracy, or when both limits
are close enough to $1$ with agreeing performance levels $P_p$. Notice that we do not allow the iteration
to terminate when only the $r$ values agree; the performance levels should also agree, for numerical consistency. The
aforementioned numerical difficulty can exhibit a phenomenon in which both $r_{a}$ and $r_{s}$ come very close
to $1$ but never quite reach it. We have therefore coded an extra condition: if the $r$ values come within the
prescribed accuracy of $1$, the code simply sets $r=1$, so that the oscillations are removed and $r=1$ is
tested at each step.
Another numerical difficulty is the factorization of $I-XY=UV^T$. Since the controller construction involves the
inverses of $U$ and $V$, it is imperative that the factorization is well-conditioned with respect to inversion.
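One well-conditioned choice is to split the singular values of $I-XY$ evenly between the two factors via an SVD; a sketch (the construction of $X,Y$ is hypothetical, chosen only to make $I-XY$ nonsingular):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 5

# Hypothetical X, Y built so that I - XY is guaranteed nonsingular.
M = rng.normal(size=(n, n)); Y = M @ M.T + np.eye(n)
X = np.linalg.inv(Y) + np.eye(n)        # then I - XY = -Y, nonsingular

# Balanced factorization: I - XY = W diag(s) Z^T, split sqrt(s) between U, V.
W, s, Zt = np.linalg.svd(np.eye(n) - X @ Y)
U = W @ np.diag(np.sqrt(s))
V = Zt.T @ np.diag(np.sqrt(s))

assert np.allclose(np.eye(n) - X @ Y, U @ V.T)
# Each factor inherits only the square root of the condition number of I - XY.
assert np.isclose(np.linalg.cond(U), np.sqrt(np.linalg.cond(np.eye(n) - X @ Y)))
```

Since both factors carry $\sqrt{\sigma_i}$, neither $U$ nor $V$ alone can become much worse conditioned than $I-XY$ itself, which is what matters for the inversions in \eqref{eq:contmatvarbacktrafo}.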
\section{The Multiplier-Controller Iteration with Dynamic Multipliers}
From our analysis results it can be seen that dynamic multipliers provide a substantial reduction in conservatism. However,
the iterative control design procedure given above cannot handle dynamic multipliers in a straightforward
fashion. Although the mechanism is essentially the same, the missing step is subsuming the frequency-dependent
part of the multipliers into the outer factors. Let us give a conceptual example to demonstrate the obstacle. For
notational convenience we partition the closed-loop plant into the uncertainty and performance channels
\[
H \coloneqq (G\star K) = \pmatr{H_q\\H_z}
\]
Suppose we are given a feasible robustness/performance analysis inequality in the frequency domain, i.e.,
\[
\left(
\begin{array}{cc}
I &0\\
\multicolumn{2}{c}{H_q}\\\hline
0& I\\
\multicolumn{2}{c}{H_z}
\end{array}
\right)^*
\left(
\begin{array}{cc|cc}
Q(\iw)&S(\iw)&&\\S^*(\iw)&R(\iw)&&\\\hline &&Q_p&S_p\\&&S_p^T&R_p
\end{array}
\right)
\left(
\begin{array}{cc}
I &0\\
\multicolumn{2}{c}{H_q}\\\hline
0& I\\
\multicolumn{2}{c}{H_z}
\end{array}
\right) \prec 0
\]
holds for some multiplier $P(\iw) = \begin{psmallmatrix}Q(\iw)&S(\iw)\\S^*(\iw)&R(\iw)\end{psmallmatrix}$ and $P_p$ for all $\omega\in\Real_e$, and suppose that the frequency-dependent multiplier is parametrized with
outer factors, similar to what we have shown in our analysis examples before, as follows:
\[
\bigg(
\star
\bigg)^*
\left(
\begin{array}{cc|cc}
M_1&M_2&&\\M^T_2&M_3&&\\\hline &&Q_p&S_p\\&&S_p^T&R_p
\end{array}
\right)
\left(
\begin{array}{cc|cc}
\Psi_1&\Psi_2&&\\ \Psi_3&\Psi_4&&\\\hline &&I&0\\&&0&I
\end{array}
\right)
\left(
\begin{array}{cc}
I &0\\
\multicolumn{2}{c}{H_q}\\\hline
0& I\\
\multicolumn{2}{c}{H_z}
\end{array}
\right)
\prec 0
\]
Now, if we collect the frequency-dependent parts together and leave the constant multiplier, we have
\[
\Bigg(\star\Bigg)^*
\left(
\begin{array}{cc|cc}
M_1&M_2&&\\M^T_2&M_3&&\\\hline &&Q_p&S_p\\&&S_p^T&R_p
\end{array}
\right)
\left(
\begin{array}{>{\centering\arraybackslash$} p{0.7cm} <{$}>{\centering\arraybackslash$} p{0.7cm} <{$}}
\multicolumn{2}{c}{\Psi_1 + \Psi_2H_q}\\
\multicolumn{2}{c}{\Psi_3 + \Psi_4H_q}\\\hline
0& I\\
\multicolumn{2}{c}{H_z}
\end{array}
\right)
\prec 0
\]
Had the top row of the outer factor been $\pmatr{I &0}$, we could simply use our previous
synthesis technique and we would be done. However, we do not have any means to cope with such a complication directly. In other
words, the obstacle here is due to how we proceed with the synthesis step; the problem is exclusive to this synthesis method.
We remark this to avoid the confusion that might lead to the conclusion that the robust synthesis problem is difficult because of the
complications given above: the difficulty lies with the solution, not with the problem. Thus, as it is, we need to find a way to recover
the structure of that block such that we obtain an augmented plant from which we can extract the controller and hence obtain an
augmented open-loop plant.
Notice that if $\Psi_2(\iw)$ were identically zero and $\Psi_1$ were invertible with a stable inverse, we could multiply
the inequality from the right with $\begin{psmallmatrix}\inv{\Psi_1}&\\&I\end{psmallmatrix}$ and obtain the desired structure
(where $H_q = \pmatr{H_{q1} &H_{q2}}$ and $H_z = \pmatr{H_{z1} &H_{z2}}$ are partitioned according to the uncertainty and disturbance inputs)
\[
\Bigg(\star\Bigg)^*
\left(
\begin{array}{cc|cc}
M_1&M_2&&\\M^T_2&M_3&&\\\hline &&Q_p&S_p\\&&S_p^T&R_p
\end{array}
\right)
\left(
\begin{array}{cc}
I&0\\
(\Psi_3 + \Psi_4H_{q1})\Psi_1^{-1} &\Psi_4H_{q2}\\\hline
0& I\\
H_{z1}\Psi_1^{-1} &H_{z2}
\end{array}
\right)
\prec 0.
\]
Then, one can show that the resulting open-loop system becomes
\begin{equation}
G_{aug} = \pmatr{
(\Psi_3 + \Psi_4G_{pq})\Psi_1^{-1} &\Psi_4G_{wq} &\Psi_4G_{uq}\\
G_{pz}\Psi_1^{-1} &G_{wz} &G_{uz}\\
G_{py}\Psi_1^{-1} &G_{wy} &0\\
}
\label{eq:augplant}
\end{equation}
Hence, the obstacle needs to be circumvented by finding a way to obtain an equivalent multiplier of the form
\begin{align}
P(\iw) &=
\pmatr{\Psi_1(\iw)&\Psi_2(\iw)\\ \Psi_3(\iw)&\Psi_4(\iw)}^*
\pmatr{M_1&M_2\\M^T_2&M_3}
\pmatr{\Psi_1(\iw)&\Psi_2(\iw)\\
\Psi_3(\iw)&\Psi_4(\iw)}\\
&= \pmatr{\hat{\Psi}_1(\iw)&0\\ \hat{\Psi}_3(\iw)&\hat{\Psi}_4(\iw)}^*
\hat{M}
\pmatr{\hat{\Psi}_1(\iw)&0\\ \hat{\Psi}_3(\iw)&\hat{\Psi}_4(\iw)}
%&\mathrel{\reflectbox{$\coloneqq$}} \hat{P}(\iw)
\end{align}
where $\hat{\Psi}_1$ needs to be biproper and bistable. This has been proposed in \cite{goh96,goh962} (see also
\cite{veenmanIFAC} for a state-space derivation). Here the essential difficulty is that the $\Psi_i$ are often
tall basis transfer matrices and are not invertible. Hence we look for square factorizations that are inherently
linked to $J$-spectral factorizations of quadratic forms. We will first take a detour on the structural properties
of multipliers in general before we state the actual multiplier replacement. The style is based on \cite{helmersson2}.
\begin{define}[Inertia] The triple obtained by enumerating the numbers of positive, zero, and negative
eigenvalues is said to be the inertia of a hermitian matrix $A$, denoted by $\inertia A = \{n_+,n_0,n_-\}$.
Specifically, $\nu(A) = n_-, \pi(A) = n_+, \zeta(A) = n_0$ where we use the abbreviations $\nu$egative, $\pi$ositive, and $\zeta$ero.
\end{define}
A few basic examples to set the convention are given below:
\[
\inertia I_{2\times 2} = \{2,0,0\}\ \ , \inertia 0_{n\times n} = \{0,n,0\} \ \ ,\inertia \pmatr{1 &&\\&-1&\\&&-1} = \{1,0,2\}.
\]
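Numerically, the inertia can be read off from the eigenvalues of the hermitian matrix. A minimal sketch (the function name and the tolerance are our own choices):

```python
import numpy as np

def inertia(A, tol=1e-9):
    """Return the inertia {n_plus, n_zero, n_minus} of a hermitian matrix."""
    w = np.linalg.eigvalsh(A)            # real eigenvalues of a hermitian A
    n_plus = int(np.sum(w > tol))
    n_minus = int(np.sum(w < -tol))
    return (n_plus, len(w) - n_plus - n_minus, n_minus)

# The convention-setting examples from the text:
print(inertia(np.eye(2)))                   # (2, 0, 0)
print(inertia(np.zeros((3, 3))))            # (0, 3, 0)
print(inertia(np.diag([1.0, -1.0, -1.0])))  # (1, 0, 2)
```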
Recall the following two main inequalities that are of interest in the IQC theorem:
\begin{align}\label{eq:interlude1}
\infint{\pmatr{\widehat{\Delta(v)}(\iw)\\\hat{v}(\iw)}^*\Pi(\iw)\pmatr{\widehat{\Delta(v)}(\iw)\\\hat{v}(\iw)}d\omega} &\succeq 0\\
\label{eq:interlude2}
\pmatr{I\\G(\iw)}^*\Pi(\iw)\pmatr{I\\G(\iw)} &\preceq -\epsilon I
\end{align}
These inequalities impose constraints on the inertia of the frequency-dependent multiplier. Consider the following
simple fact: let a multiplier $\Pi$ be partitioned as
\begin{equation}
\Pi = \pmatr{\Pi_1&\Pi_2\\\Pi_2^* &\Pi_3}
\label{eq:Pipartition}
\end{equation}
\begin{lem}\label{lem:negnag}
The number of negative eigenvalues of $\Pi$ is greater than or equal to that of $\Pi_1$.
\end{lem}
\begin{proof}
Since
\[
\pmatr{I&0\\-\Pi_2^*\inv{\Pi_1} &I}\pmatr{\Pi_1&\Pi_2\\ \Pi_2^* &\Pi_3}
\pmatr{I&-\inv{\Pi_1}\Pi_2\\0&I} = \pmatr{\Pi_1&0\\0 &\Pi_3-\Pi_2^*\inv{\Pi_1}\Pi_2}
\]
is a congruence transformation, the inertia is preserved, and hence
$\nu(\Pi) = \nu(\Pi_1) + \nu(\Pi_3-\Pi_2^*\inv{\Pi_1}\Pi_2) \geq \nu(\Pi_1)$.
Here, we assumed that $\inv{\Pi_1}$ exists. If not, we can make it nonsingular without changing the number of
negative eigenvalues by adding a sufficiently small matrix $\epsilon I$. Since we are not interested in the number
of zero eigenvalues, this operation causes no problem.
\end{proof}
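The congruence argument of the proof (inertia additivity over the Schur complement) is easy to check numerically; the example matrix below is randomly generated and purely illustrative:

```python
import numpy as np

def neg_count(A, tol=1e-9):
    """Number of negative eigenvalues of a hermitian matrix."""
    return int(np.sum(np.linalg.eigvalsh(A) < -tol))

rng = np.random.default_rng(0)
n1, n2 = 3, 2
Pi = rng.standard_normal((n1 + n2, n1 + n2))
Pi = (Pi + Pi.T) / 2                      # make it symmetric
Pi1 = Pi[:n1, :n1]                        # assumed invertible here
Pi2 = Pi[:n1, n1:]
Pi3 = Pi[n1:, n1:]
schur = Pi3 - Pi2.T @ np.linalg.solve(Pi1, Pi2)
# Congruence preserves inertia, so the negative counts are additive:
assert neg_count(Pi) == neg_count(Pi1) + neg_count(schur)
assert neg_count(Pi) >= neg_count(Pi1)
```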
Next, we show that the inertia is further constrained by the outer factor rank.
\begin{lem}\label{lem:inertialemma} For a hermitian matrix $\Pi$, there exists an $X\in \mathbb{R}^{m\times n}$ such that
\begin{equation}
\pmatr{I\\X}^T\Pi\pmatr{I\\X} \prec 0
\label{eq:multinertia}
\end{equation}
if and only if $\Pi$ has at least $n$ negative eigenvalues.
\end{lem}
\begin{proof} ($\Rightarrow$) Complete the outer factors to a square matrix as
\[
P = \pmatr{I &0\\X &I}^T\Pi\pmatr{I &0\\X &I}
\]
so that the inertia is preserved, i.e. $\inertia \Pi = \inertia P$. Note that the $(1,1)$ block of $P$ is \eqref{eq:multinertia}.
Hence, from \Cref{lem:negnag}, we have $n = \nu(P_{11})\leq \nu({P}) = \nu(\Pi)$.
($\Leftarrow$) Assume $n+m \geq \nu(\Pi)\geq n$. Let $U = \pmatr{U_1\\U_2} \in\mathbb{R}^{(n+m)\times n}$ be a
matrix whose columns span an $n$-dimensional subspace of the negative eigenspace, with $U_1$ square. If $U_1$ is invertible,
then $X = U_2\inv{U_1}$ is a solution. Otherwise, we perturb with $\epsilon I$ and use $X= U_2\inv{(U_1+\epsilon I)}$.
\end{proof}
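The sufficiency construction in the proof can be carried out numerically. A sketch with an illustrative diagonal $\Pi$ (function name and perturbation handling are ours):

```python
import numpy as np

def neg_completion(Pi, n, eps=1e-9):
    """Construct X with [I; X]^T Pi [I; X] < 0, assuming nu(Pi) >= n."""
    w, V = np.linalg.eigh(Pi)        # eigenvalues in ascending order
    U = V[:, :n]                     # basis of the n most negative directions
    U1, U2 = U[:n, :], U[n:, :]
    if abs(np.linalg.det(U1)) < eps:
        U1 = U1 + eps * np.eye(n)    # perturbation step from the proof
    return U2 @ np.linalg.inv(U1)

Pi = np.diag([-1.0, -1.0, 1.0])      # nu(Pi) = 2, so n = 2 is admissible
X = neg_completion(Pi, 2)
F = np.vstack([np.eye(2), X])
# The quadratic form is negative definite on the image of [I; X]:
assert np.all(np.linalg.eigvalsh(F.T @ Pi @ F) < 0)
```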
Thus, a quadratic form has to have at least as many negative eigenvalues as the rank of its outer factor in order to be negative definite,
since the quadratic form must be negative definite on the image of the outer factor.
This information makes it easier to state some conclusions about the inertia of a frequency-dependent multiplier.
Obviously, we need to make sure that the inertia stays the same on the imaginary axis. Hence, we can make use of the following result
regarding the frequency-dependent case (see \cite{megretskitreil} and references therein):
\begin{thm}\label{thm:megretskitreil} Let $\phi(\iw)$ be a hermitian bounded measurable matrix-valued function.
The following statements are equivalent:
\begin{enumerate}
\item the functional
\[
\sigma(f) = \int_{-\infty}^\infty{\hat{f}^*(\iw)\phi(\iw)\hat{f}(\iw)d\omega}
\]
is nonnegative for all $f\in\mathcal{L}_2^k(0,\infty)$.
\item\label{thm:megretskitreilitemtwo} $\phi(\iw)\succeq 0$ almost everywhere for $\omega\in\Real$.
\end{enumerate}
\end{thm}
\begin{proof}\parbox{0pt}{}\par
$(2)\implies (1)$ is a direct consequence.
$(1)\implies (2)$ The quadratic form $\sigma(f)$ is time-invariant on $\mathcal{L}_2(-\infty,\infty)$. If
$\sigma\geq 0$ on $\mathcal{L}_2(0,\infty)$ then $\sigma\geq 0$ on $\mathcal{L}_2(t_0,\infty)$ for any $t_0>-\infty$.
Moreover,
\[
\bigcup\limits_{t>-\infty}\mathcal{L}_2(t,\infty)
\]
is dense in $\mathcal{L}_2(-\infty,\infty)$ and $\sigma$ is continuous on $\mathcal{L}_2(-\infty,\infty)$. Therefore
$\sigma\geq 0$ on $\mathcal{L}_2(-\infty,\infty)$ and hence \cref{thm:megretskitreilitemtwo} holds.
\end{proof}
The next result from \cite{goh96} reveals the IQC multiplier structure for robustness tests.
\begin{thm}\label{thm:IQCinertia} If a hermitian, bounded, matrix-valued, and invertible (on $i\Real_e$) multiplier $H$ satisfies the
IQC and the corresponding FDI \eqref{eq:interlude1} and \eqref{eq:interlude2} then there exists a matrix-valued, bounded and
invertible (on $i\Real_e$) $S$ such that
\[
H(\iw) = S^*(\iw)\pmatr{-I_{n_p}&0\\0&I_{m_q}}S(\iw) \quad \forall\omega\in\Real_e
\]
\end{thm}
\begin{proof} From \eqref{eq:interlude2} and \Cref{thm:megretskitreil} we can conclude that
for all $\omega\in\Real_e$, $H(\iw)$ has at least $n_p$ negative eigenvalues. Similarly, from
\eqref{eq:interlude1} and the nominal case we also see that $H_{22}$ is positive semi-definite.
Finally, from the invertibility of $H$ on the extended imaginary axis, such a diagonalizing
congruence transformation with some $S$ always exists.
\end{proof}
The following is also from \cite{goh96}:
\begin{thm} Let
\[
H(s) \coloneqq \pmatr{H_{11}(s)&H_{12}(s)\\ H^*_{12}(s)&H_{22}(s)} \in\mathcal{RL}_\infty
\]
be a hermitian, bounded and invertible transfer matrix on the imaginary axis with $H_{22}(s)\succ 0$ on $i\Real_e$.
Then there exists a factorization of the form
\[
H(\iw) = S^*(\iw)J_HS(\iw) \quad \forall\omega\in\Real_e
\]
where
\[
S(s) \coloneqq \pmatr{R(s)&0\\ Q(s)&P(s)}
\]
with $P(s),R(s)\in\mathcal{RH}_\infty$ with stable inverses.
\end{thm}
\begin{proof} For brevity, we omit the frequency dependence from the notation of transfer matrices. First,
from the assumption $H_{22}\succ 0$, the inertia of $H$ (cf. \Cref{thm:IQCinertia}), and \Cref{lem:negnag}, we have
\begin{equation}
H^{\vphantom{-1}}_{11} -H^{\vphantom{-1}}_{12}\inv{H_{22}}H_{12}^* \prec 0
\label{eq:multschur}
\end{equation}
for all $\omega\in\Real_e$. We write
\[
H = \pmatr{H^{\vphantom{-1}}_{11} -H^{\vphantom{-1}}_{12}\inv{H_{22}}H_{12}^*&0\\ 0&0} +
\pmatr{H^{\vphantom{-1}}_{12}\inv{H_{22}}\\ I}H^{\vphantom{-1}}_{22}\pmatr{\inv{H_{22}}H_{12}^* &I}
\]
Hence both $H_{22}$ and the negative of \eqref{eq:multschur} can be replaced with biproper, bistable spectral factors,
which will be defined next. Let $\hat{P}$ denote the spectral factor of $H_{22} = \hat{P}^*\hat{P}$ and let $R$ denote
that of $-(H^{\vphantom{-1}}_{11} -H^{\vphantom{-1}}_{12}\inv{H_{22}}H_{12}^*) = R^*R$. Then we have
\begin{align*}
H &= \pmatr{-R^*R&0\\0&0} + \pmatr{H_{12}\inv{\hat{P}}\\\hat{P}^*}\pmatr{{\hat{P}}^{-*}H^*_{12}&\hat{P}} \\
&\reflectbox{$\coloneqq$}
\pmatr{-R^*R&0\\0&0}+\pmatr{Q^*\\P^*}\pmatr{Q^*\\P^*}^*\\
&= \pmatr{R(s)&0\\ Q(s)&P(s)}^* \pmatr{-I&\\ &I}\pmatr{R(s)&0\\ Q(s)&P(s)}
\end{align*}
\end{proof}
Notice that the claim is only valid on the imaginary axis; for factorizations with respect to other contours, see \cite{bart10}.
The assumption $H_{22}\succ 0$ is a mild one, since many practically relevant multipliers have this property. However,
passivity multipliers and some shifted parameter intervals might only yield a positive semi-definite block. In
this case either a different factorization is employed (e.g. \cite{goh962}) or the uncertainty channel is shifted until the desired
property is achieved.
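For constant hermitian matrices, the construction in the proof reduces to ordinary Cholesky factorizations. A numerical sketch (the matrix values are illustrative only):

```python
import numpy as np

# A constant hermitian H with H22 > 0 and negative-definite Schur complement,
# mimicking the structure of an IQC multiplier (values are illustrative).
H11 = np.array([[-3.0, 0.5], [0.5, -2.0]])
H12 = np.array([[0.3, 0.1], [0.0, 0.2]])
H22 = np.array([[2.0, 0.4], [0.4, 1.5]])
H = np.block([[H11, H12], [H12.T, H22]])

P = np.linalg.cholesky(H22).T                    # H22 = P^T P
schur = H11 - H12 @ np.linalg.solve(H22, H12.T)  # must be negative definite
R = np.linalg.cholesky(-schur).T                 # -schur = R^T R
Q = np.linalg.solve(P.T, H12.T)                  # Q = P^{-T} H12^T
S = np.block([[R, np.zeros((2, 2))], [Q, P]])
J = np.diag([-1.0, -1.0, 1.0, 1.0])
# Verify the J-factorization H = S^T J S:
assert np.allclose(S.T @ J @ S, H)
```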
Now we are at a point where we know the inertia properties of the multiplier, and we also know that the
inertia stays the same on the imaginary axis. Moreover, with certain inertia of the lower right block,
we can return to our problem of finding a suitable replacement of the multiplier with a new one involving
invertible, biproper, and bistable factors. First, let us state a classical and powerful result regarding
transfer matrix factorization (see \cite[Thm. 13.19]{zhoubook}, \cite[Thm. 7.3]{francis}, and \cite[Thm. 2]{youla}).
\begin{thm}\label{thm:zhouspecfact}Let $A,B,Q,S,R$ be matrices of compatible dimensions such that $Q=Q^T$ and $R=R^T\succ 0$, with
$(A,B)$ stabilizable. Suppose either one of the following assumptions is satisfied
\begin{enumerate}[label=(A\arabic*)]
\item $A$ has no eigenvalues on the imaginary axis.
\item $Q$ is positive or negative semidefinite and $(A,Q)$ has no unobservable modes on the imaginary axis.
\end{enumerate}
Then,
\begin{enumerate}[label=(\Roman*)]
\item The following are equivalent;
\begin{enumerate}[label=(\alph*)]
\item The hermitian matrix
\[
\Phi(s) = \pmatr{\inv{(sI-A)}B\\ I}^*\pmatr{Q&S\\S^T&R}\pmatr{\inv{(sI-A)}B\\ I}
\]
satisfies $\Phi(\iw)\succ 0$ for all $\omega\in\Real_e$.
\item There exists a unique symmetric $X$ such that the matrix Riccati equation
\[
XA + A^TX - (XB+S)\inv{R}(XB+S)^T + Q = 0
\]
is satisfied and $A-B\inv{R}(XB+S)^T$ is Hurwitz.
\item The Hamiltonian matrix
\[
\pmatr{A-B\inv{R}S^T & -B\inv{R}B^T\\-(Q-S\inv{R}S^T) &-(A-B\inv{R}S^T)^T}
\]
has no eigenvalues on the imaginary axis.
\end{enumerate}
\item The following statements are also equivalent:
\begin{enumerate}
\item[(d)] $\Phi(\iw) \succeq 0$ for all $\omega\in\Real_e$.
\item[(e)] There exists a unique symmetric $X$ such that
\[
XA + A^TX - (XB+S)\inv{R}(XB+S)^T + Q = 0
\]
is satisfied and $A-B\inv{R}(XB+S)^T$ has all its eigenvalues in the closed left-half plane.
\end{enumerate}
\end{enumerate}
Every such $\Phi$ with $\Phi(\iw)\succ 0$ has a spectral factorization $\Phi = \Phi_s^*\Phi_s$ where $\Phi_s,\inv{\Phi_s}\in\mathcal{RH}_\infty$.
Such a $\Phi_s$ is called a spectral factor of $\Phi$.
\end{thm}
In particular we are interested in the following corollary;
\begin{coroll} Consider a square hermitian transfer matrix $G\in\mathcal{RL}_\infty^{\bullet\times\bullet}$ with its inverse
$G^{-1}\in\mathcal{RL}_\infty^{\bullet\times\bullet}$ and $G(\infty)\succ 0$. Then, $G$ has a spectral factorization $G_s$ such that
\[
G = G_s^*G_s
\]
where $G_s,G_s^{-1}\in\mathcal{RH}_\infty^{\bullet\times\bullet}$.
\end{coroll}
Note that the system $G$ need not be obtained from a quadratic form, though that is the case we will continue to consider. However, by separating
the stable and the antistable modes with a state transformation and using the hermitian property of $G$, we can obtain a particular
structure without loss of generality. The structure is simple to demonstrate: since the LTI series interconnection
$y=G_1G_2 u$ can be realized as
\begin{equation}
\pmatr{\dot{x}\\y} = \pmatr{A_1 &B_1C_2 &B_1D_2\\0 &A_2 &B_2\\C_1 &D_1C_2 &D_1D_2}\pmatr{x\\u},
\end{equation}
then any factorization $G = G_s^*G_s$ admits a realization of the form
\begin{equation}
G = G_s^*G_s = \left[
\begin{array}{cc|c}
-A_s^T &0 &0\\0 &A_s &B_s\\\hline -B_s^T &0 &0
\end{array}\right]+
\left[
\begin{array}{c}
C_s^T \\ 0 \\\hline D_s^T
\end{array}
\right]
\bigg[
\begin{array}{cc|c}
0 &C_s &D_s
\end{array}
\bigg]
\end{equation}
partitioned accordingly. Also, we have $G(\infty)= D_s^TD_s$. For the computation of the spectral factors,
consider a multiplier with tall outer factors that satisfies
\[
\Phi(\iw)^*M\Phi(\iw) \succ 0
\]
for all $\omega\in\Real_e$, as in the examples of \Cref{chap:analysis}. Using a minimal realization of $\Phi(s)$ we also have
\begin{equation}
\pmatr{B_{\Phi}^T(sI-A_{\Phi})^{-*}&I}
\pmatr{C_{\Phi}^TMC_{\Phi}&C_{\Phi}^TMD_{\Phi}\\D_{\Phi}^TMC_{\Phi}&D_{\Phi}^TMD_{\Phi}}
\pmatr{(sI-A_{\Phi})^{-1}B_{\Phi}\\I}
\label{eq:specfactshuffle}
\end{equation} which is of the desired form given in \Cref{thm:zhouspecfact}. Though we have omitted the proof, the relation with the
Riccati equation can be sketched using the LMI version of this constraint. Via the KYP Lemma, it is equivalent to the existence
of a symmetric matrix $X$ such that
\begin{equation}
\pmatr{I &0\\A_{\Phi} &B_{\Phi}\\C_{\Phi}&D_{\Phi}}^T
\pmatr{0 &X &0\\X &0 &0\\0 &0 &M}
\pmatr{I &0\\A_{\Phi} &B_{\Phi}\\C_{\Phi}&D_{\Phi}} \succ 0
\label{eq:specfactLMI}
\end{equation}
holds. This means that the Algebraic Riccati Inequality, obtained by straightforward multiplication and a Schur complement
(using $\Phi(\infty)=D^T_\Phi M D_\Phi\succ 0$), also holds:
\begin{equation}
XA_{\Phi} + A^T_{\Phi}X - (XB_{\Phi}+\underbracket{C_{\Phi}^TMD_\Phi}_S)\underbracket{\inv{(D^T_{\Phi}MD_{\Phi})}}_{\inv{R}}
(XB_{\Phi}+C_{\Phi}^TMD_\Phi)^T+\underbracket{C^T_{\Phi}MC_{\Phi}}_Q \succ 0
\label{eq:specfactARI}
\end{equation}
Thus the corresponding Riccati Equation
\begin{equation}
XA_{\Phi} + A^T_{\Phi}X - (XB_{\Phi}+C_{\Phi}^TMD_\Phi)\inv{(D^T_{\Phi}MD_{\Phi})}
(XB_{\Phi}+C_{\Phi}^TMD_\Phi)^T+C^T_{\Phi}MC_{\Phi} = 0
\label{eq:specfactARE}
\end{equation}
has a unique stabilizing solution $X$ and
\[
A_\Phi - B_\Phi\inv{(D^T_{\Phi}MD_{\Phi})}(XB_{\Phi}+C_{\Phi}^TMD_\Phi)^T
\]
is Hurwitz. Since $D^T_{\Phi}MD_{\Phi}\succ 0$, we can replace this matrix with an arbitrary square factorization (square root,
Cholesky, etc.) $\hat{D}_\Phi^T\hat{D}_\Phi\succ 0$. Therefore, one possible realization of the spectral factor $\Phi_s(s)$ is given by
\[
\Phi_s(s) = \left[
\begin{array}{c|c}
A_\Phi &B_\Phi\\\hline
\hat{D}_\Phi^{\mathstrut -T}(XB_{\Phi}+C_{\Phi}^TMD_\Phi)^T&\hat{D}_\Phi
\end{array}
\right]
\]
such that $\Phi_s(s),\inv{\Phi_s}(s)\in\mathcal{RH}_\infty$.
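Numerically, the spectral factor can be computed with a standard Riccati solver. The following sketch uses SciPy's \texttt{solve\_continuous\_are} on the illustrative first-order outer factor $\Phi(s) = \begin{psmallmatrix}1/(s+1)\\1\end{psmallmatrix}$ with $M=I$ (both choices are ours, for demonstration only):

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Illustrative data: Phi(s) = [1/(s+1); 1], M = I, so Phi^* M Phi > 0 on iR.
A = np.array([[-1.0]])
B = np.array([[1.0]])
C = np.array([[1.0], [0.0]])
D = np.array([[0.0], [1.0]])
M = np.eye(2)

Q = C.T @ M @ C                                # blocks of the middle matrix
S = C.T @ M @ D
R = D.T @ M @ D                                # = Phi(inf)^T M Phi(inf) > 0
X = solve_continuous_are(A, B, Q, R, s=S)      # stabilizing ARE solution
Dhat = np.linalg.cholesky(R).T                 # R = Dhat^T Dhat
Cs = np.linalg.solve(Dhat.T, (X @ B + S).T)    # output matrix of Phi_s

def tf(Am, Bm, Cm, Dm, w):
    """Frequency response Cm (iw I - Am)^{-1} Bm + Dm."""
    return Cm @ np.linalg.solve(1j * w * np.eye(len(Am)) - Am, Bm) + Dm

# Verify Phi(iw)^* M Phi(iw) = Phi_s(iw)^* Phi_s(iw) at a few frequencies.
for w in (0.0, 1.0, 10.0):
    Phi = tf(A, B, C, D, w)
    Phis = tf(A, B, Cs, Dhat, w)
    assert np.allclose(Phi.conj().T @ M @ Phi, Phis.conj().T @ Phis)
```

For this example the ARE reduces to $-2X - X^2 + 1 = 0$ with stabilizing solution $X = \sqrt{2}-1$, so $\Phi_s(s) = 1 + (\sqrt{2}-1)/(s+1)$, which is biproper with a stable inverse.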
With a slight abuse of the general signature matrix definition, which is a diagonal matrix whose diagonal entries are
either $1$ or $-1$, we use the following:
\begin{define} Let $M$ be an invertible, hermitian matrix with $\pi(M) = p$ and $\nu(M)=n$. The diagonal matrix
$J= \operatorname{diag}\{-I_n,I_p\}$ is said to be the signature matrix of $M$.
\end{define}
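The signature matrix is easy to form from the eigenvalues of $M$; a small sketch (the function name is ours):

```python
import numpy as np

def signature_matrix(M, tol=1e-9):
    """J = diag(-I_n, I_p) built from the inertia of an invertible hermitian M."""
    w = np.linalg.eigvalsh(M)
    n = int(np.sum(w < -tol))     # nu(M)
    p = int(np.sum(w > tol))      # pi(M)
    return np.diag(np.concatenate([-np.ones(n), np.ones(p)]))

# Example: M with one negative and two positive eigenvalues.
J = signature_matrix(np.diag([2.0, -1.0, 3.0]))
assert np.allclose(J, np.diag([-1.0, 1.0, 1.0]))
```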
When $\nu(M)\neq 0$, the type of factorization given in \Cref{thm:IQCinertia} is known as a \enquote{\emph{$J$-spectral
factorization}}, and exact solvability conditions have been derived which relate the solvability of a certain Riccati equation to the