Article

On Application of the Ray-Shooting Method for LQR via Static-Output-Feedback

Department of Computer Sciences, Lev Academic Center, Jerusalem College of Technology, P.O. Box 16031, 93721 Jerusalem, Israel
Algorithms 2018, 11(1), 8; https://doi.org/10.3390/a11010008
Submission received: 29 October 2017 / Revised: 27 December 2017 / Accepted: 4 January 2018 / Published: 16 January 2018
(This article belongs to the Special Issue Algorithms for Hard Problems: Approximation and Parameterization)

Abstract

In this article, we suggest a randomized algorithm for the LQR (Linear Quadratic Regulator) optimal-control problem via static-output-feedback. The suggested algorithm is based on the recently introduced randomized optimization method called the Ray-Shooting Method, which efficiently solves the global minimization problem of continuous functions over compact, non-convex, unconnected regions. The algorithm presented here is randomized, with a proof of convergence in probability. Its practical implementation performs well in terms of the quality of the controllers obtained and the success rate.

1. Introduction

Let a continuous-time system be given by
$\dot{x}(t) = A\,x(t) + B\,u(t), \qquad y(t) = C\,x(t),$ (1)
where $A \in \mathbb{R}^{p \times p}$, $B \in \mathbb{R}^{p \times q}$, $C \in \mathbb{R}^{r \times p}$, and x, u, y are the state, the input and the measurement, respectively. We assume that $(A, B)$ and $(A^T, C^T)$ are controllable (see [1] for a new reduction from the case where $(A, B)$ and $(A^T, C^T)$ are only stabilizable). Let
$J(x_0, u) := \int_0^{\infty} \left( x(t)^T Q\, x(t) + u(t)^T R\, u(t) \right) dt$ (2)
denote the cost functional, where $Q > 0$ and $R \ge 0$. Assuming that $x(0) = x_0$ is given, the LQR problem is to attenuate the disturbance $x_0$ at minimal control cost, i.e., to design a regulation input $u(t)$ that minimizes $J(x_0, u)$, subject to the constraints given by (1). Let $u = -Ky$ be the static-output-feedback (SOF), with the closed-loop matrix $A_c(K) := A - BKC$. Let $\mathbb{C}^-$ denote the open left half-plane, let $\alpha > 0$ and let $\mathbb{C}^-_\alpha$ denote the set of all $z \in \mathbb{C}^-$ with $\Re(z) \le -\alpha$, where $\Re(z)$ is the real part of z. Let $\mathcal{S}^{q \times r}$ denote the set of all matrices $K \in \mathbb{R}^{q \times r}$ such that $A_c(K)$ is stable, i.e., $\sigma(A_c(K)) \subset \mathbb{C}^-$ (where $\sigma(A_c(K))$ is the spectrum of $A_c(K)$). By $\mathcal{S}^{q \times r}_\alpha$, we denote the set of all matrices $K \in \mathbb{R}^{q \times r}$ such that $\sigma(A_c(K)) \subset \mathbb{C}^-_\alpha$. In this case, we say that $A_c(K)$ is α-stable. Below, we will occasionally write $\mathcal{S}_\alpha$ instead of $\mathcal{S}^{q \times r}_\alpha$ when the size of the related matrices is clear.
Let $K \in \mathcal{S}^{q \times r}_\alpha$ be given. Substitution of $u = -Ky = -KCx$ into (2) gives
$J(x_0, K) := \int_0^{\infty} x(t)^T \left( Q + C^T K^T R K C \right) x(t)\, dt.$ (3)
Since $Q + C^T K^T R K C > 0$ and since $A_c(K)$ is stable, it follows that the Lyapunov equation
$A_c(K)^T P + P A_c(K) = -\left( Q + C^T K^T R K C \right)$ (4)
has a unique solution P > 0 , given by
$P = -\,\mathrm{mat}\left( \left( I_p \otimes A_c(K)^T + A_c(K)^T \otimes I_p \right)^{-1} \mathrm{vec}\left( Q + C^T K^T R K C \right) \right),$ (5)
where vec stacks the columns of a given matrix into a single column and mat is the inverse of vec. Let us denote the solution (5) by $P(K)$. Substitution of (4) into (3), noting that $\dot{x}(t) = A_c(K)\,x(t)$ with $A_c(K)$ stable, leads to
$J(x_0, K) = -\int_0^{\infty} x(t)^T \left( A_c(K)^T P(K) + P(K) A_c(K) \right) x(t)\, dt = -\int_0^{\infty} \frac{d}{dt}\left( x(t)^T P(K)\, x(t) \right) dt = x_0^T P(K)\, x_0.$
Thus, we look for $K \in \mathcal{S}^{q \times r}_\alpha$ that minimizes the functional $J(x_0, K) = x_0^T P(K) x_0$. When $x_0$ is unknown, we seek $K \in \mathcal{S}^{q \times r}_\alpha$ for which
$\sigma_{\max}(K) := \max \sigma(P(K))$ (6)
is minimal. In this case, we obtain a robust LQR via SOF, in the sense that it minimizes $J(x_0, K)$ for the worst possible (unknown) $x_0$. Note that
$x_0^T P(K)\, x_0 = \left\| P(K)^{1/2} x_0 \right\|^2 \le \left\| P(K)^{1/2} \right\|^2 \left\| x_0 \right\|^2 = \left\| P(K) \right\| \left\| x_0 \right\|^2 = \sigma_{\max}(K) \left\| x_0 \right\|^2,$
and that there exists $x_0 \ne 0$ for which equality holds. Therefore, $J(x_0, K) \le \|x_0\|^2 \sigma_{\max}(K)$, where equality holds in the worst case.
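The vectorized Lyapunov solve in (5) and the bound $J(x_0, K) \le \|x_0\|^2 \sigma_{\max}(K)$ can be checked numerically. The following is a minimal sketch, assuming NumPy is available; the helper name `lyap_via_kron` is ours, not from the paper:

```python
import numpy as np

def lyap_via_kron(Ac, M):
    """Solve Ac^T P + P Ac = -M by vectorization, as in (5):
    vec(P) = -(I_p (x) Ac^T + Ac^T (x) I_p)^{-1} vec(M)."""
    p = Ac.shape[0]
    L = np.kron(np.eye(p), Ac.T) + np.kron(Ac.T, np.eye(p))
    vecP = np.linalg.solve(L, -M.reshape(-1, order="F"))  # column-major vec
    return vecP.reshape((p, p), order="F")
```

For a stable $A_c$ and $M = Q + C^T K^T R K C > 0$, the returned P is the unique symmetric positive-definite solution, and $x_0^T P x_0$ equals the cost integral (3).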
Note that the functionals J x 0 , K and σ m a x K are generally not convex since their domain of definition S q × r (and therefore S α q × r ) is generally non-convex. Necessary conditions for optimality were given as three quadratic matrix equations in [2,3,4,5]. Necessary and sufficient conditions for optimality, based on linear matrix inequalities (LMI), were given in [6,7,8]. However, algorithms based on these formulations are generally not guaranteed to converge, seemingly because of the non-convexity of the coupled matrix equations or inequalities, and when they converge, it is to a local optimum only.
The application of SOFs in LQRs is appealing for several reasons: they are reliable and cheap, and their implementation is simple and direct. Moreover, the long-term memory of dynamic feedbacks is useless for systems subject to random disturbances, fast dynamic loadings or impulses, and the application of state feedback is not always possible, due to the unavailability of full-state measurements (see [9], for example). On the other hand, in practical applications, the entries of the needed SOFs are bounded, and since the problem of SOFs with interval-constrained entries is NP-hard (see [10,11]), one cannot expect the existence of a deterministic polynomial-time algorithm for this problem. Randomized algorithms are thus a natural approach. Probabilistic and randomized methods for the constrained SOF problem and for robust stabilization via SOFs (among other hard problems) are discussed in [12,13,14,15]. The Ray-Shooting Method was recently introduced in [16], where it was utilized to derive the Ray-Shooting (RS) randomized algorithm for the minimal-gain SOF problem with regional pole-assignment, where the region can be non-convex and unconnected. For a survey of the SOF problem see [17], and for a recent survey of the robust SOF problem see [18].
The contribution of this research is as follows:
  • The suggested algorithm is based on the Ray-Shooting Method (see [16]), which, as opposed to smooth optimization methods, has the potential of finding a global optimum of continuous functions over compact non-convex and unconnected regions.
  • The suggested algorithm has a proof of convergence (in probability) and explicit complexity.
  • Experience with the algorithm shows good quality of the obtained controllers, a high success rate and good run-times for real-life systems. Thus, the suggested practical algorithm efficiently solves the problem of LQR via SOF.
  • The algorithm does not need to solve any Riccati equations and thus can be applied to large systems.
  • The suggested algorithm is one of the few that address LQR via SOF, and it can handle discrete-time systems under the same formulation.
The remainder of the article is organized as follows:
In Section 2, we introduce the practical randomized algorithm for the problem of LQR via SOF. In Section 3, we give the results of the algorithm for some real-life systems, and we compare its performance with that of a well-known algorithm that has a proof of convergence to a local minimum (under some reasonable assumptions). Finally, in Section 4, we conclude with some remarks.

2. The Practical Algorithm for the Problem of LQR via SOF

Assume that $K_0 \in \mathrm{int}(\mathcal{S}_\alpha)$ was found by the RS algorithm (see [16]) or by any other method (see [19,20,21]). Let $h > 0$ and let $U_0$ be a unit vector w.r.t. the Frobenius norm, i.e., $\|U_0\|_F = 1$. Let $L_0 = K_0 + h \cdot U_0$ and let $\mathcal{L}$ be the hyperplane defined by $L_0 + V$, where $\langle V, U_0 \rangle_F = 0$. Let $r > 0$ and let $\mathcal{R}$ denote the set of all $F \in \mathcal{L}$ such that $\|F - L_0\|_F \le r$. Let $\mathcal{R}_\epsilon = \mathcal{R} + \overline{B(0, \epsilon)}$, where $\overline{B(0, \epsilon)}$ denotes the closed ball centered at 0 with radius ϵ ($0 < \epsilon \le 1/2$), with respect to the Frobenius norm on $\mathbb{R}^{q \times r}$. Let $D_0 = \mathrm{CH}(K_0, \mathcal{R}_\epsilon)$ denote the convex hull of the vertex $K_0$ with the base $\mathcal{R}_\epsilon$. Let $\mathcal{S}^0_\alpha = \mathcal{S}_\alpha \cap D_0$ and note that $\mathcal{S}^0_\alpha$ is compact (but generally not convex). We wish to minimize the continuous function $\sigma_{\max}(K)$ (or the continuous function $J(x_0, K)$, when $x_0$ is known) over the compact set $\mathcal{S}_\alpha \cap \overline{B(K_0, h)}$. Let $K^*$ denote a point of $\mathcal{S}_\alpha \cap \overline{B(K_0, h)}$ where the minimum of $\sigma_{\max}(K)$ is attained. Obviously, $K^* \in D_0$ for some direction $U_0$, as above.
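The cone geometry above can be sampled directly: pick a Frobenius-unit direction, move to the base center $L_0$, and draw a point of the base disc inside the hyperplane orthogonal to $U_0$. A minimal NumPy sketch follows (the helper name `sample_base_point` is ours; for simplicity it samples the disc $\mathcal{R}$ rather than the ϵ-thickened base $\mathcal{R}_\epsilon$):

```python
import numpy as np

def sample_base_point(K0, h, r, rng):
    """Sample a point F on the base disc R of the cone D0."""
    q, rr = K0.shape
    U0 = rng.standard_normal((q, rr))
    U0 /= np.linalg.norm(U0)              # Frobenius-unit direction
    V = rng.standard_normal((q, rr))
    V -= np.vdot(V, U0) * U0              # project: <V, U0>_F = 0
    V /= np.linalg.norm(V)
    d = q * rr - 1                        # dimension of the base disc
    rho = r * rng.random() ** (1.0 / d)   # radius law for a uniform point of a d-disc
    return K0 + h * U0 + rho * V, U0
```

By construction, the returned F lies in the hyperplane through $L_0 = K_0 + hU_0$ orthogonal to $U_0$, at distance at most r from $L_0$.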
The suggested algorithm in Algorithm 1 works as follows:
We start with a point K 0 i n t S α , found by the RS algorithm.
Assuming that $K^* \in D_0$, the inner loop ($j = 1, \ldots, n$) uses the Ray-Shooting Method in order to find an approximation of the global minimum of the function $\sigma_{\max}(K)$ over $\mathcal{S}^0_\alpha$, the portion of $\mathcal{S}_\alpha$ bounded in the cone $D_0$. The proof of convergence in probability of the inner loop and its complexity (under the above-mentioned assumption) can be found in [16] (see also [22]). In the inner loop, we choose a search direction by choosing a point F in $\mathcal{R}_\epsilon$, the base of the cone $D_0$. Next, in the innermost loop ($k = 1, \ldots, s$), we scan the ray $K_t := (1 - t)K_0 + tF$ and record the best controller on it. Repeating this a sufficient number of times (as given in (7) and in the discussion right after), we reach $K^*$ (or an ϵ-neighborhood of it) with high probability, under the assumption that $K^* \in D_0$.
The outer loop ($i = 1, \ldots, m$) serves as a substitute for restarting the RS algorithm again and again: we take $K_{\mathrm{best}}$ as the new vertex of the search cone instead of $K_0$, and we choose a different direction $U_0$. The choice of a different direction is a backup for the case where the above-mentioned assumption did not hold in the previous iterations (see Remark 1 below). The replacement of $K_0$ by $K_{\mathrm{best}}$ can be considered a heuristic step, made instead of running the RS algorithm many times in order to generate "the best starting point", which is relevant only if we actually evaluate $\sigma_{\max}(K)$ at each such point and take the point with the best value as the starting point. Since we evaluate $\sigma_{\max}(K)$ in the main algorithm in any case, we can avoid the repeated execution of the RS algorithm. The outer loop is similar to what is done in the Hide-and-Seek algorithm (see [23,24]); the convergence in probability of the Hide-and-Seek algorithm can be found in [25].
Remark 1.
The volume of $\overline{B(K_0, h)}$ is given by $\frac{\pi^{\ell/2}}{\Gamma(\ell/2 + 1)} \cdot h^{\ell}$, where $\ell := qr$ and Γ is the known Γ-function. The volume of $D_0$ is given approximately (and exactly when ϵ = 0) by $\frac{h}{\ell} \cdot \frac{\pi^{(\ell-1)/2}}{\Gamma((\ell-1)/2 + 1)} \cdot r^{\ell-1}$. Thus, by taking r = h, the portion of $\overline{B(K_0, h)}$ covered by $D_0$ (i.e., the probability that $K^* \in D_0$) is given by $\frac{\Gamma(\ell/2 + 1)}{\ell \cdot \sqrt{\pi} \cdot \Gamma((\ell-1)/2 + 1)}$. Let Θ denote the known relation between functions $f, g: \mathbb{N} \to \mathbb{R}$ defined by $f(\ell) = \Theta(g(\ell))$ if and only if $\lim_{\ell \to \infty} f(\ell)/g(\ell) = 1$. Since $\Gamma(\ell/2 + 1) = \Theta\left( \sqrt{2\pi}\, e^{-\ell/2} (\ell/2)^{(\ell+1)/2} \right)$ and since $(1 - 1/\ell)^{\ell/2} \to e^{-1/2}$ when $\ell \to +\infty$, it follows that
$\frac{\Gamma(\ell/2 + 1)}{\ell \cdot \sqrt{\pi} \cdot \Gamma((\ell-1)/2 + 1)} = \Theta\left( \frac{1}{e \cdot \sqrt{2\pi\ell}} \right).$ (7)
Therefore, by taking $m = \lceil e \cdot \sqrt{2\pi\ell}\, \rceil$ iterations in the outer loop, we have $K^* \in D_0$ almost surely. Specifically, when $\ell \le 12$, we suggest taking $m = 2\ell$ and an $\ell \times \ell$ orthogonal matrix $U = [u_1\ u_2\ \cdots\ u_\ell]$, and taking the directions $U_j^{(0)} = \pm \mathrm{mat}(u_j)$, $j = 1, \ldots, \ell$, in the outer loop.
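Remark 1's suggested direction set, built from the columns of an orthogonal matrix, can be sketched as follows in NumPy (the helper name `search_directions` is ours; the orthogonal matrix is drawn via QR of a Gaussian matrix, one common construction):

```python
import numpy as np

def search_directions(q, r, rng):
    """Build the 2l outer-loop directions +-mat(u_j) of Remark 1,
    where l = q*r and u_j are the columns of an l x l orthogonal matrix."""
    l = q * r
    Qmat, _ = np.linalg.qr(rng.standard_normal((l, l)))
    dirs = []
    for j in range(l):
        U = Qmat[:, j].reshape((q, r), order="F")   # mat(u_j), column-major
        dirs += [U, -U]
    return dirs
```

Each direction has unit Frobenius norm, and directions built from distinct columns are Frobenius-orthogonal, since the Frobenius inner product of mat(u_i) and mat(u_j) equals $u_i^T u_j$.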
The complexity of the suggested practical algorithm measured as the number of its arithmetic operations is given as follows:
  • computing the matrix $P(K_t)$ as in (5) takes $O((p^2)^3) = O(p^6)$ operations, since the dominant operation is the inversion of the $p^2 \times p^2$ matrix there.
  • checking $K_t \in \mathcal{S}_\alpha$ by checking the α-stability of $A_c(K_t)$ (as well as computing $\sigma(P(K_t))$) takes $O(p^3)$ for computing the characteristic polynomial of $A_c(K_t)$ (of $P(K_t)$, respectively) and $O\left( p \log_2^2 p \left( \log_2^2 p + \log_2^2 b \right) \right)$ for computing approximations $\tilde{\lambda}_j$ of all the eigenvalues $\lambda_j$, $j = 1, \ldots, p$, of $A_c(K_t)$ (of $P(K_t)$, respectively), with accuracy $|\tilde{\lambda}_j - \lambda_j| < 2^{2 - b/p}$, where $b \ge p \log_2 p$. The approximated eigenvalues can thus be computed to the accuracy $2^{2 - b/p} = \epsilon$, with $b = (2 + \log_2(1/\epsilon))p$, by the algorithm of V. Y. Pan (see [26]). We end up with $O(p^3)$ for these operations.
  • computing a uniformly distributed $q \times r$ matrix takes $O(\max(q, r)^3)$ operations.
We therefore have a total complexity of $O\left( mns \left( \max(q, r)^3 + p^6 \right) \right)$.
Let a closed ϵ -neighborhood of K * in D 0 be defined by
$\mathcal{S}^0_\alpha(\epsilon) = \left\{ K \in \mathcal{S}^0_\alpha \mid \sigma_{\max}(K) \le \sigma_{\max}(K^*) + \epsilon \right\}.$ (8)
Let the idealized algorithm be the algorithm that samples the search space $D_0$ until hitting $\mathcal{S}^0_\alpha(\epsilon)$, where the sampling is according to a general p.d.f. g and a related generator G. For $0 < \beta < 1$, the number of iterations needed to guarantee a probability of at least $1 - \beta$ of hitting $\mathcal{S}^0_\alpha(\epsilon)$ is given by
$\frac{M_g\, \mathrm{Vol}(D_0)}{m_g\, \mathrm{Vol}\left( \mathcal{S}^0_\alpha(\epsilon) \right)} \left| \ln \beta \right|,$ (9)
where Vol denotes the volume of the related set and $M_g$, $m_g$ are the essential supremum and essential infimum of g over $D_0$, respectively (see [16]). Similarly to what is done in [16], one can show that the last is $O\left( |\ln \beta| \frac{h M_g}{\epsilon m_g} \left( \frac{r}{r_\epsilon} \right)^{qr} \right)$, where $r_\epsilon$ is the radius of the base of a cone with height ϵ and vertex $K^*$ whose volume equals $\mathrm{Vol}(\mathcal{S}^0_\alpha(\epsilon))$. This results in an exponential number of iterations, but if we restrict the input of the algorithm to systems with q, r satisfying $q \le q_0$, $r \le r_0$, where $q_0, r_0$ are fixed, then the number of iterations would be $O\left( |\ln \beta| \frac{h M_g}{\epsilon m_g} \left( \frac{r}{r_\epsilon} \right)^{q_0 r_0} \right)$, i.e., polynomial in $\frac{r}{r_\epsilon}$, which can be considered the true size of the problem (for fixed p, q, r). In this sense, we can say that the algorithm is efficient. The total number of arithmetic operations of the idealized algorithm that guarantees a probability of at least $1 - \beta$ of hitting $\mathcal{S}^0_\alpha(\epsilon)$ is therefore $O\left( |\ln \beta| \frac{h}{\epsilon} \left( \frac{r}{r_\epsilon} \right)^{q_0 r_0} \left( \max(q, r)^3 + p^6 \right) \right)$, since sampling points according to the uniform distribution g (and therefore $m_g = M_g = 1$) and the related generator G takes $O(\max(q, r)^3)$.
For the sake of the comparison presented in the next section, we give here, in Algorithm 2, the algorithm of D. Moerder and A. Calise (see [5]), adjusted to our formulation of the problem, which we call the MC Algorithm. To the best of our knowledge, this is the best algorithm for LQR via SOF published so far.
Algorithm 1: The practical randomized algorithm for the LQR via static-output-feedback (SOF) problem.
Input: $0 < \epsilon \le 1/2$, $\alpha, h, r > 0$, integers $m, n, s > 0$,
    controllable pairs $(A, B)$ and $(A^T, C^T)$,
    matrices $Q > 0$, $R \ge 0$ and $K_0 \in \mathrm{int}(\mathcal{S}_\alpha)$.
Output: $K \in \mathcal{S}_\alpha$ as close as possible to $K^*$.
 1. compute $P(K_0)$ as in (5)
 2. $P_{\mathrm{best}} \leftarrow P(K_0)$
 3. $\sigma_{\max}^{\mathrm{best}} \leftarrow \max \sigma(P_{\mathrm{best}})$
 4. for $i = 1$ to $m$ do
 4.1. choose $U_0$ such that $\|U_0\|_F = 1$, uniformly at random
 4.2. let $L_0 \leftarrow K_0 + h \cdot U_0$
 4.3. for $j = 1$ to $n$ do
 4.3.1. choose $F \in \mathcal{R}_\epsilon$ uniformly at random
 4.3.1.1. for $k = 1$ to $s$ do
 4.3.1.1.1.  $t \leftarrow k/s$
 4.3.1.1.2.  $K_t \leftarrow (1 - t)K_0 + tF$
 4.3.1.1.3. if $K_t \in \mathcal{S}_\alpha$ then
 4.3.1.1.3.1. compute $P(K_t)$ as in (5)
 4.3.1.1.3.2.  $\sigma_{\max}(K_t) \leftarrow \max \sigma(P(K_t))$
 4.3.1.1.3.3. if $\sigma_{\max}(K_t) < \sigma_{\max}^{\mathrm{best}}$ then
 4.3.1.1.3.3.1.  $K_{\mathrm{best}} \leftarrow K_t$
 4.3.1.1.3.3.2.  $P_{\mathrm{best}} \leftarrow P(K_t)$
 4.3.1.1.3.3.3.  $\sigma_{\max}^{\mathrm{best}} \leftarrow \sigma_{\max}(K_t)$
 4.4.  $K_0 \leftarrow K_{\mathrm{best}}$
 5. return $K_{\mathrm{best}}, P_{\mathrm{best}}, \sigma_{\max}^{\mathrm{best}}$
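For concreteness, Algorithm 1 can be sketched end-to-end in NumPy. This is a toy re-implementation under our own function names, not the authors' code; the base point of each ray is drawn with a simplified radius law rather than exactly uniformly over $\mathcal{R}_\epsilon$:

```python
import numpy as np

def lqr_sof_rs(A, B, C, Q, R, K0, alpha, h, r, m, n, s, rng):
    """Toy sketch of Algorithm 1: ray-shooting minimization of
    sigma_max(K) over alpha-stabilizing SOF gains."""
    p = A.shape[0]

    def alpha_stable(K):
        # K is in S_alpha iff every closed-loop eigenvalue has Re <= -alpha
        return np.max(np.linalg.eigvals(A - B @ K @ C).real) <= -alpha

    def sigma_max(K):
        # max eigenvalue of the Lyapunov solution P(K), as in (5)-(6)
        Ac = A - B @ K @ C
        M = Q + C.T @ K.T @ R @ K @ C
        L = np.kron(np.eye(p), Ac.T) + np.kron(Ac.T, np.eye(p))
        P = np.linalg.solve(L, -M.reshape(-1, order="F")).reshape((p, p), order="F")
        return np.max(np.linalg.eigvalsh((P + P.T) / 2))

    K_vertex = K0.copy()
    K_best, s_best = K0.copy(), sigma_max(K0)
    for _ in range(m):                          # outer loop: new cone direction
        U0 = rng.standard_normal(K0.shape)
        U0 /= np.linalg.norm(U0)                # unit Frobenius direction
        L0 = K_vertex + h * U0                  # center of the cone base
        for _ in range(n):                      # inner loop: point on the base
            V = rng.standard_normal(K0.shape)
            V -= np.vdot(V, U0) * U0            # <V, U0>_F = 0
            nv = np.linalg.norm(V)
            F = L0 + (r * rng.random() / nv) * V if nv > 0 else L0
            for k in range(1, s + 1):           # innermost loop: scan the ray
                t = k / s
                Kt = (1 - t) * K_vertex + t * F
                if alpha_stable(Kt):
                    sk = sigma_max(Kt)
                    if sk < s_best:
                        K_best, s_best = Kt.copy(), sk
        K_vertex = K_best.copy()                # step 4.4: move the cone vertex
    return K_best, s_best
```

Since the search starts from a feasible $K_0$ and only accepts improvements, the returned gain is always α-stabilizing with $\sigma_{\max}$ no worse than that of $K_0$.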
Algorithm 2: The MC Algorithm.
Input: $\epsilon > 0$, $\alpha > 0$, integers $m, s > 0$,
    controllable pairs $(A, B)$ and $(A^T, C^T)$,
    matrices $Q > 0$, $R > 0$ and $K_0 \in \mathrm{int}(\mathcal{S}_\alpha)$.
Output: $K \in \mathcal{S}_\alpha$ as close as possible to $K^*$.
 1. $j \leftarrow 0$
 2. $A_0 \leftarrow A - B K_0 C$
 3. $P_0 \leftarrow -\mathrm{mat}\left( \left( I_p \otimes A_0^T + A_0^T \otimes I_p \right)^{-1} \mathrm{vec}\left( Q + C^T K_0^T R K_0 C \right) \right)$
 4. $S_0 \leftarrow -\mathrm{mat}\left( \left( I_p \otimes A_0 + A_0 \otimes I_p \right)^{-1} \mathrm{vec}(I_p) \right)$
 5. $\sigma_{\max}(K_0) \leftarrow \max \sigma(P_0)$
 6. $\Delta K_0 \leftarrow R^{-1} B^T P_0 S_0 C^T \left( C S_0 C^T \right)^{-1} - K_0$
 7. $flag \leftarrow 0$
 8. for $k = 1$ to $s$ do
 8.1.  $t \leftarrow k/s$
 8.2.  $K_t \leftarrow K_0 + t \cdot \Delta K_0$
 8.3. if $K_t \in \mathcal{S}_\alpha$ then
 8.3.1.  $A_t \leftarrow A - B K_t C$
 8.3.2.  $P_t \leftarrow -\mathrm{mat}\left( \left( I_p \otimes A_t^T + A_t^T \otimes I_p \right)^{-1} \mathrm{vec}\left( Q + C^T K_t^T R K_t C \right) \right)$
 8.3.3.  $S_t \leftarrow -\mathrm{mat}\left( \left( I_p \otimes A_t + A_t \otimes I_p \right)^{-1} \mathrm{vec}(I_p) \right)$
 8.3.4.  $\sigma_{\max}(K_t) \leftarrow \max \sigma(P_t)$
 8.3.5. if $\sigma_{\max}(K_t) < \sigma_{\max}(K_0)$ then
 8.3.5.1.  $K_1 \leftarrow K_t$
 8.3.5.2.  $A_1 \leftarrow A - B K_1 C$
 8.3.5.3.  $P_1 \leftarrow P_t$
 8.3.5.4.  $S_1 \leftarrow S_t$
 8.3.5.5.  $\sigma_{\max}(K_1) \leftarrow \sigma_{\max}(K_t)$
 8.3.5.6.  $flag \leftarrow 1$
 9. if $flag = 1$ then
 9.1. while $\sigma_{\max}(K_j) - \sigma_{\max}(K_{j+1}) \ge \epsilon$ and $j < m$ do
 9.1.1.  $\Delta K_j \leftarrow R^{-1} B^T P_j S_j C^T \left( C S_j C^T \right)^{-1} - K_j$
 9.1.2. for $k = 1$ to $s$ do
 9.1.2.1.  $t \leftarrow k/s$
 9.1.2.2.  $K_t \leftarrow K_j + t \cdot \Delta K_j$
 9.1.2.3. if $K_t \in \mathcal{S}_\alpha$ then
 9.1.2.3.1.  $A_t \leftarrow A - B K_t C$
 9.1.2.3.2.  $P_t \leftarrow -\mathrm{mat}\left( \left( I_p \otimes A_t^T + A_t^T \otimes I_p \right)^{-1} \mathrm{vec}\left( Q + C^T K_t^T R K_t C \right) \right)$
 9.1.2.3.3.  $S_t \leftarrow -\mathrm{mat}\left( \left( I_p \otimes A_t + A_t \otimes I_p \right)^{-1} \mathrm{vec}(I_p) \right)$
 9.1.2.3.4.  $\sigma_{\max}(K_t) \leftarrow \max \sigma(P_t)$
 9.1.2.3.5. if $\sigma_{\max}(K_t) < \sigma_{\max}(K_j)$ then
 9.1.2.3.5.1.  $K_{j+1} \leftarrow K_t$
 9.1.2.3.5.2.  $A_{j+1} \leftarrow A - B K_{j+1} C$
 9.1.2.3.5.3.  $P_{j+1} \leftarrow P_t$
 9.1.2.3.5.4.  $S_{j+1} \leftarrow S_t$
 9.1.2.3.5.5.  $\sigma_{\max}(K_{j+1}) \leftarrow \sigma_{\max}(K_t)$
 9.1.2.3.5.6.  $j \leftarrow j + 1$
 10. return $K_{\mathrm{best}} \leftarrow K_j$, $A_j$, $P_j$, $S_j$, $\sigma_{\max}^{\mathrm{best}} \leftarrow \sigma_{\max}(K_j)$
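The descent target used in steps 3-6 above can be sketched in NumPy as follows (the helper name `mc_direction` is ours; P and S are the two Lyapunov solutions of the closed loop, and the returned matrix is the step $\Delta K$ toward the Moerder-Calise stationarity condition $K = R^{-1} B^T P S C^T (C S C^T)^{-1}$):

```python
import numpy as np

def mc_direction(A, B, C, K, Q, R):
    """Compute Delta_K = R^{-1} B^T P S C^T (C S C^T)^{-1} - K for a
    stabilizing SOF gain K; the MC line search then scans K + t*Delta_K."""
    p = A.shape[0]
    Ac = A - B @ K @ C
    Ip = np.eye(p)
    vec = lambda M: M.reshape(-1, order="F")
    mat = lambda v: v.reshape((p, p), order="F")
    # P solves Ac^T P + P Ac = -(Q + C^T K^T R K C)
    P = mat(np.linalg.solve(np.kron(Ip, Ac.T) + np.kron(Ac.T, Ip),
                            -vec(Q + C.T @ K.T @ R @ K @ C)))
    # S solves Ac S + S Ac^T = -I
    S = mat(np.linalg.solve(np.kron(Ip, Ac) + np.kron(Ac, Ip), -vec(Ip)))
    target = np.linalg.solve(R, B.T @ P @ S @ C.T) @ np.linalg.inv(C @ S @ C.T)
    return target - K
```

A quick sanity check: with full-state measurement, C = I, the factor $S C^T (C S C^T)^{-1}$ collapses to the identity, so the target gain reduces to the familiar $R^{-1} B^T P$.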

3. Experiments

In the following experiments, we applied Algorithms 1 and 2 to systems taken from the libraries [27,28,29]. We took only the systems with controllable $(A, B)$, $(A^T, C^T)$ pairs for which the RS algorithm succeeded in finding SOFs (see [16], Table 8, p. 231). In order to initialize the MC Algorithm, we also used the RS algorithm to find a starting α-stabilizing static-output-feedback, for a known optimal value of α. In all the experiments, the suggested algorithm used $m = 100$, $n = 100$, $s = 100$, $h = 100$, $r = 100$, $\epsilon = 10^{-16}$, and the MC Algorithm used $m = 10{,}000$, $s = 100$ (in order to get the same number of $10^6$ overall iterations and the same number $s = 100$ of iterations for the local search). In every case, we took $Q = I_p$, $R = I_q$. The Stability Margin column of Table 1 refers to the α > 0 for which the real part of any eigenvalue of the closed loop is less than or equal to −α. The values of α in Table 1 refer to the largest α for which the RS algorithm succeeded in finding $K_0$. The reason is that if λ is an eigenvalue of $A_c(K)$ with corresponding eigenvector v, then (4) implies
$\Re(\lambda) = -\frac{v^* \left( Q + C^T K^T R K C \right) v}{2\, v^* P(K)\, v} \le -\frac{v^* Q\, v}{2\, v^* P(K)\, v}.$
It follows that minimizing $\sigma_{\max}(K)$ results in a larger abscissa. Thus, it is worth searching for a starting point $K_0$ that maximizes the abscissa α. This can be done efficiently by running a binary search on the abscissa, using the RS algorithm as an oracle. Note that the RS CPU time appearing in the third column of Table 1 refers to running the RS algorithm for the known optimal value of the abscissa. The RS algorithm is sufficiently fast for this purpose, but other methods, such as the HIFOO algorithm (see [19]), can be applied as well.
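The binary search on the abscissa can be sketched as follows; `rs_oracle` is a hypothetical stand-in for a run of the RS algorithm that returns an α-stabilizing gain, or None on failure:

```python
def max_abscissa(rs_oracle, alpha_lo, alpha_hi, tol=1e-3):
    """Binary search for (approximately) the largest alpha for which
    rs_oracle(alpha) still succeeds; alpha_lo must be feasible and
    alpha_hi infeasible."""
    K_best = rs_oracle(alpha_lo)
    assert K_best is not None, "alpha_lo must be feasible"
    while alpha_hi - alpha_lo > tol:
        mid = 0.5 * (alpha_lo + alpha_hi)
        K = rs_oracle(mid)
        if K is not None:
            alpha_lo, K_best = mid, K   # mid is feasible: move the lower end up
        else:
            alpha_hi = mid              # mid is infeasible: move the upper end down
    return alpha_lo, K_best
```

The search uses O(log((alpha_hi - alpha_lo)/tol)) oracle calls, so the overall cost is dominated by the RS runs themselves.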
Let $\sigma_{\max}(F)$ denote the functional (6) for the system $(A, B, I_p)$, where $A - BF$ is stable, i.e., $F \in \mathcal{S}^{q \times p}$. Let $P(F)$ denote the unique solution of (4) for the system $(A, B, I_p)$ with F as above. Let $\sigma_{\max}(K)$ denote the functional (6) for the system $(A, B, C)$ with $K \in \mathcal{S}^{q \times r}$ and related Lyapunov matrix $P = P(K)$. Now, if $A - BKC$ is stable for some K, then $A - BF$ is stable for $F = KC$ (but there might exist an F such that $A - BF$ is stable which cannot be written as $KC$ for any $q \times r$ matrix K). Therefore,
$\sigma_{\max}(F^*) = \min_{F \in \mathcal{S}^{q \times p}} \sigma_{\max}(F) \le \min_{K \in \mathcal{S}^{q \times r}_\alpha \cap \overline{B(K_0, h)}} \sigma_{\max}(K) = \sigma_{\max}(K^*),$
where $F^*$ is an optimal LQR state-feedback controller for the system $(A, B, I_p)$. We conclude that $\sigma_{\max}(F^*) \le \sigma_{\max}(K^*) \le \sigma_{\max}(K_{\mathrm{best}})$. Thus, $\sigma_{\max}(F^*)$ is a lower bound for $\sigma_{\max}(K_{\mathrm{best}})$ and can serve as a good estimate of it (as is evident from Table 1 in many cases), in order to stop the algorithm early, since $\sigma_{\max}(F^*)$ can be calculated in advance.
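The lower bound $\sigma_{\max}(F^*)$ is the largest eigenvalue of the stabilizing solution of the continuous-time algebraic Riccati equation, since for $F^* = R^{-1} B^T P$ the closed-loop Lyapunov equation with cost $Q + F^{*T} R F^*$ is solved by the Riccati solution P itself. A sketch via the Hamiltonian stable-subspace method follows (our own helper name; a production implementation would rather use an ordered Schur decomposition or a dedicated CARE solver):

```python
import numpy as np

def sigma_max_state_feedback(A, B, Q, R):
    """Return (lambda_max(P), P) where P solves the CARE
    A^T P + P A - P B R^{-1} B^T P + Q = 0."""
    p = A.shape[0]
    Rinv = np.linalg.inv(R)
    H = np.block([[A, -B @ Rinv @ B.T],
                  [-Q, -A.T]])               # Hamiltonian matrix
    w, V = np.linalg.eig(H)
    stable = V[:, w.real < 0]                # basis of the stable subspace
    P = np.real(stable[p:] @ np.linalg.inv(stable[:p]))
    P = (P + P.T) / 2                        # symmetrize against round-off
    return np.max(np.linalg.eigvalsh(P)), P
```

For the scalar system A = 0, B = Q = R = 1, the CARE reduces to $1 - P^2 = 0$, so the bound is $\sigma_{\max}(F^*) = 1$.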
The fourth column of Table 1 gives $\sigma_{\max}(K_0)$. The fifth column gives $\sigma_{\max}(K_{\mathrm{best}})$, where the number in parentheses is the relative improvement over $\sigma_{\max}(K_0)$, in percent. The sixth column gives $\sigma_{\max}(F^*)$, and the seventh column gives the CPU time of the suggested algorithm, in seconds. The eighth column gives $\sigma_{\max}(K_{\mathrm{best}})$ as found by the MC Algorithm, and the ninth column gives the CPU time of the MC Algorithm.
Regarding the suggested algorithm, note that the relative improvement of $\sigma_{\max}^{\mathrm{best}}$ over $\sigma_{\max}^0$ is at least 1000% for the systems AC6, AC11, AC12, DIS4, REA1, TF1, NN9, NN15 and NN17 (i.e., in 9 out of 29 systems). AC12 is particularly notable, with an improvement of $7.55 \cdot 10^{17}$%. Note also that for the 13 systems AC1, AC2, AC6, AC11, AC12, AC15, HE3, DIS4, REA1, REA2, HF2D10, HF2D11 and NN15, the value of $\sigma_{\max}^{\mathrm{best}}$ for $(A, B, C)$ is very close to the value of $\sigma_{\max}(F^*)$ for $(A, B)$.
Regarding the comparison with the MC Algorithm, we observe that the MC Algorithm failed to find any improvement of $\sigma_{\max}^{\mathrm{best}}$ over $\sigma_{\max}^0$ for the systems AC11, HE4, ROC1, ROC2, TF1 and NN12. The suggested algorithm performs significantly better in this respect for the systems AC6, AC11, AC15, HE3, HE4, TF1, ROC4, NN9 and NN12, and slightly better for the systems AC1, AC2, DIS4, DIS5, HE1, HF2D10, HF2D11, REA1, REA2, ROC1, TMD, NN1, NN5 and NN13. Finally, the MC Algorithm performs slightly better only for the systems AC5, AC12 and NN16. We conclude that the suggested algorithm outperforms the MC Algorithm with respect to this performance measure, although the MC Algorithm outperforms the suggested algorithm in terms of CPU time.
The MC Algorithm seems to perform better locally, while the suggested algorithm seems to perform better globally. Thus, in practice, the best approach would be to apply the suggested algorithm to find a close neighborhood of a global minimum, and then to apply the MC Algorithm to the result for the local optimization.

4. Concluding Remarks

For a discrete-time system
$x(k+1) = A\,x(k) + B\,u(k), \qquad y(k) = C\,x(k)$
and cost functional
$J(x_0, u) := \sum_{k=0}^{\infty} \left( x(k)^T Q\, x(k) + u(k)^T R\, u(k) \right),$ (10)
let $u(k) = -K y(k)$ be the SOF, and let $A_c(K) := A - BKC$ be the closed-loop matrix. Let $\mathbb{D}$ denote the open unit disk, let $0 < \alpha < 1$ and let $\mathbb{D}_\alpha$ denote the set of all $z \in \mathbb{D}$ with $|z| \le 1 - \alpha$ (where $|z|$ is the absolute value of z). Let $\mathcal{S}^{q \times r}$ denote the set of all matrices $K \in \mathbb{R}^{q \times r}$ such that $\sigma(A_c(K)) \subset \mathbb{D}$ (i.e., stable in the discrete-time sense), and let $\mathcal{S}^{q \times r}_\alpha$ denote the set of all matrices $K \in \mathbb{R}^{q \times r}$ such that $\sigma(A_c(K)) \subset \mathbb{D}_\alpha$. In this case, we say that $A_c(K)$ is α-stable. Let $K \in \mathcal{S}^{q \times r}_\alpha$ be given. Substitution of $u(k) = -K y(k) = -KC x(k)$ into (10) yields
$J(x_0, K) := \sum_{k=0}^{\infty} x(k)^T \left( Q + C^T K^T R K C \right) x(k).$ (11)
Since $Q + C^T K^T R K C > 0$ and since $A_c(K)$ is stable, it follows that the Stein equation
$P - A_c(K)^T P A_c(K) = Q + C^T K^T R K C$ (12)
has a unique solution $P > 0$, given by
$P(K) = \mathrm{mat}\left( \left( I_{p^2} - A_c(K)^T \otimes A_c(K)^T \right)^{-1} \mathrm{vec}\left( Q + C^T K^T R K C \right) \right).$ (13)
Substitution of (12) into (11), noting that $x(k) = A_c(K)^k x_0$ with $A_c(K)$ stable, leads to the telescoping sum
$J(x_0, K) = \sum_{k=0}^{\infty} x(k)^T \left( P - A_c(K)^T P A_c(K) \right) x(k) = \sum_{k=0}^{\infty} x_0^T \left( A_c(K)^T \right)^k \left( P - A_c(K)^T P A_c(K) \right) A_c(K)^k\, x_0 = x_0^T P(K)\, x_0.$
Thus, we look for $K \in \mathcal{S}^{q \times r}_\alpha$ that minimizes the functional $J(x_0, K) = x_0^T P(K) x_0$, and when $x_0$ is unknown, we seek $K \in \mathcal{S}^{q \times r}_\alpha$ for which $\sigma_{\max}(K) := \max \sigma(P(K))$ is minimal. Similarly to the continuous-time case, we have $J(x_0, K) \le \|x_0\|^2 \sigma_{\max}(K)$, with equality in the worst case. Finally, if λ is an eigenvalue of $A_c(K)$ and v is a corresponding eigenvector, then (12) yields $1 - |\lambda|^2 = \frac{v^* \left( Q + C^T K^T R K C \right) v}{v^* P(K)\, v} \ge \frac{v^* Q\, v}{v^* P(K)\, v}$. Therefore, $|\lambda|^2 \le 1 - \frac{v^* Q\, v}{v^* P(K)\, v}$, and thus minimizing $\sigma_{\max}(K)$ results in a larger stability margin. Now, the suggested algorithm can be readily applied to discrete-time systems. As for the MC Algorithm, we are not aware of any discrete-time analogue of it.
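The vectorized Stein solve in (13) and the identity $J(x_0, K) = x_0^T P(K) x_0$ can be checked numerically, analogously to the continuous case. A minimal sketch, assuming NumPy, with our own helper name:

```python
import numpy as np

def stein_via_kron(Ac, M):
    """Solve P - Ac^T P Ac = M by vectorization, as in (13):
    vec(P) = (I_{p^2} - Ac^T (x) Ac^T)^{-1} vec(M); Ac must be
    Schur-stable (spectral radius < 1)."""
    p = Ac.shape[0]
    L = np.eye(p * p) - np.kron(Ac.T, Ac.T)
    vecP = np.linalg.solve(L, M.reshape(-1, order="F"))
    return vecP.reshape((p, p), order="F")
```

Summing the cost series $\sum_k x(k)^T M x(k)$ along the trajectory $x(k+1) = A_c x(k)$ converges to $x_0^T P x_0$, which is exactly the telescoping identity above.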
We conclude that the Ray-Shooting Method is a powerful tool, since it practically solves the problem of LQR via SOF for real-life systems. The suggested algorithm has good performance and is proven to converge in probability. The suggested method can similarly handle the problem of LQR via SOF for discrete-time systems. Obviously, this enlarges the scope and usability of the Ray-Shooting Method, and we expect to obtain more results in this direction.

Acknowledgments

I wish to dedicate this article to the memory of my beloved father Pinhas Peretz and to the memory of my beloved wife and friend Rivka Rimonda Peretz.

Conflicts of Interest

The author declares no conflict of interest.

References

  1. Peretz, Y. A characterization of all the static stabilizing controllers for LTI systems. Linear Algebra Its Appl. 2012, 437, 525–548. [Google Scholar] [CrossRef]
  2. Johnson, T.; Athans, M. On the design of optimal dynamic compensators for linear constant systems. IEEE Trans. Autom. Control 1970, 15, 658–660. [Google Scholar] [CrossRef]
  3. Levine, W.; Athans, M. On the determination of the optimal constant output feedback gains for linear multivariables systems. IEEE Trans. Autom. Control 1970, 15, 44–48. [Google Scholar] [CrossRef]
  4. Levine, W.; Johnson, T.L.; Athans, M. Optimal limited state variable feedback controllers for linear systems. IEEE Trans. Autom. Control 1971, 16, 785–793. [Google Scholar] [CrossRef]
  5. Moerder, D.; Calise, A. Convergence of numerical algorithm for calculating optimal output feedback gains. IEEE Trans. Autom. Control 1985, 30, 900–903. [Google Scholar] [CrossRef]
  6. Iwasaki, T.; Skelton, R. All controllers for the general H∞ control problem: LMI existence conditions and state space formulas. Automatica 1994, 30, 1307–1317. [Google Scholar] [CrossRef]
  7. Iwasaki, T.; Skelton, R.E. Linear quadratic suboptimal control with static output feedback. Syst. Control Lett. 1994, 23, 421–430. [Google Scholar] [CrossRef]
  8. Peres, P.L.D.; Geromel, J.; de Souza, S. Optimal H2 control by output feedback. In Proceedings of the 32nd IEEE Conference on Decision and Control, San Antonio, TX, USA, 15–17 December 1993; pp. 102–107. [Google Scholar]
  9. Camino, J.F.; Zampieri, D.E.; Peres, P.L.D. Design of A Vehicular Suspension Controller by Static Output Feedback. In Proceedings of the American Control Conference, San Diego, CA, USA, 2–4 June 1999. [Google Scholar]
  10. Blondel, V.; Tsitsiklis, J.N. NP-hardness of some linear control design problems. SIAM J. Control Optim. 1997, 35, 2118–2127. [Google Scholar] [CrossRef]
  11. Nemirovskii, A. Several NP-hard problems arising in robust stability analysis. Math. Control Signals Syst. 1993, 6, 99–105. [Google Scholar] [CrossRef]
  12. Arzelier, D.; Gryazina, E.N.; Peaucelle, D.; Polyak, B.T. Mixed LMI/randomized methods for static output feedback control. In Proceedings of the American Control Conference, Baltimore, MD, USA, 30 June–2 July 2010; pp. 4683–4688. [Google Scholar]
  13. Tempo, R.; Calafiore, G.; Dabbene, F. Randomized Algorithms for Analysis and Control of Uncertain Systems; Springer: London, UK, 2005. [Google Scholar]
  14. Tempo, R.; Ishii, H. Monte Carlo and Las Vegas Randomized Algorithms for Systems and Control. Eur. J. Control 2007, 13, 189–203. [Google Scholar] [CrossRef]
  15. Vidyasagar, M.; Blondel, V.D. Probabilistic solutions to some NP-hard matrix problems. Automatica 2001, 37, 1397–1405. [Google Scholar] [CrossRef]
  16. Peretz, Y. A randomized approximation algorithm for the minimal-norm static-output-feedback problem. Automatica 2016, 63, 221–234. [Google Scholar] [CrossRef]
  17. Syrmos, V.L.; Abdallah, C.; Dorato, P.; Grigoriadis, K. Static Output Feedback: A Survey. Automatica 1997, 33, 125–137. [Google Scholar] [CrossRef]
  18. Sadabadi, M.S.; Peaucelle, D. From static output feedback to structured robust static output feedback: A survey. Annu. Rev. Control 2016, 42, 11–26. [Google Scholar] [CrossRef]
  19. Gumussoy, S.; Henrion, D.; Millstone, M.; Overton, M.L. Multiobjective Robust Control with HIFOO 2.0. In Proceedings of the IFAC Symposium on Robust Control Design, Haifa, Israel, 16–18 June 2009. [Google Scholar]
  20. Yang, K.; Orsi, R. Generalized pole placement via static output feedback: A methodology based on projections. Automatica 2006, 42, 2143–2150. [Google Scholar] [CrossRef]
  21. Henrion, D.; Loefberg, J.; Kočvara, M.; Stingl, M. Solving Polynomial static output feedback problems with PENBMI. In Proceedings of the IEEE Conference on Decision and Control, Sevilla, Spain, 15 December 2005. [Google Scholar]
  22. Peretz, Y. On applications of the Ray-Shooting method for structured and structured-sparse static-output-feedbacks. Int. J. Syst. Sci. 2017, 48, 1902–1913. [Google Scholar]
  23. Zabinsky, Z.B. Stochastic Adaptive Search for Global Optimization; Kluwer Academic Publishers: Dordrecht, The Netherlands, 2003. [Google Scholar]
  24. Romeijn, H.E.; Smith, R.L. Simulated Annealing for Constrained Global Optimization. J. Glob. Optim. 1994, 5, 101–126. [Google Scholar] [CrossRef]
  25. Bélisle, C.J.P. Convergence Theorems for a Class of Simulated Annealing Algorithms on R^d. J. Appl. Probab. 1992, 29, 885–895. [Google Scholar]
  26. Pan, V.Y. Univariate polynomials: Nearly optimal algorithms for numerical factorization and root-finding. J. Symb. Comput. 2002, 33, 701–733. [Google Scholar] [CrossRef]
  27. Leibfritz, F. COMPleib: Constrained Matrix-Optimization Problem Library—A Collection of Test Examples for Nonlinear Semidefinite Programs, Control System Design and Related Problems; Technical Report; Department of Mathematics, University of Trier: Trier, Germany, 2003. [Google Scholar]
  28. Leibfritz, F. Description of the Benchmark Examples in COMPleib 1.0; Technical Report; Department of Mathematics, University of Trier: Trier, Germany, 2003. [Google Scholar]
  29. Leibfritz, F.; Lipinski, W. COMPleib 1.0—User Manual and Quick Reference; Technical Report; Department of Mathematics, University of Trier: Trier, Germany, 2004. [Google Scholar]
Table 1. Results of the suggested randomized algorithm for LQR via SOF compared with the results of the MC Algorithm. CPU times are in seconds; the value in parentheses is the relative improvement of σ_max^best over σ_max^0, in percent.

| System | Stability Margin | RS CPU Time | σ_max^0 for (A,B,C) | σ_max^best for (A,B,C), Suggested Algorithm | σ_max(F*) for (A,B) | Suggested Algorithm CPU Time | σ_max^best for (A,B,C), MC Algorithm | MC Algorithm CPU Time |
|---|---|---|---|---|---|---|---|---|
| AC1 | 0.1 | 0.0625 | 20.9727 | 13.3987 (56.52%) | 13.0686 | 43.1718 | 16.2315 | 0.1562 |
| AC2 | 0.1 | 0.0312 | 20.9727 | 13.4217 (56.25%) | 13.0686 | 42.3906 | 16.2315 | 0.2500 |
| AC5 | 0.875 | 0.0625 | 2.2821×10^6 | 2.2208×10^6 (2.76%) | 8.4264×10^5 | 56.8125 | 2.0608×10^6 | 0.1093 |
| AC6 | 0.875 | 0.1093 | 463.8502 | 6.2413 (7.33×10^3%) | 5.9721 | 43.8125 | 91.3276 | 0.0625 |
| AC11 | 0.1 | 0.1093 | 1.0661×10^3 | 7.4244 (1.42×10^4%) | 5.8648 | 31.6406 | 1.0661×10^3 | 0.0625 |
| AC12 | 0.1 | <10^-4 | 4.1640×10^19 | 5.5149×10^3 (7.55×10^17%) | 2.7690×10^3 | 46.3437 | 2.9950×10^3 | 3.0625 |
| AC15 | 0.25 | <10^-4 | 112.0500 | 106.1312 (5.57%) | 104.8458 | 42.0000 | 111.9781 | 0.1562 |
| HE1 | 0.1 | <10^-4 | 26.7550 | 9.1169 (1.93×10^2%) | 2.9961 | 44.8125 | 12.9412 | 0.1250 |
| HE3 | 0.25 | 0.0312 | 944.3921 | 632.1294 (49.39%) | 611.8468 | 100.7812 | 668.0894 | 0.4218 |
| HE4 | 0.01 | 0.0625 | 4.7440×10^3 | 474.7121 (8.99×10^2%) | 229.9171 | 152.9531 | 4.7440×10^3 | <10^-4 |
| ROC1 | 10^-16 | 0.0468 | 1.4211×10^4 | 1.1368×10^4 (25%) | 1.1207×10^3 | 81.2500 | 1.4211×10^4 | 0.0937 |
| ROC4 | 10^-16 | 0.1875 | 1.7688×10^4 | 8.6513×10^3 (1.04×10^2%) | 8.5454×10^2 | 79.7968 | 1.7688×10^4 | 0.0468 |
| DIS4 | 1 | <10^-4 | 3.2208×10^7 | 1.7529 (1.83×10^9%) | 1.7504 | 68.2812 | 2.0166 | 0.7031 |
| DIS5 | 0.1 | 0.0312 | 3.9985×10^5 | 2.3800×10^5 (68%) | 9.0756×10^4 | 109.8906 | 2.8304×10^5 | 0.3593 |
| REA1 | 0.75 | 0.0468 | 9.3478×10^5 | 2.2790 (4.10×10^7%) | 2.2265 | 58.3906 | 2.7385 | 0.5156 |
| REA2 | 0.1 | 0.0312 | 9.8349 | 2.2640 (3.34×10^2%) | 2.2443 | 69.9843 | 2.7770 | 0.3125 |
| TMD | 0.05 | 0.0312 | 45.0958 | 27.2080 (65.74%) | 16.7680 | 37.2031 | 27.6023 | 0.8281 |
| TF1 | 10^-16 | 0.1093 | 9.1632×10^3 | 193.2880 (4.64×10^3%) | 58.1296 | 51.3750 | 9.1632×10^3 | 0.0312 |
| HF2D10 | 10^-16 | 0.0312 | 1.8090 | 1.4032 (28.91%) | 1.3832 | 118.2656 | 1.4269 | 2.0781 |
| HF2D11 | 10^-16 | 0.0312 | 0.4315 | 0.3699 (16.65%) | 0.3676 | 154.0312 | 0.3784 | 1 |
| NN1 | 10^-16 | 0.0312 | 7.4631×10^3 | 1.6144×10^3 (3.62×10^2%) | 106.7801 | 50.4375 | 2.3561×10^3 | 0.2500 |
| NN5 | 0.01 | 0.0468 | 3.9123×10^4 | 9.6741×10^3 (3.04×10^2%) | 2.8787×10^3 | 129.2812 | 9.8102×10^3 | 0.3125 |
| NN9 | 0.01 | 0.0312 | 4.0577×10^3 | 295.6797 (1.27×10^3%) | 21.2937 | 40.6093 | 3.7349×10^3 | 0.0937 |
| NN12 | 0.01 | 0.0937 | 934.5127 | 236.4467 (2.95×10^2%) | 30.3714 | 35.3281 | 934.5127 | 0.0625 |
| NN13 | 0.1 | 0.0312 | 6.4782 | 1.7094 (2.66×10^2%) | 0.6299 | 33.9218 | 1.8055 | 0.4531 |
| NN14 | 0.1 | 0.0312 | 4.8907 | 1.6664 (1.93×10^2%) | 0.6299 | 33.4687 | 1.7922 | 0.1718 |
| NN15 | 0.01 | <10^-4 | 1.3309×10^5 | 387.3778 (3.42×10^4%) | 386.5741 | 94.2031 | 387.4183 | 0.3281 |
| NN16 | 0.1 | 0.1093 | 35.9487 | 6.0864 (4.90×10^2%) | 2.3276 | 45.0468 | 5.9982 | 0.7343 |
| NN17 | 0.1 | 0.0312 | 2.6404×10^3 | 36.7664 (7.08×10^3%) | 3.1308 | 28.3281 | 666.7218 | 0.0937 |
