HAUSDORFF MOMENT PROBLEM AND NONLINEAR TIME OPTIMALITY

Abstract. A complete analytic solution of the time-optimal control problem for nonlinear control systems of the form ẋ_1 = u, ẋ_j = x_1^{j-1}, j = 2, …, n, is obtained for arbitrary n. In the paper we present the following surprising observation: this nonlinear optimality problem leads to a truncated Hausdorff moment problem, which gives analytic tools for finding the optimal time and optimal controls. Being homogeneous, the mentioned system approximates affine systems from a certain class in the sense of time optimality. Therefore, the obtained results can be used for solving the time-optimal control problem for systems from this class.


Introduction
The time-optimal control problem is one of the most well-studied topics in optimal control theory. Among all such problems, one of the most famous concerns the linear single-input chain of integrators [17]

ẋ_1 = u, ẋ_2 = x_1, …, ẋ_n = x_{n-1}, (1.1)

x(0) = x^0, x(θ) = 0, |u(t)| ≤ 1, θ → min. (1.2)

As follows from the Pontryagin Maximum Principle [17], the optimal control takes only the values ±1 and has no more than n − 1 points of discontinuity. Therefore, in the general case, the problem (1.1), (1.2) is reduced to finding n numbers: the optimal time θ and the switching times t_1, …, t_{n-1}. However, except for the case n = 2, obtaining an explicit solution is not elementary. This problem was completely solved in [8, 9] by applying ideas and techniques of the Markov moment problem. We explain the approach for a general linear single-input time-optimal control problem

ẋ = Ax + bu, (1.3)

x(0) = x^0, x(θ) = 0, |u(t)| ≤ 1, θ → min. (1.4)

Applying the Cauchy formula to a trajectory, we obtain the following conditions on the optimal control:

x^0_j = ∫_0^θ g_j(t) u(t) dt, j = 1, …, n, |u(t)| ≤ 1, (1.5)

where g(t) = −e^{−At} b, which can be interpreted as a classical Markov moment problem [13, 15]. Supplemented by the optimality requirement θ → min, the problem (1.5) turns into the Markov moment min-problem introduced in [8]. In particular, the problem (1.1)–(1.2) is equivalent to the power Markov moment min-problem

(−1)^j (j − 1)! x^0_j = ∫_0^θ t^{j-1} u(t) dt, j = 1, …, n, |u(t)| ≤ 1, θ → min. (1.6)

The idea of applying moment problem theory to control problems originated with N. N. Krasovskiĭ [11, 12], who used the results of M. G. Kreĭn on the abstract moment L-problem [1]. As was mentioned above, V. I. Korobov and G. M.
Sklyar [8, 9] found out that the Markov moment problem is applicable to the linear time-optimal control problem (1.3)–(1.4). In [8] they obtained a complete analytic solution of the problem (1.6), which is equivalent to (1.1)–(1.2). The heart of the solution is as follows. For any n, two special polynomials in one variable, with coefficients explicitly expressed via x^0, are found. The maximum of the real roots of these polynomials coincides with the optimal time θ. After finding θ, the switching times can be found as the roots of another polynomial with coefficients explicitly expressed via θ and x^0. Details of the algorithm and further references can also be found in [10].
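As an illustration of the right hand side of (1.6), the power moments of a bang-bang control can be evaluated in closed form: each interval of constancy contributes ±(t_{i+1}^j − t_i^j)/j to the j-th moment. A minimal sketch (the function name and interface are ours, not from [8–10]):

```python
# Sketch (interface ours): the power moments m_j = ∫_0^θ t^(j-1) u(t) dt
# of a bang-bang control that starts at u0 and switches sign at
# 0 < t_1 < ... < θ; each interval contributes ±(hi^j - lo^j)/j.
def power_moments(switches, theta, n, u0=1.0):
    """Return [m_1, ..., m_n]."""
    knots = [0.0] + list(switches) + [theta]
    moments = []
    for j in range(1, n + 1):
        m, sign = 0.0, u0
        for lo, hi in zip(knots[:-1], knots[1:]):
            m += sign * (hi ** j - lo ** j) / j
            sign = -sign
        moments.append(m)
    return moments
```

For example, for the control equal to +1 on [0, 1) and −1 on [1, 2], power_moments([1.0], 2.0, 2) gives the moments 0 and −1.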
Thus, regardless of the system's dimension, the solution of the time-optimal control problem (1.1)–(1.2) reduces to finding the roots of two polynomials in one variable.
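For orientation, in the simplest case n = 2 the double integrator ẋ_1 = u, ẋ_2 = x_1 admits the closed form θ = x^0_1 + 2√((x^0_1)²/2 + x^0_2) on the branch where u = −1 acts first (x^0_2 + x^0_1|x^0_1|/2 > 0). A sketch verifying this by exact integration (the formulas are derived by elementary calculus, not quoted from [8]; helper names are ours):

```python
import math

# Sketch (formulas derived by elementary integration, not quoted from [8]):
# time-optimal steering of x1' = u, x2' = x1 to the origin, on the branch
# where u = -1 acts first, i.e. x2 + x1*|x1|/2 > 0.
def double_integrator_times(x1, x2):
    """Return (theta, t_switch): u = -1 on [0, t_switch], u = +1 afterwards."""
    s = math.sqrt(x1 * x1 / 2.0 + x2)
    return x1 + 2.0 * s, x1 + s

def endpoint(x1, x2):
    """State at t = theta under the two-arc control, by exact integration."""
    theta, t1 = double_integrator_times(x1, x2)
    y1 = x1 - t1                          # arc with u = -1
    y2 = x2 + x1 * t1 - t1 * t1 / 2.0
    d = theta - t1                        # arc with u = +1
    return y1 + d, y2 + y1 * d + d * d / 2.0
```

For instance, endpoint(1.0, 1.0) returns (0, 0) up to rounding, with θ = 1 + 2√1.5 ≈ 3.449.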
As was proved in [9, 18], the integrator system (1.1) plays a special role since it approximates an arbitrary controllable linear system (1.3) in the following sense. Denote by (θ̃_{x^0}, ũ_{x^0}(t)) and (θ_{x^0}, u_{x^0}(t)) the optimal times and optimal controls for the problems (1.1)–(1.2) and (1.3)–(1.4) respectively. It can be shown that, after some change of variables y = Fx in the system (1.3), the optimal times and optimal controls in the problems (1.1)–(1.2) and (1.3)–(1.4) become equivalent as x^0 → 0 (relation (1.7)), where θ = min{θ̃_{x^0}, θ_{Fx^0}}. This means that locally, in a neighborhood of the origin, the optimal time and the optimal control for the system (1.1) approximate those for the general linear autonomous system (1.3). Moreover, it turns out that the solution of the general problem (1.3)–(1.4) can be found by successively solving time-optimal control problems for the system (1.1). A similar analysis of time optimality for nonlinear systems is much more sophisticated, even if one restricts oneself to time-optimal control problems for affine single-input systems

ẋ = a(x) + b(x)u, a(0) = 0, (1.8)

x(0) = x^0, x(θ) = 0, |u(t)| ≤ 1, θ → min, (1.9)

where the condition a(0) = 0 means that the origin, which is the final point, is an equilibrium of the system. We assume that the vector fields a(x) and b(x) are real analytic in a neighborhood of the origin. Unlike the linear case, the Pontryagin Maximum Principle does not allow one to describe exactly the class of possible optimal controls. Thus, it seems reasonable to solve the "simplest" nonlinear time-optimal problems, and then use their solutions for approximation.
When considering affine systems (1.8), we are led to a nonlinear Markov moment problem [5, 19]

x^0 = Σ_{k≥1, m_i≥0} v_{m_1…m_k} ξ_{m_1…m_k}(θ, u), |u(t)| ≤ 1, (1.10)

where ξ_{m_1…m_k}(θ, u) are nonlinear functionals of u of the form

ξ_{m_1…m_k}(θ, u) = ∫_0^θ ∫_0^{τ_1} ⋯ ∫_0^{τ_{k-1}} τ_1^{m_1} ⋯ τ_k^{m_k} u(τ_1) ⋯ u(τ_k) dτ_k ⋯ dτ_1. (1.11)

We refer to (1.11) as nonlinear power moments. The vector coefficients v_{m_1…m_k} ∈ R^n are found by using the values of a and b and all their derivatives at the origin. For the linear system (1.3), the representation (1.10) contains one-dimensional moments only and coincides with (1.5). Using this representation, in [19] we described a class of nonlinear systems (1.8) that can be approximated by linear ones in a sense close to (1.7): after some change of variables in the system (1.8), the optimal times and optimal controls in the problems (1.1)–(1.2) and (1.8)–(1.9) become equivalent as x^0 → 0. We called such systems essentially linear.
In the general case, as a class of the simplest systems that approximate arbitrary affine systems in the sense of time optimality mentioned above, homogeneous systems should be considered. For homogeneous systems, the representation (1.10) takes the form (1.12), where 1 ≤ w_1 ≤ ⋯ ≤ w_n are some integers. Let us notice that one limiting case of the representation (1.12) is when its right hand side contains a unique one-dimensional integral, as in (1.6); it corresponds to the linear integrator system (1.1). The opposite limiting case is when the right hand side contains a unique multiple integral of maximal multiplicity; one can show that this is possible under the condition (1.13). This representation corresponds to the system

ẋ_1 = u, ẋ_j = x_1^{j-1}, j = 2, …, n, (1.14)

where the right hand sides of all equations, except the first one, contain successive powers of the variable x_1. Thus, from the moment problem point of view, this system seems to be one of the simplest nonlinear systems for which the time-optimal problem can be successfully solved. In [21], we described the class of possible optimal controls (even for slightly more general systems than (1.14)). In this paper we present, in a certain sense, a complete solution of the time-optimal problem for the system (1.14).
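For numerical experiments with the system (1.14) it is convenient to have a simple integrator. A sketch using the forward Euler method (the interface and step count are arbitrary choices of ours); with u ≡ 1 and zero initial state the exact solution is x_j(t) = t^j/j:

```python
# Sketch: forward-Euler integration of the system (1.14),
# x1' = u, xj' = x1^(j-1); interface and step count are arbitrary choices.
def simulate(u, theta, n=4, x0=None, steps=100_000):
    x = list(x0) if x0 is not None else [0.0] * n
    dt = theta / steps
    for i in range(steps):
        t = i * dt
        # derivative vector: (u(t), x1, x1^2, ..., x1^(n-1))
        dx = [u(t)] + [x[0] ** (j - 1) for j in range(2, n + 1)]
        x = [xi + dxi * dt for xi, dxi in zip(x, dx)]
    return x
```

With u ≡ 1 on [0, 1] and zero initial state, simulate(lambda t: 1.0, 1.0) approaches the exact values (1, 1/2, 1/3, 1/4).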
As a homogeneous system, (1.14) approximates a certain class of affine systems in the sense of time optimality [20]. Among the affine systems (1.8), this class is singled out by certain conditions on a and b. (Actually, the approach can be generalized to non-autonomous affine systems.) Each such system, after an appropriate change of variables, takes the form (1.15), where p_j(x), q_j(x) are real analytic functions subject to conditions on the terms of their Taylor series. The approximation in the sense of time optimality is close to (1.7): the optimal times and optimal controls for the systems (1.15) and (1.14) are equivalent as x^0 → 0. It would be interesting to find conditions under which the successive approximation method is applicable for solving the time-optimal problem for systems (1.15), with the time-optimal problem for the system (1.14) solved at each step, as in the case of linear systems studied in [18]. One can think of the solution of the time-optimal problem for the system (1.14) as the first step in solving time-optimal problems for general nonlinear affine systems.
In the present paper we concentrate on solving the time-optimal control problem for the system (1.14). The crucial point of our study is the following surprising observation: this optimality problem, like (1.1), (1.2), leads to a moment problem; however, in this case we deal with the classical truncated Hausdorff moment problem [6, 13]. This allows us to use profound ideas and methods of classical moment theory and, as a result, leads to an analytic solution of the time-optimal control problem for the system (1.14).
The solution we have found can be summarized as follows. For any n, depending on its parity, an optimal control can be of one of only four or five types. For each type, in order to find the optimal time one needs to solve a system of at most two special polynomial equations in two variables with coefficients explicitly expressed via x^0 (in some cases, only one polynomial equation in one variable should be solved). Thus, regardless of the system's dimension, the solution of the time-optimal control problem for the system (1.14) is reduced to solving a certain polynomial equation or a certain system of two polynomial equations.
The rest of the paper is organized as follows. In Section 2 a preliminary discussion is given. In Section 3 we apply these results to the system (1.14) and demonstrate how the truncated Hausdorff moment problem arises. Further, we specify the statement of the Hausdorff moment problem for generic points x^0 and describe solvability conditions in Lemma 3.1. In Section 4 we formulate the main result of the paper (Theorem 4.2) and propose an algorithm for finding optimal controls. Section 5 contains several illustrative examples. In Section 6 we formulate some open questions. In Appendix A we recall solvability conditions for a classical truncated Hausdorff moment problem and explain their connection with the results of Lemma 3.1.
Preliminary discussion

For n = 2, the system (1.14) is the linear double integrator; it is well known [17] that optimal controls take only the values ±1 and have no more than one switching point. The case n = 3 deals with the nonlinear system with a quadratic right hand side, ẋ_1 = u, ẋ_2 = x_1, ẋ_3 = x_1^2. The time-optimal problem for this system was completely solved in [20]. It was shown that optimal controls take only the values ±1 and 0. More specifically, the controllability domain splits into two subsets with nonempty interior: the first one contains points x^0 corresponding to optimal controls taking values ±1 (bang-bang), and the second one contains points x^0 corresponding to optimal controls taking values ±1 and 0 (singular). It was shown that, outside a set of zero measure, four types of optimal controls are possible, and the domains corresponding to these types are disjoint.
In the present paper we consider the general case n ≥ 4. In [21], the time-optimal problem for a more general class of systems than (2.1) was studied and a description of optimal controls was obtained. Here we briefly recall the result for the particular case of the system (2.1).
Applying the Pontryagin Maximum Principle [17] to the problem (2.1)–(2.2), we consider the Hamilton–Pontryagin function, where ψ_0 ≤ 0 is a constant, and the dual system of differential equations. Hence, ψ_2, …, ψ_n are constants. Suppose an optimal control exists; denote it by u(t). Let x(t) be the corresponding optimal trajectory and θ be the optimal time. Then, due to the Pontryagin Maximum Principle, there exist a function ψ_1(t) and constants ψ_0 ≤ 0, ψ_2, …, ψ_n satisfying the conditions (2.3) and (2.4). Hence, the Pontryagin Maximum Principle defines u(t) at those points where ψ_1(t) ≠ 0. The question arises how the roots of the function ψ_1(t) can be located on [0, θ]. Suppose P(z) is not constant; then ψ_1(t) can have roots. If ψ_1(t̃) ≠ 0 for some t̃, then (2.3) implies that u(t) equals 1 or −1 and does not change its sign in some interval (t̃ − ε, t̃ + ε). Let t̃_1 and t̃_2 be the roots of ψ_1(t) closest to t̃ on the left and on the right, i.e., let ψ_1(t̃_1) = ψ_1(t̃_2) = 0 and ψ_1(t) ≠ 0 for t ∈ (t̃_1, t̃_2). Then (2.4) implies that x_1(t̃_1) and x_1(t̃_2) are different roots of the polynomial P(z). Moreover, since the polynomial P(z) has a finite number of distinct roots, there can exist only a finite number of intervals where ψ_1(t) is nonzero.
Therefore, an optimal control takes only the values ±1 and 0 and has a finite number of switching points. More specifically:
- if t̃ is a switching point of u(t), then z = x_1(t̃) is a root of P(z);
- if t̃ is a switching point such that u(t̃ − 0) = 0 or u(t̃ + 0) = 0, then z = x_1(t̃) is a multiple root of P(z);
- P(x_1(t)) ≥ 0 for t ∈ [0, θ], i.e., x_1(t) belongs to the connected component of the set {z : P(z) ≥ 0} containing the point z = 0.
Informally, the first component of the optimal trajectory x_1(t) moves between the roots of P(z), at which the sign of the optimal control changes, and the optimal control equals zero on a time interval when x_1(t) stays at a multiple root of P(z).
As was shown in [21], this conclusion can be strengthened. Namely, it turns out that x_1(t) cannot take any value more than twice, except in the case when x_1(0) = 0; in the latter case the zero value can be taken three times.
Finally, we mention that an optimal control is not unique in general. However, as was shown in [21], an optimal control can be chosen in the "stair-step" form, which can be described as follows. For definiteness, let x^0_1 ≥ 0. Then a "stair-step" control equals 1 on some time interval, then switches to 0 and −1 several times, and finally switches to 1. The first two and the last two intervals of constancy of the optimal control can be empty. In Figure 1 we give a sketch of a possible graph of x_1(t); the optimal control can easily be restored from the equality ẋ_1(t) = u(t). In order to express the result more precisely, we introduce the notation 1_a, 0_a, (−1)_a for the constant functions taking the values 1, 0, and −1 respectively, where the sub-index a ≥ 0 indicates the length of the function's domain. In order to write piecewise constant functions, we use the notation • for the concatenation operation. Then an optimal control u(t) can be expressed in the form (2.5). Additionally, u(t) obviously cannot be zero on its last interval of constancy, which gives the condition (2.6). Besides, it is impossible that x_1(t) ≡ 0 and u(t) ≡ 0 on some time interval; otherwise, this time interval could be safely discarded, and the total time of motion would decrease. This requirement is written as (2.7). Finally, since x_1(0) = x^0_1 and x_1(θ) = 0, we get the relation (2.8). The properties described above also restrict the number of switching points of an optimal control. Namely, for the control (2.5), the polynomial P(z) definitely has k − 1 multiple roots plus, maybe, one or two roots, which may be multiple or simple (depending on the vanishing of a_1, a_2 and σ_1, σ_{k+1}). Since P(z) is of degree n − 1, we derive an estimate for k. A more detailed study is given in the next section.
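The concatenation notation translates directly into code: a stair-step control can be represented as a list of (value, duration) pieces. A small sketch (helper names are ours, not from [21]):

```python
# Sketch (helper names ours): a stair-step control as a list of
# (value, duration) pieces; concatenation drops empty pieces.
def concat(*pieces):
    return [(v, d) for v, d in pieces if d > 0]

def evaluate(control, t):
    """Value of the piecewise constant control at time t."""
    acc = 0.0
    for v, d in control:
        if t < acc + d:
            return v
        acc += d
    return control[-1][0] if control else 0.0

def duration(control):
    return sum(d for _, d in control)
```

For x^0_1 ≥ 0 a stair-step control looks like concat((1, 0.5), (0, 1.0), (-1, 0.5), (1, 0.25)); its total duration is 2.25 and, for example, its value at t = 0.7 is 0.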

Time-optimal control problem as a Hausdorff moment problem
In this section we show that the problem (2.1)–(2.2) leads to a Hausdorff moment problem. First, we provide some preliminary calculations. We suppose that x^0_1 ≥ 0; for the case x^0_1 < 0 see Remark 4.3. Substituting the optimal control (2.5) into the Cauchy problem ẋ_1 = u(t), x_1(0) = x^0_1, we obtain the first component of the optimal trajectory x_1(t); it has the form (3.1), where we use the notation (3.2). In particular, (2.8) implies that z_2, …, z_k ≠ 0. Now, substituting x_1(t) into the Cauchy problems ẋ_j = x_1^{j-1}(t), x_j(0) = x^0_j and taking into account the end conditions x_j(θ) = 0, we get the equalities (3.3), where x_1(t) has the form (3.1). Changing the variable of integration, the equalities (3.3) can be written as (3.4). Let us simplify these expressions. First, taking into account the notation (3.2) and the equality (2.9), we introduce the notation (3.5). Finally, after elementary transformations, (3.4) can be written as (3.6). Besides, (2.6) gives (3.7). Thus, the solution of the time-optimal control problem (2.1)–(2.2) reduces to the algebraic system of n equations (3.6), (3.7) for the unknowns θ, a, b, σ_1, …, σ_{k+1}, z_2, …, z_k. This system is nonlinear and seems rather complicated. However, fortunately, its solution can be essentially simplified by reformulating it as a truncated Hausdorff power moment problem. Let us recall the statement of this problem.
Suppose the interval [a, b] is fixed and the numbers c_1, …, c_n are given. The truncated Hausdorff power moment problem is to find a non-decreasing function σ(z) such that the following moment equalities hold:

c_j = ∫_a^b z^{j-1} dσ(z), j = 1, …, n. (3.8)

Let us show how the equations (3.6), (3.7) can be interpreted in this way. First, we introduce the notation (3.9), (3.10). Then (3.6), (3.7) can be written in the form of the moment problem (3.8). In what follows we apply methods from moment theory for finding optimal controls. Some key results on the truncated Hausdorff power moment problem related to our study are presented in Appendix A.
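The mechanics behind this reformulation can be illustrated numerically: for a piecewise constant σ(z) with jump points z_i and jumps σ_i, the moments are c_j = Σ_i σ_i z_i^{j-1}, and the jump points can be recovered as the roots of an annihilating polynomial obtained from a Hankel linear system (a Prony-type computation; the atoms and weights below are arbitrary test data, not from the paper):

```python
import numpy as np

# Sketch: for a piecewise constant sigma(z) with atoms z_i and jumps
# sigma_i, the moments are m_j = sum_i sigma_i * z_i^j.  Given the
# moments, the atoms are roots of the monic annihilating polynomial
# p(z) = prod_i (z - z_i), whose coefficients solve a Hankel system.
z_true = np.array([0.2, 0.7])   # jump points in [0, 1] (test data)
w_true = np.array([0.5, 1.5])   # jump values > 0 (test data)
k = len(z_true)
m = np.array([np.sum(w_true * z_true ** j) for j in range(2 * k + 1)])

# sum_l p_l m_{l+j} + m_{k+j} = 0 for j = 0..k-1 (monic p of degree k).
A = np.array([[m[l + j] for l in range(k)] for j in range(k)])
p = np.linalg.solve(A, -m[k:2 * k])
z_rec = np.sort(np.roots(np.r_[1.0, p[::-1]]))

# Jumps from the Vandermonde system sum_i w_i z_i^j = m_j, j = 0..k-1.
V = np.vander(z_rec, k, increasing=True).T
w_rec = np.linalg.solve(V, m[:k])
```

Here z_rec recovers the atoms (0.2, 0.7) and w_rec recovers the jumps (0.5, 1.5), mirroring how the z_i and σ_i are extracted in Lemma 3.1.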
Our plan is as follows. First, we apply the solvability conditions for the moment problem (3.8), which are expressed as equations and inequalities on c_1, …, c_n. Since c_1, …, c_n, defined by (3.9) and (3.10), involve the unknown parameters θ, a, b, we get a system of equations for only these three parameters. These conditions are described by Lemma 3.1. After finding θ, a, b, we find the remaining parameters by solving the moment problem (3.8).
In this paper, we restrict ourselves to the generic case only, which is described as follows. Let us return to the equalities (3.6), (3.7). They can be considered as a description of the map that takes the set of parameters θ, a (if a ≠ 0), b (if b ≠ x^0_1), σ_1 and σ_{k+1} (if they are nonzero), σ_2, …, σ_k, and z_2, …, z_k to the n-dimensional vector −x^0. If the number of parameters is less than n, the image of such a map has zero measure. In the rest of the paper we are interested only in those points x^0 which are defined by no fewer than n parameters and, additionally, are such that x^0_1 ≠ 0. We refer to these points as generic. It turns out that for generic points, along with condition (2.7), which can be rewritten as

if a = 0, then σ_{k+1} = 0, (3.13)

the following property holds:

if b = x^0_1, then σ_1 = 0. (3.14)

In fact, suppose the contrary, i.e., let b = x^0_1 and σ_1 > 0. Due to (3.13), three cases are possible; for instance, if a < 0 and σ_{k+1} > 0, then the polynomial P(z) has at least k + 1 multiple roots.
Generally, we have nine cases. As an example, we analyze one of them; the rest can be considered similarly. Suppose b = x^0_1, σ_1 = 0 and a = 0, σ_{k+1} = 0. Then the number of parameters equals 2k − 1 (they are θ, σ_2, …, σ_k, and z_2, …, z_k); therefore, for generic points n ≤ 2k − 1. On the other hand, P(z) has no fewer than k − 1 multiple roots; therefore, 2(k − 1) ≤ n − 1, which together with the previous estimate gives n = 2k − 1, so this case is possible for odd n only. Such an analysis shows that for a generic point four cases are possible for even n and five for odd n. We list all these cases in Table 1. Besides information on n and k, each cell contains the type marker and the values of the parameter d mentioned in Lemma 3.1 below, as well as sketches of x_1(t) for all the cases when n = 4 and n = 5.
Lemma 3.1 below describes necessary and sufficient conditions for solvability of the Hausdorff moment problem (3.8). More specifically, for given numbers a, b and c_1, …, c_n, we formulate the conditions under which there exists a non-decreasing piecewise constant function σ(z) satisfying the equalities (3.8). We mention only those cases (concerning the number and location of jump points of σ(z)) that are used below to formulate an algorithm for solving the time-optimal control problem (2.1), (2.2). We emphasize that, in this lemma, a, b and the c_i are supposed to be known. In the next section we explain how to use it when a and b are unknown and the c_i are functions of a, b, and θ. The formulation of the lemma is quite long since it in fact contains four statements, which concern four possible types of the function σ(z) (having a jump point at neither end of the interval [a, b], at both ends, or at exactly one of them).

Table 1. Possible cases for generic points.

(A) There exists a non-decreasing function σ(z) having exactly k − 1 nonzero points of growth between a and b, where 2k − 1 ≤ n, i.e., the representation (3.16) holds, if and only if the three conditions (A_1)–(A_3) are satisfied. The points z_2, …, z_k can be found as the roots of the equation (3.21). If z_2, …, z_k are known, the numbers σ_2, …, σ_k can be found from the first k − 1 equations of the system (3.16).
(B) There exists a non-decreasing function σ(z) having exactly k + 1 nonzero points of growth, including a and b, where 2k + 1 ≤ n, i.e., the representation (3.22) holds in the notation (3.17), if and only if the corresponding three conditions are satisfied. The points z_2, …, z_k can be found as the roots of the corresponding polynomial equation. If z_2, …, z_k are known, the numbers σ_1, …, σ_{k+1} can be found from the first k + 1 equations of the system (3.22).

(C) There exists a non-decreasing function σ(z) having exactly k nonzero points of growth, including b but not a, where 2k ≤ n, i.e., the representation (3.26) holds in the notation (3.17), if and only if the corresponding three conditions are satisfied. The points z_2, …, z_k can be found as the roots of the equation (3.29). If z_2, …, z_k are known, the numbers σ_1, …, σ_k can be found from the first k equations of the system (3.26).

(D) There exists a non-decreasing function σ(z) having exactly k nonzero points of growth, including a but not b, where 2k ≤ n, i.e., the representation (3.30) holds in the notation (3.17), if and only if the corresponding three conditions are satisfied. The points z_2, …, z_k can be found as the roots of the equation (3.33). If z_2, …, z_k are known, the numbers σ_2, …, σ_{k+1} can be found from the first k equations of the system (3.30).
To prove (A_2), we consider a quadratic form. If the corresponding polynomial is nontrivial, then all terms in the right hand side of (3.34) are non-negative and no more than k − 2 of them vanish (we take into account (3.19)). Therefore, at least one term is positive; hence, the sum in the right hand side of (3.34) is positive. This implies that the matrix {c_{i+j-1}}_{i,j=1}^{k-1} is positive definite. For the matrix {c_{i+j+1}}_{i,j=1}^{k-1}, the same considerations apply with a polynomial Q(z) of degree no more than k − 2; arguing analogously and taking into account (3.18), we conclude that this matrix is positive definite as well. Now we make use of the solvability conditions for this moment problem ([13], Chap. III). Due to suppositions (A_2) and (A_1) for d = 0, the matrices {c_{i+j-1}}_{i,j=1}^{k-1} and {c_{i+j+1}}_{i,j=1}^{k-1} are positive definite, while the matrix {c_{i+j-1}}_{i,j=1}^{k} is singular. Hence, the matrix {c_{i+j-1}}_{i,j=1}^{k} is non-negative definite. Taking into account also condition (A_3), we obtain that the problem (3.35) has a unique solution σ(z), which is a piecewise constant function with exactly k − 1 points of growth inside the interval (a, b). Now, suppose b > z_2 > ⋯ > z_k > a are the points of growth of σ(z) and σ_2, …, σ_k > 0 are the corresponding jump values, i.e., the moment equalities (3.16) hold for j = 1, …, 2k − 1. Then, analogously to the arguments above, we get the representation (3.36). Suppose some jump point equals zero, z_{s_0} = 0. Then there exists a nontrivial polynomial of degree k − 2 with the roots {z_s : s ≠ s_0}, for which the right hand side of (3.36) equals zero, which is impossible since {c_{i+j+1}}_{i,j=1}^{k-1} is positive definite. Hence, all jump points z_s are nonzero.

Algorithm for finding the optimal control
The analysis given in the previous section suggests the following algorithm for finding the optimal control: for a given point x^0, consider one by one all possible cases from Table 1 (four or five, depending on the parity of n) and use the appropriate part of Lemma 3.1, indicated as "type" in Table 1.
Each part of Lemma 3.1 consists of three conditions. The first condition says that some matrices are singular, that is, the determinants of these matrices equal zero. Depending on the case number, we deal with one, two or three such equalities; below we discuss each case in detail. We consider these equalities as equations in θ and/or a, b, and solve them. This part requires symbolic computations (for operating with polynomials) and numerical solution of polynomial equations or systems of polynomial equations. As a result, we find one or several sets of values of θ and/or a < 0, b > x^0_1 satisfying the first condition (if there are none, we pass to the next case). Then, we substitute the values obtained into the matrices from the second and third conditions. We obtain numeric matrices and should check whether they are positive definite. This part involves only numeric computations.
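The positive-definiteness checks are plain numeric tests; one standard way is to attempt a Cholesky factorization. A sketch (the helper names and the moment data in the usage example are ours):

```python
import numpy as np

# Sketch: checking positive definiteness of the numeric Hankel-type
# matrices from Lemma 3.1 via a Cholesky attempt.
def is_positive_definite(mat):
    try:
        np.linalg.cholesky(np.asarray(mat, dtype=float))
        return True
    except np.linalg.LinAlgError:
        return False

def hankel_matrix(c, k, shift=0):
    """{c_{i+j-1+shift}} for i, j = 1..k, with c 1-indexed as c[0] = c_1."""
    return np.array([[c[i + j + shift] for j in range(k)] for i in range(k)])
```

For example, for c = (2, 1.15, 0.755) the matrix {c_{i+j-1}}_{i,j=1}^2 = [[2, 1.15], [1.15, 0.755]] passes the test, while [[1, 2], [2, 1]] fails it.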
If we succeed in finding values of θ and/or a, b satisfying all three conditions, then the corresponding control exists. To find its switching times, we find the z_i as the roots of a polynomial and then find the σ_i from a system of linear equations, as described in the corresponding part of Lemma 3.1. In any case, the control has n − 1 switching points. It has the form (2.5), where, taking into account (3.2) and (3.5), the durations are expressed via the parameters found, and the appropriate equalities for a, b, σ_1, σ_{k+1} indicated in Table 1 for the case considered should be applied. We recall that a_i, b_i and σ_i are the durations of the time intervals where u(t) is constant and equals 1, −1, and 0 respectively; using them, one can easily find the switching times.
If we find several such controls, we compare the corresponding times and choose the time-optimal ones. If none of the cases is appropriate, then the point x^0 does not belong to the 0-controllability domain, i.e., there is no control satisfying the constraint |u(t)| ≤ 1 and steering the system (2.1) from the point x^0 to the origin in finite time. Indeed, if such a control exists, then an optimal control exists as well due to the Filippov Theorem [3, 4]; however, any optimal control corresponds to one of the cases described in Table 1. On the other hand, for n ≥ 3 any neighborhood of the origin contains points that do not belong to the 0-controllability domain due to the equations ẋ_j = x_1^{j-1} for odd j ≥ 3. Thus, our algorithm allows us to determine whether a given point x^0 belongs to the 0-controllability domain. Now, we present our algorithm more specifically. Suppose x^0 is given, where x^0_1 > 0; for the case x^0_1 < 0 see Remark 4.3. First, we define c_1, …, c_n by (3.9), (3.10), substituting the components of the vector x^0. Hence, c_1, …, c_n become polynomials in a, b, θ. Below we use the notation (3.15).
For odd n = 2m + 1, we consider cases 1, 3, 5, 7, 9 from Table 1; for even n = 2m, we consider cases 2, 4, 6, 8. The value of k mentioned below is indicated in Table 1. We give the description of the algorithm, combining the cases for which the steps are similar.
Case 1. Substitute a = 0 and b = x^0_1 into (3.9), (3.10); then c_j, j = 2, …, n, do not involve θ. Therefore, c_1 is a linear function of θ and c_2, …, c_n are known numbers. The conditions are of type A, i.e., they are described in part (A) of Lemma 3.1. Applying them with k = m + 1 suggests the following steps.
Step 2. Check whether the three numeric matrices indicated in part (A) of Lemma 3.1 are positive definite.

Step 3. If so, find z_2, …, z_k as the roots of the equation (3.21) and find σ_2, …, σ_k from the system of linear equations (3.16). Then define the control accordingly.

Cases 2 and 3. Substituting a = 0 into (3.9), (3.10) gives the quantities c^{a,b}_j = −c_{j+2} + b c_{j+1}, j = 1, …, n − 2, and c^a_j = c_{j+1}, j = 1, …, n − 1. (We note that k = m ≥ 2 since n ≥ 4.)

Step 3. If so, find z_2, …, z_k as the roots of the equation (3.21) (for Case 2) or (3.29) (for Case 3). Then find the σ_i from the linear system (3.16) (for Case 2) or (3.26) (for Case 3). We obtain a control of the corresponding form.

Cases 4 and 7 are analogous to Cases 2 and 3 except that here b = x^0_1 is given and a is unknown; (3.9), (3.10) then define c_j, j = 2, …, n, as polynomials in a. The conditions are of type A for Case 4 and of type D for Case 7, and k = m. The algorithm is as follows.
Step 1. Solve the polynomial equation in a of the following form: Case 4: det{c_{i+j}}_{i,j=1}^{k} = 0; Case 7: det{c^a_{i+j}}_{i,j=1}^{k} = 0, where c^a_j = c_{j+1} − a c_j, j = 2, …, n − 1. Substitute each obtained a such that a < 0 (if any) into the (linear) equation in θ, where c^a_1 = c_2 − a c_1, and find θ. The next two steps should be carried out for each pair (a, θ) such that a < 0 and θ > 0 (if any); substituting such a pair into the c_j and c^a_j makes them known numbers.

Step 2. Check whether the three numeric matrices indicated in the corresponding part of Lemma 3.1 are positive definite.

Step 3. If so, find z_2, …, z_k as the roots of the equation (3.21) (for Case 4) or (3.33) (for Case 7). Then find the σ_i from the linear system (3.16) (for Case 4) or (3.30) (for Case 7). We obtain a control of the corresponding form.

Cases 5, 6, 8, 9. Now both variables a and b are unknown, i.e., c_1, …, c_n are defined by (3.9), (3.10) without simplification. Then c_2, …, c_n are polynomials in the two variables a and b, while c_1 is a polynomial in the three variables a, b, and θ. For Case 5, the conditions are of type A and k = m. For Cases 6, 8, 9 the conditions are of type C, D, B respectively, and k = m − 1. In each case, we deal with three equations in the first step, two of which (for d = 1 and d = 2) are polynomial equations in a and b. In order to solve such a system, we may apply symbolic computation and find a resultant of these polynomials or apply the Gröbner basis technique [2]. The algorithm is as follows.
Step 1. Solve the system of two polynomial equations in a, b corresponding to the case considered. We emphasize that, for any n, in the worst case a system of two polynomial equations in two variables should be solved.

Remark 4.1. There exist well-known methods for solving polynomial equations in one variable. For solving a system of two polynomial equations in two variables, one can use the resultant of the two polynomials; another way is to use the Gröbner basis technique [2]. In either case, the idea is to eliminate a or b from the system and obtain a polynomial in one variable.
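The elimination idea of Remark 4.1 can be sketched with SymPy on a toy system (the polynomials below are placeholders, not the determinant equations of Step 1):

```python
from sympy import symbols, resultant, solve

# Sketch of the elimination idea in Remark 4.1 on a toy system
# (placeholder polynomials, not the paper's determinant equations).
a, b = symbols('a b', real=True)
f1 = a**2 + b**2 - 5
f2 = a*b - 2

r = resultant(f1, f2, b)       # eliminates b: a polynomial in a alone
a_roots = solve(r, a)          # candidate values of a
pairs = [(av, bv)
         for av in a_roots
         for bv in solve(f2.subs(a, av), b)
         if f1.subs({a: av, b: bv}) == 0]
```

Here r is proportional to a^4 − 5a^2 + 4, and pairs consists of the four real solutions (1, 2), (2, 1), (−1, −2), (−2, −1).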
However, these methods involve symbolic computation and are therefore hardly applicable even for moderately large n. Alternatively, one can solve this system numerically. In this case, the following observation may be helpful. For all odd j, the equality (3.12) gives c_j ≥ 0. Hence, (3.10) implies that a and b satisfy the estimates

b^j + |a|^j ≤ (1/2)((x^0_1)^j − j x^0_j) for any odd j ≥ 3. (4.2)

They can be used for numerically solving the system (4.1). For example, one can generate a mesh in the two-dimensional domain for (a, b) described by the inequalities (4.2) and find the points at which the values of both polynomials of the system are close to zero. Then, considering a neighborhood of each such point, one either finds a solution of the system of two polynomials (by use of some numerical method or by interpolation) or shows that there is no solution in this neighborhood. Here, only numerical calculations are performed. Although evaluating the polynomials takes more time for large n, the mesh itself remains two-dimensional for all n.
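The mesh-based approach above can be sketched as follows: scan a grid over the admissible (a, b) region, keep the point with the smallest residuals, and refine it by Newton's method. The system, domain bounds, and mesh size below are placeholders, not the paper's equations:

```python
# Sketch of the mesh scan; the system, domain, and mesh size are
# placeholders, not the paper's determinant equations.
def f(a, b):
    return a**2 + b**2 - 5.0, a + b - 1.0   # root with a < 0: (-1, 2)

def newton2(a, b, iters=60):
    """Newton refinement; the Jacobian [[2a, 2b], [1, 1]] is hard-coded
    for the placeholder system."""
    for _ in range(iters):
        f1, f2 = f(a, b)
        det = 2.0 * (a - b)                 # det of the Jacobian
        if abs(det) < 1e-14:
            break
        a, b = a - (f1 - 2.0 * b * f2) / det, b - (-f1 + 2.0 * a * f2) / det
    return a, b

# Coarse scan over a < 0, b > 0, keeping the smallest-residual point.
best, best_val = (0.0, 0.0), float('inf')
steps = 60
for i in range(steps):
    for j in range(steps):
        a0, b0 = -3.0 + 3.0 * i / steps, 3.0 * (j + 1) / steps
        v = sum(abs(x) for x in f(a0, b0))
        if v < best_val:
            best, best_val = (a0, b0), v
a_sol, b_sol = newton2(*best)
```

For the placeholder system the scan followed by refinement converges to (a, b) = (−1, 2), the unique root in the constrained region.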
Summarizing, we formulate our main result. The following theorem contains a general description of the solution of the time-optimal control problem (2.1), (2.2). It should be supplemented by Lemma 3.1 for a specific implementation of the listed cases.
Theorem 4.2. The time-optimal control problem (2.1), (2.2), where n ≥ 4, is reduced to solving a truncated Hausdorff moment problem. For generic points with x^0_1 > 0, four (if n is even) or five (if n is odd) cases of the optimal control are possible. These cases are listed in Table 1: cases 1, 3, 5, 7, 9 correspond to odd n and cases 2, 4, 6, 8 correspond to even n.
In Case 1 one needs to solve one linear equation for θ, check three matrices for positive definiteness, solve one polynomial equation in one variable for the z_i, and solve one linear system for the σ_i.
In Cases 2, 3, 4, 7 one needs to solve one polynomial equation for a or b, one linear equation for θ, check three matrices for positive definiteness, solve one polynomial equation in one variable for the z_i, and solve one linear system for the σ_i.
In Cases 5, 6, 8, 9 one needs to solve a system of two polynomial equations for a and b, one linear equation for θ, check three matrices for positive definiteness, solve one polynomial equation in one variable for the z_i, and solve one linear system for the σ_i.

Remark 4.3. For a generic point with x^0_1 < 0 one can formulate the result similarly to Theorem 4.2. Alternatively, one can consider the starting point x̃^0 with x̃^0_1 = −x^0_1 and x̃^0_j = (−1)^{j-1} x^0_j for j ≥ 2, obtained from x^0 by the sign symmetry of the system. Then x̃^0_1 > 0 and, obviously, if ũ(t) is an optimal control for the point x̃^0, then −ũ(t) is an optimal control for the point x^0. For non-generic points x^0, a similar analysis can be carried out. However, in examples using floating-point computations, one can consider x^0 as generic. The result for the case x^0_1 = 0, which is considered as non-generic in this paper, also can easily be deduced from Theorem 4.2.
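The sign symmetry behind Remark 4.3 can be checked numerically: in our reading of the transformation, replacing u by −u maps solutions of (2.1) by x_1 → −x_1 and x_j → (−1)^{j-1} x_j for j ≥ 2. A sketch verifying this by simulation with an arbitrary control:

```python
import math

# Numerical check (our reading of Remark 4.3): replacing u by -u maps
# solutions of x1' = u, xj' = x1^(j-1) by x1 -> -x1 and
# xj -> (-1)^(j-1) * xj for j >= 2.  Euler simulation from the origin.
def run(u, theta=2.0, n=4, steps=20_000):
    x, dt = [0.0] * n, theta / steps
    for i in range(steps):
        t = i * dt
        # component j+1 has derivative x1^j
        x = [x[0] + u(t) * dt] + [x[j] + x[0] ** j * dt for j in range(1, n)]
    return x

x = run(lambda t: math.cos(3.0 * t))
y = run(lambda t: -math.cos(3.0 * t))
signs = [-1.0, -1.0, 1.0, -1.0]   # -1 for x1, then (-1)^(j-1) for j = 2, 3, 4
```

Each component of y matches the corresponding signed component of x to within rounding.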

Examples
We illustrate our algorithm by the time-optimal control problem for the four-dimensional system ẋ_1 = u, ẋ_2 = x_1, ẋ_3 = x_1^2, ẋ_4 = x_1^3. (5.1) Here n = 4 and m = 2. We suppose that our initial points are generic; at least, this is true when we deal with floating-point calculations. To solve the concrete examples given below, we used Python with the SymPy library.
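Trajectories of (5.1) under a given admissible control are easy to integrate numerically. The following sketch uses a plain Euler scheme and an arbitrary one-switch bang-bang control, chosen for illustration only (it is not the optimal control of the examples below).

```python
import numpy as np

def simulate(u, x0, T, dt=1e-4):
    """Integrate system (5.1) by the explicit Euler scheme under control u(t)."""
    x = np.array(x0, dtype=float)
    t = 0.0
    while t < T:
        x1 = x[0]
        # System (5.1): x1' = u, x2' = x1, x3' = x1^2, x4' = x1^3
        x += dt * np.array([u(t), x1, x1**2, x1**3])
        t += dt
    return x

# An arbitrary admissible control with one switch at t = 1, |u| <= 1.
u = lambda t: 1.0 if t < 1.0 else -1.0
x_final = simulate(u, [0.0, 0.0, 0.0, 0.0], 2.0)
# x1 returns to 0 at t = 2, while x2, x3, x4 accumulate the powers of x1.
```

Such a simulator is convenient for cross-checking a computed optimal control: steering the candidate control from x^0 should bring the state close to the origin at the computed optimal time.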
5.1. Initial point x^0 = (1, −2, −6, 2). We discuss this example in detail in order to demonstrate the application of our algorithm. Since n is even, we have to check four cases, namely Cases 2, 4, 6 and 8 from Table 1. In each case, we apply the corresponding conditions described in Lemma 3.1, as explained in Section 4.
First, we substitute the initial point into (3.10) and write c_i as polynomials in θ, a and b; these expressions are referred to as (5.2).

Case 2. Here k = 2, a = 0 and the conditions are of type A. Substituting a = 0 into (5.2), we obtain c_2, c_3, c_4 as polynomials in b, while c_1 is a polynomial in b and θ. In Step 1, we first consider the polynomial equation in b.

In the cases with a ≠ 0 one seeks values of a for which two polynomials in b have a common root; as is well known [2], such values are the roots of the determinant of the Sylvester matrix. Analogously, in Case 8 we exclude b from the corresponding system and obtain a polynomial in a. This polynomial has no real roots; hence, in Case 8, there are no suitable controls. Therefore, having checked all possible cases, we find the unique control satisfying all the conditions (in Case 4); hence, this control is optimal.

5.2. Initial point x^0 = (1, −2, −12/5, 2). For this initial point, analogously to the previous example, Case 4 is realized. However, now the optimal time is very large, θ ≈ 1907.10809. Components of the optimal trajectory are shown in Figure 2(b). Here we have z_2 ≈ 0.001903 and σ_2 ≈ 1903.19513. Thus, almost all the time the trajectory moves with a very small first component x_1(t), i.e., in a neighborhood of the "equilibrium subspace" x_1 = 0. This shows that x^0 is close to the boundary of the 0-controllability domain; one can prove that the point (1, −2, −12/5 + 1/100, 2) cannot be steered to the origin by a control satisfying the constraint |u| ≤ 1.

In [7] the three-dimensional time-optimal control problem for the system ẋ_1 = u, ẋ_2 = x_1, ẋ_3 = x_1^3 (5.3) was completely solved. Such a problem can be thought of as a time-optimal control problem for the four-dimensional system (5.1) where the coordinate x_3^0 is free. This feature leads to a more complicated structure of optimal controls (though the equations for finding optimal controls are easier); in particular, an optimal control can be one of eight possible types. Moreover, some initial points admit two optimal controls of different types. For example,
let us consider the line in the four-dimensional space such that x_1 = 1, x_2 = −8, x_4 ≈ −1.8792 (the precise value can be found in [7]) and x_3 is arbitrary. One can show that the optimal time θ as a function of x_3 has two minimum points with the same minimum value, namely x_3 ≈ −3.8289 and x_3 ≈ −28.4649; the minimum value is θ ≈ 17.0918. The point x^0 = (1, −8, −3.8289, −1.8792) is not generic in the sense mentioned above: the optimal control corresponds to Case 2 with b = x_1^0 or, equivalently, to Case 4 with a = 0. The optimal control for the point x^0 = (1, −8, −28.4649, −1.8792) is of Case 6. So, we obtain two different solutions for the system (5.3) found in [7]; components of the optimal trajectories are shown in Figure 3. The obtained result shows, in particular, that controllability sets for the system (5.1) can be non-convex.
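The elimination step used in Cases 6 and 8 of Section 5.1 (computing the determinant of the Sylvester matrix to exclude one variable) is available directly in SymPy as the resultant. Here is a minimal sketch on two hypothetical polynomials, not the actual determinants from the example.

```python
import sympy as sp

a, b = sp.symbols('a b', real=True)

# Hypothetical stand-ins for the two polynomial conditions in a and b.
p = b**2 - a
q = b - 1

# Eliminate b: common roots in b exist only where the resultant
# (the determinant of the Sylvester matrix of p and q in b) vanishes.
res = sp.resultant(p, q, b)      # a polynomial in a alone
a_candidates = sp.real_roots(res)

# For each candidate a, recover b and keep genuine common roots.
solutions = []
for ra in a_candidates:
    for rb in sp.real_roots(q.subs(a, ra)):
        if sp.simplify(p.subs({a: ra, b: rb})) == 0:
            solutions.append((ra, rb))
```

This is the mechanism by which a single polynomial in a (for instance, the one with roots a ≈ −1.66181 and a ≈ −1.40390 in Case 6 of Section 5.1) is obtained from a system of two polynomial equations.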

Conclusion and open questions
In the paper, the time-optimal problem (2.1)-(2.2) was considered. We have shown that this problem reduces to a truncated Hausdorff moment problem. We proved that, regardless of the dimension of the system, only four or five cases (depending on the parity of the dimension) of the Hausdorff moment problem arise. In these moment problems, the interval [a, b] on which the problem is stated can be unknown. In each case one needs to solve a system of at most two polynomial equations in the two variables a and b and, after that, a polynomial equation in one variable to find a control (in some cases, one solves only one or two polynomial equations in one variable). As a result, we obtain an analytic solution of the time-optimal problem (2.1)-(2.2), which is in principle the same for all dimensions.

Figure 1. Sketch of a possible graph of x_1(t).

c_j = ((x_1^0)^j − j x_j^0 − 2b^j)/j, j = 2, . . ., n. Hence, c_2, . . ., c_n and c_2^b, . . ., c_{n−1}^b, defined by (3.15), become polynomials in b, while c_1 and c_1^b are polynomials in b and θ. The conditions are of type A for Case 2 and of type C for Case 3, and k = m. Here, the equation (3.20) or (3.28) with d = 1 does not include θ; thus, it is a polynomial equation in b. Hence, the algorithm is as follows.

Step 1. Solve the polynomial equation in b of the following form: Case 2: det{c_{i+j}}_{i,j=1}^k = 0; Case 3: det{c_{i+j}^b}_{i,j=1}^k = 0, where c_j^b = −c_{j+1} + b c_j, j = 2, . . ., n − 1. Substitute each obtained b such that b > x_1^0 (if any) into the (linear) equation: Case 2: det{c_{i+j−1}}_{i,j=1}^k = 0; Case 3: det{c_{i+j−1}^b}_{i,j=1}^k = 0, where c_1^b = −c_2 + b c_1, and find θ. The next two steps should be implemented for each pair (b, θ) such that b > x_1^0 and θ > 0 (if any). Substituting such a pair into c_j and c_j^b makes them known numbers.

Step 2. Check whether the following three numeric matrices are positive definite, Case 2:

we use the notation e_1 = a^2 + 5/2, e_2 = (2/3)a^3 + 19/3, e_3 = −(1/2)a^4 + 7/4.
The determinant of this matrix equals a polynomial having two negative roots, a ≈ −1.66181 and a ≈ −1.40390. Substituting them into the system, we get the unique pair (a, b) such that a < 0 and b > x_1^0, namely a ≈ −1.40390 and b ≈ 2.98306. Substituting these values into (5.2), we get c_1 ≈ θ − 7.77392 and c_2 ≈ −4.42772. Then we find θ from the equation c_1^b = −c_2 + b c_1 = 0, which gives θ ≈ 6.28964 > 0. Then c_1 ≈ −1.48429. Now we pass to Step 2. Since k = 1, here we have only one 1×1 matrix c_1^a. Since c_1^a = c_2 − a c_1 ≈ −6.51151 < 0, it is not positive definite. Therefore, in Case 6, there are no suitable controls.

Case 8. Here k = 1 and the conditions are of type D. In Step 1, we have the following system of two equations,
Thus, in any case the number of parameters is less than n, which is impossible for a generic point. This proves (3.14). Now we analyze the possible values of k for generic points. In view of conditions (3.13) and (3.14), we get three cases for b and σ_1: