About the "horizon" parameter in the solve_ocp() method · python-control/python-control · Discussion #781

A bit more information: the problem seems to be with the default method used by scipy.optimize.minimize for constrained problems, which is 'SLSQP'. As near as I can tell, given the initial guess (which is to drive in a straight line), it can't figure out what to do when there are more than 3 points in the horizon. You can see that something is wrong because the value of the cost function is very high (result.cost = 16000.0 in the output above).
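As an aside, the sensitivity to the choice of method is easy to see outside of solve_ocp as well. The sketch below is illustrative only (a toy constrained problem, not the vehicle OCP from this thread); it just shows how passing different values of method to scipy.optimize.minimize can be compared on the same problem:

```python
# Illustrative sketch: comparing scipy.optimize.minimize methods on a
# small constrained problem (NOT the vehicle OCP from this discussion).
from scipy.optimize import minimize

# Toy quadratic objective with linear inequality constraints, written
# in the old-style dict form accepted by both 'SLSQP' and 'trust-constr'.
fun = lambda x: (x[0] - 1) ** 2 + (x[1] - 2.5) ** 2
cons = (
    {"type": "ineq", "fun": lambda x: x[0] - 2 * x[1] + 2},
    {"type": "ineq", "fun": lambda x: -x[0] - 2 * x[1] + 6},
    {"type": "ineq", "fun": lambda x: -x[0] + 2 * x[1] + 2},
)
bnds = ((0, None), (0, None))

results = {}
for method in ("SLSQP", "trust-constr"):
    res = minimize(fun, (2, 0), method=method, bounds=bnds, constraints=cons)
    results[method] = res
    print(f"{method}: fun = {res.fun:.6f}, success = {res.success}")
```

On this easy problem both methods agree; the point is only that the method keyword (which solve_ocp forwards via minimize_method) is the knob being turned here.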

If you switch the solver to 'trust-constr', then things work OK:

horizon = np.linspace(0, Tf, 5, endpoint=True)
result = opt.solve_ocp(
    vehicle, horizon, x0, traj_cost, constraints,
    terminal_cost=term_cost, initial_guess=u0, minimize_method='trust-constr')

The (abridged) result is now

    execution_time: 226.1267969608307
               fun: 0.007588438759261349
              grad: array([-7.73912826e-03, -2.26881260e-03, -2.45970453e-03, -1.93155936e-02,
        1.53641058e-01,  2.76074462e+00,  4.69169176e+00,  3.75040997e+00,
       -2.55960229e+04,  1.59332017e+00])
            inputs: array([[ 1.00001233e+01,  1.00030576e+01,  1.00030819e+01,
         1.00008577e+01,  1.00651172e+01],
       [ 6.58643630e-03,  2.78602633e-03,  2.28824265e-03,
        -4.30400352e-03, -7.74700114e-03]])
               jac: [array([[ 1.,  0.,  0.,  0.,  0.,  0.,  0.,  0., -0., -0.],
       [ 0.,  0.,  0.,  0.,  0.,  1.,  0.,  0., -0., -0.],
       [ 0.,  1.,  0.,  0.,  0.,  0.,  0.,  0., -0., -0.],
       [ 0.,  0.,  0.,  0.,  0.,  0.,  1.,  0., -0., -0.],
       [ 0.,  0.,  1.,  0.,  0.,  0.,  0.,  0., -0., -0.],
       [ 0.,  0.,  0.,  0.,  0.,  0.,  0.,  1., -0., -0.],
       [ 0.,  0.,  0.,  1.,  0.,  0.,  0.,  0., -0., -0.],
       [ 0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  1., -0.],
       [ 0.,  0.,  0.,  0.,  1.,  0.,  0.,  0., -0., -0.],
       [ 0.,  0.,  0.,  0.,  0.,  0.,  0.,  0., -0.,  1.]])]
           problem: <control.optimal.OptimalControlProblem object at 0x10fbb6980>
            states: array([[ 0.00000000e+00,  2.49962868e+01,  4.99732645e+01,
         7.49317792e+01,  9.99999628e+01],
       [-2.00000000e+00, -1.44955389e+00, -2.03455865e-01,
         1.29540778e+00,  1.99998404e+00],
       [ 0.00000000e+00,  3.87210391e-02,  5.99980593e-02,
         5.10708705e-02,  3.22629524e-04]])
            status: 2
           success: True
              time: array([ 0. ,  2.5,  5. ,  7.5, 10. ])

The cost is now result.cost = 0.007588438759261349. Note, however, that it takes a lot longer to get the result (almost four minutes), and you also get a warning about the Hessian being approximately zero (which may be why the other solver was not working well).
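For what it's worth, 'trust-constr' lets you control how the Hessian is handled instead of relying on finite differencing, which can matter when the Hessian is nearly zero in some directions. The sketch below is plain scipy on a standard test function, not the vehicle OCP (whether and how these options can be forwarded through solve_ocp is not shown here); it passes a quasi-Newton Hessian approximation (scipy.optimize.BFGS) explicitly:

```python
# Illustrative sketch (plain scipy, standard Rosenbrock test function):
# give 'trust-constr' a quasi-Newton Hessian approximation rather than
# letting it estimate the Hessian by differencing the gradient.
import numpy as np
from scipy.optimize import minimize, BFGS, rosen, rosen_der

x0 = np.array([0.0, 0.0])
res = minimize(rosen, x0, method="trust-constr",
               jac=rosen_der,   # exact gradient
               hess=BFGS())     # quasi-Newton Hessian update strategy
print(res.x, res.fun, res.success)
```

The minimum of the Rosenbrock function is at (1, 1), which the solver reaches without ever forming a finite-difference Hessian.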

I'll keep looking at this, but the quick (but slow :-) fix is to use a different optimization method.

BTW, the much faster way to solve this problem is to exploit the fact that the system is differentially flat: https://python-control.readthedocs.io/en/0.9.2/kincar-flatsys.html. Approach #3 in the example script is solving the same problem and returns a result instantaneously.