# Direct interface
As we discussed previously, there are various ways to use the auto-generated optimizer: you can use the generated Rust code directly, access it over a TCP socket, and so on. In this section we focus on how to access it directly from Python.
The idea is that OpEn can generate a Python module that you can import.
## Generate a Python module for your optimizer
Consider the following parametric optimization problem:
\[ \begin{align} \operatorname*{Minimize}_{\|u\|\leq r}& \sum_{i=1}^{n_u - 1} b (u_{i+1} - u_{i}^2)^2 + (a-u_i)^2 \\ \text{subject to: }& 1.5 u_1 - u_2 = 0 \\ &u_3 - u_4 + 0.1 \leq 0 \end{align} \]
```python
import opengen as og
import casadi.casadi as cs

u = cs.SX.sym("u", 5)
p = cs.SX.sym("p", 2)
phi = og.functions.rosenbrock(u, p)
c = cs.vertcat(1.5 * u[0] - u[1],
               cs.fmax(0.0, u[2] - u[3] + 0.1))
bounds = og.constraints.Ball2(radius=1.5)
problem = og.builder.Problem(u, p, phi) \
    .with_penalty_constraints(c) \
    .with_constraints(bounds)
build_config = og.config.BuildConfiguration() \
    .with_build_directory("my_optimizers") \
    .with_build_mode("debug") \
    .with_build_python_bindings()
meta = og.config.OptimizerMeta() \
    .with_optimizer_name("rosenbrock")
builder = og.builder.OpEnOptimizerBuilder(problem, meta,
                                          build_config)
builder.build()
```
Note that we have used `with_build_python_bindings()`; this allows us to import the auto-generated optimizer as a Python module!
## Use the generated module
The above code generates an optimizer, which is stored in `my_optimizers/rosenbrock`.
In that directory you will find a file called `rosenbrock.so` (or `rosenbrock.pyd` on Windows),
which can be loaded as a Python module.
Note, however, that this directory is most likely not on your Python path,
so you will have to add it before you can import the optimizer.
This is very easy to do:
```python
import sys
sys.path.insert(1, './my_optimizers/rosenbrock')
import rosenbrock
```
Then you will be able to use it as follows:
```python
solver = rosenbrock.solver()
response = solver.run(p=[20., 1.])
if not response.is_ok():
    raise RuntimeError(response.get().message)
result = response.get()
u_star = result.solution
```
In the first line, `solver = rosenbrock.solver()`, we obtain an instance of
`Solver`, which can be used to solve parametric optimization problems.
In the second line, `response = solver.run(p=[20., 1.])`, we call the solver
with parameter $p=(20, 1)$. Method `run` accepts three more optional
arguments, namely:

- `initial_guess` (can be either a list or a numpy array),
- `initial_lagrange_multipliers`, and
- `initial_penalty`.
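To illustrate the calling convention with all optional arguments, the sketch below uses a stand-in class so that the snippet is self-contained; with the real generated module you would call `rosenbrock.solver()` instead, and `run` would perform the actual optimization. All numeric values are arbitrary.

```python
# Stand-in for the generated solver, used only to show the keyword
# arguments accepted by run(); the real run() returns a SolverResponse.
class FakeSolver:
    def run(self, p, initial_guess=None,
            initial_lagrange_multipliers=None, initial_penalty=None):
        # Echo the inputs so we can inspect what was passed
        return {"p": p, "u0": initial_guess,
                "y0": initial_lagrange_multipliers,
                "c0": initial_penalty}

solver = FakeSolver()
out = solver.run(p=[20., 1.],
                 initial_guess=[0.0] * 5,             # warm start for u
                 initial_lagrange_multipliers=[0.0],  # multipliers (if any)
                 initial_penalty=10.0)                # initial penalty value
print(out["c0"])  # → 10.0
```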
The solver returns an object of type `SolverResponse`, similar to the TCP
interface. First call `response.is_ok()` to determine whether the call
succeeded, then call `response.get()` to obtain either a `SolverStatus`
object or a `SolverError`. This mirrors the Python TCP interface, but without
the socket transport layer.
```python
response = solver.run(p=[20., 1.])
if response.is_ok():
    result = response.get()
    u_star = result.solution
else:
    error = response.get()
    print(error.code, error.message)
```
The returned objects also implement `__repr__`, which makes them convenient to
inspect in a Python REPL or notebook:
```python
response = solver.run(p=[20., 1.])
print(response)
print(response.get())
```
The `SolverStatus` object exposes the following properties:
| Property | Explanation |
|---|---|
| `exit_status` | Exit status; one of (i) `Converged`, (ii) `NotConvergedIterations`, if the maximum number of iterations was reached and the algorithm therefore did not converge to the specified tolerances, or (iii) `NotConvergedOutOfTime`, if the solver did not have enough time to converge |
| `num_outer_iterations` | Number of outer iterations |
| `num_inner_iterations` | Total number of inner iterations (over all inner problems) |
| `last_problem_norm_fpr` | Norm of the fixed-point residual of the last inner problem; this is a measure of the solution quality of the inner problem |
| `f1_infeasibility` | Euclidean norm of $c^{-1}(y^+-y)$, which is equal to the distance between $F_1(u, p)$ and $C$ at the solution |
| `f2_norm` | Euclidean norm of $F_2(u, p)$ at the solution |
| `solve_time_ms` | Total execution time in milliseconds |
| `penalty` | Last value of the penalty parameter |
| `solution` | Solution vector |
| `cost` | Value of the cost function at the solution |
| `lagrange_multipliers` | Vector of Lagrange multipliers (if $n_1 > 0$), otherwise an empty vector |
These are the same properties as those of `opengen.tcp.SolverStatus`.
For backward compatibility, the generated module also exposes
`OptimizerSolution` as an alias of `SolverStatus`.
If the call fails, `response.get()` returns a `SolverError` with:
| Property | Explanation |
|---|---|
| `code` | Error code, aligned with the TCP interface |
| `message` | Detailed error message |
The most common error codes are:
| Code | Meaning |
|---|---|
| 1600 | Initial guess has incompatible dimensions |
| 1700 | Wrong dimension of initial Lagrange multipliers |
| 2000 | Problem solution failed; the message includes the solver-side reason |
| 3003 | Wrong number of parameters |
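For friendlier diagnostics, these codes can be mapped to short hints on the client side. The helper below is hypothetical (not part of the generated module) and merely restates the table above:

```python
# Hypothetical client-side mapping of the common error codes
ERROR_HINTS = {
    1600: "initial guess has incompatible dimensions",
    1700: "wrong dimension of initial Lagrange multipliers",
    2000: "problem solution failed (see the error message for details)",
    3003: "wrong number of parameters",
}

def describe_error(code, message):
    """Combine the numeric code, a short hint and the detailed message."""
    hint = ERROR_HINTS.get(code, "unknown error")
    return f"solver error {code} ({hint}): {message}"

print(describe_error(3003, "expected 2 parameters"))
```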
## Importing an optimizer with a variable name
Previously we used `import rosenbrock` to import the auto-generated module.
The limitation of this syntax is that the name of the optimizer, `rosenbrock`, is hard-coded, which makes it difficult to change.
A better approach is:
```python
import os
import sys

optimizers_dir = "my_optimizers"
optimizer_name = "rosenbrock"
sys.path.insert(1, os.path.join(optimizers_dir, optimizer_name))
rosenbrock = __import__(optimizer_name)
```
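An equivalent, arguably more readable alternative to `__import__` is `importlib.import_module` from the standard library. The snippet below demonstrates the pattern with a standard-library module name held in a variable, only so it runs without a generated optimizer; substitute your optimizer's name and path:

```python
import importlib

# The module name is a variable, exactly as in the __import__ example above;
# "json" stands in for the generated optimizer's name here.
module_name = "json"
module = importlib.import_module(module_name)
print(module.dumps([1, 2]))  # → [1, 2]
```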