Non-proprietary Optimization


See IPOPT for our recently-implemented connection to the free, open-source IPOPT solver.

We are trying to locate a suitable non-commercial optimizer to build into ASCEND. This would alleviate the current dependence on the proprietary CONOPT optimizer.

Requirements for New Optimizer

Mathematical/algorithm requirements:

  • Must be able to handle 'box constraints', e.g. <math>a \le x \le b</math> for each variable.
  • Must be able to handle equality constraints, e.g. <math>g(x,y,z) = 0</math> and <math>x = f(y,z,a,b)</math> (see the problem form sketched after these lists).
  • Must be able to handle non-linear problems
  • If matrix algebra is required, should use sparse methods rather than dense
  • Should be able to make use of derivatives

Open questions:

  • Should be able to handle general inequality constraints, e.g. <math>g(x,y,z) \le 0</math>?
  • Should not assume convex problem?
  • Should not require second derivatives/Hessian matrix?
  • Should be able to handle non-linear constraints?
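
Taken together (and leaving the open questions aside), these requirements amount to a problem of roughly the following form, with box constraints on the variables and equality constraints between them. This is only a sketch of the intended problem class, not a formal specification:

<math> \begin{align} \min f(\mathbf{x}) \\ \mathbf{g}(\mathbf{x}) = \mathbf{0} \\ \mathbf{a} \le \mathbf{x} \le \mathbf{b} \end{align} </math>

If general inequality constraints turn out to be required as well, the form above would gain additional constraints of the kind <math>\mathbf{h}(\mathbf{x}) \le \mathbf{0}</math>.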

Supporting information requirements:

  • Should be known to solve a range of problems comparable to those we expect to encounter
  • Must be accompanied by a set of solvable problems for which exact solutions are known, for testing purposes.
  • Should have a published algorithm
  • Must have a user's manual
  • Should have a freely-available user's manual

Pedantic legal stuff:

  • Must be freely available to academics
  • All requirement dependencies must be freely available to academics
  • Should be freely available to everybody
  • Should be redistributable in unmodified source form
  • Should be redistributable in modified source form
  • Should be redistributable in binary (compiled) form
  • Documentation should be redistributable in original form
  • Documentation should be redistributable in modified/repackaged form

Background info

Related stuff:

We have done our own cursory survey of optimisation software.

TRON

Some work was done to investigate possibly adding support for TRON to ASCEND. TRON can only solve the limited optimisation problem:

<math> \begin{align} \min f(\mathbf{x}) \\ \mathbf{x}_{lower} \le \mathbf{x} \le \mathbf{x}_{upper} \end{align} </math>

The following are observations:

  • TRON requires that you write your own iteration loop outside TRON itself (see the driver sketch after this list).
  • TRON requires that you evaluate the function value and the Hessian matrix yourself. The Hessian is passed in a special compressed form: a vector of diagonal values, followed by the off-diagonal values in compressed column form; since the matrix is symmetric, only half of it needs to be stored (so compressed column == compressed row?).
  • TRON updates the variable vector and tells you when an optimum has been found.
  • It's up to you to count iterations and terminate if the solution is taking too long, etc.
  • It seems fairly conceivable that more general systems could be solved with TRON by performing a separate NLA (nonlinear algebraic) solution at each iteration. But we're hesitant to try that: we're not completely comfortable with the maths involved at this stage, and we're not sure whether it would upset TRON's own view of feasibility/optimality.
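
To make the 'write your own iteration loop' point concrete, here is a rough C sketch of what an ASCEND-side driver might look like. All of the names below (tron_problem, tron_step, evaluate_objective_and_hessian) are hypothetical placeholders, not TRON's actual Fortran entry points; the real code uses reverse communication and the compressed Hessian storage described above.

<syntaxhighlight lang="c">
/* Hypothetical wrapper around a TRON-like bound-constrained solver.
   These names are placeholders for illustration only. */
typedef struct {
    int     n;        /* number of variables */
    double *x;        /* variable values, updated in place by the solver */
    double *xlower;   /* box constraint lower bounds */
    double *xupper;   /* box constraint upper bounds */
} tron_problem;

typedef enum { TRON_EVALUATE, TRON_CONVERGED, TRON_FAILED } tron_status;

/* placeholder: one trust-region step; updates p->x and reports status */
tron_status tron_step(tron_problem *p, double f,
                      const double *hess_diag, const double *hess_offdiag);

/* placeholder: caller-supplied evaluation of f(x) and the compressed Hessian
   (in ASCEND this would call into the compiled model's relations) */
void evaluate_objective_and_hessian(const double *x, double *f,
                                    double *hess_diag, double *hess_offdiag);

#define TRON_MAX_ITER 200

int solve_with_tron(tron_problem *p, double *hess_diag, double *hess_offdiag)
{
    int iter;
    for (iter = 0; iter < TRON_MAX_ITER; ++iter) {
        double f;

        /* our job: evaluate the objective and Hessian at the current point */
        evaluate_objective_and_hessian(p->x, &f, hess_diag, hess_offdiag);

        /* TRON's job: take a step, update p->x, and say how it went */
        switch (tron_step(p, f, hess_diag, hess_offdiag)) {
        case TRON_CONVERGED: return 0;   /* optimum found */
        case TRON_FAILED:    return 1;   /* solver gave up */
        case TRON_EVALUATE:  break;      /* not there yet; go around again */
        }
    }
    return 2;   /* our job: decide when it has taken too long */
}
</syntaxhighlight>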

IPOPT

IPOPT solves a more useful type of problem that includes equality constraints:

<math> \begin{align} \min f(\mathbf{x}) \\ \mathbf{c}_L \le \mathbf{c}(\mathbf{x}) \le \mathbf{c}_U \\ \mathbf{x}_L \le \mathbf{x} \le \mathbf{x}_U \end{align} </math>

Note that equality constraints are expressed simply by setting the corresponding elements of <math>\mathbf{c}_L</math> and <math>\mathbf{c}_U</math> equal to each other.

IPOPT supports a number of linear solvers, including the HSL routine MA27 as well as MUMPS, which is free and open source. So it is now possible to distribute a fully working copy of IPOPT with ASCEND. It is also possible to have 'drop-in' support for HSL, because IPOPT can detect and dlopen HSL at runtime if it is present.
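
The following is a minimal sketch (not ASCEND's actual IPOPT hookup) of how a tiny problem might be set up through IPOPT's C interface (IpStdCInterface.h): minimise <math>(x_0-1)^2+(x_1-2)^2</math> subject to <math>x_0+x_1=3</math> and <math>0 \le x \le 10</math>. The example problem, option values and header location are assumptions for illustration; the points to note are that the equality constraint is specified by giving the constraint identical lower and upper bounds, and that MUMPS is selected via the linear_solver option.

<syntaxhighlight lang="c">
#include <stdlib.h>
#include "IpStdCInterface.h"   /* header location may vary between IPOPT versions */

static Bool eval_f(Index n, Number *x, Bool new_x, Number *obj, UserDataPtr ud) {
    *obj = (x[0]-1.0)*(x[0]-1.0) + (x[1]-2.0)*(x[1]-2.0);
    return TRUE;
}
static Bool eval_grad_f(Index n, Number *x, Bool new_x, Number *grad, UserDataPtr ud) {
    grad[0] = 2.0*(x[0]-1.0);
    grad[1] = 2.0*(x[1]-2.0);
    return TRUE;
}
static Bool eval_g(Index n, Number *x, Bool new_x, Index m, Number *g, UserDataPtr ud) {
    g[0] = x[0] + x[1];        /* c(x); equal bounds below make this c(x) = 3 */
    return TRUE;
}
static Bool eval_jac_g(Index n, Number *x, Bool new_x, Index m, Index nele_jac,
                       Index *iRow, Index *jCol, Number *values, UserDataPtr ud) {
    if (values == NULL) {      /* first call: sparsity pattern of the Jacobian */
        iRow[0] = 0; jCol[0] = 0;
        iRow[1] = 0; jCol[1] = 1;
    } else {                   /* later calls: the (here constant) values */
        values[0] = 1.0;
        values[1] = 1.0;
    }
    return TRUE;
}
static Bool eval_h(Index n, Number *x, Bool new_x, Number obj_factor, Index m,
                   Number *lambda, Bool new_lambda, Index nele_hess,
                   Index *iRow, Index *jCol, Number *values, UserDataPtr ud) {
    if (values == NULL) {      /* lower triangle of the Hessian: diagonal only */
        iRow[0] = 0; jCol[0] = 0;
        iRow[1] = 1; jCol[1] = 1;
    } else {                   /* constraint is linear, so only the objective contributes */
        values[0] = 2.0*obj_factor;
        values[1] = 2.0*obj_factor;
    }
    return TRUE;
}

int main(void) {
    Number x_L[2] = {0.0, 0.0}, x_U[2] = {10.0, 10.0};  /* box constraints */
    Number g_L[1] = {3.0}, g_U[1] = {3.0};              /* c_L = c_U  =>  equality constraint */
    Number x[2]   = {5.0, 5.0};                         /* starting point */
    Number obj;

    IpoptProblem nlp = CreateIpoptProblem(2, x_L, x_U, 1, g_L, g_U,
                                          2 /* Jacobian nonzeros */, 2 /* Hessian nonzeros */,
                                          0 /* C-style indexing */,
                                          &eval_f, &eval_g, &eval_grad_f,
                                          &eval_jac_g, &eval_h);

    AddIpoptStrOption(nlp, "linear_solver", "mumps");   /* free solver, no HSL needed */
    AddIpoptNumOption(nlp, "tol", 1e-8);

    enum ApplicationReturnStatus status = IpoptSolve(nlp, x, NULL, &obj,
                                                     NULL, NULL, NULL, NULL);
    FreeIpoptProblem(nlp);
    return (status == Solve_Succeeded) ? EXIT_SUCCESS : EXIT_FAILURE;
}
</syntaxhighlight>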

Download/build instructions are here: [1]

MOOCHO / rSQP++

MOOCHO is packaged for download only as part of the big Trilinos bundle. This is a huge download that takes more than an hour to build on my system.

An RPM has been built for the Trilinos bundle, including MOOCHO. Get it from: [2]

MOOCHO's support for generic linear algebra routines is also now deprecated; the developers are going for all-out integration with the other Trilinos packages. That means there's quite a lot to learn about. It's a bit daunting, so if anyone else fancies helping out, by all means please do.