TOMLAB /LGO combines deterministic and stochastic, derivative-free global and local optimization strategies in a seamlessly integrated manner. The LGO solver suite is maintained by Pinter Consulting Services, Inc.; see http://www.pinterconsulting.com/.

The derivative-free (direct search) approach implemented in LGO enables fast global search, and it is of particular relevance for applications in which higher-order (gradient, Hessian, ...) information is difficult or costly to obtain.

The current LGO implementation incorporates the following solver modules:

  • Branch-and-bound global search method (BB)
  • Global adaptive random search (GARS)
  • Multi-start based global random search (MS)
  • Constrained local search (LS) by the reduced gradient method

The branch-and-bound search method (BB) is a robust implementation of a theoretically established global optimization approach. It combines set partition steps with deterministic and randomized sampling. This solver module will typically generate a sufficiently close approximation of the global optimizer point(s) before LGO switches to local search. However, runtimes can be expected to grow for higher-dimensional and more difficult models if the user wants BB to find a close approximation of the global solution.
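
To illustrate the general idea of combining set partitioning with sampling, the following schematic Python sketch bisects the search box, samples each part, and discards parts that cannot improve on the incumbent under an assumed bound constant. It is a simplified illustration only, not the BB algorithm implemented in LGO; the function names, sample counts, and the 'lipschitz' slack constant are arbitrary assumptions.

    import heapq
    import random

    def bb_search(f, lo, hi, n_samples=20, max_boxes=200, lipschitz=10.0):
        """Toy partition-and-sample search for min f over the box [lo, hi]."""
        lo, hi = list(lo), list(hi)

        def sample_box(box_lo, box_hi):
            best_x, best_f = None, float("inf")
            for _ in range(n_samples):
                x = [random.uniform(a, b) for a, b in zip(box_lo, box_hi)]
                fx = f(x)
                if fx < best_f:
                    best_x, best_f = x, fx
            return best_x, best_f

        def bound(box_lo, box_hi, f_box):
            # Optimistic value estimate: best sample minus a slack proportional
            # to the box diameter (the 'lipschitz' constant is an assumption).
            diam = max(b - a for a, b in zip(box_lo, box_hi))
            return f_box - lipschitz * diam

        x_best, f_best = sample_box(lo, hi)
        heap = [(bound(lo, hi, f_best), lo, hi, f_best)]
        for _ in range(max_boxes):
            if not heap:
                break
            lb, box_lo, box_hi, f_box = heapq.heappop(heap)
            if lb >= f_best:
                continue  # box cannot beat the incumbent under the assumed bound
            # Bisect along the longest edge and sample both halves.
            k = max(range(len(box_lo)), key=lambda i: box_hi[i] - box_lo[i])
            mid = 0.5 * (box_lo[k] + box_hi[k])
            left_hi = box_hi[:k] + [mid] + box_hi[k + 1:]
            right_lo = box_lo[:k] + [mid] + box_lo[k + 1:]
            for new_lo, new_hi in ((box_lo, left_hi), (right_lo, box_hi)):
                x, fx = sample_box(new_lo, new_hi)
                if fx < f_best:
                    x_best, f_best = x, fx
                heapq.heappush(heap, (bound(new_lo, new_hi, fx), new_lo, new_hi, fx))
        return x_best, f_best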

Pure random search is a very simple, 'folklore' approach to global optimization. GARS is a non-trivial improvement over that passive search approach in the sense that it adaptively reduces the search region, gradually focusing the global search effort on the region which, on the basis of the actual sample results, is estimated to contain the global solution point. This search option is especially recommended for problems with 'minimal' structure.
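
The minimal Python sketch below illustrates the adaptive idea in general terms: sample the box uniformly, then repeatedly re-center and shrink the sampling box around the best point found so far. The shrink factor and sample counts are arbitrary assumptions, and the code is a schematic illustration, not the GARS module itself.

    import random

    def adaptive_random_search(f, lo, hi, n_phases=10, samples_per_phase=200, shrink=0.5):
        """Schematic adaptive random search on the box [lo, hi]."""
        lo, hi = list(lo), list(hi)
        x_best, f_best = None, float("inf")
        box_lo, box_hi = lo[:], hi[:]
        for _ in range(n_phases):
            for _ in range(samples_per_phase):
                x = [random.uniform(a, b) for a, b in zip(box_lo, box_hi)]
                fx = f(x)
                if fx < f_best:
                    x_best, f_best = x, fx
            # Re-center a smaller box on the incumbent, clipped to the original bounds.
            half = [shrink * 0.5 * (b - a) for a, b in zip(box_lo, box_hi)]
            box_lo = [max(l, xb - h) for l, xb, h in zip(lo, x_best, half)]
            box_hi = [min(u, xb + h) for u, xb, h in zip(hi, x_best, half)]
        return x_best, f_best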

Multi-start (MS) based global search applies a search strategy similar to that of GARS; however, the total sampling effort is distributed among several searches, each of which leads to a 'promising' starting point for subsequent local search. This strategy is recommended in the presence of a (possibly large) number of competitive local optima. Typically, this approach takes the most computational effort (due to its multiple local searches); however, for complicated models, it often finds the best numerical solution. For this reason, MS is the recommended default LGO solver option.
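
The sketch below illustrates the multi-start idea under simple assumptions: a global sample is drawn, the best few points are kept as starting points, and a local solver is run from each. SciPy's Nelder-Mead solver is used here merely as a stand-in local method; it is not LGO's local search phase, which is described under LS below. The sample sizes are arbitrary.

    import random
    from scipy.optimize import minimize

    def multistart(f, lo, hi, n_global=500, n_starts=5):
        """Schematic multi-start search: global sampling plus several local searches."""
        sample = []
        for _ in range(n_global):
            x = [random.uniform(a, b) for a, b in zip(lo, hi)]
            sample.append((f(x), x))
        sample.sort(key=lambda t: t[0])
        best = None
        for f0, x0 in sample[:n_starts]:
            # Stand-in local solver; may step slightly outside the box.
            res = minimize(f, x0, method="Nelder-Mead")
            if best is None or res.fun < best.fun:
                best = res
        return best.x, best.fun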

Ideally, and also in many practical cases, the three global search methods outlined above will give the same answer, except for small numerical differences due to rounding errors. When solving complicated problems, the LGO user may wish to try all three global search options to see which gives the best results.

The local search phase (LS) is currently based on the generalized reduced gradient algorithm. This search strategy is locally precise; therefore, its use is recommended as the final search phase, following one of the global searches. Applying LS typically results in a 'polished' solution, since it attempts to attain precise feasibility and, at the same time, to improve the objective function value.
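
As a generic illustration of such a polishing step (not of the generalized reduced gradient method itself), the sketch below refines an approximate solution returned by a global phase with SciPy's SLSQP solver, which handles bounds and constraints. All problem data in the example are made up for illustration.

    from scipy.optimize import minimize

    def polish(f, x_global, bounds, constraints=()):
        """Refine a point from a global search with a constrained local solver."""
        res = minimize(f, x_global, method="SLSQP",
                       bounds=bounds, constraints=constraints)
        return res.x, res.fun

    # Example: refine an approximate solution subject to x0 + x1 >= 1.
    f = lambda x: (x[0] - 0.3) ** 2 + (x[1] - 0.7) ** 2
    con = {"type": "ineq", "fun": lambda x: x[0] + x[1] - 1.0}
    x_polished, f_polished = polish(f, [0.5, 0.5], [(0, 1), (0, 1)], [con])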

For more information about TOMLAB /LGO, see the TOMLAB /LGO User's Guide. The theoretical developments leading to LGO are described in the monograph 'Global Optimization in Action' (winner of the 2000 INFORMS Computing Society Prize); see http://www.springer.com/.

For user's guides to all TOMLAB products, see the Manual section.