conSolve

conSolve solves general dense or sparse constrained nonlinear optimization problems, with explicit handling of linear equality and inequality constraints, nonlinear equality and inequality constraints, and simple bounds on the variables.

conSolve implements the sequential quadratic programming (SQP) method of Schittkowski. If the QP subproblem fails to produce a search step, a gradient subspace step on an augmented Lagrangian merit function, with line search, is used instead. conSolve uses second-order information for the objective function and first-order information for the constraints, if available.
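
For orientation, a minimal sketch of calling conSolve through the standard TOMLAB conAssign/tomRun workflow follows. The objective, gradient, Hessian, and problem data below are illustrative assumptions, not part of conSolve itself; consult the TOMLAB manual for the full conAssign argument list.

    % Illustrative smooth objective with gradient and Hessian, so that
    % conSolve can use second-order information in the QP subproblems.
    f = @(x, Prob) x(1)^2 + 2*x(2)^2;
    g = @(x, Prob) [2*x(1); 4*x(2)];
    H = @(x, Prob) [2 0; 0 4];

    x_L = [-10; -10];                % lower bounds on the variables
    x_U = [ 10;  10];                % upper bounds on the variables
    x_0 = [  1;   1];                % starting point
    A   = [1 1]; b_L = 1; b_U = 1;   % linear equality: x(1) + x(2) = 1

    % conAssign(f, g, H, HessPattern, x_L, x_U, Name, x_0, pSepFunc,
    %           fLowBnd, A, b_L, b_U, ...)
    Prob = conAssign(f, g, H, [], x_L, x_U, 'conSolveDemo', x_0, ...
                     [], [], A, b_L, b_U);

    Result = tomRun('conSolve', Prob, 1);  % PriLev 1: one-line printout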

Two methods are implemented:

  • Schittkowski SQP method (default). Reference: Klaus Schittkowski, "On the Convergence of a Sequential Quadratic Programming Method with an Augmented Lagrangian Line Search Function", Systems Optimization Laboratory, Dept. of Operations Research, Stanford University, Stanford CA 94305, January 1982.

  • Han-Powell SQP method. Reference: see e.g. D. G. Luenberger, Linear and Nonlinear Programming (1984).
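
Assuming the usual TOMLAB convention of selecting the algorithm through the Prob.Solver.Alg field (the numbering below follows the conSolve help text as best understood; verify with help conSolve), the method is chosen before calling the solver:

    Prob.Solver.Alg = 0;                 % Schittkowski SQP (default)
    % Prob.Solver.Alg = 2;               % Han-Powell SQP
    Result = tomRun('conSolve', Prob, 1);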

  Main features:

  • The QP subproblems are generally indefinite when explicit Hessians are used. The QP solvers qpSolve or QPOPT, or the general solver MINOS, may be used to solve them; MINOS is more suitable for large problems.

  • The line search on the augmented Lagrangian merit function is a modified version of an algorithm by Fletcher (1987).

  • Safeguarded BFGS quasi-Newton updates are used when Hessian information is missing. The initial BFGS matrix may be supplied by the user if prior knowledge of the Hessian is available.

  • If missing, unknown gradients and constraint Jacobians are estimated using any of the TOMLAB methods (see the sketch after this list).

  • Either an analytic Hessian, a numerical Hessian, or a Hessian estimated by BFGS updates may be used with both methods. A numerical Hessian is generally less stable than the other two options, and if the gradients are also estimated numerically, a numerical Hessian should be avoided.

  • If a rank problem occurs in the merit function gradient steps, the solver uses subspace minimization techniques. A rank tolerance parameter may be set by the user (see the sketch after this list).
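
A hedged sketch of the user-settable pieces mentioned above, using standard fields of the TOMLAB Prob structure (the exact field names, in particular eps_Rank, are assumptions to verify against the conSolve help):

    Prob.NumDiff  = 1;                % estimate a missing gradient numerically
    Prob.ConsDiff = 1;                % same for a missing constraint Jacobian
    Prob.optParam.eps_Rank = 1e-11;   % rank tolerance in subspace minimization
    Result = tomRun('conSolve', Prob, 1);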
