
5  TOMLAB /NLPJOB Solver Reference

A detailed description of the TOMLAB /NLPJOB[2] solver interface is given below. Also see the M-file help for nlpjobTL.m. The 15 different possibilities for the scalar objective function are listed below.

5.1  nlpjobTL

Purpose
Solves multicriteria nonlinear programming problems.

NLPJOB solves problems of the form

    min   f(1,x), ..., f(L,x)
     x
    s/t   x_L <=  x   <= x_U
          b_L <=  Ax  <= b_U
          c_L <= c(x) <= c_U                                   (2)

where x, x_L, x_U ∈ R^n, A ∈ R^(m1 × n), b_L, b_U ∈ R^(m1) and c(x), c_L, c_U ∈ R^(m2).
L is the number of objective functions. For details on the scalar objective function, see the description of the different methods below.

Calling Syntax
Prob = clsAssign( ... );
Result = tomRun('nlpjob',Prob,...);
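
As an illustration, the sketch below sets up a small two-objective problem and passes it to NLPJOB. It assumes, as the clsAssign call above suggests, that the L objective functions are supplied as the residual vector of the cls format; the file names mc_r.m and mc_J.m, the example objectives, and the exact clsAssign argument order are assumptions to be checked against the TOMLAB User's Guide.

% Two objectives f(1,x), f(2,x), returned as a column vector by the
% (hypothetical) residual file mc_r.m:
%    function r = mc_r(x, Prob)
%    r = [ (x(1)-1)^2 + x(2)^2 ;  x(1)^2 + (x(2)-3)^2 ];
% with the corresponding Jacobian in mc_J.m.

x_L = [-10; -10];  x_U = [10; 10];  x_0 = [0; 0];   % bounds and starting point

% Assemble the problem in the cls format (argument order assumed from the
% standard clsAssign help):
Prob = clsAssign('mc_r', 'mc_J', [], x_L, x_U, 'MC Example', x_0);

% Choose the scalar transformation and its data (see Prob.NLPJOB below):
Prob.NLPJOB.model = 1;            % 1 = weighted sum of the objectives
Prob.NLPJOB.w     = [0.5; 0.5];   % non-negative weights W1, W2

Result = tomRun('nlpjob', Prob);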


Description of Inputs
Prob Problem description structure. The following fields are used:
 
  A Linear constraints coefficient matrix.
  x_L, x_U Bounds on variables.
  b_L, b_U Bounds on linear constraints.
  c_L, c_U Bounds on nonlinear constraints. For equality constraints (or fixed variables), set e.g. b_L(k) == b_U(k).
 
  PriLevOpt Print level in MEX interface.
 
  NLPJOB Structure with special fields for the NLPJOB solver:
 
  model Desired scalar transformation as indicated below.
 
  1 Weighted sum: The scalar objective function is the weighted sum of individual objectives, i.e., F(X) := W1*F1(X) + W2*F2(X) + ... + WL*FL(X) , where W1, ..., WL are non-negative weights given by the user.
 
  2 Hierarchical optimization method: The idea is to formulate a sequence of L scalar optimization problems with respect to the individual objective functions, subject to bounds on previously computed optimal values, i.e., we minimize F(X) := FI(X), I = 1,...,L, subject to the original constraints and the additional constraints FJ(X) <= (1+EJ/100)*FJ*, J = 1,...,I-1, where EJ is the coefficient of relative function increment given by the user and FJ* is the individual minimum of the J-th objective. It is assumed that the objective functions are ordered with respect to their importance.
 
  3 Trade-off method: One objective is selected by the user and the other ones are considered as constraints with respect to individual minima, i.e., F(X) := FI(X) is minimized subject to the original and some additional constraints of the form FJ(X) <= EJ , J=1,...,L , J <> I , where EJ is a bound value of the J-th objective function.
 
  4 Method of distance functions in L1-norm: A sum of absolute values of the differences of objective functions from predetermined goals Y1, ..., YL is minimized, i.e., F(X) := |F1(X)−Y1| + ... + |FL(X)−YL|. The goals are given by the user and their choice requires some knowledge about the ideal solution vector.
 
  5 Method of distance functions in L2-norm: A sum of squared differences of objective functions from predetermined goals Y1, ..., YL is minimized, i.e., F(X) := (F1(X)−Y1)^2 + ... + (FL(X)−YL)^2. Again the goals are provided by the user.
 
  6 Global criterion method: The scalar function to be minimized is the sum of relative distances of individual objectives from their known minimal values, i.e., F(X) := (F1(X)−F1*)/|F1*| + ... + (FL(X)−FL*)/|FL*|, where F1*, ..., FL* are the optimal function values obtained by minimizing F1(x), ..., FL(x) subject to the original constraints.
 
  7 Global criterion method in L2-norm: The scalar function to be minimized is the sum of squared relative distances of individual objectives from their known optimal values, i.e., F(X) := ((F1*−F1(X))/F1*)^2 + ... + ((FL*−FL(X))/FL*)^2, where F1*, ..., FL* are the individual optimal function values.
 
  8 Min-max method no. 1: The maximum of absolute values of all objectives is minimized, i.e., F(X) := MAX ( |FI(X)| , I=1,...,L )
 
  9 Min-max method no. 2: The maximum of all objectives is minimized, i.e., F(X) := MAX ( FI(X) , I=1,...,L )
 
  10 Min-max method no. 3: The maximum of absolute distances of objective function values from given goals Y1, ..., YL is minimized, i.e., F(X) := MAX ( |FI(X)−YI| , I=1,...,L ). The goals must be determined by the user.
 
  11 Min-max method no. 4: The maximum of relative distances of objective function values from the ideal values is minimized, i.e., F(X) := MAX ( (FI(X)−FI*)/|FI*| , I=1,...,L ), where FI* are the ideal (individually minimal) objective values.
 
  12 Min-max method no. 5: The maximum of weighted relative distances of objective function values from the individual minimal values FI* is minimized, i.e., F(X) := MAX ( WI*(FI(X)−FI*)/|FI*| , I=1,...,L ). Weights must be provided by the user.
 
  13 Min-max method no. 6: The maximum of weighted objective function values is minimized, i.e., F(X) := MAX ( WI*FI(X) , I=1,...,L ) Weights must be provided by the user.
 
  14 Weighted global criterion method: The scalar function to be minimized is the weighted sum of relative distances of individual objectives from their goals, i.e., F(X) := W1*(F1(X)−Y1)/|Y1| + ... + WL*(FL(X)−YL)/|YL|. The weights W1, ..., WL and the goals Y1, ..., YL must be set by the user.
 
  15 Weighted global criterion method in L2-norm: The scalar function to be minimized is the weighted sum of squared relative distances of individual objectives from their goals, i.e., F(X) := W1*((F1(X)−Y1)/Y1)^2 + ... + WL*((FL(X)−YL)/YL)^2. The weights W1, ..., WL and the goals Y1, ..., YL must be set by the user.
 
 
  imin If necessary (model = 2 or 3), imin defines the index of the objective function to be taken into account for the desired scalar transformation.
 
  maxf The integer variable defines an upper bound for the number of function calls during the line search (e.g. 20).
 
  maxit Maximum number of iterations, where one iteration corresponds to one formulation and solution of the quadratic programming subproblem, or, alternatively, one evaluation of gradients (e.g. 100).
 
  acc The user has to specify the desired final accuracy (e.g. 1.0e-7). The termination accuracy should not be much smaller than the accuracy by which gradients are computed.
 
  scbou The real variable allows an automatic scaling of the problem functions. If, at the starting point x_0, a function value is greater than SCBOU (e.g. 1.0E+3), then the function is divided by the square root of its value at x_0. If SCBOU is set to any negative number, then the objective function will be multiplied by the value stored in WA(MMAX+1) and the J-th constraint function by the value stored in WA(J), J=1,...,M.
 
  w Weight vector of dimension L, to be filled with suitable values when calling NLPJOB depending on the transformation model: MODEL=1,10,12,13,14,15 - weights, MODEL=2 - bounds, MODEL=3 - bounds for objective functions, MODEL=4,5 - goal values.
 
  fk For MODEL=2,6,7,11,12,14,15, FK has to contain the optimal values of the individual scalar subproblems when calling NLPJOB.
 
  PrintFile Name of NLPJOB Print file. Amount and type of printing determined by PriLevOpt.
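
The fields above might be combined as in the following sketch for the trade-off method (model 3); the numerical values and the interpretation of the w entry for the objective that stays in the objective role are assumptions.

% Continue from a Prob structure created by clsAssign (see Calling Syntax):
Prob.NLPJOB.model = 3;            % trade-off method, see the model list above
Prob.NLPJOB.imin  = 1;            % index I of the objective that is minimized
Prob.NLPJOB.w     = [0; 4.0];     % for model 3: bounds EJ on the other
                                  % objectives (entry imin assumed unused)

Prob.NLPJOB.maxit = 100;          % max iterations (QP subproblems / gradient evals)
Prob.NLPJOB.maxf  = 20;           % max function calls in the line search
Prob.NLPJOB.acc   = 1.0e-7;       % desired final accuracy
Prob.NLPJOB.PrintFile = 'nlpjob.txt';   % print file; amount set by PriLevOpt
Prob.PriLevOpt = 2;               % print level in the MEX interface

Result = tomRun('nlpjob', Prob);

For models that refer to the individual minima (model = 2, 6, 7, 11, 12, 14, 15), the vector Prob.NLPJOB.fk must first be filled with the optimal values of the individual scalar subproblems, as described for the fk field above.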
 

Description of Outputs
Result Structure with result from optimization. The following fields are set:
 
  f_k Function value at optimum.
  g_k Gradient of the function.
 
  x_k Solution vector.
  x_0 Initial solution vector.
 
  c_k Nonlinear constraint residuals.
  cJac Nonlinear constraint gradients.
 
  xState State of variables. Free == 0; On lower == 1; On upper == 2; Fixed == 3;
  bState State of linear constraints. Free == 0; Lower == 1; Upper == 2; Equality == 3;
  cState State of nonlinear constraints. Free == 0; Lower == 1; Upper == 2; Equality == 3;
 
  ExitFlag Exit status from NLPJOB MEX.
  ExitText Exit text from NLPJOB MEX.
  Inform NLPJOB information parameter.
 
  FuncEv Number of function evaluations.
  GradEv Number of gradient evaluations.
  ConstrEv Number of constraint evaluations.
  QP.B Basis vector in TOMLAB QP standard.
 
  Solver Name of the solver (NLPJOB).
  SolverAlgorithm Description of the solver.
 
  NLPJOB.u Contains the multipliers with respect to the actual iterate stored in X. The first M locations contain the multipliers of the nonlinear constraints, the subsequent N locations the multipliers of the lower bounds, and the final N locations the multipliers of the upper bounds subject to the scalar subproblem chosen. At an optimal solution, all multipliers with respect to inequality constraints should be nonnegative.
 
  NLPJOB.act Logical array indicating which constraints NLPJOB considers to be active at the last computed iterate.
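
A sketch of how the returned Result structure might be inspected; the convention that ExitFlag == 0 indicates a successful run is an assumption of this example.

% After Result = tomRun('nlpjob', Prob):
PrintResult(Result);              % standard TOMLAB result printout

x_opt = Result.x_k;               % solution vector
f_opt = Result.f_k;               % scalarized objective value at the optimum
c_res = Result.c_k;               % nonlinear constraint residuals

if Result.ExitFlag == 0           % assumed convention for success
   fprintf('NLPJOB converged: %s\n', Result.ExitText);
else
   fprintf('NLPJOB stopped (Inform = %d): %s\n', Result.Inform, Result.ExitText);
end

u   = Result.NLPJOB.u;            % multipliers: constraints, then lower/upper bounds
act = Result.NLPJOB.act;          % logical flags for constraints deemed active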
 

