APROS - (Paules, Ciric, Aggarwal, Kokossis, Floudas)
APROS (Algorithms for PROcess Synthesis) is an algorithmic development methodology for classes of mathematical programming problems that require some form of decomposition and involve extensive communication of data between subproblems whose sizes and structures may vary during an iterative solution procedure. Typical applications include most classes of mixed-integer nonlinear programming (MINLP) problems, variable partitioning algorithms that generate sequences of subproblems, large-scale mixed-integer linear programming (MILP) problems, and a wide variety of algorithms for large-scale MINLP, NLP, and LP systems that exhibit special structure.
The application of APROS to MINLP problems involves a number of key features that are common to most decomposition algorithms; it is the generality of these features, and their applicability to other classes of problems and algorithms, that makes APROS a very promising algorithmic development tool. These features, illustrated by the sketch that follows the list, include:
- Structural control of the dynamic NLP subproblem and MILP master problem models using static and dynamic sets.
- Transfer and storage of the solution levels of the continuous variables as well as the Lagrange multipliers of the equations from the NLP subproblems.
- Transfer and storage of proposed integer combinations between master and NLP subproblems and parameterization of the NLP subproblems according to the new integer combinations.
- Procedures for bounding the MILP master problems.
- Implementation of a stopping criterion.
- Conditional branching.
- Storage and reporting of optimal solution data.
- Looping of repeated blocks of algorithmic steps.
- Saving and restarting the problem.
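As a concrete illustration of the loop that these features support, the following minimal Python sketch (not APROS itself, which is implemented in GAMS) walks through a Generalized Benders Decomposition of a two-variable toy MINLP. The model, the helper names, and the closed-form/enumerative subproblem solutions are assumptions chosen purely for compactness; in APROS the NLP subproblems and the MILP master would be full GAMS models solved by external solvers.

```python
# Minimal Generalized Benders Decomposition loop for a toy MINLP:
#   min  x**2 + y   s.t.  x >= 1 - y,  x >= 0,  y in {0, 1}
# The NLP subproblem (y fixed) and the MILP master are small enough to solve
# in closed form / by enumeration; the point is the information flow:
# multipliers -> cuts -> new integer combination -> bounds -> stopping test.

def solve_nlp_subproblem(y_fixed):
    """For fixed y, min x**2 s.t. x >= 1 - y, x >= 0.
    Returns (objective, x*, multiplier of the constraint x >= 1 - y)."""
    x_star = max(0.0, 1.0 - y_fixed)
    lam = 2.0 * x_star          # KKT: 2x = lambda when the constraint is active
    return x_star**2 + y_fixed, x_star, lam

def benders_cut(y_fixed, x_star, lam):
    """Return the GBD cut eta >= L(x*, y, lambda) as a function of y."""
    return lambda y: x_star**2 + y + lam * ((1.0 - y) - x_star)

def solve_master(cuts):
    """Toy MILP master: min eta s.t. eta >= cut(y) for all cuts, y in {0, 1}.
    Solved by enumeration over the single binary variable."""
    best = None
    for y in (0, 1):
        eta = max(cut(y) for cut in cuts)
        if best is None or eta < best[0]:
            best = (eta, y)
    return best                  # (lower bound, proposed integer combination)

def gbd(y_start=0, tol=1e-6, max_iter=20):
    upper, cuts, y_current = float("inf"), [], y_start
    for it in range(1, max_iter + 1):
        obj, x_star, lam = solve_nlp_subproblem(y_current)
        upper = min(upper, obj)                  # NLP gives an upper bound
        cuts.append(benders_cut(y_current, x_star, lam))
        lower, y_current = solve_master(cuts)    # master gives a lower bound
        print(f"iter {it}: UB = {upper:.4f}, LB = {lower:.4f}, next y = {y_current}")
        if upper - lower <= tol:                 # stopping criterion
            break
    return upper

if __name__ == "__main__":
    print("optimal objective:", gbd())
```

The printout shows the upper bound from the NLP subproblem and the lower bound from the master tightening until the stopping criterion is met.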
APROS is written in the GAMS modeling language.
MINOPT - (Schweiger, Floudas)
MINOPT is a modeling language and algorithmic framework for the solution of various types of optimization problems. The types of problems MINOPT can solve are determined by the types of variables and constraints used in the model. MINOPT handles continuous time-invariant, continuous dynamic, and integer variables, and it recognizes linear, nonlinear, dynamic, dynamic point, and dynamic path constraints. The types of models MINOPT is capable of addressing are:
- Linear Programs (LP)
- Mixed Integer Linear Programs (MILP)
- Nonlinear Programs (NLP)
- NLPs with Dynamic Models (NLP/DAE)
- Mixed Integer Nonlinear Programs (MINLP)
- Dynamic Simulations
- MINLPs with Dynamic Models (MINLP/DAE)
- Optimal Control Problems (OCP)
- Mixed Integer Optimal Control Problems (MIOCP)
MINOPT employs appropriate algorithms and uses external software packages for the solution of the LP, MILP, and NLP problems as well as for the integration of the dynamic systems. For the solution of MINLPs, MINOPT employs the following algorithms:
- Generalized Benders Decomposition (GBD)
- Outer Approximation and Variants (OA, OA/ER, OA/ER/AP)
- Generalized Cross Decomposition (GCD)
For problems that involve differential equations, MINOPT employs a parametric method: the system of Differential and Algebraic Equations (DAEs) is integrated numerically, and the integration also determines the sensitivities of the dynamic variables with respect to the parameters of the DAE system. This allows the NLP/DAE problem to be formulated as an NLP with implicit constraints.
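The following short Python sketch (an illustration using SciPy, not MINOPT code) shows this pattern for a single ODE: the state is integrated together with its parameter sensitivity, and the outer NLP solver receives objective values and sensitivity-based gradients while the dynamics remain implicit. The model x' = -p x, the horizon, and the target value are all invented for the example.

```python
# Minimal sketch of the "NLP with implicit dynamic constraints" idea:
# the dynamics x' = -p*x, x(0) = 1 are integrated together with the
# sensitivity s = dx/dp (s' = -x - p*s, s(0) = 0); the outer NLP then
# adjusts the parameter p to hit a target value of x at the final time.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import minimize

T_FINAL, X_TARGET = 1.0, 0.5   # illustrative horizon and target

def integrate_with_sensitivity(p):
    """Integrate the augmented system [x, s] and return x(T), s(T) = dx(T)/dp."""
    def rhs(t, z):
        x, s = z
        return [-p * x, -x - p * s]
    sol = solve_ivp(rhs, (0.0, T_FINAL), [1.0, 0.0], rtol=1e-10, atol=1e-12)
    return sol.y[0, -1], sol.y[1, -1]

def objective_and_gradient(p_vec):
    """Objective (x(T) - target)^2 and its gradient via the sensitivity s(T)."""
    x_T, s_T = integrate_with_sensitivity(p_vec[0])
    resid = x_T - X_TARGET
    return resid**2, np.array([2.0 * resid * s_T])

result = minimize(objective_and_gradient, x0=[1.0], jac=True,
                  bounds=[(0.0, 10.0)], method="L-BFGS-B")
print("optimal p:", result.x[0])          # analytic answer: ln 2 ~ 0.6931
print("final x(T):", integrate_with_sensitivity(result.x[0])[0])
```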
The MINOPT code is written in ANSI C and has been developed with an expandable platform. Additional solvers and algorithms can be added to the program as they become available. The modeling language can be expanded to recognize additional variable types, constraint types, and commands should they be needed.
For more information about MINOPT, including how to obtain the program, visit the MINOPT Home Page.
GLOPEQ - (McDonald, Floudas)
GLOPEQ (GLobal Optimization for the Phase and chemical EQuilibrium problem) is a package written entirely in C for the computation of equilibrium solutions corresponding to a global minimum of the Gibbs free energy function. It can be used for systems of liquid phases whose behavior can be adequately modeled by any of the following equations:
- Wilson
- Modified Wilson
- NRTL
- UNIQUAC
- UNIFAC
- ASOG
With the exception of the Wilson equation, all of these models can predict liquid-phase splitting. Ideal vapor phases can also be incorporated.
GLOPEQ can be used to solve either of the following two problems:
- The minimization of the Gibbs free energy function (G)
- The minimization of the tangent plane distance function (S)
The recommended use of GLOPEQ combines problems (G) and (S). The basic structure of the algorithm is as follows: (i) solve (G) locally a number of times, as specified by the user; (ii) using the best local solution, solve (S) to verify whether the current solution is the global one; if it is, stop; otherwise, (iii) solve (G) globally for an equilibrium solution with a lower value of the Gibbs free energy and return to (ii). The procedure iterates until the stability check indicates that the global equilibrium solution has been obtained.
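The tangent plane distance test of step (ii) can be illustrated with a small Python example. This is not GLOPEQ code: GLOPEQ solves (S) to guaranteed global optimality for the models listed above, whereas the sketch below uses a simpler one-parameter Margules excess Gibbs energy model and a crude grid search, and the parameter value and feed composition are invented.

```python
# Tiny illustration of the tangent plane stability test (problem (S)):
# for a candidate phase of composition z, the tangent plane distance
#   TPD(x) = sum_i x_i * (mu_i(x) - mu_i(z)) / RT
# is minimized over trial compositions x; a negative minimum means the
# candidate phase is unstable and a lower-Gibbs-energy split exists.
# Binary mixture, one-parameter Margules model: ln gamma_1 = A*(1-x1)^2,
# ln gamma_2 = A*x1^2 (A > 2 produces liquid-liquid splitting).
import numpy as np

A = 2.5          # illustrative Margules interaction parameter
z1 = 0.5         # candidate (feed) composition to be tested

def reduced_potentials(x1):
    """mu_i/RT up to a common constant: ln(x_i * gamma_i)."""
    x2 = 1.0 - x1
    return np.log(x1) + A * x2**2, np.log(x2) + A * x1**2

def tpd(x1):
    mu1_x, mu2_x = reduced_potentials(x1)
    mu1_z, mu2_z = reduced_potentials(z1)
    return x1 * (mu1_x - mu1_z) + (1.0 - x1) * (mu2_x - mu2_z)

# Crude global search on a grid (GLOPEQ instead solves (S) to guaranteed
# global optimality); negative TPD anywhere proves instability of z.
grid = np.linspace(1e-6, 1.0 - 1e-6, 10001)
values = tpd(grid)
i_min = int(values.argmin())
print(f"min TPD = {values[i_min]:.4f} at x1 = {grid[i_min]:.4f}")
print("phase is", "unstable (will split)" if values[i_min] < -1e-8 else "stable")
```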
cGOP - (Visweswaran, Floudas)
cGOP is a package for rigorously solving nonconvex optimization problems to global optimality. It implements the GOP algorithm as a set of C subroutines that solve these problems through decomposition and branch-and-bound techniques. The package incorporates several improvements to the original GOP algorithm that reduce its computational complexity, as well as new formulations that permit implicit solution of some of the subproblems encountered during the algorithmic steps. The algorithms use local optimization solvers (currently MINOS and CPLEX) to solve the linear, mixed-integer linear, and convex subproblems. At present, the package can be used to solve problems with linear constraints. The algorithm and its variants can be accessed through calls to high-level subroutines, through a standalone mode for quadratic problems, or through a high-level interface that permits easy description of the problems.
cGOP is designed as a library of subroutines that can be called from any high level programming language. It has been written in portable ANSI C, and is currently available for the HP 9000, IBM RS6000, SUN4 and Silicon Graphics architectures. There are no restrictions on the sizes of problems that can be solved; the algorithms are only limited by the available memory and CPU resources on the machine on which the software is run.
cGOP can currently be used to solve problems of the following form:
where x and y are n1 and n2 vectors of variables, c,d,l and u are constant vectors, Q is a constant matrix and can be indefinite, and the functions F1(x) and F2(y) are smooth convex scalar functions. Note that some (or all) of the y variables can be binary variables; in this case, the function F2(y) must be linear.
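The problem statement itself does not survive in this text; judging from the symbol descriptions above and from the restriction to linear constraints, the intended form is approximately the following, where the constraint data A1, A2, and b are assumed names that do not appear in the original:

```latex
\begin{aligned}
\min_{x,\,y}\quad & c^{\top}x + d^{\top}y + x^{\top} Q\, y + F_1(x) + F_2(y) \\
\text{s.t.}\quad  & A_1 x + A_2 y \le b \\
                  & l \le (x,\,y) \le u
\end{aligned}
```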
The cGOP algorithm can be summarized as follows: at each node of the branch-and-bound tree, a primal problem is solved in the x variables to provide an upper bound on the global solution. The solution of this problem is used to construct a Lagrange function that underestimates the optimal solution of the original problem over the current domain of interest. The gradients of the Lagrange function are then used as a basis for partitioning this domain into subdomains. Subsequently, in each subdomain, a linear or convex lower bounding problem in y is solved. The solutions to these problems provide a set of child nodes for the current node, as well as a lower bound on the global solution. One of these child nodes is then selected as a candidate for further exploration, and the procedure is repeated. Because cuts accumulate from previous iterations, the upper and lower bounds are successively tightened until the algorithm converges to the global solution.
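To make the primal/relaxed-dual mechanics concrete, the sketch below (plain Python, not calls into the cGOP library) runs the procedure on an invented toy problem with one x variable, a bilinear objective, and only bound constraints. In this degenerate setting the Lagrange function reduces to the objective and the partition-based bound is exact after a single pass; in the general constrained case the multipliers from the primal and the cuts accumulated over iterations enter the Lagrange function, and many branch-and-bound iterations are needed.

```python
# One GOP-style pass on a toy bilinear problem with a single x variable:
#   min  f(x, y) = 2*x*y - x - 3*y   over  x in [0, 1], y in [0, 1].
# Primal (y fixed) gives an upper bound; the gradient of the Lagrange
# function with respect to x, here dL/dx = 2*y - 1, is linear in y and its
# sign partitions the y-domain; in each region x is set to the bound that
# minimizes L, and the resulting problem in y yields a lower bound.
X_LO, X_UP = 0.0, 1.0
Y_LO, Y_UP = 0.0, 1.0

def f(x, y):
    return 2.0 * x * y - x - 3.0 * y

def solve_primal(y_fixed):
    """f is linear in x for fixed y: pick the minimizing bound of x."""
    coeff = 2.0 * y_fixed - 1.0          # df/dx at fixed y
    x_star = X_LO if coeff >= 0.0 else X_UP
    return f(x_star, y_fixed), x_star

def solve_relaxed_dual():
    """Only bound constraints here, so the Lagrange function equals f.
    Region dL/dx <= 0 (y <= 0.5): x -> X_UP; region dL/dx >= 0: x -> X_LO.
    In each region f(x_fixed, y) is linear in y, so check the endpoints."""
    candidates = []
    for x_fixed, (lo, hi) in ((X_UP, (Y_LO, 0.5)), (X_LO, (0.5, Y_UP))):
        for y in (lo, hi):
            candidates.append((f(x_fixed, y), y))
    return min(candidates)               # (lower bound, minimizing y)

upper = float("inf")
y_bar = 0.5                              # arbitrary starting point
for it in (1, 2):
    primal_obj, x_bar = solve_primal(y_bar)
    upper = min(upper, primal_obj)
    lower, y_bar = solve_relaxed_dual()
    print(f"iteration {it}: UB = {upper:.3f}, LB = {lower:.3f}")
    if upper - lower < 1e-9:
        print("converged: global minimum", lower, "at x =", x_bar, "y =", y_bar)
        break
```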
You may download a beta-test version of cGOP as a 1.4 MB compressed tar file, or a 1.0 MB GNU-zipped tar file. The online version of the user's manual for cGOP (113 K compressed PostScript file) and the manual for cparse (50 K PostScript file) are also available.
alphaBB - (Adjiman, Androulakis, Maranas, Floudas)
alphaBB is a global optimization algorithm for general constrained nonconvex optimization problems involving twice-differentiable functions. It incorporates a novel convex lower bounding procedure within a branch-and-bound framework. General nonconvex terms are underestimated through quadratic functions derived from second-order information. In addition, a number of special underestimators have been incorporated into the package in order to take full advantage of any known mathematical properties of the functions.
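A one-dimensional Python sketch of this quadratic underestimation idea follows. It is only an illustration: the function sin(x), the interval, and the use of an exact second-derivative bound in place of the package's interval-Hessian machinery are simplifications, but the construction, adding alpha*(x^L - x)(x^U - x) with alpha at least half the magnitude of the most negative second derivative over the box, is the standard alphaBB underestimator.

```python
# One-dimensional illustration of the alpha-based convex underestimator:
#   L(x) = f(x) + alpha*(x_lo - x)*(x_up - x),  alpha >= max(0, -0.5*min f'')
# The quadratic term is non-positive on [x_lo, x_up], so L underestimates f,
# and a large enough alpha makes L'' = f'' + 2*alpha >= 0, i.e. L convex.
import numpy as np

x_lo, x_up = 0.0, 2.0 * np.pi

def f(x):                    # nonconvex term to be underestimated
    return np.sin(x)

def f_second(x):             # exact second derivative (stands in for an
    return -np.sin(x)        # interval-Hessian bound in the real package)

grid = np.linspace(x_lo, x_up, 2001)
alpha = max(0.0, -0.5 * f_second(grid).min())     # here alpha = 0.5

def underestimator(x):
    return f(x) + alpha * (x_lo - x) * (x_up - x)

L = underestimator(grid)
print("alpha =", alpha)
print("underestimation holds:", bool(np.all(L <= f(grid) + 1e-12)))
# discrete convexity check: second differences of L are non-negative
print("convexity holds:", bool(np.all(np.diff(L, 2) >= -1e-9)))
```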
The aim was to develop a user-friendly environment that allows the efficient optimization of twice-differentiable NLPs while maintaining simplicity and flexibility. Thus, when preparing the input file, the user can decide to use the very general quadratic underestimators, to use built-in special underestimators, or to specify his/her own lower bounding functions.
The structure of the algorithm allows for the following features:
- user friendly input environment for an easy development of process models
- automatic differentiation for deriving gradients as well as Hessian matrices
- ability to generate interval Hessian matrices and perform interval arithmetic
- implementation of various techniques for estimating bounds on the parameter alpha
- several alternatives for the construction of the lower bounding problem, depending on the mathematical characteristics of the terms appearing in the problem, so as to achieve the tightest possible convex underestimation. These include:
  - general nonconvex terms underestimated using the alpha parameter
  - bilinear terms underestimated using linear cuts
  - univariate concave terms underestimated through linearization
- interface with a local NLP solver (MINOS5.4)
- various branching strategies
- various variable bound-update strategies
Based on alphaBB, various other tools are being developed so as to:
- find all solutions of general nonlinear systems of equations
- solve to global optimality general MINLP problems