Pyomo Documentation 5.7.1.dev0¶
Pyomo is a Python-based, open-source optimization modeling language with a diverse set of optimization capabilities.
Installation¶
Pyomo currently supports the following versions of Python:
 CPython: 2.7, 3.4, 3.5, 3.6, 3.7, 3.8
 PyPy: 2, 3
Using CONDA¶
We recommend installation with conda, which is included with the Anaconda distribution of Python. You can install Pyomo in your system Python installation by executing the following in a shell:
conda install -c conda-forge pyomo
Optimization solvers are not installed with Pyomo, but some open source optimization solvers can be installed with conda as well:
conda install -c conda-forge ipopt glpk
Using PIP¶
The standard utility for installing Python packages is pip. You can install Pyomo in your system Python installation by executing the following in a shell:
pip install pyomo
Conditional Dependencies¶
Extensions to Pyomo, and many of the contributions in pyomo.contrib, also have conditional dependencies on a variety of third-party Python packages including but not limited to: numpy, scipy, sympy, networkx, openpyxl, pyodbc, xlrd, pandas, matplotlib, pymysql, pyro4, and pint. Pyomo extensions that require any of these packages will generate an error message for missing dependencies upon use.
Many of the conditional dependencies are already distributed with Anaconda. You can check which Python packages you already have installed using the command conda list or pip list. Additional Python packages may be installed as needed.
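A conditional dependency can also be checked programmatically without importing it; a minimal sketch using only the standard library:

```python
import importlib.util

def has_package(name):
    """Return True if `name` can be found by Python's import machinery."""
    return importlib.util.find_spec(name) is not None

print(has_package("math"))        # standard library module, always present
print(has_package("no_such_pkg")) # assumed absent in this sketch
```

This avoids the cost (and side effects) of actually importing the package just to test for its presence.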
Citing Pyomo¶
Pyomo¶
Hart, William E., Jean-Paul Watson, and David L. Woodruff. “Pyomo: modeling and solving mathematical programs in Python.” Mathematical Programming Computation 3, no. 3 (2011): 219–260.
Hart, William E., Carl Laird, Jean-Paul Watson, David L. Woodruff, Gabriel A. Hackebeil, Bethany L. Nicholson, and John D. Siirola. Pyomo – Optimization Modeling in Python. Springer, 2017.
PySP¶
Watson, Jean-Paul, David L. Woodruff, and William E. Hart. “PySP: modeling and solving stochastic programs in Python.” Mathematical Programming Computation 4, no. 2 (2012): 109–149.
Pyomo Overview¶
Mathematical Modeling¶
This section provides an introduction to Pyomo: Python Optimization Modeling Objects. A more complete description is contained in the [PyomoBookII] book. Pyomo supports the formulation and analysis of mathematical models for complex optimization applications. This capability is commonly associated with commercially available algebraic modeling languages (AMLs) such as [AMPL], [AIMMS], and [GAMS]. Pyomo’s modeling objects are embedded within Python, a full-featured, high-level programming language that contains a rich set of supporting libraries.
Modeling is a fundamental process in many aspects of scientific research, engineering and business. Modeling involves the formulation of a simplified representation of a system or real-world object. Thus, modeling tools like Pyomo can be used in a variety of ways:
 Explain phenomena that arise in a system,
 Make predictions about future states of a system,
 Assess key factors that influence phenomena in a system,
 Identify extreme states in a system that might represent worst-case scenarios or minimal cost plans, and
 Analyze tradeoffs to support human decision makers.
Mathematical models represent system knowledge with a formalized language. The following mathematical concepts are central to modern modeling activities:
Variables¶
Variables represent unknown or changing parts of a model (e.g., whether or not to make a decision, or the characteristic of a system outcome). The values taken by the variables are often referred to as a solution and are usually an output of the optimization process.
Parameters¶
Parameters represent the data that must be supplied to perform the optimization. In fact, in some settings the word data is used in place of the word parameters.
Relations¶
These are equations, inequalities or other mathematical relationships that define how different parts of a model are connected to each other.
Goals¶
These are functions that reflect goals and objectives for the system being modeled.
The widespread availability of computing resources has made the numerical analysis of mathematical models a commonplace activity. Without a modeling language, the process of setting up input files, executing a solver and extracting the final results from the solver output is tedious and error-prone. This difficulty is compounded in complex, large-scale real-world applications which are difficult to debug when errors occur. Additionally, there are many different formats used by optimization software packages, and few formats are recognized by many optimizers. Thus the application of multiple optimization solvers to analyze a model introduces additional complexities.
Pyomo is an AML that extends Python to include objects for mathematical modeling. [PyomoBookI], [PyomoBookII], and [PyomoJournal] compare Pyomo with other AMLs. Although many good AMLs have been developed for optimization models, the following are motivating factors for the development of Pyomo:
Open Source¶
Pyomo is developed within Pyomo’s open source project to promote transparency of the modeling framework and encourage community development of Pyomo capabilities.
Customizable Capability¶
Pyomo supports a customizable capability through the extensive use of plugins to modularize software components.
Solver Integration¶
Pyomo models can be optimized with solvers that are written either in Python or in compiled, low-level languages.
Programming Language¶
Pyomo leverages a high-level programming language, which has several advantages over custom AMLs: a very robust language, extensive documentation, a rich set of standard libraries, support for modern programming features like classes and functions, and portability to many platforms.
Overview of Modeling Components and Processes¶
Pyomo supports an objectoriented design for the definition of optimization models. The basic steps of a simple modeling process are:
 Create model and declare components
 Instantiate the model
 Apply solver
 Interrogate solver results
In practice, these steps may be applied repeatedly with different data or with different constraints applied to the model. However, we focus on this simple modeling process to illustrate different strategies for modeling with Pyomo.
A Pyomo model consists of a collection of modeling components that define different aspects of the model. Pyomo includes the modeling components that are commonly supported by modern AMLs: index sets, symbolic parameters, decision variables, objectives, and constraints. These modeling components are defined in Pyomo through the following Python classes:
Set¶
set data that is used to define a model instance
Param¶
parameter data that is used to define a model instance
Var¶
decision variables in a model
Objective¶
expressions that are minimized or maximized in a model
Constraint¶
constraint expressions that impose restrictions on variable values in a model
Abstract Versus Concrete Models¶
A mathematical model can be defined using symbols that represent data values. For example, the following equations represent a linear program (LP) to find optimal values for the vector \(x\) with parameters \(n\) and \(b\), and parameter vectors \(a\) and \(c\):
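Written out in the notation used throughout this section (reconstructed from the objective and constraint expressions that appear later), the model is:

\[\begin{array}{lll}
\min & \sum_{j=1}^{n} c_j x_j & \\
\mathrm{s.t.} & \sum_{j=1}^{n} a_{ij} x_j \geq b_i & \forall i = 1 \ldots m \\
& x_j \geq 0 & \forall j = 1 \ldots n
\end{array}\]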
Note
As a convenience, we use the symbol \(\forall\) to mean “for all” or “for each.”
We call this an abstract or symbolic mathematical model since it
relies on unspecified parameter values. Data values can be used to
specify a model instance. The AbstractModel
class provides a
context for defining and initializing abstract optimization models in
Pyomo when the data values will be supplied at the time a solution is to
be obtained.
In many contexts, a mathematical model can and should be directly defined with the data values supplied at the time of the model definition. We call these concrete mathematical models. For example, the following LP model is a concrete instance of the previous abstract model:
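With the data values written in directly, the instance is:

\[\begin{array}{ll}
\min & 2 x_1 + 3 x_2 \\
\mathrm{s.t.} & 3 x_1 + 4 x_2 \geq 1 \\
& x_1, x_2 \geq 0
\end{array}\]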
The ConcreteModel
class is used to define concrete optimization
models in Pyomo.
Note
Python programmers will probably prefer to write concrete models, while users of some other algebraic modeling languages may tend to prefer to write abstract models. The choice is largely a matter of taste; some applications may be a little more straightforward using one or the other.
Simple Models¶
A Simple Concrete Pyomo Model¶
It is possible to get the same flexible behavior from models declared to be abstract and models declared to be concrete in Pyomo; however, we will focus on a straightforward concrete example here where the data is hardwired into the model file. Python programmers will quickly realize that the data could have come from other sources.
Given the following model from the previous section:
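\[\begin{array}{ll}
\min & 2 x_1 + 3 x_2 \\
\mathrm{s.t.} & 3 x_1 + 4 x_2 \geq 1 \\
& x_1, x_2 \geq 0
\end{array}\]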
This can be implemented as a concrete model as follows:
import pyomo.environ as pyo
model = pyo.ConcreteModel()
model.x = pyo.Var([1,2], domain=pyo.NonNegativeReals)
model.OBJ = pyo.Objective(expr = 2*model.x[1] + 3*model.x[2])
model.Constraint1 = pyo.Constraint(expr = 3*model.x[1] + 4*model.x[2] >= 1)
Although rule functions can also be used to specify constraints and objectives, in this example we use the expr option that is available only in concrete models. This option gives a direct specification of the expression.
A Simple Abstract Pyomo Model¶
We repeat the abstract model from the previous section:
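\[\begin{array}{lll}
\min & \sum_{j=1}^{n} c_j x_j & \\
\mathrm{s.t.} & \sum_{j=1}^{n} a_{ij} x_j \geq b_i & \forall i = 1 \ldots m \\
& x_j \geq 0 & \forall j = 1 \ldots n
\end{array}\]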
One way to implement this in Pyomo is as follows:
from __future__ import division
import pyomo.environ as pyo
model = pyo.AbstractModel()
model.m = pyo.Param(within=pyo.NonNegativeIntegers)
model.n = pyo.Param(within=pyo.NonNegativeIntegers)
model.I = pyo.RangeSet(1, model.m)
model.J = pyo.RangeSet(1, model.n)
model.a = pyo.Param(model.I, model.J)
model.b = pyo.Param(model.I)
model.c = pyo.Param(model.J)
# the next line declares a variable indexed by the set J
model.x = pyo.Var(model.J, domain=pyo.NonNegativeReals)
def obj_expression(m):
    return pyo.summation(m.c, m.x)
model.OBJ = pyo.Objective(rule=obj_expression)
def ax_constraint_rule(m, i):
    # return the expression for the constraint for i
    return sum(m.a[i,j] * m.x[j] for j in m.J) >= m.b[i]
# the next line creates one constraint for each member of the set model.I
model.AxbConstraint = pyo.Constraint(model.I, rule=ax_constraint_rule)
Note
Python is interpreted one line at a time. A line continuation character, \ (backslash), is used for Python statements that need to span multiple lines. In Python, indentation has meaning and must be consistent. For example, lines inside a function definition must be indented and the end of the indentation is used by Python to signal the end of the definition.
We will now examine the lines in this example. The first import line is used to ensure that int or long division arguments are converted to floating point values before division is performed.
from __future__ import division
In Python versions before 3.0, division returns the floor of the mathematical result of division if arguments are int or long. This import line avoids unexpected behavior when developing mathematical models with integer values in Python 2.x (and is not necessary in Python 3.x).
The next import line is required in every Pyomo model. Its purpose is to make the symbols used by Pyomo known to Python.
import pyomo.environ as pyo
The declaration of a model is also required. The use of the name model is not required. Almost any name could be used, but we will use the name model in most of our examples. In this example, we are declaring that it will be an abstract model.
model = pyo.AbstractModel()
We declare the parameters \(m\) and \(n\) using the Pyomo Param component. This component can take a variety of arguments; this example illustrates use of the within option that is used by Pyomo to validate the data value that is assigned to the parameter. If this option were not given, then Pyomo would not object to any type of data being assigned to these parameters. As it is, assignment of a value that is not a nonnegative integer will result in an error.
model.m = pyo.Param(within=pyo.NonNegativeIntegers)
model.n = pyo.Param(within=pyo.NonNegativeIntegers)
Although not required, it is convenient to define index sets. In this example we use the RangeSet component to declare that the sets will be a sequence of integers starting at 1 and ending at a value specified by the parameters model.m and model.n.
model.I = pyo.RangeSet(1, model.m)
model.J = pyo.RangeSet(1, model.n)
The coefficient and right-hand-side data are defined as indexed parameters. When sets are given as arguments to the Param component, they indicate that the set will index the parameter.
model.a = pyo.Param(model.I, model.J)
model.b = pyo.Param(model.I)
model.c = pyo.Param(model.J)
The next line that is interpreted by Python as part of the model declares the variable \(x\). The first argument to the Var component is a set, so it is defined as an index set for the variable. In this case the variable has only one index set, but multiple sets could be used as was the case for the declaration of the parameter model.a. The second argument specifies a domain for the variable. This information is part of the model and will be passed to the solver when data is provided and the model is solved. Specification of the NonNegativeReals domain implements the requirement that the variables be greater than or equal to zero.
# the next line declares a variable indexed by the set J
model.x = pyo.Var(model.J, domain=pyo.NonNegativeReals)
Note
In Python, and therefore in Pyomo, any text after a pound sign is considered to be a comment.
In abstract models, Pyomo expressions are usually provided to objective and constraint declarations via a function defined with a Python def statement. The def statement establishes a name for a function along with its arguments. When Pyomo uses a function to get objective or constraint expressions, it always passes in the model (i.e., itself) as the first argument, so the model is always the first formal argument when declaring such functions in Pyomo. Additional arguments, if needed, follow. Since summation is an extremely common part of optimization models, Pyomo provides a flexible function to accommodate it. When given two arguments, the summation() function returns an expression for the sum of the product of the two arguments over their indexes. This only works, of course, if the two arguments have the same indexes. If it is given only one argument it returns an expression for the sum over all indexes of that argument. So in this example, when summation() is passed the arguments m.c, m.x it returns an internal representation of the expression \(\sum_{j=1}^{n}c_{j} x_{j}\).
def obj_expression(m):
    return pyo.summation(m.c, m.x)
To declare an objective function, the Pyomo component called Objective is used. The rule argument gives the name of a function that returns the objective expression. The default sense is minimization. For maximization, the sense=pyo.maximize argument must be used. The name that is declared, which is OBJ in this case, appears in some reports and can be almost any name.
model.OBJ = pyo.Objective(rule=obj_expression)
Declaration of constraints is similar. A function is declared to generate the constraint expression. In this case, there can be multiple constraints of the same form because we index the constraints by \(i\) in the expression \(\sum_{j=1}^n a_{ij} x_j \geq b_i \;\;\forall i = 1 \ldots m\), which states that we need a constraint for each value of \(i\) from one to \(m\). In order to parametrize the expression by \(i\) we include it as a formal parameter to the function that declares the constraint expression. Technically, we could have used anything for this argument, but that might be confusing. Using an i for an \(i\) seems sensible in this situation.
def ax_constraint_rule(m, i):
    # return the expression for the constraint for i
    return sum(m.a[i,j] * m.x[j] for j in m.J) >= m.b[i]
Note
In Python, indexes are in square brackets and function arguments are in parentheses.
In order to declare constraints that use this expression, we use the Pyomo Constraint component that takes a variety of arguments. In this case, our model specifies that we can have more than one constraint of the same form and we have created a set, model.I, over which these constraints can be indexed, so that is the first argument to the constraint declaration. The next argument gives the rule that will be used to generate expressions for the constraints. Taken as a whole, this constraint declaration says that a list of constraints indexed by the set model.I will be created and for each member of model.I, the function ax_constraint_rule will be called and it will be passed the model object as well as the member of model.I.
# the next line creates one constraint for each member of the set model.I
model.AxbConstraint = pyo.Constraint(model.I, rule=ax_constraint_rule)
In the object-oriented view of all of this, we would say that the model object is a class instance of the AbstractModel class, and model.J is a Set object that is contained by this model. Many modeling components in Pyomo can be optionally specified as indexed components: collections of components that are referenced using one or more values. In this example, the parameter model.c is indexed with set model.J.
In order to use this model, data must be given for the values of the parameters. Here is one file that provides data (in AMPL “.dat” format).
# one way to input the data in AMPL format
# for indexed parameters, the indexes are given before the value
param m := 1 ;
param n := 2 ;
param a :=
1 1 3
1 2 4
;
param c:=
1 2
2 3
;
param b := 1 1 ;
There are multiple formats that can be used to provide data to a Pyomo model, but the AMPL format works well for our purposes because it contains the names of the data elements together with the data. In AMPL data files, text after a pound sign is treated as a comment. Line breaks generally do not matter, but statements must be terminated with a semicolon.
For this particular data file, there is one constraint, so the value of model.m will be one and there are two variables (i.e., the vector model.x is two elements long) so the value of model.n will be two. These two assignments are accomplished with standard assignments. Notice that in AMPL format input, the name of the model is omitted.
param m := 1 ;
param n := 2 ;
There is only one constraint, so only two values are needed for model.a. When assigning values to arrays and vectors in AMPL format, one way to do it is to give the index(es) and then the value. The line 1 2 4 causes model.a[1,2] to get the value 4. Since model.c has only one index, only one index value is needed so, for example, the line 1 2 causes model.c[1] to get the value 2. Line breaks generally do not matter in AMPL format data files, so the assignment of the value for the single index of model.b is given on one line since that is easy to read.
param a :=
1 1 3
1 2 4
;
param c:=
1 2
2 3
;
param b := 1 1 ;
Symbolic Index Sets¶
When working with Pyomo (or any other AML), it is convenient to write abstract models in a somewhat more abstract way by using index sets that contain strings rather than index sets that are implied by \(1,\ldots,m\) or the summation from 1 to \(n\). When this is done, the size of the set is implied by the input, rather than specified directly. Furthermore, the index entries may have no real order. Often, a mixture of integers and strings as indexes is needed in the same model. To start with an illustration of general indexes, consider a slightly different Pyomo implementation of the model we just presented.
# abstract2.py
from __future__ import division
from pyomo.environ import *
model = AbstractModel()
model.I = Set()
model.J = Set()
model.a = Param(model.I, model.J)
model.b = Param(model.I)
model.c = Param(model.J)
# the next line declares a variable indexed by the set J
model.x = Var(model.J, domain=NonNegativeReals)
def obj_expression(model):
    return summation(model.c, model.x)
model.OBJ = Objective(rule=obj_expression)
def ax_constraint_rule(model, i):
    # return the expression for the constraint for i
    return sum(model.a[i,j] * model.x[j] for j in model.J) >= model.b[i]
# the next line creates one constraint for each member of the set model.I
model.AxbConstraint = Constraint(model.I, rule=ax_constraint_rule)
To get the same instantiated model, the following data file can be used.
# abstract2a.dat AMPL format
set I := 1 ;
set J := 1 2 ;
param a :=
1 1 3
1 2 4
;
param c:=
1 2
2 3
;
param b := 1 1 ;
However, this model can also be fed different data for problems of the same general form using meaningful indexes.
# abstract2.dat AMPL data format
set I := TV Film ;
set J := Graham John Carol ;
param a :=
TV Graham 3
TV John 4.4
TV Carol 4.9
Film Graham 1
Film John 2.4
Film Carol 1.1
;
param c := [*]
Graham 2.2
John 3.1416
Carol 3
;
param b := TV 1 Film 1 ;
Solving the Simple Examples¶
Pyomo supports modeling and scripting but does not install a solver automatically. In order to solve a model, there must be a solver installed on the computer to be used. If there is a solver, then the pyomo command can be used to solve a problem instance.
Suppose that the solver named glpk (also known as glpsol) is installed on the computer. Suppose further that an abstract model is in the file named abstract1.py and a data file for it is in the file named abstract1.dat. From the command prompt, with both files in the current directory, a solution can be obtained with the command:
pyomo solve abstract1.py abstract1.dat --solver=glpk
Since glpk is the default solver, there really is no need to specify it, so the --solver option can be dropped.
Note
There are two dashes before the command line option names such as --solver.
To continue the example, if CPLEX is installed then it can be listed as the solver. The command to solve with CPLEX is
pyomo solve abstract1.py abstract1.dat --solver=cplex
This yields the following output on the screen:
[ 0.00] Setting up Pyomo environment
[ 0.00] Applying Pyomo preprocessing actions
[ 0.07] Creating model
[ 0.15] Applying solver
[ 0.37] Processing results
Number of solutions: 1
Solution Information
Gap: 0.0
Status: optimal
Function Value: 0.666666666667
Solver results file: results.json
[ 0.39] Applying Pyomo postprocessing actions
[ 0.39] Pyomo Finished
The numbers in square brackets indicate how much time was required for each step. Results are written to the file named results.json, which has a special structure that makes it useful for post-processing. To see a summary of results written to the screen, use the --summary option:
pyomo solve abstract1.py abstract1.dat --solver=cplex --summary
To see a list of Pyomo command line options, use:
pyomo solve --help
Note
There are two dashes before --help.
For a concrete model, no data file is specified on the Pyomo command line.
Pyomo Modeling Components¶
Sets¶
Declaration¶
Sets can be declared using instances of the Set and RangeSet classes or by assigning set expressions. The simplest set declaration creates a set and postpones creation of its members:
model.A = pyo.Set()
The Set class takes optional arguments such as:
 dimen = Dimension of the members of the set
 doc = String describing the set
 filter = A Boolean function used during construction to indicate if a potential new member should be assigned to the set
 initialize = An iterable containing the initial members of the Set, or a function that returns an iterable of the initial members of the set
 ordered = A Boolean indicator that the set is ordered; the default is True
 validate = A Boolean function that validates new member data
 within = Set used for validation; it is a superset of the set being declared
In general, Pyomo attempts to infer the “dimensionality” of Set components (that is, the number of apparent indices) when they are constructed. However, there are situations where Pyomo either cannot detect a dimensionality (e.g., a Set that was not initialized with any members), or you the user may want to assert the dimensionality of the set. This can be accomplished through the dimen keyword. For example, to create a set whose members will be tuples with two items, one could write:
model.B = pyo.Set(dimen=2)
To create a set of all the numbers in set model.A doubled, one could use
def DoubleA_init(model):
    return (i*2 for i in model.A)
model.C = pyo.Set(initialize=DoubleA_init)
As an aside we note that, as always in Python, there are many ways to accomplish the same thing. Also, note that this will generate an error if model.A contains elements for which multiplication by two is not defined.
The initialize option can accept any Python iterable, including a set, list, or tuple. This data may be returned from a function or specified directly as in
model.D = pyo.Set(initialize=['red', 'green', 'blue'])
The initialize option can also accept a generator or a function that provides the Set members. In the case of a generator, all data yielded by the generator will become the initial set members:
def X_init(m):
    for i in range(10):
        yield 2*i+1
model.X = pyo.Set(initialize=X_init)
For initialization functions, Pyomo supports two signatures. In the first, the function returns an iterable (set, list, or tuple) containing the data with which to initialize the Set:
def Y_init(m):
    return [2*i+1 for i in range(10)]
model.Y = pyo.Set(initialize=Y_init)
In the second signature, the function is called for each element, passing the element number in as an extra argument. This is repeated until the function returns the special value Set.End:
def Z_init(model, i):
    if i > 10:
        return pyo.Set.End
    return 2*i+1
model.Z = pyo.Set(initialize=Z_init)
Note that the element number starts with 1 and not 0:
>>> model.X.pprint()
X : Size=1, Index=None, Ordered=Insertion
Key : Dimen : Domain : Size : Members
None : 1 : Any : 10 : {1, 3, 5, 7, 9, 11, 13, 15, 17, 19}
>>> model.Y.pprint()
Y : Size=1, Index=None, Ordered=Insertion
Key : Dimen : Domain : Size : Members
None : 1 : Any : 10 : {1, 3, 5, 7, 9, 11, 13, 15, 17, 19}
>>> model.Z.pprint()
Z : Size=1, Index=None, Ordered=Insertion
Key : Dimen : Domain : Size : Members
None : 1 : Any : 10 : {3, 5, 7, 9, 11, 13, 15, 17, 19, 21}
Additional information about iterators for set initialization is in the [PyomoBookII] book.
Note
For Abstract models, data specified in an input file or through the data argument to AbstractModel.create_instance() will override the data specified by the initialize options.
If sets are given as arguments to Set without keywords, they are interpreted as indexes for an array of sets. For example, to create an array of sets that is indexed by the members of the set model.A, use:
model.E = pyo.Set(model.A)
Arguments can be combined. For example, to create an array of sets, indexed by set model.A where each set contains three-dimensional members, use:
model.F = pyo.Set(model.A, dimen=3)
The initialize option can be used to create a set that contains a sequence of numbers, but the RangeSet class provides a concise mechanism for simple sequences. This class takes as its arguments a start value, a final value, and a step size. If the RangeSet has only a single argument, then that value defines the final value in the sequence; the first value and step size default to one. If two values are given, they are the first and last values in the sequence and the step size defaults to one. For example, the following declaration creates a set with the numbers 1.5, 5 and 8.5:
model.G = pyo.RangeSet(1.5, 10, 3.5)
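The members generated by a declaration of this kind can be sketched in plain Python (start value, final value, step size, as described above; the function name is made up for the example):

```python
def rangeset_members(start, final, step=1):
    """Enumerate start, start+step, ... while not exceeding final."""
    members = []
    value = start
    while value <= final:
        members.append(value)
        value += step
    return members

print(rangeset_members(1.5, 10, 3.5))  # [1.5, 5.0, 8.5]
```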
Operations¶
Sets may also be created by storing the result of set operations using other Pyomo sets. Pyomo supports set operations including union, intersection, difference, and symmetric difference:
model.I = model.A | model.D # union
model.J = model.A & model.D # intersection
model.K = model.A - model.D # difference
model.L = model.A ^ model.D # exclusive-or
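These operators behave the same way on ordinary Python sets, which is a convenient way to check one's intuition:

```python
A = {1, 2, 3, 4}
D = {3, 4, 5}

print(A | D)  # union: {1, 2, 3, 4, 5}
print(A & D)  # intersection: {3, 4}
print(A - D)  # difference: {1, 2}
print(A ^ D)  # exclusive-or: {1, 2, 5}
```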
The cross-product operator is the asterisk (*). To define a new set M that is the cross product of sets B and C, one could use
model.M = model.B * model.C
This creates a virtual set that holds references to the original sets, so any updates to the original sets (B and C) will be reflected in the new set (M). In contrast, you can also create a concrete set, which directly stores the values of the cross product at the time of creation and will not reflect subsequent changes in the original sets, with:
model.M_concrete = pyo.Set(initialize=model.B * model.C)
Finally, to indicate that the members of a set are restricted to be in the cross product of two other sets, one can use the within keyword:
model.N = pyo.Set(within=model.B * model.C)
Predefined Virtual Sets¶
For use in specifying domains for sets, parameters and variables, Pyomo provides the following predefined virtual sets:
 Any = all possible values
 Reals = floating point values
 PositiveReals = strictly positive floating point values
 NonPositiveReals = non-positive floating point values
 NegativeReals = strictly negative floating point values
 NonNegativeReals = non-negative floating point values
 PercentFraction = floating point values in the interval [0,1]
 UnitInterval = alias for PercentFraction
 Integers = integer values
 PositiveIntegers = positive integer values
 NonPositiveIntegers = non-positive integer values
 NegativeIntegers = negative integer values
 NonNegativeIntegers = non-negative integer values
 Boolean = Boolean values, which can be represented as False/True, 0/1, ’False’/’True’ and ’F’/’T’
 Binary = the integers {0, 1}
For example, if the set model.O is declared to be within the virtual set NegativeIntegers then an attempt to add anything other than a negative integer will result in an error. Here is the declaration:
model.O = pyo.Set(within=pyo.NegativeIntegers)
Sparse Index Sets¶
Sets provide indexes for parameters, variables and other sets. Index set issues are important for modelers in part because of efficiency considerations, but primarily because the right choice of index sets can result in very natural formulations that are conducive to understanding and maintenance. Pyomo leverages Python to provide a rich collection of options for index set creation and use.
The choice of how to represent indexes often depends on the application and the nature of the instance data that are expected. To illustrate some of the options and issues, we will consider problems involving networks. In many network applications, it is useful to declare a set of nodes, such as
model.Nodes = pyo.Set()
and then a set of arcs can be created with reference to the nodes.
Consider the following simple version of the minimum cost flow problem:
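Reconstructed from the component list and the flow balance rule used below, the model is:

\[\begin{array}{lll}
\min & \sum_{(i,j) \in \mathcal{A}} c_{i,j} x_{i,j} & \\
\mathrm{s.t.} & S_{n} + \sum_{(i,n) \in \mathcal{A}} x_{i,n} - D_{n} - \sum_{(n,j) \in \mathcal{A}} x_{n,j} = 0 & \forall n \in \mathcal{N} \\
& x_{i,j} \geq 0 & \forall (i,j) \in \mathcal{A}
\end{array}\]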
where
 Set: Nodes \(\equiv \mathcal{N}\)
 Set: Arcs \(\equiv \mathcal{A} \subseteq \mathcal{N} \times \mathcal{N}\)
 Var: Flow on arc \((i,j)\) \(\equiv x_{i,j},\; (i,j) \in \mathcal{A}\)
 Param: Flow Cost on arc \((i,j)\) \(\equiv c_{i,j},\; (i,j) \in \mathcal{A}\)
 Param: Demand at node \(i\) \(\equiv D_{i},\; i \in \mathcal{N}\)
 Param: Supply at node \(i\) \(\equiv S_{i},\; i \in \mathcal{N}\)
In the simplest case, the arcs can just be the cross product of the nodes, which is accomplished by the definition
model.Arcs = model.Nodes*model.Nodes
that creates a set with two-dimensional members. For applications where all nodes are always connected to all other nodes this may suffice. However, issues can arise when the network is not fully dense. For example, the burden of avoiding flow on arcs that do not exist falls on the data file, where high-enough costs must be provided for those arcs. Such a scheme is not very elegant or robust.
For many network flow applications, it might be better to declare the arcs using
model.Arcs = pyo.Set(dimen=2)
or
model.Arcs = pyo.Set(within=model.Nodes*model.Nodes)
where the difference is that the first version will provide error checking as data is assigned to the set elements. This would enable specification of a sparse network in a natural way. But this results in a need to change the FlowBalance constraint because as it was written in the simple example, it sums over the entire set of nodes for each node. One way to remedy this is to sum only over the members of the set model.Arcs as in
def FlowBalance_rule(m, node):
    return m.Supply[node] \
        + sum(m.Flow[i, node] for i in m.Nodes if (i, node) in m.Arcs) \
        - m.Demand[node] \
        - sum(m.Flow[node, j] for j in m.Nodes if (node, j) in m.Arcs) \
        == 0
This will be OK unless the number of nodes becomes very large for a sparse network, in which case the time to generate this constraint might become an issue (admittedly, only for very large networks, but such networks do exist).
Another method, which comes in handy in many network applications, is to have a set for each node that contains the nodes at the other end of arcs going into the node at hand, and another set giving the nodes on outgoing arcs. If these sets are called model.NodesIn and model.NodesOut respectively, then the flow balance rule can be rewritten as
def FlowBalance_rule(m, node):
    return m.Supply[node] \
        + sum(m.Flow[i, node] for i in m.NodesIn[node]) \
        - m.Demand[node] \
        - sum(m.Flow[node, j] for j in m.NodesOut[node]) \
        == 0
The data for NodesIn and NodesOut could be added to the input file, and this may be the most efficient option.
For all but the largest networks, rather than reading Arcs, NodesIn and NodesOut from a data file, it might be more elegant to read only Arcs from a data file and declare model.NodesIn with an initialize option specifying the creation as follows:
def NodesIn_init(m, node):
    for i, j in m.Arcs:
        if j == node:
            yield i

model.NodesIn = pyo.Set(model.Nodes, initialize=NodesIn_init)
with a similar definition for model.NodesOut. This code creates a list of sets for NodesIn, one set of nodes for each node. The full model is:
import pyomo.environ as pyo

model = pyo.AbstractModel()

model.Nodes = pyo.Set()
model.Arcs = pyo.Set(dimen=2)

def NodesOut_init(m, node):
    for i, j in m.Arcs:
        if i == node:
            yield j

model.NodesOut = pyo.Set(model.Nodes, initialize=NodesOut_init)

def NodesIn_init(m, node):
    for i, j in m.Arcs:
        if j == node:
            yield i

model.NodesIn = pyo.Set(model.Nodes, initialize=NodesIn_init)

model.Flow = pyo.Var(model.Arcs, domain=pyo.NonNegativeReals)
model.FlowCost = pyo.Param(model.Arcs)

model.Demand = pyo.Param(model.Nodes)
model.Supply = pyo.Param(model.Nodes)

def Obj_rule(m):
    return pyo.summation(m.FlowCost, m.Flow)

model.Obj = pyo.Objective(rule=Obj_rule, sense=pyo.minimize)

def FlowBalance_rule(m, node):
    return m.Supply[node] \
        + sum(m.Flow[i, node] for i in m.NodesIn[node]) \
        - m.Demand[node] \
        - sum(m.Flow[node, j] for j in m.NodesOut[node]) \
        == 0

model.FlowBalance = pyo.Constraint(model.Nodes, rule=FlowBalance_rule)
For this model, a toy data file (in AMPL “.dat” format) would be:
set Nodes := CityA CityB CityC ;
set Arcs :=
CityA CityB
CityA CityC
CityC CityB
;
param : FlowCost :=
CityA CityB 1.4
CityA CityC 2.7
CityC CityB 1.6
;
param Demand :=
CityA 0
CityB 1
CityC 1
;
param Supply :=
CityA 2
CityB 0
CityC 0
;
This can also be done somewhat more efficiently, and perhaps more clearly, using a BuildAction (for more information, see BuildAction and BuildCheck):
model.NodesOut = pyo.Set(model.Nodes, within=model.Nodes)
model.NodesIn = pyo.Set(model.Nodes, within=model.Nodes)

def Populate_In_and_Out(model):
    # loop over the arcs and record the end points
    for i, j in model.Arcs:
        model.NodesIn[j].add(i)
        model.NodesOut[i].add(j)

model.In_n_Out = pyo.BuildAction(rule=Populate_In_and_Out)
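The population logic above is ordinary Python. As a standalone sketch (independent of Pyomo, using a hypothetical three-city arc list), it amounts to building two adjacency maps from the arc set:

```python
# Sketch of the NodesIn/NodesOut construction using plain Python data
# structures; the arc list below is hypothetical example data.
arcs = [("CityA", "CityB"), ("CityA", "CityC"), ("CityC", "CityB")]
nodes = {"CityA", "CityB", "CityC"}

nodes_in = {n: set() for n in nodes}   # nodes with an arc into n
nodes_out = {n: set() for n in nodes}  # nodes with an arc out of n
for i, j in arcs:
    nodes_in[j].add(i)
    nodes_out[i].add(j)

print(sorted(nodes_in["CityB"]))   # arcs into CityB come from CityA and CityC
print(sorted(nodes_out["CityA"]))  # arcs out of CityA go to CityB and CityC
```

Only the sparse arc list is touched, which is why this approach scales better than scanning the full node set for each node.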
Sparse Index Sets Example¶
One may want to have a constraint that holds

\[x_{i,k,v} \leq a_{i,k} y_{i} \;\;\forall\; i \in I, k \in K, v \in V_k\]
There are many ways to accomplish this, but one good way is to create a set of tuples composed of all (k, v) pairs with k in model.K and v in model.V[k]. This can be done as follows:
def kv_init(m):
    return ((k, v) for k in m.K for v in m.V[k])

model.KV = pyo.Set(dimen=2, initialize=kv_init)
We can now create the constraint \(x_{i,k,v} \leq a_{i,k}y_i \;\forall\; i \in I, k \in K, v \in V_k\) with:
model.a = pyo.Param(model.I, model.K, default=1)
model.y = pyo.Var(model.I)
model.x = pyo.Var(model.I, model.KV)

def c1_rule(m, i, k, v):
    return m.x[i, k, v] <= m.a[i, k] * m.y[i]

model.c1 = pyo.Constraint(model.I, model.KV, rule=c1_rule)
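The flattening performed by kv_init is plain Python. A minimal standalone sketch, with a hypothetical mapping V from each k to its member set, shows how only the (k, v) pairs that actually exist are produced:

```python
# Sketch of the sparse-tuple construction used by kv_init, with a
# hypothetical mapping from each k to its set V[k].
K = [1, 2]
V = {1: ["a", "b"], 2: ["b"]}

# Same double loop as kv_init, collected into a list of 2-tuples
KV = [(k, v) for k in K for v in V[k]]
print(KV)  # only the (k, v) pairs that actually exist
```

Indexing a constraint by this sparse tuple set avoids generating members for (k, v) combinations that are not in the model.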
Parameters¶
The word “parameters” is used in many settings. When discussing a Pyomo model, we use the word to refer to data that must be provided in order to find an optimal (or good) assignment of values to the decision variables. Parameters are declared as instances of a Param class, which takes arguments that are somewhat similar to the Set class. For example, the following code snippet declares sets model.A and model.B, and then a parameter model.P that is indexed by model.A and model.B:
model.A = pyo.RangeSet(1,3)
model.B = pyo.Set()
model.P = pyo.Param(model.A, model.B)
In addition to sets that serve as indexes, Param takes the following options:

 default = The parameter value absent any other specification.
 doc = A string describing the parameter.
 initialize = A function (or Python object) that returns data used to initialize the parameter values.
 mutable = Boolean value indicating if the Param values are allowed to change after the Param is initialized.
 validate = A callback function that takes the model, proposed value, and indices of the proposed value; returning True if the value is valid. Returning False will generate an exception.
 within = Set used for validation; it specifies the domain of valid parameter values.
These options perform in the same way as they do for Set. For example, given model.A with values {1, 2, 3}, there are many ways to create a parameter that represents a square matrix with 9, 16, 25 on the main diagonal and zeros elsewhere. Here are two ways to do it, first using a Python object to initialize:
v={}
v[1,1] = 9
v[2,2] = 16
v[3,3] = 25
model.S1 = pyo.Param(model.A, model.A, initialize=v, default=0)
And now using an initialization function that is automatically called once for each index tuple (remember that we are assuming that model.A contains {1, 2, 3}):
def s_init(model, i, j):
    if i == j:
        return (i + 2) * (i + 2)
    else:
        return 0.0

model.S2 = pyo.Param(model.A, model.A, initialize=s_init)
In this example, the index set contained integers, but index sets need not be numeric. It is very common to use strings.
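The two initialization styles above can be checked against each other in plain Python. This sketch (independent of Pyomo) builds the same diagonal data both ways:

```python
# Build the same 3x3 diagonal data (9, 16, 25 on the diagonal) both
# ways, mimicking the dict-based and rule-based Param initializations.
A = [1, 2, 3]

# Style 1: a sparse dict plus a default of 0 for missing indices
v = {(1, 1): 9, (2, 2): 16, (3, 3): 25}
S1 = {(i, j): v.get((i, j), 0) for i in A for j in A}

# Style 2: a rule called once per index tuple
def s_init(i, j):
    return (i + 2) * (i + 2) if i == j else 0

S2 = {(i, j): s_init(i, j) for i in A for j in A}

print(S1 == S2)  # the two styles produce identical data
```

The dict style only stores the nonzero entries and relies on default= for the rest; the rule style computes every entry on demand.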
Note
Data specified in an input file will override the data specified by the initialize option.
Parameter values can be checked by a validation function. In the following example, every value of the parameter T (indexed by model.A) is checked to be greater than 3.14159. If a value is provided that is less than that, the model instantiation will be terminated and an error message issued. The validation function should be written so as to return True if the data is valid and False otherwise.
t_data = {1: 10, 2: 3, 3: 20}

def t_validate(model, v, i):
    return v > 3.14159

model.T = pyo.Param(model.A, validate=t_validate, initialize=t_data)
This example will produce the following error, indicating that the value provided for T[2] failed validation:
Traceback (most recent call last):
...
ValueError: Invalid parameter value: T[2] = '3', value type=<class 'int'>.
Value failed parameter validation rule
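The mechanics behind a validate callback can be sketched without Pyomo: conceptually, the component applies the callback to each proposed (value, index) pair and raises on the first failure. The load_param helper below is hypothetical, not Pyomo's internals, and the error text is illustrative:

```python
# Plain-Python sketch of how a validate callback is applied.
def t_validate(v, i):
    return v > 3.14159

def load_param(data, validate):
    # check every proposed value against the callback before accepting it
    for i, v in data.items():
        if not validate(v, i):
            raise ValueError(f"Invalid parameter value: T[{i}] = {v!r}")
    return dict(data)

try:
    load_param({1: 10, 2: 3, 3: 20}, t_validate)
except ValueError as e:
    print(e)  # reports the failing entry T[2]
```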
Variables¶
Variables are intended to ultimately be given values by an optimization
package. They are declared and optionally bounded, given initial values,
and documented using the Pyomo Var
function. If index sets are given
as arguments to this function they are used to index the variable. Other
optional directives include:
 bounds = A function (or Python object) that gives a (lower, upper) bound pair for the variable
 domain = A set that is a superset of the values the variable can take on.
 initialize = A function (or Python object) that gives a starting value for the variable; this is particularly important for non-linear models
 within = (synonym for domain)
The following code snippet illustrates some aspects of these options by declaring a singleton (i.e. unindexed) variable named model.LumberJack that will take on real values between zero and 6 and is initialized to be 1.5:
model.LumberJack = Var(within=NonNegativeReals, bounds=(0,6), initialize=1.5)
Instead of the initialize option, initialization is sometimes done with a Python assignment statement, as in

model.LumberJack = 1.5
For indexed variables, bounds and initial values are often specified by a rule (a Python function) that itself may make reference to parameters or other data. The formal arguments to these rules begin with the model followed by the indexes. This is illustrated in the following code snippet that makes use of Python dictionaries declared as lb and ub that are used by a function to provide bounds:
model.A = Set(initialize=['Scones', 'Tea'])
lb = {'Scones': 2, 'Tea': 4}
ub = {'Scones': 5, 'Tea': 7}

def fb(model, i):
    return (lb[i], ub[i])

model.PriceToCharge = Var(model.A, domain=PositiveIntegers, bounds=fb)
Note
Many of the predefined virtual sets that are used as domains imply bounds. A strong example is the set Boolean, which implies bounds of zero and one.
Objectives¶
An objective is a function of variables that returns a value that an
optimization package attempts to maximize or minimize. The Objective
function in Pyomo declares an objective. Although other mechanisms are
possible, this function is typically passed the name of another function
that gives the expression. Here is a very simple version of such a
function that assumes model.x
has previously been declared as a
Var
:
>>> def ObjRule(model):
...     return 2*model.x[1] + 3*model.x[2]
>>> model.obj1 = pyo.Objective(rule=ObjRule)
It is more common for an objective function to refer to parameters, as in this example that assumes that model.p has been declared as a Param and that model.x has been declared with the same index set, while model.y has been declared as a singleton:
>>> def ObjRule(model):
...     return pyo.summation(model.p, model.x) + model.y
>>> model.obj2 = pyo.Objective(rule=ObjRule, sense=pyo.maximize)
This example uses the sense option to specify maximization. The default sense is minimize.
Constraints¶
Most constraints are specified using equality or inequality expressions that are created using a rule, which is a Python function. For example, if the variable model.x has the indexes ‘butter’ and ‘scones’, then this constraint limits the sum over these indexes to be exactly three:
def teaOKrule(model):
    return model.x['butter'] + model.x['scones'] == 3

model.TeaConst = Constraint(rule=teaOKrule)
Instead of expressions involving equality (==) or inequalities (<= or >=), constraints can also be expressed using a 3-tuple of the form (lb, expr, ub), where lb and ub can be None; this is interpreted as lb <= expr <= ub. Variables can appear only in the middle expr. For example, the following two constraint declarations have the same meaning:
model.x = Var()

def aRule(model):
    return model.x >= 2

model.Boundx = Constraint(rule=aRule)

def bRule(model):
    return (2, model.x, None)

model.boundx = Constraint(rule=bRule)
For this simple example, it would also be possible to declare model.x with a bounds option to accomplish the same thing.
Constraints (and objectives) can be indexed by lists or sets. When the declaration contains lists or sets as arguments, the elements are iteratively passed to the rule function. If there is more than one, then the cross product is sent. For example, the following constraint could be interpreted as placing a budget of \(i\) on the \(i^{\mbox{th}}\) item to buy, where the cost per item is given by the parameter model.a:
model.A = RangeSet(1, 10)
model.a = Param(model.A, within=PositiveReals)
model.ToBuy = Var(model.A)

def bud_rule(model, i):
    return model.a[i]*model.ToBuy[i] <= i

model.aBudget = Constraint(model.A, rule=bud_rule)
Note
Python and Pyomo are case sensitive, so model.a is not the same as model.A.
Expressions¶
In this section, we use the word “expression” in two ways: first in the general sense of the word and second to describe a class of Pyomo objects that have the name Expression, as described in the subsection on expression objects.
Rules to Generate Expressions¶
Both objectives and constraints make use of rules to generate expressions. These are Python functions that return the appropriate expression. These are firstclass functions that can access global data as well as data passed in, including the model object.
Operations on model elements result in expressions, which seems natural in expressions like the constraints we have seen so far. It is also possible to build up expressions. The following example illustrates this, along with a reference to global Python data in the form of a Python variable called switch:
switch = 3

model.A = RangeSet(1, 10)
model.c = Param(model.A)
model.d = Param()
model.x = Var(model.A, domain=Boolean)

def pi_rule(model):
    accexpr = summation(model.c, model.x)
    if switch >= 2:
        accexpr = accexpr - model.d
    return accexpr >= 0.5

model.PieSlice = Constraint(rule=pi_rule)
In this example, the constraint that is generated depends on the value of the Python variable called switch. If the value is 2 or greater, then the constraint is summation(model.c, model.x) - model.d >= 0.5; otherwise, the model.d term is not present.
Warning
Because model elements result in expressions, not values, the following does not work as expected in an abstract model!
model.A = RangeSet(1, 10)
model.c = Param(model.A)
model.d = Param()
model.x = Var(model.A, domain=Boolean)

def pi_rule(model):
    accexpr = summation(model.c, model.x)
    if model.d >= 2:  # NOT in an abstract model!!
        accexpr = accexpr - model.d
    return accexpr >= 0.5

model.PieSlice = Constraint(rule=pi_rule)
The trouble is that model.d >= 2 results in an expression, not its evaluated value. Instead use if value(model.d) >= 2.
Note
Pyomo supports nonlinear expressions and can call nonlinear solvers such as Ipopt.
Piecewise Linear Expressions¶
Pyomo has facilities to add piecewise constraints of the form y=f(x) for a variety of forms of the function f.
The piecewise types other than SOS2, BIGM_SOS1, and BIGM_BIN are implemented as described in the paper [Vielma_et_al].
There are two basic forms for the declaration of the constraint:
#model.pwconst = Piecewise(indexes, yvar, xvar, **Keywords)
#model.pwconst = Piecewise(yvar,xvar,**Keywords)
where pwconst can be replaced by a name appropriate for the application. The choice depends on whether the x and y variables are indexed. If so, they must have the same index sets and these sets are given as the first arguments.
Keywords:¶
pw_pts={ },[ ],( )
A dictionary of lists (where keys are the index set) or a single list (for the nonindexed case or when an identical set of breakpoints is used across all indices) defining the set of domain breakpoints for the piecewise linear function.
Note
pw_pts is always required. These give the breakpoints for the piecewise function and are expected to fully span the bounds for the independent variable(s).
pw_repn=<Option>
Indicates the type of piecewise representation to use. This can have a major impact on solver performance. Options: (Default “SOS2”)
 “SOS2” - Standard representation using sos2 constraints.
 “BIGM_BIN” - BigM constraints with binary variables. The theoretically tightest M values are automatically determined.
 “BIGM_SOS1” - BigM constraints with sos1 variables. The theoretically tightest M values are automatically determined.
 “DCC” - Disaggregated convex combination model.
 “DLOG” - Logarithmic disaggregated convex combination model.
 “CC” - Convex combination model.
 “LOG” - Logarithmic branching convex combination.
 “MC” - Multiple choice model.
 “INC” - Incremental (delta) method.
Note
Step functions are supported for all but the two BIGM options. Refer to the ‘force_pw’ option.
pw_constr_type= <Option>
Indicates the bound type of the piecewise function. Options:
 “UB” - y variable is bounded above by piecewise function.
 “LB” - y variable is bounded below by piecewise function.
 “EQ” - y variable is equal to the piecewise function.
f_rule=f(model,i,j,…,x), { }, [ ], ( )
An object that returns a numeric value that is the range value corresponding to each piecewise domain point. For functions, the first argument must be a Pyomo model. The last argument is the domain value at which the function evaluates (not a Pyomo Var). Intermediate arguments are the corresponding indices of the Piecewise component (if any). Otherwise, the object can be a dictionary of lists/tuples (with keys the same as the indexing set) or a single list/tuple (when no indexing set is used or when all indices use an identical piecewise function). Examples:

# A function that changes with index
def f(model, j, x):
    if j == 2:
        return x**2 + 1.0
    else:
        return x**2 + 5.0

# A nonlinear function
f = lambda model, x: exp(x) + value(model.p)

# A step function
f = [0, 0, 1, 1, 2, 2]
force_pw=True/False
Using the given function rule and pw_pts, a check for convexity/concavity is implemented. If (1) the function is convex and the piecewise constraints are lower bounds or if (2) the function is concave and the piecewise constraints are upper bounds then the piecewise constraints will be substituted for linear constraints. Setting ‘force_pw=True’ will force the use of the original piecewise constraints even when one of these two cases applies.
warning_tol=<float>
To aid in debugging, a warning is printed when consecutive slopes of piecewise segments are within <warning_tol> of each other. Default=1e-8
warn_domain_coverage=True/False
Print a warning when the feasible region of the domain variable is not completely covered by the piecewise breakpoints. Default=True
unbounded_domain_var=True/False
Allow an unbounded or partially bounded Pyomo Var to be used as the domain variable. Default=False
Note
This does not imply unbounded piecewise segments will be constructed. The outermost piecewise breakpoints will bound the domain variable at each index. However, the Var attributes .lb and .ub will not be modified.
Here is an example of an assignment to a Python dictionary variable that has keywords for a piecewise constraint:
kwds = {'pw_constr_type':'EQ','pw_repn':'SOS2','sense':maximize,'force_pw':True}
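What a Piecewise declaration ultimately encodes is linear interpolation between the pw_pts breakpoints. A minimal standalone sketch of that interpolation (not Pyomo's internal representation, which works through added variables and constraints) is:

```python
# Sketch: evaluate the piecewise-linear interpolant defined by a list of
# breakpoints and the function values sampled at those breakpoints.
def pw_eval(x, pts, vals):
    # walk consecutive breakpoint pairs and interpolate within the segment
    for (x0, y0), (x1, y1) in zip(zip(pts, vals), zip(pts[1:], vals[1:])):
        if x0 <= x <= x1:
            t = (x - x0) / (x1 - x0)
            return y0 + t * (y1 - y0)
    raise ValueError("x outside the breakpoint range")

pts = [0.0, 1.0, 2.0]
vals = [p**4 for p in pts]      # sampling f(x) = x**4 at the breakpoints
print(pw_eval(1.5, pts, vals))  # interpolates between f(1)=1 and f(2)=16
```

This also shows why breakpoints must span the variable's bounds: values outside the breakpoint range have no segment to interpolate on.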
Here is a simple example based on the example given earlier in Symbolic Index Sets. In this new example, the objective function is the sum of c times x to the fourth. In this example, the keywords are passed directly to the Piecewise function without being assigned to a dictionary variable. The upper bound on the x variables was chosen whimsically just to make the example. The important thing to note is that variables that are going to appear as the independent variable in a piecewise constraint must have bounds.
# abstract2piece.py
# Similar to abstract2.py, but the objective is now c times x to the fourth power

from pyomo.environ import *

model = AbstractModel()

model.I = Set()
model.J = Set()

Topx = 6.1  # range of x variables

model.a = Param(model.I, model.J)
model.b = Param(model.I)
model.c = Param(model.J)

# the next line declares a variable indexed by the set J
model.x = Var(model.J, domain=NonNegativeReals, bounds=(0, Topx))
model.y = Var(model.J, domain=NonNegativeReals)

# to avoid warnings, we set breakpoints at or beyond the bounds
PieceCnt = 100
bpts = []
for i in range(PieceCnt+2):
    bpts.append(float((i*Topx)/PieceCnt))

def f4(model, j, xp):
    # we do not need j, but it is passed as the index for the constraint
    return xp**4

model.ComputeObj = Piecewise(model.J, model.y, model.x,
                             pw_pts=bpts, pw_constr_type='EQ', f_rule=f4)

def obj_expression(model):
    return summation(model.c, model.y)

model.OBJ = Objective(rule=obj_expression)

def ax_constraint_rule(model, i):
    # return the expression for the constraint for i
    return sum(model.a[i,j] * model.x[j] for j in model.J) >= model.b[i]

# the next line creates one constraint for each member of the set model.I
model.AxbConstraint = Constraint(model.I, rule=ax_constraint_rule)
A more advanced example is provided in abstract2piecebuild.py in BuildAction and BuildCheck.
Expression Objects¶
Pyomo Expression objects are very similar to the Param component (with mutable=True) except that the underlying values can be numeric constants or Pyomo expressions. Here’s an illustration of expression objects. An expression object with an index set that is the numbers 1, 2, 3 is created and initialized to be the model variable x times the index. Later in the model file, just to illustrate how to do it, the expression is changed, but just for the first index, to be x squared.
model = ConcreteModel()
model.x = Var(initialize=1.0)

def _e(m, i):
    return m.x*i

model.e = Expression([1, 2, 3], rule=_e)

instance = model.create_instance()

print(value(instance.e[1]))  # -> 1.0
print(instance.e[1]())       # -> 1.0
print(instance.e[1].value)   # -> a pyomo expression object

# Change the underlying expression
instance.e[1].value = instance.x**2

# ... solve
# ... load results

# print the value of the expression given the loaded optimal solution
print(value(instance.e[1]))
An alternative is to create Python functions that, potentially, manipulate model objects. E.g., if you define a function

def f(x, p):
    return x + p

you can call this function with or without Pyomo modeling components as the arguments. E.g., f(2, 3) will return a number, whereas f(model.x, 3) will return a Pyomo expression due to operator overloading.
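The operator overloading that makes f(model.x, 3) return an expression can be illustrated with a toy class; this is a sketch of the general mechanism, not Pyomo's expression system:

```python
# Toy illustration of operator overloading: adding a "symbol" builds an
# expression tree instead of a number. Not Pyomo's actual classes.
class Sym:
    def __init__(self, name):
        self.name = name
    def __add__(self, other):
        return ("+", self, other)   # record the operation as a tree
    def __radd__(self, other):
        return ("+", other, self)   # handle number + Sym as well

def f(x, p):
    return x + p

print(f(2, 3))            # plain numbers: evaluates to 5
print(f(Sym("x"), 3)[0])  # a Sym argument: returns an expression tree
```

The same function body produces either a value or a tree depending on its argument types, which is exactly how one rule can serve both concrete evaluation and expression generation.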
If you take this approach, you should note that anywhere a Pyomo expression is used to generate another expression (e.g., f(model.x, 3) + 5), the initial expression is always cloned so that the newly generated expression is independent of the old. For example:
model = ConcreteModel()
model.x = Var()

# create a Pyomo expression
e1 = model.x + 5

# create another Pyomo expression
# e1 is copied when generating e2
e2 = e1 + model.x
If you want to create an expression that is shared between other expressions, you can use the Expression component.
Suffixes¶
Suffixes provide a mechanism for declaring extraneous model data, which can be used in a number of contexts. Most commonly, suffixes are used by solver plugins to store extra information about the solution of a model. This and other suffix functionality is made available to the modeler through the use of the Suffix component class. Uses of Suffix include:
 Importing extra information from a solver about the solution of a mathematical program (e.g., constraint duals, variable reduced costs, basis information).
 Exporting information to a solver or algorithm to aid in solving a mathematical program (e.g., warmstarting information, variable branching priorities).
 Tagging modeling components with local data for later use in advanced scripting algorithms.
Suffix Notation and the Pyomo NL File Interface¶
The Suffix component used in Pyomo has been adapted from the suffix notation used in the modeling language AMPL [AMPL]. Therefore, it follows naturally that AMPL-style suffix functionality is fully available using Pyomo’s NL file interface. For information on AMPL-style suffixes, the reader is referred to the AMPL website:
A number of scripting examples that highlight the use of AMPL-style suffix functionality are available in the examples/pyomo/suffixes directory distributed with Pyomo.
Declaration¶
The effects of declaring a Suffix component on a Pyomo model are determined by the following traits:
 direction: This trait defines the direction of information flow for the suffix. A suffix direction can be assigned one of four possible values:

 LOCAL - suffix data stays local to the modeling framework and will not be imported or exported by a solver plugin (default)
 IMPORT - suffix data will be imported from the solver by its respective solver plugin
 EXPORT - suffix data will be exported to a solver by its respective solver plugin
 IMPORT_EXPORT - suffix data flows in both directions between the model and the solver or algorithm
 datatype: This trait advertises the type of data held on the suffix for those interfaces where it matters (e.g., the NL file interface). A suffix datatype can be assigned one of three possible values:

 FLOAT - the suffix stores floating point data (default)
 INT - the suffix stores integer data
 None - the suffix stores any type of data
Note
Exporting suffix data through Pyomo’s NL file interface requires all active export suffixes have a strict datatype (i.e., datatype=None is not allowed).
The following code snippet shows examples of declaring a Suffix component on a Pyomo model:
import pyomo.environ as pyo

model = pyo.ConcreteModel()

# Export integer data
model.priority = pyo.Suffix(
    direction=pyo.Suffix.EXPORT, datatype=pyo.Suffix.INT)

# Export and import floating point data
model.dual = pyo.Suffix(direction=pyo.Suffix.IMPORT_EXPORT)

# Store floating point data
model.junk = pyo.Suffix()
Declaring a Suffix with a non-local direction on a model is not guaranteed to be compatible with all solver plugins in Pyomo. Whether a given Suffix is acceptable or not depends on both the solver and solver interface being used. In some cases, a solver plugin will raise an exception if it encounters a Suffix type that it does not handle, but this is not true in every situation. For instance, the NL file interface is generic to all AMPL-compatible solvers, so there is no way to validate that a Suffix of a given name, direction, and datatype is appropriate for a solver. One should be careful in verifying that Suffix declarations are being handled as expected when switching to a different solver or solver interface.
Operations¶
The Suffix component class provides a dictionary interface for mapping Pyomo modeling components to arbitrary data. This mapping functionality is captured within the ComponentMap base class, which is also available within Pyomo’s modeling environment. The ComponentMap can be used as a more lightweight replacement for Suffix in cases where a simple mapping from Pyomo modeling components to arbitrary data values is required.
Note
ComponentMap and Suffix use the built-in id() function for hashing entry keys. This design decision arises from the fact that most of the modeling components found in Pyomo are either not hashable or use a hash based on a mutable numeric value, making them unacceptable for use as keys with the built-in dict class.
Warning
The use of the built-in id() function for hashing entry keys in ComponentMap and Suffix makes them inappropriate for use in situations where built-in object types must be used as keys. It is strongly recommended that only Pyomo modeling components be used as keys in these mapping containers (Var, Constraint, etc.).
Warning
Do not attempt to pickle or deepcopy instances of ComponentMap or Suffix unless doing so along with the components for which they hold mapping entries. As an example, placing one of these objects on a model and then cloning or pickling that model is an acceptable scenario.
In addition to the dictionary interface provided through the ComponentMap base class, the Suffix component class also provides a number of methods whose default semantics are more convenient for working with indexed modeling components. The easiest way to highlight this functionality is through the use of an example.
model = pyo.ConcreteModel()
model.x = pyo.Var()
model.y = pyo.Var([1,2,3])
model.foo = pyo.Suffix()
In this example we have a concrete Pyomo model with two different types of variable components (indexed and nonindexed) as well as a Suffix declaration (foo). The next code snippet shows examples of adding entries to the suffix foo.
# Assign a suffix value of 1.0 to model.x
model.foo.set_value(model.x, 1.0)
# Same as above with dict interface
model.foo[model.x] = 1.0
# Assign a suffix value of 0.0 to all indices of model.y
# By default this expands so that entries are created for
# every index (y[1], y[2], y[3]) and not model.y itself
model.foo.set_value(model.y, 0.0)
# The same operation using the dict interface results in an entry only
# for the parent component model.y
model.foo[model.y] = 50.0
# Assign a suffix value of 1.0 to model.y[1]
model.foo.set_value(model.y[1], 1.0)
# Same as above with the dict interface
model.foo[model.y[1]] = 1.0
In this example we highlight the fact that the __setitem__ and setValue entry methods can be used interchangeably except in the case where indexed components are used (model.y). In the indexed case, the __setitem__ approach creates a single entry for the parent indexed component itself, whereas the setValue approach by default creates an entry for each index of the component. This behavior can be controlled using the optional keyword ‘expand’, where assigning it a value of False results in the same behavior as __setitem__.
Other operations, like accessing or removing entries in our mapping, can be performed as if the built-in dict class is in use.
>>> print(model.foo.get(model.x))
1.0
>>> print(model.foo[model.x])
1.0
>>> print(model.foo.get(model.y[1]))
1.0
>>> print(model.foo[model.y[1]])
1.0
>>> print(model.foo.get(model.y[2]))
0.0
>>> print(model.foo[model.y[2]])
0.0
>>> print(model.foo.get(model.y))
50.0
>>> print(model.foo[model.y])
50.0
>>> del model.foo[model.y]
>>> print(model.foo.get(model.y))
None
>>> print(model.foo[model.y])
Traceback (most recent call last):
...
KeyError: "Component with id '...': y"
The non-dict method clear_value can be used in place of __delitem__ to remove entries, where it inherits the same default behavior as setValue for indexed components and does not raise a KeyError when the argument does not exist as a key in the mapping.
>>> model.foo.clear_value(model.y)
>>> print(model.foo[model.y[1]])
Traceback (most recent call last):
...
KeyError: "Component with id '...': y[1]"
>>> del model.foo[model.y[1]]
Traceback (most recent call last):
...
KeyError: "Component with id '...': y[1]"
>>> model.foo.clear_value(model.y[1])
A summary of the non-dict Suffix methods is provided here:

 clearAllValues() - Clears all suffix data.
 clear_value(component, expand=True) - Clears suffix information for a component.
 setAllValues(value) - Sets the value of this suffix on all components.
 setValue(component, value, expand=True) - Sets the value of this suffix on the specified component.
 updateValues(data_buffer, expand=True) - Updates the suffix data given a list of component, value tuples. Provides an improvement in efficiency over calling setValue on every component.
 getDatatype() - Return the suffix datatype.
 setDatatype(datatype) - Set the suffix datatype.
 getDirection() - Return the suffix direction.
 setDirection(direction) - Set the suffix direction.
 importEnabled() - Returns True when this suffix is enabled for import from solutions.
 exportEnabled() - Returns True when this suffix is enabled for export to solvers.
Importing Suffix Data¶
Importing suffix information from a solver solution is achieved by declaring a Suffix component with the appropriate name and direction. Suffix names available for import may be specific to thirdparty solvers as well as individual solver interfaces within Pyomo. The most common of these, available with most solvers and solver interfaces, is constraint dual multipliers. Requesting that duals be imported into suffix data can be accomplished by declaring a Suffix component on the model.
model = pyo.ConcreteModel()
model.dual = pyo.Suffix(direction=pyo.Suffix.IMPORT)
model.x = pyo.Var()
model.obj = pyo.Objective(expr=model.x)
model.con = pyo.Constraint(expr=model.x >= 1.0)
The existence of an active suffix with the name dual that has an import style suffix direction will cause constraint dual information to be collected into the solver results (assuming the solver supplies dual information). In addition to this, after loading solver results into a problem instance (using a python script or Pyomo callback functions in conjunction with the pyomo command), one can access the dual values associated with constraints using the dual Suffix component.
>>> results = pyo.SolverFactory('glpk').solve(model)
>>> pyo.assert_optimal_termination(results)
>>> print(model.dual[model.con])
1.0
Alternatively, the pyomo option --solver-suffixes can be used to request suffix information from a solver. In the event that suffix names are provided via this command-line option, the pyomo script will automatically declare these Suffix components on the constructed instance, making these suffixes available for import.
Exporting Suffix Data¶
Exporting suffix data is accomplished in a manner similar to importing suffix data. One simply needs to declare a Suffix component on the model with an export style suffix direction and associate modeling component values with it. The following example shows how one can declare a special ordered set of type 1 using AMPL-style suffix notation in conjunction with Pyomo’s NL file interface.
model = pyo.ConcreteModel()
model.y = pyo.Var([1,2,3], within=pyo.NonNegativeReals)
model.sosno = pyo.Suffix(direction=pyo.Suffix.EXPORT)
model.ref = pyo.Suffix(direction=pyo.Suffix.EXPORT)
# Add entry for each index of model.y
model.sosno.set_value(model.y, 1)
model.ref[model.y[1]] = 0
model.ref[model.y[2]] = 1
model.ref[model.y[3]] = 2
Most AMPL-compatible solvers will recognize the suffix names sosno
and ref as declaring a special ordered set, where a positive value
for sosno indicates a special ordered set of type 1 and a negative
value indicates a special ordered set of type 2.
Note
Pyomo provides the SOSConstraint
component for declaring special
ordered sets, which is recognized by all solver interfaces, including
the NL file interface.
Pyomo’s NL file interface will recognize an EXPORT style Suffix component with the name ‘dual’ as supplying initializations for constraint multipliers. As such it will be treated differently from all other EXPORT style suffixes encountered in the NL writer, which are treated as AMPL-style suffixes. The following example script shows how one can warm-start the interior-point solver Ipopt by supplying both primal (variable values) and dual (suffixes) solution information. This dual suffix information can be both imported and exported using a single Suffix component with an IMPORT_EXPORT direction.
model = pyo.ConcreteModel()
model.x1 = pyo.Var(bounds=(1,5),initialize=1.0)
model.x2 = pyo.Var(bounds=(1,5),initialize=5.0)
model.x3 = pyo.Var(bounds=(1,5),initialize=5.0)
model.x4 = pyo.Var(bounds=(1,5),initialize=1.0)
model.obj = pyo.Objective(
    expr=model.x1*model.x4*(model.x1 + model.x2 + model.x3) + model.x3)
model.inequality = pyo.Constraint(
    expr=model.x1*model.x2*model.x3*model.x4 >= 25.0)
model.equality = pyo.Constraint(
    expr=model.x1**2 + model.x2**2 + model.x3**2 + model.x4**2 == 40.0)
### Declare all suffixes
# Ipopt bound multipliers (obtained from solution)
model.ipopt_zL_out = pyo.Suffix(direction=pyo.Suffix.IMPORT)
model.ipopt_zU_out = pyo.Suffix(direction=pyo.Suffix.IMPORT)
# Ipopt bound multipliers (sent to solver)
model.ipopt_zL_in = pyo.Suffix(direction=pyo.Suffix.EXPORT)
model.ipopt_zU_in = pyo.Suffix(direction=pyo.Suffix.EXPORT)
# Obtain dual solutions from first solve and send to warm start
model.dual = pyo.Suffix(direction=pyo.Suffix.IMPORT_EXPORT)
ipopt = pyo.SolverFactory('ipopt')
The difference in performance can be seen by examining Ipopt’s iteration log with and without warm starting:
Without Warmstart:
ipopt.solve(model, tee=True)
...
iter    objective    inf_pr   inf_du lg(mu)  ||d||  lg(rg) alpha_du alpha_pr  ls
   0  1.6109693e+01 1.12e+01 5.28e-01  -1.0 0.00e+00    -  0.00e+00 0.00e+00   0
   1  1.6982239e+01 7.30e-01 1.02e+01  -1.0 6.11e-01    -  7.19e-02 1.00e+00f  1
   2  1.7318411e+01 3.60e-02 5.05e-01  -1.0 1.61e-01    -  1.00e+00 1.00e+00h  1
   3  1.6849424e+01 2.78e-01 6.68e-02  -1.7 2.85e-01    -  7.94e-01 1.00e+00h  1
   4  1.7051199e+01 4.71e-03 2.78e-03  -1.7 6.06e-02    -  1.00e+00 1.00e+00h  1
   5  1.7011979e+01 7.19e-03 8.50e-03  -3.8 3.66e-02    -  9.45e-01 9.98e-01h  1
   6  1.7014271e+01 1.74e-05 9.78e-06  -3.8 3.33e-03    -  1.00e+00 1.00e+00h  1
   7  1.7014021e+01 1.23e-07 1.82e-07  -5.7 2.69e-04    -  1.00e+00 1.00e+00h  1
   8  1.7014017e+01 1.77e-11 2.52e-11  -8.6 3.32e-06    -  1.00e+00 1.00e+00h  1

Number of Iterations....: 8
...
With Warmstart:
### Set Ipopt options for warm-start
# The current values on the ipopt_zU_out and ipopt_zL_out suffixes will
# be used as initial conditions for the bound multipliers to solve the
# new problem
model.ipopt_zL_in.update(model.ipopt_zL_out)
model.ipopt_zU_in.update(model.ipopt_zU_out)
ipopt.options['warm_start_init_point'] = 'yes'
ipopt.options['warm_start_bound_push'] = 1e-6
ipopt.options['warm_start_mult_bound_push'] = 1e-6
ipopt.options['mu_init'] = 1e-6

ipopt.solve(model, tee=True)
...
iter    objective    inf_pr   inf_du lg(mu)  ||d||  lg(rg) alpha_du alpha_pr  ls
   0  1.7014032e+01 2.00e-06 4.07e-06  -6.0 0.00e+00    -  0.00e+00 0.00e+00   0
   1  1.7014019e+01 3.65e-12 1.00e-11  -6.0 2.50e-01    -  1.00e+00 1.00e+00h  1
   2  1.7014017e+01 4.48e-12 6.42e-12  -9.0 1.92e-06    -  1.00e+00 1.00e+00h  1

Number of Iterations....: 2
...
Using Suffixes With an AbstractModel¶
In order to allow the declaration of suffix data within the framework of
an AbstractModel, the Suffix component can be initialized with an
optional construction rule. As with constraint rules, this function will
be executed at the time of model construction. The following simple
example highlights the use of the rule
keyword in suffix
initialization. Suffix rules are expected to return an iterable of
(component, value) tuples, where the expand=True
semantics are
applied for indexed components.
model = pyo.AbstractModel()
model.x = pyo.Var()
model.c = pyo.Constraint(expr=model.x >= 1)
def foo_rule(m):
    return ((m.x, 2.0), (m.c, 3.0))
model.foo = pyo.Suffix(rule=foo_rule)
>>> # Instantiate the model
>>> inst = model.create_instance()
>>> print(inst.foo[inst.x])
2.0
>>> print(inst.foo[inst.c])
3.0
>>> # Note that model.x and inst.x are not the same object
>>> print(inst.foo[model.x])
Traceback (most recent call last):
...
KeyError: "Component with id '...': x"
The next example shows an abstract model where suffixes are attached only to the variables:
model = pyo.AbstractModel()
model.I = pyo.RangeSet(1,4)
model.x = pyo.Var(model.I)
def c_rule(m, i):
    return m.x[i] >= i
model.c = pyo.Constraint(model.I, rule=c_rule)
def foo_rule(m):
    return ((m.x[i], 3.0*i) for i in m.I)
model.foo = pyo.Suffix(rule=foo_rule)
>>> # instantiate the model
>>> inst = model.create_instance()
>>> for i in inst.I:
...     print((i, inst.foo[inst.x[i]]))
(1, 3.0)
(2, 6.0)
(3, 9.0)
(4, 12.0)
Solving Pyomo Models¶
Solving ConcreteModels¶
If you have a ConcreteModel, add these lines at the bottom of your Python script to solve it:
>>> opt = pyo.SolverFactory('glpk')
>>> opt.solve(model)
Solving AbstractModels¶
If you have an AbstractModel, you must create a concrete instance of your model before solving it using the same lines as above:
>>> instance = model.create_instance()
>>> opt = pyo.SolverFactory('glpk')
>>> opt.solve(instance)
pyomo solve
Command¶
To solve a ConcreteModel contained in the file my_model.py
using the
pyomo
command and the solver GLPK, use the following line in a
terminal window:
pyomo solve my_model.py --solver='glpk'
To solve an AbstractModel contained in the file my_model.py
with data
in the file my_data.dat
using the pyomo
command and the solver GLPK,
use the following line in a terminal window:
pyomo solve my_model.py my_data.dat --solver='glpk'
Supported Solvers¶
Pyomo supports a wide variety of solvers. Pyomo has specialized
interfaces to some solvers (for example, BARON, CBC, CPLEX, and Gurobi).
It also has generic interfaces that support calling any solver that can
read AMPL “.nl
” and write “.sol
” files and the ability to
generate GAMS-format models and retrieve the results. You can get the
current list of supported solvers using the pyomo
command:
pyomo help --solvers
Working with Pyomo Models¶
This section gives an overview of commonly used scripting commands when working with Pyomo models. These commands must be applied to a concrete model instance, in other words, an instantiated model.
Repeated Solves¶
>>> import pyomo.environ as pyo
>>> from pyomo.opt import SolverFactory
>>> model = pyo.ConcreteModel()
>>> model.nVars = pyo.Param(initialize=4)
>>> model.N = pyo.RangeSet(model.nVars)
>>> model.x = pyo.Var(model.N, within=pyo.Binary)
>>> model.obj = pyo.Objective(expr=pyo.summation(model.x))
>>> model.cuts = pyo.ConstraintList()
>>> opt = SolverFactory('glpk')
>>> opt.solve(model)
>>> # Iterate, adding a cut to exclude the previously found solution
>>> for i in range(5):
...     expr = 0
...     for j in model.x:
...         if pyo.value(model.x[j]) < 0.5:
...             expr += model.x[j]
...         else:
...             expr += (1 - model.x[j])
...     model.cuts.add( expr >= 1 )
...     results = opt.solve(model)
...     print("\n===== iteration", i)
...     model.display()
To illustrate Python scripts for Pyomo we consider an example that is in
the file iterative1.py
and is executed using the command
python iterative1.py
Note
This is a Python script that contains elements of Pyomo, so it is
executed using the python
command. The pyomo
command can be
used, but then there will be some strange messages at the end when
Pyomo finishes the script and attempts to send the results to a
solver, which is what the pyomo
command does.
This script creates a model, solves it, and then adds a constraint to preclude the solution just found. This process is repeated, so the script finds and prints multiple solutions. The particular model it creates is just the sum of four binary variables. One does not need a computer to solve the problem or even to iterate over solutions. This example is provided just to illustrate some elementary aspects of scripting.
# iterative1.py
import pyomo.environ as pyo
from pyomo.opt import SolverFactory
# Create a solver
opt = pyo.SolverFactory('glpk')
#
# A simple model with binary variables and
# an empty constraint list.
#
model = pyo.AbstractModel()
model.n = pyo.Param(default=4)
model.x = pyo.Var(pyo.RangeSet(model.n), within=pyo.Binary)
def o_rule(model):
    return pyo.summation(model.x)
model.o = pyo.Objective(rule=o_rule)
model.c = pyo.ConstraintList()
# Create a model instance and optimize
instance = model.create_instance()
results = opt.solve(instance)
instance.display()
# Iterate to eliminate the previously found solution
for i in range(5):
    expr = 0
    for j in instance.x:
        if pyo.value(instance.x[j]) == 0:
            expr += instance.x[j]
        else:
            expr += (1 - instance.x[j])
    instance.c.add( expr >= 1 )
    results = opt.solve(instance)
    print("\n===== iteration", i)
    instance.display()
Let us now analyze this script. The first line is a comment that happens
to give the name of the file. This is followed by two lines that import
symbols for Pyomo. The pyomo namespace is imported as
pyo
. Therefore, pyo.
must precede each use of a Pyomo name.
# iterative1.py
import pyomo.environ as pyo
from pyomo.opt import SolverFactory
An object to perform optimization is created by calling
SolverFactory
with an argument giving the name of the solver. The
argument would be 'gurobi'
if, e.g., Gurobi was desired instead of
glpk:
# Create a solver
opt = pyo.SolverFactory('glpk')
The next lines after a comment create a model. For our discussion here, we will refer to this as the base model because it will be extended by adding constraints later. (The words “base model” are not reserved words, they are just being introduced for the discussion of this example). There are no constraints in the base model, but that is just to keep it simple. Constraints could be present in the base model. Even though it is an abstract model, the base model is fully specified by these commands because it requires no external data:
model = pyo.AbstractModel()
model.n = pyo.Param(default=4)
model.x = pyo.Var(pyo.RangeSet(model.n), within=pyo.Binary)
def o_rule(model):
    return pyo.summation(model.x)
model.o = pyo.Objective(rule=o_rule)
The next line is not part of the base model specification. It creates an empty constraint list that the script will use to add constraints.
model.c = pyo.ConstraintList()
The next non-comment line creates the instantiated model and refers to
the instance object with a Python variable instance. Models run
using the pyomo script do not typically contain this line because
model instantiation is done by the pyomo script. In this example,
the create_instance function is called without arguments because none are
needed; however, the name of a file with data commands is given as an
argument in many scripts.
instance = model.create_instance()
The next line invokes the solver and refers to the object containing
the results with the Python variable results.
results = opt.solve(instance)
The solve function loads the results into the instance, so the next line writes out the updated values.
instance.display()
The next noncomment line is a Python iteration command that will
successively assign the integers from 0 to 4 to the Python variable
i
, although that variable is not used in the script. This loop is what
causes the script to generate five more solutions:
for i in range(5):
An expression is built up in the Python variable named expr
. The
Python variable j
will be iteratively assigned all of the indexes of
the variable x
. For each index, the value of the variable (which was
loaded by the solve just described) is tested to see if it is
zero and the expression in expr
is augmented accordingly. Although
expr
is initialized to 0 (an integer), its type will change to be a
Pyomo expression when it is assigned expressions involving Pyomo
variable objects:
expr = 0
for j in instance.x:
    if pyo.value(instance.x[j]) == 0:
        expr += instance.x[j]
    else:
        expr += (1 - instance.x[j])
During the first iteration (when i
is 0), we know that all values of
x
will be 0, so we can anticipate what the expression will look
like. We know that x
is indexed by the integers from 1 to 4 so we
know that j
will take on the values from 1 to 4 and we also know
that all values of x
will be zero for all indexes so we know that the
value of expr
will be something like
0 + instance.x[1] + instance.x[2] + instance.x[3] + instance.x[4]
The value of j will be evaluated because it is a Python variable;
however, because instance.x[j] is a Pyomo variable, its value will
not be used; instead, the variable object itself will appear in the
expression. That is exactly what we want in this case. When we wanted to
use the current value in the if statement, we used the value
function to get it.
The next line adds to the constraint list called c the requirement
that the expression be greater than or equal to one:
instance.c.add( expr >= 1 )
The proof that this precludes the last solution is left as an exercise for the reader.
The final lines in the outer for loop find a solution and display it:
    results = opt.solve(instance)
    print("\n===== iteration", i)
    instance.display()
Note
The assignment of the solve output to a results object is somewhat anachronistic. Many scripts just use
>>> opt.solve(instance) # doctest: +SKIP
since the results are moved to the instance by default, leaving the results object with little of interest. If, for some reason, you want the results to stay in the results object and not be moved to the instance, you would use
>>> results = opt.solve(instance, load_solutions=False) # doctest: +SKIP
This approach can be useful if there is a concern that the solver did not terminate with an optimal solution. For example,
>>> results = opt.solve(instance, load_solutions=False) # doctest: +SKIP
>>> if results.solver.termination_condition == TerminationCondition.optimal: # doctest: +SKIP
... instance.solutions.load_from(results) # doctest: +SKIP
Changing the Model or Data and Re-solving¶
The iterative1.py
example above illustrates how a model can be changed and
then re-solved. In that example, the model is changed by adding a
constraint, but the model could also be changed by altering the values
of parameters. Note, however, that in these examples, we make the
changes to the concrete model instances. This is particularly important
for AbstractModel
users, as this implies working with the
instance
object rather than the model
object, which allows us to
avoid creating a new model
object for each solve. Here is the basic
idea for users of an AbstractModel
:
1. Create an AbstractModel (suppose it is called model)
2. Call model.create_instance() to create an instance (suppose it is called instance)
3. Solve instance
4. Change something in instance
5. Solve instance again
Note
Users of ConcreteModel
typically name their models model
, which
can cause confusion to novice readers of documentation. Examples based on
an AbstractModel
will refer to instance
where users of a
ConcreteModel
would typically use the name model
.
If instance
has a parameter whose name is Theta
that was
declared to be mutable
(i.e., mutable=True
) with an
index that contains idx
, then the value in NewVal
can be assigned to
it using
>>> instance.Theta[idx] = NewVal
For a singleton parameter named sigma
(i.e., if it is not
indexed), the assignment can be made using
>>> instance.sigma = NewVal
Note
If the Param
is not declared to be mutable, an error will occur if an assignment to it is attempted.
For more information about access to Pyomo parameters, see the section
Accessing Parameter Values in this document. Note that for
concrete models, the model is the instance.
Fixing Variables and Re-solving¶
Instead of changing model data, scripts are often used to fix variable values. The following example illustrates this.
# iterative2.py
import pyomo.environ as pyo
from pyomo.opt import SolverFactory
# Create a solver
opt = pyo.SolverFactory('cplex')
#
# A simple model with binary variables and
# an empty constraint list.
#
model = pyo.AbstractModel()
model.n = pyo.Param(default=4)
model.x = pyo.Var(pyo.RangeSet(model.n), within=pyo.Binary)
def o_rule(model):
    return pyo.summation(model.x)
model.o = pyo.Objective(rule=o_rule)
model.c = pyo.ConstraintList()
# Create a model instance and optimize
instance = model.create_instance()
results = opt.solve(instance)
instance.display()
# "flip" the value of x[2] (it is binary)
# then solve again
if pyo.value(instance.x[2]) == 0:
    instance.x[2].fix(1)
else:
    instance.x[2].fix(0)
results = opt.solve(instance)
instance.display()
In this example, the variables are binary. The model is solved and then
the value of model.x[2]
is flipped to the opposite value before
solving the model again. The main lines of interest are:
if pyo.value(instance.x[2]) == 0:
    instance.x[2].fix(1)
else:
    instance.x[2].fix(0)
results = opt.solve(instance)
This could also have been accomplished by setting the upper and lower bounds:
>>> if instance.x[2] == 0:
... instance.x[2].setlb(1)
... instance.x[2].setub(1)
... else:
... instance.x[2].setlb(0)
... instance.x[2].setub(0)
Notice that when using the bounds, we do not set fixed
to True
because that would fix the variable at whatever value it presently has
and then the bounds would be ignored by the solver.
For more information about access to Pyomo variables, see the section
Accessing Variable Values in this document.
Note that
>>> instance.x.fix(2)
is equivalent to
>>> instance.x.value = 2
>>> instance.x.fixed = True
and
>>> instance.x.fix()
is equivalent to
>>> instance.x.fixed = True
Extending the Objective Function¶
One can add terms to an objective function of a ConcreteModel (or
an instantiated AbstractModel) using the expr attribute
of the objective function object. Here is a simple example:
>>> import pyomo.environ as pyo
>>> from pyomo.opt import SolverFactory
>>> model = pyo.ConcreteModel()
>>> model.x = pyo.Var(within=pyo.PositiveReals)
>>> model.y = pyo.Var(within=pyo.PositiveReals)
>>> model.sillybound = pyo.Constraint(expr = model.x + model.y <= 2)
>>> model.obj = pyo.Objective(expr = 20 * model.x)
>>> opt = SolverFactory('glpk')
>>> opt.solve(model)
>>> model.pprint()
>>> print (" extend obj ")
>>> model.obj.expr += 10 * model.y
>>> opt = SolverFactory('cplex')
>>> opt.solve(model)
>>> model.pprint()
Activating and Deactivating Objectives¶
Multiple objectives can be declared, but only one can be active at a
time (at present, Pyomo does not support any solvers that can be given
more than one objective). If both model.obj1
and model.obj2
have
been declared using Objective
, then one can ensure that
model.obj2
is passed to the solver as shown in this simple example:
>>> model = pyo.ConcreteModel()
>>> model.obj1 = pyo.Objective(expr = 0)
>>> model.obj2 = pyo.Objective(expr = 0)
>>> model.obj1.deactivate()
>>> model.obj2.activate()
For abstract models this would be done prior to instantiation or else
the activate
and deactivate
calls would be on the instance
rather than the model.
Activating and Deactivating Constraints¶
Constraints can be temporarily disabled using the deactivate()
method.
When the model is sent to a solver, inactive constraints are not included.
Disabled constraints can be re-enabled using the activate()
method.
>>> model = pyo.ConcreteModel()
>>> model.v = pyo.Var()
>>> model.con = pyo.Constraint(expr=model.v**2 + model.v >= 3)
>>> model.con.deactivate()
>>> model.con.activate()
Indexed constraints can be deactivated/activated as a whole or by individual index:
>>> model = pyo.ConcreteModel()
>>> model.s = pyo.Set(initialize=[1,2,3])
>>> model.v = pyo.Var(model.s)
>>> def _con(m, s):
...     return m.v[s]**2 + m.v[s] >= 3
>>> model.con = pyo.Constraint(model.s, rule=_con)
>>> model.con.deactivate() # Deactivate all indices
>>> model.con[1].activate() # Activate single index
Accessing Variable Values¶
Primal Variable Values¶
Often, the point of optimization is to get optimal values of variables. Some users may want to process the values in a script. We will describe how to access a particular variable from a Python script as well as how to access all variables from a Python script and from a callback. This should enable the reader to understand how to get the access that they desire. The Iterative example given above also illustrates access to variable values.
One Variable from a Python Script¶
Assuming the model has been instantiated and solved and the results have
been loaded back into the instance object, then we can make use of the
fact that the variable is a member of the instance object and its value
can be accessed using its value
member. For example, suppose the
model contains a variable named quant
that is a singleton (has no
indexes) and suppose further that the name of the instance object is
instance
. Then the value of this variable can be accessed using
pyo.value(instance.quant)
. Variables with indexes can be referenced
by supplying the index.
Consider the following very simple example, which is similar to the
iterative example. This is a concrete model. In this example, the value
of x[2]
is accessed.
# noiteration1.py
import pyomo.environ as pyo
from pyomo.opt import SolverFactory
# Create a solver
opt = SolverFactory('glpk')
#
# A simple model with binary variables and
# an empty constraint list.
#
model = pyo.ConcreteModel()
model.n = pyo.Param(default=4)
model.x = pyo.Var(pyo.RangeSet(model.n), within=pyo.Binary)
def o_rule(model):
    return pyo.summation(model.x)
model.o = pyo.Objective(rule=o_rule)
model.c = pyo.ConstraintList()
results = opt.solve(model)
if pyo.value(model.x[2]) == 0:
    print("The second index has a zero")
else:
    print("x[2]=", pyo.value(model.x[2]))
Note
If this script is run without modification, Pyomo is likely to issue a warning because there are no constraints. The warning is because some solvers may fail if given a problem instance that does not have any constraints.
All Variables from a Python Script¶
As with one variable, we assume that the model has been instantiated
and solved. Assuming the instance object has the name instance
,
the following code snippet displays all variables and their values:
>>> for v in instance.component_objects(pyo.Var, active=True):
...     print("Variable", v)  # doctest: +SKIP
...     for index in v:
...         print("   ", index, pyo.value(v[index]))  # doctest: +SKIP
Alternatively,
>>> for v in instance.component_data_objects(pyo.Var, active=True):
...     print(v, pyo.value(v))  # doctest: +SKIP
This code could be improved by checking to see if the variable is not
indexed (i.e., the only index value is None
), then the code could
print the value without the word None
next to it.
Assuming again that the model has been instantiated and solved and the results have been loaded back into the instance object, here is a code snippet for fixing all integers at their current value:
>>> for var in instance.component_data_objects(pyo.Var, active=True):
...     if not var.is_continuous():
...         print("fixing " + str(var))  # doctest: +SKIP
...         var.fixed = True  # fix the current value
Another way to access all of the variables (particularly if there are blocks) is as follows (this particular snippet assumes that from pyomo.environ import * was used instead of import pyomo.environ as pyo):
for v in model.component_objects(Var, descend_into=True):
    print("FOUND VAR: " + v.name)
    v.pprint()

for v_data in model.component_data_objects(Var, descend_into=True):
    print("Found: " + v_data.name + ", value = " + str(value(v_data)))
Accessing Parameter Values¶
Accessing parameter values is completely analogous to accessing variable values. For example, here is a code snippet to print the name and value of every Parameter in a model:
>>> for parmobject in instance.component_objects(pyo.Param, active=True):
...     nametoprint = str(parmobject.name)
...     print("Parameter ", nametoprint)  # doctest: +SKIP
...     for index in parmobject:
...         vtoprint = pyo.value(parmobject[index])
...         print("   ", index, vtoprint)  # doctest: +SKIP
Accessing Duals¶
Access to dual values in scripts is similar to accessing primal variable values, except that dual values are not captured by default so additional directives are needed before optimization to signal that duals are desired.
To get duals without a script, use the pyomo option
--solver-suffixes='dual', which will cause dual values to be included
in output. Note: in addition to duals (dual), reduced costs
(rc) and slack values (slack) can be requested. All suffixes can
be requested using the pyomo option --solver-suffixes='.*'
Warning
Some of the duals may have the value None
, rather than 0
.
Access Duals in a Python Script¶
To signal that duals are desired, declare a Suffix component with the name “dual” on the model or instance with an IMPORT or IMPORT_EXPORT direction.
# Create a 'dual' suffix component on the instance
# so the solver plugin will know which suffixes to collect
instance.dual = pyo.Suffix(direction=pyo.Suffix.IMPORT)
See the section on Suffixes for more information on Pyomo’s Suffix component. After the results are obtained and loaded into an instance, duals can be accessed in the following fashion.
# display all duals
print ("Duals")
for c in instance.component_objects(pyo.Constraint, active=True):
    print("   Constraint", c)
    for index in c:
        print("      ", index, instance.dual[c[index]])
The following snippet will only work, of course, if there is a
constraint with the name AxbConstraint that has an index, which is
the string Film.
# access one dual
print ("Dual for Film=", instance.dual[instance.AxbConstraint['Film']])
Here is a complete example that relies on the file abstract2.py
to
provide the model and the file abstract2.dat
to provide the
data. Note that the model in abstract2.py
does contain a constraint
named AxbConstraint
and abstract2.dat
does specify an index for
it named Film
.
# driveabs2.py
from __future__ import division
import pyomo.environ as pyo
from pyomo.opt import SolverFactory
# Create a solver
opt = SolverFactory('cplex')
# get the model from another file
from abstract2 import model
# Create a model instance and optimize
instance = model.create_instance('abstract2.dat')
# Create a 'dual' suffix component on the instance
# so the solver plugin will know which suffixes to collect
instance.dual = pyo.Suffix(direction=pyo.Suffix.IMPORT)
results = opt.solve(instance)
# also puts the results back into the instance for easy access
# display all duals
print ("Duals")
for c in instance.component_objects(pyo.Constraint, active=True):
    print("   Constraint", c)
    for index in c:
        print("      ", index, instance.dual[c[index]])
# access one dual
print ("Dual for Film=", instance.dual[instance.AxbConstraint['Film']])
Concrete models are slightly different because the model is the
instance. Here is a complete example that relies on the file
concrete1.py
to provide the model and instantiate it.
# driveconc1.py
from __future__ import division
import pyomo.environ as pyo
from pyomo.opt import SolverFactory
# Create a solver
opt = SolverFactory('cplex')
# get the model from another file
from concrete1 import model
# Create a 'dual' suffix component on the instance
# so the solver plugin will know which suffixes to collect
model.dual = pyo.Suffix(direction=pyo.Suffix.IMPORT)
results = opt.solve(model) # also load results to model
# display all duals
print ("Duals")
for c in model.component_objects(pyo.Constraint, active=True):
    print("   Constraint", c)
    for index in c:
        print("      ", index, model.dual[c[index]])
Accessing Slacks¶
The functions lslack() and uslack() return the lower and upper
slacks, respectively, for a constraint.
Accessing Solver Status¶
After a solve, the results object has a member Solution.Status
that
contains the solver status. The following snippet shows an example of
access via a print
statement:
results = opt.solve(instance)
print("The solver returned a status of: " + str(results.solver.status))
The use of the Python str function to cast the value to a string
makes it easy to test it. In particular, the value ‘optimal’ indicates
that the solver succeeded. It is also possible to access Pyomo data that
can be compared with the solver status as in the following code snippet:
from pyomo.opt import SolverStatus, TerminationCondition
#...
if (results.solver.status == SolverStatus.ok) and (results.solver.termination_condition == TerminationCondition.optimal):
    print("this is feasible and optimal")
elif results.solver.termination_condition == TerminationCondition.infeasible:
    print("do something about it? or exit?")
else:
    # something else is wrong
    print(str(results.solver))
Alternatively,
from pyomo.opt import TerminationCondition
...
results = opt.solve(model, load_solutions=False)
if results.solver.termination_condition == TerminationCondition.optimal:
    model.solutions.load_from(results)
else:
    print("Solution is not optimal")
    # now do something about it? or exit? ...
Display of Solver Output¶
To see the output of the solver, use the option tee=True
as in
results = opt.solve(instance, tee=True)
This can be useful for troubleshooting solver difficulties.
Sending Options to the Solver¶
Most solvers accept options and Pyomo can pass options through to a solver. In scripts or callbacks, the options can be attached to the solver object by adding to its options dictionary as illustrated by this snippet:
optimizer = pyo.SolverFactory('cbc')
optimizer.options["threads"] = 4
If multiple options are needed, then multiple dictionary entries should be added.
Sometimes it is desirable to pass options as part of the call to the solve function as in this snippet:
results = optimizer.solve(instance, options="threads=4", tee=True)
The quoted string is passed directly to the solver. If multiple options
need to be passed to the solver in this way, they should be separated by
a space within the quoted string. Notice that tee
is a Pyomo option
and is solverindependent, while the string argument to options
is
passed to the solver with very little processing by Pyomo. If the
solver does not have a “threads” option, it will probably complain, but
Pyomo will not.
There are no default values for options on a SolverFactory
object. If you directly modify its options dictionary, as was done
above, those options will persist across every call to
optimizer.solve(…)
unless you delete them from the options
dictionary. You can also pass a dictionary of options into the
opt.solve(…)
method using the options
keyword. Those options
will only persist within that solve and temporarily override any
matching options in the options dictionary on the solver object.
Specifying the Path to a Solver¶
Often, the executables for solvers are in the path; however, for situations where they are not, the SolverFactory function accepts the keyword executable, which you can use to set an absolute or relative path to a solver executable. E.g.,
opt = pyo.SolverFactory("ipopt", executable="../ipopt")
Warm Starts¶
Some solvers support a warm start based on current values of
variables. To use this feature, set the values of variables in the
instance and pass warmstart=True
to the solve()
method. E.g.,
instance = model.create_instance()
instance.y[0] = 1
instance.y[1] = 0
opt = pyo.SolverFactory("cplex")
results = opt.solve(instance, warmstart=True)
Note
The Cplex and Gurobi LP file (and Python) interfaces will generate an MST file with the variable data and hand this off to the solver in addition to the LP file.
Warning
Solvers using the NL file interface (e.g., “gurobi_ampl”, “cplexamp”) do not accept warmstart as a keyword to the solve() method as the NL file format, by default, includes variable initialization data (drawn from the current value of all variables).
Solving Multiple Instances in Parallel¶
Use of parallel solvers for PySP is discussed in the section on parallel PySP, Solving Subproblems in Parallel and/or Remotely.
Solvers are controlled by solver servers. The pyro mip solver server is launched with the command pyro_mip_server. This command may be repeated to launch as many solvers as are desired. A name server and a dispatch server must be running and accessible to the process that runs the script that will use the mip servers, as well as to the mip servers. The name server is launched using the command pyomo_ns and then the dispatch server is launched with dispatch_srvr. Note that both commands contain an underscore. Both programs keep running until terminated by an external signal, so it is common to pipe their output to a file. The commands are:
- Once: pyomo_ns
- Once: dispatch_srvr
- Multiple times: pyro_mip_server
This example demonstrates how to use these services to solve two instances in parallel.
# parallel.py
from pyomo.environ import *
from pyomo.opt import SolverFactory
from pyomo.opt.parallel import SolverManagerFactory
import sys

action_handle_map = {}  # maps action handles to instances

# Create a solver
optsolver = SolverFactory('cplex')

# create a solver manager
# 'pyro' could be replaced with 'serial'
solver_manager = SolverManagerFactory('pyro')
if solver_manager is None:
    print("Failed to create solver manager.")
    sys.exit(1)

#
# A simple model with binary variables and
# an empty constraint list.
#
model = AbstractModel()
model.n = Param(default=4)
model.x = Var(RangeSet(model.n), within=Binary)
def o_rule(model):
    return summation(model.x)
model.o = Objective(rule=o_rule)
model.c = ConstraintList()

# Create two model instances
instance1 = model.create_instance()
instance2 = model.create_instance()
instance2.x[1] = 1
instance2.x[1].fixed = True

# send them to the solver(s)
action_handle = solver_manager.queue(instance1, opt=optsolver, warmstart=False, tee=True, verbose=False)
action_handle_map[action_handle] = "Original"
action_handle = solver_manager.queue(instance2, opt=optsolver, warmstart=False, tee=True, verbose=False)
action_handle_map[action_handle] = "One Var Fixed"

# retrieve the solutions
for i in range(2):  # we know there are two instances
    this_action_handle = solver_manager.wait_any()
    solved_name = action_handle_map[this_action_handle]
    results = solver_manager.get_results(this_action_handle)
    print("Results for", solved_name)
    print(results)
This example creates two instances that are very similar and then sends them to be dispatched to solvers. If there are two solvers, then these problems could be solved in parallel (we say “could” because for such trivial problems to be actually solved in parallel, the solvers would have to be very, very slow). This example is nonsensical; the goal is simply to show the use of solver_manager.queue to submit jobs to a name server for dispatch to solver servers, and solver_manager.wait_any to recover the results. The wait_all function is similar, but it takes a list of action handles (returned by queue) as an argument and returns all of the results at once.
Changing the temporary directory¶
A “temporary” directory is used for many intermediate files. Normally, the name of the directory for temporary files is provided by the operating system, but the user can specify their own directory name. The pyomo command-line tempdir option propagates through to the TempFileManager service. One can accomplish the same through the following few lines of code in a script:
from pyutilib.services import TempfileManager
TempfileManager.tempdir = YourDirectoryNameGoesHere
Working with Abstract Models¶
Instantiating Models¶
If you start with a ConcreteModel, each component you add to the model will be fully constructed and initialized at the time it is attached to the model. However, if you are starting with an AbstractModel, construction occurs in two phases. When you first declare and attach components to the model, those components are empty containers and not fully constructed, even if you explicitly provide data.
>>> import pyomo.environ as pyo
>>> model = pyo.AbstractModel()
>>> model.is_constructed()
False
>>> model.p = pyo.Param(initialize=5)
>>> model.p.is_constructed()
False
>>> model.I = pyo.Set(initialize=[1,2,3])
>>> model.x = pyo.Var(model.I)
>>> model.x.is_constructed()
False
If you look at the model at this point, you will see that everything is “empty”:
>>> model.pprint()
1 Set Declarations
I : Size=0, Index=None, Ordered=Insertion
Not constructed
1 Param Declarations
p : Size=0, Index=None, Domain=Any, Default=None, Mutable=False
Not constructed
1 Var Declarations
x : Size=0, Index=I
Not constructed
3 Declarations: p I x
Before you can manipulate modeling components or solve the model, you must first create a concrete instance by applying data to your abstract model. This can be done using the create_instance() method, which takes the abstract model and optional data and returns a new concrete instance by constructing each of the model components in the order in which they were declared (attached to the model). Note that the instance creation is performed “out of place”; that is, the original abstract model is left untouched.
>>> instance = model.create_instance()
>>> model.is_constructed()
False
>>> type(instance)
<class 'pyomo.core.base.PyomoModel.ConcreteModel'>
>>> instance.is_constructed()
True
>>> instance.pprint()
1 Set Declarations
I : Size=1, Index=None, Ordered=Insertion
Key : Dimen : Domain : Size : Members
None : 1 : Any : 3 : {1, 2, 3}
1 Param Declarations
p : Size=1, Index=None, Domain=Any, Default=None, Mutable=False
Key : Value
None : 5
1 Var Declarations
x : Size=3, Index=I
Key : Lower : Value : Upper : Fixed : Stale : Domain
1 : None : None : None : False : True : Reals
2 : None : None : None : False : True : Reals
3 : None : None : None : False : True : Reals
3 Declarations: p I x
Note
AbstractModel users should note that in some examples, your concrete model instance is called “instance” and not “model”. This is the case here, where we are explicitly calling instance = model.create_instance().
The create_instance() method can also take a reference to external data, which overrides any data specified in the original component declarations. The data can be provided from several sources, including a dict, DataPortal, or DAT file. For example:
>>> instance2 = model.create_instance({None: {'I': {None: [4,5]}}})
>>> instance2.pprint()
1 Set Declarations
I : Size=1, Index=None, Ordered=Insertion
Key : Dimen : Domain : Size : Members
None : 1 : Any : 2 : {4, 5}
1 Param Declarations
p : Size=1, Index=None, Domain=Any, Default=None, Mutable=False
Key : Value
None : 5
1 Var Declarations
x : Size=2, Index=I
Key : Lower : Value : Upper : Fixed : Stale : Domain
4 : None : None : None : False : True : Reals
5 : None : None : None : False : True : Reals
3 Declarations: p I x
Managing Data in AbstractModels¶
There are roughly three ways of using data to construct a Pyomo model:
- use standard Python objects,
- initialize a model with data loaded with a DataPortal object, and
- load model data from a Pyomo data command file.
Standard Python data objects include native Python data types (e.g. lists, sets, and dictionaries) as well as standard data formats like numpy arrays and Pandas data frames. Standard Python data objects can be used to define constant values in a Pyomo model, and they can be used to initialize Set and Param components. However, initializing Set and Param components in this manner provides few advantages over direct use of standard Python data objects. (An important exception is that components indexed by Set objects use less memory than components indexed by native Python data.)
The DataPortal class provides a generic facility for loading data from disparate sources. A DataPortal object can load data in a consistent manner, and this data can be used to simply initialize all Set and Param components in a model. DataPortal objects can be used to initialize both concrete and abstract models in a uniform manner, which is important in some scripting applications. But in practice, this capability is only necessary for abstract models, whose data components are initialized after being constructed. (In fact, all abstract data components in an abstract model are loaded from DataPortal objects.)
Finally, Pyomo data command files provide a convenient mechanism for initializing Set and Param components with a high-level data specification. Data command files can be used with both concrete and abstract models, though in a different manner. Data command files are parsed using a DataPortal object, which must be done explicitly for a concrete model. However, abstract models can load data from a data command file directly, after the model is constructed. Again, this capability is only necessary for abstract models, whose data components are initialized after being constructed.
The following sections provide more detail about how data can be used to initialize Pyomo models.
Using Standard Data Types¶
Defining Constant Values¶
In many cases, Pyomo models can be constructed without Set and Param data components. Native Python data types can simply be used to define constant values in Pyomo expressions. Consequently, Python sets, lists and dictionaries can be used to construct Pyomo models, as well as a wide range of other Python classes.
TODO
More examples here: set, list, dict, numpy, pandas.
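As a placeholder for those examples, the sketch below shows native Python containers holding constant data; the Pyomo usage in the comments is illustrative, and all names (costs, capacities, products) are hypothetical:

```python
# Illustrative sketch: native Python containers holding constant data that
# could be used directly inside Pyomo expressions (names are hypothetical).
costs = {"widget": 2.5, "gadget": 4.0}   # dict of objective coefficients
capacities = [10, 20, 30]                # list of right-hand-side values
products = {"widget", "gadget"}          # set of labels

# In a ConcreteModel these values would appear directly in expressions, e.g.:
#   model.obj = Objective(expr=sum(costs[p] * model.x[p] for p in products))
#   model.cap = Constraint(expr=model.y <= capacities[0])

total_cost_bound = sum(costs[p] for p in products)
print(total_cost_bound)  # 6.5
```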
Initializing Set and Parameter Components¶
The Set and Param components used in a Pyomo model can also be initialized with standard Python data types. This enables some modeling efficiencies when manipulating sets (e.g. when reusing sets for indices), and it supports validation of set and parameter data values. The Set and Param components are initialized with Python data using the initialize option.
Set Components¶
In general, Set components can be initialized with iterable data. For example, simple sets can be initialized with:
list, set and tuple data:
model.A = Set(initialize=[2,3,5])
model.B = Set(initialize=set([2,3,5]))
model.C = Set(initialize=(2,3,5))
generators:
model.D = Set(initialize=range(9))
model.E = Set(initialize=(i for i in model.B if i%2 == 0))
numpy arrays:
f = numpy.array([2, 3, 5])
model.F = Set(initialize=f)
Sets can also be indirectly initialized with functions that return native Python data:
def g(model):
    return [2,3,5]
model.G = Set(initialize=g)
Indexed sets can be initialized with dictionary data where the dictionary values are iterable data:
H_init = {}
H_init[2] = [1,3,5]
H_init[3] = [2,4,6]
H_init[4] = [3,5,7]
model.H = Set([2,3,4],initialize=H_init)
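For illustration, the H_init dictionary above could also be built programmatically; the comprehension below (an assumption about the intended pattern, not part of the original example) produces the same data:

```python
# Build the H_init dictionary shown above with a comprehension instead of
# explicit assignments: for each index i, the values i-1, i+1, i+3.
H_init = {i: list(range(i - 1, i + 4, 2)) for i in [2, 3, 4]}
print(H_init)  # {2: [1, 3, 5], 3: [2, 4, 6], 4: [3, 5, 7]}
# model.H = Set([2, 3, 4], initialize=H_init)   # as in the snippet above
```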
Parameter Components¶
When a parameter is a single value, then a Param component can simply be initialized with a value:
model.a = Param(initialize=1.1)
More generally, Param components can be initialized with dictionary data where the dictionary values are single values:
model.b = Param([1,2,3], initialize={1:1, 2:2, 3:3})
Parameters can also be indirectly initialized with functions that return native Python data:
def c(model):
    return {1:1, 2:2, 3:3}
model.c = Param([1,2,3], initialize=c)
Using a Python Dictionary¶
Data can be passed to the model create_instance() method through a series of nested native Python dictionaries. The structure begins with a dictionary of namespaces, with the only required entry being the None namespace. Each namespace contains a dictionary that maps component names to dictionaries of component values. For scalar components, the required data dictionary maps the implicit index None to the desired value:
>>> from pyomo.environ import *
>>> m = AbstractModel()
>>> m.I = Set()
>>> m.p = Param()
>>> m.q = Param(m.I)
>>> m.r = Param(m.I, m.I, default=0)
>>> data = {None: {
...     'I': {None: [1,2,3]},
...     'p': {None: 100},
...     'q': {1: 10, 2:20, 3:30},
...     'r': {(1,1): 110, (1,2): 120, (2,3): 230},
... }}
>>> i = m.create_instance(data)
>>> i.pprint()
2 Set Declarations
I : Size=1, Index=None, Ordered=Insertion
Key : Dimen : Domain : Size : Members
None : 1 : Any : 3 : {1, 2, 3}
r_index : Size=1, Index=None, Ordered=True
Key : Dimen : Domain : Size : Members
None : 2 : I*I : 9 : {(1, 1), (1, 2), (1, 3), (2, 1), (2, 2), (2, 3), (3, 1), (3, 2), (3, 3)}
3 Param Declarations
p : Size=1, Index=None, Domain=Any, Default=None, Mutable=False
Key : Value
None : 100
q : Size=3, Index=I, Domain=Any, Default=None, Mutable=False
Key : Value
1 : 10
2 : 20
3 : 30
r : Size=9, Index=r_index, Domain=Any, Default=0, Mutable=False
Key : Value
(1, 1) : 110
(1, 2) : 120
(2, 3) : 230
5 Declarations: I p q r_index r
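The nested dictionary can also be assembled programmatically before it is passed to create_instance(). The sketch below only builds and checks the data structure, so it runs without Pyomo; the component names match the doctest above:

```python
# Build the {namespace: {component: {index: value}}} structure shown above.
index_set = [1, 2, 3]
data = {None: {
    "I": {None: index_set},
    "p": {None: 100},
    "q": {i: 10 * i for i in index_set},
    "r": {(1, 1): 110, (1, 2): 120, (2, 3): 230},
}}

# Scalar components use the implicit index None; indexed components map
# each index (or index tuple) to its value.
assert data[None]["q"] == {1: 10, 2: 20, 3: 30}
# instance = m.create_instance(data)   # as in the doctest above
```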
Data Command Files¶
Note
The discussion and presentation below are adapted from Chapter 6 of the “Pyomo Book” [PyomoBookII]. The discussion of the DataPortal class uses these same examples to illustrate how data can be loaded into Pyomo models within Python scripts (see the Data Portals section).
Model Data¶
Pyomo’s data command files employ a domain-specific language whose syntax closely resembles the syntax of AMPL’s data commands [AMPL]. A data command file consists of a sequence of commands that either (a) specify set and parameter data for a model, or (b) specify where such data is to be obtained from external sources (e.g. table files, CSV files, spreadsheets and databases).
The following commands are used to declare data:
- The set command declares set data.
- The param command declares a table of parameter data, which can also include the declaration of the set data used to index the parameter data.
- The table command declares a two-dimensional table of parameter data.
- The load command defines how set and parameter data is loaded from external data sources, including ASCII table files, CSV files, XML files, YAML files, JSON files, ranges in spreadsheets, and database tables.
The following commands are also used in data command files:
- The include command specifies a data command file that is processed immediately.
- The data and end commands do not perform any actions, but they provide compatibility with AMPL scripts that define data commands.
- The namespace keyword allows data commands to be organized into named groups that can be enabled or disabled during model construction.
The following data types can be represented in a data command file:
- Numeric value: Any Python numeric value (e.g. integer, float, scientific notation, or boolean).
- Simple string: A sequence of alphanumeric characters.
- Quoted string: A simple string that is included in a pair of single or double quotes. A quoted string can include quotes within the quoted string.
Numeric values are automatically converted to Python integer or floating point values when a data command file is parsed. Additionally, if a quoted string can be interpreted as a numeric value, then it will be converted to Python numeric types when the data is parsed. For example, the string “100” is converted to a numeric value automatically.
Warning
Pyomo data commands do not exactly correspond to AMPL data commands. The set and param commands are designed to closely match AMPL’s syntax and semantics, though these commands only support a subset of the corresponding declarations in AMPL. However, other Pyomo data commands are not generally designed to match the semantics of AMPL.
Note
Pyomo data commands are terminated with a semicolon, and the syntax of data commands does not depend on whitespace. Thus, data commands can be broken across multiple lines – newlines and tab characters are ignored – and data commands can be formatted with whitespace with few restrictions.
The set Command¶
Simple Sets¶
The set
data command explicitly specifies the members of either a
single set or an array of sets, i.e., an indexed set. A single set is
specified with a list of data values that are included in this set. The
formal syntax for the set data command is:
set <setname> := [<value>] ... ;
A set may be empty, and it may contain any combination of numeric and
nonnumeric string values. For example, the following are valid set
commands:
# An empty set
set A := ;
# A set of numbers
set A := 1 2 3;
# A set of strings
set B := north south east west;
# A set of mixed types
set C :=
0
1.0e+10
'foo bar'
infinity
"100"
;
Sets of Tuple Data¶
The set
data command can also specify tuple data with the standard
notation for tuples. For example, suppose that set A contains 3-tuples:
model.A = Set(dimen=3)
The following set
data command then specifies that A
is the set
containing the tuples (1,2,3)
and (4,5,6)
:
set A := (1,2,3) (4,5,6) ;
Alternatively, set data can simply be listed in the order that the tuple is represented:
set A := 1 2 3 4 5 6 ;
Obviously, the number of data elements specified using this syntax should be a multiple of the set dimension.
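The mapping from a flat list of values to tuples can be sketched in plain Python (this is an illustration of the rule, not Pyomo's parser):

```python
# Sketch of how a flat list of values maps onto tuples of a set with a
# given dimension: consecutive groups of `dimen` values form one tuple each.
def chunk_into_tuples(values, dimen):
    if len(values) % dimen != 0:
        raise ValueError("number of values must be a multiple of the dimension")
    return [tuple(values[i:i + dimen]) for i in range(0, len(values), dimen)]

print(chunk_into_tuples([1, 2, 3, 4, 5, 6], 3))  # [(1, 2, 3), (4, 5, 6)]
```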
Sets with 2-tuple data can also be specified in a matrix denoting set membership. For example, the following set data command declares 2-tuples in A using plus (+) to denote valid tuples and minus (-) to denote invalid tuples:
set A : A1 A2 A3 A4 :=
1 +   +
2 +  + 
3  +   ;
This data command declares the following five 2-tuples: ('A1',1), ('A1',2), ('A2',3), ('A3',2), and ('A4',1).
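The matrix form's meaning can be sketched in plain Python: a plus sign in row r under column c declares membership of the tuple (c, r). This illustrative parser (not Pyomo's) reproduces the five tuples listed above:

```python
# Sketch of the matrix form's meaning: a '+' in row r, column c declares
# membership of the 2-tuple (c, r).  Parses the example matrix above.
columns = ["A1", "A2", "A3", "A4"]
rows = {1: "+--+", 2: "+-+-", 3: "-+--"}

tuples = sorted(
    (col, row)
    for row, marks in rows.items()
    for col, mark in zip(columns, marks)
    if mark == "+"
)
print(tuples)  # [('A1', 1), ('A1', 2), ('A2', 3), ('A3', 2), ('A4', 1)]
```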
Finally, a set of tuple data can be concisely represented with tuple templates that represent a slice of tuple data. For example, suppose that the set A contains 4-tuples:
model.A = Set(dimen=4)
The following set
data command declares groups of tuples that are
defined by a template and data to complete this template:
set A :=
(1,2,*,4) A B
(*,2,*,4) A B C D ;
A tuple template consists of a tuple that contains one or more asterisk (*) symbols instead of a value. These represent indices where the tuple value is replaced by the values from the list of values that follows the tuple template. In this example, the following tuples are in set A:
(1, 2, 'A', 4)
(1, 2, 'B', 4)
('A', 2, 'B', 4)
('C', 2, 'D', 4)
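Tuple-template expansion can be sketched as follows: each asterisk is a hole, and the value list is consumed in groups of one value per hole. This is an illustration of the rule, not Pyomo's parser:

```python
# Sketch of tuple-template expansion: each '*' in the template is a hole;
# the value list is consumed in groups of (number of holes), one group
# per generated tuple.
def expand_template(template, values):
    holes = [i for i, v in enumerate(template) if v == "*"]
    result = []
    for start in range(0, len(values), len(holes)):
        group = values[start:start + len(holes)]
        filled = list(template)
        for pos, val in zip(holes, group):
            filled[pos] = val
        result.append(tuple(filled))
    return result

print(expand_template((1, 2, "*", 4), ["A", "B"]))
print(expand_template(("*", 2, "*", 4), ["A", "B", "C", "D"]))
```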
Set Arrays¶
The set
data command can also be used to declare data for a set
array. Each set in a set array must be declared with a separate set
data command with the following syntax:
set <setname>[<index>] := [<value>] ... ;
Because set arrays can be indexed by an arbitrary set, the index value may be a numeric value, a non-numeric string value, or a comma-separated list of string values.
Suppose that a set A
is used to index a set B
as follows:
model.A = Set()
model.B = Set(model.A)
Then set B
is indexed using the values declared for set A
:
set A := 1 aaa 'a b';
set B[1] := 0 1 2;
set B[aaa] := aa bb cc;
set B['a b'] := 'aa bb cc';
The param Command¶
Simple or nonindexed parameters are declared in an obvious way, as shown by these examples:
param A := 1.4;
param B := 1;
param C := abc;
param D := true;
param E := 1.0e+04;
Parameters can be defined with numeric data, simple strings and quoted strings. Note that parameters cannot be defined without data, so there is no analog to the specification of an empty set.
One-dimensional Parameter Data¶
Most parameter data is indexed over one or more sets, and there are a number of ways the param data command can be used to specify indexed parameter data. One-dimensional parameter data is indexed over a single set. Suppose that the parameter B is a parameter indexed by the set A:
model.A = Set()
model.B = Param(model.A)
A param data command can specify values for B with a list of index-value pairs:
set A := a c e;
param B := a 10 c 30 e 50;
Because whitespace is ignored, this example data command file can be reorganized to specify the same data in a tabular format:
set A := a c e;
param B :=
a 10
c 30
e 50
;
Multiple parameters can be defined using a single param
data
command. For example, suppose that parameters B
, C
, and D
are onedimensional parameters all indexed by the set A
:
model.A = Set()
model.B = Param(model.A)
model.C = Param(model.A)
model.D = Param(model.A)
Values for these parameters can be specified using a single param
data command that declares these parameter names followed by a list of
index and parameter values:
set A := a c e;
param : B C D :=
a 10 1 1.1
c 30 3 3.3
e 50 5 5.5
;
The values in the param
data command are interpreted as a list of
sublists, where each sublist consists of an index followed by the
corresponding numeric value.
Note that parameter values do not need to be defined for all indices. For example, the following data command file is valid:
set A := a c e g;
param : B C D :=
a 10 1 1.1
c 30 3 3.3
e 50 5 5.5
;
The index g
is omitted from the param
command, and consequently
this index is not valid for the model instance that uses this data.
More complex patterns of missing data can be specified using the period
(.
) symbol to indicate a missing value. This syntax is useful when
specifying multiple parameters that do not necessarily have the same
index values:
set A := a c e;
param : B C D :=
a . 1 1.1
c 30 . 3.3
e 50 5 .
;
This example provides a concise representation of parameters that share a common index set while using different index values.
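The treatment of the period symbol can be sketched in plain Python: each row supplies an index and one entry per parameter, and a period means the value is left undefined. This illustrative parser (not Pyomo's) processes the table above:

```python
# Sketch of how '.' entries are interpreted: each row gives an index
# followed by one value per parameter, and '.' means "no value defined".
header = ["B", "C", "D"]
rows = [
    ("a", [".", "1", "1.1"]),
    ("c", ["30", ".", "3.3"]),
    ("e", ["50", "5", "."]),
]

params = {name: {} for name in header}
for index, values in rows:
    for name, value in zip(header, values):
        if value != ".":
            params[name][index] = float(value)

print(sorted(params["B"]))  # ['c', 'e']  -- no value for index 'a'
```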
Note that this data file specifies the data for set A
twice:
(1) when A
is defined and (2) implicitly when the parameters are
defined. An alternate syntax for param
allows the user to concisely
specify the definition of an index set along with associated parameters:
param : A : B C D :=
a 10 1 1.1
c 30 3 3.3
e 50 5 5.5
;
Finally, we note that default values for missing data can also be
specified using the default
keyword:
set A := a c e;
param B default 0.0 :=
c 30
e 50
;
Note that default values can only be specified in param
commands
that define values for a single parameter.
Multi-Dimensional Parameter Data¶
Multi-dimensional parameter data is indexed over either multiple sets or a single multi-dimensional set. Suppose that parameter B is a parameter indexed by set A that has dimension 2:
model.A = Set(dimen=2)
model.B = Param(model.A)
The syntax of the param
data command remains essentially the same
when specifying values for B
with a list of index and parameter
values:
set A := a 1 c 2 e 3;
param B :=
a 1 10
c 2 30
e 3 50;
Missing and default values are also handled in the same way with multidimensional index sets:
set A := a 1 c 2 e 3;
param B default 0 :=
a 1 10
c 2 .
e 3 50;
Similarly, multiple parameters can be defined with a single param data command. Suppose that parameters B, C, and D are parameters indexed over set A that has dimension 2:
model.A = Set(dimen=2)
model.B = Param(model.A)
model.C = Param(model.A)
model.D = Param(model.A)
These parameters can be defined with a single param
command that
declares the parameter names followed by a list of index and parameter
values:
set A := a 1 c 2 e 3;
param : B C D :=
a 1 10 1 1.1
c 2 30 3 3.3
e 3 50 5 5.5
;
Similarly, the following param
data command defines the index set
along with the parameters:
param : A : B C D :=
a 1 10 1 1.1
c 2 30 3 3.3
e 3 50 5 5.5
;
The param command also supports a matrix syntax for specifying the values in a parameter that has a 2-dimensional index. Suppose parameter B is indexed over set A that has dimension 2:
model.A = Set(dimen=2)
model.B = Param(model.A)
The following param
command defines a matrix of parameter values:
set A := 1 a 1 c 1 e 2 a 2 c 2 e 3 a 3 c 3 e;
param B : a c e :=
1 1 2 3
2 4 5 6
3 7 8 9
;
Additionally, the following syntax can be used to specify a transposed matrix of parameter values:
set A := 1 a 1 c 1 e 2 a 2 c 2 e 3 a 3 c 3 e;
param B (tr) : 1 2 3 :=
a 1 4 7
c 2 5 8
e 3 6 9
;
This functionality facilitates the presentation of parameter data in a natural format. In particular, the transpose syntax may allow the specification of tables for which the rows comfortably fit within a single line. However, a matrix may be divided columnwise into shorter rows since the line breaks are not significant in Pyomo data commands.
For parameters with three or more indices, the parameter data values may
be specified as a series of slices. Each slice is defined by a template
followed by a list of index and parameter values. Suppose that
parameter B
is indexed over set A
that has dimension 4:
model.A = Set(dimen=4)
model.B = Param(model.A)
The following param
command defines a matrix of parameter values
with multiple templates:
set A := (a,1,a,1) (a,2,a,2) (b,1,b,1) (b,2,b,2);
param B :=
[*,1,*,1] a a 10 b b 20
[*,2,*,2] a a 30 b b 40
;
The B parameter consists of four values: B[a,1,a,1]=10, B[b,1,b,1]=20, B[a,2,a,2]=30, and B[b,2,b,2]=40.
The table Command¶
The table data command explicitly specifies a two-dimensional array of parameter data. This command provides a more flexible and complete data declaration than is possible with a param declaration. The following example illustrates a simple table command that declares data for a single parameter:
table M(A) :
A B M N :=
A1 B1 4.3 5.3
A2 B2 4.4 5.4
A3 B3 4.5 5.5
;
The parameter M is indexed by column A, which must be pre-defined unless declared separately (see below). The column labels are provided after the colon and before the colon-equal (:=). Subsequently, the table data is provided. The syntax is not sensitive to whitespace, so the following is an equivalent table command:
table M(A) :
A B M N :=
A1 B1 4.3 5.3 A2 B2 4.4 5.4 A3 B3 4.5 5.5 ;
Multiple parameters can be declared by simply including additional parameter names. For example:
table M(A) N(A,B) :
A B M N :=
A1 B1 4.3 5.3
A2 B2 4.4 5.4
A3 B3 4.5 5.5
;
This example declares data for the M
and N
parameters, which
have different indexing columns. The indexing columns represent set
data, which is specified separately. For example:
table A={A} Z={A,B} M(A) N(A,B) :
A B M N :=
A1 B1 4.3 5.3
A2 B2 4.4 5.4
A3 B3 4.5 5.5
;
This example declares data for the M
and N
parameters, along
with the A
and Z
indexing sets. The correspondence between the
index set Z
and the indices of parameter N
can be made more
explicit by indexing N
by Z
:
table A={A} Z={A,B} M(A) N(Z) :
A B M N :=
A1 B1 4.3 5.3
A2 B2 4.4 5.4
A3 B3 4.5 5.5
;
Set data can also be specified independent of parameter data:
table Z={A,B} Y={M,N} :
A B M N :=
A1 B1 4.3 5.3
A2 B2 4.4 5.4
A3 B3 4.5 5.5
;
Warning
If a table
command does not explicitly indicate the indexing
sets, then these are assumed to be initialized separately. A
table
command can separately initialize sets and parameters in a
Pyomo model, and there is no presumed association between the data
that is initialized. For example, the table
command initializes
a set Z
and a parameter M
that are not related:
table Z={A,B} M(A):
A B M N :=
A1 B1 4.3 5.3
A2 B2 4.4 5.4
A3 B3 4.5 5.5
;
Finally, simple parameter values can also be specified with a table
command:
table pi := 3.1416 ;
The previous examples illustrated table commands where column labels are provided. The table command can also be used without column labels. For example, the first example can be revised to omit column labels as follows:
table columns=4 M(1)={3} :=
A1 B1 4.3 5.3
A2 B2 4.4 5.4
A3 B3 4.5 5.5
;
The columns=4 declaration is a keyword-value pair that defines the number of columns in this table; this must be explicitly specified in tables without column labels. The default column labels are integers starting from 1; the labels are columns 1, 2, 3, and 4 in this example. The M parameter is indexed by column 1. The braces syntax declares the column where the M data is provided.
Similarly, set data can be declared referencing the integer column labels:
table columns=4 A={1} Z={1,2} M(1)={3} N(1,2)={4} :=
A1 B1 4.3 5.3
A2 B2 4.4 5.4
A3 B3 4.5 5.5
;
Declared set names can also be used to index parameters:
table columns=4 A={1} Z={1,2} M(A)={3} N(Z)={4} :=
A1 B1 4.3 5.3
A2 B2 4.4 5.4
A3 B3 4.5 5.5
;
Finally, we compare and contrast the table
and param
commands.
Both commands can be used to declare parameter and set data, and both
commands can be used to declare a simple parameter. However, there are
some important differences between these data commands:
- The param command can declare a single set that is used to index one or more parameters. The table command can declare data for any number of sets, independent of whether they are used to index parameter data.
- The param command can declare data for multiple parameters only if they share the same index set. The table command can declare data for any number of parameters that may be indexed separately.
- The table syntax unambiguously describes the dimensionality of indexing sets. The param command must be interpreted with a model that provides the dimension of the indexing set.
This last point provides a key motivation for the table
command.
Specifically, the table
command can be used to reliably initialize
concrete models using Pyomo’s DataPortal
class. By contrast, the
param
command can only be used to initialize concrete models with
parameters that are indexed by a single column (i.e., a simple set).
The load Command¶
The load
command provides a mechanism for loading data from a
variety of external tabular data sources. This command loads a table of
data that represents set and parameter data in a Pyomo model. The table
consists of rows and columns for which all rows have the same length,
all columns have the same length, and the first row represents labels
for the column data.
The load
command can load data from a variety of different external
data sources:
 TAB File: A text file format that uses whitespace to separate columns of values in each row of a table.
 CSV File: A text file format that uses comma or other delimiters to separate columns of values in each row of a table.
 XML File: An extensible markup language for documents and data structures. XML files can represent tabular data.
 Excel File: A spreadsheet data format that is primarily used by the Microsoft Excel application.
 Database: A relational database.
This command uses a data manager that coordinates how data is extracted from a specified data source. In this way, the load command provides a generic mechanism that enables Pyomo models to interact with standard data repositories that are maintained in an application-specific manner.
Simple Load Examples¶
The simplest illustration of the load
command is specifying data for
an indexed parameter. Consider the file Y.tab
:
A Y
A1 3.3
A2 3.4
A3 3.5
This file specifies the values of parameter Y
which is indexed by
set A
. The following load
command loads the parameter data:
load Y.tab : [A] Y;
The first argument is the filename. The options after the colon
indicate how the table data is mapped to model data. Option [A]
indicates that set A
is used as the index, and option Y
indicates the parameter that is initialized.
Similarly, the following load command loads both the parameter data as
well as the index set A
:
load Y.tab : A=[A] Y;
The difference is the specification of the index set, A=[A]
, which
indicates that set A
is initialized with the index loaded from the
ASCII table file.
Set data can also be loaded from an ASCII table file that contains a single column of data:
A
A1
A2
A3
The format
option must be specified to denote the fact that the
relational data is being interpreted as a set:
load A.tab format=set : A;
Note that this allows for specifying set data that contains tuples.
Consider file C.tab
:
A B
A1 1
A1 2
A1 3
A2 1
A2 2
A2 3
A3 1
A3 2
A3 3
A similar load
syntax will load this data into set C
:
load C.tab format=set : C;
Note that this example requires that C
be declared with dimension
two.
Load Syntax Options¶
The syntax of the load
command is broken into two parts. The first
part ends with the colon, and it begins with a filename, database URL,
or DSN (data source name). Additionally, this first part can contain
option value pairs. The following options are recognized:
format: A string that denotes how the relational table is interpreted
password: The password that is used to access a database
query: The query that is used to request data from a database
range: The subset of a spreadsheet that is requested
user: The user name that is used to access the data source
using: The data manager that is used to process the data source
table: The database table that is requested
The format
option is the only option that is required for all data
managers. This option specifies how a relational table is interpreted
to represent set and parameter data. If the using
option is
omitted, then the filename suffix is used to select the data manager.
The remaining options are specific to spreadsheets and relational
databases (see below).
The second part of the load
command consists of the specification of
column names for indices and data. The remainder of this section
describes different specifications and how they define how data is
loaded into a model. Suppose file ABCD.tab
defines the following
relational table:
A B C D
A1 B1 1 10
A2 B2 2 20
A3 B3 3 30
There are many ways to interpret this relational table. It could
specify a set of 4-tuples, a parameter indexed by 3-tuples, two
parameters indexed by 2-tuples, and so on. Additionally, we may wish to
select a subset of this table to initialize data in a model.
Consequently, the load
command provides a variety of syntax options
for specifying how a table is interpreted.
A simple specification is to interpret the relational table as a set:
load ABCD.tab format=set : Z ;
Note that Z
is a set in the model that the data is being loaded
into. If this set does not exist, an error will occur while loading
data from this table.
Another simple specification is to interpret the relational table as a parameter indexed by 3-tuples:
load ABCD.tab : [A,B,C] D ;
Again, this requires that D
be a parameter in the model that the
data is being loaded into. Additionally, the index set for D
must
contain the indices that are specified in the table. The load
command also allows for the specification of the index set:
load ABCD.tab : Z=[A,B,C] D ;
This specifies that the index set is loaded into the Z
set in the
model. Similarly, data can be loaded into a parameter other than the one
specified in the relational table:
load ABCD.tab : Z=[A,B,C] Y=D ;
This specifies that the index set is loaded into the Z
set and that
the data in the D
column in the table is loaded into the Y
parameter.
This syntax allows the load
command to provide an arbitrary
specification of data mappings from columns in a relational table into
index sets and parameters. For example, suppose that a model is defined
with set Z
and parameters Y
and W
:
model.Z = Set()
model.Y = Param(model.Z)
model.W = Param(model.Z)
Then the following command defines how these data items are loaded using
columns B
, C
and D
:
load ABCD.tab : Z=[B] Y=D W=C;
When the using
option is omitted the data manager is inferred from
the filename suffix. However, the filename suffix does not always
reflect the format of the data it contains. For example, consider the
relational table in the file ABCD.txt
:
A,B,C,D
A1,B1,1,10
A2,B2,2,20
A3,B3,3,30
We can specify the using
option to load from this file into
parameter D
and set Z
:
load ABCD.txt using=csv : Z=[A,B,C] D ;
Note
The data managers supported by Pyomo can be listed with the
pyomo help
subcommand
pyomo help datamanagers
The following data managers are supported in Pyomo 5.1:
Pyomo Data Managers:
csv: CSV file interface
dat: Pyomo data command file interface
json: JSON file interface
pymysql: pymysql database interface
pyodbc: pyodbc database interface
pypyodbc: pypyodbc database interface
sqlite3: sqlite3 database interface
tab: TAB file interface
xls: Excel XLS file interface
xlsb: Excel XLSB file interface
xlsm: Excel XLSM file interface
xlsx: Excel XLSX file interface
xml: XML file interface
yaml: YAML file interface
Interpreting Tabular Data¶
By default, a table is interpreted as columns of one or more parameters
with associated index columns. The format
option can be used to
specify other interpretations of a table:
array: The table is a matrix representation of a two-dimensional parameter.
param: The data is a simple parameter value.
set: Each row is a set element.
set_array: The table is a matrix representation of a set of 2-tuples.
transposed_array: The table is a transposed matrix representation of a two-dimensional parameter.
We have previously illustrated the use of the set
format value to
interpret a relational table as a set of values or tuples. The
following examples illustrate the other format values.
A table with a single value can be interpreted as a simple parameter
using the param
format value. Suppose that Z.tab
contains the
following table:
1.1
The following load command then loads this value into parameter p
:
load Z.tab format=param: p;
Sets with 2-tuple data can be represented with a matrix format that
denotes set membership. The set_array
format value interprets a
relational table as a matrix that defines a set of 2-tuples, where +
denotes a valid tuple and -
denotes an invalid tuple. Suppose that
D.tab
contains the following relational table:
B A1 A2 A3
1 + - -
2 - + -
3 - - +
Then the following load command loads data into set B
:
load D.tab format=set_array: B;
This command declares the following 2-tuples: ('A1',1), ('A2',2), and ('A3',3).
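The conversion from the +/- matrix to 2-tuples can be sketched in plain Python (a hypothetical illustration, not Pyomo's implementation; the helper set_array_tuples is made up for this example):

```python
def set_array_tuples(lines):
    # Split each nonempty line on whitespace, as in a TAB file.
    rows = [line.split() for line in lines if line.strip()]
    col_labels = rows[0][1:]          # column labels: A1 A2 A3
    members = set()
    for row in rows[1:]:
        row_label = int(row[0])       # row labels here are numeric
        for col_label, cell in zip(col_labels, row[1:]):
            if cell == '+':           # '+' marks a valid 2-tuple
                members.add((col_label, row_label))
    return members

B = set_array_tuples(["B A1 A2 A3", "1 + - -", "2 - + -", "3 - - +"])
```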
Parameters with 2-tuple indices can be interpreted with a matrix format
where rows and columns are different indices. Suppose that
U.tab
contains the following table:
I A1 A2 A3
I1 1.3 2.3 3.3
I2 1.4 2.4 3.4
I3 1.5 2.5 3.5
I4 1.6 2.6 3.6
Then the following load command loads this data into parameter U
with a 2-dimensional index using the array
format value:
load U.tab format=array: A=[X] U;
The transposed_array
format value also interprets the table as a
matrix, but it loads the data in a transposed format:
load U.tab format=transposed_array: A=[X] U;
Note that these format values do not support the initialization of the index data.
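As an illustration of how the array and transposed_array interpretations differ, the following hypothetical sketch (not Pyomo's code) builds a parameter dictionary from a matrix table, keyed by either (row, column) or (column, row) 2-tuples:

```python
def read_array(lines, transposed=False):
    # Split each nonempty line on whitespace, as in a TAB file.
    rows = [line.split() for line in lines if line.strip()]
    col_labels = rows[0][1:]
    data = {}
    for row in rows[1:]:
        for col, cell in zip(col_labels, row[1:]):
            # transposed_array swaps the roles of row and column labels.
            key = (col, row[0]) if transposed else (row[0], col)
            data[key] = float(cell)
    return data

table = ["I A1 A2 A3", "I1 1.3 2.3 3.3", "I2 1.4 2.4 3.4"]
U = read_array(table)                   # keyed (row, column)
T = read_array(table, transposed=True)  # keyed (column, row)
```

Note that the transposed interpretation changes the index set of the resulting parameter, which is why the model component must be declared with the matching index order.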
Loading from Spreadsheets and Relational Databases¶
Many of the options for the load
command are specific to
spreadsheets and relational databases. The range
option is used to
specify the range of cells that are loaded from a spreadsheet. The
range of cells represents a table in which the first row of cells
defines the column names for the table.
Suppose that file ABCD.xls
contains the range ABCD
that is shown
in the following figure:
The following command loads this data to initialize the index set Z
and the parameter Y
from column D
:
load ABCD.xls range=ABCD : Z=[A,B,C] Y=D ;
Thus, the syntax for loading data from spreadsheets only differs from
CSV and ASCII text files by the use of the range
option.
When loading from a relational database, the data source specification
is a filename or data connection string. Access to a database may be
restricted, and thus the specification of username
and password
options may be required. Alternatively, these options can be specified
within a data connection string.
A variety of database interface packages are available within Python.
The using
option is used to specify the database interface package
that will be used to access a database. For example, the pyodbc
interface can be used to connect to Excel spreadsheets. The following
command loads data from the Excel spreadsheet ABCD.xls
using the
pyodbc
interface. The command loads this data to initialize
the index set Z
and the parameter Y
from column D
:
load ABCD.xls using=pyodbc table=ABCD : Z=[A,B,C] Y=D ;
The using
option specifies that the pyodbc
package will be
used to connect with the Excel spreadsheet. The table
option
specifies that the table ABCD
is loaded from this spreadsheet.
Similarly, the following command specifies a data connection string
to specify the ODBC driver explicitly:
load "Driver={Microsoft Excel Driver (*.xls)}; Dbq=ABCD.xls;"
using=pyodbc
table=ABCD : Z=[A,B,C] Y=D ;
ODBC drivers are generally tailored to the type of data source that
they work with; this syntax illustrates how the load
command
can be tailored to the details of the database that a user is working
with.
The previous examples specified the table
option, which declares the
name of a relational table in a database. Many databases support the
Structured Query Language (SQL), which can be used to dynamically
compose a relational table from other tables in a database. The classic
diet problem will be used to illustrate the use of SQL queries to
initialize a Pyomo model. In this problem, a customer is faced with the
task of minimizing the cost for a meal at a fast food restaurant – they
must purchase a sandwich, side, and a drink for the lowest cost. The
following is a Pyomo model for this problem:
# diet1.py
from pyomo.environ import *

infinity = float('inf')
MAX_FOOD_SUPPLY = 20.0  # There is a finite food supply

model = AbstractModel()

model.FOOD = Set()
model.cost = Param(model.FOOD, within=PositiveReals)
model.f_min = Param(model.FOOD, within=NonNegativeReals, default=0.0)

def f_max_validate(model, value, j):
    return model.f_max[j] > model.f_min[j]
model.f_max = Param(model.FOOD, validate=f_max_validate, default=MAX_FOOD_SUPPLY)

model.NUTR = Set()
model.n_min = Param(model.NUTR, within=NonNegativeReals, default=0.0)
model.n_max = Param(model.NUTR, default=infinity)
model.amt = Param(model.NUTR, model.FOOD, within=NonNegativeReals)

def Buy_bounds(model, i):
    return (model.f_min[i], model.f_max[i])
model.Buy = Var(model.FOOD, bounds=Buy_bounds, within=NonNegativeIntegers)

def Total_Cost_rule(model):
    return sum(model.cost[j] * model.Buy[j] for j in model.FOOD)
model.Total_Cost = Objective(rule=Total_Cost_rule, sense=minimize)

def Entree_rule(model):
    entrees = ['Cheeseburger', 'Ham Sandwich', 'Hamburger', 'Fish Sandwich', 'Chicken Sandwich']
    return sum(model.Buy[e] for e in entrees) >= 1
model.Entree = Constraint(rule=Entree_rule)

def Side_rule(model):
    sides = ['Fries', 'Sausage Biscuit']
    return sum(model.Buy[s] for s in sides) >= 1
model.Side = Constraint(rule=Side_rule)

def Drink_rule(model):
    drinks = ['Lowfat Milk', 'Orange Juice']
    return sum(model.Buy[d] for d in drinks) >= 1
model.Drink = Constraint(rule=Drink_rule)
Suppose that the file diet.sqlite
is an SQLite database file that
contains the following data in the Food
table:
FOOD              cost
Cheeseburger      1.84
Ham Sandwich      2.19
Hamburger         1.84
Fish Sandwich     1.44
Chicken Sandwich  2.29
Fries             0.77
Sausage Biscuit   1.29
Lowfat Milk       0.60
Orange Juice      0.72
In addition, the Food
table has two additional columns, f_min
and f_max
, with no data for any row. These columns exist to match
the structure for the parameters used in the model.
We can solve the diet1
model using the Python definition in
diet1.py
and the data from this database. The file
diet.sqlite.dat
specifies a load
command that uses that
sqlite3
data manager and embeds a SQL query to retrieve the data:
# File diet.sqlite.dat
load "diet.sqlite"
using=sqlite3
query="SELECT FOOD,cost,f_min,f_max FROM Food"
: FOOD=[FOOD] cost f_min f_max ;
The sqlite3 data manager executes the SQL query against the
diet.sqlite
database file and returns the resulting table to Pyomo,
which converts the data into the proper format for constructing the
model. More complex SQL queries are possible, depending on the
underlying database and driver in use. However, the names and ordering
of the columns queried are specified in the Pyomo data file; using SQL
wildcards (e.g., SELECT *
) or column aliasing (e.g., SELECT f AS FOOD
) may cause errors in
Pyomo’s mapping of relational data to parameters.
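The query mechanism can be tried directly with Python's standard sqlite3 module. The sketch below builds a small in-memory table shaped like the Food table (with only two made-up rows) and runs the same SELECT statement that the .dat file embeds; Pyomo's sqlite3 data manager performs a conceptually similar query internally:

```python
import sqlite3

# Build an in-memory database with a Food table like the one above.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Food (FOOD TEXT, cost REAL, f_min REAL, f_max REAL)")
conn.executemany(
    "INSERT INTO Food VALUES (?, ?, ?, ?)",
    [("Cheeseburger", 1.84, None, None), ("Fries", 0.77, None, None)],
)

# The same query the data command file embeds.
rows = conn.execute("SELECT FOOD,cost,f_min,f_max FROM Food").fetchall()

# Each row supplies one index value (FOOD) plus cost/f_min/f_max entries.
cost = {food: c for food, c, _, _ in rows}
```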
The include
Command¶
The include
command allows a data command file to execute data
commands from another file. For example, the following command file
executes data commands from ex1.dat
and then ex2.dat
:
include ex1.dat;
include ex2.dat;
Pyomo is sensitive to the order of execution of data commands, since
data commands can redefine set and parameter values. The include
command respects this data ordering; all data commands in the included
file are executed before the remaining data commands in the current file
are executed.
The namespace
Keyword¶
The namespace
keyword is not a data command, but instead it is used
to structure the specification of Pyomo’s data commands. Specifically,
a namespace declaration is used to group data commands and to provide a
group label. Consider the following data command file:
set C := 1 2 3 ;
namespace ns1
{
set C := 4 5 6 ;
}
namespace ns2
{
set C := 7 8 9 ;
}
This data file defines two namespaces: ns1
and ns2
that
initialize a set C
. By default, data commands contained within a
namespace are ignored during model construction; when no namespaces are
specified, the set C
has values 1,2,3
. When namespace ns1
is specified, then the set C
values are overridden with the set
4,5,6
.
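Namespaces can be thought of as layered dictionaries, where data in an active namespace overrides the default data. A minimal sketch of this lookup behavior (hypothetical, for illustration only; this is not how Pyomo stores data internally):

```python
# Default (unnamed) data and two named namespaces, as in the .dat file above.
default = {"C": {1, 2, 3}}
namespaces = {"ns1": {"C": {4, 5, 6}}, "ns2": {"C": {7, 8, 9}}}

def lookup(name, active=()):
    # The first active namespace that defines the name wins;
    # otherwise fall back to the default data.
    for ns in active:
        if name in namespaces.get(ns, {}):
            return namespaces[ns][name]
    return default[name]
```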
Data Portals¶
Pyomo’s DataPortal
class standardizes the process of constructing model instances by
managing the process of loading data from different data sources in a
uniform manner. A DataPortal
object can load data from the
following data sources:
 TAB File: A text file format that uses whitespace to separate columns of values in each row of a table.
 CSV File: A text file format that uses comma or other delimiters to separate columns of values in each row of a table.
 JSON File: A popular lightweight data-interchange format that is easily parsed.
 YAML File: A human friendly data serialization standard.
 XML File: An extensible markup language for documents and data structures. XML files can represent tabular data.
 Excel File: A spreadsheet data format that is primarily used by the Microsoft Excel application.
 Database: A relational database.
 DAT File: A Pyomo data command file.
Note that most of these data formats can express tabular data.
Warning
The DataPortal
class requires the installation of Python packages to support some
of these data formats:
YAML File: pyyaml
Excel File: win32com, openpyxl or xlrd. These packages support different Excel data formats: the win32com package supports .xls, .xlsm and .xlsx; the openpyxl package supports .xlsx; and the xlrd package supports .xls.
Database: pyodbc, pypyodbc, sqlite3 or pymysql. These packages support different database interface APIs: the pyodbc and pypyodbc packages support the ODBC database API; the sqlite3 package uses the SQLite C library to directly interface with databases using the DB-API 2.0 specification; and pymysql is a pure-Python MySQL client.
DataPortal
objects
can be used to initialize both concrete and abstract Pyomo models.
Consider the file A.tab
, which defines a simple set with a tabular
format:
A
A1
A2
A3
The load
method is used to load data into a DataPortal
object. Components in a
concrete model can be explicitly initialized with data loaded by a
DataPortal
object:
data = DataPortal()
data.load(filename='A.tab', set="A", format="set")
model = ConcreteModel()
model.A = Set(initialize=data['A'])
All data needed to initialize an abstract model must be provided by a
DataPortal
object,
and the use of the DataPortal
object to initialize components
is automated for the user:
model = AbstractModel()
model.A = Set()
data = DataPortal()
data.load(filename='A.tab', set=model.A)
instance = model.create_instance(data)
Note the difference in the execution of the load
method in these two
examples: for concrete models data is loaded by name and the format must
be specified, and for abstract models the data is loaded by component,
from which the data format can often be inferred.
The load
method opens the data file, processes it, and loads the
data in a format that can be used to construct a model instance. The
load
method can be called multiple times to load data for different
sets or parameters, or to override data processed earlier. The load
method takes a variety of arguments that define how data is loaded:
filename: This option specifies the source data file.
format: This option specifies how to interpret data within a table. Valid formats are: set, set_array, param, table, array, and transposed_array.
set: This option is either a string or model component that defines a set that will be initialized with this data.
param: This option is either a string or model component that defines a parameter that will be initialized with this data. A list or tuple of strings or model components can be used to define multiple parameters that are initialized.
index: This option is either a string or model component that defines an index set that will be initialized with this data.
using: This option specifies the Python package used to load this data source. This option is used when loading data from databases.
select: This option defines the columns that are selected from the data source. The column order may differ from the data source, which allows the DataPortal object to reorder and subset the loaded data.
namespace: This option defines the data namespace that will contain this data.
The use of these options is illustrated below.
The DataPortal
class also provides a simple API for accessing set and parameter data
that are loaded from different data sources. The []
operator is
used to access set and parameter values. Consider the following
example, which loads data and prints the value of the []
operator:
data = DataPortal()
data.load(filename='A.tab', set="A", format="set")
print(data['A']) #['A1', 'A2', 'A3']
data.load(filename='Z.tab', param="z", format="param")
print(data['z']) #1.1
data.load(filename='Y.tab', param="y", format="table")
for key in sorted(data['y']):
    print("%s %s" % (key, data['y'][key]))
The DataPortal
class also has several methods for iterating over the data that has been
loaded:
keys(): Returns an iterator of the data keys.
values(): Returns an iterator of the data values.
items(): Returns an iterator of (name, value) tuples from the data.
Finally, the data()
method provides a generic mechanism for
accessing the underlying data representation used by DataPortal
objects.
Loading Structured Data¶
JSON and YAML files are structured data formats that are well-suited for data serialization. These data formats do not represent data in tabular form; instead, they directly represent set and parameter values with lists and dictionaries:
 Simple Set: a list of string or numeric values
 Indexed Set: a dictionary that maps an index to a list of string or numeric values
 Simple Parameter: a string or numeric value
 Indexed Parameter: a dictionary that maps an index to a numeric value
For example, consider the following JSON file:
{ "A": ["A1", "A2", "A3"],
"B": [[1, "B1"], [2, "B2"], [3, "B3"]],
"C": {"A1": [1, 2, 3], "A3": [10, 20, 30]},
"p": 0.1,
"q": {"A1": 3.3, "A2": 3.4, "A3": 3.5},
"r": [ {"index": [1, "B1"], "value": 3.3},
{"index": [2, "B2"], "value": 3.4},
{"index": [3, "B3"], "value": 3.5}]}
The data in this file can be used to load the following model:
model = AbstractModel()
data = DataPortal()
model.A = Set()
model.B = Set(dimen=2)
model.C = Set(model.A)
model.p = Param()
model.q = Param(model.A)
model.r = Param(model.B)
data.load(filename='T.json')
Note that no set
or param
option needs to be specified when
loading a JSON
or YAML
file. All of the set and parameter
data in the file are loaded by the DataPortal
object, and only the data
needed for model construction is used.
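Because JSON object keys must be strings, parameters indexed by tuples (like r above) use explicit index/value records. A short sketch of how such a file parses with Python's standard json module (illustrative only; Pyomo performs this conversion automatically):

```python
import json

# Tuple-indexed parameters use {"index": ..., "value": ...} records.
text = '''{"q": {"A1": 3.3, "A2": 3.4},
           "r": [{"index": [1, "B1"], "value": 3.3},
                 {"index": [2, "B2"], "value": 3.4}]}'''
raw = json.loads(text)

q = raw["q"]  # simple string-indexed parameter: keys are usable directly
# Convert each index list to a tuple so it can serve as a dictionary key.
r = {tuple(item["index"]): item["value"] for item in raw["r"]}
```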
The following YAML file has a similar structure:
A: [A1, A2, A3]
B:
- [1, B1]
- [2, B2]
- [3, B3]
C:
'A1': [1, 2, 3]
'A3': [10, 20, 30]
p: 0.1
q: {A1: 3.3, A2: 3.4, A3: 3.5}
r:
- index: [1, B1]
  value: 3.3
- index: [2, B2]
  value: 3.4
- index: [3, B3]
  value: 3.5
The data in this file can be used to load a Pyomo model with the same syntax as a JSON file:
model = AbstractModel()
data = DataPortal()
model.A = Set()
model.B = Set(dimen=2)
model.C = Set(model.A)
model.p = Param()
model.q = Param(model.A)
model.r = Param(model.B)
data.load(filename='T.yaml')
Loading Tabular Data¶
Many data sources supported by Pyomo are tabular data formats. Tabular data is numerical or textual data that is organized into one or more simple tables, where data is arranged in a matrix. Each table consists of a matrix of numeric string values, simple strings, and quoted strings. All rows have the same length, all columns have the same length, and the first row typically represents labels for the column data.
The following section describes the tabular data sources supported by Pyomo, and the subsequent sections illustrate ways that data can be loaded from tabular data using TAB files. Subsequent sections describe options for loading data from Excel spreadsheets and relational databases.
Tabular Data¶
TAB files represent tabular data in an ASCII file using whitespace as a
delimiter. A TAB file consists of rows of values, where each row has
the same length. For example, the file PP.tab
has the format:
A B PP
A1 B1 4.3
A2 B2 4.4
A3 B3 4.5
CSV files represent tabular data in a format that is very similar to TAB
files. Pyomo assumes that a CSV file consists of rows of values, where
each row has the same length. For example, the file PP.csv
has the
format:
A,B,PP
A1,B1,4.3
A2,B2,4.4
A3,B3,4.5
Excel spreadsheets can express complex data relationships. A range is
a contiguous, rectangular block of cells in an Excel spreadsheet. Thus,
a range in a spreadsheet has the same tabular structure as a TAB file
or a CSV file. For example, consider the file excel.xls
that has
the range PPtable
:
A relational database is an application that organizes data into one or more tables (or relations) with a unique key in each row. Tables both reflect the data in a database as well as the result of queries within a database.
XML files represent tabular data using table
and row
elements. Each
subelement of a row
element represents a different column, where
each row has the same length. For example, the file PP.xml
has the
format:
<table>
<row>
<A value="A1"/><B value="B1"/><PP value="4.3"/>
</row>
<row>
<A value="A2"/><B value="B2"/><PP value="4.4"/>
</row>
<row>
<A value="A3"/><B value="B3"/><PP value="4.5"/>
</row>
</table>
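For reference, this XML layout can be parsed with Python's standard xml.etree module. The following sketch (not Pyomo's implementation) recovers the (A, B)-indexed PP values:

```python
import xml.etree.ElementTree as ET

# A PP.xml-style document: one <row> per table row, one child per column.
doc = """<table>
  <row><A value="A1"/><B value="B1"/><PP value="4.3"/></row>
  <row><A value="A2"/><B value="B2"/><PP value="4.4"/></row>
</table>"""

root = ET.fromstring(doc)
PP = {}
for row in root.findall("row"):
    # Each child element's tag is the column name; 'value' holds the cell.
    cells = {child.tag: child.get("value") for child in row}
    PP[(cells["A"], cells["B"])] = float(cells["PP"])
```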
Loading Set Data¶
The set
option is used to specify a Set
component that is loaded
with data.
Consider the file A.tab
, which defines a simple set:
A
A1
A2
A3
In the following example, a DataPortal
object loads data for a simple
set A
:
model = AbstractModel()
model.A = Set()
data = DataPortal()
data.load(filename='A.tab', set=model.A)
instance = model.create_instance(data)
Consider the file C.tab
:
A B
A1 1
A1 2
A1 3
A2 1
A2 2
A2 3
A3 1
A3 2
A3 3
In the following example, a DataPortal
object loads data for a
two-dimensional set C
:
model = AbstractModel()
model.C = Set(dimen=2)
data = DataPortal()
data.load(filename='C.tab', set=model.C)
instance = model.create_instance(data)
In this example, the column titles do not directly impact the process of loading data. Column titles can be used to select a subset of columns from a table that is loaded (see below).
Consider the file D.tab
, which defines an array representation of a
two-dimensional set:
B A1 A2 A3
1 + - -
2 - + -
3 - - +
In the following example, a DataPortal
object loads data for a
two-dimensional set D
:
model = AbstractModel()
model.D = Set(dimen=2)
data = DataPortal()
data.load(filename='D.tab', set=model.D, format='set_array')
instance = model.create_instance(data)
The format
option indicates that the set data is declared in an array
format.
Loading Parameter Data¶
The param
option is used to specify a Param
component that is
loaded with data.
The simplest parameter is a singleton value. Consider the file
Z.tab
:
1.1
In the following example, a DataPortal
object loads data for a simple
parameter z
:
model = AbstractModel()
data = DataPortal()
model.z = Param()
data.load(filename='Z.tab', param=model.z)
instance = model.create_instance(data)
An indexed parameter can be defined by a single column in a table. For
example, consider the file Y.tab
:
A Y
A1 3.3
A2 3.4
A3 3.5
In the following example, a DataPortal
object loads data for an indexed
parameter y
:
model = AbstractModel()
data = DataPortal()
model.A = Set(initialize=['A1','A2','A3'])
model.y = Param(model.A)
data.load(filename='Y.tab', param=model.y)
instance = model.create_instance(data)
When column names are not used to specify the index and parameter data,
then the DataPortal
object assumes that the rightmost column defines parameter values. In
this file, the A
column contains the index values, and the Y
column contains the parameter values.
Note that the data for set A
is predefined in the previous example.
The index set can be loaded with the parameter data using the index
option. In the following example, a DataPortal
object loads data for set A
and the indexed parameter y
model = AbstractModel()
data = DataPortal()
model.A = Set()
model.y = Param(model.A)
data.load(filename='Y.tab', param=model.y, index=model.A)
instance = model.create_instance(data)
An index set with multiple dimensions can also be loaded with an indexed
parameter. Consider the file PP.tab
:
A B PP
A1 B1 4.3
A2 B2 4.4
A3 B3 4.5
In the following example, a DataPortal
object loads data for a tuple
set and an indexed parameter:
model = AbstractModel()
data = DataPortal()
model.A = Set(dimen=2)
model.p = Param(model.A)
data.load(filename='PP.tab', param=model.p, index=model.A)
instance = model.create_instance(data)
Missing parameter data can be expressed in two ways. First, parameter
data can be defined with indices that are a subset of valid indices in
the model. The following example loads the indexed parameter y
:
model = AbstractModel()
data = DataPortal()
model.A = Set(initialize=['A1','A2','A3','A4'])
model.y = Param(model.A)
data.load(filename='Y.tab', param=model.y)
instance = model.create_instance(data)
The model defines an index set with four values, but only three
parameter values are declared in the data file Y.tab
.
Parameter data can also be declared with missing values using the period
(.
) symbol. For example, consider the file S.tab
:
A S
A1 3.3
A2 .
A3 3.5
In the following example, a DataPortal
object loads data for the index
set A
and the indexed parameter s
:
model = AbstractModel()
data = DataPortal()
model.A = Set()
model.s = Param(model.A)
data.load(filename='S.tab', param=model.s, index=model.A)
instance = model.create_instance(data)
The period (.
) symbol indicates a missing parameter value, but the
index set A
contains the index value for the missing parameter.
Multiple parameters can be initialized at once by specifying a list (or
tuple) of component parameters. Consider the file XW.tab
:
A X W
A1 3.3 4.3
A2 3.4 4.4
A3 3.5 4.5
In the following example, a DataPortal
object loads data for parameters
x
and w
:
model = AbstractModel()
data = DataPortal()
model.A = Set(initialize=['A1','A2','A3'])
model.x = Param(model.A)
model.w = Param(model.A)
data.load(filename='XW.tab', param=(model.x,model.w))
instance = model.create_instance(data)
We have previously noted that the column names do not need to be
specified to load set and parameter data. However, the select
option can be used to identify the columns in the table that are used to load
parameter data. This option specifies a list (or tuple) of column names
that are used, in that order, to form the table that defines the
component data.
For example, consider the following load declaration:
model = AbstractModel()
data = DataPortal()
model.A = Set()
model.w = Param(model.A)
data.load(filename='XW.tab', select=('A','W'),
param=model.w, index=model.A)
instance = model.create_instance(data)
The columns A
and W
are selected from the file XW.tab
, and a
single parameter is defined.
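The column selection performed by the select option can be sketched as follows (select_columns is a hypothetical helper for illustration, not Pyomo's code):

```python
def select_columns(lines, select):
    # Split each nonempty line on whitespace, as in a TAB file.
    rows = [line.split() for line in lines if line.strip()]
    header = rows[0]
    # Keep only the named columns, in the requested order.
    positions = [header.index(name) for name in select]
    return [[row[i] for i in positions] for row in rows[1:]]

table = ["A X W", "A1 3.3 4.3", "A2 3.4 4.4", "A3 3.5 4.5"]
# Selecting ('A', 'W') drops column X; A indexes the values in W.
w = {idx: float(val) for idx, val in select_columns(table, ("A", "W"))}
```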
Consider the file U.tab
, which defines an array representation of a
multiply-indexed parameter:
I A1 A2 A3
I1 1.3 2.3 3.3
I2 1.4 2.4 3.4
I3 1.5 2.5 3.5
I4 1.6 2.6 3.6
In the following example, a DataPortal
object loads data for a
two-dimensional parameter u
:
model = AbstractModel()
data = DataPortal()
model.A = Set(initialize=['A1','A2','A3'])
model.I = Set(initialize=['I1','I2','I3','I4'])
model.u = Param(model.I, model.A)
data.load(filename='U.tab', param=model.u,
format='array')
instance = model.create_instance(data)
The format
option indicates that the parameter data is declared in an
array format. The format
option can also indicate that the
parameter data should be transposed.
model = AbstractModel()
data = DataPortal()
model.A = Set(initialize=['A1','A2','A3'])
model.I = Set(initialize=['I1','I2','I3','I4'])
model.t = Param(model.A, model.I)
data.load(filename='U.tab', param=model.t,
format='transposed_array')
instance = model.create_instance(data)
Note that the transposed parameter data changes the index set for the parameter.
Loading from Spreadsheets and Databases¶
Tabular data can be loaded from spreadsheets and databases using
auxiliary Python packages that provide an interface to these data
formats. Data can be loaded from Excel spreadsheets using the
win32com
, xlrd
and openpyxl
packages. For example, consider
the following range of cells, which is named PPtable
:
In the following example, a DataPortal
object loads the named range
PPtable
from the file excel.xls
:
model = AbstractModel()
data = DataPortal()
model.A = Set(dimen=2)
model.p = Param(model.A)
data.load(filename='excel.xls', range='PPtable',
param=model.p, index=model.A)
instance = model.create_instance(data)
Note that the range
option is required to specify the table of cell
data that is loaded from the spreadsheet.
There are a variety of ways that data can be loaded from a relational database. In the simplest case, a table can be specified within a database:
model = AbstractModel()
data = DataPortal()
model.A = Set(dimen=2)
model.p = Param(model.A)
data.load(filename='PP.sqlite', using='sqlite3',
table='PPtable',
param=model.p, index=model.A)
instance = model.create_instance(data)
In this example, the interface sqlite3
is used to load data from an
SQLite database in the file PP.sqlite
. More generally, an SQL query
can be specified to dynamically generate a table. For example:
model = AbstractModel()
data = DataPortal()
model.A = Set()
model.p = Param(model.A)
data.load(filename='PP.sqlite', using='sqlite3',
query="SELECT A,PP FROM PPtable",
param=model.p, index=model.A)
instance = model.create_instance(data)
Data Namespaces¶
The DataPortal
class supports the concept of a namespace to organize data into named
groups that can be enabled or disabled during model construction.
Various DataPortal
methods have an optional namespace
argument that defaults to
None
:
data(name=None, namespace=None): Returns the data associated with the specified name in the specified namespace.
[]: For a DataPortal object data, the expression data['A'] returns data corresponding to A in the default namespace, and data['ns1','A'] returns data corresponding to A in namespace ns1.
namespaces(): Returns an iterator for the data namespaces.
keys(namespace=None): Returns an iterator of the data keys in the specified namespace.
values(namespace=None): Returns an iterator of the data values in the specified namespace.
items(namespace=None): Returns an iterator of (name, value) tuples in the specified namespace.
By default, data within a namespace are ignored during model construction. However, concrete models can be initialized with data from a specific namespace. Further, abstract models can be initialized with a list of namespaces that define the data used to initialize model components. For example, the following script generates two model instances from an abstract model using data loaded into different namespaces:
model = AbstractModel()
model.C = Set(dimen=2)
data = DataPortal()
data.load(filename='C.tab', set=model.C, namespace='ns1')
data.load(filename='D.tab', set=model.C, namespace='ns2',
format='set_array')
instance1 = model.create_instance(data, namespaces=['ns1'])
instance2 = model.create_instance(data, namespaces=['ns2'])
The pyomo
Command¶
The pyomo
command is issued to the DOS prompt or a Unix shell. To
see a list of Pyomo command line options, use:
pyomo solve --help
Note
There are two dashes before help
.
In this section we will detail some of the options.
Passing Options to a Solver¶
To pass arguments to a solver when using the pyomo solve command,
append the Pyomo command line with the argument --solver-options=
followed by an argument that is a string to be sent to the solver
(perhaps with dashes added by Pyomo). So for most MIP solvers, the mip
gap can be set using
--solver-options="mipgap=0.01"
Multiple options are separated by a space. Options that do not take an argument should be specified with the equals sign followed by either a space or the end of the string.
For example, to specify that the solver is GLPK, then to specify a mipgap of two percent and the GLPK cuts option, use
--solver=glpk --solver-options="mipgap=0.02 cuts="
If there are multiple “levels” to the keyword, as is the case for some
Gurobi and CPLEX options, the tokens are separated by underscore. For
example, mip cuts all
would be specified as mip_cuts_all
. For
another example, to set the solver to be CPLEX, then to set a mip gap of
0.1 percent and to specify ‘y’ for the sub-option numerical to the
option emphasis, use
--solver=cplex --solver-options="mipgap=0.001 emphasis_numerical=y"
See Sending Options to the Solver for a discussion of passing options in a script.
Troubleshooting¶
Many of the things that can go wrong are covered by error messages, but sometimes they can be confusing or do not provide enough information. Depending on what the troubles are, there might be ways to get a little additional information.
If there are syntax errors in the model file, for example, it can occasionally be helpful to get error messages directly from the Python interpreter rather than through Pyomo. Suppose the name of the model file is scuc.py, then
python scuc.py
can sometimes give useful information for fixing syntax errors.
When there are no syntax errors, but there are troubles reading the data
or generating the information to pass to a solver, then the --verbose
option provides a trace of the execution of Pyomo. The user should be
aware that for some models this option can generate a lot of output.
If there are troubles with the solver (i.e., after Pyomo has output
“Applying Solver”), it is often helpful to use the option
--stream-solver
that causes the solver output to be displayed rather
than trapped. (See the tee=True option to the solve() method for
information about getting this output in a script.) Advanced users may
wish to examine the files that are generated to be passed to a solver.
The type of file generated is controlled by the --solver-io option, and
the --keepfiles option instructs pyomo to keep the files and output
their names. However, the --symbolic-solver-labels option should
usually also be specified so that meaningful names are used in these
files.
When there seem to be troubles expressing the model, it is often useful to embed print commands in the model in places that will yield helpful information. Consider the following snippet:
def ax_constraint_rule(model, i):
# return the expression for the constraint for i
print ("ax_constraint_rule was called for i=",str(i))
return sum(model.a[i,j] * model.x[j] for j in model.J) >= model.b[i]
# the next line creates one constraint for each member of the set model.I
model.AxbConstraint = Constraint(model.I, rule=ax_constraint_rule)
The effect will be to output every member of the set model.I
at the
time the constraint named model.AxbConstraint
is constructed.
Direct Interfaces to Solvers¶
In many applications, the default solver interface works well. However,
in some cases it is useful to specify the interface using the
--solver-io
option. For example, if the solver supports a direct
Python interface, then the option would be specified on the command line
as
--solver-io=python
Here are some of the choices:
 lp: generate a standard linear programming format file with filename
extension
lp
 nl: generate a file with a standard format that supports linear and
nonlinear optimization with filename extension
nl
 os: generate an OSiL format XML file.
 python: use the direct Python interface.
Note
Not all solvers support all interfaces.
BuildAction
and BuildCheck
¶
This is a somewhat advanced topic. In some cases, it is desirable to
trigger actions to be done as part of the model building process. The
BuildAction
function provides this capability in a Pyomo model. It
takes as arguments optional index sets and a function to perform the
action. For example,
model.BuildBpts = BuildAction(model.J, rule=bpts_build)
calls the function bpts_build
for each member of model.J
. The
function bpts_build
should have the model and a variable for the
members of model.J
as formal arguments. In this example, the
following would be a valid declaration for the function:
def bpts_build(model, j):
A full example, which extends the Symbolic Index Sets and Piecewise Linear Expressions examples, is
# abstract2piecebuild.py
# Similar to abstract2piece.py, but the breakpoints are created using a build action
from pyomo.environ import *
model = AbstractModel()
model.I = Set()
model.J = Set()
model.a = Param(model.I, model.J)
model.b = Param(model.I)
model.c = Param(model.J)
model.Topx = Param(default=6.1) # range of x variables
model.PieceCnt = Param(default=100)
# the next line declares a variable indexed by the set J
model.x = Var(model.J, domain=NonNegativeReals, bounds=(0,model.Topx))
model.y = Var(model.J, domain=NonNegativeReals)
# to avoid warnings, we set breakpoints beyond the bounds
# we are using a dictionary so that we can have different
# breakpoints for each index. But we won't.
model.bpts = {}
def bpts_build(model, j):
model.bpts[j] = []
for i in range(model.PieceCnt+2):
model.bpts[j].append(float((i*model.Topx)/model.PieceCnt))
# The object model.BuildBpts is not referred to again;
# the only goal is to trigger the action at build time
model.BuildBpts = BuildAction(model.J, rule=bpts_build)
def f4(model, j, xp):
# we do not need j in this example, but it is passed as the index for the constraint
return xp**4
model.ComputePieces = Piecewise(model.J, model.y, model.x, pw_pts=model.bpts, pw_constr_type='EQ', f_rule=f4)
def obj_expression(model):
return summation(model.c, model.y)
model.OBJ = Objective(rule=obj_expression)
def ax_constraint_rule(model, i):
# return the expression for the constraint for i
return sum(model.a[i,j] * model.x[j] for j in model.J) >= model.b[i]
# the next line creates one constraint for each member of the set model.I
model.AxbConstraint = Constraint(model.I, rule=ax_constraint_rule)
This example uses the build action to create a model component with
breakpoints for a Piecewise Linear Expressions function. The BuildAction
is
triggered by the assignment to model.BuildBpts
. This object is not
referenced again; the only goal is to cause the execution of
bpts_build,
which places data in the model.bpts
dictionary.
Note that if model.bpts
had been a Set
, then it could have been
created with an initialize
argument to the Set
declaration. Since it is a specialpurpose dictionary to support the
Piecewise Linear Expressions functionality in Pyomo, we use a BuildAction
.
Another application of BuildAction can be initialization of Pyomo
model data from Python data structures, or efficient initialization of
Pyomo model data from other Pyomo model data. Consider the
Sparse Index Sets example. Rather than initializing each of the sets
NodesIn and NodesOut separately using initialize, it is a little more
efficient, and probably a little clearer, to use a build action.
The full model is:
# Isinglebuild.py
# NodesIn and NodesOut are created by a build action using the Arcs
from pyomo.environ import *
model = AbstractModel()
model.Nodes = Set()
model.Arcs = Set(dimen=2)
model.NodesOut = Set(model.Nodes, within=model.Nodes, initialize=[])
model.NodesIn = Set(model.Nodes, within=model.Nodes, initialize=[])
def Populate_In_and_Out(model):
# loop over the arcs and put the end points in the appropriate places
for (i,j) in model.Arcs:
model.NodesIn[j].add(i)
model.NodesOut[i].add(j)
model.In_n_Out = BuildAction(rule = Populate_In_and_Out)
model.Flow = Var(model.Arcs, domain=NonNegativeReals)
model.FlowCost = Param(model.Arcs)
model.Demand = Param(model.Nodes)
model.Supply = Param(model.Nodes)
def Obj_rule(model):
return summation(model.FlowCost, model.Flow)
model.Obj = Objective(rule=Obj_rule, sense=minimize)
def FlowBalance_rule(model, node):
return model.Supply[node] \
+ sum(model.Flow[i, node] for i in model.NodesIn[node]) \
- model.Demand[node] \
- sum(model.Flow[node, j] for j in model.NodesOut[node]) \
== 0
model.FlowBalance = Constraint(model.Nodes, rule=FlowBalance_rule)
For this model, the same data file can be used as for Isinglecomm.py in Sparse Index Sets, such as the toy data file:
set Nodes := CityA CityB CityC ;
set Arcs :=
CityA CityB
CityA CityC
CityC CityB
;
param : FlowCost :=
CityA CityB 1.4
CityA CityC 2.7
CityC CityB 1.6
;
param Demand :=
CityA 0
CityB 1
CityC 1
;
param Supply :=
CityA 2
CityB 0
CityC 0
;
Build actions can also be a way to implement data validation,
particularly when multiple Sets or Parameters must be analyzed. However,
the BuildCheck component is preferred for this purpose. It
executes its rule just like a BuildAction but will terminate the
construction of the model instance if the rule returns False.
Modeling Extensions¶
Bilevel Programming¶
pyomo.bilevel
provides extensions supporting modeling of multilevel
optimization problems.
Dynamic Optimization with pyomo.DAE¶
The pyomo.DAE modeling extension [PyomoDAE] allows users to incorporate systems of differential algebraic equations (DAEs) in a Pyomo model. The modeling components in this extension are able to represent ordinary or partial differential equations. The differential equations do not have to be written in a particular format and the components are flexible enough to represent higher-order derivatives or mixed partial derivatives. Pyomo.DAE also includes model transformations which use simultaneous discretization approaches to transform a DAE model into an algebraic model. Finally, pyomo.DAE includes utilities for simulating DAE models and initializing dynamic optimization problems.
Modeling Components¶
Pyomo.DAE introduces three new modeling components to Pyomo:
pyomo.dae.ContinuousSet 
Represents a bounded continuous domain 
pyomo.dae.DerivativeVar 
Represents derivatives in a model and defines how a Var is differentiated 
pyomo.dae.Integral 
Represents an integral over a continuous domain 
As will be shown later, differential equations can be declared using
these new modeling components along with the standard Pyomo
Var
and
Constraint
components.
ContinuousSet¶
This component is used to define continuous bounded domains (for example
‘spatial’ or ‘time’ domains). It is similar to a Pyomo
Set
component and can be used to index things
like variables and constraints. Any number of
ContinuousSets
can be used to index a
component and components can be indexed by both
Sets
and
ContinuousSets
in arbitrary order.
In the current implementation, models with
ContinuousSet
components may not be solved
until every ContinuousSet
has been
discretized. Minimally, a ContinuousSet
must be initialized with two numeric values representing the upper and lower
bounds of the continuous domain. A user may also specify additional points in
the domain to be used as finite element points in the discretization.

class
pyomo.dae.
ContinuousSet
(*args, **kwds)[source]¶ Represents a bounded continuous domain
Minimally, this set must contain two numeric values defining the bounds of a continuous range. Discrete points of interest may be added to the continuous set. A continuous set is one dimensional and may only contain numerical values.
Parameters:
 initialize (list) – Default discretization points to be included
 bounds (tuple) – The bounding points for the continuous domain. The bounds will be included as discrete points in the ContinuousSet and will be used to bound the points added to the ContinuousSet through the ‘initialize’ argument, a data file, or the add() method

_changed
¶ This keeps track of whether or not the ContinuousSet was changed during discretization. If the user specifies all of the needed discretization points before the discretization then there is no need to go back through the model and reconstruct things indexed by the
ContinuousSet
Type: boolean

_fe
¶ This is a sorted list of the finite element points in the
ContinuousSet, i.e. this list contains all the discrete points in the ContinuousSet that are not collocation points. Points that are both finite element points and collocation points will be included in this list.
Type: list

_discretization_info
¶ This is a dictionary which contains information on the discretization transformation which has been applied to the
ContinuousSet
.
Type: dict

construct
(values=None)[source]¶ Constructs a
ContinuousSet
component

get_changed
()[source]¶ Returns flag indicating if the
ContinuousSet
was changed during discretization.
Returns “True” if additional points were added to the
ContinuousSet
while applying a discretization scheme.
Returns: Return type: boolean

get_discretization_info
()[source]¶ Returns a dict with information on the discretization scheme that has been applied to the
ContinuousSet
.Returns: Return type: dict

get_finite_elements
()[source]¶ Returns the finite element points
If the
ContinuousSet
has been discretized using a collocation scheme, this method will return a list of the finite element discretization points but not the collocation points within each finite element. If the ContinuousSet has not been discretized or a finite difference discretization was used, this method returns a list of all the discretization points in the ContinuousSet.
Returns: Return type: list of floats

get_lower_element_boundary
(point)[source]¶ Returns the first finite element point that is less than or equal to ‘point’
Parameters: point (float) – Returns: Return type: float
The following code snippet shows examples of declaring a
ContinuousSet
component on a
concrete Pyomo model:
Required imports
>>> from pyomo.environ import *
>>> from pyomo.dae import *
>>> model = ConcreteModel()
Declaration by providing bounds
>>> model.t = ContinuousSet(bounds=(0,5))
Declaration by initializing with desired discretization points
>>> model.x = ContinuousSet(initialize=[0,1,2,5])
Note
A ContinuousSet
may not be
constructed unless at least two numeric points are provided to bound the
continuous domain.
The following code snippet shows an example of declaring a
ContinuousSet
component on an
abstract Pyomo model using the example data file.
set t := 0 0.5 2.25 3.75 5;
Required imports
>>> from pyomo.environ import *
>>> from pyomo.dae import *
>>> model = AbstractModel()
The ContinuousSet below will be initialized using the points
in the data file when a model instance is created.
>>> model.t = ContinuousSet()
Note
If a separate data file is used to initialize a
ContinuousSet
, it is done using
the ‘set’ command and not ‘continuousset’
Note
Most valid ways to declare and initialize a
Set
can be used to
declare and initialize a ContinuousSet
.
See the documentation for Set
for additional
options.
Warning
Be careful using a ContinuousSet
as an implicit index in an expression,
i.e. sum(m.v[i] for i in m.myContinuousSet)
. The expression will
be generated using the discretization points contained in the
ContinuousSet
at the time the
expression was constructed and will not be updated if additional
points are added to the set during discretization.
Note
ContinuousSet
components are
always ordered (sorted) therefore the first()
and last()
Set
methods can be used to access the lower
and upper boundaries of the
ContinuousSet
respectively.
DerivativeVar¶

class
pyomo.dae.
DerivativeVar
(sVar, **kwds)[source]¶ Represents derivatives in a model and defines how a
Var
is differentiated.
The DerivativeVar component is used to declare a derivative of a Var. The constructor accepts a single positional argument which is the Var that’s being differentiated. A Var may only be differentiated with respect to a ContinuousSet that it is indexed by. The indexing sets of a DerivativeVar are identical to those of the Var it is differentiating.
Parameters:
 sVar (pyomo.environ.Var) – The variable being differentiated
 wrt (pyomo.dae.ContinuousSet or tuple) – Equivalent to the withrespectto keyword argument. The ContinuousSet that the derivative is being taken with respect to. Higher order derivatives are represented by including the ContinuousSet multiple times in the tuple sent to this keyword, i.e. wrt=(m.t, m.t) would be the second order derivative with respect to m.t

get_continuousset_list
()[source]¶ Return a list of
ContinuousSet
components the derivative is being taken with respect to.Returns: Return type: list

get_derivative_expression
()[source]¶ Returns the current discretization expression for this derivative or creates an access function to its
Var
the first time this method is called. The expression gets built up as the discretization transformations are sequentially applied to eachContinuousSet
in the model.

is_fully_discretized
()[source]¶ Check to see if all the
ContinuousSets
this derivative is taken with respect to have been discretized.Returns: Return type: boolean

set_derivative_expression
(expr)[source]¶ Sets _expr, an expression representing the discretization equations linking the
DerivativeVar
to its state Var
The code snippet below shows examples of declaring
DerivativeVar
components on a
Pyomo model. In each case, the variable being differentiated is supplied
as the only positional argument and the type of derivative is specified
using the ‘wrt’ (or the more verbose ‘withrespectto’) keyword
argument. Any keyword argument that is valid for a Pyomo
Var
component may also be specified.
Required imports
>>> from pyomo.environ import *
>>> from pyomo.dae import *
>>> model = ConcreteModel()
>>> model.s = Set(initialize=['a','b'])
>>> model.t = ContinuousSet(bounds=(0,5))
>>> model.l = ContinuousSet(bounds=(-10,10))
>>> model.x = Var(model.t)
>>> model.y = Var(model.s,model.t)
>>> model.z = Var(model.t,model.l)
Declare the first derivative of model.x with respect to model.t
>>> model.dxdt = DerivativeVar(model.x, withrespectto=model.t)
Declare the second derivative of model.y with respect to model.t
Note that this DerivativeVar will be indexed by both model.s and model.t
>>> model.dydt2 = DerivativeVar(model.y, wrt=(model.t,model.t))
Declare the partial derivative of model.z with respect to model.l
Note that this DerivativeVar will be indexed by both model.t and model.l
>>> model.dzdl = DerivativeVar(model.z, wrt=(model.l), initialize=0)
Declare the mixed second order partial derivative of model.z with respect
to model.t and model.l and set bounds
>>> model.dz2 = DerivativeVar(model.z, wrt=(model.t, model.l), bounds=(-10, 10))
Note
The ‘initialize’ keyword argument will initialize the value of a
derivative and is not the same as specifying an initial
condition. Initial or boundary conditions should be specified using a
Constraint
or
ConstraintList
or
by fixing the value of a Var
at a boundary
point.
Declaring Differential Equations¶
A differential equation is declared as a standard Pyomo
Constraint
and is not required to have
any particular form. The following code snippet shows how one might declare
an ordinary or partial differential equation.
Required imports
>>> from pyomo.environ import *
>>> from pyomo.dae import *
>>> model = ConcreteModel()
>>> model.s = Set(initialize=['a', 'b'])
>>> model.t = ContinuousSet(bounds=(0, 5))
>>> model.l = ContinuousSet(bounds=(-10, 10))
>>> model.x = Var(model.s, model.t)
>>> model.y = Var(model.t, model.l)
>>> model.dxdt = DerivativeVar(model.x, wrt=model.t)
>>> model.dydt = DerivativeVar(model.y, wrt=model.t)
>>> model.dydl2 = DerivativeVar(model.y, wrt=(model.l, model.l))
An ordinary differential equation
>>> def _ode_rule(m, s, t):
... if t == 0:
... return Constraint.Skip
... return m.dxdt[s, t] == m.x[s, t]**2
>>> model.ode = Constraint(model.s, model.t, rule=_ode_rule)
A partial differential equation
>>> def _pde_rule(m, t, l):
... if t == 0 or l == m.l.first() or l == m.l.last():
... return Constraint.Skip
... return m.dydt[t, l] == m.dydl2[t, l]
>>> model.pde = Constraint(model.t, model.l, rule=_pde_rule)
By default, a Constraint
declared over a
ContinuousSet
will be applied at every
discretization point contained in the set. Often a modeler does not want to
enforce a differential equation at one or both boundaries of a continuous
domain. This may be addressed explicitly in the
Constraint
declaration using
Constraint.Skip
as shown above. Alternatively, the desired constraints can
be deactivated just before the model is sent to a solver as shown below.
>>> def _ode_rule(m, s, t):
... return m.dxdt[s, t] == m.x[s, t]**2
>>> model.ode = Constraint(model.s, model.t, rule=_ode_rule)
>>> def _pde_rule(m, t, l):
... return m.dydt[t, l] == m.dydl2[t, l]
>>> model.pde = Constraint(model.t, model.l, rule=_pde_rule)
Declare other model components and apply a discretization transformation
...
Deactivate the differential equations at certain boundary points
>>> for con in model.ode[:, model.t.first()]:
... con.deactivate()
>>> for con in model.pde[0, :]:
... con.deactivate()
>>> for con in model.pde[:, model.l.first()]:
... con.deactivate()
>>> for con in model.pde[:, model.l.last()]:
... con.deactivate()
Solve the model
...
Note
If you intend to use the pyomo.DAE
Simulator
on your model then you
must use constraint deactivation instead of constraint
skipping in the differential equation rule.
Declaring Integrals¶
Warning
The Integral
component is still under
development and considered a prototype. It currently includes only basic
functionality for simple integrals. We welcome feedback on the interface
and functionality but we do not recommend using it on general
models. Instead, integrals should be reformulated as differential
equations.

class
pyomo.dae.
Integral
(*args, **kwds)[source]¶ Represents an integral over a continuous domain
The
Integral
component can be used to represent an integral taken over the entire domain of a ContinuousSet. Once every ContinuousSet
in a model has been discretized, any integrals in the model will be converted to algebraic equations using the trapezoid rule. Future development will include more sophisticated numerical integration methods.
Parameters:  *args – Every indexing set needed to evaluate the integral expression
 wrt (
ContinuousSet
) – The continuous domain over which the integral is being taken  rule (function) – Function returning the expression being integrated

get_continuousset
()[source]¶ Return the
ContinuousSet
the integral is being taken over
Declaring an Integral
component is similar to
declaring an Expression
component. A
simple example is shown below:
>>> model = ConcreteModel()
>>> model.time = ContinuousSet(bounds=(0,10))
>>> model.X = Var(model.time)
>>> model.scale = Param(initialize=1E3)
>>> def _intX(m,t):
... return m.X[t]
>>> model.intX = Integral(model.time,wrt=model.time,rule=_intX)
>>> def _obj(m):
... return m.scale*m.intX
>>> model.obj = Objective(rule=_obj)
Notice that the positional arguments supplied to the
Integral
declaration must include all indices
needed to evaluate the integral expression. The integral expression is defined
in a function and supplied to the ‘rule’ keyword argument. Finally, a user must
specify a ContinuousSet
that the integral
is being evaluated over. This is done using the ‘wrt’ keyword argument.
Note
The ContinuousSet
specified using the
‘wrt’ keyword argument must be explicitly specified as one of the indexing
sets (meaning it must be supplied as a positional argument). This is to
ensure consistency in the ordering and dimension of the indexing sets.
After an Integral
has been declared, it can be
used just like a Pyomo Expression
component and can be included in constraints or the objective function as shown
above.
If an Integral
is specified with multiple
positional arguments, i.e. multiple indexing sets, the final component will be
indexed by all of those sets except for the
ContinuousSet
that the integral was
taken over. In other words, the
ContinuousSet
specified with the
‘wrt’ keyword argument is removed from the indexing sets of the
Integral
even though it must be specified as a
positional argument. This should become more clear with the following example
showing a double integral over the
ContinuousSet
components model.t1
and
model.t2
. In addition, the expression is also indexed by the
Set
model.s
. The mathematical representation
and implementation in Pyomo are shown below:
>>> model = ConcreteModel()
>>> model.t1 = ContinuousSet(bounds=(0, 10))
>>> model.t2 = ContinuousSet(bounds=(-1, 1))
>>> model.s = Set(initialize=['A', 'B', 'C'])
>>> model.X = Var(model.t1, model.t2, model.s)
>>> def _intX1(m, t1, t2, s):
... return m.X[t1, t2, s]
>>> model.intX1 = Integral(model.t1, model.t2, model.s, wrt=model.t1,
... rule=_intX1)
>>> def _intX2(m, t2, s):
... return m.intX1[t2, s]
>>> model.intX2 = Integral(model.t2, model.s, wrt=model.t2, rule=_intX2)
>>> def _obj(m):
... return sum(m.intX2[k] for k in m.s)
>>> model.obj = Objective(rule=_obj)
Discretization Transformations¶
Before a Pyomo model with DerivativeVar
or Integral
components can be sent to a
solver it must first be sent through a discretization transformation. These
transformations approximate any derivatives or integrals in the model by
using a numerical method. The numerical methods currently included in pyomo.DAE
discretize the continuous domains in the problem and introduce equality
constraints which approximate the derivatives and integrals at the
discretization points. Two families of discretization schemes have been
implemented in pyomo.DAE, Finite Difference and Collocation. These schemes are
described in more detail below.
Note
The schemes described here are for derivatives only. All integrals will be transformed using the trapezoid rule.
The user must write a Python script in order to use these discretizations; they have not been tested on the pyomo command line. Example scripts are shown below for each of the discretization schemes. The transformations are applied to Pyomo model objects which can be further manipulated before being sent to a solver. Examples of this are also shown below.
Finite Difference Transformation¶
This transformation includes implementations of several finite difference methods. For example, the Backward Difference method (also called Implicit or Backward Euler) has been implemented. The discretization equation for this method is shown below:
\[\left.\frac{dx}{dt}\right|_{t_k} \approx \frac{x(t_k) - x(t_{k-1})}{h}\]
where \(h\) is the step size between discretization points or the size of
each finite element. These equations are generated automatically as
Constraints
when the backward
difference method is applied to a Pyomo model.
There are several discretization options available to a
dae.finite_difference
transformation which can be specified as keyword
arguments to the .apply_to()
function of the transformation object. These
keywords are summarized below:
Keyword arguments for applying a finite difference transformation:
 ‘nfe’
 The desired number of finite element points to be included in the discretization. The default value is 10.
 ‘wrt’
 Indicates which
ContinuousSet
the transformation should be applied to. If this keyword argument is not specified then the same scheme will be applied to everyContinuousSet
.  ‘scheme’
 Indicates which finite difference method to apply. Options are ‘BACKWARD’, ‘CENTRAL’, or ‘FORWARD’. The default scheme is the backward difference method.
If the existing number of finite element points in a
ContinuousSet
is less than the desired
number, new discretization points will be added to the set. If a user specifies
a number of finite element points which is less than the number of points
already included in the ContinuousSet
then
the transformation will ignore the specified number and proceed with the larger
set of points. Discretization points will never be removed from a
ContinuousSet
during the discretization.
The following code is a Python script applying the backward difference method. The code also shows how to add a constraint to a discretized model.
Discretize model using Backward Difference method
>>> discretizer = TransformationFactory('dae.finite_difference')
>>> discretizer.apply_to(model,nfe=20,wrt=model.time,scheme='BACKWARD')
Add another constraint to discretized model
>>> def _sum_limit(m):
... return sum(m.x1[i] for i in m.time) <= 50
>>> model.con_sum_limit = Constraint(rule=_sum_limit)
Solve discretized model
>>> solver = SolverFactory('ipopt')
>>> results = solver.solve(model)
Collocation Transformation¶
This transformation uses orthogonal collocation to discretize the differential equations in the model. Currently, two types of collocation have been implemented. They both use Lagrange polynomials with either Gauss-Radau roots or Gauss-Legendre roots. For more information on orthogonal collocation and the discretization equations associated with this method please see chapter 10 of the book “Nonlinear Programming: Concepts, Algorithms, and Applications to Chemical Processes” by L.T. Biegler.
The discretization options available to a dae.collocation
transformation
are the same as those described above for the finite difference transformation
with different available schemes and the addition of the ‘ncp’ option.
Additional keyword arguments for collocation discretizations:
 ‘scheme’
 The desired collocation scheme, either ‘LAGRANGE-RADAU’ or ‘LAGRANGE-LEGENDRE’. The default is ‘LAGRANGE-RADAU’.
 ‘ncp’
 The number of collocation points within each finite element. The default value is 3.
Note
If the user’s version of Python has access to the package Numpy then any number of collocation points may be specified, otherwise the maximum number is 10.
Note
Any points that exist in a
ContinuousSet
before discretization
will be used as finite element boundaries and not as collocation points.
The locations of the collocation points cannot be specified by the user;
they must be generated by the transformation.
The following code is a Python script applying collocation with Lagrange polynomials and Radau roots. The code also shows how to add an objective function to a discretized model.
Discretize model using Radau Collocation
>>> discretizer = TransformationFactory('dae.collocation')
>>> discretizer.apply_to(model,nfe=20,ncp=6,scheme='LAGRANGE-RADAU')
Add objective function after model has been discretized
>>> def obj_rule(m):
... return sum((m.x[i]-m.x_ref)**2 for i in m.time)
>>> model.obj = Objective(rule=obj_rule)
Solve discretized model
>>> solver = SolverFactory('ipopt')
>>> results = solver.solve(model)
Restricting Optimal Control Profiles¶
When solving an optimal control problem a user may want to restrict the
number of degrees of freedom for the control input by forcing, for example,
a piecewise constant profile. Pyomo.DAE provides the
reduce_collocation_points
function to address this use case. This function
is used in conjunction with the dae.collocation
discretization
transformation to reduce the number of free collocation points within a finite
element for a particular variable.

class
pyomo.dae.plugins.colloc.
Collocation_Discretization_Transformation
[source]¶ 
reduce_collocation_points
(instance, var=None, ncp=None, contset=None)[source]¶ This method will add additional constraints to a model to reduce the number of free collocation points (degrees of freedom) for a particular variable.
Parameters:  instance (Pyomo model) – The discretized Pyomo model to add constraints to
 var (
pyomo.environ.Var
) – The Pyomo variable for which the degrees of freedom will be reduced
 ncp (int) – The new number of free collocation points for var. Must be less than the number of collocation points used in discretizing the model.
 contset (
pyomo.dae.ContinuousSet
) – The ContinuousSet
that was discretized and for which the var will have a reduced number of degrees of freedom

An example of using this function is shown below:
>>> discretizer = TransformationFactory('dae.collocation')
>>> discretizer.apply_to(model, nfe=10, ncp=6)
>>> model = discretizer.reduce_collocation_points(model,
... var=model.u,
... ncp=1,
... contset=model.time)
In the above example, the reduce_collocation_points
function restricts
the variable model.u
to have only 1 free collocation point per
finite element, thereby enforcing a piecewise constant profile.
Fig. 1 shows the solution profile before and
after applying
the reduce_collocation_points
function.
Applying Multiple Discretization Transformations¶
Discretizations can be applied independently to each
ContinuousSet
in a model. This allows the
user great flexibility in discretizing their model. For example, the same
numerical method can be applied with different resolutions:
>>> discretizer = TransformationFactory('dae.finite_difference')
>>> discretizer.apply_to(model,wrt=model.t1,nfe=10)
>>> discretizer.apply_to(model,wrt=model.t2,nfe=100)
This also allows the user to combine different methods. For example, applying
the forward difference method to one
ContinuousSet
and the central finite
difference method to another
ContinuousSet
:
>>> discretizer = TransformationFactory('dae.finite_difference')
>>> discretizer.apply_to(model,wrt=model.t1,scheme='FORWARD')
>>> discretizer.apply_to(model,wrt=model.t2,scheme='CENTRAL')
In addition, the user may combine finite difference and collocation discretizations. For example:
>>> disc_fe = TransformationFactory('dae.finite_difference')
>>> disc_fe.apply_to(model,wrt=model.t1,nfe=10)
>>> disc_col = TransformationFactory('dae.collocation')
>>> disc_col.apply_to(model,wrt=model.t2,nfe=10,ncp=5)
If the user would like to apply the same discretization to all
ContinuousSet
components in a model, just
specify the discretization once without the ‘wrt’ keyword argument. This will
apply that scheme to all ContinuousSet
components in the model that haven’t already been discretized.
Custom Discretization Schemes¶
A transformation framework along with certain utility functions has been created so that advanced users may easily implement custom discretization schemes other than those listed above. The transformation framework consists of the following steps:
 Specify Discretization Options
 Discretize the ContinuousSet(s)
 Update Model Components
 Add Discretization Equations
 Return Discretized Model
If a user would like to create a custom finite difference scheme, then they only have to worry about step (4) in the framework. The discretization equations for a particular scheme have been isolated from the rest of the code for implementing the transformation. The function containing these discretization equations can be found at the top of the source code file for the transformation. For example, below is the function for the forward difference method:
def _forward_transform(v, s):
    """
    Applies the Forward Difference formula of order O(h) for first derivatives
    """
    def _fwd_fun(i):
        tmp = sorted(s)
        idx = tmp.index(i)
        return 1/(tmp[idx + 1] - tmp[idx]) * (v(tmp[idx + 1]) - v(tmp[idx]))
    return _fwd_fun
In this function, ‘v’ represents the continuous variable or function that the method is being applied to. ‘s’ represents the set of discrete points in the continuous domain. In order to implement a custom finite difference method, a user would have to copy the above function and just replace the equation next to the first return statement with their method.
After implementing a custom finite difference method using the above function
template, the only other change that must be made is to add the custom method
to the ‘all_schemes’ dictionary in the dae.finite_difference
class.
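For instance, a backward difference rule could be written by mirroring the template above; this is a sketch (the name `_backward_transform` and the registration comment are illustrative, not the library's exact internals):

```python
def _backward_transform(v, s):
    """
    Applies the Backward Difference formula of order O(h) for
    first derivatives, mirroring the forward-difference template.
    """
    def _bwd_fun(i):
        tmp = sorted(s)
        idx = tmp.index(i)
        # Difference quotient using the point immediately behind i
        return 1/(tmp[idx] - tmp[idx - 1]) * (v(tmp[idx]) - v(tmp[idx - 1]))
    return _bwd_fun

# Quick check on v(t) = t**2 over the points {0, 1, 2, 3}:
# the backward difference at t=2 is (4 - 1)/(2 - 1) = 3
fun = _backward_transform(lambda t: t**2, [0, 1, 2, 3])
print(fun(2))  # 3.0
```

The new rule would then be registered in the transformation's ‘all_schemes’ dictionary as described above (the exact registration format depends on the Pyomo version).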
In the case of a custom collocation method, changes will have to be made in steps (2) and (4) of the transformation framework. In addition to implementing the discretization equations, the user would also have to ensure that the desired collocation points are added to the ContinuousSet being discretized.
Dynamic Model Simulation¶
The pyomo.dae Simulator class can be used to simulate systems of ODEs and DAEs. It provides an interface to integrators available in other Python packages.
Note
The pyomo.dae Simulator does not include integrators directly. The user must have at least one of the supported Python packages installed in order to use this class.

class
pyomo.dae.
Simulator
(m, package='scipy')[source]¶ Simulator objects allow a user to simulate a dynamic model formulated using pyomo.dae.
Parameters:  m (Pyomo Model) – The Pyomo model to be simulated should be passed as the first argument
 package (string) – The Python simulator package to use. Currently ‘scipy’ and ‘casadi’ are the only supported packages

get_variable_order
(vartype=None)[source]¶ This function returns the ordered list of differential variable names. The order corresponds to the order being sent to the integrator function. Knowing the order allows users to provide initial conditions for the differential equations using a list, or to map the profiles returned by the simulate function to the Pyomo variables.
Parameters: vartype (string or None) – Optional argument for specifying the type of variables to return the order for. The default behavior is to return the order of the differential variables. ‘time-varying’ will return the order of all the time-dependent algebraic variables identified in the model. ‘algebraic’ will return the order of algebraic variables used in the most recent call to the simulate function. ‘input’ will return the order of the time-dependent algebraic variables that were treated as inputs in the most recent call to the simulate function. Returns: Return type: list

initialize_model
()[source]¶ This function will initialize the model using the profile obtained from simulating the dynamic model.

simulate
(numpoints=None, tstep=None, integrator=None, varying_inputs=None, initcon=None, integrator_options=None)[source]¶ Simulate the model. Integrator-specific options may be specified as keyword arguments and will be passed on to the integrator.
Parameters:  numpoints (int) – The number of points for the profiles returned by the simulator. Default is 100
 tstep (int or float) – The time step to use in the profiles returned by the simulator. This is not the time step used internally by the integrators. This is an optional parameter that may be specified in place of ‘numpoints’.
 integrator (string) – The string name of the integrator to use for simulation. The default is ‘lsoda’ when using SciPy and ‘idas’ when using CasADi
 varying_inputs (
pyomo.environ.Suffix
) – A Suffix
object containing the piecewise constant profiles to be used for certain time-varying algebraic variables.
 initcon (list of floats) – The initial conditions for the differential variables. This is an optional argument. If not specified then the simulator will use the current value of the differential variables at the lower bound of the ContinuousSet for the initial condition.
 integrator_options (dict) – Dictionary containing options that should be passed to the integrator. See the documentation for a specific integrator for a list of valid options.
Returns: The first return value is a 1-D array of time points corresponding to the second return value, which is a 2-D array of the profiles for the simulated differential and algebraic variables.
Return type: numpy array, numpy array
Note
Any keyword options supported by the integrator may be specified as keyword options to the simulate function and will be passed to the integrator.
Supported Simulator Packages¶
The Simulator currently includes interfaces to SciPy and CasADi. ODE simulation is supported in both packages; however, DAE simulation is only supported by CasADi. A list of available integrators for each package is given below. Please refer to the SciPy and CasADi documentation directly for the most up-to-date information about these packages and for more information about the various integrators and options.
 SciPy Integrators:
 ‘vode’ : Real-valued Variable-coefficient ODE solver, options for non-stiff and stiff systems
 ‘zvode’ : Complex-valued Variable-coefficient ODE solver, options for non-stiff and stiff systems
 ‘lsoda’ : Real-valued Variable-coefficient ODE solver, automatic switching of algorithms for non-stiff or stiff systems
 ‘dopri5’ : Explicit Runge-Kutta method of order (4)5 ODE solver
 ‘dop853’ : Explicit Runge-Kutta method of order 8(5,3) ODE solver
 CasADi Integrators:
 ‘cvodes’ : CVodes from the Sundials suite, solver for stiff or non-stiff ODE systems
 ‘idas’ : IDAS from the Sundials suite, DAE solver
 ‘collocation’ : Fixed-step implicit Runge-Kutta method, ODE/DAE solver
 ‘rk’ : Fixed-step explicit Runge-Kutta method, ODE solver
Using the Simulator¶
We now show how to use the Simulator to simulate the following system of ODEs, a damped pendulum:

dθ(t)/dt = ω(t)
dω(t)/dt = -b·ω(t) - c·sin(θ(t))
We begin by formulating the model using pyomo.DAE
>>> m = ConcreteModel()
>>> m.t = ContinuousSet(bounds=(0.0, 10.0))
>>> m.b = Param(initialize=0.25)
>>> m.c = Param(initialize=5.0)
>>> m.omega = Var(m.t)
>>> m.theta = Var(m.t)
>>> m.domegadt = DerivativeVar(m.omega, wrt=m.t)
>>> m.dthetadt = DerivativeVar(m.theta, wrt=m.t)
Setting the initial conditions
>>> m.omega[0].fix(0.0)
>>> m.theta[0].fix(3.14 - 0.1)
>>> def _diffeq1(m, t):
...     return m.domegadt[t] == -m.b * m.omega[t] - m.c * sin(m.theta[t])
>>> m.diffeq1 = Constraint(m.t, rule=_diffeq1)
>>> def _diffeq2(m, t):
...     return m.dthetadt[t] == m.omega[t]
>>> m.diffeq2 = Constraint(m.t, rule=_diffeq2)
Notice that the initial conditions are set by fixing the values of
m.omega
and m.theta
at t=0 instead of being specified as extra
equality constraints. Also notice that the differential equations are
specified without using Constraint.Skip
to skip enforcement at t=0. The
Simulator cannot simulate any constraints that contain if-statements in
their construction rules.
To simulate the model you must first create a Simulator object. Building this object prepares the Pyomo model for simulation with a particular Python package and performs several checks on the model to ensure compatibility with the Simulator. Be sure to read through the list of limitations at the end of this section to understand the types of models supported by the Simulator.
>>> sim = Simulator(m, package='scipy')
After creating a Simulator object, the model can be simulated by calling the
simulate function. Please see the API documentation for the
Simulator
for more information about the
valid keyword arguments for this function.
>>> tsim, profiles = sim.simulate(numpoints=100, integrator='vode')
The simulate
function returns numpy arrays containing time points and
the corresponding values for the dynamic variable profiles.
 Simulator Limitations:
 Differential equations must be first-order and separable
 Model can only contain a single ContinuousSet
 Can’t simulate constraints with if-statements in the construction rules
 Need to provide initial conditions for dynamic states by setting the value or using fix()
Specifying Time-Varying Inputs¶
The Simulator
supports simulation of a system
of ODEs or DAEs with time-varying parameters or control inputs. Time-varying
inputs can be specified using a Pyomo Suffix
. We currently only support
piecewise constant profiles. For more complex inputs defined by a continuous
function of time we recommend adding an algebraic variable and constraint to
your model.
The profile for a timevarying input should be specified
using a Python dictionary where the keys correspond to the switching times
and the values correspond to the value of the input at a time point. A
Suffix
is then used to associate this dictionary with the appropriate
Var
or Param
and pass the information to the
Simulator
. The code snippet below shows an
example.
>>> m = ConcreteModel()
>>> m.t = ContinuousSet(bounds=(0.0, 20.0))
Timevarying inputs
>>> m.b = Var(m.t)
>>> m.c = Param(m.t, default=5.0)
>>> m.omega = Var(m.t)
>>> m.theta = Var(m.t)
>>> m.domegadt = DerivativeVar(m.omega, wrt=m.t)
>>> m.dthetadt = DerivativeVar(m.theta, wrt=m.t)
Setting the initial conditions
>>> m.omega[0] = 0.0
>>> m.theta[0] = 3.14 - 0.1
>>> def _diffeq1(m, t):
...     return m.domegadt[t] == -m.b[t] * m.omega[t] - \
...         m.c[t] * sin(m.theta[t])
>>> m.diffeq1 = Constraint(m.t, rule=_diffeq1)
>>> def _diffeq2(m, t):
...     return m.dthetadt[t] == m.omega[t]
>>> m.diffeq2 = Constraint(m.t, rule=_diffeq2)
Specifying the piecewise constant inputs
>>> b_profile = {0: 0.25, 15: 0.025}
>>> c_profile = {0: 5.0, 7: 50}
Declaring a Pyomo Suffix to pass the timevarying inputs to the Simulator
>>> m.var_input = Suffix(direction=Suffix.LOCAL)
>>> m.var_input[m.b] = b_profile
>>> m.var_input[m.c] = c_profile
Simulate the model using scipy
>>> sim = Simulator(m, package='scipy')
>>> tsim, profiles = sim.simulate(numpoints=100,
... integrator='vode',
... varying_inputs=m.var_input)
Note
The Simulator does not support multi-indexed inputs (i.e., if m.b in the above example were indexed by another set besides m.t).
Dynamic Model Initialization¶
Providing a good initial guess is an important factor in solving dynamic optimization problems. There are several model initialization tools under development in pyomo.DAE to help users initialize their models. These tools will be documented here as they become available.
From Simulation¶
The Simulator
includes a function for
initializing discretized dynamic optimization models using the profiles
returned from the simulator. An example using this function is shown below
Simulate the model using scipy
>>> sim = Simulator(m, package='scipy')
>>> tsim, profiles = sim.simulate(numpoints=100, integrator='vode',
... varying_inputs=m.var_input)
Discretize the model using Orthogonal Collocation
>>> discretizer = TransformationFactory('dae.collocation')
>>> discretizer.apply_to(m, nfe=10, ncp=3)
Initialize the discretized model using the simulator profiles
>>> sim.initialize_model()
Note
A model must be simulated before it can be initialized using this function
MPEC¶
pyomo.mpec
supports modeling complementarity conditions and
optimization problems with equilibrium constraints.
Generalized Disjunctive Programming¶
The Pyomo.GDP modeling extension allows users to include logical disjunctions in their models. These disjunctions are often used to model discrete decisions that have implications for the system behavior. For example, in process design, a disjunction may model the choice between processes A and B. If A is selected, then its associated equations and inequalities will apply; otherwise, if B is selected, then its respective constraints should be enforced. In the general case, if these models contain nonlinear relations, then they are Generalized Disjunctive Programming (GDP) models.
Disjunctions¶
A disjunction is a set of constraint groupings that are linked by a logical OR relationship. The simplest case is a 2-term disjunction:
That is, either the constraints in the collection D_{1} are enforced, OR the constraints in the collection D_{2} are enforced.
In Pyomo, we model each collection using a special type of block called
a Disjunct
. Each Disjunct
is a block that contains an implicitly
declared binary variable, “indicator_var”, that is 1 when the constraints
in that Disjunct
are enforced and 0 otherwise.
Declaration¶
The following condensed code snippet illustrates a Disjunct
and a
Disjunction
:
>>> from pyomo.gdp import Disjunct, Disjunction
>>> model.d1 = Disjunct()
>>> model.d1.c = pyo.Constraint(expr=model.x == 0)
>>> model.d2 = Disjunct()
>>> model.d2.c = pyo.Constraint(expr=model.x == 5)
model
has two disjuncts, model.d1
and model.d2
. However,
just defining disjuncts is not sufficient to define disjunctions, as
Pyomo has no way of knowing which disjuncts should be bundled into which
disjunctions. To define a disjunction, you use a Disjunction
component. The disjunction takes either a rule or an expression that
returns a list of disjuncts over which it should form the
disjunction:
>>> model.c = Disjunction(expr=[model.d1, model.d2])
Note
Like Block
, Disjunct
can be indexed and defined by rules.
There is no requirement that disjuncts be indexed and also no
requirement that Disjuncts
within a Disjunction
be defined
using a common rule.
Transformation¶
To use standard commercial solvers, you must convert the disjunctive model to a standard MIP/MINLP model.
The two classical strategies for doing so are the (included) BigM and Hull reformulations.
From the Pyomo command line, include the option --transform pyomo.gdp.bigm or --transform pyomo.gdp.hull.
If you are using a Python script, TransformationFactory
accomplishes the same functionality:
TransformationFactory('gdp.bigm').apply_to(model)
TransformationFactory('gdp.hull').apply_to(model)
Note
 all variables that appear in disjuncts need upper and lower bounds for the hull reformulation
 for linear models, the BigM transform can estimate reasonably tight M values for you if variables are bounded.
 for nonlinear models where finite expression bounds may be inferred from variable bounds, the BigM transformation may also be able to automatically compute M values for you.
 for all other models, you will need to provide the M values through a “BigM” Suffix. A GDP_Error will be raised for missing M values.
 When you declare a Disjunct, it will automatically (at declaration time) have a binary variable “indicator_var” defined and attached to it. After that, it is just a Var like any other Var.
 The hull reformulation is an exact reformulation at the solution points even for nonconvex models, but the resulting MINLP will also be nonconvex.
Direct GDP solvers¶
Pyomo includes the contributed GDPopt solver, which can solve GDP models directly. Its documentation and usage are described at GDPopt logic-based solver.
Examples¶
The following models all work and are equivalent:
Option 1: maximal verbosity, abstract-like
>>> model = pyo.ConcreteModel()
>>> model.x = pyo.Var()
>>> model.y = pyo.Var()
>>> # Two conditions
>>> def _d(disjunct, flag):
...     model = disjunct.model()
...     if flag:
...         # x == 0
...         disjunct.c = pyo.Constraint(expr=model.x == 0)
...     else:
...         # y == 0
...         disjunct.c = pyo.Constraint(expr=model.y == 0)
>>> model.d = Disjunct([0,1], rule=_d)
>>> # Define the disjunction
>>> def _c(model):
...     return [model.d[0], model.d[1]]
>>> model.c = Disjunction(rule=_c)
Option 2: Maximal verbosity, concrete-like:
>>> model = pyo.ConcreteModel()
>>> model.x = pyo.Var()
>>> model.y = pyo.Var()
>>> model.fix_x = Disjunct()
>>> model.fix_x.c = pyo.Constraint(expr=model.x == 0)
>>> model.fix_y = Disjunct()
>>> model.fix_y.c = pyo.Constraint(expr=model.y == 0)
>>> model.c = Disjunction(expr=[model.fix_x, model.fix_y])
Option 3: Implicit disjuncts (disjunction rule returns a list of
expressions or a list of lists of expressions)
>>> model = pyo.ConcreteModel()
>>> model.x = pyo.Var()
>>> model.y = pyo.Var()
>>> model.c = Disjunction(expr=[model.x == 0, model.y == 0])
Stochastic Programming¶
To express a stochastic program in PySP, the user specifies both the deterministic base model and the scenario tree model with associated uncertain parameters. Both concrete and abstract model representations are supported.
Given the deterministic and scenario tree models, PySP provides multiple paths for the solution of the corresponding stochastic program. One alternative involves forming the extensive form and invoking an appropriate deterministic solver for the entire problem once. For more complex stochastic programs, we provide a generic implementation of Rockafellar and Wets’ Progressive Hedging algorithm, with additional specializations for approximating mixedinteger stochastic programs as well as other decomposition methods. By leveraging the combination of a highlevel programming language (Python) and the embedding of the base deterministic model in that language (Pyomo), we are able to provide completely generic and highly configurable solver implementations.
This section describes PySP (Pyomo Stochastic Programming), in which parameters are allowed to be uncertain.
Overview of Modeling Components and Processes¶
The sequence of activities is typically the following:
 Create a deterministic model and declare components
 Develop basecase data for the deterministic model
 Test, verify and validate the deterministic model
 Model the stochastic processes
 Develop a way to generate scenarios (in the form of a tree if there are more than two stages)
 Create the data files needed to describe the stochastics
 Use PySP to solve the stochastic problem
When viewed from the standpoint of file creation, the process is
 Create an abstract model for the deterministic problem in a file
called
ReferenceModel.py
 Specify the stochastics in a file called
ScenarioStructure.dat
 Specify scenario data
Birge and Louveaux’s Farmer Problem¶
Birge and Louveaux [BirgeLouveauxBook] make use of the example of a farmer who has 500 acres that can be planted in wheat, corn or sugar beets, at a per acre cost of 150, 230 and 260 (Euros, presumably), respectively. The farmer needs to have at least 200 tons of wheat and 240 tons of corn to use as feed, but if enough is not grown, those crops can be purchased for 238 and 210, respectively. Corn and wheat grown in excess of the feed requirements can be sold for 170 and 150, respectively. A price of 36 per ton is guaranteed for the first 6000 tons grown by any farmer, but beets in excess of that are sold for 10 per ton. The yield is 2.5, 3, and 20 tons per acre for wheat, corn and sugar beets, respectively.
ReferenceModel.py¶
So far, this is a deterministic problem because we are assuming that we
know all the data. The Pyomo model for this problem shown here is in the
file ReferenceModel.py
in the subdirectory
examples/pysp/farmer/models
that is distributed with Pyomo.
# ___________________________________________________________________________
#
# Pyomo: Python Optimization Modeling Objects
# Copyright 2017 National Technology and Engineering Solutions of Sandia, LLC
# Under the terms of Contract DENA0003525 with National Technology and
# Engineering Solutions of Sandia, LLC, the U.S. Government retains certain
# rights in this software.
# This software is distributed under the 3clause BSD License.
# ___________________________________________________________________________
# Farmer: rent out version has a scalar root node var
# note: this will minimize
#
# Imports
#
from pyomo.core import *
#
# Model
#
model = AbstractModel()
#
# Parameters
#
model.CROPS = Set()
model.TOTAL_ACREAGE = Param(within=PositiveReals)
model.PriceQuota = Param(model.CROPS, within=PositiveReals)
model.SubQuotaSellingPrice = Param(model.CROPS, within=PositiveReals)
def super_quota_selling_price_validate(model, value, i):
    return model.SubQuotaSellingPrice[i] >= model.SuperQuotaSellingPrice[i]
model.SuperQuotaSellingPrice = Param(model.CROPS, validate=super_quota_selling_price_validate)
model.CattleFeedRequirement = Param(model.CROPS, within=NonNegativeReals)
model.PurchasePrice = Param(model.CROPS, within=PositiveReals)
model.PlantingCostPerAcre = Param(model.CROPS, within=PositiveReals)
model.Yield = Param(model.CROPS, within=NonNegativeReals)
#
# Variables
#
model.DevotedAcreage = Var(model.CROPS, bounds=(0.0, model.TOTAL_ACREAGE))
model.QuantitySubQuotaSold = Var(model.CROPS, bounds=(0.0, None))
model.QuantitySuperQuotaSold = Var(model.CROPS, bounds=(0.0, None))
model.QuantityPurchased = Var(model.CROPS, bounds=(0.0, None))
#
# Constraints
#
def ConstrainTotalAcreage_rule(model):
    return summation(model.DevotedAcreage) <= model.TOTAL_ACREAGE
model.ConstrainTotalAcreage = Constraint(rule=ConstrainTotalAcreage_rule)
def EnforceCattleFeedRequirement_rule(model, i):
    return model.CattleFeedRequirement[i] <= (model.Yield[i] * model.DevotedAcreage[i]) + model.QuantityPurchased[i] - model.QuantitySubQuotaSold[i] - model.QuantitySuperQuotaSold[i]
model.EnforceCattleFeedRequirement = Constraint(model.CROPS, rule=EnforceCattleFeedRequirement_rule)
def LimitAmountSold_rule(model, i):
    return model.QuantitySubQuotaSold[i] + model.QuantitySuperQuotaSold[i] - (model.Yield[i] * model.DevotedAcreage[i]) <= 0.0
model.LimitAmountSold = Constraint(model.CROPS, rule=LimitAmountSold_rule)
def EnforceQuotas_rule(model, i):
    return (0.0, model.QuantitySubQuotaSold[i], model.PriceQuota[i])
model.EnforceQuotas = Constraint(model.CROPS, rule=EnforceQuotas_rule)
#
# Stagespecific cost computations
#
def ComputeFirstStageCost_rule(model):
    return summation(model.PlantingCostPerAcre, model.DevotedAcreage)
model.FirstStageCost = Expression(rule=ComputeFirstStageCost_rule)
def ComputeSecondStageCost_rule(model):
    expr = summation(model.PurchasePrice, model.QuantityPurchased)
    expr -= summation(model.SubQuotaSellingPrice, model.QuantitySubQuotaSold)
    expr -= summation(model.SuperQuotaSellingPrice, model.QuantitySuperQuotaSold)
    return expr
model.SecondStageCost = Expression(rule=ComputeSecondStageCost_rule)
#
# PySP Autogenerated Objective
#
# minimize: sum of StageCosts
#
# An active scenario objective equivalent to that generated by PySP is
# included here for informational purposes.
def total_cost_rule(model):
    return model.FirstStageCost + model.SecondStageCost
model.Total_Cost_Objective = Objective(rule=total_cost_rule, sense=minimize)
Example Data¶
The data introduced here are in the file AverageScenario.dat in the subdirectory examples/pysp/farmer/scenariodata that is distributed with Pyomo. These data are given for illustration. The file ReferenceModel.dat is not required by PySP.
# "mean" scenario
set CROPS := WHEAT CORN SUGAR_BEETS ;
param TOTAL_ACREAGE := 500 ;
# no quotas on wheat or corn
param PriceQuota := WHEAT 100000 CORN 100000 SUGAR_BEETS 6000 ;
param SubQuotaSellingPrice := WHEAT 170 CORN 150 SUGAR_BEETS 36 ;
param SuperQuotaSellingPrice := WHEAT 0 CORN 0 SUGAR_BEETS 10 ;
param CattleFeedRequirement := WHEAT 200 CORN 240 SUGAR_BEETS 0 ;
# can't purchase beets (no real need, as cattle don't eat them)
param PurchasePrice := WHEAT 238 CORN 210 SUGAR_BEETS 100000 ;
param PlantingCostPerAcre := WHEAT 150 CORN 230 SUGAR_BEETS 260 ;
param Yield := WHEAT 2.5 CORN 3 SUGAR_BEETS 20 ;
Any of these data could be modeled as uncertain, but we will consider only the possibility that the yield per acre could be higher or lower than expected. Assume that there is a probability of 1/3 that the yields will be the average values that were given (i.e., wheat 2.5; corn 3; and beets 20). Assume that there is a 1/3 probability that they will be lower (2, 2.4, 16) and 1/3 probability they will be higher (3, 3.6, 24). We refer to each full set of data as a scenario and collectively we call them a scenario tree. In this case the scenario tree is very simple: there is a root node and three leaf nodes: one corresponding to each scenario. The acreagetoplant decisions are root node decisions because they must be made without knowing what the yield will be. The other variables are socalled second stage decisions, because they will depend on which scenario is realized.
ScenarioStructure.dat¶
PySP requires that users describe the scenario tree using specific
constructs in a file named ScenarioStructure.dat
; for the farmer
problem, this file can be found in the pyomo subdirectory
examples/pysp/farmer/scenariodata
that is distributed with Pyomo.
# IMPORTANT - THE STAGES ARE ASSUMED TO BE IN TIME-ORDER.
set Stages := FirstStage SecondStage ;
set Nodes := RootNode
BelowAverageNode
AverageNode
AboveAverageNode ;
param NodeStage := RootNode FirstStage
BelowAverageNode SecondStage
AverageNode SecondStage
AboveAverageNode SecondStage ;
set Children[RootNode] := BelowAverageNode
AverageNode
AboveAverageNode ;
param ConditionalProbability := RootNode 1.0
BelowAverageNode 0.33333333
AverageNode 0.33333334
AboveAverageNode 0.33333333 ;
set Scenarios := BelowAverageScenario
AverageScenario
AboveAverageScenario ;
param ScenarioLeafNode :=
BelowAverageScenario BelowAverageNode
AverageScenario AverageNode
AboveAverageScenario AboveAverageNode ;
set StageVariables[FirstStage] := DevotedAcreage[*];
set StageVariables[SecondStage] := QuantitySubQuotaSold[*]
QuantitySuperQuotaSold[*]
QuantityPurchased[*];
param StageCost := FirstStage FirstStageCost
SecondStage SecondStageCost ;
This data file is verbose and somewhat redundant, but in most
applications it is generated by software rather than by a person, so
this is not an issue. Generally, the leftmost part of each expression
(e.g. “set Stages :=”) is required and uses reserved words (e.g.,
Stages
) and the other names are supplied by the user (e.g.,
“FirstStage” could be any name). Every assignment is terminated with a
semicolon. We will now consider the assignments in this file one at a
time.
The first assignment provides names for the stages; the words “set Stages” are required, as are the := symbols. Any names can be used. In this example, we used “FirstStage” and “SecondStage” but we could have used “EtapPrimero” and “ZweiteEtage” if we had wanted to. Whatever names are given here will continue to be used to refer to the stages in the rest of the file. The order of the names is important: generally, the names must be in time order (technically, they need to be in order of information discovery, but that is usually time order). Stages refers to decision stages, which may or may not correspond directly with time stages. In the farmer example, decisions about how much to plant are made in the first stage, and “decisions” (which are pretty obvious, but which are decision variables nonetheless) about how much to sell at each price and how much needs to be bought are second stage decisions because they are made after the yield is known.
set Stages := FirstStage SecondStage ;
Node names are constructed next. The words “set Nodes” are required, but any names may be assigned to the nodes. In two stage stochastic problems there is a root node, which we chose to name “RootNode” and then there is a node for each scenario.
set Nodes := RootNode
BelowAverageNode
AverageNode
AboveAverageNode ;
Nodes are associated with time stages in an assignment beginning with the required words “param NodeStage.” The assignments must make use of previously defined node and stage names. Every node must be assigned a stage.
param NodeStage := RootNode FirstStage
BelowAverageNode SecondStage
AverageNode SecondStage
AboveAverageNode SecondStage ;
The structure of the scenario tree is defined by assigning children to each node that has them. Since this is a two-stage problem, only the root node has children. The words “set Children” are required for every node that has children, and the name of the node appears in square brackets before the colon-equals assignment symbols. A list of children is assigned.
set Children[RootNode] := BelowAverageNode
AverageNode
AboveAverageNode ;
The probability for each node, conditional on observing the parent node, is given in an assignment that begins with the required words “param ConditionalProbability.” The root node always has a conditional probability of 1, but it must be given anyway. In this example, the second stage nodes are equally likely.
param ConditionalProbability := RootNode 1.0
BelowAverageNode 0.33333333
AverageNode 0.33333334
AboveAverageNode 0.33333333 ;
Scenario names are given in an assignment that begins with the required words “set Scenarios” and provides a list of the names of the scenarios. Any names may be given. In many applications they are given unimaginative names generated by software such as “Scen1” and the like. In this example, there are three scenarios and the names reflect the relative values of the yields.
set Scenarios := BelowAverageScenario
AverageScenario
AboveAverageScenario ;
Leaf nodes, which are nodes with no children, are associated with scenarios. This assignment must be one-to-one, and it is initiated with the words “param ScenarioLeafNode” followed by the := assignment characters.
param ScenarioLeafNode :=
BelowAverageScenario BelowAverageNode
AverageScenario AverageNode
AboveAverageScenario AboveAverageNode ;
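The one-to-one requirement is easy to violate when copying and pasting node names, so a quick check can be useful. A minimal sketch in plain Python (the mapping mirrors the data above; this is not a PySP API):

```python
# Sketch: the scenario-to-leaf-node assignment in
# "param ScenarioLeafNode" must be one-to-one.
scenario_leaf_node = {
    "BelowAverageScenario": "BelowAverageNode",
    "AverageScenario": "AverageNode",
    "AboveAverageScenario": "AboveAverageNode",
}

def is_one_to_one(mapping):
    """True when no two scenarios share a leaf node."""
    return len(set(mapping.values())) == len(mapping)
```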
Variables are associated with stages using an assignment that begins with the required words “set StageVariables” and the name of a stage in square brackets, followed by the := assignment characters. Variable names that have been defined in the file ReferenceModel.py can be assigned to stages. Any variables that are not assigned are assumed to be in the last stage. Variable indexes can be given explicitly and/or wildcards can be used. Note that the variable names appear without the prefix “model.” In the farmer example, DevotedAcreage is the only first-stage variable.
set StageVariables[FirstStage] := DevotedAcreage[*] ;
set StageVariables[SecondStage] := QuantitySubQuotaSold[*]
QuantitySuperQuotaSold[*]
QuantityPurchased[*] ;
Note
Variable names appear without the prefix “model.”
Note
Wildcards can be used, but fully general Python slicing is not supported.
For reporting purposes, it is useful to define auxiliary variables in ReferenceModel.py that will be assigned the cost associated with each stage. These variables do not impact algorithms, but the values are output by some software during execution as well as upon completion. The names of the variables are assigned to stages using the “param StageCost” assignment. The stages are previously defined in ScenarioStructure.dat and the variables are previously defined in ReferenceModel.py.
param StageCost := FirstStage FirstStageCost
SecondStage SecondStageCost ;
Scenario data specification¶
So far, we have given a model in the file named ReferenceModel.py, a set of deterministic data in the file named ReferenceModel.dat, and a description of the stochastics in the file named ScenarioStructure.dat. All that remains is to give the data for each scenario. There are two ways to do that in PySP: scenario-based and node-based. The default is scenario-based, so we will describe that first.
For scenario-based data, the full data for each scenario is given in a .dat file with the root name that is the name of the scenario. So, for example, the file named AverageScenario.dat must contain all the data for the model for the scenario named “AverageScenario.” It turns out that this file can be created by simply copying the file ReferenceModel.dat as shown above, because it contains a full set of data for the “AverageScenario” scenario. The files BelowAverageScenario.dat and AboveAverageScenario.dat will differ from this file and from each other only in their last line, where the yield is specified. These three files are distributed with Pyomo and are in the pyomo subdirectory examples/pysp/farmer/scenariodata along with ScenarioStructure.dat and ReferenceModel.dat.
Scenario-based data wastes resources by specifying the same thing over and over again. In many cases, that does not matter, and it is convenient to have full scenario data files available (for one thing, the scenarios can easily be run independently using the pyomo command). However, in many other settings, it is better to use a node-based specification, where the data that is unique to each node is specified in a .dat file with a root name that matches the node name. In the farmer example, the file RootNode.dat will be the same as ReferenceModel.dat except that it will lack the last line that specifies the yield. The files BelowAverageNode.dat, AverageNode.dat, and AboveAverageNode.dat will contain only one line each to specify the yield. If node-based data is to be used, then the ScenarioStructure.dat file must contain the following line:
param ScenarioBasedData := False ;
An entire set of files for node-based data for the farmer problem is distributed with Pyomo in the subdirectory examples/pysp/farmer/nodedata.
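For concreteness, a node data file contains only the data unique to that node. A sketch of what AverageNode.dat could look like for the farmer example follows; the Yield values shown are the classic averages from the farmer problem and are an assumption here, not copied from the distributed files:

```
param Yield := WHEAT 2.5
               CORN 3
               SUGAR_BEETS 20 ;
```

The below- and above-average node files would differ only in these yield values.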
Finding Solutions for Stochastic Models¶
PySP provides a variety of tools for finding solutions to stochastic programs.
runef¶
The runef command puts together the so-called extensive form version of the model. It creates a large model that has constraints to ensure that variables at a node have the same value. For example, in the farmer problem, all of the DevotedAcreage variables must have the same value regardless of which scenario is ultimately realized. The objective can be the expected value of the objective function, or the CVaR, or a weighted combination of the two. Expected value is the default. A full set of options for runef can be obtained using the command:
runef --help
The pyomo distribution contains the files needed to run the farmer example in subdirectories of the directory examples/pysp/farmer, so if this is the current directory and if CPLEX is installed, the following command will cause formation of the EF and its solution using CPLEX.
runef -m models -i nodedata --solver=cplex --solve
The option -m models has one dash and is shorthand for the option --model-directory=models; note that the full option uses two dashes. The -i is equivalent to --instance-directory= in the same fashion. The default solver is CPLEX, so the solver option is not really needed. Without the --solve option, runef would simply write an .lp data file that could be passed to a solver.
runph¶
The runph command executes an implementation of Progressive Hedging (PH) that is intended to support scripting and extension.
The pyomo distribution contains the files needed to run the farmer example in subdirectories of the directory examples/pysp/farmer, so if this is the current directory and if CPLEX is installed, the following command will cause PH to execute using the default subproblem solver, which is CPLEX.
runph -m models -i nodedata
The option -m models has one dash and is shorthand for the option --model-directory=models; note that the full option uses two dashes. The -i is equivalent to --instance-directory= in the same fashion.
After about 33 iterations, the algorithm will achieve the default level of convergence and terminate. A lot of output is generated and among the output is the following solution information:
Variable=DevotedAcreage
Index: [CORN] (Scenarios: BelowAverageScenario AverageScenario AboveAverageScenario )
Values: 79.9844 80.0000 79.9768 Max-Min= 0.0232 Avg= 79.9871
Index: [SUGAR_BEETS] (Scenarios: BelowAverageScenario AverageScenario AboveAverageScenario )
Values: 249.9848 249.9770 250.0000 Max-Min= 0.0230 Avg= 249.9873
Index: [WHEAT] (Scenarios: BelowAverageScenario AverageScenario AboveAverageScenario )
Values: 170.0308 170.0230 170.0232 Max-Min= 0.0078 Avg= 170.0256
Cost Variable=FirstStageCost
Tree Node=RootNode (Scenarios: BelowAverageScenario AverageScenario AboveAverageScenario )
Values: 108897.0836 108897.4725 108898.1476 Max-Min= 1.0640 Avg= 108897.5679
For problems with no, or few, integer variables, the default level of convergence leaves rootnode variables almost converged. Since the acreage to be planted cannot depend on the scenario that will be realized in the future, the average, which is labeled “Avg” in this output, would be used. A farmer would probably interpret acreages of 79.9871, 249.9873, and 170.0256 to be 80, 250, and 170. In realworld applications, PH is embedded in scripts that produce output in a format desired by a decision maker.
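The reported “Avg” is simply the probability-weighted mean of the converged values across scenarios. A plain-Python sketch using the DevotedAcreage[CORN] numbers from the output above (illustrative only, not PySP code):

```python
# Sketch: reproduce the "Avg" value that PH reports for a variable
# as a probability-weighted average over scenarios.
values = {  # DevotedAcreage[CORN] from the PH output above
    "BelowAverageScenario": 79.9844,
    "AverageScenario": 80.0000,
    "AboveAverageScenario": 79.9768,
}
probability = {s: 1.0 / 3.0 for s in values}  # equally likely scenarios

xbar = sum(probability[s] * v for s, v in values.items())
```

With equal weights this recovers the reported average of roughly 79.9871.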
But in realworld applications, the default settings for PH seldom work well enough. In addition to postprocessing the output, a number of parameters need to be adjusted and sometimes scripting to extend or augment the algorithm is needed to improve convergence rates. A full set of options can be obtained with the command:
runph --help
Note that there are two dashes before help.
By default, PH uses quadratic objective functions after iteration zero; in some settings it may be desirable to linearize the quadratic terms. This is required to use a solver such as glpk for MIPs, because glpk does not support quadratic MIPs. The directive --linearize-nonbinary-penalty-terms=n causes linearization of the penalty terms using n pieces. For example, to use glpk on the farmer, assuming glpk is installed and the command is given when the current directory is examples/pysp/farmer, the following command will use default settings for most parameters and four pieces to approximate quadratic terms in subproblems:
runph -i nodedata -m models --solver=glpk --linearize-nonbinary-penalty-terms=4
Use of the --linearize-nonbinary-penalty-terms option requires that all variables not in the final stage have bounds.
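To see why bounds are needed, note that an n-piece linearization places breakpoints across each variable's bounded domain. The following plain-Python sketch of a secant approximation of the quadratic penalty x**2 illustrates the idea (function names are hypothetical; this is not PySP's implementation):

```python
# Sketch: piecewise-linear (secant) over-approximation of the
# quadratic penalty x**2 on [lb, ub] using n pieces.  Bounds are
# required precisely because breakpoints span [lb, ub].
def breakpoints(lb, ub, n):
    step = (ub - lb) / n
    return [lb + i * step for i in range(n + 1)]

def secant_eval(lb, ub, n, x):
    """Evaluate the n-piece secant approximation of x**2 at x."""
    pts = breakpoints(lb, ub, n)
    for a, b in zip(pts, pts[1:]):
        if a <= x <= b:
            slope = (b * b - a * a) / (b - a)  # equals a + b
            return a * a + slope * (x - a)
    raise ValueError("x outside [lb, ub]")
```

Since x**2 is convex, each secant piece lies on or above the true penalty, so the approximation never underestimates it.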
Final Solution¶
At each iteration, PH computes an average for each variable over the nodes of the scenario tree. We refer to this as Xbar. For many problems, particularly those with integer restrictions, Xbar might not be feasible for every scenario unless PH happens to be fully converged (in the primal variables). Consequently, the software computes a solution, Xhat, that is more likely to be feasible for every scenario and will be equivalent to Xbar under full convergence. This solution is reported upon completion of PH, and its expected value is reported if it is feasible for all scenarios.
Methods for computing Xhat are controlled by the --xhat-method command-line option. For example, --xhat-method=closest-scenario causes Xhat to be set to the scenario that is closest to Xbar (in a z-score sense). Other options, such as voting and rounding, assign values of Xbar to Xhat except for binary and general integer variables, where the values are set by probability-weighted voting by scenarios and by rounding from Xbar, respectively.
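The voting and rounding rules for an integer variable can be sketched in a few lines of plain Python (names are hypothetical; this is not PySP's implementation):

```python
# Sketch of the two Xhat rules described above for an integer variable.
def xhat_by_rounding(xbar):
    """Round the scenario average to the nearest integer."""
    return int(round(xbar))

def xhat_by_voting(values, probability):
    """Probability-weighted vote among the per-scenario values."""
    tally = {}
    for scenario, v in values.items():
        tally[v] = tally.get(v, 0.0) + probability[scenario]
    return max(tally, key=tally.get)
```

For example, rounding an Xbar of 79.9871 yields 80, and a vote among scenario values {0, 1, 1} with weights {0.4, 0.3, 0.3} selects 1.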
Solution Output Control¶
To get the full solution, including leaf node solution values, use the runph --output-scenario-tree-solution option.
In both runph and runef, the solution can be written in csv format using the --solution-writer=pyomo.pysp.plugins.csvsolutionwriter option.
Summary of PySP File Names¶
PySP scripts such as runef and runph require files that specify the model and data using files with specific names. All files can be in the current directory, but typically the file ReferenceModel.py is in a directory that is specified using the --model-directory= option (the short version of this option is -m), and the data files are in a directory specified in the --instance-directory= option (the short version of this option is -i).
Note
A file name other than ReferenceModel.py can be used if the file name is given in addition to the directory name as an argument to the --model-directory option. For example, on a Windows machine, --model-directory=models\MyModel.py would specify the file MyModel.py in the local directory models.
ReferenceModel.py: A full Pyomo model for a single scenario. There should be no scenario indexes in this model because they are implicit.
ScenarioStructure.dat: Specifies the nature of the stochastics. It also specifies whether the rest of the data is node-based or scenario-based. It is scenario-based unless ScenarioStructure.dat contains the line
param ScenarioBasedData := False ;
If scenario-based, then there is a data file for each scenario that specifies a full set of data for the scenario. The name of the file is the name of the scenario with .dat appended. The names of the scenarios are given in the ScenarioStructure.dat file.
If node-based, then there is a file with data for each node that specifies only the data that is unique to the node. The name of the file is the name of the node with .dat appended. The names of the nodes are given in the ScenarioStructure.dat file.
Solving Subproblems in Parallel and/or Remotely¶
The Python package called Pyro provides capabilities that are used to enable PH to make use of multiple solver processes for subproblems and allows both runef and runph to make use of remote solvers. We will focus on PH in our discussion here.
There are two solver management systems available for runph: one is based on a pyro_mip_server and the other is based on a phsolverserver. Regardless of which is used, a name server and a dispatch server must be running and accessible to the runph process. The name server is launched using the command pyomo_ns, and then the dispatch server is launched with dispatch_srvr. Note that both commands contain an underscore. Both programs keep running until terminated by an external signal, so it is common to pipe their output to a file.
Solvers are controlled by solver servers. The pyro mip solver server is launched with the command pyro_mip_server. This command may be repeated to launch as many solvers as are desired. runph then needs a --solver-manager=pyro option to signal that it should not launch its own solver, but should send subproblems to be dispatched to parallel solvers. To summarize the commands:
 Once: pyomo_ns
 Once: dispatch_srvr
 Multiple times: pyro_mip_server
 Once: runph ... --solver-manager=pyro ...
Note
The runph option --shutdown-pyro will cause a shutdown signal to be sent to pyomo_ns, dispatch_srvr, and all pyro_mip_server programs upon termination of runph.
Instead of using pyro_mip_server, one can use phsolverserver in its place. You can get a list of arguments using phsolverserver --help, which does not launch a solver server (it just displays help and terminates). If you use the phsolverserver, then use --solver-manager=phpyro as an argument to runph rather than --solver-manager=pyro.
Warning
Unlike the normal pyro_mip_server, there must be one phsolverserver for each subproblem. One can use fewer phsolverservers than there are scenarios by adding the command-line option --phpyro-required-workers=X. This will partition the jobs among the available workers.
Generating SMPS Input Files From PySP Models¶
This document explains how to convert a PySP model into a set of files representing the SMPS format for stochastic linear programs. Conversion can be performed through the command line by invoking the SMPS converter using the command python -m pyomo.pysp.convert.smps. This command is available starting with Pyomo version 5.1. Prior to version 5.1, the same functionality was available via the command pysp2smps (starting at Pyomo version 4.2).
SMPS is a standard for expressing stochastic mathematical programs that is based on the ancient MPS format for linear programs, which is matrix-based. Modern algebraic modeling languages such as Pyomo offer a lot of flexibility, so it is a challenge to take models expressed in Pyomo/PySP and force them into SMPS format. The conversions can be inefficient and error-prone because Pyomo allows flexible expressions and model construction, so the resulting matrix may not be the same for each set of input data. We provide tools for conversion to SMPS because some researchers have tools that read SMPS and exploit its limitations on problem structure; however, the user should be aware that the conversion is not always possible.
Currently, these routines only support two-stage stochastic programs. Support for models with more than two time stages will be considered in the future as this tool matures.
Additional Requirements for SMPS Conversion¶
To enable proper conversion of a PySP model to a set of SMPS files, the following additional requirements must be met:
 The reference Pyomo model must include annotations that identify stochastic data locations in the second-stage problem.
 All model variables must be declared in the ScenarioStructure.dat file.
 The set of constraints and variables, and the overall sparsity structure of the objective and constraint matrix, must not change across scenarios.
The bulk of this section discusses in depth the annotations mentioned in the first point. The second point may come as a surprise to users who are not aware of the ability to not declare variables in the ScenarioStructure.dat file. Indeed, for most of the code in PySP, it is only critical that the variables for which non-anticipativity must be enforced be declared. That is, for a two-stage stochastic program, all second-stage variables can be left out of the ScenarioStructure.dat file when using commands such as runef and runph. However, conversion to SMPS format requires all variables to be properly assigned a decision stage by the user.
Note
Variables can be declared as primary by assigning them to a stage using the StageVariables assignment, or declared as auxiliary variables, which are assigned to a stage using the StageDerivedVariables assignment. For algorithms such as PH, the distinction is meaningful, and those variables that are fully determined by primary variables and the data should generally be assigned to StageDerivedVariables for their stage.
The third point may also come as a surprise, but the ability to handle a non-uniform problem structure in most PySP tools falls directly from the fact that the non-anticipativity conditions are all that is required in many cases. However, the conversion to SMPS format is based on a matrix representation of the problem where the stochastic coefficients are provided as a set of sparse matrix coordinates. This subsequently requires that the row and column dimensions, as well as the sparsity structure of the problem, do not change across scenarios.
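The uniform-sparsity requirement can be stated concretely: collect the nonzero (row, column) coordinates of each scenario's constraint matrix and check that every scenario yields the same set. A plain-Python sketch of that check (illustrative only, not part of the converter):

```python
# Sketch: SMPS conversion requires one sparsity structure shared by
# all scenarios.  Each pattern is a set of (row, col) coordinates.
def uniform_sparsity(patterns):
    """patterns maps scenario name -> set of (row, col) coordinates;
    returns True when all scenarios share one sparsity structure."""
    structures = set(frozenset(p) for p in patterns.values())
    return len(structures) <= 1
```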
Annotating Models for SMPS File Generation¶
Annotations are necessary for alerting the SMPS conversion routines to the locations of data that need to be updated when changing from one scenario to another. Knowing these sparse locations allows decomposition algorithms to employ efficient methods for solving a stochastic program. In order to use the SMPS conversion tool, at least one of the following annotations must be declared on the reference Pyomo model:
 StochasticConstraintBoundsAnnotation: indicates the existence of stochastic constraint right-hand sides (or bounds) in second-stage constraints
 StochasticConstraintBodyAnnotation: indicates the existence of stochastic variable coefficients in second-stage constraints
 StochasticObjectiveAnnotation: indicates the existence of stochastic cost coefficients in the second-stage cost function
These will be discussed in further detail in the remaining sections. The following code snippet demonstrates how to import these annotations and declare them on a model.
from pyomo.pysp import annotations
model.stoch_rhs = annotations.StochasticConstraintBoundsAnnotation()
model.stoch_matrix = annotations.StochasticConstraintBodyAnnotation()
model.stoch_objective = annotations.StochasticObjectiveAnnotation()
Populating these annotations with entries is optional; simply declaring them on the reference Pyomo model will alert the SMPS conversion routines that all coefficients appearing on the second-stage model should be assumed stochastic. That is, adding the lines in the previous code snippet alone implies that: (i) all second-stage constraints have stochastic bounds, (ii) all first- and second-stage variables appearing in second-stage constraints have stochastic coefficients, and (iii) all first- and second-stage variables appearing in the objective have stochastic coefficients.
PySP can attempt to determine the stage of a constraint by examining the set of variables that appear in the constraint expression. E.g., a first-stage constraint is characterized as having only first-stage variables appearing in its expression. A second-stage constraint has at least one second-stage variable appearing in its expression. The stage of a variable is declared in the scenario tree provided to PySP. This method of constraint stage classification is not perfect. That is, one can very easily define a model with a constraint that uses only first-stage variables in an expression involving stochastic data. This constraint would be incorrectly identified as first-stage by the method above, even though the existence of stochastic data necessarily implies it is second-stage. To deal with cases such as this, an additional annotation is made available, named ConstraintStageAnnotation. This annotation will be discussed further in a later section.
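The classification heuristic described above amounts to taking the maximum stage over the variables appearing in a constraint. A plain-Python sketch (variable and function names are hypothetical; PySP inspects actual Pyomo expressions):

```python
# Sketch: classify a constraint's stage as the maximum stage of the
# variables appearing in its expression (1 = first stage).
def classify_constraint(constraint_vars, var_stage):
    """constraint_vars: iterable of variable names in the expression;
    var_stage: maps variable name -> stage number."""
    return max(var_stage[v] for v in constraint_vars)
```

Note that this sketch shares the limitation described above: a constraint built only from first-stage variables is classified as first-stage even if its data is stochastic.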
It is often the case that relatively few coefficients on a stochastic program change across scenarios. In these situations, adding explicit declarations within these annotations will allow for a more sparse representation of the problem and, consequently, more efficient solution by particular decomposition methods. Adding declarations to these annotations is performed by calling the declare method, passing some component as the initial argument. Any remaining argument requirements for this method are specific to each annotation. Valid types for the component argument typically include:
 Constraint: includes single constraint objects as well as constraint containers
 Objective: includes single objective objects as well as objective containers
 Block: includes Pyomo models as well as single block objects and block containers
Any remaining details for adding declarations to the annotations mentioned thus far will be discussed in later sections. The remainder of this section discusses the semantics of these declarations based on the type for the component argument.
When the declare method is called with a component such as an indexed Constraint or a Block (model), the SMPS conversion routines will interpret this as meaning all constraints found within that indexed Constraint or on that Block (that have not been deactivated) should be considered. As an example, we consider the following partially declared concrete Pyomo model:
import pyomo.environ as pyo

model = pyo.ConcreteModel()
# data that is initialized on a per-scenario basis
p = 1.0
q = 2.0
# variables declared as second-stage on the
# PySP scenario tree
model.z = pyo.Var()
model.y = pyo.Var()
# indexed constraint
model.r_index = pyo.Set(initialize=[3, 6, 9])
def r_rule(model, i):
    return pyo.inequality(p + i, 1*model.z + 5*model.y, 10 + q + i)
model.r = pyo.Constraint(model.r_index, rule=r_rule)
# singleton constraint
model.c = pyo.Constraint(expr= p*model.z >= 1)
# a sub-block with a singleton constraint
model.b = pyo.Block()
model.b.c = pyo.Constraint(expr= q*model.y >= 1)
Here the local Python variables p
and q
serve as placeholders
for data that changes with each scenario.
The following are equivalent annotations of the model, each declaring all of the constraints shown above as having stochastic right-hand-side data:
Implicit form:
model.stoch_rhs = annotations.StochasticConstraintBoundsAnnotation()
Implicit form for Block (model) assignment:
model.stoch_rhs = annotations.StochasticConstraintBoundsAnnotation()
model.stoch_rhs.declare(model)
Explicit form for singleton constraint with implicit form for indexed constraint and sub-block:
model.stoch_rhs = annotations.StochasticConstraintBoundsAnnotation()
model.stoch_rhs.declare(model.r)
model.stoch_rhs.declare(model.c)
model.stoch_rhs.declare(model.b)
Explicit form for singleton constraints at the model and sub-block level with implicit form for indexed constraint:
model.stoch_rhs = annotations.StochasticConstraintBoundsAnnotation()
model.stoch_rhs.declare(model.r)
model.stoch_rhs.declare(model.c)
model.stoch_rhs.declare(model.b.c)
Fully explicit form for singleton constraints as well as all indices of indexed constraint:
model.stoch_rhs = annotations.StochasticConstraintBoundsAnnotation()
model.stoch_rhs.declare(model.r[3])
model.stoch_rhs.declare(model.r[6])
model.stoch_rhs.declare(model.r[9])
model.stoch_rhs.declare(model.c)
model.stoch_rhs.declare(model.b.c)
Note that the equivalence of the first three bullet forms to the last two bullet forms relies on the following conditions being met: (1) model.z and model.y are declared on the second stage of the PySP scenario tree, and (2) at least one of these second-stage variables appears in each of the constraint expressions above. Together, these two conditions cause each of the constraints above to be categorized as second-stage, thus causing them to be considered by the SMPS conversion routines in the implicit declarations used by the first three bullet forms.
Warning
Pyomo simplifies product expressions such that terms with 0 coefficients are removed from the final expression. This can sometimes create issues with determining the correct stage classification of a constraint, as well as result in different sparsity patterns across scenarios. This issue is discussed further in the later section entitled Edge Cases.
When it comes to catching errors in model annotations, there is a minor difference between the first bullet form from above (empty annotation) and the others. In the empty case, PySP will use exactly the set of second-stage constraints it is aware of. This set will either be determined through inspection of the constraint expressions or through the user-provided constraint-stage classifications declared using the ConstraintStageAnnotation annotation type. In the case where the stochastic annotation is not empty, PySP will verify that all constraints declared within it belong to the set of second-stage constraints it is aware of. If this verification fails, an error will be reported. This behavior is meant to aid users in debugging problems associated with non-uniform sparsity structure across scenarios that are, for example, caused by 0 coefficients in product expressions.
Annotations on AbstractModel Objects
Pyomo models defined using the AbstractModel object require the modeler to take further steps when making these annotations. In the AbstractModel setting, these assignments must take place within a BuildAction, which is executed only after the model has been constructed with data. As an example, the last bullet form from the previous section could be written in the following way to allow execution with either an AbstractModel or a ConcreteModel:
def annotate_rule(m):
    m.stoch_rhs = annotations.StochasticConstraintBoundsAnnotation()
    m.stoch_rhs.declare(m.r[3])
    m.stoch_rhs.declare(m.r[6])
    m.stoch_rhs.declare(m.r[9])
    m.stoch_rhs.declare(m.c)
    m.stoch_rhs.declare(m.b.c)
model.annotate = pyo.BuildAction(rule=annotate_rule)
Note that the use of m rather than model in the annotate_rule function is meant to draw attention to the fact that the model object being passed into the function as the first argument may not be the same object as the model outside of the function. This is in fact the case in the AbstractModel setting, whereas for the ConcreteModel setting they are the same object. We often use model in both places to avoid errors caused by forgetting to use the correct object inside the function (Python scoping rules handle the rest). Also note that a BuildAction must be declared on the model after the declaration of any components being accessed inside its rule function.
Stochastic Constraint Bounds (RHS)
If stochastic elements appear on the right-hand side of constraints (or as constants in the body of constraint expressions), these locations should be declared using the StochasticConstraintBoundsAnnotation annotation type. When components are declared with this annotation, there are no additional required arguments for the declare method. However, to allow for more flexibility when dealing with double-sided inequality constraints, the declare method can be called with at most one of the keywords lb or ub set to False to signify that one of the bounds is not stochastic. The following code snippet shows example declarations with this annotation for various constraint types.
# declare the annotation
model.stoch_rhs = annotations.StochasticConstraintBoundsAnnotation()
# equality constraint
model.c_eq = pyo.Constraint(expr= model.y == q)
model.stoch_rhs.declare(model.c_eq)
# range inequality constraint with stochastic upper bound
model.c_ineq = pyo.Constraint(expr= pyo.inequality(0, model.y, p))
model.stoch_rhs.declare(model.c_ineq, lb=False)
# indexed constraint using a BuildAction
model.C_index = pyo.RangeSet(1,3)
def C_rule(model, i):
    if i == 1:
        return model.y >= i * q
    else:
        return pyo.Constraint.Skip
model.C = pyo.Constraint(model.C_index, rule=C_rule)
def C_annotate_rule(model, i):
    if i == 1:
        model.stoch_rhs.declare(model.C[i])
    else:
        pass
model.C_annotate = pyo.BuildAction(model.C_index, rule=C_annotate_rule)
Note that simply declaring the StochasticConstraintBoundsAnnotation annotation type and leaving it empty will alert the SMPS conversion routines that all constraints identified as second-stage should be treated as having stochastic right-hand-side data. Calling the declare method on at least one component implies that the set of constraints considered should be limited to what is declared within the annotation.
Stochastic Constraint Matrix
If coefficients of variables change in the second-stage constraint matrix, these locations should be declared using the StochasticConstraintBodyAnnotation annotation type. When components are declared with this annotation, there are no additional required arguments for the declare method. Calling the declare method with the single component argument signifies that all variables encountered in the constraint expression (including first- and second-stage variables) should be treated as having stochastic coefficients. This can be limited to a specific subset of variables by calling the declare method with the variables keyword set to an explicit list of variable objects. The following code snippet shows example declarations with this annotation for various constraint types.
model = pyo.ConcreteModel()
# data that is initialized on a per-scenario basis
p = 1.0
q = 2.0
# a first-stage variable
model.x = pyo.Var()
# a second-stage variable
model.y = pyo.Var()
# declare the annotation
model.stoch_matrix = annotations.StochasticConstraintBodyAnnotation()
# a singleton constraint with stochastic coefficients for
# both the first- and second-stage variable
model.c = pyo.Constraint(expr= p*model.x + q*model.y == 1)
model.stoch_matrix.declare(model.c)
# an assignment that is equivalent to the previous one
model.stoch_matrix.declare(model.c, variables=[model.x, model.y])
# a singleton range constraint with a stochastic coefficient
# for the first-stage variable only
model.r = pyo.Constraint(expr=pyo.inequality(0, p*model.x - 2.0*model.y, 10))
model.stoch_matrix.declare(model.r, variables=[model.x])
As is the case with the StochasticConstraintBoundsAnnotation annotation type, simply declaring the StochasticConstraintBodyAnnotation annotation type and leaving it empty will alert the SMPS conversion routines that all constraints identified as second-stage should be considered and, additionally, that all variables encountered in these constraints should be considered to have stochastic coefficients. Calling the declare method on at least one component implies that the set of constraints considered should be limited to what is declared within the annotation.
Stochastic Objective Elements
If the cost coefficients of any variables are stochastic in the second-stage cost expression, this should be noted using the StochasticObjectiveAnnotation annotation type. This annotation uses the same semantics for the declare method as the StochasticConstraintBodyAnnotation annotation type, but with one additional consideration regarding any constants in the objective expression. Constants in the objective are treated as stochastic and automatically handled by the SMPS code. If the objective expression does not contain any constant terms, or these constant terms do not change across scenarios, this behavior can be disabled by setting the keyword include_constant to False in a call to the declare method.
# declare the annotation
model.stoch_objective = annotations.StochasticObjectiveAnnotation()
model.FirstStageCost = pyo.Expression(expr= 5.0*model.x)
model.SecondStageCost = pyo.Expression(expr= p*model.x + q*model.y)
model.TotalCost = pyo.Objective(expr= model.FirstStageCost + model.SecondStageCost)
# each of these declarations is equivalent for this model
model.stoch_objective.declare(model.TotalCost)
model.stoch_objective.declare(model.TotalCost, variables=[model.x, model.y])
Similar to the previous annotation type, simply declaring the StochasticObjectiveAnnotation annotation type and leaving it empty will alert the SMPS conversion routines that all variables appearing in the single active model objective expression should be considered to have stochastic coefficients.
Annotating Constraint Stages
Annotating the model with constraint stages is sometimes necessary to
identify to the SMPS routines that certain constraints belong in the
second time stage even though they lack references to any second-stage
variables. Annotation of constraint stages is achieved using the
ConstraintStageAnnotation annotation type. If this annotation is added
to the model, it is assumed that it will be fully populated with
explicit stage assignments for every constraint in the model. The
declare method should be called giving a Constraint or Block as the
first argument and a positive integer as the second argument (1
signifies the first time stage). Example:
model.IDX = pyo.RangeSet(5)
# declare the annotation
model.constraint_stage = annotations.ConstraintStageAnnotation()
# all constraints on this Block are first-stage
model.B = pyo.Block()
# ...
model.constraint_stage.declare(model.B, 1)
# all indices of this indexed constraint are first-stage
def C1_rule(m, i):
    return p*m.x <= q*i
model.C1 = pyo.Constraint(model.IDX, rule=C1_rule)
model.constraint_stage.declare(model.C1, 1)
# all but one index in this indexed constraint are second-stage
def C2_rule(m, i):
    return p*m.x <= q*i
model.C2 = pyo.Constraint(model.IDX, rule=C2_rule)
for index in model.C2:
    if index == 3:
        model.constraint_stage.declare(model.C2[index], 1)
    else:
        model.constraint_stage.declare(model.C2[index], 2)
Edge Cases
This section discusses various points that may give users some trouble, and it attempts to provide more details about the common pitfalls associated with translating a PySP model to SMPS format.
 Moving a Stochastic Objective to the Constraint Matrix
It is often the case that decomposition algorithms theoretically support stochastic cost coefficients but the software implementation has not yet added support for them. This situation is easy to work around in PySP. One can simply augment the model with an additional constraint and variable that computes the objective, and then use this variable in the objective rather than directly using the second-stage cost expression. Consider the following reference Pyomo model that has stochastic cost coefficients for both a first-stage and a second-stage variable in the second-stage cost expression:
>>> # suppress duplicate object warning
>>> del model.TotalCost
# define the objective as the sum of the stage-cost expressions
model.TotalCost = pyo.Objective(expr= model.FirstStageCost + model.SecondStageCost)
# declare that model.x and model.y have stochastic cost
# coefficients in the second stage
model.stoch_objective = annotations.StochasticObjectiveAnnotation()
model.stoch_objective.declare(model.TotalCost, variables=[model.x, model.y])
The code snippet below re-expresses this model using an objective
consisting of the original first-stage cost expression plus a
second-stage variable SecondStageCostVar that represents the
second-stage cost. This is enforced by restricting the variable to be
equal to the second-stage cost expression using an additional equality
constraint named ComputeSecondStageCost. Additionally, the
StochasticObjectiveAnnotation annotation type is replaced with the
StochasticConstraintBodyAnnotation annotation type.
# set the variable SecondStageCostVar equal to the
# expression SecondStageCost using an equality constraint
model.SecondStageCostVar = pyo.Var()
model.ComputeSecondStageCost = pyo.Constraint(expr= model.SecondStageCostVar == model.SecondStageCost)
# declare that model.x and model.y have stochastic constraint matrix
# coefficients in the ComputeSecondStageCost constraint
model.stoch_matrix = annotations.StochasticConstraintBodyAnnotation()
model.stoch_matrix.declare(model.ComputeSecondStageCost, variables=[model.x, model.y])
 Stochastic Constant Terms
The standard description of a linear program does not allow for a constant term in the objective function because it has no effect on the problem solution. Additionally, constant terms appearing in a constraint expression must be lumped into the right-hand-side vector. However, when modeling with an AML such as Pyomo, constant terms very naturally fall out of objective and constraint expressions.
If a constant term falls out of a constraint expression and this term changes across scenarios, it is critical that this be accounted for by including the constraint in the StochasticConstraintBoundsAnnotation annotation type. Otherwise, this would lead to an incorrect representation of the stochastic program in SMPS format. As an example, consider the following:
# a param initialized with scenario-specific data
model.p = pyo.Param(mutable=True)
# a second-stage constraint with a stochastic upper bound
# hidden in the left-hand-side expression
def d_rule(m):
    return (m.x - m.p) + m.y <= 10
model.d = pyo.Constraint(rule=d_rule)
Note that in the expression for constraint d, there is a fixed
parameter p involved in the variable expression on the left-hand side
of the inequality. When an expression is written this way, it can be
easy to forget that the value of this parameter will be pushed to the
bound of the constraint when it is converted into linear canonical
form. Remember to declare these constraints within the
StochasticConstraintBoundsAnnotation annotation type.
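To make the bound shift concrete, here is a tiny stdlib-only sketch (illustrative arithmetic, not PySP code) of how a constant in the constraint body migrates to the right-hand side in linear canonical form:

```python
# For (x - p) + y <= 10, collecting only the variables on the left gives
# x + y <= 10 + p: the scenario-dependent value of p lands in the
# constraint's upper bound rather than in its body.
def canonical_upper_bound(rhs, body_constant):
    # the body constant moves across the inequality by subtraction
    return rhs - body_constant

# with p = 3, the body constant is -p = -3 and the effective bound is 13
print(canonical_upper_bound(10, -3))  # -> 13
```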
A constant term appearing in the objective expression presents a similar issue. Whether or not this term is stochastic, it must be dealt with when certain outputs expect the problem to be expressed as a linear program. The SMPS code in PySP will deal with this situation for you by implicitly adding a new second-stage variable to the problem in the final output file that uses the constant term as its coefficient in the objective and that is fixed to a value of 1.0 using a trivial equality constraint. The default behavior when declaring the StochasticObjectiveAnnotation annotation type will be to assume this constant term in the objective is stochastic. This helps ensure that the relative scenario costs reported by algorithms using the SMPS files will match that of the PySP model for a given solution. When moving a stochastic objective into the constraint matrix using the method discussed in the previous subsection, it is important to be aware of this behavior. A stochastic constant term in the objective would necessarily translate into a stochastic constraint right-hand side when moved to the constraint matrix.
 Stochastic Variable Bounds
Although not directly supported, stochastic variable bounds can be expressed using explicit constraints along with the StochasticConstraintBoundsAnnotation annotation type to achieve the same effect.
 Problems Caused by Zero Coefficients
Expressions that involve products with some terms having 0 coefficients
can be problematic when the zeros can become nonzero in certain
scenarios. This can cause the sparsity structure of the LP to change
across scenarios, because Pyomo simplifies these expressions when they
are created such that terms with a 0 coefficient are dropped. This can
result in an invalid SMPS conversion. Of course, this issue is not
limited to explicit product expressions, but can arise when the user
implicitly assigns a variable a zero coefficient by outright excluding
it from an expression. For example, both constraints in the following
code snippet suffer from the same underlying issue, which is that the
variable model.y will be excluded from the constraint expressions in a
subset of scenarios (depending on the value of q), either directly due
to a 0 coefficient in a product expression or indirectly due to
user-defined logic that is based on the values of stochastic data.
q = 0
model.c1 = pyo.Constraint(expr= p * model.x + q * model.y == 1)
def c2_rule(model):
    expr = p * model.x
    if q != 0:
        expr += model.y
    return expr >= 0
model.c2 = pyo.Constraint(rule=c2_rule)
The SMPS conversion routines will attempt some limited checking to help prevent this kind of situation from silently turning the SMPS representation to garbage, but it must ultimately be up to the user to ensure this is not an issue. This is in fact the most challenging aspect of converting PySP’s AMLbased problem representation to the structurepreserving LP representation used in the SMPS format.
One way to deal with the 0 coefficient issue, which works for both cases
discussed in the example above, is to create a zero Expression
object. E.g.,
model.zero = pyo.Expression(expr=0)
This component can be used to add variables to a linear expression so that the resulting expression retains a reference to them. This behavior can be verified by examining the output from the following example:
# an expression that does NOT retain model.y
>>> print((model.x + 0 * model.y).to_string())
x
# an equivalent expression that DOES retain model.y
>>> print((model.x + model.zero * model.y).to_string())
x + 0.0*y
# an equivalent expression that does NOT retain model.y (so beware)
>>> print((model.x + 0 * model.zero * model.y).to_string())
x
Generating SMPS Input Files¶
This section explains how the SMPS conversion utilities available in
PySP can be invoked from the command line. Starting with Pyomo version
5.1, the SMPS writer can be invoked using the command python -m
pyomo.pysp.convert.smps. Prior to version 5.1, this functionality was
available via the pysp2smps command (starting at Pyomo version
4.2). Use the --help option with the main command to see a detailed
description of the command-line options available:
$ python -m pyomo.pysp.convert.smps --help
Next, we discuss some of the basic inputs to this command.
Consider the baa99 example inside the pysp/baa99 subdirectory that is
distributed with the Pyomo examples (examples/pysp/baa99). Both the
reference model and the scenario tree structure are defined in the file
ReferenceModel.py using PySP callback functions. This model has been
annotated to enable conversion to the SMPS format. Assuming one is in
this example’s directory, SMPS files can be generated for the model by
executing the following shell command:
$ python -m pyomo.pysp.convert.smps -m ReferenceModel.py --basename baa99 \
    --output-directory sdinput/baa99
Assuming successful execution, this would result in the following files being created:
sdinput/baa99/baa99.cor
sdinput/baa99/baa99.tim
sdinput/baa99/baa99.sto
sdinput/baa99/baa99.cor.symbols
The first file is the core problem file written in MPS format. The second file indicates at which row and column the first and second time stages begin. The third file contains the location and values of stochastic data in the problem for each scenario. This file is generated by merging the individual output for each scenario in the scenario tree into separate BLOCK sections. The last file contains a mapping for non-anticipative variables from the symbols used in the above files to a unique string that can be used to recover the variable on any Pyomo model. It is mainly used by PySP’s solver interfaces to load a solver solution.
To ensure that the problem structure is the same and that all locations
of stochastic data have been annotated properly, the script creates
additional auxiliary files that are compared across scenarios. The
command-line option --keep-auxiliary-files can be used to retain the
auxiliary files that were generated for the template scenario used to
write the core file. When this option is used with the above example,
the following additional files will appear in the output directory:
sdinput/baa99/baa99.mps.det
sdinput/baa99/baa99.sto.struct
sdinput/baa99/baa99.row
sdinput/baa99/baa99.col
The .mps.det file is simply the core file for the reference scenario
with the values for all stochastic coefficients set to zero. If this
does not match for every scenario, then there are places in the model
that still need to be declared on one or more of the stochastic data
annotations. The .row and .col files indicate the ordering of
constraints and variables, respectively, that was used to write the
core file. The .sto.struct file lists the nonzero locations of the
stochastic data in terms of their row and column location in the core
file. These files are created for each scenario instance in the
scenario tree and placed inside a subdirectory named scenario_files
within the output directory. These files will be removed unless
validation fails or the --keep-scenario-files option is used.
The SMPS writer also supports parallel execution. This can significantly reduce the overall time required to produce the SMPS files when there are many scenarios. Parallel execution using PySP’s Pyro-based tools can be performed using the steps below. Note that each of these commands can be launched in the background inside the same shell or in their own separate shells.
Start the Pyro name server:
$ pyomo_ns -n localhost
Start the Pyro dispatch server:
$ dispatch_srvr -n localhost --daemon-host localhost
Start 8 scenario tree servers (for the 625 baa99 scenarios):
$ mpirun -np 8 scenariotreeserver --pyro-host=localhost
Run python -m pyomo.pysp.convert.smps using the Pyro scenario tree manager:
$ python -m pyomo.pysp.convert.smps -m ReferenceModel.py --basename baa99 \
    --output-directory sdinput/baa99 \
    --pyro-required-scenariotreeservers=8 \
    --pyro-host=localhost --scenario-tree-manager=pyro
An annotated version of the farmer example is also provided. The model
file can be found in the pysp/farmer/smps_model examples subdirectory.
Note that the scenario tree for this model is defined in a separate
file. When invoking the SMPS writer, a scenario tree structure file can
be provided via the --scenario-tree-location (-s) command-line option.
For example, assuming one is in the pysp/farmer subdirectory, the
farmer model can be converted to SMPS files using the command:
$ python -m pyomo.pysp.convert.smps -m smps_model/ReferenceModel.py \
    -s scenariodata/ScenarioStructure.dat --basename farmer \
    --output-directory sdinput/farmer
Note that, by default, the files created by the SMPS writer use
shortened symbols that do not match the names of the variables and
constraints declared on the Pyomo model. This is for efficiency reasons,
as using fully qualified component names can result in significantly
larger files. However, it can be useful in debugging situations to
generate the SMPS files using the original component names. To do this,
simply add the command-line option --symbolic-solver-labels to the
command string.
The SMPS writer supports other formats for the core problem file (e.g.,
the LP format). The command-line option --core-format can be used to
control this setting. Refer to the command-line help string for more
information about the list of available formats.
Generating DDSIP Input Files From PySP Models¶
PySP provides support for creating DDSIP inputs, and some support for reading DDSIP solutions back into PySP is under development. Use of these utilities requires additional model annotations that declare the location of stochastic data coefficients. See the section on converting PySP models to SMPS for more information.
To access the DDSIP writer via the command line, use python -m
pyomo.pysp.convert.ddsip. To access the full solver interface to DDSIP,
which writes the input files, invokes the DDSIP solver, and reads the
solution, use python -m pyomo.pysp.solvers.ddsip. For example, to get a
list of command arguments, use:
$ python -m pyomo.pysp.convert.ddsip --help
Note
Not all of the command arguments are relevant for DDSIP.
For researchers that simply want to write out the files needed by DDSIP,
the --output-directory option can be used with the DDSIP writer to
specify the directory where files should be created. The DDSIP solver
interface creates these files in a temporary directory. To have the
DDSIP solver interface retain these files after it exits, use the
--keep-solver-files command-line option. The following example invokes
the DDSIP solver on the networkflow example that ships with PySP. In
order to test it, one must first cd into the networkflow example
directory and then execute the command:
$ python -m pyomo.pysp.solvers.ddsip \
    -s 1ef10 -m smps_model --solver-options="NODELIM=1"
The --solver-options command-line argument can be used to set the
values of any DDSIP options that are written to the DDSIP configuration
file; multiple options should be space-separated. See the DDSIP
documentation for a list of options.
Here is the same example modified to simply create the DDSIP input
files in an output directory named ddsip_networkflow:
$ python -m pyomo.pysp.convert.ddsip \
    -s 1ef10 -m smps_model --output-directory ddsip_networkflow \
    --symbolic-solver-labels
The option --symbolic-solver-labels tells the DDSIP writer to produce
the files using symbols that match the names on the original Pyomo
model. This can significantly increase file size, so it is not done by
default. When the DDSIP writer is invoked, a minimal DDSIP
configuration file is created in the output directory that specifies
the required problem structure information. Any additional DDSIP
options must be manually added to this file by the user.
As with the SMPS writer, the DDSIP writer and solver interface support PySP’s Pyro-based parallel scenario tree management system. See the section on the SMPS writer for a description of how to use this functionality.
PySP in scripts¶
See rapper: a PySP wrapper for information about putting Python scripts around PySP functionality.
Introduction to Using Concrete Models with PySP¶
The concrete interface to PySP requires a function that can return a
concrete model for a given scenario. Optionally, a function that
returns a scenario tree can be provided; however, a
ScenarioStructure.dat
file is also an option. This very
terse introduction might help you get started using
concrete models with PySP.
Scenario Creation Function¶
There is a lot of flexibility in how this function is implemented, but the path of least resistance is
>>> def pysp_instance_creation_callback(scenario_tree_model,
...                                     scenario_name,
...                                     node_names):
...     pass
In many applications, only the scenario_name
argument is
used. Its purpose is almost always to determine what data
to use when populating the scenario instance. Note that in
older examples, the scenario_tree_model
argument is not
present.
An older example of this function can be seen in
examples/pysp/farmer/concrete/ReferenceModel.py. Note that this example
does not have a function to return a scenario tree, so it can be solved
from the examples/pysp/farmer directory with a command like:
$ runef -m concrete/ReferenceModel.py -s scenariodata/ScenarioStructure.dat --solve
Note
If, for some reason, you want to use the concrete interface to PySP
with an AbstractModel, the body of the function might be something like:
>>> instance = model.create_instance(scenario_name+".dat") # doctest: +SKIP
>>> return instance # doctest: +SKIP
assuming that model is defined as an AbstractModel in the namespace of
the file.
Scenario Tree Creation Function¶
There are many options for a function to return a scenario tree. The
path of least resistance is to name the function
pysp_scenario_tree_model_callback and give it no arguments.
One example is shown in
examples/pysp/farmer/concreteNetX/ReferenceModel.py. It can be solved
from the examples/pysp/farmer directory with a command like:
$ runef -m concreteNetX/ReferenceModel.py --solve
Pyomo Network¶
Pyomo Network is a package that allows users to easily represent their model as a connected network of units. Units are blocks that contain ports, which contain variables, which are connected to other ports via arcs. The connection of two ports to each other via an arc typically represents a set of constraints equating each member of each port to each other; however, other connection rules exist as well, in addition to support for custom rules. Pyomo Network also includes a model transformation that will automatically expand the arcs and generate the appropriate constraints to produce an algebraic model that a solver can handle. Furthermore, the package also introduces a generic sequential decomposition tool that can leverage the modeling components to decompose a model and compute each unit in the model in a logically ordered sequence.
Modeling Components¶
Pyomo Network introduces two new modeling components to Pyomo:
pyomo.network.Port: A collection of variables, which may be connected to other ports
pyomo.network.Arc: Component used for connecting the members of two Port objects
Port¶

class pyomo.network.Port(*args, **kwd)
A collection of variables, which may be connected to other ports
The idea behind Ports is to create a bundle of variables that can be manipulated together by connecting them to other ports via Arcs. A preprocess transformation will look for Arcs and expand them into a series of constraints that involve the original variables contained within the Port. The way these constraints are built can be specified for each Port member when adding members to the port, but by default the Port members will be equated to each other. Additionally, other objects such as expressions can be added to Ports as long as they, or their indexed members, can be manipulated within constraint expressions.
Parameters:  rule (function) – A function that returns a dict of (name: var) pairs to be initially added to the Port. Instead of var it could also be a tuple of (var, rule). Or it could return an iterable of either vars or tuples of (var, rule) for implied names.
 initialize – Follows the same specifications as rule’s return value; gets initially added to the Port
 implicit – An iterable of names to be initially added to the Port as implicit vars
 extends (Port) – A Port whose vars will be added to this Port upon construction

static Equality(port, name, index_set)
Arc Expansion procedure to generate simple equality constraints

static Extensive(port, name, index_set, include_splitfrac=None, write_var_sum=True)
Arc Expansion procedure for extensive variable properties
This procedure is the rule to use when variable quantities should be conserved; that is, split for outlets and combined for inlets.
This will first go through every destination of the port (i.e., arcs whose source is this Port) and create a new variable on the arc’s expanded block of the same index as the current variable being processed to store the amount of the variable that flows over the arc. For ports that have multiple outgoing arcs, this procedure will create a single splitfrac variable on the arc’s expanded block as well. Then it will generate constraints for the new variable that relate it to the port member variable using the split fraction, ensuring that all extensive variables in the Port are split using the same ratio. The generation of the split fraction variable and constraint can be suppressed by setting the include_splitfrac argument to False.
Once all arcspecific variables are created, this procedure will create the “balancing constraint” that ensures that the sum of all the new variables equals the original port member variable. This constraint can be suppressed by setting the write_var_sum argument to False; in which case, a single constraint will be written that states the sum of the split fractions equals 1.
Finally, this procedure will go through every source for this port and create a new arc variable (unless it already exists), before generating the balancing constraint that ensures the sum of all the incoming new arc variables equals the original port variable.
Model simplifications:
If the port has a 1-to-1 connection on either side, it will not create the new variables and instead write a simple equality constraint for that side.
If the outlet side is not 1-to-1 but there is only one outlet, it will not create a splitfrac variable or write the split constraint, but it will still write the outsum constraint, which will be a simple equality.
If the port only contains a single Extensive variable, the splitfrac variables and the splitting constraints will be skipped since they will be unnecessary. However, they can still be included by passing include_splitfrac=True.
Note
If split fractions are skipped, the write_var_sum=False option is not allowed.
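The splitting and balancing algebra described above can be checked numerically with plain Python (toy values, not the Pyomo implementation):

```python
# one port member with two outgoing arcs; all values are made up
total_flow = 7.0
splitfrac = [0.4, 0.6]                           # split-fraction variables
arc_flows = [f * total_flow for f in splitfrac]  # arc-specific variables

# the splitting constraints tie each arc flow to the member via its
# fraction, and the fractions themselves must sum to 1
assert abs(sum(splitfrac) - 1.0) < 1e-9
# the balancing ("var sum") constraint: arc flows add up to the member value
assert abs(sum(arc_flows) - total_flow) < 1e-9
```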

class pyomo.network.port._PortData(component=None)
This class defines the data for a single Port

vars
A dictionary mapping added names to variables
Type: dict

add(var, name=None, rule=None, **kwds)
Add var to this Port, casting it to a Pyomo numeric if necessary
Parameters:  var – A variable or some NumericValue like an expression
 name (str) – Name to associate with this member of the Port
 rule (function) – Function implementing the desired expansion procedure for this member. Port.Equality by default; other options include Port.Extensive. Custom rules are allowed.
 kwds – Keyword arguments that will be passed to rule

fix()
Fix all variables in the port at their current values. For expressions, fix every variable in the expression.

free()
Unfix all variables in the port. For expressions, unfix every variable in the expression.

get_split_fraction(arc)
Returns a tuple (val, fix) for the split fraction of this arc that was set via set_split_fraction if it exists, and otherwise None.

iter_vars(expr_vars=False, fixed=None, names=False)
Iterate through every member of the port, going through the indices of indexed members.
Parameters:  expr_vars (bool) – If True, call identify_variables on expression type members
 fixed (bool) – Only include variables/expressions with this type of fixed
 names (bool) – If True, yield (name, index, var/expr) tuples

The following code snippet shows examples of declaring and using a
Port component on a concrete Pyomo model:
>>> from pyomo.environ import *
>>> from pyomo.network import *
>>> m = ConcreteModel()
>>> m.x = Var()
>>> m.y = Var(['a', 'b']) # can be indexed
>>> m.z = Var()
>>> m.e = 5 * m.z # you can add Pyomo expressions too
>>> m.w = Var()
>>> m.p = Port()
>>> m.p.add(m.x) # implicitly name the port member "x"
>>> m.p.add(m.y, "foo") # name the member "foo"
>>> m.p.add(m.e, rule=Port.Extensive) # specify a rule
>>> m.p.add(m.w, rule=Port.Extensive, write_var_sum=False) # keyword arg
Arc¶

class pyomo.network.Arc(*args, **kwds)
Component used for connecting the members of two Port objects
Parameters:  source (Port) – A single Port for a directed arc. Aliases to src.
 destination (Port) – A single Port for a directed arc. Aliases to dest.
 ports – A twomember list or tuple of single Ports for an undirected arc
 directed (bool) – Set True for directed. Use along with rule to be able to return an implied (source, destination) tuple.
 rule (function) – A function that returns either a dictionary of the arc arguments or a twomember iterable of ports

class pyomo.network.arc._ArcData(component=None, **kwds)
This class defines the data for a single Arc

source
The source Port when directed, else None. Aliases to src.
Type: Port

destination
The destination Port when directed, else None. Aliases to dest.
Type: Port

ports
A tuple containing both ports. If directed, this is in the order (source, destination).
Type: tuple

directed
True if directed, False if not
Type: bool

expanded_block
A reference to the block on which expanded constraints for this arc were placed
Type: Block

The following code snippet shows examples of declaring and using an
Arc component on a concrete Pyomo model:
>>> from pyomo.environ import *
>>> from pyomo.network import *
>>> m = ConcreteModel()
>>> m.x = Var()
>>> m.y = Var(['a', 'b'])
>>> m.u = Var()
>>> m.v = Var(['a', 'b'])
>>> m.w = Var()
>>> m.z = Var(['a', 'b']) # indexes need to match
>>> m.p = Port(initialize=[m.x, m.y])
>>> m.q = Port(initialize={"x": m.u, "y": m.v})
>>> m.r = Port(initialize={"x": m.w, "y": m.z}) # names need to match
>>> m.a = Arc(source=m.p, destination=m.q) # directed
>>> m.b = Arc(ports=(m.p, m.q)) # undirected
>>> m.c = Arc(ports=(m.p, m.q), directed=True) # directed
>>> m.d = Arc(src=m.p, dest=m.q) # aliases work
>>> m.e = Arc(source=m.r, dest=m.p) # ports can have both in and out
Arc Expansion Transformation¶
The examples above show how to declare and instantiate a Port and an
Arc. These two components form the basis of the higher-level
representation of a connected network with sets of related variable
quantities. Once a network model has been constructed, Pyomo Network
implements a transformation that will expand all (active) arcs on the
model and automatically generate the appropriate constraints. The
constraints created for each port member will be indexed by the same
indexing set as the port member itself.
During transformation, a new block is created on the model for each arc (located on the arc’s parent block), which serves to contain all of the auto-generated constraints for that arc. At the end of the transformation, a reference is created on the arc that points to this new block, available via the arc property arc.expanded_block.
The constraints produced by this transformation depend on the rule
assigned for each port member and can be different between members on
the same port. For example, you can have two different members on a
port where one member’s rule is Port.Equality and the other member’s
rule is Port.Extensive.
Port.Equality is the default rule for port members. This rule simply
generates equality constraints on the expanded block between the source
port’s member and the destination port’s member. Another implemented
expansion method is Port.Extensive, which essentially represents
implied splitting and mixing of certain variable quantities. Users can
refer to the documentation of the static method itself for more details
on how this implicit splitting and mixing is implemented. Additionally,
should users desire, the expansion API supports custom rules that can
be implemented to generate whatever is needed for special cases.
The following code demonstrates how to call the transformation to expand the arcs on a model:
>>> from pyomo.environ import *
>>> from pyomo.network import *
>>> m = ConcreteModel()
>>> m.x = Var()
>>> m.y = Var(['a', 'b'])
>>> m.u = Var()
>>> m.v = Var(['a', 'b'])
>>> m.p = Port(initialize=[m.x, (m.y, Port.Extensive)]) # rules must match
>>> m.q = Port(initialize={"x": m.u, "y": (m.v, Port.Extensive)})
>>> m.a = Arc(source=m.p, destination=m.q)
>>> TransformationFactory("network.expand_arcs").apply_to(m)
Sequential Decomposition¶
Pyomo Network implements a generic SequentialDecomposition tool that
can be used to compute each unit in a network model in a logically
ordered sequence. The sequential decomposition procedure is commenced
via the run method.
Creating a Graph¶
To begin this procedure, the Pyomo Network model is first utilized to
create a networkx MultiDiGraph by adding edges to the graph for every
arc on the model, where the nodes of the graph are the parent blocks of
the source and destination ports. This is done via the create_graph
method, which requires all arcs on the model to be both directed and
already expanded. The MultiDiGraph class of networkx supports both
directed edges and multiple edges between the same two nodes, so users
can feel free to connect as many ports as desired between the same two
units.
Computation Order¶
The order of computation is then determined by treating the resulting
graph as a tree, starting at the roots of the tree, and making sure
that by the time each node is reached, all of its predecessors have
already been computed. This is implemented through the
calculation_order and tree_order methods. Before this, however, the
procedure will first select a set of tear edges, if necessary, such
that every loop in the graph is torn, while minimizing both the number
of times any single loop is torn as well as the total number of tears.
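Once the tear edges have removed all loops, "every predecessor before each node" is exactly a topological order. A minimal stdlib-only sketch of the idea (the unit names are hypothetical; the real ordering comes from calculation_order and tree_order):

```python
from graphlib import TopologicalSorter  # Python 3.9+

# predecessor map for a torn (now acyclic) unit network
deps = {
    "reactor": {"feed"},
    "separator": {"reactor"},
    "product": {"separator"},
}
order = list(TopologicalSorter(deps).static_order())
# each unit appears only after all of its predecessors
assert order.index("feed") < order.index("reactor") < order.index("separator")
```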
Tear Selection¶
A set of tear edges can be selected in one of two ways. By default, a Pyomo
MIP model is created and optimized, resulting in an optimal set of tear edges.
The implementation of this MIP model is based on a set of binary “torn”
variables for every edge in the graph, and constraints on every loop in the
graph that dictate that there must be at least one tear on the loop. Then
there are two objectives (represented by a doubly weighted objective). The
primary objective is to minimize the number of times any single loop is torn,
and then secondary to that is to minimize the total number of tears. This
process is implemented in the select_tear_mip
method, which uses
the model returned from the select_tear_mip_model
method.
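The MIP itself needs a solver, but the doubly weighted objective it encodes can be illustrated with a small stand-alone brute-force search over edge subsets (a hypothetical sketch, not Pyomo's `select_tear_mip`):

```python
from itertools import combinations

def best_tear_set(loops, num_edges):
    """Brute-force analogue of the doubly weighted tear objective.

    Among edge subsets that tear every loop, prefer first a minimal
    maximum number of tears on any single loop, then fewer total tears.
    Each loop is given as a list of edge indexes.
    """
    best, best_key = None, None
    for r in range(num_edges + 1):
        for tears in combinations(range(num_edges), r):
            tset = set(tears)
            per_loop = [len(tset & set(loop)) for loop in loops]
            if any(c == 0 for c in per_loop):
                continue  # some loop is not torn: infeasible
            key = (max(per_loop), len(tset))  # primary, then secondary objective
            if best_key is None or key < best_key:
                best, best_key = tset, key
    return best

# Two loops sharing edge 1: tearing edge 1 alone breaks both loops once.
print(best_tear_set([[0, 1], [1, 2]], 3))  # -> {1}
```

This only scales to tiny graphs, which is exactly why the real implementation formulates it as a MIP with one binary "torn" variable per edge and a covering constraint per loop.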
Alternatively, there is the select_tear_heuristic
method. This
uses a heuristic procedure that walks back and forth on the graph to find
every optimal tear set, and returns each equally optimal tear set it finds.
This method is much slower than the MIP method on larger models, but it
remains useful because it returns every possible optimal tear set.
A custom tear set can be assigned before calling the
run
method. This is
useful so users can know what their tear set will be and thus what arcs will
require guesses for uninitialized values. See the
set_tear_set
method for details.
Running the Sequential Decomposition Procedure¶
After all of this computational order preparation, the sequential
decomposition procedure will then run through the graph in the order it
has determined. Thus, the function that was passed to the
run
method will be
called on every unit in sequence. This function can perform any arbitrary
operations the user desires. The only thing that
SequentialDecomposition
expects from the function is that after returning from it, every variable
on every outgoing port of the unit will be specified (i.e. it will have a
set current value). Furthermore, the procedure guarantees to the user that
for every unit, before the function is called, every variable on every
incoming port of the unit will be fixed.
In between computing each of these units, port member values are passed across existing arcs involving the unit currently being computed. This means that after computing a unit, the expanded constraints from each arc coming out of this unit will be satisfied, and the values on the respective destination ports will be fixed at these new values. While running the computational order, values are not passed across tear edges, as tear edges represent locations in loops to stop computations (during iterations). This process continues until all units in the network have been computed. This concludes the “first pass run” of the network.
Guesses and Fixing Variables¶
When passing values across arcs while running the computational order,
values at the destinations of each of these arcs will be fixed at the
appropriate values. This is essential to the procedure's guarantee that
every inlet variable will be fixed before the function is called.
However, since values are not passed across torn arcs, there is a need for
usersupplied guesses for those values. See the set_guesses_for
method for details
on how to supply these values.
In addition to passing dictionaries of guesses for certain ports, users can also assign current values to the variables themselves and the procedure will pick these up and fix the variables in place. Alternatively, users can utilize the default_guess option to specify a value to use as a default guess for all free variables if they have no guess or current value. If a free variable has no guess or current value and there is no default guess option, then an error will be raised.
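The fallback chain described above (explicit guess, then the variable's current value, then the default_guess option, else an error) can be sketched as a hypothetical helper:

```python
def resolve_guess(current_value, guess, default_guess):
    """Sketch of the guess-resolution order for a free tear variable:
    explicit guess first, then the variable's current value, then the
    default_guess option; raise if none of the three is available."""
    if guess is not None:
        return guess
    if current_value is not None:
        return current_value
    if default_guess is not None:
        return default_guess
    raise ValueError("free variable has no guess, no current value, "
                     "and no default_guess was given")

print(resolve_guess(None, 3.0, 1.0))   # explicit guess wins -> 3.0
print(resolve_guess(2.5, None, 1.0))   # falls back to current value -> 2.5
print(resolve_guess(None, None, 1.0))  # falls back to default_guess -> 1.0
```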
Similarly, if the procedure attempts to pass a value to a destination port member but that port member is already fixed and its fixed value differs from the value being passed (by more than the tolerance specified by the almost_equal_tol option), then an error will be raised. Lastly, if there is more than one free variable in a constraint while trying to pass values across an arc, an error will be raised asking the user to fix more variables by the time values are passed across that arc.
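A minimal sketch of the destination-side check just described (a hypothetical helper working on bare values; the real procedure operates on port members):

```python
def receive_value(dest_value, dest_fixed, new_value, almost_equal_tol=1.0e-8):
    """Sketch: a fixed destination may only receive a value that agrees
    with its fixed value to within almost_equal_tol; a free destination
    is fixed at the passed value."""
    if dest_fixed:
        if abs(dest_value - new_value) > almost_equal_tol:
            raise ValueError("fixed destination disagrees with passed value")
        return dest_value  # unchanged; values already agree
    return new_value  # free destination: fix it at the passed value

print(receive_value(0.0, False, 7.0))  # free: takes the passed value -> 7.0
print(receive_value(5.0, True, 5.0))   # fixed and consistent -> 5.0
```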
Tear Convergence¶
After completing the first pass run of the network, the sequential
decomposition procedure will proceed to converge all tear edges in the
network (unless the user specifies not to, or if there are no tears).
This process occurs separately for every strongly connected component (SCC)
in the graph, and the SCCs are computed in a logical order such that each
SCC is computed before other SCCs downstream of it (much like
tree_order
).
There are two implemented methods for converging tear edges: direct
substitution and Wegstein acceleration. Both of these will iteratively run
the computation order until every value in every tear arc has converged to
within the specified tolerance. See the
SequentialDecomposition
parameter documentation for details on what can be controlled about this
procedure.
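For a scalar fixed point x = g(x), Wegstein acceleration can be sketched as follows. This is a stand-alone illustration with the acceleration factor bounded as in the accel_min/accel_max options; it is not Pyomo's implementation, which works on vectors of tear-stream values and guards the degenerate slope cases:

```python
def wegstein(g, x0, tol=1.0e-5, max_iter=40, accel_min=-5.0, accel_max=0.0):
    """Wegstein-accelerated fixed-point iteration for x = g(x).

    The acceleration factor w = s/(s - 1), where s is the secant slope of
    g, is clamped to [accel_min, accel_max]; w = 0 recovers direct
    substitution.
    """
    x_prev, gx_prev = x0, g(x0)
    x = gx_prev  # first step is plain direct substitution
    for _ in range(max_iter):
        gx = g(x)
        if abs(gx - x) <= tol:
            return x
        slope = (gx - gx_prev) / (x - x_prev)
        w = slope / (slope - 1.0)
        w = min(max(w, accel_min), accel_max)  # bound the acceleration factor
        x_prev, gx_prev = x, gx
        x = w * x + (1.0 - w) * gx
    raise RuntimeError("tear iterations did not converge within max_iter")

# g(x) = 0.5*x + 2 has fixed point x = 4; Wegstein lands on it quickly.
print(wegstein(lambda x: 0.5 * x + 2.0, 0.0))  # -> 4.0
```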
The following code demonstrates basic usage of the
SequentialDecomposition
class:
>>> from pyomo.environ import *
>>> from pyomo.network import *
>>> m = ConcreteModel()
>>> m.unit1 = Block()
>>> m.unit1.x = Var()
>>> m.unit1.y = Var(['a', 'b'])
>>> m.unit2 = Block()
>>> m.unit2.x = Var()
>>> m.unit2.y = Var(['a', 'b'])
>>> m.unit1.port = Port(initialize=[m.unit1.x, (m.unit1.y, Port.Extensive)])
>>> m.unit2.port = Port(initialize=[m.unit2.x, (m.unit2.y, Port.Extensive)])
>>> m.a = Arc(source=m.unit1.port, destination=m.unit2.port)
>>> TransformationFactory("network.expand_arcs").apply_to(m)
>>> m.unit1.x.fix(10)
>>> m.unit1.y['a'].fix(15)
>>> m.unit1.y['b'].fix(20)
>>> seq = SequentialDecomposition(tol=1.0E3) # options can go to init
>>> seq.options.select_tear_method = "heuristic" # or set them like so
>>> # seq.set_tear_set([...]) # assign a custom tear set
>>> # seq.set_guesses_for(m.unit.inlet, {...}) # choose guesses
>>> def initialize(b):
... # b.initialize()
... pass
...
>>> seq.run(m, initialize)

class
pyomo.network.
SequentialDecomposition
(**kwds)[source]¶ A sequential decomposition tool for Pyomo Network models
The following parameters can be set upon construction of this class or via the options attribute.
Parameters:  graph (MultiDiGraph) –
A networkx graph representing the model to be solved.
default=None (will compute it)
 tear_set (list) –
A list of indexes representing edges to be torn. Can be set with a list of edge tuples via set_tear_set.
default=None (will compute it)
 select_tear_method (str) –
Which method to use to select a tear set, either “mip” or “heuristic”.
default=”mip”
 run_first_pass (bool) –
Boolean indicating whether or not to run through the network before running the tear stream convergence procedure.
default=True
 solve_tears (bool) –
Boolean indicating whether or not to run iterations to converge tear streams.
default=True
 guesses (ComponentMap) –
ComponentMap of guesses to use for first pass (see set_guesses_for method).
default=ComponentMap()
 default_guess (float) –
Value to use if a free variable has no guess.
default=None
 almost_equal_tol (float) –
Difference below which numbers are considered equal when checking port value agreement.
default=1.0E8
 log_info (bool) –
Set logger level to INFO during run.
default=False
 tear_method (str) –
Method to use for converging tear streams, either “Direct” or “Wegstein”.
default=”Direct”
 iterLim (int) –
Limit on the number of tear iterations.
default=40
 tol (float) –
Tolerance at which to stop tear iterations.
default=1.0E5
 tol_type (str) –
Type of tolerance value, either “abs” (absolute) or “rel” (relative to current value).
default=”abs”
 report_diffs (bool) –
Report the matrix of differences across tear streams for every iteration.
default=False
 accel_min (float) –
Min value for Wegstein acceleration factor.
default=-5
 accel_max (float) –
Max value for Wegstein acceleration factor.
default=0
 tear_solver (str) –
Name of solver to use for select_tear_mip.
default=”cplex”
 tear_solver_io (str) –
Solver IO keyword for the above solver.
default=None
 tear_solver_options (dict) –
Keyword options to pass to solve method.
default={}
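The tol, tol_type, and iterLim options imply a stopping test along these lines (a sketch; using the current value as the relative base follows the tol_type description above, but the exact guard for zero values is an assumption):

```python
def tear_converged(old, new, tol=1.0e-5, tol_type="abs"):
    """Sketch of the tear-stream stopping test: every value must agree
    with its previous iterate, absolutely ("abs") or relative to the
    current value ("rel")."""
    for prev, curr in zip(old, new):
        err = abs(curr - prev)
        if tol_type == "rel":
            err /= abs(curr) if curr != 0 else 1.0  # guard the zero case
        if err > tol:
            return False
    return True

print(tear_converged([1.0, 2.0], [1.0 + 1e-6, 2.0]))               # -> True
print(tear_converged([100.0], [100.5], tol=1e-2, tol_type="rel"))  # -> True
```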

calculation_order
(G, roots=None, nodes=None)¶ Rely on tree_order to return a calculation order of nodes
Parameters:  roots – List of nodes to consider as tree roots, if None then the actual roots are used
 nodes – Subset of nodes to consider in the tree, if None then all nodes are used

create_graph
(model)[source]¶ Returns a networkx MultiDiGraph of a Pyomo network model
The nodes are units and the edges follow Pyomo Arc objects. Nodes that get added to the graph are determined by the parent blocks of the source and destination Ports of every Arc in the model. Edges are added for each Arc using the direction specified by source and destination. All Arcs in the model will be used whether or not they are active (since this needs to be done after expansion), and they all need to be directed.

indexes_to_arcs
(G, lst)[source]¶ Converts a list of edge indexes to the corresponding Arcs
Parameters:  G – A networkx graph corresponding to lst
 lst – A list of edge indexes to convert to tuples
Returns: A list of arcs

run
(model, function)[source]¶ Compute a Pyomo Network model using sequential decomposition
Parameters:  model – A Pyomo model
 function – A function to be called on each block/node in the network

select_tear_heuristic
(G)¶ This finds optimal sets of tear edges based on two criteria. The primary objective is to minimize the maximum number of times any cycle is broken. The secondary criterion is to minimize the number of tears.
This function uses a branch and bound type approach.
Returns:  tsets – List of lists of tear sets. All the tear sets returned are equally good. There are often a very large number of equally good tear sets.
 upperbound_loop – The max number of times any single loop is torn
 upperbound_total – The total number of tears
Improvements for the future
I think I can improve the efficiency of this, but it is good enough for now. Here are some ideas for improvement:
1. Reduce the number of redundant solutions. It is possible to find tear sets [1,2] and [2,1]. I eliminate redundant solutions from the results, but they can occur and it reduces efficiency.
2. Look at strongly connected components instead of the whole graph. This would cut back on the size of the graph we are looking at. The flowsheets are rarely one strongly connected component.
3. When you add an edge to a tear set you could reduce the size of the problem in the branch by only looking at strongly connected components with that edge removed.
4. This returns all equally good optimal tear sets. That may not really be necessary. For very large flowsheets, there could be an extremely large number of optimal tear edge sets.

select_tear_mip
(G, solver, solver_io=None, solver_options={})[source]¶ This finds optimal sets of tear edges based on two criteria. The primary objective is to minimize the maximum number of times any cycle is broken. The secondary criterion is to minimize the number of tears.
This function creates a MIP problem in Pyomo with a doubly weighted objective and solves it with the solver arguments.

select_tear_mip_model
(G)[source]¶ Generate a model for selecting tears from the given graph
Returns:  model
 bin_list – A list of the binary variables representing each edge, indexed by the edge index of the graph

set_guesses_for
(port, guesses)[source]¶ Set the guesses for the given port
These guesses will be checked for all free variables that are encountered during the first pass run. If a free variable has no guess, its current value will be used. If its current value is None, the default_guess option will be used. If that is None, an error will be raised.
All port variables that are downstream of a non-tear edge will already be fixed. If there is a guess for a fixed variable, it will be silently ignored.
The guesses should be a dict that maps the following:
Port Member Name -> Value
Or, for indexed members, multiple dicts that map:
Port Member Name -> Index -> Value
For extensive members, “Value” must be a list of tuples of the form (arc, value) to guess a value for the expanded variable of the specified arc. However, if the arc connecting this port is a 1-to-1 arc with its peer, then there will be no expanded variable for the single arc, so a regular “Value” should be provided.
This dict cannot be used to pass guesses for variables within expression type members. Guesses for those variables must be assigned to the variable’s current value before calling run.
While this method makes things more convenient, all it does is:
self.options[“guesses”][port] = guesses

set_tear_set
(tset)[source]¶ Set a custom tear set to be used when running the decomposition
The procedure will use this custom tear set instead of finding its own, thus it can save some time. Additionally, this will be useful for knowing which edges will need guesses.
Parameters: tset – A list of Arcs representing edges to tear
While this method makes things more convenient, all it does is:
self.options[“tear_set”] = tset

tear_set_arcs
(G, method='mip', **kwds)[source]¶ Call the specified tear selection method and return a list of arcs representing the selected tear edges.
The kwds will be passed to the method.

tree_order
(adj, adjR, roots=None)¶ This function determines the ordering of nodes in a directed tree. This is a generic function that can operate on any given tree represented by the adjacency and reverse adjacency lists. If the adjacency list does not represent a tree, the results are not valid.
In the returned order, it is sometimes possible for more than one node to be calculated at once, so a list of lists is returned by this function. These represent a breadth-first search order of the tree. Following the order, all nodes that lead to a particular node will be visited before it.
Parameters:  adj – An adjacency list for a directed tree. This uses generic integer node indexes, not node names from the graph itself. This allows it to be used on subgraphs and graphs of components more easily.
 adjR – The reverse adjacency list corresponding to adj
 roots – List of node indexes to start from. These do not need to be the root nodes of the tree. In some cases, such as when a node changes, the changes may only affect nodes reachable in the tree from the changed node; if roots are supplied, not all the nodes in the tree may appear in the ordering. If no roots are supplied, the roots of the tree are used.
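A sketch of such a breadth-first layered ordering on an integer adjacency list (illustrative only; the real tree_order also uses the reverse adjacency list and handles supplied roots differently):

```python
def tree_order_sketch(adj, roots=None):
    """Return a list of lists of node indexes: each inner list may be
    computed at once, and every node appears only after all nodes that
    lead to it (assumes adj describes a tree or DAG)."""
    n = len(adj)
    indeg = [0] * n
    for u in range(n):
        for v in adj[u]:
            indeg[v] += 1
    layer = roots if roots is not None else [u for u in range(n) if indeg[u] == 0]
    order = []
    while layer:
        order.append(layer)
        nxt = []
        for u in layer:
            for v in adj[u]:
                indeg[v] -= 1
                if indeg[v] == 0:  # all predecessors computed
                    nxt.append(v)
        layer = nxt
    return order

# 0 -> 1, 0 -> 2, 1 -> 3, 2 -> 3: nodes 1 and 2 can be computed together.
print(tree_order_sketch([[1, 2], [3], [3], []]))  # -> [[0], [1, 2], [3]]
```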
Pyomo Tutorial Examples¶
Additional Pyomo tutorials and examples can be found at the following links:
Debugging Pyomo Models¶
Interrogating Pyomo Models¶
Show solver output by adding the tee=True option when calling the solve function
>>> SolverFactory('glpk').solve(model, tee=True)
You can use the pprint function to display the model or individual model components
>>> model.pprint()
>>> model.x.pprint()
FAQ¶
 Solver not found
Solvers are not distributed with Pyomo and must be installed separately by the user. In general, the solver executable must be accessible using a terminal command. For example, ipopt can only be used as a solver if the command
$ ipopt
invokes the solver. For example
$ ipopt -?
usage: ipopt [options] stub [-AMPL] [<assignment> ...]
Options:
--  {end of options}
-=  {show name= possibilities}
-?  {show usage}
-bf {read boundsfile f}
-e  {suppress echoing of assignments}
-of {write .sol file to file f}
-s  {write .sol file (without -AMPL)}
-v  {just show version}
Getting Help¶
See the Pyomo Forum for online discussions of Pyomo or to ask a question:
Ask a question on StackExchange
Open an issue on GitHub:
Advanced Topics¶
Persistent Solvers¶
The purpose of the persistent solver interfaces is to efficiently
notify the solver of incremental changes to a Pyomo model. The
persistent solver interfaces create and store model instances from the
Python API for the corresponding solver. For example, the
GurobiPersistent
class maintains a pointer to a gurobipy Model object. Thus, we can
make small changes to the model and notify the solver rather than
recreating the entire model using the solver Python API (or rewriting
an entire model file, e.g., an LP file) every time the model is
solved.
Warning
Users are responsible for notifying persistent solver interfaces when changes to a model are made!
Using Persistent Solvers¶
The first step in using a persistent solver is to create a Pyomo model as usual.
>>> import pyomo.environ as pe
>>> m = pe.ConcreteModel()
>>> m.x = pe.Var()
>>> m.y = pe.Var()
>>> m.obj = pe.Objective(expr=m.x**2 + m.y**2)
>>> m.c = pe.Constraint(expr=m.y >= 2*m.x + 5)
You can create an instance of a persistent solver through the SolverFactory.
>>> opt = pe.SolverFactory('gurobi_persistent') # doctest: +SKIP
This returns an instance of GurobiPersistent
. Now we need
to tell the solver about our model.
>>> opt.set_instance(m) # doctest: +SKIP
This will create a gurobipy Model object and include the appropriate variables and constraints. We can now solve the model.
>>> results = opt.solve() # doctest: +SKIP
We can also add or remove variables, constraints, blocks, and objectives. For example,
>>> m.c2 = pe.Constraint(expr=m.y >= m.x) # doctest: +SKIP
>>> opt.add_constraint(m.c2) # doctest: +SKIP
This tells the solver to add one new constraint but otherwise leave the model unchanged. We can now resolve the model.
>>> results = opt.solve() # doctest: +SKIP
To remove a component, simply call the corresponding remove method.
>>> opt.remove_constraint(m.c2) # doctest: +SKIP
>>> del m.c2 # doctest: +SKIP
>>> results = opt.solve() # doctest: +SKIP
If a pyomo component is replaced with another component with the same name, the first component must be removed from the solver. Otherwise, the solver will have multiple components. For example, the following code will run without error, but the solver will have an extra constraint. The solver will have both y >= 2*x + 5 and y <= x, which is not what was intended!
>>> m = pe.ConcreteModel() # doctest: +SKIP
>>> m.x = pe.Var() # doctest: +SKIP
>>> m.y = pe.Var() # doctest: +SKIP
>>> m.c = pe.Constraint(expr=m.y >= 2*m.x + 5) # doctest: +SKIP
>>> opt = pe.SolverFactory('gurobi_persistent') # doctest: +SKIP
>>> opt.set_instance(m) # doctest: +SKIP
>>> # WRONG:
>>> del m.c # doctest: +SKIP
>>> m.c = pe.Constraint(expr=m.y <= m.x) # doctest: +SKIP
>>> opt.add_constraint(m.c) # doctest: +SKIP
The correct way to do this is:
>>> m = pe.ConcreteModel() # doctest: +SKIP
>>> m.x = pe.Var() # doctest: +SKIP
>>> m.y = pe.Var() # doctest: +SKIP
>>> m.c = pe.Constraint(expr=m.y >= 2*m.x + 5) # doctest: +SKIP
>>> opt = pe.SolverFactory('gurobi_persistent') # doctest: +SKIP
>>> opt.set_instance(m) # doctest: +SKIP
>>> # Correct:
>>> opt.remove_constraint(m.c) # doctest: +SKIP
>>> del m.c # doctest: +SKIP
>>> m.c = pe.Constraint(expr=m.y <= m.x) # doctest: +SKIP
>>> opt.add_constraint(m.c) # doctest: +SKIP
Warning
Components removed from a pyomo model must be removed from the solver instance by the user.
Additionally, unexpected behavior may result if a component is modified before being removed.
>>> m = pe.ConcreteModel() # doctest: +SKIP
>>> m.b = pe.Block() # doctest: +SKIP
>>> m.b.x = pe.Var() # doctest: +SKIP
>>> m.b.y = pe.Var() # doctest: +SKIP
>>> m.b.c = pe.Constraint(expr=m.b.y >= 2*m.b.x + 5) # doctest: +SKIP
>>> opt = pe.SolverFactory('gurobi_persistent') # doctest: +SKIP
>>> opt.set_instance(m) # doctest: +SKIP
>>> m.b.c2 = pe.Constraint(expr=m.b.y <= m.b.x) # doctest: +SKIP
>>> # ERROR: The constraint referenced by m.b.c2 does not
>>> # exist in the solver model.
>>> opt.remove_block(m.b) # doctest: +SKIP
In most cases, the only way to modify a component is to remove it from the solver instance, modify it with Pyomo, and then add it back to the solver instance. The only exception is with variables. Variables may be modified and then updated with the solver:
>>> m = pe.ConcreteModel() # doctest: +SKIP
>>> m.x = pe.Var() # doctest: +SKIP
>>> m.y = pe.Var() # doctest: +SKIP
>>> m.obj = pe.Objective(expr=m.x**2 + m.y**2) # doctest: +SKIP
>>> m.c = pe.Constraint(expr=m.y >= 2*m.x + 5) # doctest: +SKIP
>>> opt = pe.SolverFactory('gurobi_persistent') # doctest: +SKIP
>>> opt.set_instance(m) # doctest: +SKIP
>>> m.x.setlb(1.0) # doctest: +SKIP
>>> opt.update_var(m.x) # doctest: +SKIP
Working with Indexed Variables and Constraints¶
The examples above all used simple variables and constraints; in order to use indexed variables and/or constraints, the code must be slightly adapted:
>>> for v in indexed_var.values(): # doctest: +SKIP
... opt.add_var(v)
>>> for v in indexed_con.values(): # doctest: +SKIP
... opt.add_constraint(v)
This must be done when removing variables/constraints, too. Not doing this would result in AttributeError exceptions, for example:
>>> opt.add_var(indexed_var) # doctest: +SKIP
>>> # ERROR: AttributeError: 'IndexedVar' object has no attribute 'is_binary'
>>> opt.add_constraint(indexed_con) # doctest: +SKIP
>>> # ERROR: AttributeError: 'IndexedConstraint' object has no attribute 'body'
The method “is_indexed” can be used to automate the process, for example:
>>> def add_variable(opt, variable): # doctest: +SKIP
... if variable.is_indexed():
... for v in variable.values():
... opt.add_var(v)
... else:
... opt.add_var(variable)
Persistent Solver Performance¶
In order to get the best performance out of the persistent solvers, use the “save_results” flag:
>>> import pyomo.environ as pe
>>> m = pe.ConcreteModel()
>>> m.x = pe.Var()
>>> m.y = pe.Var()
>>> m.obj = pe.Objective(expr=m.x**2 + m.y**2)
>>> m.c = pe.Constraint(expr=m.y >= 2*m.x + 5)
>>> opt = pe.SolverFactory('gurobi_persistent') # doctest: +SKIP
>>> opt.set_instance(m) # doctest: +SKIP
>>> results = opt.solve(save_results=False) # doctest: +SKIP
Note that if the “save_results” flag is set to False, then the following is not supported.
>>> results = opt.solve(save_results=False, load_solutions=False) # doctest: +SKIP
>>> if results.solver.termination_condition == TerminationCondition.optimal:
... m.solutions.load_from(results) # doctest: +SKIP
However, the following will work:
>>> results = opt.solve(save_results=False, load_solutions=False) # doctest: +SKIP
>>> if results.solver.termination_condition == TerminationCondition.optimal:
... opt.load_vars() # doctest: +SKIP
Additionally, a subset of variable values may be loaded back into the model:
>>> results = opt.solve(save_results=False, load_solutions=False) # doctest: +SKIP
>>> if results.solver.termination_condition == TerminationCondition.optimal:
... opt.load_vars(m.x) # doctest: +SKIP
rapper: a PySP wrapper¶
This is an advanced topic.
The pyomo.pysp.util.rapper package is built on the Pyomo optimization modeling language ([PyomoJournal], [PyomoBookII]) to provide a thin wrapper for some functionality of PySP [PySPJournal] associated with the runef and runph commands. The package is designed mainly for experienced Python programmers who are users of a Pyomo ConcreteModel in PySP and who want to embed the solution process in simple scripts. There is also support for users of a Pyomo AbstractModel. Note that callback functions are also supported for some aspects of PySP, which is somewhat orthogonal to the functionality provided by pyomo.pysp.util.rapper.
Demonstration of rapper Capabilities¶
In this section we provide a series of examples intended to show different things that can be done with rapper.
Imports:
>>> import pyomo.pysp.util.rapper as rapper
>>> import pyomo.pysp.plugins.csvsolutionwriter as csvw
>>> from pyomo.pysp.scenariotree.tree_structure_model import CreateAbstractScenarioTreeModel
>>> import pyomo.environ as pyo
The next line establishes the solver to be used.
>>> solvername = "cplex"
The next two lines show one way to create a concrete scenario tree. There are
others that can be found in `pyomo.pysp.scenariotree.tree_structure_model`.
>>> abstract_tree = CreateAbstractScenarioTreeModel()
>>> concrete_tree = \
... abstract_tree.create_instance("ScenarioStructure.dat")
Emulate some aspects of runef¶
Create a rapper solver object assuming there is a file named ReferenceModel.py that has an appropriate pysp_instance_creation_callback function.
>>> stsolver = rapper.StochSolver("ReferenceModel.py",
... tree_model = concrete_tree)
This object has a solve_ef method (as well as a solve_ph method).
>>> ef_sol = stsolver.solve_ef(solvername) # doctest: +SKIP
The return status from the solver can be tested.
>>> if ef_sol.solver.termination_condition != \ # doctest: +SKIP
... pyo.TerminationCondition.optimal: # doctest: +SKIP
... print ("oops! not optimal:", ef_sol.solver.termination_condition) # doctest: +SKIP
There is an iterator to loop over the root node solution:
>>> for varname, varval in stsolver.root_Var_solution(): # doctest: +SKIP
... print (varname, str(varval)) # doctest: +SKIP
There is also a function to compute the objective function value.
>>> obj = stsolver.root_E_obj() # doctest: +SKIP
>>> print ("Expectation taken over scenarios =", obj) # doctest: +SKIP
Also, stsolver.scenario_tree has the solution. The package csvw is imported from PySP as shown above.
>>> csvw.write_csv_soln(stsolver.scenario_tree, "testcref") # doctest: +SKIP
It is also possible to add arguments for chance constraints and CVaR; see the rapper API for details.
Again, but with mip gap reported¶
Now we will solve the same problem again. We cannot reuse the same rapper.StochSolver object in the same program, so we must construct a new one; however, we can reuse the scenario tree.
>>> stsolver = rapper.StochSolver("ReferenceModel.py", # doctest: +SKIP
... tree_model = concrete_tree) # doctest: +SKIP
We add a solver option to get the mip gap
>>> sopts = {"mipgap": 1} # I want a gap
and we add the option to solve_ef to return the gap, and the tee option to see the solver output as well.
>>> res, gap = stsolver.solve_ef(solvername, sopts = sopts, tee=True, need_gap = True) # doctest: +SKIP
>>> print ("ef gap =", gap) # doctest: +SKIP
PH¶
We will now do the same problem, but with PH, and we will reuse the scenario tree in tree_model from the code above. We put subsolver options in sopts and PH options (i.e., those that would be provided to runph) in phopts. Note that if options are passed to the constructor (and the solver), they are passed as a dictionary where options that do not take an argument have the data value None. The constructor really only needs some options, such as those related to bundling.
>>> sopts = {}
>>> sopts['threads'] = 2
>>> phopts = {}
>>> phopts['--output-solver-log'] = None
>>> phopts['--max-iterations'] = '3'
>>> stsolver = rapper.StochSolver("ReferenceModel.py",
... tree_model = concrete_tree,
... phopts = phopts)
The solve_ph method is similar to solve_ef, but requires a default_rho and accepts PH options:
>>> ph = stsolver.solve_ph(subsolver = solvername, default_rho = 1, # doctest: +SKIP
... phopts = phopts) # doctest: +SKIP
With PH, it is important to be careful to distinguish xbar from xhat.
>>> obj = stsolver.root_E_obj() # doctest: +SKIP
We can compute an xhat (using the current PH options):
>>> obj, xhat = rapper.xhat_from_ph(ph) # doctest: +SKIP
There is a utility for obtaining the xhat values:
>>> for nodename, varname, varvalue in rapper.xhat_walker(xhat): # doctest: +SKIP
... print (nodename, varname, varvalue) # doctest: +SKIP
rapper API¶
A class and some utilities to wrap PySP, in particular to enable programmatic access to some of the functionality in runef and runph for ConcreteModels. Author: David L. Woodruff, started February 2017

class
pyomo.pysp.util.rapper.
StochSolver
(fsfile, fsfct=None, tree_model=None, phopts=None)[source]¶ A class for solving stochastic versions of concrete models and abstract models. Inspired by the IDAES use case and by daps ability to create tree models. Author: David L. Woodruff, February 2017
Parameters:  fsfile (str) – is a path to the file that contains the scenario callback for concrete or the reference model for abstract.
 fsfct (str, or fct, or None) – str: callback function name in the file; fct: callback function (fsfile is ignored); None: it is an AbstractModel
 tree_model (concrete model, or networkx tree, or path) – gives the tree as a concrete model (which could be a fct) or a valid networkx scenario tree or path to AMPL data file.
 phopts – dictionary of ph options; needed during construction if there is bundling.

scenario_tree
¶ scenario tree object (that includes data)

make_ef
(verbose=False, generate_weighted_cvar=False, cvar_weight=None, risk_alpha=None, cc_indicator_var_name=None, cc_alpha=0.0)[source]¶ Make an ef object (used by solve_ef); all Args are optional.
Parameters:  verbose (boolean) – indicates verbosity to PySP for construction
 generate_weighted_cvar (boolean) – indicates we want weighted CVar
 cvar_weight (float) – weight for the cvar term
 risk_alpha (float) – alpha value for cvar
 cc_indicator_var_name (string) – name of the Var used for chance constraint
 cc_alpha (float) – alpha for chance constraint
Returns: the ef object
Return type: ef_instance

root_E_obj
()[source]¶ post solve Expected cost of the solution in the scenario tree (xbar)
Returns: the expected costs of the solution in the tree (xbar) Return type: float

root_Var_solution
()[source]¶ Generator to loop over xbar
Yields: name, value pair for root node solution values

solve_ef
(subsolver, sopts=None, tee=False, need_gap=False, verbose=False, generate_weighted_cvar=False, cvar_weight=None, risk_alpha=None, cc_indicator_var_name=None, cc_alpha=0.0)[source]¶ Solve the stochastic program directly using the extensive form. All Args other than subsolver are optional.
Parameters:  subsolver (str) – the solver to call (e.g., ‘ipopt’)
 sopts (dict) – solver options
 tee (bool) – indicates dynamic solver output to terminal.
 need_gap (bool) – indicates the need for the optimality gap
 verbose (boolean) – indicates verbosity to PySP for construction
 generate_weighted_cvar (boolean) – indicates we want weighted CVar
 cvar_weight (float) – weight for the cvar term
 risk_alpha (float) – alpha value for cvar
 cc_indicator_var_name (string) – name of the Var used for chance constraint
 cc_alpha (float) – alpha for chance constraint
Returns: (Pyomo solver result, float)
solve_result is the solver return value.
absgap is the absolute optimality gap (might not be valid); only if requested
Note
Also updates the scenario tree, populated with the solution, and attaches the full ef instance to the object, so you might want obj = pyo.value(stsolver.ef_instance.MASTER). This needs more work to deal with solver failure (dlw, March 2018)

solve_ph
(subsolver, default_rho, phopts=None, sopts=None)[source]¶ Solve the stochastic program given by this.scenario_tree using ph
Parameters:  subsolver (str) – the solver to call (e.g., ‘ipopt’)
 default_rho (float) – the rho value to use by default
 phopts – dictionary of ph options (optional)
 sopts – dictionary of subsolver options (optional)
Returns: the ph object
Note
Updates the scenario tree, populated with the xbar values; however, you probably want to do obj, xhat = ph.compute_and_report_inner_bound_using_xhat() where ph is the return value.
Abstract Constructor¶
In Demonstration of rapper Capabilities we provide a series of examples intended to show different things that can be done with rapper and the constructor is shown for a ConcreteModel. The same capabilities are available for an AbstractModel but the construction of the rapper object is different as shown here.
Import for constructor:
>>> import pyomo.pysp.util.rapper as rapper
The next line constructs the rapper object that can be used to emulate runph or runef.
>>> stsolver = rapper.StochSolver(ReferencePath,
... fsfct = None,
... tree_model = scenariodirPath,
... phopts = None)
The rap¶
As homage to the tired cliché based on the rapwrap homonym, we provide the lyrics to our rap anthem:
A PySP State of Mind (The Pyomo Hip Hop)
By Woody and https://www.songlyricsgenerator.org.uk
Yeah, yeah
Ayo, modeller, it's time.
It's time, modeller (aight, modeller, begin).
Straight out the scriptable dungeons of rap.
The cat drops deep as does my map.
I never program, 'cause to program is the uncle of rap.
Beyond the walls of scenarios, life is defined.
I think of optimization under uncertainty when I'm in a PySP state of mind.
Hope the resolution got some institution.
My revolution don't like no dirty retribution.
Run up to the distribution and get the evolution.
In a PySP state of mind.
What more could you ask for? The fast cat?
You complain about unscriptability.
I gotta love it though  somebody still speaks for the mat.
I'm rappin' with a cape,
And I'm gonna move your escape.
Easy, big, indented, like a solution
Boy, I tell you, I thought you were an institution.
I can't take the unscriptability, can't take the script.
I woulda tried to code I guess I got no transcript.
Yea, yaz, in a PySP state of mind.
When I was young my uncle had a nondescript.
I waz kicked out without no manuscript.
I never thought I'd see that crypt.
Ain't a soul alive that could take my uncle's transcript.
An object oriented fox is quite the box.
Thinking of optimization under uncertainty.
Yaz, thinking of optimization under uncertainty
(optimization under uncertainty).
Bibliography¶
[PyomoBookII]  William E. Hart; Carl Laird; Jean-Paul Watson; David L. Woodruff; Gabriel A. Hackebeil; Bethany L. Nicholson; John D. Siirola: Pyomo – Optimization Modeling in Python, Springer, 2017
[PyomoJournal]  William E. Hart; Jean-Paul Watson; David L. Woodruff: “Pyomo: modeling and solving mathematical programs in Python,” Mathematical Programming Computation, Volume 3, Number 3, August 2011, Pages 219–260
[PySPJournal]  Jean-Paul Watson; David L. Woodruff; William E. Hart: “PySP: modeling and solving stochastic programs in Python,” Mathematical Programming Computation, Volume 4, Number 2, June 2012, Pages 109–149
Units Handling in Pyomo¶
Pyomo Units Container Module
This module provides support for including units within Pyomo expressions. This module can be used to define units on a model, and to check the consistency of units within the underlying constraints and expressions in the model. The module also supports conversion of units within expressions to support construction of constraints that contain embedded unit conversions.
To use this package within your Pyomo model, you first need an instance of a PyomoUnitsContainer. You can use the module-level instance already defined as ‘units’. This object ‘contains’ the units; that is, you can access units as attributes on this object using common dot notation.
>>> from pyomo.environ import units as u
>>> print(3.0*u.kg)
3.0*kg
Units can be assigned to Var, Param, and ExternalFunction components, and can be used directly in expressions (e.g., defining constraints). You can also verify that the units are consistent on a model, or on individual components like the objective function, constraint, or expression using assert_units_consistent (from pyomo.util.check_units). There are other methods there that may be helpful for verifying correct units on a model.
>>> from pyomo.environ import ConcreteModel, Var, Objective
>>> from pyomo.environ import units as u
>>> from pyomo.util.check_units import assert_units_consistent, assert_units_equivalent, check_units_equivalent
>>> model = ConcreteModel()
>>> model.acc = Var(initialize=5.0, units=u.m/u.s**2)
>>> model.obj = Objective(expr=(model.acc - 9.81*u.m/u.s**2)**2)
>>> assert_units_consistent(model.obj) # raise exc if units invalid on obj
>>> assert_units_consistent(model) # raise exc if units invalid anywhere on the model
>>> assert_units_equivalent(model.obj.expr, u.m**2/u.s**4) # raise exc if units not equivalent
>>> print(u.get_units(model.obj.expr)) # print the units on the objective
m ** 2 / s ** 4
>>> print(check_units_equivalent(model.acc, u.m/u.s**2))
True
The implementation is currently based on the pint package and supports all the units that are supported by pint. The list of units that are supported by pint can be found at the following url: https://github.com/hgrecco/pint/blob/master/pint/default_en.txt.
If you need a unit that is not in the standard set of defined units, you can create your own units by adding to the unit definitions within pint. See PyomoUnitsContainer.load_definitions_from_file() or PyomoUnitsContainer.load_definitions_from_strings() for more information.
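As an illustration, a pint unit-definition file might look like the following. The file name my_units.txt and the units shown are hypothetical examples written in pint's definition syntax; they are not part of Pyomo:

```
# contents of my_units.txt (pint unit-definition syntax)
USD = [currency]     # a new base unit with its own dimension
klb = 1000 * lb      # a derived unit defined from an existing unit
```

Such a file could then be loaded with units.load_definitions_from_file('my_units.txt'), or the individual definition lines could be passed as a list of strings to units.load_definitions_from_strings().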
Note
In this implementation of units, “offset” units for temperature are not supported within expressions (i.e., the non-absolute temperature units, including degrees C and degrees F). This is because there are many non-obvious combinations that are not allowable. This concern becomes clear if you first convert the non-absolute temperature units to absolute units and then perform the operation. For example, 30 degC + 30 degC == 60 degC may look reasonable, but if you convert each entry to Kelvin the expression is not true (i.e., 303.15 K + 303.15 K is not equal to 333.15 K). Therefore, there are several operations that are not allowable with non-absolute units, including addition, multiplication, and division.
This module does support conversion of offset units to absolute units numerically, using convert_temp_K_to_C, convert_temp_C_to_K, convert_temp_R_to_F, and convert_temp_F_to_R. These are useful for converting input data to absolute units, and for converting data back to convenient units for reporting.
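The arithmetic behind these numeric conversions, and the pitfall described in the note above, can be sketched in plain Python (this is an illustration of the standard formulas, not the Pyomo implementation):

```python
def convert_temp_C_to_K(value_in_C):
    """Degrees Celsius -> Kelvin (numeric value only)."""
    return value_in_C + 273.15

def convert_temp_K_to_C(value_in_K):
    """Kelvin -> degrees Celsius (numeric value only)."""
    return value_in_K - 273.15

# The additive offset makes addition of non-absolute temperatures
# ill-defined: summing two 30 degC values after converting to Kelvin
# gives about 606.3 K, which converts back to about 333.15 degC,
# not the 60 degC a naive reading of "30 degC + 30 degC" suggests.
total_K = convert_temp_C_to_K(30.0) + convert_temp_C_to_K(30.0)
print(round(convert_temp_K_to_C(total_K), 2))  # 333.15
```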
Please see the pint documentation for more discussion. While pint implements “delta” units (e.g., delta_degC) to support correct unit conversions, it can be difficult to identify and guarantee valid operations in a general algebraic modeling environment. While future work may support units with relative scale, the current implementation requires the use of absolute temperature units (i.e., K and R) within expressions and a direct conversion of numeric values using the specific functions above for converting input data and reporting results.

class pyomo.core.base.units_container.PyomoUnitsContainer[source]¶
Bases: object
Class that is used to create and contain units in Pyomo.
This is the class that is used to create, contain, and interact with units in Pyomo. The module (pyomo.core.base.units_container) also contains a module-level units container, units, that is an instance of a PyomoUnitsContainer. This module-level instance should typically be used instead of creating your own instance of a PyomoUnitsContainer. For an overview of the usage of this class, see the module documentation (pyomo.core.base.units_container).
This class is based on the “pint” module. Documentation for available units can be found at the following url: https://github.com/hgrecco/pint/blob/master/pint/default_en.txt
Note
Predefined units can be accessed through attributes on the PyomoUnitsContainer class; however, these attributes are created dynamically through the __getattr__ method, and are not present on the class until they are requested.

convert
(src, to_units=None)[source]¶ This method returns an expression that contains the explicit conversion from one unit to another.
Parameters:  src (Pyomo expression) – The source value that will be converted. This could be a Pyomo Var, Pyomo Param, or a more complex expression.
 to_units (Pyomo units expression) – The desired target units for the new expression
Returns: ret
Return type: Pyomo expression
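Numerically, an explicit unit conversion amounts to scaling a value by the ratio of the two units' magnitudes in a common base unit. A minimal plain-Python sketch of that idea (not the Pyomo implementation, which instead builds a Pyomo expression with the conversion embedded; the unit table below is a hypothetical example):

```python
# Magnitude of each unit expressed in an SI base unit (m for length,
# s for time); entries here are illustrative, not a complete table.
SCALE_TO_BASE = {"m": 1.0, "ft": 0.3048, "s": 1.0, "min": 60.0}

def convert(value, from_unit, to_unit):
    """Convert a numeric value between two units of the same dimension."""
    return value * SCALE_TO_BASE[from_unit] / SCALE_TO_BASE[to_unit]

print(convert(1.0, "min", "s"))  # 60.0
```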

convert_temp_C_to_K
(value_in_C)[source]¶ Convert a value in degrees Celsius to Kelvin. Note that this method converts a numerical value only. If you need temperature conversions in expressions, please work in absolute temperatures only.

convert_temp_F_to_R
(value_in_F)[source]¶ Convert a value in degrees Fahrenheit to Rankine. Note that this method converts a numerical value only. If you need temperature conversions in expressions, please work in absolute temperatures only.

convert_temp_K_to_C
(value_in_K)[source]¶ Convert a value in Kelvin to degrees Celsius. Note that this method converts a numerical value only. If you need temperature conversions in expressions, please work in absolute temperatures only.

convert_temp_R_to_F
(value_in_R)[source]¶ Convert a value in Rankine to degrees Fahrenheit. Note that this method converts a numerical value only. If you need temperature conversions in expressions, please work in absolute temperatures only.

convert_value
(num_value, from_units=None, to_units=None)
