Hi guys, I am part of an operations research team, and we are working on an optimization project in the transportation industry. Our goal is to maximize the total revenue from moving loads, minus the cost of moving trucks to pick up and deliver those loads. We have a considerable number of resources to allocate: around 60 trucks, 80 drivers, and five central depots, and we would like to build a dynamic model that captures the evolution of the network through space and time.

We are currently deciding whether to model the situation as a linear programming problem or, as we have seen in some papers, as a dynamic programming problem. According to our research, the dynamic allocation problem grows exponentially with the number of resources, so the number of decision variables can be on the order of thousands.

To model and solve the problem, we are evaluating Python + SciPy (or another module), Python + MPL, and Matlab + the Optimization Toolbox. In particular, we are concerned with how these programs manage arrays. Knowing that we may be working with thousands (maybe millions?) of coefficients, which software package, in your opinion and experience, is best suited to a problem of these dimensions?

asked 15 Apr '11, 17:56 by juandarr


We actually did a comparison last year between different Python optimization implementations, based on a question similar to yours. The goal was to compare how the different optimization packages interface with Python, and specifically how they handle data management, which tends to be the most difficult issue when dealing with complex models. We published the results in a presentation in the Python track at the INFORMS San Antonio meeting in November, and also at the ICS meeting last January.

To keep our workload reasonable, we picked a relatively simple model, CutStock, that still involves some complexity in its data handling. The model file cutstock.mpl, as well as the spreadsheet cutstock.xls that contains the data for the model, comes with the standard download of MPL.

The packages we compared were:

  1. Cplex
  2. Gurobi
  3. PulpOR
  4. Pyomo
  5. LPSolve
  6. MPL

We found great differences between the packages, both in how they set up the models and in how they handle data management.

Python

First here is a summary of the Python data (see the spreadsheet for the full data):

cutcount = 8 
patcount = 29
Cuts = [w1, w2, ... w8]
Patterns = [p1, p2, ..., p29]
PriceSheet = 28
SheetsAvail = 2000
CutDemand = [500, 400, 300, 450, 350, 200, 800, 200]
CutsInPattern = [[1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
                  0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], 
             [0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 
              0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], ...

Cplex

The CPLEX implementation requires you to create special lists in Python to represent the data coefficients in the model formulation.

# Index/value lists for the Sheets constraint: SheetsCut - sum(PatternCount) = 0
indA = ["SheetsCut"] + Patterns
valA = [1] + [-1] * patcount

# Sparse index/value lists for the cut-requirement constraints, one row per cut
indP = []
valP = []
for c in range(cutcount):
    ind = [Patterns[p] for p in range(patcount) if CutsInPattern[c][p] >= 1]
    val = [CutsInPattern[c][p] for p in range(patcount) if CutsInPattern[c][p] >= 1]
    ind.append(Cuts[c])
    val.append(-1)
    indP.append(ind)
    valP.append(val)

Once those have been created, the rest of the formulation is relatively easy: adding the variables, the objective, and the constraints using those special lists:

import cplex

cpx = cplex.Cplex()

# Variables
cpx.variables.add(names = ["SheetsCut"], lb = [0], ub = [cplex.infinity])
cpx.variables.add(names = ["TotalCost"], lb = [0], ub = [cplex.infinity], obj = [1])
cpx.variables.add(names = Patterns)
cpx.variables.add(names = Cuts)

# Objective Sense
cpx.objective.set_sense(cpx.objective.sense.minimize)

# Constraints
cpx.linear_constraints.add(lin_expr = [cplex.SparsePair(ind = ["SheetsCut", "TotalCost"], val = [-PriceSheet, 1.0])], senses = ["E"], rhs = [0])
cpx.linear_constraints.add(lin_expr = [cplex.SparsePair(ind = ["SheetsCut"], val = [1.0])], senses = ["L"], rhs = [SheetsAvail])
cpx.linear_constraints.add(lin_expr = [cplex.SparsePair(ind = indA, val = valA)], senses = ["E"], rhs = [0])  # Sheets: SheetsCut = sum(PatternCount)
for c in range(cutcount):
    cpx.linear_constraints.add(lin_expr = [cplex.SparsePair(ind = indP[c], val = valP[c])], senses = ["E"], rhs = [CutDemand[c]])
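
For reference, solving the model and reading the solution back then takes only a couple of calls (a minimal sketch using standard CPLEX Python API methods; not part of the original comparison):

cpx.solve()
print(cpx.solution.get_objective_value())    # optimal total cost
print(cpx.solution.get_values("SheetsCut"))  # value of a single variable, by name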

Gurobi

With Gurobi you start by creating the model object and the variables for the model:

from gurobipy import Model, LinExpr, GRB

m = Model("CutStock")

# Variables
SheetsCut = m.addVar(0, GRB.INFINITY, 0, GRB.CONTINUOUS,"SheetsCut")
TotalCost = m.addVar(0, GRB.INFINITY, 1, GRB.CONTINUOUS,"TotCost")

PatternCount = []
for i in range(patcount):
    newvar = m.addVar(0, GRB.INFINITY, 0, GRB.CONTINUOUS, Patterns[i])
    PatternCount += [newvar]

ExcessCuts = []
for j in range(cutcount):
    newvar = m.addVar(0, GRB.INFINITY, 0, GRB.CONTINUOUS, Cuts[j])
    ExcessCuts += [newvar]

m.update()

With the Gurobi library, it is important to remember to always call update() on the model object, so that you can refer to the new variables when you add the constraints:

# Constraints
m.addConstr(LinExpr(PriceSheet, SheetsCut), GRB.EQUAL, TotalCost,"TotCostCalc")
m.addConstr(LinExpr(1, SheetsCut), GRB.LESS_EQUAL, SheetsAvail,"RawAvail")

sheetsB = LinExpr()
for i in range(patcount):
    sheetsB.addTerms(1, PatternCount[i])
m.addConstr(sheetsB, GRB.EQUAL, SheetsCut,"Sheets")

for c in range(cutcount):
    cutReqB = LinExpr()
    cutReqB.addTerms(-1,ExcessCuts[c])
    for p in range(patcount):
        cutReqB.addTerms(CutsInPattern[c][p],PatternCount[p])
    m.addConstr(cutReqB, GRB.EQUAL, CutDemand[c], "CutReq_" + str(c))

#Objective Sense
m.ModelSense = 1   # 1 = GRB.MINIMIZE

m.update()
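
Solving and extracting the solution is again short (a minimal sketch using standard gurobipy calls, not part of the original comparison):

m.optimize()
print(m.objVal)      # optimal total cost
print(SheetsCut.x)   # value of a single variable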

PulpOR

PulpOR has a clever built-in function, makeDict, which allows you to set up the data in a very simple manner:

from pulp import *

# Dictionaries
CutsInPattern = makeDict([Cuts, Patterns], CutsInPattern)
CutDemand = makeDict([Cuts], CutDemand)

prob = LpProblem("CutStock Problem", LpMinimize)

# Variables
SheetsCut = LpVariable("SheetsCut",0)
TotalCost = LpVariable("TotalCost",0)
PatternCount = LpVariable.dicts("PatternCount",Patterns, lowBound = 0)
ExcessCuts = LpVariable.dicts("ExcessCuts",Cuts, lowBound = 0)

# Objective
prob += TotalCost,""

PulpOR then makes heavy use of Python operator overloading to formulate the constraints for the model:

# Constraints
prob += TotalCost == PriceSheet*SheetsCut,"TotCost"
prob += SheetsCut <= SheetsAvail,"RawAvail"
prob += lpSum([PatternCount[p] for p in Patterns]) == SheetsCut, "Sheets"
for c in Cuts:
    prob += lpSum([CutsInPattern[c][p]*PatternCount[p] for p in Patterns]) == CutDemand[c] + ExcessCuts[c],"CutReq" + str(c)
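
Solving then follows the usual PuLP pattern (a minimal sketch, not part of the original comparison):

prob.solve()                   # uses PuLP's default solver unless one is passed in
print(value(prob.objective))   # optimal total cost
print(SheetsCut.varValue)      # value of a single variable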

Pyomo

The main difference with Pyomo is that the installation requires you to install a special virtual Python environment, rather than using the existing Python installed on your machine. Like PulpOR, Pyomo also makes heavy use of operator overloading to formulate the model:

from pyomo.environ import *   # in the older Coopr releases of Pyomo the import path differs

mod = ConcreteModel(name="The CutStock Problem")

# Variables
mod.SheetsCut = Var(bounds=(0,None), doc="The amount of sheets cut")
mod.TotalCost = Var(bounds=(0,None), doc="The total cost")
mod.PatternCount = Var(range(patcount), bounds=(0,None), doc="Number of cutting patterns")
mod.ExcessCuts = Var(range(cutcount), bounds=(0,None), doc="Cuts that are not required")

# Objective
mod.obj = Objective(expr=mod.TotalCost, doc="Total Cost")

# Constraints
mod.TotCost = Constraint(expr=PriceSheet*mod.SheetsCut == mod.TotalCost, doc="Calculating TotalCost")
mod.RawAvail = Constraint(expr=mod.SheetsCut <= SheetsAvail, doc="Sheet Availability")
mod.Sheets = Constraint(expr=sum(mod.PatternCount[p] for p in range(patcount)) == mod.SheetsCut, doc="Calculating SheetsCut")
def CutReq_rule(mod, c):
    return sum(CutsInPattern[c][p] * mod.PatternCount[p] for p in range(patcount)) == CutDemand[c] + mod.ExcessCuts[c]
mod.CutReq = Constraint(range(cutcount), rule=CutReq_rule, doc="MeetDemand")
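
Solving then goes through a SolverFactory (a minimal sketch assuming a modern Pyomo and an installed LP solver such as glpk; not part of the original comparison):

opt = SolverFactory("glpk")
opt.solve(mod)
print(mod.TotalCost.value)   # optimal total cost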

LPSolve

I am skipping the LPSolve code here, since we found that its implementation essentially requires you to write a whole matrix generator for each variable term, which resulted in a lot of extra code. If you want to see it, please let me know and I will send it to you.

MPL

The Python implementation for MPL is a little different from some of the others, since it is based on a modeling language instead of a solver. This allows you to choose how much of the work you want to perform inside Python vs. inside MPL. In this sample we assume the data is still generated through Python:

mod = mpl.model

idxCuts = mod.IndexSets.AddNameSet("cuts", Cuts, cutcount)
idxPatterns = mod.IndexSets.AddNameSet("patterns", Patterns, patcount)

mod.DataConstants.Add("PriceSheet", PriceSheet)
mod.DataConstants.Add("SheetsAvail", SheetsAvail)
mod.DataVectors.AddDense("CutDemand[cuts]", CutDemand, cutcount)
mod.DataVectors.AddDense("CutsInPattern[cuts, patterns]", CutsInPattern, cutcount * patcount);

The rest of the model can then either be read from an MPL file or generated inside the Python code. Here is one example:

result = mod.ReadFile("cutstock.mpl")

Here is another example:

ModStmt = """VARIABLES
            SheetsCut;
            TotalCost;
            PatternCount[patterns];
            ExcessCuts[cuts];
         MODEL
            MIN z = PriceSheet*SheetsCut
         SUBJECT TO
             TotCost:  TotalCost = PriceSheet*SheetsCut;
             RawAvail: SheetsCut < SheetsAvail;
             Sheets:   SheetsCut = SUM(patterns: PatternCount[patterns]);
             CutReq[cuts]: SUM(patterns: CutsInPattern[patterns, cuts] * PatternCount[patterns])
                         = CutDemand[cuts] + ExcessCuts[cuts];
            END;"""

result = mod.Parse(ModStmt)

So, as you can see, there are great differences between the packages. In general, the closer you get to a modeling language, the less processing you have to do on the data before you can use it in the model. With CPLEX and GUROBI you clearly have to do more work than with PulpOR, Pyomo, and MPL.

As always, I would definitely recommend doing some testing with real data, before making the final decision on which one to use for your project.

Bjarni Kristjansson, Maximal Software, Inc.

answered 15 Apr '11, 23:21 by BjarniMax

@Bjarni, excellent reply!

(16 Apr '11, 02:19) Bo Jensen ♦

Thanks for sharing your work. This is great. Can you comment on the performance as well? Which one was faster?

(16 Apr '11, 06:15) Mark ♦
Performance is a different aspect of this. The key factor here is going to be the difference in speed between Python and C, as almost all solvers and modeling languages are written in C. As Bo mentioned, the faster the solvers become, the more relevant data management and matrix generation become in the overall performance.

The more processing you do in C, the faster your application is going to be, so the modeling languages will always have a certain advantage here. I will create a new response where I discuss the likely speed differences between the packages.

(16 Apr '11, 12:22) BjarniMax
Hi BjarniMax, thank you very much for sharing your work! About your first big post: I read it a couple of days ago, but I didn't want to reply until I had understood all this new information. We have worked a little bit with PulpOR, MPL, CPLEX, and Gurobi. We didn't know about Pyomo or LPSolve (I am reading about them right now, thanks :D).

So, in conclusion: for a fast implementation of a model, the best option would be to use a modeling language.

To organize ideas, a good combination of tools would be something like Python, which is useful for interacting with other software and programming languages (additional features), plus modeling software/libraries for OR (like MPL or PulpOR), plus a good solver (CPLEX, Gurobi). What do you think? I noticed that you didn't mention Matlab plus some toolbox/libraries; is that software suitable for solving big OR problems, or is there some big drawback to using it?

About your second big post: that information is very useful for us! We are aware of the importance of software performance in our problem, because we will certainly need to work with new information (changes in demand, trucks arriving, new drivers available, etc.) in real time. I will analyze all the information you gave us in depth, in order to get a wider view of our problem.

(18 Apr '11, 13:12) juandarr

Make sure you don't fall into the trap of premature optimization, i.e., confirm that the modeling part is actually very time consuming (build a simple prototype). If it is, then you should not choose Matlab. Just my 2c.

(18 Apr '11, 13:50) Bo Jensen ♦

Hi juandarr, the key thing is to pick the right software based on the specific requirements of your project. You mentioned in your original question that your model potentially has millions of coefficients, which would probably point toward using more advanced modeling software, such as MPL or PulpOR.

The reason I did not cover Matlab is that, in our experience, its Optimization Toolbox is very limited, both with regard to modeling and to solving. We have implemented some MEX files that allow calling MPL/CPLEX from Matlab, but it is still limited compared to the other platforms available.

(18 Apr '11, 16:35) BjarniMax

OK BjarniMax, thank you very much! I'll take note.

(19 Apr '11, 10:09) juandarr

I ran into the 600-character limit in the comment field, so I had to move my response to the performance follow-up question here instead.

We have not done specific performance testing between the different Python packages yet, but as this is clearly an interesting subject, we may well go ahead and do it sometime in the near future. This would, for example, make an interesting follow-up report in the Python track at the next INFORMS conference in Charlotte.

In the meantime, there are a few things I can mention here, based on previous experience with benchmarking data processing and matrix generation, and on what we know about the different Python packages. First, let's define what I mean by benchmarking in this context. Here is a list of the different phases of running an optimization model:

  1. Data Import
  2. Data Pre-processing
  3. Model Generation and Indexing
  4. Matrix Generation
  5. Solving
  6. Extract solution
  7. Data Post-processing
  8. Data Export

As I mentioned in the comment above, one of the key factors is going to be how much is done in C vs. Python. The main reason for this is that Python is an interpreted language, and therefore often much, much slower, while C is fast since it is compiled. You can use numeric packages such as NumPy, which are written in C, to speed things up, but much of the processing is still likely to be done in the slower Python language. You can also try writing C-language extensions with Cython or SWIG, but those are rather complicated to master and would probably defeat the purpose of using Python for the project.
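
To give a feel for the size of the gap, here is a small timing sketch of my own (not from the benchmark), summing a million coefficients with a pure-Python loop vs. NumPy:

import time
import numpy as np

data = list(range(1000000))
arr = np.arange(1000000)

t0 = time.time()
total = 0
for x in data:       # interpreted loop: every iteration pays Python overhead
    total += x
t1 = time.time()

t2 = time.time()
total2 = arr.sum()   # vectorized: the loop runs in compiled C code
t3 = time.time()

print("pure Python: %.3fs, NumPy: %.3fs" % (t1 - t0, t3 - t2))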

So let's go through each of the above phases and discuss where the Python packages are likely to differ in performance.

1. Data Import

This is likely to perform about the same across the packages, at least for the data that is imported using Python. Python has some great standard modules that make importing data easy and fast. If only part of the data needs to be imported using Python, then the modeling languages may have an advantage, due to performance tuning built into their data-import implementations. In general, the type of data storage (fast to slow: binary file, text file, database, XML, spreadsheet) and whether the data is local or coming over a network have a much greater impact on how fast the data import is.
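
For example, reading the cut demands with Python's standard csv module takes only a couple of lines (a sketch of mine; the file name and layout are hypothetical):

import csv

# Hypothetical file with one line per cut: "cutname,demand"
with open("cutdemand.csv") as f:
    CutDemand = [int(row[1]) for row in csv.reader(f)]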

2. Data Pre-processing

Performance is likely to be similar between the packages here as well, except for the modeling languages, which would likely have a clear speed advantage. For large modeling projects, it is common to have a lot of data processing that needs to be performed before the data can be used in the model generation. In some cases the actual model equations are just a small fraction of the model formulation, the rest being data processing. In some cases you may want to consider moving heavy data processing out of the model into a separate module, written in a fast programming language.
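
A typical small pre-processing step, using the CutStock data from the first answer (my own illustration), is computing which patterns actually use each cut, so that the model generation only touches nonzero coefficients:

# For each cut, the list of patterns with a nonzero coefficient
PatternsForCut = {}
for c in range(cutcount):
    PatternsForCut[c] = [p for p in range(patcount) if CutsInPattern[c][p] >= 1]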

3. Model Generation and Indexing

This is where a large part of the work is done, and where you are likely to see the greatest differences. As you saw in the first response, the Python packages differ greatly in how you write each term of the model equations. In general, you should expect that the more Python code you have to write, the slower the model generation is going to be. This is not a fixed rule, though, since in some cases the generation has simply been moved into the Python code inside the package itself. Only by doing some real performance testing will we be able to determine which package is the fastest here. The modeling languages will have a clear advantage here as well, as they are likely to have highly optimized C code managing the model generation and the index handling.

4. Matrix Generation

For modeling languages, the matrix generation phase is the process of taking the internal representation of the model matrix and generating the actual column matrix that is sent to the solver. Again, this code is typically written in C for the modeling languages and is likely to be highly optimized. For the packages written in Python, the matrix generation is likely to be done at the same time as the model generation, although there may be some exceptions. The only way to find out would be to examine the actual source code and see whether the generation is two-phased, as in most modeling languages, or not.
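
To make the "column matrix" idea concrete, here is a sketch of my own (not from the benchmark) of assembling the cut-requirement rows of the CutStock matrix in sparse triplet form with SciPy, which is roughly what a matrix generator hands to the solver:

from scipy.sparse import coo_matrix

rows, cols, vals = [], [], []
for c in range(cutcount):        # one row per cut-requirement constraint
    for p in range(patcount):    # one column per pattern variable
        if CutsInPattern[c][p]:
            rows.append(c)
            cols.append(p)
            vals.append(CutsInPattern[c][p])

A = coo_matrix((vals, (rows, cols)), shape=(cutcount, patcount))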

5. Solving

I am assuming that since most of the packages will use one of the high-end, state-of-the-art solvers, such as CPLEX or GUROBI, there is not going to be much difference between the Python packages, so I will not discuss it further here. I am sure the respective solver vendors would not necessarily agree with that statement: they all believe, of course, that they are MUCH faster than the competition :-).

6. Extract Solution

This is usually a relatively easy step, except in cases where you need to do a lot of mapping from the solution data back to the model identifiers. Python, of course, has excellent facilities for efficient mapping, so this should not be much of a problem.
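
For instance, pairing each pattern name with its solution value takes one line once you have the value list (a sketch of mine, reusing the CPLEX model from the first answer):

values = cpx.solution.get_values(Patterns)    # solver values, in the same order as Patterns
PatternValues = dict(zip(Patterns, values))   # pattern name -> solution value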

7. Data Post-processing

All the comments I made regarding 2. Data Pre-processing would apply equally here.

8. Data Export

Similarly, the comments I made regarding 1. Data Import apply equally here. There is, though, one thing you need to be aware of: performance-wise, data exports can easily take a lot more time than data imports. This has to do with peculiarities of the SQL language. When you import data, you typically issue a SELECT query and then loop through the result set to collect the data for the model. When you export data, you basically have two choices: an UPDATE query or an INSERT query.

The UPDATE query can be quite slow, especially when working with larger data tables. The reason is that SQL has to search the table for the specific row for each data item it wants to export, so the time used grows rapidly with the size of the data table.

The INSERT query is faster, but it requires you to empty the table containing the export data before each solver run. This may not always be feasible, but when you can do it, it can result in considerably faster data exports.
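
Here is a sketch of the empty-then-insert pattern with Python's built-in sqlite3 module (the database file, table, and solution values are hypothetical):

import sqlite3

PatternValues = {"p1": 12.0, "p2": 0.0}    # e.g., the mapped solution from step 6

conn = sqlite3.connect("results.db")
cur = conn.cursor()
cur.execute("DELETE FROM pattern_counts")  # empty the export table first
cur.executemany("INSERT INTO pattern_counts (pattern, count) VALUES (?, ?)",
                list(PatternValues.items()))
conn.commit()
conn.close()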

The above applies both to modeling languages and to packages written in Python, but the modeling languages may have an advantage here, as some of them have implemented advanced database tuning options, such as transactions.

answered 16 Apr '11, 13:54 by BjarniMax

Thanks for sharing, very interesting indeed. More of this please :-) I have a comment on section 5: yes, all vendors believe they are much faster, but IMHO this is mostly sales talk. I remember at least one situation at an INFORMS meeting where "two of the leading vendors" were arguing about who was best in benchmarks... though it was good entertainment for the crowd, no one benefits from such subjective statements. If you ask the developers, you will get a much more reasonable answer. People should just test on their own models and decide.

(16 Apr '11, 14:52) Bo Jensen ♦

Yes, that is why the work Hans Mittelmann is doing is so important: it is not biased toward any single solver.

GAMS also regularly presents solver benchmark results at conferences, produced with their BENCH utility. We at Maximal regularly do our own benchmarking on solvers, but so far we have not elected to publish those results.

(16 Apr '11, 17:46) BjarniMax

Although it can handle both linear and nonlinear optimization, Matlab's built-in Optimization Toolbox is very limited. There are interfaces to CPLEX from both Matlab and Python. Also, Gurobi comes with a built-in Python interface (CPLEX added one a year ago too). I would use CPLEX or Gurobi for large models like yours.

I have only used linear models for a much smaller number of trucks and locations (at most 5 trucks and fewer than 50 sites). You will see in the literature that people have tried to solve these types of problems with neural networks. I spent some time on that, only to realize it was a terrible choice and a waste of time. I absolutely advise against it.

If you can, please let us know later which package and model you ended up using.

answered 15 Apr '11, 18:52 by Mark ♦, edited 15 Apr '11, 18:58

Mark, thanks for the info. Hehe, funny anecdote about the neural networks :D (it happens a lot: we sometimes use the wrong tools to solve problems). I have a question for you: what is your opinion on using Matlab to solve a big OR problem? Is it a suitable tool for that? And yes, of course, I will let you know later which package and model we select.

(18 Apr '11, 13:23) juandarr

Bjarni summarizes the main technical issues comprehensively. I don't have much to add, other than to say that platform choices (Python vs. MPL vs. C++, SQL vs. text-based I/O, hardware, etc.) do not, in practice, mark the difference between the success or failure of such a project. It's like any other efficiency-frontier balancing act, in this case between development cycles and execution (time/space) efficiency. The lower-level your platform, the cheaper it will be in time and memory usage, and the longer it will take you to code, validate, and debug. Modeling languages such as MPL represent a good middle ground because, even if you decide at some point that the overheads are too high, you can usually translate the model constructs into lower-level objects, e.g., in C++, line by line.

The choice of approach (e.g., MIP vs. ANN vs. dynamic programming) is more critical. In a recent project, faced with solving an almost purely combinatorial problem, I flirted with using Constraint Programming. But reflecting on the many CP-based projects I have been associated with, plus discussions with ex-ILOG colleagues, convinced me that MIP was the better approach in that case: not because provable optimality was hugely important (it wasn't), but because MIP provides a robust enough framework to sustain potential changes to the model as we went down the discovery path. That turned out to be the case. As the problem revealed itself slowly, over repeated discussions with the client, I changed the model multiple times, ending up with a decomposition scheme that is technically suboptimal but practically more than sufficient for our client's needs.

Interestingly, truck routing/dispatch is among the problems for which CP has been used with success. Depending on the specific issues you face, you may wish to consider it. And if you do, the choice may impose its own platform constraints.

answered 18 Apr '11, 16:04 by SanjaySaigal

You should take a look at the related link in Erwin's blog.

answered 26 Apr '11, 13:52 by Bo Jensen ♦

Yes, Erwin's blog is very informative and often great fun to read. He points out that Python is much slower than high-speed modeling languages, which is entirely correct.

Python is a very clever language, but speed is just not one of its strongest points. Coding various tasks in Python typically takes only a fraction of the time it takes in other languages, but the resulting performance can be quite low.

So, if the project you are working on requires fast data processing, you may want to consider some other language. This may either be done in the modeling language itself or, if that is not an option, in one of the faster programming languages, such as C/C++, C#, Java, etc.

(26 Apr '11, 14:58) BjarniMax

It is nice to see the comparison of Pyomo with MPL; how about comparing the CPLEX/Gurobi Python interfaces with AMPL? I would guess the CPLEX/Gurobi Python interfaces are faster than Pyomo, but I am not sure.

(27 Apr '11, 09:38) John Cui

How about the CPLEX/Gurobi Python interfaces vs. AMPL? Do you have any insight?

(05 May '11, 22:33) John Cui

The only Python package I am aware of that provides an interface to AMPL is NLPy, which you can find here:

http://nlpy.sourceforge.net/

Its solving is handled through .nl files, so the performance there should be the same as in AMPL itself.

(06 May '11, 02:28) BjarniMax

Yes, I think the Python interfaces (which map to C directly), as opposed to a Python modeling language like Pyomo, should have the same performance as AMPL.

(07 May '11, 19:00) John Cui

I think you should also consider the modeling language AIMMS as a possible tool to handle your optimization problem. The problem size you mention is not a problem for AIMMS: Midwest ISO, the winner of the INFORMS Franz Edelman Award 2011 (http://www.informs.org/About-INFORMS/News-Room/Press-Releases/Edelman-Winner-2011), used AIMMS to solve optimization problems with up to 3 million variables/constraints.

If you want to select a suitable optimization package, it is good to look at the complete life cycle of your optimization-based application. I am not familiar with the Python-based solutions, but I think that tools like AIMMS, MPL, CPLEX, and GUROBI should have no problem solving your model. I normally ask people to look at the following four aspects of the application:

1. Building the model

What are the models that you need to build, now and in the future? In general, once you have chosen a tool and invested in it, you will also use it for other projects in the future. Do you think you will also solve MIP, NLP, MINLP, or CP models in the future? If so, does the tool support that?

2. Solving the model

How much flexibility do you need in choosing the solver? Do you want the freedom to move between different alternatives for different problems? Do you want to use a high-end commercial solver, or do you want to use open-source solvers?

3. Visualizing the results

How are you planning to communicate the results back to the end user? Will they need a graphical user interface during model development, so that they can verify that the model behaves as desired? Do they need a user interface for regular end use?

4. Deploying the application

How will you deploy the application you build? Will it be a standalone desktop application, a client-server application, or an integrated application?

Depending on your needs, different aspects may be more or less important than others, but that is something you need to decide.

AIMMS has all the aspects of a modeling language that have been mentioned in this exchange. However, I would like to point out three features that I believe can be very helpful in modeling your optimization problem:

  • Integrated development environment and GUI: this allows you to build the model, run it, inspect the results, change data, and run it again, all without leaving the tool.

  • The math program inspector: a graphical tool that allows you to inspect and analyze the model/matrix that is sent to the solver. This is useful when the model is infeasible, unbounded, or gives unexpected results.

  • A network object with an integrated GIS link: this allows you to superimpose the results of your optimization model on a map of the area, giving you better insight into and understanding of the solution.

As SanjaySaigal mentioned, CP might be a valuable approach for your problem. In that light, it might be good to know that we are currently beta testing Constraint Programming functionality (http://www.aimms.com/newsitems/ni001723), which will allow you to solve CP problems as well.

You can request a free 30 day trial license at www.aimms.com/try.

Peter Nieuwesteeg

answered 05 May '11, 17:25 by PeterN

I would think that for reducing/optimizing deadheads and dwell times you need to consider stochasticity. We have done some similar projects in the past where the difference in results between deterministic and stochastic modeling was considerable, and in the end the client used the stochastic model, with a link to their data source for auto-updating the model.

In the stochastic realm, OptQuest is the go-to solver, and it has interfaces from all major programming languages. We at Oracle Crystal Ball provide an interface from within Excel for Monte Carlo simulation, stochastic optimization (using OptQuest), and time-series forecasting, all of which can be relevant for your problem. If you need more information, please feel free to contact me.

answered 03 Apr '13, 11:58 by Samik R.

I think you should first check reviews of the optimization software and try trial versions where possible, and only then make a final decision.

answered 12 Jun '14, 01:35 by KaylnTailor, edited 12 Jun '14, 01:38
