(Originally posted at forum.openopt.org/viewtopic.php?id=654)

I'm using OpenOpt's Travelling Salesman Problem (TSP) function, and I'm finding that OpenOpt returns different results for the same inputs. More than that, the results returned are 'reliably' different. I've boiled an example down to creating two identical input graphs, running both through TSP, and printing the results. The two sets of results are always different. I would like the results to always be the same for the same inputs; can anyone help?

Here is my sample Python code and the results (G and H have exactly the same inputs):

import networkx as nx
from openopt import *

G = nx.Graph()
H = nx.Graph()

G.add_edges_from([(0, 1, {'dur': 2322}), (0, 2, {'dur': 1928}), \
(0, 3, {'dur': 2154}), (0, 4, {'dur': 1948}), (0, 5, {'dur': 1404}), \
(0, 6, {'dur': 1257}), (0, 7, {'dur': 2263}), (0, 8, {'dur': 2120}), \
(0, 9, {'dur': 2551}), (0, 10, {'dur': 2263}), (1, 5, {'dur': 2424}), \
(1, 6, {'dur': 3162}), (1, 8, {'dur': 1755}), (1, 9, {'dur': 2849}), \
(2, 3, {'dur': 930}), (2, 4, {'dur': 3150}), (2, 6, {'dur': 2938}), \
(2, 7, {'dur': 2250}), (2, 10, {'dur': 1338}), (3, 7, {'dur': 2847}), \
(3, 10, {'dur': 1188}), (4, 5, {'dur': 2488}), (4, 6, {'dur': 1184}), \
(4, 7, {'dur': 1253}), (5, 6, {'dur': 1421}), (5, 7, {'dur': 3173}), \
(5, 8, {'dur': 1780}), (5, 9, {'dur': 1940}), (6, 7, {'dur': 1892}), \
(6, 8, {'dur': 2585}), (6, 9, {'dur': 1736}), (7, 10, {'dur': 2694}), \
(8, 9, {'dur': 1879})])

H.add_edges_from([(0, 1, {'dur': 2322}), (0, 2, {'dur': 1928}), \
(0, 3, {'dur': 2154}), (0, 4, {'dur': 1948}), (0, 5, {'dur': 1404}), \
(0, 6, {'dur': 1257}), (0, 7, {'dur': 2263}), (0, 8, {'dur': 2120}), \
(0, 9, {'dur': 2551}), (0, 10, {'dur': 2263}), (1, 5, {'dur': 2424}), \
(1, 6, {'dur': 3162}), (1, 8, {'dur': 1755}), (1, 9, {'dur': 2849}), \
(2, 3, {'dur': 930}), (2, 4, {'dur': 3150}), (2, 6, {'dur': 2938}), \
(2, 7, {'dur': 2250}), (2, 10, {'dur': 1338}), (3, 7, {'dur': 2847}), \
(3, 10, {'dur': 1188}), (4, 5, {'dur': 2488}), (4, 6, {'dur': 1184}), \
(4, 7, {'dur': 1253}), (5, 6, {'dur': 1421}), (5, 7, {'dur': 3173}), \
(5, 8, {'dur': 1780}), (5, 9, {'dur': 1940}), (6, 7, {'dur': 1892}), \
(6, 8, {'dur': 2585}), (6, 9, {'dur': 1736}), (7, 10, {'dur': 2694}), \
(8, 9, {'dur': 1879})])

result1 = TSP(G, objective='dur', start=0).solve('glpk').nodes
result2 = TSP(H, objective='dur', start=0).solve('glpk').nodes

print(result1)
print(result2)


The results are always:

...
INTEGER OPTIMAL SOLUTION FOUND
    1  1.827e+04            -100.00 
istop: 1000 (optimal)
Solver:   Time Elapsed = 0.18     CPU Time Elapsed = 0.08
objFunValue: 18266 (feasible, MaxResidual = 0)
...
INTEGER OPTIMAL SOLUTION FOUND
    1  2.492e+04            -100.00 
istop: 1000 (optimal)
Solver:   Time Elapsed = 4.1     CPU Time Elapsed = 3.53
objFunValue: 24916 (feasible, MaxResidual = 0)
[0, 10, 3, 2, 7, 4, 6, 9, 8, 1, 5, 0]
[0, 1, 9, 6, 8, 5, 7, 3, 10, 2, 4, 0]

I don't see why the two results aren't the same. Any ideas? Many thanks.

asked 31 Jan '13, 04:06 by yorkshirespud

closed 31 Jan '13, 08:12 by fbahr ♦

Question tagged as "closed" since the reported behavior has been confirmed as a bug: forum.openopt.org/viewtopic.php?pid=1888#p1888

(31 Jan '13, 08:13) fbahr ♦

The OpenOpt developer Dmitrey has confirmed this behaviour as a bug caused by the FuncDesigner dependency keeping state. He has provided a workaround, and I have also tested a successful workaround that runs the TSP function inside a fresh, throwaway forked child process; a sketch of that approach follows below.

(31 Jan '13, 18:11) yorkshirespud
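
A minimal sketch of that kind of child-process workaround (not the exact code tested above; run_tsp is an illustrative helper name, and G and H are the graphs built in the question):

from multiprocessing import Pool

def run_tsp(graph):
    # Illustrative helper. Importing inside the child keeps the
    # OpenOpt/FuncDesigner modules (and any state they hold) local
    # to that process, so it is thrown away when the child exits.
    from openopt import TSP
    return TSP(graph, objective='dur', start=0).solve('glpk').nodes

if __name__ == '__main__':
    # maxtasksperchild=1 forces a brand-new child process for every call.
    with Pool(processes=1, maxtasksperchild=1) as pool:
        result1 = pool.apply(run_tsp, (G,))
        result2 = pool.apply(run_tsp, (H,))
    print(result1)
    print(result2)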

The question has been closed for the following reason "Other" by fbahr 31 Jan '13, 08:12


It is actually quite common for a solver implementation to return different results on every run.

There are several causes for this, but they all boil down to the same thing: each one slightly changes the path the solver takes through the search space, which can send it in a totally different direction.

Some things that cause such a slight change:

  • Lack of a single Random instance used throughout the implementation. Normally, every solver implementation should use one: most optimization algorithms use random numbers to break ties, and some use them even more intensively (see the sketch after this list).
  • Not seeding that Random instance. For example, in my implementation I automatically seed the Random with a fixed number if the environmentMode is REPRODUCIBLE.
  • CPU-affected termination in combination with termination-based algorithms. For example, my Simulated Annealing implementation will, unless told otherwise, auto-configure itself based on the remaining time in every iteration. The CPU speed (and other processes running on the OS) affects that remaining time and can cause a difference.
  • Multi-threading optimizations. Reproducibility in a multi-threaded solver is possible, but not all optimizations can be enabled (such as work-stealing queues, ...).
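
To illustrate the first two points with a generic sketch in Python (illustrative only, not OpenOpt's or any real solver's API): the solver owns exactly one Random instance, seeds it once, and uses it for every tie-break, so two runs with the same seed make identical decisions.

import random

class LocalSearchSolver:
    def __init__(self, seed=0):
        # One Random instance for the whole solver. Seeding it fixes
        # every tie-break decision, which makes runs reproducible.
        self.rng = random.Random(seed)

    def pick_move(self, moves, score):
        # Keep only the best-scoring moves, then break the tie with the
        # shared, seeded RNG instead of list order or an unseeded global.
        best = max(score(m) for m in moves)
        ties = [m for m in moves if score(m) == best]
        return self.rng.choice(ties)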

In general, reproducibility comes at a small cost in performance. During development you want reproducibility; in production it depends on the business. So look for an easy way to turn it on and off.
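
One way such a switch could look in Python (names are illustrative, not from any particular library): a single flag decides whether the solver's RNG gets a fixed seed or an OS-provided one.

import random

def make_rng(reproducible=True, seed=0):
    # reproducible=True  -> fixed seed, identical runs (development/debugging).
    # reproducible=False -> Python seeds from OS entropy, runs will differ.
    return random.Random(seed) if reproducible else random.Random()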

answered 31 Jan '13, 05:22 by Geoffrey De Smet ♦

Getting "different solutions" wouldn't be much a problem ...if the solver didn't report both of them as opt - yet having vast different outcomes as a result (objFunValue 18266 vs. 24916).

(31 Jan '13, 05:45) fbahr ♦

Didn't notice it said "optimal". That first run does indeed disprove that the second run is optimal.

(31 Jan '13, 06:31) Geoffrey De ... ♦
