# What is CPLEX's performance by version?

In 2002, Bixby wrote a wonderful paper ("Solving Real-World Linear Programs: A Decade and More of Progress") that looked at CPLEX's development up through version 7. It included a table comparing the speed of CPLEX 1 with CPLEX 7 on constant hardware, showing the improvement in solution time due to software advances alone. Can anyone compare CPLEX 7 with CPLEX 12: how has linear programming software improved since then?

*asked 30 Jul '12 by Michael Trick*

The slides are not public, but I attach the "relevant part" here.

*answered 30 Jul '12 by Marco Luebbecke*

Comments:

- "OK, thanks for the link. BTW: another good reason for a slide share for conference presentations. I mean, if you give a talk at a public (paid) event, you probably want to share it anyway." (30 Jul '12, Bo Jensen)
- "Perfect! Thanks!" (30 Jul '12, Michael Trick)
- "I used these slides in a course recently, so I had these pages already at hand... ;-)" (30 Jul '12, Marco Luebbecke)
- "Very interesting statistics. How many datasets did they use? Did they use the same overall dataset(s) to calculate the difference between each pair of sequential versions? ... (This comment has been edited by myself to remove an unnecessary rant, which ended with the question that triggered Bo's response.) In my opinion, showing statistics like this anywhere should be accompanied by information on how to reproduce them empirically." (31 Jul '12, Geoffrey De ...)
- "@Geoffrey So every version of CPLEX and Gurobi should be available, along with some of the 10K+ data sets (which have been collected from customers under promises not to share them externally)... Seriously? I also think these numbers sound high, but I have no reason not to trust Bixby; all the papers I have read by him have been very objective, especially when it comes to numerical results (which cannot be said of a bunch of other papers). I don't think the exact number is very interesting; all we need is a ballpark figure." (31 Jul '12, Bo Jensen)
- "@Bo Good point; asking for reproducibility with NDA-protected data is irrational. In such cases, it would be nice if the methodology used were clearly described." (31 Jul '12, Geoffrey De ...)
- "@Geoffrey: According to these slides, Gurobi uses both internal and public test banks to test and benchmark its releases. For the latter, you'll find detailed results here." (31 Jul '12, fbahr)
- "@fbahr That's a nice example of how Gurobi shows CPLEX how it's done properly: they link their public test data and clearly explain their methodology." (31 Jul '12, Geoffrey De ...)
Actually, they have charts on their website: continuous CPLEX Optimizer performance improvements since 2002.

- ILOG CPLEX 12.4 (2011): 15% overall, 1.4X on 1,000 seconds and up
- ILOG CPLEX 12.3 (2011): 20% overall, 2X on 1,000 seconds and up
- ILOG CPLEX 12.2 (2010): 50% overall, 2.7X on 1,000 seconds and up
- ILOG CPLEX 12.1 (2009): 30% overall, 2X on 1,000 seconds and up
- ILOG CPLEX 11 (2007): 15% under one minute, 3X on 1-60 minutes, 10X on one hour and up
- ILOG CPLEX 10 (2006): 35% overall, 70% on "particularly difficult models"
- ILOG CPLEX 9 (2003): 50% on "difficult customer models"
- ILOG CPLEX 8 (2002): 40% overall, 70% on "difficult problems"
- ILOG CPLEX 7 (2000): 60% on "hard mixed integer problems"

You might also find the following (by Bixby) useful:

*answered 31 Jul '12 by yeesian*

Comments:

- "Are these papers publicly and freely available? Otherwise we should probably not link to them." (31 Jul '12, Bo Jensen)
- "Seems to have been public for some time: see Tim Hopper's tweet." (31 Jul '12, yeesian)
- "Slides are most likely OK; with published papers you should probably be more careful, but I don't know for this paper." (31 Jul '12, Bo Jensen)
- "I've updated the paper to link to CiteSeer, which also allows you to download it." (31 Jul '12, yeesian)
One thing worth explaining is that performance measurement depends critically on the set of models you use. In the charts available on the IBM site you can see that the performance improvement is much larger for models that take more than 100 seconds to solve than for easier models. You will also notice differences between the speedups we report using our internal test suite and what can be seen using public benchmarks such as those on Mittelmann's page.

*answered 08 Aug '12 by jfpuget*
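Public benchmark pages such as Mittelmann's typically aggregate solve times with a shifted geometric mean rather than a plain average, which is one reason their numbers can differ from a vendor's internal figures. A minimal sketch of that aggregate (the times and the shift value of 10 below are illustrative assumptions, not IBM's or Mittelmann's actual data or parameters):

```python
import math

def shifted_geomean(times, shift=10.0):
    """Shifted geometric mean of solve times (seconds). The shift damps
    the influence of very easy instances, which would otherwise dominate
    a plain geometric mean of ratios."""
    n = len(times)
    return math.exp(sum(math.log(t + shift) for t in times) / n) - shift

# Made-up times for two hypothetical solver versions on the same four models:
old = [1.0, 50.0, 800.0, 3600.0]
new = [0.5, 20.0, 200.0, 900.0]

speedup = shifted_geomean(old) / shifted_geomean(new)
print(f"shifted-geomean speedup: {speedup:.2f}x")
```

Note that the choice of shift changes the result: a large shift emphasizes the hard instances, a shift of zero reduces to the plain geometric mean.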
We have an update with 12.5 using our own (IBM/ILOG) test suite, which makes the comparison safer. As said elsewhere, one should try on one's own models and not rely on generic statements.

- ILOG CPLEX 12.5 (2012): 18% overall, 1.61X on 1,000 seconds and up
- ILOG CPLEX 12.4 (2011): 15% overall, 1.4X on 1,000 seconds and up
- ILOG CPLEX 12.3 (2011): 20% overall, 2X on 1,000 seconds and up
- ILOG CPLEX 12.2 (2010): 50% overall, 2.7X on 1,000 seconds and up
- ILOG CPLEX 12.1 (2009): 30% overall, 2X on 1,000 seconds and up
- ILOG CPLEX 11 (2007): 15% under one minute, 3X on 1-60 minutes, 10X on one hour and up
- ILOG CPLEX 10 (2006): 35% overall, 70% on "particularly difficult models"
- ILOG CPLEX 9 (2003): 50% on "difficult customer models"
- ILOG CPLEX 8 (2002): 40% overall, 70% on "difficult problems"
- ILOG CPLEX 7 (2000): 60% on "hard mixed integer problems"

*answered 14 Nov '12 by jfpuget*
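As a rough sanity check on how these per-release figures compound, the reported speedup factors for models taking 1,000 seconds and up can be multiplied together. A small sketch using the "X" factors from the list above (this naive compounding assumes a fixed model set, which, as noted elsewhere in this thread, is not how the individual figures were actually measured):

```python
# Reported ">= 1,000 seconds" speedup factors from the list above,
# one per release from 12.1 through 12.5.
factors = {
    "12.1": 2.0,
    "12.2": 2.7,
    "12.3": 2.0,
    "12.4": 1.4,
    "12.5": 1.61,
}

cumulative = 1.0
for version, speedup in factors.items():
    cumulative *= speedup
    print(f"after {version}: {cumulative:5.2f}x")

# Naively compounded, this suggests roughly a 24x speedup on hard
# models across the 12.x series.
```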
Today I gave a lecture on "experimental analysis of algorithms" in my course "computational mixed integer programming", and we extensively discussed, e.g., Johnson's paper and also the MIPLIB 2010 paper, which very much support @jfpuget's comment that "it also very much depends on the benchmark set." We checked the "big" solvers' websites and their reported performances, and -- tatatataaaa -- they are all winners! ;-)

*answered 14 Nov '12 by Marco Luebbecke*

Comments:

- "What else would you expect from marketing folks? This is not peculiar to the MP solver market; just pick any market. I doubt you'll find a single vendor that doesn't claim its products are the best. ;)" (14 Nov '12, jfpuget)
- "Well, in a certain sense, all these products are the best :-) It was a very good scientific exercise to put such results in perspective. BTW, it's incredible how you obtain these speedups... for me, this is over and over again proof enough that one should definitely not give up on optimality by default." (14 Nov '12, Marco Luebbecke)
- "At the INFORMS Annual Meeting I overheard someone saying they had plotted the Mittelmann benchmarks as performance profiles, and for MIP the three big solvers were more or less the same. How to present benchmarks is still not an exact science." (14 Nov '12, Bo Jensen)
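The performance profiles mentioned in the last comment (in the sense of Dolan and Moré) are easy to compute: for each solver, plot the fraction of instances it solves within a factor tau of the fastest solver on that instance. A minimal sketch with made-up times for three hypothetical solvers (the names and numbers are illustrative, not actual Mittelmann results):

```python
# Minimal Dolan-More performance profile: for each solver, the fraction
# of instances whose solve time is within a factor tau of the best
# solver's time on that instance. The times below are made up.
times = {
    "solver_A": [10.0, 5.0, 100.0, 2.0],
    "solver_B": [12.0, 4.0, 80.0, 3.0],
    "solver_C": [9.0, 20.0, 120.0, 2.5],
}

n_instances = len(next(iter(times.values())))
best = [min(t[i] for t in times.values()) for i in range(n_instances)]

def profile(solver, tau):
    """Fraction of instances where `solver` is within factor tau of the best."""
    ratios = [times[solver][i] / best[i] for i in range(n_instances)]
    return sum(r <= tau for r in ratios) / n_instances

for s in times:
    print(s, [round(profile(s, tau), 2) for tau in (1.0, 1.5, 2.0)])
```

Plotting profile(s, tau) over a range of tau gives one curve per solver; curves that hug the top-left are better, which makes "more or less the same" a visual judgment rather than a single headline number.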
I've uploaded a graph showing the performance increase from CPLEX 6.0 to 12.5 here. It was presented at the INFORMS Annual Meeting in October 2012. We use more than 3,000 models here. Blue bars show the number of those that do not solve within 10,000 seconds; model categories are the ones taking more than xx seconds to solve with 12.5; red lines show the speed increase. Note that CPLEX 12.5 is now available, should you want to check whether performance increases on your specific models!

*answered 05 Dec '12 by jfpuget*