Hi folks. I'm trying to teach myself some basics of IP and have run across a problem I can't figure out how to formulate. Consider the problem of assigning (say) six items to three people. Each item can only be assigned to one person. Each item-person pair has a utility (the utilities sum to 1 for each person), and the objective function is to maximize the sum of the utilities. I can set this up without a problem. But what if I want to add the constraint that the sums of the utilities, for each person, are not too different? That is, I don't want anyone to get left out of the assignment. How do I add that constraint? One thing I've figured out I can do is to add one constraint per person, where the sum of their assigned utilities must be at least a constant, initially 1. At first there will be no solution, but I can back off on the constant until a solution is found. The result tends to have equitable assignments. But how can I set up the problem to avoid less-equitable solutions and solve everything in one step? It seems to me that there needs to be a constant defined, which is how much utility is lost by an assignment that is less than equitable. But I don't know how to formulate that in the standard framework. Can someone help? I'm using Symphony, if it matters. And this is not homework. :)

asked 28 Apr '11, 15:00 Harlan
Consider the following model, in which \(x_{ij}\) is the (binary) assignment of item \(i\) to person \(j\), \(u_{ij}\) is the utility of item \(i\) to person \(j\), \(\overline{d}\) and \(\underline{d}\) are respectively the maximum and minimum utilities of any individual's "payload" (both nonnegative divisible variables), and \(\lambda>0\) is a parameter reflecting the tradeoff between aggregate utility and parity:

\begin{array}{lll}
\max & \sum_{i,j}u_{ij}x_{ij}-\lambda(\overline{d}-\underline{d})\\
\textrm{s.t.} & \sum_{j}x_{ij}\le 1 & \forall i\\
 & \sum_{i}u_{ij}x_{ij}\le\overline{d} & \forall j\\
 & \sum_{i}u_{ij}x_{ij}\ge\underline{d} & \forall j
\end{array}

The \(\lambda\) parameter may need a little trial and error to tune. There are other possible approaches, but you are dealing with a bicriterion model (utility and parity), and there is no ducking the question of what you're willing to trade for what.

answered 28 Apr '11, 16:52 Paul Rubin ♦♦ fbahr ♦

So the \(\overline{d}\) and \(\underline{d}\) are variables to be solved for, like the \(x_{ij}\)? I think I can see how that would work! Will see if I can formulate this solution and report back...
(28 Apr '11, 18:40)
Harlan
Yes, they're variables (divisible, not integer).
(29 Apr '11, 10:47)
Paul Rubin ♦♦
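As an aside for readers following along: Paul's model above can be sketched in a few lines of Python. This is an illustrative brute-force enumeration with a made-up utility matrix `U` (each person's column sums to 1, as in the question), standing in for an actual IP solver; with only 3^6 = 729 possible assignments, exhaustive search finds the same optimum a MIP solver would.

```python
import itertools

# Hypothetical utilities: rows = 6 items, columns = 3 people.
# Each person's column sums to 1, as in the original question.
U = [
    [0.30, 0.10, 0.05],
    [0.25, 0.10, 0.05],
    [0.20, 0.20, 0.10],
    [0.15, 0.20, 0.20],
    [0.05, 0.20, 0.25],
    [0.05, 0.20, 0.35],
]
LAM = 2.0  # lambda: trade-off weight between total utility and parity

def payload(assign, person):
    """Sum of the utilities `person` receives under an assignment tuple."""
    return sum(U[i][person] for i, p in enumerate(assign) if p == person)

best_val, best_assign = float("-inf"), None
# Enumerate all 3^6 = 729 ways to give each item to one of the 3 people.
for assign in itertools.product(range(3), repeat=6):
    loads = [payload(assign, p) for p in range(3)]
    # Paul's objective: total utility minus lambda * (max - min payload).
    val = sum(loads) - LAM * (max(loads) - min(loads))
    if val > best_val:
        best_val, best_assign = val, assign

print(best_assign, best_val)
```

Raising `LAM` trades total utility for a smaller spread between the largest and smallest payloads, which is exactly the tuning Paul mentions.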
This works well, and I really appreciate your very clear answer and explanation!
(03 May '11, 13:55)
Harlan
1
Here's my blog post that uses this solution! Thanks! http://www.harlan.harris.name/2011/05/optimizing-meat-shares-details/
(09 May '11, 14:42)
Harlan
That's great Paul, thanks!
(18 Aug '11, 21:01)
tdhopper

Why not maximize a new variable MinUtil? Add a constraint for each person p so that MinUtil <= Util[p].

answered 28 Apr '11, 16:43 SanjaySaigal

2
This is a valid option, with two cautions: it masks the tradeoff between overall utility and the minimum utility of any payload; and the solution with maximal MinUtil may not yield the tightest range of payload utilities (max - min).
(28 Apr '11, 17:01)
Paul Rubin ♦♦
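For illustration, Sanjay's max-min idea can be sketched with the same hypothetical utilities and brute force in place of an IP formulation; MinUtil corresponds here to the smallest payload of an assignment, and we pick the assignment that maximizes it.

```python
import itertools

# Hypothetical utilities (6 items x 3 people); each column sums to 1.
U = [
    [0.30, 0.10, 0.05],
    [0.25, 0.10, 0.05],
    [0.20, 0.20, 0.10],
    [0.15, 0.20, 0.20],
    [0.05, 0.20, 0.25],
    [0.05, 0.20, 0.35],
]

def payloads(assign):
    """Per-person sums of received utilities for one assignment tuple."""
    loads = [0.0, 0.0, 0.0]
    for item, person in enumerate(assign):
        loads[person] += U[item][person]
    return loads

# Maximize MinUtil = the smallest individual payload.
best = max(itertools.product(range(3), repeat=6),
           key=lambda a: min(payloads(a)))
print(best, min(payloads(best)))
```

As Paul cautions, this says nothing about total utility or about the max - min range, only about the worst-off person.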

Don't think you will like my suggestion, but here it goes. I would solve the problem in a two-stage model, because it by nature is such. You could formulate a one-stage heuristic, but I would rather have the model follow the actual problem you try to capture. Here's one possible method:

1. Solve the simple assignment problem, maximizing total utility, and record the optimal objective value.
2. Re-solve with a max-min objective (maximize the lowest individual utility), adding the original objective as a constraint keeping total utility close to the stage-one optimum.

Notice that using max-min you maximize the utility of the person with the lowest utility, i.e., as a heuristic, forcing dominant persons to give away utility. Since you have the objective from the simple assignment problem as a constraint, you can still control the trade-off in the process.

answered 28 Apr '11, 17:23 Bo Jensen ♦

@Bo: Just to clarify, when you say put the original objective in as a constraint, you mean total_utility_sum >= optimal_sum - epsilon or total_utility_sum >= (1 - epsilon)*optimal_sum, right?
(28 Apr '11, 17:56)
Paul Rubin ♦♦
@Paul, yes correct, though it might be more than "epsilon", depending on the user :)
(28 Apr '11, 17:59)
Bo Jensen ♦
I actually do sorta like your solution! It's certainly more justifiable than the backoff I was doing before. Will ponder it.
(28 Apr '11, 18:30)
Harlan
1
Should you need input on how to formulate max-min, then it's quite similar to formulating the L-infinity norm, as touched on in my old slides http://slidesha.re/kgQzNc.
(28 Apr '11, 18:42)
Bo Jensen ♦
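The two-stage idea can be sketched as follows; this is illustrative Python with made-up utilities, brute force standing in for two solver calls, and `EPS` playing the role of the epsilon in the comments above.

```python
import itertools

# Hypothetical utilities (6 items x 3 people); each column sums to 1.
U = [
    [0.30, 0.10, 0.05],
    [0.25, 0.10, 0.05],
    [0.20, 0.20, 0.10],
    [0.15, 0.20, 0.20],
    [0.05, 0.20, 0.25],
    [0.05, 0.20, 0.35],
]
EPS = 0.05  # fraction of total utility we are willing to give up for parity

def payloads(assign):
    """Per-person sums of received utilities for one assignment tuple."""
    loads = [0.0, 0.0, 0.0]
    for item, person in enumerate(assign):
        loads[person] += U[item][person]
    return loads

all_assigns = list(itertools.product(range(3), repeat=6))

# Stage 1: the plain assignment problem -- maximize total utility.
opt_total = max(sum(payloads(a)) for a in all_assigns)

# Stage 2: maximize the minimum payload, with the stage-1 objective
# added as a constraint: total utility >= (1 - EPS) * optimum.
feasible = [a for a in all_assigns
            if sum(payloads(a)) >= (1 - EPS) * opt_total]
best = max(feasible, key=lambda a: min(payloads(a)))
print(opt_total, best, payloads(best))
```

With `EPS = 0` this just breaks ties among total-utility-optimal assignments in favor of the worst-off person; larger `EPS` buys more parity at the cost of total utility.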

To do a bit of summarizing, and without prejudice to whether parity is measured by minimum individual utility or the spread in individual utilities, there are three (among more than three) options emerging here: maximize total utility with a lower bound on parity; maximize parity with a lower bound on total utility; or smash the two criteria together with "Archimedean" weights (my answer). Other options include Goal Programming (non-Archimedean weights), interactive multicriterion optimization, and paralysis by analysis. If you're willing to do several optimizations while looping over a parameter, any of those first three methods can be used to generate a Pareto frontier that will give you an explicit representation of utility-parity tradeoffs. Don't know if that's of interest, or if you need an automated no-human-judgment solution.

answered 28 Apr '11, 18:04 Paul Rubin ♦♦

I think he, like most people, is looking for the "smash-one-stage" method, i.e. your answer :)
(28 Apr '11, 18:09)
Bo Jensen ♦
Hulk Smash is indeed what I was thinking of! Which doesn't mean it's what I should have been thinking of...
(28 Apr '11, 18:32)
Harlan
Actually, I need to back off the Pareto frontier comment a bit. The side constraints relating to either parity or utility (depending on which way it's modeled) screw up the integrality condition, so sensitivity analysis gets tricky.
(29 Apr '11, 10:50)
Paul Rubin ♦♦
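To make the frontier idea concrete, here is a hypothetical sketch: sweep \(\lambda\) in the weighted ("smash") model and record each optimum's total utility and payload spread; the resulting points trace out the utility-parity trade-off. Utilities are made up, and brute force stands in for repeated solver calls.

```python
import itertools

# Hypothetical utilities (6 items x 3 people); each column sums to 1.
U = [
    [0.30, 0.10, 0.05],
    [0.25, 0.10, 0.05],
    [0.20, 0.20, 0.10],
    [0.15, 0.20, 0.20],
    [0.05, 0.20, 0.25],
    [0.05, 0.20, 0.35],
]

def payloads(assign):
    """Per-person sums of received utilities for one assignment tuple."""
    loads = [0.0, 0.0, 0.0]
    for item, person in enumerate(assign):
        loads[person] += U[item][person]
    return loads

all_assigns = list(itertools.product(range(3), repeat=6))

frontier = {}
for lam in (0.0, 0.5, 1.0, 2.0, 5.0, 10.0):
    # One weighted optimization per lambda: total utility - lam * spread.
    best = max(all_assigns,
               key=lambda a: sum(payloads(a))
                             - lam * (max(payloads(a)) - min(payloads(a))))
    loads = payloads(best)
    frontier[lam] = (sum(loads), max(loads) - min(loads))

for lam, (total, spread) in sorted(frontier.items()):
    print(lam, round(total, 4), round(spread, 4))
```

As \(\lambda\) grows, total utility and spread are both non-increasing, so the recorded points march along the frontier from "greedy" to "equitable".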

If anyone would like to see a slightly-more-complex example, which I'm eventually going to blog about, the R code that sets up the data and calls Symphony is here: https://github.com/HarlanH/groupmeatdemo/blob/master/subset_test.R
I'm interpreting "less-equitable" to mean too much spread between highest and lowest payload utilities. I think Sanjay and Bo are interpreting it to mean lowest payload utility is too small (which is not the same). You might want to clarify how you see it.
@Paul, I agree it's unclear. If you consider the case with too much spread, you can formulate it as a combination between max-min and min-max.
Thanks all for the thoughts on this! Yes, my intention was as Paul said, to minimize the max-min spread, or maybe the variance or the maximum difference from the mean.