Which linear programming package should I use for high numbers of constraints and "warm starts" [closed]

We don’t allow questions seeking recommendations for books, tools, software libraries, and more. You can edit the question so it can be answered with facts and citations.

Closed 7 years ago.

I have a "continuous" linear programming problem that involves maximizing a linear function over a curved convex space. In typical LP problems, the convex space is a polytope, but in this case the convex space is piecewise curved -- that is, it has faces, edges, and vertices, but the edges aren't straight and the faces aren't flat. Instead of being specified by a finite number of linear inequalities, I have a continuously infinite number. I'm currently dealing with this by approximating the surface by a polytope, which means discretizing the continuously infinite family of constraints into a very large finite number of constraints.

I'm also in a situation where I'd like to know how the answer changes under small perturbations to the underlying problem. Thus, I'd like to be able to supply an initial condition to the solver based on a nearby solution. I believe this capability is called a "warm start."

Can someone help me distinguish between the various LP packages out there? I'm not so concerned with user-friendliness as with speed (for large numbers of constraints), high-precision arithmetic, and warm starts.

Thanks!

EDIT: Judging from the conversation with answerers so far, I should be clearer about the problem I'm trying to solve. A simplified version is the following:

I have N fixed functions f_i(y) of a single real variable y. I want to find x_i (i=1,...,N) that minimize sum_{i=1}^N x_i f_i(0), subject to the constraints:

  • sum_{i=1}^N x_i f_i(1) = 1, and
  • sum_{i=1}^N x_i f_i(y) >= 0 for all y >= 2

More succinctly, if we define the function F(y) = sum_{i=1}^N x_i f_i(y), then I want to minimize F(0) subject to the conditions that F(1) = 1 and F(y) is nonnegative on the entire interval [2, infinity). Note that this latter positivity condition is really an infinite number of linear constraints on the x_i's, one for each y. You can think of y as a label -- it is not an optimization variable. A specific y_0 restricts me to the half-space F(y_0) >= 0 in the space of x_i's. As I vary y_0 between 2 and infinity, these half-spaces change continuously, carving out a curved convex shape. The geometry of this shape depends implicitly (and in a complicated way) on the functions f_i.
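For concreteness, the discretization approach I'm currently using can be sketched as below. Everything specific here is invented for illustration: the f_i are made-up decaying exponentials, the grid and the cutoff y_max = 50 (standing in for infinity) are arbitrary, I've boxed the variables just to keep this toy instance bounded, and scipy.optimize.linprog stands in for a serious LP solver.

```python
import numpy as np
from scipy.optimize import linprog

# Made-up basis functions f_i(y); the real problem supplies its own.
fs = [lambda y: np.exp(-y), lambda y: np.exp(-2 * y), lambda y: np.exp(-3 * y)]
N = len(fs)

def F_row(y):
    """Coefficient row [f_1(y), ..., f_N(y)] of the linear form F(y)."""
    return np.array([f(y) for f in fs])

c = F_row(0.0)                          # objective: minimize F(0)
A_eq = F_row(1.0).reshape(1, -1)        # equality: F(1) = 1
b_eq = [1.0]

# Discretize the infinite family F(y) >= 0 on [2, y_max].
# linprog wants A_ub @ x <= b_ub, so negate the rows.
y_grid = np.linspace(2.0, 50.0, 1000)   # y_max = 50 stands in for infinity
A_ub = -np.vstack([F_row(y) for y in y_grid])
b_ub = np.zeros(len(y_grid))

# The box on x is only there to keep this toy instance bounded.
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(-10, 10)] * N)
x = res.x
```

The obvious cost is that the number of rows in A_ub grows with the fineness of the grid, which is exactly why solver speed for large numbers of constraints matters to me.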

Best answer
  • As for LP solver recommendations, two of the best are Gurobi and CPLEX (google them). They are free for academic users, and are capable of solving large-scale problems. I believe they have all the capabilities that you need. You can get sensitivity information (to a perturbation) from the shadow prices (i.e. the Lagrange multipliers).

  • But I'm more interested in your original problem. As I understand it, it looks like this:

Let S = {1,2,...,N} where N is the total number of functions. y is a scalar. f_{i}:R^{1} -> R^{1}.

minimize sum{i in S} (x_{i} * f_{i}(0))
   x_{i}
s.t.
(1) sum {i in S} x_{i} * f_{i}(1) = 1
(2) sum {i in S} x_{i} * f_{i}(y) >= 0 for all y in (2,inf]
  • It just seems to me that you might want to try solving this problem as a convex NLP rather than an LP. Large-scale interior-point NLP solvers like IPOPT should be able to handle these problems easily. I strongly recommend trying IPOPT: http://www.coin-or.org/Ipopt

  • From a numerical point of view: for convex problems, warm-starting is not necessary with interior-point solvers, and you don't have to worry about the combinatorial cycling of active sets. What you've described as "warm-starting" is actually perturbing the solution -- that's more akin to sensitivity analysis. In optimization parlance, warm-starting usually means supplying a solver with an initial guess -- the solver will take that guess and end up at the same solution, which isn't really what you want. The only exception is if the active set changes with a different initial guess -- but for a convex problem with a unique optimum, this cannot happen.

If you need any more information, I'd be pleased to supply it.

EDIT:

Sorry about the non-standard notation -- I wish I could type in LaTeX like on MathOverflow.net. (Incidentally, you might try posting this there -- I think the mathematicians there would be interested in this problem)

Ah, now I see about the "y > 2". It isn't really an optimization constraint so much as an interval defining a space (I've edited my description above). My mistake. I'm wondering if you could somehow transform/project the problem from an infinite one to a finite one? I can't think of anything right now, but I'm just wondering if that's possible.

So your approach is to discretize the problem for y in (2, inf]. I'm guessing you're choosing a very big number to represent inf and a fine discretization grid. Oooo, tricky. I suppose discretization is probably your best bet. Maybe people who do real analysis have ideas.

I've seen something similar done for problems involving Lyapunov functions, where it was necessary to enforce a property at every point within a convex hull. But that space was finite.

Other answers

I encountered a similar problem. I searched the web and just found that this kind of problem may be classified as a "semi-infinite" program. MATLAB has tools to solve this kind of problem (the function "fseminf"). But I haven't checked this in detail. Surely other people have encountered this kind of question.
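If a fixed fine grid gets expensive, one standard scheme for semi-infinite programs is the exchange (cutting-plane) method: solve the LP over a small working set of y values, locate the most violated constraint, add it, and repeat. A rough sketch, with everything concrete invented for illustration (the f_i, the cutoff y = 50 standing in for infinity, and a variable box to keep the toy instance bounded); note the inner search only finds a local minimizer of F, so a real implementation would need a more careful global search:

```python
import numpy as np
from scipy.optimize import linprog, minimize_scalar

# Made-up f_i(y); the real problem would supply its own functions.
fs = [lambda y: np.exp(-y), lambda y: np.exp(-2 * y), lambda y: np.exp(-3 * y)]
row = lambda y: np.array([f(y) for f in fs])

c = row(0.0)                                   # minimize F(0)
A_eq, b_eq = row(1.0).reshape(1, -1), [1.0]    # F(1) = 1
bounds = [(-10, 10)] * len(fs)                 # box only to keep the toy bounded
ys = [2.0, 10.0]                               # initial working set of y points

for _ in range(50):
    A_ub = -np.vstack([row(y) for y in ys])    # encode F(y) >= 0 at each y in ys
    res = linprog(c, A_ub=A_ub, b_ub=np.zeros(len(ys)),
                  A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    x = res.x
    # Separation subproblem: find the y in [2, 50] where F(y) is smallest.
    worst = minimize_scalar(lambda y: row(y) @ x, bounds=(2.0, 50.0),
                            method='bounded')
    if worst.fun >= -1e-8:
        break                                  # F >= 0 holds (up to tolerance)
    ys.append(worst.x)                         # cut off the current violation
```

The appeal over a fixed grid is that the working set stays small, concentrating constraints only where they bind.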

You shouldn't be using an LP solver and doing the discretization yourself. You can do much better by using a decent general convex solver. Check out, for example, cvxopt. It can handle a wide variety of different functions in your constraints, or allow you to write your own. This will be far better than attempting to do the linearization yourself.

As for warm starts, they make more sense for an LP than for a general convex program. While a warm start could potentially be useful if you hand-code the entire algorithm yourself, you typically still need several Newton steps anyway, so the gains aren't that significant. Most of the benefit of warm starts comes in things like active-set methods, which are mostly only used for LP.




