On Scenario Aggregation to Approximate Robust Optimization Problems

We just uploaded a new preprint on approximation algorithms for min-max and min-max regret optimisation problems with discrete uncertainty sets.

For these kinds of problems, there is a very simple method to find an approximation: if you are given K scenarios, take their average and solve the resulting (much easier) nominal problem. This gives a K-approximation for both min-max and min-max regret problems.
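To make this concrete, here is a minimal Python sketch of the method. The function solve_nominal is a hypothetical placeholder (not part of the paper) for whatever algorithm solves the underlying deterministic problem for a single cost vector, e.g. a shortest path solver.

```python
# Minimal sketch of the midpoint method. solve_nominal is a hypothetical
# placeholder for an algorithm that solves the underlying deterministic
# problem for a single cost vector.

def midpoint_method(scenarios, solve_nominal):
    """Average the K scenario cost vectors, then solve one nominal problem.

    scenarios: list of K cost vectors, all of length n.
    The returned solution is a K-approximation for both the min-max
    and the min-max regret problem over these scenarios.
    """
    K = len(scenarios)
    n = len(scenarios[0])
    midpoint = [sum(s[i] for s in scenarios) / K for i in range(n)]
    return solve_nominal(midpoint)
```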

As easy as this method is, it still holds the title of being the best known approximation algorithm for a whole range of problems. Our results finally make some progress in this regard: we show that it is possible to improve the K-approximation to an (εK)-approximation for any fixed ε > 0.

The basic idea of our approach is not to aggregate all scenarios until a single scenario is left, but to stop earlier. For example, we may take pairwise averages of scenarios until four of them remain. Solving the resulting problem with four scenarios gives a better result than solving it with one, but it is still hard. Fortunately, good approximation methods are known for problems with few scenarios, which makes it possible to beat the K-bound for the first time.
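As an illustration, one possible aggregation step could look like the sketch below; the exact pairing and weighting rule analysed in our paper may differ, so read this as the general idea rather than the actual algorithm.

```python
# Sketch of scenario aggregation: repeatedly replace two scenarios by
# their pairwise average until at most m scenarios remain. The pairing
# rule used here (always merge the last two) is an arbitrary choice for
# illustration only.

def aggregate_scenarios(scenarios, m):
    """Reduce a list of cost vectors to at most m by pairwise averaging."""
    scenarios = [list(s) for s in scenarios]
    while len(scenarios) > m:
        a = scenarios.pop()
        b = scenarios.pop()
        scenarios.append([(x + y) / 2 for x, y in zip(a, b)])
    return scenarios
```

The reduced problem, say with m = 4 scenarios, can then be handed to one of the known approximation algorithms for a constant number of scenarios.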

This is the abstract:

As most robust combinatorial min-max and min-max regret problems with discrete uncertainty sets are NP-hard, research into approximation algorithms and approximability bounds has been a fruitful area of recent work. A simple and well-known approximation algorithm is the midpoint method, where one takes the average over all scenarios and solves a problem of nominal type. Despite its simplicity, this method still gives the best-known bound on a wide range of problems, such as robust shortest path or robust assignment problems.
In this paper we present a simple extension of the midpoint method based on scenario aggregation, which improves the current best K-approximation result to an (εK)-approximation for any desired ε>0. Our method can be applied to min-max as well as min-max regret problems.


Hong Kong

I just returned from a two-week visit to the Hong Kong University of Science and Technology, which was made possible through the Kan Tong Po fellowship of the Royal Society. And I must say, I am most impressed by both the wonderful city and the university.

As our work revolves around using public transport in emergency planning, I gave a talk on the very same topic as part of the department’s seminar series last Wednesday. It was a pleasure speaking to such an open and attentive audience.

We had a good start developing new ideas and models. Let’s see what will come out of it!

Talk in London

Last week I visited Imperial College and took the opportunity to discuss some of the classic and more recent results on robust selection problems in a talk of the same title.

When I first saw the selection problem, I thought it quite silly. Given a set of n items with a cost vector, choose p of those items such that the total cost is minimized. An optimal solution is very simple to find: just sort all items with respect to their costs and pack the p cheapest ones (see the sketch below). That's all. So why is this interesting?
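To underline just how simple the deterministic problem is, the whole algorithm fits in a couple of lines of Python:

```python
def select(costs, p):
    """Deterministic selection: indices of the p cheapest items."""
    return sorted(range(len(costs)), key=lambda i: costs[i])[:p]

# Example: choose 2 of 5 items.
print(select([4.0, 1.5, 3.0, 0.5, 2.0], 2))  # -> [3, 1]
```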

The trouble with most robust problems is that handling the uncertainty adds an additional layer of complexity. In many cases, that means you may start with a simple deterministic problem (say, shortest paths) and end up with an NP-hard robust counterpart. But the selection problem is so simple that the complexity of its robust version (depending on the robustness concept and the uncertainty set) tends to lie just at the boundary between easy and hard. This makes it quite challenging to analyse.
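To see that extra layer at work, consider min-max selection with a discrete set of cost scenarios: we now want the p items whose worst-case total cost is smallest, and plain sorting no longer suffices. A brute-force sketch, exponential and purely for illustration:

```python
from itertools import combinations

def minmax_select(scenarios, p):
    """Brute-force min-max selection over discrete cost scenarios.

    Exponential in n; meant to illustrate the problem, not to solve it.
    """
    n = len(scenarios[0])
    best, best_val = None, float("inf")
    for items in combinations(range(n), p):
        worst = max(sum(s[i] for i in items) for s in scenarios)
        if worst < best_val:
            best, best_val = items, worst
    return list(best), best_val

# Example: with two scenarios, the min-max choice (item 2) is the
# cheapest item in neither individual scenario.
print(minmax_select([[1, 3, 2], [3, 1, 2]], 1))  # -> ([2], 2)
```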

As an example, as far as I know, the min-max regret counterpart of selection with interval uncertainty was the first problem of its kind that turned out to be solvable in polynomial time. With more such polynomial solvability results following, it makes you wonder: how far can we push the complexity of the deterministic problem and still stay in the realm of polynomial solution algorithms?