We just uploaded a new preprint on approximation algorithms for min-max and min-max regret optimisation problems with discrete uncertainty sets.

For this kind of problem, there is a very simple method to find an approximation: if you are given *K* scenarios, take their average and solve the resulting (much easier) single-scenario problem. This gives a *K*-approximation for both min-max and min-max regret problems.
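To make the idea concrete, here is a minimal sketch of the midpoint heuristic for a toy min-max *selection* problem (choose *p* of *n* items to minimize the worst-case total cost over the scenarios). The problem, function names, and data are my own illustration, not taken from the paper:

```python
# Sketch of the midpoint method for a min-max selection problem:
# choose p of n items to minimize the worst-case total cost over
# K cost scenarios. (Illustrative example, not the paper's code.)

def midpoint_heuristic(scenarios, p):
    """Average the K scenario cost vectors and solve the nominal
    (single-scenario) problem: pick the p cheapest items on average."""
    K = len(scenarios)
    n = len(scenarios[0])
    avg = [sum(s[i] for s in scenarios) / K for i in range(n)]
    # For selection, the nominal problem is easy: sort by average cost.
    return sorted(range(n), key=lambda i: avg[i])[:p]

def max_scenario_cost(scenarios, chosen):
    """Worst-case (min-max) objective value of a solution."""
    return max(sum(s[i] for i in chosen) for s in scenarios)

# Example with K = 3 scenarios over n = 4 items, choosing p = 2:
scenarios = [
    [1, 4, 2, 6],
    [5, 2, 2, 3],
    [2, 3, 2, 1],
]
sol = midpoint_heuristic(scenarios, p=2)
```

For richer nominal problems (shortest path, assignment), the sorting step would be replaced by the corresponding combinatorial solver; the averaging step stays the same.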

As simple as this method is, it still holds the title of best known approximation algorithm for a whole range of problems. Our results finally make some progress in this regard: we show that the *K*-approximation can be improved to an *(εK)*-approximation for any fixed *ε > 0*.

The basic idea of our approach is not to aggregate all scenarios until only a single one is left, but to stop earlier. For example, we may take pairwise averages of scenarios until four of them remain. Solving the resulting four-scenario problem gives a better result than solving the one-scenario problem, but it is still hard. Fortunately, good approximation methods are known for problems with few scenarios, which makes it possible to beat the *K*-approximation for the first time.
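The aggregation step itself can be sketched in a few lines. The pairing below is the naive one (adjacent scenarios); the paper may pair scenarios more carefully, so treat this as an illustration of the idea only:

```python
# Sketch of scenario aggregation: repeatedly replace pairs of
# scenarios by their average until at most `target` remain.
# (Illustrative pairing; not necessarily the paper's scheme.)

def aggregate_scenarios(scenarios, target):
    scen = [list(s) for s in scenarios]
    while len(scen) > target:
        nxt = []
        for j in range(0, len(scen) - 1, 2):
            a, b = scen[j], scen[j + 1]
            nxt.append([(x + y) / 2 for x, y in zip(a, b)])
        if len(scen) % 2:  # an odd scenario is carried over unchanged
            nxt.append(scen[-1])
        scen = nxt
    return scen

# Eight one-dimensional scenarios reduced to four:
reduced = aggregate_scenarios([[float(i)] for i in range(8)], target=4)
```

The reduced problem (here with four scenarios) is then handed to an approximation algorithm that works well for a small, fixed number of scenarios.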

This is the abstract:

As most robust combinatorial min-max and min-max regret problems with discrete uncertainty sets are NP-hard, research into approximation algorithms and approximability bounds has been a fruitful area of recent work. A simple and well-known approximation algorithm is the midpoint method, where one takes the average over all scenarios and solves a problem of nominal type. Despite its simplicity, this method still gives the best-known bound on a wide range of problems, such as robust shortest path or robust assignment problems.

In this paper we present a simple extension of the midpoint method based on scenario aggregation, which improves the current best K-approximation result to an (εK)-approximation for any desired ε > 0. Our method can be applied to min-max as well as min-max regret problems.