Last week I visited Imperial College and took the opportunity to discuss some of the classic and more recent results on robust selection problems in a talk of the same title.
When I first saw the selection problem, I thought it quite silly. Given a set of n items with a cost vector, choose p of them so that the total cost is minimized. An optimal solution is very simple to find: just sort all items by their costs and pack the p cheapest ones. That’s all. So why is this interesting?
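The sort-and-pick procedure above can be sketched in a few lines of Python (the function name and example data are my own, for illustration):

```python
def select_cheapest(costs, p):
    """Deterministic selection: pick p items with minimum total cost."""
    # Sort item indices by their cost and take the first p.
    order = sorted(range(len(costs)), key=lambda i: costs[i])
    return order[:p]

# Example: 5 items, pick the 2 cheapest.
print(select_cheapest([4, 1, 7, 3, 9], 2))  # -> [1, 3], total cost 1 + 3 = 4
```

Sorting dominates the running time, so the whole thing runs in O(n log n).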
The trouble with most robust problems is that the handling of uncertainty adds an additional layer of complexity. In many cases this means that you start with a simple deterministic problem (say, shortest paths) and end up with an NP-hard robust counterpart. But the selection problem is so simple that the complexity of its robust version (depending on the robustness concept and the uncertainty set) tends to lie right between easy and hard. This makes it quite challenging to analyse.
As an example, as far as I know, the min-max regret counterpart of selection with interval uncertainty was the first problem of its kind shown to be solvable in polynomial time. With more such polynomial solvability results following, it makes you wonder: how far can we push the complexity of the deterministic problem and still stay in the realm of polynomial-time algorithms?
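To make the min-max regret objective concrete, here is a small brute-force sketch (emphatically not the polynomial-time algorithm, just a definition in code; the function names and data are mine). It uses the standard fact that for a fixed selection X under interval costs, the regret-maximizing scenario sets the cost of every item in X to its upper bound and every other item to its lower bound:

```python
from itertools import combinations

def max_regret(X, intervals, p):
    """Worst-case regret of selection X under interval costs [lo_i, hi_i].

    For fixed X, the adversary's worst scenario is known to set c_i = hi_i
    for i in X and c_i = lo_i otherwise. Regret = cost of X in that
    scenario minus the optimal selection cost in the same scenario.
    """
    scenario = [hi if i in X else lo for i, (lo, hi) in enumerate(intervals)]
    cost_X = sum(scenario[i] for i in X)
    best = sum(sorted(scenario)[:p])  # optimal selection in this scenario
    return cost_X - best

def min_max_regret_brute_force(intervals, p):
    """Exponential-time reference solver; its only purpose is to define the objective."""
    n = len(intervals)
    return min((max_regret(set(X), intervals, p), X)
               for X in combinations(range(n), p))

regret, X = min_max_regret_brute_force([(1, 4), (2, 3), (0, 6)], 2)
print(regret, X)  # -> 4 (0, 1)
```

The brute force enumerates all binomial(n, p) selections, which is exactly what the polynomial-time result lets you avoid.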