Last week I returned to Birmingham’s Aston University (having been there in March to talk about optimisation methods in evacuation planning) and joined the first-of-its-kind IMA and OR Society Conference on Mathematics of Operational Research. That’s kind of an awkward name for a great idea. The IMA, that is the Institute of Mathematics and its Applications, a society for the promotion of mathematical culture, and the OR Society have plenty of things in common, and organising a joint conference seems quite natural in hindsight. I’ve been to few other conferences where I enjoyed every single talk.
I used the opportunity to talk about variable-sized robustness, which has become an area of great interest for me recently. The basic idea, as we explored it over two papers (1,2), is that uncertainty sets in robust optimisation are typically not known to the decision maker. Usually only raw data is available, and a suitable uncertainty set needs to be determined before any robust model can actually start its work. In variable-sized robustness, we assume that only the desired shape of the set is known, but not its size. This leads to some new and challenging optimisation problems with close ties to multi-objective optimisation.
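To make the idea concrete, here is a minimal Python sketch of my own toy example (not taken from the papers): a min-max selection problem where the uncertainty set is a box whose shape, given by the deviation vector d, is fixed, but whose size λ is not. Sweeping λ traces out the trade-off between nominal cost and protection against uncertainty, which is where the multi-objective flavour comes from.

```python
def robust_selection_value(c, d, p, lam):
    """Min-max value of selecting p items when item i may cost anything
    in the interval [c[i], c[i] + lam * d[i]].

    For such a box-shaped set, the worst case of any fixed selection is
    attained at the upper interval bounds, so the robust optimum simply
    picks the p items with the smallest value c[i] + lam * d[i].
    """
    costs = sorted(ci + lam * di for ci, di in zip(c, d))
    return sum(costs[:p])

# Toy data: nominal costs c and deviations d for five items, pick p = 2.
c = [3, 5, 2, 8, 4]
d = [4, 1, 6, 1, 2]
for lam in [0.0, 0.5, 1.0, 2.0]:
    print(lam, robust_selection_value(c, d, 2, lam))
```

Note how the optimal selection itself can change as λ grows: items with small nominal cost but large deviation drop out of the solution once the set becomes large enough.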
The IMA/OR conference was well attended and we can only hope to see more events of the same kind in the future!
February is nearly over, so let’s have a look at what has happened:
- We submitted a paper to the SEA 2017 conference, titled “A New Perspective on Complexity for Robust Combinatorial Optimization”. Let’s see how it fares; it is a competitive venue. Our basic idea is quite nice, though: do robust problems really get harder and harder the more scenarios you add? Spoiler alert: not quite!
- I wrote a grant proposal along similar lines, which is all ready for submission. Only some letters of support are still missing before I can send it on its way.
- The Mitteilungen der DMV is the magazine of the German Mathematical Society. I wrote an article about Brexit and its impact on my academic work, which will be in the next issue.
- In two weeks I will give a talk at the Midlands OR Society on optimisation methods for evacuation planning.
- And last but not least, I am now affiliated with the Data Science Institute at Lancaster University!
I am very happy to announce that some fruits of last year’s labour in Wroclaw are now ripe and ready. The paper is called “On Recoverable and Two-Stage Robust Selection Problems with Budgeted Uncertainty” and can be found here. The general idea is actually quite simple: Say we can choose p out of n items. Then the adversary can choose a bounded number of items and make them more expensive. Afterwards, in a third stage, we may exchange some of our items again. We tried to answer the question whether this problem can be solved in polynomial time or not.
It turns out that this is indeed possible if the uncertainty set is continuous. For discrete types of uncertainty, the question is still open – but at least we found a way to formulate the problem as a compact mixed-integer programme. This is the abstract:
In this paper the problem of selecting p out of n available items is discussed, such that their total cost is minimized. We assume that costs are not known exactly, but stem from a set of possible outcomes.
Robust recoverable and two-stage models of this selection problem are analyzed. In the two-stage problem, up to p items are chosen in the first stage, and the solution is completed once the scenario is revealed in the second stage. In the recoverable problem, a set of p items is selected in the first stage, and can be modified by exchanging up to k items in the second stage, after a scenario is revealed.
We assume that uncertain costs are modeled through bounded uncertainty sets, i.e., the interval uncertainty sets with an additional linear (budget) constraint, in their discrete and continuous variants. Polynomial algorithms for recoverable and two-stage selection problems with continuous bounded uncertainty, and compact mixed integer formulations in the case of discrete bounded uncertainty are constructed.
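On a tiny instance, the whole min-max game can be brute-forced. The following Python sketch is my own illustration of the recoverable setting under discrete budgeted uncertainty, not the paper’s algorithm: the decision maker picks p items, the adversary raises the cost of at most Γ items by their deviation, and the decision maker may then swap out up to k items.

```python
from itertools import combinations

def recoverable_value(c, d, p, k, gamma):
    """Brute-force min-max value of recoverable selection on a tiny instance.

    First stage: pick a set X of p items.  Adversary: raise the cost of
    (without loss of generality exactly) gamma items from c[i] to c[i]+d[i];
    attacking fewer items never helps the adversary.  Recovery: replace up
    to k items of X; the final p items are paid at the realized costs.
    """
    n = len(c)
    items = range(n)

    def recover(X, cost):
        # Cheapest final selection that keeps at least p - k items of X.
        best = float("inf")
        for Y in combinations(items, p):
            if len(set(Y) & set(X)) >= p - k:
                best = min(best, sum(cost[i] for i in Y))
        return best

    best = float("inf")
    for X in combinations(items, p):
        worst = 0.0
        for S in combinations(items, gamma):   # adversary's attacked items
            cost = [c[i] + (d[i] if i in S else 0) for i in items]
            worst = max(worst, recover(X, cost))
        best = min(best, worst)
    return best
```

This triple enumeration is of course exponential; the point of the paper is precisely to do better than this, with polynomial algorithms in the continuous case and compact formulations in the discrete one.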
Next Friday (10 February) I will give a talk on the scenario aggregation results at the Combinatorial Optimization and Applications workshop in Edinburgh. The organisers were able to invite an impressive list of speakers, so I’m very much looking forward to the packed single-day schedule!
We just uploaded a new preprint on approximation algorithms for min-max and min-max regret optimisation problems with discrete uncertainty sets.
For these kinds of problems, there is a very simple method to find an approximation: if you are given K scenarios, take their average and solve the resulting (much easier) problem. This gives a K-approximation for both min-max and min-max regret problems.
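For min-max selection, the whole method fits in a few lines. This is a Python sketch of the averaging idea only (the actual results cover general combinatorial problems):

```python
def midpoint_minmax_selection(scenarios, p):
    """K-approximation for min-max selection: average the K scenario cost
    vectors and pick the p items that are cheapest on average."""
    n = len(scenarios[0])
    avg = [sum(s[i] for s in scenarios) / len(scenarios) for i in range(n)]
    chosen = sorted(range(n), key=lambda i: avg[i])[:p]
    # The chosen solution is evaluated at its worst-case (max) scenario cost.
    worst = max(sum(s[i] for i in chosen) for s in scenarios)
    return chosen, worst
```

The nominal problem solved on the averaged costs is just the easy deterministic selection problem, which is what makes the method so cheap.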
As easy as this method is, it still holds the title of best known approximation algorithm for a whole range of problems. Our results finally make some progress in this regard: we show that it is possible to improve the K-approximation to an εK-approximation for any fixed ε > 0.
The basic idea of our approach is not to aggregate all scenarios until a single scenario is left, but stop earlier. For example, we may take the pairwise average of scenarios until four of them are left. Solving the resulting problem with four scenarios will give a better result than with one scenario, but it is still hard. Fortunately, there are good approximation methods known for few scenarios, which makes it possible to beat the K-result for the first time.
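The aggregation step itself is straightforward; here is a Python sketch of the pairwise averaging (the interesting part, approximately solving the remaining small instance, is a separate matter):

```python
def aggregate_scenarios(scenarios, target):
    """Pairwise-average scenario cost vectors until at most `target` remain.

    Assumes target >= 1.  An odd scenario out is carried over unchanged
    to the next round.
    """
    scen = [list(s) for s in scenarios]
    while len(scen) > target:
        merged = []
        for j in range(0, len(scen) - 1, 2):
            a, b = scen[j], scen[j + 1]
            merged.append([(x + y) / 2 for x, y in zip(a, b)])
        if len(scen) % 2:          # odd one out is carried over unchanged
            merged.append(scen[-1])
        scen = merged
    return scen
```

With `target=1` this recovers the plain midpoint method; stopping at, say, `target=4` keeps more scenario information at the price of a harder (but still small) remaining problem.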
This is the abstract:
As most robust combinatorial min-max and min-max regret problems with discrete uncertainty sets are NP-hard, research into approximation algorithms and approximability bounds has been a fruitful area of recent work. A simple and well-known approximation algorithm is the midpoint method, where one takes the average over all scenarios, and solves a problem of nominal type. Despite its simplicity, this method still gives the best-known bound on a wide range of problems, such as robust shortest path, or robust assignment problems.
In this paper we present a simple extension of the midpoint method based on scenario aggregation, which improves the current best K-approximation result to an (εK)-approximation for any desired ε>0. Our method can be applied to min-max as well as min-max regret problems.
I just returned from a two-week visit to Hong Kong University of Science and Technology, which was made possible through the Kan Tong Po fellowship of the Royal Society. And I must say – I am most impressed, by both the wonderful city and the university.
As our work revolves around using public transport in emergency planning, I gave a talk on the very same topic as part of the department’s seminar series last Wednesday. It was a pleasure speaking to such an open and attentive audience.
We had a good start developing new ideas and models. Let’s see what will come out of it!
Last week I visited Imperial College and used the opportunity to discuss some of the classic and more recent results on robust selection problems in a talk of the same title.
When I first saw the selection problem, I thought it quite silly. Given a set of n items with a cost vector, choose p of those items such that the total cost is minimized. An optimal solution is very simple to find: just sort all items by cost and pack the p cheapest ones. That’s all. So why is this interesting?
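For the record, the deterministic problem really is that short in code (a sketch):

```python
def select(costs, p):
    """Deterministic selection: pick p of n items with minimum total cost
    by sorting the item indices by cost and taking the p cheapest."""
    return sorted(range(len(costs)), key=lambda i: costs[i])[:p]

items = select([4, 1, 3, 5, 2], 3)   # indices of the three cheapest items
```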
The trouble with most robust problems is that you add an additional layer of complexity through the handling of uncertainty. That means in many cases that you may start with a simple deterministic problem (say, shortest paths) and end up with an NP-hard robust counterpart. But the selection problem is so simple that the complexity of its robust version (depending on concept and uncertainty set) tends to lie just between easy and hard. This makes it quite challenging to analyse.
As an example, as far as I know, the min-max regret counterpart of selection with interval uncertainty was the first problem of its kind that turned out to be solvable in polynomial time. With more such polynomial solvability results following, it makes you wonder: how far can we push the complexity of the deterministic problem and still stay in the realm of polynomial solution algorithms?