Performance Analysis in Robust Optimization

There is a new book out on Robustness Analysis in Decision Aiding, Optimization, and Analytics, edited by Michael Doumpos, Constantin Zopounidis, and Evangelos Grigoroudis.

André and I contributed a chapter on Performance Analysis in Robust Optimization. The basic question appears, if only implicitly, in many papers on robust models: Now that we have found a solution we consider robust, how can we evaluate how well it performs?

One way is to calculate how much the robust solution costs in the nominal scenario, compared to how much the best possible solution in the nominal scenario costs. But that is clearly one-sided: Of course the robust solution will cost more, but it is much more interesting to see how much we save by using the robust solution.
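To make this a bit more concrete (the notation here is my own shorthand, not necessarily the chapter's): write x_rob for the robust solution, x_nom for the nominal optimum, f(x, ξ) for the cost of solution x under scenario ξ, and U for the uncertainty set. The one-sided comparison above is the ratio

\[ \frac{f(x_{\mathrm{rob}}, \xi_{\mathrm{nom}})}{f(x_{\mathrm{nom}}, \xi_{\mathrm{nom}})}, \]

the extra cost of the robust solution in the nominal scenario ξ_nom, while the complementary view compares the two solutions in their worst cases,

\[ \frac{\max_{\xi \in U} f(x_{\mathrm{nom}}, \xi)}{\max_{\xi \in U} f(x_{\mathrm{rob}}, \xi)}, \]

which measures how much we save with the robust solution when scenarios turn out badly.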

There is no single, fair method to evaluate a solution. We present different approaches to do so and discuss their advantages and disadvantages. As a side note, the average-case solution often does not perform badly at all!

The book contains many more papers that will be highly interesting to the community. To highlight two: There is a survey on robust combinatorial optimisation, which is exactly what the field needs at the moment, and another survey on The State of Robust Optimization.

Brexit and Funding

Now that the UK is on track to leave the EU, academia wonders what will happen to current and future EU-funded projects. Of course, nobody knows. The official stance is that we should carry on as normal:

While the UK remains a full member of the European Union we encourage researchers to continue to engage with partners in the EU and with European funding schemes as normal.

The problem is, this statement is full of “whiles” and “no immediates”. And that screaming uncertainty apparently has effects, as the Guardian reports, though I cannot speak from first-hand experience.

Fortunately, some hope finally appears on the horizon: An easy and efficient plan to solve any Brexit-related problems at hand.

The wizened dwarf appeared to Theresa May and pledged to spin the room full of straw she inherits into purest gold, which has been praised as the most credible plan thus far presented by Leave campaigners.

Research Parasites

During his excellent keynote on Information-based Medicine and Combinatorial Optimization at EURO 2016, Pablo Moscato mentioned the concept of research parasites. If you hear of it for the first time – as I did – prepare to be surprised.

The term originated in an editorial on data sharing in the New England Journal of Medicine. The authors stated:

A second concern held by some is that a new class of research person will emerge — people who had nothing to do with the design and execution of the study but use another group’s data for their own ends, possibly stealing from the research productivity planned by the data gatherers, or even use the data to try to disprove what the original investigators had posited. There is concern among some front-line researchers that the system will be taken over by what some researchers have characterized as “research parasites.”

As you can imagine, this statement has been heavily criticised from all sides. Using data to disprove what the original investigators posited – well, that is what scientific progress is about. Were the authors misunderstood? In the same editorial, they propose an alternative to what they call parasitic research:

How would data sharing work best? We think it should happen symbiotically, not parasitically. Start with a novel idea, one that is not an obvious extension of the reported work. Second, identify potential collaborators whose collected data may be useful in assessing the hypothesis and propose a collaboration. Third, work together to test the new hypothesis. Fourth, report the new findings with relevant coauthorship to acknowledge both the group that proposed the new idea and the investigative group that accrued the data that allowed it to be tested. What is learned may be beautiful even when seen from close up.

Clearly, one concern of the authors was that appropriate credit for data would not be given. Still, their definition of good research would exclude checking other authors' results using the same data – that is, verification of results, which is at the heart of science.

As often happens with such scandals, the editorial created quite the opposite of the effect it intended. A research parasite award has been created, and a Twitter account keeps us up to date on the latest research parasitism.

We are promised that all keynotes will be available online soon. Let’s hope they are uploaded while memory of the EURO is still fresh.

Variable-Sized Uncertainty and Inverse Robust Optimisation at the EURO 2016

Having just arrived in Poznan, I am very much looking forward to the exciting talks we will see over the coming days.

My own talk (TC 47) will be on a recent paper on “Variable-Sized Uncertainty and Inverse Problems in Robust Optimization”. The idea is that we do not know the uncertainty set exactly, but only its shape. If the uncertainty size is zero, the nominal solution is also an optimal “robust” solution. How does this change when the uncertainty increases?
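To sketch the setting in my own notation (the paper may state it differently): the uncertainty set has a fixed shape that is scaled by a size parameter λ ≥ 0, and we look at the family of min-max problems

\[ \min_{x \in X} \; \max_{c \in \mathcal{U}(\lambda)} c^\top x, \qquad \mathcal{U}(\lambda) = \{\hat{c} + \lambda \delta : \delta \in \mathcal{U}\}, \]

where \hat{c} is the nominal cost vector and \mathcal{U} describes the shape, say a ball or a box. For λ = 0 this is just the nominal problem, so the nominal solution is optimal; as λ grows, other solutions may take over.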

In the first kind of problem we consider, robust optimisation with variable-sized uncertainty, we try to find a smallest set of robust solutions that contains an optimal solution for every possible uncertainty set. In the second kind of problem, inverse robust optimisation, we ask how much uncertainty can or must be added before the nominal solution stops being an optimal solution for the robust problem.
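For the inverse problem, one way to write the quantity of interest (again in my own shorthand) is the critical size

\[ \lambda^\ast = \inf \{\, \lambda \ge 0 : x_{\mathrm{nom}} \text{ is not optimal for the robust problem with } \mathcal{U}(\lambda) \,\}, \]

i.e., the smallest scaling of the uncertainty shape at which the nominal solution x_nom ceases to be an optimal robust solution; as the abstract below notes, we study this question with a min-max regret objective.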

This is the abstract:

In robust optimization, the general aim is to find a solution that performs well over a set of possible parameter outcomes, the so-called uncertainty set. In this paper, we assume that the uncertainty size is not fixed, and instead aim at finding a set of robust solutions that covers all possible uncertainty set outcomes. We refer to these problems as robust optimization with variable-sized uncertainty. We discuss how to construct smallest possible sets of min-max robust solutions and give bounds on their size.
A special case of this perspective is to analyze for which uncertainty sets a nominal solution ceases to be a robust solution, which amounts to an inverse robust optimization problem. We consider this problem with a min-max regret objective and present mixed-integer linear programming formulations that can be applied to construct suitable uncertainty sets.
Results on both variable-sized uncertainty and inverse problems are further supported with experimental data.

Other talks in the session are given by Andre Chassein (Approximation of Ellipsoids Using Bounded Uncertainty Sets), Krzysztof Postek (Robust Counterparts of Ambiguous Stochastic Constraints Under Mean and Dispersion Information), and Mikita Hradovich (The Robust Recoverable Spanning Tree Problem with Interval Costs). So do come and enjoy!