Robust storage loading problems at the EURO 2016

Next week the EURO conference will take place in Poznan. Amongst many interesting talks, I’d like to highlight Thanh’s presentation on “Robust Storage Loading Problems with Stacking Constraints” (in WB-47).

The accompanying paper we wrote was published quite recently. It considers the following problem: we want to stack containers such that a weight restriction in each stack is satisfied, but we do not know the exact weights in advance. We follow both a classic robust model, where positions cannot be changed once additional information becomes available, and a two-stage procedure, where in the first stage we only determine to which stack (or set of stacks) an item is assigned. Then, in the second stage, once the actual weights are known, we specify the precise position within the stack.

Using an innovative description of the set of worst-case scenarios that need to be considered, we derive a compact formulation that considerably outperforms a standard scenario generation approach.
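
To give a flavour of the strict robustness setting, here is a much-simplified sketch (not the compact formulation from the paper): assume only a per-stack capacity constraint and interval weights. Then the worst case for each stack is simply every assigned item taking its maximum weight, so a first-stage assignment can be checked stack by stack. All data below is made up for illustration.

```python
# Simplified illustration of strict robust feasibility under interval
# weights: a stack assignment is feasible iff it survives the worst case,
# i.e. every item in the stack at its maximum weight.

def strictly_feasible(assignment, weight_intervals, capacity):
    """assignment: dict stack_id -> list of item ids
    weight_intervals: dict item id -> (w_min, w_max)
    capacity: maximum total weight allowed per stack."""
    return all(
        sum(weight_intervals[i][1] for i in items) <= capacity
        for items in assignment.values()
    )

# Example: four items with uncertain weights, two stacks, capacity 10.
intervals = {1: (2, 4), 2: (3, 5), 3: (1, 2), 4: (4, 6)}
plan = {0: [1, 2], 1: [3, 4]}
print(strictly_feasible(plan, intervals, 10))  # worst cases are 9 and 8
```

The actual problem in the paper is harder, since stacking and payload constraints make the worst case depend on the order of items within a stack, which is exactly why a characterization of the relevant worst-case scenarios is needed.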

This is the full abstract of the paper:

We consider storage loading problems where items with uncertain weights have to be loaded into a storage area, taking into account stacking and payload constraints. Following the robust optimization paradigm, we propose strict and adjustable optimization models for finite and interval-based uncertainties. To solve these problems, exact decomposition and heuristic solution algorithms are developed. For strict robustness, we also propose a compact formulation based on a characterization of worst-case scenarios. Computational results for randomly generated data with up to 300 items are presented showing that the robustness concepts have different potential depending on the type of data being used.

Wroclaw

Currently visiting Wroclaw, the “home of approximation algorithms in robust optimisation”. And what a city it is! As this year’s European Capital of Culture, it sure lives up to expectations. A visit is warmly recommended; and to top it all off, the pierogi are simply delightful.

RAMOO Workshop

Yesterday surely was emotional. With Brexit looming over us and delivering quite a bad surprise for breakfast, we still had a most interesting and fruitful workshop on Recent Advances in Multi-Objective Optimisation, which I had the pleasure of helping to organise.

The connection between robust and multi-objective optimisation was a recurring theme, starting with Gabi Eichfelder’s keynote on a concept called multi-objective decision uncertainty, and continuing with a talk by Corinna Krüger on a similar topic.

My own talk had the rather broad title “Robust and Multiobjective Optimisation: Opportunities and Challenges”. The general theme was to look not only at what robust optimisation can do for multi-objective optimisation, but also at the other way around: How can we use multi-objective optimisation to solve robust problems? What combinations are there between these topics? All slides should be available soon on the conference homepage.

The workshop was the third of its kind, and the series goes on. Most likely, we’ll meet in Germany again next year! All talks were of the highest quality, and all participants were clearly enthusiastic about their research. What an enjoyable day, after all!

Couple of Wikipedia Links

More or less random stuff I’ve come across:

NewYorker on Crowd Dynamics

I just stumbled upon this interesting (but also quite long) essay on crowd dynamics. Plenty of examples that show how important emergency management is!

We have never evolved a collective intelligence to function in large crowds—we have no way of getting beyond the purely local rules of interaction, as ants can.

Integrated Models in Public Transport

Lately, integrated models for public transport planning have become increasingly popular, with plenty of new publications resulting from renewed research effort.

The idea is really quite intuitive: To arrive at a public transport system, several successive planning steps need to be considered. Where do we locate stops? How do we design lines, and with what frequency should they be used? What is a good timetable? How many vehicles will we need? And many more variations of such questions.

Usually, each planning problem is studied on its own, and there are good reasons for doing so: the problems are already very hard to solve in this form, and in practice they are tackled by different planning departments, each with its own objectives.

But: Designing a line plan, for example, has a big impact on the quality of a subsequent timetable. Integrating several planning steps into a single, even more complex step might therefore result in a much better overall solution than an iterative approach.

There are two papers I was involved with that went in this direction.

In “Evaluating line concepts using travel times and robustness“, we looked at the impact of line plans on later planning steps. To do so, we generated a large set of possible line plans (using LinTim), and went through the planning process with each of them. In the end, you can see which parameters of the line plan influence the later planning steps more, and which are less important. Such an analysis can be seen as a much easier alternative to solving an integrated problem, as you can condense the relevant parameters into the line planning step.

This is the abstract:

Line planning is an early step in the planning process in public transportation, usually followed by designing the timetable. The problems related to both steps are known to be NP-hard, and an integrated model finding a line plan and a timetable simultaneously seems out of scope from a computational point of view. However, the line plan influences also the quality of the timetable to be computed in the next planning step.

In this paper we analyze the impact of different line planning models by comparing not only typical characteristics of the line plans, but also their impact on timetables and their robustness against delays. To this end, we set up a simulation platform LinTim which enables us to compute a timetable for each line concept and to experimentally evaluate its performance under delays. Using the German railway intercity network, we evaluate the quality of different line plans from a line planning, a timetabling, and a delay management perspective.

In the second paper, “An experimental comparison of periodic timetabling models“, we looked at the problem of integrating a timetable with passenger routing. There is a kind of chicken-and-egg problem here: passenger routes depend on the timetable, but to find a good timetable, we need to know the passenger routes. Our approach was to iterate between both problems, which yielded timetables with much better travel times than the usual method.
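
The iteration idea can be sketched with a toy example (all numbers hypothetical, and far simpler than the actual timetabling model): fix the passenger routes and pick the best timetable for them, then fix the timetable and reroute the passengers, and repeat until the travel time stops improving.

```python
# Toy alternating scheme between timetabling and passenger routing.
# rows = candidate timetables, cols = candidate passenger routes,
# entry = resulting travel time (hypothetical data).
times = [
    [10, 7, 9],   # timetable A
    [8, 11, 6],   # timetable B
]

route, best = 0, float("inf")  # start from an arbitrary route
for _ in range(10):
    # timetabling step: best timetable for the current route
    tt = min(range(len(times)), key=lambda t: times[t][route])
    # routing step: best route for the current timetable
    route = min(range(len(times[0])), key=lambda r: times[tt][r])
    if times[tt][route] >= best:
        break  # no further improvement: stop
    best = times[tt][route]
print(best)  # 6
```

Such an alternating scheme improves monotonically in each step, although in general it only reaches a local optimum rather than the best integrated solution.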

This is the abstract:

In the Periodic Timetabling Problem, vehicle arrivals and departures need to be scheduled over a periodically repeating time horizon. Its relevance and applicability have been demonstrated by several real-world implementations, including the Netherlands railways and the Berlin subway.

In this work, we consider the practical impact of two possible problem variations: firstly, how passenger paths are handled, and secondly, how line frequencies are included. In computational experiments on real-world and close-to real-world networks, we can show that passenger travel times can significantly benefit from extended models.

Ranking robustness

Quite recently, our new approach for robust optimisation in combination with K-best ranking problems has been published.

In robust optimisation, we’d like to find solutions which perform well over a range of possible scenario outcomes. For K-best problems, we are interested in finding not only a best possible solution for a (deterministic) problem, but also the second best, the third best, and so on.

So what do these two problems have to do with each other? One could certainly look at K-best problems under uncertainty, that is, finding second best solutions for robust problems, third best, and so on. I don’t think anyone has ever tried that!

Our idea was instead to use such ranking algorithms to determine what a “good solution” means under uncertainty. The classic approach to determine the quality of a solution would be to calculate the worst objective value over all scenarios. In our approach, we calculate the worst ranking over all scenarios instead.

This idea was inspired by many real-world problems, where the specific objective value of a solution is only one of many indicators of what constitutes a “good” solution. Under many circumstances, a more natural approach for a decision maker is to give a rough quality estimate, say in terms of “good”, “ok”, “bad”, or “very bad”. We try to find a solution for which the worst such ranking over all scenarios is as good as possible.
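
A minimal sketch of the ranking idea, with made-up data and rank 1 meaning best (so we minimise the worst rank): for each scenario, rank all candidate solutions by objective value, then score each solution by its worst rank across scenarios. Ties are ignored for simplicity.

```python
# Illustrative "ranking robustness" toy (data not from the paper):
# objective value of each solution (rows) in each scenario (columns).
values = [
    [1, 9],   # solution 0: great in scenario 0, terrible in scenario 1
    [3, 4],   # solution 1
    [2, 5],   # solution 2
]

n_sol, n_scen = len(values), len(values[0])
worst_rank = []
for s in range(n_sol):
    ranks = []
    for q in range(n_scen):
        col = sorted(values[i][q] for i in range(n_sol))
        ranks.append(col.index(values[s][q]) + 1)  # 1 = best in scenario q
    worst_rank.append(max(ranks))

best = min(range(n_sol), key=lambda s: worst_rank[s])
print(worst_rank, best)  # [3, 3, 2] 2
```

Note that in this toy example the min-max objective would pick solution 1 (worst value 4), while the ranking criterion picks solution 2, which is never worse than second best in any scenario.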

K-best problems are tough to solve on their own, and it doesn’t get easier when they appear as a subproblem of robust optimisation. Accordingly, a big future challenge for us will be to produce high-quality ranking-robust solutions in shorter computation time.

This is the abstract:

We present a new approach to handle uncertain combinatorial optimization problems that uses solution ranking procedures to determine the degree of robustness of a solution. Unlike classic concepts for robust optimization, our approach is not purely based on absolute quantitative performance, but also includes qualitative aspects that are of major importance for the decision maker.

We discuss the two variants, solution ranking and objective ranking robustness, in more detail, presenting problem complexities and solution approaches. Using an uncertain shortest path problem as a computational example, the potential of our approach is demonstrated in the context of evacuation planning due to river flooding.