European Chapter on Combinatorial Optimization

(ECCO) XXII

May 17-19, 2009

Jerusalem

Abstracts booklet

The conference is supported by:

Sunday

9:15-10:15 Plenary talk – Alain Hertz "Solution methods for the inventory routing problem"

We consider an inventory routing problem in discrete time where a supplier has to serve a set of customers over a time horizon. At each discrete time, a fixed quantity is produced at the supplier and a fixed quantity is consumed at each customer. A capacity constraint for the inventory is given for the supplier and each customer, and the service cannot cause any stock-out situation. Two different replenishment policies are considered: when a customer is served, one can either deliver any quantity that does not violate the capacity constraint, or impose that the inventory level at the customer should reach its maximal value. The transportation cost is proportional to the distance traveled, whereas the inventory holding cost is proportional to the level of the inventory at the customers and at the supplier. The objective is the minimization of the sum of the inventory and transportation costs. We describe the current best exact and heuristic algorithms for the solution of this complex problem.
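As a toy illustration of these dynamics (a sketch of the problem data, not of the authors' algorithms), the following code simulates a single customer under the order-up-to policy, checks the no-stock-out and capacity conditions, and accumulates the holding cost; the routing cost is omitted and all names are ours.

```python
def simulate_order_up_to(horizon, prod, cons, sup_cap, cust_cap, visit_periods):
    """prod/cons: fixed per-period production at the supplier and consumption at
    the customer; visit_periods: periods in which the customer is served under
    the order-up-to policy. Returns the total holding cost, or None if a
    stock-out or a capacity violation occurs."""
    supplier, customer = 0.0, cust_cap          # assume the customer starts full
    holding = 0.0
    for t in range(horizon):
        supplier += prod
        if supplier > sup_cap:
            return None                          # supplier inventory overflows
        if t in visit_periods:
            q = cust_cap - customer              # order-up-to: fill the customer to capacity
            if q > supplier:
                return None                      # cannot ship more than the supplier holds
            supplier -= q
            customer += q
        customer -= cons
        if customer < 0:
            return None                          # stock-out at the customer
        holding += supplier + customer           # holding cost proportional to inventory levels
    return holding
```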

10:45-12:00 Parallel sessions:

Track 1: Integer and mixed integer formulations 1

1. Julia Rieck and Jürgen Zimmermann, " A new mixed integer linear model for a rich vehicle routing problem with docking constraints"

In this paper we address a rich vehicle routing problem that arises in real-life applications. Among other aspects we consider

• heterogeneous vehicles,
• simultaneous delivery and pick-up at customer locations,
• a total working time for vehicles from departure to arrival back at the depot,
• time windows for customers during which delivery and pick-up can occur,
• a time window for the depot, and
• multiple use of vehicles throughout the planning horizon.

To guarantee a coordinated material flow at the depot, we include the timed allocation of vehicles to loading bays at which the (un-)loading activities can occur. The resulting rich vehicle routing problem can be formulated as a two-index vehicle-flow formulation where we use binary variables to indicate if a vehicle traverses an arc in the optimal solution and to sequence the (un-)loading activities at the depot.

In our performance analysis, we use CPLEX 11.0 to solve instances derived from the extended Solomon test set. The selective implementation of preprocessing techniques and cutting planes improves the solver performance significantly. In this context, we strengthen the domains of auxiliary variables as well as the big-M constraints of our model. We identified clique, implied bound, and flow cover cuts as particularly suitable for our purposes. Additionally, we take into consideration the rounded capacity cuts for the capacitated vehicle routing problem. The average computational time is 310.06 seconds for instances with 25 customers and 1305.10 seconds for instances with 30 customers. A comparison with the results obtained for a three-index vehicle-flow formulation shows that the model is suitable for practical applications.
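For illustration, a typical big-M linking constraint of the kind used in such two-index formulations (generic notation, not taken from the paper) is:

```latex
% If a vehicle traverses arc (i,j), i.e. x_{ij} = 1, then service at j cannot start
% before service at i plus the service time \sigma_i and the travel time t_{ij}:
s_j \;\ge\; s_i + \sigma_i + t_{ij} - M_{ij}\,(1 - x_{ij})
% Replacing a generic large M_{ij} by the smallest valid value, e.g.
% M_{ij} = \ell_i + \sigma_i + t_{ij} - e_j for time windows [e_i,\ell_i],
% is one instance of the big-M strengthening mentioned above.
```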

2. Bartosz Sawik, "Bi-objective dynamic portfolio optimization by mixed integer programming"

This paper presents the dynamic portfolio optimization problem formulated as a bi-objective mixed integer linear program. The computational efficiency of the MILP model is very important for applications to real-life financial and other decisions where the constructed portfolios have to meet numerous side constraints. The portfolio selection problem considered is based on a multi-period model of investment, in which the investor buys and sells securities in consecutive periods. An extension of the Markowitz portfolio optimization model is considered, in which the variance has been replaced with the Value-at-Risk (VaR). The VaR is a quantile of the return distribution function. The advantage of using this measure in portfolio optimization is that this value of risk is independent of any distribution hypothesis. The portfolio selection problem is usually considered as a bi-objective optimization problem where a reasonable trade-off between expected rate of return and risk is sought. The objective is to maximize future returns by choosing the best combination of stocks. The only way to improve future returns is to increase the risk level that the decision maker is willing to accept. In the classical Markowitz approach, future returns are random variables whose efficiency is measured by the expectation, while risk is measured by the standard deviation. As a result the classical problem is formulated as a quadratic program with continuous variables and some side constraints. The objective of the problem considered in this paper is to allocate wealth to different securities so as to maximize the portfolio expected return and the threshold of the probability that the return is not less than a required level. The auxiliary objective is the minimization of the probability of portfolio loss. The input data for the computations consist of 3500 historic daily quotations divided into 14 investment periods. The results of computational experiments with the mixed integer programming approach, based on real data from the Warsaw Stock Exchange, are reported.
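As a rough sketch of how a quantile-based risk measure can enter a MILP, a scenario-based VaR constraint may be written as follows (generic notation; not necessarily the authors' exact formulation):

```latex
% r_t(x): portfolio return under historical scenario t = 1,\dots,T;   R: required return level;
% z_t \in \{0,1\} flags scenarios in which the return may fall below R;  \alpha: accepted risk level.
r_t(x) \;\ge\; R - M z_t, \qquad t = 1,\dots,T, \qquad\qquad \sum_{t=1}^{T} z_t \;\le\; \lfloor \alpha T \rfloor
% Maximizing R subject to these constraints makes R an empirical quantile (the VaR) of the
% scenario returns, without any distributional hypothesis.
```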

3. Alberto Ceselli, Giovanni Righini and Gregorio Tirado Dominguez, "Mathematical programming algorithms for the double TSP with multiple stacks"

The Double TSP with Multiple Stacks requires finding a minimum cost pair of Hamiltonian tours, visiting two given sets of pickup and delivery customers, such that the two sequences of visits can be executed by a single vehicle that transports the goods organized in stacks of identical capacity. Stacks are managed according to a LIFO policy and items cannot be rearranged. For this problem only heuristic algorithms have been published so far. We present a decomposition approach that leads to Lagrangian relaxation and column generation algorithms. We also present some preliminary computational results.

Track 2: Approximation and Online algorithms:

1. Zvi Lotker, Boaz Patt-Shamir and Dror Rawitz, "Rent, lease or buy: randomized strategies for multislope ski rental"

In the Multislope Ski Rental problem, the user needs a certain resource for some unknown period of time. To use the resource, the user must subscribe to one of several options, each of which consists of a one-time setup cost (“buying price”), and cost proportional to the duration of the usage (“rental rate”). The larger the price, the smaller the rent. The actual usage time is determined by an adversary, and the goal of an algorithm is to minimize the cost by choosing the best option at any point in time. Multislope Ski Rental is a natural generalization of the classical Ski Rental problem (where the only options are pure rent and pure buy), which is one of the fundamental problems of online computation. The Multislope Ski Rental problem is an abstraction of many problems, where online choices cannot be modeled by just two alternatives, e.g., power management in systems which can be shut down in parts.

In this work we study randomized online strategies for Multislope Ski Rental. Our results include an algorithm that produces the best possible randomized online strategy for any additive instance, where the cost of switching from one alternative to another is the difference in their buying prices, and an e-competitive randomized strategy for any (non-additive) instance. We also provide a randomized strategy with a matching lower bound for the case of two slopes, where both slopes have positive rents.
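For concreteness, the hindsight cost of committing to a single option for a usage time t is easy to state; a small sketch with our own example data (the online player, of course, does not know t and may switch options):

```python
def best_single_option_cost(slopes, t):
    """slopes: list of (buy_price b_i, rental_rate r_i); using option i for time t
    costs b_i + r_i * t. Returns the cheapest such cost in hindsight."""
    return min(b + r * t for b, r in slopes)

# Classical ski rental as a 2-slope instance: pure rent (0, 1) and pure buy (10, 0).
print(best_single_option_cost([(0.0, 1.0), (10.0, 0.0)], 7.0))   # 7.0  -> renting wins
print(best_single_option_cost([(0.0, 1.0), (10.0, 0.0)], 25.0))  # 10.0 -> buying wins
```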

2. Asaf Levin and Uri Yovel, "Algorithms for (p,k)-uniform unweighted set cover problem"

We are given n base elements and a finite collection of subsets of them. The size of each subset is between p and k (p<k). In addition, we assume that the input contains all possible subsets of size p. Our problem is to find a minimum-cardinality subcollection which covers all the elements. This problem is known to be NP-hard. We provide two approximation algorithms for it, one for the generic case, and an improved one for the special case of (p,k) = (2,4).

The algorithm for the generic case is a greedy one, based on packing phases: at each phase we pick a collection of disjoint subsets covering i new elements, starting from i=k down to i=p+1. At a final step we cover the remaining base elements by the subsets of size p. We derive the exact approximation ratio of this algorithm for all values of k and p, which is less than H_k, the k-th harmonic number. However, the algorithm exhibits the known improvement methods over the greedy one for the unweighted k-set cover problem (in which subset sizes are only restricted not to exceed k), and hence it serves as a benchmark for our improved algorithm.

The improved algorithm for the special case of (p,k) = (2,4) is based on local-search: it starts with a feasible cover, and then repeatedly tries to replace sets of size 3 and 4 so as to maximize an objective function which prefers big sets over small ones. For this case, our generic algorithm achieves an approximation ratio of 1.5, and the local-search algorithm achieves a better ratio, which is bounded by 1.458333...
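A compact sketch of the phased greedy described above (illustrative Python; the choice of sets within a phase is arbitrary here, whereas the paper analyses the exact ratio of the method):

```python
def phased_greedy_cover(elements, subsets, p, k):
    """Phases i = k, k-1, ..., p+1: repeatedly pick a subset covering exactly i
    still-uncovered elements; finally cover the leftovers with subsets of size p,
    all of which are assumed to be part of the input."""
    uncovered, chosen = set(elements), []
    for i in range(k, p, -1):
        picked = True
        while picked:
            picked = False
            for s in subsets:
                if len(uncovered & set(s)) == i:
                    chosen.append(tuple(s))
                    uncovered -= set(s)
                    picked = True
    leftovers = list(uncovered)
    for j in range(0, len(leftovers), p):        # final step with p-element subsets,
        group = leftovers[j:j + p]               # padding the last group with covered elements
        pad = [e for e in elements if e not in group][:p - len(group)]
        chosen.append(tuple(group) + tuple(pad))
        uncovered -= set(group)
    return chosen
```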

3. Leah Epstein and Gyorgy Dosa, "Preemptive online scheduling with reordering"

We consider online preemptive scheduling of jobs, arriving one by one, on m parallel machines. A buffer of fixed size K>0, which assists in partial reordering of the input, is available, to be used for the storage of at most K unscheduled jobs. We consider several variants of the problem. For general inputs and identical machines, we show that a buffer of size Theta(m) reduces the overall competitive ratio from e/(e-1) to 4/3. Surprisingly, the competitive ratio as a function of m is not monotone, unlike the case where K=0.

12:10-13:00 Parallel sessions:

Track 1: Integer and mixed integer formulations 2:

1. Jan Pelikán and Jakub Fischer, "Optimal doubling models"

The article deals with the problem of doubling parts of a system in order to increase the system's reliability. The task is to find out which parts should be doubled if it is not possible to double all of them for economic reasons. The goal is to maximize the system's reliability, characterized by the probability of faultless functioning, or to minimize the losses caused by the system's failure. The article proposes a procedure based on linear programming methods that finds the optimal decision as to which of the parts should be doubled.
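As a minimal illustration, assume a series system with independent components in which a doubled part works as a hot-redundant pair (the paper's model may differ):

```latex
% Part i works with probability p_i; doubling it raises this to 1-(1-p_i)^2.
% Doubling the parts in a set D gives the system reliability
R(D) \;=\; \prod_{i \in D} \bigl(1-(1-p_i)^2\bigr)\;\prod_{i \notin D} p_i .
% Taking logarithms turns the product into a sum of terms \log p_i and
% \log\bigl(1-(1-p_i)^2\bigr), so choosing D under a budget constraint becomes a
% linear (knapsack-type) model, one way such a problem can be attacked with
% linear programming methods.
```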

2. Jorge Riera-Ledesma and Juan José Salazar González, "A branch-and-cut-and-price for the multiple vehicle traveling purchaser problem"

The Multiple Vehicle Traveling Purchaser Problem aims at designing a set of optimal school bus routes with the purpose of carrying pupils to their center of learning. The problem is defined as follows: given a set of users going to a specific location, say a school, there exists a set of potential bus stops. Each bus stop is reachable by a subset of users. A fleet of homogeneous vehicles is available for this purpose. The aim is to assign the set of users to a subset of potential stops, to find least-cost trips that start and end at the school, and to choose a vehicle to serve each trip, so that each stop with users assigned is served by a trip and the total number of users assigned to the stops of a trip does not exceed the capacity of the vehicle.

We propose an integer linear formulation, and several families of valid inequalities are derived to strengthen the linear relaxation. A branch-and-cut-and-price procedure has been developed, and an extensive computational study is presented.

Track 2: Approximation algorithms for bin packing problems:

1. Rolf Harren and Rob van Stee, "An absolute 2-approximation algorithm for two-dimensional bin packing"

We consider the problem of packing rectangles into an unlimited supply of equally-sized, rectangular bins. All rectangles have to be packed non-overlapping and orthogonal, i.e., axis-parallel. The goal is to minimize the number of bins used.

In general, multi-dimensional packing is strongly related to scheduling, as one dimension can be associated with the processing time of the tasks (items) and the other dimensions make up geometrically ordered resources. In two-dimensional packing, e.g., tasks have to be processed on consecutive machines. The bin packing setting that we consider here applies when the machines work in shifts, e.g., only during the daytime. Thus the schedule has to be fixed into separate blocks.

Most of the previous work on two-dimensional bin packing has focused on the asymptotic approximation ratio, i.e., allowing a constant number of spare bins. In this setting, algorithms by Caprara and by Bansal et al. have approximation ratio better than 2. On the other hand, the additive constants are (very) large and thus these algorithms give bad approximation ratios for instances with small optimal value.

In terms of absolute approximability, i.e., without wasting additional bins, the previously best-known algorithms have a ratio of 3. If rotations by 90 degrees are permitted, the problem becomes somewhat easier as we do not have to combine differently oriented thin items into a bin. We previously showed an absolute 2-approximation for this case.

Here we present an algorithm for the problem without rotations with an absolute worst-case ratio of 2, i.e., our algorithm uses at most twice the optimal number of bins. It was shown that it is strongly NP-complete to decide whether a set of squares can be packed into a given square. Therefore our result is best possible unless P = NP, and we thus settle the question of absolute approximability of two-dimensional bin packing.

Our algorithm consists of three main parts depending on the optimal value of the given instance. As we do not know this optimal value in advance, we apply all three algorithms and allow them to fail if the requirement on the optimal value is not satisfied. The algorithms with asymptotic approximation ratio less than 2 give an absolute 2-approximation for instances with large optimal value. We design further algorithms to solve instances that fit into one bin and that fit into a constant number of bins, respectively. All three algorithms together make up an absolute 2-approximation.

2. Leah Epstein and Asaf Levin, "Bin packing with general cost structures"

Following the work of Anily et al., we consider a variant of bin packing, called bin packing with general cost structures and design an asymptotic fully polynomial time approximation scheme (AFPTAS) for this problem. In the classic bin packing problem, a set of one-dimensional items is to be assigned to subsets of total size at most 1, that is, to be packed into unit sized bins. However, in our problem, the cost of a bin is not 1 as in classic bin packing, but it is a non-decreasing and concave function of the number of items packed in it, where the cost of an empty bin is zero. The construction of the AFPTAS requires novel techniques for dealing with small items, which are developed in this work. In addition, we develop a fast approximation algorithm which acts identically for all non-decreasing and concave functions, and has an asymptotic approximation ratio of 1.5 for all functions simultaneously.
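A small numerical illustration (ours) of why the distribution of items over bins, and not only the number of bins, matters under such cost functions:

```latex
% Take f(k)=\sqrt{k} (non-decreasing, concave, f(0)=0) and items of sizes 0.6,\,0.6,\,0.2,\,0.2.
% Every feasible packing needs two bins, so the classic cost is 2 in both cases below, yet
\{0.6,0.2,0.2\},\ \{0.6\}:\quad \sqrt{3}+\sqrt{1}\approx 2.73,
\qquad
\{0.6,0.2\},\ \{0.6,0.2\}:\quad \sqrt{2}+\sqrt{2}\approx 2.83 .
```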

14:00-15:15 Parallel sessions:

Track 1: Algorithms:

1. Igor Averbakh, "Minmax regret bottleneck problems with solution-induced interval uncertainty structure"

We consider minmax regret bottleneck subset-type combinatorial optimization problems, where feasible solutions are some subsets of a finite ground set. The weights of elements of the ground set are uncertain; for each element, an uncertainty interval that contains its weight is given. In contrast with previously studied interval data minmax regret models, where the set of scenarios (possible realizations of the vector of weights) does not depend on the chosen feasible solution, we consider the problem with a solution-induced interval uncertainty structure. That is, for each element of the ground set, a nominal weight from the corresponding uncertainty interval is fixed, and it is assumed that only the weights of the elements included in the chosen feasible solution can deviate from their respective nominal values. We present a number of algorithmic results for bottleneck minmax regret problems of this type, in particular, a quadratic algorithm for the problem on a uniform matroid.

2. Endre Boros, Ondrej Cepek, Alex Kogan and Petr Kucera, "A subclass of Horn CNFs optimally compressible in polynomial time"

The problem of Horn Minimization (HM) can be stated as follows: given a Horn CNF representing a Boolean function f, find a CNF representation of f which consists of a minimum possible number of clauses. This is a classical combinatorial optimization problem with many practical applications. For instance, the problem of knowledge compression for speeding up queries to propositional Horn expert systems is equivalent to HM.

HM is a computationally difficult problem: it is known to be NP-hard even if the input is restricted to cubic Horn CNFs. On the other hand, there are two subclasses of Horn CNFs for which HM is known to be solvable in polynomial time: acyclic and quasi-acyclic Horn CNFs. In this talk we introduce a new class of Horn CNFs which properly contains both of the known classes and describe a polynomial time HM algorithm for this new class.

3. Alexander Zadorojniy, Guy Even and Adam Shwartz, "A strongly polynomial algorithm for controlled queues"

We consider the problem of computing optimal policies of finite-state, finite-action Markov Decision Processes (MDPs). A reduction to a continuum of constrained MDPs (CMDPs) is presented such that the optimal policies for these CMDPs constitute a path in a graph defined over the deterministic policies. This path contains, in particular, an optimal policy of the original MDP. We present an algorithm based on this new approach that finds this path and thus an optimal policy. In the general case this path might be exponentially long in the number of states and actions. We prove that the length of this path is polynomial if the MDP satisfies a coupling property. Thus we obtain a strongly polynomial algorithm for MDPs that satisfy the coupling property. We prove that discrete time versions of controlled M/M/1 queues induce MDPs that satisfy the coupling property. The only previously known polynomial algorithm for controlled M/M/1 queues in the expected average cost model is based on linear programming (and is not known to be strongly polynomial). Our algorithm works both for the discounted and expected average cost models, and the running time does not depend on the discount factor.

Track 2: Scheduling 1:

1. Bezalel Gavish, "Batch sizing and scheduling in serial multistage production system"

The authors propose a procedure to model and determine the optimal batch sizes and their schedule in a serial multistage production process with a constant demand rate for the final product. The model takes into consideration the batch setup time in each stage and the processing times of a batch as a function of batch size and production stage. It considers the holding and shortage costs of the final product, and the setup and processing costs at each stage of the production process. The objective is to maximize the net profit per unit of time derived from selling the final products. We report on extensive computational experiments that tested the impact of different factors on the objective function and the optimal batch size. A feature unique to this paper is the concentration on multistage serial systems consisting of hundreds of stages in the production process; in such systems a batch can spend from one to eight months in the production process. Batch sizing and scheduling have a great impact on the net revenues derived from the production facility. Due to the tight tolerances of the computer-controlled production process, setup and processing times are deterministic.

2. Yaron Leyvand, Dvir Shabtay, George Steiner and Liron Yedidsion, "Just-in-time scheduling with controllable processing times on single and parallel machines"

We study bicriteria scheduling problems with controllable processing times on single and parallel machines. Our objectives are to maximize the weighted number of jobs that are completed exactly at their due date and to minimize the total resource allocation cost. We consider four different models for treating the two criteria. We prove that three of these problems are NP-hard even on a single machine, but somewhat surprisingly, the problem of maximizing an integrated objective function can be solved in polynomial time even for the general case of a fixed number of unrelated parallel machines. For the three NP-hard versions of the problem, with a fixed number of machines and a discrete resource type, we provide a pseudo-polynomial time optimization algorithm, which is converted to a fully polynomial time approximation scheme to find a Pareto optimal solution.

3. Gur Mosheiov and Assaf Sarig, "Scheduling a maintenance activity to minimize total weighted completion-time"

We study a single machine scheduling problem. The processor needs to go through a maintenance activity, which has to be completed prior to a given deadline. The objective function is minimum total weighted completion time. The problem is proved to be NP-hard, and a pseudo-polynomial dynamic programming algorithm shows that it is NP-hard only in the ordinary sense. We also present an efficient heuristic which is shown numerically to perform well.

15:25-16:15 Parallel sessions:

Track 1: Supply chain:

1. Tadeusz Sawik, "A two-objective mixed integer programming approach for supply chain scheduling"

The integrated and hierarchical approaches based on mixed integer programming are proposed for a bi-objective coordinated scheduling in a customer driven supply chain. The supply chain consists of multiple suppliers (manufacturers) of parts and a single producer of finished products. Given a set of customer orders, the problem objective is to determine a coordinated schedule for the manufacture of parts by each supplier, for the delivery of parts from each supplier to the producer, and for the assignment of orders to planning periods at the producer, such that a high revenue or a high customer service level is achieved and the total cost of holding supply chain inventory is minimized. The proposed two approaches are described below.

1. Integrated (simultaneous) approach. The coordinated schedules for customer orders, for manufacturing of parts and for supply of parts are determined simultaneously to achieve a minimum number of tardy orders or a minimum lost revenue due to tardiness or full rejection of orders, at a minimum cost of holding total supply chain inventory of parts and finished products.

2. Hierarchical (sequential) approach. First, the order acceptance/due date setting decisions are made to select a maximal subset of orders that can be completed by the requested due dates, and for the remaining orders delayed due dates are determined to minimize the lost revenue owing to tardiness or rejection of the orders. The due dates must satisfy capacity constraints and are considered to be deadlines at the order scheduling level. Next, order deadline scheduling is performed to meet all committed due dates and to minimize the cost of holding finished product inventory of orders completed before the deadlines. Finally, scheduling of manufacturing and supply of parts, coordinated with the schedule for orders, is accomplished to meet demand for parts at a minimum cost of part supply and inventory holding.

Numerical examples modeled after a real-world scheduling in a customer driven supply chain of high-tech products are presented and some computational results are reported to compare the two approaches. To determine subsets of non-dominated solutions for the integrated approach, in the computational experiments the weighted-sum program is compared with the Tchebycheff program.
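For reference, the two scalarizations being compared have the following generic forms, where f_1 and f_2 denote the two objectives, z_i^* their ideal values and lambda_i >= 0 the weights (notation ours):

```latex
\min_{x \in X}\; \lambda_1 f_1(x) + \lambda_2 f_2(x)
\qquad \text{(weighted-sum program)}
\min_{x \in X}\; \max\bigl\{\lambda_1\,(f_1(x)-z_1^*),\; \lambda_2\,(f_2(x)-z_2^*)\bigr\}
\qquad \text{(weighted Tchebycheff program)}
```

The weighted-sum program can reach only supported non-dominated solutions, whereas the Tchebycheff program can also reach unsupported ones, which is one reason for comparing the two.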

2. Grzegorz Pawlak, "Multi-layer agent scheduling and control model in the car factory"

The complex planning and production tasks in a car factory necessitate sophisticated scheduling and control systems. The goal of the work is to keep production in balance, without breaks and without holding too many parts in stock. The production process is controlled and measured by taking into account many coefficients: the inventory level, buffer occupation, production flow, deadlines, sequence quality, failures, etc. Building a global optimization system is difficult, and usually such a system does not consider all constraints and dependencies. The alternative is to build a model with distributed local agents, schedulers and control units that adapt quickly to changing circumstances. A disturbance of the process caused by demand changes or breakdowns of the lines should lead to an appropriate reaction of the scheduling and control system. In a fast-changing environment an intelligent, self-adapting scheduling and control system may be an advantage in the production optimization process. In situations where the production speed must be set, or the lines of a multi-line production system must be synchronized, agent-driven control is a reasonable alternative. Additionally, intelligent use and control of buffers and switches can improve production efficiency. Every car contains thousands of parts to be assembled, which is why such a production system can be regarded as one of the most complicated, and companies in other segments can follow this application example; the model is therefore not restricted to car factories.

The presented model takes into account a multi-layer control agent system, with agents playing specific roles: control, monitoring and visualization, and scheduling agents. Each of them has a defined execution domain and activity area. The configuration and management of such agent systems have been taken into account, and classical scheduling models and algorithms have been incorporated into the bigger view of the production system. Summarizing, there is a need for a self-adapting production control system that is able to keep production at the speed appropriate for the demand-driven production plan and for the current state of production facilities and resources. In response to this problem, we would like to create a multi-line, self-adapting production control system consisting of high-level control software, communication protocols, and production-line controller behavior scenarios prepared for different modes and situations.

Track 2: Networks:

1. Oren Ben-Zwi, Danny Hermelin, Daniel Lokshtanov and Ilan Newman, "An exact almost optimal algorithm for target set selection in social networks"

The Target Set Selection problem, proposed by Kempe, Kleinberg, and Tardos, gives a nice clean combinatorial formulation for many problems arising in economics, sociology, and medicine. Its input is a graph with vertex thresholds, the social network, and the goal is to find a subset of vertices, the target set, that "activates" a prespecified number of vertices in the graph. Activation of a vertex is defined via a so-called activation process as follows: Initially, all vertices in the target set become active. Then at each step i of the process, each vertex gets activated if the number of its active neighbors at iteration i-1 exceeds its threshold. The activation process is "monotone" in the sense that once a vertex is activated, it remains active for the entire process.
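A direct simulation of this activation process (illustrative Python; the graph representation and names are ours, and activation here requires at least threshold[v] active neighbours, so adjust the comparison if a strict threshold is intended):

```python
def activation_process(neighbors, threshold, target_set):
    """neighbors[v]: iterable of v's neighbours; threshold[v]: v's threshold.
    Starting from the target set, a vertex becomes active once enough of its
    neighbours are active; by monotonicity, iterating to a fixed point yields
    the final set of activated vertices."""
    active = set(target_set)
    changed = True
    while changed:
        changed = False
        for v in neighbors:
            if v not in active and sum(u in active for u in neighbors[v]) >= threshold[v]:
                active.add(v)
                changed = True
    return active

# Example: on the path a-b-c with all thresholds 1, targeting {a} activates all vertices.
print(activation_process({'a': ['b'], 'b': ['a', 'c'], 'c': ['b']},
                         {'a': 1, 'b': 1, 'c': 1}, {'a'}))
```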

Unsurprisingly perhaps, Target Set Selection is NP-complete. More surprising is the fact that both its maximization and minimization variants turn out to be extremely hard to approximate, even for very restrictive special cases. The only case for which the problem is known to have some sort of acceptable worst-case solution is the case where the given social network is a tree, in which the problem becomes polynomial-time solvable. In this paper, we attempt to extend this sparse landscape of tractable instances by considering the treewidth parameter of graphs. This parameter roughly measures the degree of tree-likeness of a given graph, e.g., the treewidth of a tree is 1, and has previously been used to tackle many classical NP-hard problems in the literature.

Our contribution is twofold: First, we present an algorithm for Target Set Selection running in n^{O(w)} time for graphs with n vertices and treewidth bounded by w. The algorithm utilizes various combinatorial properties of the problem, drifting somewhat from standard dynamic-programming algorithms for small-treewidth graphs. It can also be adapted to much more general settings, including the case of directed graphs, weighted edges, and weighted vertices. On the other hand, we show that it is highly unlikely to find an n^{o(\sqrt{w})} time algorithm for Target Set Selection, as this would imply a sub-exponential algorithm for all problems in the class SNP. Together with our upper bound result, this shows that the treewidth parameter determines the complexity of Target Set Selection to a large extent, and should be taken into consideration when tackling this problem in any scenario.

2. André Amaral, Alberto Caprara, Juan José Salazar González and Adam Letchford, "Lower bounds for the minimum linear arrangement of a graph"

Minimum Linear Arrangement is a classical basic combinatorial optimization problem from the 1960s that turns out to be extremely challenging in practice. In particular, for most of its benchmark instances, even the order of magnitude of the optimal solution value is unknown, as testified by the surveys on the problem that contain tables in which the best known solution value has many more digits than the best known lower bound value. Since, for most of the benchmark instances, finding a provably optimal solution appears to be completely out of reach at the moment, in this paper we propose a linear-programming based approach to compute lower bounds on the optimum. This allows us, for the first time, to show that the best known solutions are indeed not far from optimal.

16:45-17:45 Plenary talk – Maurice Queyranne "Structural and algorithmic properties for parametric minimum cuts"

We consider the minimum s-t-cut problem in a network with parametrized arc capacities. Classes of this parametric problem have been shown to enjoy the nice structural property that minimum cuts are nested and, following the seminal work of Gallo, Grigoriadis and Tarjan (1989), the nice algorithmic property that all minimum cuts can be computed in the same asymptotic time as a single minimum cut.

We present a general framework for parametric minimum cuts that extends and unifies such results. We define two conditions on parametrized arc capacities that are necessary and sufficient for (strictly) decreasing differences of the parametric cut function. Known results in parametric submodular optimization then imply the structural property. We show how to construct appropriate flow updates in linear time under the above conditions, implying that the algorithmic property also holds under these conditions. We then consider other classes of parametric minimum cut problems, without decreasing differences, for which we establish the structural and/or the algorithmic property, as well as other cases where nested minimum cuts arise.

This is joint work with Frieda Granot and S. Thomas McCormick (Sauder School of Business at UBC) and Fabio Tardella (Università La Sapienza, Rome).

17:55-19:10 Parallel sessions:

Track 1: Scheduling 2:

1. Jan Pelikán, " Hybrid flow shop with adjustment"

The subject of this paper is a flow shop based on a case study aimed at the optimisation of the ordering of production jobs in mechanical engineering, in order to minimise the overall processing time, the makespan. There is a given set of production jobs to be processed by the machines installed on the shop floor. A job is a product batch for a number of units of a given product assigned for processing on a machine. Each job is assigned to a certain machine, which has to be adjusted by an adjuster. This worker adjusts all of the machines installed on the floor, but at any given time he can adjust only one machine, i.e., he cannot adjust several machines simultaneously. Each adjusted machine is supposed to be capable of immediately starting to process the job for which it has been adjusted. No job processing is allowed to be interrupted (i.e., preemption is not admissible), and each machine can process only one job at a time. The processing time minimisation problem is aimed at reducing the waiting times both of the machines for the adjuster and of the adjuster for a machine to be adjusted. After completing a job on a machine, the machine has to wait if the adjuster is still adjusting another machine. If all machines are processing jobs, the adjuster has to wait. A solution is represented by the order of the jobs in which the adjuster adjusts the respective machines. This order also determines the order of the machines to be adjusted by the adjuster. At the same time, the job ordering determines the order of the jobs on the machines when several jobs are assigned to the same machine. In the literature, a hybrid flow shop is defined as a problem of processing jobs in two or more stages, with one or more processors at each stage. Each of the jobs to be processed consists of two or more tasks and each task is processed within its own stage. The jobs are non-preemptable and each subsequent stage is only started after the processing of the previous stage is completed. A hybrid flow shop as described here further assumes that the following job at the first stage can be processed immediately upon completion of the preceding job at the first stage. Therefore, the problem of this case study can be formulated as a two-stage hybrid flow shop, in which the first stage is represented by the work of the adjuster, who is viewed as the only processor at the first stage. The second stage is represented by the machines on the shop floor, viewed as parallel processors; however, jobs are uniquely assigned to these processors. The objective function is defined as the makespan, i.e., the overall time spent on processing all jobs at both stages. Contrary to the hybrid flow shop defined in the literature, there are two differences for two consecutive jobs here:

a) If both jobs are assigned to the same machine, i.e., the same second-stage processor, the second job's first stage can only be started upon completion of the first job's second stage, when the machine is available;

b) If each of the two consecutive jobs is assigned to a different machine as a second-stage processor, the second job's first stage can be started immediately upon completion of the first job's first stage.

A mathematical model is proposed, a heuristic method is formulated, and the NP-hardness of the problem, called a "hybrid flow-shop with adjustment," is proved.
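The makespan of a given adjustment order under rules a) and b) can be computed by a short simulation (an illustrative sketch with our own names, not the paper's model or heuristic):

```python
def makespan(order, machine_of, setup, proc):
    """order: job indices in the order the single adjuster handles them;
    machine_of[j]: machine of job j; setup[j], proc[j]: adjustment and
    processing times. Processing starts immediately after adjustment."""
    adjuster_free = 0.0
    machine_free = {}
    finish = 0.0
    for j in order:
        m = machine_of[j]
        # rule a): on the same machine the adjuster must wait until the previous job finishes;
        # rule b): on a different machine he can start right after his previous adjustment.
        start_setup = max(adjuster_free, machine_free.get(m, 0.0))
        adjuster_free = start_setup + setup[j]
        completion = start_setup + setup[j] + proc[j]
        machine_free[m] = completion
        finish = max(finish, completion)
    return finish
```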

2. Alexander Lazarev, "An approximation method for estimating optimal value of minimizing scheduling problems"

3. Alexander Kvaratskhelia and Alexander Lazarev, "An algorithm for total weighted completion time minimization in preemptive equal job length with release dates on a single machine"

We consider the problem of minimizing total weighted completion time on a single machine for jobs of equal length with release dates, where preemption is allowed. We propose a polynomial time algorithm that solves the problem. Before this paper, the problem was known to be open (http://www.lix.polytechnique.fr/~durr/OpenProblems/1_rj_pmtn_pjp_sumWjCj/). We refer to the total weighted completion time problem (without release dates or preemption, and with arbitrary job lengths), which can be solved in polynomial time using Smith's rule.

Our algorithm is also based on Smith's rule, which is applied to solve a sub-problem whose purpose is to find an optimal processing order within a time interval between two adjacent release dates. An optimal schedule is constructed starting from the earliest release date. At each step we construct a sub-problem consisting of the jobs that are already available at the corresponding release date. We find an optimal processing order of these jobs using Smith's rule, and then we move on to the next release date.

We propose a number of properties of the optimal schedule, which prove the optimality of the schedule constructed by the algorithm in polynomial time.
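For reference, Smith's rule sequences jobs in non-decreasing order of p_j / w_j; a minimal sketch (ours) for the problem without release dates:

```python
def smith_rule(jobs):
    """jobs: list of (p_j, w_j). Returns the WSPT order and the resulting total
    weighted completion time, which is optimal for 1 || sum w_j C_j."""
    order = sorted(range(len(jobs)), key=lambda j: jobs[j][0] / jobs[j][1])
    t = total = 0
    for j in order:
        p, w = jobs[j]
        t += p
        total += w * t
    return order, total

# Example: for jobs (p, w) = (3, 1), (1, 2), (2, 2) the order is [1, 2, 0] and the cost is 14.
print(smith_rule([(3, 1), (1, 2), (2, 2)]))
```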

Track 2: Optimization 1:

1. Michael Katz and Carmel Domshlak, "Pushing the envelope of abstraction-based admissible heuristics"

The field of automated, domain-independent planning seeks to build general-purpose algorithms enabling a system to synthesize a course of action that will achieve certain goals. Such algorithms perform reachability analysis in large-scale state models that are implicitly described in a concise manner via some intuitive declarative language. And though planning problems have been studied since the early days of Artificial Intelligence research, recent developments (and, in particular, recent developments in planning as heuristic search) have dramatically advanced the field, and also substantially contributed to some related fields such as software/hardware verification, control, information integration, etc.

The difference between various algorithms for planning as heuristic search is mainly in the heuristic functions they define and use. Most typically, an (admissible) heuristic function for domain-independent planning is defined as the (optimal) cost of achieving the goals in an over-approximating abstraction of the planning problem in hand. Such an abstraction is obtained by relaxing certain constraints that are present in the specification of the real problem, and the desire is to obtain a provably poly-time solvable, yet informative abstract problem. The main questions are thus:

1. What constraints should we relax to obtain such an effective over-approximating abstraction? 2. How should we combine information provided by multiple such abstractions?

In this work we consider both these questions, and present some recent formal results that help answer them (sometimes even to optimality). First, we consider a generalization of the popular "pattern database" (PDB) homomorphism abstractions to what is called "structural patterns". The basic idea is to abstract the problem in hand into provably tractable fragments of optimal planning, thereby alleviating the constraint of PDBs to use only projections of low dimensionality. We introduce a general framework for additive structural patterns based on decomposing the problem along a certain graphical structure induced by it, suggest a concrete non-parametric instance of this framework called fork-decomposition, and formally show that the admissible heuristics induced by the latter abstractions provide state-of-the-art worst-case informativeness guarantees on several standard domains. Specifically, we describe a procedure that takes a classical planning task, a forward-search state, and a set of abstraction-based admissible heuristics, and derives an optimal additive composition of these heuristics with respect to the given state. Most importantly, we show that this procedure is polynomial-time for arbitrary sets of all abstraction-based heuristics known to us.

2. Roberto Battiti and Paolo Campigotto, "Brain-machine optimization: learning objective functions by interacting with the final user"

As a rule of thumb, ninety percent of the problem-solving effort in the real world is spent on defining the problem, on specifying in a computable manner the function to be optimized. After this modeling work is completed, optimization becomes in certain cases a commodity. The implication is that much more research effort should be devoted to design supporting techniques and tools to help the final user, often without expertise in mathematics and in optimization, to define the function corresponding to his real objectives. Machine learning can help, after the important context of interfacing with a human user to provide feedback signals is taken into account, and after the two tasks of modeling and solving are integrated into a loop so that an initial rough definition can be progressively refined by bridging the representation gap between the user world and the mathematical programming world.

To consider a more specific case, many real world optimization problems are typically multi-objective optimization (MOO) problems. Efficient optimization techniques have been developed for a wide range of MOO problems but, as mentioned, asking a user to quantify the weights of the different objectives before seeing the actual optimization results is in some cases extremely difficult. If the final user is cooperating with an optimization expert, misunderstanding between the persons may arise. This problem can become dramatic when the number of the conflicting objectives in MOO increases. As a consequence, the final user will often be dissatisfied with the solutions presented by the optimization expert. The formalization of the optimization problem may be incomplete because some of the objectives remain hidden in the mind of the final user.

Although providing explicit weights and mathematical formulas can be difficult for the user, for sure he can evaluate the returned solutions. In most cases, the strategy to overcome this situation consists of a joint iterative work between the final user and the optimization expert to change the definition of the problem itself. The optimization tool will then be re-executed over the new version of the problem.

We design a framework to automatically learn the knowledge that cannot be clearly stated by the final user, i.e., the weights of the ``hidden" objectives. Each run of an optimization tool provides a set of non-dominated solutions. The learning process we investigate is based on the final user evaluation of the solution presented. Even if the final user cannot state in a clear and aware manner the unsatisfied objectives, he can select his favorite solutions among the ones provided by the optimization expert, after the results are appropriately clustered and a suitably small number of questions is answered.

In particular, our framework is based on the Reactive Search Optimization (RSO) technique, which advocates the adoption of learning mechanisms as an integral part of heuristic optimization schemes for solving complex optimization problems. The ultimate goal of RSO is the complete elimination of human intervention in the parameter-tuning process of heuristic algorithms. Our framework replaces the optimization expert in the definition of the optimization task, by directly learning the "hidden" objective functions from the final user's evaluation of the solutions provided by the optimization tools. The theoretical framework and some experimental results will be presented in the extended version of this work.

3. Piotr Lukasiak, Jacek Blazewicz and David Klatzmann, "CompuVac* - development and standardized evaluation of novel genetic vaccines"

Recombinant viral vectors and virus-like particles are considered the most promising vehicles to deliver antigens in prophylactic and therapeutic vaccines against infectious diseases and cancer. Several potential vaccine designs exist but their cost-effective development cruelly lacks a standardized evaluation system. On these grounds, CompuVac is devoted to (i) rational development of a novel platform of genetic vaccines and (ii) standardization of vaccine evaluation.

CompuVac assembles a platform of viral vectors and virus-like particles that are among today's most promising vaccine candidates and that are backed up by the consortium's complementary expertise and intellectual property, including SMEs focusing on vaccine development.

CompuVac recognizes the lack of uniform means for side-by-side qualitative and quantitative vaccine evaluation and will thus standardize the evaluation of vaccine efficacy and safety by using "gold standard" tools, molecular and cellular methods in virology and immunology, and algorithms based on genomic and proteomic information.

"Gold standard" algorithms for intelligent interpretation of vaccine efficacy and safety will be built into CompuVac's interactive "Genetic Vaccine Decision Support System", which should generate (i) vector classification according to induced immune response quality, accounting for gender and age, (ii) vector combination counsel for prime-boost immunizations, and (iii) vector safety profile according to genomic analysis.

The main objectives of CompuVac are:

• to standardize the qualitative and quantitative evaluation of genetic vaccines using defined "gold standard" antigens and methods,
• to rationally develop a platform of novel genetic vaccines using genomic and proteomic information, together with our gold standards, and
• to generate and make available to the scientific community a "tool box" and an "interactive database" allowing future vaccines to be comparatively assessed against our gold standards.

The vector platform used in CompuVac is made of both viral vectors and VLPs that are representative of, and considered among the best in, their class. Some, like the adenoviruses, are already in clinical development. Within this consortium, we will perform rational vector improvements and test rational prime-boost combinations, all guided by our comparative evaluation system. This should generate safer and more efficient vectors, or vector combinations, for clinical development.

The consortium will generate an interactive platform called GeVaDS (Genetic Vaccine Decision Support system). The platform will contain formatted data related to defined gold standard antigens and methods used to assess immune responses, i.e., newly acquired results, as well as algorithms allowing the intelligent comparison of new vectors to previously analyzed ones.

CompuVac aims at making GeVaDS accessible to any researcher developing genetic vaccines. Retrieving previously generated results, or introducing newly acquired results obtained with validated approved methods, should allow any researcher to rationally design and improve his or her vaccine vector as well as to comparatively assess its efficacy and potential. Internet access: http://www.compuvac.org (CompuVac), http://gevads.cs.put.poznan.pl (GeVaDSs).

Monday

9:00-10:00 Plenary talk – Yefim Dinitz "Efficient algorithms for AND/OR (max-min) scheduling problems on graphs"

The talk considers scheduling with AND/OR precedence constraints. The events are represented as the vertices of a non-negatively weighted graph G=(V,E,d). The precedence relation between them is given by its edges, and the delays by the edge weights. An event of type AND (max) must be scheduled at some time after all of its preceding events, with the corresponding delays after them. An event of type OR (min) must be scheduled at some time after at least one of its preceding events, with the corresponding delay after it. The earliest schedule is sought. This problem is a max-min generalization of PERT. Moreover, it may be considered as a mixture of the Shortest Path and PERT problems. We first consider the case when zero-weight cycles are absent in G. Surprisingly, a natural mixture of the classic Dijkstra and PERT algorithms works for it. It solves the problem in the combined time of those two algorithms, which is almost linear in the graph size. After that, we present an O(|V||E|) algorithm solving the problem in the general case, with zero-weight cycles allowed. We also describe the quite special history of the research on the AND/OR scheduling problem. It began with a paper of D. Knuth in the late 1970s (presenting the algorithm, but not the problem), went through particular cases and publications without proofs, and involved the names of Moehring, Adelson-Velsky, Levner, the lecturer, and others. Currently, the first full paper on the general case is in preparation.
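A hedged sketch of such a Dijkstra/PERT mixture for the case without zero-weight cycles, reconstructed from the description above (not the speaker's code): OR vertices are relaxed as in Dijkstra, while an AND vertex is finalized only once all of its predecessors are, at the maximum of their contributions.

```python
import heapq

def earliest_schedule(succ, indeg, node_type, sources):
    """succ[u]: list of (v, d) edges u -> v with delay d >= 0; indeg[v]: number of
    incoming edges of v; node_type[v]: 'AND' (wait for all predecessors, max) or
    'OR' (wait for at least one, min); sources: vertices without predecessors,
    scheduled at time 0. Returns the earliest event times."""
    INF = float('inf')
    t = {v: (0 if v in sources else INF) for v in succ}
    pending = dict(indeg)                  # unfinalized incoming edges of AND vertices
    and_bound = {v: 0 for v in succ}       # max contribution of finalized predecessors
    heap = [(0, s) for s in sources]
    heapq.heapify(heap)
    finalized = set()
    while heap:
        tu, u = heapq.heappop(heap)
        if u in finalized:
            continue
        finalized.add(u)
        for v, d in succ[u]:
            if v in finalized:
                continue
            if node_type[v] == 'OR':       # Dijkstra-style relaxation (min over predecessors)
                if tu + d < t[v]:
                    t[v] = tu + d
                    heapq.heappush(heap, (t[v], v))
            else:                          # AND vertex: PERT-style (max over all predecessors)
                and_bound[v] = max(and_bound[v], tu + d)
                pending[v] -= 1
                if pending[v] == 0:
                    t[v] = and_bound[v]
                    heapq.heappush(heap, (t[v], v))
    return t
```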

10:30- 11:45 Parallel sessions:

Track 1: Metaheuristics:

1. Johan Oppen, "Parametric models of local search progression"

Algorithms that search for good solutions to NP-hard combinatorial optimization problems produce a trace of current best objective values over time. The progression of objective function values of the best solutions found is reasonably modeled as stochastic, because the algorithms are often stochastic, and even when they are not, the progression of the search varies with each instance in unpredictable ways. Some characteristics are common to most searches: new bests are found quickly early in the search and not as quickly later on. We describe parametric models of this progression that are both interesting as ways to characterize the search progression in a compact way and useful as means of predicting search behavior. This is in contrast to non-parametric models that estimate the probability that the search will achieve a value better than some given threshold in a given amount of time. Both types of models have value, but parametric models offer the promise of greater predictive power. For practical purposes, it is useful to be able to use small instances to estimate parameters for models that can then be used to predict the time-trace performance on large instances.

2. Gregory Gutin and Daniel Karapetyan, "Local search heuristics for the multidimensional assignment problem"

The Multidimensional Assignment Problem (MAP) (abbreviated s-AP in the case of s dimensions) is an extension of the well-known assignment problem. The most studied case of MAP is 3-AP, though the problems with larger values of s also have a number of applications. We consider several known and new local search heuristics for MAP as well as their combinations and simple metaheuristics. Computational experiments with three instance families are provided and discussed. As a result, we select dominating local search heuristics. One of the most interesting conclusions is that a combination of two heuristics may yield a heuristic that is superior with respect to both solution quality and running time.

3. José Brandão, "A tabu search algorithm for the open vehicle routing problem with time windows"

The problem studied here, the open vehicle routing problem with time windows (OVRPTW), is different from the vehicle routing problem with time windows in that the vehicles do not return to the distribution depot after delivering the goods to the customers. We have solved the OVRPTW using a tabu search algorithm with embedded local search heuristics, which take advantage of the specific properties of the OVRP. The performance of the algorithm is tested using a large set of benchmark problems.

Track 2: Networks:

1. Silvano Martello, "Jacobi, Koenig, Egerváry and the roots of combinatorial optimization"

It is quite well-known that the first modern polynomial-time algorithm for the assignment problem, invented by Harold W. Kuhn half a century ago, was christened the "Hungarian method" to highlight that it derives from two older results, by Koenig (1916) and Egerváry (1931). A recently discovered posthumous paper by Jacobi (1804-1851) contains however a solution method that appears to be equivalent to the Hungarian algorithm. A second historical result concerns a combinatorial optimization problem, independently defined in satellite communication and in scheduling theory, for which the same polynomial-time algorithm was independently published thirty years ago by various authors. It can be shown that such algorithm directly implements another result by Egerváry, which also implies the famous Birkhoff-von Neumann theorem on doubly stochastic matrices.

2. Shoshana Anily and Aharona Pfeffer, " The uncapacitated swapping problem on a line and on a circle"

We consider the uncapacitated swapping problem on a line and on a circle. Objects of m types, which are initially positioned at n workstations on the graph, need to be rearranged in order to satisfy the workstations' requirements. Each workstation initially contains one unit of a certain object type and requires one unit of possibly another object type. We assume that the problem is balanced, i.e., the total supply equals the total demand for each of the object types separately. A vehicle of unlimited capacity is assumed to ship the objects in order to fulfill the requirements of all workstations. The objective is to compute efficiently a shortest route such that the vehicle can accomplish the rearrangement of the objects while following this route, given designated starting and ending workstations on a line, or the location of a depot on a circle. We propose polynomial-time exact algorithms for solving the problems: an O(n)-time algorithm for the linear track case, and an O(n^2)-time algorithm for the circular track case.

3. Susann Schrenk, Van-Dat Cung and Gerd Finke, "Revisiting the fixed charge transportation problem"

The classical transportation problem is a special case of the minimum cost flow problem with the property that the nodes of the network are either supply nodes or demand nodes. The sum of all supplies and the sum of all demands are equal, and a shipment between a supply node and a demand node induces a linear cost. The aim is to determine a least-cost shipment of a commodity through the network that satisfies the demands from the available supplies. This classical transportation problem was first presented by Hitchcock in 1941. In 1951 Dantzig proposed a standard formulation as a linear program along with a first implementation of the simplex method to solve it. This problem is polynomially solvable.

The transportation problem with a fixed cost associated with each transportation arc, called the fixed charge transportation problem, is NP-hard. This problem was first introduced by Hirsch and Dantzig in 1954 and has been widely investigated in the literature (Balinski 1961, Dantzig and Hirsch 1968, Sharma 1977). Most papers have focused on approximation schemes for solving the fixed charge transportation problem.

We study the special case with no variable cost and the same fixed cost on all arcs. This is in a way the simplest form of the problem, but we can show that it remains NP-hard. In this case, minimizing the transportation cost is equivalent to maximizing the number of arcs on which there is no transport, that is, to finding a solution with a maximum degree of degeneracy. Since the basic solutions are trees, solving the problem is equivalent to finding the tree with the maximum degree of degeneracy. Degeneracy occurs if there is a subset of the supply nodes and a subset of the demand nodes whose total supply and total demand are equal. We refer to this property as the equal-subsum property. Deciding the existence of degeneracy is an NP-complete problem; finding the maximum degree of degeneracy is therefore also NP-hard.
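As a small illustration of the equal-subsum property (assumptions mine: integer supplies and demands), the following Python sketch checks in pseudo-polynomial time whether some nonempty proper subset of the supplies has the same total as some nonempty proper subset of the demands.

    # Sketch: detect the equal-subsum property behind degeneracy.  The decision
    # problem is NP-complete; this check is only pseudo-polynomial (it depends
    # on the magnitudes of the data), which is fine for illustration.
    def subset_sums(values):
        """All sums achievable by a nonempty proper subset of `values`."""
        total = sum(values)
        reachable = {0}
        for v in values:
            reachable |= {s + v for s in reachable}
        return {s for s in reachable if 0 < s < total}

    def has_equal_subsum(supplies, demands):
        return bool(subset_sums(supplies) & subset_sums(demands))

    # hypothetical balanced instance (total supply = total demand = 20)
    print(has_equal_subsum([7, 6, 4, 3], [10, 5, 3, 2]))   # True, e.g. {7,3} vs {10}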

A new result in this context is the following: finding the maximum degree of degeneracy remains NP-complete even if all equal subsums are given. The proof of this theorem involves the maximum stable set problem and the set packing problem.

Page 21: Contributed talks in ECCO:ie.technion.ac.il/~levinas/Abstract_booklet_ECCO_2009.doc · Web viewFor a cubic graph G of order n to decompose K_n it is necessary that n = 6k+4. We show

12:00-12:50 Parallel sessions:

Track 1: Optimization 2:

1. Gur Mosheiov and Assaf Sarig, "Scheduling and due-date assignment problems with job rejection"

Scheduling with rejection reflects a very common scenario, in which the scheduler may decide not to process a job if it is not profitable. We study the option of rejection in several popular and well-known scheduling and due-date assignment problems. A number of settings are considered: due-date and due-window assignment problems with job-independent costs, a due-date assignment problem with job-dependent weights and unit jobs, minimization of the total weighted earliness and tardiness cost with job-dependent and symmetric weights (known as TWET), and several classical scheduling problems (minimum makespan, flow-time, earliness-tardiness) with position-dependent processing times. All problems (excluding TWET) are shown to have polynomial-time solutions. For the (NP-hard) TWET problem, a pseudo-polynomial time dynamic programming algorithm is introduced and tested numerically.

2. Ephraim Korach and Michal Stern, "TDIness in the clustering tree problem"

We consider the following problem: given a complete graph G=(V,E) with a weight on every edge and a collection of subsets of V, find a minimum-weight spanning tree T such that each subset of vertices in the collection induces a subtree of T. This problem arises, and has motivation, in the field of communication networks. Consider the case where every subset is of size at most three. We present a linear system that defines the polyhedron of all feasible solutions, and we prove that this system is totally dual integral (TDI). We present a conjecture for the general case, where the size of each subset is not restricted to at most three; this conjecture generalizes the above results.

Track 2: Convex combinatorial optimization problems:

1. Tiit Riismaa, "Application of convex extension of discrete-convex functions in combinatorial optimization"

The convex extension of discrete-convex functions makes it possible to adapt effective methods of convex programming to some discrete-convex programming problems of hierarchy optimization. An n-dimensional real-valued function is called discrete-convex if Jensen's inequality holds for all convex combinations of n+1 elements from the domain of definition; the use of convex combinations of n+1 elements follows from the well-known theorem of Caratheodory. The convex extension of a given function is the maximal convex function not exceeding the given function. Theorem: the function f can be extended to a convex function on conv X if f is discrete-convex on X. The graph of a discrete-convex function is a part of the graph of a convex function. The convex extension is the pointwise maximum over all linear functions not exceeding the given function; consequently, the convex extension of a discrete-convex function is a piecewise linear function, and each discrete-convex function has a unique convex extension. The class of discrete-convex functions is the largest class that can be extended in this way to convex functions. A class of local search iteration methods is developed; at each step of the iteration, the objective function has to be evaluated only at some vertices of a suitable unit cube.

Many discrete or finite hierarchical structuring problems can be formulated mathematically as a multi-level partitioning of a finite set into nonempty subsets. This partitioning is viewed as a hierarchy in which the subsets of the partitioning correspond to nodes and the containment relation between subsets defines the arcs. The feasible set of structures is the set of hierarchies corresponding to all multi-level partitionings of the given finite set. Each tree in this set is represented by a sequence of Boolean matrices, each of which is an adjacency matrix of neighboring levels. This formalism allows the reduced problem to be stated as a two-phase, mutually dependent optimization problem: the variables of the inner minimization problem describe the connections between adjacent levels, and the variables of the outer minimization problem represent the number of elements on each level. The two-phase statement of the hierarchy optimization problem guarantees that the discrete-convex objective function can be extended to a convex function and makes it possible to apply the local search algorithm to find the global optimum. The approach is illustrated by a multi-level production system example.
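In symbols (notation mine), the two defining properties used above, the Jensen condition and the extension as a pointwise maximum of affine minorants, read:

\[
f\Bigl(\sum_{i=1}^{n+1}\lambda_i x_i\Bigr)\;\le\;\sum_{i=1}^{n+1}\lambda_i f(x_i)
\qquad\text{whenever } x_1,\dots,x_{n+1}\in X,\ \lambda_i\ge 0,\ \sum_i\lambda_i=1,\ \sum_i\lambda_i x_i\in X ,
\]
\[
\bar f(x)\;=\;\max\bigl\{\,a^{\top}x+b \;:\; a^{\top}y+b\le f(y)\ \ \forall\, y\in X\,\bigr\},
\qquad x\in\operatorname{conv}X .
\]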

2. Boaz Golany, Moshe Kress, Michal Penn and Uriel Rothblum, "Resource allocation in temporary advantage competitions"

We consider a race between two competitors developing competing products, where the advantage achieved by one competitor is limited in time, lasting only until the opponent develops a new (better) product. We model the problem as a network optimization problem with side constraints and obtain some insights. We also briefly discuss the stochastic case, where the problems are modeled as (convex) non-linear optimization problems.


Tuesday

9:00-10:15 Parallel sessions:

Track 1: Scheduling 3:

1. Liron Yedidsion, "Computational analysis of two fundamental problems in MRP"

In this research we establish the computational complexity of two of the most basic problems in the field of Material Requirements Planning (MRP): the minimization of the total cost in the multi-echelon Bill of Material (BOM) system and in the assembly system. Although both problems have been studied extensively since the early 1950s, their computational complexity has remained an open question until now. We prove that both problems are strongly NP-hard; the minimization of the total cost in the multi-echelon BOM system is strongly NP-hard even when the lead time of all components is zero.

2. Sergey Sevastyanov and Bertrand M.T. Lin, "Efficient algorithm of generating all optimal sequences for the two-machine Johnson problem"

For the classical two-machine Johnson problem, our aim is to generate the set of all optimal job sequences (with respect to the minimum makespan objective), assuming that over this set of optimal sequences we may wish to optimize some secondary criterion.

Clearly, this purpose could be attained by enumerating all n! sequences of the n jobs, and this is the best possible bound for some instances (for example, those consisting of identical jobs). Yet for the majority of problem instances (which have only a few optimal solutions) this approach cannot be considered acceptable, in view of its excessive time consumption.

We say that an algorithm pursuing the above purpose is efficient if its running time can be estimated as O(P(|I|) |\Pi_{OPT}|), where P(|I|) is a polynomial of the input size and \Pi_{OPT} is the set of all optimal job sequences; in other words, if obtaining each optimal sequence requires, on average, at most polynomial time.

Such an efficient algorithm was designed in our paper. Its running time can be estimated as O(n \log n |\Pi_{OPT}|). Therefore, on average, we spend on generating each new optimal sequence the same time that Johnson's algorithm spends on producing its single optimal sequence. The algorithm is based on several lemmas containing basic properties of optimal sequences.
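As background, a minimal Python sketch of Johnson's classical rule, which produces one optimal sequence for the two-machine problem (the talk's contribution, enumerating all optimal sequences, is not reproduced here):

    # Sketch: Johnson's rule for F2 || Cmax.  Jobs with p1 < p2 go first in
    # increasing p1; the remaining jobs go last in decreasing p2.
    def johnson_sequence(jobs):
        """jobs: list of (p1, p2) processing-time pairs; returns an optimal order."""
        first = sorted((j for j in range(len(jobs)) if jobs[j][0] < jobs[j][1]),
                       key=lambda j: jobs[j][0])
        last = sorted((j for j in range(len(jobs)) if jobs[j][0] >= jobs[j][1]),
                      key=lambda j: jobs[j][1], reverse=True)
        return first + last

    def makespan(jobs, order):
        c1 = c2 = 0
        for j in order:
            c1 += jobs[j][0]               # completion on machine 1
            c2 = max(c1, c2) + jobs[j][1]  # completion on machine 2
        return c2

    jobs = [(3, 6), (5, 2), (1, 2), (6, 6), (7, 4)]   # hypothetical instance
    seq = johnson_sequence(jobs)
    print(seq, makespan(jobs, seq))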

3. Alexander Lazarev, "An approximate method for solving scheduling problems"

We consider the scheduling problems α|β|Fmax: a set of n jobs J1,…,Jn with release dates r1,…,rn, processing times p1,…,pn and due dates d1,…,dn has to be scheduled on a single machine or on several machines. Job preemption is not allowed. The goal is to find a schedule that minimizes a regular function F(C1,…,Cn), where Cj is the completion time of job j. We suggest an approximation scheme for finding an approximate optimal value of the objective function.

The approximation scheme: suppose we have an instance A = {(rj, pj, dj)} of the scheduling problem, and that the available computing resources are insufficient to solve it. The idea of the approach is as follows. We draw a line in 3n-dimensional space through the point (instance) A, for example {(rj, γpj, dj) | j \in N}, where γ \in [0, +∞); at γ = 1 we recover the initial instance A. A "reasonable" interval [Γ1, Γ2] on this line is then chosen, with Γ1 < 1 < Γ2; usually, at the boundary points (instances) of the interval the scheduling problem can be solved in acceptable time. Let H denote the upper limit of the available computing resources. We choose the values γ1,…,γk, for example, as the roots of the Chebyshev polynomial on the interval [Γ1, Γ2], since they cluster near the edges of the interval, where the instances can be solved. For the instances corresponding to γ1,…,γk for which there are sufficient computing resources, we compute the optimal objective values F1,…,Fk. Then, through the points (γ1,F1),…,(γk,Fk), we construct the Lagrange polynomial and evaluate it at γ = 1. The error Δ in the objective value for the initial instance (γ = 1) will not exceed Δ ≤ C |(1-γ1)…(1-γk)|/(k+1)!, where C is a constant. As a result, we do not construct a schedule; we only estimate the value of the objective function. The more points k we use, and the smaller the distance between Γ1 and Γ2, the smaller Δ becomes.
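A minimal Python sketch of the scheme as described, where solve_F is a hypothetical placeholder for an exact solver of the scaled instances and G1, G2 stand for Γ1, Γ2:

    # Sketch: sample the scaling parameter gamma at Chebyshev nodes, solve the
    # scaled instances exactly, and interpolate the objective value at gamma = 1.
    import math

    def chebyshev_nodes(a, b, k):
        """k Chebyshev nodes on the interval [a, b]."""
        return [0.5 * (a + b) + 0.5 * (b - a) * math.cos((2 * i + 1) * math.pi / (2 * k))
                for i in range(k)]

    def lagrange_eval(xs, ys, x):
        """Evaluate the Lagrange interpolating polynomial through (xs, ys) at x."""
        total = 0.0
        for i, (xi, yi) in enumerate(zip(xs, ys)):
            term = yi
            for j, xj in enumerate(xs):
                if j != i:
                    term *= (x - xj) / (xi - xj)
            total += term
        return total

    def estimate_objective(instance, solve_F, G1=0.5, G2=2.0, k=5):
        """instance: list of (r_j, p_j, d_j); solve_F: hypothetical exact solver
        returning the optimal F value of an instance."""
        gammas = chebyshev_nodes(G1, G2, k)
        values = [solve_F([(r, g * p, d) for (r, p, d) in instance]) for g in gammas]
        return lagrange_eval(gammas, values, 1.0)   # estimate for the original instance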

Track 2: Heuristics 1:

1. Mark Sh. Levin, "Combinatorial optimization problems in system configuration and reconfiguration"

In recent years, system configuration and reconfiguration problems have become a significant part of design and management in many applications (e.g., software, hardware, manufacturing, communications, supply chain systems, solving strategies, modular planning, combinatorial chemistry).

In the paper several basic system configuration/reconfiguration problems are examined: (i) searching for (selection of) a set (structure) of system components, (ii) searching for a set of compatible system components, (iii) allocation of system components, (iv) design of system hierarchy, and (v) reconfiguration of a system (e.g., change of system structure or system hierarchy).

The following combinatorial optimization problems are considered as underlying models (including multicriteria formulations): the problem of representatives, the multicriteria multiple choice problem, the multicriteria allocation problem, graph coloring and recoloring problems, the morphological clique problem (with compatibility of system components), multipartite clique or clustering of a multipartite graph, hierarchical clustering, and minimum spanning tree (including Steiner tree) problems. Solution approaches and applications are discussed.


2. Aleksandra Swiercz, Jacek Blazewicz, Marek Figlerowicz, Piotr Gawron, Marta Kasprzak, Darren Platt and Łukasz Szajkowski, "A new method for assembling of DNA sequences"

Genome sequencing is the process of recovering the DNA sequence of a genome. It is well known for its high complexity at both the biological and the computational level. The process is composed of a few phases: sequencing, assembling and finishing. In the first phase one obtains short DNA sequences of length up to a few hundred nucleotides (in the case of the Sanger method). With newer approaches to DNA sequencing (e.g. 454 sequencing), shorter DNA sequences are obtained, with relatively good quality and in a short time. As the output of 454 sequencing one gets, together with the sequences (of length around 100-200 nucleotides), qualities, i.e. the confidence rates of each nucleotide.

In the next phase, the assembling, short DNA sequences coming from the sequencing phase are combined into a longer sequence or into contiguous sequences, called contigs. In the last phase, the finishing, which can be omitted in the case of small genomes, the contigs are placed in their proper positions on the chromosome.

A novel heuristic algorithm, SR-ASM, has been proposed for the assembly problem; it deals well with data coming from 454 sequencing. The algorithm is based on a graph model in which the input DNA sequences are the vertices and arcs connect two vertices if the two respective sequences overlap. The algorithm is composed of three parts. In the first part the graph is constructed: at the beginning, a fast heuristic looks for pairs of sequences which possibly overlap and marks them as promising; next, the alignment score is computed for every promising pair, and an arc connecting the two respective vertices is added to the graph with the calculated score. In the second part one searches for a path in the graph which passes through the greatest number of vertices. Usually it is not possible to find one such path, due to errors in the input data, so instead of one path several paths are returned. In the last part of the algorithm, consensus sequences are constructed from the paths in the graph on the basis of a multiple alignment procedure.
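For illustration only (SR-ASM itself uses a fast candidate filter followed by scored alignments), a naive Python sketch of overlap-graph construction by exact suffix-prefix matching:

    # Sketch: build an overlap graph by exact suffix-prefix matching; the arc
    # score is simply the overlap length.  Real assemblers tolerate errors.
    def overlap(a, b, min_len=20):
        """Length of the longest suffix of `a` that equals a prefix of `b`."""
        best = 0
        for k in range(min_len, min(len(a), len(b)) + 1):
            if a[-k:] == b[:k]:
                best = k
        return best

    def overlap_graph(reads, min_len=20):
        """Arcs (i, j, score) whenever read i overlaps read j by at least min_len."""
        arcs = []
        for i, a in enumerate(reads):
            for j, b in enumerate(reads):
                if i != j:
                    k = overlap(a, b, min_len)
                    if k:
                        arcs.append((i, j, k))
        return arcs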

The usefulness of our algorithm has been tested on raw data coming from an experiment on sequencing the genome of the bacterium Prochlorococcus marinus performed at the Joint Genome Institute. As the output of the SR-ASM algorithm one gets a small number of long contigs which are highly similar to the genome of the bacterium. The results are compared with the outputs of other available assemblers.

10:45-12:00 Parallel sessions:

Track 1: Graphs:

1. Dietmar Cieslik, "The Steiner ratio"

Steiner's Problem is the "problem of shortest connectivity": given a finite set of points in a metric space X, find a network interconnecting these points with minimal length. This shortest network must be a tree and is called a Steiner Minimal Tree (SMT). It may contain vertices other than the points to be connected; such points are called Steiner points. If we do not allow Steiner points, that is, if we only connect certain pairs of the given points, we get a tree called a Minimum Spanning Tree (MST). Steiner's Problem is very hard both in the combinatorial and in the computational sense, but, on the other hand, the determination of an MST is simple. Consequently, we are interested in the greatest lower bound for the ratio between the lengths of these trees, which is called the Steiner ratio (of the space X). We present estimates and exact values for the Steiner ratio in several metric spaces, covering old and new results.
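In symbols (notation mine), the Steiner ratio of the metric space X is the greatest lower bound of this length ratio over all finite point sets:

\[
m(X)\;=\;\inf\Bigl\{\,\frac{L(\mathrm{SMT}(N))}{L(\mathrm{MST}(N))}\;:\;N\subseteq X\ \text{finite},\ |N|\ge 2\Bigr\}.
\]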

2. Martin Charles Golumbic and Robert E. Jamison, "Tolerance and Rank-Tolerance in Graphs: Mathematical and Algorithmic Problems"

In this talk, we set out a plan for future investigation of two global themes which focus on special families and properties of graphs. These involve the notion of measured intersection known as tolerance and a more general framework called rank-tolerance. Tolerance and rank-tolerance graphs were originally motivated by work spawned by the classical family of interval intersection graphs.

In a rank-tolerance representation of a graph, each vertex is assigned two parameters: a rank, which represents the size of that vertex, and a tolerance, which represents an allowed extent of conflict with other vertices. Two vertices are adjacent if and only if their joint rank exceeds (or equals) their joint tolerance. By varying the coupling functions used to obtain the joint rank or joint tolerance, a variety of graph classes arise, many of which have interesting structure.

Applications, algorithms and their complexity have driven much of the research work in these areas. The research problems posed here provide challenges to those interested in structured families of graphs.

These include trying to characterize the class SP of sum-product graphs, where the tolerance coupling function is the sum of the two tolerances and the rank coupling function is the product of the two ranks, and investigating the class of mix graphs, which are obtained by interpolating between the min and max functions.
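In symbols (notation mine), with a rank coupling function φ and a tolerance coupling function ψ, adjacency is defined by the rule below, which for the class SP specializes to the sum-product inequality:

\[
uv\in E(G)\iff \phi(r_u,r_v)\;\ge\;\psi(t_u,t_v),
\qquad\text{for SP:}\quad r_u\,r_v\;\ge\;t_u+t_v .
\]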

3. Peter Adams, Moshe Rosenfeld and Hoa Vu Dinh, "Spanning graph designs"

Classical balanced incomplete block designs (BIBD) explore decompositions of the edges of the complete graph K_n into "blocks" (complete subgraphs) of prescribed size(s) so that every edge of K_n appears in the same number of blocks. A natural generalization of these designs is to extend "blocks" to other graphs. For example, decompositions of K_n into cycles have been studied by many authors, but the most general case, the Oberwolfach Problem, is still open.

Of particular interest are spanning graph designs, where the blocks are graphs on n vertices. We study two groups of spanning graph designs: spanning cubic graph designs and spanning designs inspired by equiangular lines in R^d.

For a cubic graph G of order n to decompose K_n it is necessary that n = 6k+4. We show various constructions of such decompositions and conjecture that, for n large enough, all cubic graphs of order 6k+4 decompose K_{6k+4}.
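The counting argument behind this necessary condition, written out: a spanning cubic subgraph of K_n has 3n/2 edges, so n must be even, and the number of copies in a decomposition must be an integer:

\[
|E(G)|=\tfrac{3n}{2}\ \Rightarrow\ n\ \text{even};\qquad
\frac{|E(K_n)|}{|E(G)|}=\frac{n(n-1)/2}{3n/2}=\frac{n-1}{3}\in\mathbb{Z}\ \Rightarrow\ n\equiv 1\ (\mathrm{mod}\ 3);
\]
\[
\text{together:}\quad n\equiv 4\ (\mathrm{mod}\ 6),\ \text{i.e.}\ n=6k+4 .
\]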


The second group of spanning graph designs involves graphs G(2n, n-1), that is, graphs of order 2n that are regular of degree n-1. The implied spanning graph design yields 2n-1 copies of G so that every edge of K_{2n} appears in exactly n-1 copies of G. For instance, one can construct 11 labeled copies of the icosahedron (a G(12,5)) so that every edge of K_{12} appears in exactly 5 copies. These designs turn out to be related to Hadamard matrices, conference matrices and other interesting combinatorial designs.

Track 2: Heuristics 2:

1. Michal Penn and Eli Troupiansky, "A heuristic algorithm for solving the heterogeneous open vehicle routing problem"

Given a capacitated directed graph with a depot vertex and demand vertices, the VRP (Vehicle Routing Problem) aims at minimizing the cost of supplying the required demand, that is, minimizing the number of capacitated vehicles needed and the cost of the tours. The problem is known to be NP-hard, and much of the research effort is devoted to various heuristic methods for solving it.

We consider a variant of the VRP, termed the Open VRP (OVRP), where the vehicles are not required to return to the depot at the end of their tours. If, in addition, the vehicles are of different capacities and costs, the problem is called the Heterogeneous OVRP.

We present a greedy heuristic for solving the Heterogeneous OVRP. Our greedy algorithm consists of two stages: construction of an initial feasible solution and post-optimization to improve the initial solution. We compare our heuristic to some known algorithms for the homogeneous case and to Cross-Entropy (CE) in the general heterogeneous case. Our preliminary results show that the greedy heuristic outperforms CE.
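Purely as an illustration of the two-stage idea (this is not the authors' algorithm), a Python sketch of an open-route construction step: routes are grown by nearest-neighbour insertion and then assigned the cheapest vehicle whose capacity covers the accumulated load; the post-optimization stage is omitted.

    # Sketch: open routes start at the depot and end at the last customer
    # served (no return trip).  `vehicles` is a list of (capacity, fixed_cost).
    def greedy_open_routes(depot, customers, demand, dist, vehicles):
        unserved = set(customers)
        routes = []
        while unserved:
            cap = max(v[0] for v in vehicles)            # grow routes with the largest capacity
            load, here, route = 0, depot, []
            while True:
                feasible = [c for c in unserved if load + demand[c] <= cap]
                if not feasible:
                    break
                nxt = min(feasible, key=lambda c: dist[here][c])   # nearest feasible customer
                route.append(nxt); load += demand[nxt]; here = nxt
                unserved.remove(nxt)
            if not route:
                raise ValueError("some demand exceeds every vehicle capacity")
            veh = min((v for v in vehicles if v[0] >= load), key=lambda v: v[1])
            routes.append((veh, route))                  # cheapest vehicle that fits the load
        return routes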

2. Dorit Ron, Ilya Safro and Achi Brandt, "Fast multilevel algorithms for linear and planar ordering problems"

The purpose of linear ordering problems is to minimize some functional over all possible permutations. These problems are widely used and studied in many practical and theoretical applications. We have developed linear-time multilevel solvers for a variety of such graph problems, e.g., the linear arrangement problem, the bandwidth problem and more. In a multilevel framework, a hierarchy of graphs of decreasing size is constructed: starting from the given graph G0, a sequence G1,...,Gk is created by coarsening, the coarsest level is solved directly, and the solution is then uncoarsened (interpolated) back to G0. The experimental results for such problems turned out to be better than all previously known results in almost all cases, while the short running time of the algorithms enables applications to very large graphs.
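The coarsen/solve/uncoarsen cycle can be summarized by the following schematic Python skeleton (the coarsening, direct-solver, interpolation and refinement routines are placeholders assumed to be supplied by the user; a networkx-style graph object is assumed):

    # Schematic multilevel skeleton: coarsen until the graph is small, solve
    # the coarsest level directly, then interpolate and refine level by level.
    def multilevel(graph, coarsen, solve_directly, interpolate, refine, min_size=50):
        if graph.number_of_nodes() <= min_size:
            return solve_directly(graph)              # coarsest level
        coarse_graph, mapping = coarsen(graph)        # G_i -> G_{i+1}
        coarse_solution = multilevel(coarse_graph, coarsen, solve_directly,
                                     interpolate, refine, min_size)
        solution = interpolate(coarse_solution, mapping, graph)   # uncoarsen
        return refine(graph, solution)                # local relaxation at this level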

Next we consider the generalization of such problems to two dimensions. In many theoretical and industrial fields this class of problems is often addressed and actually poses a computational bottleneck, e.g., graph visualization, facility location, VLSI layout, etc. In particular, we aim to minimize various functionals (discrete or continuous) applied to the two-dimensional objects (associated with the graph nodes) at hand, while maintaining an equi-density demand over the planar domain. We concentrate on three issues: (a) minimizing the total length of the connections between the objects, (b) minimizing the overlap between the objects, and (c) utilizing the two-dimensional space well.

We present a multilevel solver for a model that describes the core part of those applications, namely, the problem of minimizing a quadratic energy functional under planar constraints that bound the allowed amount of material (total areas of objects) in various subdomains of the entire domain under consideration. Given an initial arrangement for a particular graph Gi in the hierarchy, the objects under consideration are rearranged into a more evenly distributed state over the entire defined domain. This is done by introducing a sequence of finer and finer grids over the domain and demanding equi-density at each scale, that is, meeting equality or inequality constraints at each grid square that state how much material it may (at most) contain. Since many variables are involved and since the needed updates may be large, we introduce a new set of displacement variables attached to the grid points, which enables collective moves of many original variables at a time, at different scales, including large displacements. The use of such multiscale moves has two main purposes: to enable processing at various scales and to efficiently solve the (large) system of equations of energy minimization under equi-density demands. The system of equations at the finer scales, when many unknowns are involved, is solved by a combination of well-known multigrid techniques. The entire algorithm solves the nonlinear minimization problem by applying successive correction steps, each using a linearized system of equations.

We demonstrate the performance of our solver on some instances of the graph visualization problem showing efficient usage of the given domain, and on a set of simple (artificial) placement (VLSI) examples.

3. L.F. Escudero and S. Muñoz, "A greedy procedure for solving the line design problem for a rapid transit network"

Given a set of potential station locations and a set of potential links between them, the well-known extended rapid transit network design problem basically consists in selecting which stations and links to construct without exceeding the available budget, and determining an upper-bounded number of noncircular lines from them, so as to maximise the total expected number of users.

A modification of this problem has been proposed in the literature to allow the definition of circular lines, provided that any two locations are linked by at most one line; a two-stage approach has also been presented for solving this new problem. The model considered in the first stage makes it possible to select the stations and links to be constructed without exceeding the available budget, in such a way that the total expected number of users is maximised. Once they have been selected, in the second stage each of these links is assigned to a unique line, so that the number of lines passing through each selected station is minimised.

In this work we introduce an improvement to the model considered in the first stage above in order to obtain a connected rapid transit network. We also present a modification of the algorithm proposed for solving the line design problem of the second stage above; it is a greedy heuristic procedure that attempts to minimise the number of transfers that users must make to arrive at their destinations, without increasing the number of lines that pass through each station.

The computational experiments to be reported will show that this greedy procedure can significantly reduce the total number of transfers required by the solutions obtained with the approach taken from the literature.

12:10-13:10 plenary talk – Julian Molina "Metaheuristics for multi-objective combinatorial optimization"

Meta-heuristic methods are widely used to solve single-objective combinatorial optimization problems, especially when dealing with real applications. However, a multiple-criteria approach is usually demanded when solving real-world applications, which require the consideration of several conflicting points of view corresponding to multiple objectives.

As in the field of single-objective combinatorial optimization, meta-heuristics have been shown to be efficient for solving multi-objective optimization models. This is one of the most active and fastest-growing branches of multi-objective optimisation in recent years.

This talk provides an overview of the research in this field and visits its principal aspects, including main methods, performance measures, test functions, and preference inclusion techniques. New trends in the field will be outlined, including new hybrid procedures and links to exact techniques.