DEPARTMENT OF COMPUTER SCIENCE
UNIVERSITY OF COPENHAGEN

Heuristics for Multidimensional Packing Problems

PhD Thesis
Jens Egeblad
July 4th, 2008

Academic advisor: David Pisinger

Preface

This Ph.D. thesis was written in the years 2004 to 2008 at the Department of Computer Science at the University of Copenhagen (DIKU) under the supervision of Professor David Pisinger. The four years included two six-month periods of leave of absence. During those years I worked on several projects and visited several departments at other universities in two wonderful parts of the world. I spent a total of approximately seven months at Dipartimento di Ingegneria Meccanica e Gestionale at Politecnico di Bari in Puglia, Italy. I also spent six months during the fall of 2007 at Stanford University in California, USA.

While all projects were both exciting and challenging, one particular project stood out from the crowd: a semi-commercial heuristic for container loading of furniture, which contained many joyful elements. It was a wonderful way to get acquainted and experiment with the many requirements that must be tackled when dealing with practical problems for the industrial sector. Today, the difficulties of this project are hard to portray, but I still remember how great worries and concerns dominated the first many months. During the first period of leave of absence I focused on the software development of this project. This was a wonderful opportunity to design a full-blown, medium-sized software tool for use by the company, and the final version of that tool consisted of approximately 90,000 lines of code. The most difficult part of this project was working with a tight schedule and having to come up with solutions to challenging problems within strict deadlines. Many sleepless nights were filled with frustrations and worries over not yet having a solution for another essential corner of the project.
The paper presented in this thesis on the subject does not completely do justice to the huge amount of work involved. In the end, however, it was extremely rewarding to visit the company’s headquarters six months after the project finished and find that many people were using the software concurrently on four different continents. Looking back, the frustrations and impossible deadlines are all forgotten. I now remember mainly the many fond memories of warm days and nights among good friends and colleagues, and think of this project as a successful marriage between theory and practice.

In a way, this particular project also represents the work of this thesis well. All of the projects are of strong, direct relevance to other sectors – industrial or other scientific fields – and since I began, I have been approached by several parties who have shown keen interest in commercializing ideas presented here. The thesis has involved many physical, cultural, and spiritual journeys; many were difficult, but they were all enriching and exciting. It has been four wonderful years.

Acknowledgements

I would like to thank the many people who made this thesis not only possible but also joyful to work on. First, I would like to thank Professor Vito Albino and Professor Claudio Garavelli from Politecnico di Bari for allowing me to be part of their wonderful project and letting me stay several times over a long period at their department. I would also like to thank Cosimo Palmisano, Stefano Lisi, and the many other members of the department for making each visit a joy, both during work and after hours. A special thanks also goes to Professor Leonidas Guibas for letting me visit his department at Stanford University, and to the many members of the department who introduced me to interesting new scientific topics, but also to life in The City, pizza and beer, and Happy Hours on Fridays.
Magda Jonikas and Alain Laederach also deserve many thanks for being patient with my lack of biological knowledge and for providing the data and ideas which made the chronologically last paper of this thesis possible. I would also like to thank the entire group of people in Copenhagen for our many discussions of both scientific and less scientific nature, and for their patience with my use of the limited computational resources available to us. A huge thanks goes to Benny K. Nielsen for the vast number of discussions on relaxed placement methods, for wonderful teamwork, and, not least, for tolerating several messy paper drafts. My girlfriend, Zulfiya Sukhova, also deserves incredible appreciation for being open-minded and positive about the many weekends and nights which were spent on work, and the many months I have stayed abroad during these years. Surely, there cannot be a more understanding person. Most importantly, I would like to thank my advisor, Professor David Pisinger, who has never been too busy to discuss a scientific topic or to give useful advice. The support, advice, and general help throughout the entire four years have been completely solid and have greatly exceeded my expectations.

– Thank you all

Jens Egeblad
July, 2008

Abstract

In this thesis we consider solution methods for packing problems. Packing problems occur in many different situations, both directly in the industry and as sub-problems of other problems. High-quality solutions for problems in the industrial sector may reduce transportation and production costs significantly. A packing problem, in general, is given by a set of items and one or more containers. The items must be placed within the containers such that some objective is optimized and the items do not overlap. Items and containers may be rectangular or irregular (e.g. polygons and polyhedra) and may be defined in any number of dimensions.
Solution methods are based on theory from both computational geometry and operations research. The scientific contributions of this thesis are presented in the form of six papers and a section which introduces the many problem types and recent solution methods.

Two important problem variants are the knapsack packing problem and the strip-packing problem. In the knapsack packing problem, each item is given a profit value, and the problem asks for the maximal-profit subset of items that can be placed within one container. The strip-packing problem asks for the minimum-height container that can hold all items. The main contributions of the thesis are three new heuristics for strip-packing and knapsack packing problems where items are either rectangular or irregular.

In the first two papers we describe a heuristic for the multidimensional strip-packing problem that is based on a relaxed placement principle. The heuristic starts with a random overlapping placement of items and large container dimensions. Overlap is then reduced iteratively until a non-overlapping placement is found, and a new problem is solved with a smaller container size. This is repeated until a time limit is reached, and the smallest container for which a non-overlapping placement was found is returned as the solution. In each iteration, a single item is translated parallel to one of the coordinate axes to the position that reduces the overlap the most. Experimental results of this heuristic are among the best published in the literature, both for two- and three-dimensional strip-packing problems with irregular shapes.

In the third paper, we introduce a heuristic for two- and three-dimensional rectangular knapsack packing problems. The two-dimensional heuristic uses the sequence pair representation, and a novel representation called the sequence triple is introduced for the three-dimensional variant.
Experiments for the two-dimensional knapsack packing problem are on par with the best published in the literature, and experiments for the three-dimensional variant are promising.

A heuristic for a three-dimensional knapsack packing problem involving furniture is presented in the fourth paper. The heuristic is based on a variety of techniques including tree-search, wall-building, and sequential placement. The solution process includes considerations regarding stability and load-bearing strength of items. The heuristic was developed in collaboration with an industrial partner and is now being used to solve hundreds of problems every day as part of their planning process.

A simple heuristic for optimizing a placement of items with respect to balance and moment of inertia is presented in the fifth paper. Ensuring that a loaded consignment of items is balanced throughout a container can reduce fuel consumption and prolong the life-span of vehicles. The heuristic can be used as a post-processing tool to reorganize an existing solution to a packing problem.

A method for optimizing the placement of cylinders with spherical ends is presented in the last paper. The method can consider proximity constraints, which can be used to describe how cylinders should be placed relative to each other. The method is applied to problems where a placement of capsules must be found within a minimal spherical or box-shaped container, and to problems where a placement within a given arbitrarily shaped container must be found. The method has applications in the prediction of RNA tertiary structure.

Abstract in Danish

I denne afhandling betragtes løsningsmetoder til pakningsproblemer. Løsninger af meget høj kvalitet for industrielle problemer kan reducere transport- og produktionsomkostninger betydeligt. Pakningsproblemer opstår i mange forskellige situationer, både direkte i industrien, men også som delproblemer af andre problemer.
Generelt består pakningsproblemer af en mængde af figurer og en eller flere containere. Figurer og containere kan være rektangulære eller irregulære (fx polygoner og polyedre) og være defineret i et vilkårligt antal dimensioner. Figurerne skal placeres i containerne uden at overlappe, sådan at en målfunktion optimeres. De anvendte løsningsmetoder består af teknikker fra både geometri og optimering. De videnskabelige bidrag i denne afhandling præsenteres i seks adskilte artikler, som indledes med en introduktion til problemtyper og populære løsningsmetoder.

To vigtige varianter af pakningsproblemer er rygsækpakning og strip-pakning. I rygsækpakningsproblemer gives hver figur en profitværdi, og problemet er nu at bestemme en delmængde af figurer med maksimal profitsum, som kan være i rygsækken. I strip-pakningsproblemet ønsker vi at finde en container med minimal længde, der kan indeholde alle figurerne. Afhandlingens hovedbidrag består af tre nye heuristikker til strip- og rygsækpakningsproblemer med rektangulære og irregulære figurer.

I de første to artikler præsenteres en heuristik til løsning af det multidimensionale strip-pakningsproblem. Heuristikken starter med en lang container og en tilfældig placering med overlap. Overlappet reduceres iterativt, indtil en placering uden overlap findes, hvorefter et nyt problem med en mindre container løses. Dette gentages så mange gange som muligt indenfor en bestemt tidsgrænse, hvorefter den mindste container med en tilhørende placering uden overlap returneres som løsning. I hver iteration flyttes en enkelt figur parallelt med en af koordinatakserne til en placering med mindre overlap. De eksperimentelle resultater for denne heuristik er blandt de bedste, der er publiceret, både for to- og tredimensionale problemer med irregulære figurer.

I den tredje artikel introduceres en heuristik til to- og tredimensionale rektangulære rygsækproblemer.
Heuristikken til todimensionale problemer benytter sekvensparrepræsentationen, og en ny repræsentation introduceres til tredimensionale problemer. Eksperimentelle resultater for todimensionale problemer er på niveau med de bedste, der er publiceret, og resultater for de tredimensionale problemer er lovende.

En heuristik til det tredimensionale rygsækpakningsproblem, hvor figurerne er møbler, præsenteres i den fjerde artikel. Heuristikken er baseret på en række teknikker, heriblandt træsøgning, vægbygning og sekventiel placering. Heuristikken blev udviklet i samarbejde med en industripartner og bliver nu anvendt til løsning af hundredevis af problemer dagligt som del af deres planlægning.

En enkel heuristik til optimering af placering af figurer med hensyn til balance og inertimoment præsenteres i den femte artikel. Balancerede placeringer kan reducere brændstofforbrug og forlænge levetiden af de anvendte transportmidler. Heuristikken kan bruges som et efterprocesseringsværktøj til omorganisering af en eksisterende løsning til et pakningsproblem.

En metode til optimering af placering af cylindre med kugleformede ender præsenteres i den sidste artikel. Metoden kan håndtere afstandskrav, som kan bruges til at angive, hvordan cylindrene skal placeres i forhold til hinanden. Metoden er anvendt på problemer, hvor en placering af cylindre skal findes indenfor en minimal kugleformet eller rektangulær container, og på problemer, hvor en placering indenfor en vilkårlig container skal findes. Metoden kan anvendes til forudsigelse af tertiære RNA-strukturer.

Contents

1 Introduction  7
  1.1 Outline  8
2 Preliminaries  9
  2.1 Problem Types  10
    2.1.1 One Dimensional Problems  10
    2.1.2 Multi Dimensional Subset Selection Problems  12
    2.1.3 Multi Dimensional Container-Count Minimization  14
    2.1.4 Container Minimization  15
    2.1.5 Mathematical Problems  16
  2.2 Typologies  16
  2.3 NP-completeness  18
3 Computational Techniques  21
  3.1 Representing the Items and Container  21
  3.2 Avoiding Overlap  22
    3.2.1 Detecting Overlap  23
    3.2.2 No-Fit Polygon  23
    3.2.3 φ-functions  26
    3.2.4 Constraint Graphs  27
  3.3 Solution Approaches  28
    3.3.1 MIP Formulations  28
    3.3.2 Levels  31
    3.3.3 Stacks, Layers, and Walls  32
    3.3.4 G4 Structures  34
    3.3.5 Representing Free Space  34
    3.3.6 Relaxed Placement Methods  36
    3.3.7 Bottom-Left Strategies and Envelopes  38
    3.3.8 Abstract Representations  39
    3.3.9 Packing Classes  42
    3.3.10 Constraint Programming  44
    3.3.11 Bounds  44
    3.3.12 Approximation Algorithms  46
  3.4 Speculations on The Future  47
    3.4.1 Rectangular Packing  47
    3.4.2 Two-dimensional Irregular Packing  48
    3.4.3 Irregular Three-dimensional Packing  48
    3.4.4 New Constraints and Objectives  49
    3.4.5 Sensitivity Analysis  49
    3.4.6 Integration with Other Problems  50
4 About the Papers  51
  4.1 Relaxed Packing and Placement  51
    4.1.1 Two-dimensional Nesting  51
    4.1.2 Three-dimensional Nesting  52
    4.1.3 Optimization of the Center of Gravity  53
    4.1.4 Relation to FFT Algorithms  53
  4.2 Rectangular Knapsack Packing  54
  4.3 Knapsack Packing of Furniture  54
  4.4 Cylinder Packing and Placement  55
5 Conclusion  57
A: Fast neighborhood search for two- and three-dimensional nesting problems  69
   Published in the European Journal of Operational Research, 2007
A.1: Addendum to “Fast neighborhood search for two- and three-dimensional nesting problems”  95
B: Translational packing of arbitrary polytopes  99
   Accepted for publication in Computational Geometry: Theory and Applications, 2008
C: Heuristic approaches for the two- and three-dimensional knapsack packing problem  131
   In press (available online), Computers and Operations Research, 2007
D: Heuristics for container loading of furniture  159
   Submitted, 2007
E: Placement of two- and three-dimensional irregular shapes for inertia moment and balance  189
   Submitted, 2008
F: Three-dimensional Constrained Capsule Placement for Coarse Grained Tertiary RNA Structure Prediction  207
   Draft, 2008

Introduction to Packing Problems

1 Introduction

Packing problems have spawned interest from the mathematical community for centuries and arise in numerous industrial situations with large potential for cost savings and reduction of the pollution caused by carbon dioxide emission. The term packing problem is commonly used for a problem where one wishes to find a placement of small items within one or several larger objects. Although this definition may seem abstract, packing problems arise in many practical situations of our everyday life.

One of the most obvious examples occurs in the supermarket. While the price of a superfluous bag will have a limited impact on the household economy, we are compelled to minimize the environmental impact of our purchases and therefore seek to minimize the number of grocery bags we use to pack our commodities. When we divide the items among the bags, we consider their shape and weight in order to distribute them evenly. Another common problem occurs before an extended period of travel. Here the most useful set of items which can be packed compactly in a suitcase must be selected, and often this selection should be made such that the total weight is less than the weight limit imposed by any airline involved in the journey.
Although household problems are too small to warrant extended mathematical analysis, improvement of solutions to packing problems in the industrial sector may lead to substantial cost reductions. The problems occur both during manufacturing and transportation of goods, and several solution methods will be presented in this thesis. Packing problems may also appear as sub-problems in other scientific fields, and a problem with a biological origin will be discussed in this thesis along with a solution method.

Determining the optimal way to cut a large item into smaller pieces is equivalent to determining the optimal way to place the smaller pieces within a container shaped like the large item. Therefore, packing problems are synonymous with cutting problems, and authors refer to the problems we consider in this thesis as either packing, cutting, or packing-and-cutting problems.

Solution methods for these difficult problems consist of a combination of techniques from the fields of computational geometry, operations research, and algorithms. Therefore any researcher with a background in those fields should find the topic compelling to work with. The problems have been studied since the nineteen-sixties, and a plethora of solution methods have been presented over the years. The methods have been mostly heuristic, but exact algorithms have gained ground in recent years, and even a few approximation algorithms have been presented. Due to the required reduction of carbon emissions, the increasing global competition, and the ever-rising oil prices, new methods that improve the current results are needed by the industrial sector. At the same time, increasing computational power and recent research in computer science enable us to tackle larger problems with new and better methods.

Items can be rectangular (rectangles or boxes) or non-rectangular (e.g. polygons and polyhedra). Throughout this thesis we will refer to items which are non-rectangular and non-spherical as irregular.
While computational methods for packing of rectangular items are able to produce high-quality solutions at this time, there still seems to be potential for improving methods that handle non-rectangular items, especially in three dimensions. The main objective of this thesis is to study novel heuristics for two- and three-dimensional packing problems involving rectangular and irregular shapes.

1.1 Outline

The scientific contributions of this dissertation are presented as six individual papers. One has been published, one is in press and available online, one has been accepted for publication, two have been submitted for publication, and one represents work in progress. All submitted and accepted papers have been or are being peer-reviewed. The papers are reproduced in full, as in their final revision before publication, acceptance, or submission. The six papers are as follows:

• [A] Fast Neighborhood Search for two- and three-dimensional nesting problems
Jens Egeblad, Benny K. Nielsen, and Allan Odgaard (published, 2007)
In this paper we describe a novel heuristic for solving two- and three-dimensional strip-packing problems involving irregular shapes represented by polygons.

• [B] Translational packing of arbitrary polytopes
Jens Egeblad, Benny K. Nielsen, and Marcus Brazil (accepted for publication, 2008)
Packing of shapes represented by polyhedra, and a generalization of the work from the first paper ([A]) to three and higher dimensions, are described in this paper.

• [C] Heuristic approaches for the two- and three-dimensional knapsack packing problem
Jens Egeblad and David Pisinger (in press and available online, 2007)
In this paper, we introduce a new heuristic for the two- and three-dimensional rectangular knapsack packing problem which is based on the sequence pair and sequence triple representations for rectangle and box placements.
• [D] Heuristics for container loading of furniture
Jens Egeblad, Claudio Garavelli, Stefano Lisi, and David Pisinger (submitted, 2007)
A heuristic for the three-dimensional knapsack packing problem of furniture is presented in this paper. The heuristic is part of a semi-commercial program used by a very large Italian furniture producer and consists of a wide variety of sub-parts.

• [E] Two- and three-dimensional placement for center of gravity optimization
Jens Egeblad (submitted, 2008)
Heuristics for balanced placement of polygonal and polyhedral items within a container are presented here. This builds on work from the papers [A] and [B].

• [F] Three-dimensional constrained placement of capsules with application to coarse grained RNA structure prediction
Jens Egeblad, Leonidas Guibas, Magdalena Jonikas, and Alain Laederach (draft, 2008)
This paper represents work in progress. A novel model for coarse grained RNA structure prediction based on compact placement of capsules (cylinders with spherical ends), and a technique for cylinder placement within a molecular envelope, are presented.

The dissertation is organized as follows. First, in Section 2, the main problems of this thesis are presented and it is shown that they are NP-hard. In order to aid the reader, the most popular and interesting solution methods are discussed in Section 3 along with speculations about future directions. In Section 4 the individual papers are presented and discussed. Finally, in Section 5, we give a conclusion and summarize possible future directions. The subsequent chapters (A, ..., F) consist of the six papers which were produced as part of the thesis. An addendum to paper [A] elaborates on a few missing details and presents updated experiments ([A.1]).

Definition of Packing Problems.
Given are two sets of elements, namely

• a set of large objects (input, supply) and
• a set of small items (output, demand)

which are defined exhaustively in one, two, three or an even larger number (n) of geometric dimensions. Select some or all small items, group them into one or more subsets and assign each of the resulting subsets to one of the large objects such that the geometric condition holds, i.e. the small items of each subset have to be laid out on the corresponding large object such that

• all small items of the subset lie entirely within the large object and
• the small items do not overlap,

and a given (single-dimensional or multi-dimensional) objective function is optimised. We note that a solution of the problem may result in using some or all large objects, and some or all small items, respectively.

Figure 1: The definition of packing problems given by Wäscher et al. [157].

2 Preliminaries

The immense number of papers on cutting and packing problems renders the field difficult to approach as a newcomer. At the time of writing, there are no textbooks on the field, and while several surveys exist (see [30, 44, 45, 102]), they are often brief or with emphasis on particular problems or methods. In this section we will list some of the problems which are discussed throughout this thesis. Our focus is on multidimensional problems, i.e., two- or higher-dimensional. The problems are presented and defined in Section 2.1. In Section 2.2 we present a typology which covers the problem types, and in Section 2.3 we show that most of the problems are NP-hard.

We consider problems where we wish to determine if (and often how) a set of items can be placed within one or several containers. In feasible solutions items must be placed such that they do not occupy the same area of the container, i.e. they are not allowed to overlap. An elaborate definition of the problems was given by Wäscher et al. [157] and is quoted in Figure 1.
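The geometric condition in the quoted definition can be made concrete for the simplest case of axis-aligned rectangles. The following sketch is purely illustrative (the names `Rect`, `inside`, `overlaps`, and `feasible` are our own and do not appear in the literature quoted above); it checks the two feasibility requirements, containment and pairwise non-overlap, for a placement:

```python
from dataclasses import dataclass

@dataclass
class Rect:
    x: float   # position of the reference point (lower-left corner)
    y: float
    w: float   # width
    h: float   # height

def inside(r: Rect, W: float, H: float) -> bool:
    """Containment: the item lies entirely within a W x H container."""
    return 0 <= r.x and 0 <= r.y and r.x + r.w <= W and r.y + r.h <= H

def overlaps(a: Rect, b: Rect) -> bool:
    """Two axis-aligned rectangles overlap iff they overlap on both axes.
    Touching boundaries is not counted as overlap."""
    return a.x < b.x + b.w and b.x < a.x + a.w and \
           a.y < b.y + b.h and b.y < a.y + a.h

def feasible(items: list[Rect], W: float, H: float) -> bool:
    """A placement is feasible if every item is inside the container
    and no two items overlap."""
    return all(inside(r, W, H) for r in items) and not any(
        overlaps(items[i], items[j])
        for i in range(len(items)) for j in range(i + 1, len(items)))
```

For example, two rectangles of width 2 and 3 placed side by side fill a 5 × 2 container feasibly, while any rectangle protruding past the container boundary or intersecting another item makes the placement infeasible.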
The definition given by Wäscher et al. [157] is somewhat abstract and matches a very large number of problems; it would be a long and tedious job to list them all. In this text we will deal with three main groups of problems: subset selection, container-count minimization, and container-size minimization. In subset selection problems a subset of items that is optimal with respect to an objective function must be selected. In container-count minimization problems the number of containers should be minimized. Finally, in container-size minimization problems the size of the container should be minimized.

Figure 2: (a) A two-dimensional container, item, and placement of an item within the container. (b) Three-dimensional variant.

The problems consist of the following ingredients (see Figure 2):

• A container object C. In most of this text, we will deal with only one type of container, but some problems are defined with respect to multiple types of containers. If C is rectangular, we let W be the width, H the height, and, for three-dimensional problems, D the depth of C.

• A set of items I. The items are to be packed within C. We let n = |I| be the number of items. If items have a rectangular shape, we let wi be the width, hi the height, and, for three-dimensional problems, di the depth of item i ∈ I. Non-rectangular shapes may be simple to describe mathematically, like a sphere, or completely arbitrary. Arbitrary shapes may be represented in a number of different ways which we will consider in Section 3.1. In general, we will refer to arbitrarily shaped items as irregular items. Rotation of items may or may not be allowed.

• A placement P. Each item has a reference point, and the placement describes the position of each item’s reference point in the container(s). For rectangular items we let the reference point be the lower-left-back corner of the item.
For problems where rotation or mirroring is allowed, the placement may also describe the amount each item is rotated and whether it is mirrored. To simplify matters we will generally not distinguish between particular placements in this and the subsequent section, but use xi, yi, and zi to indicate the position of item i’s reference point.

An important part of evaluating solutions to packing problems in general is the utilization. The utilization of a placement is the area or volume occupied by the items within the container divided by the area or volume of the container.

2.1 Problem Types

One-dimensional problems are in a sense the simplest type of packing problems, and we will begin by briefly considering them, before moving on to the more complicated higher-dimensional problems which are the focus of this text.

2.1.1 One Dimensional Problems

One of the simplest packing problems is the one-dimensional cutting stock problem (1DCSP). Materials such as paper, textiles and metal are commonly manufactured in large rolls. Different customers may desire rolls of different sizes, and the large rolls are therefore cut into smaller rolls (see Figure 3 (a)). Given one large roll size, a set of small roll sizes, and, for each small roll size, a number of required rolls of that size, the one-dimensional cutting stock problem is to find the minimal number of large rolls required (see Figure 3 (b)).

Figure 3: The 1D cutting stock problem. (a) Large rolls must be cut into several smaller rolls. (b) Given the large roll width, determine the number of large rolls required to cut the required pieces.

A related problem is the one-dimensional bin-packing problem (1DBPP). For the bin-packing problem a set of items, each with size wi for i ∈ I, and a bin size W are given, and the problem is to find the minimal number of bins such that all items are packed in a bin.
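The bin-packing problem just defined can be attacked with simple greedy rules. The following first-fit-decreasing sketch is a classical textbook heuristic, not a method from this thesis: it sorts the items by non-increasing size and places each item in the first open bin with enough residual capacity, opening a new bin when none fits.

```python
def first_fit_decreasing(sizes: list[float], W: float) -> list[list[float]]:
    """Greedy heuristic for 1D bin packing: sort items largest-first,
    then put each item into the first bin that still has room."""
    bins: list[list[float]] = []    # each bin is a list of item sizes
    remaining: list[float] = []     # residual capacity per open bin
    for w in sorted(sizes, reverse=True):
        for k, cap in enumerate(remaining):
            if w <= cap:            # first open bin with enough room
                bins[k].append(w)
                remaining[k] -= w
                break
        else:                       # no open bin fits: open a new one
            bins.append([w])
            remaining.append(W - w)
    return bins
```

For instance, items of sizes 5, 4, 3, 2, 2 with bin size 8 are packed into the two bins {5, 3} and {4, 2, 2}, which is optimal here. First-fit decreasing is known to use at most roughly 11/9 times the optimal number of bins, so it is a reasonable baseline before turning to formulations such as the MIP below.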
Fundamentally, the two problems are the same, since one can transform an instance of one problem into the other by replacing the term “large roll” with “bin” and “small roll” with “item”, or vice versa. Authors generally distinguish between 1DCSP and 1DBPP by the number of items of each size; cutting stock usually concerns few item sizes (homogeneous), while bin-packing is used for problems with many item sizes (heterogeneous). Both problems fall into the group of problems that we refer to as container-count minimization, since the number of large rolls or bins must be minimized. The bin-packing problem can be formulated as a mixed integer linear program (MIP):

\[
\begin{aligned}
\min\ & \sum_{k=1}^{K} y_k \\
\text{s.t.}\ & \sum_{k=1}^{K} x_{ik} \ge 1, && i \in I, \\
& \sum_{i \in I} w_i x_{ik} \le W y_k, && k = 1, \dots, K, \\
& x_{ik} \in \{0, 1\}, && i \in I,\ k = 1, \dots, K, \\
& y_k \in \{0, 1\}, && k = 1, \dots, K.
\end{aligned}
\tag{1}
\]

Here K is the maximum number of bins required. The binary variable yk indicates whether bin k is used (opened). Variable xik indicates whether item i is in the kth bin. The first constraint ensures that every item is placed in at least one bin, and the second constraint ensures that the items assigned to a single bin can be placed in the bin without overlap.

Another common packing problem is the one-dimensional knapsack problem (1DKPP). The 1DKPP must also be solved when the cutting stock problem is solved using delayed column generation (see e.g. [33]).

Figure 4: The 1D knapsack packing problem. (a) A knapsack must be filled with the most profitable set of items within its weight limit. (b) The knapsack and items represented as rectangles with equal height. The width of each item represents its weight.

For the one-dimensional knapsack problem we are given a knapsack with a capacity, and each item from I has a weight and a profit value assigned to it. The objective is to determine the subset of items which can be packed in the knapsack without violating the weight capacity limit, such
that the sum of the profit of the items from the subset is maximal. We can write this as a MIP: max ∑ pi xi , i∈I s.t. ∑ wi xi ≤ W i∈I where xi ∈ {0, 1}, i ∈ I. (2) Here W is the knapsack’s weight capacity, wi is the weight and pi the profit of each item i ∈ I. The binary variable xi indicates whether item i is chosen. The constraint is the capacity constraint, which ensures that all items can fit inside the knapsack without “overlap”. The problem is illustrated on Figure 4. The knapsack packing problem is a subset selection problem. Since this text focuses on packing problems in higher dimensions, we will abandon the onedimensions problems for now and instead move to the more challenging multi-dimensional problems. For more information on one-dimensional knapsack packing problems we refer to the book by Kellerer et al. [90]. 2.1.2 Multi Dimensional Subset Selection Problems The two-dimensional knapsack packing problem (2DKPP) and the one-dimensional variant are related. For the two-dimensional variant a plate of some size W × H is given as well as the set of items I. As for the one dimensional variant of the knapsack packing problem, one is given a set of items, each item is assigned a profit pi and one must select a maximal profit subset of items I 0 = {i ∈ I|xi = 1}, such that ∑i∈I pi xi is maximized and I 0 can be packed in the container. Some authors (e.g. Boschetti et al. [21], Hifi [79], Lai and Chan [95]) also refer to this problem as the two-dimensional cutting stock problem (2DCSP). When this terminology is used there are a number of differences. For the 2DCSP items are usually more homogeneous and identical items are grouped. For each group, a lower-bound value designates the minimum number of items from that group to be packed. These problems occurs in glass and metal industries, where large plates must be cut into smaller pieces which are sold to customers. A simple profit-value for each item is the area it occupies of the plate. 
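Returning briefly to the one-dimensional model (2): although the problem is NP-hard, it admits the classic pseudo-polynomial dynamic program over capacities (covered in depth by Kellerer et al. [90]). A minimal sketch, not code from the thesis:

```python
def knapsack(weights, profits, capacity):
    """0/1 knapsack by dynamic programming: best[c] is the maximum
    profit achievable with total weight at most c.  Runs in
    O(len(weights) * capacity) time, i.e. pseudo-polynomial."""
    best = [0] * (capacity + 1)
    for w, p in zip(weights, profits):
        # scan capacities downwards so each item is used at most once
        for c in range(capacity, w - 1, -1):
            best[c] = max(best[c], best[c - w] + p)
    return best[capacity]

value = knapsack(weights=[2, 3, 4], profits=[3, 4, 5], capacity=5)
```

No such dynamic program is known for the two-dimensional generalization discussed next, since there the geometry of the packing matters and not just a scalar capacity.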
If this value is used, the objective is to maximize the utilization of the plate. The problem appears in both unconstrained and constrained versions. In the unconstrained version the number of items of each type is infinite. Figure 5: Guillotine cutting. (a) Guillotine-cuttable placement. (b) Non-guillotine-cuttable placement. No sequence of cuts can cut all the items from the plate without cutting through an item. (c) Cutting sequence for (a). Figure 6: The manufacturer's pallet loading problem is to determine the optimal loading of a set of identical items on a pallet. Some cutting-machines cannot stop in the middle of the plate, but need to divide the plate in its entirety either horizontally or vertically in each cut. This is referred to as guillotine cutting and is illustrated in Figure 5. Another form of cutting, right-angled cutting, where the cutting-machine can stop in the middle of the plate but may only start from one of the edges, was investigated by Bukhvalova and Vyatkina [22]. A related two-dimensional problem is the manufacturer's pallet loading problem (PLP). Here one wishes to find an optimal loading pattern of identical pieces (boxes), which must be loaded on a pallet. Although the problem is really a three-dimensional problem, as noted by Bischoff and Ratcliff [17], it is commonly modeled as a two-dimensional variant, where only one layer of items is considered and this layer is replicated a number of times. The problem is illustrated in Figure 6. Since there is only one item type, the objective is to find the loading pattern that maximizes the number of that item. It can therefore be considered a special case of the two-dimensional knapsack packing problem, where the lower bound on the number of items is 0, the number of available items is infinite, the number of item types is 1, the items can be rotated by 90°, and the profit of each item is one.
Since the items are assumed to be identical, we will not go into further details on the problem, but simply make a few remarks. Exact algorithms for this problem have been introduced by Dowsland [43] and Bhattacharya and Bhattacharya [14], a survey of solution methods for PLP was given by Balasubramanian [5], a polynomial-time algorithm for a version of the problem where placements are required to be guillotine-cuttable was presented by Tarnowski et al. [148], while several heuristics were presented by Herbert and Dowsland [77], Lins et al. [97], and Young-Gun and Maing-Kyu [161]. 2DKPP generalizes to three dimensions. Figure 7: The container loading problem and the common use of width (W), height (H), and depth (D). A special variant of the problem in three dimensions is the container loading problem (CLP). The container loading problem occurs in the transportation industry, where items are transported in shipping-containers. Here one is given a set of items and the objective is to find a packing of the items that maximizes the volume occupied by them. This corresponds to a three-dimensional knapsack packing problem (3DKPP) where the profit of each item is set to its volume. The literature considers mostly rectangular items for the container loading problem, and, unlike the knapsack problem, rotation of the items is generally allowed, although each item may only be rotated by a specified subset of the six axis-aligned rotations. Standard internal dimensions of shipping containers are 234 × 234 × 592 cm³ for 20-foot containers and 234 × 234 × 1185 cm³ for 40-foot containers. Therefore, solution methods for CLP often take advantage of the elongated shape of the container and the large dimensions of the container relative to the items, whereas methods for three-dimensional knapsack packing problems in general cannot take advantage of this property. Figure 7 illustrates the CLP.
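As an aside, the six axis-aligned rotations mentioned above are simply the permutations of a box's three dimensions. A small illustrative helper (not thesis code; a CLP solver would intersect this set with the rotations actually permitted for each item):

```python
from itertools import permutations

def axis_aligned_orientations(w, h, d):
    """All distinct axis-aligned orientations of a w x h x d box,
    i.e. the distinct permutations of its three dimensions (at most
    six, fewer when some dimensions coincide)."""
    return sorted(set(permutations((w, h, d))))

orientations = axis_aligned_orientations(30, 40, 50)
```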
Another distinction between three-dimensional knapsack packing problems and container loading problems is that CLP generally considers hundreds of items with a volume sum close to the volume of the container, while 3DKPP considers fewer than 100 items with a combined volume which may be many times larger than the container volume. Apart from the differences related to container dimensions, item numbers, and profit values, container loading problems often also consider stability of the items, i.e., that the items are not floating in mid-air and will not drop to the floor once the container is moved. 2.1.3 Multi Dimensional Container-Count Minimization The one-dimensional bin-packing problem can also be generalized to more dimensions. Here one wishes to minimize the number of bins required to pack a set of items. The literature deals mostly with rectangular items, and as for the knapsack packing problem, it is common not to allow rotation. The bin-packing problem occurs naturally in industry. Often one has a set of plates which must be cut into smaller items, or a number of containers into which items must be divided, and one wishes to minimize the number of plates or containers used. Slightly confusingly, this problem is also commonly referred to as the two-dimensional cutting stock problem (by e.g. Gilmore and Gomory [69]), and as for the variant related to the knapsack packing problem, homogeneous and identical items are grouped and, for each group, a lower-bound value designates the minimum number of items from that group to be packed. To avoid further complications we will refrain from using the cutting stock terminology in the remainder of this text. Special variants of the bin-packing problem include the multi-container loading problem (MCLP) and the multi-pallet loading problem (MPLP). In the multi-container loading problem, one wishes to minimize the number of containers required to load a set of items. As for the ordinary container loading problem, a single container may be able to hold hundreds of items, while the bins of the bin-packing problem may contain far fewer items. Unlike single-pallet loading, multi-pallet loading concerns different items, and the objective is to determine the minimal number of pallets required to load the items. Multi-pallet loading differs from multi-container loading in that the container is smaller, since it is a pallet, and in that there may be stricter requirements related to stability. 2.1.4 Container Minimization A set of problems that occur only in two or more dimensions are problems where the objective is to determine the minimal container that is large enough to enclose a given set of items. The objective may be to minimize one dimension of the container (strip-packing), minimize the area or volume (area minimization), minimize the circumference of the container, or minimize the radius of a circular container (circle packing). Strip-Packing Strip-packing is probably the most commonly studied problem type in this category. The problem originates from the textile industry, but also occurs in the metal industry. In the textile industry one is given a roll of fabric, which must be cut into different components that make up clothing items when sewn together. The objective is to place the pieces such that the length of the strip used is minimized. This is equivalent to maximizing the utilization of the fabric. More specifically the problem is as follows: one is given a set of items and must place them within a rectangular container where one dimension is given (the width of the strip), and the second dimension must be minimized. The second dimension, which is the height of the container, is commonly referred to as the length of the strip. Figure 8: (a) The strip-packing problem. The height (H), also referred to as the length, of the container must be minimized. (b) Example packing of irregular items.
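For rectangular items, a common textbook baseline for strip-packing is the next-fit decreasing-height shelf heuristic (again, an illustrative classic, not one of the heuristics developed in this thesis). A minimal sketch:

```python
def next_fit_decreasing_height(items, strip_width):
    """Next-Fit Decreasing Height shelf heuristic for rectangular
    strip packing: sort rectangles by non-increasing height and fill
    the strip shelf by shelf.  Returns the used strip length and a
    placement {item index: (x, y)}, with y along the open dimension."""
    if any(w > strip_width for w, _ in items):
        raise ValueError("an item is wider than the strip")
    order = sorted(range(len(items)), key=lambda i: items[i][1], reverse=True)
    placement, shelf_y, shelf_height, x = {}, 0.0, 0.0, 0.0
    for i in order:
        w, h = items[i]
        if x + w > strip_width:          # current shelf is full
            shelf_y += shelf_height      # open a new shelf above it
            shelf_height, x = 0.0, 0.0
        placement[i] = (x, shelf_y)
        x += w
        shelf_height = max(shelf_height, h)
    return shelf_y + shelf_height, placement

length, placement = next_fit_decreasing_height(
    [(4, 2), (3, 3), (5, 1), (2, 3)], strip_width=7)
```

Shelf heuristics only apply to rectangles; the irregular items discussed next require the geometric machinery of Section 3.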
Since the problem occurs in the clothing industry, items are commonly irregular. The strip-packing problem involving irregular items is often referred to as the nesting problem. To avoid any confusion we will refrain from using that terminology in this introduction, although it is commonly used by researchers in the field. The strip-packing problem is depicted in Figure 8. Three-dimensional strip-packing Strip-packing problems may be generalized to higher dimensions by requiring that all but one of the container dimensions are given. The three-dimensional variant applies to rapid-prototyping machines, which generate simple plastic objects from computer aided design (CAD) systems. The generated objects can be used as design models or prototypes, and allow a designer to physically interact with a new design. The term rapid comes from the fact that the machine can produce a physical replica of a three-dimensional computerized model within hours. Rapid-prototyping machines generate models by building a set of layers. Each layer comes with a significant processing time, and so it is favorable to minimize the number of layers which must be generated for a given set of items. The problem may also occur as a subproblem in situations where a set of items should be packed as tightly as possible inside a container, such that the remaining space can be used for a different set of items. Area minimization Problems where several container dimensions must be minimized concurrently have also been studied in the literature. An example of such a problem is the two-dimensional minimal area packing problem, where one has to find the minimal-area container large enough to encompass a set of items. This problem occurs in the Very Large Scale Integration (VLSI) placement problem, which deals with the positioning of rectangular modules within a rectangular plate.
However, the problem of minimizing the area of such a plate is commonly deemed less important than the problem of minimizing the wire-length between the modules. Circle Packing Another well-known container minimization problem is the circle packing problem, where one must find the minimal, with respect to radius, circular container which can enclose a set of circles. In the literature, problems in the cable or oil pipeline industries are often cited as sources of this problem. 2.1.5 Mathematical problems Packing problems have also attracted attention from the mathematical community, but here homogeneous or specific item dimensions are considered. As an example, the famous astronomer Johannes Kepler was looking for the most efficient way to pack equal-sized spheres in a large box in 1611. Kepler hypothesized that the optimal strategy was to stack the spheres similar to the way greengrocers stack oranges in crates. The utilization (volume occupied by the spheres divided by the volume of the box) of such a stacking is π/(3√2) ≈ 74.048%. Kepler's hypothesis turned out to be extremely difficult to prove, but Hales [75] recently published a convincing proof, although it has yet to be completely confirmed. Several other related problems have interested the mathematical community over the years. Some examples are: • Determine the minimum square or circular container capable of containing n unit circles. • Determine the maximal number of unit squares in a square container with non-integer dimensions, where the unit squares may be freely rotated. Discussion of these variants of packing problems is beyond the scope of this text, and we will abandon them here in favor of the more practical problems that we have discussed so far. 2.2 Typologies Because researchers come from different backgrounds, equivalent problems are often referred to with different names.
This is already apparent in the preceding sections, where the knapsack packing, container loading, and multi-pallet loading problems are all in a sense equivalent. To remedy this problem, several researchers have introduced typologies based on a limited set of core problems. Table 1: Output maximization problems according to Wäscher et al. [157]. IPPP: Identical Item Packing Problem, e.g., pallet packing. SKP: Single Knapsack Problem; the knapsack packing problem as described earlier, which also includes the container loading problem. MIKP: Multiple Identical Knapsack Problem; a subset of items for multiple knapsacks must be determined, and it includes the multi-container loading problem and the multi-pallet loading problem. MHKP: Multiple Heterogeneous Knapsack Problem; a variant of MIKP where the knapsacks are not identical. SLOPP: Single Large Object Placement Problem; a form of weakly heterogeneous single knapsack problem. MILOPP: Multiple Identical Large Object Placement Problem; a variant of SLOPP with multiple knapsacks. MHLOPP: Multiple Heterogeneous Large Object Placement Problem; a variant of MILOPP with different knapsacks. In recent years the typology by Wäscher et al. [157] has replaced the older typology by Dyckhoff [47], and we will briefly describe the former here. Wäscher et al. [157] start by dividing the input objects into two categories: large and small objects. The large object or objects are the container(s), while the small objects are the items to be packed within them. Further, problems are divided into two main groups: output maximization and input minimization. Output maximization concerns problems where one is to place a set of small objects within a limited set of large objects. Input minimization concerns minimizing the number of large objects (or the size of one) required to pack the items. Three groups are used to classify the variety of items: identical, weakly heterogeneous, and strongly heterogeneous.
Similarly, the set of large objects is classified as either a single object (for output maximization), identical objects, or heterogeneous objects. For input minimization the set of large objects may be further categorized as either identical or heterogeneous. These distinctions lead to a classification typology with a relatively low number of problem types. The problem types described in the previous sections can be classified using the typology by Wäscher et al. [157]. The subset selection problems of Section 2.1.2 are all output maximization problems and are divided into the sub-categories listed in Table 1. The container-count minimization problems of Section 2.1.3 are all input minimization problems and are listed in Table 2. The container minimization problems from Section 2.1.4 are perceived by Wäscher et al. [157] as another category of input minimization problems named Open Dimension Problems (ODP). To identify problem types further, Wäscher et al. [157] also consider the “dimensionality” of a problem type, i.e., two-dimensional, three-dimensional, or d-dimensional, and the type of items, i.e., regular (rectangular, circular, cylindrical) or irregular. Using the typology of Wäscher et al. [157], a two-dimensional knapsack packing problem with rectangles will be referred to as a “2-dimensional rectangular SKP”, while a three-dimensional strip-packing problem involving irregular shapes will be referred to as a “3-dimensional irregular ODP”. Table 2: Input minimization problems according to Wäscher et al. [157]. SBSBSP: Single Bin Size Bin Packing Problem; the bin-packing problem as described earlier. MBSBPP: Multiple Bin Size Bin Packing Problem; the bin-packing problem with multiple bin sizes. RBPP: Residual Bin Packing Problem; bin-packing with strongly heterogeneous bin types. SSSCSP: Single Stock Size Cutting Stock Problem; a bin-packing problem consisting of weakly heterogeneous items. MSSCSP: Multiple Stock Size Cutting Stock Problem; the SSSCSP variant with multiple types of bins. RCSP: Residual Cutting Stock Problem; the MSSCSP with strongly heterogeneous bin types. ODP: Open Dimension Problem; container minimization problems such as strip-packing and area minimization problems. 2.3 NP-completeness From a computer science point of view, one of the most important properties of packing problems is that they generally fall in the category of NP-hard problems. A notable exception is the manufacturer's pallet loading problem (PLP), which Lins et al. [97] note has never been proven NP-hard. This problem differs from the remaining problems in that it considers identical items to be packed. For the remaining problems – in their multi-dimensional rectangular variants without rotation – proving that they are NP-hard may be done easily by reduction from the set partitioning problem. We will give a simple proof by first showing that a special problem, which we refer to as the two-dimensional packing decision problem (2DPDP), is NP-complete, and then explaining how 2DPDP can be reduced to the packing problems we have discussed above. We begin by defining the set partition problem, which is known to be NP-complete (see [62]). Definition 1. Set Partition Problem (SPP). Given a set of items S, each with a positive integer value wi ∈ N for i ∈ S, decide if we can divide S into two disjoint sets S′ and S″ such that S′ ∪ S″ = S and ∑i∈S′ wi = ∑i∈S″ wi = ½ ∑i∈S wi. We now define the two-dimensional packing decision problem. Definition 2. Packing Decision Problem (2DPDP). Given a rectangular container of size W × H, where W, H ∈ N, and a set of two-dimensional rectangular items I, each with a width wi ∈ N and height hi ∈ N, decide if a non-overlapping placement, i.e., a packing, of the items exists within the container. Here, a placement P : I → N₀² is a map of items to non-negative integer (x, y)-coordinates.
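The feasibility condition of Definition 2 can be verified by a pairwise check in quadratic time, which is what makes a placement usable as a polynomial-size certificate. An illustrative sketch (not thesis code):

```python
def is_valid_packing(container, items, placement):
    """Certificate check for 2DPDP: every item must lie inside the
    W x H container and no two items may overlap.  Abutting items are
    allowed, hence the strict inequalities on the half-open boxes."""
    W, H = container
    boxes = []
    for (w, h), (x, y) in zip(items, placement):
        if x < 0 or y < 0 or x + w > W or y + h > H:
            return False          # item sticks out of the container
        boxes.append((x, y, x + w, y + h))
    for a in range(len(boxes)):
        for b in range(a + 1, len(boxes)):
            x1, y1, X1, Y1 = boxes[a]
            x2, y2, X2, Y2 = boxes[b]
            # overlap iff the half-open intervals intersect on both axes
            if x1 < X2 and x2 < X1 and y1 < Y2 and y2 < Y1:
                return False
    return True

ok = is_valid_packing((4, 2), [(2, 1)] * 4, [(0, 0), (2, 0), (0, 1), (2, 1)])
```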
2DPDP represents the fundamental aspect of packing problems, which is to determine a non-overlapping placement of items within one or more containers. Theorem 1. 2DPDP is NP-complete. Proof. Let xi, yi be the coordinates of the lower-left corner of each item in a placement. Verifying whether a placement of items contains overlap amounts to checking if the intersection ([xi, xi + wi[ × [yi, yi + hi[) ∩ ([xj, xj + wj[ × [yj, yj + hj[) is ∅ for all i, j ∈ I, i ≠ j. The half-open intervals ensure that items are allowed to abut. This can be done in time O(|I|²). Since a placement may be represented by a list of reference-point positions, it has a size which is polynomial in the input size, and it can therefore be used as a certificate to verify a solution to 2DPDP in polynomial time. Figure 9: Illustration of an instance of 2DPDP based on an instance of SPP, used to prove that 2DPDP is NP-complete. The height of all rectangles is 1 and W is the sum of all the widths divided by two. The solution of the 2DPDP problem, shown at the bottom, is also a solution to the SPP problem with numbers equal to the widths, since it forms two rows of items, each having the same total width. Further, given an instance J of SPP we can create an instance J′ of 2DPDP such that a yes-answer exists for J iff a yes-answer exists for J′. To create J′ from an instance J with a set of items S, we simply create a set of rectangles, each with dimensions wi × 1 for i ∈ S, and a rectangular container with dimensions (½ ∑i∈S wi) × 2. If a yes-answer exists for J, we know that two sets S′ and S″ exist, and we may create a placement with the items from S′ having y-coordinate 0 and the items from S″ having y-coordinate 1. If we set the x-coordinates appropriately this constitutes a non-overlapping placement of the rectangles of J′.
Conversely, if a yes-answer exists for J′, the rectangles must be divided into two groups with y-coordinates 0 and 1, respectively. Since the total width of the items is two times the container width, the total width of the items in each group must be equal to the container width, and therefore these two groups represent a partition of S. See Figure 9. Since 2DPDP is NP-complete, we can say that the following problems from Table 1 and Table 2 of Section 2.2 are NP-hard: the 2D-rectangular-SKP is NP-hard, since we may transform an instance of 2DPDP into an instance of 2D-rectangular-SKP where we will get a yes-answer to the 2DPDP instance if and only if all items can be packed in the 2D-rectangular-SKP instance. NP-hardness of 2D-rectangular-MIKP and -MHKP follows from 2D-rectangular-SKP, since we may create instances of 2D-MIKP and 2D-MHKP with a single knapsack. The 2D-rectangular-SLOPP, -MILOPP, and -MHLOPP are NP-hard, since it is trivial to transform each of the 2D-rectangular-SKP, -MIKP, and -MHKP into one of the former. The 2D-rectangular-SBSBSP is NP-hard, since we may transform an instance of 2DPDP into an instance of 2D-rectangular-SBSBSP where we will get a yes-answer to the 2DPDP instance if and only if we only need one bin in the solution of the 2D-rectangular-SBSBSP. By the same argumentation, we may state that 2D-rectangular-MBSBPP, -RBPP, -SSSCSP, -MSSCSP, and -RCSP are also NP-hard problems. Finally, rectangular strip-packing is NP-hard, since we may transform an instance of 2DPDP into an instance of 2D-rectangular-ODP where we will get a yes-answer to the 2DPDP instance if and only if we can find a container with a width equal to ½ ∑i∈I wi.¹
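The instance construction used in the proof of Theorem 1 is mechanical enough to write down in a few lines. An illustrative sketch (function names are mine, not the thesis's):

```python
def spp_to_2dpdp(values):
    """Build the 2DPDP instance of the reduction: one w_i x 1 rectangle
    per number, and a container of width (sum w_i)/2 and height 2.
    An odd total is trivially a no-instance of SPP."""
    total = sum(values)
    assert total % 2 == 0, "odd total: trivially a no-instance of SPP"
    container = (total // 2, 2)
    items = [(w, 1) for w in values]
    return container, items

def partition_to_placement(values, subset):
    """Turn a yes-certificate for SPP (the index set of one half S')
    into a packing: S' fills the bottom row, the rest the top row."""
    next_x = {0: 0, 1: 0}          # next free x-coordinate per row
    placement = []
    for i, w in enumerate(values):
        row = 0 if i in subset else 1
        placement.append((next_x[row], row))
        next_x[row] += w
    return placement

container, items = spp_to_2dpdp([3, 1, 2, 2])
placement = partition_to_placement([3, 1, 2, 2], {0, 1})
```

Here the subset {0, 1} has total width 3 + 1 = 4, exactly the container width, so both rows fill the container without overlap.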
Variants of 2D-irregular-SKP, -MIKP, -MHKP, -SLOPP, -MILOPP, -MHLOPP, -SBSBSP, -RBPP, -SSSCSP, -MSSCSP, -RCSP, and -strip-packing where the set of allowed irregular items is a superset of the rectangles are also NP-hard, provided a certificate exists which can be used to verify a solution in polynomial time, since it is trivial to transform the rectangular variants into the irregular variants. (¹ The width and height are interchanged here relative to the definition given in Section 2.1.4.) Because it can be verified in polynomial time whether two polygons overlap, problems involving general polygons (convex, simple, and with holes) are thus NP-hard. Finally, all d-dimensional variants of the problems listed above are NP-hard for d > 2, since the two-dimensional variants can be transformed into d-dimensional variants by setting all remaining dimensions of the items to 1. For completeness, we note that the 2D-rectangular area minimization problem is also NP-hard, as proven by Murata et al. [117]. 3 Computational Techniques The computational techniques used to solve packing problems involve methods from computational geometry and operations research, as well as mathematical observations. In this section we will review many of these techniques. As always, space only permits a limited number of contributions to be summarized, and the selection made in this text is purely subjective. The focus is on methods that are common to several researchers, have returned promising results, or simply seem powerful or universal to the author of this thesis. In order to limit the size of the review we will not consider one-dimensional problems or methods for guillotine cutting. While guillotine cutting approaches may have relevance to the topics considered, a line must be drawn somewhere, and guillotine cutting problems are not within the focus of this thesis. The section is divided into four parts.
First, in Section 3.1, we will consider how the items and container can be represented. Then, in Section 3.2, we will address the problem of determining if two items overlap and how to avoid it. In Section 3.3 we discuss many of the known solution strategies. Finally, in Section 3.4 we speculate on possible and likely directions for future research in the field. 3.1 Representing the Items and Container Up until this point we have not disclosed the true nature of the container or the items involved. Generally, for a d-dimensional problem, items and containers consist of a closed subset of R^d which may undergo some sort of rigid transformation (i.e. translation, and rotation if allowed). Most literature deals with items and containers which can be represented by axis-aligned rectangles. For a d-dimensional problem that means sets of the form ∏_{k=1}^{d} [0, wk], where wk ∈ R is the k-th dimension of the item. Problems involving circles or spheres are modeled similarly. For other items, we may consider their so-called bounding-box. Assume an item i covers the closed set si ⊂ R^d; then the bounding-box of si is defined as the set ∏_{k=1}^{d} [xk^min, xk^max], where xk^min = min{xk | x = (x1, . . . , xk, . . . , xd) ∈ si} and xk^max = max{xk | x = (x1, . . . , xk, . . . , xd) ∈ si}. Rectangular shapes are compelling for two reasons. Firstly, many practical situations deal with plates or boxes. Secondly, the use of rectangles instead of arbitrary shapes simplifies the overlap constraints, which may enable a solution method to find better objective values in less time than would be possible with a highly detailed description. It should be noted that significant space can be lost around items if bounding-boxes are used rather than an accurate description. In two dimensions non-rectangular shapes may take the form of circles, polygons, and polygons with curved boundaries. In three dimensions they may take the form of polyhedra and raster-like models.
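The bounding-box definition above amounts to coordinate-wise minima and maxima; for a polygon or triangle mesh it suffices to scan the vertices, since the extrema are attained there. A minimal sketch (not thesis code):

```python
def bounding_box(points):
    """Axis-aligned bounding box of a finite point set in R^d,
    returned as the pair (lower corner, upper corner)."""
    d = len(points[0])
    lo = tuple(min(p[k] for p in points) for k in range(d))
    hi = tuple(max(p[k] for p in points) for k in range(d))
    return lo, hi

lo, hi = bounding_box([(1, 2), (4, -1), (0, 3)])
```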
In general, we refer to arbitrary shapes as irregular. Polygons are generally described by the sequence of endpoints of the line-segments that make up their boundaries (see Figure 10 (a)). Authors may distinguish between convex polygons, simple (non-self-intersecting boundary) polygons, and simple polygons with holes. Endpoints are said to be in either clockwise or counter-clockwise order, and the order determines the interior of the polygon. Polygons with holes are generally represented by a list of sequences, one for each closed boundary. Polyhedra are described by the linear subspaces (faces) that make up their boundary. The simplest general such description is a triangle mesh, which is a data-structure composed of a series of triangles, often combined with neighboring information (see Figure 10 (c)). Researchers have also investigated various types of raster models (see Figure 10 (b)). Figure 10: Polygons and polyhedra. (a) Polygon with hole with its bounding-box. The polygon is represented by the endpoints of the edges (black circles). (b) Raster representation of the polygon. Black circles indicate positions where the raster-function is 1. (c) Polyhedron represented as a number of triangles. Only four triangles are shown here. Each triangle may be represented by its endpoints (the black circles). An item i which occupies the subset si ⊂ R^d may be represented implicitly by the function f : R^d → {0, 1}, where f(p) = 1 if and only if p ∈ si. The raster representation of such an item is a function defined on a coarser domain. E.g., if only integer coordinates p ∈ Z^d are represented and nk is the kth dimension of the item, then such a grid can be represented by ∏_{k=1}^{d} nk bits. It should be noted that several straightforward compression techniques may use less space. For instance, in two dimensions one may represent the shapes as a series of x-slabs for each y value.
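The x-slab idea just mentioned is essentially a run-length encoding of each raster row. A small illustrative sketch (the function name is mine, not the thesis's):

```python
def rows_to_slabs(raster):
    """Compress a 2D boolean raster into, for each row y, the list of
    maximal filled x-intervals [start, end) -- the x-slab idea.  Dense
    shapes compress to a few intervals per row instead of one bit per
    raster cell."""
    slabs = []
    for row in raster:
        intervals, start = [], None
        for x, filled in enumerate(row):
            if filled and start is None:
                start = x                      # a new slab begins
            elif not filled and start is not None:
                intervals.append((start, x))   # the slab just ended
                start = None
        if start is not None:
            intervals.append((start, len(row)))
        slabs.append(intervals)
    return slabs

slabs = rows_to_slabs([[0, 1, 1, 0, 1], [1, 1, 1, 1, 1]])
```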
A particular compression technique for three dimensions is an octree representation. Octrees are well-known in the area of computer graphics, but we will briefly summarize them here. An octree is a hierarchical tree data structure commonly used to partition three-dimensional space. Each level of an octree divides the previous level uniformly into 8 cubes. Each cube is marked depending on whether the cube is completely inside (black), partially inside (gray), or completely outside the shape (white). The top level of the octree consists of one cube circumscribing the entire shape. This means that level n uses up to 8^(n−1) cubes. Only gray cubes are sub-divided. A quad-tree is a similar data-structure in two dimensions consisting of a hierarchy of squares. Most recent methods represent irregular items by polygons or polyhedra, and in general, when we use the term irregular, it will be for problems involving polygons or polyhedra unless we state otherwise. The representation of items must be sufficiently detailed to reach high-quality solutions without complicating the solution process. Thus the choice of representation is strongly connected to the ability to check for overlap between shapes efficiently, which will be discussed in Section 3.2. 3.2 Avoiding Overlap The two recurring constraints for packing problems are that items are not allowed to overlap, and that all items must be positioned within the container. In other words, let si ⊂ R^d be the set occupied by item i ∈ I in a d-dimensional problem; then, in order to ensure that items do not overlap, we require that for any pair of items i and j, int(si) ∩ int(sj) = ∅, where int(·) is the interior of the sets. Note that this requirement allows the items to abut. Likewise, assume the container covers the set C ⊆ R^d; then we also require that si ∩ C = si for any item i. We will discuss methods to detect overlap in Section 3.2.1 and present the no-fit polygon in Section 3.2.2 and φ-functions in Section 3.2.3.
These concepts can be used to determine efficiently how items must be placed relative to each other to avoid overlap. In Section 3.2.4 we show how a compact non-overlapping placement can be described by a graph that represents how rectangles need to be placed relative to each other. 3.2.1 Detecting Overlap Verification of the two constraints depends on the representation of the items and the container. Circles, and hyper-spheres in general, are particularly easy to check for overlap, since two of them overlap if and only if the distance between their centers is smaller than the sum of their radii. Orthogonally placed rectangles, i.e. with axis-aligned sides, are also simple to test for overlap. If we let w_i^k be the size of rectangle i in dimension k, then item i occupies the set [x_i^1, x_i^1 + w_i^1] × . . . × [x_i^d, x_i^d + w_i^d]. To ensure that two rectangular items i and j do not overlap, we have to require that either x_j^k ≥ x_i^k + w_i^k or x_i^k ≥ x_j^k + w_j^k for at least one dimension 1 ≤ k ≤ d. In two dimensions these constraints correspond to the requirement that i must be either left of, right of, above, or below j. We refer to this as a required relation between i and j. The procedure becomes a bit more tricky when we deal with irregular items or free rotation of rectangles. For convex polygons the intersection can be found in O(n) asymptotic time (see e.g. [125, 151]). The intersection of irregular polygons can be determined in time O(n log n + k log k) (see [39]), where n is the sum of the number of edges of the input polygons, and k is the number of edges of the output polygon. Since the output polygon can have O(n²) edges, Ω(n²) is also a lower bound on the running time of any algorithm that returns the intersection of two polygons, although determining whether two polygons intersect may be done more quickly. For raster models the process can be done in time linear in the input size.
Let two two-dimensional items i and j have raster functions f and g represented at integer coordinates. To test if the two items overlap, one may verify whether

f(x + x_i, y + y_i) + g(x + x_j, y + y_j) = 2 (3)

for any (x, y) with x ∈ {max(x_i^min, x_j^min), ..., min(x_i^max, x_j^max)} and y ∈ {max(y_i^min, y_j^min), ..., min(y_i^max, y_j^max)}, where [x_i^min, x_i^max] × [y_i^min, y_i^max] is the bounding box of item i.

For octree representations, one may test the cubes recursively. If two black cubes overlap then the shapes overlap. If gray cubes overlap one may recursively consider their higher-resolution sub-cubes until the highest resolution is reached, or until only white cubes overlap, in which case the actual shapes do not overlap.

An interesting idea for three-dimensional objects was suggested by Dickinson and Knopf [41, 42] for a heuristic for three-dimensional layout of irregular shapes. Here each of the six sides of the bounding box of each shape is divided into a uniform two-dimensional grid, and the distance perpendicular to the box side from each grid cell to the shape's surface is stored. Determining if two shapes overlap now amounts to testing the distance at all overlapping grid points of bounding-box sides, when the sides are projected to two dimensions.

3.2.2 No-Fit Polygon

A popular and efficient way to deal with overlap detection is the so-called no-fit polygon (NFP). If s_i ⊆ R^d represents item i, we let s_i(p) = {p + q | q ∈ s_i} be s_i translated by the vector p. For two items i and j which are represented by polygons, the NFP describes the set of translations of i and j that will cause the two items to overlap. If we translate i by p_i and j by p_j, then s_i(p_i) ∩ s_j(p_j) = ∅ if and only if s_i(p_i − p_j) ∩ s_j = ∅, and we say that the vector p_i − p_j is i's position relative to j. We may refer to the set of relative positions between s_i and s_j that will cause i and j to overlap as their set of overlap-resulting positions, ORP(i, j), which is defined as follows:

ORP(i, j) = {q | s_i(q) ∩ s_j ≠ ∅}. (4)

Figure 11: The clockwise sorting principle for NFP generation. (a) Two convex polygons. The edges of the polygons are a, b, c and i, j, k, respectively (sorted by slope). (b) The NFP of the two polygons. The polygon with edges a, b, c has been negated to the polygon with edges −a, −b, −c. The NFP can now be constructed by visiting the edges sorted by slope, which yields the sequence i, −c, j, −a, k, −b.

For items in the plane there is an appealing physical approach to calculate ORP(i, j). Cut both items out of cardboard. Position j on a sheet of paper with its reference point in 0, and slide the cardboard model of i around the model of j, such that the boundaries of the two polygons abut. As i is slid around j, follow the reference point of i, and draw its trajectory on the paper. The boundary of the figure drawn by the pencil represents the translations of i that will result in i abutting with j, and its interior is ORP(i, j).

To do this mathematically we use a concept which is referred to as the Minkowski sum. The Minkowski sum of two sets s_1 ⊆ R^d and s_2 ⊆ R^d is defined as (see [39]):

s_1 ⊕ s_2 = {p + q | p ∈ s_1, q ∈ s_2}, (5)

where p + q is the sum of vectors. Let −p be the vector where all of p's coordinates are negated; then for a set s we define −s = {−p | p ∈ s}. It can be seen (see [39]) that ORP(i, j) = s_i ⊕ −s_j, so the ability to evaluate the Minkowski sum allows us to determine which relative translations will cause i to overlap with j. Note that, since we generally allow the items to abut in packing problems, we must really define ORP(i, j) from the interiors of the sets of the items to get ORP(i, j) = int(s_i) ⊕ int(−s_j), so that the boundaries of the items can overlap. For two items represented by polygons P and Q we let NFP(P, Q) = int(P) ⊕ int(−Q) be their NFP, which is itself a polygon although it may contain degenerate elements.
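Equation (5) can be illustrated directly on finite (rasterized) point sets, where the Minkowski sum is simply the set of all pairwise vector sums (a toy sketch with our own function names, not an efficient polygon algorithm):

```python
def minkowski_sum(s1, s2):
    """Minkowski sum s1 ⊕ s2 of two finite planar point sets: {p + q}."""
    return {(p[0] + q[0], p[1] + q[1]) for p in s1 for q in s2}

def negate(s):
    """−s = {−p | p ∈ s}: negate every point of the set."""
    return {(-x, -y) for (x, y) in s}
```

Following the text, the overlap-resulting positions of two such sets would then be computed as `minkowski_sum(s_i, negate(s_j))`.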
Using a method which is quite similar to the physical traversal technique described above, one can determine the NFP for two convex polygons in O(n) time where n is the sum of edges from the two polygons, provided that the points of both the polygons are sorted in clockwise (or counter-clockwise) order. The reason for this, is that the NFP of two convex polygons consists of all the edges of each of the polygons sorted by slope. This is illustrated on Figure 11. 24 Introduction to Packing Problems Q P Reference point Figure 12: The NFP of two polygons P and Q. The boundary of the NFP corresponds to the position of P as P is translated around Q. Unfortunately, it is not quite as simple when dealing with arbitrary polygons. One of the intrinsic difficulties concerns the ability to position polygons within holes or concavities of other polygons. If one polygon can be fitted exactly into the hole of another, then the translation that leads to such a position, should be represented by a single point within the NFP. The NFP of two polygons is illustrated on Figure 12. Recent robust methods for calculating the NFP have been introduced by both Bennell and Song [11] and Burke et al. [24]. Both papers divide the methods for calculating the NFP in three different categories: Decomposition techniques, orbiting techniques, and Minkowski-sum techniques. The decomposition techniques consists of methods that break the polygons into convex or star-like components which are easier to handle individually. The orbiting techniques revolve around methods that implement the trajectory strategy that was mentioned in the beginning of this Section. Finally, Minkowski-sum techniques are generalizations of the slope-sorting techniques. The algorithm for generating the NFP by Bennell and Song [11] has worst case running time O(mn + m2 n2 log(mn)). 
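The slope-sorting construction for convex polygons can be sketched as follows: we compute P ⊕ (−Q) by merging the edge vectors of the two polygons in angular order (a sketch assuming simple convex counter-clockwise input; function names are ours):

```python
def minkowski_sum_convex(P, Q):
    """Minkowski sum of two convex polygons (counter-clockwise vertex lists)
    by merging their edge vectors in slope order; O(n + m) time."""
    def reorder(poly):
        # rotate the list so the bottom-most (then left-most) vertex is first
        k = min(range(len(poly)), key=lambda t: (poly[t][1], poly[t][0]))
        return poly[k:] + poly[:k]
    P, Q = reorder(P), reorder(Q)
    P, Q = P + P[:2], Q + Q[:2]            # sentinels for edge lookahead
    n, m = len(P) - 2, len(Q) - 2
    result, i, j = [], 0, 0
    while i < n or j < m:
        result.append((P[i][0] + Q[j][0], P[i][1] + Q[j][1]))
        ep = (P[i + 1][0] - P[i][0], P[i + 1][1] - P[i][1])
        eq = (Q[j + 1][0] - Q[j][0], Q[j + 1][1] - Q[j][1])
        cross = ep[0] * eq[1] - ep[1] * eq[0]
        if cross >= 0 and i < n:           # P's edge has the smaller slope (or tie)
            i += 1
        if cross <= 0 and j < m:           # Q's edge has the smaller slope (or tie)
            j += 1
    return result

def convex_nfp(P, Q):
    """NFP(P, Q) = P ⊕ (−Q), following the convention used in the text.
    Negating Q is a point reflection, which keeps the orientation CCW."""
    return minkowski_sum_convex(P, [(-x, -y) for (x, y) in Q])
```

For two unit squares, for instance, the resulting NFP is the square spanning [−1, 1] × [−1, 1].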
The main problem when dealing with NFPs is that algorithms for their computation are highly complicated, which is evident from the fact that robust methods have only begun to appear in the last few years. Another drawback of NFPs is that they must be computed for each pair of polygons that one wishes to test for intersection. This computation is generally done in a preprocessing step and results in a number of NFPs which is quadratic in the number of input polygons, which requires both computational time and storage. Additionally, if items can be rotated, NFPs are required for each pair of allowed rotation angles and each pair of polygons. It should be noted, though, that the recent methods of Burke et al. [24] and Bennell and Song [11] are capable of calculating hundreds to thousands of NFPs per second on commodity hardware. The largest set of NFPs reported by Burke et al. [24] consists of 90,000 NFPs which are calculated in 142 seconds on a Pentium 4 2 GHz, based on an instance consisting of 300 polygons. However, the NFPs for the most common benchmark instances involving polygonal shapes are processed within 1-5 seconds, so the preprocessing time may be negligible with today's hardware. To alleviate the problem of determining the NFPs, the EURO Special Interest Group on Cutting and Packing (ESICUP) has made the NFPs for the commonly used benchmark data sets for strip-packing problems with polygons available along with the data sets on their website (2). This way, researchers are spared the trouble of implementing algorithms for NFP generation.

Once the NFP of two polygons P and Q has been determined, one can determine whether they overlap when placed at positions p and q, respectively, by evaluating whether their relative position (p − q) is within NFP(P, Q). This amounts to a simple point-in-polygon test such as ray-shooting (see e.g. de Berg et al. [39]), which can be done in linear time in the size of NFP(P, Q).
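The point-in-polygon step can be sketched with the classic ray-crossing test (a sketch; points exactly on the boundary may be classified either way, which matters exactly when items abut):

```python
def point_in_polygon(pt, poly):
    """Ray-casting test: shoot a horizontal ray from pt and count how many
    polygon edges it crosses; an odd count means pt is inside. Runs in
    time linear in the number of vertices of poly."""
    x, y = pt
    inside = False
    n = len(poly)
    for k in range(n):
        (x1, y1), (x2, y2) = poly[k], poly[(k + 1) % n]
        if (y1 > y) != (y2 > y):                       # edge spans the ray
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside
```

Two polygons placed at p and q would then be flagged as overlapping when `point_in_polygon((p[0] - q[0], p[1] - q[1]), nfp)` holds.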
If the polygons overlap, we may determine a minimal two-dimensional translation of one of the polygons which will result in a non-overlapping relative position. This is referred to as the penetration depth. Given a relative position (p − q), we can calculate the penetration depth of polygons P and Q in linear time in the size of the NFP by evaluating the length of v − (p − q), where v is the point nearest to (p − q) on the boundary of NFP(P, Q). Given a direction r, we may also use the NFP to determine the amount we need to translate P along r for the two polygons to not overlap (see Figure 13 (a)). This is done simply by shooting a ray from (p − q) along r. The first intersection between the ray and the boundary of NFP(P, Q) is the nearest non-overlapping translation along r (see Figure 13 (b)).

Figure 13: (a) An overlapping triangle and rectangle. (b) Their NFP. (p − q) is the relative position of the two polygons. v is the closest point to (p − q) on the boundary, and the translation corresponding to v − (p − q) is the minimal translation of the triangle required to ensure that the two polygons do not overlap; its length is the intersection or penetration depth. The intersection of the line in direction r from (p − q) determines the amount required to move the triangle in direction r so that the two shapes do not overlap.

According to Bennell and Song [11], some researchers have come to feel that the introduction of the NFP has stifled research in non-overlapping techniques, but NFPs describe non-overlapping positions between two polygons, which is a fundamental aspect of solving packing problems involving polygons. So it seems that NFPs, by their very definition, are hard to circumvent. Theoretically, the notion of an NFP can be generalized to higher dimensions. Minkowski sums of polyhedra have certainly been investigated (e.g. by Varadhan and Manocha [154] and Zhong and Ghosh [163]), but suffer from the fact that the complexity of the output, i.e. the number of edges, faces, and vertices, can be O(n^3 m^3) (see [139]) for two polyhedra with complexity n and m, respectively. At this time, methods for packing three-dimensional irregular shapes are still in their infancy, but it would be interesting to examine the potential of a three-dimensional NFP (no-fit polyhedron).

(2) http://paginas.fe.up.pt/~esicup/

3.2.3 φ-functions

A concept related to the NFP is the notion of φ-functions, which were introduced by Stoyan and Ponomarenko [141] and investigated for two- and three-dimensional problems by Stoyan et al. [142, 143, 144, 145]. For two d-dimensional sets s_1 ⊆ R^d and s_2 ⊆ R^d, a φ-function for s_1 and s_2 is a function φ : R^{2d} → R for which we require the following:

• φ(p, q) > 0 if s_1(p) ∩ s_2(q) = ∅.
• φ(p, q) = 0 if int(s_1(p)) ∩ int(s_2(q)) = ∅ and s_1(p) ∩ s_2(q) ≠ ∅ (i.e. s_1 and s_2 have been translated such that they abut).
• φ(p, q) < 0 if int(s_1(p)) ∩ int(s_2(q)) ≠ ∅.

φ-functions can be perceived as a measure of distance between s_1(p) and s_2(q), and this is used several times by Stoyan et al. [143]. For instance, the φ-function of circles and spheres is defined as the distance between their centers minus the sum of their radii. Stoyan et al. [143] also give a definition of a φ-function for two convex polygons, such that φ(p, q) is the distance from (p − q) to the boundary of their Minkowski sum. This can be done by considering the minimal signed distance from (p − q) to any of the infinite lines that are coincident with the edges of the convex Minkowski sum. φ-functions are adequately abstract to be used to detect overlap between many different simple shapes (circles, spheres, rectangles, cylinders, regular polygons, and convex polygons) and combinations of these, which may include unions and disjunctions.
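For circles and spheres the definition above is particularly concrete; a minimal sketch:

```python
import math

def phi_circles(c1, r1, c2, r2):
    """φ-function of two circles/spheres with centers c1, c2 and radii r1,
    r2: the distance between the centers minus the sum of the radii.
    Positive when disjoint, zero when abutting, negative when overlapping."""
    return math.dist(c1, c2) - (r1 + r2)
```

The same code works in any dimension, since `math.dist` accepts point sequences of any equal length.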
For instance, the φ-function of two non-convex polygons may be represented by breaking them into convex subparts and combining the set of φ-functions that comes from each pair of convex subregions. We may perceive φ-functions as an implicit representation of the Minkowski sum of two sets, since the set {q ∈ R^d | φ(0, q) = 0} is the boundary of the Minkowski sum. Further, since any relative position of the two sets is mapped to a real value, which is less than 0 only for overlapping positions, we may use the value of the φ-function to indicate the amount by which the two sets overlap. Conversely, since we are generally interested in compact placements, we may also use positive values to indicate that the two sets can be moved closer. While recent research has consistently referred to φ-functions as "promising", they have yet to prove competitive with conventional methods for objects other than circles and spheres, but this could be due to the relatively simple optimization techniques applied thus far. Further discussion of φ-functions is beyond the scope of this text, but, because they are an implicit form and a multi-dimensional generalization of NFPs to a variety of shapes and combinations hereof, they may provide many advantages over NFPs as researchers familiarize themselves with the concept in the coming years.

3.2.4 Constraint Graphs

Constraint graphs are a set of directed acyclic graphs with edge weights, which can be used to describe the relative positions of rectangular items. For a d-dimensional problem, d graphs can be used to describe the position of each rectangular item – one graph for each dimension. For instance, to model constraint graphs for a two-dimensional placement, two graphs Gh and Gv are needed. In both graphs a node is added for each rectangle, and we refer to the node of a rectangle a simply as a in both Gh and Gv. Further, in Gh a node W (west) is added, while in Gv a node S (south) is added.
Now directed edges are added between the nodes to describe their relative positions. An edge from a to b in Gh indicates that rectangle a must be placed left of b, while an edge from a to b in Gv indicates that a must be placed below b. A directed edge from W to each node is added in Gh, and a directed edge from S to each node is added in Gv. The weights of these edges are all set to 0. Edges are added between all pairs of rectangles such that for any two rectangles a and b, exactly one of the four edges a to b or b to a in Gh, or a to b or b to a in Gv, exists. These edges indicate that a is to be placed left of, right of, below, or above b, respectively. In Gh each edge (a, b) is given weight equal to the width of rectangle a. In Gv the weight of each edge (a, b) is set to the height of rectangle a.

The graphs can be used to generate the coordinates of each rectangle in a placement which is compact in the sense that all rectangles have minimal x- and y-coordinates as described by the edge constraints of the graphs. To determine the x-coordinate of a rectangle a, one only needs to determine the longest path (or the critical path) from W to a in Gh, which can be done in O(n^2) time, where n is the number of rectangles, since the graph is acyclic (see e.g. [36]). Similarly, one can determine the longest path from S to a in Gv to determine its y-coordinate. Furthermore, one may determine the positions of all rectangles in O(n^2) time. The construction of each graph and the determination of the item positions are straightforward to generalize to higher dimensions.

Figure 14: Constraint graphs for two-dimensional placement of rectangles a, b, c, d, e, f and g. (a) Gh. (b) Gv. (c) Resulting placement. Edge weights have been removed but are respectively the width and height of the rectangles for Gh and Gv.

Figure 14 illustrates this concept.
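The longest-path computation can be sketched as follows (a sketch with our own naming; edges are given as pairs (a, b), and the W and S nodes are implicit since their outgoing edges have weight 0):

```python
def compact_placement(widths, heights, h_edges, v_edges):
    """Coordinates of a compact placement from two constraint graphs.

    h_edges contains pairs (a, b) meaning 'a left of b' (weight: widths[a]);
    v_edges contains pairs (a, b) meaning 'a below b' (weight: heights[a]).
    Each coordinate is the longest path from the virtual W (resp. S) node,
    found by relaxing edges in topological order (Kahn's algorithm)."""
    def longest_paths(sizes, edges):
        n = len(sizes)
        indegree = [0] * n
        succ = [[] for _ in range(n)]
        for a, b in edges:
            succ[a].append(b)
            indegree[b] += 1
        queue = [v for v in range(n) if indegree[v] == 0]
        coord = [0] * n                    # edges from W/S have weight 0
        while queue:
            a = queue.pop()
            for b in succ[a]:
                coord[b] = max(coord[b], coord[a] + sizes[a])
                indegree[b] -= 1
                if indegree[b] == 0:
                    queue.append(b)
        return coord
    return longest_paths(widths, h_edges), longest_paths(heights, v_edges)
```

For three rectangles where rectangle 0 is left of rectangle 1 and both are below rectangle 2, rectangle 2 ends up at y-coordinate equal to the tallest rectangle below it.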
Here the x-coordinate of g is equal to the length of the longest path from W to g in Gh, which goes through b and d, while the y-coordinate is equal to the length of the longest path from S to g in Gv, which goes through a and b. An important aspect of constraint graphs is that, as long as an edge exists between the nodes of any two rectangles a and b in one of the graphs, the placement is guaranteed to be without overlap. This comes from the fact that such an edge describes one of the required relations (see Section 3.2) between the two rectangles.

3.3 Solution Approaches

In this section we present a number of popular solution approaches. An issue which we have not covered previously is rotation. Most solution methods are not designed to solve problems where items are allowed to rotate. This is especially true for exact algorithms, most likely because enumeration schemes and bounds are not powerful enough to manage rotated items. We will not dwell any further on this topic but simply remark that heuristics which do consider rotation generally only consider rotations of 180°, or multiples of 90°.

Solution approaches for packing problems fall into many different categories. We will begin by introducing mixed integer programming formulations in Section 3.3.1, and proceed to consider some of the simpler paradigms in Sections 3.3.2, 3.3.3, and 3.3.4. A strategy for representing free space is considered in Section 3.3.5. Methods which consider overlap of items during the solution process are discussed in Section 3.3.6. The notion of envelopes, which is the core element of methods that construct a placement by positioning one item at a time, is discussed in Section 3.3.7. In Section 3.3.8 non-trivial representations of placements are presented. Advanced techniques for recent exact algorithms are presented in Sections 3.3.9, 3.3.10, and 3.3.11. Finally, the current state of the field of approximation algorithms is summarized in Section 3.3.12.
3.3.1 MIP Formulations

Since packing problems are optimization problems, an obvious choice for modeling them is mixed integer programming (MIP). A number of MIP formulations for packing problems have been considered over the years. Here we restrict ourselves to three models for orthogonal rectangular packing and a single model for irregular packing. MIP models are generally used for exact algorithms.

Beasley [8] introduced a model for two-dimensional knapsack packing problems. However, the model is simple to generalize to higher dimensions. In this model an integer variable is allocated for each item and each location throughout the container area. Specifically, we set x_ipq = 1 if item i is placed with its bottom-left corner at (p, q), and x_ipq = 0 otherwise. The coefficient matrix A is a form of "occupancy" matrix that describes which cells are occupied for each item location, and we set a_ipqrs = 1 if grid cell (r, s) is occupied when item i is placed with its bottom-left corner at (p, q), and a_ipqrs = 0 otherwise. Beasley considered a knapsack packing problem with the profit of each item type i set to v_i, and required that the number of packed items of type i be between P_i and Q_i. The model is as follows:

max Σ_{i=1}^{n} Σ_{p=1}^{W} Σ_{q=1}^{H} v_i x_ipq

s.t. Σ_{i=1}^{n} Σ_{p=1}^{W} Σ_{q=1}^{H} a_ipqrs x_ipq ≤ 1, (r = 1, ..., W), (s = 1, ..., H)

Σ_{p=1}^{W} Σ_{q=1}^{H} x_ipq ≥ P_i, (i = 1, ..., n)

Σ_{p=1}^{W} Σ_{q=1}^{H} x_ipq ≤ Q_i, (i = 1, ..., n)

x_ipq ∈ {0, 1}, (i = 1, ..., n), (p = 1, ..., W), (q = 1, ..., H),

where the first type of constraint ensures that each grid cell is occupied by at most one item, and the following two ensure that the required number of items of each type is packed. This formulation contains a massive number of integer variables, but Beasley introduced a way to reduce this number by considering possible positions for each item i based on the dimensions of the other items.
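A reduction in this spirit (a sketch of the well-known "normal patterns" idea; we do not claim it is Beasley's exact rule) keeps only those coordinates that are sums of subsets of the other items' sizes:

```python
def normal_patterns(other_sizes, container_size, item_size):
    """Candidate coordinates for an item of size item_size along one axis:
    every position that is a subset sum of the other items' sizes and still
    leaves room for the item. Computed with a simple reachability DP."""
    reachable = {0}
    for s in other_sizes:
        # extend every reachable position by s, if the item still fits
        reachable |= {r + s for r in reachable
                      if r + s + item_size <= container_size}
    return sorted(reachable)
```

For example, with other item widths 2 and 3 in a container of width 7, an item of width 3 only needs to be tried at x-coordinates 0, 2 and 3.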
However, even with this reduction, the model still suffers from a number of integer variables that grows with the container dimensions (and is thus exponential in the size of their encoding). Despite the large number of binary variables, Beasley [8] was actually able to solve many problems to optimality by using a combination of Lagrangian relaxation, subgradient search, and tree search. An advantage of the model by Beasley [8] is that it can be used to model any shape, provided that the shape can be represented accurately using a grid structure. In a way, one may therefore view this model as an integer formulation of the raster model used to detect overlap (see Section 3.2.1).

A similar model was later used by Hadjiconstantinou and Christophides [74], but unlike the model of Beasley [8], which used an integer variable for each combination of item and x- and y-coordinate, Hadjiconstantinou and Christophides [74] use one binary variable for each item and each x-coordinate and one binary variable for each item and each y-coordinate. This model was also used for exact methods in conjunction with Lagrangian relaxation, subgradient search, and tree search.

The model by Beasley [8] suffers from the fact that the number of variables depends on the dimensions of the container. A model for rectangular packing that uses only a polynomial number of binary variables in the number of items was presented by Onodera et al. [124] and Chen et al. [29]. For each rectangle i a continuous variable x_i,k is used to describe its coordinate in dimension k. To ensure that two rectangles do not overlap, two binary variables l_i,j,k and l_j,i,k are introduced for every pair of items i, j ∈ I in each dimension 1 ≤ k ≤ d. The model (without objective function) looks as follows:

x_i,k − x_j,k + W_k l_i,j,k ≤ W_k − w_i,k, (1 ≤ k ≤ d), (i, j = 1, ..., n)

x_j,k − x_i,k + W_k l_j,i,k ≤ W_k − w_j,k, (1 ≤ k ≤ d), (i, j = 1, ..., n)

Σ_{k=1}^{d} (l_i,j,k + l_j,i,k) ≥ 1, (i, j = 1, ..., n, i < j)

x_i,k + w_i,k ≤ W_k, (1 ≤ k ≤ d), (i = 1, ..., n)

x_i,k ≥ 0, l_i,j,k ∈ {0, 1}, (i, j = 1, ..., n), (k = 1, ..., d),

where W_k is the kth dimension of the rectangular container. The first two constraints ensure for every pair of rectangles i and j that the kth coordinate of i meets the requirement x_i,k + w_i,k ≤ x_j,k if l_i,j,k = 1, and vice versa if l_j,i,k = 1. If just one of l_i,j,k or l_j,i,k is equal to one for some k = 1, ..., d, then rectangles i and j cannot overlap, and the third constraint ensures that this is true for at least one dimension k. The fourth constraint ensures that all rectangular items are placed within the container boundaries. The model is general and can be used for different packing problems, but the formulation is likely not suitable for branch-and-bound based algorithms, where the LP relaxation is used in each node, since roughly n^2 binary variables must be set to 1 to avoid overlap. Onodera et al. [124] use the model for an exact branch-and-bound algorithm for the minimum-area rectangular packing problem, while Chen et al. [29] use it for the container loading problem. In both cases the authors report experiments with only around six items.

The first MIP formulation for two- and higher-dimensional packing problems is commonly credited to Gilmore and Gomory [69]. The formulation is based on a principle which had proven successful for one-dimensional cutting-stock problems using column generation (see Gilmore and Gomory [67, 68]), and is commonly used to introduce column generation to students. Their strategy for the one-dimensional problem is to enumerate all possible cutting patterns of the stock, i.e. all feasible placements of items. If we let A_j describe the jth cutting pattern, then we set a_ij = 1 if item i belongs to the jth cutting pattern and a_ij = 0 otherwise.
Let M be the number of feasible cutting patterns; then all the feasible cutting patterns can be used as columns of an n × M matrix A. Since the problem to be solved is a one-dimensional bin-packing problem, the objective is to minimize the number of cutting patterns required in order for all items to be cut. The full formulation may now be written as:

min Σ_{j=1}^{M} x_j

s.t. Σ_{j=1}^{M} a_ij x_j = 1, (i = 1, ..., n)

x_j ∈ {0, 1}, (j = 1, ..., M),

where x_j is a binary variable indicating whether pattern j is used or not, and the constraints ensure that each item is cut exactly once. This model is easily generalizable to higher dimensions, since each column A_j may simply represent any feasible d-dimensional cutting pattern.

Unfortunately, even for one dimension the number of cutting patterns is prohibitively large, so rather than representing all of them in A, Gilmore and Gomory [67, 68] generate the columns dynamically. To do this one must solve a knapsack problem that finds a feasible pattern with maximal reduced cost. For the one-dimensional case, the knapsack problem is one-dimensional and easy to solve using dynamic programming (see e.g. [90]). The dimension of the associated knapsack problem follows the dimension of the primary problem, and so for higher-dimensional problems one must solve a higher-dimensional knapsack packing problem. This problem is not quite as simple to manage as the one-dimensional case, since the dynamic programming approach that works so well for one-dimensional problems cannot be used for higher dimensions. However, it can be handled with one of the first two models (see also [130, 131]).

So far we have only considered MIP models for orthogonal rectangular items. MIP models for the strip-packing problem with polygons were introduced by Daniels et al. [38]. The models are based on NFPs and the principle that the relative position of two polygons must be outside the NFP (see Section 3.2.2).
For a pair of polygons this is modeled by requiring that the relative position of the polygons is located within one of the convex subregions that make up the set of feasible locations, which is the complement of the NFP. Ensuring that a relative position is within a convex subregion can be done with a set of linear constraints. Ensuring that it is within exactly one is done by introducing a binary variable for each subregion. Daniels et al. [38] considered the strip-packing problem, but because of the large number of binary variables, they were unable to solve realistic problems with more than 6-9 polygons to optimality.

The MIP formulation by Daniels et al. [38] was used by Li and Milenkovic [96] to introduce compaction and separation techniques for the strip-packing problem. The compaction procedure starts from a non-overlapping placement, and a linear programming formulation is used to determine a set of translation vectors for the polygons, which describes how the polygons should be translated to reach a feasible solution with smaller strip length. NFPs are used to determine the constraints of the linear programming formulation, such that the resulting translations constitute a feasible placement. The method is similar to solving the MIP described by [38] with all integer variables fixed to match the current placement. Since only neighboring polygons are used to determine the set of linear programming constraints in the original placement, and because the constraints generated depend on the relative position of two neighboring polygons, the translated placement can give rise to a different linear programming formulation with a new set of constraints. Therefore Li and Milenkovic [96] repeat the process a number of times until the constraints of two consecutive placements are equal, which is considered a local minimum of the compaction problem. A similar method is presented for separation of the polygons, i.e.
transforming a placement with overlap into one without overlap such that its strip length is minimal. Li and Milenkovic [96] use the compaction and separation techniques combined with a database of good solutions to solve the strip-packing problem for polygons.

3.3.2 Levels

Heuristics by Baker and Schwarz [4], Berkey and Wang [12], Chung et al. [31] and Lodi et al. [100] employ a principle based on levels or "shelves" for two-dimensional problems. The strategy is to fill the container area row by row. First a row which begins in the lower left corner of the container area is filled by positioning rectangles one by one from left to right. No rectangle may be positioned above any currently placed rectangle. Once no more items can be placed at the bottom of the container, that row is full, and the next row, which starts on top of the tallest item of the first row, is filled. Each row is completely flat and rectangles cannot be placed above each other within the same row. By repeatedly filling rows, one will eventually have placed all items or reached the top of the container. Authors refer to the rows as "levels" or "shelves". The procedure is illustrated on Figure 15.

Figure 15: Levels or shelves. The container is filled from left to right in rows, with the rectangles of a row "standing" on the bottom of that row. The height of a row is equal to the height of its tallest item.

Figure 16: Stacks. (a) A stack of 5 items. (b) A placement consisting of 6 stacks.

Because of their simplicity, level-based algorithms are easy to analyze, and bounds and exact algorithms for problems constrained to level-based packing have recently been investigated by Lodi et al. [104] and Bettinelli and Ceselli [13].
These methods are based on column generation, for which level-based packing is particularly suitable, since each set of items in a level may be represented by a column, just as cutting patterns of the one-dimensional cutting-stock problem were represented by columns in the column generation technique by Gilmore and Gomory [67, 68] (see Section 3.3.1). Level-based packing is also commonly used for approximation algorithms (see Section 3.3.12). For other, older studies of level packing see the papers by Coffman et al. [35] and Frenk and Galambos [60].

3.3.3 Stacks, Layers, and Walls

Stacks, layers and walls are generalizations of shelf packing to three dimensions. Gilmore and Gomory [69] arrange boxes in stacks that are placed on the bottom of the container and fill its height (see Figure 16). Once a suitable set of stacks has been determined, one can solve the three-dimensional problem by solving the two-dimensional problem in which the position of each stack must be determined. This principle was also used for a heuristic approach based on genetic algorithms by Gehring and Bortfeldt [64]. An advantage of stacks is that, since each item is supported by only one item, one can ensure global stability of the items simply by ensuring that the largest items are placed at the bottom of the stack and that higher items do not extend beyond the top of lower boxes.

Stacks reduce the three-dimensional problem to a set of one-dimensional problems and a single two-dimensional problem. An alternative is to build a set of layers in the height of the container, where the placement of items in each layer is determined by solving a two-dimensional problem, and no item can be above any other item. The height of each layer may be set to the height of the tallest box within it.

Figure 17: Layers. (a) 4 layers. (b) A placement consisting of the four layers placed on top of each other.

Figure 18: Wall-building. 4 walls are constructed and placed in a container.
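The level principle common to shelves and layers is easy to state as code; the following next-fit sketch (an illustration of the general idea, not any specific published heuristic) packs rectangles into levels of a fixed-width container:

```python
def shelf_pack(items, container_width):
    """Next-fit level heuristic: place rectangles left to right; when one
    does not fit in the current level, open a new level on top of the
    tallest rectangle placed so far in that level. items: (width, height)
    pairs. Returns the (x, y) positions and the total height used."""
    placements = []
    x = level_y = level_height = 0.0
    for w, h in items:
        if x + w > container_width:        # current level is full
            level_y += level_height
            x = level_height = 0.0
        placements.append((x, level_y))
        x += w
        level_height = max(level_height, h)
    return placements, level_y + level_height
```

The same loop, applied to the footprints of boxes with the "height" read as depth or layer height, gives the skeleton of a layer-building procedure.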
A tabu-search based heuristic for three-dimensional bin-packing based on layers was presented by Lodi et al. [103]. Despite the fact that this method is based on layers, its results are surprisingly comparable to those of the method by Faroe et al. [53], which considers free placement of items (see Section 3.3.6).

In container loading problems, the largest dimension is the depth of the container. Therefore the "layers" are constructed using the width W and height H and built in the depth of the container. Because these layers stand on the container floor as a set of walls which fill the container in its depth (see Figure 18), this process is called wall-building and was introduced by George and Robinson [66]. To determine the depth of each wall, authors begin by selecting a "layer-defining box" (LDB) that fixes the depth of the wall to the depth of that box. Generally, wall-building approaches rely heavily on the selection of the LDB and on efficient strategies for packing each wall. Bischoff and Marriott [16] compare different ranking functions for the LDB but do not determine any clear winner, and therefore recent methods use wall-building in conjunction with some form of meta-heuristic. Examples include the heuristics by Bortfeldt and Gehring [18], Gehring and Bortfeldt [63], and Pisinger [128], which are based on genetic algorithms, tabu-search, and tree-search, respectively. To fill the individual walls, authors either solve a simpler three-dimensional packing problem, as Gehring and Bortfeldt [63] do, or a two-dimensional packing problem. George and Robinson [66] solve a two-dimensional packing problem by placing items in shelves, and Pisinger [128] divides the wall recursively into horizontal and vertical strips, where each strip is packed by solving a one-dimensional knapsack problem, which can be done efficiently in pseudo-polynomial time (see Figure 19).
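The one-dimensional knapsack subproblem mentioned above has a standard pseudo-polynomial dynamic program; a minimal 0/1 sketch (integer weights assumed):

```python
def knapsack(values, weights, capacity):
    """0/1 knapsack by dynamic programming: dp[c] is the best value
    achievable with total weight at most c. O(n * capacity) time, i.e.
    pseudo-polynomial in the capacity."""
    dp = [0] * (capacity + 1)
    for v, w in zip(values, weights):
        for c in range(capacity, w - 1, -1):   # backwards: each item used once
            dp[c] = max(dp[c], dp[c - w] + v)
    return dp[capacity]
```

In a strip-packing context the "capacity" would be the strip length and the values would reflect how profitable it is to place each box in the strip.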
Wall- and layer-building strategies have the clear advantage that they reduce an otherwise hard problem to simpler sub-problems. However, the most important disadvantage is that space is lost between walls if the boxes cannot fully utilize the depth of a wall. This problem is somewhat countered by the fact that container-loading problems often involve hundreds of smaller homogeneous boxes, but wall-building is very likely to fail for a smaller set of large items.

Figure 19: When filling each wall during wall-building, Pisinger recursively fills the wall with horizontal and vertical strips using tree-search. Here six strips are used; first a vertical, then two horizontal strips, then two vertical strips, and finally one horizontal strip.

3.3.4 G4 structures

An interesting divide-and-conquer structure dubbed G4 was presented by Scheithauer and Terno [137]. If the container is partitioned either horizontally or vertically, then the two smaller packing problems can be solved and used to form a solution to the entire problem. This can be done recursively; however, only placements that are guillotine cuttable (see Section 2.1.2) may be considered. To remedy this situation Scheithauer and Terno [137] introduced a third possibility in addition to horizontal and vertical partitions. The third possibility is expressed with the G4-structure depicted in Figure 20 (a), which divides the container into four compartments. If we let n(W, H) denote the maximal number of items on a plate of size W × H using a G4-structure, then n(W, H) can be expressed with a recursion whose four-block option has the form

n(W, H) = max { n(a, b) + n(c, d) + n(e, f) + n(g, h) },

where the maximum is taken over the compartment dimensions a, . . . , h as indicated in Figure 20 (a). It should be noted that the values of a, . . . , h can be chosen from subsets of the values 1, . . . , W and 1, . . . , H which depend on the item dimensions.
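The flavor of such recursions can be illustrated with a memoized sketch restricted to the first two options, i.e. horizontal and vertical guillotine cuts, for a single item type with rotation; the G4 four-block option of [137] is omitted, so this computes only the best guillotine-cuttable packing. The base case and cut enumeration are assumptions of this sketch.

```python
from functools import lru_cache

# A memoized sketch of the guillotine part of the recursion for pallet
# loading with a single item type of size w x h, rotation allowed. The G4
# four-block option of [137] is omitted, so only guillotine-cuttable
# placements are counted.

def max_items(W, H, w, h):
    @lru_cache(maxsize=None)
    def n(a, b):
        # base case: a single item placed directly, in either orientation
        best = 1 if (w <= a and h <= b) or (h <= a and w <= b) else 0
        if best == 0:
            return 0          # no item fits, so no sub-rectangle helps
        for cut in range(1, a // 2 + 1):      # vertical guillotine cuts
            best = max(best, n(cut, b) + n(a - cut, b))
        for cut in range(1, b // 2 + 1):      # horizontal guillotine cuts
            best = max(best, n(a, cut) + n(a, b - cut))
        return best
    return n(W, H)

count = max_items(10, 6, 3, 2)   # perfect tiling: 60 / 6 = 10 items
```

Memoization over the O(W · H) sub-rectangles is what makes the dynamic programming mentioned below efficient.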
This recursion may be calculated efficiently using dynamic programming and forms the basis of the heuristic by Scheithauer and Terno [137] for the pallet loading problem, where there is only one item type. It was later used in heuristics for multi-pallet loading and container loading problems by Scheithauer and Sommerweiss [136] and Terno et al. [149]. Although G4 structures seem versatile, they are not capable of representing all non-guillotine cuttable placements. An example of one such placement is illustrated in Figure 20 (c).

3.3.5 Representing Free Space

One of the intrinsic problems when filling a three-dimensional container is to represent the residual space efficiently. So far we have touched upon simple strategies such as wall- and layer-building, where this is trivial since the residual space can be represented by a single box. Eley [50] considered three-dimensional rectangular placement in a more free-form manner. Here residual space is represented as a set of overlapping boxes. Initially the residual space is the complete container. The first item is positioned at the lower left back corner of the container, and three overlapping boxes which represent the residual space are created; one describes the full space above, one the space in front of, and one the space to the right of the item.

Figure 20: (a) G4 structure consisting of 4 blocks. (b) Placement using the G4 structure with four item types. (c) Placement which cannot be represented by a G4 structure.

Figure 21: Eley’s space representation. (a) A single item is positioned in the container. (b,c,d) The three residual spaces generated with the insertion of the item. (e) Alternative to the residual space above the item, which ensures that any new item positioned above will be placed at a stable position.

The following items are positioned one by one.
Each item is positioned in a residual space, and each of the residual spaces it overlaps with is divided into new residual spaces that describe the area surrounding the item. An item may only be placed in a residual space which is large enough to contain it. As the placement progresses, residual spaces may be merged to reduce their number. The process is demonstrated in Figure 21. An important part of this process is that stability can be ensured by limiting the residual space above items. When new items are positioned, the smaller residual space ensures that they do not extend beyond the top dimensions of an underlying item. Since residual spaces are merged together, entire planar areas can be constructed, so that larger items can be positioned above a group of smaller ones. Eley [50] integrated this method in a variant of tree-search which is often referred to as the pilot method. Alvarez-Valdes et al. [1, 2] presented GRASP and tabu-search based heuristics for two-dimensional knapsack problems with a strategy similar to the one by Eley [50]. However, here the residual spaces do not overlap, but are simply merged to create large regions. Several authors have suggested alternative ways to represent free space in the container. Ngoi et al. [118] use a matrix for each cross section in the height of the container to represent free space. The matrices are not a complete discretization of the container volume, but represent a non-uniform grid-structure with cells for every x- and z-coordinate of every corner of the placed boxes between two consecutive y-coordinates. Bischoff [15] later simplified this approach by representing the available height for every location with just one matrix.

Figure 22: Relaxed placement. (a) A placement with overlap. (b) Overlap has been reduced significantly by moving items. (c) A feasible placement without overlap.
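The residual-space bookkeeping of Section 3.3.5 can be sketched as follows. This is written in the spirit of Eley [50] but is not his code; the coordinate convention (origin plus extents) is an assumption of the sketch.

```python
# A sketch of residual-space splitting in the spirit of Eley [50]: placing a
# box of size (w, d, h) in the lower-left-back corner of a free space
# (x, y, z, W, D, H) yields three overlapping residual boxes: to the right
# of, in front of, and above the item.

def split_space(space, box):
    (x, y, z, W, D, H) = space       # origin and extents of the free space
    (w, d, h) = box                  # box placed at (x, y, z)
    assert w <= W and d <= D and h <= H
    residuals = []
    if W - w > 0:
        residuals.append((x + w, y, z, W - w, D, H))   # space to the right
    if D - d > 0:
        residuals.append((x, y + d, z, W, D - d, H))   # space in front
    if H - h > 0:
        residuals.append((x, y, z + h, W, D, H - h))   # space above
    return residuals

spaces = split_space((0, 0, 0, 10, 8, 6), (4, 3, 2))
```

The stability variant described above would shrink the third residual to the item's footprint, (x, y, z + h, w, d, H - h), so that anything placed in it rests fully on the item below.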
3.3.6 Relaxed Placement Methods

Although solutions require that no items overlap, several methods in which overlap is allowed during the solution process have been investigated in the literature. This is especially the case for problems involving irregular items. Commonly, these methods either include overlap in the objective function or attempt solely to solve the decision variant of the packing problem, i.e., find a feasible placement given a set of items and container dimensions. The procedure works by iteratively reducing overlap. This can be implemented within a local search framework, where one may simply change the coordinates of one or more items, evaluate the overlap of the new placement, and accept it if it contains less overlap. The procedure is illustrated in Figure 22. It should be noted that although the area of overlap of two items is one of the most obvious ways to determine the amount of overlap, overlap can be measured in a variety of different ways (e.g. intersection depth), some of which are presented in Nielsen [120]. Heuristics that employ this principle for two-dimensional problems involving irregular shapes were investigated by Heckmann and Lengauer [76], Jain and Gea [85], Jakobs [86], Lutfiyya et al. [105], Oliveira and Ferreira [122], Theodoracatos and Grimsley [150], and Bennell and Dowsland [10]. Most noteworthy are the methods by Oliveira and Ferreira [122] and Heckmann and Lengauer [76], where overlap is removed by simulated annealing. Oliveira and Ferreira [122] allow random moves of items and use a raster model to evaluate the amount of overlap. Heckmann and Lengauer [76] solve the problem in four phases, where the first phases consider crude representations of items, while the later phases consider finer representations based on polygons. The distance items may move is also determined by the temperature of the simulated annealing.
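The local search loop described above can be sketched for axis-aligned rectangles. This minimal version measures overlap as total pairwise intersection area and accepts only non-worsening random moves; the published methods (e.g. the simulated annealing of [76, 122]) are far more refined, and the move rule and parameters here are assumptions of the sketch.

```python
import random

# A minimal relaxed-placement local search sketch: measure overlap as total
# pairwise intersection area, move a random rectangle to a random position,
# and keep the move only if the total overlap does not increase.

def overlap_area(r, s):
    (x1, y1, w1, h1), (x2, y2, w2, h2) = r, s
    dx = min(x1 + w1, x2 + w2) - max(x1, x2)
    dy = min(y1 + h1, y2 + h2) - max(y1, y2)
    return max(0, dx) * max(0, dy)

def total_overlap(rects):
    return sum(overlap_area(rects[i], rects[j])
               for i in range(len(rects)) for j in range(i + 1, len(rects)))

def reduce_overlap(rects, W, H, iterations=20000, seed=0):
    rng = random.Random(seed)
    rects = list(rects)
    current = total_overlap(rects)
    for _ in range(iterations):
        if current == 0:
            break                     # feasible placement found
        i = rng.randrange(len(rects))
        x, y, w, h = rects[i]
        old = rects[i]
        rects[i] = (rng.randint(0, W - w), rng.randint(0, H - h), w, h)
        new = total_overlap(rects)
        if new <= current:            # accept non-worsening moves
            current = new
        else:
            rects[i] = old            # reject and restore
    return rects, current

rects, overlap = reduce_overlap([(0, 0, 4, 4), (1, 1, 4, 4), (2, 2, 4, 4)],
                                12, 12)
```

Accepting equal-cost moves lets the search drift across plateaus; replacing the acceptance rule with a temperature-dependent one gives the simulated annealing variants discussed above.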
Recent methods for irregular packing in two dimensions have abandoned raster models in favor of polygons. Bennell and Dowsland [9] and Gomes and Oliveira [73] consider translations of items which result in overlap, but use the compaction and separation techniques described in Section 3.3.1 to remove the overlap. Bennell and Dowsland [9] apply the separation techniques whenever the overlap climbs above a certain threshold level, while Gomes and Oliveira [73] conduct separation and overlap removal after each exchange of shapes. A relaxed placement heuristic for rectangular packing was used by Faroe et al. [53] for the two- and three-dimensional bin-packing problem. The number of bins is minimized by starting with a large number of bins which is iteratively reduced. For each number of bins a decision problem which asks for a feasible solution is solved. To solve the decision problem, the heuristic iteratively reduces overlap by translating items either horizontally or vertically. Rather than considering overlap for single positions, all horizontal or vertical translations are considered when an item is moved, and the position with the least overlap is chosen. This is done efficiently using an algorithm whose running time is polynomial in the number of items. The technique has also been applied in four of the papers in this thesis ([A], [B], [F], and [E]), for strip-packing of polygons and polyhedra, and for placement of cylinders with spherical ends. A heuristic based on the procedure from [A] for strip-packing of polygons was later developed by Umetani et al. [152], but unlike the method by Egeblad et al. [A], which does not use any form of preprocessing, the method in [152] uses NFPs to calculate the overlap. Another relaxed placement heuristic, by Imamichi et al. [83], takes a more global approach. Here overlap is calculated as the sum of intersection depths (see Section 3.2.2) of pairs of overlapping polygons.
Overlap is iteratively removed by moving the placement in the direction of the gradient of the objective function, i.e., each item is translated in the direction of its minimal penetration depth. Once overlap has been removed, two items are swapped to generate a new overlapping placement, which is then used to find yet another non-overlapping placement. Relaxed placement methods have also been applied to three-dimensional problems involving irregular items. The method from [83] was generalized to three dimensions by Imamichi and Nagamochi [82], where objects are represented by a collection of spheres. Ikonen et al. [81] represent items as triangle meshes and use a genetic algorithm to control the search. Overlap is evaluated by checking bounding boxes for intersection and subsequently the triangles of the items. Cagan et al. [25] and Yin and Cagan [158, 159] use simulated annealing and also handle various additional optimization objectives such as routing lengths. Intersection checks are done using octree decompositions of shapes. An interesting problem was investigated by Eisenbrand et al. [49], where the maximum number of uniform boxes that can be placed in the trunk of a car must be determined. For any placement of boxes they define a potential function that describes the total overlap and penetration depth between boxes and trunk sides and between pairs of boxes. Relaxed placement methods work well for decision problems where one must find a non-overlapping placement within a specific container or number of containers. It is not clear if they can be used for knapsack packing problems where a specific subset of items must be selected. The method by Eisenbrand et al. [49] removes and inserts new items into the placement during the solution process, but the problem involves only one type of item.
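For axis-aligned rectangles the penetration-depth idea reduces to a very small computation: the shortest separating translation lies along the axis with the smaller overlap. This is a sketch of the concept only; methods such as [83] work with polygons and minimize the sum of (squared) penetration depths over all overlapping pairs.

```python
# A sketch of the intersection-depth idea for two axis-aligned rectangles:
# the penetration depth is the length of the shortest translation that
# separates them, found by taking the axis with the smaller overlap.

def penetration_vector(r, s):
    """r, s: (x, y, w, h). Returns the shortest translation of r that
    removes its overlap with s, or (0, 0) if they do not overlap."""
    (x1, y1, w1, h1), (x2, y2, w2, h2) = r, s
    dx = min(x1 + w1, x2 + w2) - max(x1, x2)   # overlap extent along x
    dy = min(y1 + h1, y2 + h2) - max(y1, y2)   # overlap extent along y
    if dx <= 0 or dy <= 0:
        return (0, 0)                 # no overlap
    if dx < dy:                       # separate along the shallower axis
        return (-dx, 0) if x1 < x2 else (dx, 0)
    return (0, -dy) if y1 < y2 else (0, dy)

vec = penetration_vector((0, 0, 4, 4), (3, 1, 4, 4))
```

Summing such vectors over all overlapping pairs gives a descent direction of the kind used in the gradient-based translation step described above.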
The author of this thesis attempted to solve two-dimensional knapsack packing problems using the relaxed overlap method described in [A] and the following procedure: First, the canonical one-dimensional relaxation of the two-dimensional knapsack problem is solved. This relaxation is the one-dimensional knapsack problem created by considering the same set of items and item profit values, but setting the size of each item in the one-dimensional problem to its area from the two-dimensional problem. Once solved, a two-dimensional feasible placement of the selected set of items is sought using the method from [A]. If no placement is found within a given time-limit, a constraint is added to the one-dimensional relaxation which ensures that the same set of items cannot be selected again, and the one-dimensional problem is solved anew to reveal a new set of items. The procedure iterates until a solution is found. Since two-dimensional solutions to rectangular problems are often within 2% of the optimal solution of the canonical one-dimensional relaxation, as described in Egeblad and Pisinger [C], one would expect that few constraints were required. Unfortunately, it proved impossible even to find solutions to relevant rectangular problems using this method, which is likely because the difference between the one-dimensional solution and the two-dimensional solution is too great, so too many constraints are actually needed.

Figure 23: Normalized placement. (a) A placement of a set of rectangles. (b) Normalized placement, where none of the rectangles can be moved left or down without causing overlap.

3.3.7 Bottom-Left Strategies and Envelopes

A placement of rectangles is normalized if it is impossible to translate any rectangle in the placement to the left, downwards, or – in three-dimensional packing – backwards, without causing overlap.
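The iterative relaxation procedure described above can be sketched as follows. For clarity this sketch brute-forces the one-dimensional relaxation over subsets instead of using a proper knapsack solver, and the two-dimensional decision method of [A] is abstracted into a callback; both are assumptions of the sketch, as is the toy oracle in the usage line.

```python
from itertools import combinations

# A sketch of the iterative procedure: solve the 1D area relaxation of the
# knapsack problem, test whether the chosen subset admits a 2D placement
# (an abstract callback stands in for the method of [A]), and otherwise add
# a no-good cut forbidding that exact subset and resolve. Brute-force subset
# enumeration replaces a real knapsack solver for clarity.

def iterate(items, area_capacity, has_2d_placement, max_rounds=100):
    """items: list of (area, profit). Returns the first subset (as a tuple
    of indices) that the 2D oracle accepts, or None."""
    forbidden = set()
    for _ in range(max_rounds):
        best, best_subset = -1, None
        for r in range(len(items) + 1):
            for subset in combinations(range(len(items)), r):
                if subset in forbidden:
                    continue          # no-good cut: subset already rejected
                area = sum(items[i][0] for i in subset)
                profit = sum(items[i][1] for i in subset)
                if area <= area_capacity and profit > best:
                    best, best_subset = profit, subset
        if best_subset is None:
            return None
        if has_2d_placement(best_subset):
            return best_subset
        forbidden.add(best_subset)
    return None

# toy oracle: rejects the first relaxation optimum, accepts anything else
chosen = iterate([(4, 10), (3, 6), (2, 5)], 6,
                 has_2d_placement=lambda s: s != (0, 2))
```

As noted above, on real instances the gap between the relaxation optimum and a placeable subset forced far too many such cuts for the approach to work.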
It was proven by Herz [78] that an optimal placement which is normalized exists for the packing problems described in Section 2.1. The intuition behind this proof is that the rectangles in an optimal solution may be translated to the left and downwards until no further translation is possible without introducing overlap (see Figure 23). The consequence is that the search for an optimal solution can be limited to placements which are normalized. A commonly studied paradigm for two-dimensional rectangular packing is the so-called bottom-left principle. The bottom-left principle takes advantage of this property of normalized placements and is as follows: We are given an ordering of the items I as a list L. The rectangles are positioned in the order of L, and the leftmost lowest (bottom-left) feasible position is chosen for each rectangle. Chazelle [28] presents a bottom-left algorithm with O(n²) running time for n rectangles. Jakobs [86] and later Liu and Teng [98] considered heuristics for the rectangular strip-packing problem based on genetic algorithms in which the sequence L is the genotype, i.e., placements are represented by sequences and each individual represents its own sequence of items. The method by Jakobs [86] was also extended to consider polygons and is one among several methods for polygons that use the bottom-left or a similar principle to determine the position of the next item in a sequential placement. Many of these methods rely on envelopes. The notion of envelopes or profiles is particularly useful in methods that construct a placement one item at a time. The purpose of the envelope is to reduce the set of feasible positions for each item by “cutting off” part of the placement area (see Figure 24). As that set is reduced, finding a suitable position for each item may be done more efficiently.
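The bottom-left rule described above can be sketched with a brute-force placement routine. Candidate coordinates are restricted to edges of already placed rectangles, which is an assumption of this sketch; it is far from Chazelle's O(n²) algorithm [28] but illustrates the lowest-then-leftmost rule.

```python
# A brute-force sketch of the bottom-left principle: rectangles are placed
# in list order at the lowest, then leftmost, feasible position, with
# candidate coordinates taken from the edges of already placed rectangles.

def disjoint(a, b):
    (x1, y1, w1, h1), (x2, y2, w2, h2) = a, b
    return (x1 + w1 <= x2 or x2 + w2 <= x1 or
            y1 + h1 <= y2 or y2 + h2 <= y1)

def bottom_left(sizes, strip_width):
    placed = []
    for w, h in sizes:
        xs = {0} | {x + pw for x, _, pw, _ in placed}
        ys = {0} | {y + ph for _, y, _, ph in placed}
        candidates = sorted((y, x) for x in xs for y in ys
                            if x + w <= strip_width)
        for y, x in candidates:       # lowest first, then leftmost
            rect = (x, y, w, h)
            if all(disjoint(rect, p) for p in placed):
                placed.append(rect)
                break
    return placed

layout = bottom_left([(3, 2), (3, 3), (4, 1)], strip_width=6)
```

Running the routine on different orderings of the same items is exactly the degree of freedom the genetic algorithms of Jakobs [86] and Liu and Teng [98] search over.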
Figure 24: The notion of an envelope. As the placement is constructed, part of the placement area becomes inaccessible (closed) and the admissible area for a shape is the remaining area of the placement.

In general, methods that use envelopes for rectangular placement rely on the fact that the set of normalized placements includes an optimal placement, since each rectangle is placed such that it abuts the envelope. For two-dimensional rectangular packing problems a variant of the envelope structure was presented by Scheithauer [134] and later, for two- and three-dimensional problems, by Martello et al. [108]. The placement is constructed starting from the lower-left corner of the placement area; the concept is illustrated in Figure 25. Whenever a new rectangle is placed, it may not be placed such that its lower-left corner falls below and to the left of any previously placed rectangle's upper-right corner. The boundary of the feasible positions is a stair-case pattern, and new rectangles may only be placed such that their lower-left corner abuts an inner corner of a step of the stair-case (the circles on the figure). Once a rectangle is placed, the stair-case is expanded to contain the new rectangle. The rectangular envelope may be represented using the rectangles which have already been placed and can be updated in amortized constant time each time a rectangle is placed. This renders the envelope structure extremely efficient, and it has been used in a number of methods. Martello et al. [108] used it in conjunction with a branch-and-bound algorithm for the two- and three-dimensional bin-packing problem to determine if a selection of items was feasible. Given the set of items, Martello et al. [108] construct a placement recursively by branching on each remaining rectangle and each position in the envelope. The same principle was reused in Martello et al.
[109] for an exact method for the two-dimensional strip-packing problem, and the method was revisited by Caprara and Monaci [27] for the two-dimensional knapsack packing problem. The same envelope was also used by Pisinger [127] in a heuristic for the area minimization problem, and by Egeblad and Pisinger [C] in a heuristic for the two-dimensional knapsack packing problem. While a three-dimensional variant of the envelope structure was presented by [108], it was later discovered by den Boef et al. [40] that this structure can represent only a subset of three-dimensional placements, referred to as robot-packable placements, which are also considered in Egeblad and Pisinger [C]. Although an optimal solution may not be robot-packable, this subset still represents a comprehensive set of placements. Envelopes have also been used extensively for polygon items and were introduced in a heuristic for the two-dimensional strip-packing problem with polygons by Art, Jr. [3]. While the set of feasible locations for each new rectangle is reduced to a discrete set by the rectangular envelope, the set of feasible locations remains infinite for irregular shapes. Several other heuristics for the strip-packing problem with polygons use some form of envelope principle along with a greedy strategy similar to the bottom-left principle. The heuristic by Oliveira et al. [123] places the polygons sequentially at the position in an “envelope” which is deemed most promising according to different measures. Gomes and Oliveira [72] later added a 2-exchange neighborhood to the heuristic, which exchanges the positions of items in the sequence. A tabu-search based heuristic in which the sequence is modified was also presented by Burke et al. [23]. The “jostling” heuristic by Dowsland et al. [46] places the polygons sequentially, repeatedly from left to right and then right to left. In each iteration the sequence is changed to reflect the last placement.
While normalized optimal solutions exist for rectangular packing problems, this is not the case for problems involving polygons, and one cannot expect to find the optimal placement for problems with polygons using bottom-left principles.

3.3.8 Abstract Representations

A direct representation of placements is a list of the individual coordinates of each item. This representation has the main drawback that infeasible placements which contain overlap can be represented, and transitions from one placement without overlap to another placement without overlap are not simple to achieve, as discussed in Section 3.3.6. This is illustrated in Figure 26 (a) and (b), where the rectangles b and c exchange positions.

Figure 25: Rectangular envelope. The shape of an envelope as a placement is constructed by adding one rectangle at a time. Rectangles are placed in normalized fashion and may not be placed lower or left of the envelope, which is indicated by thick lines. Circles indicate feasible positions of the lower-left corner of the next rectangle to be added.

Figure 26: (a) A placement of rectangles a, b, c. (b) Overlap will appear if b and c are exchanged directly. (c) No overlap will appear if the rectangles are exchanged in the sequence pair representation and then positioned using a decoding algorithm (see text).

To tackle this problem, many heuristics rely on some form of abstract representation of the positions of items which does not deal directly with coordinates. Instead the placement is represented as sequences or graphs which can be used to assign coordinates to each item using some form of decoding algorithm. The decoding algorithm generally ensures that the resulting placement is feasible. The heuristic by Jakobs [86], which was touched upon in Section 3.3.7, actually uses an abstract representation of placements.
Here a placement of items is represented implicitly by a sequence which can be decoded into a feasible placement using a bottom-left algorithm that places items one by one in the order of the sequence. The advantage of this representation is that any modification of the sequence still leads to a feasible placement. On the other hand, small changes in the sequence may lead to completely different placements, which makes such representations difficult to work with during the intensification stage of local search heuristics, where the main focus is to reach a local minimum. Abstract representations work well with local search heuristics since one can easily perform a small alteration of the current abstract representation and evaluate the outcome. Alterations can be as simple as exchanging the positions of two items in the ordered list mentioned above, which is difficult without causing overlap in a direct representation. During the last half of the 1990s several more advanced representations for two-dimensional rectangular packing problems were proposed. The intention of these representations is to maintain the overall relative positions of the rectangles when small changes are made. The representation that ignited this research was the sequence pair, which was introduced as part of a heuristic for the minimal area rectangle packing problem by Murata et al. [117]. A sequence pair consists of two sequences of the rectangles. If the rectangles are numbered 1, . . . , n, the sequence pair consists of two permutations of the numbers (two sequences) < σ+(1), . . . , σ+(n) > and < σ−(1), . . . , σ−(n) >, where σ+ and σ− are permutation functions. A sequence pair can be converted into a placement using two simple rules. For i, j ∈ {1, . . . , n}:

• if σ+⁻¹(i) < σ+⁻¹(j) and σ−⁻¹(i) < σ−⁻¹(j), then j is placed to the right of i;

• if σ+⁻¹(i) > σ+⁻¹(j) and σ−⁻¹(i) < σ−⁻¹(j), then j is placed above i.
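These two rules can be turned into a simple quadratic decoder, sketched below in the spirit of the O(n²) algorithm of Tang et al. [147] that avoids constraint graphs; it is a sketch, not their exact algorithm. Items are processed in the order of the second sequence, so every item that must lie left of or below the current item is already placed. The item names and dimensions in the usage line are illustrative (the sequence pair is the one used for Figure 26 (a), but the widths and heights are assumptions).

```python
# A simple O(n^2) sequence-pair decoder in the spirit of Tang et al. [147]:
# if i precedes j in both sequences, j goes right of i; if i follows j in
# the first sequence but precedes it in the second, j goes above i.

def decode(seq_plus, seq_minus, widths, heights):
    """Returns {item: (x, y)} for the placement induced by the pair."""
    pos_p = {v: i for i, v in enumerate(seq_plus)}
    x = {v: 0 for v in seq_plus}
    y = {v: 0 for v in seq_plus}
    for idx, j in enumerate(seq_minus):
        for i in seq_minus[:idx]:      # all i preceding j in seq_minus
            if pos_p[i] < pos_p[j]:    # i precedes j in both: j right of i
                x[j] = max(x[j], x[i] + widths[i])
            else:                      # otherwise: j above i
                y[j] = max(y[j], y[i] + heights[i])
    return {v: (x[v], y[v]) for v in seq_plus}

placement = decode(['b', 'a', 'c'], ['a', 'c', 'b'],
                   widths={'a': 2, 'b': 3, 'c': 1},
                   heights={'a': 1, 'b': 1, 'c': 2})
```

Processing in the order of the second sequence works because both placement rules require that i precedes j in that sequence, so all predecessors of j are finalized before j is placed.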
By symmetry all four possible relations between i and j can be deduced. Once the relations are established one can use them to construct constraint graphs as described in Section 3.2.4, which can be used to determine the positions of i and j. The sequence pair < c, b, a, d, e, g, f >, < a, e, b, c, d, f, g > represents the constraint graphs on Figure 14. It should be noted that all normalized placements can be represented by a sequence pair, as proven by Murata et al. [117], and a method that can convert both a non-overlapping placement and a placement with overlap to a sequence pair was presented by Egeblad [48]. Since determining a placement of a set of rectangles based on constraint graphs requires O(n²) time, the transformation from a sequence pair to a placement can be done in O(n²) time by first constructing the constraint graphs and then using them to determine a placement. Figure 26 (a) and (c) illustrates an exchange of two rectangles in both sequences of the sequence pair. The placement in Figure 26 (a) is represented by the sequence pair < b, a, c >, < a, c, b >. Figure 26 (c) shows the placement of the sequence pair < c, a, b >, < a, b, c >, after the rectangles b and c have exchanged positions in the sequences. Unlike the direct exchange based on coordinates shown in Figure 26 (b), the exchange of positions in the sequences does not lead to overlap. A number of authors have suggested faster decoding methods. Tang et al. [147] introduce an algorithm which can convert a sequence pair to a placement in O(n log n) time using longest weighted common subsequence algorithms. They also describe a simple O(n²) time algorithm which circumvents constraint graphs completely and uses only the sequences to determine the position of each rectangle. This result was later improved to O(n log log n) time by Tang and Wong [146] with advanced data structures. An O(n log log n) decoding algorithm was also introduced by Pisinger [127], who used an envelope as described in Section 3.3.7.
Items are placed one by one using the envelope, and the position in the envelope to be used for each rectangle is based on the sequence pair. The envelope structure is used in such a way that only relations between items on the envelope and the item to be placed are considered; therefore, this algorithm does not completely place rectangles according to the relative positions induced by the sequences. However, this decoding will generally generate more compact (semi-normalized) placements, and it was proven that any normalized minimal area packing solution can still be represented with a sequence pair under this decoding. The method by Pisinger [127] can also be simplified to a decoding algorithm with running time O(n²). The decoding algorithms mentioned above were all used in heuristics for the minimal area rectangle packing problem, but the sequence pair representation was also used by Egeblad and Pisinger [C] in conjunction with a 2-exchange neighborhood and simulated annealing for solving the two-dimensional knapsack packing problem. A three-dimensional variant of the sequence pair, referred to as the sequence triple, is also introduced there to solve the three-dimensional knapsack packing problem. The sequence pair was also used in conjunction with a branch-and-bound method in an exact algorithm for the two-dimensional rectangular strip-packing problem by Kenmochi et al. [91]. The list of other similar representations includes O-trees by Pang et al. [126], B*-trees by Chung et al. [32], and Corner Block Lists by Hong et al. [80]. All of these representations were introduced for the minimal area rectangle packing problem. They are commonly used together with a metaheuristic, such as simulated annealing, that controls local search based alterations of the representation. Abstract representations for polygon problems have not been investigated to the same extent.
A major issue is that while a normalized optimal feasible placement always exists for rectangular packing problems, a similar property is unlikely to exist for more general packing problems, especially if items can fit within holes of other items. Therefore it may be impossible to represent all relevant placements by something as simplistic as a pair of sequences. The current methods for irregular packing rely on a sequence and on the bottom-left principle or something similar, and decoding the sequence into a placement is a computationally complex process. This is in contrast to the sequence pair representation, for which an efficient implementation can decode hundreds of thousands of sequences containing 20-40 rectangles per second on modern commodity hardware.

3.3.9 Packing Classes

An interesting abstract representation for rectangular problems, where multiple placements are represented by a single data-structure, is the packing class, which was introduced by Fekete and Schepers [55]. Fundamentally, a packing class consists of a set of undirected graphs Gi = (Vi, Ei) for i = 1, . . . , d – one for each dimension of the problem. Each graph Gi contains a set of nodes which correspond to the rectangles of the problem, similar to the constraint graphs of Section 3.2.4. Additionally, each graph Gi is an interval graph, which means that it represents the intersection of intervals on the real line. To create the graphs from a placement, one connects two nodes in graph Gi with an edge if and only if the two corresponding rectangles overlap when considering their extents in the ith dimension; i.e., an edge is added between the nodes of rectangles a and b in G1 if and only if [xa, xa + wa] ∩ [xb, xb + wb] ≠ ∅. Fekete and Schepers [55] consider construction of such graphs and denote a set of edges E1, . . . , Ed for the d graphs a packing class if it satisfies the following properties:

• P1: The graphs Gi = (V, Ei) for i = 1, . . . , d are interval graphs.
• P2: Each stable set S of Gi is xi-feasible for i = 1, . . . , d. A stable set in this context is a set of pairwise unconnected vertices, and the requirement means that the sum of the widths (in dimension i) of a set of rectangles which do not overlap in that dimension cannot exceed the width of the placement area in that dimension.

• P3: E1 ∩ · · · ∩ Ed = ∅. This means that two rectangles cannot overlap in all dimensions.

A packing class defines a whole set of placements. To convert a packing class into a placement, one must consider the complement of each graph, GCi = (V, EiC), and assign an orientation to each of the edges in EiC. Let Fi be the assigned orientation of EiC; then Fi must be a transitive orientation, i.e., the directed graph GFi = (V, Fi) must be transitive. Once a transitive orientation is known, the graphs GFi can be converted into a placement by setting the coordinates of rectangle a using xi(a) = max{xi(b) + wi(b) | (a, b) ∈ Fi}, which is the same principle as was used to convert constraint graphs into placements (see Section 3.2.4). Figure 27 illustrates the concept. The 36 placements which belong to the same packing class but correspond to different orientations are illustrated in Figure 28. Fekete and Schepers [55, 57] and Fekete et al. [58] also show how to construct the sets E1, . . . , Ed such that they constitute a packing class. They use a form of tree-search which adds an edge between two rectangles in one of the graphs in each node of the tree. To limit the size of the tree they rely on a set of mathematical theorems which can identify whether the properties P1, P2, and P3 are all satisfied. It should be noted that the actual positions of the rectangles are generally not required to solve a problem, and therefore a transitive orientation is not required.

Figure 27: Packing classes. A placement of rectangles in the upper-right corner is used to generate the interval graphs G1 and G2.
The edges of G1 and G2 constitute the packing class that the placement is part of. The edges of the complementary graphs GC1 and GC2 are given an orientation to reveal the graphs GF1 and GF2, which can be used to generate the placement in the lower-right corner.

Figure 28: The 36 placements which arise from the packing class represented by the edges of G1 and G2 from Figure 27.

Their method is used to solve the multidimensional orthogonal knapsack packing problem to optimality by Fekete et al. [58], but strategies for both rectangular strip-packing and bin-packing are also discussed by Fekete and Schepers [57]. It should be noted that the authors also take advantage of bounds, to which we will return in Section 3.3.11. The appeal of packing classes is that they can reduce the solution space significantly, not only by completely disregarding the coordinates of the individual rectangles, but also by representing many symmetric placements with one single packing class. On the other hand, the drawback of packing classes is that they are relatively difficult to construct. Further, since they do not consider actual coordinates, they seem unsuitable for layout problems where an objective such as balance or interconnectivity (as in VLSI-layout optimization) must be considered. It is also not entirely obvious how to handle problems where single items may be rotated.

3.3.10 Constraint Programming

Constraint programming techniques for determining if a feasible packing exists were used by den Boef et al. [40] in an exact method for the three-dimensional rectangular knapsack packing problem. For each pair of items one of six relations – six because the problem is three-dimensional – can be selected, similar to the IP-formulation by Onodera et al. [124], where a binary variable is used to decide the relation between two boxes.
The algorithm uses tree-search to find a feasible assignment of the boxes; in each node of the tree a pair of boxes is considered, and the algorithm branches on each of the six possible relations between the boxes. The algorithm back-tracks if this leads to an infeasible assignment. To determine if the assignment is feasible the algorithm checks if the chosen set of relations will cause boxes to be positioned beyond the placement area. This can be done in O(n²) time since the set of relations induces a constraint graph as discussed in Section 3.2.4. A number of “look-ahead” techniques are used to determine if a branch will lead to an infeasible solution, which reduces the total number of branches required. The technique was later used by Pisinger and Sigurd [131] to solve the two-dimensional bin-packing problem to optimality with column generation. From a certain point of view the IP technique by Onodera et al. [124], the packing class generation technique by Fekete and Schepers [55], and the constraint programming technique are all equivalent. A placement (or a class of placements) is constructed by branching on a number of relations. The IP model and the constraint programming technique both consider relations of the type “left-of” or “right-of”. Packing classes are oblivious to the exact relation since they consider undirected graphs and only designate whether two items overlap in a dimension or not. This makes the feasibility checks required by Fekete and Schepers [55] harder than the techniques used for constraint programming, but each packing class covers several placements. Therefore it is not clear which of the methods can consider the most placements within some specified amount of time.

3.3.11 Bounds

Both exact methods and heuristics often take advantage of bounds which predict the optimal value efficiently. Exact methods are commonly based on the branch-and-bound paradigm and use both upper and lower bounds to avoid unfruitful branches.
Bounds have been utilized mainly for the bin-packing problem. Here upper bounds can be found with a heuristic, while lower bounds are mostly based on analysis of the total item area or volume. We consider only the latter type of bounds here, and only for orthogonal rectangular problems where rotation of the items is not allowed.

The simplest way to determine if a set of items may be placed within a container, how many containers are required for the items, or how large a container will be required, is to consider the total volume of the items. If it exceeds the container space the items cannot be placed within the container. For the bin-packing problem the continuous lower bound given by

L0(I) = ⌈ (∑_{i=1}^{n} wi hi) / (WH) ⌉

for an instance I can be used to determine the number of bins. A more accurate bound, L2, was presented by Martello and Vigo [107]. This bound belongs to a class of bounds which fit into a general scheme developed by Fekete and Schepers [54] which is based on the notions of dual feasible functions and conservative scales. A function u : [0, 1] → [0, 1] is dual feasible if, for any finite set of non-negative real numbers S ⊂ R:

∑_{x∈S} x ≤ 1  ⇒  ∑_{x∈S} u(x) ≤ 1.

Fekete and Schepers [54] show that a class of dual feasible functions is the basis for the bounds presented by Martello and Toth [106] and Martello and Vigo [107]. To evaluate these bounds item dimensions are changed; items with sides larger than 1 − ε are expanded to 1 and items with sides less than ε are discarded. Fekete and Schepers [54] proceed to introduce conservative scales. Intuitively, a conservative scale alters the dimensions of the items, but in such a way that if we cannot find a feasible placement of items with the modified dimensions, then we cannot find a feasible placement of items with the original dimensions.
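The rounding just described – sides larger than 1 − ε grow to 1, sides smaller than ε vanish – is itself a simple dual feasible function (for ε ≤ 1/2), and applying it before the volume bound can only strengthen the bound. A small illustrative sketch with the container scaled to the unit square; this is not one of the stronger function classes of Fekete and Schepers:

```python
import math

def u(x, eps):
    """A simple dual feasible function for 0 < eps <= 1/2: any set of
    non-negative numbers summing to at most 1 still does after the mapping."""
    if x > 1 - eps:
        return 1.0
    if x < eps:
        return 0.0
    return x

def volume_bound(items):
    """Continuous bound L0: total item area, rounded up (unit-square bins)."""
    return math.ceil(sum(w * h for w, h in items))

def dff_bound(items, eps):
    """Volume bound after modifying both dimensions with the dual feasible
    function; by the conservative-scale argument it remains valid for the
    original instance."""
    return volume_bound([(u(w, eps), u(h, eps)) for w, h in items])
```

For four 0.6 × 0.6 squares the plain volume bound gives ⌈1.44⌉ = 2 bins, while with ε = 0.45 every side rounds up to 1 and the bound becomes 4, which is the true optimum.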
If all items are scaled such that the container dimensions become [0, 1]d for a d-dimensional problem then, according to the proof by Fekete and Schepers [54], any dual feasible function can be used to modify item dimensions. Fekete and Schepers [54] present three classes of dual feasible functions which may be used to alter item dimensions. Once item dimensions have been altered, one may use the volume criterion stated in the beginning of this section to determine if the items with modified dimensions are feasible to place within a container for the knapsack packing problem, the number of bins required for the bin-packing problem, or the required container length for the strip-packing problem. Since the item dimensions were changed by a dual feasible function, the volume-based bounds for the problem instance with modified item dimensions hold for the original instance. The bounds based on the dual feasible functions presented by Fekete and Schepers [54] dominate the bounds of Martello and Toth [106] and Martello and Vigo [107]. Other bounds were presented by Boschetti and Mingozzi [19] and Clautiaux et al. [34] for the two-dimensional bin-packing problem without rotation, and by Boschetti and Mingozzi [20] for two-dimensional bin-packing with ninety-degree rotation of items. The bounds by Martello and Vigo [107] were used for a branch-and-bound algorithm for the two-dimensional bin-packing problem, and later for the three-dimensional variant by Martello et al. [108]. Martello et al. [109] use an extension of the bounds by Fekete and Schepers [54] and Martello and Toth [106] for an exact algorithm for the bin-packing problem. This method was also used by Caprara and Monaci [27] for the two-dimensional knapsack packing problem.
The bounds by Fekete and Schepers [54] were used for an exact algorithm for the d-dimensional bin-packing and strip-packing problems by Fekete and Schepers [56], and for an exact algorithm for the two- and higher-dimensional knapsack packing problem by Fekete et al. [58]. It should also be noted that the MIP formulations of Section 3.3.1 can be relaxed to linear programs which may be used for bounds. This observation was used in conjunction with column generation by Scheithauer [135] to find bounds for the container loading problem.

The problem with bounds based solely on volumes is that the dimensions of a particular set of rectangular items can render a feasible placement of the items impossible even though their combined volume is far less than that of the container. The scheme presented by Fekete and Schepers [54] remedies part of this problem, but based on the papers by Caprara and Monaci [27] and Fekete et al. [58] it seems that the current bounds are still not strong enough to enable branch-and-bound methods to reach optimal solutions for problems where more than 20 rectangular items can fit within the container at the same time.

3.3.12 Approximation Algorithms

In recent years the term approximation algorithm has become synonymous with a polynomial time algorithm with guaranteed performance bounds. An approximation algorithm A for a minimization problem has ratio bound ρ if for any instance I we have A(I)/OPT(I) ≤ ρ, where OPT(I) and A(I) are the optimal value and the value returned by A, respectively. An asymptotic ratio bound describes the worst ratio bound as the size of the problem instances approaches ∞. Approximation algorithms for multidimensional packing are scarce.
The techniques in this category are commonly based on sequential placement according to either first fit decreasing (FFD) or next fit decreasing (NFD) algorithms which proceed as follows: First items are sorted according to decreasing height (FFDH and NFDH) or decreasing size (FFDS and NFDS). Then items are placed one-by-one in bins, shelves, or layers. To simplify the description we will call them bins in the following. Initially one bin is open. As an item is placed it is either positioned in the first of the open bins which has enough space to accommodate it (FFD) or only in the last currently opened bin (NFD). If none of the examined bins are large enough for the item, a new bin is opened and the item is placed in the new bin. Bansal et al. [7] proved that no asymptotic polynomial time approximation scheme (APTAS) exists for the two-dimensional rectangular bin-packing problem. They also present an algorithm with running time polynomial in n which finds the optimal number of bins, as long as bin sizes are increased by ε. Note that 1/ε appears in the exponent of n in the asymptotic running time of their algorithm. A similar result which was discovered independently was presented by Correa and Kenyon [37]. Bansal et al. [7] also present an APTAS for the problem of placing rectangles into a minimal enclosing rectangle. The algorithm is based on the next fit decreasing height (NFDH) principle and the running time of the algorithm is polynomial in n and 1/ε. Approximation algorithms for the two-dimensional bin-packing problem initially revolved around squares. An approximation algorithm for square packing, that is, packing squares into a minimal number of square bins, with an absolute worst case ratio of 2 was presented by van Stee [153] who also argued that this is the best possible provided P ≠ NP. Ferreira et al. [59] presented an algorithm with an asymptotic ratio bound of 1.988 for the same problem using an NFDS principle.
The asymptotic ratio bound was later improved to 14/9 + ε by Seiden and van Stee [138], and Kohayakawa et al. [94] present an algorithm for the d-dimensional cube bin-packing problem with a ratio bound of 2 − (2/3)^d for general d. Caprara [26] also presents an algorithm for this problem with a conjectured asymptotic ratio bound between 1.490 and 1.507, which is supported by experimental evidence. Recently Bansal et al. [6] managed to move well beyond this with an approximation algorithm for the general case where items are rectangles (not squares) which builds on the work by Caprara [26]. This algorithm has an asymptotic ratio bound of Π∞ + ε ≈ 1.525 + ε and was generalized to higher-dimensional problems, albeit with a higher ratio bound of ln(d + ε) + 1 + ε for a d-dimensional problem, which for d = 3 comes arbitrarily close to 2.0986 for ε → 0.

Kenyon and Remila [92] presented an APTAS with a (1 + ε) performance guarantee for the two-dimensional strip-packing problem which is polynomial in n and 1/ε and is based on linear programming relaxation. The APTAS was later extended to handle the general case where items may be rotated by Jansen and van Stee [88]. For higher dimensions, Jansen and Solis-Oba [87] introduced an approximation algorithm for the three-dimensional strip-packing problem with asymptotic ratio bound 2 + ε. This improves on results by Miyazawa and Wakabayashi [111, 112, 113], although problems with square box-sides and ninety-degree rotation are also considered by Miyazawa and Wakabayashi [111, 113] as particular cases. The algorithm by Jansen and Solis-Oba [87] also generalizes to an algorithm with asymptotic ratio bound 4 + ε for the three-dimensional bin-packing problem. For the two-dimensional knapsack packing problem Caprara and Monaci [27] present an approximation algorithm with an absolute ratio bound of 3 + ε.
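The next fit decreasing height rule described above fits in a few lines for the strip-packing (shelf) variant; this sketch assumes a strip of width 1 and items given as (width, height) pairs, and is purely illustrative:

```python
def nfdh(items):
    """Next Fit Decreasing Height: sort items by non-increasing height and
    fill shelves left to right; a new shelf opens (on top of the previous
    one) when the current shelf cannot accommodate the next item."""
    items = sorted(items, key=lambda wh: wh[1], reverse=True)
    shelves = []                         # [shelf height, used width] per shelf
    for w, h in items:
        if shelves and shelves[-1][1] + w <= 1.0:
            shelves[-1][1] += w          # place on the currently open shelf
        else:
            shelves.append([h, w])       # first (tallest) item fixes the shelf height
    return sum(height for height, _ in shelves)  # total strip height used
```

The first-fit variant (FFDH) differs only in scanning all open shelves for space instead of just the last one.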
At this point, approximation algorithms for packing problems are mostly of theoretical interest, since either the ratio bounds are too large or the asymptotic running times too high. A notable exception is the tabu search heuristic for the two-dimensional bin-packing problem presented by Lodi et al. [101] which used an approximation algorithm with a performance ratio bound of 4 to generate initial solutions. The initial solutions generated by the approximation algorithm proved to act as good starting solutions despite the high ratio bound.

3.4 Speculations on the Future

Section 3.3 revealed many of the efficient and effective solution methods which exist for the majority of packing problems. As evident from the long list of methods, we are still far from a complete unified solution approach which can handle any possible variant of packing problems. Authors still use individual strategies for individual problems, and future solution methods may still depend on the specific problem type. In this section we will attempt to shed some light on the current state of the different problem types and the future directions in each of them.

3.4.1 Rectangular Packing

The current exact methods for two-dimensional knapsack packing ([27, 58]) and three-dimensional bin-packing ([57, 108, 110, 131]) seem incapable of finding optimal solutions within relevant computational time for problems where more than 20-30 items can be loaded inside the container at the same time. To increase the size of problems that can be considered, bounds must be stronger, and the verification techniques, such as constraint programming and packing classes, must be extended to handle symmetries better to avoid consideration of equivalent placements or sub-placements. In general, exact methods are also still incapable of handling problems involving rotation of the items.
Heuristics for the two-dimensional knapsack ([1] and [C]) and bin-packing problems ([51, 114]) reveal promising results even when a large number of rectangles can fit in the container at the same time. While heuristics exist for the three-dimensional bin-packing problem (see e.g. [51]), heuristics for three-dimensional knapsack packing problems, other than the container loading problem, are practically non-existent with the exception of the one presented in this thesis ([C]). This could be due to the fact that most practical problems are in the container loading domain, where items are relatively small compared to the container and a large fraction of the items to choose from can fit within the container at the same time. Since most heuristics for container loading problems try to fill the container, rather than considering individual profit values of items, it would also seem relevant to consider methods for container loading problems where profit values are not proportional to item volumes.

3.4.2 Two-dimensional Irregular Packing

The strip-packing problem with rectangular items has received less attention in recent years, and the field of strip-packing has been dominated mostly by methods for polygons. Recent heuristics reach utilization levels of between 85–95% ([9, 23, 73, 83, 152]), including the one presented in this thesis ([A]). The main missing piece in this area seems to be dealing with rotation efficiently. Methods for free rotation were presented by Liu and He [99] and Nielsen [120], but in both cases results for free rotation of items are not convincingly better than results when not allowing rotation or only allowing 90° or 180° rotation.
This could either be because the rotation angle which is implicitly selected in the definition of the items of the instances is such that the non-rotational variant gives good results, or because the solution space when allowing rotations is much larger and therefore more difficult to search within. The NFP is not suitable for rotational problems in its current form and this rules out generalizations of methods that utilize NFPs to rotational problems. However, it is not unlikely that a rotational variant of the NFP could somehow be generated. NFPs are related to robot motion planning, and a few considerations for rotational planning are discussed in the book by de Berg et al. [39]. Another element missing for irregular packing is exact methods capable of handling more than 10 items, such as the one mentioned in Section 3.3.1 ([38]). The problem here lies in finding proper branching rules and better bounds than the trivial area bound. Surprisingly few methods for irregular packing deal with bin-packing, but many methods, including the one of this thesis ([A]), are likely generalizable to bin-packing problems.

3.4.3 Irregular Three-dimensional Packing

Strip-packing of non-rectangular shapes in three dimensions has also been dealt with, for polyhedra by Stoyan et al. [145], for spheres by Stoyan et al. [140] and Imamichi and Nagamochi [82], and in this thesis for polyhedra ([B]) and spheres/capsules ([F]). In this thesis we also present a heuristic for three-dimensional container loading or knapsack packing of furniture ([D]). Methods for irregular shapes in three dimensions are still in their infancy, and a utilization of more than 55–65% seems out of reach with current methods, as confirmed both by the papers on three-dimensional strip-packing ([145], [B], [F]) and the paper on container loading of furniture ([D]). The low utilization for three-dimensional problems could be due either to the geometry of the items or simply because our methods are not powerful enough.
From rectangular packing problems it is known that, while two-dimensional rectangular problems can be solved with utilizations of 95–100% (see [C]), the solutions for the three-dimensional variants, even with many small items in the container loading problem, rarely reach 90%. Better bounds for the three-dimensional problems involving irregular shapes could shed more light on the low utilization levels reached. However, the Kepler conjecture (see Section 2.1.5), which states that the maximal asymptotic utilization of homogeneous spherical packing is 74.048%, indicates that utilization levels above 70% for irregular shapes in general may be unlikely. As for two-dimensional irregular packing, methods capable of solving problems involving three-dimensional irregular shapes with free rotation may become more relevant in coming years, especially since it may be possible to reach higher levels of utilization if free rotation is allowed. A method that expands on the method of this thesis ([B]) to handle free rotational packing of three-dimensional polyhedra was presented by Nielsen [120].

3.4.4 New Constraints and Objectives

As methods for packing problems are becoming widely used in the industrial sector, more complicated objectives and constraints appear. We will discuss a few of them here. For the strip-packing problem, quality regions must be considered when cutting leather from hides. This is also discussed in this thesis ([A]). For container loading one must often ensure that the load on an item is no larger than its strength. Items should also be positioned such that transportation is feasible and items will not drop and break. These problems are both described and considered in more detail in the paper on container loading of furniture in this thesis ([D]). Another problem in container loading concerns proximity.
If a large consignment of items is to be delivered to the same location but is destined for different customers, items for the same customer should be placed close to each other to simplify the unloading process. A similar problem may occur when loading items; items may be selected from various locations within a large warehouse and the free space outside the container may only accommodate a limited number of items. To minimize the number of trips made in the warehouse one should try to place items which are close to each other in the warehouse close to each other in the container. An aspect which has not been touched upon is the requirement that solutions can actually be physically packed. In many cases human beings still handle the loading, and high utilization levels may be reached at the sacrifice of placements which are simple to achieve manually. This problem may have less significance as the loading process is increasingly managed by robots in the future. Often containers should be loaded such that the consignment is balanced and the inertia moment is minimized. For airplanes this is important to minimize fuel consumption. For trucks this is important to ensure that the axles of the truck carry equal weight. Considerations and methods for this type of problem, involving both rectangular and irregular shapes, are presented in more detail in the paper which appears in this thesis ([E]). This type of problem may be dealt with either by imposing new constraints, by including them as a term in the objective function, or by attempting to modify a good solution to the “clean” packing problem in a posterior step. It is likely that methods which are capable of handling, or easily generalizable to handle, the constraints mentioned above will receive more focus as the field matures in coming years.

3.4.5 Sensitivity Analysis

Many methods used by the industry may be used as parts of decision support systems where sensitivity and what-if scenarios must be analyzed.
Here the problem may be to quickly assess the consequences of replacing a subset of the input items with a new set of items. Although re-optimizing the entire problem can answer such a query, the industry is interested in quick responses, and methods which can start from an existing solution may turn out to be beneficial. Also, methods which can perform their own set of analyses and suggestions – e.g., “Replace input item set A with set B to achieve 2% higher utilization” – could be of strong interest to the industry. The author of this thesis is unaware of any method that is capable of answering such queries or making such suggestions, but knows from first-hand experience that the industry desires this functionality. A related topic is that the industry already uses solution methods during the design phase of production for “simulation” purposes. Here the problem may be to select the set of item dimensions that returns the best possible utilization given other constraints, for instance with respect to required volume. To solve this problem with current methods one can re-optimize the problem with different item dimensions and select the dimensions that return the highest utilization. New methods targeted at this problem may be able to get around re-optimization.

3.4.6 Integration with Other Problems

As techniques for solving packing problems become better and processing power increases, research may turn towards problems where packing appears in conjunction with other types of problems. One of the well-known problems in operations research is the vehicle routing problem (VRP) (see e.g. Golden et al. [71]). In recent years there has been an increasing interest in the integration of packing problems with vehicle routing problems. An exact algorithm for rectangular packing and VRP was introduced by Iori et al. [84]. Heuristics were introduced by Fuellerer et al. [61], Gendreau et al. [65], and Zachariadis et al.
[162] for two-dimensional rectangular problems, and a heuristic for the three-dimensional problem was introduced by Moura and Oliveira [116]. While this problem is difficult to handle since it involves two NP-hard sub-problems, one may expect that solving the routing problem renders the associated packing problems easier, since the items for one individual route could be insufficient to fill a complete container. In any case this topic seems open, especially for three-dimensional packing. Another difficult problem appears in production planning and supply-chain management (see e.g. Pochet and Wolsey [132]). Here the set of items to be produced may depend on which items can be shipped in a container or a fleet of vehicles. Likewise, it may also depend on the set of raw materials required for production which can be shipped in a single container. A model for such problems may involve both supply-chain optimization, VRP, and packing problems.

4 About the Papers

The thesis consists of six papers, and this section contains a short presentation of each of the papers accompanied by a discussion. The papers [A], [B], [E] all use the same relaxed placement method and we will begin by discussing them in Section 4.1. In Section 4.2 we discuss the paper on rectangular knapsack packing problems [C], and in Section 4.3 the paper on container loading of furniture [D]. Finally, in Section 4.4 we discuss the working paper on capsule packing for tertiary RNA structure prediction [F].

4.1 Relaxed Packing and Placement

All three of the papers [A], [B], and [E] take advantage of the same principle. The method is based on iterative overlap minimization and originates from the heuristic by Faroe et al. [53] for rectangular two- and three-dimensional bin-packing, which uses the metaheuristic Guided Local Search by Voudouris and Tsang [155, 156]. To a large degree the framework presented by Faroe et al. [53] has remained unchanged in the papers considered in this thesis.
The main difference between the work by Faroe et al. [53] and the work presented in this thesis is that we consider irregular items. The papers all present methods that solve decision problems, i.e., determine if a feasible placement of items within given container dimensions exists. The procedure to solve the decision problem closely mimics that of Faroe et al. [53] and is as follows: The method starts from a placement with overlap and repeatedly translates a single item either horizontally or vertically to a position with less overlap. A zero-overlap placement corresponds to a solution for the decision problem. Whenever a placement cannot be improved by a single translation, two items with a large overlap are “penalized”, i.e., placements where these two particular items overlap will receive a high objective value. This is the GLS element of the heuristic. The effect of this is that the items are pushed away from each other in the following steps of the solution process, since the heuristic will avoid placements where they overlap. The heuristic which is used for the papers [A] and [B] starts with decision problems for large container lengths and then decreases the container length every time a solution to a decision problem has been found. The heuristic was also used for research outside this thesis. Nielsen [120] considered different measures of overlap instead of the area, free rotation of shapes, and arbitrary-direction translation (not only horizontal or vertical). Nielsen [119] also considered repeated pattern nesting, i.e., achieving high utilization where the strip is infinite and the pattern generated is repeated an infinite number of times.

4.1.1 Two-dimensional Nesting

The first paper in this thesis ([A]) also represents the chronologically first work and describes a heuristic for the two-dimensional strip-packing problem of polygonal shapes using the principle described above.
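The control flow of the procedure described above – slide one item to its best position under a penalised overlap objective and, when no single translation helps, penalise the pair with the largest overlap – can be illustrated on a one-dimensional toy version with intervals on a segment. This is only a sketch of the GLS loop, not the implementation used in the papers:

```python
import itertools

def overlap(pos, length, a, b):
    """Length of the overlap between intervals a and b."""
    lo = max(pos[a], pos[b])
    hi = min(pos[a] + length[a], pos[b] + length[b])
    return max(0.0, hi - lo)

def guided_local_search(length, container, max_steps=1000, lam=0.05):
    """Toy 1D decision variant: find zero-overlap positions for intervals
    of the given lengths inside [0, container], or None if max_steps runs out."""
    items = sorted(length)
    pos = {a: 0.0 for a in items}                     # start with everything overlapping
    pen = {p: 0 for p in itertools.combinations(items, 2)}

    def penalised(a):
        # overlap of item a with all others, plus GLS penalties on overlapping pairs
        cost = 0.0
        for b in items:
            if b != a:
                ov = overlap(pos, length, a, b)
                if ov > 0:
                    cost += ov + lam * pen[(a, b) if a < b else (b, a)]
        return cost

    for _ in range(max_steps):
        pairs = [(overlap(pos, length, a, b), (a, b)) for a, b in pen]
        if max(ov for ov, _ in pairs) == 0:
            return pos                                # zero overlap: decision problem solved
        moved = False
        for a in items:
            # candidate translations: the left wall, or flush right of another item
            cands = [0.0] + [pos[b] + length[b] for b in items if b != a]
            old = pos[a]
            best_x, best_c = old, penalised(a)
            for x in cands:
                if x + length[a] > container:
                    continue
                pos[a] = x
                c = penalised(a)
                if c < best_c - 1e-12:
                    best_x, best_c = x, c
            pos[a] = best_x
            moved = moved or best_x != old
        if not moved:
            pen[max(pairs)[1]] += 1                   # GLS step: penalise the worst pair
    return None
```

In the papers the candidate positions are not a small discrete set but the whole axis, searched exactly by the minimal overlap translation algorithm; the surrounding loop is the same.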
The main novelty of the paper is the minimal overlap translation algorithm that finds the horizontal or vertical translation of a single polygon which minimizes its overlap with the other polygons. A proof of the correctness of this algorithm was given in the earlier work by Nielsen and Odgaard [121] (based on a note by the author of this thesis). However, this proof, which considered a more versatile set of translations, was deemed too complicated for a single paper and instead a simpler proof was presented in [A]. Recent updated experiments are presented in [A.1] and show that the current implementation still produces some of the best results in the literature.

An element which we never fully investigated was two-dimensional translation of polygons, where the minimal overlap position can be found for the entire placement area instead of in just one direction. An algorithm to solve this problem was presented by Mount et al. [115] with a running time of O((mn)²), where n is the number of edges of the polygon to be translated and m the number of edges of all other polygons. The approach is based on an arrangement of line segments. Our initial investigations with implementations of this algorithm showed that there were many problems with degeneracies where lines and points in the arrangements would coincide, and calculation with rational numbers was required to ensure stability. The running time of the first prototype implementation was far too high for our local search based heuristic, since each translation would take seconds to calculate, and this local search neighborhood was dropped completely. However, it is possible that a similar neighborhood, which would just consider two-dimensional translations in a small area around the polygon to translate, is computationally feasible. The importance of the Guided Local Search (GLS) metaheuristic was not made clear in the original paper.
The Guided Local Search heuristic seems particularly strong for this problem because it works well in conjunction with the piecewise continuous objective function (overlap) and the minimal overlap translation algorithm. An interesting future direction would be to replace GLS with other metaheuristics, although it is not clear how this could be done. For instance, a tabu-search heuristic (see e.g. [70]) with a tabu-list of placements would probably be difficult to combine with the minimal overlap translation algorithm. Simulated annealing (see e.g. Kirkpatrick et al. [93]) would require the acceptance of some form of randomly generated solution, and it is not clear how such a solution would be generated to take advantage of the minimal overlap translation algorithm. There are similar concerns with other metaheuristics such as genetic algorithms. A metaheuristic which would be interesting to investigate as a replacement for GLS is adaptive large neighborhood search (ALNS) (see e.g. [129]). ALNS's ability to handle several different neighborhoods could make it interesting for this problem, since it would make it possible to introduce new neighborhoods such as exchanging the positions of two items or the two-dimensional translation neighborhood mentioned above.

4.1.2 Three-dimensional Nesting

Although the first paper did sketch a three-dimensional translation algorithm, several details were missing, and only a prototype implementation for the decision variant of the three-dimensional problem was made. Results presented by Stoyan et al. [145] motivated a further investigation of the three-dimensional strip-packing problem with polyhedra and, at the same time, a generalization of the procedure to higher dimensions. The details of a heuristic for the three-dimensional strip-packing problem and a generalization to higher dimensions were reported in the second paper [B].
While the two-dimensional and three-dimensional procedures are the same and even share implementation, there are several aspects of the proof behind the correctness of the minimal overlap translation algorithm that were changed. Most important was the introduction of the notion of a balanced assignment. In the paper, the interior of the polytopes is defined as the set of points from which a ray shot parallel to the x-axis intersects the boundary an odd number of times. Therefore, a ray from a point in the intersection of two polytopes should intersect the boundary of both polytopes an odd number of times. However, since the boundary is broken into many different pieces (faces and facets), the pieces must not overlap, since overlapping pieces would cause problems in the even/odd counting principle used throughout the proofs. The notion of balanced assignment ensures that the pieces do not overlap. Another difference between the two papers is that the sides of the three-dimensional polyhedra (and polytopes in general) are not limited to triangles but can take any convex form. Additionally, a greedy method for the three-dimensional strip-packing problem had to be introduced.

4.1.3 Optimization of the Center of Gravity

The paper [E] represents a minor addition to the family of papers on relaxed placement methods for two- and three-dimensional problems involving polygons and polyhedra. In this paper a heuristic is presented for the problem where a given set of items, each with a weight, must be placed within a given container such that overall balance and inertia moment are optimized. The paper was motivated by recent methods for this problem (see [E] for more details).
Although it is not a packing problem in the conventional sense, i.e., as described in Section 2.1, since both the set of items to be placed and the container size are given, the purpose of the heuristic is to serve as a post-processing step for another packing algorithm which determines a feasible subset of items or the container dimensions. The method used in the paper minimizes an objective function consisting of a weighted linear combination of balance, inertia moment, and overlap, using the same translational moves as in the papers [A] and [B]. As the procedure progresses, the significance of balance and inertia moment is decreased, so that overlap has a higher impact on the solution process. This continues until a feasible solution is found, at which point the significance is increased again so that the heuristic once more allows overlapping solutions. A similar procedure was used by Faroe et al. [52] for the VLSI layout problem, where the total wire-length of interconnected rectangles must be minimized, and the main contribution of the present paper is a demonstration that the same principle can be used to handle the relatively simple objective function involving balance and inertia. It would also be interesting to investigate whether the same principle can be used for three-dimensional layout problems with wire-connections, which were considered by Yin et al. [160].

4.1.4 Relation to FFT Algorithms

An important aspect of the minimal overlap translation algorithm is that it can also be seen as a maximal overlap translation algorithm. In this context it seems related to well-known convolution-based methods used e.g. for protein docking problems with raster models, introduced by Katchalski-Katzir et al. [89]. These methods compare two raster models (three-dimensional grids) with a Fast Fourier Transform (FFT) algorithm to determine where structural elements fit best, i.e., which relative three-dimensional translation maximizes the overlap of the surfaces.
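Such a maximal-overlap search over grid translations can be sketched with an FFT-based circular cross-correlation; shown here in two dimensions for brevity (the three-dimensional case uses fftn over the cubic grid), assuming NumPy is available:

```python
import numpy as np

def best_overlap_shift(f, g):
    """Return the translation (dx, dy) of g that maximizes the number
    of occupied grid cells shared with f, via FFT cross-correlation:
    h[dx, dy] = sum_{i,j} f[i, j] * g[i - dx, j - dy]."""
    h = np.fft.ifft2(np.fft.fft2(f) * np.conj(np.fft.fft2(g))).real
    return np.unravel_index(np.argmax(h), h.shape)

# Example: g contains a 3x3 block; f is the same block shifted by (3, 5).
g = np.zeros((16, 16)); g[2:5, 2:5] = 1
f = np.zeros((16, 16)); f[5:8, 7:10] = 1
```

Because the correlation is circular, shifts wrap around the grid; zero-padding both grids avoids this when only proper translations are of interest.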
In other words, the objective is to find the translation (x, y, z) ∈ Z³ which solves

\[
\max_{(x,y,z)\in\mathbb{Z}^3} \sum_{i=1}^{n}\sum_{j=1}^{n}\sum_{k=1}^{n} f(i,j,k)\cdot g(i-x,\,j-y,\,k-z),
\]

where f : Z³ → {0, 1} and g : Z³ → {0, 1} are "raster functions" which indicate whether a grid-cell in the three-dimensional n × n × n grid representations is occupied by the surface or not (1 means occupied). The FFT makes this possible since the entire three-dimensional convolution h of f and g,

\[
h(x,y,z) = \sum_{i=1}^{n}\sum_{j=1}^{n}\sum_{k=1}^{n} f(i,j,k)\cdot g(i-x,\,j-y,\,k-z),
\]

can be determined in O(n³ log n) time using the FFT (see e.g. [36]). If a grid-cell is set to 1 whenever it is occupied by a structure, the same procedure can be used to find the translation with maximal overlap of two structures. In this case, h(x, y, z) is a discrete version of the volume of overlap of the structures represented by f and g. This procedure is thus closely related to our minimal overlap translation algorithm.

4.2 Rectangular Knapsack Packing

In the paper [C] a heuristic for the rectangular knapsack packing problem in two and three dimensions is presented. The main strategy in the paper is to use the sequence pair representation to represent placements. In addition to introducing a new heuristic for the two-dimensional knapsack packing problem, the paper also demonstrates how versatile the sequence pair representation is, and that it can be used for problems other than the minimal area packing problem, to which it had previously been applied. Another contribution of the paper is the introduction of the sequence triple for three-dimensional representation of placements. To the author's knowledge this is the only truly abstract representation of placements of boxes other than the graph- and naive sequence-based representations, i.e., constraint graphs, packing classes, and sequential placement. However, there are two drawbacks of the representation. Firstly, it is only capable of representing robot-packable placements.
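As background, the standard sequence pair semantics place item i to the left of item j when i precedes j in both sequences, and below j when i follows j in the first sequence but precedes it in the second. A minimal quadratic-time decoder under these semantics (illustrative, not the paper's implementation):

```python
def decode_sequence_pair(seq_a, seq_b, widths, heights):
    """Decode a sequence pair into lower-left rectangle coordinates.
    Semantics: i left of j if i precedes j in both sequences;
    i below j if i follows j in seq_a but precedes j in seq_b.
    O(n^2) overall."""
    pos_a = {item: k for k, item in enumerate(seq_a)}
    x, y = {}, {}
    for i in seq_b:  # seq_b order: all predecessors of i already placed
        x[i] = max((x[j] + widths[j] for j in x
                    if pos_a[j] < pos_a[i]), default=0)   # j left of i
        y[i] = max((y[j] + heights[j] for j in y
                    if pos_a[j] > pos_a[i]), default=0)   # j below i
    return x, y
```

The sequence triple introduced in [C] extends this representation to three-dimensional placements of boxes.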
The set of robot-packable placements mainly excludes interlocking placements, which, fortunately, may have little relevance in practical applications. The second drawback is that the asymptotic running time of the decoding algorithm is O(n²) for n boxes. It would therefore be interesting to examine faster placement strategies or alternative representations for three-dimensional box placement. The two-dimensional representation and heuristic are powerful enough to return results which are on par with the current methods from the literature. The three-dimensional variants cannot be compared with other methods from the literature, but return results which are close to the upper bounds. Initial experiments revealed that the three-dimensional representation and heuristic are not capable of handling container loading problems, which contain far more items, but it is possible that the representation could be used if the small items were combined into larger building blocks, or a different heuristic principle was used.

4.3 Knapsack Packing of Furniture

The paper [D] is the most interesting of the papers from a practical point of view and, in it, we present a heuristic for knapsack packing of pieces of furniture within a container. The procedure consists of a number of different steps: a tree-search method for finding a good overall solution for large items, a local search heuristic for refining the solution, a local search heuristic for ensuring overall stability of the large items, a greedy heuristic for placement of medium-sized items, and a wall-building heuristic for placing small items. The pieces of furniture are represented by triangle mesh structures, and the main strategy of the paper is to determine a large set of possible combinations of furniture for the heuristics to use. The heuristic then has to select good combinations as well as position each combination within the container.
When placed, each of the selected combinations of pieces of furniture is aligned with one of the four corners of the width-height plane of the container. The main contributions of the paper are the combination strategy, the four-corner representation which forms the basis of both the tree-search and local search heuristics, and finally the method that ensures that each item is placed in a stable fashion. The four-corner principle can be viewed as a special type of abstract representation, and the local search method used bears some resemblance to the method from the paper on two- and three-dimensional knapsack packing ([C]); in both cases the heuristic attempts to exchange items in a sequence and allows placement of items outside the container, and only items inside the container are included when calculating the objective value. The methods of the two papers were developed in parallel and, interestingly enough, their similarity did not occur to us until late in the development process. It would be interesting to investigate whether elements of the paper can be generalized. It is possible that the combination strategy can be used for rectangular container loading. Here items would have to be combined into larger building blocks that can be managed by the tree-search and local search algorithms. The main problem, however, concerns the generation of suitable building blocks. A possibility is to use some form of three-dimensional knapsack packing or minimal volume packing on smaller subsets. It is also possible that the combination strategy can be used for other types of shapes. In the paper, the geometric analysis which forms the basis of combinations relies on the fact that both items must rest on the floor, and the analysis is made in two dimensions. A general three-dimensional analysis, e.g. based on Minkowski sums or relaxed placement of a few items, could form the basis of a generalization to more arbitrary shapes.
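For convex shapes, one standard building block for such a geometric analysis is the Minkowski sum: the set P ⊕ (−Q) describes exactly the translations of Q that make it overlap P. For convex polygons the sum is the convex hull of all pairwise vertex sums; a brute-force sketch (illustrative only; an O(|P| + |Q|) edge-merging algorithm also exists):

```python
def convex_hull(points):
    """Andrew's monotone chain; returns hull vertices in CCW order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def cross(o, a, b):
        return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def minkowski_sum(P, Q):
    """Minkowski sum of two convex polygons: the convex hull of all
    pairwise vertex sums (O(|P||Q|) brute force)."""
    return convex_hull([(p[0] + q[0], p[1] + q[1]) for p in P for q in Q])
```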
The combination strategy could likely be used to improve the performance of the relaxed placement methods by assessing good combinations of items in a preprocessing step and translating combinations instead of individual items. An additional local search neighborhood could then change the combinations used as part of the solution process.

4.4 Cylinder Packing and Placement

The final paper [F] represents unfinished work and concerns a heuristic for placement of capsules which may function as a tool for RNA tertiary structure prediction. Prediction of RNA tertiary structure is related to prediction of protein tertiary structure and concerns prediction of the three-dimensional positions of the atoms of the molecule based on known primary and secondary structures. RNA molecules consist of many helical regions connected by the backbone of the molecule, and RNA molecules differ from proteins in that secondary structure prediction can be used to accurately determine the helical regions which appear in the tertiary structure. The paper describes a method where the helical regions of RNA molecules are represented as capsules and geometric considerations are used to predict the tertiary structure. This is a so-called coarse-grained method. Since atoms cannot overlap and the helical regions therefore do not overlap, the problem is to generate a non-overlapping placement of capsules. Helices are also connected by the backbone, and this property is modeled with proximity constraints that ensure that connected helices are positioned close to each other. RNA structures are generally compact due to the same hydrophobic forces which appear in proteins. Therefore, it is conjectured that the capsules should be placed compactly as well. Three different compaction strategies are introduced in the paper; they are similar to those of the other papers on relaxed placement ([A] and [B]). The three strategies attempt to minimize either a box or a sphere container which can contain the capsules.
The molecular surfaces of the RNA molecules studied in conjunction with the paper are not spherical, and therefore it is unlikely that these compaction strategies are useful for tertiary structure prediction of RNA. However, it is possible that a different compaction strategy used in conjunction with relaxed placement can return useful results. Nevertheless, results for a heuristic for the problem of compacting interconnected capsules are presented to demonstrate the potential of the relaxed placement method and its ability to find placements of capsules with limited freedom (small container dimensions and proximity constraints). Another problem considered in the paper is the placement problem where the capsules must be placed within a given molecular surface such that the proximity requirements are met. Here one is given auxiliary information that describes a boundary of the molecule, and the objective is to accurately guess the placement of atoms within the molecule. This is modeled as a decision problem where a feasible placement of the capsules within an irregular container must be found. A number of random feasible placements were generated and, surprisingly, one of the placements is not far from the known real structure. The paper represents work in progress and a number of aspects are missing from it. Firstly, different compaction strategies need to be investigated to determine if the problem can be considered a compaction problem. Secondly, more experiments with placements where the molecular surface is given are required. Thirdly, it must be determined if the coarse-grained capsule placement can successfully be used as a starting point for accurate prediction methods. From a packing problem point of view the main novelty of this paper concerns the overlap translation method. While the overall principle of the papers [A] and [B] has been reused, the translation algorithm used in [F] differs substantially.
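The central geometric primitive in [F] is how deeply two capsules (line segments with radii) penetrate each other. A symmetric penetration depth can be sketched with the standard segment-segment closest-point computation (illustrative; the directional intersection depth used in the paper differs, and degenerate zero-length segments are not handled):

```python
def _sub(a, b): return tuple(ai - bi for ai, bi in zip(a, b))
def _dot(a, b): return sum(ai * bi for ai, bi in zip(a, b))
def _clamp01(v): return max(0.0, min(1.0, v))

def capsule_overlap_depth(p1, q1, r1, p2, q2, r2):
    """Penetration depth of two capsules with axes p1-q1 and p2-q2 and
    radii r1, r2; positive means overlap. Uses the standard clamped
    closest-point computation between two segments."""
    d1, d2, r = _sub(q1, p1), _sub(q2, p2), _sub(p1, p2)
    a, e = _dot(d1, d1), _dot(d2, d2)
    b, c, f = _dot(d1, d2), _dot(d1, r), _dot(d2, r)
    denom = a * e - b * b
    s = _clamp01((b * f - c * e) / denom) if denom > 1e-12 else 0.0
    t = (b * s + f) / e
    if t < 0.0:
        t, s = 0.0, _clamp01(-c / a)
    elif t > 1.0:
        t, s = 1.0, _clamp01((b - c) / a)
    c1 = tuple(pi + s * di for pi, di in zip(p1, d1))
    c2 = tuple(pi + t * di for pi, di in zip(p2, d2))
    dist = _dot(_sub(c1, c2), _sub(c1, c2)) ** 0.5
    return (r1 + r2) - dist
```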
Instead of the volume of overlap, which is hard to determine for capsules, the algorithm in this paper deals with directional intersection depth. This, and the ability to handle both proximity constraints and an irregular container, demonstrates how universal the minimal overlap principle is.

5 Conclusion

This thesis presents a number of novel methods for packing problems. Three different types of heuristics are covered for both two- and three-dimensional packing problems. Both the relaxed placement methods and the heuristic for container loading of furniture involve irregular shapes. The results for the strip-packing problem with irregular shapes with the relaxed placement techniques ([A] and [B]) are among the best in the literature for two- and three-dimensional problems, and the core element of the polygon packing procedure, the minimal overlap translation algorithm, can be implemented in less than one thousand lines of code, which makes it an appealing alternative to NFP-based methods. The heuristics for rectangular knapsack packing ([C]) demonstrate great potential, and the sequence triple is a novel abstract representation for three-dimensional placements. Results are on par with existing methods, and the sequence pair and sequence triple representations are simple to implement – placement methods can be implemented in a few hundred lines of code. The biggest question regarding the three-dimensional heuristic is whether it can be scaled to manage container loading problems consisting of many more items. The techniques used for container loading of furniture ([D]) are specific to this problem. The overall heuristic consists of many relatively simple sub-parts, and an interesting future direction would be to apply some of the principles to other problems.
The most impressive part of this work is that the time from start to finish, i.e., being presented with the problem, dealing with the theory, and producing a practical software application, was less than 18 months. At the time when we began this project, there was no obvious way from the literature to deal with irregular shapes to the extent required by our industrial partner. Today, the principles are being used hundreds of times each week within our software. Stability and balance issues are considered both as part of the heuristic for container loading of furniture and as an individual problem ([E]). The latter is one of several examples of the versatility and potential of the relaxed placement method presented in [A] and [B]. The principles of the relaxed placement method have also been used for the RNA tertiary structure prediction problem ([F]) which occurs in bioinformatics. The problem involves cylinders with capped ends (capsules) and proximity constraints, and the promising results show how universal the relaxed placement methodology is. While the results are promising, the draft included in this thesis is incomplete and more experiments are needed in order to understand the full potential of the method. A common topic throughout the thesis is the relaxed placement method based on the minimal overlap translation. While its ability to tackle several problems has been considered in this thesis and the possibilities of the method seem almost endless, we have yet to successfully use the principle to solve knapsack packing problems. It would be interesting to investigate the possibilities in this domain further as part of future research. It would also be interesting to investigate generalizations of the principles which were used for furniture packing and for three-dimensional rectangular knapsack packing. Many other future directions have been pointed out in this thesis, both with respect to problem types and improvements of the presented methods.
Solution methods for packing problems are slowly maturing, but there are still many interesting possibilities to be explored, and I hope that some of the many topics considered in this thesis can form the basis of fruitful future research.

References

[A] J. Egeblad, B. K. Nielsen, and A. Odgaard. Fast neighborhood search for two- and three-dimensional nesting problems. European Journal of Operational Research, 183(3):1249–1266, 2007.
[B] J. Egeblad, B. K. Nielsen, and M. Brazil. Translational packing of arbitrary polytopes. Computational Geometry: Theory and Applications, 2008. Accepted for publication.
[C] J. Egeblad and D. Pisinger. Heuristic approaches for the two- and three-dimensional knapsack packing problem. Computers and Operations Research, 2007. In press (available online).
[D] J. Egeblad, C. Garavelli, S. Lisi, and D. Pisinger. Heuristics for container loading of furniture. Submitted, 2007.
[E] J. Egeblad. Placement of two- and three-dimensional irregular shapes for inertia moment and balance. Submitted, 2008.
[F] J. Egeblad, L. Guibas, M. Jonikas, and A. Laederach. Three-dimensional constrained capsule placement for coarse-grained tertiary RNA structure prediction. Working paper, 2008.
[1] R. Alvarez-Valdes, F. Parreno, and J. M. Tamarit. A tabu search algorithm for two-dimensional non-guillotine cutting problems. Technical Report TR07-2004, Universitat de Valencia, 2004.
[2] R. Alvarez-Valdes, F. Parreno, and J. M. Tamarit. A GRASP algorithm for constrained two-dimensional non-guillotine cutting problems. Journal of the Operational Research Society, 56:414–425, 2005.
[3] R. C. Art, Jr. An approach to the two dimensional, irregular cutting stock problem. Technical Report 36.Y08, IBM Cambridge Scientific Center, September 1966.
[4] B. S. Baker and J. S. Schwarz. Shelf algorithms for two-dimensional packing problems. SIAM Journal on Computing, 12(3):508–525, 1983.
[5] R. Balasubramanian.
The pallet loading problem: A survey. International Journal of Production Economics, 28(2):217–225, November 1992.
[6] N. Bansal, A. Caprara, and M. Sviridenko. Improved approximation algorithms for multidimensional bin packing problems. In Proceedings of the 47th Annual IEEE Symposium on Foundations of Computer Science (FOCS'06), pages 697–708. IEEE Computer Society, 2006.
[7] N. Bansal, J. Correa, C. Kenyon, and M. Sviridenko. Bin packing in multiple dimensions: Inapproximability results and approximation schemes. Mathematics of Operations Research, 31(1):31–49, 2006.
[8] J. E. Beasley. Algorithms for two-dimensional unconstrained guillotine cutting. Journal of the Operational Research Society, 36:297–306, 1985.
[9] J. A. Bennell and K. A. Dowsland. Hybridising tabu search with optimisation techniques for irregular stock cutting. Management Science, 47(8):1160–1172, 2001.
[10] J. A. Bennell and K. A. Dowsland. A tabu thresholding implementation for the irregular stock cutting problem. International Journal of Production Research, 37:4259–4275, 1999.
[11] J. A. Bennell and X. Song. A comprehensive and robust procedure for obtaining the no-fit polygon using Minkowski sums. Computers and Operations Research, 35:267–281, 2008.
[12] J. O. Berkey and P. Y. Wang. Two-dimensional finite bin-packing algorithms. The Journal of the Operational Research Society, 38(5):423–429, 1987.
[13] A. Bettinelli, A. Ceselli, and G. Righini. A branch-and-price algorithm for the two-dimensional level strip packing problem. 4OR: A Quarterly Journal of Operations Research, 2007. Available online.
[14] S. Bhattacharya and R. Bhattacharya. An exact depth-first algorithm for the pallet loading problem. European Journal of Operational Research, 110(3):610–625, 1998.
[15] E. E. Bischoff. Three-dimensional packing of items with limited load bearing strength. European Journal of Operational Research, 168:952–966, 2006.
[16] E. E. Bischoff and M. D. Marriott.
A comparative evaluation of heuristics for container loading. European Journal of Operational Research, 44:267–276, 1990.
[17] E. E. Bischoff and M. S. W. Ratcliff. Loading multiple pallets. Journal of the Operational Research Society, 46:1322–1336, 1995.
[18] A. Bortfeldt and H. Gehring. Applying tabu search to container loading problems. In Operations Research Proceedings 1997, pages 533–538. Springer, Berlin, 1998.
[19] M. A. Boschetti and A. Mingozzi. The two-dimensional finite bin packing problem. Part I: New lower bounds for the oriented case. 4OR, 1(1):27–42, 2003.
[20] M. A. Boschetti and A. Mingozzi. The two-dimensional finite bin packing problem. Part II: New lower and upper bounds. 4OR, 1(2):135–147, 2003.
[21] M. A. Boschetti, E. Hadjiconstantinou, and A. Mingozzi. New upper bounds for the two-dimensional orthogonal cutting stock problem. IMA Journal of Management Mathematics, 13:95–119, 2002.
[22] V. Bukhvalova and K. Vyatkina. An optimal algorithm for partitioning a set of rectangles with right-angled cuts. In SIAM Conference on Geometric Design and Computing, pages 125–136, 2003.
[23] E. K. Burke, R. Hellier, G. Kendall, and G. Whitwell. A new bottom-left-fill heuristic algorithm for the two-dimensional irregular packing problem. Operations Research, 54(3):587–601, 2006.
[24] E. K. Burke, R. S. R. Hellier, G. Kendall, and G. Whitwell. Complete and robust no-fit polygon generation for the irregular stock cutting problem. European Journal of Operational Research, 179:27–49, 2007.
[25] J. Cagan, D. Degentesh, and S. Yin. A simulated annealing-based algorithm using hierarchical models for general three-dimensional component layout. Computer Aided Design, 30(10):781–790, 1998.
[26] A. Caprara. Packing 2-dimensional bins in harmony. In Proceedings of the 43rd Symposium on Foundations of Computer Science (FOCS'02), pages 490–499. IEEE Computer Society, 2002.
[27] A. Caprara and M. Monaci. On the 2-dimensional knapsack problem.
Operations Research Letters, 32(1):5–14, 2004.
[28] B. Chazelle. The bottom-left bin-packing heuristic: An efficient implementation. IEEE Transactions on Computers, 32(8), 1983.
[29] C. S. Chen, S. M. Lee, and Q. S. Shen. An analytical model for the container loading problem. European Journal of Operational Research, 80:68–76, 1995.
[30] C. H. Cheng, B. R. Feiring, and T. C. E. Cheng. The cutting stock problem – a survey. International Journal of Production Economics, 36(3):291–305, October 1994.
[31] F. R. K. Chung, M. R. Garey, and D. S. Johnson. On packing two-dimensional bins. SIAM Journal on Matrix Analysis and Applications, 3(1):66–76, 1982.
[32] Y. Chung, Y. Chang, G. Wu, and S. Wu. B*-tree: A new representation for non-slicing floorplans. In Proceedings of the Design Automation Conference, pages 458–463, 2000.
[33] V. Chvatal. Linear Programming. W. H. Freeman, 1983.
[34] F. Clautiaux, J. Carlier, and A. Moukrim. A new exact method for the two-dimensional bin-packing problem with fixed orientation. Operations Research Letters, 35:357–364, 2007.
[35] E. G. Coffman, M. R. Garey, D. S. Johnson, and R. E. Tarjan. Performance bounds for level-oriented two-dimensional packing algorithms. SIAM Journal on Computing, 9(4):808–826, 1980.
[36] T. H. Cormen, C. E. Leiserson, and R. L. Rivest. Introduction to Algorithms. MIT Press, 1990.
[37] J. Correa and C. Kenyon. Approximation schemes for multidimensional packing. In Proceedings of the 15th ACM/SIAM Symposium on Discrete Algorithms, pages 179–188. ACM/SIAM, 2004.
[38] K. Daniels, Z. Li, and V. Milenkovic. Multiple containment methods. Technical Report TR-12-94, Harvard University, Cambridge, Massachusetts, 1994.
[39] M. de Berg, M. van Kreveld, M. Overmars, and O. Schwarzkopf. Computational Geometry: Algorithms and Applications (2nd edition). Springer, 2000.
[40] E. den Boef, J. Korst, S. Martello, D. Pisinger, and D. Vigo.
Erratum to 'The Three-Dimensional Bin Packing Problem': Robot-packable and orthogonal variants of packing problems. Operations Research, 53:735–736, 2005.
[41] J. K. Dickinson and G. K. Knopf. A moment based metric for 2-D and 3-D packing. European Journal of Operational Research, 122(1):133–144, 2000.
[42] J. K. Dickinson and G. K. Knopf. Packing subsets of 3D parts for layered manufacturing. International Journal of Smart Engineering System Design, 4(3):147–161, 2002.
[43] K. A. Dowsland. An exact algorithm for the pallet loading problem. European Journal of Operational Research, 31(1):78–84, July 1987.
[44] K. A. Dowsland and W. B. Dowsland. Packing problems. European Journal of Operational Research, 56:2–14, 1992.
[45] K. A. Dowsland and W. B. Dowsland. Solution approaches to irregular nesting problems. European Journal of Operational Research, 84:506–521, 1995.
[46] K. A. Dowsland, W. B. Dowsland, and J. A. Bennell. Jostling for position: Local improvement for irregular cutting patterns. Journal of the Operational Research Society, 49:647–658, 1998.
[47] H. Dyckhoff. A typology of cutting and packing problems. European Journal of Operational Research, 44:145–159, 1990.
[48] J. Egeblad. Placement techniques for VLSI layout using sequence-pair legalization. Master's thesis, DIKU, University of Copenhagen, Denmark, 2003.
[49] F. Eisenbrand, S. Funke, A. Karrenbauer, J. Reichel, and E. Schömer. Packing a trunk: now with a twist! In SPM '05: Proceedings of the 2005 ACM Symposium on Solid and Physical Modeling, pages 197–206, New York, NY, USA, 2005. ACM Press.
[50] M. Eley. Solving container loading problems by block arrangement. European Journal of Operational Research, 141(2):393–409, 2002.
[51] O. Faroe, D. Pisinger, and M. Zachariasen. Guided local search for the three-dimensional bin packing problem. INFORMS Journal on Computing, 1999.
[52] O. Faroe, D. Pisinger, and M. Zachariasen. Guided local search for final placement in VLSI design.
Journal of Heuristics, 9(3):269–295, 2003.
[53] O. Faroe, D. Pisinger, and M. Zachariasen. Guided local search for the three-dimensional bin packing problem. INFORMS Journal on Computing, 15(3):267–283, 2003.
[54] S. Fekete and J. Schepers. A general framework for bounds for higher-dimensional orthogonal packing problems. Mathematical Methods of Operations Research, 60:311–329, 2004.
[55] S. P. Fekete and J. Schepers. On more-dimensional packing I: Modeling. Submitted to Discrete Applied Mathematics, 1997.
[56] S. P. Fekete and J. Schepers. On more-dimensional packing II: Bounds. Submitted to Discrete Applied Mathematics, 1997.
[57] S. P. Fekete and J. Schepers. On more-dimensional packing III: Exact algorithms. Submitted to Discrete Applied Mathematics, 1997.
[58] S. P. Fekete, J. Schepers, and J. C. van der Veen. An exact algorithm for higher-dimensional orthogonal packing. Operations Research, 55(3), 2007.
[59] C. E. Ferreira, F. K. Miyazawa, and Y. Wakabayashi. Packing squares into squares. Pesquisa Operacional, 19(2):223–237, 1999.
[60] J. B. G. Frenk and G. Galambos. Hybrid next-fit algorithm for the two-dimensional rectangle bin-packing problem. Computing, 39(3):201–217, 1987.
[61] M. Fuellerer, K. F. Doerner, R. Hartl, and M. Iori. Ant colony optimization for the two-dimensional loading vehicle routing problem. Computers and Operations Research, 2007.
[62] M. R. Garey and D. S. Johnson. Computers and Intractability: A Guide to the Theory of NP-Completeness. W. H. Freeman, 1979.
[63] H. Gehring and A. Bortfeldt. A parallel genetic algorithm for solving the container loading problem. International Transactions in Operational Research, 9:497–511, 2002.
[64] H. Gehring and A. Bortfeldt. A genetic algorithm for solving the container loading problem. International Transactions in Operational Research, 4:401–418, 1997.
[65] M. Gendreau, M. Iori, G. Laporte, and S. Martello.
A tabu search heuristic for the vehicle routing problem with two-dimensional loading constraints. Networks, 51(1), 2008.
[66] J. A. George and D. F. Robinson. A heuristic for packing boxes into a container. Computers and Operations Research, 7:147–156, 1980.
[67] P. C. Gilmore and R. E. Gomory. A linear programming approach to the cutting stock problem – Part I. Operations Research, 9:849–859, 1961.
[68] P. C. Gilmore and R. E. Gomory. A linear programming approach to the cutting stock problem – Part II. Operations Research, 11:863–888, 1963.
[69] P. C. Gilmore and R. E. Gomory. Multistage cutting stock problems of two and more dimensions. Operations Research, 13:94–120, 1965.
[70] F. Glover. Tabu search – Part I. ORSA Journal on Computing, 1(3):190–206, 1989.
[71] B. Golden, S. Raghavan, and E. Wasil, editors. The Vehicle Routing Problem: Latest Advances and New Challenges. Springer, 2008.
[72] A. M. Gomes and J. F. Oliveira. A 2-exchange heuristic for nesting problems. European Journal of Operational Research, 141:359–370, 2002.
[73] A. M. Gomes and J. F. Oliveira. Solving irregular strip packing problems by hybridising simulated annealing and linear programming. European Journal of Operational Research, 171(3):811–829, 2006.
[74] E. Hadjiconstantinou and N. Christofides. An exact algorithm for general, orthogonal, two-dimensional knapsack problems. European Journal of Operational Research, 83:39–56, 1995.
[75] T. C. Hales. A proof of the Kepler conjecture. Annals of Mathematics, 162:1065–1185, 2005.
[76] R. Heckmann and T. Lengauer. A simulated annealing approach to the nesting problem in the textile manufacturing industry. Annals of Operations Research, 57(1):103–133, 1995.
[77] E. Herbert and K. A. Dowsland. A family of genetic algorithms for the pallet loading problem. Annals of Operations Research, 63(3):415–436, 1996.
[78] J. C. Herz. A recursive computing procedure for two-dimensional stock cutting.
IBM Journal of Research and Development, 16:462–469, 1972.
[79] M. Hifi. Dynamic programming and hill-climbing techniques for constrained two-dimensional cutting stock problems. Journal of Combinatorial Optimization, 8(1):65–84, 2004.
[80] X. Hong, G. Huang, Y. Cai, J. Gu, S. Dong, and C. Cheng. Corner block list: An effective and efficient topological representation of non-slicing floorplan. In Proceedings of the 2000 IEEE/ACM International Conference on Computer-Aided Design, pages 8–12, 2000.
[81] I. Ikonen, W. E. Biles, A. Kumar, J. C. Wissel, and R. K. Ragade. A genetic algorithm for packing three-dimensional non-convex objects having cavities and holes. In Proceedings of the 7th International Conference on Genetic Algorithms, pages 591–598, East Lansing, Michigan, 1997. Morgan Kaufmann Publishers.
[82] T. Imamichi and H. Nagamochi. A Multi-sphere Scheme for 2D and 3D Packing Problems, volume 4638/2007, pages 207–211. 2007.
[83] T. Imamichi, M. Yagiura, and H. Nagamochi. An iterated local search algorithm based on nonlinear programming for the irregular strip packing problem. In Proceedings of the Third International Symposium on Scheduling, Tokyo, Japan, pages 132–137, 2006.
[84] M. Iori, J. J. Salazar-Gonzalez, and D. Vigo. An exact approach for the vehicle routing problem with two-dimensional loading constraints. Transportation Science, 40:342–350, 2006.
[85] S. Jain and H. C. Gea. Two-dimensional packing problems using genetic algorithms. Engineering with Computers, 14(3):206–213, 1998.
[86] S. Jakobs. On genetic algorithms for the packing of polygons. European Journal of Operational Research, 88:165–181, 1996.
[87] K. Jansen and R. Solis-Oba. An asymptotic approximation algorithm for 3D-strip packing. In Proceedings of the Seventeenth Annual ACM-SIAM Symposium on Discrete Algorithms, pages 143–152, 2006.
[88] K. Jansen and R. van Stee. On strip packing with rotations.
In Proceedings of the 37th Annual ACM Symposium on Theory of Computing (STOC 2005), pages 755–761. ACM, 2005.
[89] E. Katchalski-Katzir, I. Shariv, M. Eisenstein, A. A. Friesem, C. Aflalo, and I. A. Vakser. Molecular surface recognition: determination of geometric fit between proteins and their ligands by correlation techniques. In Proceedings of the National Academy of Sciences of the United States of America, volume 89(6), pages 2195–2199. National Academy of Sciences, 1992.
[90] H. Kellerer, U. Pferschy, and D. Pisinger. Knapsack Problems. Springer, Berlin, Germany, 2004.
[91] M. Kenmochi, T. Imamichi, K. Nonobe, M. Yagiura, and H. Nagamochi. Exact algorithms for the 2-dimensional strip packing problem with and without rotations. Technical Report 2007-005, Department of Applied Mathematics and Physics, Kyoto University, 2007.
[92] C. Kenyon and E. Remila. A near-optimal solution to a two-dimensional cutting stock problem. Mathematics of Operations Research, 25(4):645–656, 2000.
[93] S. Kirkpatrick, C. D. Gelatt Jr., and M. P. Vecchi. Optimization by simulated annealing. Science, 220(4598):671–680, 1983.
[94] Y. Kohayakawa, F. Miyazawa, P. Raghavan, and Y. Wakabayashi. Multidimensional cube packing. Electronic Notes in Discrete Mathematics, 7, 2001.
[95] K. K. Lai and J. W. M. Chan. Developing a simulated annealing algorithm for the cutting stock problem. Computers and Industrial Engineering, 32:115–127, 1997.
[96] Z. Li and V. Milenkovic. Compaction and separation algorithms for non-convex polygons and their applications. European Journal of Operational Research, 84(3):539–561, 1995.
[97] L. Lins, S. Lins, and R. Morabito. An L-approach for packing (ℓ, w)-rectangles into rectangular and L-shaped pieces. Journal of the Operational Research Society, 54(7):777–789, 2003.
[98] D. Liu and T. Teng. An improved BL-algorithm for genetic algorithms of the orthogonal packing of rectangles. European Journal of Operational Research, 112:413–420, 1999.
[99] H.
Liu and Y. He. Algorithm for 2D irregular-shaped nesting problem based on the nfp algorithm and lowest-gravity-center principle. Journal of Zhejiang University - Science A, 7(4):570–576, 2006. [100] A. Lodi, S Martello, and D. Vigo. Heuristic and metaheuristic approaches for a class of two-dimensional bin packing problems. INFORMS Journal on Computing, 11(4):345 – 357, 1999. [101] A. Lodi, S. Martello, and D. Vigo. Approximation algorithms for the oriented two-dimensional bin packing problem. European Journal of Operational Research, 112:158–166, 1999. [102] A. Lodi, S. Martello, and M. Monaci. Two-dimensional packing problems: A survey. European Journal of Operational Research, 141:241–252, 2002. [103] A. Lodi, S. Martello, and D. Vigo. Heuristic algorithms for the three-dimensional bin packing problem. European Journal of Operational Research, 141(2):410–420, 2002. [104] A. Lodi, S. Martello, and D. Vigo. Models and bounds for two-dimensional level packing problems. Journal of Combinatorial Optimization, 8(3):363–379, 2004. [105] H. Lutfiyya, B. McMillin, P. Poshyanonda, and C. Dagli. Composite stock cutting through simulated annealing. Journal of Mathematical and Computer Modelling, 16(2):57–74, 1992. [106] S. Martello and P. Toth. Lower bounds and reduction procedures for the bin-packing problem. Discrete Applied Mathematics, 26:59–70, 1990. [107] S. Martello and D. Vigo. Exact solution of the two-dimensional finite bin packing problem. Management Science, 44:388–399, 1998. 64 Introduction to Packing Problems [108] S. Martello, D. Pisinger, and D. Vigo. The three-dimensional bin packing problem. Operations Research, 48(2):256–267, 2000. ISSN 0030364X. [109] S. Martello, M. Monaci, and D. Vigo. An exact approach to the strip packing problem. INFORMS Journal on Computing, 3(15):310–319, 2003. [110] S. Martello, D. Pisinger, D. Vigo, Edgar den Boef, and Jan Korst. Algorithms for general and robotpackable variants of the three-dimensional bin packing problem. 
ACM Transactions on Mathematical Software, 33, 2007. [111] F. K. Miyazawa and Y Wakabayashi. Packing problems with orthogonal rotations. In Proceedings of the 6th Latin American Symposium on Theoretical Informatics, pages 359–368, 2004. [112] F. K. Miyazawa and Y Wakabayashi. An algorithm for the three-dimensional packing problem with asymptotic performance analysis. Algorithmica, 18:122–144, 2000. [113] F. K. Miyazawa and Y. Wakabayashi. Approximation algorithms for the orthogonal z-oriented 3-d packing problem. SIAM Journal on Computing, 29(3):1008 – 1029, 1999. [114] M. Monaci and P. Toth. A set-covering-based heuristic approach for bin-packing problems. Informs Journal Computing, 18:71–85, 2006. [115] David M. Mount, Ruth Silverman, and Angela Y. Wu. On the area of overlap of translated polygons. Computer Vision and Image Understanding, 64(1):53–61, 1996. [116] A. Moura and J. F. Oliveira. An integrated approach to the vehicle routing and container loading problems. OR Spectrum, 2008. [117] H. Murata, K. Fujiyoshi, S. Nakatake, and Y. Kajitani. VLSI module packing based on rectangle-packing by the sequence pair. IEEE Transaction on Computer Aided Design of Integrated Circuits and Systems, 15:1518–1524, 1996. [118] B. K. A. Ngoi, M. L. Tay, and E. S. Chua. Applying spatial representation techniques to the container packing problem. International Journal of Production Research, 32:111–123, 1994. [119] B. K. Nielsen. An efficient solution method for relaxed variants of the nesting problem. In Joachim Gudmundsson and Barry Jay, editors, Theory of Computing, Proceedings of the Thirteenth Computing: The Australasian Theory Symposium, volume 65 of CRPIT, pages 123–130, Ballarat, Australia, 2007. ACS. [120] B. K. Nielsen. Nesting Problems and Steiner Tree Problems. PhD thesis, DIKU, University of Copenhagen, Denmark, 2008. [121] B. K. Nielsen and A. Odgaard. Fast neighborhood search for the nesting problem. 
Technical Report 03/03, DIKU, Department of Computer Science, University of Copenhagen, 2003. [122] J. F. Oliveira and J. S. Ferreira. Algorithms for nesting problems. Applied Simulated Annealing, pages 255–273, 1993. [123] J. F. Oliveira, A. M. Gomes, and J. S. Ferreira. TOPOS - a new constructive algorithm for nesting problems. OR Spektrum, 22:263–284, 2000. [124] H. Onodera, Y. Taniguchi, and K Tamaru. Branch-and-bound placement for building block layout. In Proceedings of the 28th ACM/IEEE Design Automation Conference, pages 433–439. ACM, 1991. [125] J. O’Rourke. Computational Geometry in C. Cambridge University Press, 1998. Hardback ISBN: 0521640105; Paperback: ISBN 0521649765. 65 References [126] Y. Pang, C. Cheng, K. Lampaert, and W. Xie. Rectilinear block packing using o-tree representation. In Proceedings of the 2001 international symposium on Physical design, pages 156 – 161, 2001. [127] D. Pisinger. Denser packings obtained in O(n log log n) time. INFORMS Journal on Computing, 19(3): 395–405, 2007. [128] D. Pisinger. Heuristics for the container loading problem. European Journal of Operations Research, 3 (141):382–392, 2002. [129] D. Pisinger and S. Ropke. A general heuristic for vehicle routing problems. Computers and Operations Research, 34:2403–2435, 2007. [130] D. Pisinger and M. Sigurd. The two-dimensional bin packing problem with variable bin sizes and costs. Discrete Optimization, 2(2):154–167, 2005. [131] D. Pisinger and M. M. Sigurd. Using decomposition techniques and constraint programming for solving the two-dimensional bin packing problem. INFORMS Journal on Computing, 19(1):36–51, 2007. [132] Y. Pochet and L. A. Wolsey. Production Planning by Mixed Integer Programming. Springer, 2006. [133] G. Scheithauer. Algorithms for the container loading problem. Operations Research Proceedings 1991, pages 445–452, 1992. [134] G. Scheithauer. Equivalence and dominance for problems of optimal packing of rectangles. Ricerca Operativa, 83, 1997. [135] G. 
Scheithauer. Lp-based bounds for the container and multi-container loading problem. International Transactions in Operational Research, pages 199–213, 1999. [136] G. Scheithauer and U. Sommerweiss. 4-block heuristic for the rectangle packing problem. European Journal of Operational Research, 108:509–526, 1998. [137] G. Scheithauer and J. Terno. The g4-heuristic for the pallet loading problem. Journal of Operational Research Society, 47(4):511–522, 1996. [138] S. S. Seiden and R. van Stee. New bounds for multidimensional packing. Algorithmica, 36:261–293, 2003. [139] S. S. Skiena. Minkowski sum. In The Algorithm Design Manual, pages 395–396. Springer-Verlag, New York, 1997. [140] Y. Stoyan and et al. Packing of various radii solid spheres into a parallelepiped, 2001. [141] Y. Stoyan and L.D. Ponomarenko. Minkowski sum and hodograph of the dense placement vector function. Technical Report SER. A10, Reports of the SSR Academy of Science., 1977. [142] Y. Stoyan, J. Terno, G. Scheithauer, N. Gil, and T. Romanova. Phi-functions for primary 2d-objects, 2001. [143] Y. Stoyan, J. Terno, G. Scheithauer, N. Gil, and T. Romanova. Phi-functions for primary 2d-objects, 2001. [144] Y. Stoyan, G. Scheithauer, N. Gil, and T. Romanova. Φ-functions for complex 2d-objects. 4OR: Quarterly Journal of the Belgian, French and Italian Operations Research Societies, 2(1):69–84, 2004. [145] Y. Stoyan, N. I. Gil, G. Scheithauer, A. Pankratov, and I. Magdalina. Packing of convex polytopes into a parallelepiped. Optimization, 54(2):215–235, 2005. doi: 10.1080/02331930500050681. 66 [146] X. Tang and D. F. Wong. FAST-SP: a fast algorithm for block packing based on sequence pair. In Asia and South Pacific Design Automation Conference, pages 521–526, 2001. [147] X. Tang, R. Tian, and D. F. Wong. Fast evaluation of sequence pair in block placement by longest common subsequence computation. In Proceedings of DATE 2000 (ACM), Paris, France, pages 106– 110, 2000. [148] A. Tarnowski, J. Terno, and G. 
Scheithauer. A polynomial time algorithm for the guillotine pallet loading problem. INFOR, 32:275–287, 1994. [149] J. Terno, G. Scheithauer, U. Sommerweiss, and J. Riehme. An efficient approach for the multi-pallet loading problem. European Journal of Operations Research, 2(132):371–381, 2000. [150] V. E. Theodoracatos and J. L. Grimsley. The optimal packing of arbitrarily-shaped polygons using simulated annealing and polynomial-time cooling schedules. Computer methods in applied mechanics and engineering, 125:53–70, 1995. [151] G. T. Toussaint. A simple linear algorithm for intersecting convex polygons. The Visual Computer, 1(2): 118–123, 1985. [152] S. Umetani, T. Yagiura, S. Imahori, K. Nonobe, and T. Ibaraki. A guided local searc algorithm based on a fast neighborhood search for the irregular strip packing problem. In Proceedings of the Third International Symposium on Scheduling, Tokyo Japan, 2006. [153] R. van Stee. An approximation algorithm for square packing. Operations Research Letters, 32(6): 535–539, 2004. [154] G. Varadhan and D. Manocha. Accurate minkowski sum approximation of polyhedral models. Graphical Models, 68:343–355, 2006. [155] C. Voudouris and E. Tsang. Guided local search. Technical Report CSM-147, Department of Computer Science, University of Essex, Colchester, C04 3SQ, UK, August 1995. [156] C. Voudouris and E. Tsang. Guided local search and its application to the traveling salesman problem. European Journal of Operational Research, 113:469–499, 1999. [157] G. Wäscher, H. Haussner, and H. Schumann. An improved typology of cutting and packing problems. European Journal of Operational Research, 183:1109–1130, 2007. [158] S. Yin and J. Cagan. An extended pattern search algorithm for three-dimensional component layout. Journal of Mechanical Design, 122(1):102–108, 2000. [159] S. Yin and J. Cagan. Exploring the effectiveness of various patterns in an extended pattern search layout algorithm. Journal of Mechanical Design, 126(1):22–28, 2004. 
Published in the European Journal of Operational Research, 2007

Fast neighborhood search for two- and three-dimensional nesting problems

Jens Egeblad∗  Benny K. Nielsen∗  Allan Odgaard∗

Abstract

In this paper we present a new heuristic solution method for two-dimensional nesting problems. It is based on a simple local search scheme in which the neighborhood is any horizontal or vertical translation of a given polygon from its current position. To escape local minima we apply the meta-heuristic method Guided Local Search. The strength of our solution method comes from a new algorithm which is capable of searching the neighborhood in polynomial time. More precisely, given a single polygon with m edges and a set of polygons with n edges, the algorithm can find a translation with minimum overlap in time O(mn log(mn)). Solutions for standard test instances are generated by an implementation, and the results are compared with recent results from the literature. The solution method is very robust, and most of the best solutions found are also the currently best published results. Our approach to the problem is very flexible regarding problem variations and special constraints, and as an example we describe how it can handle materials with quality regions.
Finally, we generalize the algorithm for the fast neighborhood search and present a solution method for three-dimensional nesting problems.

Keywords: Cutting, packing, nesting, 3D nesting, guided local search

∗ Department of Computer Science, University of Copenhagen, DK-2100 Copenhagen Ø, Denmark. E-mail: {jegeblad, benny, duff}@diku.dk.

1 Introduction

Nesting is a term used for many related problems. The most common problem is strip packing, where a number of irregular shapes must be placed within a rectangular strip such that the strip length is minimized and no shapes overlap. The clothing industry is a classical example of an application of this problem. Normally, pieces of clothing are cut from a roll of fabric. High utilization is desirable, and it requires that as little of the roll as possible is used. The width of the roll is fixed, hence the problem is to minimize the length of fabric used. Other nesting problem variations exist, but in the following the focus is on the strip-packing variant. Using the typology of Wäscher et al. [36], this is a two-dimensional irregular open dimension problem (ODP).

In the textile industry the shapes of the pieces of clothing are usually referred to as markers or stencils. In the following we will use the latter term, except when the pieces need to be defined more precisely, e.g. as polygons. In order to state the problem formally we first define the associated decision problem:

Nesting Decision Problem. Given a set of stencils S and a piece of material, position the stencils S such that

• No two stencils overlap.
• All stencils are contained within the boundaries of the material.

Figure 1: The Strip Nesting Problem. Place a number of stencils on a strip with width w such that no two stencils overlap and the length l of the strip is minimized.

The strip-packing variant can now be stated as:

Strip Nesting Problem.
Given a set of stencils S and a strip (the material) with width w, find the minimal length l for which the Nesting Decision Problem can be solved (Figure 1).

The Strip Nesting Problem is NP-hard [e.g. 28]. In this paper we present a new solution method for it. After a brief analysis of some of the existing approaches to the problem (Section 2), we outline the new solution method in Section 3. In short, the approach is a local search method (Section 4) using the meta-heuristic Guided Local Search (Section 5) to escape local minima. A very efficient search of the neighborhood in the local search is the subject of Section 6. In the definitions above we have ignored the additional constraints which are often given for a nesting problem, e.g. whether rotating and/or flipping the stencils is allowed. In Section 7, a discussion of how we handle such problem variations emphasizes the flexibility of our solution method. Experiments show that our solution method is very efficient compared with other published methods; results are presented in Section 8. Finally, in Section 9, it is shown that our solution method is quite easily generalized to three-dimensional nesting problems.

2 Existing Approaches to Nesting Problems

There exist numerous solution methods for nesting problems. A thorough survey by Dowsland and Dowsland [15] exists, and a more recent survey has been done by Nielsen and Odgaard [28]. Meta-heuristics are among the most popular tools for solving nesting problems; a detailed discussion of these can be found in the introductory sections of Bennell and Dowsland [6]. The following is a brief discussion of some of the most interesting approaches to nesting problems previously presented in the literature.
The discussion is divided into three subsections concerning three different aspects of finding solutions to the problem: the basic solution method, the geometric approach, and the use of a meta-heuristic to escape local minima.

2.1 Basic solution methods

Solution methods handling nesting problems generally belong to one of two groups: those that only consider legal placements during the solution process, and those that allow overlap to occur during the solution process.

Legal placement methods. These methods never violate the overlap constraint. An immediate consequence is that a stencil must always be placed in an empty part of the material. Most methods for strip packing follow the basic steps below.

1. Determine a sequence of stencils. This can be done randomly or by sorting the stencils according to some measure, e.g. the area or the degree of convexity [30].

2. Place the stencils with some first/best-fit algorithm. Typically a stencil is placed at the contour of the stencils already placed. Some algorithms also allow hole-filling, i.e. placing a stencil in an empty area between already placed stencils [16, 17, 19].

3. Evaluate the length of the solution. Exit with this solution [3], or repeat from step 2 after changing the sequence of stencils [16, 19].

Unfortunately the second step is quite expensive, and if it is repeated these algorithms can easily end up spending time on making almost identical placements. Legal placement methods that do not perform a sequential placement also exist. These methods typically construct a legal initial solution and then introduce some set of moves (e.g. swapping two stencils) that can be controlled by a meta-heuristic, e.g. to minimize the length of a strip [8, 9, 10, 11, 20].

Relaxed placement methods. The obvious alternative is to allow overlaps to occur as part of the solution process. The objective is then to minimize the amount of overlap.
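The relaxed placement idea can be illustrated with a deliberately simplified one-dimensional analogue (everything here, including the candidate-position rule, is invented for illustration and is not the paper's algorithm): stencils become intervals on a strip, and a move slides one interval to the candidate position with the least total overlap against the others.

```python
# Toy 1D relaxed placement: stencils are intervals (x, w) on [0, strip_len].
# We start from a maximally overlapping placement and greedily move one
# interval at a time to the candidate position with least total overlap.

def overlap(a, b):
    """Length of the intersection of intervals a = (x, w) and b = (x, w)."""
    lo = max(a[0], b[0])
    hi = min(a[0] + a[1], b[0] + b[1])
    return max(0.0, hi - lo)

def total_overlap(placement):
    return sum(overlap(placement[i], placement[j])
               for i in range(len(placement)) for j in range(i))

def greedy_min_overlap(lengths, strip_len, rounds=10):
    placement = [(0.0, float(w)) for w in lengths]  # everything at the left edge
    for _ in range(rounds):
        for i in range(len(placement)):
            w = placement[i][1]
            # Candidates: stay put, the left edge, or flush against another interval.
            candidates = [placement[i][0], 0.0]
            for j, (x, wj) in enumerate(placement):
                if j != i:
                    candidates += [x - w, x + wj]
            candidates = [c for c in candidates if 0.0 <= c <= strip_len - w]
            placement[i] = (min(candidates, key=lambda c: sum(
                overlap((c, w), placement[j])
                for j in range(len(placement)) if j != i)), w)
        if total_overlap(placement) == 0:
            break
    return placement

p = greedy_min_overlap([3, 4, 2], strip_len=10)
print(total_overlap(p))  # 0.0 -- a legal placement was found
```

Like the real two-dimensional methods, this greedy scheme is monotone (the current position is always a candidate) but can get stuck in local minima with nonzero overlap, which is exactly what motivates the meta-heuristics discussed later.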
A legal placement has been found when the amount of overlap reaches 0. Numerous papers applying such a scheme exist, with varying degrees of success [5, 6, 21, 24, 25, 27, 29, 33]; especially noteworthy is the work of Heckmann and Lengauer [21]. In this context it is very easy to construct an initial placement: it can simply be a random placement of all of the stencils, although it might be better to start with a better placement. Searching for a solution can then be done by iteratively improving the placement, i.e. decreasing the total overlap and maybe also the strip length. This is typically done by moving/rotating stencils.

2.2 Geometric approaches

The first problem encountered when handling nesting problems is how to represent the stencils. If the stencils are not already given as polygons, they can quite easily be approximated by polygons, and this is what is done in most cases. A cruder approximation can be obtained using a raster model [12, 21, 27, 29]. This is a discrete model of the stencils, created by introducing a grid of some size to represent the material, i.e. each stencil covers some set of raster squares. The stencils can then be represented by matrices. An example of a simple polygon and its raster-model equivalent is given in Figure 2. A low granularity of the raster model provides fast calculations at the expense of limited precision. Better precision requires higher granularity, but this also results in slower calculations.

Figure 2: The raster model requires all stencils to be defined by a set of grid squares. The drawing above is an example of a polygon and its equivalent in a raster model.

Figure 3: The degree of overlap can be measured in various ways. Here are two examples: (a) the precise area of the overlap; (b) the horizontal intersection depth.
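The raster representation described above can be sketched for axis-aligned rectangles (a minimal illustration with invented helper names; real stencils would be rasterized from polygons): each stencil becomes the set of grid squares it covers, and the overlap of two placed stencils is simply the number of shared squares.

```python
# Sketch of the raster model: each stencil is approximated by the set of
# grid squares it covers, and the overlap of two placed stencils is the
# number of squares they cover in common.

def rasterize_rect(x, y, w, h):
    """Grid squares covered by an axis-aligned w-by-h rectangle at (x, y)."""
    return {(x + i, y + j) for i in range(w) for j in range(h)}

def raster_overlap(cells_a, cells_b):
    return len(cells_a & cells_b)

a = rasterize_rect(0, 0, 3, 2)  # columns 0..2, rows 0..1
b = rasterize_rect(2, 0, 2, 2)  # columns 2..3, rows 0..1
print(raster_overlap(a, b))     # 2: the stencils share the 1x2 column at x = 2
```

The granularity trade-off mentioned above shows up directly here: a finer grid means more cells per stencil and therefore more precise but slower overlap computations.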
Comparisons between the raster model and the polygonal model were made by Heckmann and Lengauer [21], who concluded that the polygonal model was the better choice for their purposes. Assuming polygons are preferred, we need some geometric tools to construct solutions without any overlap. In the existing literature, two basic tools have been the most popular.

Overlap calculations. The area of the overlap between two polygons (see Figure 3a) can be used to determine whether polygons overlap and by how much. This can be an expensive calculation, and thus quite a few alternatives have been suggested, such as the intersection depth [6, 14] (see Figure 3b) and the Φ-function [32], which can differentiate between three states of polygon interference: intersection, disjunction and touching. Solution methods using overlap calculations most often apply some kind of trial-and-error scheme, i.e. they try to place or move a polygon to various positions to see if, or by how much, it overlaps. This can then be used to improve some intermediate solution, which might be allowed to contain overlap [5, 6, 7, 21].

No-Fit-Polygons (NFP). Legal placement methods very often use the concept of the No-Fit-Polygon (NFP) [1, 2, 3, 16, 17, 19, 20, 30], although it can also be used in relaxed placement methods, as done by Bennell and Dowsland [5]. The NFP is a polygon which describes the legal/illegal placements of one polygon in relation to another polygon, and it was introduced by Art, Jr. [3] (although named the envelope). Given two polygons P and Q, the NFP of P in relation to Q can be constructed in the following way: choose a reference point for P, and slide P around Q as closely as possible without intersecting it. The trace of the reference point is the contour of the NFP.

Figure 4: Example of the No-Fit-Polygon (thick border) of stencil P in relation to stencil Q.

An example can be seen
in Figure 4; the reference point of P must stay outside the NFP if overlap is to be avoided. To determine whether P and Q intersect, it is only necessary to determine whether the reference point of P is inside or outside their NFP. Placing polygons closely together can be done by placing the reference point of P on one of the edges of the NFP. If P and Q have s and t edges, respectively, then the number of edges in their NFP will be O(s^2 t^2) [4]. The NFP has one major weakness: it has to be constructed for all pairs of polygons. If the polygons are not allowed to be rotated, it is feasible to do this in a preprocessing step in reasonable time, given that the number of differently shaped polygons is not too large.

2.3 Meta-heuristics

Both legal and relaxed placement methods can make use of meta-heuristics. The most popular one for nesting problems is Simulated Annealing (SA) [9, 20, 21, 27, 29, 33]. The most advanced use of it is by Heckmann and Lengauer [21], who implemented SA in four stages: the first stage is a rough placement, the second stage eliminates overlaps, the third stage is a fine placement with approximated stencils, and the last stage is a fine placement with the original stencils. Gomes and Oliveira [20] very successfully combine SA with the ideas for compaction and separation of Li and Milenkovic [26]. A very similar approach had previously been attempted by Bennell and Dowsland [5, 6], but they combined it with a Tabu Search variant. More exotic approaches include genetic, ant and evolutionary algorithms [10, 11, 25], all with very limited success.

3 Solution Method Outline

In this section we give a brief outline of our solution method. Our method is a relaxed placement method, and it can handle irregular polygons with holes. A new geometric approach is utilized, and the Guided Local Search meta-heuristic is used to escape local minima. This approach is inspired by a paper by Faroe et al.
[18], which presented a similar approach for the two-dimensional Bin Packing Problem for rectangles. The following describes the basic algorithm for the Strip Nesting Problem.

1. Finding an initial strip length. An initial strip length is found using some fast heuristic, e.g. a bottom-left bounding-box placement algorithm.

2. Reducing the strip length. The strip length is reduced by some value, which could e.g. be based on some percentage of the current length. After reducing the strip length, any polygons no longer contained within the strip are translated appropriately. This potentially causes overlap, which is removed during the subsequent optimization.

3. Applying local search to reduce overlap. The strip length is now fixed and the search for a solution without overlap can begin. The overlap is iteratively reduced by applying local search; more precisely, in each step of the local search a polygon is moved to decrease the total amount of overlap. The local search and its neighborhood are described in Section 4, and a very efficient search of the neighborhood is the focus of Section 6. If a placement without overlap is found for the current fixed strip length, then we have found a solution, and step 2 can be repeated to find even better solutions. This might not happen, though, since the local search can be caught in local minima.

4. Escaping local minima. To escape local minima we apply the meta-heuristic Guided Local Search. In short, it alters the objective function used in step 3 and then repeats the local search. It is described in more detail in Section 5.

4 Local Search

4.1 Placement

First we define a placement formally. Let S = {s_1, ..., s_n} be a set of polygons. A placement of s ∈ S can be described by the tuple (s_x, s_y, s_θ, s_f) ∈ ℝ × ℝ × [0, 2π) × {false, true}, where (s_x, s_y) is the position, s_θ is the rotation angle and s_f states whether s is flipped.
Now the map p : S → ℝ × ℝ × [0, 2π) × {false, true} is a placement of the polygons S.

4.2 Objective function

Given a set of polygons S = {s_1, ..., s_n} and a fixed-length strip with length l and width w, let P be the space of possible placements. We now wish to minimize the objective function

g(p) = ∑_{i=1}^{n} ∑_{j=1}^{i-1} overlap_ij(p),   p ∈ P,

where overlap_ij(p) is a measure of the overlap between polygons s_i and s_j. A placement p such that g(p) = 0 contains no overlap, i.e. p solves the decision problem. We have chosen overlap_ij(p) to be the area of intersection of polygons s_i and s_j with respect to the placement described by p.

4.3 Neighborhood

Given a placement p, the local search may alter p to create a new placement p′ by changing the placement of one polygon s_i ∈ S.

Figure 5: Example of a local search. (a) One polygon overlaps with several other polygons and is selected for optimization. In the top row the amount of overlap is drawn as a function of the leftmost x-coordinate of the polygon. The positions beyond the dashed line are illegal since the polygon would lie partially beyond the right limit of the strip. Local search translates the polygon to a position with least overlap. (b) In the next iteration the local search may continue with vertical translation. The graph of overlap as a function of the y-coordinate is shown, and again the polygon is translated to the position with least overlap. (c) Local search has reached a legal solution.

In each iteration the local search may apply one of the following four changes (depending on what is allowed for the given problem instance):

• Horizontal translation. Translate s_i horizontally within the strip.
• Vertical translation. Translate s_i vertically within the strip.
• Rotation. Select a new angle of rotation for s_i.
• Flipping.
Choose a new flipping state for s_i.

The new position, angle or flipping state is chosen such that the overlap with all other polygons, ∑_{j≠i} overlap_ij(p′), is minimized. In other words, p′ is created from p by reducing the total overlap in a greedy fashion. An example of a local search is shown in Figure 5.

Let N : P → 2^P be the neighborhood function such that N(p) is the set of all neighboring placements of p. We say that the placement p′ is a local minimum if

∀p ∈ N(p′) : g(p′) ≤ g(p),

i.e. there exists no neighboring solution with less overlap. The local search proceeds by iteratively creating a new placement p′ from the current placement p until p′ is a local minimum.

5 Guided Local Search

To escape local minima encountered during local search, we apply the meta-heuristic Guided Local Search (GLS). GLS was introduced by Voudouris and Tsang [34] and has previously been successfully applied to e.g. the Traveling Salesman Problem [35] and two- and three-dimensional Bin Packing Problems [18].

5.1 Features and penalties

Features are unwanted characteristics of a solution, or in our case a placement. We let the features express pairwise overlap of polygons in the placement and define the indicator function

I_ij(p) = 0 if overlap_ij(p) = 0, and I_ij(p) = 1 otherwise,   i, j ∈ {1, ..., n}, p ∈ P,

which determines whether polygons s_i and s_j overlap in the placement p. The key element of GLS is the penalties. For each feature we define a penalty count φ_ij, which is initially set to 0. We also define the utility function

µ_ij(p) = I_ij(p) · overlap_ij(p) / (1 + φ_ij).

Whenever the local search reaches a local minimum p, the feature(s) with highest utility µ_ij(p) are "penalized" by increasing φ_ij.

5.2 Augmented objective function

The features and penalties are used in an augmented objective function,

h(p) = g(p) + λ · ∑_{i=1}^{n} ∑_{j=1}^{i-1} φ_ij I_ij(p),

where λ ∈ ]0, ∞[ is a constant used to fine-tune the behavior of the meta-heuristic.
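The penalty bookkeeping of the two formulas above can be sketched as follows (a stand-alone illustration: `make_gls` and the numeric toy overlaps are invented here, and the real method would plug in polygon intersection areas for `overlap`):

```python
# Sketch of the GLS bookkeeping: g is the plain overlap objective, h the
# augmented objective, and penalize increments phi for the pair(s) of
# maximal utility mu_ij = I_ij * overlap_ij / (1 + phi_ij).

def make_gls(n, lam):
    phi = [[0] * n for _ in range(n)]  # penalty counts phi[i][j], for i > j

    def g(overlap):
        return sum(overlap(i, j) for i in range(n) for j in range(i))

    def h(overlap):
        # g(p) + lam * sum of phi_ij over currently overlapping pairs.
        return g(overlap) + lam * sum(
            phi[i][j] for i in range(n) for j in range(i) if overlap(i, j) > 0)

    def penalize(overlap):
        pairs = [(i, j) for i in range(n) for j in range(i)]
        mu = {(i, j): overlap(i, j) / (1 + phi[i][j]) if overlap(i, j) > 0 else 0.0
              for (i, j) in pairs}
        best = max(mu.values())
        if best > 0:
            for (i, j) in pairs:
                if mu[(i, j)] == best:
                    phi[i][j] += 1

    return g, h, penalize, phi

# Toy instance: three polygons where pair (1,0) overlaps by 2 and (2,1) by 1.
areas = {(1, 0): 2.0, (2, 0): 0.0, (2, 1): 1.0}
ov = lambda i, j: areas[(i, j)]
g, h, penalize, phi = make_gls(3, lam=1.0)
print(g(ov), h(ov))      # 3.0 3.0 -- no penalties yet, so h equals g
penalize(ov)             # pair (1,0) has the largest utility
print(phi[1][0], h(ov))  # 1 4.0
```

Note how repeated penalization of the same pair shrinks its utility (the 1 + φ_ij denominator), so the search gradually spreads penalties over other overlapping pairs instead of hammering one feature forever.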
Early experiments have shown that a good value for λ is around 1-4% of the area of the largest polygon. Instead of simply minimizing g(p), we let the local search of Section 4 minimize h(p). An outline of the meta-heuristic and the associated local search is given in Algorithm 1.

Algorithm 1: Guided Local Search for the Nesting Decision Problem
  Input: A set of polygons S
  Generate initial placement p
  foreach pair of polygons s_i, s_j ∈ S do set φ_ij = 0
  while p contains overlap do
    // Local search:
    while p is not a local minimum do
      Select polygon s_i
      Create p′ from p using the best neighborhood move of s_i, i.e. such that h(p′) is minimized
      Set p = p′
    // Penalize:
    foreach pair of polygons s_i, s_j ∈ S do compute µ_ij(p)
    foreach pair of polygons s_i, s_j ∈ S such that µ_ij(p) is maximal do set φ_ij = φ_ij + 1
  return p

5.3 Improvements

The efficiency of GLS can be greatly improved by using Fast Local Search (FLS) [34]. FLS divides the local search neighborhood into sub-neighborhoods which are active or inactive, depending on whether they should be considered during local search. In our context we let the moves of each polygon be a sub-neighborhood, resulting in n sub-neighborhoods. It is then the responsibility of the GLS algorithm to activate each sub-neighborhood and the responsibility of FLS to inactivate them. For the nesting problem we have chosen to let GLS activate the neighborhoods of polygons involved in penalty increments. When a polygon s is moved, we activate all polygons overlapping with s before and after the move. FLS inactivates a neighborhood when it has been searched and no improvement has been found.

If GLS runs for a long time, the penalties will at some point have grown to a level where the augmented objective function no longer makes sense in relation to the current placement. Therefore we also need to reset the penalties at some point, e.g.
after some maximum number of iterations, which depends on the number of polygons.

6 Fast Neighborhood Search

To determine a translation of a single polygon which minimizes overlap, we have developed a new polynomial-time algorithm. The algorithm itself is very simple and is presented in Section 6.2, but its correctness is not trivial and a proof is required. The core of the proof is the Intersection Area Theorem, which is the subject of the following section.

6.1 Intersection Area Theorem

In this section we present a special way to determine the area of intersection of two polygons. Nielsen and Odgaard [28] have presented a more general version of the Intersection Area Theorem which deals with rotation and arbitrary shapes. Here, however, we limit the theory to polygons and horizontal translation, since this is all we need for our algorithm to work. This also makes the proof shorter and easier to understand. In order to state the proof we need to define precisely which polygons we are able to handle. First, some definitions of edges and polygons.

Definition 3 (Edges). An edge e is defined by its end points e_a, e_b ∈ ℝ². Parametrically, an edge is denoted e(t) = e_a + t(e_b − e_a), where t ∈ [0, 1]. For a point p = (p_x, p_y) ∈ ℝ² and an edge e, we say p ∈ e if and only if p = e(t_0) for some t_0 ∈ [0, 1] and p_y ≠ min(e_ay, e_by). The condition p_y ≠ min(e_ay, e_by) is needed to handle some special cases (see Lemma 1).

Figure 6: Positive and negative edges of a polygon according to Definition 6.

Definition 4 (Edge Count Functions). Given a set of edges E, we define two edge count functions, ←f_E(p), →f_E(p) : ℝ² → ℕ_0, counting edges crossed strictly to the left of p and from p to the right, respectively:

←f_E(p) = |{e ∈ E | ∃ x_0 < p_x : (x_0, p_y) ∈ e}|,
→f_E(p) = |{e ∈ E | ∃ x_0 ≥ p_x : (x_0, p_y) ∈ e}|.

Definition 5 (Polygon). A polygon P is defined by a set of edges E.
The edges must form one or more cycles and no pair of edges from E are allowed to intersect. The interior of the polygon is defined by the set

  P̃ = {p ∈ R² | ←fE(p) ≡ 1 (mod 2)}.

For a point p ∈ R² we write p ∈ P if and only if p ∈ P̃.

Note that this is an extremely general definition of polygons. The polygons are allowed to consist of several unconnected components, and cycles can be contained within each other to produce holes in the polygon. Now, we will also need to divide the edges of a polygon into three groups.

Definition 6 (Sign of Edge). Given a polygon P defined by an edge set E we say an edge e ∈ E is positive if

  ∀t, 0 < t < 1 : ∃ε > 0 : ∀δ, 0 < δ < ε : e(t) + (δ, 0) ∈ P.    (1)

Similarly we say e is negative if Equation 1 is true with the points e(t) − (δ, 0). Finally we say e is neutral if e is neither positive nor negative. The sets of positive and negative edges from an edge set E are denoted E+ and E−, respectively.

Although we will not prove it here, it is true that any non-horizontal edge is either positive or negative, and that any horizontal edge is neutral. Notice that the positive edges are the "left" edges and the negative edges are the "right" edges with respect to the interior of a polygon (see Figure 6). The following lemma states some important properties of polygons and their positive/negative edges.

Lemma 1. Given a vertical coordinate y and some interval I, we say that the horizontal line ly(t) = (t, y), t ∈ I, crosses an edge e if there exists t0 such that ly(t0) ∈ e. Now assume that P is a polygon defined by an edge set E; then all of the following holds.

1. If I = ]−∞, ∞[ and we traverse the line from −∞ towards ∞ then the edges crossed alternate between being positive and negative.

2. If I = ]−∞, ∞[ then the line crosses an even number of edges.

3.
Assume p ∉ P. Then the infinite half-line l_py(t) for I = [px, ∞[ will cross an equal number of positive and negative edges. The same is true for I = ]−∞, px[.

4. Assume p ∈ P. If I = [px, ∞[ and the line crosses n positive edges then it will also cross precisely n + 1 negative edges. Similarly, if I = ]−∞, px[ and the line crosses n negative edges then it will also cross precisely n + 1 positive edges.

Proof. We only sketch the proof. First note that some special cases concerning horizontal edges and the points where edges meet are handled by the inequality in Definition 3. The first statement follows easily from the definition of positive and negative edges, since clearly any positive edge can only be followed by a negative edge and vice versa when we traverse the line from left to right. The other statements follow from the first statement and the observation that the first edge crossed must be positive and the last edge crossed must be negative.

The following definitions are unrelated to polygons. Their purpose is to introduce a precise definition of the area between two edges. Afterwards this will be used to calculate the area of intersection of two polygons based purely on pairs of edges.

Definition 7 (Containment Function). Given two edges e1 and e2 and a point p ∈ R², define the containment function

  C(e1, e2, p) = 1 if ∃x1, x2 : x2 < px ≤ x1, (x2, py) ∈ e2, and (x1, py) ∈ e1,
  C(e1, e2, p) = 0 otherwise.

Given two sets of edges, E and F, we generalize the containment function by summing over all pairs of edges,

  C(E, F, p) = Σ_{e1 ∈ E} Σ_{e2 ∈ F} C(e1, e2, p).

Note that given two edges e1 and e2 and a point p, then C(e1, e2, p) = 1 ⇒ C(e2, e1, p) = 0.

Definition 8 (Edge Region and Edge Region Area).
Given two edges e1 and e2 we define the edge region

  R(e1, e2) = {p ∈ R² | C(e1, e2, p) = 1}

and the area of R(e1, e2) as

  A(e1, e2) = ∫∫_{R(e1,e2)} 1 dA = ∫∫_{p ∈ R²} C(e1, e2, p) dA.

Given two sets of edges, E and F, we will again generalize by summing over all pairs of edges,

  A(E, F) = Σ_{e1 ∈ E} Σ_{e2 ∈ F} A(e1, e2).

The edge region R(e1, e2) of two edges is the set of points in the plane for which the containment function is 1. These are exactly the points which are both to the right of e2 and to the left of e1 (see Figure 7). We will postpone evaluation of A(e1, e2) to Section 6.2 since we do not need it to prove the main theorem of this section. Instead we need to prove a theorem which we can use to break down the intersection of two polygons into regions.

Figure 7: The edge region R(e1, e2) of two edges e1 and e2 (see Definition 8).

Containment Theorem. Given polygons P and Q defined by edge sets E and F, respectively, then for any point p ∈ R² the following holds:

  p ∈ P ∩ Q ⇒ w(p) = 1,
  p ∉ P ∩ Q ⇒ w(p) = 0,

where

  w(p) = C(E+, F−, p) + C(E−, F+, p) − C(E+, F+, p) − C(E−, F−, p).    (2)

Proof. First we note that from the definition of the containment function C it is immediately obvious that the only edges affecting w(p) are the edges which intersect the line l_py(t), t ∈ ]−∞, ∞[, and that only the edges from E which are to the right of p and the edges from F which are to the left of p will contribute to w(p). Now let m = →fE+(p) and n = ←fF−(p). By using Lemma 1 we can prove this theorem by counting.

First assume p ∈ P ∩ Q, which implies p ∈ P and p ∈ Q. From Lemma 1 we know that →fE−(p) = m + 1 and ←fF+(p) = n + 1. Inserting this into Equation 2 reveals:

  w(p) = nm + (m + 1)(n + 1) − m(n + 1) − (m + 1)n
       = nm + nm + n + m + 1 − nm − m − nm − n = 1.
Now for p ∉ P ∩ Q there are three cases, for which we get:

  p ∉ P ∧ p ∉ Q : w(p) = nm + nm − nm − nm = 0,
  p ∈ P ∧ p ∉ Q : w(p) = nm + (m + 1)n − nm − (m + 1)n = 0,
  p ∉ P ∧ p ∈ Q : w(p) = nm + m(n + 1) − m(n + 1) − nm = 0.

We are now ready to prove the main theorem of this section.

Intersection Area Theorem. Given polygons P and Q defined by edge sets E and F, respectively, the area of their intersection (denoted α) is

  α = A(E+, F−) + A(E−, F+) − A(E+, F+) − A(E−, F−).    (3)

Proof. From the Containment Theorem we know:

  α = ∫∫_{p ∈ R²} w(p) dA.

Using Equation 2 we get:

  ∫∫_{p ∈ R²} w(p) dA = ∫∫_{p ∈ R²} C(E+, F−, p) dA + ∫∫_{p ∈ R²} C(E−, F+, p) dA
                      − ∫∫_{p ∈ R²} C(E+, F+, p) dA − ∫∫_{p ∈ R²} C(E−, F−, p) dA.

Let us only consider ∫∫_{p ∈ R²} C(E+, F−, p) dA, which can be rewritten:

  ∫∫_{p ∈ R²} C(E+, F−, p) dA = ∫∫_{p ∈ R²} Σ_{e ∈ E+} Σ_{f ∈ F−} C(e, f, p) dA
                              = Σ_{e ∈ E+} Σ_{f ∈ F−} ∫∫_{p ∈ R²} C(e, f, p) dA
                              = Σ_{e ∈ E+} Σ_{f ∈ F−} A(e, f)
                              = A(E+, F−).

The other integrals can clearly be rewritten in the same way, and we achieve the required result.

Note that this theorem implies a very simple algorithm to calculate the area of an intersection without explicitly calculating the intersection itself.

6.2 Translational overlap

The idea behind the fast neighborhood search algorithm is to express the overlap of one polygon P with all other polygons as a function of the horizontal position of P. The key element of this approach is to consider the value of A(e, f) for each edge pair in the Intersection Area Theorem and to see how it changes when one of the edges is translated.

Calculating the area of edge regions

Fortunately it is very easy to calculate A(e, f) for two edges e and f. We only need to consider three different cases.

1. Edge e is completely to the left of edge f (Figure 8a).
Figure 8: The edge region R(e, f) of two edges as e is translated t units from left to right. (a) e is completely left of f. (b) e adjoins f. (c) e crosses f. (d) e and f are adjoined again. (e) e is to the right of f. Notice that R(e, f) is either ∅, a triangle, or a triangle combined with a parallelogram.

2. Edge e intersects edge f (Figure 8c).

3. Edge e is completely to the right of edge f (Figure 8e).

For the first case the region between the two edges is ∅. For the second case the region is a triangle, and for the third case it is a union of a triangle and a parallelogram. Now, let the edge et be the horizontal translation of e by t units and define the function a(t) = A(et, f). Assume that et intersects f only when t ∈ [t△, t▱] for appropriate t△ and t▱, and let us take a closer look at how the area of the region behaves when translating e. 1) Clearly for t < t△ we have a(t) = 0. 2) It is also easy to see that for t ∈ [t△, t▱] the intersection of the two edges occurs at a point which depends linearly on t; thus the height of the triangle depends linearly on t. The same goes for the width of the triangle, and thereby a(t) must be a quadratic function on this interval. 3) Finally, for t > t▱, a(t) is the area of the triangle at t = t▱, which is a(t▱), plus the area of some parallelogram. Since the height of the parallelogram is constant and its width is t − t▱, a(t) is a linear function for t > t▱. In other words, a(t) is a piecewise quadratic function.

The next step is to extend the Intersection Area Theorem to edge sets Et and F, where Et is every edge from E translated by t units, i.e. we want to define the function α(t) = A(Et, F). For each pair of edges et ∈ Et and f ∈ F the interval of intersection is determined and the function ae,f(t) = A(et, f) is formulated as previously described.
All functions ae,f(t) are piecewise quadratic and have the form:

  ae,f(t) = 0                                  for t < t△e,f,
  ae,f(t) = A△e,f t² + B△e,f t + C△e,f        for t ∈ [t△e,f, t▱e,f],
  ae,f(t) = B▱e,f t + C▱e,f                    for t > t▱e,f.    (4)

We denote the constants t△e,f and t▱e,f the breakpoints of the edge pair e and f, the values A△e,f, B△e,f, C△e,f the triangle coefficients of ae,f(t), and the values B▱e,f and C▱e,f the parallelogram coefficients of ae,f(t).

The total area of intersection between two polygons, as a function of the translation of one of the polygons, can now be expressed as in Equation 3:

  α(t) = A(Et+, F−) + A(Et−, F+) − A(Et+, F+) − A(Et−, F−).    (5)

The functions ae,f(t) are all piecewise quadratic functions, and thus any sum of these, specifically Equation 5, is also a piecewise quadratic function. In the next section we are going to utilize this result in our algorithm by iteratively constructing α(t) for increasing values of t.

Determining the minimum overlap translation

Given a polygon P defined by an edge set E and a set of polygons S (P ∉ S) defined by an edge set F, the local search of Section 4 looks for a translation of P such that the total area of intersection with polygons from S is minimized. In this section we present an algorithm capable of determining such a translation.

The outline of the algorithm is as follows. For each pair of edges (e, f) ∈ E × F, use the signs of the edges to evaluate whether ae,f(t) contributes positively or negatively to the sum in Equation 5. Then determine the breakpoints for e and f and compute the triangle and parallelogram coefficients of ae,f(t). Finally, traverse the breakpoints of all edge pairs from left to right and at each breakpoint maintain the function α(t) = Ãt² + B̃t + C̃, where all of the coefficients are initially set to zero. Each breakpoint corresponds to a change for one of the functions ae,f(t).
Either we enter the triangle phase at t△e,f or the parallelogram phase at t▱e,f of ae,f(t). Upon entry of the triangle phase at t△e,f we add the triangle coefficients to the coefficients of α(t). Upon entry of the parallelogram phase at t▱e,f we subtract the triangle coefficients and add the parallelogram coefficients to the coefficients of α(t).

To find the minimal value of α(t) we consider the value of α(t) within each interval between subsequent breakpoints. Since α(t) is quadratic on such an interval, determining the minimum of each interval is trivial using second order calculus. The overall minimum can easily be found by considering all interval minima. The algorithm is sketched in Algorithm 2.

The running time of the algorithm is dominated by the sorting of the breakpoints, since the remaining parts of the algorithm run in time O(|E| · |F|). Thus the algorithm has a worst case running time of O(|E| · |F| log(|E| · |F|)), which in practice can be reduced by only considering polygons from S which overlap horizontally with P. Theoretically, every pair of edges in E × F could give rise to a new edge in the intersection P ∩ Q. Thus a lower bound for the running time of an algorithm which can compute such an intersection must be Ω(|E||F|). In other words, Algorithm 2 is only a logarithmic factor slower than the lower bound for determining the intersection for just a single position of P.

Algorithm 2: Determine Horizontal Translation with Minimal Overlap
  Input: A set S of polygons and a polygon P ∉ S
  foreach edge e from the polygons in S do
    foreach edge f from P do
      Create breakpoints for the edge pair (e, f)
  Let B = the breakpoints in sorted order
  Define the area function α(t) = Ãt² + B̃t + C̃
  Set Ã = B̃ = C̃ = 0
  foreach breakpoint b ∈ B do
    Modify α(t) by changing Ã, B̃ and C̃
    Look for a minimum on the next interval of α(t)
  return t with smallest α(t)
7 Problem Variations

The solution method presented in the previous sections can also be applied to a range of variations of nesting problems. Two of the most interesting are discussed in the following subsections. More details and other variations are described by Nielsen and Odgaard [28].

7.1 Rotation

We have efficiently solved the problem of finding an optimal translation of a polygon. A very similar problem is to find the optimal rotation of a polygon, i.e. how much a polygon should be rotated to overlap the least with other polygons. It has been shown by Nielsen and Odgaard [28] that a rotational variant of the Intersection Area Theorem is also possible. They also showed how to calculate the breakpoints needed for an iterative algorithm. It is an open question, though, whether an efficient iterative algorithm can be constructed. Nevertheless, the breakpoints can be used to limit the number of rotation angles that need to be examined to determine the existence of a rotation resulting in no overlap. This is still quite good, since free rotation in existing solution methods is usually handled in a brute-force discrete manner, i.e. by calculating the overlap for a large set of rotation angles and then selecting a minimum.

7.2 Quality regions

In e.g. the leather industry the raw material can be divided into regions of quality [22]. Some polygons may be required to be of a specific quality and should therefore be confined to the corresponding regions. This is easily dealt with by representing each region by a polygon and marking each region polygon with a positive value describing its quality. Now, if an element is required to be of a specific quality, region polygons of poorer quality are included during overlap calculation with the element, thus disallowing placements of the element within a region with less-than-required quality. Note that the complexity of the translation algorithm is not affected by the number of quality levels.
8 Results

The solution method described in the previous sections has been implemented in C++, and we call the implementation 2DNest. A good description of the data instances used can be found in Gomes and Oliveira [20]. These data instances are all available on the ESICUP homepage¹. Some of their characteristics are included in Table 1. For some instances rotation is not allowed, while for others 180° or even 90° rotation is allowed. In 2DNest this is handled by extending the neighborhood to include translations of rotated variants of the stencils. Note that the data instances Dighe1 and Dighe2 are jigsaw puzzles for which other solution methods would clearly be more efficient, but it is still interesting to see how well they are handled, since we know the optimal solution.

Most of the data instances have frequently been used in the literature, but with regard to quality the best results are reported by Gomes and Oliveira [20]. They also report average results found when doing 20 runs for each instance. Gomes and Oliveira implemented two variations of their solution method (GLSHA and SAHA), and results for the latter can be found in Table 1 (2.4 GHz Pentium 4). Average computation times are included in the table since they vary between instances; more precisely, they vary between 22 seconds and 173 minutes. When considering all instances the average computation time is more than 74 minutes. In total it would have taken more than 15 days to do all of the experiments on a single processor.

SAHA is an abbreviation of "simulated annealing hybrid algorithm". A greedy bottom-left placement heuristic is used to generate an initial solution, and afterwards simulated annealing is used to guide the search of a simple neighborhood (pairwise exchanges of stencils). Linear programming models are used for local optimizations, including removing any overlap.

¹ http://www.apdio.pt/esicup
We have chosen to run 2DNest on each instance 20 times using 10 minutes for each run (3 GHz Pentium 4). In Table 1 the quality of the average solution is compared to SAHA, followed by comparisons of the standard deviation, the worst solution found and the best solution found. We have also done a single 6 hour run for each instance; it would take less than 6 days to do these experiments on a single processor. The best average results, the best standard deviations and the largest minimum results are underlined in the table. Disregarding the 6 hour runs, the best of the maximum results are also underlined in the table. Note that the varying computation times of SAHA make it difficult to compare results, but most of the results (10 out of 15) are obtained using more than the 600 seconds used by 2DNest. The quality of a solution is given as a utilization percentage, that is, the percentage of area covered by the stencils in the resulting rectangular strip.

Average results by 2DNest are in general better. The exceptions are Dagli, Shapes2 and Swim, for which the average is better for SAHA. The best solutions for these instances are also found by SAHA, and this is also the case for Dighe1, Dighe2, Shirts and Trousers; the latter two are beaten by the single 6 hour run, though. The jigsaw puzzles (Dighe1 and Dighe2) are actually also handled quite well by 2DNest, but it is not quite able to achieve 100% utilization. Disregarding the jigsaw puzzles, we have found the best known solutions for 10 out of 13 instances. The standard deviations and the minimum results are clearly better for 2DNest, with the exception of Shapes2 and Swim, which are instances that are in general handled badly by 2DNest compared to SAHA. At least for Swim this is likely to be related to the fact that this instance is very complicated, with an average of almost 22 vertices per stencil.
It probably requires more time or some multilevel approach, e.g. using approximated stencils or leaving out small stencils early in the solution process. The latter is the approach taken by SAHA in their multi-stage scheme, which is used for the last three instances in the table (Shirts, Swim and Trousers). The best solutions produced by 2DNest (including the 6 hour runs) are presented in Figure 9.

9 Three-dimensional Nesting

Our fast translation method is not restricted to two dimensions. In this section we will describe how the method can be used for three-dimensional nesting, but we will not generalize the proofs from Section 6. Solutions for such problems have applications in the area of Rapid Prototyping [37], and Osogami [31] has made a small survey of existing solution methods.

9.1 Generalization to three dimensions

It is straightforward to design algorithms to translate polyhedra in three dimensions. Edges are replaced by faces, edge regions (areas) are replaced by face regions (volumes), and so forth. Positive and negative faces are also just a natural generalization of their edge counterparts. The only real problem is to efficiently calculate the face region R(f, g) between two faces f and g.
Table 1: Comparison of our implementation 2DNest and SAHA by Gomes and Oliveira [20]. For each data instance the number of stencils to be nested and the allowed rotation are given. Both algorithms have been run 20 times; average, minimum and maximum utilization (%) are given, supplemented by the standard deviation. 2DNest uses 10 minutes (600 seconds) for each run, which can be compared to the varying running times of SAHA in the final column (averages in seconds). The second to last column is the result of running 2DNest once for 6 hours (21600 seconds).

Name      Size  Deg.  | Average        | Std. Dev.     | Minimum        | Maximum          | 6 hours | Sec.
                      | 2DNest  SAHA   | 2DNest  SAHA  | 2DNest  SAHA   | 2DNest  SAHA     | 2DNest  | SAHA
Albano    24    180°  | 86.96   84.70  | 0.32    1.23  | 86.12   83.27  | 87.44   87.43    | 87.88   | 2257
Dagli     30    180°  | 85.31   85.38  | 0.53    1.07  | 83.97   83.14  | 85.98   87.15    | 87.05   | 5110
Dighe1    16    –     | 93.93   82.13  | 5.16    3.90  | 86.57   74.68  | 99.86   100.00   | 99.84   | 83
Dighe2    10    –     | 93.11   84.17  | 5.42    6.84  | 81.81   75.73  | 99.95   100.00   | 93.02   | 22
Fu        12    90°   | 90.93   87.17  | 0.62    1.40  | 90.05   85.08  | 91.84   90.96    | 92.03   | 296
Jakobs1   25    90°   | 88.90   75.79  | 0.42    0.88  | 87.07   75.39  | 89.07   78.89†∗  | 89.03   | 332
Jakobs2   25    90°   | 80.28   74.66  | 0.18    0.89  | 79.53   74.23  | 80.41   77.28    | 81.07   | 454
Mao       20    90°   | 82.67   80.72  | 0.87    0.87  | 81.07   78.93  | 85.15   82.54    | 85.15   | 8245
Marques   24    90°   | 88.73   86.88  | 0.25    0.81  | 88.08   85.31  | 89.17   88.14    | 89.82   | 7507
Shapes0   43    –     | 65.42   63.20  | 0.78    0.98  | 64.25   61.39  | 67.09   66.50    | 66.42   | 3914
Shapes1   43    180°  | 71.74   68.63  | 0.79    1.41  | 71.12   65.41  | 73.84   71.25    | 73.23   | 10314
Shapes2   28    180°  | 79.89   81.41  | 1.05    0.74  | 76.71   80.00  | 81.21   83.60    | 81.59   | 2136∗
Shirts    99    180°  | 85.73   85.67  | 0.41    0.49  | 85.14   84.91  | 86.33   86.79†   | 87.38   | 10391
Swim      48    180°  | 70.27   72.28  | 0.69    0.97  | 69.41   70.63  | 71.53   74.37    | 72.49   | 6937
Trousers  64    180°  | 89.29   89.02  | 0.28    0.57  | 88.77   87.74  | 89.84   89.96    | 90.46   | 8588

† Better results were obtained by a simpler greedy approach (GLSHA) [20]: 81.67% for Jakobs1 and 86.80% for Shirts.
∗ These values have been corrected compared to those given in Gomes and Oliveira [20].

Figure 9: The best solutions found by 2DNest (Albano, Dagli, Dighe1, Dighe2, Fu, Jakobs1, Jakobs2, Mao, Marques, Shapes0, Shapes1, Shapes2, Shirts, Swim and Trousers), easily comparable with the ones shown in Gomes and Oliveira [20]. All but Dagli, Dighe1, Dighe2, Shapes2 and Swim are also the currently best known solutions in the literature.

Figure 10: An illustration of the face region between two faces. The faces are not necessarily parallel, but the sides of the face region are parallel with the translation direction. The face region would be more complicated if the two faces were intersecting.

Figure 11: Translation of a triangle f through another triangle g along the x-axis, where the triangles have the same projection onto the yz-plane. The face region R(f, g) changes shape each time two corner points meet.

Assume that translation is done along the direction of the x-axis. An illustration of a face region is given in Figure 10. Note that the volume will not change if we simplify the two faces to the end faces of the face region. This can be done by projecting the faces onto the yz-plane, finding and triangulating the intersection polygon, and projecting this back onto the faces. This reduces the problem to the calculation of the volume of the face region between two triangles in three-dimensional space.

We know that the three pairs of corner points will meet under translation. Sorted according to when they meet, we denote these the first, second and third breakpoint. An illustration of the translation of two such triangles is given in Figure 11. Such a translation will almost always go through the following 4 phases.

1. No volume (Figure 11a).

2. After the first breakpoint the volume becomes a growing tetrahedron (Figure 11b).

3. The second breakpoint stops the tetrahedron (Figure 11c).
The growing volume is now a bit harder to describe (Figure 11d), and we will take care of it in a moment.

4. After the third breakpoint the volume grows linearly. It can be calculated as a constant plus the area of the projected triangle multiplied by the translation distance since the corner points met.

Figure 12: The volume of a tetrahedron can be calculated from the three vectors a, b and c. In our case a = (x, 0, 0), and b and c are linearly dependent on x, which is the length of a (and the translation distance since the tetrahedron started growing).

We have ignored 3 special cases of pairs of corner points meeting at the same time. 1) If the faces are parallel then we can simply skip to phase 4 and use a zero constant. 2) If the two last pairs of corner points meet at the same time then we can simply skip phase 3. 3) Finally, if the first two pairs of corner points meet at the same time we can skip phase 2.

The reasoning for phase 3 is simple. Figure 11c illustrates that it is possible to cut the triangle into two parts which are easier to handle than the original triangle. The upper triangle is still a growing tetrahedron, but the lower triangle is a bit different: it is a tetrahedron growing from an edge instead of a corner, and it can be calculated as a constant minus the volume of a shrinking tetrahedron.

The basic function needed is therefore the volume V(x) of a growing tetrahedron (a shrinking tetrahedron then follows easily). This can be calculated in several different ways, but one of them is especially suited for our purpose. Given three directional vectors a, b, c from one of the corner points of the tetrahedron, the following general formula can be used:

  V = (1/3!) |a · (b × c)|.    (6)

In our case one of the vectors is parallel to the x-axis, corresponding to the translation direction. An example of three vectors is given in Figure 12.
Since the angles of the tetrahedron are unchanged during translation, the vectors b and c do not change direction and can simply be scaled by the value x to match the current translation, where x is the distance translated. This is indicated in the drawing. Using Equation 6, we can derive the following formula for the change of volume when translating:

  V(x) = (1/3!) |a · (xb × xc)|
       = (1/3!) |x³ (by cz − bz cy)|
       = (1/6) (by cz − bz cy) x³.

However, this function is inadequate for our purpose, since it is based on the assumption that the translation is 0 when x = 0. We need a translation offset t, and by replacing x with x − t we get:

  V(x) = (1/6) (by cz − bz cy) (x³ − 3tx² + 3t²x − t³).    (7)

Figure 13: The Ikonen data set.

Now it is a simple matter to use Algorithm 2 from Section 6 for translating polyhedra, with Equation 7 providing the breakpoint polynomials. The volume function is a cubic polynomial for which addition and finding the minimum are constant time operations. Assume we are given two polyhedra with m and n faces, respectively (with an upper limit on the number of vertices for each face); then the running time of the three-dimensional variant of Algorithm 2 is exactly the same as for the two-dimensional variant, O(mn log(mn)). However, the constants involved are larger.

9.2 Results for three dimensions

A prototype, 3DNest, has been implemented, and its performance has been compared with the very limited existing results. In the literature only one set of simple data instances has been used. They were originally created by Ilkka Ikonen and later used by Dickinson and Knopf [13] to compare their solution method with that of Ikonen et al. [23]. Eight objects are available in the set; they are presented in Table 2 and Figure 13. Some of them have holes, but they are generally quite simple: they can all be drawn in two dimensions and then just extended in the third dimension.
They have no relation to real-world data instances.

Table 2: The Ikonen data set.

Name    # Faces  Volume  Bounding box
Block1  12       4.00    1.00 × 2.00 × 2.00
Part2   24       2.88    1.43 × 1.70 × 2.50
Part3   28       0.30    1.42 × 0.62 × 1.00
Part4   52       2.22    1.63 × 2.00 × 2.00
Part5   20       0.16    2.81 × 0.56 × 0.20
Part6   20       0.24    0.45 × 0.51 × 2.50
Stick2  12       0.18    2.00 × 0.30 × 0.30
Thin    48       1.25    1.00 × 3.00 × 3.50

Based on these objects, two test cases were created by Dickinson and Knopf for their experiments.

• Case 1: Pack 10 objects into a cylinder of radius 3.4 and height 3.0. The 10 objects were chosen as follows: 3 × Part2, 1 × Part4 and 2 × each of Part3, Part5 and Part6. The total number of faces is 260, and 11.3% of the total volume is filled.

• Case 2: Pack 15 objects into a cylinder of radius 3.5 and height 5.5. The 15 objects were chosen as in Case 1, but with 5 more Part2. The total number of faces is 380, and 12.6% of the total volume is filled.

Dickinson and Knopf report execution times for both their own solution method (serial packing) and the one by Ikonen et al. (a genetic algorithm); they ran the benchmarks on a 200 MHz AMD K6 processor. The results are presented in Table 3, in which results from our algorithm are included. Our initial placement is a random placement, which could be a problem since it would quite likely contain almost no overlap and then it would not say much about our algorithm, especially the GLS part. To make the two cases a bit harder we therefore doubled the number of objects. Our tests were run on a 733 MHz G4. Even considering the difference in processor speeds, there is no doubt that our method is the fastest for these instances. Illustrations of the resulting placements can be seen in Figure 14.

Figure 14: The above illustrations contain twice as many objects as originally intended in Ikonen's Case 1 and 2. They only took a few seconds to find.

Table 3: Execution times for 3 different heuristic approaches. Note that the number of objects is doubled for 3DNest.

Test    Ikonen et al.  Dickinson and Knopf  3DNest
Case 1  22.13 min.     45.55 sec.           3.2 sec. (162 translations)
Case 2  26.00 min.     81.65 sec.           8.1 sec. (379 translations)

10 Conclusion

We have presented a new solution method for nesting problems. The solution method uses local search to reduce the amount of overlap in a greedy fashion, and it uses Guided Local Search to escape local minima. To find new positions for stencils which decrease the total overlap, we have developed a new algorithm which determines a horizontal or vertical translation of a polygon with least overlap. Furthermore, our solution method can easily be extended to handle otherwise complicated requirements such as free rotation and quality regions.

The solution method has also been implemented and is in most cases able to produce better solutions than those previously published. It is also robust, with very good average solutions and small standard deviations compared to previously published solution methods, and this is within a reasonable time limit of 10 minutes per run. Finally, we have generalized the method to three dimensions, which enables us to also solve three-dimensional nesting problems.

Acknowledgments

We would like to thank A. Miguel Gomes and José F. Oliveira for providing additional data on the performance of their solution method [20]. We would also like to thank Martin Zachariasen and the anonymous referees for some valuable remarks.

References

[1] M. Adamowicz and A. Albano. Nesting two-dimensional shapes in rectangular modules. Computer Aided Design, 1:27–33, 1976.

[2] A. Albano and G. Sappupo. Optimal allocation of two-dimensional irregular shapes using heuristic search methods. IEEE Transactions on Systems, Man and Cybernetics, 5:242–248, 1980.

[3] R. C. Art, Jr. An approach to the two dimensional, irregular cutting stock problem. Technical Report 36.Y08, IBM Cambridge Scientific Center, September 1966.

[4] T.
Asano, A. Hernández-Barrera, and S. C. Nandy. Translating a convex polyhedron over monotone polyhedra. Computational Geometry, 23(3):257–269, 2002. doi: 10.1016/S0925-7721(02)00098-6.
[5] J. A. Bennell and K. A. Dowsland. Hybridising tabu search with optimisation techniques for irregular stock cutting. Management Science, 47(8):1160–1172, 2001.
[6] J. A. Bennell and K. A. Dowsland. A tabu thresholding implementation for the irregular stock cutting problem. International Journal of Production Research, 37:4259–4275, 1999.
[7] J. Blazewicz and R. Walkowiak. A local search approach for two-dimensional irregular cutting. OR Spektrum, 17:93–98, 1995.
[8] J. Blazewicz, P. Hawryluk, and R. Walkowiak. Using a tabu search approach for solving the two-dimensional irregular cutting problem. Annals of Operations Research, 41:313–325, 1993.
[9] E. K. Burke and G. Kendall. Applying simulated annealing and the no fit polygon to the nesting problem. In Proceedings of the World Manufacturing Congress, pages 70–76. ICSC Academic Press, 1999.
[10] E. K. Burke and G. Kendall. Applying ant algorithms and the no fit polygon to the nesting problem. In Proceedings of the 12th Australian Joint Conference on Artificial Intelligence (AI’99), volume 1747, pages 454–464. Springer Lecture Notes in Artificial Intelligence, 1999.
[11] E. K. Burke and G. Kendall. Applying evolutionary algorithms and the no fit polygon to the nesting problem. In Proceedings of the 1999 International Conference on Artificial Intelligence (IC-AI’99), volume 1, pages 51–57. CSREA Press, 1999.
[12] P. Chen, Z. Fu, A. Lim, and B. Rodrigues. The two dimensional packing problem for irregular objects. International Journal on Artificial Intelligence Tools, 2004.
[13] J. K. Dickinson and G. K. Knopf. Serial packing of arbitrary 3D objects for optimizing layered manufacturing.
In Intelligent Robots and Computer Vision XVII, volume 3522, pages 130–138, 1998.
[14] D. Dobkin, J. Hershberger, D. Kirkpatrick, and S. Suri. Computing the intersection-depth of polyhedra. Algorithmica, 9:518–533, 1993.
[15] K. A. Dowsland and W. B. Dowsland. Solution approaches to irregular nesting problems. European Journal of Operational Research, 84:506–521, 1995.
[16] K. A. Dowsland, W. B. Dowsland, and J. A. Bennell. Jostling for position: Local improvement for irregular cutting patterns. Journal of the Operational Research Society, 49:647–658, 1998.
[17] K. A. Dowsland, S. Vaid, and W. B. Dowsland. An algorithm for polygon placement using a bottom-left strategy. European Journal of Operational Research, 141:371–381, 2002.
[18] O. Faroe, D. Pisinger, and M. Zachariasen. Guided local search for the three-dimensional bin packing problem. INFORMS Journal on Computing, 15(3):267–283, 2003.
[19] A. M. Gomes and J. F. Oliveira. A 2-exchange heuristic for nesting problems. European Journal of Operational Research, 141:359–370, 2002.
[20] A. M. Gomes and J. F. Oliveira. Solving irregular strip packing problems by hybridising simulated annealing and linear programming. European Journal of Operational Research, 171(3):811–829, 2006.
[21] R. Heckmann and T. Lengauer. A simulated annealing approach to the nesting problem in the textile manufacturing industry. Annals of Operations Research, 57:103–133, 1995.
[22] J. Heistermann and T. Lengauer. The nesting problem in the leather manufacturing industry. Annals of Operations Research, 57:147–173, 1995.
[23] I. Ikonen, W. E. Biles, A. Kumar, J. C. Wissel, and R. K. Ragade. A genetic algorithm for packing three-dimensional non-convex objects having cavities and holes. In Proceedings of the 7th International Conference on Genetic Algorithms, pages 591–598, East Lansing, Michigan, 1997. Morgan Kaufmann Publishers.
[24] P. Jain, P. Fenyes, and R. Richter. Optimal blank nesting using simulated annealing.
Journal of Mechanical Design, 114:160–165, 1992.
[25] S. Jakobs. On genetic algorithms for the packing of polygons. European Journal of Operational Research, 88:165–181, 1996.
[26] Z. Li and V. Milenkovic. Compaction and separation algorithms for non-convex polygons and their applications. European Journal of Operational Research, 84:539–561, 1995.
[27] H. Lutfiyya, B. McMillin, P. Poshyanonda, and C. Dagli. Composite stock cutting through simulated annealing. Journal of Mathematical and Computer Modelling, 16(2):57–74, 1992.
[28] B. K. Nielsen and A. Odgaard. Fast neighborhood search for the nesting problem. Technical Report 03/03, DIKU, Department of Computer Science, University of Copenhagen, 2003.
[29] J. F. Oliveira and J. S. Ferreira. Algorithms for nesting problems. In Applied Simulated Annealing, pages 255–273, 1993.
[30] J. F. Oliveira, A. M. Gomes, and J. S. Ferreira. TOPOS – a new constructive algorithm for nesting problems. OR Spektrum, 22:263–284, 2000.
[31] T. Osogami. Approaches to 3D free-form cutting and packing problems and their applications: A survey. Technical Report RT0287, IBM Research, Tokyo Research Laboratory, 1998.
[32] Y. Stoyan, G. Scheithauer, N. Gil, and T. Romanova. Φ-functions for complex 2D-objects. 4OR: Quarterly Journal of the Belgian, French and Italian Operations Research Societies, 2(1):69–84, 2004.
[33] V. E. Theodoracatos and J. L. Grimsley. The optimal packing of arbitrarily-shaped polygons using simulated annealing and polynomial-time cooling schedules. Computer Methods in Applied Mechanics and Engineering, 125:53–70, 1995.
[34] C. Voudouris and E. Tsang. Guided local search. Technical Report CSM-147, Department of Computer Science, University of Essex, Colchester, C04 3SQ, UK, August 1995.
[35] C. Voudouris and E. Tsang. Guided local search and its application to the traveling salesman problem. European Journal of Operational Research, 113:469–499, 1999.
[36] G. Wäscher, H. Haussner, and H. Schumann.
An improved typology of cutting and packing problems. European Journal of Operational Research, this issue, 2006.
[37] X. Yan and P. Gu. A review of rapid prototyping technologies and systems. Computer Aided Design, 28(4):307–318, 1996.

Addendum to “Fast neighborhood search for two- and three-dimensional nesting problems”

Jens Egeblad

1 Implementation Background

A number of implementation details were missing from the paper [A]. In this addendum we present new experiments in Section 2 and elaborate on some of the missing details in Section 3. The work presented in the paper [A] is based on older work by Egeblad et al. [1], and the first implementation from 2001 already showed results which were competitive with the best results from the literature. The implementation was later completely rewritten as part of the Master’s thesis by Nielsen and Odgaard [4], and this implementation was slightly modified and improved for the first paper [A]. The implementation was heavily modified again to improve floating point stability issues, to handle the strip-packing variant of three-dimensional problems of the paper [B], to allow for analysis of overlap measures and handling of free rotation by Nielsen [6], to handle repeated pattern nesting by Nielsen [5], and to manage the objective function of the paper [E]. Several parts of the implementation were made more efficient, especially for the three-dimensional problems.

2 New Experiments

Experiments from the paper [A] were rerun with the new implementation (see also Nielsen [6]), and a comparison of the new implementation (Current NEST2D) and the old implementation (Old 2DNEST) as well as two other state-of-the-art heuristics by Gomes and Oliveira [2] (SAHA) and Imamichi et al. [3] (ILSQN) is shown in Table 1. The results of both Old 2DNEST and Current NEST2D are over 20 runs, and the running times for each run in both implementations were 600 seconds, although on two different processors.
Running times vary for SAHA (see [A] for more details), while results for ILSQN are from 10 runs of 600 or 1200 seconds depending on the size of the problem. Best results and average results over the 20 runs for 2DNEST and NEST2D, and average results for the other heuristics, are presented in the table. It can be seen from this table that both the old and, in particular, the new implementation are still competitive with the other heuristics from the literature: the overall average of the average results of NEST2D is 84.93, while the overall average of the best achieved utilizations is 85.75. This almost matches ILSQN, which has an overall average of average results equal to 82.74 and an overall average of best results equal to 85.89 – only 0.14 percentage points better than NEST2D. It is important to note that the running times are for the complete optimization phase and not for solving the last decision variant. However, the majority of the running time is spent solving the decision problem on the last few strip-lengths.

3 Implementation Details

A number of interesting details are omitted from the paper [A] and we discuss them briefly here:

                      ---------- Average results ----------    ------------ Best results ------------
Instance   Size   Old 2DNEST   NEST2D    ILSQN     SAHA    Old 2DNEST   NEST2D    ILSQN     SAHA
Albano       24        86.96    87.50    87.14    84.70         87.44    87.90    88.16    87.43
Dagli        30        85.31    86.17    85.80    85.38         85.98    87.51    87.40    87.15
Dighe1       16        93.93    99.99    90.49    82.13         99.86   100.00    99.89   100.00
Dighe2       10        93.11    99.99    84.21    84.17         99.95   100.00    99.99   100.00
Fu           12        90.93    91.03    87.57    87.17         91.84    91.94    90.67    90.96
Jakobs1      25        88.90    89.05    84.78    75.79         89.07    89.09    86.89    78.89
Jakobs2      25        80.28    80.71    80.50    74.66         80.41    82.44    82.51    77.28
Mao          20        82.67    83.08    81.31    80.72         85.15    84.23    83.44    82.54
Marques      24        88.73    89.12    86.81    86.88         89.17    89.85    89.03    88.14
Shapes0      43        65.42    66.07    66.49    63.20         67.09    66.74    68.44    66.50
Shapes1      43        71.74    72.35    72.83    68.63         73.84    73.83    73.84    71.25
Shapes2      28        79.89    80.69    81.72    81.41         81.21    82.32    84.25    83.60
Shirts       99        85.73    86.61    88.12    85.67         86.33    87.35    88.78    86.79
Swim         48        70.27    72.00    74.62    72.28         71.53    73.04    75.29    74.37
Trousers     64        89.29    89.64    88.69    89.02         89.84    90.04    89.79    89.96
Average                83.54    84.93    82.74    80.12         85.25    85.75    85.89    84.32

Deg. 180° 180° 90° 90° 90° 90° 90° 180° 180° 180° 180° 180°

Table 1: Average and best results are compared for the four different solution methods for nesting. The number of shapes (Size) and the degrees of rotation allowed (Deg.) are also reported. Processors are a 3.0 GHz Intel Pentium IV (Old 2DNEST), a 2.16 GHz Intel Core Duo (Current NEST2D), a 2.8 GHz Intel Xeon (ILSQN), and a 2.4 GHz Pentium IV (SAHA).

Resetting penalties

From time to time penalties are reset to 0 for two reasons. Firstly, because this ensures that the augmented objective function is never too far from the actual objective function. Secondly, because it ‘kicks’ the heuristic out of the current solution state and allows for massive new changes. A reset is conducted every 5 · n² iterations, where n is the number of items. This value was found through parameter tuning.

Penalties in the overlap algorithm

Penalties must be included when overlap is calculated in the algorithm which finds the minimal overlap translation between a polygon p and a set of polygons P \ {p}.
To do this, overlap between p and the individual polygons is maintained as the algorithm traverses the list of breakpoints. If the overlap for an individual polygon q increases beyond zero, the penalty of the overlap of p and q is added to the objective function. When it decreases to zero again it is subtracted. This can be done without increasing the asymptotic running time since each breakpoint comes from one edge of exactly one polygon from P \ {p}, and therefore, each breakpoint only requires an update of the overlap of one polygon, which can be done in constant time.

Rotations

In the original work by Egeblad et al. [1] rotations were handled differently than translations. A normal overlap algorithm was implemented to return the overlap of a pair of polygons. The overlap of each rotation angle considered was measured using this algorithm, while the minimal overlap translation algorithm was used to determine overlap of translations. Because of small floating point errors, the two algorithms could return slightly different overlap values for the same position and rotation. This could cause the heuristic to cycle through the same two placements infinitely, since one placement would seem to reduce the overlap when calculated with one of the algorithms, and the other placement when calculated with the other algorithm. To remedy this problem, the implementation used by Nielsen and Odgaard [4] and in later papers uses the same algorithm for rotation and translation. This is done by considering a full horizontal translation for each rotation angle considered.

Iterations

The paper reports running times but does not detail the number of iterations. However, for the instances tested in the paper, the number of translations considered ranges between 1,000 and 26,000 per second depending on the complexity of the instance.
Since two translations are considered for each polygon, and an additional number of translations for each rotation, the number of iterations per second is roughly between 250 and 13,000, and the full number of iterations ranges between 150,000 and 7,800,000 for a complete 600 second run.

Decreasing the strip-length

The strip-length L is decreased by setting it to L = L0 × (1 − ε), where L0 is the length of the last solved decision problem. Initially ε is set to 0.01, i.e. the strip-length is decreased by 1% between each decision problem. However, if no solution to a decision problem has been found within 10 · n² iterations, then the heuristic updates ε by setting it to ε = 0.7 · ε0, where ε0 is the last used value of ε.

References

[A] J. Egeblad, B. K. Nielsen, and A. Odgaard. Fast neighborhood search for two- and three-dimensional nesting problems. European Journal of Operational Research, 183(3):1249–1266, 2007.
[1] J. Egeblad, B. K. Nielsen, and A. Odgaard. Metaheuristikken guided local search anvendt på pakning af irregulære polygoner. (Project at DIKU), 2001.
[2] A. M. Gomes and J. F. Oliveira. Solving irregular strip packing problems by hybridising simulated annealing and linear programming. European Journal of Operational Research, 171(3):811–829, 2006.
[3] T. Imamichi, M. Yagiura, and H. Nagamochi. An iterated local search algorithm based on nonlinear programming for the irregular strip packing problem. In Proceedings of the Third International Symposium on Scheduling, Tokyo, Japan, pages 132–137, 2006.
[4] B. K. Nielsen and A. Odgaard. Fast neighborhood search for the nesting problem. Technical Report 03/03, DIKU, Department of Computer Science, University of Copenhagen, 2003.
[5] Benny K. Nielsen. An efficient solution method for relaxed variants of the nesting problem.
In Joachim Gudmundsson and Barry Jay, editors, Theory of Computing, Proceedings of the Thirteenth Computing: The Australasian Theory Symposium, volume 65 of CRPIT, pages 123–130, Ballarat, Australia, 2007. ACS.
[6] Benny Kjær Nielsen. Nesting Problems and Steiner Tree Problems. PhD thesis, DIKU, University of Copenhagen, Denmark, 2008.

Accepted for publication in Computational Geometry: Theory and Applications, 2008

Translational packing of arbitrary polytopes

Jens Egeblad∗  Benny K. Nielsen∗  Marcus Brazil†

Abstract

We present an efficient solution method for packing d-dimensional polytopes within the bounds of a polytope container. The central geometric operation of the method is an exact one-dimensional translation of a given polytope to a position which minimizes its volume of overlap with all other polytopes. We give a detailed description and a proof of a simple algorithm for this operation in which one only needs to know the set of (d − 1)-dimensional facets in each polytope. Handling non-convex polytopes or even interior holes is a natural part of this algorithm. The translation algorithm is used as part of a local search heuristic, and a meta-heuristic technique, guided local search, is used to escape local minima. Additional details are given for the three-dimensional case, and results are reported for the problem of packing polyhedra in a rectangular parallelepiped. Utilization of container space is improved by an average of more than 14 percentage points compared to previous methods. The translation algorithm can also be used to solve the problem of maximizing the volume of intersection of two polytopes given a fixed translation direction. For two polytopes with complexity O(n) and O(m) and a fixed dimension, the running time is O(nm log(nm)) for both the minimization and maximization variants of the translation algorithm.
Keywords: Packing, heuristics, translational packing, packing polytopes, minimizing overlap, maximizing overlap, strip-packing, guided local search

1 Introduction

Three-dimensional packing problems have applications in various industries, e.g., when items must be loaded and transported in shipping containers. The three main problems are bin-packing, knapsack packing, and container loading. In bin-packing the minimum number of equally-sized containers sufficient to pack a set of items must be determined. In knapsack packing one is given a container with fixed dimensions and a set of items, each with a profit value; one must select a maximum profit subset of the items which may be packed within the container. The container loading problem is a special case of the knapsack problem where the profit value of each item is set to its volume. Bin-packing, knapsack packing, and container loading problems involving boxes are classified as orthogonal packing problems and are well-studied in the literature. In general, three-dimensional packing problems can also involve more complicated shapes; it is not only boxes that are packed in shipping containers. An interesting example is rapid prototyping, which is a term originally used for the production of physical prototypes of 3D computer aided design (CAD) models needed in the early design or test phases of new products. Nowadays, rapid prototyping

∗ Department of Computer Science, University of Copenhagen, DK-2100 Copenhagen Ø, Denmark. E-mail: {jegeblad, benny}@diku.dk.
† ARC Special Research Centre for Ultra-Broadband Information Networks (CUBIN), an affiliated program of National ICT Australia, Department of Electrical and Electronic Engineering, The University of Melbourne, Victoria 3010, Australia. E-mail: [email protected]

Figure 1: An illustration of a typical machine for rapid prototyping.
The powder is added one layer at a time and the laser is used to sinter what should be solidified to produce the desired objects. technologies are also used for manufacturing purposes. One of these technologies, the selective laser sintering process, is depicted in Figure 1. The idea is to build up the object(s) by adding one very thin layer at a time. This is done by rolling out a thin layer of powder and then sintering (heating) the areas/lines which should be solid by the use of a laser. The unsintered powder supports the objects built, and therefore no pillars or bridges have to be made to account for gravitational effects. This procedure takes hours (“rapid” when compared to weeks), and since the time required for the laser is significantly less than the time required for preparing a layer of powder, it is an advantage to have as many objects as possible built in one run of the machine. A survey of rapid prototyping technologies is given by Yan and Gu [31]. In order to minimize the time used by the rapid prototype machine, items must be placed as densely as possible and the number of layers must be minimized. The problem of minimizing layers may therefore be formulated as a strip-packing problem: A number of items must be placed within a container such that the container height is minimized. In this paper we present a solution method for the multidimensional strip-packing problem. However, our techniques may be applied to some of the other problem variants, e.g., bin-packing. Specifically, for three dimensions, we pack a number of arbitrary (both convex and non-convex) polyhedra in a parallelepiped such that one of the parallelepiped’s dimensions is minimized. No rotation is allowed and gravity is not considered. A formal description of the problem is given in Section 2 and a review of related work is given in Section 3. The solution method described in this paper generalizes previous work by Egeblad et al. [A].
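A strip-packing heuristic of this kind is typically driven by an outer loop that repeatedly solves decision problems (can all items be placed without overlap at a given height?) while shrinking the open dimension. A minimal sketch in Python, reusing the strip-length update scheme described in the addendum to [A] (shrink by a factor 1 − ε, scale ε by 0.7 on failure); the names `strip_pack`, `solve_decision`, and `volume_bound` are hypothetical illustrations, not code from the paper:

```python
def strip_pack(items, width, length, start_height, solve_decision,
               eps=0.01, min_eps=1e-4):
    """Shrink the container height while the decision heuristic succeeds.

    `solve_decision(items, width, length, height)` is a hypothetical
    routine returning a feasible placement or None.  Following the
    scheme in the addendum to [A], the height is shrunk by a factor
    (1 - eps), and eps is scaled by 0.7 whenever no feasible placement
    is found.  `start_height` is assumed large enough to be feasible.
    """
    height = start_height
    best = None
    while eps >= min_eps:
        candidate = height * (1.0 - eps)
        placement = solve_decision(items, width, length, candidate)
        if placement is not None:
            height = candidate            # decision problem solved: commit
            best = (height, placement)
        else:
            eps *= 0.7                    # failed: retry with a smaller step
    return best


# Toy stand-in for the decision heuristic: feasible whenever the total
# item volume fits in the container volume (items are just volumes here).
def volume_bound(items, width, length, height):
    return "ok" if sum(items) <= width * length * height else None


best = strip_pack([10.0, 10.0], 2.0, 2.0, 10.0, volume_bound)
```

In the actual heuristic the decision problems would of course be answered by the overlap-minimizing translation search rather than a simple volume bound; the loop then returns the smallest height for which a feasible placement was found.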
This earlier paper focused on the two-dimensional variant of this problem, which is generally known as the nesting problem (packing polygons in a rectangle), but also included a short description and some results for a three-dimensional generalization. In both cases overlap is iteratively reduced by a central algorithm which determines a one-dimensional translation of a given polygon/polyhedron to a minimum overlap position. Egeblad et al. only prove the correctness of the two-dimensional variant. In this paper, we prove the correctness of the translation algorithm in three and higher dimensions (Section 4), essentially describing a solution method for packing polytopes in d-dimensional space. We also give a more detailed description of the translation algorithm in three dimensions. The complete solution method is described in Section 5. Because applications for d > 3 are not obvious, an implementation has only been done for the three-dimensional case. Experimental results are presented in Section 6 and compared with existing results from the literature. Finally, some concluding remarks are given in Section 7.

2 Problem description

The main problem considered in this paper is as follows:

The 3D Decision Packing Problem (3DDPP). Given a set of polyhedra S and a polyhedral container C, determine whether a non-overlapping translational placement of the polyhedra within the bounds of the container exists.

This problem is NP-complete even if all polyhedra in S are cubes [16]. If ν(P) denotes the volume of a polyhedron P and this is generalized for sets such that ν(S) = ∑_{P∈S} ν(P), then a non-overlapping placement for the 3DDPP has a utilization (of the container) of ν(S)/ν(C). Based on the decision problem we can define the following optimization problem.

The 3D Strip Packing Problem (3DSPP).
Given a set of polyhedra S and a rectangular parallelepiped C (the container) with fixed width w and length l, find the minimum height h of the container for which the answer to the 3D decision packing problem is positive.

An optimal solution to 3DSPP has a utilization of ν(S)/ν(C) = ν(S)/(w · l · h), i.e., the utilization only depends on the height of the parallelepiped and not on a particular placement corresponding to this height. The word strip is based on the terminology used for the two-dimensional variant of the problem. While the solution method discussed in the following sections could be applied to the bin-packing problem or other variants of multi-dimensional packing problems, we limit our description to the strip-packing problem. The strip-packing variant has been chosen mainly because it allows a comparison with results from the existing literature. In the typology of Wäscher et al. [30], the problem we consider, 3DSPP, is a three-dimensional irregular open dimension problem (ODP) with fixed orientations of the polyhedra. The polyhedra handled in this paper are very general. Informally, a polyhedron can be described as a solid whose boundary consists of a finite number of polygonal faces. Note that every face must separate the exterior and the interior of the polyhedron, but convexity is not required and holes and interior voids are allowed. A polyhedron is even allowed to consist of several disconnected parts, and holes may contain smaller individual parts. The problem formulations above are easily generalized to higher dimensions. Simply replace polyhedron with polytope and consider a rectangular d-dimensional parallelepiped for the strip-packing problem. We denote the corresponding problems dDDPP and dDSPP, where d is the number of dimensions. For simplicity, the faces are required to be convex, but this is not a restriction on the types
of polytopes allowed, as a non-convex face can be partitioned into a finite number of convex faces. The polytopes themselves can be non-convex and contain holes. Since dDDPP is NP-complete, dDSPP is an NP-hard problem. Our solution method for dDSPP is heuristic and therefore not guaranteed to find the optimal solution of dDSPP. When solving a problem in 3D for physical applications such as rapid prototyping, one should be aware that some feasible solutions are not very useful in practice since objects may be interlocked. Avoiding this is a very difficult constraint which is not considered in this paper.

3 Related work

Cutting and packing problems have received a lot of attention in the literature, but focus has mainly been on one or two dimensions and also often restricted to simple shapes such as boxes. A survey of the extensive 2D packing literature is given by Sweeney and Paternoster [28], and a survey of the 2D packing literature concerning irregular shapes (nesting) is given by Dowsland and Dowsland [12]. Recent heuristic methods for orthogonal packing problems include the work of Lodi et al. [22] and Faroe et al. [15] for the bin-packing problem, and Bortfeldt et al. [2] and Eley [14] for the container loading problem. The meta-heuristic approach utilized in this paper is based on the ideas presented by Faroe et al. In the following, we review solution methods presented for packing problems in more than two dimensions which also involve shapes more general than boxes. A survey is given by Cagan et al. [4] in the broader context of three-dimensional layout problems, for which maximum utilization may not be the only objective. Their focus is mainly on various meta-heuristic approaches to the problems, but a section is also dedicated to approaches for determining intersections of shapes. A survey on 3D free form packing and cutting problems is given by Osogami [26].
This covers applications in both rapid prototyping, in which maximum utilization is the primary objective, and applications in product layout, in which other objectives, e.g., involving electrical wire routing length and gravity, are more important. Ikonen et al. [20] have developed one of the earliest approaches to a non-rectangular 3D packing problem. Using a genetic algorithm they can handle non-convex shapes with holes and a fixed number of orientations (45° increments on all three axes). To evaluate if two shapes overlap, their bounding boxes (the smallest axis-aligned circumscribing box) are first tested for intersection and, if they intersect, triangles are subsequently tested for intersection. For each pair of intersecting triangles it is calculated how much each edge of each triangle intersects the opposite triangle. Cagan et al. [3] use the meta-heuristic simulated annealing, and they allow rotation. They can also handle various additional optimization objectives such as routing lengths. Intersection checks are done using octree decompositions of shapes. As the annealing progresses the highest resolution is increased to improve accuracy. Improvements of this work using variants of the meta-heuristic pattern search instead of simulated annealing are later described by Yin and Cagan [32, 33]. Dickinson and Knopf [10] focus on maximizing utilization, but they introduce an alternative metric to determine the compactness of a given placement of shapes. In short, this metric measures the compactness of the remaining free space. The best free space, in three dimensions, is in the form of a sphere. The metric is later used by Dickinson and Knopf [11] with a sequential placement algorithm for three-dimensional shapes. Items are placed one-by-one according to a predetermined sequence, and each item is placed at the best position as determined by the free-space metric. To evaluate if two shapes overlap they use depth-maps.
For each of the six sides of the bounding box of each shape, they divide the box-side into a uniform two-dimensional grid and store the distance perpendicular to the box-side from each grid cell to the shape’s surface. Determining if two shapes overlap now amounts to testing the distance at all overlapping grid points of bounding box sides, when the sides are projected to two dimensions. For each shape up to 10 orientations around each of its rotational axes are allowed. Note that the free-space metric is also generalized for higher dimensions and thus the packing algorithm could potentially work in higher dimensions. Hur et al. [19] use voxels, a three-dimensional uniform grid structure, to represent shapes. As for the octree decomposition technique, each grid-cell is marked as full if a part of the associated shape is contained within the cell. The use of voxels allows for simple evaluation of overlap of two shapes since overlap only occurs if one or more overlapping grid cells from both shapes are marked as full. Hur et al. [19] also use a sequential placement algorithm and a modified bottom-left strategy which always tries to place the next item of the sequence close to the center of the container. A genetic algorithm is used to iteratively modify the sequence and reposition the shapes. Eisenbrand et al. [13] investigate a special packing problem where the maximum number of uniform boxes that can be placed in the trunk of a car must be determined. This includes free orientation of the boxes. For any placement of boxes they define a potential function that describes the total overlap and penetration depth between boxes and trunk sides and of pairs of boxes. Boxes are now created, destroyed, and moved randomly, and simulated annealing is used to decide if new placements should be accepted. Recently, Stoyan et al. [27] presented a solution method for 3DSPP handling convex polyhedra only (without rotation).
The solution method is based on a mathematical model, and it is shown how locally optimal solutions can be found. Stoyan et al. [27] use Φ-functions to model non-intersection requirements. A Φ-function for a pair of shapes is defined as a real value calculated from their relative placement. If the shapes overlap, abut, or do not overlap, the value of the Φ-function is greater than, equal to, or less than 0, respectively. A tree-search is proposed to solve the problem to optimality, but due to the size of the solution space Stoyan et al. opt for a method that finds locally optimal solutions instead. Computational results are presented for three problem instances with up to 25 polyhedra. A comparison with the results of the solution method presented in this paper can be found in Section 6.

4 Axis-Aligned Translation

As mentioned in the introduction, our solution method for packing polytopes is based on an algorithm for translating a given polytope to a minimum volume of overlap position in an axis-aligned direction. This problem is polynomial-time solvable, and we present an efficient algorithm for it here. The algorithm can easily be modified to determine a maximum overlap translation. Note that the position of a polytope is specified by the position of a given reference point on the polytope; hence its position corresponds to a single point. Without loss of generality, we assume that the translation under consideration is an x-axis-aligned translation. In three dimensions, the problem we are solving can be stated as follows:

1-Dimensional Translation Problem in 3D (1D3DTP). Given a fixed polyhedral container C, a polyhedron Q with fixed position, and a polyhedron P with fixed position with respect to its y and z coordinates, find a horizontal offset x for P such that the volume of overlap between P and Q is minimized (and P is within the bounds of the container C).
By replacing the term polyhedron with polytope, this definition is easily generalized to higher dimensions, in which case we denote it 1DdDTP. In Section 4.1 we give a more formal definition and present a number of properties of polytopes which we use in Section 4.2 to prove the correctness of an algorithm for 1DdDTP. Since the algorithm solves the problem of finding a minimum overlap position of P, we refer to it in the following as the translation algorithm. In Section 4.3 we provide additional details for the three-dimensional case. Egeblad et al. [A] have proved the correctness of the two-dimensional special case of the algorithm described in the following, and they have also sketched how the ideas can be generalized to 3D. Here we flesh out the approach sketched by Egeblad et al. and generalize it to d dimensions.

4.1 Polytopes and their Intersections

In the literature, the term polytope is usually synonymous with convex polytope, which can be thought of as the convex hull of a finite set of points in d-dimensional space or, equivalently, a bounded intersection of a finite number of half-spaces. In this paper we use the term polytope to refer to a more general class of regions in d-dimensional space, which may be non-convex and can be formed from a finite union of convex polytopes. By definition, we assume that the boundary of a polytope P is composed of faces, each of which is a convex polytope of dimension less than d. Following standard notation, we refer to a one-dimensional face of P as an edge and a zero-dimensional face as a vertex. The faces must satisfy the following properties:

1. The (d − 1)-dimensional faces of P (which we refer to as facets) have the property that two facets do not intersect in their interiors.

2. The facets must be simple, i.e., on the boundary of a facet each vertex must be adjacent to exactly d − 1 edges. Note that this only affects polytopes of dimension 4 or more.

3.
Each face of P of dimension k < d − 1 lies on the boundary of at least two faces of P of dimension k + 1 (and hence, by induction, on the boundary of at least two facets).

Note that our definition of a polytope allows two adjacent facets to lie in the same hyperplane. This allows our polytopes to be as general as possible, while imposing the condition that all faces are convex (by partitioning any non-convex facets into convex (d − 1)-dimensional polytopes). Most importantly, our definition of polytopes does not require boundaries to be triangulated in 3D. Note that such a requirement would only allow minor simplifications in the proofs and the algorithm described later in this paper. Given a polytope P, we write p ∈ P if and only if p is a point of P including the boundary. More importantly, we write p ∈ int(P) if and only if p is an interior point of P, i.e., ∃ε > 0 : ∀p′ ∈ R^d : ||p − p′|| < ε ⇒ p′ ∈ P. We next introduce some new definitions and concepts required to prove the correctness of the translation algorithm in d dimensions. Let ei be the i-th coordinate system basis vector, e.g., e1 = (1, 0, . . . , 0)^T. As stated earlier, we only consider translations in the direction of the x-axis (that is, with direction ±e1). This simplifies the definitions and theorems of this section without loss of generality, since translations along other axes work in a similar fashion. In the remainder of this section it is convenient to refer to the direction −e1 as left and e1 as right. Given a polytope P, we divide the points of the boundary of P into three groups: positive, negative, and neutral.

Definition 9 (Signs of a Boundary Point). Suppose p is a point of the boundary of a polytope P.
We say that the sign of p is

• positive if ∃ε > 0 : ∀δ ∈ (0, ε) : p + δ·e1 ∈ int(P) and p − δ·e1 ∉ int(P),

• negative if ∃ε > 0 : ∀δ ∈ (0, ε) : p − δ·e1 ∈ int(P) and p + δ·e1 ∉ int(P),

• and neutral if it is neither positive nor negative.

Figure 2: (a) A polygon with three positive (thick) edges, (v1, v7), (v2, v3), and (v5, v6), three negative (dashed) edges, (v3, v4), (v4, v5), and (v6, v7), and one neutral (thin) edge, (v1, v2). Also note that the end-points v1, v3, v5, v6, and v7 are neutral (thin), v2 is positive (thick), and v4 is negative (dashed). (b) A polyhedron for which only positive (bright) and neutral (dark) faces are visible. Most of the edges are neutral since the interior of the polyhedron is neither to the left nor to the right of the edges. Two edges are positive since the interior is only to the right of them.

In other words, a point is positive if the interior of the polytope is only on the right side of the point, it is negative if the interior of the polytope is only on the left side of the point, and it is neutral if the interior of the polytope is on both the left and the right side of the point or on neither the left nor the right side. Clearly, each point on the boundary is covered by one and only one of the above cases. Furthermore, all points in the interior of a given face have the same sign. Therefore, any set of facets F can be partitioned into three sets F+, F−, and F0 consisting of respectively positive, negative, and neutral facets from F. Examples in two and three dimensions are given in Figure 2. In order to handle some special cases in the proofs, we need to be very specific as to which facet a given boundary point belongs. Every positive or negative point p on the boundary is assigned to exactly one facet as follows.
If p belongs to the interior of a facet f then it cannot belong to the interior of any other facet, and thus it is simply assigned to f. If p does not belong to the interior of a facet then it must be on the boundary of two or more facets. If p is positive, then it follows easily that this set of facets contains at least one positive facet to which it can be assigned. Analogously, if p is negative it is assigned to a negative facet. Such an assignment of the boundary will be referred to as a balanced assignment. Neutral points are not assigned to any facets. Given a (positive or negative) facet f, we write p ∈ f if and only if p is a point assigned to f. Note that since all points in the interior of a face have the same sign, it follows that the assignment of all boundary points of the polytope can be done in bounded time; one only needs to determine the sign of one interior point of a face (of dimension 1 or more) to assign the whole interior of the face to a facet. It follows from the definition of balanced assignment that a point moving in the direction of e1 that passes through an assigned point of a facet of P either moves from the exterior of P to the interior of P or vice versa. To determine when a point is inside a polytope we need the following definition.

Definition 10 (Facet Count Functions). Given a set of facets F, we define the facet count functions for all points p ∈ R^d as follows:

←C_{F+}(p) = |{ f ∈ F+ | ∃t > 0 : p − t·e1 ∈ f }|,
←C_{F−}(p) = |{ f ∈ F− | ∃t ≥ 0 : p − t·e1 ∈ f }|,
→C_{F+}(p) = |{ f ∈ F+ | ∃t ≥ 0 : p + t·e1 ∈ f }|,
→C_{F−}(p) = |{ f ∈ F− | ∃t > 0 : p + t·e1 ∈ f }|.

The facet count functions ←C_{F+}(p) and ←C_{F−}(p) represent the number of times the ray from p with directional vector −e1 intersects a facet from F+ and F−, respectively.
Equivalently, →C_{F+}(p) and →C_{F−}(p) represent the number of times the ray from p with directional vector e1 intersects a facet from F+ and F−, respectively. The following lemma states some other important properties of polytopes and their positive/negative facets based on the facet count functions above.

Lemma 2. Let P be a polytope with facet set F. Given a point p and an interval I ⊆ R, we say that the line segment l_p(t) = p + t·e1, t ∈ I, intersects a facet f ∈ F if there exists t0 ∈ I such that l_p(t0) ∈ f. Given a balanced assignment of the boundary points of P, all of the following statements hold:

1. If I = (−∞, ∞) then, as t increases from −∞, the facets intersected by l_p(t) alternate between positive and negative.

2. If p ∉ int(P) then →C_{F+}(p) = →C_{F−}(p), i.e., the ray from p in direction e1 intersects an equal number of positive and negative facets. Similarly, ←C_{F+}(p) = ←C_{F−}(p).

3. If p ∈ int(P) then →C_{F−}(p) − →C_{F+}(p) = 1, i.e., the number of positive facets intersected by the ray from p in direction e1 is one less than the number of negative facets. Similarly, ←C_{F+}(p) − ←C_{F−}(p) = 1.

The proof is straightforward and is omitted. As a corollary, the facet count functions provide an easy way of determining whether or not a given point is in the interior of P.

Corollary 1. Let P be a polytope with facet set F. Given a balanced assignment of the boundary points, for every point p ∈ R^d the point p lies in the interior of P if and only if →C_{F−}(p) − →C_{F+}(p) = 1. Similarly, p lies in the interior of P if and only if ←C_{F+}(p) − ←C_{F−}(p) = 1.

Proof. Follows directly from Lemma 2.

The following definitions relate only to facets. Their purpose is to introduce a precise definition of the overlap between two polytopes in terms of their facets.
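Before moving on, the test of Corollary 1 can be made concrete in 2D: classify each edge of a simple polygon by the sign of Definition 9 and count the positive and negative edges crossed by the rightward ray from p. The function below is our own minimal sketch, not part of the thesis implementation; it assumes a counterclockwise vertex list and a query point that is not level with any vertex.

```python
def edge_sign(a, b):
    """Sign of an edge of a CCW polygon: the interior lies to the left of a->b,
    so downward edges have the interior on their right (+x) side (positive)."""
    if a[1] == b[1]:
        return 0            # neutral: parallel to the translation axis e1
    return 1 if b[1] < a[1] else -1

def interior_by_facet_counts(poly, p):
    """Corollary 1 in 2D: p is interior iff the rightward ray from p crosses
    one more negative than positive edge (assumes p is not level with a vertex)."""
    pos = neg = 0
    n = len(poly)
    for i in range(n):
        a, b = poly[i], poly[(i + 1) % n]
        s = edge_sign(a, b)
        if s == 0:
            continue
        lo, hi = min(a[1], b[1]), max(a[1], b[1])
        if not (lo < p[1] < hi):
            continue
        # x-coordinate where the edge crosses the horizontal line through p
        x = a[0] + (b[0] - a[0]) * (p[1] - a[1]) / (b[1] - a[1])
        if x > p[0]:
            if s > 0:
                pos += 1
            else:
                neg += 1
    return neg - pos == 1

square = [(0, 0), (4, 0), (4, 4), (0, 4)]   # counterclockwise
print(interior_by_facet_counts(square, (2, 1)))   # True
print(interior_by_facet_counts(square, (5, 1)))   # False
```

Here the horizontal edges of the square are neutral, its right edge is negative, and its left edge is positive, exactly as in Figure 2a.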
Figure 3: A simple example of the inter-facet region R(f, g) of two vertical faces f and g in R^3. The dashed lines delimit the region. Note that the inter-facet region R(g, f) is empty.

Definition 11 (Containment Function, Inter-Facet Region). Given two facets f and g and a point p ∈ R^d, define the containment function

C(f, g, p) = { 1 if ∃t1, t2 ∈ R : t2 < 0 < t1, p + t2·e1 ∈ g, and p + t1·e1 ∈ f
             { 0 otherwise.

Also define the inter-facet region R(f, g) as the set of points which are both to the right of g and to the left of f; that is, R(f, g) = {p ∈ R^d | C(f, g, p) = 1}. Given two facet sets F and G, we generalize the containment function by summing over all pairs of facets (one from each set):

C(F, G, p) = Σ_{f∈F} Σ_{g∈G} C(f, g, p).

If f and g do not intersect and f lies to the right of g, then in three dimensions the inter-facet region R(f, g) is a tube, with the projection (in direction e1) of f onto g and the projection of g onto f as ends. A simple example is given in Figure 3. We now state a theorem which uses the containment function to determine whether or not a given point lies in the intersection of two polytopes.

Theorem 2. Let P and Q be polytopes with facet sets F and G, respectively. Then for any point p ∈ R^d the following holds:

p ∈ int(P ∩ Q) ⇔ w(p) = 1
p ∉ int(P ∩ Q) ⇔ w(p) = 0,

where

w(p) = C(F+, G−, p) + C(F−, G+, p) − C(F+, G+, p) − C(F−, G−, p).  (1)

Proof. C(F+, G−, p) is equal to →C_{F+}(p) · ←C_{G−}(p), since it is equal to the number of facets from F+ which are to the right of p times the number of facets from G− which are to the left of p. Similar observations allow us to deduce that C(F−, G+, p) = →C_{F−}(p) · ←C_{G+}(p), C(F+, G+, p) = →C_{F+}(p) · ←C_{G+}(p), and C(F−, G−, p) = →C_{F−}(p) · ←C_{G−}(p). If we assume p ∈ int(P ∩ Q) then from Corollary 1 we know that →C_{F−}(p) − →C_{F+}(p) = 1 and ←C_{G+}(p) − ←C_{G−}(p) = 1, and we get:

w(p) = →C_{F+}(p)·←C_{G−}(p) + →C_{F−}(p)·←C_{G+}(p) − →C_{F+}(p)·←C_{G+}(p) − →C_{F−}(p)·←C_{G−}(p)
     = −→C_{F+}(p)·(←C_{G+}(p) − ←C_{G−}(p)) + →C_{F−}(p)·(←C_{G+}(p) − ←C_{G−}(p))
     = →C_{F−}(p) − →C_{F+}(p) = 1.

When p ∉ int(P ∩ Q), the three cases where p ∉ P and/or p ∉ Q can be evaluated in similar fashion.

In order to solve 1DdDTP we need to define a measure of the overlap between two polytopes.

Definition 12 (Overlap Measures). An overlap measure is a real-valued function µ such that, for any bounded region R′, µ(R′) = 0 if int(R′) = ∅ and µ(R′) > 0 otherwise.

Preferably, an overlap measure should be computationally efficient and give a reasonable estimate of the degree of overlap of two polytopes. A general discussion of overlap measures in the context of translational packing algorithms can be found in Nielsen [25]. For the remainder of this paper we restrict our attention to the standard Euclidean volume measure V^d. Given a bounded region of space R, we write V^d(R) for its volume:

V^d(R) = ∫_R dV^d.

In particular, V^d(R(f, g)) is the volume of the inter-facet region of facets f and g. For convenience, we let V^d(f, g) = V^d(R(f, g)), and for sets of facets we use

V^d(F, G) = Σ_{f∈F} Σ_{g∈G} V^d(f, g).

The following theorem states that V^d is an overlap measure with a simple decomposition into volumes of inter-facet regions.

Theorem 3. Let R′ be a bounded region in R^d, and let P and Q be polytopes in R^d with facet sets F and G respectively, such that R′ = P ∩ Q. Then V^d is an overlap measure, and it satisfies the following relation:

V^d(R′) = V^d(F+, G−) + V^d(F−, G+) − V^d(F+, G+) − V^d(F−, G−).

Proof.
The theorem essentially follows from Theorem 2 and the proof given for the Intersection Area Theorem in Egeblad et al. [A], with area integrals replaced by d-dimensional volume integrals.

Note that a balanced assignment is not required in order to get the correct overlap value using the facet decomposition of Theorem 3. This is because the set of points in P ∩ Q which require the balanced assignment of boundary points to get the correct value in Theorem 2 has d-dimensional volume 0 and therefore has no impact on the resulting volume. In order to simplify notation in the following sections, for a d-dimensional region R we write V(R) for V^d(R), and for given facets f and g we write V(f, g) for V^d(f, g).

4.2 Minimum volume translations

In the following we describe an efficient algorithm for solving 1DdDTP with respect to the volume measure using Theorem 3. We continue to assume that the translation direction is e1, i.e., parallel to the x-axis, and we will use terms such as left, right, and horizontal as natural references to this direction.

4.2.1 Volume Calculations

Here we describe a method for computing the volume of overlap between two polytopes by expressing it in terms of the volumes of a collection of inter-facet regions and then using the decomposition of the volume measure given in Theorem 3. First we introduce some basic notation. Given a point p ∈ R^d and a translation value t ∈ R, we use p(t) to denote the point p translated by t units to the right, i.e., p(t) = p + t·e1. Similarly, given a facet f ∈ F, we use f(t) to denote the facet translated t units to the right, i.e., f(t) = {p(t) ∈ R^d | p ∈ f}. Finally, given a polytope P with facet set F, we let P(t) denote P translated t units to the right, i.e., P(t) has facet set {f(t) | f ∈ F}. Now, consider two polytopes P and Q with facet sets F and G, respectively.
For any two facets f ∈ F and g ∈ G, we will show how to express the volume function V(f(t), g) as a piecewise polynomial function in t of degree d. Combined with Theorem 3, this will allow us to express the full overlap of P(t) and Q as a function of t by iteratively adding and subtracting volume functions of inter-facet regions. In order to express volumes in higher dimensions, we use the concept of a simplex. A simplex is the d-dimensional analogue of a triangle, and it can be defined as the convex hull of a set of d + 1 affinely independent points. Before describing how to calculate V(f(t), g), we first make a few general observations on R(f(t), g); recall that V(f(t), g) = V(R(f(t), g)).

Definition 13 (Hyperplanes and Projections). Let f and g be facets of polytopes in R^d. We denote by f̄ the unique hyperplane of R^d containing f. Also, we define the projection of g onto f̄ as

proj_f̄(g) = {p ∈ f̄ | ∃t ∈ R : p(t) ∈ g}.  (2)

In other words, proj_f̄(g) is the horizontal projection of the points of g onto f̄. Using these definitions, there are a number of different ways to express the inter-facet region R(f(t), g). To this end it is useful to first project f and g onto a hyperplane orthogonal to the translation direction. For a horizontal translation this can be done by simply setting the first coordinate to 0 (which, in 3D, corresponds to a projection onto the yz-plane). We denote the (d − 1)-dimensional intersection of these two projections by h. Projecting h back onto the hyperplanes f̄ and ḡ yields regions f′ ⊂ f̄ and g′ ⊂ ḡ whose projections onto the orthogonal hyperplane are identical (namely h), and it is easy to see that R(f(t), g) = R(f′(t), g′) = R(f′(t), g) = R(f(t), g′) for any value of t. Clearly, the corresponding volumes are also all identical.
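The horizontal projection of Definition 13 has a closed form when the hyperplane is represented by a normal vector and an offset; the function below is our own small sketch of this (the normal-form representation is our illustrative choice), valid whenever the hyperplane is not parallel to e1.

```python
def project_along_x(p, n, c):
    """Horizontal projection (along e1) of point p onto the hyperplane
    {q : n . q = c}, cf. Definition 13. Assumes n[0] != 0, i.e., the
    hyperplane is not parallel to the translation direction."""
    # solve n . (p + t*e1) = c for the translation value t
    t = (c - sum(ni * pi for ni, pi in zip(n, p))) / n[0]
    return tuple(pi + (t if i == 0 else 0.0) for i, pi in enumerate(p))

# project (5, 1, 2) along the x-axis onto the plane x + y + z = 4
print(project_along_x((5.0, 1.0, 2.0), (1.0, 1.0, 1.0), 4.0))  # (1.0, 1.0, 2.0)
```

Applying this to every vertex of a facet projects the whole facet onto the other facet's hyperplane, since the projection is affine.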
In the case of three dimensions, when f′ is to the right of g′, h is a polygonal cross-section of the tube R(f, g) (perpendicular to e1), and f′ and g′ are the end-faces of this tube. See Figures 4a and 4b for an example. Finding the intersection of the projections of f and g is relatively straightforward. Since we require the facets to be convex, the projections are also convex and can therefore be represented using half-spaces. The intersection h can then easily be represented by the intersection of all of the half-spaces from the two sets, and the main problem is then to find the corresponding vertex representation. If the intersection h is empty or has dimension smaller than d − 1 then the volume V(f(t), g) is 0 for any value of t. So, assume we have a (d − 1)-dimensional intersection h, i.e., a non-degenerate intersection. Let p1, . . . , pn be the vertices of g′. For each point pi, there exists a translation value ti such that pi(−ti) ∈ f̄. Each point pi(−ti) is a vertex of f′, and each ti value represents the horizontal distance of pi from the hyperplane f̄. Assume that the points pi are sorted such that t1 ≤ · · · ≤ tn. We refer to these points as breakpoints and to the ti values as their corresponding distances.

Figure 4: An example of the inter-facet region R(f, g) between two faces, f and g, in three dimensions, where g is in front of f in the x-axis direction. In general the two facets are not necessarily in parallel planes, and they can also intersect. (a) The dashed lines indicate the boundaries of the projections of f on g and of g on f. The projections are denoted f′ and g′. (b) The region R(f, g), which is identical to R(f′, g′). (c) To simplify matters, the projections can be triangulated. (d) One of the resulting regions with triangular faces.
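In 3D the two projections are convex polygons in the yz-plane, so the vertex representation of h can be recovered with any convex polygon clipping routine. Below is a minimal Sutherland–Hodgman sketch; this particular routine is our own illustrative choice, not the thesis implementation, and it assumes counterclockwise vertex lists and non-degenerate input.

```python
def clip_convex(subject, clip):
    """Intersect two convex polygons (CCW vertex lists) by successively
    clipping the subject against each directed edge of the clip polygon."""
    def inside(p, a, b):  # p on or to the left of edge a->b (CCW interior)
        return (b[0]-a[0])*(p[1]-a[1]) - (b[1]-a[1])*(p[0]-a[0]) >= 0

    def intersect(p, q, a, b):  # line through p,q with line through a,b
        d1 = (p[0]-q[0], p[1]-q[1])
        d2 = (a[0]-b[0], a[1]-b[1])
        den = d1[0]*d2[1] - d1[1]*d2[0]
        c1 = p[0]*q[1] - p[1]*q[0]
        c2 = a[0]*b[1] - a[1]*b[0]
        return ((c1*d2[0] - d1[0]*c2)/den, (c1*d2[1] - d1[1]*c2)/den)

    out = subject
    for i in range(len(clip)):
        a, b = clip[i], clip[(i+1) % len(clip)]
        inp, out = out, []
        for j in range(len(inp)):
            p, q = inp[j], inp[(j+1) % len(inp)]
            if inside(q, a, b):
                if not inside(p, a, b):
                    out.append(intersect(p, q, a, b))
                out.append(q)
            elif inside(p, a, b):
                out.append(intersect(p, q, a, b))
    return out

# yz-projections of two facets; h is their intersection (a unit square here)
h = clip_convex([(0, 0), (2, 0), (2, 2), (0, 2)], [(1, 1), (3, 1), (3, 3), (1, 3)])
```

A half-space intersection routine would serve equally well; the clipping formulation simply works directly on the vertex representation, which is what the breakpoint computation needs.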
Figure 5: An illustration of the growing tetrahedral volumes needed when a face f′ passes through a face g′, or equivalently its plane ḡ, in order to calculate the volume of R(f′, g′). (a) The initial growing tetrahedron after the first breakpoint. (b) After the second breakpoint, a growing tetrahedral volume based on the red area needs to be subtracted. (c) After the third breakpoint, a second growing tetrahedron needs to be subtracted.

We now consider how to compute V(f(t), g). If t ≤ t1 then the region R(f(t), g) is clearly empty, since f′(t) is entirely to the left of g′. It follows in this case that V(f(t), g) = 0. A bit less trivially, if t ≥ tn then V(f(t), g) = V(f(tn), g) + (t − tn)·V^(d−1)(h), which is a linear function in t. In this case f′(t) is to the right of g′ in its entirety. Note that V^(d−1)(h) is the volume of h in (d − 1)-dimensional space. In R^3, V^2(h) is the area of the polygonal cross-section of the tube between f′ and g′. The volume V^(d−1)(h) can be computed by partitioning h into simplices. Hence the only remaining difficulty lies in determining V(f(t), g) for t ∈ (t1, tn]. For any value of t in this interval, f′(t) and g′ intersect in at least one point. Figure 5 illustrates what happens when a face in 3D is translated through another face, and it is a useful reference when reading the following. First note that the illustration emphasizes that we can also view this as the face f′ passing through the plane ḡ. An easy special case for computing V(f(t), g) occurs when t1 = tn. This corresponds to the two facets being parallel, and thus V(f(tn), g) = 0. Now assume instead that all the ti are distinct. The case where two or more ti are equal is discussed in Section 4.3.
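The cross-sectional measure V^(d−1)(h) can indeed be obtained by partitioning h into simplices; in 3D this is a fan triangulation of the convex polygon h. The helper below and its sample quadrilateral are our own minimal sketch of that computation.

```python
def area_by_simplex_partition(poly):
    """V^(d-1)(h) for d = 3: fan-triangulate the convex cross-section h
    (a CCW polygon) and sum the areas of the resulting simplices (triangles)."""
    total = 0.0
    p0 = poly[0]
    for a, b in zip(poly[1:], poly[2:]):
        # triangle (p0, a, b): half the absolute cross product of its edge vectors
        total += abs((a[0]-p0[0])*(b[1]-p0[1]) - (a[1]-p0[1])*(b[0]-p0[0])) / 2
    return total

h = [(0, 0), (2, 0), (3, 2), (1, 3)]  # a convex quadrilateral (sample data)
print(area_by_simplex_partition(h))   # 5.5
```

In higher dimensions the same idea applies with (d − 1)-simplices, whose volumes are given by the usual 1/(d − 1)! determinant formula.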
Each facet is required to be simple by definition, and thus each vertex pi of g′ has exactly d − 1 neighboring points, i.e., vertices of g′ connected to pi by edges. Denote these points pi^1, . . . , pi^(d−1), and denote the corresponding breakpoint distances ti^1, . . . , ti^(d−1). Consider the value of V(f(t), g) in the first interval (t1, t2]. To simplify matters, we change this to the equivalent problem of determining the function V0(f(t), g) = V(f(t + t1), g) for t ∈ (0, t2 − t1]. V0(f(t), g) can be described as the volume of a growing simplex in t with vertex set {t·vj | j = 0, . . . , d}, where v0 = (0, . . . , 0), vd = (1, 0, . . . , 0), and vj = (p1^j − p1)/(t1^j − t1) for 1 ≤ j ≤ d − 1. The volume of this simplex is

V0(f(t), g) = (1/d!) |det([t·v1, . . . , t·vd])|.

This is illustrated in Figure 6. Since vd = (1, 0, . . . , 0), we can simplify the above expression to

V0(f(t), g) = (1/d!) |det([v1, v2, . . . , vd]) t^d| = (1/d!) |det([v1′, v2′, . . . , v(d−1)′]) t^d|,

where vi′ is vi without the first coordinate. This results in very simple expressions in low dimensions:

2D: (1/2) |det(v1′) t^2| = (1/2) |v1^y t^2|
3D: (1/6) |det([v1′ v2′]) t^3| = (1/6) |(v1^y v2^z − v1^z v2^y) t^3|

In general, V0(f(t), g) is a polynomial in t of degree d. To calculate the original volume V(f(t), g) we offset the function by setting V(f(t), g) = V0(f(t − t1), g) for t ∈ (t1, t2]. The above accounts for the interval between the first two breakpoints. To handle the remaining breakpoints we utilize a result of Lawrence [21] concerning the calculation of the volume of a convex polytope.

Figure 6: In 3D, it is necessary to calculate the volume function for a growing tetrahedron based on two points, v1 and v2, and an overlap distance t. The coordinates of the points change linearly with respect to a change of the distance t. In d dimensions, d − 1 neighboring points are used.
Lawrence shows that the volume can be calculated as the sum of volumes of a set of simplices, where each simplex is based on the neighboring edges of a vertex of the polytope. Given t ∈ (t1, tn], we are interested in the polytope R(f(t), g). It can be constructed by taking the part of f′ which is to the right of the hyperplane ḡ, projecting this part back onto ḡ, and connecting the corresponding vertices horizontally. See Figure 5 for some examples. This is a convex polytope, and its volume can be calculated as follows. Let ∆(p1, t) denote the initial growing simplex described above, that is, V(∆(p1, t)) = V0(f(t − t1), g). Similarly, let ∆(pi, t), i ∈ {2, . . . , n − 1}, denote simplices based on the other points pi and their neighboring points (we do not need to include pn). For each such simplex, the vertices are given by t·vj where vj = (pi^j − pi)/(ti^j − ti). Note that whenever ti^j < ti the direction of the vector from pi to pi^j is reversed. By an argument of Lawrence [21], we then have

V(f(t), g) = V(∆(p1, t)) − Σ_{i=2}^{k} V(∆(pi, t)), where k = max {i | ti < t}.  (3)

In other words, the volume can be calculated by taking the growing volume of the first simplex and then subtracting a simplex for each of the remaining breakpoints with a breakpoint distance less than t. The volume of each simplex can be described as a degree-d polynomial in t, similar to the description of V0 in the previous paragraph. In Figure 5a, there is only the first growing simplex (tetrahedron). After that, in Figure 5b, another growing simplex needs to be subtracted from the first, and finally, in Figure 5c, a third growing simplex needs to be subtracted. Between each pair of breakpoints, the total volume between g′ and f′(t) in the horizontal direction can therefore be described as a sum of polynomials in t of degree d, which is itself a polynomial of degree d.
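Equation (3) can be sketched as a running sum of activated simplex volumes, each of the form c_i·(t − t_i)^d. The helper below and its sample instance are our own: the instance models a vertical edge in 2D sweeping past a diagonal edge of unit height, for which the exact overlap areas are ½(t − 2)^2 on (2, 3] and ½(t − 2)^2 − ½(t − 3)^2 afterwards.

```python
def piecewise_volume(breakpoints, d, t):
    """Evaluate equation (3): breakpoint i contributes a growing-simplex
    volume c_i * (t - t_i)^d once t has passed its distance t_i; the first
    contribution is added, the remaining ones are subtracted.
    `breakpoints` is a list of (t_i, c_i) pairs sorted by t_i."""
    total = 0.0
    for i, (ti, ci) in enumerate(breakpoints):
        if t <= ti:
            break
        total += (ci if i == 0 else -ci) * (t - ti)**d
    return total

# 2D sample instance (ours): breakpoints at t = 2 and t = 3, both with c = 1/2
bp = [(2.0, 0.5), (3.0, 0.5)]
print(piecewise_volume(bp, 2, 2.5))  # 0.125
print(piecewise_volume(bp, 2, 4.0))  # 1.5
```

Note that for t beyond the last breakpoint this sum of degree-d terms collapses, in this instance, to the linear regime described earlier, since the quadratic terms cancel.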
4.2.2 The Main Algorithm

An algorithm for determining a minimum overlap translation in d dimensions can now be established. Pseudo-code is given in Algorithm 3. Given polytopes P and Q and a polytope container C, we begin by determining all breakpoints between facets from P and facets from the polytopes Q and C, along with the coefficients of their d-dimensional volume polynomials. The signs of all volume polynomials calculated with regard to the container C are negated. This corresponds to viewing the container as an infinite polytope with an internal cavity.

Algorithm 3: Determine minimum overlap translation along the x-axis in d dimensions
Input: Polytopes P and Q and a container C
foreach facet f from P do
    foreach facet g from Q ∪ C do
        Create all breakpoints for the facet pair (f, g);
        each breakpoint (f, g) has a distance t_{f,g} and a volume polynomial V_{f,g}(t)
        (negate the sign of V_{f,g}(t) if g ∈ C)
Let B be all breakpoints sorted in ascending order with respect to t
Let ν(t) and νC(t) be polynomials of maximum degree d with initial value ν(P)
for i = 1 to |B| − 1 do
    Let t_i be the distance value of breakpoint i
    Let f and g be the facets of breakpoint i
    Let V_i(t) be the volume function of breakpoint i
    Modify ν(t) by adding the coefficients of V_i(t)
    if g ∈ C then modify νC(t) by adding the coefficients of V_i(t)
    if νC(t) = 0 for t ∈ [t_i, t_{i+1}] then
        find the value t_i′ for which ν(t) is minimized in [t_i, t_{i+1}]
return the t_i′ with smallest ν(t_i′)

The breakpoints are sorted such that t1 ≤ t2 ≤ . . . ≤ tn. The algorithm traverses the breakpoints in this order while maintaining a total volume function ν(t) which describes the volume of the total overlap between P and Q ∪ C. The volume function ν(t) is a linear combination of the volumes of simplices for each breakpoint encountered so far (based on Equation 3) and may be represented as a degree-d polynomial in t.
Initially ν(t) = ν(P) for t ≤ t1, corresponding to P being completely outside the container (and thus overlapping C). As each breakpoint ti is reached, the algorithm adds an appropriate volume polynomial to ν(t). Since ν(t) is a sum of polynomials of degree at most d, ν(t) can itself be represented by the d + 1 coefficients of a polynomial of degree at most d. When a breakpoint is encountered, its volume polynomial is added to ν(t) by simply adding its coefficients to the coefficients of ν(t). The subset of volume polynomials which come from breakpoints related to the container are likewise added to a separate volume polynomial νC(t). Whenever this function is 0 for a given t-value, it means that P(t) is inside the container. For each interval (ti, ti+1] between succeeding breakpoints for which νC(t) = 0, ν(t) can be analyzed to determine local minima. Among all these local minima, we select the smallest distance value tmin for which ν(tmin) is a global minimum value. This minimum corresponds to the leftmost x-translation where the overlap between P and Q ∪ C is as small as possible; therefore tmin is a solution to 1DdDTP. Analyzing each interval amounts to determining the minimum value of a polynomial of degree d. This is done by finding the roots of the derivative of the polynomial and checking the interval end-points. While finding the exact minimum is easy for d ≤ 5, it is problematic for higher dimensions due to the Abel-Ruffini theorem, since one must find the roots of the derivative, which is itself a polynomial of degree 5 or higher. However, it is possible to find the minimum to any desired degree of accuracy, e.g., using the Newton-Raphson method, and approximations are needed in any case when using floating-point arithmetic. The following lemma is a simple but important observation. Lemma 3.
Given two polytopes P and Q, if the 1-dimensional translation problem in d dimensions has a solution (a translation distance t′) for which P does not overlap Q, then there also exists a breakpoint distance tb for which P does not overlap Q.

Proof. Assume that t′ is not a breakpoint distance (otherwise we are done). First assume t′ lies in an interval (tb1, tb2) between breakpoint distances, and assume that it is the smallest such interval. Since the overlap value ν(t′) is 0 and t′ is not a breakpoint distance then, in particular, no facets from P and Q can intersect for t = t′. Since there are no other breakpoint distances in this interval, and facets can only begin to intersect at a breakpoint distance, no facets can intersect in the entire interval (tb1, tb2). From the discussion in Section 4.2.1, ν(t) must then be a sum of constant and linear functions for t ∈ (tb1, tb2). ν(t) cannot be strictly linear, since t′ is not an interval end-point and this would imply a negative overlap at one of the interval end-points, which is impossible. Therefore ν(t) = 0 for all t ∈ (tb1, tb2). By continuity, ν(tb1) = 0 and ν(tb2) = 0, and we may choose either of these breakpoints as tb. Now assume instead that t′ lies in the half-open infinite interval before the first breakpoint or after the last breakpoint. Again, since ν(t) is linear on that entire interval and ν(t) cannot be negative, one can select the breakpoint bounding the infinite interval as tb.

Thus, if a non-overlapping position of P exists for a given translation direction, then P is non-overlapping at one of the breakpoint distances. Our solution method for dDDPP, as described in Section 5, repeatedly solves 1DdDTP problems using Algorithm 3. Since our aim is to find a non-overlapping position for each polytope, we may actually limit our analysis in each translation to testing the interval end-points.
This way one avoids the computationally expensive task of finding roots, even though one does not necessarily find the true minimum for 1DdDTP (though possibly at the expense of increasing the number of translations required to find a solution). To analyze the asymptotic running time of Algorithm 3, we assume that either one does not find minima between breakpoints or one considers this a constant-time operation (which is true for 5 dimensions or less). For a fixed dimension d, the time needed for finding the intersection of (d − 1)-simplices can also be considered a constant, and the same is true for the calculation of the determinants used in the volume functions. Given two polytopes with complexity O(n) and O(m), the number of breakpoints generated is at most a constant times nm. Computing the volume function for each pair of breakpoints takes constant time, and thus the running time is dominated by the sorting of the breakpoints, yielding a running time of O(nm log(nm)). In most cases, the number of breakpoints is likely to be much smaller than O(nm), since the worst-case scenario requires very unusual non-convex polytopes: the vertical extents must overlap for all pairs of edges. If the complexity of the facets is bounded (e.g., triangles for d = 3), then the number of breakpoints generated for two polytopes with n and m facets, respectively, is at most O(nm), and the asymptotic running time is O(nm log(nm)). In some settings it may be desirable to allow overlap with the region outside the container. This is easily accommodated by ignoring the condition that νC(t) = 0 in Algorithm 3.

4.3 Special cases in three dimensions

In the previous subsection it was assumed that either all breakpoints had unique distance values or that all breakpoints had the same distance value. Here we describe how to handle a subset of breakpoints with the same distance value for the case d = 3.
In particular, we focus on the special case where the first two breakpoint distances are equal (and different from the subsequent ones). Given two convex faces f and g, we know that h, the intersection of their 2D projections onto the yz-plane, is also a convex polygon. As before, let f′ and g′ denote the projections of h onto the given faces. When f and g are not parallel, then for any given translation of f′, the intersection between f′ and g′ contains at most two corner points. Furthermore, Equation 3 still applies if these two corner points are not neighbors; and they can only be neighbors if they correspond to the two first breakpoints or the two last breakpoints. The latter case is not a problem, since at that point the volume function can be changed to a linear expression based on the area of h. The important special case is when the two first breakpoint distances are equal. This problem varies depending on the shape of h, but in each case the solution is almost the same. Figure 7a illustrates the standard case with only a single initial breakpoint, while Figures 7b–d illustrate three possible variants with two initial breakpoints. They differ with respect to the direction of the neighboring edges, but in all three cases it is possible to introduce a third point p0 which emulates the standard case in Figure 7a. Essentially, the calculations need to be based on a tetrahedron of fixed size, a linearly increasing volume, and the usual cubic volume function of a growing tetrahedron. A more detailed illustration and description of the situation in Figure 7b is given in Figure 8, where v′2 corresponds to p1, v′3 to p2, and v0 to p0. Note that polyhedra quite often have triangulated surfaces. Given triangular faces, the special case in Figure 7d cannot occur. The special case in Figure 7b can also be avoided if the intersection polygon is triangulated and each triangle is handled separately (see Figure 4d).
It is still an open question how to handle identical breakpoint distances in higher dimensions. Perturbation techniques could be applied, but it would be more gratifying if the above approach could be generalized to higher dimensions.

Figure 7: Various special cases can occur when the two first breakpoint distances are equal. Here it is illustrated in 2D using a dashed line to indicate the intersection line of the two faces during the translation. (a) The standard case in which the first breakpoint distance is unique. (b) The two first breakpoint distances are equal (points p1 and p2), which also means that the dashed line is parallel to the line between p1 and p2. The sum of the angles at p1 and p2 is greater than 180°. This can be handled by introducing a third point p0 at the intersection of the lines through the edges from p1 and p2. This point has a smaller breakpoint distance and it almost reduces the problem to the first case. More details can be seen in Figure 8. (c) The sum of the angles at p1 and p2 is less than 180°, but we can still find a natural candidate for the additional point p0 based on the edges from p1 and p2. (d) The sum of angles is exactly 180°. In this case p0 is simply chosen at an appropriate distance from p2, e.g., such that the angle at p0 is 45°. This case does not occur if the input polyhedra have triangulated faces.

Figure 8: To calculate the volume between the faces f′ and g′, one can base the calculations on the extended faces illustrated with dashed lines. The volume is a growing tetrahedron (v0, v1, v2, v3) minus a constant volume (the tetrahedron (v0, v′1, v′2, v′3)) and minus a linearly growing volume based on the triangle (v′1, v′2, v′3) and the translation distance t.
5 Solution method for dDSPP

In this section we describe our solution method for the d-dimensional strip packing problem (dDSPP), i.e., the problem of packing a set S of n given polytopes inside a d-dimensional rectangular parallelepiped C of minimal height. In short, we do this by solving the decision problem dDDPP for successively smaller heights using a local search and a meta-heuristic technique. This stops when a fixed time limit is reached, and the height of the last solved dDDPP is reported as a solution to the dDSPP. Each instance of dDDPP is in turn solved by repositioning polytopes to minimum-overlap positions using Algorithm 3. In the following, we first review the local search and the meta-heuristic technique used and described by Egeblad et al. [A]. We then describe how one can obtain an initial height (Section 5.2) for which a non-overlapping placement is known to exist.

5.1 Local search and guided local search

To solve dDDPP (for a container with given height) we apply a local search method in conjunction with guided local search (GLS), a meta-heuristic technique introduced by Voudouris and Tsang [29]. Given a set of polytopes S = {Q1, . . . , Qn}, the local search starts with a placement of the polytopes which may contain overlap. Overlap is then iteratively reduced by translating one polytope at a time in axis-aligned directions to a minimum-overlap position. When overlap can no longer be reduced by translating a single polytope, the local search stops. If overlap is still present in the placement, GLS is used to escape this local minimum. Otherwise a non-overlapping placement has been found for dDDPP; the strip height is then decreased and all polytopes are moved inside the smaller container. Minimum-overlap translations are found using the axis-aligned translation algorithm described in the previous section.
For each polytope, each of the d possible axis-aligned translation directions is tried, and the direction which reveals the position with least overlap is chosen. Note that if P = Qi is the polytope undergoing translation, then the polytope Q (in the translation algorithm of the previous section) is actually the union of all other polytopes, i.e., Q = ∪_{j=1, j≠i}^n Qj. The container C in the previous section is assumed to be a simple rectangular parallelepiped with some initial height h. (The solution method can, however, be adapted to other containers.) Let V_{i∩j}(P) be the volume of the pairwise overlap of Qi and Qj in a placement P. The local search minimizes the objective function

f(P) = ∑_{1≤i<j≤n} V_{i∩j}(P).    (4)

In other words, f(P) is the total sum of volumes of pairwise overlaps in placement P. If f(P) = 0 then P is a solution to the current instance of dDDPP. The local search uses the axis-aligned translation algorithm from the previous section to iteratively decrease the amount of overlap in the current placement. A local minimum is reached if no polytope can be translated in an axis-aligned direction to a position with less overlap. To escape local minima, the objective function is augmented using the principles of GLS to give

g(P) = f(P) + λ ∑_{1≤i<j≤n} φ_{i,j} I_{i,j}(P),    (5)

where λ is a penalty constant used to fine-tune the heuristic, φ_{i,j} is a penalty term associated with Qi and Qj (described in more detail below), and I_{i,j}(P) ∈ {0, 1} is 1 if and only if the interiors of polytopes Qi and Qj overlap in placement P. Since the augmented terms are larger than zero if and only if P contains overlap, the augmented objective function g(P) retains the property that the value 0 is reached if and only if there is no overlap in the placement P. Therefore, a placement P is a solution to the dDDPP if and only if g(P) = 0. Initially, φ_{i,j} = 0 for 1 ≤ i < j ≤ n.
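The objective functions (4) and (5), together with the penalty update used at a local minimum (where the pair with highest μ_{i,j}(P) = V_{i∩j}(P)/(1 + φ_{i,j}) is penalized, as described below), can be sketched as follows. Here `overlap_volume` is a hypothetical stand-in for the exact pairwise overlap volumes computed by the translation algorithm, and axis-aligned boxes replace general polytopes; none of the names are taken from the thesis implementation:

```python
from itertools import combinations

def f(placement, overlap_volume):
    """Equation (4): total volume of pairwise overlaps."""
    n = len(placement)
    return sum(overlap_volume(placement, i, j)
               for i, j in combinations(range(n), 2))

def g(placement, overlap_volume, phi, lam):
    """Equation (5): objective augmented with penalties phi[i, j] for
    every pair whose interiors overlap (I_{i,j} = 1)."""
    value = 0.0
    for i, j in combinations(range(len(placement)), 2):
        v = overlap_volume(placement, i, j)
        value += v + (lam * phi[i, j] if v > 0 else 0.0)
    return value

def penalize(placement, overlap_volume, phi):
    """At a local minimum, increase the penalty of the pair with the
    highest mu_{i,j} = V_{i∩j} / (1 + phi_{i,j})."""
    pairs = combinations(range(len(placement)), 2)
    i, j = max(pairs, key=lambda p: overlap_volume(placement, *p) / (1 + phi[p]))
    phi[i, j] += 1

def box_overlap(placement, i, j):
    """Overlap volume of two axis-aligned boxes given as interval triples."""
    vol = 1.0
    for (lo1, hi1), (lo2, hi2) in zip(placement[i], placement[j]):
        vol *= max(0.0, min(hi1, hi2) - max(lo1, lo2))
    return vol

# Two boxes overlapping in a 1 x 2 x 2 slab (overlap volume 4).
boxes = [((0, 2), (0, 2), (0, 2)), ((1, 3), (0, 2), (0, 2))]
phi = {(0, 1): 0}
```

Penalizing the pair raises g by λ·φ_{0,1} while f is unchanged, which is what allows the local search to accept moves that temporarily increase the true overlap.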
Whenever the search reaches a local minimum with g(P) > 0, the value of φ_{i,j} is increased for the pair Qi and Qj with highest µ_{i,j}(P), where µ_{i,j}(P) = V_{i∩j}(P) / (1 + φ_{i,j}). We refer to this as penalizing the pair Qi and Qj. Intuitively, this change of the objective function g(P) (if large enough) allows the local search to move Qi and Qj away from each other, even if it results in greater overlap. Ideally, this move causes the local search to reach a different part of the solution space. To avoid a large discrepancy between the real and the penalized solution space, the penalties are reset from time to time. To avoid searching the entire neighborhood in each iteration of GLS, we also apply fast local search [29].

The translation algorithm in the previous section needs to be able to handle the penalties introduced here. This subject was not fully covered by Egeblad et al. [A], although it is quite straightforward. First, an augmented volume polynomial ν′(t) is defined by adding the penalties between the polytope P = Qi to be translated and all other polytopes. This is done by maintaining an array of volume functions νQj(t), Qj ∈ S \ {Qi}. Whenever the overlap between P and a given polytope Qj changes from 0 or to 0, the penalty for the pair P, Qj is, respectively, added to or subtracted from the augmented volume function ν′(t). Note that this does not increase the asymptotic running time, since the volume polynomial of a breakpoint arising from a face of Qj is only added to ν′(t) and νQj(t), and only νQj(t) needs to be checked for a change to or from 0. With regard to the usefulness of the local search neighborhood in relation to the number of dimensions d, we note that a 1-dimensional translation becomes a less efficient move as d increases, since up to d axis-aligned translations may be required to move a polytope from one arbitrary point to another.
However, it should also be noted that in general fewer polytopes are involved in each translation. If the polytopes are placed compactly in a grid-like fashion with little overlap, then on the order of |S|^(1/d) polytopes are likely to be involved in each of the coordinate-axis directions.

5.2 Initial solution

The solution method described above can start with a parallelepiped of any height, since the initial placement is allowed to contain overlap. However, it makes more sense to set the initial height to one for which a solution is known to exist. In any dimension, a naive initial height can be based on the sum of the heights of all polytopes, but in the following we describe a more ambitious strategy. In short, we use a greedy bounding-box-based algorithm in which the polytopes are placed one by one inside the container in order of decreasing bounding-box volume. This algorithm is based on residual volumes and is related to the approach used by Eley [14] for the container loading problem in three dimensions. Although the algorithm could be generalized to higher dimensions, we only describe its three-dimensional variant.

The algorithm maintains a set of empty box-spaces. Each box-space s consists of the volume [x_s^min, x_s^max] × [y_s^min, y_s^max] × [z_s^min, z_s^max]. Initially, the entire container is the only empty space. Whenever a new shape i with bounding box Bi = [x_i^min, x_i^max] × [y_i^min, y_i^max] × [z_i^min, z_i^max] is to be placed inside the container, the list of empty spaces is searched. Let s′ be the empty space with lexicographically least z_{s′}^min, y_{s′}^min, and x_{s′}^min (lower-left-back corner) which is large enough to contain Bi. Shape i is now positioned in s′ with offset (o_x, o_y, o_z)ᵀ such that Bi's lower-left-back corner coincides with the lower-left-back corner of s′, i.e., (o_x, o_y, o_z)ᵀ + (x_i^min, y_i^min, z_i^min)ᵀ = (x_{s′}^min, y_{s′}^min, z_{s′}^min)ᵀ.
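The greedy construction, including the splitting of overlapping residual spaces into six box-spaces described next, can be sketched as follows. Shapes are assumed to be given by their bounding-box dimensions, already sorted by decreasing volume, and the container is assumed tall enough that every box fits (the height defaults to infinity, matching the strip-packing setting); all names are illustrative, not from the thesis code:

```python
def place_boxes(boxes, W, H, D=float('inf')):
    """Greedily place bounding boxes (w, h, d) in a W x H x D container,
    maintaining a list of empty residual box-spaces."""
    spaces = [((0.0, 0.0, 0.0), (float(W), float(H), float(D)))]
    placed = []
    for (w, h, d) in boxes:
        # empty spaces large enough to contain the box
        fits = [s for s in spaces
                if s[1][0] - s[0][0] >= w and s[1][1] - s[0][1] >= h
                and s[1][2] - s[0][2] >= d]
        # lexicographically least (z, y, x) lower-left-back corner
        lo, hi = min(fits, key=lambda s: (s[0][2], s[0][1], s[0][0]))
        b_lo, b_hi = lo, (lo[0] + w, lo[1] + h, lo[2] + d)
        placed.append((b_lo, b_hi))
        new_spaces = []
        for (slo, shi) in spaces:
            if all(b_lo[k] < shi[k] and slo[k] < b_hi[k] for k in range(3)):
                # split the overlapping space into six residual box-spaces
                for k in range(3):
                    before = list(shi); before[k] = b_lo[k]
                    after = list(slo); after[k] = b_hi[k]
                    for cand in ((slo, tuple(before)), (tuple(after), shi)):
                        if all(cand[0][m] < cand[1][m] for m in range(3)):
                            new_spaces.append(cand)
            else:
                new_spaces.append((slo, shi))
        # discard spaces contained within (or equal to) other spaces
        uniq = set(new_spaces)
        spaces = [s for s in uniq
                  if not any(t != s and all(t[0][k] <= s[0][k] and s[1][k] <= t[1][k]
                                            for k in range(3))
                             for t in uniq)]
    return placed

# Eight unit cubes in a 2 x 2 strip pack into two layers of four.
placed = place_boxes([(1, 1, 1)] * 8, 2, 2)
```

The resulting maximum z-value (here 2) is what would serve as the basis for the initial strip height.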
Next, all residual spaces that overlap with the translated bounding box of the positioned shape i, B′_i = [o_x + x_i^min, o_x + x_i^max] × [o_y + y_i^min, o_y + y_i^max] × [o_z + z_i^min, o_z + z_i^max], are split into six new box-spaces and removed from the list of empty box-spaces. For each overlapping space s, we generate the following six new box-spaces:

[x_s^min, o_x + x_i^min] × [y_s^min, y_s^max] × [z_s^min, z_s^max],
[x_s^min, x_s^max] × [y_s^min, o_y + y_i^min] × [z_s^min, z_s^max],
[x_s^min, x_s^max] × [y_s^min, y_s^max] × [z_s^min, o_z + z_i^min],
[o_x + x_i^max, x_s^max] × [y_s^min, y_s^max] × [z_s^min, z_s^max],
[x_s^min, x_s^max] × [o_y + y_i^max, y_s^max] × [z_s^min, z_s^max],
[x_s^min, x_s^max] × [y_s^min, y_s^max] × [o_z + z_i^max, z_s^max].

These represent the volumes left, below, behind, right, above, and in front of B′_i, respectively. If any of the intervals is empty then the new space is empty. Each of the new non-empty spaces is added to the list of empty spaces and may be used for the remaining bounding boxes. To reduce the number of empty spaces generated throughout this process, spaces which are contained within or equal to other empty spaces are discarded whenever a new bounding box is placed. The resulting placement is non-overlapping, and the maximum z-value of any placed bounding box B′_i may be used as a basis for the initial strip height. To diversify solutions to 3DSPP, we place the shapes randomly within a container of this strip height. This is the only random element of the solution method.

6 Computational experiments

The solution method described in this paper was implemented for the three-dimensional problem using the C++ programming language and the GNU C++ 4.0 compiler. We denote this implementation 3DNest. Although similar in functionality, this implementation is not identical to the one used by Egeblad et al. [A]. In particular, the new implementation can handle convex faces without triangulating them, and it can handle the strip packing problem.
Another noteworthy feature of the new implementation is that it is possible to do almost all calculations with rational numbers — the only exception is the computation of minimum values between breakpoints in the translation algorithm, since this requires solving quadratic equations. This is primarily convenient for debugging purposes and is currently not very useful in practice, since it is much slower than using standard floating-point precision. Due to the limited precision of floating-point calculations, the correctness of all solutions found is verified using CGAL [17], i.e., it is verified that no polyhedron is involved in any significant overlap with other polyhedra or the container. In the experiments presented in this section, the largest total volume of overlap allowed in a solution corresponds to 0.01% of the total volume of all polyhedra for the given problem. All experiments were performed on a system with a 2.16 GHz Intel Core Duo processor with 2 MB of level 2 cache and 1 GB of RAM. Note that the implementation only uses one core of the processor.

6.1 Problem instances

The literature on three-dimensional packing contains only a few problem instances useful for a comparison of results. We have found two appropriate data sets for our experiments. The first was introduced by Ikonen et al. [20] and the second by Stoyan et al. [27]. The sets contain 8 and 7 polyhedra, respectively. Characteristics of these data sets are presented in Table 1. The Stoyan polyhedra are all convex and relatively simple with a maximum of 18 faces, while some of the Ikonen polyhedra are non-convex and have up to 52 faces.
Real-world instances, e.g., from the rapid prototyping industry, could easily contain more than 100,000 faces, but in most cases it would also be possible to simplify these polyhedra considerably without making substantial changes to the basic shape; e.g., Cohen et al. [6] give an example of a model of a phone handset which is reduced from 165,936 to 412 triangles without changing its basic shape.

Name     Faces  Volume  Bounding box           Type
Block1    12      4.00  1.00 × 2.00 × 2.00     Convex
Part2     24      2.88  1.43 × 1.70 × 2.50     Non-convex
Part3     28      0.30  1.42 × 0.62 × 1.00     Non-convex
Part4     52      2.22  1.63 × 2.00 × 2.00     Non-convex
Part5     20      0.16  2.81 × 0.56 × 0.20     Non-convex
Part6     20      0.24  0.45 × 0.51 × 2.50     Non-convex
Stick2    12      0.18  2.00 × 0.30 × 0.30     Convex
Thin      48      1.25  1.00 × 3.00 × 3.50     Non-convex
Convex1   14    176.00  5.00 × 6.00 × 8.00     Convex
Convex2    4     74.67  11.00 × 4.00 × 14.00   Convex
Convex3   10    120.00  3.00 × 4.00 × 12.00    Convex
Convex4   16    124.67  3.00 × 4.00 × 16.00    Convex
Convex5   18    133.33  4.00 × 8.00 × 10.00    Convex
Convex6    8    147.00  6.00 × 7.00 × 7.00     Convex
Convex7   16    192.50  6.00 × 10.00 × 9.00    Convex

Number of polyhedra per instance: Ikonen1 contains 3, 2, 1, 2, 2 copies of five of the Ikonen polyhedra (10 in total) and Ikonen2 contains 8, 2, 1, 2, 2 copies (15 in total); Stoyan1 contains one copy of each of Convex1–Convex7 (7 in total), Stoyan2 contains 1, 1, 1, 1, 3, 2, 3 copies of Convex1–Convex7 (12 in total), and Stoyan3 contains 2, 4, 6, 4, 4, 3, 2 copies (25 in total).

Table 1: Characteristics of the three-dimensional polyhedra from the literature used in the experiments. The rightmost five columns of the original table describe the sets of polyhedra used in the problems presented in the originating papers; a number in one of these columns is the number of copies of the polyhedron in the corresponding problem instance, e.g., 6 copies of the polyhedron named Convex3 are present in the problem instance Stoyan3.

6.2 Puzzles

To further test the capabilities of our solution method, we devised and implemented a generator for random problem instances. The generator creates a problem instance by splitting a three-dimensional cube into smaller pieces.
The pieces, along with container dimensions matching the width and height of the cube, constitute a problem instance for which the optimal utilization is known to be 100%.

A set of half-spaces H can be used to define a convex polyhedron as the set of points contained in all of the half-spaces. This polyhedron can be found by generating the set I of all intersection points of distinct planes p, q, r with p, q, r ∈ H, and then generating the convex hull C of the subset of points from I which are contained in all of the half-spaces. An elegant way to find the points of the convex hull is to use the concept of dualization (see de Berg et al. [9]). Let C* be the convex hull of the dual points of H; then the planes of the facets of C* are the duals of the corner points of C. This allows one to find C in time O(n log n) (de Berg et al. [9]) using just a convex hull algorithm.

Given a positive integer n, the construction of an n-piece puzzle commences as follows. Initially, a set of 6 half-spaces, H0, is generated such that they correspond to a cube. Now let P1 = {H0}; we then iteratively construct a sequence of sets of half-space sets Pi. For each i, we select a smallest-cardinality half-space set H ∈ Pi and generate a random plane which is used to split H into two new sets H′ and H″. We then let Pi+1 = (Pi \ {H}) ∪ {H′, H″}, i.e., the set of half-space sets containing H′ and H″ as well as all sets from Pi except H. Since the cardinality |Pi| = i, it follows that |Pn| = n. If the random plane used to split each half-space set is selected appropriately, we may generate n non-empty convex polyhedra from the half-space sets in Pn.

Figure 9: Examples of three different puzzles and the cutting planes used to generate them. (a) Three convex polyhedra. (b) The corresponding cube and cutting planes. (c) A puzzle with 5 pieces. (d) A puzzle with 10 pieces.
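The iterative splitting can be sketched as pure bookkeeping; here a half-space is just a labeled plane together with a sign, and the geometric part (choosing cutting planes so that every piece stays non-empty) is omitted. All names are hypothetical:

```python
import random

def make_puzzle(n, seed=0):
    """Iteratively split a smallest-cardinality half-space set until
    n sets remain, mirroring P_{i+1} = (P_i \\ {H}) ∪ {H', H''}."""
    rng = random.Random(seed)
    # The initial cube H0: one half-space per axis and sign.
    cube = frozenset((axis, sign) for axis in 'xyz' for sign in (+1, -1))
    pieces = [cube]
    while len(pieces) < n:
        pieces.sort(key=len)
        H = pieces.pop(0)                             # smallest-cardinality set
        plane = ('plane', len(pieces), rng.random())  # a fresh random cutting plane
        # Replace H by the two halves H' and H'' induced by the plane.
        pieces += [H | {(plane, +1)}, H | {(plane, -1)}]
    return pieces

pieces = make_puzzle(10)
```

Each split removes one set and adds two, so |P_i| = i as in the text; every piece of make_puzzle(10) is bounded by at least seven half-spaces (the six cube facets plus at least one cut).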
In Figure 9, three examples of various sizes are visualized, including the cutting planes used to generate them. It is important to emphasize that a solution method specifically designed with this type of instance in mind may be able to find better solutions more efficiently than our general method. However, since the optimal utilization for these instances is 100%, we may use them to evaluate the quality of the solutions produced by the heuristic. It is interesting to see if we can actually solve some of them, even though the solution method is obviously not ideal for puzzle-solving.

6.3 Benchmarks

The two problems given by Ikonen et al. [20] (see Table 1) are decision problems with a cylindrical container, and they have already been shown to be easily solved by Egeblad et al. [A]. The only previous, and thus also best, results for the Stoyan instances are reported by Stoyan et al. [27]; their results are repeated in the first two columns of Table 2.

           Stoyan             Bounding box       3DNest
Problem    Height  Util. (%)  Height  Util. (%)  Height  Util. (%)    Improv.
Stoyan1    27.0    29.88      46      17.54      19.31   42.05 (3.4)  12.17
Stoyan2    30.92   27.21      34      24.75      19.83   42.45 (1.0)  15.24
Stoyan3    45.86   29.33      45      29.90      29.82   45.12 (0.8)  15.79

Table 2: The results from Stoyan et al. [27] are compared to the average results of 10 runs of 10 minutes with 3DNest. Results for the initial solution found by 3DNest are also reported. The second-to-last column includes the standard deviation over the 10 runs (in parentheses). The last column emphasizes the difference between 3DNest and the approach by Stoyan et al.

In Table 2, we also report the results of the initial solution found using the algorithm described in Section 5.2 and the average results found by running 3DNest with 10 different random seeds and 10 minutes for each seed. Note that the initial solution found is actually slightly better than the solution found by Stoyan et al.
[27] for the largest problem instance. The last column of Table 2 emphasizes the percentage of material saved (on average) when 3DNest is compared to the results from Stoyan et al. The utilization of 3DNest is on average about 14 percentage points higher than that of Stoyan et al., and the average improvement over the utilization of Stoyan et al. is 50.2%. This demonstrates that 3DNest performs very well in comparison to existing methods.

In order to make additional experiments, a new problem instance called Merged1 was created by combining the polyhedra from Stoyan and Ikonen. It contains one copy of each of the polyhedra in Table 1, and the Ikonen polyhedra have been scaled by a factor of 4 to better match the size of the Stoyan polyhedra. Larger versions of this problem instance, called Mergedi, are created by making i copies of each polyhedron. The dimensions of the container for these instances are chosen such that a solution with 50% utilization is a cube. Furthermore, we have used the puzzle generator described above to generate 40 puzzles with 5, 10, 20, and 40 pieces. Rather than testing each puzzle with 10 different random seeds, 10 different puzzles are tested for each of the four cardinalities. The purpose of this is to illustrate the heuristic's capabilities independently of the input data. Each shape of these puzzles has an average of about 11–13 facets, which is very similar to the Stoyan instances.

Results are presented in Table 3. The utilization of the initial solution (0 seconds) and the utilization after 10, 60, 300, and 600 seconds are reported. All values are averages over 10 runs with different seeds for 3DNest, except in the case of the puzzles, where the seed is used to vary the problem instance. The best solution after 10 minutes, the standard deviation, and the average number of translations done per second are reported for each instance.
                   Utilization (%) after seconds         Max.    Std.  Trans.
Problem    Size    0      10     60     300    600       util.   dev.  per sec.
Stoyan1      7     17.54  39.76  41.60  42.05  42.05     46.38   3.4   1468
Stoyan2     12     24.75  38.25  39.90  41.79  42.45     44.27   1.0    887
Stoyan3     25     29.90  39.19  42.49  44.58  45.12     46.67   0.8    756
Merged1     15     23.44  37.29  39.68  42.38  42.97     44.12   1.0    462
Merged2     30     23.62  30.23  39.77  42.80  42.92     42.99   0.1    295
Merged3     45     24.58  27.02  35.49  42.23  43.32     44.99   1.0    265
Merged4     60     24.80  26.09  31.61  40.06  41.99     42.81   0.5    233
Merged5     75     26.17  26.66  29.63  37.99  40.96     42.56   0.7    199
Puzzle5      5     28.85  98.30  98.89  98.89  99.22    100.00   2.2      –
Puzzle10    10     20.90  72.68  84.96  93.74  94.30    100.00  14.4    353
Puzzle20    20     15.77  42.27  50.05  72.20  82.54     95.16  12.2    205
Puzzle40    40     13.62  26.40  34.56  45.68  49.59     70.85   7.8    145

Table 3: Average results obtained by running 3DNest 10 times for at most 10 minutes in each run. Results include the utilization obtained with the initial solution, within 10 seconds, and within 1, 5, and 10 minutes. The maximum utilization and standard deviation are also included for the results after 10 minutes. Finally, the average number of translations per second is presented, except in the case of Puzzle5, for which an optimal solution was often found within a second. Note that the results for the Puzzle problems are for 10 different instances rather than 10 different random seeds.

Figure 10: The best solution found for Stoyan3 without using rotation (from two different angles). The utilization is 46.67%.

After 10 seconds the results are already better than those of Stoyan et al. for all of the Stoyan instances, with an average utilization close to 40%. The heuristic continues to improve solutions, but utilization is only improved by less than 1 percentage point after 300 seconds. Solutions for the Merged instances appear to be quite good, with utilizations matching the smaller Stoyan instances even with as many as 60 and 75 shapes.
Here too, solutions are generally only improved by about one percentage point after the first 300 seconds, with the exception of Merged5. The optimal utilization for Merged5 is probably higher than it is for Merged1, which is also indicated by the initial solutions, but the slow decline in the number of translations performed is also a strong indication that large problem instances are handled efficiently by 3DNest. Puzzles with 5 or 10 pieces are most often solved to optimality, and even puzzles with 20 pieces are handled quite well within the time limit of 10 minutes. This is not the case for the puzzles with 40 pieces: their average utilization is only about 50% after 10 minutes, and the best found utilization is less than 71%, which is far from the optimal 100%. The best solutions found for Stoyan3 and Merged5 are shown in Figure 10 and Figure 11.

A simple strategy for handling rotation has also been implemented in 3DNest. The local search neighborhood was expanded so that, in addition to trying translations in three directions, 24 different orientations (90° increments for each axis) are also tried. In each iteration, the translation or orientation which results in the least overlap is chosen. This was mainly done to get an indication of the improvement in utilization possible when allowing rotation. The results are presented in Table 4: better results are indeed obtained for the Stoyan instances, while some of the large Merged instances are not handled very well, most likely because of the increased amount of computation needed and the increased size of the solution space.

Figure 11: The best solution found for Merged5 without using rotation (from two different angles). The utilization is 42.12%.

A lower bound on the height of the Stoyan instances and Merged1 is 16, since these instances contain a shape (see Convex4 in Table 1) with height 16 which cannot be rotated and still be within the bounds of the container.
In all runs on Stoyan1 and Merged1, as well as most runs on Stoyan2, 3DNest is able to find solutions matching this bound, and therefore these solutions are optimal.

7 Conclusion

In this paper we have presented a solution method for the multi-dimensional strip packing problem. An earlier version of this method was previously tested by Egeblad et al. [A] and proved very successful in two dimensions. Three problem instances in three dimensions, by Stoyan et al. [27], were used to show that the presented solution method is able to reach far better results than those by Stoyan et al. [27]. The heuristic has also been tested on problems where the optimal value is known, and it has proven able to find the optimal solution for instances with 10 items and close-to-optimal solutions for instances with 20 items. A simple rotation scheme shows that increased utilization may be achieved by allowing rotation, and optimal solutions are found for instances with 7 and even 15 items.

The translation algorithm presented in Section 4 is strongly connected to packing problems, but it is important to emphasize that the algorithm could also be used to maximize the volume of intersection of polytopes under an axis-aligned translation. Also, the restriction to axis-aligned translations is imposed only in order to keep the mathematical details as simple as possible. It is, of course, possible to alter the algorithm for translation in an arbitrary direction; a trivial approach would be to rotate the input data. It is also important to note the simplicity of the translation algorithm, which is able to work directly with the faces of the polyhedra. Unlike many other methods, such as those presented in Section 3, we do not rely on additional approximating data structures such as octrees, depth maps, or voxels. Even though intersection volumes of non-convex polytopes are calculated, the intersections are never explicitly
constructed. Non-convex polytopes are handled as easily as the convex ones, and even holes are handled without any changes to the algorithm.

                 Avg.     Utilization (%)                Translations
Problem    Size  height   Avg.         Min.    Max.      per second
Stoyan1      7   16.00    50.42 (0.0)  50.42   50.43     1325.21
Stoyan2     12   16.13    52.17 (0.5)  51.15   52.58      682.27
Stoyan3     25   25.60    52.57 (1.2)  50.41   54.02      673.50
Merged1     15   16.00    46.87 (0.0)  46.87   46.87      440.18
Merged2     30   20.97    45.09 (1.3)  43.50   47.72      305.21
Merged3     45   26.50    40.83 (0.9)  39.75   42.95      299.31
Merged4     60   32.93    36.25 (1.9)  33.44   39.26      275.15
Merged5     75   40.27    31.89 (1.2)  29.31   33.52      242.22

Table 4: Results obtained when allowing rotation. Average utilization (with standard deviation in parentheses), minimum and maximum utilization are reported. The last column is the average number of translations per second.

Other problem variants might include non-rectangular containers [18], quality regions [18], repeated patterns [24], and more. Although these references are for the 2D problem, the generalized constraints can be handled by our solution method in any dimension, essentially without affecting the running time. If one does not calculate the minimum between each set of breakpoints in the translation algorithm, then only rational numbers are needed for the solution method described (given that the input problem only uses rational numbers). This property permits the use of the method if exact calculations are needed for some reason. In two dimensions, a minimum between breakpoints can also be found using rational numbers, since one only needs to solve linear equations.

Mount et al. [23] described what is essentially a 2D translation algorithm in 2D space (solving 2D2DTP). Given polygons with n and m edges, the worst-case running time is O((mn)²). Their approach is based on an arrangement of line segments. Decomposition techniques for d ≥ 3 have been studied by several authors (see de Berg et al.
[7]), and it would be interesting to see whether these methods can be used to generalize the solution method for 2D2DTP to dDdDTP. It is also an open question how to construct an algorithm for 2D3DTP or, more generally, d′DdDTP for any d ≥ 3 and d′ ∈ {2, . . . , d − 1}. For the sake of completeness, de Berg et al. [8] solve the maximization variant of 2D2DTP with two convex polygons in time O((n + m) log(n + m)). Ahn et al. [1] consider a generalization of the same problem in which they allow rotation. Finally, Cheong et al. [5] present an approximation algorithm that finds a translation for two general polygons where the area of overlap is at least µopt − ε, for some given value ε, where µopt is the maximal overlap of any translation. If the polygons have complexity n and m, the running time of their algorithm is O(m + (n²/ε⁴) log² n), and O(m + (n³/ε⁴) log⁵ n) if rotations are allowed.

Free orientation of shapes is one of the most important directions for future research, especially considering that most applications of packing 3D shapes, e.g., rapid prototyping, do allow free orientation. Another important direction for future research is how to also handle some of the constraints which are typically part of more general layout problems, e.g., constraints concerning gravity or wire length [4].

References

[A] J. Egeblad, B. K. Nielsen, and A. Odgaard. Fast neighborhood search for two- and three-dimensional nesting problems. European Journal of Operational Research, 183(3):1249–1266, 2007.
[1] H.-K. Ahn, O. Cheong, C.-D. Park, C.-S. Shin, and A. Vigneron. Maximizing the overlap of two planar convex sets under rigid motions. Computational Geometry, 37(1):3–15, 2006.
[2] A. Bortfeldt, H. Gehring, and D. Mack. A parallel tabu search algorithm for solving the container loading problem. Parallel Computing, 29:641–662, 2002.
[3] J. Cagan, D. Degentesh, and S. Yin.
A simulated annealing-based algorithm using hierarchical models for general three-dimensional component layout. Computer-Aided Design, 30(10):781–790, 1998.

[4] J. Cagan, K. Shimada, and S. Yin. A survey of computational approaches to three-dimensional layout problems. Computer-Aided Design, 34(8):597–611, 2002.

[5] O. Cheong, A. Efrat, and S. Har-Peled. Finding a guard that sees most and a shop that sells most. Discrete & Computational Geometry, 37(4):545–563, 2007.

[6] J. Cohen, A. Varshney, D. Manocha, G. Turk, H. Weber, P. Agarwal, F. Brooks, and W. Wright. Simplification envelopes. Computer Graphics, 30(Annual Conference Series):119–128, 1996.

[7] M. de Berg, L. J. Guibas, and D. Halperin. Vertical decompositions for triangles in 3-space. Discrete & Computational Geometry, 15(1):35–61, 1996.

[8] M. de Berg, O. Cheong, O. Devillers, and M. van Kreveld. Computing the maximum overlap of two convex polygons under translations. Theory of Computing Systems, 31(5):613–628, 1998.

[9] M. de Berg, M. van Kreveld, M. Overmars, and O. Schwarzkopf. Computational Geometry: Algorithms and Applications (2nd edition). Springer, 2000.

[10] J. K. Dickinson and G. K. Knopf. A moment based metric for 2-D and 3-D packing. European Journal of Operational Research, 122(1):133–144, 2000.

[11] J. K. Dickinson and G. K. Knopf. Packing subsets of 3D parts for layered manufacturing. International Journal of Smart Engineering System Design, 4(3):147–161, 2002.

[12] K. A. Dowsland and W. B. Dowsland. Solution approaches to irregular nesting problems. European Journal of Operational Research, 84:506–521, 1995.

[13] F. Eisenbrand, S. Funke, A. Karrenbauer, J. Reichel, and E. Schömer. Packing a trunk: now with a twist! In SPM '05: Proceedings of the 2005 ACM Symposium on Solid and Physical Modeling, pages 197–206, New York, NY, USA, 2005. ACM Press.

[14] M. Eley. Solving container loading problems by block arrangement. European Journal of Operational Research, 141(2):393–409, 2002.

[15] O.
Faroe, D. Pisinger, and M. Zachariasen. Guided local search for the three-dimensional bin packing problem. INFORMS Journal on Computing, 15(3):267–283, 2003.

[16] R. J. Fowler, M. S. Paterson, and S. L. Tanimoto. Optimal packing and covering in the plane are NP-complete. Information Processing Letters, 12(3):133–137, 1981.

[17] P. Hachenberger and L. Kettner. 3D Boolean operations on Nef polyhedra. In C. E. Board, editor, CGAL-3.2 User and Reference Manual. 2006.

[18] J. Heistermann and T. Lengauer. The nesting problem in the leather manufacturing industry. Annals of Operations Research, 57:147–173, 1995.

[19] S.-M. Hur, K.-H. Choi, S.-H. Lee, and P.-K. Chang. Determination of fabricating orientation and packing in SLS process. Journal of Materials Processing Technology, 112(2-3):236–243, 2001.

[20] I. Ikonen, W. E. Biles, A. Kumar, J. C. Wissel, and R. K. Ragade. A genetic algorithm for packing three-dimensional non-convex objects having cavities and holes. In Proceedings of the 7th International Conference on Genetic Algorithms, pages 591–598, East Lansing, Michigan, 1997. Morgan Kaufmann Publishers.

[21] J. Lawrence. Polytope volume computation. Mathematics of Computation, 57(195):259–271, 1991.

[22] A. Lodi, S. Martello, and D. Vigo. Heuristic algorithms for the three-dimensional bin packing problem. European Journal of Operational Research, 141(2):410–420, 2002.

[23] D. M. Mount, R. Silverman, and A. Y. Wu. On the area of overlap of translated polygons. Computer Vision and Image Understanding, 64(1):53–61, 1996.

[24] B. K. Nielsen. An efficient solution method for relaxed variants of the nesting problem. In J. Gudmundsson and B. Jay, editors, Theory of Computing, Proceedings of the Thirteenth Computing: The Australasian Theory Symposium, volume 65 of CRPIT, pages 123–130, Ballarat, Australia, 2007. ACS.

[25] B. K. Nielsen. Nesting Problems and Steiner Tree Problems. PhD thesis, DIKU, University of Copenhagen, Denmark, 2008.

[26] T. Osogami.
Approaches to 3D free-form cutting and packing problems and their applications: A survey. Technical Report RT0287, IBM Research, Tokyo Research Laboratory, 1998.

[27] Y. G. Stoyan, N. I. Gil, G. Scheithauer, A. Pankratov, and I. Magdalina. Packing of convex polytopes into a parallelepiped. Optimization, 54(2):215–235, 2005.

[28] P. E. Sweeney and E. R. Paternoster. Cutting and packing problems: A categorized, application-orientated research bibliography. Journal of the Operational Research Society, 43(7):691–706, 1992.

[29] C. Voudouris and E. Tsang. Guided local search and its application to the traveling salesman problem. European Journal of Operational Research, 113:469–499, 1999.

[30] G. Wäscher, H. Haussner, and H. Schumann. An improved typology of cutting and packing problems. European Journal of Operational Research, 2006.

[31] X. Yan and P. Gu. A review of rapid prototyping technologies and systems. Computer-Aided Design, 28(4):307–318, 1996.

[32] S. Yin and J. Cagan. An extended pattern search algorithm for three-dimensional component layout. Journal of Mechanical Design, 122(1):102–108, 2000.

[33] S. Yin and J. Cagan. Exploring the effectiveness of various patterns in an extended pattern search layout algorithm. Journal of Mechanical Design, 126(1):22–28, 2004.

In press (available online). Computers and Operations Research, 2007

Heuristic approaches for the two- and three-dimensional knapsack packing problem

Jens Egeblad∗ and David Pisinger

Abstract

The maximum profit two- or three-dimensional knapsack packing problem packs a maximum profit subset of some given rectangles or boxes into a larger rectangle or box of fixed dimensions. Items must be orthogonally packed, but no other restriction is imposed on the problem. We present a new iterative heuristic for the two-dimensional knapsack problem based on the sequence pair representation proposed by Murata et al.
(1996) using a semi-normalized packing algorithm by Pisinger (2006). Solutions are represented as a pair of sequences. In each iteration, the sequence pair is modified and transformed into a packing in order to evaluate the objective value. Simulated annealing is used to control the heuristic. A novel abstract representation of box placements, called the sequence triple, is used with a similar technique for the three-dimensional knapsack problem. The heuristic is able to handle problem instances where rotation is allowed. Comprehensive computational experiments comparing the developed heuristics with previous approaches indicate very promising results for both two- and three-dimensional problems.

1 Introduction

Assume that we are given a set of n rectangles j = 1, . . . , n, each having a width w_j, height h_j and profit p_j, and a rectangular plate having width W and height H. The maximum profit two-dimensional knapsack packing problem (2DKP) assigns a subset of the rectangles onto the plate such that the associated profit sum is maximized. All coefficients are assumed to be nonnegative integers, and the rectangles may not be rotated. A packing of rectangles on the plate is feasible if no two rectangles overlap, and if no part of any rectangle exceeds the plate. The maximum profit three-dimensional knapsack packing problem (3DKP) assigns a subset of boxes, each with dimensions w_j, h_j, d_j, into a larger box with dimensions W, H and D. The problem has direct applications in various packing and cutting problems where the task is to use the space or material in an optimal way. The 2DKP also appears as a pricing problem when solving the two-dimensional bin-packing problem [11, 31, 32]. 2DKP and 3DKP are NP-hard in the strong sense, which can be shown by reduction from the one-dimensional bin packing problem. An extensive survey on cutting and packing as well as a useful classification of these problems was developed by Wäscher, Haussner and Schumann [33].
The problems we consider in this paper can be classified as two- and three-dimensional rectangular single knapsack problems (SKP) according to the typology of Wäscher, Haussner and Schumann [33]. The items considered are strongly heterogeneous, and we consider problems both with and without rotation. A related problem is the constrained two-dimensional orthogonal non-guillotine cutting problem. Here equal items are grouped in types, and for each item-type there are both a lower bound and an upper bound on the number of items of that type required in the solution. Therefore the constrained non-guillotine cutting problem may be seen as a generalization of the orthogonal knapsack packing problem. Instances of the constrained non-guillotine cutting problem are often weakly heterogeneous, and solution methods commonly take advantage of this. Integer programming formulations of the 2DKP have been presented by Beasley [5], Hadjiconstantinou and Christofides [15], and Boschetti, Hadjiconstantinou, Mingozzi [7], among others. Fekete and Schepers [11, 12, 14] solved the 2- and 3DKP through a branch-and-bound algorithm which assigns items to the knapsack without specifying the positions of the rectangles. For each assignment of items a two-dimensional packing problem is solved, deciding whether a feasible assignment of coordinates to the items is possible such that they all fit into the knapsack without overlaps. An advanced graph representation was used for solving the latter problem. Baldacci and Boschetti [3] used a similar approach but introduced new reduction tests and a cutting-plane approach to compute more effective bounds.

∗ Corresponding Author: Tel.: +45 35 32 14 00; fax: +45 35 32 14 01. E-mail addresses: [email protected] (J. Egeblad), [email protected] (D. Pisinger)
Pisinger and Sigurd [32] solved the 2DKP through a branch-and-cut approach in which an ordinary one-dimensional knapsack problem is used to select the most profitable items whose overall area does not exceed the area of the plate. Having selected the most profitable items, a two-dimensional packing problem in decision form is solved through constraint programming. If all items can be placed in the knapsack, the algorithm terminates; otherwise an inequality is added to the one-dimensional knapsack problem stating that not all the current items can be selected simultaneously, and the process is repeated. Finally, Caprara and Monaci [8] developed a branch-and-bound algorithm for the 2DKP. The algorithm is based on a branch-and-bound scheme which assigns items to the knapsack without specifying the position of each item, followed by a feasibility check. The latter is done using an enumeration scheme from Martello, Monaci, Vigo [25]. Several authors have also applied heuristics to the constrained non-guillotine packing variant of the problem. Lai and Chan [20, 21] use simulated annealing and genetic algorithms. Solutions are represented as a sequence of items that is transformed into a placement. Only limited computational results are reported. The work by Leung et al. [22, 23] is based on that of Lai and Chan and the bottom-left placement procedure introduced by Jakobs [18]. Beasley [4] described a heuristic based on genetic algorithms capable of efficiently generating good solutions for instances with up to 4000 pieces. However, the heuristic is unable to reproduce known optimal solutions for smaller instances. Intermediate solutions explicitly state the coordinates of rectangles, and overlap of items is allowed during the solution process. A penalty in the objective function ensures that overlap is minimized. More recently, Alvarez-Valdes et al.
applied the meta-heuristics GRASP and tabu search [1, 2] to the problem and were able to achieve very impressive results on the data used by Beasley. In the present paper we first present an IP formulation of the 2- and 3DKP. In Section 3 we describe the sequence pair representation, which we use in Section 4 with a simple local search neighborhood controlled by simulated annealing to solve 2DKP. In Section 5 we introduce a novel abstract representation of box placements in three dimensions and use the same methods as for two dimensions to solve 3DKP. Finally, in Section 6 we present our results on existing and new benchmark instances for 2- and 3DKP.

2 Integer programming formulation of the problem

In the following we show an integer programming formulation of the 3DKP. A formulation of 2DKP easily follows by removing variables and constraints for the third dimension. We introduce the decision variable s_i to indicate whether box i is packed within the knapsack box. The coordinates of box i are (x_i, y_i, z_i), meaning that the lower left back corner of the box is located at this position. If a box is not packed within the knapsack we may assume that (x_i, y_i, z_i) = (0, 0, 0). As no part of a packed box may exceed the knapsack, we have the obvious constraints

    0 ≤ x_i ≤ W − w_i,   0 ≤ y_i ≤ H − h_i,   0 ≤ z_i ≤ D − d_i.    (1)

We introduce the binary decision variables ℓ_ij (left), r_ij (right), u_ij (under), o_ij (over), b_ij (behind) and f_ij (in-front) to indicate the relative position of boxes i, j where i < j. To ensure that no two packed boxes i, j overlap we demand that

    ℓ_ij + r_ij + u_ij + o_ij + b_ij + f_ij ≥ 1,    (2)

whenever s_i = s_j = 1.
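The disjunction (2) has a direct geometric reading: two boxes are overlap-free exactly when at least one of the six relative positions holds. The following Python sketch illustrates this check (illustrative code using the standard geometric meaning of the six directions; function and tuple layout are our own, not part of the formulation):

```python
# Illustrative sketch: the six relative positions behind constraint (2),
# written as predicates on two placed boxes. A box is (x, y, z, w, h, d).

def relations(bi, bj):
    """Return which of the six relative positions hold for boxes i, j."""
    xi, yi, zi, wi, hi, di = bi
    xj, yj, zj, wj, hj, dj = bj
    return {
        "left":     xi + wi <= xj,   # i left of j
        "right":    xj + wj <= xi,   # i right of j
        "under":    yi + hi <= yj,   # i under j
        "over":     yj + hj <= yi,   # i over j
        "behind":   zi + di <= zj,   # i behind j
        "in_front": zj + dj <= zi,   # i in-front of j
    }

def overlap(bi, bj):
    """Axis-aligned boxes overlap iff their intervals intersect on all three axes."""
    xi, yi, zi, wi, hi, di = bi
    xj, yj, zj, wj, hj, dj = bj
    return (xi < xj + wj and xj < xi + wi and
            yi < yj + hj and yj < yi + hi and
            zi < zj + dj and zj < zi + di)
```

For any pair of boxes, `overlap` is false if and only if some entry of `relations` is true, which is precisely the condition that (2) enforces for every pair of packed items.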
Depending on the relative position of two boxes, the coordinates must satisfy the following inequalities:

    ℓ_ij = 1 ⇒ x_i + w_i ≤ x_j,    r_ij = 1 ⇒ x_j + w_j ≤ x_i,
    u_ij = 1 ⇒ y_i + h_i ≤ y_j,    o_ij = 1 ⇒ y_j + h_j ≤ y_i,    (3)
    b_ij = 1 ⇒ z_i + d_i ≤ z_j,    f_ij = 1 ⇒ z_j + d_j ≤ z_i.

The problem may now be formulated as

    max  Σ_{i=1}^{n} p_i s_i
    s.t. ℓ_ij + r_ij + u_ij + o_ij + b_ij + f_ij ≥ s_i + s_j − 1,   i, j = 1, . . . , n
         x_i − x_j + W ℓ_ij ≤ W − w_i,                              i, j = 1, . . . , n
         x_j − x_i + W r_ij ≤ W − w_j,                              i, j = 1, . . . , n
         y_i − y_j + H u_ij ≤ H − h_i,                              i, j = 1, . . . , n
         y_j − y_i + H o_ij ≤ H − h_j,                              i, j = 1, . . . , n
         z_i − z_j + D b_ij ≤ D − d_i,                              i, j = 1, . . . , n
         z_j − z_i + D f_ij ≤ D − d_j,                              i, j = 1, . . . , n
         0 ≤ x_i ≤ W − w_i,                                         i = 1, . . . , n
         0 ≤ y_i ≤ H − h_i,                                         i = 1, . . . , n
         0 ≤ z_i ≤ D − d_i,                                         i = 1, . . . , n
         ℓ_ij, r_ij, u_ij, o_ij, b_ij, f_ij ∈ {0, 1},               i, j = 1, . . . , n
         s_i ∈ {0, 1},                                              i = 1, . . . , n
         x_i, y_i, z_i ≥ 0,                                         i = 1, . . . , n       (4)

The first constraint ensures that if boxes i and j are both packed, then they must be located left, right, under, over, behind or in-front of each other as stated in (2). The next six constraints are linear versions of the constraints (3). The last three inequalities correspond to the constraints (1). The MIP model has 6n^2 + n binary decision variables and 3n continuous variables. Although the O(n^2) binary variables are not alarming in number, the problem is difficult to solve. This is mainly due to the conditional constraints (3): they lose their effect when solving the LP-relaxation, and thus bounds from the LP-relaxation are in general far from the MIP-optimal solution value.

3 Sequence pairs

Murata et al. [17] presented an abstract representation of two-dimensional rectangle packings based on sequence pairs. The problem they consider is the minimum area enclosing rectangle packing problem.

Figure 1: A packing represented by sequence A = <e,c,a,d,f,b> and sequence B = <f,c,b,e,a,d>.
In the abstract representation every compact packing can be represented by two permutations of the numbers {1, 2, . . . , n}, where each number represents a rectangle in the problem instance. The pair of permutations is called a sequence pair (A, B). For a given packing, the two permutations A and B are found as follows: We use the relation A_ij to denote that item i precedes j in sequence A. We define

    (x_i + w_i ≤ x_j ∨ y_i ≥ y_j + h_j) ⇔ A_ij    (5)

In a similar way we use the relation B_ij to denote that item i precedes j in sequence B, defining

    (x_i + w_i ≤ x_j ∨ y_i + h_i ≤ y_j) ⇔ B_ij    (6)

Each of the two relations A, B given by (5) and (6) defines a semi-ordering, and hence for a given packing the two permutations A and B can easily be found by repeatedly choosing one (of possibly more) minimum elements. Figure 1 illustrates a packing and a corresponding sequence pair (A, B). From (5) and (6) we immediately see that if item i precedes item j in both sequences, then i must be placed left of j. If i succeeds j in sequence A but i precedes j in sequence B, then i must be placed under j. Formally we have

    A_ij ∧ B_ij ⇒ i is left of j     (7)
    ¬A_ij ∧ B_ij ⇒ i is under j     (8)

where we use the terminology ¬A_ij to denote A_ji. The implications (7) and (8) can be used to derive a pair of constraint graphs as illustrated in Figure 2. In both graphs the nodes correspond to the items, and edges indicate which rectangles should be placed left of each other (respectively under each other).

Figure 2: Constraint graphs corresponding to the sequence pair (A, B) = (<e,c,a,d,f,b>, <f,c,b,e,a,d>).

In the first graph we have an edge from i
to j if and only if item i should be placed left of j (A_ij ∧ B_ij). In the second graph we have an edge from i to j if and only if item i should be placed under j (¬A_ij ∧ B_ij). Redundant edges are removed from the figure for clarity. Traversing the nodes in topological order while assigning coordinates to the items, a packing (i.e., the coordinates of the items) can be obtained in O(n^2) time. Tang et al. [34, 35] showed how the same packing can be derived without explicitly defining the constraint graphs, by finding weighted longest common subsequences in the sequence pair. Pisinger [30] further improved the algorithm by presenting an algorithm which transforms a sequence pair to a semi-normalized packing in time O(n log log n). A normalized packing is a packing where the items are packed according to the sequence B and where each new item is placed such that it touches an already placed item on its left side and an already placed item on its lower side. A semi-normalized packing is a packing where the items are packed according to the sequence B and where each new item is placed such that it touches the contour of the already placed items both from the left and from below. The difference between a packing based on the ordinary transformation and the semi-normalized packing of the transformation by Pisinger is illustrated in Figure 3.

Figure 3: Transformation of a sequence pair to a packing using the ordinary transformation (left) and using the semi-normalized transformation (right).
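The ordinary constraint-graph transformation can be written down compactly from (7) and (8): placing items in the order of sequence B, every previously placed item either pushes the new item to the right (it precedes the new item in both sequences) or up (it precedes it only in B). A minimal Python sketch of this O(n^2) decoding (the function name and toy data are illustrative; this is not Pisinger's faster semi-normalized transformation):

```python
# Sketch of the ordinary O(n^2) sequence-pair decoding via (7)-(8).
# Since items are placed in the order of B, B_ji holds for every already
# placed item j, so only the order in A needs to be inspected.

def decode_sequence_pair(A, B, dims):
    """dims maps item -> (w, h); returns item -> (x, y)."""
    pos_a = {item: k for k, item in enumerate(A)}
    placed = {}  # item -> (x, y)
    for i in B:
        x = y = 0
        for j, (xj, yj) in placed.items():
            wj, hj = dims[j]
            if pos_a[j] < pos_a[i]:   # A_ji and B_ji: j is left of i
                x = max(x, xj + wj)
            else:                     # not A_ji, but B_ji: j is under i
                y = max(y, yj + hj)
        placed[i] = (x, y)
    return placed
```

Iterating in the order of B is valid because an edge j -> i in either constraint graph requires that j precedes i in B, so B itself is a topological order for both graphs.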
4 Sequence pairs for two-dimensional knapsack packing

Let any sequence pair represent a feasible solution to the 2DKP. To evaluate the solution we transform the sequence pair into a packing. The solution value is the sum of the profit values of those items which are located completely within the knapsack W × H of the transformed packing. Figure 3 illustrates two such packings, which arose from the conventional and the semi-normalized transformation of the sequence pair. The solution value of each packing is the sum of the profits of the items within the dashed lines. The transformation from sequence pair to packing may be stopped as soon as the contour of the already placed items is completely outside the knapsack. For problems where only a small fraction of the items fit inside the knapsack, this can save substantial time, since generating the packing will then take an amount of time which is roughly equal to the time required to place only the subset of items which are inside the knapsack. This section is organized as follows: In Section 4.1 we describe our heuristic in more detail. In Section 4.2 we describe how to accommodate problems where rotation is allowed, and in Section 4.3 we describe how problem instances can be simplified during the solution process.

4.1 Simulated annealing

To solve the 2DKP, we use the meta-heuristic simulated annealing, which works well in cooperation with the sequence pair representation [17, 30, 34, 35]. In this setting we repeatedly make a small modification to the sequence pair, transform the sequence pair into a packing, evaluate the profit of the corresponding packing, and accept the solution depending on the outcome. In simulated annealing any non-improving solution is accepted with a probability that decreases over time. An outline of the algorithm is found in Figure 4.
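An executable sketch of the same acceptance loop, with the temperature 1/(t0 + ts·a) decreasing each time a solution is accepted (the objective f, the neighborhood sampler, the parameter defaults and the guard against division by zero are placeholders of this sketch, not values from the paper):

```python
# Illustrative simulated annealing loop; f is minimized, neighbor(s) samples N(s).
import math
import random

def simulated_annealing(s, f, neighbor, t0=1.0, ts=0.01, iterations=10000):
    a = 0  # number of accepted solutions; the temperature is 1/(t0 + ts*a)
    for _ in range(iterations):
        s_new = neighbor(s)
        accept = f(s_new) <= f(s)  # improving move: always accept
        if not accept:
            T = 1.0 / (t0 + ts * a)
            # relative worsening; the epsilon guard is an addition of this sketch
            delta = (f(s_new) - f(s)) / max(abs(f(s)), 1e-12)
            accept = random.random() < math.exp(-delta / T)
        if accept:
            s = s_new
            a += 1
    return s
```

With a neighborhood over sequence pairs and f equal to the negated packing profit, this loop matches the control strategy described below.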
choose initial incumbent solution s ∈ S
choose initial time t0
choose time step ts
a := 0
repeat
    choose s' ∈ N(s)
    if f(s') ≤ f(s) then
        accept := true
    else
        p := rand(0, 1)
        T := 1/(t0 + ts · a)
        ∆ := (f(s') − f(s))/f(s)
        if p < e^(−∆/T) then
            accept := true
        end
    end
    if accept then
        s := s'
        a := a + 1
    end
until stopping-criteria
return s

Figure 4: Simulated annealing heuristic.

Our variant of simulated annealing is as follows: At any given time the temperature is evaluated as 1/(t0 + ts · a), where t0 is a start time-value, ts is a time-step value and a is the number of accepted solutions. The temperature depends on the time, so the higher t0 + ts · a is, the lower is the current temperature. The temperature is decreased only when a new solution is accepted. The neighborhood N(s) of a solution s = (A, B) is defined by one of the following three permutations: exchange two items in sequence A; exchange two items in sequence B; or exchange two items in both sequence A and sequence B. The items are selected randomly.

4.2 Rotations

Few papers consider exact algorithms for packing problems where rotation is allowed. A possible explanation could be the increased size of the solution space and the lack of high-quality upper bounds. In our heuristic, rotations are easy to handle, as we may represent each packing by the triple (A, B, R). Here (A, B) is the sequence pair and R = (r_1, . . . , r_n) is a binary vector of length n. In a placement of (A, B, R), item i is rotated r_i · 90 degrees. If rotation is allowed, the neighborhood N(s) of our heuristic is extended with a fourth permutation: change the rotation flag of an item in R.

4.3 Removing items

In instances where only a small fraction of the items can fit inside the knapsack, it can be advantageous to a priori remove some items which provably will not be selected in an optimal solution.
Our approach is based on standard techniques for reducing the number of items in a 0-1 knapsack problem [19] and is similar to the one presented in [1]. Let the 1-dimensional relaxation of 2DKP be given by the following 0-1 knapsack problem (1DKP):

    max  Σ_{j=1}^{n} p_j x_j
    s.t. Σ_{j=1}^{n} (w_j h_j) x_j ≤ W H    (9)
         x_j ∈ {0, 1},  j = 1, . . . , n

Assume that z∗ is the currently best known solution value of 2DKP; then clearly z∗ is a lower bound for (1DKP), and the optimal value of (1DKP) is an upper bound on z∗. Now, assume that we have an upper bound u_j^1 for (1DKP) with the additional constraint that x_j = 1. If u_j^1 ≤ z∗, then we know that item j will not be chosen in an improved solution of (1DKP), and hence neither in an improved solution of 2DKP. As our upper bound we have chosen the Dembo and Hammer [10] upper bound u^DH, which can be calculated as follows: Sort the items in (1DKP) according to nonincreasing efficiency p_j/(w_j h_j) and fill the knapsack in a greedy way until the first item s (the split item) does not fit into the knapsack. Then the Dembo and Hammer upper bound is given by

    u^DH = Σ_{i=1}^{s−1} p_i + (W H − Σ_{i=1}^{s−1} w_i h_i) · p_s/(w_s h_s).

If we fix a variable x_j = 1, the upper bound becomes u_j^1 = u^DH + p_j − (w_j h_j) p_s/(w_s h_s), which can be calculated in constant time. During the simulated annealing, we run this test whenever we encounter an improving solution z∗ and remove from the problem every item j for which u_j^1 ≤ z∗.

5 Three dimensions

For the three-dimensional problem we consider a new representation which, like the sequence pair in two dimensions, contains the relative box placements in three dimensions. We call the representation the sequence triple, since it consists of three sequences. Not all three-dimensional packings are obtainable with this representation, but we will prove that a large subset of all normalized packings, known as fully robot packable packings, may be represented.
A robot packing is a packing which can be achieved by successively placing boxes starting from the bottom-left-behind corner, such that each box is in-front of, right of, or over each of the previously placed boxes [26]. Robot packings are motivated by several industrial applications, where boxes have to be packed by robots equipped with a rectangular "hand" parallel to the base of the large box. To avoid collisions between the hand and the boxes, it is demanded that no already packed box blocks the movement of the "hand". In [26] it is shown that the quality of a packing is seldom affected by restricting the solution space to the set of robot packings.

    i    1  2  3  4  5  6  7  8  9
    w_i  2  3  3  5  5  6  6  4  4
    h_i  4  7  7  2  2  3  3  6  6
    d_i  9  3  6  3  6  3  6  3  6
    x_i  4  6  6  4  4  0  0  0  0
    y_i  3  0  0  7  7  0  0  3  3
    z_i  0  6  0  6  0  6  0  0  3

Figure 5: A packing and the corresponding sequence triple (A, B, C) = {<9, 4, 8, 5, 1, 6, 2, 7, 3>, <7, 8, 6, 9, 1, 3, 5, 2, 4>, <2, 3, 6, 7, 1, 4, 5, 9, 8>}.

A packing is a fully robot packable packing if all six 90 degree rotations of it are robot packings or, equivalently, if the robot criterion is satisfied no matter which corner is selected as start corner instead of the bottom-left-behind corner. This section is organized as follows: First we describe the sequence triple representation in detail in Section 5.1. Then we describe an algorithm to transform a sequence triple to a packing in Section 5.2. Finally, in Section 5.3, we describe how the same simulated annealing strategy that we use for the sequence pair is applied to the sequence triple to form a heuristic for 3DKP.

5.1 Sequence triple

A given fully robot packable packing is represented by three sequences A, B and C, where each sequence is a permutation of the n boxes. For any sequence X we define the relation X_ij to mean that i is before j in sequence X. For convenience we use the notation ¬X_ij ⇔ X_ji.
In a similar way as in Section 3 we define the relation A_ij by

    (x_i + w_i ≤ x_j ∨ y_i ≥ y_j + h_j ∨ z_i ≥ z_j + d_j) ⇔ A_ij    (10)

In other words, A_ij iff i is located left of, over, or in-front of j. Using the formulation (2) we have A_ij ⇔ ℓ_ij + o_ij + f_ij ≥ 1. The relation B_ij is defined by

    (x_i + w_i ≤ x_j ∨ y_i + h_i ≤ y_j ∨ z_i + d_i ≤ z_j) ⇔ B_ij    (11)

This means B_ij iff i is located left of, under, or behind j. The relation can be expressed as B_ij ⇔ ℓ_ij + u_ij + b_ij ≥ 1. Finally, the relation C_ij is defined by

    (x_i ≥ x_j + w_j ∨ y_i + h_i ≤ y_j ∨ z_i ≥ z_j + d_j) ⇔ C_ij    (12)

In words, C_ij iff i is located right of, under, or in-front of j, which can be expressed as C_ij ⇔ r_ij + u_ij + f_ij ≥ 1. Due to the definition of fully robot packable packings, there will always be an item which is located furthest left-over-behind. By removing this item and repeating the operation, we get the ordering of sequence A. In a similar way the orderings of B and C can be determined, as illustrated in Figure 5 (the letters A, B, C in the figure indicate the directions which are used for defining the corresponding sequence of boxes). This shows that every fully robot packable packing can be represented by a sequence triple. Using the relations

    A_ij ⇔ ℓ_ij + o_ij + f_ij ≥ 1,   B_ij ⇔ ℓ_ij + u_ij + b_ij ≥ 1,   C_ij ⇔ r_ij + u_ij + f_ij ≥ 1,
    ℓ_ij + r_ij ≤ 1,   o_ij + u_ij ≤ 1,   f_ij + b_ij ≤ 1,    (13)

we find that

    A_ij ∧ ¬B_ij ∧ C_ij    ⇔  f_ij = 1
    A_ij ∧ B_ij ∧ C_ij     ⇔  ℓ_ij + r_ij ≥ 1 ∨ o_ij + u_ij ≥ 1 ∨ f_ij + b_ij ≥ 1
    ¬A_ij ∧ ¬B_ij ∧ C_ij   ⇔  r_ij = 1
    ¬A_ij ∧ B_ij ∧ C_ij    ⇔  u_ij = 1
    A_ij ∧ ¬B_ij ∧ ¬C_ij   ⇔  o_ij = 1
    A_ij ∧ B_ij ∧ ¬C_ij    ⇔  ℓ_ij = 1
    ¬A_ij ∧ ¬B_ij ∧ ¬C_ij  ⇔  ℓ_ij + r_ij ≥ 1 ∨ o_ij + u_ij ≥ 1 ∨ f_ij + b_ij ≥ 1
    ¬A_ij ∧ B_ij ∧ ¬C_ij   ⇔  b_ij = 1    (14)

Notice that A_ij ∧ B_ij ∧ C_ij and ¬A_ij ∧ ¬B_ij ∧ ¬C_ij cannot occur for any packing.
We have, however, chosen to assign these cases a meaning, such that every sequence triple has a corresponding packing. This leads to the following four implications, similar to (7) and (8), which are used to determine the relative box positions:

    A_ij ∧ B_ij ∧ ¬C_ij   ⇒ i is left of j    (15)
    ¬A_ij ∧ B_ij ∧ C_ij   ⇒ i is under j      (16)
    ¬A_ij ∧ B_ij ∧ ¬C_ij  ⇒ i is behind j     (17)
    A_ij ∧ B_ij ∧ C_ij    ⇒ i is behind j     (18)

Notice that both (17) and (18) impose that i must be behind j in the packing. The unfortunate consequence of this is that the representation is biased towards orderings in that direction, which could have a negative impact on the solution process; but as we wish to let every sequence triple represent a packing, an arbitrary choice had to be made.

5.2 A placement algorithm

To find a placement (i.e., the coordinates of the boxes) corresponding to a sequence triple, we can construct three constraint graphs similar to Figure 2: In the first graph we have an edge from item i to item j if i is located left of j (i.e., A_ij ∧ B_ij ∧ ¬C_ij). In the second graph we have an edge from item i to item j if i is located under j (i.e., ¬A_ij ∧ B_ij ∧ C_ij). In the last graph we have an edge from item i to item j if i is located behind j (i.e., ¬A_ij ∧ B_ij ∧ ¬C_ij or A_ij ∧ B_ij ∧ C_ij). Traversing the nodes in topological order for each graph while assigning coordinates to the items, we find the location of all boxes in time O(n^2). By observing that B_ij is a necessary criterion for node i to precede node j in each of the three constraint graphs, we may actually omit the topological ordering, as it is in each case given by the order of sequence B. The first box in B is placed at (x, y, z) = (0, 0, 0), and succeeding boxes are placed one by one according to the order of sequence B. At any time let P consist of all previously placed boxes. Now assume we wish to place box i. To determine the position of i we compare i with every box j ∈ P.
Let P_x ⊆ P be the subset of boxes j which satisfy (15) with respect to i (i.e., A_ji ∧ B_ji ∧ ¬C_ji, so that j is left of i), let P_y ⊆ P be the subset which satisfy (16) (i.e., ¬A_ji ∧ B_ji ∧ C_ji, j under i), and let P_z ⊆ P be the subset which satisfy (17) or (18) (i.e., ¬A_ji ∧ B_ji ∧ ¬C_ji or A_ji ∧ B_ji ∧ C_ji, j behind i); note that B_ji holds for every j ∈ P, since j precedes i in sequence B. Now assign to i the coordinates (x_i, y_i, z_i) determined by

    x_i = max(0, max_{j ∈ P_x}(x_j + w_j))    (19)
    y_i = max(0, max_{j ∈ P_y}(y_j + h_j))    (20)
    z_i = max(0, max_{j ∈ P_z}(z_j + d_j))    (21)

Once a box has been placed, it is inserted into P. If we maintain a table in which the position of each box i in the three sequences A, B, C is saved, we can test whether A_ij, B_ij or C_ij holds in constant time for two given boxes i, j. Since placing a box only requires comparison with every previously placed box, calculating (19) to (21) for a given box i can be done in O(|P|) = O(n) time. Placing all n boxes then requires O(n^2) time. To speed up the placement procedure slightly, we remove a box from P if it is completely "shaded" by a newly inserted box. A box j is shaded by a box i if x_j + w_j ≤ x_i + w_i, y_j + h_j ≤ y_i + h_i and z_j + d_j < z_i + d_i; it then does not affect the placement of future boxes, and by removing it we can avoid subsequent redundant checks.

5.3 Simulated annealing

To solve 3DKP, we use the same simulated annealing scheme used for two dimensions, but with the three-dimensional sequence representation. The neighborhood is extended to accommodate the extra sequence and consists of the following permutations: 1) exchange two boxes in one of the sequences, 2) exchange two boxes in sequences A and B, 3) exchange two boxes in sequences A and C, 4) exchange two boxes in sequences B and C, 5) exchange two boxes in all three sequences.
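The placement procedure of Section 5.2 can be sketched in a few lines of Python, decoding the triple of Figure 5 (illustrative code: the function name is our own, and the shading speed-up is omitted):

```python
# Sketch of the Section 5.2 decoder: boxes are placed in the order of
# sequence B; each already placed box j joins P_x, P_y or P_z according to
# the relations A_ji and C_ji (B_ji always holds for placed boxes), and the
# coordinates follow (19)-(21).

def decode_sequence_triple(A, B, C, dims):
    """dims maps box -> (w, h, d); returns box -> (x, y, z)."""
    pa = {b: k for k, b in enumerate(A)}
    pc = {b: k for k, b in enumerate(C)}
    placed = {}
    for i in B:
        x = y = z = 0
        for j, (xj, yj, zj) in placed.items():
            wj, hj, dj = dims[j]
            a_ji, c_ji = pa[j] < pa[i], pc[j] < pc[i]
            if a_ji and not c_ji:      # (15): j is left of i
                x = max(x, xj + wj)
            elif not a_ji and c_ji:    # (16): j is under i
                y = max(y, yj + hj)
            else:                      # (17)/(18): j is behind i
                z = max(z, zj + dj)
        placed[i] = (x, y, z)
    return placed
```

Every placed pair is separated along the axis of its push, so the decoded packing is always overlap-free, although the normalized coordinates it produces need not coincide with the packing the triple was extracted from.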
6 Computational experiments

The heuristic described in the previous sections was implemented in C++ using a modified version of the sequence pair algorithm by Pisinger [30] for two dimensions and an implementation of the placement algorithm for sequence triples described in Section 5.2 for three dimensions. The implementation was tested on a computer with an AMD Athlon 64 3800+ (2.4 GHz) processor and 2 GB of RAM using the GNU C++ compiler (gcc 4.0). This section is divided into three parts: Section 6.1 considers tighter upper bounds used to evaluate the quality of solutions, Section 6.2 deals with the 2DKP, and Section 6.3 considers the 3DKP.

6.1 Bounds

To determine the quality of the solutions, we compare solution values with the upper bound introduced by Fekete et al. [13, 14], which is based on conservative scales.

Heuristic approaches for the two- and three-dimensional knapsack packing problem

Given an instance I of (3DKP) with container dimensions W × H × D, a new instance I′ with dimensions 1 × 1 × 1 is generated by scaling the dimensions of each item i ∈ I by 1/W, 1/H and 1/D, respectively. Define the conservative scale

u^(k)(x) = x                 if (k+1)x ∈ Z,
           ⌊(k+1)x⌋ / k      otherwise,

and let u^(0)(x) = x. Now for each j, k, l = 0, 1, . . . , 4, instances are generated from I′ by setting item i’s dimensions to u^(j)(w′i) × u^(k)(h′i) × u^(l)(d′i). The minimal optimal value of the one-dimensional relaxation of any of these instances is used as an upper bound of I. For an instance of (2DKP), similar bounds are determined by disregarding the third dimension. For instances where rotation is allowed the upper bound from conservative scales is not valid, hence we use the weaker upper bound given by the optimal value of (1DKP) defined in (9).

6.2 2D computational experiments

To test the 2DKP heuristic we used both classical instances from the literature and a new set of instances (described in Table 1). The instances were used for parameter tuning of the heuristic.
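The conservative scale u^(k) of Section 6.1 can be computed exactly with rational arithmetic; the following small sketch (the function name is ours) follows the definition above, using Python's `Fraction` so that the test “(k+1)x ∈ Z” is exact rather than subject to floating-point error.

```python
from fractions import Fraction

def conservative_scale(k, x):
    """u^(k)(x) as defined in Section 6.1; x is a Fraction in [0, 1]."""
    if k == 0:
        return x                          # u^(0) is the identity
    t = (k + 1) * x
    if t.denominator == 1:                # (k + 1) x is integral: keep x
        return x
    return Fraction(t.numerator // t.denominator, k)   # floor((k+1)x) / k
```

For example, u^(1)(2/5) = ⌊4/5⌋/1 = 0, while u^(1)(1/2) = 1/2 because (k+1)x = 1 is integral; items whose scaled sizes still fit side by side in the unit interval yield valid upper bounds for the original instance.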
Results are reported for instances both without and with rotation allowed.

6.2.1 Classical instances for 2DKP

We use the benchmark instances considered by Fekete et al. [14], Caprara and Monaci [8] and Alvarez-Valdes et al. [1, 2]. The instances are listed in Table 2. The instances beasley1-12 originate from [6]. The cgcut and gcut instances are guillotine-cut instances from [9] and [5], respectively. The guillotine-cut instance wang20 is from [36]. The instances 3 to chl5 are also guillotine-cut instances by Hifi [16]. The data for hadchr3 and hadchr11 was presented in [15]. The instances okp1-5 are by Fekete and Schepers [12]. To transform the gcut instances, exactly one rectangle was created in the 2DKP instance for each rectangle in the original instance. For the constrained instances, bi duplicates of each rectangle were created, where bi is the maximum number of times rectangle i may be cut from the material. In addition to the above instances, Beasley [4] presented a set of 630 instances (ngcutfs), which are listed in Table 3. In these, the number of distinct items, M, ranges between 40 and 1000, but items are duplicated Q times, with Q ∈ {1, 3, 4}. Therefore the total number of items, n, ranges from 40 to 4000.

6.2.2 New instances for 2DKP

To test the performance of the heuristic on problems where many rectangles can exist simultaneously in the knapsack, we have created 80 new instances. The rectangle dimensions in each instance belong to one of five different classes, which are listed in Table 1. Dimensions of the items are selected randomly from a uniform distribution between the first and last values of the intervals in the ‘Width’ and ‘Height’ columns of the table. The five classes are tall (T), wide (W), square (S), uniform (U) and diverse (D). The number of rectangles, n, in each instance is selected from the set {30, 50, 100, 200}. The rectangles may be clustered (C) or random (R).
Clustered instances consist of only 20 rectangles which are duplicated appropriately, while in the random instances all rectangles are independently generated. Finally, the area of the bin is either 25% or 75% of the total area of the rectangles, and the height of the bin is always twice its width.

Table 1: The five different classes of the new ep2 instances.

Class  Description                                                      Width           Height
T      Tall. Rectangles are tall                                        [1, 1/3·100]    [2/3·100, 100]
W      Wide. Rectangles are wide                                        [2/3·100, 100]  [1, 1/3·100]
S      Square. Rectangles are square                                    [1, 100]        Equal to width
U      Uniform. Largest dimension is no more than 150% of the smallest  [2/3·100, 100]  [2/3·100, 100]
D      Diverse. Largest dimension can be up to 100 times the smallest   [1, 100]        [1, 100]

The naming convention is ep2-n-c-t-p, where n ∈ {30, 50, 100, 200} is the number of rectangles, c ∈ {T, W, S, U, D} describes the class, t ∈ {C, R} describes whether it is clustered or random, and p ∈ {25, 75} describes the size of the bin as a percentage of the total rectangle area. The profit of a rectangle is always its area times a random number from {1, 2, 3}. The instances are presented in Table 4 and are available along with the source code to generate them at this web address: http://www.diku.dk/~pisinger/codes.html.

6.2.3 Parameter setting

Three parameters are crucial for the results of Simulated Annealing: the start temperature t0, the temperature step ts and the stopping criterion. A time limit is used as stopping criterion. Suitable values of t0 and ts and the time limit depend on the complexity of the instance. For each instance we determine two indicator values n0 and n1. We set n0 = n · Vknapsack/Vitems, where Vknapsack is the area (volume) of the knapsack and Vitems is the sum of the items’ areas (volumes). It indicates the average number of items the knapsack may contain.
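The instance classes of Table 1 above can be illustrated with a small sampling sketch. This is a hypothetical reconstruction, not the actual generator (which is available at the URL above); the function name is ours, and the fractional bounds 1/3·100 and 2/3·100 are rounded to the integers 33 and 67.

```python
import random

# Intervals of Table 1, rounded to integers: 1/3*100 -> 33, 2/3*100 -> 67.
RANGES = {
    "T": ((1, 33), (67, 100)),     # tall: narrow and high
    "W": ((67, 100), (1, 33)),     # wide: broad and low
    "S": ((1, 100), None),         # square: height equals width
    "U": ((67, 100), (67, 100)),   # uniform: ratio at most about 3:2
    "D": ((1, 100), (1, 100)),     # diverse: up to 100:1 aspect ratio
}

def sample_rectangle(cls, rng=random):
    """Sample (width, height, profit) for one rectangle of class cls."""
    w_range, h_range = RANGES[cls]
    w = rng.randint(*w_range)
    h = w if h_range is None else rng.randint(*h_range)
    profit = w * h * rng.choice((1, 2, 3))   # profit = area times 1, 2 or 3
    return w, h, profit
```

A clustered instance would draw 20 such rectangles and duplicate them, while a random instance would call the sampler once per rectangle.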
We use the value n0 to determine the running time of our experiments. For an instance with n rectangles and n0 defined as above, let F(n, n0) = n0 lg n. If we expect n0 items to fit into the knapsack and the order of the items matters, then there are roughly n!/(n−n0)! ≈ n^n0 ways to select the items, and we may expect that there are roughly n^n0 different possible solutions to search. Thus F(n, n0) = lg(n^n0) should give us a rough indication of the size of the solution space of the instance. The running time T(n, n0) (in seconds) of each instance is determined from the value F(n, n0) as follows:

            |  30  for        F(n, n0) < 25
            |  60  for  25 ≤ F(n, n0) < 65
T(n, n0) =  | 120  for  65 ≤ F(n, n0) < 100      (22)
            | 240  for 100 ≤ F(n, n0) < 250
            | 600  for 250 ≤ F(n, n0)

The value n1 is the number of items chosen in an optimal solution of (1DKP) defined in (9). For all considered instances, this problem is solved to optimality very quickly using the exact method by Pisinger [28]. n1 reflects the number of rectangles to be expected in an optimal solution, which is another indicator of the complexity of the instance. To determine appropriate values of t0 and ts we experimented with the 23 instances marked with ‘*’ in Tables 2 and 4. These contain between 16 and 200 rectangles. We performed the experiments with t0 ∈ {10^-3, 10^-2, 10^-1, 10^0, 10^1, 10^2, 10^3, 10^4, 10^5} and ts ∈ {10^2, 10^1, 10^-1, 10^-3, 10^-5, 10^-7, 10^-9, 10^-11, 10^-13}.
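The schedule (22) translates directly into code; this short sketch (function name ours) uses the base-2 logarithm for lg:

```python
import math

def running_time(n, n0):
    """Time limit in seconds from (22), with F(n, n0) = n0 * lg n."""
    F = n0 * math.log2(n)
    if F < 25:
        return 30
    elif F < 65:
        return 60
    elif F < 100:
        return 120
    elif F < 250:
        return 240
    else:
        return 600
```

For beasley1 (n = 10, n0 = 5.3) this gives F ≈ 17.6 and hence a 30-second limit, matching the ‘Seed Time’ column of Table 2.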
For the 23 instances, each of the 81 combinations of t0 and ts was tested using the running times from (22). Results from four selected instances are presented in Figure 6.

Figure 6: Results of the Simulated Annealing heuristic for different values of t0 and ts on four different instances (ep2-200-S-R-75, cgcut2, gcut13 and gcut12; each panel plots the profit obtained against t0 and ts).

Based on the results of the parameter tuning for the 23 instances, we were able to establish that good values of t0 and ts are

t0 = n1^2,   ts = n1^2 / 10^7.

The values can be interpreted in the following way: the higher t0 and ts are, the less likely is the acceptance of a non-improving permutation. The larger the number of rectangles in an optimal solution, the more improving steps must be undertaken before the heuristic reaches a local minimum.

6.2.4 Results

Using the parameters of the previous section, our heuristic was applied to the instances described in Sections 6.2.1 and 6.2.2. To determine the robustness of the heuristic, we ran it on each instance with 10 different random seeds. For the classical instances, the optimal solution value is reported [14] where known. For the remaining instances, we compare our results with the upper bounds described in Section 6.1. The results are reported in Tables 2–6, which all follow the same format. The average, best and worst results of our heuristic on each instance over the 10 seeds are reported in the columns entitled ‘Avg.’, ‘Best’ and ‘Worst’, respectively.
The time before the heuristic discovered the best solution is reported in the column entitled ‘Best Time’, and the time spent on each seed is reported in the column entitled ‘Seed Time’ (total time = 10 × Seed Time). In Table 2, we report results for the classical instances. The column ‘Optimal’ contains values of optimal solutions. For gcut13, no optimal value is currently known, so we have instead reported the results and upper bounds of Caprara and Monaci [8] and Fekete and Schepers [14], respectively. The columns ‘Exact Methods Time’ represent the running times of the algorithms by Caprara and Monaci [8] and Fekete and Schepers [14]. All running times are in seconds. The running times of Alvarez-Valdes et al. [1] are reported in the column entitled ‘Alvarez-Valdes’ (these instances were solved to optimality). Instances marked with ‘*’ were used for parameter tuning. The heuristic finds the optimal value in all but four of the instances. For the remaining instances except hadchr3, the deviation is less than two percent from optimum. The heuristic is able to improve the best known solution of gcut13 by 0.8%. The time to find the best solution is generally below one second for the small instances and reaches only a few minutes for the most difficult instances. For the constrained guillotine-cut instances, we generally get equal or better results than the original authors, since we also allow non-guillotine packings. However, the unconstrained instances (gcut) have been transformed and are not comparable with results from the original paper. Table 3 summarizes results on the 630 ngcutfs instances as well as the results of Beasley [4] and Alvarez-Valdes et al. [1, 2]. As in [4] and [1, 2], results are reported as the average percent deviation from the (1DKP) upper bound in the columns marked ‘Avg./1D’. The average time in seconds it took to reach the best solution is given in the columns marked ‘Time’.
The results in ‘Beasley’ are taken from [4] and the results in ‘Alvarez-Valdes et al.’ are taken from [1]. It is seen that in terms of solution quality, our results are 0.4% better than those of Beasley and only 0.25% worse than those of Alvarez-Valdes et al. Since their reported results for large instances are found within the first second, we suspect that the approach by Alvarez-Valdes et al. benefits from a greedy constructive algorithm that is applied before their local search, which seems to find optimal solutions for instances with hundreds of items or more. In the present paper, we are mainly interested in comparing the local search frameworks and not the initial greedy heuristics. Moreover, it should be emphasized that although the ngcutfs instances contain up to 4000 items, the average values of n0 and n1 are 5.4 and 7.9, and the maximal values are 9.31 and 21, respectively. This indicates that the number of rectangles which can fit inside the knapsack is only a few dozen. Therefore it would be interesting to compare results for instances where more than a hundred items fit simultaneously inside the knapsack. The results on the 80 newly proposed benchmarks are listed in Table 4. Here ‘Bound’ refers to the upper bound based on conservative scales, and ‘Best/Bound’, ‘Avg./Bound’, ‘Worst/Bound’ are the percentage deviations between the heuristic’s best, average and worst solutions and the conservative scale upper bound, calculated as 100 − [solution value]/[bound value] · 100. Only for 11 of the instances is the best result more than 5% from the upper bound, and on average the deviation is only 3.0%. The average deviation is 4.6%, 3.1%, 2.4% and 2.1% for n = 30, 50, 100, and 200, respectively. For 9 of the instances, the heuristic finds the optimal solution, since the deviation from the upper bound is 0.
This shows the heuristic’s ability to find good solutions for both small and large instances, and that the gap between solution value and upper bound decreases as the number of items and the size of the knapsack increase. Figure 7 illustrates the behavior of the heuristic over time for the instances okp5, ep2-100-S-R-75 and ep2-200-U-C-75. The y-axis is the percentage of the best solution found during the run and the x-axis is the percentage of the full running time for each instance. The graph shows that after roughly half the running time the solution value stays within 1 percent of the best found solution value, and that the heuristic quickly converges but allows for minor changes throughout the entire solution process. The best results for three of the instances are shown in Figure 8.

Table 2: Results for the classical benchmark instances. Instance beasley1 beasley2 beasley3 beasley4 beasley5 beasley6 beasley7 beasley8 beasley9 beasley10 beasley11 beasley12 cgcut1* cgcut2* cgcut3* gcut1* gcut2 gcut3* gcut4 gcut5 gcut6 gcut7 gcut8 gcut9 gcut10 gcut11 gcut12* gcut13* n 10 17 21 7 14 15 8 13 18 13 15 22 16 23 62 10 20 30 50 10 20 30 50 10 20 30 50 32 n0 5.3 6.3 7.6 6.5 6 7.8 18.3 8.2 7.4 6.8 9.1 8.6 10.7 14.8 3.9 3.82 4.6 4.6 4.3 4.6 4.1 3.7 4.5 4.9 3.7 4.6 4 20.1 n1 5 8 6 6 7 8 8 9 9 7 8 11 8 11 11 4 5 6 6 4 5 5 5 5 5 6 4 18 hadchr3 hadchr7 hadchr8 hadchr11 hadchr12 wang20* 3 3s a1 a1s a2 a2s chl2 chl2s chl3 chl3s chl4 chl4s chl5 okp1* okp2 okp3* okp4 okp5* 42 7 22 10 15 15 62 62 62 62 53 53 19 19 35 35 27 27 18 50 30 30 61 97 5 6.6 8.6 5.6 5.3 5.4 3.9 3.9 4.2 4.2 5.5 5.5 9.1 9.1 89.8 89.8 92.7 92.7 7.4 14.3 9.6 8.3 10.1 12.6 4 5 11 6 4 3 11 6 11 7 11 7 10 9 35 35 27 27 5 9 11 11 8 15 Optimal 164 230 247 268 358 289 430 834 924 1452 1688 1865 244 2892 1860 48368 59798 61275 61380 195582 236305 240143 245758 939600 937349 969709 979521 ≥8408316 ≥8622498 ≤9000000 1178 1865 2517 1270 2949
2726 1860 2726 2020 2956 2615 3535 2326 3336 5283 7402 8998 13932 589 27718 22502 24019 32893 27923 Best 164 230 247 268 358 289 430 834 924 1452 1688 1865 244 2892 1860 48368 59798 61275 61380 195582 236305 240143 245758 939600 937349 969709 979521 8691947 Avg. 164 230 247 268 358 289 430 834 924 1452 1688 1865 244 2892 1846 48368 59704 61275 61380 195582 236305 240143 245758 939600 937349 968582.3 978727.8 8637809.1 1086 1865 2517 1270 2949 2716 1860 2726 2020 2950 2615 3535 2326 3336 5283 7402 8858 13932 589 27718 22214 24019 32893 27923 1086 1865 2517 1270 2949 2712.5 1846 2722.5 2004 2950 2594 3517.9 2326 3336 5283 7402 8763.5 13932 587 27542.7 22098.6 23859.8 32893 26759 Egeblad & Pisinger Worst Best Time 164 ≤ 0.02 230 ≤ 0.02 247 ≤ 0.02 268 ≤ 0.02 358 ≤ 0.02 289 ≤ 0.02 430 ≤ 0.02 834 ≤ 0.02 924 0.3 1452 ≤ 0.02 1688 ≤ 0.02 1865 0.3 244 ≤ 0.02 2892 1.2 1840 1.6 48368 ≤ 0.02 59563 16.6 61275 2.1 61380 0.9 195582 ≤ 0.02 236305 ≤ 0.02 240143 ≤ 0.02 245758 0.1 939600 ≤ 0.02 937349 0.6 958442 ≤ 0.02 976877 26.2 8615240 239.3 Seed Time 30 60 60 30 30 60 30 60 60 60 60 60 60 120 30 30 30 30 30 30 30 30 60 30 30 30 30 240 ≤ 0.02 0.3 ≤ 0.02 ≤ 0.02 0.4 29.2 1.6 8.6 ≤ 0.02 18.1 2.3 7.6 11.9 55.5 ≤ 0.02 ≤ 0.02 ≤ 0.02 ≤ 0.02 1.7 6.6 22.9 11 4.9 5.4 30 60 30 30 30 60 30 30 30 30 30 30 60 60 240 240 240 240 60 120 60 60 60 120 1086 1865 2517 1270 2949 2711 1840 2721 1960 2950 2545 3516 2326 3336 5283 7402 8658 13932 584 27486 21947 23531 32893 25468 Exact Methods Time Fek-Sch Cap-Mon ≤ 0.02 ≤ 0.02 ≤ 0.02 ≤ 0.02 ≤ 0.02 ≤ 0.02 ≤ 0.02 ≤ 0.02 ≤ 0.02 ≤ 0.02 ≤ 0.02 ≤ 0.02 1.46 0.3 531.93 531.93 4.58 4.58 0.01 ≤ 0.02 0.22 0.19 3.24 2.16 376.52 346.99 0.5 ≤ 0.02 0.12 0.06 1.07 0.22 168.5 136.71 0.08 ≤ 0.02 0.14 ≤ 0.02 16.3 14.76 25.39 16.85 1800 1800 ≤ 0.02 ≤ 0.02 ≤ 0.02 ≤ 0.02 ≤ 0.02 2.72 ≤ 0.02 ≤ 0.02 ≤ 0.02 ≤ 0.02 ≤ 0.02 ≤ 0.02 ≤ 0.02 ≤ 0.02 ≤ 0.02 ≤ 0.02 ≤ 0.02 ≤ 0.02 ≤ 0.02 35.84 1559 10.63 4.05 488.27 2.72 11.6 1535.95 1.91 2.13 488.27 Time Alvarez-Valdes 0 0 0 0 0 0 0 0 0 0 0 
0 - 0.11 0.05 2.14 3.4 0.66 ≤ 0.02

Table 3: Average results for the 630 ngcutfs instances.

                   Avg. 1D Deviation                                    Time
Instance  Beasley  Egeblad & Pisinger  Alvarez-Valdes et al.  Beasley  Egeblad & Pisinger  Alvarez-Valdes et al.
Type 1    1.64     1.19                0.95                   558.1    37.38               19.61
Type 2    1.70     1.29                1.06                   668.4    45.03               23.84
Type 3    1.66     1.19                0.94                   830.0    62.05               32.56

Table 4: Results for the ep2 instances. Instance ep2-30-D-C-25 ep2-30-D-C-75* ep2-30-D-R-25 ep2-30-D-R-75 ep2-30-S-C-25 ep2-30-S-C-75 ep2-30-S-R-25 ep2-30-S-R-75 ep2-30-T-C-25 ep2-30-T-C-75 ep2-30-T-R-25* ep2-30-T-R-75 ep2-30-U-C-25 ep2-30-U-C-75 ep2-30-U-R-25 ep2-30-U-R-75 ep2-30-W-C-25 ep2-30-W-C-75 ep2-30-W-R-25 ep2-30-W-R-75* ep2-50-D-C-25 ep2-50-D-C-75 ep2-50-D-R-25 ep2-50-D-R-75 ep2-50-S-C-25 ep2-50-S-C-75 ep2-50-S-R-25 ep2-50-S-R-75 ep2-50-T-C-25* ep2-50-T-C-75 ep2-50-T-R-25 ep2-50-T-R-75 ep2-50-U-C-25 ep2-50-U-C-75 ep2-50-U-R-25 ep2-50-U-R-75* ep2-50-W-C-25* ep2-50-W-C-75 ep2-50-W-R-25 ep2-50-W-R-75 ep2-100-D-C-25 ep2-100-D-C-75 ep2-100-D-R-25 ep2-100-D-R-75 ep2-100-S-C-25 ep2-100-S-C-75 ep2-100-S-R-25 ep2-100-S-R-75* ep2-100-T-C-25* ep2-100-T-C-75 ep2-100-T-R-25 ep2-100-T-R-75 ep2-100-U-C-25 ep2-100-U-C-75 ep2-100-U-R-25 ep2-100-U-R-75 ep2-100-W-C-25 ep2-100-W-C-75 ep2-100-W-R-25* ep2-100-W-R-75 ep2-200-D-C-25* ep2-200-D-C-75 ep2-200-D-R-25 ep2-200-D-R-75 ep2-200-S-C-25 ep2-200-S-C-75 ep2-200-S-R-25 ep2-200-S-R-75* ep2-200-T-C-25* ep2-200-T-C-75 ep2-200-T-R-25 ep2-200-T-R-75 ep2-200-U-C-25 ep2-200-U-C-75 ep2-200-U-R-25 ep2-200-U-R-75 ep2-200-W-C-25 ep2-200-W-C-75 ep2-200-W-R-25 ep2-200-W-R-75 n 30 30 30 30 30 30 30 30 30 30 30 30 30 30 30 30 30 30 30 30 50 50 50 50 50 50 50 50 50 50 50 50 50 50 50 50 50 50 50 50 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 200 200 200 200 200 200 200 200 200 200 200 200 200 200 200 200 200 200 200 200 n0 7.5 22.2 7.4 22.4 7.4 22.4 7.5 22.5 7.5 22.4 7.4 22.3 7.5 22.5 7.5 22.4 10.6 22.4 10.5 22.2 12.4 37.1 12.5 37.1
12.5 37.4 12.4 37.3 12.4 37.2 12.4 37.2 12.5 37.4 12.4 37.4 13.7 37.3 12.9 37.4 24.5 74.0 24.7 74.6 24.9 74.9 24.9 74.9 24.9 74.7 24.8 74.8 25.0 75.0 25.0 74.9 24.9 74.9 25.0 74.7 49.2 149.7 49.6 149.3 49.9 149.8 49.8 149.7 50.0 149.5 49.7 149.5 50.0 149.8 49.9 149.7 49.6 149.4 49.9 149.6 n1 5 25 6 24 11 22 9 24 9 22 8 23 8 22 8 22 10 22 13 23 11 44 19 38 14 38 12 38 19 39 12 36 13 37 12 38 15 37 13 35 19 77 23 73 28 81 27 80 27 69 25 77 25 75 26 75 29 88 27 77 63 160 57 154 54 155 48 143 41 152 50 145 51 151 49 149 76 160 50 146 Bound 6339 12760 6877 14395 82059 198013 97151 228676 30462 73944 30570 78323 143750 354871 143127 366621 35727 46176 34332 45777 11094 21433 12495 31657 161376 391915 142758 306187 46065 118094 51175 144602 242937 632455 263251 576134 50130 94279 55920 115156 23250 51241 22326 51231 323640 756554 254616 523573 92331 265970 103359 262492 547224 1433510 518661 1216431 70437 167577 70224 247494 46728 127834 43605 99002 649446 1315780 519498 1225926 188684 441796 190638 476289 1084836 2313551 1039584 2447655 161002 390001 196128 511386 Best 6160 12588 6129 14259 81944 195670 85220 225747 22608 73565 26034 77627 133101 343528 137739 352982 35727 46176 34332 45777 10369 21076 11475 31426 154653 387690 138263 303774 46019 114081 51175 141056 231774 619080 227979 564394 50130 87320 55920 114656 22730 49732 22133 50874 314198 747129 250242 519787 92331 256480 102837 257927 505794 1400642 493901 1203544 70164 159929 70224 230809 45987 124146 42138 97694 636050 1297053 506437 1201303 184528 431290 188967 468381 1056636 2265176 1015452 2400803 157508 375767 196086 503222 Best Time 60 240 60 240 60 240 60 240 60 240 60 240 60 240 60 240 60 240 60 240 120 240 120 240 120 240 120 240 120 240 120 240 120 240 120 240 120 240 120 240 240 600 240 600 240 600 240 600 240 600 240 600 240 600 240 600 240 600 240 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 Seed Time 6.2 12.7 35.1 148.8 0.5 12.9 54.0 174.8 0.1 186.8 0.2 
206.4 ≤ 0.02 23.4 ≤ 0.02 0.5 11.8 ≤ 0.02 0.2 0.5 96.8 127.2 119.2 235.6 24.5 40.9 107.4 234.3 12.3 56.9 7.7 138.8 4.8 207.4 8.5 235.8 18.1 21.0 0.5 97.0 225.0 536.7 237.9 520.3 39.2 580.4 210.3 595.6 28.2 316.4 34.3 573.4 42.2 14.0 157.0 418.4 4.9 450.9 165.3 393.8 530.4 536.5 591.4 450.6 39.9 570.2 533.1 581.5 319.9 558.1 587.1 597.6 512.6 549.6 587.0 176.6 577.0 566.1 170.7 586.7 Best/Bound 2.82 1.35 10.88 0.97 0.14 1.20 12.28 1.28 25.78 0.54 14.84 0.89 7.74 3.22 3.76 3.74 0.00 0.00 0.00 0.00 6.56 1.70 8.16 0.78 4.21 1.08 3.19 0.82 0.10 3.42 0.00 2.45 4.60 2.17 13.40 2.07 0.00 7.38 0.00 0.49 2.25 2.99 0.92 0.73 2.98 1.29 1.77 0.76 0.00 3.65 0.56 1.78 7.63 2.29 4.77 1.10 0.39 4.56 0.00 6.80 1.73 2.99 3.46 1.39 2.15 1.49 2.59 2.06 2.26 2.44 0.98 1.72 2.84 2.15 2.39 1.97 2.21 3.78 0.02 1.69 Avg./Bound 2.82 1.46 10.88 1.18 0.14 1.37 14.19 1.34 25.78 0.55 14.84 1.20 7.74 3.22 3.76 3.74 0.00 0.00 0.00 0.00 6.85 1.96 8.82 1.51 4.21 1.75 3.22 1.03 0.10 4.25 0.00 2.81 4.79 2.47 13.75 2.29 0.00 7.38 0.00 0.95 2.80 3.85 1.31 1.11 3.57 1.63 2.69 1.26 0.00 4.06 0.78 2.39 8.70 2.29 5.28 1.26 0.71 4.88 0.04 7.40 2.55 3.87 4.83 2.19 2.78 2.29 5.30 3.10 3.48 3.35 1.23 2.67 2.96 2.58 3.70 2.14 2.97 4.26 0.14 2.24 Worst/Bound 2.82 1.97 10.88 1.53 0.14 1.47 14.40 1.69 25.78 0.61 14.84 1.41 7.74 3.22 3.76 3.74 0.00 0.00 0.00 0.00 7.21 2.40 12.01 2.11 4.21 3.41 3.27 1.30 0.10 4.82 0.00 4.02 4.87 3.64 13.90 3.54 0.00 7.38 0.00 1.22 3.47 4.99 2.14 2.08 4.15 2.49 3.70 1.68 0.00 4.96 1.05 3.49 9.77 2.29 5.51 1.74 1.06 6.38 0.12 8.33 3.99 4.73 5.80 3.47 3.43 3.08 7.48 3.75 3.84 4.57 1.53 3.64 2.97 2.78 5.05 2.65 3.83 5.10 0.26 2.79 Heuristic approaches for the two- and three-dimensional knapsack packing problem Solution quality 100 okp5 ep2−200−U−C−75 ep2−100−S−R−75 97.5 0 50 100 Running time Figure 7: Illustration of the heuristic behavior over time. 
6.2.5 Results with Rotations

We repeated all tests allowing rotation, doubling the running time to accommodate the larger solution space. A maximum time limit of 600 seconds was still assigned to all instances. Parameter tuning revealed that the settings of t0 and ts reported in Section 6.2.3 also give good results when rotation is allowed. The results on the two sets of instances are reported in Tables 5 and 6. The tables follow the same format as the tables without rotation; however, the column ‘No rotation’ contains the value of the optimal solution without rotation for the classical instances, and the best solution without rotation for the ep2 instances, where optimal solutions are not known. For gcut13 no optimal value without rotation is known, and we have reported the best solution from Table 2. To compare the quality of the solutions we used the (1DKP) upper bound, which is weaker than the bound from conservative scales. The column ‘No rotation/1D’ contains the percentage deviation between the result without rotation and the (1DKP) upper bound, and ‘Best/1D’, ‘Avg./1D’, ‘Worst/1D’ are the deviations between the best, average and worst results over the 10 runs and the (1DKP) upper bound. With the exception of two instances, the results with rotation for the classical benchmark instances are better than or equal to the results without rotation. This was expected, since rotations make it possible to arrange the rectangles in more ways, and hence denser packings can be obtained. Indeed, the heuristic solution is generally larger than 95% of the (1DKP) upper bound, and often reaches 98% when rotation is allowed. For 28 of the ep2 instances, we get results which are more than 2% closer to the upper bound than without rotation. We get slightly worse results when allowing rotation in 15 of the 80 cases, but only for 3 instances is the result more than 2% further from the upper bound.
This demonstrates the heuristic’s ability to handle rotation well, even though the solution space is increased by a factor of 2^n, which makes it much harder to find near-optimal solutions. The average deviation between the best result and the (1DKP) upper bound for the ep2 instances is 2.3%, which means the heuristic performs well even with rotation for large instances. It also shows that utilization increases by a few percent if rotation is allowed.

Table 5: Results with rotation for the benchmark instances. Instance beasley1 beasley2 beasley3 beasley4 beasley5 beasley6 beasley7 beasley8 beasley9 beasley10 beasley11 beasley12 cgcut1 cgcut2 cgcut3 gcut1 gcut2 gcut3 gcut4 gcut5 gcut6 gcut7 gcut8 gcut9 gcut10 gcut11 gcut12 gcut13 hadchr-3 hadchr-7 hadchr-8 hadchr-11 hadchr-12 wang20 3 3s a1 a1s a2 a2s chl2 chl2s chl3 chl3s chl4 chl4s chl5 okp1 okp2 okp3 okp4 okp5 1D 201 253 266 275 373 317 430 938 962 1517 1864 2012 260 2919 2020 62488 62500 62500 62500 249854 249992 249998 250000 997256 999918 1000000 1000000 9000000 1347 2012 3079 1547 3604 2800 2020 2800 2140 3000 2705 3600 2502 3410 5283 7402 8998 13932 600 29133 24800 26714 33631 29045 No rotation 164 230 247 268 358 289 430 834 924 1452 1688 1865 244 2892 1860 48368 59798 61275 61380 195582 236305 240143 245758 939600 937349 969709 979521 ≥ 8736757 1178 1865 2517 1270 2949 2771 1860 2726 2020 2956 2615 3335 2326 3336 5283 7402 8998 13932 589 27718 22502 24019 32893 27923 Best 193 250 259 268 370 300 430 886 924 1452 1786 1932 260 2909 1940 58136 60656 61275 61710 233969 239467 245306 247462 953293 938036 979580 987674 8897979 1272 1932 2722 1431 3252 2762 1940 2758 2120 2985 2690 3579 2429 3390 5283 7402 8912 13932 600 28423 24263 25216 32893 27971 Best Time ≤ 0.02 0.3 0.4 ≤ 0.02 ≤ 0.02 23.9 ≤ 0.02 0.2 4.4 ≤ 0.02 ≤ 0.02 1.8 0.7 115.3 6.3 ≤ 0.02 23.9 49.6 30.1 1.1 0.1 34.5 21 ≤ 0.02 0.1 20 8.9 344.5 ≤ 0.02 9.7 ≤ 0.02 1 9.8 118 0.9 12.6 40.2 1.7 18.4 99.1 8.9 7.8 ≤ 0.02 ≤ 0.02 ≤
0.02 ≤ 0.02 7.6 41.7 1.6 7.2 19.7 121 Seed Time 60 120 120 60 60 120 60 120 120 120 120 120 120 240 60 60 60 60 60 60 60 60 120 60 60 60 60 480 60 120 60 60 60 120 60 60 60 60 120 120 120 120 480 480 480 480 120 240 120 120 120 240 No rotation/1D 18.41 9.09 7.14 2.55 4.02 8.83 0.00 11.09 3.95 4.28 9.44 7.31 6.15 0.92 7.92 22.60 4.32 1.96 1.79 21.72 5.47 3.94 1.70 5.78 6.26 3.03 2.05 2.92 12.55 7.31 18.25 17.91 18.17 1.04 7.92 2.64 5.61 1.47 3.33 7.36 7.03 2.17 0.00 0.00 0.00 0.00 1.83 4.86 9.27 10.09 2.19 3.86 Best/1D 3.98 1.19 2.63 2.55 0.80 5.36 0.00 5.54 3.95 4.28 4.18 3.98 0.00 0.34 3.96 6.96 2.95 1.96 1.26 6.36 4.21 1.88 1.02 4.41 6.19 2.04 1.23 1.13 5.57 3.98 11.59 7.50 9.77 1.36 3.96 1.50 0.93 0.50 0.55 0.58 2.92 0.59 0.00 0.00 0.96 0.00 0.00 2.44 2.17 5.61 2.19 3.70 Avg./1D 3.98 1.19 2.63 2.55 0.80 5.99 0.00 5.54 4.57 4.28 4.18 4.47 0.00 0.34 3.96 6.96 3.22 2.26 1.28 6.36 4.21 2.64 1.12 4.41 6.19 2.46 1.23 1.64 5.57 4.03 11.59 7.50 9.77 1.50 3.96 1.54 1.78 0.50 0.55 0.58 3.20 0.59 0.00 0.00 1.77 0.00 0.00 4.24 5.19 7.28 2.19 6.06 Worst/1D 3.98 1.19 2.63 2.55 0.80 5.99 0.00 5.54 4.57 4.28 4.18 4.47 0.00 0.34 3.96 6.96 3.31 2.92 1.47 6.36 4.21 2.83 1.26 4.41 6.19 2.96 1.23 2.32 5.57 4.47 11.59 7.50 9.77 1.50 3.96 1.57 1.87 0.50 0.55 0.58 4.32 0.59 0.00 0.00 2.98 0.00 0.00 5.47 8.05 10.99 2.19 11.43 Heuristic approaches for the two- and three-dimensional knapsack packing problem Table 6: Results with rotation for the ep2 instances. 
Instance ep2-30-D-C-25 ep2-30-D-C-75 ep2-30-D-R-25 ep2-30-D-R-75 ep2-30-S-C-25 ep2-30-S-C-75 ep2-30-S-R-25 ep2-30-S-R-75 ep2-30-T-C-25 ep2-30-T-C-75 ep2-30-T-R-25 ep2-30-T-R-75 ep2-30-U-C-25 ep2-30-U-C-75 ep2-30-U-R-25 ep2-30-U-R-75 ep2-30-W-C-25 ep2-30-W-C-75 ep2-30-W-R-25 ep2-30-W-R-75 ep2-50-D-C-25 ep2-50-D-C-75 ep2-50-D-R-25 ep2-50-D-R-75 ep2-50-S-C-25 ep2-50-S-C-75 ep2-50-S-R-25 ep2-50-S-R-75 ep2-50-T-C-25 ep2-50-T-C-75 ep2-50-T-R-25 ep2-50-T-R-75 ep2-50-U-C-25 ep2-50-U-C-75 ep2-50-U-R-25 ep2-50-U-R-75 ep2-50-W-C-25 ep2-50-W-C-75 ep2-50-W-R-25 ep2-50-W-R-75 ep2-100-D-C-25 ep2-100-D-C-75 ep2-100-D-R-25 ep2-100-D-R-75 ep2-100-S-C-25 ep2-100-S-C-75 ep2-100-S-R-25 ep2-100-S-R-75 ep2-100-T-C-25 ep2-100-T-C-75 ep2-100-T-R-25 ep2-100-T-R-75 ep2-100-U-C-25 ep2-100-U-C-75 ep2-100-U-R-25 ep2-100-U-R-75 ep2-100-W-C-25 ep2-100-W-C-75 ep2-100-W-R-25 ep2-100-W-R-75 ep2-200-D-C-25 ep2-200-D-C-75 ep2-200-D-R-25 ep2-200-D-R-75 ep2-200-S-C-25 ep2-200-S-C-75 ep2-200-S-R-25 ep2-200-S-R-75 ep2-200-T-C-25 ep2-200-T-C-75 ep2-200-T-R-25 ep2-200-T-R-75 ep2-200-U-C-25 ep2-200-U-C-75 ep2-200-U-R-25 ep2-200-U-R-75 ep2-200-W-C-25 ep2-200-W-C-75 ep2-200-W-R-25 ep2-200-W-R-75 Best 6340 12672 6875 14330 81944 195670 83160 225747 26436 73565 29535 77387 141588 351682 140287 360202 37851 53668 37258 52895 11010 21230 12456 31531 154653 387526 138335 303605 46327 117226 53383 142124 246645 626218 241032 570751 51360 111307 57561 113868 23220 50515 22281 50772 316238 745608 251303 515379 99429 260968 102957 259019 531585 1400642 504801 1203634 94098 165709 103431 242357 46056 123724 42844 96344 637579 1292256 505687 1198449 186248 432747 189141 465169 1073880 2265346 1035978 2406562 158468 382662 210864 501738 Best time 57.4 347.5 113.6 115.3 1.8 30.4 1.4 127.2 0.5 144.4 3.3 77.6 0.2 204.7 22.6 450.9 75.6 174.6 32.5 254.1 90.0 278.8 209.6 299.1 9.3 285.3 235.4 313.8 91.8 303.3 172.2 388.1 0.1 365.4 26.6 398.3 22.2 354.7 102.5 452.1 415.3 378.0 51.7 410.8 341.7 354.1 343.7 509.9 98.6 574.9 303.1 
497.6 192.6 147.6 48.2 576.3 205.3 599.5 444.4 356.2 473.0 561.3 572.0 593.3 333.6 596.0 589.1 576.0 77.0 563.5 577.5 583.1 238.0 570.2 422.7 591.2 506.2 587.2 575.9 599.7 Seed Time 120 480 120 480 120 480 120 480 120 480 120 480 120 480 120 480 120 480 120 480 240 480 240 480 240 480 240 480 240 480 240 480 240 480 240 480 240 480 240 480 480 600 480 600 480 600 480 600 480 600 480 600 480 600 480 600 480 600 480 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 No rotation/1D 4.23 1.35 16.54 0.94 6.04 1.18 12.28 1.28 29.29 0.51 16.28 0.89 7.41 3.20 7.51 3.72 11.44 14.33 9.43 14.61 6.54 1.67 8.81 0.73 4.17 1.08 3.15 0.79 2.26 3.40 5.99 2.45 14.86 2.11 14.39 2.04 4.21 22.17 3.92 0.43 2.24 2.94 0.86 0.70 2.92 1.25 1.72 0.72 8.94 3.57 0.51 1.74 7.57 2.29 4.77 1.06 26.63 6.77 32.83 6.74 1.59 2.88 3.36 1.32 2.06 1.42 2.51 2.01 2.20 2.38 0.88 1.66 2.60 2.09 2.32 1.91 2.17 3.65 8.51 1.60 Best/1D 1.43 0.69 6.39 0.45 6.04 1.18 14.40 1.28 17.32 0.51 5.02 1.20 1.50 0.90 5.80 1.75 6.17 0.43 1.71 1.33 0.76 0.95 1.02 0.40 4.17 1.12 3.10 0.84 1.61 0.74 1.93 1.71 9.39 0.99 9.49 0.93 1.86 0.79 1.10 1.12 0.13 1.42 0.20 0.90 2.29 1.45 1.30 1.57 1.94 1.88 0.39 1.32 2.86 2.29 2.67 1.05 1.61 3.40 1.06 2.08 1.44 3.22 1.75 2.68 1.83 1.79 2.66 2.24 1.29 2.05 0.79 2.33 1.01 2.08 0.35 1.68 1.57 1.88 1.62 1.89 Avg./1D 1.52 1.20 6.58 0.62 6.04 1.56 14.40 1.40 17.87 1.04 5.02 1.80 1.50 0.90 5.80 1.75 6.33 0.83 2.29 1.80 1.27 1.56 2.78 1.22 4.20 2.18 3.18 1.23 1.88 1.01 2.10 2.23 9.39 1.46 9.49 1.71 1.86 1.21 1.25 1.55 0.26 2.67 0.42 1.78 3.37 1.91 2.30 2.02 3.31 2.94 1.06 2.30 5.11 2.37 4.27 1.63 2.57 3.69 2.83 2.87 2.32 3.56 3.62 3.64 2.86 2.52 4.78 4.33 2.32 2.66 1.43 3.67 2.04 2.34 1.95 2.06 2.36 3.85 2.10 3.51 Worst/1D 1.55 1.50 6.66 0.93 6.04 2.20 14.40 2.02 18.52 1.50 5.02 2.49 1.50 0.90 5.80 1.75 6.34 1.37 2.85 2.70 1.66 2.05 5.65 1.84 4.43 3.41 3.53 1.78 3.42 1.23 2.28 3.56 9.39 2.11 9.49 1.80 1.86 2.05 1.26 2.18 0.71 4.17 0.64 2.81 4.36 2.45 3.79 3.96 
6. Computational experiments

Figure 8: Best results for three two-dimensional instances without rotation (ep2-50-T-R-75, ep2-200-D-R-75, ep2-100-U-R-75).

Figure 9: A solution to gcut13 with profit 8736757, reached after 144 runs of 10 minutes each.

6.2.6 The instance gcut13

Since gcut13 is the only one of the classical benchmark instances from the literature for which the optimal solution is unknown, we decided to investigate this instance further without considering rotations. We used 144 seeds with 10 minutes of running time per seed; the total running time for this instance was thus 24 hours. The parameters for the simulated annealing were based on the results gathered during parameter tuning for gcut13 and were set to t0 = 0.1 and ts = 10. The best result, 8736757, was reached after 367 seconds on one of the seeds; it is 0.5% closer to the upper bound than the 8691947 we were able to reach with 10 seeds and 4 minutes of running time. This demonstrates that the heuristic is able to return marginally improved results given more running time. The resulting placement can be seen in Figure 9.

Heuristic approaches for the two- and three-dimensional knapsack packing problem

Table 7: The 5 different classes of new ep3 instances.

Class | Description | Width | Height | Depth
F | Flat. Boxes are flat. | [50, 100] | [50, 100] | [25, 60]
L | Long. Boxes are long. | [1, 2/3 · 100] | [1, 3/2 · 100] | [50, 100]
C | Cubes. Boxes are cubes. | [1, 100] | Equal to width | Equal to width
U | Uniform. Largest dimension is no more than 200% of the smallest. | [50, 100] | [50, 100] | [50, 100]
D | Diverse. Largest dimension can be up to 50 times the smallest. | [1, 50] | [1, 50] | [1, 50]

6.3 3D computational experiments

A set of experiments was also conducted using the three-dimensional variant of our heuristic, following the same scheme as the experiments conducted for the two-dimensional variant. New instances for 3DKP are introduced in Section 6.3.1, the parameter tuning is outlined in Section 6.3.2, and results are presented in Section 6.3.3.

6.3.1 New instances

As we were unable to locate any benchmark instances for the three-dimensional knapsack problem in the literature, we have generated 60 random instances. It should be noted that Fekete et al. [14] do report results for a number of 3DKP problem instances, but the instances are not described in detail. The new instances contain 20, 40 or 60 boxes. The dimensions of the boxes were chosen from the 5 classes described in Table 7: the width, height and depth of the boxes in each class are selected randomly from the intervals in the ‘Width’, ‘Height’ and ‘Depth’ columns of the table. As in the two-dimensional case, boxes are clustered or random, and the container has a volume equal to 50% or 90% of the total volume of the boxes. The naming convention is ep3-n-c-t-p, where n ∈ {20, 40, 60} is the number of boxes, c ∈ {F, L, C, U, D} describes the class, t ∈ {C, R} describes whether the instance is clustered or random, and p ∈ {50, 90} describes the size of the bin as a percentage of the total box volume. The profit of a box is set to its volume times a random number from {1, 2, 3}. The instances are presented in Table 8 and are available, along with the source code to generate them, at this web address: http://www.diku.dk/~pisinger/codes.html.
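The generation rules above can be sketched as follows. This is a hypothetical generator, not the published code; the interval bounds follow Table 7, the long class L is omitted for brevity, and all helper names are our own.

```python
import random

# Interval bounds per class (hypothetical sketch of the rules in Table 7).
CLASSES = {
    "F": ((50, 100), (50, 100), (25, 60)),   # flat boxes
    "U": ((50, 100), (50, 100), (50, 100)),  # uniform boxes
    "D": ((1, 50), (1, 50), (1, 50)),        # diverse boxes
}

def generate_instance(n, cls, fill=0.5, seed=0):
    """Generate n boxes of class cls; container holds `fill` of total volume."""
    rng = random.Random(seed)
    boxes = []
    for _ in range(n):
        if cls == "C":                        # cubes: height and depth equal width
            w = rng.randint(1, 100)
            h = d = w
        else:
            (wlo, whi), (hlo, hhi), (dlo, dhi) = CLASSES[cls]
            w = rng.randint(wlo, whi)
            h = rng.randint(hlo, hhi)
            d = rng.randint(dlo, dhi)
        profit = w * h * d * rng.choice([1, 2, 3])  # profit = volume times 1, 2 or 3
        boxes.append((w, h, d, profit))
    total_volume = sum(w * h * d for w, h, d, _ in boxes)
    container_volume = fill * total_volume    # fill is 0.5 or 0.9
    return boxes, container_volume
```

The clustered variants (t = C) would additionally duplicate a few base box types; that step is left out here.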
6.3.2 Parameters

As for the two-dimensional instances we determine values n0 and n1, and for each instance we set the running time based on the function F(n, n0) = n0 lg n: for F(n, n0) ≤ 110 the running time is set to 120 seconds, for 110 < F(n, n0) ≤ 200 it is set to 300 seconds, and for F(n, n0) > 200 it is set to 600 seconds. Nine three-dimensional instances were selected for the parameter-tuning tests. Values for t0 were selected from {10^0, 10^1, 10^2, 10^3, 10^4, 10^5, 10^6, 10^7, 10^8} and values for ts from {10^-10, 10^-8, 10^-6, 10^-4, 10^-1, 10^0, 10^2, 10^4, 10^6}. Based on the 81 parameter combinations we found results similar to the two-dimensional case, and from these we determined good values to be t0 = n1^2 and ts = n1^2/5.

6.3.3 Results

The results from our 3D tests are presented in Table 8, using the same format as the tables of the two-dimensional results. For the instances with 20 items, the gap between the best found solution and the conservative scales upper bound is as large as 38.7% for one instance but only 13.6% on average, and in two cases the heuristic reaches the upper bound, thus finding optimal solutions. For the instances with 40 items, 16 of the best solutions are within 15% of the upper bound and as many as 7 are within 10%. The average gap between the best solutions and the upper bound is 12.53%. For the instances with 60 items, the heuristic reaches best solutions with a gap of less than 20% to the upper bound for all but one of the instances. For 8 of the instances the gap is less than 10%, and the average gap is as low as 11.4%. The best results for eight of the instances are shown in Figure 10. Our method finds high-quality solutions quickly, with an average gap to the upper bound of only 12.8%. In most of the instances the best solution is found long before the heuristic's time limit, so solutions may be significantly closer to the optimal value than the bounds indicate.
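The running-time rule of Section 6.3.2 can be written out directly; a small sketch, with the function name our own:

```python
import math

def running_time_seconds(n, n0):
    """Running-time rule from Section 6.3.2: F(n, n0) = n0 * lg n,
    mapped to 120 s, 300 s or 600 s by the thresholds 110 and 200."""
    f = n0 * math.log2(n)
    if f <= 110:
        return 120
    if f <= 200:
        return 300
    return 600
```

For example, the ep3-20 instances with n0 around 10 get 120 seconds, while the densest ep3-60 instances exceed the 200 threshold and get 600 seconds, matching the Seed time column of Table 8.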
Results with large gaps could be due to the geometry of the boxes, which can make it hard to utilize the three-dimensional knapsack as well as in the one-dimensional problems. Another explanation could be that the conservative scales bound does not function well on those instances, or that the heuristic simply performs poorly in some cases. To the best of our knowledge, no other authors report gaps to the upper bound for the three-dimensional knapsack packing problem. Fekete et al. [14] report only the number of problems solved to optimality. Their items are also larger than ours, and we suspect that far fewer may be loaded in the knapsack than in our instances. This reduces the solution space and increases the strength of the bounds, making the instances easier to solve than the instances considered herein. Instances for the Container Loading Problem contain hundreds of items, and state-of-the-art heuristics (e.g. [24, 27, 29]) reach volume utilizations of slightly more than 90% on average. Initial experiments with our heuristic on container loading instances resulted in solutions with utilizations around 84% within 1 minute. Container loading heuristics, however, only optimize volume utilization and hence cannot handle a general profit objective. Moreover, they exploit the fact that most container loading instances contain many similar items. The strength of our heuristic is that it is not based on assumptions about the item sizes or profits. Based on our experiments, we believe our heuristic is most suitable for medium-sized instances where fewer than 80 items can fit in the knapsack simultaneously. We also suspect that the proposed heuristic for 3DKP cannot compete with heuristics tailored specifically for the Container Loading Problem.

7 Conclusion

In this paper we have presented simulated-annealing-based approaches for the two- and three-dimensional knapsack problem.
For the two-dimensional knapsack problem we utilize an abstract representation of rectangle packings called the sequence pair, whereas for the three-dimensional problem we utilize a novel abstract representation of box packings called the sequence triple. We have proved that the sequence triple is able to represent any fully robot-packable packing. The heuristic for two dimensions is generally able to reproduce the results of exact algorithms with similar running times. The heuristic also gives the best known results for the only unsolved classical instance, gcut13. To demonstrate the high quality of the results of the heuristic for larger instances we have created a new set of instances with up to 200 rectangles, and also here the heuristic performs extremely well, generating results with an average gap to our upper bound of less than 3.0%. The heuristic for the three-dimensional case demonstrates the potential of the sequence triple representation.

Table 8: Results for the new ep3 instances. [Table flattened in extraction; its columns are Instance, n, n0, n1, 1D, Bound, Best, Best time, Seed time, Best/Bound, Avg./Bound and Worst/Bound for the 60 instances ep3-20-C-C-50.3kp through ep3-60-U-R-90.3kp.]

Figure 10: Best results for 8 three-dimensional instances (ep3-20-C-C-50, ep3-20-L-C-90, ep3-40-D-R-90, ep3-40-U-R-50, ep3-40-F-R-90, ep3-60-L-C-50, ep3-60-U-C-50, ep3-60-U-R-50).
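To illustrate the sequence-pair semantics, the following minimal sketch (our own illustration, not the paper's implementation) decodes a pair of permutations into a bottom-left placement: block a lies left of b when a precedes b in both sequences, and below b when a follows b in the first sequence but precedes it in the second. Faster O(n log n) evaluation via longest common subsequences is described by Tang et al. [35].

```python
def decode_sequence_pair(seq_plus, seq_minus, w, h):
    """Minimal O(n^2) decoding of a sequence pair into block coordinates.

    a is left of b if a precedes b in both sequences;
    a is below b   if a follows b in seq_plus but precedes b in seq_minus.
    """
    pp = {b: i for i, b in enumerate(seq_plus)}
    x, y = {}, {}
    for b in seq_minus:  # seq_minus order is topological for both relations
        lefts = [x[a] + w[a] for a in x if pp[a] < pp[b]]
        belows = [y[a] + h[a] for a in y if pp[a] > pp[b]]
        x[b] = max(lefts, default=0)
        y[b] = max(belows, default=0)
    return x, y
```

With the pair (AB, AB), A is placed left of B; with (BA, AB), A is placed below B. The sequence triple extends the same idea with a third permutation to resolve the depth axis.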
Exact methods are capable of solving small problems to optimality, and tailored heuristics based on greedy principles work well for container loading problems. The proposed local-search-based heuristic performs very well for medium-sized problems, with an average gap to the upper bound of only 13%. The heuristics are generally able to return very good results for both two- and three-dimensional problems within a few minutes, and often within seconds for the classical two-dimensional benchmark instances.

Acknowledgements

The authors wish to thank the anonymous reviewers for their many helpful and supporting comments.

References

[1] R. Alvarez-Valdes, F. Parreño, and J.M. Tamarit. A tabu search algorithm for two-dimensional non-guillotine cutting problems. Technical Report TR07-2004, Universitat de Valencia, 2004.
[2] R. Alvarez-Valdes, F. Parreño, and J.M. Tamarit. A GRASP algorithm for constrained two-dimensional non-guillotine cutting problems. Journal of the Operational Research Society, 56:414–425, 2005.
[3] R. Baldacci and M. A. Boschetti. A cutting-plane approach for the two-dimensional orthogonal non-guillotine cutting problem. European Journal of Operational Research, 2006. Available online.
[4] J. E. Beasley. A population heuristic for constrained two-dimensional non-guillotine cutting. European Journal of Operational Research, 156:601–607, 2004.
[5] J. E. Beasley. Algorithms for two-dimensional unconstrained guillotine cutting. Journal of the Operational Research Society, 36:297–306, 1985.
[6] J. E. Beasley. An exact two-dimensional non-guillotine cutting tree search procedure. Operations Research, 33:49–64, 1985.
[7] M. A. Boschetti, E. Hadjiconstantinou, and A. Mingozzi. New upper bounds for the two-dimensional orthogonal cutting stock problem. IMA Journal of Management Mathematics, 13:95–119, 2002.
[8] A. Caprara and M. Monaci. On the 2-dimensional knapsack problem. Operations Research Letters, 1(32):5–14, 2004.
[9] N. Christofides and C. Whitlock.
An algorithm for two-dimensional cutting stock problems. Operations Research, 25:30–44, 1977.
[10] R. S. Dembo and P. L. Hammer. A reduction algorithm for knapsack problems. Methods of Operations Research, 36:49–60, 1980.
[11] S. P. Fekete and J. Schepers. A new exact algorithm for general orthogonal d-dimensional knapsack problems. In Algorithms – ESA '97, Springer Lecture Notes in Computer Science, volume 1284, pages 144–156, 1997.
[12] S. P. Fekete and J. Schepers. On more-dimensional packing III: Exact algorithms. Submitted to Discrete Applied Mathematics, 1997.
[13] S. P. Fekete and J. Schepers. A general framework for bounds for higher-dimensional orthogonal packing problems. Mathematical Methods of Operations Research, 60:81–94, 2004.
[14] S. P. Fekete, J. Schepers, and J. C. van der Veen. An exact algorithm for higher-dimensional orthogonal packing. Operations Research, 3(55):569–587, 2007.
[15] E. Hadjiconstantinou and N. Christofides. An exact algorithm for general, orthogonal, two-dimensional knapsack problems. European Journal of Operational Research, 83:39–56, 1995.
[16] M. Hifi. Two-dimensional (un)constrained cutting stock problems. http://www.laria.u-picardie.fr/hifi/OR-Benchmark/2Dcutting/, 2006.
[17] H. Murata, K. Fujiyoshi, S. Nakatake, and Y. Kajitani. VLSI module packing based on rectangle-packing by the sequence pair. IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, 15:1518–1524, 1996.
[18] S. Jakobs. On genetic algorithms for the packing of polygons. European Journal of Operational Research, 88:165–181, 1996.
[19] H. Kellerer, U. Pferschy, and D. Pisinger. Knapsack Problems. Springer, Berlin, Germany, 2004.
[20] K. K. Lai and J. W. M. Chan. Developing a simulated annealing algorithm for the cutting stock problem. Computers and Industrial Engineering, 32:115–127, 1997.
[21] K. K. Lai and J. W. M. Chan. An evolutionary algorithm for the rectangular cutting stock problem.
International Journal on Industrial Engineering, 4:130–139, 1997.
[22] T. W. Leung, C. H. Yung, and M. D. Troutt. Applications of genetic search and simulated annealing to the two-dimensional non-guillotine cutting stock problem. Computers and Industrial Engineering, 40:201–204, 2001.
[23] T. W. Leung, C. H. Yung, and M. D. Troutt. Application of a mixed simulated annealing-genetic algorithm heuristic for the two-dimensional orthogonal packing problem. European Journal of Operational Research, 145:530–542, 2003.
[24] D. Mack, A. Bortfeldt, and H. Gehring. A parallel hybrid local search algorithm for the container loading problem. International Transactions in Operational Research, 11:511–533, 2004.
[25] S. Martello, M. Monaci, and D. Vigo. An exact approach to the strip packing problem. INFORMS Journal on Computing, 3(15):310–319, 2003.
[26] S. Martello, D. Pisinger, D. Vigo, E. den Boef, and J. Korst. Algorithm 864: General and robot-packable variants of the three-dimensional bin packing problem. ACM Transactions on Mathematical Software, 33:1–7, 2007.
[27] A. Moura and J. F. Oliveira. A GRASP approach to the container-loading problem. IEEE Intelligent Systems, 20(4):50–57, 2005.
[28] D. Pisinger. A minimal algorithm for the 0-1 knapsack problem. Operations Research, 45:758–767, 1997.
[29] D. Pisinger. Heuristics for the container loading problem. European Journal of Operational Research, 3(141):382–392, 2002.
[30] D. Pisinger. Denser packings obtained in O(n log log n) time. INFORMS Journal on Computing, 19:395–405, 2007.
[31] D. Pisinger and M. Sigurd. The two-dimensional bin packing problem with variable bin sizes and costs. Discrete Optimization, 2:154–167, 2005.
[32] D. Pisinger and M. M. Sigurd. Using decomposition techniques and constraint programming for solving the two-dimensional bin packing problem. INFORMS Journal on Computing, 19:36–51, 2007.
[33] G. Wäscher, H.
Haußner, and H. Schumann. An improved typology of cutting and packing problems. European Journal of Operational Research, 2007. Available online.
[34] X. Tang and D. F. Wong. FAST-SP: a fast algorithm for block packing based on sequence pair. In Asia and South Pacific Design Automation Conference, 2001.
[35] X. Tang, R. Tian, and D. F. Wong. Fast evaluation of sequence pair in block placement by longest common subsequence computation. In Proceedings of DATE 2000 (ACM), Paris, France, pages 106–110, 2000.
[36] P. Y. Wang. Two algorithms for constrained two-dimensional cutting stock problems. Operations Research, 31:573–586, 1983.

Heuristics for container loading of furniture

Submitted, 2007.

Jens Egeblad ([email protected]), Claudio Garavelli ([email protected]), Stefano Lisi ([email protected]), David Pisinger ([email protected])

Abstract

We consider a real-life container loading problem which occurs at a typical furniture producer. The problem is to determine an optimal subset of a larger set of furniture which can be loaded into a container of given dimensions. Each item has an associated profit, and a loadable subset of items with maximal total profit is considered optimal. In the studied company, the problem arises hundreds of times daily during the planning of transportation of products to clients. The instances may contain more than one hundred different items with irregular shapes. Large items are combined in specific structures to ensure proper protection of the items during transportation and to reduce the complexity of the remaining problem. We have developed a method composed of several heuristics which are applied successively to the problem. The average loading utilization is 91.3% for the most general instances, with average running times around 100 seconds.

Keywords: Packing, Combinatorial Optimization, Logistics, Transportation, Heuristics.
1 Introduction

The container loading optimization problem is a central problem in industry, where it appears in various formulations such as bin-packing, knapsack packing, container loading and multi-container loading. Surveys on packing problems were presented by Dyckhoff et al. [24] and Wäscher et al. [62]. In this paper we consider container loading of pieces of furniture. The problem occurs at a typical furniture producer, and solutions to hundreds of such problems may be required every day. Solutions must be generated within minutes on commodity hardware. Items may have non-rectangular (irregular) shapes, and each item has an associated profit value which describes how desirable it is to load. The problem we address can be formulated as a three-dimensional knapsack packing problem with irregular shapes: given a consignment of items and container dimensions W, H and D, the objective is to determine the maximal-profit subset of the consignment which can fit inside the container. It is easy to see that, since the one-dimensional knapsack problem is NP-hard (see e.g. [41]), the three-dimensional variant is also NP-hard. In the typology of Wäscher et al. [62] the problem belongs to the category strongly heterogeneous three-dimensional Single Knapsack Problem (SKP) with irregular shapes. Items (the products produced) are divided into three categories:

Figure 1: Example of irregular items to be loaded.

Figure 2: Example solution with more than 300 items and our coordinate system convention (axes x, y, z; container dimensions W, H, D).

• Large: Irregularly shaped items such as armchairs, 2- and 3-seat sofas, chaise lounges and corners, which are used to combine two segments of a sofa into a corner sofa. To avoid damage during transportation, items must be placed in a stable location and only a specific set of rotations is allowed. The validity of a position and rotation may also depend on the surrounding items. See Figure 1.
• Medium-sized: Box-shaped robust items, such as ottomans. All six possible axis-aligned rotations are allowed.

• Small: Small accessory box-shaped items or items loaded in cardboard boxes, e.g. vases, lamps and glass plates for tables. These items have different levels of fragility and may be allowed only a subset of the six possible axis-aligned rotations.

An example solution with more than 300 items from all three categories is illustrated in Figure 2. For a problem instance we let L, M, S and I = L ∪ M ∪ S be the sets of large, medium, small and all items, respectively. Instances are generally weakly heterogeneous, and we let L, M, S and I also denote the corresponding sets of distinct types of large, medium, small and all items. Although all items i ∈ I have an associated profit pi, large items are always more desirable than medium-sized items, which in turn are more desirable than small items. In this paper we present a new strategy for handling problems with three-dimensional irregular shapes based on combination and simplification of items. This is achieved by a geometric algorithm and heuristics that generate building blocks called templates. To place large items we also introduce a new simple paradigm called quad-walls, which uses templates for efficient placement. Our solution method is divided into several steps: during a preprocessing stage, combinations of large items are determined. Then a tree-search heuristic finds an initial solution for L. A local search heuristic is used to refine the initial solution and ensure stability. Next, medium-sized items are placed using a greedy approach. Finally, small items are placed at the end of the container, using a wall-building heuristic, and in remaining free space, using a greedy heuristic. This paper is organized as follows: First, we present an overview of previous relevant work in Section 2.
Then, in Section 3, we present an overview of our heuristic method and devote a section to each step of the optimization process in Sections 4 to 10. Finally, we present the experimental results in Section 11, followed by a conclusion in Section 12.

2 Related Work

While the three-dimensional knapsack problem with irregular shapes that we study in this paper has not previously been studied in the literature, several variants of the problem have been investigated since the seminal work of Gilmore and Gomory [32] in the 1960s. For two- and three-dimensional packing the most common problem definitions are container loading, pallet packing, strip-packing, bin-packing and knapsack-packing. In general, some or all container dimensions are given and one must find a placement of items such that the items do not overlap and some objective is minimized or maximized. The problems are also commonly NP-hard. In the following we discuss prior work in each related problem category.

2.1 Knapsack-packing

In the one-dimensional knapsack problem one is given a maximal weight W and a set of items I, each item i with an associated weight wi and profit pi, and a subset I′ ⊆ I must be selected such that ∑i∈I′ wi ≤ W and z = ∑i∈I′ pi is maximized. In the two- and three-dimensional knapsack problems we are given container dimensions W × H (× D) and a set of rectangular items. Each item has an associated profit value, and one has to select a maximal-profit subset of the items that can be placed in the knapsack without overlap. While the one-dimensional knapsack problem has been thoroughly investigated (see e.g. [41]), the multi-dimensional variants have received less attention. Although we are unaware of work within the field of irregular shapes, some work has been done with rectangles and boxes. Several integer programming formulations for the two-dimensional knapsack problem exist (see e.g.
[4, 11, 37]), but they generally suffer from large numbers of integer variables and numerous symmetric solutions. A general approach for packing problems with rectangular items was proposed by Fekete and Schepers [26, 27, 28]. Among the problems they are able to solve are multidimensional knapsack problems. Their approach is a branch-and-bound algorithm which assigns items to the knapsack without specifying the positions of the rectangles. For each assignment of items, an advanced graph representation is used to decide whether a feasible assignment of coordinates to the items is possible. A branch-and-bound algorithm for the two-dimensional knapsack problem was also developed by Caprara and Monaci [17]. As in the work by Fekete and Schepers, items are assigned to the knapsack without specifying the position of each item, and an enumeration scheme from Martello, Monaci and Vigo [46] is used to ensure feasibility. In the area of heuristics, Egeblad and Pisinger [C] use the sequence pair (see [49, 54, 63]) to represent two-dimensional feasible placements and a novel representation for three dimensions. Representations are modified with simulated annealing. Alvarez-Valdes et al. [1, 2] applied both GRASP and tabu-search to the two-dimensional knapsack problem with very impressive results.

2.2 Container Loading

A specific version of the three-dimensional knapsack problem is the container loading problem (CLP). Here the “profit” value of each item is equal to its volume; thus the objective is to maximize the utilization of the container volume. Solution methods for the container loading problem often utilize a form of wall-building technique originally introduced by George and Robinson [31]. In wall-building the container is filled in the depth with walls of items. Each wall has dimensions W × H × di, where di is the depth of the i-th wall. For each wall one usually selects a layer-defining box (LDB) and lets di be the depth of that box.
Once the depth of the wall has been determined, it is filled either by considering a simpler three-dimensional packing problem ([29]) or a two-dimensional packing problem. While George and Robinson [31] solve the two-dimensional packing problem by placing items in shelves, Pisinger [53] divides the wall into horizontal and vertical strips, and each strip is packed by solving a one-dimensional knapsack problem. Generally, the wall-building approaches rely heavily on the selection of the LDB and on efficient strategies for packing each wall. Bischoff and Marriott [7] compare different ranking functions for the LDB without determining a clear winner. This illustrates the need to elaborate on the greedy strategy, and much better results are achieved when LDB selection is integrated with metaheuristics. Experiments have been conducted with genetic algorithms ([29]), tabu-search ([10]) and tree-search ([53]). While the advantage of wall-building is that the problem is reduced to simpler sub-problems, wall-building strategies commonly suffer from the fact that space is lost when items do not fully utilize the depth of a wall. Several authors have suggested alternative ways to represent free space in the container. Morabito and Arenales [47] suggest a slicing-tree representation, where each tree corresponds to a guillotine cutting of the container and the leaves represent single boxes. Gilmore and Gomory [32] arrange boxes in towers which are placed on the container floor, thus reducing the problem to two-dimensional packing. This strategy is also used with the genetic algorithm by Gehring and Bortfeldt [30]. Scheithauer [56] uses a three-dimensional contour representation along with a form of dynamic programming. Ngoi et al. [50] use a matrix for each cross-section in the height of the container to represent free space. Bischoff [8] later simplified this approach by representing the available height at every location with just one matrix.
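The basic wall-building scheme discussed above can be sketched in a few lines. This is a heavily simplified, hypothetical version of our own (one shelf per wall, deepest remaining box as LDB); real implementations pack each wall via 2D or 1D subproblems as described.

```python
def wall_building(boxes, W, H, D):
    """Greedy wall-building sketch; boxes are (w, h, d) tuples.

    Each wall's depth is taken from a layer-defining box (LDB), here simply
    the deepest remaining box; the wall is then filled with one shelf of
    boxes placed side by side across the container width W."""
    remaining = sorted(boxes, key=lambda b: b[2], reverse=True)  # by depth
    walls, depth_used = [], 0
    while remaining and depth_used < D:
        wall_depth = remaining[0][2]          # LDB defines the wall depth
        if depth_used + wall_depth > D:
            break
        wall, x = [], 0
        for b in remaining[:]:                # fill one shelf across the width
            if b[2] <= wall_depth and x + b[0] <= W and b[1] <= H:
                wall.append((x, 0, depth_used, b))  # (x, y, z, box)
                x += b[0]
                remaining.remove(b)
        walls.append(wall)
        depth_used += wall_depth
    return walls
```

The sketch also exhibits the weakness noted above: any box shallower than the wall's LDB wastes the depth difference.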
Eley [25] uses a list of available spaces, which is updated each time a new block of boxes is placed. An interesting approach was also suggested by Terno et al. [59], where layers are built using a non-slicing structure for two-dimensional packing called M4. While methods for multidimensional knapsack problems generally work well with fewer than 50–100 items, heuristics for container loading problems are typically geared to problems with 100–200 items. At the time of writing, some of the highest utilization values have been achieved by Mack et al. [45], using a parallelized hybrid of tabu-search and simulated annealing running on 64 processors which utilizes the advanced wall-building procedure of [29]. In general they achieve higher than 90% utilization. Some of the best results with a non-parallel method were achieved by Moura and Oliveira [48], with overall utilization slightly below 90%.

2.3 Irregular Shapes

Most methods that consider packing of irregular shapes are made for the nesting problem, which is the problem of arranging a set of irregular shapes on a two-dimensional strip with fixed height and minimal width. The majority of successful heuristics for the nesting problem are iterative but can be roughly divided into two groups: legal and relaxed methods. Legal methods iteratively try to improve feasible solutions, while relaxed methods also incorporate infeasible solutions, in which items overlap, during the solution process. In general, shapes are represented by polygons. Art [3] was among the first to consider legal placement. He used an envelope principle which, for a partial solution, defines the legal positions of the next polygon. Legal placement methods also often utilize another concept, the so-called no-fit polygon (NFP). Given two polygons P and Q, the NFP can be defined as NFP(P, Q) = {p − q | p ∈ P, q ∈ Q}, which is the set of translations of Q such that P and Q overlap.
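For convex polygons the set-difference definition above can be evaluated directly, since the NFP is then the convex hull of all pairwise vertex differences. The sketch below is our own illustration under that convexity assumption; general non-convex polygons require Minkowski-sum or orbiting algorithms instead.

```python
def convex_hull(pts):
    """Andrew's monotone chain; returns hull vertices in CCW order."""
    pts = sorted(set(pts))
    if len(pts) <= 2:
        return pts
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def nfp_convex(P, Q):
    """NFP(P, Q) = {p - q | p in P, q in Q} for CONVEX polygons only:
    the hull of the pairwise vertex differences."""
    return convex_hull([(p[0] - q[0], p[1] - q[1]) for p in P for q in Q])
```

For two unit squares this yields the square from (-1, -1) to (1, 1): Q overlaps P exactly when its reference point lies strictly inside that region.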
Note that the NFP is closely related to the Minkowski sum. For two point sets A and B the Minkowski sum is the set {a + b | a ∈ A, b ∈ B}, which means that the NFP is the two-dimensional Minkowski sum where one polygon has been inverted in both axes. Several methods place items sequentially using the NFP. Oliveira et al. [52] and Dowsland et al. [23] generate a sequence by heuristically estimating how well a piece fits with the next position. Other authors use a modification scheme where the sequence is iteratively modified and reevaluated. Blazewicz et al. [9] used tabu-search. Burke and Kendall [13, 14, 15] used the NFP with ant-colony algorithms, simulated annealing and evolutionary algorithms, and more recently Burke et al. [12] used tabu-search and hill-climbing. More problem-specific heuristics have also been investigated. Gomes and Oliveira [35] used a 2-exchange heuristic. Dowsland et al. [22] used a “jostling” mechanism where the placement is alternately generated left-to-right and right-to-left, based on the sequence of the previous placement. Recently, Gomes and Oliveira [36] used a combination of simulated annealing and linear programming to generate impressive results. A number of researchers have considered infeasible solutions during the solution process. Here overlap of shapes is allowed but iteratively reduced. Once the total overlap has been reduced to 0, a feasible solution has been reached. Several authors have experimented with both a simplified raster model and geometric models for measuring overlap. Lutfiyya et al. [44] use a raster model in conjunction with simulated annealing. Later, Oliveira and Ferreira [51] experimented with both raster and geometric models. Bennell and Dowsland [5, 6] experimented with intersection depth as a measure of overlap, and combined LP-compaction methods by Li and Milenkovic [43] with the NFP for faster evaluation.
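A raster model of the kind mentioned above discretizes shapes into grid cells and measures overlap by counting multiply-occupied cells. A minimal sketch (the exact overlap measure varies between papers; the one below, counting k − 1 per cell occupied k times, is an assumption for illustration):

```python
def raster_overlap(shapes):
    """Raster model of total overlap: each shape is a set of occupied
    (x, y) grid cells; a cell occupied k > 1 times contributes k - 1."""
    counts = {}
    for cells in shapes:
        for cell in cells:
            counts[cell] = counts.get(cell, 0) + 1
    return sum(k - 1 for k in counts.values() if k > 1)
```

A relaxed-placement heuristic would move shapes (i.e. recompute their cell sets) so as to drive this value to 0, at which point the placement is feasible at raster resolution.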
A particularly ambitious heuristic with relaxed placement is the 4-stage simulated annealing by Heckmann and Lengauer [38]. Lately, Egeblad et al. [A] have used the metaheuristic Guided Local Search combined with a fast geometric algorithm to determine a minimal-overlap horizontal or vertical translation of a polygon. Currently the best results in the literature are evenly divided between this work and the work by Gomes and Oliveira [36].

2.3.1 Irregular 3D-packing

Three-dimensional packing of irregular shapes has received far less attention. Methods generally represent the surfaces of shapes by triangle-mesh structures. Ikonen et al. [39, 40] were among the first to consider optimization problems with irregular three-dimensional shapes. They proposed to use genetic algorithms with a relaxed placement method based on triangle intersection. Cagan et al. [16] also use a relaxed placement method, but with simulated annealing and spatial octrees (see e.g. [19]) to quickly determine pairwise overlap. Dickinson and Knopf [20, 21] use a legal placement method where items are placed sequentially and each item is placed with an individual optimization heuristic, but the sequence is only placed once. Stoyan et al. [58] also use a serial packing method. They have generalized the concept of φ-functions (a form of description of overlapping regions similar to the NFP) to three dimensions. Recently, Egeblad et al. [A] generalized their 2D relaxed placement method to three dimensions with results surpassing those of Ikonen et al. and Dickinson and Knopf both in speed and quality.

2.4 Weight and Stability Considerations

Gehring and Bortfeldt [30] consider a set of constraints which are important with respect to the placement of items. The constraints they consider are:

• Items may be placed only on top of a stack with sufficient bearing strength to accommodate them.
• Some items may only be stacked a limited number of times on top of each other.
• Some items may only be at the top of a stack.
• Since the strength of an item depends on its orientation, only a limited number of item-dependent orientations may be used.
• Items must be loaded such that they do not drop to the floor during transportation.
• The center of gravity (COG) of the overall placement must coincide with the center of the container for transportation with trucks and airplanes.

The approach by Gehring and Bortfeldt [30] was based on building stacks, which makes it relatively simple to accommodate most of these constraints. To handle the COG constraints they divided the container space into layers (walls) parallel to the back of the container. Once volume optimization is complete, individual walls are interchanged, mirrored or rotated by 180 degrees to move the COG within the demanded range. Davies and Bischoff [18] combined the items into deeper layers called blocks and determined a good permutation of the blocks by random search. Eley [25] also applies this principle and reports that only 3-4 blocks are required in order for items to be placed acceptably with respect to the center of gravity. A weight-consideration technique by Ratcliff and Bischoff [55] was later refined by Bischoff [8] to use a matrix representation describing not only the available space for each region of the floor but also the bearing strength of each region. As the heuristic is constructive it is easy to ensure that items are positioned without violating load constraints.

3 Solution Process

Figure 3: The different stages of the solution process.

Our solution process is divided into a number of stages which we outline in this section. The process completes the stages illustrated in Figure 3 one at a time from left to right. In the initial stages the heuristic considers mainly large items. Once a placement for large items is determined, medium and small items are considered.
During a preprocessing step the full three-dimensional structure of the items is analyzed to determine how items can be placed relative to each other. To simplify the task of placement, item geometry is reduced for the following stages. The last three stages consider only box-items, but a set of constraints apply to the small items which make them complex to handle. Each stage of the optimization process is described in the following:

1. Preprocessing (L). Generation of a set of templates which describe sub-placements of items. See Section 4.

2. Quad-wall building (L). A tree-search heuristic which fills the container by selecting four templates which fit next to each other in the width and height of the container (a quad-wall) determines a placement of large items. See Section 6.

Figure 4: The triangle mesh-structure of an item, shown here in the item-coordinate system.

3. Local search (L). Refinement of the quad-wall solution occurs via a local search heuristic for large items as described in Section 7.

4. Stability (L). A special local search heuristic, described in Section 8, ensures stability of the solution generated in step 3.

5. Greedy algorithm (M). Once the large items have been placed we proceed to place medium-sized items between and above large items with a greedy algorithm described in Section 9.1. As explained in Section 9.2 this algorithm is also used to evaluate the placement of large items. This is indicated in Figure 3 by the area below the first three heuristic steps, which protrudes into the “Greedy algo.” step.

6. Wall-building (S). The available space at the end of the container, which is maximized during the first stages, is filled with small items using a traditional wall-building heuristic described in Section 10.2.

7. Greedy algorithm (S).
Remaining small items which are robust enough to be placed on top of large items are placed using the greedy algorithm of step 5, as described in Section 10.3.

3.1 Item Representation

Although three-dimensional shapes may be represented in different ways, we have chosen a triangle-mesh structure (mesh). Here the surface of an item is represented by a list of triangles in space (see Figure 4). Collecting triangle meshes is a complex problem. Although laser-scanning was considered, we settled on parametric models. A set of approximately 50 parameters (measures) per item, taken from the physical model, is converted by a mesh-generation algorithm to a mesh of the item. In the problem studied, only large items are represented by meshes, since other items are packed as boxes.

4 Preprocessing

Prior to optimization, large items undergo two types of analysis: geometric analysis and template building. The geometric analysis is used to determine where item-types must be positioned relative to each other to avoid overlap. Template building is used to create suitable sub-placements of a few large items, which are used to simplify the problem.

4.1 Geometric analysis

The geometric analysis determines non-overlapping compact placements of items relative to each other. Although several combinations could be considered, we only consider placements of two shapes (pairs) such that their surfaces are in contact with each other. If rotation angles are fixed, the boundary of the three-dimensional Minkowski sum of A and the inverse of B describes all surface-contacting translations of B relative to A – like the NFP in two dimensions (see Section 2.3). For non-convex polyhedra with respectively m and n features the combinatorial complexity of the Minkowski sum can be as high as O(m³n³) (see e.g. [57]), and thus this is an expensive operation. To simplify the problem, we take advantage of the fact that the shapes we consider describe mainly sofas.
When loaded in the bottom of the container, a sofa is first rotated 90 degrees around the x-axis (see Figure 4) and placed with its armrest on the container floor. Therefore the following conditions apply:

• Both items must stand on the ground, so only a two-dimensional set of relative translations may be allowed.
• Fragile parts of sofas must point towards each other, for protection during transportation.
• The items are generally semi-convex sofa-shapes, which limits the number of possible relative positions. In general, for every relative x-translation we assume that there is one and only one acceptable y-translation.

Let meshes A and B (e.g. sofas or chairs) be defined as in the coordinate system of Figure 4. Although meshes are rotated around the x-axis when the pairs are placed inside the container, we omit this step here, so in the remainder one should imagine that the xy-plane is the container floor. We let Bπ be B rotated 180° around the z-axis, so that the fragile parts (the seats) of A and Bπ can point towards each other. The legal translations of Bπ are those where Bπ touches A and Bπ has its seat pointing towards A. We begin by translating A and Bπ in the z-direction such that the minimal z-coordinate of their bounding-boxes is 0. Then for any x-translation of Bπ we wish to find the minimal y-translation such that Bπ does not overlap with A. Figure 5 shows this y-translation as a function of the x-translation. This function represents all the different pairs we are allowed to consider, and Figure 6 illustrates five of these pairs. To calculate the function we proceed as follows: First let A′ be the triangles from A whose normals point upwards (positive y-component) and B′ be the triangles from Bπ whose normals point downwards (negative y-component). All other triangles represent parts of the surfaces which point away from the opposite mesh.
For every triangle t ∈ A′ and every corner point p = (px, py, pz) of a triangle from B′ we determine the intersection of t and the plane z = pz. This is either the empty set, a single point, or a line-segment which can be described by a linear function f̄p,t(x) on a closed interval J̄p,t = [J̄¹p,t, J̄²p,t] that gives the line's y-coordinate for every x-coordinate. Assume the intersection is a line-segment and define fp,t(x) = f̄p,t(x + px) − py and Jp,t = [J̄¹p,t − px, J̄²p,t − px]; then for every x ∈ Jp,t, p + (x, fp,t(x), 0) ∈ t. I.e. (x, fp,t(x), 0) is the required translation of p such that p touches t. If the intersection is not a line-segment we define Jp,t = ∅. We also determine the functions fp,t(x) for every corner point p of any triangle in A′ and every triangle t in B′, i.e. the opposite set of linear functions. Now we loop over all pairs of edges (ea, eb) with positive x-extent, from triangles in A′ and triangles in B′ respectively. For every edge pair ea and eb we determine a function fea,eb and a half-open interval Jea,eb such that eb translated x ∈ Jea,eb along the x-axis and fea,eb(x) along the y-axis intersects with ea.

Figure 5: Example output of the pairing algorithm described in Section 4.1. Two copies of the mesh from Figure 4 are paired together and the output is a piecewise linear function describing the relative y-position for every relative x-position.

Figure 6: Five different pairs which arise from the piecewise linear pair-function in Figure 5.

For each linear function we set fp,t(x) = −∞ for x ∉ Jp,t and fea,eb(x) = −∞ for x ∉ Jea,eb. Let F be the set of all such linear functions and define the piecewise linear function fall(x) = max_{f∈F} f(x) as the maximum y-coordinate of any f ∈ F for every x. The function fall(x) can be generated in time O(|F|²) since every f ∈ F is a line segment.
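The upper envelope fall can at least be evaluated pointwise in a straightforward way: take the maximum over all linear pieces whose interval contains x, and −∞ where no piece applies. A minimal sketch of this evaluation (illustrative only; constructing the full piecewise linear envelope, as done in the thesis, requires handling the pieces' intersection points as breakpoints):

```python
NEG_INF = float("-inf")

def f_all(segments, x):
    """Evaluate the upper envelope max_{f in F} f(x) of a set of linear
    pieces; each piece is (x1, y1, x2, y2) with x1 < x2 and is treated
    as -inf outside its interval, matching the construction in the text."""
    best = NEG_INF
    for x1, y1, x2, y2 in segments:
        if x1 <= x <= x2:
            t = (x - x1) / (x2 - x1)          # linear interpolation on the piece
            best = max(best, y1 + t * (y2 - y1))
    return best
```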
In total the number of line segments generated is O(|A′||B′|) and the total running time is O(|A′|²|B′|²). Let B(x, y) be Bπ translated (x, y, 0) units. Now for x ∈ (−∞, ∞) the translations (x, fall(x), 0) are translations of Bπ such that one or more edges or points of a triangle in B(x, fall(x)) touch A, but no triangle of B(x, fall(x)) intersects A.

4.2 Template building

The second part of the preprocessing stage determines a large set of feasible and stable sub-placements of items which we call templates. A template t represents a placement of items of different types and simplifies the inner portions of our heuristics by reducing the number of geometric computations and making it easier to ensure overall stability. For a template t, let (t^1, ..., t^n) be its item-types with t^i ∈ L for i = 1, ..., n, and let |t| = n be the number of its item-types. For each t^i, i = 1, ..., |t|, the template also contains the relative position and rotation of t^i. Let T be the set of templates. For a template t ∈ T we let p(t) be the total profit of its item-types. Let w(t), h(t), d(t) be the dimensions of t's minimal axis-aligned box, [0, w(t)] × [0, h(t)] × [0, d(t)], which contains its items (its bounding-box). If the template is used at a position p ∈ R³, items matching each of the item-types t^i are placed relative to p and oriented as in the template. We refer to the positioned sub-placement of items as a bundle and define the bounding-box of a bundle b as [bx1, bx2] × [by1, by2] × [bz1, bz2]. Since the same set of item-types may be combined in many different ways, we group similar templates. For a tuple (t^1, ..., t^n) of item-types we let R(t^1, ..., t^n) ⊆ T be the set of templates containing the item-types t^1, ..., t^n. For t ∈ T we let R(t) be the set of templates containing the same item-types as t, and we say that s, t ∈ T are related if and only if s ∈ R(t).
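The relation R(t) partitions templates by the multiset of their item-types, which can be realized by grouping under a canonical key. A minimal sketch (the tuple layout of a template is an assumption for illustration, not the thesis data structure):

```python
from collections import defaultdict

def group_related(templates):
    """Group templates by the multiset of their item-types, mirroring
    R(t): two templates are related iff they contain the same item-types.
    Each template here is (item_types, profit, (w, h, d))."""
    groups = defaultdict(list)
    for tpl in templates:
        key = tuple(sorted(tpl[0]))  # multiset of item-types as a sorted tuple
        groups[key].append(tpl)
    return groups
```

With this grouping, looking up all templates related to a given template t is a single dictionary access on t's sorted item-type tuple.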
During this stage many templates are generated in each of the following categories: singletons, stacks, pairs based on the geometric analysis of Section 4.1, pairs of stacks, user-defined templates, and fused templates. For every item-type i ∈ L a singleton template containing only i is generated for each allowed rotation. An example singleton template is depicted in Figure 7 (a). Stacks, as shown in Figure 7 (b), are generated by a recursive algorithm that adds item-types on top of each other until the container height is reached. All such possible templates are constructed. The bounding-box of the meshes is used to avoid overlap when stacking. To limit the number of templates, an item is not allowed on top of a shorter item. Because items are large, stacks commonly contain fewer than four items. The geometric analysis (see Section 4.1) returns all possible legal pairs between any two sofa item-types, as illustrated in Figure 7 (d). To limit the solution space, only a small number of these pairs are used as templates. The pairs chosen are:

P1: The pair with minimal bounding-box volume.
P2: The pair which fills the container in the width next to P1 (maximal sliding).
P3 and P4: Rotated versions of P1 and P2 (around the y-axis).
P5: The pair which occupies half the container width.

Although more pairs theoretically allow for better solutions, experiments showed that in practice they tend to have the opposite effect – presumably because the size of the solution space is increased. Pairs of stacks (see Figure 7 (e)) are built by combining the techniques for pair and stack construction. To limit the number of combinations, the stacks are built by adding items one-by-one, in order of non-increasing height, to the smallest stack. Once the composition of the two stacks has been determined, possible combinations are determined using the geometric analysis of Section 4.1, similar to pairs.
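The recursive stack generation described above can be sketched as follows (a simplification that stacks by height only; the thesis additionally checks bounding-boxes of the meshes, and the function name is an assumption):

```python
def build_stacks(item_types, max_height):
    """Recursively enumerate stack templates: item_types maps a type name
    to its height; an item may not be placed on top of a shorter item,
    and the stack must fit within max_height (both rules from the text)."""
    stacks = []
    def extend(stack, used_height, height_below):
        if stack:
            stacks.append(tuple(stack))
        for name, h in item_types.items():
            # only items no taller than the one below, and within the container
            if h <= height_below and used_height + h <= max_height:
                extend(stack + [name], used_height + h, h)
    extend([], 0.0, float("inf"))
    return stacks
```

For two item-types of heights 2 and 1 and a container height of 3 this enumerates five stacks, including the mixed stack with the taller item at the bottom but not the reverse.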
Because items in our context have a soft surface, geometric analysis is not as accurate as physical measurements, and so there is a great advantage in using measured data. In our implementation we have around 40 different kinds of user-defined templates, for which the user can specify the constituent types, width, height and depth of the template. See Figure 7 (c) for an example of a user-defined template. Finally, for reasons which will be clarified in Section 5, the templates of the previous categories are also fused together heuristically along the x-axis to form new templates (see Figure 7 (f)). Specifically, any pair of constructed templates is fused if there is room for a third template in the container width. This process is recursive over all templates and may generate new larger templates which are fusions of many narrow templates. To ensure that not too many combinations are generated, this preprocessing step begins by fusing repetitions of similar templates and continues by combining distinct templates where the resulting utilization will be high. This process stops once f fused templates have been created. We discovered that f = 2000 is an adequate number. In general a template is generated for each allowed orientation of the templates described in the prior sections. This implies a discrete rotational model — not all rotations are possible. This limitation works well in the real-life situation, where only a limited number of rotations may be allowed for some templates or items for quality-assurance reasons. For every template at least a y-axis rotated variant will also be generated.

Figure 7: Examples of templates.

Figure 8: Contours (b and c) of a template (a).

5 Sequence Representation

Due to the size of large items there is generally only room for at most two pair-templates in the width and height of the container.
Therefore we limit the possible ways a template can be placed in the xy-plane to four alignments, numbered 1-4, matching the four corners lower-left, lower-right, upper-left and upper-right. A solution of n large items may be represented as a sequence of template and alignment pairs of the form <(t1, a1), ..., (tn, an)> with ti ∈ T and ai ∈ {1, ..., 4}. To convert a sequence into a placement, templates are placed one-by-one in the order of the sequence. Assume we apply template t to place bundle b; then we proceed as follows. First we determine the x- and y-extents of b such that (bx1, by1), (bx2, by1), (bx1, by2) and (bx2, by2) coincide with (0, 0), (W, 0), (0, H) and (W, H) for respectively the lower-left, lower-right, upper-left and upper-right alignments. To determine the z-extent the set of current bundles, B, is considered. Let B′ ⊆ B be the set of bundles whose [bx1, bx2] × [by1, by2] overlaps b in the xy-plane. Then b is placed such that bz1 = max_{b′∈B′} b′z2. Realizing an entire sequence of n templates takes O(n²) time since this process takes O(|B|) time for each template; however, if bundles are sorted and searched in order of highest z, finding the z-coordinate generally requires only evaluation of a few top-placed bundles, since the search may stop once all bundles have a lower z-coordinate than the current maximum. As an alternative to alignments, templates could be placed according to some greedy principle. However, this strategy easily ruins the entire solution if, for instance, two templates are exchanged, whereas alignments help maintain locally optimal placements. It is easy to determine a lower z-coordinate than the one arising from the use of bounding-boxes. Since the x- and y-coordinates of each item within the template are fixed once the template is aligned, one can search for a collision along the z-axis using the meshes of each template.
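The bounding-box variant of this z-placement can be sketched as a linear scan over the current bundles (a simplified illustration of the rule bz1 = max b′z2 over xy-overlapping bundles; the tuple layout and function name are assumptions):

```python
def place_bundle(bundles, footprint):
    """Determine the z-coordinate (depth) of a new bundle: scan the
    current bundles whose xy-footprints overlap the new one and place
    the new bundle behind the deepest of them. A bundle is
    ((x1, x2), (y1, y2), (z1, z2)); footprint is ((x1, x2), (y1, y2))."""
    (nx1, nx2), (ny1, ny2) = footprint
    z = 0.0
    for (x1, x2), (y1, y2), (z1, z2) in bundles:
        # open-interval overlap test in the xy-plane
        if nx1 < x2 and x1 < nx2 and ny1 < y2 and y1 < ny2:
            z = max(z, z2)
    return z
```

Keeping the bundles sorted by decreasing z2 allows the scan to stop early, as noted in the text.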
This operation is computationally expensive, and in the following we describe how it can be accelerated by reducing the problem to two dimensions and caching values.

Figure 9: A substantial amount of space can be recovered by using the contours of templates to determine the required distance between bundles (b) rather than the bounding-box (a).

5.1 Profile Considerations

Rather than considering the full three-dimensional structure of templates, we reduce the collision detection problem to a two-dimensional problem. This strategy works by considering either the xz-plane or the yz-plane, but the techniques are similar and we will only describe the approach for the xz-plane here. For each template t we generate a contour of its extreme points in the xz-plane (see Figure 8). Let tT be the set of triangles of the positioned and oriented meshes of items in t. Let tu(x) = max{z | ∃y, s ∈ tT : (x, y, z) ∈ s} be the maximal z-coordinate of any point on a triangle in tT that overlaps with x (Figure 8 (b)), and let tl(x) = min{z | ∃y, s ∈ tT : (x, y, z) ∈ s} (Figure 8 (c)). tl and tu are piecewise linear functions which can be constructed from the edges of each triangle. For a bundle b of a template t, define the translated contours bu(x) = tu(x − bx1) + bz1 and bl(x) = tl(x − bx1). Now assume we have aligned a bundle b′ (only its x- and y-coordinates have been determined). Then if we place b′ such that b′z1 = max_x (bu(x) − b′l(x)), the templates of the two bundles will not overlap. Since the items we consider generally consist of surfaces which are not completely parallel to the coordinate-system planes, a considerable amount of space in the container can be saved by this simple strategy, as seen in Figure 9. On the other hand, the nature of the items is such that little, if anything, would be gained by considering full three-dimensional collision detection.
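The contour-based distance b′z1 = max_x (bu(x) − b′l(x)) can be sketched by sampling the two contours at candidate x values (for piecewise linear contours the maximum is attained at a breakpoint of either contour, so sampling those suffices; the callable-based interface is an assumption for illustration):

```python
def required_z_offset(upper, lower, xs):
    """Minimal z-offset for a new bundle so that its lower contour clears
    an existing bundle's upper contour: max over x of upper(x) - lower(x),
    evaluated at the candidate x values (e.g. contour breakpoints)."""
    return max(upper(x) - lower(x) for x in xs)
```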
5.2 Caching

The piecewise linear functions tu and tl are calculated for all templates during preprocessing. With many thousands of templates, precalculation of the required distance between all template-pairs for every relative position is not a viable approach. However, since templates can only be aligned to the left and right side of the container, a template can only be placed with two different x-coordinates. Therefore there are only four possible ways two templates can be positioned relative to each other in the x-direction. Additionally, when the required distance between two templates is calculated as in Section 5.1, it is stored in a binary search tree for later use. Furthermore, if accurate physical measures for two templates are known, these can be inserted into the data structure during preprocessing.

Figure 10: A quad-wall consists of up to 4 templates that may be placed in the four alignment corners (1. lower-left, 2. lower-right, 3. upper-left, 4. upper-right).

6 Quad-Wall Building

The first step of the heuristic consists of a tree-search algorithm for large items. The tree-search algorithm fills the container in the depth by determining good ‘walls’ consisting of four templates (quad-walls). Each quad-wall is ranked according to how well it fills the width and height of the container. Quad-walls are appended in the depth until the container is full, at which point the tree-search heuristic backtracks and uses walls with lower rank. It is important to note that, unlike traditional wall-building schemes, quad-walls have no boundaries in the z-direction and, when placed, their constituents are pushed as far back in the container as possible, as described in Section 5. Another important element is that the constituents of a quad-wall are repeated a number of times in the depth, but not necessarily an equal number of times. Determination and realization of walls are described in the following sections.
6.1 Quad-Wall Selection

A quad-wall is a combination of templates that can fit next to each other in the width and the height of the container. In practice the most commonly used type of template is a pair. By the same argument as at the start of Section 5, there is commonly only room for up to four different templates in the width and height of the container (see Figure 10). Since the number of templates in some instances can reach 30,000, the number of combinations of four templates could be as high as 30,000⁴, and it is intractable to evaluate all possible walls within reasonable time. Rather, we use an additional tree-search heuristic to find high-quality walls. A fundamental part of evaluating walls is to rank individual templates. For a template t ∈ T we define

rank(t) = p(t) / (h(t)d(t)),

which indicates how profitable t is per height and depth unit. Quad-walls are constructed by assigning a template to each of the corners in the order lower-left (1), lower-right (2), upper-left (3) and upper-right (4). A partial wall <t1, ..., ti>, for i = 1, ..., 4, is an assignment of a template tj ∈ T ∪ {nil} to each corner j, j = 1, ..., i. A corner j may be empty if tj = nil. The templates that may be assigned to corner i depend on the assignment of templates to the first i − 1 corners. Let Ti(<t1, ..., ti−1>) ⊆ T be the set of templates allowed at corner i if templates t1, ..., ti−1 have been assigned to corners 1 to i − 1. The search begins by ranking all partial walls containing lower-left templates t1:

rank1(<t1>) = rank(t1) + max_{t2 ∈ T2(<t1>)} rank(t2) + max_{t3 ∈ T3(<t1>)} rank(t3). (1)

The set T2(<t1>) depends only on the width of t1. Therefore all templates allowed in the lower-right corner can be stored in a balanced search tree with width as key, and the best ranked t2, given the width of t1, can be found in time O(log |T|). A similar argument holds for t3. This way each template can be evaluated in O(log |T|) time. Let T̄¹ be the at most m1 best ranked walls with a lower-left template, where m1 is a parameter of the heuristic. The heuristic then proceeds recursively to rank walls with templates assigned to the remaining corners. Define the rank of a partial wall consisting of i templates as:

rank_i(<t1, ..., ti>) = rank(ti) + max_{t ∈ T4(<t1,...,ti>)} rank(t) for i = 2, 3, and rank_i(<t1, ..., ti>) = rank(ti) for i = 4, (2)

and let T̄ⁱ(<t1, ..., ti>) be the set of at most mi best ranked partial walls with respect to rank_i(<t1, ..., ti−1, ti>) for ti ∈ Ti(<t1, ..., ti−1>), where mi is a parameter of the heuristic. Then let T̄ⁱ be the at most ∏_{j=1}^{i} mj partial walls consisting of templates at corners 1, ..., i, defined recursively as

T̄ⁱ = ∪_{<t1,...,ti−1> ∈ T̄^{i−1}} T̄ⁱ(<t1, ..., ti>). (3)

This means that at each corner we generate the at most ∏_{j=1}^{i} mj best ranked partial walls, T̄ⁱ, by appending the best ranked partial walls from the previous corner, T̄^{i−1}, with templates at corner i. The parameters mi determine the number of branches at each level of the tree search and are used to describe the width of the search tree. The set T̄⁴ consists of walls with template assignments to all four corners, and |T̄⁴| ≤ mtot = ∏_{j=1}^{4} mj, which we will consider in the following. Note that not all corners need to be assigned a template. This can be dealt with by including a template with no profit and no item-types in T.

6.2 Domination

If two walls w ∈ T̄⁴ and v ∈ T̄⁴ consist of the same item-types, v may dominate w by utilizing the space strictly better, and w may be discarded. To test for domination, all pairs of quad-walls w, v ∈ T̄⁴ with equal elements are compared by considering four points based on the bounding-boxes of templates from each wall.
Assume a wall w consists of templates <w1, w2, w3, w4>, and each template wi would be placed as bundle bⁱ if it were to be used at this time; then we define

q1(w) = (b¹x2, b¹y2, b¹z2), q2(w) = (b²x1, b²y2, b²z2), q3(w) = (b³x2, b³y1, b³z2), q4(w) = (b⁴x1, b⁴y1, b⁴z2).

The points qi(w), i = 1, ..., 4, are illustrated in Figure 11. Now, to determine if wall w is dominated by v, we require that qi(v)z ≤ qi(w)z for all i = 1, ..., 4 and:

q1(v)x ≤ q1(w)x, q1(v)y ≤ q1(w)y, q2(v)x ≥ q2(w)x, q2(v)y ≤ q2(w)y, q3(v)x ≤ q3(w)x, q3(v)y ≥ q3(w)y, q4(v)x ≥ q4(w)x, q4(v)y ≥ q4(w)y.

If w is dominated we simply remove it from T̄⁴. Note that cases where walls have no assigned template in one (or several) corner(s) i (i.e. wi = nil) can be dealt with by assigning appropriate values to qi.

6.3 Quad-Wall Appending

The m walls from T̄⁴ with highest rank(w) = ∑_{i=1}^{4} p(ti)/d(ti), for w = <t1, t2, t3, t4> ∈ T̄⁴, are selected, and the best of these is appended to the current solution. In traditional wall-building a wall is constructed once and independently of the previously placed elements. Here, however, we apply the templates <t1, ..., t4> of a quad-wall by a set of rules.

Figure 11: The four points of each wall used for the domination check.

Figure 12: Example placement using quad-walls. (a) Placement after use of the first quad-wall (3 templates). (b) Placement after use of the second quad-wall (4 templates).

Let di be the maximum z-coordinate of any bounding-box of current bundles at corner i of the container, for i = 1, ..., 4. Let di⁺ be the resulting maximum z-coordinate if template ti is applied at corner i. Then we apply the templates ti to corners i from the wall one-by-one using the following rule:

1. If d4⁺ ≤ d2 and d4⁺ ≤ d3 we apply t4,
2. otherwise if d3⁺ ≤ d1 we apply t3,
3. otherwise if d2⁺ ≤ d1 we apply t2,
4. otherwise we apply t1.
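The corner-selection rule above translates directly into code; a minimal sketch (the dictionary interface and function name are assumptions for illustration):

```python
def choose_corner(d, d_plus):
    """Apply the quad-wall template-distribution rule: d[i] is the
    current maximal z at corner i, d_plus[i] the resulting maximal z
    if template t_i is applied there (corners indexed 1..4 as in the text)."""
    if d_plus[4] <= d[2] and d_plus[4] <= d[3]:
        return 4
    if d_plus[3] <= d[1]:
        return 3
    if d_plus[2] <= d[1]:
        return 2
    return 1
```

Repeatedly calling this and applying the chosen template distributes deep and thin templates unevenly in the depth, as the text describes.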
We repeat this until there are insufficient items or insufficient space to apply one of the four templates. Note that by these rules a wall is not simply repeated; rather, templates are distributed depending on depth. E.g. a very deep template will not be applied as often as a thin template. It is also important to note that as the templates are placed, they are pushed as far back in the container as possible, so while a quad-wall is a collection of items in the width and height of the container it does not fill “slices” of the container. Examples of placements after the first and second wall have been used are shown in Figure 12.

6.4 Backtracking

The quad-wall placement is embedded in a tree-search heuristic. Each stage where a wall is appended to the container is a node in a search tree, and each candidate wall corresponds to a path to a child node in the search tree. Once the container is filled and there is insufficient space for further templates at the end of the container, the heuristic backtracks in the search tree and continues with the second-best ranked wall. When all walls of a node have been investigated, the heuristic backtracks to the node's parent and continues with the next wall of the parent. At most 150,000 forward and backward steps in the search tree are allowed due to time constraints. To ensure that the heuristic does not search only in the end of the container, the values of mi are reduced at the lower levels of the tree, and generally we set mi = ½ mi−1 to balance the inner tree-search.

Figure 13: The best solution from the quad-wall heuristic (a) is appended with remaining items that may protrude out of the container (b) before local search is applied.

Every q steps (e.g. q = 10,000) we backtrack to the root node of the tree. This strategy works adequately to ensure that searching is done both within the beginning and the end of the container.
7 Local Search

The best solution found during tree-search is used as the start solution for a local-search heuristic. During local search, solutions are represented as the type of sequences described in Section 5, and one side of the container is 'open' to allow bundles to extend beyond the container boundary. Only the profit of those bundles which are completely inside the container contributes to the objective function. Items which are not part of the initial solution are added to the end of the loading as additional bundles, and will in general protrude from the container as shown in Figure 13. The idea is that the local search will be able to move some of these items inside the container boundaries and thereby increase the solution value. This strategy is similar to the one used by [C] for two-dimensional and three-dimensional knapsack problems with boxes.

Let the current solution be defined as the sequence σ = < (t1, a1), ..., (tn, an) > and let σ(i) = (ti, ai) be the ith element of the sequence. The local search neighborhoods then consist of the following:

Exchange: For every i, j with 1 ≤ i, j ≤ n, i ≠ j, every alignment ai, aj = 1, ..., 4 and every related template ti' ∈ R(ti), tj' ∈ R(tj) we try the sequence σ' where σ'(i) = (tj', aj), σ'(j) = (ti', ai) and σ'(k) = σ(k) otherwise. This corresponds to exchanging two templates in the sequence, combined with replacing them with their relatives and testing all alignments.

Subset Side-Exchange: For every i, j with 1 ≤ i < j ≤ n we try the sequence σ' where σ'(k) = (tk, o(ak)) for i ≤ k < j and σ'(k) = σ(k) otherwise. We define o(1) = 2, o(2) = 1, o(3) = 4 and o(4) = 3. This corresponds to swapping the alignment between left and right for the templates of every possible consecutive sub-sequence.

Insert: For every i, j with 1 ≤ i, j ≤ n, i ≠ j and every alignment ai = 1, ..., 4 we try the sequence σ' = σ, but with ti removed and reinserted before tj.
Combine: For templates t and s let R(t ∪ s) be the set of templates with all elements from t and s. Then for every i, j with 1 ≤ i, j ≤ n, i ≠ j, every alignment ai = 1, ..., 4 and every template tk ∈ R(ti ∪ tj) we try the sequence σ' = σ, but with ti and tj removed and σ'(i) = (tk, ai). This corresponds to removing ti and tj and reinserting a template with all the items from both templates at the position of i, at all four alignments.

Split: For every i, j with 1 ≤ i, j ≤ n, i ≠ j, every alignment ak = 1, ..., 4, every k = 1, ..., |ti|, every template ti' ∈ R(ti^1, ..., ti^{k−1}, ti^{k+1}, ..., ti^{|ti|}) and tk ∈ R(ti^k) we try the sequence σ' = σ, but with σ'(i) = (ti', ai) and (tk, ak) inserted before the jth element. This corresponds to splitting ti into two templates, one with |ti| − 1 elements and one with a single element, and attempting to place the singleton element everywhere in the sequence with every alignment.

Cross-over: For every i, j with 1 ≤ i, j ≤ n, i ≠ j, every k = 1, ..., |ti|, every l = 1, ..., |tj|, every related template ti' ∈ R(ti^1, ..., ti^{k−1}, ti^{k+1}, ..., ti^{|ti|}, tj^l), every related template tj' ∈ R(tj^1, ..., tj^{l−1}, tj^{l+1}, ..., tj^{|tj|}, ti^k), every alignment ai = 1, ..., 4 and every alignment aj = 1, ..., 4 we try the sequence σ' = σ, but with σ'(i) = (ti', ai) and σ'(j) = (tj', aj). This corresponds to all exchanges of one item-type from ti with one item-type from tj, trying all alignments.

Every time a sequence is tried, its templates are applied at their respective alignments to evaluate the objective value of the solution. The local search behaves as a steepest descent algorithm, i.e. all neighboring solutions are evaluated before choosing the change which results in the largest improvement.
Although the neighborhoods are quite large, we are generally able to place more than 100,000 sequences per second, so in practice many changes may be examined within reasonable time. Experiments showed that not all neighborhoods need to be examined in every iteration. Thus the neighborhoods are examined in the order in which they are described above, and if an improving move is found in one neighborhood, subsequent neighborhoods are not searched. Once the heuristic terminates, all items which are not within the container boundaries are removed to ensure that the solution is feasible.

7.1 Objective Functions

The primary objective is to maximize the total profit of bundles loaded within the container boundaries. With this objective function alone, however, changes are only accepted if they result in increased profit, which may be hard to achieve in one single move. To allow changes that may improve the objective value when more steps are allowed, a set of secondary objective functions is used. At any given time one secondary objective function is active. Changes where the total profit remains the same but the active secondary objective improves are then also accepted. Additionally, changes that give the largest improvement in the primary objective value are always preferred, but ties are settled by considering the secondary objective value. A local minimum occurs only when no improvement with respect to either the primary or the active secondary objective value can be found. Our secondary objective functions are as follows:

1. Minimize total depth. Minimize the maximal z2 of any bundle.

2-4. Minimize total depth of items at corner i. Minimize the maximal value of z2 for any bundle placed at the ith corner. This gives rise to one secondary objective function per corner.

5. Minimize sum of corner-depths. The sum of the maximal z2 for each of the four corners is minimized.

6. Minimize total depth of loaded items. Similar to 1, but only bundles inside the container are considered.
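The combination of neighborhood ordering and the steepest-descent rule can be sketched as follows (the neighborhood generators and the evaluation function are abstracted away; all names are illustrative):

```python
def local_search_step(solution, neighborhoods, evaluate):
    """One local search iteration: neighborhoods are scanned in the
    order given; within a neighborhood the steepest-descent rule picks
    the best improving candidate, and once an improvement is found the
    remaining neighborhoods are skipped."""
    current = evaluate(solution)
    for neighborhood in neighborhoods:
        best, best_value = None, current
        for candidate in neighborhood(solution):
            value = evaluate(candidate)
            if value > best_value:
                best, best_value = candidate, value
        if best is not None:
            return best, best_value
    return None, current  # no neighborhood improves: a local optimum
```

The early exit after the first improving neighborhood is what makes it unnecessary to examine every neighborhood in every iteration.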
At any given moment only one secondary objective function is active; initially function 1 is active. When the heuristic reaches a local minimum with respect to the currently active secondary objective function, it switches to the next secondary objective function from the list, and after the last it switches back to 1. After a specified number of non-profit-improving iterations the heuristic terminates. In practice one can evaluate all secondary objective values for all candidate changes, so when a local minimum is reached and the secondary objective function is changed, all changes with respect to the new secondary objective function have already been evaluated, and the best move with the new secondary objective can be carried out instantly.

7.2 Metaheuristics

The large local search neighborhood is geared towards minor alterations that only shift items slightly around, and can be thought of as a "clean-up" or "tightening" of the quad-wall solution. Metaheuristics such as Simulated Annealing [42], Tabu-Search (e.g. [33, 34]) and Guided Local Search [60, 61] were investigated. Unfortunately, none of these proved able to find better solutions than the simple local search scheme within acceptable computational time. An explanation could be that, while the quad-wall heuristic of the previous section determines a good local structure by generating quad-walls, it is harder to locate structured solutions with the mentioned metaheuristics.

8 Stability-Search

For transportation it is important that items are loaded in such a way that they are not damaged by dropping to the floor of the container. In this section we consider only large items. To handle this requirement a second local search heuristic is initiated once the local search of Section 7 is complete. This heuristic starts from the best solution found during the previous step, and it is completely equivalent to the local search in Section 7 except that the objective function has been replaced.
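The acceptance rule combining the primary profit objective with the active secondary objective can be sketched as a single comparison (a minimal sketch; the tuple layout is illustrative):

```python
def accept(candidate, incumbent):
    """candidate/incumbent are (profit, secondary) pairs, where profit
    is maximized and the active secondary depth measure is minimized.
    A change is accepted if it increases profit, or keeps profit
    unchanged while decreasing the secondary objective; this also
    settles profit ties by the secondary value."""
    p_new, s_new = candidate
    p_old, s_old = incumbent
    return p_new > p_old or (p_new == p_old and s_new < s_old)
```

A local minimum is then a solution for which `accept` is false for every neighbor, which is exactly the point at which the active secondary objective is switched.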
We will refer to the heuristic of this step as stability-search. In stability-search the objective is to minimize the total profit of unstable templates. When no improving change can be found, the search terminates, templates which violate their stability requirement are removed from the solution, and the resulting solution is stable.

Alternatively, one could limit the tree-search and local search heuristics to consider only stable solutions. Preliminary testing, however, showed that the chosen approach is favorable for two reasons. First, checking stability is an expensive operation, but solutions are commonly almost stable after the local search step, and resolving the few problems that arise is faster. Second, overall solution quality decreases when only stable solutions may be searched, presumably because the heuristics benefit from passing through unstable solutions.

8.1 Center of Mass

A fundamental part of our stability check is based on centroids. The centroid is the center of mass of an object of uniform density. The centroid, R, and signed volume, V, of a tetrahedron with corner points o = (0, 0, 0)^t and a, b, c ∈ R^3 may be calculated as:

R = (a + b + c) / 4,    V = a · (b × c) / 6.    (4)

To calculate the centroid R of a set of n tetrahedra one can use the cumulative expression:

R = (∑_{i=1}^{n} Vi Ri) / (∑_{i=1}^{n} Vi),    (5)

where Ri is the centroid and Vi the volume of tetrahedron i. The centroid of a mesh with n triangles, each with corner points ai, bi, ci ∈ R^3, can be determined by decomposing the mesh into n tetrahedra consisting of the points o, ai, bi, ci and using the addition formula (5). In principle any point can be chosen as o, since the volume and centroid calculation of each tetrahedron is signed, and negative tetrahedra cancel the surplus contribution of positive tetrahedra.

Figure 14: Stability evaluation.
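Equations (4) and (5) translate directly into a centroid routine for a closed triangle mesh. The sketch below uses plain tuples for points; no thesis data structures are assumed:

```python
def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def dot(a, b):
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]

def mesh_centroid(triangles):
    """Centroid and volume of a closed mesh: each triangle (a, b, c)
    spans a tetrahedron with the origin o, with centroid (a+b+c)/4
    and signed volume a.(b x c)/6, accumulated as in equation (5)."""
    v_total = 0.0
    r_total = [0.0, 0.0, 0.0]
    for a, b, c in triangles:
        v = dot(a, cross(b, c)) / 6.0
        for k in range(3):
            r_total[k] += v * (a[k] + b[k] + c[k]) / 4.0
        v_total += v
    return tuple(r / v_total for r in r_total), v_total
```

For a consistently oriented mesh the signed volumes of tetrahedra on the "far" side of o come out negative and cancel the surplus, so the choice of o is immaterial, as noted above.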
(a) The item to be evaluated is shown as a dashed box. (b) Height levels below the item (xz-projection). The circle roughly in the middle is the xz-projection of the centroid of the item. The difference between the maximum height-values of each of the four zones around the centroid must be less than some preset value; here it is 140 cm − 115 cm = 25 cm.

8.2 Stability Evaluation

While existing methods in the literature demand that each item is placed on an even surface, this constraint can often be circumvented in practice by use of e.g. polystyrene plates. Hence we have chosen the following approach. To evaluate whether an item is positioned in a stable fashion, we divide the xz-projection of its bounding box into four areas around the xz-projection of the centroid (which must be within its bounding box). We then determine the maximal height of the items within each of the four regions which are below the item considered (see Figure 14). We now require that the difference between the maximal heights of the four regions is less than some value h (e.g. 15 cm). This requirement ensures that the item is properly supported around its centroid and that there is no more than h difference between the heights of the supporting items below it. With this strategy items can also be supported by two or more different items below, so even "bridges" are acceptable.

9 Medium Sized Items

The medium-sized items are the second group we consider. Medium-sized items have identical profit values, and one large item is considered more valuable than any number of medium-sized items. The medium-sized items are boxes which may be rotated 90 degrees around any of the coordinate axes, resulting in up to six different orientations. Medium-sized items are placed using a polynomial-time greedy heuristic, which is explained in Section 9.1. A greedy heuristic is adequate for this part of the problem because the items are relatively small and homogeneous.
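The four-zone check can be sketched as follows. The box representation (x- and z-extents plus the top height of each supporting box) is illustrative, not the thesis data structure:

```python
def is_stable(item_xz, centroid_xz, boxes_below, h):
    """Four-zone stability check.

    item_xz = (x1, x2, z1, z2) is the xz-projection of the item's
    bounding box, centroid_xz the xz-projection of its centroid, and
    boxes_below a list of (x1, x2, z1, z2, top_height) for the boxes
    beneath the item.  The spread between the maximal support heights
    of the four zones around the centroid must be less than h."""
    ix1, ix2, iz1, iz2 = item_xz
    cx, cz = centroid_xz
    zones = [(ix1, cx, iz1, cz), (cx, ix2, iz1, cz),
             (ix1, cx, cz, iz2), (cx, ix2, cz, iz2)]
    heights = []
    for zx1, zx2, zz1, zz2 in zones:
        top = 0.0  # the container floor supports at height 0
        for bx1, bx2, bz1, bz2, bh in boxes_below:
            if bx1 < zx2 and bx2 > zx1 and bz1 < zz2 and bz2 > zz1:
                top = max(top, bh)
        heights.append(top)
    return max(heights) - min(heights) < h
```

With the heights of Figure 14 the spread is 140 − 115 = 25 cm, so the placement is accepted for h above 25 cm and rejected otherwise; a supporting box may span several zones, which is what makes "bridges" acceptable.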
Secondly, an efficient placement method allows us to integrate the greedy heuristic into the heuristics for large items, as explained in Section 9.2.

9.1 Greedy Algorithm

The algorithm places item types, one type at a time, until there is insufficient space for more items. Types are considered in order of decreasing size, since the largest items are often the hardest to place, while small items can appropriately fill the remaining empty holes. Each type is considered in three steps: 1) find a start position, 2) determine the volume to be filled, 3) calculate an efficient fill. If the volume is insufficient to accommodate all items of the current type, the same type is considered again in the next iteration. Otherwise, if all items of this type are placed, or no volume is sufficient for the current type, the algorithm proceeds to the next type.

Figure 15: Two-dimensional illustration of NFBs.

9.1.1 Start Position

The algorithm begins by finding a start position. Let m be an item of the next item type and let

m^r = [x^r_m1, x^r_m2] × [y^r_m1, y^r_m2] × [z^r_m1, z^r_m2]    (6)

be the bounding-box of m with respect to rotation r ∈ {1, ..., 6}, where subscript 1 denotes the minimal and subscript 2 the maximal coordinate. Let K be the set of bounding-boxes of previously placed items or bundles and define i ∈ K as [x_i1, x_i2] × [y_i1, y_i2] × [z_i1, z_i2]. Now, for two boxes i and j, the no-fit-box

NFB(i, j) = [x_i1 − x_j2, x_i2 − x_j1] × [y_i1 − y_j2, y_i2 − y_j1] × [z_i1 − z_j2, z_i2 − z_j1]

is the set of translations of box j for which j will overlap with box i (like the NFP from Section 2.3). When we consider an item of type m, the boxes NFB(i, m^r) are created for all previously placed items i and rotations r. Constraints of the form "m not above i" (e.g. if m is too heavy) are easy to handle by expanding NFB(i, m^r) to the height of the container.
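Under the interval notation above, the no-fit-box and the containment test used for feasibility can be sketched as:

```python
def nfb(box_i, box_j):
    """No-fit-box NFB(i, j): the set of translations of box j (from
    its current position) that make it overlap box i.  Boxes are
    ((x1, y1, z1), (x2, y2, z2)) with minimal/maximal corners."""
    (i1, i2), (j1, j2) = box_i, box_j
    lo = tuple(i1[k] - j2[k] for k in range(3))
    hi = tuple(i2[k] - j1[k] for k in range(3))
    return lo, hi

def overlaps_after_translation(t, box):
    """True iff translation t lies strictly inside the NFB, i.e. the
    translated box overlaps (touching boundaries is allowed)."""
    lo, hi = box
    return all(lo[k] < t[k] < hi[k] for k in range(3))
```

For a unit box j against a 2 × 2 × 2 box i, both with their minimal corner at the origin, the NFB is [−1, 2] × [−1, 2] × [−1, 2]; the zero translation lies strictly inside it (overlap), while translating j by (2, 0.5, 0.5) puts it exactly against i's right face (no overlap).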
For every triple i, j, k: if NFB(i, m^r), NFB(j, m^r) and NFB(k, m^r) intersect, we determine the intersection point (NFB(i, m^r)_x2, NFB(j, m^r)_y2, NFB(k, m^r)_z2)^T, i.e. the right side of NFB(i, m^r), the top of NFB(j, m^r) and the front of NFB(k, m^r). If the intersection point is not contained within any NFB(l, m^r), l ∈ K, it is a feasible position of m^r with respect to the bounding-boxes of previously placed items. The intersection point, q, with lexicographically least z, y and x coordinates is chosen as the start point for placing boxes of type m. To ensure that items are placed within the container dimensions, artificial boxes representing the container sides are introduced.

Figure 15 illustrates this procedure reduced to two dimensions. The current placement is shown in (a) along with the box type m that we wish to place. The light-shaded rectangles with thick lines in (b) are the NFBs of the placement, and translations of m's lower-left corner to any point within the white area are feasible positions. Filled circles indicate feasible intersection points determined by the algorithm, while hollow circles represent infeasible intersection points.

Determination of the start position takes O(n^5) time, where n is the number of items in the problem instance. However, in practice the algorithm is fast and may be sped up further by a sweep-line principle which moves from low to high z-coordinates with breakpoints at z_i1 and z_i2 for i ∈ K. During this traversal a list of "active" boxes, i.e. the boxes that overlap with the current z-coordinate, is maintained, and the search can end as soon as the first non-overlapping position has been found.

Figure 16: Volume determination from the start-point q.

9.1.2 Volume Determination

For the start position q, we determine a suitable box-volume V which will be filled by items of the current type, m.
The start position, q, is the lexicographically lowest intersection point for type m and some rotation r, and q is a feasible position of m^r. Determination of the extents of V is illustrated in Figure 16. First, V's lower-left-back corner is set to q: (x_V1, y_V1, z_V1) = q. To determine the upper-right-front corner (x_V2, y_V2, z_V2) we move right from q parallel to the x-axis until we hit the first box from K; this gives us x_V2. We then find the minimal y ≥ y_V1 such that the line segment between (x_V1, y, z_V1) and (x_V2, y, z_V1) intersects a box from K, and let this be our y_V2. Finally, we determine z_V2 by finding the minimal z ≥ z_V1 such that the axis-aligned box with corners q and (x_V2, y_V2, z) intersects a box from K. Determining the volume V can be done in O(|K|) time. An example of the volume V is depicted in Figure 15 as the dashed rectangle extending from the start point determined in Section 9.1.1.

9.1.3 Volume Filling

To fill V we use a three-level recursive guillotine division of V into smaller volumes. The division considers cuts of V parallel to the x-axis, y-axis and z-axis, such that one direction is chosen for the first level; in each recursion only divisions in directions not used at previous levels are allowed. We describe only the x-division, since the y- and z-divisions are similar. An x-cut divides V into two parts V'(x') = {(x, y, z) ∈ V | x ≤ x'} and V''(x') = {(x, y, z) ∈ V | x ≥ x'}. Let W(m^r) = x^r_m2 − x^r_m1 and W(V) = x_V2 − x_V1 be the widths of m^r and the volume V, respectively, and let the heights H(m^r), H(V) and depths D(m^r), D(V) be similarly defined. For a volume V and item m^r, let C(V, m^r) be the number of times m^r can be placed inside V:

C(V, m^r) = ⌊W(V)/W(m^r)⌋ · ⌊H(V)/H(m^r)⌋ · ⌊D(V)/D(m^r)⌋.

For r ∈ {1, ..., 6} we consider x-cuts which divide V into volumes V'(x_i) and V''(x_i) for x_i = x_V1 + i · W(m^r), i ∈ {0, ..., ⌊W(V)/W(m^r)⌋}.
We then select x and s such that C(V (x ), m ) +C(V (x ), m ) are maximal. We then proceed with the volume V 0 (x0 ) and V 00 (x0 ) to consider y-cuts and z-cuts, but such that one side of the new cuts of V 0 (x0 ) and V 00 (x0 ) use mr and ms respectively. For each of the volumes on the third level only the cut-direction unused at higher levels of the recursion is allowed. This procedure is repeated with y-cut and a z-cut as initial cut. The cutting sequence and orientation assignment which results in the highest utility of V is used to fill the eight sub-volumes of V. 179 10. Small Items 9.2 Integration Since a good solution of medium-sized items may depend on the placement of large items, the greedy approach is also integrated with the tree-search and local search heuristics for large items. At any time while running the two heuristics let z0l be total profit value of the large items and z0m for the medium-sized items of the currently best known solution. When the heuristics encounter a solution with solution value zl equal to z0l for large items, the solution value zm for medium sized items is calculated. If zm > z0m the current best known solution is replaced. 10 Small Items The last set of items to be placed is the small items. These have a lower precedence than large and medium-sized items. A number of constraints apply to them which will be explained in in Section 10.1. Small items are placed both using a wall-building heuristic, to be described in Section 10.2, and the greedy algorithm of Section 9, as described in Section 10.3. 10.1 Constraints The constraints that we consider for small items can be divided into three groups; Rotations, robustness and weight. For each item a specified subset of the six possible 90 degree rotations around coordinate system axis is allowed. For each item m we let s(m) ∈ {1, . . . , 6} define its robustness and for any two items mi and m j , we require s(m j ) ≥ s(mi ) for mi to be be placed on m j . 
This ensures that an item is only put on top of a more robust item, so that fragile items are not placed at the bottom of a stack. Finally, let g(m) be the weight of item m (e.g. in kilograms). Then, for any two items mi and mj with s(mi) = s(mj), we require that g(mi) < g(mj) if mi is to be placed on top of mj.

10.2 Wall-building

As previously described, the wall-building paradigm of container loading heuristics fills the container in the depth direction by constructing walls of items. We use the wall-building approach by Pisinger [53], which improved the heuristic by George and Robinson. Rather than considering just one wall depth for each wall, Pisinger used a tree-search heuristic which branches on a number of different wall depths. In addition, walls are filled by either horizontal or vertical strips, and the heuristic branches on different strip widths for each strip. Finally, each strip is packed optimally by solving a knapsack problem. The heuristic backtracks once there is no room for additional walls.

The heuristic is used to pack items in a box-volume at the end of the container with dimensions W × H × (D − max_{l∈L} z(l)), where L is the set of bounding-boxes of templates and medium-sized items from the previous steps and z(l) is the maximal z-coordinate of box l. Once the local search and greedy heuristic for large items complete, local search is applied an additional time, but this time with the objective of minimizing max_{l∈L} z(l), so that the input space for the wall-building heuristic is maximized. This is illustrated in Figure 18, where (a) shows a solution with respect to large items which is optimized to the solution in (b), in which the space at the end of the container (bright gray area) is filled by the wall-building heuristic. The wall-building proceeds similarly to the wall-building in [53], but only vertical strips are allowed, to accommodate the constraints described in Section 10.1.
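The robustness and weight rules combine into a simple stacking predicate (a sketch; items are modeled as (robustness, weight) pairs, which is not the thesis data structure):

```python
def may_stack(top, bottom):
    """May item `top` be placed on item `bottom`?  Requires
    s(bottom) >= s(top), and for equal robustness the upper item
    must be strictly lighter: g(top) < g(bottom)."""
    s_top, g_top = top
    s_bot, g_bot = bottom
    if s_bot < s_top:
        return False          # lower item less robust: forbidden
    if s_bot == s_top:
        return g_top < g_bot  # equal robustness: lighter item on top
    return True
```

Note that the weight rule only applies between items of equal robustness; a strictly more robust item may carry a heavier but more fragile one.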
Figure 17: Figure of wall-building (a layer of dimension W × H × d).

Figure 18: Maximizing space for small items at the end of the container.

The filling of vertical strips is again solved using knapsack packing, but once the optimal set of boxes has been determined, the boxes are sorted according to robustness value and weight to ensure that the constraints are not violated. To handle the rotational constraints, only allowed rotations are considered during the depth determination of layers, the width of each layer, and the packing of each strip.

10.3 Additional Filling

Although wall-building fills the volume of the container quite well, it is desirable to also fill the volume above the large items. To handle this we simply use the greedy heuristic from Section 9 on each item. When placing an item m, robustness constraints are ensured by expanding the NFBs of items with lower robustness than m to the container height.

11 Experiments

The heuristic has been implemented in C++ (gcc 3.4.2) running under Linux on an AMD 64 3800+ 2.4 GHz. Since the problem has not been studied in prior literature, no test instances were available. Therefore, to demonstrate the capabilities of the heuristic, we have constructed a small dataset and report results in this section.

11.1 Dataset

The dataset consists of 40 large, 10 medium-sized and 40 small item types. The large items are shapes of chairs, sofas, chaise lounges and corners. Items are paired together and used in complex templates generated automatically as explained in Section 4. To further mimic the behavior at a furniture producer, a set of user-defined templates was determined with a geometric tool. For each item type fewer than 6 user-defined templates were created. These templates comprise up to 5 items, but only one type of item.
Profit values for large items were selected from the set {10, 20, 30} and are correlated with the dimensions of the items. Both the user-defined templates and the profit values mimic the practical use of the heuristic. Medium-sized box-shaped items were generated with random dimensions from the interval 20 to 80, and the profit of all medium-sized items was set to 5. Small box-shaped items were generated with random dimensions from the interval 10 to 80. The sets of allowed rotations, robustness values and weights were determined randomly; weights were set randomly between 1 and 10 kg. The profit of small items was set to 2 for all items.

A total of 61 instances divided among three groups were created: (A) 21 instances containing only large items, (B) 20 instances with large and medium-sized items, and (C) 20 instances with large, medium and small items. The most homogeneous instances consist of three large item types, while the largest and most heterogeneous instances consist of a total of 90 distinct item types (40 large, 10 medium, 40 small). All instances use container dimensions equal to a 40 ft. high-cube container (234 × 239 × 1185 cm). The high-cube container allows us to better demonstrate the heuristic's ability to place items in multiple layers. Instances were generated so that the total profit was approximately, and at least, 160, 180 and 200 for (A), (B) and (C), respectively. The characteristics of the instances are reported in Table 1, which is described in the following section. The dataset is available for download at http://www.diku.dk/~pisinger/ along with a description of the file format used.

11.2 Results

Results of the computational experiments for all 61 instances are reported in Table 1. For each instance we report the number of item types in each of the three groups in the column marked |L|/|M|/|S| and the number of items in the columns marked |I|, |L|, |M|, |S|.
The column labeled "Loaded" under "Items" indicates the total number of items loaded. The columns labeled "Profit" give the total profit of all items (I) and the profit of the large items (L). The columns labeled "Loaded profit" give the best solution value for all items (I) and for each of the three item groups L, M and S. Since no real performance measure exists, we report the utilization of each instance as two different values. The bounding-box utilization in the column labeled "BB" is the percentage of the container volume occupied by the bounding-boxes of the items; since bounding-boxes can overlap, only non-overlapping volume is accounted for. The mesh-utilization is the percentage of the container occupied by meshes (for large items) and boxes (for medium-sized and small items) and is reported in column "Mesh". The average results for the three instance groups and for all instances are reported in the rows labeled 1-21, 22-41, 42-61 and 1-61.

On average the bounding-box utilization is 89.2% and the mesh-utilization is 59%. The utilization generally increases as more small items are available. When only large items are considered, the average utilizations in percent are 86.5 resp. 56.4, increasing to 89.9 resp. 59.8 when the cargo also contains medium-sized items. For the hardest series of instances, containing all three types of items and up to 200 items, the volume utilizations are 91.3% and 60.9%, respectively. This compares well with traditional container loading heuristics, where the state-of-the-art is around 90-91% for box-shaped items.

Figure 19: Solution of instance 61 with 114 items (running time was 134 sec.)

Since the space between pairs of large items cannot be used for quality-assurance reasons, the bounding-box utilization is probably the most correct performance measure, as it accounts for this lost space between the items. The running time in seconds is reported in column "Time".
The average running time is only around 100 seconds, which is highly acceptable for real-life applications. A solution to a typical instance (instance 61) is shown in Figure 19.

12 Conclusion

We have developed a new heuristic for the three-dimensional knapsack container loading problem with irregular shapes. The heuristic consists of several sub-heuristics, each of which solves a specific part of the overall problem. Items are divided into three different groups reflecting their importance, size and complexity. Large items are irregular and represented by three-dimensional triangle meshes; initially, a set of templates is generated, which is then used by tree-search and local-search heuristics. Medium-sized items are rectangular and placed using a simple greedy heuristic. Small items are rectangular and loaded primarily at the end of the container with a modified wall-building approach; to fill out the remaining parts of the container, small items are also placed using a greedy heuristic.

The solution method is able to find good solutions for problems with hundreds of heterogeneous items within minutes on current common hardware. The bounding-boxes of the items occupy more than 89% of the container on average, and in instances with items from all three groups the average utilization is 91%. These results compare well with state-of-the-art container loading heuristics, which consider only smaller boxes and reach around 90% utilization. Finally, the algorithm was implemented at a major European furniture producer and improved their utilization by 3-5%.

References

[A] Egeblad, J., Nielsen, B. K., and Odgaard, A. Fast neighborhood search for two- and three-dimensional nesting problems. European Journal of Operational Research, 183(3):1249–1266, 2007.

[C] Egeblad, J. and Pisinger, D. Heuristic approaches for the two- and three-dimensional knapsack packing problem. Computers and Operations Research, 2007. In press (available online).
Table 1: Characteristics and computational results for the 61 test instances. [The flattened per-instance data is not reproduced here; for each instance the table lists the item-type counts |L|/|M|/|S|, the item counts |I|, |L|, |M|, |S|, the number of items loaded, total and large-item profit, loaded profit per item group, bounding-box (BB) and mesh utilization in percent, and running time in seconds, with average rows for instances 1-21, 22-41, 42-61 and 1-61.]
Submitted, 2008

Placement of two- and three-dimensional irregular shapes for inertia moment and balance

J. Egeblad∗

Abstract

We present a heuristic for the problem of placing irregular shapes in two or three dimensions within a container, such that the placement of the shapes is optimized for balance and moment of inertia and no two shapes overlap. The heuristic is based on a technique that iteratively removes overlap, introduced by Faroe et al. [7] for rectangular objects and later generalized by Egeblad et al. [A] to handle irregular shapes. We extend this method and demonstrate its ability to optimize an objective function that depends on the individual position of each shape. The approach iteratively reduces an augmented objective function, the weighted sum of balance, moment of inertia and overlap, using the metaheuristic Guided Local Search.
Keywords: packing, nesting, balanced packing, three-dimensional packing

1 Introduction

Problems that involve a placement of two- or three-dimensional shapes within a container or a set of containers are generally referred to as cutting and packing problems (see Wäscher et al. [13] for a survey) or sometimes layout problems (see Cagan et al. [2] for a survey). While researchers have thoroughly investigated problems such as bin-packing, knapsack-packing, strip-packing and component layout problems, methods for ensuring overall stability of the placements have received less attention. In this paper we investigate the problem of packing shapes while ensuring balance and reducing the moment of inertia of the placement. Balance is ensured by minimizing the difference between the global center of gravity of the items and a specified target center of gravity. The moment of inertia, which describes the resistance of the items to rotation about an axis through the center of gravity, must also be minimized.

The problem occurs in transportation settings where balance is important. Ships must be loaded such that the likelihood of capsizing is minimal. Trucks should not tip, and their weight should be distributed evenly on the wheels. Cars must be designed such that the engine and other elements are placed in balance and with minimal moment of inertia. Airplanes and space exploration vehicles must be in balance, and their moment of inertia should be minimal to reduce fuel consumption.

In this paper we view the problem of achieving balance solely as a post-processing step applied to solutions arising from typical transportation problems such as container-loading, knapsack-packing or bin-packing. Our objective is to determine optimal positions of items within the container boundaries, since we assume that the selection of shapes has occurred a priori. We consider both the two- and three-dimensional variants of the problem.
For two dimensions we consider polygonal shapes and for three dimensions polyhedra. Our technique is described mainly for three dimensions, and we only detail the simpler two-dimensional variant in cases where the two differ substantially.

∗ [email protected], Department of Computer Science, University of Copenhagen, DK-2100 Copenhagen Ø, Denmark

Given a container shape C and a set of items P described as polygons (polyhedra), where each item i ∈ P has an associated weight g_i, we wish to solve the following problem:

minimize F(P)
s.t. (I) P(i) ∩ P(j) = ∅ ∀ i, j ∈ P, i ≠ j,
(II) P(i) ∩ C = P(i) ∀ i ∈ P,   (1)

where P is a placement of P and P(i) is item i translated and rotated as described by P. Constraints (I) and (II) ensure that no two shapes overlap and that all shapes are placed within the container boundaries. Notice that if the container C is a box then C = [X_0, X_1] × [Y_0, Y_1] × [Z_0, Z_1]. Assume that the desired center of gravity is located at the origin of the coordinate system, (0, 0, 0). The objective function F is then defined as

F(P) = α ∑_{i=1}^n g_i (x_i^2 + y_i^2 + z_i^2) + β G_x(P)^2 + γ G_y(P)^2 + δ G_z(P)^2,

where x_i, y_i and z_i are the coordinates of the center of gravity of shape i ∈ P in the placement P, and G_x, G_y and G_z are defined as follows:

G_x(P) = ∑_{i∈P} g_i x_i / ∑_{i∈P} g_i,  G_y(P) = ∑_{i∈P} g_i y_i / ∑_{i∈P} g_i,  G_z(P) = ∑_{i∈P} g_i z_i / ∑_{i∈P} g_i.

The first summation term of the objective function describes the moment of inertia, while the last three terms describe the squared distance between the origin (the desired center of gravity) and the actual center of gravity in each of the three directions. The values α, β, γ and δ are weights which can be used to adjust the importance of the individual terms in the resulting solution. We refer to this problem as the Balanced Weight Placement Problem (BWPP) and to the two- and three-dimensional variants as BWPP-2D and BWPP-3D, respectively.
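As a concrete illustration, F can be evaluated directly from the weights and centroids of the placed items. The following sketch assumes each item has already been reduced to a weight and a centroid; all names are illustrative and not part of the original formulation.

```python
def balance_objective(items, alpha=1.0, beta=1.0, gamma=1.0, delta=1.0):
    """F(P) for items given as (g_i, x_i, y_i, z_i) tuples, target centre at the origin."""
    total_weight = sum(g for g, _, _, _ in items)
    # First term: moment of inertia about the origin.
    inertia = sum(g * (x * x + y * y + z * z) for g, x, y, z in items)
    # Centre of gravity G_x, G_y, G_z of the placement.
    gx = sum(g * x for g, x, _, _ in items) / total_weight
    gy = sum(g * y for g, _, y, _ in items) / total_weight
    gz = sum(g * z for g, _, _, z in items) / total_weight
    return alpha * inertia + beta * gx ** 2 + gamma * gy ** 2 + delta * gz ** 2
```

For example, two unit-weight items at (1, 0, 0) and (−1, 0, 0) have a perfectly balanced center of gravity, so only the inertia term contributes.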
An alternative formulation is to minimize the moment of inertia such that the center of gravity is confined to a target region around the origin given by [L_x, U_x] × [L_y, U_y] × [L_z, U_z]. This formulation is obtained by setting β, γ and δ to 0 and adding the additional constraints:

(III) L_x ≤ G_x(P) ≤ U_x,
(IV) L_y ≤ G_y(P) ≤ U_y,
(V) L_z ≤ G_z(P) ≤ U_z.   (2)

We refer to this formulation as the Constrained Balanced Weight Placement Problem (CBWPP) and to the two- and three-dimensional variants as CBWPP-2D and CBWPP-3D.

Two example solutions to the two-dimensional problem are depicted in Figure 1. Each item is colored according to its weight; the darker the color, the heavier the item. The desired center of gravity is marked by dashed lines, while the actual center of gravity of the solution is marked by dotted lines; the two are very close in both examples.

Figure 1: Example solutions. (a) 64 1×1 squares in a 10×10 container (BWPP-2D). (b) Irregular shapes within an irregular container (CBWPP-2D).

The problem we consider is NP-hard, as stated by the following theorem.

Theorem 4. The Balanced Weight Placement Problem and the Constrained Balanced Weight Placement Problem are NP-hard.

Proof. Let DBWPP-2D be the decision variant of BWPP-2D, defined as follows: given a value k, a container shape C and a set of items P described as polygons (polyhedra), where each item i ∈ P has an associated weight g_i, determine whether a placement P with F(P) ≤ k exists which is feasible under constraints (I)-(II). To prove that BWPP-2D is NP-hard we show that DBWPP-2D is NP-complete by reduction from the Set Partition Problem (SPP), which is known to be NP-complete (see e.g. Cormen et al. [3]).
The SPP is given as follows: given a set of n items each with size a_i, let M = ∑_{i=1}^n a_i, and determine whether a subset S of the items can be found such that

∑_{i∈S} a_i = ∑_{i∉S} a_i = M/2.   (3)

To show that DBWPP-2D is NP-complete we define the reduction function from an instance of SPP to an instance, I, of DBWPP-2D as follows: create items with center of gravity in their middle and dimensions [−a_i/2, a_i/2] × [−0.5, 0.5], all with weight g_i = a_i (mass density 1); let the container be C = [−M/4, M/4] × [−1, ∞); and set α = 0, β = 1, γ = 1 and k = 0.

We first show that a solution of the SPP also constitutes a solution to I. Assume that the rectangles from S are numbered 1, ..., m and the remaining rectangles are numbered m+1, ..., n, which means that ∑_{i=1}^m a_i = ∑_{i=m+1}^n a_i = M/2. Then assign coordinates to the rectangles as x_i = ∑_{j=1}^{i−1} a_j + a_i/2 − M/4 and y_i = −1/2 for i = 1, ..., m, and x_i = ∑_{j=m+1}^{i−1} a_j + a_i/2 − M/4 and y_i = 1/2 for i = m+1, ..., n. Denote this placement P and observe that P is a feasible placement of DBWPP-2D due to the dimensions of the rectangles (see Figure 2 (a)). Now we can calculate the center of gravity of P:

G_x(P) = (1/M) [ ∑_{i=1}^m a_i (∑_{j=1}^{i−1} a_j + a_i/2 − M/4) + ∑_{i=m+1}^n a_i (∑_{j=m+1}^{i−1} a_j + a_i/2 − M/4) ]   (4)
= (1/M) (M^2/8 − M^2/8 + M^2/8 − M^2/8) = 0.   (5)

G_y(P) = (1/M) ( −(1/2) ∑_{i=1}^m a_i + (1/2) ∑_{i=m+1}^n a_i ) = (1/M) ( −M/4 + M/4 ) = 0.   (6)

This shows that a solution to SPP constitutes a solution to the instance I.

Conversely, assume we have a solution, P, to I. Then we know that

( ∑_{i=1}^n a_i x_i )^2 + ( ∑_{i=1}^n a_i y_i )^2 = 0.   (7)

Therefore ∑_{i=1}^n a_i x_i = 0 and ∑_{i=1}^n a_i y_i = 0.

Figure 2: Illustrations for Theorem 4 (see text). (a) Solution from SPP used to generate a solution to DBWPP-2D. (b) Placement P from the proof of Theorem 4; all rectangles are slid down. (c) Placement P′ where all rectangles have y-coordinate y′_i ∈ {−1/2, 1/2, 3/2, ...}.
Now create a new placement, P′, by sliding all rectangles downwards in the y-direction as far as possible without creating overlap (see Figure 2 (b)). Each rectangle i now has coordinates x′_i = x_i and y′_i ≤ y_i. Note that the x-component of the center of gravity of P′ is the same as that of P, since we do not alter the x-coordinates. Since all rectangles have height 1 and we cannot move any rectangle to a lower y-coordinate, we know that all y′_i ∈ {−1/2, 1/2, 3/2, 5/2, 7/2, ...} (see Figure 2 (c)).

Let S be the group of rectangles with y′_i = −1/2 and let S′ be the set of all other rectangles. The total area occupied by the rectangles in S cannot exceed M/2, since they all have height one and the width of the container is M/2, so ∑_{i∈S} a_i ≤ M/2.

For a rectangle i in P which extends beyond the horizontal line y = 1 (y_i + 1/2 > 1), we have either slid i down such that y′_i < y_i, or we have y′_i ≥ 3/2, since y_i > 1/2 and every rectangle is 1 unit high. Assume at least one such rectangle exists. Then

0 = ∑_{i=1}^n a_i y_i ≥ ∑_{i=1}^n a_i y′_i = −(1/2) ∑_{i∈S} a_i + ∑_{i∈S′} a_i y′_i ≥ −M/4 + (1/2) ∑_{i∈S′} a_i ≥ −M/4 + M/4 = 0,   (8)

where at least one inequality is strict: either rectangle i was slid down, making the first inequality strict, or y′_i ≥ 3/2, making the second inequality strict. This yields the contradiction 0 > 0, so the assumption must be wrong and no rectangle can stretch beyond the horizontal line y = 1 in P.

Therefore we can divide the rectangles of P into two groups, S = {i ∈ {1, ..., n} | y_i = −1/2} and S′ = {i ∈ {1, ..., n} | y_i = 1/2}, and we have

∑_{i∈S} a_i = ∑_{i∈S′} a_i = M/2,   (9)

since the width of the container is M/2. S and S′ can now be used as a solution to SPP.

The reduction may be done in polynomial time, and the resulting solution to DBWPP-2D represents a valid solution to SPP. A solution to DBWPP-2D may be verified in polynomial time, and therefore DBWPP-2D is NP-complete and BWPP-2D is NP-hard. A similar reasoning shows that CBWPP-2D is NP-hard.

1.1 Contribution

In this paper we present a heuristic for determining the optimal placement of a set of two- or three-dimensional items with respect to (1).
Our solution method is based on work by Egeblad et al. [A] on the strip-packing problem for irregular shapes. This method is briefly summarized in Section 4, while we detail how to accommodate the objective function and the additional constraints in Section 5. The primary element of the method by Egeblad et al. [A] is an algorithm which finds the minimal overlap translation of one shape within a placement of shapes in polynomial time, and we show how this algorithm is extended to accommodate the objective function F(P).

The remainder of the paper is organized as follows. In Section 2 we describe relevant work from the literature that considers similar problem formulations. We explain how to determine the center of mass of each individual item in Section 3. Finally, we present computational results for both the two- and three-dimensional variants, with both rectangular and irregular shapes, in Section 6.

2 Related Work

We briefly summarize the solution methods that consider balanced loadings. Amiouny et al. [1] consider the problem of placing items in an airplane or in a truck with two axles. They explain that although airplanes need not be loaded in complete balance along the long axis, it is generally favorable to do so, since the pilot will otherwise have to compensate with increased fuel consumption. In some countries there are limits on the maximal allowed weight on each axle of a truck, and they argue that if one wishes to minimize the maximal weight on the axles, the center of gravity of the load must be located halfway between them. The problem they consider is one-dimensional, and they present two approximation algorithms and a heuristic for solving it.
Both approximation algorithms have running time O(n log n) and guarantee that the center of gravity of the load does not deviate from the target point by more than half the size of the largest box. An alternative heuristic for this problem was later proposed by Mathur [10].

Fasano [8] describes a knapsack variant of the problem where one is given a set of items and must select and place a subset of items that maximizes the utilization of the container, given that the center of gravity must fall within a given convex domain. The problem considered is three-dimensional, with boxes and "tetris"-like shapes. His solution method is based on mixed integer programming. No results are reported. Wodziak and Fadel [14] describe a genetic algorithm for two-dimensional placement of rectangles with applications in the area of truck-loading.

Teng et al. [11] describe a method for optimizing the placement of parts in space satellites. The satellite is modeled as a section of a cone which is split in two parts by a plate. Parts must be mounted on either side of the plate such that the moment of inertia is minimized. First the two-dimensional problem of placing parts on the plate is solved, then the appropriate position of the plate is determined. The items they consider consist of both cylinders and boxes. The paper does not detail their placement approach, but they report that they use a Broyden-Fletcher-Goldfarb-Shanno (BFGS) variable metric unconstrained minimization method.

Gehring and Bortfeldt [9] consider the center of gravity during container loading. Their heuristic uses the wall-building paradigm commonly used for container loading problems, where the container is divided into smaller parts (walls) along its depth and the items in each part are placed by solving a two-dimensional packing problem. Each item must be contained fully within the wall it has been assigned to.
Once the packing heuristic completes, the walls of the best solution are interchanged, mirrored or rotated by 180 degrees to move the center of gravity of the overall placement towards the center of the container. A similar approach was used by Davies and Bischoff [4] for trucks. However, their walls were deeper, and they utilized random search to optimize for the center of gravity. Eley [5] also uses this principle. He reports that as few as 3-4 walls may be sufficient to achieve acceptable solutions.

3 The Center of Mass

We consider only items with their mass distributed uniformly throughout their volume. If an object has uniform mass density, its center of mass is referred to as its centroid. For rectangles and boxes the centroid coincides with the geometric center; for more complex items this is not the case. The standard way to calculate the centroid c = (x_c, y_c) ∈ R^2 of a polygon with n points (x_i, y_i) ∈ R^2, i ∈ {1, ..., n}, is

x_c = (1 / 6A) ∑_{i=1}^n (x_i + x_{i+1})(x_i y_{i+1} − x_{i+1} y_i),
y_c = (1 / 6A) ∑_{i=1}^n (y_i + y_{i+1})(x_i y_{i+1} − x_{i+1} y_i),
A = (1/2) ∑_{i=1}^n (x_i y_{i+1} − x_{i+1} y_i),

where (x_{n+1}, y_{n+1}) = (x_1, y_1) and A is the signed area of the polygon. For a tetrahedron with corner points (0, 0, 0)^t and a, b, c ∈ R^3, the signed volume V_i and centroid R_i can be calculated as

V_i = a · (b × c) / 6,   R_i = (a + b + c) / 4.

To calculate the centroid of a set of n tetrahedra one can use the cumulative expression

R = (∑_{i=1}^n V_i R_i) / (∑_{i=1}^n V_i),   (10)

where R_i is the centroid and V_i the signed volume of tetrahedron i. If a polyhedron is described by a set of n triangles in space, each with corner points a_i, b_i, c_i ∈ R^3, its centroid is equal to the centroid of the n tetrahedra with corner points (0, 0, 0)^t, a_i, b_i, c_i, i = 1, ..., n, which can be calculated using (10).

4 Finding a non-overlapping placement

In this section we briefly summarize the work by Faroe et al. [7] and Egeblad et al.
[A], which is the foundation of our heuristic. The solution process of Faroe et al. [7] and Egeblad et al. [A] revolves around solving the decision variant of a packing problem, in which a non-overlapping placement of a set of shapes must be found within given container dimensions. The method uses the metaheuristic Guided Local Search (GLS) by Voudouris and Tsang [12] to control the solution process. Egeblad et al. [A] begin by defining the total overlap in a placement P as

G(P) = ∑_{i,j∈P} overlap(i, j),

where overlap(i, j) is the area or volume of overlap of shapes i and j in placement P. In order to find a solution to the placement problem, G(P) is minimized by iteratively reducing the overlap. In each iteration a shape is translated in one of the two or three axis-parallel directions to the position that results in the least overlap. This procedure terminates once G(P) = 0 for the current placement P, since such a placement has no overlap and is a solution.

The metaheuristic GLS by Voudouris and Tsang [12] is applied to help escape local minima. GLS uses an augmented objective, which Faroe et al. [7] formulate as follows:

min H(P) = ∑_{p,q∈P} overlap(p, q) + λ ∑_{p,q∈P} I(p, q) ρ_{p,q},

where λ is a fine-tuning parameter for the heuristic, I(p, q) = 1 if shapes p and q overlap in placement P and 0 otherwise, and ρ_{p,q} is a penalty term. At each local minimum the value ρ_{p,q} is increased for the pair of shapes p and q for which the value overlap(p, q) / (ρ_{p,q} + 1) is largest. After ρ_{p,q} has been increased, the solution process commences with the modified objective function. The consequence of increasing the penalty term for p and q is that the heuristic will prefer a placement where p and q no longer overlap.
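A minimal sketch of this GLS bookkeeping, assuming the pairwise overlaps are supplied by some black-box overlap routine (here simply a dict keyed by shape pairs); all names are illustrative:

```python
def augmented_overlap(overlaps, penalties, lam):
    """H(P): total overlap plus lambda times the penalties of overlapping pairs."""
    h = sum(overlaps.values())
    h += lam * sum(penalties.get(pair, 0) for pair, ov in overlaps.items() if ov > 0)
    return h

def penalize_worst_pair(overlaps, penalties):
    """At a local minimum, increase rho for the pair maximizing overlap / (rho + 1)."""
    worst = max(overlaps, key=lambda pair: overlaps[pair] / (penalties.get(pair, 0) + 1))
    penalties[worst] = penalties.get(worst, 0) + 1
    return worst
```

Dividing by ρ + 1 in the selection rule means a pair that has already been penalized repeatedly is less likely to be penalized again, spreading the penalties over different overlapping pairs.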
For two dimensions, the minimal overlap translation for a polygon p is determined by an algorithm with running time O(mn log mn), where m is the number of edges of p and n is the number of edges belonging to other polygons. The algorithm works by considering every pair consisting of an edge from p and an edge from another shape. For each such pair, a piecewise quadratic function describes the size of the area between the two edges for all horizontal translations of p; each piece of the function corresponds to an interval of translations of p. The piecewise quadratic functions are added together into a single piecewise quadratic function that describes the total area of overlap. Penalties are accounted for by adding each penalty to this function in the intervals that correspond to a positive overlap between the shapes of that penalty. The combined overlap and penalties form one piecewise quadratic function H(P(p, t)) which describes the overlap and penalty value for each translation t of p along the x-axis relative to placement P:

H(P(p, t)) = ∑_{q∈P} overlap(p(t), q) + λ ∑_{q∈P} I(p(t), q) ρ_{p,q}.   (11)

Here p(t) is p translated t units along the x-axis and P(p, t) is the placement P with shape p(t) instead of p. The minimal overlap position is determined by traversing the piecewise quadratic function H(P(p, t)) from low to high values of t and analyzing each segment for minima. Only minima corresponding to translations within the container boundaries are considered, and the t attaining the global minimum among these is selected as the best translation. An example of H(P(p, t)) is illustrated in Figure 3.

A similar approach works for three-dimensional triangle-mesh polyhedra. Here the volume between each triangle from p and each of the triangles from all other polyhedra can be represented by a piecewise cubic function of the amount t by which p is translated.
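The final scan over such a piecewise function can be sketched as follows for the two-dimensional (quadratic) case, assuming each piece is given as coefficients over an interval; the construction of the segments from edge pairs is omitted, and all names are illustrative:

```python
def piecewise_quadratic_minimum(segments):
    """Scan segments (t_lo, t_hi, a, b, c), each meaning a*t^2 + b*t + c on
    [t_lo, t_hi], and return (t, value) of the global minimum."""
    best_t, best_val = None, float('inf')
    for t_lo, t_hi, a, b, c in segments:
        candidates = [t_lo, t_hi]
        if a > 0:                        # convex piece: check the parabola's vertex
            vertex = -b / (2 * a)
            if t_lo < vertex < t_hi:
                candidates.append(vertex)
        # Concave or linear pieces attain their minimum at an endpoint.
        for t in candidates:
            val = a * t * t + b * t + c
            if val < best_val:
                best_t, best_val = t, val
    return best_t, best_val
```

Restricting the scan to translations inside the container amounts to clipping the segment intervals before this loop.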
Combined, these constitute a complete piecewise cubic function which describes the volume of overlap and any penalties for all values of t.

5 Solving the Balance Problem

The procedure described in Section 4 was used by Faroe et al. [6] to optimize the placement of rectangles (modules) in final placement of VLSI design. There the objective was to find a non-overlapping placement of the modules with minimal wire-length. To solve this problem, Faroe et al. [6] minimized a sum of wire-length and overlap as the objective function. Since overlap minimization and wire-length minimization in VLSI design are counteracting objectives, the procedure by Faroe et al. slowly increased the weight of overlap in the objective function from 0 towards ∞, such that overlap minimization increasingly dominates the objective and a legal placement with little wire-length is eventually found. The described paradigm does not place any requirements on the initial placement and is therefore well suited for improving and legalizing an initial, possibly infeasible, placement.

Figure 3: Illustration of the procedure by Egeblad et al. [A]. (a) An overlapping placement containing one shape. (b) The total overlap described as a piecewise quadratic function of the horizontal translation of the shape.

Our approach follows the method by Faroe et al. [6] and considers the augmented gravity objective function:

minimize E(P) = H(P) + ωF(P).   (12)

E(P) is a weighted sum of overlap, balance and inertia in placement P. Starting from some initial placement, the objective value is iteratively reduced by translating shapes parallel to one of the two or three coordinate axes to the position with the minimal value of E. Like the previous methods, we use GLS to ensure that the local search can overcome local minima. The value of ω changes during the solution process.
Initially ω is set to a very large number so that only balance and moment of inertia are optimized. The value of ω is then slowly decreased, so that the balance objective F(P) is the most important objective in the beginning of the solution process while the heuristic gradually converges towards non-overlapping placements. Let ωi be the value of ω after i translations; then we set ωi+1 = ψωi. Empirically we found that ψ = 0.999 is a good compromise between fast convergence and high solution quality. Once a feasible non-overlapping placement has been determined, the current value of ω is multiplied by k and the procedure continues with the new objective function. Initially k is set to 2. If the objective value of the feasible solution, w.r.t. balance, is equal to that of the last found feasible solution, k is doubled until a new feasible placement is found, at which point k is reset to 2. This is done to avoid cyclic behavior.

The development of ω and the balance and inertia objective value for the instance ep2-100-U-R-75 (to be presented in Section 6) is depicted in Figure 4. Here the best found objective value (objective) is shown as a function of time along with the logarithm of the value of ω (omega). The first feasible solution is found after 5 seconds, and the heuristic continuously finds better solutions during the first 45 seconds.

The minimal overlap algorithm by Egeblad et al. is adapted to accommodate the augmented objective function E. The algorithm by Egeblad et al. searches a piecewise quadratic function for its global minimum. By adding F(P(p,t)) to this piecewise quadratic function, which accounts for the overlap, we can ensure that the minimum found by the algorithm is the translation of a polygon (polyhedron) that reduces E the most.

Placement of two- and three-dimensional irregular shapes for inertia moment and balance

Figure 4: The solution process of ep2-100-U-R-75.
The value of the balance and inertia objective function F for the placement P(p,t) can be determined as:

F(P(p,t)) = F(P) + α g_p (t − x_p) + β ( G_x + g_p (t − x_p) / Σ_{i∈P} g_i )²,   (13)

where G_x is the x-coordinate of the center of gravity for P. This shows that F(P(p,t)) can be evaluated in constant time for any t if F(P) is known, and that F(P(p,t)) is a quadratic function in t. Let E(P(p,t)) = H(P(p,t)) + ωF(P(p,t)) be the full augmented objective value of the placement P(p,t), comprising overlap, penalties, balance and inertia. In two dimensions E(P(p,t)) is the sum of a quadratic and a piecewise quadratic function and may therefore be described as a piecewise quadratic function. In three dimensions E(P(p,t)) is piecewise cubic. The same analysis performed on H(P(p,t)), as described at the end of Section 4, may be performed on E(P(p,t)) to determine the minimal translation with respect to the augmented gravity objective function E. Note that (13) shows that F is quadratic for translations of p that are parallel to the x-axis; a similar argument holds for the other two coordinate axes.

5.1 Target Region Constraints

To handle the CBWPP version of the problem, where the center of gravity must be kept within a rectangular region of the container (constraints (III)-(V)), all items are placed initially such that their center of gravity falls within this region. Let Gx(P) be the current x-coordinate of the center of gravity. For a shape p the x-coordinate of the center of gravity of the placement can be described as a function of the translation t of p along the x-axis:

Gx(P(p,t)) = Gx(P) + g_p (t − x_p) / Σ_{i∈P} g_i,   (14)

where x_p is the x-coordinate of p in P. Since Gx(P(p,t)) is linear we may determine values t0 and t1 such that Gx(P(p,t0)) = Lx and Gx(P(p,t1)) = Ux. We then only allow translations t of p such that t0 ≤ t ≤ t1, i.e. translations which retain the center of gravity within the target region.
5.2 Initial weight

The initial value of ω, ω0, should be carefully selected to match the problem instance. We assume that the heuristic starts with some initial random placement, P0. We then set:

ω0 = Σ_{p∈P} area(p) / F(P0),

where area(p) is the area of polygon (polyhedron) p. If we expect the total area to be of the same order of magnitude as the initial overlap, then this choice of ω0 ensures that the contributions of H(P0) and ω0 F(P0) in (12) are of the same order of magnitude. Since the overlap is rarely equal to the total area, the contribution of ω0 F(P0) is typically slightly larger than that of H(P0). This strategy was found empirically to weigh F high enough during the initial steps to find good solutions.

6 Computational Experiments

The necessary changes described above were added to the implementation of the heuristic described in Egeblad et al. [A]. The heuristic was implemented in C++ (gcc 4.1.3) and tests were conducted on a computer with two quad-core Intel Xeon 5355 2.66 GHz processors and 8 GB RAM. The implementation did not use any form of parallelism and therefore did not take advantage of the multi-CPU, multi-core system. To demonstrate the capabilities of the heuristic we conduct experiments in both two and three dimensions and for both BWPP and CBWPP. We also compare the results of the heuristic with random legal placements.

6.1 Two dimensions

To test the two-dimensional variant of the heuristic we use a total of 16 instances, which are described in Table 1. Both rectangular and irregular shapes were used for the computational experiments. The type of shapes is given in the column 'Shapes' and is either rectangular (Rect.) or irregular (Irre.). The column 'n' gives the number of shapes in each instance. We have selected 10 instances with irregular shapes which are commonly used for the Two-Dimensional Nesting Problem and are described in Egeblad et al.
[A], and 3 instances with rectangular shapes used for Two-Dimensional Orthogonal Knapsack Packing Problems which are described in Egeblad and Pisinger [C]. The dimensions of the container were adjusted appropriately for both sets of instances. For the instances from the Two-Dimensional Nesting Problem the strip-length was set to 105% of the best strip-length reported in Egeblad et al. [A]. For the knapsack instances the rectangles of the best found subset were used and the container dimensions were expanded to 105% in all directions. The modifications of the container dimensions were made to give the heuristic adequate freedom to optimize for balance and moment of inertia without too much emphasis on searching for non-overlapping legal placements. In addition three more instances were created. Two instances were created to test the heuristic's ability to handle odd-sized containers (one with rectangular shapes and one with irregular shapes); these are called Ship and Car. Finally an instance containing 64 10x10 squares within an 80x80 container (64-squares) was created to test the heuristic's ability to handle simple problems. The weight, gi, of each item i within each instance was set to area(i) · ri, where ri was chosen uniformly at random within the interval [1/2, 2].

Instance        n    Shapes  Target (x/y)  Target size (%)  Small W  Small H  Small Util.  Large W  Large H  Large Util.
64-squares      64   Rect.   1/2, 1/2      20               80       80       100.0        120      120      44.4
ep2-50-D-R-75   41   Rect.   1/2, 1/2      20               82.95    166.95   89.8         118.5    238.5    44.0
ep2-100-U-R-75  69   Rect.   2/3, 1/2      20               536.6    1073     88.5         766.5    1533     43.3
ep2-200-D-R-75  157  Rect.   1/2, 1/2      20               155.4    310.8    88.0         222      444      43.1
Albano          24   Irre.   1/2, 1/2      20               10453    5071.5   80.5         14934    7350     38.9
Dagli           30   Irre.   2/3, 1/2      30               63.0     62.1     77.8         90       90       37.6
Fu              12   Irre.   1/2, 1/2      40               33.6     39.33    82.0         48.0     57.0     39.6
Mao             20   Irre.   2/3, 2/3      20               1818     2677.5   77.2         2598     3825     37.8
Marques         24   Irre.   1/2, 1/2      30               81.9     107.64   81.6         117.0    156.0    39.4
Shapes0         43   Irre.   2/3, 1/2      30               63.0     41.4     61.2         90       60       29.6
Shapes2         28   Irre.   1/2, 1/2      20               28.65    15.52    73.6         22.5     40.5     35.6
Shirts          99   Irre.   2/3, 2/3      20               65.20    41.4     80.0         60       94.5     38.1
Swim            48   Irre.   1/2, 1/2      20               6494.3   5953.32  65.8         8628     9277.5   31.8
Trousers        64   Irre.   2/3, 1/2      30               251.50   81.765   82.5         364.5    118.5    39.8
Ship            83   Rect.   2/3, 1/2      20               558.0    600.0    89.3         -        -        -
Car             43   Irre.   2/3, 1/2      20               52       80       66.6         -        -        -

Table 1: Instances used for two-dimensional experiments.

To test different target center of gravity positions, the instances have their center of gravity target set to either 1/2 or 2/3 of both the x- and y-dimensions of the container. The target center position is written in the column 'Target' of Table 1. For CBWPP-2D we restrict the center of gravity to a region around the center of gravity target. The dimensions of this rectangular region are given as a percentage of the container dimensions in the column marked 'Target size'. This set of 16 instances we refer to as the set of small container instances. The set of small container instances with a rectangular container was copied to create a set of 14 large instances. The dimensions of the large instances were set to 150% of their original/best found value, but the weight of each item was kept intact. This allows us to compare the results of the heuristic for compact and less compact placements. The dimensions of small and large instances are in the columns 'Small' and 'Large', where 'W' is the width, 'H' is the height and 'Util.' is the utilization of the instance under the given container dimensions. Results of the test instances are listed in Table 2.
For each instance we list the results with respect to moment of inertia and center of gravity after 30, 120 and 300 seconds. The resulting distance to the target center of gravity is listed in the columns entitled 'COG' and the resulting moment of inertia in the columns entitled 'Moment'. The distance to the target center of gravity is reported as the Euclidean distance from the solution center of gravity to the target center of gravity, expressed as a percentage of the diagonal length of the container (√(W² + H²)). We do not report the total value of F(P), but it may be extrapolated from the container dimensions and the target center of gravity deviation. Each instance is tested using 3 different formulations. First, for solving the BWPP-2D, the heuristic runs 300 seconds with α, β, γ and δ all set to 1. This way the moment of inertia is optimized without sacrificing good solutions with respect to the center of gravity. The results of this test are listed in the first 3 columns of Table 2 with the incrementally best result reported after 30, 120 and 300 seconds. Secondly, 300 seconds with the CBWPP-2D formulation are reported in the column entitled '300 s. (CBWPP-2D)'. And finally, in the column entitled 'Random' we report the solutions achieved by legalizing random initial solutions using the procedure by Egeblad et al. [A]. Each instance was tested 10 times with 10 different random seeds, leading to 10 different initial placements for each formulation. We report the average of the best found result within the designated time-limit.

Figure 5: Best solutions for selected instances: Car (normal), Dagli (small, region constraints), ep2-100-U-R-75 (large, region constraints), ep2-100-D-R-75 (small, normal), Albano (small, normal), Trousers (small, region constraints), and Ship (normal).
For some instances, not all 10 seeds led to legal placements within the time limit; in this case the number of successful runs is reported as '(x)' in front of the associated results, and the average is taken over the successful runs. Inspection of the results reveals that for the small container instances the improvement of the combined objective function is on average 4.11 % between the first 30 and 120 seconds and 2.30 % during the last 180 seconds. Similarly, for the large container instances the improvements are 5.00 % between the results after 30 and 120 seconds and 1.16 % during the last 180 seconds. For the odd container instances the improvement between the first 30 and 120 seconds is 2.95 %, while it is 1.92 % during the last 180 seconds. This shows that most improvements occur during the first 120 seconds and relatively little improvement occurs during the last 180 seconds. For the small container instances the resulting moment of inertia is, surprisingly, 0.43 % worse on average when the heuristic solves for CBWPP-2D instead of BWPP-2D, and the average distance between the target center of gravity and the actual center of gravity is also larger. For the large container instances the moment of inertia is 0.2 % better and for the odd container instances it is 0.84 % better when the heuristic optimizes for a specific center of gravity region. This shows that little is gained by optimizing for a target region for the center of gravity. A likely cause is that there is a strong correlation between solutions with low moment of inertia and solutions with the center of gravity close to the target center of gravity. Another possible cause is that the limited target region limits the type of changes the heuristic can conduct during local search. The difference between considering the objective function during optimization and random placements can be seen when comparing the results of the heuristic after 300 seconds with the results of the random placements.

Small Container Instances
Instance         30 s.             120 s.            300 s.            300 s. (region)       Random
                 COG   Moment      COG   Moment      COG   Moment      COG       Moment      COG    Moment
64-squares       0.53  1.69·10^5   0.02  1.67·10^5   0.01  1.66·10^5   0.79      1.68·10^5   0.05   2.44·10^5
Albano           2.42  1.11·10^9   1.78  1.06·10^9   1.56  1.05·10^9   1.91      1.06·10^9   3.31   1.50·10^9
Dagli            9.34  1.08·10^5   7.40  1.03·10^5   6.93  1.01·10^5   7.53      1.01·10^5   11.9   1.25·10^5
Fu               3.99  2.46·10^4   3.26  2.33·10^4   3.09  2.24·10^4   (9) 3.14  2.31·10^4   4.43   2.90·10^4
Marques          1.71  1.69·10^5   1.53  1.63·10^5   1.54  1.61·10^5   1.21      1.62·10^5   3.96   2.28·10^5
Shapes0     (9) 11.2   7.14·10^4   10.3  6.87·10^4   8.96  6.67·10^4   10.1      6.47·10^4   13.7   8.36·10^4
Shapes2          2.05  1.19·10^4   2.05  1.14·10^4   1.89  1.11·10^4   1.90      1.11·10^4   2.31   1.41·10^4
Shirts           11.7  1.15·10^5   11.7  1.08·10^5   11.1  1.04·10^5   12.1      1.05·10^5   15.0   1.27·10^5
Swim        (1)  1.25  7.96·10^8   (8) 1.66  8.13·10^8  (9) 1.12  8.03·10^8  1.59  7.93·10^8  1.84  8.40·10^8
Trousers    (9)  12.6  1.25·10^6   11.6  1.19·10^6   10.7  1.15·10^6   (6) 8.63  1.10·10^6   14.9   1.44·10^6
ep2-50-D-R-75    1.05  3.88·10^7   1.21  3.85·10^7   0.73  3.82·10^7   1.13      3.83·10^7   1.70   4.59·10^7
ep2-100-U-R-75   5.85  6.50·10^10  6.03  6.40·10^10  5.91  6.36·10^10  4.45      6.73·10^10  8.42   7.97·10^10
ep2-200-D-R-75   0.64  4.20·10^8   0.52  4.02·10^8   0.63  3.96·10^8   0.57      3.96·10^8   1.37   5.09·10^8

Large Container Instances
64-squares       0.50  1.65·10^5   0.40  1.61·10^5   0.28  1.59·10^5   0.60      1.59·10^5   10.7   5.71·10^5
Albano           0.67  8.15·10^8   0.60  8.07·10^8   0.67  8.02·10^8   0.76      8.02·10^8   2.48   1.80·10^9
Dagli            2.04  9.08·10^4   1.88  8.88·10^4   1.95  8.80·10^4   2.99      8.78·10^4   7.81   1.98·10^5
Fu               0.81  1.89·10^4   0.72  1.87·10^4   0.77  1.86·10^4   3.15      1.87·10^4   0.24   4.25·10^4
Marques          0.58  1.50·10^5   0.00  1.49·10^5   0.00  1.49·10^5   0.72      1.48·10^5   0.03   2.99·10^5
Shapes0          0.92  4.46·10^4   0.78  4.37·10^4   0.79  4.35·10^4   1.38      4.34·10^4   9.83   1.38·10^5
Shapes2          0.62  8.51·10^3   0.55  8.43·10^3   0.57  8.31·10^3   0.66      8.29·10^3   1.86   1.96·10^4
Shirts           3.35  9.04·10^4   1.48  6.97·10^4   1.22  6.66·10^4   1.58      6.66·10^4   9.20   2.24·10^5
Swim             0.94  8.74·10^8   0.68  7.19·10^8   0.52  7.02·10^8   0.89      6.96·10^8   2.26   1.50·10^9
Trousers         3.94  7.12·10^5   4.09  6.53·10^5   3.90  6.48·10^5   4.53      6.36·10^5   8.85   2.05·10^6
ep2-50-D-R-75    0.44  2.96·10^7   0.52  2.95·10^7   0.40  2.94·10^7   0.59      2.95·10^7   4.04   6.19·10^7
ep2-100-U-R-75   1.80  5.02·10^10  1.91  4.95·10^10  1.95  4.94·10^10  2.98      4.98·10^10  8.48   1.38·10^11
ep2-200-D-R-75   0.07  3.28·10^8   0.06  3.22·10^8   0.07  3.17·10^8   0.10      3.17·10^8   0.55   8.54·10^8

Irregular Container Instances
Ship        (9)  5.98  4.36·10^8   (9) 5.91  4.33·10^8  (9) 5.87  4.25·10^8  6.25  4.25·10^8  10.44  5.87·10^8
Car              7.30  6.81·10^4   6.74  6.55·10^4   6.42  6.47·10^4   6.72      6.35·10^4   13.86  1.04·10^5

Table 2: Results of two-dimensional experiments.
For the small container instances the value of the combined objective function is 31.6 % higher for the random placements than for those produced by the heuristic. The moment of inertia is 28.0 % higher and the distance between the actual center of gravity and the target center of gravity is 294.9 % higher. For the large container instances the values are respectively 178.0 %, 162.9 % and 15170 %, and for the odd container instances 68.8 %, 49.6 % and 292.3 %. This shows that random placements are far from optimal with respect to balance and moment of inertia. The objective value for the large container instances is on average 17.44 % better than for the small container instances for unconstrained solutions. Since the same set of items was used, but with larger container dimensions, this shows that the heuristic performs only slightly better even when the container dimensions are larger, which should simplify the problem of finding feasible solutions and give greater freedom for the positions of items. Example solutions are shown in Figure 5. The target center of gravity is indicated as the intersection of the dashed lines, while the actual center of gravity is indicated as the intersection of the dotted lines. Target regions are indicated as dashed rectangles for the solutions of the problems with a target region.

6.2 Three dimensions

A similar set of tests was conducted for a three-dimensional variant of the heuristic. Three instances with rectangular items and two instances with irregularly shaped items were used for testing. The rectangular instances are based on solutions to knapsack problems reported in [C], while the irregular instances are based on the best found solutions to the three-dimensional strip-packing problems reported by Egeblad et al. [B].
As for the two-dimensional instances, the input containers for the instances with rectangular items were expanded by 5 % in every direction (small container instances) and by 50 % in every direction (large container instances). The instances are listed in Table 3. For the instances with irregular shapes the two fixed container dimensions were kept intact and the height (strip-length) was set to 110 % of the average height reported in Egeblad and Pisinger [C]. For ep3-60-C-R-50 and stoyan3 the target center of gravity is set close to the bottom center of the container; (1/2 W, 1/2 H, 1/3 L) for small sized containers and (1/2 W, 1/2 H, 1/4 L) for large sized containers. The dimensions of the instances are reported in the columns entitled W (width), H (height) and L (length).

Instance       n   Shapes  Target  Target size (%)  Small W  Small H  Small L  Small Util.  Large W  Large H  Large L  Large Util.
ep3-60-C-R-50  46  Rect.   Bottom  20               128.1    257.25   128.1    68.9         241.5    483.0    241.5    31.7
ep3-40-L-C-90  24  Rect.   Center  20               169.05   338.1    169.05   69.1         183.0    367.0    183.0    25.8
ep3-60-C-R-90  44  Rect.   Center  20               205.8    411.0    205.8    68.6         294.0    588.0    294.0    28.9
stoyan2        12  Irre.   Center  20               15.0     20.9     14.0     38.4         22.5     32.25    21.0     11.6
stoyan3        25  Irre.   Bottom  25               15.0     31.9     16.0     40.27        22.5     46.95    24.0     13.3

Table 3: Instances used for three-dimensional experiments.

The results of running the heuristic are reported in Table 4, using the same terminology as was used for the two-dimensional results. Several example solutions are shown in Figure 6. The average improvement from 30 to 120 seconds of running time is respectively 3.42 % and 0.82 % for the small and large container instances, and respectively 4.03 % and 0.41 % from 120 to 300 seconds. This shows that, although little improvement occurs for the large instances during the last 180 seconds, there is still substantial improvement for the small container instances. The value of the moment of inertia is 2.14 % and 0.88 % better for respectively the small and large container instances when the heuristic optimizes for CBWPP-3D rather than BWPP-3D. This matches the results from the two-dimensional experiments, which showed that little is gained with the CBWPP formulation. For the small container instances the difference between the random solutions and the heuristic solutions is 28.9 %, 25.1 % and 354 % for respectively the full objective function, the moment of inertia, and the center of gravity components of the objective function. This shows that the three-dimensional random solutions are suboptimal and matches the results for two dimensions. As for the two-dimensional instances, the instances with large container dimensions only have an objective value which is 15.24 % better on average than the instances with small container dimensions. This, again, shows that the heuristic performs only slightly better when it is easier to find a feasible placement.

7 Conclusion

We have described a simple approach for solving the two- and three-dimensional placement problem of shapes with respect to balance and moment of inertia. The objective is to minimize the deviation between the actual center of gravity and a target center of gravity, as well as the moment of inertia of the shapes. The solution method uses a technique previously used by Egeblad et al. [A] that finds feasible placements of shapes by minimizing the amount of overlap in the placement. The objective function we minimize here is a weighted sum of overlap and the balance objective. Initially the method focuses on ensuring proper balance, and slowly increases the weight of overlap in the objective function. Once a placement with no overlap has been found, the method again reduces the weight of the overlap term, to continue the search for other feasible placements that are better with respect to balance.
Our method expands on the work by Egeblad et al. [A] to efficiently search the local search neighborhood consisting of axis-aligned translations. We show that the balance terms of the objective function for all axis-aligned translations of a single shape can be described by a quadratic function. This quadratic function is added to the piecewise quadratic function which describes overlap, and enables us to find the best translation for a shape efficiently. Good results are returned within minutes even for instances with more than 150 rectangles; however, our implementation is general, and quality results may be obtained even faster for rectangular instances with an implementation specifically designed for rectangular placement. The approach is, to our knowledge, the only method capable of optimizing the center of gravity and moment of inertia for irregular shapes in both two and three dimensions.

7.1 Acknowledgements

The author wishes to thank Benny K. Nielsen and David Pisinger for inspiring and fruitful discussions.

Small Container
Instance       30 s.                  120 s.             300 s.             300 s. (region)        Random
               COG       Moment       COG   Moment       COG   Moment       COG       Moment       COG   Moment
ep3-60-C-R-50  6.34      9.16·10^10   6.19  9.15·10^10   6.16  7.70·10^10   (4) 6.33  7.60·10^10   13.1  1.21·10^11
ep3-40-L-C-90  0.90      2.19·10^10   0.81  2.19·10^10   0.89  2.19·10^10   1.22      1.50·10^10   3.05  2.59·10^10
ep3-60-C-R-90  2.51      2.44·10^11   3.00  2.43·10^11   2.56  2.41·10^11   2.85      2.14·10^11   5.46  3.04·10^11
stoyan2        (9) 7.28  1.92·10^4    4.37  2.08·10^4    4.76  2.06·10^4    4.03      1.57·10^4    5.42  2.08·10^4
stoyan3        (4) 8.22  5.79·10^4    10.2  6.64·10^4    9.26  6.38·10^4    (9) 6.94  5.29·10^4    6.52  5.95·10^4

Large Container
ep3-60-C-R-50  0.76      1.57·10^11   0.75  1.56·10^11   0.78  1.56·10^11   1.09      9.16·10^10   7.11  2.84·10^11
ep3-40-L-C-90  0.60      6.68·10^10   0.54  6.68·10^10   0.54  6.68·10^10   0.70      1.77·10^10   5.35  6.75·10^10
ep3-60-C-R-90  4.17      3.20·10^11   4.14  3.22·10^11   4.14  3.21·10^11   4.89      2.76·10^11   16.6  1.03·10^12
stoyan2        0.95      3.24·10^4    0.94  2.93·10^4    0.89  2.70·10^4    0.89      8.61·10^3    3.50  4.49·10^4
stoyan3        2.41      3.02·10^4    2.38  3.01·10^4    2.38  3.01·10^4    2.68      2.93·10^4    10.1  1.17·10^5

Table 4: Results of three-dimensional experiments.

Figure 6: Best solutions for selected three-dimensional instances (rotated 90 degrees): ep3-60-C-R-50 (small, normal), ep3-60-C-R-50 (large, normal), ep3-40-L-C-90 (small, region constraints), ep3-40-L-C-90 (large, region constraints), ep3-60-C-R-90 (small, normal), ep3-60-C-R-90 (large, region constraints), stoyan2 (small, region constraints), stoyan3 (large, normal).

References

[A] J. Egeblad, B. K. Nielsen, and A. Odgaard. Fast neighborhood search for two- and three-dimensional nesting problems. European Journal of Operational Research, 183(3):1249–1266, 2007.

[B] J. Egeblad, B. K. Nielsen, and M. Brazil. Translational packing of arbitrary polytopes. Computational Geometry: Theory and Applications, 2008. Accepted for publication.

[C] J. Egeblad and D. Pisinger. Heuristic approaches for the two- and three-dimensional knapsack packing problem. Computers and Operations Research, 2007. In press (available online).
[1] S. V. Amiouny, J. J. Bartholdi III, J. H. Vande Vate, and J. Zhang. Balanced loading. Operations Research, 40(2):238–246, 1992.

[2] J. Cagan, K. Shimada, and S. Yin. A survey of computational approaches to three-dimensional layout problems. Computer-Aided Design, 34:597–611, 2002.

[3] T. H. Cormen, C. E. Leiserson, and R. L. Rivest. Introduction to Algorithms. MIT Press, 1990.

[4] A. P. Davies and E. E. Bischoff. Weight distribution considerations in container loading. European Journal of Operational Research, 114:509–527, 1999.

[5] M. Eley. Solving container loading problems by block arrangement. European Journal of Operational Research, 141(2):393–409, 2002.

[6] O. Faroe, D. Pisinger, and M. Zachariasen. Guided local search for final placement in VLSI design. Journal of Heuristics, 9(3):269–295, 2003.

[7] O. Faroe, D. Pisinger, and M. Zachariasen. Guided local search for the three-dimensional bin packing problem. INFORMS Journal on Computing, 15(3):267–283, 2003.

[8] G. Fasano. A MIP approach for some practical packing problems: Balancing constraints and tetris-like items. 4OR, 2(2):161–174, 2004.

[9] H. Gehring and A. Bortfeldt. A genetic algorithm for solving the container loading problem. International Transactions in Operational Research, 4:401–418, 1997.

[10] K. Mathur. An integer-programming-based heuristic for the balanced loading problem. Operations Research Letters, 22(1):19–25, 1998.

[11] H. Teng, S. Sun, D. Liu, and Y. Li. Layout optimization for the objects located within a rotating vessel: a three-dimensional packing problem with behavioral constraints. Computers and Operations Research, 28(6):521–535, 2001.

[12] C. Voudouris and E. Tsang. Guided local search. Technical Report CSM-147, Department of Computer Science, University of Essex, Colchester, CO4 3SQ, UK, August 1995.

[13] G. Wäscher, H. Haussner, and H. Schumann. An improved typology of cutting and packing problems. European Journal of Operational Research, 2006.
In Press.

[14] J. Wodziak and G. Fadel. Packing and optimizing the center of gravity location using a genetic algorithm. Draft, 2008.

Three-dimensional Constrained Capsule Placement for Coarse Grained Tertiary RNA Structure Prediction

J. Egeblad∗  L. Guibas†  M. Jonikas‡  A. Laederach§

Abstract

We present a novel technique to solve problems in which a set of capsules is to be placed within an arbitrary container. The placements may be further constrained by inter-capsule distances and angles. We also study optimization variants of the problem where capsules must be placed such that some objective is optimized. The model has applications within coarse grained RNA tertiary structure prediction, where we model the molecule as a network of inter-connected capsules which represent alpha-helices that must be placed within some molecular surface. In our model such a surface is represented by a triangle mesh. The problem is solved heuristically via an iterative local search method which utilizes the metaheuristic Guided Local Search. The local search neighborhood consists of all axis-aligned translations of a single capsule and is searched efficiently using a polynomial time algorithm. Results show that the method is capable of finding feasible placements of networks consisting of up to 50 capsules under compact conditions. Experiments with a model of an RNA molecule consisting of 7 helices and a molecular envelope return helical placements with an average RMSD of 20 Å to the crystal structure.

Keywords: Cylinder packing, RNA prediction, Irregular Packing

1 Introduction

Knowing the three-dimensional structure of RNA molecules is vital for studying and determining their function. While X-ray crystallography may be used to determine the structure experimentally, this is a time- and labour-consuming process, and methods which can accurately predict the structure computationally may help scientists to uncover the mysteries of these molecules.
The secondary structure of an RNA molecule consists of a list of base pairs. The tertiary structure consists of a set of three-dimensional coordinates for each atom. Among the successful methods for secondary structure prediction are the dynamic programming method Mfold [25] and the probabilistic method ContraFold [5]. Prediction of the tertiary structure of RNA may be done based on the base-paired regions of a secondary structure prediction. In this paper we consider a coarse grained model for RNA structure prediction, in which we assume that the proper secondary structure may be accurately predicted, and that a set of helical regions can be deduced from it. In our model we treat each helical region as a rigid body. We let each α-helix i, from the predicted secondary structure, be represented by a cylinder with spherical ends (capsule) whose radius, ri, and length, li, correspond to the extents of the helix it represents. Since atoms cannot overlap, we require that the helices do not overlap. Base-pairs between two helices are modeled as distance constraints (links), so that the endpoints of the capsules corresponding to the helices are required to be within a distance of each other that corresponds to the estimated physical distance between the two helices. Additional data may be available.

∗ Computer Science Department, University of Copenhagen, DK-2100 Cph Ø, Denmark. E-mail: [email protected]
† Computer Science Department, Stanford University, Stanford, CA 94305, USA. E-mail: [email protected]
‡ Bioengineering Department, Stanford University, Stanford, CA 94305, USA. E-mail: [email protected]
§ Department of Biomedical Sciences, Wadsworth Center, New Scotland Av., Albany, NY 12208, USA. E-mail: [email protected]

Figure 1: (a) Example capsule. (b) Coarse model for tertiary structure used in this paper. (c) Example placement within triangle mesh (link constraints are indicated with green beams).
In some cases scientists will know the molecular envelope of the RNA molecule, which can be determined by small-angle X-ray scattering (SAXS), and we may require that all capsules must be located completely within the envelope. By observing experimental data, it may also be possible to identify parts of the helices that are exposed and should therefore lie close to the molecular surface. Finally, angles between helices may be deduced and used to describe the relative orientation of two capsules. A sketch of the model is depicted in Figure 1 (a,b) along with a real placement of capsules in Figure 1 (c). In this paper we focus on the problem of determining one or several placements of the capsules given the requirements. As we will show in Section 2, this problem is NP-complete. To solve this problem we define an objective function in which any violation of the requirements contributes positively to the objective value. An objective value of zero implies that we have found a feasible placement of the helices, that is, a placement where all requirements are met. The method begins with an infeasible random placement and iteratively refines the placement until an objective value of zero is reached. In each iteration of the refinement we translate or rotate a single capsule to a position that reduces the value of the objective function. Current techniques for RNA tertiary structure prediction may consider molecules with approximately 50 nucleotides. Since our method considers pure geometry, it may open the door for rough placements of RNA structures with hundreds of nucleotides, which can later be refined by other techniques that accurately determine the positions of the individual nucleotides. The remainder of this paper is organized as follows: In Section 2 we give the exact problem formulation. In Section 3 we describe related scientific work. In Section 4 we outline the solution method and in Section 5 we describe an efficient implementation of the local search moves.
Constrained Capsule Placement for Coarse Grained Tertiary RNA Structure Prediction

2 Problem Formulation

We consider a simplified model for coarse-grained RNA prediction consisting of a network of n interconnected helices represented as capsules with radius ri and length li for i = 1, ..., n. A capsule i in space may be represented by a coordinate pi, a direction vector vi and a radius ri, where capsule i is the set of points

{p ∈ R³ | min_{t∈[0,1]} ||p − (pi + t vi)|| ≤ ri},   (1)

i.e. the points within a distance ri of the line segment between pi and pi + vi (see Figure 1 (a)). Each capsule is defined in a local coordinate system where the endpoints pi and pi + vi of capsule i are (−li/2, 0, 0)ᵀ and (li/2, 0, 0)ᵀ. A placement of n capsules consists of a transformation for each capsule from its local coordinate system to a global coordinate system. Each transformation consists of a rotation and a translation. The translation is given by the vector ci ∈ R³. The rotation is represented by the matrix Mrot(θi, φi), which first rotates the coordinate system θi ∈ A radians around the z-axis, then φi ∈ A radians around the x-axis. In this text it is assumed that the set of allowed angles A is a discrete set. Capsule i's endpoints in the global coordinate system for a placement P = (c, θ, φ) ∈ R³ⁿ × Aⁿ × Aⁿ are pi = Mrot(θi, φi)(−li/2, 0, 0)ᵀ + ci and pi + vi = Mrot(θi, φi)(li/2, 0, 0)ᵀ + ci.

Links, which describe how individual helices are connected, are modeled as maximal distance constraints. Links are numbered 1 to m, and link i connects capsule s(i) with capsule e(i). For a feasible placement we require that the distance between the endpoints ps(i) + vs(i) and pe(i) is less than b(i), i.e.:

||ps(i) + vs(i) − pe(i)||² ≤ b(i)².

We also wish to ensure that all capsules lie within some molecular envelope, E, whose surface is closed and represented by a set of non-intersecting triangles Es.
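The endpoint transformation above can be sketched in a few lines of Python; the function names are ours, and the rotation order (θ around the z-axis first, then φ around the x-axis) follows the definition of Mrot(θi, φi):

```python
import math

def rot_matrix(theta, phi):
    """Mrot(theta, phi): rotate theta radians around z, then phi around x."""
    cz, sz = math.cos(theta), math.sin(theta)
    cx, sx = math.cos(phi), math.sin(phi)
    Rz = [[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]]
    Rx = [[1, 0, 0], [0, cx, -sx], [0, sx, cx]]
    # Mrot = Rx * Rz, so the z-rotation is applied first.
    return [[sum(Rx[i][k] * Rz[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def endpoints(l, theta, phi, c):
    """Global endpoints of a capsule of length l under rotation (theta, phi)
    and translation c, starting from the local endpoints (±l/2, 0, 0)."""
    M = rot_matrix(theta, phi)
    local = [(-l / 2, 0.0, 0.0), (l / 2, 0.0, 0.0)]
    return [tuple(sum(M[i][j] * p[j] for j in range(3)) + c[i] for i in range(3))
            for p in local]
```

For instance, an unrotated capsule of length 2 translated by (1, 1, 1) gets endpoints (0, 1, 1) and (2, 1, 1).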
A point p is enclosed by Es if any ray from p intersects Es an odd number of times. In other words, for a point p ∈ R³, let c(p) ∈ Z⁺ be the number of times the ray r⁻(p) = sa + p, for any vector a and s < 0, intersects any triangle T ∈ Es; then the set of points enclosed by the envelope is

E = {p ∈ R³ | c(p) ≡ 1 mod 2},

which, since the surface is closed, is independent of the choice of a. To ensure that a capsule i is completely within E, we require that both its endpoints are in E and that the minimal distance from the line segment of capsule i to any triangle in Es is larger than ri. Note that such a capsule cannot cross the surface, since that would imply a minimal distance of 0.

We may also consider angles between capsules. For each link i we require that the angle in radians between the incident pair of capsules, s(i) and e(i), falls within some interval [αi⁻, αi⁺] ⊂ ]−π, π].

The primary objective in this paper is to find a placement P such that all the constraints listed above are met, and we will refer to this problem as the Interconnected Capsule Placement Decision Problem (ICPDP).

Theorem 5. The Interconnected Capsule Placement Decision Problem with rational coordinates and values is NP-complete.

Proof. Assume, for now, that we can determine in polynomial time whether a capsule is located within a triangle envelope and whether two capsules overlap (this will be shown in Section 5). Then we may determine in polynomial time whether a placement is feasible, based on the coordinates of the capsules, and we can therefore use a placement as a polynomial-size certificate. To prove that the problem is NP-complete we show that if we can solve any instance of ICPDP, we can solve any instance of the Set Partition Problem (SPP), which is NP-complete (see e.g. [8]).

Figure 2: Placement of the capsules that corresponds to the set of items {1, 1, 1, 1, 2, 2} in the proof of Theorem 5. Here capsules are either 1 − 1 = 0 or 2 − 1 = 1 units in length and have radius 1/2.
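The parity test for envelope membership can be sketched as follows. `ray_tri` is a standard ray/triangle intersection (Möller–Trumbore), and the surface is a list of triangles; this is an illustration of the membership test under our own names, not the paper's implementation:

```python
def ray_tri(origin, d, tri, eps=1e-9):
    """Moller-Trumbore ray/triangle test: return the ray parameter s >= 0
    of the hit point, or None if the ray misses the triangle."""
    def cross(u, v):
        return [u[1]*v[2]-u[2]*v[1], u[2]*v[0]-u[0]*v[2], u[0]*v[1]-u[1]*v[0]]
    def dot(u, v):
        return sum(x * y for x, y in zip(u, v))
    a, b, c = tri
    e1 = [b[i] - a[i] for i in range(3)]
    e2 = [c[i] - a[i] for i in range(3)]
    p = cross(d, e2)
    det = dot(e1, p)
    if abs(det) < eps:
        return None                    # ray parallel to the triangle plane
    t = [origin[i] - a[i] for i in range(3)]
    u = dot(t, p) / det
    if u < 0 or u > 1:
        return None
    q = cross(t, e1)
    v = dot(d, q) / det
    if v < 0 or u + v > 1:
        return None
    s = dot(e2, q) / det
    return s if s >= 0 else None

def inside(p, surface, d=(0.12, 0.34, 0.93)):
    """p is inside the closed surface iff a ray from p crosses it an odd
    number of times (the direction d is arbitrary but fixed)."""
    hits = sum(1 for tri in surface if ray_tri(p, d, tri) is not None)
    return hits % 2 == 1
```

For a tetrahedron, a point near its centroid yields one crossing (odd, inside) while a far-away point yields zero (even, outside).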
SPP is defined as follows: Given a set of items S, each with a positive integer value ai ∈ N for i ∈ S, determine whether we can divide S into two disjoint sets S′ and S″ such that S′ ∪ S″ = S and Σ_{i∈S′} ai = Σ_{i∈S″} ai = ½ Σ_{i∈S} ai.

Given an instance I of SPP, we may create an instance I′ of ICPDP as follows. For each item i ∈ S from I create a capsule i in I′ with length li = ai − 1 and radius 1/2. Set M = Σ_{i∈S} ai and create an envelope with the feasible domain C = [0, M/2] × [0, 2] × [0, 1]. Note that the only feasible z-coordinate for the endpoints of any capsule is 1/2. We also fix the feasible set of rotation angles to {0}, so no rotation is allowed.

Given a solution to I, we may create a solution to I′ in the following way. Assume (WLOG) that the items are enumerated such that S′ = {i | 1 ≤ i ≤ |S′|} and S″ = {i | |S′| + 1 ≤ i ≤ |S|}. For each capsule i we set vi = (ai − 1, 0, 0) and set ci = (½ai + Σ_{j<i} aj, 1/2, 1/2) for i ∈ S′ and ci = (½ai + Σ_{|S′|<j<i} aj, 3/2, 1/2) for i ∈ S″ (see Figure 2). Note that no two capsules overlap with this assignment, since the length of capsule i is ai − 1.

Conversely, assume we have a solution placement for I′; then divide the capsules into two sets, S′ = {i | ci,y < 1}, consisting of the capsules with center y-coordinate less than 1, and S″ = S \ S′. Due to the dimensions of the capsules, the fact that rotations are not allowed, and the fact that no two capsules overlap in the placement, we can deduce that any feasible placement must be similar to the one shown in Figure 2, where the capsules are divided into two rows and therefore

Σ_{i∈S′} (ai − 1 + 2ri) = Σ_{i∈S′} ai ≤ M/2.

Similarly, Σ_{i∈S″} ai ≤ M/2, and since the two sums add up to M, both must equal M/2. This shows that S′ and S″ constitute a solution to I, which completes the proof that ICPDP is NP-complete.

3 Related work

This paper falls between the fields of RNA structure prediction and optimization of packing and layout problems, and relevant work from both fields is briefly discussed in the following.
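The construction in the proof is mechanical and can be sketched directly; the record layout and helper names below are ours, and `placement_from_partition` produces the two-row placement of Figure 2 from a given partition:

```python
def icpdp_instance(items):
    """Build the ICPDP instance of the reduction: capsule i has length a_i - 1
    and radius 1/2, the envelope is the box [0, M/2] x [0, 2] x [0, 1],
    and rotations are fixed to zero."""
    M = sum(items)
    capsules = [{"length": a - 1, "radius": 0.5} for a in items]
    envelope = ((0.0, M / 2), (0.0, 2.0), (0.0, 1.0))
    angles = {0.0}
    return capsules, envelope, angles

def placement_from_partition(items, part):
    """Centers c_i for a 0/1 partition: row 0 at y = 1/2, row 1 at y = 3/2.
    Each capsule occupies a_i units of x (body plus two radius-1/2 caps)."""
    offsets = [0.0, 0.0]            # next free x-coordinate in each row
    centers = []
    for a, row in zip(items, part):
        centers.append((offsets[row] + a / 2, 0.5 + row, 0.5))
        offsets[row] += a
    return centers
```

With items {1, 1, 1, 1, 2, 2} and the partition {1, 1, 2} / {1, 1, 2}, each row packs into x ∈ [0, M/2] = [0, 4], so the placement is feasible exactly because the partition is balanced.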
3.1 RNA Structure Prediction

In both the survey on RNA structure prediction by Shapiro et al. [20] and that by Capriotti and Marti-Renom [2], a section is devoted to tertiary structure prediction of RNA. Presently, most methods for tertiary structure prediction rely on some form of human assistance. The Erna-3D program by Muller [19] builds helices based on the secondary structure. The helices are combined to form a complete tertiary structure. The program is based on human manipulation of the generated structures at different levels of detail, with the highest level consisting of complete helices. Massire and Westhof [17] present a tool that builds the tertiary structure based on a library of RNA motifs. Once constructed, the structures can be manipulated interactively. MC-Sym by Major [15] also builds 3D structures from known 3D structures, and allows interactive specification of structural constraints. The 3D structures may be further refined using molecular dynamics simulation, which minimizes their energy. The RNA2D3D program by Martinez et al. [16] generates helices from the secondary structure and spaces atomic models of nucleotides evenly on a backbone, which is used to create the three-dimensional structure by winding. This first-order representation may then be further refined interactively and by molecular dynamics. Das and Baker [3] present a procedure inspired by the Rosetta low-resolution protein structure prediction method. The method assembles RNA fragments controlled by a Monte Carlo method in order to minimize a knowledge-based energy function and is able to accurately predict structures consisting of approximately 30 nucleotides. Finally, Ding et al. [4] use discrete molecular dynamics (DMD) to fold structures consisting of up to 100 nucleotides, although the best results are reached for fewer than 50 nucleotides.
Some geometric aspects of RNA structures were analyzed by Hyeon et al. [12]. The RNA molecules studied were found to be more aspherical and prolate than proteins. Furthermore, the radius of gyration, which determines the compactness of the molecules, was found to be consistently RG = 5.5 Å · N^(1/3) for N nucleotides, which is less than the compactness of proteins.

3.2 Packing and Layout

Since the methods that we use in this paper have previously been applied to packing and layout problems, we briefly consider this field. Packing problems of non-rectangular shapes in three dimensions have been considered by several authors. Stoyan et al. [22] consider optimal packing of convex polyhedra within a rectangular container, while a similar problem involving spheres was considered by Stoyan et al. [21]. Imamichi and Nagamochi [13] also consider packing of spheres and model rigid shapes as collections of spheres. In addition to packing problems, their method is applied to protein-protein docking problems. The methodology closely resembles the strategies described in this paper, since a placement with overlap is continuously refined using a gradient search method until a non-overlapping placement is reached.

Determining a placement of objects given a set of constraints has previously been considered for component layout optimization. Here a given set of items inter-connected by wires must be placed within a container such that some objective is optimized. The objective can be wire length or the overall center of gravity, and in some cases additional proximity constraints must be met. A survey of layout problems was given by Cagan et al. [1], and recent work is presented by Yin et al. [24].

The methodology used in this paper, as presented in Section 4, originates from work by Faroe et al. [7], who consider a relaxed placement method for the bin-packing problem, where the minimal number of rectangular bins required to contain a set of rectangular items must be found.
The method revolves around a procedure which starts with infeasible placements of overlapping rectangular items that are continuously refined to reach non-overlapping placements. The refinement procedure consists of repeated translations of individual items to less-overlapping positions. The refinement process was also used by Egeblad et al. [A] for two- and three-dimensional strip-packing problems, where a minimal-length container capable of encompassing a set of polygons must be found. The method was generalized to three-dimensional problems involving general polyhedra and to higher-dimensional problems by Egeblad and Pisinger [C]. Faroe et al. [6] use a similar refinement procedure for very large scale integration (VLSI) layout problems, a two-dimensional component layout problem.

4 Solution Method

The solution method follows the work by Faroe et al. [6, 7] and Egeblad et al. [A, B], which was briefly touched upon in Section 3.2. The intuition behind our method is as follows. We begin with a random placement of the capsules which is unlikely to meet all of our requirements. As our solution method progresses, capsules may be moved freely around in the coordinate system, even to positions that make them overlap with other capsules, extend beyond the envelope, and violate link and angle constraints. Violations of constraints are described by a continuous objective function, such that a "larger" violation of constraints has a higher objective value, and a placement of the capsules is feasible with respect to the requirements if and only if the objective value is zero. Our method iteratively approaches a feasible placement where all requirements are met by reducing the objective value in each step. In each iteration exactly one capsule which contributes positively to the objective function is selected, and all possible axis-aligned translations as well as rotations of the capsule are considered.
The translation or rotation that reduces the objective value the most is selected. The objective function we consider is defined as follows:

F(P) = Σ_{i=1}^{n} Σ_{j=i+1}^{n} fC(P, vij, i, j) + Σ_{i=1}^{n} fE(P, ṽi, i) + Σ_{i=1}^{m} fL(P, i) + Σ_{i=1}^{m} fA(P, i),

where P is the current placement of capsules, and each of the functions fC, fE, fL and fA is explained in the following and illustrated on Figure 3. We emphasize that all these values are with respect to the current placement P.

fC(P, vij, i, j) ≥ 0 is the amount capsule i must be translated along the vector vij in placement P in order for it not to overlap with j, as illustrated on Figure 3 (a). This value is related to the concept of intersection depth and will be described in more detail in Section 5.1. Note that fC(P, vij, i, j) = 0 if and only if i and j are not overlapping in P.

fE(P, ṽi, i) ≥ 0 indicates how far capsule i must be moved in direction ṽi in placement P in order to be completely contained within the envelope E. This is illustrated on Figure 3 (b). fE(P, ṽi, i) = 0 if and only if i is completely contained within E.

For link k, which connects capsules i = s(k) and j = e(k), the value

fL(P, k) = max(||ps(k) + vs(k) − pe(k)||² − b(k)², 0) ≥ 0

is a measure of how far capsules i = s(k) and j = e(k) should be moved relative to their placement in P in order for the distance between the endpoints of the capsules i and j to be feasible. This is illustrated on Figure 3 (c). Note that fL(P, k) = 0 if and only if the distance between the endpoints of the two capsules linked by link k is within the feasible range.

For link k the value fA(P, k) indicates how far the angle between capsules s(k) and e(k) is from the target angle interval [αk⁻, αk⁺], and we set:

fA(P, k) = αk⁻ − ∠(P, k)   for ∠(P, k) < αk⁻,
fA(P, k) = 0               for αk⁻ ≤ ∠(P, k) ≤ αk⁺,
fA(P, k) = ∠(P, k) − αk⁺   for ∠(P, k) > αk⁺,

where ∠(P, k) is the angle in radians between capsules s(k) and e(k), and the difference is calculated modulo 2π, i.e.
it is always positive. This is illustrated on Figure 3 (d). We will give more details on how to evaluate the different terms of the objective function and explain the choice of the vectors vij and ṽi in Section 5. As can be seen from the previous description, F(P) = 0 if and only if P is feasible with respect to our set of requirements.

Figure 3: Illustration of the different terms of the objective function. In all cases the capsule i is at an infeasible position and the white hollow capsule represents a feasible position. (a) Capsules i and j overlap, and i must be translated fC(P, v, i, j) along v (indicated by the arrow) to remove the overlap. (b) i is placed outside E and must be translated fE(P, v, i) along v (indicated by the arrow). (c) A link k connects capsules i and j, which are placed too far from each other; fL(P, k) (indicated by the arrow) is the quadratic distance that i must be translated for its endpoint to be within a distance b(k) (represented by the circle) of j's endpoint. (d) Link k connects capsules i and j, and i should be rotated fA(P, k) (represented by the arrow) for the angle to be within the required interval [αk⁻, αk⁺].

4.1 Local search overview

Our procedure starts with a random, and likely infeasible, placement P0. We now seek to minimize F using a simple local search scheme which, from a placement Pk, searches for a new placement Pk+1 such that F(Pk+1) < F(Pk). The possible changes we look for with respect to Pk revolve around a single capsule i and are as follows:

1. Translate i in directions parallel to the three coordinate system vectors.

2. Translate i in the direction −∇f(x, y, z), where Pk(x, y, z) is the placement Pk with (x, y, z) added to ci and f(x, y, z) = Σ_{j=1}^{m} fL(Pk(x, y, z), j). The purpose of this change is that a translation in this direction will reduce the link term of the objective function.

3. Rotate i.
The set of feasible angles is limited to the discrete set A. We refer to this set of possible changes as the local search neighborhood. All three types of changes are evaluated, and all possibilities among each type of translation are investigated. The translation or rotation which reduces F(Pk) the most is selected, and the new placement is Pk+1. Evaluation of the objective function for each possible translation is a computationally expensive process, and we will explain how this can be done efficiently in polynomial time in Section 5.

We refer to the set of placements which may arise from one of the changes listed above applied to a placement P as the local search neighborhood, N(P). If F(P′) ≥ F(P) for all placements P′ ∈ N(P), we say that P is a local minimum with respect to F and the local search neighborhood. The local search process proceeds until either a placement Pk with F(Pk) = 0 or a local minimum placement is found. If F(Pk) = 0 we have solved the specific instance of the ICPDP and return Pk as the solution. To continue the process from a local minimum we use the metaheuristic Guided Local Search, which will be described in Section 4.2.

Note that when the selected local search move is a translation, we calculate the fC and fE terms with respect to the direction of translation, as will be described in Section 5. We also set the values of the vectors vij and vji for j = 1, ..., n and ṽi to the chosen direction. In other words, the value which determines the translation distance required for a capsule i, in order for i not to overlap, is always with respect to the last translation that changed the overlap of i. Therefore one could argue that during the optimization of F(P) we also attempt to find the right set of vectors vij and ṽi. However, for F(P) = 0 the choice of vectors has no effect on the objective value.
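A stripped-down version of the descent in Section 4.1 can be sketched as follows. This is a minimal sketch under our own names: translations only, a fixed step size, no rotations and no GLS restarts; the objective F is passed in as a callable over the list of capsule centers:

```python
def local_search(centers, F, step=1.0, max_iter=1000):
    """Greedy descent sketch: repeatedly apply the single axis-aligned
    translation of one capsule that lowers F the most; stop when F reaches
    zero or no candidate improves (a local minimum, where GLS would step in)."""
    centers = [list(c) for c in centers]
    for _ in range(max_iter):
        if F(centers) == 0:
            break                          # feasible placement found
        best_val, best = F(centers), None
        for i in range(len(centers)):
            for axis in range(3):
                for d in (-step, step):
                    centers[i][axis] += d  # try the move ...
                    v = F(centers)
                    centers[i][axis] -= d  # ... and undo it
                    if v < best_val:
                        best_val, best = v, (i, axis, d)
        if best is None:
            break                          # local minimum
        i, axis, d = best
        centers[i][axis] += d
    return centers
```

As a toy objective, a single link term in the style of fL (squared endpoint distance in excess of b²) pulls a far-away capsule back within range of the origin:

```python
def F(cs):
    d2 = sum(cs[1][k] ** 2 for k in range(3))
    return max(d2 - 1.0, 0.0)   # capsule 1 must end within distance 1 of the origin
```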
Therefore our search is still limited to finding a set of translations and rotations of the capsules such that F(P) = 0.

4.2 Guided Local Search

One of the main challenges with local search based methods is to ensure that they can continue the search for a global minimum once they encounter a local minimum. A common way to solve this problem is to control the local search with a form of metaheuristic. Metaheuristics are general principles used to attack combinatorial optimization problems, and some of the most successful and well-known metaheuristics are Simulated Annealing (Monte Carlo methods) by Kirkpatrick et al. [14], Genetic Algorithms by Mitchell [18], and Tabu Search by Glover [9, 10]. Although many local search based metaheuristics are perceived as generic tools, some metaheuristics are often more suitable than others for a specific local search procedure. The metaheuristic Guided Local Search (GLS) introduced by Voudouris and Tsang [23] has previously proved successful for packing and layout optimization problems, as described by Faroe et al. [6, 7] and Egeblad et al. [A, B], and was therefore found suitable for solving ICPDP as well.

The primary element of GLS is to minimize an augmented objective function in which undesirable features of placements are penalized by adding a set of additional penalty terms to the objective function one wishes to optimize. Whenever the local search procedure reaches a local minimum placement, the augmented objective function is altered by modifying the penalty terms, such that the current placement ceases to be a local minimum relative to the modified augmented objective function. An important part of this paradigm is that penalty terms must be carefully added such that any global minimum of the augmented objective function is also a global minimum of the original objective function.
For ICPDP we use the following augmented objective function:

minimize H(P) = F(P) + Z(P),

where Z(P) is the value of the penalties in P, given by

Z(P) = λC Σ_{i=1}^{n} Σ_{j=i+1}^{n} IC(i, j, P) ρi,j + λE Σ_{i=1}^{n} IE(i) ψi + λL Σ_{i=1}^{m} IL(i) σi + λA Σ_{i=1}^{m} IA(i) τi.

Here IC(i, j, P) = 1 if and only if capsules i and j overlap, which is true if and only if fC(P, vij, i, j) > 0; IE(i) = 1 if and only if capsule i is not contained within the envelope, which is the case if and only if fE(P, ṽi, i) > 0; IL(i) = 1 if and only if link-distance i is violated, and therefore fL(P, i) > 0; and IA(i) = 1 if and only if the angle of link i is not within its required interval, which is true if and only if fA(P, i) > 0. The values ρi,j, ψi, σi and τi are penalty counts for, respectively, inter-capsule overlap, envelope overlap, link-distance violation, and angle violation, and these are explained shortly. The values λC ≥ 0, λE ≥ 0, λL ≥ 0 and λA ≥ 0 are parameters that determine the weight of each of the penalties in H and are used to fine-tune the behavior of the heuristic. Note that with this definition F(P) = 0 if and only if H(P) = 0.

Initially all ρi,j, ψi, σi and τi are set to 0 and H = F. However, whenever a local minimum placement P with H(P) > 0 is encountered, one of the penalty terms is modified. Note that such a placement must contain either capsule overlap, envelope overlap, link-distance violation, or incorrect link-angles. The heuristic randomly selects one among these four contributions to change. If the overlap penalty term is chosen we calculate the utility of this feature as

μi,j = fC(P, vij, i, j) / (1 + ρi,j),   i, j = 1, ..., n,

and increase the value ρi,j by 1 for the pair of capsules i and j with maximal μi,j. If the envelope overlap term is chosen we calculate

ξi = fE(P, ṽi, i) / (1 + ψi),   i = 1, ...
, n, and increase ψi for the capsule i with the largest ξi. If the link-distance term is chosen we calculate

νi = fL(P, i) / (1 + σi),   i = 1, ..., m,

and increase the value σi for the link i with the highest νi. Finally, if the angle term is chosen we calculate

γi = fA(P, i) / (1 + τi),   i = 1, ..., m,

and increase the value τi for the link i with the highest γi. After this change of objective function, the local search heuristic continues with the modified objective. The effect of the modification is that the undesirable features, e.g. a large overlap of two specific capsules, are "emphasized" in subsequent optimization, and the local search heuristic will move towards placements without this particular overlap.

4.3 Fast Local Search

A very important aspect of the outlined local search procedure is the selection of the capsule in each step. Searching for new placements with respect to every capsule in each iteration of the local search is computationally expensive. Instead, we use a concept referred to as Fast Local Search (FLS). The details of FLS are as follows: We maintain a list L of capsules, and in each step the local search procedure searches only for improving changes to the first capsule in the list. Initially the list L contains all the capsules. Whenever the local search procedure has attempted to change the placement of a capsule, it is inactivated and removed from L, and the procedure continues with the next capsule in L. If an improving change of the placement for a capsule is found, all capsules connected to it via links and all capsules overlapping with it before or after the move are activated and put in L. If L is empty, we assume the current placement is a local minimum placement, and we proceed to change the penalties as described in the previous section. Afterwards, the capsule(s) associated with the penalized feature are inserted in L and will be considered in subsequent steps by the local search heuristic.
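The penalty update of Section 4.2 has the same shape for all four feature types (overlap, envelope, link distance, link angle), so it can be sketched once. The dict-of-[value, count] layout is our own illustration:

```python
def penalize(features):
    """GLS update sketch: among the violated features, increase the penalty
    count of the one with maximal utility f / (1 + count). `features` maps a
    feature key (e.g. a capsule pair or a link id) to [f_value, count]."""
    best_key, best_util = None, 0.0
    for key, (f_val, count) in features.items():
        util = f_val / (1.0 + count)
        if util > best_util:             # only features with f > 0 qualify
            best_key, best_util = key, util
    if best_key is not None:
        features[best_key][1] += 1       # emphasize this feature from now on
    return best_key
```

Dividing by 1 + count means a feature that has already been penalized repeatedly must show a proportionally larger violation before it is penalized again, spreading the pressure across different violations.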
5 Neighborhood search

In this section we will show how to evaluate the local search neighborhood described in Section 4.1 efficiently. The computationally most expensive element of the neighborhood is to determine the translation of a capsule i in a direction a that minimizes H. In this section, we will consider a specific capsule i and let P(t) denote the placement that arises from P when i is translated ta units relative to its current position.

A piecewise quadratic function f(t) consists of k second-order polynomials, each constrained to a specific interval (piece):

f(t) = a1 t² + b1 t + c1 for t ∈ [t1, t2], ..., f(t) = ak t² + bk t + ck for t ∈ [t_{2k−1}, t_{2k}],

where each of the coefficients ai, bi, ci ∈ R and [t_{2i−1}, t_{2i}] ∩ [t_{2j−1}, t_{2j}] = ∅ for i ≠ j.

Rather than probing H(P(t)) for a discrete set of values of t and selecting the best translation from this set, we will present an efficient polynomial time algorithm that returns the minimal value of H(P(t)). The algorithm determines a piecewise quadratic function which describes H(P(t)) for any value of t ∈ R, given a and capsule i. It has asymptotic running time O((n + m + |Es|) log(n + m + |Es|)), where n is the number of capsules in the instance, m is the number of links, and |Es| is the number of surface triangles of the envelope. Specifically, the algorithm can be used to find t for min_t H(P(t)) in the same asymptotic time, by analyzing each piece for its minimum. H consists of three components that depend on t: capsule overlap (fC), envelope overlap (fE), and link violations (fL). In the following we discuss how we may determine piecewise quadratic functions fC(P(t), a, i, j), fE(P(t), a, i), and fL(P(t), a, k) over t for capsules i and j and link k ∈ {1, ..., m}.

5.1 Capsule Intersection

The first terms of F are the fC terms.
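Analyzing each piece for its minimum is straightforward: on every piece, the candidates are the two interval endpoints and, for an upward parabola, its interior vertex. A sketch with a tuple-based representation of our own design:

```python
import math

def piecewise_min(pieces):
    """Minimum of a piecewise quadratic f(t). Each piece is a tuple
    (a, b, c, t_lo, t_hi) meaning a*t^2 + b*t + c on [t_lo, t_hi].
    Returns (argmin t, minimal value)."""
    best_t, best_val = None, math.inf
    for a, b, c, lo, hi in pieces:
        candidates = [lo, hi]
        if a > 0 and lo < -b / (2 * a) < hi:
            candidates.append(-b / (2 * a))   # interior vertex of the parabola
        for t in candidates:
            v = a * t * t + b * t + c
            if v < best_val:
                best_t, best_val = t, v
    return best_t, best_val
```

Since each piece is inspected in constant time, the minimum of a k-piece function is found in O(k), which is what makes the overall O((n + m + |Es|) log(n + m + |Es|)) bound attainable once the pieces are built.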
fC(P, a, i, j) describes the minimum amount capsule i must be translated in a specific direction in order for i not to overlap with j, as illustrated on Figure 3 (a). To determine if the two capsules overlap for different translations t along a, we evaluate the minimum quadratic distance between their line segments as a function of t: one segment with endpoints p1 + ta and p1 + v1 + ta, and one with endpoints p2 and p2 + v2. The capsules overlap if the distance between these segments is less than the sum of the capsules' radii.

For a given translation t and direction a, let f(t, a, s, u) = ||p1 + s v1 + ta − p2 − u v2||² be the distance between specific points on the infinite lines that are coincident with the two line segments, let

dLL(t, a, p1, v1, p2, v2) = min_{s,u∈R} f(t, a, s, u)

be the minimal quadratic distance between the two infinite lines that are coincident with the line segments, and let

dSS(t, a, p1, v1, p2, v2) = min_{s,u∈[0,1]} f(t, a, s, u)

be the minimal quadratic distance between the two line segments. For fixed values of t and a, f is a two-dimensional quadratic function in s and u, and the minimum distance between the two infinite lines occurs for the values s⁰(t) and u⁰(t) where

( ∂f/∂s (t, s⁰(t), u⁰(t)), ∂f/∂u (t, s⁰(t), u⁰(t)) ) = (0, 0).   (2)

It can be deduced that

( ∂f/∂s (t, s, u), ∂f/∂u (t, s, u) ) = 2M (s, u)ᵀ + 2V + 2t (v1 · a, −v2 · a)ᵀ,

with

M = [ ||v1||²  −v1 · v2 ; −v1 · v2  ||v2||² ]   and   V = ( v1 · (p1 − p2), v2 · (p2 − p1) )ᵀ,

which allows us to define s⁰(t) and u⁰(t) as

( s⁰(t), u⁰(t) )ᵀ = −M⁻¹V − t M⁻¹ (v1 · a, −v2 · a)ᵀ,

which shows that the values s⁰(t) and u⁰(t) are linear in t. If we let S = {t | s⁰(t) ∈ [0, 1]} and U = {t | u⁰(t) ∈ [0, 1]}, then both s⁰(t) ∈ [0, 1] and u⁰(t) ∈ [0, 1] for t ∈ S ∩ U.
This shows that for t ∈ S ∩ U the two nearest points of the infinite lines lie on the line segments, so for t ∈ S ∩ U we can evaluate the distance between the two line segments as the distance dLL(t, a, p1, v1, p2, v2) between the two infinite lines going through the endpoints of the capsules. The distance between the two infinite lines can be determined as

dLL(t, a, p1, v1, p2, v2) = ( t (a · (v1 × v2)) + (p1 − p2) · (v1 × v2) )² / ||v1 × v2||².

For t ∉ S ∩ U, at least one of the two nearest points of the line segments is an endpoint of a line segment, and we can evaluate the minimum distance between the two line segments as the minimum distance between each of the four line segment endpoints and the opposite line segment:

dPPSS(t, a, p1, v1, p2, v2) = min( dPS(t, a, p1, p2, v2), dPS(t, a, p1 + v1, p2, v2), dPS(t, −a, p2, p1, v1), dPS(t, −a, p2 + v2, p1, v1) ),

where

dPS(t, a, p, p2, v2) = ||p + ta − p2 − r(t, a, p, p2, v2) v2||²,
r(t, a, p, p2, v2) = max(0, min(1, r⁰(t, a, p, p2, v2))),
r⁰(t, a, p, p2, v2) = ( t (a · v2) + (p − p2) · v2 ) / ||v2||².

Here r(t, a, p, p2, v2) is the value of s for the point on the line segment p2 + s v2, s ∈ [0, 1], that is closest to p + ta. By analyzing the interval of t where r⁰(t, a, p, p2, v2) ∈ [0, 1], we can describe dPS(t, a, p, p2, v2) as a piecewise quadratic function equal to the quadratic distance between the points p + ta and p2 for r⁰(t, a, p, p2, v2) < 0, the point-line distance between p + ta and the line segment p2, p2 + v2 for 0 ≤ r⁰(t, a, p, p2, v2) ≤ 1, and the distance between the points p + ta and p2 + v2 for r⁰(t, a, p, p2, v2) > 1. In total, the distance between the two line segments can be calculated as

dSS(t, a, p1, v1, p2, v2) = dLL(t, a, p1, v1, p2, v2) for t ∈ S ∩ U, and dPPSS(t, a, p1, v1, p2, v2) otherwise.

Since both dLL and dPS are piecewise quadratic in t, dSS may be represented as a single piecewise quadratic function.
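For a single fixed t, the case analysis above reduces to the standard clamped closest-point computation between two segments. A minimal sketch under our own names, assuming non-degenerate segments (v1, v2 ≠ 0); the unconstrained line/line minimum is solved first and the parameters are then clamped to [0, 1], mirroring the dLL / dPPSS split:

```python
def seg_seg_dist2(p1, v1, p2, v2):
    """Minimum squared distance between segments p1 + s*v1 and p2 + u*v2
    with s, u in [0, 1]; standard clamped closest-point computation."""
    def dot(x, y): return sum(a * b for a, b in zip(x, y))
    def sub(x, y): return [a - b for a, b in zip(x, y)]
    def clamp(x): return max(0.0, min(1.0, x))
    d = sub(p1, p2)
    a, e, f = dot(v1, v1), dot(v2, v2), dot(v1, v2)
    denom = a * e - f * f              # zero iff the lines are parallel
    s = clamp((f * dot(v2, d) - e * dot(v1, d)) / denom) if denom > 1e-12 else 0.0
    u = clamp((dot(v2, d) + s * f) / e)
    s = clamp((u * f - dot(v1, d)) / a)   # re-clamp s against the chosen u
    c1 = [p1[i] + s * v1[i] for i in range(3)]
    c2 = [p2[i] + u * v2[i] for i in range(3)]
    return dot(sub(c1, c2), sub(c1, c2))
```

Two capsules with radii r1 and r2 then overlap exactly when this squared distance is below (r1 + r2)²; the paper's contribution is carrying the same computation symbolically in t rather than for one fixed placement.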
Note that extra attention must be given to parallel lines (where ||v1 × v2|| = 0) and that evaluating dPPSS(t, a, p1, v1, p2, v2) can be simplified with further boundary analysis.

Figure 4: Illustration of the feasible translations of a point p, such that it is contained within the envelope E (see text). For a point p + ta to be feasible, the number of intersections between the envelope surface Es and the ray p + ta − ua, for u ∈ [0, ∞], must be odd. Here translations with t in the intervals [t1, t2] or [t3, t4] are feasible.

Having the minimum distance between the two line segments for any translation along a vector a allows us to evaluate the interval O = [t1, t2] where the capsules i and j overlap. The endpoints of O are the up to two values of t for which dSS(t, a, p1, v1, p2, v2) = (ri + rj)². We now set the value fC(P, a, i, j) = min(−t1, t2) if 0 ∈ O and 0 otherwise, since we must translate i either −t1 or t2 units along a in order for the two capsules not to overlap. We can also use this to calculate the value of fC(P(t), a, i, j) for every t by creating the piecewise linear function

fC(P(t), a, i, j) = t − t1 for t ∈ [t1, (t1 + t2)/2], t2 − t for t ∈ [(t1 + t2)/2, t2], and 0 otherwise.

Thus fC(P(t), a, i, j) for all t ∈ R can be determined in O(1) asymptotic time, and the resulting piecewise linear function consists of no more than 4 pieces.

5.2 Surface Intersection

The second term fE of F concerns placement of capsules outside the envelope, as illustrated on Figure 3 (b). First we note that a capsule can only be inside the envelope if both its endpoints are inside the envelope, it does not cross the envelope, and the minimal distance from its line segment to any triangle of the envelope is larger than the radius of the capsule. Our strategy is to determine the set of intervals of t where all these conditions are met. The intersection of those intervals constitutes the feasible translations of capsule i along vector a.
To determine if an endpoint p is within the envelope, we may simply cast a ray from p in the direction of −a. If the number of intersections between the ray and the envelope is odd, p is inside the envelope. This concept can be used to find the set of intervals of t where p + ta is within the envelope. This is done by determining all intersections between triangles of the envelope and the line p + ta. Denote the distinct values of t for the intersections t1, ..., tk and assume, WLOG, that they are sorted such that t1 < t2 < ... < tk. Since translations of p with an odd number of intersections are feasible translations, we know that values of t within the intervals [t1, t2], [t3, t4], ..., [t_{k−1}, tk] are feasible (see Figure 4). Since multiple intersections can occur for equal values of t when triangle edges are coincident, we ensure that the distance between two subsequent intersections is larger than a small value ε. Calculating these intervals may be done in O(|Es| log |Es|) time, since it takes O(|Es|) time to calculate all the line-triangle intersections, O(|Es| log |Es|) time to sort them, and O(|Es|) time to traverse them and generate the intervals. The set of intervals representing feasible translations of both endpoints can be calculated in O(|Es| log |Es|) time by determining the intersection of the intervals from each endpoint.

Figure 5: Illustration of the formulae required to calculate the distance from a line segment to a triangle, as described in Section 5.2.

We now consider the problem of determining the distance between the line segment of capsule i, li(t) : pi + s vi + ta, s ∈ [0, 1], and a triangle T. If the line segment intersects T, the distance is 0; otherwise the minimal distance is the minimum of the distances between li(t) and any of T's edges and the minimal distance from one of li(t)'s endpoints to T.
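Pairing up the sorted intersection parameters, including the ε-merge for coincident triangle edges, can be sketched as follows (the function name is ours):

```python
def feasible_intervals(ts, eps=1e-9):
    """Pair up sorted ray/surface intersection parameters into the intervals
    where the translated point lies inside the envelope. Near-duplicate hits
    closer than eps (coincident triangle edges) are merged first."""
    ts = sorted(ts)
    distinct = []
    for t in ts:
        if not distinct or t - distinct[-1] > eps:
            distinct.append(t)
    # An odd crossing count means "inside": intervals [t1,t2], [t3,t4], ...
    return [(distinct[i], distinct[i + 1]) for i in range(0, len(distinct) - 1, 2)]
```

Intersecting the two interval lists obtained for the two capsule endpoints, both sorted, then takes a linear merge pass, which keeps the whole step within the stated O(|Es| log |Es|) bound.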
To determine if li(t) intersects T we first need to determine the point of intersection between the infinite line coincident with li(t) and the plane of T as a function of t (see Figure 5). Let the plane of T be defined as the set of points P : {p ∈ R³ | n · p + q = 0}, where n is the normal of the plane and q ∈ R. Let s0(t) be the value of s for the point of intersection between the infinite line going through li(t) and P; then we require that

n · (pi + s0(t)vi + ta) + q = 0,

which implies that

s0(t) = −(n · pi + q)/(n · vi) − t (n · a)/(n · vi).

If n · vi = 0, li(t) and P are parallel and no intersection occurs. Otherwise, the points of intersection are represented by the line

lpi(t) : pi − ((n · pi + q)/(n · vi)) vi + t (a − ((n · a)/(n · vi)) vi),

which lies in the plane P. Let It = {t | lpi(t) ∈ T} be the interval of intersection between lpi and T; then It is the interval of t for which the infinite line going through li intersects T. Let Is = {t | s0(t) ∈ [0, 1]}; then the interval It ∩ Is is the set of values of t where li(t) intersects the triangle T (see Figure 5 (a)).

The problem of determining the minimal distance between the line segment and T's edges as a function of t is similar to the problem of determining the distance between two line segments, which was covered in Section 5.1. To determine the minimum distance from the line segment endpoints to the interior of T, consider a point p + ta (which can be either endpoint of li(t)); then the distance from p + ta to P is (n · (p + ta) + q)/||n||. This distance is valid as a distance to T when the point closest to p + ta on P is within T. When this is not the case, the closest point of T is on one of T's edges, and this situation is handled by the line segment edge distance evaluation mentioned above. To determine for which values of t the closest point on P is within T, we calculate the line (in t) representing the closest point on P to p + ta for any value of t as follows.
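A minimal sketch of the segment/plane bookkeeping above, assuming plain 3-tuples for vectors (all names are illustrative): since s0(t) is linear in t, the set of t with s0(t) ∈ [0, 1] is a single interval.

```python
# Sketch: compute s0(t) = alpha + beta*t for the segment p_i + s*v_i + t*a
# against the plane n . p + q = 0, and the interval I_s where s0(t) is in
# [0, 1] (so the intersection point lies on the segment, not just the line).

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

def s0_interval(n, q, p_i, v_i, a):
    """Return ((alpha, beta), I_s) with s0(t) = alpha + beta*t, or None if
    the segment is parallel to the plane (n . v_i = 0)."""
    nv = dot(n, v_i)
    if nv == 0.0:                          # line parallel to the plane
        return None
    alpha = -(dot(n, p_i) + q) / nv
    beta = -dot(n, a) / nv
    if beta == 0.0:                        # s0 constant in t
        I_s = (float("-inf"), float("inf")) if 0.0 <= alpha <= 1.0 else ()
    else:
        lo, hi = sorted(((0.0 - alpha) / beta, (1.0 - alpha) / beta))
        I_s = (lo, hi)
    return (alpha, beta), I_s

# plane z = 0 (n = (0,0,1), q = 0), segment from (0,0,-1) to (0,0,1),
# translated along a = (0,0,1): s0(t) = 0.5 - 0.5t, in [0,1] for t in [-1,1]
coeffs, I_s = s0_interval((0.0, 0.0, 1.0), 0.0,
                          (0.0, 0.0, -1.0), (0.0, 0.0, 2.0), (0.0, 0.0, 1.0))
assert coeffs == (0.5, -0.5) and I_s == (-1.0, 1.0)
```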
First, we define a line l(t) : p + un + ta, u ∈ R, which is the line going through p + ta in direction n. Let u0(t) be the value of u where l(t) intersects P; then

u0(t) = −(p · n + q + t a · n)/||n||².

Inserting this into the equation for l(t), we get a line representing the closest point of P to p + ta:

l′(t) : p − ((p · n + q)/||n||²) n + t (a − ((a · n)/||n||²) n).

Now, the interval Ip of t where the closest point on P is within T is the interval where l′ intersects the triangle T, which is determined by calculating the values of t for l′'s entry and exit point of T (see Figure 5 (b)).

The minimal quadratic distance between li and T is calculated by combining the different distance measures over their respective valid intervals: For the interval where li and T intersect, we set the distance to 0. For the remaining parts we calculate the minimum distance of either of the endpoint distances within their valid intervals and the line segment distances between li and the edges of T. Since each individual distance is composed of piecewise quadratic functions, we may combine them into one single piecewise quadratic function which gives the distance from li to T for any translation t along a. Denote this piecewise quadratic function dST(t, a, pi, vi, T).

We now generate a set of intervals containing feasible translations of i along a. This may be generated by subtracting the intervals where dST(t, a, pi, vi, T) < ri² for T ∈ Es from the set of intervals representing feasible translations of both endpoints of li, which may be done in O(|Es| log |Es|) time. Assume the resulting list of intervals with feasible translations is represented as a sorted list of t-values, t1, t2, . . . , tn, where each interval is represented as a pair of t-values [tj, tj+1], j odd. Then we create the piecewise linear function

fE(P(t), a, i) =
    t1 − t    for t ≤ t1,
    t − t2    for t2 < t ≤ (t2 + t3)/2,
    t3 − t    for (t2 + t3)/2 < t ≤ t3,
    t − t4    for t4 < t ≤ (t4 + t5)/2,
    t5 − t    for (t4 + t5)/2 < t ≤ t5,
    ...
    t − tn    for t > tn,
    0         otherwise,

which describes the amount we need to translate i along a for i to be placed feasibly within E for every value of t.

5.3 Link Constraints

The third contribution to F(t) is the set of link terms. Specifically, we need to determine the value of ∑_{k=1}^{m} fL(P(t), k) for any translation of capsule i along vector a. To do this, we first note that the sum of all link distances for links which are not incident with i can be described as a constant. For each remaining link k incident to i, we determine the value fL(P(t), k), the link term for the placement P(t) where i is translated t units in direction a. Assume i = s(k) (the case i = e(k) is equivalent) and let db(k, t, a) = ||ps(k) + vs(k) + ta − pe(k)||²; then fL(P(t), k) = max(db(k, t, a) − b(k)², 0) is a piecewise quadratic function:

fL(P(t), k) =
    db(k, t, a) − b(k)²    for db(k, t, a) > b(k)²,
    0                      otherwise,

where the interval with db(k, t, a) > b(k)² can be found by solving the quadratic equation db(k, t, a) = b(k)².

5.4 Fast Neighborhood Search

As stated earlier, computation of H(P(t)) can be done by probing with each value t from a discrete set of values W. However, the size of W would depend on the desired resolution and dimensions of the coordinate system used for placements, and it would impose limits on the set of feasible positions. Additionally, it would be inefficient, since each naive probe would require time linear in the number of capsules and the size of the envelope. Assuming that we consider translations where t ∈ W, such an algorithm would have asymptotic running time O(|W|(|Es| + n + m)). Instead we will present an algorithm which is independent of coordinate system resolution and dimensions and has running time O((n + |Es| + m) log(n + |Es| + m)).
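The shape of fE, zero on the feasible intervals and growing linearly with the distance to the nearest feasible translation, can be sketched as follows (illustrative code, not the thesis implementation):

```python
# Sketch: ts holds the sorted boundaries t1..tn of the feasible intervals
# [t1,t2], [t3,t4], ... . The returned f(t) is the amount capsule i must be
# translated along a to land inside a feasible interval (the f_E shape).

def distance_to_intervals(ts):
    intervals = list(zip(ts[0::2], ts[1::2]))

    def f(t):
        best = float("inf")
        for lo, hi in intervals:
            if lo <= t <= hi:
                return 0.0              # already feasible
            best = min(best, abs(t - lo), abs(t - hi))
        return best                     # distance to nearest boundary

    return f

f = distance_to_intervals([0.0, 1.0, 4.0, 6.0])
assert f(0.5) == 0.0        # inside [0, 1]
assert f(2.0) == 1.0        # nearest feasible point is t = 1
assert f(3.0) == 1.0        # midpoint between the two intervals
assert f(8.0) == 2.0        # beyond t_n: f(t) = t - t_n
```

Since the feasible set is a union of intervals, the nearest feasible point for an infeasible t is always an interval boundary, which is why the breakpoints of fE fall at the boundaries and at the midpoints between consecutive intervals.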
The algorithm takes advantage of the fact that, since each of the individual terms of the objective function may be represented as a piecewise quadratic function of t, H(P(t)) may also be represented by a piecewise quadratic function of t. For a capsule i to be translated, the function fC(P(t), a, i, j) for another capsule j can be calculated in O(1) time and results in a piecewise quadratic function with fewer than 5 pieces. For each link k incident with i, fL(P(t), k) can be calculated in O(1) time and results in a piecewise quadratic function with fewer than 3 pieces. fE(P(t), a, i) can be calculated in O(|Es| log |Es|) time and results in a piecewise quadratic function with fewer than |Es| + 1 pieces. The values fC(P(t), vij, k, j) and fE(P(t), ṽk, k) for j, k ≠ i, the values fL(P(t), k) where link k is not incident with i, as well as the values fA(P(t), k) for k = 1, . . . , m, are constant for all values of t.

A set of piecewise quadratic functions with a total of k intervals may be summed into a global piecewise quadratic function f(t) in asymptotic time O(k log k): First, sort the interval endpoints, enumerate the sorted endpoints as t1 ≤ t2 ≤ . . . ≤ tk, and let (ai, bi, ci) be the coefficients of the quadratic function that endpoint ti arose from. Now, construct f(t) by visiting each interval endpoint ti in order i = 1, . . . , k while maintaining a coefficient triple (a, b, c), initially (0, 0, 0). As ti is visited, create a quadratic function over the interval [ti−1, ti] with coefficients a, b, c, and update a, b, c by adding ai, bi, ci if ti is an interval start and subtracting them if it is an interval end. Since the total number of pieces of the piecewise quadratic functions which arise from evaluating the individual terms of F(P(t)) is less than 5n + 3m + |Es| = O(n + m + |Es|), we can sum the complete piecewise quadratic function F(P(t)) in O((n + |Es| + m) log(n + |Es| + m)) time.
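The sweep described above can be sketched as follows (a simplified, self-contained stand-in with illustrative names):

```python
# Sketch of the O(k log k) sweep: each term contributes a quadratic
# a*t^2 + b*t + c on an interval [lo, hi]; sorting the endpoints and
# adding/subtracting coefficients yields one global piecewise quadratic.

def sum_piecewise_quadratics(terms):
    """terms: list of (lo, hi, (a, b, c)). Returns a list of pieces
    (lo, hi, (a, b, c)) with summed coefficients between endpoints."""
    events = []
    for lo, hi, (a, b, c) in terms:
        events.append((lo, a, b, c))          # coefficients switch on
        events.append((hi, -a, -b, -c))       # and off again
    events.sort(key=lambda e: e[0])

    pieces, (A, B, C) = [], (0.0, 0.0, 0.0)
    for idx in range(len(events) - 1):
        t, a, b, c = events[idx]
        A, B, C = A + a, B + b, C + c
        t_next = events[idx + 1][0]
        if t_next > t:                        # skip empty intervals
            pieces.append((t, t_next, (A, B, C)))
    return pieces

# (t^2 on [0, 2]) + (2t + 1 on [1, 3]) -> three pieces
pieces = sum_piecewise_quadratics([(0.0, 2.0, (1.0, 0.0, 0.0)),
                                   (1.0, 3.0, (0.0, 2.0, 1.0))])
assert pieces == [(0.0, 1.0, (1.0, 0.0, 0.0)),
                  (1.0, 2.0, (1.0, 2.0, 1.0)),
                  (2.0, 3.0, (0.0, 2.0, 1.0))]
```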
Analyzing the piecewise quadratic function F(P(t)) for its global minimum with respect to t may be done in O(n + m + |Es|) time by analyzing each individual piece for its local minimum and selecting the global minimum. Finally, the penalty function Z(P(t)) may be included by considering where each of the piecewise quadratic functions is larger than zero and which violations they arise from. In total, we can calculate H(P(t)) and select the value of t which results in the global minimum of H(P(t)) in O((n + |Es| + m) log(n + |Es| + m)) time.

6 Optimization Variants

The problem described in the previous sections is a decision problem, i.e. we wish to determine a feasible placement under the given set of constraints. One may also consider optimization variants revolving around ICPDP, which, since ICPDP is NP-complete, are NP-hard.

Figure 6: Illustration of the evaluation of the fE term. First, the intervals corresponding to translations where both points are within E are determined. Then the intervals where the distance between the capsule and Es is too small are subtracted, which gives the feasible intervals. Finally, the fE function is determined based on the feasible intervals.

Here we will discuss two types of problems: The first group consists of problems where the objective can be optimized by solving a series of decision problems. The second group consists of problems where the objective is optimized by adding it to the decision objective function H(P). The examples we present here for the first group are container compaction problems, and in the second group we consider adding surface placement requirements to the capsules.

6.1 Compaction Problems

In this section we consider three simple optimization variants in which container dimensions are minimized. The procedure we outline in the following was also used by Egeblad et al. [A, B] to solve two- and three-dimensional strip-packing problems.
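The per-piece minimization can be sketched as follows (illustrative code): each quadratic piece attains its minimum either at an interval endpoint or, when it opens upwards, at its vertex.

```python
# Sketch: find the global minimum of a piecewise quadratic function given
# as pieces (lo, hi, (a, b, c)) meaning a*t^2 + b*t + c on [lo, hi].

def piecewise_minimum(pieces):
    """Return (t, value) of the global minimum over all pieces."""
    best_t, best_val = None, float("inf")
    for lo, hi, (a, b, c) in pieces:
        candidates = [lo, hi]
        if a > 0.0 and lo < -b / (2.0 * a) < hi:
            candidates.append(-b / (2.0 * a))   # interior vertex
        for t in candidates:
            val = a * t * t + b * t + c
            if val < best_val:
                best_t, best_val = t, val
    return best_t, best_val

# minimum of (t - 1)^2 restricted to [0, 3] is at t = 1
assert piecewise_minimum([(0.0, 3.0, (1.0, -2.0, 1.0))]) == (1.0, 0.0)
```

Each piece is inspected in constant time, matching the linear-time bound stated above.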
Unlike the problem considered previously, we will assume in the following that we are given a convex container C instead of an envelope E. In the previous variant we allowed capsules outside the envelope E during the solution process, albeit at the cost of an increased objective value. Here we will only consider translations within C during the solution process, so the fE term is omitted from the objective function in this section. The three container minimization problems we consider are as follows and are illustrated in Figure 7:

• Given two dimensions W and H, determine a minimum value L such that a feasible placement of the capsules with respect to overlap, link, and angle constraints can be found within the box container W × H × L (strip packing).

• Minimize L such that a feasible placement can be found within the cube L × L × L (minimal cube packing).

• Minimize L such that a feasible placement can be found within a sphere of radius L (minimal sphere packing).

Figure 7: The three different compaction objectives. (a) Minimize box-height. (b) Minimize cube. (c) Minimize sphere.

All three problems may be solved similarly. We begin with a sufficiently large value for the free container dimension, L0. Once a solution has been found for the ICPDP problem with the free container dimension L0, we consider a problem with smaller container dimensions where L1 = L0 − ε, where ε is a step-size. We repeat this process so that the free container dimension in iteration i is Li = L0 − i · ε, although more complex strategies mimicking binary search may be applied. When the container dimensions are reduced, all capsules not within the new container are translated into the new smaller container, while the remaining capsules remain at their current position. This way, a large part of the placement is kept intact and less time is spent on finding a feasible placement.
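The shrink-and-resolve loop above might look as follows, where `solve_decision` is a hypothetical stand-in for the ICPDP decision heuristic (a sketch, not the thesis implementation):

```python
# Sketch of the compaction loop: repeatedly shrink the free container
# dimension by a step eps and re-solve the decision problem, returning the
# smallest dimension for which a feasible placement was still found.

def compact(solve_decision, L0, eps, max_iters=100):
    """solve_decision(L) -> bool is a stand-in for the ICPDP heuristic
    (including re-seeding displaced capsules inside the new container)."""
    best = None
    L = L0
    for _ in range(max_iters):
        if not solve_decision(L):      # heuristic failed within its budget
            break
        best = L                       # feasible at this size
        L -= eps                       # try a tighter container
    return best

# toy oracle: feasible whenever L >= 4.2
assert compact(lambda L: L >= 4.2, L0=10.0, eps=1.0) == 5.0
```

A binary-search variant, as hinted at above, would bracket the smallest feasible L between the last success and the first failure instead of stepping linearly.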
The search may end after a specific time limit, where the smallest value Li found so far is returned as a solution to the problem.

6.2 Exposure Optimization

It may be possible to identify residues of the molecule that are exposed through experimental techniques. This can be used to aid the search, since such residues indicate which portions of the capsules must be near the surface. To model this, we may create a set of spheres S corresponding to the identified residues. Each sphere is then assigned to the capsule that corresponds to the helix of the residue, by specifying a coordinate in the local coordinate system of the capsule (see Figure 8 (a)). Now, during local search, the same transformation that applies to the capsule as described in the beginning of Section 2 also applies to the sphere, so that when the capsule is moved the sphere follows it. The objective is to ensure that each sphere i ∈ S, with radius ri, overlaps with the molecular surface Es (see Figure 8 (b)). Rather than modeling this as a hard constraint, we can formulate the problem as an optimization problem. In this variant our objective is to minimize the number of spheres not within a certain distance of the molecular surface. The objective function is as follows:

minimize G(P) = ∑_{i∈S} wi fX(P, ṽc(i), i),

where wi is a weight assigned to each sphere that can model its importance, c(i) is the capsule sphere i is attached to, and fX(P, ṽc(i), i) is the amount c(i) must be translated for the center of i to be within ri units of Es.

Figure 8: (a) Illustration of two spheres which represent residues that are known to be exposed. These are described in the local coordinate system of the capsule they are attached to. (b) The residue spheres follow the capsules under transformation. Here the two spheres attached to the capsule overlap with the surface, as required.
The complete optimization problem, where minimization of G(P) is combined with ICPDP and extra penalty terms are added, can now be formulated as follows:

minimize H′(P) = H(P) + G(P) + λP ∑_{i∈S} ηi IP(i),

where ηi is a penalty value for sphere i, λP is used to fine-tune the heuristic, and IP(i) is an indicator function:

IP(i) =
    1    for fX(P, ṽc(i), i) > 0,
    0    otherwise.

Whenever GLS reaches a local minimum and the penalty terms are modified (see Section 4.2), we also consider the sphere penalties ηi and increase the penalty for the sphere i with highest utility:

χ(i) = fX(P, ṽc(i), i) / (ηi + 1).

At the end of the search, the feasible placement, with respect to the ICPDP, with the least found value of G(P) is returned.

Just as for the components of F(P), we determine fX(P, ṽc(i), i) for any translation of c(i) along a vector a. Let qi + ta be the center of i when translated t units along a, and let dE(qi + ta) be the distance from qi + ta to any triangle T ∈ Es. We then determine intervals [t1, t2], . . . , [tk−1, tk] (sorted in ascending order) such that dE(qi + ta) < ri for t ∈ [t2j−1, t2j], j ≤ k/2. Now the intervals [t1, t2], . . . , [tk−1, tk] represent translations of sphere i relative to P where sphere i overlaps with Es. We can evaluate dE(qi + ta) and determine t1, . . . , tk in O(|Es| log |Es|) time using the same strategy used to determine the distance between segment endpoints and Es outlined in Section 5.2. We now let fX(P(t), ṽc(i), i) be:

fX(P(t), ṽc(i), i) =
    t1 − t    for t ≤ t1,
    0         for t ∈ [t1, t2],
    t − t2    for t ∈ (t2, (t2 + t3)/2],
    t3 − t    for t ∈ ((t2 + t3)/2, t3),
    ...
    t − tk    for t ≥ tk.

Since fX(P(t), ṽc(i), i) is piecewise linear, we can use the same strategies as mentioned in Section 5.4 to evaluate H′(P(t)) for all values of t.
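The GLS-style penalty update above can be sketched as follows (illustrative names; `f_x` stands in for the current fX violation values):

```python
# Sketch: when the search stalls in a local minimum, the sphere with the
# highest utility chi(i) = f_X(i) / (eta_i + 1) has its penalty increased,
# steering the search away from repeatedly violating the same constraint.

def bump_sphere_penalty(f_x, eta):
    """f_x, eta: dicts keyed by sphere id. Increase the penalty of the
    sphere with maximum utility and return its id."""
    util = {i: f_x[i] / (eta[i] + 1.0) for i in f_x}
    worst = max(util, key=util.get)
    eta[worst] += 1.0
    return worst

eta = {"s1": 0.0, "s2": 3.0}
# s1: 2/(0+1) = 2.0 beats s2: 5/(3+1) = 1.25, despite s2's larger violation
assert bump_sphere_penalty({"s1": 2.0, "s2": 5.0}, eta) == "s1"
assert eta["s1"] == 1.0
```

Dividing by ηi + 1 is what keeps any single sphere from monopolizing the penalty updates: each bump lowers that sphere's utility in subsequent rounds.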
We must evaluate fX(P(t), ṽc(i), i) for each sphere i attached to the capsule being translated, so the total asymptotic running time required to evaluate H′(P(t)) for all values of t is O((n + m + l|Es|) log(n + m + l|Es|)), where l is the number of spheres attached to the capsule for which the set of possible translations is being examined. It should be noted that, for this optimization variant, rotations around the capsule axis must also be represented and included in the local search neighborhood, in order to ensure that all solutions can be reached.

Hidden residues It is also straightforward to extend the model to include residues which are hidden. Let S− be the set of spheres representing hidden residues; then we may solve:

minimize G(P) = ∑_{i∈S} wi fX(P, ṽc(i), i) + ∑_{i∈S−} wi fX−(P, ṽc(i), i),

where fX−(P, ṽc(i), i) is the amount c(i) must be translated for the center of i to be more than ri units from Es. Penalties are added in the same fashion as for exposed residues, and the neighborhood can be searched in similar fashion.

7 Experimental Results

To investigate the performance of the heuristic, it was implemented in C++ and compiled with GCC 4.2.3. All experiments were run on a computer with two Intel Xeon 5355 2.66 GHz quad core processors (8 cores total) and 8 GB RAM. No advantage was taken of the 8 core system. The set of rotation angles A was set to {0, (1/32)π, . . . , (31/32)π}. Suitable values of λC, λL, and λE were found using parameter tuning and were set as follows: λC = 0.1 · S, λL = 0.1 · S, and λE = 0.5 · S for S = ∑_{i=1}^{n}(ci + ri). Both compaction and decision variants of the problem were tested, and they are described in Section 7.1 and Section 7.2, respectively.
7.1 Results for Compaction

To test the method's ability to work as a heuristic for the compaction optimization problems listed in Section 6.1, three different types of problem instances were constructed: homogeneous problems consisting of only one type of capsule, heterogeneous problems consisting of different types of capsules, and problems with capsule links. For each major type of problem, instances for each of the three compaction variants (V = {strip, cube, sphere}) from Section 6.1 were randomly generated. Capsule radii were all set to 1, and lengths for the problems were taken from the list L = {0, 2, 8, 32}. The number of capsules in each instance was taken from the list N = {5, 10, 15, 25, 35, 50, 150}. For all compaction instances, the heuristic was set to report the best result found within 250,000 iterations, which was found to deliver adequate convergence. To test the stability of the heuristic, each instance was run with 5 different random seeds for the random number generator.

7.1.1 Homogeneous Problems

For each length l ∈ L, each number of capsules n ∈ N, and each optimization variant from V, an instance was generated with n capsules and all capsule lengths set to l. This results in 3 × 4 × 7 = 84 homogeneous instances.

Homogeneous
                    Length 0                Length 2                Length 8                Length 32
          n      Avg.  Std.   Time      Avg.  Std.   Time      Avg.  Std.   Time      Avg.  Std.   Time
 Strip    5      31.4  0.71   42.7      42.3  0.00   37.4      43.3  0.00    0.3      34.1  0.36   43.7
          10     34.7  0.17   61.8      37.1  0.53  109.6      54.5  0.00   51.5      46.8  0.00   74.9
          15     39.1  0.19   83.0      38.8  0.19  164.0      45.7  0.41  123.0      41.6  0.69  119.1
          25     44.2  0.33  116.6      42.3  0.29  264.0      50.6  0.00  182.3      43.0  0.70  197.5
          35     44.9  0.15  139.2      45.1  0.40  357.0      42.0  0.39  302.1      42.8  0.61  225.8
          50     41.9  0.35  165.0      47.5  0.34  491.9      44.8  0.13  482.5      46.4  0.60  368.1
          150    43.1  0.17  312.4      47.9  0.38 1161.5      48.9  0.37 1066.7      51.0  2.26  874.1
 Cube     5      37.4  1.41   42.6      37.5  0.00   68.1      39.8  1.64   59.7      42.4  1.06   50.8
          10     35.5  0.00   61.0      38.3  0.00  110.2      44.2  1.46  112.4      51.1  0.00   96.5
          15     39.7  0.00   76.0      41.9  1.38  155.8      49.3  0.00  144.7      48.1  1.10  135.8
          25     40.7  1.17   95.8      42.8  0.00  223.3      45.5  0.00  234.7      48.3  1.20  225.0
          35     42.5  1.12  120.9      44.7  1.12  317.8      47.5  0.89  311.8      49.8  0.87  296.0
          50     42.8  1.00  151.9      45.6  0.82  434.2      46.7  0.94  431.4      53.1  0.67  421.2
          150    44.8  0.00  338.0      47.2  0.60 1266.1      48.9  0.55 1253.2      51.1  0.00 1163.9
 Sphere   5      35.5  0.00   39.0      39.3  0.00   73.7      44.2  2.94   63.8      37.2  1.42   33.7
          10     38.7  0.00   65.0      40.2  0.00  150.3      42.4  0.00  125.0      39.5  2.45   71.7
          15     43.3  0.00   86.1      43.3  0.00  200.2      42.8  0.00  169.3      41.7  0.00  126.9
          25     40.6  0.00  109.9      43.6  0.00  296.2      43.6  0.00  263.1      42.1  1.26  204.9
          35     42.1  1.75  134.4      44.1  1.40  387.3      44.5  1.60  343.4      42.8  0.00  281.6
          50     42.5  1.34  167.1      44.5  0.00  541.4      44.5  0.00  467.8      43.0  0.00  405.4
          150    44.2  0.93  363.2      46.3  0.00 1434.7      47.9  0.86 1303.9      43.8  0.00 1175.7

Table 1: Results of experiments with homogeneous instances. Results are shown for each of the four capsule lengths 0, 2, 8, 32, and each of the three compaction variants (Strip, Cube, and Sphere). 'n' is the number of capsules. Each instance was run five times with 5 different random seeds. 'Avg.' is the average utilization of the container in percent. 'Std.' is the standard deviation of the utilization over the five runs. 'Time' is the average running time in seconds over the five runs.

The results for the homogeneous instances are presented in Table 1. In the table, the average utilization [Volume of items]/[Volume of container] from the 5 different runs is presented for each instance along with the standard deviation. Utilization levels are generally between 40 and 50 %, even for instances with as many as 150 capsules, although only between 30 and 40 % for the instances with 5 capsules. The high utilization in instances containing as many as 150 capsules indicates that the placement method scales well.
Running times are between 30 seconds for the smallest instances and up to 20 minutes for the largest. The standard deviation is generally between 0 and 2 utilization percentage points, which shows a high level of stability. The utilization is comparable across the different compaction types, which demonstrates that the heuristic works well even for different types of containers. Examples of homogeneous solutions are displayed in Figure 9.

The instances where the length is zero are homogeneous sphere-packing instances. Kepler's conjecture, which was recently proved by Hales [11], states that an optimal packing of homogeneous spheres in an infinitely large box has a utilization of π/(3√2) ≈ 74.048 %. However, for a low number of spheres such a packing may be impossible, and the heuristic is not geared specifically towards homogeneous sphere packing, so the utilization levels are promising.

Figure 9: Examples of homogeneous solutions. (a) 10 capsules of length 32 in minimal cube solution (utilization 51%). (b) 50 capsules of length 32 in minimal strip solution (utilization 53%). (c) 150 capsules of length 0 (spheres) in a minimal sphere (utilization 45%). (d) 150 capsules of length 32 in minimal cube (utilization 51%).

Heterogeneous
                 50,000 iterations       125,000 iterations      250,000 iterations
          n      Avg.  Std.   Time      Avg.  Std.   Time      Avg.  Std.   Time
 Strip    5      43.0  0.00    0.3      43.0  0.00    0.3      43.0  0.00    0.3
          10     49.7  1.81   16.9      52.8  1.03   34.3      54.2  0.35   51.0
          15     43.0  0.42   28.3      43.6  0.32   69.8      44.1  0.37  138.7
          25     43.5  0.27   45.7      44.1  0.29  114.9      44.4  0.18  229.6
          35     43.9  0.41   61.5      44.6  0.28  161.0      44.9  0.19  323.9
          50     42.9  0.32   76.7      45.0  0.36  229.6      45.7  0.34  468.3
          150    17.8  0.00  159.7      34.8  0.15  409.5      47.3  0.28 1097.2
 Cube     5      42.3  0.86   13.3      42.5  1.21   33.6      42.5  1.21   66.8
          10     43.1  0.35   23.2      43.3  0.65   58.2      44.0  1.39  117.0
          15     45.5  0.90   32.2      45.8  0.96   80.0      46.5  1.08  160.2
          25     45.3  0.79   50.0      45.5  0.56  123.4      45.7  0.51  246.1
          35     45.8  0.67   67.4      46.2  0.22  166.1      46.6  0.75  331.5
          50     46.3  0.23   93.3      46.6  0.19  230.8      46.7  0.39  456.6
          150    47.3  0.30  205.8      47.5  0.50  625.4      48.3  0.31 1288.4
 Sphere   5      42.2  1.20   11.1      42.8  1.47   27.2      42.8  1.47   54.1
          10     43.8  0.93   22.6      43.8  0.93   57.4      44.1  0.47  116.4
          15     43.5  1.01   33.6      44.1  0.83   85.0      44.3  0.42  170.8
          25     43.4  0.70   51.2      44.5  0.89  127.6      44.7  0.81  256.8
          35     44.0  0.39   69.4      45.0  0.32  172.7      45.2  0.00  344.5
          50     44.6  0.65   95.9      45.2  0.60  236.5      45.3  0.67  476.1
          150    46.4  0.46  237.0      46.6  0.47  660.8      46.8  0.22 1331.0

Table 2: Results for the heterogeneous instances. Results are shown after 50,000, 125,000, and 250,000 iterations to illustrate the convergence of the heuristic. For each entry, the average over the five runs on the three different instances is reported. See Table 1 for a description of the labels.

7.1.2 Heterogeneous Problems

For the heterogeneous problems, instances were generated randomly, with capsule radii set to 1 and lengths from L. Three instances were generated for each optimization variant from V and each value of n ∈ N, giving a total of 3 × 3 × 7 = 63 heterogeneous instances. Results for the heterogeneous instances are presented in Table 2. The results of the heterogeneous instances after 250,000 iterations match those of the homogeneous instances. Utilization is generally between 40 and 50 %, and in some cases exceeds 50 %. The standard deviation remains below 2 utilization percentage points. The results also show that the heuristic converges rapidly.
For the small instances containing 5-15 capsules there is little improvement between 50,000 and 250,000 iterations. For the larger instances containing up to 50 capsules, the improvement between 125,000 and 250,000 iterations is less than a single percentage point. For 150 capsules, good results are only reached after 250,000 iterations in the strip-packing variant, while the last 125,000 iterations for the other variants show little improvement. Example solutions are shown in Figure 10.

Figure 10: Examples of heterogeneous solutions. (a) 10 capsules in minimal height container (utilization 55%). (b) 150 capsules in minimal cube container (utilization 58%). (c) 150 capsules in minimal height container (utilization 47%). (d) 150 capsules in minimal sphere container (utilization 47%).

No other published results exist for packing problems involving capsules, but the best known results for three-dimensional strip-packing of polyhedra yield utilizations of between 40 and 55 %, as presented by Egeblad et al. [B], so the utilization levels reached by our method are promising.

7.1.3 Problems with Links

A number of instances with linked capsules were randomly generated and tested to investigate the method's ability to find feasible placements under compact conditions. Capsules were linked in four different ways (see Figure 11):

• As an open chain of capsules where capsule i is linked to capsule i + 1 (Figure 11 (a)).

• As a closed chain of capsules where capsule i is linked to capsule i + 1 and capsule n is linked to capsule 1 (Figure 11 (b)).

• As an open chain consisting of single links or 'T'-intersection links, where one endpoint of at least one capsule is connected to two other capsules. The chain is acyclic (Figure 11 (c)).

• As a closed chain consisting of single links or 'T'-intersection links, where capsule i may be connected to both capsule j and capsule k.
The chain contains at least one cyclic sub-chain (Figure 11 (d)).

Figure 11: The different types of instances for experiments. (a) Open chain. (b) Closed chain. (c) Open T-chain. (d) Closed T-chain.

The capsules were generated in the same way as the heterogeneous instances described in Section 7.1.2 for each of the four types of links. This gives a total of 4 × 63 = 252 instances with linked capsules. The results for the instances with linked capsules are given in Table 3. Results for all four different types of chains are promising. Generally, utilization levels of over 40 % are reached, which matches the instances without links and shows that the heuristic handles the extra constraints imposed by adding links extremely well. Although the randomly generated instances with closed T-chains may be infeasible, i.e. it is unknown whether a valid solution for the links exists, close inspection of the data revealed that the heuristic was able to find feasible placements in almost all runs; it failed only in 8, 10, and 11 runs for respectively strip, cube, and sphere packing of the instances containing 150 capsules. Thus the heuristic handles difficult link constraints well in compact placements for instances with up to 50 capsules, while open and closed chain compaction problems are dealt with even for 150 capsules.

7.2 Decision Problems

The RNA structure P4-P6 was modeled to test the performance of the heuristic for problems where only a feasible placement must be found within an envelope. The structure consists of 158 nucleotides and 8 helical regions (see Figure 14 (a)). The 8 helical regions were contracted into 7 helices and converted into an instance of the capsule placement problem, as illustrated in Figure 14 (b). A crystal structure was used to identify the actual position of each nucleotide in the RNA molecule.
The nucleotides of each helix were identified, and the center axis of each helix was found by linear least squares fitting of the positions of the nucleotides in the crystal structure. The radius of each helix was determined as the maximum distance from the axis to the center of any of the involved nucleotides. Links were added between capsules for which the associated helices were neighbors in the backbone of the RNA, and the required distance between two capsules i and j was set to the total distance between the last nucleotide of i and the first nucleotide of j on the backbone. Additionally, a molecular surface was generated using small-angle X-ray scattering (SAXS) and converted into a triangle mesh consisting of 880 triangles, which was used to represent an envelope.

The generated instance was tested with 375 different random seeds. The results of the 375 test runs are summarized in Table 4. In 318 (85%) of the 375 test runs, a feasible placement within the envelope was found. Each run took on average 445 seconds, with the fastest run taking less than 4 minutes and the slowest almost 105 minutes.

With Link Constraints
                           Strip                          Cube                           Sphere
                n      Avg.  Std.   Time  Fail      Avg.  Std.   Time  Fail      Avg.  Std.   Time  Fail
 Open chain     5      43.0  0.00    0.3   -        45.2  0.52   75.3   -        42.1  0.00   62.8   -
                10     49.9  1.12   95.2   -        43.0  0.47  131.0   -        43.7  0.00  126.3   -
                15     42.2  0.26  166.6   -        44.4  1.13  183.9   -        41.7  1.17  186.7   -
                25     42.0  0.44  276.6   -        43.6  0.94  277.5   -        42.0  0.00  283.3   -
                35     41.3  0.32  378.2   -        43.7  0.84  366.5   -        42.0  0.00  376.3   -
                50     41.3  0.34  508.7   -        43.6  0.57  498.6   -        42.5  0.81  513.2   -
                150    38.4  0.43 1163.9   -        42.6  0.47 1352.8   -        42.0  0.86 1424.3   -
 Closed chain   5      43.0  0.00    0.3   -        45.2  0.57   76.3   -        42.1  0.00   68.6   -
                10     48.2  1.23   98.6   -        42.8  0.84  134.3   -        43.7  0.00  128.2   -
                15     42.1  0.34  169.0   -        44.1  0.77  185.5   -        41.7  0.50  186.9   -
                25     41.5  0.41  277.3   -        43.4  0.94  280.6   -        41.8  1.31  284.0   -
                35     40.9  0.31  379.1   -        43.6  0.83  370.1   -        42.0  0.80  374.2   -
                50     41.0  0.35  509.5   -        43.1  0.93  500.1   -        42.5  0.81  513.4   -
                150    38.4  0.48 1165.6   -        42.0  0.60 1352.4   -        41.5  0.92 1419.6   -
 Open T-chain   5      43.0  0.00    0.5   -        44.7  0.46   75.0   -        42.1  0.00   62.4   -
                10     48.1  1.03   97.8   -        43.0  0.47  132.1   -        43.4  0.61  126.6   -
                15     42.2  0.32  167.9   -        44.6  1.21  184.5   -        42.2  0.56  188.3   -
                25     42.0  0.43  279.5   -        43.2  0.85  279.3   -        42.7  0.54  286.4   -
                35     41.3  0.47  381.5   -        43.7  0.32  370.9   -        42.6  0.92  381.0   -
                50     40.9  0.48  512.0   -        43.6  0.46  505.4   -        42.7  1.18  519.9   -
                150    38.0  0.53 1175.8   -        42.5  0.48 1370.6   -        42.0  0.55 1442.7   -
 Closed T-chain 5      43.0  0.00    1.1   -        43.0  2.79   76.6   -        42.1  0.00   67.0   -
                10     39.7  0.94  116.4   -        40.6  0.90  135.0   -        41.4  2.03  133.6   -
                15     39.2  0.56  178.9   -        42.4  1.28  187.3   -        40.4  0.49  188.3   -
                25     37.6  0.53  295.3   -        41.4  1.05  287.1   -        40.1  0.43  291.9   -
                35     38.4  0.63  392.4   -        40.8  1.11  378.6   -        40.9  1.14  390.9   -
                50     36.9  1.24  518.0   -        40.7  0.92  518.7   -        40.1  1.06  528.0   -
                150    17.6  1.18 1403.6   8        26.3  2.48 1477.8  10        24.4  0.84 1528.7  11

Table 3: Results of experiments with instances with link constraints. Results are presented for each of the four different types of chains and each of the three different compaction goals. Results for each instance type cover the average result of three instances, where each instance has been run 5 times. See Table 1 for a description of the labels. The column 'Fail' contains the number of the 15 runs of each instance type where no placement could be found within the maximum iteration limit. '-' indicates that a feasible placement was found for all instances in all runs of the designated instance type.

Figure 12: Example results for problems with capsule links. The first row of each example illustrates the capsule placement, and the second row illustrates connectivity. (a) 5 capsules in a closed chain (minimal height). (b) 25 capsules in a closed T-chain (minimal height). (c) 50 capsules in a closed loop (minimal height).
(d) 5 capsules in a closed T-chain (minimal height). (e) 10 capsules in an open loop (minimal height). (f) 35 capsules in an open T-chain (minimal sphere).

P4-P6 RNA (PDB ID: 1GID)
Success        318/375 (85%)      Utilization    30.5%
Avg. RMSD      19.07              Avg. Time       445.0
Min. RMSD      10.74              Min. Time       221.7
Max. RMSD      25.44              Max. Time      6303.9
Std. RMSD       2.73              Std. Time       570.0

Table 4: Overview of the results from the envelope decision test. RMSD values are in Å, and time values are in seconds. 'Std. RMSD' and 'Std. Time' are the standard deviations of the RMSD and the time.

Figure 13: A known RNA molecule is converted into an ICPDP, and a placement is found within the molecular envelope using our procedure. The endpoints of the capsule line-segments of the resulting placement were compared to the endpoints of the capsule line-segments from the known RNA structure, and the resulting RMSD reported.

The resulting placements were compared with the input structure by measuring the root mean square deviation (RMSD) between the endpoints of the capsules from the crystal structure and the endpoints of the capsules from each solution. The method is illustrated in Figure 13. The placements had an average RMSD from the input structure of approximately 19 Å, while the minimal RMSD was 10.74 Å and the maximal 25.44 Å. The placement with minimal RMSD is illustrated in Figure 14 (c) and shown with the target structure in Figure 14 (d).

8 Conclusion

We have introduced a simple coarse-grained model for RNA tertiary structure prediction in which helical regions are converted into interconnected capsules. An efficient method capable of finding feasible layouts of the capsules within a molecular envelope was described.
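The endpoint comparison described above (RMSD between corresponding capsule endpoints) can be sketched as follows. This is a minimal illustration, not the authors' code: `numpy` and the helper name `endpoint_rmsd` are assumptions, and the coordinates are assumed to already be in the same frame of reference.

```python
import numpy as np

def endpoint_rmsd(ref, pred):
    """Root mean square deviation between corresponding capsule
    endpoints. `ref` and `pred` are N x 3 coordinate arrays (in Å),
    assumed pre-aligned in the same frame of reference."""
    ref = np.asarray(ref, dtype=float)
    pred = np.asarray(pred, dtype=float)
    # Squared Euclidean distance per endpoint pair, averaged, then rooted.
    sq_dist = ((ref - pred) ** 2).sum(axis=1)
    return float(np.sqrt(sq_dist.mean()))

# Two capsules = four endpoints; every endpoint displaced by 1 Å,
# so the RMSD is exactly 1.0.
rmsd = endpoint_rmsd([[0, 0, 0], [1, 0, 0], [0, 2, 0], [0, 3, 0]],
                     [[0, 0, 1], [1, 0, 1], [0, 2, 1], [0, 3, 1]])
```

Note that RMSD comparisons between molecular structures are often preceded by an optimal superposition step (e.g. the Kabsch algorithm); the sketch above omits this and simply compares coordinates as given.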
The method is based on a local search scheme in which each capsule is translated in one of four directions or rotated such that the feasibility of the placement is increased with each change. Finding an improving position is done efficiently using a polynomial-time algorithm. The resulting paradigm can be used not only for finding a feasible placement of the capsules, but also for solving optimization variants of the problem in which a compact placement is desired. The compaction heuristics show promising results, with utilization levels around 50% for minimal-height box packing, minimal cube packing, and minimal sphere packing problems. This matches previously published work from the literature for non-rectangular shapes in three dimensions.

Figure 14: (a) The nucleotides and base-pairs of the P4-P6 RNA used for testing the coarse-grained model. (b) Layout of the capsules determined from the P4-P6 RNA structure. (c) Placement of capsules from the structure with minimal RMSD. (d) Overlay of the structures from (b) and (c) with capsule radii divided by 4 for a clearer comparison.

The heuristic handles additional connection constraints between capsules well and is able to find highly compact placements with connection constraints for up to 150 capsules within 20 minutes. Experiments with modeling an RNA structure consisting of more than 150 nucleotides as a set of capsules within a molecular envelope reveal promising results, and further refinement of the resulting placement may lead to a more accurate prediction of the actual structure. This shows that the method has the potential to become a valuable tool for tertiary RNA structure prediction. Further analysis of the procedure presented in this paper with RNA structures may reveal whether the procedure is capable of accurately predicting structures of hundreds of nucleotides.
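The helix-to-capsule conversion used in the envelope experiment (capsule axis from a least-squares line fit through the nucleotide centers, radius as the maximum distance of any center from that axis) can be sketched as below. `numpy` and the function name `helix_to_capsule` are illustrative assumptions, not the authors' implementation; the least-squares line is obtained as the first principal direction via SVD.

```python
import numpy as np

def helix_to_capsule(coords):
    """Fit a capsule to the nucleotide centers of one helix.

    The capsule axis is the least-squares line through the points
    (first right-singular vector of the centered coordinates), and the
    radius is the largest distance from any point to that axis.
    `coords` is an N x 3 array; returns the two axis-segment endpoints
    and the radius.
    """
    coords = np.asarray(coords, dtype=float)
    centroid = coords.mean(axis=0)
    centered = coords - centroid
    # The first right-singular vector minimizes the sum of squared
    # point-to-line distances, i.e. it is the least-squares axis.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    axis = vt[0]
    # Project the points onto the axis to get the segment extent.
    t = centered @ axis
    end_a = centroid + t.min() * axis
    end_b = centroid + t.max() * axis
    # Radius: maximum distance from any point to the fitted axis.
    radial = centered - np.outer(t, axis)
    radius = float(np.linalg.norm(radial, axis=1).max())
    return end_a, end_b, radius

# Toy data: six points at distance 1 from the z-axis, spread along z,
# so the fitted capsule has radius 1 and axis length 4.
a, b, r = helix_to_capsule([[1, 0, 0], [-1, 0, 0], [0, 1, 2],
                            [0, -1, 2], [1, 0, 4], [-1, 0, 4]])
```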
Additionally, the model may be extended to include energy potentials or other information which may increase the accuracy of this coarse-grained method.

References

[A] J. Egeblad, B. K. Nielsen, and A. Odgaard. Fast neighborhood search for two- and three-dimensional nesting problems. European Journal of Operational Research, 183(3):1249–1266, 2007.
[B] J. Egeblad, B. K. Nielsen, and M. Brazil. Translational packing of arbitrary polytopes. Computational Geometry: Theory and Applications, 2008. Accepted for publication.
[1] J. Cagan, K. Shimada, and S. Yin. A survey of computational approaches to three-dimensional layout problems. Computer-Aided Design, 34:597–611, 2002.
[2] E. Capriotti and M. A. Marti-Renom. Computational RNA structure prediction. Current Bioinformatics, 3:32–45, 2008.
[3] R. Das and D. Baker. Automated de novo prediction of native-like RNA tertiary structures. Proceedings of the National Academy of Sciences of the United States of America, 104(37):14664–14669, 2007.
[4] F. Ding, S. Sharma, P. Chalasani, V. V. Demidov, N. E. Broude, and N. V. Dokholyan. Ab initio RNA folding by discrete molecular dynamics: From structure prediction to folding mechanisms. RNA, 14:1164–1173, 2008.
[5] C. B. Do, D. A. Woods, and S. Batzoglou. CONTRAfold: RNA secondary structure prediction without physics-based models. Bioinformatics, 22(14):e90–e98, 2006.
[6] O. Faroe, D. Pisinger, and M. Zachariasen. Guided local search for final placement in VLSI design. Journal of Heuristics, 9(3):269–295, 2003. ISSN 1381-1231.
[7] O. Faroe, D. Pisinger, and M. Zachariasen. Guided local search for the three-dimensional bin packing problem. INFORMS Journal on Computing, 15(3):267–283, 2003.
[8] M. Garey and D. Johnson. Computers and Intractability: A Guide to the Theory of NP-Completeness. W. H. Freeman and Company, New York, 1979.
[9] F. Glover. Tabu search - Part I. ORSA Journal on Computing, 1(3):190–206, 1989.
[10] F. Glover. Tabu search - Part II. ORSA Journal on Computing, 2(1):4–32, 1990.
[11] T. C. Hales. A proof of the Kepler conjecture. Annals of Mathematics, 162:1065–1185, 2005.
[12] C. Hyeon, R. I. Dima, and D. Thirumalai. Size, shape, and flexibility of RNA structures. The Journal of Chemical Physics, 125(19):194905, 2006.
[13] T. Imamichi and H. Nagamochi. A multi-sphere scheme for 2D and 3D packing problems. Volume 4638, pages 207–211, 2007.
[14] S. Kirkpatrick, C. D. Gelatt, Jr., and M. P. Vecchi. Optimization by simulated annealing. Science, 220(4598):671–680, 1983.
[15] F. Major. Building three-dimensional ribonucleic acid structures. Computing in Science and Engineering, 5:44–53, 2003.
[16] H. M. Martinez, J. V. Maizel, and B. Shapiro. RNA2D3D: A program for generating, viewing and comparing 3-dimensional models of RNA. Journal of Biomolecular Structure and Dynamics, 25(6):669–683, 2008.
[17] C. Massire and E. Westhof. MANIP: An interactive tool for modelling RNA. Journal of Molecular Graphics and Modelling, 16:197–205, 255–257, 1998.
[18] M. Mitchell. An Introduction to Genetic Algorithms. MIT Press, Cambridge, MA, 1996.
[19] C. Zwieb and F. Müller. Three-dimensional comparative modeling of RNA. Nucleic Acids Symposium Series, 36:69–71, 1997.
[20] B. A. Shapiro, Y. G. Yingling, W. Kasprzak, and E. Bindewald. Bridging the gap in RNA structure prediction. Current Opinion in Structural Biology, 17:157–165, 2007.
[21] Y. Stoyan et al. Packing of various radii solid spheres into a parallelepiped, 2001. URL citeseer.ist.psu.edu/stoyan01packing.html.
[22] Y. Stoyan, N. I. Gil, G. Scheithauer, A. Pankratov, and I. Magdalina. Packing of convex polytopes into a parallelepiped. Optimization, 54(2):215–235, 2005.
[23] C. Voudouris and E. Tsang. Guided local search. Technical Report CSM-147, Department of Computer Science, University of Essex, Colchester, CO4 3SQ, UK, August 1995.
[24] S. Yin, J. Cagan, and P. Hodges. Layout optimization of shapeable components with extended pattern search applied to transmission design. Journal of Mechanical Design, 126(1):188–191, 2004. doi: 10.1115/1.1637663.
[25] M. Zuker, D. H. Mathews, and D. H. Turner. Algorithms and thermodynamics for RNA secondary structure prediction: A practical guide. In J. Barciszewski and B. F. C. Clark, editors, RNA Biochemistry and Biotechnology, NATO ASI Series. Kluwer Academic Publishers, 1999.
