Abstract
This work addresses the 2D Bin-Packing Problem with Varied Size and proposes heuristic solutions for it. An extensive literature review on the problem is carried out, and the state-of-the-art algorithm from the literature for its variation with the guillotine constraint is reproduced in order to validate the reported results. This implementation also serves as a baseline against which a proposed improvement is compared, an improvement that enables the addition of a new constraint to the problem, meeting an industry demand for its practical application.
1 Introduction
Given one-dimensional boxes and items of varying lengths, the Bin Packing Problem (BPP) seeks to determine how to allocate items to boxes in order to pack all items while minimizing the number of boxes used. The problem is famously NP-Hard, which means that every problem in NP can be reduced to it in polynomial time and, consequently, no algorithm of polynomial complexity is known to solve it. The Bin-Packing Problem is one of the most studied classical problems in computing, and several heuristic algorithms have been proposed to quickly find good, but not necessarily optimal, solutions for it.
In this context, the 2D Bin-Packing Problem with Varied Size (2DVSBPP) is a variation of the original BPP, with rectangular two-dimensional boxes and items, with such boxes having different sizes, and the objective is to pack all items minimizing the total area spent for this (or, in the same way, maximizing the use of the total area). More formally, given N rectangular items of widths \(W_i\) and heights \(H_i\), with \(i\in [1, N]\), and M rectangular boxes of widths \(W_j\) and heights \(H_j\), with \(j \in [1, M]\), one wants to define the position of each item in exactly one box such that the items are entirely within a box, there is no overlapping of items and the sides of an item are parallel to the sides of its respective box, in order to minimize the sum of the areas of the boxes used.
As it is a generalization of the BPP, the 2DVSBPP is also NP-Hard, and methods to solve it become even more complex and elaborate, given the exponentially greater number of possible combinations of items with boxes due to the addition of an extra dimension. Furthermore, the 2DVSBPP is of great relevance to industry [2, 4], having as its immediate application the determination of the layout for cutting metal sheets in order to minimize material waste. In this context, another way of describing the problem (which will be used throughout this work) is to determine a sequence of vertical and horizontal cuts in the base material in order to obtain rectangular items from it. Since these cuts have to go from one end of the base material to the opposite one, this version of the problem is known as the 2DVSBPP with Guillotine Constraint [9]. Due to its direct applicability in industry, it is of great interest to study this problem and develop more efficient solutions for it, including ones that fit strict industry requirements [2, 4, 5].
The objectives of this work are, thus, three-fold: to carry out an extensive literature review for the 2D Bin-Packing Problem, focusing on its version with Varied Size and Guillotine Constraint; reproduce the state-of-the-art algorithm for the 2DVSBPP with Guillotine Constraint and analyse the results obtained by [3]; propose an improvement to this algorithm, taking into account not only the use of the total area spent but also the real practicality of the generated solutions - by reducing the number of changes of cut orientations in the solution - and enable the introduction of a new constraint to the problem in order to increase its applicability in the industry, by setting a hard limit on such number of cut orientation changes.
The remainder of this paper is organized as follows: Sect. 2 presents the literature review carried out for the 2DVSBPP. Section 3 introduces the methodology used in the work, specifying the process of collecting test instances, implementing the baseline algorithm, proposing an improvement to the algorithm, adding a new constraint to the problem and the corresponding results obtained. Finally, Sect. 4 concludes the work.
2 Literature Review
In [9], the authors proposed a notation to define 4 different variations of the 2D Bin Packing Problem:
- 2BP|O|G: items cannot be rotated and must be positioned to allow guillotine cutting
- 2BP|R|G: items can be rotated and must be positioned to allow guillotine cutting
- 2BP|O|F: items cannot be rotated and the cutting shape is free
- 2BP|R|F: items can be rotated and the cutting shape is free
A guillotine cut is a straight, vertical or horizontal cut that must be made from one side of the box to the other. Thus, the guillotine cutting constraint implies finding an arrangement of items such that they can be extracted using only cuts of this type. This constraint is very common in industrial applications. In the same work, a different heuristic is presented for each of the four variations of the problem, along with an algorithm based on Tabu Search that can employ any of the four heuristics to solve the corresponding variation; this algorithm became the state of the art at the time.
In the work of [12], an exact solution for the 2DVSBPP is presented. The authors used an Integer Programming (IP) formulation for the problem which, according to the text, is very difficult for standard solvers. They then presented four lower bounds for the problem and implemented a branch-and-price algorithm that applies these bounds to solve the problem optimally. A new set of instances for the problem, with 5 bin sizes and numbers of items equal to 20, 40, 60, 80 and 100, was also proposed. It is shown that one of the presented lower bounds, which makes use of the Dantzig-Wolfe decomposition, is extremely accurate, with an average gap of 3.89% at the root node of the branch-and-price tree. This lower bound is computed through a Mixed-Integer Programming model.
In [11], the 2DVSBP|O|F is discussed, that is, 2BP|O|F with items and boxes of varying sizes. The authors presented a two-phase algorithm for the problem. The first step consists of solving consecutive Strip Packing problems, using the width of the current box as the width of the strip, until all items have been packed. In the second step, a repacking process is carried out on each box, in the hope of repacking all of its items into a box of smaller area, through successive attempts on increasingly larger boxes. The algorithm was tested on instances from the literature and new proposed instances, whose generation process is described in detail.
In [8], the authors presented two algorithms based on Dynamic Programming (DP) to solve the 2DVSBPP with a limited number of bins of each size. The first algorithm, H1, only solves the 2DVSBPP without guillotine cuts and with orientation constraints. The second, H2, can incorporate or omit each of these constraints. The DP does not solve the problem optimally, since it has an exponential number of states; instead, it is used to solve the problem approximately by merging several states into one, losing information but gaining time and space. On average, H1 obtains considerably better solutions than H2 when the constraints that H2 can incorporate are not present.
Alvarez-Valdés et al. [1] tackled the 2D and 3D Variable Sized and Variable Cost Bin Packing Problems. The authors proposed a two-phase algorithm to solve the problem. The first phase consists of a GRASP, which generates a set of local optimal solutions. The second phase is a Path Relinking algorithm, responsible for combining the solutions from the first phase into a better solution.
Hong et al. [6] addressed the 2DVSBP|O|G variant and introduced a heuristic called BTVS, which selects the next box size using a procedure based on Backtracking. For each size chosen, a semi-greedy algorithm is used to try to fill the box with the largest area of items possible, considering different item orderings. Such heuristic makes use of two new algorithms, one semi-greedy and the other greedy, described by the authors, and in the end the best solution returned by one of the two methods is chosen. The authors also presented a heuristic based on Simulated Annealing, where the neighborhood function simply consists of selecting two random items and swapping their order. This method is designed to improve the solution found by BTVS.
In [2], the authors proposed the Two-Dimensional Three-Stage Guillotine Cutting Problem with Batch Precedence Constraints, which is similar to 2DVSBPP, but with an additional constraint that solutions can have a maximum depth of three - that is, they must start with a set of vertical cuts, followed by a set of horizontal cuts, followed by a final set of vertical cuts - and the order of the items being obtained follows a pre-defined sequence. Such restrictions are motivated by demands from the glass cutting industry. Three heuristics are presented for this problem, and experiments are carried out on publicly available instances. Similarly, in [4] the same authors propose constructive heuristic methods, whereas in [5], a Biased Random-key Genetic algorithm is proposed for the same problem.
In [13], the authors presented a matheuristic approach for 2DVSBP|O|G. In this algorithm, feasible solutions are successively constructed by iteratively solving one sub-problem at a time. Three different Integer Programming models were presented, each more accurate but more expensive than the last. The developed algorithm surpassed the then state of the art, both on the problem in question and on the 2D Single Size BPP. On the latter, the procedure was able to prove the optimality of the solution in 415 of the 500 instances tested.
Recently, Gardeyn and Wauters [3] proposed a method for \(2DVSBP|*|G\), that is, with the orientation constraint optionally applied. The authors used a rooted tree to represent a solution, where leaf nodes are items/empty spaces, and inner nodes define the direction in which child nodes are cut (vertical or horizontal). The proposed algorithm is based on the Ruin-and-Recreate paradigm, and computational experiments show that it is the new state of the art for \(2DVSBP|*|G\) and equals the state of the art for \(2DBP|*|G\), although it has considerably longer execution times.
Finally, [10] carried out an extensive bibliographical review of the original Bin Packing Problem, exposing its formal notation and applications and, mainly, proposing a notation to catalog all the variations and restrictions of the BPP. Furthermore, it also presents statistics that illustrate how intensively the problem has been researched over the years.
3 Methodology and Contribution
Based on the literature review, it was decided to collect and use the test instances proposed in [7, 12] and [11], for a total of 855 instances. The first set contains 500 instances separated into 10 groups, namely MC_B1 through MC_B10. The second set consists of 340 instances divided into two groups of 170 each, where the first, Nice, contains instances with items of homogeneous size, while the second, Path, contains items of vastly different and extreme sizes and dimensions. The last set contains 15 instances divided into 3 groups: M1, M2 and M3.
3.1 Literature Baseline Implementation
The algorithm chosen for reproduction and implementation was the one proposed in [3], for the \(2DVSBP|*|G\), as it is an extremely recent work, is the current state of the art for this problem and is well specified in the original article. The entire description of the algorithm presented by the authors was meticulously followed, and the parts of the approach not detailed in the original article were implemented with a focus on minimizing the time and memory complexity of the algorithm, while maintaining the expected behavior.
The algorithm is based on a goal-driven ruin-and-recreate heuristic. A rooted tree represents a solution, where leaf nodes are items/empty spaces and internal nodes define the direction in which child nodes are cut. More specifically, if an internal node represents cuts in a certain direction, any internal child node represents cuts in the opposite direction to that of its parent (that is, an internal node represents a sequence of cuts in the same direction). The procedure uses two values to guide the search for better solutions: the total area of the boxes used and the total area of unpacked items. This is because the procedure works with infeasible solutions rather than discarding them. The Ruin phase consists of deleting random nodes (item or internal) from the solution tree until a certain maximum number of nodes has been deleted, as long as the area of the modified solution remains greater than an area limit. The Recreate phase tries to insert each unpacked item into the solution, possibly opening new boxes to do so. Each item is inserted in the best empty space for it, considering both orientations for the item and for the cut that will generate it. However, to introduce a stochastic element, there is a probability that one of the options is ignored, which can lead to insertion in a sub-optimal space. Finally, to decide whether the newly generated solution is accepted, the Late-Acceptance Hill-Climbing metaheuristic is used: a new solution is accepted if, and only if, it is better than the accepted solution from X iterations ago. Algorithm 1 presents a high-level overview of the algorithm.
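The Late-Acceptance Hill-Climbing acceptance rule described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the solution is abstracted into a single cost value to be minimized (in the paper, the cost combines used box area and unpacked-item area), and the function names are hypothetical.

```python
def lahc_search(initial_cost, ruin_recreate, history_len=50, iterations=1000):
    """Late-Acceptance Hill-Climbing: a candidate is accepted if it is no
    worse than the accepted cost from `history_len` iterations ago (or no
    worse than the current cost). `ruin_recreate` maps a cost to the cost
    of a new candidate produced by one ruin-and-recreate move."""
    current = initial_cost
    best = current
    history = [current] * history_len        # circular buffer of accepted costs
    for it in range(iterations):
        candidate = ruin_recreate(current)   # one ruin-and-recreate move
        idx = it % history_len
        if candidate <= history[idx] or candidate <= current:
            current = candidate              # accept the candidate
        best = min(best, current)
        history[idx] = current               # record the accepted cost
    return best
```

The late acceptance of costs from `history_len` iterations ago is what allows the search to temporarily accept worsening moves and escape local optima.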
Results. Our implementation of the algorithm was run for all 855 instances described in the beginning of this section. The code was developed in C++11 and compiled with GCC v9.4.0. The tests were run on an Ubuntu 18.04 machine with an Intel Core i7 980 and 24 GB of RAM. In the original article, each instance ran for 10 min with 8 threads. As multi-threading was not implemented in this work and to analyze the impact of running time on the algorithm’s performance, it was decided to run our version for 25 min, then reporting the final result and the result obtained in the first 10 min of each execution.
Tables 1, 2 and 3 display the results obtained on the instances of [7, 11] and [12], respectively. Each line presents the average utilization, in percentage, obtained on each instance subset, for our algorithm with 10 and 25 min of execution, the result reported in [3], and the difference between the result in the original article and our 25-minute result. The final line presents the averages for each column. Utilization is defined as the sum of the areas of all items divided by the sum of the areas of all boxes used in the solution found; therefore, its maximum value is 1 (100%).
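The utilization metric defined above is straightforward to compute; a minimal sketch (with a hypothetical representation of items and bins as width/height pairs):

```python
def utilization(items, used_bins):
    """Utilization of a packing: total item area divided by the total area
    of the bins (boxes) actually used. Both arguments are lists of
    (width, height) pairs; the result lies in (0, 1] for a valid packing."""
    item_area = sum(w * h for w, h in items)
    bin_area = sum(w * h for w, h in used_bins)
    return item_area / bin_area
```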
Through an analysis of the results, it can be seen that the results obtained by our implementation are close to those reported in the original article, and, therefore, it is a faithful reproduction of the original algorithm, in addition to a positive validation of such reported results in [3]. This implementation, then, served as a baseline for the following stages of this work, since it is possible to compare the performance between the algorithm and proposed modifications.
3.2 Proposed Improvement to the Baseline Algorithm
Although the main objective of the 2DVSBPP and, consequently, of the Gardeyn algorithm is to maximize the percentage of box area used by the solution, in practical applications this is not the only priority taken into account when producing good solutions to the problem. More specifically, in industrial applications where the 2DVSBPP is used to define a sequence of guillotine cuts in metal/glass sheets in order to obtain smaller pieces, one of the criteria applied to find a feasible solution is the number of cuts to be made or, in more particular scenarios, the number of exchanges between a sequence of cuts in one orientation (for example, vertical) and a sequence of cuts in the other orientation (for example, horizontal), as addressed in [2]. This is mainly because, in such scenarios, changing the orientation of machine cuts requires slow and costly procedures, such as rotating all the base material or parts of the machine itself. A solution that minimizes the number of cut orientation changes can therefore save a manufacturer considerable time and money. In this scenario, this last criterion can be measured as the maximum depth among the trees that represent a solution constructed by the baseline algorithm, since, as already described, an internal node of a solution tree represents a sequence of cuts in the same direction, and its child nodes represent cuts in the opposite direction. Therefore, it was decided to measure the maximum depth of the solutions generated by our reproduction of the algorithm described above. The results can be seen in Tables 4, 5 and 6.
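The correspondence between tree depth and cut orientation changes can be illustrated with a minimal sketch of the solution tree (the `Node` class is hypothetical, not the authors' data structure):

```python
class Node:
    """Node of a guillotine solution tree: internal nodes carry a cut
    direction ('V' or 'H') and children; leaves are items or empty spaces.
    Consecutive tree levels cut in alternating directions."""
    def __init__(self, direction=None, children=None):
        self.direction = direction       # None for leaf nodes
        self.children = children or []

def max_depth(node):
    """Maximum depth of the solution tree. Because each internal level
    represents a sequence of cuts in one direction, this counts the longest
    chain of orientation changes needed to extract the items."""
    if not node.children:                # leaf: item or empty space
        return 0
    return 1 + max(max_depth(child) for child in node.children)
```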
It was noticed that on the instances of [12] and, mainly, of [11], many solutions had depths considered impractical by industry standards (in [2], the industry requirement is a maximum depth of 3). This result motivated a modification to the original algorithm that not only aims at improving the utilization of the area used in the solution, but also focuses on generating solutions of smaller, less variable depths without extreme values (on certain instances, the original method generated solutions with depth above 20).
Description of the Proposed Algorithm. While analysing the solutions generated by our reproduction of the original algorithm, it was noticed that the high depth of the solutions was often concentrated in specific regions of the box where many items were grouped with cutting directions alternating at high frequency. Therefore, the proposed approach to mitigate this problem, generating solutions of smaller depth while maintaining or possibly increasing box utilization, is to use a Dynamic Programming (DP) algorithm: given an empty space in the box, choose a set of items to be arranged side by side in this space so as to maximize the sum of the areas of the selected items. This DP is equivalent to solving a Knapsack Problem, where items with weight \(p_i\) and value \(v_i\) must be chosen in order to maximize the sum of their values without the sum of their weights exceeding a pre-defined limit. In this case, \(v_i\) is the area of item i and \(p_i\) is either the width or the height of the item, depending on how it is arranged in the empty area.
More formally, let I be the set of items not yet added to the solution such that each item would fit individually in the empty space to be filled, H be the height of the space and W its width. Let \(h_i\) also be the height of item \(i \in I\) and \(w_i\) be the width of \(i \in I\). If \(W > H\), then the items will be arranged from top to bottom in the empty space. Otherwise, left to right, side by side. Without loss of generality, considering that \(W > H\), the items are chosen according to the following recurrence equation: \(F(0, l) = 0\); \(F(k, l) = F(k-1, l)\) if \(p_k > l\); and \(F(k, l) = \max \{F(k-1, l),\ F(k-1, l - p_k) + v_k\}\) otherwise, where F(k, l) denotes the maximum total area achievable considering the first k items of I and a remaining capacity l.
The initial call of the equation is \(F(\Vert I\Vert , W)\). The asymptotic time and memory complexity of this algorithm are equal to \(O(\Vert I\Vert \cdot W)\), since the values of F(k, l) are computed and stored for every pair of values \((k \in \{1 ... \Vert I\Vert \}, l \in \{1 ... W\})\).
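The knapsack DP described above can be sketched as follows. This is an illustrative implementation under the stated equivalence to the 0/1 Knapsack Problem, with the backtracking step added to recover which items were selected (the function name and the (weight, value) pair representation are assumptions for the sketch):

```python
def fill_space(items, capacity):
    """0/1 knapsack DP: choose a subset of items maximizing total value
    (item area) subject to their weights (width or height, depending on the
    arrangement) summing to at most `capacity`. `items` is a list of
    (weight, value) pairs; returns (best_value, chosen_indices)."""
    n = len(items)
    # F[k][l] = best value using the first k items with capacity l
    F = [[0] * (capacity + 1) for _ in range(n + 1)]
    for k in range(1, n + 1):
        p, v = items[k - 1]
        for l in range(capacity + 1):
            F[k][l] = F[k - 1][l]                     # skip item k
            if p <= l:
                F[k][l] = max(F[k][l], F[k - 1][l - p] + v)  # take item k
    # Backtrack to recover the chosen item indices
    chosen, l = [], capacity
    for k in range(n, 0, -1):
        if F[k][l] != F[k - 1][l]:                    # item k was taken
            chosen.append(k - 1)
            l -= items[k - 1][0]
    return F[n][capacity], chosen
```

The table has \((\Vert I\Vert + 1) \cdot (W + 1)\) entries, each computed in constant time, matching the \(O(\Vert I\Vert \cdot W)\) complexity stated above.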
Therefore, in order to preserve the core functioning of the original algorithm and maintain its stochastic character, it was decided to introduce this DP filling as part of the Recreate step of the original algorithm. After each insertion of an item during this step, one of the new empty spaces generated by the inclusion (selected uniformly at random) is filled with the proposed method with probability \(p = 1 - \gamma ^{d}\), where d is the depth of the node corresponding to the empty space in question and \(\gamma \in \mathbb {R}\) is a parameter between 0 and 1. This means that the method is more likely to be called when the current branch of the tree is deep. After experimentation, it was decided to set \(\gamma =0.6\). Furthermore, after the DP defines which items should be inserted, they are cut from highest to lowest, from left to right, and the direction of their cuts is defined using the same criteria as the original article.
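The depth-dependent trigger above can be sketched as a small helper (the function name is hypothetical; the `rng` parameter only exists to make the decision testable):

```python
import random

def should_fill_with_dp(depth, gamma=0.6, rng=random.random):
    """Decide whether to run the DP filling on an empty space at the given
    tree depth, with probability p = 1 - gamma**depth. The probability is 0
    at the root (depth 0) and approaches 1 for deep branches, so the filling
    concentrates where the solution tree is already deep."""
    return rng() < 1.0 - gamma ** depth
```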
Results. Since solutions with excessive depth were observed on the test instances of the [11] and [12] sets, the algorithm with the proposed modification was executed on these instances, on the same machine and with the same parameters described in Sect. 3.2.1, for 25 min each. The results are displayed in Tables 7 and 8. Furthermore, in order to visualize the difference in the variation of solution depth within the same set of instances, boxplots with the distribution of these depths are available in Figs. 1 and 2.
It can be seen that on both the Ortmann and Pisinger instances, no significant difference was observed between the utilization percentages of the solutions generated by the two methods. However, when observing the boxplots with the distribution of solution depths, it is evident that, especially on the most challenging sets of instances in Ortmann's dataset - Nice300i, Nice400i, Nice500i, Path300i, Path400i and Path500i -, the solutions found by the proposed algorithm have smaller, less varied depths and no extreme values compared with those produced by the original algorithm. Such a difference, however, is not present in the Pisinger set, where the results of the two methods are extremely close.
Fig. 1. Difference in the depth of solutions between the original algorithm and the proposed modification, on the instances of [11]
Fig. 2. Difference in the depth of solutions between the original algorithm and the proposed modification, on the instances of [12]
3.3 Insertion of Additional Constraint to 2DVSBPP
With the aim of obtaining greater control over the depth of the solutions found, in order to obey the rigid industry limitations mentioned above, it was decided to implement a hard constraint on the maximum depth of the generated solution, parameterized as an input to the algorithm with the proposed modification. This constraint was implemented so as to minimize the impact on the quality of the solutions found without it. Thus, the DP filling method is only run in empty spaces whose corresponding node has a depth smaller than the maximum minus 1, in order to prevent the choice of an item whose cut would increase the depth of the solution beyond what is allowed. In the other steps of the algorithm, when selecting the order of cuts to be added and the items to be inserted, these choices are also restricted to avoid breaking the constraint.
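The two guards described above can be sketched as small predicates (hypothetical names; a simplified model in which a cut orthogonal to the space's parent sequence adds one tree level):

```python
def dp_fill_allowed(space_depth, max_depth):
    """The DP filling is only invoked on empty spaces whose node depth is
    below max_depth - 1, so any new cut sequence it introduces cannot push
    the solution past the hard limit."""
    return space_depth < max_depth - 1

def insertion_allowed(space_depth, opens_new_level, max_depth):
    """Check whether packing an item into an empty space keeps the solution
    within the hard depth limit: a cut in the orientation opposite to the
    space's parent sequence opens a new tree level (depth + 1)."""
    resulting_depth = space_depth + (1 if opens_new_level else 0)
    return resulting_depth <= max_depth
```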
Results. The proposed algorithm with the strict maximum depth constraint was executed on the instances of [11] and [12], on the same machine and with the same parameters described in Sect. 3.2.1, with the same value of \(\gamma \) mentioned in Sect. 3.3.1, and a maximum depth of 7. The results comparing the performance of the algorithm with and without the additional constraint are in Tables 9 and 10. Furthermore, Figs. 3 and 4 show the distribution of the maximum solution depths of both proposed methods.
It can thus be seen that it was possible to add the new constraint with minimal impact on the quality of the solutions found, since the average difference in area utilization between solutions with and without the constraint is less than 1%. Even on the most challenging sets of Ortmann's dataset, where the original solutions are quite deep, the constrained algorithm was able to find solutions of practically the same quality as those without the constraint (Figs. 3 and 4).
Fig. 3. Difference in the depth of solutions between the proposed algorithm without and with the constraint, on the instances of [11]
Fig. 4. Difference in the depth of solutions between the proposed algorithm without and with the constraint, on the instances of [12]
4 Conclusion
This work presented an extensive review of the literature on the 2D Bin Packing Problem, focusing on its variable-sized version; the re-implementation of the state-of-the-art algorithm for the 2D Bin Packing Problem with the guillotine cut constraint; the proposal of a modification to this method in order to improve the practical usefulness of the solutions it finds; and the insertion of an additional constraint into the 2DVSBPP and, correspondingly, into the proposed algorithm in order to enforce this practicality.
Through the results obtained by the reproduced code, it was possible to verify the results reported in [3]. Furthermore, the proposed modification to the original algorithm, which uses a DP to better choose the items that fill empty spaces in the boxes in order to reduce the final depth of the solution and, thus, make it more practical in industrial applications, made the method more stable under this criterion, meaning it finds solutions without extreme maximum depth values and with smaller depth variation between similar instances. Finally, the addition of a hard constraint on the maximum depth of solutions gave more control over the algorithm and made it even more applicable in practical situations that require such a constraint.
References
Alvarez-Valdés, R., Parreño, F., Tamarit, J.M.: A grasp/path relinking algorithm for two-and three-dimensional multiple bin-size bin packing problems. Comput. Oper. Res. 40(12), 3081–3090 (2013)
Bogue, E.T., Guimarães, M.V., Noronha, T.F., Pereira, A.H., Carvalho, I.A., Urrutia, S.: The two-dimensional guillotine cutting stock problem with stack constraints. In: 2021 XLVII Latin American Computing Conference (CLEI), pp. 1–9. IEEE (2021)
Gardeyn, J., Wauters, T.: A goal-driven ruin and recreate heuristic for the 2D variable-sized bin packing problem with guillotine constraints. Eur. J. Oper. Res. 301(2), 432–444 (2022)
Guimarães, M., Bogue, E., Pereira, A., Carvalho, I., Noronha, T., Urrutia, S.: Heurísticas construtivas para o problema de corte guilhotinado bidimensional em 3 estágios com restrições de precedência. Anais do LI Simpósio Brasileiro de Pesquisa Operacional 2 (2019)
Guimarães, M.V., Bogue, E.T., Carvalho, I.A., Pereira, A.H., Noronha, T.F., Urrutia, S.: A biased random-key genetic algorithm for the 2-dimensional guillotine cutting stock problem with stack constraints. In: Dorronsoro, B., Yalaoui, F., Talbi, EG., Danoy, G. (eds.) International Conference on Metaheuristics and Nature Inspired Computing, pp. 155–169. Springer, Heidelberg (2021). https://doi.org/10.1007/978-3-030-94216-8_12
Hong, S., Zhang, D., Lau, H.C., Zeng, X., Si, Y.W.: A hybrid heuristic algorithm for the 2D variable-sized bin packing problem. Eur. J. Oper. Res. 238(1), 95–103 (2014)
Hopper, E., Turton, B.: Problem generators for rectangular packing problems. Stud. Inform. Univ. 2(1), 123–136 (2002)
Liu, Y., Chu, C., Wang, K.: A dynamic programming-based heuristic for the variable sized two-dimensional bin packing problem. Int. J. Prod. Res. 49(13), 3815–3831 (2011)
Lodi, A., Martello, S., Vigo, D.: Heuristic and metaheuristic approaches for a class of two-dimensional bin packing problems. INFORMS J. Comput. 11(4), 345–357 (1999)
Mezghani, S., Haddar, B., Chabchoub, H.: The evolution of rectangular bin packing problem—a review of research topics, applications, and cited papers (2023)
Ortmann, F.G., Ntene, N., Van Vuuren, J.H.: New and improved level heuristics for the rectangular strip packing and variable-sized bin packing problems. Eur. J. Oper. Res. 203(2), 306–315 (2010)
Pisinger, D., Sigurd, M.: The two-dimensional bin packing problem with variable bin sizes and costs. Discret. Optim. 2(2), 154–167 (2005)
Polyakovskiy, S., M’Hallah, R.: A lookahead matheuristic for the unweighed variable-sized two-dimensional bin packing problem. Eur. J. Oper. Res. 299(1), 104–117 (2022)
© 2025 The Author(s), under exclusive license to Springer Nature Switzerland AG
Fernandes, E.A.M., de Noronha, T.F., Coco, A.A. (2025). Heuristic Solutions for the 2D Bin-Packing Problem with Varied Size. In: Paes, A., Verri, F.A.N. (eds) Intelligent Systems. BRACIS 2024. Lecture Notes in Computer Science(), vol 15413. Springer, Cham. https://doi.org/10.1007/978-3-031-79032-4_5