Abstract
The Max-Cut problem asks for a partition of the n vertices of a weighted graph \(G = (V, E)\) into two subsets \((S, \bar{S})\) such that the sum of the weights of the edges between the subsets is maximized. This work introduces two heuristic methods that combine a Genetic Algorithm, Tabu Search, and a set of optimality cuts, which are also proven in this work. To the best of our knowledge, we are the first to use these inequalities in conjunction with the genetic algorithm methodology to solve the Max-Cut problem. Computational experiments on a benchmark set of 54 instances, ranging from 800 to 3000 vertices, show that the incorporation of optimality cuts is crucial for our methodologies to compete effectively with six state-of-the-art approaches for the Max-Cut problem. Moreover, the genetic algorithm that incorporates optimality cuts in the population generation was able to improve the state-of-the-art value for the G51 instance and find the same solutions as the literature in 31 other instances.
This work is partially supported by Universal CNPq [422912/2021-2].
1 Introduction
The Max-Cut problem is a combinatorial optimization problem with a vast domain of applications, including social networks, where the Max-Cut value is often a measure of network robustness and of structural balance originating from social behavior [1, 5, 11, 18], statistical physics, image segmentation, the design of Very Large Scale Integrated (VLSI) circuits, and communication network design [4]. This breadth has attracted the scientific interest of researchers in graph theory, mathematical optimization, and discrete mathematics [6], and motivates the study and development of algorithms that obtain good results in viable computational time.
The Max-Cut problem can be described as follows: given an undirected graph \(G = (V, E)\), where V is a set of n nodes and E is a set of m edges, with a cost \(c_{ij} = c_{ji} \in \mathbb {R}\) for each edge \((i,j) \in E\), any partition of V, represented by \((S, \bar{S})\), defines a cut of G. The cut value is the sum of all \(c_{ij}\) such that \(i \in S\) and \(j \in \bar{S}\); note that S or \(\bar{S}\) may be empty. The Max-Cut problem consists of finding a cut of G with the largest value. Figure 1 a) shows a weighted undirected graph G with \(V = \{1,2,3,4,5\}\) and \(E = \{(1,2), (1,4), (2,3), (2,4), (2,5), (3,4), (3,5), (4,5) \}\). Figure 1 b) presents a possible partition \((S, \bar{S})\), where \(S = \{1, 2, 3\}\) and \(\bar{S} = \{4, 5\}\), with associated cut value \(c_{14} +c_{24} + c_{25} + c_{34} + c_{35} = 10 + 10 + 5 + 10 + 5 = 40\).
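As a quick sanity check of the definitions above, the following Python snippet (an illustration of ours, not part of the paper's C++ implementation) computes the cut value of the partition in Fig. 1. The weights of edges (1,2) and (2,3) are not given in the text and do not cross this cut, so placeholder values are used; the weight \(c_{45} = 6\) is inferred from the \(C_4\) example in Sect. 3.

```python
# Cut value of the partition from Fig. 1: sum of the weights of edges
# whose endpoints lie in different parts.
edges = {
    (1, 2): 1,   # placeholder weight (both endpoints in S; does not affect this cut)
    (2, 3): 1,   # placeholder weight (both endpoints in S)
    (1, 4): 10, (2, 4): 10, (2, 5): 5, (3, 4): 10, (3, 5): 5,
    (4, 5): 6,   # implied by C_4 = 36 in the Sect. 3 example
}

def cut_value(S, edges):
    """Sum the weights of edges with exactly one endpoint in S."""
    return sum(w for (i, j), w in edges.items() if (i in S) != (j in S))

print(cut_value({1, 2, 3}, edges))  # 10 + 10 + 5 + 10 + 5 = 40
```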
Classified as an NP-hard problem [14], it is considered a computational challenge to solve it optimally, even for moderately sized instances [19]. Therefore, for these instances, it is necessary to use approaches that provide results close to the optimal solution, such as heuristics, meta-heuristics, and approximation algorithms [10].
For large instances, heuristic and metaheuristic methods are commonly used to find “good enough” sub-optimal solutions. In particular, for the very popular Max-Cut problem, many heuristic algorithms have been proposed, including simulated annealing [23], tabu search [17], tabu search with a local search procedure [2], scatter search with a Greedy Randomized Adaptive Search Procedure (GRASP) [21], a memetic algorithm [27], and a multiple search operator heuristic [20]. These six procedures were tested on the benchmark instance set known as the G set, and the results obtained (cut values only) represent, as far as is known, the state of the art for Max-Cut.
In this context, a genetic algorithm is a population-based meta-heuristic [22] that has been used in several studies, such as test data generation and selection in a regression testing environment [24], the job shop scheduling problem [26], designing the weights that combine a Recurrent Neural Network (RNN) and Long Short-Term Memory (LSTM) [7], and the Max-Cut problem [10, 15, 16].
The literature shows an extensive use of heuristics and meta-heuristics to obtain solutions for Max-Cut, a fact that motivates the creation of hybrid, guided non-exact methodologies to improve on the state of the art. Heuristic approaches using optimality cuts proved to be an interesting way to reduce the search space and perform less random modifications to the solutions. Generally, valid inequalities, optimality cuts, and optimality conditions are used in exact approaches [8, 9, 25]. On the other hand, such cuts are difficult to find and prove and, even for known inequalities, it is difficult to apply them efficiently to optimization problems.
In this paper, we develop two variations of genetic algorithms that use optimality cuts in their composition. The first uses the optimality cuts in the generation of the initial population and also as a modification (a kind of mutation) procedure. The second uses the optimality cuts in population generation and also as a criterion for choosing candidates to compose a tabu list. The main difference between the two versions is the effort spent on generating new individuals: the first variant puts more effort into generating a good population, using the mutation procedure and local search, while the second puts more effort into improving the value of individuals through tabu search.
The main contributions of this paper are thus: (1) the use of optimality cuts obtained through the study of the objective function of a quadratic programming model for the maximum cut problem; (2) the development of genetic algorithms that incorporate optimality cuts to guide the search for better solutions; and (3) a genetic algorithm that, by incorporating optimality cuts in the population generation, was able to improve the state-of-the-art value for the G51 instance and find the same solutions as the literature in 31 other instances.
The remaining sections of this paper are organized as follows. The optimality cuts used are presented and proved in Sect. 2. In Sect. 3, we present how the optimality cuts are used in the composition of the two variants of the genetic algorithm. Section 4 contains the computational results. Finally, Sect. 5 concludes the paper with our final remarks.
2 Optimality Cuts for Max-Cut
Let \( (S, \bar{S}) \) be a partition represented by the binary vector \(x \in \{ 0,1 \}^n \), with \( x_i = 0 \) if \(i \in S \) and \(x_i = 1\) otherwise, for all \(i \in \{1, \dots , n\}\), and let \(\bar{x}_j = 1 - x_j \) for all \(j \in \{1, \dots , n\}\). The max-cut problem can then be formulated as an unconstrained problem with a quadratic objective function and binary variables [3]. The constants \(c_{ij} = c_{ji}\), for \(i, j \in \{1, \dots , n\}\) with \(i \ne j\), represent the edge weights; when an edge does not exist, its weight is zero. The mathematical formulation (1) is as follows:

\[ \max \ \displaystyle \sum _{i = 1}^{n-1} \sum _{j = i + 1}^{n} c_{ij}(x_i \bar{x}_j + \bar{x}_i x_j), \quad x \in \{0,1\}^n. \qquad (1) \]
Let \(Q(x) = \displaystyle \sum _{i = 1}^{n-1} \sum _{j = i + 1}^{n} c_{ij}(x_i \bar{x}_j + \bar{x}_i x_j)\). Since \(\bar{x}_j = 1 - x_j\), notice that

\[ Q(x) = \displaystyle \sum _{i = 1}^{n-1} \sum _{j = i + 1}^{n} c_{ij}(x_i + x_j - 2 x_i x_j). \qquad (2) \]
Let \(x \in \{0, 1\}^n\), \(k \in \{1, 2, \dots , n\}\), and let \(x^{\bar{k}} \in \{0, 1\}^n\) be the vector such that \( x_i^{\bar{k}} = x_i\) if \(i\ne k\), and \(x_i^{\bar{k}} =1-x_i\) if \( i = k \). We are interested in studying the variation \(\varDelta _k(x)\) in the value of the objective function when we modify only the component k of a feasible vector x.
Lemma 1
Let \( \varDelta _k(x) = Q (x^{\bar{k}}) - Q(x) = \displaystyle \sum _{j \ne k} c_{kj}(1 - 2x_k)(1 - 2x_j)\), for all \(x \in \{0, 1\}^n\) and for all \(k \in \{1, 2, \dots , n\}\).
Proof
Using equation (2), only the terms of Q involving component k change when \(x\) is replaced by \(x^{\bar{k}}\), so we have

\[ \varDelta _k(x) = \sum _{j \ne k} c_{kj}\left[(1 - x_k) + x_j - 2(1 - x_k)x_j - \left(x_k + x_j - 2x_k x_j\right)\right] = \sum _{j \ne k} c_{kj}(1 - 2x_k)(1 - 2x_j). \]
\(\blacksquare \)
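The identity of Lemma 1 can be checked numerically. The sketch below, our own illustration on a small random instance, compares \(\varDelta _k(x)\) against a direct recomputation of Q for every binary vector and every k.

```python
import itertools
import random

def Q(x, c, n):
    # objective (1): sum over i < j of c_ij * (x_i*(1-x_j) + (1-x_i)*x_j)
    return sum(c[i][j] * (x[i] * (1 - x[j]) + (1 - x[i]) * x[j])
               for i in range(n) for j in range(i + 1, n))

def delta(x, c, n, k):
    # Lemma 1: variation of Q when only component k of x is flipped
    return sum(c[k][j] * (1 - 2 * x[k]) * (1 - 2 * x[j])
               for j in range(n) if j != k)

random.seed(0)
n = 6
c = [[0] * n for _ in range(n)]
for i in range(n):
    for j in range(i + 1, n):
        c[i][j] = c[j][i] = random.randint(-5, 5)

# exhaustive check: Lemma 1 agrees with the direct difference Q(x_k) - Q(x)
for bits in itertools.product([0, 1], repeat=n):
    x = list(bits)
    for k in range(n):
        xk = x[:]
        xk[k] = 1 - xk[k]
        assert delta(x, c, n, k) == Q(xk, c, n) - Q(x, c, n)
print("Lemma 1 verified on all 64 binary vectors")
```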
Let \(\displaystyle C_k = \sum _{j \ne k} c_{kj}\) and \( \displaystyle C_k(x) = \sum _{j \ne k} c_{kj}x_j\). Corollary 1 presents properties of an optimal solution that follow from Lemma 1.
Corollary 1
Let \(x^*\) be an optimal solution for formulation (1) and let \(k \in \{1, 2, \dots , n\}\). Then, the following holds:

1. If \(x^*_k = 1\), then \(C_k(x^*) \le C_k/2\); and if \(C_k(x^*) < C_k/2\), then \(x^*_k = 1\).

2. If \(x^*_k = 0\), then \(C_k(x^*) \ge C_k/2\); and if \(C_k(x^*) > C_k/2\), then \(x^*_k = 0\).
Proof
Let \(\bar{x} = (x^*)^{\bar{k}}\). We have \(Q(\bar{x}) \le Q(x^*)\) by optimality of \(x^*\). First, consider \(x^*_k = 1\). By Lemma 1 we have

\[ 0 \ge \varDelta _k(x^*) = \sum _{j \ne k} c_{kj}(1 - 2x^*_k)(1 - 2x^*_j) = -\sum _{j \ne k} c_{kj}(1 - 2x^*_j) = 2C_k(x^*) - C_k, \]

hence \(C_k(x^*) \le C_k/2\), showing the first implication. For the second claim of item 1 of Corollary 1, consider \(x^*_k = 0\). Again, by Lemma 1,

\[ 0 \ge \varDelta _k(x^*) = \sum _{j \ne k} c_{kj}(1 - 2x^*_j) = C_k - 2C_k(x^*), \]

hence \(C_k(x^*) \ge C_k/2\); by contraposition, \(C_k(x^*) < C_k/2\) implies \(x^*_k = 1\).
\(\blacksquare \)
The proof of item 2 of Corollary 1 is omitted, as it uses the same arguments as item 1.
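Corollary 1 can likewise be verified by brute force on a small instance of our own making: enumerate all cuts, take an optimum, and check that every vertex satisfies its optimality cut.

```python
import itertools
import random

def Q(x, c, n):
    # objective value using form (2): c_ij * (x_i + x_j - 2*x_i*x_j)
    return sum(c[i][j] * (x[i] + x[j] - 2 * x[i] * x[j])
               for i in range(n) for j in range(i + 1, n))

random.seed(1)
n = 7
c = [[0] * n for _ in range(n)]
for i in range(n):
    for j in range(i + 1, n):
        c[i][j] = c[j][i] = random.randint(-5, 5)

# brute-force an optimal solution over all 2^n cuts
best = max(itertools.product([0, 1], repeat=n), key=lambda x: Q(x, c, n))

# every optimal solution must satisfy the cuts of Corollary 1
for k in range(n):
    Ck = sum(c[k][j] for j in range(n) if j != k)
    Ckx = sum(c[k][j] * best[j] for j in range(n) if j != k)
    if best[k] == 1:
        assert 2 * Ckx <= Ck   # item 1: C_k(x*) <= C_k / 2
    else:
        assert 2 * Ckx >= Ck   # item 2: C_k(x*) >= C_k / 2
print("Corollary 1 holds for the brute-force optimum")
```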
3 Developed Algorithms
In this section, we describe the strategy for encoding the optimality cuts and how they are used in conjunction with the genetic algorithms, as well as the main ideas and pseudocodes of the algorithms. To exemplify the use of the optimality cuts, consider the graph G of Fig. 1. Each cut in G is a binary vector in which each position represents a vertex and its value indicates the set to which the vertex belongs: \(x_v = 0\) if \(v \in S\), and \(x_v = 1\) if \(v \in \bar{S}\). Let \(x^* = [0,0,0,1,1]\) be the binary vector that represents the cut \((S,\bar{S})\) on G in Fig. 1; this vector represents an optimal solution for G. We have \(C_4 = c_{41}+c_{42}+c_{43}+c_{45} = 36\), \(C_4(x^*) = c_{41}x^*_1+c_{42}x^*_2+c_{43}x^*_3+c_{45}x^*_5 = 6\), \(x^*_4 = 1\) and \(C_4(x^{*}) \le C_4/2\).
For this example, it is easy to see that \(x^*\) satisfies the optimality cuts. These inequalities hold for any optimal solution: if we move, for example, vertex 5 from set \(\bar{S}\) to set S, the inequalities of vertices \(\{1,2,3,4\}\) remain satisfied, but the inequality of vertex 5 becomes violated, certifying that the new cut is not optimal.
Algorithm 1 presents the procedure that uses the optimality cuts to modify a vector x. The \(C_k\) values are computed only once, since they do not change during the execution of the algorithm; this computation can therefore be done at the instance loading step, prior to Algorithm 1. The \(C_k(x)\) values, however, must be computed dynamically, according to the current vector x.
The computation of \(C_k(x)\) for each vertex k in lines (2)–(6) has complexity O(n), and can be improved to \(\varTheta (1)\) after the first iteration by storing the values in a vector and updating them after each change in the cut. Lines (7) and (10) run in constant time, and moving a vertex between sets can be done in O(n) time. Taking \(\alpha \) as a constant bounding the number of iterations of loop (1)–(11), the time complexity of Algorithm 1 is \(O(\alpha n^2)\).
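A possible reading of Algorithm 1 in Python is sketched below. It is our reconstruction under the description above, so the paper's exact control flow, tie-breaking, and stopping rule may differ; it flips any vertex that violates its optimality cut, which by Lemma 1 strictly increases the cut value, and keeps the \(C_k(x)\) values incrementally updated as suggested.

```python
import random

def apply_optimality_cuts(x, c, n, fixed=None, max_passes=1000):
    # Flip any vertex whose optimality cut (Corollary 1) is violated;
    # each such flip strictly increases the cut value (Lemma 1).
    # C_k is computed once; the C_k(x) values are updated incrementally.
    C = [sum(c[k][j] for j in range(n) if j != k) for k in range(n)]
    Cx = [sum(c[k][j] * x[j] for j in range(n) if j != k) for k in range(n)]
    for _ in range(max_passes):
        changed = False
        for k in range(n):
            if k == fixed:
                continue  # the fixed vertex keeps its value
            if (x[k] == 0 and 2 * Cx[k] < C[k]) or \
               (x[k] == 1 and 2 * Cx[k] > C[k]):
                x[k] = 1 - x[k]
                d = 1 if x[k] == 1 else -1
                for j in range(n):  # O(n) update of the C_j(x) values
                    if j != k:
                        Cx[j] += d * c[j][k]
                changed = True
        if not changed:  # all optimality cuts satisfied
            break
    return x

random.seed(2)
n = 10
c = [[0] * n for _ in range(n)]
for i in range(n):
    for j in range(i + 1, n):
        c[i][j] = c[j][i] = random.randint(1, 10)

def cut(x):
    return sum(c[i][j] for i in range(n)
               for j in range(i + 1, n) if x[i] != x[j])

x = [random.randint(0, 1) for _ in range(n)]
before = cut(x)
apply_optimality_cuts(x, c, n, fixed=0)
assert cut(x) >= before  # the procedure never worsens the cut
```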
3.1 Genetic Algorithm with Optimality Cuts
The motivation to use Genetic Algorithms (GA) is that there are many different solutions to which the cuts can be applied, and genetic operators make it possible to maintain diversity in the solutions, especially after applying the inequalities, which often produce similar solutions. This section describes the first genetic algorithm that uses the optimality cuts, referred to in the rest of this document as GA-OC. The complete version of GA-OC is presented in Algorithm 2.
Each individual in the population is a binary vector that represents a cut in the graph, and the fitness function is the cut value. The first step of GA-OC, lines (1)–(6), randomly generates the initial population, where (ps) and (nioc) are parameters for the population size and for the number of randomly generated individuals that are modified by the optimality cuts in the initial population. When applying Algorithm 1 to these individuals, one chosen vertex is fixed and does not have its value modified. New individuals are generated through crossover (Algorithm 4) and local optimization (Algorithm 3) in lines (9)–(13). Parents are selected through tournaments, each of which selects the fittest individual from a random subset of the population; the crossover function creates a new individual from a pair returned by the tournaments. The next step, lines (14)–(18), selects a subset of individuals from the population P and applies the modification procedure using the optimality cuts, which consists of fixing a random vertex and applying Algorithm 1. The same occurs in lines (19)–(23), where a different subset of P is selected for the default mutation, shown in Algorithm 5. In the end, new individuals are inserted into the population after applying the selection procedure presented in Algorithm 6.
3.2 Genetic Algorithm with Perturbation-Based on Tabu Search
The Genetic Algorithm with Perturbation Based on Tabu Search (GA-TS) pseudocode is presented in Algorithm 7. First, the initial population is created in the same way as in GA-OC, lines (1)–(6); then new individuals are computed using crossover (Algorithm 4) and a perturbation based on tabu search (Algorithm 8). We discard the default mutation procedure and, in the end, new individuals are inserted into the population after applying the selection procedure presented in Algorithm 6.
It is important to emphasize that, in Algorithm 8, the Tabu List (TL) size is dynamic, changing according to the rule presented by Galinier [12]. The condition to shuffle the solution was based on Wu [27], according to their results and some preliminary tests of our own; values above 500 iterations showed no significant difference. The Shuffle procedure in line (8) of Algorithm 8 selects 150 vertices from the current solution and moves each one to the opposite set.
Algorithm 9 follows the best-improvement principle: the candidate list of movements is built from two properties, the cut improvement obtained by moving vertex k to the other set of the partition, and whether, according to the optimality cuts, this vertex must be in the opposite set. The movement that yields the greatest cut gain is then performed. This procedure is called by Algorithm 8 until the maximum number of iterations is reached, always updating the incumbent, i.e., the best solution found so far.
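The candidate selection just described might be sketched as follows; this is our reconstruction, and tabu tenure handling, aspiration, and tie-breaking are omitted since they are not fully specified in the text.

```python
def best_move(x, c, n, tabu_list):
    # Among non-tabu vertices whose optimality cut (Corollary 1) says they
    # sit on the wrong side, return the flip with the largest gain
    # delta_k (Lemma 1), or (None, None) if no candidate exists.
    best_k, best_gain = None, None
    for k in range(n):
        if k in tabu_list:
            continue
        Ck = sum(c[k][j] for j in range(n) if j != k)
        Ckx = sum(c[k][j] * x[j] for j in range(n) if j != k)
        violated = (x[k] == 0 and 2 * Ckx < Ck) or \
                   (x[k] == 1 and 2 * Ckx > Ck)
        if not violated:
            continue
        gain = sum(c[k][j] * (1 - 2 * x[k]) * (1 - 2 * x[j])
                   for j in range(n) if j != k)
        if best_gain is None or gain > best_gain:
            best_k, best_gain = k, gain
    return best_k, best_gain

# tiny demo: triangle with unit weights and the empty cut x = [0, 0, 0]
c = [[0, 1, 1], [1, 0, 1], [1, 1, 0]]
k, gain = best_move([0, 0, 0], c, 3, tabu_list=set())
print(k, gain)  # 0 2: flipping vertex 0 gains 2
```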
The choice of a perturbation based on tabu search (TS) in conjunction with the GA stems from the decision to spend more time optimizing the newly generated individuals, allowing changes only when they improve the cut value. The main difference between GA-TS and GA-OC is that GA-TS performs a stronger search to improve the new individuals.
4 Computational Results
In this section, we describe the computational experiments that were performed to test the efficiency of the two heuristics developed in this work. The algorithms were implemented using the C++11 programming language and are available at https://github.com/cvaraujo/max-cut-hybrid-ga-ts. The experiments were carried out using a machine with Intel(R) Xeon(R) Silver 4114 (2.20 GHz) \(\times \) 10 with 32 GB RAM and Linux Ubuntu 16.04 64 bits operating system using a sample of 54 instances. This set of benchmark instances, called G set, is available at http://www.grafo.etsii.urjc.es/optsicom/maxcut and it was generated by Helmberg et al. [13].
The operator and parameter values were set through empirical analysis: we tested the algorithms with different values for some parameters and the results did not present significant differences. We observed that it is preferable for GA-TS to manage fewer individuals per iteration, since it spends more time optimizing each one; for GA-OC, creating more individuals and renewing part of the population promotes the variety and quality of the solutions that are optimized, either by the inequalities or by the local optimization procedure. The operator and parameter values for GA-OC and GA-TS are: Representation: GA-OC and GA-TS use a binary vector representation; Population: the initial population size for GA-OC is 300, with 10\(\%\) NIOCs, and 50 new individuals are created in each generation; in GA-TS, the initial population size is 50, with 10\(\%\) NIOCs and one new individual per iteration; Tournament: for both algorithms, tournaments use 4 randomly selected individuals and return the best of them; Crossover: for both algorithms, the crossover is uniform with two individuals; when the parents differ on a vertex value, the child has a chance of inheriting from the fittest parent, and the procedure generates only one child; Mutation: in GA-OC, mutation is applied to \(20\%\) of the population, excluding the fittest, and each gene of a selected individual has a probability of \(10\%\) of changing; GA-TS does not use mutation; TS Iterations: the number of iterations is \(10^6\); Stopping Criterion: a time limit of 1800 s for both algorithms.
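The tournament and crossover operators described above could look like the sketch below; the probability of inheriting from the fittest parent is not quantified in the text, so the `bias` value is an assumption of ours.

```python
import random

def tournament(pop, fitness, k=4):
    # Tournament selection: return the fittest of k random individuals.
    return max(random.sample(pop, k), key=fitness)

def uniform_crossover(p1, p2, f1, f2, bias=0.7):
    # Uniform crossover producing one child. Where the parents disagree,
    # the gene comes from the fitter parent with probability `bias`
    # (the exact probability is not stated in the text).
    fit, other = (p1, p2) if f1 >= f2 else (p2, p1)
    return [a if (a == b or random.random() < bias) else b
            for a, b in zip(fit, other)]

pop = [[0, 0, 0, 0], [1, 1, 0, 0], [1, 1, 1, 1]]
winner = tournament(pop, fitness=lambda ind: sum(ind), k=3)

child = uniform_crossover([0, 0, 1, 1], [0, 1, 1, 0], f1=40, f2=35)
assert child[0] == 0 and child[2] == 1  # agreeing genes are preserved
```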
To show the difference in the quality of individuals created with the use of optimality cuts, we selected a sample of 12 instances, 3 from each vertex-set size, and created a population of 50 individuals through Algorithm 1, referred to here as NIOC, and another population of 50 Randomly Generated Individuals, referred to here as RCI. In the RCI population, the local optimization procedure (Algorithm 3) was also applied to all individuals. The results are in Table 1: the first and second columns (Graph, n) give the instance name and the number of vertices, and for each algorithm (NIOC, RCI, and RCI with local optimization, RCI-LO) there are two columns (avg, dev) with the average and standard deviation of the cut values, respectively. The highest averages are in bold.
In all 12 sample instances, the NIOC population consistently exhibits higher mean values compared to RCI and RCI-LO. This superiority is also observed in the average standard deviation, except for instance G46 where NIOC did not achieve the best result. The results of RCI-LO demonstrate the effectiveness of the local optimization procedure in improving the quality of randomly generated individuals across all tested instances. These findings highlight the positive impact of incorporating optimality cuts on the overall output quality of the algorithm, enabling the generation of solutions with higher cut values within a shorter timeframe. However, it is important to acknowledge that relying solely on optimality cuts for optimization may lead to individuals getting trapped in local maxima more quickly than other methods like RCIs. To address this limitation, we have implemented additional measures such as an alternative local optimization procedure, a high mutation rate, and a preference for maximum diversity during the selection process. These measures aim to mitigate the risk of premature convergence and enhance the algorithm’s exploration capabilities, enabling it to search for more optimal solutions across a wider solution space.
The second experiment compares our two algorithms, GA-OC and GA-TS, with a standard genetic algorithm implementation. We conducted 30 runs of GA-OC, GA-TS, and the standard GA for each of the 54 instances. Table 3 presents the average, standard deviation, and minimum values obtained by each algorithm. The hybrid approaches outperform the default version of the GA on all instances. Considering only the two versions using the optimality cuts, GA-OC outperforms GA-TS with respect to the averages and the lowest values: in the min column, GA-OC has the higher value in 28 instances, ties in 6, and is lower than GA-TS in the remaining 20; in the avg column, GA-OC is higher in 29 instances, there are 4 ties, and GA-TS is higher in the other 21; in the dev column, GA-OC shows less variation, with a smaller standard deviation than GA-TS in 30 instances, zero deviation for both algorithms in 4 instances, and larger values than GA-TS in the remaining 20. Based on these results, the probability associated with the t-test was calculated over the mean values (columns 4 and 7) obtained by GA-OC and GA-TS across the 54 instances, using a two-tailed distribution and a 95% confidence interval as parameters. The p-value of 0.009 indicates that the difference between the means of the algorithms is statistically significant.
We compared the results of our algorithms with the most effective heuristics currently in the literature. It is important to emphasize that the conditions and configuration parameters of our algorithms were set under different circumstances than those reported in the literature for the compared heuristics, such as the programming languages used, termination criteria, and hardware configuration. Therefore, no implementation of the algorithms from the compared papers was conducted. Only the results presented by the authors were used for the purpose of comparison with our algorithms.
Table 4 compares our approaches with 6 state-of-the-art algorithms. The first two columns contain the instance identifier (Graph) and the number of vertices (n). Columns (GA-OC) and (GA-TS) show the best values obtained by our algorithms, and the fifth to tenth columns give the best results found by the reference algorithms. The “-” symbol means that no value was made available for that instance, and values in bold font are the best-known results. Table 2 shows the summary of the comparison with the best result of our algorithms: rows 2, 3, and 4 for GA-OC in Table 2 denote, respectively, the number of instances in which our algorithm obtains better, equal, and worse cut values than the corresponding reference algorithm.
From Table 2, we observe that the GA-OC algorithm outperforms 5 out of 6 reference algorithms in terms of the number of wins, i.e., the number of instances in which each algorithm achieved the best cut value. Compared to the SILS method, GA-OC won in instances G37 and G51, with G51 being the instance where it improved the state of the art, and achieved the same results in 32 other instances. For the 20 remaining instances, the \(gap = \frac{STA-OA}{STA} \times 100\) between the best of our algorithms (OA) and the state of the art (STA) is at most \(1\%\): one instance has a gap of \(1\%\), while for the other 19 it is below \(0.3\%\). On the other hand, the GA-TS algorithm outperforms 3 out of 6 reference algorithms. These results confirm the effectiveness of our genetic algorithms, which use the proposed optimality cuts to deliver high-quality solutions for the 54 instances of the G set.
5 Conclusion
This work presented a new set of optimality cuts and two heuristics based on genetic algorithms that use these inequalities in their composition; as far as we know, nothing of the kind had been reported for the Max-Cut problem. The analysis on benchmark set G considered the best cut value obtained by our heuristics in each instance: we improved the best-known value for instance G51 and remained strongly competitive with the current state-of-the-art algorithms, with a maximum gap of \(1\%\) to the best-known values over all instances. Although the experiments consider a limited execution time, increasing it would not have a significant impact on the results provided by our algorithms. We thus conclude that the use of the proposed optimality cuts in GA and TS yields a good improvement in obtaining solutions to the Max-Cut problem. For future work, we hope to find new ways to explore the optimality cuts presented here and apply them to different heuristics and meta-heuristics, such as Simulated Annealing (SA) and Scatter Search (SS). It is also possible to use these optimality cuts in math-heuristics, approaches that combine mathematical models and heuristics. Finally, improvements can be sought in the cuts themselves, searching for more efficient and faster ways to apply them.
References
Agrawal, R., Rajagopalan, S., Srikant, R., Xu, Y.: Mining newsgroups using networks arising from social behavior. In: Proceedings of the 12th international conference on World Wide Web, pp. 529–535 (2003)
Alidaee, B., Sloan, H., Wang, H.: Simple and fast novel diversification approach for the UBQP based on sequential improvement local search. Comput. Ind. Eng. 111, 164–175 (2017)
Barahona, F.: The max-cut problem on graphs not contractible to K5. Oper. Res. Lett. 2(3), 107–111 (1983)
Barahona, F., Grötschel, M., Jünger, M., Reinelt, G.: An application of combinatorial optimization to statistical physics and circuit layout design. Oper. Res. 36(3), 493–513 (1988)
Bramoullé, Y.: Anti-coordination and social interactions. Games Econom. Behav. 58(1), 30–49 (2007)
Burer, S., Monteiro, R., Zhang, Y.: Rank-two relaxation heuristic for max-cut and other binary quadratic problems. SIAM J. Optim. 12(2), 503–521 (2001/2002)
Chui, K.T., Gupta, B.B., Vasant, P.: A genetic algorithm optimized RNN-LSTM model for remaining useful life prediction of turbofan engine. Electronics 10(3), 285 (2021)
De Simone, C., Diehl, M., Jünger, M., Mutzel, P., Reinelt, G., Rinaldi, G.: Exact ground states of Ising spin glasses: new experimental results with a branch-and-cut algorithm. J. Stat. Phys. 80(12), 487–496 (1995)
De Simone, C., Rinaldi, G.: A cutting plane algorithm for the max-cut problem. Optim. Methods Softw. 3(13), 195–214 (1994)
Dunning, I., Gupta, S., Silberholz, J.: What works best when? A systematic evaluation of heuristics for Max-Cut and QUBO. INFORMS J. Comput. 30(3), 608–624 (2018)
Facchetti, G., Iacono, G., Altafini, C.: Computing global structural balance in large-scale signed social networks. Proc. Natl. Acad. Sci. 108(52), 20953–20958 (2011)
Galinier, P., Boujbel, Z., Fernandes, M.: An efficient memetic algorithm for the graph partitioning problem. Annals OR 191, 1–22 (2011)
Helmberg, C., Rendl, F.: A spectral bundle method for semidefinite programming. SIAM J. Optim. 10(3), 673–696 (2000)
Karp, R.M.: Reducibility among combinatorial problems. In: Miller, R.E., Thatcher, J.W., Bohlinger, J.D. (eds.) Complexity of Computer Computations, pp. 85–103. Springer, Cham (1972). https://doi.org/10.1007/978-1-4684-2001-2_9
Kim, S.H., Kim, Y.H., Moon, B.R.: A hybrid genetic algorithm for the MAX CUT problem. In: Proceedings of the 3rd Annual Conference on Genetic and Evolutionary Computation, pp. 416–423. Morgan Kaufmann Publishers Inc. (2001)
Kim, Y.H., Yoon, Y., Geem, Z.W.: A comparison study of harmony search and genetic algorithm for the MAX-CUT problem. Swarm Evol. Comput. 44, 130–135 (2019)
Kochenberger, G.A., Hao, J.K., Lü, Z., Wang, H., Glover, F.: Solving large scale Max Cut problems via tabu search. J. Heuristics 19(4), 565–571 (2013)
Kolli, N., Narayanaswamy, B.: Influence maximization from cascade information traces in complex networks in the absence of network structure. IEEE Trans. Comput. Soc. Syst. 6(6), 1147–1155 (2019)
Krislock, N., Malick, J., Roupin, F.: Improved semidefinite bounding procedure for solving Max-Cut problems to optimality. Math. Program. 143(1–2), 61–86 (2014)
Ma, F., Hao, J.K.: A multiple search operator heuristic for the max-k-cut problem. Ann. Oper. Res. 248(1–2), 365–403 (2017)
Martí, R., Duarte, A., Laguna, M.: Advanced scatter search for the Max-Cut problem. INFORMS J. Comput. 21(1), 26–38 (2009)
Mirjalili, S.: Genetic algorithm. In: Evolutionary Algorithms and Neural Networks. SCI, vol. 780, pp. 43–55. Springer, Cham (2019). https://doi.org/10.1007/978-3-319-93025-1_4
Myklebust, T.G.: Solving maximum cut problems by simulated annealing. arXiv preprint (2015)
Pandey, A., Banerjee, S.: Test suite optimization using firefly and genetic algorithm. Int. J. Softw. Sci. Comput. Intell. (IJSSCI) 11(1), 31–46 (2019)
Rendl, F., Rinaldi, G., Wiegele, A.: A branch and bound algorithm for Max-Cut based on combining semidefinite and polyhedral relaxations. In: Fischetti, M., Williamson, D.P. (eds.) IPCO 2007. LNCS, vol. 4513, pp. 295–309. Springer, Heidelberg (2007). https://doi.org/10.1007/978-3-540-72792-7_23
Sun, L., Cheng, X., Liang, Y.: Solving job shop scheduling problem using genetic algorithm with penalty function. Int. J. Intell. Inf. Process. 1(2), 65–77 (2010)
Wu, Q., Hao, J.-K.: A memetic approach for the Max-Cut problem. In: Coello, C.A.C., Cutello, V., Deb, K., Forrest, S., Nicosia, G., Pavone, M. (eds.) PPSN 2012. LNCS, vol. 7492, pp. 297–306. Springer, Heidelberg (2012). https://doi.org/10.1007/978-3-642-32964-7_30
Soares, P.L.B., Araújo, C.V.D. (2023). Genetic Algorithms with Optimality Cuts to the Max-Cut Problem. In: Naldi, M.C., Bianchi, R.A.C. (eds) Intelligent Systems. BRACIS 2023. Lecture Notes in Computer Science(), vol 14197. Springer, Cham. https://doi.org/10.1007/978-3-031-45392-2_2
Print ISBN: 978-3-031-45391-5
Online ISBN: 978-3-031-45392-2