1 Introduction

Chemical Batch Scheduling (BS) has been studied since the early 1990s, particularly under its inherent uncertainties and constraints, such as fluctuating demands and limited capacity. Several studies have investigated the application of Constrained Multiobjective Optimization (CMO) techniques to scheduling problems [6, 10, 11, 14, 17]. Multiobjective Evolutionary Algorithms (MOEAs) are frequently employed due to their suitability for parallel processing and their adaptability to a wide range of problem domains [20]. Numerous optimization opportunities in pharmaceutical manufacturing, particularly in process planning and inventory optimization to minimize costs and reduce revenue losses, remain unexplored [3, 10]. This is evidenced by the case of the pharmaceutical Enbrel, whose manufacturer experienced over 200 million USD in lost revenue [10]. The challenge of real-world constrained non-linear environments has led to the integration of problem-specific heuristics and operators, including local searches, to enhance the exploration and exploitation of feasible solutions, thereby improving solution quality and convergence. Despite their high computational cost, Monte Carlo Simulations (MCS) are employed to evaluate models against the expected probability distributions.

In previous work [13], we proposed a constrained multiobjective evolutionary approach, based on NSGA-II, to deal with the constrained, non-linear pharmaceutical batch scheduling problem. We improved the initialization and mutation operators. The heuristic-based initialization increases the diversity of the initial population and incorporates heuristics for product sequences to minimize setup times. The enhanced mutation operator consists of four successive operations: changing products, single batch mutation (addition or removal), mutating the number of genes (addition or removal), and swapping gene positions. The proposed operators significantly improved the evaluated metrics \(IGD+\), E, and NS compared to the reference model.

This work proposes a selection operator with constraint partitioning to enhance diversity and exploration of the search space, and an improved mutation operator with a greedy local search strategy for BS to improve the exploitation of favorable regions. The selection divides the population into two subsets to be evaluated for reinsertion using constrained and unconstrained criteria. For costly fitness calculations (as in the case of MCS), maintaining the best invalid individuals can lead to better valid individuals. The mutation operator incorporates a greedy local search strategy for adjusting batch numbers when new genes are added, leveraging information from the population about the average batch count for the selected product. It iteratively explores batch increments or decrements based on the constraint and a ratio criterion. Experiments were carried out to compare the enhanced MOEA with the previous approach (reference model), using performance metrics such as Hypervolume (Hv), Inverted Generational Distance Plus (IGD+), Error Rate (E), Coverage of Two Solution Sets (CS), and the Number of valid Solutions (NS). We evaluated two models: the ps-re model integrates the partitioned selection with constraints for reinsertion, while the bl-bat model additionally incorporates the greedy local search in the mutation operator. Both models demonstrated significant improvements compared to the reference model (ini-heu). The ps-re model improved E by 10.4% and IGD+ by 34.2%, while bl-bat achieved even better results, reducing E by 13.8% and improving IGD+ by 37.8%.

2 Problem Definition

Multiobjective Optimization Problems (MOPs) aim to find optimal solutions \(x_i\) for a vector of J objectives \(f_j\). The complexity arises when comparing solutions, as one solution may outperform another on different objectives. In this regard, the concepts of Pareto dominance and the Pareto optimal set are typically used for ranking [19]. Assuming maximization, a solution \(x_1\) dominates \(x_2\) (or \(x_2\) is dominated by \(x_1\)) if it is no worse than \(x_2\) on all J objectives (\(f_j(x_1) \ge f_j(x_2)\)) and strictly better on at least one \(j \in J\) (\(f_j(x_1) > f_j(x_2)\)). Otherwise, \(x_1\) does not dominate \(x_2\). A set of non-dominated solutions, known as the Pareto optimal set \(P^*\), is such that no solution within the search space dominates any member of \(P^*\) [4]. MOEAs aim to identify \(P^*\) or a subset of it, ensuring a close approximation to the true Pareto front.

Fig. 1.

Decoding of chromosome.

A Production Plan (PP) specifies a sequence and quantity of product batches, considering the costs of product transitions on the production line. Using real-world data from [10], the production of two batches each of products A and B spans 91 days. Upstream Processing (USP) for product A, including inoculation and seed phases, takes 59 days, followed by 7 days for each Downstream Processing (DSP) batch. A 10-day downtime is allocated for changeovers, with product B’s USP potentially overlapping with A’s production (Fig. 1). Demand uncertainty impacts inventory metrics, therefore MCS is used to simulate demand and evaluate a PP. Figure 2 illustrates the decoding of a PP, which yields the produced quantity \(prod_{(p,m)}\) for product p in month m. Demand simulations \(SD_{(p,m)}^{s}\), generated from a Triangular Distribution, are used to calculate the Backlog \(B(prod_{(p,m)}, SD_{(p,m)}^{s})\) and Deficit \(Def(prod_{(p,m)}, SD_{(p,m)}^{s})\) metrics. The process is repeated S times and the median of the outcomes is assessed. Monthly availability (A) is determined by adding inventory (I) to production (P). If A exceeds demand (D), the full demand is sold (S), no Backlog (B) occurs, and the sold amount is deducted from A to form the new I. Otherwise, all of A is sold and the unmet demand is accounted as B. Inventory Deficit (De) is the shortfall from the Inventory Target. In this work, we investigate a CMOP applied to pharmaceutical BS with product demands simulated via Monte Carlo, using data based on a pharmaceutical manufacturing case extracted from [10], formalized as follows:

$$\begin{aligned} &\text {maximize: } f_1(p,m) = \sum _{p,m}^{P,M} \text {prod}_{(p,m)}\end{aligned}$$
(1)
$$\begin{aligned} &\text {minimize: } f_2(p,m,s)=\text {median}\{ \sum _{p,m}^{P,M} \text {Def}(\text {prod}_{(p,m)}, SD_{(p,m)}^{s})\}\end{aligned}$$
(2)
$$\begin{aligned} &\text {subject to: } e(p,m,s)=\text {median}\{~\sum _{p,m}^{P,M} B(prod_{(p,m)}, SD_{(p,m)}^{s})\}\le 0 \end{aligned}$$
(3)
Fig. 2.

Every month, the Available (Di) is calculated as the Stock (E) plus the Produced (P). If Di is greater than the Demand (D), the demand is served/sold (V), there are no unmet sales and hence no backlog (B), and V is deducted from E. Otherwise, all of Di becomes V and the unmet sales are accounted for as B. Stock Deficit (De) is the strategic monthly stock target minus E.

The first objective \(f_1(p,m)\) (Eq. 1), to be maximized, is the sum of the produced quantity \(prod_{(p,m)}\) in kilograms (kg) (illustrated as Produced in Fig. 2) for product p in month m, with P being the total of four products and M the total of 36 months of planning. This value is calculated by decoding the PP considering the process data and changeover costs [10]. The second objective \(f_2(p,m,s)\) (Eq. 2), to be minimized, is the median over the S simulations (\(s\le S\), with S being the total number of MCS) of the accumulated monthly deficits (illustrated as Deficit in Fig. 2). The constraint (Eq. 3) states that the median over the S values of the total backlog, i.e., the quantity of unfulfilled orders (illustrated as Backlog in Fig. 2), must not exceed 0 kg. The backlog can accumulate over months and, once stock becomes sufficient, the backlogged amount is sold (Table 1).
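The monthly bookkeeping described above can be sketched as follows. This is an illustrative reading of the rules, not the authors' implementation; the function name and signature are our own, and backlog accumulation across months is left to the caller.

```python
def month_update(inventory, produced, demand, target):
    """One month of the availability/backlog/deficit bookkeeping.

    Returns (sold, backlog, deficit, new_inventory)."""
    available = inventory + produced           # A = I + P
    if available >= demand:
        sold = demand                          # full demand served
        backlog = 0.0
        inventory = available - sold           # leftovers carried over
    else:
        sold = available                       # everything on hand is sold
        backlog = demand - available           # unmet orders become backlog
        inventory = 0.0
    deficit = max(target - inventory, 0.0)     # shortfall vs. inventory target
    return sold, backlog, deficit, inventory
```

Repeating this update over the M months and S simulated demand streams, and taking medians of the accumulated backlog and deficit, yields the quantities in Eqs. 2 and 3.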

Table 1. Process Data. Source: Jankauskas et al. [10].

The exploration of the problem space is restricted by its size and the cost of fitness evaluation. An individual is defined by a sequence of product numbers (up to 4) and a variable number of batches per gene, bounded by minimum and maximum values per product. Specifically, there are, on average, 43 batch-number options per product (as for product D). Given the experimental limit of 17 genes per chromosome, the permutation with repetition of the products yields approximately \(2.29 \times 10^{10}\) variations. Furthermore, the permutation of the numbers of batches adds approximately \(1.56 \times 10^{26}\) variations. Consequently, the composition of these two permutations yields a total of approximately \(3.56 \times 10^{36}\) distinct individuals.
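The product-sequence count quoted above can be reproduced by summing the permutations with repetition over all chromosome lengths (a sketch; the batch-count figure additionally depends on the per-product batch ranges, which are not fully listed here):

```python
# Chromosomes of 1..17 genes, each gene holding one of the 4 products:
# permutation with repetition, summed over all lengths.
MAX_GENES, N_PRODUCTS = 17, 4
sequences = sum(N_PRODUCTS ** k for k in range(1, MAX_GENES + 1))
print(f"{sequences:.2e}")  # prints 2.29e+10
```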

3 Literature Review

Local search strategies are commonly employed in optimization problems, especially in cases with fast evaluations. We review MOEAs applied to CMO scheduling, local search strategies for constrained BS, and the reference model used in our approach.

Zhen et al. [18] proposed two distinct selection strategies at different evolutionary stages to balance global and local search efforts. Wang et al. [16] explored constrained optimization problems with computationally expensive evaluations, proposing an interior-point method-based local search. At each iteration, a new point is generated using gradient descent until a stopping criterion is met, resulting in the most promising candidate solution. Local search is integrated into a surrogate model to reduce evaluation time, demonstrating significant improvements over benchmarks, even in problems with nonlinear inequality constraints. Habib et al. [7] developed an evolutionary algorithm for constrained problems with high evaluation costs, employing a local search method to refine individuals that violate constraints. A limited number \( n \) of solutions with the smallest Euclidean distances to the reference point are selected, with \( n \) dynamically defined based on the current number of invalid solutions. Cai et al. [2] investigated local search strategies in GAs with a surrogate-based trust region. The update mechanism is based on a partitioning strategy of the adjacent region to maintain population diversity and evaluate the expected improvement. Multiple predictive models within the mechanism confer robustness to the optimization, proving efficient in high-dimensional problems. Han et al. [8] developed a Multiobjective Genetic Algorithm (MOGA), named DEMO, for task scheduling problems considering setup times with objectives focused on total time and energy consumption. They employed a heuristic for population initialization, self-adaptive evolutionary operators to attain high-quality solutions, and a local search strategy to enhance the algorithm’s exploratory capabilities. The local search operator, based on gene insertion, selects individuals for local search with a certain probability, creating a uniformly distributed number of neighbors. 
The best neighbor is then chosen based on its distance to a reference point. Simulation results indicate that DEMO outperforms three state-of-the-art algorithms in terms of hypervolume metrics, coverage rate, and distance. Azadeh et al. [1] integrated into NSGA-II a population initialization strategy that eliminates non-occurring operations, crossover operators combined with simulated annealing, and a mutation operator with local search for neighbor evaluation. Jia et al. [11] developed a History-Based Evolutionary Algorithm using decomposition and local competition applied to scheduling with three objectives: total time, delay penalties, and energy consumption. A competitive local search is proposed (evaluating gene position swaps, gene insertion based on time optimization) that adjusts task positions to expedite convergence. The method includes an elitism-based reinsertion strategy, retaining half of the population and generating the remaining solutions based on historical information and characteristics extracted from the best solutions, ensuring only valid individuals are produced.

This work uses the case study presented in [10]. The authors developed a MOEA based on the NSGA-II algorithm [5] in which each individual is a variable-length vector representing a production plan with the sequence of products and their corresponding batch numbers. The population starts with 100 individuals, each consisting of a single gene representing one batch of one of the four products. \(f_1\) (Eq. 1) is calculated by decoding the individual, while 1000 demand simulations (\(SD_{(p,m)}^{s}\)) are used to calculate \(f_2\) and e. Parent selection for crossover uses binary tournament selection: the winner is defined first by the backlog constraint; in case of a tie, a criterion similar to NSGA-II is used, with the population categorized into non-dominated fronts and ranked accordingly; if another tie occurs, the crowding distance is used [5]. Mutation comprises four sequential operations, applicable per gene (product mutation, positive batch increment, negative batch decrement) and/or per chromosome (gene position swap). A new random gene is always added to every generated offspring, so individual sizes grow from the initial population. In the reinsertion, individuals are sorted and selected based on the constraint value, i.e., those with the smallest constraint violations are chosen; in case of a tie, the ranking of the non-dominated front and the crowding distance are applied [5]. At the end of every 1000 generations (defined as a run), the feasible non-dominated solutions are stored in an external file. This process is iterated 50 times (runs), resulting in a file containing the feasible non-dominated solutions from all 50 runs. The non-dominated front of the external file is then recalculated, yielding the final solution of one execution.

In previous work [13], a MOEA approach, called ini-heu, was proposed as an improvement over the earlier work [10]. This approach increases the diversity of the population through initialization strategies, incorporates an optimized heuristic-based mutation operator, and explores various crossover types to enhance solution quality. Despite the improvements in the evaluated metrics, some areas for improvement were identified: (i) a high error rate, even with a high number of generations; (ii) low diversity in the final population, where individuals are very close to each other; and (iii) a large, complex problem space that adds difficulty to the search for feasible solutions.

4 Proposed Model

In this section, the proposed strategies for enhancing the genetic operators are detailed. Two models are investigated: the first, named ps-re, improves upon the reference model ini-heu [13] by incorporating the partitioned selection with constraints for reinsertion, while also reducing the number of generations and increasing the crossover rate. The second, named bl-bat, extends the ps-re model by integrating into the mutation operator a greedy local search strategy that determines the number of batches for new genes.

4.1 Selection Partitioned by Constraints for Reinsertion

We propose a selection partitioned by constraints for reinsertion, based on distinct evaluation criteria (constrained and unconstrained), aiming to enhance population diversity. The proposed partitioned reinsertion strategy divides the population set \( P \) of size \( N \) into two subsets \( P_{1} \) and \( P_{2} \), selected based on the original (constrained) and NSGA-II (unconstrained) criteria, respectively. The first subset \( P_{1} \), of size \( N \cdot pRe \), is selected by first prioritizing the constraint; in case of a tie, the NSGA-II criterion is applied, evaluating first the Pareto front and, in case of a further tie, the crowding distance. The second subset \( P_{2} \), of size \( N \cdot (1-pRe) \), is chosen exclusively by the NSGA-II criterion.

During the reinsertion process, the Pareto fronts of all solutions are calculated, regardless of the constraints. The set \( P_{1} \) is filled by selecting \( N \cdot pRe \) solutions according to the ordering of the original criterion. The remaining solutions are then ordered using the NSGA-II criterion. For example, for a population with \( N=100 \), the dominance fronts are calculated disregarding the constraints. With \( pRe = 60\% \), each generation selects 60 individuals following the ordering criterion that prioritizes the smallest constraint violations and, in case of a tie, the NSGA-II criteria. The remaining 40 individuals are chosen from the rest of the current population, based solely on the NSGA-II criteria.
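A minimal sketch of the partitioned reinsertion follows, assuming each individual stores its constraint violation (median backlog), its Pareto rank computed while ignoring the constraint, and its crowding distance. The data layout and names are our assumptions, not the paper's code.

```python
def partitioned_reinsertion(population, n, p_re):
    """Keep n survivors: n*p_re by the constrained criterion,
    the remainder by the plain NSGA-II criterion."""
    # Sort keys: smaller violation/rank is better, larger crowding is better.
    constrained = lambda ind: (ind["violation"], ind["rank"], -ind["crowding"])
    nsga2 = lambda ind: (ind["rank"], -ind["crowding"])

    n1 = round(n * p_re)
    ordered = sorted(population, key=constrained)
    p1, rest = ordered[:n1], ordered[n1:]       # P1: constraint first
    p2 = sorted(rest, key=nsga2)[:n - n1]       # P2: unconstrained criterion
    return p1 + p2
```

Because P2 is filled without looking at the constraint, promising infeasible individuals survive alongside the feasible ones, which is the diversity mechanism described above.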

4.2 Mutation with Greedy Local Search

An enhancement of the Heuristic Mutation operator used in [13] is proposed. For non-linear constrained problems, the exploitation of neighbors can improve the quality and speed of convergence. Thus, we incorporate a greedy local search strategy for the number of batches when a new gene is added. The operator is illustrated in Fig. 3 and its pseudocode is presented in Algorithm 1.

Fig. 3.

Mutation operator with 4 operations: product mutation with probability pMutP, batch mutation with pAddB, number of genes with pAddG and position swap with pSwapG. The local batch search occurs if a new gene is added.

Algorithm 1.

When a new gene is added, its product is randomly selected, while the number of batches is determined by a greedy local search. The average number of batches in the current population for the selected product is computed, and a percentage of this average (pBM%) defines the step used to evaluate the individual under an increase or a decrease in the number of batches.

The individual with the original number of batches is then compared against the proposed variations with incremented and decremented batch counts, considering the constraint and a ratio criterion: first, the smallest constraint violation is preferred; in case of a tie, the ratio of the objectives \(\frac{f_{2}(x)\cdot (-1)}{f_{1}(x)}\) is evaluated, aiming at minimization.

If neither variation of the number of batches improves upon the average number, the search terminates and the previous number of batches is kept.

This procedure is repeated with the same increment or decrement operation selected in the first iteration until the maximum number of iterations (MI) is reached or the established limits on the number of batches are met.
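The iterative search described above can be sketched as follows. This is a hedged illustration: `evaluate` is a hypothetical callback returning a `(constraint, ratio)` tuple ordered so that smaller is better, mirroring the comparison criterion; `lo`/`hi` are the per-product batch limits.

```python
def greedy_batch_search(batches, step, evaluate, max_iter, lo, hi):
    """Start from the population-average batch count; probe +step and
    -step once, then greedily repeat the winning direction."""
    best, best_score = batches, evaluate(batches)
    # First iteration decides the direction (increment or decrement).
    candidates = [b for b in (batches + step, batches - step) if lo <= b <= hi]
    direction = None
    for cand in candidates:
        score = evaluate(cand)
        if score < best_score:
            best, best_score, direction = cand, score, cand - batches
    if direction is None:              # neither neighbor improves: stop
        return best
    # Keep moving in the chosen direction until no gain, bounds, or MI.
    for _ in range(max_iter - 1):
        cand = best + direction
        if not (lo <= cand <= hi):
            break
        score = evaluate(cand)
        if score >= best_score:
            break
        best, best_score = cand, score
    return best
```

Tuple comparison in Python evaluates the constraint first and the ratio only on ties, matching the two-level criterion of the operator.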

5 Experiments and Results

In this section, we describe the experimental setup and discuss the results through a comparative analysis and a hypothesis test analysis.

5.1 Experimental Setup

Let P be defined as the non-dominated front obtained from the final population of an algorithm execution. Given the stochastic nature of MOGAs, we conducted 30 executions for each evaluated model. The algorithm was implemented in Java and executed on machines with the following configuration: CPU Intel Core i7; 8GB RAM; OS Windows 10.

Metrics. In this section, we define the evaluated metrics: hv, NS, E, IGD+, and CS. Since the set \(P^*\) is not known, it was experimentally approximated from 1424 executions of the MOGA. The approximated \(P^*\) set comprises 32 individuals, indicating the problem’s complexity and the challenge of finding solutions that satisfy the backlog constraint. To ensure comparability in the results analysis, the same demand set and experimental environment were used for all models.

To evaluate the significance of the superiority between the approaches for each metric, a hypothesis test at a 5% significance level was used. For distributions that approximate a normal distribution, the T-test [12] was used; otherwise, the Mann-Whitney U test [15] was used. The hypervolume metric (hv) quantifies the portion of the objective space dominated by the non-dominated solution set. For a problem with two objectives, hv represents the area dominated by the set P relative to the total area of the search space. The formula is as follows:

$$\begin{aligned} hv = \sum _{i=1}^{|P|} \prod _{j=1}^{M} \left| f_j(s_i) - f_j(w) \right| \end{aligned}$$
(4)

where w is the reference point, taken as the worst solution found in P, and \(f_j(s_i)\) is the value of objective j for the non-dominated solution \(s_{i}\).
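Summing per-solution boxes can double-count overlapping regions; for the two-objective case the dominated area can be computed exactly with a simple sweep. The sketch below assumes both objectives are cast to minimization (in this problem \(f_1\) would first be negated):

```python
def hv_2d(points, ref):
    """Exact area dominated by `points` (both objectives minimized)
    up to the reference point `ref`."""
    # Keep only points that strictly improve on the reference point.
    pts = sorted(p for p in points if p[0] < ref[0] and p[1] < ref[1])
    area, prev_y = 0.0, ref[1]
    for x, y in pts:                 # x ascending
        if y < prev_y:               # point contributes a new slice
            area += (ref[0] - x) * (prev_y - y)
            prev_y = y
    return area
```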

The number of solutions (NS) is a problem-related metric defined as the number of valid non-dominated solutions \(s_i\) found in P that meet the problem constraint (backlog = 0). It provides a measure of the diversity of the set found. However, a high NS does not necessarily imply better solution quality, as a good solution may reduce the NS by dominating multiple inferior solutions.

The error rate (E) represents the percentage of solutions in P that do not belong to the Pareto optimal set \(P^*\), and is formulated as:

$$\begin{aligned} E = \frac{\sum \limits _{i=1}^{|P|} e_{i}}{|P|} \end{aligned}$$
(5)

where \(e_{i} = 1\) if solution \(i \in P\) is dominated by any solution in \(P^*\), and \(e_{i} = 0\) otherwise. E is thus the fraction of elements of P that are not part of the Pareto set \(P^*\).
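Equation 5 translates directly into code. The `dominates` helper below assumes minimization on every objective (an illustrative choice, since the actual problem mixes maximization and minimization):

```python
def dominates(a, b):
    """Weak Pareto dominance, assuming minimization on every objective."""
    return all(x <= y for x, y in zip(a, b)) and a != b

def error_rate(front, pareto_set):
    """Eq. 5: fraction of `front` dominated by some member of P*."""
    errors = sum(1 for s in front if any(dominates(p, s) for p in pareto_set))
    return errors / len(front)
```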

Given two sets of non-dominated solutions A and B, the set coverage metric, or Coverage of two Sets (CS), represents the percentage of elements in B that are dominated by A. Therefore, CS(A,B)=1 indicates that all solutions in B are dominated by A, while CS(A,B)=0 indicates that no element of B is dominated by A [20]. This metric is calculated as follows:

$$\begin{aligned} CS(A,B) = \frac{|\{b\in B \mid \exists \, a \in A : a \succeq b\}|}{|B|} \end{aligned}$$
(6)

In addition, both directions must be analyzed, since CS(A,B) is not necessarily equal to \(1 - CS(B,A)\).
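A sketch of the CS computation under the same minimization convention (the dominance test is an illustrative assumption):

```python
def dominates(a, b):
    """Weak Pareto dominance, assuming minimization on every objective."""
    return all(x <= y for x, y in zip(a, b)) and a != b

def coverage(a_set, b_set):
    """CS(A, B), Eq. 6: fraction of B dominated by at least one member of A."""
    covered = sum(1 for b in b_set if any(dominates(a, b) for a in a_set))
    return covered / len(b_set)
```

Computing both `coverage(a, b)` and `coverage(b, a)` exposes the asymmetry noted above.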

The Inverted Generational Distance Plus (IGD+) metric is an improvement over the Inverted Generational Distance (IGD) and Generational Distance (GD) metrics, which quantify the quality of the non-dominated set achieved by the MOGA (P) in relation to a reference set. Typically, P is compared to the Pareto optimal set (\(P^*\)). IGD+ considers the dominance relation between a solution and a reference point: if the solution is dominated by the reference point, the Euclidean distance is used; if neither dominates the other, the minimum distance from the reference point to the region dominated by the solution is used. As a result, the distance represents the degree of inferiority of the solution relative to the reference point [9]. This metric is calculated as follows:

$$\begin{aligned} IGD^{+}(A,B) = \frac{1}{|A|}\sum _{r\in A}\min _{s\in B} d^{+}(r,s) \quad \text {, }\quad d^{+}(r,s) = \sqrt{\sum _{i=1}^M (\max \{s_i-r_i, 0\})^2} \end{aligned}$$
(7)
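Equation 7, read with A as the reference set and B as the evaluated front, can be sketched as follows (an illustration for minimized objectives, not the authors' implementation):

```python
import math

def igd_plus(reference, front):
    """Average, over reference points, of the modified distance d+
    to the closest solution in `front` (Eq. 7)."""
    def d_plus(r, s):
        # Only components where the solution is worse (larger, for
        # minimization) than the reference point contribute.
        return math.sqrt(sum(max(si - ri, 0.0) ** 2 for si, ri in zip(s, r)))
    return sum(min(d_plus(r, s) for s in front) for r in reference) / len(reference)
```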

5.2 Comparative Analysis

This section presents a comparative analysis of the results obtained from the reference model, ini-heu, and the proposed models, ps-re and bl-bat. The ps-re model refines the ini-heu model by integrating the partitioned selection with constraints for reinsertion (Subsect. 4.1). The ps-re model uses a partition percentage pRe of 60% for the selection for reinsertion and increases the crossover rate from 30% to 90%. For the mutation operator, it uses pMutP of 1%, pMutG of 40%, pAddG of 50%, pMutB of 25%, pAddB of 25%, and pSwapG of 50%.

The bl-bat model extends the ps-re model by modifying the mutation operator with the greedy local search strategy for the number of batches. It employs an MI of 2 and a pBM of 85%.

Table 2 displays the average (avg) and median (med) of the results. Notably, as the ps-re model enhances the quality of the results even at a lower number of generations, the total number of generations was reduced from 1000 to 600. Consequently, the final generations for each model will be compared.

Upon evaluating metric E, the proposed models ps-re and bl-bat (at generation 600) reduce it by 10.4% and 13.8%, respectively, compared to ini-heu (at generation 1000). This reduction suggests that the diversity-enhancing selection enables a more effective exploration of the search space and the discovery of more solutions from \(P^*\). Furthermore, the 3.4% improvement of bl-bat over ps-re suggests that the local search facilitates the exploitation of nearby individuals.

The improvement of the proposed models over generations, especially for the initial generations for E and IGD+, can be visualized through the boxplot of the executions in Fig. 4a.

Fig. 4.

Boxplots of performance metrics over MOGA generations.

For the average IGD+, the proposed models ps-re and bl-bat improve by 34.2% and 37.8%, respectively, compared to ini-heu. This improvement suggests that the models are capable of finding solutions closer to \(P^*\).

Likewise, there is a 3.6% improvement in IGD+ when comparing bl-bat with ps-re, indicating better exploitation of the solutions with the incorporation of the local search.

Upon analyzing the NS, the results are similar, ranging from 24 to 27 solutions. The ini-heu model still presents the best average, with reductions of 3.1% and 6.4% for ps-re and bl-bat, respectively. However, this metric must be evaluated carefully, since a large NS does not necessarily imply better solutions. The box plots for IGD+ and NS are shown in Fig. 4b.

Evaluating the average hv, the results are similar, varying from 0.89997 to 0.9007. The ps-re model shows a slight improvement of 3.8% in the hv and the bl-bat a small reduction of 6.9% compared to ini-heu.

Table 2. Performance Metrics of the MOGA Approaches.

To investigate the dominance relationship between the fronts of the different approaches, we analyze the CS metric; Table 3 presents the results. Both proposed models, ps-re and bl-bat, demonstrate superior dominance. When comparing ps-re with ini-heu, 71.2% of the solutions from ini-heu are dominated by solutions from ps-re, while only 11.9% of the solutions in ps-re are dominated by ini-heu.

Comparing bl-bat with ini-heu, 60.1% of the solutions from ini-heu are dominated by the solutions from bl-bat, while only 17.1% of the solutions in bl-bat are dominated by ini-heu. This confirms that bl-bat presents a significant improvement in terms of dominance compared to ini-heu.

Comparing ps-re with bl-bat, 25.8% of the solutions from ps-re are dominated by bl-bat, while 36.1% of the solutions from bl-bat are dominated by ps-re. Therefore, ps-re presents slightly better dominance than bl-bat.

This suggests that the ps-re model achieves a significant improvement in terms of dominance compared to the ini-heu model, along with a slight advantage over bl-bat. The ps-re model is able to find more dominating solutions, suggesting a more effective exploration of the search space.

Table 3. CS metrics of the strategies.

5.3 Hypothesis Test Analysis

This section presents the results of a hypothesis test conducted at a 95% confidence level to assess the statistical significance of the ps-re model in comparison to other models. The results, displayed in Table 4, show each row representing the average values of the metrics (hv and NS for maximization, and IGD+ and E for minimization). Each column represents the comparison of ps-re with a different model. The green cells indicate instances where ps-re significantly outperforms the other model for the corresponding metric.

The hypothesis test confirms the statistical superiority of ps-re over ini-heu for the hv, IGD+, and E metrics, and over bl-bat for the hv metric. When comparing ps-re with ini-heu, there is significant evidence of superiority for all metrics except NS, which shows comparable results. This suggests that allowing unconstrained solutions enhances the diversity of the population, improving the exploration of solutions towards \(P^*\).

Despite the better averages for E and IGD+ of the bl-bat model, there is insufficient statistical evidence to assert its superiority over ps-re; moreover, ps-re demonstrates evidence of superiority for the hv metric. This suggests that the combination with local search strategies could be further investigated, for instance together with surrogate models to reduce the cost of fitness evaluation.

Table 4. Hypothesis test for the metrics comparing ps-re vs. other approaches.

6 Conclusion

In this study, we investigated MOGA models applied to the batch scheduling of pharmaceutical products with stochastic demands. We proposed two improvements to the genetic operators: the partitioned selection with constraints for reinsertion, and the incorporation of a greedy local search strategy into the mutation operator. The operators were investigated in two models: the first, ps-re, enhances the reference model ini-heu by integrating the partitioned selection with constraints for reinsertion; the second, bl-bat, adds to the ps-re model the greedy local search strategy for the number of batches in the mutation operator. Both models demonstrated statistically significant improvements over the reference model (ini-heu) in the IGD+ and Error Rate metrics. For the ps-re model, we observed an average improvement of 10.4% in E, an improvement of 34.2% in IGD+, a slight increase of 3.8% in hv, and a small reduction of 3.1% in NS compared to the reference model. This suggests that the enhanced diversity-preserving selection and the increased crossover rate positively influenced the capability of finding solutions in and close to \(P^*\). The bl-bat model presented even more expressive results, with an average reduction of 13.8% in E and an improvement of 37.8% in IGD+, alongside a reduction of 6.4% in NS and a small reduction of 6.9% in hv compared to the reference model. Moreover, both demonstrated superiority in the CS metric compared to ini-heu, with 71.2% of the ini-heu solutions dominated by ps-re solutions, while only 11.9% of ps-re solutions were dominated by ini-heu. The ps-re model also outperforms bl-bat in dominance, with 36.1% of bl-bat solutions dominated by ps-re and 25.8% of ps-re solutions dominated by bl-bat. This confirms the significant improvement of the ps-re model in terms of dominance compared to ini-heu.

As future work, we intend to evaluate the algorithm's performance when processing time is used as a stopping criterion. Surrogate models could be employed to reduce the computational cost of fitness and constraint evaluation, which would also favor the study of more refined local search operators in the mutation.