title: Modeling and performance analysis of shuttle-based compact storage systems under parallel processing policy
authors: Deng, Lei; Chen, Lei; Zhao, Jingjie; Wang, Ruimei
date: 2021-11-15
journal: PLoS One
DOI: 10.1371/journal.pone.0259773

Short response times for order processing are important for modern warehouses and can potentially be achieved by adopting an appropriate processing policy. The parallel processing policy has advantages in improving the performance of many autonomous storage and retrieval systems. However, researchers tend to assume a sequential processing policy for managing the movement of independent resources in shuttle-based compact storage systems. This paper models and analyses a single tier of a specialized shuttle-based compact storage system under the parallel processing policy. The system is modeled as a semi-open queueing network with class switching, and the parallel movement of the shuttles and the transfer car is modeled using a fork-join queueing network. The analytical model is validated against simulations, and the results show that our model can accurately estimate the system performance. Numerical experiments and a real case are carried out to compare the performance of the parallel and sequential processing policies. The results suggest a critical transaction arrival rate and depth/width ratio, below which the sequential processing policy outperforms the parallel processing policy. However, the advantage of the sequential processing policy decreases as the number of shuttles, the transaction arrival rate and the depth/width ratio increase. The results also suggest an optimal depth/width ratio of 1.75 for minimizing the expected throughput time in the real system. Given the current system configuration, the parallel processing policy should be considered when the number of shuttles is larger than 2 or the transaction arrival rate is larger than 24 per hour.

In recent years, customer demands for logistics and distribution have become dynamic and keep changing, especially during COVID-19. This implies an increasing trend towards more service variety and shorter response times. As a new unit-load storage and retrieval system, shuttle-based compact storage systems combine the features of, and are more cost-effective than, autonomous vehicle-based storage systems and compact storage systems. Additionally, such systems […]. In earlier work, Malmborg [6] attempts to propose an optimal system design by comparing AVS/RS to traditional automated storage and retrieval systems, and his results show that the former has advantages in cost savings and operational flexibility. Malmborg [7] extends this work by considering opportunistic interleaving in the system. Using queueing methods, the activity of the vehicles is modeled by an M/G/V queue and the activity of the lifts by a G/G/L queue in the research of Fukunari and Malmborg [8] and Kuo et al. [9]. Roy et al. [10] analyze a single-tier AVS/RS using a semi-open queueing network and examine the system performance. Furthermore, as a useful modelling tool, semi-open queueing networks are also used to model AVS/RS in the works of Heragu et al. [11] and Marchet et al. [12]. Several other studies analyze AVS/RS through simulation, such as Ekren [4]. Recently, Ekren and Akpunar [13] develop an open queueing network and a software-based tool to calculate the performance of AVS/RS.
In this research, they consider both single-and dual-command cycles, and also estimate system performance related to energy consumption. For the shuttle-based storage and retrieval systems (SBS/RS), some studies focus on the system design, energy consumption and scheduling process. For instance, Zhao et al. [14] use a semi-open queueing network (SOQN) to model a tier-to-tier SBS/RS system to identify the optimal number of shuttles and provide some insights in system design. Through simulation analysis, Ha and Chae [15] propose a free balancing in SBS/RS systems to prevent collisions and blockages and achieve the targeted system throughput with an optimal number of shuttles. Wu et al. [16] build a queueing model and design an optimal algorithm to find the minimum cost configurations in terms of number of tiers, aisles, lifts and workstations with given throughput, tote capacity and order cycle time requirements. Lei et al. [17] investigate the optimal storage location assignment by using a optimization model. Besides, Luo et al. [18] and Dong et al. [19] investigate the optimal scheduling rule for storage and retrieval processes, respectively, to minimize the makespan of storing or retrieving a series of loads. And Liu et al. [20] develops an energy consumption model for the SBS/RS and estimate the maximum energy consumption under different throughput requirement. Studies on the shuttle-based compact storage systems are scant, notwithstanding its better volume flexibility, lower operational cost and shorter respond time. Tappia et al. [2] consider multiple tiers and build a semi-open queueing network to model this system. Based on the results of their analytical models, they show the optimal depth/width ratio and number of tiers and compare the economic performance between specialized and generic shuttles. Compared to the research of Tappia et al. [2] , Manzini et al. [21] only focus on the estimation of travel time and distance, aiming to find an appropriate layout and system configuration to optimize the system performance in terms of travel distance and cycle time. Borovinšek et al. [3] attempt to find out the optimal layout and system configuration to minimize the investment, energy consumption and cycle time of the system by using a multi-objective optimization model. D'Antonio et al. [22] consider the effect of different allocation criterion on system performance and propose an analytical model based on probabilistic approach to estimate the cycle time and its standard deviation. Boysen et al. [23] focus on a shuttle-based deep-lane storage system with forklifts performing vertical movements. They build a mixed-integer programming model to estimate the performance of two system configurations, namely one-sided and two-sided access to deep-lane storage system, aiming at avoiding blocking. Eder [24] proposes a continuous-time open queueing network taking into account the effect of capacity limitation and the results show that as the increasing of storage depth, the throughput time increases and the investment cost decreases. Recently, Kumawat and Roy [25] develop a new solution approach to solve the multi-stage semi-open queuing networks and apply it in the shuttlebased compact storage systems, which is more accurate for estimating system performance. Our literature review shows that a sequential processing policy is used to manage the movement of shuttles and transfer car. 
For instance, when a retrieval transaction is assigned to a shuttle, the shuttle travels to the first bay of its lane and then places a request for the transfer car. Once the transfer car is available, it moves to the shuttle's lane, transports the shuttle to the retrieval lane, releases the shuttle, waits for the shuttle to retrieve the load and then transports the shuttle to the I/O point. During the retrieval transaction, the transfer car cannot respond to the demands of any other shuttle, which means that the longer the shuttle takes to retrieve the load, the more inefficient the whole system becomes. As pointed out by Tappia et al. [2], the sequential processing policy is currently in use in some warehouses because their storage lanes are not too deep. As the storage lanes become deeper, however, the transfer car spends more time waiting for shuttles to retrieve loads. Under the parallel processing policy, the movements of the shuttles within the storage lanes and of the transfer car in the cross-aisle are simultaneous (Fig 1). Some previous studies have examined the performance of such a policy in automated and vehicle-based storage and retrieval systems [26, 27]. The systems in those studies are crane-based [26] or single/double-deep storage systems [27], which differ from the shuttle-based compact storage systems discussed in our study. Moreover, the former uses deterministic models and the latter only takes retrieval transactions into consideration. Recently, Kumawat et al. [28] propose a closed queueing network with two-phase servers to model the simultaneous operations of the shuttle and the transfer car in a shuttle-based compact storage system. However, their model only captures the parallel movements of the shuttles and the transfer car before their joint movement, meaning the transfer car still has to wait for the shuttle to move within the storage lane to pick up the load. In summary, simultaneous operations of independent resources in autonomous storage and retrieval systems have attracted the attention of scholars who have performed a number of theoretical studies. However, existing studies on shuttle-based compact storage systems either assume sequential operations between the shuttle and the transfer car or only model the parallel movements of the shuttle and the transfer car before their joint movement in the cross-aisle, both of which mean that the transfer car has to wait for the shuttle to retrieve the load. In practice, the simultaneous movements of different resources when processing a transaction have advantages in system performance over their sequential movements [28]. Despite the requirement for shorter response times and the performance benefits of the parallel processing policy, most previous studies focus on the sequential processing policy, and studies on the parallel processing policy are rare. Therefore, to contribute to the scant literature on this subject, this study aims to estimate the system performance under the parallel processing policy and to investigate the conditions under which the parallel processing policy outperforms the sequential processing policy. To this end, this study analyzes the operational processes of shuttle-based compact storage systems under the parallel processing policy and develops a multi-class semi-open queueing network (SOQN) with class switching to model such a system. Meanwhile, a fork-join queueing network (FJQN) is used to model the concurrent movement of the shuttles and the transfer car.
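To make the difference between the two policies concrete, the following sketch compares the service time of a single retrieval transaction under each policy, ignoring queueing for busy resources. The phase structure mirrors the description above, but the component times, and the assumption that the transfer car needs a separate return trip before the second joint movement, are hypothetical illustrations rather than the paper's travel-time model.

```python
# Illustrative sketch (not the paper's model): service time of one retrieval
# transaction under the sequential and the parallel processing policy, assuming
# the component times below are given and no other transaction is in the system.

def sequential_service_time(t_shuttle_to_bay, t_car_to_shuttle_lane,
                            t_joint_to_retrieval_lane, t_shuttle_retrieve,
                            t_joint_to_io):
    # The transfer car waits while the shuttle retrieves the load,
    # so all components simply add up.
    return (t_shuttle_to_bay + t_car_to_shuttle_lane
            + t_joint_to_retrieval_lane + t_shuttle_retrieve + t_joint_to_io)

def parallel_service_time(t_shuttle_to_bay, t_car_to_shuttle_lane,
                          t_joint_to_retrieval_lane, t_shuttle_retrieve,
                          t_car_return_for_pickup, t_joint_to_io):
    # Phase 1: the shuttle moves to the first bay of its lane while the transfer
    # car travels to that lane; both must finish before the joint movement.
    phase1 = max(t_shuttle_to_bay, t_car_to_shuttle_lane)
    # Phase 2: joint movement to the retrieval lane, after which the car is released.
    # Phase 3: the shuttle retrieves the load while the car is free; before the
    # second joint movement the car must be back at the lane (second fork-join).
    phase3 = max(t_shuttle_retrieve, t_car_return_for_pickup)
    return phase1 + t_joint_to_retrieval_lane + phase3 + t_joint_to_io

if __name__ == "__main__":
    # Hypothetical component times in seconds, for illustration only.
    print(sequential_service_time(20, 15, 10, 60, 12))    # 117
    print(parallel_service_time(20, 15, 10, 60, 25, 12))  # 102
```

Even in this isolated-transaction view, the parallel policy saves time whenever the overlapped components do not have identical durations; the system-level comparison in the later sections additionally accounts for waiting caused by resource contention.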
Since the original network does not have a product-form solution, a decomposition-based approximation approach is developed to estimate the system performance and simulation is used to validate the accuracy of the analytical model. Additionally, a series of numerical experiments are conducted to compare the system performance under parallel and sequential processing policies. Some design insights and managerial implications are provided through the investigation of a real case. With respect to the previous literature, this study mainly focuses on the parallel movements of shuttles and the transfer car. The results may provide new insights for the improvement of warehouse performance. The main contributions of this study are the followings: 1. We develop a SOQN combined with FJQN to model the parallel movements of shuttles and the transfer car in shuttle-based compact systems. Compared to the previous studies, our model is stochastic and considers both storage and retrieval transactions, thereby taking into account the effect of time spent on waiting for resources to be paired and the route of shuttles. Besides, our model allows the transfer car to be released and respond for the demand of another shuttles when the shuttle is retrieving the load (in the existing studies, the transfer car have to wait for shuttle moving within storage lane to pick up the load). We validate the proposed model using numerical experiments and apply the model on a real case and compare it with the model under sequential processing policy proposed by Tappia et al. [2] . Our analytical results provide some managerial insights in regards to the conditions in terms of number of shuttles, depth/width ratio and arrival rate of orders, under which the parallel processing policy should be considered. The rest of the study is organized as follows: section 2 provides the system description and assumptions. In section 3, we introduce the models and the approximate solution approach is described in section 4. Section 5 contains the simulation validation, numerical experiments and the insights. Conclusions and future works are presented in section 6. Table 1 summarized main notations used throughout the study. The following assumptions are made in this study: 1. We only consider a single tier. This is based on the following observations. First, the parallel movements of shuttle and transfer car are performed within a single tier. Second, our model can be easily extended to the case of multiple tiers by using the multi-tier linking approach proposed by Tappia et al. [2] . 2. We only consider a system with specialized shuttle, since we are interested in whether the simultaneous operations of shuttle and transfer car improve the system performance. 3. The arrival process of both storage and retrieval transactions are assumed to follow a Poisson distribution. 4. The random storage policy is used, meaning the probability of a product being stored in any storage positions is equal. 5. We consider the storage system operates in single-command cycles, which means only a single storage transaction or a single retrieval transaction is performed and only one unit load is handled in each cycle. 6. Each storage lane holds one product. 7. Since compared with the number of storage lanes, the number of shuttles is small, we assume a storage lane can be accessed by at most one shuttle once so that we can ignore the shuttle blocking effects within a storage lane. 8 . 
The shuttles and the transfer car follow a point-of-service-completion (POSC) dwell point policy. Therefore, the shuttles and the transfer car wait either at an interior point after completing a storage transaction or at the I/O point after completing a retrieval transaction. 9. The shuttles and the transfer car follow a first-come-first-served (FCFS) scheduling policy. 10. An arriving transaction is performed by the first available shuttle, or by the first shuttle waiting in the idle shuttle queue, regardless of the transaction type and the shuttle dwell point. 11. We do not consider the effect of acceleration and deceleration on the movement of the shuttles and the transfer car. When a transaction is assigned to a shuttle, the shuttle simultaneously places a request for the transfer car. Given the POSC and FCFS policies, the shuttle and the transfer car can dwell at any interior or I/O point, and a transaction can be assigned to any shuttle regardless of its dwell point. Besides, whether the shuttle dwells in the same lane as the retrieval position results in different individual movements being required to perform a retrieval transaction. Therefore, depending on the dwell point of the shuttle (interior or I/O point) and the type of transaction (storage or retrieval), one of the scenarios shown in Figs 2-5 can occur (for details about the operational processes of such a system under the sequential processing policy, we refer to Tappia et al. [2]). Given the random storage policy and the operational processes shown in Figs 2-5, the expected travel times related to the shuttles and the transfer car can be obtained from the probability distribution of storing or retrieving a load at each storage position, i.e., a uniform distribution. Therefore, each component of the travel time related to the shuttles and the transfer car can be expressed as follows. Time required for the shuttle to: 1. travel from its dwell point (or the retrieval position) to the first bay of its lane; 2. travel from its dwell point to the retrieval position when it dwells in the same lane as the retrieval position; 3. pick up or drop off the load. Time required for the transfer car to: travel from its dwell point (not the I/O point) to the shuttle's lane; and load or unload the shuttle.

3 Semi-open queueing network for shuttle-based compact storage systems

As shown in Fig 6, there are three nodes and a fork-join network with two nodes. All the service required for the shuttles and the transfer car to complete the parallel movement is modeled by the fork-join network, in which the service of the shuttles is represented by an infinite-server (IS) node 1 and the service of the transfer car is represented by a single-server node 2. The joint movement is captured by the single-server node 3. IS nodes 4 and 5 represent shuttle movements for retrieval and storage transactions, respectively. Node S is a synchronization station with two queues: Q_1 represents the external queue of transactions and Q_2 represents the queue to which shuttles are released after completing service. Under the parallel processing policy, an incoming transaction, after being paired with the first available shuttle, is split into two parts: one is served by the shuttle and the other by the transfer car. The completed part waits in one of the two join queues, denoted by Q_sh (for the shuttles) and Q_t (for the transfer car), for the completion of the other. Then they join at the join node. Our model allows the transfer car to perform other tasks after it releases the shuttle to retrieve the required load.
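Under the random storage assumption stated above, each expected travel-time component is an average over a uniform distribution of storage positions. The sketch below shows how such expectations can be computed numerically; the lane and bay counts, pitch distances and velocities are hypothetical placeholders rather than the parameters of the studied system, and the paper's closed-form expressions may differ.

```python
# Minimal sketch: expected one-way travel times under the random (uniform)
# storage assumption, computed by averaging over storage positions. All
# parameters below are assumed placeholder values.
import itertools
import statistics

N_LANES = 47          # storage lanes per side of the cross-aisle (assumed)
N_BAYS = 37           # bays (depth positions) per lane (assumed)
BAY_PITCH = 0.9       # metres between adjacent bays (assumed)
LANE_PITCH = 1.1      # metres between adjacent lanes (assumed)
V_SHUTTLE = 1.5       # shuttle velocity within a lane, m/s (assumed)
V_CAR = 2.0           # transfer car velocity in the cross-aisle, m/s (assumed)

def expected_shuttle_time_to_first_bay():
    # Shuttle travels from a uniformly distributed bay to the first bay of its lane.
    return statistics.mean(b * BAY_PITCH / V_SHUTTLE for b in range(N_BAYS))

def expected_car_time_between_lanes():
    # Transfer car travels between two independently uniform lane positions.
    pairs = itertools.product(range(N_LANES), repeat=2)
    return statistics.mean(abs(i - j) * LANE_PITCH / V_CAR for i, j in pairs)

print(round(expected_shuttle_time_to_first_bay(), 2))
print(round(expected_car_time_between_lanes(), 2))
```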
Because the transfer car can be released while the shuttle retrieves the load, the shuttle and the transfer car have to be synchronized upon completion of their parallel movements more than once when processing a retrieval transaction. To deal with the differences between the first and second synchronizations, class switching is allowed in our model. Specifically, we refer to a transaction as a customer of the system, and to storage and retrieval transactions as class 1 and class 2 customers, respectively. When a class 2 customer is served by a shuttle dwelling in a storage lane different from that of the retrieval position, the customer switches to class 3 after the transfer car moves to the destination storage lane and unloads the shuttle (the first joint movement, which is captured by node 3). Similarly, when a class 2 customer is served by a shuttle dwelling at the I/O point, the customer switches to class 3 after the transfer car moves from the I/O point to the destination lane and unloads the shuttle. Table 2 describes the customer class switching rule. The external arrival process of transactions is assumed to follow a Poisson distribution. Thus, the transaction waiting at the head of Q_1 is a storage transaction with probability p_s = λ_s/(λ_r + λ_s) or a retrieval transaction with probability p_r = λ_r/(λ_r + λ_s). The probability that a transaction is assigned to a shuttle dwelling at the I/O point is p_sio = λ_r/(λ_r + λ_s), and the probability that it is assigned to a shuttle dwelling at an interior point is p_sin = λ_s/(λ_r + λ_s). Since the loads are stored randomly in the system, we can get p_ss = 1/(2N_l). Whether the transfer car dwells at an interior point or at the I/O point depends on the previous task it completed (there are three possible tasks in our model: the joint movement of a storage transaction, and the first and second joint movements of a retrieval transaction), which makes it difficult to calculate the corresponding probabilities. Specifically, the transfer car dwells at the I/O point after completing the first joint movement of a retrieval transaction, while it dwells at an interior point after completing the joint movement of a storage transaction or the second joint movement of a retrieval transaction. Given the random storage policy, it is reasonable to assume that, in steady state, the number of class 2 customers (excluding the case in which the shuttle is already present in the lane of the retrieval load) is equal to the number of class 3 customers. This assumption implies that, given that the shuttle is not present in the lane of the retrieval load, the probability that the transfer car performs the first joint movement of a retrieval transaction equals the probability that it performs the second joint movement (i.e., both probabilities can be expressed as (p_r p_sin p_sd + p_r p_sio)/2). Thus, we can obtain the corresponding dwell-point probabilities. Let f denote the fork-join node; the routing probabilities are then given accordingly. The service time of each node for each customer class depends on the type of transaction and the dwell points of the shuttles and the transfer car. The service times are therefore obtained from the scenarios provided in Table 3 and the fork-join node in Table 4, together with their corresponding probabilities. The queueing model we develop is a multi-class semi-open queueing network with both general and infinite-server stations. It is difficult to evaluate such a queueing network directly by a continuous-time Markov chain (CTMC) because the system has a large state space.
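As a quick numerical illustration of the probabilities defined above, the short sketch below evaluates them for hypothetical arrival rates and lane counts. The reading of p_sd as the probability that the shuttle dwells in a lane different from the retrieval lane (i.e., 1 − p_ss) is our assumption, as are all numerical values used.

```python
# Sketch of the transaction-mix and dwell-point probabilities used in the model.
# lam_s, lam_r and N_l are hypothetical values; p_sd = 1 - p_ss is our reading
# of "shuttle dwells in a lane different from the retrieval lane".
lam_s = 11.0          # storage arrivals per hour (assumed)
lam_r = 11.0          # retrieval arrivals per hour (assumed)
N_l = 47              # storage lanes per side of the cross-aisle (assumed)

p_s = lam_s / (lam_r + lam_s)      # next transaction is a storage
p_r = lam_r / (lam_r + lam_s)      # next transaction is a retrieval
p_sio = lam_r / (lam_r + lam_s)    # assigned shuttle dwells at the I/O point (POSC)
p_sin = lam_s / (lam_r + lam_s)    # assigned shuttle dwells at an interior point
p_ss = 1.0 / (2 * N_l)             # shuttle already dwells in the retrieval lane
p_sd = 1.0 - p_ss                  # shuttle dwells in a different lane (assumed)

# Probability that the transfer car performs the first (equivalently, second)
# joint movement of a retrieval transaction, as stated in the text.
p_joint_retrieval = (p_r * p_sin * p_sd + p_r * p_sio) / 2
print(p_s, p_r, p_ss, round(p_joint_retrieval, 4))
```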
The state space is large because we have to record the number of customers of each class at each node and in its corresponding queue, the number of idle shuttles, and the exact order of all of the customers in the join queues. To estimate the performance of such a model, which has no product-form solution, we develop a decomposition-based approximation method comprising three steps: first, we consider the FJQN as a closed network and estimate its load-dependent service rate; second, we replace the FJQN by a flow equivalent server (FES), aggregate the complement network together with the FES into a single server and calculate its service rate; finally, we solve the reduced SOQN with one single server directly by CTMC. Fig 7 shows the procedure for reducing the original network to a single-server network. There are two stations in the FJQN: a general single-server station representing the transfer car and an IS station representing the shuttles (Fig 7a). Note that there is no class switching in this closed queueing network. Thus, to obtain the service rate of the FJQN, we first aggregate all classes into one, as suggested by [29], consider the FJQN as a closed network and short-circuit the other nodes. We can then approximate the state probabilities and calculate the service rate. The mean service time of nodes 1 and 2 for the aggregated class is given by combining the mean service times of all possible scenarios. Therefore, the mean service times of nodes 1 and 2 are obtained as E(T_i) = Σ_m p_m T_im, and their second moments analogously, where E(T_i) denotes the mean service time of node i, T_im represents the mean service time of node i in the m-th scenario described in Table 3 with corresponding probability p_m, and E(T_i^2) represents the second moment of the service time for the aggregated class. Moreover, the squared coefficient of variation (scv) of the service time can be obtained as cv_i^2 = E(T_i^2)/E(T_i)^2 − 1. Since the service times of the two nodes are generally distributed and cv_i^2 < 1, an Erlang-k distribution is adopted to approximate the service process of each node, where k denotes the number of exponential phases and k = [1/cv_i^2]. The mean service time at each phase is μ^−1 = E(T_i)/k. When a transaction goes through the FJQN, it is split into two parts, one of which requests the service of a shuttle at node 1 while the other requests the service of the transfer car at node 2. The joining of these two tasks at the join node of the FJQN represents the service completion of a transaction. Thus, the state of the system can be described by a two-dimensional vector st_q = (N_ws, N_wt), where N_ws is the number of waiting shuttles in the join queue Q_sh and N_wt is the number of waiting transfer cars in the join queue Q_t. Let N_f be the number of customers in the closed network and q be the index of state st_q; we then have q = N_ws(N_f + 1) + N_wt. Given that there is only one transfer car in the system, the state space can be expressed accordingly. As shown in [27], the joining of the two tasks is completed when a shuttle finishes its service if N_wt = 1, or when the transfer car finishes its service if N_ws > 1. Thus, the service rate of the FJQN can be calculated from the state probabilities, where π(st_q) denotes the probability of state st_q. The state probabilities π(st_q) can be obtained by solving πQ = 0 and πe = 1, where e is the column vector of ones and Q denotes the transition matrix of st_q (S1 File).
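Two generic building blocks of this step can be sketched as follows: fitting an Erlang-k service process from the aggregated first two moments, and solving πQ = 0, πe = 1 for a stationary distribution. The generator matrix in the example is a hypothetical three-state chain used only to show the mechanics, not the FJQN transition matrix of S1 File.

```python
# Sketch of two building blocks used in this step: Erlang-k moment fitting and
# the stationary distribution of a CTMC given its generator matrix.
import numpy as np

def erlang_fit(mean, second_moment):
    """Return (k, phase_rate) of an Erlang-k approximation of a service time."""
    scv = second_moment / mean**2 - 1.0      # squared coefficient of variation
    k = max(1, round(1.0 / scv))             # phases; rounding convention assumed
    return k, k / mean                       # each exponential phase has rate k/E(T)

def stationary_distribution(Q):
    """Solve pi*Q = 0 with pi summing to one (Q is a CTMC generator matrix)."""
    n = Q.shape[0]
    A = np.vstack([Q.T, np.ones(n)])         # append the normalisation constraint
    b = np.zeros(n + 1); b[-1] = 1.0
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pi

# Erlang fit for a service time with mean 30 s and scv around 0.4 (assumed values).
print(erlang_fit(30.0, 30.0**2 * 1.4))

# Stationary probabilities of a toy three-state generator (rows sum to zero).
Q = np.array([[-2.0, 2.0, 0.0],
              [1.0, -3.0, 2.0],
              [0.0, 1.5, -1.5]])
print(stationary_distribution(Q))
```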
After obtaining the service rate of the FJQN, we replace the FJQN by a FES node with an exponentially distributed, load-dependent service time μ_f(N_f). Then the network made up of all nodes (except for the synchronization node) is considered as a closed network (Fig 7b). Since class switching is allowed in this closed queueing network, a switch from classes to chains is needed [30]. According to the concept of chains, there are two chains in the closed queueing network, denoted by c_1 and c_2. Without loss of generality, we let c_1 = {1} represent storage transactions, c_2 = {2, 3} represent retrieval transactions, and N_k denote the number of customers in the closed network. The size of the state space is therefore reduced accordingly, where |c_u| and N_ku denote the number of classes and the number of customers in chain c_u, u = 1, 2, respectively. The routing probabilities are given by Eq (14), from which we can get the visit ratios of each chain to, and its expected service time at, node i. The throughputs of the closed queueing network, μ_1(N_k) for c_1 and μ_2(N_k) for c_2, are then obtained through mean value analysis (MVA). After substituting the subnetwork made up of all nodes with a FES node (Fig 7c), we first reduce the network to a single chain and then use a birth-death process to model the system [31]. Let the aggregate arrival rate λ = λ_s + λ_r be the birth rate and the service rate of the aggregate chain, μ(N_k) = p_s μ_1(N_k) + p_r μ_2(N_k), be the load-dependent death rate of the system. The state space is described using a single variable x, which represents the number of transactions waiting in queue Q_1 when x > 0 and the number of idle shuttles waiting in queue Q_2 when −N_s ≤ x ≤ 0. Thus, the load-dependent death rate is μ(N_k) = μ(N_s + x) when −N_s ≤ x ≤ 0 and μ(N_k) = μ(N_s) when x > 0. The steady-state probabilities can be obtained using flow rate balance equations (S2 File). The expected throughput time of the system, the average utilizations of the shuttles and the transfer car, and the average queue length of Q_1 are the main performance measures we are interested in, where p(x_t) denotes the probability of the generic state x_t belonging to X, the set of all possible states x of the system, and x_t represents the states with the average number of transactions at the FJQN, L_f > 0, and at node 3, L_3 > 0. The simulation model is built in Arena 14.0. Table 5 provides details of the scenarios in the simulation model. The data used in the validation are derived from the study of Tappia et al. [2] and are provided in Table 6. The depth and width of a tier are measured by the maximum travel distance in the x- and y-direction, respectively. To validate the analytical model under different resource utilizations, the arrival rate of transactions is set at three levels, 22, 25 and 28 per hour, with the assumption that λ_r = λ_s, which results in a bottleneck utilization ranging from 70% to 90%. Twelve scenarios are designed based on the variation of shuttle number and order arrival rate for each combination of depth/width ratio and total number of storage positions. Other assumptions are the same as in the analytical model (i.e., POSC dwell point and random storage policy) (S3 File). For each scenario, a warm-up period of more than 5000 transactions is run, followed by 15 replications with a run time of more than 30000 transactions, which leads to a 95% confidence interval whose half-width is less than 2% of the average.
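For the final reduction step described earlier in this section, a small birth-death solver suffices: given the aggregate arrival rate and a load-dependent service rate, the recursion π(x) = π(x − 1)·λ/μ(x), truncated at a large queue length, yields the steady-state probabilities from which queue lengths and related measures follow. The service-rate function and parameter values below are hypothetical stand-ins for the MVA-derived rates μ(N_k).

```python
# Minimal sketch of the reduced SOQN as a birth-death process with birth rate
# lam and load-dependent death rate mu(n), truncated at a large queue length.
import numpy as np

def birth_death_probs(lam, mu_of_n, N_s, max_queue=200):
    """States x = -N_s..max_queue; x <= 0 counts idle shuttles, x > 0 waiting transactions."""
    states = list(range(-N_s, max_queue + 1))
    p = [1.0]
    for x in states[1:]:
        # death rate leaving state x: mu(N_s + x) while x <= 0, else mu(N_s)
        rate = mu_of_n(N_s + x) if x <= 0 else mu_of_n(N_s)
        p.append(p[-1] * lam / rate)             # detailed-balance recursion
    p = np.array(p)
    return dict(zip(states, p / p.sum()))        # normalise to probabilities

# Hypothetical load-dependent aggregate service rate (transactions per hour).
mu = lambda n: 10.0 * min(n, 4) if n > 0 else 1e-9
probs = birth_death_probs(lam=22.0, mu_of_n=mu, N_s=3)

EQ1 = sum(x * pr for x, pr in probs.items() if x > 0)    # mean external queue length
idle = sum(-x * pr for x, pr in probs.items() if x < 0)  # mean number of idle shuttles
print(round(EQ1, 3), round(idle, 3))
```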
Four performance measures are estimated to validate the analytical model: the throughput time of the system, the utilizations of the transfer car and the shuttles, and the queue length of Q_1. The accuracy of the analytical model is measured by the absolute relative error ε, defined as ε = |A − S|/S × 100%, where A and S denote the analytical and simulation results, respectively. The computational complexity of the proposed model can be characterized by O(N_s^4 · max(N_l, N_c)^2). In our experiments, the proposed algorithm takes less than 1 second of computational time on a standard computer. The distribution of absolute relative errors for each performance measure is shown in Fig 8. The average absolute errors are 6.32%, 2.93%, 2.38% and 10.81% for the expected throughput time, transfer car utilization, shuttle utilization and expected queue length of Q_1, respectively. These results suggest that the analytical model can accurately estimate the system performance. As pointed out by Tappia et al. [2], the parallel processing policy may be advantageous for deep-lane shuttle-based compact storage systems, which means that the response time under the parallel processing policy may become shorter than that under the sequential processing policy as the depth/width ratio increases. Thus, we compare the performance of the two processing policies through numerical experiments. The system performance under the sequential processing policy is estimated using the simulation model, while the system performance under the parallel processing policy is estimated using the analytical model proposed in this study. To compare the two processing policies in more detail, we vary N_s and λ: N_s ranges from 2 to 5 and λ varies from 18 to 30 with a step size of 0.1 to deal with the uncertainty of the order arrival rate, and we again assume that λ_r = λ_s. The total number of storage positions is 5000. The depth/width ratio varies from 0.75 to 3.5 with a step size of 0.25. The results are shown in Fig 9. To better understand the difference in system throughput time under the different processing policies, we use the average improvement percentage of the parallel processing policy over the sequential processing policy, I_P, defined as I_P = (1/V) Σ_λ (E[T_S] − E[T_P])/E[T_S] × 100%, where E[T_P] and E[T_S] represent the expected system throughput time under the parallel and sequential processing policies, respectively, and V is the number of values taken by λ. The results are shown in Fig 10. As Fig 10 shows, the average improvement percentage I_P increases with the depth/width ratio and the number of shuttles, and the parallel policy performs better than the sequential policy when the depth/width ratio is large enough. Specifically, there is an intersection point between the curves of the two processing policies, denoted by (dw*, E[T]*). When N_s = 2 and λ = 28, for example, dw* ≈ 2.37. When dw > dw*, the parallel policy outperforms the sequential policy. Additionally, dw* decreases with increasing N_s and λ. Allowing the transfer car and the shuttles to operate simultaneously reduces the total processing time, while its effect on the total waiting time depends on the number of shuttles and the depth/width ratio. Specifically, the processing time of a parallel task is the maximum of the shuttle processing time and the transfer car processing time.
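A compact way to compute the improvement metric I_P defined above from the two throughput-time curves over the λ grid is shown below; the two curve functions are hypothetical placeholders standing in for the simulation (sequential) and analytical (parallel) estimates.

```python
# Sketch of the improvement metric I_P: average percentage reduction in expected
# throughput time of the parallel policy relative to the sequential policy over
# a grid of arrival rates. The throughput-time functions are placeholders.
import numpy as np

lambdas = np.arange(18.0, 30.0 + 1e-9, 0.1)        # arrival-rate grid (per hour)

def ET_sequential(lam):   # placeholder curve, not the paper's simulation output
    return 60.0 / (32.0 - lam)

def ET_parallel(lam):     # placeholder curve, not the paper's analytical output
    return 55.0 / (32.5 - lam)

V = len(lambdas)
I_P = 100.0 / V * sum((ET_sequential(l) - ET_parallel(l)) / ET_sequential(l)
                      for l in lambdas)
print(f"average improvement I_P = {I_P:.2f}%")
```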
Meanwhile, for systems with storage lanes that are not too deep (i.e., dw < dw*), the parallel policy increases the total waiting time due to the long travel distance of the transfer car to pick up the waiting shuttles. This implies that the shuttles may constantly be waiting for the service of the transfer car, resulting in a higher utilization of the transfer car and a longer waiting time for the shuttles. For deep-lane storage systems (i.e., dw > dw*), the situation reverses: the capacity of the transfer car is sufficient, so the increase in shuttle waiting time is dominated by the reduction in total processing time. Therefore, the performance of the parallel policy is better than that of the sequential policy. In addition, increasing the number of shuttles can reduce the total waiting time since the expected travel distance of the transfer car is shorter. Thus, the reduction in total processing time can more easily offset the increase in total waiting time (i.e., dw* decreases).

In this section, we estimate the performance of both the sequential and parallel processing policies in a real case, which refers to a Nedcon system in the UK [2]. The system consists of multiple tiers of multiple storage lanes with a layout as studied in our research. In each tier, there are 37 storage columns and 47 storage lanes on each side of the cross-aisle. When analyzing the real case, we should consider the effects of acceleration/deceleration of the shuttles and the transfer car. Thus, the model has been adjusted to accommodate acceleration/deceleration effects, following the work of Zou et al. [27]. We also assume that λ_r = λ_s. Other system parameters are described in Table 7. As shown in the previous discussion, the depth/width ratio, the transaction arrival rate and the number of shuttles may affect the performance of the sequential and parallel processing policies. Thus, for the analysis of the real case, we first vary the transaction arrival rate from 10 to 28 per hour with a step size of 1 and keep the other variables fixed to investigate the potential improvement in system performance. With an increasing transaction arrival rate, the increasing utilization of the transfer car increases the waiting time of the shuttles for the service of the transfer car. Given the current configuration of the real system, the expected throughput capacity of the sequential processing policy is larger than that of the parallel processing policy when the transaction arrival rate is relatively small. This may result from the fact that, when λ is small, the average waiting time of the shuttles for the service of the transfer car is longer under the parallel processing policy than under the sequential processing policy. For a large arrival rate, the situation reverses since the increase in shuttle waiting time under the parallel processing policy is dominated by the reduction in total processing time. The intersection point between the curves of the sequential processing policy and the parallel processing policy gives the critical transaction arrival rate, below which the sequential processing policy outperforms the parallel processing policy. On the other hand, when the number of shuttles increases, the critical transaction arrival rate decreases since adding new shuttles shortens the average shuttle waiting time for the service of the transfer car. Specifically, the critical transaction arrival rate is about 26 per hour when N_s = 3, and about 23 per hour when N_s = 4 or 5.
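The real-case results above use travel times adjusted for acceleration and deceleration. A generic form of that adjustment, based on a trapezoidal (or triangular) velocity profile, is sketched here; the paper follows Zou et al. [27], whose exact formulation may differ, and the speed and acceleration values are assumptions.

```python
# Sketch of the usual kinematic travel-time adjustment for acceleration and
# deceleration. The generic formula and parameter values are our assumptions.
import math

def travel_time(distance, v_max, accel):
    """Time to travel `distance` starting and ending at rest."""
    d_ramp = v_max**2 / accel                # distance to accelerate plus decelerate
    if distance <= d_ramp:                   # never reaches v_max (triangular profile)
        return 2.0 * math.sqrt(distance / accel)
    return distance / v_max + v_max / accel  # trapezoidal profile

# Hypothetical parameters: 2 m/s top speed, 1 m/s^2 acceleration.
for d in (1.0, 5.0, 20.0):
    print(d, round(travel_time(d, v_max=2.0, accel=1.0), 2))
```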
For the case of two shuttles, the critical transaction arrival rate is larger than 28 per hour, at which point the resource utilizations exceed 95% and the convergence conditions of the system may not be guaranteed. Thus, we eliminate the scenarios with transaction arrival rates larger than 26 per hour. To analyze the effects of the tier configuration, we vary the depth/width ratio from 0.5 to 3.5 with a step size of 0.25 and keep the other variables fixed (the transaction arrival rate is 22 per hour). The results are provided in Fig 12. The optimal depth/width ratio under the parallel processing policy is 1.75, larger than that of the sequential processing policy. This implies that, given the current system configurations, the maximum system throughput is achieved when the depth/width ratio is 1.75 and that the system throughput decreases as the depth/width ratio increases or decreases from this value; moreover, the curve is very flat around the optimal ratio. As discussed in the previous section, there exists a critical depth/width ratio, below which the sequential processing policy outperforms the parallel processing policy, and adding new shuttles also decreases this critical depth/width ratio. Specifically, the critical depth/width ratio is about 1.75 when N_s = 2, about 0.72 when N_s = 3, and about 0.625 when N_s = 4 or 5. Given the current configuration of the real system, the sequential processing policy outperforms the parallel processing policy. However, when the arrival rate of transactions becomes large (e.g., during COVID-19), the parallel processing policy should be considered. Our results also show that the depth/width ratio has a significant impact on the difference in system performance between the sequential and parallel processing policies. This implies that adopting the parallel processing policy may shorten the system response time in systems with deep storage lanes. Besides, despite the increase in investment cost, adding new shuttles may be a useful way to improve system performance since it reduces the critical transaction arrival rate and encourages the transition from the sequential to the parallel processing policy, which may further improve system performance. For system design, our results suggest that the optimal depth/width ratio should be used as a guiding factor. To better understand the system performance under the different processing policies, we set the depth/width ratio at the optimal level (i.e., 1.75 for the parallel and 1.25 for the sequential processing policy), vary the transaction arrival rate from 10 to 28 per hour with a step size of 1 and keep the other variables fixed. The results are provided in Fig 13. When the number of shuttles is small (i.e., N_s = 2) and the arrival rate of transactions is relatively low (smaller than 24 per hour), the system throughput under the sequential processing policy is better than that under the parallel processing policy, given the current system configurations and the optimal depth/width ratios. However, when new shuttles are added or the arrival rate of transactions becomes larger, the parallel policy outperforms the sequential processing policy. Besides, the advantage of the parallel processing policy increases with the number of shuttles and the transaction arrival rate.
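Given throughput-time curves sampled over a grid of depth/width ratios for the two policies, the optimal ratio and the critical (crossover) ratio discussed above can be located as sketched below. The two curves are hypothetical placeholders chosen only to reproduce the qualitative shape reported here (a flat minimum near 1.75 for the parallel policy and a lower minimum near 1.25 for the sequential one); with the real model/simulation output they would be replaced by the estimated values.

```python
# Sketch: locating the optimal depth/width ratio (argmin of expected throughput
# time) and the critical ratio dw* where the two policy curves cross.
import numpy as np

dw = np.arange(0.5, 3.5 + 1e-9, 0.25)
ET_par = 40 + 6 * (dw - 1.75) ** 2          # assumed parallel-policy curve (s)
ET_seq = 38 + 10 * (dw - 1.25) ** 2         # assumed sequential-policy curve (s)

optimal_dw = dw[np.argmin(ET_par)]          # flat minimum near 1.75 by construction

# Critical ratio dw*: first sign change of the difference, refined by linear
# interpolation between the two surrounding grid points.
diff = ET_par - ET_seq                      # parallel is better where diff < 0
idx = np.where(np.sign(diff[:-1]) != np.sign(diff[1:]))[0][0]
x0, x1, y0, y1 = dw[idx], dw[idx + 1], diff[idx], diff[idx + 1]
dw_star = x0 - y0 * (x1 - x0) / (y1 - y0)

print(f"optimal depth/width ratio (parallel): {optimal_dw}")
print(f"critical ratio dw*: {dw_star:.2f}")
```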
These results suggest that, considering the variety of customer demands, the parallel processing policy should be considered and the optimal depth/ width ratio of parallel processing policy should be used as a guiding factor. The shuttle-based compact storage systems are becoming popular and adopted by many modern warehouses. Considering the variety of customer demands, it is important to improve the performance of such systems. Given its advantages in improving system performance, the parallel processing policy in shuttle-based compact storage systems need to be investigated. However, studies on this subject are rare. This study is one of the first to estimate the system performance of parallel processing policy in shuttle-based compact systems. Our contributions lie in both developing an analytical model and providing operational and design insights. Specifically, we mainly focus on the performance estimation of a single-tier of specialized shuttlebased compact storage system, in which the shuttles can only move within storage lanes and are transported along the cross-aisle by the transfer car, under parallel processing policy. The system is modelled as a multi-class semi-open queuing network with class switching, so that transfer car can perform other tasks during a retrieval transaction. Both storage and retrieval transactions are considered to capture the dynamic of shuttle routes and estimate the effect of different transactions on system throughput time. To capture the effect of simultaneously operations of the shuttles and the transfer car, we formulate a FJQN in which the transaction will be split into two parts, one is served by the shuttle and the other is served by the transfer car. Since exact solutions to the proposed semi-open queuing network are not available, a decomposition-based approach is developed to estimate the performance of the system. The analytical model is validated against simulations, the average errors for system response time, shuttle and transfer car utilization and external queue length are 6.32%, 2.93%, 2.38% and 10.81%, respectively. We carry out a series of numerical experiments to compare the performance of sequential and parallel processing policies. The results show that the parallel processing policy outperforms the sequential processing policy in systems with deep storage lanes (which means the depth/width ratio of the system is large). Additionally, the advantage of the parallel processing policy increases with the increase of shuttle number, the depth/width ratio and the transaction arrival rate. Our results also show that there is a critical depth/width ratio, below which the system should follow the sequential processing policy. Otherwise, the parallel processing policy should be considered. We also investigate the performance of both sequential and parallel processing policies in a real case. Given the current configuration of the real system, the system response time of sequential processing policy is lower than that of parallel processing policy. However, when the transaction arrival rate becomes large, our results suggest benefits of adopting parallel processing policy. The results also show the critical point of transaction arrival rate and depth/width ratio under different shuttle numbers. Besides, the optimal depth/ width ratio of the real system is 1.75 when parallel processing policy is used, which is independent of the shuttle number and the transaction arrival rate. 
When comparing the system performance of the two processing policies under their respective optimal depth/width ratios (1.75 for the parallel and 1.25 for the sequential processing policy), the results show that the sequential processing policy only has advantages when there are two shuttles in a tier and the transaction arrival rate is small (smaller than 24 per hour). This suggests a potential improvement in system performance achievable by adopting the parallel processing policy, considering the variety of customer demand. This study provides some useful managerial implications and warehouse design insights. Nevertheless, it has a number of limitations. First, the proposed model is applied to only one real system. Thus, the findings, such as the optimal and critical depth/width ratios and the potential improvement in system performance achieved by adopting the parallel processing policy, may not be applicable to other warehouses with different system configurations. Second, in order to develop a tractable model, some assumptions are made in this study, such as the random storage policy, the POSC dwell point policy and the FCFS scheduling policy, all of which could be relaxed. Therefore, for future research, it would be interesting to consider the effects of different storage assignment policies, dwell point policies, transaction scheduling policies and shuttle assignment rules, as well as blocking effects. Additionally, it would be interesting to investigate the system performance with transactions requiring more than one unit load and considering both single- and dual-command cycles. Future research could also include applying the proposed model to other systems where resources work simultaneously and developing more accurate and robust modeling approaches.

References
[1] Robotized and Automated Warehouse Systems: Review and Recent Developments
[2] Modeling, Analysis, and Design Insights for Shuttle-Based Compact Storage Systems
[3] Multi-Objective Optimisation Model of Shuttle-Based Storage and Retrieval System
[4] Graph-Based Solution for Performance Evaluation of Shuttle-Based Storage and Retrieval System
[5] Conceptualizing Tools for Autonomous Vehicle Storage and Retrieval Systems
[6] Design Optimization Models for Storage and Retrieval Systems Using Rail Guided Vehicles
[7] Interleaving Dynamics in Autonomous Vehicle Storage and Retrieval Systems
[8] An Efficient Cycle Time Model for Autonomous Vehicle Storage and Retrieval Systems
[9] Design Models for Unit Load Storage and Retrieval Systems Using Autonomous Vehicle Technology and Resource Conserving Storage and Dwell Point Policies
[10] Performance Analysis and Design Trade-Offs in Warehouses with Autonomous Vehicle Technology
[11] Analytical Models for Analysis of Automated Warehouse Material Handling Systems
[12] Analytical Model to Estimate Performances of Autonomous Vehicle Storage and Retrieval Systems for Product Totes
[13] An Open Queuing Network Based Tool for Performance Estimations in a Shuttle-Based Storage and Retrieval System
[14] Analysis of the Shuttle-Based Storage and Retrieval System
[15] Free Balancing for a Shuttle-Based Storage and Retrieval System
[16] Modelling and Design for a Shuttle-Based Storage and Retrieval System. International Journal of Production Research
[17] Optimization of Storage Location Assignment in Tier-To-Tier Shuttle-Based Storage and Retrieval Systems Based on Mixed Storage
[18] Scheduling Storage Process of Shuttle-Based Storage and Retrieval Systems Based on Reinforcement Learning. Complex System Modeling and Simulation
[19] Retrieval Scheduling in Crane-Based 3D Automated Retrieval and Storage Systems with Shuttles
[20] Energy Consumption Model for Shuttle-Based Storage and Retrieval Systems
[21] Travel Time Models for Deep-Lane Unit-Load Autonomous Vehicle Storage and Retrieval System (AVS/RS)
[22] Analytical Models for the Evaluation of Deep-Lane Autonomous Vehicle Storage and Retrieval System Performance
[23] Deep-Lane Storage of Time-Critical Items: One-Sided Versus Two-Sided Access
[24] An Approach for a Performance Calculation of Shuttle-Based Storage and Retrieval Systems with Multiple-Deep Storage
[25] A New Solution Approach for Multi-Stage Semi-Open Queuing Networks: An Application in Shuttle-Based Compact Storage Systems. Computers & Operations Research
[26] Travel Time Analysis of a New Automated Storage and Retrieval System
[27] Modeling Parallel Movement of Lifts and Vehicles in Tier-Captive Vehicle-Based Warehousing Systems
[28] Stochastic Modeling of Parallel Process Flows in Intra-logistics Systems: Applications in Container Terminals and Compact Storage Systems
[29] Solving Semi-Open Queuing Networks
[30] Queueing Networks and Markov Chains
[31] A Nested Semi-Open Queuing Network Model for Analyzing Dine-In Restaurant Performance

Acknowledgments: We acknowledge the study of Tappia and his colleagues in supporting our research with real data.
Author contributions: Conceptualization: Lei Deng, Ruimei Wang.