title: A Digital-Twin and Machine-Learning Framework for Ventilation System Optimization for Capturing Infectious Disease Respiratory Emissions
authors: Zohdi, T. I.
date: 2021-06-05
journal: Arch Comput Methods Eng
DOI: 10.1007/s11831-021-09609-3

Abstract: The pandemic of 2019 has led to enormous interest in all aspects of the modeling and simulation of infectious diseases. One central issue is the redesign and deployment of ventilation systems to mitigate the transmission of infectious diseases produced by respiratory emissions such as coughs. This work seeks to develop a combined Digital-Twin and Machine-Learning framework to optimize ventilation systems, building on the rapidly computable respiratory emission models developed in Zohdi (Comput Mech 64:1025-1034, 2020). This framework ascertains the placement and flow rates of multiple ventilation units, in order to optimally sequester particles released from respiratory emissions such as coughs, sneezes, etc. Numerical examples are provided to illustrate the framework.

The pandemic of 2019, due to SARS-CoV-2, the virus causing the disease named COVID-19 and commonly referred to as coronavirus, was responsible for nearly two million deaths worldwide in 2020 alone. It is well established that this virus primarily spreads through person-to-person contact, by respiratory droplets produced when an infected person coughs or sneezes. Infection occurs when these droplets come into contact with the eyes, nose or mouth of a nearby person, or when a person touches a contaminated surface and then makes contact with their eyes, nose or mouth. Since the virus is small, 0.06-0.14 microns in diameter, it can be contained in or attached to such emitted droplets. Droplets as small as one micron can carry enough viral load to cause an infection. A particular concern is the interaction of droplets with ventilation systems, which can capture emitted pathogens, but could also potentially enhance their propagation. This has implications for situation-specific safe distancing and the design of building filtration systems, air distribution, heating, air-conditioning and decontamination systems, for example those using UV-c and related technologies. In order to facilitate such system redesigns, fundamental analysis tools are needed that are easy to use. Accordingly, this work develops a combined Digital-Twin and Machine-Learning framework to optimize ventilation systems, building on the rapidly computable respiratory emission models developed in Zohdi [1]. The framework ascertains the placement and flow rates of multiple mobile ventilation units, in order to optimally sequester particles released from respiratory emissions such as coughs, sneezes, etc.

There have been dramatic advances in technologies associated with automation across many industries, with the potential to drastically improve system efficiency, quality and safety. Some approaches are methodical and systematic, while others are ad hoc and haphazard. In the world of systems engineering, increasingly sophisticated and integrated approaches for digital systems are appearing at a rapid rate. One key component of advancements in this area is the Digital-Twin: a digital replica of a complex physical system that can be safely manipulated and optimized in a virtual world and deployed in the physical world afterwards, reducing the cost of experiments and accelerating the development of new technologies.
Digital-Twins blend AI, machine learning, software analytics and data to create living digital computer models that can update and change in tandem with their physical counterparts, and ultimately control them in real time. Updates to the Digital-Twin are made continuously in real time, which necessitates technologies like rapid wireless communication, hyperspectral cameras, sensor fusion and fast simulations of process behavior. Today, there is no shortage of general simulation software; however, the fundamental limitations are ease of use, integration of models with in-field data, and easy deployment for researchers. A core issue across all domains is the ability of a system to adapt to rapid changes in the environment and in system capabilities by autonomously modifying its operation, with humans in the loop. By developing these tools for food systems researchers, we believe that we can enhance the ability of industry to conduct research that inevitably involves Digital-Twins, as well as benefit society and industry.

The presented work also involves blending Machine-Learning with Digital-Twins, whereby simulations iteratively learn from their mistakes and constantly evolve to improve and optimize the model, safely in a virtual environment. Such a simulation should be designed to run rapidly, in tandem with the real system. Accordingly, the Digital-Twin and Machine-Learning framework presented here combines rapidly computable models with genomic Machine-Learning methods to ascertain the placement and flow rates of multiple mobile ventilation units, in order to optimally sequester particles released from respiratory emissions such as coughs, sneezes, etc. Specifically:

• A model is developed for the propagation of emitted particles that is rapidly computable,
• A qualitative analysis of the model is discussed,
• A numerical discretization for the flow of the distribution of particles is constructed,
• A genomic-based Machine-Learning algorithm to optimize the mobile ventilation systems is developed, and
• Numerical examples are provided.

It is now widely accepted that masks are an integral part of controlling the spread of infectious diseases, such as COVID-19. There are three general categories of face masks: (1) Cloth Masks (for the general public), which provide some protection to the wearer, but mainly serve to stop the spread of viruses; (2) Surgical Masks (for healthcare workers and the general public), which provide partial protection; and (3) N95 Respirators (primarily for healthcare workers), which provide a high level of protection and use charged fibers (electrets) to attract particles of all sizes. N95 masks are excellent at trapping very small particles (below 0.1 microns), since their motion is random (Brownian), due to collisions with other particles, air molecules, etc., which forces them to collide with an electrically charged "sticky" fiber. They are also excellent at trapping large particles (above 0.3 microns), since their motion is effectively linear across the mask (because of their inertia) and they will collide with a fiber. They have more difficulty trapping medium-sized particles (0.1-0.3 microns), which travel with airflow patterns. The designation N95 means that the mask traps at least 95% of particles. However, there are problems, due to ill-fitting masks, re-use during shortages, the use of alcohols and solvents for mask cleaning, etc., which may damage the mask or neutralize the fiber charge.
Furthermore, the proper donning and doffing (removal from the face) of a mask is nontrivial, due to strap elongation, facial hair, etc. Therefore, one must assume that some particles get through, regardless of the mask type. Accordingly, in workplaces, some type of ventilation is needed, generally involving HEPA (High Efficiency Particulate Absorbing) grade filters, which remove 99.95% of particles equal to 0.3 microns in size, with efficiency increasing for particles larger or smaller than this size. Such filters originated with the German military in WWII for gas masks, where it was noted that mask efficiency increased by simply adding a piece of paper (cellulose) to the mask. Such systems entered commercial use in the 1950s for decontamination control in hospitals, nuclear facilities, homes, vehicles, etc. They are composed of randomly arranged fiberglass fibers between 0.5 and 2.0 microns in diameter. As with N95 masks, they are designed to capture a range of particles by similar mechanisms. Another common term is MERV (Minimum Efficiency Reporting Value, ASHRAE 1987), which is designed to report the worst-case performance of a filter for particles ranging from 0.3 to 10 microns, on a scale from 1 to 16 (the highest ratings corresponding to capture of at least 95%). It is essential that ventilation systems be given much more attention than ever before.

We remark that some systems also employ UV-c light for further decontamination, in combination with the ventilation system. A dosage of 1 J/cm² is considered adequate to destroy COVID-19, and many commercial mobile UV-c light systems now exist. UV-c light in the 254-265 nm range is considered optimal: wavelengths above this range are not effective, while wavelengths below it produce ozone, which is dangerous (Zohdi [2]). Heating contaminated items at 70 °C for 60 min is also effective, but impractical in a large workplace setting with sensitive equipment and people. Decontamination based on UV technology has become ubiquitous, with many variants now being proposed. UV light varies in wavelength from 10 to 400 nm, making it shorter than visible wavelengths and longer than X-rays. Short-wave UV light (UV-c) can damage DNA and sterilize surfaces, making it useful in the medical industry. This was first noted in 1878 (Downes and Blunt [3]), when the effect of short-wavelength light killing bacteria was discovered. By 1903 it was known that the most effective wavelengths were around 250 nm (UV-c), for which Niels Finsen won a Nobel Prize (for skin-based tuberculosis eradication using UV light). Contaminants in the indoor environment are almost entirely organic, carbon-based compounds, which break down when exposed to high-intensity UV at 240-280 nm. Despite the attractiveness of using UV-c light, the literature has shown that it is difficult to ensure that all surfaces are completely decontaminated, due to shadowing effects. Thus, ultraviolet germicidal irradiation (UVGI) is effective only as a component in a multistage process; alone, it carries the risk of residual contamination, and purely UV-c protocols should be adopted only if there is no other choice. UV-c can, however, be an integral part of a multistage process involving a combination of (a) gas vapors and (b) heat and humidity. The topic of decontamination technologies is of paramount interest (see Anderson et al. [4], Battelle [5], Boyce et al. [6], Card et al. [7], Heimbuch and Harish [8], Heimbuch et al. [9], Ito and Ito [10], Lin et al. [11], Kanemitsu [12], Lindsley et al. [13], Lore et al. [14], Marra et al. [15], Mills et al. [16] and Nerandzic et al. [17]), and the corresponding simulation of such processes has recently been undertaken in Zohdi [1] and is a topic of ongoing research.
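As a rough worked example of what this dosage implies, the required exposure time follows from dividing the dose by the lamp irradiance at the target surface; the irradiance value below is a hypothetical figure chosen only for illustration, not one from the source:

```latex
% Exposure time from dose D and a (hypothetical) surface irradiance I:
t = \frac{D}{I}
  = \frac{1\ \text{J/cm}^2}{2 \times 10^{-3}\ \text{W/cm}^2}
  = 500\ \text{s} \approx 8.3\ \text{min}
```

In practice, the irradiance at a surface falls off with distance from the lamp and with shadowing, which is precisely why multistage processes of the kind described above are recommended.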
In its most basic form, a cough can be considered as a high-velocity release of a random distribution of particles of various sizes into an ambient atmosphere (Fig. 1); see [18-35] for extensive reviews of coughs and other respiratory emissions. We will consider the model problem in Fig. 1. Following formulations for physically similar problems associated with particulate dynamics from the fields of blasts, explosions and fire embers (Zohdi [36-39]), we make the following assumptions (Zohdi [1]):

• We assume the same initial velocity magnitude for all particles under consideration, with a random distribution of outward directions away from the source of the cough. This implies that a particle non-interaction approximation is appropriate; thus, inter-particle collisions are negligible. This has been repeatedly verified by "brute-force" collision calculations using formulations found in Zohdi [40-43].
• We assume that the particles are spherical, with a random distribution of radii R_i, i = 1, 2, 3, ..., P_n, where P_n is the total number of particles. The masses are given by m_i = ρ_i (4/3)π R_i³, where ρ_i is the density of the particles.
• We assume that the cough particles are quite small and that the amount of rotation, if any, contributes negligibly to the overall trajectory of the particles. The equation of motion for the ith particle in the system is

m_i dv_i/dt = Ψ_i^tot = Ψ_i^grav + Ψ_i^drag,

with initial velocity v_i(0) and initial position r_i(0). The gravitational force is Ψ_i^grav = m_i g, where g = (g_x, g_y, g_z) = (0, 0, −9.81) m/s².
• For the drag, we will employ a general phenomenological model,

Ψ_i^drag = (1/2) ρ_a C_D ‖v^f − v_i‖ (v^f − v_i) A_i,

where C_D is the drag coefficient, A_i is the reference area (for a sphere, A_i = π R_i²), ρ_a is the density of the ambient fluid environment and v^f is the velocity of the surrounding medium which, in the case of interest, is air. We will assume that the velocity of the surrounding fluid medium (v^f) is given, implicitly assuming that the dynamics of the surrounding medium are unaffected by the particles. We will discuss these assumptions further later in the work.

A simplified Stokesian model, which can be solved analytically and provides some physical insight, is discussed in Zohdi [1]. A summary of that analysis is as follows. For a (low Reynolds number) Stokesian model, the differential equation for each particle is

m_i dv_i/dt = c_i (v^f − v_i) + m_i g,   where c_i = 6π μ_f R_i,

μ_f is the viscosity of the surrounding fluid (air), and the local Reynolds number for a particle is Re = 2 R_i ρ_a ‖v^f − v_i‖ / μ_f. In summary, the trends are:
• Large particles travel far and settle quickly, and
• Small particles do not travel far and settle slowly.
The ratio of the Stokesian drag force to gravity,

‖Ψ_i^drag‖ / ‖Ψ_i^grav‖ = 6π μ_f R_i ‖v^f − v_i‖ / (ρ_i (4/3)π R_i³ g) ∝ 1/R_i²,

indicates that for very small particles drag will dominate the settling process, while for larger particles gravity will dominate.

For the general model, in order to more accurately capture the effects of drag, one can take into account that the empirical drag coefficient varies with the Reynolds number. For example, consider the following piecewise relation (Chow [23]):

C_D = 24/Re for 0 < Re ≤ 1,
C_D = 24/Re^0.646 for 1 < Re ≤ 400,
C_D = 0.5 for 400 < Re ≤ 3×10⁵,
C_D = 3.66×10⁻⁴ Re^0.4275 for 3×10⁵ < Re ≤ 2×10⁶,
C_D = 0.18 for Re > 2×10⁶,

where, as in the previous section, the local Reynolds number for a particle is Re = 2 R_i ρ_a ‖v^f − v_i‖ / μ_f and μ_f is the fluid viscosity. We note that in the zero Reynolds number limit, the drag is Stokesian. In order to solve the governing equation, a numerical time-stepping scheme is employed, described next.
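To make the drag model concrete, the following minimal Python sketch implements a piecewise drag-coefficient law of the Chow [23] type together with the phenomenological drag force above. The specific Reynolds-number breakpoints and exponents follow a commonly used fit of this kind and should be treated as an assumption here, not as the exact table from the source:

```python
import numpy as np

def drag_coefficient(Re):
    """Piecewise empirical drag law of the Chow type (assumed breakpoints)."""
    if Re <= 1.0:
        return 24.0 / Re                   # Stokesian limit
    elif Re <= 400.0:
        return 24.0 / Re**0.646
    elif Re <= 3.0e5:
        return 0.5
    elif Re <= 2.0e6:
        return 3.66e-4 * Re**0.4275
    else:
        return 0.18

def drag_force(v_f, v_i, R_i, rho_a=1.2, mu_f=1.8e-5):
    """Phenomenological drag on a sphere of radius R_i [m] in air."""
    dv = v_f - v_i                         # fluid velocity relative to the particle
    speed = np.linalg.norm(dv)
    if speed == 0.0:
        return np.zeros(3)                 # no relative motion, no drag
    Re = 2.0 * R_i * rho_a * speed / mu_f  # local particle Reynolds number
    A_i = np.pi * R_i**2                   # reference (cross-sectional) area
    return 0.5 * rho_a * drag_coefficient(Re) * speed * dv * A_i
```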
To illustrate the model, we utilize an example from Zohdi [1], without a mask. A mean particle radius was chosen to be R̄ = 0.0001 m, with variations according to R_i = R̄(1 + A ξ_i), where A = 0.9975 and ξ_i is a random variable with −1 ≤ ξ_i ≤ 1; this relation was used to generate the particle radii. The initial trajectories were determined from the following algorithm:

• Specify relative direction "cone" parameters N^c = (N^c_x, N^c_y, N^c_z) and perturbation amplitudes A^c = (A^c_x, A^c_y, A^c_z).
• For each particle, i = 1, 2, 3, ..., P_n, construct a (perturbed) trajectory vector N_i = (N^c_x + A^c_x η_ix, N^c_y + A^c_y η_iy, N^c_z + A^c_z η_iz), where −1 ≤ η_ix ≤ 1, 0 ≤ η_iy ≤ 1 and −1 ≤ η_iz ≤ 1.
• For each particle, normalize the trajectory vector: n_i = N_i / ‖N_i‖.
• For each particle, the velocity vector is constructed by projection onto the normalized trajectory vector: v_i(0) = ‖v(0)‖ n_i.

Following Zohdi [1], we integrate the velocity numerically with a forward-Euler update, v_i(t + Δt) = v_i(t) + (Δt/m_i) Ψ_i^tot(t). The position is then obtained by integrating again: r_i(t + Δt) = r_i(t) + Δt v_i(t + Δt). This approach has been used repeatedly for a variety of physically similar drift-type problems in Zohdi [36-39].

In order to illustrate the model, the simulation parameters of Zohdi [1] were chosen. An extremely small (relative to the total simulation time) time-step size of Δt = 10⁻⁶ s was used. Further reductions of the time-step size produced no noticeable changes in the results; thus, the solutions generated can be considered to have negligible numerical error. The simulations took under 10 s on a standard laptop. The algorithm generated 59941 particles, ranging from 2.5×10⁻⁷ m ≤ R_i ≤ 2×10⁻⁴ m (i.e., 0.25 microns ≤ R_i ≤ 200 microns). We used a trajectory cone of N^c = (0, 1, 0) and A^c = (1, 0.5, 1) in the example given. Figure 2 illustrates the results for the parameters above (for v^f_y = 0). We refer the reader to Zohdi [1] for more details.

The mask model includes an extra term in the drag resistance for particles in the domain of the mask ahead of the face, scaled by the mask reference area A_s = 4π R². To account for multiple interacting vents, we assume a radial distance decay model for the flow field from each vent, j = 1, 2, ..., N, in which the flow induced by vent j decays with the distance ‖x − r^v_j‖, where r^v_j is the position of the jth vent and d is a decay factor. Figure 3 illustrates the flow patterns with this model for 4 randomly placed vents; these will be optimized later in the work. In summary, the overall model is the particle equation of motion above, with the ambient velocity v^f given by the superposed vent flow fields and the drag augmented by the mask term where applicable.

Remark 4: A fully resolved, two-way coupled system of particles and surrounding fluid would require a fine spatio-temporal discretization, using Finite Element, Finite Difference or Finite Volume methods, of the Navier-Stokes equations for the surrounding fluid, comprising the balance of mass for ρ(x), the density field of the fluid, and the balance of momentum,

ρ dv/dt = ∇·σ + ρ f,   with   σ = −P I + λ tr(D) I + 2μ D,

where v(x) is the fluid velocity field, σ(x) is the fluid stress field, D(x) is the fluid velocity gradient field, f(x) is the body force field, P(x) is the fluid pressure field, and λ(x) and μ(x) are fluid material property fields. Additionally, the first law of thermodynamics could be included (along with equations for various chemical reactions), which reads

ρ dw/dt = σ : D − ∇·q + ρ z,

where w(x) is the stored energy in the fluid, q(x) is the heat flux field, and z is the heat source field per unit mass. This is discussed further at the end of this work.
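Collecting the pieces above into a minimal Python sketch: particle radii are generated from the randomized relation, initial velocities from the perturbed cone, and the particles are advanced with the forward-Euler velocity update and trailing position update. The exponential form of the vent decay, the random seed and the helper names are illustrative assumptions, and drag_force is the routine from the previous sketch:

```python
import numpy as np

rng = np.random.default_rng(0)

def generate_radii(Pn, R_mean=1.0e-4, A=0.9975):
    """Radii R_i = R_mean*(1 + A*xi), with xi uniform in [-1, 1]."""
    return R_mean * (1.0 + A * rng.uniform(-1.0, 1.0, Pn))

def initial_velocities(Pn, speed, Nc=(0.0, 1.0, 0.0), Ac=(1.0, 0.5, 1.0)):
    """Cone-perturbed, normalized initial directions scaled by the cough speed."""
    eta = rng.uniform(-1.0, 1.0, (Pn, 3))
    eta[:, 1] = np.abs(eta[:, 1])                  # 0 <= eta_y <= 1, per the text
    N = np.asarray(Nc) + np.asarray(Ac) * eta
    return speed * N / np.linalg.norm(N, axis=1, keepdims=True)

def vent_flow(x, vent_positions, vent_strengths, d=1.0):
    """Assumed exponential radial-decay flow field, directed toward each vent."""
    v = np.zeros(3)
    for r_v, V in zip(vent_positions, vent_strengths):
        dx = r_v - x
        dist = max(np.linalg.norm(dx), 1e-12)
        v += V * np.exp(-d * dist) * dx / dist
    return v

def advance(r, v, R, m, dt, vent_positions, vent_strengths,
            g=np.array([0.0, 0.0, -9.81])):
    """Forward-Euler velocity update, then trailing position update."""
    for i in range(len(R)):
        vf = vent_flow(r[i], vent_positions, vent_strengths)
        F = m[i] * g + drag_force(vf, v[i], R[i])  # drag_force: previous sketch
        v[i] += dt * F / m[i]
        r[i] += dt * v[i]
    return r, v
```

The masses follow from m_i = ρ_i (4/3)π R_i³; a production implementation would vectorize the particle loop, which is straightforward here because the particles do not interact.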
The rapid rate at which these simulations can be completed enables the exploration of inverse problems, seeking to determine which parameter combinations deliver a desired result (Fig. 4). In order to cast the objective mathematically, we set the problem up as a Machine Learning Algorithm (MLA); specifically, a Genetic Algorithm (GA) variant, which is well-suited for nonconvex optimization. Following Zohdi [39, 44-47], we formulate the objective as a cost function minimization problem that seeks system parameters that match a desired response. We systematically minimize Eq. 4.1, min_Λ Π, by varying the design parameters Λ = {panel size, spacing, angles, ...}. The system parameter search is conducted within the constrained ranges Λ_i^(−) ≤ Λ_i ≤ Λ_i^(+), i = 1, 2, ..., N. These upper and lower limits would, in general, be dictated by what is physically feasible.

Here we follow Zohdi [39, 45-47] in order to minimize Eq. 4.1, which we will refer to as a "cost function". Cost functions such as Eq. 4.1 are nonconvex in design parameter space and often nonsmooth, so their minimization is usually difficult with direct application of gradient methods. This motivates derivative-free search methods, for example those found in Machine Learning Algorithms (MLA's). One of the most basic subsets of MLA's are so-called Genetic Algorithms (GA's). Typically, one will use a GA first in order to isolate multiple local minima, and then use a gradient-based algorithm in these locally convex regions, or reset the GA to concentrate its search over these more constrained regions. GA's are typically the simplest scheme with which to start the analysis, and one can, of course, use more sophisticated methods if warranted. For a review of GA's, see the pioneering work of John Holland ([48, 49]), as well as Goldberg [50], Davis [51], Onwubiko [52] and Goldberg and Deb [53]. The MLA/GA approach is extremely well-suited for nonconvex, nonsmooth, multicomponent, multistage systems and, broadly speaking, involves a population of candidate designs ("genes") that is ranked by fitness, mated and re-seeded over successive generations. Following Zohdi [54, 45-47], the algorithm proceeds, per generation, by (1) randomly generating a starting population of genes, (2) evaluating the cost of each gene, (3) ranking the genes by cost, (4) mating the top-ranked pairs, producing offspring of the form λ^new = Φ λ^(I) + (1 − Φ) λ^(II), where, for this operation, the components Φ_i are random numbers such that 0 ≤ Φ_i ≤ 1, and (5) eliminating the bottom performers and enriching the population with new random genes. If one selects the mating parameter Φ to be greater than one and/or less than zero, one can induce "mutations", i.e., characteristics that neither parent possesses. However, this is somewhat redundant with the introduction of new random members of the population in the current algorithm. If one does not retain the parents in the algorithm above, it is possible that inferior-performing offspring may replace superior parents. Thus, top parents should be kept for the next generation. This guarantees a monotone reduction in the cost function. Furthermore, retained parents do not need to be re-evaluated (or re-ranked) in the next generation, making the algorithm less computationally expensive. Numerous studies by the author have shown that the advantages of parent retention outweigh inbreeding, for sufficiently large population sizes. Finally, we remark that this algorithm is easily parallelizable.

After application of such a global search algorithm, one can apply a gradient-based method, if the cost function is sufficiently smooth in that region of the parameter space. In other words, if one has located a convex portion of the parameter space with a global genetic search, one can employ gradient-based procedures locally to minimize the objective function further, since they are generally much more efficient for convex optimization of smooth functions. An exhaustive review of these methods can be found in the texts of Luenberger [55] and Gill, Murray and Wright [56].
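A minimal Python sketch of the MLA/GA loop described above, assuming a generic scalar cost function (the function names and the exact retention/refill proportions are illustrative assumptions, not the paper's settings): the top performers are retained unchanged, which guarantees the monotone cost reduction noted in the text; adjacent ranked parents are mated componentwise with random Φ in [0, 1]; and the remainder of the population is refilled with new random genes.

```python
import numpy as np

rng = np.random.default_rng(0)

def ga_minimize(cost, lower, upper, pop_size=24, keep=6, generations=50):
    """Genetic algorithm with parent retention (sketch of a Zohdi-type MLA/GA)."""
    dim = len(lower)
    pop = rng.uniform(lower, upper, (pop_size, dim))  # random starting genes
    for _ in range(generations):
        costs = np.array([cost(gene) for gene in pop])
        pop = pop[np.argsort(costs)]                  # rank genes, best first
        children = []
        for p in range(0, keep - 1, 2):               # mate adjacent ranked pairs
            phi = rng.uniform(0.0, 1.0, dim)          # componentwise 0 <= phi <= 1
            children.append(phi * pop[p] + (1.0 - phi) * pop[p + 1])
            children.append((1.0 - phi) * pop[p] + phi * pop[p + 1])
        n_new = pop_size - keep - len(children)       # refill with new random genes
        newcomers = rng.uniform(lower, upper, (n_new, dim))
        pop = np.vstack([pop[:keep], children, newcomers])  # parents retained
    costs = np.array([cost(gene) for gene in pop])
    return pop[np.argmin(costs)], costs.min()
```

For the present application, cost(Λ) would penalize uncaptured emissions, e.g., the fraction of emitted particles not sequestered by the vents within the time limit T; the population size of 24 and the 50 generations echo the example reported below.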
The system was optimized via Eq. 4.1 by varying 16 design parameters (the placements and flow rates of the vents); Table 1 and Fig. 5 illustrate the results. The algorithm was automatically reset around the best gene every 10 generations. The entire 50-generation simulation, with 24 genes per generation (1200 total designs), took a few minutes on a laptop, making it ideal as a design tool. Shown are the optimization results for successively longer time limits of T = 1.75, 2.0, 2.25 and 2.5 s for the 16-parameter set: the best performing gene (design parameter set, in red) as a function of successive generations, as well as the average performance of the entire population of genes (in green). Allowing successively longer simulation times allows the vents to trap more particles. We note that, since each complete simulation takes on the order of 0.25 s for a given set of parameters, several thousand parameter sets can be evaluated in well under an hour, without even exploiting the inherent parallelism of the MLA/GA.

More sophisticated Machine-Learning approaches, such as Artificial Neural Networks, are built from (a) synapses, which multiply inputs by weights that represent the inputs' relevance to the desired output, (b) neurons, which add the outputs from all incoming synapses and apply activation functions, and (c) training, which recalibrates the weights (w_i, i = 1, 2, ..., N) to achieve a desired overall output. Ultimately, one constructs a system with optimized weights to mimic an artificial "input-output" brain. For extremely physically-complex systems, these techniques remain unproven, but they are actively being investigated in a number of scientific fields.

This work developed a combined Digital-Twin and Machine-Learning framework to optimize ventilation systems, building on the rapidly computable respiratory emission models developed in Zohdi [1]. The framework ascertains the placement and flow rates of multiple ventilation units, in order to optimally sequester particles released from respiratory emissions such as coughs, sneezes, etc., and numerical examples were provided to illustrate it. An extension of the analysis is to consider cases where the change in the surrounding fluid's behavior, due to the motion of the particles and the cough itself, may be important. The result is a system of coupled equations between the particles and the fluid, requiring spatio-temporal discretization, such as high-fidelity Finite Element or Finite Difference methods, of the classical equations governing the surrounding fluid mechanics (Navier-Stokes). Generally, such models are computationally quite expensive and ineffective for rapid real-time use, but they are useful for detailed offline background analyses, where a rapid response is a nonissue. The continuum discretization is usually combined with a Discrete Element Method for the particle dynamics. There are a variety of such approaches; for example, see Avci and Wriggers [57], Onate et al. [58, 59], Leonardi et al. [60], Onate et al. [61], Bolintineanu et al. [62] and Zohdi [40, 43]. Such models are significantly more complex than the models used in the current work. Along these lines, in Zohdi [40, 43], more detailed, computationally intensive models were developed to characterize the motion of small-scale particles embedded in a flowing fluid, where the dynamics of the particles affect the dynamics of the fluid and vice versa. In such a framework, a fully implicit Finite-Difference discretization of the Navier-Stokes equations was used for the fluid, and a direct particle-dynamics discretization was performed for the particles.
Because of the large computational difficulty and expense of a conforming spatial discretization for large numbers of embedded particles, simplifying assumptions were made for the coupling, based on semi-analytical computation of drag coefficients, which allows for the use of coarser meshes. Even after these simplifications, the particle-fluid system remained strongly coupled. The approach taken in that work was to construct a sub-model for each primary physical process. In order to resolve the coupling, a recursive staggering scheme was constructed, built on works found in Zohdi [40-43]. The procedure was as follows (at a given time increment): (1) each submodel equation (fluid or particle-system) is solved individually, "freezing" the other (coupled) fields in the system, allowing only the primary field to be active; (2) after the solution of each submodel, the associated field variable is updated and the next submodel is solved; and (3) the process is then repeated until convergence. The time-steps were adjusted to control the rates of convergence, which are dictated by changes in the overall physics. Specifically, the approach was a staggered implicit time-stepping scheme, with an internal recursion that automatically adapted the time-step sizes to control the rates of convergence within a time-step. If the process did not converge (below an error tolerance) within a preset number of iterations, the time-step was adapted (reduced) by utilizing an estimate of the spectral radius of the coupled system. The developed approach can be incorporated within any standard computational fluid mechanics code based on finite difference, finite element, finite volume or discrete/particle element discretization (see Labra and Onate [63], Onate et al. [58, 59], Rojek et al. [64] and Avci and Wriggers [57]).
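A schematic Python sketch of this recursive staggering scheme, under the assumption of two generic sub-model solvers: solve_fluid and solve_particles are hypothetical callbacks, and the simple step-halving below stands in for the spectral-radius-based time-step adaptation used in the original works.

```python
import numpy as np

def residual(a, b):
    """Hypothetical measure of change between successive staggered iterates."""
    return float(np.linalg.norm(np.asarray(a) - np.asarray(b)))

def staggered_step(t, dt, fluid, particles, solve_fluid, solve_particles,
                   tol=1e-6, max_iters=20, dt_min=1e-8):
    """One adaptive, staggered implicit time step (schematic)."""
    while True:
        f_new, p_new = fluid, particles
        for _ in range(max_iters):
            f_next = solve_fluid(f_new, p_new, dt)       # fluid solve, particles frozen
            p_next = solve_particles(f_next, p_new, dt)  # particle solve, fluid updated
            err = residual(f_next, f_new) + residual(p_next, p_new)
            f_new, p_new = f_next, p_next
            if err < tol:
                return t + dt, dt, f_new, p_new          # converged: accept the step
        dt *= 0.5                                        # no convergence: shrink dt,
        if dt < dt_min:                                  # then repeat the whole step
            raise RuntimeError("time-step underflow: coupling did not converge")
```

Each accepted step returns the advanced time, the (possibly reduced) step size and the converged fields, so the outer loop can also enlarge dt again when convergence is fast, mirroring the rate-controlled adaptivity described above.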
References

Modeling and simulation of the infection zone from a cough
Rapid simulation of viral decontamination efficacy with UV irradiation
On the influence of light upon protoplasm
Inactivation of food-borne enteropathogenic bacteria and spoilage fungi using pulsed-light
Instructions for healthcare personnel: preparation of compatible N95 respirators for decontamination by the Battelle Memorial Institute using the Battelle decontamination system
Modern technologies for improving cleaning and disinfection of environmental surfaces in hospitals
UV sterilization of personal protective equipment with idle laboratory biosafety cabinets during the COVID-19 pandemic (preprint)
Research to mitigate a shortage of respiratory protection devices during public health emergencies (Report to the FDA No. HHSF223201400158C)
A pandemic influenza preparedness study: use of energetic methods to decontaminate filtering facepiece respirators contaminated with H1N1 aerosols and droplets
Absorption spectra of deoxyribose, ribosephosphate, ATP and DNA by direct transmission measurements in the vacuum-UV (150-190 nm) and far-UV (190-260 nm) regions using synchrotron radiation as a light source
Relative survival of Bacillus subtilis spores loaded on filtering facepiece respirators after five decontamination methods
Does incineration turn infectious waste aseptic?
Effects of ultraviolet germicidal irradiation (UVGI) on N95 respirator filtration performance and structural integrity
Effectiveness of three decontamination treatments against influenza virus applied to filtering facepiece respirators
No-touch disinfection methods to decrease multidrug-resistant organism infections: a systematic review and meta-analysis
Ultraviolet germicidal irradiation of influenza-contaminated N95 filtering facepiece respirators
Evaluation of an automated ultraviolet radiation device for decontamination of Clostridium difficile and other healthcare-associated pathogens in hospital rooms
Airborne spread of infectious agents in the indoor environment
The size and the duration of air-carriage of expiratory droplets and droplet-nuclei
Human cough as a two-stage jet and its role in particle transport
Study on transport characteristics of saliva droplets produced by coughing in a calm indoor environment
An introduction to computational fluid dynamics
Size distribution and sites of origin of droplets expelled from the human respiratory tract during expiratory activities
Particle image velocimetry of human cough
Study on the initial velocity distribution of exhaled air from coughing and speaking
Coughing and aerosols
How far droplets can move in indoor environments: revisiting the Wells evaporation-falling curve
Flow dynamics and characterization of a cough
Modeling the fate of expiratory aerosols and the associated infection risk in an aircraft cabin environment
CFD analysis of the human exhalation flow using different boundary conditions and ventilation strategies
Control of airborne infectious diseases in ventilated spaces
Dispersion of coughed droplets in a fully-occupied high-speed rail cabin
Quantity and size distribution of cough-generated aerosol particles produced by influenza patients during and after illness
The ventilation of buildings and other mitigating measures for COVID-19: a focus on wintertime
A note on firework blasts and qualitative parameter dependency
On the thermomechanics and footprint of fragmenting blasts
Modeling the spatio-thermal fire hazard distribution of incandescent material ejecta in manufacturing
A machine-learning framework for rapid adaptive digital-twin based fire-propagation simulation in complex environments
Computation of strongly coupled multifield interaction in particle-fluid systems
On the dynamics of charged electromagnetic particulate jets
Numerical simulation of charged particulate cluster-droplet impact on electrified surfaces
Embedded electromagnetically sensitive particle motion in functionalized fluids
An agent-based computational framework for simulation of global pandemic and social response on planet X
Mechanistic modeling of swarms
Multiple UAVs for mapping: a review of basic modeling, simulation and applications
The Game of Drones: rapid agent-based machine-learning models for multi-UAV path planning
Adaptation in natural and artificial systems
Artificial adaptive agents in economic theory
Genetic algorithms in search, optimization and machine learning
Handbook of genetic algorithms
Introduction to engineering design optimization
Special issue on genetic algorithms
A machine-learning framework for rapid adaptive digital-twin based fire-propagation simulation in complex environments
Introduction to linear and nonlinear programming
Practical optimization
A DEM-FEM coupling approach for the direct numerical simulation of 3D particulate flows
Advances in the particle finite element method for the analysis of fluid-multibody interaction and bed erosion in free surface flows
Possibilities of the particle finite element method for fluid-soil-structure interaction problems
Coupled DEM-LBM method for the free-surface simulation of heterogeneous suspensions
Lagrangian analysis of multiscale particulate flows with the particle finite element method
Particle dynamics modeling methods for colloid suspensions
High-density sphere packing for discrete element method simulations
Comparative study of different discrete element models and evaluation of equivalent micromechanical parameters

Acknowledgements: The author states that there is no conflict of interest. This work has been partially supported by the UC Berkeley College of Engineering, Lam Research and the USDA AI Institute for Next Generation Food Systems (AIFS), USDA award number 2020-67021-32855.