Biology and medicine in the landscape of quantum advantages
Benjamin A. Cordier, Nicolas P. D. Sawaya, Gian G. Guerreschi, Shannon K. McWeeney
2021-12-01

Abstract: Quantum computing holds significant potential for applications in biology and medicine, spanning from the simulation of biomolecules to machine learning approaches for subtyping cancers on the basis of clinical features. This potential is encapsulated by the concept of a quantum advantage, which is typically contingent on a reduction in the consumption of a computational resource, such as time, space, or data. Here, we distill the concept of a quantum advantage into a simple framework that we hope will aid researchers in biology and medicine pursuing the development of quantum applications. We then apply this framework to a wide variety of computational problems relevant to these domains in an effort to i) assess the potential of quantum advantages in specific application areas and ii) identify gaps that may be addressed with novel quantum approaches. Bearing in mind the rapid pace of change in the fields of quantum computing and classical algorithms, we aim to provide an extensive survey of applications in biology and medicine that may lead to practical quantum advantages.

The notion that a quantum computer may be more powerful than a classical computer was first conceived some forty years ago in the context of simulating physical systems [1, 2, 3]. Theoretical models of quantum computers quickly followed [4, 5, 6]. Then, in 1993, 11 years after Feynman's talk on simulating physics [3], early formal evidence that a universal quantum computer may be more powerful than its classical counterpart arrived with proof of a superpolynomial quantum advantage on an artificial problem, recursive Fourier Sampling [7]. This was shortly followed by the development of the first quantum algorithm for a practical problem, prime factorization, by Peter Shor in 1994, also yielding a superpolynomial advantage [8]. Shor's algorithm was the first example of a quantum algorithm that could have significant real-world implications by threatening the RSA cryptosystem [9], which is widely used to secure the Web. Its discovery initiated a flurry of research into quantum algorithms, a now burgeoning subfield of quantum information science (QIS), that has continued to the present.

More recently, in 2019, experimental evidence of the increased computational power of quantum computers was provided via the first successful quantum primacy [fn 1] experiment on a 53-qubit superconducting device [12], with similar experiments following shortly after [13, 14, 15]. While it has been debated whether these experiments represent true examples of quantum primacy [fn 2], they have nonetheless galvanized the QIS field around the practical potential of quantum computers in the near term.

[fn 1] We use the term quantum primacy [10] in lieu of the original quantum supremacy [11], given recent discussions around the ethics of language in the sciences. In the context of this work, quantum primacy is considered a special case of a quantum advantage.

[fn 2] While each experiment represents a state-of-the-art demonstration of the capabilities of quantum devices, they are not without limitations.
For example, whether the classical simulation performed in [12] constitutes the practical limit of classical computation has been contested [16, 17], although recent work has shown substantial advantages in energy consumption for the specific experiment [18] (and in general [19]). Further, while the superconducting devices in [12, 14, 15] are programmable, the photonic device in [13] was i) hard-coded to perform the specific task and ii) a Gaussian boson sampler (GBS) implementing a limited, non-universal model of quantum computation.

Quantum hardware has now entered the noisy intermediate-scale quantum (NISQ) era [20], a stage of maturity characterized by devices with low qubit counts, high error rates, and correspondingly short coherence times. While it is unclear how long the transition from the NISQ era towards fault tolerant quantum computation (FTQC) will take, whether it occurs via error correction [21] or inherently fault tolerant hardware based on topological properties [22], common estimates range from several years to several decades. Yet, with first-generation NISQ devices moving from the lab to the cloud, now is an opportune time for computationalists in biology and medicine to begin exploring the value that quantum approaches may bring to their research toolbox.

Over the past three decades, biology and medicine have evolved into highly quantitative fields [23]. Areas of inquiry span from foundational questions on the origins of life [24] and the relationships between protein structure and biological function [25] to ones with a direct impact on clinical practice, such as those concerned with the oncogenesis of cancer [26, 27], the development of novel drugs [28], and the precise targeting of therapeutics on the basis of genetic mutations [29] and other clinical indicators [30]. However, despite the substantial progress facilitated by computational methods and the expansion of high-performance computing (HPC) environments, fundamental constraints on modeling biological and clinical systems persist.

System complexity is one example. This constraint arises from both first-order biological complexity, as can be seen in the metabolic processes of individual cells [31] or the binding of protein receptors to ligands [32], and higher-order clinical complexity, occurring at the intersection of complex biological, behavioral, socioeconomic, cultural, and environmental factors [33]. On one hand, this system complexity has made biological and clinical research a verdant playground for the development of many novel, efficient computational algorithms and approaches. On the other hand, practical algorithms typically manage system complexity via reductionist frameworks. A consequence of this is that existing computational models often fail to capture and reconcile important system dynamics. Quantum computers, if sufficiently robust ones can be built, promise to fundamentally reduce the algorithmic complexity of constructing and analyzing many of these models. This may allow solutions to many difficult problems to be computed with far greater efficiency, which could in turn reduce compute times and improve the fidelity of practical models.

The second constraint is one of scale. Looking to healthcare alone, as much as 153 exabytes of data were generated in 2013, with a projected annual growth rate of 48% [34]. Extrapolating this growth rate, it is plausible that over 2,300 exabytes were generated in 2020 (153 × 1.48^7 ≈ 2,400 exabytes). Similar data challenges also exist in biology.
For example, the high-throughput sequencing revolution has led to exabytes of highly complex genomic, epigenomic, transcriptomic, proteomic, and metabolomic data types (among others). To manage these large data volumes, centralized data repositories have proliferated (e.g. see the ever-growing dbGaP [35] or the more recent Genomic Data Commons [36]). These massive data resources are crucial to the re-use of high-value data in secondary analyses and reproducibility studies. However, even with the wide use of HPC infrastructures, large bioinformatics and computational biology workflows often extend for days, weeks, or longer. In recent years, this challenge has grown with the expansion of other areas demanding significant computational resources. Examples include high-resolution imaging (e.g. cryo-EM) and massive deep learning inference pipelines with on the order of 10^9 (or greater) model parameters. While it is not anticipated that scalability constraints will be addressed by quantum computing technologies in the near term, FTQC devices may offer a partial solution to some of these challenges over the long term.

Given these challenges across biology and medicine and the potential of quantum computing, a common question among domain computationalists interested in quantum computing is: "When will I be able to leverage quantum computing for [insert preferred application]?" The answer to this question is complex and can be factored into a number of considerations:

• How can it be ascertained whether a problem will benefit from a quantum advantage?
• What scale of problem instance is required to meaningfully demonstrate such an advantage?
• What hardware and software are required to translate a quantum advantage from theory to practice?
• How can we detect and measure a practical quantum advantage once it has been achieved?

The goal of the work presented here is two-fold. The first goal is to consider what is known of the answers to these questions for a broad variety of potential applications in biology and medicine. To do this, we have sought to i) distill current knowledge around quantum advantages, quantum algorithms, quantum hardware, and a broad number of specific application areas in biology and medicine and ii) identify gaps in existing theory and implementations. Of course, we cannot hope to account for every possible application in such a large domain. Further, prior work [37, 38, 39, 40, 41] in this field has already considered aspects of these questions for a variety of application areas (e.g. genomics, drug design, clinical phenotyping, neuroimaging). Thus, our second goal is to leverage our own exploration of these questions towards a greater framework to aid domain computationalists as they embark on their own explorations of quantum applications that have the potential to deliver practical quantum advantages in the near term. It is our hope that this work can serve as both i) a resource for computationalists in biology and medicine interested in developing quantum computing proof of principles for their field in the NISQ era and ii) a guide for quantum algorithms researchers interested in developing algorithms targeting applications in biology and medicine. As such, we have written this perspective with an interdisciplinary audience in mind and endeavored to minimize the need for a formal background in biology, medicine, quantum information, quantum algorithms, quantum hardware, and computational complexity.
If additional background on the QIS field is desired by the reader, we refer them to the references highlighted in the footnote below [fn 3]. In addition, background on some of the topics in biology and medicine we discuss can be found in a second footnote [fn 4].

[fn 3] See the following reviews and resources for more information on quantum information theory [42], quantum algorithms [43, 44], and quantum hardware [45]. For an exhaustive discussion of near term quantum algorithms and their prospects, we direct the reader to the recently published paper by Bharti et al. [46], alongside the more focused reviews on quantum machine learning [47] and variational quantum algorithms [48]. See the recent review by Eisert et al. for a discussion of hardware and software certification and benchmarking [49] and a review by Gheorghiu et al. for a discussion of the more stringent verification of a quantum computation [50]. Finally, while the standard textbook for the QIS field is Nielsen and Chuang [51], see [52] for a recent, application-oriented review of quantum algorithms and their circuit implementations.

[fn 4] See the following reviews and resources for more information on computational approaches in drug discovery [28], phylogenetics [53], medical image segmentation [54], de novo assembly [55], biological sequence error correction [56], and deep learning applications in biology and medicine [57].

The Landscape of Quantum Advantages is a set of concepts that together are central to the identification, characterization, and realization of quantum advantages. These concepts, which we present below, include a classification scheme for quantum advantages, known hardware constraints that can influence their practical implementation, and context-based evidence levels for establishing their existence and practical realization.

How can we define a quantum advantage from the theoretical perspective? While multiple mathematical formulations have been described in the literature (e.g. see [58, 59]), for our purposes, we simply state that a theoretical quantum advantage is defined by four key properties:

1. Problem: A formal computational problem.
2. Algorithms: A classical algorithm and a quantum algorithm, each of which solves the computational problem.
3. Resources: One or more resources, such as time, space, or data, that are consumed by both the classical and quantum algorithms.
4. Bounds: Analytical bounds on the resource consumption (e.g. a worst-case time complexity bound) for both the classical and quantum algorithms.

A range of theoretical quantum advantages have been identified. For example, for the general problem of unstructured search, Grover's algorithm [60, 61] yields a quadratic advantage with a query complexity of O(√N) relative to the O(N) queries required classically [fn 5] (note that a query in this context can be thought of as a function call). On the other end of the spectrum, the k-Forrelation problem [62, 63] admits a quantum algorithm with the largest known (superpolynomial) quantum advantage: whereas Ω(N^(1−1/k)) queries are required by a classical randomized algorithm (omitting logarithmic factors), only k/2 queries are required by the quantum algorithm. In most cases, theoretical quantum advantages should be thought of as loose approximations of the degree of quantum advantage that may be possible in practice.

A theoretical quantum advantage can be classified on the basis of two factors (Figure 1).
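The four properties above can also be captured in a small, purely illustrative data structure; the class and field names below are our own and are not drawn from the paper or from any library.

```python
# Purely illustrative: a record of the four properties that define a theoretical
# quantum advantage, instantiated for unstructured search (Grover vs. exhaustive search).
from dataclasses import dataclass

@dataclass
class TheoreticalQuantumAdvantage:
    problem: str              # 1. Problem: the formal computational problem
    classical_algorithm: str  # 2. Algorithms: the classical algorithm ...
    quantum_algorithm: str    #    ... and the quantum algorithm being compared
    resource: str             # 3. Resources: the resource consumed by both algorithms
    classical_bound: str      # 4. Bounds: analytical bound for the classical algorithm
    quantum_bound: str        #    analytical bound for the quantum algorithm

grover_advantage = TheoreticalQuantumAdvantage(
    problem="unstructured search over N items",
    classical_algorithm="exhaustive search",
    quantum_algorithm="Grover's algorithm [60, 61]",
    resource="oracle queries",
    classical_bound="O(N)",
    quantum_bound="O(sqrt(N))",
)
print(grover_advantage)
```

Comparing the two bound fields for a given resource is what the classification described next formalizes.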
The first of these factors relates to the classical hardness of the computational problem, which is defined by the best known classical algorithm [fn 6] or a provable upper bound. In particular, a computational problem may be classified as easy or hard according to whether its classical algorithmic complexity (typically for a worst case input) is polynomial or superpolynomial, respectively [76]. The second factor relates to the size of the advantage yielded by the quantum algorithm relative to the classical algorithm. Often, these advantages result from a reduction in complexity for a computational resource. Like algorithmic complexity, the size of an advantage may also be classified as polynomial or superpolynomial. Here, we would like to make clear that the terms "easy" and "hard" refer only to the cost of classical computation; they are not intended to refer to the attainability of a practical quantum advantage or the difficulty of implementing a quantum approach.

[fn 5] Note that it is convention to take N to mean N = 2^n. This is due to the exponential state space (known as a Hilbert space) generated by an n-qubit quantum superposition.

[fn 6] In practice, we are often interested in the best available classical algorithm, which may be identified by mixed criteria. Examples of these criteria include i) the most commonly used practical algorithm for a given application, ii) the most advanced algorithm implemented for that application, or iii) the algorithm for which the underlying operational assumptions best comport with the ones required for the comparison being made.

Figure 1: Classifying quantum advantages. A conceptual illustration of the classes of quantum advantages. We discuss four classes that we define across two axes: i) the classical resource consumption (vertical) and ii) the strength of the advantage (horizontal). Quantum advantages are expected to have variable computational overheads; as such, the pink region indicates where the quantum advantage begins. Top left: Polynomial advantages on classically hard problems. Amplitude amplification and estimation [64] provides a general framework to achieve polynomial quantum advantages on search and optimization problems. These techniques have led to quantum versions of dynamic programming algorithms [65, 66], which are relevant to many tasks in genomics, such as sequence alignment. Top right: Superpolynomial advantages on classically hard problems. Hamiltonian simulation of strongly correlated fermionic systems is classically hard. Many quantum Hamiltonian simulation algorithms yield superpolynomial advantages [67] and may lead to substantial improvements in drug discovery pipelines [38]. Bottom right: Superpolynomial advantages on classically easy problems. Matrix inversion is among a set of linear algebra subroutines that are central to machine learning. In particular, inverting a feature covariance matrix is common to many classical machine learning approaches, such as support vector machines and Gaussian processes. The quantum linear systems algorithm (QLSA) [68] performs this crucial subroutine for a variety of QML algorithms (e.g. see [69, 70]) and runs in polylogarithmic time relative to the matrix dimensions. While this implies a superpolynomial advantage (omitting the complexity due to the condition number), practical realizations are expected to depend on matrix rank and may require the development of quantum random access memory (QRAM) [71, 72, 73, 74, 75]. Bottom left: Polynomial advantages on classically easy problems.
Grover's algorithm [60] for unstructured search is the classic example. Quantum algorithms in this class will likely be used as subroutines in conjunction with other quantum algorithms that offer greater advantages.

For the rest of this subsection, we discuss paradigmatic examples of each class of quantum advantage. While we briefly highlight some relevant algorithms and applications in biology and medicine, we provide a much more detailed account of potential applications in Section 4.

Polynomial advantages for easy problems. The most well-known algorithm in this class is Grover's algorithm for unstructured search, which has a worst-case complexity of O(√N) relative to the O(N) complexity of its classical counterpart [60] (a toy numerical illustration is given below). At its core, Grover's algorithm uses amplitude amplification [64], a general quantum algorithmic technique that yields polynomial advantages without requiring a specific problem structure. Many other quantum algorithms also fall under this class of quantum advantage. These include ones for convex optimization [77, 78], semi-definite programming [79], and calculating graph properties (e.g. bipartiteness, st-connectivity, and cliques) [80, 81, 82]. From the perspective of biology and medicine, some problems in network and systems biology can be cast as convex optimization or network inference problems incorporating graph property testing [fn 7]. Some practical examples of these problems include inferring the characteristics and behaviors of gene regulatory, protein interaction, and metabolic networks.

Polynomial advantages for hard problems. These include many NP-hard optimization problems that can also benefit from amplitude amplification. Examples of quantum algorithms in this class include ones for constraint satisfaction [85] and combinatorial optimization [65, 66, 86]. Notably, algorithms in this class were among the first to target applications specific to biology and medicine, including sequence alignment [87, 88, 89] and the inference of phylogenetic trees [90]. Sequence alignment, in particular, represents a crucial computational primitive for many tasks in bioinformatics and computational biology.

Superpolynomial advantages for easy problems. While no algorithms specific to problems in biology or medicine are known to exhibit this class of advantage, a number of quantum machine learning (QML) algorithms with general relevance do. Perhaps the most prominent example is the quantum linear systems algorithm (QLSA) [68] for high-rank matrices, which led to the development of many early QML algorithms. Another example is a quantum algorithm for pattern matching [91], which yields a superpolynomial advantage as the size of the pattern approaches the size of the input. Quantum algorithms in this class typically exploit compact quantum data encodings, such as amplitude encoding. These encodings allow for certain computations to be performed using a polylogarithmic number of qubits and operations relative to comparable classical algorithms. However, they may also be subject to several practical constraints related to data input and output sampling [92].

Superpolynomial advantages for hard problems. The most widely known example of an algorithm in this class may be Shor's algorithm for factoring integers [93], a problem that has no direct relevance to biology or medicine. Quantum algorithms for simulating quantum physics [94, 95, 96, 97, 98, 99], on the other hand, may find significant applications.
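Returning briefly to the first class above, the quadratic scaling of Grover's algorithm can be made concrete with a toy state-vector simulation. The sketch below is our own illustration written in plain numpy (not code from the cited works); it is exponentially expensive classically and simply reports how many oracle queries the algorithm uses and the resulting success probability.

```python
# Toy state-vector simulation of Grover search: ~ (pi/4) * sqrt(N) oracle queries
# drive the marked item's probability close to 1. Illustrative only.
import numpy as np

def grover(n_qubits: int, marked: int) -> tuple[int, float]:
    N = 2 ** n_qubits
    state = np.full(N, 1 / np.sqrt(N))                # uniform superposition |s>
    queries = int(np.floor(np.pi / 4 * np.sqrt(N)))   # near-optimal number of iterations
    for _ in range(queries):
        state[marked] *= -1                           # oracle: phase-flip the marked item
        state = 2 * state.mean() - state              # diffusion: inversion about the mean
    return queries, float(state[marked] ** 2)

for n in (4, 8, 12):
    queries, p_success = grover(n, marked=3)
    print(f"N = {2**n:5d} items: {queries:3d} queries, success probability {p_success:.3f}")
```

Each additional qubit doubles the search space N = 2^n but multiplies the number of queries by only about √2, in contrast to the O(N) queries required classically.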
In principle, quantum simulation algorithms could provide superpolynomial advantages for a vast range of hard problems related to simulating physical systems [fn 8]. Examples include characterizing the ground states of biologically relevant small molecules, the behavior of chemical solutions, and the tertiary and quaternary structures of biological molecules [100]. In general, superpolynomial advantages for hard problems are the most desirable class of quantum advantage.

A quantum advantage may be supported by multiple forms of evidence that vary by context. At the highest level, these contexts can be theoretical or empirical. In the theoretical context, abstract evidence is collected to answer whether a quantum algorithm can yield an advantage over the most efficient classical algorithm. In contrast, empirical evidence is collected to answer whether a specific quantum algorithm and device can, together, yield an observable advantage over a classical approach given a well-defined problem and context-dependent metric. Importantly, i) these contexts are often not mutually exclusive (i.e. a degree of overlap may exist) and ii) while an experimental advantage may also be practical, evidence from an operational context is best suited to addressing questions around the practical value of an observed quantum advantage. In this section, we describe these contexts in detail and their implications for the evidence required to establish a quantum advantage.

Theoretical advantages result from an improvement in analytical bounds on resource efficiency by a quantum algorithm relative to a well-motivated classical counterpart. The core resource in question typically involves units of time, space, or information (e.g. samples from an experiment). In practice, these units of comparison may be gates, queries, bits, or error rates. Theoretical advantages are defined mathematically, may be conjectured, and are often contingent on well-founded assumptions from computational complexity theory. Crucially, they need not be application-oriented [fn 9]. Instead, they may be conceptual, designed with the express intent of demonstrating that an advantage exists for an artificial task or broader class of problems that have theoretical interest. Examples of conceptual quantum algorithms yielding theoretical advantages include the Deutsch-Jozsa algorithm [101], the Bernstein-Vazirani algorithm [102], Simon's algorithm [103], and one for solving the Forrelation problem [104]. Despite their theoretical motivation, practical applications may nonetheless follow from conceptual algorithms. One example of this can be seen with Shor's algorithm for prime factorization [8], which was inspired by Simon's algorithm [103].

Experimental quantum advantages are a type of empirical quantum advantage observed in an experimental context. Crucially, they require the comparative analysis of computational metrics that precisely measure the advantage in question [fn 10]. To facilitate these comparisons, these metrics are ideally defined over both quantum and classical approaches. While theoretical evidence (i.e. proofs, conjectures, and numerical models) may support an experimental advantage a priori, it is the computational benchmarking of an implementation that serves as the evidence for establishing an experimental quantum advantage. For instance, the first quantum primacy experiment [12] represented a real-world demonstration of an experimental quantum advantage.
First, a theoretical advantage was proven and numerically estimated [106], then initial experimental work soon followed, which yielded the first hint of such an experimental advantage [107]. Finally, this culminated in an experimental demonstration of the quantum advantage, as evidenced by substantial cross-entropy benchmarking [12]. Similar trajectories are observable for other quantum primacy experiments, such as one recently demonstrated on a non-universal photonics-based device [13, 108, 109].

Operational quantum advantages result from the successful translation of an experimental advantage to an applied setting. As such, in addition to the computational benchmarking required to validate the experimental advantage, this type of advantage may require the definition and measurement of an operational metric, a key performance indicator (KPI), to gauge the extent of the practical advantage when deployed. Accordingly, this type of advantage considers the greater context outside of the computational environment. In particular, it can be viewed as one that additionally integrates the organizational, economic, and social context of the target application. Challenges to realizing an operational quantum advantage may include organizational inertia, entrenched support for incumbent classical methods, and difficulties in the integration of the experimental quantum advantage into existing software, hardware, and network infrastructure. To date, no obvious demonstration of an operational quantum advantage has occurred, and estimates of when such an advantage may occur vary greatly.

From theory to practice. The context of a quantum advantage is used to determine what evidence is required to establish it. While theoretical evidence has historically preceded experimental evidence, the chronological order of evidence levels may vary. This is expected to become increasingly apparent as interest in practical quantum approaches and access to near term quantum hardware expands. In the absence of supporting theoretical evidence, the pursuit of experimental advantages may be motivated instead by expert intuition (e.g. around the structure of a computational problem and how quantum information may be beneficial to finding its solution). While such an intuition-based approach provides practical flexibility, it should be cautioned that it may, too, be a source of bias [110]. To manage this risk for specific applications, robust benchmarking and the publication of full experimental evidence (e.g. data and analysis code) through open science tools will be key to community verification of claimed empirical quantum advantages over the near term, especially in operational contexts.

Translating a theoretical advantage into an experimental one is often challenging. These challenges largely arise from the practical constraints of NISQ hardware. In this section, we discuss some of these constraints and how they inform the translation of theory into practice. For an overview of the potential feasibility of a variety of quantum advantages with respect to quantum hardware, see Table 1.

Logical versus noisy qubits. Theoretical quantum algorithms tend to be modeled with FTQC devices, ones implementing logical qubits upon which error-free gates, state-preparation, and measurement are performed. Current hardware remains far from such a scalable device. Instead, existing NISQ devices have dozens to hundreds of error-prone qubits. The error characteristics of quantum devices arise from a number of error types.
These include state preparation and measurement (SPAM) errors, gate (operation) errors, emergent errors such as crosstalk [118, 119, 120], and systematic errors due to device calibration or fabrication defects. While these errors can in principle be factored into their constituent phase and bit-flip components, the heterogeneity of their generating processes will likely require an all-of-the-above approach to realize scalable FTQCs with coherence times sufficiently long to run large quantum circuits with polynomial depth. Approaches to realizing FTQC are expected to require error mitigation techniques [fn 11] (e.g. see [121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131]) to bring hardware error rates below a fault-tolerance threshold [21]. Below this threshold, it may be possible to use error correcting codes (e.g. see the CSS code [132, 133], surface code [134], XZZX surface code [135], and honeycomb code [136]) to enable practically unlimited coherence times [fn 12]. Numerical simulations of error correction approaches imply that the overheads incurred by some of these strategies will place limits on the class of quantum advantages that can be achieved for many years [fn 13]. Indeed, some polynomial advantages [59] and even superpolynomial advantages [137] using the surface code may require very large numbers of qubits to encode the necessary logical qubits. Fortunately, the rapid pace of progress and many avenues being explored give much reason for optimism. One recent example lies in a numerical simulation of the honeycomb code [136]. In particular, this numerical simulation indicated that achieving computations on circuits with trillions of logical operations may be possible with as few as 600 qubits [141], given modest assumptions around qubit error rates.

[fn 12] The required fault-tolerance threshold depends on a variety of factors, including the hardware error characteristics, connectivity map, and the error correction code being used.

[fn 13] A recent case study [137] found that high constant time overheads represent a significant challenge in implementing Shor's algorithm on a benchmark application in cryptography. As modeled, this overhead is due to magic state distillation [138, 139], a process required for error correcting non-Clifford gates (crucial operations for universal quantum computation). While the overhead of magic state distillation has long been viewed as necessary to FTQC, it is notable that this has recently been called into question (e.g. see [140]).

Table 1: Quantum advantages and their expected feasibility by hardware. The type of advantage yielded by a quantum algorithm informs the degree of hardware maturity that is expected to be required for its practical realization. Whereas superpolynomial quantum advantages on classically hard problems may be readily attained by NISQ devices in the coming years, other forms of advantage (i.e. superpolynomial advantages on classically easy problems and polynomial advantages on both classically easy and hard problems) will likely require greater hardware maturity, up to FTQC. In the right-most columns, we indicate the hardware paradigm that is expected to be required for the stated quantum advantage to be realized by a practical application. Note that sample complexity advantages are currently supported by theoretical proofs; algorithms with the potential to demonstrate them in practice are under investigation.
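As a toy illustration of the threshold idea behind the error correcting codes just discussed, the sketch below estimates the logical error rate of a classical three-bit repetition code under independent bit flips. It is a deliberate simplification of our own (real quantum codes such as those cited above must also handle phase errors and use far more qubits), but it shows the qualitative point: redundancy suppresses errors only when the physical error rate sits below the code's threshold, which is 0.5 for this toy code and on the order of a percent in typical surface-code estimates.

```python
# Toy threshold demonstration: a 3-bit repetition code with majority-vote decoding
# under independent bit-flip noise. Below the threshold, encoding helps; above it, it hurts.
import numpy as np

rng = np.random.default_rng(7)

def logical_error_rate(p_physical: float, shots: int = 200_000) -> float:
    # Each shot flips each of the 3 redundant copies independently with probability p;
    # majority vote fails when two or more copies flip.
    flips = rng.random((shots, 3)) < p_physical
    return float(np.mean(flips.sum(axis=1) >= 2))

for p in (0.60, 0.10, 0.01):
    estimated = logical_error_rate(p)
    analytic = 3 * p**2 * (1 - p) + p**3          # exact logical error rate for this toy code
    print(f"physical error {p:.2f} -> logical error {estimated:.4f} (analytic {analytic:.4f})")
```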
Qubit connectivity. With few exceptions, NISQ computers are expected to have limited qubit connectivity. In contrast, the mathematical proofs behind theoretical quantum advantages often assume all-to-all connectivity within qubit registers. Fortunately, a growing body of evidence [142, 143] around the number of SWAP gates [144, 145] required to emulate all-to-all connectivity on lower connectivity architectures indicates this may be a soft constraint on practical quantum algorithms. For example, one recent numerical simulation demonstrated that all-to-all connectivity can be simulated with a 2D grid architecture (with logarithmic or sublinear bounds on the overhead) for three quantum subroutines central to quantum simulation, machine learning, and optimization algorithms [146]. This work also highlighted that logarithmic bounds may also be realized in architectures with even greater connectivity constraints provided sufficient additional qubits are available [fn 14]. Altogether, these numerical simulations imply that qubit connectivity on its own may present only a mild constraint when pursuing practical quantum advantages.

[fn 14] These numerical simulations of quantum circuits included ones for the quantum Fourier transform, Jordan-Wigner transform, and Grover diffusion operator, each compiled using connectivity maps representative of existing device architectures: all-to-all with n(n − 1)/2 edges, a 2D square grid with √n × √n dimensionality and 2(n − √n) edges, a ladder with n/2 × 2 dimensionality and 3n/2 − 2 edges, and a linear nearest-neighbor graph with degree-1 terminal qubits and n − 1 edges.

Input constraints. Theoretical quantum algorithms typically use a combination of abstract input models (Table 2). These input models can either be oracles ("black-box" functions for which the form of the input and output is known but the implementation of the function is not) or data inputs. Both oracles and data inputs can be either quantum or classical. Practically, oracles can be viewed as algorithm subroutines or queryable data structures. Importantly, the type of oracle (i.e. quantum or classical) can either improve or limit the feasibility of implementing a quantum algorithm on a NISQ device. Whereas quantum oracles may place additional complexity requirements with respect to the number of qubits or gates required to implement a quantum circuit, classical oracles may mitigate the size of a quantum circuit by offloading classically efficient computations to a classical device. Similarly, the input of data typically requires either the encoding of classical information into qubits (Box 1) or the loading of a quantum state. These data input steps generally demand additional circuit depth. When theoretically analyzing quantum algorithms, the complexity overheads of oracles and data input are sometimes omitted from the stated complexity. Despite these omissions, these subroutines can strongly influence whether an empirical quantum advantage is possible. One prominent example exists for the class of superpolynomial advantages on classically easy problems [fn 15]. In principle, for this class, the data input must be performed in O(polylog(n)) to maintain the quantum advantage, a point highlighted in a discussion [92] of the potential limitations of the QLSA [fn 16, fn 17].

Box 1: Data encoding methods. While many data encoding methods for quantum algorithms exist, three are commonly used:

• Computational basis encoding allows an n-bit binary string to be encoded into n qubit basis states. It can be thought of as a quantum version of a classical computer writing binary data to memory.
• Amplitude encoding allows for the input of real values into the N = 2^n amplitudes (i.e. relative phase) of a quantum system using log N qubits.
• Angle encoding allows for the efficient encoding of n real-valued inputs by parameterizing n qubits with the input values and then computing their tensor product.

For hybrid quantum-classical algorithms (discussed in Section 3), the type of data encoding method used can significantly influence both the expressivity of the parameterized quantum circuit (PQC) [150, 151] and the appropriate procedure for sampling the output distribution of the quantum circuit.

[fn 15] Quantum algorithms yielding this class of advantage often leverage amplitude encoding, or another similarly efficient encoding (e.g. see [147]), requiring on the order of log2(N) qubits.

[fn 16] In particular, the QLSA embeds the data matrix into the amplitudes of the quantum state, which implies that a quantum state with amplitudes proportional to the values in the data matrix must be prepared in O(polylog(n)) time [92]. To do this, the QLSA assumes access to a quantum random access memory (QRAM) [71, 72, 75], a hardware device expected to require fault tolerance. Additionally, given the efficiency of the amplitude encoding used for the data input, the size of the measured output is limited proportionally (i.e. log2(N)) due to Holevo's bound [148]. To sample the full solution vector, a polynomial number of samples (and an i.i.d., independent and identically distributed, assumption over the output of the quantum circuit) is required, abrogating the superpolynomial speedup.

[fn 17] It is notable that recent work has indicated that this particular input constraint on the QLSA may be less severe in practice, given a modest assumption [149].

Table 2: Input models for quantum algorithms. A major challenge in quantum algorithm development is determining the most efficient way to input classical data and prepare quantum states. The standard, oracle, quantum data, and quantum oracle models represent four basic approaches (along with many variants) used in developing theoretical quantum algorithms. One such variant is the coreset input model, which can reduce the size of the input and be viewed as a hybrid of the standard and oracle models. Sometimes, a quantum algorithm will leverage multiple input models (e.g. both quantum data and a quantum oracle). This table was reproduced (with minor modification) from [152]. Example entries: in the standard model the input is a classical string x = (x_1, ..., x_n); in the coreset model [152] the input is x = (x_1, ..., x_n) ∈ Σ^n combined with oracle access (to a function f : Σ × Y → R); potential applications identified for the coreset model include k-means clustering, logistic regression, zero-sum games, and boosting.

Many quantum algorithms for preparing initial quantum states already exist [147, 153, 154, 155, 156]. Nonetheless, over the near term, low qubit counts and circuit depths are expected to limit the realization of many quantum advantages in practice. To mitigate this challenge, a variety of approaches have been proposed. These include the "coresets" input model [152], a technique with conceptual similarities to importance sampling, for which one recent practical implementation has been demonstrated [157]. Similarly, data transformations designed to compress the correlation structure of features into a lower dimensional space may also prove useful. A minimal sketch of the three encodings from Box 1 is given below.
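The sketch constructs each encoding as a plain numpy state vector. It is written for this survey as an illustration only; it is not the API of any quantum SDK, and the function names are our own.

```python
# Minimal state-vector versions of the three encodings described in Box 1.
import numpy as np

def basis_encode(bits: str) -> np.ndarray:
    """Computational basis encoding: an n-bit string -> one basis state of n qubits."""
    state = np.zeros(2 ** len(bits))
    state[int(bits, 2)] = 1.0
    return state

def amplitude_encode(x: np.ndarray) -> np.ndarray:
    """Amplitude encoding: N values stored in the amplitudes of ~log2(N) qubits."""
    x = np.asarray(x, dtype=float)
    return x / np.linalg.norm(x)                  # quantum states must be normalized

def angle_encode(x: np.ndarray) -> np.ndarray:
    """Angle encoding: each feature parameterizes one qubit, here as Ry(x_j)|0>,
    and the single-qubit states are combined with a tensor (Kronecker) product."""
    state = np.array([1.0])
    for theta in x:
        state = np.kron(state, np.array([np.cos(theta / 2), np.sin(theta / 2)]))
    return state

features = np.array([0.1, 0.7, 1.3, 2.0])
print(basis_encode("1011").size)      # 2**4 = 16 amplitudes for a 4-qubit basis state
print(amplitude_encode(features))     # 4 features packed into 2 qubits' amplitudes
print(angle_encode(features).size)    # 4 features spread across 4 qubits (16 amplitudes)
```

The qubit counts alone (two qubits versus four for the same four features) illustrate why amplitude encoding underlies many of the superpolynomial claims discussed above, and why its input and output costs must be accounted for.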
Further, hybrid quantum-classical approaches (discussed in detail in Section 3) and recent work highlighting the relationship between data encodings and learned feature embeddings [151, 158, 159] have substantially improved our understanding of efficient data input for near term devices. In particular, hybrid quantum-classical techniques can allow for substantial reductions in circuit width and depth for many applications [48]. Thus, while input constraints are expected to remain prevalent in the near term (particularly for algorithms requiring polynomial qubits and circuit depths), the substantial work on mitigating them and recent progress with hybrid approaches provides a growing toolkit for managing them.

Output constraints can largely be attributed to two highly related factors arising from the physics of quantum information. The first is Holevo's bound [148], a theoretical result that limits the amount of information that can be efficiently retrieved from n qubits to n classical bits. Naïvely, this implies that the size of a solution output by a quantum circuit must be (at most) polynomial in the number of qubits. The second constraint relates to the probabilistic nature of quantum information. In particular, even if a solution can be output within the space afforded by Holevo's bound, the number of quantum circuit evaluations (i.e. samples) required to identify it with confidence must also scale polynomially for efficient computability. Together, these constraints can limit the practicality of implementing certain classes of quantum advantage. One illustrative example relates to the previously mentioned QLSA. In particular, the number of samples required to output the complete solution vector is expected to scale polynomially with n. Given the algorithm runs in O(polylog(n)) operations, fully extracting the solution vector abrogates the superpolynomial speedup [92]. As such, this particular algorithm (and many related ones) is expected to deliver a superpolynomial advantage only for sampling statistics of the solution vector or when calculating some global property of the state. More generally, these output constraints emphasize the importance of choosing an appropriate input encoding in anticipation of the number of classical bits that will be required to output the solution.

3 The pursuit of NISQ advantages

Understanding empirical quantum advantages and their computational benchmarking has taken on additional importance with the development of hybrid quantum-classical approaches. Many of these hybrid approaches developed for NISQ devices can be described as variational quantum algorithms (VQAs). In general, a VQA leverages three basic components: i) a parameterized quantum circuit (PQC), ii) an objective function, and iii) a classical optimizer [fn 18] (Figure 2). Examples include the variational quantum eigensolver (VQE) [170] and many variational QML algorithms. Crucially, they allow for substantial flexibility in their implementation and application. To design a VQA, the first step is to define a problem Hamiltonian H (i.e. a Hermitian operator that, in the context of physical simulation, corresponds to the kinetic and potential energy of the system) and an associated objective function, L. This is followed by the design of an appropriate PQC (sometimes called an ansatz) composed of both fixed and variational gates, which may be single-, two-, or multi-qubit gates.
The circuit is controlled by parameters θ ∈ R^d, which are varied by a classical optimizer that seeks to minimize the objective function. These parameters may control a variety of aspects of the quantum circuit, such as quantum gate rotation angles (e.g. R_y(θ)), whether a layer or unitary in the circuit should execute, or whether execution should halt upon a condition being reached. As an example, a common VQA approach to quantum simulation is to minimize the expectation value of the Hamiltonian, L(θ) = ⟨ψ_θ|H|ψ_θ⟩, with respect to the parametric state |ψ_θ⟩ prepared by the quantum circuit. Doing so leads to an approximation of the ground state or minimum eigenvalue of H, providing an upper bound on the ground state energy of the Hamiltonian. Using this approach, VQAs may solve problems in the near term that would be otherwise infeasible due to the inherent coherence limitations of NISQ devices. Further, by leveraging classical post-processing techniques on the output observables [171, 172, 173], some theoretical evidence suggests that VQAs can be made resistant to errors.

[fn 18] It is notable that, in addition to the many gradient-based optimizers seen in classical deep learning libraries [160], VQAs may leverage quantum-specific optimizers. In particular, the parameter-shift rule [161, 162] represents a favored gradient-based approach developed expressly for VQAs. Recent work on this optimizer has led to a broadening of the gate sets to which it can be applied [163, 164] and the use of adaptive shots [165, 166], which leverage information from prior circuit evaluations to reduce the number of measurements required. Other optimizers include a quantum analog of natural gradient descent [167, 168] and derivative-free methods [169].

Quantum simulation is one of the most anticipated applications for quantum computers, particularly in the domains of condensed matter physics, quantum chemistry, material science, and structural biology. Within these fields, a core goal is the scalable simulation of large, strongly correlated fermionic systems with an accuracy far superior to current approximate solutions from classical computation. Realizing this goal may precipitate a novel paradigm within these fields where our ability to understand the material world (superconductors, metamaterials, chemistries, catalysts, proteins, and pharmaceutical compounds) is vastly improved. Given the many superpolynomial advantages offered by quantum simulation algorithms, it is highly plausible that variational quantum simulation (VQS) will yield many practical applications in the near to medium term. These applications may in turn have a significant impact across a broad variety of research fields and industrial sectors.

The first VQS algorithm (and VQA) was the variational quantum eigensolver (VQE) [170]. The VQE was originally conceived to compute quantitative properties of molecular Hamiltonians. The first problem instance targeted by the VQE was finding the ground state of helium hydride [170]. Since then, various implementations of the VQE approach have been used to simulate molecular binding [174] and dissociation [175, 176], dipole moments [177], chemical reactions [178], and other quantum mechanical systems and their quantities. So far, these practical implementations have all focused on simple molecules with relatively few atomic nuclei [fn 19], simulations that are classically tractable (and with greater accuracy).
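Before turning to the specific molecules studied so far, the following self-contained sketch runs the VQA loop end to end for the simplest possible case: a single-qubit ansatz R_y(θ)|0⟩, the Hamiltonian H = Z, and the parameter-shift gradient mentioned in footnote 18. It is a toy example of our own (exact state-vector arithmetic, no sampling noise or molecular Hamiltonian), intended only to show the structure of the loop illustrated in Figure 2.

```python
# Toy VQA: minimize L(theta) = <psi(theta)| Z |psi(theta)> with gradient descent,
# using the parameter-shift rule to obtain exact gradients from two shifted evaluations.
import numpy as np

H = np.array([[1.0, 0.0],
              [0.0, -1.0]])                       # Pauli-Z; true ground-state energy is -1

def ansatz(theta: float) -> np.ndarray:
    """|psi(theta)> = Ry(theta)|0> = [cos(theta/2), sin(theta/2)]."""
    return np.array([np.cos(theta / 2), np.sin(theta / 2)])

def loss(theta: float) -> float:
    psi = ansatz(theta)
    return float(psi @ H @ psi)                   # L(theta) = <psi|H|psi>

theta, eta = 0.3, 0.4                             # initial parameter and learning rate
for _ in range(60):
    grad = 0.5 * (loss(theta + np.pi / 2) - loss(theta - np.pi / 2))  # parameter-shift rule
    theta -= eta * grad                           # theta_{t+1} = theta_t - eta * grad
print(f"estimated ground-state energy: {loss(theta):.4f} (exact: -1.0)")
```

A molecular VQE replaces H with a qubit-mapped electronic-structure Hamiltonian and the single rotation with a deeper, multi-qubit ansatz, but the measure-then-optimize loop is the same.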
The molecules simulated to date include multiple hydrides, such as those of beryllium (BeH2) [180] and of sodium, rubidium, and potassium (NaH, RbH, and KH, respectively) [181]. While these represent promising early results, significant work remains to scale VQS approaches to larger molecules that have biological relevance, such as proteins, enzymes, nucleic acids, and metabolites. In fact, the simulation of such a biomolecule to high accuracy would likely represent a clear demonstration of an experimental quantum advantage.

Figure 2: The problem is first mapped to a Hamiltonian operator, H. Following this mapping, execution of the VQA proceeds as follows: First, the quantum circuit is initialized to the state |0⟩^⊗n. The first execution of the parameterized quantum circuit can be viewed as the initial ansatz, |ψ_θ⟩. Next, a measurement is performed to extract the information required to compute the loss of the objective function (e.g. the expectation of the Hamiltonian operator, L(θ) = ⟨ψ_θ|H|ψ_θ⟩). Using this measurement, a classical optimizer then computes updates to the parameters to minimize the loss of the objective function (e.g. a simple form of gradient descent uses the update rule θ_{t+1} := θ_t − η∇L(θ_t), where η is a hyperparameter for the gradient step size or learning rate and t is the time step). This proceeds iteratively until the system has converged from the initial ansatz to the lowest energy state of |ψ_θ⟩. This low energy state represents both an approximation of and an upper bound on the lowest energy state of the Hamiltonian operator.

A number of barriers currently prevent VQS advantages from being realized. One barrier is the number of measurements required by VQS algorithms to attain chemical accuracy competitive with existing classical simulation techniques [182, 183]. Fortunately, substantial recent work suggests that this measurement problem may not be insurmountable [184, 185, 186, 187]. Another barrier relates to whether VQAs are sufficient to efficiently model the physical aspects of Hamiltonian simulation problems that challenge classical algorithms. Without this capability, the superpolynomial advantages established by existing theoretical quantum simulation algorithms [94, 95, 100, 111, 188, 189] may not be realized by VQS approaches, which are inherently heuristic. It seems unlikely, however, that this will be the case: significant theoretical [169, 190], numerical [191], and experimental [127, 178] evidence implies that VQS approaches may be among the best candidates for operational quantum advantages in the near term. In Section 4.1, we detail a large number of potential applications for quantum simulation and whether they may yield empirical quantum advantages in the near term.

Prior to the advent of the VQA paradigm, the field of quantum machine learning (QML) grew rapidly following the publication of the quantum linear systems algorithm (QLSA) [68], which yielded a superpolynomial advantage on a classically easy problem. The QLSA solves linear systems by matrix inversion, a general technique used by many machine learning algorithms (e.g. to invert a feature covariance matrix). Given the generality of the QLSA, early QML research largely sought to improve upon it [114, 115, 116, 117] and leverage it towards the development of other QML algorithms for FTQC hardware. Examples of these include quantum algorithms for support vector machines (SVM) [69], recommendation systems [70], clustering [192], principal components analysis [193], and Gaussian processes [194, 195], among many others.
However, despite these initial successes, several challenges (including the input and output constraints discussed in Section 2.4) imply that the QLSA may not be practical in the near term [92]. Among these is "dequantization" [196] [fn 20], the first example of which occurred in 2018 and involved the development of a classical algorithm for efficient l2-norm sampling of approximate matrix products [fn 21]. Other QML algorithms based on the QLSA were subsequently dequantized, including ones for principal component analysis and supervised clustering [198], support vector machines (SVM) [199], low-rank regression [200, 201], semi-definite program solving [202], and low-rank Hamiltonian simulation and discriminant analysis [197].

Despite the practical challenges for early QML algorithms, the development of variational approaches has broadened QML into a substantially more diverse field with many algorithms that may deliver practical advantages before FTQC is available. In general, QML approaches can be placed into one of four categories on the basis of the type of data and device being used [203]:

1. A learning algorithm executed on a classical device with classical data.
2. A learning algorithm executed on a classical device with quantum data.
3. A learning algorithm executed on a quantum device with classical data.
4. A learning algorithm executed on a quantum device with quantum data.

Category 1 represents classical machine learning approaches, including deep learning. In the context of QIS, classical machine learning may find relevance for benchmarking QML algorithms, controlling quantum hardware, and designing quantum experiments [204]. At present, Category 2 finds primary application in characterizing the state of quantum systems, studying phase transitions, and high-energy physics. Category 4 may be particularly relevant to domains at the intersection of quantum metrology [205] and QML where quantum data is received as input [fn 22]. Alternatively, a QML algorithm could be used to post-process quantum data generated by a quantum simulation algorithm, such as the VQE.

Variational QML approaches typically belong to categories 3 and 4. These algorithms include variational incarnations of the QLSA [207, 208, 209], a broad variety of quantum neural networks (QNNs) (e.g. variational autoencoders [210], generative adversarial networks [211, 212], continuous variable QNNs [213], and reinforcement learning [214]) [fn 23], and at least one example of a variational approach to quantum unsupervised learning [218]. For reviews of this area of the QML field, see [219, 220].

[fn 20] Dequantization refers to the discovery of a comparably efficient classical subroutine that eliminates a previously claimed advantage for a theoretical quantum algorithm.

[fn 21] In particular, assuming the matrix is low-rank [196, 197], the classical algorithm could be used to replace a core subroutine of the quantum recommendation system algorithm [70], reducing the advantage from superpolynomial to polynomial.

[fn 22] While the unification of quantum metrology and QML appears unexplored for biological and clinical research, recent work on quantum algorithms for medical imaging represents a movement in this direction [206].

Among variational QML approaches, quantum kernel estimation (QKE) [158, 221, 222] and variational quantum classifiers (VQC) [158, 159, 221] for supervised learning are of particular interest. In the case of QKE, the aim is to leverage a quantum circuit that implements a non-linear map between the classical features of each class and corresponding quantum states [fn 24]; a minimal sketch of such a map and the kernel built from it is given below.
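As described in the remainder of this subsection, each kernel entry is obtained from the overlap of two such encoded states. The sketch assumes an angle-style feature map, a choice made here purely for illustration; on hardware, the overlap would be estimated by repeated circuit sampling rather than computed exactly.

```python
# Minimal quantum-kernel sketch: K(x, z) = |<phi(x)|phi(z)>|^2 for an angle-style
# feature map. Illustrative only; not the construction used in the cited works.
import numpy as np

def feature_map(x: np.ndarray) -> np.ndarray:
    state = np.array([1.0])
    for theta in x:                                          # one qubit per feature
        state = np.kron(state, np.array([np.cos(theta / 2), np.sin(theta / 2)]))
    return state

def quantum_kernel(X: np.ndarray) -> np.ndarray:
    states = np.array([feature_map(x) for x in X])
    return np.abs(states @ states.T) ** 2                    # Gram matrix of squared overlaps

X = np.array([[0.1, 0.9], [1.2, 0.4], [2.0, 2.5]])
K = quantum_kernel(X)      # a precomputed kernel like this can be handed to a classical SVM
print(np.round(K, 3))
```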
The inner product of the two quantum states, corresponding to a quantum kernel function measuring the similarity between the data points, is then sampled from the quantum device for each sample pair. This kernel, which may be intractable to compute classically [221, 224], can then be input into a classical machine learning algorithm, such as an SVM, for a potential quantum advantage. In this respect, QKE can be viewed as combining both Categories 2 and 3 in a single method. A recent demonstration of this approach on a 67-feature astronomy dataset using 17 qubits provided evidence that QKE can yield comparable performance to a classical SVM (in this instance, with a radial basis kernel) on a practical dataset [224]. Further, advantages were recently shown to be possible using these methods on engineered data sets [225]. Whether these advantages can be realized on realistic data sets remains the subject of ongoing research efforts.

Similarly, VQCs aim to learn a non-linear feature map that embeds the classical features into a higher-dimensional quantum feature space (i.e. a region of the Hilbert space of the quantum system) and maximizes the class separation therein. Doing so allows for the use of a simple linear classifier and can be viewed as a variational analog to a classical SVM. Crucially, the VQC approach yields a general framework for quantum supervised learning [159]. In particular, VQCs can be viewed as implementing a two step process of i) embedding classical data into a quantum state and ii) finding the appropriate measurement basis to maximize the class separation. This approach can be further generalized to allow for other fixed circuits, such as amplitude amplification or phase estimation, following the data embedding phase [223]. A key benefit of VQCs is that the quantum circuit directly outputs a classification, which greatly reduces the number of measurements required relative to QKE.

While many variational QML methods lack analytical bounds on their time complexity (unlike many of the earlier QML algorithms), they may nonetheless offer other forms of quantum advantage. In particular, theoretical and empirical evidence exists for improvements in generalization error [226, 227, 228, 229], trainability (i.e. with certain constructions, favorable training landscapes with fewer barren plateaus and narrow gorges [226, 227, 230, 231, 232, 233]), and sample complexity [234, 235, 236]. It is plausible that these types of advantages may lead to novel machine learning applications in biology and medicine. However, recent work has also made clear that the bar for achieving these advantages may be high given the ability of data to empower classical machine learning algorithms [225].

Explorations of variational QML approaches for biology and medicine are only now getting under way. They include proof-of-principle implementations of protein folding with a hybrid deep learning approach leveraging quantum walks on a gate-based superconducting device [237] and the diagnosis of breast cancer from legacy clinical data via QKE [238]. As larger, more flexible quantum devices are made available, further growth of research into applications of variational QML is expected.

The quantum approximate optimization algorithm (QAOA) is a hybrid quantum-classical algorithm for targeting optimization problems, such as MaxCut [239].
Following its original publication, the algorithm was generalized into a broader framework, called the quantum alternating operator ansatz (retaining the original acronym) [240, 241]. This generalization allows for more expressive constructions capable of addressing a broader range of problems. Below, we briefly review the original QAOA approach; we refer to Hadfield et al. [240] for more details, including extensions to the algorithm.

The original QAOA leverages two Hamiltonians: a phase Hamiltonian, H_P, which acts diagonally on the computational basis as H_P|y⟩ = f(y)|y⟩, and a mixing Hamiltonian, H_M = Σ_{j=1..n} X_j. H_P encodes the cost function f, which operates on an n-qubit computational basis state, and H_M is comprised of a Pauli-X operator for each qubit. Application of the unitary operators generated by H_P and H_M to the initial state is then alternated for p rounds. Here, p represents a crucial parameter of the QAOA algorithm, defining the length of the quantum circuit and the number of its parameters. There are 2p parameters of the form γ_1, β_1, γ_2, β_2, ..., γ_p, β_p, which set the angles applied in each round of the alternating Hamiltonian operators. The QAOA circuit thus prepares the parameterized quantum state |β, γ⟩ = e^{−iβ_p H_M} e^{−iγ_p H_P} ··· e^{−iβ_1 H_M} e^{−iγ_1 H_P} |s⟩, which is an unbalanced superposition of computational basis states (typically, |s⟩ is initialized as a balanced superposition). By measuring all qubits in the computational basis, a candidate solution, y, is obtained with probability |⟨y|β, γ⟩|^2. By repeated sampling of the state preparation and measurement steps, the expected value of the cost function over the returned samples can be computed as ⟨f⟩ = ⟨β, γ|H_P|β, γ⟩.

In many respects, the QAOA algorithm and its generalization are similar to the VQA framework described in Section 3. However, QAOA algorithms are less flexible than generic VQAs in the problems they can address. This is due to i) limitations on the construction of the problem Hamiltonian and ii) the single hyperparameter p, which defines how many alternating applications of the phase and mixing Hamiltonians occur. Together, these characteristics may limit the expressivity of QAOA algorithms and, in turn, their ability to yield a practical quantum advantage. Despite these potential challenges, the QAOA algorithm may have some benefits over other quantum algorithm approaches due to its relatively short circuit depth [fn 25], which aligns with the limited coherence of NISQ devices.

With respect to quantum advantages, recent numerical simulations have estimated that an experimental advantage would require hundreds of noisy qubits [242, 243]. Such a NISQ device appears plausible in the near term [244], and potential advantages using the QAOA framework are being explored on at least one state-of-the-art superconducting platform [245]. Further, the QAOA framework may also broaden the scope of possible quantum advantages to improved approximation ratios on hard optimization problems [fn 26]. Yet, the exact problems for which practical advantages may exist are at present unclear. Relevant to biology, we know of two recent examples of QAOA-based proof-of-principle demonstrations. The first applied QAOA to the task of protein folding in a lattice-based model [247], while another used QAOA to develop an overlap-layout-consensus approach for de novo genomic assembly [248].
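To ground the construction above, the sketch below evaluates ⟨β, γ|H_P|β, γ⟩ exactly for a depth p = 1 MaxCut instance on a 4-node cycle and grid-searches the two angles. The graph, the grid search, and the function names are our own illustrative choices; a practical implementation would use a quantum SDK and a classical optimizer rather than exhaustive search.

```python
# Toy state-vector QAOA (p = 1) for MaxCut on a 4-node cycle (maximum cut = 4).
import itertools
import numpy as np

edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
n = 4
N = 2 ** n

def cut_value(bits) -> int:
    return sum(bits[i] != bits[j] for i, j in edges)

# Cost f(y) of every computational basis state y (one bit per node/qubit).
costs = np.array([cut_value(bits) for bits in itertools.product((0, 1), repeat=n)])

def qaoa_expectation(gamma: float, beta: float) -> float:
    state = np.full(N, 1 / np.sqrt(N), dtype=complex)       # |s>: balanced superposition
    state *= np.exp(-1j * gamma * costs)                     # phase operator exp(-i*gamma*H_P)
    rx = np.array([[np.cos(beta), -1j * np.sin(beta)],       # exp(-i*beta*X) on one qubit
                   [-1j * np.sin(beta), np.cos(beta)]])
    mixer = np.array([[1.0]])
    for _ in range(n):                                       # exp(-i*beta*H_M) = tensor of Rx's
        mixer = np.kron(mixer, rx)
    state = mixer @ state
    return float(np.abs(state) ** 2 @ costs)                 # <beta,gamma| H_P |beta,gamma>

best = max(((qaoa_expectation(g, b), g, b)
            for g in np.linspace(0, np.pi, 25)
            for b in np.linspace(0, np.pi, 25)), key=lambda t: t[0])
print(f"best p=1 expected cut: {best[0]:.3f} of 4 (gamma = {best[1]:.2f}, beta = {best[2]:.2f})")
```

On this instance the best p = 1 expectation reaches roughly three of the four cuttable edges, consistent with the known limitations of low-depth QAOA; increasing p lengthens the circuit and adds parameters, as described above.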
To address the gap around the existence of problems where a quantum advantage may be possible, recent work has involved the development of a framework for searching for QAOA-based advantages [249] and a classical machine learning approach for identifying problems that may offer an advantage [250]. These general approaches, designed to aid in the targeting of problems that may yield a quantum advantage, remain relatively unexplored. Other work has focused on the characteristics of quantum information and the challenges they present to optimization in the quantum regime, which has led to insights around the type of structure needed for quantum advantages in low-depth circuits [251]. As is the case with many quantum algorithms, it is too early to say whether QAOA approaches will provide quantum advantages in practice. Quantum annealing (QA) devices provide an alternative approach to quantum computing: specialized NISQ hardware implementing a quantum analogue of classical simulated annealing that may yield quantum advantages in quantum simulation and optimization. Unlike the many gate-based NISQ devices, quantum annealers provide a form of analog quantum computation based on the adiabatic model [252, 253]. Existing commercial quantum annealers are qualitatively different from programmable gate-based devices 27. Most notably, their higher qubit counts (now on the order of 10^4) and qubit connectivity maps allow for relatively large and complex inputs. However, despite these benefits, a number of drawbacks also exist. These include i) a non-universal model of computation 28, ii) their analog nature, which is expected to limit the ability to perform error correction (although error mitigation appears possible), and iii) the development of a quantum-inspired classical algorithm, simulated quantum annealing (SQA) [256], capable of efficiently simulating quantum tunneling 29. Given their early development, large qubit counts, and robust connectivity maps, multiple proof of principle demonstrations targeting bioinformatics and computational biology applications have been developed using quantum annealers. These include ranking and classification of transcription factor binding affinities [257], the discovery of biological pathways in cancer from gene-patient mutation data [258], cancer subtyping [259], the prediction of amino acid side chains and conformations that stabilize a fixed protein backbone (a key procedure in protein design) [260], various approaches to protein folding [261, 262, 263, 264, 265], and two recent approaches for de novo assembly of genomes [248, 266]. However, despite many promising proof of principle demonstrations (for example, see [267]), no clear practical quantum advantage has been shown. The degree of advantage possible with QA algorithms also remains unclear, and this ambiguity arguably extends over the long term. 27 Like the programmable devices used in the recent quantum primacy experiments [12, 14, 15], commercial quantum annealers also use a superconducting qubit architecture, with some differences in qubit implementation. 28 The computational model implemented by practical quantum annealers is believed to be non-universal, unlike the adiabatic model [254, 255]. This limitation is due to the finite temperature of the hardware during computation; in principle, a quantum annealer operating at zero Kelvin could be universal. 29 Quantum tunneling is a uniquely quantum mechanical effect that was thought to be a potential source of advantage for QA devices.
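In practice, problems are submitted to a quantum annealer as a QUBO or Ising instance. The sketch below is a minimal illustration of that formulation step: it writes MaxCut as a QUBO and then uses classical simulated annealing as a stand-in for the annealer itself. The graph, annealing schedule, and parameters are arbitrary choices for illustration, and no vendor API is assumed.

```python
import numpy as np

# MaxCut on a small graph written as a QUBO: minimize x^T Q x with x in {0,1}^n.
# Each edge (i, j) contributes -x_i - x_j + 2 x_i x_j, so minimizing the QUBO
# energy maximizes the number of cut edges.
edges = [(0, 1), (1, 2), (0, 2), (2, 3), (3, 4), (4, 0)]
n = 5
Q = np.zeros((n, n))
for i, j in edges:
    Q[i, i] -= 1
    Q[j, j] -= 1
    Q[i, j] += 2

def energy(x):
    return float(x @ Q @ x)

# Classical simulated annealing as a stand-in for a quantum annealer.
rng = np.random.default_rng(1)
x = rng.integers(0, 2, size=n)
for T in np.geomspace(2.0, 0.01, 2000):   # geometric cooling schedule
    flip = rng.integers(n)
    x_new = x.copy()
    x_new[flip] ^= 1
    dE = energy(x_new) - energy(x)
    if dE < 0 or rng.random() < np.exp(-dE / T):
        x = x_new

print("assignment:", x, "cut size:", -energy(x))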
The primary factors contributing to this uncertainty around the prospects of quantum annealing are the aforementioned non-universal model of computation and the lack of clarity around the possibility for error correction. Thus, while quantum annealing approaches to optimization may yet bear fruit in the NISQ era [268], a number of barriers remain to be addressed [251]. In this section we describe a broad variety of potential applications for quantum algorithms. Our aim is to highlight the breadth of both existing quantum algorithms and the types of problems in biology and medicine that they may address. We leverage the quantum advantage framework described in Section 2 and note when a specific application may admit an empirical quantum advantage in the near or medium term (summarized in Table 3). While some of the applications described are not expected to be feasible in the near term (indeed, in some cases, even an FTQC may not be the most appropriate tool for the target problem), it is our intent for the breadth of potential applications and research directions covered to be valuable to an interdisciplinary audience. As such, when possible, we have sought to provide i) quantum scientists with relevant details and references to develop targeted quantum algorithms for applications in biology and medicine and ii) domain computationalists with information on quantum algorithms relevant to applications in biology and medicine and their prospects for operational quantum advantages in the near or medium term. Simulating microscopic properties and processes at the atomic level [282] is a key area of computational biology research. These tasks often require quantum mechanical simulations, which are classically intractable for all but the smallest quantum systems. These inherent limitations mean that most classical approaches are approximations and often provide a mostly qualitative understanding. In contrast, many of these same quantum mechanical simulations are a natural task for quantum computers. Beyond the NISQ era, it is expected that simulations of large quantum systems may be used to predict biochemical properties and behaviors that are not efficiently computable with classical devices [191, 283, 284]. In this section we begin with an overview of applications in or relevant to biology and medicine that may benefit from quantum simulation approaches. We then provide a brief summary of three quantum algorithms central to quantum simulation. Finally, we conclude with a discussion of the potential for empirical quantum advantages in the near term. Ground states, binding energies, and reactions. Calculating the energy of a molecular system is a ubiquitous task in computational chemistry [285]. An important example (which calculates ground states as one of several subroutines) is protein-ligand docking, where the goal is to calculate the binding energy of a small molecule (e.g. a drug) to a target site on a protein or other macromolecular structure (e.g. a receptor domain) [286]. Though this problem type is often approximated with classical mechanics, future quantum computers may provide highly accurate predictions for docking. Similar types of simulations may find use as subroutines for calculating protein-protein interactions and small-molecule properties, like the solubility of drug molecules [287, 288]. In addition, calculating the ground state along different nuclear positions yields reaction coordinates, which are essential for understanding both reactivity in vivo and drug synthesis mechanisms [289].
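The quantity targeted by these applications is, at its core, the lowest eigenvalue of a molecular Hamiltonian. As a minimal point of reference, the sketch below exactly diagonalizes a random Hermitian matrix standing in for a qubit-mapped electronic Hamiltonian (a real Hamiltonian would come from an electronic structure package). The exponential growth of the matrix dimension with system size is precisely what makes this classical route intractable for larger biomolecular systems and motivates the quantum algorithms summarized below.

```python
import numpy as np

rng = np.random.default_rng(7)

# Stand-in for a qubit-mapped electronic Hamiltonian on n_qubits.
n_qubits = 6
dim = 2 ** n_qubits
A = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
H = (A + A.conj().T) / 2  # make it Hermitian

# Exact diagonalization: classically feasible only for small systems,
# since dim grows as 2^(number of qubits / orbitals).
ground_energy = np.linalg.eigvalsh(H)[0]

print("ground-state energy:", round(float(ground_energy), 4))
print("Hilbert-space dimension:", dim)
```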
Molecular dynamics (MD) simulations involve propagating the equations of motion of a microscopic system, such as a complex containing proteins or DNA [290]. In addition to understanding qualitative mechanisms, a core goal is often to calculate quantities, such as diffusion rates [291] and Gibbs free energies [292]. To do this, MD often uses parameterized force fields and Newtonian dynamics, although one can accurately treat nuclear quantum effects (such as tunneling and zero-point energy) via path-integral MD methods [293], and electronic quantum effects using, for example, Car-Parrinello molecular dynamics [294]. In principle, quantum simulation approaches may be used both for time propagation (using quantum algorithms that speed up classical ordinary differential equations) [295, 296] and electronic structure calculations performed at each time step [297]. Excited states. Though excited electronic and vibrational [298, 299] states are not usually a primary focus in biological processes, they are important for probing microscopic states using spectroscopy [300, 301]. Green fluorescent protein (GFP), for example, is a commonly used marker that allows one to study the expression, localization, and activity of proteins in cells via microscopy [302, 303]. Other artificial dyes and markers have also been used to study dynamic processes, such as diffusion [304] and DNA unraveling [305]. In principle, the ability to accurately compute excited states could lead to effective screening methods to aid the development of novel fluorescent proteins and dyes that emit or absorb highly specific wavelengths, have narrower emission/absorption bands, or exhibit higher quantum efficiency. These dye markers may also be probed by a variety of spectroscopy methods, such as absorption, emission, and Raman spectroscopy 30 [300, 301, 303]. In addition, time-dependent femtosecond spectroscopy is often necessary when studying certain biomolecular processes and the ability to model femtosecond excited-state behavior could allow for more accurate interpretation of certain experiments [306]. Finally, other excited-state processes inherent to biological systems exist that may benefit from quantum approaches, such as photosynthesis [306] and modeling simple tissue degradation via ultraviolet light [307]. Electronic dynamics. Deeper understanding of some biologically relevant processes might be achieved from the simulation of electron dynamics 31. Cases where one may need to directly simulate the dynamics of electrons include enzymatically-driven reactions such as nitrogen fixation [283], biomolecular signaling [308], biological processes involving radical reactions [309, 310], component processes of neurons and synapses [311], photosynthetic processes [306], and the interpretation of electronic behavior in femtosecond spectroscopy experiments, as mentioned above. Some related fundamental phenomena might also be better understood via direct simulation. One notable example is proton-coupled electron transfer (PCET) [312], which is ubiquitous but only partially understood. The applications for knowledge generated from simulations of electronic dynamics vary widely. 30 Each method has different advantages with respect to molecular specificity and spatiotemporal resolution and the appropriate form of spectroscopy varies by application. 31 Simulation of electron dynamics is distinct from MD, which is concerned primarily with the motion of the atomic coordinates.
One future possibility could be the design of novel enzymes for the development of more sensitive and specific diagnostic assays or novel therapeutics [313]. Hybrid quantum-classical models. When modeling a large biomolecular system, researchers sometimes implement a model that uses different approximations for different portions of the system. For example, one might use Newtonian molecular dynamics for the majority of a protein, but perform electronic structure calculations for the protein's reaction site. Another example is to use density functional theory (DFT) [314] as a lower accuracy method and dynamical mean-field theory (DMFT) [315] as a higher accuracy method for a subsystem of interest [316]. Currently, classical examples of such multilayered approaches exist, such as the ONIOM method (Own N-layer Integrated molecular Orbital molecular Mechanics) [317]. In principle these existing classical approaches could be modified to run a classically intractable portion on a quantum computer, leaving the rest to run on a classical computer. Already, work on quantum algorithms in this direction is being pursued [316, 318]. Quantum algorithms for quantum simulation. Arguably, there are three broad quantum computational methods used when studying quantum physical systems: time propagation [51, 111], quantum phase estimation (QPE) [319, 320], and the variational quantum eigensolver (VQE) [169, 170, 321]. We describe the basic versions of these algorithms below. Time propagation. When one is interested in propagating the dynamics of a system, the goal is to approximate the time-propagation operator U(t) = e^{-iHt}, where H is the Hamiltonian describing the system of interest. For near term hardware, this is most easily performed using low-order Suzuki-Trotter decompositions [322, 323], though asymptotically more efficient algorithms also exist for fault tolerant devices [324, 325, 326, 327, 328, 329, 330]. Note also that QPE, discussed next, uses U(t) as a subroutine. Phase estimation. For an arbitrary Hamiltonian, the quantum phase estimation (QPE) algorithm outputs the phase e^{-iE_i τ} of the eigenenergy E_i for arbitrary τ, given as input an eigenvector |ψ_i⟩. When the input is a mixture of eigenstates, the probability of measuring a particular eigenvalue (eigenphase) is proportional to its overlap-squared. Assuming one has an FTQC and a method for preparing an eigenstate of interest (for example, a molecular ground state), the quantum phase estimation algorithm can be used to output the eigenenergy. One can readily determine the eigenvalue E_i from e^{-iE_i τ}, whose precision depends on the number of additional qubits in the second quantum register. Variational quantum eigensolver. For early generations of quantum hardware, it is likely that approaches based on the VQE will be the only viable option 32. In this method, the goal is to minimize the objective function E(θ) = ⟨ψ(θ)|H|ψ(θ)⟩ by varying the parameters θ. These parameters determine the behavior of the quantum circuit, which prepares the quantum state |ψ(θ)⟩. Usually these parameters simply control rotation angles for one- and two-qubit gates. A recent review of the VQE [321] discusses many of the extensions to the algorithm that have been proposed. In many cases, these algorithms (time propagation, QPE, and VQE) are extensively modified. For example, a variety of strategies to enhance their capabilities have been leveraged in experiments.
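As a minimal illustration of the VQE loop just described, the sketch below minimizes E(θ) = ⟨ψ(θ)|H|ψ(θ)⟩ for a toy two-qubit Hamiltonian using a small rotation-plus-CNOT ansatz and a classical optimizer. The Hamiltonian coefficients and ansatz are illustrative assumptions, and a hardware implementation would estimate the energy from measured Pauli expectation values rather than from the statevector.

```python
import numpy as np
from scipy.optimize import minimize

# Pauli matrices and a toy two-qubit Hamiltonian (illustrative coefficients).
I = np.eye(2); X = np.array([[0, 1], [1, 0]]); Z = np.diag([1, -1])
def kron(*ops):
    out = np.array([[1.0]])
    for op in ops:
        out = np.kron(out, op)
    return out

H = 0.5 * kron(Z, I) + 0.5 * kron(I, Z) + 0.25 * kron(X, X)

# Hardware-efficient-style ansatz: RY on each qubit, a CNOT, then RY again.
CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]], dtype=float)
def ry(theta):
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

def ansatz_state(theta):
    state = np.zeros(4); state[0] = 1.0            # start in |00>
    state = kron(ry(theta[0]), ry(theta[1])) @ state
    state = CNOT @ state
    state = kron(ry(theta[2]), ry(theta[3])) @ state
    return state

def energy(theta):
    psi = ansatz_state(theta)
    return float(psi @ H @ psi)                    # <psi(theta)|H|psi(theta)>

result = minimize(energy, x0=np.full(4, 0.1), method="COBYLA")
exact = np.linalg.eigvalsh(H)[0]
print(f"VQE energy: {result.fun:.4f}   exact ground state: {exact:.4f}")
```

In a real experiment the optimizer's objective calls are replaced by batches of circuit executions, which is where the measurement-reduction and error-mitigation strategies noted next become essential.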
Among the strategies leveraged in these experiments are error mitigation [121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131], post-processing to improve accuracy [332, 333], approaches to reducing the number of quantum circuit evaluations [184, 334, 335], and approaches to dynamically modify the quantum circuits [176]. It is anticipated that these types of practical enhancements, among others, will be crucial to realizing many empirical quantum advantages in the near term, both within and outside the space of quantum simulation problems. Prospects for quantum simulation. Quantum simulation offers some of the strongest prospects for practical quantum advantages. Example applications include finding ground [180, 181] and excited states [169, 336] of electronic degrees of freedom, vibrational degrees of freedom [298, 299], and more complex degrees of freedom [113, 337, 338, 339], or dispersion interactions between drug molecules and proteins [340]. VQEs in particular have the strong possibility of near term advantages as they scale. In this respect, one promising direction is divide-and-conquer approaches [341, 342], which combine multiple VQEs by hierarchical methods to simulate molecules that would otherwise be too large to input into current NISQ hardware. Provided these approaches prove practical, it is possible that simulating larger, biologically relevant molecules (proteins, nucleic acids, drugs, and metabolites) will be feasible in the near term. Further, potential also exists for other hybrid quantum-classical models. Already, quantum algorithms for embedding models have been developed [316, 318]. Similar algorithms may allow for the treatment of a subsystem (such as a protein active site) with the quantum computer while the rest of the system (such as the solvent and protein) is simulated with a classical computer. Like the VQE and short-duration quantum dynamics, these hybrid approaches may yield empirical quantum advantages in the near to medium term. In principle, quantum simulation approaches may also be used both for time propagation and electronic structure calculations, which could be performed at each time step. It is possible that short-time quantum dynamics simulations (using quantum algorithms that speed up classical ordinary differential equations, discussed in the next section) [295, 296] on near term devices may find limited use in elucidating reaction mechanisms and approximating free energies that remain classically intractable. Over the long term, it is possible that quantum simulations may expand to include QPE on FTQC devices. These methods would allow for the precise quantification of biochemical properties and behaviors far beyond the capabilities of classical HPC systems [283, 284, 191]. Further, it is possible that accurate, long duration ab initio MD simulations may also be attainable with these methods over the long term. Simulations of biological processes governed by classical physics often require searching over a parameter space to optimize a set of classical variables, for which search, optimization, and machine learning algorithms may be used. In addition, the simulation of Newtonian physics and other non-quantum processes is widely used in biological research. 32 We also note that a distinct algorithm called quantum imaginary time evolution (QITE) was recently proposed, which may also be promising for near-term hardware [331].
For these classical simulations, ordinary differential equations (ODEs) and partial differential equations (PDEs) are particularly important and have broad applications, from simulating fluid and tissue mechanics to molecular dynamics. Below, we first summarize this application space, briefly review relevant quantum algorithms, and then consider the prospects for empirical quantum advantages. Conformation search. It is often necessary to search a large conformational space in order to find a global or near-global optimum [343, 344, 345]. Such a search is usually performed over a domain of classical variables (e.g. Cartesian atomic coordinates), and may be done in concert with a quantum mechanical method for calculating the energy at each given conformation (as noted above). An important example of conformation search in biology is protein folding [343], but large conformation spaces are also encountered when studying other biomolecules, such as RNA [346], determining a drug's molecular crystal structure [347], or identifying pathways in complex reaction mechanisms [348]. Fluid mechanics. Many biological processes are governed by fluid mechanics, requiring the simulation of the Navier-Stokes equations [349]. Relevant macroscopic processes in this area include the simulation of cardiovascular blood flow [350] and air flow in lungs [351], as well as some aspects of gastroenterology [352]. On a smaller scale, one may want to simulate highly viscous flow inside or around microbes [353], or capillary flow [354]. The latter is especially relevant to angiogenesis [355] (a hallmark of cancer [26, 27]) and understanding tumor formation and drug permeability [356]. Further, fluid simulations may be used to model designs for chemical reactors and bioreactors, which are often critical components in drug [357] or complex tissue [358] manufacture. Non-fluidic continuum mechanics. Macroscopic modeling is important not just for fluids but also for solid or semi-solid continuum materials. In this context, the finite element method and related approaches are often used [359, 360]. While applications of these methods include the modeling of macroscopic tissues, such as muscle [361] and bone [362], they have also been applied at the nanoscale, including in simulations of the cytoskeleton [363]. Classical electrodynamics. Classical electrodynamics, too, can ultimately be described by differential equations. The design of medical devices is one area where this type of simulation may be useful. Examples may include the modeling of MRI designs [364] or, perhaps, medical devices that interact with lasers. Another area of application is the design of classical optical devices [365], which may be used in biological research [366]. Systems modeling and dynamics. There are many ubiquitous classical modeling approaches apart from those that directly use Newtonian physics. A particularly relevant example exists in complex population models, which are essential in fields such as epidemiology [367, 368]. Others include detailed simulations of entire cells [369], organs [370], or groups of organisms [371, 372]. Quantum algorithms for classical simulation. A number of targeted quantum algorithms have been proposed for finding low-energy conformations and searching through candidate molecules, many of which have been specifically developed for protein folding [343, 345, 373]. More generally, amplitude amplification [64] may be used to explore conformation spaces over classical variables with a quadratic advantage.
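The quadratic advantage just mentioned can be illustrated with a small statevector simulation of amplitude amplification. In the sketch below, a toy list of candidate conformations contains one low-energy entry, and an oracle (assumed here to recognize energies below a threshold) is amplified over roughly π/4·√N iterations; the energies, threshold, and problem size are arbitrary illustrative choices.

```python
import numpy as np

# Toy "conformation search": N candidate conformations, one of which has
# energy below a target threshold. The oracle is assumed to recognize it.
N = 256
rng = np.random.default_rng(5)
energies = rng.uniform(1.0, 10.0, size=N)
target = rng.integers(N)
energies[target] = 0.1                      # the sought low-energy conformation

marked = energies < 0.5                     # oracle predicate
state = np.full(N, 1.0 / np.sqrt(N))        # uniform superposition over candidates

n_iters = int(np.floor(np.pi / 4 * np.sqrt(N)))  # ~ pi/4 * sqrt(N) Grover iterations
for _ in range(n_iters):
    state[marked] *= -1.0                   # oracle: phase flip on marked items
    state = 2 * state.mean() - state        # diffusion: reflect about the mean

print(f"iterations: {n_iters} (vs ~{N // 2} classical oracle calls on average)")
print(f"probability of measuring the low-energy conformation: {state[target] ** 2:.3f}")
```

The structure of the oracle is where domain knowledge enters; as discussed next, the bare quadratic speedup alone is unlikely to be decisive in practice.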
Additionally, theoretical quantum advantages have also been shown for other optimization-related subroutines, such as escaping saddle points in optimization landscapes [374]. Adjacent to optimization and search, QML algorithms may also be applied. Already, examples exist, such as one leveraging quantum deep learning to search the chemical space [375, 376]. It is plausible that empirical advantages with these methods may be achievable in the near term given their hybrid quantum-classical structure. Finally, the past few years have also seen progress in quantum algorithms for solving classical differential equations, either for general cases [295, 296, 377, 378, 379] or specific applications, like the finite element method (FEM) [380, 381] or Navier-Stokes [382, 383]. Importantly, among these quantum algorithms are ones for solving the more difficult cases of non-homogeneous and nonlinear PDEs [296, 379]. Prospects for classical simulation. With respect to search over conformation spaces, it is possible that empirical quantum advantages may be achievable. However, recent work has indicated that general approaches offering quadratic advantages, like amplitude amplification, may be of limited value on their own, even in the FTQC regime [59]. For this reason, it is expected that the integration of domain knowledge and additional quantum subroutines will be key to achieving any future advantages. For QML, and hybrid quantum-classical algorithms in particular, further exploration of near term compatible methods to improve simulations of classical physics is merited. Finally, the two primary aspects of fluid simulations that lead to simulation difficulty are, arguably, system size and turbulent flow [384]. While it is unclear whether turbulence may be addressed efficiently with quantum approaches, quantum algorithms for differential equations may allow for reductions in complexity with respect to system size [296, 379], which could lead to superpolynomial advantages in some cases. However, a caveat also exists with known quantum algorithms for differential equations given that they are affected by the input and output problems described in Section 2.4. For this reason, further research is required to understand when empirical quantum advantages for these applications may become feasible. Optimization is central to many bioinformatics tasks, such as sequence alignment, de novo assembly, and phylogenetic tree inference. At their core, classical algorithms for these problems often use subroutines for matching substrings, constructing and traversing string graphs, and sampling and counting k-mers (i.e. substrings of biological sequences). Here, we describe the basic constructions of these problems and summarize relevant quantum algorithms. Sequence alignment is a computational primitive of bioinformatics tasks. The heavy integration of sequence alignment algorithms into bioinformatics software has led to diverse applications, from the de novo assembly of whole genomes [55] to the discovery of quantitative trait loci linked to disease phenotypes [385] and the identification and analysis of driver mutations in cancer [386]. A classic formulation for identifying the globally optimal alignment of two sequences involves finding the lowest weight path through an n × m dynamic programming matrix, where n and m are the lengths of the sequences being compared (it is often the case that n = m) [387].
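This dynamic programming formulation is captured by the classic Needleman-Wunsch recurrence. The sketch below fills the (n+1) × (m+1) score matrix using a simple match/mismatch/gap scheme; the scores are illustrative rather than a calibrated substitution matrix.

```python
import numpy as np

def needleman_wunsch(a, b, match=1, mismatch=-1, gap=-2):
    """Global alignment score via the (len(a)+1) x (len(b)+1) DP matrix."""
    n, m = len(a), len(b)
    F = np.zeros((n + 1, m + 1))
    F[:, 0] = gap * np.arange(n + 1)   # leading gaps in b
    F[0, :] = gap * np.arange(m + 1)   # leading gaps in a
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            s = match if a[i - 1] == b[j - 1] else mismatch
            F[i, j] = max(F[i - 1, j - 1] + s,   # (mis)match
                          F[i - 1, j] + gap,     # gap in b
                          F[i, j - 1] + gap)     # gap in a
    return F[n, m]

print(needleman_wunsch("GATTACA", "GCATGCA"))
```

The quadratic time and space cost in the sequence lengths is what motivates both the heuristics discussed next and the quantum subroutines considered later in this section.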
In practice, an approximate solution is typically constructed by a greedy heuristic using a biologically informed scoring function. Examples of scoring functions include sum-of-pairs, weighted sum-of-pairs, and minimum entropy (each of which implies certain biological assumptions) [388]. For example, with a weighted sum-of-pairs, one may assign different scores to DNA base matches, mismatches, substitutions, insertions, and deletions (the scoring system may also be used to control whether the output alignment is global [387] or local [389]). Alternatively, for proteins, a scoring matrix may be used where each cell represents the likelihood that the amino acid in the row will be replaced by the amino acid in the column [390]. This likelihood may be determined empirically by a statistical analysis of a large protein sequence database or on the basis of chemical properties of the amino acids (e.g. polar or non-polar, hydrophilic or hydrophobic). While pairwise alignment has polynomial complexity, the generalization to multiple sequence alignment (MSA) with sum-of-pairs scoring is known to be NP-hard [391, 392, 393]. Other heuristic approaches to sequence alignment also exist, such as progressive alignment [394]. For more on MSA algorithms and their broad applications, see this recent review [395]. De novo assembly refers to the process of assembling a reference genome (a foundational resource for many bioinformatics analyses) from a large set of overlapping reads of the genome. Often these are short reads (on the order of 10^3 base pairs long, with error rates on the order of 10^-5 [396]). However, more recent long read sequencing technologies (typically on the order of ≥ 10^5 base pairs, with error rates on the order of 10^-3 [397]) can also be used to aid in the scaffolding of the genome using a hybrid approach [55] 33. Modern software packages typically leverage one of two approaches: i) overlap-layout-consensus (OLC), which involves the construction of a string overlap graph and is reducible to the NP-complete Hamiltonian path problem (a greedy approximation heuristic is used), and ii) a k-mer graph approach, which involves the construction of a de Bruijn graph and is reducible to the Eulerian path problem, which admits a polynomial time algorithm. In practice, achieving high-quality, biologically plausible assemblies is non-trivial and subject to many challenges due to both genome structures, such as homopolymeric and repetitive regions, and the introduction of errors into sequencing data from library preparation, systematic platform error, and low coverage regions [399]. 33 A trade-off between read length and base error rates with short and long read sequencing technologies yields complementary characteristics that makes hybrid approaches to de novo assembly advantageous (this can be viewed as a close sibling of the well-documented trade-off between depth and coverage in short read sequencing data [398]). Long read technologies allow for the scaffolding of homopolymeric and repetitive regions that are beyond the length of short read sequences; short read technologies generate many overlapping reads of the sequenced regions, providing a large sample size (the depth of coverage for modern short read sequencers ranges from 3 × 10^1 to on the order of 10^4) and greater statistical confidence for each base call (which can each be treated as an individual hypothesis test).
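The k-mer graph approach mentioned above can be sketched in a few lines: reads are decomposed into k-mers, and a de Bruijn graph is built whose nodes are (k-1)-mers and whose edges are k-mers; a full assembler would then search this graph for an Eulerian-style path. The toy reads below are error-free and chosen purely for illustration.

```python
from collections import defaultdict

def de_bruijn(reads, k):
    """Build a de Bruijn graph: nodes are (k-1)-mers, edges are k-mers."""
    graph = defaultdict(list)
    for read in reads:
        for i in range(len(read) - k + 1):
            kmer = read[i:i + k]
            graph[kmer[:-1]].append(kmer[1:])
    return graph

# Toy error-free reads covering the sequence "ATGGCGTGCA".
reads = ["ATGGCG", "GGCGTG", "CGTGCA"]
graph = de_bruijn(reads, k=4)
for node, successors in sorted(graph.items()):
    print(node, "->", ", ".join(successors))
```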
Phylogenetic tree inference is the process of inferring the evolutionary relationships between multiple genome sequences 34. Typically, inference of a phylogeny involves combining i) a multiple sequence alignment of the genomes in question, ii) an evolutionary dynamics model accounting for the types of evolutionary processes occurring between sequences, and iii) a tree topology, where branch lengths represent the distance (e.g. Hamming or Levenshtein distance) between two sequences. The evolutionary dynamics model may be as simple as a continuous-time Markov model sampling from a table of specific events with empirically estimated transition probabilities (e.g. a base substitution T > A; in evolutionary biology, the events and their probabilities may be highly specific to a species or genus). Alternatively, more complex Bayesian methods and mixture models may be used, where the associated probabilities for events may vary for different sequence regions. The topology of a phylogenetic tree may be initialized randomly from the MSA and is often inferred by hierarchical clustering with a maximum likelihood estimator (MLE) [401]. In practice, phylogenetic tree methods are central to evolutionary biology [53] and to understanding the evolutionary dynamics of clonal populations in cancer [402] (among a multitude of other applications), the latter of which has significant clinical relevance to the targeting of precision therapeutics and the characterization of treatment resistance. Emerging application areas. Many emerging application areas in bioinformatics exist that may represent interesting targets for quantum algorithm development. Examples of application areas include i) the inference of topologically associating domains (TADs; interacting regions of chromosomes governed by the 3-dimensional bundling structure of chromatin in cell nuclei), which are crucial to our understanding of epigenetic mechanisms [403], ii) single-cell multiomics, a set of novel methods for generating multi-modal data from single-cell sequencing assays, which allow for simultaneous measurement of a cell's state across the biological layers (e.g. genomic, transcriptomic, and proteomic) [404], and iii) improving the modeling and inference of biological networks (e.g. interaction networks for genes, transcripts, or proteins). With respect to the latter, this may include the alignment of multimodal networks [405, 406, 407], which can be used for predicting the associations between biological and disease processes [406, 407], and the modeling of gene regulatory networks using complex-valued ODEs [408], which may be especially well suited to quantum information. Given the breadth of the bioinformatics space, these applications represent a very small subset of the potential emerging application space for quantum algorithm development. Prospects for bioinformatics. A small number of quantum algorithms for problems in bioinformatics have been proposed (Table 3). These include theoretical algorithms developed for FTQC devices that target NP-hard problems, such as sequence alignment [87, 88, 89] and the inference of phylogenetic trees [90], which leverage amplitude amplification and quantum walks [409]. To be made practical, these theoretical quantum algorithms are expected to require both significant refinement and effort in translation. 34 It is notable that phylogenetic tree inference has taken center stage in the COVID-19 pandemic with tools such as Nextstrain [400], which has been used to monitor the evolution of SARS-CoV-2.
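As a point of reference for the distance-based view of tree inference described above, the sketch below computes pairwise Hamming distances between toy aligned sequences and builds a tree by average-linkage (UPGMA-style) hierarchical clustering; production pipelines instead combine explicit evolutionary models with maximum likelihood or Bayesian inference, as noted above.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.cluster.hierarchy import linkage, dendrogram

# Toy aligned sequences (same length, as produced by an MSA step).
names = ["seqA", "seqB", "seqC", "seqD"]
seqs = ["ACGTACGTAC",
        "ACGTACGAAC",
        "ACTTACGAAG",
        "TCTTACGAAG"]

# Encode bases as integers so pdist can compute normalized Hamming distances.
codes = {"A": 0, "C": 1, "G": 2, "T": 3}
X = np.array([[codes[b] for b in s] for s in seqs])
D = pdist(X, metric="hamming")          # condensed pairwise distance matrix

# UPGMA-style tree: average-linkage hierarchical clustering on the distances.
tree = linkage(D, method="average")
print(dendrogram(tree, labels=names, no_plot=True)["ivl"])  # leaf order
```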
In the near term, refinements to these theoretical algorithms could include i) recasting them for NISQ devices using the VQA, QAOA, or QA frameworks and ii) integrating greater biological context. Already, examples of this type of work exist for de novo assembly [248, 266], sequence alignment [270], and the inference of biological networks [274, 275]. Over the long term, operational advantages may be pursued by optimizing near term approaches and integrating fast quantum algorithm subroutines where possible. Known quantum algorithms that may be relevant to this work include ones for backtracking [85], dynamic programming [65, 66], operating on strings [91, 272, 271], and differential equations [295, 296, 377, 378, 379]. Taking these measures into account, operational advantages for these problems may nonetheless remain among the most difficult to achieve. This is partly due to the factors discussed in Section 2.4. Other barriers to quantum advantages include i) the sophistication of existing classical heuristic algorithms and the inherent parallelism of many of the problems they solve, ii) the scale of both existing classical hardware and practical problem instances within the context of contemporary research [410], iii) the broad institutional support and incumbent advantage benefiting existing classical approaches (including extensive clinical validation in the medical setting), and iv) the likely precondition of FTQC to realize polynomial advantages based on amplitude amplification in practice [59]. Thus, while current research in this direction shows long term promise and should be explored further, many of these quantum advantages appear unlikely to be practical in the near term. Many of the quantum advantages associated with near term variational QML algorithms relate to model capacity, expressivity, and sample efficiency. In particular, variational QML algorithms may yield reductions in the number of required trainable parameters [238], generalization error [226, 227, 228, 229], and the number of examples required to learn a model [236, 222], as well as improvements in training landscapes [277, 226, 227, 231, 232]. Evidence supporting one or more of these advantages has been found in both theoretical models and proof of principle implementations of quantum neural networks (QNNs) [226, 227, 228, 231] and quantum kernel methods (QKMs) [222, 229, 225]. It is notable that these methods are both closely related to VQAs leveraging gradient-based classical optimizers (indeed, they often share overlapping definitions in the literature, as briefly noted in 2019 [213]) [223, 411, 412]. Given the breadth of applications for machine learning approaches in biology, we focus our discussion below on these types of advantages and their potential applications in lieu of specific methods. Improvements to training landscapes refer to the reduction or removal of barren plateaus and narrow gorges in the landscape of the objective function of a gradient-based learning algorithm. These improvements may stem from the unitary property of (many) quantum circuits, which inherently maintains the length of the input feature vector throughout the computation [238], provided an appropriate input encoding is used. This bears similarity to many classical approaches used to improve and stabilize training landscapes in practice, such as batch normalization [413] and self-normalizing neural networks [414].
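The norm-preservation point above is easy to verify numerically: an amplitude-encoded feature vector keeps unit length under any sequence of unitary layers. In the sketch below, random unitaries (drawn via QR decomposition) stand in for parameterized circuit layers; this is an illustration of the property itself, not of any particular QML architecture.

```python
import numpy as np

rng = np.random.default_rng(3)

def random_unitary(dim):
    """Random unitary via QR decomposition of a complex Gaussian matrix."""
    A = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
    Q, R = np.linalg.qr(A)
    return Q * (np.diag(R) / np.abs(np.diag(R)))   # fix the phases of the diagonal

# Amplitude-encode a feature vector on 3 qubits (normalize to unit length).
features = np.array([0.2, 1.4, 0.7, 0.0, 2.1, 0.3, 0.9, 1.1])
state = features / np.linalg.norm(features)

print("input norm:", np.linalg.norm(state))
for layer in range(5):                              # five "circuit layers"
    state = random_unitary(len(state)) @ state
    print(f"norm after layer {layer + 1}: {np.linalg.norm(state):.12f}")
```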
While improved training landscapes may result in more rapid convergence, it is unclear whether this type of advantage alone can be made practical (e.g. by allowing for a model to be trained that would be "untrainable" by classical means). Fortunately, improvements in training landscapes have been seen to co-occur with reductions in generalization error [226, 415]. Reductions in generalization error. Generalization error measures the ability of a machine learning model to maintain similar performance (i.e. "generalizability") on unseen data 35. Reductions in generalization error may yield advantages in the accuracy and flexibility of trained machine learning models. Advantages in generalization error are dependent on a variety of factors, including the encoding used (with basis encoding performing particularly poorly [416]) and the availability of data sufficient to train a comparable classical model [225]. While there is substantial evidence supporting reductions in generalization error [226, 227, 228, 229], evidence of poor generalization performance under certain constructions also exists (e.g. see this recent paper [417]). This may be partly attributable to shallower quantum circuits providing better utility bounds than deeper circuits [418], which contrasts with classical neural network intuition, where increased layer depth is associated with an exponential increase in model expressiveness [419]. With respect to applications, much like improvements in training landscapes, reductions in generalization error (despite having broad relevance) may alone be insufficient to provide a practical quantum advantage in the near term. Reductions in sample complexity may allow for the learning of robust machine learning models from fewer examples. Intuitively, sample complexity (and generalization error) advantages may arise when quantum entanglement enables the modeling of classically intractable correlative structures. If such sample complexity advantages are achievable with classical data, they will likely be problem instance specific [225], highly dependent on the distribution of the input data [222, 225], and are unlikely to be superpolynomial [235, 420]. Nonetheless, polynomial [234, 235, 236] or even sub-linear reductions in the number of examples required to build a classifier could provide significant operational advantages. The source of these operational advantages can be viewed as the result of the typically high cost of sample generation or acquisition in biological research and clinical contexts. This cost can be due to a variety of factors, including wet lab protocol duration, procedural invasiveness, financial cost, or low disease incidence. Some emblematic examples of costly data acquisition in the clinical context include the sampling of bone marrow in leukemias and time-consuming medical imaging of patients with rare neurological diseases. This significant cost in the acquisition of data contrasts with typical hardware measures of time and space resources, such as clock cycles, gates, (qu)bits, and queries, which are often (or are expected to be) very cheap due to their high frequency and scalability 36. Indeed, where a quadratic reduction in the number of function queries (e.g. from n = 10^6 to √n = 10^3) may lead to only millisecond differences in compute time, a similar reduction in the number of samples could save months or years in biological sample collection and processing time (to say nothing of the economic considerations).
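The operational stakes of sample complexity can be visualized with an entirely classical learning curve: test accuracy as a function of training-set size. The sketch below uses synthetic data and a logistic regression model purely to make the shape of such a curve concrete; shifting this curve leftward, even modestly, is the operational content of a sample complexity advantage.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-in for an expensive-to-acquire clinical dataset.
X, y = make_classification(n_samples=2000, n_features=20, n_informative=8,
                           random_state=0)
X_pool, X_test, y_pool, y_test = train_test_split(X, y, test_size=0.5,
                                                  random_state=0)

# Learning curve: accuracy on held-out data as the training set grows.
for n in (20, 50, 100, 300, 1000):
    clf = LogisticRegression(max_iter=1000).fit(X_pool[:n], y_pool[:n])
    print(f"training samples = {n:4d}   test accuracy = {clf.score(X_test, y_test):.3f}")
```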
Further, by providing a potentially large improvement in operational outcomes, sample complexity advantages over classical data distributions may also exhibit a resiliency to improvements in classical hardware. For these reasons, the discovery of data distributions for which quantum computers may offer even small reductions in sample complexity could have substantial relevance to the development of QML approaches for many prediction and inference problems in biology and medicine. Privacy advantages. Since the earliest days of the QIS field, the inherently private nature of quantum information has been a subject of significant interest [421]. More recently, theoretical work at the intersection of quantum information and differential privacy has been explored [422, 423, 424, 425], with potential applications in the healthcare setting [426]. While research in this area is in the very early stages, the potential for differentially private learning on large, open healthcare data resources presents an opportunity for classical machine learning techniques and may be further empowered by quantum advantages in differentially private learning and data sharing over the long term. For an example of one classical approach, see this recent paper [427]. Prospects for quantum machine learning. Variational quantum machine learning (QML) is expected to provide a methodological toolbox with significant relevance to a wide range of biological research and clinical applications. Substantial numerical and theoretical evidence now points towards a variety of strengths related to variational QML algorithms on NISQ hardware. These include robustness in the presence of device, parameter, feature, and label noise [227, 238, 428, 429]. It is possible that the breadth of applications may be similar to deep learning, a set of highly flexible methodological tools for generative and predictive modeling now widely used in the field [430, 431, 432]. With respect to quantum advantages, further experimental work is necessary to assess whether the potential advantages in variational QML discussed above can yield operational advantages. Sample complexity advantages in particular could have a great impact 37. Indeed, if even small polynomial reductions can be demonstrated over data distributions common in biological and clinical research, they may find important applications where examples are rare (e.g. due to disease incidence) or sample acquisition is expensive, invasive, or difficult. Examples that fit these criteria include the diagnosis and prognosis of rare phenotypes [433, 434], the identification of adverse multi-drug reactions from EHR data [435, 436], and the diagnosis of cancers [437] and their subtyping on the basis of clinical outcomes, such as drug sensitivity [438] and disease prognosis [439]. Altogether, while practical applications of QML advantages remain largely theoretical, their potential to address existing domain constraints provides ample motivation for further research into variational QML approaches. In bioinformatics and computational biology, non-traditional data structures have long been leveraged by classical algorithms to great effect. For instance, a number of state-of-the-art algorithms for error correcting sequencing data [56] leverage Bloom filters [440], a probabilistic data structure related to hash tables.
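For readers less familiar with this data structure, the sketch below implements a minimal Bloom filter (a fixed bit array plus several salted hash functions) of the kind used to hold k-mer sets in sequencing pipelines; the sizes and hash construction are illustrative choices rather than those of any particular tool.

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter: membership queries may yield false positives,
    never false negatives, in exchange for a fixed, small memory footprint."""

    def __init__(self, n_bits=1 << 16, n_hashes=4):
        self.n_bits = n_bits
        self.n_hashes = n_hashes
        self.bits = bytearray(n_bits // 8)

    def _positions(self, item):
        # Derive several independent-ish hash positions by salting blake2b.
        for salt in range(self.n_hashes):
            digest = hashlib.blake2b(item.encode(),
                                     salt=salt.to_bytes(8, "little")).digest()
            yield int.from_bytes(digest[:8], "little") % self.n_bits

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def __contains__(self, item):
        return all(self.bits[pos // 8] & (1 << (pos % 8))
                   for pos in self._positions(item))

# Store a set of k-mers and query membership.
bf = BloomFilter()
for kmer in ("ACGTACGT", "TTGCAACG", "GGGTTACA"):
    bf.add(kmer)
print("ACGTACGT" in bf, "AAAAAAAA" in bf)   # True, almost certainly False
```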
The core benefit of a Bloom filter comes from its ability to trade a low probability of false positive lookups for significant savings in memory, a common constraint in large bioinformatics pipelines. In a similar vein, the full-text index in minute space (FM-index) data structure [441] is leveraged (in conjunction with the Burrows-Wheeler transform) by sequence aligners such as Bowtie [442], the BWA family of aligners [443, 444, 445], and more recent graph reference genome aligners [446, 447]. Like Bloom filters, FM-indexes offer rapid querying and significant memory efficiency. It is conceivable that the inherently probabilistic nature of quantum computers and the novel data input modalities offered by quantum information, such as angle and phase encoding, could lead to the development of similarly useful quantum data structures and abstractions in the FTQC regime. QRAM represents one example [71, 72, 73, 74, 75]. In the medium term, a concerted effort towards developing an open quantum data structure library may be useful for improving our understanding of the types of quantum approaches that may admit practical advantages over the long term. The landscape of quantum advantages considers the benefits of quantum technologies relative to existing classical alternatives. An advantage is identified by evidence, and the context of that evidence can be described as theoretical, experimental, or operational. We divide quantum advantages into four classes on the basis of the strength of the quantum advantage (i.e. a polynomial or superpolynomial reduction in complexity) and the complexity of the analogous classical algorithm (i.e. polynomial or superpolynomial). Each class implies differing prospects for achieving experimental and operational advantages in the near term. Among these four classes, superpolynomial advantages on classically hard problems appear to present the most viable path to operational advantages. In biology and medicine, relevant domain problems in this class include ones related to the quantum simulation of biologically relevant molecules (such as small molecules, protein domains, and nucleic acids) and their chemical quantities. This suggests that the fields of drug development, biochemistry, and structural biology may stand to benefit in the near term from targeted proof of principles using hybrid quantum-classical approaches, such as variational quantum simulation. More speculatively, quantum machine learning algorithms yielding advantages in sample complexity (even small polynomial ones) may yield meaningful advantages in the near to medium term. In particular, this type of advantage may allow for the training of quantum machine learning models that exhibit generalization error rates similar to classical machine learning models but require less input data. Given the pervasive challenges around generating and acquiring biological and clinical samples, reducing sample complexity may provide a basis for significant operational advantages. Further, these advantages may be resilient to improvements in classical hardware capabilities. Quantum advantages may result from a variety of quantum algorithm paradigms. These include variational quantum simulation, variational quantum machine learning, quantum approximate optimization algorithms, and quantum annealing. Already, variational quantum algorithms have shown particularly promising results.
This is partly due to their substantial flexibility, which allows them to tackle a wide variety of problems across quantum simulation, quantum machine learning, and optimization. Notably, while VQAs may yield superpolynomial advantages on classically hard problems, whether these near term algorithms can fully capitalize on the computational power afforded by quantum information remains a matter of theoretical investigation (e.g. see [448]). Nonetheless, quantum primacy experiments [12, 14] leveraging parameterized quantum circuits, the core quantum component of near term hybrid quantum-classical algorithms, have already demonstrated the viability of quantum advantages on NISQ devices. Finally, we note that it remains possible that the greatest fruit of research into quantum approaches will be novel quantum-inspired classical algorithms. For example, the previously noted framework [197] for the "dequantization" of QML algorithms [196, 198] based on the QLSA has led to the development of classical algorithms that may in the future improve upon existing practical implementations. Similarly, the development of one classical optimization algorithm for constraint satisfaction [246] was inspired by the original QAOA [239] and improved upon its performance. Further, recent work on the value of data in classical computation has extended our understanding of the types of hard problems that may be tractable with high quality data and classical machine learning techniques [225]. On the hardware side, stiff competition between quantum approaches [12] and their classical counterparts [16, 18, 449, 450, 451], and the potential that decoherence may not be tamed, together leave open the possibility that large quantum advantages or fault tolerant quantum computers may not be possible. However, a diverse and growing body of evidence, from recent work on novel, more efficient error correcting codes [136, 141] to the identification of significant operational quantum advantages in the energy required to perform computations [18, 19], gives much reason for optimism. [4] D. Deutsch. Quantum theory, the Church-Turing principle, and the universal quantum computer. In Proceedings of the Royal Society of London A, volume 400, 1985. [5] D. Deutsch. Quantum computational networks. In Proceedings of the Royal Society of London A, volume 425, pages 73-90, 1989.
Vychislimoe i nevychislimoe (computable and noncomputable) The computer as a physical system: a microscopic quantum mechanical hamiltonian model of computers represented by turing machines Simulating physics with computers Quantum theory, the church-turing principle, and the universal quantum computer Quantum circuit complexity Quantum complexity theory (preliminary abstract) Algorithms for quantum computation: Discrete logarithms and factoring A method for obtaining digital signatures and publickey cryptosystems Physicists need to be more careful with how they name things Quantum computing and the entanglement frontier Quantum computational advantage using photons Leveraging secondary storage to simulate deep 54-qubit sycamore circuits Solving the sampling problem of the sycamore quantum supremacy circuits Establishing the quantum supremacy frontier with a 281 pflop/s simulation Quantum technologies for climate change: Preliminary assessment Quantum computing in the nisq era and beyond Fault-tolerant quantum computation with constant error rate Fault-tolerant quantum computation by anyons All biology is computational biology The rna world: molecular cooperation at the origins of life Protein structure prediction and structural genomics The hallmarks of cancer Hallmarks of cancer: The next generation The many roles of computation in drug discovery Five-year follow-up of patients receiving imatinib for chronic myeloid leukemia A new initiative on precision medicine Estimating computational limits on theoretical descriptions of biological cells Computationally predicting binding affinity in protein-ligand complexes: free energy-based simulations and machine learning-based scoring functions Bringing complexity into clinical practice: An internistic approach Stanford medicine 2017 health trends report: Harnessing the power of data in health The ncbi dbgap database of genotypes and phenotypes Grossman. The nci genomic data commons Quantumassisted biomolecular modelling Potential of quantum computing for drug discovery Quantum computing at the frontiers of biological sciences The prospects of quantum computing in computational molecular biology Towards practical applications in quantum computational biology Quantum theory from five reasonable axioms. arXiv e-prints Quantum algorithms: an overview Quantum algorithm zoo The physical implementation of quantum computation A. Noisy intermediate-scale quantum (nisq) algorithms. arXiv e-prints Quantum machine learning Variational quantum algorithms Quantum certification and benchmarking. arXiv e-prints Verification of quantum computation: An overview of existing approaches Quantum computation and information Quantum algorithm implementations for beginners Phylogenetic tree building in the genomic age Deep learning techniques for medical image segmentation: Achievements and challenges The present and future of de novo whole-genome assembly Denoising dna deep sequencing datahigh-throughput sequencing errors and their correction Opportunities and obstacles for deep learning in biology and medicine Defining and detecting quantum speedup Focus beyond quadratic speedups for error-corrected quantum advantage A fast quantum mechanical algorithm for database search Strengths and weaknesses of quantum computing. arXiv e-prints An optimal separation of randomized and quantum query complexity Quantum amplitude amplification and estimation. 
arXiv e-prints Quantum speedups for exponential-time dynamic programming algorithms Quantum algorithms for solving dynamic programming problems Toward the first quantum simulation with quantum speedup Quantum algorithm for solving linear systems of equations Quantum support vector machine for big data classification Quantum recommendation system Quantum random access memory Architectures for a quantum random access memory On the robustness of bucket brigade quantum ram Hardwareefficient quantum random access memory with hybrid quantum acoustic systems Scalable and high-fidelity quantum random access memory in spin-photon networks The intrinsic computational difficulty of functions Quantum algorithms and lower bounds for convex optimization. arXiv e-prints Convex optimization using quantum oracles. Quantum Quantum speed-ups for semidefinite programming Quantum algorithms for the triangle problem Quantum property testing for bounded-degree graphs Time and space efficient quantum algorithms for detecting cycles and testing bipartiteness Can graph properties have exponential quantum speedup? arXiv e-prints How symmetric is too symmetric for large quantum speedups? arXiv e-prints Quantum walk speedup of backtracking algorithms Quantum speedup of branchand-bound algorithms Fast quantum search algorithms in protein sequence comparisons: Quantum bioinformatics Multiple sequence alignment by quantum genetic algorithm A quantum pattern recognition method for improving pairwise sequence alignment Quantum simulation of phylogenetic trees. arXiv e-prints Quantum pattern matching fast on average Read the fine print Polynomial-time algorithms for prime factorization and discrete logarithms on a quantum computer Simulation of manybody fermi systems on a universal quantum computer Polynomial-time quantum algorithm for the simulation of chemical dynamics On the relationship between continuous-and discrete-time quantum walk Black-box hamiltonian simulation and unitary implementation Simulating hamiltonian dynamics with a truncated taylor series Optimal hamiltonian simulation by quantum signal processing Quantum algorithm for molecular properties and geometry optimization Rapid solution of problems by quantum computation Quantum complexity theory On the power of quantum computation Forrelation: A problem that optimally separates quantum from classical computing A volumetric framework for quantum computer benchmarks. arXiv e-prints Characterizing quantum supremacy in near-term devices A blueprint for demonstrating quantum supremacy with superconducting qubits The computational complexity of linear optics Boson sampling with 20 input photons and a 60-mode interferometer in a 1014-dimensional hilbert space Expert biases in technology foresight. 
We would like to thank Srinivasan Arunachalam, Meysam Asgari, Aaron Cohen, Justin Gottschlich, Jennifer Paykin, Marek Perkowski, Robert Rand, Xubo Song, and Guanming Wu for productive discussions around topics covered in this work. B. Cordier would like to acknowledge funding by the National Library of Medicine and the National Institute of Environmental Health Sciences (5T15LM007088-27).