The metaRbolomics Toolbox in Bioconductor and beyond
Stanstrup, Jan; Broeckling, Corey D.; Helmus, Rick; Hoffmann, Nils; Mathé, Ewy; Naake, Thomas; Nicolotti, Luca; Peters, Kristian; Rainer, Johannes; Salek, Reza M.; Schulze, Tobias; Schymanski, Emma L.; Stravs, Michael A.; Thévenot, Etienne A.; Treutler, Hendrik; Weber, Ralf J. M.; Willighagen, Egon; Witting, Michael; Neumann, Steffen
Metabolites, 23 September 2019; DOI: 10.3390/metabo9100200

Abstract: Metabolomics aims to measure and characterise the complex composition of metabolites in a biological system. Metabolomics studies involve sophisticated analytical techniques such as mass spectrometry and nuclear magnetic resonance spectroscopy, and generate large amounts of high-dimensional and complex experimental data. Open source processing and analysis tools are of major interest in light of innovative, open and reproducible science. The scientific community has developed a wide range of open source software, providing freely available advanced processing and analysis approaches. The programming and statistics environment R has emerged as one of the most popular environments to process and analyse metabolomics datasets. A major benefit of such an environment is the possibility of connecting different tools into more complex workflows. Combining reusable data processing R scripts with the experimental data thus allows for open, reproducible research. This review provides an extensive overview of existing packages in R for different steps in a typical computational metabolomics workflow, including data processing, biostatistics, metabolite annotation and identification, and biochemical network and pathway analysis. Multifunctional workflows, possible user interfaces and integration into workflow management systems are also reviewed. In total, this review summarises more than two hundred metabolomics-specific packages primarily available on CRAN, Bioconductor and GitHub.

Metabolomics aims to measure, identify and (semi-)quantify a large number of metabolites in a biological system. The methods of choice are generally Nuclear Magnetic Resonance (NMR) spectroscopy or Mass Spectrometry (MS). The latter can be used directly (e.g., direct infusion MS), but is normally coupled to a separation system such as Gas Chromatography (GC-MS), Liquid Chromatography (LC-MS) or Capillary Electrophoresis (CE-MS). To increase the separation power, multidimensional separation systems are becoming common, such as comprehensive two-dimensional GC or LC (GC×GC, LC×LC) or LC combined with ion mobility spectrometry (LC-IMS) before MS detection. Other detection techniques include Raman spectroscopy, UV/VIS (ultraviolet/visible absorbance spectrophotometric detection, typically with a Diode Array Detector (DAD)) and fluorescence. NMR also benefits from separation techniques, such as LC-MS-NMR or LC-SPE-NMR. Additionally, there is a wide variety of pulse programs commonly used in 1D, and an even bigger set of 2D pulse programs used in metabolomics and for metabolite identification; for a comprehensive review on this, see [1]. A general introduction to metabolomics can be found in textbooks like [2-4] or online courses like [5,6]. All of these analytical platforms and methodologies generate large amounts of high-dimensional and complex experimental raw data when used in a metabolomics context.
The amount of data, the need for reproducible research, and the complexities of the biological problem under investigation necessitate a high degree of automation and standard workflows in the data analysis. In comparison to vendor software, which is usually not open, open source projects offer the possibility to work in community-driven teams, perform reproducible data analysis and to work with different types of raw data. Many tools and methods have been developed to facilitate the processing and analysis of metabolomics data; most seek to solve a specific challenge in the multi-step data processing and analysis workflow. This review provides an overview of the metabolomics-related tools that are made available as packages (and a limited number of non-trivial, non-packaged scripts) for the statistics environment and programming language R [7]. We have included packages even if they are no longer part of current CRAN or Bioconductor releases, i.e., they are available as archived versions only. We have not included packages described in the literature if they are no longer available for download at all. We did include packages that are currently available, but not yet published in the scientific literature.
Figure 1. Overview of typical tasks in metabolomics workflows, ranging from metabolite profiling (left, green) via metabolite annotation (center, purple) to data analysis using statistics and metabolite networks (right, red).
The first step for any metabolomics study is conversion from vendor formats into open data formats and pre-processing of the obtained raw data. The latter entails converting chromatographic (usually hyphenated to MS) or spectroscopic data into a data matrix suitable for data analysis.
For LC-MS data this typically involves feature detection (or peak picking) in individual samples followed by matching of features between samples. For spectroscopic data, this typically means alignment of spectra and potentially binning of the spectra into 'buckets'. The final matrix will have samples in one dimension and so-called features (unique chromatographic features or spectral bins) in the other dimension. In NMR-based metabolomics, several steps such as phasing and baseline correction are carried out to process the raw time domain data into a spectrum and to improve its quality. Next is alignment of peaks across spectra and samples, followed by segmenting the data into bins or a peak fitting step, depending on the method used. Once the analytical data has been pre-processed, it is generally subjected to different statistical approaches to find features that are "interesting" in the context of the experimental design, e.g., differentiating diseased patients from healthy controls. In untargeted metabolomics, the selected features contain only the characteristics (e.g., m/z, retention time, chemical shift, intensity) obtained from the measurement, but not (yet) the metabolite identification or chemical structure as such. Different approaches exist for this metabolite annotation step, ranging from (usually insufficient) database lookup of exact mass (MS) or chemical shift (NMR) alone, to the use of fragmentation patterns obtained in tandem MS experiments, which can be searched against spectral databases or analysed with in silico algorithms, to spectral searching or de novo structure elucidation using combinations of NMR experiments (often 1D and 2D). Large parts of the metabolomics software landscape in general have been covered in reviews; recent ones include the large list of software packages [8] first described by Spicer et al. [9], a series of annual reviews covering the list maintained by Misra and others [10-13], a review by Kannan et al. [14] and the review focussing on approaches for compound identification of LC-MS/MS data by Blaženović et al. [15]. These reviews included software regardless of the programming environment or language used for the implementation. In Section 2.9 we briefly mention how those tools can be accessed from within R. This review will focus on the ecosystem of R packages for metabolomics. It provides an overview of packages to carry out one or multiple of the above-mentioned steps. Some aspects are not covered in depth or not at all. For example, MS-based imaging in metabolomics is an area that has unique challenges and merits its own review, and it is also beyond the scope of this review to discuss all statistical methods that could be applied in metabolomics. The core of the R language was started in 1997 and provided the basic functionality of a programming language, with some functions targeting statistics. The real power driving the popularity of R today is the huge number of contributed packages providing algorithms and data types for a myriad of application realms. Many packages have an Open Source license. This is not a phenomenon exclusive to R, but is rather a positive cultural aspect of bioinformatics software being mostly published under Open Source license terms, regardless of the implementation language.
An R interpreter can be embedded in several other languages to execute R code snippets, and R code can also be executed via different workflow systems (e.g., KNIME or Galaxy, see Section 2.9), which is beneficial for analysis workflows, interoperability and reuse. These packages are typically hosted on platforms that serve as an umbrella project and are a "home" for the developer and user communities. The Comprehensive R Archive Network (CRAN) repository contains over 14,500 packages for many application areas, including some for bioinformatics and metabolomics. The "CRAN Task Views", which are manually curated resources describing available packages, books etc., help users navigate CRAN and find packages for a particular task. For metabolomics, the most relevant Task View is "Chemometrics and Computational Physics" [16] edited by Katharine Mullen, which includes sections on Spectroscopy, Mass Spectrometry and other tasks relevant for metabolomics applications. The Bioconductor project (BioC for short) was started by a team around Robert Gentleman in 2001 [17] , and has become a vibrant community of around 1000 contributors, working on 1741 software, 371 data and 948 annotation packages (BioC release 3.9). In addition to a rich development infrastructure (website, developer infrastructure, version control, build farm, etc.) there are regular workshops for developers and users. To enable reproducible research, BioC runs bi-annual software releases tied to a particular R release, thus ensuring and guaranteeing interoperability of packages within the same BioC release and allowing to install BioC packages from a certain release to reproduce or repeat old data analyses. On both CRAN and BioC, each package has a landing page pointing to sources, build information, binary packages and documentation. On BioC, packages are sorted (by their respective authors) into "BiocViews", where most packages are targeting genomics and gene expression analysis, and the most relevant ones for metabolomics are Cheminformatics (containing 11 packages), Lipidomics (11) , SystemsBiology (66) and, of course, Metabolomics (56) . Bioconductor workflows (organised as separate BioC View [18] ) provide well documented examples of typical analyses. For community support, BioC maintains mailing lists, a web-based support site, slack communication channels and more. Both CRAN and BioC have a well-defined process for accepting new packages, and the respective developer guidelines (see guidelines for CRAN [19] and for BioC [20] ) cover the package life-cycle from submission, updates and maintenance, to deprecation/orphaning of packages. In the case of BioC, new submissions undergo a peer review process, which also provides feedback on technical aspects and integration with the BioC landscape. A smaller number of packages are also hosted on sites like rforge.net (https://rforge.net/), r-forge.wu-wien.ac.at (http://r-forge.wu-wien.ac.at/) [21] , or sourceforge.net (https://sourceforge.net/) (SF). The non-profit initiative rOpenSci [22] maintains an ecosystem around reproducible research, including staff and community-contributed R packages with additional peer review. Currently, there are no specific metabolomics related packages. The GitHub (and also GitLab, Bitbucket) hosting services are not specific to R development, but have gained a lot of popularity due to their excellent support for participation and contribution to software projects. 
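As a practical aside on the repository infrastructure described above, CRAN packages are installed with install.packages(), while Bioconductor packages are installed via the BiocManager package, which ties installations to a specific BioC release; a minimal sketch (the package names below are merely examples taken from this review):

```r
## Minimal sketch: installing metabolomics packages from CRAN and Bioconductor.
## Package names are examples from this review; adapt them to your needs.
install.packages("BiocManager")            # CRAN helper for Bioconductor
BiocManager::install(version = "3.9")      # pin a specific BioC release (e.g., the one cited above)
BiocManager::install(c("xcms", "CAMERA"))  # Bioconductor packages (with their dependencies)
install.packages("webchem")                # a CRAN package
```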
The maintenance of BioC packages on one of the git-based sites has become easier since the BioC team migrated to git as its version control system. A downside of these generic repository hosting sites is that there is no central point of entry, and finding packages for specific tasks is difficult compared with dedicated platforms and relies on search engines and publications. Also, while these hosting services make it easier to provide packages that do not meet BioC and CRAN requirements (e.g., rinchi, due to limitations in the InChI algorithm itself), they allow users to postpone (or circumvent entirely) the review process that helps ensure the quality of BioC contributions. In addition to generic search engines like Google.com or Bing.com, rdrr.io is a comprehensive index of R packages and documentation from CRAN, Bioconductor, GitHub and R-Forge. Initially, its main purpose was to find R packages by name and to perform full-text search in package documentation, functions and R source code. More recently, it has also served as a hub to run R code without local installation (see Section 2.9). Code reuse and object inheritance can be a sign of a well-connected and interacting community. At the useR!2015 and JSM2015 conferences, A. de Vries and J. Rickert (both Microsoft, London, UK) showed an analysis of the CRAN and BioC dependency network structure [23-25]. Compared to CRAN, BioC packages had a higher connectivity: "It seems that the Bioconductor policy encourages package authors to reuse existing material and write packages that work better together". We repeated such an analysis [26] with the packages mentioned in this review and created a network of reverse dependencies (i.e., the set of packages that depend on these metabolomics related packages in BioC or CRAN). The resulting network is shown in Figure 2.
Figure 2. Network of reverse dependencies of the metabolomics-related packages covered in this review. Edges connect to packages that depend on another package, as long as they are in CRAN or BioC. Green nodes correspond to packages in CRAN or BioC not covered in the review. The inset shows the neighbourhood of the ptw package. Not shown are (1) infrastructure packages, e.g., rJava, Rcpp; (2) packages from the review without reverse dependencies; and (3) data packages. Some packages from the review are not in current versions of CRAN or BioC. An interactive version of this figure is also available online (rformassspectrometry.github.io/metaRbolomics-book, Appendix 2) and as Supplemental File S2.
This section reviews packages, relates some of those with similar functionality, and mentions how some of the packages can be used together. The sections in this review are ordered according to specific analytical approaches and the individual required steps. For all mass spectrometers, the fundamental data generated is a mass spectrum, i.e., mass-signal intensity pairs. MS-based metabolomics data is typically acquired either as a single mass spectrum or a collection of mass spectra over time, with the time axis (retention time) defined by chromatographic (or other time domain) separation.
One of the first steps in metabolomics data processing is usually the reduction of the typically large raw data produced by the instrument to a much smaller set of so-called features, which are then subjected to downstream data analysis and interpretation. Features normally represent integrated peaks for a given mass that have been aligned across samples. Establishing these features is called pre-processing. The feature detection approaches and packages applicable depend on the type and characteristics of the input data. This section describes the basic data structure for some of the common analytical approaches and shows appropriate tools in R for pre-processing such data; see Table 1 for an overview of the corresponding packages. Mass spectra can be recorded in profile (also called continuum) mode, but are often 'centroided'. Centroiding is, in effect, a process of peak detection for a profile mode mass spectrum (hence in the m/z dimension, not in a chromatographic dimension): a Gaussian region of a continuum spectrum with a sufficiently high signal-to-noise ratio is integrated to give a centroided mass (a "stick" in the mass spectrum as opposed to a continuous signal) and an integrated area under the curve. This results in data of reduced size: many m/z-intensity pairs are reduced to a single m/z-intensity pair. Practically, this reduces the file size considerably, and many data processing tools (e.g., centWave in xcms) require MS data that has been centroided. The centroiding can be done either during acquisition on the fly by the instrument software, or as an initial processing step. Post-acquisition centroiding can be performed during conversion of the vendor data format to open formats, typically using msconvert from ProteoWizard [27,28], which in some cases provides access to vendor centroiding algorithms or can alternatively use its own built-in centroiding method. Dedicated vendor tools can also be used, and the R package MSnbase also provides centroiding capabilities. Currently, one of the highest-throughput analytical approaches is direct infusion MS, where the sample is directly injected into the mass spectrometer without any chromatographic separation. This approach can be used with high mass resolution or ultra-high resolution mass spectrometers to discriminate isobaric analytes [29]. Summing or averaging these spectra generates a single mass spectrum, which is representative of that sample. Peak picking can be done using MassSpecWavelet, which applies continuous wavelet transform-based peak detection; xcms provides a wrapper for this in the findPeaks.MSW function. In the Flow Injection Analysis (FIA) approach, the sample is transiently injected into the carrier stream flowing directly into the MS instrument. In the absence of chromatographic separation, matrix effects are a challenge for the quantification, especially in complex matrices. Data from FIA coupled to high-resolution mass spectrometry can be processed with the proFIA workflow, which provides efficient and robust peak detection and quantification. Chromatographic separation before MS enables better measurement of complex samples and the ability to separate isobaric compounds. Here, the mass spectra are acquired over time as the sample components separate on the chromatography column. The mass spectrum at any given time has the same data structure as any mass spectrum: pairs of mass-to-charge ratio and intensity values.
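To illustrate the post-acquisition centroiding mentioned above, the following is a minimal sketch using MSnbase; the file names are hypothetical and the smoothing and peak-picking defaults would need adjustment for real instruments:

```r
library(MSnbase)

## Read profile-mode data (on-disk mode keeps memory usage low); file name is hypothetical.
raw <- readMSData("sample_profile.mzML", mode = "onDisk")

## Smooth the spectra and pick peaks in the m/z dimension to obtain centroided data.
cent <- pickPeaks(smooth(raw, method = "SavitzkyGolay"))

## Export the centroided data to a new mzML file.
writeMSData(cent, file = "sample_centroid.mzML", copy = TRUE)
```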
As can be inferred from the above descriptions, chromatographically coupled mass spectrometry data is three-dimensional, with dimensions of retention time, m/z, and intensity. For the pre-processing of LC-MS and GC-MS data, xcms is widely used. A recent paper reviewed some of the "xcms family" packages [30]. Many more packages build on xcms by providing tools for specialised analyses, while others improve some of the xcms processing steps, such as peak picking (xMSanalyzer, warpgroup, cosmiq). xcms itself provides several different algorithms for peak picking, such as matchedFilter [31], centWave [32] and massifquant [33]. apLCMS, yamss, KPIC2 and enviPick also provide peak picking for LC-MS data independently of xcms. In cases where the alignment of the peak data of different samples is considered (e.g., in cohort studies), xcms and apLCMS include methods to group the peaks by their m/z and retention times within tolerance levels. The groups are split into sub-groups using density functions, and a consensus m/z and retention time is assigned to each bin. In addition to the standard "spectra over time" representation of chromatographically separated MS data, there are several alternative ways to represent or simplify the data. The signal intensity for a given mass (or mass range) over chromatographic time can be represented as two equal-length vectors, with retention time and intensity as units for the values of those vectors. Examples of these vector pairs include the extracted ion chromatogram (EIC; sometimes also referred to as selected ion chromatogram, SIC, or eXtracted ion chromatogram, XIC), which represents the intensity of a given mass over (retention) time. The data thus contains no spectra, but several SICs. Frequently, such a simplification is accomplished by summarising the raw data in a two-dimensional matrix consisting of m/z and time dimensions, with each cell holding the signal intensity for that m/z and retention time range (or bin). Low-mass-resolution mass spectrometers often represent the data natively as SICs, and targeted data are also usually represented this way. Recent versions of xcms are also able to process such data, and additional xcms-based functionalities for the analysis of targeted data can be found in the packages TargetSearch and SWATHtoMRM, while analysis of isotope-labeled data can be found in the packages X13CMS, geoRge, and IsotopicLabelling. SIMAT also provides processing for targeted data and does not rely on xcms. The vast majority of data collected for metabolomics comprises three dimensions: retention time, m/z, and intensity. However, there are more complicated analytical approaches that add additional dimensionality to the data. Two-dimensional chromatography offers two separations in the chromatographic (retention time) domain. The eluent from one column is captured by retention time range and transferred to a second column, where a fast orthogonal separation occurs. When coupled to a mass spectrometer, this generates four-dimensional data (m/z, first retention time, second retention time, intensity). Ion mobility separation (IMS) is a gas-phase separation method offering resolution of ions based on molecular shape. This separation occurs on timescales of tens of milliseconds, which generates a nested data structure in which there are dozens to hundreds of mass spectra collected across the IMS separation time scale.
One can envision this as an ion mobility 'chromatogram'-however, this chromatogram is nested within the actual chromatographic separation, thus LC-IMS-MS data is also four dimensional. Most MS instruments offer the capability to perform selection (or filtering) of ions for fragmentation. The precursor selection can be performed through a quadrupole or ion trap, and fragmentation is often induced by collisions with an inert collision gas. Because this adds a level of mass spectrometry, it is called tandem MS, MS 2 or MS/MS. Ion trap instruments can further select fragment ions and acquire MS n spectra. There are several data independent MS/MS approaches, whereby MS/MS precursor selection is done, typically, on a scanning basis. These approaches perform precursor selection in a manner which does not depend on any feedback from the instrument control software or the MS level data. In practice, this precursor window can be either m/z or ion mobility-based. The processing tools within the R universe (discussed below) are so far underdeveloped for these approaches. With the increased popularity of multidimensional separation, the need for algorithms that can fully utilise the increased separation power is also increasing. Currently, osd provides peak picking for unit resolution GC×GC-MS. While the msPeak package provides peak picking for GC×GC-MS data, the peak picking is done on the total ion chromatogram, thus not taking advantage of the mass selectivity provided by the MS detector. It does not appear that any package for R exists that provides peak picking for GC×GC-MS, LC×LC-MS or LC-IMS-MS, similar to (or even better than) commercial tools (e.g., ChromaTOF, GC Image, ChromSquare). Also, at least in the case of GC×GC-MS, unit mass resolution still seems to be the most common use-case, even though high-resolution MS could further improve signal deconvolution and ultimately, analyte identification. Such capabilities are crucial for moving these new powerful analytical approaches into mainstream metabolomics analysis. The result from the pre-processing is usually a matrix of abundances, rows being features (or features grouped into compounds/molecules) and columns being the samples. Within the statistical community, it is common nowadays to manipulate data matrices with rows as observations and columns as features, this difference stems from the early days, when spreadsheet programs could only handle a limited number of columns smaller than the number of e.g., genes. Such matrices can be easily encapsulated into an ExpressionSet class from Bioconductor's Biobase package [34] , the more recent SummarizedExperiment defined in the SummarizedExperiment [35] package or the mSet class from the metabolomics focussed Metabase [36] package. The main advantage of such objects is their inherent support to align quantitative data along with related metadata (i.e., feature definitions/annotations as row-and sample annotations as column metadata). As an example, a SummarizedExperiment can be generated from xcms pre-processing results by adding the output from the featureValues function on the xcms result object as quantitative assay and the outputs of the featureDefinitions and pData functions as row and column annotations, respectively. Many Bioconductor packages for omics data analysis have native support for such objects (e.g., pcaMethods, STATegRa, ropls, biosigner, omicade4). 
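A minimal sketch of the approach described above, turning an xcms pre-processing result into a SummarizedExperiment; file names, phenotype data and parameter values are placeholders and would need tuning for real data:

```r
library(xcms)
library(SummarizedExperiment)

## Placeholder input files and sample annotations.
files <- c("sampleA.mzML", "sampleB.mzML")
pheno <- data.frame(sample_name = c("A", "B"), group = c("case", "control"))
raw   <- readMSData(files, pdata = new("NAnnotatedDataFrame", data = pheno), mode = "onDisk")

## Peak detection, retention time alignment and feature grouping (parameters are examples).
xdata <- findChromPeaks(raw, param = CentWaveParam(ppm = 25, peakwidth = c(5, 30)))
xdata <- adjustRtime(xdata, param = ObiwarpParam())
xdata <- groupChromPeaks(xdata, param = PeakDensityParam(sampleGroups = pheno$group))

## Assemble the SummarizedExperiment as described in the text.
se <- SummarizedExperiment(
    assays  = list(abundance = featureValues(xdata, value = "into")),
    rowData = featureDefinitions(xdata),  # feature (row) annotations
    colData = pData(xdata)                # sample (column) annotations
)
```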
For the downstream export of mass spectrometry data from metabolomics or lipidomics experiments, the package rmzTab-M provides support for exporting quantitative and identification results backed by analytical and mass spectrometric evidence into the mzTab-M metabolomics file format [37].
Table 1. R packages for data pre-processing. Among others, the table covers xcms extensions and other tools for isotope-labelled data (detection of enriched metabolites, analysis of isotopic patterns, estimation of 2H/13C abundances and the dual-labelling DLEMMA approach described in Liron [50]), targeted (SIM) GC-MS processing with reference library matching, RT/RI conversion and guided fragment selection, unsupervised clustering of GC-MS spectra searchable in NIST and ARISTO [58], GC×GC-MS chemometrics and threshold-free alignment tools, and pre-processing of data from Flow Injection Analysis coupled to High-Resolution Mass Spectrometry (FIA-HRMS); see the original table for the individual packages, references and repositories.
In MS-based metabolomics, the characterisation and identification of metabolites involves several steps and approaches. After peak (feature) table generation, several tools can be used for grouping features that are postulated to originate from the same molecule. These include the widely used CAMERA for MS1 data, as well as RAMClustR (particularly for DIA data), MetTailor, nontarget, CliqueMS and peakANOVA. Packages that support interpretation of the relationship between the ion species, including adducts, isotopes and in-source fragmentation, are InterpretMSSpectrum, CAMERA, nontarget and mzMatch [82]. See Table 2 for a summary of these packages. Detailed reconstructed isotope patterns can be used to determine the molecular formula of potential candidates. In the case of molecular formula and isotope analysis, the m/z and intensities for a given (set of) features can be used to calculate a ranked list of possible molecular formulas, based on the accurate mass and relative isotope abundances. The Rdisop, GenFormR and enviPat packages are able to simulate and decompose isotopic patterns into molecular formula candidates. Some post-processing can calculate, e.g., the double bond equivalents (DBE) and similar characteristics to reduce the number of false positive assignments. Another additional source of information to improve molecular formula estimation is to include MS/MS spectra, as used in MFAssignR, InterpretMSSpectrum or GenFormR. A typical next step is the annotation of m/z with putative metabolites using accurate mass lookup, or if the molecular formula was calculated, lookup of the formula in metabolite databases. It has to be noted that annotation with accurate mass search is by no means equivalent to identification.
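As a small illustration of the molecular formula generation discussed above, Rdisop can decompose an accurate mass into candidate formulae; the mass value, tolerance and element set below are arbitrary examples:

```r
library(Rdisop)

## Decompose an accurate neutral mass into candidate molecular formulae,
## restricting the allowed elements and mass tolerance (values are arbitrary).
cand <- decomposeMass(180.0634, ppm = 5, elements = initializeCHNOPS())

cand$formula    # ranked candidate formulae (C6H12O6 should be among them)
cand$exactmass  # theoretical exact masses of the candidates
cand$DBE        # double bond equivalents, useful to filter false positives
```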
Under the assumption that all the metabolites measured in a sample have some biochemical relation, a global annotation strategy as used in ProbMetab can help as well. Here, the individual ranked lists of formulae are re-evaluated to also maximise the number of (potential) biochemical substrate-product pairs. The masstrixR package contains several utility functions for accurate mass lookup. This enables matching of measured m/z values against a given database or library and can additionally perform matching based on retention times (RT) and/or collisional cross sections (CCS) if available.
Table 2. R packages for ion species grouping, annotation, molecular formula generation and accurate mass lookup (entries include, among others, a package that uses GenForm for molecular formula generation based on mass accuracy, isotope and/or MS/MS fragment information and for MS/MS subformula annotation); see the original table for the individual packages, references and repositories.
The annotation of features from MS1 experiments alone has limited specificity. Additional structural information for metabolite identification is available from tandem MS and higher-order MSn experiments. There are different approaches, ranging from targeted MS/MS experiments and DDA to DIA (e.g., MSE, all-ion, broad-band CID, SWATH and other vendor terms). Table 3 provides a summarised overview of R packages for these types of experiments. Generation of high-quality MS/MS spectral libraries and MS/MS data can be a tedious task. It involves wet lab steps of preparing solutions of reference standards as well as creating MS machine-specific acquisition methods. Several steps can be automated using the different R packages presented here. One possibility is targeted MS/MS, where the instrument isolates specific masses (specified via method files) and fragments them. Manually writing targeted MS/MS methods from metabolomics data can be tedious if several tens to hundreds of ions need to be fragmented. The MetShot package supports creating targeted method files for some Bruker and Waters instruments. For all other vendors, optimised lists of non-overlapping peaks (RT-m/z pairs) can be generated to optimise acquisition in the lowest possible number of methods. In data-dependent acquisition (DDA), the instrument is configured to apply a set of rules, which determine which precursor ions are fragmented and for which MS/MS spectra are acquired. DDA approaches also produce a lot of spectra for background peaks or contaminants, which are often of limited use for the purpose of metabolomics studies. Using the RMassBank package, MS1 and MS/MS data can be recalibrated and the spectra cleaned of artifacts. After database lookup of corresponding identifiers, MassBank records are generated. In data-independent acquisition mode (DIA), the isolation windows are broader, or in some cases, all ions are fragmented; e.g., the Weizmass library [107] is based on MSE. The computational challenge for DIA data is to deconvolute the MS/MS data and assign the correct precursor ion. DIA data analysis support is currently being implemented in several R packages. MS/MS spectra can be further processed, for example, by selecting a representative MS/MS spectrum among all spectra associated with a chromatographic peak or by fusing them into a consensus spectrum. Subsequently, spectra can be used in downstream analyses such as spectral matching or clustering.
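A minimal sketch of such spectral matching with MSnbase, using the normalised dot product between two toy MS/MS spectra (all m/z, intensity and precursor values below are made up):

```r
library(MSnbase)

## Two toy MS/MS spectra; m/z, intensity and precursor values are made up.
sp1 <- new("Spectrum2", mz = c(95.05, 123.04, 151.04, 179.03),
           intensity = c(10, 30, 100, 45), precursorMz = 179.03)
sp2 <- new("Spectrum2", mz = c(95.06, 123.05, 151.04),
           intensity = c(12, 40, 90), precursorMz = 179.03)

## Similarity based on the normalised dot product (spectra are binned internally);
## fun = "common" or fun = "cor", or a user-supplied function, are alternatives.
compareSpectra(sp1, sp2, fun = "dotproduct")
```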
Due to the re-use of infrastructure from the MSnbase package, xcms has recently gained native support for MS/MS data handling and hence allows the extraction of all MS/MS spectra associated with a feature or chromatographic peak for further processing. While DDA and DIA are convenient methods, users might miss the accuracy and full control over what is fragmented in the targeted approach. The packages rcdk, MetShot and RMassBank can be combined into a workflow (see [108]) for the generation of records to be uploaded to MS/MS spectral databases (e.g., MassBank [109]) or to be used off-line. MetShot allows the user to specify an arbitrary number of RT-m/z pairs and first sorts them into non-overlapping subsets, for which, in a second step, MS/MS methods (Bruker) or target lists (Agilent, Waters) are generated. Multiple collision energies can be acquired within a single method or in separate methods. In this workflow, rcdk was used for the calculation of exact masses of adducts. MS/MS data were then acquired on a Bruker maXis plus UHR-Q-ToF-MS. After data collection, each run was manually checked for data quality and processed with RMassBank.
Table 3. R packages for targeted MS/MS, DDA and DIA experiments, covering, among others, processing of raw data to database-ready cleaned spectra with metadata, generation of MS/MS experiment files from RT-m/z pairs (Bruker, Agilent and Waters supported), creation of MS libraries from LC-MS data using xcms/CAMERA with X-Rank spectral scoring, assessment of "precursor purity" for LC-MS(/MS) and DI-MS(/MS) data, spectral matching against SQLite libraries, MetFrag-based annotation with substructure prediction from isotopic patterns, and retention time prediction from structure descriptors; see the original table for the individual packages, references and repositories.
Spectral matching of measured MS/MS data with spectral libraries is an important step in metabolite identification. Different possibilities for matching two spectra exist, ranging from simple cosine similarity and the normalised dot product to X-Rank and proprietary algorithms. In MSnbase, different spectra can be compared. Functions for comparison include the number of common peaks, their correlation and their dot product; alternatively, a custom comparison function can be supplied. In addition, it will be possible to import spectra from different file formats such as NIST msp, mgf and Bruker libraries into MSnbase objects using the MSnio package. MSnbase therefore seems to be the most flexible R package for the computation of spectral similarities. Spectra are binned before comparison. The OrgMassSpecR package implements a simple cosine spectral matching between two spectra. The two spectra are aligned with each other within a defined m/z error window using one spectrum as the reference. The feature-rich compMS2Miner can import msp files and uses the dot product to calculate the spectral similarity, the msPurity package can perform spectral matching using different similarity functions, and MatchWeiz implements the probabilistic X-Rank algorithm [118]. A growing number of packages, e.g., LOBSTAHS [99], LipidMatch [100] and LipidMS [101], support the annotation of lipids; see Table 2.
They use a combination of lipid database lookup, spectral or selected fragment mass matching and in silico spectra prediction. To improve disambiguation between lipids of the same species that may only differ in their fatty acid chain composition, they usually rely on identifying specific MS/MS feature masses that are indicative of substructure fragments, such as the lipid headgroup, the headgroup with a certain fatty acid attached, or losses of fatty acid(s), and other modifications, such as oxidation. Additionally, they require certain intensity ratios between characteristic fragments of a lipid in order to identify the lipid species or subspecies. NIST msp files and derived msp-like dialects are a commonly used plain text format for the representation of mass spectra. The msp format is described by NIST as part of their Library Conversion Tool [119] documentation, but has many different dialects due to rather loose format definitions. R packages that support the import and export of this file format are able to both use spectral libraries for identification, as well as to create and enrich spectral libraries with new data. There are various R packages that support the import of NIST msp files (see Table 3 ), but the support of different dialects varies, e.g., the NIST-like spectral libraries from RIKEN PRIME [120] cannot be parsed by some readers. In addition, none of these packages currently supports the import of additional attributes such as 'InChIKey: ' or 'Collision_energy: ' as used in the export of MoNA libraries [121] . In essence, most of the packages support the format shown in Listing S1 (see Supplemental File S1, 'basic NIST' in Table S1 ). The metaMS package supports NIST msp files as shown in Listing S2 (see Supplemental File S1, termed 'canonical NIST') and RIKEN PRIME provides a similar format with different attributes as shown in Listing S3 (see Supplemental File S1). The packages metaMS, OrgMassSpecR, enviGCMS, and TargetSearch support the export of NIST msp files. The remaining packages partially support the export of results to NIST msp files (see Table S1 ). One of the most flexible packages for the handling of NIST msp files is metaMS. This package imports and exports the most attributes, although it does not entirely support generic attributes, and the export is very slow (we observed 20 min for an 8 MB file). In addition, a good library reader should also support mgf (mascot generic format) as available for download from GNPS [122] as well as other common formats such as the MassBank record format and different vendor library formats such as Bruker (.library, another msp flavour) and Agilent (.cef). NMR is another analytical technique commonly used in metabolomics research. The pre-processing steps for NMR data normally include Fourier transformation, apodisation, zero filling, phase and baseline correction, and finally referencing and alignment of spectra. Other steps commonly used are removing the areas without any metabolites such as the water region (from 4.7 to 4.9 ppm), as they generally contain no useful information. There are several R packages that can carry out the above tasks (see Table 4 ). The PepsNMR and speaq are two examples of such R-based packages. The 1D NMR spectra can then be segmented into spectral regions (also known as bins or buckets) subjected directly to statistical data analysis after a normalisation step. 
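The bucketing/binning step mentioned above can be sketched in a few lines of base R; this is only a schematic fixed-width binning (the bin width is arbitrary, and the chemical shift vector and intensity matrix are assumed to exist), whereas dedicated packages also offer the adaptive or intelligent binning discussed next:

```r
## Schematic fixed-width binning of 1D NMR spectra (base R only).
## 'ppm' is a vector of chemical shifts and 'spectra' a samples x points
## intensity matrix; both are assumed to exist, and the bin width is arbitrary.
bin_width <- 0.04                                   # ppm per bucket
breaks    <- seq(min(ppm), max(ppm) + bin_width, by = bin_width)
buckets   <- cut(ppm, breaks, include.lowest = TRUE)

## Sum the intensities of all data points falling into each bucket, per sample.
binned <- t(apply(spectra, 1, function(x) tapply(x, buckets, sum)))
dim(binned)   # samples x buckets feature matrix ready for statistics
```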
The size of the bins can be fixed or variable (adaptive or intelligent binning) based on NMR peaks, or even each data point from each peak can be used for data analysis (full data point resolution). The NMRProcFlow [123] package provides a graphical and interactive interface for 1D NMR spectral processing and analysis. Additionally, it provides various spectral alignment methods with the ability to use the corresponding experimental-factor levels in a visual and interactive environment, bridging the gap between experimental design and subsequent statistical analyses. Alternatively, peak picking (based on regions of interest, ROI) can be performed and individual compounds can be identified and integrated prior to statistical analysis. Targeted profiling aims to identify and quantify specific compounds in a sample. The packages that use such a ROI-based approach are rDolphin and rNMR. The bucketed/integrated spectra are normalised to minimise the biological and technical variation. The most common methods are normalisation to a constant sum (e.g., the total sum of integral/bin intensities), probabilistic quotient normalisation [124] and normalisation to tissue dry weight or protein content. NMR metabolite annotation uses either chemical shift and multiplicity matching against an existing database, such as the Human Metabolome Database (HMDB) [125-128], a search of the experimental literature, or simulated reference library compounds [129] to match or fit the measured biological spectra. 1D NMR data are often not sufficient for a confident assignment of the metabolite peaks [130]; therefore, complementary 2D spectral data are often required to confirm the assignment [131]. The only package that explicitly deals with 2D NMR is rNMR, which takes a targeted approach in which the user defines regions of interest to be quantified and compared. DOLPHIN, originally written in MATLAB [132], uses both 1D and 2D NMR data for targeted profiling and is also available as an R version called rDolphin. We are not aware of other R packages that handle 2D NMR data processing. Several general multiway statistical tools such as PARAFAC [133], Tucker3 [134] and MCR have been described [135] that are able to analyse 1D and 2D NMR data; see the section on statistical analysis for a list of packages available for these techniques. BATMAN uses a Bayesian model and template information such as chemical shifts, J-couplings, multiplicities and intensity ratios derived from spectral databases to automatically quantify metabolites in a targeted manner [136].
Table 4. R packages for NMR data handling, (pre-)processing and analysis, covering, among others, shiny-based interactive 1D NMR processing and visualisation (NMRProcFlow/Rnmr1D, including baseline correction, ppm calibration, removal of solvents and contaminants and re-alignment of chemical shifts), 1H NMR processing with apodisation, Fourier transformation, baseline and phase correction, warping and bucketing with direct import of Bruker FIDs, ROI-based analysis of 1D and 2D spectra with export to MMCD or upload to BMRB, and interactive Statistical TOtal Correlation SpectroscopY (STOCSY) analyses; see the original table for the individual packages, references and repositories.
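Probabilistic quotient normalisation [124], mentioned above, is likewise easy to express in base R; a minimal sketch, assuming a samples x bins matrix binned (for example from the binning sketch above), after an initial total-sum normalisation:

```r
## Probabilistic quotient normalisation (PQN), schematic base R implementation.
## 'binned' is a samples x bins intensity matrix (assumed to exist).
X   <- binned / rowSums(binned)               # constant (total) sum normalisation
ref <- apply(X, 2, median)                    # reference spectrum = median spectrum
Q   <- sweep(X, 2, ref, "/")                  # quotients of each sample vs. the reference
pqn <- X / apply(Q, 1, median, na.rm = TRUE)  # divide each sample by its median quotient
```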
Another, in metabolomics sometimes under-appreciated, analytical approach is UV absorption detection, usually coupled with an HPLC or UHPLC system. In some cases, the photo-diode array detector (DAD or PDA) is part of an LC-MS system, making it effectively an LC-UV-MS setup. There are other detectors (e.g., fluorescence) with a different measurement principle, but similar characteristics when it comes to the acquired data. Alignment and baseline correction are typically the first steps of pre-processing LC-UV data. Alignment can be achieved, for example, with the alsace or the ptw package, while baseline correction can be performed using the hyperSpec, ChemoSpec, mdatools or baseline packages. The alsace package provides an alternative to using all channels (wavelengths) by first finding unique components (i.e., "pure" spectra) and then performing peak picking in these components. After alignment, general multiway statistical methods like PARAFAC, simultaneous component analysis (SCA), and Tucker Factor Analysis can be applied in the same manner as feature tables would be handled. Table 5 provides an overview of the available R packages for UV data. Following the feature detection and grouping steps outlined in the sections above, different paths to statistical analysis are available in R and Bioconductor. Once the "sample versus variable" feature matrix of molecule intensities or abundances has been generated, comprehensive statistical analyses can be performed using the vast range of packages provided by the R statistical software and the Bioconductor project (see Table 6); see, for instance, the StatisticalMethod biocView [146] and the ExperimentalDesign [147], Cluster [148], Multivariate [149] and MachineLearning [150] CRAN Task Views [151]. As mentioned in the introduction, we will only cover common statistical approaches used in metabolomics. Areas such as time-series analysis, clustering methods, machine learning and visualisation of high-dimensional data have been dealt with in various books and literature reviews [152-160]. With regard to statistical analyses in untargeted metabolomics, two strategies can be differentiated that necessitate the use of different methods. The first strategy, "metabolite profiling", is used by most untargeted metabolomics studies. Here, a bottom-up approach is taken in which sets or classes of pre-defined metabolites are studied, usually in different phenotypes of the same biological species, and differences in metabolites are usually related to coarser functional or biological levels (e.g., to phenotype, or to control vs. treatment in biomedical studies) [161]. Exploratory data analysis, univariate methods, hierarchical cluster analysis (HCA), Principal Component Analysis (PCA) and Multi-Dimensional Scaling (MDS)-like methods are very common in metabolite profiling approaches. Feature/variable selection is performed to find only the most significant metabolite candidates that explain the underlying research question, usually using univariate methods to target only specific metabolites that are interesting for the research question of the study [162-165]. The second strategy, "metabolite fingerprinting", is commonly used in biomedicine, environmental metabolomics and eco-metabolomics to find metabolite patterns across metabolite profiles.
Here, metabolites are characterised without necessarily identifying them, and characterisation usually occurs from spatiotemporally coarser scales to intrinsic scales within biological species [166] . Multivariate statistical methods are used that require reduction of high-dimensional data and, thus, ordination methods are commonly applied like (Orthogonal) Partial Least Squares regression (sometimes also coupled to Discriminant Analysis) ((O)PLS(-DA)), (Linear) Discriminant Analysis ((L)DA), and (Canonical) Correspondence Analysis ((C)CA) that make it possible to relate sets of explanatory variables containing species traits or environmental properties (such as soil type, plant height, smoker/non-smoker, gender, etc.) to the metabolite feature matrix [157, 167, 168] . Other machine learning methods like Random Forests (RF), Support Vector Machines (SVM) and Neural Networks (NN or ANN) are also applicable [169] . Lately, untargeted metabolomics data is related to other 'omics using network analysis or Procrustes analysis to visualise (dis)similarities between two or more 'omics data sets [170] [171] [172] [173] . Extracting a restricted list of features, which still provides a high prediction performance (i.e., a molecular signature), is critical for biomarker validation and clinical diagnostic. Several strategies have been described for feature selection [174, 175] (e.g., wrapper approaches such as Recursive Feature Elimination, Genetic Algorithms, or sparse models such as Lasso, Elastic Net, or sparse PLS). Such techniques are implemented in R packages, which also provide detailed comparisons on real datasets in terms of the stability and the size of the selected signature, the prediction performance of the final model, and the computation time [176] [177] [178] [179] . A great number of packages are available for performing statistics on metabolomics datasets. Some of them focus on performing several specific tasks, such as sample size estimation, batch normalisation, exploratory data analysis, univariate hypothesis testing, multivariate modeling and omics data integration. Others, listed in the section 'Multiple workflow steps' in Table 6 , adopt a more comprehensive approach, providing statistics toolbox that cover different methods and functionalities. muma is a package designed to be compatible with MS and NMR generated data. The package mainly focuses on performing statistics. It does not contain functions for data extraction and the user has to provide values arranged in a data.frame format. The pre-processing is limited to missing value imputation, noise filtering, variable scaling and normalisation. The package also provides tools for outlier detection, univariate and multivariate analysis. Notably, the package offers a script for Statistical TOtal Correlation SpetroscopY (STOCSY) on NMR data. MOFA proposes tools for the integration of data coming from different omics disciplines (multi-omics). Using factor analysis, it makes it possible to calculate hidden factors that capture the biological sample variation across multi-omics datasets, thus allowing marker discovery. MOFA also provides various tools for the visualisation of results. IntLIM also supports integration of other omics datasets with metabolomics data by leveraging linear modeling to identify gene-metabolite pairs whose relationship differs from one phenotype to another (e.g., positive correlation in one phenotype, negative or no correlation in another). 
IntLIM includes a user-friendly web interface to perform quality control of the input data, identification of phenotype-dependent gene-metabolite pairs, and interactive visualisation of results. This tool is particularly useful for integrating transcriptomic and metabolomic (or other omics) data by generating novel hypotheses in a data-driven manner. MetaboDiff is presented as an entry-level, user-friendly package for differential metabolomics analysis. The information contained in the input data (metabolomics measurements and metadata) is stored in S4 objects, which are used for the downstream processing. The pre-processing consists of missing value imputation, outlier removal and data normalisation, while the data analysis part offers a variety of statistical methods, including tools to explore how metabolites relate to each other in sub-pathways. MetaboAnalystR is a toolbox built on top of several R packages and contains more than 500 functions organised in eleven modules. The package was created to overcome limitations of the homonymous web application, adding, for example, the possibility of creating flexible customised workflows (including xcms interoperability) and the capacity to deal with large data sets. The MetaboAnalystR functionalities cover a wide range of tools: exploratory statistical analysis, biomarker analysis, power analysis, biomarker meta-analysis, functional enrichment analysis, and pathway and joint pathway analysis. Through an implementation of the mummichog algorithm [180], MetaboAnalystR also allows pathways to be inferred from user-generated m/z peak lists. Using the MetaboAnalyst knowledgebase, MetaboAnalystR provides access to metabolite set libraries, compound libraries and pathway libraries.
Table 6. R packages for statistics on metabolomics data. The table covers, among others, signal and batch correction for mass spectrometry, chemometric analysis of NMR, IR or Raman spectroscopy data (ChemoSpec), joint analysis of MS and MS/MS data to annotate and discover regulated metabolite families, PCA methods (pcaMethods [195]), false discovery rate estimation (fdrtool [196]) and multiple testing corrections (multtest [198]), GUIs for linear mixed model normalisation and ANOVA, estimation of the metabolome-wide significance level via permutation, identification of phenotype-dependent analyte-analyte pairs (IntLIM, also accessible at intlim.bmi.osumc.edu), multivariate frameworks (PCA, CCA, (sparse) PLS(-DA)), multi-omics container classes and integration methods (multiple co-inertia analysis (MCIA), STATegRa with omicsNPC, the STRUCT class templates, PLS-based integration, and community/network-based association analysis, also available as a shiny app at https://kuppal.shinyapps.io/xmwas), joint modelling of metabolomics measurements and microbial community composition, and missing value imputation; see the original table for the individual packages, references and repositories.
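Many of the statistical steps described in this section can be prototyped with base R alone before turning to the dedicated packages above (e.g., pcaMethods or ropls for more elaborate PCA/PLS models); a minimal sketch, assuming a samples x features abundance matrix X and a two-level factor group:

```r
## Schematic uni- and multivariate analysis of a feature matrix (base R only).
## 'X' (samples x features) and 'group' (two-level factor) are assumed to exist.
Xs <- scale(log2(X + 1))          # log-transform and autoscale the features

pca <- prcomp(Xs)                 # PCA; inspect the scores via pca$x
hc  <- hclust(dist(Xs))           # hierarchical cluster analysis (HCA) of the samples

## Feature-wise t-tests with Benjamini-Hochberg correction for multiple testing.
pvals <- apply(Xs, 2, function(f) t.test(f ~ group)$p.value)
padj  <- p.adjust(pvals, method = "BH")
head(sort(padj))                  # most significant features
```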
There are several packages that can deal with cheminformatics tasks, property calculations, metabolite lookup in (web) databases, mapping between databases or structure format conversions (see Table 7). A well-established package is rcdk, which provides a comprehensive subset of functions from the Chemistry Development Kit [239]. rcdk provides a computer-readable representation of molecular structures and a wealth of functions to import structures from different structure description formats, manipulate structures, visualise structures and calculate properties and molecular fingerprints. The package fingerprint can then be used to compare fingerprints. rinchi provides reading and writing of InChI and InChIKeys [240]. ChemmineR is an alternative to rcdk, providing many similar functions, with more tools for fingerprints, clustering and others through querying the ChemMine Tools web service [241]. ChemmineR also has significantly faster parsing of SDF files, which can be an advantage when reading large databases. A large number of additional descriptors are available in the package camb, which focuses on quantitative predictive models. ChemmineOB provides conversion between a large number of chemical structure formats using OpenBabel [242]. A notable exception is InChI/InChIKey, which is not directly supported by ChemmineOB or ChemmineR; one would thus have to go through rinchi and rcdk for offline import from InChI to ChemmineR or ChemmineOB. RChemMass is a package that combines the functionality of rcdk with that of RMassBank and enviPat. The package RRDKit makes (part of) the functionality of the RDKit [243] toolkit available from within R. Several existing compound databases are useful for metabolomics. These can supply metadata such as common names and synonyms, database identifiers and experimental or predicted properties. The Rpubchem package provides lookup of information available in PubChem [244,245], while the webchem package provides queries to a large number of databases including PubChem, ChemSpider [246], Wikidata [247], the Chemical Translation Service [248], PHYSPROP [249], the Chemical Identifier Resolver [250] and others. BridgeDbR can be used to map identifiers (metabolites, but also genes and proteins, and interactions) between databases, e.g., PubChem to ChemSpider identifiers; RMassBank and RChemMass also provide some useful web-retrieval functions.
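A minimal sketch of the cheminformatics functionality described above, using rcdk and fingerprint to parse structures, compute an exact mass and compare two molecules by Tanimoto similarity (the SMILES strings are arbitrary, non-stereo examples):

```r
library(rcdk)
library(fingerprint)

## Parse two example structures from SMILES (arbitrary, non-stereo examples).
mols <- parse.smiles(c(glucose  = "OCC1OC(O)C(O)C(O)C1O",
                       fructose = "OCC1(O)OCC(O)C(O)C1O"))

get.exact.mass(mols[[1]])          # monoisotopic mass calculated from the structure

## Molecular fingerprints and Tanimoto similarity between the two structures.
fps <- lapply(mols, get.fingerprint, type = "extended")
distance(fps[[1]], fps[[2]], method = "tanimoto")
```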
The analysis of identified compounds at the level of substance classes can give biochemical insights that are not obvious from the individual structures, and it remains applicable when structures are not fully elucidated. The web tool ClassyFire is able to annotate a given structure with compound classes from its ChemOnt taxonomy as well as with different substituents [251]. The classyfireR package supports the retrieval of substance classes via the RESTful API of the ClassyFire tool based on InChIKeys. The R environment offers packages to analyse networks of metabolomics data and metabolic pathways (see Table 8). Within this section, we refer to a 'pathway' as a linked series of chemical reactions between molecules, mediated by enzymes, that leads to a product or a change in a cell. These molecules are also known as metabolites, and the transformations occur in the same cellular compartment or in close vicinity. The term 'network' refers to the set of metabolites that are connected biologically, chemically, structurally (e.g., by similarity between the MS/MS spectra of two metabolites), functionally, or by any other measure (e.g., statistical correlation). The R environment offers a general infrastructure for network analysis. The functionality is implemented in a plethora of software packages, among them igraph, tidygraph and the statnet suite. These packages offer functions to generate networks from respective data input (e.g., adjacency matrices), to analyse networks, to calculate network properties and to visualise networks. Generally, any kind of metabolomics data that can be converted into a format interpretable by one of these packages can be analysed with generic network analysis tools. For example, MSnbase offers functionality to calculate similarity scores between MS/MS spectra, which can be readily interpreted as a spectral similarity network (see [257] for the pioneering work on mass spectral molecular networking for biological systems). Such networks can be analysed by the functions provided by the above-mentioned packages or by packages tailored more towards the analysis of biological data (e.g., RedeR). Of particular interest for metabolomics applications is DiffCorr, an R package to compare correlation networks from two different experimental conditions, which builds on an association measure such as Pearson's correlation coefficient to identify distinctive properties. DiffCorr enables testing of differential correlation in high-dimensional data sets by identifying the first principal component-based 'eigen-molecules' in the correlation networks. DiffCorr then tests these differential correlation values based on Fisher's z-transformation to identify discriminating metabolite pairs that respond differently to the conditions. Another R package, more tailored towards the analysis of metabolomics data, is BioNetStat, which creates correlation-based networks from metabolite concentration data and analyses the networks based on graph spectra (the set of eigenvalues of an adjacency matrix), spectral entropy, degree distribution and node centralities. BioNetStat also allows for KEGG pathway visualisation of metabolite data. As mentioned above in Section 2.2, a major challenge in metabolomics is metabolite annotation, spanning the annotation of known compounds (dereplication) and the annotation of unknown metabolites with hypotheses about their structures. Network and pathway analysis can be employed to putatively annotate metabolites in metabolomics data sets.
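To illustrate how such generic network tooling can be applied to metabolomics data, the sketch below builds a simple correlation-based metabolite network with igraph from a simulated intensity matrix; the object names, the correlation threshold and the data are purely illustrative and are not taken from any of the packages above.

```r
# Minimal sketch: build a correlation-based metabolite network with igraph
# (simulated intensity matrix; the 0.5 threshold is purely illustrative).
library(igraph)

set.seed(42)
intensities <- matrix(rnorm(50 * 30), nrow = 50,
                      dimnames = list(paste0("met", 1:50),
                                      paste0("sample", 1:30)))

# Pearson correlations between metabolites (rows), thresholded to an adjacency matrix
cors <- cor(t(intensities))
adj  <- 1 * (abs(cors) > 0.5)
diag(adj) <- 0

g <- graph_from_adjacency_matrix(adj, mode = "undirected")

# basic network properties provided by generic graph tooling
degree(g)[1:5]
count_components(g)
# plot(g)  # visual inspection
```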
The Bioconductor package MetNet aims at facilitating the detection and putative annotation of unknown MS1 features in untargeted metabolomics studies. MetNet combines statistical associations between intensity values across samples with structural information (matching of mass differences between features against a list of enzymatic transformations, with retention time adjustment) to infer metabolic networks and to guide the annotation of, in particular, specialised metabolites from plant, fungal or bacterial samples. Another package for improving annotation is xMSAnnotator, which incorporates a multi-criteria scoring algorithm to annotate mass features at different confidence levels. xMSAnnotator uses coelution, pathway-level correlations, intensity correlations and the databases KEGG [258-260], HMDB, the Toxin and Toxin Target Database (T3DB) [261,262], LipidMaps [263] and ChemSpider [246] for annotation, and it incorporates several filter steps, e.g., by defining modules of co-expressed m/z features using WGCNA and a topological overlap-based dissimilarity matrix, thereby categorising related metabolites into the same network modules. Molecular networking starting from MS/MS data can enhance the annotation of metabolites. MetDNA, implemented in R, JavaScript and Python (available via a web interface at http://metdna.zhulab.cn), combines MS1 and MS/MS data to putatively annotate features in metabolomics data sets [264]. MetDNA uses a metabolic reaction network-based recursive algorithm for metabolite annotation, employing spectral matching of MS/MS spectra in an automatic fashion. Iterative similarity matching between reaction pairs (a substrate metabolite and its product metabolite typically display similar chemical structures) allows the annotation to be expanded from seed metabolites or previously annotated metabolites. MetCirc, designed for the annotation of MS/MS features in untargeted metabolomics data, visualises the spectral similarity matrix (e.g., the normalised dot product) between MS/MS spectra in a Circos-like interactive shiny application. Within the shiny application, similarity scores can be thresholded, and MS/MS spectra can be interactively explored and annotated based on expert knowledge, given the similarity score and the displayed spectral features. MetCirc relies on the MSnbase framework to store MS/MS spectral data and to calculate similarities between spectra. Similarly, CluMSID employs spectral similarity matching to guide the annotation of MS/MS spectra and incorporates functionality to calculate correlation networks and to perform hierarchical and density-based clustering. compMS2Miner is another R package for MS/MS feature annotation and offers functionality for noise filtering, MS/MS substructure annotation, calculation of correlation- and spectral similarity-based networks, and interactive visualisation. Several R packages implement functionality to generate metabolic networks. These networks can subsequently be analysed by their topological properties, used to identify motifs that differ between experimental conditions, or queried to find associations between metabolic features. MetaMapR generates metabolic networks by integrating enzymatic transformations, structural similarity between metabolites, mass spectral similarity and empirical correlation information. Hereby, MetaMapR queries biochemical reactions in KEGG and molecular fingerprints for structural similarities in PubChem.
Furthermore, MetaMapR aims at incorporating metabolites with unknown biochemistry and unknown structures, and integrates other data sources (genomic, proteomic, clinical data). The package Metabox offers a pipeline for metabolomics data analysis, including functionality for data-driven network construction using correlation and for the estimation of chemical structure similarity networks using substructure fingerprints. Its statistical analysis highlights metabolites that are altered between the groups of the experimental design, which can be further interrogated by network and pathway analysis tools. Furthermore, the package MetabNet includes functionality to perform targeted metabolome-wide association studies (MWAS) and to guide the association of unknowns to a specific metabolic pathway, followed by mapping a target metabolite onto the metabolic network structure. Several R packages enable pathway analysis that uses quantitative data of metabolites and maps these to biological pathways. The Bioconductor package pwOmics analyses proteomics, transcriptomics and other omics data in combination to highlight molecular mechanisms for single-point and time-series experiments. In downstream analyses, pwOmics allows for pathway, transcription factor and target gene identification. Another commonly executed analysis is enrichment analysis, which identifies pathways that are up- or downregulated under a given experimental condition. The R environment offers a whole range of enrichment analysis packages (e.g., tmod for metabolite data). Targeted more towards pathway analysis, FELLA is a Bioconductor package for enrichment analysis. FELLA detects discriminative metabolic features, maps these to known biological pathways in the KEGG database and detects enriched terms using a diffusion algorithm. CePa offers enrichment analysis tools extending conventional gene set enrichment methods by incorporating pathway topologies. CePa takes nodes rather than terms for analysis and uses network centralities as node weights, incorporating pathways from the Pathway Interaction Database (PID) [265], including NCI/Nature Pathway Interaction, BioCarta [266], Reactome [267] and KEGG [258-260]. MetaboDiff offers functionality to pinpoint metabolome-wide differences using PCA and t-distributed stochastic neighbour embedding (tSNE), building on the MultiAssayExperiment S4 class. Using t-tests or ANOVA, MetaboDiff identifies metabolites that differ in abundance between groups, and it uses WGCNA to identify modules/sub-pathways that indicate changes in biological pathways. SDAMS (semi-parametric differential abundance analysis for proteomics and metabolomics data from mass spectrometry), built upon the SummarizedExperiment S4 class, performs differential abundance analysis on metabolomics data by linking metabolite levels, which may contain zeros and possibly non-normally distributed non-zero intensity values, to phenotypic data. Many R packages guide the discovery of biomarkers for specific phenotypes. Among these is lilikoi, which maps features to pathways using standardised HMDB IDs and transforms metabolomic profiles into pathway-based profiles using pathway deregulation scores, a measure of how much a sample deviates from the normal level, followed by feature selection, classification and prediction.
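Several of the packages above automate univariate testing with multiple-testing correction. The following minimal sketch, using only base R and simulated data, shows the underlying pattern of per-metabolite testing followed by Benjamini-Hochberg false discovery rate control; it is a generic illustration, not the implementation used by any specific package.

```r
# Minimal, generic sketch of per-metabolite univariate testing with
# Benjamini-Hochberg FDR control (simulated data; object names are illustrative).
set.seed(1)
n_met <- 200
X <- matrix(rnorm(n_met * 20), nrow = n_met,
            dimnames = list(paste0("met", seq_len(n_met)), NULL))  # metabolites x samples
grp <- factor(rep(c("control", "case"), each = 10))

# Welch t-test for each metabolite (row)
pvals <- apply(X, 1, function(x) t.test(x ~ grp)$p.value)

# multiple-testing correction and selection of candidate features
padj <- p.adjust(pvals, method = "BH")
candidates <- names(padj)[padj < 0.05]
head(sort(padj))
```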
INDEED (INtegrated DiffErential Expression and Differential network analysis) aims to detect biomarkers by performing a differential expression analysis, which is combined with a differential network analysis based on partial correlation and followed by a network topology analysis. Subsequently, activity scores are calculated based on the differences detected in the differential expression analysis and in the topology of the differential network, which guide the selection of biomarkers. Another R package for biomarker and feature selection is MoDentify, which finds modules (groups of correlating molecules that can span from a few metabolites to entire pathways) regulated with respect to a given phenotype. These groups are possibly functionally coordinated, co-regulated or driven by the same or a similar biological process. Modules are identified by score maximisation using a multivariable linear regression model with the candidate module as the dependent variable and the phenotype and optional covariates as independent variables. Furthermore, MoDentify implements Gaussian graphical models in which, depending on the resolution, nodes represent metabolites or entire pathways. PAPi (Pathway Activity Profiling) assigns pathway activity scores to samples to represent the potential pathway activity and statistically detects affected pathways by applying t-tests or ANOVA. PAPi uses KEGG pathway identifiers. pathwayPCA, designed with gene selection in mind, offers multi-omics data analysis by estimating sample-specific pathway activities, e.g., for pathways taken from the rWikiPathways interface. pathwayPCA takes continuous, binary or survival outcomes as input and estimates the contributions of individual genes towards pathway significance. R offers packages to analyse metabolic systems and to estimate biochemical reaction rates in metabolic networks using flux balance analysis, e.g., BiGGR, abcdeFBA, sybil and fbar. For example, BiGGR interfaces with the BiGG database, which contains reconstructions of metabolic networks. After importing pathways from the database, flux balance and downstream routines can be performed, e.g., linear optimisation or likelihood-based ensembles of calculated flux distributions fitted to experimental data. The package MetaboLouise simulates longitudinal metabolomics data. The simulation builds on a mathematical representation that is parameterised according to underlying biological networks, i.e., by defining metabolites and the relations between them and by initialising enzyme rates. Optionally, the package implements functionality to vary the rates depending on the network state, to add external fluxes and to analyse results for different parameter settings. A plethora of pathway resources exist, aptly aggregated by Pathguide.org. Several of these resources can be accessed through R packages, which were partly reviewed in [268]: rBiopaxParser, graphite, NCIgraph, pathview, KEGGgraph, SBMLR, rsbml, gaggle and PSICQUIC. Of these, graphite stores pathway information for proteins and metabolites of currently fourteen species (version 1.28.0). Available databases are KEGG, BioCarta, Reactome, the NCI/Nature Pathway Interaction Database, HumanCyc, Panther, SMPDB and PharmGKB. In addition, graphite offers topological and statistical pathway analysis tools for metabolomics data through interfaces with the Bioconductor packages SPIA and clipper, and supports building custom pathways. Furthermore, RPathVisio enables the creation and editing of biological pathways.
RPathVisio makes it possible to visualise data on pathways, to perform statistics on pathway data, and to interface with WikiPathways. KEGGREST provides access to the KEGG REST API via a client interface. The package provides utilities to search by keyword, to convert identifiers and to link across databases. The package also makes it possible to return amino acid sequences as AAStringSet or nucleotide sequences as DNAStringSet objects (from the Biostrings [269] package). Another package, paxtoolsr, provides literature-curated pathways in the Biological Pathway Exchange (BioPAX) format through an interface to the Pathway Commons database (including data from the NCI Pathway Interaction Database (PID), PantherDB, HumanCyc, Reactome, PhosphoSitePlus and HPRD). rWikiPathways is an interface between R and WikiPathways.org. Pathways can be queried, interrogated and downloaded into the R session. Furthermore, rWikiPathways associates metabolite information with pathways when provided with the system code of a chemical database (e.g., HMDB, ChEBI or ChemSpider). RaMP provides a relational database of metabolomics pathways, integrating pathway, gene and metabolite annotations from KEGG, HMDB, Reactome and WikiPathways. The database is downloadable as a standalone MySQL dump for integration with other software, is also accessible through an R package, and includes a shiny [270] web interface that supports four basic queries: (1) retrieve analytes (genes or metabolites) given a pathway name; (2) retrieve pathways for one or more analytes; (3) retrieve analytes involved in the same reaction; (4) retrieve ontologies (cellular location, biofluid location, etc.) for metabolites. The web interface also supports pathway overrepresentation analysis on genes, metabolites, or genes and metabolites combined (query 3) and includes clustering of significantly enriched pathways according to the percentage of overlapping analytes between pathways. Furthermore, the web interface provides network visualisation of gene-metabolite relationships (query 4). When dealing with non-targeted metabolomics data sets, data processing represents a key step for obtaining meaningful and consistent results. While the type and number of data processing methods may vary according to the experimental design and aim of the study, some key steps can be identified that are common to most metabolomics experiments. For this reason, several multifunctional R-based workflows have been developed over the years. A key advantage of using multifunctional workflows is that most of the functions the user needs are available within the same "environment", so that the data do not have to be reformatted to comply with functions from other packages. In this respect, a quite common backbone of R workflows consists of a pre-processing step that generates an R object which can then be used as an argument to different functions. Another advantage is that, in most cases, workflows allow a certain degree of flexibility, so that functionalities can be used as standalone functions (modular workflows) to better match the user's needs. The packages covering larger parts of metabolomics workflows available in R are listed in Table 9. These multifunctional packages include comprehensive workflows that address multiple aspects, such as data pre-processing, data validation, preliminary statistical analysis and data visualisation of large metabolomics datasets.
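Before turning to these workflows, a brief illustration of the programmatic pathway and database access described above: the sketch below uses the KEGGREST client to look up a compound and the pathways it participates in. The identifiers are illustrative and the queries require internet access.

```r
# Minimal sketch: query KEGG through the KEGGREST client
# (compound identifier C00031 is used as an illustrative example).
library(KEGGREST)

# find KEGG compound entries whose name matches "citrate"
keggFind("compound", "citrate")

# retrieve the full record for KEGG compound C00031 (D-glucose)
glc <- keggGet("cpd:C00031")
glc[[1]]$NAME

# link the compound to the KEGG pathways it occurs in
keggLink("pathway", "cpd:C00031")
```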
The workflows considered here support MS-based data (both LC-MS and GC-MS) as well as data generated by other analytical platforms. MAIT (Metabolite Automatic Identification Toolkit) offers pre-processing, annotation, statistical analysis and data visualisation. It relies on xcms for peak picking and on CAMERA for the preliminary annotation. In addition to CAMERA, the peak annotation process includes functionality that relates in-source mass losses to specific biotransformations. Human biotransformations are already included, and additional biotransformation criteria can be added by the end user. MAIT also provides several statistical tools and visual representations (e.g., PCA, boxplots, PLS), as well as a function to perform identifications by accurate mass search in HMDB. MetMSLine shows some similarities with MAIT in terms of processing stages (xcms-based pre-processing, multivariate statistics, metabolite identification). Functionalities characterising MetMSLine include normalisation, signal drift correction using a smoothing method, noise transformation and outlier removal. SimExTargId is a wrapper of different software and R packages for LC-MS data. It includes tools for data conversion (ProteoWizard), peak picking and annotation (xcms and CAMERA), outlier detection and data correction (MetMSLine), and basic statistical analysis. A special feature of SimExTargId is the real-time monitoring of the different workflow stages, aimed at metabolomics core facilities; users are notified by email in case of processing problems (e.g., outlier detection, signal drift). mzMatch differs slightly from the above-mentioned workflows in that it is designed to fit into a broader processing pipeline itself. The project also includes a dedicated file format (peakML) and a Java environment. The different modules can still be used independently. mzMatch supports peak picking and grouping using xcms, reproducibility calculation and data normalisation. The peakMonitor app identifies peaks using a local database; identification is performed on the basis of m/z and retention time values, with user-defined tolerances for mass accuracy and retention time deviation. MetaDB is built by integrating the metaMS R package into a web application written in Grails. It has also been designed to be integrated with the MetaboLights database. MetaDB supports both LC-MS and GC-MS datasets and offers a wide range of functionalities, including data storage and metadata management (using the ISA-Tab format and the ISAcreator tool [300,301]), peak picking and annotation (via metaMS, an xcms and CAMERA add-on) and QC plots. MStractor is designed for non-expert users to carry out non-targeted data processing of LC-MS experiments. It gathers xcms and CAMERA functions in a user-friendly pipeline, requiring minimal input and providing graphical QC outputs throughout the workflow. It also includes a manual peak curation step and the possibility of calculating descriptive statistics for each sample class. patRoon is an interface to different MS-based open source software for non-targeted data processing.
patRoon covers different aspects of metabolomics workflows, such as file conversion to open data formats (mzXML and mzML), feature extraction and grouping (using several open source tools such as OpenMS and the R packages xcms and enviPick), extraction of MS and MS/MS data (mzR), component generation (RAMClustR, CAMERA, nontarget), formula calculation (GenForm) and compound identification through automatic annotation of MS/MS spectra (MetFrag and SIRIUS with CSI:FingerID). Other functionalities include (interactive) visualisation and reporting of workflow data, comparing and combining results from different workflow algorithms, and several data reduction and selection strategies. specmine provides a general framework that addresses a variety of analytical platforms, such as LC-MS, GC-MS, NMR, IR and UV-Vis. The package supports many data formats and includes the possibility of adding metadata in a tabular format. It relies on xcms for LC-MS and GC-MS data pre-processing, on hyperSpec for NMR, IR and UV/VIS data processing, and on MAIT for metabolite identification. specmine provides scripts for missing value imputation, univariate and multivariate statistics and machine learning methods. Several case studies are available for testing purposes. mQTL.NMR is a package specifically for the systematic analysis of 1H NMR metabolomics data in quantitative genetics. The package mainly focuses on NMR spectral data pre-processing (normalisation, scaling and peak alignment), mQTL mapping in different model organisms, structural assignment of marker metabolites, and result visualisation. enviMass is a comprehensive workflow for the data mining of LC-MS and GC-MS datasets, which also supports MS/MS experiments. It provides the user with a graphical user interface (GUI) and a flexible workflow structure covering common processing steps such as data conversion, peak picking, noise removal, mass re-calibration, data normalisation and blank subtraction. It also offers several more specific and advanced functionalities, including isotopologue and adduct grouping, homologous series detection and visualisation, estimation of atom counts for nontarget components, temporal sequences, profile trend detection and processing of both data-dependent and data-independent acquisition MS/MS experiments. RMassScreening is a workflow for batch processing of LC-HRMS datasets using a script interface, YAML-based settings configuration and visual interactive data evaluation. It provides wrappers for script-based usage of enviPick and basic enviMass components, and implements suspect screening and combinatorial prediction of possible metabolites (transformation products) from parent compounds. A GUI provides facilities to analyse the results, grouped by sample groups and experimental timepoints, by applying freely adjustable filters. MetaboNexus is an interactive data analysis platform for metabolomics experiments, which provides a user-friendly R shiny-based GUI designed to work without the need for web server connections. It allows pre-processing (using xcms and MZmine), data scaling, univariate and multivariate statistics (t-test, ANOVA, PCA, PLS-DA, Random Forest, heatmaps), putative metabolite identification (library matching of MS and MS/MS adducts against the METLIN, HMDB and MassBank databases), and several functions for data visualisation.
Table 9. R packages with multifunctional workflows:
- Convenience wrapper for pre-processing tools (xcms, CAMERA) and several statistical analyses.
- Pre-processing (xcms), replicate merging, noise, blank and missingness filtering, feature grouping, annotation of known compounds, isotopic labelling analysis, annotation from KEGG or HMDB, common biotransformations and probabilistic putative metabolite annotation using MetAssign. mzMatch [82,105] GitHub
- xcms- and CAMERA-based workflow for non-targeted processing of LC-MS datasets; includes pre-processing, peak picking, peak filtering, data normalisation and calculation of descriptive statistics.
- Performs simultaneous conversion of raw data to mzXML (MSConvert), peak picking, automatic PCA outlier detection and statistical analysis, visualisation and possible MS/MS target list determination during an MS1 metabolomic profiling experiment.
- Pre-processing of large LC-MS datasets; performs automatic PCA with iterative automatic outlier removal, clustering analysis and biomarker discovery. [304] GitHub
- Workflow for the systematic analysis of 1H NMR metabolomics datasets in quantitative genetics; performs pre-processing, mQTL mapping and structural assignment of metabolites, and offers data visualisation tools.
- Workflow for pre-processing, quality control, annotation and statistical data analysis of LC-MS and GC-MS-based metabolomics data to be submitted to public repositories.
- Framework mainly built on several already published packages; supports data processing from different analytical platforms (LC-MS, GC-MS, NMR, IR, UV-Vis).
- Common interface to several different MS-based data processing software tools; covers various aspects such as data preparation and extraction, formula calculation, compound identification and reporting. patRoon GitHub
- Processing of high-resolution LC-MS data for environmental trend analysis. enviMass Zenodo
- Workflow for pre-processing of LC-HRMS data, suspect screening, screening for transformation products using combinatorial prediction, and interactive filtering based on ratios between sample groups.
- Workflow to perform pre-processing, statistical analysis and metabolite identification based on database searches of detected spectra.
- shiny-based platform to extract differential features from LC-MS data; includes xcms-based feature detection, statistical analysis, prediction of molecular formulas, annotation of MS/MS spectra, MS/MS molecular networking and chemical compound database search. METABOseek GitHub
- shiny interface to metabolomics packages and MetaboAnalyst scripts.
- Pre-processing and visualisation of LC-MS data, as well as statistical analyses, mainly based on univariate linear models. amp GitHub

Visualisation is an important part of data analysis. Traditionally, graphics in R have focussed on creating static plots, while typical explorative studies generally require interactive visualisation to fully investigate the data. User interactions can range from simply zooming into chromatographic or spectroscopic data through to temporarily excluding data from a complex plot for clarity. Several R packages are available for making interactive plots, e.g., the plotly library [311], which can create interactive graphics from the static plots generated by the popular plotting framework ggplot2 [312]. The use of interactive plots in R is growing and is helped by an increasing number of code examples. Another way interactive plots, and even full GUI tools, are being introduced into R is through the shiny framework, which can create web apps using the full power of R packages as the backend.
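As a minimal illustration of how shiny and plotly can be combined for interactive exploration, the sketch below builds a small spectrum viewer from a simulated peak table; the data, object names and widget layout are purely illustrative.

```r
# Minimal, illustrative shiny app: an interactive spectrum viewer built from a
# simulated peak table, using ggplot2 for the static plot and plotly for
# interactivity (zooming, hovering).
library(shiny)
library(ggplot2)
library(plotly)

set.seed(7)
peaks <- data.frame(mz = runif(200, 100, 1000),
                    intensity = rexp(200, rate = 1e-5))

ui <- fluidPage(
  sliderInput("mzrange", "m/z range", min = 100, max = 1000,
              value = c(100, 1000)),
  plotlyOutput("spectrum")
)

server <- function(input, output) {
  output$spectrum <- renderPlotly({
    sub <- subset(peaks, mz >= input$mzrange[1] & mz <= input$mzrange[2])
    p <- ggplot(sub, aes(x = mz, ymin = 0, ymax = intensity)) +
      geom_linerange() +
      labs(x = "m/z", y = "intensity")
    ggplotly(p)  # convert the static ggplot into an interactive plot
  })
}

shinyApp(ui, server)
```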
Many such tools related to metabolomics data analysis are also becoming available, which considerably lowers the entry barrier for the typical metabolomics scientist without a computational background. A current gap in the shiny metabolomics landscape is the lack of powerful and re-usable widget collections, e.g., for spectra viewers, molecular structures or metabolic networks. There are several approaches to creating, sharing and using data analyses in R for developers and users, each with different strengths and weaknesses. Table 10 summarises several ways to create and run a data analysis, with some interpretation and comparative comments. Please note that in some cases it is difficult to quantify "implementation simplicity", e.g., in the case of shiny apps, which can range from rather straightforward to highly complex. All of these environments can be run locally or installed on a (local or cloud-based) server. Recently, several initiatives have started to provide publicly available computing resources. Examples include the previously mentioned rdrr.io, which allows R code to be pasted into an online console for execution. The console can also be embedded into individual websites. The same project also hosts rnotebook.io, which allows users to create and run R notebooks. The shinyapps.io platform operated by RStudio Inc. has free and paid options to host shiny apps. The binder project, involving members from large academic institutions and companies (such as UC Berkeley, Cal Poly San Luis Obispo, Wild Tree Tech Switzerland, Netflix and Simula Research Lab), is an infrastructure for creating and using shareable, interactive and reproducible data analyses (not only) with R [313] by taking any GitHub repository, turning it into a Docker image and launching it on a cloud service. The package holepunch [314] simplifies preparing an R project for launching on binder. A public instance is the mybinder.org service, providing (limited) resources to execute R-based scripts in a hosted RStudio, a Jupyter notebook or applications written with, e.g., shiny. The binder infrastructure code is available on GitHub, so that the service can be offered by universities and research groups to their users, lifting the resource limitations of the public instance. In some cases, an R package can provide bindings to existing tools and libraries written in other languages (see Table 11). This is, for example, the case for the packages rcdk and MetFragR, which use the rJava bindings, or mzR, which wraps the ProteoWizard C++ library using the Rcpp package. The fairly new reticulate package provides the corresponding infrastructure to execute Python from R code. Several workflow systems support workflow nodes and tools that can wrap and execute R code, and in turn build on the huge number of R packages (not only) for metabolomics. In this way, systems like KNIME [315,316] and Galaxy [317,318] also provide a GUI and visual programming using the wrapped R functionality, possibly combined with tools developed in other programming frameworks. Galaxy is a web-based environment for omics data analysis [319]. The Workflow4metabolomics.org online Galaxy infrastructure dedicated to metabolomics [318] includes wrappers of xcms, CAMERA, metaMS, proFIA, ropls and biosigner, and is open to new contributions. W4M is supported by two national infrastructures: the French Institute of Bioinformatics (www.france-bioinformatique.fr) and the Infrastructure for Metabolomics and Fluxomics (www.metabohub.fr) [320].
Wrapping R code into a Galaxy module is quite straightforward: examples can be found on the toolshed central repository (toolshed.g2.bx.psu.edu) and in the RGalaxy Bioconductor package. An additional benefit is that workflow developers need to ensure seamless data flow through the workflow steps and often contribute the glue code that bridges objects and data structures which are not always directly compatible across different packages and software, thus also improving interoperability beyond the use in workflow systems. Workflows and input/output data can be publicly referenced [321,322] on the Workflow4metabolomics platform, thus enabling fully reproducible research. By using workflow systems, the reuse and reprocessing of data sets is greatly encouraged, as is the tracking of data provenance [323]. In this way, workflows help to promote the FAIR principles that were originally shaped for data [324].
Table 11. Packages to interface R with other languages and workflow environments:
- Given an R function and its manual page, make the documented function available in Galaxy (RGalaxy).
- Integration of R and C++; many R data types and objects can be mapped back and forth to C++ equivalents (Rcpp).

Sharing of data has become increasingly common, and metabolomics data are available from MetaboLights [326] in the EU, and from GNPS [122] and Metabolomics Workbench [327] in the US. In the context of this review, we focus instead on data in R packages, which is important for development, unit testing, documentation and user training (see Table 12). While there is no difference in R between software and data packages per se, they are handled differently in the Bioconductor infrastructure, where separate views exist. There are several data sets with raw data from LC-MS and flow injection analysis, which can be used by the data pre-processing packages in the previous sections. Other packages contain pre-processed data from GC-MS, LC-MS or NMR in the form of peak tables, which are then typically used in statistics packages, network analysis and other downstream analyses. Table 12 includes, among others:
- pmartRdata GitHub
- 6 mzML files (human plasma spiked with 40 compounds, acquired in positive mode on an Orbitrap Fusion). plasFIA BioC
- mzML files (Thermo Exactive) from a comparison of leaf tissue from 4 B. distachyon ecotypes by flow-infusion electrospray ionisation high-resolution mass spectrometry (FIE-HRMS); also includes data sets with 10 technical injections of human urine and another 10 injections from leaf tissue (ecotype ABR1). metaboData GitHub

This review surveyed both the scientific literature and the R landscape for packages relevant to metabolomics research. While it was very easy to find relevant packages on CRAN and even more so in BioC, many packages are scattered across other source code hosting platforms. While GitHub has a concept of topics (see github.com/search?q=topic:metabolomics+topic:r), and crawlers like rdrr.io can find R packages across several platforms, the best findability is achieved through well-integrated umbrella projects like Bioconductor, which provide additional infrastructure and also improve community interaction through conferences and workshops. This also shows the need for more detailed metadata for R packages, allowing easier mixing and matching of packages; Bioconductor already does a very good job here. R packages have a long-standing history of metadata annotation via their DESCRIPTION and CITATION files.
These provide links to other packages (e.g., dependencies and suggestions) and to the literature describing the package. Exposing package and vignette metadata with semantic approaches will support the community in developing further, more advanced multi-functional workflows for metabolomics. The authors have recently adopted Bioschemas [330] to make metadata more easily findable. For example, annotating vignettes allows the ELIXIR Training eSupport System TeSS (tess.oerc.ox.ac.uk) to pick up newer versions (see this git commit [331]), and efforts are underway to expose content from the DESCRIPTION file as Bioschemas annotations on Bioconductor (see this pull request [332]). These actions greatly contribute to community adoption and encourage the reuse of R-based computational workflows in different use cases [323]. In some cases, software described in the literature was only available "on request", which in practice often turns out to mean it is no longer available. This review also did not assess whether the R packages (and their dependencies) can be installed on a current R installation. A recent survey [333] showed how the repeatability of papers using scientific software drops when the software is not available or does not install. Issues and bug reports were filed for packages that could not be tested on contemporary operating systems. The way out of the (un-)repeatability trap can be expressed in a few, seemingly trivial, rules [334]: host the code in open repositories, if possible with regular builds or even continuous integration. As discussed earlier, the metabolomics packages have tighter connections within an established community such as Bioconductor than in other package repositories. In the last few years, Bioconductor packages for metabolomics and proteomics data analysis have started converging towards a common mass spectrometry infrastructure, which simplifies interoperability between these packages. Based on experiences from these efforts, the RforMassSpectrometry (RforMassSpectrometry.org) initiative was recently started, aiming to provide efficient, thoroughly documented, tested and flexible R software for MS data import, handling and analysis. Significant improvements can thus be expected in the future, simplifying and unifying MS data handling for the benefit of the end users. RforMassSpectrometry also contains the metaRbolomics-book [335], which will be a continuously developed resource with additional examples beyond this review. The authors expect that the metaRbolomics landscape will continue its steady growth and keep track of the evolving metabolomics experiments to come.
Metabolomics in Practice: Successful Strategies to Generate and Analyze Metabolic Data Metabolome Analysis: An Introduction Metabolomics: An Introduction | EMBL-EBI Train Online International Metabolomics Society Category: Free Tools & Learning Resources-Metabolomics Society Wiki R: A Language and Environment for Statistical Computing Navigating freely-available software tools for metabolomics analysis Updates in metabolomics tools and resources Review of emerging metabolomic tools and resources New tools and resources in metabolomics Metabolomics: Tools Databases Resources in Metabolomics & Integrated Omics in 2015-2016.
Available online Public data and open source tools for multi-assay genomic investigation of disease Software Tools and Approaches for Compound Identification of LC-MS/MS Data in Chemometrics and Computational Physics Open software development for computational biology and bioinformatics The Comprehensive R Archive Network CRAN Repository Policy Collaborative Software Development Using R-Forge Building Community: Lessons from the rOpenSci Project The Network Structure of R Packages on CRAN & BioConductor Differences in the Network Structure of CRAN and BioConductor (Revolutions) Scripts Used for My UseR!2015 Presentation on the Network Structure of CRAN. Available online A cross-platform toolkit for mass spectrometry and proteomics ProteoWizard: Open source software for rapid proteomics tools development High-throughput, accurate mass metabolome profiling of cellular extracts by flow injection-time-of-flight mass spectrometry A roadmap for the XCMS family of software solutions in metabolomics XCMS: Processing mass spectrometry data for metabolite profiling using nonlinear peak alignment, matching, and identification Highly sensitive feature detection for high resolution LC/MS. BMC Bioinform Massifquant: Open-source Kalman filter-based XC-MS isotope trace feature detection Orchestrating high-throughput genomic analysis with Bioconductor A R Package to Store, Manipulate, Analyze, and Visualize Metabolomics Data Data Standard for Sharing Quantitative Results in Mass Spectrometry Metabolomics MSnbase-an R/Bioconductor package for isobaric tagged mass spectrometry data visualization, processing and quantitation IPO: A tool for automated optimization of XCMS parameters Joint Bounding of Peaks Across Samples Improves Differential Analysis in Mass Spectrometry-Based Metabolomics Automated pipeline for improved feature detection and downstream analysis of large-scale, non-targeted metabolomics data Improving peak detection in high-resolution LC/MS metabolomics data using preexisting knowledge and machine learning approach Increased precision of metabolomic data processing by consensus integration bound analysis AMDORAP: Non-targeted metabolic profiling based on high-resolution LC-MS Evaluation and reduction of the analytical uncertainties in GC-MS analysis using a boundary regression model A Computational Tool To Detect the Presence of Stable Isotope Labeling in LC/MS-Based Untargeted Metabolomics Correcting for natural isotope abundance and tracer impurity in MS-, MS/MS-and high-resolution-multiple-tracer-data from stable isotope labeling experiments with X13CMS: Global tracking of isotopic labels in untargeted metabolomics IsotopicLabelling: An R package for the analysis of MS isotopic patterns of labelled analytes DLEMMA-MS-Imaging for Identification of Spatially Localized Metabolites and Metabolic Network Map Reconstruction Miso: An R package for multiple isotope labeling assisted metabolomics data analysis TargetSearch-a Bioconductor package for the efficient preprocessing of GC-MS metabolite profiling data SIMAT: GC-SIM-MS data analysis tool Development of High-Coverage Targeted Metabolomics Method Using SWATH Technology for Biomarker Discovery An integrated targeted metabolomic platform for high-throughput metabolite profiling and automated data processing Assayr: A simple mass spectrometry software tool for targeted metabolic and stable isotope tracer analyses Deconvolution of Multiplexed MS/MS Spectra for Metabolite Identification in SWATH-MS based Untargeted Metabolomics ARISTO: 
Ontological classification of small molecules by electron ionization-mass spectrometry Unsupervised and untargeted GC-MS data processing A Computational Tool Integrating Spectral Deconvolution and Alignment with Quantification and Identification of Metabolites in GC/MS-Based Metabolomics Automated mass spectral deconvolution and identification system for GC-MS screening for drugs, poisons, and metabolites in urine The National Institute of Standards and Technology AMDIS Download Metab: An R package for high-throughput analysis of metabolomics data generated by GC-MS Compound identification in gas chromatography/mass spectrometry-based metabolomics by blind source separation Automated resolution of chromatographic signals by independent component analysis-orthogonal signal deconvolution in comprehensive gas chromatography/mass spectrometry-based metabolomics Extending the dynamic range in metabolomics experiments by automatic correction of peaks exceeding the detection limit GCalignR: An R package for aligning gas-chromatography data for ecological and evolutionary studies metaMS: An open-source pipeline for GC-MS-based untargeted metabolomics A new method of peak detection for analysis of comprehensive two-dimensional gas chromatography mass spectrometry data An optimal peak alignment for comprehensive two-dimensional gas chromatography mass spectrometry using mixture similarity measure Smith-Waterman peak alignment for comprehensive two-dimensional gas chromatography-mass spectrometry R2DGC: Threshold-free peak alignment and identification for 2D gas chromatography mass spectrometry in R proFIA: A data preprocessing workflow for flow injection analysis coupled to high-resolution mass spectrometry Preprocessing, classification modeling and feature selection using flow injection electrospray mass spectrometry metabolite fingerprint data Improved peak detection in mass spectrum by incorporating continuous wavelet transform-based pattern matching Credentialing features: A platform to benchmark and optimize untargeted metabolomic methods A dynamic programming approach for the alignment of signal peaks in multiple gas chromatography-mass spectrometry experiments Fast parametric time warping of peak lists Arnaud Droit rTANDEM Separating Measurement Artifacts from True Metabolites in an Untargeted Metabolomics Experiment PeakML/mzMatch: A file format, Java library, R library, and tool-chain for mass spectrometry data analysis MS/MS data improves automated determination of molecular formulas by mass spectrometry Accelerated isotope fine structure calculation using pruned transition trees DECOMP-from interpreting Mass Spectrometry peaks to solving the Money Changing Problem CAMERA: An integrated strategy for compound spectra extraction and annotation of liquid chromatography/mass spectrometry data sets CliqueMS: A computational tool for annotating in-source metabolite ions from LC-MS untargeted metabolomics data based on a coelution similarity network RAMClust: A novel feature clustering method enables spectral-matching-based annotation for metabolomics data An untargeted metabolomic workflow to improve structural characterization of metabolites Dynamic block summary and intensity normalization for robust analysis of mass spectrometry data in metabolomics Nontargeted homologue series extraction from hyphenated high resolution mass spectrometry data Stronger findings from mass spectral data through multi-peak modeling A tool for merging untargeted metabolomic data from high-resolution mass 
spectrometry in the positive and negative ionization modes AStream: An R package for annotating LC/MS metabolomic data Navigating mass spectral similarity in high-resolution MS/MS metabolomics data ProbMetab: An R package for Bayesian probabilistic annotation of LC-MS-based metabolomics Compound annotation in liquid chromatography/high-resolution mass spectrometry based metabolomics: Robust adduct ion determination as a prerequisite to structure prediction in electrospray ionization mass spectra An Automatable Metabolite Identification, Visualization, and Data-Sharing R Package for High-Resolution LC-MS Data Sets LOBSTAHS: An Adduct-Based Lipidomics Strategy for Discovery and Identification of Oxidative Stress Biomarkers LipidMatch: An automated workflow for rule-based lipid identification using untargeted high-resolution tandem mass spectrometry data LipidMS: An R package for lipid annotation in untargeted liquid chromatography-data independent acquisition-mass spectrometry lipidomics MassTRIX reloaded: Combined analysis and visualization of transcriptome and metabolome data Metabolomics Count Data Analysis Tool for Intuitive Figures and Convenient Metadata Collection Structure/reaction directed analysis for LC-MS based untargeted analysis Probabilistic annotation of metabolites from LC-MS data using a Bayesian clustering approach xMSannotator: An R Package for Network-Based Annotation of High-Resolution Metabolomics Data The WEIZMASS spectral library for high-confidence metabolite identification GitHub-Michaelwitting/ms2dbworkflow MassBank: A public repository for sharing mass spectral data for life sciences Automatic recalibration and processing of tandem mass spectra using formula annotation Nearline acquisition and processing of liquid chromatography-tandem mass spectrometry data Automated Evaluation of Precursor Ion Purity for Mass Spectrometry-Based Fragmentation in Metabolomics Baitmet, a computational approach for GC-MS library-driven metabolite profiling CluMSID: An R package for similarity-based clustering of tandem mass spectra to aid feature annotation in metabolomics A Survey of Orbitrap All Ion Fragmentation Analysis Assessed by an R MetaboList Package to Study Small-Molecule Metabolites MetFrag relaunched: Incorporating strategies beyond in silico fragmentation The Compound Characteristics Comparison (CCC) approach: A tool for improving confidence in natural compound identification X-Rank: A robust algorithm for small molecule identification using tandem mass spectrometry The National Institute of Standards and Technology Library Conversion Tool Data-independent MS/MS deconvolution for comprehensive metabolome analysis Sharing and community curation of mass spectrometry data with Global Natural Products Social Molecular Networking NMRProcFlow: A graphical and interactive tool dedicated to 1D spectra processing for NMR-based metabolomics Probabilistic quotient normalization as robust method to account for dilution of complex biological mixtures. 
Application in 1H NMR metabonomics HMDB: A knowledgebase for the human metabolome 0-The Human Metabolome Database in 2013 The human metabolome database 0: The human metabolome database for Metabolic profiling, metabolomic and metabonomic procedures for NMR spectroscopy of urine, plasma, serum and tissue extracts A fast NMR method for resonance assignments: Application to metabolomics Two-dimensional J-resolved NMR spectroscopy: Review of a key methodology in the metabolomics toolbox Dolphin: A tool for automatic targeted metabolite profiling using 1D and 2D (1)H-NMR data Parallel factor (PARAFAC) kernel analysis of temperature-and composition-dependent NMR spectra of poly(lactic acid) nanocomposites Chemometric methods to quantify 1D and 2D NMR spectral differences among similar protein therapeutics Application of Multi-Way Analysis to 2D NMR Data BATMAN-an R package for the automated quantification of metabolites from nuclear magnetic resonance spectra using a Bayesian model PepsNMR for the 1H-NMR Metabolomic Data Pre-Processing An integrated workflow for robust alignment and simplified quantitative analysis of NMR spectrometry data Laukens, K. speaq 2.0: A complete workflow for high-throughput 1D NMR spectra processing and quantification ASICS: An R package for a whole analysis workflow of 1D 1H NMR spectra A GUI R package for proficient automatic profiling of 1D 1H-NMR spectra of study datasets Open source software for identifying and quantifying metabolites in NMR spectra MWASTools: An R/bioconductor package for metabolome-wide association studies NMR: An integrated suite for genetic mapping of quantitative variations of (1)H NMR-based metabolic profiles Metabolite profiling in LC-DAD using multivariate curve resolution: The alsace package for R Packages Found under StatisticalMethod Design of Experiments (DoE) & Analysis of Experimental Data Cluster Analysis & Finite Mixture Models Multivariate Statistics Machine Learning & Statistical Learning The Comprehensive R Archive Network CRAN Task Views Modern hierarchical, agglomerative clustering algorithms. 
arXiv 2011 Clustering analysis of proteins from microbial genomes at multiple levels of resolution A comparison of latent class, K-means, and K-median methods for clustering dichotomous data Pattern Recognition and Machine Learning Mathematical Methods for Knowledge Discovery and Data Mining Multivariate Statistics for the Environmental Sciences (Mathematics) Multivariate Statistical Methods Introduction to Multivariate Analysis; Routledge: Abingdon-on-Thames Algorithms for hierarchical clustering: An overview Annual Plant Reviews Network Marker Selection for Untargeted LC-MS Metabolomics Data mixOmics: An R package for 'omics feature selection and multiple data integration A critical assessment of feature selection methods for biomarker discovery in clinical proteomics A flexible and comprehensive software for processing metabolomics data Environmental Modelling) A comparative investigation of modern feature selection and classification approaches for the analysis of mass spectrometry data Principles and Theory for Data Mining and Machine Learning; Springer series in statistics Antibiotic resistome in a large-scale healthy human gut microbiota deciphered by metagenomic and network analyses Liquid chromatography-mass spectrometry calibration transfer and metabolomics data fusion Missing Value Imputation Approach for Mass Spectrometry-based Integrated omics approaches in plant systems biology A review of feature selection techniques in bioinformatics Evaluation of variable selection methods for random forests and omics data sets Optimal Algorithm for Metabolomics Classification and Feature Selection varies by Dataset A New Method for the Discovery of Significant Molecular Signatures from Meta-Statistics for Variable Selection: TheR PackageBioMark Variable selection and validation in multivariate modelling Predicting network activity from high throughput metabolomics Selecting the optimal sample size for metabolomic studies using an analysis based approach Large-scale untargeted LC-MS metabolomics data correction using between-batch feature alignment and cluster-based within-batch signal intensity drift correction Improved batch correction in untargeted MS-based metabolomics Compensation for systematic cross-contribution improves normalization of mass spectrometry based metabolomics data Intensity drift removal in LC/MS metabolomics by common variance compensation Normalization of metabolomics data with applications to correlation maps MetaboQC: A tool for correcting untargeted metabolomics data with mass spectrometry detection using quality controls Correcting gene expression data when neither the unwanted variation nor the factor of interest are observed Statistical methods for handling unwanted variation in metabolomics data NormalizeMets: Assessing, selecting and implementing statistical methods for normalizing metabolomics data Normalization and integration of large-scale metabolomics data using support vector regression Normalyzer: A tool for rapid evaluation of normalization methods for omics data sets Discovering regulated metabolite families in untargeted metabolomics studies pcaMethods-a bioconductor package providing PCA methods for incomplete data A versatile R package for estimating local and tail area-based false discovery rates MetabR: An R script for linear model analysis of quantitative metabolomic data Multiple Testing Procedures: The multtest Package and Applications to Genomics Estimation of permutation-based metabolome-wide significance thresholds: Supplementary material 
Building Predictive Models in R Using the caret Package Analyzing German Business Cycles. In Data Analysis and Decision Support KODAMA: An R package for knowledge discovery and data mining Probabilistic principal component analysis for metabolomic data A variable selection approach in the multivariate linear model: An application to LC-MS metabolomics data Integrating omics datasets with the OmicsPLS package The pls Package: Principal Component and Partial Least Squares Regression inR A review of variable selection methods in Partial Least Squares Regression Spatio-Temporal Multiway Decompositions Using Principal Tensor Analysis on-Modes: TheR PackagePTAk RankProd 2.0: A refactored bioconductor package for detecting differentially expressed features in molecular profiling datasets Analysis of the Human Adult Urinary Metabolome Variations with Age, Body Mass Index, and Gender by Implementing a Comprehensive Workflow for Univariate and OPLS Statistical Analyses Three-Way Component Analysis Using theR PackageThreeWay IntLIM: Integration using linear models of metabolomics and gene expression data integrOmics: An R package to unravel relationships between two omics datasets MultiDataSet: An R package for encapsulating multiple data sets with application to omic data integration A multivariate approach to the integration of multi-omics datasets STATegra EMS: An Experiment Management System for complex next-generation omics experiments A data-driven integration and differential network analysis tool Metabolic Model-Based Integration of Microbiome Taxonomic and Metabolomic Profiles Elucidates Mechanistic Links between Ecological and Metabolic Variation. mSystems Metabomxtr: An R package for mixture-model analysis of non-targeted metabolomics data Kernel approaches for differential expression analysis of mass spectrometry-based metabolomics data Imputation with theR PackageVIM An Introduction to Statistical Learning A program for missing data missmda: A package for handling missing values in multivariate data analysis MissForest-non-parametric missing value imputation for mixed-type data mice: Multivariate Imputation by Chained Equations inR Multiple Imputation with Diagnostics (mi) inR: Opening Windows into the Black Box A Gibbs sampler based left-censored missing value imputation approach for metabolomics studies MSPrep-summarization, normalization and diagnostics for processing of mass spectrometry-based metabolomic data Normalizing and integrating metabolomics data MetaboAnalyst 3.0-making metabolomics more meaningful MetaboAnalystR: An R package for flexible and reproducible analysis of metabolomics data An R Package for Metabolomics Univariate and Multivariate Statistical Analysis. CMB 2013 Multi-Omics Factor Analysis-a framework for unsupervised integration of multi-omics data sets MetaboDiff: An R package for differential metabolomic analysis Statistical and Multivariate Analysis of MS-Based Plant Metabolomics Data polyPK: An R package for pharmacokinetic analysis of multi-component drugs using a metabolomics approach The Chemistry Development Kit (CDK) v2.0: Atom typing, depiction, molecular formulas, and substructure searching Inchi, the IUPAC international chemical identifier ChemMine tools: An online service for analyzing and clustering small molecules Open Babel: An open chemical toolbox Open-Source Cheminformatics Software. 
Available online PubChem: A public information system for analyzing bioactivities of small molecules PubChem Substance and Compound databases Chemspider: An online chemical information resource Introducing wikidata to the linked data web Lecture notes in computer science The Chemical Translation Service-a web-based tool to improve standardization of metabolomic reports Automated chemical classification with a comprehensive, computable taxonomy Chemical Informatics Functionality in R ChemmineR: A compound mining framework for R Mismatch tolerant maximum common substructure searching in R Chemically Aware Model Builder (camb): An R package for property and bioactivity modelling of small molecules Zenodo Release Mass spectral molecular networking of living microbial colonies New perspectives on genomes, pathways, diseases and drugs New approach for understanding genome variations in KEGG Kyoto encyclopedia of genes and genomes The toxic exposome database T3DB: A comprehensively annotated database of common toxins and their targets LIPID MAPS online tools for lipid research Metabolic reaction network-based recursive metabolite annotation for untargeted metabolomics The pathway interaction database The reactome pathway knowledgebase R-based software for the integration of pathway data into bioinformatic algorithms Shiny: Web Application Framework for R The igraph software package for complex network research RedeR: R/Bioconductor package for representing modular structures DiffCorr: An R package to analyze and visualize differential correlations in biological networks Metabolite Network Prediction from High-Resolution Mass Spectrometry Data in R Aiding Metabolite Annotation MetaMapR: Pathway independent metabolomic network analysis incorporating unknowns Metabox: A toolbox for metabolomic data analysis, interpretation and integrative exploration MetabNet: An R Package for Metabolic Association Analysis of High-Resolution Metabolomics Data WGCNA: An R package for weighted correlation network analysis pwOmics: An R package for pathway-based integration of time-series omics data using public database knowledge Statistical hypothesis testing of factor loading in principal component analysis and its application to metabolite set enrichment analysis Perera-Lluna, A. Null diffusion-based enrichment for metabolomics data CePa: An R package for finding significant pathways weighted by multiple network centralities Condition-specific series of metabolic sub-networks and its application for gene set enrichment analysis Lilikoi: An R package for personalized pathway-based classification modeling using metabolomics data INDEED: Integrated differential expression and differential network analysis of omic data for biomarker discovery Phenotype-driven module identification in metabolomics networks at different resolutions Pathway Activity Profiling (PAPi): From the metabolite profile to the metabolic pathway activity pathwayPCA: An R package for integrative pathway analysis with modern PCA methodology and gene selection Using bioconductor package BiGGR for metabolic flux estimation based on gene expression changes in brain A novel signaling pathway impact analysis Automatically visualise and analyse data on pathways using PathVisioRPC from any programming environment Ramp: A comprehensive relational database of metabolomics pathways for pathway enrichment analysis of genes and metabolites Beissbarth, T. 
rBiopaxParser-an R package to parse, modify and visualize BioPAX data metaGraphite-A new layer of pathway annotation to get metabolite networks graphite-A Bioconductor package to convert pathway topology to gene network Pathview: An R/Bioconductor package for pathway-based data integration and visualization A graph approach to KEGG PATHWAY in R and bioconductor Pathway analysis in R using Pathway Commons WikiPathways: A multifaceted pathway database bridging metabolomics to other omics research ISA software suite: Supporting standards-compliant experimental annotation and enabling curation at the community level Toward interoperable bioscience data An R package to analyse LC/MS metabolomic data: MAIT (Metabolite Automatic Identification Toolkit) simExTargId: A comprehensive package for real-time LC-MS data acquisition and analysis MetMSLine: An automated and fully integrated pipeline for rapid processing of high-resolution LC-MS metabolomic datasets An R package for the integrated analysis of metabolomics and spectral data Exploring micropollutant biotransformation in three freshwater phytoplankton species Biodiversity drives micropollutant biotransformation in freshwater phytoplankton assemblages MetaboNexus: An interactive platform for integrated metabolomics analysis MetaboShiny-Interactive processing, analysis and identification of untargeted metabolomics data Elegant Graphics for Data Analysis Binder 2.0-Reproducible, interactive, sharable environments for science at scale Configure Your R Project for Binderhub Hole Punch A versatile visual NMR metabolomics workbench in KNIME KniMet: A pipeline for the processing of chromatography-mass spectrometry metabolomics data Galaxy-M: A Galaxy workflow for processing and analyzing direct infusion and liquid chromatography mass spectrometry-based metabolomics data Workflow4Metabolomics: A collaborative research infrastructure for computational metabolomics Galaxy Team Galaxy: A comprehensive approach for supporting accessible, reproducible, and transparent computational research in the life sciences Create, run, share, publish, and reference your LC-MS, FIA-MS, GC-MS, and NMR data analysis workflows with the Workflow4Metabolomics 3.0 Galaxy online infrastructure for metabolomics Workflow4metabolomics Referenced W4M Histories | Workflow4metabolomics.org. Available online The FAIR Guiding Principles for scientific data management and stewardship rcpp: Seamless R and C++ Integration MetaboLights-An open-access general-purpose repository for metabolomics studies and associated meta-data Metabolomics Workbench: An international repository for metabolomics data and metadata, metabolite standards, protocols, tutorials and training, and analysis tools Assignment of endogenous substrates to enzymes by global metabolite profiling Multi-Omics of Tomato Glandular Trichomes Reveals Distinct Features of Central Carbon Metabolism Supporting High Productivity of Specialized Metabolites Bioschemas: From Potato Salad to Protein Annotation Attempt to Add BioSchemas.org JSON-LD to the Vignette HTML Bridgedb/BridgeDbR@40e741a GitHub Added Template for BioSchemas Tool Annotation by Egonw Pull Request #25 Bioconductor/Bioconductor.org GitHub Repeatability in computer systems research Ten simple rules for making research software more robust