title: Machine Learning Advances for Time Series Forecasting
authors: Masini, Ricardo P.; Medeiros, Marcelo C.; Mendes, Eduardo F.
date: 2020-12-23

In this paper we survey the most recent advances in supervised machine learning and high-dimensional models for time series forecasting. We consider both linear and nonlinear alternatives. Among the linear methods we pay special attention to penalized regressions and ensembles of models. The nonlinear methods considered in the paper include shallow and deep neural networks, in their feed-forward and recurrent versions, and tree-based methods, such as random forests and boosted trees. We also consider ensemble and hybrid models that combine ingredients from different alternatives. Tests for superior predictive ability are briefly reviewed. Finally, we discuss applications of machine learning in economics and finance and provide an illustration with high-frequency financial data.

This paper surveys the recent developments in Machine Learning (ML) methods for economic and financial time series forecasting. ML methods have become an important estimation, model selection and forecasting tool for applied researchers in Economics and Finance. With the availability of vast datasets in the era of Big Data, producing reliable and robust forecasts is of great importance. However, what is Machine Learning? It is certainly a buzzword which has gained a lot of popularity during the last few years. There is a myriad of definitions in the literature, and one of the most well established is from the artificial intelligence pioneer Arthur L. Samuel, who defines ML as "the field of study that gives computers the ability to learn without being explicitly programmed." We prefer a less vague definition where ML is the combination of automated computer algorithms with powerful statistical methods to learn (discover) hidden patterns in rich datasets. In that sense, Statistical Learning Theory gives the statistical foundation of ML. Therefore, this paper is about Statistical Learning developments and not ML in general, as we are going to focus on statistical models. ML methods can be divided into three major groups: supervised, unsupervised, and reinforcement learning. This survey is about supervised learning, where the task is to learn a function that maps an input (explanatory variables) to an output (dependent variable) based on data organized as input-output pairs. Regression models, for example, belong to this class. On the other hand, unsupervised learning is a class of ML methods that uncovers undetected patterns in a data set with no pre-existing labels, as, for example, cluster analysis or data compression algorithms. Finally, in reinforcement learning, an agent learns to perform the actions in an environment that lead to maximum reward. It does so by exploration and exploitation of the knowledge it acquires through repeated trials of maximizing the reward. This is the core of several artificial intelligence game players (AlphaGo, for instance) as well as of sequential treatments, like bandit problems. The supervised ML methods presented here can be roughly divided into two groups. The first one includes linear models and is discussed in Section 2. We focus mainly on specifications estimated by regularization, also known as shrinkage. Such methods date back at least to Tikhonov (1943).
In Statistics and Econometrics, regularized estimators gained attention after the seminal papers by Willard James and Charles Stein, who popularized the bias-variance tradeoff in statistical estimation (Stein, 1956; James and Stein, 1961). We start by considering the Ridge Regression estimator put forward by Hoerl and Kennard (1970). After that, we present the Least Absolute Shrinkage and Selection Operator (LASSO) estimator of Tibshirani (1996) and its many extensions. We also include a discussion of other penalties. Theoretical derivations and inference for dependent data are also reviewed. The second group of ML techniques focuses on nonlinear models. We cover this topic in Section 3 and start by presenting a unified framework based on sieve semiparametric approximation as in Grenander (1981). We continue by analysing specific models as special cases of our general setup. More specifically, we cover feedforward neural networks, both in their shallow and deep versions, recurrent neural networks, and tree-based models such as random forests and boosted trees. Neural Networks (NN) are probably one of the most popular ML methods. The success is partly due to the, in our opinion, misguided analogy to the functioning of the human brain. Contrary to what has been boasted in the early literature, the empirical success of NN models comes from the mathematical fact that a linear combination of sufficiently many simple basis functions is able to approximate very complicated functions arbitrarily well under some specific choice of metric. Regression trees only achieved popularity after the development of algorithms to attenuate the instability of the estimated models. Algorithms like Random Forests and Boosted Trees are now in the toolbox of applied economists. In addition to the models mentioned above, we also include a survey on ensemble-based methods such as Bagging (Breiman, 1996) and the Complete Subset Regression (Elliott et al., 2013, 2015). Furthermore, we give a brief introduction to what we named "hybrid methods", where ideas from both linear and nonlinear models are combined to generate new ML forecasting methods. Before presenting an empirical illustration of the methods, we discuss tests of superior predictive ability in the context of ML methods. A quick word on notation: an uppercase letter as in X denotes a random quantity, as opposed to a lowercase letter x, which denotes a deterministic (non-random) quantity. Bold letters as in X and x are reserved for multivariate objects such as vectors and matrices. The symbol ‖·‖_q for q ≥ 1 denotes the ℓ_q norm of a vector. For a set S we use |S| to denote its cardinality. Given a sample with T realizations of the random vector (Y_t, Z_t')', the goal is to predict Y_{T+h} for horizons h = 1, . . . , H. Throughout the paper, we consider the following assumption. Assumption 1: {(Y_t, Z_t')'}_{t=1}^∞ is a covariance-stationary stochastic process taking values on R^{d+1}. Therefore, we are excluding important non-stationary processes that usually appear in time-series applications. In particular, unit-root and some types of long-memory processes are excluded by Assumption 1. For (usually predetermined) integers p ≥ 1 and r ≥ 0, define the n-dimensional vector of predictors X_t := (Y_{t−1}, . . . , Y_{t−p}, Z_t', . . . , Z_{t−r}')', where n = p + d(r + 1), and consider the following direct forecasting model:

Y_{t+h} = f_h(X_t) + U_{t+h},   h = 1, . . . , H,   t = 1, . . . , T,   (1.1)
where f_h : R^n → R is an unknown (measurable) function and U_{t+h} := Y_{t+h} − f_h(X_t) is assumed to have zero mean and finite variance (assuming the second moment of f_h(X_t) to be finite suffices for the finite variance). The model f_h could be the conditional expectation function, f_h(x) = E(Y_{t+h}|X_t = x), or simply the best linear projection of Y_{t+h} onto the space spanned by X_t. Regardless of the model choice, our target becomes f_h, for h = 1, . . . , H. As f_h is unknown, it should be estimated from data. The target function f_h can be a single model or an ensemble of different specifications and it can also change substantially for each forecasting horizon. Given an estimate f̂_h for f_h, the next step is to evaluate the forecasting method by estimating its prediction accuracy. Most measures of prediction accuracy derive from the random quantity ∆_h(X_t) := f̂_h(X_t) − f_h(X_t). For instance, the term prediction consistency refers to estimators such that ∆_h(X_t) →p 0 as T → ∞, where the probability is taken to be unconditional, as opposed to its conditional counterpart, which is given by ∆_h(x_t) →p 0, where the probability law is conditional on X_t = x_t. Clearly, if the latter holds for (almost) every x_t, then the former holds by the law of iterated expectation. Other measures of prediction accuracy can be derived from the L_q norm induced by either the unconditional probability law, E|∆_h(X_t)|^q, or the conditional one, E(|∆_h(X_t)|^q | X_t = x_t), for q ≥ 1. By far, the most used are the (conditional) mean absolute prediction error (MAPE) when q = 1, the (conditional) mean squared prediction error (MSPE) when q = 2, and the (conditional) root mean squared prediction error (RMSPE), which is simply the square root of the MSPE. Those measures of prediction accuracy based on the L_q norms are stronger than prediction consistency in the sense that convergence to zero, as the sample size increases, of any of them (q ≥ 1) implies prediction consistency by Markov's inequality. This approach stems from casting economic forecasting as a decision problem. Under the choice of a loss function, the goal is to select f_h from a family of candidate models that minimises the expected predictive loss or risk. Given an estimate f̂_h for f_h, the next step is to evaluate the forecasting method by estimating its risk. The most commonly used losses are the absolute error and squared error, corresponding to L_1 and L_2 risk functions, respectively. See Granger and Machina (2006) for a detailed exposition of this topic, Elliott and Timmermann (2008) for a discussion of the role of the loss function in forecasting, and Elliott and Timmermann (2016) for a more recent review. Apart from this brief introduction, the paper is organized as follows. Section 2 reviews penalized linear regression models. Nonlinear ML models are discussed in Section 3. Ensemble and hybrid methods are presented in Section 4. Section 5 briefly discusses tests for superior predictive ability. An empirical application is presented in Section 6. Finally, we conclude and discuss some directions for future research in Section 7. We consider the family of linear models where f(x) = β_0'x in (1.1) for a vector of unknown parameters β_0 ∈ R^n. Notice that we drop the subscript h for clarity. However, the model as well as the parameter β_0 have to be understood for a particular value of the forecasting horizon h.
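To make the direct forecasting setup of equation (1.1) concrete, the sketch below builds the predictor vector X_t from p lags of Y_t and from contemporaneous and lagged values of Z_t, together with the h-step-ahead target Y_{t+h}. It is only an illustration on simulated data; the helper name direct_forecast_frame and the choices of p, r and h are ours, not from the paper.

```python
import numpy as np
import pandas as pd

def direct_forecast_frame(y, Z, p=4, r=0, h=1):
    """Build X_t = (Y_{t-1},...,Y_{t-p}, Z_t,...,Z_{t-r}) and the direct
    h-step-ahead target Y_{t+h}, as in equation (1.1)."""
    y = pd.Series(y, name="y")
    Z = pd.DataFrame(Z)
    cols = {}
    for lag in range(1, p + 1):                 # lags of the dependent variable
        cols[f"y_lag{lag}"] = y.shift(lag)
    for j in Z.columns:
        for lag in range(0, r + 1):             # contemporaneous and lagged covariates
            cols[f"{j}_lag{lag}"] = Z[j].shift(lag)
    X = pd.DataFrame(cols)
    target = y.shift(-h).rename(f"y_lead{h}")   # Y_{t+h}
    df = pd.concat([target, X], axis=1).dropna()
    return df.drop(columns=target.name), df[target.name]

# toy example with simulated data
rng = np.random.default_rng(0)
T, d = 200, 3
Z = pd.DataFrame(rng.normal(size=(T, d)), columns=[f"z{i}" for i in range(d)])
y = pd.Series(rng.normal(size=T)).rolling(3).mean().fillna(0) + rng.normal(scale=0.1, size=T)
X, y_h = direct_forecast_frame(y, Z, p=4, r=1, h=2)
print(X.shape, y_h.shape)
```

Any of the estimators discussed below can then be fit on the pair (X, y_h) returned by this helper, one horizon h at a time.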
These models contemplate a series of well-known specifications in time series analysis, such as predictive regressions, autoregressive models of order p, AR(p), autoregressive models with exogenous variables, ARX(p), autoregressive models with dynamic lags, ADL(p, r), among many others (Hamilton, 1994). In particular, (1.1) becomes

Y_{t+h} = β_0'X_t + U_{t+h},   (2.1)

where, under squared loss, β_0 is identified by the best linear projection of Y_{t+h} onto X_t, which is well defined whenever Σ := E(X_t X_t') is non-singular. In that case, U_{t+h} is orthogonal to X_t by construction, and this property is exploited to derive estimation procedures such as Ordinary Least Squares (OLS). However, when n > T (and sometimes n ≫ T) the OLS estimator is not unique as the sample counterpart of Σ is rank deficient. In fact, we can completely overfit whenever n ≥ T. Penalized linear regression arises in the setting where the regression parameter is not uniquely defined. It is usually the case when n is large, possibly larger than the number of observations T, and/or when covariates are highly correlated. The general idea is to restrict the solution of the OLS problem to a ball around the origin. It can be shown that, although biased, the restricted solution has smaller mean squared error when compared to the unrestricted OLS (Hastie et al., 2009, Ch. 3 and Ch. 6). In penalized regressions the estimator β̂ for the unknown parameter vector β_0 minimizes the Lagrangian form

β̂ := argmin_β { ‖Y − Xβ‖_2^2 + p(β) },   (2.2)

where Y := (Y_{h+1}, . . . , Y_T)', X := (X_1, . . . , X_{T−h})', and p(β) := p(β; λ, γ, Z) ≥ 0 is a penalty function that depends on a tuning parameter λ ≥ 0, which controls the trade-off between the goodness of fit and the regularization term. If λ = 0, we have the classical unrestricted regression, since p(β; 0, γ, X) = 0. The penalty function may also depend on a set of extra hyper-parameters γ, as well as on the data X. Naturally, the estimator β̂ also depends on the choice of λ and γ. Different choices for the penalty function were considered in the literature on penalized regression. The ridge regression was proposed by Hoerl and Kennard (1970) as a way to fight highly correlated regressors and stabilize the solution of the linear regression problem. The idea was to introduce a small bias but, in turn, reduce the variance of the estimator. The ridge regression is also known as a particular case of Tikhonov Regularization (Tikhonov, 1943, 1963; Tikhonov and Arsenin, 1977), in which the scale matrix is diagonal with identical entries. The ridge regression corresponds to penalizing the regression by the squared ℓ_2 norm of the parameter vector, i.e., the penalty in (2.2) is given by p(β) = λ ∑_{i=1}^{n} β_i^2 = λ‖β‖_2^2. Ridge regression has the advantage of having an easy-to-compute analytic solution, where the coefficients associated with the least relevant predictors are shrunk towards zero, but never reach exactly zero. Therefore, it cannot be used for selecting predictors, unless some truncation scheme is employed. The LASSO was proposed by Tibshirani (1996) and Chen et al. (2001) as a method to regularize and perform variable selection at the same time. LASSO is one of the most popular regularization methods and it is widely applied in data-rich environments where the number of features n is much larger than the number of observations. LASSO corresponds to penalizing the regression by the ℓ_1 norm of the parameter vector, i.e., the penalty in (2.2) is given by p(β) = λ ∑_{i=1}^{n} |β_i| = λ‖β‖_1. The solution of the LASSO is efficiently calculated by coordinate descent algorithms (Hastie et al., 2015, Ch. 5).
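The sketch below illustrates, on our own simulated design, how the ridge and LASSO penalties above can be used for forecasting with scikit-learn, including a two-step reweighting in the spirit of the adaptive LASSO discussed next. The cross-validation scheme, the weight formula, and the small constant added to avoid division by zero are illustrative choices, not the exact procedures analysed in the paper.

```python
import numpy as np
from sklearn.linear_model import LassoCV, RidgeCV
from sklearn.model_selection import TimeSeriesSplit

rng = np.random.default_rng(1)
T, n = 300, 50
X = rng.normal(size=(T, n))
beta = np.zeros(n)
beta[:5] = [1.0, -0.8, 0.6, 0.5, -0.4]          # sparse "true" coefficient vector
y = X @ beta + rng.normal(scale=0.5, size=T)

cv = TimeSeriesSplit(n_splits=5)                 # cross-validation that respects time ordering

ridge = RidgeCV(alphas=np.logspace(-3, 3, 25)).fit(X, y)
lasso = LassoCV(cv=cv, max_iter=20000).fit(X, y)
print("ridge coefficients set exactly to zero:", np.sum(ridge.coef_ == 0))
print("LASSO nonzero coefficients:", np.sum(lasso.coef_ != 0))

# Two-step reweighting in the spirit of the adaptive LASSO:
# absorb the weights omega_i = 1/|beta*_i| into the design and map the solution back.
w = 1.0 / (np.abs(lasso.coef_) + 1e-6)           # small constant avoids division by zero
ada = LassoCV(cv=cv, max_iter=20000).fit(X / w, y)
ada_coef = ada.coef_ / w
print("adaptive-step nonzero coefficients:", np.sum(ada_coef != 0))
```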
The ℓ_1 penalty is the smallest convex ℓ_p penalty norm that yields sparse solutions. We say the solution is sparse if only a subset of k < n coefficients is non-zero. In other words, only a subset of variables is selected by the method. Hence, LASSO is most useful when the total number of regressors n ≫ T and it is not feasible to test combinations of models. Despite its attractive properties, there are still limitations to the LASSO. A large number of alternative penalties have been proposed to keep its desired properties whilst overcoming its limitations. The adaptive LASSO (adaLASSO) was proposed by H. Zou (2006) and aimed to improve the LASSO regression by introducing a weight parameter coming from a first-step OLS regression. It also has sparse solutions and an efficient estimation algorithm, and it enjoys the oracle property, meaning that it has the same asymptotic distribution as the OLS conditional on knowing the variables that should enter the model. The adaLASSO penalty consists in using a weighted ℓ_1 penalty, p(β) = λ ∑_{i=1}^{n} ω_i |β_i|, where ω_i = |β*_i|^{−1} and β*_i is the coefficient from the first-step estimation (any consistent estimator of β_0). AdaLASSO can deal with many more variables than observations. Using the LASSO as the first-step estimator can be regarded as the two-step implementation of the local linear approximation in Fan et al. (2014) with a zero initial estimate. The elastic net (ElNet) was proposed by Zou and Hastie (2005) as a way of combining the strengths of the LASSO and ridge regression. While the ℓ_1 part of the method performs variable selection, the ℓ_2 part stabilizes the solution. This advantage is even more accentuated when correlations among predictors become high. As a consequence, there is a significant improvement in prediction accuracy over the LASSO (Zou and Zhang, 2009). The elastic-net penalty is a convex combination of the ℓ_1 and ℓ_2 penalties, p(β) = λ[α‖β‖_1 + (1 − α)‖β‖_2^2], where α ∈ [0, 1]. The elastic net has both the LASSO and ridge regression as special cases. Just like in the LASSO regression, the solution to the elastic-net problem is efficiently calculated by coordinate descent algorithms. Zou and Zhang (2009) propose the adaptive elastic net. The elastic net and adaLASSO improve the LASSO in distinct directions: the adaLASSO has the oracle property and the elastic net helps with the correlation among predictors. The adaptive elastic net combines the strengths of both methods. It is a combination of ridge and adaLASSO, where the first-step estimator comes from the elastic net. LASSO approaches became popular in sparse high-dimensional estimation problems largely due to their computational properties. Another very popular approach is the folded concave penalization of Fan and Li (2001). This approach covers a collection of penalty functions satisfying a set of properties. The penalties penalize parameters close to zero more than those that are further away, improving the performance of the method. In this way, the penalties are concave with respect to each |β_i|. One of the most popular formulations is the SCAD (smoothly clipped absolute deviation). Note that, unlike the LASSO, the penalty may depend on λ in a nonlinear way. We set the penalty in (2.2) to p(β) = ∑_{i=1}^{n} p_{λ,γ}(β_i), where

p_{λ,γ}(β_i) = λ|β_i| if |β_i| ≤ λ;  [2γλ|β_i| − β_i^2 − λ^2]/[2(γ − 1)] if λ < |β_i| ≤ γλ;  λ^2(γ + 1)/2 if |β_i| > γλ,

for γ > 2 and λ > 0. The SCAD penalty is identical to the LASSO penalty for small coefficients, but continuously relaxes the rate of penalization as the coefficient departs from zero. Unlike OLS or the LASSO, we have to solve a non-convex optimization problem that may have multiple minima and is computationally more intensive than the LASSO. Nevertheless, Fan et al.
(2014) showed how to calculate the oracle estimator using an iterative Local Linear Approximation algorithm. Regularization imposes a restriction on the solution space, possibly imposing sparsity. In a data-rich environment it is a desirable property as it is likely that many regressors are not relevant to our prediction problem. The presentation above concentrates on the, possibly, most used penalties in time series forecasting. Nevertheless, there are many alternative penalties that can be used in regularized linear models. The group LASSO, proposed by Yuan and Lin (2006) , penalizes the parameters in groups, combining the 1 and 2 norms. It is motivated by the problem of identifying "factors", denoted by groups of regressors as, for instance, in regression with categorical variables that can assume many values. Let G = {g 1 , ..., g M } denote a partition of {1, ..., n} and β g i = [β i : i ∈ g i ] the corresponding regression sub-vector. The group lasso assign to (2.2) where |g i | is the cardinality of set g i . The solution is efficiently estimated using, for instance, the group-wise majorization-descent algorithm Yang and H. Zou (2015) . Naturally, the adaptive group LASSO was also proposed aiming to improve some of the limitations present on the group LASSO algorithm Wang and Leng (2008) . In the group LASSO, the groups enter or not in the regression. The sparse group LASSO recover sparse groups by combining the group LASSO penalty with the L 1 penalty on the parameter vector (Simon et al., 2013) . Park and Sakaori (2013) modify the adaptive lasso penalty to explicitly take into account lag information. Konzen and Ziegelmann (2016) propose a small change in penalty and perform a large simulation study to asses the performance of this penalty in distinct settings. They observe that taking into account lag information improves model selection and forecasting performance when compared to the LASSO and adaLASSO. They apply their method to forecasting inflation and risk premium with satisfactory results. There is a Bayesian interpretation to the regularization methods presented here. The ridge regression can be also seen as a maximum a posteriori estimator of a Gaussian linear regression with independent, equivariant, Gaussian priors. The LASSO replaces the Gaussian prior by a Laplace prior (Park and Casella, 2008; Hans, 2009 ). These methods fall within the area of Bayesian Shrinkage methods, which is a very large and active research area, and it is beyond the scope of this survey. In this section we give an overview of the theoretical properties of penalized regression estimators previously discussed. Most results in high-dimensional time series estimation focus on model selection consistency, oracle property and oracle bounds, for both the finite dimension (n fixed, but possibly larger than T ) and high-dimension (n increases with T , usually faster). More precisely, suppose there is a population, parameter vector β 0 that minimizes equation (2.1) over repeated samples. Suppose this parameter is sparse in a sense that only components indexed by S 0 ⊂ {1, ..., n} are non-null. Let S 0 := {j : β j = 0}. We say a method is model selection consistent if the index of non-zero estimated components converges to S 0 in probability. 5 Consistency can also be stated in terms of how close the estimator is to true parameter for a given norm. 
We say that the estimation method is L q -consistent if for every > 0: It is important to note that model selection consistency does not imply, nor it is implied by, L q -consistency. As a matter of fact, one usually have to impose specific assumptions to achieve each of those modes of convergence. Model selection performance of a given estimation procedure can be further broke down in terms of how many relevant variables j ∈ S 0 are included in the model (screening). Or how many irrelevant variables j / ∈ S 0 are excluded from the model. In terms of probability, model screening consistency is defined by P( S 0 ⊇ S 0 ) → 1 and model exclusion consistency defined by We say a penalized estimator has the oracle property if its asymptotic distribution is the same as the unpenalized one only considering the S 0 regressors. Finally, oracle risk bounds are finite sample bounds on the estimation error of β that hold with high probability. These bounds require relatively strong conditions on the curvature of objective function, which translates into a bound on the minimum restricted eigenvalue of the covariance matrix among predictors for linear models and a rate condition on λ that involves the number of non-zero parameters, |S 0 |. The LASSO was originally developed in fixed design with independent and identically distributed (IID) errors, but it has been extended and adapted to a large set of models and designs. Knight and Fu (2000) was probably the first paper to consider the asymptotics of the LASSO estimator. The authors consider fixed design and fixed n framework. From their results, it is clear that the distribution of the parameters related to the irrelevant variables is non-Gaussian. To our knowledge, the first work expanding the results to a dependent setting was Wang et al. (2007) , where the error term was allowed to follow an autoregressive process. Authors show that LASSO is model selection consistent, whereas a modified LASSO, similar to the adaLASSO, is both model selection consistent and has the oracle property. Nardi and Rinaldo (2011) shows model selection consistency and prediction consistency for lag selection in autoregressive models. Chan and Chen (2011) shows oracle properties and model selection consistency for lag selection in ARMA models. Yoon et al. (2013) derives model selection consistency and asymptotic distribution of the LASSO, adaLASSO and SCAD, for penalized regressions with autoregressive error terms. Sang and Sun (2015) studies lag estimation of autoregressive processes with long memory innovations using general penalties and show model selection consistency and asymptotic distribution for the LASSO and SCAD as particular cases. Kock (2016) shows model selection consistency and oracle property of adaLASSO for lag selection in stationary and integrated processes. All results above hold for the case of fixed number of regressors or relatively high-dimension, meaning that n/T → 0. In sparse, high-dimensional, stationary univariate time-series settings, where n → ∞ at some rate faster than T , Mendes (2016, 2017) show model selection consistency and oracle property of a large set of linear time series models with difference martingale, strong mixing, and non-Gaussian innovations. It includes, predictive regressions, autoregressive models AR(p), autoregressive models with exogenous variables ARX(p), autoregressive models with dynamic lags ADL(p, r), with possibly conditionally heteroscedastic errors. Xie et al. 
(2017) shows oracle bounds for fixed design regression with β-mixing errors. Wu and Wu (2016) derive oracle bounds for the LASSO on regression with fixed design and weak dependent innovations, in a sense of Wu (2005) , whereas Han and Tsay (2020) show model selection consistency for linear regression with random design and weak sparsity 6 under serially dependent errors and covariates, within the same weak dependence framework. Xue and Taniguchi (2020) show model selection consistency and parameter consistency for a modified version of the LASSO in time series regressions with long memory innovations. Fan and Li (2001) shows model selection consistency and oracle property for the folded concave penalty estimators in a fixed dimensional setting. Kim et al. (2008) showed that the SCAD also enjoys these properties in high-dimensions. In time-series settings, Uematsu and Tanaka (2019) shows oracle properties and model selection consistency in time series models with dependent regressors. Lederer et al. (2019) derived oracle prediction bounds for many penalized regression problems. The authors conclude that generic high dimensional penalized estimators provide consistent prediction with any design matrix. Although the results are not directly focused on time series problems, they are general enough to hold in such setting. Babii et al. (2020c) proposed the sparse-group LASSO as an estimation technique when 6 Weak sparsity generalizes sparsity by supposing that coefficients are (very) small instead of exactly zero. high-dimensional time series data are potentially sampled at different frequencies. The authors derived oracle inequalities for the sparse-group LASSO estimator within a framework where distribution of the data may have heavy tails. Two frameworks not directly considered in this survey but of great empirical relevance are nonstationary environments and multivariate models. In sparse, high-dimensional, integrated time series settings, Lee and Z. Shi (2020) and Koo et al. (2020) show model selection consistency and derive the asymptotic distributions of LASSO estimators and some variants. Smeeks and Wijler (2020) proposed the Single-equation Penalized Error Correction Selector (SPECS), which is an automated estimation procedure for dynamic single-equation models with a large number of potentially co-integrated variables. In sparse multivariate time series, Hsu et al. (2008) shows model selection consistency in VAR models with white-noise shocks. Ren and Zhang (2010) uses adaLASSO in a similar setting, showing both model selection consistency and oracle property. Afterwards, Callot et al. (2013) show model selection consistency and oracle property of the adaptive Group LASSO. In high dimensional settings, where the dimension of the series increase with the number of observations, Kock and Callot (2015) ; Basu and Michailidis (2015) shows oracle bounds and model selection consistency for the LASSO in Gaussian V AR(p) models, extending previous works. Melnyk and Banerjee (2016) extended these results for a large collection of penalties. Zhu (2020) derive oracle estimation bounds for folded concave penalties for Gaussian V AR(p) models in high dimensions. More recently researchers have departed from gaussianity and correct model specification. Wong et al. (2020) derived finite-sample guarantees for the LASSO in a misspecified VAR model involving β-mixing process with sub-Weibull marginal distributions. Masini et al. 
(2019) derive equation-wise error bounds for the LASSO estimator of weakly sparse V AR(p) in mixingale dependence settings, that include models with conditionally heteroscedastic innovations. Although several papers derived the asymptotic properties of penalized estimators as well as the oracle property, these results have been derived under the assumption that the true non-zero coefficients are large enough. This condition is known as the β-min restriction. Furthermore, model selection, such as the choice of the penalty parameter, has not been taken into account. Therefore, the true limit distribution, derived under uniform asymptotics and without the βmin restriction can bee very different from Gaussian, being even bimodal; see, for instance, Leeb and Pötscher (2005) , Leeb and Pötscher (2008) , and Belloni et al. (2014) for a detailed discussion. Inference after model selection is actually a very active area of research and a vast number of papers have recently appeared in the literature. van de Geer et al. (2014) proposed the desparsified LASSO in order to construct (asymptotically) a valid confidence interval for each β j,0 by modifying the original LASSO estimate β. Let Σ * be an approximation for the inverse of Σ := E(X t X t ), then the desparsified LASSO is defined as β := β + Σ * (Y − X β)/T . The addition of this extra term to the LASSO estimator results in an unbiased estimator that no longer estimate any coefficient exactly as zero. More importantly, asymptotic normality can be recover in the sense that √ T ( β i − β i,0 ) converges in distribution to a Gaussian distribution under appropriate regularity conditions. Not surprisingly, the most important condition is how well Σ −1 can be approximated by Σ * . In particular, the authors propose to run n LASSO regressions of X i onto X −i := (X 1 , . . . , X i−1 , X i+1 , . . . , X n ), for 1 ≤ i ≤ n. The authors named this process as nodewide regressions, and use those estimates to construct Σ * (refer to Section 2. where the interest lies on the the scalar parameter β 01 and X (2) t is a high-dimensional vector of control variables. The procedure consists in obtaining an estimation of the active (relevant) regressors in the high-dimension auxiliary regressions of Y t on X (2) and of X (1) t on X (2) t , given by S 1 and S 2 , respectively. 7 This can be obtained either by LASSO or any other estimation procedure. Once the set S := S 1 ∪ S 2 is identified, the (a priori) estimated non-zero parameters can by estimated by a low-dimensional regression Y t on X (1) t and {X (2) it : i ∈ S}. The main result (Theorem 1 of Belloni et al. (2014)) states conditions under which the estimator β 01 of the parameter of interest properly studentized is asymptotically normal. Therefore, uniformly valid asymptotic confidence intervals for β 01 can be constructed in the usual fashion. Similar to Taylor et al. (2014) and Lockhart et al. (2014) , Lee et al. (2016) put forward general approach to valid inference after model selection. The idea is to characterize the distribution of a post-selection estimator conditioned on the selection event. More specifically, the authors argue that the post-selection confidence intervals for regression coefficients should have the correct coverage conditional on the selected model. The specific case of the LASSO estimator is discussed in details. The main difference between Lee et al. (2016) and Taylor et al. (2014) and Lockhart et al. 
(2014) is that in the former, confidence intervals can be formed at any value of the LASSO penalty parameter and for any coefficient in the model. Finally, it is important to stress that Lee et al. (2016) inference is carried out on the coefficients of the selected model, while van de Geer et al. (2014) and Belloni et al. (2014) consider inference on the coefficients of the true model. More specifically, Babii et al. (2020a) consider inference in time-series regression models under heteroskedastic and autocorrelated errors. The authors consider heteroskedasticity- and autocorrelation-consistent (HAC) estimation with the sparse group-LASSO. They propose a debiased central limit theorem for low-dimensional groups of regression coefficients and study the HAC estimator of the long-run variance based on the sparse-group LASSO residuals. (Footnote 7: the relevant regressors are the ones associated with non-zero parameter estimates.) Adámek et al. (2020) extend the desparsified LASSO to a time-series setting under near-epoch dependence assumptions, allowing for non-Gaussian, serially correlated and heteroskedastic processes. Furthermore, the number of regressors can possibly grow faster than the sample size. The function f_h appearing in (1.1) is unknown and in several applications the linearity assumption is too restrictive, so more flexible forms must be considered. Assuming a quadratic loss function, the estimation problem turns out to be the minimization of the functional

min_{f ∈ G}  ∑_{t=1}^{T−h} [Y_{t+h} − f(X_t)]^2,   (3.1)

where G is a generic function space. However, the optimization problem stated in (3.1) is infeasible when G is infinite dimensional, as there is no efficient technique to search over all of G. Of course, one solution is to restrict the function space, for instance, by imposing linearity or specific forms of parametric nonlinear models as in, for example, Teräsvirta (1994), Suarez-Fariñas et al. (2004) or McAleer and Medeiros (2008); see also Teräsvirta et al. (2010) for a recent review of such models. Alternatively, we can replace G by a simpler and finite-dimensional G_D. The idea is to consider a sequence of finite-dimensional spaces, the sieve spaces, G_D, D = 1, 2, 3, . . . , that converges to G in some norm. The approximating function g_D(X_t) is written as

g_D(X_t) = ∑_{j=1}^{J} β_j g_j(X_t),

where g_j(·) is the j-th basis function for G_D and can be either fully known or indexed by a vector of parameters, such that g_j(X_t) := g(X_t; θ_j). The number of basis functions J := J_T will depend on the sample size T. D is the dimension of the space and it also depends on the sample size: D := D_T. Therefore, the optimization problem is then modified to

min_{g_D ∈ G_D}  ∑_{t=1}^{T−h} [Y_{t+h} − g_D(X_t)]^2.   (3.2)

The sequence of approximating spaces G_D is chosen by using the structure of the original underlying space G and the fundamental concept of dense sets. If we have two subsets A and B of a metric space X, A is dense in B if for any ε > 0 and x ∈ B there is a y ∈ A such that ‖x − y‖_X < ε. This is called the method of sieves. For a comprehensive review of the method for time-series data, see Chen (2007). For example, from the theory of approximating functions we know that the proper subset P ⊂ C of polynomials is dense in C, the space of continuous functions. The set of polynomials is smaller and simpler than the set of all continuous functions. In this case, it is natural to define the sequence of approximating spaces G_D, D = 1, 2, 3, . . . , by making G_D the set of polynomials of degree smaller than or equal to D − 1 (including a constant in the parameter space). Note that dim(G_D) = D < ∞.
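As a toy illustration of a linear sieve, the snippet below fits polynomial approximations of increasing degree J by OLS, which is feasible as long as J ≪ T. The data-generating function and the degrees tried are arbitrary choices made purely for illustration.

```python
import numpy as np

# Approximate an unknown smooth function with a polynomial sieve of growing degree.
rng = np.random.default_rng(2)
T = 400
x = rng.uniform(-2, 2, size=T)
y = np.sin(1.5 * x) + 0.3 * rng.normal(size=T)    # unknown f plus noise

def fit_poly_sieve(x, y, J):
    """OLS on the polynomial basis {1, x, ..., x^J}; feasible when J << T."""
    XJ = np.vander(x, N=J + 1, increasing=True)   # T x (J+1) design matrix
    beta, *_ = np.linalg.lstsq(XJ, y, rcond=None)
    return beta

for J in (1, 3, 5, 9):
    beta = fit_poly_sieve(x, y, J)
    fitted = np.vander(x, N=J + 1, increasing=True) @ beta
    mse = np.mean((y - fitted) ** 2)
    print(f"degree J={J}: in-sample MSE = {mse:.3f}")
```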
In the limit this sequence of finite-dimensional spaces converges to the infinite-dimensional space of polynomials, which in turn is dense in C. When the basis functions are all known (linear sieves), the problem is linear in the parameters and methods like ordinary least squares (when J ≪ T) or penalized estimation as previously described can be used. For example, let p = 1 and pick a polynomial basis such that g_D(X_t) = β_0 + β_1 Y_{t−1} + β_2 Y_{t−1}^2 + · · · + β_J Y_{t−1}^J. In this case, the dimension D of G_D is J + 1, due to the presence of a constant term. If J ≪ T, the vector of parameters β = (β_0, β_1, . . . , β_J)' can be estimated by β̂ = (X_J'X_J)^{−1}X_J'Y, where X_J is the T × (J + 1) design matrix and Y = (Y_1, . . . , Y_T)'. When the basis functions are also indexed by parameters (nonlinear sieves), nonlinear least-squares methods should be used. In this paper we will focus on frequently used nonlinear sieves: neural networks and regression trees. Neural Networks (NN) are one of the most traditional nonlinear sieves. NNs can be classified into shallow or deep networks. We start by describing shallow NNs. The most common shallow NN is the feedforward neural network, where the approximating function g_D(X_t) is defined as

g_D(X_t) = β_0 + ∑_{j=1}^{J_T} β_j S(γ̃_j'X̃_t).   (3.3)

In the above model, X̃_t = (1, X_t')', S(·) is a basis function, and the parameter vector to be estimated is given by θ = (β_0, . . . , β_{J_T}, γ_1', . . . , γ_{J_T}', γ_{0,1}, . . . , γ_{0,J_T})', where γ̃_j = (γ_{0,j}, γ_j')'. NN models form a very popular class of nonlinear sieves and have been used in many applications of economic forecasting. Usually, the basis functions S(·) are called activation functions and the parameters are called weights. The terms in the sum are called hidden neurons, as an unfortunate analogy to the human brain. Specification (3.3) is also known as a single-hidden-layer NN model and is usually represented graphically as in Figure 1. The green circles in the figure represent the input layer. In the example in the figure there are four input variables. The blue and red circles indicate the hidden and output layers, respectively. In the example, there are five elements (neurons) in the hidden layer. The arrows from the green to the blue circles represent the linear combination of inputs: γ_j'X_t + γ_{0,j}, j = 1, . . . , 5. Finally, the arrows from the blue to the red circles represent the linear combination of outputs from the hidden layer: β_0 + ∑_{j=1}^{5} β_j S(γ_j'X_t + γ_{0,j}). There are several possible choices for the activation functions. In the early days, S(·) was chosen among the class of squashing functions, that is, non-decreasing functions mapping the real line into [0, 1] with lim_{x→−∞} S(x) = 0 and lim_{x→∞} S(x) = 1. Historically, the most popular choices are the logistic and hyperbolic tangent functions, such that: Logistic: S(x) = 1/(1 + exp(−x)); Hyperbolic tangent: S(x) = (exp(x) − exp(−x))/(exp(x) + exp(−x)). The popularity of such functions was partially due to theoretical results on function approximation. Funahashi (1989) establishes that NN models as in (3.3) with generic squashing functions are capable of approximating any continuous function from one finite-dimensional space to another to any desired degree of accuracy, provided that J_T is sufficiently large. Cybenko (1989) and Hornik et al. (1989) simultaneously proved approximation capabilities of NN models to any Borel measurable function, and Hornik et al. (1989) extended the previous results and showed that NN models are also capable of approximating the derivatives of the unknown function. Barron (1993) relates previous results to the number of terms in the model. Stinchcombe and White (1989) and Park and Sandberg (1991) derived the same results as Cybenko (1989) and Hornik et al. (1989), but without requiring the activation function to be sigmoid.
While the former considered a very general class of functions, the latter focused on radial-basis functions (RBF), defined as: Radial Basis: S(x) = exp(−x^2). More recently, Yarotsky (2017) showed that rectified linear units (ReLU), defined as: Rectified Linear Unit: S(x) = max(0, x), are also universal approximators. Model (3.3) can be written in matrix notation. Let Γ = (γ̃_1, . . . , γ̃_{J_T}). Therefore, by defining β = (β_0, β_1, . . . , β_{J_T})', the output of a feed-forward NN is given by:

g_D(X_t) = β'[1, S(Γ'X̃_t)']',

where S(·) is applied element-wise. The dimension of the parameter vector θ = [vec(Γ)', β']' is k = (n + 1) × J_T + (J_T + 1) and can easily get very large, such that the unrestricted estimation problem, defined as minimizing ∑_t [Y_{t+h} − g_D(X_t)]^2 over θ, is unfeasible. A solution is to use regularization, as in the case of linear models, and consider the minimization of the following function:

Q(θ) = ∑_t [Y_{t+h} − g_D(X_t)]^2 + p(θ),   (3.5)

where usually p(θ) = λθ'θ. Traditionally, the most common approach to minimize (3.5) is to use Bayesian methods as in MacKay (1992) and Foresee and Hagan (1997). A more modern approach is to use a technique known as Dropout (Srivastava et al., 2014). The key idea is to randomly drop neurons (along with their connections) from the neural network during estimation. A NN with J_T neurons in the hidden layer can generate 2^{J_T} possible "thinned" NNs by just removing some neurons. Dropout samples from these 2^{J_T} different thinned NNs and trains the sampled NN. To predict the target variable, we use a single unthinned network that has weights adjusted by the probability law induced by the random drop. This procedure significantly reduces overfitting and gives major improvements over other regularization methods. More precisely, a thinned approximation g*_D(X_t) is obtained by randomly masking the constant term, the hidden units, and the inputs, where s, v, and r = (r_1, . . . , r_n) are independent Bernoulli random variables, each with probability q of being equal to 1. The NN model is thus estimated by using g*_D(X_t) instead of g_D(X_t), where, for each training example, the values of the entries of r are drawn from the Bernoulli distribution. The final estimates for β_j, γ_j, and γ_{0,j} are multiplied by q. A Deep Neural Network model is a straightforward generalization of specification (3.3) where more hidden layers are included in the model, as represented in Figure 2. In the figure we represent a Deep NN with two hidden layers with the same number of hidden units in each. However, the number of hidden neurons can vary across layers. As pointed out in Mhaskar et al. (2017), while the universal approximation property holds for shallow NNs, deep networks can approximate the class of compositional functions as well as shallow networks, but with an exponentially lower number of training parameters and sample complexity. Set J_ℓ as the number of hidden units in layer ℓ ∈ {1, . . . , L}. For each hidden layer ℓ define Γ_ℓ = (γ̃_{ℓ1}, . . . , γ̃_{ℓJ_ℓ}). Then the output O_ℓ of layer ℓ is given recursively by O_ℓ = S(Γ_ℓ'Õ_{ℓ−1}), where Õ_{ℓ−1} := (1, O_{ℓ−1}')', O_0 := X, and S(·) is applied element-wise. Therefore, the output of the Deep NN is the composition of the layers, g_D(X) = β'(1, O_L')'. The estimation of the parameters is usually carried out by stochastic gradient descent methods with dropout to control the complexity of the model. Broadly speaking, Recurrent Neural Networks (RNNs) are NNs that allow for feedback among the hidden layers. RNNs can use their internal state (memory) to process sequences of inputs. In the framework considered in this paper, a generic RNN could be written as

H_t = f(H_{t−1}, X_t),   Ŷ_{t+h|t} = g(H_t),

where Ŷ_{t+h|t} is the prediction of Y_{t+h} given observations only up to time t, f and g are functions to be defined, and H_t is what we call the (hidden) state. From a time-series perspective, RNNs can be seen as a kind of nonlinear state-space model.
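The recursion H_t = f(H_{t−1}, X_t), Ŷ_{t+h|t} = g(H_t) can be made concrete in a few lines of NumPy. The sketch below only runs the forward pass of an Elman-type RNN with randomly drawn (not estimated) parameters, to show how the hidden state accumulates information over time; in practice the weights would be estimated by gradient-based methods, which is where the difficulties discussed next arise.

```python
import numpy as np

# Forward pass of a generic (Elman-type) RNN: H_t = f(H_{t-1}, X_t), Yhat_{t+h|t} = g(H_t).
rng = np.random.default_rng(3)
n, m, T = 4, 8, 100            # n inputs, m hidden units, T time periods
X = rng.normal(size=(T, n))    # sequence of input vectors

# randomly initialized parameters (in practice estimated by gradient-based methods)
W = rng.normal(scale=0.3, size=(m, n))
U = rng.normal(scale=0.3, size=(m, m))
b = np.zeros(m)
w_out = rng.normal(scale=0.3, size=m)

H = np.zeros(m)                # initial hidden state H_0 = 0
preds = np.empty(T)
for t in range(T):
    H = np.tanh(W @ X[t] + U @ H + b)   # state update: memory of past inputs
    preds[t] = w_out @ H                # prediction as a function of the state

print(preds[:5])
```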
RNNs can remember the order in which the inputs appear through their hidden state (memory), and they can also model sequences of data so that each sample can be assumed to be dependent on previous ones, as in time series models. However, RNNs are hard to estimate as they suffer from the vanishing/exploding gradient problem. Set the cost function to be Q_T(θ) = ∑_t [Y_{t+h} − Ŷ_{t+h|t}]^2, where θ is the vector of parameters to be estimated. It is easy to show that the gradient ∂Q_T(θ)/∂θ can be very small or diverge. Fortunately, there is a solution to the problem proposed by Hochreiter and Schmidhuber (1997): a variant of RNN called the Long Short-Term Memory (LSTM) network. Figure 3 shows the architecture of a typical LSTM layer. A LSTM network can be composed of several layers. In the figure, red circles indicate logistic activation functions, while blue circles represent hyperbolic tangent activations. The symbols "X" and "+" represent, respectively, the element-wise multiplication and sum operations. The LSTM layer is composed of several blocks: the cell state and the forget, input, and output gates. The cell state introduces a bit of memory to the LSTM so it can "remember" the past. The LSTM learns to keep only the relevant information to make predictions and to forget non-relevant data. The forget gate tells which information to throw away from the cell state. The output gate provides the activation to the final output of the LSTM block at time t. Usually, the dimension of the hidden state (H_t) is associated with the number of hidden neurons. The forget gate (f_t) combines the previous hidden state (H_{t−1}) with the new information (X_t). Note that f_t ∈ [0, 1] and it will attenuate the signal coming from c_{t−1}. The input and output gates have the same structure. Their function is to filter the "relevant" information from the previous time period as well as from the new input. p_t scales the combination of inputs and previous information. This signal will then be combined with the output of the input gate (i_t). The new hidden state will be an attenuation of the signal coming from the output gate. Finally, the prediction is a linear combination of hidden states. Figure 4 illustrates how the information flows in a LSTM cell. Mathematically, the LSTM cell can be defined by the following algorithm: 1. Initiate with c_0 = 0 and H_0 = 0. 2. Given the input X_t, for t ∈ {1, . . . , T}, do:

f_t = Logistic(W_f X_t + U_f H_{t−1} + b_f),
i_t = Logistic(W_i X_t + U_i H_{t−1} + b_i),
o_t = Logistic(W_o X_t + U_o H_{t−1} + b_o),
p_t = Tanh(W_c X_t + U_c H_{t−1} + b_c),
c_t = f_t ⊙ c_{t−1} + i_t ⊙ p_t,
H_t = o_t ⊙ Tanh(c_t),

where ⊙ denotes the element-wise (Hadamard) product and W_f, U_f, b_f, W_i, U_i, b_i, W_o, U_o, b_o, W_c, U_c, and b_c are parameters to be estimated. A regression tree is a nonparametric model that approximates an unknown nonlinear function f_h(X_t) in (1.1) with local predictions using recursive partitioning of the space of the covariates. A tree may be represented by a graph as in the left side of Figure 5, which is equivalent to the partitioning in the right side of the figure for this bi-dimensional case. For example, suppose that we want to predict the scores of basketball players based on their height and weight. The first node of the tree in the example splits the players taller than 1.85m from the shorter players. The second node on the left takes the group of shorter players and splits them by weight, and the second node on the right does the same with the taller players. The prediction for each group is displayed in the terminal nodes and is calculated as the average score in each group. To grow a tree we must find the optimal splitting point in each node, which consists of an optimal variable and an optimal observation. In the same example, the optimal variable in the first node is height and the observation is 1.85m.
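A minimal version of this example can be fit with scikit-learn's DecisionTreeRegressor. The data below are simulated by us purely to mimic the story in the text, and the printed rules show the split variables, the thresholds, and the leaf averages used as predictions.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor, export_text

# Hypothetical data mimicking the example: predict scores from height (m) and weight (kg).
rng = np.random.default_rng(4)
n_players = 200
height = rng.uniform(1.60, 2.10, n_players)
weight = rng.uniform(60, 120, n_players)
score = 10 + 15 * (height > 1.85) + 0.05 * weight + rng.normal(scale=2, size=n_players)

X = np.column_stack([height, weight])
tree = DecisionTreeRegressor(max_depth=2, min_samples_leaf=20).fit(X, score)

# Each terminal node (leaf) predicts the average score of the observations it contains.
print(export_text(tree, feature_names=["height", "weight"]))
```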
The idea of regression trees is to approximate f_h(X_t) by

g_D(X_t) = ∑_{j=1}^{J_T} β_j I_j(X_t),

where I_j(X_t) equals one if X_t belongs to the j-th terminal region of the partition and zero otherwise. From the above expression, it becomes clear that the approximation of f_h(·) is equivalent to a linear regression on J_T dummy variables, where I_j(X_t) is a product of indicator functions. Let J := J_T and N := N_T be, respectively, the numbers of terminal nodes (regions, leaves) and of parent (split) nodes. Each parent node has an associated threshold (split) variable, X_{s_j t}, where s_j ∈ S = {1, 2, . . . , p}. Define J and T as the sets of parent and terminal nodes, respectively. Figure 6 gives an example. In the example, the parent nodes are J = {0, 2, 5} and the terminal nodes are T = {1, 6, 11, 12}. Therefore, we can write the approximating model as a linear combination over the terminal nodes, g_D(X_t) = ∑_{i ∈ T} β_i I_i(X_t), where each I_i(X_t) is the product of the indicator functions of the splits leading to terminal node i. Random Forest (RF) is a collection of regression trees, each specified in a bootstrap sample of the original data. The method was originally proposed by Breiman (2001). Since we are dealing with time series, we use a block bootstrap. Suppose there are B bootstrap samples. For each sample b, b = 1, . . . , B, a tree with K_b regions is estimated for a randomly selected subset of the original regressors. K_b is determined in order to leave a minimum number of observations in each region. The final forecast is the average of the forecasts of each tree applied to the original data:

f̂_h(X_t) = (1/B) ∑_{b=1}^{B} f̂_{h,b}(X_t),

where f̂_{h,b}(X_t) is the forecast from the tree estimated on the b-th bootstrap sample. The theory for RF models has been developed only for independent and identically distributed random variables. For instance, Scornet et al. (2015) prove consistency of the RF approximation to the unknown function f_h(X_t). More recently, Wager and Athey (2018) proved consistency and asymptotic normality of the RF estimator. Boosting is another greedy method to approximate nonlinear functions that uses base learners for a sequential approximation. The model we consider here, called Gradient Boosting, was introduced by Friedman (2001) and can be seen as a Gradient Descent method in functional space. The study of the statistical properties of Gradient Boosting is well developed for independent data. For example, for regression problems, Duffy and Helmbold (2002) derived bounds on the convergence of boosting algorithms using assumptions on the performance of the base learner. Zhang and Yu (2005) prove convergence, consistency, and results on the speed of convergence with mild assumptions on the base learners. Bühlmann (2002) shows similar results for consistency in the case of ℓ_2 loss functions and tree base models. Since running boosting indefinitely leads to overfitting problems, some authors have demonstrated the consistency of boosting with different types of stopping rules, which are usually related to small step sizes, as suggested by Friedman (2001). Some of these works include boosting in classification problems and gradient boosting for both classification and regression problems. See, for instance, Jiang (2004); Lugosi and Vayatis (2004); Bartlett and Traskin (2007); Zhang and Yu (2005); Bühlmann (2006); Bühlmann (2002). Boosting is an iterative algorithm. The idea of boosted trees is to, at each iteration, sequentially refit the gradient of the loss function by small trees. In the case of the quadratic loss considered in this paper, the algorithm simply refits the residuals from the previous iteration. Algorithm 2 presents the simplified boosting procedure for a quadratic loss. It is recommended to use a shrinkage parameter v ∈ (0, 1] to control the learning rate of the algorithm. If v is close to 1, we have a faster convergence rate and a better in-sample fit.
However, we are more likely to have over-fitting and produce poor out-of-sample results. Additionally, the derivative is highly affected by over-fitting, even if we look at in-sample estimates. A learning rate between 0.1 and 0.2 is recommended to maintain a reasonable convergence ratio and to limit over-fitting problems. Algorithm 2 (Boosting with quadratic loss). The boosting algorithm is defined by the following steps. 1. Initialize the fitted values with, for instance, the sample average of the dependent variable, and compute the initial residuals. 2. For m = 1, . . . , M: (a) fit a small regression tree to the current residuals; (b) update the fitted values by adding the new tree scaled by the learning rate v; (c) update the residuals. The final fitted value may be written as the initial value plus v times the sum of the M fitted trees. Conducting inference in nonlinear ML methods is tricky. One possible way is to follow Medeiros et al. (2006), Medeiros and Veiga (2005) and Suarez-Fariñas et al. (2004) and interpret particular nonlinear ML specifications as parametric models, as, for example, general forms of smooth transition regressions. However, this approach restricts the application of ML methods to very specific settings. An alternative is to consider models that can be cast in the sieve framework described earlier. This is the case of splines and feed-forward NNs, for example. In this setup, Chen and Shen (1998) and Chen (2007) derived, under regularity conditions, the consistency and asymptotic normality of the estimates of semi-parametric sieve approximations. Their setup is a partially linear model in which the dependent variable is explained by a linear component plus a nonlinear function f(X_t) that is nonparametrically modeled by sieve approximations. Chen and Shen (1998) and Chen (2007) consider both the estimation of the linear and nonlinear components of the model. However, their results are derived for the case where the dimension of X_t is fixed. Recently, Chernozhukov et al. (2017) and Chernozhukov et al. (2018) considered the case where the number of covariates diverges as the sample size increases in a very general setup. In this case the asymptotic results in Chen and Shen (1998) and Chen (2007) are not valid and the authors put forward the so-called double ML methods as a nice generalization of the results of Belloni et al. (2014). For Deep Neural Networks, Farrell et al. (2021) consider semiparametric inference and establish nonasymptotic high-probability bounds. Consequently, the authors are able to derive rates of convergence that are sufficiently fast to allow them to establish valid second-step inference after first-step estimation with deep learning. Nevertheless, the above papers do not include the case of time-series models. More specifically, for the case of Random Forests, asymptotic and inferential results are derived in Scornet et al. (2015) and Wager and Athey (2018) for the case of IID data. More recently, Davis and Nielsen (2020) prove a uniform concentration inequality for regression trees built on nonlinear autoregressive stochastic processes and prove consistency for a large class of random forests. Finally, it is worth mentioning an interesting related work on predictor targeting in random forests. In that paper, the authors show that proper predictor targeting controls the probability of placing splits along strong predictors and improves prediction. The term bagging means Bootstrap Aggregating and was proposed by Breiman (1996) to reduce the variance of unstable predictors. It was popularized in the time series literature by Inoue and Kilian (2008), who used it to construct forecasts from multiple regression models with local-to-zero regression parameters and errors subject to possible serial correlation or conditional heteroscedasticity. Bagging is designed for situations in which the number of predictors is moderately large relative to the sample size.
The bagging algorithm in time series settings has to take into account the time dependence dimension when constructing the bootstrap samples. Algorithm 3 (Bagging for Time-Series Models). The Bagging algorithm is defined as follows. 1. Arrange the set of tuples (y_{t+h}, x_t'), t = h + 1, . . . , T, in the form of a matrix V of dimension (T − h) × n. 2. Construct (block) bootstrap samples {(y*_{(i)t+h}, z*_{(i)t}')}, i = 1, . . . , B, by drawing blocks of M rows of V with replacement. 3. Compute the i-th bootstrap forecast from a pre-testing regression as ŷ*_{(i)t+h|t} = λ̂*_{(i)}'x*_{(i)t}, where x*_{(i)t} := S*_{(i)t} z*_{(i)t} and S*_{(i)t} is a diagonal selection matrix with j-th diagonal element equal to one if the absolute value of the t-statistic of the j-th regressor exceeds c and zero otherwise, where c is a pre-specified critical value of the test. λ̂*_{(i)} is the OLS estimator at each bootstrap repetition. 4. Compute the average forecast over the bootstrap repetitions: ŷ_{t+h|t} = (1/B) ∑_{i=1}^{B} ŷ*_{(i)t+h|t}. In Algorithm 3 above, one requires that it is possible to estimate and conduct inference in the linear model. This is certainly infeasible if the number of predictors is larger than the sample size (n > T), which requires the algorithm to be modified. Garcia et al. (2017) and Medeiros et al. (2021) adopt the following changes to the algorithm: Algorithm 4 (Bagging for Time-Series Models and Many Regressors). The Bagging algorithm is defined as follows. 0. Run n univariate regressions of y_{t+h} on each covariate in x_t. Compute t-statistics and keep only the covariates that turn out to be significant at a given pre-specified level. Call this new set of regressors x̃_t. 1-4. Same as before, but with x_t replaced by x̃_t. Complete Subset Regression (CSR) is a method for combining forecasts developed by Elliott et al. (2013, 2015). The motivation was that selecting the optimal subset of X_t to predict Y_{t+h} by testing all possible combinations of regressors is computationally very demanding and, in most cases, unfeasible. For a given set of potential predictor variables, the idea is to combine forecasts by averaging all possible linear regression models with a fixed number of predictors. For example, with n possible predictors, there are n unique univariate models and n_{k,n} = n!/[(n − k)! k!] different k-variate models for k ≤ K. The set of models for a fixed value of k is known as the complete subset. When the set of regressors is large the number of models to be estimated increases rapidly. Moreover, it is likely that many potential predictors are irrelevant. In these cases it was suggested that one should include only a small, fixed number k of predictors, such as five or ten. Nevertheless, the number of models is still very large; for example, with n = 30 and k = 8, there are 5,852,925 regressions. An alternative solution is to follow Garcia et al. (2017) and Medeiros et al. (2021) and adopt a strategy similar to the case of Bagging for high-dimensional models. The idea is to start by fitting a regression of Y_{t+h} on each of the candidate variables and save the t-statistics of each variable. The t-statistics are ranked by absolute value, and we select the ñ variables that are most relevant in the ranking. The CSR forecast is calculated on these variables for different values of k. This approach is based on the Sure Independence Screening of Fan and Lv (2008), extended to dependent data by Yousuf (2018), which aims to select a superset of relevant predictors among a very large set. Recently, Medeiros and Mendes (2013) proposed the combination of LASSO-based estimation and NN models. The idea is to construct a feedforward single-hidden-layer NN where the parameters of the nonlinear terms (neurons) are randomly generated and the linear parameters are estimated by LASSO (or one of its generalizations).
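A rough sketch of this hybrid idea is given below: hidden-layer weights are drawn at random and kept fixed, and only the linear output weights are estimated by the LASSO. The simulated data, the number of random units J, and the use of tanh activations are our illustrative choices and not the exact specification of Medeiros and Mendes (2013).

```python
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.model_selection import TimeSeriesSplit

rng = np.random.default_rng(5)
T, n, J = 400, 10, 200                     # J random hidden units
X = rng.normal(size=(T, n))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2 + rng.normal(scale=0.3, size=T)

# Hidden-layer weights are drawn at random and kept fixed (not estimated).
G = rng.normal(size=(n, J))
g0 = rng.normal(size=J)
H = np.tanh(X @ G + g0)                    # random nonlinear features S(gamma_j'X_t + gamma_0j)

# Only the linear output weights are estimated, here by LASSO with time-ordered CV.
model = LassoCV(cv=TimeSeriesSplit(n_splits=5), max_iter=50000).fit(H, y)
print("selected hidden units:", np.sum(model.coef_ != 0), "out of", J)
```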
Similar ideas were also considered by Kock and Teräsvirta (2014) and Kock and Teräsvirta (2015) . Trapletti et al. (2000) and Medeiros et al. (2006) proposed to augment a feedforward shallow NN by a linear term. The motivation is that the nonlinear component should capture only the nonlinear dependence, making the model more interpretable. This is in the same spirit of the semi-parametric models considered in Chen (2007) . Inspired by the above ideas, Medeiros et al. (2021) proposed combining random forests with adaLASSO and OLS. The authors considered two specifications. In the first one, called RF/OLS, the idea is to use the variables selected by a Random Forest in a OLS regression. The second approach, named adaLASSO/RF, works in the opposite direction. First select the variables by adaLASSO and than use them in a Random Forest model. The goal is to disentangle the relative importance of variable selection and nonlinearity to forecast inflation. Recently, Diebold and Shin (2019) propose the "partially-egalitarian" LASSO to combine survey forecasts. More specifically, the procedure sets some combining weights to zero and shrinks the survivors toward equality. Therefore, the final forecast will be close related to the simple average combination of the survived forecasts. Although the paper considers survey forecasts, the method is quite general and can be applied to any set of forecasts. As pointed out by the authors, optimally-regularized regression-based combinations and subset-average combinations are very closely connected. Diebold et al. (2021) extended the ideas in Diebold and Shin (2019) in order to construct regularized mixtures of density forecasts. Both papers shed light on how machine learning methods can be used to optimally combine a large set of forecasts. With the advances in the ML literature, the number of available forecasting models and methods have been increasing at a fast pace. Consequently, it is very important to apply statistical tools to compare different models. The forecasting literature provides a number of tests since the seminal paper by Diebold and Mariano (1995) that can be applied as well to the ML models described in this survey. In the Diebold and Mariano's (1995) test, two competing methods have the same unconditional expected loss under the null hypothesis, and the test can be carried out using a simple t-test. A small sample adjustment was developed by Harvey et al. (1997) . See also the recent discussion in Diebold (2015) . One drawback of the Diebold and Mariano's (1995) test is that its statistic diverges under null when the competing models are nested. However, Giacomini and White (2006) show that the test is valid if the forecasts are derived from models estimated in a rolling window framework. Recently, McCracken (2020) shows that if the estimation window is fixed, the Diebold and Mariano's (1995) statistic may diverge under the null. Therefore, it is very important that the forecasts are computed in a rolling window scheme. In order to accommodate cases where there are more than two competing models, an unconditional superior predictive ability (USPA) test was proposed by White (2000) . The null hypothesis states that a benchmark method outperforms a set of competing alternatives. However, Hansen (2005) showed that White's (2000) test can be very conservative when there are competing methods that are inferior to the benchmark. 
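As an illustration of how such comparisons can be carried out in practice, the function below computes a plain Diebold-Mariano statistic from two series of forecast errors, using a simple Newey-West long-run variance with h − 1 lags. It is a bare-bones sketch that omits refinements such as the Harvey et al. (1997) small-sample correction and the nested-model caveats discussed above.

```python
import numpy as np
from scipy import stats

def diebold_mariano(e1, e2, h=1, loss=lambda e: e ** 2):
    """DM test for equal predictive accuracy based on the loss differential
    d_t = L(e1_t) - L(e2_t), with a Newey-West long-run variance using
    h-1 lags (a common rule of thumb for h-step-ahead forecasts)."""
    d = loss(np.asarray(e1)) - loss(np.asarray(e2))
    T = d.size
    d_bar = d.mean()
    gamma0 = np.mean((d - d_bar) ** 2)          # variance of the loss differential
    lrv = gamma0
    for k in range(1, h):                       # Bartlett-weighted autocovariances
        gk = np.mean((d[k:] - d_bar) * (d[:-k] - d_bar))
        lrv += 2 * (1 - k / h) * gk
    dm = d_bar / np.sqrt(lrv / T)
    pval = 2 * (1 - stats.norm.cdf(abs(dm)))    # two-sided p-value
    return dm, pval

# toy example: two sets of forecast errors for the same target
rng = np.random.default_rng(6)
e_bench = rng.normal(scale=1.0, size=250)
e_ml = rng.normal(scale=0.9, size=250)
print(diebold_mariano(e_bench, e_ml, h=1))
```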
Another important contribution to the forecasting literature is the model confidence set (MCS) proposed by Hansen et al. (2011). An MCS is a set of competing models built so as to contain the best model with respect to a certain loss function with a given level of confidence. The MCS acknowledges the potential limitations of the dataset: uninformative data yield an MCS with a large number of models, whereas informative data yield an MCS with only a few models. Importantly, the MCS procedure does not assume that a particular model is the true one. Another extension of the Diebold and Mariano (1995) test is the conditional equal predictive ability (CEPA) test proposed by Giacomini and White (2006). In practical applications, it is important to know not only whether a given model is superior but also when it is better than the alternatives. Recently, Li et al. (2020) proposed a very general framework to conduct conditional predictive ability tests. In summary, it is very important to compare the forecasts from different ML methods, and the literature provides a number of tests that can be used for this purpose.

Macroeconomic forecasting is certainly one of the most successful applications of penalized regressions. Medeiros and Mendes (2016) applied the adaLASSO to forecasting US inflation and showed that the method outperforms linear autoregressive and factor models. Medeiros and Vasconcelos (2016) show that high-dimensional linear models produce, on average, smaller forecasting errors for macroeconomic variables when a large set of predictors is considered. Their results also indicate that a good selection of the adaLASSO hyperparameters reduces forecasting errors. Garcia et al. (2017) show that high-dimensional econometric models, such as shrinkage and complete subset regression, perform very well in real-time forecasting of Brazilian inflation in data-rich environments. The authors combine forecasts from different alternatives and show that model combination can achieve superior predictive performance. Smeekes and Wijler (2018) consider an application to a large macroeconomic US dataset and demonstrate that penalized regressions are very competitive. Medeiros et al. (2021) conduct a vast comparison of models to forecast US inflation and show that penalized regressions were far superior to several benchmarks, including factor models. Ardia et al. (2019) introduce a general text-sentiment framework that optimizes the design for forecasting purposes and apply it to forecasting economic growth in the US. The method includes the use of the elastic net for sparse data-driven selection and the weighting of thousands of sentiment values. Tarassow (2019) considers penalized VARs with six different economic uncertainty measures to forecast the growth of the real M2 and real M4 Divisia money series for the US using monthly data. Uematsu and Tanaka (2019) consider high-dimensional forecasting and variable selection via folded-concave penalized regressions. The authors forecast quarterly US gross domestic product using a high-dimensional monthly data set and the mixed data sampling (MIDAS) framework with penalization. See also Babii et al. (2020c) and Babii et al. (2020b).

There is also a vast list of applications in empirical finance. Elliott et al. (2013) find that combinations of subset regressions can produce more accurate forecasts of the equity premium than conventional approaches based on equal-weighted forecasts and other regularization techniques.
Audrino and Knaus (2016) used LASSO-based methods to estimate forecasting models for realized volatilities. Callot et al. (2017) consider modelling and forecasting the large realized covariance matrices of the 30 Dow Jones stocks by penalized vector autoregressive (VAR) models. The authors find that penalized VARs outperform the benchmarks by a wide margin and improve the portfolio construction of a mean-variance investor. Chinco et al. (2019) use the LASSO to make 1-minute-ahead return forecasts for a vast set of stocks traded on the New York Stock Exchange. The authors provide evidence that penalized regressions estimated by the LASSO boost out-of-sample predictive power by choosing predictors that trace out the consequences of unexpected news announcements.

There are many papers on the application of nonlinear ML methods to economic and financial forecasting, most of them focusing on NN methods, especially in the early literature. In most of the early papers, the models considered were nonlinear versions of autoregressive models and, at best, a small number of extra covariates were included. See, for example, Teräsvirta et al. (2005) and the references therein. In the majority of these papers, including Teräsvirta et al. (2005), there was no strong evidence of the superiority of nonlinear models, as the differences in performance were marginal. Other examples from the early literature are Swanson and White (1995), Swanson and White (1997a), Swanson and White (1997b), Balkin and Ord (2000), Tkacz (2001), Medeiros et al. (2001), and Heravi et al. (2004). More recently, with the availability of large datasets, nonlinear models are back on the scene. For example, Medeiros et al. (2021) show that, despite the skepticism of the previous literature on inflation forecasting, ML models with a large number of covariates are systematically more accurate than the benchmarks for several forecasting horizons, with Random Forests dominating all other models. The good performance of the Random Forest is due not only to its specific method of variable selection but also to the potential nonlinearities between past key macroeconomic variables and inflation. Another successful example is Gu et al. (2020). The authors show large economic gains to investors using ML forecasts of future stock returns based on a very large set of predictors; the best-performing models are tree-based methods and neural networks. Coulombe et al. (2020) show significant gains when nonlinear ML methods are used to forecast macroeconomic time series. Other recent work considers penalized regressions, ensemble methods, and random forests to forecast employment growth in the United States over the period 2004-2019 using Google search activity; the results strongly indicate that Google search data have predictive power. A related study computes now- and backcasts of weekly unemployment insurance initial claims in the US based on a rich set of daily Google Trends search-volume data and machine learning methods.

In this section we illustrate the use of some of the methods reviewed in this paper to forecast the daily realized variance of the Brazilian stock market index (BOVESPA). We use as regressors information from other major indexes, namely the S&P500 (US), the FTSE100 (United Kingdom), the DAX (Germany), the Hang Seng (Hong Kong), and the Nikkei (Japan). Our measure of realized volatility is constructed by aggregating intraday returns sampled at the 5-minute frequency. The data were obtained from the Oxford-Man Realized Library at Oxford University.
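To fix ideas before the formal definitions below, the following sketch (Python with pandas and NumPy) shows one way to build the daily realized-variance series from 5-minute log returns and to produce a one-step HAR forecast with daily, weekly (5-day), and monthly (22-day) components. The function names and implementation details are illustrative assumptions and are not taken from the exercise reported here.

```python
import numpy as np
import pandas as pd

def realized_variance(five_min_returns: pd.Series) -> pd.Series:
    """Daily realized variance: sum of squared 5-minute log returns within each day.

    `five_min_returns` is assumed to carry a DatetimeIndex at intraday frequency.
    """
    return (five_min_returns ** 2).groupby(five_min_returns.index.date).sum()

def har_forecast(log_rv: pd.Series) -> float:
    """One-step HAR forecast of log RV using daily, weekly and monthly components."""
    rv_d = log_rv.shift(1)                        # log RV_{t-1}
    rv_w = log_rv.rolling(5).mean().shift(1)      # average over the previous 5 days
    rv_m = log_rv.rolling(22).mean().shift(1)     # average over the previous 22 days
    df = pd.concat([log_rv, rv_d, rv_w, rv_m], axis=1,
                   keys=["y", "d", "w", "m"]).dropna()

    # OLS estimation of the HAR coefficients.
    Z = np.column_stack([np.ones(len(df)), df[["d", "w", "m"]].values])
    beta, *_ = np.linalg.lstsq(Z, df["y"].values, rcond=None)

    # Forecast of log RV_{T+1} using information available at time T.
    x_last = np.array([1.0,
                       log_rv.iloc[-1],
                       log_rv.iloc[-5:].mean(),
                       log_rv.iloc[-22:].mean()])
    return float(x_last @ beta)
```

A level forecast can then be obtained by exponentiating the fitted log realized variance, e.g. `np.exp(har_forecast(np.log(realized_variance(r5))))`, possibly with a bias correction.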
For each stock index, we define the realized variance as
$$RV_t = \sum_{s=1}^{S} r_{st}^2,$$
where $r_{st}$ is the log return sampled at the five-minute frequency and $S$ is the number of available intraday returns on day $t$. The benchmark model is the Heterogeneous Autoregressive (HAR) model proposed by Corsi (2009):
$$\log RV_{t+1} = \beta_0 + \beta_1 \log RV_t + \beta_5 \log RV_{5,t} + \beta_{22} \log RV_{22,t} + U_{t+1},$$
where $RV_t$ is the daily realized variance of the BOVESPA index and $RV_{5,t}$ and $RV_{22,t}$ are, respectively, its averages over the previous 5 and 22 trading days. As alternatives, we consider an extended HAR model with additional regressors estimated by adaLASSO. We include as extra regressors the daily past volatilities of the other five indexes considered here, so the model has a total of eight candidate predictors. Furthermore, we consider two nonlinear alternatives using all predictors: a random forest and neural networks (both shallow and deep).

The realized variances of the different indexes are illustrated in Figure 7. The sample starts on February 2, 2000 and ends on May 21, 2020, a total of 4,200 observations. It includes two periods of very high volatility, namely the financial crisis of 2007-2008 and the Covid-19 pandemic of 2020. We consider a rolling-window exercise, where each window has 1,500 observations and the models are re-estimated every day. Several other authors have estimated nonlinear and machine learning models to forecast realized variances. McAleer and Medeiros (2008) considered a smooth transition version of the HAR, while Hillebrand and Medeiros (2016) considered the combination of smooth transitions, long memory, and neural network models. Hillebrand and Medeiros (2010) and McAleer and Medeiros (2011) combined NN models with bagging, and Scharth and Medeiros (2009) considered smooth transition regression trees. The use of the LASSO and its generalizations to estimate extensions of the HAR model was proposed by Audrino and Knaus (2016).

Although the models are estimated in logarithms, we report the results in levels, which is, in the end, the quantity of interest. We compare the models according to the mean squared error (MSE) and the QLIKE loss. The results are shown in Table 1. The table reports, for each model, the MSE and the QLIKE statistic as a ratio to the HAR benchmark; values smaller than one indicate that the model outperforms the HAR. The asterisks indicate the results of the Diebold-Mariano test of equal forecasting performance: *, **, and *** indicate rejection of the null of equal forecasting ability at the 10%, 5%, and 1% levels, respectively. We report results for the full out-of-sample period, for the financial crisis years (2007-2008), and for 2020, as a way to capture the effects of the Covid-19 pandemic on the forecasting performance of the different models.

As we can see from the table, the ML methods considered here outperform the HAR benchmark. The clear winner is the extended HAR model with additional regressors estimated by adaLASSO. Its performance improves during the high-volatility periods, and the gains reach 10% during the Covid-19 pandemic. Random Forests do not perform well, whereas NN models with different numbers of hidden layers outperform the benchmark.

In this paper we presented a non-exhaustive review of the most recent developments in machine learning and high-dimensional statistics applied to time-series modeling and forecasting. We considered both linear and nonlinear alternatives, as well as ensemble and hybrid models. Finally, we briefly discussed tests for superior predictive ability.
Among the linear specifications, we paid special attention to penalized regressions (Ridge, the LASSO and its generalizations, for example) and ensemble methods (Bagging and Complete Subset Regression). Although there have been major theoretical advances in the literature on penalized linear regression models for dependent data, the same is not true for ensemble methods: the theoretical results for Bagging are so far based on independent data, and the results for Complete Subset Regression are quite limited. With respect to nonlinear ML methods, we focused on neural networks and tree-based methods. Theoretical results for random forests and boosted trees have been developed only for independent and identically distributed data and for a low-dimensional set of regressors. For shallow neural networks, Chen et al. (2007) and Chen (2007) provide some theoretical results for dependent data in the low-dimensional case. The behavior of such models in high dimensions is still under study, and the same is true for deep neural networks. Nevertheless, the recent empirical evidence shows that nonlinear machine learning models combined with large datasets can be extremely useful for economic forecasting.

As directions for further developments, we list the following points:
- Develop results for Bagging and Boosting for dependent data;
- Show consistency and asymptotic normality of the random forest estimator of the unknown function $f_h(X_t)$ when the data are dependent;
- Derive a better understanding of the variable selection mechanism of nonlinear ML methods;
- Develop inferential methods to assess variable importance in nonlinear ML methods;
- Develop models based on unstructured data, such as text data, for economic forecasting;
- Evaluate ML models for nowcasting;
- Evaluate ML in very unstable environments with many structural breaks.

Finally, we would like to point out that we left a number of other interesting ML methods out of this survey, such as, for example, Support Vector Regressions, autoencoders, nonlinear factor models, and many more. However, we hope that the material presented here can be of value to anyone interested in applying ML techniques to economic and/or financial forecasting.

References
Adámek, R., S. Smeekes, and I. Wilms (2020).
LASSO inference for high-dimensional time series Questioning the news about economic growth: Sparse forecasting using thousands of news-based sentiment values Lassoing the HAR model: A model selection perspective on realized volatility dynamics Inference for high-dimensional regressions with heteroskedasticity and autocorrelation Machine learning panel data regressions with an application to nowcasting price earnings ratios Machine learning time series regressions with an application to nowcasting Automatic neural network modeling for univariate time series Universal approximation bounds for superpositions of a sigmoidal function AdaBoost is consistent Regularized estimation in sparse high-dimensional time series models Inference on treatment effects after selection amongst high-dimensional controls Boosting for high-dimensional linear models Targeting predictors in random forest regression Now-and backcasting initial claims with highdimensional daily internet search-volume data In search of a job: Forecasting employment growth using Google trends Bagging predictors Random forests Consistency for l2boosting and matching pursuit with trees and treetype basis functions Oracle efficient estimation and forecasting with the adaptive LASSO and the adaptive group LASSO in vector autoregressions Modeling and forecasting large realized covariance matrices and portfolio choice Subset ARMA selection via the adaptive LASSO Atomic decomposition by basis pursuit Large sample sieve estimation of semi-nonparametric models Semiparametric ARX neural-network models with an application to forecasting inflation Sieve extremum estimates for weakly dependent data Double/debiased/neyman machine learning of treatment effects Double/debiased machine learning for treatment and structural parameters Sparse signals in the cross-section of returns A simple long memory model of realized volatility How is machine learning useful for macroeconomic forecasting? Approximation by superposition of sigmoidal functions Modeling of time series using random forests: Theoretical developments Comparing predictive accuracy, twenty years later: A personal perspective on the use and abuse of Diebold-Mariano tests Machine learning for regularized survey forecast combination: Partially-egalitarian LASSO and its derivatives On the aggregation of probability assessments: Regularized mixtures of predictive densities for Eurozone inflation and real interest rates Comparing predictive accuracy Boosting methods for regression Complete subset regressions Complete subset regressions with largedimensional sets of predictors Economic forecasting Forecasting in economics and finance Variable selection via nonconcave penalized likelihood and its oracle properties Sure independence screening for ultrahigh dimensional feature space Strong oracle optimality of folded concave penalized estimation Deep neural networks for estimation and inference Gauss-newton approximation to Bayesian regularization Greedy function approximation: a gradient boosting machine On the approximate realization of continuous mappings by neural networks Real-time inflation forecasting with high-dimensional models: The case of brazil Combining expert forecasts: Can anything beat the simple average? 
Tests of conditional predictive ability Forecasting and decision theory Abstract Inference Empirical asset pricing via machine learning The adaptive LASSO and its oracle properties Time Series Analysis High-dimensional linear regression for dependent data with applications to nowcasting Bayesian LASSO regression A test for superior predictive ability The model confidence set Testing the equality of prediction mean squared errors The elements of statistical learning: data mining, inference, and prediction Statistical learning with sparsity: the LASSO and generalizations Granger causality testing in high-dimensional VARs: a post-double-selection procedure Linear versus neural network forecasts for european industrial production series The benefits of bagging for forecast models of realized volatility Asymmetries, breaks, and long-range dependence Long short-term memory Ridge regression: Biased estimation for nonorthogonal problems Multi-layer Feedforward networks are universal approximators Subset selection for vector autoregressive processes using LASSO How useful is bagging in forecasting economic time series? a case study of U.S. consumer price inflation Estimation with quadratic loss Process consistency for AdaBoost Smoothly clipped absolute deviation on high dimensions Asymptotics for LASSO-type estimators Consistent and conservative model selection with the adaptive lasso in stationary and nonstationary autoregressions Oracle inequalities for high dimensional vector autoregressions Forecasting performance of three automated modelling techniques during the economic crisis Forecasting macroeconomic variables using neural network models and three automated model selection techniques LASSO-type penalties for covariate selection and forecasting in time series High-dimensional predictive regression in the presence of cointegration Oracle inequalities for high-dimensional prediction Exact post-selection inference with application to the LASSO On LASSO for predictive regression Model selection and inference: Facts and fiction Sparse estimators and the oracle property, or the return of Hodges' estimator Conditional superior predictive ability On asymptotically optimal confidence regions and tests for high-dimensional models On the Bayes-risk consistency of regularized boosting methods Bayesian interpolation A practical bayesian framework for backpropagation networks Regularized estimation of high-dimensional vector autoregressions with weakly dependent innovations Forecasting realized volatility with linear and nonlinear models A multiple regime smooth transition heterogeneous autoregressive model for long memory and asymmetries Diverging tests of equal predictive ability Penalized estimation of semi-parametric additive timeseries models 1 -regularization of high-dimensional time-series models with non-gaussian and heteroskedastic errors Adaptive LASSO estimation for ARDL models with GARCH innovations Forecasting macroeconomic variables in data-rich environments Building neural network models for time series: A statistical approach Forecasting inflation in a data-rich environment: The benefits of machine learning methods A flexible coefficient smooth transition time series model Modelling exchange rates: Smooth transitions, neural networks, and linear models Estimating structured vector autoregressive models When and why are deep networks better than shallow ones? 
Autoregressive process modeling via the LASSO procedure Lag weighted LASSO for time series model Universal approximation using radial-basis-function networks The Bayesian LASSO Subset selection for vector autoregressive processes via adaptive LASSO Some studies in machine learning using the game of checkers Simultaneous sparse model selection and coefficient estimation for heavy-tailed autoregressive processes Asymmetric effects and long memory in the volatility of dow jones stocks Consistency of random forests. Annals of Statistics A sparse-group LASSO Macroeconomic forecasting using penalized regression methods An automated approach towards sparse single-equation cointegration modelling Simple way to prevent neural networks from overfitting Inadmissibility of the usual estimator for the mean of a multivariate distribution Universal approximation using feedforward neural networks with non-sigmoid hidden layer activation functions Local-global neural networks: A new approach for nonlinear time series modelling A model selection approach to assesssing the information in the term structure using linear models and artificial neural networks Forecasting economic time series using flexible versus fixed specification and linear versus nonlinear econometric models A model selection approach to real-time macroeconomic forecasting using linear models and artificial neural networks Forecasting u.s. money growth using economic uncertainty measures and regularisation techniques Post-selection adaptive inference for least angle regression and the LASSO Specification, estimation, and evaluation of smooth transition autoregressive models Modelling Nonlinear Economic Time Series Linear models, smooth transition autoregressions and neural networks for forecasting macroeconomic time series: A reexamination (with discussion) Regression shrinkage and selection via the LASSO On the stability of inverse problems On the solution of ill-posed problems and the method of regularization Solutions of ill-posed problems Neural network forecasting of Canadian GDP growth Stationary and integrated autoregressive neural network processes High-dimensional macroeconomic forecasting and variable selection via penalized regression On asymptotically optimal confidence regions and tests for high-dimensional models Estimation and inference of heterogeneous treatment effects using random forests A note on adaptive group LASSO Regression coefficient and autoregressive order shrinkage and selection via the LASSO A reality check for data snooping LASSO guarantees for β-mixing heavy tailed time series Nonlinear system theory: Another look at dependence Performance bounds for parameter estimates of high-dimensional linear models with correlated errors LASSO for sparse linear regression with exponentially β-mixing errors Modified LASSO estimators for time series regression models with dependent disturbances A fast unified algorithm for solving group-LASSO penalize learning problems Error bounds for approximations with deep ReLU networks Penalized regression models with autoregressive error terms Variable screening for high dimensional time series Model selection and estimation in regression with grouped variables Boosting with early stopping: Convergence and consistency On model selection consistency of LASSO Nonconcave penalized estimation in sparse vector autoregression model Regularization and variable selection via the elastic net On the adaptive elastic-net with a diverging number of parameters We are very grateful for the 
insightful comments made by two anonymous referees. The second author gratefully acknowledges the partial financial support from CNPq. We are also grateful to Francis X. Diebold, Daniel Borub, and Andrii Babii for helpful comments.