title: High-dimensional estimation of quadratic variation based on penalized realized variance
authors: Christensen, Kim; Nielsen, Mikkel Slot; Podolskij, Mark
date: 2021-03-04
In this paper, we develop a penalized realized variance (PRV) estimator of the quadratic variation (QV) of a high-dimensional continuous Itô semimartingale. We adapt the principal idea of regularization from linear regression to covariance estimation in a continuous-time high-frequency setting. We show that under a nuclear norm penalization, the PRV is computed by soft-thresholding the eigenvalues of the realized variance (RV). It therefore encourages sparsity of singular values or, equivalently, low rank of the solution. We prove our estimator is minimax optimal up to a logarithmic factor. We derive a concentration inequality, which reveals that the rank of the PRV is, with high probability, the number of non-negligible eigenvalues of the QV. Moreover, we also provide the associated non-asymptotic analysis for the spot variance. We suggest an intuitive, data-driven bootstrap procedure to select the shrinkage parameter. Our theory is supplemented by a simulation study and an empirical application. The PRV detects about three to five factors in the equity market, with a notable rank decrease during times of distress in financial markets. This is consistent with most standard asset pricing models, where a limited number of systematic factors driving the cross-section of stock returns are perturbed by idiosyncratic errors, rendering the QV, and also the RV, of full rank. The covariance matrix of asset returns is a central component required in several contexts in financial economics, such as portfolio composition, pricing of financial instruments, and risk management (e.g., Andersen, Bollerslev, Diebold, and Labys, 2003).
In the past few decades, estimation of quadratic variation (QV) from high-frequency data has been intensively studied in the field of econometrics. The QV is an essential component for understanding the (conditional) variance of asset returns. The standard estimator of the QV, Σ, say over a window [0, 1], is the realized variance (RV) (e.g., Andersen and Bollerslev, 1998; Barndorff-Nielsen and Shephard, 2002, 2004; Jacod, 1994). Given a decreasing sequence (Δ_n)_{n≥1} with Δ_n → 0 and equidistant observations (Y_{iΔ_n})_{i=0,...,⌊Δ_n^{-1}⌋} on [0, 1], the RV is defined as

Σ_n = Σ_{k=1}^{⌊Δ_n^{-1}⌋} (Δ_k^n Y)(Δ_k^n Y)^⊤,    (1.1)

where Δ_k^n Y = Y_{kΔ_n} − Y_{(k−1)Δ_n} is the k-th increment, and ⊤ denotes the transpose operator. The asymptotic properties of Σ_n are well known when the dimension d is fixed (e.g., Barndorff-Nielsen, Graversen, Jacod, Podolskij, and Shephard, 2006; Diop, Jacod, and Todorov, 2013; Heiny and Podolskij, 2020; Jacod, 1994, 2008). In particular, for any semimartingale, Σ_n is by definition a consistent estimator of Σ.¹ When modeling the dynamics of a large financial market over a relatively short time horizon, one is often faced with an ill-posed inference problem, where the number of assets d is comparable to (or even exceeds) the number of high-frequency observations ⌊Δ_n^{-1}⌋ available to compute Σ_n. This renders the RV inherently inaccurate, and it also implies that d cannot be treated as fixed in the asymptotic analysis. To recover an estimate of Σ in such a high-dimensional setting, the covariance matrix has to display a more parsimonious structure. Currently, the literature on high-dimensional high-frequency data (all based on a continuous semimartingale model for the security price process) is rather scarce. Wang and Zou (2010) investigate optimal convergence rates for estimation of the QV under sparsity constraints on its entries. Zheng and Li (2011) apply random matrix theory to estimate the spectral eigenvalue distribution of the QV in a one-factor diffusion setting.
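As a concrete illustration of the RV defined above, the following minimal sketch computes Σ_n from the matrix of increments. The toy model is an assumption for illustration only: a driftless Brownian motion with identity spot variance, so that the true QV is the identity matrix.

```python
import numpy as np

rng = np.random.default_rng(0)

def realized_variance(Y):
    """Y has shape (n_obs, d); returns the d x d realized variance."""
    dY = np.diff(Y, axis=0)   # increments Delta_k^n Y
    return dY.T @ dY          # sum_k (Delta_k^n Y)(Delta_k^n Y)^T

d, n = 5, 10_000
Delta_n = 1.0 / n
# hypothetical constant-volatility model: increments are i.i.d. N(0, Delta_n I_d)
Y = np.cumsum(rng.normal(scale=np.sqrt(Delta_n), size=(n, d)), axis=0)
Sigma_n = realized_variance(Y)
print(np.round(Sigma_n, 2))   # close to the true QV, here the identity matrix
```

With ⌊Δ_n^{-1}⌋ = 10,000 observations and d = 5, the diagonal entries concentrate around 1 at rate Δ_n^{1/2}, in line with the fixed-d asymptotics cited above.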
Related work based on random matrix theory and eigenvalue cleaning, or on approximating the high-dimensional system by a low-dimensional factor model, can be found in, e.g., Cai, Hu, Li, and Zheng (2020); Hautsch, Kyj, and Oomen (2012); Lunde, Shephard, and Sheppard (2016). The objective there is usually to project an estimate of the covariance matrix onto the space of positive definite matrices, for example in order to compute its inverse.¹ Aït-Sahalia and Xiu (2017) study a high-dimensional factor model, where the eigenvalues of the QV are assumed to be dominated by common components (a spiked eigenvalue setting), and estimate the number of common factors under sparsity constraints on the idiosyncratic components. In the context of joint modeling of the cross-section of asset returns, if a factor model is adopted, one should expect a small number of eigenvalues of Σ to be dominant (see also Section 4). In other settings, such as when the market is complete and some entries of Y_t reflect prices of derivative contracts (spanned by the existing assets and therefore redundant), one should even expect the rank of the volatility to be strictly smaller than the dimension.

¹ The limiting distribution of Σ_n, for fixed d and as Δ_n → 0, depends heavily on the particular framework. In the context of the semimartingale model in (2.1), Σ_n converges at rate Δ_n^{−1/2} and is asymptotically mixed normal with Δ_n^{−1/2}(Σ_n − Σ) → MN(0, ∫_0^t A_s ds), where A_s is a d² × d²-dimensional array with elements A_s^{jk,j'k'} = c_s^{jj'} c_s^{kk'} + c_s^{jk'} c_s^{kj'} (see, e.g., Barndorff-Nielsen and Shephard, 2004; Barndorff-Nielsen, Graversen, Jacod, Podolskij, and Shephard, 2006). Asymptotic theory for more general and pure-jump Itô semimartingales is developed in Diop, Jacod, and Todorov (2013); Heiny and Podolskij (2020); Jacod (2008); Jacod and Protter (2012).
In such an instance, the rank of the covariance matrix is smaller than d, meaning that the intrinsic dimension of the system can be represented in terms of a driving Brownian motion of lower dimension. In general, identifying a low rank can, for instance, help with the economic interpretation of the model, or it may lighten the computational load related to its estimation or simulation; we refer to several recent studies on identification and estimation of the intrinsic dimension of the driving Brownian motion and eigenvalue analysis of the QV in the fixed-dimensional setting: Aït-Sahalia and Xiu (2019); Fissler and Podolskij (2017); Jacod, Lejay, and Talay (2008); Jacod and Podolskij (2013, 2018). In this paper, we develop a regularized estimator of Σ called the penalized realized variance (PRV). We draw on the literature on shrinkage estimation in linear regression (e.g., Hoerl and Kennard, 1970; Tibshirani, 1996). In particular, we propose the following LASSO-type estimator of Σ based on the RV in (1.1):

Σ_n^λ = argmin_{A ∈ R^{d×d}} { ||Σ_n − A||_2² + λ ||A||_1 }.    (1.2)

Here, λ ≥ 0 is a tuning parameter that controls the degree of penalization, while ||·||_1 and ||·||_2 denote the nuclear and Frobenius matrix norms. It has been shown in various contexts that estimators based on nuclear norm regularization possess good statistical properties, such as having optimal rates of convergence and being of low rank. For instance, Koltchinskii, Lounici, and Tsybakov (2011) study such properties in the trace regression model and, in particular, in the matrix completion problem, where one attempts to fill out the missing entries of a partially observed matrix. Negahban and Wainwright (2011) adopt a rather general observation model and, among other things, cover estimation in near low rank multivariate regression models and vector autoregressive processes.
Further references in this direction include Argyriou, Evgeniou, and Pontil (2008); Bach (2008); Candès and Recht (2009); Recht, Fazel, and Parrilo (2010). While previous papers on nuclear norm penalization are solely dedicated to discrete-time models, the current work is, to the best of our knowledge, the first to study this problem in a continuous-time Itô setting. This implies a number of subtle technical challenges, since the associated high-frequency data are dependent and potentially non-stationary. As we show in Proposition 2.1, the unique solution to (1.2) is found from the orthogonal decomposition of Σ_n by soft-thresholding, i.e., shrinking its eigenvalues towards zero. The nuclear norm is the convex envelope of the rank function on the unit ball (cf. Hiriart-Urruty and Le, 2013), so (1.2) is the convex optimization problem that is "closest" to its rank-penalized counterpart. Moreover, the nuclear norm of a matrix is the ℓ_1-norm of its singular values, so this type of regularization adapts the principal ideas of the LASSO estimator of Tibshirani (1996). It therefore encourages sparsity in the vector of singular values or, equivalently, a low rank estimator of the QV. In the paper, we provide a complete non-asymptotic theory for the PRV estimator in (1.2) by deriving bounds for the Frobenius norm of its error. We show that it is minimax optimal up to a logarithmic factor. The derivation relies on advanced results from stochastic analysis combined with a Bernstein-type inequality presented in Theorem 2.3. The latter builds upon the recent literature on concentration inequalities for matrix-valued random variables, but it is more general, as the variables are allowed to be both dependent and unbounded. We further show that in a "local-to-deficient" rank setting, where some eigenvalues are possibly decreasing as the dimension of the system increases, the penalized estimator in (1.2) identifies the number of non-negligible eigenvalues of Σ with high probability.
In practice, the estimator depends on the concrete value of the tuning parameter λ. We propose an intuitive data-driven approach based on Sabourin, Valdar, and Nobel (2015) and the i.i.d. bootstrap of Gonçalves and Meddahi (2009) in order to choose an appropriate amount of shrinkage in the implementation. We provide a related theoretical analysis for the local volatility, which is more likely to exhibit a lower rank than the QV (the latter is only expected to have a near low rank in many settings). Considering the 30 members of the Dow Jones Industrial Average index, we also show that our results are consistent with the standard Fama-French three-factor structure, and one may expect an even lower number of factors during crisis periods. In contrast to Aït-Sahalia and Xiu (2017), which is the article most closely related to our analysis, we do not assume an a priori structure of the model. Instead, we seek to identify a low rank and reduce the dimension of the model by determining those directions in a data-driven manner. The paper proceeds as follows. In Section 2, we set the stage for our analysis and establish concentration inequalities for the estimation error ||Σ_n^λ − Σ||_2. Based on these findings, we show minimax optimality of Σ_n^λ up to a logarithmic factor. In Section 2.3, we present sufficient conditions to ensure that the rank of Σ_n^λ coincides with the number of non-negligible eigenvalues of Σ with high probability. In Section 2.4, we show how the tuning parameter λ can be selected with a data-driven approach. In Section 3, the associated non-asymptotic theory for estimation of the instantaneous variance is developed. In Section 4, we design a simulation study to demonstrate the ability of our estimator to identify a low-dimensional factor model through rank(Σ_n^λ). In Section 5, we implement the estimator on empirical high-frequency data from the 30 members of the Dow Jones Industrial Average index.
We illustrate how the rank of our estimator is consistent with a standard Fama-French three-factor structure. The proofs are relegated to the appendix, where some additional auxiliary results are also presented. This paragraph introduces some notation used throughout the paper. For a vector or matrix A, the transpose of A is denoted by A^⊤. We heavily employ the fact that any real matrix A ∈ R^{m×q} admits a singular value decomposition (SVD) of the form

A = Σ_{k=1}^{m∧q} s_k u_k v_k^⊤,

where s_1 ≥ · · · ≥ s_{m∧q} are the singular values of A, and {u_1, . . . , u_{m∧q}} and {v_1, . . . , v_{m∧q}} are orthonormal vectors. If m = q and A is symmetric and positive semidefinite, its singular values coincide with its eigenvalues and the SVD is the orthogonal decomposition (meaning that one can take v_k = u_k). Sometimes we write s_k(A), v_k(A), and u_k(A) to explicitly indicate the dependence on the matrix A. The rank of A is denoted rank(A), whereas rank(A; ε) denotes the number of singular values of A exceeding a certain threshold ε ∈ [0, ∞). Note that ε ↦ rank(A; ε) is non-increasing on [0, ∞), and càdlàg and piecewise constant on (0, ∞). Furthermore, rank(A; 0) = d and lim_{ε↓0} rank(A; ε) = rank(A). We denote by ||A||_p the Schatten p-norm of A ∈ R^{m×q}, i.e.,

||A||_p = ( Σ_{k=1}^{m∧q} s_k(A)^p )^{1/p}

for p ∈ (0, ∞). Moreover, ||A||_∞ denotes the maximal singular value of A, i.e., ||A||_∞ = s_1. In particular, in this paper we work with p = 1, p = 2, and p = ∞, corresponding to the nuclear, Frobenius, and spectral norm. The Frobenius norm is also induced by the inner product ⟨A, B⟩ = tr(A^⊤B), and we have the trace duality |⟨A, B⟩| ≤ ||A||_1 ||B||_∞. For a linear subspace S ⊆ R^m, S^⊥ denotes the linear space orthogonal to S, and P_S stands for the projection onto S. Given two sequences (a_n)_{n≥1} and (b_n)_{n≥1} in (0, ∞), we write a_n = o(b_n) if lim_{n→∞} a_n/b_n = 0 and a_n = O(b_n) if lim sup_{n→∞} a_n/b_n is bounded. Furthermore, if both a_n = O(b_n) and b_n = O(a_n), we write a_n ≍ b_n.
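The notation above can be checked numerically; the sketch below (the matrix A is an illustrative assumption) computes the Schatten norms and the thresholded rank rank(A; ε) directly from the singular values:

```python
import numpy as np

def schatten_norm(A, p):
    """Schatten p-norm: p=1 nuclear, p=2 Frobenius, p=inf spectral."""
    s = np.linalg.svd(A, compute_uv=False)
    return s.max() if np.isinf(p) else (s ** p).sum() ** (1.0 / p)

def rank_eps(A, eps):
    """Number of singular values of A exceeding the threshold eps."""
    s = np.linalg.svd(A, compute_uv=False)
    return int((s > eps).sum())

A = np.diag([3.0, 1.0, 0.2, 0.0])     # toy symmetric PSD matrix
print(schatten_norm(A, 1))            # nuclear norm: 3 + 1 + 0.2 = 4.2
print(schatten_norm(A, 2))            # Frobenius norm: sqrt(10.04)
print(schatten_norm(A, np.inf))       # spectral norm: 3.0
print(rank_eps(A, 0.5))               # 2 singular values exceed 0.5
```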
Finally, to avoid trivial cases, we assume throughout the paper that the dimension d of the process (Y_t)_{t∈[0,1]} is at least two. We suppose a d-dimensional stochastic process Y_t = (Y_{1,t}, . . . , Y_{d,t})^⊤ is defined on a filtered probability space (Ω, F, (F_t)_{t∈[0,1]}, P). In our context, Y_t is a time t random vector of log-prices of financial assets, which are traded in an arbitrage-free, frictionless market and therefore form a semimartingale (e.g., Delbaen and Schachermayer, 1994). In particular, we assume (Y_t)_{t∈[0,1]} is a continuous Itô semimartingale, which can be represented by the stochastic differential equation:

Y_t = Y_0 + ∫_0^t μ_s ds + ∫_0^t σ_s dW_s,  t ∈ [0, 1],    (2.1)

where (μ_t)_{t∈[0,1]} is the drift, (σ_t)_{t∈[0,1]} is the volatility, and (W_t)_{t∈[0,1]} is a standard Brownian motion; we write c_t = σ_t σ_t^⊤ for the spot variance, so that Σ = ∫_0^1 c_s ds. We remark that the length T = 1 of the estimation window [0, T] is fixed for expositional convenience and is completely without loss of generality, since a general T can always be reduced to the unit interval via a suitable time change. Note that we exclude a jump component from (2.1). It should be possible to extend our analysis and allow for at least finite-activity jump processes. To do so, we can replace the RV with its truncated version, following the ideas of Mancini (2009). We leave the detailed analysis of more general jump processes for future work. Apart from these restrictions, our model is essentially nonparametric, as it allows for almost arbitrary forms of random drift, volatility, and leverage. The goal of this section is to derive a sharp upper bound on the estimation error ||Σ_n^λ − Σ||_2 that applies with high probability, where the PRV estimator Σ_n^λ has been defined in (1.2). We first show that the PRV can alternatively be found by soft-thresholding the eigenvalues in the orthogonal decomposition of Σ_n.

Proposition 2.1 Let Σ_n = Σ_{k=1}^d s_k u_k u_k^⊤ be the orthogonal decomposition of Σ_n. Then, the unique solution Σ_n^λ to (1.2) is given by

Σ_n^λ = Σ_{k=1}^d max(s_k − λ/2, 0) u_k u_k^⊤.    (2.2)

Proposition 2.1 is standard (see Koltchinskii, Lounici, and Tsybakov, 2011), but we state it explicitly for completeness.
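Proposition 2.1 reduces the optimization in (1.2) to a single eigenvalue soft-thresholding step. A minimal sketch, with a hypothetical diagonal RV as input so the effect on the spectrum is easy to read off:

```python
import numpy as np

def penalized_rv(Sigma_n, lam):
    """PRV via Proposition 2.1: soft-threshold the eigenvalues of the RV."""
    s, U = np.linalg.eigh(Sigma_n)             # orthogonal decomposition
    s_shrunk = np.maximum(s - lam / 2.0, 0.0)  # shrink by lam/2, floor at 0
    return (U * s_shrunk) @ U.T

# toy RV with two "significant" and two near-zero eigenvalues
Sigma_n = np.diag([2.0, 1.0, 0.05, 0.02])
Sigma_lam = penalized_rv(Sigma_n, lam=0.2)
# soft-thresholded eigenvalues are 0, 0, 0.9, 1.9 (ascending)
print(np.linalg.eigvalsh(Sigma_lam))
print(np.linalg.matrix_rank(Sigma_lam))       # the PRV has rank 2
```

The small eigenvalues are set exactly to zero while the dominant ones are shrunk by λ/2, which is precisely the "(near) low rank" behavior discussed in the text.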
The interpretation of the PRV representation in (2.2) is that all "significant" eigenvalues of Σ_n are shrunk by λ/2, while the smallest ones are set to 0. Hence, we only retain the principal eigenvalues in the orthogonal decomposition of Σ_n. The PRV can therefore account for a (near) low rank constraint. In the next proposition, we present a general non-probabilistic oracle inequality for the performance of Σ_n^λ.

Proposition 2.2 Assume that 2||Σ_n − Σ||_∞ ≤ λ. Then, it holds that

||Σ_n^λ − Σ||_2² ≤ min{ 2λ ||Σ||_1, 3λ² rank(Σ) }.

The statement of Proposition 2.2 is also a general algebraic result that is standard for optimization problems under nuclear penalization (e.g., Koltchinskii, Lounici, and Tsybakov, 2011). It says that an oracle inequality for Σ_n^λ is available if we can control the stochastic error ||Σ_n − Σ||_∞. The latter is absolutely key to our investigation. In order to assess this error, we need to impose the following assumption on the norms of the drift and volatility processes.

Assumption (A): It holds that sup_{t∈[0,1]} ||μ_t||_2² ≤ ν_μ, sup_{t∈[0,1]} tr(c_t) ≤ ν_{c,2}, and sup_{t∈[0,1]} ||c_t||_∞ ≤ ν_{c,∞} for some constants ν_μ, ν_{c,2}, and ν_{c,∞} with ν_{c,∞} ≤ ν_{c,2}.

Mathematically speaking, Assumption (A) appears a bit strong, since it imposes an almost sure bound on the drift and volatility. We can weaken the requirement on the drift to a suitable moment condition without affecting the rate of ||Σ_n − Σ||_∞ in Theorem 2.4 below, but, as usual, the cost of this is more involved expressions. If the volatility does not meet the boundedness condition, which it does not for most stochastic volatility models, one can resort to the localization technique from Section 4.4.1 in Jacod and Protter (2012). In most financial applications, however, Assumption (A) is not too stringent if the drift and volatility do not vary strongly over short time intervals, such as a day.
The constants ν_μ, ν_{c,2}, and ν_{c,∞} may depend on d, but they should generally be as small as possible to get the best rate (see, e.g., (2.12)). To study the concentration of ||Σ_n − Σ||_∞, we present an exponential Bernstein-type inequality, which applies to matrix-valued martingale differences as long as the conditional moments are sufficiently well-behaved. While there are several related concentration inequalities (see, e.g., Minsker, 2017; Tropp, 2011, 2012, 2015, and references therein), existing results require that the summands are either independent or bounded. Thus, to be applicable in our setting, we needed to modify these.

Theorem 2.3 Suppose that (X_k)_{k=1}^n is a martingale difference sequence with respect to a filtration (G_k)_{k=0}^n and with values in the space of symmetric d × d matrices. Assume that, for some predictable sequence (C_k)_{k=1}^n of random variables and a constant R ∈ (0, ∞),

|| E[X_k^p | G_{k−1}] ||_∞ ≤ (p!/2) R^{p−2} C_k  for k = 1, . . . , n and p = 2, 3, . . .    (2.4)

Then, for all x, ν ∈ (0, ∞), the exponential tail bound (2.5) holds. Assume that Theorem 2.3 applies and that C_1, . . . , C_n in (2.4) can also be chosen to be deterministic. Then, with ν = Σ_{k=1}^n C_k, the unconditional bound (2.6) holds, so ||Σ_{k=1}^n X_k||_∞ has a sub-exponential tail. The next theorem derives a concentration inequality for ||Σ_n − Σ||_∞.

Theorem 2.4 Suppose that Assumption (A) holds. Then, there exists an absolute constant γ such that the concentration bound (2.7) holds.

We now combine Proposition 2.2 and Theorem 2.4 to deliver a result on the concentration of ||Σ_n^λ − Σ||_2, which is the main statement of this section.

Theorem 2.5 Suppose that Assumption (A) holds. For a given τ ∈ (0, ∞), consider a regularization parameter λ such that (2.8) holds, where γ is a large absolute constant. Then, with probability at least 1 − e^{−τ}, the bound (2.9) holds.

If the drift term is non-dominant, in the sense that ν_μ ≤ ν_{c,∞}, it follows that the regularization parameter λ should satisfy (2.10) for an absolute constant γ. To get a concentration probability as large as possible without impairing the rate implied by (2.9), we should choose τ ≍ log(d).
Moreover, the first term of the maximum in (2.10) is largest for 1/Δ_n ≥ (log(d) + τ) ν_{c,2}/ν_{c,∞}. In light of these observations, the following corollary is an immediate consequence of Theorem 2.5, so we omit the proof.

Corollary 2.6 Suppose that Assumption (A) holds with ν_μ ≤ ν_{c,∞} ≤ ν_{c,2}. Assume further that Δ_n^{−1} ≥ (ν_{c,2}/ν_{c,∞}) log(d) and choose the regularization parameter

λ = γ √( ν_{c,2} ν_{c,∞} Δ_n log(d) ),    (2.11)

where γ is a sufficiently large absolute constant. Then, with high probability, it holds that

||Σ_n^λ − Σ||_2² ≤ 3λ² rank(Σ) = 3γ² rank(Σ) ν_{c,2} ν_{c,∞} Δ_n log(d).    (2.12)

It follows from (2.12) that the estimation error of Σ_n^λ is closely related to the size of ν_{c,2} and ν_{c,∞}, which are both determined by the properties of the volatility process. It is reasonable to assume the norm of each row in σ is bounded by a constant not dependent on d, so that ν_{c,2} = O(d). As emphasized by Tropp (2015, Section 1.6.3), we can also often assume that ||c_s||_∞ and, hence, ν_{c,∞} can be chosen independently of d. When this is the case, the rate implied by (2.12) is rank(Σ) Δ_n d log(d), and since rank(Σ) ≤ d, Corollary 2.6 implies consistency of Σ_n^λ when both Δ_n → 0 and d → ∞, so long as d² log(d) = o(Δ_n^{−1}). If rank(Σ) can be further bounded by a constant (that does not depend on d), the growth condition on d improves to d log(d) = o(Δ_n^{−1}). In contrast, for the estimation error ||Σ_n − Σ||_2² → 0, one cannot expect a better condition than d² = o(Δ_n^{−1}), since this estimation error corresponds to a sum of d² squared error terms, each of order Δ_n. We denote by C_r, for a given non-zero integer r ≤ d, the subclass of d × d symmetric positive semidefinite matrices S_+^d whose effective rank is bounded by r:

C_r = { A ∈ S_+^d : tr(A)/||A||_∞ ≤ r }.    (2.13)

Compared to the rank, the effective rank is a more stable measure of the intrinsic dimension of a covariance matrix (see, e.g., Vershynin, 2010, Remark 5.53).
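The effective rank tr(A)/||A||_∞ appearing in the definition of C_r is simple to compute; the spiked toy matrix below is an assumption for illustration, showing how it differs from the ordinary rank:

```python
import numpy as np

def effective_rank(A):
    """Effective rank tr(A)/||A||_inf for a symmetric PSD matrix A."""
    s = np.linalg.eigvalsh(A)
    return A.trace() / s.max()

# one dominant "factor" eigenvalue and three small ones
A_spiked = np.diag([10.0, 0.5, 0.3, 0.2])
print(effective_rank(A_spiked))          # 11/10 = 1.1: effectively rank one
print(np.linalg.matrix_rank(A_spiked))   # ordinary rank: 4
```

The matrix is full rank, yet its effective rank is close to one, which is why C_r is the natural class for the "near low rank" matrices discussed in the text.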
In the following, we argue that Σ_n^λ, with λ of the form (2.11), is a minimax optimal estimator of Σ in the parameters ν_{c,2}, ν_{c,∞}, and rank(Σ) over the parametric class of continuous Itô processes generated by (2.1) with no drift (μ_s ≡ 0) and a constant volatility σ_s ≡ √A for A ∈ C_r. To this end, denote by P_A a probability measure under which (Y_t)_{t∈[0,1]} is defined as in (2.1) with no drift and constant variance c_s ≡ A. In this setting, we can choose ν_{c,2} = tr(A) and ν_{c,∞} = ||A||_∞, which by Corollary 2.6 means that (2.15) holds for an absolute constant γ and Δ_n^{−1} ≥ 2r log(d). Now, since the log-price increments Δ_1^n Y, . . . , Δ_{⌊Δ_n^{−1}⌋}^n Y are i.i.d. Gaussian random vectors under P_A, we can exploit existing results from the literature to assess the performance of Σ_n^λ. Hence, the following is effectively an immediate consequence of Theorem 2 in Lounici (2014). It shows that, up to a logarithmic factor, no estimator can do better than (2.15).

Theorem 2.7 Let C_r be given as in (2.13) and suppose that ⌊Δ_n^{−1}⌋ ≥ r². Then, there exist absolute constants β ∈ (0, 1) and γ ∈ (0, ∞) such that the minimax lower bound (2.16) holds.

In this section, we study the rank of Σ_n^λ relative to the number of non-negligible eigenvalues and, in particular, the true rank of Σ. In line with Section 2.1, we begin by stating a general non-probabilistic inequality in Theorem 2.8. In the formulation of this result, we recall that rank(A; ε) is the number of singular values of A exceeding ε ∈ [0, ∞). With this result in hand, we can rely on the exponential inequality for the quantity ||Σ_n − Σ||_∞ established in Theorem 2.4 to show that, with high probability and in addition to converging to Σ at a fast rate, Σ_n^λ automatically has the rank of Σ when we neglect eigenvalues of sufficiently small order. In particular, if Σ has full rank, but many of its eigenvalues are close to zero, Σ_n^λ is of low rank and reflects the number of important components (or factors) in Σ.
Corollary 2.9 Suppose that Assumption (A) holds with ν_μ ≤ ν_{c,∞} ≤ ν_{c,2} and fix δ ∈ (0, 1/2). Assume further that Δ_n^{−1} ≥ 2 (ν_{c,2}/ν_{c,∞}) log(d) and, for a sufficiently large constant γ depending only on δ, choose the regularization parameter λ as follows:

λ = γ √( ν_{c,2} ν_{c,∞} Δ_n log(d) ).    (2.18)

Then, with high probability, the error bound (2.19) holds and

rank(Σ; λ) ≤ rank(Σ_n^λ) ≤ rank(Σ; δλ).    (2.20)

If, in addition, λ ≤ s, where s is the smallest non-zero eigenvalue of Σ, then both rank(Σ_n^λ) = rank(Σ) and (2.19) hold with probability at least 1 − d^{−1}.

In the setting of Corollary 2.9, it follows that with high probability an eigenvalue s of Σ affects rank(Σ_n^λ) for large n if λ = o(s), while it does not if s = o(λ). The first condition says that, relative to the level of shrinkage, an eigenvalue is significant (or non-negligible), whereas the second says the opposite. Hence, the notion of negligibility depends on λ, which in turn depends on the model through the constants ν_{c,2} and ν_{c,∞}. However, we know that for Σ_n^λ to be a consistent estimator of Σ, a necessary condition is that λ → 0 as n → ∞, which implies that for an eigenvalue of Σ to be negligible, it must tend to zero as n increases. In particular, if d is fixed, rank(Σ_n^λ) = rank(Σ) with a probability tending to one as n → ∞. The following example illustrates a stylized setting, where many eigenvalues of Σ are negligible.

Example 2.10 Let r ∈ N be an absolute constant (i.e., independent of d and n). In line with the factor model studied in Aït-Sahalia and Xiu (2017), suppose that c_t is of the form

c_t = β e_t β^⊤ + g_t,    (2.21)

where e_t ∈ R^{r×r} and g_t ∈ R^{d×d} are predictable, symmetric, and positive definite processes, which are uniformly bounded such that ||e_t||_∞ ≤ C_e and ||g_t||_∞ ≤ C_g for some constants C_e, C_g = O(1). The matrix β ∈ R^{d×r} of factor loadings is deterministic and constant in time; a common assumption in the literature, which is also supported by Reiss, Todorov, and Tauchen (2015).
The form (2.21) of c_t can be motivated by a standard multi-factor model, where the total risk associated with any of the d assets can be decomposed into a systematic and an idiosyncratic component. The systematic component β e_t β^⊤ is composed of the loadings on the r underlying priced common factors in the economy, β, and the risk of those factors, e_t. The idiosyncratic component g_t is asset-specific and can therefore be reduced by diversification. Suppose that, given an initial set of d individual securities, we start forming a new set of (orthogonalized) portfolios by taking linear combinations of the original assets, as prescribed by the APT model of Ross (1976), and a standard way to implement factor models in practice when constructing sorting portfolios. Then, these new assets are generally diversified enough to assume that C_g = O(d^{−1}). To identify the number of driving factors, r, we assume that ||d^{−γ} β^⊤β − I_r||_∞ = o(1) (as d → ∞) and s_r(e_t) ≥ ε for some absolute constants γ ∈ [0, 1] and ε ∈ (0, ∞). This corresponds to Assumption 4 in Aït-Sahalia and Xiu (2017), but it is slightly more general, as the assets are assumed to be diversified portfolios. Now, given a suitable drift process (μ_t)_{t∈[0,1]}, Corollary 2.9 applies with the regularization parameter in (2.22), with probability at least 1 − d^{−1}. By assumption, |s_r(βEβ^⊤) − d^γ s_r(E)| = o(d^γ) and s_r(E) ≥ ε (the former follows from the arguments in the proof of Theorem 1 in Aït-Sahalia and Xiu, 2017), and hence d^γ = O(s_r(βEβ^⊤)). Combining this with (2.22), we conclude that rank(Σ_n^λ) = r with high probability when n is large. Or, to put it differently, rank(Σ_n^λ) is, with a probability tending to one, exactly the number of underlying factors in the model. We remark that the restrictions imposed above are too weak to ensure that the Frobenius estimation error ||Σ_n^λ − Σ||_2 tends to zero, meaning that our estimator can be used to detect the underlying factor structure even in very high-dimensional settings.
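A small numerical sketch of the factor structure (2.21) illustrates the eigenvalue separation that Example 2.10 exploits. The loadings β, the factor risk e_t, and the diversified idiosyncratic part g_t of order 1/d below are all hypothetical choices for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
d, r = 30, 3
beta = rng.normal(size=(d, r))               # hypothetical factor loadings
e = np.diag(rng.uniform(0.5, 1.5, size=r))   # factor risk e_t
g = np.eye(d) / d                            # idiosyncratic part, C_g = O(1/d)
c = beta @ e @ beta.T + g                    # spot variance as in (2.21)

eigs = np.sort(np.linalg.eigvalsh(c))[::-1]  # eigenvalues, descending
# r dominant eigenvalues; the remaining d - r are of order 1/d
print(np.round(eigs[:5], 3))
```

In this configuration the top r eigenvalues are well separated from the rest, so a soft-thresholded estimate of the spectrum recovers r as the number of non-negligible components.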
To ensure the estimation error is small, we would need to impose further restrictions. In view of Theorem 2.7 and Corollary 2.9, it follows that Σ_n^λ can be of low rank and accurate, given optimal tuning of λ. However, as evident from (2.18), λ depends on the latent spot variance process (c_t)_{t∈[0,1]} through ν_{c,2} and ν_{c,∞}, as well as the unknown absolute constant γ, whose value can be important in finite samples. In order to address this problem, and to facilitate the calculation of our estimator in Sections 4 and 5, we adopt a data-driven shrinkage selection by exploiting the resampling technique of Sabourin, Valdar, and Nobel (2015), developed in the context of LASSO estimation in the linear regression setup. In particular, we take an i.i.d. bootstrap sample of the log-price increments and use it to construct, for each b = 1, . . . , B, a matrix ε(b) that estimates the null matrix. We choose the shrinkage parameter as some appropriate measure of central tendency of the sequence (λ(1), . . . , λ(B)), such as the mean λ*_B = B^{−1} Σ_{b=1}^B λ(b) or the median λ*_B = median(λ(1), . . . , λ(B)). The above approach is rather intuitive. Indeed, ε(b) provides an estimate of the null matrix, but it is perturbed by randomness in the data. A good choice of shrinkage parameter λ(b) exactly shrinks the eigenvalues of ε(b) to zero (in view of problem (1.2) with Σ_n replaced by ε(b)). By Proposition 2.1, this means λ(b) = 2||ε(b)||_∞. Thus, the final choice λ*_B reflects how much shrinkage, on average, is required to remove the noise from the estimate. We may expect λ*_B to deliver a slightly more conservative level of shrinkage than (2.18), since it intends to approximate 2||Σ_n − Σ||_∞, which, with high probability, is a lower bound for (2.18) (see, e.g., Proposition 2.2). We remark that, although Δ_1^n Y, . . . , Δ_{⌊Δ_n^{−1}⌋}^n Y are generally both heteroscedastic and dependent, the i.i.d. bootstrap remains valid (Gonçalves and Meddahi, 2009). To provide further theoretical support for choosing the shrinkage parameter in this data-driven way, we analyze the quantity in (2.23), where, conditional on Σ_n, the random vectors Z_1, . . . , Z_{⌊Δ_n^{−1}⌋} are i.i.d.
Gaussian with mean vector 0 and covariance matrix Σ_n/⌊Δ_n^{−1}⌋. Equation (2.23) can be viewed as a stylized version of λ*_B, in which (I) the bootstrapped log-price increments are replaced by the i.i.d. Gaussian vectors Z_1, . . . , Z_{⌊Δ_n^{−1}⌋}, and (II) we draw an infinite number of bootstrap samples (B → ∞). Concerning (I), if the volatility is locally constant over the estimation window [0, 1] and ⌊Δ_n^{−1}⌋ is large, the log-price increments are approximately Gaussian, so the sampling distributions do not differ much. As for (II), although B is always set by the modeler in practice, it anyway needs to be large enough to stabilize the average. In turn, we can expect λ*_B and λ* to be close, and hence the practical difference in their outcome to be limited. This is further supported by the comparison of λ*_B and λ* in Section 4. In general, we cannot expect that λ* identifies (2.18), since ν_{c,2} and ν_{c,∞} are not unique (not even up to a scaling). However, when μ_s ≡ 0 and σ_s ≡ √A for a deterministic matrix A ∈ S_+^d \ {0}, we know from Section 2.1 that it is (close to) optimal to take ν_{c,2} = tr(A) and ν_{c,∞} = ||A||_∞, so that λ ≍ √( tr(A) ||A||_∞ Δ_n log(d) ). We show in the following result that, up to a multiplicative logarithmic factor and with high probability, λ* meets this criterion for all sufficiently large n, where γ is an absolute constant. As noted in Section 1, the rank of the local variance c_t is often much smaller than the rank of Σ. In fact, although Σ may be well-approximated by a matrix of low rank, we should expect that rank(Σ) = d. On the other hand, there are many situations where it is reasonable to expect that rank(c_t) is small, in which case it provides valuable insight about the complexity of the underlying model. This motivates developing a theory for the spot version of our penalized estimator, with particular attention to its ability to identify rank(c_t).
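A sketch of the data-driven shrinkage selection described above. The construction of ε(b) is our assumption here: we take it to be the difference between an i.i.d.-bootstrap recomputation of the RV and the original RV, so that ε(b) estimates the null matrix, and we set λ(b) = 2||ε(b)||_∞ as in the text:

```python
import numpy as np

rng = np.random.default_rng(2)

def select_lambda(dY, B=200):
    """Bootstrap shrinkage selection; dY has shape (n, d) of increments."""
    n = dY.shape[0]
    Sigma_n = dY.T @ dY                         # realized variance
    lams = np.empty(B)
    for b in range(B):
        idx = rng.integers(0, n, size=n)        # i.i.d. resampling with replacement
        dY_b = dY[idx]
        eps_b = dY_b.T @ dY_b - Sigma_n         # estimate of the null matrix
        lams[b] = 2 * np.linalg.norm(eps_b, 2)  # lambda(b) = 2 * spectral norm
    return np.median(lams)                      # or the mean, as in the text

# hypothetical increments: i.i.d. Gaussian, unit spot variance
d, n = 10, 2_000
dY = rng.normal(scale=np.sqrt(1.0 / n), size=(n, d))
lam_star = select_lambda(dY)
print(lam_star)
```

The returned λ*_B scales with the sampling noise of the RV, mirroring the idea that the shrinkage should be just large enough to annihilate a pure-noise estimate.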
To estimate c_t, we follow Jacod and Protter (2012) and apply a localized realized variance, which is defined over a block of ⌊h_n/Δ_n⌋ ≥ 2 log-price increments with h_n ∈ (0, 1). The RV computed over the time window [t, t + h_n], for t ∈ (0, 1 − h_n], is then defined as follows:

Σ_n(t; t + h_n) = Σ_{k : kΔ_n ∈ (t, t+h_n]} (Δ_k^n Y)(Δ_k^n Y)^⊤.

The corresponding penalized version Σ_n^λ(t; t + h_n) is computed as in (1.2), with Σ_n replaced by Σ_n(t; t + h_n), or by using Proposition 2.1. Then, the penalized estimator ĉ_n^λ(t) of c_t is given by

ĉ_n^λ(t) = h_n^{−1} Σ_n^λ(t; t + h_n),

and we write Σ(t_1; t_2) = ∫_{t_1}^{t_2} c_s ds for the QV over (t_1, t_2] for arbitrary time points t_1, t_2 ∈ [0, 1] with t_1 < t_2. Recall that, for a convex and increasing function ψ : [0, ∞) → [0, ∞) with ψ(0) = 0, the Orlicz norm of a random variable Z with values in the Hilbert space (R^{d×d}, ||·||_2) is defined as:

||Z||_ψ = inf{ c > 0 : E[ψ(||Z||_2 / c)] ≤ 1 }.    (3.5)

This setting includes, in particular, the L^p(P) norms (with ψ(s) = s^p), but also the ψ_1 and ψ_2 norms for sub-exponential (ψ(s) = e^s − 1) and sub-Gaussian (ψ(s) = e^{s²} − 1) random variables. For convenience, we further impose the mild restriction that the range of ψ is [0, ∞), so that it admits an inverse ψ^{−1} on [0, ∞).

Theorem 3.1 Suppose that Assumption (A) holds with ν_μ ≤ ν_{c,∞} ≤ ν_{c,2} and that h_n Δ_n^{−1} ≥ (ν_{c,2}/ν_{c,∞}) log(d). Take the regularization parameter λ as in (3.6), for a sufficiently large absolute constant γ. Then, the bound (3.7) holds with probability at least 1 − d^{−1} − ψ(√log(d))^{−1}, for some absolute constant κ.

The bound on the estimation error of ĉ_n^λ(t) builds on Corollary 2.6, but Theorem 3.1 further enforces a smoothness condition on (c_t)_{t∈[0,1]}. It entails a trade-off between the smoothness of the spot variance and the concentration probability associated with (3.7). For example, if (c_t)_{t∈[0,1]} is 1/2-Hölder continuous in L²(P), which corresponds to setting ψ(s) = s², then we end up with a concentration probability converging to one at rate log(d)^{−1}, which is slower than for the PRV. On the other hand, if (c_t)_{t∈[0,1]} is 1/2-Hölder continuous in the sub-Gaussian norm ψ(s) = e^{s²} − 1, the concentration probability converges to one at the rate d^{−1}, which is equivalent to the PRV.
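A minimal sketch of the localized estimator: compute the RV over the block [t, t + h_n], soft-threshold its eigenvalues as in Proposition 2.1, and divide by h_n. The Brownian toy model and the choice h_n = √Δ_n are assumptions for illustration:

```python
import numpy as np

def spot_prv(Y, Delta_n, t, h_n, lam):
    """Penalized spot variance estimate over the window (t, t + h_n]."""
    k0 = int(np.ceil(t / Delta_n))
    k1 = int(np.floor((t + h_n) / Delta_n))
    dY = np.diff(Y[k0:k1 + 1], axis=0)      # increments inside the window
    Sigma_local = dY.T @ dY                 # localized realized variance
    s, U = np.linalg.eigh(Sigma_local)
    s = np.maximum(s - lam / 2.0, 0.0)      # soft-threshold, cf. Prop. 2.1
    return (U * s) @ U.T / h_n              # rescale by the window length

rng = np.random.default_rng(3)
d, n = 4, 20_000
Delta_n = 1.0 / n
# hypothetical model with constant spot variance c_t = I_d
Y = np.cumsum(rng.normal(scale=np.sqrt(Delta_n), size=(n, d)), axis=0)
h_n = np.sqrt(Delta_n)                      # h_n of order sqrt(Delta_n)
c_hat = spot_prv(Y, Delta_n, t=0.5, h_n=h_n, lam=0.0)
print(np.round(np.diag(c_hat), 1))          # close to the true c_t = I_d
```

With only ⌊h_n/Δ_n⌋ ≈ √(1/Δ_n) increments in the block, the estimate is noisier than the global RV, which is the trade-off behind the window-length discussion that follows.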
Note that with d fixed, (3.7) reveals how to select the length of the estimation window h n optimally. The upper bound depends on ∆ n /h n and h n , and for these terms to converge to zero equally fast, we should take h n ≍ √ ∆ n . This is consistent with the literature on spot variance estimation; see, e.g., Jacod and Protter (2012) . The next result, which relies on Corollary 2.9, shows that the rank and performance ofĉ λ n (t) can be controlled simultaneously. Theorem 3.2 Suppose that Assumption (A) holds with ν µ ≤ ν c,∞ ≤ ν c,2 and fix δ ∈ (0, 1/4). Furthermore, let t ∈ [ hn 2 , 1 − 3hn 2 ] and assume that sup 0≤u 0 and ε ≤ s rank(ct) (c t ), then both (3.9) and rank(ĉ λ n (t)) = rank(c t ) hold with probability at least 1 − d −1 − ψ( log(d)) −1 . Consider the setting of Theorem 3.2. For the upper bound in (3.10) to be useful, we must have that ε > 0 or, equivalently, ν 2 c,ψ h 2 n < ν c,2 ν c,∞ ∆ n . (3.12) Suppose that ν c,2 , ν c,∞ , and ν c,ψ do not depend on n. The inequality (3.12) can then be achieved in large samples by choosing h n = o( √ ∆ n ). However, as pointed out above one should take h n ≍ √ ∆ n in order to achieve an optimal rate for the estimation error ĉ λ n (t) − c t 2 2 . In that case, (3.12) translates directly into an upper bound on ν c,ψ , which concerns the smoothness of (c t ) t∈[0,1] . When can the term rank(Σ(t − hn 2 ; t + 3hn 2 )) in the upper bound of (3.9) be replaced by rank(c t )? This question appears difficult to answer except in restricted settings. What is needed is that the rank of QV over a small window containing t is equal to rank(c t ) with high probability. In this context, note that although 1 ε Σ(t; t + ε) → c t as ε → 0, (3.13) for almost all t, there is no guarantee that rank(Σ(t; t + ε)) → rank(c t ), because the rank function is not continuous. In general, it only holds that rank(Σ(t; t + ε)) ≥ rank(c t ). 
Since c t is positive semidefinite and the QV is an aggregation of such matrices, this means that the null space of spot variance must be constant in a neighborhood of t in order to ensure that rank(Σ(t; t + ε)) = rank(c t ) when ε is sufficiently small. Furthermore, given the optimality of Σ λ n , one cannot expect to be able to replace rank(Σ(t − hn 2 ; t + 3hn 2 )) by a quantity of smaller magnitude. In the following example, we investigate the setting of a locally constant volatility process, in which case the upper bound in (3.9) simplifies. where (C n ) is a sequence of (possibly random) positive semidefinite matrices, and (N t ) t∈[0,1] is a Cox process with intensity (ξ t ) t∈[0,1] , such that sup t≤1 ξ t ≤ ν ξ for some absolute constant ν ξ . In particular, this covers the setting, where (N t ) t∈[0,1] is an inhomogeneous Poisson process. Note that (c t ) t∈[0,1] is piecewise constant with jumps at random time points, and for any t 1 , t 2 ∈ [0, 1] with t 1 < t 2 . This shows that Suppose further that the sequence (C n ) n≥0 satisfies for suitably chosen constants ν c,2 ,ν c,2 , and ν c,∞ with ν c,∞ ≤ ν c,2 . With ψ(s) = s 2 , and by using (3.15), we can take ν c,ψ = 2 √ν c,2 ν ξ . Hence, provided the drift term (µ t ) t∈[0,1] meets the imposed assumptions, Theorem 3.2 is applicable for this choice of (c t ) t∈[0,1] . In particular, given the estimator c λ n (t) is tuned according to (3.8), we can combine Theorem 3.2 with (3.16) to conclude that In this section, we do a Monte Carlo analysis to inspect the finite sample properties of the PRV, Σ λ n . The aim here is to demonstrate the ability of our estimator to detect the presence of near deficient rank in a large-dimensional setting. In view of Theorem 3.2, this can be achieved by a proper selection of λ. We simulate a d = 30-dimensional log-price process Y t . The size of the vector corresponds to the number of assets in our empirical investigation. 
We assume Y t follows a slightly modified version of the r = 3-factor model proposed in Aït-Sahalia and Xiu (2019): where F t ∈ R r is a vector of systematic risk factors, which has the dynamics: and W t ∈ R r is a standard Brownian motion. The drift term α ∈ R r is constant: α = (0.05, 0.03, 0.02) ⊤ . The random volatility matrix σ t = diag(σ 1,t , σ 2,t , σ 3,t ) ∈ R r×r , where diag( · ) is a diagonal matrix with coordinates · , is simulated as a Heston (1993) square-root process: for j = 1, . . . , r, where W̄ t ∈ R r is an r-dimensional standard Brownian motion independent of W t . The parameters of the factor volatility dynamics are set as in Aït-Sahalia and Xiu (2019). The idiosyncratic component Z t ∈ R d is given by (4.6) for j = 1, . . . , d, and B t ∈ R d and B̄ t ∈ R d are independent d-dimensional standard Brownian motions. Thus, g t is the instantaneous variance of the unsystematic component. We set κ Z = 4, θ Z = 0.25, and η Z = 0.06. Hence, the idiosyncratic error varies independently in the cross-section of assets, but the parameters controlling the dynamics are common. The above setup implies c t = βσ t ρσ t β ⊤ + g t , (4.7) so the spot covariance matrix has full rank at every time point t. However, it has only r = 3 large eigenvalues associated with the systematic factors, while the remaining d − r eigenvalues, associated with the idiosyncratic variance, are relatively small. Note that in contrast to Aït-Sahalia and Xiu (2019), we allow for time-varying idiosyncratic variance. We set ∆ n = 1/n with n = 78. This corresponds to a 5-minute sampling frequency within a 6.5-hour window. Hence, d is relatively large compared to n. We construct 10,000 independent replications of this model. At the beginning of each simulation, we draw the associated factor loadings β ∈ R d×r at random such that the first column (interpreted as the loading on the market factor) is drawn from a uniform distribution on (0.25, 1.75), i.e. β i1 ∼ U(0.25, 1.75), i = 1, . . . , d. 
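A compact Euler-scheme sketch of this simulation design is given below. Only the quantities stated in the text (α, the idiosyncratic parameters κ Z , θ Z , η Z , the loading distributions, n = 78, d = 30, r = 3) are taken from the paper; the factor-volatility parameters, the starting variances, and the function name are illustrative placeholders.

```python
import numpy as np

def simulate_factor_model(n=78, d=30, r=3, seed=0):
    """Euler discretization of the r-factor model Y = beta * F + Z sketched above.
    Placeholder values (factor-vol dynamics, initial variances) are NOT the
    paper's calibration; they merely make the sketch self-contained."""
    rng = np.random.default_rng(seed)
    dt = 1.0 / n
    alpha = np.array([0.05, 0.03, 0.02])              # factor drift (as in the text)
    kZ, thZ, etaZ = 4.0, 0.25, 0.06                   # idiosyncratic Heston parameters
    # loadings: market betas U(0.25, 1.75), remaining columns N(0, 0.5^2)
    beta = np.column_stack([rng.uniform(0.25, 1.75, d),
                            rng.normal(0.0, 0.5, (d, r - 1))])
    sig2 = np.full(r, 0.1)                            # factor spot variances (placeholder)
    g = np.full(d, thZ)                               # idiosyncratic spot variances
    Y = np.zeros((n + 1, d))
    for k in range(n):
        dW = rng.normal(0.0, np.sqrt(dt), r)
        dB = rng.normal(0.0, np.sqrt(dt), d)
        dF = alpha * dt + np.sqrt(sig2) * dW          # systematic factor increment
        dZ = np.sqrt(g) * dB                          # idiosyncratic increment
        Y[k + 1] = Y[k] + beta @ dF + dZ
        # square-root (Heston-type) variance updates, reflected at zero
        g = np.abs(g + kZ * (thZ - g) * dt
                   + etaZ * np.sqrt(g) * rng.normal(0.0, np.sqrt(dt), d))
        sig2 = np.abs(sig2 + 4.0 * (0.1 - sig2) * dt
                      + 0.2 * np.sqrt(sig2) * rng.normal(0.0, np.sqrt(dt), r))
    return Y, beta
```

The resulting spot covariance has full rank but only r dominant eigenvalues, which is the environment in which the PRV's rank detection is stress-tested.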
The range of the support is consistent with the realized beta values reported in Table 1 in Section 5. The remaining columns are generated as β ij ∼ N(0, 0.5 2 ), i = 1, . . . , d and j = 2, 3. In Panel A of Figure 1 , we plot the relative size of the three largest eigenvalues, extracted from the corresponding RV, together with the average of the remaining 27 eigenvalues. In Panel B, we include a histogram of the relative size of the eigenvalues across simulations. We observe that it is generally challenging to distinguish important factors from idiosyncratic variation, since the relative size of the eigenvalues decays smoothly, with the exception of the largest (and perhaps the second-largest) eigenvalue. As explained in Section 2.3, we employ an i.i.d. bootstrap to select λ in practice. In each simulation, we form B = 10,000 additional bootstrap samples (drawn with replacement from the original n = 78 log-returns). For the bth bootstrap sample, b = 1, . . . , B, we compute the bootstrap RV, Σ n (b), and choose λ(b) such that the perturbation ε(b) = Σ n (b) − Σ n is exactly shrunk to the null matrix, i.e. λ(b) = 2 ε(b) ∞ . As a final estimate of λ, we extract the median across bootstrap replicates, i.e. λ * B = median(λ(1), . . . , λ(B)). In Panel A of Figure 2 , we report the distribution of the λ * B parameter across simulations, expressed in percent of the maximal eigenvalue of RV. As a reference point, we also superimpose the distribution of λ * B based on the Gaussian approximation, but there is hardly any difference. Overall, the penalization is relatively stable, with a tendency to shrink around one-fourth to one-third of the largest eigenvalue most of the time. In Panel B, we plot a histogram of the relative frequency of the rank of the PRV. In general, the PRV does a good job at identifying the number of driving factors, given the challenging environment. 
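The bootstrap selection rule just described (resample the n log-returns with replacement, compute each bootstrap RV, set λ(b) = 2 ε(b) ∞ with · ∞ the spectral norm in the paper's notation, and take the median) can be sketched as follows; the function name is illustrative.

```python
import numpy as np

def select_lambda_bootstrap(Y, B=1000, seed=0):
    """i.i.d. bootstrap choice of the shrinkage parameter: for each of B
    resamples of the n log-returns, set lambda(b) = 2 * spectral norm of
    (bootstrap RV - RV), and return the median over b = 1, ..., B."""
    rng = np.random.default_rng(seed)
    dY = np.diff(Y, axis=0)                  # n x d matrix of log-returns
    n = dY.shape[0]
    rv = dY.T @ dY                           # realized variance
    lams = np.empty(B)
    for b in range(B):
        idx = rng.integers(0, n, n)          # resample increments with replacement
        rv_b = dY[idx].T @ dY[idx]           # bootstrap realized variance
        # spectral norm of the perturbation eps(b) = RV(b) - RV
        lams[b] = 2.0 * np.linalg.norm(rv_b - rv, ord=2)
    return np.median(lams)
```

Thresholding the eigenvalues of RV at this λ then produces the PRV, so the median over bootstrap perturbations directly controls how many small eigenvalues survive.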
Although there are variations in the rank across simulations, caused by the various sources of randomness incorporated in our model, the picture is close to the truth. This means we tend to identify about one to three driving factors. The average rank estimate is 1.79, so as expected there is a slight tendency to overshrink, leading to a small downward bias. Based on these findings, we can confidently apply the PRV in the empirical application. Note. In Panel A, we report a kernel density estimate of the shrinkage parameter λ, expressed in percent of the maximal eigenvalue of the RV matrix. In Panel B, we show the relative frequency histogram of the associated rank of the PRV. In this section, we apply the PRV to an empirical dataset. The sample period is from January 3, 2007 through May 29, 2020 (3,375 trading days in total) and includes both the financial crisis around 2007 - 2009 and, partly, the recent events related to Covid-19. Note. "code" is the ticker symbol, "N " is the number of transactions, "σ RV " is the annualized square-root realized variance, "σ ε " is the standard deviation of the idiosyncratic component. The latter is computed as σ 2 ε = σ 2 RV − β 2 σ 2 SPY , where β is the realized beta between the asset and SPY (acting as market portfolio). We look at high-frequency data from the 30 members of the Dow Jones Industrial Average (DJIA) index as of April 6, 2020. The ticker codes of the various firms are listed in Table 1 , along with selected descriptive statistics. As readily seen from Table 1 , most of these companies are very liquid. In order to compute a daily RV matrix estimate, we collect the individual 5-minute log-return series spanning the 6.5 hours of trading on U.S. stock exchanges from 9:30am to 4:00pm EST. Hence, our analysis is based on a relatively large cross-section (i.e., d = 30) compared to the sampling frequency (i.e., n = 78). The realized beta in Table 1 is calculated with SPY acting as the market portfolio. 
The latter is an exchange-traded fund that tracks the S&P 500, and its evolution is representative of the overall performance of the U.S. stock market. The dispersion of realized beta is broadly consistent with the simulated values generated in Section 4. In Figure 3 , we depict the factor structure in the time series of RV. In Panel A, we compute on a daily basis the proportion of the total variance (i.e., trace) explained by each eigenvalue, whereas Panel B reports the sample average of this statistic. Consistent with Aït-Sahalia and Xiu (2019), we observe a pronounced dynamic evolution in the contribution of each eigenvalue to RV, with notable changes corresponding to times of severe distress in financial markets. The first factor captures the vast majority of aggregate return variation (about 35% on average). It is followed by a few smaller (but still important) factors accounting for an incremental 25% - 30% of the total variance, whereas the last 25 or so eigenvalues are relatively small. Next, we turn our attention to the PRV estimator. To select the shrinkage parameter λ, we again follow the i.i.d. bootstrap procedure. The relative frequency histogram of the associated rank of the PRV is reported in Panel A of Figure 4 , whereas Panel B reports a three-month (90-day) moving average of the rank. The vast majority (more than 95%) of the daily rank estimates are between one and three, which is consistent with a standard Fama-French three-factor interpretation of the data. There are relatively few rank estimates at four and five, and the rank never exceeds six (with fewer than a handful at the latter). Note. In Panel A, we report the relative frequency of the rank of the PRV. In Panel B, we compare the estimated rank to the effective rank, where both are smoothed over a three-month moving average window. 
In Panel B, we observe that the rank varies over time in an intuitive fashion, often dropping close to a single-factor representation during times of crisis, where correlations tend to rise. As a comparison, we compute the effective rank of the RV (cf. (2.14)), which does not depend on a shrinkage parameter. There is a high association between the two series, which is consistent with a soft-thresholding that eliminates smaller eigenvalues of the RV. In this paper, we develop a novel and powerful penalized realized variance estimator, which is applicable for estimating the quadratic variation of high-dimensional continuous-time semimartingales under a low rank constraint. Our estimator relies on regularization and adapts the principal ideas of the LASSO from regression analysis to the field of high-frequency volatility estimation in a high-dimensional setting. We derive a non-asymptotic analysis of our estimator, including bounds on its estimation error and rank. The estimator is found to be minimax optimal up to a logarithmic factor. We design a completely data-driven procedure for selecting the shrinkage parameter based on an i.i.d. bootstrap. In a simulation study, the estimator is found to possess good properties. In our empirical application, we confirm a low rank environment that is consistent with a three-factor structure in the cross-section of log-returns from the large-cap segment of the U.S. stock market. This section presents the proofs of the results from the main text, and a few auxiliary results. Proof of Proposition 2.1. Since the map A → L n (A) ≡ Σ n − A 2 2 + λ A 1 is strictly convex, the minimizer A of L n is uniquely determined by the property that 0 ∈ R d×d belongs to the subdifferential ∂L n (A) of L n at A; here we employ the characterization of the subdifferential of the nuclear norm from Watson (1992). With Σ λ n given as in (2.2), we find that ∂L n ( Σ λ n ) coincides with the set of matrices V of the form: where W ∈ R d×d is such that W ∞ ≤ 1, and U is the linear span of k : s k ≤λ/2 {u k }. 
With W = 2 λ k : s k ≤λ/2 s k u k u ⊤ k , it follows that W ∞ ≤ 1 by submultiplicativity of the norm and P U W P U = W . Hence, (A.2) shows that 0 ∈ ∂L n ( Σ λ n ), so Σ λ n is the unique minimizer of L n . Proof of Proposition 2.2. The structure of the proof is based on Theorem 1 from Koltchinskii, Lounici, and Tsybakov (2011) , but is modified to fit our setting. Starting from the definition of Σ λ n , and since L n (A) = Σ n − A 2 2 + λ A 1 , it follows that for any A ∈ R d×d . Note also that By using the above identity and (A.3), The first term in the minimum follows by observing that where we employ the trace duality A 1 , A 2 ≤ A 1 1 A 2 ∞ , the triangle inequality, and the assumption that 2 Σ n − Σ ∞ ≤ λ. To show the second part of the proposition, we observe that if B is a subgradient of L n at Σ λ n , then by definition for all matrices A. This shows that B + 2 Σ n is a subgradient of the function A → A 2 2 + λ A 1 at Σ λ n and, thus, B = 2 Σ λ n − 2 Σ n + λ V for an appropriate V ∈ ∂ Σ λ n 1 . Moreover, because Σ λ n is a minimizer of L n , there must exist B ∈ ∂L n ( Σ λ n ) such that B, Σ λ n − A ≤ 0 for all A ∈ R d×d (see, for instance, Clarke, 1990, Section 2.4) . Combining these facts, we establish that for some V ∈ ∂ Σ λ n 1 and any A ∈ R d×d . Note the identity Now, fix A ∈ R d×d and consider a generic matrix V ∈ ∂ A 1 . Then, if we subtract 2Σ+λV, Σ λ n −A on both sides of (A.4), exploit (A.5), and note that V − V, Σ λ n − A ≥ 0 by the monotonicity of subdifferentials (Clarke, 1990 , Proposition 2.2.9), we deduce that We shall bound each of the last two terms on the right-hand side of (A.6), starting with V, A − Σ λ n . According to Watson (1992) , with r = rank(A) and A = r k=1 s k u k v ⊤ k being the SVD of A, the subdifferential ∂ A 1 has the characterization: Here, S 1 and S 2 denote the span of {u 1 , . . . , u r } and {v 1 , . . . , v r }, respectively. 
By the polar decomposition, one finds that W ∈ R d×d with W ∞ = 1 and W, P S ⊥ 1 Σ λ n P S ⊥ 2 = P S ⊥ 1 Σ λ n P S ⊥ 2 1 . Fixing V to be the subgradient associated with this choice of W , The last term on the right-hand side of (A.6) can be handled by setting B = Σ n −Σ−P S ⊥ 1 ( Σ n −Σ)P S ⊥ 2 . The Cauchy-Schwarz inequality and trace duality then delivers the estimate: For any given matrix M ∈ R d×d : This shows that B 2 ≤ λ √ r. Hence, by combining (A.6) -(A.8) we get By subtracting Σ λ n − A 2 2 on both sides and using the inequality α 2 /4 ≥ αβ − β 2 with α = 3λ √ r and β = Σ λ n − A 2 , we arrive at the second term in the minimum (with A = Σ): Proof of Theorem 2.3. We set S n = n k=1 X k and ν n = n k=1 C k . The subadditivity of the maximal eigenvalue function λ max (·) implies that whenever λ max (S n ) ≥ x, ν n ≤ ν and θ ∈ (0, 1). By exponentiating both sides of (A.9), applying the spectral mapping theorem and that λ max (A) ≤ tr(A) for any positive semidefinite matrix A, where Y n = tr exp θS n − θ 2 2(1 − θ) ν n I d . Suppose for the moment that R = 1, so that E[X p k | G k−1 ] ∞ ≤ p! 2 C k for all k and integers p ≥ 2. Since E X k | G k−1 = 0, it then follows that Thus, the matrix is positive semidefinite, and hence it follows by Tropp (2011, Lemma 2.1) that (Y k ) n k=1 is a positive supermartingale with Y 0 = d. By combining this observation with (A.10) and choosing θ = x/(x+ν), we conclude that The general result can now be deduced by taking X k = X k /R. A key ingredient to verify the conditions of Theorem 2.3 and prove Theorem 2.4 below is a suitable version of the Burkholder-Davis-Gundy inequality. We shall employ the one from Seidler and Sobukawa (2003, Lemma 2. 2), which is restated here for convenience. Note that, although their result applies to martingales with values in a general Hilbert space, we specialize the formulation here to the finite-dimensional setting. Lemma A.1 (Seidler and Sobukawa (2003) ) Let m ∈ N and T ∈ (0, ∞). 
For any constant Moreover, one can always take With the choice in (A.11), γ p is of smaller asymptotic order (as p → ∞) than the constant associated with the usual Burkholder-Davis-Gundy inequalities available by application of Itô's formula (cf. the proof of Proposition 2.1 in Marinelli and Röckner, 2016) . Proof of Theorem 2.4. We decompose the log-price into the drift and volatility (A.12) Jensen's inequality implies that for the first term in (A.12), The fourth term can be bounded as: To handle the third term in (A.12), we note it has the form a 3 = is a martingale difference sequence with respect to the filtration (F k∆n ) ⌊∆ −1 n ⌋ k=0 . Thus, we can apply Theorem 2.3 if we can control for integer p ≥ 2. First note that, since B k ∞ ≤ ν c,∞ ∆ n , the sub-multiplicativity of · ∞ and the binomial theorem imply that We start by analyzing the second term on the right-hand side of (A.15). For an arbitrary integer m ≥ 1 and any F (k−1)∆n -measurable set A, it follows from Lemma A.1 that and, thus, for a suitable absolute constant α ≥ 1. In particular, (A.16) shows that E A k p−i ∞ | F (k−1)∆n ] ≤ (αpν c,2 ∆ n ) p−i for i = 1, . . . , p. Together with the mean value theorem and Stirling's formula, we can therefore establish that Now, look at the first term on the right-hand side of (A.15). For a fixed u ∈ R d with u 2 = 1, it follows from the Cauchy-Schwarz inequality that The first term in (A.18) is handled by (A.16) with m = 2(p − 1): Next, observe that tr(c t )dt, k = 1, . . . , ⌊∆ −1 n ⌋, is a martingale difference sequence. By (A.16), we see that for a suitable absolute constant α ≥ 2, and thus (2.6) ensures that P n k=1 X k > y ≤ τ ν c,∞ ν c,2 y , where we assumeα in τ is larger than α. This estimate together with (A.24) means that a 2 ≤ τ x 5 . (A.25) By combining (A.13) -(A.14), (A.23) and (A.25), we conclude that if x ≥ 5∆ n max{ν µ , ν c,∞ } and x 2 25νµ∆n − ν c,2 ≥ ν c,2 5νc,∞ x. 
In particular, (A.26) holds whenever x ≥ γ max ν c,2 νµ∆n νc,∞ , ν c,2 ν µ ∆ n , ν c,∞ ∆ n , where γ is a sufficiently large absolute constant. Proof of Theorem 2.5. Consider a fixed τ ∈ (0, ∞). Theorem 2.4 implies the existence of an absolute constant α such that P(2 Σ n − Σ ∞ > λ) ≤ 6d exp − λ αν c,2 ν c,∞ ∆ n min{λ, ν c,∞ } as long as λ ≥ α max ν c,2 νµ∆n νc,∞ , ν c,2 ν µ ∆ n , ν c,∞ ∆ n . Thus, under this restriction on λ it follows that 2 Σ n − Σ ∞ ≤ λ with probability at least 1 − e −τ if 6d exp − λ αν c,2 ν c,∞ ∆ n min{λ, ν c,∞ } ≤ exp (−τ ) or, in particular, if λ ≥ γ max ν c,2 ν c,∞ ∆ n (log(d) + τ ), ν c,2 ∆ n (log(d) + τ ), ν c,2 ν µ ∆ n ν c,∞ (A.27) for a suitably chosen absolute constant γ. In view of Proposition 2.2, the proof is complete. Proof of Theorem 2.7. Under P A , we have that Z k = ∆ n k Y / √ ∆ n , k = 1, . . . , ⌊∆ −1 n ⌋, are i.i.d. Gaussian random vectors with covariance matrix A. Consequently, since ⌊∆ −1 n ⌋ ≥ r 2 , (Lounici, 2014, Theorem 2) implies that inf Σ sup A∈Cr P A Σ − A 2 2 > γ A 2 ∞ r e (A) rank(A) ⌊∆ −1 n ⌋ ≥ β (A.28) for suitable absolute constants β ∈ (0, 1) and γ ∈ (0, ∞), and this proves the result. with probability at least 1−d −1 for a suitable absolute constant κ 2 . Thus, by using that r e (A) log(d) = o(∆ −1 n ), 1 2 A ∞ ≤ Σ n ∞ ≤ 3 2 A ∞ (A.33) on the same event for all sufficiently large n. Moreover, Lounici (2014, Lemma 2) shows that, with probability at least 1 − d −1 , | tr( Σ n ) − tr(A)| ≤ κ 3 tr(A) max log(d) for an absolute constant κ 3 . Consequently, on that event we have with probability at least 1 − 2d −1 for all sufficiently large n and some absolute constant γ. Since the first term of the maximum is dominating when ∆ −1 n ≥ r e (A), which is the case when n is large, the proof is complete. Proof of Theorem 3.1. First, let δ n ∈ {0, 1} be defined such that ⌊(t + h n )∆ −1 n ⌋ − ⌊t∆ −1 n ⌋ = ⌊(h n + δ n ∆ n )∆ −1 n ⌋, (A.35) and set τ (s) = εs + ⌊t∆ −1 n ⌋∆ n with ε = h n + δ n ∆ n . 
Clearly,Ȳ s = Y τ (s) , s ∈ [0, 1], is a diffusion with driftμ s = εµ τ (s) , volatilityσ s = √ εσ τ (s) , and driving (standard) Brownian motionW s = (W τ (s) − W τ (0) )/ √ ε. Moreover, its RV at time 1 based on a sampling frequency of∆ n = ∆ n /ε coincides with Σ n (t; t + h n ): Σ n (t; t + h n ) = ⌊∆ −1 n ⌋ k=1 (∆ n kȲ )(∆ n kȲ ) ⊤ ,∆ n kȲ =Ȳ k∆n −Ȳ (k−1)∆n . Hence, the first step is to apply Corollary 2.6 to the process (Ȳ s ) s∈[0,1] with sampling frequency∆ n . To this end, note that Since we also have ε ∈ [h n , 2h n ] and h n /∆ ≥ 2 ν c,2 νc,∞ log(d), it follows that the assumptions of Corollary 2.6 are satisfied and we deduce that Σ λ n (t; t + h n ) −Σ with probability at least 1 − d −1 when λ meets (3.6) and γ is a sufficiently large absolute constant. HereΣ denotes the QV of (Ȳ s ) s∈[0,1] at time 1. By dividing both sides of the inequality (A.36) by h 2 n , ĉ λ n (t) −Σ h n 2 2 ≤ 3γ 2 ν c,2 ν c,∞ ∆ n rank(Σ) log(d) h n . (A.37) The error h −1 nΣ − ε −1Σ 2 can be bounded in the following way using that h n /∆ n ≥ ν c,2 2νc,∞ : Σ h n −Σ ε 2 2 ≤ ν c,2 ∆ n h n 2 ≤ 2ν c,2 ν c,∞ ∆ n h n . (A.38) Now, for any given β ∈ (1, ∞), we want to argue that ε −1Σ − c t 2 is small with probability 1 − β −1 . To do so, note initially that t ∈ A n := [⌊t∆ −1 n ⌋∆ n , ⌊t∆ −1 n ⌋∆ n + ε] and that, for any s, u ∈ A n , we have ν c,ψ ≥ c s − c u ψ / √ 2h n . Using these two facts, and by relying on Jensen's and Markov's inequality, we do the following computations for an arbitrary x > 0: which applies with probability at least 1−d −1 −ψ( log(d)) −1 for a suitably chosen absolute constant κ. The inequality (3.9) follows immediately from the fact that rank(Σ) ≤ rank(Σ(t − hn 2 ; t + 3hn 2 )). To establish the bounds (3.10) on rank(ĉ λ n (t)) note initially that, from the proof of Theorem 3.1 (particularly, (A.39)), we may in fact assume that Σ ε − c t 2 2 ≤ 2h n ν 2 c,ψ log(d) on the event that we are considering. 
Consequently, since singular values are Lipschitz continuous with constant 1 with respect to · 2 , s k Σ ε − s k (c t ) ≤ ν c,ψ 2h n log(d) for k = 1, . . . , d. By using this observation, the fact that ε ∈ [h n , 2h n ], and the explicit expression for λ, the following two implications can be deduced: s k (Σ) < λ =⇒ s k (c t ) < γ ν c,2 ν c,∞ ∆ n h n + ν c,ψ h n log(d). We have also imposed the innocent assumption that γ is chosen such that δγ ≥ √ 2. From these two implications we conclude that rank(Σ; 2δλ) ≤ rank(c t ; ε) and rank(Σ; λ) ≥ rank(c t ; ε), which (in view of (A.40)) establishes (3.10). The last statement in the result is a direct consequence of (3.10), and thus the proof is complete.

References
Using principal component analysis to estimate a high-dimensional factor model with high-frequency data
Answering the skeptics: Yes, standard volatility models do provide accurate forecasts
Modeling and forecasting realized volatility
Convex multi-task feature learning
Consistency of trace norm minimization
A central limit theorem for realized power and bipower variations of continuous semimartingales
Econometric analysis of realized covariation: High frequency based covariance, regression, and correlation in financial economics
High-dimensional minimum variance portfolio estimation based on high-frequency data
Exact matrix completion via convex optimization
Optimization and nonsmooth analysis
A general version of the fundamental theorem of asset pricing
Central limit theorems for approximate quadratic variations of pure jump Itô semimartingales
Testing the maximal rank of the volatility process for continuous diffusions observed with noise
Bootstrapping realized volatility
A blocking and regularization approach to high dimensional realized covariance estimation
On estimation of quadratic variation for multivariate pure jump semimartingales
A closed-form solution for options with stochastic volatility with applications to bond and currency options
A variational approach of the rank function
Ridge regression: Biased estimation for nonorthogonal problems
Asymptotic properties of realized power variations and related functionals of semimartingales
Estimation of the Brownian dimension of a continuous Itô process
A test for the rank of the volatility process: The random perturbation approach
Discretization of Processes
Concentration inequalities and moment bounds for sample covariance operators
Nuclear-norm penalization and optimal rates for noisy low-rank matrix completion
High-dimensional covariance matrix estimation with missing observations
Econometric analysis of vast covariance matrices using composite realized kernels and their application to portfolio choice
Non-parametric threshold estimation for models with stochastic diffusion coefficient and jumps
On the maximal inequalities of Burkholder, Davis and Gundy
On some extensions of Bernstein's inequality for self-adjoint operators
Estimation of (near) low-rank matrices with noise and high-dimensional scaling
Guaranteed minimum-rank solutions of linear matrix equations via nuclear norm minimization
Nonparametric test for a constant beta between Itô semi-martingales based on high-frequency data
The arbitrage theory of capital asset pricing
A permutation approach for selecting the penalty parameter in penalized model selection
Exponential integrability of stochastic convolutions
Regression shrinkage and selection via the lasso
User-friendly tail bounds for sums of random matrices
Introduction to the non-asymptotic analysis of random matrices
Vast volatility matrix estimation for high-frequency financial data
Characterization of the subdifferential of some matrix norms
On the estimation of integrated covariance matrices of high dimensional diffusion processes

The second term is treated as: (A.20) where Lemma A.1 is used here for the conditional expectation. 
Consequently, by combining (A.18) -(A.20), we get the estimateSince this analysis holds for an arbitrary unit vector u, the right-hand side of (A.21) is an upper bound for E[A p k | F (k−1)∆n ] ∞ . This fact, together with (A.15) and (A.17), shows thatfor a suitable absolute constantα ≥ 2. Now, it follows that Theorem 2.3 is applicable with R =αν c,2 ∆ n andand 4νR 2 ≤ 4α 2 ν c,2 ν c,∞ ∆ n , the inequality (2.6) implies thatIn particular, (A.22) shows that a 3 ≤ τ x 5 .(A.23)Finally, for the second term in (A.12) we invoke the Cauchy-Schwarz inequality and consider an x for which x 2 25νµ∆n − ν c,2 ≥ ν c,2 5νc,∞ x to deduce the following initial bound:Proof of Theorem 2.8. Setr = rank( Σ λ n ). To show the upper bound in (2.17) it suffices to argue that Σ has at leastr singular values which are larger than (λ −λ)/2 or, equivalently, sr(Σ) ≥ (λ −λ)/2. Observe first that the representation (2.2) implies sr( Σ n ) > λ/2 (recall that s k (A) refers to the kth largest singular value of A). By using this inequality together with the fact that A → s k (A) is Lipschitz continuous with constant 1 with respect to the spectral norm, we establish thatIn order to prove the lower bound of (2.17), we need to show that s r λ ( Σ λ n ) > 0 with r λ = rank(Σ; λ). However, this follows immediately from the following sequence of inequalities:The last part of the result is a direct consequence of the inequality (2.17), since rank(Σ; x) = rank(Σ) for x ∈ (0, s]. This finishes the proof.Proof of Corollary 2.9. For an arbitrary numberλ ∈ (0, ∞), Proposition 2.2 and Theorem 2.8 imply that both Σ λ n − Σ 2 2 ≤ 3λ 2 rank(Σ) (A.29) and rank(Σ; λ) ≤ rank( Σ λ n ) ≤ rank(Σ; 1 2 (λ −λ)) (A.30) on the set A(λ) = {2 Σ n − Σ ∞ ≤λ} whenever λ >λ. 
By the proof of Theorem 2.5, in particular (A.27) together with the fact that ν µ ≤ ν c,∞ , it follows that for any τ ∈ (0, ∞), the set A(λ) has probability at least 1 − exp(−τ ) if λ =γ max ν c,2 ν c,∞ ∆ n (log(d) + τ ), ν c,2 ∆ n (log(d) + τ ) (A.31) for a sufficiently large absolute constantγ. By considering τ = log(d) and using that ∆ −1 n ≥ 2 ν c,2 νc,∞ log(d), the maximum in (A.31) equals its first term. With this choice of τ , the set A(λ) has probability at least 1 − d −1 , and (λ −λ)/2 ≥ δλ when λ is given by (2.18) as long as we choose γ ≥γ √ 2/(1 − 2δ). Consequently, by plugging the specific values of λ andλ into (A.29) and (A.30), we have established the first part of the result. The last part of the result follows immediately from Theorem 2.8.Proof of Theorem 2.11. By Theorem 4 in Koltchinskii and Lounici (2017) , we deduce that λ * max tr( Σn) Σn ∞ ⌊∆ −1 n ⌋ , tr( Σn)almost surely, for some absolute constant κ 1 . From the proof of Theorem 2.5 and the assumptions on µ t and σ t ,It follows that, with probability at least 1 − β −1 , Σ ε − c t for a suitably chosen absolute constant κ. Since A n ⊆ [t − ∆ n , t + h n + ∆ n ] and h n ≥ 2∆ n , it follows that rank(Σ) ≤ rank(Σ(t − hn 2 ; t + 3hn 2 )), and the proof is complete.Proof of Theorem 3.2. Similarly to the proof of Theorem 3.1, we consider the time-changed processȲ s = Y τ (s) , s ∈ [0, 1], and apply Corollary 2.9 to deduce that, with probability at least 1 − d −1 , ĉ λ n (t) −Σ h n 2 2 ≤ 3γ 2 ν c,2 ν c,∞ ∆ n rank(Σ) log(d) h n and rank(Σ; λ) ≤ rank(ĉ λ n (t)) ≤ rank(Σ; 2δλ). (A.40)Here we recall that τ (s) = εs + ⌊t∆ −1 n ⌋∆ n and ε = h n + δ n ∆ n , where δ n ∈ {0, 1} is chosen such that (A.35) holds. By following the exact same arguments as in the proof of Theorem 3.1 we obtain the estimate ĉ λ n (t) − c t 2 2 ≤ κγ 2 ν c,2 ν c,∞ ∆ n rank(Σ) h n + h n ν 2 c,ψ log(d),