key: cord-0259200-dlyew29g
authors: Prakash, Arjun; James, Nick; Menzies, Max; Francis, Gilad
title: Structural clustering of volatility regimes for dynamic trading strategies
date: 2020-04-21
journal: nan
DOI: 10.1080/1350486x.2021.2007146
sha: 07771d8018714b882bd7691a68ea5826f6cc9e79
doc_id: 259200
cord_uid: dlyew29g

We develop a new method to find the number of volatility regimes in a nonstationary financial time series by applying unsupervised learning to its volatility structure. We use change point detection to partition a time series into locally stationary segments and then compute a distance matrix between segment distributions. The segments are clustered into a learned number of discrete volatility regimes via an optimization routine. Using this framework, we determine a volatility clustering structure for financial indices, large-cap equities, exchange-traded funds and currency pairs. Our method overcomes the rigid assumptions necessary to implement many parametric regime-switching models, while effectively distilling a time series into several characteristic behaviours. Our results provide significant simplification of these time series and a strong descriptive analysis of prior behaviours of volatility. Finally, we create and validate a dynamic trading strategy that learns the optimal match between the current distribution of a time series and its past regimes, thereby making online risk-avoidance decisions in the present.

Modelling the volatility of a financial asset is an important task for traders and economists. Volatility may be modelled from an individual stock level to an index level; the latter can represent the uncertainty or systemic risk of an entire sector or economy. Financial markets are both significant in their own right and have substantial flow-on effects on the rest of society, as seen during the global financial crisis, US-China trade war, and COVID-19 pandemic. Statistical methods for volatility modelling have long been popular in the literature (Shah, Isah, and Zulkernine 2019; Kirchler and Huber 2007; Baillie and Morana 2009). Long-standing parametric methods such as ARCH (Engle 1982; Zakoian 1994) and GARCH (Bollerslev 1986) model the volatility of individual stocks, obeying assumptions such as stylized facts (Guillaume et al. 1997), and appropriately choosing parameters to best fit past data. These methods allow traders to model future returns, provided that their founding assumptions continue to hold. Volatility may also be studied indirectly through options pricing (Gonçalves and Guidolin 2006; Cont and da Fonseca 2002; Tompkins 2001). For many years, statisticians have noted that many real-world processes may exhibit significant nonstationarity. These switches in behaviour may disrupt the use of parametric or nonparametric models to predict future behaviour and so must be accounted for. For this purpose, a broad literature has been developed that aims to study nonstationary time series by partitioning them into locally stationary segments. Early research in a purely statistical context was carried out by Priestley (1965), Priestley and Tong (1973), Ozaki and Tong (1975) and Tong and Lim (1980), with significant advances by Dahlhaus (1997). Such concepts have been applied to financial time series to develop regime-switching models.
These combine the aforementioned statistical techniques with the observation that financial assets exhibit such switching patterns, moving between periods of heightened volatility and ease (Hamilton 1989; Lavielle and Teyssière 2007; Lamoureux and Lastrapes 1990). Numerous regime-switching models have been developed to model such patterns (Hamilton 1989; Klaassen 2002; Guidolin and Timmermann 2007; Yang et al. 2018; de Zeeuw and Zemel 2012); these are also generally parametric, building on ARCH and GARCH, and must a priori specify the number of regimes and underlying distributions. Typically, the number of regimes is assumed to be two (Taylor 1999, 2005), one for high volatility and one for low volatility (Guidolin 2011). However, market crises such as the global financial crisis or COVID-19 may require more flexibility in volatility modelling, through varied assumptions regarding both individual distributions and the number of regimes (Arouri et al. 2016; Song, Ryu, and Webb 2016; Balcombe and Fraser 2017; Carstensen et al. 2020; Baiardi et al. 2020; Campani, Garcia, and Lewin 2021). For example, Baba and Sakurai (2011) describe three regimes in the VIX index from 1990 to 2010: tranquil, turmoil and crisis. Most recent regime-switching models have been statistical in nature, for instance building on the Heston model (Goutte, Ismail, and Pham 2017; Papanicolaou and Sircar 2013; Song, Ryu, and Webb 2018). This paper introduces a new method from an alternative perspective to analyze the regime-switching structure of a financial time series and use this for risk-avoidance in the future. Our method flexibly determines the number of volatility regimes, making no assumptions about the underlying data generating process. We use the entire history of the time series to learn its fundamental structure over time and how it behaves in volatile and non-volatile periods alike, improving on most parametric methods that only use recent behaviour. By not assuming the number of regimes a priori, we ameliorate a typical shortcoming of such models (Ang and Timmermann 2012). Moreover, our determination of the number of volatility regimes fits well within the existing literature, as one could conceivably take our determined number for a given time series and input this learned number in an alternative regime-switching model. Subsequently, we draw on our findings and use additional learning procedures to design a dynamic trading strategy that can identify volatile periods and allocate capital in real time. We learn the volatility structure of the S&P 500 and determine whether the present period should be avoided by comparing its empirical distribution with past segments and determining the closest match by distance minimization. We show that it provides superior risk-adjusted returns to the S&P 500 index in various market conditions and suitably avoids the index during periods of higher volatility, generally associated with market downturns. Our procedure is inspired by both machine learning and metric geometry and contains additional optimization relative to previous trading strategies (Nystrup et al. 2016). In Section 2, we describe our methodology in detail, together with mathematical proof of its efficacy under specific circumstances. In Section 3, we validate our methodology on synthetic data, comparing several variants, and show a reasonable clustering structure can be determined for major indices, stocks, popular exchange-traded funds (ETFs) and currency pairs.
In Section 4, we introduce our dynamic trading strategy, incorporating our insights and additional learning procedures. Section 5 summarizes our findings and describes future research and applications of our methods. In mathematical statistics, a time series $(X_t)$ is a sequence of random variables (measurable functions from a probability space to the real numbers) indexed by time. In finance, one generally conflates the random variable with the observed data point at each time. As such, a financial time series is a sequence of price data. In this paper, we will examine the time series of closing prices (across a range of assets) $(p_t)_{t \geq 0}$ and the associated log returns $(R_t)_{t \geq 1}$, where $R_t = \log \frac{p_t}{p_{t-1}}$. In this section, we describe our method in detail. We begin by assuming our nonstationary time series are generated from Dahlhaus locally stationary processes (Dahlhaus 1997) and proceed to partition the time series into stationary segments; specifically, we detect changes in the volatility of a time series via the Mood test change point method. Next, using the empirical cumulative distribution function of each segment, we use the Wasserstein metric to quantify distance between these distributions. We determine an allocation into an appropriate number of clusters via self-tuning spectral clustering (Zelnik-Manor and Perona 2004). Thus, we classify our segments of volatility into discrete classes without assuming the number of classes a priori. Finally, we record the number of clusters and their structure. Our methodology differs from usual regime-switching models in that it is inspired by unsupervised learning rather than stochastic volatility models. We use spectral clustering rather than simply testing for low or high volatility in the different segments precisely because two volatility regimes are not suitable in all contexts (Campani, Garcia, and Lewin 2021). As we study long histories of these time series, rather than recent behaviour for use in probabilistic models, we must account for the possibility of a varying number of regimes between different assets. Periods of ordinary market behaviour, turmoil, and crises could potentially manifest themselves in three regimes for equities (Baba and Sakurai 2011), but perhaps the behaviour of currency pairs in these times may be quite different. The precise method that we describe below, applicable to volatility clustering, is not exhaustive. As long as there is consistency between the regime characteristic of interest, the change point algorithm (and its test statistic if applicable), and the distance metric between distributions, the method below could easily be reworked for detection and classification of regimes of alternative characteristics. For example, we could substitute the Jensen-Shannon divergence for the Wasserstein metric, substitute the Bartlett test for the Mood test, or smooth out the empirical distributions via kernel density estimation. Below, we outline the detailed implementation to model volatility and discuss some of the aforementioned alternatives in the process. Given time series price data, begin by forming the log return time series $(R_t)_{t=1,\ldots,T}$ over a particular time interval. It is generally appropriate to assume the log returns are independent random variables with mean close to 0, but not appropriate to assume they have any particular distribution (Ross 2013).
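To make the pipeline concrete, the following Python sketch computes log returns from closing prices, builds the Wasserstein distance matrix between locally stationary segments, and clusters the segments with self-tuning spectral clustering, selecting the number of clusters by the eigengap. It is a minimal illustration under our own simplifying assumptions rather than a definitive implementation: the change points are assumed to be supplied by an external routine (we use the Mood test from the CPM package, described next and in Appendix A), and the helper names, the local-scale neighbour parameter and the k-means settings are illustrative choices.

import numpy as np
from scipy.stats import wasserstein_distance
from sklearn.cluster import KMeans

def log_returns(prices):
    # R_t = log(p_t / p_{t-1}) from an array of closing prices
    prices = np.asarray(prices, dtype=float)
    return np.diff(np.log(prices))

def segment_returns(returns, change_points):
    # Split the return series at the detected change points tau_1, ..., tau_{m-1}
    return np.split(np.asarray(returns), change_points)

def wasserstein_matrix(segments):
    # m x m matrix of pairwise W_1 distances between empirical segment distributions
    m = len(segments)
    D = np.zeros((m, m))
    for i in range(m):
        for j in range(i + 1, m):
            D[i, j] = D[j, i] = wasserstein_distance(segments[i], segments[j])
    return D

def cluster_segments(D, local_k=7):
    # Self-tuning affinity A_ij = exp(-D_ij^2 / (sigma_i * sigma_j)) with local scales sigma_i
    m = D.shape[0]
    sigma = np.sort(D, axis=1)[:, min(local_k, m - 1)]
    A = np.exp(-D ** 2 / np.outer(sigma, sigma))
    np.fill_diagonal(A, 0.0)
    deg = A.sum(axis=1)
    L = np.diag(deg) - A                                       # unnormalized Laplacian
    d_inv_sqrt = 1.0 / np.sqrt(np.maximum(deg, 1e-12))
    L_sym = d_inv_sqrt[:, None] * L * d_inv_sqrt[None, :]      # normalized Laplacian
    eigvals, eigvecs = np.linalg.eigh(L_sym)                   # ascending eigenvalues
    k = int(np.argmax(np.diff(eigvals))) + 1                   # eigengap choice of k
    U = eigvecs[:, :k]
    U = U / np.maximum(np.linalg.norm(U, axis=1, keepdims=True), 1e-12)
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(U)
    return k, labels

The eigengap selection of k shown here corresponds to the variant we ultimately adopt for real data in Section 3.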
With this in mind, we apply the nonparametric Mood test, as implemented in the CPM package of Ross (2015), to detect changes in the volatility of a time series. Although this is commonly known as a median test, it is also appropriate for detecting change in the variance between two distributions, as described in Section 4 of Mood (1954). More details on the change point framework and our specific implementation can be found in Appendix A. This yields a collection of change points $\tau_1, \ldots, \tau_{m-1}$. For notational convenience, set $\tau_0 = 1$, $\tau_m = T$. The stationary segments according to this partition are then the intervals $[\tau_0, \tau_1], [\tau_1, \tau_2], \ldots, [\tau_{m-1}, \tau_m]$. This yields $m$ stationary segments. Now let $(Y^{(j)})$ be the restricted time series whose entries are taken from the time interval $[\tau_{j-1}, \tau_j]$. That is, $(Y^{(j)}_t)$ consists of the values $R_t$ where $t$ ranges from $\tau_{j-1}$ to $\tau_j$. Each $(Y^{(j)})$ has been determined by the algorithm to be sampled from a consistent distribution. Next, we compute the Wasserstein distance between the empirical distributions of each stationary segment $(Y^{(j)})$. The Wasserstein metric (Kolouri et al. 2017), also known as the earth mover's distance, is the minimal work to move the mass of one probability distribution into another. Given probability measures $\mu, \nu$ on Euclidean space $\mathbb{R}^d$, we define
$$W_p(\mu, \nu) = \left( \inf_{\gamma} \int_{\mathbb{R}^d \times \mathbb{R}^d} \|x - y\|^p \, d\gamma(x, y) \right)^{1/p}.$$
This infimum is taken over all joint probability measures $\gamma$ on $\mathbb{R}^d \times \mathbb{R}^d$ with marginal probability measures $\mu$ and $\nu$. In the case where $d = 1$, this distance can be computed relatively simply in terms of the cumulative distribution functions associated to the two distributions. Given probability measures $\mu, \nu$ with cumulative distribution functions $F, G$, the distance $W_p(\mu, \nu)$ may be computed (del Barrio, Giné, and Matrán 1999) as
$$W_p(\mu, \nu) = \left( \int_0^1 \left| F^{-1}(u) - G^{-1}(u) \right|^p du \right)^{1/p},$$
which for $p = 1$ coincides with $\int_{-\infty}^{\infty} |F(x) - G(x)| \, dx$. This allows us to form an $m \times m$ distance matrix of Wasserstein distances between the $m$ distributions of each locally stationary segment of the log return time series. In our implementation, we set $p = 1$. In this section, we cluster the distributions by applying spectral clustering directly to the matrix $D$. This is a natural choice; alternative methods such as K-means would require the data to lie in Euclidean space. Spectral clustering often proceeds with the number of clusters $k$ chosen a priori; we explore two methods for making this selection. We proceed according to the self-tuning spectral clustering methodology proposed by Zelnik-Manor and Perona (2004). From the distance matrix $D$, an affinity matrix $A$ is defined according to
$$A_{ij} = \exp\left( -\frac{D_{ij}^2}{\sigma_i \sigma_j} \right),$$
where $\sigma_i$ are parameters to be chosen. In their original implementation, Zelnik-Manor and Perona (2004) choose each $\sigma_i$ as a local scale, namely the distance from element $i$ to its $K$-th nearest neighbour. Next, one forms the Laplacian $L$ and normalized Laplacian $L_{sym}$ following von Luxburg (2007). We define the diagonal degree matrix $\mathrm{Deg}_{ii} = \sum_j A_{ij}$, and set $L = \mathrm{Deg} - A$ and $L_{sym} = \mathrm{Deg}^{-1/2} L \, \mathrm{Deg}^{-1/2}$. Then $L, L_{sym}$ are $m \times m$ symmetric matrices, and hence are diagonalizable with all real eigenvalues. By the definition of $L$ and the normalization $L_{sym}$, all their eigenvalues are non-negative, $0 = \lambda_1 \leq \cdots \leq \lambda_m$. We now describe two methods of determining the number of clusters $k$, which we will empirically investigate in Section 3. The first method is that described by Zelnik-Manor and Perona (2004). Obtaining the number of clusters proceeds by varying the number of eigenvectors $c \in [1, m]$: (1) Find $x_1, \ldots, x_c$, the eigenvectors of $L$ corresponding to the $c$ largest eigenvalues. Form the matrix $X = [x_1, \ldots, x_c] \in \mathbb{R}^{m \times c}$. (2) Recover the rotation $R$ which best aligns $X$'s columns with the canonical coordinate system using the incremental gradient descent scheme.
The final number of clusters $k_{ZP}$ is chosen to be the value of $c$ with the minimal alignment cost. The second method chooses $k$ to maximize the eigengap between successive eigenvalues. We denote this as $k_e$, defined by
$$k_e = \operatorname*{argmax}_{1 \leq i \leq m-1} \left( \lambda_{i+1} - \lambda_i \right).$$
Having determined the number of clusters $k$, spectral clustering proceeds as follows. We compute the normalized eigenvectors $u_1, \ldots, u_k$ corresponding to the $k$ smallest eigenvalues of $L_{sym}$. We form the matrix $U \in \mathbb{R}^{m \times k}$ whose columns are $u_1, \ldots, u_k$. Let $v_i \in \mathbb{R}^k$ be the rows of $U$, $i = 1, \ldots, m$. These rows are clustered into clusters $C_1, \ldots, C_k$ according to k-means. Finally, we output clusters $A_l = \{i : v_i \in C_l\}$, $l = 1, \ldots, k$, to assign the original $m$ elements, in this case segment distributions, into the corresponding clusters. In this section, we introduce a mathematical framework: under certain conditions (introduced in Definition 2.2), our aforementioned methodology is proven to accurately cluster segments by high or low volatility. Propositions 2.3 and 2.5 are intermediate technical results that allow us to establish this claim in Theorem 2.6. For the remainder of this section, let $f, g, f_i$ denote continuous probability density functions (pdf's) and let $F, G, F_i$ denote their respective cumulative distribution functions (cdf's). We recall the necessary properties: such a function $f$ is a continuous map $f: \mathbb{R} \to \mathbb{R}_{\geq 0}$ with $\int_{\mathbb{R}} f(x) \, dx = 1$, and its cdf is $F(x) = \int_{-\infty}^{x} f(t) \, dt$. Definition 2.1. Say a continuous probability density function $f$ is median-zero if one of three equivalent conditions holds: $F(0) = \frac{1}{2}$; $\int_{-\infty}^{0} f(x) \, dx = \frac{1}{2}$; or $\int_{0}^{\infty} f(x) \, dx = \frac{1}{2}$. In the next definition, we define a partial ordering $\prec$ that describes when one probability distribution is of consistently greater spread than another. Definition 2.2. Let $f, g$ be two median-zero continuous pdf's. Say $g$ has greater volatility than $f$ everywhere, written $f \prec g$, if $F(x) < G(x)$ for all $x < 0$ and $F(x) > G(x)$ for all $x > 0$. We make several remarks on this definition. First, this is a partial rather than a total ordering, as two arbitrary cdf's $F, G$ may frequently intersect for $x \in \mathbb{R}$. Second, we note that this notation is only used in Section 2.4. Third, this is quite a strong definition: while volatility may sometimes refer to only the second moment, this definition carries information about higher moments as well; broadly speaking, if $f \prec g$ then $g$ has all higher moments greater than those of $f$. Finally, the utility of this definition is predicated upon the following proposition and corollary. For an example of two pdf's that fit the scope of both Proposition 2.3 and Corollary 2.4, we include Figure 1. This proposition gives a means to establish that $f \prec g$ by quick inspection. Proposition 2.3. Let $f, g$ be median-zero continuous pdf's, and suppose there exist $\tau_1 < 0 < \tau_2$ such that $g(x) > f(x)$ for all $x < \tau_1$ and all $x > \tau_2$, while $f(x) > g(x)$ for all $x \in (\tau_1, 0) \cup (0, \tau_2)$. Then $f \prec g$. Proof. First, the continuity of $f$ and $g$ implies that $f(\tau_1) = g(\tau_1)$ and $f(\tau_2) = g(\tau_2)$. Next, let $x \leq \tau_1$. By integrating $f$ and $g$ over the interval $t \in (-\infty, x)$, it is immediate that $F(x) < G(x)$. Now, consider the interval $[\tau_1, 0]$ and the function $h = F - G$ on this interval. We have established that $h(\tau_1) < 0$ while $h(0) = 0$ by the median-zero property. By the fundamental theorem of calculus, $h$ is differentiable on the interval with $h'(t) = f(t) - g(t) > 0$ for $t \in (\tau_1, 0)$. Thus $h$ is increasing on the interval $[\tau_1, 0]$. Since $h(0) = 0$, this implies $h(t) < 0$ for $t \in (\tau_1, 0)$, that is, $F(t) < G(t)$ on the interval $t \in (\tau_1, 0)$. Together with the earlier fact that $F(x) < G(x)$ for $x \leq \tau_1$, we deduce that $F(x) < G(x)$ for all $x < 0$. By an identical argument, we deduce $F(x) > G(x)$ for all $x > 0$. That is, $f \prec g$, which concludes the proof. Corollary 2.4. Let $0 < \sigma_1 < \sigma_2$ and let $f_1, f_2$ be the pdf's associated to normal distributions $N(0, \sigma_1), N(0, \sigma_2)$ respectively. Then $f_1 \prec f_2$.
Proof. Let $\tau$ be the square root of the right hand side of (1). Then $f_1(x) > f_2(x)$ precisely when $|x| < \tau$, and $f_1(x) < f_2(x)$ when $|x| > \tau$, so Proposition 2.3 applies with $\tau_1 = -\tau$ and $\tau_2 = \tau$, giving $f_1 \prec f_2$. The following proposition is a technical result for evaluating Wasserstein distances between pdf's that can be ordered with $\prec$. Proposition 2.5. Let $f \prec g$. Then the Wasserstein distance $W(f, g)$ can be computed as
$$W(f, g) = \int_{-\infty}^{0} \left( G(x) - F(x) \right) dx + \int_{0}^{\infty} \left( F(x) - G(x) \right) dx. \qquad (2)$$
Proof. In general, $W(f, g)$ may be computed (del Barrio, Giné, and Matrán 1999) as
$$W(f, g) = \int_{-\infty}^{\infty} \left| F(x) - G(x) \right| dx.$$
In the case where $f \prec g$, $F - G$ is non-negative on $[0, \infty)$ and negative on $(-\infty, 0)$. Thus this integral can be expressed as the sum (2), as required. Theorem 2.6. Suppose $f_1, \ldots, f_n$ is a collection of median-zero pdf's that can be totally ordered by $\prec$. Then our method described in Section 2 partitions the segments into intervals under $\prec$. That is, if $f_1 \prec \cdots \prec f_n$ then the methodology necessarily segments into clusters of the form $\{f_1, \ldots, f_{n_1}\}, \{f_{n_1+1}, \ldots, f_{n_2}\}, \ldots, \{f_{n_{k-1}+1}, \ldots, f_n\}$ for some integers $1 \leq n_1 < \cdots < n_k = n$. Moreover, our methodology is also resistant to deformations: if $f_1, \ldots, f_n$ are a sufficiently small deformation away from totally ordered median-zero pdf's, then the result still holds. Proof. First we assume the collection can be totally ordered under $\prec$. By relabelling, without loss of generality $f_1 \prec \cdots \prec f_n$. Then, if $i < j$, Proposition 2.5 gives $D_{ij} = W(f_i, f_j) = a_j - a_i$, where $a_i = \int_{-\infty}^{0} F_i(x) \, dx + \int_{0}^{\infty} \left( 1 - F_i(x) \right) dx$. That is, the distance matrix $D$ between the segments $f_1, \ldots, f_n$ is identical to a distance matrix between real numbers $a_1 < \cdots < a_n$. Spectral clustering applied to real numbers partitions them into intervals, that is, $C_1 = \{a_1, \ldots, a_{n_1}\}, C_2 = \{a_{n_1+1}, \ldots, a_{n_2}\}, \ldots, C_k = \{a_{n_{k-1}+1}, \ldots, a_{n_k}\}$ for some integers $1 \leq n_1 < \cdots < n_k = n$. Since $D_{ij} = a_j - a_i$, the results of spectral clustering are identical to applying it to the corresponding real numbers. Hence the segments are partitioned into the corresponding clusters $\{f_1, \ldots, f_{n_1}\}, \ldots, \{f_{n_{k-1}+1}, \ldots, f_n\}$. Finally, if the initial collection of pdf's were not precisely median-zero and totally ordered under $\prec$ but a sufficiently small deformation away from a median-zero totally ordered collection, then the result still holds. By the continuity of the Wasserstein distance, the distance matrix $D$ would be a small deformation away from a distance matrix between real numbers. By the continuity of the procedures within spectral clustering, once again the $f_i$ would be partitioned into clusters of increasing volatility. In this section, we validate our method on 200 synthetic time series. In two distinct experiments, we randomly generate 100 time series each with artificially pronounced breaks in volatility by concatenating different segments. Each segment is randomly drawn from various data generating processes and randomly chosen between 200 and 300 in length, together with added Gaussian random noise $\delta$, to ensure none of the data generating processes are identical. Thus, we create 200 unique time series where a ground truth number of clusters and the segment memberships are known; to these we apply our methodology to validate it. In a small number of instances, our methodology does not correctly identify the number of segments at the change point detection phase; we term this a mismatch. Excluding these, when the change point detection step correctly identifies the number of segments, we evaluate the quality of the clustering using the Fowlkes-Mallows index (FMI) (Fowlkes and Mallows 1983; Halkidi, Batistakis, and Vazirgiannis 2001):
$$\mathrm{FMI} = \frac{TP}{\sqrt{(TP + FP)(TP + FN)}},$$
where $TP, FP, FN$ are the number of true positives, false positives and false negatives, respectively. The score is bounded between 0 and 1, where 1 indicates a perfect cluster assignment.
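As an illustration of how this score is computed, the short sketch below evaluates the FMI either directly from the pair counts or, equivalently, from ground-truth and predicted cluster labels via scikit-learn; the label vectors shown are hypothetical.

import numpy as np
from sklearn.metrics import fowlkes_mallows_score

def fmi(tp, fp, fn):
    # Fowlkes-Mallows index: TP / sqrt((TP + FP) * (TP + FN))
    return tp / np.sqrt((tp + fp) * (tp + fn))

# Equivalent computation from cluster labels of the segments (hypothetical example)
labels_true = [0, 0, 1, 1, 2, 2]
labels_pred = [0, 0, 1, 2, 2, 2]
print(fowlkes_mallows_score(labels_true, labels_pred))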
This FMI can only be applied when the change point detection step correctly identifies the number of segments, which we call correctly matched. In the first experiment, we draw each segment from one of five different normal distributions, each with mean zero and a distinct variance. The distributions must be approximately mean-centred to mimic the properties of the log returns time series and for the Mood test to detect change points in the variance, as detailed in Appendix A. In this first experiment, there were 4 out of 100 mismatches. In all these instances, the algorithm was off by just one in detecting the correct number of segments. Among the 96 remaining time series, we are able to apply the two different methods to select the number of clusters and apply spectral clustering, as described in Section 2.3. We refer to the two methods as the eigengap method to generate $k_e$ and the gradient descent method to generate $k_{ZP}$, as defined in Section 2.3. To validate our methodology, we record both the mean FMI score for these two methods as well as smoothed histograms over the 96 experiments. In Figure 2, we present one example of such a time series. Figure 2(a) displays the change point partitions; the detection times, as described in Appendix A, are not visible. Figure 2(b) displays the kernel density estimations of the distributions, coloured according to their membership in five detected clusters. Figure 2(c) shows the final clustering of the segments of the synthetic time series. This whole procedure correctly identifies the change in variance, as well as the existence of five regimes (clusters) of volatility. Across all 96 correctly matched iterations of this first experiment, the mean FMI score for the eigengap method ($k_e$) was 0.93, while the mean FMI score for the gradient descent method ($k_{ZP}$) was 0.86. The smoothed histogram in Figure 3(a) shows that the eigengap method has systematically higher FMI scores. For the second experiment, we repeated the same procedure; however, we drew each segment from one of five mean-zero Laplace distributions. In this second experiment, there were 11 out of 100 mismatches. Once again, the algorithm was off by just one in detecting the correct number of segments in all instances. Across all 89 correctly matched iterations of this second experiment, the mean FMI score for the eigengap method ($k_e$) was 0.94, while the mean FMI score for the gradient descent method ($k_{ZP}$) was 0.85. Again, the smoothed histogram in Figure 3(b) shows that the eigengap method has systematically higher FMI scores than the alternative. Manually examining the mismatches shows that they tend to occur when one segment is sandwiched between two segments drawn from the next most similar distribution, for example when a $N(0, 1)$ segment is sandwiched between two $N(0, 2)$ segments. When this happens, the algorithm tends to split the segments into four parts instead of three, leading to an overestimation of the number of segments. It is also worth noting that we repeated both sets of experiments using the Bartlett test instead of the Mood test, to investigate whether this would reduce mismatches. While both tests performed similarly in the first experiment, the Bartlett test had a much higher mismatch rate in the second experiment, leading to much higher estimations of the number of clusters. This is to be expected as the Bartlett test is a parametric test for changes in normal distributions. The better performance of the Mood test in both the normal and Laplace experiments indicates that it is better suited to real-world log returns data, which are not necessarily normally distributed.
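The synthetic series used in these experiments can be generated along the following lines; the particular scale parameters, the number of segments per series and the noise scale below are illustrative placeholders rather than the exact values used in our experiments.

import numpy as np

rng = np.random.default_rng(0)

def synthetic_series(n_segments=8, scales=(1.0, 2.0, 3.0, 4.0, 5.0), noise_scale=0.01, laplace=False):
    # Concatenate mean-zero segments of random length (200 to 300 observations), each drawn
    # from one of five distributions, with a small added Gaussian noise term delta.
    values, labels = [], []
    for _ in range(n_segments):
        k = int(rng.integers(len(scales)))
        length = int(rng.integers(200, 301))
        if laplace:
            seg = rng.laplace(0.0, scales[k], size=length)
        else:
            seg = rng.normal(0.0, scales[k], size=length)
        seg = seg + rng.normal(0.0, noise_scale, size=length)  # added Gaussian noise delta
        values.append(seg)
        labels.append(np.full(length, k))
    return np.concatenate(values), np.concatenate(labels)

series, ground_truth_labels = synthetic_series()           # first experiment (normal segments)
series_l, ground_truth_l = synthetic_series(laplace=True)  # second experiment (Laplace segments)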
Having observed few mismatches and better FMI scores with the eigengap method, we proceed by exclusively applying the eigengap method to real data in subsequent sections. In this section, we apply our method to SPY, an ETF tracking the S&P 500, and analyze our volatility clustering results. We study adjusted closing price data from Yahoo! Finance, https://finance.yahoo.com, from 1 January 2008 to 31 December 2020, and compute the log returns before applying our methodology. Our algorithm finds five volatility clusters over the 13-year period studied, seen in Figure 4. The cyan cluster is associated with extreme market behaviours such as the worst part of the global financial crisis (GFC). The next most volatile cluster is displayed in yellow, and the majority of the GFC, the August 2011 crash and the entirety of 2020 are grouped in this period. The red cluster contains the start of the GFC, the 2010 flash crash, the 2015-16 sell-off and the US/China trade war. Finally, the blue and green clusters display more stable economic periods. Our trading strategy is predicated upon the idea that we associate the most recent window with a period in the past that is most similar. Details are described in Section 4. We observe that a change point is detected between the 17th and 18th segments of Figure 4(c), and yet there was no regime change in volatility at this time. This can occur when the distributions are different, but not different enough to warrant an entire regime change. In this section, we apply our methodology to a collection of asset classes including equity indices, individual stocks, ETFs and currency pairs. For each asset, we report the learned number of segments and clusters. This information is used to design the trading strategy. We display these results in Tables 1, 2, 3 and 4, respectively. Table 4 displays results for several currency pairs. In Appendix B, we provide figures displaying various time series' partitioned volatility regimes, and the respective clusters with which they are associated. All results in Section 3 are obtained over a 13-year period from 2008 to 2020. In this brief section, we discuss the effects of extending or subdividing our window of analysis. [Figure 4(b): kernel density estimation plots of SPY distributions, forming five clusters.] How the total number of clusters behaves with different period lengths, however, is more difficult to predict. For example, as the analysis window is extended, more segments may be observed, yielding a larger space of segments to be clustered. However, as a space (in this case, consisting of probability densities) grows in cardinality, its determined number of clusters may change unpredictably. Usually, a larger cardinality of points in a high-dimensional space should produce a greater number of determined clusters.
Yet this is not always the case. Consider a hypothetical example of a space neatly divided into three clusters (indexed 1, 2, and 3). If enough data points are added to "bridge" clusters 2 and 3, then clustering may subsequently detect only two clusters (1 and 2 ∪ 3). In our implementation, we broadly notice that smaller periods of analysis produce fewer clusters. This will be relevant in the precise trading methodology of Section 4. Finally, we remark that even two adjacent segments may lie in the same determined cluster. Indeed, this was demonstrated in Section 3.2. Fortunately, this works well in the context of our methodology. Synthetic experiments demonstrate that the change point detection algorithm identifies more change points than exist in the ground truth time series in some circumstances. Our clustering overcomes this potential issue of oversensitivity. In the event that an erroneous change point partitions two adjacent segments with similar statistical properties, the clustering method would determine that these segments exist in the same cluster, ameliorating the false positive of the change point algorithm. Our empirical results demonstrate substantial heterogeneity in the volatility structure among the asset classes studied. The number of clusters ranges between 2 and 6, with the most common number of volatility regimes determined to be 3 or 4. The number of segments belonging to each clustering regime is relatively consistent among all asset classes. The one exception is at the height of the GFC, where a single segment is associated with one cluster. When a shorter time period is used, two regimes are typically identified, consistent with prior findings (Guidolin 2011). Despite heterogeneity in the number of segments and clusters, we can still identify similarities between asset classes. For instance, the S&P 500 and the Dow Jones both exhibit periods of high volatility in March 2010, April 2011, and late 2018. All but a few asset classes exhibit a highly volatile regime at the height of the GFC. Typically, this period belongs to the same cluster as the COVID-19 market crash of 2020, highlighting similar market structure during periods of severe crisis (James 2021). In addition, all five of the listed firms experienced a similarly volatile window associated with the US/China trade war in late 2018. By contrast, the ETFs under analysis did not exhibit similarly volatile periods, most likely due to their varied asset class mix. In their typical formulation, regime-switching models require assumptions regarding the number of regimes and candidate distributions a priori. They are often criticized for this highly parameterized structure (Guidolin 2011; Ang and Timmermann 2012). One meaningful advantage of our method is its flexibility in accounting for extreme economic events and market crises. No assumptions regarding the number of regimes or data generating process are required; indeed, we have validated our method on both Gaussian and non-Gaussian distributions. For those wishing to implement parametric regime-switching models, the methodology proposed in this work could be used as an accompaniment to algorithmically determine an appropriate number of regimes for various asset classes. As a further application, in the following section we demonstrate how these results can be used to inform asset allocation decisions in the context of a dynamic trading strategy.
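Before turning to the trading strategy, we note that the price data underlying the empirical results of this section can be retrieved and converted to log returns roughly as follows. This sketch uses the third-party yfinance package purely for convenience; the paper specifies only Yahoo! Finance as the data source, so the package choice and its parameters are our own assumptions.

import numpy as np
import yfinance as yf  # assumed convenience package; not prescribed by the paper

# Adjusted closing prices for SPY over the study period
prices = yf.download("SPY", start="2008-01-01", end="2020-12-31", auto_adjust=False)["Adj Close"].dropna()
log_ret = np.log(prices / prices.shift(1)).dropna()
# log_ret is then passed through change point detection, the Wasserstein distance matrix
# and spectral clustering, as sketched in Section 2.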
In recent times, passive investing has gathered more asset inflows than active investment management. In particular, index funds and ETFs that track major indices such as the S&P 500 are a popular way of attaining broad market exposure for investors. We apply our analysis to the S&P 500 index to determine a dynamic trading strategy that can simultaneously benefit from the index's appreciation while minimising risk. In Section 3.2, we determined that the S&P 500 has two distinct volatility regimes, captured in two distinct clusters of volatility periods. Our trading strategy is to buy and hold SPY, a tracker of the S&P 500, in low volatility periods, and then flee to the safe haven of GLD, a gold bullion tracker, in high volatility periods. We improve on the previous work of Nystrup et al. (2016), who use a live implementation of the rank test to move away from the S&P 500. This method has two drawbacks: first, as noted in Section 3, a change point does not necessarily indicate a change in regime; secondly, their method has an unpredictable delay in registering the change point, as discussed in Appendix A. Instead, we implement a dynamic procedure with a 4-year sliding window. Model parameters are learned within the prior window, and then applied to the subsequent four years of data. Suppose our algorithm begins with years 0 : 4. First, we analyze the SPY over the prior 4-year period, years -4 : 0. We determine the cluster structure of the distribution segments of the SPY over this prior period. To make investment decisions in the current period of 0 : 4 years, we try to match the present distribution with the most similar distribution in the prior window. We combine metric geometry and unsupervised learning for this purpose, minimizing a metric to one of an existing set of candidate segments. Specifically, we examine the present local distribution of the last n days, where n is a learned parameter, and determine the minimal distance between the present local distribution and the set of segment distributions of the prior 4-year period. We call n the look back length of the procedure. If this closest prior segment lies in (one of) the most volatile class of past distributions (characterized by greatest variance), we determine that the local distribution is volatile, and allocate all capital toward gold. This method works even if more than 2 volatility clusters are found during the previous window. For example, if r volatility regimes are determined to exist, the algorithm could avoid SPY if the current period is matched to the most volatile r/2 such regimes, ordered by variance. Fortunately, in our implementation, the number of volatility regimes in the 4-year windows is always 2. The parameter n is optimized relative to the -4 : 0 year window. Specifically, having determined the cluster structure, n is chosen to optimize the Sharpe ratio, a well-established measure of risk-adjusted returns, when testing over that window. We optimize n over a range 20 ≤ n ≤ 30, that is, 4 to 6 trading weeks. Thus, n is learned in this prior window and then used in the algorithm in the subsequent window. The window is then successively slid forward four years, and the process repeats. That is, model parameters estimated on years 0 : 4 are used to forecast in years 4 : 8, and so on. There are several reasons for our choice of 4-year windows. First, it is a suitable compromise in the length of the training data.
If the training period were too short, we could erroneously capture transient behaviours as persistent market dynamics. By contrast, if the training period were too lengthy, our strategy may fail to prioritize more recent and relevant dynamics. Specifically regarding our use of clustering, our trading strategy always makes decisions (in the present) having been trained on data that was sufficiently recent to learn up-to-date behaviour of volatility regimes, but sufficiently long to observe non-trivial clustering results. In addition, this adaptive sliding window technique allows us to convincingly validate the long-run performance of our trading strategy. Second, as discussed in Section 3.4, the number of volatility regimes generally increases with the window length. Across 4-year training periods, our algorithm always determines r = 2 to be the optimal number of clusters when we examine SPY in its partitioned chunks, as shown in Table 5. This makes the decision process of avoiding volatile regimes simpler and less ambiguous: if the current distribution is matched (by minimal distance) to the more volatile (of two) class of distributions, we allocate all capital towards gold. Third, a 4-year period is suitable as equity markets follow 4-year cycles, associated with Kitchin cycles (Korotayev and Tsirel 2010) and the US presidential election cycle (Gärtner and Wellershoff 1995). We analyze the strategy's performance in a period from immediately prior to the global financial crisis (GFC), up to the present day. Accordingly, our initial backtest period of -4 : 0 is 2004-2008, while our first period of trading, years 0 : 4, is 2008-2012. We compare the performance of our dynamic trading strategy with three other strategies: holding SPY, holding GLD, and a baseline strategy holding an equal split between the two. We use six common validation metrics to evaluate and compare our trading strategy.
(1) Annualized return (AR): the total return a strategy yields relative to the time the strategy has been in place.
(2) Standard deviation (SD): the overall standard deviation of the portfolio.
(3) Sharpe ratio (SR): a common measure of risk-adjusted return. Unfortunately, this penalizes both upside and downside volatility. Some strategies with strong annualized returns may have lower Sharpe ratios due to erratic, yet positive, return profiles.
(4) Maximum drawdown (MD): an alternative penalty function capturing the maximum peak-to-trough trading loss.
(5) Sortino ratio (SoR): an alternative measure of risk-adjusted return that only penalizes downside deviation in the denominator.
(6) Calmar ratio (CR): a measure of risk-adjusted returns that penalizes the maximum realized drawdown over some candidate investment period.
If our trading strategy were applied among a collection of specific assets rather than an index (for example, a group of components of the index as featured in Table 2), it could attain greater expected returns and higher risk-adjusted return ratios. In particular, while an index has a mean-reverting effect with respect to a large collection of stock returns, a smaller collection of carefully selected stocks may provide higher expected returns. Should the group of stocks be greater than ∼30, unsystematic (stock-specific) risk is diversified away for both the index and the portfolio of stocks. Both portfolios would have a risk component mostly comprised of systematic risk. This could lead to higher risk-adjusted returns for the trading strategy.
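The daily allocation decision described above can be sketched as follows. The Wasserstein matching of the trailing window to the prior segments, the variance-based identification of the volatile cluster(s), and the Sharpe-ratio selection of the look back length n follow the description in the text, but the helper names and the simplified in-sample scorer (which ignores gold returns during avoided days) are illustrative assumptions rather than our production implementation.

import numpy as np
from scipy.stats import wasserstein_distance

def sharpe_ratio(returns, periods_per_year=252):
    # Annualized Sharpe ratio of a daily return series (risk-free rate omitted for simplicity)
    returns = np.asarray(returns, dtype=float)
    return np.sqrt(periods_per_year) * returns.mean() / returns.std(ddof=1)

def volatile_clusters(segments, labels, fraction=0.5):
    # Order the clusters of the training window by average segment variance and flag the most
    # volatile half; with r = 2 clusters this is simply the more volatile of the two.
    ids = np.unique(labels)
    variances = [np.mean([np.var(s) for s, l in zip(segments, labels) if l == c]) for c in ids]
    order = ids[np.argsort(variances)[::-1]]
    n_avoid = max(1, int(np.floor(len(ids) * fraction)))
    return set(order[:n_avoid])

def hold_gold_today(recent_returns, train_segments, train_labels, avoid):
    # Match the trailing n-day empirical distribution to the closest training segment by W_1;
    # avoid SPY (hold GLD) if that segment belongs to a cluster flagged as volatile.
    dists = [wasserstein_distance(recent_returns, seg) for seg in train_segments]
    return train_labels[int(np.argmin(dists))] in avoid

def choose_lookback(train_returns, train_segments, train_labels, avoid, candidates=range(20, 31)):
    # Choose the look back length n in [20, 30] maximizing the in-sample Sharpe ratio of the
    # switching strategy over the training window. For brevity this scorer assigns zero return
    # to avoided days rather than the GLD return.
    best_n, best_sr = None, -np.inf
    for n in candidates:
        strat = []
        for t in range(n, len(train_returns)):
            flag = hold_gold_today(train_returns[t - n:t], train_segments, train_labels, avoid)
            strat.append(0.0 if flag else train_returns[t])
        sr = sharpe_ratio(strat)
        if sr > best_sr:
            best_n, best_sr = n, sr
    return best_n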
Implementing our trading strategy between January 2008 and December 2020 would have been highly successful for both risk-averse and risk-on investors. As seen in Table 6 and Figure 5, the strategy consistently outperformed the S&P 500 index and overall generated annualized returns of 9.4%. The S&P 500 returned 7.5% while the static baseline strategy returned 7.6%. The strategy clearly generates alpha by its dynamic nature, automatically detecting market regimes and allocating capital successfully. This entire period can broadly be characterized as a bull market, and yet features several severe market shocks; the strategy's consistent performance demonstrates its robustness to varied market dynamics. Figure 6 shows the positions held by the strategy. Of the four strategies compared, our dynamic trading strategy has the best annualized returns, Sharpe ratio, Sortino ratio and Calmar ratio, and the lowest drawdown. It has the second-lowest standard deviation of 0.10, close to the baseline static strategy's 0.0087. The most significant component of the Sharpe ratio's performance comes from strong annualized returns; the increased upside volatility is the main contributor to the standard deviation. Indeed, our strategy's Sortino ratio is about 33% greater than that of the S&P 500; this confirms that a significant degree of the penalty in the standard deviation and Sharpe ratio is generated from upside returns. That is, the strong annualized returns of our trading strategy are generated in a relatively volatile manner. This is unsurprising, given that the strategy generates performance due to market timing. In this section, we describe the performance in detail over various time periods, particularly during market crises. While we have reported our findings over one period, 2008-2020, in fact four separate learning and evaluation procedures have been performed. All four periods were successful for our strategy, as visible in Figure 5. First, the strategy performs well during the GFC, generating the second-best returns, surpassed only by gold. During the GFC, gold provided extraordinary returns for investors who invested prior to or during the crisis. After incurring a sharp drawdown, our strategy reallocates capital from the S&P 500 into gold and consequently outperforms equity markets until late 2011. Next, the market experienced a significant drawdown in December 2018. Given the brevity of this drawdown, our trading strategy is unable to reallocate capital away from the S&P 500 into gold fast enough to meaningfully reduce the strategy's drawdown. After all, our strategy is predicated on identifying regimes, and allocating capital when new data are identified as similar to past phenomena. It reflects the delicate balance in the look back length n. If it were too long, trading decisions would be made too slowly; if it were too short, trading decisions would be made too frivolously. The final significant market crisis during our window of analysis is the market turbulence associated with COVID-19. Our strategy performs well during this period. While it experiences a dip around March 2020, it allocates funds away from SPY quite early and avoids a much larger crash. The algorithm then benefits from the V-shaped recovery and reaches its previous peak before switching to GLD for the remainder of the year, leading to a small drawdown. The algorithm reverts to SPY at the very last trade.
We expect the algorithm to benefit from the market rally around the 2021 economic stimulus plan and the possibility of COVID herd immunity through vaccines. During the four 4-year windows that make up the 2008-2020 experiment, the optimal look back length n changes as follows: the optimized values of n are 20, 25, 20, 27 and 20 for the successive training windows, ranging from 2004-2008 through to Jan 2020-Dec 2020, respectively. This suggests that continually updating the look back length is important, due to the dynamic nature of markets. The longest look back length is during 2012-2016, a bull market period with the greatest consistency and least volatility in the return profile. This suggests that regimes were more persistent and possibly easier to identify during the 2012-2016 period. This paper demonstrates an original means of clustering volatility regimes, highlighting it as a useful tool for descriptive analysis of financial time series and designing trading strategies. Results on both synthetic and real data are promising, with good validation scores across a range of synthetic data and significant simplification of real time series. The findings support previous work by Hamilton (1989), Lamoureux and Lastrapes (1990), and many others who contributed to the idea of discrete changes in volatility regimes. Moreover, while previous models generally select the number of regimes in advance, our model applies self-tuning unsupervised learning to determine the number of clusters in its implementation. In real data, we showed that the number of regimes is usually between 2 and 4, while the method remains flexible enough to detect more during crises. Our method integrates well with others in the literature (Guidolin 2011; Ang and Timmermann 2012; Campani, Garcia, and Lewin 2021), as the determined number of volatility regimes can then be used in an alternative regime-switching model that requires this quantity to be set a priori. Additionally, the dynamic trading strategy performs well at avoiding periods of significant volatility and drawdown, and performs substantially better than the SPY in various market conditions. Our method continually updates its distributions and parameters, reflecting the need for ongoing learning of market conditions and volatility structure. The method is flexible and also integrates with other statistical and machine learning methods. For instance, one could replace the static safe haven of gold with a learned allocation of low-beta assets as a dynamic safe haven. The precise methodology and applications described in this paper are not an exhaustive representation of the utility of this method. As long as there is consistency between the regime characteristic of interest, the change point algorithm, and the distance metric between distributions, the method could easily be reworked for classification of regimes of alternative characteristics, and in other domains of study. Future work could make several substitutions or improvements to the methodology. One could apply change point tests that detect changes in the mean to identify clusters with positive or negative returns. Different clustering algorithms could be used to uncover novel structures: for example, DBSCAN (Ester et al. 1996) could be used to find outlier distributions, or fuzzy clustering could be used to find distributions that might belong to multiple clusters. These additional insights could be useful for making new inferences about the time series.
Currently, we recompute the distributions every four years; instead, we could make additional use of online change point detection to optimize this period in a more sophisticated manner. We could integrate more sophisticated procedures from metric geometry than simply minimizing the distance to the prior distributions. Finally, we could conceivably combine other machine learning methods of predicting volatility and online decision-making with our distance-based detection of high volatility historical regimes. No potential conflict of interest was reported by the authors. No funding was received for this research. In this section, we describe the change point detection framework. Developed by Hawkins (1977) and Hawkins and Zamba (2005), change point algorithms seek to determine breaks in a time series at which the stochastic properties of the underlying random variables change, and have become instrumental in time series analysis. First, we outline the change point detection framework in general. A sequence of observations $x_1, x_2, \ldots, x_n$ is drawn from random variables $X_1, X_2, \ldots, X_n$. We wish to determine points $\tau_1, \ldots, \tau_m$ at which the distributions change. One assumes that the underlying random variables are independent and identically distributed between change points. One can summarize this with the following notation, following Ross (2015):
$$X_t \sim F_i \quad \text{for } \tau_i < t \leq \tau_{i+1}, \qquad i = 0, 1, \ldots, m,$$
with the conventions $\tau_0 = 0$ and $\tau_{m+1} = n$. That is, one assumes $X_i$ is a random sampling of a different distribution over each time period $[\tau_i, \tau_{i+1}]$. In order to meet the apparently restrictive assumption of independence of the data, one must usually perform an appropriate transformation of the data. The log quotient transformation, which yields the log returns from the closing price data, is one such transformation (Gustafsson 2001). Ross (2013) points out that log returns often exhibit heavy-tailed behaviour. As a result, a nonparametric test, which does not a priori assume the distribution of the data, is needed to detect change points. The rank test is one such test. Suppose there are two samples of observations from unknown distributions, $A = \{r_{1,1}, r_{1,2}, \ldots, r_{1,m}\}$ and $B = \{r_{2,1}, r_{2,2}, \ldots, r_{2,n}\}$. Define the rank of an observation $r \in A \cup B$ as follows:
$$\mathrm{rank}(r) = \sum_{j=1}^{m} \mathbf{1}_{\{r \geq r_{1,j}\}} + \sum_{j=1}^{n} \mathbf{1}_{\{r \geq r_{2,j}\}} = \#\{s \in A \cup B : r \geq s\}.$$
A larger rank indicates a higher positioning in the ordering of the elements of $A$ and $B$. If both sets of samples have the same distribution, the median rank among $\{\mathrm{rank}(r) : r \in A \cup B\}$ is $\frac{1}{2}(n + m + 1)$. In this case, one would assume that both sets have a near equal split of the ranks. The Mood test determines the extent to which each observation's rank differs from the median rank, thereby detecting differences in the distributions' variance. If the samples have different variances, then one set of samples would have more extreme values than the other, which means the ranks would not be evenly split between the two sets. Specifically, the test statistic is as follows:
$$M = \sum_{i=1}^{m} \left( \mathrm{rank}(r_{1,i}) - \frac{m + n + 1}{2} \right)^2.$$
This is appropriately normalized:
$$M_{mn} = \frac{M - \mu_M}{\sigma_M}, \quad \text{where } \mu_M = \frac{m(N^2 - 1)}{12}, \quad \sigma_M^2 = \frac{mn(N + 1)(N^2 - 4)}{180}, \quad N = m + n.$$
If $M_{mn}$ is greater than some threshold $h$, we reject the null hypothesis that the distributions have the same variance, and conclude they have different variances. As depicted in Appendix B, the log return time series are tail-heavy but strongly mean and median centred. Thus, the Mood test reliably detects changes in the variance without being affected by changes in the median. Compare Sections 4 and 5 of Mood (1954) for this distinction. Ross' CPM algorithm (Ross 2015) works by feeding in one data point at a time.
When a change point $\tau$ is detected, the algorithm restarts and proceeds from that point, so it suffices to describe how the algorithm determines its very first change point. Suppose $x_1, \ldots, x_N$ is a sequence for which no change point has been detected. For each $m = 1, 2, \ldots, N$, define $n = N - m$, mirroring the notation of A.2, and compute the Mood test statistic $M_{m,n}$. If the maximum among these, $M_N = \max_{m+n=N} M_{m,n}$, exceeds a threshold parameter $h_N$, we declare that a change point in the variance has occurred at $\hat{\tau} = \operatorname*{argmax}_m M_{m,n}$. If the maximum such test statistic does not exceed the threshold parameter, feed in the next data point $x_{N+1}$ and continue. If a change point $\hat{\tau} = m$ is detected at time $N$, there has been a delay of $n$ units in its detection. This delay is necessary for the algorithm to examine data points on each side of the change point. The algorithm then restarts from the change point $\hat{\tau}$. In our implementation of the algorithm, we read in at least 30 values before looking for another change point, so that all stationary periods have length at least 30. We choose our parameters $h$ in order to manage the number of false positives (Type I errors). Given an acceptability threshold $\alpha$, the thresholds are chosen so that this error remains constant over time:
$$P\left( M_t > h_t \mid M_s \leq h_s \text{ for all } s < t \right) = \alpha.$$
In the event that no change point exists, a false positive will nonetheless be detected at time $1/\alpha$ on average. This quantity is the average run length parameter $ARL_0$ that is passed to CPM, which in turn calculates the appropriate choice of $h_t$. In this case, $ARL_0$ is set to 10,000. We include detailed results for the experiments in Section 3 in Figures B1, B2 and the figures that follow. We remark that all distribution plots are strongly centred in mean and median about zero. This is an important technical point for the Mood test to work correctly to detect changes in variance, as described in Appendix A.
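For completeness, the Mood statistic of Appendix A can be computed and standardized directly, and cross-checked against scipy.stats.mood, which implements the same two-sample test of scale; the samples below are hypothetical.

import numpy as np
from scipy.stats import mood, rankdata

def mood_statistic(a, b):
    # Standardized Mood statistic for samples A (size m) and B (size n):
    # M = sum over A of (rank - (N + 1) / 2)^2, standardized by its mean and variance under H0.
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    m, n = len(a), len(b)
    N = m + n
    ranks = rankdata(np.concatenate([a, b]))
    M = np.sum((ranks[:m] - (N + 1) / 2.0) ** 2)
    mu = m * (N ** 2 - 1) / 12.0
    var = m * n * (N + 1) * (N ** 2 - 4) / 180.0
    return (M - mu) / np.sqrt(var)

rng = np.random.default_rng(1)
x, y = rng.normal(0.0, 1.0, 250), rng.normal(0.0, 2.0, 250)
print(mood_statistic(x, y))
print(mood(x, y))  # SciPy's z-statistic and p-value for comparison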
[Appendix B figures: kernel density estimation plots of the Nikkei, MSCI, BRK-A, XLF, RYT and AUD/USD distributions, together with their partitioned time series.]
[Appendix B figures: kernel density estimation plots of the GBP/USD and NZD/USD distributions, together with their partitioned time series.]
Regime Changes and Financial Markets
Economic policy uncertainty and stock markets: Long-run evidence from the US
Predicting regime switches in the VIX index with macroeconomic variables
The Dynamics of the S&P 500 under a Crisis Context: Insights from a Three-Regime Switching Model
Modelling long memory and structural breaks in conditional variances: An adaptive FIGARCH approach
Do bubbles have an explosive signature in Markov switching models?
Generalized autoregressive conditional heteroskedasticity
Optimal portfolio strategies in the presence of regimes in asset returns
Predicting ordinary and severe recessions with a three-state Markov-switching dynamic factor model
Dynamics of implied volatility surfaces
Fitting time series models to nonstationary processes
Regime shifts and uncertainty in pollution control
Central Limit Theorems for the Wasserstein Distance Between the Empirical and the True Distributions
Autoregressive Conditional Heteroscedasticity with Estimates of the Variance of United Kingdom Inflation
A Density-Based Algorithm for Discovering Clusters in Large Spatial Databases with Noise
A Method for Comparing Two Hierarchical Clusterings
Is there an election cycle in American stock returns?
Predictable Dynamics in the S&P 500 Index Options Implied Volatility Surface
Regime-switching stochastic volatility model: estimation and calibration to VIX options
Markov Switching Models in Empirical Finance
Asset allocation under multivariate regime switching
From the bird's eye to the microscope: A survey of new stylized facts of the intra-daily foreign exchange markets
Adaptive Filtering and Change Detection
A New Approach to the Economic Analysis of Nonstationary Time Series and the Business Cycle
Solving the problem of the K parameter in the KNN classifier using an ensemble learning approach
Testing a Sequence of Observations for a Shift in Location
A Change-Point Model for a Shift in Variance
Dynamics, behaviours, and anomaly persistence in cryptocurrencies and equities surrounding COVID-19
Fat tails and volatility clustering in experimental asset markets
Improving GARCH volatility forecasts with regime-switching GARCH
Optimal Mass Transport: Signal processing and machine-learning applications
A Spectral Analysis of World GDP Dynamics: Kondratieff Waves, Kuznets Swings, Juglar and Kitchin Cycles in Global Economic Development, and the 2008-2009 Economic Crisis
Persistence in Variance, Structural Change, and the GARCH Model
Adaptive Detection of Multiple Change-Points in Asset Price Volatility
On the Asymptotic Efficiency of Certain Nonparametric Two-Sample Tests
Detecting change points in VIX and S&P 500: A new approach to dynamic asset allocation
On fitting of non-stationary autoregressive models in time series analysis
A regime-switching Heston model for VIX and S&P 500 implied volatilities
Evolutionary Spectra and Non-Stationary Processes
On the Analysis of Bivariate Non-Stationary Processes
Modelling financial volatility in the presence of abrupt changes
Parametric and Nonparametric Sequential Change Detection in R: The cpm Package
Stock Market Analysis: A Review and Taxonomy of Prediction Techniques
Overseas market shocks and VKOSPI dynamics: A Markov-switching approach
Volatility dynamics under an endogenous Markov-switching framework: a cross-market approach
Evaluating volatility and interval forecasts
Generating Volatility Forecasts from Value at Risk Estimates
Implied volatility surfaces: uncovering regularities for options on financial futures
Threshold Autoregression, Limit Cycles and Cyclical Data
A tutorial on spectral clustering
Market-making strategy with asymmetric information and regime-switching
Threshold heteroskedastic models
Self-Tuning Spectral Clustering
Many thanks to Kerry Chen and Alex Judge for helpful edits and discussion.