arXiv daily: Statistical Finance (q-fin.ST)

1. On statistical arbitrage under a conditional factor model of equity returns

Authors: Trent Spears, Stefan Zohren, Stephen Roberts

Abstract: We consider a conditional factor model for a multivariate portfolio of United States equities in the context of analysing a statistical arbitrage trading strategy. A state space framework underlies the factor model, whereby asset returns are assumed to be noisy observations of a linear combination of factor values and latent factor risk premia. Filtered and predicted state estimates for the risk premia are retrieved in an online way. Such estimates induce filtered asset returns that can be compared to measurement observations, with large deviations representing candidate mean-reversion trades. Further, because the risk premia are modelled as time-varying quantities, non-stationarity in returns is captured de facto. We study an empirical trading strategy that respects transaction costs, and demonstrate performance over a 29-year history for both a linear and a non-linear state space model. Our results show that the model is competitive with other methods, including simple benchmarks and cutting-edge approaches published in the literature. Also of note, while strategy performance degrades over time -- especially in the most recent years -- the strategy continues to offer compelling economics and has scope for further advancement.
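
The online filtering idea in this abstract can be illustrated with a minimal scalar Kalman filter: a latent risk premium follows a random walk, returns are noisy observations of factor exposure times premium, and the gap between observed and filtered returns yields a mean-reversion signal. This is a toy sketch on simulated data, not the authors' model; all names and parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: one asset, one factor; the latent risk premium beta_t
# follows a random walk and returns are noisy observations of it.
T = 500
beta_true = np.cumsum(0.01 * rng.standard_normal(T))
factor = rng.standard_normal(T)                  # observed factor loading
returns = factor * beta_true + 0.1 * rng.standard_normal(T)

# Scalar Kalman filter: state = risk premium, observation = return.
q, r = 0.01 ** 2, 0.1 ** 2                       # state / observation noise variances
beta_hat, P = 0.0, 1.0                           # prior state mean and variance
filtered_returns = np.empty(T)
for t in range(T):
    P += q                                       # predict: random-walk state
    filtered_returns[t] = factor[t] * beta_hat   # filtered (model-implied) return
    S = factor[t] ** 2 * P + r                   # innovation variance
    K = P * factor[t] / S                        # Kalman gain
    beta_hat += K * (returns[t] - filtered_returns[t])  # update
    P *= 1.0 - K * factor[t]

# Large deviations between observed and filtered returns are
# candidate mean-reversion trades in a strategy of this kind.
signal = returns - filtered_returns
```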

1. Chance or Chaos? Fractal geometry aimed to inspect the nature of Bitcoin

Authors: Esther Cabezas-Rivas, Felipe Sánchez-Coll, Isaac Tormo Xaixo

Abstract: The aim of this paper is to analyse Bitcoin in order to shed some light on its nature and behaviour. We select 9 cryptocurrencies that account for almost 75% of total market capitalisation and compare their evolution with that of a wide variety of traditional assets: commodities with spot and futures contracts, treasury bonds, stock indices, and growth and value stocks. Fractal geometry is applied to carry out a careful statistical analysis of the performance of Bitcoin returns. As a main conclusion, we detect a high degree of persistence in its prices, which decreases its efficiency but increases its predictability. Moreover, we observe that the underlying technology influences price dynamics, with fully decentralised cryptocurrencies being the only ones to exhibit self-similarity features at any time scale.
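
Persistence of the kind reported here is commonly quantified with the Hurst exponent. Below is a minimal variance-scaling estimator (an illustrative sketch, not the paper's methodology): applied to a price path, H > 0.5 indicates persistence.

```python
import numpy as np

def hurst_exponent(path, lags=range(2, 50)):
    """Estimate the Hurst exponent H from the scaling of lagged increments:
    Std[X(t + tau) - X(t)] ~ tau^H. H > 0.5 signals persistence,
    H < 0.5 anti-persistence, H ~ 0.5 a memoryless random walk."""
    lags = np.asarray(list(lags))
    stds = np.array([np.std(path[lag:] - path[:-lag]) for lag in lags])
    slope, _ = np.polyfit(np.log(lags), np.log(stds), 1)
    return slope

# Sanity check on a plain random walk, whose true exponent is 0.5.
rng = np.random.default_rng(1)
random_walk = np.cumsum(rng.standard_normal(5000))
H = hurst_exponent(random_walk)
```

A persistent asset path would instead produce H noticeably above 0.5 under this estimator.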

1. Linking microblogging sentiments to stock price movement: An application of GPT-4

Authors: Rick Steinert, Saskia Altmann

Abstract: This paper investigates the potential improvement of the GPT-4 large language model (LLM) over BERT for modeling same-day daily stock price movements of Apple and Tesla in 2017, based on sentiment analysis of microblogging messages. We recorded daily adjusted closing prices and translated them into up-down movements. Sentiment for each day was extracted from messages on the Stocktwits platform using both LLMs. We develop a novel method to engineer a comprehensive prompt for contextual sentiment analysis that unlocks the true capabilities of modern LLMs. This enables us to carefully retrieve sentiments, perceived advantages or disadvantages, and the relevance to the analyzed company. Logistic regression is used to evaluate whether the extracted message contents reflect stock price movements. As a result, GPT-4 exhibited substantial accuracy, outperforming BERT in five out of six months and substantially exceeding a naive buy-and-hold strategy, reaching a peak accuracy of 71.47% in May. The study also highlights the importance of prompt engineering in obtaining desired outputs from GPT-4's contextual abilities. However, the costs of deploying GPT-4 and the need for fine-tuning prompts highlight some practical considerations for its use.
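
The final evaluation step, regressing up-down moves on extracted sentiment, can be sketched with a small hand-rolled logistic regression. The sentiment scores and labels below are simulated stand-ins, not Stocktwits data or the paper's pipeline.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic stand-in: daily sentiment scores in [-1, 1] and up/down
# labels partially explained by sentiment.
n = 400
sentiment = rng.uniform(-1, 1, n)
prob_up = 1.0 / (1.0 + np.exp(-2.0 * sentiment))
up_down = (rng.uniform(size=n) < prob_up).astype(float)

# Logistic regression fitted by gradient ascent on the log-likelihood.
X = np.column_stack([np.ones(n), sentiment])     # intercept + sentiment
w = np.zeros(2)
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-X @ w))
    w += 0.1 * X.T @ (up_down - p) / n

pred_up = 1.0 / (1.0 + np.exp(-X @ w)) > 0.5
accuracy = np.mean(pred_up == (up_down > 0.5))
```

A positive fitted slope on the sentiment feature is what "extracted message contents reflect stock price movements" looks like in this framework.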

1. Combining predictive distributions of electricity prices: Does minimizing the CRPS lead to optimal decisions in day-ahead bidding?

Authors: Weronika Nitka, Rafał Weron

Abstract: Probabilistic price forecasting has recently gained attention in power trading because decisions based on such predictions can yield significantly higher profits than those made with point forecasts alone. At the same time, methods are being developed to combine predictive distributions, since no model is perfect and averaging generally improves forecasting performance. In this article we address the question of whether using CRPS learning, a novel weighting technique minimizing the continuous ranked probability score (CRPS), leads to optimal decisions in day-ahead bidding. To this end, we conduct an empirical study using hourly day-ahead electricity prices from the German EPEX market. We find that increasing the diversity of an ensemble can have a positive impact on accuracy. At the same time, the higher computational cost of using CRPS learning compared to an equal-weighted aggregation of distributions is not offset by higher profits, despite significantly more accurate predictions.
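
The CRPS itself is straightforward to compute for an ensemble via its kernel (energy) representation. A sketch with simulated price ensembles follows; the values are illustrative, not EPEX data.

```python
import numpy as np

def crps_ensemble(members, observation):
    """Sample-based CRPS via the kernel representation
    CRPS = E|X - y| - 0.5 * E|X - X'|, with X, X' drawn from the forecast.
    Lower is better."""
    members = np.asarray(members, dtype=float)
    term1 = np.mean(np.abs(members - observation))
    term2 = 0.5 * np.mean(np.abs(members[:, None] - members[None, :]))
    return term1 - term2

rng = np.random.default_rng(3)
obs = 50.0                                   # realized price, e.g. in EUR/MWh
sharp = rng.normal(50.0, 1.0, 1000)          # calibrated, sharp ensemble
wide = rng.normal(50.0, 10.0, 1000)          # right mean, overdispersed
```

The sharp ensemble scores a smaller CRPS than the overdispersed one, which is exactly the behaviour CRPS-learning weights reward when combining predictive distributions.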

1. Methods for Acquiring and Incorporating Knowledge into Stock Price Prediction: A Survey

Authors: Liping Wang, Jiawei Li, Lifan Zhao, Zhizhuo Kou, Xiaohan Wang, Xinyi Zhu, Hao Wang, Yanyan Shen, Lei Chen

Abstract: Predicting stock prices presents a challenging research problem due to the inherent volatility and non-linear nature of the stock market. In recent years, knowledge-enhanced stock price prediction methods have shown groundbreaking results by utilizing external knowledge to understand the stock market. Despite the importance of these methods, there is a scarcity of scholarly works that systematically synthesize previous studies from the perspective of external knowledge types. Specifically, the external knowledge can be modeled in different data structures, which we group into non-graph-based formats and graph-based formats: 1) non-graph-based knowledge captures contextual information and multimedia descriptions specifically associated with an individual stock; 2) graph-based knowledge captures interconnected and interdependent information in the stock market. This survey paper aims to provide a systematic and comprehensive description of methods for acquiring external knowledge from various unstructured data sources and then incorporating it into stock price prediction models. We also explore fusion methods for combining external knowledge with historical price features. Moreover, this paper includes a compilation of relevant datasets and delves into potential future research directions in this domain.

1. Regularity in forex returns during financial distress: Evidence from India

Authors: Radhika Prosad Datta

Abstract: This paper uses entropy concepts to study the regularity/irregularity of returns from the Indian foreign exchange (forex) markets. The Approximate Entropy and Sample Entropy statistics, which measure the level of repeatability in the data, are used to quantify the randomness in forex returns over the period 2006 to 2021. The main objective of the research is to see how the randomness of foreign exchange returns evolves over the given period, particularly during periods of high financial instability or turbulence in the global financial market. With this objective we look at 2 major financial upheavals: the subprime crisis, also known as the Global Financial Crisis (GFC), during 2006-2007, and the recent Covid-19 pandemic during 2020-2021. Our empirical results overwhelmingly confirm our working hypothesis that regularity in the returns of the major Indian foreign exchange rates increases during times of financial crisis. This is evidenced by a decrease in the sample entropy and approximate entropy values from before the crisis to during and after it, for the majority of the exchange rates. Our empirical results also show that Sample Entropy is a better measure of regularity than Approximate Entropy for the Indian forex rates, in agreement with theoretical predictions.
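
A compact implementation of the sample entropy statistic used here (a sketch; the parameter choices m = 2 and r = 0.2·std follow common practice and are not necessarily the paper's):

```python
import numpy as np

def sample_entropy(x, m=2, r=0.2):
    """Sample entropy SampEn(m, r) = -ln(A/B), where B counts pairs of
    length-m templates within tolerance r (Chebyshev distance) and A
    counts the same for length m + 1. Lower values mean more regularity."""
    x = np.asarray(x, dtype=float)
    r = r * np.std(x)

    def count_matches(mm):
        templates = np.array([x[i:i + mm] for i in range(len(x) - mm)])
        count = 0
        for i in range(len(templates)):
            dist = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            count += np.sum(dist <= r)
        return count

    B, A = count_matches(m), count_matches(m + 1)
    return -np.log(A / B)

rng = np.random.default_rng(4)
noise = rng.standard_normal(500)                      # irregular series
regular = np.sin(np.linspace(0, 20 * np.pi, 500))     # highly regular series
```

Lower entropy values indicate more regularity, which is the paper's signature of crisis periods in forex returns.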

1. A Common Shock Model for multidimensional electricity intraday price modelling with application to battery valuation

Authors: Thomas Deschatre, Xavier Warin

Abstract: In this paper, we propose a multidimensional statistical model of intraday electricity prices at the scale of the trading session, which allows all products to be simulated simultaneously. This model, based on Poisson measures and inspired by the Common Shock Poisson Model, reproduces the Samuelson effect (intensity and volatility increase as time to maturity decreases). It also reproduces the price correlation structure, highlighted here in the data, which decreases as two maturities move apart. The model has only three parameters, which can be estimated using a method of moments that we propose here. We demonstrate the usefulness of the model on a case of storage valuation by dynamic programming over a trading session.

2. Shifting Cryptocurrency Influence: A High-Resolution Network Analysis of Market Leaders

Authors: Arnav Hiray, Agam Shah, Sudheer Chava

Abstract: Over the last decade, the cryptocurrency market has experienced unprecedented growth, emerging as a prominent financial market. As this market rapidly evolves, it necessitates re-evaluating which cryptocurrencies command the market and steer the direction of blockchain technology. We implement a network-based cryptocurrency market analysis to investigate this changing landscape. We use novel hourly-resolution data and Kendall's Tau correlation to explore the interconnectedness of the cryptocurrency market. We observe critical differences in the hierarchy of cryptocurrencies determined by our method compared to rankings derived from daily data and Pearson's correlation. This divergence emphasizes the potential information loss stemming from daily data aggregation and highlights the limitations of Pearson's correlation. Our findings show that in the early stages of this growth, Bitcoin held a leading role. However, during the 2021 bull run, the landscape changed drastically. We see that while Ethereum has emerged as the overall leader, it was FTT and its associated exchange, FTX, that largely drove the increase at the beginning of the bull run. We also find that highly influential cryptocurrencies are increasingly gaining a commanding influence over the market as time progresses, despite the growing number of cryptocurrencies making up the market.
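
Kendall's Tau is rank-based, which makes it robust to the heavy tails of crypto returns and motivates its use over Pearson's correlation. A toy sketch of the pairwise statistic on simulated return series (the names and data are illustrative):

```python
import numpy as np

def kendall_tau(x, y):
    """Kendall's tau-a: (concordant - discordant pairs) / total pairs.
    Rank-based, so insensitive to monotone transforms and outliers."""
    n = len(x)
    sx = np.sign(x[:, None] - x[None, :])
    sy = np.sign(y[:, None] - y[None, :])
    return np.sum(sx * sy) / (n * (n - 1))

rng = np.random.default_rng(5)
btc = rng.standard_normal(300)                    # toy hourly returns
eth = 0.8 * btc + 0.6 * rng.standard_normal(300)  # strongly linked to btc
doge = rng.standard_normal(300)                   # independent of btc
```

Computing the full tau matrix across all assets and thresholding or building a minimum spanning tree on it yields the kind of dependence network the paper analyses.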

1. An exploration of the mathematical structure and behavioural biases of financial crises

Authors: Nick James, Max Menzies

Abstract: In this paper we contrast the dynamics of the 2022 Ukraine invasion financial crisis with notable financial crises of recent years - the dot-com bubble, global financial crisis and COVID-19. We study the similarity in market dynamics and associated implications for equity investors between various financial market crises and we introduce new mathematical techniques to do so. First, we study the strength of collective dynamics during different market crises, and compare suitable portfolio diversification strategies with respect to the unique number of sectors and stocks for optimal systematic risk reduction. Next, we introduce a new linear operator method to quantify distributional distance between equity returns during various crises. Our method allows us to fairly compare underlying stock and sector performance during different time periods, normalising for those collective dynamics driven by the overall market. Finally, we introduce a new combinatorial portfolio optimisation framework driven by random sampling to investigate whether particular equities and equity sectors are more effective in maximising investor risk-adjusted returns during market crises.

1. Memory Effects, Multiple Time Scales and Local Stability in Langevin Models of the S&P500 Market Correlation

Authors: Tobias Wand, Martin Heßler, Oliver Kamps

Abstract: The analysis of market correlations is crucial for optimal portfolio selection of correlated assets, but their memory effects have often been neglected. In this work, we analyse the mean market correlation of the S&P500, which corresponds to the main market mode in principal component analysis. We fit a generalised Langevin equation (GLE) to the data, whose memory kernel implies that there is a significant memory effect in the market correlation ranging back at least three trading weeks. The memory kernel improves the forecasting accuracy of the GLE compared to models without memory; hence, such a memory effect has to be taken into account for optimal portfolio selection to minimise risk or for predicting future correlations. Moreover, a Bayesian resilience estimation provides further evidence for non-Markovianity in the data and suggests the existence of a hidden time scale that operates much more slowly than the observed daily market data. Assuming that such a slow time scale exists, our work supports previous research on the existence of locally stable market states.

1. Are there Dragon Kings in the Stock Market?

Authors: Jiong Liu, M. Dashti Moghaddam, R. A. Serota

Abstract: We undertake a systematic study of historic market volatility spanning roughly five preceding decades. We focus specifically on the time series of realized volatility (RV) of the S&P500 index and its distribution function. As expected, the largest values of RV coincide with the largest economic upheavals of the period: the Savings and Loan Crisis, Tech Bubble, Financial Crisis and Covid Pandemic. We address the question of whether these values belong to one of three categories: Black Swans (BS), that is, they lie on scale-free, power-law tails of the distribution; Dragon Kings (DK), defined as statistically significant upward deviations from BS; or Negative Dragon Kings (nDK), defined as statistically significant downward deviations from BS. In analyzing the tails of the distribution with RV > 40, we observe the appearance of "potential" DK which eventually terminate in an abrupt plunge to nDK. This phenomenon becomes more pronounced as the number of days over which the average RV is calculated increases -- here from daily, n=1, to "monthly," n=21. We fit the entire distribution with a modified Generalized Beta (mGB) distribution function, which terminates at a finite value of the variable but exhibits a long power-law stretch prior to that, as well as with a Generalized Beta Prime (GB2) distribution function, which has a power-law tail. We also fit the tails directly with a straight line on a log-log scale. To ascertain BS, DK or nDK behavior, all fits include their confidence intervals, and p-values are evaluated for the data points to check whether they can come from the respective distributions.
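
The "straight line on a log-log scale" tail fit can be sketched on a synthetic Pareto sample with a known exponent. This is illustrative only; the paper fits realized-volatility data and full mGB/GB2 distributions, and log-log regression is known to be a rough estimator.

```python
import numpy as np

rng = np.random.default_rng(6)

# Synthetic Pareto sample with tail exponent alpha = 3: P(X > x) ~ x^(-3).
alpha = 3.0
sample = (1.0 - rng.uniform(size=20000)) ** (-1.0 / alpha)

# Straight-line fit of the empirical CCDF on a log-log scale.
x = np.sort(sample)
ccdf = 1.0 - np.arange(1, len(x) + 1) / len(x)
mask = (x > 2.0) & (ccdf > 0)            # fit only the tail, drop the last point
slope, _ = np.polyfit(np.log(x[mask]), np.log(ccdf[mask]), 1)
alpha_hat = -slope                        # recovered tail exponent
```

A Dragon King candidate would show up as data points lying systematically above this fitted line, and a negative Dragon King as points plunging below it.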

1. Estimating the roughness exponent of stochastic volatility from discrete observations of the realized variance

Authors: Xiyue Han, Alexander Schied

Abstract: We consider the problem of estimating the roughness of the volatility in a stochastic volatility model that arises as a nonlinear function of fractional Brownian motion with drift. To this end, we introduce a new estimator that measures the so-called roughness exponent of a continuous trajectory, based on discrete observations of its antiderivative. We provide conditions on the underlying trajectory under which our estimator converges in a strictly pathwise sense. Then we verify that these conditions are satisfied by almost every sample path of fractional Brownian motion (with drift). As a consequence, we obtain strong consistency theorems in the context of a large class of rough volatility models. Numerical simulations show that our estimation procedure performs well after passing to a scale-invariant modification of our estimator.

1. Analysis of Indian foreign exchange markets: A Multifractal Detrended Fluctuation Analysis (MFDFA) approach

Authors: R. P. Datta

Abstract: The multifractal spectra of daily foreign exchange rates for the US dollar (USD), the British Pound (GBP), the Euro and the Japanese Yen with respect to the Indian Rupee are analysed for the period 6th January 1999 to 24th July 2018. We observe that the time series of logarithmic returns of all four exchange rates exhibit features of multifractality. Next, we investigate the source of the observed multifractality. For this, we transform the return series in two ways: a) we randomly shuffle the original time series of logarithmic returns, and b) we apply phase randomisation to the original series. Our results indicate that in the case of the US dollar the source of multifractality is mainly the fat tails. For the British Pound and the Euro, both the long-range correlations between observations and the thick tails of the probability distribution give rise to the observed multifractal features, while in the case of the Japanese Yen the multifractal nature of the return series is mostly due to the broad tail.
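
The two surrogate transformations used to locate the source of multifractality can be sketched as follows, with a toy Gaussian series standing in for the actual forex returns. Shuffling destroys temporal correlations while preserving the return distribution; phase randomisation preserves the power spectrum (linear correlations) while Gaussianising the distribution.

```python
import numpy as np

rng = np.random.default_rng(7)
returns = rng.standard_normal(1024)          # toy stand-in for log returns

# a) Shuffling: same distribution, temporal correlations destroyed.
shuffled = rng.permutation(returns)

# b) Phase randomisation: same power spectrum, fat-tail effects removed.
spectrum = np.fft.rfft(returns)
phases = rng.uniform(0, 2 * np.pi, len(spectrum))
phases[0] = 0.0                              # keep the DC bin real
phases[-1] = 0.0                             # keep the Nyquist bin real (even n)
randomised = np.fft.irfft(np.abs(spectrum) * np.exp(1j * phases), n=len(returns))
```

Comparing the multifractal spectrum of the original series against those of the two surrogates then attributes multifractality to fat tails, to long-range correlations, or to both.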

1. Higher-order Graph Attention Network for Stock Selection with Joint Analysis

Authors: Yang Qiao, Yiping Xia, Xiang Li, Zheng Li, Yan Ge

Abstract: Stock selection is important for investors constructing profitable portfolios. Graph neural networks (GNNs) are increasingly attracting researchers for stock prediction due to their strong abilities in relation modelling and generalisation. However, existing GNN methods only focus on simple pairwise stock relations and do not capture complex higher-order structures that model relations among more than two nodes. In addition, they only consider factors of technical analysis and overlook factors of fundamental analysis, which can affect the stock trend significantly. Motivated by these limitations, we propose a higher-order graph attention network with joint analysis (H-GAT). H-GAT is able to capture higher-order structures and jointly incorporate factors of fundamental analysis with factors of technical analysis. Specifically, the sequential layer of H-GAT takes both types of factors as the input of a long short-term memory model. The relation embedding layer of H-GAT constructs a higher-order graph and learns node embeddings with GAT. We then predict the ranks of stock returns. Extensive experiments demonstrate the superiority of our H-GAT method in the profitability test and Sharpe ratio over both NASDAQ and NYSE datasets.

1. Fractal properties, information theory, and market efficiency

Authors: Xavier Brouty, Matthieu Garcin

Abstract: Considering that both the entropy-based market information and the Hurst exponent are useful tools for determining whether the efficient market hypothesis holds for a given asset, we study the link between the two approaches. We thus provide a theoretical expression for the market information when log-prices follow either a fractional Brownian motion or its stationary extension using the Lamperti transform. In the latter model, we show that a Hurst exponent close to 1/2 can lead to a very high informativeness of the time series, because of the stationarity mechanism. In addition, we introduce a multiscale method to get a deeper interpretation of the entropy and of the market information, depending on the size of the information set. Applications to Bitcoin, CAC 40 index, Nikkei 225 index, and EUR/USD FX rate, using daily or intraday data, illustrate the methodological content.

2. Multivariate Simulation-based Forecasting for Intraday Power Markets: Modelling Cross-Product Price Effects

Authors: Simon Hirsch, Florian Ziel

Abstract: Intraday electricity markets play an increasingly important role in balancing the intermittent generation of renewable energy resources, which creates a need for accurate probabilistic price forecasts. However, research to date has focused on univariate approaches, while in many European intraday electricity markets all delivery periods are traded in parallel. Thus, the dependency structure between different traded products and the corresponding cross-product effects cannot be ignored. We aim to fill this gap in the literature by using copulas to model the high-dimensional intraday price return vector. We model the marginal distribution as a zero-inflated Johnson's $S_U$ distribution with location, scale and shape parameters that depend on market and fundamental data. The dependence structure is modelled using latent beta regression to account for the particular market structure of the intraday electricity market, such as overlapping but independent trading sessions for different delivery days. We allow the dependence parameter to be time-varying. We validate our approach in a simulation study for the German intraday electricity market and find that modelling the dependence structure improves the forecasting performance. Additionally, we shed light on the impact of the single intraday coupling (SIDC) on the trading activity and price distribution and interpret our results in light of the market efficiency hypothesis. The approach is directly applicable to other European electricity markets.

1. DoubleAdapt: A Meta-learning Approach to Incremental Learning for Stock Trend Forecasting

Authors: Lifan Zhao, Shuming Kong, Yanyan Shen

Abstract: Stock trend forecasting is a fundamental task of quantitative investment where precise predictions of price trends are indispensable. As an online service, stock data continuously arrive over time. It is practical and efficient to incrementally update the forecast model with the latest data which may reveal some new patterns recurring in the future stock market. However, incremental learning for stock trend forecasting still remains under-explored due to the challenge of distribution shifts (a.k.a. concept drifts). With the stock market dynamically evolving, the distribution of future data can slightly or significantly differ from incremental data, hindering the effectiveness of incremental updates. To address this challenge, we propose DoubleAdapt, an end-to-end framework with two adapters, which can effectively adapt the data and the model to mitigate the effects of distribution shifts. Our key insight is to automatically learn how to adapt stock data into a locally stationary distribution in favor of profitable updates. Complemented by data adaptation, we can confidently adapt the model parameters under mitigated distribution shifts. We cast each incremental learning task as a meta-learning task and automatically optimize the adapters for desirable data adaptation and parameter initialization. Experiments on real-world stock datasets demonstrate that DoubleAdapt achieves state-of-the-art predictive performance and shows considerable efficiency.

1. Agent market orders representation through a contrastive learning approach

Authors: Ruihua Ruan, Emmanuel Bacry, Jean-François Muzy

Abstract: Thanks to access to labelled orders in the CAC40 data from Euronext, we are able to analyse agents' behaviours in the market based on the orders they place. In this study, we construct a self-supervised learning model using triplet loss to effectively learn a representation of agent market orders. With this learned representation, various downstream tasks become feasible. In this work, we apply the K-means clustering algorithm to the learned representation vectors of agent orders to identify distinct behaviour types within each cluster.

2. FinGPT: Open-Source Financial Large Language Models

Authors: Hongyang Yang, Xiao-Yang Liu, Christina Dan Wang

Abstract: Large language models (LLMs) have shown the potential to revolutionize natural language processing tasks in diverse domains, sparking great interest in finance. Accessing high-quality financial data is the first challenge for financial LLMs (FinLLMs). While proprietary models like BloombergGPT have taken advantage of their unique data accumulation, such privileged access calls for an open-source alternative to democratize Internet-scale financial data. In this paper, we present an open-source large language model, FinGPT, for the finance sector. Unlike proprietary models, FinGPT takes a data-centric approach, providing researchers and practitioners with accessible and transparent resources to develop their FinLLMs. We highlight the importance of an automatic data curation pipeline and the lightweight low-rank adaptation technique in building FinGPT. Furthermore, we showcase several potential applications as stepping stones for users, such as robo-advising, algorithmic trading, and low-code development. Through collaborative efforts within the open-source AI4Finance community, FinGPT aims to stimulate innovation, democratize FinLLMs, and unlock new opportunities in open finance. Two associated code repositories are https://github.com/AI4Finance-Foundation/FinGPT and https://github.com/AI4Finance-Foundation/FinNLP.

1. Permutation invariant Gaussian matrix models for financial correlation matrices

Authors: George Barnes, Sanjaye Ramgoolam, Michael Stephanou

Abstract: We construct an ensemble of correlation matrices from high-frequency foreign exchange market data, with one matrix per day over 446 days. The matrices are symmetric and have vanishing diagonal elements after subtracting the identity matrix. For this case, we construct the general permutation invariant Gaussian matrix model, which has 4 parameters characterised using the representation theory of symmetric groups. The permutation invariant polynomial functions of the symmetric, diagonally vanishing matrices have a basis labelled by undirected loop-less graphs. Using the expectation values of the general linear and quadratic permutation invariant functions of the matrices in the dataset, the 4 parameters of the matrix model are determined. The model then predicts the expectation values of the cubic and quartic polynomials. These predictions are compared to the data to give strong evidence for a good overall fit of the permutation invariant Gaussian matrix model. The linear, quadratic, cubic and quartic polynomial functions are then used to define low-dimensional feature vectors for the days associated with the matrices. These vectors, with choices informed by the refined structure of small non-Gaussianities, are found to be effective as a tool for anomaly detection in market states: statistically significant correlations are established between atypical days as defined using these feature vectors, and days with significant economic events as recognized in standard foreign exchange economic calendars. They are also shown to be useful as a tool for ranking pairs of days in terms of their similarity, yielding a strongly statistically significant correlation with a ranking based on a higher dimensional proxy for visual similarity.

1. Explaining AI in Finance: Past, Present, Prospects

Authors: Barry Quinn

Abstract: This paper explores the journey of AI in finance, with a particular focus on the crucial role and potential of Explainable AI (XAI). We trace AI's evolution from early statistical methods to sophisticated machine learning, highlighting XAI's role in popular financial applications. The paper underscores the superior interpretability of methods like Shapley values compared to traditional linear regression in complex financial scenarios. It emphasizes the necessity of further XAI research, given forthcoming EU regulations. The paper demonstrates, through simulations, that XAI enhances trust in AI systems, fostering more responsible decision-making within finance.

1. Accounting statement analysis at industry level. A gentle introduction to the compositional approach

Authors: Germà Coenders (University of Girona), Núria Arimany Serrat (University of Vic - Central University of Catalonia)

Abstract: Compositional data are contemporarily defined as positive vectors, the ratios among whose elements are of interest to the researcher. Financial statement analysis by means of accounting ratios fulfils this definition to the letter. Compositional data analysis solves the major problems in statistical analysis of standard financial ratios at industry level, such as skewness, non-normality, non-linearity and dependence of the results on the choice of which accounting figure goes to the numerator and to the denominator of the ratio. In spite of this, compositional applications to financial statement analysis are still rare. In this article, we present some transformations within compositional data analysis that are particularly useful for financial statement analysis. We show how to compute industry or sub-industry means of standard financial ratios from a compositional perspective. We show how to visualise firms in an industry with a compositional biplot, to classify them with compositional cluster analysis and to relate financial and non-financial indicators with compositional regression models. We show an application to the accounting statements of Spanish wineries using DuPont analysis, and a step-by-step tutorial for the compositional freeware CoDaPack.

1. Complexity measure, kernel density estimation, bandwidth selection, and the efficient market hypothesis

Authors: Matthieu Garcin

Abstract: We are interested in the nonparametric estimation of the probability density of price returns, using the kernel approach. The output of the method heavily relies on the selection of a bandwidth parameter. Many selection methods have been proposed in the statistical literature. We put forward an alternative selection method based on a criterion coming from information theory and from the physics of complex systems: the bandwidth to be selected maximizes a new measure of complexity, with the aim of avoiding both overfitting and underfitting. We review existing methods of bandwidth selection and show that they lead to contradictory conclusions regarding the complexity of the probability distribution of price returns. This also has some striking consequences for the evaluation of the relevance of the efficient market hypothesis. We apply these methods to real financial data, focusing on Bitcoin.
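
A minimal Gaussian kernel density estimator makes the role of the bandwidth concrete; Silverman's rule of thumb is shown as one classical selector (a sketch on simulated returns — the paper's complexity-maximising criterion is not reproduced here).

```python
import numpy as np

def gaussian_kde(data, grid, h):
    """Kernel density estimate with a Gaussian kernel and bandwidth h."""
    u = (grid[:, None] - data[None, :]) / h
    return np.mean(np.exp(-0.5 * u ** 2), axis=1) / (h * np.sqrt(2 * np.pi))

rng = np.random.default_rng(8)
returns = rng.standard_normal(2000)          # toy price returns
grid = np.linspace(-4, 4, 201)

# Silverman's rule of thumb, one of the classical bandwidth selectors.
h_silverman = 1.06 * np.std(returns) * len(returns) ** (-1 / 5)
density = gaussian_kde(returns, grid, h_silverman)
```

A much smaller h produces a spiky, overfitted density; a much larger h oversmooths it. The paper's complexity criterion is designed to pick a bandwidth between these two regimes.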

1. On the Time-Varying Structure of the Arbitrage Pricing Theory using the Japanese Sector Indices

Authors: Koichiro Moriya, Akihiko Noda

Abstract: This paper is the first study to examine the time instability of the APT in the Japanese stock market. In particular, we measure how changes in each risk factor affect the stock risk premiums to investigate the validity of the APT over time, applying the rolling window method to Fama and MacBeth's (1973) two-step regression and Kamstra and Shi's (2023) generalized GRS test. We summarize our empirical results as follows: (1) the APT is supported over the entire sample period but not at all times, (2) the changes in monetary policy greatly affect the validity of the APT in Japan, and (3) the time-varying estimates of the risk premiums for each factor are also unstable over time, and they are affected by the business cycle and economic crises. Therefore, we conclude that the validity of the APT as an appropriate model to explain the Japanese sector index is not stable over time.

1. Deep Stock: training and trading scheme using deep learning

Authors: Sungwoo Kang

Abstract: Despite the efficient market hypothesis, many studies suggest the existence of inefficiencies in the stock market, leading to the development of techniques to gain above-market returns, known as alpha. Systematic trading has undergone significant advances in recent decades, with deep learning emerging as a powerful tool for analyzing and predicting market behavior. In this paper, we propose a model inspired by professional traders that looks at stock prices of the previous 600 days and predicts whether the stock price rises or falls by a certain percentage within the next D days. Our model, called DeepStock, uses ResNet's skip connections and logits to increase the probability of a model in a trading scheme. We test our model on both the Korean and US stock markets and achieve a profit of N% on the Korean market, which is M% above the market return, and a profit of A% on the US market, which is B% above the market return.

1.Recurrent neural network based parameter estimation of Hawkes model on high-frequency financial data

Authors:Kyungsub Lee

Abstract: This study examines the use of a recurrent neural network for estimating the parameters of a Hawkes model based on high-frequency financial data and, subsequently, for computing volatility. Neural networks have shown promising results in various fields, and interest in their financial applications is also growing. Our approach is significantly faster than traditional maximum likelihood estimation while yielding comparable accuracy in both simulation and empirical studies. Furthermore, we demonstrate the application of this method to real-time volatility measurement, enabling the continuous estimation of financial volatility as new price data arrives from the market.
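For reference, the exponential-kernel Hawkes model and the maximum-likelihood objective that the paper's RNN estimator sidesteps look as follows. This is a standard univariate formulation, assumed here for illustration; the paper's exact specification may differ.

```python
import numpy as np

def hawkes_intensity(mu, alpha, beta, event_times, t):
    """Conditional intensity of a univariate exponential-kernel Hawkes
    process: lambda(t) = mu + sum_{t_i < t} alpha * exp(-beta (t - t_i))."""
    past = event_times[event_times < t]
    return mu + alpha * np.exp(-beta * (t - past)).sum()

def neg_log_likelihood(mu, alpha, beta, event_times, T):
    """Exact negative log-likelihood on [0, T] -- the objective that MLE
    minimizes, computed with the usual O(n) recursion."""
    # Recursion: A_1 = 0, A_i = exp(-beta (t_i - t_{i-1})) * (1 + A_{i-1})
    A, loglik, prev = 0.0, 0.0, None
    for t in event_times:
        A = np.exp(-beta * (t - prev)) * (1 + A) if prev is not None else 0.0
        loglik += np.log(mu + alpha * A)
        prev = t
    compensator = mu * T + (alpha / beta) * np.sum(
        1 - np.exp(-beta * (T - event_times)))
    return -(loglik - compensator)
```

An RNN estimator, as described in the abstract, replaces the iterative numerical minimization of this objective with a single forward pass mapping event sequences directly to (mu, alpha, beta) estimates.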

1.Parameterized Neural Networks for Finance

Authors:Daniel Oeltz, Jan Hamaekers, Kay F. Pilz

Abstract: We discuss and analyze a neural network architecture that enables learning a model class for a set of different data samples rather than just a single model for one specific data sample. In this sense, it may help to reduce overfitting: after learning the model class over a larger sample comprising many such data sets, only a few parameters need to be adjusted to model a new, specific problem. After analyzing the method theoretically and through regression examples on different one-dimensional problems, we apply the approach to one of the standard problems facing asset managers and banks: the calibration of spread curves. The presented results clearly show the potential of this method. The application is of particular interest to financial practitioners, since nearly all asset managers and banks that have solutions in place may need to adapt or even change their current methodologies once ESG ratings additionally affect bond spreads.
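The core idea, a shared "model class" with only a few parameters adjusted per new data sample, can be illustrated in a drastically simplified form: a frozen feature map shared across tasks, plus a small task-specific head fitted per curve. Everything below (random cosine features, ridge fitting, the toy targets) is an assumption for illustration, not the paper's architecture.

```python
import numpy as np

def make_shared_features(dim_in, dim_feat, seed=0):
    """Shared part of the 'model class': a fixed random feature map
    (standing in for the paper's learned shared network)."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(dim_in, dim_feat))
    b = rng.uniform(0, 2 * np.pi, size=dim_feat)
    return lambda x: np.cos(x @ W + b)

def fit_task_head(phi, x, y, ridge=1e-6):
    """Calibrating to a new data sample adjusts only the small
    task-specific head theta; the shared features stay frozen."""
    F = phi(x)
    return np.linalg.solve(F.T @ F + ridge * np.eye(F.shape[1]), F.T @ y)

# Usage: two 'tasks' (think: two spread curves) share one feature map,
# and each is calibrated by fitting just its own head.
phi = make_shared_features(dim_in=1, dim_feat=50)
x = np.linspace(0, 1, 40).reshape(-1, 1)
theta_a = fit_task_head(phi, x, np.sin(3 * x).ravel())
theta_b = fit_task_head(phi, x, np.cos(3 * x).ravel())
```

The design point carries over: because only the low-dimensional head is re-estimated per task, a new curve can be calibrated from few observations without refitting the shared component.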

2.Collective dynamics, diversification and optimal portfolio construction for cryptocurrencies

Authors:Nick James, Max Menzies

Abstract: Since its conception, the cryptocurrency market has frequently been described as an immature market, characterized by significant swings in volatility and occasionally said to lack rhyme or reason. There has been great speculation as to what role it plays in a diversified portfolio. For instance, is cryptocurrency exposure an inflationary hedge, or a speculative investment that follows broad market sentiment with amplified beta? This paper investigates whether the cryptocurrency market has recently exhibited mathematical properties as nuanced as those of the much more mature equity market. We focus on collective dynamics and portfolio diversification in the cryptocurrency market, and examine whether, and to what extent, results previously established for the equity market carry over.
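A first-order measure of the collective dynamics and diversification behavior discussed here is the average pairwise correlation of returns and its effect on an equal-weight portfolio. This is a generic sketch of that measure, not the authors' methodology.

```python
import numpy as np

def mean_pairwise_correlation(returns):
    """Average off-diagonal correlation of asset returns (columns).
    Values near 1 mean assets move together collectively, so
    diversification benefits shrink."""
    C = np.corrcoef(returns, rowvar=False)
    N = C.shape[0]
    return (C.sum() - N) / (N * (N - 1))  # exclude the unit diagonal

def equal_weight_vol(returns):
    """Volatility of the 1/N portfolio; with high average correlation
    this stays close to single-asset volatility."""
    N = returns.shape[1]
    w = np.full(N, 1.0 / N)
    return float(np.sqrt(w @ np.cov(returns, rowvar=False) @ w))
```

Tracking these quantities on rolling windows, for cryptocurrencies versus equities, is one simple way to compare how "collectively" each market moves over time.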

1.Structured Multifractal Scaling of the Principal Cryptocurrencies: Examination using a Self-Explainable Machine Learning

Authors:Foued Saâdaoui

Abstract: Multifractal analysis studies the scaling regularity properties of financial returns and is used to analyze the long-term memory and predictability of financial markets. In this paper, we propose a novel structural detrended multifractal fluctuation analysis (S-MF-DFA) to investigate the efficiency of the main cryptocurrencies. The new methodology generalizes the conventional approach by allowing it to operate on the distinct fluctuation regimes determined beforehand using a change-point detection test. In this framework, the characterization of the various exogenous factors influencing the scaling behavior is performed on the basis of a single-factor model, thus creating a kind of self-explainable machine learning for price forecasting. The proposal is tested on daily data for three of the main cryptocurrencies in order to examine whether the digital market has experienced upheavals in recent years and whether these have in some way led to structured multifractal behavior. The sampled period ranges from April 2017 to December 2022. We detect common periods of local scaling across the three prices, with decreasing multifractality after 2018. Complementary tests on shuffled and surrogate data confirm that the distribution, linear correlation, and nonlinear structure also partly explain the structural multifractality. Finally, prediction experiments based on neural networks fed with multifractionally differentiated data demonstrate the value of this new self-explainable algorithm, giving decision-makers and investors the ability to produce more accurate and interpretable forecasts.
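The conventional MF-DFA that S-MF-DFA generalizes can be sketched as follows: build the profile, detrend it segment by segment at each scale, and read the generalized Hurst exponents h(q) off the scaling of the q-th order fluctuation function. This is the textbook algorithm, assumed for illustration; the paper's structural extension would run it separately on each change-point-detected regime.

```python
import numpy as np

def mfdfa_hq(x, scales, qs, order=1):
    """Plain MF-DFA: generalized Hurst exponents h(q) from the slope of
    log F_q(s) versus log s. h(q) varying with q signals multifractality;
    h(2) > 0.5 signals persistence."""
    Y = np.cumsum(x - np.mean(x))            # profile of the series
    logF = np.empty((len(qs), len(scales)))
    for j, s in enumerate(scales):
        n_seg = len(Y) // s
        F2 = np.empty(n_seg)
        t = np.arange(s)
        for v in range(n_seg):
            seg = Y[v * s:(v + 1) * s]
            trend = np.polyval(np.polyfit(t, seg, order), t)
            F2[v] = np.mean((seg - trend) ** 2)       # detrended variance
        for i, q in enumerate(qs):
            if q == 0:                                 # log-average limit
                logF[i, j] = 0.5 * np.mean(np.log(F2))
            else:                                      # F_q = (mean F2^{q/2})^{1/q}
                logF[i, j] = np.log(np.mean(F2 ** (q / 2))) / q
    logs = np.log(scales)
    return np.array([np.polyfit(logs, logF[i], 1)[0] for i in range(len(qs))])
```

For an efficient (uncorrelated) series, h(2) is close to 0.5 at all scales; the abstract's finding of decreasing multifractality after 2018 corresponds to the spread of h(q) across q narrowing within the detected regimes.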