arXiv daily

Methodology (stat.ME)

Thu, 25 May 2023

1. High-dimensional Response Growth Curve Modeling for Longitudinal Neuroimaging Analysis

Authors: Lu Wang, Xiang Lyu, Zhengwu Zhang, Lexin Li

Abstract: There is increasing interest in modeling high-dimensional longitudinal outcomes in applications such as developmental neuroimaging research. The growth curve model offers a useful tool to capture both the mean growth pattern across individuals and the dynamic changes of outcomes over time within each individual. However, when the number of outcomes is large, it becomes challenging and often infeasible to tackle the large covariance matrix of the random effects involved in the model. In this article, we propose a high-dimensional response growth curve model with three novel components: a low-rank factor model structure that substantially reduces the number of parameters in the large covariance matrix, a re-parameterization formulation coupled with a sparsity penalty that selects important fixed and random effect terms, and a computational trick that turns the inversion of a large matrix into the inversion of a stack of small matrices and thus considerably speeds up the computation. We develop an efficient expectation-maximization type estimation algorithm, and demonstrate the competitive performance of the proposed method through both simulations and a longitudinal study of brain structural connectivity in association with human immunodeficiency virus.
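
The abstract does not spell out the inversion trick, but a standard device with the same effect for a low-rank-plus-diagonal covariance is the Woodbury identity, which trades a $p \times p$ inversion for an $r \times r$ solve. A minimal NumPy sketch, illustrative only and not the authors' implementation:

```python
import numpy as np

def woodbury_inverse(d, L):
    """Invert Sigma = diag(d) + L @ L.T using only an r x r solve.

    Woodbury: Sigma^{-1} = D^{-1} - D^{-1} L (I_r + L^T D^{-1} L)^{-1} L^T D^{-1},
    so no p x p matrix is ever inverted (r = rank of the factor part).
    """
    p, r = L.shape
    d_inv = 1.0 / d                       # D^{-1} stored as a vector
    DiL = d_inv[:, None] * L              # D^{-1} L, shape (p, r)
    core = np.eye(r) + L.T @ DiL          # I_r + L^T D^{-1} L, shape (r, r)
    correction = DiL @ np.linalg.solve(core, DiL.T)
    return np.diag(d_inv) - correction

# Check against direct inversion on a small example
rng = np.random.default_rng(0)
p, r = 200, 3
d = rng.uniform(0.5, 2.0, size=p)
L = rng.normal(size=(p, r))
Sigma = np.diag(d) + L @ L.T
assert np.allclose(woodbury_inverse(d, L), np.linalg.inv(Sigma))
```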

2. Fractional Polynomials Models as Special Cases of Bayesian Generalized Nonlinear Models

Authors: Aliaksandr Hubin, Georg Heinze, Riccardo De Bin

Abstract: We propose a framework for fitting fractional polynomials models as special cases of Bayesian Generalized Nonlinear Models, applying an adapted version of the Genetically Modified Mode Jumping Markov Chain Monte Carlo algorithm. The universality of the Bayesian Generalized Nonlinear Models allows us to employ a Bayesian version of the fractional polynomials models in any supervised learning task, including regression, classification, and time-to-event data analysis. We show through a simulation study that our novel approach performs similarly to the classical frequentist fractional polynomials approach in terms of variable selection, identification of the true functional forms, and prediction ability, while providing, in contrast to its frequentist version, a coherent inference framework. Real data examples provide further evidence in favor of our approach and show its flexibility.
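
For readers unfamiliar with the class: a degree-two fractional polynomial (FP2) models a covariate effect for $x > 0$ as

$$\eta(x) = \beta_0 + \beta_1 x^{(p_1)} + \beta_2 x^{(p_2)}, \qquad p_1, p_2 \in \{-2, -1, -0.5, 0, 0.5, 1, 2, 3\},$$

with the convention $x^{(0)} = \log x$ and, for repeated powers $p_1 = p_2 = p$, the second term replaced by $\beta_2 x^{(p)} \log x$. In the Bayesian generalized nonlinear setting, the choice of powers and included terms becomes part of the model space that the mode jumping MCMC explores.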

3. Distributed model building and recursive integration for big spatial data modeling

Authors: Emily C. Hector, Ani Eloyan

Abstract: Motivated by the important need for computationally tractable statistical methods in high dimensional spatial settings, we develop a distributed and integrated framework for estimation and inference of Gaussian model parameters with ultra-high-dimensional likelihoods. We propose a paradigm shift from whole to local data perspectives that is rooted in distributed model building and integrated estimation and inference. The framework's backbone is a computationally and statistically efficient integration procedure that simultaneously incorporates dependence within and between spatial resolutions in a recursively partitioned spatial domain. Statistical and computational properties of our distributed approach are investigated theoretically and in simulations. The proposed approach is used to extract new insights on autism spectrum disorder from the Autism Brain Imaging Data Exchange.
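
Purely as orientation, and emphatically not the authors' recursive integration (which carries dependence within and between spatial resolutions, ignored by this toy), the bare divide-and-combine skeleton with independent blocks looks like:

```python
import numpy as np

def block_estimates(y, blocks):
    """Fit a toy Gaussian model (mean only) on each spatial block."""
    out = []
    for idx in blocks:
        yb = y[idx]
        out.append((yb.mean(), yb.var(ddof=1) / len(yb)))  # estimate, sampling variance
    return out

def combine(estimates):
    """Inverse-variance (meta-analytic) combination of block estimates."""
    est = np.array([e for e, _ in estimates])
    w = 1.0 / np.array([v for _, v in estimates])
    return (w * est).sum() / w.sum()

rng = np.random.default_rng(1)
y = rng.normal(loc=2.0, scale=1.0, size=10_000)   # stand-in for spatial data
blocks = np.array_split(np.arange(len(y)), 16)    # one level of partitioning
print(combine(block_estimates(y, blocks)))        # close to 2.0
```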

4. Gibbs sampler approach for objective Bayesian inference in elliptical multivariate random effects model

Authors: Olha Bodnar, Taras Bodnar

Abstract: In this paper, we present Bayesian inference procedures for the parameters of the multivariate random effects model derived under the assumption of an elliptically contoured distribution, when the Berger and Bernardo reference and the Jeffreys priors are assigned to the model parameters. We develop a new numerical algorithm for drawing samples from the posterior distribution, based on a hybrid Gibbs sampler. The new approach is compared to two Metropolis-Hastings algorithms previously derived in the literature via an extensive simulation study. The methods are applied to ten studies on the effectiveness of hypertension treatment for reducing blood pressure, where treatment effects on both systolic and diastolic blood pressure are investigated.
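
The elliptical random effects model itself is beyond a digest note, but the hybrid Gibbs pattern (exact conditional draws where available, Metropolis steps elsewhere) can be sketched on a toy Gaussian model. Everything below is illustrative and is not the authors' sampler:

```python
import numpy as np

rng = np.random.default_rng(2)
y = rng.normal(1.0, 2.0, size=100)
n, ybar = len(y), y.mean()

def log_post_sigma2(s2, mu):
    # Jeffreys-type prior p(sigma^2) propto 1/sigma^2, Gaussian likelihood
    return -(n / 2 + 1) * np.log(s2) - ((y - mu) ** 2).sum() / (2 * s2)

mu, s2 = 0.0, 1.0
draws = []
for _ in range(5000):
    # Gibbs step: mu | sigma^2 is conjugate normal under a flat prior
    mu = rng.normal(ybar, np.sqrt(s2 / n))
    # Metropolis step on log(sigma^2) -- the "hybrid" part
    prop = s2 * np.exp(0.3 * rng.normal())
    log_acc = (log_post_sigma2(prop, mu) - log_post_sigma2(s2, mu)
               + np.log(prop) - np.log(s2))   # Jacobian of the log-scale proposal
    if np.log(rng.uniform()) < log_acc:
        s2 = prop
    draws.append((mu, s2))
```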

5. Accommodating informative visit times for analysing irregular longitudinal data: a sensitivity analysis approach with balancing weights estimators

Authors: Sean Yiu, Li Su

Abstract: Irregular visit times in longitudinal studies can jeopardise marginal regression analyses of longitudinal data by introducing selection bias when the visit and outcome processes are associated. Inverse intensity weighting is a useful approach to addressing such selection bias when the visiting at random assumption is satisfied, i.e., visiting at time $t$ is independent of the longitudinal outcome at $t$, given the observed covariate and outcome histories up to $t$. However, the visiting at random assumption is unverifiable from the observed data, and informative visit times often arise in practice, e.g., when patients' visits to clinics are driven by ongoing disease activities. Therefore, it is necessary to perform sensitivity analyses for inverse intensity weighted estimators (IIWEs) when the visit times are likely informative. However, research on such sensitivity analyses is limited in the literature. In this paper, we propose a new sensitivity analysis approach to accommodating informative visit times in marginal regression analysis of irregular longitudinal data. Our sensitivity analysis is anchored at the visiting at random assumption and can be easily applied to existing IIWEs using standard software such as the coxph function of the R package survival. Moreover, we develop novel balancing weights estimators of regression coefficients by exactly balancing the covariate distributions that drive the visit and outcome processes to remove the selection bias after weighting. Simulations show that, under both correct and incorrect model specifications, our balancing weights estimators perform better than the existing IIWEs using weights estimated by maximum partial likelihood. We apply our methods to data from a clinic-based cohort study of psoriatic arthritis and provide an R Markdown tutorial to demonstrate their implementation.
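
In the standard IIW setup that this sensitivity analysis is anchored at, the visit intensity follows a proportional hazards model, $\lambda_i(t \mid \bar{H}_i(t)) = \lambda_0(t)\exp\{\gamma^\top Z_i(t)\}$, with $\gamma$ estimated by maximum partial likelihood (e.g., via coxph), and the marginal regression coefficients solve an inverse-intensity-weighted estimating equation of the form

$$\sum_{i=1}^{n} \int_0^\tau \frac{X_i(t)\,\{Y_i(t) - X_i(t)^\top \beta\}}{\exp\{\hat{\gamma}^\top Z_i(t)\}}\, \mathrm{d}N_i(t) = 0,$$

where $N_i(t)$ counts subject $i$'s visits up to time $t$. This display describes the generic IIW baseline, not the paper's balancing weights construction.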

6. The GNAR-edge model: A network autoregressive model for networks with time-varying edge weights

Authors: Anastasia Mantziou, Mihai Cucuringu, Victor Meirinhos, Gesine Reinert

Abstract: In economic and financial applications, there is often the need for analysing multivariate time series comprising time series for a range of quantities. In some applications, such complex systems can be associated with an underlying network describing pairwise relationships among the quantities. Accounting for this underlying network structure in the analysis of multivariate time series is required for assessing estimation error and can be particularly informative for forecasting. Our work is motivated by a dataset consisting of time series of industry-to-industry transactions. In this example, pairwise relationships between Standard Industrial Classification (SIC) codes can be represented using a network, with SIC codes as nodes, while the observed time series for each pair of SIC codes can be regarded as time-varying weights on the edges. Inspired by Knight et al. (2019), we introduce the GNAR-edge model, which allows modelling of multiple time series utilising the network structure, assuming that each edge weight depends not only on its past values but also on past values of its neighbouring edges, for a range of neighbourhood stages. The method is validated through simulations. Results from the implementation of the GNAR-edge model on the real industry-to-industry data show good fitting and predictive performance of the model. The predictive performance is improved when sparsifying the network using a lead-lag analysis and thresholding edges according to a lead-lag score.
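
In the spirit of the original GNAR recursion of Knight et al., with nodes replaced by edges, the GNAR-edge model for the weight $X_{e,t}$ of edge $e$ at time $t$ plausibly takes the form

$$X_{e,t} = \sum_{j=1}^{p} \Big( \alpha_{e,j}\, X_{e,t-j} + \sum_{r=1}^{s_j} \beta_{j,r} \sum_{e' \in \mathcal{N}^{(r)}(e)} \omega_{e,e'}\, X_{e',t-j} \Big) + u_{e,t},$$

where $\mathcal{N}^{(r)}(e)$ is the stage-$r$ neighbourhood of edge $e$ and the $\omega_{e,e'}$ are connection weights (e.g., $1/|\mathcal{N}^{(r)}(e)|$). This display is a reconstruction from the abstract, not the paper's exact parameterization.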

7. Robust Functional Data Analysis for Discretely Observed Data

Authors: Lingxuan Shao, Fang Yao

Abstract: This paper examines robust functional data analysis for discretely observed data, where the underlying process encompasses various distributions, such as heavy tail, skewness, or contaminations. We propose a unified robust concept of functional mean, covariance, and principal component analysis, while existing methods and definitions often differ from one another or only address fully observed functions (the "ideal" case). Specifically, the robust functional mean can deviate from its non-robust counterpart and is estimated using robust local linear regression. Moreover, we define a new robust functional covariance that shares useful properties with the classic version. Importantly, this covariance yields the robust version of the Karhunen–Loève decomposition and corresponding principal components beneficial for dimension reduction. Theoretical results for the robust functional mean, covariance, and eigenfunction estimates, based on pooling discretely observed data (ranging from sparse to dense), are established and aligned with their non-robust counterparts. The newly proposed perturbation bounds for estimated eigenfunctions, with indexes allowed to grow with sample size, lay the foundation for further modeling based on robust functional principal component analysis.
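
The abstract's description of the robust mean is consistent with an M-type local linear smoother: for a bounded-influence loss $\rho$ (e.g., Huber's, up to a preliminary scale estimate) and kernel $K$ with bandwidth $h$,

$$\hat{\mu}(t) = \hat{a}, \qquad (\hat{a}, \hat{b}) = \arg\min_{a,b} \sum_{i=1}^{n} \sum_{j=1}^{m_i} \rho\big(Y_{ij} - a - b(T_{ij} - t)\big)\, K_h(T_{ij} - t),$$

pooling all discrete observations $(T_{ij}, Y_{ij})$ across subjects; the precise loss and scaling are as specified in the paper.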

8. All about sample-size calculations for A/B testing: Novel extensions and practical guide

Authors: Jing Zhou, Jiannan Lu, Anas Shallah

Abstract: While there exists a large literature on the general challenges of and best practices for trustworthy online A/B testing, there are limited studies on sample size estimation, which plays a crucial role in trustworthy and efficient A/B testing by ensuring that the resulting inference has sufficient power and type I error control. For example, when the sample size is underestimated, the statistical inference, even with the correct analysis methods, will not be able to detect the true significant improvement, leading to misinformed and costly decisions. This paper addresses this fundamental gap by developing new sample size calculation methods for correlated data, as well as for absolute vs. relative treatment effects, both ubiquitous in online experiments. Additionally, we address the practical question of the minimal observed difference that will be statistically significant and how it relates to the average treatment effect and sample size calculation. All proposed methods are accompanied by mathematical proofs, illustrative examples, and simulations. We end by sharing best practices on various practical topics in sample size calculation and experimental design.
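
For orientation, the classical two-sample calculation for independent data (the baseline the paper extends to correlated data and relative effects) and the minimal significant observed difference it implies fit in a few lines; the numbers below are purely illustrative:

```python
from scipy.stats import norm

def n_per_arm(delta, sigma, alpha=0.05, power=0.8):
    """Classic two-sample z-test sample size for an absolute effect delta."""
    z_a, z_b = norm.ppf(1 - alpha / 2), norm.ppf(power)
    return 2 * (sigma * (z_a + z_b) / delta) ** 2

def min_significant_diff(n, sigma, alpha=0.05):
    """Smallest observed difference reaching significance with n per arm."""
    return norm.ppf(1 - alpha / 2) * sigma * (2 / n) ** 0.5

n = n_per_arm(delta=0.1, sigma=1.0)                       # ~1570 per arm
print(round(n), round(min_significant_diff(n, sigma=1.0), 4))   # 1570 0.07
```

Note that the minimal significant observed difference (about 0.07 here) is smaller than the design effect of 0.1, since the study is powered above 50%.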

9. Flexible Variable Selection for Clustering and Classification

Authors: Mackenzie R. Neal, Paul D. McNicholas

Abstract: The importance of variable selection for clustering has been recognized for some time, and mixture models are well-established as a statistical approach to clustering. Yet, the literature on variable selection in model-based clustering remains largely rooted in the assumption of Gaussian clusters. Unsurprisingly, variable selection algorithms based on this assumption tend to break down in the presence of cluster skewness. A novel variable selection algorithm is presented that utilizes the Manly transformation mixture model to select variables based on their ability to separate clusters, and is effective even when clusters depart from the Gaussian assumption. The proposed approach, which is implemented within the R package vscc, is compared to existing variable selection methods -- including one that can account for cluster skewness -- using simulated and real datasets.
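
For reference, the Manly transformation applied componentwise in this mixture family is

$$\psi_\lambda(x) = \frac{e^{\lambda x} - 1}{\lambda} \ \ (\lambda \neq 0), \qquad \psi_0(x) = x,$$

so that $\lambda$ absorbs cluster skewness and a Gaussian mixture can be fit on the transformed scale, with $\lambda = 0$ recovering the Gaussian case.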

10. Interval estimation in three-class ROC analysis: a fairly general approach based on the empirical likelihood

Authors: Duc-Khanh To, Gianfranco Adimari, Monica Chiogna

Abstract: The empirical likelihood is a powerful nonparametric tool that emulates its parametric counterpart -- the parametric likelihood -- preserving many of its large-sample properties. This article tackles the problem of assessing the discriminatory power of three-class diagnostic tests from an empirical likelihood perspective. In particular, we concentrate on interval estimation in a three-class ROC analysis, where a variety of inferential tasks could be of interest. We present novel theoretical results and tailored techniques designed to efficiently solve some of these tasks. Extensive simulation experiments are provided in a supporting role, with our novel proposals compared to existing competitors where possible. It emerges that our new proposals are extremely flexible, able to compete with existing methods, and the most suited to accommodating flexible distributions for target populations. We illustrate the application of the novel proposals with a real data example. The article ends with a discussion and a presentation of some directions for future research.
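
As a reminder of the core tool: for a parameter $\theta$ defined through an estimating function $g$, the empirical likelihood ratio is

$$R(\theta) = \max\Big\{ \prod_{i=1}^{n} n p_i \;:\; p_i \ge 0,\ \sum_{i=1}^{n} p_i = 1,\ \sum_{i=1}^{n} p_i\, g(X_i; \theta) = 0 \Big\},$$

and Owen's theorem gives $-2\log R(\theta_0) \xrightarrow{d} \chi^2_q$ at the true $\theta_0$, so an asymptotic $1-\alpha$ confidence region is $\{\theta : -2\log R(\theta) \le \chi^2_{q,1-\alpha}\}$. The paper builds its three-class ROC intervals on machinery of this kind.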

11. Sequential Bayesian experimental design for calibration of expensive simulation models

Authors: Özge Sürer, Matthew Plumlee, Stefan M. Wild

Abstract: Simulation models of critical systems often have parameters that need to be calibrated using observed data. For expensive simulation models, calibration is done using an emulator of the simulation model built on simulation output at different parameter settings. Using intelligent and adaptive selection of parameters to build the emulator can drastically improve the efficiency of the calibration process. The article proposes a sequential framework with a novel criterion for parameter selection that targets learning the posterior density of the parameters. The emergent behavior of this criterion is that exploration happens by selecting parameters in uncertain posterior regions, while exploitation happens simultaneously by selecting parameters in regions of high posterior density. The advantages of the proposed method are illustrated using several simulation experiments and a nuclear physics reaction model.
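
A stripped-down version of the loop, with a deliberately naive density-times-uncertainty acquisition in place of the paper's criterion and a toy one-parameter simulator, might look like:

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor

def simulator(theta):                       # expensive model (toy stand-in)
    return np.sin(3 * theta) + theta

y_obs, noise_sd = 0.9, 0.1                  # observed data and its noise level

thetas = list(np.linspace(0, 1, 4))         # small initial design
outputs = [simulator(t) for t in thetas]

for step in range(20):
    gp = GaussianProcessRegressor(alpha=1e-6, normalize_y=True).fit(
        np.array(thetas)[:, None], np.array(outputs))
    cand = np.linspace(0, 1, 201)
    m, s = gp.predict(cand[:, None], return_std=True)
    # Unnormalized posterior density under the emulator, with emulator
    # uncertainty folded into the likelihood variance
    post = norm.pdf(y_obs, loc=m, scale=np.sqrt(noise_sd**2 + s**2))
    score = post * s                        # exploit high density, explore high s
    t_next = cand[np.argmax(score)]
    thetas.append(t_next)
    outputs.append(simulator(t_next))
```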

12. Forecasting intraday financial time series with sieve bootstrapping and dynamic updating

Authors: Han Lin Shang, Kaiying Ji

Abstract: Intraday financial data often take the form of a collection of curves that can be observed sequentially over time, such as intraday stock price curves. These curves can be viewed as a time series of functions observed on equally spaced and dense grids. Due to the curse of dimensionality, high-dimensional data pose challenges from a statistical perspective; however, they also provide opportunities to analyze a rich source of information, so that dynamic changes within short time intervals can be better understood. We consider the sieve bootstrap method of Paparoditis and Shang (2022) to construct one-day-ahead point and interval forecasts in a model-free way. As we sequentially observe new data, we also implement two dynamic updating methods to update point and interval forecasts to achieve improved accuracy. The forecasting methods are validated through an empirical study of 5-minute cumulative intraday returns of the S&P/ASX All Ordinaries Index.
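
A caricature of the pipeline under strong simplifying assumptions (principal-component scores plus a first-order autoregression, rather than the full sieve of Paparoditis and Shang):

```python
import numpy as np

def sieve_bootstrap_forecast(curves, n_comp=3, B=200, rng=None):
    """One-day-ahead bootstrap forecasts for a time series of daily curves.

    curves: (n_days, n_grid) array of intraday curves on a common grid.
    Idea: project onto principal components, fit a VAR(1) to the scores,
    bootstrap its residuals, and map simulated scores back to curves.
    """
    rng = rng or np.random.default_rng()
    mean = curves.mean(axis=0)
    X = curves - mean
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    scores = U[:, :n_comp] * s[:n_comp]           # (n_days, n_comp)
    basis = Vt[:n_comp]                            # (n_comp, n_grid)

    # Least-squares VAR(1) on the scores
    Y, Z = scores[1:], scores[:-1]
    A = np.linalg.lstsq(Z, Y, rcond=None)[0]       # (n_comp, n_comp)
    resid = Y - Z @ A

    last = scores[-1]
    sims = np.empty((B, curves.shape[1]))
    for b in range(B):
        e = resid[rng.integers(len(resid))]        # i.i.d. residual resample
        sims[b] = mean + (last @ A + e) @ basis
    point = sims.mean(axis=0)
    lo, hi = np.percentile(sims, [2.5, 97.5], axis=0)
    return point, lo, hi

rng = np.random.default_rng(3)
toy = np.cumsum(rng.normal(size=(250, 78)), axis=1)   # 250 days, 78 5-min points
point, lo, hi = sieve_bootstrap_forecast(toy, rng=rng)
```

The dynamic updating step of the paper would then revise these forecasts intraday as partial observations of the new curve arrive, which this sketch does not attempt.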