Séminaire Optimisation Mathématique Modèle Aléatoire et Statistique

(proba-stat) Using a Supervised Principal Components Analysis for Variable Selection in High-Dimensional Datasets Reduces False Discovery Rates

Benoît Liquet

(Université de Pau)

Room 2, IMB

5 September 2025 at 11:00

High-dimensional datasets, where the number of variables p is much larger than the number of samples n, are ubiquitous and often render standard classification techniques unreliable due to overfitting. An important research problem is feature selection, which ranks candidate variables by their relevance to the outcome variable and retains those that satisfy a chosen criterion. In this presentation, we propose a computationally efficient variable selection method based on principal component analysis, tailored to binary classification problems and case-control studies. The method is easy to apply and well suited to the analysis of high-dimensional datasets. We demonstrate its superior performance through extensive simulations. A semi-real gene expression dataset, a challenging childhood acute lymphoblastic leukemia gene expression study, and a genome-wide association study (GWAS) seeking single-nucleotide polymorphisms (SNPs) associated with rice grain length further demonstrate its usefulness in genomic applications. We expect our method to accurately identify important features and to reduce the False Discovery Rate (FDR), as it accounts for the correlation between variables and de-noises the data in the training phase; the de-noising step also makes it robust to mild outliers in the training data. Our method is almost as fast as univariate filters, yet it still permits valid statistical inference. The ability to make such inferences sets it apart from most current multivariate statistical tools designed for today's high-dimensional data.
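
As a rough illustration only, and not the speaker's actual algorithm, the Python sketch below shows one generic supervised-PCA-style filter for a binary outcome: variables are first screened with a cheap univariate statistic, a PCA of the screened block is used to de-noise the data and capture correlation between variables, and the screened variables are then re-ranked by their loadings on the leading components. The data simulation, thresholds (k_screen, n_components, k_keep) and the variance-weighted scoring rule are all assumptions made for illustration.

```python
# Illustrative sketch of a generic supervised-PCA-style variable screen for a
# binary outcome (an assumption for illustration, not the method of the talk).
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
n, p = 100, 2000                              # p >> n, as in the abstract
k_screen, n_components, k_keep = 200, 5, 50   # illustrative thresholds

# Simulated data with a binary outcome; the first 20 variables carry signal.
X = rng.standard_normal((n, p))
y = rng.integers(0, 2, size=n)
X[y == 1, :20] += 0.8

# Step 1: cheap univariate screen (absolute between-class mean difference).
uni_score = np.abs(X[y == 1].mean(axis=0) - X[y == 0].mean(axis=0))
screened = np.argsort(uni_score)[-k_screen:]

# Step 2: PCA on the screened block to de-noise and capture correlation.
pca = PCA(n_components=n_components).fit(X[:, screened])

# Step 3: re-rank screened variables by variance-weighted loadings on the PCs.
loadings = np.abs(pca.components_)            # shape (n_components, k_screen)
pca_score = (pca.explained_variance_ratio_[:, None] * loadings).sum(axis=0)
selected = screened[np.argsort(pca_score)[-k_keep:]]

print("number of selected variables:", selected.size)
print("true signal variables recovered:", np.sum(selected < 20))
```

The variance-weighted loading score is only one possible ranking; the method presented in the talk may differ, in particular in how the outcome supervises the PCA step and how the FDR is controlled.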