Preprints:
G. Blanchard, A. Carpentier, and O. Zadorozhnyi. Moment inequalities for sums of weakly dependent random fields. arXiv preprint arXiv:2306.16403, 2023. arXiv
J. Chhor and A. Carpentier. Goodness-of-Fit Testing for Hölder-Continuous Densities: Sharp Local Minimax Rates. arXiv preprint arXiv:2109.04346, 2021. arXiv
A. Carpentier, C. Vernade, and Y. Abbasi-Yadkori. The Elliptical Potential Lemma Revisited. arXiv preprint arXiv:2010.10182, 2020. arXiv
O. Zadorozhnyi, G. Blanchard, and A. Carpentier. Restless dependent bandits with fading memory. arXiv preprint arXiv:1906.10454, 2019. arXiv
M. Gutzeit. Minimax L_2-Separation Rate in Testing the Sobolev-Type Regularity of a Function. arXiv preprint arXiv:1901.00880, 2019. arXiv
2023:
E. Pilliat, A. Carpentier, and N. Verzelen. Optimal permutation estimation in crowd-sourcing problems. arXiv preprint arXiv:2211.04092, to appear in the Annals of Statistics, 2023. arXiv
T. Kocak and A. Carpentier. Online Learning with Feedback Graphs: The True Shape of Regret. arXiv preprint arXiv:2306.02971, to appear in ICML, 2023. arXiv
E. M. Saad, N. Verzelen, and A. Carpentier. Active Ranking of Experts Based on their Performances in Many Tasks. arXiv preprint arXiv:2306.02628, to appear in ICML, 2023. arXiv
E. Pilliat, A. Carpentier, and N. Verzelen. Optimal multiple change-point detection for high-dimensional data. Electronic Journal of Statistics 17(1), pp. 1240-1315, 2023. arXiv
2022:
J. Lam-Weil, A. Carpentier, and B. Sriperumbudur. Local minimax rates for closeness testing of discrete distributions. Bernoulli 28(2), pp. 1179-1197, 2022. Publisher website arXiv
A. Carpentier, O. Collier, L. Comminges, A. B. Tsybakov, and Y. Wang. Estimation of the L2-norm and testing in sparse linear regression with unknown variance. Bernoulli 28(4), pp. 2744–2787, 2022. arXiv
J. Chhor and A. Carpentier. Sharp local minimax rates for goodness-of-fit testing in multivariate binomial and Poisson families and in multinomials. Mathematical Statistics and Learning 5(1–2), pp. 1–54, 2022. arXiv
S. Gaucher, A. Carpentier, and C. Giraud. The price of unfairness in linear bandits with biased feedback. Advances in Neural Information Processing Systems 35, pp. 18363–18376, 2022. arXiv
A. Al Marjani, T. Kocak, and A. Garivier. On the complexity of all ε-best arms identification. Joint European Conference on Machine Learning and Knowledge Discovery in Databases, Springer, pp. 317–332, 2022.
A. Barrier, A. Garivier, and T. Kocák. A non-asymptotic approach to best-arm identification for Gaussian bandits. International Conference on Artificial Intelligence and Statistics, PMLR, pp. 10078–10109, 2022.
J. Lam-Weil, B. Laurent, and J.-M. Loubes. Minimax optimal goodness-of-fit testing for densities and multinomials under a local differential privacy constraint. Bernoulli 28(1), pp. 579–600, 2022.
2021:
A. Carpentier, S. Delattre, E. Roquain, and N. Verzelen. Estimating minimum effect with outlier selection. The Annals of Statistics 49(1), pp. 272–294, 2021. arXiv
R. de Heide, J. Cheshire, P. Ménard, and A. Carpentier. Bandits with many optimal arms. Advances in Neural Information Processing Systems 34, pp. 22457–22469, 2021. arXiv
J. Cheshire, P. Ménard, and A. Carpentier. Problem Dependent View on Structured Thresholding Bandit Problems. Proceedings of the 38th International Conference on Machine Learning, PMLR 139:1846-1854, 2021. arXiv
A. Carpentier and N. Verzelen. Optimal Sparsity Testing in Linear Regression Model. Bernoulli 27(2), pp. 727-750, 2021. arXiv
A. Carpentier, C. Duval, and E. Mariucci. Total variation distance for discretely observed Lévy processes: a Gaussian approximation of the small jumps. Ann. Inst. H. Poincaré Probab. Statist. 57(2), pp. 901-939, 2021. arXiv
T. Kocak and A. Garivier. Epsilon Best Arm Identification in Spectral Bandits. IJCAI, pp. 2636–2642, 2021.
2020:
A. Manegueu, C. Vernade, A. Carpentier, and M. Valko. Stochastic bandits with arm-dependent delays. International Conference on Machine Learning, PMLR, pp. 3348–3356, 2020. arXiv
C. Vernade, A. Carpentier, T. Lattimore, G. Zappella, B. Ermis, and M. Brückner. Linear bandits with Stochastic Delayed Feedback. International Conference on Machine Learning, PMLR, pp. 9712–9721, 2020. arXiv
J. Cheshire, P. Ménard, and A. Carpentier. The Influence of Shape Constraints on the Thresholding Bandit Problem. Conference on Learning Theory, PMLR, pp. 1228–1275, 2020. Publisher website
D. Ghoshdastidar, M. Gutzeit, A. Carpentier, and U. von Luxburg. Two-sample Hypothesis Testing for Inhomogeneous Random Graphs. The Annals of Statistics 48, pp. 2208–2229, 2020. arXiv
2019:
A. Carpentier and N. Verzelen. Adaptive estimation of the sparsity in the Gaussian vector model. The Annals of Statistics 47(1), pp. 93-126, 2019. arXiv Publisher website
J. Achdou, J. Lam, A. Carpentier, and G. Blanchard. A minimax near-optimal algorithm for adaptive rejection sampling. In Proceedings of Machine Learning Research (ALT), PMLR 98:94-126, 2019. arXiv Publisher website
J. Seznec, A. Locatelli, A. Carpentier, A. Lazaric, and M. Valko. Rotting bandits are no harder than stochastic ones. In Proceedings of Machine Learning Research (AISTATS), PMLR 89:2564-2572, 2019. arXiv Publisher website
A. Locatelli, A. Carpentier and M. Valko. Active multiple matrix completion with adaptive confidence sets. In Proceedings of Machine Learning Research (AISTATS), PMLR 89:1783-1791, 2019. Publisher website
A. Carpentier, J. Eisert, D. Gross, and R. Nickl. Uncertainty Quantification for Matrix Compressed Sensing and Quantum Tomography Problems. To appear in HDP, 2019. arXiv
A. Carpentier, O. Collier, L. Comminges, A. Tsybakov, Y. Wang. Minimax rate of testing in sparse linear regression. To appear in Automation and Remote Control, 2019. arXiv
2018:
A. Carpentier and A. K. H. Kim. An iterative hard thresholding estimator for low rank matrix recovery with explicit limiting distribution. Statistica Sinica, 28:1371-1393, 2018. arXiv Publisher website
G. Blanchard, A. Carpentier, and M. Gutzeit. Minimax Euclidean Separation Rates for Testing Convex Hypotheses in R^d. Electronic Journal of Statistics, 12(2):3713-3735, 2018. arXiv
A. Locatelli and A. Carpentier. Adaptivity to Smoothness in X-armed bandits. In Proceedings of Machine Learning Research (COLT), PMLR 75:1463-1492, 2018. Publisher website
A. Carpentier. One and two sided composite-composite tests in Gaussian mixture models. In Mathematisches Forschungsinstitut Oberwolfach Report No. 12/2018, pp. 21-22, 2018. Publisher website
A. Locatelli, A. Carpentier, and S. Kpotufe. An Adaptive Strategy for Active Learning with Smooth Decision Boundary. In the Proceedings of Machine Learning Research (ALT), 83:547-571, 2018. Publisher website arXiv
2017:
A. Locatelli, A. Carpentier, and S. Kpotufe. Adaptivity to Noise Parameters in Nonparametric Active Learning. In the Proceedings of Machine Learning Research (COLT), 65:954-977, 2017. arXiv Publisher website
D. Ghoshdastidar, U. von Luxburg, M. Gutzeit, and A. Carpentier. Two-Sample Tests for Large Random Graphs using Network Statistics. In the Proceedings of Machine Learning Research (COLT), 65:1383-1416, 2017. Publisher website arXiv
A. Carpentier, O. Klopp, M. Loeffler, and R. Nickl. Adaptive Confidence Sets for Matrix Completion. In the Bernoulli Journal, 24(4A):2429-2460, 2017. arXiv Publisher website
2016:
A. Carpentier and A. Locatelli. Tight (Lower) Bounds for the Fixed Budget Best Arm Identification Bandit Problem. In the Journal of Machine Learning Research W&CP (COLT), 49:115, 2016. Publisher website arXiv
A. Locatelli, M. Gutzeit, and A. Carpentier. An optimal algorithm for the Thresholding Bandit Problem. In the Journal of Machine Learning Research W&CP (ICML), 48:1690-1698, 2016. Publisher website
A. Erraqabi, M. Valko, A. Carpentier, and O. Maillard. Pliable Rejection Sampling. In the Journal of Machine Learning Research W&CP (ICML), 48:2121-2129, 2016. Publisher website
A. Carpentier and M. Valko. Revealing Graph Bandits for Maximizing Local Influence. In the Journal of Machine Learning Research W&CP (AISTATS), 51:10-18, 2016. Publisher website arXiv
A. Carpentier, O. Klopp, and M. Loeffler. Constructing confidence sets for the matrix completion problem. In the proceedings of Advances in Nonparametric Statistics - 3rd ISNPS, Avignon, France, 2016. arXiv
A. Carpentier and T. Schlueter. Learning Relationships between data obtained independently. In the Journal of Machine Learning Research W&CP (AISTATS), 51:658-666, 2016. Publisher website arXiv
2015:
A. Carpentier and R. Nickl. On signal detection and confidence sets for low rank inference problems. In the Electronic Journal of Statistics, 9(2):2675-2688, 2015. Publisher website arXiv
A. Carpentier, R. Munos, and A. Antos. Adaptive strategy for stratified Monte Carlo sampling. In the Journal of Machine Learning Research, 16(Nov):2231-2271, 2015. Publisher website
A. Carpentier and M. Valko. Simple regret for infinitely many armed bandits. In Journal of Machine Learning Research W&CP (ICML), Volume 37, 2015. Publisher website arXiv
A. Carpentier. Testing the regularity of a smooth signal. In the Bernoulli Journal, 21(1):465-488, 2015. Publisher websitearXiv
A. Carpentier. Uncertainty quantification for high dimensional linear problems. In Mathematisches Forschungsinstitut Oberwolfach Report No. 26/2015, pp. 1474-1476, 2015. Publisher website
A. Carpentier. Implementable confidence sets in high dimensional regression. In Journal of Machine Learning Research W&CP (AISTATS), 38:120-128, 2015. Publisher website arXiv
2014:
A. Carpentier and A. K. H. Kim. Honest and adaptive confidence interval for the tail coefficient in the Pareto model. In the Electronic Journal of Statistics, 8(2), pp. 2066-2110, 2014. Publisher website arXiv
A. Carpentier and M. Valko. Extreme Bandits. In Advances in Neural Information Processing Systems (NIPS) pp. 1089-1097, 2014. Publisher website
A. Carpentier and A. K. H. Kim. Adaptive and minimax optimal estimation of the tail coefficient. In Statistica Sinica, 25(3):1133-1144, 2014. Publisher website arXiv
A. Carpentier and R. Munos. Minimax Number of Strata for Online Stratified Sampling : the Case of Noisy Samples. Theoretical Computer Science, 558, 77-106, 2014. Publisher website
2013:
A. Carpentier. Honest and adaptive confidence sets in Lp. In the Electronic Journal of Statistics, volume 7, pp. 2875-2923, 2013. Publisher website
E. M. Thomas, M. Clerc, A. Carpentier, E. Daucé, D. Devlaminck, and R. Munos. Optimizing P300-speller sequences by RIP-ping groups apart. In 6th International IEEE/EMBS Conference on Neural Engineering (NER), pp. 1062-1065, IEEE, 2013. HAL Publisher website
A. Carpentier and R. Munos. Toward Optimal Stratification for Stratified Monte-Carlo Integration. In Journal of Machine Learning Research W&CP (ICML), vol. 28 (2), pp. 28-36, 2013. Publisher websitearXiv
M. Valko, R. Munos and A. Carpentier. Stochastic Simultaneous Optimistic Optimization. In Journal of Machine Learning Research W&CP (ICML), vol. 28 (2), pp. 19-27, 2013. Publisher website
J. Fruitet, A. Carpentier, R. Munos and M. Clerc. Automatic motor task selection via a bandit algorithm for a brain-controlled button. In Journal of Neural Engineering 10(1), 016012, 2013. Publisher website
2012:
A. Carpentier and R. Munos. Adaptive Stratified Sampling for Monte-Carlo integration of Differentiable functions. In Advances in Neural Information Processing Systems (NIPS), pp. 251-259, 2012. Publisher website arXiv
O. A. Maillard and A. Carpentier. Online allocation and homogeneous partitioning for piecewise constant mean-approximation. In Advances in Neural Information Processing Systems (NIPS), 2012. Publisher website HAL
J. Fruitet, A. Carpentier, R. Munos and M. Clerc. Bandit Algorithms boost motor-task selection for Brain Computer Interfaces. In Advances in Neural Information Processing Systems (NIPS), pp. 449-457, 2012. Publisher website
A. Carpentier and R. Munos. Minimax Number of Strata for Online Stratified Sampling given Noisy Samples. In Algorithmic Learning Theory (ALT), pp. 229-244, 2012. Publisher website
A. Carpentier and R. Munos. Bandit Theory meets Compressed Sensing for high dimensional Stochastic Linear Bandit. In Journal of Machine Learning Research W&CP (AISTATS), 22:190-198, 2012. Publisher website arXiv
A. Carpentier. De l'échantillonnage optimal en grande et petite dimension (On optimal sampling in high and low dimensions). PhD thesis, 2012. AFIA accessit ex aequo 2013 (French machine learning/artificial intelligence second prize). .pdf
2011:
A. Carpentier and R. Munos. Finite time analysis of stratified sampling for Monte Carlo. In Advances in Neural Information Processing Systems (NIPS), pp. 1278-1286, 2011. Publisher website
A. Carpentier, O. A. Maillard, and R. Munos. Sparse recovery with Brownian sensing. In Advances in Neural Information Processing Systems (NIPS), pp. 1782-1790, 2011. Publisher website
A. Carpentier, A. Lazaric, M. Ghavamzadeh, R. Munos, and P. Auer. Upper Confidence Bounds Algorithms for Active Learning in Multi-Armed Bandits. In Algorithmic Learning Theory (ALT), pp. 189-203, 2011. Publisher website HAL
2010:
G. Guillot and A. Carpentier-Skandalis. On the informativeness of dominant and co-dominant genetic markers for Bayesian supervised clustering. The Open Statistics and Probability Journal, 2010. Publisher website