Research Overview

I study learning-to-defer: when a model should predict on its own and when it should defer to experts, accounting for uncertainty, cost, and risk. My work combines statistical learning theory with practical methods for prediction and decision-making in multi-expert and resource-constrained systems.
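As a toy illustration of the core trade-off (not any specific method from the papers below), a deferral rule can compare the model's own expected risk against the cost of consulting an expert; the function and threshold here are illustrative assumptions:

```python
def defer_decision(model_probs, expert_cost):
    """Toy cost-sensitive deferral rule: predict when the model's
    expected misclassification risk (1 - max class probability) is
    below the cost of querying an expert; otherwise defer."""
    model_risk = 1.0 - max(model_probs)
    return "defer" if model_risk > expert_cost else "predict"

# A confident model keeps the decision; an uncertain one defers.
print(defer_decision([0.95, 0.03, 0.02], expert_cost=0.2))  # predict
print(defer_decision([0.40, 0.35, 0.25], expert_cost=0.2))  # defer
```

In practice the deferral rule is learned jointly with the predictor rather than thresholded post hoc, which is what the learning-to-defer literature formalizes.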

Recent work has appeared at ICML, ICLR, and AISTATS, with a focus on theory, robustness, and multi-expert decision systems.

Publications

Selected publications at ICML 2025, ICLR 2026, and AISTATS 2026 are listed below, along with recent preprints.

2026

  1. Learning-to-Defer with Expert-Conditioned Advice. Yannis Montreuil, Leina Montreuil, Axel Carlier, Lai Xing Ng, Wei Tsang Ooi. arXiv:2603.14324.
  2. Learning to Defer in Non-Stationary Time Series via Switching State-Space Models. Yannis Montreuil*, Letian Yu*, Axel Carlier, Lai Xing Ng, Wei Tsang Ooi. arXiv:2601.22538.
  3. Why Ask One When You Can Ask k? Learning-to-Defer to the Top-k Experts. Yannis Montreuil, Axel Carlier, Lai Xing Ng, Wei Tsang Ooi. ICLR 2026. arXiv:2504.12988.
  4. Online Learning-to-Defer with Varying Experts. Yannis Montreuil*, Duy Dang Hoang*, Maxime Meyer*, Axel Carlier, Lai Xing Ng, Wei Tsang Ooi. AISTATS 2026.
  5. Adversarial Robustness in One-Stage Learning-to-Defer. Yannis Montreuil*, Letian Yu*, Axel Carlier, Lai Xing Ng, Wei Tsang Ooi. AISTATS 2026. arXiv:2510.10988.
  6. Optimal Query Allocation in Extractive QA with LLMs: A Learning-to-Defer Framework with Theoretical Guarantees. Yannis Montreuil*, Yeo Shu Heng*, Axel Carlier, Lai Xing Ng, Wei Tsang Ooi. AISTATS 2026. arXiv:2410.15761.
  7. Towards Robust Human–AI Decision-Making via Learning-to-Defer. Yannis Montreuil. AAAI-26 Doctoral Consortium.

2025

  1. Adversarial Robustness in Two-Stage Learning-to-Defer: Algorithms and Guarantees. Yannis Montreuil, Axel Carlier, Lai Xing Ng, Wei Tsang Ooi. ICML 2025. arXiv:2502.01027.
  2. A Two-Stage Learning-to-Defer Approach for Multi-Task Learning. Yannis Montreuil*, Yeo Shu Heng*, Axel Carlier, Lai Xing Ng, Wei Tsang Ooi. ICML 2025. arXiv:2410.15729.

* indicates equal contribution. For abstracts and more details, see Detailed Publications.