Guillaume Dalle
Postdoc at EPFL

I am a postdoctoral researcher at EPFL, working jointly with two labs, IdePHICS and INDY. From 2019 to 2022, I was a PhD student at École des Ponts (CERMICS), and then a teaching assistant at MIT (JuliaLab).
Check out my PhD Resources website for students and researchers
news
No news so far...
selected publications
- Preprint: Solving a Continent-Scale Inventory Routing Problem at Renault
Louis Bouvier, Guillaume Dalle, Axel Parmentier, and Thibaut Vidal
This paper is the fruit of a partnership with Renault. Their reverse logistics requires solving a continent-scale multi-attribute inventory routing problem (IRP). With an average of 30 commodities, 16 depots, and 600 customers spread across a continent, our instances are orders of magnitude larger than those in the literature, and existing algorithms do not scale. We propose a large neighborhood search (LNS). To make it work, (1) we generalize existing split delivery vehicle routing problem and IRP neighborhoods to this context, (2) we turn a state-of-the-art matheuristic for medium-scale IRPs into a large neighborhood, and (3) we introduce two novel perturbations: the reinsertion of a customer and that of a commodity into the IRP solution. We also derive a new lower bound based on a flow relaxation. To stimulate research on large-scale IRPs, we introduce a library of industrial instances, benchmark our algorithms on them, and make our code open-source. Extensive numerical experiments highlight the relevance of each component of our LNS. (A toy sketch of a destroy-and-repair LNS loop is given after the BibTeX entry below.)
@misc{bouvierSolvingContinentScaleInventory2022,
  title     = {Solving a {{Continent-Scale Inventory Routing Problem}} at {{Renault}}},
  author    = {Bouvier, Louis and Dalle, Guillaume and Parmentier, Axel and Vidal, Thibaut},
  date      = {2022-09-01},
  number    = {arXiv:2209.00412},
  publisher = {{arXiv}},
  doi       = {10.48550/arXiv.2209.00412},
  url       = {http://arxiv.org/abs/2209.00412},
}
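The abstract stays at a high level, so here is a minimal, self-contained Julia sketch of a generic destroy-and-repair LNS loop. Everything in it (the function names, the toy cost, the bit-flip perturbation) is an assumption made for illustration; the paper's actual operators act on IRP routes, inventories, and commodities.

```julia
# Generic destroy-and-repair LNS loop (toy sketch, not the paper's code).
function lns(cost, destroy_repair!, x0; iterations=10_000)
    best = copy(x0)
    best_cost = cost(best)
    for _ in 1:iterations
        candidate = copy(best)
        destroy_repair!(candidate)  # destroy part of the solution, then repair it
        c = cost(candidate)
        if c < best_cost  # keep the candidate only if it improves the incumbent
            best, best_cost = candidate, c
        end
    end
    return best, best_cost
end

# Toy usage: minimize the number of ones in a bit vector, with a penalty on odd parity.
toy_cost(x) = sum(x) + 10 * isodd(sum(x))
flip_one_bit!(x) = (x[rand(eachindex(x))] ⊻= 1)  # a trivial "destroy + repair" move
best, best_cost = lns(toy_cost, flip_one_bit!, rand(0:1, 20))
```

The interesting part of the paper is precisely what this sketch abstracts away: which neighborhoods to destroy and how to repair them efficiently at continental scale.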
- Preprint: Learning with Combinatorial Optimization Layers: A Probabilistic Approach
Guillaume Dalle, Léo Baty, Louis Bouvier, and Axel Parmentier
Combinatorial optimization (CO) layers in machine learning (ML) pipelines are a powerful tool to tackle data-driven decision tasks, but they come with two main challenges. First, the solution of a CO problem often behaves as a piecewise constant function of its objective parameters. Since ML pipelines are typically trained using stochastic gradient descent, the absence of slope information is very detrimental. Second, standard ML losses do not work well in combinatorial settings. A growing body of research addresses these challenges through diverse methods. Unfortunately, the lack of well-maintained implementations slows down the adoption of CO layers. In this paper, building on previous work, we introduce a probabilistic perspective on CO layers, which lends itself naturally to approximate differentiation and the construction of structured losses. We recover many approaches from the literature as special cases, and we also derive new ones. Based on this unifying perspective, we present InferOpt.jl, an open-source Julia package that (1) allows turning any CO oracle with a linear objective into a differentiable layer, and (2) defines adequate losses to train pipelines containing such layers. Our library works with arbitrary optimization algorithms, and it is fully compatible with Julia’s ML ecosystem. We demonstrate its abilities using a pathfinding problem on video game maps. (A toy Monte Carlo sketch of the perturbation idea follows the BibTeX entry below.)
@misc{dalleLearningCombinatorialOptimization2022,
  title     = {Learning with {{Combinatorial Optimization Layers}}: A {{Probabilistic Approach}}},
  author    = {Dalle, Guillaume and Baty, Léo and Bouvier, Louis and Parmentier, Axel},
  date      = {2022-07-27},
  number    = {arXiv:2207.13513},
  publisher = {{arXiv}},
  doi       = {10.48550/arXiv.2207.13513},
  url       = {https://arxiv.org/abs/2207.13513},
}
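To make the probabilistic perspective concrete, here is a toy Monte Carlo sketch in Julia. It is not InferOpt.jl's actual API (see the package documentation for that); the names and the Gaussian-perturbation scheme below are assumptions chosen for illustration.

```julia
# Smooth a piecewise-constant argmax layer by averaging the CO oracle over
# random perturbations of its objective (toy sketch, not InferOpt.jl's API).
using Statistics: mean

# Toy CO oracle with a linear objective: one-hot vector of the largest entry of θ.
onehot_argmax(θ) = (y = zeros(length(θ)); y[argmax(θ)] = 1.0; y)

# Perturbed layer: θ ↦ E[onehot_argmax(θ + εZ)], Z Gaussian, estimated with M samples.
perturbed_layer(θ; ε=1.0, M=1_000) =
    mean(onehot_argmax(θ .+ ε .* randn(length(θ))) for _ in 1:M)

θ = [1.0, 0.9, -0.5]
perturbed_layer(θ)  # a probability vector that varies smoothly with θ,
                    # unlike onehot_argmax(θ), which jumps between vertices
```

The expectation is differentiable in θ even though each oracle call is not, which is what makes such layers trainable by stochastic gradient descent.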
- Preprint: Minimax Estimation of Partially-Observed Vector AutoRegressions
Guillaume Dalle and Yohann De Castro
High-dimensional time series are a core ingredient of the statistical modeling toolkit, for which numerous estimation methods are known. But when observations are scarce or corrupted, the learning task becomes much harder. The question is: how much harder? In this paper, we study the properties of a partially-observed Vector AutoRegressive process, which is a state-space model endowed with a stochastic observation mechanism. Our goal is to estimate its sparse transition matrix, but we only have access to a small and noisy subsample of the state components. Interestingly, the sampling process itself is random and can exhibit temporal correlations, a feature shared by many realistic data acquisition scenarios. We start by describing an estimator based on the Yule-Walker equation and the Dantzig selector, and we give an upper bound on its non-asymptotic error. Then, we provide a matching minimax lower bound, thus proving the near-optimality of our estimator. The convergence rate we obtain sheds light on the role of several key parameters, such as the sampling ratio, the amount of noise, and the number of non-zero coefficients in the transition matrix. These theoretical findings are discussed and illustrated by numerical experiments on simulated data. (A schematic version of the estimator is given after the BibTeX entry below.)
@misc{dalleMinimaxEstimationPartiallyObserved2022,
  title     = {Minimax {{Estimation}} of {{Partially-Observed Vector AutoRegressions}}},
  author    = {Dalle, Guillaume and De Castro, Yohann},
  date      = {2022-05-05},
  number    = {arXiv:2106.09327},
  publisher = {{arXiv}},
  doi       = {10.48550/arXiv.2106.09327},
  url       = {http://arxiv.org/abs/2106.09327},
}
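For readers unfamiliar with the two ingredients named in the abstract, here is a schematic version of the estimator. The notation is assumed for illustration; the paper's exact formulation also accounts for the random sampling process and the observation noise.

```latex
% For a VAR(1) process X_{t+1} = \theta X_t + \varepsilon_t, the Yule-Walker
% equation links the transition matrix \theta to the autocovariance matrices
% \Gamma(h) = \mathrm{Cov}(X_{t+h}, X_t):
\[
  \Gamma(1) = \theta \, \Gamma(0).
\]
% The Dantzig selector then seeks the sparsest \theta that nearly satisfies
% the empirical version of this equation (amsmath needed for \operatorname*):
\[
  \widehat{\theta} \in \operatorname*{arg\,min}_{\theta} \; \|\theta\|_1
  \quad \text{s.t.} \quad
  \bigl\| \theta \, \widehat{\Gamma}(0) - \widehat{\Gamma}(1) \bigr\|_{\infty} \le \lambda .
\]
```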