Model Ensembling for Constrained Optimization
Authors
Ira Globus-Harris
Varun Gupta
Michael Kearns
Aaron Roth
Volume: 6th Symposium on Foundations of Responsible Computing (FORC 2025)
Series: Leibniz International Proceedings in Informatics (LIPIcs)
Conference: Symposium on Foundations of Responsible Computing (FORC)
License:
Creative Commons Attribution 4.0 International license
Publication Date: 2025-06-03
Files
PDF: LIPIcs.FORC.2025.14.pdf (0.65 MB, 17 pages)
HTML (experimental): LIPIcs.FORC.2025.14.html
Document Identifiers
DOI:
10.4230/LIPIcs.FORC.2025.14
URN:
urn:nbn:de:0030-drops-231412
ACM Subject Classification
Computing methodologies → Learning settings
Keywords
model ensembling
trustworthy AI
decision-making under uncertainty
Abstract
Many instances of decision making under objective uncertainty can be decomposed into two steps: predicting the objective function, and then optimizing for the best feasible action under the estimate of the objective vector. We study the problem of ensembling models for the optimization of uncertain linear objectives under arbitrary constraints. We imagine we are given a collection of predictive models mapping a feature space to multi-dimensional real-valued predictions, which form the coefficients of a linear objective that we would like to optimize. We give two ensembling methods that provably yield transparent decisions that strictly improve on all initial policies. The first method operates in the "white box" setting, in which we have access to the underlying prediction models; the second operates in the "black box" setting, in which we only have access to the constituent models' induced decisions (in the downstream optimization problem), but not their underlying point predictions. Both are transparent, or trustworthy, in the sense that the user can reliably predict long-term ensemble rewards even when the instance-by-instance predictions are imperfect.
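The predict-then-optimize decomposition described in the abstract can be illustrated with a minimal sketch. This is not a reproduction of the paper's white-box or black-box ensembling procedures; it only shows the setting under simplifying assumptions (a finite feasible action set, a linear objective), and all function and variable names here are hypothetical.

```python
# Sketch of the predict-then-optimize setting: each model predicts the
# coefficient vector of a linear objective; the decision maker then picks
# the feasible action maximizing the predicted objective. Illustrative
# only -- not the paper's ensembling algorithms.

def induced_decision(predicted_coeffs, feasible_actions):
    """Return the feasible action maximizing the predicted linear objective."""
    return max(feasible_actions,
               key=lambda a: sum(c * x for c, x in zip(predicted_coeffs, a)))

def black_box_ensemble(induced_decisions, realized_coeffs):
    """'Black box' aggregation: choose among the constituent models'
    induced decisions using only realized objective values, without
    access to the models' underlying point predictions."""
    return max(induced_decisions,
               key=lambda a: sum(c * x for c, x in zip(realized_coeffs, a)))

# Two hypothetical models predict coefficients of a 2-dimensional objective;
# the feasible set here is three lattice points (two simplex vertices and the origin).
feasible = [(1, 0), (0, 1), (0, 0)]
decision_a = induced_decision((2.0, 1.0), feasible)  # model A's induced decision
decision_b = induced_decision((0.5, 3.0), feasible)  # model B's induced decision
best = black_box_ensemble([decision_a, decision_b], (1.0, 2.0))
```

Under the realized coefficients, the ensemble's decision is at least as good as that of either constituent model, which is the flavor of guarantee (stated far more carefully, and over long-run rewards) that the paper's methods provide.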
Cite As
Ira Globus-Harris, Varun Gupta, Michael Kearns, and Aaron Roth. Model Ensembling for Constrained Optimization. In 6th Symposium on Foundations of Responsible Computing (FORC 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 329, pp. 14:1-14:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)
BibTeX
@InProceedings{globusharris_et_al:LIPIcs.FORC.2025.14,
author = {Globus-Harris, Ira and Gupta, Varun and Kearns, Michael and Roth, Aaron},
title = {{Model Ensembling for Constrained Optimization}},
booktitle = {6th Symposium on Foundations of Responsible Computing (FORC 2025)},
pages = {14:1--14:17},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-367-6},
ISSN = {1868-8969},
year = {2025},
volume = {329},
editor = {Bun, Mark},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FORC.2025.14},
URN = {urn:nbn:de:0030-drops-231412},
doi = {10.4230/LIPIcs.FORC.2025.14},
annote = {Keywords: model ensembling, trustworthy AI, decision-making under uncertainty}
}
Author Details
Ira Globus-Harris
University of Pennsylvania, Philadelphia, PA, USA
Varun Gupta
University of Pennsylvania, Philadelphia, PA, USA
Michael Kearns
University of Pennsylvania, Philadelphia, PA, USA
Aaron Roth
University of Pennsylvania, Philadelphia, PA, USA