  EconPapers    
Economics at your fingertips  
 

Policy learning for many outcomes of interest: Combining optimal policy trees with multi-objective Bayesian optimisation

Patrick Rehill and Nicholas Biddle

Papers from arXiv.org

Abstract: Methods for learning optimal policies use causal machine learning models to create human-interpretable rules for making choices around the allocation of different policy interventions. However, in realistic policy-making contexts, decision-makers often care about trade-offs between outcomes, not just single-mindedly maximising utility for one outcome. This paper proposes an approach termed Multi-Objective Policy Learning (MOPoL) which combines optimal decision trees for policy learning with a multi-objective Bayesian optimisation approach to explore the trade-off between multiple outcomes. It does this by building a Pareto frontier of non-dominated models for different hyperparameter settings which govern outcome weighting. The key here is that a low-cost greedy tree can be an accurate proxy for the very computationally costly optimal tree for the purposes of making decisions, which means models can be repeatedly fit to learn a Pareto frontier. The method is applied to a real-world case study of non-price rationing of anti-malarial medication in Kenya.
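The Pareto-frontier step the abstract describes — keeping only non-dominated models across different outcome weightings — can be sketched as below. This is an illustrative assumption, not the authors' implementation: `pareto_frontier` and the toy outcome pairs (two outcomes, both to be maximised) are hypothetical, standing in for the per-weighting model fits that MOPoL would produce.

```python
def pareto_frontier(points):
    """Return the non-dominated subset of (outcome1, outcome2) pairs.

    A point p is dominated if some other point q is at least as good on
    both outcomes and strictly better on at least one.
    """
    frontier = []
    for i, p in enumerate(points):
        dominated = any(
            q[0] >= p[0] and q[1] >= p[1] and (q[0] > p[0] or q[1] > p[1])
            for j, q in enumerate(points)
            if j != i
        )
        if not dominated:
            frontier.append(p)
    return frontier


# Toy evaluations: each pair is (outcome A, outcome B) for one candidate
# policy fitted under a different outcome weighting. The fourth point is
# dominated by the second, so it drops off the frontier.
candidates = [(1.0, 0.2), (0.8, 0.8), (0.2, 1.0), (0.5, 0.5)]
print(pareto_frontier(candidates))  # → [(1.0, 0.2), (0.8, 0.8), (0.2, 1.0)]
```

In the paper's setting, each candidate would come from refitting a low-cost greedy policy tree under a different outcome-weighting hyperparameter proposed by the multi-objective Bayesian optimiser; the filter above then summarises the achievable trade-offs for the decision-maker.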

Date: 2022-12, Revised 2023-10
New Economics Papers: this item is included in nep-big, nep-cmp and nep-upt

Downloads: (external link)
http://arxiv.org/pdf/2212.06312 Latest version (application/pdf)

Related works:
This item may be available elsewhere in EconPapers: Search for items with the same title.


Persistent link: https://EconPapers.repec.org/RePEc:arx:papers:2212.06312


More papers in Papers from arXiv.org
Bibliographic data for series maintained by arXiv administrators.

Page updated 2024-12-28
Handle: RePEc:arx:papers:2212.06312