DOI: 10.5555/3666122.3668022

Front-door adjustment beyond Markov equivalence with limited graph knowledge

Published: 30 May 2024

Abstract

Causal effect estimation from data typically requires assumptions about the cause-effect relations, either explicitly, in the form of a causal graph structure within the Pearlian framework, or implicitly, in terms of (conditional) independence statements between counterfactual variables within the potential outcomes framework. When the treatment variable and the outcome variable are confounded, front-door adjustment is an important special case in which, given the graph, the causal effect of the treatment on the outcome can be estimated using post-treatment variables. However, the exact formula for front-door adjustment depends on the structure of the graph, which is difficult to learn in practice. In this work, we provide testable conditional independence statements that allow the causal effect to be computed via front-door-like adjustment without knowing the graph, using only limited structural side information. We show that our method is applicable in scenarios where knowing the Markov equivalence class is not sufficient for causal effect estimation. We demonstrate the effectiveness of our method on a class of random graphs as well as on real causal fairness benchmarks.
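For background, a sketch of the classical result the paper builds on (the notation below, with treatment X, outcome Y, and mediator set M, is ours and not taken from the paper): under Pearl's front-door criterion, where M intercepts every directed path from X to Y, there is no unblocked back-door path from X to M, and every back-door path from M to Y is blocked by X, the interventional distribution is identified as

\[
P(y \mid \mathrm{do}(x)) \;=\; \sum_{m} P(m \mid x) \sum_{x'} P(y \mid x', m)\, P(x').
\]

The precise form of such an adjustment depends on which variables play the role of M in the graph; the contribution here is to recover a front-door-like estimand from testable conditional independence statements and limited structural side information, without requiring the graph itself.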

Supplementary Material

Additional material (3666122.3668022_supp.pdf)
Supplemental material.





Published In

NIPS '23: Proceedings of the 37th International Conference on Neural Information Processing Systems
December 2023
80772 pages

Publisher

Curran Associates Inc.

Red Hook, NY, United States


Qualifiers

  • Research-article
  • Research
  • Refereed limited


