Update references.bib
kristiankersting committed Nov 6, 2023
1 parent e84f290 commit f5e3804
Showing 1 changed file with 2 additions and 2 deletions.
4 changes: 2 additions & 2 deletions references.bib
@@ -87,7 +87,7 @@ @misc{steinmann2023learning
}

@article{zecevic2023same,
-Anote={./images/zecevic2023sacmlpng},
+Anote={./images/zecevic2023sacml.png},
author={Matej Zecevic and Devendra Singh Dhami and Kristian Kersting},
note = {Neurally-parameterized Structural Causal Models in the Pearlian notion of causality, referred to as NCM, were recently introduced as a step towards next-generation learning systems. However, said NCM are only concerned with the learning aspect of causal inference and totally miss out on the architecture aspect. That is, actual causal inference within NCM is intractable in that the NCM won't return an answer to a query in polynomial time. This insight follows as a corollary to the more general statement on the intractability of arbitrary structural causal model (SCM) parameterizations, which we prove in this work through a classical 3-SAT reduction. Since future learning algorithms will be required to deal with both high-dimensional data and highly complex mechanisms governing the data, we ultimately believe work on tractable inference for causality to be decisive. We also show that not all "causal" models are created equal. More specifically, there are models capable of answering causal queries that are not SCM, which we refer to as partially causal models (PCM). We provide a tabular taxonomy in terms of tractability properties for all of the different model families, namely correlation-based, PCM and SCM. To conclude our work, we also provide some initial ideas on how to overcome parts of the intractability of causal inference with SCM by showing an example of how parameterizing an SCM with SPN modules can at least allow for tractable mechanisms. With this work we hope that our insights can raise awareness of this novel research direction, since achieving success with causality in real-world downstream tasks will not only depend on learning correct models but will also require the practical ability to gain access to model inferences.},
@@ -104,7 +104,7 @@ @article{zecevic2023same

@article{zecevic2023acml,
Anote={./images/zecevic2023acml.png},
-author={HMatej Zecevic and Devendra Singh Dhami and Kristian Kersting},
+author={Matej Zecevic and Devendra Singh Dhami and Kristian Kersting},
note = {The recent years have been marked by extended research on adversarial attacks, especially on deep neural networks. With this work we intend to pose and investigate the question of whether the phenomenon might be more general in nature, that is, adversarial-style attacks outside classical classification tasks. Specifically, we investigate optimization problems as they constitute a fundamental part of modern AI research. To this end, we consider the base class of optimizers, namely Linear Programs (LPs). In our initial attempt at a naïve mapping between the formalism of adversarial examples and LPs, we quickly identify the key ingredients missing for making sense of a reasonable notion of adversarial examples for LPs. Intriguingly, the formalism of Pearl's notion of causality allows for the right description of adversarial-like examples for LPs. Characteristically, we show the direct influence of the Structural Causal Model (SCM) onto the subsequent LP optimization, which ultimately exposes a notion of confounding in LPs (inherited by said SCM) that allows for adversarial-style attacks. We provide both the general proof formally alongside existential proofs of such intriguing LP-parameterizations based on SCM for three combinatorial problems, namely Linear Assignment, Shortest Path, and a real-world problem of energy systems.},
journal={Special Issue ACML 2023, Machine Learning Journal (MLJ)},
title={Structural Causal Models Reveal Confounder Bias in Linear Program Modelling},
