Identifying causal relationships by running experiments is not always possible. This talk discusses an alternative approach: quasi-experimental frameworks. Additionally, I will present how to adjust well-known machine-learning algorithms so that they can be used to quantify causal relationships.
### What problem is the talk addressing?
Experiments are the gold standard for estimating causal relationships. That said, they are not always possible: they can be costly, time-consuming, unethical, or even illegal. In other cases, the underlying assumptions required for identification cannot be met, e.g. subjects cannot be randomly split into control and treatment groups, or interactions between them cannot be avoided.
### Why is the problem relevant to the audience?
Understanding the magnitude of treatment effects is a prerequisite for policy makers and stakeholders to design optimal strategies.
### What are the solutions to the problem?
Prediction-driven algorithms are not necessarily well suited to accurately identifying causal links. In this talk I will show how to shift the objective of those algorithms from prediction towards identification of treatment effects. First, I will cover classical quasi-experimental frameworks such as difference-in-differences and regression discontinuity design. Then, I will shed some light on how to augment those methods with off-the-shelf machine-learning techniques; to this end, orthogonal machine learning will be discussed.
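To give a flavour of the orthogonal machine-learning idea, here is a minimal sketch of the residualize-then-regress recipe for a partially linear model. The synthetic data, variable names, and choice of random forests are illustrative assumptions for this example, not material taken from the talk.

```python
# Minimal sketch of orthogonal (double) machine learning for a partially
# linear model: Y = theta * T + g(X) + noise, with T = m(X) + noise.
# The data-generating process below is purely illustrative.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
n = 5_000
X = rng.normal(size=(n, 5))                  # observed confounders
T = X[:, 0] + rng.normal(size=n)             # treatment depends on X
Y = 2.0 * T + np.sin(X[:, 0]) + X[:, 1] ** 2 + rng.normal(size=n)  # true effect = 2

# Step 1: predict the outcome and the treatment from the confounders with a
# flexible learner, using out-of-fold predictions (cross-fitting).
y_hat = cross_val_predict(RandomForestRegressor(n_estimators=200), X, Y, cv=5)
t_hat = cross_val_predict(RandomForestRegressor(n_estimators=200), X, T, cv=5)

# Step 2: regress the outcome residuals on the treatment residuals.
# The slope is an orthogonalized estimate of the average treatment effect.
y_res, t_res = Y - y_hat, T - t_hat
theta = LinearRegression().fit(t_res.reshape(-1, 1), y_res).coef_[0]
print(f"Estimated treatment effect: {theta:.2f}")  # should land near 2.0
```

Residualizing both the outcome and the treatment is what makes the final regression robust to small errors in the nuisance models, which is why any reasonably accurate off-the-shelf learner can be plugged into step 1.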
### What are the main takeaways from the talk?
I will reiterate that correlation does not imply causation. The audience will become familiar with causal-inference methods that can be used when laboratory experiments are not feasible, and participants will learn how to adjust off-the-shelf machine-learning algorithms to identify conditional average treatment effects.
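As a taste of the last takeaway, below is a minimal sketch of one simple way to repurpose off-the-shelf regressors for conditional average treatment effects: a T-learner on synthetic data with a binary treatment. The data-generating process and model choices are assumptions made for the example; the talk's own examples may differ.

```python
# Minimal T-learner sketch: fit one off-the-shelf regressor per treatment arm
# and take the difference of their predictions as the CATE estimate.
# Synthetic data with a randomly assigned binary treatment; purely illustrative.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(1)
n = 5_000
X = rng.normal(size=(n, 3))
T = rng.integers(0, 2, size=n)       # binary treatment
tau = 1.0 + X[:, 0]                  # true effect varies with the first covariate
Y = X[:, 1] + tau * T + rng.normal(size=n)

# Fit one outcome model per arm...
model_treated = GradientBoostingRegressor().fit(X[T == 1], Y[T == 1])
model_control = GradientBoostingRegressor().fit(X[T == 0], Y[T == 0])

# ...and estimate the CATE as the difference of predicted outcomes.
cate_hat = model_treated.predict(X) - model_control.predict(X)
print(np.corrcoef(cate_hat, tau)[0, 1])  # should correlate strongly with the true effect
```

The T-learner is only a baseline: when treatment assignment is confounded rather than randomized, orthogonalized approaches like the one sketched earlier are the more robust choice.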