University of Melbourne
Causal discovery is the problem of learning causal structures (usually represented as graphs) from observational data. It is a fundamental task across many disciplines of science and engineering. For example, in biology, scientists are interested in causal regulatory networks that model how each gene is causally influenced by others. In industry, companies are interested in the causes of customer churn. We work on theoretical foundations and computational methods for discovering causal structures from complex real-world data, for example, data with distribution shifts, missing values, or measurement error.
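As a toy illustration of graph-based causal discovery, the sketch below uses a partial-correlation conditional-independence test, a standard ingredient of constraint-based methods such as the PC algorithm. The synthetic chain data (X → Y → Z) and the thresholds are illustrative assumptions, not the group's actual method or data:

```python
import numpy as np

def partial_corr(x, y, z):
    """Partial correlation of x and y given conditioning set z,
    computed from the residuals of linear regressions on z."""
    def resid(a, b):
        b = np.column_stack([np.ones(len(b)), b])
        coef, *_ = np.linalg.lstsq(b, a, rcond=None)
        return a - b @ coef
    rx, ry = resid(x, z), resid(y, z)
    return np.corrcoef(rx, ry)[0, 1]

rng = np.random.default_rng(0)
n = 5000
x = rng.normal(size=n)
y = 2.0 * x + rng.normal(size=n)    # X causes Y
z = -1.5 * y + rng.normal(size=n)   # Y causes Z

# X and Z are marginally dependent (correlated through Y) ...
marginal = abs(np.corrcoef(x, z)[0, 1])
# ... but conditionally independent given Y, so a constraint-based
# method would remove the X–Z edge from the candidate graph.
conditional = abs(partial_corr(x, z, y.reshape(-1, 1)))
```

On this chain, `marginal` is large while `conditional` is near zero, which is exactly the independence pattern a constraint-based learner exploits to prune edges.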
Most studies in the health and social sciences are motivated by understanding the effect of a treatment. For example, what fraction of past crimes could have been avoided by a given policy? What is the efficacy of a given drug in a population? These questions could be answered straightforwardly if the potential outcome under every value of the treatment were observable and all confounders were controlled. In practice, however, such potential outcomes are not always observable. We aim to tackle this challenge using advanced techniques such as moment-equation balancing, and to develop efficient methodologies for treatment effect inference from different data types, such as multivariate (dynamic) data, longitudinal data, and data with measurement error.
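To illustrate why unobserved potential outcomes and confounding make treatment effect estimation hard, the sketch below compares a naive mean difference with inverse-propensity weighting on synthetic confounded data. The data-generating process, propensity model, and effect size are hypothetical assumptions for illustration only (and unlike this toy, real propensities are usually unknown and must be estimated):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20000
c = rng.normal(size=n)                       # confounder
p = 1.0 / (1.0 + np.exp(-2.0 * c))           # propensity: treatment depends on c
t = rng.binomial(1, p)                       # binary treatment assignment
y = 1.0 * t + 3.0 * c + rng.normal(size=n)   # true average treatment effect = 1.0

# Naive comparison of treated vs. untreated is biased by the confounder c.
naive = y[t == 1].mean() - y[t == 0].mean()

# Inverse-propensity weighting reweights each group to mimic the full
# population, recovering the average treatment effect when p is known.
ipw = np.mean(t * y / p) - np.mean((1 - t) * y / (1 - p))
```

Here `naive` is far above the true effect of 1.0 because treated units tend to have larger `c`, while the weighted estimate `ipw` lands close to 1.0.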
Powered by deep neural networks, representation learning extracts high-level representations from raw data and enables accurate predictions in many applications. However, existing representation learning techniques cannot outperform humans in transferring and generalizing to new environments and tasks. In addition, black-box representations face challenges in the social aspects of AI, such as fairness, reliability, and transparency. Our research goal is to learn causal representations that enable efficient transfer learning, out-of-distribution generalization, and socially responsible learning.