add talk videos from Kun Zhang, Yujia Zheng, and Johannes Textor #25

Merged 1 commit on Feb 29, 2024
11 changes: 7 additions & 4 deletions _community_videos/03_talk_series.md
@@ -5,15 +5,18 @@ layout: page
description: >-
PyWhy Causality in Practice - Causal Representation Learning: Discovery of the Hidden World - Kun Zhang
summary: >-
Prof. Kun Zhang, currently on leave from Carnegie Mellon University (CMU), is a professor and the acting chair of the machine learning department and the director of the Center for Integrative AI at Mohamed bin Zayed University of Artificial Intelligence (MBZUAI). In this talk, he gives an overview of causal representation learning and how it has evolved over time.
<br>
The talk will include a description of the causal-learn package in PyWhy. Learn more: <a href="https://github.com/py-why/causal-learn">https://github.com/py-why/causal-learn</a>
<br>
Causality is a fundamental notion in science, engineering, and even in machine learning. Causal representation learning aims to reveal the underlying high-level hidden causal variables and their relations. It can be seen as a special case of causal discovery, whose goal is to recover the underlying causal structure or causal model from observational data. The modularity property of a causal system implies properties of minimal changes and independent changes of causal representations, and in this talk, we show how such properties make it possible to recover the underlying causal representations from observational data with identifiability guarantees: under appropriate assumptions, the learned representations are consistent with the underlying causal process. Various problem settings are considered, involving independent and identically distributed (i.i.d.) data, temporal data, or data with distribution shift as input. We demonstrate when identifiable causal representation learning can benefit from flexible deep learning and when suitable parametric assumptions have to be imposed on the causal process, with various examples and applications.
<br>
<b>Speaker:</b> Kun Zhang is currently on leave from Carnegie Mellon University (CMU), where he is an associate professor of philosophy and an affiliate faculty member in the machine learning department; he is working as a professor and the acting chair of the machine learning department and the director of the Center for Integrative AI at Mohamed bin Zayed University of Artificial Intelligence (MBZUAI). He develops methods for making causality transparent by torturing various kinds of data and investigates machine learning problems, including transfer learning, representation learning, and reinforcement learning, from a causal perspective. He frequently serves as a senior area chair, area chair, or senior program committee member for major conferences in machine learning and artificial intelligence, including UAI, NeurIPS, ICML, IJCAI, AISTATS, and ICLR. He was a general & program co-chair of the first Conference on Causal Learning and Reasoning (CLeaR 2022), a program co-chair of the 38th Conference on Uncertainty in Artificial Intelligence (UAI 2022), and is a general co-chair of UAI 2023.
<br>

<b><a href="https://teams.microsoft.com/l/meetup-join/19%3ameeting_N2E0NzAxOTctMDAxNC00ZTY2LWE5ODYtZDU5YjhmNmRlZmM4%40thread.v2/0?context=%7b%22Tid%22%3a%22492f1487-e76c-454d-aad1-28c9aaf849f3%22%2c%22Oid%22%3a%22404ab0c2-59ec-4b36-88e9-81f01946470f%22%7d">Join the live seminar</a> on Monday, January 29, 2024 at 8:00am Pacific / 11:00am Eastern / 4:00pm GMT / 9:30pm IST.</b>
<iframe width="800" height="450" src="https://www.youtube.com/embed/tvyuJZHJZvA?si=0Np85fXMzTx-L-a4"
title="YouTube video player" frameborder="0" allow="accelerometer; autoplay;
clipboard-write; encrypted-media; gyroscope; picture-in-picture;
web-share" allowfullscreen></iframe>



---
21 changes: 21 additions & 0 deletions _community_videos/04_talk_series.md
@@ -0,0 +1,21 @@
---
title: "causal-learn library: Causal discovery in Python"
slug: pywhy-video
layout: page
description: >-
PyWhy Causality in Practice - causal-learn library: Causal discovery in Python - Yujia Zheng
summary: >-
Yujia Zheng, a Ph.D. student at CMU, talks about the causal-learn package and how it can be used to learn causal graphs (and more) from observational data.
<br>
<br>
Causal discovery aims at revealing causal relations from observational data, which is a fundamental task in science and engineering. This talk introduces causal-learn, an open-source Python library for causal discovery. This library focuses on bringing a comprehensive collection of causal discovery methods to both practitioners and researchers. It provides easy-to-use APIs for non-specialists, modular building blocks for developers, detailed documentation for learners, and comprehensive methods for all. Different from previous packages in R or Java, causal-learn is fully developed in Python, which could be more in tune with the recent preference shift in programming languages within related communities. The talk will also explore related usage examples, aiming to further lower the entry threshold by providing a roadmap for selecting the appropriate algorithm.
<br>
<br>

<iframe width="800" height="450" src="https://www.youtube.com/embed/4E5b9ldybDE?si=EoljD2TMFEzyjnHk"
title="YouTube video player" frameborder="0" allow="accelerometer; autoplay;
clipboard-write; encrypted-media; gyroscope; picture-in-picture;
web-share" allowfullscreen></iframe>


---
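As an aside for readers new to causal-learn (the library introduced in the talk above), here is a minimal sketch of the kind of workflow the abstract describes. It is not part of the site content: the synthetic data, variable layout, and the `alpha` value are illustrative assumptions; only the `pc` entry point reflects causal-learn's documented API.

```python
# Minimal sketch: run the PC algorithm from causal-learn on observational data
# and inspect the recovered graph. The data-generating process (X -> Y -> Z)
# below is purely illustrative.
import numpy as np
from causallearn.search.ConstraintBased.PC import pc

rng = np.random.default_rng(0)
n = 2000
x = rng.normal(size=n)
y = 2.0 * x + rng.normal(size=n)
z = -1.5 * y + rng.normal(size=n)
data = np.column_stack([x, y, z])  # shape (n_samples, n_variables)

# PC with the default (Fisher-z) conditional independence test;
# alpha is the significance level for the independence tests.
cg = pc(data, alpha=0.05)

# The estimated graph is stored on cg.G; printing it lists nodes and edges.
print(cg.G)
```

Other discovery algorithms mentioned in the library's documentation (for example, score-based methods such as GES) live under similarly named modules and follow the same data-in, graph-out pattern.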
21 changes: 21 additions & 0 deletions _community_videos/05_talk_series.md
@@ -0,0 +1,21 @@
---
title: "Lessons learned from the DAGitty user community"
slug: pywhy-video
layout: page
description: >-
PyWhy Causality in Practice - Lessons learned from the DAGitty user community - Johannes Textor
summary: >-
Johannes Textor works at both Radboud University and the Radboud University Medical Center in Nijmegen, The Netherlands. He is interested in leveraging causal inference methodology for the benefit of biomedical research, especially in the fields of immunology and tumor immunology.
<br>
<br>
In this talk, Johannes describes his reflections on DAGitty usage by biomedical scientists and to what extent causal graphs are useful in science. In his words, "I started developing the tool “dagitty” in 2010, first as a website, and then as an R package. I don’t know exactly how many people use this tool, but I believe it’s a substantial amount: there are ~1000 visits to the site per day, ~17000 causal diagrams have been saved on the website so far, and the two dagitty papers have ~2800 citations. Over the years, feedback from the user base has provided me with unique insights into the users’ issues and priorities. More recently, I’ve also actively tried to get insight into how dagitty (and causal diagrams more broadly) are being used and if this is actually beneficial for science (I currently have my doubts). In the talk, I’ll share some stories of these interactions and how they shaped dagitty and myself over the years."
<br>
<br>

<iframe width="800" height="450" src="https://www.youtube.com/embed/gnRdYQBbFLk?si=9R4nJPnol9q1uz2N"
title="YouTube video player" frameborder="0" allow="accelerometer; autoplay;
clipboard-write; encrypted-media; gyroscope; picture-in-picture;
web-share" allowfullscreen></iframe>


---