Invited Talks

※ The viewing period for the presentation videos has ended.

Invited Talk 1: 2020/11/24 (Tue) 13:00 – 14:00 @ [Webinar]

Pedro Domingos (University of Washington)

Unifying Logical and Statistical AI with Markov Logic [video no longer available, slides]

Intelligent systems must be able to handle the complexity and uncertainty of the real world. Markov logic enables this by unifying first-order logic and probabilistic graphical models into a single representation. Many deep architectures are instances of Markov logic. An extensive suite of learning and inference algorithms for Markov logic has been developed, along with open source implementations like Alchemy. Markov logic has been applied to natural language understanding, information extraction and integration, robotics, social network analysis, computational biology, and many other areas.
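Concretely, a Markov logic network defines a log-linear distribution over possible worlds, P(x) = (1/Z) exp(Σᵢ wᵢ nᵢ(x)), where nᵢ(x) counts the true groundings of formula i and wᵢ is its weight. The following Python sketch illustrates this on a toy two-constant "friends and smokers" domain by brute-force enumeration; the domain, formulas, weights, and helper names are illustrative assumptions for this page, not the API of Alchemy or any other Markov logic system.

```python
import itertools
import math

# Ground atoms for a tiny "friends and smokers" domain with constants A, B.
# (Domain, formulas, and weights are illustrative assumptions only.)
ATOMS = ["Smokes(A)", "Smokes(B)", "Cancer(A)", "Cancer(B)",
         "Friends(A,B)", "Friends(B,A)"]

WEIGHTS = (1.5, 1.1)

def n_groundings(world):
    """Count true groundings of each weighted first-order formula."""
    w = dict(zip(ATOMS, world))
    # Formula 1 (weight 1.5): Smokes(x) => Cancer(x)
    f1 = sum((not w[f"Smokes({c})"]) or w[f"Cancer({c})"] for c in "AB")
    # Formula 2 (weight 1.1): Friends(x,y) & Smokes(x) => Smokes(y)
    f2 = sum((not (w[f"Friends({x},{y})"] and w[f"Smokes({x})"]))
             or w[f"Smokes({y})"]
             for x, y in [("A", "B"), ("B", "A")])
    return f1, f2

def world_score(world):
    """Unnormalized log-linear score: exp(sum_i w_i * n_i(world))."""
    return math.exp(sum(w * n for w, n in zip(WEIGHTS, n_groundings(world))))

worlds = list(itertools.product([False, True], repeat=len(ATOMS)))
Z = sum(world_score(w) for w in worlds)  # partition function

# Marginal probability that Cancer(A) holds, by enumerating all 64 worlds.
p = sum(world_score(w) for w in worlds if w[ATOMS.index("Cancer(A)")]) / Z
print(f"P(Cancer(A)) = {p:.3f}")
```

Real systems such as Alchemy replace this exhaustive enumeration with approximate inference (e.g. MCMC or lifted methods), since the number of worlds grows exponentially in the number of ground atoms.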

Invited Talk 2: 2020/11/25 (Wed) 13:00 – 14:00 @ [Webinar]

Suresh Venkatasubramanian (ACLU of Utah)

Quantifying Problems with Shapley Values as a Way of Explaining Model Behavior [video no longer available, slides]

The complexity of machine learning models presents challenges for attempts to interpret their behavior or determine why they might exhibit bias. In recent years, one proposed approach to model explainability has been to quantify the degree to which individual features contribute to the model outcome either for a particular input or in general. To do this, a tool from cooperative game theory known as the Shapley value of a game has become a popular method for quantification. In brief, the trick is to think of the features as “playing” a cooperative game to produce model output, in which case the Shapley value for each “player” represents an assignment of contribution that satisfies some natural axiomatic properties.
While methods based on this idea have become popular for feature influence, there are significant conceptual and mathematical issues with the use of Shapley values, especially in circumstances where features might not be independent of each other (which is often the case when dealing with biased models). In this talk, I’ll outline some of these problems and present an alternate geometric perspective on Shapley values that yields deeper insight into their limits and how we might augment them to better capture feature interactions.
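For reference, the Shapley value mentioned in the abstract assigns feature i the contribution φᵢ = Σ_{S ⊆ N∖{i}} |S|!(n−|S|−1)!/n! · (v(S ∪ {i}) − v(S)), where v is a value function over coalitions of the n features. The brute-force Python sketch below computes this exactly for a toy model. Replacing absent features with a fixed baseline is an illustrative simplifying convention, and exactly the kind of modeling choice that becomes problematic when features are dependent, as the talk discusses.

```python
import itertools
import math

def shapley_values(model, x, baseline):
    """Exact Shapley values by enumerating all feature coalitions.

    model:    callable mapping a feature vector to a scalar output
    x:        the input being explained
    baseline: values substituted for features outside the coalition
              (a simplifying convention; handling dependent features
              correctly is one of the issues raised in the talk)
    """
    n = len(x)

    def v(coalition):
        # Value of a coalition: model output with non-members set to baseline.
        z = [x[i] if i in coalition else baseline[i] for i in range(n)]
        return model(z)

    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for size in range(n):
            # Weight |S|!(n-|S|-1)!/n! for coalitions S of this size.
            weight = (math.factorial(size) * math.factorial(n - size - 1)
                      / math.factorial(n))
            for S in itertools.combinations(others, size):
                phi[i] += weight * (v(set(S) | {i}) - v(set(S)))
    return phi

# Toy model with an interaction term between features 0 and 1.
model = lambda z: 2.0 * z[0] + z[1] + 3.0 * z[0] * z[1]
print(shapley_values(model, x=[1.0, 1.0], baseline=[0.0, 0.0]))
# -> [3.5, 2.5]; contributions sum to model(x) - model(baseline),
#    as required by the efficiency axiom.
```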