In certain ML applications such as fraud detection, explanations can boost both the understanding of a model's predictions and the performance of the human-in-the-loop. But how can we unveil what goes on inside a complex neural network? And how can we use these explanations to promote better and faster decision-making? In our last meetup before the summer break, Catarina and Vladimir will show us how to leverage weak supervision to craft a concept-annotated dataset, and use it to train a multi-task neural network that jointly learns decisions and associated domain-knowledge explanations in the context of fraud (e.g., high-level fraud patterns such as "Suspicious item").
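As a rough illustration of the joint-learning idea described above, here is a minimal sketch of a shared trunk with two heads, one for the fraud decision and one for the concept explanations. All dimensions, names, and weight scales are hypothetical; the talk's actual architecture and training setup may differ.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: 16 input features, 8 fraud-pattern concepts
n_features, n_hidden, n_concepts = 16, 32, 8

# One shared representation feeds two task-specific heads
W_shared = rng.normal(size=(n_features, n_hidden)) * 0.1
W_decision = rng.normal(size=(n_hidden, 1)) * 0.1
W_concepts = rng.normal(size=(n_hidden, n_concepts)) * 0.1

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x):
    """Jointly predict a fraud probability and per-concept probabilities
    (e.g., "Suspicious item") from a shared hidden representation."""
    h = np.maximum(0.0, x @ W_shared)        # shared ReLU layer
    fraud_prob = sigmoid(h @ W_decision)     # decision head
    concept_probs = sigmoid(h @ W_concepts)  # explanation head
    return fraud_prob, concept_probs

x = rng.normal(size=(1, n_features))
fraud, concepts = forward(x)
print(fraud.shape, concepts.shape)  # (1, 1) (1, 8)
```

In a full implementation, both heads would be trained together with a combined loss (decision loss plus concept loss), so the shared layer learns features useful for both tasks.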
Catarina Belém and Vladimir Balayan are both Research Data Scientists at Feedzai, working in the Responsible AI (FATE) group. ([ Link ], [ Link ])
As always, expect up to 1h of talk, with around 10 minutes at the end for Q&A (be sure to leave any questions you have during the talk in the YouTube chat!).
Links:
- The presentation content is available on our GitHub page: [ Link ]
- Application form to apply to speak at one of our meetups: [ Link ]
- Feedback form: [ Link ]