David Watson presented "Interpretable machine learning for genomics: examples, opportunities, and challenges" as part of the Issues in Explainable Artificial Intelligence: Understanding and Explaining in Healthcare workshop.
The growing use of artificial intelligence in healthcare – e.g. for medical diagnosis, health monitoring, and treatment recommendation – prompts a series of ethical questions about the appropriate regulation and application of these technologies. Some of the most important challenges arise from
the use of sophisticated ‘black box’ algorithms with limited interpretability: for instance, what are clinicians morally required to disclose or explain to patients about algorithmic treatment recommendations and diagnoses by such systems? And what kind of understanding of AI systems do different stakeholders need (e.g. physicians, designers or policymakers) to integrate
them responsibly into healthcare practice?
The workshop is part of Rune Nyrup’s project Understanding Medical Black Boxes, funded by the Wellcome Trust. It is the second instalment of a workshop series organised in collaboration with research projects on issues in explainable AI at the University of Saarland, the Technical University of Dortmund and Delft University of Technology.