Trustworthy AI
The term “trustworthy AI” has become increasingly popular in recent years, in academic research and publications, among lawmakers and regulators, and in the press and the general public. In this course we examine which dimensions are involved in making an AI system trustworthy, what some of the scientific underpinnings and the current state of the art in the corresponding fields of academic and industrial R&D are, and which regulatory and market mechanisms aim to ensure that end products indeed meet a certain (minimum) level of trustworthiness. We will survey the foundations of explainability and interpretability of AI systems, privacy preservation, fairness and bias mitigation, security, and safety, and discuss them from technological, regulatory, and societal perspectives. We will see that the concept of “trust” is multifaceted and spans several criteria, some of which may well conflict with one another, requiring (ideally conscious) trade-offs between these not fully compatible aspects.
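To make the idea of conflicting criteria concrete, here is a minimal Python sketch (toy data and thresholds chosen purely for illustration; not part of the course material) of one well-known tension: pushing a classifier toward demographic parity by using group-specific decision thresholds can come at a cost in overall accuracy.

```python
# Minimal illustration (hypothetical toy data) of a trade-off between two
# trustworthiness criteria: accuracy and demographic parity (fairness).
import numpy as np

rng = np.random.default_rng(0)

# Toy scores and labels for two groups, A and B, with shifted score distributions.
n = 10_000
group = rng.integers(0, 2, n)                  # 0 = group A, 1 = group B
scores = rng.normal(loc=np.where(group == 0, 0.6, 0.4), scale=0.15, size=n)
labels = (scores + rng.normal(0, 0.1, n) > 0.5).astype(int)

def metrics(threshold_a, threshold_b):
    """Accuracy and demographic-parity gap for group-specific thresholds."""
    thresholds = np.where(group == 0, threshold_a, threshold_b)
    preds = (scores > thresholds).astype(int)
    accuracy = (preds == labels).mean()
    # Parity gap: difference in positive-prediction rates between the groups.
    parity_gap = abs(preds[group == 0].mean() - preds[group == 1].mean())
    return accuracy, parity_gap

# A single shared threshold maximizes accuracy but leaves unequal positive rates;
# group-specific thresholds shrink the parity gap at some cost in accuracy.
print("shared threshold:    acc=%.3f  parity gap=%.3f" % metrics(0.5, 0.5))
print("adjusted thresholds: acc=%.3f  parity gap=%.3f" % metrics(0.55, 0.45))
```

Which point on this accuracy-fairness curve is acceptable is exactly the kind of conscious trade-off decision the course is concerned with.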
Session 1 (9:00-10:30):
- Introduction to Trustworthy AI – Mapping the Landscape
- Explainability/Interpretability
- Privacy-Preservation
- Fairness/Bias
- Security
- Safety
Session 2 (11:00-12:30):
- Trustworthy AI – Regulation & Testing/Certification
- The EU AI Act
- The Trustworthy AI Framework of the EU High-Level Expert Group on AI (HLEG)
- AI Standardization
- Testing/Certification
- Discussion