This course provides a practical introduction to AI regulation and the development of trustworthy AI systems. It is intended for professionals with a background in AI, data science, law, compliance, or management, who want to understand how to design, deploy, and govern AI systems in line with the EU AI Act and principles of trustworthy AI.
Students will learn about the legal and regulatory framework for AI, including the classification of AI systems by risk level and the enforcement timelines under the EU AI Act. They will explore key principles of trustworthy AI, such as fairness, transparency, explainability, and accountability, and gain skills in designing compliance roadmaps, performing AI risk assessments, and implementing governance measures across the AI lifecycle. The course also covers organizational readiness, participatory design, and strategies for embedding trustworthiness in AI development.
Instruction type: Asynchronous online modules + optional live Q&As/workshops.
Learning activities: Case studies, expert videos, scenario-based and co-design learning.
Examination: Quizzes and a practical project (e.g., an AI risk analysis or compliance strategy).

Course responsible: