Yesterday Seray gave an excellent presentation on a currently trending topic: the Artificial Intelligence Act in the EU.
As of March 2024, the EU is in the process of enacting the European Artificial Intelligence Act. The Act's goal is to ensure the safety and fundamental rights of citizens by providing clear requirements and obligations for specific uses of AI, as well as to foster trustworthy AI, promote innovation, and position Europe to play a leading role in this field globally.

As the first of its kind, the Act takes a risk-based approach: the higher the potential risks of an AI application, the stricter the compliance requirements. To this end, the regulation distinguishes four categories of risk. Accordingly, a given use of AI is either prohibited altogether or subject to compliance obligations, ranging from mere transparency duties to strict compliance measures such as risk assessment and mitigation systems, detailed documentation, or human oversight. Additionally, so-called Codes of Practice are yet to be drawn up in collaboration with model providers and stakeholders; these will define certain aspects of the Act and flesh out benchmarks for the obligations. To ensure enforcement, the European Commission has established a European AI Office, whose duty is to oversee implementation together with the member states and to foster collaboration, innovation, and research in AI.

The European approach, with its attempt to balance acceptable risk against innovation, is certainly heading in the right direction. However, the criticism it has drawn is justified: the Act creates legal uncertainty for affected entities and raises a number of legal challenges, as its broad scope and open wording may cause more hindrance than clarity, making continuous legal development and supervision indispensable.