Blog 11.3.2024

EU AI Act shapes AI and the ways we apply it


Artificial intelligence (AI), a product of digitalisation, has captured everyone’s attention with how it may shape societies. At the same time, through norms such as legislation, we shape AI. Let’s direct our attention to that as well.

My first experience with the intersection of legislation and AI was in the early 2010s, when managing field trials on the user experience of an AI video remixing prototype. In the field trials we studied questions related to human-AI collaboration [1, 2]. For the project to run smoothly, it was crucial to negotiate copyright contracts with the Finnish copyright organisation TEOSTO. Together, we were able to shape a contract that provided us with a sandbox to trial AI innovations in a media context.

Legislation is a formally enforced norm, enacted by parliament and aimed at, for example, maintaining order, protecting rights, and ensuring safety. However, there is often a tension between legislation and technology. Creating laws takes time, and regulation is sometimes seen as hampering innovation. AI, on the other hand, evolves at great speed and has tremendous potential to increase, for example, efficiency and well-being. This has sparked a global debate on the relationship between AI and law, and on what a responsible approach to AI is.

To encourage a responsible approach to AI, the European Union has recently agreed on the EU AI Act, the purpose of which is to protect citizens from the negative effects of AI without hindering innovation.

What is the EU AI Act and why should I care?

The aim of the Act is to set rules on AI that address risks to people’s health, safety, and fundamental rights, as well as to the environment. Although the details are still under discussion at the time of writing, the current understanding is that the Act will become fully applicable 24 months after it enters into force and will set requirements depending on AI systems’ risk categories [3]. The Act will impact all public and private entities using AI systems in the EU or affecting EU residents. In the current version, AI is defined as “a machine-based system designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.” This is aligned with the OECD’s definition.

AI systems will be classified and regulated based on four risk categories:  

Minimal risk – Most AI systems. These will follow existing legislation and there will be no additional requirements.

High risk – AI systems considered to potentially have a negative impact on safety or fundamental rights. There will be a list of high-risk areas; for example, healthcare and educational systems are currently classified as high risk. These systems will also have to undergo a conformity assessment, which demonstrates that the system complies with the requirements for trustworthy AI, such as proper data quality, transparency, and human oversight.

Unacceptable risk – Uses of AI in this category would be banned, as they are considered to violate the EU’s core values from a fundamental rights perspective. Examples include using AI for social scoring, behavioural manipulation, and scraping of facial images.

Transparency risk – Transparency requirements will be set for certain specific AI systems. For example, users of chatbots should be made aware that they are interacting with a machine.
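As a purely illustrative sketch (not legal advice), the four-tier logic above can be thought of as a lookup from a use case to an obligation level. The example mappings below are hypothetical simplifications of the use cases named in the Act’s drafts; a real classification would require legal analysis of the final text.

```python
from enum import Enum


class RiskTier(Enum):
    """The four risk categories outlined in the EU AI Act (illustrative labels only)."""
    UNACCEPTABLE = "banned"
    HIGH = "conformity assessment required"
    TRANSPARENCY = "disclosure obligations"
    MINIMAL = "existing legislation applies"


# Hypothetical example mappings; real classification depends on the final legal text.
EXAMPLE_USE_CASES = {
    "social scoring": RiskTier.UNACCEPTABLE,
    "healthcare diagnostics": RiskTier.HIGH,
    "educational admission scoring": RiskTier.HIGH,
    "customer service chatbot": RiskTier.TRANSPARENCY,
    "spam filter": RiskTier.MINIMAL,
}


def obligations(use_case: str) -> str:
    """Return the illustrative obligation label for a named use case."""
    tier = EXAMPLE_USE_CASES.get(use_case, RiskTier.MINIMAL)
    return f"{use_case}: {tier.name} -> {tier.value}"
```

For instance, `obligations("customer service chatbot")` yields the transparency tier, mirroring the chatbot disclosure example above.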

In addition, general-purpose AI models, such as large generative AI models, are addressed. They will be considered especially from a systemic perspective, as a powerful model can have a significant impact on society and, for example, propagate harmful biases. Providers of general-purpose AI models will also have transparency obligations and will need to ensure that copyright law is respected when training their models.

The recent New York Times’ lawsuit against OpenAI, in which NYT accuses OpenAI of using NYT content to train its language model without permission, highlights the relevance of copyright issues [4]. This situation also mirrors potential problems we could have encountered in our AI video project had we not involved the copyright organization.
 
At the time of writing this blog post, new information on the Act is being revealed almost daily. Just recently, an 892-page draft text was leaked [5]. Thus, there is a lot happening for those who want to stay informed.

How to lead AI integration in a people-driven way?

Stay informed and act on AI developments

The EU AI Act and other regulatory initiatives, such as the White House’s blueprint for an AI bill of rights [6] and the GDPR, reflect the ongoing interplay between technology and societal norms.

For decision-makers looking to fully utilise AI, it’s crucial to understand its multifaceted nature. This requires staying informed about changes in legislation, technology, social norms, and societal impacts.

AI changes the role of humans in organisations’ processes, practices, and culture. In this change, it is important to maintain a mindset that aims for a responsible approach to AI: one that is inclusive, considers well-being, and sees AI as a collaborator rather than a replacement for humans. We also highlight these points in our Ethical Design guide [7].

At Gofore, we specialise in navigating the complexities of AI, and we are happy to innovate and explore together to figure out what a responsible and human-centred approach to AI means for your organisation.

Sources:

  1. Vihavainen, S. et al. Video as memorabilia: user needs for collaborative automatic mobile video production. CHI 2012, ACM Press. https://dl.acm.org/doi/10.1145/2207676.2207768
  2. Vihavainen, S. et al. We Want More: Human-Computer Collaboration in Mobile Social Video Remixing of Music Concerts. CHI 2011, ACM Press. https://dl.acm.org/doi/10.1145/1978942.1978983
  3. EU Commission, Artificial Intelligence – Questions and Answers. https://ec.europa.eu/commission/presscorner/detail/en/QANDA_21_1683
  4. New York Times sues Microsoft and OpenAI for ‘billions’. BBC News. https://www.bbc.com/news/technology-67826601
  5. Leaked draft text of the EU AI Act. https://twitter.com/BertuzLuca/status/1749326217612820558
  6. Blueprint for an AI Bill of Rights: Making Automated Systems Work for the American People. https://www.whitehouse.gov/ostp/ai-bill-of-rights/
  7. Ethical Design booklet. Gofore 2022. https://gofore.com/en/ethical-design-booklet/


Sami Vihavainen

Principal Designer

Sami has over 15 years of experience in understanding and designing interactions between people and technology. He has worked in various roles in both academic and business environments, and has, for instance, completed a doctoral thesis related to the user experience of artificial intelligence.

Sami’s objective is to design technologies and services that increase the well-being of people and societies. He believes that as digitalisation and AI play an increasing role both in everyday life and in solving global challenges, it is ever more important to take society-level goals, ethics, and sustainability into account in design.
