Blog 5.8.2024

The impacts of the AI Act on public sector entities  


Public sector entities and public servants must mostly follow the same rules, regulations, and laws in their operations whether they use AI or not. The amount of AI-specific regulation has, however, skyrocketed now that the EU AI Act has entered into force.

The AI Act will apply from August 2nd, 2026, which is also the deadline for fulfilling most of its obligations. Many of those obligations concern public sector entities, so now is a good time to evaluate your own operations from the point of view of AI Act compliance. Certain obligations must be fulfilled 6 or 12 months after entry into force, so at the very least you should make sure your organization meets those deadlines.

Is my organization an AI system provider? 

The AI Act sets obligations for providers of AI systems and, to a lesser extent, for deployers of AI systems. A deployer means a person or an organization using an AI system under its authority in the course of a professional activity. A provider is anyone who develops an AI system or a general-purpose AI model, or has one developed, and places it on the market or puts it into service under their own name or trademark, whether for payment or free of charge.

The first thing to note about the AI Act is the definition of a provider above. For example, a public sector entity that commissions an AI feature for its website and offers it under its own name is a provider under the AI Act and must, therefore, fulfill a provider's obligations. This means your organization may end up being considered a provider even if it does not develop or sell AI systems.

What kind of risks are associated with my organization’s AI systems? 

The obligations set in the AI Act depend both on whether your organization is a provider or a deployer of an AI system and on the risk level associated with the system. AI systems that pose negligible or very low risk, for example video game elements that use AI, do not fall under the scope of the AI Act at all. The AI systems that do fall under its scope are divided into three risk-based categories: prohibited, high-risk, and other systems. The classification is based on risks to health, safety, and fundamental rights, including the risk of influencing the outcome of decision-making. Because public sector operations typically concern exactly these areas, it is safe to assume that public sector entities are especially likely to use high-risk systems and other systems that fall under the scope of the AI Act.

Prohibited AI practices include, for example, AI-based social scoring, real-time remote biometric identification in publicly accessible spaces (allowed only in exceptional circumstances), subliminal, manipulative, or deceptive techniques, and exploiting the vulnerabilities of, for example, children or elderly people. I hope and believe that European public sector entities would not even consider these types of use regardless of the AI Act.

High-risk systems include, for example, AI systems intended to be used as safety components in road traffic or in the supply of water, gas, heating, or electricity; in recruitment; to make decisions on the promotion or termination of work-related contractual relationships; or by a judicial authority to research and interpret facts and the law and to apply the law to a concrete set of facts.

Providers of high-risk AI systems have obligations regarding, for example, conformity assessment, a quality management system and quality assurance, documentation, ensuring that effective human oversight is possible, and building accuracy, robustness, and cybersecurity into the AI system through its design and development. The duties of deployers of high-risk AI systems include, for example, assigning human oversight, ensuring the system is used according to the provider's instructions, and keeping the automatically generated logs. Deployers that are public authorities or EU institutions must also register in an EU database. Deployers that are bodies governed by public law, or private entities providing public services, must carry out an assessment of the impact on fundamental rights that the use of the system may produce before deploying a high-risk AI system.
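
To make the log-keeping duty concrete, below is a minimal sketch in Python of what automatic, per-decision logging could look like on the deployer's side. The field names and the example system are invented purely for illustration; the AI Act does not prescribe a log format, only that the automatically generated logs under the deployer's control are kept for an appropriate period.

    import json
    import logging
    from datetime import datetime, timezone

    # Hypothetical audit log for a deployed high-risk AI system.
    # Field names are illustrative; the AI Act does not prescribe a format.
    logger = logging.getLogger("ai_audit")
    handler = logging.FileHandler("ai_audit.log")
    handler.setFormatter(logging.Formatter("%(message)s"))
    logger.addHandler(handler)
    logger.setLevel(logging.INFO)

    def log_decision(system_id: str, input_ref: str, output: str, overseer: str) -> None:
        # One automatically generated entry per AI-assisted decision.
        entry = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "system_id": system_id,
            "input_ref": input_ref,  # a reference to the input, not the data itself
            "output": output,
            "human_overseer": overseer,  # the person assigned to oversee this decision
        }
        logger.info(json.dumps(entry))

    log_decision("recruitment-screener", "application-1234", "shortlisted", "hr.reviewer")

In practice, the retention period, the exact fields, and where the logs are stored would be decided in your organization's governance model and documented accordingly.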

General-purpose AI models 

The AI Act also contains obligations regarding general-purpose AI models. A general-purpose AI model means an AI model that displays significant generality and is capable of competently performing a wide range of distinct tasks. General-purpose AI models are divided into models with systemic risk, meaning models with high-impact capabilities, and models without systemic risk.

Providers of models with systemic risk must notify the Commission of the model, perform model evaluation including adversarial testing, assess and mitigate possible systemic risks at the EU level, keep track of, document, and report serious incidents and corrective measures, and ensure cybersecurity, including of the model's physical infrastructure.

Providers of models both with and without systemic risk must draw up and keep up to date technical documentation, provide certain information to AI system providers who integrate the model into their AI systems, have a policy to comply with EU copyright law, and make publicly available a sufficiently detailed summary of the content used for training the model.

What general obligations does the AI Act set? 

Both providers and deployers of AI systems must take measures to ensure that their staff and other people dealing with the operation and use of AI systems on their behalf have a sufficient level of AI literacy. This means the skills, knowledge, and understanding that allow them to make an informed deployment of AI systems and to understand their rights and duties, as well as awareness of the opportunities and risks of AI and the possible harm it can cause. The AI Act's description of AI literacy is still quite general, but it should become more concrete once the EU and other authorities publish additional material on the AI Act.

Another obligation that concerns both providers and deployers is transparency. Providers must ensure that people interacting directly with an AI system are informed of, or are able to recognize, that they are interacting with an AI system. They must also make sure that AI-generated synthetic image, audio, video, and text content is marked as artificially generated or manipulated in a machine-readable format and is detectable as such.
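
The AI Act does not prescribe a specific marking technique; standardized provenance schemes such as C2PA content credentials and watermarking are emerging candidates. Purely as an illustration, a minimal Python sketch of embedding a machine-readable label in a generated PNG image's metadata using the Pillow library could look like this (the file names and metadata keys are invented for the example):

    from PIL import Image
    from PIL.PngImagePlugin import PngInfo

    # Open a hypothetical AI-generated image and attach a machine-readable label.
    image = Image.open("generated.png")
    metadata = PngInfo()
    metadata.add_text("ai-generated", "true")           # illustrative key, not a standard
    metadata.add_text("generator", "example-model-v1")  # which system produced the content
    image.save("generated_labeled.png", pnginfo=metadata)

A real deployment would likely rely on a standardized provenance scheme rather than ad hoc metadata keys, so that the marking is reliably detectable by other tools.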

Deployers must inform people who are subject to an emotion recognition system or a biometric categorization system. Deployers must also disclose so-called deepfakes as artificially generated or manipulated. A deepfake means AI-generated or manipulated image, audio, or video content that resembles existing persons, objects, places, entities, or events and would falsely appear to a person to be authentic or truthful. In addition, deployers must disclose text published with the purpose of informing the public on matters of public interest as artificially generated or manipulated, unless the text has undergone human review or editorial control.

Entry into force   

As I wrote above, the AI Act has entered into force and will apply from August 2nd, 2026. There are, however, certain exceptions to this. The most important exceptions for providers and deployers of AI systems are that the general provisions of the AI Act (including the obligation on AI literacy) and the provisions on prohibited AI practices will apply from February 2nd, 2025, and the provisions concerning general-purpose AI models will apply from August 2nd, 2025.

For AI systems that were already placed on the market or put into service at the time of entry into force, the deadlines are: 

  • Other than concerning prohibited use, operators of high-risk AI systems that have been placed on the market or put into service before August 2nd, 2026, must comply with the AI Act only if, as from that date, those systems are subject to significant changes in their designs.  
  • Other than concerning prohibited use, providers and deployers of high-risk AI systems intended to be used by public authorities shall take the necessary steps to comply with the requirements and obligations of the AI Act by August 2nd, 2030. 
  • Providers of general-purpose AI models that have been placed on the market before August 2nd, 2025, shall take the necessary steps to comply with the obligations of the AI Act by August 2nd, 2027. 
  • Other than concerning prohibited use, AI systems that are components of certain large-scale EU IT systems and that have been placed on the market or put into service before August 2nd, 2027, shall be brought into compliance with the AI Act by December 31st, 2030. 

Ending remarks 

Public sector entities must take the obligations of the AI Act into account not only when planning and implementing the use of AI, but also, for example, in procurement, to ensure that the procured systems are AI Act compliant. This may be challenging, as the exact, concrete meaning of many AI Act provisions is still vague. The application of existing laws and regulations, such as copyright law, to the use of AI is also somewhat unclear, for example because there is little case law to guide interpretation.

If you have questions or concerns regarding the AI Act, help is available. Your member state may not yet have designated its national competent authorities under the AI Act, but that should happen soon. I also believe that authorities at both the national and the EU level will produce material clarifying the AI Act. The AI Act itself promises as much, as the tasks of the AI Office and the AI Board include producing documents that assist in complying with the AI Act. However, it is not advisable to just wait. Help with AI Act compliance is of course also available from us at Gofore: our experienced technical and legal experts will be happy to help you. Get in touch! 

The most risk-free option would certainly be not to use AI at all. However, it very rarely makes sense to intentionally miss out on technological development. This is especially true of AI, since there are significant benefits up for grabs, for example through operational efficiency and quality assurance. A careful legal risk assessment and adequate internal processes and governance models are effective ways to keep your own operations sustainable, also from a compliance and legal standpoint. They make it possible to enjoy the benefits of AI within your organization without taking unreasonable legal or other risks. Once again, investing resources in the short term enables improved quality and efficiency in the long term.


Jenni Miettinen

ICT Procurement Lawyer

Jenni is a lawyer specialising in public procurement and ICT law, with wide-ranging experience, for example, from working as an attorney-at-law, as an in-house procurement lawyer at a large contracting entity, and from court training.
