The second stage of the EU AI Regulation (AI Regulation) comes into force on February 2, 2025. It is intended to make the use of artificial intelligence in the EU safer, more transparent and more accountable. Not only providers and developers of AI systems are held to account: companies and other organizations that use AI technologies in their day-to-day operations are also affected.
In concrete terms, this means that whether you run an online store with AI-generated texts and images, create content for university and college websites or use AI-based tools to analyze donation data in fundraising, the new rules can have a significant impact on the way you work. The main focus is on transparency requirements, mandatory risk assessments and training employees in so-called "AI competence".

Background: Reasons for the AI Regulation
The EU AI Regulation was created to promote the opportunities of artificial intelligence while minimizing the risks for people and society. Artificial intelligence is already revolutionizing many areas of European citizens' lives, from medicine and education to marketing and administration. However, the new opportunities are also accompanied by new risks, such as discrimination by algorithms, violations of privacy or erroneous automated decisions.
The EU AI Regulation therefore follows a risk-based approach: depending on the potential risk, AI is divided into different categories - from low-risk to prohibited. This is intended to enable innovation while guaranteeing fundamental rights, data protection and security. The transparency requirements, mandatory training for employees and the documentation of AI applications affect almost all companies and organizations in the EU.
The 5 risk categories of the AI Regulation
- General-purpose AI: Operation of AI systems such as GPT or Llama
- Minimal risk: Spam filters, AI used in video games
- Limited risk: Chatbots, AI content or deepfakes on social media
- High risk: Biometric recognition, employee evaluations, admissions to education
- Prohibited: Manipulation, social scoring, predictive policing
Who is affected?
The AI Regulation affects a wide range of stakeholders, from developers and providers of AI systems to companies that merely use AI-based tools. The new rules are particularly relevant for operators and users, for example in e-commerce, the education sector or non-profit organizations.
The regulation distinguishes between:
- Providers of AI systems (e.g. developers of software such as chatbots or HR tools),
- Operators (e.g. companies that use these tools),
- Importers and distributors who bring AI technologies into the EU.
Organizations that use AI in customer communication, fundraising or for administrative tasks such as personalized marketing campaigns are also covered by the regulation. For example, operators must ensure that the AI systems they use comply with transparency requirements and that their employees are adequately trained.
Private individuals who use AI for purely private purposes are not affected, nor are areas outside the regulation's scope such as national security and the military. Open-source software is generally exempt, unless it is used in prohibited or high-risk contexts.
Practical requirements
The AI Regulation brings with it clear obligations for companies and organizations that use AI. Even if you do not develop AI systems, but only use ready-made tools such as chatbots, HR systems or AI-supported marketing solutions, you must ensure that you comply with the new requirements.
- Transparency obligations: Users of AI applications must be informed that they are interacting with AI. AI-generated content, such as texts or images, must be labeled accordingly, especially in the case of deepfakes (a minimal labeling sketch follows this list).
- Documentation: Companies should fully document the use of AI systems. This includes technical details, transparency information and logs on the use of AI. It is also important that all AI-related processes remain traceable in the long term in the event of possible audits.
- AI expertise: Employees who work with AI systems must be trained. This includes technical know-how, an awareness of risks and clear guidelines for dealing with AI, for example when labeling AI-generated content.
- Risk management: For high-risk AI systems (e.g. HR tools or AI in medicine), a comprehensive risk assessment is required that also includes data protection aspects.
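To make the transparency and documentation obligations more tangible, the following sketch shows how AI-generated text could be labeled and its use logged for later audits. The label wording, function names and log format are our own illustrative assumptions; the regulation does not prescribe a specific technical implementation.

```python
# Minimal sketch: labeling AI-generated content and keeping an audit log.
# All names, wordings and formats here are illustrative assumptions,
# not requirements taken from the AI Regulation itself.
import json
from datetime import datetime, timezone

AI_LABEL = "This text was generated with the help of AI."  # assumed wording

def label_ai_content(text: str) -> str:
    """Append a visible AI disclosure to generated content."""
    return f"{text}\n\n[{AI_LABEL}]"

def log_ai_usage(system: str, purpose: str, logfile: str = "ai_usage_log.jsonl") -> None:
    """Write a traceable record for each AI-assisted output."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system": system,    # e.g. name/version of the AI tool used
        "purpose": purpose,  # e.g. "product description", "newsletter"
    }
    with open(logfile, "a", encoding="utf-8") as f:
        f.write(json.dumps(record, ensure_ascii=False) + "\n")

# Usage example
description = label_ai_content("A lightweight rain jacket for everyday use.")
log_ai_usage(system="gpt-4o via vendor API", purpose="product description")
```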

What companies should do now
In order to implement the requirements of the AI Regulation on time, companies should take action now and introduce the following measures:
- AI inventory: Record all AI systems used or planned in your company. Document their functions, areas of application and potential risks. This creates transparency and serves as a basis for further steps (a sample inventory entry follows this list).
- Introduction of AI guidelines: Establish internal guidelines for dealing with AI. These include, for example, the mandatory labeling of AI-generated content or the handling of personal data in AI applications.
- Employee training: Ensure that employees who work with AI have a sufficient level of "AI competence". This includes technical knowledge as well as legal principles and an awareness of ethical risks.
- Risk assessment: Carry out a thorough risk assessment for high-risk applications. Data protection and security aspects should also be taken into account.
- Documentation and transparency: Keep complete documentation of your AI systems. This should not only meet internal requirements, but also stand up to external audits.
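An AI inventory does not require special software; a structured record per system is enough to get started. The sketch below shows one possible shape of such an entry. All field names and example values are our own assumptions and should be adapted to your documentation standards.

```python
# Sketch of an AI inventory entry; field names are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class AIInventoryEntry:
    name: str                    # tool or system name
    provider: str                # vendor or internal team
    purpose: str                 # what the system is used for
    risk_category: str           # e.g. "minimal", "limited", "high"
    personal_data: bool          # does it process personal data?
    transparency_measures: list[str] = field(default_factory=list)
    identified_risks: list[str] = field(default_factory=list)

inventory = [
    AIInventoryEntry(
        name="Support chatbot",
        provider="External SaaS vendor",
        purpose="Answering customer questions in the online store",
        risk_category="limited",
        personal_data=True,
        transparency_measures=["Chat window discloses that users talk to an AI"],
        identified_risks=["Incorrect answers", "Unintended data disclosure"],
    ),
]
```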
Supervision and high fines for infringements
The AI Regulation is monitored by a multi-layered supervisory network involving both EU-wide and national authorities. In Germany, supervision is divided by sector: Data protection supervisory authorities are responsible for high-risk AI systems in specific areas such as law enforcement or border control. Systems in other areas such as critical infrastructure or education are likely to be regulated by the market surveillance authorities.
The EU Commission has announced that it will set up a European AI Office. Among other things, the new office will be responsible for general-purpose AI systems. Due to the federal system in Germany, a large number of authorities are expected to be involved. To cope with this complexity, a so-called "single point of contact" is to be set up. This central office will serve as a point of contact for companies and citizens to report infringements and submit complaints.
The AI Regulation provides for severe penalties for infringements, some of which are higher than under the GDPR. Companies face fines of up to 35 million euros or 7% of their global annual turnover, whichever is higher. The focus is particularly on breaches of transparency obligations, inadequate risk assessments or the use of prohibited AI practices.
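The "whichever is higher" rule means that the 7% threshold dominates for large companies. A short illustrative calculation (the turnover figures are chosen by us):

```python
# Illustrative calculation of the maximum fine under the AI Regulation:
# up to EUR 35 million or 7% of global annual turnover, whichever is higher.
def max_fine(annual_turnover_eur: float) -> float:
    return max(35_000_000, 0.07 * annual_turnover_eur)

print(max_fine(100_000_000))    # EUR 35 million (7% would only be 7 million)
print(max_fine(1_000_000_000))  # EUR 70 million (7% exceeds the fixed cap)
```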
Act now and be prepared
The EU AI Regulation presents companies and organizations with new challenges, but also offers opportunities to optimize internal processes and build trust. Those who create transparency at an early stage, train employees and introduce AI guidelines can avoid legal risks and gain competitive advantages in the long term.
We support you every step of the way: from the risk assessment of your AI systems to the creation of legally compliant documentation, we help you meet the requirements of the AI Regulation. Together, we develop measures that are not only legally compliant, but also efficient and future-oriented. In this way, we ensure that you can make optimal and secure use of the opportunities that AI offers.
Note: This article is for general information purposes only and does not constitute legal advice. The content presented here was not written by a lawyer and cannot replace individual legal advice. No guarantee is given for the correctness, completeness and topicality of the information. The author accepts no liability for damages resulting from the use of the information provided.