Key legal considerations when developing an AI system: Ensure GDPR compliance
As technology advances and the regulatory spotlight intensifies on artificial intelligence, developers and users of AI systems must navigate a fast-evolving legal landscape. From the EU’s AI Act to existing frameworks like the GDPR, regulatory compliance is a fundamental part of building responsible, trustworthy AI.
Whether you’re a provider, deployer, or another kind of operator, this series of three articles analyses the most important legal considerations to keep in mind when developing an AI system.
This is the second article in the series, which focuses on ensuring you remain GDPR compliant. For the first article, visit Know your risk classification.
Ensure GDPR compliance
AI and the GDPR are not a natural fit. Core principles such as purpose limitation, transparency, and data minimisation can sit uneasily alongside the broad data usage and ‘black box’ nature of AI models.
There is still legal uncertainty as to whether large language models contain personal data at all:
- The Hamburg DPA published a widely noted discussion paper stating that, as a general rule, they do not (cf. the (German) opinion at Dokumentvorlage zur einheitlichen Gestaltung)
- The EDPB (Opinion 28/2024 of 17 December 2024, cf. Opinion 28/2024 on certain data protection aspects related to the processing of personal data in the context of AI models), on the other hand, takes the view that this cannot be ruled out in every individual case
That said, in many cases data protection authorities are highly likely to deem personal data to be involved in AI usage – which would trigger the GDPR’s requirements for lawful data processing.
What should you do?
To address these concerns, AI developers should:
- Exclude personal data from training sets or obtain a valid legal basis (e.g. consent)
- Prevent personal data from being exported for general model training
- Guarantee high data quality and enable correction or deletion of personal data
- Provide users with clear guidance and transparency around AI tool usage
- Implement robust IT security measures (e.g. safe authentication methods)
- Document all of the above for audit readiness — including ROPA, risk assessments, and DPIA where applicable
As the EDPB itself notes, the GDPR should “support responsible AI” — but that support must be earned through sound compliance measures and documentation.
Conclusion
From compliance to content ownership and liability, legal issues around AI development are evolving rapidly. The key is to plan early, document thoroughly, and integrate legal, ethical, and technical considerations from the ground up.
At our upcoming AI Web Summit, we’ll be diving deeper into these issues and providing practical insights to help your organisation future-proof its AI systems. We look forward to continuing the conversation there and exploring specific examples of innovative, data-driven use cases. Sign up here: Preparing Cross-Border Businesses for Emerging AI Regulations: Navigating Legal Uncertainty in AI-Powered Business Models