Key legal considerations when developing an AI system: Clarify IP ownership and manage liability risks

This is the third article in this series, covering how to clarify IP ownership and manage liability risks. Visit Know your risk classification and Ensure GDPR compliance to read the first two articles in this series.

As technology advances and the regulatory spotlight intensifies on artificial intelligence, developers and users of AI systems must navigate a fast-evolving legal landscape. From the EU’s AI Act to existing frameworks like the GDPR, regulatory compliance is a fundamental part of building responsible, trustworthy AI.

Whether you’re a provider, deployer, or other kind of operator, this series of three articles analyses the most important legal considerations to keep in mind when developing an AI system.


Clarify IP ownership and manage liability risks

AI throws up novel questions about intellectual property and liability. Who owns the output of an AI system? And who is responsible when things go wrong?

Under current law, the following basic principles apply:

  • Copyright protection generally requires human authorship
  • Patent rights are only available to natural persons (German Federal Court of Justice, X ZB 5/22, ruling of June 11th, 2024)

Another highly relevant topic is whether the use of third-party content for training purposes is lawful:

  • Text and data mining may be permitted under certain conditions (District Court of Hamburg, 310 O 227/23, ruling of Sept 27th, 2024)
  • A major test case is pending in which a large copyright holder is proceeding against a leading AI company: GEMA vs. OpenAI (announced Nov 13th, 2024)

On the liability front, the withdrawal of the proposed EU AI Liability Directive means organisations will now fall back on national contract and tort law. This creates:

  • Increased exposure for AI errors, defects or malfunction
  • Potential penalties under the AI Act
  • Greater need for robust contractual protections and quality controls

What should you do?

Key risk mitigation steps in the areas outlined above include:

  • Ongoing monitoring and output validation (e.g. “human-in-the-loop” where appropriate)
  • Contractual clauses addressing risk allocation, AI use, and potential impacts on results & deliverables
  • Checking AI outputs for possible IP infringements (e.g. well-known characters, trademarked language)
  • Protecting your own IP using both technical and legal mechanisms
  • Keeping thorough documentation of all evaluations and risk management actions for audit readiness

Conclusion

From compliance to content ownership and liability, legal issues around AI development are evolving rapidly. The key is to plan early, document thoroughly, and integrate legal, ethical, and technical considerations from the ground up.

At our upcoming AI Web Summit, we’ll be diving deeper into these issues and providing practical insights to help your organisation future-proof its AI systems. We look forward to continuing the conversation there and exploring specific examples of innovative, data-driven use cases. Sign up here: Preparing Cross-Border Businesses for Emerging AI Regulations: Navigating Legal Uncertainty in AI-Powered Business Models

Dr. Peter Schneidereit
Partner - IT Contracts, Data Protection
Spencer West