Artificial Intelligence and IP rights: threats and opportunities | Dentons


The EUIPO study

On March 2, 2022, the European Union Intellectual Property Office (EUIPO) published its “Study on the impact of artificial intelligence on the infringement and enforcement of copyright and designs” (the Study), describing current and future connections between IP law and artificial intelligence and the implications of technological transformation for the IP system.

The Study was promoted and carried out by the “Impact of Technology Expert Group,” set up in 2019 by the EUIPO to better assess the impact of emerging technologies on the infringement and enforcement of intellectual property rights. The idea behind the group’s approach, the so-called “double-edged sword,” is that all emerging and disruptive technologies have the potential to be used as means to enhance IP protection, but can also serve as tools for IP infringement.

Building on this principle, the Study illustrates the threats and opportunities raised by AI technologies. It sets out several scenarios along two storylines, physical products and digital content: for example, counterfeiters using machine learning systems to identify ports where their goods are more likely to clear customs, or IP offices using AI-supported blockchain to protect information in registration systems.

These hypothetical cases are significant examples of the twofold nature of AI from an IP perspective.

Practical scenarios: infringement and enforcement of IP rights on content-sharing platforms

The Study explains how AI technologies can be used in various scenarios involving the upload of protected materials online, a topic of particular interest in light of Directive (EU) 2019/790 (the Copyright Directive, already implemented in Italy and 15 other EU member states) and, specifically, its provisions on the liability of online content-sharing service providers for infringing user-generated content.

For example, machine learning tools can be deployed to remove the digital dots and watermarks used to track the distribution of unauthorized copies of copyrighted works online, as well as to generate “deep fakes,” defined in the Study as “synthetic media in which an individual in an image or video is replaced with another’s likeness.” Deep fakes are produced through a specific type of machine learning known as “generative adversarial networks”: models that learn to produce new data with the same characteristics as the “training” data fed into the network, so that artificial images, videos or sounds can be created that resemble natural and/or original ones.
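For readers curious about the mechanics behind the Study’s definition, the adversarial idea can be illustrated with a deliberately tiny sketch: a “generator” learns to produce samples resembling real data while a “discriminator” learns to tell real from generated, each improving against the other. This is a toy illustration only, not any system referenced in the Study; all names (real_batch, generate, discriminate) and parameters are hypothetical, and real deep-fake generators are vastly larger neural networks.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Real" data: samples from a 1-D Gaussian the generator must imitate.
def real_batch(n):
    return rng.normal(loc=4.0, scale=0.5, size=(n, 1))

# Generator: one affine map from random noise to a sample.
# Discriminator: logistic regression returning P(sample is real).
w_g, b_g = rng.normal(size=(1, 1)), np.zeros(1)
w_d, b_d = rng.normal(size=(1, 1)), np.zeros(1)

def generate(z):
    return z @ w_g + b_g

def discriminate(x):
    return 1.0 / (1.0 + np.exp(-(x @ w_d + b_d)))

lr, n = 0.02, 64
for step in range(300):
    # Discriminator update: push D(real) toward 1 and D(fake) toward 0
    # (gradients of the binary cross-entropy loss w.r.t. its parameters).
    xr, z = real_batch(n), rng.normal(size=(n, 1))
    xf = generate(z)
    pr, pf = discriminate(xr), discriminate(xf)
    w_d -= lr * (xr.T @ (pr - 1) + xf.T @ pf) / n
    b_d -= lr * (np.mean(pr - 1) + np.mean(pf))

    # Generator update: adjust its parameters so that D(fake) moves
    # toward 1, i.e. the generated samples fool the discriminator.
    z = rng.normal(size=(n, 1))
    pf = discriminate(generate(z))
    dx = (pf - 1) @ w_d.T  # gradient flowing back into the fake samples
    w_g -= lr * (z.T @ dx) / n
    b_g -= lr * np.mean(dx, axis=0)
```

After the loop, samples drawn via generate() drift toward the real distribution: the same feedback dynamic that, at scale, lets GANs produce convincing synthetic faces and voices.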

On the other hand, authorities can harness that same technology in a positive way. Once the deep fakes have been identified, the authorities can use AI bots to identify on social media those components of the deep fake that constitute copyright and design infringement.

Computer vision applications can also be deployed in the IP field; for instance, they can determine whether videos are original or artificially generated.

The EUIPO also emphasizes the relevance of natural language processing technologies: tools that computers use to process data in order to understand the meaning of human language. Enforcement authorities can use such technologies to analyze users’ behavior and the content they share, finding correlations in datasets in order to prevent potential future infringements as well as to prove the origin of counterfeit content.
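As a concrete, much-simplified stand-in for the correlation techniques the Study alludes to, even the Python standard library can flag likely re-uploads of known protected text by measuring string similarity. This is purely illustrative: the sample strings are invented, and production systems would use far more robust NLP models rather than character-level matching.

```python
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    # Ratio in [0, 1] of matching subsequences between the two texts,
    # compared case-insensitively.
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

original = "All rights reserved. Unauthorized reproduction is prohibited."
reupload = "all rights reserved -- unauthorized reproduction is prohibited!"
unrelated = "Weather forecast for tomorrow: sunny with light winds."

# A lightly edited re-upload scores far higher than unrelated text,
# which is the kind of signal an automated screening pass can act on.
assert similarity(original, reupload) > similarity(original, unrelated)
```

In practice a platform would pair such a screening signal with the right-holder-supplied reference information mentioned in the Copyright Directive before disabling access to content.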

The new enforcement instruments offered by the Copyright Directive

AI technologies can certainly be used to enhance the detection, and possibly even the prediction, of IP violations. However, they are increasingly being used for infringing purposes as well. Both uses of AI will become ever more relevant under the new framework introduced by the Copyright Directive.

Under the new regime, user-generated content-sharing platforms must obtain the right holders’ authorization when they grant the public access to copyright-protected content uploaded by their users. Absent this authorization, platform operators will be held liable for copyright infringement, unless they manage to prove that: (i) they have made their best efforts to obtain authorization; (ii) they have made their best efforts to ensure the unavailability of works for which the right holders have provided the necessary information; and (iii) upon receiving a sufficiently substantiated notice from the right holders, they have acted expeditiously to disable access to, or to remove, the infringing content, and have made their best efforts to prevent future uploads of the same.

On one hand, the deployment of AI can tangibly help service providers identify infringing materials, lowering potential liabilities and reinforcing protection for right holders. On the other hand, the massive use of such technologies for infringing purposes can hinder the detection of possible violations, with increased liability risks for providers.

In conclusion, the increasing use of AI technologies presents both benefits and drawbacks from an IP perspective.¹ In the context of online content-sharing platforms, this calls for wider adoption of cutting-edge AI technologies by service providers to tackle infringing activities. Indeed, while online content-sharing service providers are not subject to a general monitoring obligation under the Copyright Directive, they must make best efforts to react expeditiously under certain circumstances.

AI is a “double-edged sword.” Service providers must make sure to have that sword and use it properly to comply with best-efforts requirements in defeating sophisticated AI-based infringements of IP rights.

  1. During Dentons’ AI Trilogy webinar, held on March 31, 2022, Professor Andrea Renda compared artificial intelligence to products considered to be dangerous. As he pointed out: “providing consumers or users of the AI systems with the information needed to understand what are the limitations and the capabilities and the intended use of these AI systems is not very different from many other products that are considered to be dangerous […] the idea is that when we use a pharmaceutical product we have the information on what that product is intended to do and how to use it with care. Handle it with care, not with fear, but with care at least”. The webinar recording is available at the following link: Dentons’ AI trilogy webinar series – Developing your AI strategy: the key areas you need to consider – YouTube.


