
Article 17 stakeholder dialogue: What have we learned so far

This analysis was previously published in two instalments on the Kluwer Copyright Blog (part 1, part 2).

As 2020 unfolds, the European Commission’s stakeholder dialogue pursuant to Article 17 of the Directive on Copyright in the Digital Single Market (CDSM directive) enters its third (and likely final) phase. After four meetings that focussed on gathering “an overview of the current market situation as regards licensing practices, tools used for online content management […] and related issues and concerns”, the next two (or more) meetings will finally deal with issues raised by the provisions in Article 17 of the CDSM directive. According to the Commission’s discussion paper for the meetings of 16 January and 10 February 2020, the objective of the third phase “is to gather evidence, views and suggestions that the services of the Commission can take into account in preparing the guidance pursuant to Article 17(10)”. 

In other words, after four meetings that have set the scene, the stakeholder dialogue will now address some of the thorny issues raised by Article 17. These include key concepts such as the best efforts obligations to obtain authorisation and to prevent the availability of content (Article 17(4)), as well as the safeguards for legitimate uses of content (Article 17(7)) and the complaint and redress mechanisms available to users (Article 17(9)). In preparation for these forthcoming discussions, it is worth recapitulating what we have learned since the stakeholder dialogue kicked off in October of last year.

Three takeaways from the stakeholder dialogue so far

After more than 25 hours of discussion (recordings of the four meetings can be found here: 1, 2, 3 and 4), there are three main insights that will likely have a substantial impact on the overall outcome of the stakeholder dialogue. These are the different motivations of different types of rightholders; the technical limitations of Automated Content Recognition (ACR) technologies; and the general lack of transparency with regard to current rights management practices. The first two are discussed directly below; the third is covered in the second half of this post.

Rightholders are divided by business model

On the rightholder side, the stakeholder dialogue is dominated by the music and audiovisual (AV) industries who, by and large, represent two completely different approaches to making their content available. While there are internal differences when it comes to the details of how they operate, rightholders from the music industry generally aim to license their works to as many users and intermediaries as possible. As a result, the various music industry stakeholders have been tireless in making it clear that, from their perspective, Article 17 is about licensing and that discussions about filtering and removal are a distraction. 

On the other hand, rightholders from the AV industry have, by and large, made it clear that they are not interested in broad licensing of their content to platforms. Their business models are built on selectively licensing different distribution channels, and they see the general availability of their works on UGC platforms as a threat to their commercial interests. As a result, the various AV industry stakeholders have made it very clear that they expect to rely heavily on the obligation on platforms to make best efforts to ensure the unavailability of works. In other words, for the AV industry Article 17 is very much about filtering/blocking content.

Other rightholders present at the dialogue mostly align with one of these two positions. Rightholders from the photo and visual arts sectors are making the case that platforms will need to start licensing their repertoires (unfortunately for them, both YouTube and Facebook continue to give them the cold shoulder), while literary publishers have sided with the AV industry in pointing out that broad availability of their works on UGC platforms runs counter to their commercial interests.

This makes it clear that, once put into practice, Article 17 will be about both licensing and automated filtering/blocking of content. In this context it is interesting to see that the music industry (which has been the driving force behind Article 13/17) gets to play the good cop (“it’s all only about licensing”) while the AV industry, which (at times reluctantly) supported the music sector in its efforts to get Article 13 adopted, will now be stuck with the bad cop role trying to push through automated filtering solutions despite all their shortcomings (see below). 

One of the main challenges of the next meetings will be to build a common understanding of Article 17 that takes these very different perspectives into account. It is clear that Article 17 cannot be a vehicle to force specific business models on specific sectors. As such, it must remain possible for rightholders who wish to do so to keep content off the platforms, but, in line with the user rights safeguards established in paragraphs 17(7) and 17(9) of the CDSM directive, this must not affect legitimate uses of these works, for example when they are used under exceptions and limitations to copyright. 

Given the scale of user uploads to UGC platforms, it is clear that ensuring the unavailability of content will require automated content recognition tools. But, given the shortcomings of such tools, it is equally clear that their use must be subject to strong user rights safeguards that will likely not meet the expectations of AV rightholders.
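To make "strong user rights safeguards" somewhat more concrete, the sketch below shows one possible way a platform could combine ACR output with the safeguards of paragraphs 17(7) and 17(9): a match alone only leads to automatic blocking where the rightholder has asked for blocking and the uploader does not invoke an exception; otherwise the upload goes to a human reviewer. All names, policy values and decision rules here are hypothetical and purely illustrative; nothing in the directive or the dialogue prescribes this particular flow.

```python
from dataclasses import dataclass
from enum import Enum


class Decision(Enum):
    ALLOW = "allow"                # no match, or the rightholder monetises rather than blocks
    BLOCK = "block"                # clear match, blocking policy, no exception claimed
    HUMAN_REVIEW = "human_review"  # match, but the uploader invokes an exception or limitation


@dataclass
class MatchResult:
    matched: bool            # did the upload match a reference file?
    rightholder_policy: str  # hypothetical values: "block", "monetise", "track"


def handle_upload(match: MatchResult, uploader_claims_exception: bool) -> Decision:
    """Hypothetical decision logic combining ACR output with user rights safeguards."""
    if not match.matched:
        return Decision.ALLOW
    if match.rightholder_policy != "block":
        # Licensing/monetisation use case: the match feeds revenue accounting, not removal.
        return Decision.ALLOW
    if uploader_claims_exception:
        # The matcher cannot judge quotation, parody or criticism, so a human has to.
        return Decision.HUMAN_REVIEW
    return Decision.BLOCK
```

Under these assumptions, handle_upload(MatchResult(matched=True, rightholder_policy="block"), uploader_claims_exception=True) would return Decision.HUMAN_REVIEW rather than Decision.BLOCK: the match is a trigger for review, not an automatic removal.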

Automated Content Recognition technology is context blind

During the third and fourth meetings of the stakeholder dialogue there were seven presentations from companies that either have in-house content recognition technologies (YouTube and Facebook) or that offer such technologies to platforms (Audible Magic, PEX, Videntifier and Smart Protection). All of these companies extolled the virtues of their content matching algorithms, claiming negligible numbers of false positives (incorrectly identified pieces of content) and boasting about their abilities to identify content even when it has been modified to avoid detection. 

The matching capabilities of the different systems are impressive, and it is likely that this is also the case for the multitude of other products on the market (music industry representatives claimed that there are currently 42 different solutions available in Europe).

While matching audio and video content to reference files provided by rightholders is essentially a solved problem, this does not mean that automated content recognition (ACR) systems are capable of determining the lawfulness of a specific use of content. 

Prompted by questions from representatives of users’ rights organisations, all six providers of ACR systems made it clear that their systems do not look at the context in which a use takes place and, as such, cannot make determinations of whether or not a use falls within the scope of an exception or limitation. This inherent limitation of filtering technology is succinctly captured in statements made by Facebook and Audible Magic at the fourth meeting of the stakeholder dialogue: 

“Our matching system is not able to take context into account; it is just seeking to identify whether or not two pieces of content match to one another.” (Facebook, 16-12-2019)

“Copyright exceptions require a high degree of intellectual judgement and an understanding and appreciation of context. We do not represent that any technology can solve this problem in an automated fashion. Ultimately these types of determinations must be handled by human judgement.” (Audible Magic, 16-12-2019)

The technology providers participating in the stakeholder dialogue also made it clear that this situation is unlikely to change any time soon. This limitation of ACR technology will likely have a substantial impact on the discussions in the next phase, as it means that, while ACR technology plays an important role in the monetisation of content available on platforms and is essential for revenue accounting, it is generally unsuited for fully automated filtering or blocking. Without the ability to assess the context in which a use takes place, current ACR technology cannot ensure that content used under exceptions or limitations remains available, as required by paragraph 17(7) of the CDSM directive.
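A toy sketch illustrates why. Reduced to its essence, a fingerprint matcher compares an upload against reference files and produces a similarity score; nothing in that output distinguishes a parody or a quotation from a verbatim copy. The hashing scheme below is deliberately naive (real ACR systems extract far more robust audio or video features), but the shape of the output is the point.

```python
import hashlib


def fingerprint(content: bytes, window: int = 4096) -> set[str]:
    """Toy fingerprint: hash fixed-size windows of the raw content.
    Real ACR systems use robust perceptual features, but the principle
    (content in, set of hashes out) is the same."""
    return {
        hashlib.sha256(content[i:i + window]).hexdigest()
        for i in range(0, max(len(content) - window, 1), window)
    }


def match_score(upload: bytes, reference: bytes) -> float:
    """Fraction of the reference fingerprint that also appears in the upload."""
    ref, up = fingerprint(reference), fingerprint(upload)
    return len(ref & up) / len(ref) if ref else 0.0


# The output is a single number between 0 and 1. Whether the matched material
# is used for quotation, criticism or parody (uses protected by Article 17(7))
# is not represented anywhere in that number.
```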

This also means that, in its current state, ACR technology meets the requirements of the music industry use case (licensing and revenue accounting), while it falls short of the requirements of the AV industry use case (blocking). This tension also needs to be addressed in the upcoming meetings. 

Lack of transparency on all sides

There was, finally, one issue on which all stakeholders could agree: the lack of transparency when it comes to current practices. That agreement quickly dissipates, however, when the question is examined in more detail, as each group of stakeholders is concerned about a different type of transparency. Rightholders’ representatives have expressed frustration with the ability of platforms to set their own policies for the monetisation of content on their platforms. Platforms have expressed concerns that rightholders cannot be trusted when it comes to ownership claims and the responsible use of automated removal mechanisms. User rights representatives have pointed out that there is a total lack of transparency when it comes to the automated removal of content from platforms.

It is clear that the general lack of trust between the different stakeholders is one of the key problems that the stakeholder dialogue will need to address. This lack of trust boils down to two distinct but interrelated issues: a lack of transparency when it comes to ownership information and a lack of transparency when it comes to the policies implemented by platforms. While the latter issue is partially addressed by paragraph 17(8) of the CDSM directive, which requires platforms to provide rightholders “with adequate information”, the former problem is not directly addressed by the directive.

However, the lack of transparency when it comes to the current practices and policies of platforms cannot be attributed to the platforms alone. During the stakeholder dialogue, it became clear that these practices are the result of commercial arrangements between platforms and major rightholders, who exert considerable influence over how their content can be used by platforms. One of the key shortcomings of the stakeholder dialogue so far has been a failure to shed light on these agreements, with both platforms and rightholders hiding behind mutual confidentiality agreements (while blaming each other for the problematic aspects of those agreements). As the host of the stakeholder dialogue, the Commission has so far failed to compel stakeholders to give real insight into their commercial practices. This failure means that the discussions so far lack a solid empirical basis and that observers have no choice but to accept statements by stakeholders at face value.

The next stage of the stakeholder dialogue will need to show whether there is a real willingness to change this status quo and use the provisions in paragraph 17(8) to enforce more transparency. One aspect that will be addressed in this context is the status of rights management systems like YouTube’s Content ID and Facebook Rights Manager. Will platforms continue to be allowed to run these as private systems that operate largely independently of public scrutiny, in which platforms and major rightholders can make up their own rules, or will Article 17 lead to a situation where these systems receive more public and regulatory scrutiny? User rights representatives and some collective management bodies have made it clear that they would like to see the latter, but it remains to be seen whether the Commission (and Member States) will muster the political will to go beyond the minimal requirements of paragraph 17(8) here.

The same question arises with regards to the quality of ownership information. While the discussions in the stakeholder dialogue have made it clear that incorrect ownership claims are both common and cause problems for almost all stakeholders (users having content removed by parties without rights to do so, platforms having to deal with incorrect and contradictory information, and rightholders having to deal with incorrect claims related to their own content), the CDSM directive does not provide an obvious answer to this problem. User rights organisations have proposed that, in order to allow for public scrutiny of ownership claims, all requests to block or remove content pursuant to Article 17(4) should be made via a centralised public database, but given the lack of such a requirement in the CDSM directive, such an approach would require commitment by all stakeholders. Given the discussions so far, this seems like an unrealistic expectation even though this approach would have clear benefits for all stakeholders (rightholders would not have to provide information to lots of platforms, platforms could rely on a single source of information and users would be able to scrutinise ownership claims).
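To give an idea of what such a shared register might contain, here is a minimal sketch of a single public blocking request modelled as a data record. The field names and structure are invented for illustration only; neither the directive nor any existing system specifies such a format.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class BlockingRequest:
    """Hypothetical public record of a blocking/removal request under Article 17(4)."""
    work_title: str             # human-readable identification of the work
    work_identifier: str        # e.g. an ISRC, ISAN or other standard identifier
    claimant: str               # rightholder (or representative) submitting the claim
    territories: list[str]      # territories in which the claim applies
    reference_fingerprint: str  # pointer to the reference file or its fingerprint
    submitted_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
```

Because every record in such a register would be public, platforms could draw on a single source of ownership information, while users and competing claimants could inspect and contest claims that look incorrect, which is precisely the kind of scrutiny the current bilateral arrangements do not allow.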

The next phase of the stakeholder dialogue

For the third phase of the stakeholder dialogue to produce any outcomes that go beyond re-stating what is in the directive, all of these tensions will need to be resolved and the different types of stakeholders will need to look for common ground. One obstacle is that until now the Commission has not revealed any details about the nature of the guidelines that it will need to draft and publish based on the input from the stakeholder dialogue. 

Resolving some of the tensions that have surfaced in the past meetings will likely require an approach to these guidelines that is willing to seek consensus in areas that are not directly addressed by the text of Article 17. This would concern guidelines on licensing modalities for different sectors, transparency obligations that go beyond the limited scope of paragraph 17(8) and procedural safeguards for user rights that take into account inherent technical limitations of automated content recognition systems. 

Next meetings: The 5th meeting of the stakeholder dialogue takes place today, on the 16th of January, and focusses on “authorisations and ‘best efforts’ to obtain an authorisation”, “‘best efforts’ to avoid unauthorised content” and “notices submitted by rightholders to remove unauthorised content” (you can watch the stream here). The 6th meeting will take place on the 10th of February and will focus on “safeguards for legitimate uses of content”, “redress mechanism for users” and “information to rightholders”.
