
Article 17 stakeholder dialogue (day 3): Filters do not meet the requirements of the directive

Last week’s third meeting of the Article 17 stakeholder dialogue was the first of what the Commission has referred to as the second phase of the dialogue. After two meetings of introductory statements by various stakeholders (see our reports here and here), the third meeting consisted of a number of more in-depth technical presentations on content recognition technologies and on existing licensing models (video recording available here).

The morning session saw presentations from three technology providers. YouTube presented its own Content ID system, PEX presented its platform-independent attribution engine, and Videntifier showed off its video and image matching technology.

Most of the morning’s discussion centered on understanding how YouTube’s Content ID system works and how it relates to copyright (hint: it’s complicated). The overall impression that arose from the discussion is that very few participants actually understand how Content ID works (and those who do, like the big record labels, don’t seem to be interested in talking about it). The fact that the Commission was among those asking questions to get a better understanding of the inner workings of Content ID is rather striking, given that evidence-based lawmaking was supposed to be one of the priorities of the Juncker Commission. So far the stakeholder dialogue seems more like an exercise in legislation-based fact-finding.

While many aspects of Content ID remained opaque, one thing became clear throughout the three presentations: none of the presented technologies can do more than match content in user uploads. None of the technologies presented can understand the context in which a use takes place, and as a result they are incapable of detecting whether a use is covered by an exception or not. In the words of the technology providers (lightly edited for clarity):

Question: Does your technology do simple matching, or does it look at the context?

YouTube: We do not look at the context. […] There are no fixed rules; as an engineer, there is no written rule that would let me tell the machine that this is a copyright exception.

Videntifier: […] as to your question on satire, we also make no judgements on the context of the content. This is, as far as I understand, an unsolved problem.

While not surprising (we have argued since the beginning that filters cannot recognise context), this is extremely relevant: as long as filtering technology cannot determine whether a use is covered by an exception, it does not meet the requirements established by Article 17 of the Directive. This point was one of the conclusions of the statement on user rights and Article 17 issued by more than 50 copyright scholars last month (emphasis ours):

Finally, we note that an underlying assumption for the application of the preventive measures in Article 17(4)(b) and (c) is that the necessary technology is available on the market and meets the legal requirements set forth in Article 17. In essence, preventive measures should only be allowed and applied if they: (i) meet the proportionality requirements in paragraph (5); (ii) enable the recognition of the mandatory E&Ls in paragraph (7), including their contextual and dynamic aspects; (iii) in no way affect legitimate uses, as mandated in paragraph (9).

It is worth noting that this situation is unlikely to change any time soon: it is simply unrealistic to expect content recognition technology to improve substantially in the near future. Matching uploaded content to reference files is essentially a solved problem (with room for marginal improvements in telling very similar objects/works apart).
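
To make the distinction concrete, here is a minimal, purely illustrative sketch of what “matching” means: an upload is fingerprinted and compared against an index of reference works. The fingerprinting function here is a toy stand-in (real systems such as Content ID use robust perceptual fingerprints that survive re-encoding and cropping), and all names are our own invention. The point is what the code never sees: the context of the use.

```python
import hashlib

def fingerprint(chunk: bytes) -> str:
    # Toy stand-in for a perceptual fingerprint; real systems use robust
    # audio/video hashes that survive re-encoding, cropping, etc.
    return hashlib.sha256(chunk).hexdigest()

def build_reference_index(reference_works: dict) -> dict:
    # Map each chunk fingerprint to the reference work it came from.
    index = {}
    for work_id, chunks in reference_works.items():
        for chunk in chunks:
            index[fingerprint(chunk)] = work_id
    return index

def match_upload(upload_chunks: list, index: dict) -> set:
    # Flag every reference work the upload shares material with.
    # Note what is absent: nothing here can tell whether the matched
    # material appears in a parody, a quotation or an infringement.
    return {index[fingerprint(c)] for c in upload_chunks
            if fingerprint(c) in index}

reference = {"song_a": [b"chunk-1", b"chunk-2"], "film_b": [b"scene-x"]}
index = build_reference_index(reference)
# An upload reusing one chunk of "song_a" is flagged, regardless of context.
print(match_upload([b"chunk-2", b"original commentary"], index))  # {'song_a'}
```

Everything copyright law cares about at the exception stage (purpose, proportion, commentary) lives outside the inputs a system like this ever sees.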

What is unsolved (and will remain unsolved for the foreseeable future) is understanding the context or purpose of a use and then making a determination of its legality based on exceptions and limitations. The long-term approach to achieving this will likely be based on machine learning. However, the complexity of the determinations to be made (a lot of interrelated factors go into recognising something like parody) means that this is still a long time away.

If technology providers were anywhere near recognising uses covered by exceptions, they would be demonstrating by now that they can detect quotations (compared to detecting parody, detecting quotations is a relatively straightforward problem to solve).
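
To illustrate why quotation is the comparatively easy case: a first-order heuristic (our own sketch, not any vendor’s method) only needs signals the matching step already produces, such as how long the matched span is and what share of the upload it makes up. Recognising parody, by contrast, depends on meaning, for which matching produces no signal at all.

```python
def looks_like_quotation(matched_seconds: float, upload_seconds: float,
                         max_length: float = 30.0, max_share: float = 0.1) -> bool:
    # Crude, hypothetical heuristic: a short matched span that makes up a
    # small share of a longer upload is *plausibly* a quotation. Even this
    # says nothing about whether the quote serves criticism or review,
    # which is what the exception legally requires.
    share = matched_seconds / upload_seconds
    return matched_seconds <= max_length and share <= max_share

print(looks_like_quotation(matched_seconds=20, upload_seconds=600))   # True
print(looks_like_quotation(matched_seconds=300, upload_seconds=320))  # False
```

That no provider demonstrated even this kind of rough quotation heuristic is telling about how far the field is from the much harder contextual judgements Article 17 presupposes.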

What this means is that future meetings of the stakeholder dialogue and the Commission’s guidelines must take into account the fact that filters do not meet the requirements of the Directive. That means, in the context of Article 17, that content recognition technologies can play a role in managing remuneration flows but must have a very limited role in blocking user uploads. As the above-quoted academics argue in their recommendation, blocking should be limited to obviously infringing uploads only.

This also means that the pathway to compliance with Article 17 will need to be based on ensuring that platforms are broadly licensed. This question was only briefly touched on during last week’s meeting, when the representatives of European Visual Artists proposed that platforms should obtain extended collective licenses covering the uses of visual artworks uploaded by their users.
