Background Discussion: New EU rules for Big Tech: How to improve the Digital Services Act
Tuesday, September 7, 2021, 16:00–17:00 (CEST)
The EU is bracing for a big tech policy effort: New rules in the planned Digital Services Act (DSA) are supposed to hold large platforms more accountable. The DSA contains many due diligence rules for platforms like Facebook and YouTube. In the first draft, however, it remains unclear who is responsible for monitoring and enforcing the new rules. Moreover, it is questionable whether the DSA is compatible with similar national regulations, such as the Network Enforcement Act and the Interstate Media Treaty in Germany.
How can the planned transparency rules be improved? Who can and should ensure the enforcement of such rules? Is there a need for a new EU-level authority or should national bodies be responsible?
On September 7, 2021, Prabhat Agrawal addressed these questions in an SNV background talk with Julian Jaursch. Prabhat Agrawal played a key role in shaping the draft DSA as Head of Unit at the European Commission.
Please find the recording of the discussion here. This is a transcript of the background talk, which has been edited for clarity. Check against delivery.
Julian Jaursch: Welcome to this SNV online background talk. Thank you so much for joining us, I’m really glad to have you here. My name is Julian and I’m a project director here at SNV. We’re a think tank in Berlin working on a variety of different tech policy issues and my focus is mostly on platform regulation and on tackling disinformation. I’ll do a quick introduction for a couple of minutes and then we’ll welcome our guest for today’s talk, which will be about social networks like Facebook and Instagram. It’ll be about search engines like Google, video apps like YouTube and TikTok, and what rules should apply to these large global digital platforms apart from already existing data protection and consumer protection laws. For years now, people have been reaping the benefits of such platforms but have also been confronted with their downsides. Just to name a few of those: The data collection practices are a huge risk to privacy. There are credible allegations and analyses that corporate algorithms recommend content that might hurt people, such as leading them to supposed miracle cures for COVID-19 or into violent extremist political groups. And around pretty much every election, people online see a barrage of false or misleading posts about the parties, the candidates or the voting process itself. This has become very clear once again in the ongoing German federal election campaign, with all leading candidates facing disinformation and hateful content.
So far, platforms have been developing their own rules for content moderation and recommender systems.
Yet, basically, how platforms deal with these potential risks or threats has been and still is mostly up to them. They set the rules for their content moderation; they decide how the algorithmic recommender systems work that show content to millions of people around the world every day and every second. For a long time, lawmakers just have not been very active in this field. Then, around four or five years ago, Germany pushed ahead with a law telling platforms that illegal content such as incitement to violence needs to be removed from the platforms within 24 hours. Other countries have also considered or enacted similar rules that are very much focused on individual pieces of content and how this supposedly illegal content gets removed. This Network Enforcement Act or NetzDG was applauded as a step to rein in big tech companies and as a possibility for lawmakers to actually become active. But it also faced a lot of criticism because the rules left a lot of power in the platforms’ hands. Especially when combined with the tight removal deadlines, this can lead to the deletion of legal content.
Now we’re at a point where the European Union is attempting to find common rules for Europe. One of the biggest proposals in this area is the Digital Services Act or DSA, which is what today’s discussion will be about. The European Commission presented its draft in December 2020 and now the draft is being discussed in the European Parliament. Members of the European Parliament have proposed thousands of amendments to the DSA, presenting their ideas on how to change and improve the text. Over the next couple of months, the lawmaking process will continue with negotiations between the Commission, the Parliament and the member states of the EU.
Unlike the German law that I just mentioned, the original DSA draft does not contain any strict rules on deleting content or a specific timeline to do so. It’s actually not focused that much on individual pieces of content at all. The DSA is much more focused on setting up rules for good business practices by the platforms. Some examples: The DSA draft asks platforms to have mechanisms in place to allow users a better understanding of the recommender systems – that is, why they see certain content. It also asks platforms to have mechanisms in place for users to report content and for researchers to have data access so they can study disinformation, for example. There also need to be mechanisms in place that allow users to have more transparency about the ads that they see online. There is also the idea to independently audit large platforms. These are all relatively new, relatively untested ideas, at least for EU law. So there are still many questions on how to get these provisions in the DSA right.
That’s part of what today’s discussion is going to be about. In Germany, this debate on the DSA and how to get its provisions right is particularly interesting because some laws for platforms are already in place. I already mentioned the NetzDG, which enforces German criminal law on the platforms. In addition, German competition law was updated with a specific focus on platforms. There’s also new media regulation that requires more transparency from the platforms. Here the question is: How do these German rules and laws fit with the DSA? And furthermore, who enforces the DSA and its rules? Is it mostly the European Commission, as stated in the draft? Is it national regulators? Is it both? Is it a new agency?
These are some of the issues that I’d like to address in today’s discussion. And my guest today is highly qualified to discuss these questions with us because he actually led the Commission’s effort to draft the original version of the DSA. I’m delighted to have Prabhat Agrawal with me today. Prabhat is a physicist by education, trained mostly at the University of Cambridge. But as you can imagine, his take on theoretical physics is not exactly why I’m so excited to talk to him today. Rather, he’s been working at the Commission for almost ten years now and he heads the unit that was in charge of drafting the DSA. So, I’m very thankful to be talking to Prabhat today and also very thankful that he’ll be answering some of your questions later on.
Prabhat Agrawal: Hello everybody. It’s a real pleasure to be with you here and I’m looking forward to our discussion.
Julian Jaursch: Me too, thanks so much. So, let’s jump right into it. Before we get to the open questions that remain with the DSA and how it can be improved, I’d like to ask you to briefly describe the motivation behind the DSA, just so that everyone’s on the same page. When you and your team sat down to draft the bill, what goals did you have in mind for the DSA?
Prabhat Agrawal: Thanks for your great starting question. Actually, there are a number of different considerations that drove us to propose the Digital Services Act. As you also mentioned earlier in your introduction, it was part of a package where we also proposed a second regulation called the Digital Markets Act, which is not the focus of our discussion today, but the two proposals together are really a European attempt at framing and regulating large online platforms. In a nutshell, what’s the difference between these two acts? The Digital Services Act basically deals with issues around content moderation online, as a very simple summary, whereas the Digital Markets Act deals much more with fair competition. As you mentioned, in Germany, there has been some very interesting work with a revision of the German competition rules and Article 19a and so on. That, in a nutshell, is the difference between the DSA and the DMA for those who have never heard anything about these laws.
Now, to answer your question, what drove us? Well, I would say that there are four things that drove us to propose the DSA. One to start with was that we were increasingly seeing national legislation in this area. You mentioned already the German NetzDG. But other member states in the European Union have tried or passed similar laws: For example, there was the attempted so-called Avia Law in France, and recently a new law has been passed there as well. Austria has passed rules that are to some extent very similar to the German NetzDG. Denmark has recently introduced some similar rules. There have been discussions about similar rules in Poland and Hungary, as well as in the UK when it was still a member of the European Union. So, we saw a lot of activity at the national level and this activity is not only legislative. It reflects real citizens’ concerns that there’s something wrong with the way we are governing content moderation in the online space, notably when it comes to illegal content. And it seems to many lawmakers and many citizens that nobody really has a handle on it. This was the first consideration, and of course, when multiple member states start to act, the Commission asks whether this should not be resolved and addressed at the European level, as we did with the GDPR – particularly given that a lot of these platforms operate across borders and Facebook or Twitter don’t stop at the boundaries of any country. That’s reason number one.
Reason number two was that there was increasing concern about a range of relatively new topics: the transparency of recommender systems and algorithms and how content is being shared through recommender systems. There has been a lot of evidence of problems from different parts of the world, but also from Europe. It’s also being discussed right now in the German election campaign how content is recommended to users and how the advertising business model plays a role in the amplification of certain types of content. Those kinds of issues were not really addressed in any of the national laws, but there’s a lot of concern about these scenarios. So, I would say this is number two: We wanted to have a set of rules that does not only imitate the focus of the national rules, such as on hate speech, but also shines a light on this very opaque process around how content is actually moderated.
We wanted to establish a framework where people can look at the internal workings of these platforms because you and I cannot find out what’s going on, as a lot of stuff is highly personalized on these platforms.
The third one has to do with something that you already mentioned: What is the right oversight structure? Who is exercising oversight? This is a really important question because in the existing European rules, so far only very limited work has been done on how to exercise oversight. For example, recently a German NGO, AlgorithmWatch, had run a very important project to find out whether recommender systems were discriminatory or not, and suddenly they were unilaterally disconnected by Facebook despite assurances that such research could continue. This is not the first time we’ve experienced or heard about such problems: In the United States, there were similar problems with NYU [New York University], and we have also been trying to regulate access to platforms’ internal workings by independent researchers in the context of the fight against disinformation. And so, the DSA also asks the question: Who should exercise oversight? I know this is a question that you have been thinking about a lot. But to start with, the third factor that drove us is that we wanted to establish a framework, a structure where people can look at the internal workings of these platforms and understand content. Because you and I cannot find out what’s going on, as a lot of stuff is highly personalized on these platforms. So, it’s very hard to collectively understand what’s going on.
Now, the fourth reason, and I think this is also a very important one, is that increasingly, not only in the European Commission but also in the Parliament and the member states, there was concern about adequate protection of fundamental rights in the online space. These have been long-standing discussions, but we wanted to be sure, because we are also bound by the Charter of Fundamental Rights, that we have some real rules protecting users’ fundamental rights online as well. In the GDPR, we are protecting users’ fundamental right to data protection, but for other fundamental rights, such as freedom of expression, protection from discrimination, or even the protection of children, there are not really very many rules when it comes to online content moderation.
So, in a nutshell, these are the four factors. First, increasing activity by many member states, but in an uncoordinated rather than a joint way – we thought a European solution is better. Second, a range of new problems around algorithms and transparency, which we did not really see addressed in any regulatory initiatives so far and in which the interplay with advertising plays a very important role. The third element is to establish a European oversight structure that can actually scrutinize some of the things that are going on. And the last one is to set a certain European standard when it comes to the protection of fundamental rights.
Julian Jaursch: Okay, thanks for that. I think that gave everyone a good overview of what you are trying to accomplish here, and so maybe we can work our way through some of these issues and see where questions remain. Maybe we’ll start with the first one, because that’s also something that I was curious about: the interplay, the link between national legislation and the DSA. As you mentioned, there have been laws in the member states already. Take the German example: There are some very tight removal deadlines, but at the same time, there are also transparency reporting requirements in the German law, as well as in the Austrian law. So, there are overlaps. And as far as I can tell, it’s pretty standard procedure that you try to find best practices across the Union and see what you can carry over to the European level. At the same time, I think there’s a risk that you create a lot of national exemptions based on national laws and cultures. So, my question to you is, since you mentioned a European vision and a European framework, can you explain again why you think platform regulation would be an important thing to do at the European level? And when you make that case to member states, what’s their reaction? Is there a kind of conflict of interest or do you see that everything fits neatly together?
Prabhat Agrawal: Great questions. Maybe to start with, it’s not unusual for a member state to try out something at a national level and then, if it’s a good idea, it becomes a European law – usually, though, not in exactly the same way, for a variety of reasons. It needs to work for every member state, and member states have different constitutions and different things that they’re concerned about; that’s normal. Second, typically there’s a bit of a time lag between a member state trying out something at the national level and it becoming European law. And that time lag is sometimes helpful because you can find out what works well and what doesn’t, and you can learn from it. The current rules that we are revising date back 20 years, to when the rules for digital services were last put in place. It was actually German politicians who coined the word information society, “Informationsgesellschaft”. So, back in 2000, when we started regulating information society services, this was something that came out of pioneering German work. In fact, the current rules in the so-called e-Commerce Directive were preceded by some pioneering German legislation, which we then transposed to the European level, so this is not unusual.
Now, the second point that is important to know is that this is actually one of the principles of the European treaty: When there are different rules in the European Union, we have the power under the treaty to approximate those laws in the internal market. And “approximate” is a nice word – this is how it’s written in the treaty – because it shows that you’re not exactly copying everyone. It’s another way of saying that we harmonize those laws, which means having common rules inside the internal market. This is the second element, and this is when the European Commission often gets active.
Now, to answer your third question, what member states are saying: I think we’ve had really great, positive feedback. There’s a really strong consensus in the European Union that these kinds of problems need to be tackled at the European level, and so we have had very positive feedback. Also, European heads of state and government met earlier this year and signed a declaration and conclusions that we need such rules at the European level. So I think there’s been overwhelmingly positive feedback, but of course now we’re talking about a lot of details.
But let me add one other thing: One of the important factors that we should not forget is that we still hope that in the future, there can also be some homegrown digital platform services in Europe. At the moment, you have US-based services like Facebook, Twitter, and Google, and we have a new generation of Chinese services coming into the European Union, TikTok being the best known at the moment. But for the future digital economy, I think it’s very important for us to also put down conditions that allow European services to scale up and grow. One of the real success factors for these business models is the ability to acquire users. One of the reasons you can do that in a big market like the US or China is because the market is very uniform; it’s very easy to scale up there. But in the European Union at the moment, it’s still very difficult, and it gets more difficult if we have 27 different rules, 27 different hoops to jump through in order to scale up. The rules that I mentioned are similar, but they’re not identical. So, if, for example, one of your listeners wants to start an alternative social network, maybe one that doesn’t rely on user tracking or personalized ads, it would be a nightmare if in Europe she or he would have to deal with 27 different sets of rules and different authorities on how to handle illegal content. Basically, it would never work.
Julian Jaursch: Fair enough. I mean, this innovation and economic incentive argument is certainly strong. But to go back to your earlier point about content moderation: I think this is one of the areas that I see as an open issue. Different member states have pushed ahead, which, as you explained quite clearly, is perfectly normal and good, but the fact remains that the DSA now has to square the circle. For instance, member states probably want to stick to their own legislation, their own regulators, their own culture if you will, because there are different ideas of how content moderation should work across the EU. Is that something that you think the negotiations on the DSA can resolve, or will we have a kind of patchwork in the end, where you actually foresee more national exemptions or ways for member states to deviate from the DSA?
Prabhat Agrawal: My hope is very clear. I think that we need common rules, uniform rules for Europe. A single set of rules that works for everyone. That of course doesn’t mean that local concerns should not be heeded. A lot of the content moderation issues have to do with local language, of course. Hate speech is an important example; it is very closely tied to the use of a particular language, and what constitutes hate speech will be different in Germany than in France or another country. That’s one thing. Then, we also have a variety of different traditions when it comes to certain categories of illegal content, for example, defamation. Different countries in the European Union have completely different historical pathways on defamation: In some countries, in Spain for example, insulting the king is an offence, while some Nordic countries have a strong free speech tradition and have scrapped their defamation laws because they feel that any defamation law is fundamentally inconsistent with the idea of free speech. So, we have a variety of defamation laws, and it’s not the Commission’s intention to make them all equal.
We need common rules, uniform rules for Europe. A single set of rules that works for everyone.
So, the DSA needs to accommodate national differences in an intelligent way while still creating a European construction. The way I sometimes like to think about it is a little bit like a house with different rooms in it. You still have a common architecture, but the rooms will probably have different decorations and different furniture. This is not a problem for us, although of course it needs to fit together, which is what we are doing. But to answer your question very bluntly, my hope is that we can build a house in which everyone thinks they can find their own room, so that they don’t need their own room outside of the house anymore. That’s really my hope. I think that the DSA offers an architecture that allows every member state to accommodate their specific concerns in a flexible way, and we’ve also learned from some shortcomings, for example in the GDPR, where we’ve seen some enforcement bottlenecks. That is why we wanted, for example, to give the Commission a stronger role. We also have experience from the antitrust field, as we’ve been working on Google search and regulating their algorithms through competition law, and these are experiences that we can build on.
Julian Jaursch: Thanks. That makes it a bit clearer to me what your hope is here and what your reasoning for a European approach is. That leads me to one other topic that you addressed that I have a question on, because I think it’s one area where the details also need improvement. I will pose this last question before we head to the audience questions. You mentioned the European oversight structure, and in your answer you also mentioned the strong role for the Commission. In the DSA draft, the Commission has a role to play, the national regulators have a role to play, and there’s also a new advisory board that you suggest in the draft. I think that a key question, apart from coming up with good rules, is who should enforce them: Who can and should check whether the platforms actually comply with the DSA?
As some of you who have read my SNV paper on this might know, I think the enforcement structure is a bit weak and a bit fragmented. I personally think relying on national regulators is not so helpful and that an EU agency would be good. Some parliamentarians seem to agree and have proposed this in the European Parliament. Then there’s also criticism from the other side: German media regulators, for example, share the analysis that enforcement could be stronger, but rather than have a new agency, they would keep it in the member states. So, I guess the Commission’s idea of playing a big role in enforcement is under fire from multiple sides. Why do you think the Commission should play the big enforcer role? Maybe you can also do a kind of self-assessment: What would the Commission be good at? What would be its strong suit and what would also be some weaknesses in the enforcement?
Prabhat Agrawal: Yes, thanks, great questions again. First of all, I think it’s perfectly normal that these discussions take place. In any kind of federal, quasi-federal or confederal system, there’s a permanent tension about the level at which something should be organized. I think this is healthy for European democracy and needs to be debated. The Commission only made a proposal; the legislator is ultimately the Parliament and the Council, and they need to decide on this. Now, it’s true, what you’re saying: In the Parliament, there are some voices who said that we should have gone a step further and established an agency. This was also something that we discussed when we elaborated the draft, and actually, for those who have looked very carefully at the DSA, you might have noticed that there is a revision clause. The revision clause states that we want to review the oversight structure faster: After three years, we want to see whether the oversight is still okay. The rules themselves, however, we think are very good and should only be reviewed every five years or so. So, we already had in mind the question of whether we need to work more on the oversight, and this is also why this aspect will be evaluated more quickly under our proposal, if everyone agrees to this.
The Commission’s role is designed the way it is precisely to address one of the concerns that we’ve encountered in GDPR enforcement, where there’s a low level of trust between different regulators that could perhaps be improved, and where there’s sometimes a feeling of a bottleneck with certain countries in the European Union, particularly Ireland, which is the key regulator for some of the very large online platforms. And so, we wanted the Commission to play an assisting role by stepping in at the European level. One of our key concerns was to keep a European framework in place, and that was really one of the key architectural considerations.
Of course, the Commission’s proposal is being stretched left and right and there are people making proposals. This is perfectly normal in the democratic process. One thing that’s very important is technical expertise. We know from our enforcement experience, not only with the GDPR but also from working with large online platforms on issues like disinformation, hate speech and counterfeit goods, and from antitrust, that we need people with a lot of technical expertise. I think the Commission is strong in this because we bring together, in one place, so much experience from working with different online platforms in different areas. This is one of our strong points. I think the second strong point is that we can bring people together – different national regulators. And this, I believe, is one point that we should not underestimate in governing online platforms. One of the real key success factors is actually to establish collaboration mechanisms, because the problems of online platforms, when it comes to illegal content or disinformation, cover so many different aspects. Let me give you an example: coronavirus disinformation. Is it a health problem? Is it a political issue? Or is it illegal content in terms of product safety, when it comes to people advertising unsafe products? It cuts across many different parts, many different departments and ministries. One of the strong points of the oversight mechanism of the DSA, at least in our conception, is this cooperative nature, where we bring all these different actors together in order to solve problems that will inevitably cut across many different issues.
Julian Jaursch: Alright, thanks. We didn’t get to the weak spots of the Commission, but you can read my paper for that. I think one of the key things is that the Commission itself is a political body, so there have been questions about its independence. But I don’t want to cut into the audience questions, so hopefully this will come up again.
The first one that I have here notes that there has been a lot of coverage of how active corporate tech lobbyists have been on the DSA as well as the DMA. The question is: How did you deal with this during and after the drafting process? Is there any kind of mechanism to hear the needs of civil society and NGOs?
Prabhat Agrawal: This is a very important question, one that we’ve thought hard about for a long time. First of all, it’s true that it’s an issue of concern that so much money is spent on tech lobbying. How do we deal with this? First of all, I and my team are very thoroughly trained on how to deal with such lobbying. We also check everything that industry representatives tell us, and I think we have a very good understanding of the positions. We do listen to the tech lobby’s positions, but we make every effort to balance them with opposing views and by listening to NGOs and civil society organizations, which I personally have taken up as a cause and with which I have been maintaining excellent relationships. So, when we hear a claim that we’re not sure about, we do check. We also check with competitors, you know, people who may not be of the same view.
It’s an issue of concern that so much money is spent on tech lobbying.
So, our first line of defense is to be well trained on corporate lobbying. We have been the target of corporate lobbying for many years, whether it’s the tobacco or the finance industry, so we have a long history of having to deal with it. The first point, then, is training and best practices. Listening to all sides, including those who don’t agree with the big tech lobby, is the second one. The third one is very high levels of transparency. We maintain a very high standard of transparency registers. The research that some of the colleagues are quoting here, for example by LobbyControl, has only really been possible because there is a transparency register. We record every meeting, which allows that kind of scrutiny and also lets us know: Are we only hearing one side of the story or are we listening to the other one as well? And then, having worked in this area over the years, we’ve developed a fair amount of expertise in knowing what’s right and what’s wrong. There’s also a lot of insight we gain from the Commission’s antitrust enforcement, where we’ve learned a lot. The last point I would make is that the conversations we’re having around the DSA and content moderation issues have exploded worldwide. They’re also being debated in the US and in many other countries. So, we also took some comfort from the fact that other people and independent researchers came to similar conclusions on many of the key issues, and I think that shows that we were on the right track.
Julian Jaursch: Right, well, thanks for that. I think it’s also good that you mentioned the LobbyControl and Corporate Europe Observatory research, because that really showed an imbalance in how the Commission is approached by, and takes meetings with, tech companies compared to NGOs. Thank you for bringing up this question. And if I may just add, when it comes to the regulator or whoever gets to decide whether platforms comply with the DSA, I think that would also be a crucial thing: There need to be mechanisms in place that ensure transparency and make sure that the regulator is as free from corporate and political capture as possible.
The next question actually ties in well, as it is also about enforcement: How would you strengthen the DSA enforcement system to avoid the shortcomings and loopholes observed with Europe’s data protection rules, the GDPR, especially with regard to cooperation and efficiency in cross-border processes? So again, it’s the question of how to strengthen enforcement.
Prabhat Agrawal: I think one key element that’s absent in the GDPR is a role for the Commission. This is one of the key elements. The second element that we try to incorporate in the DSA is to work with clear timelines, so that processes are time-bound as much as possible. This is also something that is being discussed right now in the legislative process. Something has to have an endpoint, you know; a complaint needs to be addressed at a certain moment. My third point, which is a little bit more forward-looking, is to be sure that we collectively have sufficient technical expertise available. I think this is an area where, in the GDPR, people are now waking up to the fact that some of these problems are very technically difficult, and regulators have started to hire and invest in this as well. I don’t want to say that this is completely off the radar for the data protection community – there has been a lot of progress recently, and I’m not a real expert in that community – but for the DSA, we want to be sure that we have sufficient technical expertise available upstream.
One way of looking at the DSA is that it’s a gigantic data generation machine about what is actually happening in the platform economy.
Maybe one final point, which is not so obvious to those who are reading the DSA: We’ve included a lot of hooks for civil society to exercise collective oversight as well. There are many transparency provisions that are actually aimed at journalists, civil society actors or independent researchers. Enforcement doesn’t rely only on the regulator, because the regulator can really only start doing something once he or she knows something. One way of looking at the DSA is that it’s a gigantic data generation machine about what is actually happening in the platform economy. That data will produce evidence, and that evidence will cause many different people to act. So there are also elements where the pressure will not necessarily come from the regulator but from the reputational levers that we’ve built in. Collectively, I think these are elements in the DSA that may not be present in quite this form in the GDPR.
Julian Jaursch: One of the things that you’ve come back to multiple times now is technical expertise, and here I would agree. To me, the question is just at what level and in what institution this expertise should sit: centrally, at the Commission or a completely new agency, or decentralized in the member states?
But I guess that is precisely what the negotiations are going to be about. And that is kind of what the next question is about, asking for a timeline: What can we expect from the EU in the coming months and next year? When are the next steps in the legislative process happening? Could you give us a quick overview, please?
Prabhat Agrawal: This is now in the hands of the member states on the one hand and the Parliament on the other. The member states, who come together in the Council of the EU, have announced that they would like to come to a position in November, and they are working towards that. The Parliament, similarly, is committed to coming to a position by the end of the year. These are the intentions, and I think everyone is working very hard at the moment to make those timelines happen. This means that a conclusion under the French presidency – that is, in the first half of next year – is basically possible. My Commissioner, Thierry Breton, also recently emphasized in the press that he thinks a conclusion in the first half of 2022 is possible. And then there will be a transition period, because everybody has to get ready for these rules. This transition period is also the subject of discussions at the moment: The Commission wanted the transition period to be very short, three months, but so far people are asking for more time to get ready. So, in the best case, I think we can expect some of these rules to start applying by the end of next year or early the year after.
Julian Jaursch: But I mean, even if that timeline holds, it seems very tight to me, especially considering how long it took to get the data protection rules, the GDPR, over the line. Is that a very optimistic reading of things?
Prabhat Agrawal: Yes, I’m an optimist. And I also have plans for my life after that, so I’m hoping that it will work like this. But jokes aside, I think there’s a common understanding that we need these rules, and we need them fast, because people are expecting this to be regulated. Every time there’s an event – whether it’s a football match with a flare-up of racism, or what we see in parts of the election campaign at the moment – people are asking: Why aren’t these rules working? So, my hope is that the DSA will solve some of these issues, but people are expecting this to come quickly. I think generally people don’t want this to be left only to the platforms’ own initiatives, like what we see with the [Facebook] Oversight Board. I don’t want to discuss the merits or demerits of that system, but I think a lot of people expect this to be a properly framed and technocratic process.
Julian Jaursch: One particular threat on social media channels is targeted advertising aimed at children. So, the question here is whether the DSA offers any specific opportunities to protect children online. The person asking the question says the main goal should be that children’s parents understand the services that they are using. Can you talk to that a little bit?
Prabhat Agrawal: Yes. I really think this is a very important point. I say this, not only because I have three teenagers who are teaching me how to use a lot of these things that I’m regulating. I think many of you will have heard me make this joke before, that my teenagers are rolling their eyes when they tell each other what their dad is working on, you know, that I’m regulating TikTok and so on.
But to be serious, I think this is a very important element, and the Commission is certainly open to discussing how these aspects can be further improved and strengthened; this is also being discussed at the moment in the Council and certainly in Parliament. We only included one element, which was in the risk assessment part: Risks to children and their development need to be assessed and then be subject to the regulatory dialogue that we have for these very large online platforms. So, I think it’s totally legitimate to ask how this can be improved. They need to be able to understand this. This is an area where we will be supportive of good ideas for further improvements.
I think generally people don’t want this to be left only to the platforms’ own initiatives, like what we see with the [Facebook] Oversight Board.
Julian Jaursch: Okay, thank you. The next question relates to public interest platforms such as Wikipedia. While the DSA covers a whole range of platforms, commercial and noncommercial, it’s tailored much more towards commercial platforms. How can it be ensured that public interest platforms can preserve their community-based model of curating and moderating content? Maybe you can tell us a little bit about that because I’m sure this is not the first time you’re hearing about potential exclusions or different rules or making sure that something like Wikimedia and Wikipedia stays the way it is.
Prabhat Agrawal: Yes, maybe I should say a couple of things. I think that there is a basic set of rules in the DSA which we don’t think are particularly crazy, in the sense that they are notification mechanisms: Somebody needs to be able to flag that there is something illegal on your platform, you need to get that notice to someone, and someone needs to analyze it. I think this in itself is not that crazy, and public interest platforms such as Wikipedia can probably accommodate it even if this is not what they’re doing today. Then there’s a second part, the risk assessment part, which is much more flexible. I think the key is that these assessments need to be proportionate to the risk that the platforms pose, and it might very well be that particular platforms don’t necessarily pose the same risk as others. So, this needs to be assessed on a case-by-case basis. It might also be that a platform like Wikipedia is particularly good at mitigating risks because it has a very sophisticated content moderation system. Also, although I’m not an expert on it, I have seen some studies suggesting that the dynamics of the propagation of disinformation on Wikipedia are completely different from those on advertising-funded platforms. So it might actually be that Wikipedia finds it easier to comply with some of the ideas there, because the systems they have built are inherently perhaps less prone to intentional manipulation or risk. This is speculative on my side, so don’t take it as regulatory authority, but the idea is that the system is flexible.
Julian Jaursch: But in general, you did mention different types of platforms: Some are advertising-based and then you have platforms like Wikipedia. We also talked in the beginning about Facebook, and there are also smaller commercial entities. You also mentioned counterfeit goods, and online marketplaces, I think, are something different than, let’s say, TikTok. What I’m trying to say is that there is a range of different platforms and most of them are covered by the DSA. Would you say it’s a strong point that you have this kind of minimum standard for all platforms? You wouldn’t necessarily see it as a weakness to try to find rules for very different types of platforms?
Prabhat Agrawal: I think maybe some of you who’ve been listening to me are already kind of playing bullshit bingo, because I end up saying the same stuff over and over again, and I can imagine some people in the audience crossing off some of the keywords that I keep using all the time. Some of the ground rules that we have in the DSA are a little bit analogous to when you go to a concert hall and they have fire extinguishers, fire exits and alarm buttons. Of course, it’s a burden to put fire extinguishers and fire alarms everywhere and to have certain-sized exits and so on. You need to take this into account; it increases the cost of operating, for sure. But we think that these minimum standards are reasonable, even though they carry some costs. They provide a minimum layer of security and balance fundamental rights – and I haven’t even mentioned all the rules to protect users against erroneous removal and over-removal and so on. There’s a whole big chapter we could talk about as well.
We think that these minimum standards are reasonable, even though they carry some costs.
But I hope this common set of rules is a set of common-sense standards that everyone can comply with – like, at least in the developed world, you have fire exits and fire extinguishers everywhere. You know, we expect some of these rules to become a kind of global norm. On top of that, platforms with a very deep reach in society that really pose a potential risk need to be subject to some further rules, and that needs to be articulated on a flexible, case-by-case basis. Let me just give you one example: Some of the very large online platforms are probably pornography platforms, and they have a huge number of viewers in the European Union. We know that some of the porn platforms also carry a lot of illegal, nonconsensual pornography, and there has been no real rule for this so far in the European Union. Just to give you an example, it seems kind of unthinkable that those kinds of platforms have never been subject to any kind of basic procedural requirements.
Julian Jaursch: Thanks for elaborating on that. I think that is really one of the key things to be discussed: which platforms are covered and whether there are higher standards beyond the minimum for some of them.
I want to get to a question about the audits in the DSA. As I mentioned, the idea in the draft was for platforms to be audited, at least the large platforms. If the platforms are supposed to finance the audits themselves, as envisioned in the original draft, is there not a risk that these audits will not actually be independent? Is there a risk of these audits being captured by the platforms?
For companies that are making so much money, it feels right that they should foot the bill for audits but of course, it needs to be done in a way that guarantees the right kind of outcomes for sure.
Prabhat Agrawal: Yes, excellent question. Again, whatever we can do to strengthen the system of audits, we’re certainly open to discussing. The idea that platforms would pay for the audit comes from the banking sector, where this is quite common: Financial institutions pose a societal risk, as we saw in the subprime crisis 12 or 13 years ago, so they must subject themselves to audits and they pay for them themselves. It’s a “polluter pays” kind of logic. We have independence requirements for the auditors in the DSA, and I think the two are not contradictory. But whatever we can do to reinforce that independence, to be sure that these audits are meaningful, we’re certainly open to discussing. This system needs to work and also needs to have the trust of the population, so it’s important to get it right.
It’s relatively new ground in the platform sector to have such rules, but it’s relatively normal in many other parts of the economy to have audits. So, it may seem like a big step for the platform economy, but compared to other regulated sectors, audit requirements as part of the regulatory toolbox are actually relatively common. We’re certainly open to engaging with people who have good ideas on how to improve that system. It would feel a bit weird to me if the taxpayer had to pay for such audits. For companies that are making so much money, it feels right that they should foot the bill, but of course, it needs to be done in a way that guarantees the right kind of outcomes, for sure.
Julian Jaursch: Yes, I mean, there are ideas for that, in the sense that you can have a limit on how many repeat contracts there can be between a tech company and an auditor. There is also the idea of a fund that platforms pay into, so it’s not one platform paying one auditor. Another possible improvement regarding the audits, if I read the draft correctly, is that if an audit somehow turns out negative, there aren’t necessarily any consequences. So, if the platforms don’t want to follow the recommendations, I think they should face some concrete consequences beyond just explaining why they didn’t do so.
I have one more question here before we have to wrap it up. I’m very sorry for not getting through all the questions. How is it ensured that platforms do not simply take the cheapest, rather than the most appropriate, measures to mitigate risks online? The example given was gender-based violence, but I would claim that you can expand that to other risks. How can you make sure that risk mitigation is not just a tick-the-box exercise and that platforms actually do what needs to be done?
Prabhat Agrawal: I don’t have the text in front of me, but in Article 27, I think, it’s written that the mitigation measures must be effective and appropriate. So it is a regulatory requirement not that they be cheap, but that they be effective and actually work. Now, how do we make sure that they work? Well, I think there are a couple of things. First of all, the very extensive transparency requirements and the ability, for example, for researchers to go into platforms and research particular risks are a really important independent check on whether this is really happening. This is all about the data access that I mentioned earlier. We have the ability for researchers to access the internal workings of the platforms to do research in areas such as gender-based violence. At the moment, it isn’t really possible for anyone to find out how much of this violence is happening and how it is happening. What are the dynamics behind it, what are the platforms doing, and so on? That’s a really important part of the whole regulatory feedback cycle. The transparency disclosure obligations, the data access for researchers, the regulatory data access – let’s not forget that the regulator itself can compare data – and the quality of the audit should all be taken together as one large complex of oversight. But again, if it can be improved, this is an area where we are very open. It’s being discussed right now in the legislative procedure, so there’s certainly hope for discussions there.
Julian Jaursch: Thanks again, we are coming full circle at the end. I think in the end you also related back to the point you made about oversight. The point that I was trying to make is that the best rules won’t help if they aren’t enforced, and I think that’s what this question was also addressing. So, we’ve tackled a lot of things. I do apologize to everyone who sent in great questions here in the chat that we unfortunately didn’t get to. Thank you so much, Prabhat, for discussing these open questions with us: We dove into the link between national legislation and the DSA, we talked a lot about enforcement and oversight as well as open questions regarding the due diligence rules, and we discussed the timeline. I very much appreciate your insights here. Thanks to everyone listening, thanks to everyone asking questions, and many thanks to my colleagues Andre and Raphael in the background who helped me organize this. I will close it here and, Prabhat, the last word of goodbye is yours. Thanks again.
Prabhat Agrawal: Thanks Julian, it was a real pleasure. It was a lot of fun and I look forward to reconnecting with you all as soon as possible.
Julian Jaursch: Thanks so much. Have a great rest of the day everyone.