Background Discussion
DSA enforcement starts now – what can users expect?
Tuesday, February 27, 2024, 16:00 - 17:00 (CET)
The EU has been preparing to enforce the Digital Services Act (DSA) and now, the new rules for online platforms are being put to the test for the first time. Starting in February 2024, the DSA’s provisions apply in their entirety to social media sites, video apps and search engines. Users of affected platforms are supposed to gain more control over what happens with their data, while researchers from academia and civil society hope to have clearer rules allowing them to conduct research on exactly that data.
Expectations for the DSA are high. How can the EU meet these expectations? What will the DSA actually change for platform users, researchers and civil society? What is merely wishful thinking? SNV’s Julian Jaursch discussed questions such as these with Laureline Lemoine on February 27, 2024, at 4:00pm CET during an online background talk. Laureline is a senior associate at the law firm and consultancy AWO, which has worked on key issues around the DSA for years and supports a legal case against Meta’s data protection practices.
As part of the one-hour event, guests were welcome to participate in the discussion with their questions. The talk was held in English.
Below you will find the video and the transcript of the interview. The transcript has been slightly edited for readability.
Dr. Julian Jaursch, project director at SNV: Welcome, everyone. Thank you so much for joining us today for this background talk to discuss the EU’s new rules for online platforms, what to expect from these rules and also what not to expect from them. The rules are laid down in the Digital Services Act, the DSA. It’s only been 10 days since the DSA took effect in its entirety. Now the new rules apply to a wide variety of online services such as hosting providers, social media, video apps and search engines of almost all sizes. Some of these rules are meant to increase transparency, for example around online advertising, terms of service and content moderation. Other rules provide for complaint mechanisms for users, but also for data access for researchers. Overall, it’s a pretty big package of rules for a pretty diverse set of services.
I would imagine that regulators will have their hands full in ensuring compliance with these rules in the future. Regulators, that means, for one, the European Commission. The European Commission will oversee compliance with the rules for very large online platforms. Those are platforms with at least 45 million users per month in the EU. There are currently 22 such VLOPs, as they’re called – well-known services like Amazon, TikTok, Twitter, Pornhub, YouTube and Google Search. But it’s not just the Commission and these VLOPs: There are also national regulators at the member state level, and that’s mostly the Digital Services Coordinators, the DSCs.
The DSCs oversee platforms that do not meet the VLOP threshold. In addition, they coordinate a number of national-level regulators. Crucially, what they also do is serve as the point of contact for consumers and for researchers: for consumers, if they want to complain about potential violations of the DSA, and for researchers, if they want to apply for access to platform data. With the DSA finally in full effect, the key question is: What can these regulators in Brussels and in member states actually achieve with the DSA rules? How can there be benefits from the DSA for users of these services, but also for civil society and for academia?
There’s sometimes this expectation, at least I feel that there is, that the DSA will kind of fix the internet by removing all types of hateful or illegal content and by reining in the economic power of big tech companies. At the same time, there is the criticism that the DSA actually goes too far in fixing the internet and gives too much power over online spaces to governments, politicians, bureaucrats and also to companies.
What I’d like to do today is to cut through these types of expectations with our guest, Laureline Lemoine. Laureline is in a great position to sort through with us what the DSA can and cannot do, because she’s been working on platform regulation topics for a while now, specifically on the DSA. She is currently a senior policy associate at AWO agency in Brussels. AWO is a law firm and consultancy focused on data protection and data rights. Before AWO, Laureline worked at various NGOs and at the Court of Justice of the EU; she’s a lawyer by training. As I mentioned, she’s worked a lot on the DSA and has published numerous articles on the topic. So, I’m very grateful that Laureline joins us today. Thank you, Laureline, and welcome.
Laureline Lemoine, senior policy associate at AWO agency: Thank you, Julian. Very happy to be here.
Julian: Our event today will be split into two parts. After this introduction, we have about 30 minutes, where I ask a couple of questions to Laureline. The second part, another roughly 30 minutes, is dedicated to you. I’m very curious to hear what you, the audience, have in store in terms of questions for Laureline. You can submit your questions using the Q&A function. You can pose your question either anonymously or you can state your name and organization if you like. Just a reminder: This webinar is being recorded and the recording will be published later. English and German are fine for the questions and you can also vote on the questions that are posed, so we get a better idea of what everyone is interested in. Now let’s get started. Laureline, thanks again for taking part in this discussion.
I’d like to start by trying to figure out what the DSA is about from a consumer’s perspective. If I had not been reading about the DSA for some time and had just followed the past 10 to 14 days, when it came into effect, I might think that it’s either a miracle of a law that will save the internet or that it’s basically a monster with way too much bureaucracy and government overreach. I’m exaggerating a bit. The point is that the DSA covers a lot of different topics, but one major focus has been how it deals with illegal content and content moderation in general. And that’s obviously one of the things where these extreme reactions to the DSA come from. I’m wondering how you would characterize the DSA in general. Is it, in fact, mainly a law about illegal content and about content moderation?
On a key aspect of the DSA: “It’s a transparency machine.”
Laureline: I would say that the DSA is very broad, but if I had to summarize in one line what it does, I would say that it’s more like a transparency machine. It allows users and regulators to much better understand what’s happening behind the curtains of the platforms, with the hope of allowing us, as a democratic society, to impose better guardrails on the behavior of these platforms, if needed.
To achieve this aim, the DSA offers multiple tools. You have transparency obligations, risk assessments, mitigation measures, access to data for researchers, complaint mechanisms. I think for the average user, there may not be a lot of striking changes, because those due process, transparency and due diligence obligations are either not immediately visible to average users, or because what the DSA does is transform already existing practices into legal obligations. For example, all platforms must now provide meaningful information in their terms and conditions about the policies, procedures and tools that they use for content moderation. And they publish annual reports.
You also have, as you said, transparency regarding advertising, so that users can know who’s behind an ad and why they’re being targeted, for example. The VLOPs have to do risk assessments in relation to electoral processes, gender-based violence, mental health and civic discourse. So, in a sense, I think the goal is to have a healthier online public space, but it’s also mainly to understand how those platforms see themselves in this space, the role they have in the online sphere, and to create more transparency. The average user might be interested in all of this information from time to time, but it’s going to be mostly useful to regulators, researchers and civil society organizations.
When I said existing practices, I was thinking, for example, about the reporting mechanisms. Most online platforms already have such mechanisms. You can also think of the fact that the DSA provides that the big online platforms must offer a recommender system that is not based on the profiling of users. Platforms might interpret this obligation by offering a chronological feed, for example, which, again, some platforms already do. But I think the important point is that the DSA wrote those practices and features into law. We’re moving away from self-regulation and we’re ensuring that platforms will continue to provide basic features to users as a legal obligation.
We also know that at platforms, there might be changes in policies or changes in leadership. We’ve seen recently that this can have consequences for the user experience, for example. So, I would say that for me, the DSA is basically a tool to achieve more transparency, hopefully some accountability from the platforms, and an understanding of how they work. But I understand that some people might only have heard the illegal content part and the disinformation part. What’s really in the DSA about illegal content is, first of all, a reiteration of the safe harbor principle, the liability exemption for platforms. That was already in law, in the old e-commerce directive. So, illegal content is illegal based on EU or national law; that just already exists.
On the DSA dealing with illegal content online: Not a silver bullet but “we can try to understand the mechanism of how this content is disseminated on platforms”
What the liability exemption in the DSA says is that hosting services are not liable for the illegal content or the illegal activity on their service, as long as they take down such content once they become aware of it. But that’s not new, that’s already been in place for a long time, and they just have to take down illegal content, which is defined by national law. Then illegal content is also mentioned in the risk assessment obligation. The biggest platforms, as you mentioned, the VLOPs, have to assess how their whole system actually poses a risk regarding the dissemination of illegal content on their own platforms. It’s important also to say that the Commission or regulators cannot impose any content moderation decision on the platforms; it will be their [the platforms’] decision. That’s also why, on the other hand, we can’t really consider the DSA to be a silver bullet that will end disinformation or illegal content online, because that will always exist. But we can try to understand the mechanism of how this content is disseminated on platforms.
Julian: Thank you for that early assessment. So many things already in that short answer. First of all, to pick up on the move from self-regulation to regulation that you pointed out, I think that’s a crucial piece of analysis. You can argue about what’s good and bad about self-regulation versus regulation, but the law is now there, whereas we didn’t have that before. Another important point you raised, which I actually wanted to follow up on later, is the distinction between what the DSA does for users of the platforms, for consumers and citizens, versus or in addition to what it does for civil society and researchers. I’ll get back to that in a second.
I wanted to follow up first on one of the things you mentioned when you talked about illegal content, which I think you explained in an easy-to-understand way: that it’s still defined by national law, not in the DSA. But some of the criticism that the DSA has faced has been about disinformation. And you made that distinction, right? You said illegal content or disinformation, because by saying that, you acknowledge that disinformation is not always illegal, of course. Do you see a risk there that the risk assessment, where disinformation could be a part of it, or the very fact that the DSA does mention disinformation in passing, justifies some of the criticism? Or do you think some of the guardrails that you mentioned, for example that the Commission cannot impose content moderation guidelines on platforms, are enough? In your view, is that something that needs to be watched over the first enforcement phase now?
Laureline: I think it’s very tricky because, of course, there’s no definition of disinformation and it’s mostly legal speech, right? What the platforms have to do with disinformation is that… I mean, the provision doesn’t really say that they have to look for disinformation, but for the risks posed to electoral processes and civic discourse. It’s understood that that means disinformation. But again, it’s going to be a self-assessment by the platforms. They will have their own benchmarks, because there has been no guidance from the Commission on how to do this risk assessment yet. It’s going to be, again, a self-assessment; then, if they identify a risk, they’ll have to find a mitigating measure for that risk. It’s going to be tricky because, again, there is no definition, so what kind of measures are there going to be? Is it just going to be maybe a label? There are many possible measures that can be implemented.
I think that’s the role of the Commission, because the Commission is an enforcer, but it’s also the guardian of the treaties, including the Charter of Fundamental Rights. So, it’s also the role of the Commission to steer the platforms in the right direction and to do this kind of balancing of interests and of fundamental rights, including freedom of expression. The Commission has to do this and I think it’s important that civil society and academia follow what’s happening, just to ensure that it doesn’t go too far. But legally, there are safeguards and I think it’s also interesting for everybody to think about what it means. We have some time, too, because there was already a first round of risk assessments. Again, no guidance, but we can imagine maybe a space where civil society, academia and regulators come together and think about those issues to ensure that the DSA is properly enforced.
Julian: I think this is an important call to just keep watching as the risk assessments, the self-assessments from platforms, as you mentioned, hopefully improve, as regulators improve and as there are some outside checks.
Another thing that I want to follow up on is what you mentioned about what might change for the average user. You said some people might not even notice that much. Maybe there’s a new recommender system option or a new button to report stuff. But you also mentioned the complaint mechanism and that’s what I wanted to dive into a little bit deeper. Can you explain how that works or what your expectations are for it? What it might do well for users, how they can benefit from it, but also whether you see open questions on that.
Laureline: There are two different mechanisms in the DSA that we can talk about. The first one we can mention is that the DSA calls on all platforms to offer a mechanism to report illegal content or content that is contrary to their terms of service. As I said, in practice, most platforms already offer such mechanisms. Depending on how the platform does it and how they implement that obligation, there are going to be obvious changes on some platforms and no change at all on others. For instance, on X, formerly Twitter, there’s now a button that says, “report illegal content”, so that’s a direct change from the DSA.
In addition to offering this reporting mechanism, the platforms are also obliged to respond to every user complaint and to provide an explanation when content was taken down or a request was denied. All of these decisions will be in a publicly accessible database – they already are, actually, for the big platforms. We can check the content moderation decisions of those platforms. They also have to offer a complaint handling mechanism, so that users can lodge a complaint against the decision that was taken by the platform: the decision whether or not to remove the content, to disable access or restrict visibility, whether or not they suspended an account or demonetized content. Users can then go even further and escalate this. They have a right to go in front of a certified dispute settlement body, so that’s also a right under the DSA. Of course, users can also go to court over those decisions if they want to. So that’s between users and platforms on content moderation decisions.
But there’s also a second complaint mechanism within the DSA that’s a bit different. It’s Article 53 of the DSA. It provides that users can complain about any infringement of the DSA. This is not about a personal content moderation decision that affects them, as I just described. It could really be a complaint about anything in the DSA. It could be, for example, saying that a platform doesn’t have a functioning complaint mechanism, the one I just described; they [users] can’t report illegal content, they don’t get an answer, the system is not working, something is wrong. It can be broader than that, as I said: any provision. It could be about the transparency obligations, the online advertising obligations. On that note, actually, I think a civil society organization lodged a complaint against LinkedIn just yesterday on that topic. It can be on the risk assessment and mitigation measures, it can be really on anything. It’s quite powerful, because it’s a way to bring specific issues to the attention of the regulators.
On users’ right to complain about DSA violations: “a very powerful tool” to bring issues to the attention of regulators
And in turn, they [regulators] may start an investigation based on the users’ complaints. It’s for users, obviously, located in the EU. If they want to complain, they have to go to their local regulator, which, as you said, are the Digital Services Coordinators. There’s one in each member state. It can be a different entity in each member state, but I think there’s a page on the Commission’s website that lists all of the DSCs, even though not all of them have been officially appointed yet. So, users go to their local DSC and then depending on where the platform is established, the local DSC will process the complaint themselves or they will send it to another one or to the Commission. For example, if a user is based in Germany and they want to complain about the internal complaint mechanism on Instagram – they tried to report something, it was not working, they got no answer – they can lodge a complaint to the German DSC. And then the German DSC will send the complaint to the Irish DSC, because this is where Instagram is established, in Ireland.
Then, because Instagram is a VLOP, they [the Irish DSC] will check with the Commission to see whether there is already an investigation into the same matter. If not, the Irish DSC will look into the case. But if that same user had complained about another obligation of Instagram, like the risk assessment obligation, then the complaint would have gone directly to the Commission. As a last example, take a German user who wants to complain about an alleged DSA violation by a German platform like Xing. Xing is not a VLOP and they’re based in Germany, so the user can complain to the German DSC, and the German DSC will look into their complaint and maybe take charge. A very powerful tool, I think, for users.
Julian: Super helpful, thank you very much. The differentiation between reporting potentially illegal content and challenging the platform’s decision is super crucial, but so is how you explained the transparency machine, as you called it, with the complaint mechanism and the examples. I think that this is definitely something to watch as well in the DSA enforcement.
I want to come back to something that you mentioned earlier, which is this distinction between what the DSA does for consumers and citizens and what the DSA does for civil society and researchers. One of the big topics there is the data access rules in the DSA. Here, too, there’s the question of what researchers from academia and civil society can actually expect from the DSA. What can and can’t the DSA do? Researchers now have a legally guaranteed way to request data from very large online platforms. As far as I understand, that’s been taken as a pretty promising and progressive aspect of the DSA. So, first of all, do you agree? Is it promising in your view as well? And if so, what needs to happen, secondly, so that these rules actually fulfill their promise in the future and don’t just turn out to be a dud?
Laureline: I definitely agree. I think the access obligations – we’ve been saying that they have the potential to become the sleeping giant of the DSA, because really allowing third parties to request data from platforms, especially to conduct research on the detection, identification and understanding of systemic risks, that’s really, really big. Especially since, as I mentioned earlier, platforms can always go back on their promises. There have been some research programs before, but we’ve seen platforms just shutting down the programs or refusing access to researchers. Having this hard legal obligation is very crucial, because we also see, mostly in the US, that data access has become part of the culture wars.
We’ve seen, for example, the Center for Countering Digital Hate: X is suing them for breaching the terms of service. You have subcommittees trying to subpoena disinformation researchers and all of this. Again, turning what used to be a practice into law, into a legal obligation, that’s very powerful. It’s great, because researchers will be able to spot new emerging risks that may not have been covered by the platform. It’s also a way to check on the platforms’ obligations. It can really make a crucial contribution to the independent auditors that will audit the platforms and to the Commission as an enforcer. What will happen basically is that the provision is for what are called vetted researchers. Researchers will have to file an application with the Digital Services Coordinator where the platform is located. And because this is only for VLOPs, it’s going to be, again, the Irish DSC.
On the DSA’s data access rules: “sleeping giant” of the law with big potential for researchers
Researchers can also apply to their local DSC, and then the local DSC will just send an opinion to the Irish one, so that’s also an option. Researchers have to show that they are affiliated with a research organization, and that’s a broad term. It’s broader than universities; it’s defined in copyright legislation. It also includes non-academic research institutes and civil society organizations, as long as they conduct scientific research to support their public interest mission. It’s also likely that it will extend to consortia of researchers, including non-EU-based researchers or journalists, as long as you have a European researcher as the main applicant. So, you have this process of being vetted by the DSC, and then it’s the DSC, so the regulator themselves, that will send the specific data request to the platform. That’s one mechanism that’s already really cool.
You also have another one, which is separate from this process. It’s an access regime for publicly accessible data. That’s real-time data, for example. It has to, again, be in this framework of contributing to the detection, identification and understanding of systemic risks in the EU. But then it can basically be all of that data: aggregated interactions from public pages and public groups, engagement data, numbers of reactions, shares, comments. The platforms are expected to give researchers access to this type of data without undue delay. The conditions to be a researcher under that mechanism are a bit easier than for the vetted researcher one. The DSA specifically mentions not-for-profit bodies, organizations and associations. This is also very interesting, because I think this provision illustrates the fine line between providing guidance and enforcing the DSA.
No guidelines have been issued for implementing this provision – it’s Article 40(12) – and so many questions have remained open about this publicly accessible data scheme. Still, the Commission has chosen to immediately go down the enforcement route and sent requests for information to all of the platforms, asking, “How are you implementing this particular obligation?” They also started an investigation against X, because their terms of service prohibit researchers from scraping publicly accessible data. There’s really a lot going on. We’re also waiting for the delegated act, which will be published very soon, we hope, and will provide way more detail and clarity about the vetting procedures, the technical safeguards, all of this. Hopefully, all of those questions will be answered by the Commission as soon as they can.
On what is missing from the DSA: “roots of ad tech system” are not addressed
Julian: That’s very interesting. I appreciate that distinction between the vetted researcher access regime and the public data access regime, and also that you highlighted some of the things that we’re still waiting for and some of the open questions. Because while it looks good on paper, I think you pointed out well what obstacles still remain until we actually get to that data access. Before I open it up to the many questions from the audience that are already coming in, I wanted to ask you one more question and come back to my very first one regarding expectations about the DSA. I want to close with what expectations we can’t have, simply because it’s not regulated in the DSA. What are some of the gaps or weaknesses when it comes to platform regulation or tech regulation that the DSA doesn’t cover? So that, as a consumer or researcher, we can’t have that expectation, because it’s just not in the DSA.
Laureline: The first thing that comes to mind is online advertising and ad tech. It was a big topic of discussion during the DSA [negotiations] whether it should be included or not. There are some provisions in there, but I think the DSA doesn’t really go to the roots of the system, and ad-funded harmful content, for example, isn’t fully addressed by it. We have lots of mechanisms that rely on consent, also in the DMA, the Digital Markets Act. And we still have the same issues that we have with the GDPR; there’s this big gap that was left by the e-privacy regulations. There’s a lot of tracking of users online and the companies keep finding new ways of tracking. I think something on ad tech would have been nice.
Maybe another angle to that question is to ask what really hasn’t been regulated by now, because we have been working so hard in the last five years to pass a lot of tech-related legislation. We have the DSA, we have the DMA, now we have the AI Act. The GDPR is very useful legislation. And I think it’s also time to properly enforce everything, because we have very powerful tools. And then to actually see the effects takes time. So, be patient, actually see the effects of all of this legislation that has passed and then see if there’s still a gap. Because I think enforcement… it’s good to have those laws, it’s good to talk about them, but it’s even better to properly enforce them.
Julian: Also two important points: the enforcement side of what we have and then maybe the gaps in ad tech. Thank you so much for this first part already. I’m very happy now to take some of the questions from the audience. I hope we can get through a lot of them. I will start with one from a German parliamentarian from the Left group, Anke Domscheit-Berg, who’s asking, “What requirements do small non-commercial and decentralized platforms have to fulfill?” She’s thinking of Fediverse platforms such as Mastodon and what they need to do to be compliant with the DSA. How can non-commercial platforms cope with fulfilling the requirements, when people probably mostly work there in their spare time and maybe don’t have the compliance departments that a big commercial platform has?
Laureline: That’s a very good question actually and I think there’s a lot of debate on how those platforms will be impacted by the DSA. I think there are some minimum requirements, like reporting mechanisms, that all platforms have to have, and actually having to answer all of these… I know it is a lot of burden for some platforms. There is an exemption for micro and small enterprises. But it is true that it’s something that needs to be thought about. I’m also thinking of, for example, Wikipedia, which is a very large online platform, so it has a lot of big due diligence obligations, and they mostly run on a volunteer [workforce]. These are also questions that need to be thought about and answered, I think, together with regulators. It’s a difficult topic for sure. But I think there’s great research and there are papers being written on those issues.
Julian: Another point where it’s just very early on in the enforcement right now. I think it can be really helpful for people to emphasize the variety of platforms that there are, especially the non-commercial ones, to figure out what enforcement works for which of them.
I have another question, this one from Suzanne Vergnolle. She asked about the draft guidelines that the Commission published on systemic risks to electoral processes. The background here is that the European Commission can issue guidelines for these very large online platforms and the Commission has decided to do that, with a consultation specifically on election integrity. The Commission has an eye especially towards the European Parliament elections coming up this year. So, Suzanne asked if you had a look at the consultation and, if so, whether you think the guidelines are tackling the right elements and will arrive in time for the next EU election, so the parliamentary elections and the national elections.
Laureline: I think that’s definitely the idea. The Commission has been saying that they really want to have mitigation measures and all this guidance in place in time for the election. We’ve seen already that the platforms have come up with election plans and measures. What’s interesting is to maybe also focus on the link to data access for researchers, to actually see and be able to analyze what the platforms are doing, what’s happening. So, I think it’s very linked to Article 40 and it’s an important part of the enforcement. But I think the guidelines are an interesting step. I mean, if civil society organizations and researchers can provide comments to the Commission – they have, I think, until the 7th of March – they really should do it, because it’s really useful for the Commission to have information on what is helpful. There’s definitely a lot of political will to have those guidelines in time for the election. And I think it’s going to be the first big test for the DSA; everybody’s going to be watching. There are a lot of expectations.
On the Commission’s investigation into TikTok: “a way to officially get information, because then if they don’t get enough information, it can be an infringement of the DSA”
Julian: Thanks for that timely question and answer. There’s another current topic that has received some attention here: Janosch Delcker, a journalist, asked a question about the EU investigation into TikTok. The EU announced that it will be investigating, among other things, whether TikTok violates the DSA’s requirements regarding risk management of addictive design. And Janosch says that the DSA itself seems very vague on this point of addictive design. He asked, “Do we know any more details about what exactly is being investigated, how the EU measures the platform’s addictiveness and what kind of addictive design would violate the DSA?” So, under the broad headline of addictive design, what are your thoughts on the investigation?
Laureline: I think it’s really great that the Commission is starting investigations, because it’s really just the Commission formally asking TikTok for information. They can do that with requests for information, but starting an investigation means that there’s really a legal obligation for the platforms to answer. I think this is what the Commission is doing: It’s just looking for information. They want to know what TikTok thinks of the rabbit hole effects and behavioral addiction. Especially since they only just started the investigation, they’re looking for very specific, very detailed information from TikTok. Then it remains to be seen what they will do with this information, whether they’re going to be happy about it or not. But really, investigations are just a way to officially get information, because then, if they don’t get enough information, it can be an infringement of the DSA. So, hopefully, we can assume that the platforms actually give a lot of information to the Commission. As outsiders, we don’t really know what’s going on during the investigation, sadly, but hopefully there’ll be some more information.
Julian: What’s your expectation of how long this investigation is going to take? I’ve tried to find information and ask people. I don’t know, are we talking about weeks or months or years? It’s very hard to say, no?
Laureline: There’s no deadline, so it can take as long as… I think it’s going to take until the Commission is either satisfied with the information that they get or satisfied that they have a good case against the platform and can escalate this.
Julian: That’s a good point that you raised, too, about a good case: The Commission is probably very keen on making sure that if they’re being challenged in court, which is not unlikely, they have an airtight case. Is that fair?
Laureline: Yeah. I think they want the strongest case possible, because if it’s the first DSA case, the first infringement decision, that’s probably going to be appealed by a platform. It’s got to be strong. It needs to pass all the tests in front of the court.
Julian: Thanks. Okay, moving on to a topic you addressed earlier, which is the out-of-court dispute settlement mechanism. How effective do you think such a mechanism can or will be? In particular, what does the requirement “to participate in good faith” in such proceedings mean? Is the platform bound by the decision or is it bound by an obligation to something else? Can you speak about the out-of-court dispute settlement a little bit?
Laureline: First of all, those bodies will be certified by the regulators, so it cannot just be anybody. But no, the important point is that the decisions made by those out-of-court dispute settlement bodies will not be binding on the platforms. That could be seen as a weakness, but then the platforms also want to comply, and you also have the possibility of going to court, as I said. But yes, the platforms have to show good faith, which is hard to define; it’s a legal term. You have some standards and I think once we see the first cases, there’s also going to be some scrutiny of how the platforms behave. Hopefully, we’ll be able to have enforcement if we see that the platforms are really just going there not in a good way, basically, but I think it’s in their interest to participate properly. Let’s see once this happens. It’s very, very new. Again, it’s never been done before, so we just don’t really know. We have to wait.
Julian: So, this is another thing that’s relatively new and another thing, as you pointed out, where the regulators and the member states have a big role to play as well with the vetting [of out-of-court dispute settlement bodies]. Thank you. Moving on to another topic that you already addressed, which is data access. There was a follow-up from Oliver Marsh from AlgorithmWatch. He asked, on data access requests, how do you think the risk can be minimized that platforms use claims about trade secrets or say, “we don’t have that type of data or that particular data”, to keep avoiding giving data? And he says it’s really hard to assess, without access, what data is or isn’t available.
Laureline: That’s very true, a very fair question. Hopefully, there’s going to be a bit of guidance again from the Commission on those questions. But because this is a legal obligation, we can also hope that the regulators, either the DSCs or even the Commission, could intervene and act like an arbitrator, especially if platforms use that trade secret card a bit too much and it really feels unfounded. But I don’t have any experience myself, so I don’t know if that’s something they often do. So, I can’t really answer this, but I really hope that the delegated act will provide some clarity on all of this and will really help, because it would be great if this provision were accessible to a lot of researchers and not just the ones that have been used to this. This is supposed to be a broad provision.
Julian: I think your points are interesting or helpful in the sense that the delegated act will probably – hopefully – provide some clarity, but also, realistically, it’s going to be a little bit of trial and error and seeing what the platforms do. Oliver and the folks at AlgorithmWatch are part of that, because they have already filed a data access request. So, between the delegated act and such requests from organizations like AlgorithmWatch and others, hopefully, we’ll get there. There is a related question on data access that I want to pose. Do you see any risk that the data access provision could be abused by researchers doing paid projects for competitors?
Laureline: Oh, interesting. Well, I think that when you apply to be a vetted researcher, you have to give a lot of information. And that includes information about funding. So hopefully, it would not be abused, but you also need to be a very specific type of researcher: They need to be independent from commercial interests and they have to disclose the funding of the research. There are a lot of conditions to be a vetted researcher, so hopefully that will prevent some of the abuse that you described from happening. But again, it’s untested, so let’s see.
Julian: I would think so too. The funding disclosures and hopefully the independence requirements might help, but the risk of abuse is probably never going to be zero. Okay, let me get to some of the other questions that we have here. One of them is, “What platforms are affected in general?” I mentioned that earlier: It’s a pretty big variety of platforms, so can you maybe provide just a couple of examples from the VLOPs and from non-VLOPs, like who falls under the DSA?
Laureline: For the VLOPs we have, as we said, 22, and the biggest ones are [covered]: X, Facebook, Instagram, TikTok, Google Search, Pinterest, Snapchat, Amazon, Zalando, so it’s very diverse. We also have online marketplaces and YouTube as well. It’s really, really broad; I think we don’t even realize all of the platforms that are actually within the scope of the DSA. LinkedIn is also one, and all the small ones, all of the marketplaces that have third-party vendors, all of them are included.
It’s hosting services and online platforms, but the most stringent due diligence obligations are really for the big ones, so the ones that have at least 45 million users [in the EU per month]. We have some porn platforms as well. I think there’s also another platform that was going to be added [to the VLOP list] soon. What’s interesting also is that there’s an obligation in the DSA to publish the number of users every six months. So, the list of VLOPs might also evolve in the future.
Julian: You already mentioned Wikipedia as a VLOP as well and you already talked about the Fediverse. As you said, there’s really a wide range of platforms covered. I want to follow up very briefly on the Fediverse question, because Friederike von Franqué at Wikimedia Germany responded to the question about how non-commercial Fediverse platforms are treated. As you said, there are exceptions for small entities, so she says the reporting rules do apply to VLOPs and to institutions employing more than 499 persons and with over 50 million euros in annual turnover. So, some of the Fediverse instances will hardly fall under the regulations. That was her take on that.
Regarding the question about researcher independence, Oliver Marsh and some of the other people asking questions here also said that the funding requirements would hopefully help. But as mentioned, there is always a risk of abuse and complete independence is probably hard to verify.
I want to use the remaining minutes for a couple more questions on risk assessments and risk mitigation measures. You already talked about that a little bit in the first part and I want to try to combine some questions here. One is about who defines risks and what risk management approach is recommended, especially regarding the difference between disinformation and illegal content that we discussed. Another question was specifically about the risk mitigation measures, so not just the assessments but actually the enforcement following the risk assessments. Maybe you can touch upon those issues of defining the risks and mitigating the risks.
On defining risks in the DSA: “We can expect a lot of different interpretations on the provisions, each platform is probably going to have its own interpretation of what risk means”
Laureline: Very good question that everybody’s asking, because, again, we don’t have any guidance from the Commission. The platforms also didn’t have any guidance and they had to do a first round of risk assessments already last summer. Now the big platforms are going to be audited on their risk assessments by independent auditors. Finally, the public will be able to see those risk assessments, I think, around November. So, we don’t know what the platforms did. This was the first time they were doing this, without any guidance. So, I think we can expect a lot of different interpretations of the provisions; each platform is probably going to have its own interpretation of what risk means.
We know that the Commission is looking at the risk assessments and hopefully, once we get access to them, once the auditors have done their audits, we can all come together and try to think about what the best practice is. I think we can expect the Commission to issue some guidance at some point. Maybe they’re waiting for the next round – there’s going to be another round of risk assessments, because this is a yearly obligation. So, this summer, they [the platforms] are going to have to do it again. But in the meantime, we have not seen the first one. So, we always lag a bit behind.
I think it’s also because it’s a new exercise and there’s no guidance; we have to wait and see. Hopefully, civil society and researchers will be able to contribute to the debate by pointing out, “This is not the right definition, this is not how they should do it”, all of this. For the moment, we’re just waiting. Academia is already thinking about what it means, but it’s only theoretical. So, we just need to see in practice how it works. That’s for the risk assessment part. Again, we have to remind ourselves that this is a self-assessment, that the platforms are doing it themselves. It remains to be seen how much of an impact we can have on those.
On mitigating risks: “The goal is not to impose a fine, I think it’s to really try to change the practices of the platforms.”
With the risk mitigation, it’s the same idea. It’s up to the platforms, once they look at the risks that they have identified themselves, to choose which mitigation measures to apply to mitigate those risks. There’s a list in the DSA, but this is only guidance, these are only suggestions. So, the platforms could decide within the list or they could just go with something else. So again, we need to see, and that’s also why we need researchers: to see if those measures are effective, if they’re adequate, but we don’t know that yet. Again, reminding ourselves that whatever the platforms do, the Commission will be able to offer some guidance and say, “Oh, actually what you did there, maybe that’s not the best, maybe you should try this. We think that this measure is the best to mitigate that specific risk on your platform.”
But if a platform does not want to implement a specific measure, they won’t do it. The Commission cannot force them to do it. They can just issue a fine. This is where I think the DSA created a space for dialogue. The goal is not to impose a fine, I think; it’s to really try to change the practices of the platforms, so that they implement actually effective measures. But let’s see how it works. It’s a bit too early to tell, but it’s really something we have to watch. Hopefully, again, civil society and researchers will be able to influence the process and to actually give some input to that.
Julian: I think the reminder that the risk assessments are self-assessments is important, and that we need to check both the companies and the regulators. Or not “we” – I shouldn’t say that, because I don’t do that research myself – but people who know how to do data analysis, for example, should have a crucial role.
Let me follow up with two last questions from the audience, somewhat related to the risk assessments and risk mitigation, that I hope I can get in. One is, “Are there obligations for the VLOPs regarding the use of social media affecting children’s safety and mental health?” Maybe you can address that as part of the risk assessment. Then Lena Steltzner, who works at Germanwatch, is asking whether there are possibilities for greening the DSA, so including climate and environmental factors in the systemic risk assessments. Very different things, but perhaps you can briefly speak to those as we close.
Laureline: For children, the risk assessment definitely requires the platforms to look into this. It’s basically the protection of minors and we can also link it with the physical and mental well-being of users. Minors are specifically mentioned in the risk assessment, and there’s also another provision in the DSA that I think is interesting: a specific provision on the online protection of minors. It’s a due diligence obligation, basically, that online platforms – so not just the big ones, all the online platforms – have to put in place appropriate and proportionate measures to ensure a high level of privacy, safety and security of minors on their services. It’s actually quite interesting to see how this will be implemented. There’s also a ban on profiling minors for the purpose of online advertising. And we’ve seen that TikTok, for example, said that they would not do personalized advertising for minors. Again, we need to check whether they actually do this as part of the enforcement of the DSA. So yes, there are specific obligations for minors.
For the environment, that’s a big thing that was discussed during the DSA negotiations but is not included in the DSA. I think it’s due to the fact that it’s not really in the EU Charter of Fundamental Rights, but I think that’s definitely a gap. There’s other legislation that could maybe focus more on the environmental angle: the corporate due diligence directive, if it passes. I think there are some talks on the issue of data centers. All of this, I think, is also looked at in other legislation, or it’s something we can ask of the MEPs going into the next mandate and the next Commission.
Julian: Thanks for those points as well. Let me just follow up very briefly on the advertising for minors. My two colleagues, Kathy Meßmer and Martin Degeling, actually did look into that and they have some questions about whether TikTok actually fulfills those obligations.
And on the point about the environment, there are also voices in academia and civil society that I want to point out, for example the researcher Rachel Griffin, who has made the argument that you can still do that. So, I think it’s another open point.
Thank you so much. We’re nearing the end here. I have some big news to share at the end of this webinar. But first, before I do that, let me just thank everyone. Thank you, Laureline, once again for talking to me today and also for engaging with the audience. I also want to thank my colleagues, Josefine and Justus, who’ve been hard at work preparing this and running this webinar in the background. And thanks to you, the audience, for listening in and for asking questions. If you’d like to stay in touch with us at SNV, please feel free to sign up to our newsletter.
Lastly, if you’ve already signed up to our newsletter, you might have already heard the big announcement that my colleagues made recently. We’ve been on a path towards becoming an even more European think tank and we want to continue down that path. Part of that development is changing our name. In the future, we will be known as interface and, if you like, you can read more about that via the link provided in the chat. I thank you all very much again for being here today.
Meet the speakers
Dr. Julian Jaursch
Lead Platform Regulation