Background Discussion: Shaping the Future of Artificial Intelligence – Comparing Strategies from Brussels and DC
Thursday, June 17, 2021, 17:00–18:00 (CEST)
Regions around the world are conceptualizing frameworks to best regulate artificial intelligence (AI) technologies. One such example comes from the European Commission, which recently published its proposed rules for trustworthy AI, the world's first proposed legal framework to regulate the use of AI. Across the pond, the US government's National Security Commission on AI (NSCAI) proposed a far-reaching reform package earlier this year. In contrast to the EU, the NSCAI's recommendations do not focus solely on regulating AI technologies, but envisage massive investments in AI so as to govern these technologies from their more nascent stages.
We are delighted to invite you to two background discussions that compare the strategies of the EU and the NSCAI, shed light on the measures proposed in both packages, and analyze potential outcomes for the European and American AI landscapes.
On June 17 at 5 pm (Berlin), we welcome Irina Orssich, who will talk to Philippe Lorenz. She is Team Leader "Artificial Intelligence" at the European Commission's AI & Digital Industry Directorate and will explain the planned EU regulation in more detail.
On June 22 at 3 pm (Berlin), Kate Saslow will talk to Ylli Bajraktari, Executive Director of the National Security Commission on Artificial Intelligence (NSCAI). We will also discuss whether ethical or trustworthy AI plays as large a role in the US strategy as it does in the EU proposal. To confirm your participation in either event, please register via the following form. Both events will be in English and on Zoom. Please click to find the transcript and video of the second part.
Philippe Lorenz, Project Director "AI Governance", Stiftung Neue Verantwortung: Hello, everyone, and welcome to this SNV Online Background Discussion. This is the first of two background discussions, Shaping the Future of Artificial Intelligence – Comparing Strategies from Brussels and DC. Part two will take place on Tuesday, June 22nd at 3 pm, between my colleague Kate Saslow and Ylli Bajraktari, Executive Director of the National Security Commission on Artificial Intelligence, who will join us from DC next week. But today, we will discuss the EU strategy with Irina Orssich. My name is Philippe Lorenz, and I'm the project director for AI Governance here at SNV, a think tank in Berlin working on various tech policy issues; my focus is on AI and foreign policy, AI standardization, and AI patents. And I think we could not have chosen a better time for a two-part series on European and US strategies to engage with artificial intelligence.
Yesterday, President Biden left Brussels for Geneva, but before he left, he restored trust among his European allies by agreeing to a tech alliance in the form of the EU-US Trade and Technology Council, which had been proposed by the European Union last December. This partnership is clearly, I guess, aimed at countering China's growing influence in the tech sector, and will be operationalized through closer tech and digital trade relations. But today and next week are less about China and more about the US and the European Union's approaches to governing AI. The European approach is presumed to regulate the use of AI in particular use cases; the US approach is presumed to leave things to the market, although the report of the National Security Commission on AI has shown that in the US as well, there will be significant government involvement, especially in fostering innovation, but also in staying competitive with Chinese tech ambitions. So, in this two-part series, we are going to go into the details of both approaches, kicking it off with our discussion with Irina Orssich. Ms. Orssich is a senior officer of the Directorate-General CONNECT at the European Commission, in her capacity as AI team leader in the unit for technologies and systems for digitizing industry. She has been responsible for coordinating AI strategies with member states, and she has been extensively involved in the European Commission's draft proposal for the Artificial Intelligence Act. She is an avid observer of the international AI debate, and I'm very happy that she has agreed to participate in this event.
So Ms. Orssich, let's start off with a question: "Why is it necessary at all for the European Union to engage in the governance of AI technologies?" Could you explain the backdrop of this initiative a bit more closely?
Irina Orssich, Team Leader "Artificial Intelligence" at the European Commission's DG CONNECT: Well, first of all, thank you very much for having me here. I'm really looking forward to this conversation and to all the questions. "Why?" Well, AI has been much discussed over the past years, not only in the European Commission, but I think everywhere. We started working on policy in 2018. Before that, for the last 15 or 20 years, we had been supporting research in the field. But then, in the policy field, at a certain moment it became clear that this is one of the topics of the future and that we need to engage with it. And our approach always had several angles. We always said that we need to promote artificial intelligence, that we need to create an ecosystem of excellence. So, we support research, but we also support partnerships. We need to make sure that we have the necessary skills, that we have plans and ideas for education, for developing AI, and that data is available, so that we can build on the data strategy and the first legal acts in that respect. So, in all the surroundings, in all the basic infrastructure, the necessities for AI, this is something we very much want to promote.
At the same time, we said that in order for Europeans to happily engage with AI, to use it as a citizen or as a company, people need to trust it. And for now – and this might be because many of us have seen nice science fiction movies or read books – people are pretty much afraid of artificial intelligence. Afraid not only, as in science fiction, that AI will take over the world, but that they can't trust it. What is going to happen with their data? Is it going to be discriminatory? So we thought that, in a way, as part of the promotion, we need to create trust. And it is against this background that we also thought: where we really need to do something, where there is really a risk, we need to come forward with a legal framework. And so, on 21 April, we adopted both a coordinated plan to foster the ecosystem of excellence and a draft legal framework for the ecosystem of trust.
Philippe Lorenz: Okay. I must say that over the last couple of years we have seen a mushrooming of AI strategies internationally, and to some extent within Europe as well. I mean, every major member state addresses its own needs with a tailored national AI strategy. And then there is the regulation as only one part of a larger package released by the Commission on April 21st. Is this also a reaction to this phenomenon? I imagine coordination with member states, something that you have overseen, is not an easy feat, is it?
Irina Orssich: It's very interesting. Actually, part of those strategies exist because we were asking for strategies in Europe. I mentioned before that we started our policies in 2018, and then we also started a conversation with the member states. We have a group of the Commission and the member states called the Expert Group on Artificial Intelligence and Digitizing the European Industry. Since June 2018, so for three years now, we have regularly met with the representatives of the member states to discuss together how we can best promote AI and how we can best build on our strengths. If one country is very strong in something, in research, in a certain field, this is something to build on Europe-wide. We don't need to reinvent the same wheel in different places. In that context, it was agreed that all the member states would have an AI strategy, but also that these strategies should be synergetic: take up similar points, work with each other, and be coherent. We first decided that in the first coordinated plan, which we had in December 2018. There, all the member states committed to putting together a strategy. Some of them already had one or more, but most of them did not. By now, most member states have adopted a strategy, and those who have not yet done so are in the process of finalizing it. So this is something where we actually work very, very well together. On all these questions of excellence and how we can promote it, this is a very fruitful cooperation.
Philippe Lorenz: Okay. Although it has dominated the press, the EU Commission's draft proposal for a regulation on AI was not – and you alluded to this point – the only item unveiled in late April by Executive Vice-President Margrethe Vestager and Commissioner Thierry Breton. Before we go into the details of the draft proposal, could you explain to our viewers what the package presented to the public on April 21 includes, besides the draft proposal?
Irina Orssich: So, there is a chapeau communication, which explains a bit the background and the reasoning; there is indeed the regulation; and then there is the so-called coordinated plan. That is the product of the cooperation we are having with the member states. It sets out how the member states, together with the Commission, can really create excellence in AI, how we can promote innovation, how we can help SMEs, how we create an ideal financial environment. So all these things which are really important in terms of innovation, in terms of developing and deploying artificial intelligence. But there are also new chapters: for example, we have AI and environment and climate, we have AI and health, we have agriculture. In all these specific fields, we also look at how we can work together on promoting this, how we can push it forward. And then, just in the context of this discussion and the discussion of next week, there is also a chapter on international cooperation, because it's clear that AI is not something that pays much attention to borders – it is a very global technology. So how can we engage internationally? How can we also create trust internationally and cooperate internationally? All this comes in the package. And we always say that trust and excellence are two sides of the same coin. In a way, the coordinated plan is really the other side of the coin from the regulatory proposal, and the two should not be seen independently, because it's really this innovation that should also help.
Philippe Lorenz: To me, it seems that the part on trust has been very well reported in the media and has also gained a lot of traction on social media. Why do you think the part on excellence hasn't been covered as extensively? Why do you think the regulation, the draft proposal from the European Commission, is the one thing from the package that stands out?
Irina Orssich: Perhaps because everybody likes to feel concerned about regulation. Also – and this is not meant nastily – everybody likes to complain about those people in Brussels regulating again. So this is probably, for many, many people, the predominant part of the package. Perhaps also because the measures in the coordinated plan are a bit more short-term – well, they're not that short-term, because they should build up a sustainable AI landscape. But still, regulation is something longer-lasting. So, very often, people are more concerned by regulation.
Philippe Lorenz: Okay, so let's dive into the regulation. Can you explain to us in a bit more detail the regulatory approach to governing AI? Why has the European Commission chosen to regulate certain applications deemed high-risk? Why this approach?
Irina Orssich: You're giving the answer in the question. We decided not to regulate an entire technology, but to regulate use cases – those use cases where we see a risk. Where we see a risk to safety and to fundamental rights. And fundamental rights are perhaps more the novelty in this regulation. But we believe that, in general, we have a very comprehensive legal framework in place. We have the Charter of Fundamental Rights, where, for example, data protection has its own article. We have a lot of regulation for consumer protection, we have the data protection rules, and we have anti-discrimination laws. So, there are many rules that apply to AI, just as they apply to anything else. But what we wanted to tackle are the risks that are specific to AI, and where we believe we can contribute to the implementation of the existing rules. We can contribute to the implementation of consumer protection, anti-discrimination, or data protection rules when we put certain requirements on artificial intelligence where we see a risk of a violation of the rules – that is, a violation of fundamental rights or of safety, respectively.
So, this is a bit of the background: we only come in where we really feel the necessity, the added value, but we do not regulate the whole technology. And then, what we did – you need to imagine the regulation as a pyramid. At the bottom, you have the big majority of use cases, which are not going to be regulated at all, because we don't see a risk. There, we do propose voluntary codes of conduct, for example by companies or by chambers of commerce. But we say it's not needed that we get in here; there is no risk – no risk we would find serious. Well, there's always a risk in life, but no serious risk.
Philippe Lorenz: Could you name an example? If these are the vast majority of AI systems – would a jeans algorithm, one that gives suggestions on the best fit for my outfit, be an example? This would be something where you could agree to a voluntary code of conduct, but not more.
Irina Orssich: Exactly, that would be an example. Or a system that recommends a book to read, music to listen to, or a movie to watch. There are plenty of systems where you have the choice of whether to use them, and where you should basically know about the choice you are making. But this is nothing we feel we need to regulate.
Then there is the next group of systems, where we feel that certain information, certain transparency, is needed. For example, if you're dealing with a chatbot, or if you're confronted with a system for emotion recognition or biometric categorization – this kind of thing. There, we find it important that you know you're dealing with such a system. But that's the only requirement: information. And then we come to the high-risk systems. This is basically the core of the regulation. What we did in the legal act is describe certain fields where we say the field is risky, or could be risky. But it's not the entire field: within a field, we have individual use cases. These use cases are in an annex to the regulation. And this annex – that's our idea. Just as a footnote: we propose all of this, and the whole thing now has to undergo a legislative process, so the outcome at the end might be something very different from what we proposed. But our idea is that we can update the use cases as they emerge within these fields, because we believe this is a very dynamic technology. So we also need to be dynamic and have the possibility to update what we consider risky – there might be things we just don't think of today. Now, in this high-risk field, we have one category, which is the safety components of products that are already subject to certain requirements – like, for example, a car. When you build a car, before you put it on the market, it has to undergo a conformity assessment. And we want to have conformity assessments as well before a product with AI is put on the European market, so that a system undergoes these checks before it is used in Europe. That also means it undergoes them once. Back to the example of the car: you build a car, you get the general label saying, "Okay, this car is safe, it can now be sold." And the same should happen with AI. Now, if an AI is in a product, like a car, an airplane, or a machine, which already needs to have these tests, then our requirements would be included in the existing test.
And then the other thing is indeed – and this is where the fields I was talking about come in – when you have something new, something standalone, where at this point we don't have such checks. This would, for example, apply to artificial intelligence in the field of education or vocational training. So this is about your access to education, access to university, but also the scoring of exams, the scoring of people while they are studying. Or it would be in the field of employment, where we have recruitment algorithms or the use of AI in the management of people – so this would be risky. We have essential public and private services: in the private services, this would be credit scoring; in the public services, this would be access to certain social services. We have critical infrastructures, for example water, gas, transport, electricity. And then we have law enforcement, where we have a number of high-risk use cases: for example, when it comes to risk analysis, to profiling, to the use of lie detectors. We have migration, asylum and border management, and the administration of justice and democratic processes. So these are the fields and some of the individual applications. And there we would then have certain requirements – requirements that will be tested beforehand. For example, requirements relating to the quality of the data, in order to avoid bias and discrimination: high-quality data sets for the training of the system. Then there would be, for example, detailed documentation of the process of building the AI. Clear information to the user of the AI – like when you buy your pills at the pharmacy, you get all the information about the indications and contraindications. We have human oversight: basically, for example, for the recruitment mechanisms or the credit scoring I was talking about, that somebody looks at the result at the end and checks it. But indeed, you have different human oversight for different types of AI. You cannot run a self-driving car with human oversight when the human has a takeover latency of at least 10 to 15 seconds. So it really depends on what you have. And then we have some requirements on robustness, security, and accuracy.
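To make the tiering just described easier to follow, here is a minimal, purely illustrative Python sketch of the pyramid: a bottom tier with no obligations beyond voluntary codes of conduct, a transparency tier, and a high-risk tier requiring a conformity assessment. All names and mappings are hypothetical, drawn from the examples in the conversation rather than from the legal text of the proposal's annex.

```python
from enum import Enum

class RiskTier(Enum):
    MINIMAL = "minimal"            # voluntary codes of conduct only
    TRANSPARENCY = "transparency"  # disclosure duty (chatbots, emotion recognition)
    HIGH = "high"                  # conformity assessment before market entry

# Illustrative field/use-case pairs taken from the conversation above,
# not from the annex of the proposed regulation.
HIGH_RISK_EXAMPLES = {
    "education": {"admission scoring", "exam scoring"},
    "employment": {"recruitment algorithms", "worker management"},
    "essential services": {"credit scoring", "social benefits access"},
    "critical infrastructure": {"water", "gas", "transport", "electricity"},
    "law enforcement": {"risk profiling", "lie detectors"},
}

def classify(field: str, use_case: str, is_disclosure_case: bool = False) -> RiskTier:
    """Toy classifier: a real assessment depends on the final legal text."""
    if use_case in HIGH_RISK_EXAMPLES.get(field, set()):
        return RiskTier.HIGH
    if is_disclosure_case:  # e.g. chatbots, biometric categorization
        return RiskTier.TRANSPARENCY
    return RiskTier.MINIMAL  # e.g. book, music, or outfit recommenders

print(classify("employment", "recruitment algorithms"))  # RiskTier.HIGH
print(classify("retail", "outfit recommendations"))      # RiskTier.MINIMAL
```

The point of the sketch is only that the annex-based design keeps the high-risk list as data that can be updated, while the tiers themselves stay fixed.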
Philippe Lorenz: To me, this echoes the AI principles debate we have long witnessed, which has been moving towards human rights and the protection of fundamental rights you alluded to. But we're running out of time – we only have five minutes left in this conversation before we jump into the questions from the public.
The last thing I want to ask you before we take on questions from the public is: could you, in brief, explain the main lessons the EU Commission took away from its experience with the GDPR? Because to some, and I think to many in the tech sector, the AI Act might look similar to what the GDPR tried to do: to raise the bar so that particular technologies have to meet certain standards within the European Union that they would not face in other regions of the world. If we look at the GDPR and the market value of big US tech companies, that market value has undoubtedly grown since the GDPR's introduction; the regulation has not stopped that. Enforcement agencies in the member states are suffering from huge caseloads, effectively hampering enforcement of the GDPR in some areas, at least in a timely fashion, and compliance with the GDPR is often said to be easier for tech platforms than for European SMEs. The draft regulation creates a lot of obligations – you just mentioned them – for companies that develop high-risk AI systems: for example, risk assessment and mitigation systems, logging of activity, detailed documentation, and so forth. This sounds like it could overwhelm European SMEs and startups, especially because, compared to big tech, they lack the AI talent pool capable of delivering what is asked – the documenting, recording, logging and building of these systems – and, of course, the resources necessary to ensure compliance. Could you, in brief, state some of the main takeaways from the European Commission's experience with how the GDPR was felt outside the European Union?
Irina Orssich: Okay, I cannot speak for the GDPR here, but we did indeed take some inspiration from it. We also know that the GDPR is only three years old, which means that in some respects its implementation is still building up. With this regulation, we were trying to take a different approach. First, you only need this conformity assessment once, when something is put on the market. So we are much more in the spirit of product legislation – really before something happens – whereas the GDPR relies very much on ex-post surveillance. And there you indeed have more cases, because under the GDPR, for every single deployment of a system, for example, a Data Protection Impact Assessment is needed, or you need to fulfill the criteria. Let's say I want to deploy a biometric system somewhere – I want to have face recognition on the streets in all German cities. According to our approach, the system needs to undergo a conformity assessment once, before it is put on the whole European market. According to the GDPR, for every individual deployment of the system you need to fulfill the rules, you need to do a Data Protection Impact Assessment, and in certain cases you need to get approval from the Data Protection Authority. So, we come in beforehand, and we come in with one assessment covering the whole of Europe, and that's it. In that respect, it should be much slimmer than the GDPR. And – this is very important – we don't want to create rules that, in the end, only companies who can afford an army of lawyers can easily comply with, while the small ones turn around and say, "That's too complicated." We can't exclude smaller companies, because what they're doing carries the same risk. But we were really trying to have very slim rules. And we also have a number of measures to help those companies innovate, and to help them with the criteria: for example, we have a provision on regulatory sandboxes, so that certain AI can be tested together with the regulator to make it compliant.
Philippe Lorenz: Okay, we could talk more about the regulatory sandboxing approach, but we need to take up some questions from the audience. So, for the first part, Ms. Orssich, thank you very much.
I'll start with the first question that was dropped in the chat: "The EU has a clear regulatory paradigm when it comes to regulating AI – for example, the risk-based and product-based approach. How would you characterize the US paradigm towards regulating AI?"
Irina Orssich: That remains to be seen. Currently, a lot is happening in the US. There is, for example, a legal act pending that would more or less prohibit facial recognition. And in general – this is no news – Europe is a bit more regulation-friendly compared to the US. In the US, you have quite some regulation at the level of the states, in particular when it comes to facial recognition. There are also a lot of discussions on ethical principles, but they don't have this regulatory approach. And recently in the US, for example – I don't know exactly what it is, a communication or guidelines for agencies – something was passed which goes in the direction of how agencies should procure artificial intelligence, or which ethical principles apply. So the discussions are there; it's similar. But in general, I would say it's a bit less regulatory.
Philippe Lorenz: Yeah, thank you. The next question is coming up here: Could you outline the extent of financial funding and the instruments planned for the coming years to promote AI technology implementation and infrastructure building at EU level, in relation to other technologies and sectors? So how does AI compare to other technologies and sectors with regard to funding?
Irina Orssich: I'm bad at the comparison. We have a lot of objectives when it comes to combining public and private funding: we are building up a public-private partnership on AI, data and robotics. We have the research programmes, but we also have the new Digital Europe programme, which addresses how to bring AI to the market. We have, for example, digital innovation hubs helping companies get tailor-made AI solutions, and beyond the research, we have testing and experimentation facilities. So we are trying to cover the whole value chain. But we are also investing in the surrounding infrastructure, for example in high-performance computers, which are just very, very helpful. And we are investing in skills, specifically also in skills for AI. So there is plenty. I hope that answers the question.
Philippe Lorenz: We hope so. Maybe I can shine some light on this and give another example to illustrate a similar problem: the funding gap between public and private R&D. In 2018 alone, Alphabet, for instance, invested more money into R&D – 18.3 billion euros – than all European companies combined, which invested 11.63 billion. So public-sector R&D budgets are dwarfed by private-sector R&D spending. Other than attracting more outside investors into the EU, which the white paper proposed, what is the Commission's plan for raising R&D investment and spending in this industry? Maybe that covers this question from the public a bit more, or helps get a grip on it.
Irina Orssich: What we are also doing, for example, is an investment fund for AI and blockchain, which works like a venture capital fund and comes in at a certain moment to help companies with financing. And indeed, we now have the big post-COVID Recovery and Resilience Facility, where a lot of money – 20% – is earmarked for digital, and a number of countries are using it to invest in AI as well. All this public investment should indeed trigger private investment; that is one of the plans. So: to use the money to help companies make certain investments, and to give them the incentives to do so.
Philippe Lorenz: So, public as a push factor to get private sector funding rolling.
Irina Orssich: Yeah.
Philippe Lorenz: That's an interesting concept.
Irina Orssich: For example, we're planning a so-called Adopt AI Programme to really help the public sector procure AI from the private sector, which should then also be an incentive to develop certain things.
Philippe Lorenz: Right – technology transfer into the public sector. This is what the US has excelled at for the past decades, I'd say. Let's take up another question from the audience. A participant notes that the recent NSCAI report, which we will cover extensively next week, portrays the global AI landscape as a bipolar arms race – it's true – between the US and China. Europe doesn't seem to be taken terribly seriously as an AI power in Washington, other than as a useful junior ally. Is that an unfair view? And if not, would it be a problem if the EU were to become a second-tier force in this field?
Irina Orssich: I think the reason we are not very prominent in the report is also that the US is less afraid of us. China has indeed made it a huge government priority to become a world leader in AI and is massively putting money into it. We also see now that there is a lot of insecurity around AI, and that there are reasons to be careful. But I still believe that we have not lost the battle. Where we are not good is in all the questions of platforms: with all the big Googles and Facebooks of this world, Europe is indeed very much behind. But look at our strengths – for example, we are pretty much leading in robotics; perhaps not the only ones, but we are very good. If we look at the European industry, where we have our strengths, German mechanical engineering would be an example. We have a lot of things to build on. We also have a lot of data to build on – and here comes a parallel: the data strategy and the strategy to create data lakes, to make these data available to build up AI. There are many fields where we have a lot of advantages. We also still have a lot of advantages when it comes to skills and the quality of our education systems.
So, I wouldn't say yet – and I hope I will not have to say it in a couple of years – that we are somewhere far, far behind. But what we are trying to do, and this is everything I was saying about the strategies before, is to build on our strengths, on what we already have. It's just that this is not the classical business-to-consumer model, like the Facebooks and Googles of the world, but something more embedded in industry. It's less visible, but we have our strengths.
Philippe Lorenz: That's true, but multinationals are using tech to enter traditional European markets as well. One example is the automotive sector, where we see that the value a car embodies is shifting towards the software stack and moving away from traditional – you mentioned it – machinery and equipment, something we Germans are very proud of and consider a competitive advantage. So, regarding this approach of boosting these strongholds, these strong industrial sectors, and helping them achieve self-reliance through machine learning systems that work on large amounts of data: how do you factor in that you have competition from tech companies moving into your industries? And that it's difficult to build top-notch machine learning systems at SMEs located far, far away from the talent base that a city like Berlin, for instance, offers its local labor market? How do you want to ensure that the industrial base has access to this sort of technology in order to push economic productivity?
Irina Orssich: This is manifold. First, it is indeed a wake-up call for the European industry. And I think, for example, the car manufacturers got that call and have understood that they have to invest massively; they have already bought companies for all the car maps and everything – because, yes, Google Maps is indeed a huge advantage. I think there are a lot of things going on now. As for the smaller companies, we have this network of digital innovation hubs. In every region in Europe, there is a so-called Digital Innovation Hub; in every country, there should be at least one that specializes in artificial intelligence, but they all have good knowledge. This is where companies can go to discuss how they could best build AI into their processes, and what the tailor-made solutions are for them. These innovation hubs will help small and medium-sized companies do that. And if needed, they will also go on a roadshow, in cooperation, for example, with the chambers of commerce, to get into the more rural areas. So not everybody is expected to come to Berlin or Paris; they really try to reach their potential customers where they are. This is a network we have been running for the last few years, and we are now investing more money into it, to make sure that, let's say, a company from Romania also gets a tailor-made solution – and if the expertise is not in Romania, that they can talk to somebody in Italy, Germany or wherever.
Philippe Lorenz: Okay, that's interesting, because when you talk about artificial intelligence, or more specifically about machine learning, you have to address particular technology resources such as data and hardware, but also the AI software or frameworks used to build machine learning products, and the talent pool. What strikes me most is that the talent pool in machine learning and advanced data science is a rather different asset when you compare SMEs and tech companies – there's a huge difference. I think tech companies have the entire talent pyramid, from advanced researchers to coders, whereas at SMEs the advanced machine learning researchers may be missing.
But let's move away from the talent discussion and take another question from the audience: How much sense does it make to check an AI software system only once before it goes on the market? The software, including data sources and AI capabilities, might change very quickly with future software updates that are not part of the conformity assessment. How does conformity assessment fit into a dynamically changing software world?
Irina Orssich: There is indeed a balance to strike. What we now propose in the legal act is that if there is a significant change in the system, it needs to undergo a second conformity assessment. Now, when we're talking about self-learning systems, one could ask, "Okay, but then do you need to do that every week, or every day – how often?" The significant change also relates to the purpose of the system. So, in a way, you can already build a number of eventualities – what happens with self-learning systems – into the first conformity assessment. But if something changes significantly, then there will be another conformity assessment.
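As a rough illustration of this trigger logic, here is a minimal Python sketch, under the assumption – ours, not the proposal's – that a "significant change" can be approximated as a changed purpose or behaviour outside the eventualities anticipated in the first assessment. All names, types, and examples are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class SystemVersion:
    intended_purpose: str
    anticipated_behaviours: frozenset  # eventualities covered by the first assessment

def needs_reassessment(assessed: SystemVersion, updated: SystemVersion) -> bool:
    """True if an update counts as a 'significant change' in this sketch's sense."""
    if updated.intended_purpose != assessed.intended_purpose:
        return True  # a changed purpose is always significant here
    # behaviour outside what the first assessment anticipated is significant
    return not updated.anticipated_behaviours <= assessed.anticipated_behaviours

v1 = SystemVersion("credit scoring", frozenset({"weekly retraining"}))
v2 = SystemVersion("credit scoring", frozenset({"weekly retraining"}))
v3 = SystemVersion("credit scoring", frozenset({"weekly retraining", "new data source"}))

print(needs_reassessment(v1, v2))  # False: routine self-learning update, anticipated
print(needs_reassessment(v1, v3))  # True: goes beyond the anticipated eventualities
```

The sketch mirrors the point made above: routine self-learning within the envelope declared at the first assessment does not re-trigger the procedure, while a changed purpose or an unanticipated behaviour does.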
Philippe Lorenz: Okay, we have a very specific question about the health care sector: The whole domain of health care seems to fall under high-risk regulation, even if in some subdomains or applications there is much less risk. Is the European Commission open to refining the classification based on feedback, for example similar to the Medical Device Regulation?
Irina Orssich: We are always open to feedback. As I said before, we are now starting the negotiations with the Council and the European Parliament. But right now there is also an online consultation on the proposal, which is still open until the beginning of August – that is the typical way to give us feedback. And if there is something we overlooked, if it is reasonable to take something out, yes, this is something we can do.
Philippe Lorenz: Okay – though it may look rather different after it has run through the European Parliament, and the Council also has a say in what happens to the draft. Another question: Maybe it's just me, but could you please elaborate on the burden-of-proof issue when it comes to people who feel they have been illegally discriminated against by automated decision-making? How will consumers, citizens and even judges be able to address this?
Irina Orssich: Good question. There, you really apply the normal anti-discrimination legislation. We do have the conformity assessment, and what will happen is that the authorities that do the conformity assessment will, in case of a complaint or an incident, provide all the data to the anti-discrimination authorities or to the respective court. They will be obliged to transfer the data from the conformity assessment. At the same time, every system has certain logging and documentation requirements, and in such a case, all this data will also have to be transmitted. This is how we want to create a certain transparency and explainability. The rest of the procedure would then follow the normal anti-discrimination rules.
Philippe Lorenz: Okay, this is an example of where it interlocks with pre-existing EU legislation.
Irina Orssich: The same would also apply to the data protection authorities and the consumer protection authorities, so all these authorities would have the right to obtain the documentation. And we are also planning a European database, where all the systems that have undergone the conformity assessment get registered. Everybody will have access to that registry – not to all the information, but so that there is a minimum level of transparency, and so that, if there is an issue, people know: okay, this is what I can do now.
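To picture what such a registry entry might look like, here is a purely hypothetical Python sketch in which only part of each record is publicly visible. All field names and values are invented for illustration; the proposal itself defines what the database would actually contain.

```python
from dataclasses import dataclass, asdict

# Fields exposed in the public view of the registry (illustrative only).
PUBLIC_FIELDS = ("system_name", "provider", "high_risk_field")

@dataclass
class RegistryEntry:
    system_name: str         # public
    provider: str            # public
    high_risk_field: str     # public, e.g. "employment"
    technical_file_ref: str  # restricted: for supervisory authorities and courts

def public_view(entry: RegistryEntry) -> dict:
    """Minimum level of transparency: expose only the public fields."""
    data = asdict(entry)
    return {field: data[field] for field in PUBLIC_FIELDS}

entry = RegistryEntry("HireRank", "Acme GmbH", "employment", "internal-ref-123")
print(public_view(entry))
# {'system_name': 'HireRank', 'provider': 'Acme GmbH', 'high_risk_field': 'employment'}
```

The split between public and restricted fields mirrors the idea described above: citizens see that a system exists and in which high-risk field, while the full technical documentation flows only to the competent authorities.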
Philippe Lorenz: Interesting. Okay, moving on to the next question from our audience: You framed the regulatory intention as a kind of pyramidal regulation. However, the draft proposal only covers the small tip of the pyramid – what about all the space in between? Why not draft, for example, an overarching transparency principle for many more cases or fields?
Irina Orssich: We didn't find that there was that much of a risk there, so it wasn't needed to have more overarching transparency principles for everyone. Also, for most of the applications where one would think of such a principle, we anyway have the data protection rules, which also provide some rules on how to communicate. This was the result of our assessment of the situation. Again, we are really happy to get feedback on this. If people believe that there are more fields where transparency is needed – well, our objective was always to strike the balance between innovation and obligations on companies.
Philippe Lorenz: Okay, then the next question, with regard to the US and China, from a listener: To what extent has the EU taken into account the AI activities and AI regulations of the United States and China?
Irina Orssich: AI regulation is easy to answer, because we are the first ones – the first to have comprehensive regulation. And the AI activities we very much took into account; we actually got very much inspired by them, positively and negatively. We were touching a bit on that before. We want to be competitive; we need to work on our competitiveness. We know that both these countries are very strong – China from the state point of view, the US, as was said before, because there's a lot of private investment. And we are now trying to define a European way to be successful.
Philippe Lorenz: Okay, maybe a last question from the audience, and then I'll wrap up and try to find a bridge to next week's background talk. So: when you talk about education, which levels are meant? Schools, universities, vocational education? And does high risk only apply to tests that decide access to an institution, or also to advancement to another class or level?
Irina Orssich: On the second part: it applies in general to AI in education – to access, but also to the assessment of pupils or students. So yes, it applies to both. And when we talk about education, we are talking about all of these levels. Whatever is part of an education system, we are talking about it.
Philippe Lorenz: Okay, thank you, Ms. Orssich, for holding your ground on these questions from the audience – they come as a surprise to us, too. So thank you very much for answering them. To bridge to next week's background discussion, I'd like to know your feeling: how do you explain to the Biden administration that, after the General Data Protection Regulation (GDPR), yet another regulatory project from Europe is targeting its most valuable companies' business models? At the same time, there seems to be a lot of activity towards bridging the gap in the transatlantic debate that was caused by the Trump administration. So there is much room, I think, for cooperation, but there also seems to be at least some tension with regard to the effect the draft regulation will have on tech companies' business models. What's your feeling?
Irina Orssich: Perhaps I should first make the point that when we come to the implementation of the regulation – so, we are putting these requirements on companies – we want to implement them by means of standards. Really by standards that are easy for companies to apply, where they don't need to interpret some abstract wording but have very concrete technical requirements. The standards would translate the legal requirements into technical requirements, so compliance would be relatively easy. Now, when it comes to the standard-making activities, companies will participate – companies will be part of it. We have European standardization organizations, and this is also how we create European standards: they make standards for Europe, which are then published and become so-called 'harmonised standards'. But indeed, we are also cooperating very, very strongly with the US when it comes to standardization, because it always makes sense for all industries to find agreement on worldwide standards. That helps industry, but it also helps us when we are somewhere on holiday and buy a product, to make sure it also works when we go back home. And one of the ideas with the new US administration is indeed to work closely together on standards: to see how we can make this easy for all companies, so that companies on both sides can benefit and export; how we avoid building up barriers; but also how we avoid having a too cumbersome process for Europeans.
Philippe Lorenz: So a lever for pulling through with the regulation is going to be technical standards. The door is open in the draft, especially in Article 40, for technical standards, and companies are driving the development of technical standards. What's striking to me is that we have had this conversation at the international level since about 2016 at ISO, where companies agree upon technical standards for things such as fairness, transparency, and accountability, but the discussion within Europe has only just begun, and European standardization organizations are only now beginning to draw up standards that have in mind the particularities of the European single market and of European rights and values.
Irina Orssich: But the Europeans were also engaged in the ISO discussions.
Philippe Lorenz: They were, they were.
Irina Orssich: So, in a way, the question is what the necessity was. The important thing is to engage in the discussions. And I saw that in some of the fields they have tremendous experience – for example, my favourite example is always biometrics, which is what I know best. So this is a standardization organization to work with, or at least to engage with.
Philippe Lorenz: Absolutely correct. We have to wrap it up. Thank you again, Irina Orssich, for this interview and this background discussion. The second part will follow next week. If you're interested in receiving invitations to upcoming background talks and updates on SNV papers in this field, please sign up for our newsletter, and I hope to see you next week. Thank you very much for your participation, and thank you very much for sending in these great questions. All of you, please have a nice evening. Talk to you soon. Bye.
Irina Orssich: Thank you, bye bye.