Background Discussion: Data Science in Policy and Administration - Experiences from the Netherlands
Wednesday, 06 July 2022, 17:00 - 18:00 (CEST)
In the background discussion "Data Science in Policy and Administration - Experiences from the Netherlands", Pegah Maham, Lead Data Scientist at SNV, spoke with Henk de Ruiter, Head of Risk Management and Intelligence at UWV, about their experiences with data science methods and tools in policy and administration.
Please find a recording and transcript of the event below. This transcript has been edited for clarity.
Pegah Maham, Lead Data Scientist, Stiftung Neue Verantwortung: Hello, everyone. Hello, Henk de Ruiter. And welcome to this SNV online background discussion on “Data Science in Policy and Administration – Experiences from the Netherlands”. My name is Pegah Maham. Before introducing our guest and jumping into the questions, let me say a few words about Stiftung Neue Verantwortung and myself. The Stiftung Neue Verantwortung is a nonprofit, independent think tank based here in Berlin, working at the intersection of public policy and emerging technology. Within our think tank, I lead the data science unit, where we integrate data science methods into our work and produce data-driven analyses, visualizations, and products in collaboration with the experts here at SNV. We're also working on best practices for implementing data science in policy organizations, such as ministries or think tanks.
And Germany has started building data labs in all federal ministries this year, with a budget of €240 million. They're now faced with the challenge of identifying, implementing, and evaluating data-driven opportunities for improving their work and generating evidence. And since using data science methods for policy is an emerging application, learning from each other – from other countries and their experiences – is immensely helpful. The Netherlands has already gained experience in doing so, which is why I'm talking to Henk today. Thank you so much for being with us.
Let me quickly introduce you to our audience. Henk de Ruiter leads UWV's risk management and intelligence unit with a team of 60 people. This is a data science unit within the law enforcement division of UWV, where he's responsible for all algorithms in law enforcement practice. Prior to joining UWV, Henk led Deloitte's AI team. Thank you so much for joining us today.
So, jumping into the first question, Henk. To give us a better understanding of what we're going to talk about today, can you introduce your work, your department, and UWV?
Henk de Ruiter, Head of Risk Management and Intelligence, UWV: Of course, I can, Pegah. And it's an honor to be here in this webinar. I was looking forward to sharing the experience we have in the Netherlands surrounding this topic, so thank you for the opportunity to talk about it. So, a little about UWV. UWV is an autonomous administrative authority, which means that there is some distance between regulators and executors. UWV is responsible for the execution of the social security laws we have in the Netherlands, so all the laws regarding people who are not able to work due to unemployment or sickness. Within UWV, we have almost 20,000 people working on the execution of those laws. And there is a bit of law enforcement coming into play as well: we have to prevent people from misusing the regulations that we execute. That's where my department comes in: we do a lot of risk management regarding fraud and misuse of the benefits we provide. There are a lot of professions coming into play, and we are trying to make it impossible to commit fraud with our regulations. And that's where the intel part of the department comes in: we have the responsibility to detect fraud as soon as we can, and it's quite logical that we use a lot of data in that area and try to build algorithms to detect fraud and misuse as early as we can. So, that's basically my work and, I must be honest, my hobby as well. I've been doing this since 2020, and we have almost – well, let's say – 30 FTEs in the data science area, with whom we develop and run the models we have at UWV.
Pegah Maham: So that's 30 full-time employees – half of the 60 people.
Henk de Ruiter: Indeed.
Pegah Maham: And when did you start using data science methods at UWV?
Henk de Ruiter: Well, that's quite complicated to explain, because I think working with data has been in the public sector forever, right. Fact-based decision-making has been there for, well, as long as I have been alive. But the real use of data science technologies goes back to, well, 2010, somewhere in that area, where we first used data science techniques in, let's say, the client service delivery areas – for example, predicting the number of phone calls that UWV would receive, well, to get the basic service delivery done. And in the fraud area, the simpler uses of data science, let's say straightforward selections, have been there since, well, 2003 or 2004. The more complex data science techniques, such as random forest models and that kind of stuff, have been in place since, well, let's say, 2018.
Pegah Maham: Right. So that's already been four years.
Henk de Ruiter: Yes.
Pegah Maham: Thanks for this introduction; I think the context has become clear. And you already touched upon this question a bit in your answer: people have very different understandings of what data science is. It's not clearly defined – it's an umbrella term. What is your understanding of data science methods?
Henk de Ruiter: Well, I tend to define data science as the more complex algorithms we use to predict the behavior of our clients in the future or to explain complex problems we have. So, I tend to see everything more complex than, well, plus and minus as a data science solution.
Pegah Maham: I see. Okay, everything that is more complex than adding and subtracting, including things like forecasting. Thank you so much for this – it makes sense to be on the same page about what we're talking about. And to get a better grasp through examples: can you give us an example of a past or current project that you find particularly interesting, and what role data science played in it?
Henk de Ruiter: Yeah, there are several. And I think it is always difficult to choose one, but I've selected one: the topics surrounding fraud in the health care area. So, we have people that apply for benefits when they're not able to work due to sickness or handicaps. And the big question we have is which persons are sick and which persons are, well, basically lying or committing fraud. We do know that, well, let's say 99% of the people that apply for a benefit are doing so for very good reasons and are entitled to receive that benefit. But there is always a small percentage that is trying to commit fraud with the rules that we have. So, the basic dilemma we have, which is a classical data science issue, I guess, is that we have a really big population, and we need to find a needle in the haystack. And that's where data science comes into, let's say, the health care fraud issues we have. So, we approach this with what we would call a system screening. We collected lots of data to detect, let's say, patterns in the data that are different from the regular patterns, so we can ask more focused questions about what's going on and how we could approach that problem. So, there are a lot of examples that we have, and there is a lot of complex data science going into that area.
Pegah Maham: And just to understand broadly: the labels for this kind of classification come from previous cases where you know the label?
Henk de Ruiter: No, that's the big trick. We didn't have any labels when we started. So, we just used the data that we have to raise questions about certain parts of the population, to be labeled as fraud or not. We're using an approach that, in my experience, is really important in the data science area and in the public sector: to, well, basically rebuild the training sets that you have, right, because otherwise you would extrapolate the policies you had in the past and bring a lot of bias into your solution. So, we are deliberately not using old labels.
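[Editor's note: to make this label-free screening concrete, below is a minimal sketch of unsupervised anomaly detection, assuming scikit-learn. The data, the feature count, and the IsolationForest choice are illustrative assumptions, not UWV's actual method.]

```python
# Minimal sketch of label-free screening: instead of training on
# historical fraud labels (which would bake in past policy bias), an
# unsupervised model flags cases whose patterns deviate from the bulk
# of the population. All data here is synthetic.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
# Stand-in for behavioral features of 10,000 benefit cases.
X = rng.normal(size=(10_000, 5))

# The forest scores how easily each case can be isolated from the rest;
# `contamination` sets the share of cases flagged as anomalous.
screen = IsolationForest(contamination=0.01, random_state=0).fit(X)
flags = screen.predict(X)  # -1 = anomalous, 1 = regular pattern
suspicious = np.flatnonzero(flags == -1)

# Flagged cases go to human investigators, whose findings can then
# serve as fresh, policy-neutral labels for later supervised models.
print(f"{len(suspicious)} cases flagged for manual review")
```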
Pegah Maham: Okay, interesting. For us here at SNV, and also when talking to other people working at this intersection, it proves rather difficult to identify good use cases for data science methods, especially at the beginning, when experts in policy and administration don't have a data science background and, as a consequence, can't know which problems can be solved with data science. How do you, at your department, come up with ideas for what data science can solve?
Henk de Ruiter: Well, this is a rather difficult question as well. So, let me answer this from how I approach this business as a leader of my team. The basic idea that I have, and that I spread within the organization as well, is that every problem can be solved with data; there is no problem that we can't solve with a data science solution. One problem will take a bit longer than another, but we can solve everything with a data science solution. That's the approach I take. And the main focus that we put into these kinds of questions is really to understand what the problem is and what the problem looks like. That is where the most talking is needed: what is the actual problem, and what are you willing to do to solve it? In my experience, that's more complex and more important than building the data science solution.
Pegah Maham: Yeah, I can second that. To understand correctly: is it that you really believe that all problems can be solved with data science, or is it a mindset that is most conducive to success?
Henk de Ruiter: Both. That also has to do with how you define data, right. Data can be everything, right, from pictures to audio files, to video files, to more structured datasets as well. And when you limit data science to only the structured datasets, then, well, my belief would have a small problem, I guess, but when you extend it to a wider view of what data can be, then I strongly believe that data can solve almost every problem we have.
Pegah Maham: So, I hear you're really optimistic about the potential of data science. Given that, I imagine you have an abundance of project ideas and opportunities. How do you then decide which ones get implemented, given the constraint on how many projects you can run?
Henk de Ruiter: Yeah, well, there is a very simple answer to that. The big implementations we have now within UWV are basically crisis-led. Now, this is not an invitation to all the data science managers to cause crises in order to implement your solutions, that's not what I'm talking about. But it can help – well, in the Netherlands, we had a huge crisis surrounding fraud on the unemployment [...] that really helped to push forward the ideas to implement those solutions. And for the ideas that weren't coming from crises, we really focus on what's in it for the people that are using the solution we developed and how it contributes to the goal UWV has. And next to that, a certain amount of opportunism helps as well. So, we're looking for the things that contribute to the goal of UWV, that help people in their daily business, and that are, well, basically really easy to build and implement as well.
Pegah Maham: A user-centric approach, if I'm understanding correctly?
And in your last sentence, you said they should also be easy to implement. That leads to my follow-up question: when do you decide to cut off an idea – where you have tried and maybe realized that this is too complicated for now, the data is not mature enough?
Henk de Ruiter: Well, we have developed a process within UWV for how to develop the algorithms, or risk models, that we use – we call it the MRM policy – and we have seven value gates at which you can cut off projects over ethical issues, privacy issues, technology issues, you name it. All sorts of issues that you can have when you're running a data science project. But the main focus is on how the end user is responding to the problem that we're solving and whether, when we build this, he or she is going to use it. And when that belief falls away, then we cut off the project.
Pegah Maham: Super interesting. So, there is both the checklist of general reasons, but if you realize that the user will not use it, for whatever reason, that's also a signal for you to postpone or stop it.
Henk de Ruiter: Of course. And that last cut isn't the easiest one, right, because I'm a believer, right, and I put a lot of effort into not getting to that point. But there are some problems that are, well, too tough to solve with a data science solution at first, so you probably need to have some simpler solutions in place. So, it's mainly a big transformation that's going on.
Pegah Maham: Do you have an approximate number – how many projects get cut off, how many at the seven gates, and how many because users in the end don't use them?
Henk de Ruiter: Well, I think most projects that we sacked, or that we killed, were due to privacy issues or ethical issues. That's our main focus: to do it correctly and not have discussions on that. And to be honest, the GDPR issues that we face in data science projects can be so complex that the discussions take too long. So, that is our first step, and I think that is the biggest step that we need to take.
Pegah Maham: Yeah. I think most data scientists feel for you with regard to GDPR. As you already mentioned, social security is a very sensitive topic touching upon many ethical questions, in this case because it affects people's livelihoods. And what data science optimization does is shift human decisions to code. As you said, this entails risks, and challenging the underlying assumptions and blind spots can be important to minimize those risks; public scrutiny can help with that as well. So, my question is: what kind of questions reach you from the public, and what rules do you have for transparency?
Henk de Ruiter: There's one thing that I need to make very clear: we do use algorithms within UWV in the fraud area, but the algorithms are not deciding on people's lives, and we're not using algorithms to make decisions on any of the benefits that we give people. There are always humans in the loop that do the decision-making, and they are not making decisions based on the algorithm but on the facts that they encounter during the research. So, the algorithms are pointing out where we need to do research, but they are not handing out decisions. That's very important to realize, and that's also the most important quality gate we have in the business that we live every day.
Coming back to your question, Pegah: the questions we receive from the audience are mainly about the features that we use in the algorithms and whether we're not by accident using biased datasets or biased training sets, so that we are, well, basically building racism into our algorithm. Those are the main questions that we get, and the most complicated to answer as well, because, as I already mentioned, the algorithms that we use are in the fraud detection area, and there it is not helpful to share the features that we use, because the people that are trying to take advantage can then easily avoid the control mechanisms that we have. So, that's a really complicated area. And there is a very clear call for full transparency about the algorithms that we use. And basically, we can't give that transparency in full, because we would then, well, basically create a situation in which we would have to stop using the algorithms, right. So, those are the difficult questions that we have.
The approach that we take in this discussion is that we, well, basically developed our MRM policy, our model risk management policy, in which we describe how we develop algorithms, how we test them, and how we monitor them. And we also have an ethical data science compass at UWV, in which we describe, well, what ethical approaches to the use of algorithms look like. Based on both of those instruments, or policies, we always have our algorithms validated by an external auditing party, to make sure that we're in control and we're not using a racist approach in the algorithm. So, we have a lot of checks and balances in the process, and we're in the process of basically putting the validation reports from the external validators on our website, so the audience can really see what we're doing and that we're doing it safely.
So, that's how we try to approach this discussion. But it is a complicated one and, to be honest, one that is, well, basically in the middle of the societal discussion as well. So, I think the last word hasn't been said on this topic, but this is how we approach it right now.
Pegah Maham: Yeah, it is a tricky tradeoff. And I guess the reports on your website might even be helpful for other countries' governments, to see what experiences you've had with this tradeoff.
And, I mean, these are so far, if I understand correctly, your voluntary checks and balances. But the European AI Act addresses AI systems, especially those it defines as high-risk, and under this category fall law enforcement uses that may interfere with people's fundamental rights, as well as systems used for essential public services. As you can see, these two things very much apply to your work. How are you anticipating the European AI Act's effect on your work?
Henk de Ruiter: Well, I am not an expert on the AI Act that's coming toward us, so I can't give a full answer. But I do think that the measures we take within UWV, and all the policies and technical barriers that we put in place, are a lot stricter than what we see in the AI Act. So, I'm not afraid of the AI Act coming; I think we're way stricter in the policies that we have internally. We're not afraid, and I think we're fully prepared for it.
Pegah Maham: Okay, thank you. Then the last question from my side before we go to the audience's questions. As I said, using data science in government is a new discipline – I think you touched upon a bunch of these things – and sharing mistakes will help governments learn from each other. What are the main lessons learned that you would like to share with us? What pitfalls can we avoid, for example, here in Germany?
Henk de Ruiter: Well, I think there are two main lessons that I would like to share here. The first one: in the change strategy adopted a few years ago, before I joined UWV, there was too much trust in data science too quickly. You know, there was this saying that data science would replace everybody within the company, that it would be the new medicine for cancer, and that we would solve hunger and war in the world as well. I think that put too much pressure on the data science team to deliver, and, well, basically there were expectations that couldn't be met, right. So, next to my optimism that data science can solve everything, you need to be realistic about when you can solve what and what the impact will be for the people working in the organization. I do believe that data science has a big impact to make, and it will, but not tomorrow and not the day after tomorrow. So, be realistic about that.
And secondly, be realistic about the preconditions, about the things you need to have in place to start a data science unit, right. Building a data science lab can be complicated, but it's the easy part; implementing algorithms in your daily business, in the operational core of the organization, is a really different ball game. That's complex, and you need to think it through before you start experimenting. Because if you have to put all those things in place only once you have your first working algorithm, you have a big, big, big challenge, with lots of investment and lots of time. So, be realistic about what to expect from data science, but be realistic about what you need to do to make it work as well. I think those are the main lessons I learned in the last couple of years.
Pegah Maham: Thanks for sharing these lessons with us. So far, these were my questions. Now I want to go to the questions from the audience. The highest-ranked question is from a participant who asks, "The unit at which you work is largely focused on fraud detection; I'm wondering what role artificial intelligence, machine learning, and data science methods can play in contributing to other values, such as more just or effective decision-making?" So, the use cases in general.
Henk de Ruiter: Yeah, that's an excellent question as well. So, obviously, I'm leading the risk management and intelligence department within UWV. But we have several other data science teams within UWV as well. I think we have close to 100 data scientists within UWV – very skilled and talented people. And we have projects in the area of service delivery as well. We're using data science to understand what people are doing and how we can improve the service delivery of UWV. For example, we all know that certain target groups and audiences respond differently to communication than others, and we are really using data science to find the sweet spot in that area as well. And in how we manage the company: technologies such as process mining and that kind of thing we use a lot as well.
Pegah Maham: Okay. So, to clarify, this is one specific team that is focused on fraud detection, but there are other teams working on these other values.
Henk de Ruiter: Indeed.
Pegah Maham: You touched upon the teams there. One question that I also hear from many people trying to hire for government is: "How do you currently find and pay data scientists for government projects?"
Henk de Ruiter: I have kind of a big smile on my face, because I'm very proud of the team that we have in the Netherlands, and I think that I and all the managers in the data science area are very proud of the teams that we have. We have little difficulty finding new data scientists and engineers, especially the youngsters, right, the new joiners with little experience. We find that the pay rates we have for those functions are really good, and people, especially the younger people, are really joining because they want a job where they can add something to society, right. And we really use that in how we approach the workforce. We do have difficulties attracting the more senior hires; I think almost every company has difficulties with that. And basically, the pay rate we have in the public sector for those more experienced hires makes it very difficult. So, we have people joining because they really want to mean something, and they are not joining for the money. We really put forward that special aspect of our work when we're trying to find new people.
Pegah Maham: And how do you do this? How do you put this forward and show that the job can be attractive?
Henk de Ruiter: Well, we have really big campaigns on that. Finding staff is not only complex in the data science area; it is complex in all the functions that we have, given how the labor market in the Netherlands is at this moment in time. So, we're really promoting, well, the strong points of UWV, and that's that people can add something to society and do something useful with their life, other than making more stock value for a very few people. That's what we really put forward in our communications, in how we approach the labor market, and in our inner circles as well. And we do campus recruitment, in which we participate in Master's programmes and give guest lectures as well.
Pegah Maham: Very cool ideas to hear about. The next question comes from another participant, who asks: "How have you dealt with ethical questions regarding the use of data science methods to identify, for example, welfare or healthcare fraud? What were the internal processes to answer ethical concerns and ensure an approach that safeguards societal values as well as the fundamental rights of data subjects?"
Henk de Ruiter: Yeah, I think this is going to be quite an extensive answer. Within the Netherlands, we have several institutions that look into this kind of question. In the Netherlands, we have something called the IAMA, an impact assessment regarding human rights. There is a very extensive paper written about it that all governmental bodies are obliged to use. To be honest, it is a checklist of almost 100 pages with a lot of questions that we need to answer before we start a project.
Next to that, we have an ethical commission at UWV as well. It's mainly an external commission, with some professors from the university, that advises us on how to approach ethical questions. We are just starting with it, so we are a bit in the discovery phase of how it should operate, but it has a direct link to the board of directors to, basically, advise on the algorithms that we use. That's the ethical point of view.
Next to that, we have, well, basically the GDPR point of view, in which we do all the assessments on privacy that are needed. And next to that, we have our own internal quality framework, which I already mentioned: the model risk management framework. So, we are basically approaching this from three angles. From the ethical point of view, we have the IAMA – I'm not sure if there's a German version of the IAMA, but we can discuss that later – and on top of that, we have the ethical commission.
Pegah Maham: So, various processes. And if I understand correctly, you've just started the collaboration with academia, with the professors?
Henk de Ruiter: Indeed. We installed the ethical board, I think, somewhere at the beginning of this year.
Pegah Maham: And from which fields do the professors come?
Henk de Ruiter: I am not sure, but I think we have a professor who specializes in the ethical area and the technology area, and in the combination of those two fields, and we have a data science professor as well.
Pegah Maham: All right. Then I'll move to the next question. They ask, "What were your experiences and learnings at the beginning of introducing data science into the public sector, especially from a cultural perspective?"
Henk de Ruiter: Yeah, well, I'm an optimistic person, right. So, I've been enjoying that journey from the moment it started. But there is an honest answer to that as well. It has been a struggle to get it implemented because, in the social security area, people are used to doing things as they have always done them, right. Data science has the potential to be kind of disruptive in that area. So, there has been a lot of convincing and sharing going on, or spreading the word, as you might say. I think when we started, there were days that I had, well, let's say, four to six presentations in a day to share what we were doing and what the potential was. So, it takes some experience to get it done, to have some concrete examples of what it could be, and it will take some time and some confidence to get it done. It's not the kind of project where you have one good idea and then the world changes. Well, a lot of work goes into change. And that's cool to do – at least from my perspective.
Pegah Maham: To follow up on this, a related question: you talked about how you need patience to build up the foundations, to explain things, to convince people. There's one question focused on the other side of things, coming from a participant who asks, "How much of your and your department's efforts are spent explaining the approaches and insights to decision-makers to draw conclusions and formulate policy? How was it navigating that tension in the public sector?" So, I guess it's about the outcome of the communications.
Henk de Ruiter: I think the algorithms that we use have quite a direct impact on the operational processes that we have. So, the time we spend on explaining a data science analysis and bringing it to a decision is not that much, because we have a different approach to data science; we're really in the automation area for the moment. If I interpret the question freely: I have been doing a lot of explaining to the governmental bodies that manage UWV, right, to the policymakers, on what we were doing, why we were doing it, and how to trust the checks and balances that we have in the processes. So, a lot of time goes into that topic, especially from the management of the department. We try to have a sort of collaboration in which the data scientists do what they are best at, mainly developing and maintaining the algorithms that we have, and we have managers such as myself that try to keep the environment going and help them get the job done.
Pegah Maham: Translate, I guess, between these two worlds.
Henk de Ruiter: Yeah, well, at Deloitte we had the term business translator, or analytics translator, and I think all the managers we have in the department are mainly analytics translators.
Pegah Maham: So, if I understand correctly, you split it up: there are some core technical data scientists who can focus on coding the algorithms, and then you have translators who try to bridge between policy and the technical work.
Henk de Ruiter: Indeed. And we have more split-ups in the team. We have split the engineering off from the data science professionals, so the data scientists can focus on the statistical side of things. And within the data science unit, we actually split development from the run area as well. We really found out that people who excel at developing algorithms are playing a different ball game than those maintaining the algorithms that we have. So, there are several splits in the department.
Pegah Maham: The dev-ops and stats are pretty different skills to have.
Henk de Ruiter: Indeed. And to be honest, I think that the op-skills are more difficult to attract from the labor market than the development skills.
Pegah Maham: You said the op-skills compared to the dev-skills?
Henk de Ruiter: The dev-skills are easier to find than the op-skills.
Pegah Maham: And with ops you mean non-technical ops?
Henk de Ruiter: No, the statistical and the technical ops. So, when you run a model, you obviously need to manage the data pipelines, but you also need to manage the quality of your statistical model and its degradation over time. And the people who are into that area are harder to find than the people who want to develop models. There are a lot of model developers in the market, and there are a lot of people who think it's really cool to build new stuff, such as myself. But people who can maintain the stuff that has already been built and proven – well, there are fewer of those, unfortunately.
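[Editor's note: the "run"-side skill described here – watching a model's statistical quality degrade over time – can be illustrated with a drift check. Below is a minimal sketch of the population stability index (PSI), one common drift measure; the data, thresholds, and choice of metric are assumptions for illustration, not UWV's monitoring setup.]

```python
# Compares the score distribution a model produces in production against
# the distribution it produced at development time. A large shift
# suggests the model's inputs or population have drifted.
import numpy as np

def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """Population stability index between a reference and a live sample."""
    edges = np.quantile(expected, np.linspace(0, 1, bins + 1))
    # Clip both samples into the reference range so out-of-range
    # production scores land in the outer bins.
    expected = np.clip(expected, edges[0], edges[-1])
    actual = np.clip(actual, edges[0], edges[-1])
    e = np.histogram(expected, edges)[0] / len(expected)
    a = np.histogram(actual, edges)[0] / len(actual)
    e, a = np.clip(e, 1e-6, None), np.clip(a, 1e-6, None)  # avoid log(0)
    return float(np.sum((a - e) * np.log(a / e)))

rng = np.random.default_rng(0)
dev_scores = rng.beta(2.0, 5.0, size=5_000)   # scores at development time
live_scores = rng.beta(2.5, 5.0, size=5_000)  # drifted production scores

# Common rule of thumb: < 0.1 stable, 0.1-0.25 investigate, > 0.25 act.
print(f"PSI = {psi(dev_scores, live_scores):.3f}")
```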
Pegah Maham: Jumping onto this topic that you're touching upon. A participant asks, "Can you tell us more about the tools and software you're working with?"
Henk de Ruiter: Yeah. So, again, we need to be honest about that. We have developed a specialized production platform for data science. In the platforms that we have, we have split up the development platform, the test platform, and the production platform. We are using lots of open-source tools in the development area and in the run area. And we're now in the process of basically purchasing software to help us improve the data science production platform, because we see that we need containerization software and CI/CD solutions there, which we do not have at this moment in time. But ask me again in a year and I will share a different story on that; we have projects running on it. In the development area, we have the usual suspects, right, so we are using a lot of Python and R and tools like that.
Pegah Maham: Yeah. And with regard to open source: was it difficult, because you work for the government, to use this open-source software?
Henk de Ruiter: Yeah, I have to be honest, everything in the data science area is complex within these huge organizations, within huge IT departments. But we have been using open source for almost 12 years in this area now. So, it was complicated, but not that complicated. It is very difficult to get open-source solutions into the production area, where we mainly use the open-source components to, basically, calculate risks on the running benefits that we supply. That's really complicated, and that's why we are doing the purchasing process right now.
Pegah Maham: But in all of these cases, it was a tradeoff between the risks and benefits of using these tools?
Henk de Ruiter: Yeah, well, basically, purchasing software for a big organization like UWV, with all the rules and regulations around it, is a really costly and lengthy process. In the Netherlands, we have to run European tender processes for that kind of software, and that's really complex, with lots of regulations surrounding it. So, I think I will be really happy when we close that process within a year.
Pegah Maham: Fingers crossed for that. So, we have another question where I'm not sure how much you can actually say, because you touched upon the difficulty of the transparency tradeoff: "Could you elaborate more on how you're forecasting and predicting the behavior of individuals, and what problems does this solve?"
Henk de Ruiter: That's a really broad and complex question, but let me bring it into a fraud algorithm to elaborate a bit. What is important to understand is that we're not building generic prediction models about people, right. We're not trying to predict whether Henk or Pegah is a fraudster – we're trying to predict whether something that I'm doing, or that you are doing, is fraudulent. So, I'm not trying to say something about the person, but about the behavior a person is showing – for example, something he's doing on the internet, or letters that he's sending, or stuff like that. And we really focus on fraud as defined in the law, right. For example, under some laws it's not allowed to go abroad; you need to stay in the Netherlands, otherwise your benefit will be suspended. So, we're really looking into how behavioral features tell us something about whether somebody is abroad or in the Netherlands. There is a lot of focus on a specific topic, and that's really helpful for selecting the behavioral features that we need. And what's really important in this area is that we focus on behavior as much as we can. We are not using features that, well, stick to somebody – so no features such as name, sex, age, or the place people live – but things that people do and things that people can influence. So, I hope this is a bit of an answer to the question you raised.
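[Editor's note: a hypothetical illustration of the feature policy described here – excluding static, person-bound attributes so that only behavior people can influence reaches a model. All column names are invented; they are not UWV's actual features.]

```python
import pandas as pd

# Invented case records for illustration only.
cases = pd.DataFrame({
    "name": ["A. Jansen", "B. de Vries"],
    "sex": ["f", "m"],
    "age": [34, 51],
    "place": ["Utrecht", "Breda"],
    "logins_from_abroad_30d": [4, 0],        # behavioral
    "address_changes_12m": [2, 0],           # behavioral
    "days_since_last_declaration": [45, 7],  # behavioral
})

# Static, person-bound attributes are excluded up front, so only
# behavior that people can influence reaches the model.
PERSON_BOUND = ["name", "sex", "age", "place"]
features = cases.drop(columns=PERSON_BOUND)
print(list(features.columns))
```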
Pegah Maham: There is a follow-up question close to that from Nela Salomon from the Center for Social Innovation, asking, "What kind of data and patterns do you track in cases of fraud due to the inability to work? Could you go into more detail on this?"
Henk de Ruiter: Yeah. It's really difficult to go into the features of the models that we have, so, again, I need to stick to the answer I just gave. We are looking into features of datasets that say something about what people are doing, and not about who people are, right. That's the main thing that we try to do. And we look for data as close to the subject at hand as we can. I do know that this is not as concrete as you would like me to share, but I hope you understand why I can't.
Pegah Maham: Yeah, you explained that otherwise things can be gamed and so you cannot be fully transparent about your algorithms.
Henk de Ruiter: It is.
Pegah Maham: Another question comes from Mark Azam from the Ministry of Environment, who asks, "How was the process of moving from traditional data analysis methods to advanced data analytics and machine learning received in the organization, and how high were acceptance rates with regard to trust and explainability?"
Henk de Ruiter: I'm really not sure how that journey went, because I think we stepped into the more complex techniques quite soon. The acceptance that we have in the organization was, well, mainly built, or developed, because we have a development process in place in which we co-develop as much as we can with the people who are using the algorithms we develop, right. We have people in the operational core of UWV who are responsible for the fraud research that we do. And we actually built, well, let's say, a scrum team with the data scientists, the people from the execution, the management, and the data managers in it as well. Together they brainstormed on the features that should be used and on how to underline or explain the features we are using; they were involved in the testing of the algorithm and the labeling of the cases, as we mentioned earlier. They were also involved in all the steps that we needed to take regarding the ethical committee and the GDPR processes surrounding this. So, mainly, they were involved in every step that we took, and, well, basically, they became our change agents, right.
So, we had people from the operational core explaining to the other people that it was a very good idea that we were implementing this. And we had very few discussions on the technology or the techniques that we use, but very many discussions on the effectiveness of the models and how we could prevent errors in privacy and so on. So, I think the technical side of things was, well, not the most exciting part of the change.
Pegah Maham: Another question following up on this asks about the success of the approaches: "How successful have you been in fraud detection, for example in health care? Is there no fraud in the Netherlands anymore?" But I guess, yeah, how much improvement could you see?
Henk de Ruiter: Yeah, well, that's the difficulty of the business I'm in. So, there is no statistic on how much fraud there is in the Netherlands, right, nobody knows how much there is so I'm not sure how effective we are in our detection.
Pegah Maham: When there was the crisis, how did you know?
Henk de Ruiter: Well, there was a lot of media attention pointing out the crisis that we had. We did know that there was something wrong. And the difficulty in our area is that when we select persons, we can, well, prove that there is fraud or there isn't. We do random controls as well, to get a bigger understanding of what is going on. But there is always something like an error, right, in that kind of research. So, the hit ratio of the models that we have – I don't like the term hit rate, but I can't think of a better term in English now; the success rate is perhaps better. The success rate of the models that we have is well above 60%, which is quite excellent for statistical models as complex as the ones we have. But I can't say whether the volume of fraud in the Netherlands has gone down; I can only hope that we, well, added something substantial – and we did.
Pegah Maham: You already mentioned there's a human in the loop, so I guess it's hard to break the 60% down to the actual policy impact. But when you say 60%, does that mean that 60% of the cases you flag to look into more are actually fraudulent? Okay. Given the imbalanced dataset – I think only about 1% are actually true positives – I guess that's quite a high accuracy.
Henk de Ruiter: Indeed. We're very proud of the accuracy of our models.
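[Editor's note: a back-of-the-envelope check of why a roughly 60% success rate is strong on such an imbalanced problem, using the figures mentioned in the conversation: about 1% of applications fraudulent, and about 60% of flagged cases confirmed.]

```python
# Numbers taken from the conversation above.
base_rate = 0.01   # share of fraud in the full population (~1%)
precision = 0.60   # share of flagged cases that turn out to be fraud

lift = precision / base_rate
print(f"A flagged case is {lift:.0f}x more likely to be fraud than a random pick.")
# -> A flagged case is 60x more likely to be fraud than a random pick.
```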
Pegah Maham: Then there is another question, and they ask, "How do your data scientists and law experts collaborate? Do they have regular meetings or workshops?"
Henk de Ruiter: Yeah, this is one of the most difficult management challenges we have. I'm not sure if I should say collaboration, because there is a lot of discussion going on. And I think discussions are healthy, right. The data scientists are looking into what can happen, and, well, the legal department is more looking into what can't be done. So, there are very harsh discussions on those topics. And I think those harsh discussions are exactly what we need to get the most value out of what we're doing and, well, to be sure that we're not doing anything that's not allowed. So, I think there is collaboration, but with harsh discussions as well, and there are regular meetings taking place. But I'm dreaming of legal people within my department who understand the data science area.
Pegah Maham: Well, I see. Okay. So, harsh discussions being useful – that's a lesson learned.
Henk de Ruiter: Indeed.
Pegah Maham: All right. Given the time, let's jump to the next question: "Did you or your directors ever do a business case for your work? High salaries of 60 people in comparison to detected fraud – is this a win situation?" So, the return on investment, I guess, for all the salaries of the data scientists.
Henk de Ruiter: Well, we did business cases on this. Not on the exact topic raised in the question, but we did do business cases on how many cases of fraud we would detect with random controls, what would happen if we did nothing, what would happen if we did 100% controls, and what happens if we do the model-based controls as we are doing now. And those business cases are very, very positive. I think if I could basically have the money we save in a year with the model approach versus the 100% approach, I wouldn't have to work for the rest of my life.
Pegah Maham: Wow. Okay. That's a very clear answer. All right. Moving on, unfortunately, to the last question – we're running out of time. First of all, many people are actually saying "thanks for the insights" and "thank you". I've been skipping this part to get to the questions, but I think at this point I can mention how often it is being said.
And in this question, they wonder whether users subjected to automated decision-making are made aware of how much of the decisions are impacted by machines. And they would also like to know what redress mechanisms are in place, if you can elaborate on this?
Henk de Ruiter: Yes, to be perfectly clear on this topic: we do not have automated decision-making in place. We do have algorithms that supply cases to our operational employees. In the batches that we supply to the operational employees, 70% are model-scored and 30% are randomly selected, so we can monitor the performance of the models as well, and we can prevent bias – human bias. The people tasked with the research know that there is a random addition to the selections they receive, and they know how big it is. But when a person is researching a case, he or she doesn't know whether it's a model selection or a random selection, to prevent bias in the execution of the research. I'm not sure what the second point of the question was, but this is what we're doing.
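[Editor's note: a minimal sketch of the batch composition Henk describes – 70% model-scored cases, 30% random cases, shuffled so investigators cannot tell which is which. The function and all names are hypothetical, not UWV's implementation.]

```python
import random

def build_batch(scored_cases, batch_size=100, random_share=0.30, seed=None):
    """scored_cases: list of (case_id, model_score) tuples."""
    rng = random.Random(seed)
    n_random = round(batch_size * random_share)
    n_model = batch_size - n_random

    # Take the top model scores, then draw the random share from the rest.
    ranked = sorted(scored_cases, key=lambda c: c[1], reverse=True)
    model_picks = [case_id for case_id, _ in ranked[:n_model]]
    random_picks = [case_id for case_id, _ in rng.sample(ranked[n_model:], n_random)]

    # Shuffle so investigators see one undifferentiated worklist and
    # cannot tell model selections from random ones.
    batch = model_picks + random_picks
    rng.shuffle(batch)
    return batch

cases = [(f"case-{i}", random.random()) for i in range(1_000)]
print(len(build_batch(cases, seed=7)))  # 100
```

The random share also gives an unbiased estimate of how well the model performs relative to chance, which is how the success rate mentioned earlier can be monitored over time.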
Pegah Maham: Yeah. I think that answers it. Thank you so much! I'd like to wrap up: you mentioned many useful lessons, I think, starting from realistic expectations, both on time and on prerequisites. It seems like optimism is crucial for you in all of this endeavor. You mentioned that you co-develop all your projects with users, that you have a human in the loop, and that harsh discussions are useful when using data science in these areas.
So, with this, thank you, Henk, again. Unfortunately, the warm round of applause can't be heard, but I think, from the comments I already mentioned, people are very thankful for you sharing your insights. Thank you to everyone for being here. If you're interested in receiving invitations to upcoming background discussions, or updates in general on our work and papers, please sign up for our newsletter. And with this, thanks again, Henk. I wish a nice evening both to you and to our audience.
Henk de Ruiter: It was my pleasure, Pegah. And nice to speak with you as well.
Pegah Maham: Thank you. Take care. Bye.
Henk de Ruiter: Bye, bye.