Thanks for watching! For the latest fraud and financial crime updates around AI and machine learning in banking, responsible AI, and fraud risk management, please subscribe: [Link]
CHAPTERS:
0:00 Deploying a Model in a Production Environment
1:17 Model Explainability & WhiteBox Explanations
2:18 Model Fairness
TRANSCRIPT:
Hi, everyone. Today, we're going to continue last week's topic and talk more about AI and machine learning and their application to fraud prevention.
My name is Xin and this is your Feedzai Financial Crime News Weekly Update.
Once you've developed a good model, built on good data and a sound methodology in terms of the algorithm you're using and the feature engineering, how do you actually get that model into a production environment so it can help you detect fraud in real time, or near real time?
That deployment usually takes a long time, because the model might pass through different teams and different rounds of QA testing, and may even be translated into a different programming language before it reaches production. The best approach is to run the model natively: deploy it in the same environment it was developed in, so you don't have to translate the model at all. At Feedzai, we write our platform in Java, and the production environment it deploys into is also built in Java, so no translation is needed.
The translation can be very time consuming, and it also requires a lot of teams to work together. When you translate a piece of code from one language to another, or even move it from one environment to another, you usually run a validation check to make sure nothing has corrupted the model you built and no errors have been introduced.
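To make that concrete, here is a minimal sketch of what "no translation" deployment plus a validation check can look like. The FraudModel interface and class names are illustrative, not Feedzai's actual API: the trained Java object is serialized as-is, the production service deserializes the very same object, and a holdout set confirms the deployed copy reproduces the expected scores.

    import java.io.*;
    import java.util.List;

    // Hypothetical model interface standing in for whatever the training
    // environment produced. Illustrative only, not Feedzai's actual API.
    interface FraudModel extends Serializable {
        double score(double[] features);
    }

    public class DeploymentCheck {

        // Training side: persist the trained model object as-is, so
        // production loads the exact same artifact (no re-implementation).
        static void export(FraudModel model, String path) throws IOException {
            try (ObjectOutputStream out =
                     new ObjectOutputStream(new FileOutputStream(path))) {
                out.writeObject(model);
            }
        }

        // Production side: load the same object back in the same runtime.
        static FraudModel load(String path)
                throws IOException, ClassNotFoundException {
            try (ObjectInputStream in =
                     new ObjectInputStream(new FileInputStream(path))) {
                return (FraudModel) in.readObject();
            }
        }

        // Validation pass: the deployed copy must reproduce the scores
        // recorded at training time, within floating-point tolerance.
        static boolean validate(FraudModel deployed, List<double[]> holdout,
                                List<Double> expectedScores) {
            for (int i = 0; i < holdout.size(); i++) {
                double diff = deployed.score(holdout.get(i)) - expectedScores.get(i);
                if (Math.abs(diff) > 1e-9) {
                    return false; // something corrupted the model in transit
                }
            }
            return true;
        }
    }

Because training and production share one runtime, the validation step is just a score comparison rather than a full review of a re-implemented model.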
One important aspect of a good model is explainability: whether the model is able to tell you exactly how it makes a decision, that is, why a transaction is flagged as fraudulent. This matters because a lot of the time a human is going to look at the model score and review the case. You really need to help the fraud analyst on the operations team understand why the model thinks a certain transaction is suspicious.
That's what we call a WhiteBox explanation: essentially, the model tells you exactly how it's making that decision.
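As a toy illustration (a simple linear scorer, not any specific production model), a WhiteBox explanation can be as direct as reporting each feature's exact contribution to the score:

    import java.util.LinkedHashMap;
    import java.util.Map;

    // Toy linear scorer: the contribution of feature i is weights[i] *
    // features[i], so the explanation is exact rather than approximated.
    public class WhiteBoxScorer {
        private final String[] featureNames;
        private final double[] weights;
        private final double bias;

        public WhiteBoxScorer(String[] featureNames, double[] weights, double bias) {
            this.featureNames = featureNames;
            this.weights = weights;
            this.bias = bias;
        }

        public double score(double[] features) {
            double s = bias;
            for (int i = 0; i < features.length; i++) {
                s += weights[i] * features[i];
            }
            return s;
        }

        // The WhiteBox part: per-feature contributions that sum (together
        // with the bias) back to the exact score the analyst sees.
        public Map<String, Double> explain(double[] features) {
            Map<String, Double> contributions = new LinkedHashMap<>();
            for (int i = 0; i < features.length; i++) {
                contributions.put(featureNames[i], weights[i] * features[i]);
            }
            return contributions;
        }
    }

Real WhiteBox methods over more complex models are more involved, but the contract is the same: a score plus a per-feature attribution the analyst can read.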
But in other parts of the financial industry, you also have regulatory requirements to explain why the model is behaving the way it is. For example, when you apply for a credit card and the credit card company rejects your application, they need to give you a reason code. That reason code really needs to come from the model: it has to tell you which features were most important in the decision to decline your transaction or your credit card application.
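Continuing the toy example, reason codes can be derived by taking the top risk-increasing contributions from the explanation above and mapping them to regulator-facing codes. The codeBook mapping here is hypothetical:

    import java.util.List;
    import java.util.Map;
    import java.util.stream.Collectors;

    public class ReasonCodes {
        // Take the top-k risk-increasing contributions (features that pushed
        // the score toward "decline") and map them to reason codes via a
        // hypothetical code book.
        static List<String> topReasons(Map<String, Double> contributions,
                                       Map<String, String> codeBook, int k) {
            return contributions.entrySet().stream()
                .filter(e -> e.getValue() > 0)
                .sorted(Map.Entry.<String, Double>comparingByValue().reversed())
                .limit(k)
                .map(e -> codeBook.getOrDefault(e.getKey(), e.getKey()))
                .collect(Collectors.toList());
        }
    }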
That brings us to another very interesting point: model fairness. A lot of the time, when we make these kinds of decisions, we also want to make sure that our model is fair to the population we're working with, and to the general public.
In the extreme cases, you see news headlines saying, "This AI-generated model is discriminating based on race," or that it's discriminating based on gender.
So a lot of the time what you really need to do is, during model training, deliberately make sure that your model is not biased against a certain demographic. And there are several ways to do that.
First of all, you can build the model and then audit it: is the model actually fair? One common attribute to check is gender. You can ask whether your model is biased, for example declining more women than men because of their gender.
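A minimal sketch of that kind of post-hoc audit, assuming you have each decision and the applicant's gender on a held-out set: compare decline rates across groups. The absolute gap is often called the demographic parity difference; the names below are illustrative.

    import java.util.List;

    public class FairnessAudit {
        record Decision(String gender, boolean declined) {}

        // Decline rate within one group.
        static double declineRate(List<Decision> decisions, String gender) {
            long inGroup = decisions.stream()
                .filter(d -> d.gender().equals(gender)).count();
            long declined = decisions.stream()
                .filter(d -> d.gender().equals(gender) && d.declined()).count();
            return inGroup == 0 ? 0.0 : (double) declined / inGroup;
        }

        // Demographic parity difference: how much more often one group is
        // declined than the other. Values near zero suggest the model treats
        // the groups similarly on this particular metric.
        static double parityGap(List<Decision> decisions) {
            return Math.abs(declineRate(decisions, "female")
                          - declineRate(decisions, "male"));
        }
    }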
The second approach happens during model selection: you can select a model that is fairer. That means you train, let's say, 100 models, and then evaluate each of them on the different aspects you're interested in.
For example, still talking about gender, you can ask which model performs better in terms of not discriminating between genders.
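A sketch of that selection step, with illustrative metric names: evaluate every candidate on both detection performance and the parity gap from the audit above, then keep the strongest detector among those whose gap stays within a tolerance.

    import java.util.Comparator;
    import java.util.List;
    import java.util.Optional;

    public class FairModelSelection {
        // Each trained candidate carries its validation metrics: detection
        // performance (e.g. recall at a fixed alert rate) and the gender
        // parity gap. Both metric names are assumptions for illustration.
        record Candidate(String id, double detectionRate, double parityGap) {}

        // Keep only candidates whose parity gap is within tolerance, then
        // pick the strongest detector among them.
        static Optional<Candidate> select(List<Candidate> candidates, double maxGap) {
            return candidates.stream()
                .filter(c -> c.parityGap() <= maxGap)
                .max(Comparator.comparingDouble(Candidate::detectionRate));
        }
    }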
And the most advanced approach is to consider gender during the training itself: you deliberately make sure the model isn't basing its decisions on gender, while also making sure it keeps good performance.
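This last approach is the hardest to sketch briefly. One common, generic formulation (not necessarily what any particular vendor does) adds a fairness penalty to the training loss, so the optimizer trades a little accuracy for a smaller gap between groups. Here lambda controls that trade-off in a toy logistic-regression setting:

    import java.util.List;

    public class FairTraining {
        record Example(double[] x, int label, String gender) {}

        static double sigmoid(double z) { return 1.0 / (1.0 + Math.exp(-z)); }

        static double dot(double[] w, double[] x) {
            double s = 0;
            for (int i = 0; i < w.length; i++) s += w[i] * x[i];
            return s;
        }

        // One gradient step on: log-loss + lambda * (gap in mean predicted
        // risk between the two groups)^2. The penalty pushes the model to
        // score both groups similarly on average, while the log-loss term
        // preserves detection performance.
        static void step(double[] w, List<Example> data, double lambda, double lr) {
            int n = data.size(), d = w.length;
            double[] grad = new double[d];
            double[] gradA = new double[d], gradB = new double[d];
            double meanA = 0, meanB = 0;
            int nA = 0, nB = 0;

            for (Example e : data) {
                double p = sigmoid(dot(w, e.x()));
                for (int i = 0; i < d; i++) {
                    grad[i] += (p - e.label()) * e.x()[i] / n; // log-loss gradient
                }
                double dp = p * (1 - p); // sigmoid derivative
                if (e.gender().equals("female")) {
                    meanA += p; nA++;
                    for (int i = 0; i < d; i++) gradA[i] += dp * e.x()[i];
                } else {
                    meanB += p; nB++;
                    for (int i = 0; i < d; i++) gradB[i] += dp * e.x()[i];
                }
            }
            if (nA > 0 && nB > 0) {
                meanA /= nA; meanB /= nB;
                double gap = meanA - meanB;
                for (int i = 0; i < d; i++) {
                    grad[i] += 2 * lambda * gap * (gradA[i] / nA - gradB[i] / nB);
                }
            }
            for (int i = 0; i < d; i++) w[i] -= lr * grad[i]; // descent step
        }
    }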
Do you have any questions about model fairness in fraud prevention? If you do, please leave a comment below. Thanks for watching. This is Xin, and this is Feedzai's Weekly Update.