How to Build Trust in Your Machine Learning Model's Predictions with LIME

Published: 2023-09-26 11:30

This article is a step-by-step guide that will help you interpret your machine learning model's predictions using LIME. Even when your model achieves close to 100% accuracy, one question always runs through your mind: should we trust it?

Consider a situation at a doctor's office – would a doctor trust a computer if it just showed a diagnosis without giving any valid reason behind it?

Any model that fails to explain the reasons behind its output is considered a black box. Trusting such a model blindly is not the right approach.

Let's say we're given a model which predicts whether an animal is a dog or cat and has 100% accuracy. But what if it makes that prediction based on the background of the image? Would you trust that model?

As you can see in the above figure, the green color represents the features it took to identify the image as a cat, and the red indicates the features it took to represent it as a dog.

If our model provides such a valid reason for its prediction, it builds our trust for that model. Similarly for the doctor situation, if the model can tell which features were important in its prediction and to which symptoms it gave more weight, it is easier for the doctor to trust that model.

But is it really that simple to interpret any model? Luckily, yes. In 2016, Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin published a paper called "Why Should I Trust You?": Explaining the Predictions of Any Classifier.

In it, they proposed their technique, LIME. The basic idea is to interpret any model by learning an interpretable approximation of it locally, around the prediction being explained.

They wrote this paper to understand the explanations behind any model's prediction. So whenever you need to choose a model, you can use the insights from LIME.

In the above diagram, the model predicts that a patient has the flu, and LIME highlights the symptoms in the patient's history that led to the prediction.

"Sneeze" and "headache" contribute to the "flu" prediction, while "no fatigue" is evidence against it. With this information, a doctor can make an informed decision about whether to trust the model's prediction.

So, what exactly is LIME?

LIME is model-agnostic, meaning that it can be applied to any machine learning model. The goal of LIME is to identify an interpretable model over the interpretable representation that is locally faithful to the classifier.  - Definition from official paper (link)

To understand this, we need to understand the meaning of the acronym LIME.

Local: Refers to how we get these explanations. LIME approximates the black box model locally, in the neighborhood of the prediction being explained.

Interpretable: The explanations provided by LIME are simple enough for humans to understand.

Model-agnostic: LIME treats the model as a black box, so it works for any model.

Explanations: The justifications given for the actions performed by the model.

LIME provides local model interpretability. It modifies a single data sample by tweaking the feature values and observing the resulting impact on the output.

With LIME, we can explain why the RandomForestClassifier predicts what it does before we decide to trust its prediction.

Let's look at some code

We'll start by using the RandomForestClassifier model to work on the "Did it rain in Seattle" dataset. The data is available here.

First we will import our base libraries:

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline

To keep warnings from cluttering the output, we will add this at the start of our script:

import warnings
warnings.filterwarnings('ignore')

We then import a few sklearn libraries for splitting the dataset and for defining the metrics. The RandomForestClassifier will also be imported from the same library.

from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from sklearn.ensemble import RandomForestClassifier

Since we have all our required libraries, we will read our data:

df = pd.read_csv('seattleWeather_1948-2017.csv')
df.head()

So the data consists of 4 feature columns (DATE, PRCP, TMAX, TMIN) and a target column, RAIN. Our task is to predict whether it rained in Seattle on a given day.

df.shape

(25551, 5)

Our data consists of 25,551 rows, which is enough to train our model.

We will check for missing values, if any:

df.isnull().sum()

Since our main focus is interpreting the model's predictions, we will simply drop the rows with missing values. For simplicity's sake, we will remove the DATE column as well.

df.dropna(inplace=True)
df.pop('DATE')

We will now encode our target column:

df.RAIN.replace({True:1,False:0},inplace=True)
df.head()

This is how our data looks in the end.

target = df.pop('RAIN')
x_train , x_test , y_train , y_test = train_test_split(df, target, train_size=0.75)

We have now split the data into train and test sets, with the training set equal to 75% of the original data.

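Note that train_test_split shuffles the data randomly by default, so the exact split (and therefore the instance LIME explains later) will differ from run to run. If you want reproducible results, you can pass a seed (random_state=42 is an arbitrary choice):

x_train , x_test , y_train , y_test = train_test_split(df, target, train_size=0.75, random_state=42)
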
We will now create our model with default parameters:

rfc = RandomForestClassifier()

And fit the model to the training samples:

rfc.fit(x_train,y_train)
accuracy_score(y_test,rfc.predict(x_test))

1.0

The model has achieved 100% accuracy. But now let's interpret the model so we can trust it.

LIME

First, we need to discuss a bit of theory before we go on.

LIME creates a new dataset consisting of perturbed samples and the corresponding predictions of the black-box model.

On this dataset, LIME trains a local surrogate model in which samples are weighted by their proximity to the instance being explained. This surrogate can be any inherently interpretable model, such as a linear model or a decision tree.

This local model must produce predictions similar to those of the original model in that neighborhood. This accuracy is called local fidelity.

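To make that concrete, here is a minimal sketch of the idea in plain numpy and scikit-learn. This is only an illustration under simplifying assumptions (Gaussian perturbations, an exponential proximity kernel, a ridge regression surrogate), not the lime library's actual implementation, and the name lime_sketch is made up for this example:

import numpy as np
from sklearn.linear_model import Ridge

def lime_sketch(instance, predict_proba, num_samples=5000, kernel_width=0.75):
    # 1. Perturb the instance we want to explain (Gaussian noise for simplicity)
    perturbed = instance + np.random.normal(0, 1, (num_samples, instance.shape[0]))
    # 2. Query the black-box model on the perturbed samples
    labels = predict_proba(perturbed)[:, 1]
    # 3. Weight each sample by its proximity to the original instance
    distances = np.linalg.norm(perturbed - instance, axis=1)
    weights = np.exp(-(distances ** 2) / kernel_width ** 2)
    # 4. Fit a simple, interpretable model on the weighted samples
    local_model = Ridge(alpha=1.0)
    local_model.fit(perturbed, labels, sample_weight=weights)
    # 5. Its coefficients serve as the local explanation
    return local_model.coef_

The lime library packages this procedure (plus feature discretization and sensible defaults) for us:
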
import lime
from lime import lime_tabular

Now that we have imported the required packages, we need to perform our interpretation.

Here's the recipe for training local surrogate models:

  1. Select the model whose prediction you want to explain
  2. Train this model and get its prediction for the test values
  3. For LIME, weight the new (perturbed) samples by their proximity to the instance being explained
  4. Train a local, interpretable model on this dataset
  5. Finally, explain the prediction by interpreting the local model

Define a LimeTabularExplainer. The parameters are the training samples, the feature names, and the class names:

explainer = lime_tabular.LimeTabularExplainer(x_train.values, feature_names=['PRCP','TMAX','TMIN'], class_names=['False','True'], discretize_continuous=True)

We need to pass training samples, the training column names, and the target class names that are expected.

We then call the explain_instance() function of the explainer we created.

We will use the following parameters of this function: a test sample, the model's predict_proba function, the number of features, and the top labels to consider:

i = np.random.randint(0, x_test.shape[0])
exp = explainer.explain_instance(x_test.iloc[i], rfc.predict_proba, num_features=x_train.shape[1], top_labels=None)

In order to display the explanation in the notebook, the following code is required.

exp.show_in_notebook()

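If you're working outside a notebook, the explanation object can also write the same interactive view to a standalone HTML file via lime's save_to_file method (the filename here is arbitrary):

exp.save_to_file('lime_explanation.html')
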
Let's break down the output.

The top left diagram shows the predicted output along with its probability.

The model's output is False with 100% probability.

The top right diagram shows the feature conditions that push the prediction toward each class, along with their weights.

Here, the condition on the PRCP variable for predicting the target as False is PRCP ≤ 0.00, and it carries a weight of 0.96.

The bottom right diagram shows our test values. Since the PRCP value satisfies the False condition, it is shown with a blue background.

To display the explanation as a plot:

fig = exp.as_pyplot_figure()

Here you can see the weight of each feature together with its predicted class (represented by color). These are the local weights assigned to each feature. Red represents the False target, whereas green represents the True target.

It is now easy to interpret the model by looking at the weight given to each feature, as well as the condition under which each test value falls into a specific class.

The values of PRCP and TMAX indicate that the predicted target should be False, whereas the value of TMIN points toward a True target.

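If you would rather consume the explanation programmatically, for example to log it, exp.as_list() returns the same information as (condition, weight) pairs:

for feature, weight in exp.as_list():
    print(feature, round(weight, 3))
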
PRCPTMAX值指示预测目标应为False,TMIN的值指示真实目标。

LIME is not limited to binary classification on tabular data: it also handles multi-class problems, images, and text, as sketched below.

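As a quick illustration of the text case, here is a small sketch using lime's LimeTextExplainer. The corpus, labels, and class names below are made up purely to show the API shape:

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from lime.lime_text import LimeTextExplainer

# Toy corpus, purely illustrative
texts = ['great movie, loved it', 'terrible plot, boring',
         'loved the acting', 'boring and terrible']
labels = [1, 0, 1, 0]

pipe = make_pipeline(TfidfVectorizer(), LogisticRegression())
pipe.fit(texts, labels)

text_explainer = LimeTextExplainer(class_names=['negative', 'positive'])
text_exp = text_explainer.explain_instance('loved it, not boring',
                                           pipe.predict_proba, num_features=4)
print(text_exp.as_list())  # word-level weights for the positive class
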
The code can be found in my GitHub repository: https://github.com/Sid11/Lime

And here's a link to the LIME official GitHub repository: https://github.com/marcotcr/lime

If you have any questions, please reach out to me. Hope you liked the article!

Translated from: https://www.freecodecamp.org/news/how-to-build-trust-in-models-prediction-with-code/
