Fairness Scorer

This component computes three fairness metrics over an input classification model: demographic parity, equality of opportunity, and equalized odds. Use this component to flag models that might unfairly impact stakeholders before they are deployed, in line with responsible AI principles.

Configure the component based on your company policy. When the component flags a model, keep in mind that with the current training data there is no guarantee that the model is free of significant bias, and that the best remedy is usually to fix how the data is collected rather than to train the model differently.

The component works for any binary or multiclass model captured in a workflow object via the KNIME Integrated Deployment Extension. For testing this component, we recommend connecting its input model port to the output of the AutoML classification component (kni.me/c/33fQGaQzuZByy6hE).

The metrics are calculated based on the “advantage class” [1], the desirable classification outcome of the model, and the “sensitive attribute” [2], a column in the input data categorizing the instances in the test set into “protected classes” [3] and “unprotected classes”.

The component computes fairness metrics and compares them with the desirable values in different tests. If one of the fairness metrics does not achieve the desirable value, the model failed the corresponding fairness test, at least based on the sample of data provided. Given the empirical nature of these fairness tests, a separate epsilon (ε) parameter is provided to relax each condition: the higher the epsilon, the easier it is for the model to pass the corresponding test. Please note that it may not be possible to pass all tests simultaneously, as some of them are mutually exclusive.
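
As a minimal sketch (with hypothetical function names, not the component's actual implementation), each test reduces to an interval check on its metric:

def passes_ratio_test(metric, epsilon):
    # Demographic Parity and Equality of Opportunity pass within [1 - eps, 1 + eps].
    return 1 - epsilon <= metric <= 1 + epsilon

def passes_equalized_odds_test(metric, epsilon):
    # Equalized Odds passes within [1 - eps, 1]; its metric never exceeds 1.
    return 1 - epsilon <= metric <= 1

print(passes_ratio_test(0.95, epsilon=0.10))  # True: inside [0.90, 1.10]
print(passes_ratio_test(1.25, epsilon=0.10))  # False: one partition is clearly favored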

Terminology:

[1] Advantage Class: the positive outcome for the stakeholders affected by the model's predictions. This value is one of the categories stored in the target column the model is trained on.
[2] Sensitive Attribute: a column in the dataset, considered sensitive for the problem domain, which should not be used as a model feature. A model is fair if it computes its predictions without using any feature highly correlated with this sensitive information.
[3] Protected Classes: the categories in the sensitive attribute column that are related to a minority or a disadvantaged group in real-world data. Conversely, “Unprotected Classes” are all the other categories, which in general are not known to be disadvantaged.

TESTS

For all fairness tests we rely on two partitions split on the sensitive attribute classes: the protected partition and the unprotected partition.

Demographic Parity Test:

Test passed: Demographic Parity metric is within the range [1 - ε, 1 + ε]
The share of advantage class predictions is independent of the sensitive attribute: the percentage of advantage class predictions is close to equal in both partitions.

Test failed: Demographic Parity metric is outside the range [1 - ε, 1 + ε]
There is “enough” evidence in the data that the model favors either the unprotected or the protected class: the percentage of advantage class predictions is clearly higher in one partition than in the other.

Equality of Opportunity Test:

Test passed: Equality of Opportunity metric is within the range [1 - ε, 1 + ε]
Each partition has a somewhat equal opportunity to be correctly classified with the advantage class.

Test failed: Equality of Opportunity metric is outside the range [1 - ε, 1 + ε]
There is “enough” evidence that the two partitions have different “opportunities”: in one of the two it is easier to be misclassified by not receiving the advantage class when deserved.

Equalized Odds Test:

Test passed: Equalized Odds metric is within the range [1 - ε, 1]
Each partition has somewhat equal odds to be correctly classified with any class.

Test failed: Equalized Odds metric is outside the range [1 - ε, 1]
There is “enough” evidence that the two partitions have different “odds”: in one of the two it is easier to be misclassified, regardless of the class.

METRICS

In each metric computation we rely on two confusion matrices (en.wikipedia.org/wiki/Confusion_matrix) based on two partitions split on the sensitive attribute classes: the protected partition and the unprotected partition. We denote the advantage class as the positive class and, for multiclass classification, we adopt a one-vs-all approach. Using the values and statistics from the two confusion matrices, we compute the three metrics below. All three metrics are provided at the output of the component. For the Demographic Parity and Equality of Opportunity fairness types it holds that:

If the metric is > 1 + ε then the model favors the protected partition;
If the metric is < 1 - ε then the model favors the unprotected partition.

For the Equalized Odds metric it is not possible to detect which partition the model favors.

Please note that different metrics might give different fairness results.
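
As a sketch of this setup (assuming pandas and scikit-learn, with hypothetical column and function names; this is not the component's internal code), the two confusion matrices can be obtained by splitting the test set on the sensitive attribute and binarizing the target one-vs-all against the advantage class:

import pandas as pd
from sklearn.metrics import confusion_matrix

def partition_confusion_stats(df: pd.DataFrame, sensitive_col, protected_values,
                              target_col, prediction_col, advantage_class):
    # One-vs-all: the advantage class is treated as the positive class.
    y_true = (df[target_col] == advantage_class).astype(int)
    y_pred = (df[prediction_col] == advantage_class).astype(int)
    is_protected = df[sensitive_col].isin(protected_values)
    stats = {}
    for name, mask in (("protected", is_protected), ("unprotected", ~is_protected)):
        tn, fp, fn, tp = confusion_matrix(y_true[mask], y_pred[mask], labels=[0, 1]).ravel()
        stats[name] = {"tn": tn, "fp": fp, "fn": fn, "tp": tp}
    return stats

The metric sketches below reuse the stats dictionary returned by this helper.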

Demographic Parity Metric:
A fairness metric that is satisfied if the results of a model's classification do not depend on the sensitive attribute. For each of the two partitions, the technique computes the ratio of advantage class predictions over all predictions. The final metric is the ratio of the two (the protected one divided by the unprotected one). Please note that this metric entirely ignores the ground truth data.
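
Continuing the sketch above (a hypothetical helper, not the component's code), the metric is the ratio of the advantage-class prediction shares:

def demographic_parity(stats):
    def advantage_rate(s):
        # Share of all predictions that are the advantage class; ground truth is ignored.
        return (s["tp"] + s["fp"]) / (s["tp"] + s["fp"] + s["tn"] + s["fn"])
    return advantage_rate(stats["protected"]) / advantage_rate(stats["unprotected"])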

Equality of Opportunity Metric:
A fairness metric that checks whether the classifier predicts the advantage class with the same performance for both the protected and unprotected classes, focusing on false negatives. This metric measures whether one of the two partitions is disadvantaged by a model that is more likely to commit false negatives there than in the other partition. To compute the metric, we take the True Positive Rate (TPR) from each confusion matrix and divide the protected one by the unprotected one. If this metric is greater than 1, the model favors the protected partition.
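
In the same sketch (again a hypothetical helper), this is a ratio of True Positive Rates:

def equality_of_opportunity(stats):
    def tpr(s):
        # True Positive Rate: how often the advantage class is predicted when deserved.
        return s["tp"] / (s["tp"] + s["fn"])
    return tpr(stats["protected"]) / tpr(stats["unprotected"])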

Equalized Odds Metric:
A fairness metric that checks whether the classifier predicts the advantage class with the same performance for both the protected and unprotected classes, regardless of the type of misclassification. While equality of opportunity focuses only on ensuring that false negatives are shared equally (in percentage) between the two partitions, here false positives are also taken into consideration. To compute the metric, we take both the True Positive Rate (TPR) and the True Negative Rate (TNR) from each confusion matrix, compute the differences between the two TPRs and the two TNRs, take the Euclidean distance of these differences, and normalize the result between 0 and 1. If the metric is close to 1, the TPR and TNR values do not vary between the protected and unprotected partitions.
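
A sketch of this computation follows the same pattern; note that the exact normalization constant is not documented here, and dividing by the largest possible distance (√2) is an assumption:

import math

def equalized_odds(stats):
    def tpr(s): return s["tp"] / (s["tp"] + s["fn"])
    def tnr(s): return s["tn"] / (s["tn"] + s["fp"])
    d_tpr = tpr(stats["protected"]) - tpr(stats["unprotected"])
    d_tnr = tnr(stats["protected"]) - tnr(stats["unprotected"])
    distance = math.hypot(d_tpr, d_tnr)  # Euclidean distance between the rate gaps
    # Assumed normalization: sqrt(2) is the maximum possible distance, so the
    # metric is 1 when the rates are identical and approaches 0 in the worst case.
    return 1 - distance / math.sqrt(2)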

Source: developers.google.com/machine-learning/glossary/fairness

Options

Equality of Opportunity Epsilon Parameter
Given the empirical nature of this test, it is hard to achieve a fairness metric that favors neither the protected nor the unprotected class. This parameter relaxes the test condition and should be kept as small as possible. Read more in the main description of the component.
Equalized Odds Epsilon Parameter
Given the empirical nature of this test, it is hard to achieve a fairness metric that favors neither the protected nor the unprotected class. This parameter relaxes the test condition and should be kept as small as possible. Read more in the main description of the component.
Demographic Parity Epsilon Parameter
Given the empirical nature of this test, it is hard to achieve a fairness metric that favors neither the protected nor the unprotected class. This parameter relaxes the test condition and should be kept as small as possible. Read more in the main description of the component.
Select Sensitive Attribute and Include Protected Class(es)
First select the sensitive attribute column. This column should not be a feature of the model, but it must be available in the validation data to measure fairness. Then, among the classes of the sensitive attribute, place the protected classes on the right and the unprotected classes on the left. Read more in the main description of the component.
Select Target and Advantage Class
Select the target column storing the ground truth labels/classes. After that, select the advantage class, that is, the category within the target column that represents the "desirable" outcome for the model. Read more in the main description of the component.

Input Ports

Classification model Workflow Object captured with KNIME Integrated Deployment.
Test data used to compute the fairness of the trained model. It is vital that this sample is representative of the underlying distribution in order to obtain a correct fairness measure.

Output Ports

Table with scores for fairness metrics and test results ("passed" vs "failed") of the model with respect to the chosen epsilon value.
