justKnimeit-26
Challenge 26: Modeling Churn Predictions - Part 4

Description: To wrap up our series of data classification challenges, consider again the following churning problem: a telecom company wants you to predict which customers are going to churn (that is, going to cancel their contracts) based on attributes of their accounts. The target class to be predicted in the test data is Churn (value 0 corresponds to customers that do not churn, and 1 corresponds to those who do). You have already found a good model for the problem and have already engineered the training data to increase the performance a bit. Now, your task is to communicate the results you found visually. Concretely, build a dashboard that:

- shows performance for both classes (you can focus on any metrics here, e.g., precision and recall)
- ranks features based on how important they were for the model
- explains a few single predictions, especially false positives and false negatives, with our Local Explanation View component (read more about it here)

Illustrative Python sketches of these three views, plus the SMOTE resampling used upstream, follow below.

Workflow summary: two CSV Reader nodes load the training data and test data; the training set is resampled with SMOTE before modeling, and the model's predictions are explained and visualized downstream. The Local Explanation View component (execute up-stream before configuration) has the following ports:

Input: 0: Model as a Workflow Object; 1: Data from Model Test Partition; 2: Single Instance to Explain
Output: 0: Counterfactual Instances; 1: Local Feature Importance
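For the first dashboard view, here is a minimal sketch (outside of KNIME, using scikit-learn) of per-class precision and recall; the arrays `y_true` and `y_pred` are hypothetical stand-ins for the test data's Churn column and the model's predictions.

```python
from sklearn.metrics import classification_report

# Hypothetical stand-ins for the Churn column and the model's predictions;
# 0 = no churn, 1 = churn.
y_true = [0, 0, 1, 1, 0, 1]
y_pred = [0, 1, 1, 0, 0, 1]

# Reports precision, recall, and F1 separately for each class, which is
# the per-class view the dashboard should surface.
print(classification_report(y_true, y_pred, target_names=["no churn", "churn"]))
```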
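For the second view, one common way to rank features is permutation importance. The sketch below uses a synthetic stand-in dataset and a random forest, not the workflow's actual AutoML model; the KNIME workflow computes a comparable ranking and feeds the top entries to the Top k Selector.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Synthetic stand-in for the engineered training data.
X, y = make_classification(n_samples=200, n_features=5, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

# Permutation importance: shuffle one feature at a time and measure how
# much the model's score drops; a bigger drop means a more important feature.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i in result.importances_mean.argsort()[::-1]:  # most important first
    print(f"feature_{i}: {result.importances_mean[i]:.3f}")
```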
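For the third view, the sketch below illustrates one simple notion of local feature importance for a single prediction: perturb one feature at a time toward the training mean and record how much the churn probability moves. This is only an illustration of the idea; it is not the algorithm implemented by the Local Explanation View component, and `local_importance` is a hypothetical helper.

```python
import numpy as np

def local_importance(model, X_train, instance):
    """Rough per-feature contribution to one prediction (illustrative only)."""
    base = model.predict_proba(instance.reshape(1, -1))[0, 1]  # churn prob.
    deltas = []
    for j in range(instance.size):
        perturbed = instance.copy()
        perturbed[j] = X_train[:, j].mean()  # neutralize feature j
        p = model.predict_proba(perturbed.reshape(1, -1))[0, 1]
        deltas.append(base - p)              # > 0: feature pushed toward churn
    return np.array(deltas)

# Usage with the stand-in model from the previous sketch:
# local_importance(model, X, X[0])
```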
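Finally, on the data-engineering side, the SMOTE node balances the training data by synthesizing minority-class rows. A minimal sketch of the same idea using the imbalanced-learn library (an assumption; the KNIME node is configured in its dialog instead) looks like this:

```python
import numpy as np
from imblearn.over_sampling import SMOTE

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 4))      # 100 hypothetical accounts, 4 features
y = np.array([0] * 90 + [1] * 10)  # imbalanced: only 10 churners

# SMOTE interpolates between nearest minority-class neighbors to create
# synthetic churners until both classes are equally represented.
X_res, y_res = SMOTE(random_state=0).fit_resample(X, y)
print(np.bincount(y_res))          # -> [90 90]
```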

Nodes

CSV Reader (×2: training data, test data)
SMOTE
AutoML
Workflow Executor
Top k Selector
Row Filter
Local Explanation View
Visualizations
