In this challenge series, the goal is to predict which customers of a certain telecom company are going to churn (that is, going to cancel their contracts) based on attributes of their accounts. Here, the target class to be predicted is Churn (value 0 corresponds to customers that do not churn, and 1 corresponds to those who do).
After automatically picking a classification model for the task, you achieved an accuracy of about 95% on the test data, but the model does not perform uniformly across both classes. In fact, it is better at predicting when a customer will not churn (Churn = 0) than when they will (Churn = 1). This imbalance can be verified by looking at how precision and recall differ between the two classes, or by noting that Cohen's kappa stays a bit below 80% despite the very high accuracy. How can you preprocess and re-sample the training data to make the classification a bit more powerful for class Churn = 1? A re-sampling sketch is given below.

Note 1: Need more help to understand the problem? Check out this blog post: https://www.knime.com/blog/predict-customer-churn-low-code-ml-example.

Note 2: This problem is hard: do not expect a major performance increase for class Churn = 1. Also, verifying whether the performance increase is statistically significant will not be trivial. Still... give this challenge your best try!
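For illustration only, here is a minimal Python sketch of the re-sampling idea (the challenge itself is meant to be solved with KNIME nodes). It randomly oversamples the minority class (Churn = 1) in the training data and shows how the per-class metrics and Cohen's kappa could be checked afterwards. The file name, column name, and model variable are assumptions, not part of the challenge data.

```python
import pandas as pd
from sklearn.utils import resample
from sklearn.metrics import classification_report, cohen_kappa_score

# Hypothetical file and column names -- adjust to the actual churn dataset.
train = pd.read_csv("churn_train.csv")

majority = train[train["Churn"] == 0]
minority = train[train["Churn"] == 1]

# Randomly oversample the minority class until both classes have the same
# number of rows; only the training data is re-sampled, never the test data.
minority_upsampled = resample(
    minority,
    replace=True,
    n_samples=len(majority),
    random_state=42,
)

# Shuffle the balanced training set before fitting a model on it.
balanced_train = (
    pd.concat([majority, minority_upsampled])
    .sample(frac=1, random_state=42)
    .reset_index(drop=True)
)

# After training any classifier on balanced_train, evaluate it on the
# untouched test set, e.g. (model, X_test, y_test are placeholders):
# y_pred = model.predict(X_test)
# print(classification_report(y_test, y_pred))
# print("Cohen's kappa:", cohen_kappa_score(y_test, y_pred))
```

Random oversampling is only one option: generating synthetic minority samples (e.g. with SMOTE, which KNIME offers as a dedicated node) or undersampling the majority class are equally valid things to try for this challenge.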
URL: Churn Prediction (https://www.knime.org/knime-applications/churn-prediction)
To use this workflow in KNIME, download it from the URL above and open it in KNIME.