What to do when your ML model suffers from overconfidence?

A model is overconfident when it is more certain of its predictions than the data warrants, and overfitting is a common cause. For the model, the training data is the whole source of truth; when it learns that data by rote instead of capturing the underlying structure, overconfidence results. Another reason is that deep learning models have very many parameters and typically use ReLU activations, a combination known to produce high-confidence predictions even on inputs far from the training data.
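The ReLU effect is easy to see in a toy example: because a ReLU network is piecewise linear, scaling an input up makes the logits grow roughly linearly, so the softmax saturates toward full confidence. Below is a minimal sketch of this behavior; the random weights stand in for a trained classifier and are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny 2-layer ReLU network with random weights stands in for a
# trained 3-class classifier (hypothetical weights, illustration only).
W1, b1 = rng.normal(size=(32, 2)), rng.normal(size=32)
W2, b2 = rng.normal(size=(3, 32)), rng.normal(size=3)

def softmax(z):
    z = z - z.max()          # numerical stability
    e = np.exp(z)
    return e / e.sum()

def predict(x):
    h = np.maximum(0, W1 @ x + b1)   # ReLU hidden layer
    return softmax(W2 @ h + b2)

x = rng.normal(size=2)               # an arbitrary input direction
for scale in (1, 10, 100, 1000):     # move further and further from the data
    p = predict(scale * x)
    print(f"scale={scale:>4}  max softmax confidence={p.max():.4f}")
```

As the input moves away from anything resembling training data, the reported confidence climbs toward 1.0 instead of falling, which is exactly the failure mode described above.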

“Today, AI-ML models are used throughout DevOps cycles, and an overconfident model can ruin the entire project lifecycle. The challenges any business can face include supply chain failure and wrong pricing and cost estimates, which are the backbone of any business. In the banking industry, this can lead to more fraudulent transactions as well as fewer leads. And it can cost lives in the healthcare industry,” said Anjna Bhati, Head of Data Analytics and AI at BluePi Consulting.

So, what can be done?

Industry executives suggest adopting Bayesian methods to quantify uncertainty. Some causes of overconfident models, such as overfitting and small data sets, can be addressed with regularization and data augmentation, but that alone does not solve the problem: uncertainty still has to be quantified, because real data will always contain inputs the model has never seen.
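The article does not name a specific Bayesian method at this point, so as one hedged illustration, the sketch below uses Monte Carlo dropout, a widely used approximation to Bayesian inference in neural networks: dropout is kept active at prediction time, and the spread across stochastic forward passes serves as an uncertainty estimate. The architecture and inputs are hypothetical.

```python
import torch
import torch.nn as nn

# A minimal MC-dropout sketch (one common approximation to Bayesian
# inference); the architecture and data here are illustrative assumptions.
model = nn.Sequential(
    nn.Linear(10, 64), nn.ReLU(), nn.Dropout(p=0.2),
    nn.Linear(64, 3),
)

def mc_dropout_predict(model, x, n_samples=100):
    model.train()                       # keep dropout active at test time
    with torch.no_grad():
        probs = torch.stack([
            torch.softmax(model(x), dim=-1) for _ in range(n_samples)
        ])
    return probs.mean(0), probs.std(0)  # predictive mean and spread

x = torch.randn(5, 10)                  # a batch of unseen inputs
mean, spread = mc_dropout_predict(model, x)
print(mean)
print(spread)                           # high spread signals high uncertainty
```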

Bhati advises approximating the posterior over the weights with a Laplace approximation in the last layer of the neural network.
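As a hedged sketch of that idea, the example below applies a Laplace approximation to a binary logistic "last layer" acting on fixed penultimate-layer features: the posterior over the last-layer weights is approximated by a Gaussian centered at the MAP estimate with covariance given by the inverse Hessian, and MacKay's probit approximation gives the predictive probability. The toy features and the crude MAP fit are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy "penultimate-layer" features and labels (illustrative assumptions).
N, D = 200, 5
Phi = rng.normal(size=(N, D))
w_true = rng.normal(size=D)
sigmoid = lambda z: 1 / (1 + np.exp(-z))
y = (rng.random(N) < sigmoid(Phi @ w_true)).astype(float)

prior_prec = 1.0                        # Gaussian prior N(0, I) on the weights

# Crude MAP fit by gradient descent (a real model would already be trained).
w = np.zeros(D)
for _ in range(500):
    grad = Phi.T @ (sigmoid(Phi @ w) - y) + prior_prec * w
    w -= 0.1 * grad / N

# Laplace approximation: posterior ~ N(w_MAP, H^-1), H = Hessian at the MAP.
p = sigmoid(Phi @ w)
H = (Phi * (p * (1 - p))[:, None]).T @ Phi + prior_prec * np.eye(D)
H_inv = np.linalg.inv(H)

def laplace_predict(phi):
    mu = phi @ w                        # mean of the predictive logit
    var = phi @ H_inv @ phi             # its variance under the Laplace posterior
    # MacKay's probit approximation to the predictive integral.
    return sigmoid(mu / np.sqrt(1 + np.pi * var / 8))

x_far = 20 * rng.normal(size=D)         # a point far from the data
print("plain sigmoid:", sigmoid(x_far @ w))      # typically saturated near 0 or 1
print("Laplace:      ", laplace_predict(x_far))  # pulled back toward 0.5
```

The point of the exercise: far from the data, the logit variance grows, and the Laplace predictive probability retreats toward 0.5 instead of staying saturated.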

“We should also pass irrelevant, out-of-distribution images through the model and assess its confidence on those images during training and testing as well. Companies can save themselves from overconfident models by continuously evaluating their data pipelines and the model,” she added.
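A minimal version of that junk-image check might look like the sketch below: feed noise images through a classifier and alert if the mean maximum softmax confidence stays high. The model, image shape, and alert threshold are all assumptions.

```python
import torch
import torch.nn as nn

# Sketch of the junk-image check: run out-of-distribution noise images
# through a classifier and flag the run if the model stays confident on
# them (the model, shapes, and threshold are illustrative assumptions).
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 128),
                      nn.ReLU(), nn.Linear(128, 10))
model.eval()

noise = torch.rand(256, 1, 28, 28)           # junk images: uniform noise
with torch.no_grad():
    conf = torch.softmax(model(noise), dim=-1).max(dim=-1).values

mean_conf = conf.mean().item()
print(f"mean max-confidence on junk inputs: {mean_conf:.3f}")
if mean_conf > 0.7:                          # an arbitrary alert threshold
    print("warning: model is overconfident on out-of-distribution data")
```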

Prasanna Sattigeri, Research Staff Member at IBM Research AI, MIT-IBM Watson AI Lab, also suggested using better models such as ensembles and others that follow Bayesian principles, as they tend to suffer less from overconfidence.
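One common form of this is a deep ensemble: several copies of the same network are trained from different random initializations, and their softmax outputs are averaged. The sketch below shows the pattern on a synthetic task; the data, sizes, and training loop are illustrative assumptions.

```python
import torch
import torch.nn as nn

# A minimal deep-ensemble sketch: train a few independently initialized
# copies of the same network and average their softmax outputs. The toy
# data, sizes, and training loop here are illustrative assumptions.
torch.manual_seed(0)
X = torch.randn(500, 10)
y = (X[:, 0] > 0).long()                       # synthetic binary labels

def make_model():
    return nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2))

ensemble = []
for seed in range(1, 6):                       # 5 ensemble members
    torch.manual_seed(seed)                    # different init per member
    model = make_model()
    opt = torch.optim.Adam(model.parameters(), lr=1e-2)
    for _ in range(200):                       # short full-batch training loop
        opt.zero_grad()
        loss = nn.functional.cross_entropy(model(X), y)
        loss.backward()
        opt.step()
    model.eval()
    ensemble.append(model)

def ensemble_predict(x):
    with torch.no_grad():
        probs = torch.stack([torch.softmax(m(x), -1) for m in ensemble])
    return probs.mean(0)                       # averaged predictive distribution

print(ensemble_predict(torch.randn(3, 10)))
```

Where the members disagree, the averaged distribution flattens out, which is exactly the reduction in overconfidence Sattigeri describes.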

“Improve existing models with recalibration techniques. These are post-processing methods that can be used to reduce overconfidence,” he said.
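The article does not say which recalibration method Sattigeri has in mind; temperature scaling is a standard post-processing choice, so the sketch below uses it: a single temperature T is fitted on held-out validation logits by minimizing negative log-likelihood, and all future logits are divided by T. The synthetic logits and labels are assumptions.

```python
import torch
import torch.nn.functional as F

# Temperature-scaling sketch: fit one scalar T on validation logits,
# then divide logits by T at inference time. The synthetic logits and
# labels below are illustrative assumptions.
torch.manual_seed(0)
true = torch.randint(0, 10, (1000,))
val_logits = F.one_hot(true, 10).float() * 8 + torch.randn(1000, 10)
# Flip ~30% of labels so ~99% softmax confidence meets ~70% accuracy.
flip = torch.rand(1000) < 0.3
val_labels = torch.where(flip, torch.randint(0, 10, (1000,)), true)

log_T = torch.zeros(1, requires_grad=True)      # optimize log T so T stays > 0
opt = torch.optim.LBFGS([log_T], lr=0.1, max_iter=50)

def closure():
    opt.zero_grad()
    loss = F.cross_entropy(val_logits / log_T.exp(), val_labels)
    loss.backward()
    return loss

opt.step(closure)
T = log_T.exp().item()
print(f"fitted temperature T = {T:.2f}")        # T > 1 softens the softmax
# At inference time: probs = F.softmax(test_logits / T, dim=-1)
```

Because it only rescales logits, temperature scaling leaves the predicted class unchanged; it adjusts how much confidence the model reports, not what it predicts.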

For predictive and ML models, a lot depends on the training data, and the data itself can be used to check whether the predictions are right or wrong. Prashanth Kaddi, Partner at Deloitte India, explains how:

“Let’s say we have 36 months of data, from January 2018 to January 2021. We’ll only use the 30 months from January 2018 to July 2020 to make predictions, and then match those predictions with what actually happened from August 2020 to January 2021. That way, your data will tell you if the model is predicting right or wrong. You will know if the model is overconfident or overfitted,” Kaddi said.
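In code, that backtest is just a time-based split plus a comparison of predictions against held-out actuals. The sketch below uses a hypothetical monthly demand series and a naive seasonal forecast as stand-ins for a real model.

```python
import pandas as pd

# Sketch of Kaddi's backtest: hold out the most recent months and
# compare predictions against what actually happened. The demand column
# and the naive seasonal forecast are illustrative assumptions.
idx = pd.date_range("2018-01", "2021-01", freq="MS")
df = pd.DataFrame({"demand": range(len(idx))}, index=idx)

train = df.loc["2018-01":"2020-07"]              # ~30 months for fitting
test = df.loc["2020-08":"2021-01"]               # held-out actuals

# Stand-in forecast: repeat the same month from the previous year.
preds = df["demand"].shift(12).loc[test.index]

errors = test["demand"] - preds
print(errors)   # a constant sign here would suggest systematic bias
```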

A few other steps, such as testing the model for three months before deploying it, will help remove bias, overfitting, and overconfidence. A constant bias in the predictions, as in the sketch above, is a sign that something is wrong.

