
Underfitting and Overfitting in Machine Learning

30 December, 2022


Introduction

Overfitting and underfitting are two common problems that can occur when training a machine learning model. Understanding these concepts is crucial for building accurate and reliable models that generalize well to new, unseen data. In this article, we will define overfitting and underfitting, explain the causes of these problems, and discuss how to prevent and mitigate them.
But before diving into the concepts of overfitting and underfitting, let's review some basic terminology:

1. Signal - In machine learning, the term "signal" refers to the useful information or patterns in the data that the model is trying to learn.

2. Noise - The term "noise" refers to the irrelevant or random variations in the data that can interfere with the model's ability to learn the signal.

3. Bias - The term "bias" refers to the systematic error or deviation of the model's predictions from the true values. A model with high bias is said to be underfitting, meaning it is too simple and unable to capture the complexity and patterns in the data.

4. Variance - The term "variance" refers to the degree to which the model's predictions vary or fluctuate for different training data sets. A model with high variance is said to be overfitting, meaning it is too complex and is able to fit the training data too well, but performs poorly on new, unseen data.

To build accurate and reliable machine learning models, it is important to find a balance between bias and variance and to minimize the noise in the data.
Check out Bias and Variance in Machine Learning to gain deeper insight into these two concepts.

What is Overfitting?

Overfitting is a phenomenon that occurs when a machine learning model is too complex and is able to fit the training data too well, but performs poorly on new, unseen data. In other words, the model has learned the "noise" in the training data, rather than the underlying relationships and patterns. As a result, the model is not able to generalize to new data and makes poor predictions.
There are several factors that can contribute to overfitting, including:

• Too many features: If the model has too many input variables (features), it may fit the training data very closely but fail to generalize to new data, because it has learned the noise in the training data rather than the underlying relationships and patterns.

• Lack of regularization: Some machine learning algorithms, such as neural networks and decision trees, can "memorize" the training data if they are not regularized. Regularization is a technique that adds a penalty to the model to prevent it from fitting the training data too closely.

• Insufficient training data: If the training dataset is small, the model may fit the training examples well but fail to generalize, because it has not seen enough examples to learn the underlying relationships and patterns.

Here is a simple code example that illustrates the concept of overfitting 👇

Overfitting in Machine Learning
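
A minimal sketch of such code, assuming scikit-learn, NumPy, and Matplotlib (the data, noise level, and polynomial degree below are illustrative choices):

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import PolynomialFeatures

# Generate synthetic data: a linear trend plus random noise
rng = np.random.RandomState(42)
X = np.sort(rng.uniform(0, 1, 20)).reshape(-1, 1)
y = 2 * X.ravel() + rng.normal(scale=0.2, size=20)

# Fit a plain linear regression model to the data
linear_model = LinearRegression().fit(X, y)

# Overfit by expanding the input into high-degree polynomial features
poly = PolynomialFeatures(degree=15)  # degree chosen for illustration
overfit_model = LinearRegression().fit(poly.fit_transform(X), y)

# Plot the data, the linear fit, and the overfitted fit
X_plot = np.linspace(0, 1, 200).reshape(-1, 1)
plt.scatter(X, y, label="data")
plt.plot(X_plot, linear_model.predict(X_plot), label="linear fit")
plt.plot(X_plot, overfit_model.predict(poly.transform(X_plot)),
         label="degree-15 fit (overfit)")
plt.legend()
plt.show()
```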

This code generates synthetic data with a linear trend and fits a plain linear regression model. It then overfits by adding polynomial features and fitting a linear regression model to the expanded data. The resulting plot shows the data together with both the regular and the overfitted fits, letting you visualize the overfitting: the high-degree curve chases every noisy point instead of the underlying trend.

What is Underfitting?

Underfitting is the opposite of overfitting and occurs when a machine learning model is too simple and is unable to capture the complexity and patterns in the training data. As a result, the model performs poorly on both the training data and new, unseen data.
There are several factors that can contribute to underfitting, including:

• Too few features: If the model has too few input variables (features), it may not be able to capture the complexity and patterns in the training data.

• Insufficient model complexity: Some machine learning algorithms, such as linear regression and logistic regression, have a limited capacity to capture complex patterns in data. If the training data is too complex, these algorithms may be unable to fit it accurately.

• Insufficient training data: If the training dataset is small, the model may not have seen enough examples to learn the underlying relationships and patterns in the data.

Here's an example of how underfitting works 👇

Underfitting in Machine Learning
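
A minimal sketch along the same lines, again assuming scikit-learn, NumPy, and Matplotlib (the sine function and sample size are illustrative choices):

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.linear_model import LinearRegression

# Generate synthetic data from a non-linear (sine) function plus noise
rng = np.random.RandomState(0)
X = np.sort(rng.uniform(0, 2 * np.pi, 30)).reshape(-1, 1)
y = np.sin(X).ravel() + rng.normal(scale=0.1, size=30)

# Fit a straight line to the non-linear data: the model underfits
underfit_model = LinearRegression().fit(X, y)

# Plot the data, the true function, and the underfitted linear fit
X_plot = np.linspace(0, 2 * np.pi, 200).reshape(-1, 1)
plt.scatter(X, y, label="data")
plt.plot(X_plot, np.sin(X_plot), label="true function")
plt.plot(X_plot, underfit_model.predict(X_plot), label="linear fit (underfit)")
plt.legend()
plt.show()
```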

This code generates synthetic data from a non-linear function and fits a linear regression model to it. Because a straight line cannot capture the non-linear pattern, the model underfits and performs poorly even on the training data. The resulting plot shows the data, the underlying function, and the underfitted linear fit, letting you visualize the underfitting.

How to Prevent and Mitigate Overfitting and Underfitting

There are several techniques that can be used to prevent or mitigate overfitting and underfitting, including the following (a short code sketch illustrating several of them follows this list):

• Splitting the data into training and validation sets: One way to detect overfitting is to split the data into a training set and a validation set. The model is trained on the training set, and its performance is evaluated on the validation set. This allows us to determine whether the model is overfitting or underfitting the data.

• Using cross-validation: Another way to prevent overfitting is to use cross-validation, which involves training the model on different subsets of the data and evaluating its performance on the remaining data. This helps to ensure that the model is not overly dependent on any particular subset of the data.

• Regularization: As mentioned earlier, regularization adds a penalty to the model to prevent it from fitting the training data too closely, which helps prevent overfitting and improves generalization to new, unseen data. Common variants include L1 regularization, L2 regularization, and elastic net regularization. These techniques add a penalty term to the model's objective function, encouraging the model to reduce the complexity of the learned relationships. The strength of the penalty is controlled by a hyperparameter; by tuning it, it is possible to find the optimal balance between bias and variance and to prevent overfitting while still capturing the useful signal in the data.

• Ensemble techniques: Ensemble techniques combine the predictions of multiple models to produce a more accurate and robust prediction. They can help prevent both overfitting and underfitting, as they reduce the variance and bias of the model. There are several types of ensemble techniques, including:

  • Bagging: Bagging (short for bootstrap aggregating) involves training multiple models on different subsets of the data and then averaging or voting on their predictions. This helps reduce the variance of the model, as the individual models are less likely to overfit the data.

  • Boosting: Boosting involves training multiple models sequentially, where each model is trained to correct the mistakes of the previous one. This helps reduce the bias of the model, as later models learn from the errors of earlier ones.

  • Stacking: Stacking involves training multiple models and then using a "meta-model" to combine their predictions. This can help reduce both the bias and the variance of the model, as the meta-model learns how best to weight the base models' predictions.

By using ensemble techniques, it is possible to build more accurate and robust machine learning models that are less prone to overfitting and underfitting.
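
As a rough illustration of several of these techniques together, here is a minimal scikit-learn sketch (the dataset, models, and hyperparameter values are illustrative assumptions, not taken from the article):

```python
from sklearn.datasets import make_regression
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.linear_model import Ridge
from sklearn.ensemble import BaggingRegressor, GradientBoostingRegressor

# Synthetic regression data (illustrative)
X, y = make_regression(n_samples=200, n_features=20, noise=10.0, random_state=0)

# 1. Hold out a validation set to detect over/underfitting
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=0)

# 2. Regularization: Ridge adds an L2 penalty; alpha controls its strength
model = Ridge(alpha=1.0).fit(X_train, y_train)
print("train R^2:", model.score(X_train, y_train))
print("val   R^2:", model.score(X_val, y_val))  # a large gap suggests overfitting

# 3. Cross-validation: evaluate across several train/test splits
cv_scores = cross_val_score(Ridge(alpha=1.0), X, y, cv=5)
print("5-fold CV R^2:", cv_scores.mean())

# 4. Ensembles: bagging mainly reduces variance, boosting mainly reduces bias
bagging = BaggingRegressor(n_estimators=50, random_state=0).fit(X_train, y_train)
boosting = GradientBoostingRegressor(random_state=0).fit(X_train, y_train)
print("bagging  val R^2:", bagging.score(X_val, y_val))
print("boosting val R^2:", boosting.score(X_val, y_val))
```

In practice, a training score much higher than the validation score signals overfitting, while similar but low scores on both signal underfitting.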

Conclusion

In conclusion, overfitting and underfitting are common problems that can occur when training a machine learning model. Overfitting occurs when a model is too complex and fits the training data too well but performs poorly on new, unseen data. Underfitting is the opposite and occurs when a model is too simple and is unable to capture the complexity and patterns in the training data. To prevent and mitigate these problems, it is essential to split the data into training and validation sets, use cross-validation, apply regularization as needed, and consider ensemble techniques. By understanding and addressing these issues, it is possible to build accurate and reliable machine learning models that generalize well to new data.
Thanks for reading ♥️
