Introduction
While training a machine learning model, you may find that it fits the training data exceptionally well but fails to perform well on the testing data, i.e., it does not generalize to unseen examples. This is where regularization comes into play. Regularization is a technique that reduces generalization error by fitting a function appropriately on the given training set while avoiding overfitting.
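To make the idea concrete, here is a minimal sketch in plain Python (with made-up data) of how an L2 penalty constrains a model. For a one-feature linear model `y ≈ w·x`, adding the penalty `λw²` to the squared-error loss has a closed-form solution, and increasing `λ` shrinks the fitted weight toward zero:

```python
# Minimal sketch: L2 (ridge) regularization for a one-feature linear model.
# We minimize  sum((y - w*x)^2) + lam * w^2,
# which has the closed-form solution  w = sum(x*y) / (sum(x*x) + lam).

def ridge_weight(xs, ys, lam):
    """Return the L2-regularized least-squares weight for y ≈ w * x."""
    return sum(x * y for x, y in zip(xs, ys)) / (sum(x * x for x in xs) + lam)

# Made-up training data for illustration.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.1, 3.9, 6.2, 7.8]

w_plain = ridge_weight(xs, ys, lam=0.0)  # ordinary least squares (no penalty)
w_ridge = ridge_weight(xs, ys, lam=5.0)  # penalized: weight shrinks toward 0

print(w_plain, w_ridge)
```

The larger `λ` is, the smaller the fitted weight becomes, which is exactly the "constraining the fit" behavior described above; in practice you would use a library such as scikit-learn rather than this hand-rolled formula.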
Article Overview
- What are L1 and L2 Regularization?
- What does the lambda term represent in L1 and L2 Regularization?
- How do L1 and L2 Regularization differ in improving the accuracy of machine learning models?
- Which technique is commonly preferred for boosting accuracy, and why?