The goal of the SVM algorithm is to find the best line or decision boundary that segregates n-dimensional space into classes, so that new data points can easily be placed in the correct category in the future. This best decision boundary is called a hyperplane.
SVM chooses the extreme points/vectors that help in creating the hyperplane. These extreme cases are called support vectors, and hence the algorithm is termed a Support Vector Machine.
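The idea above can be sketched with scikit-learn (the library choice and the toy data are assumptions, not part of the original text): after fitting a linear SVM on two separable clusters, the model exposes exactly those extreme points as `support_vectors_`.

```python
# Minimal sketch, assuming scikit-learn; fits a linear SVM on toy 2-D data
# and inspects the support vectors that define the hyperplane.
import numpy as np
from sklearn.svm import SVC

# Two linearly separable clusters (synthetic data for illustration)
X = np.array([[1, 1], [2, 1], [1, 2],    # class 0
              [5, 5], [6, 5], [5, 6]])   # class 1
y = np.array([0, 0, 0, 1, 1, 1])

clf = SVC(kernel="linear", C=1.0)
clf.fit(X, y)

# The extreme points closest to the decision boundary
print(clf.support_vectors_)

# New points fall on the correct side of the learned hyperplane
print(clf.predict([[2, 2], [5, 4]]))
```

Only a few of the six training points appear in `support_vectors_`; the rest play no role in placing the boundary.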
The following are important concepts in SVM −
• Support Vectors − Data points that are closest to the hyperplane are called support vectors. The separating line is defined with the help of these data points.
• Hyperplane − As we can see in the above diagram, it is a decision plane or surface that separates a set of objects belonging to different classes.
• Margin − The margin is the gap between the two lines drawn through the closest data points of the different classes. It can be calculated as the perpendicular distance from the separating line to the support vectors. A large margin is considered a good margin, and a small margin is considered a bad margin.
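The margin definition above can be checked numerically. For a linear SVM with weight vector w, the margin width is 2/‖w‖. The sketch below (scikit-learn and the toy data are assumptions for illustration) uses two classes exactly 4 units apart, so the maximal margin should come out close to 4.

```python
# Sketch: compute the margin width 2 / ||w|| of a hard-margin linear SVM.
import numpy as np
from sklearn.svm import SVC

# Two classes separated by a 4-unit horizontal gap (synthetic data)
X = np.array([[0, 0], [0, 1], [4, 0], [4, 1]])
y = np.array([0, 0, 1, 1])

# A very large C approximates a hard margin (no slack allowed)
clf = SVC(kernel="linear", C=1e6).fit(X, y)

w = clf.coef_[0]                      # weight vector of the hyperplane
margin = 2.0 / np.linalg.norm(w)      # perpendicular distance between the two margin lines
print(round(margin, 2))               # classes are 4 units apart, so margin ≈ 4
```

The maximal-margin boundary sits midway between the clusters at x = 2, with a margin line touching the support vectors on each side.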
Support vectors are the data points that lie closest to the hyperplane and influence its position and orientation. Using these support vectors, we maximize the margin of the classifier. Deleting the support vectors will change the position of the hyperplane; these are the points that help us build our SVM.
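The claim that only support vectors determine the hyperplane can be demonstrated directly. In this sketch (scikit-learn and the data are assumptions), refitting after deleting a point that is *not* a support vector leaves the hyperplane essentially unchanged:

```python
# Sketch: deleting a non-support-vector point does not move the hyperplane,
# since only the support vectors determine its position.
import numpy as np
from sklearn.svm import SVC

# [1,1] and [3,3] are the closest opposing points (the support vectors);
# [0,0] and [4,4] sit behind them and do not touch the margin.
X = np.array([[1, 1], [0, 0], [3, 3], [4, 4]])
y = np.array([0, 0, 1, 1])

clf = SVC(kernel="linear", C=1e6).fit(X, y)

# Refit without the interior point [4, 4]
clf2 = SVC(kernel="linear", C=1e6).fit(X[:3], y[:3])

# Same weight vector -> same decision boundary
print(np.allclose(clf.coef_, clf2.coef_, atol=1e-2))
```

Dropping [1, 1] or [3, 3] instead would widen the gap between the classes and shift the boundary, which is exactly why those points are the ones that "build" the SVM.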
Representation of the data before fitting the hyperplane
Representation of the data after fitting the best hyperplane
Types of SVM