Understanding Basics Of SVM With Example And Python Implementation


The goal of the SVM algorithm is to find the best line or decision boundary that segregates n-dimensional space into classes, so that new data points can easily be placed in the correct category in the future. This best decision boundary is called a hyperplane.

SVM chooses the extreme points/vectors that help in creating the hyperplane. These extreme cases are called support vectors, and hence the algorithm is termed a Support Vector Machine.

The following are important concepts in SVM:

• Support Vectors − Data points that are closest to the hyperplane are called support vectors. The separating line is defined with the help of these data points.

• Hyperplane − A decision plane or boundary that separates a set of objects belonging to different classes.

• Margin − The gap between the two lines drawn through the closest data points of the different classes. It is calculated as the perpendicular distance from the line to the support vectors. A large margin is considered a good margin, and a small margin is considered a bad margin.

Support vectors are the data points that lie closest to the hyperplane and influence its position and orientation. Using these support vectors, we maximize the margin of the classifier; deleting the support vectors would change the position of the hyperplane. These are the points that help us build our SVM.
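The role of support vectors can be seen directly in Scikit-Learn: after fitting a linear SVM, the `support_vectors_` attribute holds exactly these closest points. Below is a minimal sketch on a tiny made-up 2-D dataset (the points and labels are assumed for illustration):

```python
import numpy as np
from sklearn.svm import SVC

# Two small, well-separated clusters of assumed toy points
X = np.array([[1, 1], [2, 1], [1, 2],    # class 0
              [4, 4], [5, 4], [4, 5]])   # class 1
y = np.array([0, 0, 0, 1, 1, 1])

clf = SVC(kernel='linear')
clf.fit(X, y)

# Only the extreme points closest to the decision boundary are
# retained as support vectors; the other points do not affect it.
print(clf.support_vectors_)
```

Deleting any non-support point and refitting leaves the boundary unchanged, which is exactly the property described above.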

Representation of the data before fitting the hyperplane

Representation of the data after fitting the best hyperplane

Types of SVM

SVM can be of two types:

Linear SVM: A linear SVM is used for linearly separable data. If a dataset can be classified into two classes using a single straight line, the data is called linearly separable, and the classifier used is called a linear SVM classifier.

Non-linear SVM: A non-linear SVM is used for non-linearly separable data. If a dataset cannot be classified using a straight line, the data is called non-linear, and the classifier used is called a non-linear SVM classifier.

Linear SVM:

The working of the SVM algorithm can be understood with an example. Suppose we have a dataset with two labels (green and blue) and two features, x1 and x2. We want a classifier that classifies a pair of coordinates (x1, x2) as either green or blue. Consider the image below:

 Linear SVM

Since this is a 2-D space, we can easily separate the two classes with a straight line. However, there can be multiple lines that separate these classes. Consider the image below:

Linear SVM

Hence, the SVM algorithm helps to find the best line or decision boundary; this best boundary is called a hyperplane. The SVM algorithm finds the points of both classes that lie closest to the line; these points are called support vectors. The distance between the support vectors and the hyperplane is called the margin, and the goal of SVM is to maximize this margin. The hyperplane with the maximum margin is called the optimal hyperplane.
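For a linear SVM the maximized margin can be read off the fitted model: the margin width is 2 / ‖w‖, where w is the learned weight vector. Below is a small sketch on an assumed toy dataset (a large C is used to approximate a hard margin):

```python
import numpy as np
from sklearn.svm import SVC

# Assumed toy points: two linearly separable clusters
X = np.array([[0, 0], [1, 0], [0, 1],
              [3, 3], [4, 3], [3, 4]])
y = np.array([0, 0, 0, 1, 1, 1])

# Large C discourages slack, approximating a hard-margin SVM
clf = SVC(kernel='linear', C=1000).fit(X, y)

w = clf.coef_[0]                     # weight vector of the hyperplane
margin = 2 / np.linalg.norm(w)       # width of the maximized margin
print("w =", w, "b =", clf.intercept_[0], "margin width =", margin)
```

Among all lines that separate the two clusters, this is the one whose margin width is largest; any other separating line would have a smaller value of 2 / ‖w‖.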


SVM algorithm

Non-Linear SVM:

If data is linearly arranged, we can separate it with a straight line, but for non-linear data we cannot draw a single line. Consider the image below:

Non-Linear SVM

So, to separate these data points, we need to add one more dimension. For linear data we have used two dimensions, x and y, so for non-linear data we will add a third dimension, z. It can be calculated as z = x² + y².

By adding the third dimension, the sample space becomes as shown in the image below:

Non-Linear SVM

So now, SVM will divide the datasets into classes in the following way. Consider the image below:

 SVM will divide the datasets


Since we are in 3-D space, the boundary looks like a plane parallel to the x-axis. If we convert it back to 2-D space with z = 1, it becomes:

SVM will divide the datasets

Hence, we get a circle of radius 1 in the case of non-linear data.
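In practice the extra dimension rarely has to be added by hand; a kernel such as RBF lets the SVM learn this circular boundary directly in the original 2-D space. A sketch on assumed synthetic data (points labeled by whether they fall inside the unit circle):

```python
import numpy as np
from sklearn.svm import SVC

# Assumed synthetic data: label 1 inside the unit circle, 0 outside
rng = np.random.default_rng(1)
X = rng.uniform(-2, 2, size=(300, 2))
y = ((X ** 2).sum(axis=1) < 1).astype(int)

# The RBF kernel implicitly maps the data to a higher-dimensional
# space, so no manual z = x^2 + y^2 feature is needed.
clf = SVC(kernel='rbf', gamma='scale').fit(X, y)
print("training accuracy:", clf.score(X, y))
```

The fitted decision boundary in 2-D is approximately the circle of radius 1, matching the picture above.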


Python Implementation of SVM with Scikit-Learn

The task is to predict whether a banknote is authentic or not, based on four attributes of the note: skewness of the wavelet-transformed image, variance of the image, entropy of the image, and kurtosis of the image. This is a binary classification problem, and we will use the SVM algorithm to solve it.

Importing libraries

The following script imports the required libraries:

import pandas as pd

import numpy as np

import matplotlib.pyplot as plt

%matplotlib inline

Importing the Dataset

The data is available for download at the following link:


To read data from a CSV file, the simplest way is to use the read_csv method of the pandas library. The following code reads the banknote data into a pandas dataframe:

bankdata = pd.read_csv("D:/Datasets/bill_authentication.csv")

Exploratory Data Analysis

There are virtually limitless ways to analyse datasets with a variety of Python libraries. For the sake of simplicity, we will only check the size of the data and look at the first few records. To see the number of rows and columns, execute the following command:

bankdata.shape

In the output you will see (1372, 5). This means that the banknote dataset has 1372 rows and 5 columns.

To see what the dataset actually looks like, execute the following command:

bankdata.head()

The output shows the first five records: the four numeric attributes and the Class column.

Data Pre-processing

Data pre-processing involves:

(1) Dividing the data into attributes and labels and

(2) dividing the data into training and testing sets.

To divide the data into attributes and labels, execute the following code:

X = bankdata.drop('Class', axis=1)

y = bankdata['Class']

Once the data is split into attributes and labels, the final pre-processing step is to divide it into training and test sets. Luckily, the model_selection module of the Scikit-Learn library contains the train_test_split method, which allows us to seamlessly divide data into training and test sets.

Execute the following script to do so:

from sklearn.model_selection import train_test_split

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.20)




Training the Algorithm

We have divided the data into training and testing sets. Now is the time to train our SVM on the training data. Scikit-Learn contains the svm module, which provides built-in classes for different SVM algorithms.

The fit method of the SVC class is called to train the algorithm on the training data, which is passed as a parameter to the fit method. Execute the following code to train the algorithm:

from sklearn.svm import SVC

svclassifier = SVC(kernel='linear')

svclassifier.fit(X_train, y_train)



Making Predictions

To make predictions, the predict method of the SVC class is used. Execute the following code:
y_pred = svclassifier.predict(X_test)



Evaluating the Algorithm

The confusion matrix, precision, recall, and F1 measure are the most commonly used metrics for classification tasks. Scikit-Learn's metrics module contains the classification_report and confusion_matrix methods, which can be readily used to compute these metrics:

from sklearn.metrics import classification_report, confusion_matrix

print(confusion_matrix(y_test, y_pred))

print(classification_report(y_test, y_pred))
The evaluation results are as follows:

[[152   0]
 [  1 122]]
              precision    recall  f1-score   support

           0       0.99      1.00      1.00       152
           1       1.00      0.99      1.00       123

avg / total        1.00      1.00      1.00       275
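As a quick sanity check, the scores above can be reproduced by hand from the confusion matrix printed by the script (the counts below are copied from that output):

```python
# Counts from the confusion matrix [[152, 0], [1, 122]] above,
# treating class 1 (authentic) as the positive class.
tn, fp, fn, tp = 152, 0, 1, 122

precision_1 = tp / (tp + fp)                 # 122 / 122 = 1.00
recall_1 = tp / (tp + fn)                    # 122 / 123 ≈ 0.99
accuracy = (tn + tp) / (tn + fp + fn + tp)   # 274 / 275 ≈ 0.996

print(round(precision_1, 2), round(recall_1, 2), round(accuracy, 3))
```

Only one of the 275 test notes is misclassified, which is why precision, recall, and F1 all round to essentially 1.00 in the report.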



