
SVM in sklearn

Support-vector machines (SVMs, also known as support-vector networks) are supervised machine learning models, with associated learning algorithms, that analyze data for classification and regression. Built on the statistical learning framework, or VC theory, proposed by Vapnik and Chervonenkis (1974), SVMs are among the most robust prediction methods. This article covers the implementation of SVMs in sklearn, along with their benefits and drawbacks.

What is SVM?

SVM, or Support Vector Machine, is a supervised machine learning technique used to analyze data for classification and regression. The goal of the SVM algorithm is to find the optimal decision boundary, or line, that separates the n-dimensional space into classes, so that fresh data points can be placed in the correct category quickly in the future. This decision boundary is known as a hyperplane. It lets your model learn from a labeled dataset and then categorize unlabeled input. SVM is one of the most popular supervised learning algorithms and is frequently used, for example, to classify images using features extracted by a deep convolutional neural network.
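To make the idea of a hyperplane concrete, here is a minimal sketch (the toy points are mine, not the article's example) that fits a linear SVM on a handful of two-dimensional points and reads back the learned hyperplane w·x + b = 0 through the coef_ and intercept_ attributes:

from sklearn import svm

# Toy 2-D points and their class labels (assumed for illustration only)
X = [[0, 0], [1, 1], [1, 0], [3, 3], [4, 3], [3, 4]]
y = [0, 0, 0, 1, 1, 1]

clf = svm.SVC(kernel='linear')
clf.fit(X, y)

print('w =', clf.coef_)        # normal vector of the separating hyperplane
print('b =', clf.intercept_)   # offset of the hyperplane
print(clf.predict([[4, 4]]))   # a fresh point falls on the class-1 side -> [1]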

Assume you have some linearly separable points on a plane that belong to different classes. The SVM will locate a straight line that divides those points into two categories as cleanly as possible. Imagine two groups of points: the first are red, while the rest are blue. A support vector machine looks for a line that divides the red points from the blue ones while maximizing its distance to both classes of points. This distance is known as the maximum margin. To determine the margin, the distance from the line to the points at the border of each class is measured; those border points are called the support vectors. Of course, this only works if the points can actually be separated by a line, a property known as linear separability.

The kernel trick is a crucial addition to the support vector machine algorithm that overcomes this restriction. It moves the data into a higher-dimensional space and separates it there with a linear function. The transformation is carried out through a so-called kernel. This is only possible because the calculation of the linear function that separates the data depends only on dot products of the support vectors/points. Simply put: imagine the red and blue points are mixed together on the plane, so that no single straight line can divide them.

Now lift the red ones up, leaving the blue ones behind on the floor. It becomes obvious that you could separate the points using a plane. That is what the kernel trick achieves: it maps the data into a higher dimension where it is (hopefully) linearly separable. The points on the floor (two dimensions) could not be separated by a line, but lifting them added a third dimension in which a plane, the three-dimensional counterpart of a line, separates them.
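A small, hedged illustration of this idea (the dataset and kernel choices here are mine, not the article's): make_circles produces two concentric rings that no straight line can separate, yet an RBF-kernel SVM handles them easily while a linear kernel cannot.

from sklearn.datasets import make_circles
from sklearn import svm

# Two concentric circles: not linearly separable in the original 2-D space
X, y = make_circles(n_samples=200, factor=0.3, noise=0.05, random_state=0)

linear_clf = svm.SVC(kernel='linear').fit(X, y)
rbf_clf = svm.SVC(kernel='rbf').fit(X, y)

print('Linear kernel accuracy:', linear_clf.score(X, y))  # typically near chance level
print('RBF kernel accuracy:', rbf_clf.score(X, y))        # typically close to 1.0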

Advantages of SVM

  1. SVMs are effective in high-dimensional spaces and use memory efficiently, because the decision function relies only on a subset of the training points (the support vectors).
  2. Suitable for classes with a clear margin of separation. Also effective when the number of samples exceeds the number of dimensions.

Disadvantages of SVM

  1. When the number of features of each data point is much greater than the number of training samples, SVM does not perform well. The algorithm is also not well suited to larger datasets.
  2. Performance degrades when the target classes overlap or the data contains more noise. In addition, the support vector classifier does not directly provide probability estimates (although, as sketched below, sklearn can calibrate them at extra cost).
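As a side note on the second point, sklearn can still produce calibrated class probabilities if you ask for them when constructing the classifier. A brief sketch, assuming the same kind of dataset used later in this article; probability=True adds an internal cross-validated calibration step and makes training slower:

from sklearn.datasets import make_classification
from sklearn import svm

X, y = make_classification(random_state=42)

# probability=True enables predict_proba at extra training cost
prob_model = svm.SVC(probability=True, random_state=42)
prob_model.fit(X, y)
print(prob_model.predict_proba(X[:3]))  # class probabilities for the first three samples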

Implementation of SVM in sklearn

Importing libraries

from sklearn.datasets import make_classification

from sklearn import svm

Creating dataset

X, y = make_classification(random_state=42)

print('Features are', X)

print('Labels are', y)

Output

Features are [[-2.02514259 0.0291022 -0.47494531 ... -0.33450124 0.86575519 -1.20029641]
 [ 1.61371127 0.65992405 -0.15005559 ... 1.37570681 0.70117274 -0.2975635 ]
 [ 0.16645221 0.95057302 1.42050425 ... 1.18901653 -0.55547712 -0.63738713]
 ...
 [-0.03955515 -1.60499282 0.22213377 ... -0.30917212 -0.46227529 -0.43449623]
 [ 1.08589557 1.2031659 -0.6095122 ... -0.3052247 -1.31183623 -1.06511366]
 [-0.00607091 1.30857636 -0.17495976 ... 0.99204235 0.32169781 -0.66809045]]

Labels are [0 0 1 1 0 0 0 1 0 1 1 0 0 0 1 1 1 0 0 1 1 0 0 0 0 1 1 0 1 0 0 0 0 0 0 1 0
 0 1 1 1 0 1 0 0 1 1 0 0 1 1 1 0 1 0 0 1 1 0 1 1 1 1 1 0 1 0 0 1 0 1 0 1 0
 1 1 1 0 0 0 1 0 1 0 1 1 1 1 1 0 0 1 0 1 1 0 1 1 0 0]
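Note that make_classification returns a feature matrix X (100 samples with 20 features each under the default settings) and a label vector y; it does not split the data into train and test sets by itself. If you want a held-out test set, sklearn's train_test_split can provide one. A minimal sketch, not part of the original walkthrough:

from sklearn.model_selection import train_test_split

# Hold out 25% of the rows for evaluation
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=42)
print(X_train.shape, X_test.shape)  # (75, 20) and (25, 20) with the defaults above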

Creating SVM model

model = svm.SVC()

model.fit(X, y)

print('Support vectors are', model.support_vectors_)

print('Indices for support vectors are', model.support_)

Output

Support vectors are [[-2.02514259 0.0291022 -0.47494531 ... -0.33450124 0.86575519 -1.20029641]
 [ 0.17989415 -0.22210005 0.10537551 ... -0.96834445 -0.22463315 0.5500521 ]
 [-1.22576566 0.80742726 0.65232288 ... 0.88365994 -0.03850847 -0.1726273 ]
 ...
 [ 0.2005692 -0.24878048 -1.07213901 ... 0.08614388 -0.36702784 -0.82759022]
 [-0.6115178 -1.22532865 -0.85835778 ... 0.18483612 2.63238206 0.4933179 ]
 [-0.22096417 -0.54561186 -0.57117899 ... 0.64084286 -0.28110029 1.79768653]]

Indices for support vectors are [ 0 4 5 6 8 11 12 13 18 21 22 23 24 27 29 30 31 32 33 34 36 47 52 54
 58 67 73 78 79 81 83 89 92 95 2 3 7 9 10 14 25 35 38 39 40 42 45 49
 51 57 59 60 61 62 63 68 72 74 75 76 80 85 86 88 91 93 94 96]
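Once the model is fitted, it can also be used for prediction and scoring. The following short sketch (the exact output values depend on the fit above) shows the number of support vectors per class, predictions for a few samples, and the accuracy on the training data:

print('Support vectors per class:', model.n_support_)
print('Predictions for the first five samples:', model.predict(X[:5]))
print('Training accuracy:', model.score(X, y))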

Conclusion

We went through an explanation of Support Vector Machines (SVMs) along with their pros, cons, and implementation. SVM finds the hyperplane that separates the classes with the largest margin and can be used for both classification and regression. We also saw how sklearn gives us an easy way to implement an SVM and inspect its support vectors.

About the author

Simran Kaur

Simran works as a technical writer. She holds an MS in Computer Science from a university in Silicon Valley, the well-known CS hub, and is also an editor of the website. She enjoys writing about any tech topic, including programming, algorithms, cloud, data science, and AI. Her hobbies include travelling, sketching, and gardening.