
Liblinear bias very close to zero

I am trying out Liblinear for linear SVM classification on some 2D points (I am using a simple Python GUI to add points for two classes and then draw the line that separates them), but even though I am using the bias option (-B 1) for training, I get a bias very close to zero (the separating line almost passes through the origin).

I also tried simply training the 2-point set:

-1 1:10 2:30
+1 1:10 2:80

but I still get a very small bias (a line passing through the origin, instead of the horizontal line in the XY plane that I would expect). Here is my output weight vector w (the third component is the weight of the bias feature added by -B 1):

0.2003362041634111
-0.03465897160331861
0.0200336204163411
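
For reference, here is a minimal sketch that reproduces this two-point setup from Python (an assumption on the tooling: it uses scikit-learn's LinearSVC, which wraps liblinear, rather than the liblinear command-line tool; its intercept_scaling parameter plays the role of liblinear's -B option):

import numpy as np
from sklearn.svm import LinearSVC

# The two training points from above, as dense rows.
X = np.array([[10.0, 30.0],   # labelled -1
              [10.0, 80.0]])  # labelled +1
y = np.array([-1, 1])

# intercept_scaling=1.0 mirrors liblinear's -B 1: a synthetic
# constant feature of value 1 is appended to every instance and
# its weight (the bias) is regularized together with w.
clf = LinearSVC(C=1.0, intercept_scaling=1.0)
clf.fit(X, y)

print("w =", clf.coef_.ravel())   # weights for the two input features
print("b =", clf.intercept_[0])   # the bias; very small in this setup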

What am I doing wrong?


I'm not sure you are doing anything wrong.

From the liblinear FAQ:

Q: Does LIBLINEAR give the same result as LIBSVM with a linear kernel?

They should be very similar. However, sometimes the difference may not be small. Note that LIBLINEAR does not use the bias term b by default. If you observe very different results, try to set -B 1 for LIBLINEAR. This will add the bias term to the loss function as well as the regularization term (w^Tw + b^2). Then, results should be closer.
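
In other words, with -B 1 liblinear appends a constant feature of value 1 to every instance and treats the bias as one more regularized weight. Up to the particular loss used by the chosen solver, it minimizes an objective of the form

(1/2) (w^Tw + b^2) + C * sum_i loss(y_i (w^Tx_i + b))

where loss is the hinge loss or its square. Because b^2 sits inside the regularization term, solutions with a small bias are explicitly preferred whenever the data allow it.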

This says that liblinear tries to make the bias term as small as possible: if it can set it to (nearly) zero and still get good training-set accuracy, it will. Note also that in your two-point example feature 1 has the same value (10) for both points, so it acts as a constant feature the solver can use as a cheaper stand-in for the bias; that would explain the relatively large first component of your w.

There isn't a particularly good reason to expect that regularising the bias will produce a better classifier, so in many other learning systems the bias does not enter the regularisation term. However, in real-world problems with very high dimensionality it is also very likely that the data are separable without a bias term, so regularising it does no harm and is easier to implement.
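
If you want the solver to be freer with the bias, you can weaken its penalty by increasing B: with -B <value>, the appended constant feature takes that value, the effective bias is that feature's weight times B, but only the weight itself is regularized. Here is a sketch of the effect (again via scikit-learn's LinearSVC, whose intercept_scaling corresponds to -B; the exact numbers you get are not something I have verified here):

import numpy as np
from sklearn.svm import LinearSVC

X = np.array([[10.0, 30.0], [10.0, 80.0]])
y = np.array([-1, 1])

# Larger intercept_scaling (liblinear's -B) means the constant
# feature is larger, so a given bias costs less in the
# regularization term and the solver uses it more willingly.
for B in (1.0, 10.0, 100.0):
    clf = LinearSVC(C=1.0, intercept_scaling=B).fit(X, y)
    print("B=%g: w=%s, b=%.4f" % (B, clf.coef_.ravel(), clf.intercept_[0]))

With a large B the boundary should approach the horizontal line halfway between the two points (y = 55), which is what you were expecting.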
