Simplest feature selection algorithm

https://www.devze.com 2023-02-15 03:19 (source: web)
I am trying to create my own and simple feature selection algorithm. The data set that I am going to work with is here (very famous data set). Can someone give me a pointer on how to do so?

I am planning to write a feature rank algorithm for a text classification. This is for a sentiment analysis of movie reviews, classifying them as either positive or negative.

So my question is how to write a simple feature selection for a text data set.


Feature selection methods are a big topic. You can start with the following:

  1. Chi square

  2. Mutual information

  3. Term frequency

etc. If you have time, read the paper Comparative study on feature selection in text categorization; it will help you a lot.

The actual implementation depends on how you pre-process the data. Basically it comes down to keeping counts, whether in a hash table or a database.
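As a sketch of the counting idea, here is one of the listed measures (chi-square) computed directly from a 2x2 term/class contingency table. The counts in the example are made up for illustration:

```python
# Chi-square score for one term in binary text classification, computed
# from document counts. The four cells of the contingency table:
#   a = class docs containing the term,  b = class docs without it,
#   c = other docs containing the term,  d = other docs without it.

def chi_square(a, b, c, d):
    """Chi-square statistic for a 2x2 term/class contingency table."""
    n = a + b + c + d
    num = n * (a * d - b * c) ** 2
    den = (a + b) * (c + d) * (a + c) * (b + d)
    return num / den if den else 0.0

# Hypothetical counts: a term that appears mostly in positive reviews
# scores high; a term spread evenly across classes scores zero.
print(chi_square(40, 10, 5, 45))   # informative term
print(chi_square(25, 25, 25, 25))  # uninformative term -> 0.0
```

Rank all terms by this score and keep the top-k as your feature set.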


Random features work well when you are building ensembles; this is known as feature bagging.


Here's one option: Use pointwise mutual information. Your features will be tokens, and the information should be measured against the sentiment label. Be careful with frequent words (stop words), because in this type of task they may actually be useful.
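A minimal sketch of that approach, scoring token presence against the sentiment label with pointwise mutual information. The tiny document set and function names are illustrative only:

```python
import math
from collections import Counter

def pmi_scores(docs):
    """PMI between token presence and label.
    docs: list of (tokens, label) pairs."""
    n = len(docs)
    token_count, label_count, joint_count = Counter(), Counter(), Counter()
    for tokens, label in docs:
        label_count[label] += 1
        for tok in set(tokens):          # presence, not raw frequency
            token_count[tok] += 1
            joint_count[(tok, label)] += 1
    scores = {}
    for (tok, label), joint in joint_count.items():
        p_joint = joint / n
        p_tok = token_count[tok] / n
        p_label = label_count[label] / n
        scores[(tok, label)] = math.log2(p_joint / (p_tok * p_label))
    return scores

# Toy corpus: "great" co-occurs with "pos", "boring" with "neg",
# while "movie" appears in both classes (PMI ~ 0).
docs = [
    (["great", "movie"], "pos"),
    (["great", "acting"], "pos"),
    (["boring", "movie"], "neg"),
    (["boring", "plot"], "neg"),
]
scores = pmi_scores(docs)
```

Tokens with high PMI toward one label make good features; note that PMI overweights rare tokens, so a minimum-count cutoff helps in practice.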


I currently use this approach:

Calculate the mean and variance of the data for each class. A good feature candidate has small within-class variance, and its mean should differ from the means of the other classes.

Currently, with fewer than 50 features, I select them manually. To automate the process, one could compute the variance of the class means across all classes and give higher priority to features where that variance is larger, then prefer those with smaller variance within each class.

Of course this does not remove redundant features.
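The heuristic above can be sketched roughly as follows: rank features by the variance of the class means (between-class spread), penalized by the average within-class variance. The data and the exact scoring ratio are assumptions for illustration:

```python
import numpy as np

def rank_features(X, y):
    """Rank features: large between-class mean variance,
    small within-class variance first.
    X: (n_samples, n_features) array; y: class labels."""
    classes = np.unique(y)
    class_means = np.array([X[y == c].mean(axis=0) for c in classes])
    within_var = np.array([X[y == c].var(axis=0) for c in classes]).mean(axis=0)
    between_var = class_means.var(axis=0)
    score = between_var / (within_var + 1e-9)  # avoid division by zero
    return np.argsort(-score)                  # best features first

# Toy data: feature 0 separates the two classes, feature 1 is noise
X = np.array([[0.0, 5.0], [0.1, 1.0], [1.0, 5.0], [1.1, 1.0]])
y = np.array([0, 0, 1, 1])
order = rank_features(X, y)
```

As the answer notes, this scores features independently, so it keeps redundant (highly correlated) features.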


Feature selection methods are divided into four groups:

  • Filter: uses statistical measures to score features
  • Wrapper: evaluates feature subsets with a learning algorithm
  • Embedded: combines filter and wrapper ideas within training
  • Hybrid: chains multiple steps using filter or wrapper methods

The simplest choice is the filter approach, which is very fast compared to the other approaches.

Here are some of them:

  1. Chi-square
  2. Cross Entropy
  3. Fuzzy Entropy Measure
  4. Gini index
  5. Information Gain
  6. Mutual Information
  7. Relative Discrimination Criteria
  8. Term Strength
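As an example of one measure from the list, here is information gain for a binary term-presence feature, computed from a 2x2 count table. The counts in the usage line are hypothetical:

```python
import math

def entropy(probs):
    """Shannon entropy in bits of a probability distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def information_gain(pos_with, pos_without, neg_with, neg_without):
    """IG of the class label given term presence/absence."""
    n = pos_with + pos_without + neg_with + neg_without
    n_with = pos_with + neg_with
    n_without = pos_without + neg_without
    h_class = entropy([(pos_with + pos_without) / n,
                       (neg_with + neg_without) / n])
    h_with = entropy([pos_with / n_with, neg_with / n_with]) if n_with else 0.0
    h_without = (entropy([pos_without / n_without, neg_without / n_without])
                 if n_without else 0.0)
    # Entropy of the label minus expected entropy after observing the term
    return h_class - (n_with / n) * h_with - (n_without / n) * h_without

# Hypothetical counts for one term; higher IG = more useful feature
ig = information_gain(40, 10, 5, 45)
```

A perfectly class-separating term yields IG of 1 bit for a balanced binary task; a term distributed evenly across classes yields 0.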

I have also used a hybrid method for feature selection in text categorization; check my article here.
