1 million sentences to save in DB - removing non-relevant English words

I am trying to train a Naive Bayes classifier with positive/negative words extracted based on sentiment. For example:

I love this movie :))

I hate when it rains :(

The idea is that I extract positive or negative sentences based on the emoticons used, in order to train a classifier and persist it in a database.
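For concreteness, a minimal sketch of this kind of emoticon-based labeling (the emoticon lists and the whitespace tokenization are assumptions, not part of the original question):

```python
# Emoticon lists are illustrative guesses; extend them for real data.
POSITIVE = {":)", ":))", ":-)", ":D"}
NEGATIVE = {":(", ":((", ":-("}

def label(sentence):
    """Return 'pos'/'neg' if the sentence contains a known emoticon, else None."""
    tokens = sentence.split()
    if any(t in POSITIVE for t in tokens):
        return "pos"
    if any(t in NEGATIVE for t in tokens):
        return "neg"
    return None

print(label("I love this movie :))"))    # -> pos
print(label("I hate when it rains :("))  # -> neg
```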

The problem is that I have more than one million such sentences, so if I train it word by word the database will go for a toss. I want to remove all non-relevant words, for example 'I', 'this', 'when' and 'it', so that the number of database queries I have to make is smaller.

Please help me resolve this issue, or suggest better ways of doing it.

Thank you


There are two common approaches:

  1. Compile a stop list.
  2. POS tag the sentences and throw out those parts of speech that you think are not interesting. (A sketch of both approaches follows this list.)
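A minimal sketch of both approaches using NLTK (the `KEEP_POS` tag set is an assumption you would tune, not a recommendation from the answer):

```python
import nltk
from nltk.corpus import stopwords

# One-time downloads (uncomment on first run):
# nltk.download("punkt"); nltk.download("stopwords")
# nltk.download("averaged_perceptron_tagger")

STOP = set(stopwords.words("english"))               # approach 1: stop list
KEEP_POS = {"JJ", "JJR", "JJS", "RB", "RBR", "RBS",  # approach 2: POS filter
            "VB", "VBD", "VBG", "VBN", "VBP", "VBZ", "NN", "NNS"}

def filter_tokens(sentence):
    tagged = nltk.pos_tag(nltk.word_tokenize(sentence))
    return [w.lower() for w, pos in tagged
            if w.lower() not in STOP and pos in KEEP_POS]

print(filter_tokens("I love this movie"))  # -> ['love', 'movie']
```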

In both cases, determining which words/POS tags are relevant may be done using a measure such as pointwise mutual information (PMI).
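For illustration, PMI between a word and a class can be estimated from labeled sentences as log2(P(word, class) / (P(word) * P(class))); the presence-based counting below is one common choice, not necessarily what the answer has in mind:

```python
import math
from collections import Counter

def pmi_scores(labeled_sentences):
    """labeled_sentences: iterable of (tokens, label) pairs."""
    word_counts, class_counts, joint_counts = Counter(), Counter(), Counter()
    n = 0
    for tokens, label in labeled_sentences:
        n += 1
        class_counts[label] += 1
        for w in set(tokens):  # count presence per sentence, not frequency
            word_counts[w] += 1
            joint_counts[(w, label)] += 1
    return {(w, c): math.log2((joint / n) /
                              ((word_counts[w] / n) * (class_counts[c] / n)))
            for (w, c), joint in joint_counts.items()}

data = [(["love", "movie"], "pos"),
        (["hate", "rain"], "neg"),
        (["love", "rain"], "pos")]
print(pmi_scores(data)[("love", "pos")])  # ~0.58 > 0: 'love' leans positive
```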

Mind you: standard stop lists from information retrieval may or may not work in sentiment analysis. I recently read a paper (no reference, sorry) claiming that ! and ?, commonly removed in search engines, are valuable clues for sentiment analysis. (So may 'I', especially when you also have a neutral category.)

Edit: you can also safely throw away everything that occurs only once in the training set (so-called hapax legomena). Words that occur once have little information value for your classifier, but may take up a lot of space.
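A minimal sketch of that filtering step (the `drop_hapaxes` helper is hypothetical):

```python
from collections import Counter

def drop_hapaxes(tokenized_sentences):
    """Remove words that occur exactly once across the whole training set."""
    counts = Counter(w for sent in tokenized_sentences for w in sent)
    return [[w for w in sent if counts[w] > 1] for sent in tokenized_sentences]

sents = [["love", "movie"], ["hate", "rain"], ["love", "rain"]]
print(drop_hapaxes(sents))  # -> [['love'], ['rain'], ['love', 'rain']]
```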


You might want to check this out: http://books.google.com/books?id=CE1QzecoVf4C&lpg=PA390&ots=OHuYwLRhag&dq=sentiment%20%20mining%20for%20fortune%20500&pg=PA379#v=onepage&q=sentiment%20%20mining%20for%20fortune%20500&f=false


To reduce the amount of data retrieved from your database, you may create a dictionary in your database -- a table that maps words* to numbers** -- and then retrieve only a number vector for training, and a complete sentence only for manually marking the sentiment. (A sketch of such a dictionary follows the footnotes below.)

* No scientific publication comes to mind, but maybe it is enough to use only stems or lemmas instead of words. That would reduce the size of the dictionary.

** If this operation kills your database, you can create the dictionary in a local application -- one that uses a text indexing engine (e.g., Apache Lucene) -- and store only the result in your database.
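A minimal sketch of such a dictionary table, using SQLite for illustration (the table and function names are assumptions):

```python
import sqlite3

conn = sqlite3.connect("sentiment.db")  # file name is an assumption
conn.execute("CREATE TABLE IF NOT EXISTS dictionary "
             "(id INTEGER PRIMARY KEY, word TEXT UNIQUE)")

def word_id(word):
    """Return the numeric id for a word, inserting it on first sight."""
    conn.execute("INSERT OR IGNORE INTO dictionary (word) VALUES (?)", (word,))
    return conn.execute("SELECT id FROM dictionary WHERE word = ?",
                        (word,)).fetchone()[0]

def encode(tokens):
    """Map a tokenized sentence to the number vector described above."""
    return [word_id(t) for t in tokens]

print(encode(["love", "movie"]))  # -> e.g. [1, 2]
conn.commit()
```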
