I'm designing a MySQL database, and I'd like some input on an efficient way to store blog/article data for searching.
Right now, I've made a separate column that stores the content to be searched: no duplicate words, no words shorter than four letters, and no words that are too common. So, essentially, it's a list of keywords from the original article. A list of tags and the title field would also be searched.
I'm not quite sure how MySQL indexes FULLTEXT columns, so would storing the data like that be ineffective or redundant somehow? A lot of the articles are on the same topic, so would the relevance score be hurt by so many of the rows having similar keywords?
Also, for this project, solutions like Sphinx, Lucene, or Google Custom Search can't be used -- only PHP and MySQL.
Thanks!
EDIT -- Let me clarify:
Basically, I'm asking which approach would give FULLTEXT the fastest, most relevant results: searching for the term across the full article text, or matching it against the small keyword list.
I think a separate keywords table would be over the top for what I need, so should I forget the keywords column and search on the article, or continue to select keywords for each row?
You should build the word list (according to the rules you've specified) in a separate table and then map it to each article in a join table, along with the number of occurrences:
words: id | name
articles: id | title | content
articles_words: id | article_id | word_id | occurrences
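In MySQL, that schema might look something like this (a sketch; the column types, sizes, and index names are my assumptions, not part of the original answer):

```sql
-- Dictionary of indexed words, one row per distinct word
CREATE TABLE words (
    id   INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
    name VARCHAR(64)  NOT NULL,
    UNIQUE KEY uq_name (name)
);

CREATE TABLE articles (
    id      INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
    title   VARCHAR(255) NOT NULL,
    content TEXT         NOT NULL
);

-- Join table: one row per (article, word) pair, with a count
CREATE TABLE articles_words (
    id          INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
    article_id  INT UNSIGNED NOT NULL,
    word_id     INT UNSIGNED NOT NULL,
    occurrences INT UNSIGNED NOT NULL DEFAULT 1,
    UNIQUE KEY uq_article_word (article_id, word_id),
    KEY idx_word (word_id)
);
```

The unique key on `(article_id, word_id)` prevents duplicate pairs, and the secondary index on `word_id` is what makes lookups by search term cheap.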
Now you can scan the join table and rank the articles by how often each word occurs, perhaps also weighting words by the order in which they were typed in the search query string.
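A ranking query over that join table could be sketched like this (illustrative only; the searched words and the tie-breaking order are assumptions):

```sql
-- Rank articles matching any of the searched words:
-- first by how many distinct query words matched,
-- then by total occurrences across those words.
SELECT a.id, a.title,
       COUNT(DISTINCT aw.word_id) AS matched_words,
       SUM(aw.occurrences)        AS score
FROM articles a
JOIN articles_words aw ON aw.article_id = a.id
JOIN words w           ON w.id = aw.word_id
WHERE w.name IN ('mysql', 'fulltext', 'index')
GROUP BY a.id, a.title
ORDER BY matched_words DESC, score DESC
LIMIT 20;
```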
Of course, this is a fairly academic solution. I'm not sure what your project requires, but MySQL's FULLTEXT indexing is very powerful, and in most practical situations you're better off using it.
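For comparison, a minimal FULLTEXT setup looks like this. Note that MySQL's built-in FULLTEXT search already skips words shorter than the minimum word length (4 by default) and common stopwords, which mirrors the keyword rules you described (in older MySQL versions, FULLTEXT indexes also require the MyISAM storage engine):

```sql
ALTER TABLE articles ADD FULLTEXT idx_ft (title, content);

SELECT id, title,
       MATCH(title, content) AGAINST ('mysql fulltext index') AS relevance
FROM articles
WHERE MATCH(title, content) AGAINST ('mysql fulltext index')
ORDER BY relevance DESC
LIMIT 20;
```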
HTH.