How does MongoDB index huge fields?

https://www.devze.com 2023-02-16 07:18 (source: web)
var post = {
  type: "Article",
  attributes: [
    {"title": "My Essay On Life"},
    {"body": "Life can be .... tons and tons of text ... "}
  ]
};

Multikeys let you index arrays of key/value pairs, so you never know what a user might add as a custom pair. In this case, a humongous field (body) would end up indexed. (Nothing stops someone from doing something similar outside the multikey pattern, either.) Does Mongo attempt to index fields of any size, or is there some practical limit?

I ran a few tests and couldn't seem to use an index on monster-size fields. Honestly, I don't want huge fields of unlimited size indexed anyway. MySQL supports prefix indexes, where you set how many characters are indexable (e.g. 100), so that if the text surpasses that limit, only the first 100 characters are indexed.
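MongoDB has no per-index prefix-length option like MySQL's, but the same effect can be emulated by storing a truncated copy of the field and indexing that instead. A minimal sketch, assuming a `bodyPrefix` field name and a 100-character cutoff (both my own choices, not anything MongoDB prescribes):

```javascript
// Emulate MySQL-style prefix indexing: keep the full text in `body`,
// store only the first N characters in a separate, indexable field.
const PREFIX_LEN = 100; // same idea as MySQL's INDEX (body(100))

function withBodyPrefix(post) {
  return { ...post, bodyPrefix: post.body.slice(0, PREFIX_LEN) };
}

const post = withBodyPrefix({
  type: "Article",
  title: "My Essay On Life",
  body: "Life can be ".concat("tons and tons of text ".repeat(200)),
});

console.log(post.bodyPrefix.length); // 100
// You would then index bodyPrefix rather than body, e.g.:
// db.posts.createIndex({ bodyPrefix: 1 })
```

Queries that match on the prefix field stay within whatever index-entry size limit the server enforces, while the full text remains available in the document itself.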

What's the Mongo way? I couldn't find this on the Mongo website.


Index entries are limited to about 800 bytes. If the field(s) are larger than that, an error is generated on the server and the document is not added to the index; this can cause problems.
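One way to cope with this is to check a field's encoded size before counting on it being indexable. A minimal sketch; the ~800-byte ceiling is taken from the answer above (newer MongoDB releases document the limit differently, so treat the constant as an assumption to adjust for your version):

```javascript
// Rough pre-insert check: will this string value fit in an index entry?
// MAX_INDEX_ENTRY is an assumption based on the ~800-byte figure above,
// not an official driver constant.
const MAX_INDEX_ENTRY = 800;

function fitsInIndex(value) {
  // The UTF-8 byte length of the string is a close-enough proxy for
  // the size of the BSON-encoded index key.
  return Buffer.byteLength(value, "utf8") <= MAX_INDEX_ENTRY;
}

console.log(fitsInIndex("My Essay On Life")); // short title: true
console.log(fitsInIndex("x".repeat(2000)));   // huge body: false
```

Documents that fail the check could have the oversized field left out of the indexed copy, rather than silently failing to appear in index-backed queries.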


I take it you may want to query posts by the body field. I suggest you "analyze" the body with a Lucene Analyzer and index a body_keyword field whose type is an array (e.g. ["Life", "text"]).
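A rough stand-in for what such an analyzer produces might look like the following; the stopword list is made up for illustration, and a real Lucene Analyzer also does stemming and much more:

```javascript
// Very rough analyzer stand-in: lowercase, split on non-letters,
// drop stopwords and duplicates, keep the rest as index keywords.
const STOPWORDS = new Set(["a", "an", "and", "be", "can", "of", "the", "tons"]);

function extractKeywords(text) {
  const words = text.toLowerCase().split(/[^a-z]+/).filter(Boolean);
  return [...new Set(words)].filter((w) => !STOPWORDS.has(w));
}

console.log(extractKeywords("Life can be .... tons and tons of text ..."));
// → [ 'life', 'text' ]
// Stored as e.g. { body_keyword: ['life', 'text'] } and indexed with
// db.posts.createIndex({ body_keyword: 1 }), this becomes a multikey
// index whose entries are small words, not the whole body.
```

Each keyword becomes its own small index entry, so no single entry comes anywhere near the size limit, and the body itself stays unindexed.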
