So here's my dilemma...
I'm running a realtime search index with Solr, indexing about 6M documents per day. The documents expire after about 7 days. So every day, I add 6M documents, and delete 6M documents. Unfortunately, I need to run "optimize" every so often or else I'll run out of disk space.
During "optimize", Solr continues to serve requests for reads, but write requests are blocked. I have all my writes behind a queue, so operationally, everything is fine. However, since my index is so large, "optimize" takes about an hour, and for this hour, no new updates are available for reads. So my index is realtime except for the hour a day that I optimize. During this time, it looks like the index is behind by up to an hour. This is not optimal.
My current solution is this: write all data to two Solr indexes, both behind queues, and alternate "optimize" between the two indexes every 12 hours. During "optimize" of index 1, direct all read traffic to index 2, and vice versa. This time-based routing seems pretty brittle and sloppy, though.
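For illustration, here's a minimal sketch of that dual-write, time-based routing idea using SolrJ. The core names, URLs, and the fixed 12-hour schedule are assumptions for the sketch, not details from the actual setup, and commits/batching/error handling are omitted.

```java
import java.io.IOException;
import java.time.LocalTime;
import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.SolrServerException;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.response.QueryResponse;
import org.apache.solr.common.SolrInputDocument;

public class DualIndexRouter {
    // Hypothetical core URLs -- adjust to the real deployment.
    private final SolrClient index1 = new HttpSolrClient.Builder("http://localhost:8983/solr/index1").build();
    private final SolrClient index2 = new HttpSolrClient.Builder("http://localhost:8983/solr/index2").build();

    /** Every document is written to both indexes (both sit behind queues in the real setup). */
    public void write(SolrInputDocument doc) throws SolrServerException, IOException {
        index1.add(doc);
        index2.add(doc);
    }

    /**
     * Simplified time-based routing: assume index1 is optimized shortly after 00:00
     * and index2 shortly after 12:00, so reads avoid whichever half-day "owns" the optimize.
     */
    public QueryResponse search(SolrQuery query) throws SolrServerException, IOException {
        int hour = LocalTime.now().getHour();
        SolrClient readSide = (hour < 12) ? index2 : index1;
        return readSide.query(query);
    }
}
```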
Is there a better way?
As per the comments here and the FAQ here, optimizing should not be necessary. Not optimizing may increase the index size initially, but it should not keep growing indefinitely. I'd suggest disabling optimize for a few days and monitoring the index size.
Another time-based option is to maintain a separate index for each day and write each day's documents to all of the currently active indexes. You don't need to do deletes in this case; instead, you rotate the indexes in a first-in-first-out (FIFO) manner.
Index 1 = Day 1 + Day 2 + Day 3 + Day 4 + Day 5 + Day 6 + (no longer used)
Index 2 = empty + Day 2 + Day 3 + Day 4 + Day 5 + Day 6 + Day 7 + (no longer used)
Index 3 = empty + empty + Day 3 + Day 4 + Day 5 + Day 6 + Day 7 + Day 8
...
You get the idea. On Day 7, Index 1 would stop being used entirely, and you would switch to using Index 2 for reads (which by then covers Days 2 through 7).
Obviously this is a simplistic example, and you would want to rotate the Index naming (Index 2 becomes Index 1, and so forth), but hopefully this provides another approach that could lead to an implementation.
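Here is a rough sketch of that rotation logic in SolrJ, under stated assumptions: cores are named daily_<n> (hypothetical), a new core is born each day, and the retention window is 7 days. Creating/dropping cores, reusing clients, and commits are left out.

```java
import java.io.IOException;
import java.time.LocalDate;
import java.time.temporal.ChronoUnit;
import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.SolrServerException;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.response.QueryResponse;
import org.apache.solr.common.SolrInputDocument;

public class RotatingIndexes {
    private static final int RETENTION_DAYS = 7;                      // documents live for ~7 days
    private static final LocalDate EPOCH = LocalDate.of(2024, 1, 1);  // hypothetical "Day 1"

    // One core per "birth day"; an index receives writes for RETENTION_DAYS days, then is dropped whole.
    private SolrClient clientFor(long indexNumber) {
        return new HttpSolrClient.Builder("http://localhost:8983/solr/daily_" + indexNumber).build();
    }

    private long today() {
        return ChronoUnit.DAYS.between(EPOCH, LocalDate.now());
    }

    /** Today's documents go to every index that is still inside its retention window. */
    public void write(SolrInputDocument doc) throws SolrServerException, IOException {
        long today = today();
        for (long i = Math.max(0, today - RETENTION_DAYS + 1); i <= today; i++) {
            clientFor(i).add(doc);   // in practice you would cache and reuse these clients
        }
    }

    /** Reads hit the oldest active index, which is the one covering the full retention window. */
    public QueryResponse search(SolrQuery query) throws SolrServerException, IOException {
        long readIndex = Math.max(0, today() - RETENTION_DAYS + 1);
        return clientFor(readIndex).query(query);
    }
}
```

The design point is that an expired index is simply dropped as a whole, so there are no per-document deletes and no optimize needed to reclaim space.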
Have you tried a different mergeFactor, or a different merge policy? If you are writing constantly, tuning the merge behavior may be a better approach than optimizing.
Use replication.
Write to your master and replicate to your slave. Optimize runs on the master, while all queries run against the slave.
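A minimal sketch of that read/write split with SolrJ, assuming hypothetical master/slave hostnames and that the slave is already configured to pull the index from the master via Solr's replication handler:

```java
import java.io.IOException;
import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.SolrServerException;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.response.QueryResponse;
import org.apache.solr.common.SolrInputDocument;

public class MasterSlaveSplit {
    // Hypothetical hosts; replication from master to slave is configured in solrconfig.xml.
    private final SolrClient master = new HttpSolrClient.Builder("http://master-host:8983/solr/core1").build();
    private final SolrClient slave  = new HttpSolrClient.Builder("http://slave-host:8983/solr/core1").build();

    /** All writes (and the periodic optimize) happen on the master. */
    public void write(SolrInputDocument doc) throws SolrServerException, IOException {
        master.add(doc);
    }

    /** All queries go to the slave, so an optimize on the master never blocks or delays reads. */
    public QueryResponse search(SolrQuery query) throws SolrServerException, IOException {
        return slave.query(query);
    }
}
```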