For my Erlang application, I have used both the SASL logger and log4erl, and both give poor performance when around 1000 events per second are sent to them. log4erl performed better, but after some time its mailbox starts filling up, which bloats the VM.
Would disk_log be a better option (i.e. will it keep up with a load of 1000 events per second)?
I tried disk_log in the shell. In the example, the message to be logged is first converted to a binary (list_to_binary) and then written to the file with the blog function.
Would doing it this way give me an efficient high-volume logger?
One more doubt: using disk_log:blog, the size of the text on disk was just 84 bytes, but with disk_log:log_terms it was 970 bytes. Why such a big difference?
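A sketch of the two modes for comparison (log names and file paths here are made up). One plausible cause of the size gap: log_terms/2 takes a *list* of terms, so if you pass the string itself, every character is logged as a separate term, each carrying its own per-item size header in the internal format, whereas blog/2 writes only the raw bytes:

```erlang
%% External format: blog/2 writes the raw bytes, nothing more.
{ok, _} = disk_log:open([{name, ext_log},
                         {file, "ext.LOG"},
                         {format, external}]),
Msg = "an 84-character log line ...",
ok = disk_log:blog(ext_log, list_to_binary(Msg)),

%% Internal format: each logged term gets its own header.
{ok, _} = disk_log:open([{name, int_log},
                         {file, "int.LOG"},
                         {format, internal}]),
%% Passing the string directly logs one term PER CHARACTER:
ok = disk_log:log_terms(int_log, Msg),
%% Wrapping it in a list logs the whole message as ONE term,
%% which is far more compact:
ok = disk_log:log_terms(int_log, [Msg]).
```

Note that internal-format logs must be read back with disk_log (e.g. chunk/2), while external-format files are plain byte streams you can open with any tool.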
Hack something of your own. A dedicated logger with in-memory storage and bulk dumps to disk is the fastest solution. If you cannot afford to lose any data (in case of a VM crash), do it on a remote node. I once used the remote 'solution', querying the target VM every 5 seconds, and I didn't notice any impact on the system.
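A minimal sketch of such a dedicated logger, under the assumption that callers cast binaries at it (module name, buffer size, and flush interval are all made up): messages accumulate in memory and are written to disk in one bulk write, either when the buffer fills or on a timer, so callers never block and the mailbox cannot grow unboundedly the way log4erl's did:

```erlang
-module(buf_logger).
-behaviour(gen_server).
-export([start_link/1, log/1]).
-export([init/1, handle_call/3, handle_cast/2, handle_info/2, terminate/2]).

-define(FLUSH_EVERY, 5000).   %% ms, like the 5-second interval above
-define(MAX_BUFFER, 1000).    %% flush early once this many messages queue up

start_link(File) ->
    gen_server:start_link({local, ?MODULE}, ?MODULE, File, []).

log(Msg) when is_binary(Msg) ->
    gen_server:cast(?MODULE, {log, Msg}).   %% async: never blocks the caller

init(File) ->
    {ok, Fd} = file:open(File, [append, raw, delayed_write]),
    erlang:send_after(?FLUSH_EVERY, self(), flush),
    {ok, {Fd, [], 0}}.

handle_cast({log, Msg}, {Fd, Buf, N}) when N + 1 >= ?MAX_BUFFER ->
    flush(Fd, [Msg | Buf]),
    {noreply, {Fd, [], 0}};
handle_cast({log, Msg}, {Fd, Buf, N}) ->
    {noreply, {Fd, [Msg | Buf], N + 1}}.

handle_info(flush, {Fd, Buf, _N}) ->
    flush(Fd, Buf),
    erlang:send_after(?FLUSH_EVERY, self(), flush),
    {noreply, {Fd, [], 0}}.

handle_call(_Req, _From, State) ->
    {reply, ok, State}.

terminate(_Reason, {Fd, Buf, _N}) ->
    flush(Fd, Buf),
    file:close(Fd).

flush(_Fd, []) -> ok;
flush(Fd, Buf) ->
    %% One write for the whole buffer, built as an iolist.
    ok = file:write(Fd, lists:reverse([[M, $\n] || M <- Buf])).
```

For the remote variant, start this process on a separate node and route casts to it over distribution; a crash of the producing VM then cannot take the buffered data with it.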
For high-volume logging I prefer battle-tested solutions like Scribe or maybe Flume. Check out erl_scribe.