
What's the most efficient file logging in a server written in C?

Posted on https://www.devze.com, 2023-02-03 10:14 (source: web)

I wonder what the most efficient file logging strategy would be in a server written in C?

I can see the following options:

  • fopen() in append mode, fwrite() the data for a time frame of, say, one hour, then fclose()?

  • Caching the data and then occasionally open() in append mode, write() and close()?
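A minimal sketch of the first option: keep an append-mode stream open, write each record as it arrives, and reopen on a time boundary. The file path, rotation interval, and function name are illustrative, not from the question.

```c
#include <stdio.h>
#include <string.h>
#include <time.h>

static FILE  *logf = NULL;
static time_t opened_at = 0;
#define ROTATE_SECS 3600        /* reopen roughly once an hour */

void log_line(const char *path, const char *msg) {
    time_t now = time(NULL);
    if (logf && now - opened_at >= ROTATE_SECS) {
        fclose(logf);           /* flushes stdio buffers to the OS */
        logf = NULL;
    }
    if (!logf) {
        logf = fopen(path, "a");
        opened_at = now;
    }
    if (logf) {
        fwrite(msg, 1, strlen(msg), logf);
        fputc('\n', logf);
    }
}
```

Note that between fclose() calls the data still sits in stdio's buffer, so a crash can lose the tail of the log unless you also fflush() periodically.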


Using a dedicated thread is usually a good solution; we adopted it with interesting results.
The main thread that needs to log prepares the log string and passes it to a second thread. To feed the second thread we use a lockless queue plus a circular memory area, in order to minimize the number of alloc/free calls and the wait time.
The second thread waits on the lockless queue. When it finds there is work to do, it consumes a slot of the queue and logs the data.
Using a separate thread you can save a great amount of time.

After we decided to use a second thread we had to face another problem. Many instances of the same program (a full-text search engine) must all log to the same file, so the resource has to be shared regularly among every instance of the server.
We could have used a semaphore or another synchronization method, but we found a different solution: the second thread sends a UDP packet to a local log server that listens on a known port. This server reads each message and logs it to the file (it is the only process that owns the file while it is being written). The UDP socket itself guarantees serialization of the logs.

I've been using this solution for more than 10 years and have never lost a single line of my log files. Using the second thread I also saved a great percentage of time on every operation (we used to log a lot of information for every single command the server receives).

HTH


Why don't you directly log your data when the events occur?

  • If your server crashes, you want to retrieve those data at the time it crashed. If you only flush your buffered logs once an hour, you'll miss interesting logs.
  • File streams are usually buffered by the OS.

If you believe it makes your server slow, due to hard drive writes, you might consider logging from a separate thread. But I wonder if that is really the problem. Premature optimization?


Unless you've benchmarked and found that it's a bottleneck, use fopen and fprintf. There's no reason to put your own complex buffering layer on top unless stdio is too slow for you (and if it is too slow, you might reconsider the OS/C library your server is running on).


The slowest part of writing a system log is the output operation to the physical disks.

Buffering and checksumming the log records are necessary to ensure that you don't lose any log data and that the log data can't be tampered with after the fact, respectively.
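The answer doesn't say how records should be checksummed, so the following is only an illustration of the idea: frame each record as "<checksum> <text>" so later tampering is detectable. A real system would use CRC32 or a keyed hash; a toy Fletcher-16 keeps the sketch short.

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Toy checksum for illustration only. */
uint16_t fletcher16(const char *s) {
    uint16_t a = 0, b = 0;
    for (; *s; s++) {
        a = (uint16_t)((a + (uint8_t)*s) % 255);
        b = (uint16_t)((b + a) % 255);
    }
    return (uint16_t)((b << 8) | a);
}

/* Write one record as "<4-hex-digit checksum> <text>". */
void write_record(FILE *f, const char *msg) {
    fprintf(f, "%04x %s\n", fletcher16(msg), msg);
}

/* Returns 1 if the record's checksum matches its text, 0 otherwise. */
int verify_record(const char *line) {
    unsigned sum;
    char text[256];
    if (sscanf(line, "%4x %255[^\n]", &sum, text) != 2) return 0;
    return fletcher16(text) == (uint16_t)sum;
}
```

A verifier can then re-read the file line by line and flag any record whose stored checksum no longer matches its text.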
