Excuse me if this sounds like a basic question, but I'm new to web development.
We load balance across several servers. The apps are configured to log using log4j, and each writes to log files on its own server. That means researching an issue means gathering logs from all of these servers, which is tedious, requires going through Ops (who control the load balancing), and introduces delays.
Is this the norm for web app logging? Or are there easy ways to consolidate logging in one place? What are standard practices for making logs easily available to developers?
Log to a SQL database using the JDBC appender (or an alternative implementation) instead of files.
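With log4j 1.x this is mostly configuration. Here is a minimal log4j.properties sketch using the stock JDBCAppender; the JDBC URL, credentials, and LOGS table are assumptions to replace with your own (the stock appender is fairly basic, which is why alternative implementations exist):

    # Sketch only -- URL, credentials, and the LOGS table are assumptions.
    log4j.rootLogger=INFO, DB
    log4j.appender.DB=org.apache.log4j.jdbc.JDBCAppender
    log4j.appender.DB.URL=jdbc:mysql://loghost:3306/appdb
    log4j.appender.DB.driver=com.mysql.jdbc.Driver
    log4j.appender.DB.user=loguser
    log4j.appender.DB.password=logpass
    log4j.appender.DB.layout=org.apache.log4j.PatternLayout
    log4j.appender.DB.sql=INSERT INTO LOGS (host, level, logger, message) VALUES ('web01', '%p', '%c', '%m')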
There is a wide variety of logging you can do, some of which is available automatically.
Some types are:
- Logging to built-in machine logs (Event logs or similar). For these, get access granted so you can read them remotely and collate/examine as required.
- Logging by applications that typically write text files on the local machine (IIS, or others). Get access granted to the folders so you can analyze these yourselves.
- Custom logging. I recommend logging to a database (although the tables need to be pruned/summarized frequently), with a fallback to the machine logs if the database write fails; a sketch of such an appender follows the note below.
Note: This can have a performance impact, so be careful how much logging you do.
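As a concrete illustration of that last bullet, here is a minimal sketch of a log4j 1.x appender that tries the database first and falls back to a secondary appender (say, a FileAppender pointed at the machine's local logs) when the insert fails. The JDBC URL, credentials, and logs table are assumptions, and a real version would reuse a pooled connection instead of opening one per event:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.SQLException;
    import org.apache.log4j.Appender;
    import org.apache.log4j.AppenderSkeleton;
    import org.apache.log4j.spi.LoggingEvent;

    // Sketch only: writes each event to a database, falling back to a
    // secondary appender when the database is unreachable.
    public class FallbackDbAppender extends AppenderSkeleton {
        private final Appender fallback;

        public FallbackDbAppender(Appender fallback) {
            this.fallback = fallback;
        }

        @Override
        protected void append(LoggingEvent event) {
            // Assumed JDBC URL, credentials, and logs table; opening a
            // connection per event is slow -- pool it in real code.
            try (Connection db = DriverManager.getConnection(
                     "jdbc:mysql://loghost:3306/appdb", "loguser", "logpass");
                 PreparedStatement ps = db.prepareStatement(
                     "INSERT INTO logs (level, logger, message) VALUES (?, ?, ?)")) {
                ps.setString(1, event.getLevel().toString());
                ps.setString(2, event.getLoggerName());
                ps.setString(3, event.getRenderedMessage());
                ps.executeUpdate();
            } catch (SQLException e) {
                fallback.doAppend(event); // database down: log locally instead
            }
        }

        @Override
        public void close() { fallback.close(); }

        @Override
        public boolean requiresLayout() { return false; }
    }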
If operations is unwilling to give you direct access, see if a routine dump of these files can be made to a location you can access.
Log4J has both a JMS appender (so you can send logs to a message queue - not as dumb as it sounds, depending on how much and what kind of processing you need to do!) and a syslog appender (local or remote). Either of these will help you collect logs in a single location. The syslog appender might be your best bet just for collecting everything in one place, as Unix-ish systems have been doing syslog for a very long time and there is a lot of stable tooling there you can take advantage of.
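If you take the syslog route, pointing every app server at one central syslog host is again just configuration. A minimal log4j.properties sketch, where the host name and facility are assumptions:

    # Sketch only -- the central syslog host and facility are assumptions.
    log4j.rootLogger=INFO, SYSLOG
    log4j.appender.SYSLOG=org.apache.log4j.net.SyslogAppender
    log4j.appender.SYSLOG.syslogHost=central-loghost
    log4j.appender.SYSLOG.facility=LOCAL0
    log4j.appender.SYSLOG.layout=org.apache.log4j.PatternLayout
    log4j.appender.SYSLOG.layout.ConversionPattern=%d{ISO8601} %-5p [%c] %m%n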
Logging to a database can be difficult to scale depending on your traffic, unless you are smart about batching inserts. I would recommend keeping this stuff in flat files (merged, of course) somewhere, so you have the flexibility to import it into the database later in one go, or to experiment with tools like Hadoop (there are lots of examples based around parsing log files) - provided you have the volume to justify that complexity, of course.
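To make the batching point concrete: JDBC can accumulate many inserts and ship them to the database in a single round trip. A minimal sketch, assuming a simple logs(level, message) table:

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.SQLException;
    import java.util.List;

    public class BatchedLogWriter {
        // Flush a buffer of log events to the database in a single batch,
        // rather than paying one round trip per event.
        public static void flush(Connection db, List<String[]> buffered) throws SQLException {
            try (PreparedStatement ps = db.prepareStatement(
                    "INSERT INTO logs (level, message) VALUES (?, ?)")) {
                for (String[] event : buffered) {
                    ps.setString(1, event[0]);
                    ps.setString(2, event[1]);
                    ps.addBatch();
                }
                ps.executeBatch(); // one round trip for the whole buffer
            }
        }
    }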
We have a web farm with robust logging and here is how it is implemented.
Each web application generates logging event messages. Using MSMQ, these messages are sent to a private queue hosted on a separate machine. That machine runs an application that dequeues the messages and writes them to an SQLite database.
Using MSMQ decouples the web application from the logging server. If the server is offline the messages sit on the web server until the connection is re-established. MSMQ handles moving the messages to the destination server. This way the web site can continue to do its thing without interruption.
The logging server has its own web interface to query the logging database and can receive log messages from other applications as well.
We assign a classification to each message. For messages with a fatal error classification, the logging server generates an email automatically to the support team. Other non-fatal messages and trace messages are just recorded to the database for aggregate reporting.
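For a Java shop, the same decoupled design maps onto a JMS queue (which log4j's JMS appender can feed directly) in place of MSMQ. A rough sketch of the dequeuing side, assuming an ActiveMQ broker at tcp://loghost:61616, a queue named app.log, and the Xerial sqlite-jdbc driver on the classpath:

    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import javax.jms.Connection;
    import javax.jms.ConnectionFactory;
    import javax.jms.Message;
    import javax.jms.MessageConsumer;
    import javax.jms.Session;
    import javax.jms.TextMessage;
    import org.apache.activemq.ActiveMQConnectionFactory;

    public class LogQueueConsumer {
        public static void main(String[] args) throws Exception {
            // Assumed broker URL and queue name -- substitute your own.
            ConnectionFactory factory = new ActiveMQConnectionFactory("tcp://loghost:61616");
            Connection connection = factory.createConnection();
            connection.start();
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            MessageConsumer consumer = session.createConsumer(session.createQueue("app.log"));

            // Assumed SQLite file and schema (Xerial sqlite-jdbc driver).
            try (java.sql.Connection db = DriverManager.getConnection("jdbc:sqlite:logs.db")) {
                db.createStatement().execute(
                    "CREATE TABLE IF NOT EXISTS logs (logged_at TEXT, body TEXT)");
                PreparedStatement insert = db.prepareStatement(
                    "INSERT INTO logs (logged_at, body) VALUES (datetime('now'), ?)");

                while (true) {
                    Message msg = consumer.receive(); // blocks until a message arrives
                    if (msg instanceof TextMessage) {
                        insert.setString(1, ((TextMessage) msg).getText());
                        insert.executeUpdate(); // one row per log event
                    }
                }
            }
        }
    }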
One possible option for making logs more easily accessible is to write them to a drive shared via NFS. You could give each server its own directory, yet have all of those directories visible on the machine where you evaluate the logs.
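With log4j, this amounts to pointing a file appender at the shared mount and using log4j's property substitution to pick a per-server directory. A sketch where the mount point is an assumption and ${server.name} is a system property you would set per machine (e.g. java -Dserver.name=web01 ...):

    # Sketch only -- the NFS mount point and server.name property are assumptions.
    log4j.rootLogger=INFO, NFS
    log4j.appender.NFS=org.apache.log4j.RollingFileAppender
    log4j.appender.NFS.File=/mnt/shared-logs/${server.name}/app.log
    log4j.appender.NFS.MaxFileSize=10MB
    log4j.appender.NFS.MaxBackupIndex=5
    log4j.appender.NFS.layout=org.apache.log4j.PatternLayout
    log4j.appender.NFS.layout.ConversionPattern=%d{ISO8601} %-5p [%c] %m%n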