I have to implement Celery in a pre-existing system. The previous version of the system already used Python standard logging.
My code is similar to the code below. process_one and process_two are non-Celery functions that log everywhere. We use that logging to track data loss if something goes wrong.
from celery.task import task

@task
def add(x, y):
    # process_one and process_two are plain functions that log via the standard logging module
    process_one(x, y)
    process_two(x, y)
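For context, the existing helpers just log through the standard library. A hypothetical sketch of process_one (the real functions are more involved):

import logging

logger = logging.getLogger(__name__)

def process_one(x, y):
    # plain standard-library logging used to track data loss; no Celery involved
    logger.info("process_one received x=%s y=%s", x, y)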
How can I implement Celery and keep using the Python standard logging instead of Celery's logging, so our old logging setup is not lost?
I have tried replacing the plain import logging setup with logger = add.get_logger() and passing that logger to all the functions, but I don't think that is good practice. I need another solution.
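What I tried looks roughly like this (a sketch; get_logger() is the old Task API, and the logger= parameter on process_one/process_two is something I would have to add to every helper):

from celery.task import task

@task
def add(x, y):
    logger = add.get_logger()            # Celery's per-task logger
    process_one(x, y, logger=logger)     # every helper grows a logger argument
    process_two(x, y, logger=logger)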
Update: to get the application logging included in the Celery logging, you can run:
$ manage.py celeryd -v 2 -B -s celery -E -l debug --traceback \
--settings=settings --logfile=/(path to your log folder)/celeryd.log
With -l (the log level) set to debug, our application/Python logging is automatically included in the Celery logging: there is no need to call logger = add.get_logger().
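In other words, the tasks can keep using a module-level standard-library logger. A minimal sketch, assuming the celeryd invocation above, so the output lands in celeryd.log:

import logging

from celery.task import task

logger = logging.getLogger(__name__)

@task
def add(x, y):
    # goes through the standard logging module; with -l debug it shows up
    # in celeryd.log alongside Celery's own messages
    logger.info("adding %s and %s", x, y)
    return x + y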
You probably want this setting:
CELERYD_HIJACK_ROOT_LOGGER = False
Tell me how that works out.
Btw, the reason it hijacks the root logger is that some badly written libraries set up logging themselves, something a library should never do, resulting in users seeing no output from the celeryd worker :(
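A minimal sketch of how that could look in a django-celery settings.py (the log path and format below are illustrative, not from the question):

# settings.py
import logging

CELERYD_HIJACK_ROOT_LOGGER = False  # stop Celery from replacing the root logger's handlers

# the pre-existing standard logging configuration stays in charge
logging.basicConfig(
    filename="/var/log/myapp/app.log",  # hypothetical path
    level=logging.DEBUG,
    format="%(asctime)s %(name)s %(levelname)s %(message)s",
)

With the hijack disabled, whatever handlers the old system configured keep receiving the records emitted inside tasks.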