python crontab alternative - APScheduler & python-daemon

I'm having trouble using python-daemon 1.6 getting along with APScheduler to manage a list of tasks.

(The scheduler needs to run them periodically at specific chosen times, with seconds resolution.)

This version works (until Ctrl+C is pressed):

from apscheduler.scheduler import Scheduler
import logging
import signal

def job_function():
    print "Hello World"

def init_schedule():
    logging.basicConfig(level=logging.DEBUG)
    sched = Scheduler()
    # Start the scheduler
    sched.start()
    return sched

def schedule_job(sched, function, periodicity, start_time):
    # schedule the `function` argument (not a hard-coded job_function)
    sched.add_interval_job(function, seconds=periodicity, start_date=start_time)

if __name__ == "__main__":

    sched = init_schedule()
    schedule_job(sched, job_function, 120, '2011-10-06 12:30:09')
    schedule_job(sched, job_function, 120, '2011-10-06 12:31:03')

    # APScheduler's Scheduler only works while the main thread is alive
    signal.pause()
    # Or:
    #time.sleep(300)
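The signal.pause() call only keeps the main thread alive; pressing Ctrl+C kills the process wherever it happens to be. As a minimal stdlib sketch of the same keep-alive-and-stop pattern (no APScheduler here; run_periodically, stop, and max_runs are names invented for this example), a threading.Event lets the loop exit cleanly on a signal instead:

```python
import signal
import threading

def job_function():
    print("Hello World")

stop = threading.Event()

def run_periodically(fn, seconds, stop_event, max_runs=None):
    """Call fn every `seconds` until stop_event is set."""
    runs = 0
    while not stop_event.is_set():
        fn()
        runs += 1
        if max_runs is not None and runs >= max_runs:
            break
        # wait() returns early if the event is set, unlike time.sleep()
        stop_event.wait(seconds)
    return runs

# A signal handler sets the event, so SIGTERM stops the loop
# between jobs instead of killing the process mid-job.
signal.signal(signal.SIGTERM, lambda signum, frame: stop.set())

runs = run_periodically(job_function, 0.01, stop, max_runs=3)
print(runs)  # → 3
```

The same idea applies to the APScheduler version: replace signal.pause() with an Event wait and call sched.shutdown() on the way out.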

Sample Output:

INFO:apscheduler.threadpool:Started thread pool with 0 core threads and 20 maximum threads
INFO:apscheduler.scheduler:Scheduler started
DEBUG:apscheduler.scheduler:Looking for jobs to run
DEBUG:apscheduler.scheduler:No jobs; waiting until a job is added
INFO:apscheduler.scheduler:Added job "job_function (trigger: interval[0:00:30], next run at: 2011-10-06 18:30:39)" to job store "default"
INFO:apscheduler.scheduler:Added job "job_function (trigger: interval[0:00:30], next run at: 2011-10-06 18:30:33)" to job store "default"
DEBUG:apscheduler.scheduler:Looking for jobs to run
DEBUG:apscheduler.scheduler:Next wakeup is due at 2011-10-06 18:30:33 (in 10.441128 seconds)

With python-daemon, the output is blank. Why isn't the DaemonContext spawning the process correctly?
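A likely cause is that daemonization redirects the standard streams to /dev/null by default, so anything written with print simply vanishes rather than failing. The effect can be sketched with the stdlib alone (the file name daemon_stdout.log is invented for this example; re-pointing sys.stdout here stands in for passing a file to the daemon context):

```python
import sys

# A daemon's stdout typically goes to /dev/null unless redirected.
# Pointing sys.stdout at a real file makes the output visible again.
log = open("daemon_stdout.log", "w")
old_stdout = sys.stdout
sys.stdout = log          # stand-in for redirecting the daemon's stdout
print("Hello World")      # lands in daemon_stdout.log, not the terminal
sys.stdout = old_stdout
log.close()

captured = open("daemon_stdout.log").read().strip()
print(captured)  # → Hello World
```

The asker's own fix below follows the same idea: wiring real files into the DaemonContext so the daemon's output can be inspected.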

EDIT - Working

After reading the python-daemon source, I added stdout and stderr to the DaemonContext and was finally able to see what was going on.

import daemon
import logging

def job_function():
    print "Hello World"
    print >> test_log, "Hello World"

def init_schedule():
    logging.basicConfig(level=logging.DEBUG)
    sched = Scheduler()
    sched.start()
    return sched

def schedule_job(sched, function, periodicity, start_time):
    sched.add_interval_job(function, seconds=periodicity, start_date=start_time)

if __name__ == "__main__":

    test_log = open('daemon.log', 'w')

    try:
        # files_preserve keeps the log's file descriptor open
        # across daemonization
        with daemon.DaemonContext(files_preserve=[test_log]):
            from datetime import datetime
            from apscheduler.scheduler import Scheduler
            import signal

            logging.basicConfig(level=logging.DEBUG)
            sched = init_schedule()

            schedule_job(sched, job_function, 120, '2011-10-06 12:30:09')
            schedule_job(sched, job_function, 120, '2011-10-06 12:31:03')

            signal.pause()

    except Exception, e:
        print e
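The files_preserve detail matters because daemonization forks the process and closes inherited descriptors. The underlying mechanism can be sketched with the stdlib alone (log_path is a temporary file invented for this example): a file object opened before os.fork() remains usable in the child, which is exactly what preserving test_log across the DaemonContext relies on.

```python
import os
import tempfile

# Open a log file in the parent, before forking.
fd, log_path = tempfile.mkstemp()
os.close(fd)
log = open(log_path, "w")

pid = os.fork()
if pid == 0:
    # Child: the inherited file object is still open and writable,
    # analogous to a file listed in files_preserve.
    log.write("Hello from the daemonized child\n")
    log.flush()
    os._exit(0)

# Parent: wait for the child, then read what it wrote.
os.waitpid(pid, 0)
log.close()
print(open(log_path).read().strip())
```

Without the preservation step (i.e. if the daemonizing code closed the descriptor before the child used it), the write would fail silently or raise, which is one reason daemon output so often disappears.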


I do not know much about python-daemon, but test_log in job_function() is not defined at that point. The same problem occurs in init_schedule(), where you reference Scheduler.
