
Cron Tasks on load balanced web servers

I'm looking for a better solution for handling our cron tasks in a load-balanced environment.

Currently have:

  • PHP application running on 3 CentOS servers behind a load balancer.
  • Tasks that need to be run periodically but only on a single machine at a time.
  • Good old cron set up to run those tasks on the first server.
  • Problems if the first server is out of play for whatever reason.

Looking for:

  • Something more robust and de-centralized.
  • Load balancing the tasks so each task runs only once, but on random/different servers to spread the load.
  • Ensuring the tasks still run when the first server goes down.
  • Being able to manage tasks and see aggregate reports, ideally using a web interface.
  • Notifications if anything goes wrong.

The solution doesn't need to be implemented in PHP but it would be nice as it would allow us to easily tweak it if needed.

I have found two projects that look promising: GNUBatch and Job Scheduler. I will most likely test both further, but I wonder if someone has a better solution for the above.

Thanks.


You can use this small library, which uses Redis to create a temporary timed lock:

https://github.com/AlexDisler/MutexLock

The servers should be identical and have the same cron configuration. Whichever server creates the lock first also executes the task; the other servers will see the lock and exit without executing anything.

For example, in the php file that executes the scheduled task:

<?php
// assumes the MutexLock library has been loaded (e.g. via Composer's autoloader)
MutexLock\Lock::init([
  'host'   => $redisHost,
  'port'   => $redisPort
]);

// check if a lock was already created;
// if it was, another server is already executing this task
if (!MutexLock\Lock::set($lockKeyName, $lockTimeInSeconds)) {
  return;
}

// no lock existed, so this server acquired it: run the scheduled task
scheduledTaskThatRunsOnlyOnce();

To run the tasks in a de-centralized way and spread the load, take a look at https://github.com/chrisboulton/php-resque. It's a PHP port of the Ruby version of Resque, and it stores the data in the exact same format, so you can use https://github.com/resque/resque-web or http://resqueboard.kamisama.me/ to monitor the workers and see reports.
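As a rough illustration of that split (not from the answer above), here is a minimal sketch using php-resque. The 'reports' queue, the GenerateReport class, and sendReport() are placeholder names, and a shared Redis instance reachable from all three servers is assumed:

<?php
// Dispatcher side: instead of running the task directly from cron,
// push it onto a queue that all servers can see.
require 'vendor/autoload.php';   // assumes php-resque installed via Composer

Resque::setBackend('redis-host:6379');                       // shared Redis
Resque::enqueue('reports', 'GenerateReport', ['date' => date('Y-m-d')]);

// Worker side: the class the workers instantiate and run. Workers are
// started on each server (see the php-resque README for the exact worker
// command, e.g. QUEUE=reports php bin/resque), so whichever worker is
// free picks the job up, and it only runs once.
class GenerateReport
{
    public function perform()
    {
        // the enqueue() arguments are available via $this->args
        sendReport($this->args['date']);   // placeholder for the real task
    }
}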


Assuming you have a database available that is not hosted on one of those 3 servers:

Write a "wrapper" script that goes in cron, and takes the program you're running as its argument. The very first thing it does is connect to the remote database, and check when the last time an entry was inserted into a table (created for this wrapper). If the last insertion time is greater than when it was supposed to run, then insert a new record into the table with the current time, and execute the wrapper's argument (your cron job).

Cron up the wrapper on each server, each set a few minutes behind the previous one (server A runs at the top of the hour, server B at 5 minutes past, C at 10 minutes past, etc.).

The first server will always execute the cron job first, so the other two never will. If the first server goes down, the second server will see that the job hasn't run and will run it itself.

If you also record in the table which server it was that executed the job, you'll have a log of when/where the script was executed.
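For illustration only, a minimal sketch of such a wrapper in PHP with PDO; the cron_runs table, its columns, the credentials, and the hourly interval are assumptions, not part of the answer above:

<?php
// cron-wrapper.php : usage  php cron-wrapper.php /path/to/job.php
// Staggered crontab entries, one per server, e.g.:
//   server A:  0 * * * * php /opt/cron-wrapper.php /opt/jobs/report.php
//   server B:  5 * * * * php /opt/cron-wrapper.php /opt/jobs/report.php
//   server C: 10 * * * * php /opt/cron-wrapper.php /opt/jobs/report.php

$intervalSeconds = 3600;   // how often the job is supposed to run (hourly here)
$job = $argv[1];

$db = new PDO('mysql:host=db-host;dbname=cron', 'user', 'pass');

// When did any server last run this job?
$stmt = $db->prepare('SELECT MAX(ran_at) FROM cron_runs WHERE job = ?');
$stmt->execute([$job]);
$lastRun = $stmt->fetchColumn();

// Already executed within the current interval: another server beat us to it.
// (The staggered crontab offsets keep two servers from checking at once.)
if ($lastRun !== null && $lastRun !== false
    && time() - strtotime($lastRun) < $intervalSeconds) {
    exit(0);
}

// Record when and where it ran, then execute the real cron job.
$db->prepare('INSERT INTO cron_runs (job, server, ran_at) VALUES (?, ?, NOW())')
   ->execute([$job, gethostname()]);

passthru('php ' . escapeshellarg($job), $exitCode);
exit($exitCode);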


Wouldn't this be an ideal situation for using a message / task queue?


I ran into the same problem and came up with this little repository: https://github.com/incapption/LoadBalancedCronTask
