
MySQL / MSSQL - Checking out Records for Processing - Scaling?

https://www.devze.com 2023-01-27 03:41 Source: web

I'm trying to figure out the most efficient and scalable way to implement a processing queue mechanism in a SQL database. The short of it is: I have a bunch of 'Domain' objects with associated 'Backlink' statistics, and I want to efficiently figure out which Domains need to have their Backlinks processed.

Domain table: id, domainName

Backlinks table: id, domainId, count, checkedTime

The Backlinks table has many records (to keep a history) to one Domain record. I need to efficiently select domains that are due to have their Backlinks processed. This could mean that the Backlinks record with the most recent checkedTime is far enough in the past, or that there is no Backlinks record at all for a domain record. Domains will need to be ordered for processing by a number of factors, including ordering by the oldest checkedTime first.
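The selection criteria above (latest checkedTime too old, or no Backlinks row at all, oldest first) can be expressed as a single LEFT JOIN with GROUP BY. A minimal runnable sketch using SQLite in Python; the staleness cutoff and the sample rows are made up for illustration, and the real query would run against MySQL/MSSQL with real timestamps:

```python
import sqlite3

# Sketch of the "which domains are due" query, assuming the two-table
# schema from the question. A domain is due if its most recent
# checkedTime is older than a cutoff, or if it has no Backlinks row.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Domain (id INTEGER PRIMARY KEY, domainName TEXT);
CREATE TABLE Backlinks (id INTEGER PRIMARY KEY, domainId INTEGER,
                        count INTEGER, checkedTime INTEGER);
INSERT INTO Domain VALUES (1, 'a.com'), (2, 'b.com'), (3, 'c.com');
-- a.com checked recently, b.com checked long ago, c.com never checked
INSERT INTO Backlinks VALUES (1, 1, 10, 900), (2, 2, 5, 100);
""")

CUTOFF = 500  # hypothetical: checks older than this are stale

due = conn.execute("""
SELECT d.id, d.domainName, MAX(b.checkedTime) AS lastChecked
FROM Domain d
LEFT JOIN Backlinks b ON b.domainId = d.id
GROUP BY d.id, d.domainName
HAVING MAX(b.checkedTime) IS NULL OR MAX(b.checkedTime) < ?
ORDER BY MAX(b.checkedTime)
""", (CUTOFF,)).fetchall()
print(due)  # never-checked c.com sorts first (NULL), then stale b.com
```

Note the `ORDER BY` on the aggregate: in SQLite NULLs sort first ascending, so never-checked domains naturally come ahead of the stale ones.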

There are multiple 'readers' processing domains. If the same domain gets processed twice it's not a huge deal, but it is a waste of CPU cycles.

The worker takes an indeterminate amount of time to process a domain. I would prefer to have some backup in the sense that a checkout would 'expire' rather than require the worker process to explicitly 'check in' a record when it's finished, in case the worker fails for some reason.
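One way to get that expiring checkout is a lease column: a claim is just a timestamped UPDATE, and a row whose lease has lapsed becomes claimable again, so a crashed worker needs no cleanup. A sketch under assumed names (the `checkedOutUntil` column and the lease length are hypothetical, not part of the schema above):

```python
import sqlite3

# Expiring-checkout sketch: a hypothetical checkedOutUntil column holds
# the lease expiry. The UPDATE's WHERE clause makes the claim atomic,
# so two workers can never both win the same unexpired lease.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE Domain (
    id INTEGER PRIMARY KEY, domainName TEXT,
    checkedOutUntil INTEGER)""")
conn.executemany("INSERT INTO Domain (id, domainName) VALUES (?, ?)",
                 [(1, "a.com"), (2, "b.com")])

LEASE = 300  # hypothetical lease length in seconds

def checkout(conn, domain_id, now):
    """Claim a domain unless someone else holds an unexpired lease."""
    cur = conn.execute(
        """UPDATE Domain SET checkedOutUntil = ?
           WHERE id = ?
             AND (checkedOutUntil IS NULL OR checkedOutUntil < ?)""",
        (now + LEASE, domain_id, now))
    return cur.rowcount == 1  # True if we won the claim

r1 = checkout(conn, 1, now=1000)  # first worker claims the row
r2 = checkout(conn, 1, now=1100)  # second worker: lease still live
r3 = checkout(conn, 1, now=1400)  # lease has expired, claim succeeds
print(r1, r2, r3)  # True False True
```

Checking `rowcount` after the UPDATE is what makes this safe for multiple readers: whichever worker's UPDATE matches the row wins, and the losers simply move on.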

The big issue here is scaling. From the start I'll easily have about 2 million domains, and that number will keep growing daily. This means my Backlinks history will grow quickly too, as I expect to process each domain daily in some cases and weekly in others. The question becomes: what is the most efficient way to find domains that require backlinks processing?
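At that scale the due-domain query lives or dies by index support: a composite index on `(domainId, checkedTime)` lets the most recent check per domain be found without scanning the whole history. A sketch (the index name is made up; the table follows the schema in the question):

```python
import sqlite3

# Index sketch for the growing Backlinks history. With the composite
# index, finding the latest checkedTime for one domain is an index
# seek rather than a table scan.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Backlinks (id INTEGER PRIMARY KEY, domainId INTEGER,
                        count INTEGER, checkedTime INTEGER);
CREATE INDEX idx_backlinks_domain_checked
    ON Backlinks (domainId, checkedTime);
""")

# Ask the planner how it would answer the per-domain "latest check".
plan = conn.execute("""
EXPLAIN QUERY PLAN
SELECT MAX(checkedTime) FROM Backlinks WHERE domainId = 42
""").fetchall()
print(plan)
```

The plan output should show the query being answered from the covering index, which is what keeps the scan cost flat as the history table grows.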

Thanks for your help!


I decided to structure things a bit differently. Instead of finding domains that need to be processed based on the criteria of several tables, I'm assigning a date at which each metric needs to be processed for a given domain. This makes the query to find domains needing processing much simpler.

I ended up using the idea of batches where I find domains to process, mark them as being processed by a batch id, then return those domains to the worker. When the worker is done, it returns the results, and the batch is deleted, and the domains will naturally be ready for processing again in the future.
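The batch approach described above can be sketched as: stamp the due domains with a fresh batch id in one UPDATE, hand that batch to the worker, and release it when the results come back. The column names (`processAfter`, `batchId`) are illustrative, not from the original schema:

```python
import sqlite3
import uuid

# Batch-checkout sketch: a single UPDATE claims up to `limit` due
# domains under one batch id, so concurrent workers never share rows.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE Domain (
    id INTEGER PRIMARY KEY, domainName TEXT,
    processAfter INTEGER, batchId TEXT)""")
conn.executemany(
    "INSERT INTO Domain (id, domainName, processAfter) VALUES (?, ?, ?)",
    [(1, "a.com", 100), (2, "b.com", 900), (3, "c.com", 50)])

def claim_batch(conn, now, limit=10):
    """Mark the oldest unclaimed due domains with a new batch id."""
    batch = str(uuid.uuid4())
    conn.execute(
        """UPDATE Domain SET batchId = ?
           WHERE id IN (SELECT id FROM Domain
                        WHERE batchId IS NULL AND processAfter <= ?
                        ORDER BY processAfter LIMIT ?)""",
        (batch, now, limit))
    rows = conn.execute(
        "SELECT id, domainName FROM Domain WHERE batchId = ? ORDER BY id",
        (batch,)).fetchall()
    return batch, rows

def finish_batch(conn, batch, next_due):
    """Worker returned results: release the batch, schedule next run."""
    conn.execute(
        "UPDATE Domain SET batchId = NULL, processAfter = ? WHERE batchId = ?",
        (next_due, batch))

batch, rows = claim_batch(conn, now=500)
print(rows)  # a.com and c.com are due; b.com is not
finish_batch(conn, batch, next_due=500 + 86400)
```

Because releasing a batch just resets `processAfter`, the domains become eligible again on their own schedule, which matches the "naturally ready for processing again" behavior described above. A sweeper that clears batch ids older than some age would give the same crash-recovery property as the expiring checkout.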
