priority queues

While reading the topic on priority queues in Data Structures and Algorithms, I came across the following paragraph:

One possible way to favour short processes yet not lock out the long ones is to give process P a priority of 100*t(used) - t(init), where t(used) is the time taken by the process so far and t(init) is the time at which the process initiated, measured from some time 0. Note that 100 is a magic number; it is selected to be somewhat larger than the largest number of processes we expect to be active at once. The reader may observe that if we always pick the process with the smallest priority number and there are not too many short processes in the mix, then in the long run a process that does not finish quickly will receive 1% of the processor's time.

Can anyone explain how such a process ends up with 1% of the processor's time? It would be good if you could derive the result mathematically.


Sure, so initially every process has a negative (or zero) priority value. It doesn't matter exactly what it is, only that it is based on the time at which the process started, however that is represented. For simplicity, let's assume it's just an integer value.

Process A starts with a priority of 0 (assume it is initialized at t = 0). It executes for, say, 10 time units but isn't finished, so it needs to be requeued to continue processing at some future point. Its new priority will then be, based on the formula

priority = priority + 100*(endtime - starttime)

priorityA = 0 + 100*(10-0)
priorityA = 1000
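
To make the arithmetic below easier to check, here is a minimal Python sketch of that update rule (the helper name is mine, not from the book):

# Hypothetical helper mirroring the rule above: the priority grows by
# 100 times the CPU time the process just consumed.
def updated_priority(priority, start_time, end_time):
    return priority + 100 * (end_time - start_time)

print(updated_priority(0, 0, 10))   # process A after its first slice -> 1000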

Process B is initialized at t = 5, so its initial priority is -5. It has the lower of the two priorities in the queue, so it will get some time too. Assume that it also runs for 10 time units, from t = 10 to t = 20. After it is done, B will have a priority of:

priorityB = -5 + 100*(20-10)
priorityB = 995

and so it'll get queued up again. Since 995 is still lower than A's 1000, B runs next; let's assume it again runs for 10 units, from t = 20 to t = 30. Following its timeslice, its new priority will be:

priorityB = 995 + 100*(30-20)
priorityB = 1995

That repositions B after A in the priority queue. A then runs, but only for 5 time units this time, from t = 30 to t = 35. Its new priority is:

priorityA = 1000 + 100*(35-30)
priorityA = 1500

Process A (1500) is still ahead of B (1995), so it again gets the processor's attention. Suppose it runs for 5 time units again, from t = 35 to t = 40; its new priority is:

priorityA = 1500 + 100*(40-35)
priorityA = 2000

which positions it after process B, so B gets some processor time. Let's assume it uses 5 time units this time, from t = 40 to t = 45.

priorityB = 1995 + 100*(45-40)
priorityB = 2495
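
To check the whole trace mechanically, here is a small sketch that replays it with Python's heapq module (a min-heap, so the smallest priority number is popped first). The run lengths are simply the ones assumed above:

import heapq

# A is initialized at t = 0 with priority 0 and runs first for 10 units
# (B has not been initialized yet when the first choice is made).
queue = [(0, "A")]
t = 0
priority, name = heapq.heappop(queue)
t += 10
heapq.heappush(queue, (priority + 100 * 10, name))   # A -> 1000

heapq.heappush(queue, (-5, "B"))                     # B was initialized at t = 5

# Remaining slices assumed in the example: B 10, B 10, A 5, A 5, B 5.
for used in [10, 10, 5, 5, 5]:
    priority, name = heapq.heappop(queue)            # lowest priority number runs
    t += used
    new_priority = priority + 100 * used
    print(f"t={t}: {name} requeued with priority {new_priority}")
    heapq.heappush(queue, (new_priority, name))

The printed priorities (995, 1995, 1500, 2000, 2495) match the hand calculation above.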

Now it is A's turn again. See how these two processes effectively get 50% of the processor's attention each? If we add a third long-running process to the mix ("long running" in the same sense that A and B are long running: they don't run for 10 units and finish, but keep getting requeued), those three will each get roughly 33% of the processor's time, since each time one runs and doesn't finish, its priority is adjusted upward based on how long it spent running. New processes always run first in this scenario because they come in with a negative priority value, so they actually have a higher priority (a lower numeric value) for a while. That won't last forever, though: the new processes' priority values grow each time they run.

But based on the assumptions in the book's example, there are around 100 processes awaiting processor time at any moment, and not too many of them are short-running. So roughly 100 processes are vying for the processor's attention, each one gets about the same amount of time per turn, and soon enough their priority values cluster close together (numerically there is not much difference between the highest-priority and lowest-priority process). After the "top" process runs, the 100*(time used) added to its priority is enough to push it to the bottom (or near the bottom) of the queue. Rinse and repeat and you essentially get round-robin scheduling, assuming the processes all use about the same amount of time each turn. Since there are about 100 of them at any one time, and few short-running ones, each process gets about 1/100 of the processor's attention, i.e. 1%.
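
To see that 1% figure numerically, here is a rough simulation of my own (not from the book): 100 long-running processes scheduled by the same rule, with arbitrary slice lengths between 1 and 10 time units.

import heapq
import random

N = 100
# Assume process i is initialized at time i, so it starts with priority -i.
queue = [(-i, i) for i in range(N)]
heapq.heapify(queue)
cpu_time = [0.0] * N
total = 0.0

for _ in range(100_000):                    # many scheduling rounds
    priority, pid = heapq.heappop(queue)    # run the smallest priority number
    used = random.uniform(1, 10)            # assumed slice length; never finishes
    cpu_time[pid] += used
    total += used
    heapq.heappush(queue, (priority + 100 * used, pid))

shares = [t / total for t in cpu_time]
print(min(shares), max(shares))             # both come out close to 0.01, i.e. 1%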
