2.5 SCHEDULING
When a computer is multiprogrammed, it frequently has multiple processes competing for the
CPU at the same time. This situation occurs whenever two or more processes are simultaneously in the ready state. If only one CPU is available, a choice has to be made which process to run next. The part of the operating system that makes the choice is called the scheduler and the algorithm it uses is called the scheduling algorithm. These topics form the subject matter of the following sections.
Many of the same issues that apply to process scheduling also apply to thread scheduling, although some are different. Initially we will focus on process scheduling. Later on we will explicitly look at thread scheduling.
2.5.1 Introduction to Scheduling
Back in the old days of batch systems with input in the form of card images on a magnetic tape, the scheduling algorithm was simple: just run the next job on the tape. With timesharing systems, the scheduling algorithm became more complex because there were generally multiple users waiting for service. Some mainframes still combine batch and timesharing service, requiring the scheduler to decide whether a batch job or an interactive user at a terminal should go next. (As an aside, a batch job may be a request to run multiple programs in succession, but for this section, we will just assume it is a request to run a single program.) Because CPU time is a scarce resource on these machines, a good scheduler can make a big difference in perceived performance and user satisfaction. Consequently, a great deal of work has gone into devising clever and efficient scheduling algorithms.
With the advent of personal computers, the situation changed in two ways. First, most of the time there is only one active process. A user entering a document on a word processor is unlikely to be simultaneously compiling a program in the background. When the user types a command to the word processor, the scheduler does not have to do much work to figure out which process to run—the word processor is the only candidate.
Second, computers have gotten so much faster over the years that the CPU is rarely a scarce resource any more. Most programs for personal computers are limited by the rate at which the user can present input (by typing or clicking), not by the rate the CPU can process it. Even compilations, a major sink of CPU cycles in the past, take just a few seconds at most nowadays.
Even when two programs are actually running at once, such as a word processor and a spreadsheet, it hardly matters which goes first since the user is probably waiting for both of them to finish. As a consequence, scheduling does not matter much on simple PCs. [Of course, there are applications that practically eat the CPU alive: rendering one hour of high-resolution video may require industrial-strength image processing on each of 108,000 frames in NTSC (90,000 in
PAL), but these applications are the exception rather than the rule.]
When we turn to high-end networked workstations and servers, the situation changes. Here multiple processes often do compete for the CPU, so scheduling matters again. For example, when the CPU has to choose between running a process that updates the screen after a user has closed a window and running a process that sends out queued email, it makes a huge difference in the perceived response. If closing the window were to take 2 sec while the email was being sent, the user would probably regard the system as extremely sluggish, whereas having the email delayed by 2 sec would not even be noticed. In this case, process scheduling matters very much.
In addition to picking the right process to run, the scheduler also has to worry about making efficient use of the CPU because process switching is expensive. To start with, a switch from user mode to kernel mode must occur. Then the state of the current process must be saved, including storing its registers in the process table so they can be reloaded later. In many systems, the memory map (e.g., memory reference bits in the page table) must be saved as well. Next a new process must be selected by running the scheduling algorithm. After that, the MMU must be reloaded with the memory map of the new process. Finally, the new process must be started. In addition to all that, the process switch usually invalidates the entire memory cache, forcing it to be dynamically reloaded from the main memory twice (upon entering the kernel and upon leaving it). All in all, doing too many process switches per second can chew up a substantial amount of CPU time, so caution is advised.
Process Behavior
Nearly all processes alternate bursts of computing with (disk) I/O requests, as shown in Fig. 2-37.
Typically the CPU runs for a while without stopping, then a system call is made to read from a file or write to a file. When the system call completes, the CPU computes again until it needs more data or has to write more data and so on. Note that some I/O activities count as computing.
For example, when the CPU copies bits to a video RAM to update the screen, it is computing, not doing I/O, because the CPU is in use. I/O in this sense is when a process enters the blocked state waiting for an external device to complete its work.

Figure 2-37. Bursts of CPU usage alternate with periods of waiting for I/O. (a) A CPU-bound process. (b) An I/O-bound process.
The important thing to notice about Fig. 2-37 is that some processes, such as the one in Fig. 2-37(a), spend most of their time computing, while others, such as the one in Fig. 2-37(b), spend most of their time waiting for I/O. The former are called compute-bound; the latter are called I/O-bound. Compute-bound processes typically have long CPU bursts and thus infrequent I/O waits, whereas I/O-bound processes have short CPU bursts and thus frequent I/O waits. Note that the key factor is the length of the CPU burst, not the length of the I/O burst. I/O-bound processes are I/O bound because they do not compute much between I/O requests, not because they have especially long I/O requests. It takes the same time to read a disk block no matter how much or how little time it takes to process the data after they arrive.
It is worth noting that as CPUs get faster, processes tend to get more I/O-bound. This effect occurs because CPUs are improving much faster than disks. As a consequence, the scheduling of
I/O-bound processes is likely to become a more important subject in the future. The basic idea here is that if an I/O-bound process wants to run, it should get a chance quickly so it can issue its disk request and keep the disk busy.
When to Schedule
A key issue related to scheduling is when to make scheduling decisions. It turns out that there are a variety of situations in which scheduling is needed. First, when a new process is created, a decision needs to be made whether to run the parent process or the child process. Since both processes are in ready state, it is a normal scheduling decision and it can go either way, that is, the scheduler can legitimately choose to run either the parent or the child next.
Second, a scheduling decision must be made when a process exits. That process can no longer run (since it no longer exists), so some other process must be chosen from the set of ready processes. If no process is ready, a system-supplied idle process is normally run.
Third, when a process blocks on I/O, on a semaphore, or for some other reason, another process has to be selected to run. Sometimes the reason for blocking may play a role in the choice. For example, if A is an important process and it is waiting for B to exit its critical region, letting B run next will allow it to exit its critical region and thus let A continue. The trouble, however, is that the scheduler generally does not have the necessary information to take this dependency into account.
Fourth, when an I/O interrupt occurs, a scheduling decision may be made. If the interrupt came from an I/O device that has now completed its work, some process that was blocked waiting for the I/O may now be ready to run. It is up to the scheduler to decide if the newly ready process should be run, if the process that was running at the time of the interrupt should continue running, or if some third process should run.
If a hardware clock provides periodic interrupts at 50 Hz, 60 Hz, or some other frequency, a scheduling decision can be made at each clock interrupt or at every k -th clock interrupt.
Scheduling algorithms can be divided into two categories with respect to how they deal with clock interrupts. A nonpreemptive scheduling algorithm picks a process to run and then just lets it run until it blocks (either on I/O or waiting for another process) or until it voluntarily releases the CPU. Even if it runs for hours, it will not be forcibly suspended. In effect, no scheduling decisions are made during clock interrupts. After clock interrupt processing has been completed, the process that was running before the interrupt is always resumed.
In contrast, a preemptive scheduling algorithm picks a process and lets it run for a maximum of some fixed time. If it is still running at the end of the time interval, it is suspended and the scheduler picks another process to run (if one is available). Doing preemptive scheduling requires having a clock interrupt occur at the end of the time interval to give control of the CPU back to the scheduler. If no clock is available, nonpreemptive scheduling is the only option.
Categories of Scheduling Algorithms
Not surprisingly, in different environments different scheduling algorithms are needed. This situation arises because different application areas (and different kinds of operating systems) have different goals. In other words, what the scheduler should optimize for is not the same in all systems. Three environments worth distinguishing are
1. Batch.
2. Interactive.
3. Real time.
In batch systems, there are no users impatiently waiting at their terminals for a quick response.
Consequently, nonpreemptive algorithms, or preemptive algorithms with long time periods for each process, are often acceptable. This approach reduces process switches and thus improves performance. In an environment with interactive users, preemption is essential to keep one process from hogging the CPU and denying service to the others. Even if no process intentionally ran forever, one process might shut out all the others indefinitely due to a program bug. Preemption is needed to prevent this behavior.
In systems with real-time constraints, preemption is, oddly enough, sometimes not needed because the processes know that they may not run for long periods of time and usually do their work and block quickly. The difference with interactive systems is that real-time systems run only programs that are intended to further the application at hand. Interactive systems are general purpose and may run arbitrary programs that are not cooperative or even malicious.
Scheduling Algorithm Goals
In order to design a scheduling algorithm, it is necessary to have some idea of what a good algorithm should do. Some goals depend on the environment (batch, interactive, or real time), but there are also some that are desirable in all cases. Some goals are listed in Fig. 2-38. We will discuss these in turn below.
All systems
Fairness - giving each process a fair share of the CPU
Policy enforcement - seeing that stated policy is carried out
Balance - keeping all parts of the system busy
Batch systems
Throughput - maximize jobs per hour
Turnaround time - minimize time between submission and termination
CPU utilization - keep the CPU busy all the time
Interactive systems
Response time - respond to requests quickly
Proportionality - meet users’ expectations
Real-time systems
Meeting deadlines - avoid losing data
Predictability - avoid quality degradation in multimedia systems
Figure 2-38. Some goals of the scheduling algorithm under different circumstances.
Under all circumstances, fairness is important. Comparable processes should get comparable service. Giving one process much more CPU time than an equivalent one is not fair. Of course, different categories of processes may be treated very differently. Think of safety control and doing the payroll at a nuclear reactor’s computer center.
Somewhat related to fairness is enforcing the system’s policies. If the local policy is that safety control processes get to run whenever they want to, even if it means the payroll is 30 sec late, the scheduler has to make sure this policy is enforced.
Another general goal is keeping all parts of the system busy when possible. If the CPU and all the I/O devices can be kept running all the time, more work gets done per second than if some of the components are idle. In a batch system, for example, the scheduler has control of which jobs are brought into memory to run. Having some CPU-bound processes and some I/O-bound processes in memory together is a better idea than first loading and running all the CPU-bound jobs and then, when they are finished, loading and running all the I/O-bound jobs. If the latter strategy is used, when the CPU-bound processes are running, they will fight for the CPU and the disk will be idle. Later, when the I/O-bound jobs come in, they will fight for the disk and the CPU will be idle. Better to keep the whole system running at once by a careful mix of processes.
The managers of large computer centers that run many batch jobs typically look at three metrics to see how well their systems are performing: throughput, turnaround time, and CPU utilization.
Throughput is the number of jobs per hour that the system completes. All things considered, finishing 50 jobs per hour is better than finishing 40 jobs per hour. Turnaround time is the statistically average time from the moment that a batch job is submitted until the moment it is completed. It measures how long the average user has to wait for the output. Here the rule is:
Small is Beautiful.
A scheduling algorithm that maximizes throughput may not necessarily minimize turnaround time. For example, given a mix of short jobs and long jobs, a scheduler that always ran short jobs and never ran long jobs might achieve an excellent throughput (many short jobs per hour) but at the expense of a terrible turnaround time for the long jobs. If short jobs kept arriving at a steady rate, the long jobs might never run, making the mean turnaround time infinite while achieving a high throughput.
CPU utilization is also an issue with batch systems because on the big mainframes where batch systems run, the CPU is still a major expense. Thus computer center managers feel guilty when it is not running all the time. Actually though, this is not such a good metric. What really matters is how many jobs per hour come out of the system (throughput) and how long it takes to get a job back (turnaround time). Using CPU utilization as a metric is like rating cars based on how many times per hour the engine turns over.
For interactive systems, especially timesharing systems and servers, different goals apply. The most important one is to minimize response time, that is, the time between issuing a command and getting the result. On a personal computer where a background process is running (for example, reading and storing email from the network), a user request to start a program or open a file should take precedence over the background work. Having all interactive requests go first will be perceived as good service.
A somewhat related issue is what might be called proportionality. Users have an inherent (but often incorrect) idea of how long things should take. When a request that is perceived as complex takes a long time, users accept that, but when a request that is perceived as simple takes a long time, users get irritated. For example, if clicking on an icon that calls up an Internet provider using an analog modem takes 45 seconds to establish a connection, the user will probably accept that as a fact of life. On the other hand, if clicking on an icon that breaks the connection takes 45 seconds, the user will probably be swearing a blue streak by the 30-sec mark and frothing at the mouth by 45 sec. This behavior is due to the common user perception that placing a phone call and getting a connection is supposed to take a lot longer than just hanging up. In some cases (such as this one), the scheduler cannot do anything about the response time, but in other cases it can, especially when the delay is due to a poor choice of process order.
Real-time systems have different properties than interactive systems, and thus different scheduling goals. They are characterized by having deadlines that must or at least should be met.
For example, if a computer is controlling a device that produces data at a regular rate, failure to run the data-collection process on time may result in lost data. Thus the foremost need in a real-time system is meeting all (or most) deadlines.
In some real-time systems, especially those involving multimedia, predictability is important.
Missing an occasional deadline is not fatal, but if the audio process runs too erratically, the sound quality will deteriorate rapidly. Video is also an issue, but the ear is much more sensitive to jitter than the eye. To avoid this problem, process scheduling must be highly predictable and regular.
We will study batch and interactive scheduling algorithms in this chapter but defer most of our study of real-time scheduling until we come to multimedia operating systems in Chap. 7.
2.5.2 Scheduling in Batch Systems
It is now time to turn from general scheduling issues to specific scheduling algorithms. In this section we will look at algorithms used in batch systems. In the following ones we will examine interactive and real-time systems. It is worth pointing out that some algorithms are used in both batch and interactive systems. We will study these later. Here we will focus on algorithms that are only suitable in batch systems.
First-Come First-Served
Probably the simplest of all scheduling algorithms is nonpreemptive first-come first-served. With this algorithm, processes are assigned the CPU in the order they request it. Basically, there is a single queue of ready processes. When the first job enters the system from the outside in the morning, it is started immediately and allowed to run as long as it wants to. As other jobs come in, they are put onto the end of the queue. When the running process blocks, the first process on the queue is run next. When a blocked process becomes ready, like a newly arrived job, it is put on the end of the queue.
The great strength of this algorithm is that it is easy to understand and equally easy to program. It is also fair in the same sense that allocating scarce sports or concert tickets to people who are willing to stand in line starting at 2 A.M. is fair. With this algorithm, a single linked list keeps track of all ready processes. Picking a process to run just requires removing one from the front of the queue. Adding a new job or unblocked process just requires attaching it to the end of the queue. What could be simpler?
Unfortunately, first-come first-served also has a powerful disadvantage. Suppose that there is one compute-bound process that runs for 1 sec at a time and many I/O-bound processes that use little
CPU time but each have to perform 1000 disk reads to complete. The compute-bound process runs for 1 sec, then it reads a disk block. All the I/O processes now run and start disk reads.
When the compute-bound process gets its disk block, it runs for another 1 sec, followed by all the I/O-bound processes in quick succession.
The net result is that each I/O-bound process gets to read 1 block per second and will take 1000 sec to finish. With a scheduling algorithm that preempted the compute-bound process every 10 msec, the I/O-bound processes would finish in 10 sec instead of 1000 sec, and without slowing down the compute-bound process very much.
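To make the mechanics concrete, here is a minimal first-come first-served sketch in Python (my illustration, not from the text): it models only the single ready queue, ignores blocking and I/O, and uses made-up job names and run times.

```python
# A minimal first-come first-served sketch. Jobs are (name, run_time)
# pairs; names and times are hypothetical, and all jobs arrive at time 0.
from collections import deque

def fcfs(jobs):
    """Run jobs in arrival order; return each job's turnaround time."""
    queue = deque(jobs)                  # the single queue of ready jobs
    clock = 0
    turnaround = {}
    while queue:
        name, run_time = queue.popleft() # pick the job at the head
        clock += run_time                # let it run to completion
        turnaround[name] = clock
    return turnaround

print(fcfs([("CPU-bound", 1000), ("IO-1", 1), ("IO-2", 1)]))
# {'CPU-bound': 1000, 'IO-1': 1001, 'IO-2': 1002}
```

Even this toy version shows the weakness just described: short jobs stuck behind a long one inherit its entire run time in their turnaround.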
Shortest Job First
Now let us look at another nonpreemptive batch algorithm that assumes the run times are known in advance. In an insurance company, for example, people can predict quite accurately how long it will take to run a batch of 1000 claims, since similar work is done every day. When several equally important jobs are sitting in the input queue waiting to be started, the scheduler picks the shortest job first. Look at Fig. 2-39. Here we find four jobs A, B, C, and D with run times of 8, 4, 4, and 4 minutes, respectively. By running them in that order, the turnaround time for A is 8 minutes, for B is 12 minutes, for C is 16 minutes, and for D is 20 minutes, for an average of 14 minutes.
Figure 2-39. An example of shortest job first scheduling. (a) Running four jobs in the original order. (b) Running them in shortest job first order.
Now let us consider running these four jobs using shortest job first, as shown in Fig. 2-39(b). The turnaround times are now 4, 8, 12, and 20 minutes, for an average of 11 minutes. Shortest job first is provably optimal. Consider the case of four jobs, with run times of a, b, c, and d, respectively. The first job finishes at time a, the second finishes at time a + b, and so on. The mean turnaround time is (4a + 3b + 2c + d)/4. It is clear that a contributes more to the average than the other times, so it should be the shortest job, with b next, then c, and finally d as the longest, since it affects only its own turnaround time. The same argument applies equally well to any number of jobs.
It is worth pointing out that shortest job first is only optimal when all the jobs are available simultaneously. As a counterexample, consider five jobs, A through E, with run times of 2, 4, 1, 1, and 1, respectively. Their arrival times are 0, 0, 3, 3, and 3. Initially, only A or B can be chosen, since the other three jobs have not arrived yet. Using shortest job first we will run the jobs in the order A, B, C, D, E, for an average wait of 4.6. However, running them in the order B, C, D, E, A has an average wait of 4.4.
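The arithmetic above is easy to check mechanically. The short sketch below (my illustration) computes the mean turnaround time for jobs run back to back in a given order, assuming all jobs arrive at time 0; it reproduces the 14-minute and 11-minute averages from Fig. 2-39.

```python
# Verifying the shortest-job-first arithmetic with the run times from
# Fig. 2-39; all jobs are assumed to arrive at time 0.

def mean_turnaround(run_times):
    """Mean turnaround when jobs run back to back in the given order."""
    clock, total = 0, 0
    for t in run_times:
        clock += t            # this job finishes at the current clock
        total += clock
    return total / len(run_times)

print(mean_turnaround([8, 4, 4, 4]))          # original order: 14.0
print(mean_turnaround(sorted([8, 4, 4, 4])))  # shortest job first: 11.0
```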
Shortest Remaining Time Next
A preemptive version of shortest job first is shortest remaining time next. With this algorithm, the scheduler always chooses the process whose remaining run time is the shortest. Again here, the run time has to be known in advance. When a new job arrives, its total time is compared to the current process' remaining time. If the new job needs less time to finish than the current process, the current process is suspended and the new job started. This scheme allows new short jobs to get good service.
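The following sketch (my illustration, with hypothetical arrival and run times) simulates shortest remaining time next. It exploits the fact that the scheduler only needs to re-decide when a job arrives or the running job finishes.

```python
# A minimal event-driven sketch of shortest remaining time next.
# Arrival and run times are hypothetical; ties are broken arbitrarily.
import heapq

def srtn(jobs):
    """jobs: list of (arrival, run_time, name); returns finish times."""
    jobs = sorted(jobs)                     # by arrival time
    ready, finish = [], {}                  # heap of (remaining, name)
    clock, i = 0, 0
    while i < len(jobs) or ready:
        if not ready:                       # idle until the next arrival
            clock = max(clock, jobs[i][0])
        while i < len(jobs) and jobs[i][0] <= clock:
            arrival, run, name = jobs[i]    # admit newly arrived jobs
            heapq.heappush(ready, (run, name))
            i += 1
        remaining, name = heapq.heappop(ready)
        # Run until this job finishes or the next job arrives, whichever
        # comes first; a new arrival may preempt the current job.
        horizon = jobs[i][0] if i < len(jobs) else clock + remaining
        ran = min(remaining, horizon - clock)
        clock += ran
        if ran == remaining:
            finish[name] = clock
        else:
            heapq.heappush(ready, (remaining - ran, name))
    return finish

print(srtn([(0, 8, "A"), (1, 4, "B"), (2, 2, "C")]))
# {'C': 4, 'B': 7, 'A': 14}: B preempts A at time 1, C preempts B at time 2
```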
Three-Level Scheduling
From a certain perspective, batch systems allow scheduling at three different levels, as illustrated in Fig. 2-40. As jobs arrive at the system, they are initially placed in an input queue stored on the disk. The admission scheduler decides which jobs to admit to the system. The others are kept in the input queue until they are selected. A typical algorithm for admission control might be to look for a mix of compute-bound jobs and I/O-bound jobs. Alternatively, short jobs could be admitted quickly whereas longer jobs would have to wait. The admission scheduler is free to hold some jobs in the input queue and admit jobs that arrive later if it so chooses.

Figure 2-40. Three-level scheduling.

2.5.3 Scheduling in Interactive Systems
We will now look at some algorithms that can be used in interactive systems. All of these can also be used as the CPU scheduler in batch systems. While three-level scheduling is not possible here, two-level scheduling (memory scheduler and CPU scheduler) is possible and common. Below we will focus on the CPU scheduler.
Round-Robin Scheduling
Now let us look at some specific scheduling algorithms. One of the oldest, simplest, fairest, and most widely used algorithms is round robin. Each process is assigned a time interval, called its quantum, which it is allowed to run. If the process is still running at the end of the quantum, the CPU is preempted and given to another process. If the process has blocked or finished before the quantum has elapsed, the CPU switching is done when the process blocks, of course. Round robin is easy to implement. All the scheduler needs to do is maintain a list of runnable processes, as shown in Fig. 2-41(a). When the process uses up its quantum, it is put on the end of the list, as shown in Fig. 2-41(b).

Figure 2-41. Round-robin scheduling. (a) The list of runnable processes. (b) The list of runnable processes after B uses up its quantum.
The only interesting issue with round robin is the length of the quantum. Switching from one process to another requires a certain amount of time for doing the administration—saving and loading registers and memory maps, updating various tables and lists, flushing and reloading the memory cache, etc. Suppose that this process switch or context switch, as it is sometimes called, takes 1 msec, including switching memory maps, flushing and reloading the cache, etc. Also suppose that the quantum is set at 4 msec. With these parameters, after doing 4 msec of useful work, the CPU will have to spend 1 msec on process switching. Twenty percent of the CPU time will be wasted on administrative overhead. Clearly this is too much.
To improve the CPU efficiency, we could set the quantum to, say, 100 msec. Now the wasted time is only 1 percent. But consider what happens on a timesharing system if ten interactive users hit the carriage return key at roughly the same time. Ten processes will be put on the list of runnable processes. If the CPU is idle, the first one will start immediately, the second one may not start until 100 msec later, and so on. The unlucky last one may have to wait 1 sec before getting a chance, assuming all the others use their full quanta. Most users will perceive a 1-sec response to a short command as sluggish.
Another factor is that if the quantum is set longer than the mean CPU burst, preemption will rarely happen. Instead, most processes will perform a blocking operation before the quantum runs out, causing a process switch. Eliminating preemption improves performance because process switches then only happen when they are logically necessary, that is, when a process blocks and cannot continue.
The conclusion can be formulated as follows: setting the quantum too short causes too many process switches and lowers the CPU efficiency, but setting it too long may cause poor response to short interactive requests. A quantum around 20-50 msec is often a reasonable compromise.
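Both halves of this trade-off are simple to compute. The sketch below redoes the arithmetic from the text, assuming the 1-msec process-switch cost used above.

```python
# Redoing the round-robin arithmetic from the text, assuming a 1-msec
# process-switch cost.

def overhead(quantum_ms, switch_ms=1):
    """Fraction of CPU time lost to process switching."""
    return switch_ms / (quantum_ms + switch_ms)

def worst_wait(quantum_ms, others, switch_ms=1):
    """Wait before the last of n runnable processes first gets the CPU."""
    return others * (quantum_ms + switch_ms)

print(f"{overhead(4):.0%}")            # 20% wasted with a 4-msec quantum
print(f"{overhead(100):.0%}")          # 1% wasted with a 100-msec quantum
print(worst_wait(100, 9), "msec")      # 909 msec: roughly the 1-sec delay
```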
Priority Scheduling
Round robin scheduling makes the implicit assumption that all processes are equally important.
Frequently, the people who own and operate multiuser computers have different ideas on that subject. At a university, the pecking order may be deans first, then professors, secretaries, janitors, and finally students. The need to take external factors into account leads to priority scheduling. The basic idea is straightforward: each process is assigned a priority, and the runnable process with the highest priority is allowed to run.
Even on a PC with a single owner, there may be multiple processes, some more important than others. For example, a daemon process sending electronic mail in the background should be assigned a lower priority than a process displaying a video film on the screen in real time.
To prevent high-priority processes from running indefinitely, the scheduler may decrease the priority of the currently running process at each clock tick (i.e., at each clock interrupt). If this action causes its priority to drop below that of the next highest process, a process switch occurs.
Alternatively, each process may be assigned a maximum time quantum that it is allowed to run.
When this quantum is used up, the next highest priority process is given a chance to run.
Priorities can be assigned to processes statically or dynamically. On a military computer, processes started by generals might begin at priority 100, processes started by colonels at 90, majors at 80, captains at 70, lieutenants at 60, and so on. Alternatively, at a commercial computer center, high-priority jobs might cost 100 dollars an hour, medium priority 75 dollars an hour, and low priority 50 dollars an hour. The UNIX system has a command, nice, which allows a user to voluntarily reduce the priority of his process, in order to be nice to the other users. Nobody ever uses it.
Priorities can also be assigned dynamically by the system to achieve certain system goals. For example, some processes are highly I/O bound and spend most of their time waiting for I/O to complete. Whenever such a process wants the CPU, it should be given the CPU immediately, to let it start its next I/O request, which can then proceed in parallel with another process actually computing. Making the I/O-bound process wait a long time for the CPU will just mean having it around occupying memory for an unnecessarily long time. A simple algorithm for giving good service to I/O-bound processes is to set the priority to 1/f, where f is the fraction of the last quantum that a process used. A process that used only 1 msec of its 50-msec quantum would get priority 50, while a process that ran 25 msec before blocking would get priority 2, and a process that used the whole quantum would get priority 1.
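A minimal sketch of this 1/f rule, assuming the 50-msec quantum from the example:

```python
# A sketch of the 1/f dynamic-priority rule, assuming the 50-msec
# quantum from the example above. Higher value = higher priority.

QUANTUM_MS = 50

def priority(used_ms):
    f = used_ms / QUANTUM_MS        # fraction of the last quantum used
    return 1 / f

for used_ms in (1, 25, 50):
    print(f"used {used_ms:2d} msec -> priority {priority(used_ms):.0f}")
# used  1 msec -> priority 50
# used 25 msec -> priority 2
# used 50 msec -> priority 1
```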
It is often convenient to group processes into priority classes and use priority scheduling among the classes but round-robin scheduling within each class. Figure 2-42 shows a system with four priority classes. The scheduling algorithm is as follows: as long as there are runnable processes in priority class 4, just run each one for one quantum, round-robin fashion, and never bother with lower priority classes. If priority class 4 is empty, then run the class 3 processes round robin. If classes 4 and 3 are both empty, then run class 2 round robin, and so on. If priorities are not adjusted occasionally, lower priority classes may all starve to death.

Figure 2-42. A scheduling algorithm with four priority classes.
Multiple Queues
One of the earliest priority schedulers was in CTSS (Corbató et al., 1962). CTSS had the problem that process switching was very slow because the 7094 could hold only one process in memory. Each switch meant swapping the current process to disk and reading in a new one from disk. The CTSS designers quickly realized that it was more efficient to give CPU-bound processes a large quantum once in a while, rather than giving them small quanta frequently (to reduce swapping). On the other hand, giving all processes a large quantum would mean poor response time, as we have already seen. Their solution was to set up priority classes. Processes in the highest class were run for one quantum. Processes in the next highest class were run for two quanta. Processes in the next class were run for four quanta, and so on. Whenever a process used up all the quanta allocated to it, it was moved down one class.
As an example, consider a process that needed to compute continuously for 100 quanta. It would initially be given one quantum, then swapped out. Next time it would get two quanta before being swapped out. On succeeding runs it would get 4, 8, 16, 32, and 64 quanta although it would have used only 37 of the final 64 quanta to complete its work. Only 7 swaps would be needed (including the initial load) instead of 100 with a pure round-robin algorithm.
Furthermore, as the process sank deeper and deeper into the priority queues, it would be run less and less frequently, saving the CPU for short, interactive processes.
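The doubling arithmetic is easy to verify; the loop below (an illustration, not CTSS code) counts the loads needed for a 100-quantum job.

```python
# Checking the CTSS arithmetic: the allocation doubles each time the
# process moves down a class, so 100 quanta of work need only 7 loads.

need, quantum, loads = 100, 1, 0
while need > 0:
    loads += 1
    need -= quantum     # run the full allocation (or whatever is left)
    quantum *= 2        # moved down one class: next allocation doubles
print(loads)            # 7, with quanta of 1, 2, 4, 8, 16, 32, and 64
```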
The following policy was adopted to prevent a process that needed to run for a long time when it first started but became interactive later, from being punished forever. Whenever a carriage return was typed at a terminal, the process belonging to that terminal was moved to the highest priority class, on the assumption that it was about to become interactive. One fine day, some user with a heavily CPU-bound process discovered that just sitting at the terminal and typing carriage returns at random every few seconds did wonders for his response time. He told all his friends.
Moral of the story: getting it right in practice is much harder than getting it right in principle.
Many other algorithms have been used for assigning processes to priority classes. For example, the influential XDS 940 system (Lampson, 1968), built at Berkeley, had four priority classes, called terminal, I/O, short quantum, and long quantum. When a process that was waiting for terminal input was finally awakened, it went into the highest priority class (terminal). When a process waiting for a disk block became ready, it went into the second class. When a process was still running when its quantum ran out, it was initially placed in the third class. However, if a process used up its quantum too many times in a row without blocking for terminal or other I/O, it was moved down to the bottom queue. Many other systems use something similar to favor interactive users and processes over background ones.
Shortest Process Next
Because shortest job first always produces the minimum average response time for batch systems, it would be nice if it could be used for interactive processes as well. To a certain extent, it can be. Interactive processes generally follow the pattern of wait for command, execute command, wait for command, execute command, and so on. If we regard the execution of each command as a separate “job,” then we could minimize overall response time by running the shortest one first. The only problem is figuring out which of the currently runnable processes is the shortest one.
One approach is to make estimates based on past behavior and run the process with the shortest estimated running time. Suppose that the estimated time per command for some terminal is T0. Now suppose its next run is measured to be T1. We could update our estimate by taking a weighted sum of these two numbers, that is, aT0 + (1 - a)T1. Through the choice of a we can decide to have the estimation process forget old runs quickly, or remember them for a long time.
With a = 1/2, we get successive estimates of
T0, T0/2 + T1/2, T0/4 + T1/4 + T2/2, T0/8 + T1/8 + T2/4 + T3/2
After three new runs, the weight of T0 in the new estimate has dropped to 1/8.
The technique of estimating the next value in a series by taking the weighted average of the current measured value and the previous estimate is sometimes called aging . It is applicable to many situations where a prediction must be made based on previous values. Aging is especially easy to implement when a = 1/2. All that is needed is to add the new value to the current estimate and divide the sum by 2 (by shifting it right 1 bit).
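A small sketch of aging with a = 1/2, using a hypothetical initial estimate and a stream of measured run times:

```python
# A sketch of aging with a = 1/2: the new estimate is the average of the
# previous estimate and the latest measured run time. With integer times
# the division by 2 can be a single right shift, as noted above.

def age(estimate, measured):
    return (estimate + measured) / 2

estimate = 40.0                 # T0: a hypothetical initial estimate, msec
for measured in (20, 20, 20):   # three runs that each measure 20 msec
    estimate = age(estimate, measured)
    print(estimate)             # 30.0, 25.0, 22.5: T0's weight is now 1/8
```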
Guaranteed Scheduling
A completely different approach to scheduling is to make real promises to the users about performance and then live up to them. One promise that is realistic to make and easy to live up to is this: If there are n users logged in while you are working, you will receive about 1/n of the
CPU power. Similarly, on a single user system with n processes running, all things being equal, each one should get 1/n of the CPU cycles.
To make good on this promise, the system must keep track of how much CPU each process has had since its creation. It then computes the amount of CPU each one is entitled to, namely the time since creation divided by n. Since the amount of CPU time each process has actually had is also known, it is straightforward to compute the ratio of actual CPU time consumed to CPU time entitled. A ratio of 0.5 means that a process has only had half of what it should have had, and a ratio of 2.0 means that a process has had twice as much as it was entitled to. The algorithm is then to run the process with the lowest ratio until its ratio has moved above its closest competitor.
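As a sketch (with made-up numbers), the selection step might look like this:

```python
# A sketch of guaranteed scheduling: run the process whose ratio of CPU
# time consumed to CPU time entitled is lowest. All numbers hypothetical.

def pick_next(processes, now, n):
    """processes: list of (name, creation_time, cpu_used) tuples."""
    def ratio(proc):
        name, created, used = proc
        entitled = (now - created) / n   # fair share since creation
        return used / entitled
    return min(processes, key=ratio)[0]

procs = [("A", 0, 60), ("B", 0, 20), ("C", 50, 15)]
print(pick_next(procs, now=100, n=3))    # B, the furthest below its share
```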
Lottery Scheduling
While making promises to the users and then living up to them is a fine idea, it is difficult to implement. However, another algorithm can be used to give similarly predictable results with a much simpler implementation. It is called lottery scheduling (Waldspurger and Weihl, 1994).
The basic idea is to give processes lottery tickets for various system resources, such as CPU time.
Whenever a scheduling decision has to be made, a lottery ticket is chosen at random, and the process holding that ticket gets the resource. When applied to CPU scheduling, the system might hold a lottery 50 times a second, with each winner getting 20 msec of CPU time as a prize.
To paraphrase George Orwell: “All processes are equal, but some processes are more equal.”
More important processes can be given extra tickets, to increase their odds of winning. If there are 100 tickets outstanding, and one process holds 20 of them, it will have a 20 percent chance of winning each lottery. In the long run, it will get about 20 percent of the CPU. In contrast to a priority scheduler, where it is very hard to state what having a priority of 40 actually means, here the rule is clear: a process holding a fraction f of the tickets will get about a fraction f of the resource in question.
Lottery scheduling has several interesting properties. For example, if a new process shows up and is granted some tickets, at the very next lottery it will have a chance of winning in proportion to the number of tickets it holds. In other words, lottery scheduling is highly responsive.
Cooperating processes may exchange tickets if they wish. For example, when a client process sends a message to a server process and then blocks, it may give all of its tickets to the server, to increase the chance of the server running next. When the server is finished, it returns the tickets so the client can run again. In fact, in the absence of clients, servers need no tickets at all.
Lottery scheduling can be used to solve problems that are difficult to handle with other methods.
One example is a video server in which several processes are feeding video streams to their clients, but at different frame rates. Suppose that the processes need frames at 10, 20, and 25 frames/sec. By allocating these processes 10, 20, and 25 tickets, respectively, they will automatically divide the CPU in approximately the correct proportion, that is, 10 : 20 : 25.
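A quick simulation (an illustration, not the Waldspurger-Weihl implementation) shows the proportional-share property with these ticket counts:

```python
# A minimal lottery-scheduling sketch using the video-server ticket
# counts above: a process holding fraction f of the tickets should win
# about fraction f of the drawings.
import random
from collections import Counter

tickets = {"10 fps": 10, "20 fps": 20, "25 fps": 25}
pool = [proc for proc, n in tickets.items() for _ in range(n)]

draws = 100_000
wins = Counter(random.choice(pool) for _ in range(draws))
for proc, n in tickets.items():
    print(proc, round(wins[proc] / draws, 3), "expected", round(n / len(pool), 3))
```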
Fair-Share Scheduling
So far we have assumed that each process is scheduled on its own, without regard to who its owner is. As a result, if user 1 starts up 9 processes and user 2 starts up 1 process, with round robin or equal priorities, user 1 will get 90% of the CPU and user 2 will get only 10% of it.
To prevent this situation, some systems take into account who owns a process before scheduling it. In this model, each user is allocated some fraction of the CPU and the scheduler picks processes in such a way as to enforce it. Thus if two users have each been promised 50% of the
CPU, they will each get that, no matter how many processes they have in existence.
As an example, consider a system with two users, each of which has been promised 50% of the CPU. User 1 has four processes, A, B, C, and D, and user 2 has only 1 process, E. If round-robin scheduling is used, a possible scheduling sequence that meets all the constraints is this one:
A E B E C E D E A E B E C E D E ...
On the other hand, if user 1 is entitled to twice as much CPU time as user 2, we might get
A B E C D E A B E C D E ...
Numerous other possibilities exist of course, and can be exploited, depending on what the notion of fairness is.
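One simple way to generate such sequences, sketched below with the processes from the example, is to always pick the user who is furthest below his promised share and round-robin within that user's processes. This is an illustration of one possible fairness notion, not the only one.

```python
# A fair-share sketch: pick the user furthest below his promised share,
# then round-robin within that user's processes. Shares and process
# names follow the 50/50 example above.
from collections import deque

def fair_share(shares, procs, slots):
    """shares: user -> promised fraction; procs: user -> deque of names."""
    used = {user: 0 for user in shares}
    sequence = []
    for _ in range(slots):
        # the user with the largest deficit relative to his promise
        user = min(shares, key=lambda u: used[u] / shares[u])
        proc = procs[user].popleft()
        procs[user].append(proc)          # round robin within the user
        sequence.append(proc)
        used[user] += 1
    return " ".join(sequence)

procs = {"user1": deque("ABCD"), "user2": deque("E")}
print(fair_share({"user1": 0.5, "user2": 0.5}, procs, 8))
# A E B E C E D E, matching the sequence shown above
```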
2.5.4 Scheduling in Real-Time Systems
A real-time system is one in which time plays an essential role. Typically, one or more physical devices external to the computer generate stimuli, and the computer must react appropriately to them within a fixed amount of time. For example, the computer in a compact disc player gets the bits as they come off the drive and must convert them into music within a very tight time interval. If the calculation takes too long, the music will sound peculiar. Other real-time systems are patient monitoring in a hospital intensive-care unit, the autopilot in an aircraft, and robot control in an automated factory. In all these cases, having the right answer but having it too late is often just as bad as not having it at all.
Real-time systems are generally categorized as hard real time, meaning there are absolute deadlines that must be met, or else, and soft real time, meaning that missing an occasional deadline is undesirable, but nevertheless tolerable. In both cases, real-time behavior is achieved by dividing the program into a number of processes, each of whose behavior is predictable and known in advance. These processes are generally short lived and can run to completion in well under a second. When an external event is detected, it is the job of the scheduler to schedule the processes in such a way that all deadlines are met.
The events that a real-time system may have to respond to can be further categorized as periodic (occurring at regular intervals) or aperiodic (occurring unpredictably). A system may have to respond to multiple periodic event streams. Depending on how much time each event requires for processing, it may not even be possible to handle them all. For example, if there are m periodic events and event i occurs with period Pi and requires Ci seconds of CPU time to handle each event, then the load can only be handled if

C1/P1 + C2/P2 + ... + Cm/Pm <= 1

A real-time system that meets this criterion is said to be schedulable.
As an example, consider a soft real-time system with three periodic events, with periods of 100,
200, and 500 msec, respectively. If these events require 50, 30, and 100 msec of CPU time per event, respectively, the system is schedulable because 0.5 + 0.15 + 0.2 < 1. If a fourth event with a period of 1 sec is added, the system will remain schedulable as long as this event does not need more than 150 msec of CPU time per event. Implicit in this calculation is the assumption that the context-switching overhead is so small that it can be ignored.
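The test is a one-liner to check; the numbers below are the ones from this example.

```python
# Checking the schedulability test: the periodic load is the sum of
# Ci/Pi and must not exceed 1 (context-switch overhead is ignored).

events = [(100, 50), (200, 30), (500, 100)]   # (Pi, Ci) pairs, in msec

load = sum(c / p for p, c in events)
print(round(load, 2))                    # 0.85 < 1, so schedulable
# Head-room for a fourth event with a 1-sec (1000-msec) period:
print(round((1 - load) * 1000), "msec")  # 150 msec of CPU time per event
```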
Real-time scheduling algorithms can be static or dynamic. The former make their scheduling decisions before the system starts running. The latter make their scheduling decisions at run time.
Static scheduling only works when there is perfect information available in advance about the work needed to be done and the deadlines that have to be met. Dynamic scheduling algorithms do not have these restrictions. We will defer our study of specific algorithms until we treat real-time multimedia systems in Chap. 7.
2.5.5 Policy versus Mechanism
Up until now, we have tacitly assumed that all the processes in the system belong to different users and are thus competing for the CPU. While this is often true, sometimes it happens that one process has many children running under its control. For example, a database management system process may have many children. Each child might be working on a different request, or each one might have some specific function to perform (query parsing, disk access, etc.). It is entirely possible that the main process has an excellent idea of which of its children are the most important (or time critical) and which the least. Unfortunately, none of the schedulers discussed above accept any input from user processes about scheduling decisions. As a result, the scheduler rarely makes the best choice.
The solution to this problem is to separate the scheduling mechanism from the scheduling policy . What this means is that the scheduling algorithm is parameterized in some way, but the parameters can be filled in by user processes. Let us consider the database example once again.
Suppose that the kernel uses a priority scheduling algorithm but provides a system call by which a process can set (and change) the priorities of its children. In this way the parent can control in detail how its children are scheduled, even though it itself does not do the scheduling. Here the mechanism is in the kernel but policy is set by a user process.
2.5.6 Thread Scheduling
When several processes each have multiple threads, we have two levels of parallelism present: processes and threads. Scheduling in such systems differs substantially depending on whether user-level threads or kernel-level threads (or both) are supported.
Let us consider user-level threads first. Since the kernel is not aware of the existence of threads, it operates as it always does, picking a process, say, A, and giving A control for its quantum. The thread scheduler inside A decides which thread to run, say A1. Since there are no clock interrupts to multiprogram threads, this thread may continue running as long as it wants to. If it uses up the process' entire quantum, the kernel will select another process to run.
When the process A finally runs again, thread A1 will resume running. It will continue to consume all of A ’s time until it is finished. However, its antisocial behavior will not affect other processes. They will get whatever the scheduler considers their appropriate share, no matter what is going on inside process A .
Now consider the case that A's threads have relatively little work to do per CPU burst, for example, 5 msec of work within a 50-msec quantum. Consequently, each one runs for a little while, then yields the CPU back to the thread scheduler. This might lead to the sequence A1, A2, A3, A1, A2, A3, A1, A2, A3, A1, before the kernel switches to process B. This situation is illustrated in Fig. 2-43(a).
The scheduling algorithm used by the run-time system can be any of the ones described above. In practice, round-robin scheduling and priority scheduling are most common. The only constraint is the absence of a clock to interrupt a thread that has run too long.
Now consider the situation with kernel-level threads. Here the kernel picks a particular thread to run. It does not have to take into account which process the thread belongs to, but it can if it wants to. The thread is given a quantum and is forcibly suspended if it exceeds the quantum. With a 50-msec quantum but threads that block after 5 msec, the thread order for some period of 30 msec might be A1, B1, A2, B2, A3, B3, something not possible with these parameters and user-level threads. This situation is partially depicted in Fig. 2-43(b).

Figure 2-43. (a) Possible scheduling of user-level threads with a 50-msec process quantum and threads that run 5 msec per CPU burst. (b) Possible scheduling of kernel-level threads with the same characteristics as (a).
A major difference between user-level threads and kernel-level threads is the performance. Doing a thread switch with user-level threads takes a handful of machine instructions. With kernel-level threads it requires a full context switch, changing the memory map and invalidating the cache, which is several orders of magnitude slower. On the other hand, with kernel-level threads, having a thread block on I/O does not suspend the entire process as it does with user-level threads. Since the kernel knows that switching from a thread in process A to a thread in process B is more expensive than running a second thread in process A (due to having to change the memory map and having the memory cache spoiled), it can take this information into account when making a decision. For example, given two threads that are otherwise equally important, with one of them belonging to the same process as a thread that just blocked and one belonging to a different process, preference could be given to the former.
Another important factor is that user-level threads can employ an application-specific thread scheduler. Consider, for example, the Web server of Fig. 2-10. Suppose that a worker thread has just blocked and the dispatcher thread and two worker threads are ready. Who should run next?
The run-time system, knowing what all the threads do, can easily pick the dispatcher to run next, so it can start another worker running. This strategy maximizes the amount of parallelism in an environment where workers frequently block on disk I/O. With kernel-level threads, the kernel would never know what each thread did (although they could be assigned different priorities). In general, however, application-specific thread schedulers can tune an application better than the kernel can.
2.6 RESEARCH ON PROCESSES AND THREADS
In Chap. 1, we looked at some of the current research in operating system structure. In this and subsequent chapters we will look at more narrowly focused research, starting with processes. As will become clear in time, some subjects are much more settled than others. Most of the research tends to be on the new topics, rather than ones that have been around for decades.
The concept of a process is an example of something that is well settled. Almost every system has some notion of a process as a container for grouping together related resources, such as an address space, threads, open files, protection permissions, etc. Different systems do the grouping slightly differently, but these are just engineering differences. The basic idea is not very controversial any more and there is little new research on the subject.
Threads are a newer idea than processes, so there is still some research going on about them. Hauser et al. (1993) looked at how real programs actually use threads and came up with 10 different paradigms for thread usage. Thread scheduling (both uniprocessor and multiprocessor) is still a topic near and dear to the heart of some researchers (Blumofe and Leiserson, 1994; Buchanan and Chien, 1997; Corbalán et al., 2000; Chandra et al., 2000; Duda and Cheriton, 1999; Ford and Susarla, 1996; and Petrou et al., 1999). However, few actual system designers are walking around all day wringing their hands for lack of a decent thread scheduling algorithm, so it appears this type of research is more researcher push than demand pull.
Closely related to threads is thread synchronization and mutual exclusion. In the 1970s and 1980s that subject was mined for all it was worth, so there is not much current work on the subject, and what there is tends to focus on performance (e.g., Liedtke, 1993), tools for detecting synchronization errors (Savage et al., 1997), or modifying old concepts in new ways (Tai and Carver, 1996; Trono, 2000). Finally, new POSIX-conformant threads packages are still being produced and reported on (Alfieri, 1994; Miller, 1999).

Similar Documents

Free Essay

Cpu Scheduling

...Chapter 5: CPU Scheduling Operating System Concepts – 8th Edition Silberschatz, Galvin and Gagne ©2009 Chapter 5: CPU Scheduling        Basic Concepts Scheduling Criteria Scheduling Algorithms Thread Scheduling Multiple-Processor Scheduling Operating Systems Examples Algorithm Evaluation Operating System Concepts – 8th Edition 5.2 Silberschatz, Galvin and Gagne ©2009 Objectives    To introduce CPU scheduling, which is the basis for multiprogrammed operating systems To describe various CPU-scheduling algorithms To discuss evaluation criteria for selecting a CPU-scheduling algorithm for a particular system Operating System Concepts – 8th Edition 5.3 Silberschatz, Galvin and Gagne ©2009 Basic Concepts    Maximum CPU utilization obtained with multiprogramming CPU–I/O Burst Cycle – Process execution consists of a cycle of CPU execution and I/O wait CPU burst distribution Operating System Concepts – 8th Edition 5.4 Silberschatz, Galvin and Gagne ©2009 Alternating Sequence of CPU and I/O Bursts Operating System Concepts – 8th Edition 5.5 Silberschatz, Galvin and Gagne ©2009 Histogram of CPU-burst Times Operating System Concepts – 8th Edition 5.6 Silberschatz, Galvin and Gagne ©2009 CPU Scheduler   Selects from among the processes in ready queue, and allocates the CPU to one of them  Queue may be ordered in various ways Switches from running to waiting state Switches from running...

Words: 3375 - Pages: 14

Free Essay

Maintenance Scheduling

...Scheduling (when to do the job) is the process by which all resources needed for specific jobs are allocated, coordinated, and synchronized at the proper time and place, with necessary access, so that work can be executed with minimal delay and completed by the agreed upon date, within budget estimates. The schedule establishes when jobs will be done and what resources can best be applied to their performance. Resources include manpower, materials, tools and special equipment. Access refers to when the equipment will be prepared and accessible so that it can be worked on in safe (locked out/tagged out) condition, with necessary precautions taken, permits obtained, and any specialized documentation, drawings, or information in hand. Proper time relates to job start, duration of execution, and completion within the time frame agreed upon with the internal customer during the weekly coordination meeting. The Weekly Expectation Scheduling is the locus from which all maintenance activity is executed. Scheduling should be viewed as the “point” function and “marketing arm” of the system because it yields the earliest tangible results (often within weeks of start up). All individuals and groups perform better and accomplish more with clearly established, communicated and published expectations. When the maintenance function is managed without a weekly schedule, there are no specific expectations as to what is to be accomplished with the resources for which payroll checks will be...

Words: 2773 - Pages: 12

Free Essay

Cpu Scheduling

...CPU SCHEDULINGCPU scheduling in UNIX is designed to benefit interactive processes. Processes are given small CPU time slices by a priority algorithm that reduces to round-robin scheduling for CPU-bound jobs.The scheduler on UNIX system belongs to the general class of operating system schedulers known as round robin with multilevel feedback which means that the kernel allocates the CPU time to a process for small time slice, preempts a process that exceeds its time slice and feed it back into one of several priority queues. A process may need much iteration through the "feedback loop" before it finishes. When kernel does a context switch and restores the context of a process. The process resumes execution from the point where it had been suspended.Each process table entry contains a priority field. There is a process table for each process which contains a priority field for process scheduling. The priority of a process is lower if they have recently used the CPU and vice versa.The more CPU time a process accumulates, the lower (more positive) its priority becomes, and vice versa, so there is negative feedback in CPU scheduling and it is difficult for a single process to take all the CPU time. Process aging is employed to prevent starvation.Older UNIX systems used a 1-second quantum for the round- robin scheduling. 4.33SD reschedules processes every 0.1 second and recomputed priorities every second. The round-robin scheduling is accomplished by the -time-out mechanism, which tells...

Words: 2136 - Pages: 9

Free Essay

Cpu Scheduling Algorithms Simulation

...FINAL YEAR PROJECT On DESIGNING A SIMULATOR TO IMPLEMENT A JOB SCHEDULER BASED ON DIFFERENT POPULAR CPU SCHEDULING ALGORITHMS Submitted as partial fulfillment for the award of BACHELOR OF TECHNOLOGY Session 2014-15 In Computer Science & Engineering Under the guidance of Mr.MrinmoySen By AnanyaDas(11/CS/15, ananyadas092@gmail.com, +919681851782) AnshumanMahanty(11/CS/23, anshumanmahanty1@gmail.com, +917501169824) SayaniBanerjee(11/CS/93, sayanibanerjee.1@gmail.com, +919046422003) HALDIA INSTITUTE OF TECHNOLOGY, HALDIA CERTIFICATE This is to certify that the final year project (CS792) on ‘Designing a Simulator implementing job scheduler based on different popular CPU scheduling algorithms’ has been completed and submitted successfully by the project members Ananya Das (11/CS/15), Anshuman Mahanty (11/CS/23) and Sayani Banerjee (11/CS/93). ------------------------- -------------------------------- --------------------------- Mr. Tarun Kumar Ghosh Mr. Sourav Mandal, Mr. Mrinmoy Sen Head of the Department, Convenor, Asst. Prof., Project Mentor, Asst. Prof., Computer Science & Engg. Project Evaluation Committee Department of CSE ACKNOWLEDGEMENTS We use this opportunity to express our gratitude to everyone who has supported us through the ongoing course of this final year project. The project owes its success not just...

Words: 6989 - Pages: 28

Free Essay

Single-Stage Scheduling of Multiproduct Batch Plants: an Edible-Oil Deodorizer Case Study

...49, 8657–8669 8657 Single-Stage Scheduling of Multiproduct Batch Plants: An Edible-Oil Deodorizer Case Study Songsong Liu,† Jose M. Pinto,‡ and Lazaros G. Papageorgiou*,† Centre for Process Systems Engineering, Department of Chemical Engineering, UniVersity College London, Torrington Place, London WC1E 7JE, U.K., and Process Systems R&D, Praxair Inc., 39 Old Ridgebury Rad, Danbury, Connecticut 06810 This article considers the short-term scheduling of a single-stage batch edible-oil deodorizer that can process multiple products in several product groups. Sequence-dependent changeovers occur when switching from one product group to another. Based on the incorporation of products into product groups, mixed integer linear programming (MILP) models are proposed for two scenarios, with and without backlogs. Then, the models are successfully applied to a real-world case with 70 product orders over a 128-h planning horizon. Compared with a literature model developed for a similar problem, the proposed models exhibit significantly better performance. 1. Introduction In the past decade, a large number of optimization models and approaches have been proposed for batch scheduling and planning. A number of reviews on the planning and scheduling of batch processes have been presented in the literature.1-6 Initially, discrete-time formulation models using the state-task network7 (STN) or resource-task network8 (RTN) were used for batch scheduling problems. Because discrete-time formulations...

Words: 7686 - Pages: 31

Free Essay

Optimal Power Allocation and Scheduling for Two-Cell Capacity Maximization

...Optimal Power Allocation and Scheduling for Two-Cell Capacity Maximization. Anders Gjendemsjø∗, David Gesbert†, Geir E. Øien∗, and Saad G. Kiani†. ∗Dept. of Electronics and Telecom., Norwegian Univ. of Science and Technology, 7491 Trondheim, Norway, Email: {gjendems, oien}@iet.ntnu.no; †Mobile Communications Department, Institute Eurécom, 06560 Sophia-Antipolis, France, Email: {gesbert, kiani}@eurecom.fr. ... maximize the network capacity for the case of individual link power constraints [8] and a sum power constraint [9]. In [10] it is assumed that each base station, when it transmits, transmits with maximum power Pmax. Which base stations should be active in each time slot is decided according to a rate-maximization objective. However, no proof of optimality is given for the on/off power allocation. In [11] transmit power allocation for a downlink two-user interference channel is studied under a sum transmit power constraint and the assumption of symmetric interference. The derived power allocation depends on the level of interference; when the interference is above a certain threshold, the total power is allocated to the best user. For interference less than the threshold, the available power is divided among the two users according to a water-filling principle. However, due to the sum power constraint and the symmetric-interference assumption, these results are not readily applicable to two-cell power allocation, where it is more reasonable to assume individual power constraints...
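The threshold behaviour the excerpt describes, everything to the best user in one regime and a water-filling split in the other, can be sketched numerically. The toy below does two-user water-filling under a sum power constraint with invented channel figures; it deliberately ignores the interference coupling the paper actually analyzes, so it only illustrates the water-filling principle itself.

```python
# Two-user water-filling under a sum power constraint (illustrative only;
# channel figures invented, interference coupling ignored).
import math

def waterfill_two(n1, n2, total_power):
    """Return (p1, p2) maximizing log(1+p1/n1) + log(1+p2/n2), p1+p2 = P.

    n1, n2 are effective noise levels (noise / channel gain); the water
    level mu satisfies sum(max(0, mu - n_i)) = P.
    """
    mu = (total_power + n1 + n2) / 2   # interior solution: both above water
    p1, p2 = mu - n1, mu - n2
    if p1 < 0:                          # user 1's channel too poor: all to 2
        return 0.0, total_power
    if p2 < 0:                          # and vice versa
        return total_power, 0.0
    return p1, p2

def sum_rate(p1, p2, n1, n2):
    return math.log2(1 + p1 / n1) + math.log2(1 + p2 / n2)

# Similar channels -> power is shared; very unequal -> one user gets it all.
for n2 in (1.2, 10.0):
    p1, p2 = waterfill_two(1.0, n2, total_power=2.0)
    print(f"n2={n2}: p1={p1:.2f} p2={p2:.2f} rate={sum_rate(p1, p2, 1.0, n2):.2f}")
```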

Words: 4991 - Pages: 20

Premium Essay

Nt1330 Unit 1 Term Paper

...interrupt or trap: 1) Upon modification of the program counter (PC) register, a nonexistent program memory location, unimplemented in the microcontroller, is addressed. Answer: Event categorized as an address error trap. 2) The stack pointer (SP) value exceeds the value of the SPLIM (Stack Pointer LIMit, set by the user) register, i.e., the stack is exceeded from the upper side. Answer: Event categorized as a stack error trap. 3) The user is trying to execute the instruction y = t / 0. Answer: Event categorized as an arithmetic error trap. 4) Ahmed is trying to open a document by double-clicking the mouse. Answer: Interrupt. 5) Typing in MS Word using keystrokes. Answer: Interrupt. Q5: Select a mechanism for the given policies: 1) Process scheduling. Sol: The process scheduler is the component of the operating system that is responsible for deciding whether the currently running process should continue running and, if not, which process should run next. There are four events where the scheduler needs to step in and make this decision (sketched in code below): 1) The current process goes from the running to the waiting state because it issues an I/O request or some operating system request that cannot be satisfied immediately. 2) The current process terminates. 3) A timer interrupt causes the scheduler to run and decide that a process has run for its allotted interval of time and should be moved from the running to the ready state. 4) An I/O operation completes for a process that requested...
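Those four decision points map naturally onto an event-driven dispatcher. The following skeleton is hypothetical: the event names are invented for illustration and a plain FIFO ready queue stands in for a real scheduling policy.

```python
# Skeleton dispatcher reacting to the four scheduling decision points above
# (event names and queue structure invented for illustration).
from collections import deque
from enum import Enum, auto

class Event(Enum):
    BLOCKED_ON_IO = auto()   # running -> waiting (I/O or unsatisfiable request)
    TERMINATED = auto()      # running process exits
    TIMER_EXPIRED = auto()   # quantum used up: running -> ready
    IO_COMPLETE = auto()     # waiting -> ready; may trigger preemption

class Dispatcher:
    def __init__(self):
        self.ready = deque()
        self.running = None

    def on_event(self, event, process=None):
        if event in (Event.BLOCKED_ON_IO, Event.TERMINATED):
            self.running = None                 # CPU is free either way
        elif event == Event.TIMER_EXPIRED:
            self.ready.append(self.running)     # back of the ready queue
            self.running = None
        elif event == Event.IO_COMPLETE:
            self.ready.append(process)          # woken process becomes ready
        if self.running is None and self.ready:
            self.running = self.ready.popleft() # FIFO pick stands in for policy
        return self.running

d = Dispatcher()
d.ready.extend(["A", "B"])
d.running = d.ready.popleft()              # start with A on the CPU
print(d.on_event(Event.TIMER_EXPIRED))     # quantum over: B runs, A requeued
print(d.on_event(Event.BLOCKED_ON_IO))     # B blocks on I/O: A runs again
print(d.on_event(Event.IO_COMPLETE, "B"))  # B wakes and rejoins the ready queue
```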

Words: 1247 - Pages: 5

Premium Essay

A Distributed Joint Channel-Assignment, Scheduling and Routing Algorithm for Multi-Channel Ad Hoc Wireless Networks

...A Distributed Joint Channel-Assignment, Scheduling and Routing Algorithm for Multi-Channel Ad Hoc Wireless Networks Xiaojun Lin and Shahzada Rasool School of Electrical and Computer Engineering, Purdue University West Lafayette, IN 47907, U.S.A. {linx,srasool}@ecn.purdue.edu Abstract— The capacity of ad hoc wireless networks can be substantially increased by equipping each network node with multiple radio interfaces that can operate on multiple non-overlapping channels. However, new scheduling, channel-assignment, and routing algorithms are required to fully utilize the increased bandwidth in multi-channel multi-radio ad hoc networks. In this paper, we develop a fully distributed algorithm that jointly solves the channel-assignment, scheduling and routing problem. Our algorithm is an online algorithm, i.e., it does not require prior information on the offered load to the network, and can adapt automatically to changes in the network topology and offered load. We show that our algorithm is provably efficient. That is, even compared with the optimal centralized and offline algorithm, our proposed distributed algorithm can achieve a provable fraction of the maximum system capacity. Further, the achievable fraction that we can guarantee is larger than that of some other comparable algorithms in the literature. I. INTRODUCTION Multi-channel multi-radio ad hoc wireless networks have recently received a substantial amount of interest, especially under...

Words: 8961 - Pages: 36

Free Essay

Line Balancing

...The Process of Line Balancing: * Line balancing is essential for an effective product layout. * It is difficult to allocate equal work content to all stages in a process, so some imbalance, and hence a balancing loss, usually results. In the case of a line imbalance, extra resources need to be allocated to bring the process back to an efficient level of activity, keeping the effectiveness of each stage in the process as high as possible. * The balancing loss measures how much of the total time needed to produce a product or service is lost to the unequal allocation of work across stages. * Perfect balancing would mean that work content is evenly allocated to each stage of the process. * The effectiveness of the balancing activity is measured by the balancing loss (see the sketch below). * The longest stage in the process is called the bottleneck; it governs the flow of items through the whole process. Balancing Technique: * Various techniques are available; you must decide which is best for the goal you are aiming for. Precedence Technique: * "This technique is a representation of the ordering of the elements which compose the total work content of the product or service" (Topic four, product layout, page 197). Worked Example: * This is the technique where the number of stages equals the total work content...
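Here is a quick numeric sketch of the balancing-loss measure described above; the stage times are invented example data. The bottleneck fixes the cycle time, and the loss is the idle fraction of the total time available across all stages.

```python
# Balancing loss for a line whose stages have unequal work content
# (stage times are invented example data).

def balancing_loss(stage_times):
    """Idle time as a fraction of total available time across all stages.

    The longest stage (the bottleneck) fixes the cycle time; every other
    stage waits for it on each cycle.
    """
    cycle_time = max(stage_times)        # bottleneck governs the flow
    n = len(stage_times)
    idle_per_cycle = sum(cycle_time - t for t in stage_times)
    return idle_per_cycle / (n * cycle_time)

stages = [2.0, 3.0, 4.5, 3.5]            # minutes of work at each stage
print(f"bottleneck: {max(stages)} min, loss: {balancing_loss(stages):.1%}")
```

With these numbers the line loses about 28% of its capacity to waiting, even though only one stage is slow; that is why evening out work content matters more than speeding up any single stage.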

Words: 1560 - Pages: 7

Free Essay

Operating Systems

...job in the ready queue if a higher-priority job arrives and requires service. 1.) First Come First Served (FCFS) - executes jobs in their order of arrival (the job that comes first gets the CPU first). - implemented easily using a first-in, first-out (FIFO) queue. - FCFS is non-preemptive. - PROBLEM: the average turnaround times and waiting times are HIGH. 2.) Shortest Job First (SJF) – the job with the shortest estimated time is the next one to receive service. - cannot be implemented at the short-term scheduling level, since there is no way to know the length of the next CPU burst. - can be implemented at the long-term scheduling level by asking users to estimate their job time limit. - can be preemptive or non-preemptive. - SJF gives the minimum average waiting time (see the simulation sketch below). - PROBLEM: long jobs may never get to execute. 3.) Priority Scheduling: the highest-priority job is the next one to receive service. - Priority scheduling can be preemptive or non-preemptive. - PROBLEM: starvation – a low-priority job may never be...
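The two claims above, that FCFS can produce high waiting times and that SJF minimizes the average waiting time, are easy to check with a short simulation. The burst times below are invented, and all jobs are assumed to arrive at time 0.

```python
# Average waiting time under FCFS vs. non-preemptive SJF, assuming all
# jobs arrive at time 0 (burst times are invented example data).

def average_waiting_time(bursts):
    """Jobs run back to back; each job waits for all jobs ahead of it."""
    waiting, clock = [], 0
    for burst in bursts:
        waiting.append(clock)   # this job waited until the clock reached it
        clock += burst
    return sum(waiting) / len(bursts)

bursts = [24, 3, 3]                                   # a long job arrived first
print("FCFS:", average_waiting_time(bursts))          # 17.0
print("SJF :", average_waiting_time(sorted(bursts)))  # 3.0
```

Sorting by burst length cuts the average wait from 17 to 3 time units here; the flip side, as the PROBLEM note says, is that a steady stream of short jobs can starve the long one indefinitely.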

Words: 906 - Pages: 4

Premium Essay

Sparkling Glass Limited

...Gulati, and Gupta. Various factors such as Educational Qualification, Work Experience, Socializing and many other factors have been taken into consideration. I hope the analysis I have done satisfies your concerns. Regards, Committee Member 2. Executive Summary: Mr. Ashok, who is in charge of the General Shift at Sparkling Glass Limited, is going to retire in a few months. He has been responsible not only for the basic routine work but also for the Production Planning, Scheduling and Costing work. Therefore, the company needs to appoint someone in his place. Four people have been considered for the post: 1. Khanna, 2. Panjabi, 3. Gupta, 4. Gulati. The analysis discusses which person fits the job best, evaluating Educational Qualification, Experience, Knowledge of Planning & Management, and Social Quality and Unions. After the analysis, Mr. Gupta seems fit to be promoted because he is qualified as a glass technologist, manages the Production Planning, Costing and Scheduling work, and had maintained an adequate relationship with the workers. TABLE OF CONTENT: 1. Situation Analysis, 2. Problem Statement, 3. Options Available, 4. Criteria for Evaluation, 5. Evaluation of Options, 6. Recommendation, 7. Action Plan. 1. Situation Analysis: The committee has examined all four candidates' resumes. While evaluating them the members...

Words: 961 - Pages: 4

Free Essay

Srs Woow

...In the name of God, the Most Gracious, the Most Merciful. Name: -------- Group: ---- Level: ------- Major: ------------- KINGDOM OF SAUDI ARABIA, Ministry of Higher Education, Al-Imam Muhammad Ibn Saud Islamic University, College of Computer & Information Sciences. CS231: Operating Systems, 1st Mid-Term Exam, 2nd semester of 1430/1431. Exam Duration: 1:30H. Marks: out of 20. I. Multiple choice [6 Marks, 1 for each]: 1. Which of the following is not shared by different threads of the same process? a. Global variables b. Program counter c. Open files d. None of the above 2. Which of the following process state transitions is NOT correct? a. RUNNING to READY b. READY to RUNNING c. WAITING to RUNNING d. WAITING to READY 3. For which of the following programming examples does multithreading provide better performance than a single-threaded solution? a. A web server that responds to clients' service requests b. A web browser that can process...

Words: 693 - Pages: 3

Free Essay

Positrol Workholding

...To: Positrol Workholding | From: | CC: | Date: | Re: Job Process Scheduling. Introduction: The differences in job process scheduling will be measured based on the Table 1 that was given. We are comparing First Come, First Served (FCFS), Shortest Operating Time (SOT) and Earliest Due Date (EDD) to find which sequencing rule may work best. Highlights. Lateness: SOT had the best (most negative) average lateness (-3.8), but would still have 3 jobs arrive late. EDD's average was slightly less favourable at (-2.1), but it had no late jobs. Both of these alternatives had better averages than the FCFS method that Positrol Workholding is currently using. Jobs: For the best efficiency, and to keep customers happy, most jobs should be completed on time, which is not the case for FCFS and SOT: both would have 3 late jobs, whereas EDD would have 0 late jobs, meaning they would all be completed on time (see the sketch below for how such a comparison is computed). Flow Time: The last...
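Since Table 1 itself is not reproduced in the excerpt, the sketch below uses invented job data to show how such a rule comparison is computed; the mechanics (sequence the jobs by each rule, then lateness = completion time minus due date) are standard.

```python
# Comparing FCFS, SOT and EDD sequencing rules on invented job data
# (the memo's Table 1 is not reproduced here).

jobs = [  # (name, processing_days, due_in_days), listed in arrival order
    ("J1", 6, 8), ("J2", 2, 6), ("J3", 8, 20), ("J4", 3, 15), ("J5", 9, 30),
]

def evaluate(sequence):
    """Average lateness, number of late jobs, and average flow time."""
    clock, lateness, flow_times, late_jobs = 0, [], [], 0
    for name, proc, due in sequence:
        clock += proc                    # job finishes when the clock reaches it
        flow_times.append(clock)
        lateness.append(clock - due)     # negative = finished early
        late_jobs += int(clock > due)
    n = len(sequence)
    return sum(lateness) / n, late_jobs, sum(flow_times) / n

rules = {
    "FCFS": jobs,                                  # arrival order
    "SOT":  sorted(jobs, key=lambda j: j[1]),      # shortest operating time
    "EDD":  sorted(jobs, key=lambda j: j[2]),      # earliest due date
}
for rule, seq in rules.items():
    avg_late, n_late, avg_flow = evaluate(seq)
    print(f"{rule}: avg lateness {avg_late:+.1f}, late jobs {n_late}, "
          f"avg flow {avg_flow:.1f}")
```

On this invented data set the same qualitative pattern emerges as in the memo: SOT posts the most negative average lateness but still lets some jobs slip, while EDD trades a little average earliness for zero late jobs.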

Words: 1183 - Pages: 5

Free Essay

MCQ Computer

...1. Round robin scheduling is essentially the preemptive version of __________. 1) FIFO 2) Shortest job first 3) Shortest remaining 4) Longest time first. Correct Answer: FIFO. Marks: 1. 2. A page fault occurs 1) when the page is not in the memory 2) when the page is in the memory 3) when...

Words: 2016 - Pages: 9

Free Essay

Swear as Mechanism to Pain

...the operating system is the kernel. The kernel is a control program that runs in a privileged state in which all hardware instructions can be executed. It reacts to interrupts from external devices and to service requests and traps from processes. The kernel creates and terminates processes and responds to requests for service. Operating systems are resource managers. The main resource is computer hardware in the form of processors, storage, input/output devices, communication devices, and data. Operating system functions include: • Implementing the user interface. • Sharing hardware among users. • Allowing users to share data among themselves. • Preventing users from interfering with one another. • Scheduling resources among users. • Facilitating input/output. • Recovering from errors. • Accounting for resource usage. • Facilitating parallel operations. • Organizing data for secure and rapid access. • Handling network communications. Processes run applications, which are linked together with libraries that perform standard services. The kernel supports the processes by providing a path to the peripheral devices. The kernel responds to service...

Words: 2421 - Pages: 10