
CPU Scheduling


Submitted By edwine
CPU SCHEDULING

CPU scheduling in UNIX is designed to benefit interactive processes. Processes are given small CPU time slices by a priority algorithm that reduces to round-robin scheduling for CPU-bound jobs. The scheduler on a UNIX system belongs to the general class of operating-system schedulers known as round robin with multilevel feedback, which means that the kernel allocates CPU time to a process for a small time slice, preempts a process that exceeds its time slice, and feeds it back into one of several priority queues. A process may need many iterations through the "feedback loop" before it finishes. When the kernel does a context switch and restores the context of a process, the process resumes execution from the point where it had been suspended.

Each process table entry contains a priority field used for scheduling. The priority of a process is lower if it has recently used the CPU, and vice versa. The more CPU time a process accumulates, the lower (more positive) its priority becomes, and vice versa, so there is negative feedback in CPU scheduling and it is difficult for a single process to take all the CPU time. Process aging is employed to prevent starvation.

Older UNIX systems used a 1-second quantum for round-robin scheduling. 4.3BSD reschedules processes every 0.1 second and recomputes priorities every second. The round-robin scheduling is accomplished by the timeout mechanism, which tells the clock interrupt driver to call a kernel subroutine after a specified interval; the subroutine to be called in this case causes the rescheduling and then resubmits a timeout to call itself again. The priority recomputation is also timed by a subroutine that resubmits a timeout for itself.

When a process relinquishes the CPU, it sleeps on an event. The kernel primitive used for this purpose is called sleep (not to be confused with the user-level library routine of the same name). It takes an argument, which is by convention the address of a kernel data structure related to an event that the process wants to occur before that process is awakened. When the event occurs, the system process that knows about it calls wakeup with the address corresponding to the event, and all processes that had done a sleep on the same address are put in the ready queue to be run.

Scheduling in the 4.2BSD UNIX OS
Short-term scheduling in UNIX is designed to benefit interactive jobs. Processes are given small CPU time slices by an algorithm that reduces to round robin for CPU-bound jobs, although there is a priority scheme. There is no preemption of one process by another when running in kernel mode. A process may relinquish the CPU because it is waiting for I/O (including I/O due to page faults) or because its time slice has expired.
Every process has a scheduling priority associated with it; the lower the numerical priority, the more likely is the process to run.
System processes doing disk I/O and other important tasks have negative priorities and cannot be interrupted. Ordinary user processes have positive priorities and thus are less likely to be run than any system process, although user processes may have precedence over one another. The nice command may be used to affect this precedence according to its numerical priority argument.
The more CPU time a process accumulates, the lower (more positive) its priority becomes. The reverse is also true (process aging is employed to prevent starvation). Thus there is negative feedback in CPU scheduling, and it is difficult for a single process to take all the CPU time.
Older UNIX systems used a 1-second quantum for the round-robin scheduling algorithm. Later, 4.2BSD performed rescheduling every 0.1 second and priority recomputation every second. The round-robin scheduling is accomplished by the timeout mechanism, which tells the clock interrupt driver to call a certain system routine after a specified interval. The subroutine to be called in this case causes the rescheduling and then resubmits a timeout to call itself again 0.1 second later. The priority recomputation is also timed by a subroutine that resubmits a timeout for itself.
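As a rough sketch of how such a periodic recomputation can work, the Python snippet below decays a per-process CPU-usage counter and derives a new priority from it. The decay factor, the usage counter, and the base/nice terms are assumptions in the spirit of classic BSD-style schedulers, not the exact kernel formula.

```python
def recompute_priority(cpu_usage, base, nice):
    """Called once per second for every process: decay usage, derive priority.
    A lower numerical priority means the process is more likely to run."""
    cpu_usage = cpu_usage // 2                 # decay recent CPU usage (negative feedback)
    priority = base + cpu_usage // 2 + nice    # heavier CPU users get a worse (higher) priority
    return cpu_usage, priority

# illustrative run: a process that accumulated 60 ticks of CPU time
usage, prio = 60, 0
for second in range(4):
    usage, prio = recompute_priority(usage, base=50, nice=0)
    print(second + 1, usage, prio)
```

The point of the sketch is the negative feedback: as the usage counter decays over seconds of inactivity, the computed priority improves again, which is what keeps any single CPU-bound process from monopolizing the CPU.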
When a process chooses to relinquish the CPU (voluntarily, in a user program, or because this decision is made in the kernel context for a process executing that program), it sleeps on an event. The kernel primitive used for this is called sleep (not to be confused with the C library routine of the same name, sleep(3)). It takes an argument that is by convention the address of a kernel data structure related to an event the process wants to occur before it is awakened. When the event occurs, the system process that knows about it calls wakeup with the address corresponding to the event, and all processes that had done a sleep on the same address are put in the ready queue.
For example, a process waiting for disk I/O to complete will sleep on the address of the buffer corresponding to the data being transferred. When the interrupt routine for the disk driver notes that the transfer is complete, it calls wakeup on that buffer, causing all processes waiting for that buffer to be awakened. Which process among those actually does run is chosen by the scheduler effectively at random. Sleep, however, also takes a second argument, which is the scheduling priority to be used for this purpose.
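The following Python sketch models the sleep/wakeup idea described above: processes sleep on an arbitrary channel (standing in for a kernel address such as a buffer), and a wakeup on that channel moves all of them to the ready queue. The data structures and names are illustrative only, not the 4.2BSD implementation.

```python
from collections import defaultdict, deque

sleep_queues = defaultdict(list)   # channel -> processes sleeping on it
ready_queue = deque()

def sleep_on(process, channel, priority):
    """Block the process until someone calls wakeup() on the same channel."""
    sleep_queues[channel].append((priority, process))

def wakeup(channel):
    """Wake every process sleeping on the channel; the scheduler picks among them later."""
    for priority, process in sleep_queues.pop(channel, []):
        ready_queue.append((priority, process))

# e.g. two processes waiting for the same disk buffer to be filled
sleep_on("proc_A", channel="buffer_0x1f2c", priority=20)
sleep_on("proc_B", channel="buffer_0x1f2c", priority=25)
wakeup("buffer_0x1f2c")            # disk interrupt handler would call this
print(list(ready_queue))
```

As in the text, wakeup readies every sleeper on the address; which of them runs next is a separate scheduling decision, here left to whatever consumes the ready queue.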

CPU - I/O Burst Cycle
• Process execution consists of a cycle of CPU execution and I/O wait: bursts of CPU usage alternate with periods of I/O wait. A CPU-bound process has a few long CPU bursts; an I/O-bound process has many short ones.
• Maximum CPU utilization is obtained with multiprogramming.

CPU Scheduler
• Selects from among the processes in memory that are ready to execute, and allocates the CPU to one of them.
• CPU scheduling decisions may take place when a process:
1. Switches from the running to the waiting state.
2. Switches from the running to the ready state.
3. Switches from the waiting to the ready state.
4. Terminates.
• Scheduling under 1 and 4 is nonpreemptive.
• All other scheduling is preemptive.
Scheduling Metrics
• CPU utilization – keep the CPU as busy as possible
• Throughput – # of processes that complete their execution per time unit
• Turnaround/response time – amount of time to execute a particular process
• Waiting time – amount of time a process has been waiting in the ready queue

Scheduling Algorithm Goals (figure)

Dispatcher
• The dispatcher module gives control of the CPU to the process selected by the scheduler; this involves:
– switching context
– switching to user mode
– jumping to the proper location in the user program to restart that program
• Dispatch latency – the time it takes for the dispatcher to stop one process and start another running.

First-Come, First-Served (FCFS) Scheduling
Process  Burst Time
P1       24
P2       3
P3       3
• Suppose that the processes arrive in the order P1, P2, P3. The Gantt chart for the schedule is: P1 (0–24), P2 (24–27), P3 (27–30)
• Waiting time for P1 = 0; P2 = 24; P3 = 27
• Average waiting time: (0 + 24 + 27)/3 = 17

FCFS Scheduling (Cont.)
• Suppose instead that the processes arrive in the order P2, P3, P1. The Gantt chart for the schedule is: P2 (0–3), P3 (3–6), P1 (6–30)
• Waiting time for P1 = 6; P2 = 0; P3 = 3
• Average waiting time: (6 + 0 + 3)/3 = 3
• Much better than the previous case.
• Convoy effect: short processes get stuck behind a long process. A worked FCFS calculation is sketched below.
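As a rough illustration of the arithmetic above, this Python sketch computes FCFS waiting times for the same three bursts; the function name and structure are illustrative, not taken from any particular textbook or library.

```python
def fcfs_waiting_times(bursts):
    """Return per-process waiting times under FCFS, given bursts in arrival order."""
    waits = []
    elapsed = 0
    for burst in bursts:
        waits.append(elapsed)   # a process waits for everything queued before it
        elapsed += burst
    return waits

# Order P1, P2, P3 -> waits [0, 24, 27], average 17
print(fcfs_waiting_times([24, 3, 3]))
# Order P2, P3, P1 -> waits [0, 3, 6], average 3
print(fcfs_waiting_times([3, 3, 24]))
```

The two calls reproduce the two averages above and make the convoy effect visible: putting the long burst first inflates every later wait.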
Shortest-Job-First (SJF) Scheduling
• Associate with each process the length of its next CPU burst. Use these lengths to schedule the process with the shortest time.
• Two schemes:
– nonpreemptive – once the CPU is given to a process, it cannot be preempted until it completes its CPU burst.
– preemptive – if a new process arrives with a CPU burst length less than the remaining time of the currently executing process, preempt. This scheme is known as Shortest-Remaining-Time-First (SRTF).
• SJF is optimal – it gives the minimum average waiting time for a given set of processes.

Example of Preemptive SJF
Process  Arrival Time  Burst Time
P1       0.0           7
P2       2.0           4
P3       4.0           1
P4       5.0           4
• SJF (preemptive) Gantt chart: P1 (0–2), P2 (2–4), P3 (4–5), P2 (5–7), P4 (7–11), P1 (11–16)
• Average waiting time = (9 + 1 + 0 + 2)/4 = 3

Example of Non-Preemptive SJF
• Same processes; SJF (non-preemptive) Gantt chart: P1 (0–7), P3 (7–8), P2 (8–12), P4 (12–16)
• Average waiting time = (0 + 6 + 3 + 7)/4 = 4
• A small simulation of the non-preemptive variant is sketched below.
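To make the algorithm concrete, here is a minimal Python sketch of non-preemptive SJF over (arrival, burst) pairs; it is an illustrative simulation, not code from any operating system. The preemptive (SRTF) variant would re-evaluate the choice whenever a new process arrives instead of only at burst completion.

```python
def sjf_nonpreemptive(procs):
    """procs: list of (arrival, burst). Returns the average waiting time."""
    remaining = sorted(range(len(procs)), key=lambda i: procs[i][0])  # indices by arrival
    time, total_wait = 0, 0
    while remaining:
        ready = [i for i in remaining if procs[i][0] <= time]
        if not ready:                        # CPU idle until the next arrival
            time = procs[remaining[0]][0]
            continue
        i = min(ready, key=lambda i: procs[i][1])   # pick the shortest next burst
        total_wait += time - procs[i][0]
        time += procs[i][1]
        remaining.remove(i)
    return total_wait / len(procs)

procs = [(0, 7), (2, 4), (4, 1), (5, 4)]
print(sjf_nonpreemptive(procs))   # 4.0, matching the worked example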
Determining Length of Next CPU Burst
• Can only estimate the length.
• Estimation can be done by using the lengths of previous CPU bursts, with exponential averaging.
• Define:
1. tn = actual length of the n-th CPU burst
2. τn+1 = predicted value for the next CPU burst
3. α, 0 ≤ α ≤ 1
4. τn+1 = α tn + (1 − α) τn

Examples of Exponential Averaging
• α = 0 – τn+1 = τn – recent history does not count.
• α = 1 – τn+1 = tn – only the actual last CPU burst counts.
• If we expand the formula, we get: τn+1 = α tn + (1 − α) α tn-1 + … + (1 − α)^j α tn-j + … + (1 − α)^(n+1) τ0
• Since both α and (1 − α) are less than or equal to 1, each successive term has less weight than its predecessor.
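As a small, hedged illustration of the recurrence τn+1 = α tn + (1 − α) τn, the Python snippet below updates the burst-length prediction after each observed burst; the starting value τ0, the burst values, and α = 0.5 are arbitrary choices for the example.

```python
def predict_next_burst(prev_prediction, actual_burst, alpha=0.5):
    """Exponential averaging: tau_{n+1} = alpha * t_n + (1 - alpha) * tau_n."""
    return alpha * actual_burst + (1 - alpha) * prev_prediction

tau = 10.0                            # tau_0: initial guess for the first burst
for t in [6, 4, 6, 4, 13, 13, 13]:    # observed CPU bursts (illustrative values)
    tau = predict_next_burst(tau, t)
    print(round(tau, 2))
```

Running it shows the prediction tracking the recent behaviour: it drifts down while the bursts are short, then climbs once the process starts producing longer bursts, with older history fading geometrically.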
Priority Scheduling
• A priority number (an integer) is associated with each process.
• The CPU is allocated to the process with the highest priority (smallest integer ≡ highest priority). This can be preemptive or non-preemptive.
• SJF is priority scheduling where the priority is the predicted next CPU burst time.
• Problem ≡ Starvation – low-priority processes may never execute.
• Solution ≡ Aging – as time progresses, increase the priority of the process.

Round Robin (RR)
• Each process gets a small unit of CPU time (a time quantum), usually 10–100 milliseconds. After this time has elapsed, the process is preempted and added to the end of the ready queue.
• If there are n processes in the ready queue and the time quantum is q, then each process gets 1/n of the CPU time in chunks of at most q time units at once. No process waits more than (n − 1)q time units.
• Performance:
– q large ⇒ behaves like FIFO
– q small ⇒ q must still be large with respect to the context-switch time, otherwise overhead is too high.

Example of RR with Time Quantum = 20
Process  Burst Time
P1       53
P2       17
P3       68
P4       24
• The Gantt chart is: P1 (0–20), P2 (20–37), P3 (37–57), P4 (57–77), P1 (77–97), P3 (97–117), P4 (117–121), P1 (121–134), P3 (134–154), P3 (154–162)
• Typically, RR gives higher average turnaround than SJF, but better interactive response.
• A short RR simulation is sketched below.
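The following Python sketch simulates round robin with a configurable quantum and reproduces the schedule of the example above; it ignores context-switch overhead and new arrivals, and is purely illustrative.

```python
from collections import deque

def round_robin(bursts, quantum):
    """bursts: dict name -> burst time. Prints the schedule as (name, start, end)."""
    ready = deque(bursts.items())
    time = 0
    while ready:
        name, remaining = ready.popleft()
        run = min(quantum, remaining)
        print((name, time, time + run))
        time += run
        if remaining > run:                  # not finished: back of the ready queue
            ready.append((name, remaining - run))

round_robin({"P1": 53, "P2": 17, "P3": 68, "P4": 24}, quantum=20)
```

Re-running it with a much larger quantum collapses the schedule to FCFS order, which is the "q large ⇒ FIFO" observation above.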
Multilevel Queue
• The ready queue is partitioned into separate queues: e.g., foreground (interactive) and background (batch).
• Each queue has its own scheduling algorithm, e.g., foreground – RR, background – FCFS.
• Scheduling must also be done between the queues:
– Fixed-priority scheduling (i.e., serve all from the foreground, then from the background). Possibility of starvation.
– Time slice – each queue gets a certain amount of CPU time which it can schedule among its processes; e.g., 80% to the foreground in RR and 20% to the background in FCFS.

Multi-queue priority scheduling: a scheduling algorithm with four priority classes (figure).

Multilevel Feedback Queue
• A process can move between the various queues; aging can be implemented this way.
• A multilevel-feedback-queue scheduler is defined by the following parameters:
– number of queues
– scheduling algorithm for each queue
– method used to determine when to upgrade a process
– method used to determine when to demote a process
– method used to determine which queue a process will enter when that process needs service

Example of Multilevel Feedback Queue
• Three queues:
– Q0 – time quantum 8 milliseconds
– Q1 – time quantum 16 milliseconds
– Q2 – FCFS
• Scheduling:
– A new job enters queue Q0, which is served FCFS. When it gains the CPU, the job receives 8 milliseconds. If it does not finish in 8 milliseconds, the job is moved to queue Q1.
– At Q1 the job is again served FCFS and receives 16 additional milliseconds. If it still does not complete, it is preempted and moved to queue Q2. A demotion sketch for this three-queue setup follows below.
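Here is a minimal Python sketch of the three-queue demotion policy described above for a batch of jobs that are all ready at time 0; the queue list, the dispatch loop, and the job representation are illustrative assumptions, not a real scheduler (in particular it does not handle new arrivals or promotion).

```python
from collections import deque

QUANTA = [8, 16, None]        # Q0 and Q1 quanta; None means Q2 runs FCFS to completion

def mlfq(jobs):
    """jobs: list of (name, burst). Prints (name, queue, start, end) segments."""
    queues = [deque(), deque(), deque()]
    for name, burst in jobs:
        queues[0].append((name, burst))       # every new job enters Q0
    time = 0
    for level, q in enumerate(queues):        # higher queues run before lower ones
        while q:
            name, remaining = q.popleft()
            quantum = QUANTA[level]
            run = remaining if quantum is None else min(quantum, remaining)
            print((name, f"Q{level}", time, time + run))
            time += run
            if remaining > run:               # unfinished: demote to the next queue
                queues[level + 1].append((name, remaining - run))

mlfq([("A", 5), ("B", 30), ("C", 12)])
```

Short jobs like A finish inside Q0's 8 ms quantum, while long jobs like B burn their Q0 and Q1 quanta and end up competing in the FCFS queue, which is exactly the behaviour the example describes.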
Multiple-Processor Scheduling
• CPU scheduling is more complex when multiple CPUs are available.
• Homogeneous processors within a multiprocessor.
• Load sharing.

Real-Time Scheduling
• Hard real-time systems – required to complete a critical task within a guaranteed amount of time.
• Soft real-time computing – requires that critical processes receive priority over less fortunate ones.

Solaris 2 Scheduling (figure)
Windows 2000 – Windows 2000 supports 32 priorities for threads.
Windows 2000 Priorities (figure)
UNIX Scheduler – the UNIX scheduler is based on a multilevel queue structure.
