Scheduling Algorithms in Operating Systems


Scheduling Algorithms

 

CPU Scheduling deals with the problem of deciding which of the processes in the ready queue is to be allocated the CPU.
Following are some of the scheduling algorithms we will study:
  • FCFS Scheduling.
  • Round Robin Scheduling.
  • SJF Scheduling.
  • SRT Scheduling.
  • Priority Scheduling.
  • Multilevel Queue Scheduling.
  • Multilevel Feedback Queue Scheduling.

 

First-Come-First-Served (FCFS) Scheduling





Other names of this algorithm are:
  • First-In-First-Out (FIFO)
  • Run-to-Completion
  • Run-Until-Done
First-Come-First-Served is perhaps the simplest scheduling algorithm. Processes are dispatched according to their arrival time on the ready queue. Being a nonpreemptive discipline, once a process has the CPU, it runs to completion. FCFS scheduling is fair in the formal or human sense of fairness, but it is unfair in the sense that long jobs make short jobs wait and unimportant jobs make important jobs wait.
FCFS is more predictable than most other schemes, since the order of service depends only on arrival time. The FCFS scheme is not useful in scheduling interactive users because it cannot guarantee good response time. The code for FCFS scheduling is simple to write and understand. One of the major drawbacks of this scheme is that the average waiting time is often quite long.
The First-Come-First-Served algorithm is rarely used as a master scheme in modern operating systems but it is often embedded within other schemes.
Example
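The original worked example is not reproduced here, so as an illustration only, the following is a minimal Python sketch of FCFS scheduling. The process names, arrival times, and burst times are hypothetical.

    # Hypothetical processes: (name, arrival_time, burst_time) in milliseconds.
    processes = [("P1", 0, 24), ("P2", 1, 3), ("P3", 2, 3)]

    def fcfs(procs):
        """Dispatch in arrival order; return the waiting time of each process."""
        clock = 0
        waiting = {}
        for name, arrival, burst in sorted(procs, key=lambda p: p[1]):
            clock = max(clock, arrival)      # CPU may sit idle until the job arrives
            waiting[name] = clock - arrival  # time spent in the ready queue
            clock += burst                   # nonpreemptive: run to completion
        return waiting

    w = fcfs(processes)
    print(w)                                          # {'P1': 0, 'P2': 23, 'P3': 25}
    print("average wait:", sum(w.values()) / len(w))  # 16.0

Note how the long job P1 makes the two short jobs wait, which is exactly the unfairness described above.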

Round Robin Scheduling


One of the oldest, simplest, fairest and most widely used algorithm is round robin (RR).
In the round robin scheduling, processes are dispatched in a FIFO manner but are given a limited amount of CPU time called a time-slice or a quantum.
If a process does not complete before its CPU-time expires, the CPU is preempted and given to the next process waiting in a queue. The preempted process is then placed at the back of the ready list.
Round Robin Scheduling is preemptive (at the end of time-slice) therefore it is effective in time-sharing environments in which the system needs to guarantee reasonable response times for interactive users.
The only interesting issue with the round robin scheme is the length of the quantum. Setting the quantum too short causes too many context switches and lowers CPU efficiency. On the other hand, setting the quantum too long may cause poor response time and approximates FCFS.
In any event, the average waiting time under round robin scheduling is often quite long.
Example
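Again, the original example is not shown, so the following is only a minimal Python sketch of round robin with an assumed 4 ms quantum and hypothetical processes that all arrive at time 0.

    from collections import deque

    # Hypothetical processes: (name, burst_time); all assumed to arrive at time 0.
    processes = [("P1", 24), ("P2", 3), ("P3", 3)]
    QUANTUM = 4  # assumed time slice in milliseconds

    def round_robin(procs, quantum):
        """Dispatch in FIFO order, preempting each process after one time slice."""
        ready = deque(procs)
        clock, finish = 0, {}
        while ready:
            name, remaining = ready.popleft()
            run = min(quantum, remaining)
            clock += run                               # the process uses its slice
            if remaining > run:
                ready.append((name, remaining - run))  # preempted: back of the ready list
            else:
                finish[name] = clock                   # completion time
        return finish

    print(round_robin(processes, QUANTUM))  # {'P2': 7, 'P3': 10, 'P1': 30}

Increasing QUANTUM toward the longest burst makes the schedule behave like FCFS, while a very small quantum adds a context switch on nearly every tick; this is the trade-off described above.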

 

Shortest-Job-First (SJF) Scheduling


Another name for this algorithm is Shortest-Process-Next (SPN).
Shortest-Job-First (SJF) is a non-preemptive discipline in which the waiting job (or process) with the smallest estimated run-time-to-completion is run next. In other words, when the CPU is available, it is assigned to the process that has the smallest next CPU burst.
SJF scheduling is especially appropriate for batch jobs for which the run times are known in advance. Since the SJF scheduling algorithm gives the minimum average waiting time for a given set of processes, it is probably optimal.
The SJF algorithm favors short jobs (or processes) at the expense of longer ones.
The obvious problem with SJF scheme is that it requires precise knowledge of how long a job or process will run, and this information is not usually available.
The best the SJF algorithm can do is rely on user estimates of run times.
    In a production environment where the same jobs run regularly, it may be possible to provide reasonable estimates of run time, based on the past performance of the process. But in a development environment, users rarely know how their programs will execute.
Like FCFS, SJF is non-preemptive; therefore, it is not useful in a time-sharing environment in which reasonable response times must be guaranteed.
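As a minimal sketch, assuming a batch of jobs that are all available at time 0 and whose run times are known (the names and burst times below are hypothetical), nonpreemptive SJF can be simulated like this:

    # Hypothetical batch jobs: (name, estimated_burst_time), all available at time 0.
    jobs = [("P1", 6), ("P2", 8), ("P3", 7), ("P4", 3)]

    def sjf(batch):
        """Nonpreemptive SJF: always dispatch the waiting job with the smallest burst."""
        clock, waiting = 0, {}
        for name, burst in sorted(batch, key=lambda j: j[1]):
            waiting[name] = clock   # time spent waiting before being dispatched
            clock += burst          # once started, the job runs to completion
        return waiting

    w = sjf(jobs)
    print(w)                                          # {'P4': 0, 'P1': 3, 'P3': 9, 'P2': 16}
    print("average wait:", sum(w.values()) / len(w))  # 7.0, the minimum for this set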

 

Shortest-Remaining-Time (SRT) Scheduling


  • SRT is the preemptive counterpart of SJF and is useful in a time-sharing environment.
  • In SRT scheduling, the process with the smallest estimated run-time to completion is run next, including new arrivals.
  • In the SJF scheme, once a job begins executing, it runs to completion.
  • In the SRT scheme, a running process may be preempted by a newly arriving process with a shorter estimated run-time.
  • The SRT algorithm has higher overhead than its counterpart, SJF.
  • SRT must keep track of the elapsed time of the running process and must handle occasional preemptions.
  • In this scheme, newly arriving short processes run almost immediately. However, longer jobs have an even longer mean waiting time (see the sketch after this list).
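A minimal Python sketch of SRT is shown below; it advances the clock one millisecond at a time and always runs the process with the least remaining time, so a newly arriving short process preempts a longer running one. The process data are hypothetical.

    import heapq

    # Hypothetical processes: (arrival_time, burst_time, name).
    procs = [(0, 8, "P1"), (1, 4, "P2"), (2, 9, "P3"), (3, 5, "P4")]

    def srt(processes):
        """Preemptive SJF: each tick, run the process with the smallest remaining time."""
        arrivals = sorted(processes)            # by arrival time
        ready = []                              # min-heap of (remaining_time, name)
        clock, i, finish = 0, 0, {}
        while i < len(arrivals) or ready:
            while i < len(arrivals) and arrivals[i][0] <= clock:
                _, burst, name = arrivals[i]
                heapq.heappush(ready, (burst, name))   # a new arrival may preempt
                i += 1
            if not ready:
                clock = arrivals[i][0]          # CPU idle until the next arrival
                continue
            remaining, name = heapq.heappop(ready)
            clock += 1                          # run the shortest job for one tick
            if remaining > 1:
                heapq.heappush(ready, (remaining - 1, name))
            else:
                finish[name] = clock            # completion time
        return finish

    print(srt(procs))  # {'P2': 5, 'P4': 10, 'P1': 17, 'P3': 26}

The per-tick bookkeeping here is the extra overhead SRT pays compared with plain SJF.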

 

Priority Scheduling

The basic idea is straightforward: each process is assigned a priority, and the process with the highest priority is allowed to run. Equal-priority processes are scheduled in FCFS order. The Shortest-Job-First (SJF) algorithm is a special case of the general priority scheduling algorithm.
An SJF algorithm is simply a priority algorithm where the priority is the inverse of the (predicted) next CPU burst. That is, the longer the CPU burst, the lower the priority and vice versa.
Priority can be defined either internally or externally. Internally defined priorities use some measurable quantity or quality to compute the priority of a process.
Examples of internally defined priorities are:
  • Time limits.
  • Memory requirements.
  • File requirements (for example, the number of open files).
  • CPU vs. I/O requirements.
Externally defined priorities are set by criteria that are external to the operating system, such as:
  • The importance of the process.
  • Type or amount of funds being paid for computer use.
  • The department sponsoring the work.
  • Politics.
Priority scheduling can be either preemptive or non-preemptive:
  • A preemptive priority algorithm will preempt the CPU if the priority of the newly arrived process is higher than the priority of the currently running process.
  • A non-preemptive priority algorithm will simply put the new process at the head of the ready queue.
A major problem with priority scheduling is indefinite blocking, or starvation. A solution to the problem of indefinite blockage of low-priority processes is aging. Aging is a technique of gradually increasing the priority of processes that wait in the system for a long period of time.
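As a minimal sketch (with hypothetical processes, where a lower number means a higher priority), nonpreemptive priority scheduling with an FCFS tie-break can be written as:

    import heapq

    # Hypothetical processes: (priority, name, burst_time); lower number = higher priority.
    procs = [(3, "P1", 5), (1, "P2", 3), (4, "P3", 2), (2, "P4", 4)]

    def priority_schedule(processes):
        """Nonpreemptive priority scheduling; equal priorities fall back to FCFS order."""
        heap = [(prio, order, name, burst)
                for order, (prio, name, burst) in enumerate(processes)]
        heapq.heapify(heap)
        clock, waiting = 0, {}
        while heap:
            prio, _, name, burst = heapq.heappop(heap)
            waiting[name] = clock       # how long the process sat in the ready queue
            clock += burst
        return waiting

    print(priority_schedule(procs))  # {'P2': 0, 'P4': 3, 'P1': 7, 'P3': 12}

    # Aging (not shown) would periodically lower the numeric priority of entries
    # that have waited a long time, so a low-priority process cannot starve.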

 




Multilevel Queue Scheduling


A multilevel queue scheduling algorithm partitions the ready queue into several separate queues, for instance:
Fig 5.6 - pp. 138 in Sinha
In multilevel queue scheduling, processes are permanently assigned to one queue, based on some property of the process, such as:
  • Memory size
  • Process priority
  • Process type
The algorithm chooses the process from the occupied queue that has the highest priority, and runs that process either:
  • Preemptively, or
  • Non-preemptively
Each queue has its own scheduling algorithm or policy.

Possibility I 
    If each queue has absolute priority over lower-priority queues, then no process in a lower queue could run unless the queues for the higher-priority processes were all empty.
For example, in the above figure, no process in the batch queue could run unless the queues for system processes, interactive processes, and interactive editing processes were all empty.

Possibility II 
    If there is a time slice between the queues, then each queue gets a certain amount of CPU time, which it can then schedule among the processes in its queue. For instance:
  • 80% of the CPU time to foreground queue using RR.
  • 20% of the CPU time to background queue using FCFS.
Since processes do not move between queues, this policy has the advantage of low scheduling overhead, but it is inflexible.
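Below is a minimal sketch of Possibility I (absolute priority between queues), with hypothetical queue names and processes; in a fuller implementation each queue would also apply its own policy, such as RR for interactive work and FCFS for batch.

    from collections import deque

    # Hypothetical setup: processes are permanently assigned to one queue by type.
    queues = {
        "system":      deque(["S1", "S2"]),
        "interactive": deque(["I1"]),
        "batch":       deque(["B1", "B2"]),
    }
    priority_order = ["system", "interactive", "batch"]  # highest priority first

    def pick_next():
        """Return a process from the highest-priority non-empty queue, if any."""
        for level in priority_order:
            if queues[level]:
                return level, queues[level].popleft()
        return None, None               # all queues are empty

    while True:
        level, proc = pick_next()
        if proc is None:
            break
        print(f"dispatch {proc} from the {level} queue")
    # Output order: S1, S2 (system), then I1 (interactive), then B1, B2 (batch).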

 

Multilevel Feedback Queue Scheduling

The multilevel feedback queue scheduling algorithm allows a process to move between queues. It uses several ready queues and associates a different priority with each queue.
The algorithm chooses the process with the highest priority from the occupied queues and runs that process either preemptively or non-preemptively. If a process uses too much CPU time, it is moved to a lower-priority queue. Similarly, a process that waits too long in a lower-priority queue may be moved to a higher-priority queue. Note that this form of aging prevents starvation.
Example:
Figure 5.7 pp. 140 in Sinha
  • A process entering the ready queue is placed in queue 0.
  • If it does not finish within 8 milliseconds, it is moved to the tail of queue 1.
  • If it does not complete, it is preempted and placed into queue 2.
  • Processes in queue 2 run on an FCFS basis, but only when queue 0 and queue 1 are empty (a minimal sketch of this scheme follows).
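The sketch below follows this three-queue example. The 8-millisecond slice for queue 0 comes from the example; the 16-millisecond slice for queue 1 and the process data are assumptions, and promotion back to a higher queue (aging) is left out for brevity.

    from collections import deque

    Q0_QUANTUM = 8    # from the example above (milliseconds)
    Q1_QUANTUM = 16   # assumed value; the example does not give queue 1's quantum

    # Hypothetical processes: (name, total_burst_time), all entering queue 0.
    queue0 = deque([("P1", 30), ("P2", 6), ("P3", 20)])
    queue1, queue2 = deque(), deque()
    clock = 0

    def run_slice(name, remaining, quantum, demote_to):
        """Run for at most one quantum; move an unfinished process down one queue."""
        global clock
        used = min(quantum, remaining)
        clock += used
        if remaining > used:
            demote_to.append((name, remaining - used))
        else:
            print(f"{name} finished at t={clock}")

    while queue0 or queue1 or queue2:
        if queue0:                                 # highest priority: 8 ms slice
            name, remaining = queue0.popleft()
            run_slice(name, remaining, Q0_QUANTUM, queue1)
        elif queue1:                               # demoted once: longer slice
            name, remaining = queue1.popleft()
            run_slice(name, remaining, Q1_QUANTUM, queue2)
        else:                                      # queue 2: FCFS, only when 0 and 1 are empty
            name, remaining = queue2.popleft()
            clock += remaining
            print(f"{name} finished at t={clock}")
    # Short P2 finishes in queue 0; P3 finishes in queue 1; long P1 drops to queue 2.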

 









