Easy Notes – Operating System pt. 4

Easy Notes is a new initiative of D2G where we put everything in layman's language. As the name suggests, it is made up of individuals' notes, kept to the point with no further explanation, and is especially designed for aspirants who have little knowledge of the respective subject. Very good for brushing up your knowledge.

Today’s Topic: Operating System

Difference between Process and Thread

# A process is called a heavyweight process. A thread is called a lightweight process.
# Process switching requires interaction with the operating system. Thread switching does not need to call the operating system or cause an interrupt to the kernel.
# In a multiple-process implementation, each process executes the same code but has its own memory and file resources. All threads can share the same set of open files and child processes.
# While one server process is blocked, no other server process can execute until the first process is unblocked. While one server thread is blocked and waiting, a second thread in the same task can run.
# Multiple redundant processes use more resources than a multithreaded process. A multithreaded process uses fewer resources than multiple redundant processes.
# In a multiple-process design, each process operates independently of the others. A thread, by contrast, can read, write, or even completely wipe out another thread's stack.
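The memory sharing described above can be seen in a short Python sketch (the names are illustrative): all threads run in one address space and so mutate the same objects, whereas separate processes would each work on their own copy.

```python
import threading

results = []          # one list on the shared heap

def worker(n):
    # Every thread sees and mutates the same `results` object.
    results.append(n * n)

threads = [threading.Thread(target=worker, args=(i,)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(sorted(results))   # squares computed by all four threads: [0, 1, 4, 9]
```

Had each worker been a separate process, each child would have appended to its own private copy of the list, and the parent's list would remain empty.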

Principle of Concurrency

On a single-processor machine, OS support for concurrency allows multiple applications to share resources in such a way that the applications appear to run at the same time. Since a typical application does not consume all resources at a given time, careful coordination can make each application run as if it had the entire machine.

Race Condition

A race condition occurs when multiple processes or threads read and write data items so that the final result depends on the order of execution of instructions in the multiple processes. Suppose that two processes, P1 and P2, share the global variable a. At some point in its execution, P1 updates a to the value 1, and at some point in its execution, P2 updates a to the value 2. Thus, the two tasks are in a race to write variable “a”. In this example the “loser” of the race (the process that updates last) determines the final value of a.
Therefore, the operating system is concerned with the following things:
1. The operating system must be able to keep track of the various processes
2. The operating system must allocate and deallocate various resources for each active process.
3. The operating system must protect the data and physical resources of each process against unintended interference by other processes.
4. The functioning of a process, and the output it produces, must be independent of the speed at which its execution is carried out relative to the speed of other concurrent processes.
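To make the order-dependence concrete, here is a minimal Python sketch (the `run` helper is invented for the example) that replays the two possible orderings of the P1/P2 updates deterministically, rather than relying on real thread timing:

```python
# The P1/P2 example above, simulated deterministically: the final value
# of the shared variable depends only on which update runs last.
def run(updates):
    a = 0
    for new_value in updates:
        a = new_value      # each "process" overwrites the shared variable
    return a

print(run([1, 2]))   # P1 writes 1, then P2 writes 2 -> a == 2
print(run([2, 1]))   # P2 writes 2, then P1 writes 1 -> a == 1
```

With real threads the winner is chosen by the scheduler, which is exactly why the outcome of a race condition is unpredictable.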

Critical Section

Let us have a set of processes {P0, P1, P2, …, Pn−1}.
# Each process Pi has a segment of code.
# That segment of code is called the Critical Section of that process.
# The important feature of the system is: when one process is executing in its critical section, no other process is allowed to execute in its critical section, i.e., no two processes execute in their critical sections at the same time.
# Each process must request permission to enter its Critical Section.
# Entry Section: the section of code that implements this request.
# Exit Section: the section of code that follows the critical section.
# Remainder Section: the remaining code.
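These four sections can be sketched in Python, with a threading.Lock standing in for the permission mechanism (the names are illustrative):

```python
import threading

lock = threading.Lock()
counter = 0

def process():
    global counter
    for _ in range(10_000):
        lock.acquire()        # entry section: request permission
        counter += 1          # critical section: touch shared data
        lock.release()        # exit section
        pass                  # remainder section: everything else

threads = [threading.Thread(target=process) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)   # 40000: only one thread at a time was in the critical section
```

Without the entry/exit sections, the `counter += 1` updates from different threads could interleave and some increments would be lost.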

In computer science, mutual exclusion refers to the requirement of ensuring that no two concurrent processes are in their critical section at the same time; it is a basic requirement in concurrency control, to prevent race conditions.

Requirements for Mutual Exclusion

1. Mutual exclusion must be enforced: Only one process at a time is allowed into its critical section, among all processes that have critical sections for the same resource or shared object.
2. A process that halts in its noncritical section must do so without interfering with other processes.
3. It must not be possible for a process requiring access to a critical section to be delayed indefinitely: no deadlock or starvation.
4. When no process is in a critical section, any process that requests entry to its critical section must be permitted to enter without delay.
5. No assumptions are made about relative process speeds or number of processors.
6. A process remains inside its critical section for a finite time only.

Test and Set Instruction

It is a special machine instruction used to achieve mutual exclusion. The test-and-set instruction can be defined as follows:

boolean test_and_set (boolean *target)
{
    boolean rv = *target;   /* read the old value       */
    *target = true;         /* set the lock to true     */
    return rv;              /* old value tells the caller whether the lock was free */
}

The above function is carried out atomically: the read and the write happen as one indivisible step.

Advantages:
1. It is simple and easy to verify.
2. It is applicable to any number of processes.
3. It can be used to support multiple critical sections.

Drawbacks:
1. Busy waiting is possible.
2. Starvation is also possible.
3. Deadlock is possible.
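As an illustrative sketch (not the hardware instruction itself), the spin lock below simulates an atomic test-and-set in Python; an internal threading.Lock stands in for the hardware's indivisibility, and names such as TestAndSetLock are invented for the example:

```python
import threading
import time

class TestAndSetLock:
    """Spin lock built on a simulated test-and-set. Real hardware runs
    test-and-set as one instruction; here an internal threading.Lock
    keeps the read-modify-write indivisible instead."""

    def __init__(self):
        self._flag = False               # False: lock is free
        self._atomic = threading.Lock()

    def _test_and_set(self):
        with self._atomic:               # read old value and set, as one step
            old = self._flag
            self._flag = True
            return old

    def acquire(self):
        while self._test_and_set():      # busy waiting: spin while already held
            time.sleep(0)                # yield so the holder can make progress

    def release(self):
        self._flag = False               # open the lock again

counter = 0
lock = TestAndSetLock()

def worker():
    global counter
    for _ in range(1_000):
        lock.acquire()
        counter += 1                     # critical section
        lock.release()

threads = [threading.Thread(target=worker) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)   # 2000
```

The `while ... : time.sleep(0)` loop in acquire() is exactly the busy waiting listed as a drawback above: a waiting thread burns CPU time instead of sleeping until the lock is free.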

Starvation is the name given to the indefinite postponement of a process because it requires some resource before it can run, but the resource, though available for allocation, is never allocated to this process.


Semaphore

The solutions to the critical-section problem presented above are not easy to generalize to more complex problems. To overcome this difficulty, we can use a synchronization tool called a semaphore. A semaphore is an integer variable that is accessed only through two atomic operations, wait() and signal().

The classical definitions of wait and signal are:

wait(S):
    while S <= 0
        ;            // do nothing (busy wait)
    S = S - 1;

signal(S):
    S = S + 1;

The wait and signal operations on the semaphore's integer value must be executed indivisibly. That is, when one process modifies the semaphore value, no other process can simultaneously modify that same semaphore value.

Semaphores are not provided by hardware. But they have several attractive properties:
1. Semaphores are machine independent.
2. Semaphores are simple to implement.
3. Correctness is easy to determine.
4. Can have many different critical sections with different semaphores.
5. Semaphores can be used to acquire many resources simultaneously.

Drawbacks of Semaphores
1. They are essentially shared global variables.
2. Access to semaphores can come from anywhere in a program.
3. There is no control or guarantee of proper usage.
4. There is no linguistic connection between the semaphore and the data to which the semaphore controls access.
5. They serve two purposes, mutual exclusion and scheduling constraints.
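As a sketch of how wait() and signal() map onto a real API: Python's threading.Semaphore exposes acquire() (wait) and release() (signal). Below, a binary semaphore, one initialized to 1, protects a shared balance; the variable names are illustrative:

```python
import threading

sem = threading.Semaphore(1)   # initialized to 1: behaves as a mutex
balance = 0

def deposit():
    global balance
    for _ in range(10_000):
        sem.acquire()          # wait(S): blocks while the count is 0
        balance += 1           # critical section
        sem.release()          # signal(S): increments the count back to 1

threads = [threading.Thread(target=deposit) for _ in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(balance)   # 30000
```

Initializing the semaphore to a value greater than 1 instead turns it into a counting semaphore, allowing that many processes into the guarded region at once.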


Deadlock

In a multiprogramming environment, several processes may compete for a finite number of resources. A process requests resources; if the resources are not available at that time, the process enters a wait state. It may happen that waiting processes will never again change state, because the resources they have requested are held by other waiting processes. This situation is called deadlock.

Under the normal mode of operation, a process may utilize a resource in only the following sequence:
1. Request: If the request cannot be granted immediately, then the requesting process must wait until it can acquire the resource.
2. Use: The process can operate on the resource.
3. Release: The process releases the resource.

In deadlock, processes never finish executing and system resources are tied up, preventing other jobs from ever starting.

Necessary Conditions to have Deadlock

A deadlock situation can arise if the following four conditions hold simultaneously in a system:

1. Mutual exclusion: At least one resource must be held in a nonsharable mode; that is, only one process at a time can use the resource. If another process requests that resource, the requesting process must be delayed until the resource has been released.
2. Hold and wait: There must exist a process that is holding at least one resource and is waiting to acquire additional resources that are currently being held by other processes.
3. No preemption: Resources cannot be preempted; that is, a resource can be released only voluntarily by the process holding it, after that process has completed its task.
4. Circular wait: There must exist a set {P0, P1, …, Pn} of waiting processes such that P0 is waiting for a resource that is held by P1, P1 is waiting for a resource that is held by P2, …, Pn−1 is waiting for a resource that is held by Pn, and Pn is waiting for a resource that is held by P0.

Resource Allocation Graph


Deadlocks can be described more precisely in terms of a directed graph called a system resource-allocation graph. The set of vertices V is partitioned into two different types of nodes: P = {P1, P2, …, Pn}, the set consisting of all the active processes in the system, and R = {R1, R2, …, Rm}, the set consisting of all resource types in the system.



request edge – a directed edge Pi → Rj
assignment edge – a directed edge Rj → Pi

If each resource type has several instances, then a cycle does not necessarily imply that a deadlock has occurred. In this case, a cycle in the graph is a necessary but not a sufficient condition for the existence of deadlock.
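A minimal sketch of deadlock detection on such a graph, assuming one instance per resource type (so a cycle does imply deadlock); the adjacency-list encoding and the names are illustrative:

```python
# A resource-allocation graph as adjacency lists: request edges P -> R,
# assignment edges R -> P.  Depth-first search looks for a back edge,
# which signals a cycle.
def has_cycle(graph):
    visiting, done = set(), set()

    def dfs(node):
        if node in visiting:            # back edge: a cycle exists
            return True
        if node in done:
            return False
        visiting.add(node)
        for nxt in graph.get(node, ()):
            if dfs(nxt):
                return True
        visiting.remove(node)
        done.add(node)
        return False

    return any(dfs(n) for n in list(graph))

# P1 holds R2's wish... concretely: P1 -> R1 -> P2 -> R2 -> P1 is a cycle.
deadlocked = {"P1": ["R1"], "R1": ["P2"], "P2": ["R2"], "R2": ["P1"]}
safe       = {"P1": ["R1"], "R1": ["P2"], "P2": []}

print(has_cycle(deadlocked), has_cycle(safe))   # True False
```

With multiple instances per resource type, a detection algorithm must instead check whether every process's outstanding requests can eventually be satisfied, as a cycle alone is no longer conclusive.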

Methods for Handling Deadlocks

There are three different methods for dealing with the deadlock problem:
• We can use a protocol to ensure that the system will never enter a deadlock state.
• We can allow the system to enter a deadlock state and then recover.
• We can ignore the problem altogether and pretend that deadlocks never occur in the system. This solution is the one used by most operating systems, including UNIX.

Deadlock avoidance, on the other hand, requires that the operating system be given in advance additional information concerning which resources a process will request and use during its lifetime.
# With this additional knowledge, we can decide for each request whether or not the process should wait.
# If a system does not employ either a deadlock-prevention or a deadlock-avoidance algorithm, then a deadlock situation may occur.
# If a system does not ensure that a deadlock will never occur, and also does not provide a mechanism for deadlock detection and recovery, then we may arrive at a situation where the system is in a deadlock state yet has no way of recognizing what has happened.

Deadlock Prevention

For a deadlock to occur, each of the four necessary conditions must hold. By ensuring that at least one of these conditions cannot hold, we can prevent the occurrence of a deadlock.

Mutual Exclusion: The mutual-exclusion condition must hold for nonsharable resources. For example, a printer cannot be simultaneously shared by several processes. Sharable resources, on the other hand, do not require mutually exclusive access and thus cannot be involved in a deadlock.

Hold and Wait:
1. Whenever a process requests a resource, it must not hold any other resources. One protocol that can be used requires each process to request and be allocated all its resources before it begins execution.
2. An alternative protocol allows a process to request resources only when the process has none. A process may request some resources and use them; before it can request any additional resources, however, it must release all the resources it currently holds. These protocols have two main disadvantages: first, resource utilization may be low, since many of the resources may be allocated but unused for a long period; second, starvation is possible.

No Preemption: If a process that is holding some resources requests another resource that cannot be immediately allocated to it, then all resources currently being held are preempted. That is, these resources are implicitly released.

Circular Wait: One way to ensure that the circular-wait condition never holds is to impose a total ordering of all resource types and to require that each process requests resources in an increasing order of enumeration.
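A minimal Python sketch of this resource-ordering protocol (the rank table and task names are invented for the example): both tasks ask for the two locks in opposite orders, but acquisition always follows the imposed ordering, so a circular wait can never form.

```python
import threading

lock_a = threading.Lock()
lock_b = threading.Lock()
rank = {id(lock_a): 1, id(lock_b): 2}   # the imposed total ordering

finished = []

def task(name, first, second):
    # Acquire in increasing rank regardless of the order the caller
    # named the locks -- this is what breaks the circular-wait condition.
    ordered = sorted((first, second), key=lambda lk: rank[id(lk)])
    for lk in ordered:
        lk.acquire()
    finished.append(name)               # both resources held here
    for lk in reversed(ordered):
        lk.release()

t1 = threading.Thread(target=task, args=("t1", lock_a, lock_b))
t2 = threading.Thread(target=task, args=("t2", lock_b, lock_a))
t1.start(); t2.start()
t1.join(); t2.join()

print(sorted(finished))   # ['t1', 't2']: both tasks complete, no deadlock
```

If each task instead acquired the locks in the order it named them, t1 holding lock_a and t2 holding lock_b could each wait forever for the other's lock.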

To avoid making a mess of this chapter, we have divided this section into two parts. The second part will be published in the evening.

Check out our latest videos on YouTube.