easy Notes – Operating System pt. 5

Easy Notes is the new initiative of D2G where we put everything in layman's language. As the name suggests, it is made up of individual notes that are to the point and contain no further explanations, especially designed for aspirants who have little knowledge of the respective subject. Very good to brush up your knowledge.

Today’s Topic: Operating System

Memory Management

# Memory is central to the operation of a modern computer system. Memory is a large array of words or bytes, each with its own address.
# A program resides on a disk as a binary executable file.
# The program must be brought into memory and placed within a process for it to be executed.


We can provide protection by using two registers, usually a base and a limit, as shown in the figure below. The base register holds the smallest legal physical memory address; the limit register specifies the size of the range.

[Figure: base and limit registers defining a legal address range]

For example, if the base register holds 300040 and the limit register is 120900, then the program can legally access all addresses from 300040 through 420939 (inclusive).
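
As a rough sketch (not part of the original notes), the protection check that the hardware performs with these two registers can be written as follows; the register values simply reuse the numbers from the example above.

```c
#include <stdio.h>
#include <stdbool.h>

/* Illustrative sketch of the hardware protection check: every address the
   process issues is compared against the base and limit registers, and any
   address outside the range causes a trap to the operating system. */
static bool is_legal(unsigned long addr, unsigned long base, unsigned long limit)
{
    return addr >= base && addr < base + limit;
}

int main(void)
{
    unsigned long base = 300040, limit = 120900;

    printf("%d\n", is_legal(300040, base, limit)); /* 1: first legal address */
    printf("%d\n", is_legal(420939, base, limit)); /* 1: last legal address  */
    printf("%d\n", is_legal(420940, base, limit)); /* 0: out of range, trap  */
    return 0;
}
```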

Binding of Address: Computer memory uses both logical addresses and physical addresses. Address binding allocates a physical memory location to a logical pointer by associating a physical address with a logical address, which is also known as a virtual address. Address binding is part of computer memory management, and it is performed by the operating system on behalf of the applications that need access to memory.

The binding of instructions and data to memory addresses can be done at any step along the way:
Compile time: If it is known at compile time where the process will reside in memory, then absolute code can be generated.
Load time: If it is not known at compile time where the process will reside in memory, then the compiler must generate relocatable code.
Execution time: If the process can be moved during its execution from one memory segment to another, then binding must be delayed until run time.

Dynamic Loading: Better memory-space utilization can be achieved by dynamic loading. With dynamic loading, a routine is not loaded until it is called. All routines are kept on disk in a relocatable load format. The main program is loaded into memory and is executed. The advantage of dynamic loading is that an unused routine is never loaded.

Dynamic Linking: Most operating systems support only static linking, in which system language libraries are treated like any other object module and are combined by the loader into the binary program image. The concept of dynamic linking is similar to that of dynamic loading.

An address generated by the CPU is commonly referred to as a logical address, whereas an address seen by the memory unit is commonly referred to as a physical address.

# The compile-time and load-time address-binding schemes result in an environment where the logical and physical addresses are the same.

# The set of all logical addresses generated by a program is referred to as a logical address space; the set of all physical addresses corresponding to these logical addresses is referred to as a physical address space.

# The run-time mapping from virtual to physical addresses is done by the memory management unit (MMU), which is a hardware device.

# The base register is now called a relocation register; its value is added to every address generated by a user process at the time the address is sent to memory (see the sketch below).
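
A minimal sketch (with assumed values, not taken from the notes) of how the MMU uses the relocation register at run time:

```c
#include <stdio.h>

int main(void)
{
    unsigned long relocation = 14000; /* assumed contents of the relocation register  */
    unsigned long logical    = 346;   /* assumed logical address generated by the CPU */

    /* The MMU adds the relocation register to the logical address to form
       the physical address that is actually sent to memory. */
    unsigned long physical = relocation + logical;
    printf("logical %lu -> physical %lu\n", logical, physical); /* 346 -> 14346 */
    return 0;
}
```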

Swapping

A process can be swapped temporarily out of memory to a backing store and then brought back into memory for continued execution. Assume a multiprogramming environment with a round-robin CPU-scheduling algorithm. When a quantum expires, the memory manager will start to swap out the process that just finished and swap in another process into the memory space that has been freed. When each process finishes its quantum, it will be swapped with another process.


If a higher-priority process arrives and wants service, the memory manager can swap out a lower-priority process so that it can load and execute the higher-priority process. When the higher-priority process finishes, the lower-priority process can be swapped back in and continued. This variant of swapping is sometimes called roll out, roll in.

A process that is swapped out will be swapped back into the same memory space that it occupied previously.

If binding is done at assembly or load time, then the process cannot be moved to a different location. If execution-time binding is being used, then it is possible to swap a process into a different memory space.

Swapping requires a backing store. The backing store is commonly a fast disk. It must be large enough to accommodate copies of all memory images for all users.

The context-switch time in such a swapping system is fairly high. Let us assume that the user process is of size 100 KB and the backing store is a standard hard disk with a transfer rate of 1 MB per second. The actual transfer of the 100 KB process to or from memory takes

100 KB / 1000 KB per second = 1/10 second

= 100 milliseconds

Contiguous Allocation

# The main memory must accommodate both the operating system and the various user processes.
# The memory is usually divided into two partitions, one for the resident operating system, and one for the user processes.

Fragmentation

Fragmentation is a phenomenon that occurs in computer memory, such as Random Access Memory (RAM) or hard disks, and causes wastage and inefficient use of free space. Because the efficient use of available space is hindered, it causes performance issues as well.

Both internal fragmentation and external fragmentation are phenomena in which memory is wasted. Internal fragmentation occurs in fixed-size memory allocation, while external fragmentation occurs in dynamic memory allocation. When an allocated partition is occupied by a program smaller than the partition, the remaining space is wasted, causing internal fragmentation. When enough adjacent space cannot be found after loading and unloading of programs, because the free space is scattered here and there, this causes external fragmentation. Fragmentation can occur in any memory device, such as RAM, hard disks and flash drives.

Internal Fragmentation:

[Figure: internal fragmentation with fixed-size partitions]

Consider the figure above, where a fixed-size memory allocation mechanism is being followed. Initially, the memory is empty and the allocator has divided the memory into fixed-size partitions. Later, three programs named A, B, C have been loaded into the first three partitions while the 4th partition is still free. Program A matches the size of the partition, so there is no wastage in that partition, but Program B and Program C are smaller than the partition size. So in partition 2 and partition 3 there is remaining free space. However, this free space is unusable because the memory allocator only assigns full partitions to programs, not parts of them. This wastage of free space is called internal fragmentation.

External Fragmentation:

[Figure: external fragmentation with dynamically allocated memory]

Consider the figure above, where memory allocation is done dynamically. In dynamic memory allocation, the allocator allocates only the exact size needed by each program. First the memory is completely free. Then Programs A, B, C, D and E of different sizes are loaded one after the other, and they are placed in memory contiguously in that order. Later, Program A and Program C close and are unloaded from memory. Now there are three free-space areas in the memory, but they are not adjacent. Next, a large program called Program F is to be loaded, but none of the free-space blocks is large enough for Program F. The sum of all the free space is definitely enough for Program F, but due to the lack of adjacency that space is unusable for Program F. This is called external fragmentation.

Paging

External fragmentation is avoided by using paging. In paging, physical memory is broken into fixed-size blocks called frames, and logical memory is broken into blocks of the same size called pages. When a process is to be executed, its pages are loaded into any available memory frames. Every address generated by the CPU is divided into two parts: a page number (p) and a page offset (d).


The page number is used as an index into a page table. The page table contains the base address of each page in physical memory. This base address is combined with the page offset to define the physical memory address that is sent to the memory unit.

The page size is defined by the hardware. The size of a page is typically a power of 2, varying between 512 bytes and 8192 bytes per page, depending on the computer architecture.
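
Because the page size is a power of 2, the page number and offset can be extracted with shifts and masks. The sketch below is only an illustration: the 4 KB page size and the contents of the page table are assumed, not taken from the notes.

```c
#include <stdio.h>

#define PAGE_SIZE   4096UL  /* assumed page size: 4 KB = 2^12 */
#define OFFSET_BITS 12UL

/* Hypothetical page table: index = page number p, value = frame number. */
static unsigned long page_table[] = { 5, 9, 7, 3 };

/* Translate a logical address into a physical address. */
static unsigned long translate(unsigned long logical)
{
    unsigned long p = logical >> OFFSET_BITS;    /* page number         */
    unsigned long d = logical & (PAGE_SIZE - 1); /* page offset         */
    return page_table[p] * PAGE_SIZE + d;        /* frame base + offset */
}

int main(void)
{
    /* Logical address 8200 = page 2, offset 8; page 2 maps to frame 7,
       so the physical address is 7 * 4096 + 8 = 28680. */
    printf("%lu\n", translate(8200));
    return 0;
}
```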

When we use a paging scheme, we have no external fragmentation: Any free frame can be allocated to a process that needs it.

When a process arrives in the system to be executed, its size, expressed in pages, is examined. Each page of the process needs one frame. Thus, if the process requires n pages, there must be at least n frames available in memory. If there are n frames available, they are allocated to this arriving process. The first page of the process is loaded into one of the allocated frames and the frame number is put in the page table for this process. The next page is loaded into another frame, and its frame number is put into the page table, and so on.

The user program views memory as one single contiguous space, containing only this one program. But the user program is actually scattered throughout physical memory, and logical addresses are translated into physical addresses.

Segmentation

A user program can be subdivided using segmentation, in which the program and its associated data are divided into a number of segments. It is not required that all segments of all programs be of the same length, although there is a maximum segment length.


# A logical address using segmentation consists of two parts: a segment number (s) and an offset (o).
# Because of the use of unequal-size segments, segmentation is similar to dynamic partitioning.
# In segmentation, a program may occupy more than one partition, and these partitions need not be contiguous.
# Segmentation eliminates internal fragmentation but, like dynamic partitioning, it suffers from external fragmentation.

However, because a process is broken up into a number of smaller pieces, the external fragmentation should be less. Whereas paging is invisible to the programmer, segmentation is usually visible and is provided as a convenience for organizing programs and data.
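
A rough sketch of segment-based address translation (the segment table contents below are made up for illustration): the offset is checked against the segment's limit and, if legal, added to the segment's base.

```c
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical segment table: each entry holds the segment's base address
   in physical memory and its length (limit). */
struct segment { unsigned long base, limit; };

static struct segment seg_table[] = {
    { 1400, 1000 }, /* segment 0 */
    { 6300,  400 }, /* segment 1 */
    { 4300,  400 }, /* segment 2 */
};

/* Translate a (segment number, offset) pair into a physical address. */
static unsigned long translate(unsigned int s, unsigned long o)
{
    if (o >= seg_table[s].limit) {      /* offset beyond the segment length */
        fprintf(stderr, "segmentation trap\n");
        exit(EXIT_FAILURE);
    }
    return seg_table[s].base + o;
}

int main(void)
{
    printf("%lu\n", translate(2, 53));  /* 4300 + 53 = 4353 */
    return 0;
}
```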

Virtual Memory

Virtual memory is a technique that allows the execution of processes that may not be completely in memory. The main visible advantage of this scheme is that programs can be larger than physical memory. Virtual memory is the separation of user logical memory from physical memory; this separation allows an extremely large virtual memory to be provided for programmers when only a smaller physical memory is available.

Virtual memory is commonly implemented by demand paging. It can also be implemented in a segmentation system. Demand segmentation can also be used to provide virtual memory.

Demand Paging

A demand-paging system is quite similar to a paging system with swapping. When we want to execute a process, we swap it into memory. Rather than swapping the entire process into memory, however, we use a lazy swapper called a pager. When a process is to be swapped in, the pager guesses which pages will be used before the process is swapped out again. Instead of swapping in a whole process, the pager brings only those necessary pages into memory. Thus, it avoids reading into memory pages that will not be used anyway, decreasing the swap time and the amount of physical memory needed.


Access to a page marked invalid causes a page-fault trap. This trap is the result of the operating system's failure to bring the desired page into memory. But a page fault can be handled.
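
The role of the valid bit and the page-fault trap can be illustrated with a small simulation (not an actual OS routine; frame allocation and the disk read are simplified away): a page is brought into memory only the first time it is referenced.

```c
#include <stdio.h>
#include <stdbool.h>

#define NUM_PAGES 8

/* Simplified page-table entry: a frame number plus a valid (present) bit. */
struct pte { int frame; bool valid; };

static struct pte page_table[NUM_PAGES]; /* all entries start out invalid     */
static int next_free_frame = 0;          /* assume a free frame always exists */

/* Toy demand paging: an access to an invalid page "faults" and the page is
   loaded on the spot; the disk read itself is omitted. */
static void access_page(int page)
{
    if (!page_table[page].valid) {
        page_table[page].frame = next_free_frame++;
        page_table[page].valid = true;
        printf("page %d: fault, loaded into frame %d\n", page, page_table[page].frame);
    } else {
        printf("page %d: already in frame %d\n", page, page_table[page].frame);
    }
}

int main(void)
{
    int refs[] = { 0, 2, 0, 3, 2 };
    for (int i = 0; i < 5; i++)
        access_page(refs[i]);
    return 0;
}
```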

Page Replacement Algorithm

Page replacement algorithms are the techniques by which the operating system decides which memory pages to swap out (write to disk) when a page of memory needs to be allocated. Page replacement happens whenever a page fault occurs and a free page cannot be used for the allocation, either because no free pages are available or because the number of free pages is lower than required.
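
FIFO (first-in, first-out) is one of the simplest page replacement algorithms: the page that has been in memory the longest is evicted. The sketch below is an illustration only; the reference string and the number of frames are assumed.

```c
#include <stdio.h>

#define NUM_FRAMES 3

int main(void)
{
    int refs[] = { 7, 0, 1, 2, 0, 3, 0, 4 };  /* assumed page reference string */
    int frames[NUM_FRAMES];
    int n = sizeof refs / sizeof refs[0];
    int used = 0, oldest = 0, faults = 0;

    for (int i = 0; i < n; i++) {
        int hit = 0;
        for (int j = 0; j < used; j++)
            if (frames[j] == refs[i]) { hit = 1; break; }
        if (!hit) {                        /* page fault                        */
            faults++;
            if (used < NUM_FRAMES) {
                frames[used++] = refs[i];  /* a free frame is still available   */
            } else {
                frames[oldest] = refs[i];  /* replace the oldest resident page  */
                oldest = (oldest + 1) % NUM_FRAMES;
            }
        }
    }
    printf("page faults: %d\n", faults);   /* 7 faults for this string */
    return 0;
}
```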

Daisy chain

When device A has a cable that plugs into device B, and device B has a cable that plugs into device C, and device C plugs into a port on the computer, this arrangement is called a daisy chain. It usually operates as a bus.

Controller

A controller is a collection of electronics that can operate a port, a bus, or a device. A serial-port controller is an example of a simple device controller. This is a single chip in the computer that controls the signals on the wires of a serial port.

Status register: The status register contains bits that can be read by the host. These bits indicate states such as whether the current command has completed, whether a byte is available to be read from the data-in register, and whether there has been a device error.

Control register: The control register can be written by the host to start a command or to change the mode of a device.

Data-in register: The data-in register is read by the host to get input.

Data-out register: The data-out register is written by the host to send output.

Polling

Polling is a process by which a host waits for a controller response. It is a looping process, reading the status register over and over until the busy bit of the status register becomes clear.
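
A busy-wait polling loop over the controller registers described above can be sketched as follows; the register layout here is simulated in plain C purely for illustration, whereas a real driver would read memory-mapped or port addresses defined by the hardware.

```c
#include <stdio.h>
#include <stdint.h>

/* Simulated device-controller registers (layout is illustrative only). */
struct controller {
    uint8_t status;   /* read by the host: busy, data-ready, error bits */
    uint8_t control;  /* written by the host to start a command         */
    uint8_t data_in;  /* read by the host to get input                  */
    uint8_t data_out; /* written by the host to send output             */
};

#define BUSY_BIT 0x01

/* Polling: read the status register over and over until the busy bit clears,
   then read the input byte from the data-in register. */
static uint8_t poll_and_read(volatile struct controller *dev)
{
    while (dev->status & BUSY_BIT)
        ;                      /* busy-wait while the device is still working */
    return dev->data_in;
}

int main(void)
{
    struct controller dev = { .status = 0x00, .data_in = 'A' }; /* device idle */
    printf("%c\n", poll_and_read(&dev));
    return 0;
}
```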

Direct Memory Access (DMA)

Many computers avoid burdening the main CPU with programmed I/O by offloading some of this work to a special-purpose processor. This type of processor is called a Direct Memory Access (DMA) controller. Handshaking is a process between the DMA controller and the device controller. It is performed via wires using the signals DMA-request and DMA-acknowledge.
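
The division of labour can be sketched roughly as follows: the CPU fills in a command block (source, destination, byte count) and the DMA controller performs the copy. The structure and the stand-in transfer function below are made up for illustration, since the real handshake happens in hardware.

```c
#include <stdio.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical DMA command block the CPU would hand to the controller. */
struct dma_command {
    const void *source;      /* where to copy from          */
    void       *destination; /* where to copy to            */
    size_t      count;       /* number of bytes to transfer */
};

/* Toy stand-in for the DMA controller: in real hardware this copy proceeds
   without the CPU, using DMA-request/DMA-acknowledge handshaking with the
   device controller, and ends with an interrupt to the CPU. */
static void dma_transfer(const struct dma_command *cmd)
{
    memcpy(cmd->destination, cmd->source, cmd->count);
}

int main(void)
{
    char disk_buffer[] = "block read from device";
    char memory[32];

    struct dma_command cmd = { disk_buffer, memory, sizeof disk_buffer };
    dma_transfer(&cmd);             /* the CPU would be free during this copy */
    printf("%s\n", memory);
    return 0;
}
```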


That completes Operating System (for IBPS SO). Don't forget to read the previous articles.

