Question 1: Paging and Segmentation
Answer to question a)
The “working set” is shorthand for the set of pages that a process is currently using, and it is determined by which pages the CPU actually happens to reference as the program executes.
The main point of the working-set algorithm is to distinguish the pages inside the working set from those outside it, using two pieces of per-page information: the approximate time the page was last used and the R (Referenced) bit. The empty white rectangle in the figure represents the other page-table fields not needed by this algorithm, such as the page frame number, the protection bits, and the M (Modified) bit.
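To illustrate (a sketch, not part of the original answer): a working-set-style scan over page-table entries needs only those two fields. The structure and field names below are assumptions for illustration, with tau as the working-set window in virtual-time ticks.

#include <stdbool.h>

/* Hypothetical page-table entry holding only the fields this algorithm needs. */
struct pte {
    bool     referenced;   /* R bit, set by hardware on every reference     */
    unsigned last_use;     /* approximate virtual time of last use          */
    /* other fields (page frame number, protection bits, M bit) omitted     */
};

/* Called for each entry during the page-fault scan. Returns true if the
   page has fallen out of the working set and is a candidate for eviction. */
bool outside_working_set(struct pte *p, unsigned now, unsigned tau)
{
    if (p->referenced) {          /* referenced this tick: refresh and keep */
        p->referenced = false;
        p->last_use = now;
        return false;
    }
    return (now - p->last_use) > tau;  /* unreferenced and older than tau   */
}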
Answer to question b)
Memory management is a basic function of the operating system: it allocates memory to processes for execution and deallocates that memory when a process no longer needs it. Here we compare two memory-management schemes, paging and segmentation. The essential difference between them is that a page is a fixed-size block, whereas a segment is a variable-size block.
- The fundamental difference between segmentation and paging is that a page is always of fixed size, while a segment's size varies.
- Paging can lead to internal fragmentation: because pages are of fixed size, a process may not fill its last page completely, leaving unused space inside the block. Segmentation can instead lead to external fragmentation, since memory fills up with variable-sized blocks and holes form between them.
- In paging, the user supplies a single integer address, which the hardware divides into a page number and an offset. In segmentation, the user specifies the address as two quantities: a segment number and an offset.
- The page size is chosen and fixed by the hardware, whereas the size of a segment is specified by the user.
In a combined paging/segmentation system, a user address space is divided into a number of segments at the discretion of the programmer. Each segment is in turn divided into a number of fixed-size pages, each equal in length to a main-memory frame. If a segment is shorter than a page, the segment occupies just one page. From the programmer's point of view, a logical address still consists of a segment number and a segment offset. From the system's point of view, the segment offset is interpreted as a page number and a page offset within the specified segment.
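As a sketch of that address split (the bit widths are assumptions, not given in the question):

#include <stdint.h>
#include <stdio.h>

/* Assumed layout: 8-bit segment number, 12-bit page number, 12-bit offset. */
#define OFFSET_BITS 12
#define PAGE_BITS   12

int main(void)
{
    uint32_t logical = 0x01234ABC;                /* example logical address */
    uint32_t offset  = logical & ((1u << OFFSET_BITS) - 1);
    uint32_t page    = (logical >> OFFSET_BITS) & ((1u << PAGE_BITS) - 1);
    uint32_t segment = logical >> (OFFSET_BITS + PAGE_BITS);

    /* From the user's view: (segment, segment offset); from the system's
       view the segment offset is further split into (page, offset).      */
    printf("segment=%u page=%u offset=%u\n", segment, page, offset);
    return 0;
}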
Answer to question c)
Reference string: 1, 2, 3, 4, 1, 2, 5, 3, 4, 1, 3, 2, 5, 6, 4, 3, 7, 6, 3, 4 (three page frames, as in the tables below)
FIFO

| Reference | 1 | 2 | 3 | 4 | 1 | 2 | 5 | 3 | 4 | 1 | 3 | 2 | 5 | 6 | 4 | 3 | 7 | 6 | 3 | 4 |
|-----------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Frame 1   | 1 | 1 | 1 | 4 | 4 | 4 | 5 | 5 | 5 | 1 | 1 | 1 | 1 | 6 | 6 | 6 | 7 | 7 | 7 | 7 |
| Frame 2   |   | 2 | 2 | 2 | 1 | 1 | 1 | 3 | 3 | 3 | 3 | 2 | 2 | 2 | 4 | 4 | 4 | 6 | 6 | 6 |
| Frame 3   |   |   | 3 | 3 | 3 | 2 | 2 | 2 | 4 | 4 | 4 | 4 | 5 | 5 | 5 | 3 | 3 | 3 | 3 | 4 |
| Hit/Fault | F | F | F | F | F | F | F | F | F | F | H | F | F | F | F | F | F | F | H | F |

Page hits: 2
Page faults: 20 - 2 = 18
LRU

| Reference | 1 | 2 | 3 | 4 | 1 | 2 | 5 | 3 | 4 | 1 | 3 | 2 | 5 | 6 | 4 | 3 | 7 | 6 | 3 | 4 |
|-----------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Frame 1   | 1 | 1 | 1 | 4 | 4 | 4 | 5 | 5 | 5 | 1 | 1 | 1 | 5 | 5 | 5 | 3 | 3 | 3 | 3 | 3 |
| Frame 2   |   | 2 | 2 | 2 | 1 | 1 | 1 | 3 | 3 | 3 | 3 | 3 | 3 | 6 | 6 | 6 | 7 | 7 | 7 | 4 |
| Frame 3   |   |   | 3 | 3 | 3 | 2 | 2 | 2 | 4 | 4 | 4 | 2 | 2 | 2 | 4 | 4 | 4 | 6 | 6 | 6 |
| Hit/Fault | F | F | F | F | F | F | F | F | F | F | H | F | F | F | F | F | F | F | H | F |

Page hits: 2
Page faults: 20 - 2 = 18
OPTIMAL

| Reference | 1 | 2 | 3 | 4 | 1 | 2 | 5 | 3 | 4 | 1 | 3 | 2 | 5 | 6 | 4 | 3 | 7 | 6 | 3 | 4 |
|-----------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Frame 1   | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 2 | 5 | 6 | 6 | 6 | 6 | 6 | 6 | 6 |
| Frame 2   |   | 2 | 2 | 2 | 2 | 2 | 5 | 3 | 3 | 3 | 3 | 3 | 3 | 3 | 3 | 3 | 3 | 3 | 3 | 3 |
| Frame 3   |   |   | 3 | 4 | 4 | 4 | 4 | 4 | 4 | 4 | 4 | 4 | 4 | 4 | 4 | 4 | 7 | 7 | 7 | 4 |
| Hit/Fault | F | F | F | F | H | H | F | F | H | H | H | F | F | F | H | H | F | H | H | F |

Page hits: 9
Page faults: 20 - 9 = 11
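These counts can be verified with a short simulation. Below is a minimal FIFO fault counter (a sketch, not part of the original answer); LRU and optimal follow the same pattern with a different victim-selection rule.

#include <stdio.h>

int main(void)
{
    int refs[] = {1,2,3,4,1,2,5,3,4,1,3,2,5,6,4,3,7,6,3,4};
    int n = sizeof refs / sizeof refs[0];
    int frames[3] = {0, 0, 0};          /* 0 marks an empty frame        */
    int next = 0, faults = 0;           /* next: oldest frame, FIFO-wise */

    for (int i = 0; i < n; i++) {
        int hit = 0;
        for (int f = 0; f < 3; f++)
            if (frames[f] == refs[i]) hit = 1;
        if (!hit) {                     /* fault: replace the oldest page */
            frames[next] = refs[i];
            next = (next + 1) % 3;
            faults++;
        }
    }
    printf("FIFO faults: %d, hits: %d\n", faults, n - faults);  /* 18, 2 */
    return 0;
}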
Question 2: Inter-process Communication
Answer to question a)
- We can use a protocol to prevent or avoid deadlocks, ensuring that the system never enters a deadlocked state.
- To guarantee that deadlocks never occur, the system can use either a deadlock-prevention or a deadlock-avoidance scheme.
- Deadlock prevention provides a set of methods for ensuring that at least one of the necessary conditions (Section 7.2.1) cannot hold (a lock-ordering sketch follows this list).
- The system is designed so that the possibility of deadlock is excluded a priori (at compile time/statically, by design).
- Deadlock avoidance requires that the OS be given, in advance, additional information about which resources a process will request and use during its lifetime. With this extra knowledge, it can decide for each request whether or not the process should wait.
- The decision is made dynamically, by checking whether the request, if granted, could possibly lead to a deadlock (at run time/dynamically, before it happens).
- Alternatively, we can allow the system to enter a deadlocked state, detect it, and recover.
- If a system uses neither a deadlock-prevention nor a deadlock-avoidance algorithm, then a deadlock situation may arise.
- In that case the system can provide an algorithm that examines the system state to determine whether a deadlock has occurred, together with an algorithm to recover from the deadlock.
- Let deadlocks occur, detect them, and take action (at run time/dynamically, after it happens).
- Finally, we can ignore the problem altogether (the Ostrich Algorithm: maybe if you ignore it, it will ignore you) and pretend that deadlocks never occur in the system.
- If a system neither guarantees that a deadlock will never occur nor provides a mechanism for deadlock detection and recovery, then we may arrive at a situation where the system is in a deadlocked state yet has no way of recognizing what has happened.
- Eventually the system will stop functioning and will have to be restarted manually.
- In many systems deadlocks occur infrequently (say, once a year), so this approach is cheaper than the prevention, avoidance, or detection-and-recovery strategies.
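To make the prevention idea concrete, here is a minimal sketch (not from the original answer) that breaks the circular-wait condition by imposing a global lock order; the two-lock scenario is an assumption for illustration.

#include <pthread.h>
#include <stdio.h>

pthread_mutex_t lock_a = PTHREAD_MUTEX_INITIALIZER;
pthread_mutex_t lock_b = PTHREAD_MUTEX_INITIALIZER;

/* Every thread takes lock_a before lock_b. With a single global lock
   order, a circular wait (one of the four necessary deadlock conditions)
   can never form, so deadlock is prevented by design. */
void *worker(void *arg)
{
    pthread_mutex_lock(&lock_a);
    pthread_mutex_lock(&lock_b);
    printf("thread %ld in critical section\n", (long)arg);
    pthread_mutex_unlock(&lock_b);
    pthread_mutex_unlock(&lock_a);
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, (void *)1L);
    pthread_create(&t2, NULL, worker, (void *)2L);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return 0;
}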
Answer to question b)
| Process | Allocation (A B C) | Max (A B C) | Available (A B C) |
|---------|--------------------|-------------|-------------------|
| P0      | 1 2 2              | 9 8 8       | 2 2 2             |
| P1      | 1 1 2              | 4 3 3       |                   |
| P2      | 2 1 1              | 2 1 7       |                   |
| P3      | 2 2 1              | 3 3 2       |                   |
| P4      | 2 2 2              | 7 7 7       |                   |

Need Matrix (Need = Max - Allocation)

| Process | A | B | C |
|---------|---|---|---|
| P0      | 8 | 6 | 6 |
| P1      | 3 | 2 | 1 |
| P2      | 0 | 0 | 6 |
| P3      | 1 | 1 | 1 |
| P4      | 5 | 5 | 5 |
P0: need ≤ available? (8,6,6) ≤ (2,2,2)? No, so P0 must wait.
P1: (3,2,1) ≤ (2,2,2)? No, so P1 must wait.
P2: (0,0,6) ≤ (2,2,2)? No, so P2 must wait.
P3: (1,1,1) ≤ (2,2,2)? Yes, so P3 executes.
Now available = allocation(P3) + available = (2,2,1) + (2,2,2) = (4,4,3)
P4: (5,5,5) ≤ (4,4,3)? No, so P4 must wait.
P0: (8,6,6) ≤ (4,4,3)? No, so P0 must wait.
P1: (3,2,1) ≤ (4,4,3)? Yes, so P1 executes.
Now available = (1,1,2) + (4,4,3) = (5,5,5)
P2: (0,0,6) ≤ (5,5,5)? No, so P2 must wait.
P4: (5,5,5) ≤ (5,5,5)? Yes, so P4 executes.
Now available = (2,2,2) + (5,5,5) = (7,7,7)
P0: (8,6,6) ≤ (7,7,7)? No, so P0 must wait.
P2: (0,0,6) ≤ (7,7,7)? Yes, so P2 executes.
Now available = (2,1,1) + (7,7,7) = (9,8,8)
P0: (8,6,6) ≤ (9,8,8)? Yes, so P0 executes.
Now available = (1,2,2) + (9,8,8) = (10,10,10)
Safe sequence: {P3, P1, P4, P2, P0}
The system is in a safe state.
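The same walkthrough in code form: a minimal Banker's safety check (a sketch) with the matrices above hard-coded.

#include <stdio.h>

#define N 5  /* processes */
#define M 3  /* resource types A, B, C */

int main(void)
{
    int alloc[N][M] = {{1,2,2},{1,1,2},{2,1,1},{2,2,1},{2,2,2}};
    int need[N][M]  = {{8,6,6},{3,2,1},{0,0,6},{1,1,1},{5,5,5}};
    int avail[M]    = {2,2,2};
    int done[N]     = {0}, seq[N], count = 0;

    while (count < N) {
        int progress = 0;
        for (int p = 0; p < N; p++) {
            if (done[p]) continue;
            int ok = 1;
            for (int r = 0; r < M; r++)
                if (need[p][r] > avail[r]) ok = 0;
            if (ok) {                /* run p, then reclaim its allocation */
                for (int r = 0; r < M; r++) avail[r] += alloc[p][r];
                done[p] = 1; seq[count++] = p; progress = 1;
            }
        }
        if (!progress) { printf("Unsafe state\n"); return 1; }
    }
    printf("Safe sequence:");
    for (int i = 0; i < N; i++) printf(" P%d", seq[i]);
    printf("\n");                    /* prints: P3 P1 P4 P2 P0 */
    return 0;
}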
Answer to question c)
#include <stdio.h>
#include <stdlib.h>

/* Shared state: a buffer of size 3, tracked by counting "semaphores".
   wait/signal here are plain functions, not atomic operations; this is
   a single-threaded teaching toy, not a real concurrent solution.      */
int mutex = 1, full = 0, empty = 3, x = 0;

int wait(int s)   { return --s; }   /* P operation */
int signal(int s) { return ++s; }   /* V operation */

void producer(void)
{
    mutex = wait(mutex);
    full  = signal(full);           /* one more filled slot  */
    empty = wait(empty);            /* one fewer empty slot  */
    x++;
    printf("\nProducer produces item %d", x);
    mutex = signal(mutex);
}

void consumer(void)
{
    mutex = wait(mutex);
    full  = wait(full);             /* one fewer filled slot */
    empty = signal(empty);          /* one more empty slot   */
    printf("\nConsumer consumes item %d", x);
    x--;
    mutex = signal(mutex);
}

int main(void)
{
    int n;
    printf("\n1.Producer\n2.Consumer\n3.Exit");
    while (1) {
        printf("\nEnter your choice:");
        scanf("%d", &n);
        switch (n) {
        case 1:
            if (mutex == 1 && empty != 0) producer();
            else printf("Buffer is full!!");
            break;
        case 2:
            if (mutex == 1 && full != 0) consumer();
            else printf("Buffer is empty!!");
            break;
        case 3:
            exit(0);
        }
    }
    return 0;
}
Question 3: Files and File Systems
Answer to question a)
The File Allocation Table (FAT) used by DOS is a variation of linked allocation in which all the links are stored in a separate table at the beginning of the disk. The benefit of this approach is that the FAT can be cached in memory, greatly improving random-access speed.
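To make the linked-allocation idea concrete, here is a minimal sketch of walking a file's block chain through an in-memory FAT; the table contents are invented for illustration.

#include <stdio.h>

#define END  -1   /* end-of-file marker */
#define FREE -2   /* free block         */

/* Toy FAT: fat[b] is the number of the block that follows block b. */
int fat[8] = {3, END, 5, 1, END, 4, FREE, FREE};

int main(void)
{
    /* The file starting at block 0 occupies blocks 0 -> 3 -> 1. */
    for (int b = 0; b != END; b = fat[b])
        printf("block %d\n", b);
    return 0;
}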
Answer to question b)
In contiguous allocation, files are assigned to contiguous areas of secondary storage. A user specifies in advance the size of the area needed to hold the file to be created. If the desired amount of contiguous space is not available, the file cannot be created.
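Under contiguous allocation, mapping a logical block to a disk block is then a single addition; a minimal sketch (names assumed):

/* Logical block i of a file that starts at disk block 'start' and is
   'length' blocks long maps directly, with a simple bounds check. */
int physical_block(int start, int length, int i)
{
    if (i < 0 || i >= length) return -1;   /* out of range */
    return start + i;
}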
Answer to question c)
Many seemingly simple I/O operations are actually composed of sub-operations. For example, deleting a file on an i-node-based system (really, deleting the last link to the i-node) requires removing the entry from the directory, placing the i-node on the free list, and placing the file's blocks on the free list. Since I/O operations can dominate the time required to complete user processes, considerable effort has been spent on improving their performance. All general-purpose file systems use a (non-demand) paging-like scheme for file storage (read-only systems, which often use contiguous allocation, are the major exception). Files are broken into fixed-size pieces, called blocks, that can be scattered over the disk. Note that although this resembles paging, it is not called paging (and may not have an explicit page table).
In reality it is more complicated, since various optimizations are performed to try to store consecutive blocks of a single file sequentially on the disk. This is discussed below.
Note that all the blocks of a file are stored on the disk, i.e., it is not demand paging.
One can imagine systems that do use demand-paging-like algorithms for disk block storage. In such a system only some of the file blocks would be stored on disk, with the rest on tertiary storage (some kind of tape). Perhaps NASA does this with their enormous datasets. A region of kernel memory is dedicated to keeping track of the free blocks: one bit is assigned to each block of the file system, and the bit is 1 if the block is free.
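A minimal sketch of such a free-block bitmap (helper names are illustrative, not from any particular kernel):

#include <stdint.h>

#define NBLOCKS 4096
static uint8_t freemap[NBLOCKS / 8];   /* kernel memory: 1 bit per block */

static int  is_free(int b)   { return (freemap[b / 8] >> (b % 8)) & 1; }
static void mark_free(int b) { freemap[b / 8] |=  (uint8_t)(1 << (b % 8)); }
static void mark_used(int b) { freemap[b / 8] &= (uint8_t)~(1 << (b % 8)); }

/* First-fit scan for a free block; returns -1 if the disk is full.
   (A real file system would initialize the map from the disk at mount.) */
int alloc_block(void)
{
    for (int b = 0; b < NBLOCKS; b++)
        if (is_free(b)) { mark_used(b); return b; }
    return -1;
}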
Answer to question d)
FCFS stands for First Come, First Served: as the name says, whichever request arrives first is served first. So:
- Jobs are executed on a first come, first served basis.
- It is a non-preemptive scheduling algorithm.
- Easy to understand and implement.
- Its implementation is based on a FIFO queue.
- Poor in performance, as the average wait time is high.
Wait time of each process is as follows:

| Process | Wait time (service time - arrival time) |
|---------|-----------------------------------------|
| P0      | 0 - 0 = 0                               |
| P1      | 5 - 1 = 4                               |
| P2      | 8 - 2 = 6                               |
| P3      | 16 - 3 = 13                             |

Average wait time: (0 + 4 + 6 + 13) / 4 = 5.75
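The same computation in code, using the arrival and service-start times from the table:

#include <stdio.h>

int main(void)
{
    int arrival[] = {0, 1, 2, 3};      /* P0..P3 arrival times        */
    int start[]   = {0, 5, 8, 16};     /* times service begins (FCFS) */
    int total = 0;

    for (int i = 0; i < 4; i++)
        total += start[i] - arrival[i];        /* 0 + 4 + 6 + 13      */
    printf("Average wait = %.2f\n", total / 4.0);   /* prints 5.75    */
    return 0;
}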
SSTF: In SSTF (Shortest Seek Time First), requests with the shortest seek time are executed first. The seek time of each pending request is calculated in advance, and the queue is scheduled according to those calculated seek times; as a result, the request nearest the disk arm is executed first. SSTF is a clear improvement over FCFS, since it reduces the average response time and increases system throughput.
Advantages:
- Average response time decreases
- Throughput increases
Disadvantages:
- Overhead of calculating seek times in advance
- Can cause starvation of a request that has a high seek time compared to incoming requests
- High variance of response time, since SSTF favors only some requests
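A minimal sketch of SSTF's core selection loop: from the pending queue, always pick the request whose cylinder is nearest the current head position. The request queue and starting head position below are invented for illustration.

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    int queue[] = {98, 183, 37, 122, 14, 124, 65, 67};  /* pending cylinders */
    int n = 8, head = 53, moved = 0;
    int served[8] = {0};

    for (int s = 0; s < n; s++) {
        int best = -1;
        for (int i = 0; i < n; i++)     /* nearest unserved request */
            if (!served[i] &&
                (best < 0 || abs(queue[i] - head) < abs(queue[best] - head)))
                best = i;
        moved += abs(queue[best] - head);
        head = queue[best];
        served[best] = 1;
        printf("serve cylinder %d\n", head);
    }
    printf("total head movement: %d\n", moved);   /* 236 for this queue */
    return 0;
}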