Operating System - Short Question Answer

In this section of Operating System Short Questions and Answers, we have listed some of the important short questions with answers that will help students answer them correctly in their university written exams.

1. What is the relationship between operating systems and computer hardware?

An operating system makes the computer hardware available to application programs. Without an operating system, we cannot conveniently access the computer hardware.

2. How Buffering can improve the performance of a Computer system?

If the CPU and the I/O devices are of nearly the same speed, buffering helps make the CPU and the I/O devices work at full speed in such a way that neither the CPU nor the I/O devices ever sit idle.

Normally the CPU is much faster than an input device. In this case the CPU constantly faces an empty input buffer and sits idle, waiting for the input device to read a record into the buffer. For output, the CPU continues to work at full speed until the output buffer is full, and then it starts waiting.

Thus buffering proves useful for jobs that have a balance between computational work and I/O operations. In other cases, the buffering scheme may not work well.

3. What inconveniences can a user face while interacting with a computer system that has no operating system?

An operating system is a required component of the computer system.

Without an operating system, the computer hardware is only an inactive electronic machine, which is inconvenient for a user who wants to execute programs.

The computer hardware understands only machine language, and it is difficult to develop each and every program in machine language in order to execute it.

Thus, without an operating system, executing user programs and solving user problems is extremely difficult.

4. What is the difference between Job and Process?

Process

A process refers to a program under execution. This program may be an application or system program.

Job

A job means an application program; it is not a system program.

5. What do you mean by FCFS?

FCFS (First Come First Serve) is a type of OS scheduling algorithm that executes processes in the same order in which processes arrive. In simple words, the process that arrives first will be executed first.

It is non-preemptive in nature. FCFS scheduling can lead to long waiting times (the convoy effect) if the burst time of the first process is the longest among all the jobs.

Burst time here means the time that is required in milliseconds by the process for its execution. It is also considered the easiest and simplest OS scheduling algorithm as compared to others. Implementation of FCFS is generally managed with help of the FIFO (First In First Out) queue.
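As an illustration, FCFS with a FIFO ready queue can be sketched in a few lines of Python. The process names, arrival times and burst times here are made up for the example:

```python
from collections import deque

def fcfs(processes):
    """processes: list of (name, arrival, burst) tuples, sorted by arrival."""
    queue = deque(processes)          # the FIFO ready queue
    time, waits = 0, {}
    while queue:
        name, arrival, burst = queue.popleft()
        time = max(time, arrival)     # CPU may sit idle until the job arrives
        waits[name] = time - arrival  # time spent waiting in the queue
        time += burst                 # run to completion (non-preemptive)
    return waits

# A long first burst delays every job behind it (hypothetical values)
waits = fcfs([("P1", 0, 10), ("P2", 1, 2), ("P3", 2, 2)])
print(waits)  # {'P1': 0, 'P2': 9, 'P3': 10}
```

Note how the short jobs P2 and P3 wait almost the entire length of P1's burst, which is exactly the convoy effect described above.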

6. Consider the longest remaining time first (LRTF) scheduling algorithm. In LRTF ties are broken by giving priority to the process with the lowest process id. Calculate the average waiting time.
Process    Burst Time    Arrival Time
P1         4             0
P2         2             1
P3         5             2


Solution:

With preemptive LRTF (a scheduling decision at every unit of time, ties broken in favour of the lowest process id), the Gantt chart is:

P1 (0-2), P3 (2-5), P1 (5-6), P2 (6-7), P3 (7-8), P1 (8-9), P2 (9-10), P3 (10-11)

Waiting time for

P1 = (5-2) + (8-6) = 5

P2 = (6-1) + (9-7) = 7

P3 = (7-5) + (10-8) = 4

Hence,

Average waiting time = (5+7+4)/3 = 16/3 ≈ 5.33
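As a cross-check, LRTF can be simulated directly. This is only a sketch, assuming preemptive LRTF with a decision at every time unit and ties broken by the lowest pid; the burst and arrival values come from the table in the question:

```python
def lrtf(processes):
    """processes: {pid: (arrival, burst)}; preemptive, re-decided every
    time unit, ties broken by the lowest process id."""
    rem = {p: b for p, (a, b) in processes.items()}
    finish, t = {}, 0
    while rem:
        ready = [p for p in rem if processes[p][0] <= t]
        if not ready:                 # no process has arrived yet
            t += 1
            continue
        p = min(ready, key=lambda q: (-rem[q], q))  # longest remaining, lowest pid
        rem[p] -= 1
        t += 1
        if rem[p] == 0:
            finish[p] = t
            del rem[p]
    # waiting time = completion - arrival - burst
    return {p: finish[p] - a - b for p, (a, b) in processes.items()}

waits = lrtf({1: (0, 4), 2: (1, 2), 3: (2, 5)})
print(waits)  # {1: 5, 2: 7, 3: 4}; average = 16/3
```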

7. Explain the main purpose of an operating system?

Operating systems exist for two main purposes. One is that it is designed to make sure a computer system performs well by managing its computational activities. Another is that it provides an environment for the development and execution of programs.

 

8. What is demand paging?

Demand paging is a scheme in which not all of a process's pages need to be in RAM; when a missing (and required) page is referenced, the OS brings it from the disk into RAM.

 

9. What are the advantages of a multiprocessor system?

With an increased number of processors, there is a considerable increase in throughput. It can also save more money because they can share resources. Finally, overall reliability is increased as well.

10. What is kernel?

A kernel is the core of every operating system. It connects applications to the actual processing of data. It also manages all communications between software and hardware components to ensure usability and reliability.

11. What are real-time systems?

Real-time systems are used when rigid time requirements have been placed on the operation of a processor. It has well defined and fixed time constraints.

12. What is a virtual memory?

Virtual memory is a memory-management technique that lets processes execute even when they are not entirely in main memory. This is very useful especially when an executing program cannot fit in the physical memory.

13. Describe the objective of multiprogramming.

The main objective of multiprogramming is to have a process running at all times. With this design, CPU utilization is said to be maximized.

14. What is time- sharing system?

In a Time-sharing system, the CPU executes multiple jobs by switching among them, also known as multitasking. This process happens so fast that users can interact with each program while it is running.

15. What is SMP?

SMP is a short form of Symmetric Multi-Processing. It is the most common type of multiple- processor systems. In this system, each processor runs an identical copy of the operating system, and these copies communicate with one another as needed.

16. How are server systems classified?

Server systems can be classified as either computer-server systems or file server systems. In the first case, an interface is made available for clients to send requests to perform an action. In the second case, provisions are available for clients to create, access and update files.

17. What is asymmetric clustering?

In asymmetric clustering, a machine is in a state known as hot standby mode, where it does nothing but monitor the active server. That machine takes over the active server's role should the server fail.

18. What is a thread?

A thread is a basic unit of CPU utilization. In general, a thread is composed of a thread ID, program counter, register set, and the stack.

19. Give some benefits of multithreaded programming.
  • there is increased responsiveness to the user
  • resource sharing within the process
  • economy
  • utilization of multiprocessing architecture
20. Briefly explain FCFS.

FCFS stands for First-come, first-served. It is one type of scheduling algorithm. In this scheme, the process that requests the CPU first is allocated the CPU first. Implementation is managed by a FIFO queue.

21. What is RR scheduling algorithm?

RR (round-robin) scheduling algorithm is aimed primarily at time-sharing systems. A circular queue is set up in such a way that the CPU scheduler goes around that queue, allocating the CPU to each process for a time interval of around 10 to 100 milliseconds.
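A minimal sketch of round-robin in Python, assuming all processes arrive at time 0; the burst values and the 2-unit quantum are hypothetical:

```python
from collections import deque

def round_robin(bursts, quantum):
    """bursts: {name: burst}; all processes assumed to arrive at time 0."""
    queue = deque(bursts)             # circular ready queue (FIFO order)
    rem = dict(bursts)
    t, finish = 0, {}
    while queue:
        p = queue.popleft()
        run = min(quantum, rem[p])    # run for at most one time slice
        t += run
        rem[p] -= run
        if rem[p]:
            queue.append(p)           # unfinished: back to the tail
        else:
            finish[p] = t
    return finish

finish = round_robin({"P1": 5, "P2": 3, "P3": 1}, quantum=2)
print(finish)  # completion times: {'P3': 5, 'P2': 8, 'P1': 9}
```

Each process gets the CPU in turn for at most one quantum, which is what gives time-sharing systems their interactive feel.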

22. What are necessary conditions which can lead to a deadlock situation in a system?

Deadlock situations occur when four conditions occur simultaneously in a system: Mutual exclusion; Hold and Wait; No preemption; and Circular wait.

23. Enumerate the different RAID levels.

RAID 0 – Non-redundant striping

RAID 1 – Mirrored Disks

RAID 2 – Memory-style error-correcting codes

RAID 3 – Bit-interleaved Parity

RAID 4 – Block-interleaved Parity

RAID 5 – Block-interleaved distributed Parity

RAID 6 – P+Q Redundancy

 

24. Describe Banker’s algorithm

Banker’s algorithm is one form of deadlock-avoidance in a system. It gets its name from a banking system wherein the bank never allocates available cash in such a way that it can no longer satisfy the needs of all of its customers.
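The safety check at the heart of Banker's algorithm can be sketched as follows. The allocation and need matrices are illustrative textbook-style numbers, not taken from this text:

```python
def is_safe(available, alloc, need):
    """Banker's safety check: is there an order in which every process
    can obtain its remaining need and run to completion?"""
    work = list(available)
    finished = [False] * len(alloc)
    order, progress = [], True
    while progress:
        progress = False
        for i in range(len(alloc)):
            if not finished[i] and all(n <= w for n, w in zip(need[i], work)):
                # pretend process i completes and releases its allocation
                work = [w + a for w, a in zip(work, alloc[i])]
                finished[i] = True
                order.append(i)
                progress = True
    return all(finished), order

# A classic textbook-style instance (numbers are illustrative)
alloc = [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]
need  = [[7, 4, 3], [1, 2, 2], [6, 0, 0], [0, 1, 1], [4, 3, 1]]
safe, order = is_safe([3, 3, 2], alloc, need)
print(safe, order)  # True [1, 3, 4, 0, 2]
```

A request is granted only if the state after granting it would still pass this check, mirroring the banker who never hands out cash it cannot recover.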

25. What factors determine whether a detection-algorithm must be utilized in a deadlock avoidance system?

One is that it depends on how often a deadlock is likely to occur under the implementation of this algorithm. The other has to do with how many processes will be affected by deadlock when this algorithm is applied.

26. State the main difference between logical from physical address space.

Logical address refers to the address that is generated by the CPU. On the other hand, physical address refers to the address that is seen by the memory unit.

27. How does dynamic loading aid in better memory space utilization?

With dynamic loading, a routine is not loaded until it is called. This method is especially useful when large amounts of code are needed in order to handle infrequently occurring cases such as error routines.

28. What are overlays?

Overlays are used to enable a process to be larger than the amount of memory allocated to it. The basic idea of this is that only instructions and data that are needed at any given time are kept in memory.

 

29. What is the basic function of paging?

Paging is a memory management scheme that permits the physical address space of a process to be noncontiguous. It avoids the considerable problem of having to fit varied sized memory chunks onto the backing store.
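A toy translation routine illustrates the page-number/offset split; the 4 KiB page size and the page-table contents are assumptions made for the example:

```python
PAGE_SIZE = 4096                      # assume 4 KiB pages

def translate(logical, page_table):
    """Split a logical address into (page, offset), then map the page
    number to a frame number through the page table."""
    page, offset = divmod(logical, PAGE_SIZE)
    frame = page_table[page]          # a real OS would page-fault on a miss
    return frame * PAGE_SIZE + offset

page_table = {0: 5, 1: 2, 2: 9}      # hypothetical page -> frame mapping
physical = translate(8195, page_table)
print(physical)  # page 2, offset 3 -> frame 9 -> 36867
```

Because any free frame can hold any page, the process's pages need not be contiguous in physical memory, which is the whole point of paging.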

30. What is fragmentation?

Fragmentation is wasted memory. It is internal if we are dealing with systems that have fixed-sized allocation units, or external if we are dealing with systems that have variable-sized allocation units.
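For fixed-sized allocation units, internal fragmentation is easy to quantify; the request and unit sizes below are hypothetical:

```python
def internal_fragmentation(request, unit):
    """Fixed-size allocation units: the wasted tail of the last unit."""
    units = -(-request // unit)               # ceiling division
    return units * unit - request

# A 10,000-byte request served in 4,096-byte units (illustrative sizes)
waste = internal_fragmentation(10_000, 4_096)
print(waste)  # 2288 bytes lost inside the allocation
```

External fragmentation is the opposite problem: enough total free memory exists, but it is scattered in holes too small to satisfy a request.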

31. How does swapping result in better memory management?

At regular intervals set by the operating system, processes can be copied from main memory to a backing store and then copied back later. Swapping allows more processes to be run than can fit into memory at one time.

32. Give an example of a Process State.
  • New – means the process is being created
  • Running – means instructions are being executed
  • Waiting – means the process is waiting for some condition or event to occur
  • Ready – means the process is waiting to be assigned to a processor
  • Terminated – means the process has finished execution
33. What is a socket?

A socket provides a connection between two applications. Each endpoint of a communication is a socket.

34. What is Direct Access Method?

Direct Access method is based on a disk model of a file, such that it is viewed as a numbered sequence of blocks or records. It allows arbitrary blocks to be read or written. Direct access is advantageous when accessing large amounts of information.

35. When does thrashing occur?

Thrashing refers to an instance of high paging activity. It happens when the system spends more time paging than executing.

36. What is the best page size when designing an operating system?

The best paging size varies from system to system, so there is no single best when it comes to page size. There are different factors to consider in order to come up with a suitable page size, such as page table, paging time, and its effect on the overall efficiency of the operating system.

37. When designing the file structure for an operating system, what attributes are considered?

Typically, the different attributes for a file structure are naming, identifier, supported file types, and location for the files, size, and level of protection.

38. What is root partition?

Root partition is where the operating system kernel is located. It also contains other potentially important system files that are mounted during boot time.

39. What are device drivers?

Device drivers provide a standard means of representing I/O devices that may be manufactured by different companies. This prevents conflicts whenever such devices are incorporated in a system unit.

40. What are the primary functions of VFS?

VFS, or Virtual File System, separates generic file-system operations from their implementation by defining a clean VFS interface. It is based on a file-representation structure known as a vnode, which contains a numerical designator needed to support network file systems.

41. What are the different types of CPU registers in a typical operating system design?
  • Accumulators
  • Index Registers
  • Stack Pointer
  • General Purpose Registers
42. What is the purpose of an I/O status information?

I/O status information provides information about which I/O devices are to be allocated for a particular process. It also shows which files are opened, and other I/O device state.

43. What is multitasking?

Multitasking is the process within an operating system that allows the user to run several applications at the same time. However, only one application is active at a time for user interaction, although some applications can run “behind the scene”.

44. Explain pros and cons of a command line interface?

A command line interface allows the user to type in commands that can immediately provide results. Many seasoned computer users are well accustomed to using the command line because they find it quicker and simpler.

However, the main problem with a command line interface is that users have to be familiar with the commands, including the switches and parameters that come with it. This is a downside for people who are not fond of memorizing commands.

 

45. What is caching?

Caching is the practice of utilizing a region of fast memory for a limited set of data and processing. A cache memory is usually much more efficient because of its high access speed.

46. What is spooling?

Spooling is normally associated with printing. When different applications want to send an output to the printer at the same time, spooling takes all of these print jobs into a disk file and queues them accordingly to the printer.

47. What is an Assembler

An assembler acts as a translator for low-level language. Assembly codes written using mnemonic commands are translated by the Assembler into machine language.

48. What are interrupts?

Interrupts are part of a hardware mechanism by which a device notifies the CPU that it needs attention, for example to gain access to a particular resource. An interrupt handler receives this interrupt signal and "tells" the processor to take action based on the interrupt request.

49. What is GUI?

GUI is short for Graphical User Interface. It provides users with an interface wherein actions can be performed by interacting with icons and graphical symbols. People find it easier to interact with the computer when in a GUI especially when using the mouse. Instead of having to remember and type commands, users click on buttons to perform a process.

 

50. What is preemptive multitasking?

Preemptive multitasking allows an operating system to interrupt a running program and switch the CPU between software programs. This, in turn, allows multiple programs to run without any one of them taking complete control of the processor and causing system crashes.

51. Why partitioning and formatting is a prerequisite to installing an operating system?

Partitioning and formatting create a preparatory environment on the drive so that the operating system can be copied and installed properly. This includes allocating space on the drive, designating a drive name, determining and creating the appropriate file system and structure.

 

52. What is plumbing/piping?

It is the process of using the output of one program as an input to another. For example, instead of sending the listing of a folder or drive to the main screen, it can be piped and sent to a file, or sent to the printer to produce a hard copy.

53. What is NOS?

NOS is short for Network Operating System. It is a specialized software that will allow a computer to communicate with other devices over the network, including file/folder sharing.

 

54. Differentiate internal commands from external commands.

Internal commands are built-in commands that are already part of the operating system. External commands are separate file programs that are stored in a separate folder or directory.

55. Under DOS, what command will you type when you want to list down the files in a directory, and at the same time pause after every screen output?

a) dir /w

b) dir /p

c) dir /s

d)  dir /w /p

 Answer: d) dir /w /p

 

56. How would a file name EXAMPLEFILE.TXT appear when viewed under the DOS command console operating in Windows 98?

The filename would appear as EXAMPL~1.TXT. The reason is that filenames under the DOS environment are limited to eight characters; a longer name is truncated to its first six characters followed by ~1.

57. What is a folder in Ubuntu?

Strictly speaking, there is no separate concept of a "folder" in Ubuntu; it uses directories. Moreover, everything, including your hardware devices, is represented as a file.

58. Explain why Ubuntu is safe and not affected by viruses?
  • It does not open malicious e-mails and content automatically; before any e-mail is opened by users, it goes through many security checks
  • Ubuntu uses Linux, which is a very secure operating system
  • Unlike some other operating systems, countless Linux users can inspect the code at any time and can fix a problem if there is one
  • Malware and viruses are mostly coded to take advantage of weaknesses in Windows
59. Explain what is Unity in Ubuntu? How can you add new entries to the launcher?

In Ubuntu, Unity is the default graphical shell. On the left side of the Ubuntu screen, it provides the Launcher, along with the Dash, to start programs.

In order to add new entries to the launcher, you can create a .desktop file and then drag that file onto the launcher.

60. Explain the purpose of using a libaio package in Ubuntu?

Libaio is the Linux kernel asynchronous I/O (AIO) support library. AIO allows even a single application thread to overlap I/O operations with other processing, by providing an interface for submitting one or more I/O requests in one system call without waiting for completion, and a separate interface to reap completed I/O operations associated with a given completion group.

61. What is the use of behavior tab in Ubuntu?

Through the Behavior tab, you can make many changes to the appearance of the desktop:

  • Auto-hide the launcher: with this option, the launcher is hidden and revealed when you move the pointer to the defined hot spot.
  • Enable workspaces: by checking this option, you can enable workspaces.
  • Add show desktop icon to the launcher: this option displays the desktop icon on the launcher.
62. What is the meaning of “export” command in Ubuntu?

export is a command in the Bash shell language. When you export a variable, it becomes visible to any subprocess started from that instance of bash. Without the export command, the variable will not exist in the subprocess.

63. Explain how you can reset Unity Configuration?

To reset the Unity configuration, the simplest way is to open a Terminal (or hit Alt+F2) and run the command: unity --reset

64. Explain how to access Terminal?

To access terminal, you have to go under Application Menu -> Accessories -> Terminal.

65. Explain the concept of Reentrancy?

It is a useful, memory-saving technique for multiprogrammed timesharing systems. A Reentrant Procedure is one in which multiple users can share a single copy of a program during the same period. Reentrancy has 2 key aspects: the program code cannot modify itself, and the local data for each user process must be stored separately. Thus, the permanent part is the code, and the temporary part is the pointer back to the calling program plus the local variables used by that program. Each execution instance is called an activation. It executes the code in the permanent part but has its own copy of local variables/parameters. The temporary part associated with each activation is the activation record. Generally, the activation record is kept on the stack.

Note: A reentrant procedure can be interrupted and called by an interrupting program, and still execute correctly on returning to the procedure.

66. Explain Belady's Anomaly?

Also called FIFO anomaly. Usually, on increasing the number of frames allocated to a process virtual memory, the process execution is faster, because fewer page faults occur. Sometimes, the reverse happens, i.e., the execution time increases even when more frames are allocated to the process. This is Belady's Anomaly. This is true for certain page reference patterns.
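The anomaly can be reproduced with a small FIFO simulation on the classic reference string, where 4 frames produce more faults than 3:

```python
def fifo_faults(refs, frames):
    """Count page faults under FIFO replacement with `frames` frames."""
    memory, queue, faults = set(), [], 0
    for page in refs:
        if page not in memory:
            faults += 1
            if len(memory) == frames:         # full: evict the oldest page
                memory.remove(queue.pop(0))
            memory.add(page)
            queue.append(page)
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]   # classic reference string
f3, f4 = fifo_faults(refs, 3), fifo_faults(refs, 4)
print(f3, f4)  # 9 10 -- more frames, yet more faults
```

Stack-based algorithms such as LRU do not exhibit this anomaly; it is specific to FIFO-like policies and particular reference patterns.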

67. What is a binary semaphore? What is its use?

A binary semaphore is one that takes only 0 and 1 as its values. Binary semaphores are used to implement mutual exclusion and to synchronize concurrent processes.
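A minimal sketch in Python: `threading.Semaphore(1)` behaves as a binary semaphore guarding a shared counter. The thread and iteration counts are arbitrary:

```python
import threading

mutex = threading.Semaphore(1)       # binary semaphore: value is 0 or 1
counter = 0

def worker():
    global counter
    for _ in range(100_000):
        mutex.acquire()              # wait (P): enter the critical section
        counter += 1
        mutex.release()              # signal (V): leave the critical section

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 400000 -- no lost updates
```

Without the acquire/release pair, concurrent read-modify-write sequences could interleave and lose increments.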

68. What is thrashing?

It is a phenomenon in virtual memory schemes when the processor spends most of its time swapping pages, rather than executing instructions. This is due to an inordinate number of page faults.

List the Coffman conditions that lead to a deadlock.

Mutual Exclusion: Only one process may use a critical resource at a time.
Hold & Wait: A process may be allocated some resources while waiting for others.
No Pre-emption: No resource can be forcibly removed from a process holding it.
Circular Wait: A closed chain of processes exists such that each process holds at least one resource needed by another process in the chain.

69. What are short, long and medium-term scheduling?

Long term scheduler determines which programs are admitted to the system for processing. It controls the degree of multiprogramming. Once admitted, a job becomes a process.

Medium term scheduling is part of the swapping function. This relates to processes that are in a blocked or suspended state. They are swapped out of real memory until they are ready to execute. The swapping-in decision is based on memory-management criteria.

Short term scheduler, also known as the dispatcher, executes most frequently and makes the finest-grained decision of which process should execute next. This scheduler is invoked whenever an event occurs. It may lead to interruption of one process by preemption.

70. What are turnaround time and response time?

Turnaround time is the interval between the submission of a job and its completion. Response time is the interval between submission of a request, and the first response to that request.

71. What are the typical elements of a process image?

User data: Modifiable part of user space. May include program data, user stack area, and programs that may be modified.

User program: The instructions to be executed.

System Stack: Each process has one or more LIFO stacks associated with it. Used to store parameters and calling addresses for procedure and system calls.

Process control Block (PCB): Info needed by the OS to control processes.

72. What is the Translation Lookaside Buffer (TLB)?

In a cached system, the base addresses of the last few referenced pages are maintained in registers called the TLB, which aids in faster lookup. The TLB contains those page-table entries that have been most recently used. Normally, each virtual memory reference causes two physical memory accesses: one to fetch the appropriate page-table entry, and one to fetch the desired data. With a TLB in between, this is reduced to just one physical memory access whenever there is a TLB hit.
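The effect on effective memory access time can be worked out with the usual weighted-average formula; the timing figures and hit ratio below are hypothetical:

```python
def effective_access_time(hit_ratio, tlb_ns, mem_ns):
    """TLB hit: one memory access after the TLB lookup.
    TLB miss: fetch the page-table entry, then the data (two accesses)."""
    hit = tlb_ns + mem_ns
    miss = tlb_ns + 2 * mem_ns
    return hit_ratio * hit + (1 - hit_ratio) * miss

# Hypothetical figures: 20 ns TLB lookup, 100 ns memory, 90% hit ratio
eat = effective_access_time(0.9, 20, 100)
print(round(eat, 2))  # 130.0 ns
```

Raising the hit ratio pushes the effective time toward the single-access cost, which is why even a small TLB helps so much.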

73. What is the resident set and working set of a process?

Resident set is that portion of the process image that is actually in real memory at a particular instant. Working set is that subset of the resident set that is actually needed for execution. (Relate this to the variable-window-size method for swapping techniques.)

74. When is a system in safe state?

The set of dispatchable processes is in a safe state if there exists at least one temporal order in which all processes can be run to completion without resulting in a deadlock.

75. What is cycle stealing?

We encounter cycle stealing in the context of Direct Memory Access (DMA). Either the DMA controller can use the data bus when the CPU does not need it, or it may force the CPU to temporarily suspend operation. The latter technique is called cycle stealing. Note that cycle stealing can be done only at specific break points in an instruction cycle.

76. What is meant by arm-stickiness?

If one or a few processes have a high access rate to data on one track of a storage disk, then they may monopolize the device by repeated requests to that track. This generally happens with the most common device scheduling algorithms (LIFO, SSTF, C-SCAN, etc.). High-density multisurface disks are more likely to be affected by this than low-density ones.

 

77. What are the stipulations of C2 level security?

C2 level security provides for:

Discretionary Access Control
Identification and Authentication
Auditing
Resource reuse

78. What is busy waiting?

The repeated execution of a loop of code while waiting for an event to occur is called busy-waiting. The CPU is not engaged in any real productive activity during this period, and the process does not progress toward completion.

79. Explain the popular multiprocessor thread-scheduling strategies.
  • Load Sharing: Processes are not assigned to a particular processor. A global queue of threads is maintained. Each processor, when idle, selects a thread from this queue. Note that load balancing refers to a scheme where work is allocated to processors on a more permanent basis.
  • Gang Scheduling: A set of related threads is scheduled to run on a set of processors at the same time, on a 1-to-1 basis. Closely related threads / processes may be scheduled this way to reduce synchronization blocking, and minimize process switching. Group scheduling predated this strategy.
  • Dedicated processor assignment: Provides implicit scheduling defined by assignment of threads to processors. For the duration of program execution, each program is allocated a set of processors equal in number to the number of threads in the program. Processors are chosen from the available pool.
  • Dynamic scheduling: The number of threads in a program can be altered during the course of execution.
80. When does the condition 'rendezvous' arise?

In message passing, it is the condition in which, both, the sender and receiver are blocked until the message is delivered.

 

81. What is a trap and trapdoor?

Trapdoor is a secret, undocumented entry point into a program, used to grant access without the normal methods of access authentication. A trap is a software interrupt, usually the result of an error condition.

82. What are local and global page replacements?

Local replacement means that an incoming page is brought in only to the relevant process address space. Global replacement policy allows any page frame from any process to be replaced. The latter is applicable to variable partitions model only.

 

83. Define latency, transfer and seek time with respect to disk I/O.

Seek time is the time required to move the disk arm to the required track. Rotational delay or latency is the time it takes for the beginning of the required sector to reach the head. Sum of seek time (if any) and latency is the access time. Time taken to actually transfer a span of data is transfer time.
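Putting the three components together gives a simple access-time estimate; the drive parameters below (seek time, RPM, transfer rate, request size) are hypothetical:

```python
def disk_access_ms(seek_ms, rpm, nbytes, transfer_mb_s):
    """Access time = seek time + average rotational latency + transfer time."""
    latency = (60_000 / rpm) / 2              # half a revolution, in ms
    transfer = nbytes / (transfer_mb_s * 1_000_000) * 1_000
    return seek_ms + latency + transfer

# Hypothetical drive: 4 ms seek, 7200 RPM, 50 MB/s, reading 512 KB
total = disk_access_ms(4, 7200, 512_000, 50)
print(round(total, 2))  # 18.41 ms
```

The average rotational latency is taken as half a revolution, since on average the required sector is half a turn away when the head arrives.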

84. Describe the Buddy system of memory allocation.

Free memory is maintained in linked lists, one list for each block size; every block is of size 2^k. When some memory is required by a process, a block of the next higher order is chosen and broken into two. Note that the two pieces differ in address only in their kth bit. Such pieces are called buddies. When any used block is freed, the OS checks to see if its buddy is also free. If so, the two are rejoined and put into the corresponding free-block linked list.
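The "differ only in the kth bit" property means a block's buddy address can be found with a single XOR; a small sketch:

```python
def buddy_address(addr, k):
    """The buddy of a block of size 2^k differs from it only in bit k
    of its starting address."""
    return addr ^ (1 << k)

# Splitting a 128-byte block at address 0 yields 64-byte (2^6) buddies 0 and 64
print(buddy_address(0, 6), buddy_address(64, 6))  # 64 0
```

This is why coalescing on free is cheap: the allocator computes the buddy's address directly and checks whether that block is on the free list.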

85. What is time-stamping?

It is a technique proposed by Lamport, used to order events in a distributed system without the use of clocks. This scheme is intended to order events consisting of the transmission of messages. Each system 'i' in the network maintains a counter Ci. Every time a system transmits a message, it increments its counter by 1 and attaches the time-stamp Ti to the message. When a message is received, the receiving system 'j' sets its counter Cj to 1 more than the maximum of its current value and the incoming time-stamp Ti. At each site, the ordering of messages is determined by the following rules: for messages x from site i and y from site j, x precedes y if (a) Ti < Tj, or (b) Ti = Tj and i < j.
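A minimal sketch of the scheme; the site ids and the message sequence are invented for illustration:

```python
class LamportClock:
    """One logical clock per site, as in Lamport's time-stamping scheme."""
    def __init__(self, site_id):
        self.site_id, self.counter = site_id, 0

    def send(self):
        self.counter += 1                     # increment before stamping
        return (self.counter, self.site_id)   # stamp (Ti, i)

    def receive(self, stamp):
        ti, _ = stamp                         # jump past the incoming stamp
        self.counter = max(self.counter, ti) + 1

a, b = LamportClock(1), LamportClock(2)
m1 = a.send()      # (1, 1)
b.receive(m1)
m2 = b.send()      # (3, 2): guaranteed to order after m1
print(m1 < m2)     # True
```

Comparing the (timestamp, site id) tuples lexicographically applies rules (a) and (b) directly, giving a total order over messages.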

86. How are the wait/signal operations for monitor different from those for semaphores?

If a process in a monitor signals and no task is waiting on the condition variable, the signal is lost; this allows easier program design. Whereas with semaphores, every operation affects the value of the semaphore, so the wait and signal operations should be perfectly balanced in the program.

 

87. In the context of memory management, what are placement and replacement algorithms?

Placement algorithms determine where in available real-memory to load a program. Common methods are first-fit, next-fit, best-fit. Replacement algorithms are used when memory is full, and one process (or part of a process) needs to be swapped out to accommodate a new program. The replacement algorithm determines which are the partitions to be swapped out.
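First-fit and best-fit placement can be sketched as follows; the hole sizes and the request size are made up for the example:

```python
def first_fit(holes, size):
    """Index of the first hole that is large enough, else None."""
    for i, h in enumerate(holes):
        if h >= size:
            return i
    return None

def best_fit(holes, size):
    """Index of the smallest hole that is large enough, else None."""
    fits = [(h, i) for i, h in enumerate(holes) if h >= size]
    return min(fits)[1] if fits else None

holes = [100, 500, 200, 300, 600]    # free partition sizes in KB (made up)
ff, bf = first_fit(holes, 212), best_fit(holes, 212)
print(ff, bf)  # 1 3 -- first-fit takes the 500 KB hole, best-fit the 300 KB
```

First-fit is fast because it stops at the first candidate; best-fit scans every hole to minimize the leftover fragment.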

88. What are demand-paging and pre-paging?

With demand paging, a page is brought into memory only when a location on that page is actually referenced during execution. With pre-paging, pages other than the one demanded by a page fault are brought in. The selection of such pages is done based on common access patterns, especially for secondary memory devices.

Demand Paging


Demand paging is a technique used in virtual memory systems where pages are brought into main memory only when required or demanded by the CPU. The pager is hence also called a lazy swapper, because the swapping-in of pages is done only when required by the CPU.

Advantages

  • It increases the degree of multiprogramming as many processes can be present in the main memory at the same time.
  • There is more efficient use of memory, as processes larger than the main memory can also be executed using this mechanism, because we do not load the entire process into memory at once.

Disadvantages

  • The amount of processor overhead and the number of tables used for handling the page faults is greater than in simple page management techniques.

PrePaging


In demand paging, only the page that is actually demanded during the execution of the process is brought into main memory. In pre-paging, pages other than the one demanded by the CPU are also brought in: the OS guesses in advance which pages the process will require and pre-loads them into memory.

Advantages

  • It saves time when large contiguous structures are used. Consider an example where the process requests consecutive addresses. In such cases, the operating system can guess the next pages, and if the guesses are right, fewer page faults will occur and the effective memory access time will decrease.

Disadvantages

  • There is a wastage of time and memory if those pre-paged pages are unused.


89. Paging is a memory-management function, while multiprogramming is a processor-management function. Are the two interdependent?

Yes. Multiprogramming keeps several processes resident in memory at once, and paging is what makes holding that many processes in memory practical.

90. What is Page Cannibalizing?

Page swapping, or page replacement, is called page cannibalizing.

Page swapping is where data is being exchanged between the hard disk and RAM.

91. What has triggered the need for multitasking in PCs?

Increased speed and memory capacity of microprocessors, together with support for virtual memory; and
growth of client/server computing.

92. What are the four layers that Windows NT have in order to achieve independence?

Hardware abstraction layer
Kernel
Subsystems
System Services.

93. What is SMP?

To achieve maximum efficiency and reliability, a mode of operation known as symmetric multiprocessing is used. In essence, with SMP any process or thread can be assigned to any processor.

 

94. What are the key object oriented concepts used by Windows NT?

Encapsulation, Object class and instance.

95. Is Windows NT a full blown object oriented operating system? Give reasons.

No, Windows NT is not, because it is not implemented in an object-oriented language, its data structures reside within one executive component and are not represented as objects, and it does not support full object-oriented capabilities.

96. What is a drawback of MVT?

It does not have the features like

  • ability to support multiple processors
  • virtual storage
  • source level debugging

 

97. What is process spawning?

When the OS creates a process at the explicit request of another process, the action is called process spawning.

 

98. How many jobs can be run concurrently on MVT?

15 jobs.

99. List out some reasons for process termination.
  • Normal completion
  • Time limit exceeded
  • Memory unavailable
  • Bounds violation
  • Protection error
  • Arithmetic error
  • Time overrun
  • I/O failure
  • Invalid instruction
  • Privileged instruction
  • Data misuse
  • Operator or OS intervention
  • Parent termination.
100. What are the reasons for process suspension?
  • swapping
  • interactive user request
  • timing
  • parent process request
101. What is process migration?

It is the transfer of a sufficient amount of a process's state from one machine to a target machine so that the process can continue execution there.

 

102. What is mutant?

In Windows NT a mutant provides kernel mode or user mode mutual exclusion with the notion of ownership.

103. What is an idle thread?

The special thread that the dispatcher executes when no ready thread is found.

104. What is FtDisk?

It is a fault tolerance disk driver for Windows NT.

105. What are the possible states a thread can have?

  • Ready
  • Standby
  • Running
  • Waiting
  • Transition
  • Terminated

106. What are rings in Windows NT?

Windows NT uses a protection mechanism called rings, provided by the processor, to implement separation between user mode and kernel mode.

107. What is Executive in Windows NT?

In Windows NT, executive refers to the operating system code that runs in kernel mode.

108. What are the sub-components of I/O manager in Windows NT?

 The sub-components of I/O manager in Windows NT are:    

  • Network redirector/server
  • Cache manager
  • File systems
  • Network drivers
  • Device drivers
109. What are DDKs? Name an operating system that includes this feature.

DDKs are Device Driver Kits, which are equivalent to SDKs for writing device drivers. Windows NT includes a DDK.

110. What level of security does Windows NT meet?

C2 level security.

111. What is Throughput, Turnaround time, Waiting time and Response time?

Throughput is defined as the number of processes that complete their execution per unit time.
Turnaround time is the amount of time to execute a particular process.
Waiting time is the amount of time a process has been waiting in the ready queue.
Response time is the amount of time from when a request was submitted until the first response is produced, not the time until output is complete (for a time-sharing environment).

112. What resources are used when a thread is created? How do they differ from those used when a process is created?

 Because a thread is smaller than a process, thread creation typically uses fewer resources than process creation. Creating a process requires allocating a process control block (PCB), a rather large data structure. The PCB includes a memory map, list of open files, and environment variables. Allocating and managing the memory map is typically the most time-consuming activity. Creating either a user or kernel thread involves allocating a small data structure to hold a register set, stack, and priority.

113. What is a binary semaphore? What is its use?

 A binary semaphore is a semaphore with an integer value that can range only between 0 and 1. A binary semaphore can be simpler to implement than a counting semaphore, depending on the underlying hardware architecture.
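As a minimal sketch (using Python's `threading.Semaphore` initialized to 1, so its value only ranges over 0 and 1), a binary semaphore can enforce mutual exclusion on a shared counter:

```python
import threading

counter = 0
binary_sem = threading.Semaphore(1)  # value ranges only over 0 and 1 here

def worker(increments):
    global counter
    for _ in range(increments):
        binary_sem.acquire()   # wait(S): blocks while S == 0, then decrements
        counter += 1           # critical section
        binary_sem.release()   # signal(S): increments S back to 1

def run(increments=1000, nthreads=4):
    """Run several workers; with the semaphore, no increments are lost."""
    global counter
    counter = 0
    threads = [threading.Thread(target=worker, args=(increments,))
               for _ in range(nthreads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return counter
```

Without the semaphore, interleaved `counter += 1` operations could lose updates; with it, the final count is always `increments * nthreads`.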

114. What is the difference between synchronization and mutual exclusion?

If one process is executing in its critical section, then no other process is allowed to enter its critical section. This is called mutual exclusion. Synchronization refers to one of two distinct but related concepts: synchronization of processes, and synchronization of data. Process synchronization refers to the idea that multiple processes are to join up or handshake at a certain point, in order to reach an agreement or commit to a certain sequence of actions. Data synchronization refers to the idea of keeping multiple copies of a dataset in coherence with one another, or to maintaining data integrity. Process synchronization primitives are commonly used to implement data synchronization.

115. List the Coffman’s conditions that lead to a deadlock.

 A deadlock situation can arise if and only if all of the following conditions hold simultaneously in a system:

  • Mutual Exclusion: At least one resource must be non-shareable. Only one process can use the resource at any given instant of time.
  • Hold and Wait or Resource Holding: A process is currently holding at least one resource and requesting additional resources which are being held by other processes.
  • No Preemption: The operating system must not de-allocate resources once they have been allocated; they must be released by the holding process voluntarily.
  • Circular Wait: A process must be waiting for a resource which is being held by another process, which in turn is waiting for the first process to release the resource. In general, there is a set of waiting processes, P = {P1, P2, ..., PN}, such that P1 is waiting for a resource held by P2, P2 is waiting for a resource held by P3 and so on till PN is waiting for a resource held by P1.

These four conditions are known as the Coffman conditions from their first description in a 1971 article by Edward G. Coffman.

116. Name the three types of schedulers and give functions of each.

Ans: Three common CPU-scheduling algorithms are FCFS, SJF, and priority scheduling.

First-Come, First-Served Scheduling
By far the simplest CPU-scheduling algorithm is the first-come, first-served (FCFS) scheduling algorithm. With this scheme, the process that requests the CPU first is allocated the CPU first. The implementation of the FCFS policy is easily managed with a FIFO queue. When a process enters the ready queue, its PCB is linked onto the tail of the queue. When the CPU is free, it is allocated to the process at the head of the queue. The running process is then removed from the queue. The code for FCFS scheduling is simple to write and understand. The average waiting time under the FCFS policy, however, is often quite long.
Consider the following set of processes that arrive at time 0, with the length of the CPU-burst time given in milliseconds:

Process    Burst Time
P1                 24
P2                  3
P3                  3
If the processes arrive in the order P1, P2, P3, and are served in FCFS order, we get the result shown in the following Gantt chart:

P1    P2    P3
0      24     27     30
The waiting time is 0 milliseconds for process P1, 24 milliseconds for process P2, and 27 milliseconds for process P3. Thus, the average waiting time is (0 + 24 + 27)/3 = 17 milliseconds. If the processes arrive in the order P2, P3, P1, however, the results will be as shown in the following Gantt chart:

P2    P3    P1
0       3      6      30
The average waiting time is now (6 + 0 + 3)/3 = 3 milliseconds. This reduction is substantial. Thus, the average waiting time under a FCFS policy is generally not minimal, and may vary substantially if the process CPU-burst times vary greatly.
In addition, consider the performance of FCFS scheduling in a dynamic situation. Assume we have one CPU-bound process and many I/O-bound processes. As the processes flow around the system, the following scenario may result. The CPU-bound process will get the CPU and hold it. During this time, all the other processes will finish their I/O and move into the ready queue, waiting for the CPU. While the processes wait in the ready queue, the I/O devices are idle. Eventually, the CPU-bound process finishes its CPU burst and moves to an I/O device. All the I/O-bound processes, which have very short CPU bursts, execute quickly and move, back to the I/O queues. At this point, the CPU sits idle.
The CPU-bound process will then move back to the ready queue and be allocated the CPU. Again, all the I/O processes end up waiting in the ready queue until the CPU-bound process is done. There is a convoy effect, as all the other processes wait for the one big process to get off the CPU. This effect results in lower CPU and device utilization than might be possible if the shorter processes were allowed to go first.
The FCFS scheduling algorithm is non-preemptive. Once the CPU has been allocated to a process, that process keeps the CPU until it releases the CPU, either by terminating or by requesting I/O. The FCFS algorithm is particularly troublesome for time-sharing systems, where each user needs to get a share of the CPU at regular intervals. It would be disastrous to allow one process to keep the CPU for an extended period.
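The waiting-time arithmetic above can be reproduced with a short helper (a sketch; all processes are assumed to arrive at time 0, in the order given):

```python
def fcfs_waiting_times(bursts):
    """Waiting time of each process under FCFS, given bursts in arrival order."""
    waits, clock = [], 0
    for burst in bursts:
        waits.append(clock)  # a process waits until all earlier bursts finish
        clock += burst
    return waits

def average(values):
    return sum(values) / len(values)
```

For the arrival order P1, P2, P3 this gives waits of [0, 24, 27] (average 17 ms), and for P2, P3, P1 an average of 3 ms, matching the Gantt charts above.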
Shortest-Job-First Scheduling
A different approach to CPU scheduling is the shortest-job-first (SJF) scheduling algorithm. This algorithm associates with each process the length of the latter's next CPU burst. When the CPU is available, it is assigned to the process that has the smallest next CPU burst. If two processes have the same length next

CPU burst, FCFS scheduling is used to break the tie. Note that a more appropriate term would be the shortest next CPU burst, because the scheduling is done by examining the length of the next CPU burst of a process, rather than its total length. We use the term SJF because most people and textbooks refer to this type of scheduling discipline as SJF.
As an example, consider the following set of processes, with the length of the CPU-burst time given in milliseconds:

Process    Burst Time
P1                   6
P2                  8
P3                  7
P4                  3
Using SJF scheduling, we would schedule these processes according to the following Gantt chart:

P4    P1    P3    P2
0       3      9     16     24
The waiting time is 3 milliseconds for process P1, 16 milliseconds for process P2, 9 milliseconds for process P3, and 0 milliseconds for process P4. Thus, the average waiting time is (3 + 16 + 9 + 0)/4 = 7 milliseconds. If we were using the FCFS scheduling scheme, then the average waiting time would be 10.25 milliseconds.
The SJF scheduling algorithm is provably optimal, in that it gives the minimum average waiting time for a given set of processes. By moving a short process before a long one, the waiting time of the short process decreases more than it increases the waiting time of the long process. Consequently, the average waiting time decreases.

The real difficulty with the SJF algorithm is knowing the length of the next CPU request. For long-term (or job) scheduling in a batch system, we can use as the length the process time limit that a user specifies when he submits the job.
Thus, users are motivated to estimate the process time limit accurately, since a lower value may mean faster response. (Too low a value will cause a time-limit exceeded error and require resubmission.) SJF scheduling is used frequently in long-term scheduling.
Although the SJF algorithm is optimal, it cannot be implemented at the level of short-term CPU scheduling. There is no way to know the length of the next CPU burst. One approach is to try to approximate SJF scheduling. We may not know the length of the next CPU burst, but we may be able to predict its value.We expect that the next CPU burst will be similar in length to the previous ones.
Thus, by computing an approximation of the length of the next CPU burst, we can pick the process with the shortest predicted CPU burst.
The SJF algorithm may be either preemptive or nonpreemptive. The choice arises when a new process arrives at the ready queue while a previous process is executing. The new process may have a shorter next CPU burst than what is left of the currently executing process. A preemptive SJF algorithm will preempt the currently executing process, whereas a nonpreemptive SJF algorithm will allow the currently running process to finish its CPU burst. Preemptive SJF scheduling is sometimes called shortest-remaining-time-first scheduling.
As an example, consider the following four processes, with the length of the CPU-burst time given in milliseconds:

Process    Arrival Time        Burst Time
P1                     0                      8    
P2                     1                      4    
P3                     2                      9    
P4                     3                      5

If the processes arrive at the ready queue at the times shown and need the indicated burst times, then the resulting preemptive SJF schedule is as depicted in the following Gantt chart:

P1    P2    P4    P1    P3
0       1      5     10     17     26

 

Process P1 is started at time 0, since it is the only process in the queue. Process P2 arrives at time 1. The remaining time for process P1 (7 milliseconds) is larger than the time required by process P2 (4 milliseconds), so process P1 is preempted, and process P2 is scheduled. The average waiting time for this example is ((10 - 1) + (1 - 1) + (17 - 2) + (5 - 3))/4 = 26/4 = 6.5 milliseconds. A nonpreemptive SJF scheduling would result in an average waiting time of 7.75 milliseconds.
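Preemptive SJF (shortest-remaining-time-first) can be simulated one time unit at a time, as sketched below (ties go to the earlier process in the list):

```python
def srtf_average_wait(procs):
    """procs: list of (arrival, burst). Average waiting time under preemptive SJF."""
    n = len(procs)
    remaining = [burst for _, burst in procs]
    finish = [0] * n
    clock = 0
    while any(r > 0 for r in remaining):
        ready = [i for i, (arrival, _) in enumerate(procs)
                 if arrival <= clock and remaining[i] > 0]
        if not ready:
            clock += 1          # CPU idles until the next arrival
            continue
        i = min(ready, key=lambda k: remaining[k])  # shortest remaining time
        remaining[i] -= 1       # run the chosen process for one time unit
        clock += 1
        if remaining[i] == 0:
            finish[i] = clock
    # waiting time = finish - arrival - burst
    waits = [finish[i] - procs[i][0] - procs[i][1] for i in range(n)]
    return sum(waits) / n
```

For the four processes above, `srtf_average_wait([(0, 8), (1, 4), (2, 9), (3, 5)])` reproduces the 6.5 ms average.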
Priority Scheduling
The SJF algorithm is a special case of the general priority-scheduling algorithm. A priority is associated with each process, and the CPU is allocated to the process with the highest priority. Equal-priority processes are scheduled in FCFS order.
An SJF algorithm is simply a priority algorithm where the priority (p) is the inverse of the (predicted) next CPU burst. The larger the CPU burst, the lower the priority, and vice versa.
Note that we discuss scheduling in terms of high priority and low priority. Priorities are generally some fixed range of numbers, such as 0 to 7, or 0 to 4,095. However, there is no general agreement on whether 0 is the highest or lowest priority. Some systems use low numbers to represent low priority; others use low numbers for high priority. This difference can lead to confusion. In this text, we use low numbers to represent high priority. As an example, consider the following set of processes, assumed to have arrived at time 0, in the order PI, P2, ..., P5, with the length of the CPU-burst time given in milliseconds:

Process    Burst Time    Priority
P1                 10                3
P2                  1                 1
P3                  2                 4
P4                  1                 5
P5                  5                 2

Using priority scheduling, we would schedule these processes according to the following Gantt chart:

P2    P5    P1    P3    P4
0       1      6     16     18     19
The average waiting time is 8.2 milliseconds. Priorities can be defined either internally or externally. Internally defined priorities use some measurable quantity or quantities to compute the priority of a process.
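The 8.2 ms figure can be checked with a sketch (nonpreemptive priority scheduling, all processes arriving at time 0; lower number = higher priority, as in the text):

```python
def priority_waiting_times(procs):
    """procs: list of (burst, priority) in arrival order; returns per-process waits."""
    # Run in order of priority; sorted() is stable, so equal priorities stay FCFS.
    order = sorted(range(len(procs)), key=lambda i: procs[i][1])
    waits = [0] * len(procs)
    clock = 0
    for i in order:
        waits[i] = clock        # waits until all higher-priority bursts finish
        clock += procs[i][0]
    return waits
```

`priority_waiting_times([(10, 3), (1, 1), (2, 4), (1, 5), (5, 2)])` yields waits of [6, 0, 16, 18, 1] for P1 through P5, whose mean is 8.2 ms.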

117. Give the queuing diagram representing process scheduling and show the action point for the different types of CPU schedulers.

A common representation for a discussion of process scheduling is a queuing diagram.

  • Each rectangular box represents a queue. Two types of queues are present: the ready queue and a set of device queues.
  • The circles represent the resources that serve the queues, and the arrows indicate the flow of processes in the system.
  • A new process is initially put in the ready queue. It waits there until it is selected for execution, or is dispatched.
  • Once the process is allocated the CPU and is executing, one of several events could occur:
  • The process could issue an I/O request and then be placed in an I/O queue.
  • The process could create a new subprocess and wait for the subprocess's termination.
  • The process could be removed forcibly from the CPU, as a result of an interrupt, and be put back in the ready queue.
  • A process continues this cycle until it terminates, at which time it is removed from all queues and has its PCB and resources deallocated.
118. Explain context switch along with a suitable diagram. Name and briefly define the scheduling criteria. List the circumstances under which the CPU scheduler may have to make a scheduling decision.

Switching the CPU to another process requires saving the state of the old process and loading the saved state for the new process. This task is known as a context switch. The context of a process is represented in the PCB of a process; it includes the value of the CPU registers, the process state and memory- management information. When a context switch occurs, the kernel saves the context of the old process in its PCB and loads the saved context of the new process scheduled to run.
Context-switch time is pure overhead, because the system does no useful work while switching. Its speed varies from machine to machine, depending on the memory speed, the number of registers that must be copied, and the existence of special instructions (such as a single instruction to load or store all registers). Typical speeds range from 1 to 1000 microseconds.
Context -switch times are highly dependent on hardware support. For instance, some processors (such as the Sun UltraSPARC) provide multiple sets of registers. A context switch simply includes changing the pointer to the current register set. Of course, if active processes exceed register sets, the system resorts to copying register data to and from memory, as before. Also, the more complex the operating system, the more work must be done during a context switch.

 

 

Scheduling Criteria

  • CPU utilization – keep the CPU as busy as possible
  • Throughput – Number of processes that complete their execution per time unit
  • Turnaround time – amount of time to execute a particular process
  • Waiting time – amount of time a process has been waiting in the ready queue
  • Response time – amount of time it takes from when a request was submitted until the first response is produced, not output (for time-sharing environment)

Optimization Criteria

  • Max CPU utilization
  • Max throughput
  • Min turnaround time
  • Min waiting time
  • Min response time
119. Assume, we have the workload as shown below. All 5 processes arrive at time 0, in the order given below. The length of the CPU burst time is given in milliseconds

Process    : P1    P2    P3    P4    P5

Burst time : 10    29    3    7    12
Considering the FCFS, SJF and RR (q=10 ms) scheduling algorithms, which algorithm would give the minimum average turnaround time.
Ans: The Gantt chart for FCFS scheduling is

P1    P2    P3    P4    P5
0      10     39     42     49     61
Turnaround time = Finish Time – Arrival Time
Turnaround time for process P1 = 10 – 0 = 10
Turnaround time for process P2 = 39 – 0 = 39
Turnaround time for process P3 = 42 – 0 = 42
Turnaround time for process P4 = 49 – 0 = 49
Turnaround time for process P5 = 61 – 0 = 61
Average Turnaround time = (10+39+42+49+61)/5 = 40.2

The Gantt chart for SJF scheduling is
P3    P4    P1    P5    P2
0       3      10     20     32     61
Turnaround time for process P3 = 3 – 0 = 3
Turnaround time for process P4 = 10 – 0 = 10
Turnaround time for process P1 = 20 – 0 = 20
Turnaround time for process P5 = 32 – 0 = 32
Turnaround time for process P2 = 61 – 0 = 61
Average Turnaround time = (3+10+20+32+61)/5 = 25.2

The Gantt chart for RR scheduling is
P1    P2    P3    P4    P5    P2    P5    P2
0      10     20     23     30     40     50     52     61
Turnaround time for process P1 = 10 – 0 = 10
Turnaround time for process P2 = 61 – 0 = 61
Turnaround time for process P3 = 23 – 0 = 23
Turnaround time for process P4 = 30 – 0 = 30
Turnaround time for process P5 = 52 – 0 = 52
Average Turnaround time = (10+61+23+30+52)/5 = 35.2

So SJF gives the minimum average turnaround time.
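These averages can be verified with a short simulation (a sketch; all five processes arrive at time 0):

```python
from collections import deque

def fcfs_avg_turnaround(bursts):
    clock, total = 0, 0
    for burst in bursts:
        clock += burst          # finish time of this process
        total += clock          # turnaround = finish - arrival (arrival is 0)
    return total / len(bursts)

def sjf_avg_turnaround(bursts):
    return fcfs_avg_turnaround(sorted(bursts))  # SJF = FCFS on sorted bursts

def rr_avg_turnaround(bursts, quantum):
    remaining = list(bursts)
    finish = [0] * len(bursts)
    queue = deque(range(len(bursts)))
    clock = 0
    while queue:
        i = queue.popleft()
        run = min(quantum, remaining[i])
        clock += run
        remaining[i] -= run
        if remaining[i] > 0:
            queue.append(i)     # preempted: go to the back of the queue
        else:
            finish[i] = clock
    return sum(finish) / len(finish)
```

With bursts [10, 29, 3, 7, 12] and q = 10 ms this computes the FCFS, SJF, and RR average turnaround times for the workload above; SJF is the minimum.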

120. What do you mean by graceful degradation in multiprocessor systems?

 The ability to continue providing service proportional to the level of surviving hardware is called graceful degradation.

121. Define system call.

System calls provide an interface between the process and the Operating System. System calls allow user-level processes to request some services from the operating system which process itself is not allowed to do.

123. List the three requirements that must be satisfied by critical section problem.

 A solution to the critical-section problem must satisfy the following three requirements:
Mutual Exclusion: If process Pi is executing in its critical section, then no other processes can be executing in their critical sections.
Progress: If no process is executing in its critical section and some processes wish to enter their critical sections, then only those processes that are not executing in their remainder section can participate in the decision on which will enter its critical section next, and this selection cannot be postponed indefinitely.
Bounded Waiting: There exists a bound on the number of times that other processes are allowed to enter their critical sections after a process has made a request to enter its critical section and before that request is granted.

124. What is kernel of an Operating System?

The kernel is the active part of an OS, i.e., the part of the OS running at all times. It is the program that interacts directly with the hardware. Ex: device drivers, core system files, etc.

125. Consider the following set of processes, with the length of CPU-burst time given in milliseconds:

Process    Burst Time (ms)    Priority
P1                    1                       1
P2                    1                       1
P3                    2                       3
P4                    1                       4
P5                    5                       2
The processes are assumed to have arrived in order P1, P2, P3, P4, P5 all at time 0.

126. What is the turnaround time of each process under FCFS, SJF, nonpreemptive priority, and RR (quantum = 1) scheduling?

The Gantt-chart for FCFS scheduling is

P1    P2    P3    P4    P5
0       1      2      4      5      10
Turnaround time = Finish Time – Arrival Time
Turnaround time for process P1 = 1 – 0 = 1
Turnaround time for process P2 = 2 – 0 = 2
Turnaround time for process P3 = 4 – 0 = 4
Turnaround time for process P4 = 5 – 0 = 5
Turnaround time for process P5 = 10 – 0 = 10
Average Turnaround time = (1+2+4+5+10)/5 = 4.4

The Gantt chart for SJF scheduling is
P1    P2    P4    P3    P5
0       1      2      3      5      10
Turnaround time for process P1 = 1 – 0 = 1
Turnaround time for process P2 = 2 – 0 = 2
Turnaround time for process P3 = 5 – 0 = 5
Turnaround time for process P4 = 3 – 0 = 3
Turnaround time for process P5 = 10 – 0 = 10
Average Turnaround time = (1+2+5+3+10)/5 = 4.2

The Gantt chart for Priority scheduling is
P1    P2    P5    P3    P4
0       1      2      7      9      10
Turnaround time for process P1 = 1 – 0 = 1
Turnaround time for process P2 = 2 – 0 = 2
Turnaround time for process P3 = 9 – 0 = 9
Turnaround time for process P4 = 10 – 0 = 10
Turnaround time for process P5 = 7 – 0 = 7
Average Turnaround time = (1+2+9+10+7)/5 = 5.8

The Gantt chart for RR scheduling is
P1    P2    P3    P4    P5    P3    P5
0       1      2      3      4      5      6      10
Turnaround time for process P1 = 1 – 0 = 1
Turnaround time for process P2 = 2 – 0 = 2
Turnaround time for process P3 = 6 – 0 = 6
Turnaround time for process P4 = 4 – 0 = 4
Turnaround time for process P5 = 10 – 0 = 10
Average Turnaround time = (1+2+6+4+10)/5 = 4.6

127. What is a thread? Why is it that threads are faster to create than processes? What advantages do kernel threads provide over user threads?

A thread is a single sequence stream within a process. Threads are also called lightweight processes. Threads allow multiple streams of execution within a process: the CPU switches rapidly back and forth among the threads, giving the illusion that they run in parallel. A thread can be in any of several states (Running, Blocked, Ready or Terminated).
In an operating system with thread facilities, the basic unit of CPU utilization is a thread. A thread consists of a program counter (PC), a register set, and a stack space. Threads are not independent of one another in the way processes are: threads share their code section, data section, and OS resources such as open files and signals with the other threads of the same process.
Because a thread is smaller than a process, thread creation typically uses fewer resources than process creation. Creating a process requires allocating a process control block (PCB), a rather large data structure. The PCB includes a memory map, list of open files, and environment variables. Allocating and managing the memory map is typically the most time-consuming activity. Creating either a user or kernel thread involves allocating a small data structure to hold a register set, stack, and priority.
Advantages:
Because the kernel has full knowledge of all threads, the scheduler may decide to give more time to a process with a large number of threads than to a process with a small number of threads.
Kernel-level threads are especially good for applications that frequently block.

128. Explain Readers-Writers problem using semaphores.

 A data object (such as a file or record) is to be shared among several concurrent processes. Some of these processes may want only to read the content of the shared object, whereas others may want to update (that is, to read and write) the shared object. We distinguish between these two types of processes by referring to those processes that are interested in only reading as readers, and to the rest as writers. Obviously, if two readers access the shared data object simultaneously, no adverse effects will result. However, if a writer and some other process (either a reader or a writer) access the shared object simultaneously, chaos may ensue.
To ensure that these difficulties do not arise, we require that the writers have exclusive access to the shared object. This synchronization problem is referred to as the readers-writers problem. Since it was originally stated, it has been used to test nearly every new synchronization primitive. The readers-writers problem has several variations, all involving priorities. The simplest one, referred to as the first readers-writers problem, requires that no reader will be kept waiting unless a writer has already obtained permission to use the shared object. In other words, no reader should wait for other readers to finish simply because a writer is waiting. The second readers-writers problem requires that, once a writer is ready, that writer performs its write as soon as possible. In other words, if a writer is waiting to access the object, no new readers may start reading.

A solution to either problem may result in starvation. In the first case, writers may starve; in the second case, readers may starve. For this reason, other variants of the problem have been proposed. In this section, we present a solution to the first readers-writers problem.
In the solution to the first readers-writers problem, the reader processes share the following data structures:
semaphore mutex, wrt;
int readcount;
The semaphores mutex and wrt are initialized to 1; readcount is initialized to 0. The semaphore wrt is common to both the reader and writer processes. The mutex semaphore is used to ensure mutual exclusion when the variable readcount is updated. The readcount variable keeps track of how many processes are currently reading the object. The semaphore wrt functions as a mutual-exclusion semaphore for the writers. It is also used by the first or last reader that enters or exits the critical section. It is not used by readers who enter or exit while other readers are in their critical sections.
The code for a writer process is

do {
    wait(wrt);
    ...
    // writing is performed
    ...
    signal(wrt);
} while (1);
The code for a reader process is

do {
    wait(mutex);
    readcount++;
    if (readcount == 1)
        wait(wrt);
    signal(mutex);
    ...
    // reading is performed
    ...
    wait(mutex);
    readcount--;
    if (readcount == 0)
        signal(wrt);
    signal(mutex);
} while (1);
Note that, if a writer is in the critical section and n readers are waiting, then one reader is queued on wrt, and n - 1 readers are queued on mutex. Also observe that, when a writer executes signal (wrt), we may resume the execution of either the waiting readers or a single waiting writer.
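The pseudocode above maps almost directly onto Python's `threading.Semaphore` (a sketch of the first readers-writers solution; the do-while loop around each section is dropped so the functions can be called directly):

```python
import threading

mutex = threading.Semaphore(1)   # protects readcount
wrt = threading.Semaphore(1)     # writers' mutual-exclusion semaphore
readcount = 0
shared = []                      # the shared object

def writer(value):
    wrt.acquire()                # wait(wrt)
    shared.append(value)         # writing is performed
    wrt.release()                # signal(wrt)

def reader():
    global readcount
    mutex.acquire()              # wait(mutex)
    readcount += 1
    if readcount == 1:           # first reader locks writers out
        wrt.acquire()
    mutex.release()              # signal(mutex)
    snapshot = list(shared)      # reading is performed
    mutex.acquire()
    readcount -= 1
    if readcount == 0:           # last reader lets writers back in
        wrt.release()
    mutex.release()
    return snapshot
```

Concurrent readers all proceed together past `wrt` (only the first reader acquires it), while each writer needs `wrt` exclusively.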

129. Write a note on Semaphore.

 A semaphore S is an integer variable that, apart from initialization, is accessed only through two standard atomic operations: wait and signal. These operations were originally termed P (for wait; from the Dutch proberen, to test) and V (for signal; from verhogen, to increment). The classical definition of wait in pseudocode is
wait(S) {
    while (S <= 0)
        ; // no-op
    S--;
}
The classical definition of signal in pseudocode is
signal(S) {
    S++;
}
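The classical definitions assume that wait and signal execute atomically; in a runnable sketch, a small internal lock can stand in for that atomicity while the busy-wait itself is kept, as in the pseudocode:

```python
import threading

class BusyWaitSemaphore:
    """Classical busy-wait semaphore, after the definitions above (illustrative)."""
    def __init__(self, value):
        self.S = value
        self._atomic = threading.Lock()  # stands in for atomic wait/signal

    def wait(self):
        while True:
            with self._atomic:
                if self.S > 0:   # test and decrement as one atomic step
                    self.S -= 1
                    return
            # otherwise loop: this is the busy-wait ("no-op") from the pseudocode

    def signal(self):
        with self._atomic:
            self.S += 1
```

`wait` spins while S <= 0, exactly as in the classical definition; real implementations block the calling process instead of spinning.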

130. Write a short note on Peterson’s solution.

Peterson’s solution is a software based solution to the critical section problem.
Consider two processes P0 and P1. For convenience, when presenting Pi, we use Pj to denote the other process; that is, j = 1 - i.
The processes share two variables:
boolean flag[2];
int turn;
Initially flag[0] = flag[1] = false, and the value of turn is immaterial (but is either 0 or 1). The structure of process Pi is shown below.
do {
    flag[i] = true;
    turn = j;
    while (flag[j] && turn == j)
        ; // busy-wait
    // critical section
    flag[i] = false;
    // remainder section
} while (1);

To enter the critical section, process Pi first sets flag[i] to true and then sets turn to the value j, thereby asserting that if the other process wishes to enter the critical section it can do so. If both processes try to enter at the same time, turn will be set to both i and j at roughly the same time. Only one of these assignments will last; the other will occur, but will be overwritten immediately. The eventual value of turn decides which of the two processes is allowed to enter its critical section first.
We now prove that this solution is correct. We need to show that:

  • Mutual exclusion is preserved,
  • The progress requirement is satisfied,
  • The bounded-waiting requirement is met.

To prove property 1, we note that each Pi enters its critical section only if either flag[j] == false or turn == i. Also note that, if both processes were executing in their critical sections at the same time, then flag[0] == flag[1] == true. These two observations imply that P0 and P1 could not have successfully executed their while statements at about the same time, since the value of turn can be either 0 or 1, but cannot be both. Hence, one of the processes, say Pj, must have successfully executed the while statement, whereas Pi had to execute at least one additional statement ("turn == j"). However, since at that time flag[j] == true and turn == j, and this condition persists as long as Pj is in its critical section, mutual exclusion is preserved.
To prove properties 2 and 3, we note that a process Pi can be prevented from entering the critical section only if it is stuck in the while loop with the condition flag[j] == true and turn == j; this loop is the only one. If Pj is not ready to enter the critical section, then flag[j] == false and Pi can enter its critical section. If Pj has set flag[j] to true and is also executing in its while statement, then either turn == i or turn == j. If turn == i, then Pi will enter the critical section. If turn == j, then Pj will enter the critical section. However, once Pj exits its critical section, it will reset flag[j] to false, allowing Pi to enter its critical section. If Pj then sets flag[j] to true again, it must also set turn to i.
Thus, since Pi does not change the value of the variable turn while executing the while statement, Pi will enter the critical section (progress) after at most one entry by Pj (bounded waiting).
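A runnable sketch of Peterson's algorithm with two Python threads is shown below. It relies on CPython's effectively sequentially consistent execution of these statements; on real hardware with weaker memory models, the flag and turn accesses would need memory barriers.

```python
import sys
import threading

sys.setswitchinterval(1e-4)      # switch threads often so the busy-waits stay short

flag = [False, False]
turn = 0
counter = 0

def peterson_worker(i, iterations):
    global turn, counter
    j = 1 - i
    for _ in range(iterations):
        flag[i] = True           # Pi announces its intent to enter
        turn = j                 # ...and yields priority to the other process
        while flag[j] and turn == j:
            pass                 # busy-wait while Pj has priority
        counter += 1             # critical section
        flag[i] = False          # exit section

def run(iterations=200):
    global counter, turn
    counter, turn = 0, 0
    flag[0] = flag[1] = False
    threads = [threading.Thread(target=peterson_worker, args=(i, iterations))
               for i in (0, 1)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return counter
```

Mutual exclusion guarantees `run(n)` returns exactly 2n; removing the while loop could allow increments to be lost.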

 

131. What are the basic functions of an operating system?

The operating system controls and coordinates the use of the hardware among the various application programs. It acts as resource allocator and manager: since there are many possibly conflicting requests for resources, the operating system must decide which requests are allocated resources so as to operate the computer system efficiently and fairly. Also, the operating system is a control program that controls user programs to prevent errors and improper use of the computer. It is especially concerned with the operation and control of I/O devices.

132. Why paging is used?

Paging is a solution to the external fragmentation problem. It permits the logical address space of a process to be noncontiguous, allowing the process to be allocated physical memory wherever it is available.

133. What resources are used when a thread created? How do they differ from those when a process is created?

When a thread is created, it does not require many new resources: the thread shares the resources, such as memory, of the process to which it belongs. The benefit of this sharing is that it allows an application to have several different threads of activity all within the same address space. Process creation, by contrast, is heavyweight because it always requires a new address space to be created; even when processes share memory, inter-process communication is expensive compared to communication between threads.
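A minimal sketch of the point above: a newly created POSIX thread gets no new address space and can read and write the globals of the process that created it directly (the function names here are illustrative):

```c
#include <pthread.h>

/* A global variable belongs to the process; every thread sees it. */
static int shared = 0;

static void *child(void *arg) {
    (void)arg;
    shared = 42;            /* write is visible to the creating thread */
    return NULL;
}

int run_sharing_demo(void) {
    pthread_t t;
    pthread_create(&t, NULL, child, NULL);  /* no new address space */
    pthread_join(t, NULL);
    return shared;
}
```

Had `fork()` been used instead, the child's write would land in its own copy of the address space and the parent would still see 0.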

134. What is virtual memory?

Virtual memory is a technique that makes the system appear to have more memory than it actually does. This is done by sharing the physical memory among processes over time and keeping parts of memory on disk when they are not actively being used.

135. What is Throughput, Turnaround time, waiting time and Response time?

Throughput: Number of processes that complete their execution per time unit.

Turnaround Time: Amount of time to execute a particular process.

Waiting Time: Amount of time a process has been waiting in the ready queue.

Response Time: Amount of time from when a request is submitted until the first response is produced, not the complete output (relevant in a time-sharing environment).
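As a worked example of these metrics, the sketch below computes average turnaround and waiting time under first-come, first-served scheduling for jobs that all arrive at time 0 (the burst times used are a common textbook example, not from this document):

```c
/* Average turnaround and waiting time for FCFS, all arrivals at t=0. */
struct metrics { double avg_turnaround, avg_waiting; };

struct metrics fcfs_metrics(const int burst[], int n) {
    struct metrics m = {0, 0};
    int clock = 0;                        /* simulated CPU time */
    for (int i = 0; i < n; i++) {
        m.avg_waiting    += clock;        /* job i waited from time 0 */
        clock            += burst[i];     /* job i runs to completion */
        m.avg_turnaround += clock;        /* turnaround = completion time */
    }
    m.avg_waiting    /= n;
    m.avg_turnaround /= n;
    return m;
}
```

With bursts 24, 3, 3 the waiting times are 0, 24, 27 (average 17) and the turnaround times are 24, 27, 30 (average 27).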

136. What is the important aspect of a real-time system or Mission Critical Systems?

A real-time operating system has well-defined, fixed time constraints: processing must be done within the defined constraints or the system will fail. An example is the operating system for the flight-control computer of an advanced jet airplane. Real-time systems are often used as control devices in dedicated applications such as controlling scientific experiments, medical imaging systems, industrial control systems, and some display systems.

137. What are the types of Real Time System?

Real-Time systems may be either hard or soft real-time.

Hard real-time: Secondary storage is limited or absent; data is stored in short-term memory or read-only memory (ROM). This conflicts with time-sharing and is not supported by general-purpose operating systems.

Soft real-time: Of limited utility in industrial control or robotics, but useful in applications (multimedia, virtual reality) requiring advanced operating-system features.

138. What is the difference between Hard and Soft real-time systems?

A hard real-time system guarantees that critical tasks complete on time. This goal requires that all delays in the system be bounded, from the retrieval of stored data to the time it takes the operating system to finish any request made of it.

A soft real-time system is one where a critical real-time task gets priority over other tasks and retains that priority until it completes. As in hard real-time systems, kernel delays need to be bounded.

139. What is the cause of thrashing? How does the system detect thrashing? Once it detects thrashing, what can the system do to eliminate this problem?

Thrashing is caused by allocating a process fewer than the minimum number of pages it requires, forcing it to page fault continuously.

The system can detect thrashing by evaluating the level of CPU utilization as compared to the level of multiprogramming.

It can be eliminated by reducing the level of multiprogramming.

140. What are multi tasking, multi programming and multi threading?

Multiprogramming: Multiprogramming is the technique of keeping several jobs in memory simultaneously so the CPU always has something to execute; it creates logical parallelism. The operating system selects a job from the job pool and starts executing it; when that job needs to wait for an I/O operation, the CPU is switched to another job. The main idea is that the CPU is never idle.

 

Multitasking: Multitasking is the logical extension of multiprogramming. The concept is quite similar, but the difference is that switching between jobs occurs so frequently that users can interact with each program while it is running. Such systems are also known as time-sharing systems: a time-shared operating system uses CPU scheduling and multiprogramming to provide each user with a small portion of a shared computer.

 

Multithreading: An application is typically implemented as a separate process with several threads of control. In some situations a single application may be required to perform several similar tasks; for example, a web server accepts client requests for web pages, images, sound, and so forth. A busy web server may have several clients concurrently accessing it. If the web server ran as a traditional single-threaded process, it would be able to service only one client at a time, and the time a client might have to wait for its request to be serviced could be enormous. It is more efficient to have one process that contains multiple threads: the server creates a separate thread that listens for client requests, and when a request is made it creates another thread to service that request rather than creating another process. Multithreading is used to gain responsiveness, resource sharing, economy, and utilization of multiprocessor architectures.

141. What is hard disk and what is its purpose?

A hard disk is a secondary storage device that holds data in bulk on the magnetic medium of the disk. Hard disks have hard platters that hold the magnetic medium; the magnetic medium can be easily erased and rewritten, and a typical desktop machine has hard disks with a capacity of tens of gigabytes. Data is stored on the disk in the form of files.

142. What is fragmentation? Different types of fragmentation?

Fragmentation occurs in a dynamic memory allocation system when many of the free blocks are too small to satisfy any request.

External Fragmentation: External Fragmentation happens when a dynamic memory allocation algorithm allocates some memory and a small piece is left over that cannot be effectively used. If too much external fragmentation occurs, the amount of usable memory is drastically reduced. Total memory space exists to satisfy a request, but it is not contiguous.

Internal Fragmentation: Internal fragmentation is the space wasted inside allocated memory blocks because of restrictions on the allowed sizes of allocated blocks. Allocated memory may be slightly larger than the requested memory; this size difference is memory internal to a partition that is not being used.

143. What is DRAM? In which form does it store data?

DRAM is not the best, but it’s cheap, does the job, and is available almost everywhere you look. DRAM data resides in a cell made of a capacitor and a transistor. The capacitor tends to lose data unless it’s recharged every couple of milliseconds, and this recharging tends to slow down the performance of DRAM compared to speedier RAM types.

144. What is Dispatcher?

The dispatcher module gives control of the CPU to the process selected by the short-term scheduler. This involves:

  • Switching context
  • Switching to user mode
  • Jumping to the proper location in the user program to restart that program

Dispatch latency is the time it takes for the dispatcher to stop one process and start another running.

145. What is CPU Scheduler?

Selects from among the processes in memory that are ready to execute, and allocates the CPU to one of them. CPU scheduling decisions may take place when a process:

 

  1. Switches from running to waiting
  2. Switches from running to ready
  3. Switches from waiting to ready
  4. Terminates

 

Scheduling under 1 and 4 is non-preemptive. All other scheduling is preemptive.

146. What is Context Switch?

Switching the CPU to another process requires saving the state of the old process and loading the saved state for the new process. This task is known as a context switch. Context-switch time is pure overhead, because the system does no useful work while switching. Its speed varies from machine to machine, depending on the memory speed, the number of registers which must be copied, the existence of special instructions (such as a single instruction to load or store all registers).

147. What is cache memory?

Cache memory is random access memory (RAM) that a computer microprocessor can access more quickly than it can access regular RAM. As the microprocessor processes data, it looks first in the cache memory and if it finds the data there (from a previous reading of data), it does not have to do the more time-consuming reading of data from larger memory.

148. What is a Safe State and what is its use in deadlock avoidance?

Safe State:

When a process requests an available resource, system must decide if immediate allocation leaves the system in a safe state. System is in safe state if there exists a safe sequence of all processes.

Deadlock Avoidance:

Ensure that a system will never enter an unsafe state.

149. What is Real-Time System?

A real time process is a process that must respond to the events within a certain time period. A real time operating system is an operating system that can run real time processes successfully.

A real time operating system is one in which the process has to respond within specified time limits.

Depending on this the real time operating system is of two types:

1. Hard Real-Time: The system has to respond within the specified time limits; if it fails to respond within that time, the process is considered to have failed. Example: Robot systems.

2. Soft Real-Time: If the system fails to respond within the specified time limits, the process is not considered to have failed; rather, it gets some more time from the system to produce the response.

150. What is MUTEX?

A mutex is a program object that allows multiple program threads to share the same resource, such as file access, but not simultaneously. When a program is started, a mutex is created with a unique name. After this stage, any thread that needs the resource must lock the mutex, excluding other threads while it is using the resource. The mutex is unlocked when the data is no longer needed or the routine is finished.
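The lock/use/unlock pattern described above can be sketched with the POSIX threads API (one possible mutex implementation; the demo harness is illustrative):

```c
#include <pthread.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static long total;                       /* the shared resource */

static void *adder(void *arg) {
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);       /* wait until no other thread owns it */
        total++;                         /* exclusive access to the resource */
        pthread_mutex_unlock(&lock);     /* release for the other thread */
    }
    return NULL;
}

long run_mutex_demo(void) {
    pthread_t a, b;
    total = 0;
    pthread_create(&a, NULL, adder, NULL);
    pthread_create(&b, NULL, adder, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    return total;                        /* 200000 iff no update was lost */
}
```

Without the mutex, the two unsynchronized increments would interleave and updates would be lost.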

151. What is the difference between a ‘thread’ and a ‘process’?

A process is a collection of virtual memory space, code, data, and system resources. A thread is code that is to be serially executed within a process. A processor executes threads, not processes, so each application has at least one process, and a process always has at least one thread of execution, known as the primary thread. A process can have multiple threads in addition to the primary thread. Prior to the introduction of multiple threads of execution, applications were all designed to run on a single thread of execution.

When a thread begins to execute, it continues until it is killed or until it is interrupted by a thread with higher priority (by a user action or the kernel’s thread scheduler). Each thread can run separate sections of code, or multiple threads can execute the same section of code. Threads executing the same block of code maintain separate stacks. Each thread in a process shares that process’s global variables and resources.

152. What is Semaphore?

A semaphore is the locking mechanism used inside resource managers and resource dispensers.

153. What is Marshalling?

The process of packaging and sending interface method parameters across thread or process boundaries is marshalling.

154. Define and Explain COM?

COM stands for Component Object Model which is a specification (Standards).

COM has two aspects

  1.    COM specifications provide a definition for what an object is.
  2.    COM provides services or blueprints for the creation of objects and communication between client and server.

COM is loaded when the first object of the component is created.

155. What is INODE?

Inodes are data structures that contain information about files; they are created when a UNIX file system is created. Each file has an inode and is identified by an inode number (i-number) in the file system where it resides. The inode provides important information about a file, such as group ownership and access mode (read, write, execute permissions).

156. What are the different process states?

A process may be in any one of the following states

  1. NEW
  2. READY
  3. WAIT
  4. RUNNING
  5. TERMINATE

Note: even when a process reaches the TERMINATE state it still exists, until its entry is removed from the process table.

157. What is marshalling?

Marshalling is the process of gathering data and transforming it into a standard format before it is transmitted over a network so that the data can transcend network boundaries. In order for an object to be moved around a network, it must be converted into a data stream that corresponds with the packet structure of the network transfer protocol. This conversion is known as data marshalling. Data pieces are collected in a message buffer before they are marshaled. When the data is transmitted, the receiving computer converts the marshaled data back into an object.

158. What is Mutex Object?

A mutex object is a synchronization object whose state is set to signaled when it is not owned by any thread, and non-signaled when it is owned. For example, to prevent two threads from writing to shared memory at the same time, each thread waits for ownership of a mutex object before executing the code that accesses the memory. After writing to the shared memory, the thread releases the mutex object.

159. What is semaphore?

A semaphore object is a synchronization object that maintains a count between zero and a specified maximum value. The count is decremented each time a thread completes a wait for the semaphore object and incremented each time a thread releases the semaphore. When the count reaches zero, no more threads can successfully wait for the semaphore object state to become signaled. The state of a semaphore is set to ‘signaled’ when its count is greater than zero and ‘non-signaled’ when its count is zero. The semaphore object is useful in controlling a shared resource that can support a limited number of users. It acts as a gate that limits the number of threads sharing the resource to a specified maximum number.
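The gating behavior described above can be sketched with a POSIX counting semaphore. This is an illustrative harness, assuming a platform with `sem_init` (e.g. Linux): five threads contend for a resource gated at two concurrent users, and the code records the peak concurrency actually observed:

```c
#include <pthread.h>
#include <semaphore.h>
#include <stdatomic.h>

static sem_t gate;                        /* count: remaining free slots */
static atomic_int in_use, peak;

static void *user(void *arg) {
    (void)arg;
    sem_wait(&gate);                      /* decrement; blocks at zero */
    int now = atomic_fetch_add(&in_use, 1) + 1;
    int old = atomic_load(&peak);         /* record peak concurrency */
    while (now > old && !atomic_compare_exchange_weak(&peak, &old, now))
        ;
    /* ... use the shared resource ... */
    atomic_fetch_sub(&in_use, 1);
    sem_post(&gate);                      /* increment; wakes one waiter */
    return NULL;
}

int run_semaphore_demo(void) {
    pthread_t t[5];
    atomic_store(&in_use, 0);
    atomic_store(&peak, 0);
    sem_init(&gate, 0, 2);                /* at most 2 concurrent users */
    for (int i = 0; i < 5; i++) pthread_create(&t[i], NULL, user, NULL);
    for (int i = 0; i < 5; i++) pthread_join(t[i], NULL);
    sem_destroy(&gate);
    return atomic_load(&peak);            /* never exceeds the maximum of 2 */
}
```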

160. Explain Memory Partitioning, Paging, Segmentation.

Memory partitioning is the way to distribute the Kernel and User Space Area in Memory.

A page is the minimum unit of memory that can be swapped in and out of physical memory. In modern server operating systems we can use multiple page size support, which helps to tune OS performance depending on the type of application.

Segmentation is a way to keep similar objects in one place. For example, you can have your stack in one place (stack segment), binary code in another (text segment), and data in another (data and BSS segments).

Linux doesn’t have a segment architecture; AIX does.

161. What is the Condition for deadlock occurrence?

Deadlock can arise if four conditions hold simultaneously.

Mutual exclusion: only one process at a time can use a resource.

Hold and wait: a process holding at least one resource is waiting to acquire additional resources held by other processes.

No preemption: a resource can be released only voluntarily by the process holding it, after that process has completed its task.

Circular wait: there exists a set {P0, P1, P2, P3} of waiting processes such that P0 is waiting for a resource that is held by P1, P1 is waiting for a resource that is held by P2, P2 is waiting for a resource that is held by P3, and P3 is waiting for a resource that is held by P0.

162. What is a Safe State and what is its use in deadlock avoidance?

When a process requests an available resource, system must decide if immediate allocation leaves the system in a safe state

 

System is in safe state if there exist a safe sequence of all processes.

 

A sequence <P1, P2, ..., Pn> is safe if, for each Pi, the resources that Pi can still request can be satisfied by the currently available resources plus the resources held by all Pj with j < i.

 

If Pi resource needs are not immediately available, then Pi can wait until all Pj have finished.

 

When Pj is finished, Pi can obtain needed resources, execute, return allocated resources, and terminate.

 

When Pi terminates, Pi+1 can obtain its needed resources, and so on.

 

Deadlock avoidance: ensure that a system will never enter an unsafe state.
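The safe-sequence definition above is exactly what the safety check of the banker's algorithm computes: repeatedly find a process whose remaining need fits in the currently available resources, pretend it runs to completion and returns what it holds, and repeat. A hedged sketch (the 5-process, 3-resource sizes and the matrices in the test are a classic illustration, not from this document):

```c
#include <stdbool.h>
#include <string.h>

#define P 5   /* number of processes */
#define R 3   /* number of resource types */

/* Returns true iff a safe sequence of all P processes exists. */
bool is_safe(int avail[R], int alloc[P][R], int need[P][R]) {
    int work[R];
    bool finished[P] = {false};
    memcpy(work, avail, sizeof work);      /* work = currently available */

    for (int done = 0; done < P; ) {
        bool progress = false;
        for (int i = 0; i < P; i++) {
            if (finished[i]) continue;
            bool fits = true;              /* can Pi's need be satisfied? */
            for (int r = 0; r < R; r++)
                if (need[i][r] > work[r]) { fits = false; break; }
            if (!fits) continue;
            for (int r = 0; r < R; r++)
                work[r] += alloc[i][r];    /* Pi terminates, frees resources */
            finished[i] = true;
            done++;
            progress = true;
        }
        if (!progress) return false;       /* no process can finish: unsafe */
    }
    return true;
}
```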

163. What is the usage of Deadlock Detection- Algorithm?
  • When, and how often, to invoke the algorithm depends on:

 

  • How often a deadlock is likely to occur?

 

  • How many processes will need to be rolled back?

 

  • If the detection algorithm is invoked arbitrarily, there may be many cycles in the resource graph and we would not be able to tell which of the many deadlocked processes caused the deadlock.
164. What are the different Dynamic Storage- Allocation methods?

How to satisfy a request of size n from a list of free holes?

First-fit: Allocate the first hole that is big enough.

Best-fit: Allocate the smallest hole that is big enough; must search entire list, unless ordered by size. It produces the smallest leftover hole.

Worst-fit: Allocate the largest hole; must also search entire list. Produces the largest left over hole. First-fit and best-fit are better than worst-fit in terms of speed and storage utilization.
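The three policies can be sketched as searches over a list of free-hole sizes, each returning the index of the chosen hole, or -1 if no hole is big enough (the hole sizes in the test are a standard illustration, not from this document):

```c
/* First-fit: take the first hole that is big enough. */
int first_fit(const int hole[], int n, int request) {
    for (int i = 0; i < n; i++)
        if (hole[i] >= request) return i;
    return -1;
}

/* Best-fit: take the smallest hole that is big enough;
 * must scan the entire list unless it is ordered by size. */
int best_fit(const int hole[], int n, int request) {
    int best = -1;
    for (int i = 0; i < n; i++)
        if (hole[i] >= request && (best < 0 || hole[i] < hole[best]))
            best = i;
    return best;
}

/* Worst-fit: take the largest hole; also scans the entire list. */
int worst_fit(const int hole[], int n, int request) {
    int worst = -1;
    for (int i = 0; i < n; i++)
        if (hole[i] >= request && (worst < 0 || hole[i] > hole[worst]))
            worst = i;
    return worst;
}
```

For holes of sizes 100, 500, 200, 300, 600 and a request of 212, first-fit picks the 500 hole, best-fit the 300 hole (smallest leftover), and worst-fit the 600 hole (largest leftover).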

165. What is fragmentation? Different types of fragmentation?

Fragmentation occurs in a dynamic memory allocation system when many of the free blocks are too small to satisfy any request.

External Fragmentation: External Fragmentation happens when a dynamic memory allocation algorithm allocates some memory and a small piece is left over that cannot be effectively used. If too much external fragmentation occurs, the amount of usable memory is drastically reduced. Total memory space exists to satisfy a request, but it is not contiguous.

Internal Fragmentation: Internal fragmentation is the space wasted inside allocated memory blocks because of restrictions on the allowed sizes of allocated blocks. Allocated memory may be slightly larger than the requested memory; this size difference is memory internal to a partition that is not being used. External fragmentation can be reduced by compaction:

  • Shuffle memory contents to place all free memory together in one large block.
  • Compaction is possible only if relocation is dynamic and is done at execution time.
166. Under what circumstances do page faults occur? Describe the actions taken by the operating system when a page fault occurs?

A page fault occurs when an access to a page that has not been brought into main memory takes place. The operating system verifies the memory access, aborting the program if it is invalid. If it is valid, a free frame is located and I/O is requested to read the needed page into the free frame. Upon completion of I/O, the process table and page table are updated and the instruction is restarted. When a process is executed with only few pages in memory and when an instruction is encountered which refers to any instruction or data in some other page, which is not present in the main memory, a page fault occurs.

167. What is the cause of thrashing? How does the system detect thrashing? Once it detects thrashing, what can the system do to eliminate this problem?

Thrashing is caused by under allocation of the minimum number of pages required by a process, forcing it to continuously page fault. The system can detect thrashing by evaluating the level of CPU utilization as compared to the level of multiprogramming. It can be eliminated by reducing the level of multiprogramming.

168. Describe different job scheduling in operating systems.

Scheduling is the activity of deciding when processes will receive the resources they request.

 

FCFS: FCFS stands for First Come, First Served. In FCFS, the job that has been waiting the longest is served next.

 

Round Robin Scheduling: Round robin scheduling is a method where each process gets a small quantum of time to run and is then preempted so the next process can run. This is called time-sharing and gives the effect of all the processes running at the same time.

 

Shortest Job First: The Shortest Job First scheduling algorithm is a nonpreemptive algorithm that chooses the job that will execute in the shortest amount of time.

 

Priority Scheduling: Priority scheduling is a scheduling method where at all times the highest priority process is assigned the resource.
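The round robin method above can be sketched as a small simulation. This version scans the processes cyclically, which matches the true ready-queue order for the classic bursts used in the test (24, 3, 3 with quantum 4) but is only an approximation in general:

```c
/* Average waiting time under round robin, all arrivals at time 0.
 * waiting = completion - burst, since every process arrives at 0. */
double rr_avg_waiting(const int burst[], int n, int quantum) {
    int remaining[16], completion[16], clock = 0, left = n;
    if (n > 16) return -1;                    /* small fixed-size sketch */
    for (int i = 0; i < n; i++) remaining[i] = burst[i];

    while (left > 0) {
        for (int i = 0; i < n; i++) {         /* cyclic scan of processes */
            if (remaining[i] == 0) continue;
            int slice = remaining[i] < quantum ? remaining[i] : quantum;
            clock += slice;                   /* run one time slice */
            remaining[i] -= slice;
            if (remaining[i] == 0) { completion[i] = clock; left--; }
        }
    }
    double total = 0;
    for (int i = 0; i < n; i++)
        total += completion[i] - burst[i];    /* time spent waiting */
    return total / n;
}
```

For bursts 24, 3, 3 with quantum 4, the waiting times are 6, 4, and 7, for an average of 17/3 ≈ 5.67.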

169. What is the main component of operating system

The main components of an operating system are the kernel and the shell.

 

The shell is an interface between application programs and the kernel. Whenever an application program wants some work done, it contacts the kernel, and the kernel in turn performs the work with the help of device drivers. Thus we can say the kernel is an interface between the hardware and the shell.

 

The kernel uses device drivers to control the microcontroller card of a peripheral device, and in turn the work is accomplished.

 

Application Program -> [shells ->kernel ->device driver -> controller card -> physical hardware]

170. Explain briefly about, processor, assembler, compiler, loader, linker and the functions executed by them.

Processor: A processor is the part of a computer system that executes instructions. It is also called the CPU.

 

Assembler: An assembler is a program that takes basic computer instructions and converts them into a pattern of bits that the computer's processor can use to perform its basic operations. Some people call these instructions assembler language; others use the term assembly language.

 

Compiler:   A compiler is a special program that processes statements written in a particular programming language and turns them into machine language or "code" that a computer's processor uses.

 

Typically, a programmer writes language statements in a language such as Pascal or C one line at a time using an editor. The file that is created contains what are called the source statements. The programmer then runs the appropriate language compiler, specifying the name of the file that contains the source statements.

 

Loader: In a computer operating system, a loader is a component that locates a given program (which can be an application or, in some cases, part of the operating system itself) in offline storage (such as a hard disk), loads it into main storage (in a personal computer, called random access memory), and gives that program control of the computer.

 

Linker: The linker links libraries with the object code to turn the object code into executable machine code.

171. What are constants? List and explain different types of constants with examples.

A constant is an identifier whose value remains fixed throughout the execution of the program. The constants cannot be modified in the program.

For example: 1, 3.14512, 'z', "hello"

Different types of constants are:

  • Integer Constant: An integer is a whole number without any fraction part. There are 3 types of integer constants:
    1. Decimal constants (0 1 2 3 4 5 6 7 8 9) For ex: 0, -9, 22
    2. Octal constants (0 1 2 3 4 5 6 7) For ex: 021, 077, 033
    3. Hexadecimal constants (0 1 2 3 4 5 6 7 8 9 A B C D E F) For ex: 0x7f, 0x2a, 0x521
  • Floating Point Constant: The floating point constant is a real number. The floating point constants can be represented using 2 forms:

              1. Fractional Form: A floating point number represented using fractional form has an integer part followed by a dot and a fractional part.

                  For ex: 0.5, -0.9

              2. Scientific Notation (Exponent Form): A floating point number represented using scientific notation has three parts: mantissa, E, and exponent.

                 For ex: 9.86E3 means 9.86 × 10^3

  • Character Constant: A symbol enclosed within a pair of single quotes (') is called a character constant.

Each character is associated with a unique value called an ASCII (American Standard Code for Information Interchange) code.

For ex: '9', 'a', '\n'

ASCII code of A = 65, a= 97, 0=48, etc

  • String Constant: A sequence of characters enclosed within a pair of double quotes (") is called a string constant.

The string always ends with NULL (denoted by \0) character.

For ex: "9", "a", "sri", "\n", “”

 

  • Escape Sequence Characters: An escape sequence character begins with a backslash and is followed by one or more characters.
  • A backslash (\) along with some character gives a special meaning and purpose to that character.
  • The complete set of escape sequences is:
    • \b Backspace
    • \f Form feed
    • \n Newline
    • \r Return
    • \t Horizontal tab
    • \v Vertical tab
    • \\ Backslash
    • \' Single quotation mark
    • \" Double quotation mark
    • \? Question mark
    • \0 Null character
  • Enumeration constants: These are used to define named integer constants; an enumeration is a set of named integer constants.

Syntax for defining enumeration constants

enum identifier { enumeration-list };               // enum is a keyword

Example:

  • enum  boolean           {NO, YES};

/* This example defines enumeration boolean with two constants, NO with value 0 and YES with value 1 */

In the enumeration-list, the first constant, NO, has the value ZERO since it is not defined explicitly, and the value of the next constant, YES, is one more than the previous constant's value (YES = 1).

  • enum months {Jan=1, Feb, Mar, Apr, May, June,Jul,Aug,Sep,Oct,Nov,Dec};

/* This example defines enumeration months with 12 constants: Jan=1, Feb=2, Mar=3, Apr=4, ..., Dec=12 */

In the enumeration-list, the first constant Jan has the value 1 since it is defined explicitly, and the values of the following constants Feb, Mar, Apr, ..., Dec are each one more than the previous constant's value.
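The constant categories above can be tied together in one small C function; the specific values chosen (63 in three bases, the months enumeration from the example) are illustrative:

```c
#include <string.h>

/* Named integer constants, as in the enumeration example above. */
enum months { Jan = 1, Feb, Mar, Apr, May, Jun,
              Jul, Aug, Sep, Oct, Nov, Dec };

int constants_demo(void) {
    int dec = 63, oct = 077, hex = 0x3f;   /* three spellings, one value */
    char c = 'a';                          /* character constant, ASCII 97 */
    const char *s = "sri";                 /* string constant, ends in '\0' */
    double f = 9.86E3;                     /* scientific notation: 9860.0 */

    if (dec != oct || oct != hex) return -1;
    if (c != 97) return -1;
    if (strlen(s) != 3 || s[3] != '\0') return -1;
    if (f != 9860.0) return -1;
    return Dec;                            /* implicit value: 12 */
}
```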
