Memory Management MCQ Quiz - Objective Questions with Answers for Memory Management

Last updated on Jun 12, 2025

Latest Memory Management MCQ Objective Questions

Memory Management Question 1:

Paging:

  1. is a method of memory allocation by which the program is subdivided into equal portions, or pages and core is subdivided into equal portions or blocks.
  2. consists of those addresses that may be generated by a processor during execution of a computation.

  3. is a method of allocating processor time.
  4. allows multiple programs to reside in separate areas of core at the same time.
  5. None of the above

Answer (Detailed Solution Below)

Option 1 : is a method of memory allocation by which the program is subdivided into equal portions, or pages and core is subdivided into equal portions or blocks.

Memory Management Question 1 Detailed Solution

The correct answer is option 1: Paging is a method of memory allocation by which the program is subdivided into equal portions (pages) and core is subdivided into equal portions (blocks).

Key Points

  • Paging is a memory management scheme that eliminates the need for contiguous allocation of physical memory.
  • In paging, the operating system retrieves data from secondary storage in same-size blocks called pages.
  • It divides the program into fixed-size pages and the main memory into blocks of the same size, called frames.
  • When a program is to be executed, its pages are loaded into any available memory frames from the secondary storage.

Memory Management Question 2:

Dirty bit is used to show the

  1. Page with low frequency occurrence
  2. Wrong page
  3. Page with corrupted data
  4. Page that is modified after being loaded into cache memory
  5. None of the above

Answer (Detailed Solution Below)

Option 4 : Page that is modified after being loaded into cache memory

Memory Management Question 2 Detailed Solution

Concept:

Dirty bit: A dirty bit is associated with a block (line) of cache memory and indicates whether that block has been modified after being loaded into the cache.

Explanation:

The dirty-bit concept is used with the write-back policy in caches.
Write back means updates are written only to the cache. When a line is modified, its dirty bit is set; when the line is selected for replacement, it needs to be written to main memory only if its dirty bit is set.

With write back, the number of writes to main memory is reduced because a block is written back only when the cached copy differs from memory; this is tracked by the dirty (modify) bit, and the block is written back to main memory only when its dirty bit is set to 1. Thus, a write-back cache requires two status bits per line: a valid bit and a dirty bit.
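
As an illustration, a minimal Python sketch of a write-back line with a valid bit and a dirty bit (a toy example; the names and structure are assumptions, not from the original solution):

```python
class CacheLine:
    def __init__(self):
        self.valid = False   # does the line hold data?
        self.dirty = False   # has the cached copy been modified since it was loaded?
        self.tag = None
        self.data = None

def write(line, tag, value):
    # Write-back policy: update only the cache and set the dirty bit.
    line.tag, line.data = tag, value
    line.valid, line.dirty = True, True

def evict(line, memory):
    # On replacement, write to main memory only if the dirty bit is set.
    if line.valid and line.dirty:
        memory[line.tag] = line.data
    line.valid = line.dirty = False

memory = {}
line = CacheLine()
write(line, tag=42, value=99)
evict(line, memory)
print(memory)  # {42: 99} -> written back because the line was dirty
```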


Memory Management Question 3:

A processor has a 32-bit virtual address, a 28-bit physical address, and a 2 kB page size. How many bits are required for the virtual and physical page numbers?

  1. 17, 21
  2. 21, 17
  3. 16, 10
  4. More than one of the above
  5. None of the above

Answer (Detailed Solution Below)

Option 2 : 21, 17

Memory Management Question 3 Detailed Solution

Data:

Virtual address space (VAS) = 2³² bytes

Physical address space (PAS) = 2²⁸ bytes

Page size (PS) = 2 kB = 2¹¹ bytes

Formula:

\({\rm{number\;of\;pages\;}} = {\rm{P}} = \frac{{{\rm{VAS}}}}{{{\rm{PS}}}}\)

\({\rm{number\;of\;frames}} = {\rm{F}} = \frac{{{\rm{PAS}}}}{{{\rm{PS}}}}\)

Bits required for the virtual page number = ⌈log₂ P⌉

Bits required for the physical page number = ⌈log₂ F⌉

Calculation:

\({\rm{P}} = \frac{{{2^{32}}}}{{{2^{11}}}} = {2^{21}}\)

Bits required for the virtual page number = ⌈log₂ 2²¹⌉ = 21

\({\rm{F}} = \frac{{{2^{28}}}}{{{2^{11}}}} = \;{2^{17}}\)

Bits required for the physical page number = ⌈log₂ 2¹⁷⌉ = 17
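
The same arithmetic as a short Python check (a sketch; the variable names are illustrative):

```python
import math

virtual_bits, physical_bits = 32, 28
page_size = 2 * 1024                              # 2 kB page

offset_bits = int(math.log2(page_size))           # 11 bits of offset within a page
virtual_page_bits = virtual_bits - offset_bits    # bits for the virtual page number
physical_page_bits = physical_bits - offset_bits  # bits for the physical frame number

print(virtual_page_bits, physical_page_bits)      # 21 17
```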

Memory Management Question 4:

Consider a situation in which physical memory contains 62 page frames. How many bits will be required in the physical and logical addresses, if a memory management system has 128 pages with a page size of 1024 bytes?

  1. 14 and 15 
  2. 15 and 16
  3. 16 and 17
  4. 14 and 16

Answer (Detailed Solution Below)

Option 3 : 16 and 17

Memory Management Question 4 Detailed Solution

The correct answer is option 3) 16 and 17 (physical address: 16 bits, logical address: 17 bits).

Key Points

  • Page size = 1024 bytes = 2¹⁰ bytes, so the offset requires 10 bits.
  • Total number of pages = 128 = 2⁷, so the logical page number requires 7 bits.
  • Logical Address = Page Number + Offset → 7 + 10 = 17 bits.
  • Physical memory contains 62 page frames → the nearest power of 2 ≥ 62 is 64 = 2⁶, so the frame number requires 6 bits.
  • Physical Address = Frame Number + Offset → 6 + 10 = 16 bits.

Additional Information

  • Logical Address = bits required to uniquely identify each page (Page number) + Offset within page.
  • Physical Address = bits to identify the frame + Offset (same as in logical address).

Hence, the correct answer is: option 3) 16 and 17
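
A quick Python check of both address widths (sketch only; note that the 62 frames round up to 2⁶):

```python
import math

page_size, pages, frames = 1024, 128, 62

offset_bits = int(math.log2(page_size))                      # 10
logical_bits = math.ceil(math.log2(pages)) + offset_bits     # 7 + 10 = 17
physical_bits = math.ceil(math.log2(frames)) + offset_bits   # 6 + 10 = 16

print(physical_bits, logical_bits)  # 16 17
```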

Memory Management Question 5:

Consider a system with page size p, average process size m, and page table entry size e. What is the amount of space required by the page table?

  1. me/p 
  2. mp/e
  3. mpe 
  4. pe/m

Answer (Detailed Solution Below)

Option 1 : me/p 

Memory Management Question 5 Detailed Solution

The correct answer is me/p.

Key Points

  • The page table is a data structure used in computer operating systems to manage the mapping between virtual addresses and physical addresses.
  • Each entry in the page table corresponds to a page in the virtual memory and contains information about the physical memory location of that page.
  • The size of the page table depends on the number of pages in the process and the size of each page table entry.
  • Given:
    • Page size (p)
    • Average process size (m)
    • Size of each page table entry (e)
  • To calculate the number of pages, divide the average process size by the page size: m/p
  • To find the total space required by the page table, multiply the number of pages by the size of each page table entry: (m/p) * e
  • Simplifying the expression gives the total space required as me/p
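
A rough numeric illustration of the me/p result (the process size, page size, and entry size below are arbitrary, not from the question):

```python
m = 16 * 1024 * 1024   # average process size: 16 MB
p = 4 * 1024           # page size: 4 kB
e = 4                  # page table entry size: 4 bytes

pages = m // p                  # number of pages = m / p = 4096
page_table_size = pages * e     # (m / p) * e = m * e / p = 16 kB

print(pages, page_table_size)   # 4096 16384
```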

Additional Information

  • The page table allows the operating system to efficiently manage memory allocation and ensure that processes do not interfere with each other's memory.
  • Modern operating systems use hierarchical or multi-level page tables to optimize memory usage and reduce the size of page tables for large address spaces.
  • Page tables also play a crucial role in implementing virtual memory, allowing systems to use disk space to extend the available physical memory.

Top Memory Management MCQ Objective Questions

_________ is a memory management scheme that permits the physical address space of a process to be noncontiguous.

  1. Segmentation 
  2. Paging
  3. Fragmentation 
  4. Swapping

Answer (Detailed Solution Below)

Option 2 : Paging

Memory Management Question 6 Detailed Solution


Explanation:

Paging

  • Paging is a memory management scheme by which a computer stores and retrieves data from secondary storage for use in the main memory.
  • In this scheme, the operating system retrieves data from secondary storage in same-size blocks called pages.
  • Paging is an important part of virtual memory implementations in modern operating systems, using secondary storage to let programs exceed the size of available physical memory.

Segmentation

  • Segmentation is a memory-management scheme that supports the user view of memory.
  • A logical address space is a collection of segments. Each segment has a name and a length.
  • Addresses specify both the segment name (or number) and the offset within the segment; the user therefore specifies each address by these two quantities.

Fragmentation 

  • In memory management, fragmentation refers to free memory becoming wasted or unusable as blocks are allocated and released.
  • It appears as internal fragmentation (unused space inside an allocated partition or page) or external fragmentation (free memory broken into pieces too small to satisfy a request); it is a problem, not an allocation scheme.

Swapping

  • The medium-term scheduler reduces the degree of multiprogramming. Some processes are removed from memory to reduce multiprogramming. Later, the process can be reintroduced into memory, and its execution can be continued where it left off. This scheme is called swapping.
  • The long-term scheduler, or job scheduler, selects processes from a mass-storage device and loads them into memory for execution.
  • The short-term scheduler, or CPU scheduler, selects from among the processes that are ready to execute and allocates the CPU to one of them.

In the context operating systems, which of the following statements is/are correct with respect to paging?

  1. Paging incurs memory overheads.
  2. Paging helps solve the issue of external fragmentation.
  3. Page size has no impact on internal fragmentation.
  4. Multi-level paging is necessary to support pages of different sizes.

Answer (Detailed Solution Below)

Option : 1 and 2

Memory Management Question 7 Detailed Solution


Key Points

  • Memory is divided into fixed-size frames and processes into fixed-size pages, so there is no external fragmentation. However, a process rarely fills its last page completely, which causes internal fragmentation, and its extent depends on the page size.
  • Page tables themselves occupy extra memory, so paging incurs a memory overhead.
  • Multi-level paging is used to keep page tables manageable for large address spaces; it is not needed merely to support different page sizes.

Therefore statements 1 and 2 are correct.
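
A small sketch of how internal fragmentation depends on page size (illustrative numbers only):

```python
def internal_fragmentation(process_size, page_size):
    """Unused space in the last page when a process is split into fixed-size pages."""
    remainder = process_size % page_size
    return 0 if remainder == 0 else page_size - remainder

# The same 10,300-byte process wastes more space as the page size grows.
for page_size in (512, 1024, 4096):
    print(page_size, internal_fragmentation(10_300, page_size))
# 512 -> 452, 1024 -> 964, 4096 -> 1988
```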

The primary objective of a time-sharing operating system is to

  1. Avoid thrashing
  2. Provide fast response to the user of the computer
  3. Provide fast execution of processes
  4. Optimize computer memory usage

Answer (Detailed Solution Below)

Option 2 : Provide fast response to the user of the computer

Memory Management Question 8 Detailed Solution

  • Time-sharing (or multitasking) is a logical extension of multiprogramming.
  • In time-sharing systems, the CPU executes multiple jobs by switching among them, but the switches occur so frequently that the users can interact with each program while it is running.
  • Time-sharing requires an interactive computer system, which provides direct communication between the user and the system.
  • The user gives instructions to the operating system or to a program directly, using an input device such as a keyboard, mouse, touchpad, or touch screen, and waits for immediate results on an output device.

Contiguous memory allocation having variable size partition suffers from:

  1. External Fragmentation
  2. Internal Fragmentation
  3. Both External and Internal Fragmentation
  4. None of the options

Answer (Detailed Solution Below)

Option 1 : External Fragmentation

Memory Management Question 9 Detailed Solution


The correct solution is 'option 1'.

Key Points

  • Contiguous allocation of memory results in fragmentation, which may be external or internal.
  • It leads to inflexibility and memory wastage.
  • In variable partitioning, space in main memory is allocated exactly according to the requirement of the process, hence there is no internal fragmentation: no unused space is left inside a partition.
  • The absence of internal fragmentation does not mean that external fragmentation will not occur.

Consider three processes of 10 MB, 20 MB, and 30 MB loaded contiguously in memory. After some time, the 10 MB and 30 MB processes are freed. There are now two free partitions totalling 40 MB, but a 40 MB process cannot be loaded because the free space is not contiguous, as sketched below.
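
A minimal sketch of that scenario (the values and helper logic are illustrative, not part of the original solution):

```python
# Free holes left after the 10 MB and 30 MB processes terminate (sizes in MB),
# separated by the still-resident 20 MB process.
holes = [10, 30]
request = 40

total_free = sum(holes)                            # 40 MB free in total
fits_somewhere = any(h >= request for h in holes)  # but no single hole is big enough

print(total_free, fits_somewhere)  # 40 False -> external fragmentation
```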


Thus, the correct answer is: External Fragmentation

NOTE:
This question was dropped and marks were allocated to all

Assume that in a certain computer, the virtual addresses are 64 bits long and the physical addresses are 48 bits long. The memory is word addressable. The page size is 8 kB and the word size is 4 bytes. The Translation Look-aside Buffer (TLB) in the address translation path has 128 valid entries. At most how many distinct virtual addresses can be translated without any TLB miss?

  1. 16 × 2¹⁰
  2. 256 × 2¹⁰
  3. 4 × 2²⁰
  4. 8 × 2²⁰

Answer (Detailed Solution Below)

Option 2 : 256 × 2¹⁰

Memory Management Question 10 Detailed Solution


Memory is word addressable.

1 word = 4 bytes

Virtual Address (VA) = 64 bits

∴ Virtual Address Space (VAS) = 2⁶⁴ words

Physical Address (PA) = 48 bits

∴ Physical Address Space (PAS) = 2⁴⁸ words

Page size (PS) = 8 kB = 2¹¹ words (1 word = 4 bytes)

∴ page offset = 11 bits

∴ number of pages possible = \(\frac{{VAS}}{{PS}} = \frac{{{2^{64}}}}{{{2^{11}}}} = {2^{53}}\)

∴ number of frames possible = \(\frac{{PAS}}{{PS}} = \frac{{{2^{48}}}}{{{2^{11}}}} = {2^{37}}\)

VA = Page number + page offset

Each entry of the Translation Lookaside Buffer (TLB) maps a Page Number to its Frame Number.

Entries in TLB = 128 = 2⁷

If a page number is found in the TLB, then there will be a hit for all the words (word addresses) of that page.

1 page hit implies 2¹¹ distinct virtual-address hits.

So 2⁷ page hits imply 2⁷ × 2¹¹ = 2⁸ × 2¹⁰ = 256 × 2¹⁰ virtual-address hits.

Therefore, at most 256 × 2¹⁰ distinct virtual addresses can be translated without any TLB miss.

Tips and Tricks:

The maximum number of distinct virtual addresses that can be translated without any TLB miss = number of TLB entries × page size (in addressable units, here words).
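
For this question, that rule can be checked in a couple of lines (sketch):

```python
tlb_entries = 128
page_size_words = (8 * 1024) // 4     # 8 kB page, 4-byte words -> 2048 words per page

tlb_reach = tlb_entries * page_size_words
print(tlb_reach, tlb_reach == 256 * 2**10)  # 262144 True
```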

The memory management scheme that allows processes to be stored non-contiguously in memory:

  1. Spooling
  2. Swapping
  3. Paging
  4. Relocation

Answer (Detailed Solution Below)

Option 3 : Paging

Memory Management Question 11 Detailed Solution


Spooling - It is a process in which data is temporarily held to be used and executed by a device, program or the system

Swapping - It is a memory reclamation method wherein memory contents not currently in use are swapped to a disk to make the memory available for other applications or processes.

Paging - It is a memory management scheme that allows the processes to be stored non-contiguously in the memory.

Relocation - Sometimes, as per the requirements, data is transferred from one location to another. This is called memory relocation.

If main memory access time is 400 μs, TLB access time is 50 μs, considering TLB hit as 90%, what will be the overall access time?  

  1. 800 μs 
  2. 490 μs
  3. 485 μs
  4. 450 μs

Answer (Detailed Solution Below)

Option 2 : 490 μs

Memory Management Question 12 Detailed Solution


Data:

TLB hit ratio = p = 90% = 0.9

TLB access time = t = 50 μs

Memory access time = m = 400 μs

Effective memory access time = EMAT

Formula:

EMAT = p × (t + m) + (1 – p) × (t + m + m)

Calculation:

EMAT = 0.9 × (50 + 400) + (1 – 0.9) × (50 + 400 + 400)

EMAT = 490 μs

∴ the overall access time is 490 μs
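
The same calculation as a short Python sketch:

```python
p, t, m = 0.9, 50, 400   # TLB hit ratio, TLB access time (μs), memory access time (μs)

emat = p * (t + m) + (1 - p) * (t + m + m)
print(round(emat, 2))    # 490.0 μs
```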

Important Points

During TLB hit

Frame number is fetched from the TLB (50 μs)

and the required word is fetched from physical memory (400 μs)

During TLB miss

No matching entry is found in the TLB (50 μs)

The frame number is fetched from the page table in physical memory (400 μs)

and the required word is fetched from physical memory (400 μs)

Consider allocation of memory to a new process. Assume that none of the existing holes in the memory will exactly fit the process’s memory requirement. Hence, a new hole of smaller size will be created if allocation is made in any of the existing holes. Which one of the following statements is TRUE?

  1. The hole created by first fit is always larger than the hole created by next fit.
  2. The hole created by worst fit is always larger than the hole created by first fit.
  3. The hole created by best fit is never larger than the hole created by first fit.
  4. The hole created by next fit is never larger than the hole created by best fit.

Answer (Detailed Solution Below)

Option 3 : The hole created by best fit is never larger than the hole created by first fit.

Memory Management Question 13 Detailed Solution


Concept:

Best fit allocation:

The best fit allocation strategy chooses the smallest available memory partition that can satisfy the memory requirement. It creates the smallest hole.

First fit allocation:

The first fit chooses the first available memory partition that can satisfy the requirement.

Worst fit allocation:

The worst fit allocation strategy chooses the largest available memory partition that can satisfy the memory requirement. It creates the largest hole.

Next fit allocation:

It works the same as first fit, except that it maintains a pointer to the position of the last allocation and begins its search from there when a new request arrives, unlike first fit, which always starts from the beginning of memory.

Explanation:

Option 1 and Option 4 : FALSE

The hole created by first fit may or may not be larger than the hole created by next fit

Option 2: FALSE

The hole created by worst fit is always larger than or equal to the hole created by first fit; since the two can be equal, "always larger" does not hold.

Option 3: TRUE

The hole created by best fit can never be larger than the hole created by first fit, although it may be equal.
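
A minimal sketch contrasting the three main strategies on a single request (the hole sizes and helper functions below are illustrative assumptions):

```python
def first_fit(holes, size):
    return next((h for h in holes if h >= size), None)

def best_fit(holes, size):
    candidates = [h for h in holes if h >= size]
    return min(candidates) if candidates else None

def worst_fit(holes, size):
    candidates = [h for h in holes if h >= size]
    return max(candidates) if candidates else None

holes, request = [200, 90, 120, 300], 80
for strategy in (first_fit, best_fit, worst_fit):
    chosen = strategy(holes, request)
    print(strategy.__name__, "picks", chosen, "-> new hole:", chosen - request)
# first_fit picks 200 -> new hole: 120
# best_fit picks 90 -> new hole: 10
# worst_fit picks 300 -> new hole: 220
```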

Consider the following statements:

S1: A small page size causes large page tables.

S2: Internal fragmentation is increased with small pages.

S3: I/O transfers are more efficient with large pages.

Which of the following is true?

  1. S1 and S2 are true  
  2. S1 is true and S2 is false 
  3. S2 and S3 are true 
  4. S1 is true S3 is false

Answer (Detailed Solution Below)

Option 2 : S1 is true and S2 is false 

Memory Management Question 14 Detailed Solution


Concept:

Paging is a memory management scheme that eliminates external fragmentation. The size of the page table depends on the number of entries in the table and the number of bytes stored per entry.

Explanation:

S1: A small page size causes large page tables.

This statement is correct. A smaller page size means more pages per process, which in turn requires a larger page table.

S2: Internal fragmentation is increased with small pages.

This statement is incorrect. Internal fragmentation is the space wasted when a process does not completely fill its last page; at most one page per process is wasted. With smaller pages, this wasted space is smaller on average, so internal fragmentation decreases rather than increases.

S3: I/O transfers are more efficient with large pages.

This statement is correct. I/O transfers between memory and disk are performed in page-sized units; with larger pages, more data moves per transfer and the per-transfer overhead is amortised over more bytes, so I/O is more efficient.
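
A tiny illustration of the S1/S2 trade-off (the process size and entry size are arbitrary assumptions):

```python
process_size, entry_size = 1_000_000, 4   # bytes

for page_size in (512, 4096, 65536):
    pages = -(-process_size // page_size)             # ceiling division
    page_table_bytes = pages * entry_size             # S1: smaller pages -> bigger page table
    internal_frag = pages * page_size - process_size  # S2: smaller pages -> less wasted space
    print(page_size, pages, page_table_bytes, internal_frag)
# 512   -> 1954 pages, 7816-byte table,   448 bytes wasted
# 4096  ->  245 pages,  980-byte table,  3520 bytes wasted
# 65536 ->   16 pages,   64-byte table, 48576 bytes wasted
```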

Page information in memory is also called as Page Table. The essential contents in each entry of a page table is/are.

  1. Page Access information
  2. Virtual Page number
  3. Page Frame number
  4. Both virtual page number and Page Frame Number

Answer (Detailed Solution Below)

Option 3 : Page Frame number

Memory Management Question 15 Detailed Solution


The essential content in each entry of a page table is the page frame number.

Explanation:

In paging, physical memory is divided into fixed-size blocks called page frames and logical memory is divided into fixed-size blocks called pages which are of the same size as that of frames. When a process is to be executed, its pages can be loaded into any unallocated frames from the disk.

In paging, mapping of logical addresses to physical addresses is performed at the page level.

  • When CPU generates a logical address, it is divided into two parts: page number and offset
  • Page size is always in the power of 2.
  • Address translation is performed using the page table (mapping table).
  • It stores the frame number allocated to each page and the page number is used as an index to the page table.

 

When the CPU generates a logical address, that address is sent to the MMU (memory management unit). The MMU uses the page number to find the corresponding frame number in the page table. The frame number is then attached to the high-order end of the page offset to form the physical address, which is sent to memory.
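
A minimal sketch of that translation step, assuming a 1 kB page size and a made-up page table (illustrative only, not the original diagram):

```python
PAGE_SIZE = 1024           # assumed page size in bytes
OFFSET_BITS = 10           # log2(PAGE_SIZE)

page_table = {0: 5, 1: 2, 2: 7}   # page number -> frame number (made-up mapping)

def translate(logical_address):
    page_number = logical_address >> OFFSET_BITS     # high-order bits of the address
    offset = logical_address & (PAGE_SIZE - 1)       # low-order bits of the address
    frame_number = page_table[page_number]           # page-table lookup done by the MMU
    return (frame_number << OFFSET_BITS) | offset    # frame number + offset = physical address

print(hex(translate(0x0412)))   # page 1, offset 0x12 -> frame 2 -> 0x812
```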

