Paging


An important feature of virtual memory is paging. Paging allocates disk storage for data that does not fit into physical memory, in fixed-size chunks called pages, and makes it possible to evict inactive pages from physical memory (RAM). The details of paging vary with the machine architecture, and it is usually built into the kernel of the operating system. The address space of an application is broken up into fixed-size chunks called pages; physical memory is divided into page frames of the same size, and each page is stored in a page frame.

Paging is triggered when a process accesses a page that is not currently mapped into physical memory; this event is called a page fault. Page faults are handled entirely by the operating system and are invisible to the program.

When pages have been modified, a temporary storage area on disk must be provided to hold them; this area is called a paging file or swap space. Writing an inactive page, or a cluster of inactive pages, to disk is called a page out; reading a page back in when it is referenced again is called a page in. Paging works well because a process typically uses only a small proportion of its virtual memory actively at any one time; the set of pages a process is actively using is called its working set.

A virtual address is split into two parts: a page number and an offset within that page. When a piece of data is accessed at a given address, the system automatically extracts the page number and the offset, translates the page number to a physical page frame id, and accesses the data at that offset within the physical page frame.
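As a concrete illustration, here is a minimal C sketch of that split, assuming 4 KiB pages; the page size, address width, and example address are assumptions made purely for illustration:

/* Minimal sketch: splitting a virtual address into a page number and an
 * offset, assuming 4 KiB pages. */
#include <stdint.h>
#include <stdio.h>

#define PAGE_SIZE   4096u
#define PAGE_SHIFT  12                        /* log2(PAGE_SIZE) */
#define PAGE_MASK   (PAGE_SIZE - 1)

int main(void)
{
    uint32_t vaddr  = 0x00403a10;             /* example virtual address */
    uint32_t vpn    = vaddr >> PAGE_SHIFT;    /* virtual page number     */
    uint32_t offset = vaddr & PAGE_MASK;      /* offset within the page  */

    printf("address 0x%08x -> page %u, offset 0x%03x\n", vaddr, vpn, offset);
    return 0;
}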

[Figure: virtual memory pages mapped into page frames in main memory]

The illustration above shows an example of virtual memory being mapped into main memory.

The translation is performed using a page table, a linear array indexed by virtual page number whose entries hold the physical page frame that contains each page. A lookup extracts the page number and the offset, checks that the page number is within the address space of the process, looks up the page number in the page table, adds the offset to the resulting physical page frame address, and accesses that memory location. If the page containing the linear address is not currently mapped into physical memory, a page fault exception is generated. The exception invokes the operating system, which loads the page from disk into physical memory. Once the page has been loaded, the return from the exception handler causes the instruction that generated the exception to be restarted. The information the processor uses to map linear addresses into the physical address space, and to generate page fault exceptions, is held in page directories and page tables stored in memory.
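The following C sketch walks through those lookup steps against a flat page table; the names (pte_t, PTE_PRESENT, page_table) and sizes are illustrative, not any particular kernel's API:

/* Sketch of the lookup described above: a flat page table indexed by
 * virtual page number. */
#include <stdint.h>
#include <stdbool.h>

#define PAGE_SHIFT   12
#define PAGE_MASK    ((1u << PAGE_SHIFT) - 1)
#define PTE_PRESENT  0x1u                /* page is mapped in physical memory */
#define NUM_PAGES    1024u               /* size of this toy address space    */

typedef uint32_t pte_t;
static pte_t page_table[NUM_PAGES];      /* one entry per virtual page */

/* Translate a virtual address; returns false to signal a page fault. */
static bool translate(uint32_t vaddr, uint32_t *paddr)
{
    uint32_t vpn = vaddr >> PAGE_SHIFT;
    if (vpn >= NUM_PAGES)                /* outside the process address space  */
        return false;

    pte_t pte = page_table[vpn];
    if (!(pte & PTE_PRESENT))            /* not mapped -> page fault           */
        return false;                    /* OS would load the page, then retry */

    uint32_t pfn = pte >> PAGE_SHIFT;    /* physical page frame number         */
    *paddr = (pfn << PAGE_SHIFT) | (vaddr & PAGE_MASK);
    return true;
}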


The lookup is usually sped up with a cache, the translation lookaside buffer (TLB), which stores the most recent page lookup results. A TLB can be designed as fully associative, direct mapped, set associative, and so on, and like any cache it works well because address translations show strong locality.
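A direct-mapped TLB consulted before the page table walk might look like the sketch below; the sizes and the stub page_table_walk() helper are assumptions made only so the example is self-contained:

/* Toy direct-mapped TLB in front of a page-table walk. */
#include <stdint.h>
#include <stdbool.h>

#define TLB_SIZE 64                             /* number of cached translations */

struct tlb_entry {
    uint32_t vpn;                               /* virtual page number (tag) */
    uint32_t pfn;                               /* cached physical frame     */
    bool     valid;
};

static struct tlb_entry tlb[TLB_SIZE];

/* Stub page-table walk so the sketch compiles: identity-map every page. */
static bool page_table_walk(uint32_t vpn, uint32_t *pfn)
{
    *pfn = vpn;
    return true;
}

/* Look up a virtual page number; fill the entry from the page table on a miss. */
static bool tlb_translate(uint32_t vpn, uint32_t *pfn)
{
    struct tlb_entry *e = &tlb[vpn % TLB_SIZE];   /* direct-mapped index */

    if (e->valid && e->vpn == vpn) {              /* TLB hit */
        *pfn = e->pfn;
        return true;
    }

    uint32_t new_pfn;                             /* TLB miss: walk the table */
    if (!page_table_walk(vpn, &new_pfn))
        return false;                             /* page fault */

    e->vpn   = vpn;                               /* cache the translation */
    e->pfn   = new_pfn;
    e->valid = true;
    *pfn = new_pfn;
    return true;
}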

The physical memory reserved for the operating system itself and for its data structures is called wired memory. The physical memory managed through the paging mechanism is called the page pool. Whenever a virtual memory page is referenced but not yet backed by physical memory, a page is allocated from the page pool's free list and mapped to the virtual address. When physical pages are unmapped or freed, they are returned to the free list. A page that is referenced again can be reclaimed from the free list, provided the physical page has not yet been reused.

If the number of pages on the free list falls below a certain threshold, the operating system begins to scan for inactive pages to page out. A page is regarded as inactive if it has not been referenced for a certain period of time. Inactive pages are moved to the end of the free list, but if their contents have been modified they must first be paged out to disk. Paging activity stops as soon as the number of free pages rises back above the threshold. If the number of pages on the free list continues to fall, the operating system scans increasingly rapidly and regards pages as inactive more quickly. The scan rate, the number of pages scanned per second, is therefore a measure of memory pressure.
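A rough sketch of this policy is given below, assuming a simple clock-style scan over a fixed pool; all names, counters, and thresholds are illustrative rather than any real kernel's implementation:

/* Sketch: reclaim pages until the free list is back above the threshold. */
#include <stdbool.h>

#define POOL_PAGES     1024
#define FREE_THRESHOLD 256

struct page {
    bool in_use;
    bool referenced;                   /* accessed since the last scan             */
    bool dirty;                        /* modified; must be paged out before reuse */
};

static struct page pool[POOL_PAGES];
static int free_pages;
static int hand;                       /* clock-style scan position */

static void page_out(struct page *p)   /* stand-in for writing the page to swap */
{
    p->dirty = false;
}

static void scan_for_inactive_pages(void)
{
    while (free_pages < FREE_THRESHOLD) {
        struct page *p = &pool[hand];
        hand = (hand + 1) % POOL_PAGES;

        if (!p->in_use)
            continue;
        if (p->referenced) {           /* recently used: clear the bit, skip it */
            p->referenced = false;
            continue;
        }
        if (p->dirty)                  /* modified pages go to swap first */
            page_out(p);
        p->in_use = false;             /* reclaim onto the free list      */
        free_pages++;
    }
}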

Under severe memory pressure, physical memory can remain so dynamic that the operating system searches in vain for enough inactive pages. Some operating systems select low-priority processes and deactivate them completely to ensure that inactive pages can be found; others proactively swap entire processes out of physical memory.

Demand Paging

Demand paging does not involve preemptive loading: paging is invoked at the time of a data request, not before. With a demand pager, a program begins execution with none of its pages loaded in RAM; pages are copied from the executable file into RAM the first time the executing code references them, in response to page faults. Pages of the program that are never executed may never be loaded into memory at all.
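Demand paging can be observed from user space with mmap(), which records a file mapping without reading any data; each page is brought into RAM only when it is first touched, via a page fault. A small example (the checksum is only there to force the accesses):

/* Demand paging via mmap(): file pages are read only when first touched. */
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s <file>\n", argv[0]);
        return 1;
    }

    int fd = open(argv[1], O_RDONLY);
    struct stat st;
    if (fd < 0 || fstat(fd, &st) < 0 || st.st_size == 0) {
        perror("open/fstat");
        return 1;
    }

    /* mmap() only records the mapping; no file data is read yet. */
    char *data = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    if (data == MAP_FAILED) {
        perror("mmap");
        return 1;
    }

    /* The first access to each page triggers a page fault, and the kernel
     * reads that page of the file into memory on demand. */
    long sum = 0;
    for (off_t i = 0; i < st.st_size; i += 4096)
        sum += data[i];

    printf("touched %lld bytes, checksum %ld\n", (long long)st.st_size, sum);
    munmap(data, st.st_size);
    close(fd);
    return 0;
}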

Anticipatory Paging

Anticipatory paging, also called swap prefetch, preloads non-resident pages that are likely to be referenced in the near future, which reduces the number of page faults a process experiences. Very few operating systems use anticipatory paging.
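A related effect can be requested from user space on Linux with madvise(MADV_WILLNEED), which hints that a mapped range will be needed soon so the kernel can start reading it before it is actually touched. A short sketch; the file path is an arbitrary example:

/* Hinted prefetch of a file mapping with madvise(MADV_WILLNEED). */
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
    const char *path = "/etc/services";        /* any readable file will do */
    int fd = open(path, O_RDONLY);
    struct stat st;
    if (fd < 0 || fstat(fd, &st) < 0 || st.st_size == 0) {
        perror("open/fstat");
        return 1;
    }

    char *data = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    if (data == MAP_FAILED) {
        perror("mmap");
        return 1;
    }

    /* Hint that the whole mapping will be needed soon, so the kernel can
     * prefetch it and later accesses fault less often. */
    if (madvise(data, st.st_size, MADV_WILLNEED) != 0)
        perror("madvise");

    printf("first byte: %c\n", data[0]);       /* likely already resident */
    munmap(data, st.st_size);
    close(fd);
    return 0;
}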

In Linux, the role of the page cache is to speed up access to files on disk. Pages read from memory-mapped files are stored in the page cache. Each file in Linux is identified by a VFS inode data structure, which uniquely describes one and only one file, and the page cache index is derived from the file's VFS inode and the offset into the file. Whenever a page is read from a memory-mapped file, it is read through the page cache: if the page is present in the cache, a pointer to the mem_map_t data structure representing it is returned to the page fault handling code; otherwise the page must be fetched from the file system holding the image, so Linux allocates a physical page and reads the page in from the file on disk. As Linux uses memory it can start to run low on physical pages, in which case it reduces the size of the page cache.
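As a rough model of that lookup, the page cache can be pictured as a hash table keyed by the owning inode and the page-aligned file offset; the structure and function names below are illustrative, not the actual Linux kernel types:

/* Sketch: page cache keyed by (inode, page index within the file). */
#include <stdint.h>
#include <stddef.h>

#define PAGE_SHIFT    12
#define CACHE_BUCKETS 1024

struct cache_entry {
    uint64_t inode_id;               /* identifies the file (its VFS inode) */
    uint64_t index;                  /* page-sized offset within the file   */
    void    *page;                   /* cached page contents                */
    struct cache_entry *next;
};

static struct cache_entry *buckets[CACHE_BUCKETS];

static size_t bucket_of(uint64_t inode_id, uint64_t index)
{
    return (size_t)((inode_id * 31 + index) % CACHE_BUCKETS);
}

/* Return the cached page if present, or NULL so the caller can read the
 * page from disk, insert it into the cache, and retry. */
void *page_cache_lookup(uint64_t inode_id, uint64_t file_offset)
{
    uint64_t index = file_offset >> PAGE_SHIFT;
    struct cache_entry *e = buckets[bucket_of(inode_id, index)];

    for (; e != NULL; e = e->next)
        if (e->inode_id == inode_id && e->index == index)
            return e->page;          /* hit: no disk I/O needed          */

    return NULL;                     /* miss: fetch from the file system */
}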

Advantages of Paging

Paging has several advantages. In address translation, every task sees the same virtual address layout, and translation turns fragmented physical addresses into contiguous virtual addresses. It provides memory protection, so that buggy or malicious tasks cannot harm each other or the kernel, and shared memory between tasks, which gives a fast form of IPC and conserves memory when used for DLLs. It supports demand loading, which avoids a heavy load on the CPU when a task first starts running and conserves memory, and it enables memory-mapped files and virtual memory swapping, which let the system degrade gracefully when the memory required exceeds the RAM size. A process can run even when its virtual address space is larger than physical memory, the machine can be shared flexibly between processes whose combined address spaces exceed physical memory, and a wide range of user-level features are built on top of it.

Disadvantages of Paging

The disadvantages of paging are extra resource consumption: memory overhead for storing page tables, and the overhead of translation itself. In the worst case the page tables may take up a significant portion of virtual memory; the solution is either to page the page tables themselves or to move to a more complicated data structure for translations.
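As a rough worked example of that overhead: with a 32-bit virtual address space, 4 KiB pages, and 4-byte page table entries, a flat page table needs 2^32 / 2^12 = 2^20 entries, about 4 MiB per process. For a 64-bit address space a flat table is completely impractical, which is why multi-level or hashed page tables are used in practice.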
