Unit No: 3 Memory Management & Virtual Memory
Pavan R Jaiswal
Swapping
Demand paging
Memory management requirements
Memory partitioning
Paging, segmentation
Security issues
Hardware & control structures
Linux & Windows memory management
Android memory management
Memory management policies
◦ Allocating swap space
◦ Freeing swap space
◦ Swapping
◦ Demand paging
Primary memory is a precious resource that frequently cannot contain all active processes in the system
The memory management system decides which processes should reside (at least partially) in main memory
It monitors the amount of available primary memory and may periodically write processes to a secondary device, called the swap device, to provide more space in primary memory
At a later time, the kernel reads the data from the swap device back into main memory
Fig 1 Data structures of a process: the kernel process table and kernel region table; a per-process region table pointing to the text, data, and stack regions; the u area; and the file descriptor table
Swapping
◦ Easy to implement
◦ Less system overhead
Demand paging
◦ Greater flexibility
The swap device is a block device in a configurable section of a disk
The kernel allocates contiguous space on the swap device, without fragmentation
It maintains the free space of the swap device in an in-core table, called the map
The kernel treats each unit of the swap map as a group of disk blocks
As the kernel allocates and frees resources, it updates the map accordingly
Initially the map holds one entry (address, units): (1, 10000)
Allocate 100 units → map: (101, 9900)
Allocate 50 units → map: (151, 9850)
Allocate 100 units → map: (251, 9750)
Fig 2 Allocating swap space
Case 1: Freed resources fill a hole, but are not contiguous to any resources in the map
Map before: (251, 9750)
Free 50 units at address 101 → map: (101, 50), (251, 9750)
Fig 3 Freeing swap space
Case 2: Freed resources fill a hole, and immediately precede an entry in the map
Map before: (251, 9750)
Free 50 units at address 101 → map: (101, 50), (251, 9750)
Free 100 units at address 1 → the freed block merges with the (101, 50) entry → map: (1, 150), (251, 9750)
Case 3: Freed resources fill a hole, and completely fill the gap between entries in the map
Map before: (1, 150), (251, 9750)
Allocate 200 units → map: (1, 150), (451, 9550)
Free 300 units at address 151 → the freed block bridges both entries, giving a single entry → map: (1, 10000)
malloc(address_of_map, number_of_units)
    for (every map entry)
        if (current map entry can fit requested units)
            if (requested units == number of units in entry)
                delete entry from map
            else
                adjust start address of entry
            return original address of entry
    return -1
The kernel swaps a process out when it needs memory:
1. When fork() is called, to allocate space for the child process
2. When a call is made to increase the size of the process
3. When the process becomes larger through growth of its stack
4. When a previously swapped-out process wants to swap in but there is not enough memory
The kernel must gather the page addresses of the data in primary memory to be swapped out
The kernel copies the physical memory assigned to a process to the allocated space on the swap device
The mapping between physical memory and the swap device is kept in the page table entry
Fig 4 Mapping a process onto the swap device: the text, data, and stack regions of the process, scattered across physical addresses (e.g. 278K, 432K, 573K, 595K, 401K), are copied to contiguous space on the swap device
Fig 5 Swapping a process into memory: the regions are read back from the swap device into newly allocated (possibly different) physical addresses
There may not be enough memory when fork() is called
The child process is swapped out and marked "ready to run"
It is swapped in later when the kernel schedules it
The kernel reserves enough space on the swap device to contain the memory space of the process, including the newly requested space
Then it adjusts the address translation mapping of the process
Finally, it swaps the process out to the newly allocated space on the swap device
When the kernel later swaps the process back into memory, it allocates physical memory according to the new address translation map
Not all pages of a process reside in memory
Locality of reference
When a process accesses a page that is not part of its working set, it incurs a page fault
The kernel suspends the execution of the process until it reads the page into memory and makes it accessible to the process
Data structures for demand paging:
◦ Page table entries
◦ Disk block descriptors
◦ Page frame data table
◦ Swap-use table
Fig 6 PTE and DBD
The page table entry contains the physical address of the page and the following bits:
◦ Valid: whether the page contents are legal
◦ Reference: whether the page was referenced recently
◦ Modify: whether the page contents have been modified
◦ Copy on write: the kernel must create a new copy when a process modifies its contents (required for fork)
◦ Age: age of the page
◦ Protection: read/write permission
The disk block descriptor contains:
◦ Swap device number, as there may be several swap devices
◦ Block number of the block that contains the page
◦ Type
Basic requirements of memory management
Memory partitioning
Basic blocks of memory management
◦ Paging
◦ Segmentation
Page replacement algorithms
Memory is cheap today, and getting cheaper
◦ But applications demand more and more memory, so it is never enough!
Memory management involves swapping blocks of data from secondary storage
Memory I/O is slow compared to a CPU
Memory needs to be allocated to ensure a reasonable supply of ready processes to consume available processor time
Relocation
Protection
Sharing
Logical organisation
Physical organisation
The programmer does not know where the program will be placed in memory when it is executed
◦ It may be swapped to disk and return to main memory at a different location (relocated)
Memory references must be translated to the actual physical memory address
Memory Management Terms
◦ Frame: a fixed-length block of main memory
◦ Page: a fixed-length block of data in secondary memory (e.g. on disk)
◦ Segment: a variable-length block of data that resides in secondary memory
Fig 7 Addressing requirements of a process
Processes should not be able to reference memory locations of another process without permission
It is not possible to check absolute addresses at compile time
They must be checked at run time
Allow several processes to access the same portion of memory
It is better to allow each process access to the same copy of a program rather than having its own separate copy
Memory is organized linearly
Programs are written in modules
◦ Modules can be written and compiled independently
Different degrees of protection can be given to modules (read-only, execute-only)
Modules can be shared among processes
Segmentation helps here
We cannot leave the programmer with the responsibility to manage memory
Memory available for a program plus its data may be insufficient
◦ Overlaying allows various modules to be assigned the same region of memory, but is time consuming to program
The programmer does not know how much space will be available
Fixed partitioning
Dynamic partitioning
Simple paging
Simple segmentation
Virtual memory paging
Virtual memory segmentation
Equal-size partitions
◦ Any process whose size is less than or equal to the partition size can be loaded into an available partition
The operating system can swap a process out of a partition
◦ If none are in a ready or running state
A program may not fit in a partition
◦ The programmer must design the program with overlays
Main memory use is inefficient
◦ Any program, no matter how small, occupies an entire partition
◦ This results in internal fragmentation
Unequal-size partitions mitigate both problems
◦ but do not solve them completely
In Fig 8
◦ Programs up to 16M can be accommodated without overlays
◦ Smaller programs can be placed in smaller partitions, reducing internal fragmentation
Equal-size partitions
◦ Placement is trivial (no options)
Unequal-size partitions
◦ Can assign each process to the smallest partition within which it will fit
◦ A queue for each partition
◦ Processes are assigned so that memory wastage within a partition is minimized
Fig 8 Memory assignments for fixed partitioning
The number of active processes is limited by the system
◦ i.e. limited by the pre-determined number of partitions
A large number of very small processes will not use the space efficiently
◦ In either fixed or variable length partition methods
Partitions are of variable length and number
A process is allocated exactly as much memory as it requires
External fragmentation
◦ Memory external to all processes is fragmented
Can be resolved using compaction
◦ The OS moves processes so that they are contiguous
◦ Time consuming and wastes CPU time
Fig 9 Dynamic partitioning
The operating system must decide which free block to allocate to a process
Best-fit algorithm
◦ Chooses the block that is closest in size to the request
◦ Since the smallest adequate block is found, the smallest amount of fragmentation is left
◦ Memory compaction must be done more often
First-fit algorithm
◦ Scans memory from the beginning and chooses the first available block that is large enough
◦ Fastest
Next-fit algorithm
◦ Scans memory from the location of the last placement
◦ More often allocates a block of memory at the end of memory, where the largest block is found
◦ The largest block of memory is broken up into smaller blocks
◦ Compaction is required to obtain a large block at the end of memory
Fig 10 Example memory configuration before and after allocation of a 16 MB block
The entire space available is treated as a single block of size 2^U
If a request is of size s where 2^(U-1) < s <= 2^U
◦ the entire block is allocated
Otherwise the block is split into two equal buddies
◦ The process continues until the smallest block greater than or equal to s is generated
Fig 11 Example of buddy system
Fig 12 Tree representation of buddy system
When a program is loaded into memory, the actual (absolute) memory locations are determined
A process may occupy different partitions, which means different absolute memory locations during execution, because of
◦ Swapping
◦ Compaction
Logical
◦ A reference to a memory location independent of the current assignment of data to memory
Relative
◦ An address expressed as a location relative to some known point
Physical or Absolute
◦ The absolute address or actual location in main memory
Fig 13 Hardware support for relocation
Base register
◦ Starting address of the process
Bounds register
◦ Ending location of the process
These values are set when the process is loaded or when the process is swapped in
The value of the base register is added to a relative address to produce an absolute address
The resulting address is compared with the value in the bounds register
If the address is not within bounds, an interrupt is generated to the operating system
Real memory
◦ Main memory, the actual RAM
Virtual memory
◦ Memory on disk
◦ Allows for effective multiprogramming and relieves the user of the tight constraints of main memory
Thrashing is a state in which the system spends most of its time swapping pieces rather than executing instructions
To avoid this, the operating system tries to guess which pieces are least likely to be used in the near future
The guess is based on recent history
Program and data references within a process tend to cluster
Only a few pieces of a process will be needed over a short period of time
Therefore it is possible to make intelligent guesses about which pieces will be needed in the future
This suggests that virtual memory may work efficiently
Hardware must support paging and segmentation
The operating system must be able to manage the movement of pages and/or segments between secondary memory and main memory
Each process has its own page table
Each page table entry contains the frame number of the corresponding page in main memory
Two extra bits are needed to indicate:
◦ whether the page is in main memory or not
◦ whether the contents of the page have been altered since it was last loaded
Fig 14 Address translation in paging
Page tables are also stored in virtual memory
When a process is running, part of its page table is in main memory
Fig 15 Two-level hierarchical page table
Fig 16 Address translation in a two-level paging system
A drawback of the page tables just discussed is that their size is proportional to that of the virtual address space
An alternative is the inverted page table
Used in the PowerPC, UltraSPARC, and IA-64 architectures
The page number portion of a virtual address is mapped into a hash value
The hash value points into the inverted page table
A fixed proportion of real memory is required for the tables, regardless of the number of processes
Each entry in the inverted page table includes:
Page number
Process identifier
◦ The process that owns this page
Control bits
◦ Flags such as valid, referenced, etc.
Chain pointer
◦ The index value of the next entry in the chain
Fig 17 Inverted page table structure
Each virtual memory reference can cause two physical memory accesses
◦ One to fetch the page table entry
◦ One to fetch the data
To overcome this problem, a high-speed cache is set up for page table entries
◦ Called a Translation Lookaside Buffer (TLB)
◦ Contains the page table entries that have been most recently used
Given a virtual address,
◦ the processor examines the TLB
If the page table entry is present (TLB hit),
◦ the frame number is retrieved and the real address is formed
If the page table entry is not found in the TLB (TLB miss),
◦ the page number is used to index the process page table
The kernel first checks whether the page is already in main memory
◦ If not, a page fault is issued
The TLB is updated to include the new page entry
Fig 18 Use of TLB
Fig 19 Operation of paging and TLB
As the TLB only contains some of the page table entries, we cannot simply index into the TLB based on the page number
◦ Each TLB entry must include the page number as well as the complete page table entry
The processor is able to simultaneously query numerous TLB entries to determine if there is a page number match
Fig 20 Direct vs associative lookup for page table entries
The smaller the page size, the less internal fragmentation
But a smaller page size means more pages are required per process
◦ More pages per process means larger page tables
Larger page tables mean a larger portion of the page tables resides in virtual memory
With a small page size, a large number of pages will be found in main memory
As time goes on during execution, the pages in memory will all contain portions of the process near recent references, so page faults are low
An increased page size causes pages to contain locations further from any recent reference, so page faults rise
Segmentation allows the programmer to view memory as consisting of multiple address spaces or segments
◦ Segments may be of unequal, dynamic size
◦ Simplifies handling of growing data structures
◦ Allows programs to be altered and recompiled independently
◦ Lends itself to sharing data among processes
◦ Lends itself to protection
Each entry contains the starting address of the corresponding segment in main memory
Each entry contains the length of the segment
A bit is needed to determine whether the segment is already in main memory
Another bit is needed to determine whether the segment has been modified since it was loaded into main memory
Fig 21 Address translation in segmentation
Paging is transparent to the programmer
Segmentation is visible to the programmer
Each segment is broken into fixed-size pages
Fig 22 Address translation in combined segmentation / paging
Segmentation lends itself to the implementation of protection and sharing policies
As each entry has a base address and a length, inadvertent memory access can be controlled
Sharing can be achieved by multiple processes referencing the same segment
Fig 23 Protection relationship between segments
The placement policy determines where in real memory a process piece is to reside
Important in a pure segmentation system
Irrelevant for paging, or paging combined with segmentation, because the hardware performs the address translation
When all of the frames in main memory are occupied and it is necessary to bring in a new page, the replacement policy determines which page currently in memory is to be replaced
Which page is replaced?
The page removed should be the page least likely to be referenced in the near future
◦ How is that determined?
◦ The principle of locality again
Most policies predict future behavior on the basis of past behavior
There are certain basic algorithms used for the selection of a page to replace:
◦ Optimal
◦ Least recently used (LRU)
◦ First-in-first-out (FIFO)
◦ Clock
An example of the implementation of these policies uses the page address stream
◦ 2, 3, 2, 1, 5, 2, 4, 5, 3, 2, 5, 2
which means that the first page referenced is 2,
◦ the second page referenced is 3,
◦ and so on
Selects for replacement the page for which the time to the next reference is the longest
But it is impossible to have perfect knowledge of future events
The optimal policy produces three page faults after the frame allocation has been filled
Replaces the page that has not been referenced for the longest time
By the principle of locality, this should be the page least likely to be referenced in the near future
Difficult to implement
◦ One approach is to tag each page with the time of its last reference
◦ This requires a great deal of overhead
The LRU policy does nearly as well as the optimal policy
◦ In this example, there are four page faults
Treats the page frames allocated to a process as a circular buffer
Pages are removed in round-robin style
◦ The simplest replacement policy to implement
The page that has been in memory the longest is replaced
◦ But such a page may be needed again very soon if it hasn't truly fallen out of use
The FIFO policy results in six page faults
◦ Note that LRU recognizes that pages 2 and 5 are referenced more frequently than other pages, whereas FIFO does not
Linux memory management
Windows memory management
(referred from Stallings)
Fig 24 x86 segmentation
Fig 25 x86 paging: the cr3 register points to the page directory; a linear address is translated through the page directory and page table to a physical address
Fig 26 x86 page directory / page table entry
Segmentation is used in a limited way
All kernel and user segments overlap, covering the full 4GB linear space
Segments are used as privilege identifiers
◦ Kernel: RPL = 0
◦ User: RPL = 3
Fig 27 Linux segmentation
Linux uses 3-level paging
◦ Adds a page middle directory (PMD)
Applied to the x86 architecture
◦ The size of the PMD is defined as 1
◦ PMD translation is an identity mapping: Output of PMD translation = Input of PMD translation = Output of PGD translation
Fig 28 Paging in Linux
pgd_t, pmd_t, and pte_t are 32-bit data types (unsigned long) for entries
The functions and macros for creating and manipulating entries are defined in:
◦ include/asm-i386/page.h
◦ include/asm-i386/pgtable.h
◦ include/asm-i386/pgtable-2level.h
◦ include/asm-i386/pgtable-3level.h
◦ include/asm-i386/pgalloc.h
◦ include/asm-i386/pgalloc-2level.h
◦ include/asm-i386/pgalloc-3level.h
◦ include/linux/mm.h
Physical Address Extension (PAE)
Fig 29 Linux memory mapping
Fig 30 Linux kernel image (x86)
Overview of Linux memory management: the process address space (user space) and the kernel's linear address space; the page fault handler; the slab allocator and noncontiguous memory management in kernel space; page allocation through the NUMA-aware buddy system, which manages zones of page frames in physical memory
The kernel keeps track of the current status of each page frame
The struct page is the descriptor of each frame
All the page frame descriptors in the system are kept in an array called mem_map (#if !NUMA)
/include/linux/mm.h
struct page {
    unsigned long flags;        /* status of the page frame */
    atomic_t count;             /* usage count */
    struct list_head list;
    ...
    unsigned long index;
    struct list_head lru;       /* page-out list */
    union {
        struct pte_chain *chain;    /* PTE chain for swapping */
        pte_addr_t direct;
    } pte;
    unsigned long private;
    void *virtual;
};
/include/linux/page-flags.h
#define PG_locked      0    /* Page is locked. Don't touch. */
#define PG_error       1
#define PG_referenced  2
#define PG_uptodate    3
#define PG_dirty       4
#define PG_lru         5
#define PG_active      6
#define PG_slab        7
#define PG_highmem     8
#define PG_checked     9    /* kill me in 2.5.. */
...
#define PG_reclaim     18   /* To be reclaimed asap */
#define PG_compound    19
The Windows virtual memory manager controls how memory is allocated and how paging is performed
Designed to operate over a variety of platforms
◦ Uses page sizes ranging from 4 Kbytes to 64 Kbytes
On 32-bit platforms each user process sees a separate 32-bit address space
◦ Allowing 4G of virtual memory per process
Some is reserved for the OS
◦ Typically each user process has 2G of available virtual address space
◦ With all processes sharing the same 2G system space
Fig 31 Windows default 32-bit virtual address space
On creation, a process can make use of the entire user space of almost 2 Gbytes
This space is divided into fixed-size pages managed in contiguous regions allocated on 64-Kbyte boundaries
Regions may be in one of three states
◦ Available
◦ Reserved
◦ Committed
Windows uses "variable allocation, local scope"
When activated, a process is assigned data structures to manage its working set
Working sets of active processes are adjusted depending on the availability of main memory
1. Define memory management in an operating system.
2. What is the need for memory management?
3. Write a short note on fork swap and expansion swap.
4. Explain the memory management policy of swapping in detail.
5. Write in short: allocating and freeing swap space.
6. Compare fixed partitioning with dynamic partitioning.
7. Explain with a neat diagram address translation in paging.
8. Explain with a neat diagram address translation in segmentation.
9. Explain with a neat diagram Linux memory management.
10. Explain with a neat diagram Windows 8 memory management.
11. Explain with a neat diagram Android memory management.
12. Explain with example the data structures used for demand paging.
13. Compare and contrast paging with segmentation.
14. Explain in detail memory management requirements.
15. Explain with example any two page replacement algorithms: FIFO, Optimal, LRU. For page address stream {2, 3, 2, 1, 5, 2, 4, 5, 3, 2, 5, 2} and frame size 3, identify the page faults that occur.
[1] Maurice J. Bach, "The Design of the UNIX Operating System", PHI, ISBN 978-81-203-0516-8
[2] William Stallings, "Operating Systems: Internals and Design Principles", 6/E, Prentice Hall
[3] http://www.tutorialspoint.com/operating_system/os_memory_management.htm
Thank You