Lab 7: Page tables
Advanced Operating Systems

Zubair Nabi
zubair.nabi@itu.edu.pk

March 27, 2013
Introduction

Page tables allow the OS to:

• Multiplex the address spaces of different processes onto a single
physical memory space
• Protect the memories of different processes
• Map the same kernel memory in several address spaces
• Map the same user memory more than once in one address
space (user pages are also mapped into the kernel’s physical
view of memory)
Page table structure

• An x86 page table contains 2^20 page table entries (PTEs)
• Each PTE contains a 20-bit physical page number (PPN) and
some flags
• The paging hardware translates virtual addresses to physical
ones by:
1 Using the top 20 bits of the virtual address to index into the page
table to find a PTE
2 Replacing the top 20 bits with the PPN in the PTE
3 Copying the lower 12 bits verbatim from the virtual to the physical
address
• Translation takes place at the granularity of 2^12-byte (4KB)
chunks, called pages
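The three steps above can be modelled in a few lines of C. This is a simplified, flat (single-level) model of what the paging hardware does, not xv6 code; the translate function and its flat pagetable array are illustrative only:

```c
#include <stdint.h>

typedef uint32_t pte_t;

/* Model of the hardware translation: index the page table with the
 * top 20 bits of va, splice in the PPN from the PTE, and copy the
 * low 12 bits (the offset within the 4KB page) verbatim. */
uint32_t translate(const pte_t *pagetable, uint32_t va)
{
    uint32_t index  = va >> 12;       /* top 20 bits select a PTE */
    uint32_t offset = va & 0xFFF;     /* low 12 bits pass through */
    uint32_t ppn    = pagetable[index] & ~(uint32_t)0xFFF;
    return ppn | offset;              /* physical address */
}
```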
Page table structure (2)

• A page table is stored in physical memory as a two-level tree
• Root of the tree: a 4KB page directory
• Each page directory entry (PDE): points to a page table page
• Each page table page: 1024 32-bit PTEs
• 1024 x 1024 = 2^20 PTEs in total
Translation

• Use top 10 bits of the virtual address to index the page directory
• If the PDE is present, use next 10 bits to index the page table
page and obtain a PTE
• If either the PDE or the PTE is missing, raise a fault
• This two-level structure increases efficiency
• How?
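The two 10-bit indices fall out of simple bit arithmetic; these macros mirror xv6's PDX and PTX definitions in mmu.h:

```c
#include <stdint.h>

/* Page directory index: bits 31..22 of the virtual address. */
#define PDX(va) (((uint32_t)(va) >> 22) & 0x3FF)
/* Page table index: bits 21..12. */
#define PTX(va) (((uint32_t)(va) >> 12) & 0x3FF)
```

Hint for the question above: with two levels, a page table page only needs to exist for each 4MB region that is actually mapped, so a sparse address space does not pay for all 2^20 PTEs up front.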
Permissions

Each PTE contains associated flags:

Flag     Description
PTE_P    Whether the page is present
PTE_W    Whether the page can be written to
PTE_U    Whether user programs can access the page
PTE_PWT  Whether write-through or write-back
PTE_PCD  Whether caching is disabled
PTE_A    Whether the page has been accessed
PTE_D    Whether the page is dirty
PTE_PS   Page size
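The flags live in the low 12 bits of each PTE, with the PPN in the top 20. The bit values below are the standard x86 ones, matching xv6's mmu.h; the MAKE_PTE and PTE_FLAGS helpers are illustrative, not xv6 names:

```c
#include <stdint.h>

/* Standard x86 PTE flag bits (as in xv6's mmu.h). */
#define PTE_P   0x001  /* Present */
#define PTE_W   0x002  /* Writeable */
#define PTE_U   0x004  /* User-accessible */
#define PTE_PWT 0x008  /* Write-through */
#define PTE_PCD 0x010  /* Cache disabled */
#define PTE_A   0x020  /* Accessed */
#define PTE_D   0x040  /* Dirty */
#define PTE_PS  0x080  /* 4MB page size */

typedef uint32_t pte_t;

/* Build a PTE from a page-aligned physical address plus flags,
 * and pull the flags back out. */
#define MAKE_PTE(pa, flags) ((((uint32_t)(pa)) & ~0xFFFu) | (flags))
#define PTE_FLAGS(pte)      ((pte) & 0xFFFu)
```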
Process address space

• Each process has a private address space which is switched on a
context switch (via switchuvm)
• Each address space starts at 0 and goes up to KERNBASE,
allowing 2GB of space (specific to xv6)
• Each time a process requests more memory, the kernel:
1 Finds free physical pages
2 Adds PTEs that point to these physical pages in the process’ page
table
3 Sets PTE_U, PTE_W, and PTE_P
Process address space (2)

Each process’ address space also contains mappings (above
KERNBASE) for the kernel to run. Specifically:

• KERNBASE:KERNBASE+PHYSTOP is mapped to 0:PHYSTOP
• The kernel can use its own instructions and data
• The kernel can directly write to physical memory (for instance,
when creating page table pages)
• A shortcoming of this approach is that the kernel can only make
use of 2GB of memory
• PTE_U is not set for all entries above KERNBASE
Example: Creating an address space for main

• main makes a call to kvmalloc
• kvmalloc creates a page table with kernel mappings above
KERNBASE and switches to it

void kvmalloc(void)
{
  kpgdir = setupkvm();
  switchkvm();
}
setupkvm

1 Allocates a page of memory to hold the page directory
2 Calls mappages to install kernel mappings (kmap):
• Instructions and data
• Physical memory up to PHYSTOP
• Memory ranges for I/O devices

Does not install mappings for user memory
Code: kmap

static struct kmap {
  void *virt;
  uint phys_start;
  uint phys_end;
  int perm;
} kmap[] = {
  { (void*)KERNBASE, 0,             EXTMEM,    PTE_W }, // I/O space
  { (void*)KERNLINK, V2P(KERNLINK), V2P(data), 0     }, // kern text
  { (void*)data,     V2P(data),     PHYSTOP,   PTE_W }, // kern data
  { (void*)DEVSPACE, DEVSPACE,      0,         PTE_W }, // more devices
};
Code: setupkvm

pde_t* setupkvm(void)
{
  pde_t *pgdir;
  struct kmap *k;

  if((pgdir = (pde_t*)kalloc()) == 0)
    return 0;
  memset(pgdir, 0, PGSIZE);
  for(k = kmap; k < &kmap[NELEM(kmap)]; k++)
    if(mappages(pgdir, k->virt, k->phys_end - k->phys_start,
                (uint)k->phys_start, k->perm) < 0)
      return 0;
  return pgdir;
}
mappages

• Installs virtual to physical mappings for a range of addresses
• For each virtual address:
1 Calls walkpgdir to find the address of the PTE for that address
2 Initializes the PTE with the relevant PPN and the desired
permissions
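xv6's real mappages loops over the range one page at a time, calling walkpgdir for each page. The sketch below keeps that loop but substitutes a flat (single-level) table so it runs standalone; mappages_flat is a hypothetical name, not the xv6 function:

```c
#include <stdint.h>

#define PGSIZE 4096
#define PTE_P  0x001

typedef uint32_t pte_t;

/* Map virtual range [va, va+size) to physical [pa, ...), one page at
 * a time.  Refuses to remap an already-present PTE, as xv6 does. */
int mappages_flat(pte_t *table, uint32_t va, uint32_t size,
                  uint32_t pa, int perm)
{
    uint32_t a    = va & ~(uint32_t)(PGSIZE - 1);          /* round down */
    uint32_t last = (va + size - 1) & ~(uint32_t)(PGSIZE - 1);

    for (;;) {
        pte_t *pte = &table[a >> 12];   /* real xv6: walkpgdir(pgdir, a, 1) */
        if (*pte & PTE_P)
            return -1;                  /* already mapped: error */
        *pte = (pa & ~(uint32_t)0xFFF) | perm | PTE_P;
        if (a == last)
            break;
        a  += PGSIZE;
        pa += PGSIZE;
    }
    return 0;
}
```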
walkpgdir

1 Uses the upper 10 bits of the virtual address to find the PDE
2 Uses the next 10 bits to find the PTE
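A host-runnable sketch of the two-step walk. To keep it self-contained, entries here are pointer-sized and a PDE stores the page table page's address directly; real xv6 stores a physical address in the PDE and converts it with P2V, and allocates page table pages with kalloc rather than aligned_alloc:

```c
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

#define PTE_P   0x001
#define PDX(va) (((uint32_t)(va) >> 22) & 0x3FF)
#define PTX(va) (((uint32_t)(va) >> 12) & 0x3FF)

typedef uintptr_t pte_t;
typedef uintptr_t pde_t;

/* Return a pointer to the PTE for va, allocating a page table page
 * on demand when alloc is set (as xv6's walkpgdir does). */
pte_t *walkpgdir(pde_t *pgdir, uint32_t va, int alloc)
{
    pde_t *pde = &pgdir[PDX(va)];   /* step 1: top 10 bits pick the PDE */
    pte_t *pgtab;

    if (*pde & PTE_P) {
        pgtab = (pte_t *)(*pde & ~(uintptr_t)0xFFF);
    } else {
        if (!alloc)
            return NULL;
        /* Page-aligned so the low 12 bits are free for flags. */
        pgtab = aligned_alloc(4096, 1024 * sizeof(pte_t));
        if (pgtab == NULL)
            return NULL;
        memset(pgtab, 0, 1024 * sizeof(pte_t));
        *pde = (uintptr_t)pgtab | PTE_P;
    }
    return &pgtab[PTX(va)];         /* step 2: next 10 bits pick the PTE */
}
```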
Physical memory allocation

• Physical memory between the end of the kernel and PHYSTOP is
allocated on the fly
• Free pages are maintained through a linked list struct run
*freelist protected by a spinlock
1 Allocation: Remove a page from the list: kalloc()
2 Deallocation: Add the page to the list: kfree()

struct {
  struct spinlock lock;
  int use_lock;
  struct run *freelist;
} kmem;
exec

• Creates the user part of an address space from the program
binary, in Executable and Linkable Format (ELF)
• Initializes instructions, data, and stack
Today’s task

• Most operating systems implement “anticipatory paging” in which
on a page fault, the next few consecutive pages are also loaded
to preemptively reduce page faults
• Chalk out a design to implement this strategy in xv6
Reading(s)

• Chapter 2, “Page tables” from “xv6: a simple, Unix-like teaching
operating system”

SQL Server 2014 In-Memory OLTPSQL Server 2014 In-Memory OLTP
SQL Server 2014 In-Memory OLTP
Tony Rogerson
 
Operating system 35 paging
Operating system 35 pagingOperating system 35 paging
Operating system 35 paging
Vaibhav Khanna
 

Similar to AOS Lab 7: Page tables (20)

Ppt
PptPpt
Ppt
 
02-OS-review.pptx
02-OS-review.pptx02-OS-review.pptx
02-OS-review.pptx
 
Segmentation with paging methods and techniques
Segmentation with paging methods and techniquesSegmentation with paging methods and techniques
Segmentation with paging methods and techniques
 
Os4
Os4Os4
Os4
 
Os4
Os4Os4
Os4
 
Memory Management Strategies - IV.pdf
Memory Management Strategies - IV.pdfMemory Management Strategies - IV.pdf
Memory Management Strategies - IV.pdf
 
Virtual memory translation.pptx
Virtual memory translation.pptxVirtual memory translation.pptx
Virtual memory translation.pptx
 
Implementation of page table
Implementation of page tableImplementation of page table
Implementation of page table
 
address-translation-mechanism-of-80386 (1).ppt
address-translation-mechanism-of-80386 (1).pptaddress-translation-mechanism-of-80386 (1).ppt
address-translation-mechanism-of-80386 (1).ppt
 
Linux Kernel Booting Process (2) - For NLKB
Linux Kernel Booting Process (2) - For NLKBLinux Kernel Booting Process (2) - For NLKB
Linux Kernel Booting Process (2) - For NLKB
 
Memory Management Strategies - III.pdf
Memory Management Strategies - III.pdfMemory Management Strategies - III.pdf
Memory Management Strategies - III.pdf
 
Structure of the page table
Structure of the page tableStructure of the page table
Structure of the page table
 
Memory map
Memory mapMemory map
Memory map
 
Memory management in sql server
Memory management in sql serverMemory management in sql server
Memory management in sql server
 
Lec10 Computer Architecture by Hsien-Hsin Sean Lee Georgia Tech -- Memory part2
Lec10 Computer Architecture by Hsien-Hsin Sean Lee Georgia Tech -- Memory part2Lec10 Computer Architecture by Hsien-Hsin Sean Lee Georgia Tech -- Memory part2
Lec10 Computer Architecture by Hsien-Hsin Sean Lee Georgia Tech -- Memory part2
 
cPanelCon 2015: InnoDB Alchemy
cPanelCon 2015: InnoDB AlchemycPanelCon 2015: InnoDB Alchemy
cPanelCon 2015: InnoDB Alchemy
 
AltaVista Search Engine Architecture
AltaVista Search Engine ArchitectureAltaVista Search Engine Architecture
AltaVista Search Engine Architecture
 
Main Memory Management in Operating System
Main Memory Management in Operating SystemMain Memory Management in Operating System
Main Memory Management in Operating System
 
SQL Server 2014 In-Memory OLTP
SQL Server 2014 In-Memory OLTPSQL Server 2014 In-Memory OLTP
SQL Server 2014 In-Memory OLTP
 
Operating system 35 paging
Operating system 35 pagingOperating system 35 paging
Operating system 35 paging
 

More from Zubair Nabi

Lab 5: Interconnecting a Datacenter using Mininet
Lab 5: Interconnecting a Datacenter using MininetLab 5: Interconnecting a Datacenter using Mininet
Lab 5: Interconnecting a Datacenter using Mininet
Zubair Nabi
 
Topic 12: NoSQL in Action
Topic 12: NoSQL in ActionTopic 12: NoSQL in Action
Topic 12: NoSQL in Action
Zubair Nabi
 
Lab 4: Interfacing with Cassandra
Lab 4: Interfacing with CassandraLab 4: Interfacing with Cassandra
Lab 4: Interfacing with Cassandra
Zubair Nabi
 
Topic 10: Taxonomy of Data and Storage
Topic 10: Taxonomy of Data and StorageTopic 10: Taxonomy of Data and Storage
Topic 10: Taxonomy of Data and Storage
Zubair Nabi
 
Topic 11: Google Filesystem
Topic 11: Google FilesystemTopic 11: Google Filesystem
Topic 11: Google Filesystem
Zubair Nabi
 
Lab 3: Writing a Naiad Application
Lab 3: Writing a Naiad ApplicationLab 3: Writing a Naiad Application
Lab 3: Writing a Naiad Application
Zubair Nabi
 
Topic 9: MR+
Topic 9: MR+Topic 9: MR+
Topic 9: MR+
Zubair Nabi
 
Topic 8: Enhancements and Alternative Architectures
Topic 8: Enhancements and Alternative ArchitecturesTopic 8: Enhancements and Alternative Architectures
Topic 8: Enhancements and Alternative Architectures
Zubair Nabi
 
Topic 7: Shortcomings in the MapReduce Paradigm
Topic 7: Shortcomings in the MapReduce ParadigmTopic 7: Shortcomings in the MapReduce Paradigm
Topic 7: Shortcomings in the MapReduce Paradigm
Zubair Nabi
 
Lab 1: Introduction to Amazon EC2 and MPI
Lab 1: Introduction to Amazon EC2 and MPILab 1: Introduction to Amazon EC2 and MPI
Lab 1: Introduction to Amazon EC2 and MPI
Zubair Nabi
 
Topic 6: MapReduce Applications
Topic 6: MapReduce ApplicationsTopic 6: MapReduce Applications
Topic 6: MapReduce Applications
Zubair Nabi
 

More from Zubair Nabi (11)

Lab 5: Interconnecting a Datacenter using Mininet
Lab 5: Interconnecting a Datacenter using MininetLab 5: Interconnecting a Datacenter using Mininet
Lab 5: Interconnecting a Datacenter using Mininet
 
Topic 12: NoSQL in Action
Topic 12: NoSQL in ActionTopic 12: NoSQL in Action
Topic 12: NoSQL in Action
 
Lab 4: Interfacing with Cassandra
Lab 4: Interfacing with CassandraLab 4: Interfacing with Cassandra
Lab 4: Interfacing with Cassandra
 
Topic 10: Taxonomy of Data and Storage
Topic 10: Taxonomy of Data and StorageTopic 10: Taxonomy of Data and Storage
Topic 10: Taxonomy of Data and Storage
 
Topic 11: Google Filesystem
Topic 11: Google FilesystemTopic 11: Google Filesystem
Topic 11: Google Filesystem
 
Lab 3: Writing a Naiad Application
Lab 3: Writing a Naiad ApplicationLab 3: Writing a Naiad Application
Lab 3: Writing a Naiad Application
 
Topic 9: MR+
Topic 9: MR+Topic 9: MR+
Topic 9: MR+
 
Topic 8: Enhancements and Alternative Architectures
Topic 8: Enhancements and Alternative ArchitecturesTopic 8: Enhancements and Alternative Architectures
Topic 8: Enhancements and Alternative Architectures
 
Topic 7: Shortcomings in the MapReduce Paradigm
Topic 7: Shortcomings in the MapReduce ParadigmTopic 7: Shortcomings in the MapReduce Paradigm
Topic 7: Shortcomings in the MapReduce Paradigm
 
Lab 1: Introduction to Amazon EC2 and MPI
Lab 1: Introduction to Amazon EC2 and MPILab 1: Introduction to Amazon EC2 and MPI
Lab 1: Introduction to Amazon EC2 and MPI
 
Topic 6: MapReduce Applications
Topic 6: MapReduce ApplicationsTopic 6: MapReduce Applications
Topic 6: MapReduce Applications
 

Recently uploaded

Leading Change strategies and insights for effective change management pdf 1.pdf
Leading Change strategies and insights for effective change management pdf 1.pdfLeading Change strategies and insights for effective change management pdf 1.pdf
Leading Change strategies and insights for effective change management pdf 1.pdf
OnBoard
 
The Metaverse and AI: how can decision-makers harness the Metaverse for their...
The Metaverse and AI: how can decision-makers harness the Metaverse for their...The Metaverse and AI: how can decision-makers harness the Metaverse for their...
The Metaverse and AI: how can decision-makers harness the Metaverse for their...
Jen Stirrup
 
Video Streaming: Then, Now, and in the Future
Video Streaming: Then, Now, and in the FutureVideo Streaming: Then, Now, and in the Future
Video Streaming: Then, Now, and in the Future
Alpen-Adria-Universität
 
Climate Impact of Software Testing at Nordic Testing Days
Climate Impact of Software Testing at Nordic Testing DaysClimate Impact of Software Testing at Nordic Testing Days
Climate Impact of Software Testing at Nordic Testing Days
Kari Kakkonen
 
UiPath Test Automation using UiPath Test Suite series, part 4
UiPath Test Automation using UiPath Test Suite series, part 4UiPath Test Automation using UiPath Test Suite series, part 4
UiPath Test Automation using UiPath Test Suite series, part 4
DianaGray10
 
Alt. GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using ...
Alt. GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using ...Alt. GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using ...
Alt. GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using ...
James Anderson
 
Assure Contact Center Experiences for Your Customers With ThousandEyes
Assure Contact Center Experiences for Your Customers With ThousandEyesAssure Contact Center Experiences for Your Customers With ThousandEyes
Assure Contact Center Experiences for Your Customers With ThousandEyes
ThousandEyes
 
FIDO Alliance Osaka Seminar: Passkeys and the Road Ahead.pdf
FIDO Alliance Osaka Seminar: Passkeys and the Road Ahead.pdfFIDO Alliance Osaka Seminar: Passkeys and the Road Ahead.pdf
FIDO Alliance Osaka Seminar: Passkeys and the Road Ahead.pdf
FIDO Alliance
 
Encryption in Microsoft 365 - ExpertsLive Netherlands 2024
Encryption in Microsoft 365 - ExpertsLive Netherlands 2024Encryption in Microsoft 365 - ExpertsLive Netherlands 2024
Encryption in Microsoft 365 - ExpertsLive Netherlands 2024
Albert Hoitingh
 
GraphRAG is All You need? LLM & Knowledge Graph
GraphRAG is All You need? LLM & Knowledge GraphGraphRAG is All You need? LLM & Knowledge Graph
GraphRAG is All You need? LLM & Knowledge Graph
Guy Korland
 
DevOps and Testing slides at DASA Connect
DevOps and Testing slides at DASA ConnectDevOps and Testing slides at DASA Connect
DevOps and Testing slides at DASA Connect
Kari Kakkonen
 
Why You Should Replace Windows 11 with Nitrux Linux 3.5.0 for enhanced perfor...
Why You Should Replace Windows 11 with Nitrux Linux 3.5.0 for enhanced perfor...Why You Should Replace Windows 11 with Nitrux Linux 3.5.0 for enhanced perfor...
Why You Should Replace Windows 11 with Nitrux Linux 3.5.0 for enhanced perfor...
SOFTTECHHUB
 
Generative AI Deep Dive: Advancing from Proof of Concept to Production
Generative AI Deep Dive: Advancing from Proof of Concept to ProductionGenerative AI Deep Dive: Advancing from Proof of Concept to Production
Generative AI Deep Dive: Advancing from Proof of Concept to Production
Aggregage
 
Elevating Tactical DDD Patterns Through Object Calisthenics
Elevating Tactical DDD Patterns Through Object CalisthenicsElevating Tactical DDD Patterns Through Object Calisthenics
Elevating Tactical DDD Patterns Through Object Calisthenics
Dorra BARTAGUIZ
 
RESUME BUILDER APPLICATION Project for students
RESUME BUILDER APPLICATION Project for studentsRESUME BUILDER APPLICATION Project for students
RESUME BUILDER APPLICATION Project for students
KAMESHS29
 
Introduction to CHERI technology - Cybersecurity
Introduction to CHERI technology - CybersecurityIntroduction to CHERI technology - Cybersecurity
Introduction to CHERI technology - Cybersecurity
mikeeftimakis1
 
Enhancing Performance with Globus and the Science DMZ
Enhancing Performance with Globus and the Science DMZEnhancing Performance with Globus and the Science DMZ
Enhancing Performance with Globus and the Science DMZ
Globus
 
The Art of the Pitch: WordPress Relationships and Sales
The Art of the Pitch: WordPress Relationships and SalesThe Art of the Pitch: WordPress Relationships and Sales
The Art of the Pitch: WordPress Relationships and Sales
Laura Byrne
 
A tale of scale & speed: How the US Navy is enabling software delivery from l...
A tale of scale & speed: How the US Navy is enabling software delivery from l...A tale of scale & speed: How the US Navy is enabling software delivery from l...
A tale of scale & speed: How the US Navy is enabling software delivery from l...
sonjaschweigert1
 
UiPath Community Day Dubai: AI at Work..
UiPath Community Day Dubai: AI at Work..UiPath Community Day Dubai: AI at Work..
UiPath Community Day Dubai: AI at Work..
UiPathCommunity
 

Recently uploaded (20)

Leading Change strategies and insights for effective change management pdf 1.pdf
Leading Change strategies and insights for effective change management pdf 1.pdfLeading Change strategies and insights for effective change management pdf 1.pdf
Leading Change strategies and insights for effective change management pdf 1.pdf
 
The Metaverse and AI: how can decision-makers harness the Metaverse for their...
The Metaverse and AI: how can decision-makers harness the Metaverse for their...The Metaverse and AI: how can decision-makers harness the Metaverse for their...
The Metaverse and AI: how can decision-makers harness the Metaverse for their...
 
Video Streaming: Then, Now, and in the Future
Video Streaming: Then, Now, and in the FutureVideo Streaming: Then, Now, and in the Future
Video Streaming: Then, Now, and in the Future
 
Climate Impact of Software Testing at Nordic Testing Days
Climate Impact of Software Testing at Nordic Testing DaysClimate Impact of Software Testing at Nordic Testing Days
Climate Impact of Software Testing at Nordic Testing Days
 
UiPath Test Automation using UiPath Test Suite series, part 4
UiPath Test Automation using UiPath Test Suite series, part 4UiPath Test Automation using UiPath Test Suite series, part 4
UiPath Test Automation using UiPath Test Suite series, part 4
 
Alt. GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using ...
Alt. GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using ...Alt. GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using ...
Alt. GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using ...
 
Assure Contact Center Experiences for Your Customers With ThousandEyes
Assure Contact Center Experiences for Your Customers With ThousandEyesAssure Contact Center Experiences for Your Customers With ThousandEyes
Assure Contact Center Experiences for Your Customers With ThousandEyes
 
FIDO Alliance Osaka Seminar: Passkeys and the Road Ahead.pdf
FIDO Alliance Osaka Seminar: Passkeys and the Road Ahead.pdfFIDO Alliance Osaka Seminar: Passkeys and the Road Ahead.pdf
FIDO Alliance Osaka Seminar: Passkeys and the Road Ahead.pdf
 
Encryption in Microsoft 365 - ExpertsLive Netherlands 2024
Encryption in Microsoft 365 - ExpertsLive Netherlands 2024Encryption in Microsoft 365 - ExpertsLive Netherlands 2024
Encryption in Microsoft 365 - ExpertsLive Netherlands 2024
 
GraphRAG is All You need? LLM & Knowledge Graph
GraphRAG is All You need? LLM & Knowledge GraphGraphRAG is All You need? LLM & Knowledge Graph
GraphRAG is All You need? LLM & Knowledge Graph
 
DevOps and Testing slides at DASA Connect
DevOps and Testing slides at DASA ConnectDevOps and Testing slides at DASA Connect
DevOps and Testing slides at DASA Connect
 
Why You Should Replace Windows 11 with Nitrux Linux 3.5.0 for enhanced perfor...
Why You Should Replace Windows 11 with Nitrux Linux 3.5.0 for enhanced perfor...Why You Should Replace Windows 11 with Nitrux Linux 3.5.0 for enhanced perfor...
Why You Should Replace Windows 11 with Nitrux Linux 3.5.0 for enhanced perfor...
 
Generative AI Deep Dive: Advancing from Proof of Concept to Production
Generative AI Deep Dive: Advancing from Proof of Concept to ProductionGenerative AI Deep Dive: Advancing from Proof of Concept to Production
Generative AI Deep Dive: Advancing from Proof of Concept to Production
 
Elevating Tactical DDD Patterns Through Object Calisthenics
Elevating Tactical DDD Patterns Through Object CalisthenicsElevating Tactical DDD Patterns Through Object Calisthenics
Elevating Tactical DDD Patterns Through Object Calisthenics
 
RESUME BUILDER APPLICATION Project for students
RESUME BUILDER APPLICATION Project for studentsRESUME BUILDER APPLICATION Project for students
RESUME BUILDER APPLICATION Project for students
 
Introduction to CHERI technology - Cybersecurity
Introduction to CHERI technology - CybersecurityIntroduction to CHERI technology - Cybersecurity
Introduction to CHERI technology - Cybersecurity
 
Enhancing Performance with Globus and the Science DMZ
Enhancing Performance with Globus and the Science DMZEnhancing Performance with Globus and the Science DMZ
Enhancing Performance with Globus and the Science DMZ
 
The Art of the Pitch: WordPress Relationships and Sales
The Art of the Pitch: WordPress Relationships and SalesThe Art of the Pitch: WordPress Relationships and Sales
The Art of the Pitch: WordPress Relationships and Sales
 
A tale of scale & speed: How the US Navy is enabling software delivery from l...
A tale of scale & speed: How the US Navy is enabling software delivery from l...A tale of scale & speed: How the US Navy is enabling software delivery from l...
A tale of scale & speed: How the US Navy is enabling software delivery from l...
 
UiPath Community Day Dubai: AI at Work..
UiPath Community Day Dubai: AI at Work..UiPath Community Day Dubai: AI at Work..
UiPath Community Day Dubai: AI at Work..
 

AOS Lab 7: Page tables

  • 1. Lab 7: Page tables
    Advanced Operating Systems
    Zubair Nabi
    zubair.nabi@itu.edu.pk
    March 27, 2013
  • 2. Introduction
    Page tables allow the OS to:
    • Multiplex the address spaces of different processes onto a single physical memory space
    • Protect the memories of different processes
    • Map the same kernel memory in several address spaces
    • Map the same user memory more than once in one address space (user pages are also mapped into the kernel’s physical view of memory)
  • 7. Page table structure
    • An x86 page table contains 2^20 page table entries (PTEs)
    • Each PTE contains a 20-bit physical page number (PPN) and some flags
    • The paging hardware translates virtual addresses to physical ones by:
      1. Using the top 20 bits of the virtual address to index into the page table to find a PTE
      2. Replacing the top 20 bits with the PPN in the PTE
      3. Copying the lower 12 bits verbatim from the virtual to the physical address
    • Translation takes place at the granularity of 2^12-byte (4KB) chunks, called pages
  • 14. Page table structure (2)
    • A page table is stored in physical memory as a two-level tree
    • Root of the tree: a 4KB page directory
    • Each page directory entry (PDE) points to a page table page
    • Each page table page holds 1024 32-bit PTEs
    • 1024 x 1024 = 2^20
  • 19. Translation
    • Use top 10 bits of the virtual address to index the page directory
    • If the PDE is present, use next 10 bits to index the page table page and obtain a PTE
    • If either the PDE or the PTE is missing, raise a fault
    • This two-level structure increases efficiency
    • How?
  • 23. Permissions
    Each PTE contains associated flags:

    Flag     Description
    PTE_P    Whether the page is present
    PTE_W    Whether the page can be written to
    PTE_U    Whether user programs can access the page
    PTE_PWT  Whether write-through or write-back
    PTE_PCD  Whether caching is disabled
    PTE_A    Whether the page has been accessed
    PTE_D    Whether the page is dirty
    PTE_PS   Page size
  • 24. Process address space
    • Each process has a private address space which is switched on a context switch (via switchuvm)
    • Each address space starts at 0 and goes up to KERNBASE, allowing 2GB of space (specific to xv6)
    • Each time a process requests more memory, the kernel:
      1. Finds free physical pages
      2. Adds PTEs that point to these physical pages in the process’ page table
      3. Sets PTE_U, PTE_W, and PTE_P
  • 30. Process address space (2)
    Each process’ address space also contains mappings (above KERNBASE) for the kernel to run. Specifically:
    • KERNBASE:KERNBASE+PHYSTOP is mapped to 0:PHYSTOP
    • The kernel can use its own instructions and data
    • The kernel can directly write to physical memory (for instance, when creating page table pages)
    • A shortcoming of this approach is that the kernel can only make use of 2GB of memory
    • PTE_U is not set for any of the entries above KERNBASE
• 36. Example: Creating an address space for main
  • main makes a call to kvmalloc
  • kvmalloc creates a page table with kernel mappings above KERNBASE and switches to it

    void
    kvmalloc(void)
    {
      kpgdir = setupkvm();
      switchkvm();
    }
• 39. setupkvm
  1 Allocates a page of memory to hold the page directory
  2 Calls mappages to install kernel mappings (kmap):
    • Instructions and data
    • Physical memory up to PHYSTOP
    • Memory ranges for I/O devices
  Does not install mappings for user memory
• 45. Code: kmap

    static struct kmap {
      void *virt;
      uint phys_start;
      uint phys_end;
      int perm;
    } kmap[] = {
      { (void*)KERNBASE, 0,             EXTMEM,    PTE_W }, // I/O space
      { (void*)KERNLINK, V2P(KERNLINK), V2P(data), 0     }, // kern text
      { (void*)data,     V2P(data),     PHYSTOP,   PTE_W }, // kern data
      { (void*)DEVSPACE, DEVSPACE,      0,         PTE_W }, // more devices
    };
• 46. Code: setupkvm

    pde_t*
    setupkvm(void)
    {
      pde_t *pgdir;
      struct kmap *k;

      if((pgdir = (pde_t*)kalloc()) == 0)
        return 0;
      memset(pgdir, 0, PGSIZE);
      for(k = kmap; k < &kmap[NELEM(kmap)]; k++)
        if(mappages(pgdir, k->virt, k->phys_end - k->phys_start,
                    (uint)k->phys_start, k->perm) < 0)
          return 0;
      return pgdir;
    }
• 47. mappages
  • Installs virtual to physical mappings for a range of addresses
  • For each virtual address:
    1 Calls walkpgdir to find the address of the PTE for that address
    2 Initializes the PTE with the relevant PPN and the desired permissions
• 51. walkpgdir
  1 Uses the upper 10 bits of the virtual address to index into the page directory and find the PDE
  2 Uses the next 10 bits to index into the page table and find the PTE
• 53. Physical memory allocation
  • Physical memory between the end of the kernel and PHYSTOP is allocated on the fly
  • Free pages are maintained through a linked list, struct run *freelist, protected by a spinlock
    1 Allocation: remove a page from the list: kalloc()
    2 Deallocation: add the page to the list: kfree()

    struct {
      struct spinlock lock;
      int use_lock;
      struct run *freelist;
    } kmem;
• 58. exec
  • Creates the user part of an address space from the program binary, in Executable and Linkable Format (ELF)
  • Initializes instructions, data, and stack
• 60. Today’s task
  • Most operating systems implement “anticipatory paging”: on a page fault, the next few consecutive pages are also loaded, to preemptively reduce future page faults
  • Chalk out a design to implement this strategy in xv6
  • 61. Reading(s) • Chapter 2, “Page tables” from “xv6: a simple, Unix-like teaching operating system”