1. Course : AIT 204
Course Title : Computer Organization and Architecture (COA)
Course Credits : 3 + 0
Course Teacher : Dr. Y R Ghodasara & Prof. K C Kamani
College of Agricultural Information Technology
Anand Agricultural University
Anand
Unit III
3. • The whole computer is built around the motherboard, the most important component in the PC.
• The motherboard is a large printed circuit board with many chips, connectors and other electronics
mounted on it.
Data Exchange in Motherboard
• Inside the PC, data is constantly being exchanged between or via the various devices.
• Most of the data exchange takes place on the motherboard itself, where all the components are connected
to each other:
4. • In relation to the PC’s external devices, the motherboard functions like a central railway station.
• All traffic (in or out) passes through the motherboard.
• The motherboard contains the following components:
• BIOS Chipset
• CPU Socket
• RAM Slots
• Expansion Slots
• Connectors
• Ports
• CMOS Battery
• Jumpers
Connection of External Devices to the Motherboard
5. Chipset Layout of Motherboard
FSB – Front Side Bus
PCI – Peripheral Component Interconnect
LPC – Low Pin Count
CMOS – Complementary Metal Oxide Semiconductor
IDE – Integrated Drive Electronics
BIOS – Basic Input Output System
SATA – Serial ATA (Advanced Technology Attachment)
USB – Universal Serial Bus
6. The Chipset
• A chipset refers to a group of integrated circuits, or chips, that are designed to work together.
• They are usually marketed as a single product.
• The term chipset often refers to a specific pair of chips on the motherboard: the northbridge and the
southbridge.
• The northbridge links the CPU to very high-speed devices, especially main memory and graphics
controllers.
• The southbridge connects to lower-speed peripheral devices.
• The southbridge actually contains some on-chip integrated peripherals, such as Ethernet, USB, and
audio devices.
• A chipset is usually designed to work with a specific family of microprocessors. Because it controls
communications between the processor and external devices, the chipset plays a crucial role in
determining system performance.
• Current manufacturers of chipsets: NVIDIA, AMD, VIA Technologies, SiS, Intel and Broadcom.
Chipset Block Diagram
7. CPU (Central Processing Unit) / Microprocessor
• The CPU contains the following sub-components:
• ALU (Arithmetic Logic Unit)
• CU (Control Unit)
• A set of registers to store data temporarily
• CPU speed depends on the clock rate (clock speed): the higher the clock speed, the more
instructions the CPU executes per second.
• The CPU is connected to memory using three separate buses (inside the FSB):
• Data Bus (to transfer data from memory to the CPU and vice versa)
• Address Bus (to transfer addresses from the CPU to memory)
• Control Bus (to transfer control signals such as Read Data, Write Data, etc.)
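The interplay of the three buses can be sketched in a few lines of Python. This is an illustration only, not real hardware: memory is modelled as an address-to-data map, and the `control`, `address` and `data` parameters stand in for the control, address and data buses.

```python
# Illustrative sketch of a memory read/write over the three buses.
memory = {0x10: 42, 0x11: 7}    # main memory as an address -> data map

def bus_transaction(control, address, data=None):
    if control == "READ":        # control bus: READ; address bus: the address
        return memory[address]   # data bus carries the value back to the CPU
    elif control == "WRITE":     # data bus carries the value from the CPU
        memory[address] = data

value = bus_transaction("READ", 0x10)
print(value)                     # 42
bus_transaction("WRITE", 0x11, 99)
print(memory[0x11])              # 99
```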
8. Moore’s Law
Gordon Moore (1965) predicted that the number of transistors in processors (and hence their speed) would
double roughly every two years (a figure often quoted as every 18 months).
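The compounding effect of this doubling can be checked with a quick calculation. As a rough illustration, the Intel 4004 (1971) had about 2,300 transistors; 30 years of doubling every two years multiplies that by 2^15:

```python
# Projection under Moore's law: the count doubles every doubling_period years.
def transistors(initial, years, doubling_period=2):
    return initial * 2 ** (years // doubling_period)

# Starting from ~2,300 transistors (Intel 4004, 1971), after 30 years:
print(transistors(2300, 30))   # 75,366,400 -- tens of millions, the right order
                               # of magnitude for processors of the early 2000s
```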
9. CPU Categories Based on How Data is Stored in Memory
Big Endian Processor
The most significant byte (MSB) value,
which is 0Ah in our example, is stored at
the memory location with the lowest
address.
Examples : PowerPC processors (Apple Power Mac G4, G5)
Motorola 68k processors
(ARM processors are bi-endian: they can be configured for either order, and usually run little-endian.)
Little Endian Processor
The least significant byte (LSB) value, 0Dh
in our example, is stored at the memory
location with the lowest address.
Examples : Intel processors
AMD processors
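The two byte orders can be demonstrated with Python’s standard `struct` module, using the 32-bit value 0A0B0C0Dh implied by the example above (MSB = 0Ah, LSB = 0Dh):

```python
import struct

value = 0x0A0B0C0D               # MSB is 0x0A, LSB is 0x0D

big = struct.pack(">I", value)    # big-endian: MSB at the lowest address
little = struct.pack("<I", value) # little-endian: LSB at the lowest address

print(big.hex())     # 0a0b0c0d
print(little.hex())  # 0d0c0b0a
```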
10. CPU Categories Based on Instruction Set
RISC Processor
Reduced Instruction Set Computer
Small set of simple instructions
Processor design is simple
Instruction size (in bits) is the same for
different operations.
Examples : ARM processors
PowerPC (Motorola/IBM) processors
CISC Processor
Complex Instruction Set Computer
Large set of complex instructions
Processor design is complex
Instruction size (in bits) may vary for different
operations.
Examples : Intel x86 processors
AMD x86 processors
11. Instruction Pipelining
Execution of an instruction in the processor is divided into four stages:
Stage 1 : Instruction Fetch (IF)
Stage 2 : Instruction Decode (ID)
Stage 3 : Execute Instruction (EX)
Stage 4 : Write Back Result (WB)
Hence it is possible to keep different instructions in different stages at the same time.
Clock Cycle :   1  2  3  4
Instruction 1 : IF ID EX WB
13. Time Execution
0 • Four instructions are waiting to be executed.
1 • The green instruction is fetched from memory.
2 • The green instruction is decoded.
  • The purple instruction is fetched from memory.
3 • The green instruction is executed (the actual operation is performed).
  • The purple instruction is decoded.
  • The blue instruction is fetched.
4 • The green instruction's results are written back to the register file or memory.
  • The purple instruction is executed.
  • The blue instruction is decoded.
  • The red instruction is fetched.
5 • The green instruction is completed.
  • The purple instruction is written back.
  • The blue instruction is executed.
  • The red instruction is decoded.
6 • The purple instruction is completed.
  • The blue instruction is written back.
  • The red instruction is executed.
7 • The blue instruction is completed.
  • The red instruction is written back.
8 • The red instruction is completed.
9 • All four instructions have been executed.
14. Instruction execution without pipelining
Clock Cycle :   1  2  3  4  5  6  7  8  9  10 11 12
Instruction 1 : IF ID EX WB
Instruction 2 :             IF ID EX WB
Instruction 3 :                         IF ID EX WB

Instruction execution with pipelining
Clock Cycle :   1  2  3  4  5  6
Instruction 1 : IF ID EX WB
Instruction 2 :    IF ID EX WB
Instruction 3 :       IF ID EX WB

An instruction pipeline is a technique used in the design of computers and other digital electronic devices to
increase their instruction throughput (the number of instructions that can be executed in a unit of time).
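The cycle counts follow a simple formula: a non-pipelined k-stage processor needs n × k cycles for n instructions, while a pipelined one needs k cycles to fill the pipeline plus one cycle per remaining instruction, i.e. k + (n − 1). A quick sketch:

```python
def cycles_without_pipeline(n, stages=4):
    # each instruction must finish all stages before the next one starts
    return n * stages

def cycles_with_pipeline(n, stages=4):
    # the first instruction fills the pipeline; then one completes per cycle
    return stages + (n - 1)

print(cycles_without_pipeline(3))  # 12 cycles, matching the first diagram
print(cycles_with_pipeline(3))     # 6 cycles, matching the second diagram
```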
15. Random Access Memory (RAM)
RAM is divided into SRAM and DRAM.
SRAM
1. Static Random Access Memory
2. Fast to access
3. Less dense
4. More expensive
5. Uses flip-flops as the storage element
6. Used in cache memory
7. Consumes more power
DRAM
1. Dynamic Random Access Memory
2. Slower to access
3. More dense
4. Cheaper
5. Uses a capacitor as the storage element
6. Used in main memory
7. Consumes less power
16. DRAM
SDRAM
1. Synchronous Dynamic RAM
2. Runs synchronously with the system clock, allowing higher clock speeds
3. One data transfer per clock cycle (rising edge only)
DDR RAM
1. Double Data Rate RAM
2. Runs synchronously with the system clock, allowing higher clock speeds
3. Two data transfers per clock cycle
(rising edge + falling edge)
17. The Memory Hierarchy
• Use a small array of SRAM
– For the CACHE (hopefully for most accesses)
• Use a bigger amount of DRAM
– For the Main memory
• Use a really big amount of Disk storage
– For the Virtual memory (i.e. everything else)
18. The Memory Hierarchy Pyramid
[Figure: the memory hierarchy pyramid — the CPU at the top, then Cache, Main Memory, and Disk Storage.
Cost and access frequency decrease, and latency increases, moving down the pyramid.]
19. A Favorite Cache Analogy
• Hungry! Must eat!
– Option 1: go to the refrigerator
• Found it, eat!
• Latency = 1 minute
– Option 2: go to the store
• Found it, purchase, take home, eat!
• Latency = 20-30 minutes
– Option 3: grow food!
• Plant, wait … wait … wait …, harvest, eat!
• Latency = ~250,000 minutes (~6 months)
• Crazy fact: the ratio of growing food to going to the
store is about 10,000 : 1, and the ratio of a disk access to a DRAM access is about the same (~10,000 : 1).
20. Rehashing our terms
• The Architectural view of memory is:
– What the machine language sees
– Memory is just a big array of storage
• Breaking up the memory system into different
pieces – cache, main memory (made up of
DRAM) and Disk storage – is not architectural.
– The machine language doesn’t know about it
– The processor may not know about it
– A new implementation may not break it up into the same pieces (or break it up
at all).
Caching needs to be Transparent!
21. CACHE MEMORY
Cache is a small, high-speed memory, usually static RAM (SRAM), that contains the most recently accessed pieces of main memory.
In today’s systems, the time it takes to bring an instruction (or piece of data) into the processor is very long compared to the
time to execute the instruction. For example, a typical access time for DRAM is 60 ns, while a 100 MHz processor can execute most
instructions in 1 clock cycle, or 10 ns. Therefore a bottleneck forms at the input to the processor. Cache memory helps by decreasing the
time it takes to move information to and from the processor. A typical access time for SRAM is 15 ns, so cache memory
allows small portions of main memory to be accessed 3 to 4 times faster than DRAM (main memory).
Locality of Reference
The concept is that at any given time the processor will be accessing memory in a small or localized region of memory. The cache
holds this region, allowing the processor to access it faster. As a result, over 90% of memory accesses
are served by the high-speed cache.
Memory Technology : Access Time : Cost (Dollars per GB)
SRAM : 15-20 ns : $2000 - $3000
DRAM : 60-70 ns : $20 - $100
Magnetic Disk : 500,000 ns : $2 - $3
Why not replace main memory DRAM with SRAM? The main reason is cost: SRAM is several times more expensive than DRAM. Also,
SRAM consumes more power and is less dense than DRAM.
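The benefit of a cache can be quantified with the standard effective-access-time formula: average time = hit rate × cache time + miss rate × memory time. A back-of-the-envelope check, using the 90% hit figure and the access times quoted above:

```python
def effective_access_time(hit_rate, cache_ns, memory_ns):
    # average time per access: hits served by cache, misses go to main memory
    return hit_rate * cache_ns + (1 - hit_rate) * memory_ns

# 90% of accesses hit the 15 ns SRAM cache; 10% fall through to 60 ns DRAM
print(effective_access_time(0.90, 15, 60))  # 19.5 ns, ~3x faster than DRAM alone
```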
22. Basic Cache Model
Cache memory sits between the CPU and main memory.
Cache-related terms:
Cache Hit : When the cache contains the information requested, the transaction is said to be a cache hit.
Cache Miss : When the cache does not contain the information requested, the transaction is said to be a cache miss.
Dirty Data : When data is modified within the cache but not in main memory, the data in the cache is called
“dirty data”.
Stale Data : When data is modified within main memory but not in the cache, the data in the cache is called
“stale data”.
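Hits and misses can be made concrete with a toy cache model. This is a deliberately simplified sketch (a fully-associative cache with FIFO eviction and no write handling), not how real hardware is organized:

```python
class SimpleCache:
    """Toy fully-associative cache that counts hits and misses (illustration only)."""
    def __init__(self, capacity=2):
        self.capacity = capacity
        self.lines = {}              # address -> cached data
        self.hits = 0
        self.misses = 0

    def read(self, address, main_memory):
        if address in self.lines:    # cache hit: data found in the cache
            self.hits += 1
        else:                        # cache miss: fetch the data from main memory
            self.misses += 1
            if len(self.lines) >= self.capacity:
                oldest = next(iter(self.lines))   # evict in FIFO order
                del self.lines[oldest]
            self.lines[address] = main_memory[address]
        return self.lines[address]

main_memory = {0: 'a', 1: 'b', 2: 'c'}
cache = SimpleCache(capacity=2)
for addr in [0, 1, 0, 1]:            # the second pass over 0 and 1 hits the cache
    cache.read(addr, main_memory)
print(cache.hits, cache.misses)      # 2 2
```

Locality of reference is visible even in this toy: repeated accesses to the same small set of addresses turn into hits.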
23. Cache Organization in Pentium Processor
When developing a system with a Pentium processor, it is common to add an external cache. The external cache is the second
cache in a Pentium processor system, and is therefore called a Level 2 (or L2) cache. The internal processor cache is referred
to as a Level 1 (or L1) cache. The names L1 and L2 do not depend on where the cache is physically located (i.e., internal or
external). Rather, they depend on which cache the processor accesses first (i.e., the L1 cache is accessed before L2 whenever a
memory request is generated). The figure shows how the L1 and L2 caches relate to each other in a Pentium processor system.
24. VIRTUAL MEMORY
When a user starts a program, the program is loaded into the main memory of the
computer.
In a multitasking operating system, the user can open more than one program at a time.
Physical RAM (main memory) is limited in every computer.
When the user starts multiple programs, sometimes not all of them fit in main
memory. In this situation modern operating systems use a small partition of the hard disk as
memory. This partition of the hard disk is known as the swap space / swap partition / virtual
memory.
A process stored in the virtual memory space cannot be executed directly by the
processor, so it must be brought into main memory for execution.
This is known as process swapping.
When a process is transferred from physical memory to virtual memory, the transfer is
known as a swap out.
When a process is transferred from virtual memory to physical memory, the transfer is
known as a swap in.
Process swapping is controlled by an operating system program called the Memory Manager.
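The swap-out/swap-in bookkeeping can be sketched in a few lines. This is a hypothetical toy, assuming a FIFO choice of victim process; a real memory manager is far more sophisticated (page-granular, with usage-based replacement policies):

```python
ram_capacity = 3
ram = ["P1", "P2", "P3"]       # processes resident in physical memory
swap_space = []                # processes on the hard-disk swap partition

def run(process):
    if process in ram:
        return                           # already in RAM, nothing to do
    if len(ram) >= ram_capacity:         # RAM is full:
        victim = ram.pop(0)              #   pick the oldest process (FIFO)
        swap_space.append(victim)        #   swap out: RAM -> swap partition
    if process in swap_space:
        swap_space.remove(process)       # swap in: swap partition -> RAM
    ram.append(process)

run("P4")                       # P4 needs RAM, so P1 is swapped out
print(ram)                      # ['P2', 'P3', 'P4']
print(swap_space)               # ['P1']
```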
25. Concept of Virtual Memory
[Figure: the operating system and several processes (Process 1 to Process 5) shown across physical RAM and the
swap partition (virtual RAM), with arrows labelled "swap out" (RAM to swap partition) and "swap in" (swap
partition to RAM).]