This document discusses computer buses and peripherals. It begins by explaining that a bus is a pathway that allows computer components to communicate with the CPU. It then describes two main types of buses: system buses that connect the CPU to main memory, and input/output (I/O) buses that connect peripheral devices. Examples of expansion buses like ISA, EISA, PCI, and USB are provided. The document also discusses specific peripherals like video cards, sound cards, and network cards that connect via expansion buses.
A data bus transfers data between computer subsystems and can handle 32-bit or 64-bit widths. An address bus connects components and its width determines how much memory a system can access - a 32-bit address bus allows access to up to 4GB of memory. A control bus carries commands between the CPU and other devices and returns status signals.
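The relationship between address-bus width and addressable memory can be sketched directly: with an n-bit address bus and byte-addressable memory, the processor can address 2^n bytes. A minimal illustration (function name is ours, for demonstration only):

```python
# Addressable memory grows as 2^width bytes, assuming byte-addressable memory.
def addressable_bytes(bus_width_bits: int) -> int:
    return 2 ** bus_width_bits

print(addressable_bytes(32))            # 4294967296 bytes
print(addressable_bytes(32) // 2**30)   # 4 GiB, matching the 4GB figure above
```

This is why a 32-bit address bus tops out at 4GB, and why 64-bit systems raised the theoretical limit so dramatically.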
The document discusses the components and architecture of the 8-bit PC/XT computer system. It describes the central processor, memory, and various address and data buses that connect these components, including the local address bus, system address bus, memory address bus, and X-address bus. It notes differences between the PC and XT models related to the 8-bit versus 16-bit width of the processors and data buses.
Buses transfer data and communication signals within a computer. They allow different components like the CPU, memory, and input/output devices to exchange information. The bus width and clock speed determine how much data can be transferred at once and how quickly. Wider buses and faster clock speeds improve performance by allowing more data to be processed in less time. A computer has several types of buses that connect different internal components like the processor, cache, and expansion ports.
Memory Interleaving: Low-Order and High-Order Interleaving (Jawwad Rafiq)

Memory interleaving splits memory into independent banks that can process read/write requests in parallel to increase throughput. It interleaves the address space so consecutive addresses are assigned to different banks. Low order interleaving uses the low order bits of an address to identify the memory module and high order bits for the word address within each module, allowing block access in a pipelined fashion. This improves the effective memory bandwidth.
Interleaved memory is a design that spreads memory addresses across multiple memory banks to compensate for the relatively slow speed of DRAM. It increases bandwidth and improves performance by allowing different modules to be accessed independently and in parallel by different processing units like a CPU and hard disk. There are two address formats for interleaved memory: low order interleaving which spreads addresses across banks, and high order interleaving which uses high order bits as the module address.
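The two address formats described above can be sketched in a few lines. This is an illustrative model only; the bank count and words-per-bank values are assumptions, not taken from the document:

```python
NUM_BANKS = 4        # assumed power-of-two bank count
WORDS_PER_BANK = 1024  # assumed bank size for the high-order case

def low_order_interleave(addr: int) -> tuple[int, int]:
    """Low-order bits select the bank; the remaining bits index the word."""
    return addr % NUM_BANKS, addr // NUM_BANKS

def high_order_interleave(addr: int) -> tuple[int, int]:
    """High-order bits select the bank; low-order bits index the word."""
    return addr // WORDS_PER_BANK, addr % WORDS_PER_BANK

# Under low-order interleaving, consecutive addresses land in different banks,
# which is what allows block accesses to proceed in a pipelined fashion:
print([low_order_interleave(a)[0] for a in range(8)])  # [0, 1, 2, 3, 0, 1, 2, 3]
```

Note how low-order interleaving spreads a sequential run of addresses across all banks, while high-order interleaving keeps a contiguous region within one bank.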
This document discusses different types of computer memory, including RAM, ROM, and virtual memory. It describes the key characteristics and uses of dynamic RAM, static RAM, ROM, EEPROM, flash memory, cache memory, and virtual memory. The main types of RAM discussed are DRAM, SRAM, SDRAM, and RDRAM. DRAM needs refreshing but is cheaper than SRAM. ROM types include mask ROM, PROM, EPROM, and EEPROM. Virtual memory allows programs to access memory as if it were one unified virtual space.
The document discusses the memory hierarchy in computers. It explains that memory is organized in a hierarchy with different levels providing varying degrees of speed and capacity. The levels from fastest to slowest are: registers, cache, main memory, and auxiliary memory such as magnetic disks and tapes. Cache memory sits between the CPU and main memory to bridge the speed gap. It exploits locality of reference to improve memory access speed. The document provides details on the working of each memory level and how they interact with each other.
The document discusses cache organization and mapping techniques. It describes:
1) Direct mapping where each block maps to one line. Set associative mapping divides cache into sets with multiple lines per set.
2) Replacement algorithms like FIFO and LRU that determine which block to replace when the cache is full.
3) Write policies like write-through and write-back that handle writing cached data back to main memory.
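Direct mapping, as summarized in point 1, amounts to splitting an address into tag, line index, and block offset fields. A minimal sketch, with illustrative field widths that are our assumption rather than the document's:

```python
# Direct-mapped address decomposition (assumed sizes: 16-byte blocks, 256 lines).
OFFSET_BITS = 4   # 2^4 = 16-byte blocks
INDEX_BITS = 8    # 2^8 = 256 cache lines

def decompose(addr: int):
    """Split an address into (tag, line index, block offset)."""
    offset = addr & ((1 << OFFSET_BITS) - 1)
    index = (addr >> OFFSET_BITS) & ((1 << INDEX_BITS) - 1)
    tag = addr >> (OFFSET_BITS + INDEX_BITS)
    return tag, index, offset

print(decompose(0x12345))  # (0x12, 0x34, 0x5)
```

The index selects the single line a block may occupy; the stored tag is then compared against the address tag to decide hit or miss.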
Cache memory is a small, fast memory located close to the processor that stores frequently accessed data from main memory. When the processor requests data, the cache is checked first. If the data is present, there is a cache hit and the data is accessed quickly from the cache. If not present, there is a cache miss and the data must be fetched from main memory, which takes longer. Cache memory relies on the principles of temporal and spatial locality: recently and nearby accessed data is likely to be needed again soon. Mapping functions like direct, associative, and set-associative mapping determine how data is stored in the cache. Replacement policies like FIFO and LRU determine which cached data gets replaced when new data is brought in.
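The LRU replacement policy mentioned above can be sketched with an ordered dictionary; the class and its capacity are illustrative assumptions, not an implementation from the document:

```python
from collections import OrderedDict

class LRUCache:
    """Minimal LRU replacement sketch: evict the least recently used block."""
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.blocks = OrderedDict()  # block address -> data, oldest first

    def access(self, addr, fetch):
        if addr in self.blocks:              # cache hit
            self.blocks.move_to_end(addr)    # mark as most recently used
            return self.blocks[addr]
        data = fetch(addr)                   # cache miss: fetch from main memory
        self.blocks[addr] = data
        if len(self.blocks) > self.capacity:
            self.blocks.popitem(last=False)  # evict least recently used block
        return data
```

A FIFO policy would differ only in skipping the `move_to_end` step on a hit, so blocks are evicted strictly in arrival order.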
This lecture provides a detailed look at instruction set architectures (ISAs). It discusses instruction formats, including the number of operands, operand locations and types. It also covers addressing modes like immediate, direct, register, indirect and indexed. Additionally, it examines different approaches to storing data like stack, accumulator and general purpose register architectures. The lecture concludes by discussing instruction-level pipelining and examples of ISAs like Intel, MIPS and Java Virtual Machine.
Introduction to Bus | Address, Data, Control Bus (Hem Pokhrel)
Handouts for BBA First Semester, Prime College.
UNIT 5: Central Processing Unit: Control Unit, Arithmetic and Logic Unit, Register set, Functions of Central Processing Unit. Introduction to Bus (Address, Data, Control)
Cache Memory: Computer Architecture and Organization (Humayra Khanum)
Cache memory is a small, high-speed buffer located between the CPU and main memory that holds copies of frequently used instructions and data. It accelerates access to these items by keeping them closer to the CPU than main memory. There are separate caches for instructions and data, as well as a TLB cache that stores translated virtual addresses. Cache memory uses mapping techniques like direct mapping, set associative mapping, and fully associative mapping to determine where to store and retrieve items. Common replacement algorithms used when the cache is full are LRU, FIFO, LFU, and random selection.
This document provides an overview of memory organization and cache memory. It discusses the memory hierarchy from fast, small registers and caches closer to the CPU to larger, slower main memory and permanent storage like disks further away. Cache memory stores recently accessed data from main memory to speed up future accesses by taking advantage of temporal and spatial locality. Caches can be direct mapped, set associative, or fully associative and use different replacement policies like LRU when a block needs to be evicted.
A multi-core processor contains two or more independent processing units called cores that are integrated onto a single chip. Each core has its own private cache, while larger caches are shared. Common interconnect network topologies to link the cores include buses, rings, and meshes. Multiple cores allow different operating systems and applications to run simultaneously on separate cores, such as a general-purpose CPU, GPU, DSP, and high-performance core running different operating systems and workloads in parallel for improved performance.
Computer instructions are stored sequentially: as one instruction executes, the next is ready to be executed, and the previous one leaves a record so that execution can be tracked. Copy the link below and paste it into a new browser window for more information on computer registers: http://www.transtutors.com/homework-help/computer-science/computer-architecture/registers/
About Cache Memory
Working of cache memory
Levels of cache memory
Mapping techniques for cache memory
1. Direct mapping techniques
2. Fully associative mapping techniques
3. Set associative mapping techniques
Cache memory organization
Cache coherency
Everything in detail
The document discusses memory hierarchy and caching techniques used in computer systems. It describes how memory is organized from fastest and smallest registers and cache, to main memory, and slower disk storage. It explains that memory hierarchy aims to provide the fastest possible access speed while minimizing costs. It also summarizes how cache memory works by storing frequently used data from main memory to reduce access time, and how cache algorithms maintain cache coherence.
The document is a presentation by Hassan Mansoor on the topic of "Buses & Its Types" for his Computer Science program. It defines what a bus is, describes different types of buses including system buses, data buses, address buses, control buses, and expansion buses. It provides examples of common expansion buses like PCI, PCIe, AGP, USB, FireWire, and ISA. The presentation contents are organized into sections on defining buses, bus width, the units connected by the system bus, and descriptions of the different types of buses.
The document discusses computer bus systems and protocols. It describes how the CPU communicates with memory and I/O devices through a bus. The bus provides an interface for this communication and defines protocols like the four-cycle handshake protocol. It also discusses bus operations like reading and writing, bus protocols, and how devices can initiate direct memory access transfers without involving the CPU.
This document provides an overview of the central processing unit (CPU). It discusses that the CPU is referred to as the brain of the computer and contains an arithmetic logic unit (ALU) and control unit (CU). The ALU performs arithmetic and logical operations, while the CU directs other parts of the system. The CPU also includes registers for temporary storage. Communication between the CPU and other components like memory and I/O devices occurs via buses that transfer data, addresses, and control signals. Caches provide faster access to frequently used data and instructions.
Modern processors are faster than main memory, so the processor may waste time waiting on memory accesses.
The purpose of cache memory is to make main memory appear to the processor to be much faster than it actually is.
Cache memory is a small, fast memory located close to the processor that stores frequently accessed instructions and data. There are typically three levels of cache (L1, L2, L3) with L1 being the smallest and fastest cache located directly on the CPU chip. The performance of a cache is measured by its hit ratio, with a higher hit ratio indicating better performance as the CPU is less likely to access the slower main memory.
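The effect of the hit ratio on performance can be quantified with the standard effective-access-time formula. The latency figures below are illustrative assumptions, not values from the document:

```python
def avg_access_time(hit_ratio: float, cache_ns: float, memory_ns: float) -> float:
    """Effective access time: hits served from cache, misses from main memory."""
    return hit_ratio * cache_ns + (1 - hit_ratio) * memory_ns

# With an assumed 1 ns cache and 100 ns main memory, a 95% hit ratio gives:
print(avg_access_time(0.95, 1.0, 100.0))  # 5.95 ns on average
```

Even a small drop in hit ratio hurts badly here: at 90% the average rises to 10.9 ns, nearly double, which is why higher hit ratios indicate better cache performance.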
This document discusses cache memory and its characteristics. It begins by defining cache memory as a smaller, faster memory located close to the CPU that stores copies of frequently accessed data from main memory. This is done to achieve higher CPU performance by allowing faster access to cached data compared to main memory. The document then covers various characteristics of cache memory like location, capacity, unit of transfer, access methods, performance, organization, mapping functions, replacement algorithms, and write policies. Diagrams are included to illustrate cache read operations and different mapping approaches.
The DMA controller allows data to be transferred directly between an I/O device and memory without CPU involvement. It contains circuits to communicate with the CPU and I/O device, as well as an address register, word count register, and address lines to control memory access. The address register and lines are used to directly access memory locations, while the word count register specifies how many words to transfer. The DMA can perform direct transfers between the device and memory.
This document summarizes a research paper that analyzes the power consumption of NAND flash memory. It begins by providing background on NAND flash memory, including its use in solid state drives and differences from NOR flash memory. It then describes modeling the power consumption of NAND flash memory for different operations like read, program and erase. This includes analyzing the components that consume power, like word lines, bit lines, decoders and sense amplifiers. It presents a power state machine model for a single-level cell NAND chip and discusses transitioning between states like precharge and idle upon command completion.
Cache Presentation on Mapping and its Types (Engr Kumar)
The document discusses cache memory and provides details about:
1. Cache memory is a small, fast memory located between the CPU and main memory that stores frequently accessed data.
2. There are three main types of cache mapping - direct, associative, and set associative. Direct mapping allows a memory block to load into only one line in cache. Set associative mapping groups cache lines into sets, with each set containing two or more lines.
3. The document explains cache memory concepts like hits, misses, blocks, lines, and tags using diagrams and examples. It compares the performance of different cache mapping techniques.
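The set-associative scheme in point 2 can be sketched by letting the index select a set rather than a single line, then searching every line (way) in that set. All sizes below are illustrative assumptions:

```python
# Set-associative lookup sketch (assumed: 16-byte blocks, 128 sets, 2 ways).
BLOCK_SIZE = 16
NUM_SETS = 128

def set_index_and_tag(addr: int):
    """Middle bits select the set; the remaining high bits form the tag."""
    block = addr // BLOCK_SIZE
    return block % NUM_SETS, block // NUM_SETS

def lookup(cache_sets, addr):
    """A hit requires a matching tag in any line of the selected set."""
    idx, tag = set_index_and_tag(addr)
    return tag in cache_sets[idx]
```

Direct mapping is the degenerate case of one line per set; fully associative is the other extreme, a single set containing every line.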
The document discusses embedded computing platforms and system architecture. It covers the CPU bus and bus protocols. It describes the four-cycle handshake protocol and timing diagrams for microprocessor buses. It discusses different types of memory devices like RAM, ROM, and flash memory. It also covers I/O devices, DMA, and system bus configurations. The software architecture and relationship with hardware architecture is explained. Debugging embedded systems using host/target design is also summarized.
Expansion buses connect the CPU to other components on the system board and allow communication between these components. There have been several standard expansion bus architectures over time including ISA, EISA, VESA Local Bus, and PCI buses. PCI bus is the most widely used today as it offers high throughput, scalability, and a standard specification. Expansion buses define system resources like interrupts, memory addresses, and DMA channels that components use to communicate on the bus.
The document discusses various types of computer buses and interfaces. It defines a bus as a collection of wires that transmit data between parts of a computer. The main parts of a bus are the address bus, data bus, and control bus. Internal buses like the front-side and backside buses connect the CPU to caches and memory. Expansion buses like PCI, PCIe, and USB connect external devices. PCIe uses serial connections of lanes to double the data rate of PCI. Interfaces connect different systems and devices, with common computer interfaces being parallel ports, serial ports, and USB.
The document describes how a computer's internal components are physically connected through a common bus. It explains the machine cycle process where the instruction control unit fetches instructions from memory over the bus, and the arithmetic logic unit executes instructions by fetching data from memory over the bus.
2. Introduction
Bus
Types of Bus
Expansion Buses
Cards
The Video Card
The Sound Card
Network card
Aschalew S.(Msc) 2
3. Computer systems generally consist of three main parts:
the central processing unit (CPU) that processes data,
memory that holds the programs and data to be processed,
and I/O (input/output) devices as peripherals that
communicate with the outside world.
An early computer might contain a hand-wired CPU of
vacuum tubes, a magnetic drum for main memory, and a
punch tape and printer for reading and writing data
respectively.
A modern system might have a multi-core CPU, DDR4
SDRAM for memory, a solid-state drive for secondary
storage, a graphics card and LCD as a display system, a
mouse and keyboard for interaction, and a Wi-Fi connection
for networking.
In both cases, buses of one form or another move
data between all of these devices.
4. A bus is a pathway on the motherboard that enables
the components to communicate with the CPU. The
common buses include ISA, EISA, VESA local bus,
PCI, AGP, and USB.
5. Computers have two major types of buses:
1. System bus:- This is the bus that connects the CPU
to main memory on the motherboard.
The system bus is also called the front-side bus,
memory bus, local bus, or host bus.
2. A number of I/O buses (I/O is an acronym for input
/ output), connecting various peripheral devices to the
CPU. These devices connect to the system bus via a
‘bridge’ implemented in the processor's chipset. Other
names for the I/O bus include "expansion bus" and
"external bus".
6. In most traditional computer architectures, the
CPU and main memory tend to be tightly coupled.
A microprocessor conventionally is a single chip
which has a number of electrical connections on
its pins that can be used to select an "address" in
the main memory and another set of pins to read
and write the data stored at that location.
In most cases, the CPU and memory share
signaling characteristics and operate in synchrony.
The bus connecting the CPU and memory is one of
the defining characteristics of the system, and
often referred to simply as the system bus.
7. The system bus is a little bit more complicated than a single train
track, but not too much. Think of it as three rails per track, kind
of like mass transit trains use. That's because each track has to
carry three different things: data, address, and control.
The data are the actual digital pieces of information that need
to get somewhere or do something.
The address information describes where the data is located
and where it needs to go during a particular operation.
The control part is like the instructions because data doesn't
know what to do with itself (think 'Lego Movie'), so this
manages the flow of address and data information. That
includes which direction for the transfer of information and
exactly how data needs to be routed through the computer
system.
Because of these three different types of information, the system
bus actually consists of three buses.
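The three roles above can be sketched in code: every transfer puts something on the address lines (where), the data lines (what), and the control lines (what to do). This is a toy illustrative model, not any real bus protocol; all class and signal names here are invented for the sketch.

```python
# Toy model of a system bus transaction. Every bus cycle carries three
# things, mirroring the three buses described in the text:
#   control - what to do (read or write)
#   address - where the data lives or should go
#   data    - the value itself (only meaningful on writes)
# All names are invented for this illustration.

READ, WRITE = "READ", "WRITE"  # values carried on the control lines

class ToyBus:
    def __init__(self, size):
        self.memory = [0] * size  # the device on the other end of the bus

    def transfer(self, control, address, data=None):
        """One bus cycle: control manages the flow, address selects the
        location, data carries the information."""
        if control == WRITE:
            self.memory[address] = data
            return None
        elif control == READ:
            return self.memory[address]
        raise ValueError("unknown control signal")

bus = ToyBus(size=16)
bus.transfer(WRITE, address=4, data=0xAB)  # control + address + data out
value = bus.transfer(READ, address=4)      # control + address; data comes back
```

The point of the sketch is that no single line is enough: without the address the data has nowhere to go, and without the control signal the device cannot tell a read from a write.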
9. It is possible to allow peripherals to communicate with memory in
the same fashion, attaching adaptors in the form of expansion
cards directly to the system bus. This is commonly accomplished
through some sort of standardized electrical connector, several of
these forming the expansion bus or local bus.
Expansion Bus Types:- These are some of the common expansion
bus types that have ever been used in computers:
ISA - Industry Standard Architecture
EISA - Extended Industry Standard Architecture
MCA - Micro Channel Architecture
VESA - Video Electronics Standards Association
PCI - Peripheral Component Interconnect
PCMCIA - Personal Computer Memory Card Industry Association
(Also called PC bus)
AGP - Accelerated Graphics Port
SCSI - Small Computer Systems Interface.
10. However, as the performance differences between
the CPU and peripherals vary widely, some
solution is generally needed to ensure that
peripherals do not slow overall system
performance.
Many CPUs feature a second set of pins similar
to those for communicating with memory, but able
to operate at very different speeds and using
different protocols. Others use smart controllers to
place the data directly in memory, a concept
known as direct memory access. Most modern
systems combine both solutions, where
appropriate.
11. As the number of potential peripherals grew, using an
expansion card for every peripheral became increasingly
untenable. This has led to the introduction of bus systems
designed specifically to support multiple peripherals.
Common examples are the SATA ports in modern
computers, which allow a number of hard drives to be
connected without the need for a card. However, these
high-performance systems are generally too expensive to
implement in low-end devices, like a mouse. This has led
to the parallel development of a number of low-
performance bus systems for these solutions, the most
common example being the standardized Universal Serial
Bus (USB). All such examples may be referred to as
peripheral buses, although this terminology is not
universal.
12. The internal bus, also known as internal data bus,
memory bus, system bus or Front-Side-Bus, connects
all the internal components of a computer, such as
CPU and memory, to the motherboard. Internal data
buses are also referred to as a local bus, because they
are intended to connect to local devices. This bus is
typically rather quick and is independent of the rest of
the computer operations.
The external bus, or expansion bus, is made up of the
electronic pathways that connect the different external
devices, such as printer etc., to the computer.
13. Buses are either parallel buses, which carry data words
in parallel on multiple wires, or serial buses, which
carry data in bit-serial form.
The addition of extra power and control
connections, differential drivers, and data
connections in each direction usually means that
most serial buses have more conductors than the
minimum of one used in 1-Wire and UNI/O.
As data rates increase, the problems of timing
skew, power consumption, electromagnetic
interference and crosstalk across parallel buses
become more and more difficult to avoid.
14. One partial solution to this problem has been to
double pump the bus.
Often, a serial bus can be operated at higher
overall data rates than a parallel bus, despite
having fewer electrical connections, because a
serial bus inherently has no timing skew or
crosstalk. USB, FireWire, and Serial ATA are
examples of this.
Multidrop connections do not work well for fast
serial buses, so most modern serial buses use
daisy-chain or hub designs.
15. An address bus is a computer bus (a series of lines
connecting two or more devices) that is used to specify
a physical address.
When a processor or DMA-enabled device needs to
read or write to a memory location, it specifies that
memory location on the address bus (the value to be
read or written is sent on the data bus).
The width of the address bus determines the amount
of memory a system can address. For example, a
system with a 32-bit address bus can address 2^32
(4,294,967,296) memory locations. If each memory
location holds one byte, the addressable memory space
is 4 GB.
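The 4 GB figure follows directly from the bus width: an n-bit address bus can name 2^n distinct locations. A quick sketch of the arithmetic (the function name is ours):

```python
def addressable_bytes(address_bus_width_bits):
    """Number of locations an address bus of the given width can name,
    assuming one byte per memory location (as in the text)."""
    return 2 ** address_bus_width_bits

# A 32-bit address bus names 2**32 = 4,294,967,296 locations = 4 GiB.
locations = addressable_bytes(32)
gib = locations / 2**30
```

The same formula covers the older buses mentioned later: 24 address lines give 2^24 bytes, i.e. the 16 MB limit of the AT bus.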
16. Non-existent address
Software instructs the CPU to read or write a specific
physical memory address.
Accordingly, the CPU sets this physical address on its
address bus and requests all other hardware connected to
the CPU to respond with the results, if they answer for this
specific address.
If no other hardware responds, the CPU raises an
exception, stating that the requested physical address is
unrecognized by the whole computer system. Note that this
only covers physical memory addresses.
Trying to access an undefined virtual memory address is
generally considered to be a segmentation fault rather than
a bus error, though if the MMU is separate, the processor
can't tell the difference.
17. Unaligned access
Most CPUs are byte-addressable, where each unique
memory address refers to an 8-bit byte.
Most CPUs can access individual bytes from each
memory address, but they generally cannot access
larger units (16 bits, 32 bits, 64 bits and so on) without
these units being "aligned" to a specific boundary (the
x86 platform being a notable exception).
For example, if multi-byte accesses must be 16 bit-
aligned, addresses (given in bytes) at 0, 2, 4, 6, and so
on would be considered aligned and therefore accessible,
while addresses 1, 3, 5, and so on would be considered
unaligned.
Similarly, if multi-byte accesses must be 32-bit aligned,
addresses 0, 4, 8, 12, and so on would be considered
aligned and therefore accessible, and all addresses in
between would be considered unaligned.
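The alignment rule above is just a divisibility test: an address is aligned for an access of a given size when it is a multiple of that size. A small sketch (the helper name is ours):

```python
def is_aligned(address, access_size_bytes):
    """True if `address` is a multiple of the access size in bytes,
    i.e. the access is 'aligned' in the sense used in the text."""
    return address % access_size_bytes == 0

# 16-bit (2-byte) accesses: 0, 2, 4, 6 ... are aligned; 1, 3, 5 ... are not.
aligned_16 = [a for a in range(8) if is_aligned(a, 2)]
# 32-bit (4-byte) accesses: 0, 4, 8, 12 ... are aligned.
aligned_32 = [a for a in range(16) if is_aligned(a, 4)]
```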
18. Some systems may have a hybrid of these depending on the
architecture being used. For example, for hardware based
on the IBM System/360 mainframe, including the IBM
System z, Fujitsu B8000, RCA Spectra, and UNIVAC Series
90, instructions must be on a 16-bit boundary, that is,
execution addresses must start on an even byte.
Attempts to branch to an odd address result in a
specification exception. Data, however, may be retrieved
from any address in memory, and may be one byte or longer
depending on the instruction.
CPUs generally access data at the full width of their data
bus at all times. To address bytes, they access memory at
the full width of their data bus, then mask and shift to
address the individual byte.
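The mask-and-shift step can be shown directly: fetch a full 32-bit word, then isolate one byte of it. The function name is ours, and little-endian byte numbering (byte 0 is the least significant) is an assumption of the sketch.

```python
def extract_byte(word, byte_index):
    """Isolate one byte of a 32-bit word by shifting and masking,
    the way a CPU narrows a full-width bus access down to a byte.
    Byte 0 is the least significant byte (little-endian numbering)."""
    return (word >> (8 * byte_index)) & 0xFF

word = 0xDEADBEEF            # the full-width value read over the data bus
b0 = extract_byte(word, 0)   # 0xEF, the lowest byte
b3 = extract_byte(word, 3)   # 0xDE, the highest byte
```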
19. Paging errors
FreeBSD, Linux and Solaris can signal a bus
error when virtual memory pages cannot be
paged in, e.g. because a memory-mapped file
or an executing binary image was truncated
while the program was running, or because a
just-created memory-mapped file cannot be
physically allocated because the disk is full.
20. Non-present segment (x86)
On x86 there exists an older memory management
mechanism known as segmentation.
If an application loads a segment register with the
selector of a non-present segment (which under
POSIX-compliant OSs can only be done in
assembly language), an exception is generated.
Some OSs used that for swapping, but under
Linux this generates SIGBUS.
21. Summary of functions of buses in computers
Data sharing - All types of buses found in a computer
transfer data between the computer peripherals
connected to them.
The buses transfer or send data in either serial or
parallel method of data transfer.
This allows for the exchange of 1, 2, 4 or even 8 bytes
of data at a time. (A byte is a group of 8 bits).
Buses are classified depending on how many bits
they can move at the same time, which means that
we have 8-bit, 16-bit, 32-bit or even 64-bit buses.
22. Addressing - A bus has address lines, which match
those of the processor.
This allows data to be sent to or from specific
memory locations.
Power - A bus supplies power to various peripherals
connected to it.
Timing - The bus provides a system clock signal to
synchronize the peripherals attached to it with the
rest of the system.
The expansion bus facilitates easy connection of
more or additional components and devices on a
computer such as a TV card or sound card.
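The width and timing properties above together set a bus's peak transfer rate: bytes moved per transfer times transfers per second. A sketch of the arithmetic, using the 32-bit, 33 MHz PCI figures from later slides as the example (one transfer per clock is a simplifying assumption, and the function name is ours):

```python
def peak_bandwidth_bytes_per_sec(bus_width_bits, clock_hz, transfers_per_clock=1):
    """Peak bus bandwidth: (width in bytes) x clock x transfers per clock.
    A simplification that ignores protocol overhead such as arbitration."""
    return (bus_width_bits // 8) * clock_hz * transfers_per_clock

# A 32-bit bus at 33 MHz: 4 bytes x 33,000,000 transfers = 132 MB/s peak.
pci = peak_bandwidth_bytes_per_sec(32, 33_000_000)
```

Doubling either the width or the clock doubles the peak rate, which is why the wider, faster buses later in this deck outperform ISA.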
24. IBM introduced what became the Industry Standard
Architecture (ISA) I/O bus with its first mainstream
PC, which was built around the Intel 8088.
The ISA bus ran at a clock speed of 4.77 MHz.
The initial ISA bus was 8-bits wide and offered IRQs 0-
7. The 16-bit ISA bus came out in 1984. This newer
ISA bus runs at 8.3 MHz and supports IRQs (interrupt
requests) 0-15. Although the two ISA card types are different
sizes, both can be used in a 16-bit ISA slot, and a
motherboard may provide both 8-bit and 16-bit ISA slots.
You can still see ISA slots in many computers today
that support both 8- and 16-bit cards.
Exam Tip:- The 16-bit ISA slots support the use of
either 8-bit or 16-bit ISA cards.
25. For the 80286-based IBM PC-AT, an improved bus design,
which could transfer 16-bits of data at a time, was
announced. The 16-bit version of the ISA bus is sometimes
known as the AT bus. (AT-Advanced Technology)
The improved AT bus also provided a total of 24 address
lines, which allowed 16MB of memory to be addressed. The
AT bus was backward compatible with its 8-bit predecessor
and allowed 8-bit cards to be used in 16-bit expansion slots.
When it first appeared the 8-bit ISA bus ran at a speed of
4.77MHZ – the same speed as the processor.
Improvements made over the years eventually had the AT
bus running at a clock speed of 8 MHz.
26. Compaq formed the committee that created the Extended Industry Standard
Architecture (EISA) as an open standard for bus architecture to compete with
IBM's proprietary micro channel architecture (MCA).
The EISA bus is 32-bits wide, has an 8.3-MHz bus speed, and supports bus
mastering. EISA slots look similar to ISA slots and in fact support ISA cards
as well as EISA cards.
Back in EISA's heyday, techs loved working with pure EISA systems, because
EISA could automatically configure expansion cards when you ran the
configuration program. No manual configuration of IRQs or I/O addresses
made EISA a clever bus for its time.
27. This is a bus technology developed by a group of
manufacturers as an alternative to MCA. The bus
architecture was designed to use a 32-bit data path and
provided 32 address lines giving access to 4GB of memory.
Like the MCA, EISA offered a disk-based setup for the
cards, but it still ran at 8MHz in order for it to be
compatible with ISA.
The EISA expansion slots are twice as deep as an ISA
slot. If an ISA card is placed in an EISA slot it will use
only the top row of connectors, however, a full EISA card
uses both rows. It offered bus mastering.
EISA cards were relatively expensive and were normally
found on high-end workstations and network servers.
28. The Video Electronics Standards Association created
the VESA local bus (VL-bus) technology in 1992 as an
enhancement of the ISA bus.
The 32-bit-wide VL-bus works with hard drive
controllers and increases video performance.
The introduction of Windows created the need for more
advanced graphics, and running at incredible speeds of
33 MHz, the VL-bus is up to the challenge.
VL-bus slots are similar in size to 16-bit ISA slots and
have an extra brown slot at the end.
ISA cards are compatible with the VL-bus technology
and can be placed in the ISA portion of VL-bus slots
29. It was also known as the Local bus or the VESA-Local bus.
VESA (Video Electronics Standards Association) was
formed to help standardize PC video specifications, thus
solving the problem of proprietary technology where
different manufacturers were attempting to develop their
own buses.
The VL Bus provided 32-bit data path and ran at 25 or 33
MHZ.
It ran at the same clock frequency as the host CPU.
But this became a problem as processor speeds increased,
because the faster the peripherals are required to run, the
more expensive they are to manufacture.
30. It was difficult to implement the VL-Bus on newer chips such as
the 486s and the new Pentiums and so eventually the VL-Bus
was superseded by PCI.
VESA slots had extra set of connectors and thus the cards were
larger. The VESA design was backward compatible with the
older ISA cards.
Features of the VESA local bus card:-
32-bit interface
62/36-pin connector
90+20 pin VESA local bus extension
31. Peripheral Component Interconnect (PCI) was introduced
in 1993 and quickly made its way into the hearts of techs.
The 32-bit-wide PCI bus runs at half the speed of the
processor (up to 33 MHz), which at the time of its
creation made this bus an excellent choice for graphics
and video.
Manufacturers began creating peripherals that could
take advantage of these increased speeds. Better still, the
PCI bus automatically configures PCI cards, which
means the end of messing with manual configuration of
IRQs and other resources.
32. It is one of the latest developments in bus architecture and
is the current standard for PC expansion cards.
It is a local bus like VESA, that is, it connects the CPU,
memory and peripherals to wider, faster data pathway.
PCI supports both 32-bit and 64-bit data width; it is
compatible with 486s and Pentiums. The bus data width
matches that of the processor: for example, a 32-bit processor
would have a 32-bit PCI bus. The bus operates at 33 MHz.
PCI was used in developing Plug and Play (PnP) and all
PCI cards support PnP. This means a user can plug a new
card into the computer, power it on and it will “self-
identify” and “self-specify” and start working without
manual configuration using jumpers.
33. Unlike VESA, PCI supports bus mastering that is, the
bus has some processing capability and thus the CPU
spends less time processing data.
Most PCI cards are designed for 5v, but there are also 3v
and dual-voltage cards.
Keying slots used help to differentiate 3v and 5v cards
and also to make sure that a 3v card is not slotted into a
5v socket and vice versa.
Nowadays, 3-D graphics and video require even more
than the 32-bit PCI bus can offer, so manufacturers
introduced the 64-bit PCI bus to handle the load. Today,
primarily only modern server network interface cards
(NICs) use the 64-bit PCI bus, because for mainstream
video, the bus has been eclipsed by a new bus technology
called AGP.
34. Accelerated Graphics Port (AGP) was designed specifically for
video. The need for high quality and very fast performance of
video on computers led to development of the Accelerated
Graphics Port (AGP).
A subset of PCI and thus completely plug and play, AGP
provides a direct connection between processor and the video
card.
AGP connects directly to the North Bridge of the Intel 800
series chipset. The bus comes in 32-bit- and 64-bit-wide bus
widths.
The 32-bit-wide AGP bus operates at the speed of the
processor's memory bus (up to 66 MHz) making it perfect for
3-D graphics.
The 64-bit AGP 4x bus operates at the speed of the system bus,
up to 133 MHz. Using the maximum transfer rate formula, AGP
4x can move data at a rate of 1.07 GB per second.
AGP slots are brown and similar in size to PCI slots. But AGP
and PCI cards cannot use the same slots.
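The 1.07 GB per second figure quoted above follows from the maximum-transfer-rate arithmetic applied to the slide's own numbers: a 64-bit bus moves 8 bytes per transfer, and at 133 MHz that is about 1.06 billion bytes each second. A quick check of the figures:

```python
# Check the AGP 4x figure quoted in the text: a 64-bit bus (8 bytes per
# transfer) clocked at 133 MHz.
width_bytes = 64 // 8
clock_hz = 133_000_000
rate = width_bytes * clock_hz       # bytes per second
rate_gb = rate / 1_000_000_000      # about 1.06, close to the quoted 1.07 GB/s
```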
35. The AGP Port connects to the CPU and operates at the speed of
the processor bus. This means that video information is sent more
quickly to the card for processing.
The AGP uses the main PC memory to hold 3D images. In effect,
this gives the AGP video card an unlimited amount of video
memory. To speed up the data transfer, Intel designed the port as
a direct path to the PC’s main memory.
Data transfer rates range from 264 MBps to 528 MBps, and from
800 MBps up to 1.5 GB/sec. The AGP connector is identified by its brown color.
36. Short for Small Computer System Interface, a
parallel interface standard used by Apple
Macintosh computers, PCs, and Unix systems
for attaching peripheral devices to a computer.
37. USB differs from the buses discussed so far; it is an
external bus that works with the PCI internal bus. Most
ATX motherboards have built-in USB ports, or you can
install a PCI card that offers the ports.
USB (specification 1.0) transfers data at rates of 12 Mbps
and enables you to daisy chain up to 127 USB devices
together. The newer USB 2.0 specification is even faster.
USB is hot-swappable and supports the Plug and Play
technology. You can add and remove USB devices on the
fly without opening the case: you simply plug them in and
you can use them right away.
38. It is an external bus standard that supports data transfer rates of
12 Mbps. A single USB port connects up to 127 peripheral
devices, such as mice, modems, and keyboards. The USB also
supports hot plugging or insertion (the ability to connect a device
without turning the PC off) and plug and play (you connect a
device and start using it without configuration).
We have two versions of USB:
USB 1.x:- offered two data rates: 12 Mbps for
devices such as disk drives that need high-speed
throughput and 1.5 Mbps for devices such as joysticks
that need much less bandwidth.
USB 2.x:- It increased the data transfer rate for PC to
USB device to 480 Mbps, which is 40 times faster than
the USB 1.1 specification. With the increased
bandwidth, high throughput peripherals such as digital
cameras, CD burners and video equipment could now
be connected with USB.
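The "40 times faster" claim is easy to verify from the two quoted rates, 12 Mbps and 480 Mbps. A sketch of the comparison using a hypothetical 60 MB file (the file size is our example, not from the text):

```python
usb1_mbps = 12    # USB 1.x full speed, megabits per second
usb2_mbps = 480   # USB 2.0 high speed, megabits per second

speedup = usb2_mbps / usb1_mbps   # 480 / 12 = 40, matching the text

# Time to move a hypothetical 60 MB (480-megabit) file at each rate,
# ignoring protocol overhead:
file_megabits = 60 * 8
t_usb1 = file_megabits / usb1_mbps   # seconds at USB 1.x full speed
t_usb2 = file_megabits / usb2_mbps   # seconds at USB 2.0 high speed
```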
39. Modems connect to your telephone line using RJ-11
connectors. RJ-11 connectors use two wires and are
identical to telephone connectors.
The locking clips on the RJ-11 connectors help
secure the cable into the jack, or port. RJ-11 ports
look identical to phone jacks and are found on your
modem.
All modems have at least one RJ-11 port, and many
modems have two RJ-11 ports -one for the modem
and the other for a telephone, so you can use the
telephone line for voice when the modem is not in
use.
40. Plugs into expansion slot
Provides physical interface between computer and
network medium
Most computers use parallel data lines, called a bus, to
send data between CPU and adapter cards
Most networking media transmit data over a single line,
called serial transmission
NIC translates parallel into serial for outgoing
messages and serial into parallel for incoming messages
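The NIC's parallel-to-serial translation can be sketched as plain bit manipulation: a byte arriving on eight parallel lines is shifted out one bit at a time, and reassembled on the receiving side. The function names are ours, and least-significant-bit-first order is an assumption of the sketch, not a property of any particular network standard.

```python
def to_serial(byte_value, width=8):
    """Serialize one parallel word into a list of bits, LSB first,
    as a NIC might shift a byte out onto the network medium."""
    return [(byte_value >> i) & 1 for i in range(width)]

def from_serial(bits):
    """Reassemble serially received bits (LSB first) into a parallel word,
    the reverse translation for incoming messages."""
    word = 0
    for i, bit in enumerate(bits):
        word |= bit << i
    return word

bits = to_serial(0b10110010)   # outgoing: parallel byte -> serial bit stream
restored = from_serial(bits)   # incoming: serial bits -> parallel byte
```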
41. For any computer, a network interface card (NIC)
performs two crucial tasks:
Establishes and manages the computer’s network
connection
Translates digital computer data into signals
(appropriate for the networking medium) for outgoing
messages, and translates signals into digital
computer data for incoming messages
NIC establishes a link between a computer and a
network, and then manages that link
42. Installing a Network Interface Card (NIC) into any
version of Windows is usually easy, as most NICs today
are completely plug and play. For the most part, this is
simply a matter of turning off the PC, installing the card,
and turning the system back on.
The only trick is remembering to use the disk that comes
with the NIC, even if Windows offers to use its own
drivers. All the issues discussed with respect to installing
devices also hold true for NICs: just because they're
network cards doesn't mean anything else special needs to
happen.
43. Types of computer adapter card
The Video Card
The Sound Card
TV card
Network card
44.
Table 1: Expansion Slot and Card Compatibility (shows you which
expansion cards go with which I/O buses)

Card type          8-bit ISA   16-bit ISA   EISA   VL-bus   PCI   AGP   USB
8-bit ISA cards    yes         yes          yes    yes      no    no    no
16-bit ISA cards   no          yes          yes    yes      no    no    no
EISA cards         yes         yes          yes    no       no    no    no
VL-bus cards       no          no           no     yes      no    no    no
PCI cards          no          no           no     no       yes   no    no
AGP cards          no          no           no     no       no    yes   no
USB devices        no          no           no     no       no    no    yes
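The compatibility table can be encoded as a simple lookup, so card-to-slot questions become dictionary queries. The data mirrors the table above; the structure and names are ours.

```python
# Card-to-slot compatibility, transcribed from the table above.
# Keys: card type -> set of slot types the card fits into.
COMPAT = {
    "8-bit ISA":  {"8-bit ISA", "16-bit ISA", "EISA", "VL-bus"},
    "16-bit ISA": {"16-bit ISA", "EISA", "VL-bus"},
    "EISA":       {"8-bit ISA", "16-bit ISA", "EISA"},
    "VL-bus":     {"VL-bus"},
    "PCI":        {"PCI"},
    "AGP":        {"AGP"},
    "USB":        {"USB"},
}

def fits(card, slot):
    """True if the given card type can be used in the given slot type,
    according to the table."""
    return slot in COMPAT[card]

ok = fits("8-bit ISA", "16-bit ISA")   # True per the table
no = fits("PCI", "AGP")                # False: PCI cards need PCI slots
```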
45. 1. What are the main differences and similarities
among the available adapter cards?
2. Why do we need to install adapter cards on a
motherboard?
3. Is there any interrelation among adapter cards,
expansion slots, and buses? If yes, describe it.