
- 1. Von Neumann Architecture
- 2. Goals <ul><li>Understand how the von Neumann architecture is constructed. </li></ul><ul><li>Understand how the von Neumann architecture works. </li></ul><ul><li>Understand how to program in a basic assembly language. </li></ul>
- 3. Where Are We? <ul><li>We have spent several weeks now, building our understanding of computer organization. </li></ul><ul><li>We started with transistors, moved up a level to gates, and then up a level to circuits. </li></ul><ul><li>Our next step is a key one: we will combine circuits together to build functional units of computer operation. </li></ul>
- 4. Ladder of Abstraction <ul><li>It is worth reminding ourselves how we got here: </li></ul><ul><ul><li>Climbing the ladder of abstraction, the process is to take the functional units of one level, combine them, and carry the combined unit up to the next level of abstraction. </li></ul></ul><ul><ul><li>As we move to this new level, the level of sub-components, we need to remember that it is built from the components of the previous levels. </li></ul></ul>
- 5. Sub-Components <ul><li>At the outset, computers required hardware changes to work on new problems; some historians say that this early stage of “programming” was wiring. </li></ul><ul><li>Clearly, requiring hardware changes with each new programming task was time-consuming, error-prone, and costly. </li></ul><ul><li>If you recall from the movie The Machine That Changed the World , one of the key contributors to computer evolution was John von Neumann. </li></ul>
- 6. The Stored Program Concept <ul><li>Von Neumann’s proposal was to store the program instructions right along with the data </li></ul><ul><li>This may sound trivial, but it represented a profound paradigm shift </li></ul><ul><li>The stored program concept was proposed about fifty years ago; to this day, it is the fundamental architecture that fuels computers. </li></ul><ul><li>Think about how amazing that is, given the short shelf life of computer products and technologies… </li></ul>
- 7. The Stored Program Concept and its Implications <ul><li>The Stored Program concept had several technical ramifications: </li></ul><ul><ul><li>Four key sub-components operate together to make the stored program concept work </li></ul></ul><ul><ul><li>The process that moves information through the sub-components is called the “fetch execute” cycle </li></ul></ul><ul><ul><li>Unless otherwise indicated, program instructions are executed in sequential order </li></ul></ul>
- 8. Four Sub-Components <ul><li>There are four sub-components in von Neumann architecture: </li></ul><ul><ul><li>Memory </li></ul></ul><ul><ul><li>Input/Output (called “IO”) </li></ul></ul><ul><ul><li>Arithmetic-Logic Unit </li></ul></ul><ul><ul><li>Control Unit </li></ul></ul><ul><li>While only 4 sub-components are called out, there is a fifth key player in this operation: a bus, or wire, that connects the components together and over which data flows from one sub-component to another. </li></ul><ul><li>Let’s look at each sub-component in more detail … </li></ul>
- 9. Memory <ul><li>As you already know, there are several different flavors of memory </li></ul><ul><li>Why isn’t just one kind used? </li></ul><ul><li>Each type of memory represents cost/benefit tradeoffs between capability and cost … </li></ul>
- 10. Memory Types: RAM <ul><li>RAM is typically volatile memory (meaning it doesn’t retain its settings once power is removed). </li></ul><ul><li>RAM is an array of cells, each with a unique address. </li></ul><ul><li>A cell is the minimum unit of access. Originally, this was 8 bits taken together as a byte. In today’s computers, larger word-sized cells (16 bits or more) are typical. </li></ul><ul><li>RAM gets its name from its access performance. In RAM, theoretically, it takes the same amount of time to access any memory cell, regardless of its location within the memory bank (“random” access). </li></ul>
- 11. Memory Types: ROM <ul><li>ROM gets its name from its cell-protection feature: this type of memory cell can be read from, but not written to (“read-only memory”). </li></ul><ul><li>Unlike RAM, ROM is non-volatile; it retains its settings after power is removed. </li></ul><ul><li>ROM is more expensive than RAM, and to protect this investment, you only store critical information in ROM … </li></ul>
- 12. Memory Types: Registers <ul><li>There is a third, key type of memory in every computer – registers. </li></ul><ul><li>Register cells are powerful, costly, and physically located close to the heart of computing. </li></ul><ul><li>We will see later that among the registers, several of them are the main participants in the fetch execute cycle. </li></ul>
- 13. Memory Types: Other <ul><li>Modern computers include other forms of memory, such as cache memory. </li></ul><ul><li>Remember, each memory type represents a different set of cost/benefit tradeoffs. </li></ul><ul><li>The study of memory organizations and access schemes is an active area within Computer Science. In your lifetime, you should expect to see numerous innovations in memory types and capabilities. </li></ul>
- 14. What’s Up with Memory <ul><li>Regardless of the type of memory, several concepts apply in this key component. </li></ul><ul><li>Cell size or cell width: a key concept within memory is how many individual memory cells (which we now know are switches!) are addressed at a time. </li></ul><ul><li>At a minimum, this is a byte (8 bits) in today’s computers, but to support all data types and operations, cell size can be larger (a word, for instance, at 16 bits). </li></ul>
- 15. What’s Up with Memory <ul><li>Cell address and contents: another key concept is to recognize that all cells have an address, and can contain data contents. </li></ul><ul><li>The cell address is a label (like a zip code) that identifies a particular cell. </li></ul><ul><li>The cell contents are whatever data is stored at a given address location. </li></ul>
- 16. What’s Up with Memory <ul><li>Two other key concepts in the study of memory are memory size and address space. </li></ul><ul><li>Memory size refers to the number of addressable cells – how many different memory locations a computer has. </li></ul><ul><li>Address space refers to the range of addressable cell labels. Cell labels begin with the number 0. So, if you had a computer with a memory size of 2^n cells, its addresses would range from 0 up to 2^n - 1. </li></ul>
- 17. What’s Up with Memory <ul><li>Don’t forget that the memory labels are themselves binary numbers! </li></ul><ul><li>One of the special registers we talked about earlier is a register whose job it is to hold address locations. </li></ul><ul><li>Engineers need to know how big to make this register, so that it could hold the address of any given memory location, even the one with the biggest address. </li></ul>
- 18. What’s Up with Memory <ul><li>The special register is called the MAR – the memory address register . </li></ul><ul><li>For a machine with 2^n addressable cells, the MAR must be able to hold a number as big as 2^n - 1. </li></ul>
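To make this arithmetic concrete, here is a minimal Python sketch of the relationship between memory size, largest address, and MAR width. The function names `mar_bits` and `largest_address` are my own, chosen for illustration.

```python
# Sketch: for a memory of 2**n cells, addresses run 0 .. 2**n - 1,
# so the MAR needs exactly n bits to hold the largest address.
def mar_bits(memory_size):
    """Number of MAR bits needed, assuming memory_size is a power of two."""
    return memory_size.bit_length() - 1

def largest_address(memory_size):
    """The biggest address label: memory_size - 1."""
    return memory_size - 1

print(mar_bits(4), largest_address(4))      # 4 cells -> 2-bit MAR, top address 3
print(mar_bits(256), largest_address(256))  # 256 cells -> 8-bit MAR, top address 255
```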
- 19. Memory Operations <ul><li>Two basic operations occur within this subcomponent: a fetch operation , and a store . </li></ul><ul><li>The fetch operation: </li></ul><ul><ul><li>A cell address is loaded into the MAR. </li></ul></ul><ul><ul><li>The address is decoded, which means that through circuitry, a specific cell is located. </li></ul></ul><ul><ul><li>The data contents contained within that cell are copied into another special register, called the Memory Data Register (MDR) . </li></ul></ul><ul><ul><li>Note that this operation is non-destructive – that is, the data contents are copied, but not destroyed. </li></ul></ul>
- 20. Memory Operations <ul><li>The second memory operation is called a store . </li></ul><ul><ul><li>The fetch is like a read operation; the store is like a write operation </li></ul></ul><ul><ul><li>In the store, the address of the cell into which data is going to be stored is moved to the MAR and decoded. </li></ul></ul><ul><ul><li>Contents from yet another special register, called an accumulator , are copied into the cell location (held in the MAR). </li></ul></ul><ul><ul><li>This operation is destructive, meaning that whatever data was originally contained at that memory location is overwritten by the value copied from the accumulator. </li></ul></ul>
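The two memory operations can be sketched in Python. The class and attribute names (`Memory`, `mar`, `mdr`) are illustrative only, not taken from any real machine's documentation.

```python
# Minimal sketch of the fetch (read-like) and store (write-like) operations.
class Memory:
    def __init__(self, size):
        self.cells = [0] * size
        self.mar = 0   # Memory Address Register: holds the cell address
        self.mdr = 0   # Memory Data Register: holds the fetched contents

    def fetch(self, address):
        self.mar = address
        self.mdr = self.cells[self.mar]  # non-destructive: cell keeps its value
        return self.mdr

    def store(self, address, accumulator):
        self.mar = address
        self.cells[self.mar] = accumulator  # destructive: old contents overwritten

mem = Memory(4)
mem.store(2, 61)
print(mem.fetch(2))  # 61; cell 2 still holds 61 afterwards
```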
- 21. I/O: Input and Output <ul><li>There is both a human-machine interface and a machine-machine interface to I/O. </li></ul><ul><ul><li>Examples of the human-machine interface include a keyboard, screen or printer. </li></ul></ul><ul><ul><li>Examples of the machine-machine interface include things like mass storage and secondary storage devices. </li></ul></ul><ul><li>Input and output devices are the least standardized of the various sub-components, which means that you have to pay extra special attention to make certain that your input or output devices are compatible with your machine. </li></ul>
- 22. The ALU <ul><li>The third component in the von Neumann architecture is called the Arithmetic Logic Unit. </li></ul><ul><li>This is the subcomponent that performs the arithmetic and logic operations for which we have been building parts. </li></ul><ul><li>The ALU is the “brain” of the computer. </li></ul>
- 23. The ALU <ul><li>It houses the special memory locations, called registers, which we have already considered. </li></ul><ul><li>The ALU is important enough that we will come back to it later. For now, just realize that it contains the circuitry to perform addition, subtraction, multiplication and division, as well as logical comparisons (less than, equal to and greater than). </li></ul>
- 24. Control Unit <ul><li>The last of the four subcomponents is the Control Unit. </li></ul><ul><li>The control unit is the work horse that drives the fetch and execute cycle. </li></ul><ul><li>Remember we said that in memory, a cell address is loaded into the MAR – it is the control unit that figures out which address is loaded, and what operation is to be performed with the data moved to the MDR. </li></ul><ul><li>We will come back and look in detail at how the Control Unit performs this task. </li></ul>
- 25. Stored Program Concept <ul><li>We saw that it was von Neumann’s organizational scheme that was adopted in computer architecture. </li></ul><ul><li>This architecture was largely driven by the decision to store program code along with data. </li></ul><ul><li>Once this decision was made, several by-product engineering requirements emerged. </li></ul>
- 26. Engineering Needs <ul><li>We indicated that for cost/benefit reasons, data and program instructions are translated into binary form and stored in RAM. </li></ul><ul><li>As the information is needed, it is moved to the high speed, costlier registers where it is processed. </li></ul><ul><li>This process occurs in a cycle: fetch information to the registers, and execute it there, fetch the next information from the registers, and execute it, etc. </li></ul><ul><li>The cycle is referred to as the “fetch execute” cycle. </li></ul>
- 27. Engineering Needs <ul><li>Once we know which data we should be working on, we know how to build circuitry to perform processing operations. (We can add, subtract, divide and compare.) </li></ul><ul><li>One of the things we glossed over in our first discussion, however, is how we figure out which data to work on, and exactly which operation to perform. </li></ul><ul><li>Specifically, this is what we need to be able to do: </li></ul><ul><ul><li>Build a circuit that will allow us to take whatever number is in the MAR, and use this number to access a specific memory cell. </li></ul></ul><ul><ul><li>Build a circuit that will allow us to choose which data results should be placed in the MDR. </li></ul></ul><ul><li>This magic happens in the Control Unit. </li></ul>
- 28. Choosing a Memory Location <ul><li>Let’s tackle the initial requirement first: how do we determine which address location holds the data on which we need to operate? </li></ul><ul><li>Remember we said that there is a special register, called the MAR, that holds an address -- a binary number. </li></ul><ul><li>We need some circuitry to read that number, and based on its value, find exactly the correct address location to read. </li></ul><ul><li>The circuit is called a decoder … </li></ul>
- 29. Decoder Circuits <ul><li>The MAR is connected to a decoder circuit. </li></ul><ul><li>This circuitry will identify the correct memory cell. </li></ul><ul><li>Let’s figure out how this works … </li></ul>
- 30. Decoder Circuits <ul><li>Initially, think about the decoder circuit as a black box. </li></ul><ul><li>Going into the black box are n input lines (which emerge from the MAR). </li></ul><ul><li>Going out of the black box are 2^n output lines (with each output line connecting to a specific memory cell in RAM). </li></ul>
- 31. Decoder Circuit: An Example <ul><li>Let’s start small: imagine a computer with 4 memory cells in RAM; since 2^n = 4, we have n = 2. </li></ul><ul><li>The MAR will need to be n bits wide, and the biggest number it would have to hold is the top of the address range, 2^n - 1 = 3. </li></ul><ul><li>Let’s build the decoder circuit … </li></ul>
- 32. First, the Problem Statement <ul><li>Design a circuit with 2 input lines (a, b) and 4 output lines (d0, d1, d2, d3) </li></ul><ul><li>The output lines are uniquely high if and only if the following conditions are met: </li></ul><ul><ul><li>d0 is high IFF both inputs are low </li></ul></ul><ul><ul><li>d1 is high IFF a is low and b is high </li></ul></ul><ul><ul><li>d2 is high IFF a is high and b is low </li></ul></ul><ul><ul><li>d3 is high IFF both a and b are high </li></ul></ul>
- 33. Next, the Truth Table

  | a | b | d0 | d1 | d2 | d3 |
  |---|---|----|----|----|----|
  | 0 | 0 | 1  | 0  | 0  | 0  |
  | 0 | 1 | 0  | 1  | 0  | 0  |
  | 1 | 0 | 0  | 0  | 1  | 0  |
  | 1 | 1 | 0  | 0  | 0  | 1  |
- 34. Next, the Boolean Sub-Expressions <ul><li>For those places in our output chart with high values (1’s), we have the following a, b input conditions: </li></ul><ul><ul><li>d0 = ~a * ~b </li></ul></ul><ul><ul><li>d1 = ~a * b </li></ul></ul><ul><ul><li>d2 = a * ~b </li></ul></ul><ul><ul><li>d3 = a * b </li></ul></ul>
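The four sub-expressions translate directly into code. Here is a Python sketch of the 2-to-4 decoder; the function name `decode` is my own.

```python
# 2-to-4 decoder following the Boolean expressions above:
# d0 = ~a*~b, d1 = ~a*b, d2 = a*~b, d3 = a*b.
def decode(a, b):
    """Return (d0, d1, d2, d3); exactly one line is high per input pair."""
    d0 = int(not a and not b)
    d1 = int(not a and b)
    d2 = int(a and not b)
    d3 = int(a and b)
    return (d0, d1, d2, d3)

print(decode(0, 1))  # MAR contents 01 -> line d1 fires: (0, 1, 0, 0)
```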
- 35. Circuit Diagram – Decoder Circuit [Diagram: inputs a and b from the MAR feed the decoder; output lines d0–d3 lead toward the MDR]
- 36. Decoder Circuit Example <ul><li>Assume the contents of the MAR are 01. </li></ul><ul><li>Which line would fire? </li></ul><ul><ul><li>Remember the Boolean expression: (~a • b) </li></ul></ul><ul><li>This would cause the d1 line to fire, which in turn is connected to the d1 memory location. </li></ul><ul><li>The d1 memory location is read non-destructively, and a copy of its contents (let’s assume the contents equal 61), is copied to the MDR. </li></ul>
- 37. Circuit Diagram – Decoder Circuit [Diagram: with MAR contents 01, line d1 fires and selects the memory cell holding 61]
- 38. A 4-to-16 Decoder [Diagram]
- 39. Scaling Issue <ul><li>We have built a viable decoder circuit, and illustrated how this control circuit translates the address label contained in the MAR into the contents of the referenced location. </li></ul><ul><li>At some point, however, the model isn’t scalable – too much space is required for a linear layout. </li></ul><ul><li>Computers utilize a 2-dimensional approach in decoder operation, using a row/column MAR addressing scheme to identify specific address locations. </li></ul><ul><li>A 2-D grid is illustrated on the next slide … </li></ul>
- 40. 2-D Memory Access
- 41. 2-D Memory Operation
- 42. One Problem Solved <ul><li>Well, we have figured out how to use circuitry to decode the contents of the MAR to identify a specific memory location. </li></ul><ul><li>We still need to figure out how to interpret the results of the ALU circuitry to load a correct process answer into the MDR. </li></ul>
- 43. Multiplexor Circuits <ul><li>Remember, we said that the ALU actually performs all operational processing on 2 given inputs. </li></ul><ul><li>Thus, if the inputs are 4 and 2, calculations for 4 + 2, 4 * 2, 4-2, 4 >= 2, etc. are all performed in parallel. </li></ul><ul><li>What we need to be able to do is to select the correct answer from among all those calculated. </li></ul>
- 44. Multiplexor Circuits <ul><li>A multiplexor is a circuit with 2^n input lines and 1 output line. </li></ul><ul><li>The function it serves is to select exactly one of its input lines and copy the binary value on that input line to its single output line. </li></ul>
- 45. Multiplexor Magic <ul><li>The multiplexor chooses the correct input line to pass through to the output line by using a second set of n lines called selector lines. </li></ul><ul><li>So, the total number of input lines in a multiplexor is 2^n + n. </li></ul><ul><li>The first set of input lines are numbered from 0 to 2^n - 1, and each selector line can be set to either a 1 or a 0. </li></ul><ul><li>Taken together, the n selector lines can encode the number of any input line. </li></ul><ul><li>Thus, the binary number that appears on the selector lines can be interpreted as the identification number of the input line to be passed through. </li></ul>
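The selector logic can be sketched in Python; the function name `multiplex` and the bit ordering (most significant selector bit first) are assumptions for illustration.

```python
# Sketch of a 2**n-to-1 multiplexor: the selector lines, read as a binary
# number, pick which input line is copied to the single output line.
def multiplex(inputs, selectors):
    """inputs: 2**n binary values; selectors: n bits, most significant first."""
    index = 0
    for bit in selectors:      # interpret the selector lines as a binary number
        index = index * 2 + bit
    return inputs[index]       # pass that input line through to the output

print(multiplex([0, 1, 1, 0], [1, 0]))  # selectors 10 -> input line 2 -> 1
```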
- 46. Multiplexor Circuit
- 47. Where We’ve Been <ul><li>We have been touring the von Neumann architecture of 4 sub-components. </li></ul><ul><li>We have figured out how to build the appropriate circuitry to perform arithmetic and logic operations on the data contained at specific memory locations. </li></ul><ul><li>What we don’t know how to do is to figure out which arithmetic or logic operations need to be performed and in what order. </li></ul>
- 48. The Control Unit <ul><li>The mastermind behind these final pieces of our operational model is the Control Unit </li></ul><ul><li>It is the Control Unit that fuels the stored program concept </li></ul><ul><li>To do its job, the Control Unit has several tools </li></ul><ul><ul><li>Special memory registers </li></ul></ul><ul><ul><li>“ Wired” understanding of an Instruction Set </li></ul></ul>
- 49. Toolset <ul><li>Let’s look at the toolset first, and then how it is deployed </li></ul><ul><li>Special Memory Registers </li></ul><ul><ul><li>The Control Unit must keep track of where it is within a program, and what it should do next </li></ul></ul><ul><ul><li>Two special registers are used to accomplish this: </li></ul></ul><ul><ul><ul><li>A program counter, typically referred to as a PC, holds the address of the NEXT instruction to be executed </li></ul></ul></ul><ul><ul><ul><li>An instruction register, typically referred to as an IR, holds an instruction fetched from memory </li></ul></ul></ul>
- 50. Toolset (Two) <ul><li>Along with the special registers, the Control Unit utilizes special circuitry, called an instruction decoder </li></ul><ul><li>The instruction decoder is a typical decoder circuit, and its purpose is to read an instruction from the IR, and activate the appropriate circuit line </li></ul>
- 51. How this Works <ul><li>Remember, we are trying to figure out how the stored program concept works. </li></ul><ul><li>In this model, the program and the data upon which it operates are stored in memory locations. </li></ul><ul><li>We know how to encode the data. </li></ul><ul><li>We need to learn how to encode the programming instructions. </li></ul>
- 52. The Instruction Set <ul><li>At the heart of all programming are a few, building block instructions. </li></ul><ul><li>The set of instructions is remarkably small, and particular to a given processor. </li></ul><ul><li>The power of the instruction set is that by identifying a definite, bounded, simple task, an instruction can be executed with appreciable speed – typically within a few billionths of a second. </li></ul>
- 53. In Binary (Of Course!) <ul><li>The instruction set is something like the ASCII alphabet encoding scheme. </li></ul><ul><li>The specific instructions are given unique binary codes. </li></ul><ul><li>Obviously, the IR must be big enough to hold any instruction within the numbered set. </li></ul>
- 54. Sample Instructions <ul><li>Instructions fall into several main categories: data transfer, arithmetic, comparisons, and branching </li></ul><ul><li>Some typical instructions might include: </li></ul><ul><ul><li>Load </li></ul></ul><ul><ul><li>Store </li></ul></ul><ul><ul><li>Move </li></ul></ul><ul><ul><li>Add </li></ul></ul><ul><ul><li>Compare </li></ul></ul><ul><ul><li>Branch </li></ul></ul><ul><ul><li>Halt </li></ul></ul><ul><li>Each of these instructions would be given a unique code, such as 000, 001, 010, etc. </li></ul>
- 55. Sample Instruction Format <ul><li>The format of a typical instruction is in machine code, and looks something like this: </li></ul>| Operation Code | Address Field 1 | Address Field 2 | Etc. |
- 56. Interpreting an Instruction <ul><li>Imagine a machine with an instruction set of 8 individual instructions, numbered from 000 to 111. </li></ul><ul><li>Our IR would need to be 3 bits big. </li></ul><ul><li>More realistically, a modern PC is likely to have a much larger instruction set, but we will keep our model simple. </li></ul>
- 57. Typical Instructions <ul><li>Imagine the following instruction </li></ul><ul><ul><li>100 01010 10011 </li></ul></ul><ul><li>Let’s say the 100 means to perform an ADD operation. </li></ul><ul><li>The 01010 would refer to the address location of the first data element to be added. </li></ul><ul><li>The 10011 would refer to the address location of the second data element to be added. </li></ul><ul><li>So… this instruction would mean: add the contents of address location 01010 to the contents of address location 10011. </li></ul>
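Splitting such an instruction into its fields can be sketched in Python. Only the ADD = 100 mapping comes from the slide; the opcode table and function name are hypothetical.

```python
# Decode a sample instruction: a 3-bit opcode followed by two 5-bit address
# fields. OPCODES is an illustrative table; only ADD = 100 is from the text.
OPCODES = {"100": "ADD"}

def parse_instruction(word):
    """Split a '100 01010 10011'-style machine word into its three fields."""
    opcode, addr1, addr2 = word.split()
    return OPCODES[opcode], int(addr1, 2), int(addr2, 2)

print(parse_instruction("100 01010 10011"))  # ('ADD', 10, 19)
```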
- 58. Following the Fetch Execute Cycle <ul><li>Let’s trace an execution cycle </li></ul><ul><li>To make the trace more manageable, we will manipulate instructions whose format has the instruction itself in abbreviated words instead of binary codes </li></ul><ul><li>Remember, though, that the instruction set entries are really encoded into binary format just like everything else! </li></ul>
- 59. Fetch, Decode, Execute <ul><li>Imagine that you have written a computer program that has been translated into a set of machine language instructions and placed into memory </li></ul><ul><li>Each instruction will pass through three phases: fetch, decode and execute </li></ul><ul><li>These 3 steps will be repeated, over and over for every instruction until a HALT instruction is reached (or a fatal error occurs) </li></ul><ul><li>Let’s step through the cycle </li></ul>
- 60. Phase One: Fetch <ul><li>The Control Unit gets the next instruction from memory and moves it into the Instruction Register (IR) </li></ul><ul><li>This is accomplished by the following steps: </li></ul><ul><ul><li>The address in the Program Counter (PC) is moved to the MAR </li></ul></ul><ul><ul><li>A fetch is initiated, which brings the contents of the cell referenced by the PC to the MDR </li></ul></ul><ul><ul><li>Move the instruction from the MDR to the Instruction Register (IR) for decoding </li></ul></ul><ul><ul><li>Increment the PC to point to the next instruction </li></ul></ul>
- 61. Phase Two: Decode <ul><li>The operation code portion of the contents of the instruction register is read from the IR </li></ul><ul><li>The binary number is fed to a decoder circuit, which activates the appropriate circuitry for the operation </li></ul>
- 62. Phase Three: Execution Phase <ul><li>Once the decoder identifies what operational circuitry should be activated, the particular instruction set member is executed </li></ul><ul><li>Here is a typical series of steps carried out to perform a LOAD operation (which moves contents from main memory to a register) </li></ul><ul><ul><li>Send the address held in the IR to the MAR </li></ul></ul><ul><ul><li>Fetch the contents of the cell whose address is now in the MAR and place the contents into the MDR </li></ul></ul><ul><ul><li>Copy the contents of the MDR into some designated register </li></ul></ul><ul><li>Obviously, each instruction set member will require a unique series of steps to be carried out </li></ul>
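The three phases above can be sketched as a toy simulator in Python. The tuple instruction format, the opcode names (LOAD, ADD, STORE, HALT), and the single-accumulator design are simplifying assumptions, not a description of any real machine.

```python
# Toy fetch-decode-execute loop: instructions and data share one memory list,
# per the stored program concept.
def run(memory):
    pc, acc = 0, 0                 # program counter and accumulator
    while True:
        ir = memory[pc]            # fetch: PC -> MAR, cell -> MDR -> IR
        pc += 1                    # increment the PC to the next instruction
        op, operand = ir           # decode: split opcode and address field
        if op == "LOAD":           # execute: cell contents -> register
            acc = memory[operand]
        elif op == "ADD":
            acc += memory[operand]
        elif op == "STORE":
            memory[operand] = acc
        elif op == "HALT":
            return acc

# Program: load cell 4, add cell 5, store into cell 6, halt. Cells 4-6 are data.
prog = [("LOAD", 4), ("ADD", 5), ("STORE", 6), ("HALT", 0), 7, 35, 0]
print(run(prog), prog[6])  # 42 42
```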
- 63. Completing a Program <ul><li>When one instruction has been executed, the fetch execute cycle moves to the next address </li></ul><ul><li>It can do this because the PC was incremented to reflect the address location of the next executable address </li></ul><ul><li>In this way, a series of machine level instructions can be executed, one at a time </li></ul>
- 64. Why Not Quit Here? <ul><li>We could, actually </li></ul><ul><li>The process we just outlined is a fairly accurate description of how early programming occurred </li></ul><ul><li>Programmers wrote lines of code that looked something like this: </li></ul><ul><ul><li>010 11001101 01010111 </li></ul></ul>
- 65. Too Error Prone <ul><li>As you can imagine, writing computer programs in machine language was time-consuming and error prone </li></ul><ul><li>The shortcut that we took in our example – substituting English-like abbreviations for the operation codes – was soon adopted by computer programmers, and the era of assembly language coding was ushered in </li></ul><ul><li>We will look at this next level of abstraction in our next lecture </li></ul>
- 66. Questions?
