The document discusses the representation and encoding of instructions in a computer's instruction set. It covers the following key points:
- Instructions are encoded in binary machine code and represented using fixed-width instruction formats. The MIPS instruction set uses 32-bit instruction words.
- The MIPS instruction set has two main formats: R-format for register-based operations, and I-format for instructions with an immediate operand or memory address.
- Instruction fields encode information like the operation code, register operands, immediate constants, and function codes. Register numbers are encoded to identify specific registers.
- Hexadecimal notation is used to represent the binary instruction encodings compactly. Instruction decoding interprets the bit patterns according to these instruction formats.
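The R-format layout described above (a 6-bit opcode, three 5-bit register fields, a 5-bit shift amount, and a 6-bit function code) can be sketched in Python. The field values shown are the standard MIPS encodings for `add $t0, $t1, $t2`:

```python
def encode_r_format(op, rs, rt, rd, shamt, funct):
    """Pack the six MIPS R-format fields into one 32-bit instruction word."""
    return (op << 26) | (rs << 21) | (rt << 16) | (rd << 11) | (shamt << 6) | funct

# add $t0, $t1, $t2  ->  opcode 0, funct 0x20; registers $t0 = 8, $t1 = 9, $t2 = 10
word = encode_r_format(op=0, rs=9, rt=10, rd=8, shamt=0, funct=0x20)
print(f"{word:08x}")  # 012a4020
```

Printing the word in hexadecimal shows why that base is convenient: each hex digit maps to exactly four bits of the encoding.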
The document summarizes key aspects of the MIPS architecture, including data types, registers, data declarations, instructions, and control structures. It describes MIPS as a 32-bit architecture that performs all operations on registers. There are 32 general-purpose registers, each addressable by number or by name. Load and store instructions transfer data between registers and memory, arithmetic instructions operate on registers, and control structures allow conditional and unconditional branching.
This document describes instruction codes and the instruction cycle in a computer. It discusses how instruction codes specify operations for the computer to perform. The instruction cycle has four phases: 1) fetch an instruction from memory, 2) decode the instruction, 3) read the effective address if indirect addressing is used, and 4) execute the instruction. It then describes the fetch and decode phases in more detail, including transferring the program counter value to the address register to fetch the instruction from memory location and loading the instruction register.
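The fetch and decode phases described above can be illustrated with a minimal Python sketch. The 16-bit word layout (top 4 bits opcode, low 12 bits address) and the memory contents are hypothetical:

```python
# Toy machine: each 16-bit word has a 4-bit opcode and a 12-bit address field.
memory = {0: 0x1005, 5: 0x0042}   # instruction at address 0, operand at address 5
pc = 0                            # program counter

# Fetch phase: AR <- PC, IR <- M[AR], PC <- PC + 1
ar = pc
ir = memory[ar]
pc = pc + 1

# Decode phase: split the instruction register into its fields
opcode  = ir >> 12
address = ir & 0x0FFF
print(opcode, address, pc)  # 1 5 1
```

Whether the `address` field is then used directly or dereferenced once more depends on whether indirect addressing is in effect, as the summary notes.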
The document is a chapter from the textbook "The 8051 Microcontroller and Embedded Systems: Using Assembly and C" by Muhammad Ali Mazidi, Janice Gillispie Mazidi, and Rolin D. McKinlay. The chapter discusses numbering and coding systems such as binary, decimal, hexadecimal and their conversions. It also covers addition, subtraction and the ASCII code system.
Lec 12-15: MIPS Instruction Set Processor (Mayank Roy)
The document describes the design of a single-cycle MIPS processor datapath and control unit. It begins by introducing the MIPS instruction set and identifying common functions across instructions. It then discusses the benefits and drawbacks of single-cycle versus multi-cycle instruction execution. The document proceeds to show how the datapath would be designed for different MIPS instruction types like R-type, load, store, and branch instructions. It combines the individual datapaths into an overall single-cycle datapath and discusses the need for control signals and units. In the end, it summarizes the key advantages of multi-cycle designs over single-cycle and previews pipelining as an advanced multi-cycle technique.
The control unit generates the timing and control signals that coordinate all operations in the computer. It directs the entire system to carry out instructions by communicating with the ALU and memory. Control units are implemented with either a hardwired or a microprogrammed design. A hardwired control unit generates signals with logic circuits; it is fast but complex, and any change requires rewiring. A microprogrammed control unit generates signals from a microprogram; it is easier to modify but slower.
This document discusses arithmetic operations for computers including addition, subtraction, and overflow. It provides examples of adding and subtracting numbers in binary format. It explains the basic rules for addition and subtraction of binary numbers. It also discusses overflow that can occur during arithmetic operations and how overflow is handled differently for signed versus unsigned integers in MIPS computers. Overflow is detected using exceptions in MIPS.
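The signed-overflow condition the summary refers to (two same-signed operands producing a result of the opposite sign) can be sketched in Python for 32-bit values. The operands are taken as unsigned 32-bit bit patterns:

```python
MASK = 0xFFFFFFFF  # keep results within 32 bits

def add32(a, b):
    """32-bit addition that also reports signed (two's complement) overflow."""
    result = (a + b) & MASK
    sa, sb, sr = a >> 31, b >> 31, result >> 31
    # Signed overflow: both operands share a sign bit, but the result's differs.
    overflow = (sa == sb) and (sr != sa)
    return result, overflow

assert add32(0x7FFFFFFF, 1) == (0x80000000, True)   # INT_MAX + 1 wraps around
assert add32(5, 7) == (12, False)
```

This mirrors the MIPS distinction the document mentions: the same adder hardware serves both signed and unsigned addition, and only the signed variants raise an exception on this condition.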
The control unit (CU) is a component of a computer's central processing unit (CPU) that directs the operation of the processor. It tells the computer's memory, arithmetic/logic unit and input and output devices how to respond to a program's instructions.
This document discusses computer architecture concepts such as machine language, assembly language, and instruction set architecture. It then describes different addressing architectures including memory-to-memory, register-to-register, and register-memory. Various addressing modes are also defined, such as implied, immediate, indirect, relative, and indexed addressing modes. Finally, the document briefly discusses stack instructions and the use of a stack pointer register.
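A few of the addressing modes listed above can be sketched in Python as different interpretations of one address field. The memory contents, PC value, and index register value are all hypothetical:

```python
# Toy memory and one 'address field'; each mode interprets it differently.
memory    = {10: 40, 12: 9, 40: 7}
field     = 10       # the address field of the instruction
pc        = 30       # program counter (hypothetical value)
index_reg = 2        # index register (hypothetical value)

immediate = field                      # the field itself is the operand
indirect  = memory[memory[field]]      # field holds the address of the address
relative  = memory[pc + field]         # effective address = PC + field
indexed   = memory[field + index_reg]  # effective address = field + index reg

print(immediate, indirect, relative, indexed)  # 10 7 7 9
```

Implied addressing has no field to interpret at all, which is why it is absent from the sketch.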
The document discusses the five main units of computer hardware: input, storage, operation, control, and output. It describes each unit's function and role, which is analogous to parts of the human body. The storage unit is divided into main storage and auxiliary storage. The document also provides details on integrated circuits, semiconductor memory including RAM and ROM, and different types of RAM and ROM.
The document provides details about the Pentium II processor, including its 64-bit data bus, 32-bit address size supporting 4GB of addressable memory, and dual integer pipelines. It describes changes in Pentium II such as moving the level 2 cache closer to the microprocessor to reduce costs and improve efficiency. Finally, it outlines new instructions for fast system calls and restoring MMX state that improved performance.
This document discusses assembler programming for the Atmega328P microcontroller. It begins by explaining the language options for programming the microcontroller, including higher-level languages like C/C++ and assembly language. It describes why learning assembly language is important, particularly for understanding the microcontroller's architecture and writing optimized code. The facilities needed for assembly language programming are outlined, including a text editor, assembler, debugger/simulator, and programmer. An overview of the Atmega328P's instruction set is provided, including classifications and addressing modes. Examples of several common instructions like LDI, ADD, MOV, COM, and JMP are described.
This project detects a voice signal from a microphone and then plays it back in reverse using a DSP Starter Kit (DSK). The work was done at IIT Patna.
This document provides an overview of key computer hardware components and concepts. It begins with explanations of computer architecture including Von Neumann architecture and differences between RISC and CISC instruction sets. It then covers the central processing unit (CPU) and its components. Other sections describe the motherboard, types of memory including RAM, ROM, and cache memory. Additional topics include storage devices like hard disk drives and solid state drives, as well as concepts such as multitasking, multiprocessing, and multithreading.
This document discusses machine instructions and how programs are executed at the machine level. It covers number systems, data representation, memory addressing, instruction types, instruction execution, and addressing modes. Binary numbers are used in computers and represented as vectors. Negative numbers can be represented using sign-and-magnitude, one's complement, or two's complement methods. Memory is made up of addresses that store bits, bytes, and words of data. Instructions perform operations like data transfer, arithmetic, and program flow control. Programs are executed through sequential instruction fetch and execution, using techniques like looping and conditional branching. Addressing modes specify how operands are accessed in instructions.
This document describes a lecture on basic computer organization and design. It discusses:
- The basic components of a computer including a processor, memory, registers, and bus.
- The instruction format and addressing modes of instructions in the basic computer.
- The registers in the basic computer including the program counter, address register, and accumulator.
- How the common bus is used to transfer data between registers and memory.
- The basic computer's instruction set including memory and register reference instructions.
- How the control unit decodes instructions and generates control signals to implement operations.
The document discusses computer arithmetic and binary numbers. It begins by explaining why computers use the binary number system instead of decimal. The key reasons are that electronic components can only represent two states, binary is simpler for circuit design, and arithmetic is possible with binary. The document then covers the basic arithmetic operations of addition, subtraction, multiplication, and division in binary. It provides rules for each operation and examples to illustrate how to perform the calculations in binary. Finally, it discusses complementary subtraction and the additive method for multiplication and division.
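The school-book addition rules the document describes (0+0=0, 0+1=1, 1+1=0 with a carry of 1) can be sketched in Python over bit strings:

```python
def binary_add(a: str, b: str) -> str:
    """Add two binary strings digit by digit, propagating the carry."""
    width = max(len(a), len(b))
    a, b = a.zfill(width), b.zfill(width)
    carry, out = 0, []
    for x, y in zip(reversed(a), reversed(b)):
        s = int(x) + int(y) + carry
        out.append(str(s % 2))   # the sum bit
        carry = s // 2           # the carry into the next column
    if carry:
        out.append('1')
    return ''.join(reversed(out))

print(binary_add('1011', '0110'))  # 10001  (11 + 6 = 17)
```

Subtraction by the complementary method the document mentions reduces to this same addition routine once the subtrahend has been complemented.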
1. The document discusses different methods of representing numeric and non-numeric data in computers, including numeric systems like binary, octal, hexadecimal, and different representations of fixed-point and floating-point numbers.
2. It covers topics like signed number representations using signed magnitude, 1's complement, and 2's complement, and describes how arithmetic operations like addition and subtraction are performed using these methods.
3. Floating-point number representation is also discussed, where numbers are represented in the form of a sign, mantissa, and exponent to allow for a wider range of values.
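The two's-complement representation from point 2 can be sketched in Python, assuming an 8-bit width:

```python
def to_twos_complement(value: int, bits: int = 8) -> str:
    """Encode a signed integer as a two's-complement bit string of the given width."""
    return format(value & ((1 << bits) - 1), f'0{bits}b')

def from_twos_complement(pattern: str) -> int:
    """Decode a two's-complement bit string back to a signed integer."""
    bits = len(pattern)
    value = int(pattern, 2)
    return value - (1 << bits) if pattern[0] == '1' else value

print(to_twos_complement(-5))            # 11111011
print(from_twos_complement('11111011'))  # -5
```

The masking step is the whole trick: Python's unbounded integers are reduced modulo 2^bits, which is exactly the two's-complement encoding of a negative value.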
Instruction Codes and Computer Registers (Sanjeev Patel)
The document discusses instruction codes and computer registers. Instruction codes are made up of an opcode and address that tell the computer what operation to perform. Computer registers store important data and instructions, including the program counter, address register, instruction register, temporary register, data register, accumulator, input register, and output register. These registers perform functions like holding memory operands, instructions, temporary data, addresses, and input/output characters.
This document provides a history of the evolution of programming languages from the 19th century to present day. It discusses early languages like the Analytical Engine, assembly languages, and early high-level languages like FORTRAN, Lisp, and COBOL developed in the 1950s-60s. Popular languages that followed include BASIC, Pascal, C, C++, Perl, Python, PHP, Java, JavaScript, C#, and Scala. More recently introduced languages aimed at big data and mobile/web development include Go, Dart, Kotlin, and Swift. The development of programming languages continues in order to meet new computing needs and paradigms.
The document discusses the basic components and organization of a computer. It describes how instructions are stored in memory as binary codes and read by the control unit to execute operations. The key components discussed are:
- Instruction codes specify sequences of operations through an operation code and memory address.
- Computers have instruction sets and store both instructions and data in memory.
- Registers like the accumulator, instruction register, and program counter are used to store and manipulate data and control instruction flow.
- A common bus system connects registers and memory to transfer information needed for operations.
The control unit is responsible for controlling the operations of all parts of the CPU. It decodes instructions, manages data flow between components, and issues control signals to coordinate execution. The main elements of the control unit are the decoder, the timer/clock, and the control logic circuits. The decoder determines the required actions for each instruction, the timer ensures operations are performed at the right time, and the control logic circuits create and send control signals to components like the ALU and registers.
The document discusses interfacing a 4x4 keypad with a microcontroller. It explains that a keypad works by conducting when a button is pressed. It then describes two methods for scanning the keypad as a matrix to determine which key is pressed. The first method grounds rows and columns individually and checks for input on the other pins. The second method grounds a single row pin and checks the column pins for input. Both allow determining the row and column of the pressed key.
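The second scanning method (ground one row at a time and check the column pins) can be simulated in Python. The wiring model and the pressed-key position here are hypothetical stand-ins for the real GPIO reads:

```python
# Simulated 4x4 keypad: read_columns(row) stands in for reading the column
# GPIO pins while one row pin is driven low.
PRESSED = (2, 3)  # hypothetical pressed key at row 2, column 3

def read_columns(driven_row):
    cols = [1, 1, 1, 1]            # pull-ups: idle column lines read high
    if PRESSED[0] == driven_row:
        cols[PRESSED[1]] = 0       # pressed key connects the low row to its column
    return cols

def scan_keypad():
    """Ground one row at a time and look for a column pulled low."""
    for row in range(4):
        for col, level in enumerate(read_columns(row)):
            if level == 0:
                return row, col
    return None  # no key pressed

print(scan_keypad())  # (2, 3)
```

On real hardware the same loop runs continuously, usually with a short debounce delay after a key is first detected.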
Computer Arithmetics (Computer Organisation & Arithmetics) ppt (SuryaKumarSahani)
This presentation explains various computer arithmetic operations, including binary addition, subtraction, multiplication, and division, as well as the corresponding floating-point operations.
The document outlines the basics of processor operation, including the instruction cycle, the representation of machine instructions, and the types of instructions. It discusses how the processor clock synchronizes activities and how the program counter increments to fetch each subsequent instruction from memory. The core stages of the instruction cycle are fetch, decode, and execute: the processor fetches an instruction and its data from memory, decodes the operation, and then carries it out.
This document provides an overview of basic non-pipelined CPU architecture, including:
- The main components of a CPU including registers, ALU, and control unit.
- Different CPU architecture types such as accumulator, stack, register-memory, and register-register architectures.
- The fetch-decode-execute cycle that CPUs follow to process instructions step-by-step.
- Approaches for implementing the control unit including hardwired and microprogrammed approaches.
- How to calculate parameters like CPI (cycles per instruction) and MIPS (millions of instructions per second) to evaluate CPU performance.
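The CPI and MIPS calculations mentioned in the last point can be sketched in Python. The instruction mix and clock rate below are assumed values, not figures from the document:

```python
# Hypothetical instruction mix: (fraction of instructions, cycles per instruction)
mix = [(0.5, 1), (0.3, 2), (0.2, 4)]   # e.g. ALU ops, loads/stores, branches
clock_hz = 200e6                       # assumed 200 MHz clock

cpi  = sum(frac * cycles for frac, cycles in mix)   # weighted-average CPI
mips = clock_hz / (cpi * 1e6)                       # millions of instructions/sec

print(round(cpi, 2), round(mips, 1))  # 1.9 105.3
```

The MIPS rating follows directly from CPI: at a fixed clock rate, halving the average cycles per instruction doubles the instruction throughput.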
Register Transfer Language & Its Micro Operations (Lakshya Sharma)
The document discusses register transfer language and micro-operations in digital systems. It describes (1) how register transfer language can be used to describe the sequence of micro-operations involved in any computer function, (2) the four main types of micro-operations - register transfer, arithmetic, logic, and shift micro-operations, giving examples of each, and (3) how register transfers and bus transfers are represented in register transfer language.
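The four micro-operation types can be sketched in Python over hypothetical 8-bit register values, with the register-transfer notation shown in the comments:

```python
MASK = 0xFF                      # model 8-bit registers
R1, R2 = 0b11001010, 0b01010110  # hypothetical register contents

transfer = R2                    # R1 <- R2           (register transfer)
total    = (R1 + R2) & MASK      # R3 <- R1 + R2      (arithmetic micro-op)
masked   = R1 & R2               # R3 <- R1 AND R2    (logic micro-op)
shifted  = (R1 << 1) & MASK      # R1 <- shl R1       (shift micro-op)

print(f"{total:08b} {masked:08b} {shifted:08b}")  # 00100000 01000010 10010100
```

Masking with `0xFF` models the fixed register width: the carry out of the addition and the bit shifted out on the left are simply dropped, as they would be in hardware.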
This document summarizes Cal State Fullerton's strategy for piloting e-textbooks on campus. It discusses three phases of pilots conducted from 2012-2013 to test integration of e-textbooks into the learning management system and assess student and faculty experiences. The pilots revealed that while students enjoyed certain e-textbook features, adoption is still slow due to high costs and lack of publisher content. The university aims to develop a sustainable enterprise e-textbook model through closer partnerships between vendors, publishers, and academic programs.
The document discusses how traditional approaches to ensuring academic integrity were designed for analog assessment in the past but are ill-suited for today's digital environment of frequent, varied assessment opportunities. It proposes redefining academic integrity to focus on demonstrated student learning and mastery of outcomes through integrated, aligned learning and assessment. Emerging models emphasize a dynamic record of learning over time and assessment of ongoing application of learning through collected work rather than high-stakes exams. This represents a shift from legacy regulatory approaches to those focused on open, digital and continuous assessment.
This document discusses lecture capture and its implementation at Montana State University. It covers the reasons for adopting lecture capture, including making lectures available to students who miss class. It also discusses selecting a lecture capture tool based on simplicity and affordability. The document outlines piloting lecture capture with a small number of classes and emphasizing short video segments over full lectures. It stresses the importance of faculty development, including training faculty on using the tools and exploring new pedagogical approaches. Lessons learned include starting small, emphasizing appropriate uses of lecture capture, and getting positive feedback from faculty and students.
Cfactor Works Inc. provides social business solutions using social web technologies and strategies. Their approach involves developing strategies in partnership with clients, providing products to address communications, communities, and workforce needs, and integrating these solutions to meet specific business objectives. They aim to unify brands, enable effective communications across enterprises, provide insights into workforce productivity and talent, and ultimately help businesses achieve greater impact and outcomes through social technologies.
The document summarizes efforts to develop a State Authorization Reciprocity Agreement (SARA) to make state authorization processes for distance education more efficient. SARA would allow states to recognize each other's approvals of institutions, rather than requiring separate approval in each state where an institution enrolls students. The Presidents' Forum and Council of State Governments are leading the initiative with input from regional compacts and other stakeholders. The goal is a voluntary agreement that streamlines processes while maintaining oversight of quality and consumer protections.
The document discusses notable skateboarders and tricks. It mentions Jared Stein tweeting about agile development and rapid prototyping. It then lists several famous skateboarders like Rodney Mullen, Chris Cole, Tony Hawk, Shaun White, and Jay Smith along with some of their signature tricks. It also mentions Christian Hosoi performing the Christ Air trick and includes a photo of a skatepark built from recycled urban materials with steps, tile and a fake drain by Andrés Aguiluz Rios. Finally, it notes Rodney Mullen performing darkslides.
This document summarizes the key events and actions related to the implementation of federal and state regulations for distance education programs operating across state lines. It notes that in 2010, final federal regulations were issued requiring institutions to comply with any state requirements to offer distance education in that state. It also discusses subsequent clarification letters from the Department of Education and a lawsuit filed by a trade association challenging the regulations.
This document summarizes a panel discussion on copyright fair use updates from November 2010. The panelists discussed new rights and ongoing confusion related to copyright fair use in education. They provided an overview of copyright basics and fair use guidelines, ways to determine copyright status, exemptions under the Digital Millennium Copyright Act, and recent copyright cases and articles. Resources on copyright and fair use issues from various universities were also cited.
Chapter 2 instructions language of the computerBATMUNHMUNHZAYA
The document discusses the MIPS instruction set architecture. It describes the different types of instructions including arithmetic, logical, and conditional instructions. It explains the register-based and memory-based operands, immediate operands, and how instructions are encoded in binary machine code. Key aspects like simplicity, regularity, and optimization for common cases are emphasized in the design of the MIPS ISA.
This document discusses how computers represent and manipulate data at the lowest levels. It covers:
1) How integers are represented in binary and how signed and unsigned integers work with 2's complement representation.
2) Data types like integers, real numbers, characters that are mapped to binary representations. Issues like byte ordering and memory addressing are also covered.
3) How instructions are encoded in binary machine code and the different instruction formats used in MIPS, including the R-format and I-format. Logical operations like shifting, AND, and OR that manipulate bits are also introduced.
This document contains class notes on Chapter 2 of a CMPS290 textbook. It covers:
- The instruction set and machine language that allows a computer's hardware to be commanded. Different computers have different instruction sets like MIPS assembly language.
- How computer instructions are encoded in binary machine code and represented in various instruction formats like R-format and I-format in MIPS.
- Core concepts for understanding the language of computers like registers, memory addressing, data types, logical and arithmetic operations, and how high-level code gets compiled to low-level machine instructions.
This document discusses computer instruction sets and the MIPS instruction set architecture. It covers the different types of instructions in MIPS including R-format for register arithmetic, I-format for immediate operations and load/store, and J-format for jumps. The key components of MIPS instructions, such as opcode, register fields, and immediate constants are described. MIPS instructions are encoded in 32-bit binary formats to perform common arithmetic, logical, and data movement operations in a regular and simple manner.
This document discusses basic blocks, conditional operations, branch instructions, procedure calling conventions, and synchronization instructions in MIPS. It covers topics like signed vs unsigned comparisons, register usage in procedure calls, leaf and non-leaf procedures, stack usage, and atomic load-linked/store-conditional instructions. Examples are provided to illustrate string copying, procedure calls, and far branching code generation.
The document discusses the translation process from high-level programming languages to machine-readable code. It describes how programs written in high-level languages like C are first compiled to assembly code, then linked and loaded into memory for execution. The MIPS instruction set architecture is used as an example, with details about its instruction formats, register usage, and basic operations like addition and subtraction.
Chapter 02 instructions language of the computerBảo Hoang
Here are the steps to translate the MIPS assembly language into machine language:
1. lw $t0,300($t1) # Load A[300] into register $t0
Opcode: 35 (load word)
rs: 13 ($t1)
rt: 8 ($t0)
offset: 300
2. add $t2,$s2,$t0 # Calculate h + A[300] and put in $t2
Opcode: 0 (add)
rs: 17 ($s2)
rt: 8 ($t0)
rd: 9 ($t2)
3. sw $t2,300($t1) # Store result back into
The document discusses instruction set architecture and the MIPS architecture. It introduces key concepts like assembly language, registers, instructions, and how assembly code maps to high-level languages like C. Specific MIPS instructions for arithmetic like addition and subtraction are presented, showing how a single line of C code can require multiple assembly instructions. Register usage and conventions are also covered.
The document discusses computer architecture and the MIPS instruction set, including assembly language, machine language, operands like registers and memory addressing, and the different instruction formats like R-type for register operands and I-type for immediate operands. It also covers topics like big-endian and little-endian memory addressing and how MIPS instructions are encoded in binary machine language.
This document provides an overview of the C programming language. It discusses the features of C including that it is simple, versatile, supports separate compilation of functions, and can be used for systems programming. It also covers C data types like integers, floating-point numbers, and how they are represented. Additional topics include variable names, comments, input/output functions like printf() and scanf(), and conventions for writing readable C code. The document serves as an introduction to key concepts in C programming.
This document provides an overview of the C programming language. It discusses various features of C like it being a simple, versatile language that allows for separate compilation of functions. It also covers different data types in C like integer, floating-point, characters, and arrays. Examples are given to illustrate how integers of different sizes like char, short, int, long are represented. The document also discusses floating-point number representation according to the IEEE 754 standard. Finally, it briefly introduces common input/output functions in C like printf() and scanf() and covers coding conventions.
This document provides an overview of the C programming language. It discusses the features of C including that it is simple, versatile, supports separate compilation of functions, and can be used for systems programming. It also covers C data types like integers, floating-point numbers, and how they are represented. Additional topics include variable names, comments, input/output functions like printf() and scanf(), and conventions for writing readable C code. The document serves as an introduction to programming in C.
1) The document discusses different levels of programming languages including machine language, assembly language, and high-level languages. Assembly language uses symbolic instructions that directly correspond to machine language instructions.
2) It describes the components of the Intel 8086 processor including its 16-bit registers like the accumulator, base, count, and data registers as well as its segment, pointer, index, and status flag registers.
3) Binary numbers can be represented in signed magnitude, one's complement, or two's complement form. Two's complement is commonly used in modern computers as it allows for efficient addition and subtraction of binary numbers.
This document provides an overview of high-level languages, assembly language, and machine language. It discusses how a processor executes high-level language statements and that assembly language is machine-specific while high-level languages are machine-independent. It also describes data representation in memory and registers, how assembly language instructions work at a basic level, and examples of assembly language programs and their machine code representations.
Chapter 1
Syllabus
Catalog Description: Computer structure, machine representation of data,
addressing and indexing, computation and control instructions, assembly
language and assemblers; procedures (subroutines) and data segments,
linkages and subroutine calling conventions, loaders; practical use of an
assembly language for computer implementation of illustrative examples.
Course Goals
0 Knowledge of the basic structure of microcomputers - registers, mem-
ory, addressing I/O devices, etc.
1 Knowledge of most non-privileged hardware instructions for the Ar-
chitecture being studied.
2 Ability to write small programs in assembly language
3 Knowledge of computer representations of data, and how to do simple
arithmetic in binary & hexadecimal, including conversions
4 Being able to implementing a moderately complicated algorithm in
assembler, with emphasis on efficiency.
5 Knowledge of procedure calling conventions and interfacing with high-
level languages.
Optional Text: Kip Irvine, Assembly Language for the IBM PC, Prentice
Hall, 4th or 5th edition
1
Additional References: Intel and DOS API documentation as presented
in Intel publications and online at www.x86.org; lecture notes (to be sup-
plied as we go).
Prerequisites by Topic. Working knowledge of some programming lan-
guage (102/103: C/C++); Minimal programming experience
Major Topics Covered in the Course:
1 Low-level and high-level languages; why learn assembler?
2 How does one study a new computer: the CPU, memory, addressing
modes, operation modes.
3 History of the Intel family of microprocessors.
4-5 Registers; simple arithmetic instructions; byte order; Arithmetic and
logical operations.
6 Implementing longer integer type support; carry and overflow.
7 Shifts, multiplication and division.
8 Memory layout.
9 Direct video memory access; discussion of the first project.
10 Assembler syntax; how to use the tools.
11-13 Conditional & unconditional jumps; loops; emulating high-level lan-
guage constructions; Stack; call and return; procedures
14-15 String instructions: effcient memory-to-memory operations.
16 Interrupts overview: interrupt table; how do interrupts work; classif-
cation.
17 Summary of the most important interrupts.
18-20 DOS interrupt; File I/O functions; file-copy program; discussion of
the second project
21 Interrupt handlers; keyboard drivers; timer-driven processes; viruses
and virus-protection software.
2
22 Debug interrupts; how do debuggers and profilers work.
23-24 (Optional).interfacing with high level languages; Protected mode fun-
damentals
Grading The grading is based on two projects, midterm project is 49%
and the final is 51%. Please note that the projects are individual, submitting
projects that are similar to submissions of others and/or are essentially
downloads from the Web would result in a fail.
Office Hours My hours this term for CSc 210 will be 3:45 ¶Ł 4:45 on
Mondays.
Zoom links:
11am https://ccny.zoom.us/j/8 ...
The document discusses assembly languages and the MIPS instruction set architecture. It introduces key concepts of assembly language such as registers, memory addressing, and instructions. It describes the MIPS instruction format, which uses three operands of destination, source1, and source2. The document outlines the basic MIPS instructions for arithmetic, data transfer, and control flow. It explains how assembly code maps to low-level machine instructions and hardware.
1. The document discusses different stages in the compilation and execution process of a computer program, including assembly, linking, loading, and running the program.
2. It describes how an assembler translates program code into machine instructions, and a linker merges object modules and resolves symbol references to produce an executable.
3. The executable is loaded into memory by reading headers, copying segments, and setting up arguments and registers before jumping to the startup routine.
The document discusses MIPS architecture memory organization and registers. It explains that memory is used to store data and instructions, and is divided into text, data, and stack segments. It also describes the MIPS register set, which includes 32 general purpose registers used for arithmetic operations as well as special purpose registers like $ra for return addresses. Basic MIPS instructions like load, store, arithmetic, and jumps are explained along with addressing modes like immediate, register, and memory addressing.
This document provides an overview of MIPS machine language instructions. It describes the key components of MIPS instructions, including register operands, immediate operands, and different instruction formats. It explains basic arithmetic, load/store, branch, and jump instructions. It also discusses MIPS register organization, memory organization including byte ordering, the fetch-execute cycle, and MIPS addressing modes including register, word-relative, and PC-relative addressing. The document is serving as a lecture on MIPS instruction set architecture for a computer organization and architecture course.
This document discusses the basic hardware features of the PC, including the processor, memory, registers, keyboard, monitor, disk drives, and other external components. It explains bits and bytes as the fundamental units of digital information storage. Various number systems are covered, including binary, decimal, hexadecimal, and how to convert between them. Storage sizes and numeric data representation in computers are also summarized.
In the rapidly evolving landscape of technologies, XML continues to play a vital role in structuring, storing, and transporting data across diverse systems. The recent advancements in artificial intelligence (AI) present new methodologies for enhancing XML development workflows, introducing efficiency, automation, and intelligent capabilities. This presentation will outline the scope and perspective of utilizing AI in XML development. The potential benefits and the possible pitfalls will be highlighted, providing a balanced view of the subject.
We will explore the capabilities of AI in understanding XML markup languages and autonomously creating structured XML content. Additionally, we will examine the capacity of AI to enrich plain text with appropriate XML markup. Practical examples and methodological guidelines will be provided to elucidate how AI can be effectively prompted to interpret and generate accurate XML markup.
Further emphasis will be placed on the role of AI in developing XSLT, or schemas such as XSD and Schematron. We will address the techniques and strategies adopted to create prompts for generating code, explaining code, or refactoring the code, and the results achieved.
The discussion will extend to how AI can be used to transform XML content. In particular, the focus will be on the use of AI XPath extension functions in XSLT, Schematron, Schematron Quick Fixes, or for XML content refactoring.
The presentation aims to deliver a comprehensive overview of AI usage in XML development, providing attendees with the necessary knowledge to make informed decisions. Whether you’re at the early stages of adopting AI or considering integrating it in advanced XML development, this presentation will cover all levels of expertise.
By highlighting the potential advantages and challenges of integrating AI with XML development tools and languages, the presentation seeks to inspire thoughtful conversation around the future of XML development. We’ll not only delve into the technical aspects of AI-powered XML development but also discuss practical implications and possible future directions.
Essentials of Automations: The Art of Triggers and Actions in FMESafe Software
In this second installment of our Essentials of Automations webinar series, we’ll explore the landscape of triggers and actions, guiding you through the nuances of authoring and adapting workspaces for seamless automations. Gain an understanding of the full spectrum of triggers and actions available in FME, empowering you to enhance your workspaces for efficient automation.
We’ll kick things off by showcasing the most commonly used event-based triggers, introducing you to various automation workflows like manual triggers, schedules, directory watchers, and more. Plus, see how these elements play out in real scenarios.
Whether you’re tweaking your current setup or building from the ground up, this session will arm you with the tools and insights needed to transform your FME usage into a powerhouse of productivity. Join us to discover effective strategies that simplify complex processes, enhancing your productivity and transforming your data management practices with FME. Let’s turn complexity into clarity and make your workspaces work wonders!
Generating privacy-protected synthetic data using Secludy and MilvusZilliz
During this demo, the founders of Secludy will demonstrate how their system utilizes Milvus to store and manipulate embeddings for generating privacy-protected synthetic data. Their approach not only maintains the confidentiality of the original data but also enhances the utility and scalability of LLMs under privacy constraints. Attendees, including machine learning engineers, data scientists, and data managers, will witness first-hand how Secludy's integration with Milvus empowers organizations to harness the power of LLMs securely and efficiently.
Threats to mobile devices are more prevalent and increasing in scope and complexity. Users of mobile devices desire to take full advantage of the features
available on those devices, but many of the features provide convenience and capability but sacrifice security. This best practices guide outlines steps the users can take to better protect personal devices and information.
TrustArc Webinar - 2024 Global Privacy SurveyTrustArc
How does your privacy program stack up against your peers? What challenges are privacy teams tackling and prioritizing in 2024?
In the fifth annual Global Privacy Benchmarks Survey, we asked over 1,800 global privacy professionals and business executives to share their perspectives on the current state of privacy inside and outside of their organizations. This year’s report focused on emerging areas of importance for privacy and compliance professionals, including considerations and implications of Artificial Intelligence (AI) technologies, building brand trust, and different approaches for achieving higher privacy competence scores.
See how organizational priorities and strategic approaches to data security and privacy are evolving around the globe.
This webinar will review:
- The top 10 privacy insights from the fifth annual Global Privacy Benchmarks Survey
- The top challenges for privacy leaders, practitioners, and organizations in 2024
- Key themes to consider in developing and maintaining your privacy program
Things to Consider When Choosing a Website Developer for your Website | FODUUFODUU
Choosing the right website developer is crucial for your business. This article covers essential factors to consider, including experience, portfolio, technical skills, communication, pricing, reputation & reviews, cost and budget considerations and post-launch support. Make an informed decision to ensure your website meets your business goals.
Unlock the Future of Search with MongoDB Atlas_ Vector Search Unleashed.pdfMalak Abu Hammad
Discover how MongoDB Atlas and vector search technology can revolutionize your application's search capabilities. This comprehensive presentation covers:
* What is Vector Search?
* Importance and benefits of vector search
* Practical use cases across various industries
* Step-by-step implementation guide
* Live demos with code snippets
* Enhancing LLM capabilities with vector search
* Best practices and optimization strategies
Perfect for developers, AI enthusiasts, and tech leaders. Learn how to leverage MongoDB Atlas to deliver highly relevant, context-aware search results, transforming your data retrieval process. Stay ahead in tech innovation and maximize the potential of your applications.
#MongoDB #VectorSearch #AI #SemanticSearch #TechInnovation #DataScience #LLM #MachineLearning #SearchTechnology
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2024/06/building-and-scaling-ai-applications-with-the-nx-ai-manager-a-presentation-from-network-optix/
Robin van Emden, Senior Director of Data Science at Network Optix, presents the “Building and Scaling AI Applications with the Nx AI Manager,” tutorial at the May 2024 Embedded Vision Summit.
In this presentation, van Emden covers the basics of scaling edge AI solutions using the Nx tool kit. He emphasizes the process of developing AI models and deploying them globally. He also showcases the conversion of AI models and the creation of effective edge AI pipelines, with a focus on pre-processing, model conversion, selecting the appropriate inference engine for the target hardware and post-processing.
van Emden shows how Nx can simplify the developer’s life and facilitate a rapid transition from concept to production-ready applications.He provides valuable insights into developing scalable and efficient edge AI solutions, with a strong focus on practical implementation.
Climate Impact of Software Testing at Nordic Testing DaysKari Kakkonen
My slides at Nordic Testing Days 6.6.2024
Climate impact / sustainability of software testing discussed on the talk. ICT and testing must carry their part of global responsibility to help with the climat warming. We can minimize the carbon footprint but we can also have a carbon handprint, a positive impact on the climate. Quality characteristics can be added with sustainability, and then measured continuously. Test environments can be used less, and in smaller scale and on demand. Test techniques can be used in optimizing or minimizing number of tests. Test automation can be used to speed up testing.
Fueling AI with Great Data with Airbyte WebinarZilliz
This talk will focus on how to collect data from a variety of sources, leveraging this data for RAG and other GenAI use cases, and finally charting your course to productionalization.
Removing Uninteresting Bytes in Software FuzzingAftab Hussain
Imagine a world where software fuzzing, the process of mutating bytes in test seeds to uncover hidden and erroneous program behaviors, becomes faster and more effective. A lot depends on the initial seeds, which can significantly dictate the trajectory of a fuzzing campaign, particularly in terms of how long it takes to uncover interesting behaviour in your code. We introduce DIAR, a technique designed to speedup fuzzing campaigns by pinpointing and eliminating those uninteresting bytes in the seeds. Picture this: instead of wasting valuable resources on meaningless mutations in large, bloated seeds, DIAR removes the unnecessary bytes, streamlining the entire process.
In this work, we equipped AFL, a popular fuzzer, with DIAR and examined two critical Linux libraries -- Libxml's xmllint, a tool for parsing xml documents, and Binutil's readelf, an essential debugging and security analysis command-line tool used to display detailed information about ELF (Executable and Linkable Format). Our preliminary results show that AFL+DIAR does not only discover new paths more quickly but also achieves higher coverage overall. This work thus showcases how starting with lean and optimized seeds can lead to faster, more comprehensive fuzzing campaigns -- and DIAR helps you find such seeds.
- These are slides of the talk given at IEEE International Conference on Software Testing Verification and Validation Workshop, ICSTW 2022.
HCL Notes and Domino License Cost Reduction in the World of DLAUpanagenda
Webinar Recording: https://www.panagenda.com/webinars/hcl-notes-and-domino-license-cost-reduction-in-the-world-of-dlau/
The introduction of DLAU and the CCB & CCX licensing model caused quite a stir in the HCL community. As a Notes and Domino customer, you may have faced challenges with unexpected user counts and license costs. You probably have questions on how this new licensing approach works and how to benefit from it. Most importantly, you likely have budget constraints and want to save money where possible. Don’t worry, we can help with all of this!
We’ll show you how to fix common misconfigurations that cause higher-than-expected user counts, and how to identify accounts which you can deactivate to save money. There are also frequent patterns that can cause unnecessary cost, like using a person document instead of a mail-in for shared mailboxes. We’ll provide examples and solutions for those as well. And naturally we’ll explain the new licensing model.
Join HCL Ambassador Marc Thomas in this webinar with a special guest appearance from Franz Walder. It will give you the tools and know-how to stay on top of what is going on with Domino licensing. You will be able lower your cost through an optimized configuration and keep it low going forward.
These topics will be covered
- Reducing license cost by finding and fixing misconfigurations and superfluous accounts
- How do CCB and CCX licenses really work?
- Understanding the DLAU tool and how to best utilize it
- Tips for common problem areas, like team mailboxes, functional/test users, etc
- Practical examples and best practices to implement right away
Unlocking Productivity: Leveraging the Potential of Copilot in Microsoft 365, a presentation by Christoforos Vlachos, Senior Solutions Manager – Modern Workplace, Uni Systems
CAKE: Sharing Slices of Confidential Data on BlockchainClaudio Di Ciccio
Presented at the CAiSE 2024 Forum, Intelligent Information Systems, June 6th, Limassol, Cyprus.
Synopsis: Cooperative information systems typically involve various entities in a collaborative process within a distributed environment. Blockchain technology offers a mechanism for automating such processes, even when only partial trust exists among participants. The data stored on the blockchain is replicated across all nodes in the network, ensuring accessibility to all participants. While this aspect facilitates traceability, integrity, and persistence, it poses challenges for adopting public blockchains in enterprise settings due to confidentiality issues. In this paper, we present a software tool named Control Access via Key Encryption (CAKE), designed to ensure data confidentiality in scenarios involving public blockchains. After outlining its core components and functionalities, we showcase the application of CAKE in the context of a real-world cyber-security project within the logistics domain.
Paper: https://doi.org/10.1007/978-3-031-61000-4_16
In his public lecture, Christian Timmerer provides insights into the fascinating history of video streaming, starting from its humble beginnings before YouTube to the groundbreaking technologies that now dominate platforms like Netflix and ORF ON. Timmerer also presents provocative contributions of his own that have significantly influenced the industry. He concludes by looking at future challenges and invites the audience to join in a discussion.
2. §2.1 Introduction
Instruction Set
The repertoire of instructions of a
computer
Different computers have different
instruction sets
But with many aspects in common
Early computers had very simple
instruction sets
Simplified implementation
Many modern computers also have simple
instruction sets
Chapter 2 — Instructions: Language of the Computer —
3. The MIPS Instruction Set
Used as the example throughout the book
Stanford MIPS commercialized by MIPS
Technologies (www.mips.com)
Large share of embedded core market
Applications in consumer electronics, network/storage
equipment, cameras, printers, …
Typical of many modern ISAs
See MIPS Reference Data tear-out card, and
Appendixes B and E
4. §2.2 Operations of the Computer Hardware
Arithmetic Operations
Add and subtract, three operands
Two sources and one destination
add a, b, c # a gets b + c
All arithmetic operations have this form
Design Principle 1: Simplicity favours
regularity
Regularity makes implementation simpler
Simplicity enables higher performance at
lower cost
5. Arithmetic Example
C code:
f = (g + h) - (i + j);
Compiled MIPS code:
add t0, g, h # temp t0 = g + h
add t1, i, j # temp t1 = i + j
sub f, t0, t1 # f = t0 - t1
6. §2.3 Operands of the Computer Hardware
Register Operands
Arithmetic instructions use register
operands
MIPS has a 32 × 32-bit register file
Use for frequently accessed data
Numbered 0 to 31
32-bit data called a “word”
Assembler names
$t0, $t1, …, $t9 for temporary values
$s0, $s1, …, $s7 for saved variables
Design Principle 2: Smaller is faster
c.f. main memory: millions of locations
7. Register Operand Example
C code:
f = (g + h) - (i + j);
f, …, j in $s0, …, $s4
Compiled MIPS code:
add $t0, $s1, $s2
add $t1, $s3, $s4
sub $s0, $t0, $t1
8. Memory Operands
Main memory used for composite data
Arrays, structures, dynamic data
To apply arithmetic operations
Load values from memory into registers
Store result from register to memory
Memory is byte addressed
Each address identifies an 8-bit byte
Words are aligned in memory
Address must be a multiple of 4
MIPS is Big Endian
Most-significant byte at the least address of a word
c.f. Little Endian: least-significant byte at the least address
9. Memory Operand Example 1
C code:
g = h + A[8];
g in $s1, h in $s2, base address of A in $s3
Compiled MIPS code:
Index 8 requires offset of 32
4 bytes per word
lw $t0, 32($s3) # load word
add $s1, $s2, $t0
offset base register
10. Memory Operand Example 2
C code:
A[12] = h + A[8];
h in $s2, base address of A in $s3
Compiled MIPS code:
Index 8 requires offset of 32
lw $t0, 32($s3) # load word
add $t0, $s2, $t0
sw $t0, 48($s3) # store word
11. Registers vs. Memory
Registers are faster to access than
memory
Operating on memory data requires loads
and stores
More instructions to be executed
Compiler must use registers for variables
as much as possible
Only spill to memory for less frequently used
variables
Register optimization is important!
12. Immediate Operands
Constant data specified in an instruction
addi $s3, $s3, 4
No subtract immediate instruction
Just use a negative constant
addi $s2, $s1, -1
Design Principle 3: Make the common
case fast
Small constants are common
Immediate operand avoids a load instruction
13. The Constant Zero
MIPS register 0 ($zero) is the constant 0
Cannot be overwritten
Useful for common operations
E.g., move between registers
add $t2, $s1, $zero
14. §2.4 Signed and Unsigned Numbers
Unsigned Binary Integers
Given an n-bit number
x = x_{n−1}·2^(n−1) + x_{n−2}·2^(n−2) + … + x_1·2^1 + x_0·2^0
Range: 0 to +2^n − 1
Example
0000 0000 0000 0000 0000 0000 0000 1011₂
= 0 + … + 1×2^3 + 0×2^2 + 1×2^1 + 1×2^0
= 0 + … + 8 + 0 + 2 + 1 = 11₁₀
Using 32 bits
0 to +4,294,967,295
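The unsigned interpretation above can be sketched in Python (the helper name unsigned_value is mine, not from the text):

```python
def unsigned_value(bits: str) -> int:
    # x = x_{n-1}*2^(n-1) + ... + x_1*2^1 + x_0*2^0
    return sum(int(b) << i for i, b in enumerate(reversed(bits)))

# The example from the text: ...1011 in 32 bits is 11
assert unsigned_value("0" * 28 + "1011") == 11
# The n-bit range tops out at 2^n - 1
assert unsigned_value("1" * 32) == 4_294_967_295
```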
15. 2s-Complement Signed Integers
Given an n-bit number
x = −x_{n−1}·2^(n−1) + x_{n−2}·2^(n−2) + … + x_1·2^1 + x_0·2^0
Range: −2^(n−1) to +2^(n−1) − 1
Example
1111 1111 1111 1111 1111 1111 1111 1100₂
= −1×2^31 + 1×2^30 + … + 1×2^2 + 0×2^1 + 0×2^0
= −2,147,483,648 + 2,147,483,644 = −4₁₀
Using 32 bits
–2,147,483,648 to +2,147,483,647
16. 2s-Complement Signed Integers
Bit 31 is sign bit
1 for negative numbers
0 for non-negative numbers
−(−2^(n−1)) can’t be represented
Non-negative numbers have the same unsigned
and 2s-complement representation
Some specific numbers
0: 0000 0000 … 0000
–1: 1111 1111 … 1111
Most-negative: 1000 0000 … 0000
Most-positive: 0111 1111 … 1111
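A Python sketch of the 2s-complement interpretation, covering the specific numbers above (the helper name is mine):

```python
def twos_complement_value(bits: str) -> int:
    # subtract 2^n when the sign bit is set; otherwise same as unsigned
    n = len(bits)
    x = int(bits, 2)
    return x - (1 << n) if bits[0] == "1" else x

assert twos_complement_value("1" * 29 + "100") == -4       # example from the text
assert twos_complement_value("1" * 32) == -1               # all ones
assert twos_complement_value("1" + "0" * 31) == -2_147_483_648  # most negative
assert twos_complement_value("0" + "1" * 31) == 2_147_483_647   # most positive
```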
17. Signed Negation
Complement and add 1
Complement means 1 → 0, 0 → 1
x + ~x = 1111...111₂ = −1
~x + 1 = −x
Example: negate +2
+2 = 0000 0000 … 0010₂
–2 = 1111 1111 … 1101₂ + 1
= 1111 1111 … 1110₂
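Complement-and-add-1 negation, sketched in Python with a 32-bit mask (the helper name negate is mine):

```python
MASK32 = 0xFFFFFFFF

def negate(x: int) -> int:
    # complement every bit, then add 1, keeping 32 bits
    return ((x ^ MASK32) + 1) & MASK32

assert negate(2) == 0xFFFFFFFE     # the bit pattern for -2 from the example
assert negate(negate(2)) == 2      # negating twice gives the original back
```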
18. Sign Extension
Representing a number using more bits
Preserve the numeric value
In MIPS instruction set
addi: extend immediate value
lb, lh: extend loaded byte/halfword
beq, bne: extend the displacement
Replicate the sign bit to the left
c.f. unsigned values: extend with 0s
Examples: 8-bit to 16-bit
+2: 0000 0010 => 0000 0000 0000 0010
–2: 1111 1110 => 1111 1111 1111 1110
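Replicating the sign bit can be sketched in Python (the helper name sign_extend is mine):

```python
def sign_extend(value: int, from_bits: int, to_bits: int) -> int:
    # copy the sign bit (bit from_bits-1) into all the new upper positions
    sign = (value >> (from_bits - 1)) & 1
    if sign:
        value |= ((1 << (to_bits - from_bits)) - 1) << from_bits
    return value

assert sign_extend(0b0000_0010, 8, 16) == 0b0000_0000_0000_0010  # +2
assert sign_extend(0b1111_1110, 8, 16) == 0b1111_1111_1111_1110  # -2
```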
19. §2.5 Representing Instructions in the Computer
Representing Instructions
Instructions are encoded in binary
Called machine code
MIPS instructions
Encoded as 32-bit instruction words
Small number of formats encoding operation code
(opcode), register numbers, …
Regularity!
Register numbers
$t0 – $t7 are reg’s 8 – 15
$t8 – $t9 are reg’s 24 – 25
$s0 – $s7 are reg’s 16 – 23
20. MIPS R-format Instructions
op rs rt rd shamt funct
6 bits 5 bits 5 bits 5 bits 5 bits 6 bits
Instruction fields
op: operation code (opcode)
rs: first source register number
rt: second source register number
rd: destination register number
shamt: shift amount (00000 for now)
funct: function code (extends opcode)
21. R-format Example
op rs rt rd shamt funct
6 bits 5 bits 5 bits 5 bits 5 bits 6 bits
add $t0, $s1, $s2
special $s1 $s2 $t0 0 add
0 17 18 8 0 32
000000 10001 10010 01000 00000 100000
0000 0010 0011 0010 0100 0000 0010 0000₂ = 02324020₁₆
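The field packing above can be checked with a Python sketch (the helper name encode_r is mine; the field values are from the example):

```python
def encode_r(op: int, rs: int, rt: int, rd: int, shamt: int, funct: int) -> int:
    # pack the six R-format fields (6/5/5/5/5/6 bits) into one 32-bit word
    return (op << 26) | (rs << 21) | (rt << 16) | (rd << 11) | (shamt << 6) | funct

# add $t0, $s1, $s2: op=0, rs=$s1(17), rt=$s2(18), rd=$t0(8), shamt=0, funct=32
assert encode_r(0, 17, 18, 8, 0, 32) == 0x02324020
```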
22. Hexadecimal
Base 16
Compact representation of bit strings
4 bits per hex digit
0 0000 4 0100 8 1000 c 1100
1 0001 5 0101 9 1001 d 1101
2 0010 6 0110 a 1010 e 1110
3 0011 7 0111 b 1011 f 1111
Example: eca8 6420
1110 1100 1010 1000 0110 0100 0010 0000
23. MIPS I-format Instructions
op rs rt constant or address
6 bits 5 bits 5 bits 16 bits
Immediate arithmetic and load/store instructions
rt: destination or source register number
Constant: −2^15 to +2^15 − 1
Address: offset added to base address in rs
Design Principle 4: Good design demands good
compromises
Different formats complicate decoding, but allow 32-bit
instructions uniformly
Keep formats as similar as possible
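I-format packing can be sketched the same way (the helper name encode_i is mine; lw has opcode 35 in MIPS):

```python
def encode_i(op: int, rs: int, rt: int, imm: int) -> int:
    # pack the I-format fields (6/5/5/16 bits); the 16-bit immediate
    # is stored in two's complement, hence the mask
    return (op << 26) | (rs << 21) | (rt << 16) | (imm & 0xFFFF)

# lw $t0, 32($s3): op=35, rs=$s3(19), rt=$t0(8), offset=32
assert encode_i(35, 19, 8, 32) == 0x8E680020
# A negative offset still fits the 16-bit field: lw $t0, -4($s3)
assert encode_i(35, 19, 8, -4) == 0x8E68FFFC
```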
24. Stored Program Computers
The BIG Picture
Instructions represented in
binary, just like data
Instructions and data stored
in memory
Programs can operate on
programs
e.g., compilers, linkers, …
Binary compatibility allows
compiled programs to work
on different computers
Standardized ISAs
25. §2.6 Logical Operations
Logical Operations
Instructions for bitwise manipulation
Operation C Java MIPS
Shift left << << sll
Shift right >> >>> srl
Bitwise AND & & and, andi
Bitwise OR | | or, ori
Bitwise NOT ~ ~ nor
Useful for extracting and inserting
groups of bits in a word
26. Shift Operations
op rs rt rd shamt funct
6 bits 5 bits 5 bits 5 bits 5 bits 6 bits
shamt: how many positions to shift
Shift left logical
Shift left and fill with 0 bits
sll by i bits multiplies by 2^i
Shift right logical
Shift right and fill with 0 bits
srl by i bits divides by 2^i (unsigned only)
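A Python sketch of both shifts, with a 32-bit mask standing in for the fixed register width (helper names are mine):

```python
MASK32 = 0xFFFFFFFF

def sll(x: int, i: int) -> int:
    return (x << i) & MASK32   # shift left, fill with 0s, keep 32 bits

def srl(x: int, i: int) -> int:
    return (x & MASK32) >> i   # shift right, fill with 0s

assert sll(5, 3) == 40          # 5 * 2^3
assert srl(40, 3) == 5          # 40 / 2^3 (unsigned)
assert sll(0x80000000, 1) == 0  # the top bit shifts out of the word
```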
27. AND Operations
Useful to mask bits in a word
Select some bits, clear others to 0
and $t0, $t1, $t2
$t2 0000 0000 0000 0000 0000 1101 1100 0000
$t1 0000 0000 0000 0000 0011 1100 0000 0000
$t0 0000 0000 0000 0000 0000 1100 0000 0000
28. OR Operations
Useful to include bits in a word
Set some bits to 1, leave others unchanged
or $t0, $t1, $t2
$t2 0000 0000 0000 0000 0000 1101 1100 0000
$t1 0000 0000 0000 0000 0011 1100 0000 0000
$t0 0000 0000 0000 0000 0011 1101 1100 0000
29. NOT Operations
Useful to invert bits in a word
Change 0 to 1, and 1 to 0
MIPS has a 3-operand NOR instruction
a NOR b == NOT ( a OR b )
nor $t0, $t1, $zero # register $zero always reads as 0
$t1 0000 0000 0000 0000 0011 1100 0000 0000
$t0 1111 1111 1111 1111 1100 0011 1111 1111
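The AND, OR, and NOR examples from the last three slides can be verified in Python using the same register values (t1 and t2 name the slide's $t1 and $t2):

```python
MASK32 = 0xFFFFFFFF
t1 = 0x00003C00   # $t1 from the examples
t2 = 0x00000DC0   # $t2 from the examples

assert t1 & t2 == 0x00000C00               # and: mask bits
assert t1 | t2 == 0x00003DC0               # or: include bits
assert (~(t1 | 0)) & MASK32 == 0xFFFFC3FF  # nor with $zero acts as NOT
```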
30. §2.7 Instructions for Making Decisions
Conditional Operations
Branch to a labeled instruction if a
condition is true
Otherwise, continue sequentially
beq rs, rt, L1
if (rs == rt) branch to instruction labeled L1;
bne rs, rt, L1
if (rs != rt) branch to instruction labeled L1;
j L1
unconditional jump to instruction labeled L1
31. Compiling If Statements
C code:
if (i==j) f = g+h;
else f = g-h;
f, g, … in $s0, $s1, …
Compiled MIPS code:
bne $s3, $s4, Else
add $s0, $s1, $s2
j Exit
Else: sub $s0, $s1, $s2
Exit: …
Assembler calculates addresses
32. Compiling Loop Statements
C code:
while (save[i] == k) i += 1;
i in $s3, k in $s5, address of save in $s6
Compiled MIPS code:
Loop: sll $t1, $s3, 2
add $t1, $t1, $s6
lw $t0, 0($t1)
bne $t0, $s5, Exit
addi $s3, $s3, 1
j Loop
Exit: …
33. Basic Blocks
A basic block is a sequence of instructions
with
No embedded branches (except at end)
No branch targets (except at beginning)
A compiler identifies basic
blocks for optimization
An advanced processor
can accelerate execution
of basic blocks
34. More Conditional Operations
Set result to 1 if a condition is true
Otherwise, set to 0
slt rd, rs, rt
if (rs < rt) rd = 1; else rd = 0;
slti rt, rs, constant
if (rs < constant) rt = 1; else rt = 0;
Use in combination with beq, bne
slt $t0, $s1, $s2 # if ($s1 < $s2)
bne $t0, $zero, L # branch to L
35. Branch Instruction Design
Why not blt, bge, etc?
Hardware for <, ≥, … slower than =, ≠
Combining with branch involves more work
per instruction, requiring a slower clock
All instructions penalized!
beq and bne are the common case
This is a good design compromise
36. Signed vs. Unsigned
Signed comparison: slt, slti
Unsigned comparison: sltu, sltiu
Example
$s0 = 1111 1111 1111 1111 1111 1111 1111 1111
$s1 = 0000 0000 0000 0000 0000 0000 0000 0001
slt $t0, $s0, $s1 # signed
–1 < +1 ⇒ $t0 = 1
sltu $t0, $s0, $s1 # unsigned
+4,294,967,295 > +1 ⇒ $t0 = 0
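The example can be replayed in Python by reinterpreting the same 32-bit pattern both ways (the helper name as_signed is mine):

```python
def as_signed(x: int) -> int:
    # reinterpret a 32-bit pattern as a two's-complement signed value
    return x - (1 << 32) if x & 0x80000000 else x

s0 = 0xFFFFFFFF   # all ones, as in the example
s1 = 0x00000001

assert as_signed(s0) < as_signed(s1)   # slt:  -1 < +1, so $t0 = 1
assert not (s0 < s1)                   # sltu: 4,294,967,295 > 1, so $t0 = 0
```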
37. §2.8 Supporting Procedures in Computer Hardware
Procedure Calling
Steps required
1. Place parameters in registers
2. Transfer control to procedure
3. Acquire storage for procedure
4. Perform procedure’s operations
5. Place result in register for caller
6. Return to place of call
38. Register Usage
$a0 – $a3: arguments (reg’s 4 – 7)
$v0, $v1: result values (reg’s 2 and 3)
$t0 – $t9: temporaries
Can be overwritten by callee
$s0 – $s7: saved
Must be saved/restored by callee
$gp: global pointer for static data (reg 28)
$sp: stack pointer (reg 29)
$fp: frame pointer (reg 30)
$ra: return address (reg 31)
39. Procedure Call Instructions
Procedure call: jump and link
jal ProcedureLabel
Address of following instruction put in $ra
Jumps to target address
Procedure return: jump register
jr $ra
Copies $ra to program counter
Can also be used for computed jumps
e.g., for case/switch statements
40. Leaf Procedure Example
C code:
int leaf_example (int g, int h, int i, int j)
{ int f;
f = (g + h) - (i + j);
return f;
}
Arguments g, …, j in $a0, …, $a3
f in $s0 (hence, need to save $s0 on stack)
Result in $v0
41. Leaf Procedure Example
MIPS code:
leaf_example:
addi $sp, $sp, -4 # make room on stack
sw $s0, 0($sp) # save $s0 on stack
add $t0, $a0, $a1 # procedure body
add $t1, $a2, $a3
sub $s0, $t0, $t1
add $v0, $s0, $zero # result in $v0
lw $s0, 0($sp) # restore $s0
addi $sp, $sp, 4
jr $ra # return
42. Non-Leaf Procedures
Procedures that call other procedures
For nested call, caller needs to save on
the stack:
Its return address
Any arguments and temporaries needed after
the call
Restore from the stack after the call
43. Non-Leaf Procedure Example
C code:
int fact (int n)
{
if (n < 1) return 1;
else return n * fact(n - 1);
}
Argument n in $a0
Result in $v0
44. Non-Leaf Procedure Example
MIPS code:
fact:
addi $sp, $sp, -8 # adjust stack for 2 items
sw $ra, 4($sp) # save return address
sw $a0, 0($sp) # save argument
slti $t0, $a0, 1 # test for n < 1
beq $t0, $zero, L1
addi $v0, $zero, 1 # if so, result is 1
addi $sp, $sp, 8 # pop 2 items from stack
jr $ra # and return
L1: addi $a0, $a0, -1 # else decrement n
jal fact # recursive call
lw $a0, 0($sp) # restore original n
lw $ra, 4($sp) # and return address
addi $sp, $sp, 8 # pop 2 items from stack
mul $v0, $a0, $v0 # multiply to get result
jr $ra # and return
45. Local Data on the Stack
Local data allocated by callee
e.g., C automatic variables
Procedure frame (activation record)
Used by some compilers to manage stack storage
46. Memory Layout
Text: program code
Static data: global
variables
e.g., static variables in C,
constant arrays and strings
$gp initialized to address
allowing ±offsets into this
segment
Dynamic data: heap
E.g., malloc in C, new in
Java
Stack: automatic storage
47. §2.9 Communicating with People
Character Data
Byte-encoded character sets
ASCII: 128 characters
95 graphic, 33 control
Latin-1: 256 characters
ASCII, +96 more graphic characters
Unicode: 32-bit character set
Used in Java, C++ wide characters, …
Most of the world’s alphabets, plus symbols
UTF-8, UTF-16: variable-length encodings
48. Byte/Halfword Operations
Could use bitwise operations
MIPS byte/halfword load/store
String processing is a common case
lb rt, offset(rs) lh rt, offset(rs)
Sign extend to 32 bits in rt
lbu rt, offset(rs) lhu rt, offset(rs)
Zero extend to 32 bits in rt
sb rt, offset(rs) sh rt, offset(rs)
Store just rightmost byte/halfword
49. String Copy Example
C code (naïve):
Null-terminated string
void strcpy (char x[], char y[])
{ int i;
i = 0;
while ((x[i]=y[i])!='\0')
i += 1;
}
Addresses of x, y in $a0, $a1
i in $s0
50. String Copy Example
MIPS code:
strcpy:
addi $sp, $sp, -4 # adjust stack for 1 item
sw $s0, 0($sp) # save $s0
add $s0, $zero, $zero # i = 0
L1: add $t1, $s0, $a1 # addr of y[i] in $t1
lbu $t2, 0($t1) # $t2 = y[i]
add $t3, $s0, $a0 # addr of x[i] in $t3
sb $t2, 0($t3) # x[i] = y[i]
beq $t2, $zero, L2 # exit loop if y[i] == 0
addi $s0, $s0, 1 # i = i + 1
j L1 # next iteration of loop
L2: lw $s0, 0($sp) # restore saved $s0
addi $sp, $sp, 4 # pop 1 item from stack
jr $ra # and return
51. §2.10 MIPS Addressing for 32-Bit Immediates and Addresses
32-bit Constants
Most constants are small
16-bit immediate is sufficient
For the occasional 32-bit constant
lui rt, constant
Copies 16-bit constant to left 16 bits of rt
Clears right 16 bits of rt to 0
lui $s0, 61 # $s0 = 0000 0000 0011 1101 0000 0000 0000 0000
ori $s0, $s0, 2304 # $s0 = 0000 0000 0011 1101 0000 1001 0000 0000
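The lui/ori pair can be sketched in Python (helper names are mine; 61 = 0x3D goes in the upper half, 2304 = 0x900 in the lower):

```python
def lui(constant: int) -> int:
    # copy the 16-bit constant into the upper half; clear the lower half
    return (constant & 0xFFFF) << 16

def ori(reg: int, constant: int) -> int:
    return reg | (constant & 0xFFFF)

s0 = lui(61)            # upper 16 bits = 61
s0 = ori(s0, 2304)      # lower 16 bits = 2304
assert s0 == 0x003D0900 # 61 * 65536 + 2304
```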
52. Branch Addressing
Branch instructions specify
Opcode, two registers, target address
Most branch targets are near the branch
Forward or backward
op rs rt constant or address
6 bits 5 bits 5 bits 16 bits
PC-relative addressing
Target address = PC + offset × 4
PC already incremented by 4 by this time
53. Jump Addressing
Jump (j and jal) targets could be
anywhere in text segment
Encode full address in instruction
op address
6 bits 26 bits
(Pseudo)Direct jump addressing
Target address = PC[31:28] : (address × 4)
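Pseudodirect addressing can be sketched in Python (the helper name is mine; the top 4 bits come from the incremented PC, the rest from the 26-bit field scaled by 4):

```python
def jump_target(pc: int, addr26: int) -> int:
    # keep the top 4 bits of PC+4; the 26-bit field supplies bits 27..2
    return ((pc + 4) & 0xF0000000) | ((addr26 & 0x3FFFFFF) << 2)

assert jump_target(0x10000000, 0x000400) == 0x10001000
```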
55. Branching Far Away
If branch target is too far to encode with
16-bit offset, assembler rewrites the code
Example
beq $s0,$s1, L1
↓
bne $s0,$s1, L2
j L1
L2: …
57. §2.11 Parallelism and Instructions: Synchronization
Synchronization
Two processors sharing an area of memory
P1 writes, then P2 reads
Data race if P1 and P2 don’t synchronize
Result depends on the order of accesses
Hardware support required
Atomic read/write memory operation
No other access to the location allowed between the
read and write
Could be a single instruction
E.g., atomic swap of register ↔ memory
Or an atomic pair of instructions
58. Synchronization in MIPS
Load linked: ll rt, offset(rs)
Store conditional: sc rt, offset(rs)
Succeeds if location not changed since the ll
Returns 1 in rt
Fails if location is changed
Returns 0 in rt
Example: atomic swap (to test/set lock variable)
try: add $t0,$zero,$s4 ;copy exchange value
ll $t1,0($s1) ;load linked
sc $t0,0($s1) ;store conditional
beq $t0,$zero,try ;branch store fails
add $s4,$zero,$t1 ;put load value in $s4
59. §2.12 Translating and Starting a Program
Translation and Startup
Many compilers produce
object modules directly
Static linking
60. Assembler Pseudoinstructions
Most assembler instructions represent
machine instructions one-to-one
Pseudoinstructions: figments of the
assembler’s imagination
move $t0, $t1 → add $t0, $zero, $t1
blt $t0, $t1, L → slt $at, $t0, $t1
                  bne $at, $zero, L
$at (register 1): assembler temporary
61. Producing an Object Module
Assembler (or compiler) translates program into
machine instructions
Provides information for building a complete
program from the pieces
Header: describes contents of object module
Text segment: translated instructions
Static data segment: data allocated for the life of the
program
Relocation info: for contents that depend on absolute
location of loaded program
Symbol table: global definitions and external refs
Debug info: for associating with source code
62. Linking Object Modules
Produces an executable image
1. Merges segments
2. Resolves labels (determines their addresses)
3. Patches location-dependent and external refs
Could leave location dependencies for
fixing by a relocating loader
But with virtual memory, no need to do this
Program can be loaded into absolute location
in virtual memory space
63. Loading a Program
Load from image file on disk into memory
1. Read header to determine segment sizes
2. Create virtual address space
3. Copy text and initialized data into memory
Or set page table entries so they can be faulted in
4. Set up arguments on stack
5. Initialize registers (including $sp, $fp, $gp)
6. Jump to startup routine
Copies arguments to $a0, … and calls main
When main returns, do exit syscall
64. Dynamic Linking
Only link/load library procedure when it is
called
Requires procedure code to be relocatable
Avoids image bloat caused by static linking of
all (transitively) referenced libraries
Automatically picks up new library versions
65. Lazy Linkage
Indirection table
Stub: loads routine ID, jumps to linker/loader
Linker/loader code maps and links the routine on the first call
Subsequent calls go directly to the dynamically mapped code
66. Starting Java Applications
Java code is compiled to bytecodes, a simple
portable instruction set for the JVM
The JVM interprets bytecodes
A JIT compiler compiles bytecodes of “hot”
methods into native code for the host machine
67. §2.13 A C Sort Example to Put It All Together
C Sort Example
Illustrates use of assembly instructions
for a C bubble sort function
Swap procedure (leaf)
void swap(int v[], int k)
{
int temp;
temp = v[k];
v[k] = v[k+1];
v[k+1] = temp;
}
v in $a0, k in $a1, temp in $t0
68. The Procedure Swap
swap: sll $t1, $a1, 2 # $t1 = k * 4
add $t1, $a0, $t1 # $t1 = v+(k*4)
# (address of v[k])
lw $t0, 0($t1) # $t0 (temp) = v[k]
lw $t2, 4($t1) # $t2 = v[k+1]
sw $t2, 0($t1) # v[k] = $t2 (v[k+1])
sw $t0, 4($t1) # v[k+1] = $t0 (temp)
jr $ra # return to calling routine
69. The Sort Procedure in C
Non-leaf (calls swap)
void sort (int v[], int n)
{
int i, j;
for (i = 0; i < n; i += 1) {
for (j = i - 1;
j >= 0 && v[j] > v[j + 1];
j -= 1) {
swap(v,j);
}
}
}
v in $a0, n in $a1, i in $s0, j in $s1
70. The Procedure Body
move $s2, $a0 # save $a0 into $s2 (move params)
move $s3, $a1 # save $a1 into $s3
move $s0, $zero # i = 0
# outer loop test
for1tst: slt $t0, $s0, $s3 # $t0 = 0 if $s0 ≥ $s3 (i ≥ n)
beq $t0, $zero, exit1 # go to exit1 if $s0 ≥ $s3 (i ≥ n)
addi $s1, $s0, -1 # j = i - 1
# inner loop test
for2tst: slti $t0, $s1, 0 # $t0 = 1 if $s1 < 0 (j < 0)
bne $t0, $zero, exit2 # go to exit2 if $s1 < 0 (j < 0)
sll $t1, $s1, 2 # $t1 = j * 4
add $t2, $s2, $t1 # $t2 = v + (j * 4)
lw $t3, 0($t2) # $t3 = v[j]
lw $t4, 4($t2) # $t4 = v[j + 1]
slt $t0, $t4, $t3 # $t0 = 0 if $t4 ≥ $t3
beq $t0, $zero, exit2 # go to exit2 if $t4 ≥ $t3
# pass params and call
move $a0, $s2 # 1st param of swap is v (old $a0)
move $a1, $s1 # 2nd param of swap is j
jal swap # call swap procedure
addi $s1, $s1, -1 # j -= 1
j for2tst # jump to test of inner loop
# end of outer loop body
exit2: addi $s0, $s0, 1 # i += 1
j for1tst # jump to test of outer loop
71. The Full Procedure
sort: addi $sp, $sp, -20 # make room on stack for 5 registers
sw $ra, 16($sp) # save $ra on stack
sw $s3,12($sp) # save $s3 on stack
sw $s2, 8($sp) # save $s2 on stack
sw $s1, 4($sp) # save $s1 on stack
sw $s0, 0($sp) # save $s0 on stack
… # procedure body
…
exit1: lw $s0, 0($sp) # restore $s0 from stack
lw $s1, 4($sp) # restore $s1 from stack
lw $s2, 8($sp) # restore $s2 from stack
lw $s3,12($sp) # restore $s3 from stack
lw $ra,16($sp) # restore $ra from stack
addi $sp,$sp, 20 # restore stack pointer
jr $ra # return to calling routine
72. Effect of Compiler Optimization
Compiled with gcc for Pentium 4 under Linux
[Bar charts: relative performance, instruction count, clock cycles, and
CPI of the sort program at optimization levels none, O1, O2, and O3]
73. Effect of Language and Algorithm
[Bar charts: Bubblesort relative performance, Quicksort relative
performance, and Quicksort vs. Bubblesort speedup for C/none, C/O1,
C/O2, C/O3, Java/int, and Java/JIT]
74. Lessons Learnt
Instruction count and CPI are not good
performance indicators in isolation
Compiler optimizations are sensitive to the
algorithm
Java/JIT compiled code is significantly
faster than JVM interpreted
Comparable to optimized C in some cases
Nothing can fix a dumb algorithm!
75. §2.14 Arrays versus Pointers
Arrays vs. Pointers
Array indexing involves
Multiplying index by element size
Adding to array base address
Pointers correspond directly to memory
addresses
Can avoid indexing complexity
76. Example: Clearing an Array
Array version (clear1):

void clear1(int array[], int size) {
  int i;
  for (i = 0; i < size; i += 1)
    array[i] = 0;
}

move $t0,$zero # i = 0
loop1: sll $t1,$t0,2 # $t1 = i * 4
add $t2,$a0,$t1 # $t2 = &array[i]
sw $zero, 0($t2) # array[i] = 0
addi $t0,$t0,1 # i = i + 1
slt $t3,$t0,$a1 # $t3 = (i < size)
bne $t3,$zero,loop1 # if (…) goto loop1

Pointer version (clear2):

void clear2(int *array, int size) {
  int *p;
  for (p = &array[0]; p < &array[size]; p = p + 1)
    *p = 0;
}

move $t0,$a0 # p = &array[0]
sll $t1,$a1,2 # $t1 = size * 4
add $t2,$a0,$t1 # $t2 = &array[size]
loop2: sw $zero,0($t0) # Memory[p] = 0
addi $t0,$t0,4 # p = p + 4
slt $t3,$t0,$t2 # $t3 = (p < &array[size])
bne $t3,$zero,loop2 # if (…) goto loop2
77. Comparison of Array vs. Ptr
Multiply “strength reduced” to shift
Array version requires shift to be inside
loop
Part of index calculation for incremented i
c.f. incrementing pointer
Compiler can achieve same effect as
manual use of pointers
Induction variable elimination
Better to make program clearer and safer
78. §2.16 Real Stuff: ARM Instructions
ARM & MIPS Similarities
ARM: the most popular embedded core
Similar basic set of instructions to MIPS
ARM MIPS
Date announced 1985 1985
Instruction size 32 bits 32 bits
Address space 32-bit flat 32-bit flat
Data alignment Aligned Aligned
Data addressing modes 9 3
Registers 15 × 32-bit 31 × 32-bit
Input/output Memory mapped Memory mapped
79. Compare and Branch in ARM
Uses condition codes for result of an
arithmetic/logical instruction
Negative, zero, carry, overflow
Compare instructions to set condition codes
without keeping the result
Each instruction can be conditional
Top 4 bits of instruction word: condition value
Can avoid branches over single instructions
81. §2.17 Real Stuff: x86 Instructions
The Intel x86 ISA
Evolution with backward compatibility
8080 (1974): 8-bit microprocessor
Accumulator, plus 3 index-register pairs
8086 (1978): 16-bit extension to 8080
Complex instruction set (CISC)
8087 (1980): floating-point coprocessor
Adds FP instructions and register stack
80286 (1982): 24-bit addresses, MMU
Segmented memory mapping and protection
80386 (1985): 32-bit extension (now IA-32)
Additional addressing modes and operations
Paged memory mapping as well as segments
82. The Intel x86 ISA
Further evolution…
i486 (1989): pipelined, on-chip caches and FPU
Compatible competitors: AMD, Cyrix, …
Pentium (1993): superscalar, 64-bit datapath
Later versions added MMX (Multi-Media eXtension)
instructions
The infamous FDIV bug
Pentium Pro (1995), Pentium II (1997)
New microarchitecture (see Colwell, The Pentium Chronicles)
Pentium III (1999)
Added SSE (Streaming SIMD Extensions) and associated
registers
Pentium 4 (2001)
New microarchitecture
Added SSE2 instructions
83. The Intel x86 ISA
And further…
AMD64 (2003): extended architecture to 64 bits
EM64T – Extended Memory 64 Technology (2004)
AMD64 adopted by Intel (with refinements)
Added SSE3 instructions
Intel Core (2006)
Added SSE4 instructions, virtual machine support
AMD64 (announced 2007): SSE5 instructions
Intel declined to follow, instead…
Advanced Vector Extension (announced 2008)
Longer SSE registers, more instructions
If Intel didn’t extend with compatibility, its
competitors would!
Technical elegance ≠ market success
85. Basic x86 Addressing Modes
Two operands per instruction
Source/dest operand Second source operand
Register Register
Register Immediate
Register Memory
Memory Register
Memory Immediate
Memory addressing modes
Address in register
Address = Rbase + displacement
Address = Rbase + 2^scale × Rindex (scale = 0, 1, 2, or 3)
Address = Rbase + 2^scale × Rindex + displacement
86. x86 Instruction Encoding
Variable length
encoding
Postfix bytes specify
addressing mode
Prefix bytes modify
operation
Operand length,
repetition, locking, …
87. Implementing IA-32
Complex instruction set makes
implementation difficult
Hardware translates instructions to simpler
microoperations
Simple instructions: 1–1
Complex instructions: 1–many
Microengine similar to RISC
Market share makes this economically viable
Comparable performance to RISC
Compilers avoid complex instructions
88. §2.18 Fallacies and Pitfalls
Fallacies
Powerful instruction ⇒ higher performance
Fewer instructions required
But complex instructions are hard to implement
May slow down all instructions, including simple ones
Compilers are good at making fast code from simple
instructions
Use assembly code for high performance
But modern compilers are better at dealing with
modern processors
More lines of code ⇒ more errors and less
productivity
89. Fallacies
Backward compatibility ⇒ instruction set
doesn’t change
But they do accrete more instructions
x86 instruction set
90. Pitfalls
Sequential words are not at sequential
addresses
Increment by 4, not by 1!
Keeping a pointer to an automatic variable
after procedure returns
e.g., passing pointer back via an argument
Pointer becomes invalid when stack popped
91. §2.19 Concluding Remarks
Concluding Remarks
Design principles
1. Simplicity favors regularity
2. Smaller is faster
3. Make the common case fast
4. Good design demands good compromises
Layers of software/hardware
Compiler, assembler, hardware
MIPS: typical of RISC ISAs
c.f. x86
92. Concluding Remarks
Measure MIPS instruction executions in
benchmark programs
Consider making the common case fast
Consider compromises
Instruction class MIPS examples SPEC2006 Int SPEC2006 FP
Arithmetic add, sub, addi 16% 48%
Data transfer lw, sw, lb, lbu, lh, lhu, sb, lui 35% 36%
Logical and, or, nor, andi, ori, sll, srl 12% 4%
Cond. branch beq, bne, slt, slti, sltiu 34% 8%
Jump j, jr, jal 2% 0%