This document provides an overview of floating point representation and arithmetic based on the IEEE 754 standard. It discusses topics such as normalized and denormalized values, special values like infinity and NaN, and examples using tiny 8-bit floating point formats to illustrate concepts like dynamic range and value distribution. The goal is to explain how computers represent inexact real numbers using a finite number of bits.
digital logic circuits, digital component floating and fixed point - Rai University
This document provides an overview of floating point numbers and their representation. It discusses how floating point numbers are used to represent very large and small numbers with exponents. The IEEE 754 standard for floating point representation is described, including the use of sign-magnitude, biased exponents, normalization, denormalization and special values like infinity and NaN. Single and double precision floating point number formats are defined according to IEEE 754. Methods for converting between decimal and binary floating point values are demonstrated through examples.
The document discusses floating point numbers and the IEEE 754 standard. It describes how floating point numbers represent numbers with fractions using a sign bit, exponent field, and fraction field. The IEEE 754 standard uses a biased exponent representation for normalized floating point values, along with special values like infinity and NaN. It also details denormalized numbers, which allow gradual underflow to zero.
Exponential notation can be used to represent very large and very small numbers in a normalized form. A floating point number uses a sign, exponent, and mantissa to represent values in a fixed number of bits. Common standards like IEEE 754 specify single and double precision formats that use 1 sign bit, 8 or 11 exponent bits, and 23 or 52 mantissa bits respectively. Floating point addition requires aligning the exponents before adding the mantissas; multiplication multiplies the mantissas and adds the exponents, after which the result is normalized.
Bca 2nd sem-u-1.8 digital logic circuits, digital component floating and fixed... - Rai University
Digital Logic Circuits, Digital Component and Data Representation discusses floating point numbers and the IEEE 754 standard. It describes how floating point numbers use a sign bit, exponent field, and fraction field to represent values too large or small for integers. The standard uses biased exponent representation and defines special values like infinity, zero, and NaN. Floating point numbers can be normalized, denormalized, or have special values and are ordered by magnitude.
B.sc cs-ii-u-1.8 digital logic circuits, digital component floating and fixed... - Rai University
Digital Logic Circuits, Digital Component and Data Representation discusses floating point numbers and the IEEE 754 standard. It describes how floating point numbers use a sign bit, exponent field, and fraction field to represent values in scientific notation. It also summarizes the IEEE 754 standard for single and double precision floating point numbers, including how special values like infinity and NaN are represented.
Fixed-point and floating-point numbers can be represented in computers using binary numbers. Floating-point numbers represent numbers in scientific notation with a sign, mantissa, and exponent. In 8-bit floating point, numbers use 1 bit for sign, 3 bits for exponent, and 4 bits for mantissa, such as 1.001 × 2^1 = 2.25. Larger precision formats such as 32-bit and 64-bit floating point according to the IEEE standard use more bits for exponent and mantissa.
The IEEE 754 standard defines the floating point representation of real numbers. It uses a sign-magnitude format that includes:
1) A sign bit to indicate positive or negative.
2) A biased exponent stored with an offset to allow negative exponents.
3) A mantissa (significand) whose leading 1 is implicit and not explicitly stored.
For single precision floats, this uses 32 bits: 1 sign bit, an 8-bit exponent, and a 23-bit mantissa. Doubles use 64 bits, with an 11-bit exponent and a 52-bit mantissa, for greater precision and range.
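As a concrete illustration of this field layout, the 32-bit encoding can be unpacked with Python's standard struct module (the helper name decode_float32 is ours, not part of any library):

```python
import struct

def decode_float32(x):
    """Unpack a Python float into IEEE 754 single-precision fields.

    Returns (sign, biased_exponent, fraction_bits) for the 32-bit encoding.
    """
    (bits,) = struct.unpack(">I", struct.pack(">f", x))
    sign = bits >> 31                # 1 bit
    exponent = (bits >> 23) & 0xFF   # 8 bits, stored with a bias of 127
    fraction = bits & 0x7FFFFF       # 23 bits; leading 1 is implicit for normals
    return sign, exponent, fraction

# -6.25 = -1.1001 (binary) * 2^2, so the stored exponent is 2 + 127 = 129
print(decode_float32(-6.25))
```

The fraction field holds only the bits after the implicit leading 1, which is why 23 stored bits give 24 bits of effective precision.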
Real numbers can be stored using floating point representation, which separates a real number into three parts: a sign bit, exponent, and mantissa. The exponent indicates the power of the base (2 in binary formats) by which the mantissa is multiplied. Common standards like IEEE 754 define single and double precision formats that allocate more bits for higher precision at the cost of range. Encoding a floating point number involves determining the exponent by shifting the radix point, converting the number to a leading-digit mantissa, and writing the sign, exponent, and mantissa according to the specified precision format.
The document discusses different methods for representing integers and fractional numbers in binary, including sign and modulus representation, one's complement, two's complement, fixed point representation, and floating point representation. It provides examples and activities to help understand how to convert between decimal and binary representations using these methods.
Digital and Logic Design Chapter 1 binary_systems - Imran Waris
This document discusses binary number systems and digital computing. It covers binary numbers, number base conversions between decimal, binary, octal and hexadecimal. It also discusses binary coding techniques like binary-coded decimal, signed magnitude representation, one's complement and two's complement representations for negative numbers.
This document provides an overview of data representation in computers. It discusses binary, decimal, hexadecimal, and floating point number systems. Binary numbers use only two digits, 0 and 1, and can represent values as sums of powers of two. Decimal uses ten digits from 0-9. Hexadecimal uses sixteen values from 0-9 and A-F. Negative binary integers can be represented using one's complement or two's complement methods. Two's complement avoids multiple representations of zero and is commonly used in computers. Converting between number bases involves expressing the value in one base using the digits of another.
The document discusses various number systems including decimal, binary, and signed binary numbers. It provides the following key points:
1) Decimal numbers use ten digits from 0-9 while binary only uses two digits, 0 and 1. Binary numbers represent values through place values determined by powers of two.
2) Conversions can be done between decimal and binary numbers through either summing the place value weights or repeated division/multiplication by two.
3) Binary arithmetic follows simple rules to add, subtract, multiply and divide numbers in binary representation.
4) Signed binary numbers use a sign bit to indicate positive or negative values, with the most common 2's complement form representing negative numbers as the 2's complement of the corresponding positive value.
Digital systems represent information using discrete binary values of 0 and 1 rather than continuous analog values. Binary numbers use a base-2 numbering system with place values that are powers of 2. There are various number systems like decimal, binary, octal and hexadecimal that use different number bases and represent the same number in different ways. Complements are used in binary arithmetic to perform subtraction by adding the 1's or 2's complement of a number. The 1's complement is obtained by inverting all bits, while the 2's complement is obtained by inverting all bits and adding 1.
This document discusses floating point number representation in IEEE-754 format. It explains that floating point numbers consist of a sign bit, exponent, and mantissa. It describes single and double precision formats, which use excess-127 and excess-1023 exponent biases respectively. Examples are given of representing sample numbers in both implicit and explicit normalized forms using single and double precision formats.
The document discusses different number systems including binary, octal, decimal, and hexadecimal. It explains that number systems have a radix or base, which determines the set of symbols used and their positional values. The key representations for binary numbers discussed are sign-magnitude, one's complement, and two's complement, which provide different methods for representing positive and negative numbers. The document provides examples of addition, subtraction, multiplication, and division operations in binary.
Logic Circuits Design - "Chapter 1: Digital Systems and Information" - Ra'Fat Al-Msie'deen
Logic Circuits Design: This material is based on chapter 1 of “Logic and Computer Design Fundamentals” by M. Morris Mano, Charles R. Kime and Tom Martin
The document discusses floating point arithmetic and the IEEE 754 standard. It covers topics such as floating point representation including normalized and denormalized numbers, special values like infinity and NaN, rounding modes, and properties of floating point operations like addition and multiplication. Examples are provided to illustrate concepts like normalized encoding, the dynamic range of different representations, and answers to puzzles about floating point comparisons and operations.
The document discusses binary number representation and arithmetic. It explains decimal to binary conversion. It also describes signed number representation using sign-magnitude and one's complement and two's complement methods. The key advantages of two's complement are that addition can be performed using the same method for positive and negative numbers. Subtraction using two's complement is performed by adding the number to the complement of the subtrahend. Examples of binary addition and subtraction are provided to illustrate these concepts.
1) Floating point numbers use a binary format defined by the IEEE 754 standard, which represents values as (-1)^s × m × 2^e, where s is the sign bit, m is the mantissa, and e is the exponent.
2) The mantissa is a fraction stored in bits with an implied leading 1, and the exponent is stored with a bias of 127, allowing the representation of numbers between roughly 10^-38 and 10^38.
3) Converting values to their binary representation involves repeatedly multiplying/dividing the fractional part by 2 until the value is in the form 1.xxx × 2^e, where the exponent tracks the number of times the binary point is shifted.
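The normalization step described in point 3 can be sketched in Python. This is a simplified illustration only: it ignores zero, rounding, infinities, and subnormal values.

```python
def normalize(value):
    """Scale |value| into the form m * 2**e with 1 <= m < 2.

    Simplified sketch: assumes value is nonzero and finite;
    ignores rounding and subnormals.
    """
    m, e = abs(value), 0
    while m >= 2.0:   # too large: shift the binary point left
        m /= 2.0
        e += 1
    while m < 1.0:    # too small: shift the binary point right
        m *= 2.0
        e -= 1
    return m, e

print(normalize(6.25))   # 6.25 = 1.5625 * 2^2
```

Each halving or doubling is a one-position shift of the binary point, and the loop counter is exactly the exponent e of the normalized form.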
The document discusses different number systems used in computing, including binary, hexadecimal, and octal. It explains that computers internally use the binary number system to represent data and perform calculations. Hexadecimal provides a shorthand way to work with binary numbers, with each hex digit corresponding to four binary digits. The document also covers how to convert between decimal, binary, hexadecimal, and octal numbers. It provides examples of expanding numbers in different bases, as well as adding and subtracting binary numbers using complements.
The document discusses different number systems including positional and non-positional systems. It covers the decimal, binary, octal and hexadecimal number systems in detail. Key points include:
1) Positional systems use the place value of digits to represent numbers, with the decimal and binary systems being examples.
2) Converting between number bases involves repeatedly dividing the number by the new base and recording the remainders as digits.
3) Binary, octal and hexadecimal use groups of bits/digits to simplify conversions between the bases.
4) Floating point numbers represent real numbers with a mantissa and exponent in positional notation.
This document contains examples and problems related to binary number systems. It begins by listing octal, hexadecimal, and base-12 numbers and converting between number systems. Later problems involve operations like addition, subtraction, multiplication and division using binary, octal and hexadecimal numbers. Conversions between decimal, binary, octal and hexadecimal are demonstrated. Complement representations for signed binary numbers are also covered.
The document discusses number systems and conversions between different bases. It explains that computers use the binary system with bits representing 0s and 1s. 8 bits form a byte. Decimal, binary, octal and hexadecimal numbering systems are covered. Methods for converting between these bases are provided using division and remainders or grouping bits. Common powers and units used in computing like kilo, mega and giga are also defined. Exercises on converting values between the different number systems are included.
The document summarizes the IEEE floating-point representation used in Microsoft Visual C++. It describes that there are three main formats - real*4, real*8, and real*10 - which store the sign bit, exponent, and mantissa differently. Real*4 uses a single precision 32-bit format while real*8 uses a double precision 64-bit format. Exponents are stored in biased format, with the exponent bias allowing the representation of both positive and negative exponents. Due to the binary representation, some decimal values cannot be precisely represented in floating-point formats.
The document discusses three main challenges of modern computer-based assessment: usability, scoring, and assessing digital natives. It summarizes the COGSIM test development process which included multiple usability studies and pilot tests to improve the usability, identify scoring approaches, and ensure the assessment engaged digital native test-takers. The process led to enhancements in areas like the graphical user interface, how performance was measured both in terms of products and processes, and design elements to attract and maintain the interest of digital natives accustomed to technology.
The document discusses fixed-point arithmetic, which represents numbers using a fixed number of bits after the binary point. It compares fixed-point to integer and floating-point representations. It then covers notation for representing fixed-point numbers, converting between types, rounding methods, basic operations, and implementing common mathematical functions using fixed-point arithmetic. Several open-source fixed-point arithmetic libraries are also mentioned.
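As an illustration of the fixed-point idea (a minimal sketch, not any particular library's API), a Q4.4 format stores 8 bits with 4 fractional bits, i.e. every value is scaled by 2^4:

```python
# Q4.4 fixed-point sketch: 4 integer bits, 4 fractional bits (scale = 2**4).
SCALE = 16

def to_fix(x):
    """Encode a real number as a Q4.4 integer (truncating toward zero)."""
    return int(x * SCALE)

def fix_mul(a, b):
    """Multiply two Q4.4 values; the product carries a double scale
    factor, so it is shifted back down by one SCALE."""
    return (a * b) // SCALE

a, b = to_fix(2.5), to_fix(1.25)
print(fix_mul(a, b) / SCALE)   # 3.125
```

Addition and subtraction need no rescaling because both operands share the same scale; only multiplication and division must shift the result back.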
This document summarizes 6 different studies on floating point unit designs. The studies examined fully pipelined single-precision units, energy efficient designs, fused add-subtract units, optimized logarithmic architectures, unified rectangular designs, and an FPGA implementation. The studies described the architectures, advantages like performance and power improvements, and disadvantages like increased complexity. Overall, the document reviews optimizations for floating point units across different technologies.
My presentation on 'computer hardware component' {hardware} - Rahul Kumar
The document lists and describes the main components of a computer hardware system. It includes both internal components like the central processing unit (CPU), motherboard, memory slots, and hard drive as well as external components like the monitor, keyboard, and disk drives. The CPU consists of an arithmetic logic unit, registers, and control unit. The motherboard contains connections for attaching these components and controlling peripheral devices. Memory slots hold SIMM or DIMM memory modules. Hard disks provide fast and large capacity data storage compared to floppy disks.
Computer hardware refers to the physical components that make up a computer system. It includes processing components like the CPU and memory, as well as input devices, output devices, and storage devices. The CPU fetches and executes instructions from memory. Memory comes in different types, including cache memory, RAM, and ROM. Input devices like keyboards and mice allow entering data. Output devices like monitors and printers display or print the output. Storage devices such as hard drives and optical discs store data for later use.
This document lists and briefly describes the main hardware components of a computer system. It includes the motherboard, CPU, RAM, keyboard, mouse, monitor, and various storage drives like floppy disk drives, CD-ROM drives, hard disk drives, and DVD drives. The motherboard contains connectors for additional components and controllers to interface with peripheral devices. RAM provides temporary storage while the computer is on. Hard disks provide high-capacity permanent storage. DVD and CD drives can read optical discs for data access or multimedia playback.
This document summarizes the five generations of computers from the 1940s to present. The first generation used vacuum tubes and were large, expensive machines. The second generation introduced transistors, making computers smaller and more reliable. The third generation used integrated circuits, further reducing size and power needs. The fourth generation used VLSI chips and marked the beginning of personal computers. The fifth generation continues advancing processor and memory technology, artificial intelligence, and parallel processing.
This document discusses floating point arithmetic. It begins by introducing the IEEE floating point standard and topics like rounding and floating point operations. It then discusses how fractional binary numbers are represented and the numerical format used in floating point, including normalized and denormalized values. Special values like infinity, NaN, positive/negative zero are also covered. The document concludes by discussing properties of floating point addition, multiplication, and how they differ from mathematical ideals due to rounding errors and special values.
The document discusses intermediate representation in compilers. It states that compilers use a front end to translate source programs into an intermediate representation and a back end to generate target code from the intermediate representation. This allows for machine-independent optimization and retargeting of compilers to different architectures. It then describes three-address code as a common intermediate representation and shows syntax-directed translation of expressions and control flow structures into three-address code. Finally, it discusses various aspects of compiler implementation like symbol tables, type checking, and backpatching.
Two's complement allows representation of negative numbers in binary, with all 1s representing -1 and a leading 1 marking any negative value, and it defines operations like OR and addition on two's complement numbers; floating point representation separates a number into sign, exponent, and mantissa fields to allow a wide range of values to be represented.
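A small Python sketch of two's complement encoding and decoding (helper names are ours, shown for an assumed 8-bit width):

```python
def twos_complement(value, bits=8):
    """Encode a signed integer as a two's-complement bit pattern."""
    assert -(1 << (bits - 1)) <= value < (1 << (bits - 1)), "out of range"
    return value & ((1 << bits) - 1)

def from_twos(raw, bits=8):
    """Decode a two's-complement bit pattern back to a signed integer."""
    return raw - (1 << bits) if raw >= (1 << (bits - 1)) else raw

print(format(twos_complement(-5), "08b"))  # 11111011
print(from_twos(0b11111011))               # -5
```

Masking with `(1 << bits) - 1` is equivalent to inverting all bits of the positive value and adding 1, which is the textbook construction of the 2's complement.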
The document discusses different number systems including binary, decimal, and hexadecimal. It explains that binary uses two digits (0,1), decimal uses ten digits (0-9), and hexadecimal uses sixteen digits (0-9 plus A-F). All of these systems are positional number systems where the value of each digit depends on its place value. The document then discusses binary addition and subtraction, two's complement representation for signed numbers, hexadecimal addition, and concepts like nibbles and bytes.
The document discusses various methods for representing negative numbers in binary, including sign-and-magnitude, 1's complement, and 2's complement representations. It explains each method in detail, providing examples of how positive and negative numbers are represented. It also covers related topics like overflow, fixed-point versus floating-point number representations, and excess representation of exponents in floating-point numbers.
The document discusses different number systems including binary, decimal, and hexadecimal. It defines base-N number systems and provides examples of decimal, binary, and hexadecimal numbers. Key concepts covered include positional notation, bits and bytes, addition and subtraction in different bases, signed numbers represented using sign-magnitude and two's complement, and arithmetic operations like addition and subtraction on signed binary numbers. Sign extension is introduced as an important concept for performing arithmetic on numbers of different bit-widths in a signed system.
The document discusses different number systems including binary, decimal, and hexadecimal. It defines base-N number systems and provides examples of decimal, binary, and hexadecimal numbers. Key concepts covered include positional notation, bits and bytes, addition and subtraction in different bases, signed numbers represented using sign-magnitude and two's complement, and arithmetic operations like addition and subtraction on signed binary numbers. Sign extension is introduced as an important concept for performing arithmetic on numbers of different bit-widths in a signed system.
Reviewing number systems involves understanding various ways in which numbers can be represented and manipulated. Here's a brief overview of different number systems:
Decimal System (Base-10):
This is the most common number system used by humans.
It uses 10 digits (0-9) to represent numbers.
Each digit's position represents a power of 10.
For example, the number 245 in decimal represents (2 * 10^2) + (4 * 10^1) + (5 * 10^0).
Binary System (Base-2):
Used internally by almost all modern computers.
It uses only two digits: 0 and 1.
Each digit's position represents a power of 2.
For example, the binary number 1011 represents (1 * 2^3) + (0 * 2^2) + (1 * 2^1) + (1 * 2^0) in decimal, which equals 11.
Octal System (Base-8):
Less commonly used, but still relevant in some computer programming contexts.
It uses eight digits: 0 to 7.
Each digit's position represents a power of 8.
For example, the octal number 34 represents (3 * 8^1) + (4 * 8^0) in decimal, which equals 28.
Hexadecimal System (Base-16):
Widely used in computer science and programming.
It uses sixteen digits: 0 to 9 followed by A to F (representing 10 to 15).
Each digit's position represents a power of 16.
Often used to represent memory addresses and binary data more compactly.
For example, the hexadecimal number 2F represents (2 * 16^1) + (15 * 16^0) in decimal, which equals 47.
Each number system has its own advantages and applications. Decimal is intuitive for human comprehension, binary is fundamental in computing due to its simplicity for electronic systems, octal and hexadecimal are often used for human-readable representations of binary data in programming, particularly when dealing with memory addresses and byte-oriented data.
Understanding these number systems is essential for various fields such as computer science, electrical engineering, and mathematics, as they provide different perspectives on how numbers can be represented and manipulated.
The document discusses different number systems including binary, decimal, and hexadecimal. It defines base-N number systems and provides examples of decimal, binary, and hexadecimal numbers. Key concepts covered include positional notation, bits and bytes, addition and subtraction in different bases, signed numbers represented using sign-magnitude and two's complement, and arithmetic operations like addition and subtraction on signed binary numbers. Sign extension is introduced as an important concept for performing arithmetic on numbers of different bit-widths in a signed system.
The document discusses different number systems including binary, decimal, and hexadecimal. It defines base-N number systems and provides examples of decimal, binary, and hexadecimal numbers. Key concepts covered include positional notation, bits and bytes, addition and subtraction in different bases, signed numbers represented using sign-magnitude and two's complement, and arithmetic operations like addition and subtraction on signed binary numbers. Sign extension is introduced as an important concept for performing arithmetic on numbers of different bit-widths in a signed system.
The document discusses different number systems including binary, decimal, and hexadecimal. It defines base-N number systems and provides examples of decimal, binary, and hexadecimal numbers. Key concepts covered include positional notation, bits and bytes, addition and subtraction in different bases, signed numbers represented using sign-magnitude and two's complement, and arithmetic operations like addition and subtraction on signed binary numbers. Sign extension is introduced as an important concept for performing arithmetic on numbers of different bit-widths in a signed system.
Numeral Systems: Positional and Non-Positional
Conversions between Positional Numeral Systems: Binary, Decimal and Hexadecimal
Representation of Numbers in Computer Memory
Exercises: Conversion between Different Numeral Systems
Implementation of character translation integer and floating point valuesغزالة
The document discusses implementation of character translation and floating point values. It covers IEEE floating point standard, binary representation of floating point numbers, computer representation using sign bit, exponent and mantissa fields, floating point precisions for single and double precision, examples of representing decimal values in floating point format, and floating point instructions using a stack. It also mentions character translation between ASCII and EBCDIC formats for inter-computer communication.
Here are the answers to the assignment questions:
1. No overflow occurs when adding 00100110 + 01011010 in two's complement. The sum is 10001000.
2. See textbook 1 problem 2-1.c for the solution.
3. See textbook 1 problem 2-11.c for the solution.
4. See textbook 1 problem 2-19.c for the solution.
5. The decimal equivalent of the hexadecimal number 1A16 is 2610.
This document discusses floating point arithmetic. It begins by recalling scientific notation and describing how numbers are represented in binary scientific notation according to the IEEE 754 floating point standard. It then provides examples of how common numbers like 1, 1/2, -2, 0, infinity, and NaN are represented. The document concludes by noting that floating point arithmetic has limitations and is not the same as typical math, and discusses some historical floating point failures in systems.
This presentation was made for student batch 2017-2018 of MBSTU. Here we will get
IEEE 32 bit floating representation .
IEEE 754 floating point representation
32 bit floating point Addition
The document discusses digital and binary number systems. It covers:
1) Analogue vs digital signals, with digital being discrete values represented using bits for precision.
2) Why digital is used for processing over analogue - due to advances in integrated circuits and noise resistance.
3) How digital values are represented using boolean logic and binary number systems. This includes binary, hexadecimal, and signed numbers using two's complement notation.
4) Arithmetic operations like addition, subtraction in binary using two's complement. The same hardware can be used for both with different flag settings to indicate carry vs overflow.
The document provides practice problems and solutions related to digital logic, microprocessor design, computer networks, programming languages, operating systems, and software reliability. The problems cover topics such as number representation in floating point format, encoding, error detection codes, instruction formats, pipelining, and shifting operations. An example uses a comparator circuit to analyze branch instruction logic in a program by comparing data in registers A and B.
The document discusses computer arithmetic and floating point number representation. It covers:
1) The Arithmetic Logic Unit (ALU) performs calculations and can handle integers and floating point numbers using separate floating point units.
2) Integer numbers are represented using binary and two's complement allows for positive and negative numbers. Floating point numbers use a sign bit, significand, and exponent in normalized form to represent numbers with fractions.
3) Operations like addition, subtraction, multiplication, and division are performed on integers in the ALU and floating point numbers following standard algorithms while managing overflow and normalization.
Andreas Schleicher presents PISA 2022 Volume III - Creative Thinking - 18 Jun...EduSkills OECD
Andreas Schleicher, Director of Education and Skills at the OECD presents at the launch of PISA 2022 Volume III - Creative Minds, Creative Schools on 18 June 2024.
🔥🔥🔥🔥🔥🔥🔥🔥🔥
إضغ بين إيديكم من أقوى الملازم التي صممتها
ملزمة تشريح الجهاز الهيكلي (نظري 3)
💀💀💀💀💀💀💀💀💀💀
تتميز هذهِ الملزمة بعِدة مُميزات :
1- مُترجمة ترجمة تُناسب جميع المستويات
2- تحتوي على 78 رسم توضيحي لكل كلمة موجودة بالملزمة (لكل كلمة !!!!)
#فهم_ماكو_درخ
3- دقة الكتابة والصور عالية جداً جداً جداً
4- هُنالك بعض المعلومات تم توضيحها بشكل تفصيلي جداً (تُعتبر لدى الطالب أو الطالبة بإنها معلومات مُبهمة ومع ذلك تم توضيح هذهِ المعلومات المُبهمة بشكل تفصيلي جداً
5- الملزمة تشرح نفسها ب نفسها بس تكلك تعال اقراني
6- تحتوي الملزمة في اول سلايد على خارطة تتضمن جميع تفرُعات معلومات الجهاز الهيكلي المذكورة في هذهِ الملزمة
واخيراً هذهِ الملزمة حلالٌ عليكم وإتمنى منكم إن تدعولي بالخير والصحة والعافية فقط
كل التوفيق زملائي وزميلاتي ، زميلكم محمد الذهبي 💊💊
🔥🔥🔥🔥🔥🔥🔥🔥🔥
Leveraging Generative AI to Drive Nonprofit InnovationTechSoup
In this webinar, participants learned how to utilize Generative AI to streamline operations and elevate member engagement. Amazon Web Service experts provided a customer specific use cases and dived into low/no-code tools that are quick and easy to deploy through Amazon Web Service (AWS.)
This presentation was provided by Rebecca Benner, Ph.D., of the American Society of Anesthesiologists, for the second session of NISO's 2024 Training Series "DEIA in the Scholarly Landscape." Session Two: 'Expanding Pathways to Publishing Careers,' was held June 13, 2024.
Temple of Asclepius in Thrace. Excavation resultsKrassimira Luka
The temple and the sanctuary around were dedicated to Asklepios Zmidrenus. This name has been known since 1875 when an inscription dedicated to him was discovered in Rome. The inscription is dated in 227 AD and was left by soldiers originating from the city of Philippopolis (modern Plovdiv).
1. 15-213
“The course that gives CMU its Zip!”
Floating Point
Sept 6, 2006
Topics
IEEE Floating Point Standard
Rounding
Floating Point Operations
Mathematical properties
class03.ppt 15-213, F’06
2. Floating Point Puzzles
For each of the following C expressions, either:
Argue that it is true for all argument values, or
Explain why it is not true
Assume: int x = …; float f = …; double d = …; and that neither d nor f is NaN.
• x == (int)(float) x
• x == (int)(double) x
• f == (float)(double) f
• d == (float) d
• f == -(-f);
• 2/3 == 2/3.0
• d < 0.0 ⇒ ((d*2) < 0.0)
• d > f ⇒ -f > -d
• d * d >= 0.0
• (d+f)-d == f
3. IEEE Floating Point
IEEE Standard 754
Established in 1985 as uniform standard for floating
point arithmetic
Before that, many idiosyncratic formats
Supported by all major CPUs
Driven by Numerical Concerns
Nice standards for rounding, overflow, underflow
Hard to make go fast
Numerical analysts predominated over hardware types in
defining standard
4. Fractional Binary Numbers
bi bi–1 ••• b2 b1 b0 . b–1 b–2 b–3 ••• b–j
Bit weights run 2^i, 2^(i–1), …, 4, 2, 1 to the left of the binary point and 1/2, 1/4, 1/8, …, 2^(–j) to the right
Representation
Bits to right of “binary point” represent fractional powers of 2
Represents rational number: ∑_{k=−j}^{i} b_k · 2^k
5. Frac. Binary Number Examples
Value Representation
5 3/4 101.11₂
2 7/8 10.111₂
63/64 0.111111₂
Observations
Divide by 2 by shifting right
Multiply by 2 by shifting left
Numbers of form 0.111111…₂ are just below 1.0
1/2 + 1/4 + 1/8 + … + 1/2^i + … → 1.0
Use notation 1.0 – ε
6. Representable Numbers
Limitation
Can only exactly represent numbers of the form x/2^k
Other numbers have repeating bit representations
Value Representation
1/3 0.0101010101[01]…₂
1/5 0.001100110011[0011]…₂
1/10 0.0001100110011[0011]…₂
7. Floating Point Representation
Numerical Form
(–1)^s M 2^E
Sign bit s determines whether number is negative or
positive
Significand M normally a fractional value in range [1.0,2.0).
Exponent E weights value by power of two
Encoding
s exp frac
MSB is sign bit
exp field encodes E
frac field encodes M
8. Floating Point Precisions
Encoding
s exp frac
MSB is sign bit
exp field encodes E
frac field encodes M
Sizes
Single precision: 8 exp bits, 23 frac bits
32 bits total
Double precision: 11 exp bits, 52 frac bits
64 bits total
Extended precision: 15 exp bits, 63 frac bits
Only found in Intel-compatible machines
Stored in 80 bits
» 1 bit wasted
9. “Normalized” Numeric Values
Condition
exp ≠ 000…0 and exp ≠ 111…1
Exponent coded as biased value
E = Exp – Bias
Exp : unsigned value denoted by exp
Bias : Bias value
» Single precision: 127 ( Exp: 1…254, E: -126…127)
» Double precision: 1023 ( Exp: 1…2046, E: -1022…1023)
» in general: Bias = 2^(e–1) – 1, where e is the number of exponent bits
Significand coded with implied leading 1
M = 1.xxx…x₂
xxx…x: bits of frac
Minimum when 000…0 (M = 1.0)
Maximum when 111…1 (M = 2.0 – ε)
Get extra leading bit for “free”
10. Normalized Encoding
Example
Value
float F = 15213.0;
15213₁₀ = 11101101101101₂ = 1.1101101101101₂ × 2¹³
Significand
M = 1.1101101101101₂
frac = 11011011011010000000000₂
Exponent
E = 13
Bias = 127
Exp = 140 = 10001100₂
Floating Point Representation:
Hex: 4 6 6 D B 4 0 0
Binary: 0100 0110 0110 1101 1011 0100 0000 0000
140: 100 0110 0 (the exponent bits)
15213: 1110 1101 1011 01 (the significand bits)
11. Denormalized Values
Condition
exp = 000…0
Value
Exponent value E = –Bias + 1
Significand value M = 0.xxx…x₂
xxx…x: bits of frac
Cases
exp = 000…0, frac = 000…0
Represents value 0
Note that have distinct values +0 and –0
exp = 000…0, frac ≠ 000…0
Numbers very close to 0.0
Lose precision as get smaller
“Gradual underflow”
12. Special Values
Condition
exp = 111…1
Cases
exp = 111…1, frac = 000…0
Represents value ∞ (infinity)
Operation that overflows
Both positive and negative
E.g., 1.0/0.0 = −1.0/−0.0 = +∞, 1.0/−0.0 = −∞
exp = 111…1, frac ≠ 000…0
Not-a-Number (NaN)
Represents case when no numeric value can be
determined
E.g., sqrt(–1), ∞ − ∞, ∞ ∗ 0
13. Summary of Floating Point
Real Number Encodings
Number line: −∞ | −Normalized | −Denorm | −0 +0 | +Denorm | +Normalized | +∞, with NaN outside the ordering at both ends
14. Tiny Floating Point Example
8-bit Floating Point Representation
the sign bit is in the most significant bit.
the next four bits are the exponent, with a bias of 7.
the last three bits are the frac
Same General Form as IEEE Format
normalized, denormalized
representation of 0, NaN, infinity
Bit layout: s (bit 7) | exp (bits 6–3) | frac (bits 2–0)
17. Distribution of Values
6-bit IEEE-like format
e = 3 exponent bits
f = 2 fraction bits
Bias is 3
Notice how the distribution gets denser toward
zero.
(number line from −15 to 15: denormalized values cluster near 0, normalized values spread outward, infinities at the ends)
18. Distribution of Values
(close-up view)
6-bit IEEE-like format
e = 3 exponent bits
f = 2 fraction bits
Bias is 3
(close-up from −1 to 1: denormalized values fill in evenly around 0, meeting the normalized values smoothly)
19. Interesting Numbers
Description              exp    frac   Numeric Value
Zero                     00…00  00…00  0.0
Smallest Pos. Denorm.    00…00  00…01  2^(–{23,52}) × 2^(–{126,1022})
                                       Single ≈ 1.4 × 10^(–45), Double ≈ 4.9 × 10^(–324)
Largest Denormalized     00…00  11…11  (1.0 – ε) × 2^(–{126,1022})
                                       Single ≈ 1.18 × 10^(–38), Double ≈ 2.2 × 10^(–308)
Smallest Pos. Normalized 00…01  00…00  1.0 × 2^(–{126,1022})
                                       Just larger than largest denormalized
One                      01…11  00…00  1.0
Largest Normalized       11…10  11…11  (2.0 – ε) × 2^({127,1023})
                                       Single ≈ 3.4 × 10^38, Double ≈ 1.8 × 10^308
20. Special Properties of Encoding
FP Zero Same as Integer Zero
All bits = 0
Can (Almost) Use Unsigned Integer Comparison
Must first compare sign bits
Must consider -0 = 0
NaNs problematic
Will be greater than any other values
What should comparison yield?
Otherwise OK
Denorm vs. normalized
Normalized vs. infinity
21. Floating Point Operations
Conceptual View
First compute exact result
Make it fit into desired precision
Possibly overflow if exponent too large
Possibly round to fit into frac
Rounding Modes (illustrate with $ rounding)
                       $1.40  $1.60  $1.50  $2.50  –$1.50
Zero                   $1     $1     $1     $2     –$1
Round down (–∞)        $1     $1     $1     $2     –$2
Round up (+∞)          $2     $2     $2     $3     –$1
Nearest Even (default) $1     $2     $2     $2     –$2
Note:
1. Round down: rounded result is close to but no greater than true result.
2. Round up: rounded result is close to but no less than true result.
22. Closer Look at Round-To-Even
Default Rounding Mode
Hard to get any other kind without dropping into assembly
All others are statistically biased
Sum of set of positive numbers will consistently be over- or under-estimated
Applying to Other Decimal Places / Bit Positions
When exactly halfway between two possible values
Round so that least significant digit is even
E.g., round to nearest hundredth
1.2349999 → 1.23 (less than half way)
1.2350001 → 1.24 (greater than half way)
1.2350000 → 1.24 (half way—round up)
1.2450000 → 1.24 (half way—round down)
23. Rounding Binary Numbers
Binary Fractional Numbers
“Even” when least significant bit is 0
Half way when bits to right of rounding position = 100…₂
Examples
Round to nearest 1/4 (2 bits right of binary point)
Value   Binary     Rounded  Action       Rounded Value
2 3/32  10.00011₂  10.00₂   (<1/2—down)  2
2 3/16  10.00110₂  10.01₂   (>1/2—up)    2 1/4
2 7/8   10.11100₂  11.00₂   (1/2—up)     3
2 5/8   10.10100₂  10.10₂   (1/2—down)   2 1/2
24. FP Multiplication
Operands
(–1)^s1 M1 2^E1 × (–1)^s2 M2 2^E2
Exact Result
(–1)^s M 2^E
Sign s: s1 ^ s2
Significand M: M1 × M2
Exponent E: E1 + E2
Fixing
If M ≥ 2, shift M right, increment E
If E out of range, overflow
Round M to fit frac precision
Implementation
Biggest chore is multiplying significands
25. FP Addition
Operands (assume E1 > E2)
(–1)^s1 M1 2^E1
+ (–1)^s2 M2 2^E2, with M2 shifted right by E1–E2 to align the binary points
Exact Result
(–1)^s M 2^E
Sign s, significand M: result of signed align & add
Exponent E: E1
Fixing
If M ≥ 2, shift M right, increment E
If M < 1, shift M left k positions, decrement E by k
Overflow if E out of range
Round M to fit frac precision
26. Mathematical Properties of FP
Add
Compare to those of Abelian Group
Closed under addition? YES
But may generate infinity or NaN
Commutative? YES
Associative? NO
Overflow and inexactness of rounding
0 is additive identity? YES
Every element has additive inverse ALMOST
Except for infinities & NaNs
Monotonicity
a ≥ b ⇒ a+c ≥ b+c? ALMOST
Except for infinities & NaNs
27. Math. Properties of FP Mult
Compare to Commutative Ring
Closed under multiplication? YES
But may generate infinity or NaN
Multiplication Commutative? YES
Multiplication is Associative? NO
Possibility of overflow, inexactness of rounding
1 is multiplicative identity? YES
Multiplication distributes over addition? NO
Possibility of overflow, inexactness of rounding
Monotonicity
a ≥ b & c ≥ 0 ⇒ a·c ≥ b·c? ALMOST
Except for infinities & NaNs
28. Creating Floating Point Number
Steps
Normalize to have leading 1
Round to fit within fraction
Postnormalize to deal with effects of rounding
(target layout: s (bit 7) | exp (bits 6–3) | frac (bits 2–0))
Case Study
Convert 8-bit unsigned numbers to tiny floating point format
Example Numbers
128 10000000
15 00001101
17 00010001
19 00010011
138 10001010
63 00111111
29. Normalize
s (bit 7) | exp (bits 6–3) | frac (bits 2–0)
Requirement
Set binary point so that numbers of form 1.xxxxx
Adjust all to have leading one
Decrement exponent as shift left
Value  Binary    Fraction   Exponent
128    10000000  1.0000000  7
15     00001101  1.1010000  3
17     00010001  1.0001000  4
19     00010011  1.0011000  4
138    10001010  1.0001010  7
63     00111111  1.1111100  5
30. Rounding
1.BBGRXXX
Guard bit: LSB of result
Round bit: 1st bit removed
Sticky bit: OR of remaining bits
Round up conditions
Round = 1, Sticky = 1 → more than half way
Guard = 1, Round = 1, Sticky = 0 → round to even
Value  Fraction   GRS  Incr?  Rounded
128    1.0000000  000  N      1.000
15     1.1010000  100  N      1.101
17     1.0001000  010  N      1.000
19     1.0011000  110  Y      1.010
138    1.0001010  011  Y      1.001
63     1.1111100  111  Y      10.000
31. Postnormalize
Issue
Rounding may have caused overflow
Handle by shifting right once & incrementing exponent
Value  Rounded  Exp  Adjusted       Result
128    1.000    7    —              128
15     1.101    3    —              15
17     1.000    4    —              16
19     1.010    4    —              20
138    1.001    7    —              144
63     10.000   5    1.000 / exp 6  64
32. Floating Point in C
C Guarantees Two Levels
float single precision
double double precision
Conversions
Casting between int, float, and double changes
numeric values
Double or float to int
Truncates fractional part
Like rounding toward zero
Not defined when out of range or NaN
» Generally sets to TMin
int to double
Exact conversion, as long as int has ≤ 53 bit word size
int to float
Will round according to rounding mode
33. Curious Excel Behavior
                 Number  Subtract 16  Subtract .3  Subtract .01
Default Format   16.31   0.31         0.01         -1.2681E-15
Currency Format  $16.31  $0.31        $0.01        ($0.00)
Spreadsheets use floating point for all computations
Some imprecision for decimal arithmetic
Can yield nonintuitive results to an accountant!
34. Summary
IEEE Floating Point Has Clear Mathematical Properties
Represents numbers of form M × 2^E
Can reason about operations independent of implementation
As if computed with perfect precision and then rounded
Not the same as real arithmetic
Violates associativity/distributivity
Makes life difficult for compilers & serious numerical applications programmers