UNIT-II ARITHMETIC FOR COMPUTERS
Addition and Subtraction – Multiplication – Division – Floating Point Representation – Floating Point Addition and Subtraction.
The document provides information about computer arithmetic and binary number representation. It discusses addition and subtraction in binary, signed and unsigned numbers, overflow, and multiplication algorithms. It explains how binary addition and subtraction work using bit-by-bit operations. For multiplication, it describes the shift-add algorithm where the multiplicand is shifted and added to the product based on the multiplier bits. Hardware for implementing this algorithm with registers is also shown.
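The shift-add algorithm described above can be sketched in Python as a simplified software model of the register hardware (function and parameter names here are illustrative, not from the document):

```python
def shift_add_multiply(multiplicand: int, multiplier: int, bits: int = 8) -> int:
    """Unsigned shift-add multiplication: examine each multiplier bit,
    and when it is 1, add the correspondingly shifted multiplicand
    into the running product."""
    product = 0
    for i in range(bits):
        if (multiplier >> i) & 1:          # current multiplier bit
            product += multiplicand << i   # add shifted multiplicand
    return product

# 1101 x 0111 = 13 x 7 = 91
print(shift_add_multiply(0b1101, 0b0111))  # 91
```

In the hardware version the product register is shifted right instead of shifting the multiplicand left, but the arithmetic is the same.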
Inductive programming incorporates all approaches which are concerned with learning programs or algorithms from incomplete (formal) specifications. Possible inputs in an IP system are:
- a set of training inputs and corresponding outputs, or an output evaluation function, describing the desired behavior of the intended program;
- traces or action sequences which describe the process of calculating specific outputs;
- constraints for the program to be induced concerning its time efficiency or its complexity;
- various kinds of background knowledge, such as standard data types, predefined functions to be used, program schemes or templates describing the data flow of the intended program, heuristics for guiding the search for a solution, or other biases.
The output of an IP system is a program in some programming language containing conditionals and loop or recursive control structures, or in any other kind of Turing-complete representation language.
In many applications the output program must be correct with respect to the examples and partial specification, and this leads to the consideration of inductive programming as a special area inside automatic programming or program synthesis, usually opposed to 'deductive' program synthesis, where the specification is usually complete.
In other cases, inductive programming is seen as a more general area where any declarative programming or representation language can be used and we may even have some degree of error in the examples, as in general machine learning, the more specific area of structure mining or the area of symbolic artificial intelligence. A distinctive feature is the number of examples or partial specification needed. Typically, inductive programming techniques can learn from just a few examples.
The diversity of inductive programming usually comes from the applications and the languages that are used: apart from logic programming and functional programming, other programming paradigms and representation languages have been used or suggested in inductive programming, such as functional logic programming, constraint programming, and probabilistic programming.
Research on the inductive synthesis of recursive functional programs started in the early 1970s and was brought onto firm theoretical foundations with the seminal THESIS system of Summers[6] and the work of Biermann.[7] These approaches were split into two phases: first, input-output examples are transformed into non-recursive programs (traces) using a small set of basic operators; second, regularities in the traces are searched for and used to fold them into a recursive program. The main results up to the mid-1980s are surveyed by Smith.[8]
To multiply binary numbers, refer to the single-bit multiplication table that lists the possible products of multiplying 0 and 1. Multiply each bit in the first number by each bit in the second number and add the shifted partial products to get the final result. For example, multiplying 1101 by 0111 using this method results in a product of 1011011 (13 × 7 = 91 in decimal).
Binary addition involves adding binary numbers by applying the following rules:
0 + 0 = 0
0 + 1 = 1
1 + 0 = 1
1 + 1 = 0 with a carry of 1 to the next column.
To perform multi-bit addition, a half or full adder table is used to calculate the sum and carry out for each bit while accounting for any carries from the previous column. Examples are provided showing how addition is performed on multiple bits using the adder table and propagating any carries to the next position.
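The bit-by-bit rules above amount to a full adder chained across bit positions; the following Python sketch models that (names like `full_adder` and `ripple_add` are illustrative, and bits are stored least-significant first):

```python
def full_adder(a: int, b: int, cin: int) -> tuple[int, int]:
    """One column of binary addition: returns (sum_bit, carry_out)."""
    s = a ^ b ^ cin
    cout = (a & b) | (cin & (a ^ b))
    return s, cout

def ripple_add(x: list[int], y: list[int]) -> list[int]:
    """Add two equal-length bit lists (LSB first), propagating the carry
    from each column into the next, exactly as in hand calculation."""
    carry, out = 0, []
    for a, b in zip(x, y):
        s, carry = full_adder(a, b, carry)
        out.append(s)
    out.append(carry)  # final carry becomes the extra high-order bit
    return out

# 0110 (6) + 0011 (3) = 01001 (9); bit lists are LSB first
print(ripple_add([0, 1, 1, 0], [1, 1, 0, 0]))  # [1, 0, 0, 1, 0]
```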
The document discusses binary number representation and arithmetic. It explains decimal to binary conversion. It also describes signed number representation using sign-magnitude, one's complement, and two's complement methods. The key advantage of two's complement is that addition can be performed using the same method for positive and negative numbers. Subtraction in two's complement is performed by adding the two's complement of the subtrahend to the minuend. Examples of binary addition and subtraction are provided to illustrate these concepts.
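Subtraction by adding the complement of the subtrahend can be sketched as follows (an 8-bit model; function names are assumptions for illustration):

```python
def twos_complement(x: int, bits: int = 8) -> int:
    """Invert all bits and add 1, modulo the word size."""
    return ((~x) + 1) & ((1 << bits) - 1)

def subtract(minuend: int, subtrahend: int, bits: int = 8) -> int:
    """a - b computed as a + twos_complement(b); any end carry past the
    word size is discarded by the mask."""
    return (minuend + twos_complement(subtrahend, bits)) & ((1 << bits) - 1)

print(subtract(0b01011010, 0b00100110))  # 90 - 38 = 52 (0b00110100)
```

A negative difference simply comes out as a two's-complement bit pattern, e.g. `subtract(5, 10)` yields `0b11111011`, the 8-bit encoding of -5.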
The document discusses various number systems used in digital electronics including decimal, binary, hexadecimal, and octal number systems. It provides details on how decimal, binary, and hexadecimal numbers are represented and converted between number systems. Various methods for converting between decimal, binary, hexadecimal, and octal numbers are presented including the sum-of-weights method and division/multiplication methods. The use of binary coded decimal codes for easier conversion between decimal and binary numbers is also covered.
1. The document describes the von Neumann architecture and its key components including the ALU, control unit, memory and I/O devices.
2. It explains the structure of the von Neumann machine and details the functions of components like the program counter, memory address register, and instruction register.
3. The document covers integer and floating point representation in binary, including sign-magnitude, two's complement, and IEEE 754 standard. It describes arithmetic operations like addition, subtraction, multiplication and division on binary numbers.
The document discusses different methods for representing signed binary numbers:
1) Sign-magnitude notation represents positive and negative numbers by using the most significant bit to indicate the sign (0 for positive, 1 for negative) and the remaining bits for the magnitude.
2) One's complement represents negative numbers by inverting all bits of the positive number.
3) Two's complement, the most common method, represents negative numbers by inverting all bits and adding 1 to the result. This allows simple addition to perform subtraction.
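The three encodings above can be compared side by side for one value; a small 8-bit Python sketch (helper names are illustrative):

```python
BITS = 8

def sign_magnitude(n: int) -> str:
    """Most significant bit is the sign; remaining bits hold |n|."""
    sign = '1' if n < 0 else '0'
    return sign + format(abs(n), f'0{BITS - 1}b')

def ones_complement(n: int) -> str:
    """Negative values: invert every bit of the positive pattern."""
    if n >= 0:
        return format(n, f'0{BITS}b')
    return format(~(-n) & ((1 << BITS) - 1), f'0{BITS}b')

def twos_complement(n: int) -> str:
    """Negative values: invert all bits and add 1 (Python's masking
    produces exactly this pattern)."""
    return format(n & ((1 << BITS) - 1), f'0{BITS}b')

print(sign_magnitude(-5))   # 10000101
print(ones_complement(-5))  # 11111010
print(twos_complement(-5))  # 11111011
```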
Binary addition, Binary subtraction, Negative number representation, Subtraction using 1's complement and 2's complement, Binary multiplication and division, Arithmetic in octal and hexadecimal number systems, BCD and Excess-3 arithmetic
Binary arithmetic is essential for digital computers and systems. It involves adding, subtracting, multiplying, and dividing binary numbers using basic rules. Signed binary numbers represent positive and negative values using sign-magnitude, 1's complement, and 2's complement methods. Arithmetic operations on signed binary numbers follow rules for handling the sign bit and complement representations.
This document discusses data representation and number systems in computers. It covers binary, octal, decimal, and hexadecimal number systems. Key points include:
- Data in computers is represented using binary numbers and different number systems allow for more efficient representations.
- Converting between number systems like binary, octal, decimal, and hexadecimal is explained through examples of dividing numbers and grouping bits.
- Signed numbers can be represented using complement representations like one's complement and two's complement, with subtraction implemented through addition of complements. Fast methods for calculating two's complement are described.
The document discusses different number systems used in computing, including binary, hexadecimal, and octal. It explains that computers internally use the binary number system to represent data and perform calculations. Hexadecimal provides a shorthand way to work with binary numbers, with each hex digit corresponding to four binary digits. The document also covers how to convert between decimal, binary, hexadecimal, and octal numbers. It provides examples of expanding numbers in different bases, as well as adding and subtracting binary numbers using complements.
The document discusses different number systems including binary, decimal, octal and hexadecimal. It provides examples of converting between these number systems. The key points covered are:
- Binary, decimal, octal and hexadecimal number systems use different bases (2, 10, 8, 16 respectively) and sets of digits.
- Numbers can be converted between these systems through repetitive division or multiplication by the base to determine each place value digit.
- Fractional numbers are represented similarly with place values decreasing as negative powers of the base moving right of the radix point.
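The repetitive-division method for integer conversion described above can be sketched as (the function name is an assumption for illustration):

```python
def to_base(n: int, base: int) -> str:
    """Convert a non-negative integer to the given base (2..16) by
    repeatedly dividing by the base; each remainder is one digit,
    produced least significant first."""
    digits = "0123456789ABCDEF"
    if n == 0:
        return "0"
    out = []
    while n:
        n, r = divmod(n, base)
        out.append(digits[r])
    return "".join(reversed(out))

print(to_base(91, 2), to_base(91, 8), to_base(91, 16))  # 1011011 133 5B
```

Fractional parts go the other way: repeatedly multiply the fraction by the base and take the integer parts as digits moving right of the radix point.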
The document summarizes computer arithmetic and floating point representation. It discusses:
1) The arithmetic logic unit handles integer and floating point calculations. Integer values are represented in binary using two's complement. Floating point values use a sign-magnitude significand together with an exponent that moves the binary point.
2) Addition and subtraction of integers is done through normal binary addition and subtraction. Multiplication requires generating partial products and addition. Division uses a long division approach.
3) Floating point numbers follow the IEEE 754 standard, which represents values as ±mantissa × 2^exponent in 32- or 64-bit formats. Arithmetic requires aligning operands and performing operations on significands and exponents.
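The sign/exponent/significand split of a 32-bit IEEE 754 value can be made concrete with the standard library; the field widths 1/8/23 and the bias of 127 come from the standard, while the function name is an assumption:

```python
import struct

def decode_float32(x: float):
    """Reinterpret a float as its 32-bit pattern and split the fields."""
    (bits,) = struct.unpack(">I", struct.pack(">f", x))
    sign = bits >> 31
    exponent = (bits >> 23) & 0xFF          # stored with a bias of 127
    fraction = bits & ((1 << 23) - 1)       # significand minus hidden 1
    return sign, exponent - 127, fraction

# -6.5 = -1.625 x 2^2; the fraction bits encode 0.625 = 0.101 binary
print(decode_float32(-6.5))  # (1, 2, 5242880)
```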
The document discusses different number systems including binary, decimal, octal, and hexadecimal. It provides details on how to convert between these number systems, including how to convert fractional numbers between bases. Conversion methods covered include dividing numbers into place values to determine the digit values in the target base. The document also discusses representing negative numbers using 1's complement notation.
Digital Electronics discusses different number systems including binary, decimal, hexadecimal, and octal. It explains how to convert between these number systems using various methods like place value, division, and electronic translators. Electronic encoders and decoders are integrated circuits that can translate between binary and decimal representations.
This document discusses number representation systems used in computers, including binary, decimal, octal, and hexadecimal. It provides examples of converting between these different bases. Specifically, it covers:
1) Converting between decimal, binary, octal, and hexadecimal using positional notation and place values.
2) Representing signed integers in binary using one's complement and two's complement notation.
3) Tables for converting binary numbers to octal and hexadecimal using place values of each base.
4) Examples of converting values between the different number bases both manually and using the provided conversion tables.
Here are the answers to the assignment questions:
1. Overflow occurs when adding 00100110 + 01011010 in two's complement: both operands are positive (38 + 90 = 128), but the 8-bit sum 10000000 has its sign bit set, which exceeds the representable range of -128 to +127.
2. See textbook 1 problem 2-1.c for the solution.
3. See textbook 1 problem 2-11.c for the solution.
4. See textbook 1 problem 2-19.c for the solution.
5. The decimal equivalent of the hexadecimal number 1A (base 16) is 26 (base 10).
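Two's-complement overflow on addition can be detected mechanically: it occurs exactly when both operands share a sign bit but the sum's sign bit differs. A hedged Python sketch (names illustrative):

```python
def add_with_overflow(a: int, b: int, bits: int = 8):
    """Add two two's-complement bit patterns of the given width and
    report overflow: same operand signs, different result sign."""
    mask = (1 << bits) - 1
    sign = 1 << (bits - 1)
    s = (a + b) & mask
    overflow = bool(~(a ^ b) & (a ^ s) & sign)
    return s, overflow

s, ovf = add_with_overflow(0b00100110, 0b01011010)
print(format(s, '08b'), ovf)  # 10000000 True
```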
Chapter 2.1 Introduction to Number System - ISMT College
Binary Number System, Decimal Number System, Octal Number System, Hexadecimal Number System, Conversion, Binary Arithmetic, Signed Binary Number Representation, 1's complement, 2's complement, 9's complement, 10's complement
This document provides an overview of floating point representation and arithmetic based on the IEEE 754 standard. It discusses topics such as normalized and denormalized values, special values like infinity and NaN, and examples using tiny 8-bit floating point formats to illustrate concepts like dynamic range and value distribution. The goal is to explain how computers represent inexact real numbers using a finite number of bits.
Binary coded decimal (BCD) is a numerical coding system that uses binary numbers to represent decimal digits. Each decimal digit from 0 to 9 is represented by a unique 4-bit binary code. BCD allows arithmetic operations like addition and subtraction on numbers. For BCD addition, the binary sum is calculated and if it exceeds 9, then 6 is added to obtain a valid BCD result. For BCD subtraction, the 9's complement of the subtrahend is calculated and added to the minuend, with carries propagated to the next group of bits.
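The add-6 correction for BCD addition described above can be sketched per digit in Python (function name is an assumption for illustration):

```python
def bcd_add_digit(a: int, b: int, carry_in: int = 0):
    """Add two BCD digits (0-9). When the binary sum exceeds 9, add 6
    to skip the six unused 4-bit codes (1010-1111), keep the low four
    bits, and carry 1 into the next digit group."""
    s = a + b + carry_in
    if s > 9:
        s = (s + 6) & 0xF   # correction: +6, keep low 4 bits
        return s, 1
    return s, 0

print(bcd_add_digit(7, 5))  # (2, 1)  i.e. 7 + 5 = 12 in BCD
print(bcd_add_digit(3, 4))  # (7, 0)
```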
This document discusses different number systems including binary, octal, hexadecimal, and their arithmetic operations. It provides examples of adding and subtracting numbers in these systems. Binary addition follows four rules: 0 + 0 = 0, 0 + 1 = 1, 1 + 0 = 1, 1 + 1 = 10. Octal addition is like decimal addition except when the column sum is greater than 7, 8 is subtracted and 1 is carried. Hexadecimal uses numbers 0-9 and letters A-F to represent values 10-15. It provides a table of decimal and hexadecimal equivalents. Hexadecimal addition involves treating multi-digit numbers as in decimal. Subtraction uses two's complement or 15's and 16's complement methods.
Representation of Signed Numbers - R. D. Sivakumar
This document discusses the representation of signed numbers in computers. It explains two common methods: sign-magnitude representation and 2's complement representation. For 2's complement, it provides the step-by-step process to convert a positive number to its negative equivalent in binary. It also discusses interpreting numbers as signed or unsigned and how this affects comparisons. Finally, it outlines the different value ranges for unsigned versus signed integers in an n-bit system.
This document provides an overview of Boolean algebra and logic gates. It discusses topics such as number systems, binary codes, Boolean algebra, logic gates, theorems of Boolean algebra, Boolean functions, simplification using Karnaugh maps, and NAND and NOR implementations. The document also describes binary arithmetic operations including addition, subtraction, multiplication, and division. It defines binary codes and discusses weighted and non-weighted binary codes.
Unit-1 Digital Design and Binary Numbers - Asif Iqbal
These slides contain a general description of digital signals, binary numbers, and basic logic gates. They cover the first unit of the AKTU syllabus.
1) The ALU performs arithmetic operations like addition, subtraction, multiplication and division on fixed point and floating point numbers. Fixed point uses integers while floating point uses a sign, mantissa, and exponent.
2) Binary numbers are added using half adders and full adders which are logic circuits that implement addition using truth tables and K-maps. Subtraction is done using 1's or 2's complement representations.
3) Multiplication is done using sequential or Booth's algorithm approaches, while division uses restoring or non-restoring algorithms. Floating point uses similar addition and subtraction steps but first aligns the exponents.
The document discusses binary multiplication and division. It describes how multiplication is performed by shifting and adding the multiplicand, and division is performed by repeatedly subtracting the divisor from the dividend and tracking the quotient and remainder. It also addresses techniques for efficient multiplication and division circuits and handling signed numbers and negatives.
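The repeated-subtraction view of division in that summary corresponds to the restoring algorithm; a minimal unsigned Python sketch (a software model under assumed names, not the document's exact hardware):

```python
def restoring_divide(dividend: int, divisor: int, bits: int = 8):
    """Unsigned restoring division: shift the partial remainder left,
    bring in the next dividend bit, try subtracting the divisor, and
    restore (undo the subtraction) when the result would go negative.
    Each trial produces one quotient bit."""
    if divisor == 0:
        raise ZeroDivisionError
    remainder, quotient = 0, 0
    for i in reversed(range(bits)):
        remainder = (remainder << 1) | ((dividend >> i) & 1)
        remainder -= divisor
        if remainder < 0:
            remainder += divisor              # restore
            quotient = (quotient << 1) | 0    # quotient bit 0
        else:
            quotient = (quotient << 1) | 1    # quotient bit 1
    return quotient, remainder

print(restoring_divide(91, 7))  # (13, 0)
```

Non-restoring division avoids the restore step by alternating additions and subtractions, at the cost of a final correction.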
The document discusses analogue and digital signals and number systems. It explains that real-world signals are analogue, but digital signals are used for processing because integrated circuits handle digital data more easily. It then covers binary, octal, hexadecimal, and decimal number systems. Finally, it discusses representing negative numbers using sign-magnitude, 1's complement, and 2's complement representations and how arithmetic operations like addition and subtraction work using 2's complement.
- Digital computers perform arithmetic operations like addition, subtraction, multiplication and division on binary numbers.
- Signed binary numbers use the most significant bit as the sign bit to represent positive and negative values. Common representations are sign-magnitude, one's complement, and two's complement.
- Subtraction is performed using the two's complement method by taking the two's complement of the subtrahend and adding it to the minuend. Overflow needs to be handled for accurate results.
The document discusses arithmetic unit operations including addition, subtraction, multiplication, and division of signed and unsigned numbers. It covers integer representation using sign-magnitude, one's complement, and two's complement methods. Two's complement is identified as the best approach for integer representation as it simplifies arithmetic operations and overflow handling. The key hardware components for performing addition and subtraction of signed integers are also summarized.
Digital Logic Design Lecture 1A provides an overview of digital systems and binary numbers. It introduces the difference between analog and digital signals, the process of digitization, and the binary number system. The key concepts covered include representing numbers in binary format, converting between binary and decimal number systems using positional notation and weighted values, and introducing octal and hexadecimal numbering bases.
The document discusses different number systems including decimal, binary, octal, and hexadecimal. It explains the concept of a base-N number system and how digits are arranged from most to least significant. It then provides more details on the decimal, binary, octal, and hexadecimal number systems including examples. The document also covers topics like 1's complement, 2's complement, signed numbers, arithmetic operations, and number conversions between different bases.
This document discusses decimal arithmetic operations using binary coded decimal (BCD) numbers. It describes how decimal numbers are represented in BCD format and processed using microoperations in the arithmetic logic unit (ALU). Addition and subtraction of decimal numbers are performed by converting the numbers to BCD, performing binary addition or subtraction on the digits, and converting the output back to decimal if needed. Block diagrams of BCD adders and examples of decimal addition and subtraction are provided.
This document discusses digital electronics topics including number systems, codes, Boolean algebra, and digital circuits. It provides examples and explanations of converting between decimal, binary, octal, and hexadecimal number systems. Binary coded decimal, gray code, and excess-3 code are also defined. Combinational and sequential digital circuits as well as memory devices are listed as topics to be covered.
This document outlines the syllabus for the subject Digital Principles and System Design. It contains 5 units that cover topics such as Boolean algebra, logic gates, combinational logic, sequential logic, asynchronous sequential logic, memory and programmable logic. The objectives of the course are to understand logic simplification methods, design combinational and sequential logic circuits using HDL, understand various types of memory and programmable devices. The syllabus allocates 45 periods to cover all the units in depth. Relevant textbooks and references are also provided.
The document discusses different number systems including binary, octal, decimal, and hexadecimal. It explains that number systems have a radix or base, which determines the set of symbols used and their positional values. The key representations for binary numbers discussed are sign-magnitude, one's complement, and two's complement, which provide different methods for representing positive and negative numbers. The document provides examples of addition, subtraction, multiplication, and division operations in binary.
This document provides lecture notes on digital system design. It covers topics like logic simplification, combinational logic design, understanding binary and other number systems, binary operations, and Boolean algebra. The first section discusses decimal, binary, octal and hexadecimal number systems. Later sections explain binary addition, subtraction, multiplication and conversions between number bases. Signed number representations like 1's complement and 2's complement are also introduced. Finally, the document discusses Boolean algebra, logic functions, truth tables, and basic logic gates like AND and INVERTER.
The document summarizes computer arithmetic and the arithmetic logic unit (ALU). It discusses:
1) The ALU handles integer and floating point calculations. It may have a separate floating point unit.
2) There are different methods for representing integers like sign-magnitude and two's complement. Two's complement is commonly used.
3) Floating point numbers use a sign, significand, and exponent to represent real numbers in a normalized format like ±.significand × 2exponent.
The document summarizes computer arithmetic and the arithmetic logic unit (ALU). It discusses:
1) The ALU handles integer and floating point calculations. It may have a separate floating point unit.
2) There are different methods for representing integers like sign-magnitude and two's complement. Two's complement is commonly used.
3) Floating point numbers use a sign, significand, and exponent to represent real numbers in a normalized format like ±.significand × 2exponent.
The document discusses various number systems including binary, octal, hexadecimal and their conversions. It describes procedures to convert between different number bases by partitioning the numbers into groups of bits corresponding to the target base. The document also covers signed number representations, binary codes for encoding decimal digits, and fixed and floating point number representations.
This document provides information about Boolean algebra. It begins with an introduction and table of contents. It then discusses the key concepts of Boolean algebra including constants, variables, functions, logical expressions, and logical operations. Features of Boolean algebra are presented, as well as the postulates and theorems. Laws of Boolean algebra like complement, AND, OR, commutative, associative, distributive, and absorption laws are defined. Examples are provided to illustrate concepts like consensus theorem, transposition theorem, De Morgan's theorem, and other theorems. The document also discusses binary coded decimal, excess-3 code, Gray code, and provides examples of arithmetic operations and conversions between different numeric systems.
Understanding Inductive Bias in Machine LearningSUTEJAS
This presentation explores the concept of inductive bias in machine learning. It explains how algorithms come with built-in assumptions and preferences that guide the learning process. You'll learn about the different types of inductive bias and how they can impact the performance and generalizability of machine learning models.
The presentation also covers the positive and negative aspects of inductive bias, along with strategies for mitigating potential drawbacks. We'll explore examples of how bias manifests in algorithms like neural networks and decision trees.
By understanding inductive bias, you can gain valuable insights into how machine learning models work and make informed decisions when building and deploying them.
DEEP LEARNING FOR SMART GRID INTRUSION DETECTION: A HYBRID CNN-LSTM-BASED MODELgerogepatton
As digital technology becomes more deeply embedded in power systems, protecting the communication
networks of Smart Grids (SG) has emerged as a critical concern. Distributed Network Protocol 3 (DNP3)
represents a multi-tiered application layer protocol extensively utilized in Supervisory Control and Data
Acquisition (SCADA)-based smart grids to facilitate real-time data gathering and control functionalities.
Robust Intrusion Detection Systems (IDS) are necessary for early threat detection and mitigation because
of the interconnection of these networks, which makes them vulnerable to a variety of cyberattacks. To
solve this issue, this paper develops a hybrid Deep Learning (DL) model specifically designed for intrusion
detection in smart grids. The proposed approach is a combination of the Convolutional Neural Network
(CNN) and the Long-Short-Term Memory algorithms (LSTM). We employed a recent intrusion detection
dataset (DNP3), which focuses on unauthorized commands and Denial of Service (DoS) cyberattacks, to
train and test our model. The results of our experiments show that our CNN-LSTM method is much better
at finding smart grid intrusions than other deep learning algorithms used for classification. In addition,
our proposed approach improves accuracy, precision, recall, and F1 score, achieving a high detection
accuracy rate of 99.50%.
Harnessing WebAssembly for Real-time Stateless Streaming PipelinesChristina Lin
Traditionally, dealing with real-time data pipelines has involved significant overhead, even for straightforward tasks like data transformation or masking. However, in this talk, we’ll venture into the dynamic realm of WebAssembly (WASM) and discover how it can revolutionize the creation of stateless streaming pipelines within a Kafka (Redpanda) broker. These pipelines are adept at managing low-latency, high-data-volume scenarios.
International Conference on NLP, Artificial Intelligence, Machine Learning an...gerogepatton
International Conference on NLP, Artificial Intelligence, Machine Learning and Applications (NLAIM 2024) offers a premier global platform for exchanging insights and findings in the theory, methodology, and applications of NLP, Artificial Intelligence, Machine Learning, and their applications. The conference seeks substantial contributions across all key domains of NLP, Artificial Intelligence, Machine Learning, and their practical applications, aiming to foster both theoretical advancements and real-world implementations. With a focus on facilitating collaboration between researchers and practitioners from academia and industry, the conference serves as a nexus for sharing the latest developments in the field.
Introduction- e - waste – definition - sources of e-waste– hazardous substances in e-waste - effects of e-waste on environment and human health- need for e-waste management– e-waste handling rules - waste minimization techniques for managing e-waste – recycling of e-waste - disposal treatment methods of e- waste – mechanism of extraction of precious metal from leaching solution-global Scenario of E-waste – E-waste in India- case studies.
6th International Conference on Machine Learning & Applications (CMLA 2024)ClaraZara1
6th International Conference on Machine Learning & Applications (CMLA 2024) will provide an excellent international forum for sharing knowledge and results in theory, methodology and applications of on Machine Learning & Applications.
Low power architecture of logic gates using adiabatic techniquesnooriasukmaningtyas
The growing significance of portable systems to limit power consumption in ultra-large-scale-integration chips of very high density, has recently led to rapid and inventive progresses in low-power design. The most effective technique is adiabatic logic circuit design in energy-efficient hardware. This paper presents two adiabatic approaches for the design of low power circuits, modified positive feedback adiabatic logic (modified PFAL) and the other is direct current diode based positive feedback adiabatic logic (DC-DB PFAL). Logic gates are the preliminary components in any digital circuit design. By improving the performance of basic gates, one can improvise the whole system performance. In this paper proposed circuit design of the low power architecture of OR/NOR, AND/NAND, and XOR/XNOR gates are presented using the said approaches and their results are analyzed for powerdissipation, delay, power-delay-product and rise time and compared with the other adiabatic techniques along with the conventional complementary metal oxide semiconductor (CMOS) designs reported in the literature. It has been found that the designs with DC-DB PFAL technique outperform with the percentage improvement of 65% for NOR gate and 7% for NAND gate and 34% for XNOR gate over the modified PFAL techniques at 10 MHz respectively.
Electric vehicle and photovoltaic advanced roles in enhancing the financial p...IJECEIAES
Climate change's impact on the planet forced the United Nations and governments to promote green energies and electric transportation. The deployments of photovoltaic (PV) and electric vehicle (EV) systems gained stronger momentum due to their numerous advantages over fossil fuel types. The advantages go beyond sustainability to reach financial support and stability. The work in this paper introduces the hybrid system between PV and EV to support industrial and commercial plants. This paper covers the theoretical framework of the proposed hybrid system including the required equation to complete the cost analysis when PV and EV are present. In addition, the proposed design diagram which sets the priorities and requirements of the system is presented. The proposed approach allows setup to advance their power stability, especially during power outages. The presented information supports researchers and plant owners to complete the necessary analysis while promoting the deployment of clean energy. The result of a case study that represents a dairy milk farmer supports the theoretical works and highlights its advanced benefits to existing plants. The short return on investment of the proposed approach supports the paper's novelty approach for the sustainable electrical system. In addition, the proposed system allows for an isolated power setup without the need for a transmission line which enhances the safety of the electrical network
A SYSTEMATIC RISK ASSESSMENT APPROACH FOR SECURING THE SMART IRRIGATION SYSTEMSIJNSA Journal
The smart irrigation system represents an innovative approach to optimize water usage in agricultural and landscaping practices. The integration of cutting-edge technologies, including sensors, actuators, and data analysis, empowers this system to provide accurate monitoring and control of irrigation processes by leveraging real-time environmental conditions. The main objective of a smart irrigation system is to optimize water efficiency, minimize expenses, and foster the adoption of sustainable water management methods. This paper conducts a systematic risk assessment by exploring the key components/assets and their functionalities in the smart irrigation system. The crucial role of sensors in gathering data on soil moisture, weather patterns, and plant well-being is emphasized in this system. These sensors enable intelligent decision-making in irrigation scheduling and water distribution, leading to enhanced water efficiency and sustainable water management practices. Actuators enable automated control of irrigation devices, ensuring precise and targeted water delivery to plants. Additionally, the paper addresses the potential threat and vulnerabilities associated with smart irrigation systems. It discusses limitations of the system, such as power constraints and computational capabilities, and calculates the potential security risks. The paper suggests possible risk treatment methods for effective secure system operation. In conclusion, the paper emphasizes the significant benefits of implementing smart irrigation systems, including improved water conservation, increased crop yield, and reduced environmental impact. Additionally, based on the security analysis conducted, the paper recommends the implementation of countermeasures and security approaches to address vulnerabilities and ensure the integrity and reliability of the system. 
By incorporating these measures, smart irrigation technology can revolutionize water management practices in agriculture, promoting sustainability, resource efficiency, and safeguarding against potential security threats.
3. Syllabus – Unit II
UNIT-II ARITHMETIC FOR COMPUTERS
Addition and Subtraction – Multiplication – Division – Floating
Point Representation – Floating Point Addition and Subtraction.
4. Text Books
• Book 1:
o Name: Computer Organization and Design: The
Hardware/Software Interface
o Authors: David A. Patterson and John L. Hennessy
o Publisher: Morgan Kaufmann / Elsevier
o Edition: Fifth Edition, 2014
• Book 2:
o Name: Computer Organization and Embedded Systems
o Authors: Carl Hamacher, Zvonko Vranesic, Safwat Zaky and
Naraig Manjikian
o Publisher: Tata McGraw Hill
o Edition: Sixth Edition, 2012
5. Numbers
• Numbers may be represented in any base.
• Computers use base-2 numbers, called binary numbers.
• A single digit of a binary number is the “atom” of computing,
since all information is composed of binary digits, or bits.
• Alternatives: high or low, on or off, true or false, or 1 or 0
• Least Significant Bit (LSB): the rightmost bit in a MIPS word.
• Most Significant Bit (MSB): the leftmost bit in a MIPS word.
6. The Binary Number System
• Name
o “binarius” (Latin) => two
• Characteristics
o Two symbols: 0 and 1
o Positional
• 1010B ≠ 1100B
• Most (digital) computers use the binary number system
Terminology
• Bit: a binary digit
• Byte: (typically) 8 bits
7. Number Representation
• Three systems:
o Sign-and-magnitude
o 1’s complement
o 2’s complement
• In all three systems, the leftmost bit is 0 for positive
numbers and 1 for negative numbers.
• Positive values have identical representations in all
systems.
• Negative values have different representations.
• Ex: 1011two
8. Possible Representations
Sign Magnitude   One's Complement   Two's Complement
000 = +0         000 = +0           000 = 0
001 = +1         001 = +1           001 = +1
010 = +2         010 = +2           010 = +2
011 = +3         011 = +3           011 = +3
100 = -0         100 = -3           100 = -4
101 = -1         101 = -2           101 = -3
110 = -2         110 = -1           110 = -2
111 = -3         111 = -0           111 = -1
• Issues:
o balance – equal number of negatives and positives
o ambiguous zero – more than one zero representation (sign magnitude: 000 and 100; one's complement: 000 and 111)
o ease of arithmetic operations
• Which representation is best? Can we get both balance and a non-ambiguous zero?
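The three encodings in the table can be reproduced with a short sketch (illustrative Python; the function name and width handling are my own, not from the slides):

```python
def representations(n, bits):
    # Return (sign-magnitude, one's complement, two's complement)
    # bit strings for integer n in the given word width.
    assert -(1 << (bits - 1)) < n < (1 << (bits - 1))
    mag = format(abs(n), f"0{bits - 1}b")
    if n >= 0:
        # Positive values look identical in all three systems.
        sm = oc = tc = "0" + mag
    else:
        sm = "1" + mag                                   # set the sign bit
        oc = "".join("1" if b == "0" else "0" for b in "0" + mag)  # flip all bits
        tc = format((1 << bits) + n, f"0{bits}b")        # 2^bits + n
    return sm, oc, tc

print(representations(-3, 3))  # ('111', '100', '101')
print(representations(+2, 3))  # ('010', '010', '010')
```

Comparing the output against the 3-bit table above is a quick way to check the table row by row.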
9. Signed Magnitude
• In this notation, an extra bit is added to the left of the
number to notate its sign.
• 0 indicates +ve and 1 indicates -ve.
• Using 8 bits,
• +13 is 00001101 and +11 is 00001011.
• -13 is 10001101 and -11 is 10001011.
10. 1's Complement
• In this notation positive numbers are represented exactly
as regular binary numbers.
• Negative numbers are represented simply by flipping the
bits, i.e., 0's become 1's and 1's become 0's.
• So 13 will be 00001101 and 11 will be 00001011.
• -13 will be 11110010 and -11 will be 11110100.
11. 2's Complement
• In this method a negative number is notated by first
determining the 1's complement of the positive number
and then adding 1 to it.
• So for 8-bit -13: the 1's complement of 13 is 11110010;
adding 1 gives 11110011.
• Similarly, -11 will be 11110101.
14. Two's Complement Operations
• Sign Extension Shortcut: To convert an n-bit integer into an
integer with more than n bits – i.e., to make a narrow integer fill
a wider word – replicate the most significant bit (msb) of the
original number to fill the new bits to its left
o Example: 4-bit => 8-bit
0010 = 0000 0010
1010 = 1111 1010
o Why is this correct? Prove!
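The sign-extension shortcut can be sketched in a few lines (illustrative Python; the helper name is my own):

```python
def sign_extend(value, from_bits, to_bits):
    # Treat the low from_bits of value as a two's complement number
    # and replicate its msb into the new high bits on the left.
    mask = (1 << from_bits) - 1
    value &= mask
    if value & (1 << (from_bits - 1)):          # msb set -> negative
        value |= ((1 << to_bits) - 1) & ~mask   # fill new bits with 1s
    return value

print(format(sign_extend(0b0010, 4, 8), "08b"))  # 00000010
print(format(sign_extend(0b1010, 4, 8), "08b"))  # 11111010
```

The two prints reproduce the 4-bit to 8-bit examples on the slide.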
31. Overflow
• When the actual result of an arithmetic operation is
outside the representable range, an arithmetic
overflow has occurred.
• No overflow when adding a positive and a negative
number
• No overflow when subtracting numbers with the same
sign
• Overflow occurs when adding two positive numbers
produces a negative result, or when adding two
negative numbers produces a positive result.
33. Overflow - Example
• No overflow when adding a positive and a negative
number
• A+B
• A = +3
• B = -2
+3 => 011
-2 => 110
• -----------
+1 => 001
• Result is within the representable range
n=3 bits => Range: - 4 to +3
+3 011
+2 010
+1 001
0 000
-1 111
-2 110
-3 101
-4 100
34. Overflow - Example
• No overflow when subtracting numbers with the same
sign
• A - B
• A = - 3
• B = - 2
• (-3) – (-2) => -3 + 2
-3 => 101
+2 => 010
• -----------
-1 => 111
• Result is within the representable range
n=3 bits => Range: - 4 to +3
+3 011
+2 010
+1 001
0 000
-1 111
-2 110
-3 101
-4 100
35. Overflow - Example
• Overflow occurs when adding two positive numbers
produces a negative result, or when adding two
negative numbers produces a positive result.
• A + B
• A = + 3
• B = + 3
+3 => 011
+3 => 011
• -----------
-2 => 110
n=3 bits => Range: - 4 to +3
+3 011
+2 010
+1 001
0 000
-1 111
-2 110
-3 101
-4 100
36. Detecting Overflow
• No overflow when adding a positive and a negative number
• No overflow when subtracting numbers with the same sign
• Overflow occurs when the result has the “wrong” sign (verify!):
Operation   Operand A   Operand B   Result Indicating Overflow
A + B       >= 0        >= 0        < 0
A + B       < 0         < 0         >= 0
A – B       >= 0        < 0         < 0
A – B       < 0         >= 0        >= 0
• Consider the operations A + B, and A – B
o can overflow occur if B is 0?
o can overflow occur if A is 0?
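The sign-based rule in the table can be checked with a small sketch (illustrative Python; operands are passed as bit patterns of the given width):

```python
def overflow_in_add(a, b, bits):
    # Two's complement overflow for a + b, using only sign bits:
    # overflow iff the operands have the same sign and the
    # truncated result has the opposite sign.
    mask = (1 << bits) - 1
    sign = 1 << (bits - 1)
    r = (a + b) & mask                   # truncated sum
    return bool(~(a ^ b) & (a ^ r) & sign)

print(overflow_in_add(0b011, 0b011, 3))  # True: +3 + +3 yields -2
print(overflow_in_add(0b011, 0b110, 3))  # False: +3 + (-2) = +1
print(overflow_in_add(0b101, 0b110, 3))  # True: -3 + -2 is below -4
```

Subtraction A - B can be checked the same way as A + (-B), with the caveat that negating the most negative value is itself not representable.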
37. Multiply
• Grade school shift-add method:
    Multiplicand     1000
    Multiplier     x 1001
                   ------
                     1000
                    0000
                   0000
                  1000
                  --------
    Product       01001000
• m bits x n bits = m+n bit product
• Binary makes it easy:
o multiplier bit 1 => copy multiplicand (1 x multiplicand)
o multiplier bit 0 => place 0 (0 x multiplicand)
38. Shift-add Multiplier
Hardware (from the datapath figure): a 64-bit Multiplicand register (shift left), a 64-bit Product register (write), a 32-bit Multiplier register (shift right), a 64-bit ALU, and control/test logic. The 32-bit multiplicand starts in the right half of the Multiplicand register; the Product register is initialized to 0.
Algorithm:
Start
1. Test Multiplier0 (the least significant bit of the Multiplier register)
1a. If Multiplier0 = 1, add the multiplicand to the product and place the result in the Product register (if Multiplier0 = 0, do nothing)
2. Shift the Multiplicand register left 1 bit
3. Shift the Multiplier register right 1 bit
32nd repetition? No: < 32 repetitions => go to step 1. Yes: 32 repetitions => Done
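The register-level steps above can be simulated directly (a minimal sketch in Python, assuming unsigned operands; the function name and default width are my own):

```python
def shift_add_multiply(multiplicand, multiplier, bits=32):
    # Simulate the shift-add datapath: the multiplicand starts in the
    # right half of a 2*bits-wide register and shifts left; the
    # multiplier shifts right; the product starts at 0.
    product = 0
    mcand = multiplicand
    for _ in range(bits):            # "32 repetitions" for 32-bit operands
        if multiplier & 1:           # 1. test Multiplier0
            product += mcand         # 1a. add multiplicand to product
        mcand <<= 1                  # 2. shift Multiplicand left 1 bit
        multiplier >>= 1             # 3. shift Multiplier right 1 bit
    return product & ((1 << (2 * bits)) - 1)

print(shift_add_multiply(0b1000, 0b1001, 4))  # 72, i.e. 01001000two
```

The example reproduces the grade-school 1000 x 1001 multiplication from the earlier slide.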
40. Signed Multiplication
Booth Algorithm
The Booth Algorithm:
• The Booth algorithm generates a 2n-bit product and treats
both positive and negative 2's complement n-bit operands
uniformly.
• The Booth algorithm has two attractive features:
First, it handles both positive and negative multipliers
uniformly.
Second, it achieves some efficiency in the number of
additions required when the multiplier has a few large blocks of
1s.
42. Booth Algorithm
Booth Recoded Multipliers
Ex: -6 in 2's complement is 11010
• Add a zero to the RHS of the multiplier: 1 1 0 1 0 | 0
• Perform recoding: scan adjacent bit pairs from right to left;
each recoded digit is (the bit to its right) minus (the current bit)
  1 1 0 1 0 0  =>  0 -1 +1 -1 0
• So the recoded multiplier is 0 -1 +1 -1 0
43. Booth Multiplication
• Let us perform +13 x -6
Steps:
• Recode the multiplier: -6 when recoded is 0 -1 +1 -1 0
  +13 = 0 1 1 0 1
   -6 = 0 -1 +1 -1 0
Note: the partial product for each recoded multiplier digit is:
   0 => all 0s
  +1 => the multiplicand
  -1 => the 2's complement of the multiplicand

          0 1 1 0 1
     x 0 -1 +1 -1 0
  -------------------
  0 0 0 0 0 0 0 0 0 0    (digit 0: all 0s)
  1 1 1 1 1 0 0 1 1 0    (digit -1: 2's c of multiplicand, shifted left 1, sign-extended)
  0 0 0 0 1 1 0 1 0 0    (digit +1: multiplicand, shifted left 2, sign-extended)
  1 1 1 0 0 1 1 0 0 0    (digit -1: 2's c of multiplicand, shifted left 3, sign-extended)
  0 0 0 0 0 0 0 0 0 0    (digit 0: all 0s)
  -------------------
  1 1 1 0 1 1 0 0 1 0

Final Product = 1110110010two = -78ten (any carry out of the MSB is ignored)
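The recoding and the partial-product sum can be checked with a short sketch (illustrative Python; the function name is my own):

```python
def booth_multiply(m, q, bits):
    # Booth's algorithm on two's complement operands: append a 0 on the
    # right of the multiplier, then each recoded digit is
    # (bit to the right) - (current bit), worth -1, 0, or +1.
    mask = (1 << bits) - 1
    q &= mask                       # multiplier as a bit pattern
    product = 0
    prev = 0                        # the appended 0
    for i in range(bits):
        bit = (q >> i) & 1
        digit = prev - bit          # recoded digit for position i
        product += digit * m * (1 << i)
        prev = bit
    return product

print(booth_multiply(13, -6, 5))    # -78, matching the worked example
```

Note how the run of 1s in 11010 turns into a single -1/+1 pair, which is where Booth's algorithm saves additions.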
44. Division
                   1001      Quotient
  Divisor  1000 | 1001010    Dividend
                 -1000
                 ------
                     10
                    101
                   1010
                  -1000
                  -----
                     10      Remainder
• Junior school method: see how big a multiple of the divisor can be
subtracted, creating a quotient digit at each step
• Binary makes it easy: first, try 1 x divisor; if too big, use 0 x divisor
• Dividend = (Quotient x Divisor) + Remainder
45. Restoring Division
Hardware (from the datapath figure): a 64-bit Divisor register (shift right), a 64-bit Remainder register (write), a 32-bit Quotient register (shift left), a 64-bit ALU, and control/test logic. The 32-bit divisor starts in the left half of the Divisor register; the Remainder register is initialized with the dividend at the right; the Quotient register is initialized to 0.
Algorithm:
Start
1. Subtract the Divisor register from the Remainder register and place the result in the Remainder register
2. Test the Remainder:
2a. If Remainder >= 0, shift the Quotient register to the left, setting the new rightmost bit to 1
2b. If Remainder < 0, restore the original value by adding the Divisor register to the Remainder register and place the sum in the Remainder register; also shift the Quotient register to the left, setting the new least significant bit to 0
3. Shift the Divisor register right 1 bit
33rd repetition? No: < 33 repetitions => go to step 1. Yes: 33 repetitions => Done
Why 33? We shall see later…
46. Division
Example: 0111 / 0010, using the restoring algorithm of the previous slide.

Iteration  Step                     Quotient  Divisor    Remainder
0          init                     0000      0010 0000  0000 0111
1          1: R = R - D             0000      0010 0000  1110 0111
           2b: restore, R = R + D   0000      0010 0000  0000 0111
           3: shift D right         0000      0001 0000  0000 0111
2          1                        0000      0001 0000  1111 0111
           2b                       0000      0001 0000  0000 0111
           3                        0000      0000 1000  0000 0111
3          1                        0000      0000 1000  1111 1111
           2b                       0000      0000 1000  0000 0111
           3                        0000      0000 0100  0000 0111
4          1                        0000      0000 0100  0000 0011
           2a                       0001      0000 0100  0000 0011
           3                        0001      0000 0010  0000 0011
5          1                        0001      0000 0010  0000 0001
           2a                       0011      0000 0010  0000 0001
           3                        0011      0000 0001  0000 0001

Side calculations:
Iteration 1: R = Remainder - Divisor = 0000 0111 - 0010 0000 = 1110 0111 (< 0), so restore: R = R + D
Iteration 2: R = 0000 0111 - 0001 0000 = 1111 0111 (< 0), so restore: R = R + D
Iteration 3: R = 0000 0111 - 0000 1000 = 1111 1111 (< 0), so restore: R = R + D
Iteration 4: R = 0000 0111 - 0000 0100 = 0000 0011 (>= 0)
Iteration 5: R = 0000 0011 - 0000 0010 = 0000 0001 (>= 0)
Result: Quotient = 0011 (3), Remainder = 0001 (1), i.e., 7 = 3 x 2 + 1.
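The trace above can be reproduced by simulating the registers (a minimal Python sketch, assuming unsigned operands; the function name and 4-bit default are my own):

```python
def restoring_divide(dividend, divisor, bits=4):
    # Simulate restoring division: the divisor starts in the left half
    # of a 2*bits-wide register and shifts right each iteration; the
    # remainder register starts holding the dividend.
    rem = dividend
    div = divisor << bits            # left half of the double-width register
    quotient = 0
    for _ in range(bits + 1):        # the "33 repetitions" for 32-bit operands
        rem -= div                   # 1. subtract divisor from remainder
        if rem >= 0:
            quotient = (quotient << 1) | 1   # 2a. shift a 1 into the quotient
        else:
            rem += div               # 2b. restore the remainder...
            quotient <<= 1           #     ...and shift a 0 into the quotient
        div >>= 1                    # 3. shift divisor right 1 bit
    return quotient, rem

print(restoring_divide(0b0111, 0b0010, 4))  # (3, 1): 7 / 2 = 3 remainder 1
```

The bits+1 iterations mirror the "why 33?" count in the flowchart: the first subtraction tests the divisor against the high half of the remainder register.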
47. Floating Point
• We need a way to represent
o numbers with fractions, e.g., 3.1416
o very small numbers (in absolute value), e.g., 0.00000000023
o very large numbers (in absolute value), e.g., -3.15576 x 10^46
48. Floating Point
• Still use a fixed number of bits
o Sign bit S, exponent E, significand F
o Value: (-1)^S x (1 + F) x 2^E
• IEEE 754 standard

  Format            Size  Exponent  Significand  Range
  Single precision  32b   8b        23b          ~2 x 10^±38
  Double precision  64b   11b       52b          ~2 x 10^±308

  Word layout: | S | E | F |
49. IEEE 754 Floating-point Standard
• IEEE 754 floating point standard:
o single precision: one word
  bit 31: sign; bits 30 to 23: 8-bit exponent; bits 22 to 0: 23-bit significand
o double precision: two words
  first word has bit 31: sign; bits 30 to 20: 11-bit exponent; bits 19 to 0: upper 20 bits of 52-bit significand
  second word has bits 31 to 0: lower 32 bits of 52-bit significand
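The single-precision field layout can be inspected from a real float (a sketch using Python's standard `struct` module; the function name is my own):

```python
import struct

def decode_single(x):
    # Pack x as an IEEE 754 single-precision float, then pull out the
    # (sign, 8-bit exponent, 23-bit significand) fields described above.
    (w,) = struct.unpack(">I", struct.pack(">f", x))
    sign = w >> 31
    exponent = (w >> 23) & 0xFF
    fraction = w & ((1 << 23) - 1)
    return sign, exponent, fraction

print(decode_single(-0.75))  # (1, 126, 4194304): fraction 100 0000 ... 0
```

Note that `struct.pack(">f", x)` rounds x to single precision first, so values like 0.1 will show a rounded 23-bit fraction.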
50. Floating Point Exponent
• Exponent specified in biased or excess notation
• Why?
o To simplify sorting
o Sign bit is MSB to ease sorting
o With a 2's complement exponent:
• Large numbers have a positive exponent
• Small numbers have a negative exponent
• Sorting does not follow naturally
51. Excess or Biased Exponent
• Value: (-1)^S x (1 + F) x 2^(E - bias)
o SP: bias is 127
o DP: bias is 1023

  Exponent  2's Compl   Excess-127
  -127      1000 0001   0000 0000
  -126      1000 0010   0000 0001
  …         …           …
  +127      0111 1111   1111 1110
52. Floating Point Normalization
• The S,E,F representation allows more than one representation for a particular value, e.g.,
  1.0 x 10^5 = 0.1 x 10^6 = 10.0 x 10^4
o This makes comparison operations difficult
o Prefer to have a single representation
• Hence, normalize by convention:
o Only one digit to the left of the radix point
o In binary, that digit must be a 1
• Since the leading '1' is implicit, there is no need to store it
• Hence, we obtain one extra bit of precision for free
53. FP Overflow/Underflow
• FP Overflow
o Analogous to integer overflow
o Result is too big to represent
o Means exponent is too big
• FP Underflow
o Result is too small to represent
o Means exponent is too small (too negative)
• Both can raise an exception under IEEE 754
54. IEEE 754 Special Cases

  Single Precision          Double Precision          Value
  Exponent  Significand     Exponent  Significand
  0         0               0         0               0
  0         nonzero         0         nonzero         denormalized number
  1-254     anything        1-2046    anything        floating-point number
  255       0               2047      0               infinity
  255       nonzero         2047      nonzero         NaN (Not a Number)
55. Show the IEEE 754 binary representation of the number -0.75ten in single and double precision

Converting -0.75ten to binary:
  0.75 x 2 = 1.50 (take only the integral part, i.e., 1)
  0.50 x 2 = 1.00 (take only the integral part, i.e., 1)
  So 0.75ten = 0.11two, and -0.75ten = -0.11two x 2^0
After normalizing, the value is -1.1two x 2^-1
The general representation for a single precision number is
  (-1)^S x (1 + F) x 2^(E - 127)
so the exponent field must hold E = -1 + 127 = 126. Writing -1.1two x 2^-1 in this form yields
  (-1)^1 x (1 + .1000 0000 0000 0000 0000 000two) x 2^(126 - 127)

56. The single precision binary representation of -0.75ten is then
  1 | 0111 1110 | 100 0000 0000 0000 0000 0000
  (sign = 1, 8-bit exponent = 126, 23-bit fraction)
The double precision representation is
  1 | 011 1111 1110 | 1000 0000 … 0000 (52-bit fraction)
  (sign = 1, 11-bit exponent = -1 + 1023 = 1022)
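The worked example can be verified against Python's own IEEE 754 encoding (a sketch using the standard `struct` module):

```python
import struct

# Pack -0.75 as single and double precision and view the raw bit patterns.
single = struct.unpack(">I", struct.pack(">f", -0.75))[0]
double = struct.unpack(">Q", struct.pack(">d", -0.75))[0]

print(format(single, "032b"))  # sign 1, exponent 01111110, fraction 100...0
print(format(double, "064b"))  # sign 1, exponent 01111111110, fraction 100...0

# Fields from the worked example: exponent 126 (single), 1022 (double),
# fraction = .100...0 in both cases.
assert single == (1 << 31) | (126 << 23) | (1 << 22)
assert double == (1 << 63) | (1022 << 52) | (1 << 51)
```

The asserts pass, confirming the bias arithmetic (-1 + 127 = 126 and -1 + 1023 = 1022) and the implicit leading 1.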