UNIT-II ARITHMETIC FOR COMPUTERS
Addition and Subtraction – Multiplication – Division – Floating Point Representation – Floating Point Addition and Subtraction.
The document provides information about computer arithmetic and binary number representation. It discusses addition and subtraction in binary, signed and unsigned numbers, overflow, and multiplication algorithms. It explains how binary addition and subtraction work using bit-by-bit operations. For multiplication, it describes the shift-add algorithm where the multiplicand is shifted and added to the product based on the multiplier bits. Hardware for implementing this algorithm with registers is also shown.
Inductive programming covers all approaches concerned with learning programs or algorithms from incomplete (formal) specifications. Possible inputs to an IP system include:
- a set of training inputs and corresponding outputs, or an output evaluation function, describing the desired behavior of the intended program;
- traces or action sequences describing the process of calculating specific outputs;
- constraints on the program to be induced, concerning its time efficiency or its complexity;
- various kinds of background knowledge, such as standard data types, predefined functions to be used, program schemes or templates describing the data flow of the intended program, heuristics for guiding the search for a solution, or other biases.
Output of an IP system is a program in some arbitrary programming language containing conditionals and loop or recursive control structures, or any other kind of Turing-complete representation language.
In many applications the output program must be correct with respect to the examples and partial specification, and this leads to the consideration of inductive programming as a special area inside automatic programming or program synthesis, usually opposed to 'deductive' program synthesis, where the specification is usually complete.
In other cases, inductive programming is seen as a more general area where any declarative programming or representation language can be used and we may even have some degree of error in the examples, as in general machine learning, the more specific area of structure mining or the area of symbolic artificial intelligence. A distinctive feature is the number of examples or partial specification needed. Typically, inductive programming techniques can learn from just a few examples.
The diversity of inductive programming usually comes from the applications and the languages that are used: apart from logic programming and functional programming, other programming paradigms and representation languages have been used or suggested in inductive programming, such as functional logic programming, constraint programming, and probabilistic programming.
Research on the inductive synthesis of recursive functional programs started in the early 1970s and was brought onto firm theoretical foundations with the seminal THESIS system of Summers[6] and work of Biermann.[7] These approaches were split into two phases: first, input-output examples are transformed into non-recursive programs (traces) using a small set of basic operators; second, regularities in the traces are searched for and used to fold them into a recursive program. The main results up to the mid-1980s are surveyed by Smith.[8]
Binary addition involves adding binary numbers by applying the following rules:
0 + 0 = 0
0 + 1 = 1
1 + 0 = 1
1 + 1 = 0 with a carry of 1 to the next column.
To perform multi-bit addition, a half or full adder table is used to calculate the sum and carry out for each bit while accounting for any carries from the previous column. Examples are provided showing how addition is performed on multiple bits using the adder table and propagating any carries to the next position.
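The bit-by-bit procedure described above can be sketched in code. This is a minimal illustration, not taken from the document; `full_adder` and `add_binary` are illustrative names:

```python
def full_adder(a, b, carry_in):
    """One column of binary addition: returns (sum_bit, carry_out)."""
    total = a + b + carry_in
    return total % 2, total // 2

def add_binary(x, y):
    """Ripple-carry addition of two bit strings, e.g. '1011' + '0110'."""
    width = max(len(x), len(y))
    a_bits = [int(ch) for ch in x.zfill(width)][::-1]  # LSB first
    b_bits = [int(ch) for ch in y.zfill(width)][::-1]
    carry, out = 0, []
    for a, b in zip(a_bits, b_bits):
        s, carry = full_adder(a, b, carry)
        out.append(str(s))
    if carry:
        out.append('1')  # final carry becomes a new leading bit
    return ''.join(reversed(out))
```

For example, `add_binary('1011', '0110')` propagates a carry through three columns and yields `'10001'` (11 + 6 = 17).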
To multiply binary numbers, refer to the single-bit multiplication table (0 × 0 = 0, 0 × 1 = 0, 1 × 0 = 0, 1 × 1 = 1). Multiply the first number by each bit of the second number, shift each partial product left to match that bit's position, and add the partial products to get the final result. For example, multiplying 1101 by 0111 this way gives 1011011 (13 × 7 = 91).
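The shift-add scheme, where the multiplicand is shifted and added for each 1 bit of the multiplier, can be sketched as follows; the function name is illustrative:

```python
def shift_add_multiply(multiplicand, multiplier):
    """Shift-and-add multiplication of two unsigned bit strings."""
    m = int(multiplicand, 2)
    product = 0
    for i, bit in enumerate(reversed(multiplier)):  # scan LSB first
        if bit == '1':
            product += m << i  # add the multiplicand, shifted into position
    return bin(product)[2:]
```

`shift_add_multiply('1101', '0111')` returns `'1011011'`, i.e. 13 × 7 = 91.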
The document discusses binary number representation and arithmetic. It explains decimal to binary conversion. It also describes signed number representation using sign-magnitude and one's complement and two's complement methods. The key advantages of two's complement are that addition can be performed using the same method for positive and negative numbers. Subtraction using two's complement is performed by adding the number to the complement of the subtrahend. Examples of binary addition and subtraction are provided to illustrate these concepts.
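The subtraction-by-addition rule summarized above can be sketched directly. This is a minimal illustration with a fixed 8-bit width; the names are illustrative:

```python
def twos_complement(value, bits=8):
    """Two's complement within a fixed width: invert all bits, add 1."""
    mask = (1 << bits) - 1
    return ((~value) + 1) & mask

def subtract(minuend, subtrahend, bits=8):
    """minuend - subtrahend, computed by adding the subtrahend's two's
    complement and discarding any carry out of the top bit."""
    mask = (1 << bits) - 1
    return (minuend + twos_complement(subtrahend, bits)) & mask
```

For example, `subtract(13, 5)` gives 8, and `subtract(5, 13)` gives the bit pattern `11111000`, which is -8 in 8-bit two's complement.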
The document discusses various number systems used in digital electronics including decimal, binary, hexadecimal, and octal number systems. It provides details on how decimal, binary, and hexadecimal numbers are represented and converted between number systems. Various methods for converting between decimal, binary, hexadecimal, and octal numbers are presented including the sum-of-weights method and division/multiplication methods. The use of binary coded decimal codes for easier conversion between decimal and binary numbers is also covered.
The document discusses different methods for representing signed binary numbers:
1) Sign-magnitude notation represents positive and negative numbers by using the most significant bit to indicate the sign (0 for positive, 1 for negative) and the remaining bits for the magnitude.
2) One's complement represents negative numbers by inverting all bits of the positive number.
3) Two's complement, the most common method, represents negative numbers by inverting all bits and adding 1 to the result. This allows simple addition to perform subtraction.
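The three representations listed above can be compared side by side in a short sketch; the `encode_*` names and the 8-bit width are illustrative:

```python
def encode_sign_magnitude(value, bits=8):
    """MSB holds the sign (0 positive, 1 negative); the rest hold |value|."""
    sign = (1 << (bits - 1)) if value < 0 else 0
    return sign | abs(value)

def encode_ones_complement(value, bits=8):
    """Negatives: invert every bit of the positive pattern."""
    mask = (1 << bits) - 1
    return value if value >= 0 else (~abs(value)) & mask

def encode_twos_complement(value, bits=8):
    """Negatives: invert every bit, then add 1 (Python's & on a negative
    int yields this pattern directly)."""
    mask = (1 << bits) - 1
    return value & mask
```

For -5 in 8 bits, the three methods give `10000101`, `11111010`, and `11111011` respectively; note that one's and two's complement differ by exactly the final "add 1" step.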
1. The document describes the von Neumann architecture and its key components including the ALU, control unit, memory and I/O devices.
2. It explains the structure of the von Neumann machine and details the functions of components like the program counter, memory address register, and instruction register.
3. The document covers integer and floating point representation in binary, including sign-magnitude, two's complement, and IEEE 754 standard. It describes arithmetic operations like addition, subtraction, multiplication and division on binary numbers.
The document discusses different number systems used in computing, including binary, hexadecimal, and octal. It explains that computers internally use the binary number system to represent data and perform calculations. Hexadecimal provides a shorthand way to work with binary numbers, with each hex digit corresponding to four binary digits. The document also covers how to convert between decimal, binary, hexadecimal, and octal numbers. It provides examples of expanding numbers in different bases, as well as adding and subtracting binary numbers using complements.
Binary arithmetic is essential for digital computers and systems. It involves adding, subtracting, multiplying, and dividing binary numbers using basic rules. Signed binary numbers represent positive and negative values using sign-magnitude, 1's complement, and 2's complement methods. Arithmetic operations on signed binary numbers follow rules for handling the sign bit and complement representations.
Here are the answers to the assignment questions:
1. Overflow occurs when adding 00100110 + 01011010 in two's complement: both operands are positive, but the 8-bit sum, 10000000, has its sign bit set. (As unsigned values, 38 + 90 = 128, which exceeds 127, the largest 8-bit signed value.)
2. See textbook 1 problem 2-1.c for the solution.
3. See textbook 1 problem 2-11.c for the solution.
4. See textbook 1 problem 2-19.c for the solution.
5. The decimal equivalent of the hexadecimal number 1A₁₆ is 26₁₀.
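The overflow test used in question 1 can be sketched in code: signed overflow happens exactly when both operands share a sign bit but the sum's sign differs. The function name and 8-bit default are illustrative:

```python
def add_with_overflow(a, b, bits=8):
    """Add two `bits`-wide two's-complement patterns and report signed
    overflow: operands share a sign bit, but the sum's sign differs."""
    mask = (1 << bits) - 1
    sign = 1 << (bits - 1)
    total = (a + b) & mask
    overflow = (a & sign) == (b & sign) and (total & sign) != (a & sign)
    return total, overflow
```

`add_with_overflow(0b00100110, 0b01011010)` returns `(0b10000000, True)`: two positive operands produced a negative-looking result.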
This document discusses different number systems including binary, octal, hexadecimal, and their arithmetic operations. It provides examples of adding and subtracting numbers in these systems. Binary addition follows four rules: 0 + 0 = 0, 0 + 1 = 1, 1 + 0 = 1, 1 + 1 = 10. Octal addition is like decimal addition except when the column sum is greater than 7, 8 is subtracted and 1 is carried. Hexadecimal uses numbers 0-9 and letters A-F to represent values 10-15. It provides a table of decimal and hexadecimal equivalents. Hexadecimal addition involves treating multi-digit numbers as in decimal. Subtraction uses two's complement or 15's and 16's complement methods.
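The column rule described for octal, and its hexadecimal analogue, generalizes to any base from 2 to 16: when a column's sum reaches the base, subtract the base and carry 1. A minimal sketch with illustrative names:

```python
DIGITS = "0123456789ABCDEF"

def add_in_base(x, y, base):
    """Column-wise addition of two digit strings in the given base (2..16)."""
    i, j = len(x) - 1, len(y) - 1
    carry, out = 0, []
    while i >= 0 or j >= 0 or carry:
        d = carry
        if i >= 0:
            d += DIGITS.index(x[i]); i -= 1
        if j >= 0:
            d += DIGITS.index(y[j]); j -= 1
        out.append(DIGITS[d % base])   # column digit after removing the base
        carry = d // base              # carry 1 into the next column
    return ''.join(reversed(out))
```

For example, `add_in_base('17', '25', 8)` gives `'44'` (7 + 5 = 12, write 4 carry 1), and `add_in_base('A9', '17', 16)` gives `'C0'`.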
This document discusses number representation systems used in computers, including binary, decimal, octal, and hexadecimal. It provides examples of converting between these different bases. Specifically, it covers:
1) Converting between decimal, binary, octal, and hexadecimal using positional notation and place values.
2) Representing signed integers in binary using ones' complement and twos' complement notation.
3) Tables for converting binary numbers to octal and hexadecimal using place values of each base.
4) Examples of converting values between the different number bases both manually and using the provided conversion tables.
The document discusses different number systems including binary, decimal, octal and hexadecimal. It provides examples of converting between these number systems. The key points covered are:
- Binary, decimal, octal and hexadecimal number systems use different bases (2, 10, 8, 16 respectively) and sets of digits.
- Numbers can be converted between these systems through repetitive division or multiplication by the base to determine each place value digit.
- Fractional numbers are represented similarly with place values decreasing as negative powers of the base moving right of the radix point.
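The two conversion procedures in the points above, repeated division for the integer part and repeated multiplication for the fractional part, can be sketched together. The function name and the 8-digit fraction cutoff are illustrative:

```python
def decimal_to_base(value, base, frac_digits=8):
    """Integer part: repeated division by the base (remainders, read upward).
    Fractional part: repeated multiplication by the base (carries, read down)."""
    digits = "0123456789ABCDEF"
    whole, frac = int(value), value - int(value)
    int_part = ""
    while whole:
        int_part = digits[whole % base] + int_part
        whole //= base
    int_part = int_part or "0"
    frac_part = ""
    for _ in range(frac_digits):
        if frac == 0:
            break
        frac *= base
        frac_part += digits[int(frac)]
        frac -= int(frac)
    return int_part + ("." + frac_part if frac_part else "")
```

For example, `decimal_to_base(13.625, 2)` gives `'1101.101'`, and `decimal_to_base(26, 16)` gives `'1A'`.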
Digital Electronics discusses different number systems including binary, decimal, hexadecimal, and octal. It explains how to convert between these number systems using various methods like place value, division, and electronic translators. Electronic encoders and decoders are integrated circuits that can translate between binary and decimal representations.
This document discusses data representation and number systems in computers. It covers binary, octal, decimal, and hexadecimal number systems. Key points include:
- Data in computers is represented using binary numbers and different number systems allow for more efficient representations.
- Converting between number systems like binary, octal, decimal, and hexadecimal is explained through examples of dividing numbers and grouping bits.
- Signed numbers can be represented using complement representations like one's complement and two's complement, with subtraction implemented through addition of complements. Fast methods for calculating two's complement are described.
The document summarizes computer arithmetic and floating point representation. It discusses:
1) The arithmetic logic unit handles integer and floating point calculations. Integer values are represented in binary using two's complement. Floating point values use a sign-magnitude format with a fixed or moving binary point.
2) Addition and subtraction of integers is done through normal binary addition and subtraction. Multiplication requires generating partial products and addition. Division uses a long division approach.
3) Floating point numbers follow the IEEE 754 standard, which represents values as ±mantissa × 2^exponent in 32- or 64-bit formats. Arithmetic requires aligning operands and performing operations on significands and exponents.
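The sign, exponent, and significand fields of a 32-bit IEEE 754 value can be pulled apart with the standard `struct` module; this sketch rounds a Python float to single precision first, and the function name is illustrative:

```python
import struct

def decode_float32(x):
    """Split a float (rounded to IEEE 754 binary32) into its three fields."""
    bits = struct.unpack(">I", struct.pack(">f", x))[0]
    sign = bits >> 31
    exponent = (bits >> 23) & 0xFF   # 8 bits, biased by 127
    fraction = bits & 0x7FFFFF       # 23-bit significand field
    return sign, exponent, fraction
```

For example, `decode_float32(1.0)` gives `(0, 127, 0)`: the exponent field stores 0 + 127, and the leading 1 of the significand is implicit, so the fraction field is empty.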
The document discusses different number systems including binary, decimal, octal, and hexadecimal. It provides details on how to convert between these number systems, including how to convert fractional numbers between bases. Conversion methods covered include dividing numbers into place values to determine the digit values in the target base. The document also discusses representing negative numbers using 1's complement notation.
Digital logic circuits, digital components, floating and fixed point (Rai University)
This document provides an overview of floating point numbers and their representation. It discusses how floating point numbers are used to represent very large and small numbers with exponents. The IEEE 754 standard for floating point representation is described, including the use of sign-magnitude, biased exponents, normalization, denormalization and special values like infinity and NaN. Single and double precision floating point number formats are defined according to IEEE 754. Methods for converting between decimal and binary floating point values are demonstrated through examples.
This document provides an overview of floating point representation and arithmetic based on the IEEE 754 standard. It discusses topics such as normalized and denormalized values, special values like infinity and NaN, and examples using tiny 8-bit floating point formats to illustrate concepts like dynamic range and value distribution. The goal is to explain how computers represent inexact real numbers using a finite number of bits.
This document discusses number systems, including decimal, binary, octal, and hexadecimal. It provides details on converting between these different number systems, with a focus on binary to decimal and hexadecimal conversions using positional notation and doubling methods. Examples are given for addition, subtraction, multiplication, and division in binary number systems.
The document discusses the binary number system. It begins by defining number systems and the decimal system. It then introduces the binary number system which has a base of 2 and uses only the digits 0 and 1. It shows how to write binary numbers and provides a table to demonstrate counting and place values in the binary system. The document explains two methods for converting between decimal and binary numbers - the division method to convert decimals to binary, and the expansion method to convert binary to decimal. It includes examples and practice problems for students to convert numbers between the two number systems.
Binary coded decimal (BCD) is a numerical coding system that uses binary numbers to represent decimal digits. Each decimal digit from 0 to 9 is represented by a unique 4-bit binary code. BCD allows arithmetic operations like addition and subtraction on numbers. For BCD addition, the binary sum is calculated and if it exceeds 9, then 6 is added to obtain a valid BCD result. For BCD subtraction, the 9's complement of the subtrahend is calculated and added to the minuend, with carries propagated to the next group of bits.
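The add-6 correction described for BCD addition can be sketched digit by digit; the function names are illustrative:

```python
def bcd_add_digit(a, b, carry_in=0):
    """Add two BCD digits (0..9). If the binary sum exceeds 9, add 6 to
    skip the six unused 4-bit codes and produce a decimal carry."""
    total = a + b + carry_in
    if total > 9:
        total += 6
        return total & 0xF, 1
    return total, 0

def bcd_add(x, y):
    """Digit-serial BCD addition of two non-negative decimal integers."""
    result, shift, carry = 0, 0, 0
    while x or y or carry:
        d, carry = bcd_add_digit(x % 10, y % 10, carry)
        result += d * 10 ** shift
        x //= 10; y //= 10; shift += 1
    return result
```

For example, adding the units digits of 47 and 38 gives 7 + 8 = 15; the binary sum 1111 is invalid BCD, so 6 is added, leaving digit 5 with a carry into the tens column, and `bcd_add(47, 38)` returns 85.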
The document discusses different number systems used to represent numeric values in computers, including binary, octal, hexadecimal, and decimal. It provides examples of converting between these number systems using techniques like repeated division and multiplying digits by their place values. Character encoding schemes like ASCII, EBCDIC, and Unicode are also covered, explaining how they allow computers to represent letters, punctuation, and other characters with binary values.
This document provides an overview of Boolean algebra and logic gates. It discusses topics such as number systems, binary codes, Boolean algebra, logic gates, theorems of Boolean algebra, Boolean functions, simplification using Karnaugh maps, and NAND and NOR implementations. The document also describes binary arithmetic operations including addition, subtraction, multiplication, and division. It defines binary codes and discusses weighted and non-weighted binary codes.
1) The ALU performs arithmetic operations like addition, subtraction, multiplication and division on fixed point and floating point numbers. Fixed point uses integers while floating point uses a sign, mantissa, and exponent.
2) Binary numbers are added using half adders and full adders which are logic circuits that implement addition using truth tables and K-maps. Subtraction is done using 1's or 2's complement representations.
3) Multiplication is done using sequential or Booth's algorithm approaches while division uses restoring or non-restoring algorithms. Floating point uses similar addition and subtraction steps but first normalizes the exponents.
Unit-1 Digital Design and Binary Numbers:Asif Iqbal
these slides contains general discerption about digital signals, binary numbers, digital numbers, and basic logic gates. it covers the first unit of AKTU syllabus.
The document discusses binary multiplication and division. It describes how multiplication is performed by shifting and adding the multiplicand, and division is performed by repeatedly subtracting the divisor from the dividend and tracking the quotient and remainder. It also addresses techniques for efficient multiplication and division circuits and handling signed numbers and negatives.
Binary addition, Binary subtraction, Negative number representation, Subtraction using 1’s complement and 2’s complement, Binary multiplication and division, Arithmetic in octal, hexadecimal number system, BCD and Excess – 3 arithmetic
This document discusses decimal arithmetic operations using binary coded decimal (BCD) numbers. It describes how decimal numbers are represented in BCD format and processed using microoperations in the arithmetic logic unit (ALU). Addition and subtraction of decimal numbers are performed by converting the numbers to BCD, performing binary addition or subtraction on the digits, and converting the output back to decimal if needed. Block diagrams of BCD adders and examples of decimal addition and subtraction are provided.
The document discusses analogue and digital signals and number systems. It explains that the real world is analogue but digital signals are used for processing due to integrated circuits that can process digital data more easily. It then covers binary, octal, hexadecimal, and decimal number systems. Finally, it discusses representing negative numbers using sign-magnitude, 1's complement, and 2's complement representations and how arithmetic operations like addition and subtraction work using 2's complement.
- Digital computers perform arithmetic operations like addition, subtraction, multiplication and division on binary numbers.
- Signed binary numbers use the most significant bit as the sign bit to represent positive and negative values. Common representations are sign-magnitude, one's complement, and two's complement.
- Subtraction is performed using the two's complement method by taking the two's complement of the subtrahend and adding it to the minuend. Overflow needs to be handled for accurate results.
This document discusses digital electronics topics including number systems, codes, Boolean algebra, and digital circuits. It provides examples and explanations of converting between decimal, binary, octal, and hexadecimal number systems. Binary coded decimal, gray code, and excess-3 code are also defined. Combinational and sequential digital circuits as well as memory devices are listed as topics to be covered.
The document discusses arithmetic unit operations including addition, subtraction, multiplication, and division of signed and unsigned numbers. It covers integer representation using sign-magnitude, one's complement, and two's complement methods. Two's complement is identified as the best approach for integer representation as it simplifies arithmetic operations and overflow handling. The key hardware components for performing addition and subtraction of signed integers are also summarized.
The document discusses different number systems including decimal, binary, octal, and hexadecimal. It explains the concept of a base-N number system and how digits are arranged from most to least significant. It then provides more details on the decimal, binary, octal, and hexadecimal number systems including examples. The document also covers topics like 1's complement, 2's complement, signed numbers, arithmetic operations, and number conversions between different bases.
Digital Logic Design Lecture 1A provides an overview of digital systems and binary numbers. It introduces the difference between analog and digital signals, the process of digitization, and the binary number system. The key concepts covered include representing numbers in binary format, converting between binary and decimal number systems using positional notation and weighted values, and introducing octal and hexadecimal numbering bases.
This document provides lecture notes on digital system design. It covers topics like logic simplification, combinational logic design, understanding binary and other number systems, binary operations, and Boolean algebra. The first section discusses decimal, binary, octal and hexadecimal number systems. Later sections explain binary addition, subtraction, multiplication and conversions between number bases. Signed number representations like 1's complement and 2's complement are also introduced. Finally, the document discusses Boolean algebra, logic functions, truth tables, and basic logic gates like AND and INVERTER.
This document outlines the syllabus for the subject Digital Principles and System Design. It contains 5 units that cover topics such as Boolean algebra, logic gates, combinational logic, sequential logic, asynchronous sequential logic, memory and programmable logic. The objectives of the course are to understand logic simplification methods, design combinational and sequential logic circuits using HDL, understand various types of memory and programmable devices. The syllabus allocates 45 periods to cover all the units in depth. Relevant textbooks and references are also provided.
This document contains an exercise on digital electronics concepts including:
1. The differences between analog and digital measurements and pros and cons of analog vs digital electronics.
2. Tables defining binary, octal, decimal, and hexadecimal number systems.
3. Practice problems converting between number systems and performing basic binary math operations like addition, subtraction, multiplication, and division.
4. An independent practice section with additional problems converting between number systems and performing binary math.
The document discusses binary arithmetic operations including addition, subtraction, multiplication, and division. It provides examples and step-by-step explanations of how to perform each operation in binary. For addition and subtraction, it explains the rules and concepts like carry bits and two's complement. For multiplication, it describes the shift-and-add method. And for division, it outlines the long division approach of shift-and-subtract in binary.
Embedded machine learning-based road conditions and driving behavior monitoringIJECEIAES
Car accident rates have increased in recent years, resulting in losses in human lives, properties, and other financial costs. An embedded machine learning-based system is developed to address this critical issue. The system can monitor road conditions, detect driving patterns, and identify aggressive driving behaviors. The system is based on neural networks trained on a comprehensive dataset of driving events, driving styles, and road conditions. The system effectively detects potential risks and helps mitigate the frequency and impact of accidents. The primary goal is to ensure the safety of drivers and vehicles. Collecting data involved gathering information on three key road events: normal street and normal drive, speed bumps, circular yellow speed bumps, and three aggressive driving actions: sudden start, sudden stop, and sudden entry. The gathered data is processed and analyzed using a machine learning system designed for limited power and memory devices. The developed system resulted in 91.9% accuracy, 93.6% precision, and 92% recall. The achieved inference time on an Arduino Nano 33 BLE Sense with a 32-bit CPU running at 64 MHz is 34 ms and requires 2.6 kB peak RAM and 139.9 kB program flash memory, making it suitable for resource-constrained embedded systems.
Presentation of IEEE Slovenia CIS (Computational Intelligence Society) Chapte...University of Maribor
Slides from talk presenting:
Aleš Zamuda: Presentation of IEEE Slovenia CIS (Computational Intelligence Society) Chapter and Networking.
Presentation at IcETRAN 2024 session:
"Inter-Society Networking Panel GRSS/MTT-S/CIS
Panel Session: Promoting Connection and Cooperation"
IEEE Slovenia GRSS
IEEE Serbia and Montenegro MTT-S
IEEE Slovenia CIS
11TH INTERNATIONAL CONFERENCE ON ELECTRICAL, ELECTRONIC AND COMPUTING ENGINEERING
3-6 June 2024, Niš, Serbia
We have compiled the most important slides from each speaker's presentation. This year’s compilation, available for free, captures the key insights and contributions shared during the DfMAy 2024 conference.
DEEP LEARNING FOR SMART GRID INTRUSION DETECTION: A HYBRID CNN-LSTM-BASED MODELgerogepatton
As digital technology becomes more deeply embedded in power systems, protecting the communication
networks of Smart Grids (SG) has emerged as a critical concern. Distributed Network Protocol 3 (DNP3)
represents a multi-tiered application layer protocol extensively utilized in Supervisory Control and Data
Acquisition (SCADA)-based smart grids to facilitate real-time data gathering and control functionalities.
Robust Intrusion Detection Systems (IDS) are necessary for early threat detection and mitigation because
of the interconnection of these networks, which makes them vulnerable to a variety of cyberattacks. To
solve this issue, this paper develops a hybrid Deep Learning (DL) model specifically designed for intrusion
detection in smart grids. The proposed approach is a combination of the Convolutional Neural Network
(CNN) and the Long-Short-Term Memory algorithms (LSTM). We employed a recent intrusion detection
dataset (DNP3), which focuses on unauthorized commands and Denial of Service (DoS) cyberattacks, to
train and test our model. The results of our experiments show that our CNN-LSTM method is much better
at finding smart grid intrusions than other deep learning algorithms used for classification. In addition,
our proposed approach improves accuracy, precision, recall, and F1 score, achieving a high detection
accuracy rate of 99.50%.
Literature Review Basics and Understanding Reference Management.pptxDr Ramhari Poudyal
Three-day training on academic research focuses on analytical tools at United Technical College, supported by the University Grant Commission, Nepal. 24-26 May 2024
CHINA’S GEO-ECONOMIC OUTREACH IN CENTRAL ASIAN COUNTRIES AND FUTURE PROSPECTjpsjournal1
The rivalry between prominent international actors for dominance over Central Asia's hydrocarbon
reserves and the ancient silk trade route, along with China's diplomatic endeavours in the area, has been
referred to as the "New Great Game." This research centres on the power struggle, considering
geopolitical, geostrategic, and geoeconomic variables. Topics including trade, political hegemony, oil
politics, and conventional and nontraditional security are all explored and explained by the researcher.
Using Mackinder's Heartland, Spykman Rimland, and Hegemonic Stability theories, examines China's role
in Central Asia. This study adheres to the empirical epistemological method and has taken care of
objectivity. This study analyze primary and secondary research documents critically to elaborate role of
china’s geo economic outreach in central Asian countries and its future prospect. China is thriving in trade,
pipeline politics, and winning states, according to this study, thanks to important instruments like the
Shanghai Cooperation Organisation and the Belt and Road Economic Initiative. According to this study,
China is seeing significant success in commerce, pipeline politics, and gaining influence on other
governments. This success may be attributed to the effective utilisation of key tools such as the Shanghai
Cooperation Organisation and the Belt and Road Economic Initiative.
Low power architecture of logic gates using adiabatic techniquesnooriasukmaningtyas
The growing significance of portable systems to limit power consumption in ultra-large-scale-integration chips of very high density, has recently led to rapid and inventive progresses in low-power design. The most effective technique is adiabatic logic circuit design in energy-efficient hardware. This paper presents two adiabatic approaches for the design of low power circuits, modified positive feedback adiabatic logic (modified PFAL) and the other is direct current diode based positive feedback adiabatic logic (DC-DB PFAL). Logic gates are the preliminary components in any digital circuit design. By improving the performance of basic gates, one can improvise the whole system performance. In this paper proposed circuit design of the low power architecture of OR/NOR, AND/NAND, and XOR/XNOR gates are presented using the said approaches and their results are analyzed for powerdissipation, delay, power-delay-product and rise time and compared with the other adiabatic techniques along with the conventional complementary metal oxide semiconductor (CMOS) designs reported in the literature. It has been found that the designs with DC-DB PFAL technique outperform with the percentage improvement of 65% for NOR gate and 7% for NAND gate and 34% for XNOR gate over the modified PFAL techniques at 10 MHz respectively.
Low power architecture of logic gates using adiabatic techniques
ARITHMETIC FOR COMPUTERS
1. Velammal Engineering College
Department of Computer Science and Engineering
Welcome…
Ms. R. Amirthavalli, Asst. Prof, CSE, Velammal Engineering College
Slide Sources: Patterson & Hennessy COD book website (copyright Morgan Kaufmann), adapted and supplemented
3. Syllabus – Unit II
UNIT-II ARITHMETIC FOR COMPUTERS
Addition and Subtraction – Multiplication – Division – Floating Point Representation – Floating Point Addition and Subtraction.
4. Text Books
• Book 1:
  o Name: Computer Organization and Design: The Hardware/Software Interface
  o Authors: David A. Patterson and John L. Hennessy
  o Publisher: Morgan Kaufmann / Elsevier
  o Edition: Fifth Edition, 2014
• Book 2:
  o Name: Computer Organization and Embedded Systems
  o Authors: Carl Hamacher, Zvonko Vranesic, Safwat Zaky and Naraig Manjikian
  o Publisher: Tata McGraw Hill
  o Edition: Sixth Edition, 2012
5. Numbers
• Numbers may be represented in any base
• Computers use base-2 numbers, called binary numbers
• A single digit of a binary number is the “atom” of computing, since all information is composed of binary digits, or bits
• Alternatives: high or low, on or off, true or false, or 1 or 0
• Least Significant Bit (LSB): the rightmost bit in a MIPS word
• Most Significant Bit (MSB): the leftmost bit in a MIPS word
6. The Binary Number System
• Name
  o “binarius” (Latin) => two
• Characteristics
  o Two symbols: 0 and 1
  o Positional
    • 1010B ≠ 1100B
• Most (digital) computers use the binary number system
Terminology
• Bit: a binary digit
• Byte: (typically) 8 bits
7. Number Representation
• Three systems:
  o Sign-and-magnitude
  o 1’s complement
  o 2’s complement
• In all three systems, the leftmost bit is 0 for positive numbers and 1 for negative numbers
• Positive values have identical representations in all systems
• Negative values have different representations
• Ex: 1011two
8. Possible Representations
  Pattern   Sign Magnitude   One’s Complement   Two’s Complement
  000        0                0                  0
  001       +1               +1                 +1
  010       +2               +2                 +2
  011       +3               +3                 +3
  100       -0 (ambiguous zero)   -3            -4
  101       -1               -2                 -3
  110       -2               -1                 -2
  111       -3               -0 (ambiguous zero) -1
• Issues:
  o balance – equal number of negatives and positives
  o ambiguous zero – whether there is more than one representation of zero
  o ease of arithmetic operations
• Which representation is best? Can we get both balance and a non-ambiguous zero?
9. Signed Magnitude
• In this notation, an extra bit is added to the left of the number to denote its sign: 0 indicates +ve and 1 indicates -ve.
• Using 8 bits:
  • +13 is 00001101 and +11 is 00001011.
  • -13 is 10001101 and -11 is 10001011.
10. 1's Complement
• In this notation positive numbers are represented exactly as regular binary numbers.
• Negative numbers are represented by flipping the bits, i.e. 0’s become 1’s and 1’s become 0’s.
• So 13 will be 00001101 and 11 will be 00001011.
• -13 will be 11110010 and -11 will be 11110100.
11. 2's Complement
• In this method a negative number is obtained by first taking the 1’s complement of the positive number and then adding 1 to it.
• So 8-bit -13 will be 11110010 (1’s complement) + 1 = 11110011.
• -11 will be 11110101.
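The three encodings described on the last few slides can be sketched in Python (an added illustration; the helper names are ours, not from the slides):

```python
def sign_magnitude(x, bits=8):
    """Sign bit (1 for negative) followed by the magnitude."""
    sign = '1' if x < 0 else '0'
    return sign + format(abs(x), f'0{bits - 1}b')

def ones_complement(x, bits=8):
    """Negatives: flip every bit of the positive pattern."""
    if x >= 0:
        return format(x, f'0{bits}b')
    return ''.join('1' if b == '0' else '0' for b in format(-x, f'0{bits}b'))

def twos_complement(x, bits=8):
    """Negatives: one's complement plus 1 (equivalently, x mod 2^bits)."""
    return format(x & ((1 << bits) - 1), f'0{bits}b')

# The slides' example, -13 in all three systems:
print(sign_magnitude(-13), ones_complement(-13), twos_complement(-13))
# -> 10001101 11110010 11110011
```

Note that positive values (e.g. +13 = 00001101) come out identical in all three systems, as slide 7 states.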
14. Two's Complement Operations
• Sign Extension Shortcut: To convert an n-bit integer into an integer with more than n bits – i.e., to make a narrow integer fill a wider word – replicate the most significant bit (MSB) of the original number to fill the new bits to its left
  o Example: 4-bit → 8-bit
    0010 = 0000 0010
    1010 = 1111 1010
  o Why is this correct? Prove!
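The shortcut can be checked mechanically (a small added sketch; `sign_extend` is our name, not from the slides):

```python
def sign_extend(value, from_bits, to_bits):
    """Replicate the MSB of the narrow pattern into all the new upper bits."""
    msb = (value >> (from_bits - 1)) & 1
    if msb:
        value |= ((1 << (to_bits - from_bits)) - 1) << from_bits
    return value

# The slide's examples, 4-bit -> 8-bit:
print(format(sign_extend(0b0010, 4, 8), '08b'))  # -> 00000010
print(format(sign_extend(0b1010, 4, 8), '08b'))  # -> 11111010
```

It is correct because replicating the sign bit leaves the two's-complement value unchanged: 1010 is -6 in 4 bits, and 1111 1010 is still -6 in 8 bits.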
31. Overflow
• When the actual result of an arithmetic operation is outside the representable range, an arithmetic overflow has occurred
• No overflow when adding a positive and a negative number
• No overflow when subtracting numbers with the same sign
• Overflow occurs when adding two positive numbers produces a negative result, or when adding two negative numbers produces a positive result
33. Overflow - Example
• No overflow when adding a positive and a negative number
• A + B, with A = +3, B = -2:
    +3 => 011
    -2 => 110
    -----------
    +1 => 001
• The result is within the representable range
  n = 3 bits => Range: -4 to +3
  +3  011
  +2  010
  +1  001
   0  000
  -1  111
  -2  110
  -3  101
  -4  100
34. Overflow - Example
• No overflow when subtracting numbers with the same sign
• A - B, with A = -3, B = -2:
• (-3) – (-2) => -3 + 2
    -3 => 101
    +2 => 010
    -----------
    -1 => 111
• The result is within the representable range
  n = 3 bits => Range: -4 to +3
  +3  011
  +2  010
  +1  001
   0  000
  -1  111
  -2  110
  -3  101
  -4  100
35. Overflow - Example
• Overflow occurs when adding two positive numbers produces a negative result, or when adding two negative numbers produces a positive result
• A + B, with A = +3, B = +3:
    +3 => 011
    +3 => 011
    -----------
    -2 => 110  (wrong sign: overflow)
  n = 3 bits => Range: -4 to +3
  +3  011
  +2  010
  +1  001
   0  000
  -1  111
  -2  110
  -3  101
  -4  100
36. Detecting Overflow
• No overflow when adding a positive and a negative number
• No overflow when subtracting numbers with the same sign
• Overflow occurs when the result has the “wrong” sign (verify!):
  Operation   Operand A   Operand B   Result indicating overflow
  A + B       ≥ 0         ≥ 0         < 0
  A + B       < 0         < 0         ≥ 0
  A – B       ≥ 0         < 0         < 0
  A – B       < 0         ≥ 0         ≥ 0
• Consider the operations A + B and A – B:
  o can overflow occur if B is 0?
  o can overflow occur if A is 0?
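The sign rule for detecting overflow on addition can be sketched as follows (our helper; it assumes the operands are already given as n-bit two's-complement patterns):

```python
def add_overflows(a, b, bits=3):
    """Overflow on A + B iff A and B have the same sign and the
    truncated sum has the opposite sign."""
    sign = 1 << (bits - 1)
    result = (a + b) & ((1 << bits) - 1)   # keep only n bits, drop the carry
    return (a & sign) == (b & sign) and (result & sign) != (a & sign)

# The 3-bit examples from the preceding slides:
print(add_overflows(0b011, 0b011))  # +3 + +3 wraps to -2 -> True
print(add_overflows(0b011, 0b110))  # +3 + -2 = +1        -> False
```

A – B can be checked the same way by adding the two's complement of B.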
37. Multiply
• Grade school shift-add method:
     Multiplicand    1000
     Multiplier    x 1001
                     1000
                    0000
                   0000
                  1000
     Product     01001000
• m bits x n bits = m+n bit product
• Binary makes it easy:
  o multiplier bit 1 => copy multiplicand (1 x multiplicand)
  o multiplier bit 0 => place 0 (0 x multiplicand)
38. Shift-add Multiplier
[Hardware figure: 64-bit Multiplicand register (shift left) and 64-bit Product register (write) feed a 64-bit ALU; a 32-bit Multiplier register (shift right) is tested by the control logic]
Algorithm:
Start
1. Test Multiplier0 (the rightmost bit of the Multiplier register)
   o Multiplier0 = 1: 1a. Add multiplicand to product and place the result in the Product register
   o Multiplier0 = 0: go directly to step 2
2. Shift the Multiplicand register left 1 bit
3. Shift the Multiplier register right 1 bit
32nd repetition? No (< 32 repetitions): go back to step 1. Yes (32 repetitions): Done
Notes:
• Multiplicand register, Product register, and ALU are 64 bits wide; the Multiplier register is 32 bits wide
• The 32-bit multiplicand starts in the right half of the Multiplicand register
• The Product register is initialized to 0
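The flowchart above translates almost line for line into code (a sketch for unsigned operands; Python integers stand in for the fixed-width registers):

```python
def shift_add_multiply(multiplicand, multiplier, bits=32):
    product = 0                        # Product register initialized to 0
    for _ in range(bits):              # "32nd repetition?" loop
        if multiplier & 1:             # 1.  test Multiplier0
            product += multiplicand    # 1a. add multiplicand to product
        multiplicand <<= 1             # 2.  shift Multiplicand register left
        multiplier >>= 1               # 3.  shift Multiplier register right
    return product

# Slide 37's grade-school example, 1000 x 1001 (8 x 9):
print(bin(shift_add_multiply(0b1000, 0b1001)))  # -> 0b1001000
```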
40. Signed Multiplication – Booth Algorithm
• The Booth algorithm generates a 2n-bit product and treats both positive and negative 2’s-complement n-bit operands uniformly.
• The Booth algorithm has two attractive features:
  o First, it handles both positive and negative multipliers uniformly.
  o Second, it achieves some efficiency in the number of additions required when the multiplier has a few large blocks of 1s.
42. Booth Algorithm – Booth-Recoded Multipliers
Ex: -6 in 2’s complement is 11010
• Add a zero to the RHS of the multiplier: 1 1 0 1 0 | 0
• Perform recoding: examine each pair of adjacent bits, right to left
  (pair 10 → -1, pair 01 → +1, pairs 00 and 11 → 0):
    1  1  0  1  0  0
     0 -1 +1 -1  0
• So the recoded multiplier is 0 -1 +1 -1 0
43. Booth Multiplication
• Let us perform +13 * -6
Steps:
• Recode the multiplier: -6 when recoded is 0 -1 +1 -1 0
  +13 = 0 1 1 0 1
  - 6 = 0 -1 +1 -1 0
Note – each recoded multiplier digit selects a partial product:
   0 → all 0s
  +1 → the multiplicand
  -1 → the 2’s complement of the multiplicand
Partial products (each sign-extended to 10 bits and shifted left by its digit position):
         0  1  1  0  1
     ×  0 -1 +1 -1  0
  ----------------------
   0 0 0 0 0 0 0 0 0 0    digit  0 at position 0: all 0s
   1 1 1 1 1 0 0 1 1 0    digit -1 at position 1: 10011 (2’s c of 01101)
   0 0 0 0 1 1 0 1 0 0    digit +1 at position 2: 01101
   1 1 0 0 1 1 0 0 0 0    digit -1 at position 3: 10011
   0 0 0 0 0 0 0 0 0 0    digit  0 at position 4: all 0s
  ----------------------
   1 1 1 0 1 1 0 0 1 0
Final Product = 1110110010two = -78ten (any carry out of the leftmost bit is ignored)
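The recoding and the multiplication can be sketched together (our helpers; each Booth digit is b(i-1) - b(i), with a 0 appended on the right of the multiplier):

```python
def booth_recode(pattern, bits):
    """Return Booth digits, MSB first: 10 -> -1, 01 -> +1, 00 and 11 -> 0."""
    digits, prev = [], 0               # prev = the 0 appended on the right
    for i in range(bits):
        bit = (pattern >> i) & 1
        digits.append(prev - bit)      # b_(i-1) - b_i
        prev = bit
    return digits[::-1]

def booth_multiply(multiplicand, multiplier, bits):
    digits = booth_recode(multiplier & ((1 << bits) - 1), bits)
    # the digit at index i (MSB first) carries weight 2^(bits-1-i)
    return sum(d * multiplicand << (bits - 1 - i)
               for i, d in enumerate(digits))

print(booth_recode(0b11010, 5))   # -6 recoded -> [0, -1, 1, -1, 0]
print(booth_multiply(13, -6, 5))  # -> -78
```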
44. Division
                   1001     Quotient
  Divisor 1000 ) 1001010    Dividend
                –1000
                    10
                    101
                    1010
                   –1000
                      10    Remainder
• Junior school method: see how big a multiple of the divisor can be subtracted, creating a quotient digit at each step
• Binary makes it easy: first try 1 × divisor; if too big, use 0 × divisor
• Dividend = (Quotient × Divisor) + Remainder
45. Restoring Division
[Hardware figure: 64-bit Divisor register (shift right) and 64-bit Remainder register (write) feed a 64-bit ALU; a 32-bit Quotient register (shift left) is driven by the control logic’s test of the remainder]
Algorithm:
Start
1. Subtract the Divisor register from the Remainder register and place the result in the Remainder register
Test Remainder:
   o Remainder ≥ 0: 2a. Shift the Quotient register to the left, setting the new rightmost bit to 1
   o Remainder < 0: 2b. Restore the original value by adding the Divisor register to the Remainder register and place the sum in the Remainder register. Also shift the Quotient register to the left, setting the new least significant bit to 0
3. Shift the Divisor register right 1 bit
33rd repetition? No (< 33 repetitions): go back to step 1. Yes (33 repetitions): Done
Notes:
• Divisor register, Remainder register, and ALU are 64 bits wide; the Quotient register is 32 bits wide
• The 32-bit divisor starts in the left half of the Divisor register
• The Remainder register is initialized with the dividend at the right
• The Quotient register is initialized to 0
• Why 33? We shall see later…
46. Division – Example: 0111 / 0010
(Applying the restoring-division algorithm of the previous slide, step by step.)

  Iteration   Step   Quotient   Divisor     Remainder
  0           init   0000       0010 0000   0000 0111
  1           1      0000       0010 0000   1110 0111
              2b     0000       0010 0000   0000 0111
              3      0000       0001 0000   0000 0111
  2           1      0000       0001 0000   1111 0111
              2b     0000       0001 0000   0000 0111
              3      0000       0000 1000   0000 0111
  3           1      0000       0000 1000   1111 1111
              2b     0000       0000 1000   0000 0111
              3      0000       0000 0100   0000 0111
  4           1      0000       0000 0100   0000 0011
              2a     0001       0000 0100   0000 0011
              3      0001       0000 0010   0000 0011
  5           1      0001       0000 0010   0000 0001
              2a     0011       0000 0010   0000 0001
              3      0011       0000 0001   0000 0001

Step-1 computations at each iteration (R = Remainder – Divisor):
  Iteration 1: R = 0000 0111 – 0010 0000 = 1110 0111 (< 0, so restore: R = R + D)
  Iteration 2: R = 0000 0111 – 0001 0000 = 1111 0111 (< 0, restore: R = R + D)
  Iteration 3: R = 0000 0111 – 0000 1000 = 1111 1111 (< 0, restore: R = R + D)
  Iteration 4: R = 0000 0111 – 0000 0100 = 0000 0011 (≥ 0, keep)
  Iteration 5: R = 0000 0011 – 0000 0010 = 0000 0001 (≥ 0, keep)
Result: Quotient = 0011, Remainder = 0001 (7 ÷ 2 = 3 remainder 1)
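The trace can be reproduced in a few lines (a sketch of restoring division; we run bits + 1 iterations, matching the "33rd repetition?" test for 32-bit operands):

```python
def restoring_divide(dividend, divisor, bits=4):
    remainder, quotient = dividend, 0       # Remainder register holds dividend
    d = divisor << bits                     # divisor starts in the left half
    for _ in range(bits + 1):
        remainder -= d                      # 1.  subtract Divisor from Remainder
        if remainder < 0:
            remainder += d                  # 2b. restore; shift 0 into Quotient
            quotient <<= 1
        else:
            quotient = (quotient << 1) | 1  # 2a. shift 1 into Quotient
        d >>= 1                             # 3.  shift Divisor right
    return quotient, remainder

print(restoring_divide(0b0111, 0b0010))  # -> (3, 1), matching the trace
```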
47. Floating Point
• We need a way to represent:
  o numbers with fractions, e.g., 3.1416
  o very small numbers (in absolute value), e.g., 0.00000000023
  o very large numbers (in absolute value), e.g., –3.15576 × 10^46
48. Floating Point
• Still use a fixed number of bits
  o Sign bit S, exponent E, significand F
  o Value: (-1)^S × (1 + F) × 2^E
• IEEE 754 standard:
                     Size   Exponent   Significand   Range
  Single precision   32b    8b         23b           2 × 10^±38
  Double precision   64b    11b        52b           2 × 10^±308
  Field order: S | E | F
49. IEEE 754 Floating-point Standard
• IEEE 754 floating point standard:
  o single precision: one word
  o double precision: two words
• Single precision layout:
  bit 31: sign | bits 30 to 23: 8-bit exponent | bits 22 to 0: 23-bit significand
• Double precision layout (two words):
  bit 31: sign | bits 30 to 20: 11-bit exponent | bits 19 to 0: upper 20 bits of the 52-bit significand
  bits 31 to 0 of the second word: lower 32 bits of the 52-bit significand
50. Floating Point Exponent
• Exponent specified in biased or excess notation
• Why?
  o To simplify sorting
  o Sign bit is the MSB to ease sorting
  o With a 2’s complement exponent:
    • Large numbers have a positive exponent
    • Small numbers have a negative exponent
    • so sorting does not follow naturally
51. Excess or Biased Exponent
• Value: (-1)^S × (1 + F) × 2^(E - bias)
  o SP: bias is 127
  o DP: bias is 1023

  Exponent   2’s Compl   Excess-127
  -127       1000 0001   0000 0000
  -126       1000 0010   0000 0001
  …          …           …
  +127       0111 1111   1111 1110
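The Excess-127 column is simply "true exponent + 127", which is easy to verify (a tiny added sketch; the function name is ours):

```python
def excess_127(exponent):
    """Stored 8-bit pattern for a single-precision exponent."""
    return format(exponent + 127, '08b')

for e in (-127, -126, 127):
    print(e, excess_127(e))
```

Because the bias maps the most negative exponent to the smallest unsigned pattern, values with larger exponents also compare as larger unsigned bit patterns, which is the sorting property motivated on the previous slide.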
52. Floating Point Normalization
• The S, E, F representation allows more than one representation of a particular value, e.g. 1.0 × 10^5 = 0.1 × 10^6 = 10.0 × 10^4
  o This makes comparison operations difficult
  o Prefer to have a single representation
• Hence, normalize by convention:
  o Only one digit to the left of the floating point
  o In binary, that digit must be a 1
• Since the leading ‘1’ is implicit, there is no need to store it
• Hence, we obtain one extra bit of precision for free
53. FP Overflow/Underflow
• FP Overflow
  o Analogous to integer overflow
  o Result is too big to represent
  o Means the exponent is too big
• FP Underflow
  o Result is too small to represent
  o Means the exponent is too small (too negative)
• Both can raise an exception under IEEE 754
54. IEEE 754 Special Cases
  Single Precision          Double Precision           Value
  Exponent   Significand    Exponent   Significand
  0          0              0          0               0
  0          nonzero        0          nonzero         denormalized number
  1-254      anything       1-2046     anything        floating-point number
  255        0              2047       0               infinity
  255        nonzero        2047       nonzero         NaN (Not a Number)
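The special-case encodings can be checked by unpacking single-precision bit patterns with Python's struct module (an added sketch):

```python
import math
import struct

def sp_fields(x):
    """(sign, exponent, significand) fields of x as an IEEE 754 single."""
    u = struct.unpack('>I', struct.pack('>f', x))[0]
    return (u >> 31) & 1, (u >> 23) & 0xFF, u & 0x7FFFFF

print(sp_fields(0.0))       # exponent 0,   significand 0       -> zero
print(sp_fields(math.inf))  # exponent 255, significand 0       -> infinity
print(sp_fields(math.nan))  # exponent 255, significand nonzero -> NaN
```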
55. Show the IEEE 754 binary representation of the number -0.75ten in single and double precision
Converting -0.75ten to binary:
  0.75 × 2 = 1.50 (take only the integral part, i.e. 1)
  0.50 × 2 = 1.00 (take 1; the fraction is now 0, so stop)
  0.75ten = 0.11two × 2^0
After normalizing, the above value is -1.1two × 2^-1
The general representation for a single precision number is
  (-1)^S × (1 + F) × 2^(E - 127)
The stored exponent of -1.1two × 2^-1 must satisfy E - 127 = -1, so E = 126:
  (-1)^1 × (1 + .1000 0000 0000 0000 0000 000two) × 2^(126 - 127)
(Contd…)
56. (Contd…)
The single precision binary representation of -0.75ten is then
  1 | 0111 1110 | 100 0000 0000 0000 0000 0000
  (sign = 1, 8-bit exponent = 126, 23-bit significand)
The double precision representation uses a bias of 1023, so the exponent field is -1 + 1023 = 1022:
  1 | 011 1111 1110 | 1000… (52-bit significand: 1000 followed by 48 zeros)
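The hand conversion can be confirmed by unpacking the actual single-precision encoding of -0.75 (an added sketch using the struct module):

```python
import struct

def sp_bits(x):
    """(sign, exponent, significand) bit strings of x as an IEEE 754 single."""
    u = struct.unpack('>I', struct.pack('>f', x))[0]
    b = format(u, '032b')
    return b[0], b[1:9], b[9:]

sign, exponent, significand = sp_bits(-0.75)
print(sign, exponent, significand)
# sign = 1, exponent = 0111 1110 (126), significand = 100...0, as derived above
```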