The document discusses number representation systems for arithmetic processing. It covers topics like:
- Number representation using digit vectors and different number systems like binary, decimal, etc.
- Fixed and mixed radix number systems along with signed and unsigned representations.
- Standard forms for signed numbers including sign-magnitude, one's complement, and two's complement representations.
- Arithmetic operations like addition and subtraction for unsigned and signed numbers in different representation systems. Properties like changing the sign using complementation are also discussed.
- Key aspects that affect arithmetic complexity like the choice of number representation and different mapping techniques between representations are highlighted.
CS/EE 5830/6830 Arithmetic Processing
VLSI ARCHITECTURE

AP = (operands, operation, results, conditions, singularities)
- Operands are:
  - Set of numerical values
  - Range
  - Precision (number of bits)
  - Number Representation System (NRS)
- Operation: +, -, *, /, etc.
- Conditions: values of results (zero, negative, etc.)
- Singularities: illegal results (overflow, NaN, etc.)

Chapter 1 – Basic Number Representations and Arithmetic Algorithms
Number Representation
- Need to map numbers to bits (or some other representation, but we'll use bits)
- The representation you choose matters! The complexity of arithmetic operations depends heavily on the representation!
- But, be careful of conversions: arithmetic that's easy in one representation may lose its advantage when you convert back and forth…

Basic Fixed Point NRS
- Number represented by an ordered n-tuple of symbols
- The symbols are digits; the n-tuple is a digit-vector
- The number of digits in the digit-vector is the precision

Digit Values
- A set of numerical values for the digits: D is the set of possible values for each digit, and |D| is the cardinality of D
- Binary (cardinality 2) is {0,1}
- Decimal (cardinality 10) is {0,1,2,3,4,5,6,7,8,9}
- Balanced ternary (cardinality 3) is {-1,0,1}

Rule of Interpretation
- Mapping of the set of digit-vectors to numbers: Digit-Vectors -> Numbers (N, Z, R, …)
- Example: (1,3) -> "thirteen"
- The set of integers represented by a digit-vector with n digits is a finite set with at most |D|^n elements
Mappings…
- Digit-vectors -> N,R,Z,…: Nonredundant (each number has exactly one digit-vector)
- Digit-vectors -> N,R,Z,…: Redundant (some numbers have more than one digit-vector)
- Digit-vectors -> N,R,Z,…: Ambiguous (one digit-vector maps to more than one number) – Not Useful!

Positional Weighted Systems
- Integer x represented by digit vector X = (X_{n-1}, …, X_1, X_0)
- Rule of interpretation: x = sum_{i=0}^{n-1} X_i * W_i
- Where the weight vector is W = (W_{n-1}, …, W_1, W_0)

Radix Number Systems
- Weights are not arbitrary – they are related to a radix vector R = (R_{n-1}, …, R_1, R_0)
- So that W_0 = 1 and W_i = W_{i-1} * R_{i-1}

Fixed-Radix Systems
- In a fixed-radix system all elements of the radix vector have the same value r (the radix)
- Weight vector is W = (r^{n-1}, …, r^2, r, 1), so x = sum_{i=0}^{n-1} X_i * r^i
- Radix 2:  W = (…, 8, 4, 2, 1)
- Radix 4:  W = (…, 64, 16, 4, 1)
- Radix 10: W = (…, 1000, 100, 10, 1)
Mixed-Radix Systems
- Time is the most common… hours, minutes, seconds
- X = (5, 37, 43) = 20,263 seconds
    5 x 3600 = 18,000
   37 x   60 =  2,220
   43 x    1 =     43
   Total     = 20,263 seconds

Canonical Systems
- Canonical if the digit set is {0, 1, …, r-1}
- Binary  = {0,1}
- Octal   = {0,1,2,3,4,5,6,7}
- Decimal = {0,1,2,3,4,5,6,7,8,9}
- Range of values with n radix-r digits is 0 to r^n - 1
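The seconds computation above is just a weighted sum under a mixed-radix weight vector, which a few lines of Python can confirm (the function name is my own, not from the slides):

```python
def mixed_radix_value(digits, weights):
    """Evaluate a digit-vector under an arbitrary weight vector
    (most-significant digit first)."""
    return sum(d * w for d, w in zip(digits, weights))

# Hours/minutes/seconds: weight vector (3600, 60, 1),
# derived from the radix vector (24, 60, 60).
print(mixed_radix_value((5, 37, 43), (3600, 60, 1)))  # 20263
```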
Non-Canonical Systems
- Digit set that is non-canonical… e.g. a non-canonical decimal or non-canonical binary digit set
- Redundant if non-canonical, e.g. a binary system with digit set {-1,0,1}:
  (1,1,0,1) and (1,1,1,-1) both represent "thirteen"

Conventional Number Systems
- A system with a fixed positive radix r and a canonical set of digit values is a radix-r conventional number system
- After all this fuss, these are what we'll mostly worry about… specifically binary (radix 2)
- We'll also see some redundant binary representations (carry-save, signed-digit)
Aside – Residue Numbers
- Example of a non-radix number system: weights are not defined recursively
- A Residue Number System (RNS) uses a set of pairwise relatively prime moduli
- A positive integer x is represented by the vector of its residues, x mod p_i
- Can allow fast add and multiply
- No notion of digits on the left being more significant than digits on the right (i.e. no weighting)

Aside – Residue Numbers (example)
- P = (17, 13, 11, 7, 5, 3, 2)
- Digit-vector (13, 4, 8, 2, 0, 0, 0) is the number "thirty":
    30 mod 17 = 13
    30 mod 13 = 4
    30 mod 11 = 8
    etc…
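The residue mapping, and the carry-free digit-parallel addition it allows, can be sketched in a few lines of Python (the helper names are mine, not from the slides):

```python
def to_rns(x, moduli):
    """Represent x by its residues modulo each pairwise-coprime modulus."""
    return tuple(x % m for m in moduli)

def rns_add(a, b, moduli):
    """Addition is digit-parallel: no carries propagate between residues."""
    return tuple((ai + bi) % m for ai, bi, m in zip(a, b, moduli))

P = (17, 13, 11, 7, 5, 3, 2)
print(to_rns(30, P))  # (13, 4, 8, 2, 0, 0, 0)
# 30 + 12 = 42, computed entirely residue-by-residue:
print(rns_add(to_rns(30, P), to_rns(12, P), P) == to_rns(42, P))  # True
```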
Lots of Choices…

Back to Binary
- Non-negative integers – digits = {0,1}
- Range with n bits is 0 to 2^n - 1
- Higher power-of-2 radix (r = 2^k): group the bits into groups of k bits
- X = (1, 1, 0, 0, 0, 1, 0, 1, 1, 1, 0, 1)
    = ((1,1), (0,0), (0,1), (0,1), (1,1), (0,1))
    = (3, 0, 1, 1, 3, 1) base 4 (quaternary)
    = ((1,1,0), (0,0,1), (0,1,1), (1,0,1))
    = (6, 1, 3, 5) base 8 (octal)
    = ((1,1,0,0), (0,1,0,1), (1,1,0,1))
    = (C, 5, D) base 16 (hexadecimal)
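The bit-grouping conversion above is mechanical, so a short sketch can reproduce all three groupings (the function name is my own):

```python
def regroup(bits, k):
    """Convert a radix-2 digit-vector to radix 2**k by grouping k bits,
    padding with implied leading zeros if needed."""
    bits = list(bits)
    while len(bits) % k:
        bits.insert(0, 0)           # implied leading 0's
    return [int("".join(map(str, bits[i:i + k])), 2)
            for i in range(0, len(bits), k)]

X = (1, 1, 0, 0, 0, 1, 0, 1, 1, 1, 0, 1)
print(regroup(X, 2))  # [3, 0, 1, 1, 3, 1]  (base 4)
print(regroup(X, 3))  # [6, 1, 3, 5]        (base 8)
print(regroup(X, 4))  # [12, 5, 13]         (base 16: C, 5, D)
```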
Signed Integers
- Three choices for representing signed ints:
  - Directly in the number system: a signed-digit NRS, i.e. digits {-1, 0, 1}
  - Use an extra symbol to represent the sign: Sign and Magnitude
  - An additional mapping onto positive integers, x -> xR: True and Complement system

Transformation
- Transform signed numbers into unsigned, then use conventional systems
- Example: "minus two" maps to "six", represented by (1,1,0)
- Signed integer x -> positive integer xR -> digit-vector X
- Z (signed) -> N (unsigned) -> digit vectors
True and Complement System
- Signed integers in the range -C/2 <= x < C/2, mapped by xR = x mod C
- A negative x is represented by xR = C - |x|
- C is the Complementation constant
- Unambiguous if the true and complement regions do not overlap

Converse Mapping
- Convert back and forth:
    x = xR        if xR < C/2   (true forms)
    x = xR - C    if xR > C/2   (complement forms)

Boundary conditions
- If xR = C/2 can be represented, you can assign it to either +C/2 or -C/2
  - The representation is no longer symmetric
  - Not closed under the sign-change operation
- If xR = C can be represented, then there are two representations of 0

Two standard forms
- Range Complement system (also called Radix Complement): C = r^n; two's complement in radix 2
- Digit Complement system (also called Diminished Radix Complement): C = r^n - 1; one's complement in radix 2
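The mapping and its converse are short enough to check directly. A minimal Python sketch, assuming the mod-C encoding described above (function names are mine):

```python
def encode(x, C):
    """True-and-complement mapping: xR = x mod C."""
    return x % C

def decode(xR, C):
    """Converse mapping: true forms below C/2, complement forms above."""
    return xR if xR < C / 2 else xR - C

n = 3
C_twos, C_ones = 2**n, 2**n - 1      # radix vs. diminished-radix complement
print([encode(x, C_twos) for x in (-4, -3, -1)])  # [4, 5, 7]
print(decode(5, C_twos), decode(4, C_ones))       # -3 -3
```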
Two's Complement
- For n bits in the digit vector, C = 2^n
- Example, three bits: C = 8
  - 8 is outside the range: with 3 bits you can't represent 8, so there is only one representation of 0
  - xR = C/2 = 4 can be represented, so you have a choice; usually choose it to represent -4 (for easy sign detection)
  - Range is then -4 <= x <= 3 (asymmetric)

One's Complement
- For n bits in the digit vector, C = 2^n - 1
- Example, three bits: C = 7
  - 7 is representable in 3 bits, so there are two representations of 0
  - C/2 = 3.5 cannot be represented, so the range is symmetric
  - Range is then -3 <= x <= 3
Examples (n=3 bits)
- Two's complement: -3 represented as 101; 111 represents -1
- One's complement: -3 represented as 100

Range Comparison (3 bits)

  Decimal | Binary (unsigned) | Sign & Magnitude | Two's complement | One's complement
       7  |       111         |                  |                  |
       6  |       110         |                  |                  |
       5  |       101         |                  |                  |
       4  |       100         |                  |                  |
       3  |       011         |       011        |       011        |       011
       2  |       010         |       010        |       010        |       010
       1  |       001         |       001        |       001        |       001
       0  |       000         |     000/100      |       000        |     000/111
      -1  |                   |       101        |       111        |       110
      -2  |                   |       110        |       110        |       101
      -3  |                   |       111        |       101        |       100
      -4  |                   |                  |       100        |
Example: 2's comp, n=4
[Number-wheel figure: codes 0000…0111 represent +0…+7; codes 1000…1111 represent -8…-1.]

Example: 1's comp, n=4
[Number-wheel figure: codes 0000…0111 represent +0…+7; codes 1000…1111 represent -7…-0.]
Converse Mapping (2's comp)
- The most significant bit has negative weight; the remaining bits have positive weight:

    x = -X_{n-1} * 2^{n-1} + sum_{i=0}^{n-2} X_i * 2^i

Two's Comp. Example
- If n=5: X = 11011 -> x = -16 + 8 + 0 + 2 + 1 = -5
-         X = 01011 -> x =   0 + 8 + 0 + 2 + 1 = 11
Converse Mapping (1's comp)
- Similar in one's complement (case for X_{n-1} = 1):

    x = -X_{n-1} * (2^{n-1} - 1) + sum_{i=0}^{n-2} X_i * 2^i

- The most significant bit has negative weight and the rest positive; the weight of the MSB is different because C = 2^n - 1. The intuition is that you have to add 1 to jump over the extra representation of 0.

One's Comp. Example…
- n=5: X = 11011 -> x = -(16-1) + 8 + 0 + 2 + 1 = -4
-      X = 01011 -> x =       0 + 8 + 0 + 2 + 1 = 11
- Remember this! We'll use it later when we need to adjust things in arrays of signed addition. Think of partial product arrays in multiplication…
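Both converse mappings are direct weighted sums, so they are easy to check in code. A sketch (function names are mine) that reproduces the -5 and -4 examples:

```python
def twos_value(bits):
    """Two's complement: MSB has weight -2**(n-1), rest positive."""
    n = len(bits)
    return -bits[0] * 2**(n - 1) + sum(
        b * 2**i for i, b in enumerate(reversed(bits[1:])))

def ones_value(bits):
    """One's complement: MSB weight is -(2**(n-1) - 1), because C = 2**n - 1."""
    n = len(bits)
    return -bits[0] * (2**(n - 1) - 1) + sum(
        b * 2**i for i, b in enumerate(reversed(bits[1:])))

print(twos_value([1, 1, 0, 1, 1]))  # -16 + 8 + 2 + 1 = -5
print(ones_value([1, 1, 0, 1, 1]))  # -15 + 8 + 2 + 1 = -4
print(twos_value([0, 1, 0, 1, 1]))  # 11
```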
Sign Bits
- Conveniently, the sign is determined by the high-order bit (assuming xR = C/2 is assigned to represent x = -C/2)

Addition (unsigned)
- Adding two n-bit operands results in n+1 result bits
- Usually call the (n+1)th bit Cout
- In terms of digit vectors: Cout = overflow!
Addition (Signed)
- Assume no overflow for a moment…
- Use the property zR = (xR + yR) mod C

Addition: two's comp.
- C = 2^n, and mod 2^n means ignore bit n (the carry out)!
- Makes addition simple: add the numbers and ignore the carry out
- [Adder diagram: inputs xR, yR; carry-out cout, carry-in cin; output zR]
Addition: one's comp.
- C = 2^n - 1, so the mod operation is not as easy: zR = wR mod (2^n - 1), where wR = xR + yR
- If cout is 1, subtract 2^n (ignore cout) and add 1: the end-around carry
- [Adder diagram: cout is fed back around to cin]
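The end-around carry rule can be sketched in software, treating the n-bit values as Python ints (function name is mine):

```python
def ones_comp_add(xR, yR, n):
    """One's complement addition: if the carry out of bit n-1 is 1,
    drop it and add 1 back in (the end-around carry)."""
    w = xR + yR
    if w >= 2**n:            # cout == 1
        w = w - 2**n + 1     # ignore cout, end-around carry
    return w

print(format(ones_comp_add(0b110, 0b101, 3), "03b"))  # 100  (-1 + -2 = -3)
print(format(ones_comp_add(0b101, 0b011, 3), "03b"))  # 001  (-2 +  3 = +1)
```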
Change of Sign
- Start with bitwise negation: flip every bit in the digit vector (Boolean style: A')
- Fundamental property of n-digit radix-r numbers: A + A' = r^n - 1

- One's Complement:  -A = C - A = r^n - 1 - A = A'
- Two's Complement:  -A = C - A = r^n - A = A' + 1
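The two negation rules above (invert, vs. invert-and-add-one) are easy to demonstrate with bit masks (function names are mine):

```python
def negate_twos(A, n):
    """Two's complement: -A = C - A = (2**n - 1 - A) + 1 = NOT(A) + 1."""
    mask = 2**n - 1
    return ((A ^ mask) + 1) & mask

def negate_ones(A, n):
    """One's complement: -A = C - A = 2**n - 1 - A = NOT(A)."""
    return A ^ (2**n - 1)

print(format(negate_twos(0b0011, 4), "04b"))  # 1101  (+3 -> -3)
print(format(negate_ones(0b0011, 4), "04b"))  # 1100  (+3 -> -3)
```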
Another two's comp. check
- Verify the property -A = A' + 1 using the two's complement definition…

Two's comp subtract
- x - y = x + (-y): complement y bitwise and set cin = 1
- [Adder diagram: yR inverted; xR; cin = 1; output zR]
Two's comp add/subtract
- One adder can do both: a Sub control signal selects y (add) or y' (subtract)
- Each bit of y passes through an XOR gate with Sub; Sub also drives cin

    a b | c = a XOR b
    0 0 | 0
    0 1 | 1
    1 0 | 1
    1 1 | 0

- Overflow?
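The combined add/subtract datapath can be sketched in software: the Sub line XORs every bit of y and doubles as the carry-in (function name is mine):

```python
def add_sub(xR, yR, sub, n):
    """Adder/subtractor sketch: Sub=1 complements y bitwise (XOR gates)
    and also feeds the carry-in, computing x + NOT(y) + 1 = x - y."""
    mask = 2**n - 1
    y_in = yR ^ (mask if sub else 0)   # per-bit XOR with Sub
    return (xR + y_in + sub) & mask    # Sub doubles as cin; drop cout

print(add_sub(0b0101, 0b0011, 1, 4))  # 2   (5 - 3)
print(add_sub(0b0011, 0b0101, 1, 4))  # 14  (3 - 5 = -2 -> 1110)
print(add_sub(0b0010, 0b0011, 0, 4))  # 5   (2 + 3)
```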
Overflow (unsigned)
- Overflow means that the result can't be represented in n bits
- For unsigned addition, this simply means that the cout was 1
- For n=4, this means the result was bigger than 15:
    1010_2 (10_10) + 1100_2 (12_10) = 10110_2 (22_10)

Overflow (signed)
- Still the same definition – the result can't be represented in n bits
- But now it is not as easy as looking at cout
- For 4 bits, two's comp, overflow means the answer was smaller than -8 or larger than 7
- Overflow if (pos) + (pos) = (neg), e.g. 5 + 6 = 11, or (neg) + (neg) = (pos), e.g. -5 + -6 = -11
- Can you ever have overflow with (pos) + (neg)?
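The same-sign rule above is a complete overflow test, and a short sketch can confirm it (function name is mine; operands are given in their unsigned n-bit encodings):

```python
def signed_add_overflow(x, y, n):
    """Two's complement overflow: operands have the same sign but the
    n-bit sum has the opposite sign. (pos) + (neg) can never overflow."""
    z = (x + y) & (2**n - 1)
    sx, sy, sz = (x >> (n - 1)) & 1, (y >> (n - 1)) & 1, (z >> (n - 1)) & 1
    return sx == sy and sz != sx

print(signed_add_overflow(5, 6, 4))    # True   (5 + 6 = 11 > 7)
print(signed_add_overflow(11, 10, 4))  # True   (-5 + -6 = -11 < -8)
print(signed_add_overflow(5, 14, 4))   # False  (5 + -2 = 3)
```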
Example: 2's comp, n=4
[The 4-bit two's complement number wheel from earlier, repeated for reference: codes 0000…0111 = +0…+7, 1000…1111 = -8…-1.]

Overflow (signed)
- Overflow is only possible if the arguments have the same sign
- Overflow if the result has a different sign
Overflow (signed)
- Or, consider all possible cases around the MSB…

    X_{n-1} Y_{n-1} C_{n-1} | C_n Z_{n-1} | OVF
       0       0       0    |  0     0    | No
       0       0       1    |  0     1    | Yes
       0       1       0    |  0     1    | No
       0       1       1    |  1     0    | No
       1       0       0    |  0     1    | No
       1       0       1    |  1     0    | No
       1       1       0    |  1     0    | Yes
       1       1       1    |  1     1    | No

Implied Digits (unsigned)
- Unsigned numbers have an infinite number of leading 0's
- 5,243 = …0,000,000,000,005,243
- 1 1010 = …0 0000 0000 0001 1010
- Changing from n bits to m bits (m > n) is a simple matter of padding with 0's to the left
Changing number of bits (signed)
- Signed numbers can be thought of as having infinite replicas of the sign bit to the left
- Going from four bits to eight bits: replicate the sign bit into the new positions
- Remember to copy the sign bit into the empty MSBs!

Shifting
- Shifting corresponds to multiply and divide by powers of 2
- Left arithmetic shift: shift 0's in at the LSB; OVF if the sign changes
- Right arithmetic shift: divide by 2 (integer result) for a 1-bit shift; copy the sign bit into the empty MSB
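Sign extension and the arithmetic right shift can both be sketched with masks (function names are mine; values are unsigned n-bit encodings):

```python
def sign_extend(X, m, n):
    """Widen an n-bit two's complement value to m bits by
    replicating the sign bit into the new positions."""
    if (X >> (n - 1)) & 1:                     # negative: fill with 1's
        X |= ((1 << (m - n)) - 1) << n
    return X

def arith_shift_right(X, n):
    """Divide by 2 (integer result): shift right, copy the sign
    bit into the empty MSB."""
    return (X >> 1) | (((X >> (n - 1)) & 1) << (n - 1))

print(format(sign_extend(0b1010, 8, 4), "08b"))    # 11111010  (-6 in 8 bits)
print(format(arith_shift_right(0b1010, 4), "04b")) # 1101      (-6 / 2 = -3)
```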
Multiplication (unsigned)
- Pencil and paper method: compute n terms x * Y_i * 2^i and then sum them
- The ith term requires an i-position shift and a multiplication of x by the single digit Y_i
- Requires n-1 adders
- [Array-multiplier figure: partial-product rows B0, B1, B2, B3 feeding an adder array]

Serial Multiplication (unsigned)
- Instead of using n-1 adders, can iterate with 1 adder
- Takes n steps for n bits
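The serial shift-and-add scheme can be sketched directly from the description above (function name is mine):

```python
def serial_multiply(x, y, n):
    """Unsigned shift-and-add multiplication: n steps, one adder.
    Step i adds the i-position-shifted multiplicand when digit Y_i is 1."""
    p = 0
    for i in range(n):
        if (y >> i) & 1:     # digit Y_i
            p += x << i      # x * Y_i * 2**i
    return p

print(serial_multiply(13, 11, 4))  # 143
```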
Multiplication (signed!)
- Remember that the MSB has negative weight
- Add the partial products as normal, but subtract the multiplicand in the last step…

Division (unsigned)
- x = q*d + w (quotient q, divisor d, remainder w)
- Consider 0 < d and x < r^n * d (precludes divide-by-zero and OVF)
- Basic division is n iterations of the recurrence:

    w[0] = x
    w[j+1] = r * w[j] - d* * q_{n-1-j},   j = 0, …, n-1

  where q = sum_{i=0}^{n-1} q_i * r^i and d* = d * r^n
- i.e. the divisor is aligned with the most-significant half of the residual
Division (unsigned)
- In each step of the iteration: get one digit of the quotient
- The value of the digit is bounded: you find the right digit such that the current remainder is less than the (shifted) divisor

Long Division
- In binary you only have to guess 1 or 0
- Guess 1 and fix it if you're wrong (restoring)
Restoring Division
1. Shift the current residual one bit left
2. Subtract the divisor from this result
3. If the result of step 2 is negative, q = 0, else q = 1
4. If the result of step 2 is negative, restore the old value by adding the divisor back
5. Repeat n times…
This is what the recurrence in the book says…
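The five steps above translate almost line-for-line into code. A sketch of the recurrence, assuming the d* = d * 2^n alignment described earlier (function name is mine):

```python
def restoring_divide(x, d, n):
    """Restoring division, n iterations: shift, subtract, and restore
    (add d back) whenever the tentative residual goes negative.
    Requires 0 < d and x < d * 2**n."""
    w = x
    q = 0
    dstar = d << n                  # divisor aligned with upper half
    for _ in range(n):
        w = (w << 1) - dstar        # shift left, subtract divisor
        if w < 0:
            q = (q << 1) | 0        # quotient bit 0
            w += dstar              # restore
        else:
            q = (q << 1) | 1        # quotient bit 1
    return q, w >> n                # quotient, remainder

print(restoring_divide(13, 3, 4))   # (4, 1)
print(restoring_divide(100, 7, 4))  # (14, 2)
```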
Restoring Division Example
[Worked-example figure; the numeric values are not recoverable. Each of the four iterations proceeds as:]
- Shift
- Subtract the divisor (add its negative)
- Use the tentative partial residual to decide on the quotient bit
- If the partial residual was negative, restore by adding the divisor back in; the "real" partial residual is the starting point for the next iteration
- If the partial residual was positive, the residual is correct – no restoration needed
Non-performing Division
- Consider what happens: the result at each step is 2r - d (r is the current residual)
- If the result is negative, we restore by adding d back in
- But if you store the result in a separate place and don't update the residual until you know whether it's negative, you can save the restoring steps

Non-restoring Division
- Consider again: at each step, 2*residual - d
- If it's negative, restore to 2r by adding d back in; then shift to get 4r, then subtract, getting 4r - d
- Suppose you don't restore, but continue with the shift, resulting in 4r - 2d
- Now add d instead of subtracting, resulting in 4r - d
- That's what you wanted!

Non-restoring Division
- For a positive partial residual: subtract the divisor
- For a negative partial residual: add the divisor back in (this corrects for the mistake made on the last iteration…)
- If the last residual is negative: do one final restoration

Non-restoring Division Example
[Worked-example figure; the numeric values are not recoverable.]
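The add-or-subtract rule can be sketched as a variant of the restoring divider above, with quotient bits {0,1} chosen by the sign of the new residual (function name is mine):

```python
def nonrestoring_divide(x, d, n):
    """Non-restoring division: never restore immediately; a negative
    residual is corrected on the NEXT step by adding (rather than
    subtracting) the divisor, plus one final restoration if needed.
    Requires 0 < d and x < d * 2**n."""
    w = x
    q = 0
    dstar = d << n
    for _ in range(n):
        if w >= 0:
            w = (w << 1) - dstar    # positive residual: subtract
        else:
            w = (w << 1) + dstar    # negative residual: add back
        q = (q << 1) | (1 if w >= 0 else 0)
    if w < 0:                       # one final restoration
        w += dstar
    return q, w >> n

print(nonrestoring_divide(13, 3, 4))   # (4, 1)
print(nonrestoring_divide(100, 7, 4))  # (14, 2)
```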
Whew!
- Basic number representation systems
- Unsigned, signed
- Conversions
- Basic addition, subtraction of signed numbers
- Multiplication of unsigned and signed
- Division of unsigned
Now let's speed up the operations!