Chapter 1
Introduction to Digital Systems
A digital system is a combination of devices designed to process information in binary form.
Digital systems are widely used in almost all areas of life. The best example of a digital system is a
general-purpose digital computer. Other examples include digital displays, electronic calculators,
digital watches, digital counters, digital voltmeters, digital storage oscilloscopes, telecommunication
systems and so on.
1.1 Advantages of Digital Systems
1. Ease of design: In digital circuits, the exact values of the quantities are not important. The
quantities take only two discrete values, i.e. logic HIGH or logic LOW. The behaviour of small
circuits can be visualised without any special insight into the operation of the components,
so digital systems are easier to design.
2. Ease of storage: Digital circuits use latches that can hold digital information for as long as
necessary. Using mass storage techniques, it is easy to store billions of bits of information in
a relatively small physical space.
3. Immunity to noise: Noise is not critical in digital systems because the exact values of the
quantities are not important. However, the noise should not be large enough to prevent us
from distinguishing between logic HIGH and logic LOW.
4. Accuracy and precision: Digital information does not deteriorate and is not affected by noise
during processing, so accuracy and precision are easier to maintain in a digital system.
5. Programmability: The operation of digital systems is controlled by writing programs. Digital
design is carried out using Hardware Description Languages (HDLs), with which the structure
and functioning of a digital circuit are modelled and the behaviour of the designed model is
tested before it is built into real hardware.
6. Speed: Digital devices use Integrated Circuits (ICs) in which a single transistor can switch in
a few picoseconds. Using these ICs, a digital system can process input information and
produce an output in a few nanoseconds.
7. Economy: A large number of digital circuits can be integrated into a single chip that provides
a lot of functionality in a small physical space, so digital systems can be mass-produced at a
low cost.
1.2 Limitations of Digital Systems
The real world is analog: The quantities to be processed by a digital system are analog in nature,
for example temperature, pressure, velocity etc. The input quantities must be converted to digital
form before processing and the processed digital output must be converted back to analog, as
shown in Figure 1.1. The conversion of information between the analog and digital domains takes
time and can add complexity and expense to a system.
Figure 1.1: Block diagram of digital system
1.3 Building Blocks of Digital Systems
The best example of a digital system is a computer. The standard block diagram of a computer
consists of:
1. Central Processing Unit(CPU): It does all the computations and processing.
2. Memory: It stores the program and data.
3. I/O Units: These units define how the input is fed into the computer and how to get the
output from the computer.
The CPU can further be divided into the arithmetic and logic unit (ALU), control logic and registers
to store results. The ALU is made up of adders, subtractors, multipliers etc. that perform arithmetic
and logical operations. The functional blocks of the ALU are made up of logic gates. A gate is a logical
building block with one or more inputs and an output; depending upon the gate type, the output is
defined in terms of the inputs. Further, a gate is made up of transistors, capacitors etc. These
active and passive elements are the basic units of actual circuits. Thus a digital system can be
broken down into subsystems, subsystems into modules, modules into functional blocks, functional
blocks into logic units, logic units into circuits, and the circuits are made up of devices and components.
Chapter 2
Digital Number Systems and Codes
A digital system uses many number systems. The commonly used ones are the decimal, binary, octal
and hexadecimal systems. These number systems are used in everyday life and are called positional
number systems, since each digit of a number has an associated weight. The value of the number
is the weighted sum of all its digits.
1. Decimal Number System: This is the most commonly used number system and has 10 digits:
0, 1, 2, 3, 4, 5, 6, 7, 8 and 9. The radix of the decimal system is 10, so it is also called the base-10
system. The decimal number system is not convenient for implementing a digital system, because
it is very difficult to design electronic devices with 10 different voltage levels.
2. Binary Number System: All digital systems use the binary number system as the basic
number system for their operations. The radix of the binary system is 2, so it is also called the
base-2 system. It has only 2 possible values, 0 and 1. A binary digit is called a bit. A binary number
can have more than 1 bit. The left-most bit is called the most significant bit (MSB) and the right-most
bit is called the least significant bit (LSB). It is easy to design simple, accurate electronic
circuits that operate with only 2 voltage levels using the binary number system. Binary data
is extremely robust in transmission because noise can be rejected easily. The binary number
system is the lowest possible base system, so any higher-order counting system can be easily
encoded.
3. Octal and Hexadecimal Number Systems: The radix of the octal number system is 8 and the radix
of hexadecimal is 16. The octal number system needs 8 digits, so it uses the digits 0-7 of the
decimal system. Hexadecimal needs 16 digits, so it supplements the decimal digits 0-9 with
the letters A-F. The octal and hexadecimal number systems are useful for representing
multiple bits because their radices are powers of 2. A string of 3 bits can be uniquely
represented by an octal digit. Similarly, a string of 4 bits can be represented by a hexadecimal
digit. The hexadecimal number system is widely used in microprocessors and microcontrollers.
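As an illustration of this grouping, here is a small Python sketch (the function name is illustrative and not from the text) that converts a bit string to octal or hexadecimal by translating groups of 3 or 4 bits:

    def bits_to_base(bits, group):
        # Convert a bit string to octal (group=3) or hexadecimal (group=4)
        # by left-padding with zeros and translating each group of bits.
        digits = "0123456789ABCDEF"
        width = (len(bits) + group - 1) // group * group
        bits = bits.zfill(width)
        return "".join(digits[int(bits[i:i + group], 2)] for i in range(0, len(bits), group))

    print(bits_to_base("11011001", 3))  # 331 -> (331)_8
    print(bits_to_base("11011001", 4))  # D9  -> (D9)_16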
2.1 General Positional Number System
The value of a number in any radix is given by the formula

D = Σ_{i=−n}^{p−1} d_i × r^i (2.1)

where r is the radix of the number, p is the number of digits to the left of the radix point, n is the
number of digits to the right of the radix point, and d_i is the i-th digit of the number. The decimal
value of the number is determined by converting each digit of the number to its radix-10 equivalent
and expanding the formula using radix-10 arithmetic.
Example 1: (331)_8 = 3×8^2 + 3×8^1 + 1×8^0 = 192 + 24 + 1 = (217)_10
Example 2: (D9)_16 = 13×16^1 + 9×16^0 = 208 + 9 = (217)_10
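The formula can be applied mechanically. A minimal Python sketch (illustrative names, assuming the digits 0-9 and A-F) that evaluates the decimal value of a number string in any radix up to 16, including a fractional part:

    def positional_value(number, radix):
        # Decimal value of a number string in the given radix: D = sum of d_i * r^i.
        digits = "0123456789ABCDEF"
        integer_part, _, fraction_part = number.partition(".")
        value = 0.0
        for i, d in enumerate(reversed(integer_part)):      # weights r^0, r^1, ...
            value += digits.index(d.upper()) * radix ** i
        for i, d in enumerate(fraction_part, start=1):      # weights r^-1, r^-2, ...
            value += digits.index(d.upper()) * radix ** (-i)
        return value

    print(positional_value("331", 8))   # 217.0
    print(positional_value("D9", 16))   # 217.0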
2.2 Addition and Subtraction of Numbers
Addition and subtraction uses the same technique of general decimal numbers, only difference is
the use of binary numbers. To add two binary numbers, the least significant bits of two numbers
are added with an initial carry cin = 0 to produce sum(s) and carry(cout ) as shown in the Table 2.1.
Then the process of adding is continued by shifting to the bits towards left till it reaches most sig-
nificant bit and the carry in for each addition process is the carry out of the previous bits addition.
c_in a b s c_out
0 0 0 0 0
0 0 1 1 0
0 1 0 1 0
0 1 1 0 1
1 0 0 1 0
1 0 1 0 1
1 1 0 0 1
1 1 1 1 1
Table 2.1: Binary addition
Example: Addition of two numbers in decimal and binary number system.
decimal binary
carry 100 01111000
a 190 10111110
b 141 10001101
sum 331 101001011
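The bit-by-bit carry propagation shown above translates directly into code. A Python sketch of a bit-serial adder following Table 2.1 (names are illustrative):

    def add_binary(a, b):
        # Add two unsigned binary strings bit by bit, propagating the carry
        # from the least significant bit towards the most significant bit.
        width = max(len(a), len(b))
        a, b = a.zfill(width), b.zfill(width)
        carry, result = 0, []
        for bit_a, bit_b in zip(reversed(a), reversed(b)):
            total = int(bit_a) + int(bit_b) + carry
            result.append(str(total % 2))    # sum bit (s)
            carry = total // 2               # carry out becomes the next carry in
        if carry:
            result.append("1")
        return "".join(reversed(result))

    print(add_binary("10111110", "10001101"))   # 101001011 (190 + 141 = 331)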
Binary subtraction is carried out similarly to binary addition, with carries replaced by borrows;
it produces the difference (d) and the borrow out (b_out) as shown in Table 2.2.
b_in a b d b_out
0 0 0 0 0
0 0 1 1 1
0 1 0 1 0
0 1 1 0 0
1 0 0 1 1
1 0 1 0 1
1 1 0 0 0
1 1 1 1 1
Table 2.2: Binary subtraction
Example: Subtraction of two numbers in decimal and binary number system.
decimal binary
borrow 100 01111100
a 229 11100101
b - 46 - 00101110
difference 183 10110111
Subtraction is commonly used in computers to compare two numbers. For example, if the
operation (a − b) produces a borrow out, then a is less than b; otherwise, a is greater than or equal
to b.
2.3 Representation of Negative Numbers
Positive and negative numbers in the decimal system are indicated by + and − signs respectively.
This method of representation is called sign-magnitude representation. But those signs are not
convenient for representing binary numbers in digital circuits, so there are several ways of
representing signed binary numbers.
2.3.1 Signed-Magnitude Representation
The signed-magnitude system uses an extra bit to represent the sign. A positive number is indicated
by '0' and a negative number by '1'. The most significant bit is used as the sign bit and the lower bits
contain the magnitude. The following are examples of 8-bit signed binary integers and their
decimal equivalents.
Example 1: (01010101)_2 = +(85)_10 and (11010101)_2 = −(85)_10
Example 2: (01111111)_2 = +(127)_10 and (11111111)_2 = −(127)_10
Example 3: (00000000)_2 = +(0)_10 and (10000000)_2 = −(0)_10
Since zero can be represented in two possible ways, there are equal numbers of positive and negative
integers in the signed-magnitude system. An n-bit signed-magnitude integer lies within the range
−(2^(n−1) − 1) to +(2^(n−1) − 1). Signed-magnitude arithmetic in a digital logic circuit requires
examining the signs of the addends. If the signs are the same, it must add the magnitudes, and if the
signs are different, it must subtract the smaller magnitude from the larger and give the result the
sign of the larger integer. This increases the complexity of the logic circuit, so sign-magnitude
representation is not convenient for hardware implementation.
2.3.2 Complement Number Systems
A complement number system negates a number by taking its complement as defined by the system.
The advantage of this system over the signed-magnitude representation is that two numbers in a
complement number system can be added or subtracted without sign and magnitude checks.
There are two complement number systems, viz. the radix complement and the diminished radix
complement. A complement number system has a fixed number of digits, say n. If an operation
produces more than n digits, the extra higher-order digits are discarded.
2.3.3 Radix Complement Representation
The radix complement of an n-digit number is obtained by subtracting it from r^n, where r is the
radix. In the decimal system, the radix complement is called the 10's complement. For example, the
10's complement of 127 = 10^3 − 127 = 873. If the number is in the range 1 to r^n − 1, the subtraction
results in another number in the same range. By definition, the radix complement of 0 is r^n, which
has n + 1 digits; the extra digit is discarded, which results in 0, and thus there is only one
radix-complement representation of zero.
By definition, a subtraction operation is required to obtain the radix complement. To avoid
subtraction, we can rewrite r^n as r^n = (r^n − 1) + 1, so that r^n − D = ((r^n − 1) − D) + 1, where D is
the n-digit number. The number (r^n − 1) is the highest n-digit number. If the complement of a digit
d is defined as r − 1 − d, then the radix complement is obtained by complementing the individual
digits of D and adding 1.
Example: the 10's complement of the 4-digit number 0127 = 9872 + 1 = 9873.
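The complement-the-digits-and-add-1 shortcut can be sketched in Python as follows (the function name and layout are illustrative):

    def radix_complement(number, radix):
        # Complement each digit to (r - 1 - d), add 1, and keep only the lowest n digits.
        digits = "0123456789ABCDEF"
        n = len(number)
        complemented = "".join(digits[radix - 1 - digits.index(d.upper())] for d in number)
        value = (int(complemented, radix) + 1) % radix ** n
        out = ""
        while value:                         # re-encode the value in the given radix
            out = digits[value % radix] + out
            value //= radix
        return out.rjust(n, "0")

    print(radix_complement("0127", 10))  # 9873 (10's complement)
    print(radix_complement("01111", 2))  # 10001 (2's complement of +15)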
Two's Complement Representation The radix complement for binary numbers is called the 2's
complement. The most significant bit (MSB) represents the sign. If it is 0, the number is positive,
and if it is 1, the number is negative. The remaining (n − 1) bits represent the magnitude, but not as
a simple weighted number: for computing the decimal value, the weight of the MSB is −2^(n−1)
instead of +2^(n−1).
Example 1: 15_10 = (01111)_2; the 2's complement of (01111)_2 = (10000 + 1) = (10001)_2 = −15_10.
Example 2: −127_10 = (10000001)_2; the 2's complement of (10000001)_2 = (01111110 + 1) = (01111111)_2 = +127_10.
Example 3: 0_10 = (0000)_2; the 2's complement of (0000)_2 = (1111 + 1) = (0000)_2 + carry 1 = 0_10.
The carry-out bit is ignored in 2's complement operations, so the result is the lower n bits. Since
zero's sign bit is 0, it is considered positive, and zero has only one representation in the 2's
complement system. Hence, the range of representable numbers is from −2^(n−1) to +(2^(n−1) − 1).
The two's complement system can be applied to fractions as well.
Example: the 2's complement of +19.3125_10 = (010011.0101)_2 is 101100.1010 + 1 = (101100.1011)_2 = −19.3125_10.
Sign Extension An n-bit two's complement number can be converted to an m-bit one. If m > n, pad
a positive number with 0s and a negative number with 1s. If m < n, discard the (n − m) leftmost bits;
the result is valid only if all the discarded bits are the same as the sign bit of the result.
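A Python sketch (assumed helper names, not from the text) that encodes an integer into an n-bit two's complement string, decodes it back using the −2^(n−1) weight of the MSB, and sign-extends it:

    def to_twos_complement(value, n):
        # n-bit two's complement encoding of value in the range -2^(n-1) .. 2^(n-1)-1.
        assert -(1 << (n - 1)) <= value < (1 << (n - 1)), "value out of range"
        return format(value & ((1 << n) - 1), f"0{n}b")

    def from_twos_complement(bits):
        # Decode: the MSB carries weight -2^(n-1); the other bits have positive weights.
        n = len(bits)
        return int(bits, 2) - (1 << n) * int(bits[0])

    def sign_extend(bits, m):
        # Extend an n-bit two's complement string to m bits by replicating the sign bit.
        return bits[0] * (m - len(bits)) + bits

    print(to_twos_complement(-127, 8))       # 10000001
    print(from_twos_complement("10000001"))  # -127
    print(sign_extend("1101", 8))            # 11111101 (-3 remains -3)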
2.3.4 Diminished Radix-complement Representation
The diminished radix complement of an n-digit number is obtained by subtracting it from (r^n − 1).
This is obtained by complementing the individual digits of the number. In the decimal system it is
called the 9's complement.
One's Complement Representation The diminished radix complement for binary numbers is called
the one's complement. The MSB represents the sign. If it is 0, the number is positive and the
remaining (n − 1) bits directly indicate the magnitude. If the MSB is 1, the number is negative and
the complement of the remaining (n − 1) bits is the magnitude of the number. Positive numbers have
the same representation in both one's and two's complement, but negative number representations
differ by 1. For computing decimal equivalents, the weight of the MSB is −(2^(n−1) − 1).
Example 1: the one's complement of 4_10 = (0100)_2 is (1011)_2 = −4_10.
Example 2: the one's complement of 0_10 = (0000)_2 is (1111)_2 = −0_10.
There are two representations of zero, positive and negative zero. So the range of representation is
−(2^(n−1) − 1) to +(2^(n−1) − 1).
The main advantage of the one's complement is that it is easy to complement. But the design of
adders is more difficult than with two's complement. Since zero has two representations,
zero-detecting circuits must check for both representations of zero.
2.4 Excess Representations
In excess representation, a bias B is added to the true value of the number to produce an excess
number. Consider an m-bit string whose unsigned integer value is M (0 ≤ M < 2^m); in excess-B
representation the m-bit string represents the signed integer (M − B), where B is the bias of the
number system. An excess-2^(m−1) system represents any number X in the range −2^(m−1) to
+2^(m−1) − 1 by the m-bit binary representation of X + 2^(m−1), which is always non-negative and
less than 2^m.
Example: Consider the 3-bit string (110)_2, whose unsigned value is 6_10. In excess-4 representation
(bias B = 2^(3−1) = 4) it represents the signed value (110 − 100)_2 = (010)_2 = +2_10.
The representation of a number in excess-2^(m−1) and in two's complement is identical except for
the sign bits, which are always opposite. Excess representations are most commonly used in
floating-point number systems.
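A small Python sketch (illustrative names) of excess-B encoding and decoding with the default bias 2^(m−1):

    def to_excess(value, m, bias=None):
        # Encode a signed integer as an m-bit excess-B string (default bias 2^(m-1)).
        bias = (1 << (m - 1)) if bias is None else bias
        return format(value + bias, f"0{m}b")

    def from_excess(bits, bias=None):
        # Decode an excess-B string by subtracting the bias from its unsigned value.
        bias = (1 << (len(bits) - 1)) if bias is None else bias
        return int(bits, 2) - bias

    print(to_excess(2, 3))       # 110 : +2 in excess-4
    print(from_excess("110"))    # 2
    print(to_excess(-4, 3))      # 000 : the most negative 3-bit value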
2.5 Two’s-Complement Addition and Subtraction
2.5.1 Addition
Addition is performed by doing the simple binary addition of the two numbers.
Examples:
  decimal   binary        decimal   binary
    +3       0011            +4      0100
  + +4     + 0100          + −7    + 1001
    +7       0111            −3      1101
If the result of an addition operation exceeds the range of the number system, overflow occurs.
Addition of two numbers with like signs can produce overflow, but addition of two numbers with
different signs never produces overflow.
Examples:
  decimal   binary        decimal   binary
    −3       1101            +7      0111
  + −6     + 1010          + +7    + 0111
    −9      10111           +14      1110
2.5.2 Subtraction
Subtraction is accomplished by first taking the two's complement of the subtrahend and then
adding it to the minuend.
  decimal   binary        decimal   binary
    +4       0100            +3      0011
  − +3     + 1101          − −4    + 0100
    +1      10001            +7      0111
Overflow in subtraction can be detected by examining the signs of the minuend and the complemented
subtrahend, in the same way as for addition. The extra carry bit can be ignored, provided the range is
not exceeded.
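The rules of this section can be sketched for a 4-bit two's complement system in Python (names are illustrative); subtraction is done by adding the two's complement of the subtrahend, and overflow is flagged when two like-signed operands produce a result of the opposite sign:

    WIDTH = 4
    MASK = (1 << WIDTH) - 1

    def decode(bits):
        # Interpret a WIDTH-bit pattern as a two's complement value.
        return bits - (1 << WIDTH) if bits >> (WIDTH - 1) else bits

    def add(a, b):
        # Add two signed values; the carry out of the MSB is discarded.
        raw = ((a & MASK) + (b & MASK)) & MASK
        result = decode(raw)
        overflow = (a >= 0) == (b >= 0) and (result >= 0) != (a >= 0)
        return result, overflow

    def subtract(a, b):
        # Subtract by adding the two's complement of the subtrahend.
        neg_b = decode(((~b) + 1) & MASK)
        return add(a, neg_b)

    print(add(-3, -6))      # (7, True)  : -9 lies outside the range -8..+7
    print(add(4, -7))       # (-3, False)
    print(subtract(4, 3))   # (1, False)
    print(subtract(3, -4))  # (7, False)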
2.6 One’s Complement Addition and Subtraction
The addition operation is performed by standard binary addition; if there is a carry out of the MSB,
1 is added to the result. This rule is called end-around carry. Similarly, subtraction is accomplished
by first taking the one's complement of the subtrahend and then proceeding as in addition. Since
zero has two representations, the addition of a number and its one's complement produces negative zero.
Example:
  decimal   binary
    +6       0110
  + −3     + 1100
    +3      10010
           +    1
             0011
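A Python sketch (illustrative names) of 4-bit one's complement addition with the end-around carry rule:

    WIDTH = 4
    MASK = (1 << WIDTH) - 1

    def ones_complement_add(a_bits, b_bits):
        # Add two WIDTH-bit one's complement patterns with end-around carry.
        total = a_bits + b_bits
        if total > MASK:                    # a carry out of the MSB...
            total = (total & MASK) + 1      # ...is added back into the LSB
        return total & MASK

    plus_6 = 0b0110
    minus_3 = 0b1100                        # one's complement of 0011
    print(format(ones_complement_add(plus_6, minus_3), "04b"))   # 0011 -> +3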
2.7 Binary Codes
When information is sent over long distances, the channel through which it is sent may distort the
transmitted information. It becomes necessary to modify the information into some form before
sending it and then to convert it back at the receiving end to recover the original information. This
process of altering the characteristics of information to make it suitable for the intended application
is called coding.
A set of n-bit strings in which different bit strings represent different numbers or other things is
called a code. A particular combination of n bit values is called a code word. An n-bit string can
normally take 2^n values, but a code need not contain 2^n valid code words.
2.7.1 Binary Coded Decimal Code
There are ten symbols in the decimal number system, 0 to 9, so 4 bits are required to represent them
in binary form. This scheme of encoding the digits 0 through 9 by their unsigned binary
representations is called the Binary Coded Decimal (BCD) code. The code words 1010 through 1111
are not used; they may be used for error-detection purposes. BCD is a weighted code because the
decimal value of a code word is the algebraic sum of fixed weights attached to each bit. The weights
of BCD are 8, 4, 2 and 1, so BCD is also called the 8421 code.
Example: (16.85)_10 = (0001 0110 . 1000 0101)_BCD
There are many possible sets of weights for writing a number in a BCD-style code to make the code
suitable for a specific application. One such code is the 2421 code, in which the code word for the
nine's complement of any digit may be obtained by complementing the individual bits of the code
word. The 2421 code is therefore called a self-complementing code.
Example: the 2421 code of 4_10 is 0100. Its complement is 1011, which is the 2421 code for 5_10 = (9 − 4)_10.
Excess-3 code is a self-complementing code which is not weighted. It is obtained by adding 0011 to
each 8421 code word. The code words follow a standard binary sequence, so this code can be used
in standard binary counters.
Example: the Excess-3 code of (0010)_2 is (0010 + 0011) = (0101)_2.
Addition of BCD digits is similar to adding 4-bit unsigned binary numbers, except that a
correction of 0110 must be made if the result exceeds 1001.
Example:
  decimal   binary
     8        1000
  +  8     +  1000
    16       10000
           +  0110
             10110 = (0001 0110)_BCD
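The correction rule can be sketched for a single BCD digit position in Python (illustrative names); a carry into the next BCD digit is produced whenever the correction applies:

    def bcd_add_digit(a, b, carry_in=0):
        # Add two BCD digits (0-9); apply the 0110 correction when the raw
        # 4-bit result exceeds 1001, producing a carry into the next digit.
        total = a + b + carry_in
        if total > 9:
            total += 6
            return total & 0b1111, 1
        return total, 0

    digit, carry = bcd_add_digit(8, 8)
    print(carry, format(digit, "04b"))      # 1 0110 -> (0001 0110)_BCD = 16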
The main advantage of the BCD code is the relative ease of converting to and from decimal.
Only the four-bit code for the decimal digits 0 through 9 need to be remembered. However, BCD
requires more bits to represent a decimal number of more than 1 digit because it does not use all
possible four-bit code words and is therefore somewhat inefficient.
2.7.2 Gray Code
There are many applications in which it is desirable to have a code in which adjacent code words
differ in only one bit. For example, in electromechanical applications such as braking systems, it is
sometimes necessary for an input sensor to produce a digital value that indicates a mechanical
position. Such codes are called unit-distance codes, and the Gray code is one of them. The 3-bit
Gray code words for the decimal numbers 0 to 7 are 000, 001, 011, 010, 110, 111, 101 and 100
respectively.
The most popular use of Gray codes is in electromechanical applications such as the position-sensing
transducer known as a shaft encoder. A shaft encoder consists of a disk on which concentric tracks
have alternate sectors with reflective surfaces while the other sectors have non-reflective surfaces.
The position is sensed from the light of a light-emitting diode reflected back to a detector. However,
there is a choice in arranging the reflective and non-reflective sectors. A 3-bit binary coded disk is
shown in Figure 2.1.
Figure 2.1: 3-bit binary coded shaft encoder
The straight binary code in Figure 2.1 can lead to errors because of mechanical imperfections.
When the code is changing from 001 to 010, a slight misalignment can cause a transient code of 011
to appear, so the electronic circuitry associated with the encoder will receive 001 -> 011 -> 010.
If the disk is patterned to give Gray code output, the possibility of wrong transient codes does not
arise, because adjacent codes differ in only one bit. For example, the code adjacent to 001 is 011;
even if there is a mechanical imperfection, the transient code will be either 001 or 011. The shaft
encoder using a 3-bit Gray code is shown in Figure 2.2.
Figure 2.2: 3-bit Gray coded shaft encoder disk
There are two convenient methods of constructing a Gray code with any desired number of bits.
The first method is based on the fact that the Gray code is a reflective code. The following rules
may be used to construct the Gray code:
1. A one-bit Gray code has the code words 0 and 1.
2. The first 2^n code words of an (n + 1)-bit Gray code equal the code words of an n-bit Gray
code, written in order with a leading 0 appended.
3. The last 2^n code words of an (n + 1)-bit Gray code equal the code words of an n-bit Gray code,
written in reverse order with a leading 1 appended.
However, this method requires that Gray codes of all bit lengths less than n also be generated as
part of generating the n-bit Gray code. The second method allows an n-bit Gray code word to be
derived directly from the corresponding n-bit binary code word (a code sketch follows the rules below):
1. The bits of an n-bit binary code or Gray code word are numbered from right to left, from 0
to n − 1.
2. Bit i of a Gray code word is 0 if bits i and i + 1 of the corresponding binary code word are the
same; otherwise bit i is 1. When i + 1 = n, bit n of the binary code word is taken to be 0.
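A Python sketch of this second method (binary to Gray is a shift-and-XOR; the inverse conversion is included for completeness; names are illustrative):

    def binary_to_gray(b):
        # Bit i of the Gray word is bit i XOR bit i+1 of the binary word
        # (bit n of the binary word is taken as 0), i.e. b XOR (b >> 1).
        return b ^ (b >> 1)

    def gray_to_binary(g):
        # Inverse mapping: accumulate XORs from the most significant end downwards.
        b = 0
        while g:
            b ^= g
            g >>= 1
        return b

    for n in range(8):                     # reproduces the 3-bit sequence listed earlier
        print(n, format(binary_to_gray(n), "03b"))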
2.7.3 Character Codes
Most of the information processed by a computer is non-numeric. The most common type of
non-numeric data is text, i.e. strings of characters. Codes that include alphabetic characters are
called character codes. Each character is represented by a bit string. The most commonly used
character code is ASCII (American Standard Code for Information Interchange). Each character in
ASCII is represented by a 7-bit string, yielding a total of 128 different characters. The code contains
the uppercase and lowercase alphabets, numerals, punctuation and various non-printing control characters.
Example: the ASCII codes for 'A' = 1000001, 'a' = 1100001, '?' = 0111111, etc.
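These code words can be checked directly, for example with Python's ord():

    for ch in "Aa?":
        print(ch, format(ord(ch), "07b"))   # A 1000001, a 1100001, ? 0111111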
Chapter 3
Boolean Algebra
In 1854, George Boole introduced a systematic treatment of logic and developed for this purpose an
algebraic system, now called Boolean algebra. Boolean algebra is a system of mathematical logic. It
is an algebraic system consisting of the set of elements {0, 1}, two binary operators called OR and
AND, and one unary operator, NOT. It is the basic mathematical tool in the analysis and synthesis
of switching circuits, and a way to express logic functions algebraically.
3.1 Rules in Boolean Algebra
The important rules used in Boolean algebra are as follows:
1. A variable can have only two values: binary 1 for HIGH and binary 0 for LOW.
2. The complement of a variable is represented by an over-bar. Thus, the complement of variable B is
represented as ¯B: if B = 0, then ¯B = 1, and if B = 1, then ¯B = 0.
3. The OR of variables is represented by a plus (+) sign between them. For example, the OR of A, B
and C is represented as (A + B + C).
4. The logical AND of two or more variables is represented by writing a dot between them, such
as (A.B.C). Sometimes the dot may be omitted and the expression written as (ABC).
3.2 Postulates
The postulates of a mathematical system form the basic assumptions from which it is possible to
deduce the rules, theorems and properties of the system. The most common postulates used to
formulate various algebraic structures are as follows.
1. Associative law-1: A + (B + C) = (A + B) + C
This law states that ORing several variables gives the same result regardless of the grouping
of the variables; i.e. for three variables, (A OR B) ORed with C is the same as A ORed with (B
OR C).
Associative law-2: (AB)C = A(BC)
The associative law of multiplication states that it makes no difference in what order the
variables are grouped when ANDing several variables. For three variables, (A AND B) ANDed
with C is the same as A ANDed with (B AND C).
2. Commutative law-1: A + B = B + A
This states that the order in which the variables are ORed makes no difference to the output.
Therefore (A OR B) is the same as (B OR A).
Commutative law-2: AB = BA
The commutative law of multiplication states that the order in which variables are ANDed
makes no difference to the output. Therefore (A AND B) is the same as (B AND A).
3. Distributive law: A(B + C) = AB + AC
The distributive law states that ANDing a single variable with the OR of several variables is
equivalent to ANDing that variable with each of the several variables and then ORing the products.
4. Identity Law-1: A +0 = A
A variable ORed with 0 is always equal to the variable.
Identity Law-2: A.1 = A
A variable ANDed with 1 is always equal to the variable.
5. Inversion law: ¯¯A = A
This law uses the NOT operation. The inversion law states that double inversion of a variable
results in the original variable itself.
6. Closure
A set S is closed with respect to a binary operator if, for every pair of elements of S, the binary
operator specifies a rule for obtaining a unique element of S. For example, the set of natural
numbers N = {1, 2, 3, 4, ..., n} is closed with respect to the binary operator + by the rules of
arithmetic addition, since for any a, b ∈ N there is a unique c ∈ N such that a + b = c. The set
of natural numbers is not closed with respect to the binary operator − by the rules of arithmetic
subtraction, because 2 − 3 = −1 and 2, 3 ∈ N, but (−1) ∉ N.
7. Consensus theorem-1: AB + ¯AC + BC = AB + ¯AC
The term (BC) is called the consensus term and is redundant. The consensus term is formed from a
pair of terms in which a variable (A) and its complement (¯A) are present: it is obtained by ANDing
the two terms and leaving out the selected variable and its complement.
The consensus of AB and ¯AC is BC.
Proof:
AB + ¯AC + BC
= AB + ¯AC + BC(A + ¯A)
= AB + ¯AC + ABC + ¯ABC
= AB(1 + C) + ¯AC(1 + B)
= AB + ¯AC
This theorem can be extended to any number of variables.
Similarly, for the dual form, (A + B).(¯A + C).(B + C) = (A + B).(¯A + C).
8. Absorptive Law
This law enables a reduction in a complicated expression to a simpler one by absorbing like-
terms.
A +(A.B) = A (OR Absorption Law)
A(A +B) = A (AND Absorption Law)
9. Idempotence law: An input that is ANDed or ORed with itself is equal to that input.
A + A = A: A variable ORed with itself is always equal to the variable.
A.A = A: A variable ANDed with itself is always equal to the variable.
10. Adjacency law:
(A +B)(A + ¯B) = A
A.B + A ¯B = A
11. Duality: In a positive logic system, the more positive of two voltage levels is represented as 1 and
the more negative as 0, whereas in a negative logic system the more positive level is represented as 0
and the more negative as 1. The distinction between the positive and negative logic systems
is that an OR gate in the positive logic system becomes an AND gate in the negative logic system and
vice versa. The duality theorem applies when we change one logic system to the other: '0' becomes
'1', '1' becomes '0', 'AND' becomes 'OR' and 'OR' becomes 'AND'. The variables are not
complemented in this process.
Given expression Dual
¯0 = 1 ¯1 = 0
0.1 = 0 1+0 = 1
0.0 = 0 1+1 = 1
1.1 = 1 0+0 = 0
A.0 = 0 A +1 = 1
A.A = A A + A = A
A.1 = A A +0 = A
A. ¯A = 0 A + ¯A = 1
A.B = B.A A +B = B + A
A.(B +C) = AB + AC A +BC = (A +B)(A +C)
12. DeMorgan's Theorem:
DeMorgan suggested two theorems that form an important part of Boolean algebra. In
equation form, they are:
(a) ¯(A.B) = ¯A + ¯B
The complement of a product is equal to the sum of the complements.
(b) ¯(A + B) = ¯A.¯B
The complement of a sum is equal to the product of the complements.
Both theorems can be checked exhaustively; see the verification sketch after this list.
13. Redundant literal rule
This rule states that ORing a variable with the AND of that variable's complement and another
variable equals the OR of the two variables, i.e.
A + ¯AB = A + B
Another theorem based on this rule is
A(¯A + B) = AB
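Because each variable can take only the values 0 and 1, identities such as the consensus theorem, DeMorgan's theorems and the redundant literal rule can be verified exhaustively. A small Python sketch (illustrative):

    from itertools import product

    def check(name, lhs, rhs, variables=2):
        # Compare both sides of an identity for every combination of 0/1 inputs.
        ok = all(lhs(*bits) == rhs(*bits) for bits in product((0, 1), repeat=variables))
        print(name, "holds" if ok else "FAILS")

    check("consensus", lambda a, b, c: (a & b) | ((1 - a) & c) | (b & c),
                       lambda a, b, c: (a & b) | ((1 - a) & c), variables=3)
    check("DeMorgan (a)", lambda a, b: 1 - (a & b), lambda a, b: (1 - a) | (1 - b))
    check("DeMorgan (b)", lambda a, b: 1 - (a | b), lambda a, b: (1 - a) & (1 - b))
    check("redundant literal", lambda a, b: a | ((1 - a) & b), lambda a, b: a | b)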
3.3 Theorems of Boolean Algebra
Theorem-1: x + x = x
x + x
= (x + x).1
= (x + x)(x + ¯x)
= x + x ¯x
= x +0
= x
Theorem-2: x + 1 = 1
x + 1
= 1.(x + 1)
= (x + ¯x)(x + 1)
= x + ¯x.1
= x + ¯x
= 1
x.0 = 0 by duality
Theorem-3: x + xy = x
x + xy = x.1+ xy
= x(1+ y)
= x(y +1)
= x.1
= x
x(x+y) = x by duality
Chapter 4
Logic Gate Families
Digital IC gates are classified not only by their logic operation, but also by the specific logic circuit
family to which they belong. Each logic family has its own basic electronic circuit upon which
more complex digital circuits and functions are developed.
The first logic circuits were developed using discrete circuit components. Using advanced
techniques, these complex circuits can be miniaturised and produced on a small piece of semiconductor
material such as silicon. Such a circuit is called an integrated circuit (IC). Nowadays, all digital
circuits are available in IC form. While producing digital ICs, different circuit configurations and
manufacturing technologies are used; each results in a specific logic family. Devices within a logic
family designed in this way have identical electrical characteristics such as supply voltage range,
speed of operation, power dissipation, noise margin etc. A logic family is designed around basic
electronic components such as resistors, diodes, transistors and MOSFETs, or combinations of these
components. Accordingly, logic families are classified as per the construction of their basic logic circuits.
4.1 Classification of logic families
Logic families are classified according to the principal type of electronic components used in their
circuitry, as shown in Figure 4.1. They are:
Bipolar ICs: which use diodes and bipolar junction transistors (BJTs).
Unipolar ICs: which use MOSFETs.
Figure 4.1: Logic family classification
Out of the logic families mentioned above, RTL and DTL are not very useful due to some inherent
disadvantages, while TTL, ECL and CMOS are widely used in many digital circuit design applications.
4.2 Transistor-Transistor Logic(TTL)
TTL is short for Transistor-Transistor Logic. As the name suggests, it refers to digital integrated
circuits that employ logic gates consisting primarily of bipolar transistors. The most basic TTL
circuit is an inverter: a single transistor with its emitter grounded and its collector tied to VCC
through a pull-up resistor, with the output taken from its collector. When the input is high (logic 1),
the transistor is driven into saturation, the drop across the collector and emitter is negligible, and
therefore the output voltage is low (logic 0).
A two-input NAND gate can be realised as shown in Figure 4.2. When at least one of the inputs is at
logic 0, the corresponding emitter-base junction of the multiple-emitter transistor TA is forward
biased whereas its base-collector junction is reverse biased. Transistor TA is then ON, steering
current away from the base of TB, so TB is OFF. Almost all of Vcc drops across the OFF transistor
and the output voltage is high (logic 1). When both Input1 and Input2 are at Vcc, both base-emitter
junctions are reverse biased and the base-collector junction of transistor TA is forward biased.
Therefore transistor TB is ON, making the output logic 0, i.e. near ground.
Figure 4.2: A 2-input TTL NAND gate
However, most TTL circuits use a totem-pole output circuit instead of the pull-up resistor, as shown
in Figure 4.3. It has a Vcc-side transistor (TC) sitting on top of the GND-side output transistor (TD).
The emitter of TC is connected to the collector of TD through a diode, and the output is taken from
the collector of transistor TD. TA is a multiple-emitter transistor having only one collector and base;
each base-emitter junction behaves just like an independent diode. Applying a logic '1' input voltage
to both emitter inputs of TA reverse-biases both base-emitter junctions, causing current to flow
through RA into the base of TB, which is driven into saturation. When TB starts conducting, the
stored base charge of TC dissipates through the TB collector, driving TC into cut-off. At the same
time, current flows into the base of TD, causing it to saturate; its collector-emitter voltage is then
0.2 V, so the output is 0.2 V, i.e. at logic 0. In addition, since TC is in cut-off, no current flows from
Vcc to the output, keeping it at logic '0'. Since TD is in saturation, its base-emitter voltage is about
0.8 V; therefore the voltage at the collector of transistor TB is 0.8 V + V_CE(sat) (the saturation
voltage between the collector and emitter of a transistor, about 0.2 V) = 1 V. TB always provides
complementary inputs to the bases of TC and TD, so that TC and TD always operate in opposite
regions, except during momentary transitions between regions. The output impedance is low
regardless of the logic state, because one transistor (either TC or TD) is always ON. When at least
one of the inputs is at 0 V, the corresponding emitter-base junction of transistor TA is forward
biased whereas the base-collector junction is reverse biased; transistor TB remains off and therefore
the output voltage is close to Vcc. Since the base voltage of transistor TC is then near Vcc, this
transistor is ON and the output is pulled towards Vcc, while the input to transistor TD is low, so it
remains off.
Figure 4.3: A 2-input TTL NAND Gate with a totem pole output stage
TTL overcomes the main problem associated with DTL (Diode-Transistor Logic), i.e. lack of speed.
The input to a TTL circuit is always through the emitter(s) of the input transistor, which exhibits a
low input resistance. The base of the input transistor, on the other hand, is connected to Vcc, which
causes the input transistor to pass a current of about 1.6 mA when the input voltage at the emitter(s)
is logic '0'. Letting a TTL input 'float' (left unconnected) will usually make it go to logic '1'. However,
such a state is vulnerable to stray signals, which is why it is good practice to tie unused TTL inputs
to Vcc using 1 kΩ pull-up resistors.
4.3 CMOS Logic
The term 'Complementary Metal-Oxide-Semiconductor (CMOS)' refers to the device technology for
fabricating integrated circuits using both n-channel and p-channel MOSFETs. Today, CMOS is the
major technology for manufacturing digital ICs and is widely used in microprocessors, memories
and other digital ICs. The input to a CMOS circuit is always the gate of the input MOS transistor.
The gate offers a very high resistance because it is isolated from the channel by an oxide layer. The
current flowing into a CMOS input is virtually zero, and the device is operated mainly by the voltage
applied to the gate, which controls the conductivity of the device channel. The low input currents
required by a CMOS circuit result in lower power consumption, which is the major advantage of
CMOS over TTL. In fact, power consumption in a CMOS circuit occurs only when it is switching
between logic levels. Moreover, CMOS circuits are easy and cheap to fabricate, resulting in a higher
packing density than their bipolar counterparts. CMOS circuits are quite vulnerable to ESD
(Electro-Static Discharge) damage, mainly by gate-oxide punch-through from high ESD voltages.
Therefore, proper handling of CMOS ICs is required to prevent ESD damage, and these devices are
generally equipped with protection circuits.
Figure 4.4 shows the circuit symbols of an n-channel and a p-channel transistor. An nMOS transistor
is 'ON' if the gate voltage is at logic '1', or more precisely if the potential between the gate and the
source terminals (VGS) is greater than the threshold voltage VT. An 'ON' transistor implies the
existence of a continuous channel between the source and the drain terminals. On the other hand,
an 'OFF' nMOS transistor indicates the absence of a connecting channel between the source and the
drain. Similarly, a pMOS transistor is 'ON' if the potential VGS is lower (more negative) than the
threshold voltage VT.
Figure 4.4: Symbol of n-channel and p-channel transistors
Figure 4.5 shows a CMOS inverter circuit (NOT gate), where a p-channel and an n-channel MOS
transistor are connected in series. A logic '1' input voltage turns TP (p-channel) off and TN
(n-channel) on, pulling the output to logic '0'. A logic '0' input voltage, on the other hand, turns TP
on and TN off, pulling the output to near VCC, i.e. logic '1'. The p-channel and n-channel MOS
transistors in the circuit are complementary and are always in opposite states: when one of them is
ON, the other is OFF.
Figure 4.5: CMOS inverter circuit
Circuit (a) in Figure 4.6 shows a 2-input CMOS NAND gate where TN1 and TP1 share the input A and
TN2 and TP2 share the input B. When both inputs A and B are high, TN1 and TN2 are ON and TP1
and TP2 are OFF; therefore the output is at logic 0. On the other hand, when both input A and input
B are logic 0, TN1 and TN2 are OFF and TP1 and TP2 are ON; hence the output is at VCC, logic 1. A
similar situation arises when any one of the inputs is logic 0: one of the bottom series transistors,
either TN1 or TN2, is OFF, forcing the output to logic 1.
Circuit (b) in Figure 4.6 shows a 2-input CMOS NOR gate where TN1 and TP1 share the input A and
TN2 and TP2 share the input B. When both A and B are logic 0, TN1 and TN2 are OFF and TP1 and
TP2 are ON; therefore the output is at logic 1. On the other hand, when at least one of the inputs A
or B is logic 1, the corresponding nMOS transistor is ON, making the output logic 0. The output is
disconnected from VCC in this case because at least one of TP1 or TP2 is OFF. Thus the CMOS
circuit acts as a NOR gate.
Figure 4.6: CMOS NAND and NOR circuit
If one uses a CMOS IC for reduced current consumption and a TTL IC feeds the CMOS chip, then it
is necessary either to provide voltage translation or to use one of the mixed CMOS/TTL devices.
The mixed TTL/CMOS devices are CMOS devices which happen to have TTL input trigger levels,
but they are CMOS ICs. Let us consider the TTL-to-CMOS interface briefly. TTL devices need a
supply voltage of 5 V, while CMOS devices can use any supply voltage from 3 V to 10 V. One approach
to TTL/CMOS interfacing is to use a 5 V supply for both the TTL driver and the CMOS load. In this
case, the worst-case TTL output voltages are almost compatible with the worst-case CMOS input
voltages. There is no problem with the TTL low-state window (0 to 0.35 V) because it fits inside the
CMOS low-state window (0 to 1.3 V). This means the CMOS load always interprets the TTL low-state
drive as low.
4.4 Emitter Coupled Logic(ECL)
Emitter-coupled logic (ECL) gates use a differential amplifier configuration at the input stage. ECL
is a non-saturated logic, which means that transistors are prevented from going into deep saturation,
thus eliminating storage delays; in other words, a transistor is switched ON, but not completely ON.
This is the fastest logic family. ECL circuits operate from a negative 5.2 V supply, and the voltage
level for HIGH is −0.9 V and for LOW is −1.7 V. The biggest problem with ECL is therefore its poor
noise margin.
4.5 Characteristics of logic families
The parameters which are used to characterize different logic families are as follows:
1. HIGH-level input current (I_IH): This is the current flowing into (taken as positive) or out of
(taken as negative) an input when a HIGH-level input voltage equal to the minimum HIGH-level
output voltage specified for the family is applied. In the case of bipolar logic families such as TTL,
the circuit design is such that this current flows into the input pin and is therefore specified as
positive. In the case of CMOS logic families, it could be either positive or negative, and only an
absolute value is specified.
2. LOW-level input current (I_IL): This is the maximum current flowing into (taken as positive) or
out of (taken as negative) the input of a logic function when the voltage applied at the input equals
the maximum LOW-level output voltage specified for the family. In the case of bipolar logic families
such as TTL, the circuit design is such that this current flows out of the input pin and is therefore
specified as negative. In the case of CMOS logic families, it could be either positive or negative, and
only an absolute value is specified.
3. HIGH-level output current (I_OH): This is the maximum current flowing out of an output when
the input conditions are such that the output is in the logic HIGH state. It is normally shown as a
negative number. It indicates the current-sourcing capability of the output. The magnitude of I_OH
determines the number of inputs the logic function can drive when its output is in the logic HIGH state.
4. LOW-level output current (I_OL): This is the maximum current flowing into the output pin of a
logic function when the input conditions are such that the output is in the logic LOW state. It
indicates the current-sinking capability of the output. The magnitude of I_OL determines the number
of inputs the logic function can drive when its output is in the logic LOW state.
5. HIGH-level input voltage (V_IH): This is the minimum voltage level that needs to be applied at the
input to be recognised as a legal HIGH level for the specified family. For the standard TTL family, a
2 V input voltage is a legal HIGH logic state.
6. LOW-level input voltage (V_IL): This is the maximum voltage level applied at the input that is
recognised as a legal LOW level for the specified family. For the standard TTL family, an input
voltage of 0.8 V is a legal LOW logic state.
7. HIGH-level output voltage (V_OH): This is the minimum voltage on the output pin of a logic
function when the input conditions establish logic HIGH at the output for the specified family. In
the case of the standard TTL family of devices, the HIGH-level output voltage can be as low as 2.4 V
and still be treated as a legal HIGH logic state. It may be mentioned here that, for a given logic
family, the V_OH specification is always greater than the V_IH specification to ensure
output-to-input compatibility when the output of one device feeds the input of another.
8. LOW-level output voltage (V_OL): This is the maximum voltage on the output pin of a logic
function when the input conditions establish logic LOW at the output for the specified family. In the
case of the standard TTL family of devices, the LOW-level output voltage can be as high as 0.4 V and
still be treated as a legal LOW logic state. It may be mentioned here that, for a given logic family,
the V_OL specification is always smaller than the V_IL specification to ensure output-to-input
compatibility when the output of one device feeds the input of another.
9. Fan out: This specifies the number of standard loads that the output of a gate can drive without
affecting its normal operation. A standard load is usually defined as the amount of current needed
by an input of another gate in the same family. Sometimes the term loading is used instead of
fan-out. The term is derived from the fact that the output of a gate can supply only a limited amount
of current, above which it ceases to operate properly and is said to be overloaded. The output of a
gate is usually connected to the inputs of similar gates; each input draws a certain amount of current
from the driving gate, so each additional connection adds to the load on the gate. Loading rules are
listed for each family of standard digital circuits, specifying the maximum amount of loading
allowed for each output in the circuit.
10. Fan in: This is the number of inputs of a logic gate. It is decided by the input current-sinking
capability of the logic gate.
11. Power dissipation: This is the power required to operate the gate. It is expressed in milliwatts
(mW) and represents the actual power dissipated in the gate, i.e. the power delivered to the gate from
the power supply. The total power dissipated in a digital system is the sum of the power dissipated
in each digital IC.
12. Propagation delay (t_P): The propagation delay is the time delay between the occurrence of a
change in logic level at the input and its appearance at the output. It is the time delay between the
specified voltage points on the input and output waveforms. Propagation delays are separately
defined for LOW-to-HIGH and HIGH-to-LOW transitions at the output. In addition, enable and
disable time delays are defined for transitions between the high-impedance state and the defined
logic LOW or HIGH states. Propagation delay is expressed in nanoseconds. Signals travelling from
the input to the output of a system pass through a number of gates, and the propagation delay of the
system is the sum of the propagation delays of all these gates. Since speed of operation is important,
each gate must have a small propagation delay and the digital circuit must have the minimum
number of gates between input and output.
Figure 4.7: Propagation delay
Propagation delay parameters
(a) Propagation delay (t_pLH): This is the time delay between the specified voltage points on the
input and output waveforms when the output changes from LOW to HIGH.
(b) Propagation delay (t_pHL): This is the time delay between the specified voltage points on the
input and output waveforms when the output changes from HIGH to LOW.
13. Noise margin: This is the maximum noise voltage added to the input signal of a digital circuit
that does not cause an undesirable change in the circuit output. There are two types of noise to be
considered here.
• DC noise: This is caused by a drift in the voltage levels of a signal.
• AC noise: This is caused by random pulses that may be created by other switching signals.
Figure 4.8: Noise margin
Noise margin is a quantitative measure of the noise immunity offered by a logic family. When the
output of a logic device feeds the input of another device of the same family, a legal HIGH logic state
at the output of the feeding device should be treated as a legal HIGH logic state by the input of the
device being fed. Similarly, a legal LOW logic state of the feeding device should be treated as a legal
LOW logic state by the device being fed. The legal HIGH and LOW voltage levels for a given logic
family are different for outputs and inputs. Figure 4.8 shows the generalised case of legal HIGH and
LOW voltage levels for outputs and inputs. As we can see from the two diagrams, there is a
disallowed range of output voltage levels from V_OL(max) to V_OH(min) and an indeterminate range
of input voltage levels from V_IL(max) to V_IH(min). Since V_IL(max) is greater than V_OL(max),
the LOW output state can tolerate a positive voltage spike equal to (V_IL(max) − V_OL(max)) and
still be a legal LOW input. Similarly, V_OH(min) is greater than V_IH(min), and the HIGH output
state can tolerate a negative voltage spike equal to (V_OH(min) − V_IH(min)) and still be a legal
HIGH input. Here, (V_IL(max) − V_OL(max)) and (V_OH(min) − V_IH(min)) are known as the
LOW-level and HIGH-level noise margins respectively (a numerical sketch follows this list).
14. Cost: The last consideration, and often the most important one, is the cost of a given logic
family. A first approximate cost comparison can be obtained by pricing a common function such as
a dual four-input or quad two-input gate. But the cost of a few types of gates alone cannot indicate
the cost of the total system. The total system cost is decided not only by the cost of the ICs but also
by the cost of the printed wiring board on which the ICs are mounted, assembly of the circuit board,
testing, programming the programmable devices, power supply, documentation, procurement,
storage etc. In many instances, the cost of the ICs can become a less important component of the
total cost of the system.
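As a numerical illustration of the noise margins, using the worst-case standard TTL levels quoted in the list above (a Python sketch, not a datasheet calculation):

    # Worst-case standard TTL levels quoted above (volts)
    V_OH_min, V_IH_min = 2.4, 2.0
    V_OL_max, V_IL_max = 0.4, 0.8

    high_noise_margin = V_OH_min - V_IH_min   # spike a legal HIGH output can tolerate
    low_noise_margin = V_IL_max - V_OL_max    # spike a legal LOW output can tolerate

    print(f"NM_H = {high_noise_margin:.1f} V, NM_L = {low_noise_margin:.1f} V")  # 0.4 V each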
4.6 Three State Outputs
Standard logic gate outputs have only two states, high and low. The output is effectively connected
either to +V or to ground, which means there is always a low-impedance path to the supply rails.
Certain applications require a logic output that can be "turned off" or disabled, meaning that the
output is disconnected (a high-impedance state). This is the three-state output; it can be
implemented as a stand-alone unit (a buffer) or as part of another function's output. The circuit is
called tri-state because it has three output states: high (1), low (0) and high impedance (Z).
Figure 4.9: Three-state output logic circuit
In the logic circuit of Figure 4.9, there is an additional control input to the digital buffer, called the
enable input and denoted by E. When E is low, the output is disconnected from the input circuit.
When E is high, the switch is closed and the circuit behaves like a digital buffer. All these states are
listed in the truth table, and the symbol of a tri-state buffer is shown in Figure 4.10.
Figure 4.10: Tri-state buffer
4.7 Open Collector (Drain) Outputs
An open collector/open drain is a common type of output found on many integrated circuits.
Instead of outputting a signal of a specific voltage or current, the output signal is applied to the base
of an internal NPN transistor whose collector is externalized (open) on a pin of the IC. The emitter
of the transistor is connected internally to the ground pin. If the output device is a MOSFET, the
output is called open drain and it functions in a similar way. A simple schematic of an open-collector
output of an IC is shown in Figure 4.11.
Figure 4.11: Open collector output circuit of an IC
Open-drain devices sink current in their low-voltage active (logic 0) state and are high impedance
(no current flows) in their high-voltage non-active (logic 1) state. These devices usually operate with
an external pull-up resistor that holds the signal line high until a device on the wire sinks enough
current to pull the line low. Many devices can be attached to the signal wire. If all devices attached
to the wire are in their non-active state, the pull-up will hold the wire at a high voltage. If one or
more devices are in the active state, the signal wire voltage will be low.
An open-collector/open-drain signal wire can also be bi-directional, meaning that a device can both
output and input a signal on the wire at the same time. In addition to controlling the state of its pin
that is connected to the signal wire (active or non-active), a device can also sense the voltage level of
the signal wire. Although the output of an open-collector/open-drain device may be in the
non-active (high) state, the wire attached to the device may be in the active (low) state due to the
activity of another device attached to the wire. The open-drain output configuration turns off all
pull-ups and only drives the pull-down transistor of the port pin when the port latch contains logic
'0'. To be used as a logic output, a port configured in this manner must have an external pull-up,
typically a resistor tied to VDD. The pull-down for this mode is the same as for the
quasi-bidirectional mode. To use an analogue input pin as a high-impedance digital input while a
comparator is enabled, that pin should be configured in open-drain mode and the corresponding
port register bit should be set to 1.
Chapter 5
Logic Functions
In electronics, a logic gate is an idealized or physical device implementing a Boolean function, i.e.
it performs a logical operation on one or more binary inputs and produces a single binary output.
A logic gate is an elementary building block of a digital circuit. Most logic gates have two inputs
and one output. At any given moment, every terminal is in one of the two binary conditions low(0)
or high(1), represented by different voltage levels. The logic state of a terminal can, and generally
does, change often, as the circuit processes data. In most logic gates, the low state is approximately
zero volts(0V), while the high state is approximately five volts positive(+5V). There are seven basic
logic gates: AND, OR, XOR, NOT, NAND, NOR, and XNOR.
AND gate: The AND gate is so named because, if 0 is called "false" and 1 is called "true," the gate
acts in the same way as the logical "and" operator. The Figure 5.1 shows the circuit symbol and
logic combinations for an AND gate. In the symbol, the input terminals are at left and the output
terminal is at right. The output is "true" when both inputs are "true." Otherwise, the output is
"false."
Figure 5.1: AND gate symbol and truth table
OR gate: The OR gate gets its name from the fact that it behaves similar to the logical inclusive
"or." The output is "true" if either or both of the inputs are "true." If both inputs are "false," then
the output is "false."
Figure 5.2: OR gate symbol and truth table
XOR gate: The XOR(exclusive-OR) gate acts in the same way as the logical "either/or." The output
is "true" if either, but not both, of the inputs are "true." The output is "false" if both inputs are
"false" or if both inputs are "true". Another way of looking at this circuit is to observe that the
output is 1 if the inputs are different, but 0 if the inputs are the same.
Figure 5.3: XOR gate symbol and truth table
NOT gate: A logical inverter, sometimes called a NOT gate to differentiate it from other types of
electronic inverter devices, has only one input. It reverses the logic state.
Figure 5.4: NOT gate symbol and truth table
NAND gate: The NAND gate operates as an AND gate followed by a NOT gate. It acts in the
manner of the logical operation "and" followed by negation. The output is "false" if both inputs
are "true". Otherwise, the output is "true".
Figure 5.5: NAND gate symbol and truth table
NOR gate: The NOR gate is a combination of OR gate followed by an inverter. Its output is "true"
if both inputs are "false". Otherwise, the output is "false".
Figure 5.6: NOR gate symbol and truth table
XNOR gate: The XNOR (exclusive-NOR) gate is a combination of an XOR gate followed by an
inverter. Its output is "true" if the inputs are the same, and "false" if the inputs are different.
Figure 5.7: XNOR gate symbol and truth table
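These gates can be modelled directly as Boolean functions of 0s and 1s. A Python sketch (illustrative) that reproduces each gate's truth table:

    from itertools import product

    gates = {
        "AND":  lambda a, b: a & b,
        "OR":   lambda a, b: a | b,
        "XOR":  lambda a, b: a ^ b,
        "NAND": lambda a, b: 1 - (a & b),
        "NOR":  lambda a, b: 1 - (a | b),
        "XNOR": lambda a, b: 1 - (a ^ b),
    }

    print("NOT: 0 -> 1, 1 -> 0")                    # the single-input gate
    print("a b  " + "  ".join(gates))
    for a, b in product((0, 1), repeat=2):
        row = "  ".join(str(fn(a, b)).center(len(name)) for name, fn in gates.items())
        print(f"{a} {b}  {row}")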
Using combinations of logic gates, complex operations can be performed. In theory, there is
no limit to the number of gates that can be arrayed together in a single device. But in practice,
there is a limit to the number of gates that can be packed into a given physical space. Arrays of
logic gates are found in digital integrated circuits(ICs). As IC technology advances, the required
physical volume for each individual logic gate decreases and digital devices of the same or smaller
size become capable of performing ever-more-complicated operations at ever-increasing speeds.
Another type of mathematical identity, called a property or a law, describes how differing variables
relate to each other in a system of numbers. One of these properties is known as the commutative
property, and it applies equally to addition and multiplication. In essence, the commutative
property tells us we can reverse the order of variables that are either added together or multiplied
together without changing the truth of the expression.
Figure 5.8: Commutative property of addition
Figure 5.9: Commutative property of multiplication
Along with the commutative property of addition and multiplication, we have the associative
property, again applying equally well to addition and multiplication. This property tells us we can
associate groups of added or multiplied variables together with parentheses without altering the
truth of the equations.
Figure 5.10: Associative property of addition
5.1. TRUTH TABLE DESCRIPTION OF LOGIC FUNCTIONS 29
Figure 5.11: Associative property of multiplication
Lastly, we have the distributive property, illustrating how to expand a Boolean expression formed
by the product of a sum, and in reverse shows us how terms may be factored out of Boolean sums-
of-products.
Figure 5.12: Distributive property
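Because each variable can only take the values 0 and 1, these properties can be confirmed by simply checking every combination of inputs. The short Python sketch below does exactly that, using | for Boolean addition and & for Boolean multiplication.

# Illustrative check of the commutative, associative and distributive properties
# by exhaustively testing all 0/1 assignments.
from itertools import product

for A, B, C in product((0, 1), repeat=3):
    assert (A | B) == (B | A)                    # commutative (addition)
    assert (A & B) == (B & A)                    # commutative (multiplication)
    assert ((A | B) | C) == (A | (B | C))        # associative (addition)
    assert ((A & B) & C) == (A & (B & C))        # associative (multiplication)
    assert (A & (B | C)) == ((A & B) | (A & C))  # distributive
print("All three properties hold for every combination of inputs.")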
5.1 Truth Table Description of Logic Functions
The truth table is a tabular representation of a logic function. It gives the value of the function for
all possible combinations of values of the variables. If there are three variables in a given function,
there are 2^3 = 8 combinations of these variables. For each combination, the function takes either
1 or 0. These combinations are listed in a table, which constitutes the truth table for the given
function. Consider the expression,
F(A,B) = A.B + A. ¯B (5.1)
The truth table for this function is given by
Figure 5.13: Truth table for the expression 5.1
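A truth table such as this one can be generated mechanically by evaluating the expression for every combination of inputs. The following Python sketch enumerates expression 5.1, writing the complement of B as (1 - B).

# Sketch: enumerating the truth table of F(A, B) = A.B + A.B' (expression 5.1).
print(" A  B  F")
for A in (0, 1):
    for B in (0, 1):
        F = (A & B) | (A & (1 - B))
        print(f" {A}  {B}  {F}")
# The printed table shows that F is asserted exactly when A is asserted,
# i.e. the expression simplifies to F = A.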
30 CHAPTER 5. LOGIC FUNCTIONS
The information contained in the truth table and in the algebraic representation of the function
is the same. The term truth table came into usage long before Boolean algebra came to be associ-
ated with digital electronics. Boolean functions were originally used to establish the truth or falsehood
of statements. When a statement is true the symbol "1" is associated with it, and when it is false "0" is
associated with it. This usage got extended to the variables associated with digital circuits. However,
the adjectives "true" and "false" are not appropriate for variables encountered in digital systems.
Variables in digital systems are indicative of actions; typical examples of such signals are "CLEAR",
"LOAD", "SHIFT", "ENABLE" and "COUNT". Therefore, it is more appropriate to state that a variable is
ASSERTED or NOT ASSERTED than to say that it is TRUE or FALSE. When a variable is asserted, the
intended action takes place, and when it is not asserted the intended action does not take place. In
this context, we associate "1" with the assertion of a variable and "0" with the non-assertion of that
variable. Consider the
logic function,
F(A,B) = A.B + A. ¯B (5.2)
It should now be read as, F is asserted when A and B are asserted or A is asserted and B is not
asserted. This convention of using assertion and non-assertion with the logic variables will be
used in all the learning units of digital systems. The term truth table will continue to be used for
historical reasons, but we understand it as an input-output table associated with a logic function,
rather than as something concerned with the establishment of truth.
As the number of variables in a given function increases, the number of entries in the truth
table increases exponentially. For example, a five variable expression would require 32 entries and
a six-variable function would require 64 entries. Therefore, it becomes inconvenient to prepare
the truth table if the number of variables increases beyond four. However, a simple artefact may
be adopted. A truth table can have entries only for those terms for which the value of the function
is "1", without loss of any information. This is particularly effective when the function has only a
small number of terms. Consider the Boolean function with six variables.
F = A.B.C. ¯D. ¯E + A. ¯B.C. ¯D.E + ¯A. ¯B.C.D.E + A. ¯B. ¯C.D.E (5.3)
The truth table will have only four entries rather than 64 and the representation of this function is
as shown in the Figure 5.14.
Figure 5.14: Truth table for the expression 5.3
The truth table is a very effective tool in working with digital circuits, especially when the number of
variables in a function is small (less than or equal to five).
5.2. CONVERSION OF ENGLISH SENTENCES TO LOGIC FUNCTIONS: 31
5.2 Conversion of English Sentences to Logic Functions:
Some of the problems that can be solved using digital circuits are expressed through one or more
sentences.
For example, consider the sentence "Lift starts moving only if doors are closed and button is
pressed".
Break the sentence into phrases:
F = lift starts moving(1) or else(0)
A = doors are closed(1) or else(0)
B = floor button is pressed(1) or else(0)
So F = A.B
In complex cases, we need to define the variables properly and draw a truth table before the logic func-
tion can be written as an expression. There may be some vagueness that should be resolved.
For example, "'A' will go for a workshop if and only if 'B' is going and the subject of the workshop is
important from a career perspective, or if the venue is interesting and there is no academic com-
mitment".
Defining F = 'A' goes to the workshop, A = 'B' is going, B = the subject is important, C = the venue is
interesting and D = there is an academic commitment, the function is
F = A.B +C. ¯D (5.4)
5.3 Boolean Function Representation
The use of switching devices like transistors gives rise to a special case of Boolean algebra called
switching algebra. In switching algebra, all the variables assume one of two values, 0 and 1. In
Boolean algebra, 0 is used to represent the 'open' or 'false' state of a logic gate.
Similarly, 1 is used to represent the 'closed' or 'true' state of a logic gate.
A Boolean expression consists of variables, constants (0-false and 1-true) and logical operators,
and evaluates to true or false. A Boolean function is an algebraic form of a
Boolean expression. A Boolean function of n variables is represented by f(X1, X2, X3, ..., Xn). By
using Boolean laws and theorems, we can simplify the Boolean functions of digital circuits. The
different ways of representing a Boolean function are as follows:
• Sum-of-Products(SOP) form
• Product-of-sums(POS) form
• Canonical and standard forms
5.3.1 Some Definitions of Logic Functions
1. Literal: A variable or its complement i.e A or ¯A.
2. Product term: A term with the product(Boolean multiplication) of literals.
3. Sum term: A term with the sum(Boolean addition) of literals.
The min-terms and max-terms may be used to define the two standard forms for logic expres-
sions, namely the sum of products (SOP) or sum of min-terms and the product of sums(POS) or
product of max-terms. These standard forms of expression aid the logic circuit designer by sim-
plifying the derivation of the function to be implemented. Boolean functions expressed as a sum
of min-terms or a product of max-terms are said to be in canonical form. Note that the POS form is
not the complement of the SOP expression.
32 CHAPTER 5. LOGIC FUNCTIONS
5.3.2 Sum of Product (SOP) Form
The sum-of-products form is a method(or form) of simplifying the Boolean expressions of logic
gates. In this SOP form of Boolean function representation, the variables are operated by AND
(product) to form a product term and all these product terms are ORed(summed or added) to-
gether to get the final function. A sum-of-products form can be formed by adding(or summing)
two or more product terms using a Boolean addition operation. Here the product terms are de-
fined by using the AND operation and the sum term is defined by using OR operation. The sum-of-
products form is also called as Disjunctive Normal Form as the product terms are ORed together
and disjunction operation is logical OR. Sum-of-products form is also called as Standard SOP.
5.3.3 Product of Sums(POS) Form
The product of sums form is a method(or form) of simplifying the Boolean expressions of logic
gates. In this POS form, the variables are ORed, i.e. written as sums, to form sum terms. All these
sum terms are then ANDed (multiplied) together to get the product-of-sums form. This form is exactly
opposite to the SOP form, so it can be regarded as the dual of the SOP form. Here the sum terms are
defined by using the OR operation and the product term is defined by using the AND operation. When
two or more sum terms are multiplied by a Boolean AND operation, the resulting expression
is in the product-of-sums (POS) form. The product-of-sums form is also called the
Conjunctive Normal Form, since the sum terms are ANDed together and the conjunction operation is
logical AND. The product-of-sums form is also referred to as standard POS.
Example: Consider the truth table for half adder shown in the figure 5.15.
Figure 5.15: Truth table of half adder
The sum-of-products form is obtained as
S = ( ¯A.B)+(A. ¯B)
C = (A.B)
The product-of-sums form is obtained as
S = (A +B).( ¯A + ¯B)
C = (A +B).(A + ¯B).( ¯A +B)
Figure 5.16: Circuit diagram for the example in Figure 5.15
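The equivalence of the SOP and POS forms above can be confirmed by evaluating both against the half-adder truth table. The sketch below is illustrative only; it represents the complement of a variable as (1 - variable).

# Sketch: verifying that the SOP and POS forms of the half adder agree with the
# defining behaviour (S is the sum bit, C the carry bit of A + B).
for A in (0, 1):
    for B in (0, 1):
        nA, nB = 1 - A, 1 - B
        S_sop = (nA & B) | (A & nB)            # S = A'.B + A.B'
        C_sop = A & B                          # C = A.B
        S_pos = (A | B) & (nA | nB)            # S = (A+B).(A'+B')
        C_pos = (A | B) & (A | nB) & (nA | B)  # C = (A+B).(A+B').(A'+B)
        assert S_sop == S_pos == (A + B) % 2   # both forms give the sum bit
        assert C_sop == C_pos == (A + B) // 2  # both forms give the carry bit
print("SOP and POS forms of the half adder are equivalent.")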
5.4. SIMPLIFICATION OF LOGIC FUNCTIONS 33
5.4 Simplification of Logic Functions
• Logic expressions may be long, with many terms containing many literals.
• They can be simplified using Boolean algebra.
• Simplification requires the ability to identify patterns.
• If we can represent a logic function in a graphic form that helps us identify patterns,
simplification becomes much easier.
5.5 Karnaugh Map(or K-map) minimization
The Karnaugh map provides a systematic method for simplifying a Boolean expression or a truth
table function. The K-map can produce the simplest SOP or POS expression possible. The K-map
procedure is an application of logical adjacency and guarantees a minimal expression. It is easy
to use, visual, fast, and familiarity with the Boolean laws is not required. The K-map is a table consisting
of N = 2^n cells, where n is the number of input variables. Assuming the input variables are A and
B, the K-map illustrating the four possible variable combinations is obtained.
Let us begin by considering a two-variable logic function
F = ¯AB + AB
Any two-variable function has 2^2 = 4 min-terms. The truth table of this function is as shown in the
Figure 5.17.
Figure 5.17: Truth table
It can be seen that the values of the variables are arranged in the ascending order(in the numerical
decimal sense).
Figure 5.18: Two variable K-map
Example: Consider the expression Z = f (A,B) = ¯A. ¯B+A. ¯B+ ¯A.B plotted on the k-map as shown
in the Figure 5.19.
34 CHAPTER 5. LOGIC FUNCTIONS
Figure 5.19: K-map
Pairs of 1’s are grouped as shown in the Figure 5.19 and the simplified answer is obtained by using
the following steps:
• Note that two groups can be formed for the example given above, bearing in mind that the
largest rectangular clusters that can be made consist of two 1’s. Notice that a 1 can belong to
more than one group.
• The first group, labelled I, consists of two 1's which correspond to A = 0, B = 0 and A = 1,
B = 0. Put another way, all squares in this example that correspond to the area of the map
where B = 0 contain 1's, independent of the value of A. So when B = 0, the output is 1. The
expression for the output will contain the term ¯B.
• The group labelled II corresponds to the area of the map where A = 0. The group can there-
fore be defined as ¯A. This implies that when A = 0, the output is 1. The output is therefore 1
whenever B = 0 or A = 0.
Hence, the simplified answer is Z = ¯A + ¯B.
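The K-map result can be double-checked by brute force: the simplified expression must agree with the original expression for all four input combinations. A small Python sketch of this check follows.

# Sketch: confirming the K-map result Z = A' + B' for Z = A'.B' + A.B' + A'.B.
for A in (0, 1):
    for B in (0, 1):
        original   = ((1 - A) & (1 - B)) | (A & (1 - B)) | ((1 - A) & B)
        simplified = (1 - A) | (1 - B)
        assert original == simplified
print("Z = A' + B' matches the original three-term expression.")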
5.5.1 Three Variable and Four Variable Karnaugh Map:
The three variable and four variable K-maps can be constructed as shown in the Figure 5.20.
Figure 5.20: Three and four variable K-map
A K-map for three variables will have 2^3 = 8 cells and for four variables will have 2^4 = 16 cells.
It can be observed that the positions of the columns 10 and 11 are interchanged so that there is a
change in only one variable across adjacent cells. This arrangement is what allows the logic to be minimized.
Example: 3-variable K-Map for the function, F = Σ(1,2,3,4,5,6)
Figure 5.21: Three variable K-map for the given expression
Simplified expression from the K-map is F = ¯A.C +B. ¯C + A. ¯B.
Example: Consider the function, F = Σ(0,1,3,5,6,9,11,12,13,15)
Since the biggest number is 15, we need 4 variables to define this function. The K-Map for
this function is obtained by writing 1 in the cells that are present in the function and 0 in the rest of the cells.
5.5. KARNAUGH MAP(OR K-MAP) MINIMIZATION 35
Figure 5.22: Four variable K-map for the given expression
Simplified expression obtained: F = ¯C.D + A.D + ¯A. ¯B. ¯C + ¯A. ¯B.D + A.B. ¯C + ¯A.B.C. ¯D.
Example: Given function, F = Σ(0,2,3,4,5,7,8,9,13,15)
Figure 5.23: K-map for the given expression
Simplified expression obtained: F = B.D + ¯A. ¯C. ¯D + ¯A. ¯B.C + A. ¯B. ¯C.
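Simplifications such as these can always be verified by evaluating the simplified expression for every min-term index. The sketch below checks the last example, assuming A is the most significant bit of the four-bit min-term index.

# Sketch: brute-force check that the K-map result for F = Σ(0,2,3,4,5,7,8,9,13,15)
# equals B.D + A'.C'.D' + A'.B'.C + A.B'.C'.
minterms = {0, 2, 3, 4, 5, 7, 8, 9, 13, 15}
for m in range(16):
    A = (m >> 3) & 1
    B = (m >> 2) & 1
    C = (m >> 1) & 1
    D = m & 1
    simplified = (B & D) | ((1-A) & (1-C) & (1-D)) | ((1-A) & (1-B) & C) | (A & (1-B) & (1-C))
    assert simplified == (1 if m in minterms else 0)
print("The simplified expression reproduces the original min-term list.")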
5.5.2 Five Variable K-Map:
A 5-variable K-Map will have 2^5 = 32 cells. A function F whose largest min-term index is 31
can be defined and simplified by a 5-variable Karnaugh Map.
Figure 5.24: Five variable K-map
Example 5: Given function, F = Σ(1,3,4,5,11,12,14,16,20,21,30)
Since the biggest number is 30, we need 5 variables to define this function. The K-map for the
function is as shown in the Figure 5.25
Figure 5.25: Five variable K-map for example 5
Simplified expression: F = ¯B.C. ¯D + ¯A.B.C ¯E +B.C.D. ¯E + ¯A. ¯C.D.E + A ¯B. ¯D. ¯E + ¯A. ¯B. ¯C.E
Example 6: Given function, F = Σ(0,1,2,3,8,9,16,17,20,21,24,25,28,29,30,31)
36 CHAPTER 5. LOGIC FUNCTIONS
Figure 5.26: Five variable K-map for example 6
Simplified expression:F = A. ¯D + ¯C. ¯D + ¯A. ¯B. ¯C + A.B.C
5.5.3 Implementation of BCD to Gray code Converter using K-map
We can convert the Binary Coded Decimal(BCD) code to Gray code by using K-map simplification.
The truth table for BCD to Gray code converter is shown in the Figure 5.27
Figure 5.27: Table for BCD to Gray code converter
Figure 5.28: K-map for G3 and G2
Figure 5.29: K-map for G1 and G0
The equations obtained from K-maps are as follows:
G3 = B3
5.6. MINIMIZATION USING K-MAP 37
G2 = ¯B3.B2+B3. ¯B2 = B3⊕B2
G1 = ¯B1.B2+B1. ¯B2 = B1⊕B2
G0 = ¯B1.B0+B1. ¯B0 = B1⊕B0
The implementation of BCD to Gray conversion using logic gates is shown in the Figure 5.30.
Figure 5.30: Binary to Gray code converter circuit
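The converter can also be expressed directly from the equations above. The following Python sketch is illustrative; the function name bcd_to_gray is an arbitrary choice, and only the ten valid BCD codes 0000 to 1001 are printed.

# Sketch: BCD-to-Gray conversion using the equations derived from the K-maps.
def bcd_to_gray(b3, b2, b1, b0):
    g3 = b3
    g2 = b3 ^ b2
    g1 = b1 ^ b2
    g0 = b1 ^ b0
    return g3, g2, g1, g0

# Print the conversion for the ten valid BCD codes.
for n in range(10):
    bits = [(n >> i) & 1 for i in (3, 2, 1, 0)]
    print(bits, "->", list(bcd_to_gray(*bits)))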
5.6 Minimization using K-map
Some definitions used in minimization:
• Implicant: An implicant is a group of 2^i (i = 0,1,...,n) minterms (1-entered cells) that are
logically adjacent.
• Prime implicant: A prime implicant is an implicant that is not a subset of any other implicant.
A study of implicants enables us to use the K-map effectively for simplifying a Boolean func-
tion. Consider the K-map shown in the Figure 5.31. There are four implicants: 1, 2, 3 and
4.
Figure 5.31: K-map showing implicants and prime implicants
The implicant 4 is a single cell implicant. A single cell implicant is a 1-entered cell that is not
positionally adjacent to any of the other cells in the map. The four implicants account for all
groupings of 1-entered cells. This also means that the four implicants describe the Boolean
function completely. An implicant represents a product term, with the number of variables
appearing in the term inversely proportional to the number of 1-entered cells it represents.
Implicant 1 in the Figure 5.31 represents the product term A. ¯C.
Implicant 2 represents A.B.D.
Implicant 3 represents B.C.D.
Implicant 4 represents ¯A. ¯B.C. ¯D.
The smaller the number of implicants, and the larger the number of cells that each implicant
represents, the smaller the number of product terms in the simplified Boolean expression.
• Essential prime implicant: It is a prime implicant which includes a 1-entered cell that is not
included in any other prime implicant.
• Redundant implicant: It is the one in which all cells are covered by other implicants. A
redundant implicant represents a redundant term in an expression.
38 CHAPTER 5. LOGIC FUNCTIONS
5.6.1 K-maps for Product-of-Sum Design
Product-of-sums design uses the same principles, but applied to the zeros of the function.
Example 7: F = Π(0,4,6,7,11,12,14,15)
In constructing the K-map for the function given in POS form, 0’s are filled in the cells represented
by the max terms. The K-map of the above function is shown in the Figure 5.32.
Figure 5.32: K-map for the example 7
Sometimes the function may be given in the standard POS form. In such situations, we can initially
convert the standard POS form of the expression into its canonical form, and enter 0’s in the cells
representing the max terms. Another procedure is to enter 0’s directly into the map by observing
the sum terms one after the other. Consider an example of a Boolean function given in the POS
form.
F = (A +B + ¯D).( ¯A +B + ¯C).( ¯B +C)
This may be converted into its canonical form as follows:
F = (A+B +C + ¯D).(A+B + ¯C + ¯D).( ¯A+B + ¯C +D).( ¯A+B + ¯C + ¯D).(A+ ¯B +C +D).(A+ ¯B +C + ¯D).( ¯A+
¯B +C +D).( ¯A+ ¯B +C + ¯D)
F = M1.M3.M10.M11.M4.M5.M12.M13
The cells 1, 3, 4, 5, 10, 11, 12 and 13 have 0's entered in them while the remaining cells are filled with
1’s.
5.6.2 DON’T CARE CONDITIONS
Functions that have unspecified output for some input combinations are called incompletely spec-
ified functions. Unspecified min-terms of a function are called don't care conditions. We simply
don’t care whether the value of 0 or 1 is assigned to F for a particular min term. Don’t care con-
ditions are represented by X in the K-Map table. Don’t care conditions play a central role in the
specification and optimization of logic circuits as they represent the degrees of freedom of trans-
forming a network into a functionally equivalent one.
Example 8: Simplify the Boolean function
f (w,x, y,z) = Σ(1,3,7,11,15), d(w,x, y,z) = Σ(0,2,5)
5.6. MINIMIZATION USING K-MAP 39
Figure 5.33: K-map for the example 8
The function in the SOP and POS forms may be written as
F = Σ(4,5)+d(0,6,7) and F = Π(1,2,3).d(0,6,7)
The term d(0,6,7) represents the collection of don’t-care terms. The don’t-cares bring some ad-
vantages to the simplification of Boolean functions. The X ’s can be treated either as 0’s or as 1’s
depending on the convenience. For example the above K-map shown in the Figure 5.33 can be
redrawn in two different ways as shown in the Figure 5.34.
Figure 5.34: Different ways of K-map for the example 8
The simplification can be done in two different ways. The resulting expressions for the function F
are: F = A and F = A ¯B
Therefore, we can generate a simpler expression for a given function by utilizing some of the don’t-
care conditions as 1’s.
Example 9: Simplify F = Σ(0,1,4,8,10,11,12)+d(2,3,6,9,15). The K-map of this function is shown
in the Figure 5.35.
Figure 5.35: K-map for the example 9
The simplified expression taking the full advantage of the don’t cares is F = ¯B + ¯C. ¯D
Simplification of Several Functions of the Same Set of Variables: As there could be several prod-
uct terms that could be made common to more than one function, special attention needs to be
paid to the simplification process.
Example 10: Consider the following set of functions defined on the same set of variables:
F1(A,B,C) = Σ(0,3,4,5,6)
F2(A,B,C) = Σ(1,2,4,6,7)
40 CHAPTER 5. LOGIC FUNCTIONS
F3(A,B,C) = Σ(1,3,4,5,6)
Let us first consider the simplification process independently for each of the functions. The K-
maps for the three functions and the groupings are shown in the Figure 5.36.
Figure 5.36: K-map for the example 10
The resultant minimal expressions are:
F1 = ¯B. ¯C + A. ¯C + A. ¯B + ¯A.B.C
F2 = B. ¯C + A. ¯C + A.B + ¯A. ¯B.C
F3 = A. ¯C + ¯B.C + ¯A.C
These three functions have nine product terms and twenty one literals. If the groupings can be
done to increase the number of product terms that can be shared among the three functions, a
more cost-effective realization of these functions can be achieved. As a first approximation, the
cost of realizing a function is proportional to the number of product terms and the total number of
literals present in the expression. Consider the minimization shown in the
Figure 5.37.
Figure 5.37: K-map for the example 10
The resultant minimal expressions are:
F1 = ¯B. ¯C + A. ¯C + A. ¯B + ¯A.B.C
F2 = B. ¯C + A. ¯C + A.B + ¯A. ¯B.C
F3 = A. ¯C + A. ¯B + ¯A.B.C + ¯A. ¯B.C
This simplification leads to seven product terms and sixteen literals.
5.7 Quine-McCluskey Method of Minimization
The Karnaugh map provides a good method of minimizing a logic function. However, it depends on
our ability to observe appropriate patterns and identify the necessary implicants. If the number of
variables increases beyond five, the K-map or its variant, the variable-entered map, can become very messy
and there is every possibility of committing a mistake. What we require is a method that is more
suitable for implementation on a computer, even if it is inconvenient for paper-and-pencil pro-
cedures. The concept of tabular minimisation was originally formulated by Quine in 1952. The
method was later improved upon by McCluskey in 1956, hence the name Quine-McCluskey.
This learning unit is concerned with the Quine-McCluskey method of minimisation. The method
is tedious, time-consuming and subject to error when performed by hand, but it is better suited
to implementation on a digital computer.
5.7. QUINE-MCCLUSKEY METHOD OF MINIMIZATION 41
5.7.1 Principle of the Quine-McCluskey Method
In the first stage, the prime implicants of the given Boolean function are generated by a special
tabulation process. In the second stage, the minimal set of prime implicants is determined from the
set of implicants generated in the first stage. The tabulation process starts with a listing of the
specified minterms for the 1's (or 0's) of a function and the don't-cares (the unspecified minterms)
in a particular format. All the prime implicants are generated from them using the simple logical
adjacency theorem, namely, A ¯B + AB = A. The main feature of this stage is that we work with the
equivalent binary numbers of the product terms.
For example in a four variable case, the minterms ¯ABC ¯D and ¯AB ¯C ¯D are entered as 0110 and 0100.
As the two logically adjacent ¯ABC ¯D and ¯AB ¯C ¯D can be combined to form a product term ¯AB ¯D the
two binary terms 0110 and 0100 are combined to form a term represented as 01−0, where ‘-‘(dash)
indicates the position where the combination took place.
Stage two involves creating a prime implicant table. This table provides a means of identifying, by
a special procedure, the smallest number of prime implicants that represents the original Boolean
function. The selected prime implicants are combined to form the simplified expression in the
SOP form. While we confine our discussion to the creation of minimal SOP expression of a Boolean
function in the canonical form, it is easy to extend the procedure to functions that are given in the
standard or any other forms.
5.7.2 Generation of Prime Implicants
The process of generating prime implicants is best presented through an example.
Example 10: F = Σ(1,2,5,6,7,9,10,11,14)
All the minterms are tabulated as binary numbers in sectionalised format, so that each section
consists of the equivalent binary numbers containing the same number of 1’s, and the number
of 1’s in the equivalent binary numbers of each section is always more than that in its previous
section. This process is illustrated in the table as shown in the Figure 5.38.
Figure 5.38: Table for example 10
The next step is to look for all possible combinations between the equivalent binary numbers in
the adjacent sections by comparing every binary number in each section with every binary num-
ber in the next section. The combination of two terms in the adjacent sections is possible only if
the two numbers differ from each other with respect to only one bit. For example 0001(1) in section
1 can be combined with 0101(5) in section 2 to result in 0−01(1,5). Notice that combinations can-
not occur among the numbers belonging to the same section. The results of such combinations
are entered into another column, sequentially along with their decimal equivalents indicating the
binary equivalents from which the result of combination came, like (1,5) as mentioned above. The
42 CHAPTER 5. LOGIC FUNCTIONS
second column also will get sectionalised based on the number of 1’s. The entries of one section
in the second column can again be combined together with entries in the next section in a similar
manner. These combinations are illustrated in the table shown in the Figure 5.39.
Figure 5.39: Table for example 10
All the entries in the column which are paired with entries in the next section are checked off.
Column 2 is again sectionalised with respect to the number of 1’s. Column 3 is generated by pairing
off entries in the first section of the column 2 with those items in the second section. In principle,
this pairing could continue until no further combinations can take place. All those entries that are
paired can be checked off. It may be noted that combination of entries in column 2 can only take
place if the corresponding entries have the dashes at the same place. This rule is applicable for
generating all other columns as well. After the tabulation is completed, all those terms which are
not checked off constitute the set of prime implicants of the given function. The repeated terms,
like −−10 in the column 3, should be eliminated. Therefore, from the above tabulation procedure,
we obtain seven prime implicants (denoted by their decimal equivalents) as (1,5), (1,9), (5,7), (6,7),
(9,11), (10,11) and (2,6,10,14). The next stage is to determine the minimal set of prime implicants.
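The tabulation process lends itself naturally to a computer implementation, which is the main motivation for the method. The following Python sketch is one possible rendering of the first stage: terms are held as strings of '0', '1' and '-', and two terms are combined whenever they differ in exactly one bit position. The function name prime_implicants and the string representation are choices made here, not part of the original treatment.

# Sketch of the tabulation step: repeatedly combine terms that differ in a single
# bit; terms that never combine are the prime implicants.
def prime_implicants(minterms, nbits=4):
    terms = {format(m, f"0{nbits}b") for m in minterms}
    primes = set()
    while terms:
        combined, used = set(), set()
        for t1 in terms:
            for t2 in terms:
                diff = [i for i in range(nbits) if t1[i] != t2[i]]
                # Combine only if exactly one position differs and it is a 0/1 bit.
                if len(diff) == 1 and '-' not in (t1[diff[0]], t2[diff[0]]):
                    i = diff[0]
                    combined.add(t1[:i] + '-' + t1[i+1:])
                    used.update({t1, t2})
        primes |= terms - used          # unpaired terms are prime implicants
        terms = combined
    return primes

# Example 10 from the text: F = Σ(1,2,5,6,7,9,10,11,14)
print(sorted(prime_implicants([1, 2, 5, 6, 7, 9, 10, 11, 14])))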
5.7.3 Determination of the Minimal Set of Prime Implicants
The prime implicants generated through the tabular method do not constitute the minimal set.
The prime implicants are represented in so called "prime implicant table". Each column in the
table represents a decimal equivalent of the min term. A row is placed for each prime implicant
with its corresponding product term appearing to the left and the decimal group to the right. An
asterisk is entered at each intersection where the prime implicant of that row covers the min-term
of that column. The prime implicant table for the function under consideration is shown in the
Figure 5.40.
Figure 5.40: Prime implicant table for example 10
5.7. QUINE-MCCLUSKEY METHOD OF MINIMIZATION 43
In the selection of minimal set of implicants, similar to that in a K-map, essential implicants should
be determined first. An essential prime implicant in a prime implicant table is one that covers (at
least one) minterms which are not covered by any other prime implicant. This can be done by
looking for that column that has only one asterisk. For example, the columns 2 and 14 have only
one asterisk each. The associated row, indicated by the prime implicant C ¯D is an essential prime
implicant. C ¯D is selected as a member of the minimal set (mark that row by an asterisk). The
corresponding columns, namely 2, 6, 10 and 14 are also removed from the prime implicant table,
and a new table is constructed as shown in the Figure 5.41.
Figure 5.41: Prime implicant table for example 10
We then select dominating prime implicants, i.e. rows that cover all the remaining columns of an-
other row. For example, the row ¯ABD includes the min-term 7, which is the only remaining one included
in the row represented by ¯ABC. ¯ABD is the dominant implicant over ¯ABC, and hence ¯ABC can be eliminated.
Mark ¯ABD with an asterisk and check off the columns 5 and 7. Then choose A ¯BD as the dominating
row over the row represented by A ¯BC. Consequently, we mark the row A ¯BD with an asterisk and
eliminate the row A ¯BC and the columns 9 and 11 by checking them off. Similarly, we select ¯A ¯CD
as the dominating one over ¯B ¯CD. However, ¯B ¯CD could equally well be chosen as the dominating prime
implicant, eliminating the implicant ¯A ¯CD instead. Retaining ¯A ¯CD as the dominant prime implicant,
the minimal set of prime implicants is C ¯D, ¯A ¯CD, ¯ABD and A ¯BD. The corresponding minimal
SOP expression for the Boolean function is:
F = C ¯D + ¯A ¯CD + ¯ABD + A ¯BD
This indicates that if the selection of the minimal set of prime implicants is not unique, then the
minimal expression is also not unique.
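The selection of a cover from the prime implicant table can also be sketched in code. The fragment below builds on the prime_implicants helper from the earlier sketch; it first picks the essential prime implicants and then covers the remaining min-terms greedily. It reproduces one valid minimal set for this example, though a greedy choice does not guarantee minimality in general.

# Sketch: essential prime implicants first, then a greedy cover of what remains.
def covers(term, m, nbits=4):
    bits = format(m, f"0{nbits}b")
    return all(t == '-' or t == b for t, b in zip(term, bits))

def select_cover(primes, minterms, nbits=4):
    chart = {m: [p for p in primes if covers(p, m, nbits)] for m in minterms}
    chosen = {rows[0] for rows in chart.values() if len(rows) == 1}   # essential PIs
    uncovered = {m for m in minterms if not any(covers(p, m, nbits) for p in chosen)}
    while uncovered:
        best = max(primes, key=lambda p: sum(covers(p, m, nbits) for m in uncovered))
        chosen.add(best)
        uncovered -= {m for m in uncovered if covers(best, m, nbits)}
    return chosen

primes = prime_implicants([1, 2, 5, 6, 7, 9, 10, 11, 14])
print(select_cover(primes, [1, 2, 5, 6, 7, 9, 10, 11, 14]))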
There are two types of implicant tables that have some special properties. One is referred to as
cyclic prime implicant table and the other as semi-cyclic prime implicant table. A prime impli-
cant table is considered to be cyclic if it has no essential prime implicants, which implies that
there are at least two asterisks in every column, and it has no dominating implicants, which im-
plies that every row has the same number of asterisks.
Example 11: A Boolean function with a cyclic prime implicant table is shown in the Figure 5.42.
The function is given by F = Σ(0,1,3,4,7,12,14,15); the possible prime implicants of the function
are (0,1), (0,4), (1,3), (3,7), (4,12), (7,15), (12,14) and (14,15).
44 CHAPTER 5. LOGIC FUNCTIONS
Figure 5.42: Cyclic prime implicant table for example 11
It may be noticed from the prime implicant table in the Figure 5.42 that all columns have two
asterisks and there are no essential prime implicants. In such a case, we can choose any one of the
prime implicants to start with. If we start with prime implicant a, it can be marked with an asterisk
and the corresponding columns for 0 and 1 can be deleted from the table. After their removal, row
c becomes dominant over row b, so that row c is selected and hence row b can be eliminated. The
columns 3 and 7 can now be deleted. We observe then that the row e dominates row d and row d
can be eliminated. Selection of row e enables us to delete columns 14 and 15.
Figure 5.43: Cyclic prime implicant table for example 11
If, from the reduced prime implicant table shown in the Figure 5.43 we choose row g, it covers the
remaining asterisks associated with rows h and f . That covers the entire prime implicant table.
The minimal set for the Boolean function is given by:
Figure 5.44: Cyclic prime implicant table for example 11
5.7. QUINE-MCCLUSKEY METHOD OF MINIMIZATION 45
F = a +c +e + g
F = ¯A ¯B ¯C + ¯ACD + ABC +B ¯C ¯D
The K-map of the simplified function is shown in the Figure 5.45.
Figure 5.45: K-map for example 11
A semi-cyclic prime implicant table differs from a cyclic prime implicant table in one respect. In
the cyclic case the number of minterms covered by each prime implicant is identical. In a semi-
cyclic function this is not necessarily true.
5.7.4 Simplification of Incompletely Specified functions
The simplification procedure for completely specified functions presented in the earlier sections
can easily be extended to incompletely specified functions. The initial tabulation is drawn up
including the don't-cares. However, when the prime implicant table is constructed, the columns asso-
ciated with the don't-cares need not be included because they do not necessarily have to be covered.
The remaining part of the simplification is similar to that for completely specified functions.
Example 12: Simplify the following function:
F(A,B,C,D,E) = Σ(1,4,6,10,20,22,24,26)+d(0,11,16,27)
Figure 5.46: Table for example 12
We need to pay attention to the don't-care terms as well as to the combinations among them-
selves, by marking them with (d). Six binary equivalents are obtained from the procedure. These
are 0000−(0,1), 1−000(16,24), 110−0(24,26), −0−00(0,4,16,20), −01−0(4,6,20,22) and −101−
(10,11,26,27), and they correspond to the following prime implicants:
a = ¯A ¯B ¯C ¯D
b = A ¯C ¯D ¯E
c = AB ¯C ¯E
d = ¯B ¯D ¯E
e = ¯BC ¯E
g = B ¯CD
The prime implicant table is plotted as shown in the Figure 5.47.
46 CHAPTER 5. LOGIC FUNCTIONS
Figure 5.47: Prime implicant table for example 12
It may be noted that the don’t-cares are not included.
The minimal expression is given by:
F(A,B,C,D,E) = a +c +e + g
F(A,B,C,D,E) = ¯A ¯B ¯C ¯D + AB ¯C ¯E + ¯BC ¯E +B ¯CD
5.8 Timing Hazards
If the input of a combinational circuit changes, unwanted switching variations may appear in the
output. These variations occur when different paths from the input to output have different delays.
If, in response to a single input change and for some combination of propagation delays, an
output momentarily goes to 0 when it should remain at a constant value of 1, the circuit is said to
have a static 1-hazard. Likewise, if the output momentarily goes to 1 when it should remain at a
constant value of 0, the circuit is said to have a static 0-hazard. When an output is supposed to change
values from 0 to 1, or 1 to 0, this output may change three or more times. If this situation occurs,
the circuit is said to have a dynamic hazard. The Figure 5.48 shows the different outputs from a
circuit with hazards. In each of the three cases, the steady-state output of the circuit is correct,
however a switching variation appears at the circuit output when the input is changed.
Figure 5.48: Timing hazards
The first case, the static 1-hazard, arises because if A = C = 1, then F = B + ¯B = 1, so the
output F should remain at a constant 1 when B changes from 1 to 0. However, if each gate has a
propagation delay of 10 ns, E will go to 0 before D goes to 1, resulting in a momentary 0 appearing
at the output F. This glitch is caused by the 1-hazard. One should note that right after B changes
to 0, both the inverter input (B) and output (B') are 0 until the delay has elapsed. During this
propagation period, both of the input terms in the equation for F have the value 0, so F also
momentarily goes to 0.
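The glitch described above can be reproduced with a very small transport-delay simulation. The sketch below assumes the circuit is F = A.B + B'.C (consistent with the statement that F = B + B' when A = C = 1), a delay of 10 ns for every gate, and uses E for the A.B path and D for the B'.C path, matching the discussion above; these signal names and the time-step model are assumptions made for illustration.

DELAY = 10                                   # assumed propagation delay of every gate, in ns

def simulate(b_of_t, t_end=60):
    A = C = 1                                # A and C are held at 1 throughout
    hist = {"Bn": {}, "D": {}, "E": {}, "F": {}}       # signal history: time -> value
    def val(sig, t, default):
        earlier = [u for u in hist[sig] if u <= t]
        return hist[sig][max(earlier)] if earlier else default
    for t in range(t_end):
        B = b_of_t(t)
        hist["Bn"][t + DELAY] = 1 - B                           # inverter output B'
        hist["E"][t + DELAY] = A & B                            # E = A.B
        hist["D"][t + DELAY] = val("Bn", t, 0) & C              # D = B'.C
        hist["F"][t + DELAY] = val("E", t, 1) | val("D", t, 0)  # F = E + D
    return [val("F", t, 1) for t in range(t_end)]

# B is 1 initially and falls to 0 at t = 20 ns.
waveform_F = simulate(lambda t: 1 if t < 20 else 0)
print("glitch present:", 0 in waveform_F)    # F momentarily drops to 0 around t = 40 ns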
Whether a circuit contains these hazards, static or dynamic, is a property of the circuit itself and
does not depend on the particular propagation delays. If a combinational circuit has no hazards,
then for any combination of propagation delays and for any single input change, the output will
show no transient variation. On the contrary, if a circuit contains a hazard, then there will be some
5.8. TIMING HAZARDS 47
combination of delays and an input change for which the output of the circuit contains a
transient. The combination of delays that produces a glitch may or may not be likely to occur in
the implementation of the circuit; in some instances, it is very unlikely that such delays would
occur. The transients (or glitches) that result from static and dynamic timing hazards very seldom
cause problems in fully synchronous circuits, but they are a major issue in asynchronous circuits
(which include nominally synchronous circuits that use asynchronous preset/reset inputs or
gated clocks).
The variation in the output also depends on how each gate responds to a change of input
value. In some instances, if more than one input of a gate changes within a short amount of time, the
gate may or may not respond to the individual input changes. An example is shown in the Figure
5.49, assuming that the inverter (B) has a propagation delay of 2 ns instead of 10 ns. The changes on
inputs D and E then reach the output OR gate 2 ns apart, so the OR gate may or may not
generate the 0 glitch.
Figure 5.49: Timing diagram
A gate displaying this type of response is said to have what is known as an inertial delay. Rather of-
ten the inertial delay value is presumed to be the same as the propagation delay of the gate. When
this is the case, the circuit above will respond with a 0 glitch only for inverter propagation delays that
are larger than 10 ns. However, a gate that invariably responds to every input change, no matter how
closely spaced the changes are, is said to have an ideal or transport delay. If the OR gate shown above
has this type of delay, then a 0 glitch would be generated for any nonzero value of the inverter
propagation delay.
Hazards can always be discovered using a Karnaugh map. In the map illustrated above in Figure
5.49, no single loop covers both minterms ABC and A ¯BC. Thus, if A = C = 1 and
B changes value, both of these terms can go to 0 momentarily; this momentary change produces a 0
glitch in F. To detect hazards in a two-level AND-OR combinational circuit, the following
procedure is used.
• A sum-of-products expression for the circuit needs to be written out.
• Each term should be plotted on the map and looped, if possible.
48 CHAPTER 5. LOGIC FUNCTIONS
• If any two adjacent 1's are not covered by the same loop, then a 1-hazard exists for the tran-
sition between those two 1's. For an n-variable map, this transition occurs when one
variable changes value and the other (n − 1) variables are held constant. If another loop is
added to the Karnaugh map in the Figure 5.49 and the corresponding gate is added
to the circuit in the Figure 5.50, the hazard can be eliminated. The term AC remains at a
constant value of 1 while B is changing, so a glitch cannot appear in the output. With this
change, F is no longer a minimum SOP expression.
Figure 5.50: Timing diagram
The Figure 5.51(a) shows a circuit with several 0-hazards. The function that represents the cir-
cuit's output is: F = (A +C).( ¯A + ¯D).( ¯B + ¯C +D). The Karnaugh map in the Figure 5.51(b) has four
pairs of adjacent 0’s that are not covered by a common loop. The arrows indicate where each 0 is
not being looped, and they each correspond to a 0-hazard. If A = 0, B = 1, D = 0 and C changes
from 0 to 1, a spike can appear at the output for some combination of gate de-
lays. Lastly, Figure 5.51(c) depicts a timing diagram that assumes a delay of 3 ns for each individual
inverter and a delay of 5 ns for each AND gate and each OR gate.
Figure 5.51: Timing diagram for the circuit
5.8. TIMING HAZARDS 49
The 0-hazards can be eliminated by looping extra prime implicants that cover the adjacent 0's
that are not already covered by a common loop. These extra loops correspond to algebraically
redundant (consensus) terms added to the expression. Using three additional loops completely
eliminates the 0-hazards, resulting in the following equation.
F = (A +C)( ¯A + ¯D)( ¯B + ¯C +D)(C + ¯D)(A + ¯B +D)( ¯A + ¯B + ¯C)
The Figure 5.52 below illustrates the Karnaugh map after removing the 0-hazards.
Figure 5.52: Timing diagram after removing 0-hazards
50 CHAPTER 5. LOGIC FUNCTIONS
Chapter 6
Combinational Circuits
6.1 Decoder
A decoder is a combinational circuit that has 'n' input lines and a maximum of 2^n output lines. One of
these outputs will be active HIGH based on the combination of inputs present, when the decoder
is enabled. That means the decoder detects a particular code. The outputs of the decoder are nothing
but the min-terms of the 'n' input variables (lines), when it is enabled.
6.1.1 2 to 4 Decoder:
Let the 2 to 4 decoder have two inputs A1 & A0 and four outputs Y3, Y2, Y1 & Y0. The block diagram of
the 2 to 4 decoder is shown in the following Figure 6.1.
Figure 6.1: Block diagram of 2 to 4 decoder
One of these four outputs will be 1 for each combination of inputs when enable, E is 1. The Truth
table of 2 to 4 decoder is shown in the Figure 6.2.
Figure 6.2: Truth table of 2 to 4 decoder
51
52 CHAPTER 6. COMBINATIONAL CIRCUITS
Y 3 = E.A1.A0
Y 2 = E.A1. ¯A0
Y 1 = E. ¯A1.A0
Y 0 = E. ¯A1. ¯A0
Each output has one product term, so there are four product terms in total. We can im-
plement these four product terms by using four AND gates having three inputs each and two inverters.
The circuit diagram of the 2 to 4 decoder is shown in the following Figure 6.3.
Figure 6.3: Circuit diagram of 2 to 4 decoder
Therefore, the outputs of 2 to 4 decoder are nothing but the minterms of two input variables A1 &
A0, when enable, E is equal to one. If enable, E is zero, then all the outputs of decoder will be equal
to zero.
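The behaviour of the decoder follows directly from the four output equations. The Python sketch below is an illustrative model of the 2 to 4 decoder, not a gate-level description.

# Sketch: a 2 to 4 decoder following the output equations above.
def decoder_2to4(E, A1, A0):
    Y3 = E & A1 & A0
    Y2 = E & A1 & (1 - A0)
    Y1 = E & (1 - A1) & A0
    Y0 = E & (1 - A1) & (1 - A0)
    return Y3, Y2, Y1, Y0

# With enable high, exactly one output is 1 for each input combination.
for A1 in (0, 1):
    for A0 in (0, 1):
        print(A1, A0, decoder_2to4(1, A1, A0))
print("Enable low:", decoder_2to4(0, 1, 1))   # all outputs are 0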
6.1.2 3 to 8 line decoder
This decoder circuit gives 8 logic outputs for 3 inputs and has an enable pin. The circuit is designed
with AND and NAND logic gates. It takes 3 binary inputs and activates one of the eight outputs. The 3
to 8 line decoder circuit is also called a binary-to-octal decoder.
Figure 6.4: Block diagram of 3 to 8 line decoder
The decoder circuit works only when the Enable pin (E) is high. S0, S1 and S2 are three different
inputs and D0, D1, D2, D3, D4, D5, D6 and D7 are the eight outputs. When the Enable pin (E) is
low all the output pins are low.
6.1. DECODER 53
Figure 6.5: Circuit diagram of 3 to 8 line decoder
Figure 6.6: Truth table of 3 to 8 line decoder
6.1.3 Implementation of higher-order decoders
Higher order decoders can be implemented using one or more lower order decoders.
3 to 8 Decoder: It can be implemented using 2 to 4 decoders. We know that 2 to 4 decoder has
two inputs, A1 & A0 and four outputs, Y3 to Y0. Whereas, 3 to 8 decoder has three inputs A2, A1
& A0 and eight outputs, Y7 to Y0. The number of lower order decoders required for implementing
higher order decoder can be obtained using the following formula.
Required number of lower order decoders = m2/m1
where, m1 is the number of outputs of lower order decoder and m2 is the number of outputs of
higher order decoder.
Here, m1=4 and m2=8. Substituting these two values in the above formula.
Required number of 2 to 4 decoders=8/4=2
Therefore, we require two 2 to 4 decoders for implementing one 3 to 8 decoder. The block diagram
of 3 to 8 decoder using 2 to 4 decoders is shown in the following Figure 6.7.
Figure 6.7: Circuit diagram of 3 to 8 decoder
Digital Electronics
Digital Electronics
Digital Electronics
Digital Electronics
Digital Electronics
Digital Electronics
Digital Electronics
Digital Electronics
Digital Electronics
Digital Electronics
Digital Electronics
Digital Electronics
Digital Electronics
Digital Electronics
Digital Electronics
Digital Electronics
Digital Electronics
Digital Electronics
Digital Electronics
Digital Electronics
Digital Electronics
Digital Electronics
Digital Electronics
Digital Electronics
Digital Electronics
Digital Electronics

More Related Content

What's hot

How computers represent data
How computers represent dataHow computers represent data
How computers represent dataShaon Ahmed
 
Digital electronics
Digital electronicsDigital electronics
Introduction to computer_lec_02
Introduction to computer_lec_02Introduction to computer_lec_02
Introduction to computer_lec_02
Ramadan Babers, PhD
 
Introduction to computer_lec_02_fall_2018
Introduction to computer_lec_02_fall_2018Introduction to computer_lec_02_fall_2018
Introduction to computer_lec_02_fall_2018
Ramadan Babers, PhD
 
Mrs. Noland's Binary System ppt
Mrs. Noland's Binary System pptMrs. Noland's Binary System ppt
Mrs. Noland's Binary System ppt
dsparone
 
Data Representation
Data RepresentationData Representation
Data Representation
Praveen M Jigajinni
 
Binary system ppt
Binary system pptBinary system ppt
Binary system ppt
BBDITM LUCKNOW
 
Number Systems Basic Concepts
Number Systems Basic ConceptsNumber Systems Basic Concepts
Number Systems Basic Concepts
Laguna State Polytechnic University
 
Quantitative Analysis 2
Quantitative Analysis 2Quantitative Analysis 2
Quantitative Analysis 2
রেদওয়ান হৃদয়
 
Computer Data Representation
Computer Data RepresentationComputer Data Representation
Computer Data Representation
ritaester
 
Number system
Number system Number system
Number system
Anirban Saha Anik
 
Data representation
Data representationData representation
Data representation
Johnson Prince
 
Computer data representation (integers, floating-point numbers, text, images,...
Computer data representation (integers, floating-point numbers, text, images,...Computer data representation (integers, floating-point numbers, text, images,...
Computer data representation (integers, floating-point numbers, text, images,...
ArtemKovera
 
Binary ,octa,hexa conversion
Binary ,octa,hexa conversionBinary ,octa,hexa conversion
Binary ,octa,hexa conversion
ChaudharyShoaib7
 
Number Systems Basic Concepts
Number Systems Basic ConceptsNumber Systems Basic Concepts
Number Systems Basic Concepts
Laguna State Polytechnic University
 
Data representation in a computer
Data representation in a computerData representation in a computer
Data representation in a computer
Girmachew Tilahun
 
Binary Slides
Binary Slides Binary Slides
Binary Slides jnoles
 

What's hot (20)

How computers represent data
How computers represent dataHow computers represent data
How computers represent data
 
Digital electronics
Digital electronicsDigital electronics
Digital electronics
 
Introduction to computer_lec_02
Introduction to computer_lec_02Introduction to computer_lec_02
Introduction to computer_lec_02
 
Introduction to computer_lec_02_fall_2018
Introduction to computer_lec_02_fall_2018Introduction to computer_lec_02_fall_2018
Introduction to computer_lec_02_fall_2018
 
Mrs. Noland's Binary System ppt
Mrs. Noland's Binary System pptMrs. Noland's Binary System ppt
Mrs. Noland's Binary System ppt
 
Data Representation
Data RepresentationData Representation
Data Representation
 
Binary system ppt
Binary system pptBinary system ppt
Binary system ppt
 
Number Systems Basic Concepts
Number Systems Basic ConceptsNumber Systems Basic Concepts
Number Systems Basic Concepts
 
Lect 1
Lect 1Lect 1
Lect 1
 
Quantitative Analysis 2
Quantitative Analysis 2Quantitative Analysis 2
Quantitative Analysis 2
 
Computer Data Representation
Computer Data RepresentationComputer Data Representation
Computer Data Representation
 
Number system
Number system Number system
Number system
 
DLD Intro
DLD IntroDLD Intro
DLD Intro
 
Data representation
Data representationData representation
Data representation
 
Computer data representation (integers, floating-point numbers, text, images,...
Computer data representation (integers, floating-point numbers, text, images,...Computer data representation (integers, floating-point numbers, text, images,...
Computer data representation (integers, floating-point numbers, text, images,...
 
Meghna ppt.
Meghna ppt.Meghna ppt.
Meghna ppt.
 
Binary ,octa,hexa conversion
Binary ,octa,hexa conversionBinary ,octa,hexa conversion
Binary ,octa,hexa conversion
 
Number Systems Basic Concepts
Number Systems Basic ConceptsNumber Systems Basic Concepts
Number Systems Basic Concepts
 
Data representation in a computer
Data representation in a computerData representation in a computer
Data representation in a computer
 
Binary Slides
Binary Slides Binary Slides
Binary Slides
 

Similar to Digital Electronics

DSD NOTES.pptx
DSD NOTES.pptxDSD NOTES.pptx
DSD NOTES.pptx
AzhagesvaranTamilsel
 
Physics investigatory project for class 12 logic gates
Physics investigatory project for class 12 logic gatesPhysics investigatory project for class 12 logic gates
Physics investigatory project for class 12 logic gates
biswanath dehuri
 
Chapter 1 number and code system sss
Chapter 1 number and code system sssChapter 1 number and code system sss
Chapter 1 number and code system sss
Baia Salihin
 
DLD-Introduction.pptx
DLD-Introduction.pptxDLD-Introduction.pptx
DLD-Introduction.pptx
UzairAhmadWalana
 
Data repersentation.
Data repersentation.Data repersentation.
Data repersentation.
Ritesh Saini
 
DLD Chapter-1.pdf
DLD Chapter-1.pdfDLD Chapter-1.pdf
DLD Chapter-1.pdf
TamiratDejene1
 
Ee 202 chapter 1 number and code system
Ee 202 chapter 1 number and code system Ee 202 chapter 1 number and code system
Ee 202 chapter 1 number and code system CT Sabariah Salihin
 
Computer Oraganizaation.pptx
Computer Oraganizaation.pptxComputer Oraganizaation.pptx
Computer Oraganizaation.pptx
bmangesh
 
digital-electronics lecture Ch 1and 2 -1.pptx
digital-electronics lecture Ch 1and 2 -1.pptxdigital-electronics lecture Ch 1and 2 -1.pptx
digital-electronics lecture Ch 1and 2 -1.pptx
abelllll
 
Digital Logic
Digital LogicDigital Logic
Digital Logic
NabeelaNousheen
 
Number_systems_for _o_& _A_levels by fari
Number_systems_for _o_& _A_levels by fariNumber_systems_for _o_& _A_levels by fari
Number_systems_for _o_& _A_levels by fari
Fari84
 
1 Unit-1 DEC B.Tech ECE III Sem Syllabus & Intro.pptx
1 Unit-1 DEC B.Tech ECE III Sem Syllabus & Intro.pptx1 Unit-1 DEC B.Tech ECE III Sem Syllabus & Intro.pptx
1 Unit-1 DEC B.Tech ECE III Sem Syllabus & Intro.pptx
Satish Chandra
 
Digital Electronics & Fundamental of Microprocessor-I
Digital Electronics & Fundamental of Microprocessor-IDigital Electronics & Fundamental of Microprocessor-I
Digital Electronics & Fundamental of Microprocessor-I
pravinwj
 
Dee 2034 chapter 1 number and code system (Baia)
Dee 2034 chapter 1 number and code system (Baia)Dee 2034 chapter 1 number and code system (Baia)
Dee 2034 chapter 1 number and code system (Baia)
SITI SABARIAH SALIHIN
 
chap1.pdf
chap1.pdfchap1.pdf
chap1.pdf
ssuserf7cd2b
 
chap1.pdf
chap1.pdfchap1.pdf
chap1.pdf
ssuserf7cd2b
 
chap1.pdf
chap1.pdfchap1.pdf
chap1.pdf
kndnewguade
 
3110016_BE_GTU_Study_Material_e-Notes_6_02052020063009AM.pdf
3110016_BE_GTU_Study_Material_e-Notes_6_02052020063009AM.pdf3110016_BE_GTU_Study_Material_e-Notes_6_02052020063009AM.pdf
3110016_BE_GTU_Study_Material_e-Notes_6_02052020063009AM.pdf
VirangPatel15
 
Lecture 1.1.1 1.1.2 (10).pptx
Lecture 1.1.1  1.1.2 (10).pptxLecture 1.1.1  1.1.2 (10).pptx
Lecture 1.1.1 1.1.2 (10).pptx
pratick2
 
DLD_Lecture_notes2.ppt
DLD_Lecture_notes2.pptDLD_Lecture_notes2.ppt
DLD_Lecture_notes2.ppt
NazmulHasan854383
 

Similar to Digital Electronics (20)

DSD NOTES.pptx
DSD NOTES.pptxDSD NOTES.pptx
DSD NOTES.pptx
 
Physics investigatory project for class 12 logic gates
Physics investigatory project for class 12 logic gatesPhysics investigatory project for class 12 logic gates
Physics investigatory project for class 12 logic gates
 
Chapter 1 number and code system sss
Chapter 1 number and code system sssChapter 1 number and code system sss
Chapter 1 number and code system sss
 
DLD-Introduction.pptx
DLD-Introduction.pptxDLD-Introduction.pptx
DLD-Introduction.pptx
 
Data repersentation.
Data repersentation.Data repersentation.
Data repersentation.
 
DLD Chapter-1.pdf
DLD Chapter-1.pdfDLD Chapter-1.pdf
DLD Chapter-1.pdf
 
Ee 202 chapter 1 number and code system
Ee 202 chapter 1 number and code system Ee 202 chapter 1 number and code system
Ee 202 chapter 1 number and code system
 
Computer Oraganizaation.pptx
Computer Oraganizaation.pptxComputer Oraganizaation.pptx
Computer Oraganizaation.pptx
 
digital-electronics lecture Ch 1and 2 -1.pptx
digital-electronics lecture Ch 1and 2 -1.pptxdigital-electronics lecture Ch 1and 2 -1.pptx
digital-electronics lecture Ch 1and 2 -1.pptx
 
Digital Logic
Digital LogicDigital Logic
Digital Logic
 
Number_systems_for _o_& _A_levels by fari
Number_systems_for _o_& _A_levels by fariNumber_systems_for _o_& _A_levels by fari
Number_systems_for _o_& _A_levels by fari
 
1 Unit-1 DEC B.Tech ECE III Sem Syllabus & Intro.pptx
1 Unit-1 DEC B.Tech ECE III Sem Syllabus & Intro.pptx1 Unit-1 DEC B.Tech ECE III Sem Syllabus & Intro.pptx
1 Unit-1 DEC B.Tech ECE III Sem Syllabus & Intro.pptx
 
Digital Electronics & Fundamental of Microprocessor-I
Digital Electronics & Fundamental of Microprocessor-IDigital Electronics & Fundamental of Microprocessor-I
Digital Electronics & Fundamental of Microprocessor-I
 
Dee 2034 chapter 1 number and code system (Baia)
Dee 2034 chapter 1 number and code system (Baia)Dee 2034 chapter 1 number and code system (Baia)
Dee 2034 chapter 1 number and code system (Baia)
 
chap1.pdf
chap1.pdfchap1.pdf
chap1.pdf
 
chap1.pdf
chap1.pdfchap1.pdf
chap1.pdf
 
chap1.pdf
chap1.pdfchap1.pdf
chap1.pdf
 
3110016_BE_GTU_Study_Material_e-Notes_6_02052020063009AM.pdf
3110016_BE_GTU_Study_Material_e-Notes_6_02052020063009AM.pdf3110016_BE_GTU_Study_Material_e-Notes_6_02052020063009AM.pdf
3110016_BE_GTU_Study_Material_e-Notes_6_02052020063009AM.pdf
 
Lecture 1.1.1 1.1.2 (10).pptx
Lecture 1.1.1  1.1.2 (10).pptxLecture 1.1.1  1.1.2 (10).pptx
Lecture 1.1.1 1.1.2 (10).pptx
 
DLD_Lecture_notes2.ppt
DLD_Lecture_notes2.pptDLD_Lecture_notes2.ppt
DLD_Lecture_notes2.ppt
 

Recently uploaded

在线办理(ANU毕业证书)澳洲国立大学毕业证录取通知书一模一样
在线办理(ANU毕业证书)澳洲国立大学毕业证录取通知书一模一样在线办理(ANU毕业证书)澳洲国立大学毕业证录取通知书一模一样
在线办理(ANU毕业证书)澳洲国立大学毕业证录取通知书一模一样
obonagu
 
Investor-Presentation-Q1FY2024 investor presentation document.pptx
Investor-Presentation-Q1FY2024 investor presentation document.pptxInvestor-Presentation-Q1FY2024 investor presentation document.pptx
Investor-Presentation-Q1FY2024 investor presentation document.pptx
AmarGB2
 
Sachpazis:Terzaghi Bearing Capacity Estimation in simple terms with Calculati...
Sachpazis:Terzaghi Bearing Capacity Estimation in simple terms with Calculati...Sachpazis:Terzaghi Bearing Capacity Estimation in simple terms with Calculati...
Sachpazis:Terzaghi Bearing Capacity Estimation in simple terms with Calculati...
Dr.Costas Sachpazis
 
Nuclear Power Economics and Structuring 2024
Nuclear Power Economics and Structuring 2024Nuclear Power Economics and Structuring 2024
Nuclear Power Economics and Structuring 2024
Massimo Talia
 
Tutorial for 16S rRNA Gene Analysis with QIIME2.pdf
Tutorial for 16S rRNA Gene Analysis with QIIME2.pdfTutorial for 16S rRNA Gene Analysis with QIIME2.pdf
Tutorial for 16S rRNA Gene Analysis with QIIME2.pdf
aqil azizi
 
Design and Analysis of Algorithms-DP,Backtracking,Graphs,B&B
Design and Analysis of Algorithms-DP,Backtracking,Graphs,B&BDesign and Analysis of Algorithms-DP,Backtracking,Graphs,B&B
Design and Analysis of Algorithms-DP,Backtracking,Graphs,B&B
Sreedhar Chowdam
 
Forklift Classes Overview by Intella Parts
Forklift Classes Overview by Intella PartsForklift Classes Overview by Intella Parts
Forklift Classes Overview by Intella Parts
Intella Parts
 
一比一原版(UofT毕业证)多伦多大学毕业证成绩单如何办理
一比一原版(UofT毕业证)多伦多大学毕业证成绩单如何办理一比一原版(UofT毕业证)多伦多大学毕业证成绩单如何办理
一比一原版(UofT毕业证)多伦多大学毕业证成绩单如何办理
ydteq
 
weather web application report.pdf
weather web application report.pdfweather web application report.pdf
weather web application report.pdf
Pratik Pawar
 
HYDROPOWER - Hydroelectric power generation
HYDROPOWER - Hydroelectric power generationHYDROPOWER - Hydroelectric power generation
HYDROPOWER - Hydroelectric power generation
Robbie Edward Sayers
 
CW RADAR, FMCW RADAR, FMCW ALTIMETER, AND THEIR PARAMETERS
Further, the gate is made up of transistors, capacitors etc. These active and passive elements are the basic units of actual circuits. Thus a digital system can be broken down into subsystems, subsystems into modules, modules into functional blocks, functional blocks into logic units, logic units into circuits, and the circuits are made up of devices and components.
Chapter 2
Digital Number Systems and Codes

A digital system uses many number systems. The commonly used ones are the decimal, binary, octal and hexadecimal systems. These number systems are used in everyday life and they are called positional numbering systems, since each digit of a number has an associated weight. The value of the number is the weighted sum of all the digits.

1. Decimal Number System: This is the most commonly used number system and has 10 digits: 0, 1, 2, 3, 4, 5, 6, 7, 8 and 9. The radix of the decimal system is 10, so it is also called the base-10 system. The decimal number system is not convenient for implementing a digital system, because it is very difficult to design electronic devices with 10 different voltage levels.

2. Binary Number System: All digital systems use the binary number system as the basic number system of their operations. The radix of the binary system is 2, so it is also called the base-2 system. It has only 2 possible values, 0 and 1. A binary digit is called a bit. A binary number can have more than 1 bit. The left-most bit is called the most significant bit (MSB) and the right-most bit is called the least significant bit (LSB). It is easy to design simple, accurate electronic circuits that operate with only 2 voltage levels using the binary number system. Binary data is extremely robust in transmission because noise can be rejected easily. The binary number system is the lowest possible base system, so any higher order counting system can be easily encoded.

3. Octal and Hexadecimal Number System: The radix of the octal number system is 8 and the radix of hexadecimal is 16. The octal number system needs 8 digits, so it uses the digits 0-7 of the decimal system. Hexadecimal needs 16 digits, so it supplements the decimal digits 0-9 with the letters A-F. The octal and hexadecimal number systems are useful for the representation of multiple bits because their radices are powers of 2. A string of 3 bits can be uniquely represented by an octal digit. Similarly, a string of 4 bits can be represented by a hexadecimal digit. The hexadecimal number system is widely used in microprocessors and microcontrollers.

2.1 General Positional Number System
The value of a number in any radix is given by the formula

    D = Σ_{i=−n}^{p−1} d_i r^i        (2.1)

where r is the radix of the number, p is the number of digits to the left of the radix point, n is the number of digits to the right of the radix point and d_i is the ith digit of the number. The decimal value of the number is determined by converting each digit of the number to its radix-10 equivalent and expanding the formula using radix-10 arithmetic.

Example 1: (331)_8 = 3×8^2 + 3×8^1 + 1×8^0 = 192 + 24 + 1 = (217)_10
Example 2: (D9)_16 = 13×16^1 + 9×16^0 = 208 + 9 = (217)_10
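As an illustration of equation (2.1), the short Python sketch below evaluates the weighted sum for a digit string in a given radix. It is not part of the original text; the function name and its behaviour are assumptions made only for this example.

    # Evaluate D = sum of d_i * r^i for a digit string in radix r (illustrative sketch).
    DIGITS = "0123456789ABCDEF"

    def positional_value(number, radix):
        """Return the decimal value of a digit string such as '331', 'D9' or '101.01'."""
        integer_part, _, fraction_part = number.upper().partition(".")
        value = 0.0
        # Digits left of the radix point carry weights r^0, r^1, ... (taken right to left).
        for i, digit in enumerate(reversed(integer_part)):
            value += DIGITS.index(digit) * radix ** i
        # Digits right of the radix point carry weights r^-1, r^-2, ...
        for i, digit in enumerate(fraction_part, start=1):
            value += DIGITS.index(digit) * radix ** (-i)
        return value

    print(positional_value("331", 8))    # 217.0, as in Example 1
    print(positional_value("D9", 16))    # 217.0, as in Example 2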
2.2 Addition and Subtraction of Numbers
Addition and subtraction use the same technique as for ordinary decimal numbers; the only difference is the use of binary digits. To add two binary numbers, the least significant bits of the two numbers are added with an initial carry c_in = 0 to produce a sum (s) and a carry (c_out) as shown in Table 2.1. The addition then continues bit by bit towards the left until the most significant bit is reached; the carry in for each step is the carry out of the previous bits' addition.

    c_in  a  b  |  s  c_out
     0    0  0  |  0   0
     0    0  1  |  1   0
     0    1  0  |  1   0
     0    1  1  |  0   1
     1    0  0  |  1   0
     1    0  1  |  0   1
     1    1  0  |  0   1
     1    1  1  |  1   1

Table 2.1: Binary addition

Example: Addition of two numbers in the decimal and binary number systems.

             decimal      binary
    carry        100     01111000
    a            190     10111110
    b          + 141   + 10001101
    sum          331    101001011
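The bit-by-bit procedure above can be sketched in a few lines of Python. This is only an illustration; the function name and the fixed word width are assumptions, not something defined in the text.

    # Add two binary strings bit by bit, propagating the carry as in Table 2.1 (illustrative sketch).
    def add_binary(a, b, width=9):
        # width should be at least one bit wider than the operands so the final carry is kept.
        a, b = a.zfill(width), b.zfill(width)
        carry, result = 0, []
        for bit_a, bit_b in zip(reversed(a), reversed(b)):
            total = int(bit_a) + int(bit_b) + carry
            result.append(str(total % 2))   # sum bit
            carry = total // 2              # carry out becomes the next carry in
        return "".join(reversed(result)).lstrip("0") or "0"

    print(add_binary("10111110", "10001101"))   # 101001011, i.e. 190 + 141 = 331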
Binary subtraction is carried out similarly to binary addition, with carries replaced by borrows; it produces the difference (d) and the borrow out (b_out) as shown in Table 2.2.

    b_in  a  b  |  d  b_out
     0    0  0  |  0   0
     0    0  1  |  1   1
     0    1  0  |  1   0
     0    1  1  |  0   0
     1    0  0  |  1   1
     1    0  1  |  0   1
     1    1  0  |  0   0
     1    1  1  |  1   1

Table 2.2: Binary subtraction

Example: Subtraction of two numbers in the decimal and binary number systems.

               decimal      binary
    borrow         100     01111100
    a              229     11100101
    b            -  46   - 00101110
    difference     183     10110111

Subtraction is commonly used in computers to compare two numbers. For example, if the operation (a − b) produces a borrow out, then a is less than b; otherwise, a is greater than or equal to b.

2.3 Representation of Negative Numbers
Positive and negative numbers in the decimal system are indicated by the + and − signs respectively. This method of representation is called sign-magnitude representation. But those signs are not convenient for representing binary numbers. There are many ways to represent signed binary numbers.

2.3.1 Signed-Magnitude Representation
The signed-magnitude system uses an extra bit to represent the sign: a positive number is indicated by '0' and a negative number by '1'. The most significant bit is used as the sign bit and the lower bits contain the magnitude. The following are examples of 8-bit signed binary integers and their decimal equivalents.

Example 1: (01010101)_2 = +(85)_10 and (11010101)_2 = −(85)_10
Example 2: (01111111)_2 = +(127)_10 and (11111111)_2 = −(127)_10
Example 3: (00000000)_2 = +(0)_10 and (10000000)_2 = −(0)_10

Since zero can be represented in two possible ways, there are equal numbers of positive and negative integers in the signed-magnitude system. An n-bit signed-magnitude integer lies within the range −(2^(n−1) − 1) to +(2^(n−1) − 1).

The signed-magnitude representation requires a digital logic circuit to examine the signs of the addends. If the signs are the same, it must add the magnitudes; if the signs are different, it must subtract the smaller magnitude from the larger and give the result the sign of the larger integer. This increases the complexity of the logic circuit, so it is not convenient for hardware implementations.

2.3.2 Complement Number Systems
A complement number system negates a number by taking its complement as defined by the system. The advantage over the signed-magnitude representation is that two numbers in a complement number system can be added or subtracted without sign and magnitude checks. There are two complement number systems, viz. the radix complement and the diminished radix complement. A complement number system has a fixed number of digits, say n. If an operation produces more than n digits, the extra higher order bits are omitted.
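Referring back to the examples in Section 2.3.1, the 8-bit signed-magnitude encoding can be sketched in Python as below. This is purely illustrative; the function name, argument handling and range check are assumptions, not something given in the text.

    # Encode a signed integer as a sign bit followed by (n-1) magnitude bits (illustrative sketch).
    def to_sign_magnitude(value, n=8):
        assert abs(value) <= 2 ** (n - 1) - 1, "magnitude out of range"
        sign = "1" if value < 0 else "0"
        return sign + format(abs(value), "0{}b".format(n - 1))

    print(to_sign_magnitude(+85))    # 01010101, as in Example 1
    print(to_sign_magnitude(-85))    # 11010101
    print(to_sign_magnitude(-127))   # 11111111, as in Example 2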
2.3.3 Radix Complement Representation
The radix complement of an n-digit number is obtained by subtracting it from r^n, where r is the radix. In the decimal system, the radix complement is called the 10's complement. For example, the 10's complement of 127 = 10^3 − 127 = 873. If the number is in the range 1 to r^n − 1, the subtraction results in another number in the same range. By definition, the radix complement of 0 is r^n, which has n + 1 digits; the extra bit is omitted, which results in 0, and thus there is only one radix-complement representation of zero.

By definition, a subtraction operation is required to obtain the radix complement. To avoid subtraction, we can rewrite r^n as r^n = (r^n − 1) + 1, so that r^n − D = ((r^n − 1) − D) + 1, where D is the n-digit number. The number (r^n − 1) is the highest n-digit number. If each digit d is complemented to r − 1 − d, the radix complement is obtained by complementing the individual digits of D and adding 1.

Example: 10's complement of the 4-digit number 0127 = 9872 + 1 = 9873.

Two's Complement Representation
The radix complement for binary numbers is called the 2's complement. The most significant bit (MSB) represents the sign: if it is 0, the number is positive, and if it is 1, the number is negative. The remaining (n − 1) bits represent the magnitude, but not as a simple weighted number, because the weight of the MSB is −2^(n−1) instead of +2^(n−1).

Example 1: 15_10 = (01111)_2; 2's complement of (01111)_2 = (10000 + 1) = (10001)_2 = −15_10.
Example 2: −127_10 = (10000001)_2; 2's complement of (10000001)_2 = (01111110 + 1) = (01111111)_2 = +127_10.
Example 3: 0_10 = (0000)_2; 2's complement of (0000)_2 = (1111 + 1) = (0000)_2 with carry 1 = 0_10.

The carry-out bit is ignored in 2's complement operations, so the result is the lower n bits. Since zero's sign bit is 0, it is considered positive, and zero has only one representation in the 2's complement system. Hence, the range of representable numbers is from −2^(n−1) to +(2^(n−1) − 1). The two's complement system can be applied to fractions as well.

Example: 2's complement of +19.3125_10 = (010011.0101)_2 is 101100.1010 + 1 = (101100.1011)_2 = −19.3125_10.

Sign Extension
An n-bit two's complement number can be converted to an m-bit one. If m > n, pad a positive number with 0s and a negative number with 1s. If m < n, discard the (n − m) leftmost bits; the result is valid only if all the discarded bits are the same as the sign bit of the result.

2.3.4 Diminished Radix-Complement Representation
The diminished radix complement of an n-digit number is obtained by subtracting it from (r^n − 1). This is obtained by complementing the individual digits of the number. In the decimal system, it is called the 9's complement.

One's Complement Representation
The diminished radix complement for binary numbers is called the one's complement. The MSB represents the sign. If it is 0, the number is positive and the remaining (n − 1) bits directly indicate the magnitude. If the MSB is 1, the number is negative and the complement of the remaining (n − 1) bits gives the magnitude of the number. Positive numbers have the same representation in both one's and two's complement, but the negative number representations differ by 1. The weight of the MSB is −(2^(n−1) − 1) for computing decimal equivalents.

Example 1: One's complement of 4_10 = (0100)_2 is (1011)_2 = −4_10.
Example 2: One's complement of 0_10 = (0000)_2 is (1111)_2 = −0_10.

There are two representations of zero, positive and negative zero, so the range of representation is −(2^(n−1) − 1) to +(2^(n−1) − 1). The main advantage of the one's complement is that it is easy to complement, but the design of adders is more difficult than with two's complement. Since zero has two representations, zero-detecting circuits must check for both representations of zero.
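The two's-complement encode/decode rules of this section can be illustrated with a small Python sketch. It is not from the text; the function names, the range check and the use of Python's arbitrary-precision integers are assumptions made for the example.

    # n-bit two's-complement encode/decode (illustrative sketch).
    def to_twos_complement(value, n):
        assert -(2 ** (n - 1)) <= value <= 2 ** (n - 1) - 1, "out of range"
        return format(value & (2 ** n - 1), "0{}b".format(n))

    def from_twos_complement(bits):
        # The MSB carries weight -2^(n-1); all other bits carry their usual positive weights.
        n = len(bits)
        value = int(bits, 2)
        return value - 2 ** n if bits[0] == "1" else value

    print(to_twos_complement(-15, 5))         # 10001, as in Example 1
    print(from_twos_complement("10000001"))   # -127, as in Example 2
    print(to_twos_complement(-15, 8))         # 11110001 (sign extension pads with 1s)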
2.4 Excess Representations
In an excess representation, a bias B is added to the true value of the number to produce an excess number. Consider an m-bit string whose unsigned integer value is M (0 ≤ M < 2^m); in excess-B representation, that m-bit string represents the signed integer (M − B), where B is the bias of the number system. An excess-2^(m−1) system represents any number X in the range −2^(m−1) to +2^(m−1) − 1 by the m-bit binary representation of X + 2^(m−1), which is always non-negative and less than 2^m.

Example: Consider the 3-bit string (110)_2, whose unsigned value is 6_10. In excess-4 representation (the bias is 2^(3−1) = 4), this string represents the signed value (110 − 100)_2 = (010)_2 = +2_10.

The representation of a number in excess-2^(m−1) form and in two's complement is identical except for the sign bits, which are always opposite. Excess representations are most commonly used in floating-point number systems.

2.5 Two's-Complement Addition and Subtraction
2.5.1 Addition
Addition is performed by doing the simple binary addition of the two numbers.

Examples:

    decimal   binary          decimal   binary
      +3       0011             +4       0100
    + +4     + 0100           + -7     + 1001
      +7       0111             -3       1101

If the result of an addition operation exceeds the range of the number system, overflow occurs. Addition of two numbers with like signs can produce overflow, but addition of two numbers with different signs never produces overflow.

Examples:

    decimal   binary          decimal   binary
      -3       1101             +7       0111
    + -6     + 1010           + +7     + 0111
      -9      10111            +14       1110

2.5.2 Subtraction
Subtraction is accomplished by first taking the two's complement of the subtrahend and then adding it to the minuend.

    decimal   binary          decimal   binary
      +4       0100             +3       0011
    - +3     + 1101           - -4     + 0100
      +1      10001             +7       0111

Overflow in subtraction can be detected by examining the signs of the minuend and the complemented subtrahend, in the same way as for addition. The extra carry bit can be ignored, provided the range is not exceeded.
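A short Python sketch of 4-bit two's-complement addition with the sign-based overflow check described above follows; the function name and the use of bit strings are assumptions made for illustration only.

    # Add two n-bit two's-complement strings; detect overflow from the operand and result signs.
    def add_twos_complement(a_bits, b_bits):
        n = len(a_bits)
        raw = int(a_bits, 2) + int(b_bits, 2)
        result = format(raw & (2 ** n - 1), "0{}b".format(n))   # the extra carry-out bit is discarded
        # Overflow occurs only when both operands share a sign that the result does not.
        overflow = a_bits[0] == b_bits[0] and result[0] != a_bits[0]
        return result, overflow

    print(add_twos_complement("0011", "0100"))   # ('0111', False): (+3) + (+4) = +7
    print(add_twos_complement("1101", "1010"))   # ('0111', True):  (-3) + (-6) overflows in 4 bits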
2.6 One's Complement Addition and Subtraction
The addition operation is performed by standard binary addition; if there is a carry out of the MSB, then 1 is added to the result. This rule is called the end-around carry. Similarly, subtraction is accomplished by first taking the one's complement of the subtrahend and then proceeding as in addition. Since zero has two representations, the addition of a number and its one's complement produces negative zero.

Example:

    decimal   binary
      +6       0110
    + -3     + 1100
              10010
            +     1
      +3       0011
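The end-around-carry rule can be sketched as follows. This is illustrative only; the function name and the fixed word length are assumptions.

    # One's-complement addition with end-around carry (illustrative sketch).
    def add_ones_complement(a_bits, b_bits):
        n = len(a_bits)
        raw = int(a_bits, 2) + int(b_bits, 2)
        if raw >= 2 ** n:                      # a carry out of the MSB ...
            raw = (raw & (2 ** n - 1)) + 1     # ... is added back into the LSB
        return format(raw, "0{}b".format(n))

    print(add_ones_complement("0110", "1100"))   # 0011: (+6) + (-3) = +3, as in the example above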
2.7 Binary Codes
When information is sent over long distances, the channel through which it is sent may distort the transmitted information. It becomes necessary to modify the information into some form before sending it and then to convert it back at the receiving end to recover the original information. This process of altering the characteristics of information to make it suitable for the intended application is called coding. A set of n-bit strings in which different bit strings represent different numbers or other things is called a code. A particular combination of n bit values is called a code word. Normally an n-bit string can take 2^n values, but it is not necessary that a code contains 2^n valid code words.

2.7.1 Binary Coded Decimal Code
There are ten symbols in the decimal number system, 0 to 9, so 4 bits are required to represent them in binary form. This process of encoding the digits 0 through 9 by their unsigned binary representations is called the Binary Coded Decimal (BCD) code. The code words from 1010 through 1111 are not used; they may be used for error detection purposes. BCD is a weighted code because the decimal value of a code word is the algebraic sum of fixed weights attached to each bit. The weights of BCD are 8, 4, 2 and 1, so BCD is also called the 8421 code.

Example: (16.85)_10 = (00010110.10000101) in BCD

There are many possible sets of weights for writing a number in a BCD-style code to make the code suitable for a specific application. One such code is the 2421 code, in which the code word for the nine's complement of any digit may be obtained by complementing the individual bits of the code word. So the 2421 code is called a self-complementing code.

Example: The 2421 code of 4_10 is 0100. Its complement is 1011, which is the 2421 code for 5_10 = (9 − 4)_10.

Excess-3 code is a self-complementing code which is not weighted. It is obtained by adding 0011 to each 8421 code word. The code words follow a standard binary sequence, so this code can be used in standard binary counters.

Example: Excess-3 code of (0010)_2 = (0010 + 0011) = (0101)_2.

Addition of BCD digits is similar to adding 4-bit unsigned binary numbers, except that a correction of 0110 is to be made if the result exceeds 1001.

Example:

    decimal   binary
       8        1000
    +  8      + 1000
      16       10000
             +  0110
               10110

The main advantage of the BCD code is the relative ease of converting to and from decimal; only the four-bit codes for the decimal digits 0 through 9 need to be remembered. However, BCD requires more bits to represent a decimal number of more than one digit because it does not use all possible four-bit code words and is therefore somewhat inefficient.
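A one-digit BCD addition with the 0110 correction described above can be sketched as below. This is illustrative only; the function name and return convention are assumptions.

    # Add two BCD digits and apply the 0110 correction when the raw sum exceeds 1001 (illustrative sketch).
    def add_bcd_digits(a, b):
        total = a + b
        if total > 9:                # result exceeds 1001, so add the correction
            total += 0b0110
        carry, sum_bits = total >> 4, total & 0b1111
        return carry, format(sum_bits, "04b")

    print(add_bcd_digits(8, 8))      # (1, '0110'): BCD 0001 0110, i.e. decimal 16, as in the example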
2.7.2 Gray Code
There are many applications in which it is desirable to have a code in which adjacent code words differ in only one bit. For example, in electromechanical applications such as braking systems, it is sometimes necessary for an input sensor to produce a digital value that indicates a mechanical position. Such codes are called unit distance codes. The Gray code is one of the unit distance codes. The 3-bit Gray codes for the corresponding decimal numbers are 0_10 = 000, 1_10 = 001, 2_10 = 011, 3_10 = 010, 4_10 = 110, 5_10 = 111, 6_10 = 101 and 7_10 = 100.

The most popular use of Gray codes is in electromechanical applications such as the position-sensing transducer known as a shaft encoder. A shaft encoder consists of a disk in which concentric circles have alternate sectors with reflective surfaces while the other sectors have non-reflective surfaces. The position is sensed from the light of a light emitting diode reflected back from the disk. However, there is a choice in arranging the reflective and non-reflective sectors. A 3-bit binary coded disk is shown in Figure 2.1.

Figure 2.1: 3-bit binary coded shaft encoder

The straight binary code in Figure 2.1 can lead to errors because of mechanical imperfections. When the code is transiting from 001 to 010, a slight misalignment can cause a transient code of 011 to appear, so the electronic circuitry associated with the encoder will receive 001 -> 011 -> 010. If the disk is patterned to give a Gray code output, the possibility of wrong transient codes does not arise, because adjacent codes differ in only one bit. For example, the code adjacent to 001 is 011; even if there is a mechanical imperfection, the transient code will be either 001 or 011. The shaft encoder using a 3-bit Gray code is shown in Figure 2.2.

Figure 2.2: 3-bit Gray coded shaft encoder disk

There are two convenient methods to construct a Gray code with any desired number of bits. The first method is based on the fact that the Gray code is also a reflective code. The following rules may be used to construct it:

1. A one-bit Gray code has two code words, 0 and 1.
2. The first 2^n code words of an (n + 1)-bit Gray code equal the code words of an n-bit Gray code, written in order with a leading 0 appended.
3. The last 2^n code words of an (n + 1)-bit Gray code equal the code words of an n-bit Gray code, written in reverse order with a leading 1 appended.

However, this method requires the Gray codes of all bit lengths less than n to be generated as part of generating the n-bit Gray code. The second method allows us to derive an n-bit Gray code word directly from the corresponding n-bit binary code word:

1. The bits of an n-bit binary code or Gray code word are numbered from right to left, from 0 to n − 1.
2. Bit i of a Gray-code word is 0 if bits i and i + 1 of the corresponding binary code word are the same, else bit i is 1. When i + 1 = n, bit n of the binary code word is considered to be 0.
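The second construction rule amounts to XOR-ing each binary bit with the bit to its immediate left, which the following Python sketch illustrates (the function name is an assumption made for the example):

    # Convert a binary code word to the corresponding Gray code word (illustrative sketch).
    def binary_to_gray(bits):
        value = int(bits, 2)
        return format(value ^ (value >> 1), "0{}b".format(len(bits)))   # Gray bit i = b_i XOR b_(i+1)

    for n in range(8):
        print(n, binary_to_gray(format(n, "03b")))
    # Prints the 3-bit sequence 000, 001, 011, 010, 110, 111, 101, 100 listed above.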
2.7.3 Character Codes
Most of the information processed by a computer is non-numeric. The most common type of non-numeric data is text, i.e. strings of characters. Codes that include alphabetic characters are called character codes. Each character is represented by a bit string. The most commonly used character code is ASCII (American Standard Code for Information Interchange). Each character in ASCII is represented by a 7-bit string, yielding a total of 128 different characters. The code contains the uppercase and lowercase alphabets, numerals, punctuation and various non-printing control characters.

Example: The ASCII code for A = 1000001, a = 1100001, ? = 0111111, etc.

Chapter 3
Boolean Algebra

In 1854, George Boole introduced a systematic treatment of logic and developed for this purpose an algebraic system, now called Boolean algebra. Boolean algebra is a system of mathematical logic. It is an algebraic system consisting of the set of elements (0, 1), two binary operators called OR and AND, and one unary operator, NOT. It is the basic mathematical tool in the analysis and synthesis of switching circuits, and it is a way to express logic functions algebraically.

3.1 Rules in Boolean Algebra
The important rules used in Boolean algebra are as follows:

1. A variable can have only two values: binary 1 for HIGH and binary 0 for LOW.
2. The complement of a variable is represented by an overbar. Thus, the complement of variable B is represented as B̄; if B = 0 then B̄ = 1, and if B = 1 then B̄ = 0.
3. OR of variables is represented by a plus (+) sign between them. For example, OR of A, B and C is represented as (A + B + C).
4. Logical AND of two or more variables is represented by writing a dot between them, such as (A.B.C). Sometimes the dot may be omitted and the AND written as (ABC).

3.2 Postulates
The postulates of a mathematical system form the basic assumptions from which it is possible to deduce the rules, theorems and properties of the system. The most common postulates used to formulate various algebraic structures are as follows.

1. Associative law-1: A + (B + C) = (A + B) + C
This law states that ORing several variables gives the same result regardless of the grouping of the variables, i.e. for three variables, (A OR B) ORed with C is the same as A ORed with (B OR C).
Associative law-2: (AB)C = A(BC)
The associative law of multiplication states that it makes no difference in what order the variables are grouped when ANDing several variables. For three variables, (A AND B) ANDed with C is the same as A ANDed with (B AND C).
2. Commutative law-1: A + B = B + A
This states that the order in which the variables are ORed makes no difference in the output. Therefore (A OR B) is the same as (B OR A).
Commutative law-2: AB = BA
The commutative law of multiplication states that the order in which variables are ANDed makes no difference in the output. Therefore (A AND B) is the same as (B AND A).

3. Distributive law: A(B + C) = (AB + AC)
The distributive law states that ANDing a single variable with the OR of several variables is equivalent to ANDing that single variable with each of the several variables and then ORing the products.

4. Identity law-1: A + 0 = A
A variable ORed with 0 is always equal to the variable.
Identity law-2: A.1 = A
A variable ANDed with 1 is always equal to the variable.

5. Inversion law: (Ā)‾ = A
This law uses the NOT operation. The inversion law states that double inversion of a variable results in the original variable itself.

6. Closure
A set S is closed with respect to a binary operator if, for every pair of elements of S, the binary operator specifies a rule for obtaining a unique element of S. For example, the set of natural numbers N = (1, 2, 3, 4, ..., n) is closed with respect to the binary operator + by the rules of arithmetic addition, since for any a, b ∈ N there is a unique c ∈ N such that a + b = c. The set of natural numbers is not closed with respect to the binary operator − by the rules of arithmetic subtraction, because 2 − 3 = −1 and 2, 3 ∈ N, but (−1) ∉ N.

7. Consensus theorem-1: AB + ĀC + BC = AB + ĀC
The term BC is called the consensus term and is redundant. The consensus term is formed from a pair of terms in which a variable A and its complement Ā are present: it is obtained by multiplying the two terms and leaving out the selected variable and its complement. The consensus of AB + ĀC is BC.
Proof: AB + ĀC + BC = AB + ĀC + BC(A + Ā) = AB + ĀC + ABC + ĀBC = AB(1 + C) + ĀC(1 + B) = AB + ĀC
This theorem can be extended to any number of variables. The dual form is (A + B).(Ā + C).(B + C) = (A + B).(Ā + C).

8. Absorptive law
This law enables a reduction of a complicated expression to a simpler one by absorbing like terms.
A + (A.B) = A (OR absorption law)
A(A + B) = A (AND absorption law)

9. Idempotence law: an input that is ANDed or ORed with itself is equal to that input.
A + A = A: a variable ORed with itself is always equal to the variable.
A.A = A: a variable ANDed with itself is always equal to the variable.

10. Adjacency law: (A + B)(A + B̄) = A and A.B + A.B̄ = A

11. Duality: In a positive logic system, the more positive of the two voltage levels is represented as 1 and the more negative as 0, whereas in a negative logic system the more positive level is represented as 0 and the more negative as 1. The distinction between the positive and negative logic systems is that an OR gate in the positive logic system becomes an AND gate in the negative logic system, and vice versa. The duality theorem applies when we change from one logic system to the other. It states that '0' becomes '1', '1' becomes '0', 'AND' becomes 'OR' and 'OR' becomes 'AND'. The variables are not complemented in this process.

    Given expression            Dual
    0̄ = 1                       1̄ = 0
    0.1 = 0                     1 + 0 = 1
    0.0 = 0                     1 + 1 = 1
    1.1 = 1                     0 + 0 = 0
    A.0 = 0                     A + 1 = 1
    A.A = A                     A + A = A
    A.1 = A                     A + 0 = A
    A.Ā = 0                     A + Ā = 1
    A.B = B.A                   A + B = B + A
    A.(B + C) = AB + AC         A + BC = (A + B)(A + C)

12. DeMorgan's theorems: DeMorgan suggested two theorems that form an important part of Boolean algebra. In equation form, they are:

(a) (A.B)‾ = Ā + B̄
The complement of a product is equal to the sum of the complements. This can be verified with a truth table over all combinations of A and B.

(b) (A + B)‾ = Ā.B̄
The complement of a sum is equal to the product of the complements, which a truth table likewise confirms.
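Since each identity holds for every combination of inputs, it can be checked exhaustively. The Python sketch below is not part of the text; it simply verifies DeMorgan's theorems and the consensus theorem of Section 3.2 in this way.

    # Exhaustively verify De Morgan's theorems and the consensus theorem (illustrative sketch).
    from itertools import product

    for A, B in product((0, 1), repeat=2):
        assert (1 - (A & B)) == ((1 - A) | (1 - B))    # complement of a product = sum of complements
        assert (1 - (A | B)) == ((1 - A) & (1 - B))    # complement of a sum = product of complements

    for A, B, C in product((0, 1), repeat=3):
        lhs = (A & B) | ((1 - A) & C) | (B & C)
        rhs = (A & B) | ((1 - A) & C)
        assert lhs == rhs                              # the consensus term BC is redundant

    print("De Morgan's theorems and the consensus theorem hold for every input combination")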
13. Redundant literal rule
This law states that ORing a variable with the AND of that variable's complement and another variable is equal to ORing the two variables, i.e.
A + ĀB = A + B
Another theorem based on this law is A(Ā + B) = AB.

3.3 Theorems of Boolean Algebra
Theorem-1: x + x = x
Proof: x + x = (x + x).1 = (x + x)(x + x̄) = x + x.x̄ = x + 0 = x
By duality, x.x = x.

Theorem-2: x + 1 = 1
Proof: x + 1 = 1.(x + 1) = (x + x̄)(x + 1) = x + x̄.1 = x + x̄ = 1
By duality, x.0 = 0.

Theorem-3: x + xy = x
Proof: x + xy = x.1 + xy = x(1 + y) = x(y + 1) = x.1 = x
By duality, x(x + y) = x.
Chapter 4
Logic Gate Families

Digital IC gates are classified not only by their logic operation, but also by the specific logic circuit family to which they belong. Each logic family has its own basic electronic circuit upon which more complex digital circuits and functions are developed.

The first logic circuits were developed using discrete circuit components. Using advanced techniques, these complex circuits can be miniaturised and produced on a small piece of semiconductor material such as silicon. Such a circuit is called an integrated circuit (IC). Nowadays, all digital circuits are available in IC form. While producing digital ICs, different circuit configurations and manufacturing technologies are used, and this results in a specific logic family. Each logic family designed in this way has identical electrical characteristics such as supply voltage range, speed of operation, power dissipation, noise margin etc. A logic family is designed around basic electronic components such as resistors, diodes, transistors and MOSFETs, or combinations of any of these components. Accordingly, logic families are classified as per the construction of their basic logic circuits.

4.1 Classification of logic families
Logic families are classified according to the principal type of electronic components used in their circuitry, as shown in Figure 4.1. They are:
Bipolar ICs: which use diodes and bipolar junction transistors (BJTs).
Unipolar ICs: which use MOSFETs.

Figure 4.1: Logic family classification

Out of the logic families mentioned above, RTL and DTL are not very useful due to some inherent disadvantages, while TTL, ECL and CMOS are widely used in many digital circuit design applications.
4.2 Transistor-Transistor Logic (TTL)
TTL is the short form of Transistor-Transistor Logic. As the name suggests, it refers to digital integrated circuits that employ logic gates consisting primarily of bipolar transistors.

The most basic TTL circuit is an inverter. It is a single transistor with its emitter grounded and its collector tied to VCC through a pull-up resistor, and the output is taken from its collector. When the input is high (logic 1), the transistor is driven into saturation and the output voltage, i.e. the drop across the collector and emitter, is negligible; therefore the output is low (logic 0).

A two-input NAND gate can be realised as shown in Figure 4.2. When at least one of the inputs is at logic 0, the multiple emitter-base junctions of transistor TA are forward biased whereas the base-collector junction is reverse biased. Transistor TA is then ON, steering current away from the base of TB and thereby keeping TB OFF. Almost all of Vcc is dropped across the OFF transistor and the output voltage is high (logic 1). When both Input1 and Input2 are at Vcc, both base-emitter junctions are reverse biased and the base-collector junction of transistor TA is forward biased. Therefore transistor TB is ON, making the output logic 0, or near ground.

Figure 4.2: A 2-input TTL NAND gate

However, most TTL circuits use a totem-pole output circuit instead of the pull-up resistor, as shown in Figure 4.3. It has a Vcc-side transistor (TC) sitting on top of the GND-side output transistor (TD). The emitter of TC is connected to the collector of TD by a diode. The output is taken from the collector of transistor TD. TA is a multiple-emitter transistor having only one collector and base; each base-emitter junction behaves just like an independent diode.

Figure 4.3: A 2-input TTL NAND gate with a totem-pole output stage

Applying a logic '1' input voltage to both emitter inputs of TA reverse-biases both base-emitter junctions, causing current to flow through RA into the base of TB, which is driven into saturation. When TB starts conducting, the stored base charge of TC dissipates through the TB collector, driving TC into cut-off. At the same time, current flows into the base of TD, causing it to saturate; its collector-emitter voltage is then 0.2 V and the output is equal to 0.2 V, i.e. logic 0. In addition, since TC is in cut-off, no current flows from Vcc to the output, keeping it at logic '0'. Since TD is in saturation, its base voltage is 0.8 V; therefore the voltage at the collector of transistor TB is 0.8 V + VCE(sat) (the saturation voltage between collector and emitter of a transistor, equal to 0.2 V) = 1 V. TB always provides complementary inputs to the bases of TC and TD, so that TC and TD always operate in opposite regions, except during momentary transitions between regions. The output impedance is low regardless of the logic state because one transistor (either TC or TD) is always ON.

When at least one of the inputs is at 0 V, the multiple emitter-base junctions of transistor TA are forward biased whereas the base-collector junction is reverse biased; transistor TB remains off and therefore the output voltage is close to Vcc. Since the base voltage for transistor TC is at Vcc, this transistor is ON and the output follows it, while the input to transistor TD is 0 V, hence TD remains off.
TTL overcomes the main problem associated with DTL (Diode-Transistor Logic), i.e. lack of speed. The input to a TTL circuit is always through the emitter(s) of the input transistor, which exhibits a low input resistance. The base of the input transistor, on the other hand, is connected to Vcc, which causes the input transistor to pass a current of about 1.6 mA when the input voltage at the emitter(s) is logic '0'. Letting a TTL input 'float' (left unconnected) will usually make it go to logic '1'. However, such a state is vulnerable to stray signals, which is why it is good practice to connect unused TTL inputs to Vcc using 1 kΩ pull-up resistors.

4.3 CMOS Logic
The term 'Complementary Metal-Oxide-Semiconductor (CMOS)' refers to the device technology for fabricating integrated circuits using both n-channel and p-channel MOSFETs. Today, CMOS is the major technology for manufacturing digital ICs and is widely used in microprocessors, memories and other digital ICs. The input to a CMOS circuit is always at the gate of the input MOS transistor. The gate offers a very high resistance because it is isolated from the channel by an oxide layer. The current flowing through a CMOS input is virtually zero and the device is operated mainly by the voltage applied to the gate, which controls the conductivity of the device channel. The low input currents required by a CMOS circuit result in lower power consumption, which is the major advantage of CMOS over TTL. In fact, power consumption in a CMOS circuit occurs only when it is switching between logic levels. Moreover, CMOS circuits are easy and cheap to fabricate, resulting in a higher packing density than their bipolar counterparts. CMOS circuits are, however, quite vulnerable to ESD (Electro-Static Discharge) damage, mainly through gate oxide punch-through from high ESD voltages. Therefore, proper handling of CMOS ICs is required to prevent ESD damage, and generally these devices are equipped with protection circuits.

Figure 4.4 shows the circuit symbols of an n-channel and a p-channel transistor. An nMOS transistor is 'ON' if the gate voltage is at logic '1', or more precisely if the potential between the gate and source terminals (VGS) is greater than the threshold voltage VT. An 'ON' transistor implies the existence of a continuous channel between the source and drain terminals; an 'OFF' nMOS transistor indicates the absence of a connecting channel between the source and the drain. Similarly, a pMOS transistor is 'ON' if the potential VGS is lower (or more negative) than the threshold voltage VT.

Figure 4.4: Symbols of n-channel and p-channel transistors
Figure 4.5 shows a CMOS inverter circuit (NOT gate), in which a p-channel and an n-channel MOS transistor are connected in series. A logic '1' input voltage makes TP (p-channel) turn off and TN (n-channel) turn on; hence the output is pulled to logic '0'. A logic '0' input voltage, on the other hand, makes TP turn on and TN turn off, pulling the output to near VCC, i.e. logic '1'. The p-channel and n-channel MOS transistors in the circuit are complementary and they are always in opposite states, i.e. when one of them is ON the other is OFF.

Figure 4.5: CMOS inverter circuit

Circuit (a) in Figure 4.6 shows a 2-input CMOS NAND gate, where TN1 and TP1 share the input A and TN2 and TP2 share the input B. When both inputs A and B are high, TN1 and TN2 are ON and TP1 and TP2 are OFF; therefore the output is at logic 0. On the other hand, when both input A and input B are logic 0, TN1 and TN2 are OFF and TP1 and TP2 are ON; hence the output is at VCC, logic 1. A similar situation arises when any one of the inputs is logic 0: one of the series transistors at the bottom, either TN1 or TN2, is OFF, forcing the output to logic 1.

Circuit (b) in Figure 4.6 shows a 2-input CMOS NOR gate, where TN1 and TP1 share the input A and TN2 and TP2 share the input B. When both A and B are logic 0, TN1 and TN2 are OFF and TP1 and TP2 are ON; therefore the output is at logic 1. On the other hand, when at least one of the inputs A or B is logic 1, the corresponding nMOS transistor is ON, making the output logic 0. The output is disconnected from VCC in that case because at least one of TP1 or TP2 is OFF. Thus the CMOS circuit acts as a NOR gate.

Figure 4.6: CMOS NAND and NOR circuits
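The pull-up/pull-down behaviour just described can be modelled at switch level with a few lines of Python. This sketch is purely illustrative (the function names and 0/1 convention are assumptions); it only records which network conducts for each input combination.

    # Switch-level model of the 2-input CMOS NAND and NOR gates described above (illustrative sketch).
    def cmos_nand(a, b):
        pull_down = (a == 1) and (b == 1)   # both series nMOS ON pulls the output to ground
        return 0 if pull_down else 1        # otherwise at least one parallel pMOS pulls it to VCC

    def cmos_nor(a, b):
        pull_up = (a == 0) and (b == 0)     # both series pMOS ON pulls the output to VCC
        return 1 if pull_up else 0          # otherwise at least one parallel nMOS pulls it to ground

    for a in (0, 1):
        for b in (0, 1):
            print(a, b, cmos_nand(a, b), cmos_nor(a, b))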
If one uses a CMOS IC for reduced current consumption and a TTL IC feeds the CMOS chip, then one needs either to provide a voltage translation or to use one of the mixed CMOS/TTL devices. The mixed TTL/CMOS devices are CMOS devices which happen to have TTL input trigger levels, but they are CMOS ICs. Let us consider the TTL to CMOS interface briefly. A TTL device needs a supply voltage of 5 V, while CMOS devices can use any supply voltage from 3 V to 10 V. One approach to TTL/CMOS interfacing is to use a 5 V supply for both the TTL driver and the CMOS load. In this case, the worst-case TTL output voltages are almost compatible with the worst-case CMOS input voltages. There is no problem with the TTL low-state window (0 to 0.35 V) because it fits inside the CMOS low-state window (0 to 1.3 V). This means the CMOS load always interprets the TTL low-state drive as low.

4.4 Emitter Coupled Logic (ECL)
Emitter coupled logic (ECL) gates use differential amplifier configurations at the input stage. ECL is a non-saturated logic, which means that the transistors are prevented from going into deep saturation, thus eliminating storage delays. In other words, a transistor is switched ON, but not completely ON. This is the fastest logic family. ECL circuits operate from a negative 5.2 V supply; the voltage level for high is −0.9 V and for low is −1.7 V. Thus the biggest problem with ECL is a poor noise margin.

4.5 Characteristics of logic families
The parameters which are used to characterise different logic families are as follows:

1. HIGH-level input current (IIH): This is the current flowing into (taken as positive) or out of (taken as negative) an input when a HIGH-level input voltage equal to the minimum HIGH-level output voltage specified for the family is applied. In the case of bipolar logic families such as TTL, the circuit design is such that this current flows into the input pin and is therefore specified as positive. In the case of CMOS logic families, it could be either positive or negative, and only an absolute value is specified in this case.

2. LOW-level input current (IIL): The LOW-level input current is the maximum current flowing into (taken as positive) or out of (taken as negative) the input of a logic function when the voltage applied at the input equals the maximum LOW-level output voltage specified for the family. In the case of bipolar logic families such as TTL, the circuit design is such that this current flows out of the input pin and is therefore specified as negative. In the case of CMOS logic families, it could be either positive or negative; in this case, only an absolute value is specified.
3. HIGH-level output current (IOH): This is the maximum current flowing out of an output when the input conditions are such that the output is in the logic HIGH state. It is normally shown as a negative number. It describes the current sourcing capability of the output. The magnitude of IOH determines the number of inputs the logic function can drive when its output is in the logic HIGH state.

4. LOW-level output current (IOL): This is the maximum current flowing into the output pin of a logic function when the input conditions are such that the output is in the logic LOW state. It describes the current sinking capability of the output. The magnitude of IOL determines the number of inputs the logic function can drive when its output is in the logic LOW state.

5. HIGH-level input voltage (VIH): This is the minimum voltage level that needs to be applied at the input to be recognised as a legal HIGH level for the specified family. For the standard TTL family, a 2 V input voltage is a legal HIGH logic state.

6. LOW-level input voltage (VIL): This is the maximum voltage level applied at the input that is recognised as a legal LOW level for the specified family. For the standard TTL family, an input voltage of 0.8 V is a legal LOW logic state.

7. HIGH-level output voltage (VOH): This is the minimum voltage on the output pin of a logic function when the input conditions establish logic HIGH at the output for the specified family. In the case of the standard TTL family of devices, the HIGH-level output voltage can be as low as 2.4 V and still be treated as a legal HIGH logic state. It may be mentioned here that, for a given logic family, the VOH specification is always greater than the VIH specification to ensure output-to-input compatibility when the output of one device feeds the input of another.

8. LOW-level output voltage (VOL): This is the maximum voltage on the output pin of a logic function when the input conditions establish logic LOW at the output for the specified family. In the case of the standard TTL family of devices, the LOW-level output voltage can be as high as 0.4 V and still be treated as a legal LOW logic state. It may be mentioned here that, for a given logic family, the VOL specification is always smaller than the VIL specification to ensure output-to-input compatibility when the output of one device feeds the input of another.

9. Fan-out: This specifies the number of standard loads that the output of a gate can drive without affecting its normal operation. A standard load is usually defined as the amount of current needed by an input of another gate in the same family. Sometimes the term loading is used instead of fan-out. This term is derived from the fact that the output of a gate can supply only a limited amount of current, above which it ceases to operate properly and is said to be overloaded. The output of a gate is usually connected to the inputs of similar gates; each input draws a certain amount of current from the driving output, so each additional connection adds to the load on the gate. Loading rules are listed for each family of standard digital circuits and specify the maximum amount of loading allowed for each output in the circuit.

10. Fan-in: This is the number of inputs of a logic gate. It is decided by the input current sinking capability of a logic gate.
11. Power dissipation: This is the power required to operate the gate, drawn from the power supply. It is expressed in milliwatts (mW) and represents the actual power dissipated in the gate, i.e. the power delivered to the gate by the power supply. The total power dissipated in a digital system is the sum of the power dissipated in each digital IC.

12. Propagation delay (tP): The propagation delay is the time delay between the occurrence of a change in the logic level at the input and the moment it is reflected at the output. It is the time delay between specified voltage points on the input and output waveforms. Propagation delays are separately defined for LOW-to-HIGH and HIGH-to-LOW transitions at the output. In addition, we also define enable and disable time delays that occur during transitions between the high-impedance state and the defined logic LOW or HIGH states. Propagation delay is expressed in nanoseconds. Signals travelling from the input to the output of a system pass through a number of gates, and the propagation delay of the system is the sum of the propagation delays of all these gates. Since the speed of operation is important, each gate must have a small propagation delay and the digital circuit must have the minimum number of gates between input and output.

Figure 4.7: Propagation delay

Propagation delay parameters:
(a) Propagation delay (tpLH): the time delay between the specified voltage points on the input and output waveforms with the output changing from LOW to HIGH.
(b) Propagation delay (tpHL): the time delay between the specified voltage points on the input and output waveforms with the output changing from HIGH to LOW.

13. Noise margin: This is the maximum noise voltage added to the input signal of a digital circuit that does not cause an undesirable change in the circuit output. There are two types of noise to be considered here.
• DC noise: this is caused by a drift in the voltage levels of a signal.
• AC noise: this is caused by random pulses that may be created by other switching signals.
Figure 4.8: Noise margin

Noise margin is a quantitative measure of the noise immunity offered by a logic family. When the output of a logic device feeds the input of another device of the same family, a legal HIGH logic state at the output of the feeding device should be treated as a legal HIGH logic state by the input of the device being fed. Similarly, a legal LOW logic state of the feeding device should be treated as a legal LOW logic state by the device being fed. The legal HIGH and LOW voltage levels for a given logic family are different for outputs and inputs. Figure 4.8 shows the generalised case of legal HIGH and LOW voltage levels for outputs and inputs. As we can see from the two diagrams, there is a disallowed range of output voltage levels from VOL(max) to VOH(min) and an indeterminate range of input voltage levels from VIL(max) to VIH(min). Since VIL(max) is greater than VOL(max), the LOW output state can tolerate a positive voltage spike equal to VIL(max) − VOL(max) and still be a legal LOW input. Similarly, VOH(min) is greater than VIH(min), and the HIGH output state can tolerate a negative voltage spike equal to VOH(min) − VIH(min) and still be a legal HIGH input. Here, VIL(max) − VOL(max) and VOH(min) − VIH(min) are respectively known as the LOW-level and HIGH-level noise margins. A worked example of these two margins is given after this list.

14. Cost: The last consideration, and often the most important one, is the cost of a given logic family. A first approximate cost comparison can be obtained by pricing a common function such as a dual four-input or quad two-input gate. But the cost of a few types of gates alone cannot indicate the cost of the total system. The total system cost is decided not only by the cost of the ICs but also by the cost of the printed wiring board on which the ICs are mounted, assembly of the circuit board, testing, programming of the programmable devices, power supply, documentation, procurement, storage etc. In many instances, the cost of the ICs can become a less important component of the total cost of the system.
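As a worked illustration, using the standard-TTL levels quoted in items 5 to 8 above: the HIGH-level noise margin is VOH(min) − VIH(min) = 2.4 V − 2.0 V = 0.4 V, and the LOW-level noise margin is VIL(max) − VOL(max) = 0.8 V − 0.4 V = 0.4 V. A noise spike of up to about 0.4 V on a signal line therefore still leaves a legal logic level at the receiving input.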
4.6 Three State Outputs
Standard logic gate outputs have only two states, high and low. The output is effectively connected either to +V or to ground, which means there is always a low-impedance path to the supply rails. Certain applications require a logic output that we can "turn off" or disable, so that the output is disconnected (a high-impedance state). This is the three-state output, and it can be implemented as a stand-alone unit (a buffer) or as part of another function's output. Such a circuit is called tri-state because it has three output states: high (1), low (0) and high impedance (Z).

Figure 4.9: Three-state output logic circuit

In the logic circuit of Figure 4.9, there is an additional switching input to the digital buffer, called the enable input and denoted by E. When E is low, the output is disconnected from the input circuit. When E is high, the switch is closed and the circuit behaves like a digital buffer. All these states are listed in the truth table of Figure 4.10, which also depicts the symbol of a tri-state buffer.

Figure 4.10: Tri-state buffer

4.7 Open Collector (Drain) Outputs
An open collector/open drain is a common type of output found on many integrated circuits. Instead of outputting a signal of a specific voltage or current, the output signal is applied to the base of an internal NPN transistor whose collector is externalised (open) on a pin of the IC. The emitter of the transistor is connected internally to the ground pin. If the output device is a MOSFET, the output is called open drain and it functions in a similar way. A simple schematic of an open-collector output of an IC is shown in Figure 4.11.

Figure 4.11: Open collector output circuit of an IC

Open-drain devices sink current in their low-voltage active (logic 0) state, or are high impedance (no current flows) in their high-voltage non-active (logic 1) state. These devices usually operate with an external pull-up resistor that holds the signal line high until a device on the wire sinks enough current to pull the line low. Many devices can be attached to the signal wire. If all devices attached to the wire are in their non-active state, the pull-up will hold the wire at a high voltage. If one or more devices are in the active state, the signal wire voltage will be low.
An open-collector/open-drain signal wire can also be bi-directional. Bi-directional means that a device can both output and input a signal on the wire at the same time. In addition to controlling the state of its pin that is connected to the signal wire (active or non-active), a device can also sense the voltage level of the signal wire. Although the output of an open-collector/open-drain device may be in the non-active (high) state, the wire attached to the device may be in the active (low) state, due to the activity of another device attached to the wire.

The open-drain output configuration turns off all pull-ups and only drives the pull-down transistor of the port pin when the port latch contains logic '0'. To be used as a logic output, a port configured in this manner must have an external pull-up, typically a resistor tied to VDD. The pull-down for this mode is the same as for the quasi-bidirectional mode. To use an analogue input pin as a high-impedance digital input while a comparator is enabled, that pin should be configured in open-drain mode and the corresponding port register bit should be set to 1.
Chapter 5
Logic Functions

In electronics, a logic gate is an idealised or physical device implementing a Boolean function, i.e. it performs a logical operation on one or more binary inputs and produces a single binary output. A logic gate is an elementary building block of a digital circuit. Most logic gates have two inputs and one output. At any given moment, every terminal is in one of the two binary conditions, low (0) or high (1), represented by different voltage levels. The logic state of a terminal can, and generally does, change often as the circuit processes data. In most logic gates, the low state is approximately zero volts (0 V), while the high state is approximately five volts positive (+5 V). There are seven basic logic gates: AND, OR, XOR, NOT, NAND, NOR and XNOR.

AND gate: The AND gate is so named because, if 0 is called "false" and 1 is called "true", the gate acts in the same way as the logical "and" operator. Figure 5.1 shows the circuit symbol and logic combinations for an AND gate. In the symbol, the input terminals are at the left and the output terminal is at the right. The output is "true" when both inputs are "true"; otherwise, the output is "false".

Figure 5.1: AND gate symbol and truth table

OR gate: The OR gate gets its name from the fact that it behaves like the logical inclusive "or". The output is "true" if either or both of the inputs are "true". If both inputs are "false", then the output is "false".

Figure 5.2: OR gate symbol and truth table
  • 26. 26 CHAPTER 5. LOGIC FUNCTIONS Figure 5.2: OR gate symbol and truth table XOR gate: The XOR(exclusive-OR) gate acts in the same way as the logical "either/or." The output is "true" if either, but not both, of the inputs are "true." The output is "false" if both inputs are "false" or if both inputs are "true". Another way of looking at this circuit is to observe that the output is 1 if the inputs are different, but 0 if the inputs are the same. Figure 5.3: XOR gate symbol and truth table NOT gate: A logical inverter, sometimes called a NOT gate to differentiate it from other types of electronic inverter devices, has only one input. It reverses the logic state. Figure 5.4: NOT gate symbol and truth table NAND gate: The NAND gate operates as an AND gate followed by a NOT gate. It acts in the manner of the logical operation "and" followed by negation. The output is "false" if both inputs are "true". Otherwise, the output is "true".
  • 27. 27 Figure 5.5: NAND gate symbol and truth table NOR gate: The NOR gate is a combination of OR gate followed by an inverter. Its output is "true" if both inputs are "false". Otherwise, the output is "false". Figure 5.6: NOR gate symbol and truth table XNOR gate: The XNOR(exclusive-NOR) gate is a combination of XOR gate followed by an in- verter. Its output is "true" if the inputs are the same, and “false" if the inputs are different. Figure 5.7: XNOR gate symbol and truth table Using combinations of logic gates, complex operations can be performed. In theory, there is no limit to the number of gates that can be arrayed together in a single device. But in practice, there is a limit to the number of gates that can be packed into a given physical space. Arrays of logic gates are found in digital integrated circuits(ICs). As IC technology advances, the required physical volume for each individual logic gate decreases and digital devices of the same or smaller size become capable of performing ever-more-complicated operations at ever-increasing speeds.
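As a quick illustration of the gate definitions above, the sketch below models each two-input gate as a tiny Python function and prints its truth table; the names and layout are chosen only for this example.

```python
from itertools import product

# Two-input gate models; inputs and outputs are 0 or 1.
AND  = lambda a, b: a & b
OR   = lambda a, b: a | b
XOR  = lambda a, b: a ^ b
NAND = lambda a, b: 1 - (a & b)
NOR  = lambda a, b: 1 - (a | b)
XNOR = lambda a, b: 1 - (a ^ b)
NOT  = lambda a: 1 - a          # single-input gate

gates = {"AND": AND, "OR": OR, "XOR": XOR,
         "NAND": NAND, "NOR": NOR, "XNOR": XNOR}

header = " ".join(f"{name:>4}" for name in gates)
print(f"A B  {header}")
for a, b in product((0, 1), repeat=2):
    row = " ".join(f"{g(a, b):>4}" for g in gates.values())
    print(f"{a} {b}  {row}")
print("NOT:", [(a, NOT(a)) for a in (0, 1)])
```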
  • 28. 28 CHAPTER 5. LOGIC FUNCTIONS Another type of mathematical identity, called a property or a law, describes how differing vari- ables relate to each other in a system of numbers. One of these properties is known as the commu- tative property, and it applies equally to addition and multiplication. In essence, the commutative property tells us we can reverse the order of variables that are either added together or multiplied together without changing the truth of the expression. Figure 5.8: Commutative property of addition Figure 5.9: Commutative property of multiplication Along with the commutative property of addition and multiplication, we have the associative property, again applying equally well to addition and multiplication. This property tells us we can associate groups of added or multiplied variables together with parentheses without altering the truth of the equations. Figure 5.10: Associative property of addition
  • 29. 5.1. TRUTH TABLE DESCRIPTION OF LOGIC FUNCTIONS 29 Figure 5.11: Associative property of multiplication Lastly, we have the distributive property, illustrating how to expand a Boolean expression formed by the product of a sum, and in reverse showing us how terms may be factored out of Boolean sums-of-products. Figure 5.12: Distributive property 5.1 Truth Table Description of Logic Functions The truth table is a tabular representation of a logic function. It gives the value of the function for all possible combinations of values of the variables. If there are three variables in a given function, there are 2^3 = 8 combinations of these variables. For each combination, the function takes either 1 or 0. These combinations are listed in a table, which constitutes the truth table for the given function. Consider the expression, F(A,B) = A.B + A. ¯B (5.1) The truth table for this function is given by Figure 5.13: Truth table for the expression 5.1
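A truth table like the one above can be produced mechanically by evaluating the expression for every input combination. The short Python sketch below does this for expression 5.1; as a by-product it shows that F(A,B) = A.B + A. ¯B takes the same value as A in every row.

```python
from itertools import product

def F(a, b):
    # F(A, B) = A.B + A.B'   (expression 5.1)
    return (a & b) | (a & (1 - b))

print("A B | F")
for a, b in product((0, 1), repeat=2):
    print(f"{a} {b} | {F(a, b)}")

# Every row of the table agrees with the variable A alone.
assert all(F(a, b) == a for a, b in product((0, 1), repeat=2))
```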
  • 30. 30 CHAPTER 5. LOGIC FUNCTIONS The information contained in the truth table and in the algebraic representation of the function is the same. The term truth table came into usage long before Boolean algebra came to be associated with digital electronics. Boolean functions were originally used to establish the truth or falsehood of statements. When a statement is true, the symbol "1" is associated with it, and when it is false, "0" is associated with it. This usage got extended to the variables associated with digital circuits. However, this usage of the adjectives "true" and "false" is not appropriate when associated with variables encountered in digital systems. All variables in digital systems are indicative of actions. Typical examples of such signals are "CLEAR", "LOAD", "SHIFT", "ENABLE" and "COUNT". These are suggestive of actions. Therefore, it is more appropriate to state that a variable is ASSERTED or NOT ASSERTED than to say that a variable is TRUE or FALSE. When a variable is asserted, the intended action takes place, and when it is not asserted the intended action does not take place. In this context, we associate "1" with the assertion of a variable and "0" with the non-assertion of that variable. Consider the logic function, F(A,B) = A.B + A. ¯B (5.2) It should now be read as: F is asserted when A and B are asserted, or when A is asserted and B is not asserted. This convention of using assertion and non-assertion with the logic variables will be used in all the learning units of digital systems. The term truth table will continue to be used for historical reasons, but we understand it as an input-output table associated with a logic function, not as something concerned with the establishment of truth. As the number of variables in a given function increases, the number of entries in the truth table increases exponentially. For example, a five-variable expression would require 32 entries and a six-variable function would require 64 entries. Therefore, it becomes inconvenient to prepare the truth table if the number of variables increases beyond four. However, a simple artefact may be adopted: a truth table can have entries only for those terms for which the value of the function is "1", without loss of any information. This is particularly effective when the function has only a small number of terms. Consider the Boolean function with six variables. F = A.B.C. ¯D. ¯E + A. ¯B.C. ¯D.E + ¯A. ¯B.C.D.E + A. ¯B. ¯C.D.E (5.3) The truth table will have only four entries rather than 64, and the representation of this function is as shown in the Figure 5.14. Figure 5.14: Truth table for the expression 5.3 The truth table is a very effective tool in working with digital circuits, especially when the number of variables in a function is small, less than or equal to five.
  • 31. 5.2. CONVERSION OF ENGLISH SENTENCES TO LOGIC FUNCTIONS: 31 5.2 Conversion of English Sentences to Logic Functions: Some of the problems that can be solved using digital circuits are expressed through one or more sentences. For example, consider the sentence "Lift starts moving only if doors are closed and button is pressed". Break the sentence into phrases: F = lift starts moving(1) or else(0) A = doors are closed(1) or else(0) B = floor button is pressed(1) or else(0) So F = A.B In complex cases, we need to define the variables properly and draw the truth table before the logic function can be written as an expression. There may also be some vagueness that should be resolved. For example, "'A' will go for a workshop if and only if 'B' is going and the subject of the workshop is important from a career perspective, or if the venue is interesting and there is no academic commitment". Taking A = "B is going", B = "the subject is important", C = "the venue is interesting" and D = "there is an academic commitment", the function is F = A.B +C. ¯D (5.4) 5.3 Boolean Function Representation The use of switching devices like transistors gives rise to a special case of Boolean algebra called switching algebra. In switching algebra, all the variables assume one of the two values 0 and 1. In Boolean algebra, 0 is used to represent the 'open' state or 'false' state of a logic gate. Similarly, 1 is used to represent the 'closed' state or 'true' state of a logic gate. A Boolean expression is an expression which consists of variables, constants (0-false and 1-true) and logical operators, and which evaluates to true or false. A Boolean function is an algebraic form of a Boolean expression. A Boolean function of n variables is represented by f (X1,X2,X3,..., Xn). By using Boolean laws and theorems, we can simplify the Boolean functions of digital circuits. A brief note of the different ways of representing a Boolean function is as follows: • Sum-of-Products(SOP) form • Product-of-Sums(POS) form • Canonical and standard forms 5.3.1 Some Definitions of Logic Functions 1. Literal: A variable or its complement, i.e. A or ¯A. 2. Product term: A term with the product(Boolean multiplication) of literals. 3. Sum term: A term with the sum(Boolean addition) of literals. 4. Min-term: A product term in which every variable of the function appears exactly once, in true or complemented form. 5. Max-term: A sum term in which every variable of the function appears exactly once, in true or complemented form. The min-terms and max-terms may be used to define the two standard forms for logic expressions, namely the sum of products (SOP) or sum of min-terms, and the product of sums (POS) or product of max-terms. These standard forms of expression aid the logic circuit designer by simplifying the derivation of the function to be implemented. Boolean functions expressed as a sum of products or a product of sums are said to be in canonical form. Note that the POS form is not the complement of the SOP expression.
  • 32. 32 CHAPTER 5. LOGIC FUNCTIONS 5.3.2 Sum of Products (SOP) Form The sum-of-products form is a method (or form) of writing the Boolean expressions of logic circuits. In this SOP form of Boolean function representation, the variables are combined by AND (product) to form product terms, and all these product terms are ORed (summed or added) together to get the final function. A sum-of-products form can be formed by adding (or summing) two or more product terms using the Boolean addition operation. Here the product terms are defined by using the AND operation and the sum is defined by using the OR operation. The sum-of-products form is also called the Disjunctive Normal Form, as the product terms are ORed together and the disjunction operation is logical OR. The sum-of-products form is also called Standard SOP. 5.3.3 Product of Sums (POS) Form The product-of-sums form is another method (or form) of writing the Boolean expressions of logic circuits. In this POS form, the variables are ORed, i.e. written as sums, to form sum terms. All these sum terms are ANDed (multiplied) together to get the product-of-sums form. This form is exactly opposite to the SOP form, so it can also be regarded as the dual of the SOP form. Here the sum terms are defined by using the OR operation and the product is defined by using the AND operation. When two or more sum terms are multiplied using the Boolean AND operation, the resultant output expression will be in the product-of-sums or POS form. The product-of-sums form is also called the Conjunctive Normal Form, as the sum terms are ANDed together and the conjunction operation is logical AND. The product-of-sums form is also called Standard POS. Example: Consider the truth table for the half adder shown in the figure 5.15. Figure 5.15: Truth table of half adder The sum-of-products form is obtained as S = ( ¯A.B)+(A. ¯B) C = (A.B) The product-of-sums form is obtained as S = (A +B).( ¯A + ¯B) C = (A +B).(A + ¯B).( ¯A +B) Figure 5.16: Circuit diagram for the example in Figure 5.15
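The SOP and POS expressions above describe the same half-adder outputs, which can be confirmed by exhaustive evaluation. The following Python sketch checks both forms of S and C against the behaviour in Figure 5.15.

```python
from itertools import product

for a, b in product((0, 1), repeat=2):
    na, nb = 1 - a, 1 - b                       # complements A', B'
    s_sop = (na & b) | (a & nb)                 # S = A'.B + A.B'
    s_pos = (a | b) & (na | nb)                 # S = (A+B).(A'+B')
    c_sop = a & b                               # C = A.B
    c_pos = (a | b) & (a | nb) & (na | b)       # C = (A+B).(A+B').(A'+B)
    assert s_sop == s_pos == (a ^ b)            # sum bit of the half adder
    assert c_sop == c_pos == (a & b)            # carry bit of the half adder
print("SOP and POS forms of the half adder agree on all inputs.")
```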
  • 33. 5.4. SIMPLIFICATION OF LOGIC FUNCTIONS 33 5.4 Simplification of Logic Functions • Logic expressions may be long, with many terms and many literals. • They can be simplified using Boolean algebra. • Simplification requires the ability to identify patterns. • If we can represent a logic function in a graphic form that helps us in identifying patterns, simplification becomes simple. 5.5 Karnaugh Map(or K-map) minimization The Karnaugh map provides a systematic method for simplifying a Boolean expression or a truth table function. The K-map can produce the simplest SOP or POS expression possible. The K-map procedure is an application of adjacency and guarantees a minimal expression. It is easy to use, visual and fast, and familiarity with Boolean laws is not required. The K-map is a table consisting of N = 2^n cells, where n is the number of input variables. Assuming the input variables are A and B, the K-map illustrating the four possible variable combinations is obtained. Let us begin by considering a two-variable logic function F = ¯AB + AB Any two-variable function has 2^2 = 4 min-terms. The truth table of this function is as shown in the Figure 5.17. Figure 5.17: Truth table It can be seen that the values of the variables are arranged in ascending order (in the numerical decimal sense). Figure 5.18: Two variable K-map Example: Consider the expression Z = f (A,B) = ¯A. ¯B+A. ¯B+ ¯A.B plotted on the K-map as shown in the Figure 5.19.
  • 34. 34 CHAPTER 5. LOGIC FUNCTIONS Figure 5.19: K-map Pairs of 1's are grouped as shown in the Figure 5.19 and the simplified answer is obtained by using the following steps: • Note that two groups can be formed for the example given above, bearing in mind that the largest rectangular clusters that can be made consist of two 1's. Notice that a 1 can belong to more than one group. • The first group, labelled I, consists of two 1's which correspond to A = 0, B = 0 and A = 1, B = 0. Put another way, all squares in this example that correspond to the area of the map where B = 0 contain 1's, independent of the value of A. So when B = 0, the output is 1. The expression of the output will contain the term ¯B. • The second group, labelled II, corresponds to the area of the map where A = 0. The group can therefore be defined as ¯A. This implies that when A = 0, the output is 1. The output is therefore 1 whenever B = 0 or A = 0. Hence, the simplified answer is Z = ¯A + ¯B. 5.5.1 Three Variable and Four Variable Karnaugh Map: The three variable and four variable K-maps can be constructed as shown in the Figure 5.20. Figure 5.20: Three and four variable K-map A K-map for three variables will have 2^3 = 8 cells and one for four variables will have 2^4 = 16 cells. It can be observed that the positions of columns 10 and 11 are interchanged so that only one variable changes across adjacent cells. This arrangement is what allows the logic to be minimized. Example: 3-variable K-Map for the function, F = Σ(1,2,3,4,5,6) Figure 5.21: Three variable K-map for the given expression Simplified expression from the K-map is F = ¯A.C +B. ¯C + A. ¯B. Example: Consider the function, F = Σ(0,1,3,5,6,9,11,12,13,15) Since the largest minterm is 15, we need four variables to define this function. The K-Map for this function is obtained by writing 1 in the cells whose minterms are present in the function and 0 in the rest of the cells.
  • 35. 5.5. KARNAUGH MAP(OR K-MAP) MINIMIZATION 35 Figure 5.22: Four variable K-map for the given expression Simplified expression obtained: F = ¯C.D + A.D + ¯A. ¯B. ¯C + ¯A. ¯B.D + A.B. ¯C + ¯A.B.C. ¯D. Example: Given function, F = Σ(0,2,3,4,5,7,8,9,13,15) Figure 5.23: K-map for the given expression Simplified expression obtained: F = B.D + ¯A. ¯C. ¯D + ¯A. ¯B.C + A. ¯B. ¯C. 5.5.2 Five Variable K-Map: A 5-variable K-Map will have 2^5 = 32 cells. A function F whose maximum decimal minterm value is 31 can be defined and simplified by a 5-variable Karnaugh Map. Figure 5.24: Five variable K-map Example 5: Given function, F = Σ(1,3,4,5,11,12,14,16,20,21,30) Since the largest minterm is 30, we need five variables to define this function. The K-map for the function is as shown in the Figure 5.25 Figure 5.25: Five variable K-map for example 5 Simplified expression: F = ¯B.C. ¯D + ¯A.B.C. ¯E +B.C.D. ¯E + ¯A. ¯C.D.E + A. ¯B. ¯D. ¯E + ¯A. ¯B. ¯C.E Example 6: Given function, F = Σ(0,1,2,3,8,9,16,17,20,21,24,25,28,29,30,31)
  • 36. 36 CHAPTER 5. LOGIC FUNCTIONS Figure 5.26: Five variable K-map for example 6 Simplified expression:F = A. ¯D + ¯C. ¯D + ¯A. ¯B. ¯C + A.B.C 5.5.3 Implementation of BCD to Gray code Converter using K-map We can convert the Binary Coded Decimal(BCD) code to Gray code by using K-map simplification. The truth table for BCD to Gray code converter is shown in the Figure 5.27 Figure 5.27: Table for BCD to Gray code converter Figure 5.28: K-map for G3 and G2 Figure 5.29: K-map for G1 and G0 The equations obtained from K-maps are as follows: G3 = B3
  • 37. 5.6. MINIMIZATION USING K-MAP 37 G2 = ¯B3.B2+B3. ¯B2 = B3⊕B2 G1 = ¯B1.B2+B1. ¯B2 = B1⊕B2 G0 = ¯B1.B0+B1. ¯B0 = B1⊕B0 The implementation of the BCD to Gray conversion using logic gates is shown in the Figure 5.30. Figure 5.30: BCD to Gray code converter circuit 5.6 Minimization using K-map Some definitions used in minimization: • Implicant: An implicant is a group of 2^i (i = 0,1,...,n) minterms (1-entered cells) that are logically adjacent. • Prime implicant: A prime implicant is one that is not a subset of any other implicant. A study of implicants enables us to use the K-map effectively for simplifying a Boolean function. Consider the K-map shown in the Figure 5.31. There are four implicants: 1, 2, 3 and 4. Figure 5.31: K-map showing implicants and prime implicants The implicant 4 is a single cell implicant. A single cell implicant is a 1-entered cell that is not positionally adjacent to any of the other 1-entered cells in the map. The four implicants account for all groupings of 1-entered cells. This also means that the four implicants describe the Boolean function completely. An implicant represents a product term; the number of variables appearing in the term decreases as the number of 1-entered cells it represents increases. Implicant 1 in the Figure 5.31 represents the product term A. ¯C. Implicant 2 represents A.B.D. Implicant 3 represents B.C.D. Implicant 4 represents ¯A. ¯B.C. ¯D. The smaller the number of implicants, and the larger the number of cells that each implicant represents, the smaller the number of product terms in the simplified Boolean expression. • Essential prime implicant: It is a prime implicant which includes a 1-entered cell that is not included in any other prime implicant. • Redundant implicant: It is one in which all cells are covered by other implicants. A redundant implicant represents a redundant term in an expression.
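Whether a chosen set of implicants describes a function completely, without covering any 0-cell, can be checked by brute force. The Python sketch below does this for the earlier four-variable example F = Σ(0,1,3,5,6,9,11,12,13,15) and its simplified sum-of-products expression; the helper names are chosen only for this illustration.

```python
from itertools import product

MINTERMS = {0, 1, 3, 5, 6, 9, 11, 12, 13, 15}

def f_simplified(a, b, c, d):
    na, nb, nc, nd = 1 - a, 1 - b, 1 - c, 1 - d
    # F = C'.D + A.D + A'.B'.C' + A'.B'.D + A.B.C' + A'.B.C.D'
    return ((nc & d) | (a & d) | (na & nb & nc) |
            (na & nb & d) | (a & b & nc) | (na & b & c & nd))

for a, b, c, d in product((0, 1), repeat=4):
    index = (a << 3) | (b << 2) | (c << 1) | d        # minterm number
    assert f_simplified(a, b, c, d) == (index in MINTERMS)
print("The implicants cover exactly the 1-cells of the function.")
```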
  • 38. 38 CHAPTER 5. LOGIC FUNCTIONS 5.6.1 K-maps for Product-of-Sums Design Product-of-sums design uses the same principles, but applied to the zeros of the function. Example 7: F = π(0,4,6,7,11,12,14,15) In constructing the K-map for a function given in POS form, 0's are filled in the cells represented by the max terms. The K-map of the above function is shown in the Figure 5.32. Figure 5.32: K-map for the example 7 Sometimes the function may be given in the standard POS form. In such situations, we can first convert the standard POS form of the expression into its canonical form, and enter 0's in the cells representing the max terms. Another procedure is to enter 0's directly into the map by observing the sum terms one after the other. Consider an example of a Boolean function given in the POS form. F = (A +B + ¯D).( ¯A +B + ¯C).( ¯B +C) This may be converted into its canonical form as follows: F = (A+B +C + ¯D).(A+B + ¯C + ¯D).( ¯A+B + ¯C +D).( ¯A+B + ¯C + ¯D).(A+ ¯B +C +D).(A+ ¯B +C + ¯D).( ¯A+ ¯B +C +D).( ¯A+ ¯B +C + ¯D) F = M1.M3.M10.M11.M4.M5.M12.M13 The cells 1, 3, 4, 5, 10, 11, 12 and 13 have 0's entered in them while the remaining cells are filled with 1's. 5.6.2 DON'T CARE CONDITIONS Functions that have an unspecified output for some input combinations are called incompletely specified functions. The unspecified min-terms of a function are called don't care conditions. We simply don't care whether the value 0 or 1 is assigned to F for a particular min-term. Don't care conditions are represented by X in the K-Map table. Don't care conditions play a central role in the specification and optimization of logic circuits as they represent the degrees of freedom of transforming a network into a functionally equivalent one. Example 8: Simplify the Boolean function f (w,x, y,z) = Σ(1,3,7,11,15), d(w,x, y,z) = Σ(0,2,5)
  • 39. 5.6. MINIMIZATION USING K-MAP 39 Figure 5.33: K-map for the example 8 The function in the SOP and POS forms may be written as F = Σ(4,5)+d(0,6,7) and F = π(1,2,3).d(0,6,7) The term d(0,6,7) represents the collection of don't-care terms. The don't-cares bring some advantages to the simplification of Boolean functions. The X's can be treated either as 0's or as 1's, depending on convenience. For example, the K-map shown in the Figure 5.33 can be redrawn in two different ways as shown in the Figure 5.34. Figure 5.34: Different ways of K-map for the example 8 The simplification can be done in two different ways. The resulting expressions for the function F are: F = A and F = A. ¯B Therefore, we can generate a simpler expression for a given function by utilizing some of the don't-care conditions as 1's. Example 9: Simplify F = Σ(0,1,4,8,10,11,12)+d(2,3,6,9,15) The K-map of this function is shown in the Figure 5.35. Figure 5.35: K-map for the example 9 The simplified expression taking full advantage of the don't cares is F = ¯B + ¯C. ¯D Simplification of Several Functions of the Same Set of Variables: As there could be several product terms that could be made common to more than one function, special attention needs to be paid to the simplification process. Example 10: Consider the following set of functions defined on the same set of variables: F1(A,B,C) = Σ(0,3,4,5,6) F2(A,B,C) = Σ(1,2,4,6,7)
  • 40. 40 CHAPTER 5. LOGIC FUNCTIONS F3(A,B,C) = Σ(1,3,4,5,6) Let us first consider the simplification process independently for each of the functions. The K-maps for the three functions and the groupings are shown in the Figure 5.36. Figure 5.36: K-map for the example 10 The resultant minimal expressions are: F1 = ¯B. ¯C + A. ¯C + A. ¯B + ¯A.B.C F2 = B. ¯C + A. ¯C + A.B + ¯A. ¯B.C F3 = A. ¯C + ¯B.C + ¯A.C These three functions have nine product terms and twenty one literals. If the groupings can be done so as to increase the number of product terms that can be shared among the three functions, a more cost-effective realization of these functions can be achieved. One may consider, at least as a first approximation, that the cost of realizing a function is proportional to the number of product terms and the total number of literals present in the expression. Consider the minimization shown in the Figure 5.37. Figure 5.37: K-map for the example 10 The resultant minimal expressions are: F1 = ¯B. ¯C + A. ¯C + A. ¯B + ¯A.B.C F2 = B. ¯C + A. ¯C + A.B + ¯A. ¯B.C F3 = A. ¯C + A. ¯B + ¯A.B.C + ¯A. ¯B.C This simplification leads to seven product terms and sixteen literals. 5.7 Quine-McCluskey Method of Minimization The Karnaugh map provides a good method of minimizing a logic function. However, it depends on our ability to observe appropriate patterns and identify the necessary implicants. If the number of variables increases beyond five, the K-map or its variant, the variable-entered map, can become very messy and there is every possibility of committing a mistake. What we require is a method that is more suitable for implementation on a computer, even if it is inconvenient for paper-and-pencil procedures. The concept of tabular minimisation was originally formulated by Quine in 1952. This method was later improved upon by McCluskey in 1956, hence the name Quine-McCluskey. This learning unit is concerned with the Quine-McCluskey method of minimisation. This method is tedious, time-consuming and subject to error when performed by hand, but it is better suited to implementation on a digital computer.
  • 41. 5.7. QUINE-MCCLUSKEY METHOD OF MINIMIZATION 41 5.7.1 Principle of the Quine-McCluskey Method First, generate the prime implicants of the given Boolean function by a special tabulation process. Then, determine the minimal set of prime implicants from the set of implicants generated in the first stage. The tabulation process starts with a listing of the specified minterms for the 1's (or 0's) of a function and the don't-cares (the unspecified minterms) in a particular format. All the prime implicants are generated from them using the simple logical adjacency theorem, namely, A ¯B + AB = A. The main feature of this stage is that we work with the equivalent binary numbers of the product terms. For example, in a four variable case, the minterms ¯ABC ¯D and ¯AB ¯C ¯D are entered as 0110 and 0100. As the two logically adjacent terms ¯ABC ¯D and ¯AB ¯C ¯D can be combined to form the product term ¯AB ¯D, the two binary terms 0110 and 0100 are combined to form a term represented as 01−0, where '−' (dash) indicates the position where the combination took place. Stage two involves creating a prime implicant table. This table provides a means of identifying, by a special procedure, the smallest number of prime implicants that represents the original Boolean function. The selected prime implicants are combined to form the simplified expression in the SOP form. While we confine our discussion to the creation of a minimal SOP expression of a Boolean function in the canonical form, it is easy to extend the procedure to functions that are given in the standard or any other forms. 5.7.2 Generation of Prime Implicants The process of generating prime implicants is best presented through an example. Example 10: F = Σ(1,2,5,6,7,9,10,11,14) All the minterms are tabulated as binary numbers in a sectionalised format, so that each section consists of the equivalent binary numbers containing the same number of 1's, and the number of 1's in the equivalent binary numbers of each section is always more than that in its previous section. This process is illustrated in the table as shown in the Figure 5.38.
  • 42. 42 CHAPTER 5. LOGIC FUNCTIONS second column also will get sectionalised based on the number of 1’s. The entries of one section in the second column can again be combined together with entries in the next section in a similar manner. These combinations are illustrated in the table shown in the Figure 5.39. Figure 5.39: Table for example 10 All the entries in the column which are paired with entries in the next section are checked off. Column 2 is again sectionalised with respect to the number of 1’s. Column 3 is generated by pairing off entries in the first section of the column 2 with those items in the second section. In principle, this pairing could continue until no further combinations can take place. All those entries that are paired can be checked off. It may be noted that combination of entries in column 2 can only take place if the corresponding entries have the dashes at the same place. This rule is applicable for generating all other columns as well. After the tabulation is completed, all those terms which are not checked off constitute the set of prime implicants of the given function. The repeated terms, like −−10 in the column 3, should be eliminated. Therefore, from the above tabulation procedure, we obtain seven prime implicants (denoted by their decimal equivalents) as (1,5), (1,9), (5,7), (6,7), (9,11), (10,11) and (2,6,10,14). The next stage is to determine the minimal set of prime implicants. 5.7.3 Determination of the Minimal Set of Prime Implicants The prime implicants generated through the tabular method do not constitute the minimal set. The prime implicants are represented in so called "prime implicant table". Each column in the table represents a decimal equivalent of the min term. A row is placed for each prime implicant with its corresponding product appearing to the left and the decimal group to the right side. Aster- isks are entered at those intersections where the columns of binary equivalents intersect the row that covers them. The prime implicant table for the function under consideration is shown in the Figure 5.40. Figure 5.40: Prime implicant table for example 10
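The tabulation step just described, grouping terms by their number of 1's and combining terms that differ in exactly one bit, is easy to mechanise. The sketch below is a minimal, illustrative implementation of the prime-implicant generation stage only (the helper names are invented here, and it is not a full Quine-McCluskey minimiser); running it on the example function reproduces the seven prime implicants listed above.

```python
from itertools import combinations

def combine(t1, t2):
    """Merge two terms (strings over '0', '1', '-') that differ in exactly one bit."""
    diff = [i for i, (x, y) in enumerate(zip(t1, t2)) if x != y]
    if len(diff) == 1 and t1[diff[0]] != '-' and t2[diff[0]] != '-':
        i = diff[0]
        return t1[:i] + '-' + t1[i + 1:]
    return None

def prime_implicants(minterms, nbits=4):
    terms = {format(m, f'0{nbits}b') for m in minterms}
    primes = set()
    while terms:
        used, next_terms = set(), set()
        for t1, t2 in combinations(sorted(terms), 2):
            merged = combine(t1, t2)
            if merged:
                next_terms.add(merged)
                used.update((t1, t2))
        primes |= terms - used          # terms that combined no further are prime
        terms = next_terms
    return primes

# Example 10: F = sum(1, 2, 5, 6, 7, 9, 10, 11, 14)
print(sorted(prime_implicants([1, 2, 5, 6, 7, 9, 10, 11, 14])))
```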
  • 43. 5.7. QUINE-MCCLUSKEY METHOD OF MINIMIZATION 43 In the selection of the minimal set of implicants, similar to that in a K-map, essential implicants should be determined first. An essential prime implicant in a prime implicant table is one that covers at least one minterm which is not covered by any other prime implicant. This can be done by looking for a column that has only one asterisk. For example, the columns 2 and 14 have only one asterisk each. The associated row, indicated by the prime implicant C ¯D, is an essential prime implicant. C ¯D is selected as a member of the minimal set (mark that row by an asterisk). The corresponding columns, namely 2, 6, 10 and 14, are also removed from the prime implicant table, and a new table is constructed as shown in the Figure 5.41. Figure 5.41: Prime implicant table for example 10 We then select dominating prime implicants: a row dominates another row if it covers all the remaining minterms of that row, and possibly more. For example, the row ¯ABD includes the minterm 7, which is the only one included in the row represented by ¯ABC. ¯ABD is the dominant implicant over ¯ABC, and hence ¯ABC can be eliminated. Mark ¯ABD by an asterisk and check off the columns 5 and 7. Then choose A ¯BD as the dominating row over the row represented by A ¯BC. Consequently, we mark the row A ¯BD by an asterisk and eliminate the row A ¯BC and the columns 9 and 11 by checking them off. Similarly, we select ¯A. ¯C.D as the dominating one over ¯B. ¯C.D. However, ¯B. ¯C.D can equally well be chosen as the dominating prime implicant, eliminating the implicant ¯A. ¯C.D instead. Retaining ¯A. ¯C.D as the dominant prime implicant, the minimal set of prime implicants is C ¯D, ¯A ¯CD, ¯ABD and A ¯BD. The corresponding minimal SOP expression for the Boolean function is: F = C ¯D + ¯A ¯CD + ¯ABD + A ¯BD This indicates that if the selection of the minimal set of prime implicants is not unique, then the minimal expression is also not unique. There are two types of implicant tables that have some special properties. One is referred to as a cyclic prime implicant table and the other as a semi-cyclic prime implicant table. A prime implicant table is considered to be cyclic if it does not have any essential implicants, which implies that there are at least two asterisks in every column, and if there are no dominating implicants, which implies that there is the same number of asterisks in every row. Example 11: A Boolean function with a cyclic prime implicant table is shown in the Figure 5.42. The function is given by F = Σ(0,1,3,4,7,12,14,15); the possible prime implicants of the function are (0,1), (0,4), (1,3), (3,7), (4,12), (7,15), (12,14) and (14,15).
  • 44. 44 CHAPTER 5. LOGIC FUNCTIONS Figure 5.42: Cyclic prime implicant table for example 11 As it may be noticed from the prime implicant table in the Figure 5.43 that all columns have two asterisks and there are no essential prime implicants. In such a case, we can choose any one of the prime implicants to start with. If we start with prime implicant a, it can be marked with asterisk and the corresponding columns for 0 and 1 can be deleted from the table. After their removal, row c becomes dominant over row b, so that row c is selected and hence row b can be eliminated. The columns 3 and 7 can now be deleted. We observe then that the row e dominates row d and row d can be eliminated. Selection of row e enables us to delete columns 14 and 15. Figure 5.43: Cyclic prime implicant table for example 11 If, from the reduced prime implicant table shown in the Figure 5.43 we choose row g, it covers the remaining asterisks associated with rows h and f . That covers the entire prime implicant table. The minimal set for the Boolean function is given by: Figure 5.44: Cyclic prime implicant table for example 11
  • 45. 5.7. QUINE-MCCLUSKEY METHOD OF MINIMIZATION 45 F = a +c +e + g F = ¯A ¯B ¯C + ¯ACD + ABC +B ¯C ¯D The K-map of the simplified function is shown in the Figure 5.45. Figure 5.45: K-map for example 11 A semi-cyclic prime implicant table differs from a cyclic prime implicant table in one respect. In the cyclic case the number of minterms covered by each prime implicant is identical; in a semi-cyclic function this is not necessarily true. 5.7.4 Simplification of Incompletely Specified Functions The simplification procedure for completely specified functions presented in the earlier sections can easily be extended to incompletely specified functions. The initial tabulation is drawn up including the don't-cares. However, when the prime implicant table is constructed, columns associated with don't-cares need not be included, because they do not necessarily have to be covered. The remaining part of the simplification is similar to that for completely specified functions. Example 12: Simplify the following function: F(A,B,C,D,E) = Σ(1,4,6,10,20,22,24,26)+d(0,11,16,27) Figure 5.46: Table for example 12 We need to pay attention to the don't-care terms as well as to the combinations among themselves, by marking them with (d). Six binary equivalents are obtained from the procedure. These are 0000−(0,1), 1−000(16,24), 110−0(24,26), −0−00(0,4,16,20), −01−0(4,6,20,22) and −101− (10,11,26,27), and they correspond to the following prime implicants: a = ¯A ¯B ¯C ¯D b = A ¯C ¯D ¯E c = AB ¯C ¯E d = ¯B ¯D ¯E e = ¯BC ¯E g = B ¯CD The prime implicant table is plotted as shown in the Figure 5.47.
  • 46. 46 CHAPTER 5. LOGIC FUNCTIONS Figure 5.47: Prime implicant table for example 12 It may be noted that the don't-cares are not included. The minimal expression is given by: F(A,B,C,D,E) = a +c +e + g F(A,B,C,D,E) = ¯A ¯B ¯C ¯D + AB ¯C ¯E + ¯BC ¯E +B ¯CD 5.8 Timing Hazards If the input of a combinational circuit changes, unwanted switching variations may appear in the output. These variations occur when different paths from the input to the output have different delays. If, in response to a single input change and for some combination of propagation delays, an output momentarily goes to 0 when it should remain at a constant value of 1, the circuit is said to have a static 1-hazard. Likewise, if the output momentarily goes to 1 when it should remain at a constant value of 0, the circuit is said to have a static 0-hazard. When an output is supposed to change value from 0 to 1, or 1 to 0, this output may change three or more times. If this situation occurs, the circuit is said to have a dynamic hazard. The Figure 5.48 shows the different outputs from a circuit with hazards. In each of the three cases, the steady-state output of the circuit is correct; however, a switching variation appears at the circuit output when the input is changed. Figure 5.48: Timing hazards The first case, the static 1-hazard, arises because if A = C = 1, then F = B + ¯B = 1, so the output F should remain at a constant 1 when B changes from 1 to 0. However, as the next illustration shows, if each gate has a propagation delay of 10ns, E will go to 0 before D goes to 1, resulting in a momentary 0 appearing at output F. This glitch is caused by the static 1-hazard. One should note that right after B changes to 0, both the inverter input (B) and output (B') are 0 until the delay has elapsed. During this propagation period, both of these input terms in the equation for F have the value 0, so F also momentarily goes to 0. Whether a circuit contains such hazards, static or dynamic, is a structural property of the circuit and does not depend on the particular propagation delays. If a combinational circuit has no hazards, then for any combination of propagation delays and for any single input change, the output will not show a transient. On the contrary, if a circuit were to contain a hazard, then there will be some
  • 47. 5.8. TIMING HAZARDS 47 combination of delays, as well as an input change, for which the output of the circuit contains a transient. The combination of delays that produces a glitch may or may not be likely to occur in the implementation of the circuit; in some instances, it is very unlikely that such delays would occur. The transients (or glitches) that result from static and dynamic timing hazards very seldom cause problems in fully synchronous circuits, but they are a major issue in asynchronous circuits (which include nominally synchronous circuits that use asynchronous preset/reset inputs or gated clocks). The behaviour also depends on how each gate responds to a change of input value. In some instances, if more than one input of a gate changes within a short amount of time, the gate may or may not respond to the individual input changes. An example is shown in the Figure 5.49, assuming that the inverter (B) has a propagation delay of 2ns instead of 10ns. The changes on inputs D and E then reach the output OR gate 2ns apart, so the OR gate may or may not generate the 0 glitch. Figure 5.49: Timing diagram A gate displaying this type of response is said to have what is known as an inertial delay. Rather often the inertial delay value is presumed to be the same as the propagation delay of the gate. When this is the case, the circuit above will respond with a 0 glitch only for inverter propagation delays that are larger than 10ns. However, if a gate always responds to every input change after its propagation delay, no matter how closely spaced the changes are, it is said to have an ideal or transport delay. If the OR gate shown above has this type of delay, then a 0 glitch would be generated for any nonzero value of the inverter propagation delay. Hazards can always be discovered using a Karnaugh map. In the map illustrated above in Figure 5.49, not even a single loop covers both minterms ABC and A ¯BC. Thus, if A = C = 1 and B's value changes, both of these terms can go to 0 momentarily; from this momentary change, a 0 glitch is found in F. To detect hazards in a two-level AND-OR combinational circuit, the following procedure is completed. • A sum-of-products expression for the circuit needs to be written out. • Each term should be plotted on the map and looped, if possible.
  • 48. 48 CHAPTER 5. LOGIC FUNCTIONS • If any two adjacent 1's are not covered by the same loop, then a 1-hazard exists for the transition between those two 1's. For any n-variable map, this transition only occurs when one variable changes value and the other (n − 1) variables are held constant. If another loop is added to the Karnaugh map in the above Figure 5.49 and the corresponding gate is added to the circuit in the Figure 5.50, the hazard can be eliminated. The term AC remains at a constant value of 1 while B is changing, so a glitch cannot appear in the output. With this change, F is no longer a minimum SOP. Figure 5.50: Timing diagram The Figure 5.51(a) shows a circuit with numerous 0-hazards. The function that represents the circuit's output is: F = (A +C)( ¯A + ¯D)( ¯B + ¯C + D). The Karnaugh map in the Figure 5.51(b) has four pairs of adjacent 0's that are not covered by a common loop. The arrows indicate where each 0 is not being looped, and they each correspond to a 0-hazard. If A = 0, B = 1, D = 0 and C changes from 0 to 1, there is a chance that a spike can appear at the output for some combination of gate delays. Lastly, Figure 5.51(c) depicts a timing diagram that assumes a delay of 3ns for each individual inverter and a delay of 5ns for each AND gate and each OR gate. Figure 5.51: Timing diagram for the circuit
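Returning to the static 1-hazard example of Figure 5.48, the effect of unequal path delays can also be seen in a very small simulation. The sketch below assumes the usual two-level circuit F = A.B + B'.C with a 10 ns transport delay on every gate (the structure and signal names are assumptions made for this illustration, not taken from the figures); it steps time in 1 ns increments while B falls from 1 to 0, and the momentary 0 at F is the 1-hazard glitch.

```python
# Minimal transport-delay simulation of F = A.B + B'.C (assumed structure).
GATE_DELAY = 10   # ns, assumed identical for inverter, AND and OR gates

def A(t): return 1
def C(t): return 1
def B(t): return 1 if t < 0 else 0        # B falls from 1 to 0 at t = 0

def NOT_B(t): return 1 - B(t - GATE_DELAY)                       # inverter
def P(t):     return A(t - GATE_DELAY) & B(t - GATE_DELAY)       # AND: A.B
def Q(t):     return NOT_B(t - GATE_DELAY) & C(t - GATE_DELAY)   # AND: B'.C
def F(t):     return P(t - GATE_DELAY) | Q(t - GATE_DELAY)       # OR

for t in range(0, 45, 5):
    print(f"t = {t:2d} ns   P = {P(t)}   Q = {Q(t)}   F = {F(t)}")
# F starts at 1, dips to 0 roughly between t = 20 ns and t = 30 ns, then
# returns to 1: the glitch produced by the static 1-hazard.
```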
  • 49. 5.8. TIMING HAZARDS 49 The 0-hazards can be eliminated by looping extra prime implicants that cover the adjacent 0's that are not already covered by a common loop. The added sum terms are algebraically redundant (they are consensus terms), but including them removes the hazards. Using three additional loops completely eliminates the 0-hazards, resulting in the following equation. F = (A +C)( ¯A + ¯D)( ¯B + ¯C +D)(C + ¯D)(A + ¯B +D)( ¯A + ¯B + ¯C) The Figure 5.52 below illustrates the Karnaugh map after removing the 0-hazards. Figure 5.52: Timing diagram after removing 0-hazards
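That the three added sum terms really are redundant, i.e. that the hazard-free expression still computes the same function, can be confirmed by exhaustive comparison. The Python sketch below checks all sixteen input combinations; the function names are chosen only for this example.

```python
from itertools import product

def f_original(a, b, c, d):
    na, nb, nc, nd = 1 - a, 1 - b, 1 - c, 1 - d
    # F = (A + C)(A' + D')(B' + C' + D)
    return (a | c) & (na | nd) & (nb | nc | d)

def f_hazard_free(a, b, c, d):
    na, nb, nc, nd = 1 - a, 1 - b, 1 - c, 1 - d
    # Same function with the three consensus sum terms added.
    return ((a | c) & (na | nd) & (nb | nc | d) &
            (c | nd) & (a | nb | d) & (na | nb | nc))

assert all(f_original(*bits) == f_hazard_free(*bits)
           for bits in product((0, 1), repeat=4))
print("Adding the consensus terms does not change the function.")
```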
  • 51. Chapter 6 Combinational Circuits 6.1 Decoder A decoder is a combinational circuit that has 'n' input lines and a maximum of 2^n output lines. One of these outputs will be active HIGH based on the combination of inputs present, when the decoder is enabled. That means the decoder detects a particular code. The outputs of the decoder are nothing but the min terms of the 'n' input variables (lines), when it is enabled. 6.1.1 2 to 4 Decoder: Let the 2 to 4 decoder have two inputs A1 & A0 and four outputs Y3, Y2, Y1 & Y0. The block diagram of the 2 to 4 decoder is shown in the following Figure 6.1. Figure 6.1: Block diagram of 2 to 4 decoder One of these four outputs will be 1 for each combination of inputs when the enable, E, is 1. The truth table of the 2 to 4 decoder is shown in the Figure 6.2. Figure 6.2: Truth table of 2 to 4 decoder 51
  • 52. 52 CHAPTER 6. COMBINATIONAL CIRCUITS Y3 = E.A1.A0 Y2 = E.A1. ¯A0 Y1 = E. ¯A1.A0 Y0 = E. ¯A1. ¯A0 Each output has one product term, so there are four product terms in total. We can implement these four product terms by using four AND gates having three inputs each & two inverters. The circuit diagram of the 2 to 4 decoder is shown in the following Figure 6.3. Figure 6.3: Circuit diagram of 2 to 4 decoder Therefore, the outputs of the 2 to 4 decoder are nothing but the minterms of the two input variables A1 & A0, when the enable E is equal to one. If the enable E is zero, then all the outputs of the decoder will be equal to zero. 6.1.2 3 to 8 line decoder This decoder circuit gives 8 logic outputs for 3 inputs and has an enable pin. The circuit is designed with AND and NAND logic gates. It takes 3 binary inputs and activates one of the eight outputs. The 3 to 8 line decoder circuit is also called a binary-to-octal decoder. Figure 6.4: Block diagram of 3 to 8 line decoder The decoder circuit works only when the Enable pin (E) is high. S0, S1 and S2 are the three inputs and D0, D1, D2, D3, D4, D5, D6 and D7 are the eight outputs. When the Enable pin (E) is low, all the output pins are low.
  • 53. 6.1. DECODER 53 Figure 6.5: Circuit diagram of 3 to 8 line decoder Figure 6.6: Truth table of 3 to 8 line decoder 6.1.3 Implementation of higher-order decoders Higher order decoders can be implemented using one or more lower order decoders. 3 to 8 Decoder: It can be implemented using 2 to 4 decoders. We know that 2 to 4 decoder has two inputs, A1 & A0 and four outputs, Y3 to Y0. Whereas, 3 to 8 decoder has three inputs A2, A1 & A0 and eight outputs, Y7 to Y0. The number of lower order decoders required for implementing higher order decoder can be obtained using the following formula. Required number of lower order decoders = m2/m1 where, m1 is the number of outputs of lower order decoder and m2 is the number of outputs of higher order decoder. Here, m1=4 and m2=8. Substituting these two values in the above formula. Required number of 2 to 4 decoders=8/4=2 Therefore, we require two 2 to 4 decoders for implementing one 3 to 8 decoder. The block diagram of 3 to 8 decoder using 2 to 4 decoders is shown in the following Figure 6.7. Figure 6.7: Circuit diagram of 3 to 8 decoder
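The decoder structures in this section can be modelled directly in software. The sketch below implements a 2 to 4 decoder with an enable input, then builds a 3 to 8 decoder from two of them with the most significant input A2 steering the enables, which is one common arrangement for the construction of Figure 6.7; the function names and the exact wiring are assumptions made for this illustration.

```python
def decoder_2to4(e, a1, a0):
    """2-to-4 decoder: returns (Y3, Y2, Y1, Y0); all outputs are 0 when E = 0."""
    na1, na0 = 1 - a1, 1 - a0
    return (e & a1 & a0,    # Y3 = E.A1.A0
            e & a1 & na0,   # Y2 = E.A1.A0'
            e & na1 & a0,   # Y1 = E.A1'.A0
            e & na1 & na0)  # Y0 = E.A1'.A0'

def decoder_3to8(e, a2, a1, a0):
    """3-to-8 decoder built from two 2-to-4 decoders.

    A2 selects which half is enabled: the lower decoder drives Y0..Y3
    when A2 = 0 and the upper decoder drives Y4..Y7 when A2 = 1.
    """
    upper = decoder_2to4(e & a2, a1, a0)        # (Y7, Y6, Y5, Y4)
    lower = decoder_2to4(e & (1 - a2), a1, a0)  # (Y3, Y2, Y1, Y0)
    return upper + lower                        # (Y7, ..., Y0)

for n in range(8):
    a2, a1, a0 = (n >> 2) & 1, (n >> 1) & 1, n & 1
    outputs = decoder_3to8(1, a2, a1, a0)
    assert outputs[7 - n] == 1 and sum(outputs) == 1  # exactly one line high
print("3-to-8 decoder built from two 2-to-4 decoders behaves as expected.")
```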