Conditional-Boosting Flip-Flop for Near-Threshold Voltage Application
Abstract:
A conditional-boosting flip-flop is proposed for ultralow-voltage application where the supply voltage
is scaled down to the near-threshold region. The proposed flip-flop adopts voltage boosting to
provide low latency with reduced performance variability in the near-threshold voltage region. It
also adopts conditional capture to minimize the switching power consumption by eliminating
redundant boosting operations. Experimental results in a 65-nm CMOS process indicated that the
proposed flip-flop provided up to 72% lower latency with 75% less performance variability due to
process variation, and up to 67% improved energy-delay product at 25% switching activity
compared with conventional precharged differential flip-flops.
Existing System:
Capacitive boosting can be a solution to overcome the problems caused by aggressive voltage
scaling. It allows the gate-source voltage of some MOS transistors to be boosted above the supply
voltage or below the ground. The enhanced driving capability of transistors thus obtained can reduce
the latency and its sensitivity to process variations. The bootstrapped CMOS driver presented in [8]
relies on this technique to drive heavy capacitive loads with substantially reduced latency. However,
since it is a static driver, every input transition causes the bootstrapping operation. So, if some of the
transitions are redundant, a large amount of redundant power consumption may occur. The
conditional-bootstrapping latched CMOS driver [9] proposes the concept of conditional bootstrapping
to eliminate the redundant power consumption. As it is a latched driver, it can allow boosting only
when the input and output logic values are different, resulting in no redundant boosting and improved
energy efficiency, especially at low switching activity. Recently, a differential CMOS logic family
adopting the boosting technique has also been proposed for fast operation in the near-threshold
voltage region.
Proposed System:
To incorporate conditional boosting into a precharged differential flip-flop, four scenarios for
input data capture must be considered, determined by the logic states of the input and output.
These scenarios are as follows:
1) For a low output data, a high input data should trigger boosting for a fast capture of incoming
data;
2) For a low output data, a low input data should trigger no boosting since the input need not be
captured;
3) For a high output data, a low input data should trigger boosting for a fast capture of incoming
data;
4) For a high output data, a high input data should trigger no boosting.
These scenarios can be embodied in a circuit topology using a single boosting capacitor by a
combination of two operating principles. One is that the voltage presetting for the terminals of the
boosting capacitor must be determined by the data stored at the output (so-called output-dependent
presetting). The other principle is that boosting operations must be conditional on the input data given
to the flip-flop (so-called input-dependent boosting). The conceptual circuit diagrams supporting
these principles are shown in Fig. 1. To support the output-dependent presetting, the preset voltages
of capacitor terminals N and NB are made to be determined by outputs Q and QB, as shown in
Fig. 1(a). If Q and QB are low and high, N and NB are preset to be low and high [left diagram in
Fig. 1(a)], and if Q and QB are high and low, N and NB are preset to be high and low [right diagram
in Fig. 1(a)], respectively. To support the input-dependent boosting, the non-inverting input (D) is
coupled to NB through an nMOS transistor and the inverting input (DB) is coupled to N through
another nMOS transistor, as shown in Fig. 1(b). Then, in one case, in which a low data is stored in
the flip-flop, resulting in the capacitor presetting given in the left diagram in Fig. 1(a), a high input
allows NB to be pulled down to the ground, letting N be boosted toward –VDD due to capacitive
coupling [upper left diagram in Fig. 1(b)]. Meanwhile, a low input allows N to be connected to the
ground, but since the node is already preset to VSS, there is no voltage change at NB, resulting in no
boosting [lower left diagram in Fig. 1(b)]. In the other case, in which a high data is stored in the
flip-flop, resulting in the capacitor presetting given in the right diagram in Fig. 1(a), a low input
allows N to be pulled down to the ground, letting NB be boosted toward –VDD due to capacitive
coupling [lower right diagram in Fig. 1(b)].
Figure 1: Conceptual circuit diagrams for (a) output data-dependent presetting and (b) input data-dependent boosting
Meanwhile, a high input allows NB to be connected to the ground, but since the node is already
preset to VSS, there is no voltage change at N, resulting in no boosting [upper right diagram in Fig.
1(b)]. Table 1 summarizes these operations for easier understanding. With these operations, any
redundant boosting can be eliminated, resulting in a significant power reduction, especially at low
switching activity.
Table 1: Data-dependent presetting and boosting
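To make the policy in Table 1 concrete, the following is a minimal behavioral sketch in Python of the conditional-boosting decision. It models only the logic-level rules described above (presetting and the boost-or-not decision), not the analog boosting itself; the function name and return format are illustrative.

```python
# Behavioral sketch of the conditional-boosting decision (Table 1).
# Node names N/NB and the "boost toward -VDD" behavior follow the text above.

def conditional_boost(q: int, d: int):
    """Return (preset_N, preset_NB, boosted_node) for one capture cycle."""
    # Output-dependent presetting: N and NB are preset to Q and QB.
    preset_n, preset_nb = q, 1 - q
    # Input-dependent boosting: boost only when D differs from Q,
    # i.e. when the incoming data actually needs to be captured.
    if d == q:
        return preset_n, preset_nb, None          # redundant cycle, no boosting
    # The node preset high is pulled to ground, coupling the other node
    # below ground (toward -VDD) through the boosting capacitor.
    boosted = "N" if preset_nb == 1 else "NB"
    return preset_n, preset_nb, boosted

for q in (0, 1):
    for d in (0, 1):
        print(f"Q={q} D={d} ->", conditional_boost(q, d))
```

Running the loop reproduces the four scenarios: boosting occurs only in the two cases where the input differs from the stored output.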
Circuit Implementation:
The structure of the proposed conditional-boosting flip-flop (CBFF) based on the concepts
described in the previous section is shown in Fig. 2. It consists of a conditional-boosting differential
stage, a symmetric latch, and an explicit brief pulse generator. In the conditional-boosting differential
stage shown in Fig. 2(a), MP5/MP6/MP7 and MN8/MN9 are used to perform the output-dependent
presetting, whereas MN5/MN6/MN7 with boosting capacitor CBOOT are used to perform the input-
dependent boosting. MP8–MP13 and MN10–MN15 constitute the symmetric latch, as shown in Fig.
2(b). Some transistors in the differential stage are driven by a brief pulsed signal PS generated by a
novel explicit pulse generator shown in Fig. 2(c). Unlike conventional pulse generators, the
proposed pulse generator has no pMOS keeper, resulting in higher speed and lower power because
there is no signal fighting during the pull-down of PSB. The keeper's role of maintaining a high logic
value on PSB is taken over by MP1, added in parallel with MN1, which also helps a fast pull-down
of PSB. At the rising edge of CLK, PSB is rapidly discharged by MN1, MP1, and I1, driving PS high.
After the latency of I2 and I3, PSB is charged by MP2, and so PS returns to low, resulting in a brief
positive pulse at PS whose width is determined by the latency of I2 and I3. When CLK is low, PSB
is maintained high by MP1, although MP2 is OFF. According to our evaluation, the energy reduction
is up to 9% for the same slew rate and pulse width.
Figure 2: Proposed CBFF. (a) Conditional-boosting differential stage. (b) Symmetric latch. (c) Explicit brief pulse generator
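The pulse generator's behavior can be summarized with a small discrete-time sketch: PS rises at each rising edge of CLK and falls after the I2+I3 delay. This is a rough behavioral model only; the unit delays are illustrative assumptions, not values from the design.

```python
# Rough behavioral sketch of the explicit brief pulse generator: PS goes
# high at each rising edge of CLK and returns low after the I2+I3 inverter
# delay. Delay values are hypothetical, for illustration only.

INV_DELAY = 1                 # assumed unit delay per inverter (hypothetical)
PULSE_WIDTH = 2 * INV_DELAY   # width set by the latency of I2 and I3

def pulse_train(clk_samples):
    out, high_until, prev = [], -1, 0
    for t, clk in enumerate(clk_samples):
        if clk == 1 and prev == 0:        # rising edge of CLK
            high_until = t + PULSE_WIDTH  # PSB discharged -> PS goes high
        out.append(1 if t < high_until else 0)  # PSB recharged after delay
        prev = clk
    return out

clk = [0, 0, 1, 1, 1, 1, 0, 0, 1, 1, 1, 1]
print(pulse_train(clk))   # a brief pulse appears at each rising edge
```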
Bootstrapping:
In general, bootstrapping refers to a self-starting process that is supposed to proceed
without external input. In computer technology the term (usually shortened to booting) refers
to the process of loading the basic software into the memory of a computer after power-on
or general reset, especially the operating system, which will then take care of loading other software
as needed.
The term appears to have originated in the early 19th-century United States (particularly in the
phrase "pull oneself over a fence by one's bootstraps") to mean an absurdly impossible action,
an adynaton.
Software loading and execution
Main articles: Booting and Reboot (computing)
Booting is the process of starting a computer, specifically with regard to starting its software. The
process involves a chain of stages, in which at each stage a smaller, simpler program loads and
then executes the larger, more complicated program of the next stage. It is in this sense that the
computer "pulls itself up by its bootstraps", i.e. it improves itself by its own efforts. Booting is a chain
of events that starts with execution of hardware-based procedures and may then hand-off
to firmware and software which is loaded into main memory. Booting often involves processes such
as performing self-tests, loading configuration settings, loading a BIOS, resident monitors,
a hypervisor, an operating system, or utility software.
The computer term bootstrap began as a metaphor in the 1950s. In computers, pressing a
bootstrap button caused a hardwired program to read a bootstrap program from an input unit. The
computer would then execute the bootstrap program, which caused it to read more program
instructions. It became a self-sustaining process that proceeded without external help from
manually entered instructions. As a computing term, bootstrap has been used since at least 1953.[8]
Software development
Bootstrapping can also refer to the development of successively more complex, faster programming
environments. The simplest environment will be, perhaps, a very basic text editor (e.g., ed) and
an assembler program. Using these tools, one can write a more complex text editor, and a simple
compiler for a higher-level language and so on, until one can have a graphical IDE and an
extremely high-level programming language.
Historically, bootstrapping also refers to an early technique for computer program development on
new hardware. The technique described in this paragraph has been replaced by the use of a cross
compiler executed by a pre-existing computer. Bootstrapping in program development began
during the 1950s when each program was constructed on paper in decimal code or in binary code,
bit by bit (1s and 0s), because there was no high-level computer language, no compiler, no
assembler, and no linker. A tiny assembler program was hand-coded for a new computer (for
example the IBM 650) which converted a few instructions into binary or decimal code: A1. This
simple assembler program was then rewritten in its just-defined assembly language but with
extensions that would enable the use of some additional mnemonics for more complex operation
codes. The enhanced assembler's source program was then assembled by its predecessor's
executable (A1) into binary or decimal code to give A2, and the cycle repeated (now with those
enhancements available), until the entire instruction set was coded, branch addresses were
automatically calculated, and other conveniences (such as conditional assembly, macros,
optimisations, etc.) established. This was how the early assembly program SOAP (Symbolic
Optimal Assembly Program) was developed. Compilers, linkers, loaders, and utilities were then
coded in assembly language, further continuing the bootstrapping process of developing complex
software systems by using simpler software.
The term was also championed by Doug Engelbart to refer to his belief that organizations could
better evolve by improving the process they use for improvement (thus obtaining a compounding
effect over time). His SRI team that developed the NLS hypertext system applied this strategy by
using the tool they had developed to improve the tool.
Compilers
Main article: Bootstrapping (compilers)
The development of compilers for new programming languages first developed in an existing
language but then rewritten in the new language and compiled by itself, is another example of the
bootstrapping notion. Using an existing language to bootstrap a new language is one way to solve
the "chicken or the egg" causality dilemma.
Installers
Main article: Installation (computer programs)
During the installation of computer programs it is sometimes necessary to update the installer or
package manager itself. The common pattern for this is to use a small executable bootstrapper file
(e.g. setup.exe) which updates the installer and starts the real installation after the update.
Sometimes the bootstrapper also installs other prerequisites for the software during the
bootstrapping process.
Overlay networks
Main article: Bootstrapping node
A bootstrapping node, also known as a rendezvous host,[9] is a node in an overlay network that
provides initial configuration information to newly joining nodes so that they may successfully join
the overlay network.[10][11]
Discrete event simulation
Main article: Discrete event simulation
A type of computer simulation called discrete event simulation represents the operation of a system
as a chronological sequence of events. A technique called bootstrapping the simulation model is
used, which bootstraps initial data points using a pseudorandom number generator to schedule an
initial set of pending events, which schedule additional events, and with time, the distribution of
event times approaches its steady state—the bootstrapping behavior is overwhelmed by steady-
state behavior.
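A minimal sketch of this idea in Python follows: a pseudorandom number generator seeds an initial set of pending events, and each handled event schedules a successor. The event rate and horizon are made-up illustrative values.

```python
# Minimal sketch of "bootstrapping" a discrete event simulation: a PRNG
# seeds an initial set of pending events, and each event schedules a
# follow-up, until the warm-up phase is overwhelmed by steady-state behavior.

import heapq, random

rng = random.Random(42)
pending = [(rng.expovariate(1.0), i) for i in range(5)]  # bootstrapped events
heapq.heapify(pending)

t_end, count = 100.0, 0
while pending:
    t, ident = heapq.heappop(pending)
    if t > t_end:
        break
    count += 1
    # each handled event schedules one follow-up event
    heapq.heappush(pending, (t + rng.expovariate(1.0), ident))

print(f"processed {count} events by t={t_end}")
```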
Artificial intelligence and machine learning
Main articles: Bootstrap aggregating and Intelligence explosion
Bootstrapping is a technique used to iteratively improve a classifier's performance. Seed AI is a
hypothesized type of artificial intelligence capable of recursive self-improvement. Having improved
itself, it would become better at improving itself, potentially leading to an exponential increase in
intelligence. No such AI is known to exist, but it remains an active field of research.
Seed AI is a significant part of some theories about the technological singularity: proponents believe
that the development of seed AI will rapidly yield ever-smarter intelligence (via bootstrapping) and
thus a new era.
Statistics
Main articles: Bootstrapping (statistics) and Bootstrapping populations
Bootstrapping is a resampling technique used to obtain estimates of summary statistics.
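A minimal sketch of the statistical bootstrap in Python: resample the data with replacement many times and use the spread of the resampled statistic as an estimate of its sampling variability. The data values are made up for illustration.

```python
# Minimal sketch of the statistical bootstrap: the standard error of the
# mean is estimated from the spread of resampled means.

import random, statistics

data = [2.1, 2.5, 1.9, 3.0, 2.7, 2.2, 2.8]   # illustrative sample
rng = random.Random(0)

means = [
    statistics.mean(rng.choices(data, k=len(data)))  # one bootstrap replicate
    for _ in range(10_000)
]
print("sample mean:", statistics.mean(data))
print("bootstrap std. error of the mean:", statistics.stdev(means))
```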
Business
Bootstrapping in business means starting a business without external help or capital. Such startups
fund the development of their company through internal cash flow and are cautious with their
expenses.[12] Generally at the start of a venture, a small amount of money will be set aside for the
bootstrap process.[13] Bootstrapping can also be a supplement
for econometric models.[14] Bootstrapping was also expanded upon in the book Bootstrap
Business by Richard Christiansen, the Harvard Business Review article The Art of
Bootstrapping and the follow-up book The Origin and Evolution of New Businesses by Amar Bhide.
- Startups can grow by reinvesting profits in their own growth if bootstrapping costs are low and
return on investment is high. This financing approach allows owners to maintain control of their
business and forces them to spend with discipline.[15] In addition, bootstrapping allows startups
to focus on customers rather than investors, thereby increasing the likelihood of creating a
profitable business.
- Leveraged buyouts, or highly leveraged or "bootstrap" transactions, occur when an investor
acquires a controlling interest in a company's equity and where a significant percentage of the
purchase price is financed through leverage, i.e., borrowing.
- Bootstrapping in finance refers to the method used to create the spot rate curve (a sketch follows this list).
- Operation Bootstrap (Operación Manos a la Obra) refers to the ambitious projects that
industrialized Puerto Rico in the mid-20th century.
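As referenced in the finance item above, the following is a minimal sketch of bootstrapping a spot (zero) curve from annual-pay par bond yields: each year's discount factor is solved from the previously computed ones. The input yields are made up for illustration.

```python
# Minimal sketch of bootstrapping a spot (zero) curve from par yields of
# annual-pay bonds. Each discount factor is solved from the earlier ones.

par_yields = {1: 0.020, 2: 0.025, 3: 0.030}   # maturity (years) -> par yield

discount = {}
for n in sorted(par_yields):
    c = par_yields[n]                          # par bond: coupon == yield
    pv_coupons = sum(c * discount[m] for m in range(1, n))
    discount[n] = (1.0 - pv_coupons) / (1.0 + c)
    spot = discount[n] ** (-1.0 / n) - 1.0     # implied zero rate
    print(f"{n}y spot rate: {spot:.4%}")
```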
Biology
Richard Dawkins in his book River Out of Eden[16] used the computer bootstrapping concept to
explain how biological cells differentiate: "Different cells receive different combinations of
chemicals, which switch on different combinations of genes, and some genes work to switch other
genes on or off. And so the bootstrapping continues, until we have the full repertoire of different
kinds of cells."
Phylogenetics
Bootstrapping analysis gives a way to judge the strength of support for clades on phylogenetic
trees. A number is written by a node, which reflects the percentage of bootstrap trees which also
resolve the clade at the endpoints of that branch.[17]
Law
Main article: Bootstrapping (law)
Bootstrapping is a rule preventing the admission of hearsay evidence in conspiracy cases.
Linguistics
Main article: Bootstrapping (linguistics)
Bootstrapping is a theory of language acquisition.
Physics
Quantum theory
Main articles: Bootstrap model and Conformal bootstrap
Bootstrapping is using very general consistency criteria to determine the form of a quantum theory
from some assumptions on the spectrum of particles or operators.
Magnetically confined fusion plasmas
In tokamak fusion devices, bootstrapping refers to the process in which a bootstrap current is self-
generated by the plasma, which reduces or eliminates the need for an external current driver.
Maximising the bootstrap current is a major goal of advanced tokamak designs.
Inertially confined fusion plasmas
Bootstrapping in inertial confinement fusion refers to the alpha particles produced in the fusion
reaction providing further heating to the plasma. This heating leads to ignition and an overall energy
gain.
Electronics
Main article: Bootstrapping (electronics)
Bootstrapping is a form of positive feedback in analog circuit design.
Electric power grid
Main article: Black start
An electric power grid is almost never brought down intentionally. Generators and power stations
are started and shut down as necessary. A typical power station requires power for start up prior to
being able to generate power. This power is obtained from the grid, so if the entire grid is down
these stations cannot be started.
Therefore, to get a grid started, there must be at least a small number of power stations that can
start entirely on their own. A black start is the process of restoring a power station to operation
without relying on external power. In the absence of grid power, one or more black starts are used
to bootstrap the grid.
Cellular networks
Main articles: Bootstrapping Server Function and Generic Bootstrapping Architecture
A Bootstrapping Server Function (BSF) is an intermediary element in cellular networks which
provides application independent functions for mutual authentication of user equipment and
servers unknown to each other and for 'bootstrapping' the exchange of secret session keys
afterwards. The term 'bootstrapping' is related to building a security relation with a previously
unknown device first and to allow installing security elements (keys) in the device and the BSF
afterwards.
News media
A media bootstrap is the process whereby a story or meme is deliberately (but artificially) produced
by self and peer-referential journalism, originally within a tight circle of media content originators,
often commencing with stories written within the same media organization. This story is then
expanded into a general media "accepted wisdom" with the aim of having it accepted as self-evident
"common knowledge" by the reading, listening and viewing publics. The key feature of a media
bootstrap is that as little hard, verifiable, external evidence as possible is used to support the story,
preference being given to the citation (often unattributed) of other media stories, i.e. "journalists
interviewing journalists".
Because the campaign is usually originated and at least initially concocted internally by a media
organization with a particular agenda in mind, within a closed loop of reportage and opinionation,
the campaign is said to have "pulled itself up by its own bootstraps".
A bootstrap campaign should be distinguished from a genuine news story of genuine interest, such
as a natural disaster that kills thousands, or the death of a respected public figure. It is legitimate
for these stories to be given coverage across all media platforms. What distinguishes a bootstrap
from a real story is the contrived and organized manner in which the bootstrap appears to come
out of nowhere. A bootstrap commonly claims to be tapping a hitherto unrecognized phenomenon
within society.
As self-levitating by pulling on one's bootstraps is physically impossible, this is often used by the
bootstrappers themselves to deny the possibility that the bootstrap campaign is indeed concocted
and artificial. They assert that it has arisen via a groundswell of public opinion. Media campaigns
that are openly admitted as concocted (e.g. a public service campaign titled "Let's Clean Up Our
City") are usually ignored by other media organizations for reasons related to competition. On the
other hand, the true bootstrap welcomes the participation of other media organizations, indeed
encourages it, as this participation gains the bootstrap notoriety and, most importantly, legitimacy.
Bootstrapping (electronics)
In the field of electronics, a bootstrap circuit is one where part of the output of an amplifier stage
is applied to the input, so as to alter the input impedance of the amplifier. When applied deliberately,
the intention is usually to increase rather than decrease the impedance.[1] Generally, any technique
where part of the output of a system is used at startup is described as bootstrapping.
In the domain of MOSFET circuits, "bootstrapping" is commonly used to mean pulling up
the operating point of a transistor above the power supply rail.[2][3] The same term has been used
somewhat more generally for dynamically altering the operating point of an operational amplifier (by
shifting both its positive and negative supply rail) in order to increase its output voltage swing
(relative to the ground).[4] In the sense used in this paragraph, bootstrapping an operational
amplifier means "using a signal to drive the reference point of the op-amp's power supplies".[5] A
more sophisticated use of this rail bootstrapping technique is to alter the non-linear C/V
characteristic of the inputs of a JFET op-amp in order to decrease its distortion.
Input impedance
Bootstrap capacitors C1 and C2 in a BJT emitter follower circuit
In analog circuit designs, a bootstrap circuit is an arrangement of components deliberately intended
to alter the input impedance of a circuit. Usually it is intended to increase the impedance, by using
a small amount of positive feedback, usually over two stages. This was often necessary in the early
days of bipolar transistors, which inherently have quite a low input impedance. Because the
feedback is positive, such circuits can suffer from poor stability and noise performance compared
to ones that don't bootstrap.
Negative feedback may alternatively be used to bootstrap an input impedance, causing the
apparent impedance to be reduced. This is seldom done deliberately, however, and is normally an
unwanted result of a particular circuit design. A well-known example of this is the Miller effect, in
which an unavoidable feedback capacitance appears increased (i.e. its impedance appears
reduced) by negative feedback. One popular case where this is done deliberately is the Miller
compensation technique for providing a low-frequency pole inside an integrated circuit. To minimize
the size of the necessary capacitor, it is placed between the input and an output which swings in
the opposite direction. This bootstrapping makes it act like a larger capacitor to ground.
Driving MOS transistors
An N-MOSFET/IGBT needs a significantly positive charge (VGS > Vth) applied to the gate in order to
turn on. Using only N-channel MOSFET/IGBT devices is a common cost reduction method due
largely to die size reduction (there are other benefits as well). However, using nMOS devices in
place of pMOS devices means that a voltage higher than the power rail supply (V+) is needed in
order to bias the transistor into linear operation (minimal current limiting) and thus avoid significant
heat loss.
A bootstrap capacitor is connected from the supply rail (V+) to the output voltage. Usually the
source terminal of the N-MOSFET is connected to the cathode of a recirculation diode, allowing for
efficient management of stored energy in the typically inductive load (See Flyback diode). Due to
the charge storage characteristics of a capacitor, the bootstrap voltage will rise above (V+)
providing the needed gate drive voltage.
A MOSFET/IGBT is a voltage-controlled device which, in theory, will not have any gate current.
This makes it possible to utilize the charge inside the capacitor for control purposes. However,
eventually the capacitor will lose its charge due to parasitic gate current and non-ideal (i.e. finite)
internal resistance, so this scheme is only used where there is a steady pulse present. This is
because the pulsing action allows for the capacitor to discharge (at least partially if not completely).
Most control schemes that use a bootstrap capacitor force the high side driver (N-MOSFET) off for
a minimum time to allow for the capacitor to refill. This means that the duty cycle will always need
to be less than 100% to accommodate for the parasitic discharge unless the leakage is
accommodated for in another manner.
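A back-of-the-envelope sketch of this refill constraint follows, using dV/dt = I/C to estimate how long the high side can stay on before the gate drive sags. All component values are illustrative assumptions, not from any specific driver.

```python
# Back-of-the-envelope sketch of bootstrap-capacitor droop in a high-side
# N-MOSFET gate driver: the capacitor charge leaks away, so the duty cycle
# must leave time to refill it. All values below are illustrative.

C_BOOT = 100e-9     # bootstrap capacitor (F)
I_LEAK = 250e-6     # leakage + gate-driver quiescent current (A), assumed
V_INIT = 12.0       # gate drive right after refresh (V)
V_MIN = 8.0         # minimum gate drive to keep the FET fully enhanced (V)

# dV/dt = I/C, so the allowed high-side on-time before the drive sags:
t_on_max = C_BOOT * (V_INIT - V_MIN) / I_LEAK
print(f"max continuous on-time: {t_on_max*1e3:.1f} ms")   # ~1.6 ms here
```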
Switch-mode power supplies
In switch-mode power supplies, the regulation circuits are powered from the output. To start the
power supply, a leakage resistance can be used to trickle-charge the supply rail for the control
circuit to start it oscillating. This approach is less costly and more efficient than providing a separate
linear power supply just to start the regulator circuit.[8]
Output swing
AC amplifiers can use bootstrapping to increase output swing. A capacitor (usually referred to
as a bootstrap capacitor) is connected from the output of the amplifier to the bias circuit, providing
bias voltages that exceed the power supply voltage. Emitter followers can provide rail-to-rail output
in this way, which is a common technique in class AB audio amplifiers.
Digital integrated circuits
Within an integrated circuit a bootstrap method is used to allow internal address and clock
distribution lines to have an increased voltage swing. The bootstrap circuit uses a coupling
capacitor, formed from the gate/source capacitance of a transistor, to drive a signal line to slightly
greater than the supply voltage.
Flip-flop (electronics)
In electronics, a flip-flop or latch is a circuit that has two stable states and can be used to store
state information. A flip-flop is a bistable multivibrator. The circuit can be made to change state
by signals applied to one or more control inputs and will have one or two outputs. It is the basic
storage element in sequential logic. Flip-flops and latches are fundamental building blocks of digital
electronics systems used in computers, communications, and many other types of systems.
Flip-flops and latches are used as data storage elements. A flip-flop stores a single bit (binary digit)
of data; one of its two states represents a "one" and the other represents a "zero". Such data
storage can be used for storage of state, and such a circuit is described as sequential logic. When
used in a finite-state machine, the output and next state depend not only on its current input, but
also on its current state (and hence, previous inputs). It can also be used for counting of pulses,
and for synchronizing variably-timed input signals to some reference timing signal.
Flip-flops can be either simple (transparent or opaque) or clocked (synchronous or edge-triggered).
Although the term flip-flop has historically referred generically to both simple and clocked circuits,
in modern usage it is common to reserve the term flip-flop exclusively for discussing clocked
circuits; the simple ones are commonly called latches.[1][2]
Using this terminology, a latch is level-sensitive, whereas a flip-flop is edge-sensitive. That is, when
a latch is enabled it becomes transparent, while a flip-flop's output only changes on a single type
(positive-going or negative-going) of clock edge.
History
Flip-flop schematics from the Eccles and Jordan patent filed 1918, one drawn as a cascade of
amplifiers with a positive feedback path, and the other as a symmetric cross-coupled pair
The first electronic flip-flop was invented in 1918 by the British physicists William Eccles and F. W.
Jordan.[3][4] It was initially called the Eccles–Jordan trigger circuit and consisted of two active
elements (vacuum tubes).[5] The design was used in the 1943 British Colossus codebreaking
computer[6] and such circuits and their transistorized versions were common in computers even
after the introduction of integrated circuits, though flip-flops made from logic gates are also
common now.[7][8] Early flip-flops were known variously as trigger circuits or multivibrators.
According to P. L. Lindley, an engineer at the US Jet Propulsion Laboratory, the flip-flop types
detailed below (SR, D, T, JK) were first discussed in a 1954 UCLA course on computer design by
Montgomery Phister, and then appeared in his book Logical Design of Digital
Computers.[9][10] Lindley was at the time working at Hughes Aircraft under Eldred Nelson, who had
coined the term JK for a flip-flop which changed states when both inputs were on (a logical "one").
The other names were coined by Phister. They differ slightly from some of the definitions given
below. Lindley explains that he heard the story of the JK flip-flop from Eldred Nelson, who is
responsible for coining the term while working at Hughes Aircraft. Flip-flops in use at Hughes at the
time were all of the type that came to be known as J-K. In designing a logical system, Nelson
assigned letters to flip-flop inputs as follows: #1: A & B, #2: C & D, #3: E & F, #4: G & H, #5: J & K.
Nelson used the notations "j-input" and "k-input" in a patent application filed in 1953.[11]
Implementation
A traditional (simple) flip-flop circuit based on bipolar junction transistors
Flip-flops can be either simple (transparent or asynchronous) or clocked (synchronous). The simple
ones are commonly described as latches,[1] while the clocked ones are described as flip-flops.[2]
Simple flip-flops can be built around a single pair of cross-coupled inverting elements: vacuum
tubes, bipolar transistors, field effect transistors, inverters, and inverting logic gates have all been
used in practical circuits.
Clocked devices are specially designed for synchronous systems; such devices ignore their inputs
except at the transition of a dedicated clock signal (known as clocking, pulsing, or strobing).
Clocking causes the flip-flop either to change or to retain its output signal based upon the values of
the input signals at the transition. Some flip-flops change output on the rising edge of the clock,
others on the falling edge.
Since the elementary amplifying stages are inverting, two stages can be connected in succession
(as a cascade) to form the needed non-inverting amplifier. In this configuration, each amplifier may
be considered as an active inverting feedback network for the other inverting amplifier. Thus the
two stages are connected in a non-inverting loop although the circuit diagram is usually drawn as
a symmetric cross-coupled pair (both the drawings are initially introduced in the Eccles–Jordan
patent).
Flip-flop types
Flip-flops can be divided into common types: the SR ("set-reset"), D ("data" or
"delay"[12]), T ("toggle"), and JK. The behavior of a particular type can be described by what is
termed the characteristic equation, which derives the "next" (i.e., after the next clock pulse)
output, Qnext, in terms of the input signal(s) and/or the current output, Q.
Simple set-reset latches
SR NOR latch
An animation of an SR latch, constructed from a pair of cross-coupled NOR gates. Red and black
mean logical '1' and '0', respectively.
An animated SR latch. Black and white mean logical '1' and '0', respectively.
(A) S = 1, R = 0: set
(B) S = 0, R = 0: hold
(C) S = 0, R = 1: reset
(D) S = 1, R = 1: not allowed
The restricted combination (D) leads to an unstable state.
When using static gates as building blocks, the most fundamental latch is the simple SR latch,
where S and R stand for set and reset. It can be constructed from a pair of cross-coupled NOR logic
gates. The stored bit is present on the output marked Q.
While the R and S inputs are both low, feedback maintains the Q and Q′ outputs in a constant state,
with Q′ the complement of Q. If S (Set) is pulsed high while R (Reset) is held low, then the Q output
is forced high, and stays high when S returns to low; similarly, if R is pulsed high while S is held
low, then the Q output is forced low, and stays low when R returns to low.
SR latch operation[13]
Characteristic table:
S R Qnext Action
0 0 Q     hold state
0 1 0     reset
1 0 1     set
1 1 X     not allowed

Excitation table:
Q Qnext S R
0 0     0 X
0 1     1 0
1 0     0 1
1 1     X 0

Note: X means don't care, that is, either 0 or 1 is a valid value.
The R = S = 1 combination is called a restricted combination or a forbidden state because, as
both NOR gates then output zeros, it breaks the logical equation Q = not Q′. The combination is
also inappropriate in circuits where both inputs may go low simultaneously (i.e. a transition
from restricted to keep). The output would lock at either 1 or 0 depending on the propagation time
relations between the gates (a race condition).
To overcome the restricted combination, one can add gates to the inputs that would convert (S,R)
= (1,1) to one of the non-restricted combinations. That can be:
- Q = 1 (1,0) – referred to as an S (dominated)-latch
- Q = 0 (0,1) – referred to as an R (dominated)-latch; this is done in nearly every programmable
logic controller.
- Keep state (0,0) – referred to as an E-latch
Alternatively, the restricted combination can be made to toggle the output. The result is the JK latch.
Characteristic: Q+ = R'Q + R'S or Q+ = R'(Q + S).[14]
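A gate-level sketch of the SR NOR latch described above follows: two cross-coupled NOR gates, iterated until the outputs settle. The iteration count and test sequence are illustrative.

```python
# Gate-level sketch of the SR NOR latch: two cross-coupled NOR gates.
# S = R = 1 is the restricted combination (both outputs driven to 0, so
# Q is no longer the complement of Q').

def nor(a: int, b: int) -> int:
    return 0 if (a or b) else 1

def sr_nor_latch(s: int, r: int, q: int, qn: int):
    for _ in range(4):                  # iterate the feedback to a fixed point
        q, qn = nor(r, qn), nor(s, q)
    return q, qn

q, qn = 0, 1
for s, r, label in [(1, 0, "set"), (0, 0, "hold"), (0, 1, "reset"), (0, 0, "hold")]:
    q, qn = sr_nor_latch(s, r, q, qn)
    print(f"S={s} R={r} -> Q={q} Q'={qn}  ({label})")
```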
SR NAND latch
An SR latch constructed from cross-coupled NAND gates.
This is an alternate model of the simple SR latch which is built with NAND logic
gates. Set and reset now become active-low signals, denoted S′ and R′ respectively. Otherwise,
operation is identical to that of the SR latch. Historically, SR-latches have been predominant despite
the notational inconvenience of active-low inputs.
SR latch operation
S′ R′ Action
0  0  Not allowed
0  1  Q = 1
1  0  Q = 0
1  1  No change
Symbol for an SR NAND latch
SR AND-OR latch
An SR AND-OR latch. Light green means logical '1' and dark green means logical '0'. The latch is
currently in hold mode (no change).
From the teaching point of view, SR latches realised as a pair of cross-coupled components
(transistors, gates, tubes, etc.) are rather hard for beginners to understand. A model that is
didactically easier to understand uses a single feedback loop instead of the cross-coupling. The
following is an SR latch built with an AND gate with one inverted input and an OR gate.
SR AND-OR latch operation
S R Action
0 0 No change
1 0 Q = 1
X 1 Q = 0
JK latch
The JK latch is much less frequently used than the JK flip-flop. The JK latch follows the following
state table:
JK latch truth table
J K Qnext Comment
0 0 Q     No change
0 1 0     Reset
1 0 1     Set
1 1 Q′    Toggle
Hence, the JK latch is an SR latch that is made to toggle its output (oscillate between 0 and 1)
when passed the input combination of 11.[15] Unlike the JK flip-flop, the 11 input combination for the
JK latch is not very useful because there is no clock that directs toggling.[16]
Gated latches and conditional transparency
Latches are designed to be transparent. That is, input signal changes cause immediate changes in
output. Additional logic can be added to a simple transparent latch to make it non-
transparent or opaque when another input (an "enable" input) is not asserted. When
several transparent latches follow each other, using the same enable signal, signals can propagate
through all of them at once. However, by following a transparent-high latch with a transparent-
low (or opaque-high) latch, a master–slave flip-flop is implemented.
Gated SR latch
A gated SR latch circuit diagram constructed from AND gates (on left) and NOR gates (on right).
A synchronous SR latch (sometimes clocked SR flip-flop) can be made by adding a second level
of NAND gates to the inverted SR latch (or a second level of AND gates to the direct SR latch). The
extra NAND gates further invert the inputs so the simple SR latch becomes a gated SR latch (and
a simple SR latch would transform into a gated SR latch with inverted enable).
With E high (enable true), the signals can pass through the input gates to the encapsulated latch;
all signal combinations except for (0,0) = hold then immediately reproduce on the (Q,Q′) output, i.e.
the latch is transparent.
With E low (enable false) the latch is closed (opaque) and remains in the state it was left the last
time E was high.
The enable input is sometimes a clock signal, but more often a read or write strobe.
Gated SR latch operation
E/C Action
0 No action (keep state)
1 The same as non-clocked SR latch
Symbol for a gated SR latch
Gated D latch
A gated D latch based on an SR NAND latch
A gated D latch based on an SR NOR latch
An animated gated D latch.
(A) D = 1, E = 1: set
(B) D = 1, E = 0: hold
(C) D = 0, E = 0: hold
(D) D = 0, E = 1: reset
A gated D latch in pass transistor logic, similar to the ones in the CD4042 or the CD74HC75
integrated circuits.
This latch exploits the fact that, in the two active input combinations (01 and 10) of a gated SR
latch, R is the complement of S. The input NAND stage converts the two D input states (0 and 1)
to these two input combinations for the next SR latch by inverting the data input signal. The low
state of the enable signal produces the inactive "11" combination. Thus a gated D-latch may be
considered as a one-input synchronous SR latch. This configuration prevents application of the
restricted input combination. It is also known as transparent latch, data latch, or simply gated latch.
It has a data input and an enable signal (sometimes named clock, or control). The
word transparent comes from the fact that, when the enable input is on, the signal propagates
directly through the circuit, from the input D to the output Q.
Transparent latches are typically used as I/O ports or in asynchronous systems, or in synchronous
two-phase systems (synchronous systems that use a two-phase clock), where two latches
operating on different clock phases prevent data transparency as in a master–slave flip-flop.
Latches are available as integrated circuits, usually with multiple latches per chip. For example,
74HC75 is a quadruple transparent latch in the 7400 series.
Gated D latch truth table
E/C D Q     Q′     Comment
0   X Qprev Q′prev No change
1   0 0     1      Reset
1   1 1     0      Set
Symbol for a gated D latch
The truth table shows that when the enable/clock input is 0, the D input has no effect on the output.
When E/C is high, the output equals D.
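A behavioral sketch of this transparency follows: while the enable input is high the output follows D, and while it is low the output holds. The input sequence is illustrative.

```python
# Behavioral sketch of the gated (transparent) D latch: Q follows D while
# E is high (transparent) and holds its last value while E is low (opaque).

def d_latch(stream):
    q = 0
    for e, d in stream:
        if e:            # transparent: Q follows D
            q = d
        yield q          # opaque when E is low: Q holds its last value

inputs = [(1, 1), (1, 0), (0, 1), (0, 1), (1, 1), (0, 0)]
print(list(d_latch(inputs)))   # -> [1, 0, 0, 0, 1, 1]
```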
Earle latch
Earle latch uses complementary enable inputs: enable active low (E_L) and enable active high
(E_H)
An animated Earle latch.
(A) D = 1, E_H = 1: set
(B) D = 0, E_H = 1: reset
(C) D = 1, E_H = 0: hold
The classic gated latch designs have some undesirable characteristics.[17] They require double-rail
logic or an inverter. The input-to-output propagation may take up to three gate delays. The input-
to-output propagation is not constant – some outputs take two gate delays while others take three.
Designers looked for alternatives.[18] A successful alternative is the Earle latch. It requires only a
single data input, and its output takes a constant two gate delays. In addition, the two gate levels
of the Earle latch can, in some cases, be merged with the last two gate levels of the circuits driving
the latch because many common computational circuits have an OR layer followed by an AND
layer as their last two levels. Merging the latch function can implement the latch with no additional
gate delays.[17] The merge is commonly exploited in the design of pipelined computers, and, in fact,
was originally developed by J. G. Earle to be used in the IBM System/360 Model 91 for that
purpose.[19]
The Earle latch is hazard free.[20] If the middle NAND gate is omitted, then one gets the polarity
hold latch, which is commonly used because it demands less logic.[20][21] However, it is susceptible
to logic hazard. Intentionally skewing the clock signal can avoid the hazard.[21]
D flip-flop
D flip-flop symbol
The D flip-flop is widely used. It is also known as a "data" or "delay" flip-flop.
The D flip-flop captures the value of the D-input at a definite portion of the clock cycle (such as the
rising edge of the clock). That captured value becomes the Q output. At other times, the output Q
does not change.[22][23] The D flip-flop can be viewed as a memory cell, a zero-order hold, or a delay
line.[24]
Truth table:
Clock D Qnext
Rising edge 0 0
Rising edge 1 1
Non-Rising X Q
('X' denotes a Don't care condition, meaning the signal is irrelevant)
Most D-type flip-flops in ICs have the capability to be forced to the set or reset state (which
ignores the D and clock inputs), much like an SR flip-flop. Usually, the illegal S = R = 1 condition
is resolved in D-type flip-flops. By setting S = R = 0, the flip-flop can be used as described
above. Here is the truth table for the other possible S and R configurations:
Inputs Outputs
S R D > Q Q'
0 1 X X 0 1
1 0 X X 1 0
1 1 X X 1 1
4-bit serial-in, parallel-out (SIPO) shift register
These flip-flops are very useful, as they form the basis for shift registers, which are an essential
part of many electronic devices. The advantage of the D flip-flop over the D-type "transparent latch"
is that the signal on the D input pin is captured the moment the flip-flop is clocked, and subsequent
changes on the D input will be ignored until the next clock event. An exception is that some flip-
flops have a "reset" signal input, which will reset Q (to zero), and may be either asynchronous or
synchronous with the clock.
The above circuit shifts the contents of the register to the right, one bit position on each active
transition of the clock. The input X is shifted into the leftmost bit position.
Classical positive-edge-triggered D flip-flop
A positive-edge-triggered D flip-flop
This circuit[25] consists of two stages implemented by SR NAND latches. The input stage (the two
latches on the left) processes the clock and data signals to ensure correct input signals for the
output stage (the single latch on the right). If the clock is low, both the output signals of the input
stage are high regardless of the data input; the output latch is unaffected and it stores the previous
state. When the clock signal changes from low to high, only one of the output voltages (depending
on the data signal) goes low and sets/resets the output latch: if D = 0, the lower output becomes
low; if D = 1, the upper output becomes low. If the clock signal continues staying high, the outputs
keep their states regardless of the data input and force the output latch to stay in the corresponding
state as the input logical zero (of the output stage) remains active while the clock is high. Hence
the role of the output latch is to store the data only while the clock is low.
The circuit is closely related to the gated D latch as both the circuits convert the two D input states
(0 and 1) to two input combinations (01 and 10) for the output SR latch by inverting the data input
signal (both the circuits split the single D signal into two complementary S and R signals). The
difference is that in the gated D latch simple NAND logical gates are used, while in the positive-
edge-triggered D flip-flop SR NAND latches are used for this purpose. The role of these latches is
to "lock" the active output producing low voltage (a logical zero); thus the positive-edge-triggered
D flip-flop can also be thought of as a gated D latch with latched input gates.
Master–slave edge-triggered D flip-flop
A master–slave D flip-flop. It responds on the falling edge of the enable input (usually a
clock)
An implementation of a master–slave D flip-flop that is triggered on the rising edge of the
clock
A master–slave D flip-flop is created by connecting two gated D latches in series, and inverting
the enable input to one of them. It is called master–slave because the second latch in the series
only changes in response to a change in the first (master) latch.
For a positive-edge triggered master–slave D flip-flop, when the clock signal is low (logical 0) the
"enable" seen by the first or "master" D latch (the inverted clock signal) is high (logical 1). This
allows the "master" latch to store the input value when the clock signal transitions from low to high.
As the clock signal goes high (0 to 1) the inverted "enable" of the first latch goes low (1 to 0) and
the value seen at the input to the master latch is "locked". Nearly simultaneously, the twice inverted
"enable" of the second or "slave" D latch transitions from low to high (0 to 1) with the clock signal.
This allows the signal captured at the rising edge of the clock by the now "locked" master latch to
pass through the "slave" latch. When the clock signal returns to low (1 to 0), the output of the "slave"
latch is "locked", and the value seen at the last rising edge of the clock is held while the "master"
latch begins to accept new values in preparation for the next rising clock edge.
By removing the leftmost inverter in the circuit at side, a D-type flip-flop that strobes on
the falling edge of a clock signal can be obtained. This has a truth table like this:
D Q > Qnext
0 X Falling 0
1 X Falling 1
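A minimal behavioral sketch of the master–slave structure described above follows: two gated D latches with complementary enables, yielding edge-triggered behavior. The clock/data sequence is illustrative.

```python
# Behavioral sketch of a positive-edge-triggered master-slave D flip-flop:
# the master latch is transparent while CLK is low; when CLK goes high the
# master locks and the slave passes the captured value to the output.

class MasterSlaveDFF:
    def __init__(self):
        self.master = 0
        self.slave = 0

    def tick(self, clk: int, d: int) -> int:
        if clk == 0:
            self.master = d           # master transparent, slave opaque
        else:
            self.slave = self.master  # slave transparent, master locked
        return self.slave

ff = MasterSlaveDFF()
for clk, d in [(0, 1), (1, 1), (0, 0), (1, 0), (0, 1), (1, 1)]:
    print(f"CLK={clk} D={d} -> Q={ff.tick(clk, d)}")
```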
A CMOS IC implementation of a "true single-phase edge-triggered flip-flop with reset"
Edge-triggered dynamic D storage element
An efficient functional alternative to a D flip-flop can be made with dynamic circuits (where
information is stored in a capacitance) as long as it is clocked often enough; while not a true flip-
flop, it is still called a flip-flop for its functional role. While the master–slave D element is triggered
on the edge of a clock, its components are each triggered by clock levels. The "edge-triggered D
flip-flop", as it is called even though it is not a true flip-flop, does not have the master–slave
properties.
Edge-triggered D flip-flops are often implemented in integrated high-speed operations
using dynamic logic. This means that the digital output is stored on parasitic device capacitance
while the device is not transitioning. This design of dynamic flip flops also enables simple resetting
since the reset operation can be performed by simply discharging one or more internal nodes. A
common dynamic flip-flop variety is the true single-phase clock (TSPC) type which performs the
flip-flop operation with little power and at high speeds. However, dynamic flip-flops will typically not
work at static or low clock speeds: given enough time, leakage paths may discharge the parasitic
capacitance enough to cause the flip-flop to enter invalid states.
T flip-flop
A circuit symbol for a T-type flip-flop
If the T input is high, the T flip-flop changes state ("toggles") whenever the clock input is strobed. If
the T input is low, the flip-flop holds the previous value. This behavior is described by the
characteristic equation:
Qnext = T ⊕ Q = T·Q′ + T′·Q (expanding the XOR operator)
and can be described in a truth table:
T flip-flop operation[26]
Characteristic table:
T Q Qnext Comment
0 0 0     hold state (no clk)
0 1 1     hold state (no clk)
1 0 1     toggle
1 1 0     toggle

Excitation table:
Q Qnext T Comment
0 0     0 No change
1 1     0 No change
0 1     1 Complement
1 0     1 Complement
When T is held high, the toggle flip-flop divides the clock frequency by two; that is, if clock frequency
is 4 MHz, the output frequency obtained from the flip-flop will be 2 MHz. This "divide by" feature
has application in various types of digital counters. A T flip-flop can also be built using a JK flip-flop
(J & K pins are connected together and act as T) or a D flip-flop (T input XOR Qprevious drives the D
input).
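A short sketch of the divide-by-two behavior follows, implementing the characteristic equation Qnext = T ⊕ Q directly; the edge counts are illustrative.

```python
# Sketch of the T flip-flop's divide-by-two behavior: with T held high the
# output toggles on every active clock edge, halving the clock frequency.

def t_flip_flop(t_input: int, clock_edges: int):
    q, trace = 0, []
    for _ in range(clock_edges):
        if t_input:
            q ^= 1          # characteristic equation: Qnext = T XOR Q
        trace.append(q)
    return trace

print(t_flip_flop(1, 8))    # -> [1, 0, 1, 0, 1, 0, 1, 0]: half the clock rate
print(t_flip_flop(0, 4))    # -> [0, 0, 0, 0]: hold
```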
JK flip-flop
A circuit symbol for a positive-edge-triggered JK flip-flop
JK flip-flop timing diagram
The JK flip-flop augments the behavior of the SR flip-flop (J=Set, K=Reset) by interpreting the J =
K = 1 condition as a "flip" or toggle command. Specifically, the combination J = 1, K = 0 is a
command to set the flip-flop; the combination J = 0, K = 1 is a command to reset the flip-flop; and
the combination J = K = 1 is a command to toggle the flip-flop, i.e., change its output to the logical
complement of its current value. Setting J = K = 0 maintains the current state. To synthesize a D
flip-flop, simply set K equal to the complement of J. Similarly, to synthesize a T flip-flop, set K equal
to J. The JK flip-flop is therefore a universal flip-flop, because it can be configured to work as an
SR flip-flop, a D flip-flop, or a T flip-flop.
The characteristic equation of the JK flip-flop is:
Qnext = J·Q′ + K′·Q
and the corresponding truth table is:
JK flip-flop operation[26]
Characteristic table:
J K Qnext Comment
0 0 Q     hold state
0 1 0     reset
1 0 1     set
1 1 Q′    toggle

Excitation table:
Q Qnext J K Comment
0 0     0 X No change
0 1     1 X Set
1 0     X 1 Reset
1 1     X 0 No change
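The characteristic equation can be checked directly against the table with a few lines of Python; the test sequence is illustrative.

```python
# Behavioral sketch of the JK flip-flop's characteristic equation
# Qnext = J·Q' + K'·Q, exercised against the truth table above.

def jk_next(j: int, k: int, q: int) -> int:
    return (j & (1 - q)) | ((1 - k) & q)

q = 0
for j, k, label in [(1, 0, "set"), (0, 0, "hold"), (1, 1, "toggle"),
                    (1, 1, "toggle"), (0, 1, "reset")]:
    q = jk_next(j, k, q)
    print(f"J={j} K={k} -> Q={q}  ({label})")
```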
Timing considerations
Timing parameters
Flip-flop setup, hold and clock-to-output timing parameters
The input must be held steady in a period around the rising edge of the clock known as the aperture.
Imagine taking a picture of a frog on a lily-pad.[27] Suppose the frog then jumps into the water. If
you take a picture of the frog as it jumps into the water, you will get a blurry picture of the frog
jumping into the water—it's not clear which state the frog was in. But if you take a picture while the
frog sits steadily on the pad (or is steadily in the water), you will get a clear picture. In the same
way, the input to a flip-flop must be held steady during the aperture of the flip-flop.
Setup time is the minimum amount of time the data input should be held steady before the clock
event, so that the data is reliably sampled by the clock.
Hold time is the minimum amount of time the data input should be held steady after the clock
event, so that the data is reliably sampled by the clock.
Aperture is the sum of setup and hold time. The data input should be held steady throughout this
time period.[27]
Recovery time is the minimum amount of time the asynchronous set or reset input should be
inactive before the clock event, so that the data is reliably sampled by the clock. The recovery time
for the asynchronous set or reset input is thereby similar to the setup time for the data input.
Removal time is the minimum amount of time the asynchronous set or reset input should be
inactive after the clock event, so that the data is reliably sampled by the clock. The removal time
for the asynchronous set or reset input is thereby similar to the hold time for the data input.
Short impulses applied to asynchronous inputs (set, reset) should not occur entirely within
the recovery-removal period, or else it becomes entirely indeterminable whether the flip-flop will
transition to the appropriate state. In the case where an asynchronous signal simply makes
one transition that happens to fall within the recovery/removal window, the flip-flop will eventually
transition to the appropriate state, but a very short glitch may or may not appear on the output,
depending on the synchronous input signal. This second situation may or may not have significance
to a circuit design.
Set and Reset (and other) signals may be either synchronous or asynchronous and therefore may
be characterized with either Setup/Hold or Recovery/Removal times, and synchronicity is very
dependent on the design of the flip-flop.
Differentiation between Setup/Hold and Recovery/Removal times is often necessary when verifying
the timing of larger circuits because asynchronous signals may be found to be less critical than
synchronous signals. The differentiation offers circuit designers the ability to define the verification
conditions for these types of signals independently.
Metastability
Main article: Metastability in electronics
Flip-flops are subject to a problem called metastability, which can happen when two inputs, such
as data and clock or clock and reset, are changing at about the same time. When the order is not
clear, within appropriate timing constraints, the result is that the output may behave unpredictably,
taking many times longer than normal to settle to one state or the other, or even oscillating several
times before settling. Theoretically, the time to settle down is not bounded. In a computer system,
this metastability can cause corruption of data or a program crash if the state is not stable before
another circuit uses its value; in particular, if two different logical paths use the output of a flip-flop,
one path can interpret it as a 0 and the other as a 1 when it has not resolved to stable state, putting
the machine into an inconsistent state.[28]
The metastability in flip-flops can be avoided by ensuring that the data and control inputs are held
valid and constant for specified periods before and after the clock pulse, called the setup time (tsu)
and the hold time (th) respectively. These times are specified in the data sheet for the device, and
are typically between a few nanoseconds and a few hundred picoseconds for modern devices.
Depending upon the flip-flop's internal organization, it is possible to build a device with a zero (or
even negative) setup or hold time requirement but not both simultaneously.
Unfortunately, it is not always possible to meet the setup and hold criteria, because the flip-flop
may be connected to a real-time signal that could change at any time, outside the control of the
designer. In this case, the best the designer can do is to reduce the probability of error to a certain
level, depending on the required reliability of the circuit. One technique for suppressing metastability
is to connect two or more flip-flops in a chain, so that the output of each one feeds the data input
of the next, and all devices share a common clock. With this method, the probability of a metastable
event can be reduced to a negligible value, but never to zero. The probability of metastability gets
closer and closer to zero as the number of flip-flops connected in series is increased. The number
of flip-flops being cascaded is referred to as the "ranking"; "dual-ranked" flip-flops (two flip-flops in
series) are a common situation.
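A commonly used first-order model makes the benefit of extra synchronizer stages concrete: the mean time between failures grows exponentially with the resolution time available. The sketch below uses that model; all constants are made up, as real values come from device characterization.

```python
# Illustrative sketch of synchronizer MTBF using the common first-order
# model MTBF = exp(t_r / tau) / (T0 * f_clk * f_data), where t_r is the
# slack time available for metastability resolution. Constants are assumed.

import math

TAU = 50e-12        # resolution time constant (s), assumed
T0 = 1e-9           # metastability window parameter (s), assumed
F_CLK = 100e6       # clock frequency (Hz)
F_DATA = 10e6       # asynchronous data toggle rate (Hz)
T_CLK = 1 / F_CLK

for stages in (1, 2, 3):
    t_resolve = stages * T_CLK * 0.8   # assume 80% of each period is slack
    mtbf = math.exp(t_resolve / TAU) / (T0 * F_CLK * F_DATA)
    print(f"{stages} stage(s): MTBF ~ {mtbf:.3g} s")
```

The exponential term dominates, which is why adding even one extra stage reduces the failure probability to a negligible (though never zero) level.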
So-called metastable-hardened flip-flops are available, which work by reducing the setup and hold
times as much as possible, but even these cannot eliminate the problem entirely. This is because
metastability is more than simply a matter of circuit design. When the transitions in the clock and
the data are close together in time, the flip-flop is forced to decide which event happened first.
However fast the device is made, there is always the possibility that the input events will be so
close together that it cannot detect which one happened first. It is therefore logically impossible to
build a perfectly metastable-proof flip-flop. Flip-flops are sometimes characterized for a maximum
settling time (the maximum time they will remain metastable under specified conditions). In this
case, dual-ranked flip-flops that are clocked slower than the maximum allowed metastability time
will provide proper conditioning for asynchronous (e.g., external) signals.
Propagation delay
Another important timing value for a flip-flop is the clock-to-output delay (common symbol in data
sheets: tCO) or propagation delay (tP), which is the time a flip-flop takes to change its output after
the clock edge. The time for a high-to-low transition (tPHL) is sometimes different from the time for
a low-to-high transition (tPLH).
When cascading flip-flops which share the same clock (as in a shift register), it is important to
ensure that the tCO of a preceding flip-flop is longer than the hold time (th) of the following flip-flop,
so data present at the input of the succeeding flip-flop is properly "shifted in" following the active
edge of the clock. This relationship between tCO and th is normally guaranteed if the flip-flops are
physically identical. Furthermore, for correct operation, the clock period must be greater than the
sum tCO + tsu plus any combinational logic delay between the flip-flops.
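These two shift-register constraints can be stated as a small check, shown below with illustrative numbers: hold is safe when the driving flip-flop's tCO exceeds the receiving flip-flop's tH, and the clock period must cover tCO plus combinational delay plus tSU.

```python
# Small sketch of the timing checks discussed above; all values illustrative.

T_CO = 0.9e-9     # clock-to-output delay (s)
T_SU = 0.6e-9     # setup time (s)
T_H = 0.3e-9      # hold time (s)
T_COMB = 2.0e-9   # combinational delay between stages (s)
T_CLK = 5.0e-9    # clock period (s)

hold_ok = T_CO > T_H                          # data cannot race through
setup_ok = T_CLK > T_CO + T_COMB + T_SU       # data arrives before next edge
print(f"hold  constraint met: {hold_ok}")
print(f"setup constraint met: {setup_ok} "
      f"(slack {T_CLK - (T_CO + T_COMB + T_SU):.2e} s)")
```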
Generalizations
Flip-flops can be generalized in at least two ways: by making them 1-of-N instead of 1-of-2, and by
adapting them to logic with more than two states. In the special cases of 1-of-3 encoding, or multi-
valued ternary logic, these elements may be referred to as flip-flap-flops.[29]
In a conventional flip-flop, exactly one of the two complementary outputs is high. This can be
generalized to a memory element with N outputs, exactly one of which is high (alternatively, where
exactly one of N is low). The output is therefore always a one-hot (respectively one-cold)
representation. The construction is similar to a conventional cross-coupled flip-flop; each output,
when high, inhibits all the other outputs.[30] Alternatively, more or less conventional flip-flops can be
used, one per output, with additional circuitry to make sure only one at a time can be true.[31]
Another generalization of the conventional flip-flop is a memory element for multi-valued logic. In
this case the memory element retains exactly one of the logic states until the control inputs induce
a change.[32] In addition, a multiple-valued clock can also be used, leading to new possible clock
transitions.
Threshold voltage
The threshold voltage, commonly abbreviated as Vth or VGS(th), of a field-effect transistor (FET) is
the minimum gate-to-source voltage that is needed to create a conducting path between the source
and drain terminals.
When referring to a junction field-effect transistor (JFET), the threshold voltage is often called
"pinch-off voltage" instead. This is somewhat confusing since "pinch off" applied to insulated-gate
field-effect transistor (IGFET) refers to the channel pinching that leads to current saturation
behaviour under high source–drain bias, even though the current is never off. Unlike "pinch off",
the term "threshold voltage" is unambiguous and refers to the same concept in any field-effect
transistor.
Basic principles[edit]
In n-channel enhancement-mode devices, a conductive channel does not exist naturally within the
transistor, and a positive gate-to-source voltage is necessary to create one. The positive
voltage attracts free-floating electrons within the body towards the gate, forming a conductive
channel. But first, enough electrons must be attracted near the gate to counter the dopant ions
added to the body of the FET; this forms a region with no mobile carriers called a depletion region,
and the voltage at which this occurs is the threshold voltage of the FET. Further gate-to-source
voltage increase will attract even more electrons towards the gate which are able to create a
conductive channel from source to drain; this process is called inversion.
In contrast, n-channel depletion-mode devices have a conductive channel naturally existing within
the transistor. Accordingly, the term 'threshold voltage' does not readily apply to turning such devices
'on', but is used instead to denote the voltage level at which the channel is wide enough to allow
electrons to flow easily. This ease-of-flow threshold also applies to p-channel depletion-
mode devices, in which a positive voltage from gate to body/source creates a depletion layer by
forcing the positively charged holes away from the gate-insulator/semiconductor interface, leaving
exposed a carrier-free region of immobile, negatively charged acceptor ions.
In wide planar transistors the threshold voltage is essentially independent of the drain–source
voltage and is therefore a well-defined characteristic; however, it is less clear in modern nanometer-
sized MOSFETs due to drain-induced barrier lowering.
Figures: depletion region of an nMOSFET biased below the threshold, and depletion region of an
nMOSFET biased above the threshold with the channel formed.
In the figures, the source (left side) and drain (right side) are labeled n+ to indicate heavily doped
(blue) n-regions. The depletion-layer dopant is labeled NA− to indicate that the ions in the (pink)
depletion layer are negatively charged and there are very few holes. In the (red) bulk the number
of holes p = NA, making the bulk charge neutral.
If the gate voltage is below the threshold voltage (top figure), the transistor is turned off and ideally
there is no current from the drain to the source of the transistor. In fact, there is a current even for
gate biases below the threshold (the subthreshold leakage current), although it is small and varies
exponentially with gate bias.
If the gate voltage is above the threshold voltage (lower figure), the transistor is turned on, due to
there being many electrons in the channel at the oxide-silicon interface, creating a low-resistance
channel where charge can flow from drain to source. For voltages significantly above the threshold,
this situation is called strong inversion. The channel is tapered when VD > 0 because the voltage
drop due to the current in the resistive channel reduces the oxide field supporting the channel as
the drain is approached.
Body effect[edit]
The body effect is the change in the threshold voltage by an amount approximately equal to the
change in VSB, the source–bulk voltage, because the body influences the threshold voltage when
it is not tied to the source. The body can be thought of as a second gate and is sometimes referred
to as the "back gate"; the body effect is therefore sometimes called the "back-gate effect".[1]
For an enhancement-mode nMOS MOSFET, the body effect upon the threshold voltage is computed
according to the Shichman–Hodges model[2] (accurate for very old technology) using the following
equation:
VTN = VTO + γ(√(VSB + 2φF) − √(2φF))
where VTN is the threshold voltage with substrate bias present, VTO is the threshold voltage for
zero substrate bias, γ is the body effect parameter, and 2φF is the surface potential.
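The effect is easy to evaluate numerically. The short sketch below plugs illustrative (not process-specific) parameters into the equation above:

import math

def vth_body_effect(v_to, gamma, phi_f, v_sb):
    # Shichman-Hodges body effect on an nMOS threshold voltage:
    #   VTN = VTO + gamma * (sqrt(VSB + 2*phiF) - sqrt(2*phiF))
    return v_to + gamma * (math.sqrt(v_sb + 2 * phi_f)
                           - math.sqrt(2 * phi_f))

# Illustrative parameters only: VTO = 0.45 V, gamma = 0.4 V^0.5,
# phiF = 0.35 V, reverse source-bulk bias of 0.5 V.
print(vth_body_effect(v_to=0.45, gamma=0.4, phi_f=0.35, v_sb=0.5))
# -> about 0.55 V: the threshold rises with reverse body bias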
Dependence on oxide thickness[edit]
In a given technology node, such as the 90-nm CMOS process, the threshold voltage depends
on the choice of oxide and on oxide thickness. Using the body formulas above, is directly
proportional to , and , which is the parameter for oxide thickness.
Thus, the thinner the oxide, the lower the threshold voltage. Although this may seem to be an
improvement, it is not without cost, because the thinner the oxide, the higher the subthreshold
leakage current through the device will be. Consequently, the design specification for the 90-nm
gate-oxide thickness was set at 1 nm to control the leakage current.[3] The leakage through such
thin oxides is a form of tunneling, called Fowler–Nordheim tunneling.[4]
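The proportionality can be made explicit. The sketch below computes COX = εOX/tOX and γ = √(2·q·εSi·NA)/COX for two oxide thicknesses; the doping level and thicknesses are illustrative only:

import math

EPS_0  = 8.854e-12        # vacuum permittivity, F/m
EPS_OX = 3.9 * EPS_0      # SiO2 permittivity
EPS_SI = 11.7 * EPS_0     # silicon permittivity
Q      = 1.602e-19        # electron charge, C

def body_effect_gamma(t_ox, n_a):
    # gamma = sqrt(2 * q * eps_si * NA) / Cox, with Cox = eps_ox / tox,
    # so gamma (and with it VTN) scales linearly with oxide thickness.
    c_ox = EPS_OX / t_ox
    return math.sqrt(2 * Q * EPS_SI * n_a) / c_ox

# Illustrative: NA = 1e17 cm^-3 = 1e23 m^-3
for t_ox in (1e-9, 2e-9):
    print(f"tOX = {t_ox * 1e9:.0f} nm -> gamma = "
          f"{body_effect_gamma(t_ox, 1e23):.3f} V^0.5")
# halving tOX halves gamma, lowering the threshold voltage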
Before scaling the design features down to 90 nm, a dual-oxide approach for creating the
oxide thickness was a common solution to this issue. With a 90 nm process technology, a
triple-oxide approach has been adopted in some cases.[5] One standard thin oxide is used
for most transistors, another for I/O driver cells, and a third for memory-and-pass transistor
cells. These differences are based purely on the effect of oxide thickness on the threshold
voltage of CMOS technologies.
Dependence on temperature[edit]
As with the case of oxide thickness affecting threshold voltage, temperature has an effect
on the threshold voltage of a CMOS device. Expanding on part of the equation in the body
effect section
We see that the surface potential has a direct relationship with the temperature. Looking
above, that the threshold voltage does not have a direct relationship but is not
independent of the effects. On average this variation is between −4 mV/K and −2 mV/K
depending on doping level.[6] For a change of 30 °C this results in significant variation
from the 500 mV design parameter commonly used for the 90 nm technology node.
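The quoted coefficient translates into a sizeable shift across an ordinary operating range. A quick check under the stated −4 to −2 mV/K assumption:

for dvth_dt in (-4e-3, -2e-3):    # V per kelvin, the range quoted above
    delta = dvth_dt * 30          # threshold shift for a 30 C change
    print(f"{dvth_dt * 1e3:.0f} mV/K -> dVth = {delta * 1e3:+.0f} mV "
          f"({abs(delta) / 0.5:.0%} of a 500 mV threshold)")
# -> shifts of 60-120 mV, i.e. 12-24% of the 500 mV design value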
Dependence on random dopant fluctuation[edit]
Random dopant fluctuation (RDF) is a form of process variation resulting from variation
in the implanted impurity concentration. In MOSFET transistors, RDF in the channel
region can alter the transistor's properties, especially threshold voltage. In newer process
technologies RDF has a larger effect because the total number of dopants is smaller.[7]
Research is being carried out to suppress the dopant fluctuation that leads to threshold-voltage
variation between devices undergoing the same manufacturing process.[8]
Subthreshold conduction or subthreshold leakage or subthreshold drain current is
the current between the source and drain of a MOSFET when the transistor is in the subthreshold
region, or weak-inversion region, that is, for gate-to-source voltages below the threshold voltage.
The terminology for various degrees of inversion is described in Tsividis.[1]
In digital circuits, subthreshold conduction is generally viewed as a parasitic leakage in a state that
would ideally have no current. In micropower analog circuits, on the other hand, weak inversion is
an efficient operating region, and subthreshold is a useful transistor mode around which circuit
functions are designed.[2]
In the past, the subthreshold conduction of transistors has usually been very small in the off state,
as gate voltage could be significantly below threshold; but as voltages have been scaled down with
transistor size, subthreshold conduction has become a bigger factor. Indeed, leakage from all
sources has increased: for a technology generation with threshold voltage of 0.2 V, leakage can
exceed 50% of total power consumption.[3]
The reason for a growing importance of subthreshold conduction is that the supply voltage has
continually scaled down, both to reduce the dynamic power consumption of integrated circuits (the
power that is consumed when the transistor is switching from an on-state to an off-state, which
depends on the square of the supply voltage), and to keep electric fields inside small devices low,
to maintain device reliability. The amount of subthreshold conduction is set by the threshold voltage,
which sits between ground and the supply voltage, and so has to be reduced along with the supply
voltage. That reduction means less gate voltage swing below threshold to turn the device off, and
as subthreshold conduction varies exponentially with gate voltage (see MOSFET: Cut-off Mode), it
becomes more and more significant as MOSFETs shrink in size.[4]
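The exponential dependence can be illustrated with the usual weak-inversion current model, ID = I0·exp((VGS − Vth)/(n·kT/q)); I0 and the slope factor n below are invented illustrative device parameters, not values for any real process:

import math

def subthreshold_current(v_gs, v_th, i0=1e-7, n=1.5, temp_k=300.0):
    # Weak-inversion drain current (a sketch, not a full device model):
    #   ID = I0 * exp((VGS - Vth) / (n * kT/q))
    v_t = 1.380649e-23 * temp_k / 1.602176634e-19   # thermal voltage kT/q
    return i0 * math.exp((v_gs - v_th) / (n * v_t))

# With n = 1.5 at room temperature, every ~90 mV of gate swing below
# the threshold changes the current by about one decade:
for v_gs in (0.5, 0.4, 0.3, 0.2):
    print(f"VGS = {v_gs:.1f} V -> ID = {subthreshold_current(v_gs, 0.5):.2e} A")

This is why a reduced gate-voltage swing below threshold, as described above, leaves so much more residual leakage.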
Subthreshold conduction is only one component of leakage: other leakage components that can
be roughly equal in size depending on the device design are gate-oxide leakage and junction
leakage.[5] Understanding sources of leakage and solutions to tackle the impact of leakage will be
a requirement for most circuit and system designers.[6]
Sub-threshold electronics[edit]
Some devices exploit sub-threshold conduction to process data without fully turning on or off. Even
in standard transistors, a small amount of current leaks when they are technically switched off.
Some sub-threshold devices have been able to operate with between 1 and 0.1 percent of the
power of standard chips.[7]
Such low-power operation allows some devices to function with the small amounts of power that
can be scavenged without an attached power supply, such as a wearable EKG monitor that can run
entirely on body heat.[7]
An integrated circuit or monolithic integrated circuit (also referred to as an IC, a chip, or
a microchip) is a set of electronic circuits on one small flat piece (or "chip") of semiconductor
material, normally silicon. The integration of large numbers of tiny transistors into a small chip
results in circuits that are orders of magnitude smaller, cheaper, and faster than those constructed
of discrete electronic components. The IC's mass production capability, reliability and building-
block approach to circuit design has ensured the rapid adoption of standardized ICs in place of
designs using discrete transistors. ICs are now used in virtually all electronic equipment and have
revolutionized the world of electronics. Computers, mobile phones, and other digital home
appliances are now inextricable parts of the structure of modern societies, made possible by the
small size and low cost of ICs.
ICs were made possible by experimental discoveries showing that semiconductor devices could
perform the functions of vacuum tubes, and by mid-20th-century technology advancements
in semiconductor device fabrication. Since their origins in the 1960s, the size, speed, and capacity
of chips have progressed enormously, driven by technical advances that fit more and more
transistors on chips of the same size - a modern chip may have several billion transistors in an area
the size of a human fingernail. These advances, roughly following Moore's law, make a computer
chip of today possess millions of times the capacity and thousands of times the speed of the
computer chips of the early 1970s.
ICs have two main advantages over discrete circuits: cost and performance. Cost is low because
the chips, with all their components, are printed as a unit by photolithography rather than being
constructed one transistor at a time. Furthermore, packaged ICs use much less material than
discrete circuits. Performance is high because the IC's components switch quickly and consume
comparatively little power because of their small size and close proximity. The main disadvantage
of ICs is the high cost to design them and fabricate the required photomasks. This high initial cost
means ICs are only practical when high production volumes are anticipated.
Terminology[edit]
An integrated circuit is defined as:[1]
A circuit in which all or some of the circuit elements are inseparably associated and electrically
interconnected so that it is considered to be indivisible for the purposes of construction and
commerce.
Circuits meeting this definition can be constructed using many different technologies, including thin-
film transistors, thick film technologies, or hybrid integrated circuits. However, in general
usage integrated circuit has come to refer to the single-piece circuit construction originally known
as a monolithic integrated circuit.[2][3]
Invention[edit]
Main article: Invention of the integrated circuit
Early developments of the integrated circuit go back to 1949, when German engineer Werner
Jacobi (Siemens AG)[4] filed a patent for an integrated-circuit-like semiconductor amplifying
device[5] showing five transistors on a common substrate in a 3-stage amplifier arrangement.
Jacobi disclosed small and cheap hearing aids as typical industrial applications of his patent. An
immediate commercial use of his patent has not been reported.
The idea of the integrated circuit was conceived by Geoffrey Dummer (1909–2002), a radar
scientist working for the Royal Radar Establishment of the British Ministry of Defence. Dummer
presented the idea to the public at the Symposium on Progress in Quality Electronic Components
in Washington, D.C. on 7 May 1952.[6] He gave many symposia publicly to propagate his ideas and
unsuccessfully attempted to build such a circuit in 1956.
A precursor idea to the IC was to create small ceramic squares (wafers), each containing a single
miniaturized component. Components could then be integrated and wired into a bidimensional or
tridimensional compact grid. This idea, which seemed very promising in 1957, was proposed to the
US Army by Jack Kilby and led to the short-lived Micromodule Program (similar to 1951's Project
Tinkertoy).[7] However, as the project was gaining momentum, Kilby came up with a new,
revolutionary design: the IC.
Jack Kilby's original integrated circuit
Newly employed by Texas Instruments, Kilby recorded his initial ideas concerning the integrated
circuit in July 1958, successfully demonstrating the first working integrated example on 12
September 1958.[8] In his patent application of 6 February 1959,[9] Kilby described his new device
as "a body of semiconductor material … wherein all the components of the electronic circuit are
completely integrated."[10] The first customer for the new invention was the US Air Force.[11]
Kilby won the 2000 Nobel Prize in Physics for his part in the invention of the integrated circuit.[12] His
work was named an IEEE Milestone in 2009.[13]
Half a year after Kilby, Robert Noyce at Fairchild Semiconductor developed his own idea of an
integrated circuit that solved many practical problems Kilby's had not. Noyce's design was made
of silicon, whereas Kilby's chip was made of germanium. Noyce credited Kurt Lehovec of Sprague
Electric for the principle of p–n junction isolation, a key concept behind the IC.[14] This isolation
allows each transistor to operate independently despite being parts of the same piece of silicon.
Fairchild Semiconductor was also home of the first silicon-gate IC technology with self-aligned
gates, the basis of all modern CMOS computer chips. The technology was developed by Italian
physicist Federico Faggin in 1968. In 1970, he joined Intel in order to develop the first single-
chip central processing unit (CPU) microprocessor, the Intel 4004, for which he received
the National Medal of Technology and Innovation in 2010. The 4004 was designed
by Busicom's Masatoshi Shima and Intel's Ted Hoff in 1969, but it was Faggin's improved design
in 1970 that made it a reality.[15]
Advances[edit]
Advances in IC technology, primarily smaller features and larger chips, have allowed the number
of transistors in an integrated circuit to double every two years, a trend known as Moore's law. This
increased capacity has been used to decrease cost and increase functionality. In general, as the
feature size shrinks, almost every aspect of an IC's operation improves. The cost per transistor and
the switching power consumption per transistor go down, while the memory capacity and speed go
up, through the relationships defined by Dennard scaling.[16] Because speed, capacity, and power
consumption gains are apparent to the end user, there is fierce competition among the
manufacturers to use finer geometries. Over the years, transistor sizes have decreased from 10s
of microns in the early 1970s to 10 nanometers in 2017 [17] with a corresponding million-fold
increase in transistors per unit area. As of 2016, typical chip areas range from a few
square millimeters to around 600 mm2, with up to 25 million transistors per mm2.[18]
The expected shrinking of feature sizes, and the needed progress in related areas was forecast for
many years by the International Technology Roadmap for Semiconductors (ITRS). The final ITRS
was issued in 2016, and it is being replaced by the International Roadmap for Devices and
Systems.[19]
Initially, ICs were strictly electronic devices. The success of ICs has led to the integration of other
technologies, in the attempt to obtain the same advantages of small size and low cost. These
technologies include mechanical devices, optics, and sensors.
 Charge-coupled devices, and the closely related active pixel sensors, are chips that are
sensitive to light. They have largely replaced film in scientific, medical, and consumer
applications. Billions of these devices are now produced each year for applications such as
cellphones, tablets, and digital cameras. This sub-field of ICs won the Nobel prize in 2009.
 Very small mechanical devices driven by electricity can be integrated onto chips, a technology
known as microelectromechanical systems. These devices were developed in the late
1980s[20] and are used in a variety of commercial and military applications. Examples
include DLP projectors, inkjet printers, and accelerometers and MEMS gyroscopes used to
deploy automobile airbags.
 Since the early 2000s, the integration of optical functionality (optical computing) into silicon
chips has been actively pursued in both academic research and in industry resulting in the
successful commercialization of silicon based integrated optical transceivers combining optical
devices (modulators, detectors, routing) with CMOS-based electronics.[21] Integrated optical
circuits are also being developed.
 Integrated circuits are also being developed for sensor applications in medical implants or
other bioelectronic devices.[22] Special sealing techniques have to be applied in such biogenic
environments to avoid corrosion or biodegradation of the exposed semiconductor materials.[23]
As of 2016, the vast majority of all transistors are fabricated in a single layer on one side of a chip
of silicon in a flat 2-dimensional planar process. Researchers have produced prototypes of several
promising alternatives, such as:
 various approaches to stacking several layers of transistors to make a three-dimensional
integrated circuit, such as through-silicon via, "monolithic 3D",[24] stacked wire bonding,[25] etc.
 transistors built from other materials: graphene transistors, molybdenite transistors, carbon
nanotube field-effect transistor, gallium nitride transistor, transistor-like nanowire electronic
devices, organic field-effect transistor, etc.
 fabricating transistors over the entire surface of a small sphere of silicon.[26][27]
 modifications to the substrate, typically to make "flexible transistors" for a flexible display or
other flexible electronics, possibly leading to a roll-away computer.
Design[edit]
Main articles: Electronic design automation and Hardware description language
The cost of designing and developing a complex integrated circuit is quite high, normally in the
multiple tens of millions of dollars.[28] This only makes economic sense if production volume is high,
so the non-recurring engineering (NRE) costs are spread across typically millions of production
units.
Modern semiconductor chips have billions of components, and are too complex to be designed by
hand. Software tools to help the designer are essential. Electronic Design Automation (EDA), also
referred to as Electronic Computer-Aided Design (ECAD),[29] is a category of software tools for
designing electronic systems, including integrated circuits. The tools work together in a design
flow that engineers use to design and analyze entire semiconductor chips.
Integrated circuits can be classified into analog,[30] digital[31] and mixed signal[32] (both analog and
digital on the same chip).
Digital integrated circuits can contain anywhere from one[33] to billions[18] of logic gates, flip-
flops, multiplexers, and other circuits in a few square millimeters. The small size of these circuits
allows high speed, low power dissipation, and reduced manufacturing cost compared with board-
level integration. These digital ICs, typically microprocessors, DSPs, and microcontrollers, work
using boolean algebra to process "one" and "zero" signals.
The die from an Intel 8742, an 8-bit microcontroller that includes a CPU running at 12 MHz, 128
bytes of RAM, 2048 bytes of EPROM, and I/O in the same chip
Among the most advanced integrated circuits are the microprocessors or "cores", which control
everything from computers and cellular phones to digital microwave ovens. Digital memory
chips and application-specific integrated circuits (ASICs) are examples of other families of
integrated circuits that are important to the modern information society.
In the 1980s, programmable logic devices were developed. These devices contain circuits whose
logical function and connectivity can be programmed by the user, rather than being fixed by the
integrated circuit manufacturer. This allows a single chip to be programmed to implement different
LSI-type functions such as logic gates, adders and registers. Current devices called field-
programmable gate arrays (FPGAs) can (as of 2016) implement the equivalent of millions of gates
in parallel and operate up to 1 GHz.[34]
Analog ICs, such as sensors, power management circuits, and operational amplifiers, work by
processing continuous signals. They perform functions like amplification, active
filtering, demodulation, and mixing. Analog ICs ease the burden on circuit designers by having
expertly designed analog circuits available instead of designing a difficult analog circuit from
scratch.
ICs can also combine analog and digital circuits on a single chip to create functions such as A/D
converters and D/A converters. Such mixed-signal circuits offer smaller size and lower cost, but
must carefully account for signal interference. Prior to the late 1990s, radios could not be fabricated
in the same low-cost CMOS processes as microprocessors. But since 1998, a large number of
radio chips have been developed using CMOS processes. Examples include Intel's DECT cordless
phone, or 802.11 (Wi-Fi) chips created by Atheros and other companies.[35]
Modern electronic component distributors often further sub-categorize the huge variety of
integrated circuits now available:
 Digital ICs are further sub-categorized as logic ICs, memory chips, interface ICs (level
shifters, serializer/deserializer, etc.), Power Management ICs, and programmable devices.
 Analog ICs are further sub-categorized as linear ICs and RF ICs.
 Mixed-signal ICs are further sub-categorized as data acquisition ICs (including
A/D converters, D/A converter, digital potentiometers) and clock/timing ICs.
Manufacturing[edit]
Fabrication[edit]
Main article: Semiconductor fabrication
Rendering of a small standard cell with three metal layers (dielectric has been removed). The sand-
colored structures are metal interconnect, with the vertical pillars being contacts, typically plugs of
tungsten. The reddish structures are polysilicon gates, and the solid at the bottom is the crystalline
silicon bulk.
Schematic structure of a CMOS chip, as built in the early 2000s. The graphic shows LDD MISFETs
on an SOI substrate with five metallization layers and solder bump for flip-chip bonding. It also
shows the section for FEOL (front-end of line), BEOL (back-end of line) and first parts of back-end
process.
The semiconductors of the periodic table of the chemical elements were identified as the most
likely materials for a solid-state vacuum tube. Starting with copper oxide, proceeding
to germanium, then silicon, the materials were systematically studied in the 1940s and 1950s.
Today, monocrystalline silicon is the main substrate used for ICs although some III-V compounds
of the periodic table such as gallium arsenide are used for specialized applications
like LEDs, lasers, solar cells and the highest-speed integrated circuits. It took decades to perfect
methods of creating crystals without defects in the crystalline structure of the semiconducting
material.
Semiconductor ICs are fabricated in a planar process which includes three key process steps –
imaging, deposition and etching. The main process steps are supplemented by doping and
cleaning.
Mono-crystal silicon wafers (or for special applications, silicon on sapphire or gallium arsenide
wafers) are used as the substrate. Photolithography is used to mark different areas of the substrate
to be doped or to have polysilicon, insulators or metal (typically aluminium or copper) tracks
deposited on them.
 Integrated circuits are composed of many overlapping layers, each defined by photolithography,
and normally shown in different colors. Some layers mark where various dopants are diffused
into the substrate (called diffusion layers), some define where additional ions are implanted
(implant layers), some define the conductors (polysilicon or metal layers), and some define the
connections between the conducting layers (via or contact layers). All components are
constructed from a specific combination of these layers.
 In a self-aligned CMOS process, a transistor is formed wherever the gate layer (polysilicon or
metal) crosses a diffusion layer.
 Capacitive structures, in form very much like the parallel conducting plates of a traditional
electrical capacitor, are formed according to the area of the "plates", with insulating material
between the plates. Capacitors of a wide range of sizes are common on ICs.
 Meandering stripes of varying lengths are sometimes used to form on-chip resistors, though
most logic circuits do not need any resistors. The ratio of the length of the resistive structure to
its width, combined with its sheet resistivity, determines the resistance (see the sketch after this
list).
 More rarely, inductive structures can be built as tiny on-chip coils, or simulated by gyrators.
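As a sketch of the resistor rule just mentioned, the resistance is the sheet resistance multiplied by the number of "squares" (length over width); the function name and values below are illustrative only:

def on_chip_resistance(length_um, width_um, sheet_ohms_per_sq):
    # R = Rs * (L / W): sheet resistance times the number of squares
    # in the meandering stripe.
    return sheet_ohms_per_sq * (length_um / width_um)

# e.g. a 50 um long, 1 um wide polysilicon stripe at ~10 ohm/square:
print(on_chip_resistance(50.0, 1.0, 10.0))   # -> 500.0 ohms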
Since a CMOS device only draws current on the transition between logic states, CMOS devices
consume much less current than bipolar devices.
A random-access memory is the most regular type of integrated circuit; the highest density devices
are thus memories; but even a microprocessor will have memory on the chip. (See the regular array
structure at the bottom of the first image.) Although the structures are intricate – with widths which
have been shrinking for decades – the layers remain much thinner than the device widths. The
layers of material are fabricated much like a photographic process, although light waves in
the visible spectrum cannot be used to "expose" a layer of material, as they would be too large for
the features. Thus photons of higher frequencies (typically ultraviolet) are used to create the
patterns for each layer. Because each feature is so small, electron microscopes are essential tools
for a process engineer who might be debugging a fabrication process.
Each device is tested before packaging using automated test equipment (ATE), in a process known
as wafer testing, or wafer probing. The wafer is then cut into rectangular blocks, each of which is
called a die. Each good die (plural dice, dies, or die) is then connected into a package using
aluminium (or gold) bond wires which are thermosonically bonded[36] to pads, usually found around
the edge of the die. Thermosonic bonding was first introduced by A. Coucoulas and provided a
reliable means of forming these vital electrical connections to the outside world. After packaging,
the devices go through final testing on the same or similar ATE used during wafer probing. Industrial
CT scanning can also be used. Test cost can account for over 25% of the cost of fabrication on
lower-cost products, but can be negligible on low-yielding, larger, or higher-cost devices.
As of 2016, a fabrication facility (commonly known as a semiconductor fab) can cost over US$8
billion to construct.[37] The cost of a fabrication facility rises over time (Rock's law) because much
of the operation is automated. Today, the most advanced processes employ the following
techniques:
 The wafers are up to 300 mm in diameter (wider than a common dinner plate).
 As of 2016, a state of the art foundry can produce 14 nm transistors, as implemented
by Intel, TSMC, Samsung, and Global Foundries. The next step, to 10 nm devices, is expected
in 2017.[38]
 Copper interconnects where copper wiring replaces aluminium for interconnects.
 Low-K dielectric insulators.
 Silicon on insulator (SOI).
 Strained silicon in a process used by IBM known as strained silicon directly on
insulator (SSDOI).
 Multigate devices such as tri-gate transistors being manufactured by Intel from 2011 in their
22 nm process.
Packaging[edit]
Main article: Integrated circuit packaging
A Soviet MSI nMOS chip made in 1977, part of a four-chip calculator set designed in 1970[39]
The earliest integrated circuits were packaged in ceramic flat packs, which continued to be used
by the military for their reliability and small size for many years. Commercial circuit packaging
quickly moved to the dual in-line package (DIP), first in ceramic and later in plastic. In the 1980s
pin counts of VLSI circuits exceeded the practical limit for DIP packaging, leading to pin grid
array (PGA) and leadless chip carrier (LCC) packages. Surface-mount packaging appeared in the
early 1980s and became popular in the late 1980s, using finer lead pitch with leads formed as either
gull-wing or J-lead, as exemplified by the small-outline integrated circuit (SOIC) package – a carrier
which occupies an area about 30–50% less than an equivalent DIP and is typically 70% thinner.
This package has "gull wing" leads protruding from the two long sides and a lead spacing of
0.050 inches.
In the late 1990s, plastic quad flat pack (PQFP) and thin small-outline package (TSOP) packages
became the most common for high pin count devices, though PGA packages are still often used
for high-end microprocessors. Intel and AMD are currently transitioning from PGA packages
on high-end microprocessors to land grid array (LGA) packages.
Ball grid array (BGA) packages have existed since the 1970s. Flip-chip Ball Grid Array packages,
which allow for much higher pin count than other package types, were developed in the 1990s. In
an FCBGA package the die is mounted upside-down (flipped) and connects to the package balls
via a package substrate that is similar to a printed-circuit board rather than by wires. FCBGA
packages allow an array of input-output signals (called Area-I/O) to be distributed over the entire
die rather than being confined to the die periphery.
Traces going out of the die, through the package, and into the printed circuit board have very
different electrical properties, compared to on-chip signals. They require special design techniques
and need much more electric power than signals confined to the chip itself.
When multiple dies are put in one package, the result is a System in Package, or SiP. A multi-chip
module, or MCM, is created by combining multiple dies on a small substrate often made of ceramic.
The distinction between a big MCM and a small printed circuit board is sometimes fuzzy.
Chip labeling and manufacture date[edit]
Most integrated circuits are large enough to include identifying information. Four common sections
are the manufacturer's name or logo, the part number, a part production batch number and serial
number, and a four-digit date-code to identify when the chip was manufactured. Extremely
small surface mount technology parts often bear only a number used in a manufacturer's lookup
table to find the chip characteristics.
The manufacturing date is commonly represented as a two-digit year followed by a two-digit week
code, such that a part bearing the code 8341 was manufactured in week 41 of 1983, or
approximately in October 1983.
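Decoding such a YYWW code is trivial to automate; the helper below is a hypothetical illustration and assumes a 19xx century, which the part itself cannot disambiguate:

def decode_date_code(code):
    # Two-digit year followed by two-digit week, e.g. "8341".
    # The century is ambiguous on the part itself; 19xx is assumed here.
    year, week = int(code[:2]), int(code[2:])
    return 1900 + year, week

print(decode_date_code("8341"))   # -> (1983, 41): week 41 of 1983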
Intellectual property[edit]
Main article: Integrated circuit layout design protection
The possibility of copying by photographing each layer of an integrated circuit and
preparing photomasks for its production on the basis of the photographs obtained is a reason for
the introduction of legislation for the protection of layout-designs. The Semiconductor Chip
Protection Act of 1984 established intellectual property protection for photomasks used to produce
integrated circuits.[40]
A diplomatic conference was held at Washington, D.C., in 1989, which adopted a Treaty on
Intellectual Property in Respect of Integrated Circuits (IPIC Treaty).
The Treaty on Intellectual Property in respect of Integrated Circuits, also called Washington Treaty
or IPIC Treaty (signed at Washington on 26 May 1989) is currently not in force, but was partially
integrated into the TRIPS agreement.[41]
National laws protecting IC layout designs have been adopted in a number of countries, including
Japan,[42] the EC,[43] the UK, Australia, and Korea.[44]
Other developments[edit]
Future developments seem to follow the multi-core multi-microprocessor paradigm, already used
by Intel and AMD multi-core processors. Rapport Inc. and IBM started shipping the KC256 in 2006,
a 256-core microprocessor. Intel, as recently as February–August 2011, unveiled a prototype, "not
for commercial sale" chip that bears 80 cores. Each core is capable of handling its own task
independently of the others. This is in response to heat-versus-speed limit, that is about to be
reached using existing transistor technology (see: thermal design power). This design provides a
new challenge to chip programming. Parallel programming languages such as the open-
source X10 programming language are designed to assist with this task.[45]
Generations[edit]
In the early days of simple integrated circuits, the technology's large scale limited each chip to only
a few transistors, and the low degree of integration meant the design process was relatively simple.
Manufacturing yields were also quite low by today's standards. As the technology progressed,
millions, then billions[46] of transistors could be placed on one chip, and good designs required
thorough planning, giving rise to the field of Electronic Design Automation, or EDA.
Name   Signification                   Year   Transistor count[47]   Logic gate count[48]
SSI    small-scale integration         1964   1 to 10                1 to 12
MSI    medium-scale integration        1968   10 to 500              13 to 99
LSI    large-scale integration         1971   500 to 20,000          100 to 9,999
VLSI   very-large-scale integration    1980   20,000 to 1,000,000    10,000 to 99,999
ULSI   ultra-large-scale integration   1984   1,000,000 and more     100,000 and more
SSI, MSI and LSI[edit]
The first integrated circuits contained only a few transistors. Early digital circuits containing tens of
transistors provided a few logic gates, and early linear ICs such as the Plessey SL201 or
the Philips TAA320 had as few as two transistors. The number of transistors in an integrated circuit
has increased dramatically since then. The term "large-scale integration" (LSI) was first used
by IBM scientist Rolf Landauer when describing the theoretical concept; that term gave
rise to the terms "small-scale integration" (SSI), "medium-scale integration" (MSI), "very-large-scale
integration" (VLSI), and "ultra-large-scale integration" (ULSI). The early integrated circuits were
SSI.
SSI circuits were crucial to early aerospace projects, and aerospace projects helped inspire
development of the technology. Both the Minuteman missile and Apollo program needed
lightweight digital computers for their inertial guidance systems. Although the Apollo guidance
computer led and motivated integrated-circuit technology,[49] it was the Minuteman missile that
forced it into mass-production. The Minuteman missile program and various other Navy programs
accounted for the total $4 million integrated circuit market in 1962, and by 1968, U.S. Government
space and defense spending still accounted for 37% of the $312 million total production.
The demand by the U.S. Government supported the nascent integrated circuit market until costs
fell enough to allow IC firms to penetrate first the industrial and eventually the consumer markets.
The average price per integrated circuit dropped from $50.00 in 1962 to $2.33 in 1968.[50] Integrated
circuits began to appear in consumer products by the turn of the decade, a typical application
being FM inter-carrier sound processing in television receivers.
The first MOS chips were small-scale integration chips for NASA satellites.[51]
The next step in the development of integrated circuits, taken in the late 1960s, introduced devices
which contained hundreds of transistors on each chip, called "medium-scale integration" (MSI).
In 1964, Frank Wanlass demonstrated a single-chip 16-bit shift register he designed, with an
incredible (at the time) 120 transistors on a single chip.[51][52]
MSI devices were attractive economically because while they cost a little more to produce than SSI
devices, they allowed more complex systems to be produced using smaller circuit boards, less
assembly work (because of fewer separate components), and a number of other advantages.
Further development, driven by the same economic factors, led to "large-scale integration" (LSI) in
the mid-1970s, with tens of thousands of transistors per chip.
The masks used to process and manufacture SSI, MSI and early LSI and VLSI devices (such as
the microprocessors of the early 1970s) were mostly created by hand, often using Rubylith-tape or
similar.[53] For large or complex ICs (such as memories or processors), this was often done by
specially hired layout people under supervision of a team of engineers, who would also, along with
the circuit designers, inspect and verify the correctness and completeness of each mask. However,
modern VLSI devices contain so many transistors, layers, interconnections, and other features that
it is no longer feasible to check the masks or do the original design by hand. The engineer depends
on computer programs and other hardware aids to do most of this work.[54]
Integrated circuits such as 1K-bit RAMs, calculator chips, and the first microprocessors, that began
to be manufactured in moderate quantities in the early 1970s, had under 4,000 transistors. True
LSI circuits, approaching 10,000 transistors, began to be produced around 1974, for computer main
memories and second-generation microprocessors.
Some SSI and MSI chips, like discrete transistors, are still mass-produced, both to maintain old
equipment and build new devices that require only a few gates. The 7400 series of TTL chips, for
example, has become a de facto standard and remains in production.
VLSI[edit]
Main article: Very-large-scale integration
21scheme vtu syllabus of visveraya technological university
 
"United Nations Park" Site Visit Report.
"United Nations Park" Site  Visit Report."United Nations Park" Site  Visit Report.
"United Nations Park" Site Visit Report.
 
Geometric constructions Engineering Drawing.pdf
Geometric constructions Engineering Drawing.pdfGeometric constructions Engineering Drawing.pdf
Geometric constructions Engineering Drawing.pdf
 
Dynamo Scripts for Task IDs and Space Naming.pptx
Dynamo Scripts for Task IDs and Space Naming.pptxDynamo Scripts for Task IDs and Space Naming.pptx
Dynamo Scripts for Task IDs and Space Naming.pptx
 
NEWLETTER FRANCE HELICES/ SDS SURFACE DRIVES - MAY 2024
NEWLETTER FRANCE HELICES/ SDS SURFACE DRIVES - MAY 2024NEWLETTER FRANCE HELICES/ SDS SURFACE DRIVES - MAY 2024
NEWLETTER FRANCE HELICES/ SDS SURFACE DRIVES - MAY 2024
 
8th International Conference on Soft Computing, Mathematics and Control (SMC ...
8th International Conference on Soft Computing, Mathematics and Control (SMC ...8th International Conference on Soft Computing, Mathematics and Control (SMC ...
8th International Conference on Soft Computing, Mathematics and Control (SMC ...
 
Diploma Engineering Drawing Qp-2024 Ece .pdf
Diploma Engineering Drawing Qp-2024 Ece .pdfDiploma Engineering Drawing Qp-2024 Ece .pdf
Diploma Engineering Drawing Qp-2024 Ece .pdf
 
Maher Othman Interior Design Portfolio..
Maher Othman Interior Design Portfolio..Maher Othman Interior Design Portfolio..
Maher Othman Interior Design Portfolio..
 
UNIT 4 PTRP final Convergence in probability.pptx
UNIT 4 PTRP final Convergence in probability.pptxUNIT 4 PTRP final Convergence in probability.pptx
UNIT 4 PTRP final Convergence in probability.pptx
 

Boosting flip flop

To support the input-dependent boosting, the non-inverting input (D) is coupled to NB through an nMOS transistor, and the inverting input (DB) is coupled to N through another nMOS transistor, as shown in Fig. 1(b). In the case where a low data is stored in the flip-flop, giving the capacitor presetting of the left diagram in Fig. 1(a), a high input pulls NB down to the ground, letting N be boosted toward -VDD by capacitive coupling [upper left diagram in Fig. 1(b)]. A low input instead connects N to the ground; since that node is already preset to VSS, there is no voltage swing to couple through the capacitor, and no boosting occurs [lower left diagram in Fig. 1(b)].

Figure 1: Conceptual circuit diagrams for (a) output data-dependent presetting

In the other case, where a high data is stored in the flip-flop, giving the capacitor presetting of the right diagram in Fig. 1(a), a low input pulls N down to the ground, letting NB be boosted toward -VDD by capacitive coupling [lower right diagram in Fig. 1(b)]. A high input instead connects NB to the ground; since that node is already preset to VSS, there is no voltage swing to couple to N, and no boosting occurs [upper right diagram in Fig. 1(b)]. Table I summarizes these operations. With them, any redundant boosting is eliminated, resulting in a significant power reduction, especially at low switching activity.
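The decision logic of Table I is compact enough to state in a few lines of code. The following is a minimal behavioral sketch of the preset-and-boost rule, not the circuit itself; the function name and the normalized voltage levels (0 = GND, 1 = VDD, -1 = boosted toward -VDD) are our own illustrative conventions, with node names N and NB following Fig. 1.

    # Behavioral sketch of output-dependent presetting and input-dependent
    # boosting (Table I). Voltage levels are normalized for illustration:
    # 0 = GND, 1 = VDD, -1 = boosted toward -VDD.
    def conditional_boost(q, d):
        # Output-dependent presetting: N and NB preset from Q and QB.
        n, nb = (0, 1) if q == 0 else (1, 0)
        boosted = None
        if d != q:                  # input-dependent boosting: only on a real capture
            if d == 1:              # Q low, D high: NB pulled to GND, N boosted
                nb, n = 0, -1
                boosted = "N"
            else:                   # Q high, D low: N pulled to GND, NB boosted
                n, nb = 0, -1
                boosted = "NB"
        return n, nb, boosted       # boosted is None when boosting would be redundant

    for q in (0, 1):
        for d in (0, 1):
            print(f"Q={q} D={d} ->", conditional_boost(q, d))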
Table 1: Data-dependent presetting and boosting

Circuit Implementation:
The structure of the proposed conditional-boosting flip-flop (CBFF), based on the concepts described in the previous section, is shown in Fig. 2. It consists of a conditional-boosting differential stage, a symmetric latch, and an explicit brief pulse generator. In the conditional-boosting differential stage shown in Fig. 2(a), MP5/MP6/MP7 and MN8/MN9 perform the output-dependent presetting, whereas MN5/MN6/MN7, together with the boosting capacitor CBOOT, perform the input-dependent boosting. MP8–MP13 and MN10–MN15 constitute the symmetric latch, as shown in Fig. 2(b). Some transistors in the differential stage are driven by a brief pulsed signal PS generated by a novel explicit pulse generator, shown in Fig. 2(c). Unlike conventional pulse generators, the proposed pulse generator has no pMOS keeper, resulting in higher speed and lower power because there is no signal fighting during the pull-down of PSB. The keeper's role of maintaining a high logic value on PSB is taken over by MP1, added in parallel with MN1, which also helps the fast pull-down of PSB. At the rising edge of CLK, PSB is rapidly discharged by MN1, MP1, and I1, driving PS high. After the latency of I2 and I3, PSB is charged by MP2, so PS returns to low, producing a brief positive pulse at PS whose width is set by the latency of I2 and I3. When CLK is low, PSB is maintained high by MP1, although MP2 is OFF. According to our evaluation, the energy reduction is up to 9% for the same slew rate and pulse width.

Figure 2: Proposed CBFF. (a) Conditional-boosting differential stage. (b) Symmetric latch. (c) Explicit brief pulse generator.
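At the behavioral level, an explicit pulse generator of this kind computes PS as the AND of CLK with a delayed, inverted copy of CLK, so PS is high only briefly after each rising edge. The discrete-time sketch below shows only that timing relationship; the three-sample delay standing in for the I2/I3 latency is an arbitrary assumption, not a measured value.

    # Behavioral sketch of a brief pulse generator: PS goes high at the
    # rising edge of CLK and falls again after a short delay (modeling the
    # I2/I3 inverter latency as DELAY samples).
    DELAY = 3  # assumed delay, in samples

    def pulse_generator(clk_samples, delay=DELAY):
        ps = []
        for t, clk in enumerate(clk_samples):
            delayed_clk = clk_samples[t - delay] if t >= delay else 0
            psb = not (clk and not delayed_clk)  # PSB dips low just after a rising edge
            ps.append(int(not psb))              # PS is the inverse of PSB
        return ps

    clk = [0] * 5 + [1] * 10 + [0] * 10 + [1] * 5
    print("CLK", clk)
    print("PS ", pulse_generator(clk))  # a 3-sample pulse at each rising edge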
Boot strapping:
In general, bootstrapping refers to a self-starting process that is supposed to proceed without external input. In computer technology the term (usually shortened to booting) refers to the process of loading the basic software into the memory of a computer after power-on or general reset, especially the operating system, which will then take care of loading other software as needed. The term appears to have originated in the early 19th-century United States (particularly in the phrase "pull oneself over a fence by one's bootstraps") to mean an absurdly impossible action, an adynaton.

Software loading and execution:
Booting is the process of starting a computer, specifically with regard to starting its software. The process involves a chain of stages, in which at each stage a smaller, simpler program loads and then executes the larger, more complicated program of the next stage. It is in this sense that the computer "pulls itself up by its bootstraps": it improves itself by its own efforts.
Booting is a chain of events that starts with execution of hardware-based procedures and may then hand off to firmware and software which is loaded into main memory. Booting often involves processes such as performing self-tests, loading configuration settings, and loading a BIOS, resident monitors, a hypervisor, an operating system, or utility software.

The computer term bootstrap began as a metaphor in the 1950s. In computers, pressing a bootstrap button caused a hardwired program to read a bootstrap program from an input unit. The computer would then execute the bootstrap program, which caused it to read more program instructions. It became a self-sustaining process that proceeded without external help from manually entered instructions. As a computing term, bootstrap has been used since at least 1953.[8]

Software development:
Bootstrapping can also refer to the development of successively more complex, faster programming environments. The simplest environment might be a very basic text editor (e.g., ed) and an assembler program. Using these tools, one can write a more complex text editor and a simple compiler for a higher-level language, and so on, until one has a graphical IDE and an extremely high-level programming language.

Historically, bootstrapping also refers to an early technique for computer program development on new hardware, since replaced by the use of a cross compiler executed by a pre-existing computer. Bootstrapping in program development began during the 1950s, when each program was constructed on paper in decimal code or in binary code, bit by bit (1s and 0s), because there was no high-level computer language, no compiler, no assembler, and no linker. A tiny assembler program was hand-coded for a new computer (for example the IBM 650) which converted a few instructions into binary or decimal code: A1. This simple assembler program was then rewritten in its just-defined assembly language, but with extensions that would enable the use of some additional mnemonics for more complex operation codes. The enhanced assembler's source program was then assembled by its predecessor's executable (A1) into binary or decimal code to give A2, and the cycle repeated (now with those enhancements available) until the entire instruction set was coded, branch addresses were automatically calculated, and other conveniences (such as conditional assembly, macros, and optimisations) were established. This was how the early assembly program SOAP (Symbolic Optimal Assembly Program) was developed. Compilers, linkers, loaders, and utilities were then coded in assembly language, further continuing the bootstrapping process of developing complex software systems by using simpler software.

The term was also championed by Doug Engelbart to refer to his belief that organizations could better evolve by improving the process they use for improvement (thus obtaining a compounding effect over time). His SRI team, which developed the NLS hypertext system, applied this strategy by using the tool they had developed to improve the tool.

Compilers:
The development of compilers for new programming languages, first written in an existing language and then rewritten in the new language and compiled by itself, is another example of the bootstrapping notion.
Using an existing language to bootstrap a new language is one way to solve the "chicken or the egg" causality dilemma.
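A common sanity check for a self-hosting compiler is the fixpoint test: the stage-1 compiler (built by the host compiler) and the stage-2 compiler (built by stage 1) should generate identical output. The toy sketch below only illustrates the idea; compile_with is a hypothetical stand-in for invoking a real compiler, and determinism is modeled by making the output depend on the source alone.

    # Toy illustration of compiler bootstrapping. A "binary" is modeled as a
    # hash of the source: a correct, deterministic compiler generates the same
    # code no matter which correct compiler compiled it, which is exactly what
    # the stage-2 == stage-3 fixpoint test verifies.
    import hashlib

    def compile_with(compiler_binary, source):
        # Hypothetical stand-in for running `compiler_binary` on `source`.
        return hashlib.sha256(source.encode()).hexdigest()

    NEW_COMPILER_SOURCE = "source of the new compiler, written in its own language"

    stage1 = compile_with("host compiler", NEW_COMPILER_SOURCE)  # built by an existing compiler
    stage2 = compile_with(stage1, NEW_COMPILER_SOURCE)           # the compiler compiles itself
    stage3 = compile_with(stage2, NEW_COMPILER_SOURCE)
    print("bootstrap fixpoint reached:", stage2 == stage3)       # True in this toy model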
Installers:
During the installation of computer programs it is sometimes necessary to update the installer or package manager itself. The common pattern for this is to use a small executable bootstrapper file (e.g., setup.exe) which updates the installer and starts the real installation after the update. Sometimes the bootstrapper also installs other prerequisites for the software during the bootstrapping process.

Overlay networks:
A bootstrapping node, also known as a rendezvous host,[9] is a node in an overlay network that provides initial configuration information to newly joining nodes so that they may successfully join the overlay network.[10][11]

Discrete event simulation:
A type of computer simulation called discrete event simulation represents the operation of a system as a chronological sequence of events. A technique called bootstrapping the simulation model is used, which bootstraps initial data points using a pseudorandom number generator to schedule an initial set of pending events, which in turn schedule additional events; with time, the distribution of event times approaches its steady state, and the bootstrapping behavior is overwhelmed by steady-state behavior.

Artificial intelligence and machine learning:
Bootstrapping is a technique used to iteratively improve a classifier's performance. Seed AI is a hypothesized type of artificial intelligence capable of recursive self-improvement: having improved itself, it would become better at improving itself, potentially leading to an exponential increase in intelligence. No such AI is known to exist, but it remains an active field of research. Seed AI is a significant part of some theories about the technological singularity; proponents believe that the development of seed AI would rapidly yield ever-smarter intelligence (via bootstrapping) and thus a new era.

Statistics:
Bootstrapping is a resampling technique used to obtain estimates of summary statistics.
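As a concrete illustration of the resampling idea, the sketch below estimates the standard error of a sample mean by drawing bootstrap replicates with replacement; the data values are made up for the example.

    # Bootstrap resampling: estimate the standard error of the sample mean.
    import random
    random.seed(0)

    data = [2.3, 3.1, 2.8, 4.0, 3.6, 2.9, 3.3, 3.8]   # made-up sample
    B = 10_000                                        # number of bootstrap replicates

    means = []
    for _ in range(B):
        resample = [random.choice(data) for _ in data]  # same size, with replacement
        means.append(sum(resample) / len(resample))

    grand = sum(means) / B
    std_err = (sum((m - grand) ** 2 for m in means) / (B - 1)) ** 0.5
    print(f"bootstrap standard error of the mean: {std_err:.3f}")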
Business:
Bootstrapping in business means starting a business without external help or capital. Such startups fund the development of their company through internal cash flow and are cautious with their expenses.[12] Generally, at the start of a venture a small amount of money is set aside for the bootstrap process.[13] Bootstrapping can also be a supplement for econometric models.[14] Bootstrapping was also expanded upon in the book Bootstrap Business by Richard Christiansen, the Harvard Business Review article The Art of Bootstrapping, and the follow-up book The Origin and Evolution of New Businesses by Amar Bhide.
 Startups can grow by reinvesting profits in their own growth if bootstrapping costs are low and return on investment is high. This financing approach allows owners to maintain control of their business and forces them to spend with discipline.[15] In addition, bootstrapping allows startups to focus on customers rather than investors, thereby increasing the likelihood of creating a profitable business.
 Leveraged buyouts, or highly leveraged or "bootstrap" transactions, occur when an investor acquires a controlling interest in a company's equity and a significant percentage of the purchase price is financed through leverage, i.e., borrowing.
 Bootstrapping in finance refers to the method of constructing the spot rate curve.
 Operation Bootstrap (Operación Manos a la Obra) refers to the ambitious projects that industrialized Puerto Rico in the mid-20th century.

Biology:
Richard Dawkins in his book River Out of Eden[16] used the computer bootstrapping concept to explain how biological cells differentiate: "Different cells receive different combinations of chemicals, which switch on different combinations of genes, and some genes work to switch other genes on or off. And so the bootstrapping continues, until we have the full repertoire of different kinds of cells."

Phylogenetics:
Bootstrapping analysis gives a way to judge the strength of support for clades on phylogenetic trees. A number is written by a node, reflecting the percentage of bootstrap trees which also resolve the clade at the endpoints of that branch.[17]

Law:
Bootstrapping is a rule preventing the admission of hearsay evidence in conspiracy cases.

Linguistics:
Bootstrapping is a theory of language acquisition.

Physics:
In quantum theory, bootstrapping means using very general consistency criteria to determine the form of a quantum theory from some assumptions on the spectrum of particles or operators. In magnetically confined fusion plasmas (tokamak devices), bootstrapping refers to the process in which a bootstrap current is self-generated by the plasma, reducing or eliminating the need for an external current driver; maximising the bootstrap current is a major goal of advanced tokamak designs. In inertially confined fusion plasmas, bootstrapping refers to the alpha particles produced in the fusion reaction providing further heating to the plasma; this heating leads to ignition and an overall energy gain.

Electronics:
Bootstrapping is a form of positive feedback in analog circuit design.

Electric power grid:
An electric power grid is almost never brought down intentionally. Generators and power stations are started and shut down as necessary. A typical power station requires power for start-up before it can generate power. This power is obtained from the grid, so if the entire grid is down, these stations cannot be started. Therefore, to get a grid started, there must be at least a small number of power stations that can start entirely on their own. A black start is the process of restoring a power station to operation without relying on external power. In the absence of grid power, one or more black starts are used to bootstrap the grid.

Cellular networks:
A Bootstrapping Server Function (BSF) is an intermediary element in cellular networks which provides application-independent functions for mutual authentication of user equipment and servers unknown to each other, and for 'bootstrapping' the exchange of secret session keys afterwards. The term 'bootstrapping' here relates to building a security relation with a previously unknown device first, and to allowing the installation of security elements (keys) in the device and the BSF afterwards.

A media bootstrap is the process whereby a story or meme is deliberately (but artificially) produced by self- and peer-referential journalism, originally within a tight circle of media content originators, often commencing with stories written within the same media organization. The story is then expanded into a general media "accepted wisdom", with the aim of having it accepted as self-evident "common knowledge" by the reading, listening, and viewing publics. The key feature of a media bootstrap is that as little hard, verifiable, external evidence as possible is used to support the story, preference being given to the citation (often unattributed) of other media stories, i.e., "journalists interviewing journalists". Because the campaign is usually originated, and at least initially concocted, internally by a media organization with a particular agenda in mind, within a closed loop of reportage and opinionation, the campaign is said to have "pulled itself up by its own bootstraps". A bootstrap campaign should be distinguished from a genuine news story of genuine interest, such as a natural disaster that kills thousands or the death of a respected public figure; it is legitimate for these stories to be given coverage across all media platforms. What distinguishes a bootstrap from a real story is the contrived and organized manner in which the bootstrap appears to come out of nowhere. A bootstrap commonly claims to be tapping a hitherto unrecognized phenomenon within society. As self-levitating by pulling on one's bootstraps is physically impossible, this is often used by the bootstrappers themselves to deny that the bootstrap campaign is concocted and artificial: they assert that it has arisen via a groundswell of public opinion. Media campaigns that are openly admitted to be concocted (e.g., a public-service campaign titled "Let's Clean Up Our City") are usually ignored by other media organizations for reasons related to competition. The true bootstrap, on the other hand, welcomes (indeed encourages) the participation of other media organizations, as this participation gains the bootstrap notoriety and, most importantly, legitimacy.
In the field of electronics, a bootstrap circuit is one where part of the output of an amplifier stage is applied to the input, so as to alter the input impedance of the amplifier. When applied deliberately, the intention is usually to increase rather than decrease the impedance.[1] Generally, any technique where part of the output of a system is used at startup is described as bootstrapping. In the domain of MOSFET circuits, "bootstrapping" is commonly used to mean pulling up the operating point of a transistor above the power supply rail.[2][3] The same term has been used somewhat more generally for dynamically altering the operating point of an operational amplifier (by shifting both its positive and negative supply rails) in order to increase its output voltage swing relative to the ground.[4] In this sense, bootstrapping an operational amplifier means "using a signal to drive the reference point of the op-amp's power supplies".[5] A more sophisticated use of this rail bootstrapping technique is to alter the non-linear C/V characteristic of the inputs of a JFET op-amp in order to decrease its distortion.

Input impedance:
Figure: Bootstrap capacitors C1 and C2 in a BJT emitter follower circuit.
In analog circuit designs, a bootstrap circuit is an arrangement of components deliberately intended to alter the input impedance of a circuit. Usually it is intended to increase the impedance, using a small amount of positive feedback, typically over two stages. This was often necessary in the early days of bipolar transistors, which inherently have quite a low input impedance. Because the feedback is positive, such circuits can suffer from poor stability and noise performance compared to ones that do not bootstrap. Negative feedback may alternatively be used to bootstrap an input impedance, causing the apparent impedance to be reduced. This is seldom done deliberately, however, and is normally an unwanted result of a particular circuit design. A well-known example is the Miller effect, in which an unavoidable feedback capacitance appears increased (i.e., its impedance appears reduced) by negative feedback. One popular case where this is done deliberately is the Miller compensation technique for providing a low-frequency pole inside an integrated circuit: to minimize the size of the necessary capacitor, it is placed between the input and an output which swings in the opposite direction, and this bootstrapping makes it act like a larger capacitor to ground.
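The impedance-multiplying effect of positive-feedback bootstrapping follows from the standard relation Zeff = Z / (1 - A), where A is the gain from the input node to the bootstrapped end of the impedance Z. The worked numbers below are illustrative assumptions, not values from any circuit in the text.

    # Effective input impedance of a bootstrapped bias resistor:
    # Zeff = Z / (1 - A), where A is the gain from input to the far end of Z.
    # An emitter follower has A slightly below 1, so Zeff >> Z.
    Z = 100e3    # bias resistor, ohms (assumed)
    A = 0.99     # follower gain (assumed)
    print(f"apparent input impedance: {Z / (1 - A) / 1e6:.0f} Mohm")   # 10 Mohm

    # With negative gain the apparent impedance is reduced instead - the
    # Miller effect: 1 pF across a gain of -100 looks like 101 pF at the input.
    C = 1e-12
    A_miller = -100
    print(f"Miller-multiplied capacitance: {C * (1 - A_miller) * 1e12:.0f} pF")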
Driving MOS transistors:
An N-MOSFET or IGBT needs a significantly positive charge (VGS > Vth) applied to the gate in order to turn on. Using only N-channel MOSFET/IGBT devices is a common cost-reduction method, due largely to die-size reduction (there are other benefits as well). However, using nMOS devices in place of pMOS devices means that a voltage higher than the power supply rail (V+) is needed to bias the transistor into linear operation (minimal current limiting) and thus avoid significant heat loss. A bootstrap capacitor is connected from the supply rail (V+) to the output voltage. Usually the source terminal of the N-MOSFET is connected to the cathode of a recirculation diode, allowing for efficient management of stored energy in the typically inductive load (see flyback diode). Due to the charge-storage characteristics of a capacitor, the bootstrap voltage rises above (V+), providing the needed gate-drive voltage.

A MOSFET/IGBT is a voltage-controlled device which, in theory, will not have any gate current, which makes it possible to use the charge inside the capacitor for control purposes. Eventually, however, the capacitor loses its charge due to parasitic gate current and non-ideal (i.e., finite) internal resistance, so this scheme is only used where there is a steady pulse present: the pulsing action allows the capacitor to discharge (at least partially, if not completely). Most control schemes that use a bootstrap capacitor force the high-side driver (N-MOSFET) off for a minimum time to allow the capacitor to refill. This means the duty cycle must always be less than 100% to accommodate the parasitic discharge, unless the leakage is accommodated in another manner.

Switch-mode power supplies:
In switch-mode power supplies, the regulation circuits are powered from the output. To start the power supply, a leakage resistance can be used to trickle-charge the supply rail for the control circuit to start it oscillating. This approach is less costly and more efficient than providing a separate linear power supply just to start the regulator circuit.[8]

Output swing:
AC amplifiers can use bootstrapping to increase output swing. A capacitor (usually referred to as a bootstrap capacitor) is connected from the output of the amplifier to the bias circuit, providing bias voltages that exceed the power supply voltage. Emitter followers can provide rail-to-rail output in this way, which is a common technique in class AB audio amplifiers.

Digital integrated circuits:
Within an integrated circuit, a bootstrap method is used to allow internal address and clock distribution lines to have an increased voltage swing. The bootstrap circuit uses a coupling capacitor, formed from the gate/source capacitance of a transistor, to drive a signal line to slightly greater than the supply voltage.
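A first-order sizing rule for the high-side bootstrap capacitor is C >= Q_gate / dV_allowed, with leakage added for long on-times. The sketch below applies that rule; all values are illustrative assumptions for a generic gate driver, not data from the text.

    # First-order bootstrap capacitor sizing for a high-side N-MOSFET driver.
    # All numbers are illustrative assumptions.
    Q_gate   = 50e-9    # total gate charge per turn-on, coulombs
    I_leak   = 100e-6   # parasitic leakage (driver + gate), amperes
    t_on_max = 50e-6    # longest high-side on-time, seconds
    dV_max   = 0.5      # allowed droop on the bootstrap rail, volts

    # Charge drawn per switching cycle: one gate charge plus leakage over t_on.
    Q_total = Q_gate + I_leak * t_on_max
    C_boot = Q_total / dV_max
    print(f"minimum bootstrap capacitance: {C_boot * 1e9:.0f} nF")  # ~110 nF

    # In practice a margin (e.g. 5-10x) is applied, and the high-side switch
    # is forced off periodically so the capacitor can recharge from V+.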
Flip-flop (electronics):
In electronics, a flip-flop or latch is a circuit that has two stable states and can be used to store state information; a flip-flop is a bistable multivibrator. The circuit can be made to change state by signals applied to one or more control inputs, and will have one or two outputs. It is the basic storage element in sequential logic. Flip-flops and latches are fundamental building blocks of digital electronics systems used in computers, communications, and many other types of systems.

Flip-flops and latches are used as data-storage elements. A flip-flop stores a single bit (binary digit) of data; one of its two states represents a "one" and the other represents a "zero". Such data storage can be used for storage of state, and such a circuit is described as sequential logic. When used in a finite-state machine, the output and next state depend not only on its current input, but also on its current state (and hence, previous inputs). It can also be used for counting of pulses, and for synchronizing variably-timed input signals to some reference timing signal.

Flip-flops can be either simple (transparent or opaque) or clocked (synchronous or edge-triggered). Although the term flip-flop has historically referred generically to both simple and clocked circuits, in modern usage it is common to reserve the term flip-flop exclusively for clocked circuits; the simple ones are commonly called latches.[1][2] Using this terminology, a latch is level-sensitive, whereas a flip-flop is edge-sensitive: when a latch is enabled it becomes transparent, while a flip-flop's output changes only on a single type (positive-going or negative-going) of clock edge.

History:
Figure: Flip-flop schematics from the Eccles and Jordan patent filed in 1918, one drawn as a cascade of amplifiers with a positive feedback path, and the other as a symmetric cross-coupled pair.
The first electronic flip-flop was invented in 1918 by the British physicists William Eccles and F. W. Jordan.[3][4] It was initially called the Eccles–Jordan trigger circuit and consisted of two active elements (vacuum tubes).[5] The design was used in the 1943 British Colossus codebreaking computer,[6] and such circuits and their transistorized versions were common in computers even after the introduction of integrated circuits, though flip-flops made from logic gates are also common now.[7][8]
Early flip-flops were known variously as trigger circuits or multivibrators. According to P. L. Lindley, an engineer at the US Jet Propulsion Laboratory, the flip-flop types detailed below (SR, D, T, JK) were first discussed in a 1954 UCLA course on computer design by Montgomery Phister, and then appeared in his book Logical Design of Digital Computers.[9][10] Lindley was at the time working at Hughes Aircraft under Eldred Nelson, who had coined the term JK for a flip-flop which changed states when both inputs were on (a logical "one"). The other names were coined by Phister. They differ slightly from some of the definitions given below. Lindley explains that he heard the story of the JK flip-flop from Eldred Nelson, who is responsible for coining the term while working at Hughes Aircraft. Flip-flops in use at Hughes at the time were all of the type that came to be known as J-K. In designing a logical system, Nelson assigned letters to flip-flop inputs as follows: #1: A & B, #2: C & D, #3: E & F, #4: G & H, #5: J & K. Nelson used the notations "j-input" and "k-input" in a patent application filed in 1953.[11]

Implementation:
Figure: A traditional (simple) flip-flop circuit based on bipolar junction transistors.
Flip-flops can be either simple (transparent or asynchronous) or clocked (synchronous). The simple ones are commonly described as latches,[1] while the clocked ones are described as flip-flops.[2] Simple flip-flops can be built around a single pair of cross-coupled inverting elements: vacuum tubes, bipolar transistors, field-effect transistors, inverters, and inverting logic gates have all been used in practical circuits. Clocked devices are specially designed for synchronous systems; such devices ignore their inputs except at the transition of a dedicated clock signal (known as clocking, pulsing, or strobing). Clocking causes the flip-flop either to change or to retain its output signal based upon the values of the input signals at the transition. Some flip-flops change output on the rising edge of the clock, others on the falling edge.

Since the elementary amplifying stages are inverting, two stages can be connected in succession (as a cascade) to form the needed non-inverting amplifier. In this configuration, each amplifier may be considered as an active inverting feedback network for the other inverting amplifier. Thus the two stages are connected in a non-inverting loop, although the circuit diagram is usually drawn as a symmetric cross-coupled pair (both drawings were initially introduced in the Eccles–Jordan patent).

Flip-flop types:
Flip-flops can be divided into common types: the SR ("set-reset"), D ("data" or "delay"[12]), T ("toggle"), and JK. The behavior of a particular type can be described by what is termed the characteristic equation, which derives the "next" (i.e., after the next clock pulse) output Qnext in terms of the input signal(s) and/or the current output Q.
Simple set-reset latches:

SR NOR latch:
Figure: Animations of an SR latch constructed from a pair of cross-coupled NOR gates, stepping through (A) S = 1, R = 0: set; (B) S = 0, R = 0: hold; (C) S = 0, R = 1: reset; (D) S = 1, R = 1: not allowed. The restricted combination (D) leads to an unstable state.

When using static gates as building blocks, the most fundamental latch is the simple SR latch, where S and R stand for set and reset. It can be constructed from a pair of cross-coupled NOR logic gates. The stored bit is present on the output marked Q. While the R and S inputs are both low, feedback maintains the Q and Q' outputs in a constant state, with Q' the complement of Q. If S (set) is pulsed high while R (reset) is held low, then the Q output is forced high, and stays high when S returns to low; similarly, if R is pulsed high while S is held low, then the Q output is forced low, and stays low when R returns to low.
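The cross-coupled NOR pair can be modeled by iterating the two gate equations until the outputs settle, as in the behavioral sketch below (the function name is ours). Note how the forbidden S = R = 1 input drives both outputs low, exactly as described in the table that follows.

    # Behavioral model of an SR NOR latch: Q = NOR(R, Q'), Q' = NOR(S, Q).
    # The loop iterates the two gate equations until the outputs settle.
    def sr_nor_latch(s, r, q=0, qbar=1):
        for _ in range(4):                  # a few passes are enough to settle
            q_new    = int(not (r or qbar))
            qbar_new = int(not (s or q))
            if (q_new, qbar_new) == (q, qbar):
                break
            q, qbar = q_new, qbar_new
        return q, qbar

    q, qbar = 0, 1
    for s, r, label in [(1, 0, "set"), (0, 0, "hold"), (0, 1, "reset"),
                        (0, 0, "hold"), (1, 1, "not allowed")]:
        q, qbar = sr_nor_latch(s, r, q, qbar)
        print(f"S={s} R={r}: Q={q} Q'={qbar}  ({label})")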
SR latch operation[13]

Characteristic table:
S R | Qnext | Action
0 0 | Q     | hold state
0 1 | 0     | reset
1 0 | 1     | set
1 1 | X     | not allowed

Excitation table:
Q Qnext | S R
0 0     | 0 X
0 1     | 1 0
1 0     | 0 1
1 1     | X 0

Note: X means don't care; either 0 or 1 is a valid value.

The R = S = 1 combination is called a restricted combination or a forbidden state because, as both NOR gates then output zeros, it breaks the logical equation Q = not Q'. The combination is also inappropriate in circuits where both inputs may go low simultaneously (i.e., a transition from restricted to keep): the output would lock at either 1 or 0 depending on the propagation-time relations between the gates (a race condition). To overcome the restricted combination, one can add gates to the inputs that would convert (S, R) = (1, 1) to one of the non-restricted combinations. That can be:
 Q = 1 (1, 0) – referred to as an S (dominated) latch;
 Q = 0 (0, 1) – referred to as an R (dominated) latch; this is done in nearly every programmable logic controller;
 keep state (0, 0) – referred to as an E latch.
Alternatively, the restricted combination can be made to toggle the output; the result is the JK latch.

Characteristic equation: Q+ = R'Q + R'S, or Q+ = R'(Q + S).[14]

SR NAND latch:
Figure: An SR latch constructed from cross-coupled NAND gates.
This is an alternate model of the simple SR latch, built with NAND logic gates. Set and reset now become active-low signals, denoted S' and R' respectively. Otherwise, operation is identical to that of the SR latch. Historically, SR NAND latches have been predominant despite the notational inconvenience of active-low inputs.

SR NAND latch operation:
S' R' | Action
0  0  | Not allowed
0  1  | Q = 1
1  0  | Q = 0
1  1  | No change

Figure: Symbol for an SR NAND latch.

SR AND-OR latch:
Figure: An SR AND-OR latch; light green means logical '1' and dark green means logical '0'. The latch shown is in hold mode (no change).
From the teaching point of view, SR latches realised as a pair of cross-coupled components (transistors, gates, tubes, etc.) are rather hard for beginners to understand. A didactically easier model uses a single feedback loop instead of the cross-coupling. The following is an SR latch built with an AND gate with one inverted input and an OR gate.
SR AND-OR latch operation:
S R | Action
0 0 | No change
1 0 | Q = 1
X 1 | Q = 0

JK latch:
The JK latch is much less frequently used than the JK flip-flop. It follows this state table:

JK latch truth table:
J K | Qnext | Comment
0 0 | Q     | No change
0 1 | 0     | Reset
1 0 | 1     | Set
1 1 | Q'    | Toggle

Hence, the JK latch is an SR latch that is made to toggle its output (oscillate between 0 and 1) when passed the input combination of 11.[15] Unlike the JK flip-flop, the 11 input combination for the JK latch is not very useful, because there is no clock that directs toggling.[16]

Gated latches and conditional transparency:
Latches are designed to be transparent; that is, input signal changes cause immediate changes in output. Additional logic can be added to a simple transparent latch to make it non-transparent or opaque when another input (an "enable" input) is not asserted. When several transparent latches follow each other using the same enable signal, signals can propagate through all of them at once. However, by following a transparent-high latch with a transparent-low (or opaque-high) latch, a master–slave flip-flop is implemented.

Gated SR latch:
Figure: A gated SR latch circuit diagram constructed from AND gates (on left) and NOR gates (on right).
A synchronous SR latch (sometimes called a clocked SR flip-flop) can be made by adding a second level of NAND gates to the inverted SR latch (or a second level of AND gates to the direct SR latch). The extra NAND gates further invert the inputs, so the simple SR latch becomes a gated SR latch (and a simple SR latch would transform into a gated SR latch with inverted enable). With E high (enable true), the signals can pass through the input gates to the encapsulated latch; all signal combinations except for (0,0) = hold then immediately reproduce on the (Q, Q') output, i.e., the latch is transparent. With E low (enable false), the latch is closed (opaque) and remains in the state it was left the last time E was high. The enable input is sometimes a clock signal, but more often a read or write strobe.

Gated SR latch operation:
E/C | Action
0   | No action (keep state)
1   | The same as non-clocked SR latch

Figure: Symbol for a gated SR latch.

Gated D latch:
Figure: A gated D latch based on an SR NAND latch.
Figure: A gated D latch based on an SR NOR latch.
Figure: An animated gated D latch, stepping through (A) D = 1, E = 1: set; (B) D = 1, E = 0: hold; (C) D = 0, E = 0: hold; (D) D = 0, E = 1: reset.
Figure: A gated D latch in pass-transistor logic, similar to the ones in the CD4042 or the CD74HC75 integrated circuits.

This latch exploits the fact that, in the two active input combinations (01 and 10) of a gated SR latch, R is the complement of S. The input NAND stage converts the two D input states (0 and 1) to these two input combinations for the next SR latch by inverting the data input signal. The low state of the enable signal produces the inactive "11" combination. Thus a gated D latch may be considered as a one-input synchronous SR latch. This configuration prevents application of the restricted input combination. It is also known as a transparent latch, data latch, or simply gated latch. It has a data input and an enable signal (sometimes named clock, or control). The word transparent comes from the fact that, when the enable input is on, the signal propagates directly through the circuit, from the input D to the output Q.
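Behaviorally, a gated D latch is one line of logic: while the enable is high, the output follows D; while it is low, the output holds. The sketch below is a minimal illustration of that transparency (the function name is ours).

    # Behavioral model of a gated (transparent) D latch: transparent while
    # E is high, opaque (holding the last value) while E is low.
    def d_latch(e, d, q):
        return d if e else q

    q = 0
    for e, d in [(1, 1), (0, 0), (0, 1), (1, 0), (0, 1)]:
        q = d_latch(e, d, q)
        print(f"E={e} D={d} -> Q={q}")   # Q tracks D only while E=1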
Transparent latches are typically used as I/O ports, in asynchronous systems, or in synchronous two-phase systems (synchronous systems that use a two-phase clock), where two latches operating on different clock phases prevent data transparency, as in a master–slave flip-flop. Latches are available as integrated circuits, usually with multiple latches per chip; for example, the 74HC75 is a quadruple transparent latch in the 7400 series.

Gated D latch truth table:
E/C D | Q     Q'     | Comment
0   X | Qprev Q'prev | No change
1   0 | 0     1      | Reset
1   1 | 1     0      | Set

Figure: Symbol for a gated D latch.

The truth table shows that when the enable/clock input is 0, the D input has no effect on the output. When E/C is high, the output equals D.

Earle latch:
Figure: The Earle latch uses complementary enable inputs: enable active low (E_L) and enable active high (E_H).
Figure: An animated Earle latch, stepping through (A) D = 1, E_H = 1: set; (B) D = 0, E_H = 1: reset; (C) D = 1, E_H = 0: hold.

The classic gated latch designs have some undesirable characteristics.[17] They require double-rail logic or an inverter. The input-to-output propagation may take up to three gate delays, and it is not constant: some outputs take two gate delays while others take three. Designers looked for alternatives.[18] A successful alternative is the Earle latch, which requires only a single data input and whose output takes a constant two gate delays. In addition, the two gate levels of the Earle latch can, in some cases, be merged with the last two gate levels of the circuits driving the latch, because many common computational circuits have an OR layer followed by an AND layer as their last two levels. Merging the latch function can implement the latch with no additional gate delays.[17] The merge is commonly exploited in the design of pipelined computers and, in fact, was originally developed by J. G. Earle to be used in the IBM System/360 Model 91 for that purpose.[19] The Earle latch is hazard-free.[20] If the middle NAND gate is omitted, one gets the polarity hold latch, which is commonly used because it demands less logic;[20][21] however, it is susceptible to logic hazard. Intentionally skewing the clock signal can avoid the hazard.[21]

D flip-flop:
Figure: D flip-flop symbol.
The D flip-flop is widely used. It is also known as a "data" or "delay" flip-flop.
The D flip-flop captures the value of the D input at a definite portion of the clock cycle (such as the rising edge of the clock). That captured value becomes the Q output; at other times, the output Q does not change.[22][23] The D flip-flop can be viewed as a memory cell, a zero-order hold, or a delay line.[24]

Truth table:
Clock       | D | Qnext
Rising edge | 0 | 0
Rising edge | 1 | 1
Non-rising  | X | Q
('X' denotes a don't-care condition, meaning the signal is irrelevant.)

Most D-type flip-flops in ICs have the capability to be forced to the set or reset state (which ignores the D and clock inputs), much like an SR flip-flop. Usually, the illegal S = R = 1 condition is resolved in D-type flip-flops. By setting S = R = 0, the flip-flop can be used as described above. Here is the truth table for the other possible S and R configurations:

Inputs      | Outputs
S R D Clock | Q Q'
0 1 X X     | 0 1
1 0 X X     | 1 0
1 1 X X     | 1 1

Figure: 4-bit serial-in, parallel-out (SIPO) shift register.
These flip-flops are very useful, as they form the basis for shift registers, which are an essential part of many electronic devices.
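A behavioral model of the edge-triggered D flip-flop needs only an edge-sampled capture; chaining four of them gives the serial-in, parallel-out register just mentioned. The sketch below is an illustrative simulation, not a gate-level design; the class and variable names are ours.

    # Behavioral D flip-flops cascaded into a 4-bit SIPO shift register.
    class DFlipFlop:
        def __init__(self):
            self.q = 0

        def capture(self, d):   # called on the rising clock edge
            self.q = d

    stages = [DFlipFlop() for _ in range(4)]
    serial_in = [1, 0, 1, 1]    # bits shifted in, one per clock

    for bit in serial_in:
        # On each rising edge every stage samples its input simultaneously,
        # so snapshot all D inputs before updating any flip-flop.
        d_inputs = [bit] + [ff.q for ff in stages[:-1]]
        for ff, d in zip(stages, d_inputs):
            ff.capture(d)
        print("register:", [ff.q for ff in stages])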
The advantage of the D flip-flop over the D-type transparent latch is that the signal on the D input pin is captured at the moment the flip-flop is clocked, and subsequent changes on the D input are ignored until the next clock event. An exception is that some flip-flops have a "reset" signal input, which resets Q (to zero) and may be either asynchronous or synchronous with the clock. The circuit above shifts the contents of the register to the right, one bit position on each active transition of the clock; the input X is shifted into the leftmost bit position.

Classical positive-edge-triggered D flip-flop:
Figure: A positive-edge-triggered D flip-flop.
This circuit[25] consists of two stages implemented by SR NAND latches. The input stage (the two latches on the left) processes the clock and data signals to ensure correct input signals for the output stage (the single latch on the right). If the clock is low, both output signals of the input stage are high regardless of the data input; the output latch is unaffected and stores the previous state. When the clock signal changes from low to high, only one of the output voltages (depending on the data signal) goes low and sets or resets the output latch: if D = 0, the lower output becomes low; if D = 1, the upper output becomes low. If the clock signal stays high, the outputs keep their states regardless of the data input and force the output latch to stay in the corresponding state, as the input logical zero (of the output stage) remains active while the clock is high. Hence the role of the output latch is to store the data only while the clock is low.

The circuit is closely related to the gated D latch, as both circuits convert the two D input states (0 and 1) into two input combinations (01 and 10) for the output SR latch by inverting the data input signal (both circuits split the single D signal into two complementary S and R signals). The difference is that the gated D latch uses simple NAND logic gates, while the positive-edge-triggered D flip-flop uses SR NAND latches for this purpose. The role of these latches is to "lock" the active output producing low voltage (a logical zero); thus the positive-edge-triggered D flip-flop can also be thought of as a gated D latch with latched input gates.

Master–slave edge-triggered D flip-flop:
Figure: A master–slave D flip-flop; it responds on the falling edge of the enable input (usually a clock).
Figure: An implementation of a master–slave D flip-flop that is triggered on the rising edge of the clock.

A master–slave D flip-flop is created by connecting two gated D latches in series and inverting the enable input to one of them. It is called master–slave because the second latch in the series changes only in response to a change in the first (master) latch.

For a positive-edge-triggered master–slave D flip-flop, when the clock signal is low (logical 0), the "enable" seen by the first or "master" D latch (the inverted clock signal) is high (logical 1). This allows the "master" latch to store the input value when the clock signal transitions from low to high. As the clock signal goes high (0 to 1), the inverted "enable" of the first latch goes low (1 to 0), and the value seen at the input to the master latch is "locked". Nearly simultaneously, the twice-inverted "enable" of the second or "slave" D latch transitions from low to high (0 to 1) with the clock signal. This allows the signal captured at the rising edge of the clock by the now-"locked" master latch to pass through the "slave" latch. When the clock signal returns to low (1 to 0), the output of the "slave" latch is "locked", and the value seen at the last rising edge of the clock is held, while the "master" latch begins to accept new values in preparation for the next rising clock edge.

By removing the leftmost inverter in the circuit shown, a D-type flip-flop that strobes on the falling edge of the clock signal can be obtained. Its truth table:

D Q Clock   | Qnext
0 X Falling | 0
1 X Falling | 1
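The master–slave construction can be expressed directly in terms of the gated D latch model shown earlier: the master is enabled while the clock is low and the slave while it is high, so the visible output updates only at the rising edge. A minimal behavioral sketch:

    # Master-slave (positive-edge-triggered) D flip-flop built from two
    # gated D latches: master transparent while CLK=0, slave while CLK=1.
    def d_latch(e, d, q):
        return d if e else q

    master = slave = 0
    for clk, d in [(0, 1), (1, 1), (1, 0), (0, 0), (1, 0), (0, 1), (1, 1)]:
        master = d_latch(not clk, d, master)   # inverted enable for the master
        slave  = d_latch(clk, master, slave)   # slave passes the locked value
        print(f"CLK={clk} D={d} -> Q={slave}")
    # Q changes only on 0->1 clock transitions; D changes while CLK is high
    # (the third step above) are ignored until the next rising edge.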
Edge-triggered dynamic D storage element:
An efficient functional alternative to a D flip-flop can be made with dynamic circuits (where information is stored in a capacitance), as long as it is clocked often enough; while not a true flip-flop, it is still called a flip-flop for its functional role. While the master–slave D element is triggered on the edge of a clock, its components are each triggered by clock levels. The "edge-triggered D flip-flop", as it is called even though it is not a true flip-flop, does not have the master–slave properties. Edge-triggered D flip-flops are often implemented in integrated high-speed operations using dynamic logic. This means that the digital output is stored on parasitic device capacitance while the device is not transitioning. This design of dynamic flip-flops also enables simple resetting, since the reset operation can be performed by simply discharging one or more internal nodes. A common dynamic flip-flop variety is the true single-phase clock (TSPC) type, which performs the flip-flop operation with little power and at high speed; however, dynamic flip-flops typically do not work at static or low clock speeds: given enough time, leakage paths may discharge the parasitic capacitance enough to cause the flip-flop to enter invalid states.
Figure: A CMOS IC implementation of a "true single-phase edge-triggered flip-flop with reset".

T flip-flop:
Figure: A circuit symbol for a T-type flip-flop.
If the T input is high, the T flip-flop changes state ("toggles") whenever the clock input is strobed. If the T input is low, the flip-flop holds the previous value. This behavior is described by the characteristic equation Qnext = T XOR Q (expanding the XOR operator: Qnext = TQ' + T'Q) and can be described in the truth table below.
T flip-flop operation[26]

Characteristic table:
T Q | Qnext | Comment
0 0 | 0     | hold state (no clk)
0 1 | 1     | hold state (no clk)
1 0 | 1     | toggle
1 1 | 0     | toggle

Excitation table:
Q Qnext | T | Comment
0 0     | 0 | No change
1 1     | 0 | No change
0 1     | 1 | Complement
1 0     | 1 | Complement

When T is held high, the toggle flip-flop divides the clock frequency by two; that is, if the clock frequency is 4 MHz, the output frequency obtained from the flip-flop will be 2 MHz. This "divide-by" feature has application in various types of digital counters. A T flip-flop can also be built using a JK flip-flop (the J and K pins are connected together and act as T) or a D flip-flop (T input XOR Qprevious drives the D input).
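The divide-by-two behavior follows directly from the characteristic equation Qnext = T XOR Q, as the short sketch below shows; with T held at 1, Q completes one full cycle for every two clock pulses. The function name is ours.

    # T flip-flop: Qnext = T XOR Q on each clock pulse. With T = 1 the output
    # toggles every pulse, dividing the clock frequency by two.
    def t_flip_flop(t, q):
        return t ^ q

    q = 0
    wave = []
    for pulse in range(8):        # eight clock pulses
        q = t_flip_flop(1, q)     # T held high
        wave.append(q)
    print(wave)                   # [1, 0, 1, 0, 1, 0, 1, 0]: half the clock rate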
JK flip-flop:
Figure: A circuit symbol for a positive-edge-triggered JK flip-flop.
Figure: JK flip-flop timing diagram.

The JK flip-flop augments the behavior of the SR flip-flop (J = Set, K = Reset) by interpreting the J = K = 1 condition as a "flip" or toggle command. Specifically, the combination J = 1, K = 0 is a command to set the flip-flop; the combination J = 0, K = 1 is a command to reset the flip-flop; and the combination J = K = 1 is a command to toggle the flip-flop, i.e., change its output to the logical complement of its current value. Setting J = K = 0 maintains the current state. To synthesize a D flip-flop, simply set K equal to the complement of J; similarly, to synthesize a T flip-flop, set K equal to J. The JK flip-flop is therefore a universal flip-flop, because it can be configured to work as an SR flip-flop, a D flip-flop, or a T flip-flop.

The characteristic equation of the JK flip-flop is Qnext = JQ' + K'Q, and the corresponding truth table is:

JK flip-flop operation[26]

Characteristic table:
J K | Qnext | Comment
0 0 | Q     | hold state
0 1 | 0     | reset
1 0 | 1     | set
1 1 | Q'    | toggle

Excitation table:
Q Qnext | J K | Comment
0 0     | 0 X | No change
0 1     | 1 X | Set
1 0     | X 1 | Reset
1 1     | X 0 | No change
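The characteristic equation translates directly into code. The sketch below (function name ours) exercises all four input combinations and also shows the D and T syntheses mentioned above: K = J' gives D behavior, and K = J gives T behavior.

    # JK flip-flop characteristic equation: Qnext = J*Q' + K'*Q.
    def jk_flip_flop(j, k, q):
        return (j & (q ^ 1)) | ((k ^ 1) & q)

    q = 0
    for j, k in [(1, 0), (0, 0), (1, 1), (1, 1), (0, 1)]:
        q = jk_flip_flop(j, k, q)
        print(f"J={j} K={k} -> Q={q}")   # set, hold, toggle, toggle, reset

    # Syntheses: K = J' behaves as a D flip-flop; K = J behaves as a T flip-flop.
    d = 1
    print(jk_flip_flop(d, d ^ 1, 0))     # 1: Q becomes D
    print(jk_flip_flop(1, 1, 0))         # 1: Q toggles, as a T flip-flop would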
Timing considerations:

Timing parameters:
Figure: Flip-flop setup, hold, and clock-to-output timing parameters.
The input must be held steady in a period around the rising edge of the clock known as the aperture. Imagine taking a picture of a frog on a lily pad.[27] Suppose the frog then jumps into the water. If you take a picture of the frog as it jumps into the water, you will get a blurry picture; it is not clear which state the frog was in. But if you take a picture while the frog sits steadily on the pad (or is steadily in the water), you will get a clear picture. In the same way, the input to a flip-flop must be held steady during the aperture of the flip-flop.

Setup time is the minimum amount of time the data input should be held steady before the clock event, so that the data is reliably sampled by the clock. Hold time is the minimum amount of time the data input should be held steady after the clock event, so that the data is reliably sampled by the clock. Aperture is the sum of setup and hold time; the data input should be held steady throughout this period.[27]

Recovery time is the minimum amount of time the asynchronous set or reset input should be inactive before the clock event, so that the data is reliably sampled by the clock; it is thereby similar to the setup time for the data input. Removal time is the minimum amount of time the asynchronous set or reset input should be inactive after the clock event; it is thereby similar to the hold time for the data input. Short impulses applied to asynchronous inputs (set, reset) should not occur entirely within the recovery-removal period, or else it becomes entirely indeterminable whether the flip-flop will transition to the appropriate state. In the other case, where an asynchronous signal simply makes one transition that happens to fall between the recovery/removal times, the flip-flop will eventually transition to the appropriate state, but a very short glitch may or may not appear on the output, depending on the synchronous input signal. This second situation may or may not have significance to a circuit design.

Set and reset (and other) signals may be either synchronous or asynchronous, and may therefore be characterized with either setup/hold or recovery/removal times; synchronicity is very dependent on the design of the flip-flop. Differentiating between setup/hold and recovery/removal times is often necessary when verifying the timing of larger circuits, because asynchronous signals may be found to be less critical than synchronous signals. The differentiation offers circuit designers the ability to define the verification conditions for these types of signals independently.

Metastability:
Flip-flops are subject to a problem called metastability, which can happen when two inputs, such as data and clock, or clock and reset, change at about the same time. When the order is not clear within the appropriate timing constraints, the result is that the output may behave unpredictably, taking many times longer than normal to settle to one state or the other, or even oscillating several times before settling. Theoretically, the time to settle down is not bounded. In a computer system, this metastability can cause corruption of data or a program crash if the state is not stable before another circuit uses its value; in particular, if two different logical paths use the output of a flip-flop, one path can interpret it as a 0 and the other as a 1 when it has not resolved to a stable state, putting the machine into an inconsistent state.[28] Metastability in flip-flops can be avoided by ensuring that the data and control inputs are held valid and constant for specified periods before and after the clock pulse, called the setup time (tsu) and the hold time (th), respectively.
These times are specified in the data sheet for the device, and are typically between a few nanoseconds and a few hundred picoseconds for modern devices.
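As a concrete illustration of the setup and hold definitions, this small sketch (with hypothetical datasheet numbers, not values from the source) tests whether a data transition falls inside the aperture around a clock edge:

# Check a data transition against a flip-flop's aperture window.
# T_SU and T_H are hypothetical datasheet values; all times in ns.
T_SU = 0.10  # setup time: data must be stable this long before the edge
T_H  = 0.05  # hold time: data must be stable this long after the edge

def violates_aperture(t_data: float, t_clk: float) -> bool:
    """True if a data edge at t_data falls inside the aperture
    (t_clk - T_SU, t_clk + T_H) around a clock edge at t_clk."""
    return t_clk - T_SU < t_data < t_clk + T_H

print(violates_aperture(9.93, 10.0))  # True: only 0.07 ns of setup margin
print(violates_aperture(9.85, 10.0))  # False: 0.15 ns of setup margin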
Depending upon the flip-flop's internal organization, it is possible to build a device with a zero (or even negative) setup or hold time requirement, but not both simultaneously. Unfortunately, it is not always possible to meet the setup and hold criteria, because the flip-flop may be connected to a real-time signal that can change at any time, outside the control of the designer. In this case, the best the designer can do is reduce the probability of error to a level acceptable for the required reliability of the circuit. One technique for suppressing metastability is to connect two or more flip-flops in a chain, so that the output of each one feeds the data input of the next, and all devices share a common clock. With this method, the probability of a metastable event can be reduced to a negligible value, but never to zero; the probability approaches zero as the number of flip-flops connected in series is increased. The number of cascaded flip-flops is referred to as the "ranking"; "dual-ranked" flip-flops (two flip-flops in series) are a common configuration.

So-called metastable-hardened flip-flops are available, which work by reducing the setup and hold times as much as possible, but even these cannot eliminate the problem entirely, because metastability is more than simply a matter of circuit design. When the transitions in the clock and the data are close together in time, the flip-flop is forced to decide which event happened first, and however fast the device is made, there is always the possibility that the input events will be so close together that it cannot detect which one came first. It is therefore logically impossible to build a perfectly metastable-proof flip-flop. Flip-flops are sometimes characterized by a maximum settling time (the maximum time they will remain metastable under specified conditions). In that case, dual-ranked flip-flops that are clocked slower than the maximum allowed metastability time will provide proper conditioning for asynchronous (e.g., external) signals.
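The benefit of ranking can be quantified with the standard synchronizer reliability model, MTBF = exp(tr/tau) / (TW * fclk * fdata), where tau and TW are device constants. This is a textbook approximation rather than a formula from the source, and the constants below are hypothetical:

import math

# Textbook synchronizer MTBF model:
#   MTBF = exp(t_resolve / tau) / (T_W * f_clk * f_data)
TAU = 20e-12  # metastability resolution time constant, s (hypothetical)
T_W = 50e-12  # effective metastability aperture, s (hypothetical)

def mtbf(t_resolve: float, f_clk: float, f_data: float) -> float:
    """Mean time between synchronizer failures, in seconds."""
    return math.exp(t_resolve / TAU) / (T_W * f_clk * f_data)

f_clk, f_data = 500e6, 50e6
print(f"one stage:  {mtbf(1e-9, f_clk, f_data):.2e} s")  # ~1 ns to resolve
print(f"two stages: {mtbf(3e-9, f_clk, f_data):.2e} s")  # an extra full cycle

Each added rank grants the signal another full clock period in which to resolve; since that time enters the exponent, every extra stage multiplies the MTBF enormously.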
Propagation delay

Another important timing value for a flip-flop is the clock-to-output delay (common symbol in data sheets: tCO) or propagation delay (tP), which is the time a flip-flop takes to change its output after the clock edge. The time for a high-to-low transition (tPHL) is sometimes different from the time for a low-to-high transition (tPLH).

When cascading flip-flops that share the same clock (as in a shift register), it is important to ensure that the tCO of a preceding flip-flop is longer than the hold time (th) of the following flip-flop, so that data present at the input of the succeeding flip-flop is properly "shifted in" after the active edge of the clock. This relationship between tCO and th is normally guaranteed if the flip-flops are physically identical. Furthermore, for correct operation the clock period must be greater than the sum tCO + tsu.

Generalizations

Flip-flops can be generalized in at least two ways: by making them 1-of-N instead of 1-of-2, and by adapting them to logic with more than two states. In the special cases of 1-of-3 encoding, or multi-valued ternary logic, such elements may be referred to as flip-flap-flops.[29]

In a conventional flip-flop, exactly one of the two complementary outputs is high. This can be generalized to a memory element with N outputs, exactly one of which is high (alternatively, exactly one of which is low). The output is therefore always a one-hot (respectively one-cold) representation. The construction is similar to a conventional cross-coupled flip-flop; each output, when high, inhibits all the other outputs.[30] Alternatively, more or less conventional flip-flops can be used, one per output, with additional circuitry to ensure that only one at a time can be true.[31]
Another generalization of the conventional flip-flop is a memory element for multi-valued logic. In this case the memory element retains exactly one of the logic states until the control inputs induce a change.[32] In addition, a multiple-valued clock can also be used, leading to new possible clock transitions.

Threshold voltage

The threshold voltage, commonly abbreviated Vth or VGS(th), of a field-effect transistor (FET) is the minimum gate-to-source voltage needed to create a conducting path between the source and drain terminals. For a junction field-effect transistor (JFET), the threshold voltage is often called the "pinch-off voltage" instead. This is somewhat confusing, since "pinch-off" applied to an insulated-gate field-effect transistor (IGFET) refers to the channel pinching that leads to current-saturation behaviour under high source-drain bias, even though the current is never off. Unlike "pinch-off", the term "threshold voltage" is unambiguous and refers to the same concept in any field-effect transistor.

Basic principles

In n-channel enhancement-mode devices, a conductive channel does not exist naturally within the transistor; a positive gate-to-source voltage is necessary to create one. The positive voltage attracts free electrons within the body towards the gate, forming a conductive channel. First, however, enough electrons must be attracted near the gate to counter the dopant ions added to the body of the FET; this forms a region with no mobile carriers called a depletion region, and the voltage at which this occurs is the threshold voltage of the FET. Further increase in the gate-to-source voltage attracts still more electrons towards the gate, which are able to form a conductive channel from source to drain; this process is called inversion.

In contrast, n-channel depletion-mode devices have a conductive channel naturally existing within the transistor. Accordingly, the term "threshold voltage" does not readily apply to turning such devices on, but is used instead to denote the voltage level at which the channel is wide enough to allow electrons to flow easily. This ease-of-flow threshold also applies to p-channel depletion-mode devices, in which a positive voltage from gate to body/source creates a depletion layer by forcing the positively charged holes away from the gate-insulator/semiconductor interface, leaving exposed a carrier-free region of immobile, negatively charged acceptor ions.

In wide planar transistors the threshold voltage is essentially independent of the drain-source voltage and is therefore a well-defined characteristic; it is less well defined in modern nanometer-sized MOSFETs, due to drain-induced barrier lowering.
[Figure: depletion region of an nMOSFET biased below the threshold]
[Figure: depletion region of an nMOSFET biased above the threshold, with channel formed]

In the figures, the source (left side) and drain (right side) are labeled n+ to indicate heavily doped (blue) n-regions. The depletion-layer dopant is labeled NA- to indicate that the ions in the (pink) depletion layer are negatively charged and there are very few holes. In the (red) bulk the number of holes p = NA, making the bulk charge neutral.

If the gate voltage is below the threshold voltage (top figure), the transistor is turned off and ideally there is no current from the drain to the source. In fact, there is a current even for gate biases below the threshold (the subthreshold leakage current), although it is small and varies exponentially with gate bias. If the gate voltage is above the threshold voltage (lower figure), the transistor is turned on, due to there being many electrons in the channel at the oxide-silicon interface, creating a low-resistance channel through which charge can flow from drain to source. For voltages significantly above the threshold, this situation is called strong inversion. The channel is tapered when VD > 0 because the voltage drop due to the current in the resistive channel reduces the oxide field supporting the channel as the drain is approached.

Body effect

The body effect is the change in the threshold voltage by an amount approximately proportional to the change in VSB, the source-bulk voltage. Because the body influences the threshold voltage (when it is not tied to the source), it can be thought of as a second gate and is sometimes referred to as the "back gate"; the body effect is therefore sometimes called the "back-gate effect".[1] For an enhancement-mode nMOS MOSFET, the body effect upon the threshold voltage is computed according to the Shichman-Hodges model[2] (accurate for very old technology) using the equation

    VTN = VT0 + γ(√(2φF + VSB) − √(2φF)),

where VT0 is the threshold voltage at zero substrate bias, γ is the body-effect parameter, and 2φF is the surface potential at strong inversion.
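A short numerical sketch of the body-effect equation above (the parameter values are illustrative assumptions, not values from the source):

import math

# Body effect on nMOS threshold voltage:
#   VT = VT0 + gamma * (sqrt(2*phiF + VSB) - sqrt(2*phiF))
VT0 = 0.45    # zero-bias threshold voltage, V (hypothetical)
GAMMA = 0.40  # body-effect parameter, sqrt(V) (hypothetical)
PHI2F = 0.85  # surface potential 2*phiF, V (hypothetical)

def vth(vsb: float) -> float:
    """Threshold voltage as a function of source-bulk voltage VSB."""
    return VT0 + GAMMA * (math.sqrt(PHI2F + vsb) - math.sqrt(PHI2F))

for vsb in (0.0, 0.5, 1.0):
    print(f"VSB = {vsb:.1f} V -> VT = {vth(vsb):.3f} V")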
Dependence on oxide thickness

In a given technology node, such as the 90-nm CMOS process, the threshold voltage depends on the choice of oxide and on the oxide thickness. Using the body formulas above, γ is directly proportional to the oxide thickness tox. Thus, the thinner the oxide, the lower the threshold voltage. Although this may seem an improvement, it is not without cost: the thinner the oxide, the higher the subthreshold leakage current through the device will be. Consequently, the design specification for 90-nm gate-oxide thickness was set at 1 nm to control the leakage current.[3] This kind of tunneling leakage is called Fowler-Nordheim tunneling.[4]

Before scaling the design features down to 90 nm, a dual-oxide approach for setting the oxide thickness was a common solution to this issue. With 90-nm process technology, a triple-oxide approach has been adopted in some cases:[5] one standard thin oxide for most transistors, another for I/O driver cells, and a third for memory-and-pass-transistor cells. These differences are based purely on the effect of oxide thickness on the threshold voltage of CMOS technologies.

Dependence on temperature

As with oxide thickness, temperature has an effect on the threshold voltage of a CMOS device. Expanding part of the equation in the body-effect section, the surface potential term 2φF = 2(kT/q)·ln(NA/ni) depends directly on temperature, so the threshold voltage, while not directly proportional to temperature, is not independent of it. On average this variation is between −4 mV/K and −2 mV/K, depending on doping level.[6] For a change of 30 °C this results in significant variation from the 500 mV design parameter commonly used for the 90-nm technology node.
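The magnitude of this temperature effect is easy to check; the sketch below uses the −4 mV/K to −2 mV/K coefficients and the 500 mV nominal value quoted in the text:

# Threshold-voltage shift over a 30 degC swing, using the quoted
# temperature coefficients of -4 mV/K to -2 mV/K.
VT_NOMINAL = 0.500  # V, the 90-nm design value quoted in the text
DELTA_T = 30.0      # K

for tc in (-4e-3, -2e-3):  # temperature coefficient, V/K
    shift = tc * DELTA_T
    print(f"tc = {tc * 1e3:.0f} mV/K -> shift = {shift * 1e3:.0f} mV "
          f"({abs(shift) / VT_NOMINAL:.0%} of nominal)")

A 30 K swing therefore moves the threshold by 60 to 120 mV, i.e. 12-24% of the 500 mV design value.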
Dependence on random dopant fluctuation

Random dopant fluctuation (RDF) is a form of process variation resulting from variation in the implanted impurity concentration. In MOSFET transistors, RDF in the channel region can alter the transistor's properties, especially the threshold voltage. In newer process technologies RDF has a larger effect because the total number of dopants is smaller.[7] Research is being carried out to suppress the dopant fluctuation that leads to threshold-voltage variation between devices undergoing the same manufacturing process.[8]

Subthreshold conduction

Subthreshold conduction, subthreshold leakage, or subthreshold drain current is the current between the source and drain of a MOSFET when the transistor is in the subthreshold (weak-inversion) region, that is, for gate-to-source voltages below the threshold voltage. The terminology for the various degrees of inversion is described in Tsividis.[1]

In digital circuits, subthreshold conduction is generally viewed as a parasitic leakage in a state that would ideally carry no current. In micropower analog circuits, on the other hand, weak inversion is an efficient operating region, and subthreshold is a useful transistor mode around which circuit functions are designed.[2] In the past, the subthreshold conduction of transistors has usually been very small in the off state, as the gate voltage could be significantly below threshold; but as voltages have been scaled down with transistor size, subthreshold conduction has become a bigger factor. Indeed, leakage from all sources has increased: for a technology generation with a threshold voltage of 0.2 V, leakage can exceed 50% of total power consumption.[3]

The reason for the growing importance of subthreshold conduction is that the supply voltage has continually scaled down, both to reduce the dynamic power consumption of integrated circuits (the power consumed when the transistor switches between logic states, which depends on the square of the supply voltage) and to keep electric fields inside small devices low, to maintain device reliability. The amount of subthreshold conduction is set by the threshold voltage, which sits between ground and the supply voltage and so has to be reduced along with the supply voltage. That reduction means less gate-voltage swing below threshold to turn the device off, and since subthreshold conduction varies exponentially with gate voltage (see MOSFET: Cut-off Mode), it becomes more and more significant as MOSFETs shrink in size.[4]

Subthreshold conduction is only one component of leakage; other components that can be roughly equal in size, depending on the device design, are gate-oxide leakage and junction leakage.[5] Understanding the sources of leakage and the solutions for tackling its impact is a requirement for most circuit and system designers.[6]

Sub-threshold electronics

Some devices exploit sub-threshold conduction to process data without fully turning on or off; even in standard transistors a small amount of current leaks when they are technically switched off. Some sub-threshold devices have been able to operate with between 1 and 0.1 percent of the power of standard chips.[7] Such low-power operation allows some devices to function on the small amounts of power that can be scavenged without an attached power supply, such as a wearable EKG monitor that can run entirely on body heat.[7]
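The exponential dependence mentioned above can be illustrated with the usual weak-inversion current model, I ≈ I0·exp((VGS − VT)/(n·vT)); the constants in this sketch are illustrative assumptions, not values from the source:

import math

# Weak-inversion (subthreshold) drain-current model:
#   I = I0 * exp((VGS - VT) / (n * vT))
I0 = 1e-6          # A, extrapolated current at VGS = VT (hypothetical)
VT = 0.30          # V, threshold voltage (hypothetical)
N = 1.5            # subthreshold slope factor (hypothetical)
V_THERMAL = 0.026  # V, thermal voltage kT/q at room temperature

def i_sub(vgs: float) -> float:
    """Subthreshold drain current, A."""
    return I0 * math.exp((vgs - VT) / (N * V_THERMAL))

# n * vT * ln(10) is about 90 mV here, so every ~90 mV of gate swing
# below threshold changes the current by roughly a factor of 10.
for vgs in (0.30, 0.21, 0.12, 0.03):
    print(f"VGS = {vgs:.2f} V -> I = {i_sub(vgs):.2e} A")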
Integrated circuit

An integrated circuit or monolithic integrated circuit (also referred to as an IC, a chip, or a microchip) is a set of electronic circuits on one small flat piece (or "chip") of semiconductor material, normally silicon. The integration of large numbers of tiny transistors into a small chip results in circuits that are orders of magnitude smaller, cheaper, and faster than those constructed of discrete electronic components. The IC's mass-production capability, reliability, and building-block approach to circuit design have ensured the rapid adoption of standardized ICs in place of designs using discrete transistors. ICs are now used in virtually all electronic equipment and have revolutionized the world of electronics. Computers, mobile phones, and other digital home appliances are now inextricable parts of the structure of modern societies, made possible by the small size and low cost of ICs.

ICs were made possible by experimental discoveries showing that semiconductor devices could perform the functions of vacuum tubes, and by mid-20th-century technology advancements in semiconductor device fabrication. Since their origins in the 1960s, the size, speed, and capacity of chips have progressed enormously, driven by technical advances that fit more and more transistors on chips of the same size; a modern chip may have several billion transistors in an area the size of a human fingernail. These advances, roughly following Moore's law, make a computer chip of today possess millions of times the capacity and thousands of times the speed of the computer chips of the early 1970s.

ICs have two main advantages over discrete circuits: cost and performance. Cost is low because the chips, with all their components, are printed as a unit by photolithography rather than being constructed one transistor at a time. Furthermore, packaged ICs use much less material than discrete circuits. Performance is high because the IC's components switch quickly and consume comparatively little power, thanks to their small size and close proximity. The main disadvantage
of ICs is the high cost of designing them and fabricating the required photomasks. This high initial cost means ICs are only practical when high production volumes are anticipated.

Terminology

An integrated circuit is defined as:[1]

A circuit in which all or some of the circuit elements are inseparably associated and electrically interconnected so that it is considered to be indivisible for the purposes of construction and commerce.

Circuits meeting this definition can be constructed using many different technologies, including thin-film transistors, thick-film technologies, and hybrid integrated circuits. However, in general usage "integrated circuit" has come to refer to the single-piece circuit construction originally known as a monolithic integrated circuit.[2][3]

Invention

Early development of the integrated circuit goes back to 1949, when the German engineer Werner Jacobi (Siemens AG)[4] filed a patent for an integrated-circuit-like semiconductor amplifying device[5] showing five transistors on a common substrate in a three-stage amplifier arrangement. Jacobi disclosed small and cheap hearing aids as typical industrial applications of his patent. No immediate commercial use of his patent has been reported.

The idea of the integrated circuit was conceived by Geoffrey Dummer (1909-2002), a radar scientist working for the Royal Radar Establishment of the British Ministry of Defence. Dummer presented the idea to the public at the Symposium on Progress in Quality Electronic Components in Washington, D.C., on 7 May 1952.[6] He gave many public symposia to propagate his ideas, and unsuccessfully attempted to build such a circuit in 1956.

A precursor idea to the IC was to create small ceramic squares (wafers), each containing a single miniaturized component. Components could then be integrated and wired into a two- or three-dimensional compact grid. This idea, which seemed very promising in 1957, was proposed to the US Army by Jack Kilby and led to the short-lived Micromodule Program (similar to 1951's Project Tinkertoy).[7] However, as the project was gaining momentum, Kilby came up with a new, revolutionary design: the IC.

[Figure: Jack Kilby's original integrated circuit]

Newly employed by Texas Instruments, Kilby recorded his initial ideas concerning the integrated circuit in July 1958, successfully demonstrating the first working integrated example on 12 September 1958.[8] In his patent application of 6 February 1959,[9] Kilby described his new device
  • 34. as "a body of semiconductor material … wherein all the components of the electronic circuit are completely integrated."[10]The first customer for the new invention was the US Air Force.[11] Kilby won the 2000 Nobel Prize in Physics for his part inthe invention of the integrated circuit.[12] His work was named an IEEE Milestone in 2009.[13] Half a year after Kilby, Robert Noyce at Fairchild Semiconductor developed his own idea of an integrated circuit that solved many practical problems Kilby's had not. Noyce's design was made of silicon, whereas Kilby's chip was made of germanium. Noyce credited Kurt Lehovecof Sprague Electric for the principle of p–n junction isolation, a key concept behind the IC.[14] This isolation allows each transistor to operate independently despite being parts of the same piece of silicon. Fairchild Semiconductor was also home of the first silicon-gate IC technology with self-aligned gates, the basis of all modern CMOS computer chips. The technology was developed by Italian physicist Federico Faggin in 1968. In 1970, he joined Intel in order to develop the first single- chip central processing unit (CPU) microprocessor, the Intel 4004, for which he received the National Medal of Technology and Innovation in 2010. The 4004 was designed by Busicom's Masatoshi Shima and Intel's Ted Hoff in 1969, but it was Faggin's improved design in 1970 that made it a reality.[15] Advances[edit] Advances in IC technology, primarily smaller features and larger chips, have allowed the number of transistors in an integrated circuit to double every two years, a trend known as Moore's law. This increased capacity has been used to decrease cost and increase functionality. In general, as the feature size shrinks, almost every aspect of an IC's operation improves. The cost per transistor and the switching power consumption per transistor go down, while the memory capacity and speed go up, through the relationships defined by Dennard scaling.[16] Because speed, capacity, and power consumption gains are apparent to the end user, there is fierce competition among the manufacturers to use finer geometries. Over the years, transistor sizes have decreased from 10s of microns in the early 1970s to 10 nanometers in 2017 [17] with a corresponding million-fold increase in transistors per unit area. As of 2016, typical chip areas range from a few square millimeters to around 600 mm2, with up to 25 million transistors per mm2.[18] The expected shrinking of feature sizes, and the needed progress in related areas was forecast for many years by the International Technology Roadmap for Semiconductors (ITRS). The final ITRS was issued in 2016, and it is being replaced by the International Roadmap for Devices and Systems.[19] Initially, ICs were strictly electronic devices. The success of ICs has led to the integration of other technologies, in the attempt to obtain the same advantages of small size and low cost. These technologies include mechanical devices, optics, and sensors.  Charge-coupled devices, and the closely related active pixel sensors, are chips that are sensitive to light. They have largely replaced film in scientific, medical, and consumer applications. Billions of these devices are now produced each year for applications such as cellphones, tablets, and digital cameras. This sub-field of ICs won the Nobel prize in 2009.  Very small mechanical devices driven by electricity can be integrated onto chips, a technology known as microelectromechanical systems. 
These devices were developed in the late 1980s[20] and are used in a variety of commercial and military applications. Examples include DLP projectors, inkjet printers, and accelerometers and MEMS gyroscopesused to deploy automobile airbags.
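The million-fold density increase quoted above follows directly from the linear shrink; a quick arithmetic check using the figures in the text:

# Feature size shrank from ~10 um (early 1970s) to ~10 nm (2017).
# Density scales roughly with the inverse square of the feature size.
early, modern = 10e-6, 10e-9        # metres
linear_shrink = early / modern      # 1000x
density_gain = linear_shrink ** 2   # 1,000,000x
print(f"linear shrink: {linear_shrink:,.0f}x, "
      f"density gain: {density_gain:,.0f}x")

# Moore's-law doubling every two years over roughly the same 46-year span:
print(f"predicted transistor-count growth: {2 ** (46 // 2):,}x")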
Initially, ICs were strictly electronic devices. The success of ICs has led to the integration of other technologies, in an attempt to obtain the same advantages of small size and low cost. These technologies include mechanical devices, optics, and sensors.

• Charge-coupled devices, and the closely related active pixel sensors, are chips that are sensitive to light. They have largely replaced film in scientific, medical, and consumer applications. Billions of these devices are now produced each year for applications such as cellphones, tablets, and digital cameras. This sub-field of ICs won the Nobel Prize in 2009.

• Very small mechanical devices driven by electricity can be integrated onto chips, a technology known as microelectromechanical systems (MEMS). These devices were developed in the late 1980s[20] and are used in a variety of commercial and military applications, including DLP projectors, inkjet printers, and the accelerometers and MEMS gyroscopes used to deploy automobile airbags.

• Since the early 2000s, the integration of optical functionality (optical computing) into silicon chips has been actively pursued in both academic research and industry, resulting in the successful commercialization of silicon-based integrated optical transceivers combining optical devices (modulators, detectors, routing) with CMOS-based electronics.[21] Integrated optical circuits are also being developed.

• Integrated circuits are also being developed for sensor applications in medical implants and other bioelectronic devices.[22] Special sealing techniques have to be applied in such biogenic environments to avoid corrosion or biodegradation of the exposed semiconductor materials.[23]

As of 2016, the vast majority of all transistors are fabricated in a single layer on one side of a chip of silicon in a flat two-dimensional planar process. Researchers have produced prototypes of several promising alternatives, such as:

• various approaches to stacking several layers of transistors to make a three-dimensional integrated circuit, such as through-silicon vias, "monolithic 3D",[24] stacked wire bonding,[25] etc.;
• transistors built from other materials: graphene transistors, molybdenite transistors, carbon nanotube field-effect transistors, gallium nitride transistors, transistor-like nanowire electronic devices, organic field-effect transistors, etc.;
• fabricating transistors over the entire surface of a small sphere of silicon;[26][27]
• modifications to the substrate, typically to make "flexible transistors" for a flexible display or other flexible electronics, possibly leading to a roll-away computer.

Design

The cost of designing and developing a complex integrated circuit is quite high, normally in the multiple tens of millions of dollars.[28] This only makes economic sense if the production volume is high, so that the non-recurring engineering (NRE) costs are spread across typically millions of production units (a toy amortization sketch follows below).

Modern semiconductor chips have billions of components and are too complex to be designed by hand; software tools to help the designer are essential. Electronic design automation (EDA), also referred to as electronic computer-aided design (ECAD),[29] is a category of software tools for designing electronic systems, including integrated circuits. The tools work together in a design flow that engineers use to design and analyze entire semiconductor chips.
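To make the NRE argument concrete, here is a toy amortization calculation; all the dollar figures are hypothetical, chosen only to show how volume drives the per-unit cost:

# Toy model: per-unit cost = NRE / volume + marginal manufacturing cost.
NRE = 30e6      # $30M one-time design cost (hypothetical)
MARGINAL = 2.0  # $ per packaged die (hypothetical)

def unit_cost(volume: int) -> float:
    """Amortized cost of one chip at a given production volume."""
    return NRE / volume + MARGINAL

for volume in (10_000, 1_000_000, 100_000_000):
    print(f"{volume:>11,} units -> ${unit_cost(volume):,.2f} each")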
Integrated circuits can be classified into analog,[30] digital,[31] and mixed-signal[32] (both analog and digital on the same chip).

Digital integrated circuits can contain anywhere from one[33] to billions[18] of logic gates, flip-flops, multiplexers, and other circuits in a few square millimeters. The small size of these circuits allows high speed, low power dissipation, and reduced manufacturing cost compared with board-level integration. These digital ICs, typically microprocessors, DSPs, and microcontrollers, work using boolean algebra to process "one" and "zero" signals.

[Figure: the die from an Intel 8742, an 8-bit microcontroller that includes a CPU running at 12 MHz, 128 bytes of RAM, 2048 bytes of EPROM, and I/O in the same chip]

Among the most advanced integrated circuits are the microprocessors or "cores", which control everything from computers and cellular phones to digital microwave ovens. Digital memory chips and application-specific integrated circuits (ASICs) are examples of other families of integrated circuits that are important to the modern information society.

In the 1980s, programmable logic devices were developed. These devices contain circuits whose logical function and connectivity can be programmed by the user, rather than being fixed by the integrated-circuit manufacturer. This allows a single chip to be programmed to implement different LSI-type functions such as logic gates, adders, and registers. Current devices called field-programmable gate arrays (FPGAs) can (as of 2016) implement the equivalent of millions of gates in parallel and operate up to 1 GHz.[34]

Analog ICs, such as sensors, power-management circuits, and operational amplifiers, work by processing continuous signals. They perform functions like amplification, active filtering, demodulation, and mixing. Analog ICs ease the burden on circuit designers by making expertly designed analog circuits available, instead of requiring a difficult analog circuit to be designed from scratch.

ICs can also combine analog and digital circuits on a single chip to create functions such as A/D converters and D/A converters. Such mixed-signal circuits offer smaller size and lower cost, but must carefully account for signal interference. Prior to the late 1990s, radios could not be fabricated in the same low-cost CMOS processes as microprocessors; but since 1998, a large number of radio chips have been developed using CMOS processes. Examples include Intel's DECT cordless phone chip and the 802.11 (Wi-Fi) chips created by Atheros and other companies.[35]

Modern electronic component distributors often further sub-categorize the huge variety of integrated circuits now available:

• Digital ICs are further sub-categorized as logic ICs, memory chips, interface ICs (level shifters, serializer/deserializer, etc.), power-management ICs, and programmable devices.
• Analog ICs are further sub-categorized as linear ICs and RF ICs.
• Mixed-signal integrated circuits are further sub-categorized as data-acquisition ICs (including A/D converters, D/A converters, digital potentiometers) and clock/timing ICs.

Manufacturing

Fabrication
[Figure: rendering of a small standard cell with three metal layers (dielectric removed). The sand-colored structures are metal interconnect, with the vertical pillars being contacts, typically plugs of tungsten. The reddish structures are polysilicon gates, and the solid at the bottom is the crystalline silicon bulk.]

[Figure: schematic structure of a CMOS chip as built in the early 2000s, showing LDD-MISFETs on an SOI substrate with five metallization layers and a solder bump for flip-chip bonding, along with the FEOL (front-end of line) and BEOL (back-end of line) sections and the first parts of the back-end process.]

The semiconductors of the periodic table of the chemical elements were identified as the most likely materials for a solid-state replacement for the vacuum tube. Starting with copper oxide, proceeding to germanium, then silicon, the materials were systematically studied in the 1940s and 1950s. Today, monocrystalline silicon is the main substrate used for ICs, although some III-V compounds such as gallium arsenide are used for specialized applications like LEDs, lasers, solar cells, and the highest-speed integrated circuits. It took decades to perfect methods of creating crystals without defects in the crystalline structure of the semiconducting material.

Semiconductor ICs are fabricated in a planar process built around three key process steps: imaging, deposition, and etching. The main process steps are supplemented by doping and cleaning. Mono-crystal silicon wafers (or, for special applications, silicon-on-sapphire or gallium arsenide wafers) are used as the substrate. Photolithography is used to mark the different areas of the substrate to be doped or to have polysilicon, insulator, or metal (typically aluminium or copper) tracks deposited on them.

• Integrated circuits are composed of many overlapping layers, each defined by photolithography and normally shown in different colors. Some layers mark where various dopants are diffused into the substrate (diffusion layers), some define where additional ions are implanted (implant layers), some define the conductors (polysilicon or metal layers), and some define the connections between the conducting layers (via or contact layers). All components are constructed from a specific combination of these layers.
• In a self-aligned CMOS process, a transistor is formed wherever the gate layer (polysilicon or metal) crosses a diffusion layer.
• Capacitive structures, very much like the parallel conducting plates of a traditional electrical capacitor, are formed according to the area of the "plates", with insulating material between the plates. Capacitors of a wide range of sizes are common on ICs.
• Meandering stripes of varying lengths are sometimes used to form on-chip resistors, though most logic circuits do not need any resistors. The ratio of the length of the resistive structure to its width, combined with its sheet resistivity, determines the resistance (see the sketch after this list).
• More rarely, inductive structures can be built as tiny on-chip coils, or simulated by gyrators.
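The resistor rule of thumb above reduces to R = Rsheet × (L/W); a minimal sketch with an assumed sheet resistivity:

# On-chip resistor: R = R_SHEET * (L / W), where R_SHEET is the
# sheet resistivity in ohms per square (assumed value below).
R_SHEET = 50.0  # ohms per square, e.g. for a polysilicon layer (hypothetical)

def resistor_ohms(length_um: float, width_um: float) -> float:
    """Resistance of a resistive strip of the given geometry."""
    return R_SHEET * (length_um / width_um)

print(resistor_ohms(100.0, 1.0))  # 100 squares -> 5000 ohms
print(resistor_ohms(10.0, 2.0))   # 5 squares -> 250 ohms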
Since a CMOS device only draws current on the transition between logic states, CMOS devices consume much less current than bipolar devices. A random-access memory is the most regular type of integrated circuit, so the highest-density devices are memories; but even a microprocessor will have memory on the chip. (See the regular array structure at the bottom of the first image.) Although the structures are intricate, with widths that have been shrinking for decades, the layers remain much thinner than the device widths. The layers of material are fabricated much like a photographic process, although light waves in the visible spectrum cannot be used to "expose" a layer of material, as they would be too large for the features. Thus photons of higher frequencies (typically ultraviolet) are used to create the patterns for each layer. Because each feature is so small, electron microscopes are essential tools for a process engineer who might be debugging a fabrication process.

Each device is tested before packaging using automated test equipment (ATE), in a process known as wafer testing, or wafer probing. The wafer is then cut into rectangular blocks, each of which is called a die. Each good die (plural dice, dies, or die) is then connected into a package using aluminium (or gold) bond wires, which are thermosonically bonded[36] to pads usually found around the edge of the die. Thermosonic bonding was first introduced by A. Coucoulas, and provided a reliable means of forming these vital electrical connections to the outside world. After packaging, the devices go through final testing on the same or similar ATE used during wafer probing. Industrial CT scanning can also be used. Test cost can account for over 25% of the cost of fabrication on lower-cost products, but can be negligible on low-yielding, larger, or higher-cost devices.
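Wafer probing feeds directly into yield economics. A standard first-order model (not taken from the source) counts gross dies per wafer and applies a Poisson defect-yield estimate:

import math

# First-order wafer economics: gross dies per wafer and Poisson yield.
#   dies  ~ pi*(d/2)^2/A - pi*d/sqrt(2*A)   (edge-loss correction)
#   yield ~ exp(-A * D0)                    (Poisson defect model)
def gross_dies(wafer_d_mm: float, die_area_mm2: float) -> int:
    r = wafer_d_mm / 2
    return int(math.pi * r * r / die_area_mm2
               - math.pi * wafer_d_mm / math.sqrt(2 * die_area_mm2))

def poisson_yield(die_area_mm2: float, defects_per_mm2: float) -> float:
    return math.exp(-die_area_mm2 * defects_per_mm2)

area, d0 = 100.0, 0.002         # 100 mm^2 die, 0.2 defects/cm^2 (hypothetical)
dies = gross_dies(300.0, area)  # 300 mm wafer, as mentioned below
y = poisson_yield(area, d0)
print(f"gross dies: {dies}, yield: {y:.1%}, good dies: {dies * y:.0f}")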
As of 2016, a fabrication facility (commonly known as a semiconductor fab) can cost over US$8 billion to construct.[37] The cost of a fabrication facility rises over time (Rock's law) because much of the operation is automated. Today, the most advanced processes employ the following techniques:

• Wafers up to 300 mm in diameter (wider than a common dinner plate).
• As of 2016, state-of-the-art foundries can produce 14 nm transistors, as implemented by Intel, TSMC, Samsung, and GlobalFoundries; the next step, to 10 nm devices, is expected in 2017.[38]
• Copper interconnects, where copper wiring replaces aluminium.
• Low-K dielectric insulators.
• Silicon on insulator (SOI).
• Strained silicon, in a process used by IBM known as strained silicon directly on insulator (SSDOI).
• Multigate devices such as the tri-gate transistors manufactured by Intel from 2011 in their 22 nm process.

Packaging

[Figure: a Soviet MSI nMOS chip made in 1977, part of a four-chip calculator set designed in 1970[39]]

The earliest integrated circuits were packaged in ceramic flat packs, which the military continued to use for many years for their reliability and small size. Commercial circuit packaging quickly moved to the dual in-line package (DIP), first in ceramic and later in plastic. In the 1980s, pin counts of VLSI circuits exceeded the practical limit for DIP packaging, leading to pin grid array (PGA) and leadless chip carrier (LCC) packages. Surface-mount packaging appeared in the early 1980s and became popular in the late 1980s, using finer lead pitch with leads formed as either gull-wing or J-lead, as exemplified by the small-outline integrated circuit (SOIC) package, a carrier which occupies an area about 30-50% less than an equivalent DIP and is typically 70% thinner. This package has "gull wing" leads protruding from the two long sides and a lead spacing of 0.050 inches.

In the late 1990s, plastic quad flat pack (PQFP) and thin small-outline package (TSOP) packages became the most common for high-pin-count devices, though PGA packages are still often used for high-end microprocessors. Intel and AMD are currently transitioning from PGA packages on high-end microprocessors to land grid array (LGA) packages.

Ball grid array (BGA) packages have existed since the 1970s. Flip-chip ball grid array (FCBGA) packages, which allow for a much higher pin count than other package types, were developed in the 1990s. In an FCBGA package the die is mounted upside-down (flipped) and connects to the package balls
via a package substrate that is similar to a printed-circuit board rather than by wires. FCBGA packages allow an array of input-output signals (called area I/O) to be distributed over the entire die rather than being confined to the die periphery. Traces going out of the die, through the package, and into the printed circuit board have very different electrical properties from on-chip signals; they require special design techniques and need much more electric power than signals confined to the chip itself.

When multiple dies are put in one package, the result is a system in package, or SiP. A multi-chip module, or MCM, is created by combining multiple dies on a small substrate, often made of ceramic. The distinction between a big MCM and a small printed circuit board is sometimes fuzzy.

Chip labeling and manufacture date

Most integrated circuits are large enough to include identifying information. Four common sections are the manufacturer's name or logo, the part number, a part production batch number and serial number, and a four-digit date code identifying when the chip was manufactured. Extremely small surface-mount parts often bear only a number used in a manufacturer's lookup table to find the chip's characteristics. The manufacturing date is commonly represented as a two-digit year followed by a two-digit week code, such that a part bearing the code 8341 was manufactured in week 41 of 1983, or approximately in October 1983.
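That YYWW date-code convention is easy to decode mechanically. A small sketch follows; since the code carries only two year digits, the century pivot used here is an assumption:

# Decode a four-digit YYWW chip date code, e.g. "8341" -> week 41 of 1983.
def decode_date_code(code: str, century_pivot: int = 70) -> tuple:
    """Return (year, week). Two-digit years at or above the pivot are
    read as 19xx, those below it as 20xx (the pivot is an assumption)."""
    yy, ww = int(code[:2]), int(code[2:])
    if not 1 <= ww <= 53:
        raise ValueError(f"invalid week in date code: {code}")
    year = 1900 + yy if yy >= century_pivot else 2000 + yy
    return year, ww

print(decode_date_code("8341"))  # (1983, 41), i.e. approximately October 1983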
Intellectual property

The possibility of copying a chip by photographing each layer of an integrated circuit and preparing photomasks for its production on the basis of the photographs obtained is one reason for the introduction of legislation protecting layout designs. The Semiconductor Chip Protection Act of 1984 established intellectual-property protection for photomasks used to produce integrated circuits.[40]

A diplomatic conference held at Washington, D.C., in 1989 adopted a Treaty on Intellectual Property in Respect of Integrated Circuits, also called the Washington Treaty or IPIC Treaty (signed at Washington on 26 May 1989); it is currently not in force, but was partially integrated into the TRIPS agreement.[41] National laws protecting IC layout designs have been adopted in a number of countries, including Japan,[42] the EC,[43] the UK, Australia, and Korea.[44]

Other developments

Future developments seem to follow the multi-core multi-microprocessor paradigm, already used by Intel and AMD multi-core processors. Rapport Inc. and IBM started shipping the KC256, a 256-core microprocessor, in 2006. Intel, as recently as February-August 2011, unveiled a prototype "not for commercial sale" chip bearing 80 cores, each capable of handling its own task independently of the others. This is a response to the heat-versus-speed limit that is about to be reached using existing transistor technology (see thermal design power). The design provides a new challenge to chip programming; parallel programming languages such as the open-source X10 programming language are designed to assist with this task.[45]

Generations

In the early days of simple integrated circuits, the technology's large scale limited each chip to only a few transistors, and the low degree of integration meant the design process was relatively simple. Manufacturing yields were also quite low by today's standards. As the technology progressed, millions, then billions[46] of transistors could be placed on one chip, and good designs required thorough planning, giving rise to the field of electronic design automation, or EDA.

  Name | Signification                 | Year | Transistor count[47]   | Logic gate count[48]
  SSI  | small-scale integration       | 1964 | 1 to 10                | 1 to 12
  MSI  | medium-scale integration      | 1968 | 10 to 500              | 13 to 99
  LSI  | large-scale integration       | 1971 | 500 to 20,000          | 100 to 9,999
  VLSI | very-large-scale integration  | 1980 | 20,000 to 1,000,000    | 10,000 to 99,999
  ULSI | ultra-large-scale integration | 1984 | 1,000,000 and more     | 100,000 and more

SSI, MSI and LSI

The first integrated circuits contained only a few transistors. Early digital circuits containing tens of transistors provided a few logic gates, and early linear ICs such as the Plessey SL201 or the Philips TAA320 had as few as two transistors. The number of transistors in an integrated circuit has increased dramatically since then. The term "large-scale integration" (LSI) was first used by IBM scientist Rolf Landauer when describing the theoretical concept; that term gave rise to the terms "small-scale integration" (SSI), "medium-scale integration" (MSI), "very-large-scale integration" (VLSI), and "ultra-large-scale integration" (ULSI).

The early integrated circuits were SSI. SSI circuits were crucial to early aerospace projects, and aerospace projects helped inspire development of the technology. Both the Minuteman missile and the Apollo program needed lightweight digital computers for their inertial guidance systems. Although the Apollo Guidance Computer led and motivated integrated-circuit technology,[49] it was the Minuteman missile that forced it into mass production. The Minuteman missile program and various other Navy programs accounted for the total $4 million integrated-circuit market in 1962, and by 1968 U.S. Government space and defense spending still accounted for 37% of the $312 million total production.

The demand by the U.S. Government supported the nascent integrated-circuit market until costs fell enough to allow IC firms to penetrate first the industrial market and eventually the consumer market. The average price per integrated circuit dropped from $50.00 in 1962 to $2.33 in 1968.[50] Integrated circuits began to appear in consumer products by the turn of the decade, a typical application being FM inter-carrier sound processing in television receivers. The first MOS chips were small-scale integration chips for NASA satellites.[51]

The next step in the development of integrated circuits, taken in the late 1960s, introduced devices which contained hundreds of transistors on each chip, called "medium-scale integration" (MSI). In 1964, Frank Wanlass demonstrated a single-chip 16-bit shift register he designed, with an incredible (at the time) 120 transistors on a single chip.[51][52] MSI devices were attractive economically because, while they cost a little more to produce than SSI devices, they allowed more complex systems to be produced using smaller circuit boards, less assembly work (because of fewer separate components), and a number of other advantages.

Further development, driven by the same economic factors, led to "large-scale integration" (LSI) in the mid-1970s, with tens of thousands of transistors per chip. The masks used to process and manufacture SSI, MSI, and early LSI and VLSI devices (such as the microprocessors of the early 1970s) were mostly created by hand, often using Rubylith tape or similar.[53] For large or complex ICs (such as memories or processors), this was often done by specially hired layout people under the supervision of a team of engineers, who would also, along with the circuit designers, inspect and verify the correctness and completeness of each mask. Modern VLSI devices, however, contain so many transistors, layers, interconnections, and other features that it is no longer feasible to check the masks or do the original design by hand; the engineer depends on computer programs and other hardware aids to do most of this work.[54]

Integrated circuits such as 1K-bit RAMs, calculator chips, and the first microprocessors, which began to be manufactured in moderate quantities in the early 1970s, had under 4,000 transistors. True LSI circuits, approaching 10,000 transistors, began to be produced around 1974, for computer main memories and second-generation microprocessors. Some SSI and MSI chips, like discrete transistors, are still mass-produced, both to maintain old equipment and to build new devices that require only a few gates; the 7400 series of TTL chips, for example, has become a de facto standard and remains in production.

VLSI