Computing and Informatics Class Notes for AMIE


By Vinayak Ashok Bharadi

Local Area Networks

For historical reasons, the industry refers to nearly every type of network as an "area network." The most commonly discussed categories of computer networks include the following:

• Local Area Network (LAN)
• Wide Area Network (WAN)
• Metropolitan Area Network (MAN)
• Storage Area Network (SAN)
• System Area Network (SAN)
• Server Area Network (SAN)
• Small Area Network (SAN)
• Personal Area Network (PAN)
• Desk Area Network (DAN)
• Controller Area Network (CAN)
• Cluster Area Network (CAN)

LANs and WANs were the original flavors of network design. The concept of "area" made good sense at this time, because a key distinction between a LAN and a WAN involves the physical distance that the network spans. A third category, the MAN, also fits into this scheme, as it too is centered on a distance-based concept.

As technology improved, new types of networks appeared on the scene. These, too, became known as various types of "area networks" for consistency's sake, although distance no longer proved a useful differentiator.

LAN Basics

A LAN connects network devices over a relatively short distance. A networked office building, school, or home usually contains a single LAN, though sometimes one building will contain a few small LANs, and occasionally a LAN will span a group of nearby buildings. In IP networking, one can conceive of a LAN as a single IP subnet (though this is not necessarily true in practice).

Besides operating in a limited space, LANs have several other distinctive features. They are typically owned, controlled, and managed by a single person or organization. They also use certain specific connectivity technologies, primarily Ethernet and Token Ring.
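The idea of a LAN as a single IP subnet can be illustrated with a short Python sketch using the standard-library ipaddress module. The addresses below are arbitrary private-range examples chosen for illustration, not part of the original notes:

```python
import ipaddress

# A LAN can often be modelled as a single IP subnet: hosts that share
# the network prefix can reach each other without crossing a router.
lan = ipaddress.ip_network("192.168.1.0/24")

host_a = ipaddress.ip_address("192.168.1.10")
host_b = ipaddress.ip_address("192.168.1.200")
outside = ipaddress.ip_address("10.0.0.5")

print(host_a in lan)    # True  - same LAN segment
print(host_b in lan)    # True
print(outside in lan)   # False - traffic must go through a router
```

The membership test is exactly the prefix comparison a host performs when deciding whether to deliver a packet directly or hand it to the default gateway.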
WAN Basics

As the term implies, a wide area network spans a large physical distance. A WAN like the Internet spans most of the world!

A WAN is a geographically dispersed collection of LANs. A network device called a router connects LANs to a WAN. In IP networking, the router maintains both a LAN address and a WAN address.

WANs differ from LANs in several important ways. Like the Internet, most WANs are not owned by any one organization but rather exist under collective or distributed ownership and management. WANs use technologies like ATM, Frame Relay, and X.25 for connectivity.

LANs and WANs at Home

Home networkers with cable modem or DSL service have already encountered LANs and WANs in practice, though they may not have noticed. A cable/DSL router like those in the Linksys family joins the home LAN to the WAN link maintained by one's ISP. The ISP provides a WAN IP address used by the router, and all of the computers on the home network use private LAN addresses. On a home network, as on many LANs, all computers can communicate directly with each other, but they must go through a central gateway to reach devices outside of their local area.

What About MAN, SAN, PAN, DAN, and CAN?

Future articles will describe the many other types of area networks in more detail. After LANs and WANs, one will most commonly encounter the following three network designs:

A Metropolitan Area Network connects an area larger than a LAN but smaller than a WAN, such as a city, with dedicated or high-performance hardware. [1]

A Storage Area Network connects servers to data storage devices through a technology like Fibre Channel. [2]

A System Area Network connects high-performance computers with high-speed connections in a cluster configuration.

Conclusion

To the uninitiated, LANs, WANs, and the other area network acronyms appear to be just more alphabet soup in a technology industry already drowning in terminology. The names of these networks are not nearly as important as the technologies used to construct them, however. A person can use the categorizations as a learning tool to better understand concepts like subnets, gateways, and routers.
Bus, Ring, Star, and Other Types of Network Topology

In networking, the term "topology" refers to the layout of connected devices on a network. This article introduces the standard topologies of computer networking.

Topology in Network Design

One can think of a topology as a network's virtual shape or structure. This shape does not necessarily correspond to the actual physical layout of the devices on the network. For example, the computers on a home LAN may be arranged in a circle in a family room, but it would be highly unlikely to find an actual ring topology there.

Network topologies are categorized into the following basic types:

• bus
• ring
• star
• tree
• mesh

More complex networks can be built as hybrids of two or more of the above basic topologies.

Bus Topology

Bus networks (not to be confused with the system bus of a computer) use a common backbone to connect all devices. A single cable, the backbone, functions as a shared communication medium that devices attach or tap into with an interface connector. A device wanting to communicate with another device on the network sends a broadcast message onto the wire that all other devices see, but only the intended recipient actually accepts and processes the message.

Ethernet bus topologies are relatively easy to install and don't require much cabling compared to the alternatives. 10Base-2 ("ThinNet") and 10Base-5 ("ThickNet") were both popular Ethernet cabling options for bus topologies many years ago. However, bus networks work best with a limited number of devices. If more than a few dozen computers are added to a network bus, performance problems will likely result. In addition, if the backbone cable fails, the entire network effectively becomes unusable.
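The shared-medium behaviour of a bus can be sketched in a few lines of Python. This is a toy model (the class and device names are invented for illustration): every attached device sees every frame, but only the addressed device keeps it.

```python
# Toy model of a bus topology: the shared backbone delivers each frame
# to every tap, and filtering happens at the receiving device.
class BusDevice:
    def __init__(self, address):
        self.address = address
        self.received = []

    def on_frame(self, dst, payload):
        # Every device sees the frame; only the recipient processes it.
        if dst == self.address:
            self.received.append(payload)

class Bus:
    def __init__(self):
        self.devices = []

    def attach(self, device):
        self.devices.append(device)

    def send(self, dst, payload):
        for device in self.devices:   # shared medium: delivered to all taps
            device.on_frame(dst, payload)

bus = Bus()
a, b, c = BusDevice("A"), BusDevice("B"), BusDevice("C")
for dev in (a, b, c):
    bus.attach(dev)

bus.send("B", "hello")
print(a.received, b.received, c.received)  # [] ['hello'] []
```

The loop in `send` is why bus performance degrades as devices are added: every transmission occupies the whole medium.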
Ring Topology

In a ring network, every device has exactly two neighbors for communication purposes. All messages travel through the ring in the same direction (either "clockwise" or "counterclockwise"). A failure in any cable or device breaks the loop and can take down the entire network.

To implement a ring network, one typically uses FDDI, SONET, or Token Ring technology. Ring topologies are found in some office buildings or school campuses.

Star Topology

Many home networks use the star topology. A star network features a central connection point called a "hub" that may be a hub, switch, or router. Devices typically connect to the hub with Unshielded Twisted Pair (UTP) Ethernet.

Compared to the bus topology, a star network generally requires more cable, but a failure in any star network cable will only take down one computer's network access and not the entire LAN. (If the hub fails, however, the entire network also fails.)
Tree Topology

Tree topologies integrate multiple star topologies together onto a bus. In its simplest form, only hub devices connect directly to the tree bus, and each hub functions as the "root" of a tree of devices. This bus/star hybrid approach supports future expandability of the network much better than a bus (limited in the number of devices due to the broadcast traffic it generates) or a star (limited by the number of hub connection points) alone.

Mesh Topology

Mesh topologies involve the concept of routes. Unlike each of the previous topologies, messages sent on a mesh network can take any of several possible paths from source to destination. (Recall that even in a ring, although two cable paths exist, messages can only travel in one direction.) Some WANs, like the Internet, employ mesh routing.

Summary

Topologies remain an important part of network design theory. You can probably build a home or small business network without understanding the difference between a bus design and a star design, but understanding the concepts behind them gives you a deeper understanding of important elements like hubs, broadcasts, and routes.

Internet Protocol Suite

Layer           Protocols
5. Application  DNS, TLS/SSL, TFTP, FTP, HTTP, IMAP4, IRC, POP3, SIP, SMTP, SNMP, SSH, TELNET, RTP, …
4. Transport    TCP, UDP, RSVP, DCCP, SCTP, …
3. Network      IP (IPv4, IPv6), ICMP, IGMP, ARP, RARP, …
2. Data link    Ethernet, Wi-Fi, PPP, FDDI, ATM, Frame Relay, GPRS, Bluetooth, …
1. Physical     Modems, ISDN, SONET/SDH, RS232, USB, Ethernet physical layer, Wi-Fi, GSM, Bluetooth, …

The Internet protocol suite is the set of communications protocols that implement the protocol stack on which the Internet and most commercial networks run. It is sometimes called the TCP/IP protocol suite, after the two most important protocols in it: the Transmission Control Protocol (TCP) and the Internet Protocol (IP), which were also the first two defined.

Like many protocol suites, the Internet protocol suite can be viewed as a set of layers. Each layer solves a set of problems involving the transmission of data and provides a well-defined service to the upper-layer protocols, using services from the layers below it. Upper layers are logically closer to the user and deal with more abstract data, relying on lower-layer protocols to translate data into forms that can eventually be physically transmitted. The original TCP/IP reference model consisted of four layers, but it has evolved into a five-layer model.

The OSI model describes a fixed, seven-layer stack for networking protocols. Comparisons between the OSI model and TCP/IP can give further insight into the significance of the components of the IP suite, but they can also cause confusion, since the definitions of the layers are slightly different.

History

The Internet protocol suite came from work done by DARPA in the early 1970s. After building the pioneering ARPANET, DARPA started work on a number of other data transmission technologies. In 1972, Robert E. Kahn was hired at the DARPA Information Processing Technology Office, where he worked on both satellite packet networks and ground-based radio packet networks, and recognized the value of being able to communicate across them.
In the spring of 1973, Vinton Cerf, the developer of the existing ARPANET Network Control Program (NCP) protocol, joined Kahn to work on open-architecture interconnection models with the goal of designing the next protocol for the ARPANET.

By the summer of 1973, Kahn and Cerf had worked out a fundamental reformulation, in which the differences between network protocols were hidden by using a common internetwork protocol, and, instead of the network being responsible for reliability as in the ARPANET, the hosts became responsible. (Cerf credits Hubert Zimmerman and Louis Pouzin, designer of the CYCLADES network, with important influences on this design.)

With the role of the network reduced to the bare minimum, it became possible to join almost any networks together, no matter what their characteristics were, thereby solving Kahn's initial problem. (One popular saying has it that TCP/IP, the eventual product of Cerf and Kahn's work, will run over "two tin cans and a string", and it has in fact been implemented using homing pigeons.) A computer called a gateway (later renamed router to avoid confusion with other types of gateway) is provided with an interface to each network and forwards packets back and forth between them.

The idea was worked out in more detailed form by Cerf's networking research group at Stanford in the 1973–74 period. (The early networking work at Xerox PARC, which produced the PARC Universal Packet protocol suite, much of which was contemporaneous, was also a significant technical influence; people moved between the two.)

DARPA then contracted with BBN Technologies, Stanford University, and University College London to develop operational versions of the protocol on different hardware platforms. Four versions were developed: TCP v1, TCP v2, a split into TCP v3 and IP v3 in the spring of 1978, and then stability with TCP/IP v4, the standard protocol still in use on the Internet today.

In 1975, a two-network TCP/IP communications test was performed between Stanford and University College London (UCL). In November 1977, a three-network TCP/IP test was conducted between sites in the US, the UK, and Norway. Between 1978 and 1983, several other TCP/IP prototypes were developed at multiple research centres. A full switchover to TCP/IP on the ARPANET took place on January 1, 1983.[1]

In March 1982,[2] the US Department of Defense made TCP/IP the standard for all military computer networking. In 1985, the Internet Architecture Board held a three-day workshop on TCP/IP for the computer industry, attended by 250 vendor representatives, helping to popularize the protocol and leading to its increasing commercial use.

On November 9, 2005, Kahn and Cerf were presented with the Presidential Medal of Freedom for their contribution to American culture.[3]
Layers in the Internet Protocol Suite Stack

[Figure: IP suite stack showing the physical network connection of two hosts via two routers and the corresponding layers used at each hop]

[Figure: Sample encapsulation of data within a UDP datagram within an IP packet]

The IP suite uses encapsulation to provide abstraction of protocols and services. Generally, a protocol at a higher level uses a protocol at a lower level to help accomplish its aims. The Internet protocol stack can be roughly fitted to the four layers of the original TCP/IP model:
4. Application     DNS, TFTP, TLS/SSL, FTP, HTTP, IMAP, IRC, NNTP, POP3, SIP, SMTP, SNMP, SSH, TELNET, ECHO, BitTorrent, RTP, PNRP, rlogin, ENRP, …
                   Routing protocols like BGP and RIP, which for a variety of reasons run over TCP and UDP respectively, may also be considered part of the application or network layer.
3. Transport       TCP, UDP, DCCP, SCTP, IL, RUDP, …
                   Routing protocols like OSPF, which run over IP, may also be considered part of the transport or network layer. ICMP and IGMP, which run over IP, may be considered part of the network layer.
2. Internet        IP (IPv4, IPv6)
                   ARP and RARP operate underneath IP but above the link layer, so they belong somewhere in between.
1. Network access  Ethernet, Wi-Fi, Token Ring, PPP, SLIP, FDDI, ATM, Frame Relay, SMDS, …

In many modern textbooks, this model has evolved into the five-layer TCP/IP model, where the network access layer is split into a data link layer on top of a physical layer, and the Internet layer is called the network layer.

Implementations

Today, most commercial operating systems include and install the TCP/IP stack by default. For most users, there is no need to look for implementations. TCP/IP is included in all commercial Unix systems, Mac OS X, and all free-software Unix-like systems such as Linux distributions and BSD systems, as well as Microsoft Windows.

Unique implementations include Lightweight TCP/IP, an open-source stack designed for embedded systems, and KA9Q NOS, a stack and associated protocols for amateur packet radio systems and personal computers connected via serial lines.
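The encapsulation described above, where each layer wraps the data handed down from the layer above, can be sketched with deliberately simplified headers. These are not the real IPv4/UDP header layouts (real headers carry many more fields); the port numbers and field choices here are illustrative only:

```python
import struct

payload = b"hello"

# Simplified UDP-style header: source port, destination port, length, checksum
udp_segment = struct.pack("!HHHH", 12345, 53, 8 + len(payload), 0) + payload

# Simplified IP-style wrapper: protocol number (17 = UDP) and total length
ip_packet = struct.pack("!BH", 17, len(udp_segment)) + udp_segment

# The receiver peels the layers off in reverse order.
proto, total = struct.unpack("!BH", ip_packet[:3])
inner = ip_packet[3:]
src, dst, length, checksum = struct.unpack("!HHHH", inner[:8])
data = inner[8:]
print(proto, src, dst, data)  # 17 12345 53 b'hello'
```

Each layer treats everything above it as an opaque payload, which is exactly what lets IP carry TCP, UDP, or anything else without knowing their formats.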
Karnaugh Map

The Karnaugh map, also known as a Veitch diagram (K-map or KV-map for short), is a tool to facilitate the management of Boolean algebraic expressions. A Karnaugh map is unique in that only one variable changes value between adjacent squares; in other words, the rows and columns are ordered according to the principles of Gray code.

History and Nomenclature

The Karnaugh map was invented in 1953 by Maurice Karnaugh, a telecommunications engineer at Bell Labs.

Usage in Boolean Logic

Normally, extensive calculations are required to obtain the minimal expression of a Boolean function, but one can use a Karnaugh map instead.

Problem-Solving Uses

• Karnaugh maps make use of the human brain's excellent pattern-matching capability to decide which terms should be combined to get the simplest expression.
• K-maps permit the rapid identification and elimination of potential race hazards, something that Boolean equations alone cannot do.
• A Karnaugh map is an excellent aid for simplification of up to six variables, but with more variables it becomes hard even for our brains to discern optimal patterns.
• For problems involving more than six variables, solving the Boolean expressions algebraically is preferred over the Karnaugh map.

Karnaugh maps also help teach about Boolean functions and minimization.

Properties

[Figure: A mapping of minterms on a Karnaugh map. The arrows indicate which squares can be thought of as "switched" (rather than being in a normal sequential order).]
A Karnaugh map may have any number of variables, but it usually works best when there are only a few, between 2 and 6 for example. Each variable contributes two possibilities to each possibility of every other variable in the system. Karnaugh maps are organized so that all the possibilities of the system are arranged in a grid, and between two adjacent boxes only one variable can change value. This is what allows the map to reduce hazards.

When using a Karnaugh map to derive a minimized function, one "covers" the ones on the map with rectangular "coverings" that contain a number of boxes equal to a power of 2 (for example, 4 boxes in a line, 4 boxes in a square, 8 boxes in a rectangle, etc.). Once a person has covered the ones, that person can produce a term of a sum of products by finding the variables that do not change throughout the entire covering, taking a 1 to mean the variable itself and a 0 to mean the complement of that variable. Doing this for every covering gives a matching function.

One can also use the zeros to derive a minimized function. The procedure is identical to the procedure for ones, except that each term is a term in a product of sums, and a 1 means the complement of the variable while a 0 means the variable non-complemented.

Each square in a Karnaugh map corresponds to a minterm (and maxterm). The picture to the right shows the location of each minterm on the map.

Example

Consider the following function: f(A,B,C,D) = Σ(4,8,9,10,11,12,14,15)

The values inside Σ tell us which rows (minterms) have output 1. This function has the following truth table:

#   A B C D   f(A,B,C,D)
0   0 0 0 0   0
1   0 0 0 1   0
2   0 0 1 0   0
3   0 0 1 1   0
4   0 1 0 0   1
5   0 1 0 1   0
6   0 1 1 0   0
7   0 1 1 1   0
8   1 0 0 0   1
9   1 0 0 1   1
10  1 0 1 0   1
11  1 0 1 1   1
12  1 1 0 0   1
13  1 1 0 1   0
14  1 1 1 0   1
15  1 1 1 1   1

The input variables can be combined in 16 different ways, so our Karnaugh map has to have 16 positions. The most convenient way to arrange this is in a 4x4 grid.
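The 4x4 grid works because its rows and columns are labelled in 2-bit Gray code order rather than ordinary binary order, so adjacent cells differ in exactly one variable. The standard binary-reflected Gray code can be generated with one XOR:

```python
# Binary-reflected Gray code: the i-th codeword is i XOR (i >> 1).
def gray_code(n_bits):
    return [i ^ (i >> 1) for i in range(2 ** n_bits)]

order = [format(g, "02b") for g in gray_code(2)]
print(order)  # ['00', '01', '11', '10']

# Any two neighbouring labels differ in exactly one bit:
for x, y in zip(order, order[1:]):
    assert sum(a != b for a, b in zip(x, y)) == 1
```

This is the 00, 01, 11, 10 ordering used for both the rows (AB) and the columns (CD) of the map in the example below.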
The binary digits in the map represent the function's output for any given combination of inputs. We write 0 in the upper leftmost corner of the map because f = 0 when A = 0, B = 0, C = 1, D = 0. Similarly, we mark the bottom right corner as 1 because A = 1, B = 0, C = 0, D = 0 gives f = 1. Note that the values are ordered in a Gray code, so that precisely one variable flips between any pair of adjacent cells.

After the Karnaugh map has been constructed, our next task is to find the minimal terms to use in the final expression. These terms are found by encircling groups of 1s in the map. The encirclings must be rectangular and must have an area that is a power of two (i.e. 1, 2, 4, 8, …). The rectangles should be as large as possible without containing any 0s. The optimal encirclings in this map are marked by the green, red, and blue lines.

For each of these encirclings, we find the variables that have the same state in each of the cells of the encircling. For the first (red) encircling we find that:

• The variable A maintains the same state (1) in the whole encircling; therefore it should be included in the term for the red encircling.
• Variable B does not maintain the same state (it shifts from 1 to 0), and should therefore be excluded.
• C does not change: it is always 1.
• D changes.

Thus the first term in the Boolean expression is AC.

For the green encircling, we see that A and B maintain the same state, but C and D change. B is 0 and has to be negated before it can be included. Thus the second term is AB′. In the same way, the blue rectangle gives the term BC′D′, and so the whole expression is: AC + AB′ + BC′D′.

The grid is toroidally connected, which means that the rectangles can wrap around edges, so AD′ is a valid term, although not part of the minimal set.

The inverse of a function is solved in the same way by encircling the 0s instead.

In a Karnaugh map with n variables, a Boolean term mentioning k of them will have a corresponding rectangle of area 2^(n−k).

Karnaugh maps also allow easy minimization of functions whose truth tables include "don't care" conditions (that is, sets of inputs for which the designer doesn't care what the output is), because a "don't care" cell can be included in a ring to make it larger but does not have to be ringed. They are usually indicated on the map with a hyphen, dash, or X in place of the number, and each one may be treated as a "0" or a "1", whichever simplifies the map more; if the "don't cares" don't help you simplify further, leave them uncovered.

Race Hazards

Karnaugh maps are useful for detecting and eliminating race hazards, which are very easy to spot on a map: a race condition may exist when moving between any pair of adjacent, but disjoint, regions circled on the map.

• In the above example, a potential race condition exists when C and D are both 0, A is 1, and B changes from 0 to 1 (moving from the green state to the blue state). For this case, the output is defined to remain unchanged at 1, but because this transition is not covered by a specific term in the equation, a potential for a glitch (a momentary transition of the output to 0) exists.
• A harder glitch to spot occurs if D is 0 and A and B are both 1, with C changing from 0 to 1. In this case the glitch wraps around from the bottom of the map to the top of the map.

Whether these glitches actually occur depends on the physical nature of the implementation, and whether we need to worry about them depends on the application.

In this case, an additional term of AD′ would eliminate the potential race hazard, bridging between the green and blue output states or the blue and red output states. The term is redundant in terms of the static logic of the system, but such redundant terms are often needed to assure race-free dynamic performance.

When Not to Use K-maps

The diagram becomes cluttered and hard to interpret if there are more than four variables on an axis. This argues against the use of Karnaugh maps for expressions with more than six variables.
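Within the six-variable range, a minimized expression can always be sanity-checked by brute force. A minimal Python sketch verifying the worked example above, comparing AC + AB′ + BC′D′ against f = Σ(4,8,9,10,11,12,14,15) on all 16 inputs:

```python
# Minterms of the worked example: f(A,B,C,D) = Sigma(4,8,9,10,11,12,14,15)
minterms = {4, 8, 9, 10, 11, 12, 14, 15}

def f(a, b, c, d):
    # Output 1 exactly when (A,B,C,D) encodes one of the listed minterms.
    return int((a << 3) | (b << 2) | (c << 1) | d in minterms)

def minimized(a, b, c, d):
    # The K-map result: A·C + A·B' + B·C'·D'
    return int((a and c) or (a and not b) or (b and not c and not d))

for i in range(16):
    a, b, c, d = (i >> 3) & 1, (i >> 2) & 1, (i >> 1) & 1, i & 1
    assert f(a, b, c, d) == minimized(a, b, c, d)
print("AC + AB'ated + BC'D' matches all 16 rows"[:2] and "AC + AB' + BC'D' matches all 16 rows")
```

The same loop structure checks any candidate covering, which is useful when deciding between several equally sized encirclings.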
For expressions of more than six variables, the Quine-McCluskey algorithm, also called the method of prime implicants, should be used instead. This algorithm generally finds most of the optimal solutions quickly and easily, but selecting the final prime implicants (after the essential ones are chosen) may still require a brute-force approach to get the optimal combination (though this is generally far simpler than trying to brute-force the entire problem).

Logic Gate

A logic gate performs a logical operation on one or more logic inputs and produces a single logic output. The logic normally performed is Boolean logic and is most commonly found in digital circuits. Logic gates are primarily implemented electronically using diodes or transistors, but they can also be constructed using electromagnetic relays, fluidics, optical elements, or even mechanical elements.
Logic Levels

A Boolean logical input or output always takes one of two logic levels. These logic levels can go by many names, including: on/off, high (H)/low (L), one (1)/zero (0), true (T)/false (F), positive/negative, positive/ground, open circuit/closed circuit, potential difference/no difference, yes/no.

For consistency, the names 1 and 0 will be used below.

Logic Gates

A logic gate takes one or more logic-level inputs and produces a single logic-level output. Because the output is also a logic level, an output of one logic gate can connect to the input of one or more other logic gates. Two outputs cannot be connected together, however, as they may be attempting to produce different logic values. In electronic logic gates, this would cause a short circuit.

In electronic logic, a logic level is represented by a certain voltage (which depends on the type of electronic logic in use). Each logic gate requires power so that it can source and sink currents to achieve the correct output voltage. In logic circuit diagrams the power is not shown, but in a full electronic schematic, power connections are required.

Background

The simplest form of electronic logic is diode logic. This allows AND and OR gates to be built, but not inverters, and so it is an incomplete form of logic. To build a complete logic system, valves or transistors can be used. The simplest family of logic gates using bipolar transistors is called resistor-transistor logic, or RTL. Unlike diode logic gates, RTL gates can be cascaded indefinitely to produce more complex logic functions. These gates were used in early integrated circuits. For higher speed, the resistors used in RTL were replaced by diodes, leading to diode-transistor logic, or DTL. It was then discovered that one transistor could do the job of two diodes in the space of one diode, so transistor-transistor logic, or TTL, was created. In some types of chip, to reduce size and power consumption still further, the bipolar transistors were replaced with complementary field-effect transistors (MOSFETs), resulting in complementary metal-oxide-semiconductor (CMOS) logic.

For small-scale logic, designers now use prefabricated logic gates from families of devices such as the TTL 7400 series invented by Texas Instruments and the CMOS 4000 series invented by RCA, and their more recent descendants. These devices usually contain transistors with multiple emitters, used to implement the AND function, which are not available as separate components. Increasingly, these fixed-function logic gates are being replaced by programmable logic devices, which allow designers to pack a huge number of mixed logic gates into a single integrated circuit. The field-programmable nature of programmable logic devices such as FPGAs has removed the "hard" property of hardware; it is now possible to change the logic design of a hardware system by reprogramming some of its components, thus allowing the features or function of a hardware implementation of a logic system to be changed.

Electronic logic gates differ significantly from their relay-and-switch equivalents. They are much faster, consume much less power, and are much smaller (all by a factor of a million or more in most cases). Also, there is a fundamental structural difference. The switch circuit creates a continuous metallic path for current to flow (in either direction) between its input and its output. The semiconductor logic gate, on the other hand, acts as a high-gain voltage amplifier, which sinks a tiny current at its input and produces a low-impedance voltage at its output. It is not possible for current to flow between the output and the input of a semiconductor logic gate.

Another important advantage of standardised semiconductor logic gates, such as the 7400 and 4000 families, is that they are cascadable. This means that the output of one gate can be wired to the inputs of one or several other gates, and so on ad infinitum, enabling the construction of circuits of arbitrary complexity without requiring the designer to understand the internal workings of the gates.

In practice, the output of one gate can only drive a finite number of inputs to other gates, a number called the fanout limit, but this limit is rarely reached in the newer CMOS logic circuits, as compared to TTL circuits. Also, there is always a delay, called the propagation delay, from a change in input of a gate to the corresponding change in its output. When gates are cascaded, the total propagation delay is approximately the sum of the individual delays, an effect which can become a problem in high-speed circuits.

Electronic Logic Levels

The two logic levels in binary logic circuits are represented by two voltage ranges, "low" and "high".
Each technology has its own requirements for the voltages used to represent the two logic levels, to ensure that the output of any device can reliably drive the input of the next device. Usually, two non-overlapping voltage ranges, one for each level, are defined. The difference between the high and low levels ranges from 0.7 volts in emitter-coupled logic to around 28 volts in relay logic.

Logic Gates and Hardware

NAND and NOR logic gates are the two pillars of logic, in that all other types of Boolean logic gates (i.e., AND, OR, NOT, XOR, XNOR) can be created from a suitable network of just NAND or just NOR gates. They can be built from relays or transistors, or any other technology that can create an inverter and a two-input AND or OR gate. Hence the NAND and NOR gates are called the universal gates.

For an input of 2 variables, there are 16 possible Boolean outputs. These 16 outputs are enumerated below with the appropriate function or logic gate for the 4 possible combinations of A and B. Note that not all outputs have a corresponding function or logic gate, although those that do not can be produced by combinations of those that do.

INPUT   A: 0 0 1 1
        B: 0 1 0 1
OUTPUT
0 0 0 0
0 0 0 1    A AND B
0 0 1 0
0 0 1 1    A
0 1 0 0
0 1 0 1    B
0 1 1 0    A XOR B
0 1 1 1    A OR B
1 0 0 0    A NOR B
1 0 0 1    A XNOR B
1 0 1 0    NOT B
1 0 1 1
1 1 0 0    NOT A
1 1 0 1
1 1 1 0    A NAND B
1 1 1 1

Logic gates are a vital part of many digital circuits, and as such, every kind is available as an IC. For examples, see the 4000 series of CMOS logic chips or the 7400 series of TTL chips.

Symbols

There are two sets of symbols in common use, both now defined by ANSI/IEEE Std 91-1984 and its supplement ANSI/IEEE Std 91a-1991. The "distinctive shape" set, based on traditional schematics, is used for simple drawings and is quicker to draw by hand. It is sometimes unofficially described as "military", reflecting its origin if not its modern usage. The "rectangular shape" set, based on IEC 60617-12, has rectangular outlines for all types of gate and allows representation of a much wider range of devices than is possible with the traditional symbols. The IEC's system has been adopted by other standards, such as EN 60617-12:1999 in Europe and BS EN 60617-12:1999 in the United Kingdom.

[Figure: distinctive-shape and rectangular-shape symbols for each gate]

AND
  A B | A AND B
  0 0 | 0
  0 1 | 0
  1 0 | 0
  1 1 | 1

OR (Boolean algebra: A+B)
  A B | A OR B
  0 0 | 0
  0 1 | 1
  1 0 | 1
  1 1 | 1

NOT
  A | NOT A
  0 | 1
  1 | 0

In electronics a NOT gate is more commonly called an inverter. The circle on the symbol is called a bubble, and is generally used in circuit diagrams to indicate an inverted input or output.

NAND
  A B | A NAND B
  0 0 | 1
  0 1 | 1
  1 0 | 1
  1 1 | 0

NOR
  A B | A NOR B
  0 0 | 1
  0 1 | 0
  1 0 | 0
  1 1 | 0

In practice, the cheapest gate to manufacture is usually the NAND gate. Additionally, Charles Peirce showed that NAND gates alone (as well as NOR gates alone) can be used to reproduce all the other logic gates.

Symbolically, a NAND gate can also be shown using the OR shape with bubbles on its inputs, and a NOR gate can be shown as an AND gate with bubbles on its inputs. This reflects the equivalency due to De Morgan's law, but it also allows a diagram to be read more easily, and allows a circuit to be mapped onto available physical gates in packages easily, since any circuit node that has bubbles at both ends can be replaced by a simple bubble-less connection and a suitable change of gate. If the NAND is drawn as an OR with input bubbles, and a NOR as an AND with input bubbles, this gate substitution occurs automatically in the diagram (effectively, the bubbles "cancel"). This is commonly seen in real logic diagrams; thus the reader must not get into the habit of associating the shapes exclusively with OR or AND functions, but must also take into account the bubbles at both inputs and outputs in order to determine the "true" logic function indicated.

Two more gates are the exclusive-OR (XOR) function and its inverse, exclusive-NOR (XNOR). The two-input exclusive-OR is true only when the two input values are different, and false if they are equal, regardless of the values. If there are more than two inputs, the gate generates a true at its output if the number of trues at its inputs is odd.[1] In practice, these gates are built from combinations of simpler logic gates.

XOR
  A B | A XOR B
  0 0 | 0
  0 1 | 1
  1 0 | 1
  1 1 | 0

XNOR
  A B | A XNOR B
  0 0 | 1
  0 1 | 0
  1 0 | 0
  1 1 | 1
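Peirce's observation that NAND alone suffices can be checked mechanically. A small sketch building NOT, AND, OR, and XOR from nothing but a two-input NAND (the four-NAND XOR construction is the classic one):

```python
def nand(a, b):
    return int(not (a and b))

def not_(a):
    return nand(a, a)            # NAND with tied inputs is an inverter

def and_(a, b):
    return not_(nand(a, b))      # AND is NAND followed by NOT

def or_(a, b):
    return nand(not_(a), not_(b))  # De Morgan: a + b = (a'.b')'

def xor(a, b):
    m = nand(a, b)               # classic four-NAND XOR network
    return nand(nand(a, m), nand(b, m))

for a in (0, 1):
    for b in (0, 1):
        assert and_(a, b) == (a & b)
        assert or_(a, b) == (a | b)
        assert xor(a, b) == (a ^ b)
print("NOT, AND, OR, XOR all reproduced from NAND alone")
```

The same exercise works with NOR as the only primitive, which is why both gates are called universal.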
The 7400 chip, containing four NANDs. The two additional contacts supply power (+5 V) and connect the ground.

DeMorgan equivalent symbols

By use of De Morgan's theorem, an AND gate can be turned into an OR gate by inverting the sense of the logic at its inputs and outputs. This leads to a separate set of symbols with inverted inputs and the opposite core symbol. These symbols can make circuit diagrams for circuits using active low signals much clearer and help to show accidental connection of an active high output to an active low input or vice-versa.

Storage of bits

Related to the concept of logic gates (and also built from them) is the idea of storing a bit of information. The gates discussed up to here cannot store a value: when the inputs change, the outputs immediately react. It is possible to make a storage element either through a capacitor (which stores charge due to its physical properties) or by feedback. Connecting the output of a gate to the input causes it to be put through the logic again, and choosing the feedback correctly allows it to be preserved or modified through the use of other inputs. A set of gates arranged in this fashion is known as a "latch", and more complicated designs that utilise clocks (signals that oscillate with a known period) and change only on the rising edge are called edge-triggered "flip-flops". The combination of multiple flip-flops in parallel, to store a multiple-bit value, is known as a register.

These registers or capacitor-based circuits are known as computer memory. They vary in performance, based on factors of speed, complexity, and reliability of storage, and many different types of designs are used based on the application.

Three-state logic gates
A tristate buffer can be thought of as a switch. If B is on, the switch is closed. If B is off, the switch is open.

Three-state, or 3-state, logic gates have three states of the output: high (H), low (L) and high-impedance (Z). The high-impedance state plays no role in the logic, which remains strictly binary. These devices are used on buses to allow multiple chips to send data. A group of three-states driving a line with a suitable control circuit is basically equivalent to a multiplexer, which may be physically distributed over separate devices or plug-in cards. Tri-state, a widely-used synonym of three-state, is a trademark of the National Semiconductor Corporation.

Miscellaneous

Logic circuits include such devices as multiplexers, registers, arithmetic logic units (ALUs), and computer memory, all the way up through complete microprocessors which can contain more than 100 million gates. In practice, the gates are made from field-effect transistors (FETs), particularly metal-oxide-semiconductor FETs (MOSFETs). In reversible logic, Toffoli gates are used.

History and development

The earliest logic gates were made mechanically. Charles Babbage, around 1837, devised the Analytical Engine. His logic gates relied on mechanical gearing to perform operations. Electromagnetic relays were later used for logic gates. In 1891, Almon Strowger patented a device containing a logic gate switch circuit (U.S. Patent 0447918). Strowger's patent was not in widespread use until the 1920s. Starting in 1898, Nikola Tesla filed for patents of devices containing logic gate circuits (see List of Tesla patents). Eventually, vacuum tubes replaced relays for logic operations. Lee De Forest's modification, in 1907, of the Fleming valve could be used as an AND logic gate. Claude E. Shannon introduced the use of Boolean algebra in the analysis and design of switching circuits in 1937.
Walther Bothe, inventor of the coincidence circuit, got part of the 1954 Nobel Prize in Physics for the first modern electronic AND gate, built in 1924. Active research is taking place in molecular logic gates.

Common Basic Logic ICs

  CMOS   TTL     Function
  4001   7402    Quad two-input NOR gate
  4011   7400    Quad two-input NAND gate
  4049   7404    Hex NOT gate (inverting buffer)
  4070   7486    Quad two-input XOR gate
  4071   7432    Quad two-input OR gate
  4077   74266   Quad two-input XNOR gate
  4081   7408    Quad two-input AND gate

For more CMOS logic ICs, including gates with more than two inputs, see the 4000 series.

Adders (electronics)

In electronics, an adder is a device which will perform the addition, S, of two numbers. In computing, the adder is part of the ALU, and some ALUs contain multiple adders. Although adders can be constructed for many numerical representations, such as binary-coded decimal or excess-3, the most common adders operate on binary numbers. In cases where two's complement is being used to represent negative numbers, it is trivial to modify an adder into an adder-subtractor.

For single-bit adders, there are two general types. A half adder has two inputs, generally labelled A and B, and two outputs, the sum S and carry output Co. S is the XOR of A and B, and Co is the AND of A and B. Essentially the output of a half adder is the two-bit arithmetic sum of two one-bit numbers, with Co being the most significant of these two outputs.

The other type of single-bit adder is the full adder, which is like a half adder but takes an additional carry input Ci. A full adder can be constructed from two half adders by connecting A and B to the inputs of one half adder, connecting the sum from that to an input of the second half adder, connecting Ci to the other input, and ORing the two carry outputs. Equivalently, S could be made the three-bit XOR of A, B, and Ci, and Co could be made the
three-bit majority function of A, B, and Ci. The output of the full adder is the two-bit arithmetic sum of three one-bit numbers.

The purpose of the carry input on the full adder is to allow multiple full adders to be chained together, with the carry output of one adder connected to the carry input of the next most significant adder. The carry is said to ripple down the carry lines of this sort of adder, giving it the name ripple carry adder.

Half adder

Half adder circuit diagram

A half adder is a logical circuit that performs an addition operation on two binary digits. The half adder produces a sum and a carry value which are both binary digits.

Following is the logic table for a half adder:

  Input    Output
  A  B     C  S
  0  0     0  0
  0  1     0  1
  1  0     0  1
  1  1     1  0
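The half adder's equations, and the construction of a full adder from two half adders described above, can be expressed directly in Python (an illustrative sketch, not part of the original notes):

```python
def half_adder(a, b):
    """Return (carry, sum) for two one-bit inputs."""
    s = a ^ b          # the sum S is the XOR of A and B
    c = a & b          # the carry Co is the AND of A and B
    return c, s

def full_adder(a, b, c_in):
    """Full adder built from two half adders plus an OR of the two carries."""
    c1, s1 = half_adder(a, b)
    c2, s = half_adder(s1, c_in)
    c_out = c1 | c2    # the two carry outputs are ORed together
    return c_out, s

# Check both circuits against ordinary integer addition.
for a in (0, 1):
    for b in (0, 1):
        assert half_adder(a, b) == divmod(a + b, 2)
        for ci in (0, 1):
            assert full_adder(a, b, ci) == divmod(a + b + ci, 2)
```

Chaining `full_adder` calls, feeding each carry-out into the next carry-in, gives exactly the ripple carry adder described above.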
Full adder

Full adder circuit diagram

A + B + CarryIn = Sum + CarryOut

A full adder is a logical circuit that performs an addition operation on three binary digits. The full adder produces a sum and carry value, which are both binary digits. It can be combined with other full adders (see below) or work on its own.

  Input        Output
  A  B  Ci     Co  S
  0  0  0      0   0
  0  0  1      0   1
  0  1  0      0   1
  0  1  1      1   0
  1  0  0      0   1
  1  0  1      1   0
  1  1  0      1   0
  1  1  1      1   1

Note that the final OR gate before the carry-out output may be replaced by an XOR gate without altering the resulting logic. This is because the only discrepancy between OR and XOR gates occurs when both inputs are 1; for the adder shown here, one can check this is never possible. Using only two types of gates is convenient if one desires to implement the adder directly using common IC chips.

One's complement

Alternatively, a system known as one's complement can be used to represent negative numbers. The one's complement form of a binary number is the bitwise NOT applied to it, i.e. the complement of its positive counterpart. Like sign-and-magnitude representation, one's complement has two representations of 0: 00000000 (+0) and 11111111 (−0).

As an example, the one's complement form of 00101011 (43) becomes 11010100 (−43). The range of signed numbers using one's complement in a conventional eight-bit byte is −127 to +127.

To add two numbers represented in this system, one does a conventional binary addition, but it is then necessary to add any resulting carry back into the resulting sum. To see why this is necessary, consider the case of the addition of −1 (11111110) to +2 (00000010). The binary addition alone gives 00000000, which is not the correct answer! Only when the carry is added back in does the correct result (00000001) appear.

This numeric representation system was common in older computers; the PDP-1 and UNIVAC 1100/2200 series, among many others, used one's-complement arithmetic.

(A remark on terminology: the system is referred to as "one's complement" because the negation of x is formed by subtracting x from a long string of ones. Two's complement arithmetic, on the other hand, forms the negation of x by subtracting x from a single large power of two.)

Two's complement

Two's complement is the most popular method of representing signed integers in computer science.
It is also an operation of negation (converting positive to negative numbers or vice versa) in computers which represent negative numbers using two's complement. Its use is ubiquitous today because it doesn't require the addition and subtraction circuitry to examine the signs of the operands to determine whether to add or
subtract, making it both simpler to implement and capable of easily handling higher precision arithmetic. Also, 0 has only a single representation, obviating the subtleties associated with negative zero (which exists in one's complement).

8-bit two's complement integers (the leftmost bit is the sign bit):

  0111 1111 =  127
  0000 0010 =    2
  0000 0001 =    1
  0000 0000 =    0
  1111 1111 =   −1
  1111 1110 =   −2
  1000 0001 = −127
  1000 0000 = −128

Explanation

Two's complement using a 4-bit integer:

  Two's complement   Decimal
  0001                1
  0000                0
  1111               −1
  1110               −2
  1101               −3
  1100               −4

Two's complement represents signed integers by counting backwards and wrapping around.

The boundary between positive and negative numbers may theoretically be anywhere (as long as you check for it). For convenience, all numbers whose left-most bit is 1 are considered negative. The largest number representable this way with 4 bits is 0111 (7) and the smallest number is 1000 (−8).

To understand its usefulness for computers, consider the following. Adding 0011 (3) to 1111 (−1) results in the seemingly-incorrect 10010. However, ignoring the 5th bit (from the right), as we did when we counted backwards, gives us the actual answer, 0010 (2). Ignoring the 5th bit will work in all cases (although you have to do the aforementioned overflow checks when, e.g., 0100 is added to 0100). Thus, a circuit designed for addition can handle negative operands without also including a circuit capable of subtraction (and a circuit which switches between the two based on the sign). Moreover, by this method an addition circuit can even perform subtractions if you convert the necessary operand into the "counting-backwards" form. The procedure for doing so is called taking the two's
complement (which, admittedly, requires either an extra cycle or its own adder circuit). Lastly, a very important reason for utilizing two's complement representation is that it would be considerably more complex to create a subtraction circuit which would take 0001 − 0010 and give 1001 (i.e. −001) than it is to make one that returns 1111. (Doing the former means you have to check the sign, then check if there will be a sign reversal, then possibly rearrange the numbers, and finally subtract. Doing the latter means you simply subtract, pretending there's an extra left-most bit hiding somewhere.)

In an n-bit binary number, the most significant bit is usually the 2^(n−1)'s place. But in the two's complement representation, its place value is negated; it becomes the −2^(n−1)'s place and is called the sign bit.

If the sign bit is 0, the value is positive; if it is 1, the value is negative. To negate a two's complement number, invert all the bits then add 1 to the result.

If all bits are 1, the value is −1. If the sign bit is 1 but the rest of the bits are 0, the value is the most negative number, −2^(n−1) for an n-bit number. The absolute value of the most negative number cannot be represented with the same number of bits, because it is greater than the most positive two's complement number by exactly 1.

A two's complement 8-bit binary numeral can represent every integer in the range −128 to +127. If the sign bit is 0, then the largest value that can be stored in the remaining seven bits is 2^7 − 1, or 127.

Using two's complement to represent negative numbers allows only one representation of zero, and allows effective addition and subtraction while still having the most significant bit as the sign bit.

Calculating two's complement

In finding the two's complement of a binary number, the bits are inverted, or "flipped", by using the bitwise NOT operation; the value of 1 is then added to the resulting value.
Bit overflow is ignored, which is the normal case with the value zero.

For example, beginning with the signed 8-bit binary representation of the decimal value 5:

  0000 0101 (5)

The first bit is 0, so the value represented is indeed a positive 5. To convert to −5 in two's complement notation, the bits are inverted; 0 becomes 1, and 1 becomes 0:

  1111 1010

At this point, the numeral is the one's complement of the decimal value 5. To obtain the two's complement, 1 is added to the result, giving:
  1111 1011 (−5)

The result is a signed binary numeral representing the decimal value −5 in two's complement form. The most significant bit is 1, so the value is negative.

The two's complement of a negative number is the corresponding positive value. For example, inverting the bits of −5 (above) gives:

  0000 0100

And adding one gives the final value:

  0000 0101 (5)

The decimal value of a two's complement binary number is calculated by taking the value of the most significant bit, where the value is negative when the bit is one, and adding to it the values for each power of two where there is a one. Example:

  1111 1011 (−5) = −2^7 + 2^6 + 2^5 + 2^4 + 2^3 + 0 + 2^1 + 2^0
                 = −128 + 64 + 32 + 16 + 8 + 0 + 2 + 1 = −5

Note that the two's complement of zero is zero: inverting gives all ones, and adding one changes the ones back to zeros (the overflow is ignored). Also the two's complement of the most negative number representable (e.g. a one as the sign bit and all other bits zero) is itself. This happens because the most negative number's "positive counterpart" is occupied by "0", which gets classed as a positive number in this argument. Hence, there appears to be an extra negative number.

A more formal definition of the two's complement negative number (denoted by N* in this example) is derived from the equation N* = 2^n − N, where N is the corresponding positive number and n is the number of bits in the representation.

For example, to find the 4-bit representation of −5:

  N (base 10) = 5, therefore N (base 2) = 0101
  n = 4

Hence:

  N* = 2^n − N = [2^4]base 2 − 0101 = 10000 − 0101 = 1011

N.B. You can also think of the equation as being entirely in base 10, converting to base 2 at the end, e.g.:

  N* = 2^n − N = 2^4 − 5 = [11]base 10 = [1011]base 2
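The formal definition N* = 2^n − N and the invert-and-add-1 procedure give the same result, which can be checked with a small Python sketch (illustrative only, not part of the original notes):

```python
def twos_complement_formal(n_val, bits):
    """N* = 2**bits - N, the formal definition from the text."""
    return (1 << bits) - n_val

def twos_complement_bitwise(n_val, bits):
    """Invert the bits, then add 1 (modulo 2**bits)."""
    mask = (1 << bits) - 1
    return ((~n_val) + 1) & mask

# The worked example above: the 4-bit two's complement of 5 is 1011.
assert twos_complement_formal(5, 4) == 0b1011
assert twos_complement_bitwise(5, 4) == 0b1011

# The two procedures agree for every 8-bit value; masking the formal
# result keeps it in 8 bits (2**8 - 0 = 256 wraps back to 0).
for n in range(256):
    assert twos_complement_formal(n, 8) & 0xFF == twos_complement_bitwise(n, 8)
```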
Obviously, "N* ... = 11" isn't strictly true, but as long as you interpret the equals sign as "is represented by", it is perfectly acceptable to think of two's complements in this fashion.

Nevertheless, a shortcut exists when converting a binary number into two's complement form:

  0011 1100

Working from right to left, copy all the zeros until the first 1 is reached. Copy down that 1, and then flip the remaining bits. This allows you to convert to two's complement without first converting to one's complement and adding 1 to the result. The two's-complemented form of the number above is:

  1100 0100

Sign extension

  Decimal   4-bit two's complement   8-bit two's complement
  5         0101                     0000 0101
  −3        1101                     1111 1101

Sign-bit repetition in 4- and 8-bit integers

When turning a two's complement number with a certain number of bits into one with more bits (e.g., when copying from a one-byte variable to a two-byte variable), the sign bit must be repeated in all the extra bits.

Some processors can do this in a single instruction. On other processors a conditional must be used, followed by code to set the relevant bits or bytes.

Similarly, when a two's complement number is shifted to the right, the sign bit must be maintained. However, when shifted to the left, a 0 is shifted in. These rules preserve the common semantics that left shifts multiply the number by two and right shifts divide the number by two.

Both shifting and doubling the precision are important for some multiplication algorithms. Note that unlike addition and subtraction, precision extension and right shifting are done differently for signed vs. unsigned numbers.

The weird number

With only one exception, when we start with any number in two's complement representation, if we flip all the bits and add 1, we get the two's complement representation of the negative of that number. Negative 12 becomes positive 12, positive 5 becomes negative 5, zero becomes zero, etc.
  −128          1000 0000
  invert bits   0111 1111
  add one       1000 0000

The two's complement of −128 results in the same 8-bit binary number.

The most negative number in two's complement is sometimes called "the weird number" because it is the only exception.

The two's complement of the minimum number in the range will not have the desired effect of negating the number. For example, the two's complement of −128 results in the same binary number. This is because a positive value of 128 cannot be represented with an 8-bit signed binary numeral. Note that this is detected as an overflow condition, since there was a carry into but not out of the sign bit.

Although the number is weird, it is a valid number. All arithmetic operations work with it both as an operand and (unless there was an overflow) a result.

Why it works

The 2^n possible values of n bits actually form a ring of equivalence classes, namely the integers modulo 2^n, Z/(2^n)Z. Each class represents a set {j + k·2^n | k is an integer} for some integer j, 0 ≤ j ≤ 2^n − 1. There are 2^n such sets, and addition and multiplication are well-defined on them.

If the classes are taken to represent the numbers 0 to 2^n − 1, and overflow ignored, then these are the unsigned integers. But each of these numbers is equivalent to itself minus 2^n. So the classes could be understood to represent −2^(n−1) to 2^(n−1) − 1, by subtracting 2^n from half of them (specifically [2^(n−1), 2^n − 1]).

For example, with eight bits, the unsigned bytes are 0 to 255. Subtracting 256 from the top half (128 to 255) yields the signed bytes −128 to 127.

The relationship to two's complement is realised by noting that 256 = 255 + 1, and (255 − x) is the one's complement of x.

  Decimal   Two's complement
  127       0111 1111
  64        0100 0000
  1         0000 0001
  0         0000 0000
  −1        1111 1111
  −64       1100 0000
  −127      1000 0001
  −128      1000 0000

Some special numbers to note

Example

−95 modulo 256 is equivalent to 161 since

  −95 + 256 = −95 + 255 + 1 = 255 − 95 + 1 = 160 + 1 = 161

    1111 1111    255
  − 0101 1111  −  95
  ===========  =====
    1010 0000    160   (one's complement)
  +         1  +   1
  ===========  =====
    1010 0001    161   (two's complement)

Arithmetic operations

Addition

Adding two's complement numbers requires no special processing if the operands have opposite signs: the sign of the result is determined automatically. For example, adding 15 and −5:

    11111 111   (carry)
    0000 1111   (15)
  + 1111 1011   (−5)
  ==================
    0000 1010   (10)

This process depends upon restricting to 8 bits of precision; a carry to the (nonexistent) 9th most significant bit is ignored, resulting in the arithmetically correct result of 10.

The last two bits of the carry row (reading right-to-left) contain vital information: whether the calculation resulted in an arithmetic overflow, a number too large for the binary system to represent (in this case greater than 8 bits). An overflow condition exists when a carry (an extra 1) is generated into but not out of the far left sign bit, or out of but not into the sign bit. As mentioned above, the sign bit is the leftmost bit of the result.

In other terms, if the last two carry bits (the ones on the far left of the top row in these examples) are both 1s or both 0s, the result is valid; if the last two carry bits are "1 0" or "0 1", a sign overflow has occurred. Conveniently, an XOR operation on these two bits can
quickly determine if an overflow condition exists. As an example, consider the 4-bit addition of 7 and 3:

    0111   (carry)
    0111   (7)
  + 0011   (3)
  =============
    1010   (−6)  invalid!

In this case, the far-left two (MSB) carry bits are "01", which means there was a two's complement addition overflow. That is, ten is outside the permitted range of −8 to 7.

Subtraction

Computers usually use the method of complements to implement subtraction. But although using complements for subtraction is related to using complements for representing signed numbers, they are independent; direct subtraction works with two's complement numbers as well. Like addition, the advantage of using two's complement is the elimination of examining the signs of the operands to determine if addition or subtraction is needed. For example, subtracting −5 from 15 is really adding 5 to 15, but this is hidden by the two's complement representation:

    11110 000   (borrow)
    0000 1111   (15)
  − 1111 1011   (−5)
  ===========
    0001 0100   (20)

Overflow is detected the same way as for addition, by examining the two leftmost (most significant) bits of the borrows; overflow occurred if they are different.

Another example is a subtraction operation where the result is negative: 15 − 35 = −20:

    11100 000   (borrow)
    0000 1111   (15)
  − 0010 0011   (35)
  ===========
    1110 1100   (−20)

Multiplication

The product of two n-bit numbers can potentially have 2n bits. If the precision of the two two's complement operands is doubled before the multiplication, direct multiplication (discarding any excess bits beyond that precision) will provide the correct result. For example, take 5 × −6 = −30. First, the precision is extended from 4 bits to 8. Then the numbers are multiplied, discarding the bits beyond 8 (shown by x):

    00000101   ( 5)
  × 11111010   (−6)
  =========
           0
         101
           0
        101
       101
      101
    x01
   xx1
  ==========
  xx11100010   (−30)

This is very inefficient; by doubling the precision ahead of time, all additions must be double-precision and at least twice as many partial products are needed than for the more efficient algorithms actually implemented in computers. Some multiplication algorithms are designed for two's complement, notably Booth's algorithm. Methods for multiplying sign-magnitude numbers don't work with two's complement numbers without adaptation. There isn't usually a problem when the multiplicand (the one being repeatedly added to form the product) is negative; the issue is setting the initial bits of the product correctly when the multiplier is negative. Two methods for adapting algorithms to handle two's complement numbers are common:

  • First check to see if the multiplier is negative. If so, negate (i.e., take the two's complement of) both operands before multiplying. The multiplier will then be positive so the algorithm will work. And since both operands are negated, the result will still have the correct sign.
  • Subtract the partial product resulting from the sign bit instead of adding it like the other partial products.

As an example of the second method, take the common add-and-shift algorithm for multiplication. Instead of shifting partial products to the left as is done with pencil and paper, the accumulated product is shifted right, into a second register that will eventually hold the least significant half of the product. Since the least significant bits are not changed once they are calculated, the additions can be single precision, accumulating in the register that will eventually hold the most significant half of the product.
In the following example, again multiplying 5 by −6, the two registers are separated by "|":

   0101   (5)
  ×1010   (−6)
  ====|====
  0000|0000   (first partial product (rightmost bit is 0))
  0000|0000   (shift right)
  0101|0000   (add second partial product (next bit is 1))
  0010|1000   (shift right)
  0010|1000   (add third partial product: 0 so no change)
  0001|0100   (shift right)
  1100|0100   (subtract last partial product since it's from the sign bit)
  1110|0010   (shift right, preserving sign bit, giving the final answer, −30)
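The doubled-precision approach described above — sign-extend both operands (using the sign extension rule from the earlier section), multiply, and discard the excess bits — can be sketched in Python. This is an illustrative example, not part of the original notes:

```python
def to_signed(x, bits):
    """Interpret an unsigned bit pattern as a two's complement value."""
    return x - (1 << bits) if x & (1 << (bits - 1)) else x

def sign_extend(x, from_bits, to_bits):
    """Repeat the sign bit into the extra high-order bits."""
    if x & (1 << (from_bits - 1)):
        x |= ((1 << (to_bits - from_bits)) - 1) << from_bits
    return x

def multiply(a, b, bits=4):
    """Multiply two `bits`-wide two's complement values at doubled precision."""
    wide = 2 * bits
    a_ext = sign_extend(a, bits, wide)
    b_ext = sign_extend(b, bits, wide)
    return (a_ext * b_ext) & ((1 << wide) - 1)   # discard bits beyond 2n

# 5 x -6: 0101 x 1010 should give 1110 0010, which is -30 in 8 bits.
product = multiply(0b0101, 0b1010)
assert product == 0b11100010
assert to_signed(product, 8) == -30
```

Masking with `(1 << wide) - 1` plays the role of the x-ed out columns in the worked example: bits beyond the 2n-bit result are simply thrown away.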
Memory hierarchy

The hierarchical arrangement of storage in current computer architectures is called the memory hierarchy. It is designed to take advantage of memory locality in computer programs. Each level of the hierarchy is of higher speed and lower latency, and is of smaller size, than lower levels.

Most modern CPUs are so fast that for most program workloads the locality of reference of memory accesses, and the efficiency of the caching and memory transfer between different levels of the hierarchy, is the practical limitation on processing speed. As a result, the CPU spends much of its time idling, waiting for memory I/O to complete.

The memory hierarchy in most computers is as follows:

  • Processor registers – fastest possible access (usually 1 CPU cycle), only hundreds of bytes in size
  • Level 1 (L1) cache – often accessed in just a few cycles, usually tens of kilobytes
  • Level 2 (L2) cache – higher latency than L1 by 2× to 10×, often 512 KiB or more
  • Level 3 (L3) cache – (optional) higher latency than L2, often several MiB
  • Main memory (DRAM) – may take hundreds of cycles, but can be multiple gigabytes. Access times may not be uniform, in the case of a NUMA machine.
  • Disk storage – hundreds of thousands of cycles latency, but very large
  • Tertiary storage – tape, optical disk (WORM)

Virtual memory
The memory pages of the virtual address space seen by the process may reside non-contiguously in primary, or even secondary, storage.

Virtual memory or virtual memory addressing is a memory management technique, used by computer operating systems and more common in multitasking OSes, wherein non-contiguous memory is presented to software (a process) as contiguous memory. This contiguous memory is referred to as the virtual address space.

Virtual memory addressing is typically used in paged memory systems. This in turn is often combined with memory swapping (also known as anonymous memory paging), whereby memory pages stored in primary storage are written to secondary storage (often to a swap file or swap partition), thus freeing faster primary storage for other processes to use.

In technical terms, virtual memory allows software to run in a memory address space whose size and addressing are not necessarily tied to the computer's physical memory. To properly implement virtual memory, the CPU (or a device attached to it) must provide a way for the operating system to map virtual memory to physical memory, and to detect when an address is required that does not currently relate to main memory, so that the needed data can be swapped in. While it would certainly be possible to provide virtual memory without the CPU's assistance, it would essentially require emulating a CPU that did provide the needed features.

Background

Most computers possess four kinds of memory: registers in the CPU; CPU caches (generally some kind of static RAM) both inside and adjacent to the CPU; main memory (generally dynamic RAM), which the CPU can read and write to directly and reasonably quickly; and disk storage, which is much slower but much larger. CPU register use is generally handled by the compiler (and, if preemptive multitasking is in use, swapped by the operating system on context switches) and this isn't a huge burden, as registers are small in number and data doesn't generally stay in them very long.
The decision of when to use cache and when to use main memory is generally dealt with by hardware, so both are generally regarded together by the programmer as simply physical memory.

Many applications require access to more information (code as well as data) than can be stored in physical memory. This is especially true when the operating system allows multiple processes/applications to run seemingly in parallel. The obvious response to the problem of the maximum size of the physical memory being less than that required for all running programs is for the application to keep some of its information on the disk, and move it back and forth to physical memory as needed, but there are a number of ways to do this.

One option is for the application software itself to be responsible both for deciding which information is to be kept where, and also for moving it back and forth. The programmer would do this by determining which sections of the program (and also its data) were
mutually exclusive, and then arranging for loading and unloading the appropriate sections from physical memory as needed. The disadvantage of this approach is that each application's programmer must spend time and effort on designing, implementing, and debugging this mechanism, instead of focusing on his or her application; this hampers programmers' efficiency. Also, if any programmer could truly choose which of their items of data to store in the physical memory at any one time, they could easily conflict with the decisions made by another programmer, who also wanted to use all the available physical memory at that point.

Another option is to store some form of handles to data rather than direct pointers, and let the OS deal with swapping the data associated with those handles between the swap area and physical memory as needed. This works, but has a couple of problems: it complicates application code, it requires applications to play nice (they generally need the power to lock the data into physical memory to actually work on it), and it stops the language's standard library doing its own suballocations inside large blocks from the OS to improve performance. The best known example of this kind of arrangement is probably the 16-bit versions of Windows.

The modern solution is to use virtual memory, in which a combination of special hardware and operating system software makes use of both kinds of memory to make it look as if the computer has a much larger main memory than it actually does, and to lay that space out differently at will. It does this in a way that is invisible to the rest of the software running on the computer. It usually provides the ability to simulate a main memory of almost any size. (In practice there's a limit imposed on this by the size of the addresses. For a 32-bit system, the total size of the virtual memory can be 2^32 bytes, or approximately 4 gigabytes.
For the newer 64-bit chips and operating systems that use 64- or 48-bit addresses, this can be much higher. Many operating systems do not allow the entire address space to be used by applications, to simplify kernel access to application memory, but this is not a hard design requirement.)

Virtual memory makes the job of the application programmer much simpler. No matter how much memory the application needs, it can act as if it has access to a main memory of that size, and can place its data wherever in that virtual space that it likes. The programmer can also completely ignore the need to manage the moving of data back and forth between the different kinds of memory. That said, if the programmer cares about performance when working with large volumes of data, he needs to minimise the number of nearby blocks being accessed in order to avoid unnecessary swapping.

Paging

Virtual memory is usually (but not necessarily) implemented using paging. In paging, the low-order bits of the binary representation of the virtual address are preserved, and used directly as the low-order bits of the actual physical address; the high-order bits are treated as a key to one or more address translation tables, which provide the high-order bits of the actual physical address.
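To make the low-bits/high-bits split concrete, here is a small Python sketch (illustrative only, not from the original notes; the page table contents and 4 KiB page size are assumed for the demonstration): the low-order bits pass through as the page offset, the high-order bits index a page table, and a missing entry plays the role of a page fault.

```python
PAGE_SIZE = 4096        # 4 KiB pages, a very common choice
OFFSET_BITS = 12        # log2(4096): the low 12 bits are the offset

# A toy page table: virtual page number -> physical frame number.
page_table = {0: 5, 1: 2, 7: 9}

def translate(vaddr):
    """Translate a virtual address; raise a 'page fault' if unmapped."""
    vpn = vaddr >> OFFSET_BITS          # high-order bits: page number
    offset = vaddr & (PAGE_SIZE - 1)    # low-order bits pass through unchanged
    if vpn not in page_table:
        raise LookupError(f"page fault at virtual page {vpn}")
    return (page_table[vpn] << OFFSET_BITS) | offset

# Virtual page 1 maps to physical frame 2; the offset 0x034 is preserved.
assert translate(0x1034) == 0x2034

# An unmapped page triggers the fault path (where the OS would step in).
try:
    translate(0x5000)
except LookupError:
    pass
```

In a real system the table lookup is done by the MMU in hardware, and the fault handler brings the page in from the swap area rather than simply failing.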
For this reason a range of consecutive addresses in the virtual address space whose size is a power of two will be translated into a corresponding range of consecutive physical addresses. The memory referenced by such a range is called a page. The page size is typically in the range of 512 to 8192 bytes (with 4K currently being very common), though page sizes of 4 megabytes or larger may be used for special purposes. (Using the same or a related mechanism, contiguous regions of virtual memory larger than a page are often mappable to contiguous physical memory for purposes other than virtualization, such as setting access and caching control bits.)

The operating system stores the address translation tables, the mappings from virtual to physical page numbers, in a data structure known as a page table.

If a page is marked as unavailable (perhaps because it is not present in physical memory, but instead is in the swap area), then when the CPU tries to reference a memory location in that page, the MMU responds by raising an exception (commonly called a page fault) with the CPU, which then jumps to a routine in the operating system. If the page is in the swap area, this routine invokes an operation called a page swap, to bring in the required page.

The page swap operation involves a series of steps. First it selects a page in memory, for example, a page that has not been recently accessed and (preferably) has not been modified since it was last read from disk or the swap area. (See page replacement algorithms for details.) If the page has been modified, the process writes the modified page to the swap area. The next step in the process is to read in the information in the needed page (the page corresponding to the virtual address the original program was trying to reference when the exception occurred) from the swap file. When the page has been read in, the tables for translating virtual addresses to physical addresses are updated to reflect the revised contents of the physical memory.
Once the page swap completes, the routine exits and the program is restarted, continuing as if nothing had happened, returning to the point in the program that caused the exception.

It is also possible that a virtual page was marked as unavailable because the page was never previously allocated. In such cases, a page of physical memory is allocated and filled with zeros, the page table is modified to describe it, and the program is restarted as above.

Details

The translation from virtual to physical addresses is implemented by an MMU (Memory Management Unit). This may be either a module of the CPU or an auxiliary, closely coupled chip.

The operating system is responsible for deciding which parts of the program's simulated main memory are kept in physical memory. The operating system also maintains the translation tables which provide the mappings between virtual and physical addresses, for use by the MMU. Finally, when a virtual memory exception occurs, the operating system
is responsible for allocating an area of physical memory to hold the missing information (possibly pushing something else out to disk in the process), bringing the relevant information in from the disk, updating the translation tables, and finally resuming execution of the software that incurred the virtual memory exception.

In most computers, these translation tables are stored in physical memory. Therefore, a virtual memory reference might actually involve two or more physical memory references: one or more to retrieve the needed address translation from the page tables, and a final one to actually do the memory reference.

To minimize the performance penalty of address translation, most modern CPUs include an on-chip MMU and maintain a table of recently used virtual-to-physical translations, called a Translation Lookaside Buffer, or TLB. Addresses with entries in the TLB require no additional memory references (and therefore time) to translate. However, the TLB can only maintain a fixed number of mappings between virtual and physical addresses; when the needed translation is not resident in the TLB, action must be taken to load it in. On some processors this is performed entirely in hardware: the MMU has to do additional memory references to load the required translations from the translation tables, but no other action is needed. On other processors, assistance from the operating system is needed: an exception is raised, the operating system replaces one of the entries in the TLB with an entry from the translation table, and the instruction which made the original memory reference is restarted.

The hardware that supports virtual memory almost always supports memory protection mechanisms as well. The MMU may have the ability to vary its operation according to the type of memory reference (read, write or execute), as well as the privilege mode of the CPU at the time the memory reference was made.
This allows the operating system to protect its own code and data (such as the translation tables used for virtual memory) from corruption by an erroneous application program, to protect application programs from each other, and (to some extent) to protect programs from themselves (e.g. by preventing writes to areas of memory which contain code).

History

Before the development of the virtual memory technique, programmers in the 1940s and 1950s had to manage two-level storage (main memory or RAM, and secondary memory in the form of hard disks or, earlier, magnetic drums) directly.

Virtual memory was developed in approximately 1959-1962 at the University of Manchester for the Atlas Computer, completed in 1962. However, Fritz-Rudolf Güntsch, one of Germany's pioneering computer scientists and later the developer of the Telefunken TR 440 mainframe, claims to have invented the concept in his doctoral dissertation Logischer Entwurf eines digitalen Rechengerätes mit mehreren asynchron laufenden Trommeln und automatischem Schnellspeicherbetrieb (Logic Concept of a
Digital Computing Device with Multiple Asynchronous Drum Storage and Automatic Fast Memory Mode) in 1957.

In 1961, Burroughs released the B5000, the first commercial computer with virtual memory.

Like many technologies in the history of computing, virtual memory was not accepted without challenge. Before it could be regarded as a stable technique, many models, experiments, and theories had to be developed to overcome the numerous problems with virtual memory. Specialized hardware had to be developed that would take a "virtual" address and translate it into an actual physical address in memory (secondary or primary). Some worried that this process would be expensive, hard to build, and take too much processor power to do the address translation.

By 1969 the debates over virtual memory for commercial computers were over. An IBM research team, led by David Sayre, showed that the virtual memory overlay system worked consistently better than the best manually controlled systems.

Possibly the first minicomputer to introduce virtual memory was the Norwegian NORD-1. During the 1970s, other minicomputer models, such as VAX machines running VMS, implemented virtual memory.

Virtual memory was introduced to the x86 architecture with the protected mode of the Intel 80286 processor. At first it was done with segment swapping, which becomes inefficient as segments get larger. The Intel 80386 added support for paging, which sits underneath segmentation; the page fault exception could be chained with other exceptions without causing a double fault.
Compilers

A diagram of the operation of a typical multi-language, multi-target compiler.

A compiler is a computer program (or set of programs) that translates text written in a computer language (the source language) into another computer language (the target language). The original sequence is usually called the source code and the output called object code. Commonly the output has a form suitable for processing by other programs (e.g., a linker), but it may be a human-readable text file.

The most common reason for wanting to translate source code is to create an executable program. The name "compiler" is primarily used for programs that translate source code from a high-level language to a lower-level language (e.g., assembly language or machine language). A program that translates from a low-level language to a higher-level one is a decompiler. A program that translates between high-level languages is usually called a language translator, source-to-source translator, or language converter. A language rewriter is usually a program that translates the form of expressions without a change of language.

A compiler is likely to perform many or all of the following operations: lexing, preprocessing, parsing, semantic analysis, code optimization, and code generation.
Linker

Figure of the linking process, where object files and static libraries are assembled into a new library or executable.

In computer science, a linker or link editor is a program that takes one or more objects generated by compilers and assembles them into a single executable program.

In IBM mainframe environments such as OS/360 this program is known as a linkage editor. (On Unix variants the term loader is often used as a synonym for linker. Because this usage blurs the distinction between the compile-time process and the run-time process, these notes will use linking for the former and loading for the latter.)

The objects are program modules containing machine code and information for the linker. This information comes mainly in the form of symbol definitions, which come in two varieties:

 • Defined or exported symbols are functions or variables that are present in the module represented by the object, and which should be available for use by other modules.
 • Undefined or imported symbols are functions or variables that are called or referenced by this object, but not internally defined.

In short, the linker's job is to resolve references to undefined symbols by finding out which other object defines the symbol in question, and replacing placeholders with the symbol's address.

Linkers can take objects from a collection called a library. Some linkers do not include the whole library in the output; they include only those of its symbols that are referenced from other object files or libraries. Libraries for diverse purposes exist, and one or more system libraries are usually linked in by default.
The linker also takes care of arranging the objects in a program's address space. This may involve relocating code that assumes a specific base address to another base. Since a compiler seldom knows where an object will reside, it often assumes a fixed base location (for example, zero). Relocating machine code may involve re-targeting of absolute jumps, loads and stores.

The executable output by the linker may need another relocation pass when it is finally loaded into memory (just before execution). On hardware offering virtual memory, however, this pass is usually omitted: every program is put into its own address space, so there is no conflict even if all programs load at the same base address.

Assembler

Typically a modern assembler creates object code by translating assembly instruction mnemonics into opcodes, and by resolving symbolic names for memory locations and other entities. The use of symbolic references is a key feature of assemblers, saving tedious calculations and manual address updates after program modifications. Most assemblers also include macro facilities for performing textual substitution, e.g. to generate common short sequences of instructions to run inline, instead of in a subroutine.

Assemblers are generally simpler to write than compilers for high-level languages, and have been available since the 1950s. (The first assemblers, in the early days of computers, were a breakthrough for a generation of tired programmers.)
Modern assemblers, especially for RISC-based architectures such as MIPS, Sun SPARC and HP PA-RISC, optimize instruction scheduling to exploit the CPU pipeline efficiently.

More sophisticated high-level assemblers provide language abstractions such as:

 • Advanced control structures
 • High-level procedure/function declarations and invocations
 • High-level abstract data types, including structures/records, unions, classes, and sets
 • Sophisticated macro processing

Note that, in normal professional usage, the term assembler is often used ambiguously: it frequently refers to an assembly language itself, rather than to the assembler utility. Thus: "CP/CMS was written in S/360 assembler" as opposed to "ASM-H was a widely-used S/370 assembler."
The C Compilation Model

We will briefly highlight the key features of the C compilation model here.
The Preprocessor

We will study this part of the compilation process in greater detail later (Chapter 13). However, we need some basic information for some C programs.

The preprocessor accepts source code as input and is responsible for

 • removing comments
 • interpreting special preprocessor directives, denoted by #.

For example:

 • #include -- includes the contents of a named file. Such files are usually called header files, e.g.
   o #include <math.h> -- standard library maths file.
   o #include <stdio.h> -- standard library I/O file.
 • #define -- defines a symbolic name or constant (macro substitution), e.g.
   o #define MAX_ARRAY_SIZE 100

C Compiler

The C compiler translates source to assembly code. The source code is received from the preprocessor.

Assembler

The assembler creates object code. On a UNIX system you may see files with a .o suffix (.OBJ on MS-DOS) to indicate object code files.

Link Editor

If a source file references library functions or functions defined in other source files, the link editor combines these functions (with main()) to create an executable file. External variable references are resolved here as well. More on this later (Chapter 34).

Digitally signed by Vinayak Ashok Bharadi (GPM, Engineering IT), 2006.11.20.