History of Computer

Computers Since Its Birth

Group Members:
Gunasinghe U.L.D.N 100162X
Sashika W.A.D 100487X
Siriwardena M.P 100512X
Udara Y.B.N 100544V
Wijayarathna D.G.C.D 100596F

Contents

1. What is a Computer?
2. Abacus
3. Pascaline
4. Punched Card
5. Difference Engine
6. Analytical Engine
7. Ada Lovelace
8. Transistors
9. Hewlett-Packard Company
10. ENIAC
11. EDVAC
12. Alan Turing
13. UNIVAC
14. The First Transistor Computer
15. Unix
16. Floppy Disk
17. Intel 4004
18. Microsoft Windows
19. Apple Computers
20. Pentium Processors
21. Multi-Core Processors
22. Supercomputers
23. Artificial Intelligence
24. Optical Computers
25. DNA Computing
26. Pen Computers
27. Quantum Computers

What is a Computer?

"A computer is an electronic device which is capable of receiving information (data) and performing a sequence of logical operations in accordance with a predetermined but variable set of procedural instructions (program) to produce a result in the form of information or signals." This is the definition of a computer according to the Oxford English Dictionary.

The word "computer" comes from "compute", which basically means "calculate". In the early days the major use of computers was calculation, but at present that has completely changed: rather than just calculating, we use computers for far more advanced applications, e.g. video editing, 3D animation, military purposes, biomedical applications, and scientific research. Due to the rapid change of technology, the computer has become very complex.

Characteristics of a computer:
1. A machine.
2. Can be programmed.
3. Processes data according to instructions.
4. Stores data, etc.

Abacus (3000 B.C.)

The abacus is considered the first computing device invented in the world; it was first used by the Babylonians around 3000 B.C. About 1300, the more familiar wire-and-bead abacus replaced the Chinese calculating rods. The Lee Kai-chen abacus was developed and manufactured in Taiwan; few of these remarkable instruments remain, as production lasted only a few years. On the beam of the suan pan in the lower part of the frame there is an adjustable place-setting vernier, and a movable indicator runs along the bottom rail with adjustable unit-rod markers on the beam of the soroban above. Lee called his abacus "A Revolution of Chinese Calculators." It was made circa 1959, came with a 58-page instruction manual, and measures 13 inches long by 8 inches wide.

Pascaline (1642-1643)

In 1642-1643 Blaise Pascal created a gear-driven adding machine called the "Pascaline", the first mechanical adding machine. Blaise Pascal, the French scientist, was one of the most reputed mathematicians and physicists of his time, and is credited with inventing an early calculator, amazingly advanced for its time. A genius from a young age, Pascal composed a treatise on the communication of sounds at the age of twelve, and at sixteen a treatise on conic sections.

The idea of using machines to solve mathematical problems can be traced at least as far back as the early 17th century. Mathematicians who designed and implemented calculators capable of addition, subtraction, multiplication, and division included Wilhelm Schickard, Blaise Pascal, and Gottfried Leibniz. In 1642, at the age of eighteen, Pascal invented his numerical wheel calculator, the Pascaline, to help his father, a French tax collector, count taxes. The Pascaline had eight movable dials that could add sums up to eight figures long and used base ten. When the first dial (the ones column) moved ten notches, the second dial moved one notch to represent the tens column reading of 10; when the tens dial moved ten notches, the third dial (the hundreds column) moved one notch to represent one hundred, and so on.

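To make the dial-carrying behavior just described concrete, the short Python sketch below simulates a Pascaline-style register of eight decimal dials. It is an illustration only; the function name and example values are ours, not a description of Pascal's actual gearwork.

    def pascaline_add(dials, amount, position=0):
        # dials is a list of digits: index 0 is the ones column, index 1 the tens, ...
        # Adding to one dial propagates carries leftward, one notch per full revolution.
        while amount and position < len(dials):
            total = dials[position] + amount
            dials[position] = total % 10   # what the dial shows after turning
            amount = total // 10           # each full revolution becomes a carry
            position += 1                  # ...applied to the next dial up
        return dials

    dials = [0] * 8                # eight dials, as on Pascal's machine
    pascaline_add(dials, 7)        # ones dial now shows 7
    pascaline_add(dials, 5)        # 7 + 5 = 12: ones dial shows 2, tens dial advances to 1
    print(dials)                   # [2, 1, 0, 0, 0, 0, 0, 0], i.e. the number 12
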
Punched Card (1725)

Punched cards were first used around 1725 by Basile Bouchon and Jean-Baptiste Falcon as a more robust form of the perforated paper rolls then in use for controlling textile looms in France. This technique was greatly improved by Joseph Marie Jacquard in his Jacquard loom in 1801. Semen Korsakov was reputedly the first to use punched cards in informatics, for information storage and search; Korsakov announced his new method and machines in September 1832 and, rather than seeking patents, offered the machines for public use.

From the invention of computer programming languages up to the mid-1980s, many if not most computer programmers created, edited and stored their programs on punched cards. The practice was nearly universal with IBM computers of the era. A punched card is a flexible write-once medium that encodes, most commonly, 80 characters of data. Groups or "decks" of cards form programs and collections of data. Users could create cards using a desk-sized keypunch with a typewriter-like keyboard. A typing error generally necessitated re-punching an entire card; a single-character typo could be corrected by duplicating the card up to the error column, typing the correct character and then duplicating the rest of the card. In some companies programmers wrote information on special forms called coding sheets, taking care to distinguish the digit zero from the letter O, the digit one from the letter I, 8s from Bs, 2s from Zs, and so on. These forms were then converted to cards by keypunch operators and, in some cases, checked by verifiers.

Difference Engine (1822)

Charles Babbage (1791-1871), computer pioneer, designed the first automatic computing engines; he invented computers but failed to build them. The Difference Engine was a special-purpose calculator designed to tabulate logarithms and trigonometric functions by evaluating finite differences to create approximating polynomials. Construction of the machine could not be completed: Babbage had conflicts with his chief engineer, Joseph Clement, and ultimately the British government withdrew its funding for the project. The first complete Babbage engine was built in London in 2002, 153 years after it was designed. Difference Engine No. 2, built faithfully to the original drawings, consists of 8,000 parts, weighs five tons, and measures 11 feet long. It was a very large step in computer history.

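The method of finite differences that the Difference Engine mechanized can be illustrated in a few lines of Python. This is only a rough sketch of the idea, not Babbage's design: once a polynomial's initial value and initial differences are set up, every further table entry is produced by addition alone, with no multiplication.

    def tabulate(initial_differences, count):
        # initial_differences = [f(0), first difference, second difference, ...]
        diffs = list(initial_differences)
        table = []
        for _ in range(count):
            table.append(diffs[0])
            # Each column is advanced by adding the next-higher-order difference to it.
            for i in range(len(diffs) - 1):
                diffs[i] += diffs[i + 1]
        return table

    # Example: f(x) = x**2 + x + 41 has f(0) = 41, first difference 2, second difference 2.
    print(tabulate([41, 2, 2], 6))             # [41, 43, 47, 53, 61, 71]
    print([x * x + x + 41 for x in range(6)])  # the same values by direct evaluation
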
Analytical Engine (1834-1835)

During the Difference Engine project Babbage realized that a much more general design, the Analytical Engine, was possible. The input (programs and data) was to be provided to the machine via punched cards, a method being used at the time to direct mechanical looms such as the Jacquard loom. For output, the machine would have a printer, a curve plotter and a bell; it would also be able to punch numbers onto cards to be read in later. It employed ordinary base-10 fixed-point arithmetic.

There was to be a store (that is, a memory) capable of holding 1,000 numbers of 50 decimal digits each (about 20.7 kB). An arithmetical unit (the "mill") would be able to perform all four arithmetic operations, plus comparisons and optionally square roots. Initially it was conceived as a difference engine curved back upon itself, in a generally circular layout, with the long store exiting off to one side (later drawings depict a regularized grid layout). Like the central processing unit (CPU) in a modern computer, the mill would rely upon its own internal procedures, stored in the form of pegs inserted into rotating drums called "barrels", to carry out some of the more complex instructions the user's program might specify (see microcode for the modern equivalent).

The programming language to be employed by users was akin to modern-day assembly languages. Loops and conditional branching were possible, so the language as conceived would have been Turing-complete long before Alan Turing's concept. Three different types of punched cards were used: one for arithmetical operations, one for numerical constants, and one for load and store operations, transferring numbers from the store to the arithmetical unit or back. There were three separate readers for the three types of cards.

Ada Lovelace (1842-1843)

Ada Lovelace met and corresponded with Charles Babbage on many occasions, including socially and in relation to Babbage's Difference Engine and Analytical Engine. Babbage was impressed by Lovelace's intellect and writing skills; he called her "The Enchantress of Numbers". In 1843 he wrote of her: "Forget this world and all its troubles and if possible its multitudinous Charlatans — every thing in short but the Enchantress of Numbers." During a nine-month period in 1842-43, Lovelace translated Italian mathematician Luigi Menabrea's memoir on Babbage's newest proposed machine, the Analytical Engine.

With the article, she appended a set of notes. The notes are longer than the memoir itself and include, in complete detail (Note G), a method for calculating a sequence of Bernoulli numbers with the Engine, which would have run correctly had the Analytical Engine been built. Based on this work, Lovelace is now widely credited with being the first computer programmer, and her method is recognized as the world's first computer program (a short modern sketch of such a Bernoulli-number calculation appears below, after the Transistors section).

Transistors (1934)

The first patent for the field-effect transistor principle was filed in Canada by Austrian-Hungarian physicist Julius Edgar Lilienfeld on October 22, 1925, but Lilienfeld published no research articles about his devices, and they were ignored by industry. In 1934 German physicist Dr. Oskar Heil patented another field-effect transistor. There is no direct evidence that these devices were built, but later work in the 1990s showed that one of Lilienfeld's designs worked as described and gave substantial gain. Legal papers from the Bell Labs patent show that William Shockley and a co-worker at Bell Labs, Gerald Pearson, had built operational versions from Lilienfeld's patents, yet they never referenced this work in any of their later research papers or historical articles.

The work emerged from their war-time efforts to produce extremely pure germanium "crystal" mixer diodes, used in radar units as a frequency mixer element in microwave radar receivers. A parallel project on germanium diodes at Purdue University succeeded in producing the good-quality germanium semiconducting crystals that were used at Bell Labs. Early tube-based technology did not switch fast enough for this role, leading the Bell team to use solid-state diodes instead. With this knowledge in hand they turned to the design of a triode, but found this was not at all easy. Bardeen eventually developed a new branch of quantum mechanics known as surface physics to account for the "odd" behavior they saw, and Bardeen and Brattain eventually succeeded in building a working device.

After the war, William Shockley decided to attempt the building of a triode-like semiconductor device. He secured funding and lab space, and went to work on the problem with Brattain and John Bardeen.

The key to the development of the transistor was a further understanding of the process of electron mobility in a semiconductor. It was realized that if there was some way to control the flow of electrons from the emitter to the collector of this newly discovered diode, one could build an amplifier. For instance, if you placed contacts on either side of a single type of crystal, the current would not flow through it; however, if a third contact could then "inject" electrons or holes into the material, the current would flow.

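Referring back to the Ada Lovelace section above: the following is a small modern Python sketch of computing Bernoulli numbers, using the standard recurrence "the sum over k from 0 to m of C(m+1, k) times B_k equals 0", with B_0 = 1. It is not Lovelace's actual step-by-step program for the Engine, only an illustration of the kind of quantity her Note G tabulated.

    from fractions import Fraction
    from math import comb

    def bernoulli(n):
        # Returns the Bernoulli numbers B_0 .. B_n as exact fractions.
        B = [Fraction(0)] * (n + 1)
        B[0] = Fraction(1)
        for m in range(1, n + 1):
            s = sum(comb(m + 1, k) * B[k] for k in range(m))
            B[m] = -s / (m + 1)
        return B

    print(bernoulli(8))
    # B_0 = 1, B_1 = -1/2, B_2 = 1/6, B_4 = -1/30, B_6 = 1/42, B_8 = -1/30
    # (the odd-index numbers beyond B_1 are all zero)
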
Hewlett-Packard Company (1939)

Bill Hewlett and Dave Packard graduated in electrical engineering from Stanford University in 1935. The company originated in a garage in nearby Palo Alto during a fellowship they had with a past professor, Frederick Terman, at Stanford during the Great Depression; Terman was considered a mentor to them in forming Hewlett-Packard. In 1939, Packard and Hewlett established Hewlett-Packard (HP) in Packard's garage with an initial capital investment of US$538. Hewlett and Packard tossed a coin to decide whether the company they founded would be called Hewlett-Packard or Packard-Hewlett; Packard won the coin toss but named their electronics manufacturing enterprise the "Hewlett-Packard Company". HP incorporated on August 18, 1947, and went public on November 6, 1957.

Of the many projects they worked on, their very first financially successful product was a precision audio oscillator, the Model HP200A. Their innovation was the use of a small incandescent light bulb (known as a "pilot light") as a temperature-dependent resistor in a critical portion of the circuit, the negative feedback loop which stabilized the amplitude of the output sinusoidal waveform. This allowed them to sell the Model 200A for $54.40 when competitors were selling less stable oscillators for over $200. The Model 200 series of generators continued until at least 1972 as the 200AB, still tube-based but improved in design through the years. One of the company's earliest customers was The Walt Disney Company, which bought eight Model 200B oscillators (at $71.50 each) for use in certifying the Fantasound surround sound systems installed in theaters for the movie Fantasia.

ENIAC (Electronic Numerical Integrator And Computer, 1943)

The ENIAC's design and construction was financed by the United States Army during World War II. The construction contract was signed on June 5, 1943, and work on the computer began in secret at the University of Pennsylvania's Moore School of Electrical Engineering the following month under the code name "Project PX". The completed machine was announced to the public on the evening of February 14, 1946 and formally dedicated the next day at the University of Pennsylvania, having cost almost $500,000 (nearly $6 million in 2010, adjusted for inflation). It was formally accepted by the U.S. Army Ordnance Corps in July 1946. ENIAC was shut down on November 9, 1946 for a refurbishment and a memory upgrade, and was transferred to Aberdeen Proving Ground, Maryland in 1947. There, on July 29, 1947, it was turned on and remained in continuous operation until 11:45 p.m. on October 2, 1955.

The ENIAC was a modular computer, composed of individual panels that performed different functions. Twenty of these modules were accumulators, which could not only add and subtract but also hold a ten-digit decimal number in memory. Numbers were passed between these units across a number of general-purpose buses, or trays, as they were called. In order to achieve its high speed, the panels had to send and receive numbers, compute, save the answer, and trigger the next operation, all without any moving parts. Key to its versatility was the ability to branch: it could trigger different operations depending on the sign of a computed result.

Besides its speed, the most remarkable thing about ENIAC was its size and complexity. ENIAC contained 17,468 vacuum tubes, 7,200 crystal diodes, 1,500 relays, 70,000 resistors, 10,000 capacitors and around 5 million hand-soldered joints. It weighed more than 30 short tons (27 t), was roughly 8 by 3 by 100 feet (2.4 m × 0.9 m × 30 m), took up 1,800 square feet (167 m²), and consumed 150 kW of power. Input was possible from an IBM card reader, and an IBM card punch was used for output. These cards could be used to produce printed output offline using an IBM accounting machine, such as the IBM 405.

The ENIAC could be programmed to perform complex sequences of operations, which could include loops, branches, and subroutines. The task of taking a problem and mapping it onto the machine was complex, and usually took weeks. After the program was figured out on paper, the process of getting it "into" the ENIAC by manipulating its switches and cables took additional days. This was followed by a period of verification and debugging, aided by the ability to "single step" the machine.

EDVAC (Electronic Discrete Variable Automatic Computer, 1945)

ENIAC inventors John Mauchly and J. Presper Eckert proposed the EDVAC's construction in August 1944, and design work for the EDVAC commenced before the ENIAC was fully operational. The design would implement a number of important architectural and logical improvements conceived during the ENIAC's construction and would incorporate a high-speed serial-access memory. Like the ENIAC, the EDVAC was built for the U.S. Army's Ballistics Research Laboratory at the Aberdeen Proving Ground by the University of Pennsylvania's Moore School of Electrical Engineering. Eckert and Mauchly and the other ENIAC designers were joined by John von Neumann in a consulting role; von Neumann summarized and elaborated upon logical design developments in his 1945 First Draft of a Report on the EDVAC.

A contract to build the new computer was signed in April 1946 with an initial budget of US$100,000. The contract named the device the Electronic Discrete Variable Automatic Calculator.

The final cost of EDVAC, however, was similar to the ENIAC's, at just under $500,000. The EDVAC was a binary serial computer with automatic addition, subtraction, multiplication, programmed division and automatic checking, with an ultrasonic serial memory capacity of 1,000 44-bit words (later set to 1,024 words, thus giving a memory, in modern terms, of 5.5 kilobytes).

Physically, the computer comprised the following components:
- a magnetic tape reader-recorder (Wilkes 1956:36 describes this as a wire recorder)
- a control unit with an oscilloscope
- a dispatcher unit to receive instructions from the control and memory and direct them to other units
- a computational unit to perform arithmetic operations on a pair of numbers at a time and send the result to memory after checking on a duplicate unit
- a timer
- a dual memory unit consisting of two sets of 64 mercury acoustic delay lines of eight words capacity on each line
- three temporary tanks each holding a single word

EDVAC's addition time was 864 microseconds and its multiplication time was 2,900 microseconds (2.9 milliseconds). The computer had almost 6,000 vacuum tubes and 12,000 diodes, and consumed 56 kW of power. It covered 490 ft² (45.5 m²) of floor space and weighed 17,300 lb (7,850 kg). The full complement of operating personnel was thirty people for each eight-hour shift.

Alan Turing (1912-1954)

From 1945 to 1947 Turing lived in Church Street, Hampton, while he worked on the design of the ACE (Automatic Computing Engine) at the National Physical Laboratory. He presented a paper on 19 February 1946 which was the first detailed design of a stored-program computer. Although the ACE was a feasible design, the secrecy surrounding the wartime work at Bletchley Park led to delays in starting the project and he became disillusioned. In late 1947 he returned to Cambridge for a sabbatical year; while he was at Cambridge, the Pilot ACE was built in his absence. It executed its first program on 10 May 1950.

In 1948, he was appointed Reader in the Mathematics Department at Manchester (now part of The University of Manchester). In 1949, he became Deputy Director of the computing laboratory at the University of Manchester, and worked on software for one of the earliest stored-program computers, the Manchester Mark 1. During this time he continued to do more abstract work, and in "Computing Machinery and Intelligence" (Mind, October 1950) Turing addressed the problem of artificial intelligence.

In that paper he proposed an experiment which became known as the Turing test, an attempt to define a standard for a machine to be called "intelligent". The idea was that a computer could be said to "think" if a human interrogator could not tell it apart, through conversation, from a human being. In the paper, Turing suggested that rather than building a program to simulate the adult mind, it would be better to produce a simpler one to simulate a child's mind and then to subject it to a course of education. A reversed form of the Turing test is widely used on the Internet: the CAPTCHA test is intended to determine whether the user is a human or a computer.

In 1948, Turing, working with his former undergraduate colleague D. G. Champernowne, began writing a chess program for a computer that did not yet exist. In 1952, lacking a computer powerful enough to execute the program, Turing played a game in which he simulated the computer, taking about half an hour per move. The game was recorded. The program lost to Turing's colleague Alick Glennie, although it is said that it won a game against Champernowne's wife.

His Turing test was a significant, characteristically provocative and lasting contribution to the debate regarding artificial intelligence, which continues after more than half a century. He also invented, in 1948, the LU decomposition method used today for solving matrix equations. Alan Turing is considered a father of computer science.

UNIVAC 1101 (1950)

The UNIVAC 1101, or ERA 1101, was a computer system designed by Engineering Research Associates (ERA) and built by the Remington Rand corporation in the 1950s. It was the first stored-program computer in the U.S. that was moved from its site of manufacture and successfully installed at a distant site. Remington Rand used the 1101's architecture as the basis for a series of machines into the 1960s.

This computer was 38 ft (12 m) long, 20 ft (6.1 m) wide, and used 2,700 vacuum tubes for its logic circuits. Its drum memory was 8.5 in (22 cm) in diameter, rotated at 3,500 rpm, had 200 read-write heads, and held 16,384 24-bit words (a memory size equivalent to 48 kB) with an access time between 32 microseconds and 17 milliseconds. Instructions were 24 bits long, with 6 bits for the opcode, 4 bits for the "skip" value (telling how many memory locations to skip to get to the next instruction in program sequence), and 14 bits for the memory address. Numbers were binary, with negative values in ones' complement. The addition time was 96 microseconds and the multiplication time was 352 microseconds.

The single 48-bit accumulator was fundamentally subtractive, addition being carried out by subtracting the ones' complement of the number to be added. This may appear rather strange, but the subtractive adder reduces the chance of producing negative zero in normal operations. The machine had 38 instructions.

The First Transistor Computer (1953)

The University of Manchester's experimental Transistor Computer was first operational in November 1953 and is widely believed to be the first transistor computer to come into operation anywhere in the world. There were two versions of the Transistor Computer: the prototype, operational in 1953, and the full-size version, commissioned in April 1955. The 1953 machine had 92 point-contact transistors and 550 diodes, manufactured by STC, and a 48-bit machine word. The 1955 machine had a total of 200 point-contact transistors and 1,300 point diodes, which resulted in a power consumption of 150 watts. There were considerable reliability problems with the early batches of transistors, and the average error-free run in 1955 was only 1.5 hours. The computer also used a small number of tubes in its clock generator, so it was not the first fully transistorized machine.

The design of a full-size Transistor Computer was subsequently adopted by the Manchester firm of Metropolitan-Vickers, who changed all the circuits to more reliable types of junction transistors. The production version was known as the Metrovick 950 and was built from 1956 to the extent of six or seven machines, which were "used commercially within the company" or "mainly for internal use".

Unix (1969)

Unix (officially trademarked as UNIX, sometimes also written as Unix) is a multitasking, multi-user computer operating system originally developed in 1969 by a group of AT&T employees at Bell Labs, including Ken Thompson, Dennis Ritchie, Brian Kernighan, Douglas McIlroy, and Joe Ossanna. The Unix operating system was first developed in assembly language, but by 1973 it had been almost entirely recoded in C, greatly facilitating its further development and porting to other hardware. Today's Unix systems are split into various branches, developed over time by AT&T as well as various commercial vendors and non-profit organizations. The second edition of Unix was released on December 6, 1972.

Floppy Disk (1970)

The earliest floppy disks, invented at IBM, were 8 inches in diameter and became commercially available in 1971. Disks in this form factor were produced and improved upon by IBM and other companies such as Memorex, Shugart Associates, and Burroughs Corporation. In 1976 Shugart Associates introduced the first 5¼-inch FDD and associated media (a typical example being a BASF double-density 5¼-inch diskette). By 1978 there were more than 10 manufacturers producing 5¼-inch FDDs in competing disk formats: hard- or soft-sectored, with various encoding schemes such as FM, MFM and GCR. The 5¼-inch formats quickly displaced the 8-inch for most applications, and the 5¼-inch hard-sectored disk format eventually disappeared.

In 1984, IBM introduced the 1.2-megabyte double-sided floppy disk along with its AT model. Although often used as backup storage, the high-density floppy was not often used by software manufacturers for interchangeability. In 1986, IBM began to use the 720 kB double-density 3.5-inch microfloppy disk on its Convertible laptop computer, and introduced the so-called "1.44 MB" high-density version with the PS/2 line. These disk drives could be added to existing older-model PCs. In 1988 IBM introduced a drive for 2.88 MB "DSED" diskettes in its top-of-the-line PS/2 models; it was a commercial failure.

In 1991, Insite Peripherals introduced the "Floptical", which used an infra-red LED to position the heads over marks in the disk surface. The original drive stored 21 MB, while also reading and writing standard DD and HD floppies. In order to improve data transfer speeds and make the high-capacity drive usefully quick as well, the drives were attached to the system using a SCSI connector instead of the normal floppy controller. This made them appear to the operating system as a hard drive instead of a floppy, meaning that most PCs were unable to boot from them, which again adversely affected uptake.

Intel 4004 (1971)

The Intel 4004 was a 4-bit central processing unit (CPU) released by Intel Corporation in 1971. It was the first complete CPU on one chip, and also the first commercially available microprocessor. Such a feat of integration was made possible by the use of the then-new silicon gate technology, which allowed a higher number of transistors and a faster speed than was possible before. The 4004 employed a 10-μm silicon-gate enhancement-load pMOS technology and could execute approximately 92,000 instructions per second (that is, a single instruction cycle was 10.8 microseconds). The chief designers of the chip were Federico Faggin and Ted Hoff of Intel, and Masatoshi Shima of Busicom (later of ZiLOG, founded by Faggin).

Faggin, the sole chip designer among the engineers on the MCS-4 project, was the only one with experience in MOS random logic and circuit design. He also had the crucial knowledge of the new silicon gate process technology with self-aligned gates, which he had created at Fairchild in 1968. At Fairchild in 1968, Faggin also designed and manufactured the world's first commercial IC using SGT, the Fairchild 3708. As soon as he joined the Intel MOS Department he created a new random design methodology based on silicon gate, and contributed many technology and circuit design inventions that enabled a single-chip microprocessor to become a reality for the first time. His methodology set the design style for all the early Intel microprocessors and later for Zilog's Z80. He also led the MCS-4 project and was responsible for its successful outcome (1970-1971). Ted Hoff, head of the Application Research Department, contributed only the architectural proposal for Busicom, working with Stanley Mazor in 1969, and then moved on to other projects. When asked where he got the ideas for the architecture of the first microprocessor, Hoff related that Plessey, "a British tractor company," had donated a minicomputer to Stanford, and he had "played with it some" while he was there. Shima designed the Busicom calculator firmware and assisted Faggin during the first six months of the implementation. The manager of Intel's MOS Design Department was Leslie L. Vadász. At the time of the MCS-4 development, Vadász's attention was completely focused on the mainstream business of semiconductor memories, and he left the leadership and the management of the MCS-4 project to Faggin.

A popular myth has it that Pioneer 10, the first spacecraft to leave the solar system, used an Intel 4004 microprocessor. According to Dr. Larry Lasher of Ames Research Center, the Pioneer team did evaluate the 4004, but decided it was too new at the time to include in any of the Pioneer projects. The myth was repeated by Federico Faggin himself in a lecture for the Computer History Museum in 2006.

Microsoft Windows (1975)

Paul Allen and Bill Gates, childhood friends with a passion for computer programming, were seeking to make a successful business utilizing their shared skills. The January 1975 issue of Popular Electronics featured Micro Instrumentation and Telemetry Systems's (MITS) Altair 8800 microcomputer. Allen noticed that they could program a BASIC interpreter for the device; after a call from Gates claiming to have a working interpreter, MITS requested a demonstration. Since they didn't actually have one, Allen worked on a simulator for the Altair while Gates developed the interpreter. Although they developed the interpreter on a simulator and not the actual device, the interpreter worked flawlessly when they demonstrated it to MITS in Albuquerque, New Mexico in March 1975; MITS agreed to distribute it, marketing it as Altair BASIC. They officially established Microsoft on April 4, 1975, with Gates as the CEO. Allen came up with the original name of "Micro-Soft", as recounted in a 1995 Fortune magazine article. In August 1977 the company formed an agreement with ASCII Magazine in Japan, resulting in its first international office, "ASCII Microsoft". The company moved to a new home in Bellevue, Washington in January 1979.

Microsoft entered the OS business in 1980 with its own version of Unix, called Xenix. However, it was DOS (Disk Operating System) that solidified the company's dominance. After negotiations with Digital Research failed, IBM awarded a contract to Microsoft in November 1980 to provide a version of the CP/M OS, which was to be used in the upcoming IBM Personal Computer (IBM PC). For this deal, Microsoft purchased a CP/M clone called 86-DOS from Seattle Computer Products, branding it as MS-DOS, which IBM rebranded as PC DOS. Following the release of the IBM PC in August 1981, Microsoft retained ownership of MS-DOS. Since IBM copyrighted the IBM PC BIOS, other companies had to reverse-engineer it in order for non-IBM hardware to run as IBM PC compatibles, but no such restriction applied to the operating systems. Due to various factors, such as MS-DOS's available software selection, Microsoft eventually became the leading PC OS vendor. The company expanded into new markets with the release of the Microsoft Mouse in 1983, as well as a publishing division named Microsoft Press. Paul Allen resigned from Microsoft in February 1983 after developing Hodgkin's disease.

Apple Computers (1976)

On April Fools' Day, 1976, Steve Wozniak and Steve Jobs released the Apple I computer and started Apple Computers. The Apple I was the first computer built on a single circuit board. The first home computer with a GUI, or graphical user interface, was the Apple Lisa. The very first graphical user interface was developed by the Xerox Corporation at its Palo Alto Research Center (PARC) in the 1970s. Steve Jobs visited PARC in 1979 (after buying Xerox stock) and was impressed and influenced by the Xerox Alto, the first computer ever with a graphical user interface. Jobs designed the new Apple Lisa based on the technology he saw at Xerox.

Pentium Processors

Pentium is a registered trademark that is included in the brand names of many of Intel's x86-compatible microprocessors, both single- and multi-core. The name Pentium was derived from the Greek pente (πέντε), meaning five, and the Latin ending -ium, a name selected after courts had disallowed trademarking of number-based names like "i586" or "80586" (model numbers cannot always be trademarked). Following Intel's previous series of 8086, 80186, 80286, 80386, and 80486 microprocessors, Intel's fifth-generation microarchitecture, the P5, was first released under the Pentium brand on March 22, 1993. In 1995, Intel started to employ the registered Pentium trademark also for x86 microprocessors with radically different microarchitectures (e.g., Pentium Pro, II, III, 4, D, M, etc.). In 2006, the Pentium brand briefly disappeared from Intel's roadmaps, only to re-emerge in 2007.

Multi-Core Processors

A multi-core processor is a single computing component with two or more independent actual processors (called "cores"), which are the units that read and execute program instructions. The data in the instruction tells the processor what to do. The instructions are very basic things like reading data from memory or sending data to the user display, but they are processed so rapidly that we experience the results as the smooth operation of a program. Manufacturers typically integrate the cores onto a single integrated circuit die (known as a chip multiprocessor or CMP), or onto multiple dies in a single chip package.

Processors were originally developed with only one core. A many-core processor is one in which the number of cores is large enough that traditional multi-processor techniques are no longer efficient, largely due to issues with congestion in supplying instructions and data to the many processors. The many-core threshold is roughly in the range of several tens of cores; above this threshold, network-on-chip technology is advantageous.

A dual-core processor has two cores (e.g. AMD Phenom II X2, Intel Core Duo), a quad-core processor contains four cores (e.g. AMD Phenom II X4, the Intel 2010 Core line that includes three levels of quad-core processors; see i3, i5, and i7 at Intel Core), and a hexa-core processor contains six cores (e.g. AMD Phenom II X6, Intel Core i7 Extreme Edition 980X). A multi-core processor implements multiprocessing in a single physical package. Designers may couple cores in a multi-core device tightly or loosely; for example, cores may or may not share caches, and they may implement message-passing or shared-memory inter-core communication methods. Common network topologies to interconnect cores include bus, ring, two-dimensional mesh, and crossbar. Homogeneous multi-core systems include only identical cores; heterogeneous multi-core systems have cores which are not identical. Just as with single-processor systems, cores in multi-core systems may implement architectures such as superscalar, VLIW, vector processing, SIMD, or multithreading.

The improvement in performance gained by the use of a multi-core processor depends very much on the software algorithms used and their implementation. In particular, possible gains are limited by the fraction of the software that can be parallelized to run on multiple cores simultaneously; this effect is described by Amdahl's law. In the best case, so-called embarrassingly parallel problems may realize speedup factors near the number of cores, or even more if the problem is split up enough to fit within each core's cache(s), avoiding use of much slower main system memory. Most applications, however, are not accelerated so much unless programmers invest a prohibitive amount of effort in refactoring the whole problem. The parallelization of software is a significant ongoing topic of research.

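Amdahl's law, mentioned above, can be stated as a one-line formula: if a fraction p of a program can be parallelized across n cores, the overall speedup is at most 1 / ((1 - p) + p / n). The short Python sketch below evaluates it for a few illustrative (not measured) values.

    def amdahl_speedup(p, n):
        # Upper bound on speedup when a fraction p of the work runs in parallel on n cores.
        return 1.0 / ((1.0 - p) + p / n)

    for cores in (2, 4, 8, 64):
        print(cores, "cores:",
              round(amdahl_speedup(0.90, cores), 2), "x if 90% of the code is parallel,",
              round(amdahl_speedup(0.50, cores), 2), "x if only 50% is parallel")
    # Even with 64 cores, a program that is only half parallelizable gains less than 2x.
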
Supercomputers

A supercomputer is a computer that is at the frontline of current processing capacity, particularly speed of calculation. Supercomputers are used for highly calculation-intensive tasks such as problems involving quantum physics, weather forecasting, climate research, molecular modeling (computing the structures and properties of chemical compounds, biological macromolecules, polymers, and crystals), and physical simulations (such as the simulation of airplanes in wind tunnels, simulation of the detonation of nuclear weapons, and research into nuclear fusion).

Supercomputers were introduced in the 1960s and were designed primarily by Seymour Cray at Control Data Corporation (CDC), which led the market into the 1970s until Cray left to form his own company, Cray Research. He then took over the supercomputer market with his new designs, holding the top spot in supercomputing for five years (1985-1990). In the 1980s a large number of smaller competitors entered the market, in parallel to the creation of the minicomputer market a decade earlier, but many of these disappeared in the mid-1990s "supercomputer market crash". Today, supercomputers are typically one-of-a-kind custom designs produced by traditional companies such as Cray, IBM and Hewlett-Packard, who purchased many of the 1980s companies to gain their experience. Since October 2010, the Tianhe-1A supercomputer, located in China, has been the fastest in the world.

The term supercomputer itself is rather fluid, and the speed of today's supercomputers tends to become typical of tomorrow's ordinary computers. CDC's early machines were simply very fast scalar processors, some ten times the speed of the fastest machines offered by other companies. In the 1970s most supercomputers were dedicated to running a vector processor, and many of the newer players developed their own such processors at a lower price to enter the market. The early and mid-1980s saw machines with a modest number of vector processors working in parallel become the standard; typical numbers of processors were in the range of four to sixteen. In the later 1980s and 1990s, attention turned from vector processors to massively parallel processing systems with thousands of "ordinary" CPUs, some being off-the-shelf units and others being custom designs. Today, parallel designs are based on "off-the-shelf" server-class microprocessors, such as the PowerPC, Opteron, or Xeon, and coprocessors like NVIDIA Tesla GPGPUs, AMD GPUs, IBM Cell, and FPGAs. Most modern supercomputers are now highly tuned computer clusters using commodity processors combined with custom interconnects.

Artificial Intelligence

Artificial intelligence (AI) is the intelligence of machines and the branch of computer science that aims to create it. AI textbooks define the field as "the study and design of intelligent agents", where an intelligent agent is a system that perceives its environment and takes actions that maximize its chances of success. John McCarthy, who coined the term in 1956, defines it as "the science and engineering of making intelligent machines."

The field was founded on the claim that a central property of humans, intelligence (the sapience of Homo sapiens), can be so precisely described that it can be simulated by a machine. This raises philosophical issues about the nature of the mind and the ethics of creating artificial beings, issues which have been addressed by myth, fiction and philosophy since antiquity. Artificial intelligence has been the subject of optimism but has also suffered setbacks and, today, has become an essential part of the technology industry, providing the heavy lifting for many of the most difficult problems in computer science.

AI research is highly technical and specialized, and deeply divided into subfields that often fail to communicate with each other. Subfields have grown up around particular institutions, the work of individual researchers, the solution of specific problems, longstanding differences of opinion about how AI should be done, and the application of widely differing tools. The central problems of AI include such traits as reasoning, knowledge, planning, learning, communication, perception and the ability to move and manipulate objects. General intelligence (or "strong AI") is still among the field's long-term goals.

Optical Computers

The computers we use today use transistors and semiconductors to control electricity; computers of the future may utilize crystals and metamaterials to control light. Optical computers make use of light particles called photons. NASA scientists are working to address the need for computer speed using light. Light travels at 186,000 miles per second; that is 982,080,000 feet per second, or 11,784,960,000 inches per second. In a billionth of a second, one nanosecond, photons of light travel just a bit less than a foot (not considering resistance in air or of an optical fiber strand or thin film), which is just right for doing things very quickly in microminiaturized computer chips.

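The figures quoted above can be checked with a few lines of arithmetic; the following Python snippet is an illustration only, ignoring any slowing of light in air or fiber.

    MILES_PER_SECOND = 186_000      # approximate speed of light in vacuum
    FEET_PER_MILE = 5_280
    INCHES_PER_FOOT = 12

    feet_per_second = MILES_PER_SECOND * FEET_PER_MILE      # 982,080,000 ft/s
    inches_per_second = feet_per_second * INCHES_PER_FOOT   # 11,784,960,000 in/s
    inches_per_nanosecond = inches_per_second * 1e-9        # distance covered in 1 ns

    print(f"{inches_per_nanosecond:.2f} inches per nanosecond")   # about 11.78, just under a foot
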
"Entirely optical computers are still some time in the future," says Dr. Frazier, "but electro-optical hybrids have been possible since 1978, when it was learned that photons can respond to electrons through media such as lithium niobate. Newer advances have produced a variety of thin films and optical fibers that make optical interconnections and devices practical. We are focusing on thin films made of organic molecules, which are more light sensitive than inorganics. Organics can perform functions such as switching, signal processing and frequency doubling using less power than inorganics. Inorganics such as silicon used with organic materials let us use both photons and electrons in current hybrid systems, which will eventually lead to all-optical computer systems."

"What we are accomplishing in the lab today will result in the development of super-fast, super-miniaturized, super-lightweight and lower-cost optical computing and optical communication devices and systems," Frazier explained.

DNA Computing

DNA computing is a form of computing which uses DNA, biochemistry and molecular biology instead of the traditional silicon-based computer technologies. DNA computing, or, more generally, biomolecular computing, is a fast-developing interdisciplinary area; research and development in this area concerns theory, experiments and applications of DNA computing.

This field was initially developed by Leonard Adleman of the University of Southern California in 1994. Adleman demonstrated a proof-of-concept use of DNA as a form of computation which solved the seven-point Hamiltonian path problem. Since the initial Adleman experiments, advances have been made and various Turing machines have been proven to be constructible.

In 2002, researchers from the Weizmann Institute of Science in Rehovot, Israel, unveiled a programmable molecular computing machine composed of enzymes and DNA molecules instead of silicon microchips. On April 28, 2004, Ehud Shapiro, Yaakov Benenson, Binyamin Gil, Uri Ben-Dor, and Rivka Adar at the Weizmann Institute announced in the journal Nature that they had constructed a DNA computer coupled with an input and output module which would theoretically be capable of diagnosing cancerous activity within a cell and releasing an anti-cancer drug upon diagnosis.

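For contrast with Adleman's molecular approach, the brute-force Python sketch below solves a small Hamiltonian path instance conventionally, by trying every ordering of the vertices. The seven-vertex graph used here is hypothetical, not Adleman's actual instance.

    from itertools import permutations

    def hamiltonian_path(vertices, edges):
        # Returns an ordering that visits every vertex exactly once along edges, or None.
        edge_set = set(edges)
        for order in permutations(vertices):
            if all((a, b) in edge_set for a, b in zip(order, order[1:])):
                return order
        return None

    vertices = range(7)
    edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 6), (0, 3), (2, 5)]
    print(hamiltonian_path(vertices, edges))    # (0, 1, 2, 3, 4, 5, 6)

Checking all 7! = 5,040 orderings is trivial here, but the search grows factorially with the number of vertices, which is part of what made massively parallel approaches such as DNA computing attractive.
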
Pen Computers

P-ISM is a gadget package including five functions: a pen-style cellular phone with a handwriting data input function, a virtual keyboard, a very small projector, a camera scanner, and a personal ID key with a cashless pass function. P-ISMs are connected with one another through short-range wireless technology, and the whole set is also connected to the Internet through the cellular phone function. This personal gadget in a minimalistic pen style enables the ultimate in ubiquitous computing.

Quantum Computers

A quantum computer is a device for computation that makes direct use of quantum mechanical phenomena, such as superposition and entanglement, to perform operations on data. Quantum computers are different from traditional computers based on transistors. The basic principle behind quantum computation is that quantum properties can be used to represent data and perform operations on these data. A theoretical model is the quantum Turing machine, also known as the universal quantum computer. Although quantum computing is still in its infancy, experiments have been carried out in which quantum computational operations were executed on a very small number of qubits (quantum bits). Both practical and theoretical research continues, and many national governments and military funding agencies support quantum computing research to develop quantum computers for both civilian and national security purposes, such as cryptanalysis.

Large-scale quantum computers would be able to solve certain problems much faster than any classical computer using the best currently known algorithms, for example integer factorization using Shor's algorithm or the simulation of quantum many-body systems. There exist quantum algorithms which run faster than any possible probabilistic classical algorithm. Given enough resources, however, a classical computer can simulate an arbitrary quantum computer; hence, ignoring computational and space constraints, a quantum computer is not capable of solving any problem which a classical computer cannot.

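As a minimal numerical illustration of the superposition idea described above (not a simulation of a real quantum computer), a single qubit can be represented by two amplitudes (a, b) with |a|² + |b|² = 1; measuring it gives 0 with probability |a|² and 1 with probability |b|².

    import random

    def measure(amplitudes):
        # Collapse the qubit: return 0 with probability |a|^2, otherwise 1.
        a, b = amplitudes
        return 0 if random.random() < abs(a) ** 2 else 1

    qubit = (2 ** -0.5, 2 ** -0.5)   # an equal superposition of |0> and |1>

    samples = [measure(qubit) for _ in range(10_000)]
    print(sum(samples) / len(samples))   # close to 0.5: about half the measurements give 1
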