Optical computing

Optical computing means performing computation, operations, storage and transmission of data using light. Instead of silicon chips, an optical computer uses organic polymers such as phthalocyanine and polydiacetylene. Optical technology promises massive gains in the efficiency and speed of computers, as well as significant reductions in their size and cost. An optical desktop computer could, in principle, process data up to 100,000 times faster than current models.

Advantages of Optical Computers

Many of the advantages stated below cannot be put into practice today but are theoretical advantages of optical technology. On the contrary, some of them are at present not advantages but rather disadvantages, owing to the early stage of development: for example, a prototype of an optical computer is bigger than a conventional computer. The theoretical advantages, however, can be realized sooner or later.

Optical computers will only be produced if they are cheaper or more powerful than conventional ones (or, at best, both at the same time). Besides hybrid solutions, there will be competition between the two kinds of computing. To be able to assess optical computers and their future possibilities, conventional computers and their current stage of development are examined for each advantage and disadvantage.

Advantages of optical computers compared to conventional computers:

Higher performance

The most significant advantage of optical computers is their potential for higher performance. A concrete benchmark between optical and conventional computers is not possible yet, but the performance of an optical computer at an advanced stage should be several orders of magnitude higher. Conventional computer technology is based on electric current and electrons. While an electric signal propagates very fast, the average drift velocity of the electrons themselves is rather slow. Nothing is faster than the speed of light, which makes light and photons the perfect information carrier: even long distances can be bridged within split seconds.
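The contrast between signal speed and electron drift velocity can be sketched with a short calculation based on the standard relation v = I / (n·q·A). The current and wire cross-section below are assumed example values, not figures from the text:

```python
# Rough comparison of electron drift velocity and the speed of light.
# Assumed example values: 1 A through a 1 mm^2 copper wire.
I = 1.0          # current in amperes (assumed)
A = 1e-6         # wire cross-section in m^2 (1 mm^2, assumed)
n = 8.5e28       # free-electron density of copper, per m^3
q = 1.602e-19    # elementary charge in coulombs
c = 3.0e8        # speed of light in m/s

v_drift = I / (n * q * A)   # average drift velocity in m/s
print(f"drift velocity: {v_drift:.2e} m/s")   # on the order of 1e-5 m/s
print(f"ratio c / v_drift: {c / v_drift:.1e}")
```

Under these assumptions the electrons drift at well under a millimeter per second, some twelve orders of magnitude slower than light, which illustrates why the signal speed of a wire comes from the propagating field, not the electrons themselves.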
A computer based completely on optical components is the optimum in terms of speed: fully optical RAM storing photons directly if possible or otherwise indirectly, bus systems using concentrated light beams (lasers) to communicate, optical processors and holographic drives. All these components have the capability to be much faster than the existing electric ones. For example, a light beam is able to transmit the whole Encyclopaedia Britannica within one second. To lose no speed, it is reasonable to use electric devices as rarely as possible, because every electric communication device is a potential bottleneck. At the beginning of the development a fully optical computer will not be possible; many devices will still work electrically. Even after much further development, electronic parts will most likely remain in optical computers, e.g. for instruction and control tasks.

Using light as the information carrier is, however, only one way to accelerate computers. Using a different architecture for optical computers could be a key factor for success. Conventional computers are normally based on the Von Neumann architecture: data and instructions are kept in the same memory, are variable, and are executed sequentially. One serious disadvantage of this design is the bottleneck between the CPU and the RAM, the so-called Von Neumann bottleneck. Since the CPU has become the fastest device in modern computers, it is forced to wait for data coming from the RAM; the bus systems and the RAM are not fast enough to supply the CPU with data without interruption, so the effective processing speed is seriously limited. In optical computers, light could be used in the bus systems. Optical communication devices are much faster than electric ones, so the bottleneck could disappear.

Another key to speeding up optical computers is to compute with higher parallelism.
This implies higher performance and higher bandwidth, which is discussed in the next section.

Higher parallelism

There are two options for achieving higher parallelism in computers.

One is to increase the amount of data that is sent through the bus systems and processed in the CPU at any one time. Modern conventional computers and operating systems are based on 32-bit or 64-bit architectures. Often application software only supports 32-bit systems, so the software does not take full advantage of 64-bit systems. Optical computers can be built with higher bandwidth: within one data path, several data sets can be transmitted in parallel at the same time using different wavelengths or polarizations. Attention to interference is necessary, however; with coherent laser light, destructive interference may occur depending on the phase. The higher parallelism and the superior velocity of light allow extreme processing speeds, but to take full advantage of this new architecture, operating systems and application software have to be adapted to it.

Furthermore, data paths are able to cross each other without interference. This advantage of optical technology can help to build architectures and layouts with superior parallelism. New layouts can be more three-dimensional, so the space in a computer case can be used more intensively. With adequate miniaturization of optical components, the size of computers could shrink.

The following figure visualizes the possibilities of communication using light beams:
Figure 15: Superposition, Crossed Data Paths and Three-Dimensionality (in a hybrid system using optoelectronic devices). Own illustration.

In conventional computers, superposition and crossed data paths are not possible: only one data set can be sent along a data path at any one time, and crossed data paths lead to loss of data. Besides that, the layout of a conventional computer is essentially two-dimensional. Optical computers take advantage of all three dimensions. No wires are needed to transmit information; on the contrary, information can be sent wirelessly or even through a vacuum.
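The phase-dependent interference mentioned above can be sketched numerically: superposing two coherent waves of equal amplitude A with phase difference Δφ gives a resultant amplitude of 2·A·|cos(Δφ/2)|, so multiplexed channels must be kept phase-controlled. The amplitude value below is an illustrative assumption:

```python
import math

# Superposition of two coherent waves of equal amplitude A with a
# phase difference dphi: resultant amplitude = 2*A*|cos(dphi/2)|.
def resultant_amplitude(A, dphi):
    return 2 * A * abs(math.cos(dphi / 2))

A = 1.0  # arbitrary unit amplitude (assumed)
print(resultant_amplitude(A, 0.0))       # constructive: 2.0
print(resultant_amplitude(A, math.pi))   # destructive: ~0.0
```

In phase, the waves reinforce to double the amplitude; half a wavelength out of phase, they cancel almost completely, which is the destructive case the text warns about.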
Another option for achieving higher parallelism in computers is to change the architecture with respect to the temporal aspect. As already mentioned, modern computers based on the Von Neumann architecture work sequentially. To achieve more performance, parallel computing is required, so that more than one instruction stream or data stream can be executed simultaneously. According to Flynn's taxonomy, computer architectures are divided into four groups; Flynn differentiates between data streams and instruction streams.

                 Single Instruction    Multiple Instruction
  Single Data    SISD                  MISD
  Multiple Data  SIMD                  MIMD

Table 3: Flynn's Taxonomy. Own illustration based on STUCKE (1989), pages 29-31.

Personal computers are based on SISD. Vector processors using the SIMD architecture are able to perform the same instruction on several data sets. Computers designed as MISD operate redundantly. Most of the fastest supercomputers in the world are based on the MIMD design, in which several processors work independently and asynchronously. For optical computers, multiple data and instruction streams are useful, too; at least for mainframe computers, an optical MIMD concept should be established.

Less consumption

Current computers consume a lot of energy. Modern CPUs often need over 80 watts in the idle state, around 120 watts in normal use and up to 250 watts in performance mode. For complex visualization a high-performance graphics card is needed, which itself needs up to 150 watts. The computer industry recognized this issue several years ago; today, energy-saving components are built especially for notebooks.

Calculation example: a computer runs around eight hours every day in normal use, and one kilowatt hour costs 0.15 €. The CPU then consumes around 350 kilowatt hours a year, so about 53 € of energy costs are generated by the CPU alone. All components of a loaded high-performance PC (without monitor and peripherals) consume as much as 1,100 kilowatt hours (165 €) in the same time.

Another aspect is that a lot of the consumed energy is not used productively but is lost as idle power in the form of internal friction in the ICs, and this loss rises from one CPU generation to the next. This friction releases a lot of heat, which is discussed in the next section. When light is used as the information carrier, no such friction between the elementary particles occurs, so optical computers have the potential to be more power-saving than conventional ones.

Less heat is released

Less heat release using light? This sounds contradictory, since light sources radiate heat. But in optical computers lasers are used as light sources, and these concentrated light beams consist of only a small spectrum of wavelengths. Depending on the field of application, lasers have different energy requirements and produce heat to a greater or lesser extent.

Most modern CPUs are not able to work without proper cooling. The reason is the friction of the electrons in the integrated circuits: the moving electrons collide within wires and ICs, and heat is generated. Cooling a processor (whether CPU, GPU or the like) or the whole case needs energy and space, and produces noise. Optical computers could be smaller because there is no need for a fan or free space for air circulation. Big computer centers with thousands of computers need powerful air-conditioning systems; these aggregates are expensive and need a lot of energy, too. Besides that, the released heat increases the danger of fire in computer centers.

Less noise

Conventional computers often cause a lot of noise due to rotating fans and drives. High-speed processors pushed to their architectural limits need substantial active and passive cooling. In the past, small fast-spinning fans were often used, which created plenty of noise. Some years ago the computer industry recognized the problem of this annoying noise and developed larger fans: the same volume flow is achieved at a lower rotary speed, and the noise is reduced.

Optical computers, in contrast, could be almost noiseless, since probably no fan will be needed. Light sources (e.g. lasers) can be cooled with passive coolers and heat pipes built out of aluminum or copper; such passive coolers remove heat silently. Low noise is an interesting aspect for office and home users as well.
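The calculation example under "Less consumption" above can be checked with a short sketch; the 120 W normal-use figure, the eight hours per day and the 0.15 €/kWh price are the values assumed in the text:

```python
# Yearly energy cost of a CPU drawing 120 W for 8 hours a day.
power_w = 120            # CPU power in normal use, watts
hours_per_day = 8
price_per_kwh = 0.15     # euros per kilowatt hour

kwh_per_year = power_w / 1000 * hours_per_day * 365
cost_per_year = kwh_per_year * price_per_kwh
print(f"{kwh_per_year:.0f} kWh/year, {cost_per_year:.2f} EUR/year")
# roughly 350 kWh and 53 EUR per year, matching the figures in the text
```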
More flexibility in layout

Conventional PCs are built as a rectangular box (desktop) or as a laptop. One reason for this is the speed of electronic connections, which depends on the length of the cables and signal paths. On the motherboard the CPU, RAM and graphics card have to be close to each other to be able to move huge amounts of information; longer distances imply a decrease in the practical transfer rate.

With optical components the distance of communication does not matter. Once the signal is in an optical fiber, it makes no difference whether it travels 1 meter or 1,000 meters: because of the low attenuation, long-range communication is possible, the data rate remains very high, and there is no crosstalk.

So optical computer technology has the potential to change the shape and layout of computers fundamentally. The components of one computer could be spread across a car, a building or even a city with almost no loss in performance. Consequently the server/client and the peer-to-peer architectures could be advanced: many clients, terminals or even single components can be connected optically, which allows greater ranges.

Less loss in communication

Today communication is usually realized with electric wires or wirelessly by radio frequency. The ranges of these communication channels are limited; data sent through wires needs to be amplified several times to bridge longer distances.

Communication over optical fibers is almost lossless thanks to total internal reflection, so amplification of the signal is not needed, or only rarely. Furthermore, a higher bandwidth is possible, optical communication is insensitive to electromagnetic interference, and it is more tap-proof. For high-performance communication (e.g. for backbones), fiber optics are already used today.

Less wear

Wear normally occurs in mechanically moving parts. In conventional computers those parts are above all fans, hard disk drives and conventional optical removable storage (CD, DVD, HD-DVD, Blu-ray disks). All these components rotate or move very fast, which causes friction. Because of this friction the mechanical parts wear out and break. As already mentioned, this friction also causes heat, which is suboptimal, too.
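Returning to the near-lossless fiber link described under "Less loss in communication": the remaining signal power over a link follows from the attenuation in decibels. The 0.2 dB/km figure below is a typical value for modern single-mode fiber, assumed here purely for illustration:

```python
# Remaining optical power after a fiber link with attenuation alpha dB/km:
# P_out = P_in * 10 ** (-alpha * L / 10)
def remaining_power(p_in_mw, alpha_db_per_km, length_km):
    return p_in_mw * 10 ** (-alpha_db_per_km * length_km / 10)

# Assumed values: 1 mW launch power, 0.2 dB/km single-mode fiber.
for meters in (1, 1000):
    p = remaining_power(1.0, 0.2, meters / 1000)
    print(meters, "m:", round(p, 4), "mW")
# after 1000 m roughly 95.5% of the power remains, so 1 m and 1000 m
# are indeed nearly indistinguishable at the receiver
```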
In optical computers, fans will possibly no longer be needed: an optical processor does not heat up from the internal friction of electrons the way a conventional one does. In addition, new technologies for mass storage can be established; storing data in the form of holograms or on a molecular basis is possible. These forms do not need fast-spinning parts and do not wear out so heavily.

Disadvantages of Optical Computers

As already mentioned in Chapter 5.2, today's optical computers are only prototypes and do not reflect the full capabilities of optical computing. Consequently many of the disadvantages of optical computing stem not from the principle of the new technology itself but from contemporary production technology. The machines and processes that could be used to manufacture optical computers in mass production are still in their early stages.

Disadvantages of optical computers compared to conventional computers:

Optical components and their production are still expensive

Conventional ICs are produced in high-tech factories whose sole task is to manufacture ICs. Since the specialization and productivity of these factories is very high, the price of the product is low: experts say the price of one transistor roughly matches the price of a printed character in a newspaper. The price used to be much higher, but it decreased thanks to new manufacturing capabilities and new factories. For optical components manufactured specifically for optical computers, the price could develop similarly to the IC price. Today there are already optical components, e.g. for communication and measurement, but they are not specially built to be placed in an optical computer. Furthermore there is no specialized high-tech factory, comparable to modern IC factories, that produces optical computers or components especially designed for them. Thus the price of an optical computer is very high; indeed, there is no complete system that can be bought yet.
Such a system is a kind of prototype: it is not really manufactured but built by hand by experts from components produced for a different purpose. Using a standard architecture and standardized components could be the key to mass production of optical computers. This would decrease the production cost, and thus the price of an optical computer would drop as well. Unfortunately only a few companies have the financial power first to develop computers with integrated optics and then to build a matching factory to produce them; IBM, Infineon (formerly Siemens), Samsung, Intel and AMD can be mentioned here. In the scenario analysis in Chapter 6, possible scenarios concerning the price and performance of optical computers are depicted in more detail.

Optical components are not miniaturized enough yet

Modern IC processors are built at a very high density, called VLSI (very-large-scale integration) or ULSI (ultra-large-scale integration): several million transistors fit into an area of only a few square millimeters. The size of the transistors continues to decrease, and some parts of the transistors already reach atomic dimensions.

In contrast, optical components can be built small and compact, but not really miniaturized. The field of optics ranges from macrooptics over miniature optics to microoptics, but there are as yet no microoptic integrated circuits specifically developed to assemble a CPU or a motherboard. A lot of development and new processes will be needed.

Problems of exact manufacturing

Today conventional processors and computer components are manufactured with high precision and in large batches using robust processes. Manufacturing problems can be eliminated by proper quality assurance. A change of the existing assembly method to another chip-structure size, or to bigger wafers, can trigger problems; for instance, AMD recently had some problems changing their production from a 90 nm to a 65 nm structure. Miniaturized optical components have to be built very exactly to work properly. This exactness is often not reached yet, and small deviations can cause massive problems in diverting light beams. Exact production is expensive, which leads to the next disadvantage.

New expensive high-tech factories have to be built

An IC factory costs several million up to billions of dollars. Chip manufacturers do not want to close such factories, given the high investments, before they have been amortized. So the goal of the chip manufacturers is to integrate optical components into the existing concepts and create hybrid devices. To allow this integration, the optical devices have to be adapted, and they cannot show the full capabilities they would have in an all-optical architecture. At first, then, the integrated optical components will probably be produced in existing IC factories; later, dedicated factories can be built to produce purely optical parts.

Incompatibility

Modern personal computers (PCs) are put together according to the Von Neumann architecture. Application software and especially operating systems are programmed to match this architecture. Because of these existing standards, most application software can be used on any computer running a Microsoft Windows operating system; the portability is very high. Optical computers may use a different architecture, particularly regarding the parallelism of the system. Conventional programs that work sequentially will have to run emulated in a kind of compatibility mode, so these programs cannot use the full calculating speed of optical computers, or, even worse, some programs might not work at all. Here the question arises whether the software drives the hardware or the hardware drives the software. For optical computers the software will depend on the new architecture of the hardware. Software especially programmed for optical computers might work on conventional computers. The worst scenario is two different architectures existing next to each other with incompatible software.
In that case software can only be used on one architecture, so portability and capability are quite low.

5.3 Comparison of the Advantages and Disadvantages

In Chapters 5.1 and 5.2 the advantages and disadvantages of optical computing were stated. Strong advantages such as higher performance with lower consumption and better ergonomics (less noise, smaller and more flexible cases) stand opposite disadvantages of a more temporary nature (incompatibility and production problems). Consequently the advantages of optical computing outweigh the disadvantages: photons and light seem to be better information carriers than electrons and electric current. Especially in the long run, the teething problems of the new technology will be overcome, and further development will most likely bring forth high-performance computers. How long this will take, and which factors influence it, is discussed in the next chapter.