Exponential change: what drives it, what does it tell us about the future?

This book, which is available on Amazon (http://www.amazon.com/Exponential-Change-drives-about-future-ebook/dp/B00HPSAYEM), describes the drivers of exponential change and what these drivers tell us about the future. Based on an analysis of more than 50 technologies, it shows that exponential change is driven by: 1) the creation of new materials that better exploit a physical phenomenon, and 2) changes in scale. The creation of new materials has enabled improvements in the strength-to-weight ratio of materials, in the luminosity per watt of LEDs, and in other dimensions for many other technologies. Changes in scale include both increases and reductions in scale. Production, energy, and transportation-related equipment typically benefit from increases in scale, while integrated circuits (ICs), magnetic storage, MEMS (micro-electronic mechanical systems), and bio-electronic ICs for DNA sequencing benefit from reductions in scale.

Exponential Change:
What drives it?
What does it tell us about the future?

Jeffrey Funk
Associate Professor
National University of Singapore

Christopher Magee
Professor
MIT

Part I is available in this document. The entire book, including Part II (about the future), is available from Amazon.com for $2.99. http://www.amazon.com/Exponential-Change-drives-about-future-ebook/dp/B00HPSAYEM
Table of Contents

Chapter 1. Introduction

Part I: What drives exponential change?
Chapter 2. Creating Materials to Better Exploit Physical Phenomena
Chapter 3. Reductions in Scale
Chapter 4. Increases in Scale

Part II: What does this tell us about the future?
Chapter 5. Integrated Circuits and Electronic Systems
Chapter 6. Micro-Electronic Mechanical Systems
Chapter 7. Nanotechnology and Nano-Materials
Chapter 8. Electronic Lighting
Chapter 9. Displays
Chapter 10. Health Care
Chapter 11. Telecommunications
Chapter 12. Human-Computer Interface
Chapter 13. Superconductivity
Chapter 14. Conclusions
Chapter 1. Introduction

"There is nothing permanent except change," said Heraclitus more than 2000 years ago, and this is the essence of our daily business news. New products and services are released, new firms are formed, existing firms are acquired or go bankrupt, and new governments including new political systems continuously emerge.

One key driver of this change is the market-based economy and all its supporting institutions. Smoother functioning financial, insurance, and regulatory systems facilitate the emergence of new products and services and the formation of new firms. A second key factor is new ways of organizing work. Work can be divided in new and different ways, and obtaining the benefits from new technologies often requires new forms of organization. A third key factor is better methods of communication. From postal mail services to the printing press, telegraph, telephone, and now the Internet, better communication has facilitated change. New communication technologies speed up the flow of information and thus promote new ideas, technologies, strategies, policies, and even political change. For example, the recent political upheavals in the Middle East are partly due to new communication media such as Facebook and Twitter.

However, a smoother functioning market economy, new ways of organizing, and better communication technologies are not the whole story. Market-based economies only indirectly lead to better products and services and thus better standards of living, and they do this only when better techniques or technologies are available. Without these better techniques and technologies, an improved ability to commercialize them would be meaningless. Similar arguments can be made for new forms of organizing work or better communication technologies. Without the better technologies, there is no need for new methods of organizing work and there is no information to spread.

Furthermore, better communication technologies are themselves based on other new technologies. The low cost of uploading and downloading vast amounts of data with computers, mobile phones, and other electronic devices makes the Internet so popular and powerful because this high "bandwidth" enables the inexpensive transmission of books, reports, music, movies, and other video. Without its extremely high (and rapidly increasing) bandwidth, the Internet wouldn't be much different from the telephone, the facsimile, and the television. But why have these communication (and other) technologies experienced such rapid improvements, while others have not?

The doubling in the performance of communication and other electronic-based technologies every one to two years is often termed "exponential" to reflect their rapid rate of improvement. In contrast to the "linear" improvements[i] experienced by many technologies each year, the doubling in the performance of communication technologies every one to two years has over decades led to many orders of magnitude improvements in the cost and speed of wireline and wireless transmission. Understanding why these and other technologies experience such exponential improvements while others do not is essential to understanding when new technologies might become economically feasible, their probable impact on our world, and the degree to which specific technologies can help us solve global problems. More specifically, understanding these technologies helps us understand the different ways in which we can design systems, the alternative ways in which individuals, organizations, and societies can invest their financial and human resources, and the types of policies that are needed to implement better systems.
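To make the arithmetic of "exponential" concrete: a fixed doubling time compounds into orders of magnitude over decades. The short Python sketch below is a minimal illustration of this compounding; the doubling times are representative values from the discussion above, not measured data.

```python
import math

def improvement_factor(years: float, doubling_time_years: float) -> float:
    """Cumulative improvement after `years` at a fixed doubling time."""
    return 2 ** (years / doubling_time_years)

for doubling_time in (1.0, 1.5, 2.0):
    factor = improvement_factor(40, doubling_time)
    print(f"doubling every {doubling_time} yr -> {factor:.2g}x in 40 years "
          f"(~{math.log10(factor):.0f} orders of magnitude)")
```

A doubling every 1.5 years, for example, compounds to roughly eight orders of magnitude in 40 years, which is why decades of steady doubling dwarf any one-time design improvement.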
1.1 What technologies are experiencing exponential change?

When we actually look at the things around us, the rates of change and improvement vary considerably. The productivity of farms has been dramatically improved as better seeds have been developed and larger and more sophisticated machines have been implemented. The cost of electricity, processed metals, chemicals, and transportation also fell by many orders of magnitude, particularly in the first half of the 20th century, largely as new processes were developed and the scale of electrical generating plants, chemical and metal processing factories, and transportation equipment (e.g., oil tankers, freighters, and trucks) was increased. The cost of transportation continued to fall in the second half of the 20th century as computers and electronics enabled improvements in the utilization and coordination of railroads, trucks, aircraft, and the containers transported by this equipment.

The fall in the cost of computers and electronics is particularly large. The nine orders of magnitude improvement over 50 years in the processing speed of computers (and routers and servers) largely comes from similar levels of improvement in the cost and speed of integrated circuits (ICs). Often called Moore's Law, the ability to reduce the size of transistors, memory cells, and other features on the ICs that are fabricated from semiconductors such as silicon has enabled many orders of magnitude improvements in the cost, speed, and functionality of ICs and computers. Improvements in the memory capacity of computers also come from reductions in the size of memory cells on semiconductor ICs and on magnetic media; the latter improvements are often represented in terms of the magnetic recording density of hard disk platters and magnetic tape. Although the reasons for the improvements vary, similar stories can be told for LEDs (light-emitting diodes), lasers, displays, glass fibers for fiber optic cable, and other components that make up the Internet.

The falling cost of electronics and computers has had a large impact on a number of different types of systems, including transportation (partially noted above), retail, wholesale, manufacturing, financial, health care, construction, electricity, and educational systems. Improvements in computers have enabled dramatic improvements in the cost and performance of logistics, whether these logistics are in a factory or between factories, wholesalers, and retailers. Much of the improvement in the cost of logistics can be measured in terms of an increased frequency of inventory turns as factories, retailers, and wholesalers use computers and the Internet to more quickly respond to changes in demand. Improvements in computers have also had a positive impact, to varying degrees, on finance, health care, electricity, and education as they help these industries more effectively manage information.

However, in spite of these improvements, the costs of health care, construction, electricity, and education have remained flat or in some cases have risen over the last 50 years, as have the prices of automobiles and other discrete-parts manufactured products. Health care costs are rising largely because the demand for new treatments rises faster than do the improvements in their cost and performance. Construction and education costs rise because it has been hard to automate many tasks (at least so far). Electricity costs rise because the benefits from increased scale were reached 60 years ago, fuel and environmental costs are rising, and new technologies such as wind turbines (2% a year) and batteries (5% a year) experience very slow rates of improvement[ii]. Automobile costs rise because new functions are being added, increases in scale provide few benefits, and further improvements in factory productivity became difficult once the easiest tasks were automated in the first half of the 20th century.

Other manufactured goods have also not experienced rapid improvements in cost or performance. Consider our homes and the possessions in our kitchens, bedrooms, and bathrooms. Outside of PCs and televisions, most of our furniture, bathroom fixtures, appliances, and other possessions (including our homes) are only marginally technologically better than they were 50 years ago. Instead, any reductions in price primarily come from their manufacture in low-wage countries, while improvements in their quality primarily come from greater wealth. The former was enabled by falling transportation costs while the latter is being driven by the technologies that are experiencing exponential improvements in cost and performance. We are wealthier because exponential improvements in some technologies have led to large increases in economic productivity, and these increases in productivity have enabled us to obtain better products and services whose costs and performance have not experienced exponential improvements. This is the main reason we live in larger homes, drive bigger cars and boats, and eat fancier foods.
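The contrast drawn above between slow improvers (wind turbines at roughly 2% a year, batteries at 5%) and fast ones can be summarized as doubling times. A minimal sketch, using the rates quoted in this chapter and, for ICs, the roughly 40% per year figure that Table 1 in Part I gives for computers:

```python
import math

def doubling_time_years(annual_rate: float) -> float:
    """Years needed to double at a fractional annual improvement rate."""
    return math.log(2) / math.log(1 + annual_rate)

rates = [("wind turbines", 0.02), ("batteries", 0.05),
         ("integrated circuits", 0.40)]  # ~40%/yr: Table 1's computer figure
for name, rate in rates:
    print(f"{name}: {rate:.0%}/yr -> doubles every "
          f"{doubling_time_years(rate):.1f} years")
```

At 2% a year a technology needs about 35 years to double; at 40% a year it doubles roughly every two years, which is the difference between linear-feeling and exponential-feeling change.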
1.2 What drives exponential improvements?

Because we are surrounded by exponential change, we take it for granted and fail to understand the drivers of exponential improvements. We see better and cheaper mobile phones, computers, video game consoles, and televisions released each year, and many of us think that such improvements are common to all products and services. This prevents us from understanding the sources of these improvements and why some products experience more rapid improvements than do others.

Some people might use the term innovation to describe these improvements and their sources. We believe that the word innovation is one of the most over-used words in the business and economics literature, and the over-use has resulted in the word representing a black box in which we somehow magically find solutions. Smart people are somehow able to reach into this black box (sometimes by using another cliché, "thinking outside the box") and pull out solutions better than the rest of us. For example, many people apparently believe that individuals such as Steve Jobs are continuously finding revolutionary designs that are much more effective and efficient than previous designs. Furthermore, some may have concluded that if all managers acted like Steve Jobs, all technologies would experience exponential improvements. In our view, too many management books encourage this simplistic thinking by focusing on innovative managers, innovative organizations, and their flexibility and open-mindedness. By ignoring why some technologies experience more improvements in cost and performance than do others, they dangerously imply that the potential for innovation is the same everywhere and thus all technologies have about the same potential for improvements.

The fact is that improving the cost or performance of a technology is not easy, and improving them by orders of magnitude is very difficult. We can automate tasks, rearrange equipment and process steps in a novel way, or put in better material handling systems, which are the types of changes that are often captured in so-called learning or experience curves. On the performance side, we can rearrange parts, combine them in a novel way, or implement a more elegant design. But such changes to either the product or process design will not by themselves lead to a doubling of a product's performance or a halving of a product's cost every few years such that orders of magnitude improvements emerge over decades.

Such design changes are, however, needed to utilize the power of exponential improvements. Steve Jobs and Apple did this with the iPod, iPhone, and iPad, and other firms have done this with a much longer list of products. Exponential improvements in a number of components enabled Apple to introduce products that are far superior to existing ones in terms of functionality, aesthetic design, and price. Exponential improvements in magnetic recording density and ICs enabled Apple to design the first iPod with a very small disk drive and in a very elegant way. Continued improvements in the performance and cost of ICs enabled the introduction of the iPod Nano, the iPhone, and, in combination with better displays, the iPad. Without the exponential improvements in ICs, magnetic recording, and electronic displays, none of these products would have been economically or even technically possible. The genius of Steve Jobs and Apple was not only that they were able to design such products; it was also that they were able to recognize the power of exponential improvements and how and when these improvements would make the iPod, iPhone, and iPad economically possible. Unlike other firms that probably accepted poor performance from their initial products or believed that there was no market because early products failed, Steve Jobs and Apple realized that the exponential improvements would continue and eventually make new designs in these products economically feasible.

There are several key issues here. One issue involves understanding when these exponential improvements make new designs possible. A second involves understanding which technologies are experiencing or will experience exponential improvements in performance and cost and thus which technologies should receive our focused attention. Other issues include how to manage these processes. This book focuses on the first two issues.

Our research suggests that improvements in performance and cost are largely driven by two mechanisms: 1) creating new materials (and often their associated processes) to better exploit their underlying physical phenomena; and 2) geometric scaling. Some technologies directly experience improvements through these two mechanisms while those consisting of higher-level "systems" indirectly experience them through improvements in specific "components." Our research also shows that the most rapid rates of improvement are primarily driven by a subset of these two mechanisms, and this partly explains why electronic-based technologies have experienced such rapid rates of improvement. First, creating new materials (and processes for them) most effectively leads to rapid improvements in performance and cost when new classes of materials are continuously being created and when microstructures (e.g., thin films for electronics) are constructed with these materials[iii]. Second, technologies that benefit from reductions in scale (e.g., for electronics) have experienced much more rapid improvements than have technologies that benefit from increases in scale (e.g., for energy). Understanding these mechanisms can help firms, universities, and governments choose technologies that have the potential for rapid improvements and choose technologies that will help us to solve problems, including global ones.

The first mechanism involves the creation of materials that better exploit physical phenomena. The word "create" is used because scientists and engineers often create materials that do not naturally exist (as opposed to finding them) and in doing so must also create the processes for the materials. Most of these improvements come from creating new classes of materials, while smaller ones involve modifications to either existing materials or processes. Rates of improvement are also large when the materials are used for microstructures such as diodes, transistors, other P-N junctions (e.g., solar cells), and quantum wells and dots for lasers. As described in Chapter 2, the realization and exploitation of the physical phenomena that form the basis of batteries, lighting, displays, vacuum tubes, ICs, magnetic storage, and solar cells requires a specific type of material, and creating better materials has taken many years. The better materials exploited the physical phenomena more efficiently than did other materials, and this higher efficiency also often led to lower costs as fewer materials are needed. Strong bases of scientific knowledge facilitated the creation of these better materials, and without these broad and deep knowledge bases it is unlikely that these materials would have been created. Thus, supporting scientific research is a key part of creating these new materials. Looking forward, we need support for this science in order to help engineers create the right materials, such as those for solar cells. Improvements in our understanding of photovoltaic materials help us improve their efficiencies, and funding this type of science is much more cost effective than subsidizing the production of solar cells.

A second mechanism of improvement involves changes in scale. Some technologies have benefited from increases in scale and some have benefited from reductions in scale, which were mentioned above and are addressed in more detail in Chapters 3 and 4. For example, engines, steam turbines, ships, and airplanes have benefited from increases in scale; this is why we have large electrical generating stations, transportation equipment, and facilities for handling these large ships and airplanes. For reductions in scale, examples include ICs, magnetic disks and tape, and optical disks.

Some readers are probably baffled by this logic. How can changes in scale have such a large impact on the performance and cost of technologies? One way to understand the importance of scale is to start with a familiar example. Most of us have noticed that short and thin people feel colder in an air conditioned room or in a cold climate than do tall and not so thin people. The reason is that heat production rises with volume while heat loss rises with surface area. The result is that heat production rises faster than does heat loss as the dimensions of people (and other mammals) are increased. This causes short and thin people to feel colder than others, and it also provides an incentive for thin people to gain weight. Thus, people in northern latitudes are often heavier than people in equatorial regions; similar arguments are made for many organisms[iv].

We build large furnaces and smelters for the same reasons. Since heat loss rises more slowly than does heat production, large furnaces and smelters have lower heat loss per unit of output than do small furnaces, which is important since energy costs are a significant fraction of the operating costs for furnaces and smelters. A second reason we build large furnaces and smelters is that capital costs rise with surface area while output rises with volume, since capital costs primarily involve an outer shell, and hopefully a thin one. In later chapters, similar logic (and supporting data) is applied to pipes and reaction vessels in chemical plants and to transportation equipment such as oil tankers, freighters, trucks, and aircraft.

But size brings disadvantages. Because weight usually rises with volume and strength rises with surface area, larger furnaces, smelters, pipes, reaction vessels, and transportation equipment require thicker walls unless better materials are available. Furthermore, even if the better materials are available, they may cost more money than the lower performing materials, and this will reduce the benefits from increases in scale. For example, airplanes have benefited from increases in scale to a lesser extent than have oil tankers, freight ships, buses, or trucks because weight is more important for aircraft than for other transportation equipment and expensive composites must be used in order to increase the scale of aircraft. Similar arguments can be made with humans and other mammals. For example, large mammals like elephants require heavy legs to support their large size, and these legs cannot move as fast as those of a gazelle or a cheetah. This is because muscle strength rises with a muscle's cross-sectional area (dimension squared) while weight rises with volume (dimension cubed). Thus, increases in scale bring new challenges for both organisms and technologies. However, while organisms require these challenges to be solved by accidental mutations that may take thousands if not millions of generations[v], humans can purposely redesign their technologies. For example, as the strength-to-weight ratios of steel and other materials were improved over the last few centuries, larger furnaces, smelters, reaction vessels, and pipes have been implemented without requiring much increase in the thickness of their steel walls.
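The scaling arguments of the last few paragraphs reduce to one rule: quantities tied to volume (output, heat production, weight) scale with the cube of linear dimension, while quantities tied to surface area (heat loss, shell material, strength) scale with the square. A minimal sketch of both consequences, using an arbitrary ten-fold scale-up as the example:

```python
def scale_ratios(scale_factor: float) -> dict:
    """How area-driven and volume-driven quantities change when every
    linear dimension of an object is multiplied by `scale_factor`."""
    area = scale_factor ** 2      # surface area, cross-sections, strength
    volume = scale_factor ** 3    # output, heat production, weight
    return {
        "surface_area": area,
        "volume": volume,
        "heat_loss_per_output": area / volume,   # falls as things grow
        "strength_to_weight": area / volume,     # also falls as things grow
    }

# A furnace scaled up 10x in every linear dimension:
for quantity, ratio in scale_ratios(10).items():
    print(f"{quantity}: x{ratio:g}")
```

The same ratio that makes the big furnace efficient (a tenth the heat loss per unit of output) is the one that weakens its walls relative to its weight, which is why scale-ups depend on better materials.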
On the other hand, some technologies benefit from reductions in scale. Reducing the scale of transistors, memory storage regions, and other features, and creating the processes needed to achieve these reductions in scale, has led to many orders of magnitude improvements in the cost and performance of microprocessor and memory ICs and magnetic storage[vi]. This is because for these technologies, reductions in scale lead to improvements in both performance and cost. For example, placing more transistors or magnetic storage regions in a certain area increases the speed and functionality and reduces both the power consumption and size of the final product, which are improvements in performance for most electronic products (they also lead to lower material, equipment, and transportation costs). The combination of both increased performance and reduced costs as size is reduced has led to orders of magnitude improvements over many years in the performance-to-cost ratio of many electronic components. For example, three orders of magnitude reductions in transistor length have led to about nine orders of magnitude improvements in the cost of transistors on a per transistor basis.

Here again, analogies can be made with organisms. As organisms become smaller, their strength-to-weight ratios rise, enabling small ants to carry more than their own weight, and their weight-to-surface-area ratios fall, enabling water bugs to literally walk on water. For ants, this is because muscle strength falls with a muscle's cross-sectional area (dimension squared) while weight falls with volume (dimension cubed), and thus the strength-to-weight ratio rises as dimensions fall. For water bugs, this is because the surface area of their feet falls with dimension squared while their weight falls with dimension cubed, and thus the weight-to-surface-area ratio falls as dimensions fall, leaving more surface area to support each unit of weight. Furthermore, as size falls below a millimeter, molecular forces become more important than gravity, and thus small organisms must exploit these molecular forces to survive. Eventually we reach the size of cells, where high surface-area-to-volume ratios are important, and the size of DNA, which is the information storage device for organisms.
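A rough way to see the arithmetic behind these numbers: if each device occupies an area proportional to the square of its feature length, a thousand-fold reduction in length alone yields a million-fold increase in device density; the remaining factors behind the nine-orders-of-magnitude cost figure (presumably wafer size, yield, and process throughput) come on top of this. The sketch below uses an arbitrary area constant, not an actual process parameter.

```python
def transistors_per_mm2(feature_length_um: float,
                        area_per_device_factor: float = 50.0) -> float:
    """Very rough device density: each transistor occupies an area
    proportional to the square of its feature length. The factor of 50
    is an illustrative constant, not a real process parameter."""
    area_um2 = area_per_device_factor * feature_length_um ** 2
    return 1e6 / area_um2  # 1 mm^2 = 1e6 um^2

for length_um in (10.0, 1.0, 0.1, 0.01):  # ~1970s to ~2010s feature sizes
    print(f"{length_um:>5} um -> {transistors_per_mm2(length_um):.3g} per mm^2")
# Density rises 1e6x as length falls 1e3x: the squared term at work.
```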
One key point is that there are no near-term limits to reducing the size of features that are used to store and process information, which was first noted by Richard Feynman in his famous 1959 speech "There's Plenty of Room at the Bottom: An Invitation to Enter a New Field of Physics." Thus, we can use atoms, electrons, or even the spins of electrons to store information. A second key point is that other phenomena also benefit from reductions in scale. Finding these phenomena is a key challenge for entrepreneurs in, for example, bio-electronic ICs, micro-electronic mechanical systems (MEMS), and nanotechnology. As mentioned in the next section and addressed throughout the book, some phenomena benefit from reductions in scale, and thus as humans continue to become better at reducing the scale of things, these phenomena and the devices and systems that incorporate them will likely experience exponential improvements in cost and performance. Third, as users of technologies, we will notice these exponential improvements primarily in systems that incorporate these "components" and thus we may not even know the reasons for the improvements in systems. For example, rapid improvements in mobile phones come from improvements in ICs, which benefit from reductions in scale. Fourth, the search for these systems, along with better materials, levels of scale, and new organizational forms, is an evolutionary process in which new materials, levels of scale, and organizational forms are being continuously tried and selected and where incumbent firms often fail. Chapters 3 and 4 and Part II describe how these improvements have created and continue to create new systems and opportunities for new entrants. We can use these improvements to create not only new products and services, but new forms of homes, workplaces, cities, and other higher order systems.

1.3 Are these exponential improvements S-curves?

The predominant viewpoint is that improvements in performance or in performance per cost follow an S-curve, first described in Richard Foster's 1985 book The Attacker's Advantage. Following a rather flat rate of improvement, the rate of improvement accelerates, leading to a rather steep rise; later it slows. For the early part of the purported S-curve, Foster argued that improvements accelerate as vertically integrated firms and specific government agencies move research funds from an old to a new technology in response to increases in demand or a slowdown in the rate of improvement in the old technology. Some call this punctuated equilibrium, in honor of the sudden jump in the number of biological species during the so-called Cambrian explosion. For the later part of the purported S-curve, Foster argued that the rates of improvement slow as diminishing returns and natural limits emerge; this causes research funds to move to a still newer technology, and thus the newer technology's rate of improvement begins to accelerate[vii].

This predominant viewpoint is so ingrained in our thinking that many books describe it even as they show figures of Moore's Law and other technologies in which straight lines are fairly evident on log performance vs. time plots. One of the best examples can be found in Kevin Kelly's What Technology Wants. After showing data on the number of transistors per chip, areal recording density, and other technologies, each with straight lines on log performance vs. time plots, he then shows a figure with the classic S-curve. The theory of S-curves is a good example of a field trying to fit the data to an old theory even when the theory does not fit the facts.

We believe that there is a better explanation for these rates of improvement and the roughly straight lines on log performance vs. time plots that they produce and that are shown throughout this book. Research on new technologies is done in a very decentralized world in which millions of researchers (one estimate is six million[viii]) compete for publications, prestige, and fame; curiosity is a major driver of their efforts; and they quickly incorporate new information into their search efforts. Rather than wait for the improvements in an old technology to slow, they look for and combine new scientific phenomena, new explanations and applications for them, and materials that better exploit these phenomena. They attempt to reduce the scale of new technologies that may replace ICs and to combine existing and new components and materials into new systems.

The decentralized world of funding supports this decentralized world of research. Unlike the vertically integrated firm that perhaps underpinned the S-curves in Richard Foster's The Attacker's Advantage, we live in a vertically disintegrated world where funding decisions are made by tens if not hundreds of thousands of people. Most researchers, including ones in universities, government labs, and even in corporations, are expected to investigate new technologies, to create their own research plans, and to publish something new and different. This enables and requires them to quickly move their efforts to newly found scientific phenomena, materials, components, and systems long before the improvements in an old technology have slowed. Thus, there is no flat line on a log plot that precedes acceleration; instead there are many performance and/or cost curves that are competing with the curves of the dominant technology.

Henry Chesbrough describes this world using the term Open Innovation. Unlike Richard Foster's world of large vertically integrated firms that develop the technologies they use in their products, firms both buy and sell technology, and many new technologies simultaneously compete for our attention. Small firms may focus on selling technology, and the only way to succeed in such a business is to focus on the early years of a new technology, moving quickly to newly found scientific phenomena, materials, components, and systems long before the improvements in an old technology have slowed.

One caveat to this argument is that if rates of improvement are extremely rapid, such that orders of magnitude improvements are experienced, the improvements must be plotted on a logarithmic scale. If not, only the most recent data will appear as improvements and the older data points will look essentially flat. For example, if one were to plot Moore's Law on a linear scale, none of the improvements before 2005 would look important, in spite of the fact that prior improvements made the personal computer, mobile phone, and the Internet economically feasible.

Focusing on the later part of the S-curve, we believe that the notion of limits is also over-emphasized. Diminishing returns do emerge for many technologies, particularly if one plots improvements vs. research effort. Since research funding has increased over time for most of the technologies discussed in this book, even straight lines for improvements over time suggest diminishing returns with respect to effort, which is somewhat consistent with Foster's arguments. Nevertheless, these straight lines are on a log plot, so the rates of improvement are very rapid. Furthermore, many of the technologies discussed in this book do not show actual limits on a log plot and, as far as the actual data can tell, we are probably still far from the physical limits; at this point, S-curve theory is more myth than fact. In any case, the existence of straight lines without sudden jumps helps us understand when new technologies become economically feasible. It allows us to ignore the purported source of jumps in performance during the early years of a technology and focus on the rather steady improvements in performance and cost that occur long before a technology is commercialized on a broad scale.
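The caveat about linear plots is easy to demonstrate. The sketch below plots the same synthetic exponential (roughly 35% improvement per year, a stand-in for Moore's-Law-like data rather than actual transistor counts) on linear and logarithmic axes; on the linear axis everything before the last few years looks flat, while the log axis shows one steady straight line.

```python
import numpy as np
import matplotlib.pyplot as plt

years = np.arange(1970, 2011)
performance = 1.35 ** (years - years[0])  # synthetic ~35%/yr improvement

fig, (ax_lin, ax_log) = plt.subplots(1, 2, figsize=(9, 3.5))
ax_lin.plot(years, performance)
ax_lin.set_title("Linear scale: early decades look flat")
ax_log.semilogy(years, performance)
ax_log.set_title("Log scale: one straight line throughout")
for ax in (ax_lin, ax_log):
    ax.set_xlabel("Year")
    ax.set_ylabel("Relative performance")
fig.tight_layout()
plt.show()
```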
1.4 Thinking about the future of new technologies

There are many ways to think about future technologies, and each of them has its own limitations. The most common way to think about the future is to talk with the experts and find out what is or will become scientifically and technically feasible in the near future. Then, through one's own knowledge of customer needs or through some investigation of specific applications, one can consider how these scientifically and technically feasible technologies might solve specific problems or provide basic human needs. Herman Kahn, Michio Kaku, and Mark Stevenson[ix] are among the many scientists and engineers who have used this approach to describe possible futures. The basic problem with this approach is that not all scientifically and technically feasible technologies become economically feasible. While technologies must be scientifically and technically feasible before they can become economically feasible, many never make that last step, and identifying those that will is highly problematic.

One reason it is highly problematic is that many assume that once we begin making things, they get cheaper through the so-called learning or experience curve. But as we discussed above, some technologies experience much faster rates of improvement than do others, and these technologies have a better chance of becoming economically feasible. Part II will discuss many technologies that have experienced rates of improvement of greater than 15% a year with little or no commercial production.

A second reason it is difficult to identify the technologies that will become economically feasible revolves around cognitive biases. According to research by Nobel Laureate Daniel Kahneman[x], people tend to assess the relative importance of issues, including technologies, by the ease with which they are retrieved from memory, and this is largely determined by the extent of coverage in the media. For example, currently the media talks about wind, battery-powered vehicles, bio-fuels, and solar cells, and thus many people think these technologies are experiencing rapid rates of improvement and will soon become economically feasible. Furthermore, judgments and decisions are guided directly by feelings of liking and disliking, with little deliberation and reasoning. Kahneman recounts a conversation he had with a high-level financial executive who had invested in Ford because he "liked" their products, without considering whether Ford stock was undervalued. Similarly, some people "like" or "dislike" technologies without considering whether the technologies are experiencing rapid rates of improvement.

For example, consider the problems with forecasting the future of mobile phones and the error that McKinsey made in its infamous 1980 forecast. So-called cellular phones that reuse the frequency spectrum in multiple "cells" became scientifically feasible in the 1940s and technically feasible in the late 1970s, when digital switching equipment enabled users to be automatically switched between different base stations as they moved between different cells. How should we have thought about their economic feasibility in 1980 and thus their expected diffusion by 2000? McKinsey's forecast in the early 1980s expected one million global users by the year 2000, presumably arrived at by asking people whether they wanted a mobile phone. Since most people could not have "retrieved from memory" the type of future that might emerge from a "mobile lifestyle," they would have sensibly been pessimistic about mobile phones. Thus, one lesson from this inaccurate forecast is that it is difficult to understand user needs, particularly longer-range ones.

However, we think there is a second important lesson from this inaccurate forecast, and this lesson is actionable as it has implications for the kinds of questions we should ask. McKinsey should have focused on the fact that the costs of mobile phones and their services would dramatically fall due to Moore's Law. Furthermore, these costs would dramatically fall even if mobile phones did not begin to diffuse, because Moore's Law was being driven by a wide range of other electronic products. This would have caused McKinsey to reach completely different conclusions and perhaps ask potential users a different set of questions. For example, they could have asked whether people were interested in a free phone whose subscription provides 100 minutes of talk time for less than $30 a month, a situation that has existed with mobile phones for many years.

Jump ahead to the year 2000, when many industry insiders believed that travel, location-based, and other business-related services for mobile phones were a huge market that was ready to take off. They believed this because these services (and GPS for automobiles) were experiencing large rates of growth on the Internet and thus were often discussed in the media; this made them easy to retrieve from memory. Although these services are now diffusing rapidly, it took many years before this happened, and most of the hopeful suppliers in 2000 have long since gone bankrupt. Here the lesson is that cognitive biases exist, and just as one can underestimate the long-term effects of exponential improvements like those found in Moore's Law, one can overestimate their short-term effects. In the year 2000, firms should have been analyzing the levels of performance and cost needed in displays, microprocessor and memory ICs, and networks before various types of mobile Internet content and applications would become technically and economically feasible. This would have caused them to be less optimistic about location-based services and instead to first emphasize simpler applications such as ringtones and wallpaper, which ended up diffusing long before more sophisticated location-based services began to diffuse.

There are several key points here. First, when a technology is experiencing rapid improvements in cost and performance and it has a large impact on a higher-level system, the rapid improvements in the technology can lead to large improvements in the cost and performance of the higher-level system. Mobile phones have experienced dramatic improvements in cost and performance through exponential improvements in ICs (and also displays), and these and other components (e.g., batteries) still make up about 95% of a phone's cost. Thus, we can say much more about the future of a system by understanding its components and the rates of improvement that these components are experiencing than by using the learning curve, which focuses on the 5% of costs that are phone assembly costs. Furthermore, analyzing the rates of improvement in a system's components enables one to analyze when a new technology might become economically feasible even before production of the system begins, something that the learning curve cannot do. Second, if the system or the production of the system benefits from increases in scale, as some do (but mobile phone systems do not), we can use data for both the system and its components to analyze future costs and performance. This is relevant for technologies such as new display technologies, solar cells, and wind turbines. Third, a requirement of this approach is that we must understand the system and its components. This is a challenge for even experienced engineers, but we should not be surprised that better approaches involve deeper understanding. Fourth, and most importantly, we must identify the technologies that are experiencing or may experience exponential improvements in cost or performance (in particular rapid ones) and their impact on higher-level systems. This is the purpose of this book.
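A minimal sketch of the component-based reasoning described above. The 95/5 split comes from the text; the annual decline rates are illustrative stand-ins, not measured figures. Because the component share falls at an IC-like rate, it dominates the system's cost trajectory until only the slowly improving assembly share remains:

```python
def system_cost(year: int, base_year: int = 2000,
                component_share: float = 0.95,
                component_decline: float = 0.30,  # IC-like rate, illustrative
                assembly_decline: float = 0.02) -> float:
    """Relative system cost (1.0 in `base_year`) when the component and
    assembly portions fall at different annual rates."""
    t = year - base_year
    components = component_share * (1 - component_decline) ** t
    assembly = (1 - component_share) * (1 - assembly_decline) ** t
    return components + assembly

for year in (2000, 2005, 2010, 2015):
    print(f"{year}: {system_cost(year):.3f} of base-year cost")
```

Note how the projection eventually flattens toward the slowly declining assembly share: the learning curve's focus on that last 5% says little about the decades in which component improvements set the trajectory.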
  20. 20. 20 1.5 Technologies Undergoing Rapid Improvements Following a more detailed analysis of the drivers of exponential improvements in Part I, we use our new-found knowledge about these drivers to analyze a number of technologies that are currently experiencing or expected to experience exponential improvements. Part of this analysis shows that these technologies have experienced rapid improvements without production, thus providing further evidence that something other than cumulative production is driving these improvements. Chapter 5 addresses ICs and the new electronics systems that improvements in ICs have made and continue to make economically feasible. ICs have experienced dramatic improvements in their performance and cost as feature sizes were reduced and these improvements have enabled the emergence of and improvements in a wide variety of electronic systems. Furthermore, these improvements are likely to continue for at least another ten and probably 20-30 years for a variety of reasons and in combination with improvements in other technologies, new forms of electronic systems are likely to become economically feasible. Chapter 6 addresses micro-electronic mechanical systems (MEMS): MEMS are small machines that are fabricated using some of the same equipment and processes that are used to fabricate ICs. One difference is that unlike ICs, whose inputs are electrical signals, the inputs for MEMS also include pressure, temperature, gravity, magnetic fields, and biological materials. While some types of MEMS such as small gears and motors do not benefit from reductions in scale and thus are only appropriate when small size is demanded, some of them do. For example, as their feature sizes are made smaller, mechanical resonators resonate at higher frequencies, gas chromatographs and bio-electronic ICs become faster and more sensitive, the resolution of “memjet” ink printers increase, and digital micro-mirrors and optical switches become faster. One challenge is to develop a common set of materials, processes and equipment for MEMS so that different ones are not needed for each application.
  21. 21. Chapter 7 takes the arguments about reductions in scale one step further and looks at nanotechnology. While many analyses of nanotechnology seem to treat it as a non-analyzable magical kind of technology, this chapter focuses on the phenomena that benefit from small scale, the technologies that exploit these phenomena, and the steady improvements in the performance and cost of grapheme, carbon nanotubes, other single atom thick materials, quantum dots, nanoparticles, and nanofibers. These improvements are occurring largely because scientists and engineers continue to create materials, including new classes of materials, that benefit from small-scale phenomena. The large number of new classes of materials that continue to be created suggests that nano-technology will have a large impact on our world in 21 the next 50 years. Chapter 8 looks at new forms of lighting such as light-emitting diodes (LEDs) and organic LEDs (OLEDs) and also at laser diodes, which are physically somewhat similar to LEDs. Creating new materials and processes for them is the major driver of improvements in LEDs and OLEDs and these new materials have enabled rapid increases in the luminosity per Watt of LEDs and OLEDs along with improvements in size and flexibility. In combination with improvements in ICs and other components, these improvements can enable a dramatic change in the way that spaces are lighted. Improvements in lasers are also occurring partly because of reductions in scale that are made possible by new processes and to some extent new materials. In combination with improvements in MEMS and ICs, improvements in laser diodes are also making new systems economically feasible; a good example is autonomous vehicles. Chapter 9 looks at new forms of displays such as 3D LCDs, OLED-based displays, and holographic ones and the reductions in cost that have and continue to occur in them. Improvements in them are driven by the creation of new materials, improvements in components such as lasers and ICs for holographic displays, and increases in the scale of the substrate and equipment. These displays are fabricated on large substrate and then cut into smaller displays for individual televisions and computer screens. These increases in scale have
  22. 22. enabled dramatic reductions in the cost of LCDs and these increases in scale are now driving reductions in the cost of OLED-based displays and solar cells and will also do this for new 22 processes such as roll-to roll printing. Chapter 10 analyzes several technologies within health care. These include bio-electronics with a focus on bio-electronic ICs, flexible electronics, and DNA sequencers. Bio-electronic ICs are MEMS that include micro-fluidic channels. Since these ICs benefit from reductions in scale and these reductions in scale lag those of ICs by about 30 years, as the feature sizes continue to be reduced many new types of products will emerge including point-of care diagnostic equipment and artificial implants such as bionic eyes. Improvements in flexible electronics, which are primarily driven by the creation of new materials, are also occurring and making artificial implants more economically feasible. The third type of health care technology that is experiencing exponential improvements is DNA sequencers. They also benefit from reductions in scale and these reductions in scale are a major reason why the cost of sequencing and synthesizing DNA has experienced exponential improvements in cost and performance. Unlike bio-electronic ICs and MEMS, however, these reductions in scale have also involved many changes in technology where processes similar to those used to manufacture ICs are one of the competing technologies. Many believe that the exponential improvements in DNA sequencers will continue and they will lead to dramatic changes in the way drugs are discovered, new materials are created, and health care are done. Chapters 11 and 12 consider two types of electronic systems that benefit from improvements in “components. Chapter 11 analyzes the impact of better semiconductor lasers, photodiodes, ICs, and optical-based MEMS on both wireline and wireless telecommunication systems. Exponential improvements in these components enable exponential improvements in data rates, speeds, and in the efficient use of the frequency spectrum. Chapter 12 focuses on the human-computer interfaces and how improvements in ICs, CCD (charge coupled devices), and magnetic ones enable exponential improvements in speech recognition, gesture-based
interfaces, and neural ones that go beyond current keyboard and touch-based ones.

Chapter 13 looks at superconductors, which have zero resistance and thus infinite conductance at very low temperatures. The creation of new materials, including new classes of superconducting materials, has enabled steady increases in the critical temperatures, currents, and magnetic fields of superconducting materials, and critical temperatures are now approaching the ambient temperatures found in Antarctica. These superconductors are already used in magnetic resonance imaging (MRI) systems, and improvements are making superconductors economically feasible in a broader set of applications such as computers (i.e., quantum computers) and in generators, transformers, and transmission cables for energy.

1.6 Who is this book for?

This book is for people interested in the future and in how to use knowledge about technological trends to understand, design for, and succeed in the future. This includes R&D managers, hi-tech marketing and business development managers, policy makers and analysts, professors, entrepreneurs, and employees of think tanks, governments, hi-tech firms, and universities. Rates of improvement in specific technologies and an understanding of their drivers can help us understand when these new technologies and systems composed of them might become economically feasible. Firms can use such information to better understand when they should fund R&D or introduce new products that involve a new technology. Policy makers and analysts can use such information to think about whether technologies have a large potential for improvement and how governments can promote further or more rapid improvements in them.

This book is of particular importance to those people who are trying to design new “systems.” Technologies that experience rapid rates of improvement enable new combinations of components to become economically feasible, and these new combinations enable higher-order systems including new products and services, new forms of health care systems, and even new
forms of cities to become economically feasible. One cannot understand the future of cities without understanding the technologies that are experiencing rapid rates of improvement. Part II helps us understand the future of cities, and the concluding chapter will describe a future that is much different from the predominant viewpoint.

This book can also help us make better R&D policy. We believe that governments should strongly fund basic and applied research in those technologies that have the potential for large improvements in performance and cost, and this book helps governments identify those technologies. This viewpoint builds from the economic perspective that firms underinvest in basic and applied research because of large uncertainties and because they cannot appropriate all the benefits from basic and applied research. By funding a broad range of technologies that have the potential for large improvements in cost and performance, governments can facilitate these improvements, and then firms can commercialize these technologies as their cost and performance come closer to economic feasibility.

This viewpoint is different from one predominant viewpoint in which governments subsidize demand or fund R&D for specific technologies in order to solve specific problems. This has been the dominant approach for clean energy, in which demand-based subsidies for solar cells, wind turbines, and electric vehicles are common. Not only does funding for R&D have a larger impact on improvements in cost and performance than do subsidies for production, but the rates of improvement for wind turbines (2% per year) and batteries (5% per year) are also very slow. This book suggests other approaches that can have a larger impact on the use of fossil fuels than can wind turbines and batteries, and these approaches will likely become economically feasible before wind turbines and batteries will. For example, it will take more than 75 years for the energy storage densities of batteries to reach that of gasoline, while autonomous vehicles will diffuse long before this and can increase vehicle speeds and thus fuel efficiency. This is just one example of how technologies with rapid rates of improvement suggest
other approaches to reducing the use of fossil fuels in vehicles and to solving other problems, including ones of global importance. Technologies that are experiencing rapid rates of improvement form a type of tool chest from which we can pull out technologies and combine them into solutions to global problems. Not only does the current performance and cost of these technologies provide us with useful tools here and now, their rapid rates of improvement mean that better tools continue to emerge, and we should be thinking about how these better tools can help us solve global problems.

Finally, this book is also for young people. Young people have more at stake in the future than anyone else, and this book is written to help them think about their future and the future of various systems. It helps students think about solving global problems and where opportunities may emerge, and thus the technologies they should study and begin their careers in. In particular, it helps students understand the technologies that are undergoing rapid improvements and what this means for higher-level systems. We live in a system-based world in which most people design systems without designing the components for those systems, because someone else, often in a different organization, designs those components. Thus students need to understand which components are undergoing rapid improvements, and why, before they can conceive of the possible ways to design the higher-level systems. This book can help students do this.
Part I

What drives exponential change?

Some technologies experience faster rates of improvement than do other technologies, or what this book calls exponential improvements. Understanding the reasons for these faster rates can help us find those technologies that are likely to experience rapid rates of improvement in the future and when these new technologies might become economically feasible. To understand these reasons, we have investigated a wide variety of technologies, their rates of improvement, and the engineering literature's assessment of these improvements. Table I.1 summarizes the rates of improvement for a number of technologies that we investigated. Although many can be classified in a variety of ways, these technologies are primarily organized into the transforming, storing, and transporting of energy, information, and living organisms, which is consistent with some characterizations of engineering systemsxi. Since a variety of performance measures are often relevant for a specific technology, data was collected on multiple measures, some of which represent the performance of basic functions per unit cost while others represent performance per unit mass or volume.

Identifying the drivers, or mechanisms, for these improvements is highly problematic. Like any phenomenon, there are multiple reasons that can be organized in a variety of ways, some of which are hierarchical. Our goal was to organize these drivers in a hierarchical way such that high-level drivers are broader and more general than are lower-level ones. For example, high-level drivers include demand, novel combinations of components, and government policies that promote innovation and competition, while low-level mechanisms include detailed problem solving on a daily basis by engineers and scientists. Our goal is to identify a set of mechanisms that lie between these low- and high-level mechanisms and that help us design better government policies and management strategies, including a better understanding of when new technologies become economically feasible.
The existence of straight lines without sudden jumps, as in the S-curve, aids us in our search for the drivers of improvements and in our understanding of when new technologies become economically feasible. It allows us to ignore the purported source of jumps in performance during the early years of a technology and focus on the rather steady accumulation of capabilities and knowledge that appears to form the basis of the straight lines. This is particularly true for the orders-of-magnitude improvements, where the later improvements are of much higher magnitude on an absolute scale than are the earlier ones, probably because the later ones benefit from the very large base of knowledge that has accumulated over time. This steady accumulation of knowledge and capabilities guided our search for the drivers, and it is evident in the two mechanisms that we identified.

Our research indicates that improvements in performance and cost are largely driven by two mechanisms: 1) creating materials to better exploit their physical phenomena; and 2) geometric scaling. Some technologies directly experience improvements through these two mechanisms while those consisting of higher-level “systems” indirectly experience them through improvements in specific “components.” Chapter 2 focuses on the first mechanism while Chapters 3 and 4 focus on the second mechanism, one for smaller and one for larger scale. All three chapters deal with the relationships between components and systems. Our research also shows that exponential, i.e., very rapid, improvements are primarily driven by a subset of these two mechanisms. Creating new materials (and processes for them) can lead to rapid improvements in performance and cost when new classes of materials are continuously being created and when the materials are used for microstructures such as in diodes, lasers, transistors, and other P-N junctions (e.g., solar cells), although some exceptions will be shown in Part I (e.g., glass fiber). For scale, technologies that benefit from reductions in scale have experienced much more rapid improvements than have technologies that benefit from increases in scale.

These rapid rates of improvement also lead to rapid rates of diffusion (which do follow an
S-curve). We argue that these rapid rates of diffusion are a direct result of the rapid rates of improvement that are more common in the second half of the 20th century than in the first half. Rapid rates of improvement lead to rapid rates of diffusion because they cause the technologies to rapidly become economically feasible for a larger number of customers and applications. For example, electronic products such as computers and mobile phones have experienced much faster rates of diffusion than have electric or hybrid vehicles because the former have experienced much more rapid rates of improvement than have the latter.
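As a concrete illustration of the straight-line idea, the short Python sketch below (our illustration; the 38% rate and the noise level are assumed for the example, not taken from this book's data) generates a performance series that improves at a constant percentage per year, adds measurement noise, and shows that a straight-line fit to the logarithm of performance recovers the annual rate.

```python
# A sketch of the "straight line on a log scale" idea: a technology
# improving at a constant percentage per year traces a straight line
# when log(performance) is plotted against time, and the slope of
# that line recovers the annual rate. All numbers are illustrative.
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1970, 2011)
true_rate = 0.38                                    # an IC-like 38%/year
perf = (1 + true_rate) ** (years - years[0])        # exponential trend
perf = perf * rng.lognormal(0.0, 0.1, years.size)   # measurement noise

slope, _ = np.polyfit(years, np.log(perf), 1)       # straight-line fit
print(round(np.exp(slope) - 1, 3))                  # recovered rate, ~0.38
```

The same fit applied to a technology whose rate is changing would show curvature in the log plot, which is one way the relatively straight lines in the data can be checked.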
Table I.1. Annual Rates of Improvement for Specific Technologies

Technology | Dimensions of measure | Time Period | %/Year

Energy Transformation Technologies
Lighting | Luminosity/Watt | 1840-1985 | 4.5
LEDs | Luminosity/Watt | 1965-2008 | 31
Organic LEDs | Luminosity/Watt | 1987-2005 | 29
GaAs Lasers | Power/length-bar | 1987-2007 | 30
Photosensors | Light sensitivity | 1986-2008 | 18
Solar Cells | Power/cost | 1957-2003 | 16
Aircraft Engines | Gas pressure ratio | 1943-1972 | 7
Aircraft Engines | Thrust/weight-fuel | 1943-1972 | 11
Aircraft Engines | Power of aircraft engine | 1927-1957 | 5
Piston Engines | Energy/mass | 1896-1946 | 13
Electric Motors | Energy/mass | 1880-1993 | 3.5
Electric Motors | Energy/volume | 1890-1997 | 2.1

Energy Storage Technologies
Batteries | Energy/volume | 1882-2005 | 4
Batteries | Energy/mass | 1882-2005 | 4
Batteries | Energy/unit cost | 1950-2002 | 3.6
Capacitors | Energy/cost | 1945-2004 | 4
Capacitors | Energy/mass | 1962-2004 | 17
Flywheels | Energy/cost | 1983-2004 | 18
Flywheels | Energy/mass | 1975-2003 | 10

Energy Transport Technologies
Electricity Transmission | Energy transported times distance | 1890-2003 | 10
Electricity Transmission | Energy transported times distance/cost | 1890-1990 | 2

Information Transformation Technologies
ICs | Transistors/chip | 1971-2011 | 38
MEMS Printing | Drops/second for ink jet printer | 1985-2009 | 61
Computers | Instructions/time | 1945-2008 | 40
Computers | Instructions/(time and dollar) | 1945-2008 | 38
Liquid Crystal Displays | Square meters/cost | 2001-2011 | 11
MRI | 1/(Resolution x scan time) | 1949-2006 | 32
CT Scanner | 1/(Resolution x unit time) | 1971-2006 | 29
Organic Transistors | Mobility | 1994-2007 | 99

Information Storage Technologies
Magnetic Tape | Bits/cost | 1955-2004 | 40
Magnetic Tape | Bits/volume | 1955-2004 | 10
Magnetic Disk | Bits/cost | 1957-2004 | 39
Magnetic Disk | Bits/volume | 1957-2004 | 33
Optical Disk | Bits/cost | 1996-2004 | 40
Optical Disk | Bits/volume | 1996-2004 | 28

Information Transport Technologies
Wireline Transport | Bits/time | 1858-1927 | 35
Wireline Transport | Bits x distance/cost | 1858-2005 | 35
Wireless Transport | Coverage density (bits/area) | 1901-2007 | 37
Wireless Transport | Spectral efficiency (bits/bandwidth) | 1901-2007 | 17
Wireless Transport | Bits/time | 1895-2008 | 19

Living Organism Related Technologies
Biological Transformation | Genome sequencing/cost | 1965-2005 | 35
Biological Transformation | Concentration of penicillin | 1945-1980 | 17
Biological Transformation | U.S. agricultural productivity (per input) | 1948-2009 | 1.3
Biological Transformation | U.S. corn production/area | 1945-2005 | 0.9
Transport of Humans/Freight | Ratio of GDP to transport sector | 1880-2005 | 0.45
Transport of Humans/Freight | Aircraft passengers times speed | 1926-1975 | 13

Materials Related Technologies
Load Bearing | Strength-to-weight ratio | 1880-1980 | 1.6
Magnetic | Magnetic strength | 1930-1980 | 6.1
Magnetic | Magnetic coercivity | — | 8.1

Other Technologies
Machine Tools | Accuracy | 1775-1970 | 7.0
Machine Tools | Machining speed | 1900-1975 | 6.3
Laboratory Cooling | Lowest temperature achieved | 1880-1950 | 28

MEMS: micro-electro-mechanical systems; LEDs: light-emitting diodes; ICs: integrated circuits; MRI: magnetic resonance imaging. Source: xii
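The rates in Table I.1 are compound annual rates, which can be estimated from just two endpoints of a performance series. A minimal sketch follows; the transistor counts are illustrative round figures (roughly the first microprocessor in 1971 and a billion-transistor chip in 2011), not the underlying data behind the table.

```python
# Compound annual improvement rate between two data points, the
# arithmetic behind entries like "ICs, Transistors/chip, 1971-2011,
# 38%/year" in Table I.1. The endpoint values are illustrative.
def annual_rate(start_value, end_value, start_year, end_year):
    years = end_year - start_year
    return (end_value / start_value) ** (1.0 / years) - 1.0

# ~2,300 transistors in 1971 to ~1 billion in 2011:
print(round(annual_rate(2_300, 1_000_000_000, 1971, 2011), 3))  # ~0.38
```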
Chapter 2

Creating Materials to Better Exploit Physical Phenomena

Most people have noticed that large numbers of new materials have emerged over the last 100 years and that many continue to emerge. For example, plastics have largely replaced metals in most mechanical products, and so-called engineered materials have become the norm as every type of material has been “engineered” in order to have certain characteristics. To do this, materials are either added or removed, or processes are tweaked, in order to improve some measure of performance, where advances in science facilitate this addition and removal of materials and tweaking of processes. These advances in science form a base of knowledge for the phenomena and thus facilitate the creation of new materials that better exploit the phenomena. The word “create” is used because scientists and engineers often create materials that do not naturally exist (as opposed to finding them) and in doing so must also create the processes for the materials.

But what might enable rapid rates of improvement that involve the creation of new materials? As shown in this and other chapters, some technologies such as organic transistors, magnetic coercivity, cutting machines, LEDs, and OLEDs have experienced very rapid rates of improvement while others such as batteries and agriculture have not. Why? While strong bases of scientific knowledge are important, they are certainly not the whole story and probably not the main reason, since stronger bases of knowledge probably exist for batteries and agriculture than for organic transistors, LEDs, and OLEDs.

We believe that rapid rates of improvement reflect the scientific feasibility of many materials for a particular technology; this scientific feasibility means that new materials can be created if the proper processes and raw materials are known and used, which partly depends on the levels of scientific knowledge. If these materials are scientifically feasible and we can create them, the rates of improvement will probably be very rapid. If either they are not
scientifically feasible, or we do not have the ability to create them, the rates of improvement will be very slow or even non-existent. A rapid rate of improvement during the early years of a technology suggests that there are many materials that are scientifically feasible and that we are adept at creating them.

This is certainly the case with many electronic-related phenomena and technologies. Humans have been able to create new forms of materials that have enabled dramatic improvements in the performance and cost of microstructures such as diodes, transistors, other P-N junctions (e.g., solar cells), quantum wells or dots for lasers, and in general thin films for all of these microstructures. This might be because the performance of these microstructures benefits from small changes in the composition of materials and processes.

A second way to think about the number of materials that might be scientifically feasible is in terms of classes of materials. Some improvements come from creating new classes of materials (and processes for them) while other improvements come from modifying materials (and processes) within a specific class. The more classes that are created to exploit a physical phenomenon, the more modifications that can be done within a class of material and thus the greater the possibility of having and sustaining rapid rates of improvement. Creating some classes of materials is considered so important that it brings someone a Nobel Prize. Nobel Prizes were received for the creation of organic conductors, including ones for solar cells, displays, and transistors, and more recently for bio-luminescence and quasi-crystals. Crystals can be thought of as a physical phenomenon that displays certain characteristics, and these characteristics are appropriate for certain applications. Quasi-crystals exhibit some of the characteristics of crystals in addition to new characteristics that are still somewhat unknown. Thus, we can expect improvements in various measures of performance over the next 50 to 100 years as engineers and scientists create quasi-crystals that combine different types of materials, often using different types of processes. This long-term process of improvement will be facilitated by advances in our understanding of quasi-crystals and other
materials because advances in science will facilitate the search for and creation of new materials that enable improvements in performance.

This chapter begins with materials that are used for mechanical applications, followed by those for electrical engineering, electronic, agricultural, and finally pharmaceutical applications. Although this chapter focuses on materials for which the measures of performance are well-defined, we recognize that all materials have multiple measures of performance, and it is often more about finding materials with a specific combination of measures or a new measure of performance than about merely making improvements along a single well-known measure of performance.

2.1 Mechanical Engineering Applications

A key measure of performance in many mechanical engineering applications is the ratio of strength to weight, or strength to density, where strength is measured in terms of resistance to stretching and bending. High strength and low weight are of obvious importance to large structures such as buildings and bridges and to transportation equipment such as automobiles and aircraft. Without materials with higher strength-to-weight ratios, we would not have skyscrapers, suspension bridges, large aircraft, or the space station, and will not have exotic new structures such as space elevators. A report by the National Academy of Sciences concluded that scientists and engineers were able to increase the strength-to-density ratio of materials by more than 10 times in the 19th and 20th centuries. New forms of engineered materials such as composites have much higher strength-to-density ratios than do iron and steel, and the search for these new materials still continues in, for example, carbon fiber. Engineers and scientists continue to create new types of additives, weaves, and the processes for making them in a search for carbon fibers that have higher strength-to-weight ratios or higher performance along other measures than do current forms of carbon fiber.
These improvements in strength are one reason why engineers have been able to increase cutting speeds and thus reduce the machining time of turning metal (see Figure 2.1). Carbon steel cutting tools were replaced successively by tungsten carbide, cermet, ceramic, and diamond-based cutting tools, where these new materials enabled increases in cutting speeds and thus increases in machine output, and where increasing the scale of the equipment also played a role in these increases in speed. Unfortunately, as discussed in Chapter 4, increasing the speed of loading and unloading is more difficult than increasing the cutting speeds.

Figure 2.1. Improvements in Machining Times of Turned Parts
Source: American Machinist 1977. Metalworking: Yesterday and Tomorrow, November.

The most recent improvements in the strength of materials are coming from creating new forms of carbon such as carbon nanotubes and graphene, which have strength-to-density ratios that are about 20 times better than those of carbon fibers. Although they are made from carbon, as are soot, graphite, and diamonds, they display different characteristics (including high conductivity) than do these other forms of carbon because of the way in which the carbon atoms bond to each other. As discussed in Chapter 7, the challenge of creating these new materials
is inextricably linked to creating new processes, and the current challenge is to find ways to fabricate them at much lower cost, which involves some of the concepts that are discussed in the following two chapters.

A second key measure of performance for materials is temperature resistance. Because higher temperatures are needed for furnaces, smelters, and other manufacturing processes and often lead to better-performing engines and turbines, creating new materials that are resistant to high temperatures has been a goal for scientists and engineers over hundreds if not thousands of years. A report by the National Academy of Sciencesxiii concluded that scientists and engineers were able to increase the operating temperature of engines from less than 200 degrees centigrade in 1900 to more than 1200 degrees by 1980.

Other material-related technologies such as polymers, man-made fibers, ceramics, and other engineered materials also benefit from creating materials that better exploit physical phenomena. It is difficult to present such data in figures or tables for these materials, however, because it is difficult to summarize the improvements for a single measure of performance. Many materials-related technologies have multiple measures of performance, and thus progress often comes from finding materials that offer a new measure of performance in addition to improvements in an existing measure of performance. For example, measures of performance for man-made fibers include tensile strength, elastic recovery, modulus, and moisture regain, where different measures of performance and different combinations of them are important for different applicationsxiv.

2.2 Energy Storage

Creating new materials that better exploit physical phenomena is also relevant to energy storage devices such as batteries, flywheels, and capacitors. Since the first batteries were constructed in the early 19th century, engineers and scientists have improved their energy (see Figure 2.2) and power storage densities by creating and combining the right materials, where
many of these new materials required new processes. Batteries with higher energy and power densities store more energy per weight or volume and provide more power per weight or volume, respectively, than do ones with lower energy and power densities. They also often have lower costs per unit of energy since cost is often a function of volume or weight for batteries. Improving these energy and power densities has caused engineers and scientists to search for materials with high reactivity for the cathode and low reactivity for the anode along with higher current-carrying capacity, low weight, and ease of processing.

Figure 2.2 Improvements in Energy Storage Density
Source: H Koh and C Magee, A Functional Approach for Studying Technological Progress: Extension to Energy Technology, Technological Forecasting & Social Change 75 (2008) 735-758

Creating these materials has enabled engineers and scientists to improve the energy storage densities of batteries by about 10 times in the last 100 years. Improvements in the last few decades have come from using completely new materials such as lithium and making small changes to the particular combination of lithium and other materials. This has led to a doubling of energy densities for Li-ion batteries in the last 15 years, and some expect a similar doubling to occur in the next 15 years from using modified forms of lithium such as Li-Air (see Figure 2.3)xv.
Figure 2.3 Recent Improvements in Energy Density of Batteries
Source: Tarascon, J. 2009. Batteries for Transportation Now and In the Future, presented at Energy 2050, Stockholm, Sweden, October 19-20.

However, not only are lithium-ion batteries more expensive than are lead batteries on an energy storage density basis, their energy densities are about 1/30 the levels found in gasoline, and even if the rates of improvement in Figure 2.3 or those suggested by recent developmentsxvi continue (both about 5%), it will take more than 75 years before the energy storage density of Li-ion batteries equals that of gasoline. Poor energy storage densities lead to a vicious cycle of heavier cars requiring more batteries and more batteries leading to heavier cars. This should make everyone very pessimistic about battery-based electric vehicles, even if we dramatically increase research funding for them. Billions of dollars have been spent on battery storage technologies over the last 100 years because they have been used in automobiles and electronic products during these 100 years. Unless scientists and engineers find completely new classes of materials or utilize new ones with higher densities (but with other problems) such as sodium ions that reportedly have densities as high as 600 Wh/kgxvii, it is unlikely that energy storage densities equivalent to gasoline will ever be achieved.
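The 75-year figure can be checked with a back-of-the-envelope calculation; the sketch below uses the approximate numbers just cited (a roughly 30-fold density gap and about 5% improvement per year) and is our illustration rather than this book's model.

```python
# Years for batteries improving at ~5%/year to close a ~30x energy
# density gap with gasoline: solve (1 + rate)**t = gap for t.
import math

gap = 30      # gasoline stores roughly 30x more energy per kg (approx.)
rate = 0.05   # ~5%/year improvement, the Figure 2.3 trend

years = math.log(gap) / math.log(1 + rate)
print(round(years))  # ~70; a slightly larger gap or slower rate exceeds 75
```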
Some expect that the faster rate of improvement in energy storage densities for flywheels and capacitors will cause their energy and power storage densities to exceed those of lithium-ion batteries sometime in the near future. As with batteries, engineers and scientists have improved the energy and power storage densities of flywheels and capacitors by creating new materials with the appropriate properties; they improved the energy storage density of flywheels by 15 times in the last 30 years and of capacitors by 1000 times in the last 40 years. If the trends shown in Figure 2.2 continue, flywheels and capacitors will have a higher energy storage density than batteries within 10 and 30 years, respectively. One way that higher energy densities for flywheels have been achieved is by using carbon fiber and other engineered materials with high strength-to-density ratios. As discussed in Chapter 7, one reason these trends might continue is that carbon nanotubes and graphene offer substantially higher energy storage densities for flywheels and capacitors, respectively, than are available with existing materials.

As an aside, people sometimes make fun of the efforts to create nuclear cars or airplanes. One should recognize that these efforts were motivated by the extremely high energy and power storage densities of these technologies, levels that are 10,000 times higher than those found in gasolinexviii. Since costs are often related to size, these high energy and power storage densities might have led to much lower costs for nuclear than gasoline propulsion in automobiles. Of course they didn't, but the motivation was correct. Similarly, one should remember that the concept of an internal combustion engine is based on a controlled explosion; it's just that we can control these explosions while we can't control nuclear reactions to the levels demanded by the public.

2.4 Magnetic Materials

Creating new materials that better exploit physical phenomena is also relevant to magnetic materials, for which rapid improvements have been made in at least two measures of
performance. Coercivity represents the magnetic field required to reduce a material's magnetization to zero (in Oersteds or amperes per meter), while the “energy product” (in Mega Gauss Oersted) represents the density of magnetic energy. Coercivity was improved by about 50 times between the 1930s and 1980s (see Figure 2.4), and energy product was improved by about 50 times between 1920 and 2010 (see Figure 2.5)xix; coercivity was subsequently improved by about 100 times since 1980 (see Chapter 3).

Figure 2.4 Improvements in Coercivity (amps/meter)
Source: NAS/NRC, 1989. Materials Science and Engineering for the 1990s. National Academy Press

Figure 2.5 Improvements in Energy Product for TDK
These improvements were achieved by creating new forms of magnetic materials such as steel alloys, barium hexa-ferrites (or ferrites for short), and most recently rare earth ones (see Figure 2.4). The steel alloys are sometimes called alnicos for their combinations of aluminum, nickel, and cobalt. Rare earth elements are those that lie at the bottom of the periodic table, and their magnets contain various combinations of these elements along with manganese, iron, cobalt, and nickel. As an aside, rare earth metals are not as rare as their name implies. They are as abundant as lead and mercury in the earth's crust, and they are much more abundant than gold and silver. It is China's monopoly on their production, primarily due to environmental restrictions in other countries, that makes them seem rarexx.

These improvements are relevant for electric motors, electrical generators, and magnetic storage. The output of a motor or generator directly depends on the energy product, i.e., magnetic energy density, of the magnetic windings, while fast switching in magnetic disks or storage requires both high coercivity and high energy productxxi. As discussed in Chapter 3, improvements in coercivity have been necessary to achieve improvements in the areal
recording density of platters and tape.

2.5 Electronic Applications

Creating new materials that better exploit physical phenomena (along with discovering new phenomena) has been essential to the dramatic improvements in electronics that we have experienced during the second half of the 20th century. The most important class of these materials has been semiconductors, which fall between conductors and insulators in that they only conduct under certain conditions. Semiconductor materials include silicon or germanium or combinations of so-called III-V materials such as aluminum and phosphorus, gallium and arsenic, and indium and antimony. A key measure of performance for them is mobility; this determines the speed with which electrons and holes (absences of electrons) can pass through them and thus the speed with which transistors switch. Improvements in mobility along with reductions in scale (discussed in Chapter 3) have been steadily achieved, where silicon is the most widely used material for transistors followed by gallium arsenide. Other materials are also important for transistors and for making the connections between them in an integrated circuit (IC). Although for many years ICs consisted of only seven elements, efforts to continue reductions in scale since the 1990s have required engineers and scientists to create new materials for interconnects and insulators; examples include copper, which has largely replaced aluminum for interconnects. These efforts still continue as further reductions in feature size bring new challenges and require more radical solutions.

New types of semiconductor materials have also been created to exploit other physical phenomena such as electroluminescence, optical amplification based on the stimulated emission of photons (i.e., lasers), and the photovoltaic effect, and thus improve the relevant measure of performance by several orders of magnitude. As discussed in more detail in Part II, engineers and scientists have improved the luminosity per Watt of LEDs by finding new combinations of
semiconducting materials that better exploit the phenomenon of electroluminescence; these include new combinations of gallium, arsenic, phosphorus, indium, and selenium. Many of the improvements in semiconductor LEDs also led to improvements in semiconductor lasers, due to the similarities between them. These improvements are usually measured in power output per volume and also depend on defect-free lenses and on better materials for removing heat from the lasing area. As also discussed in Part II, engineers and scientists have improved the light sensitivity of photosensors and the efficiency of solar cells by finding semiconducting materials, and processes for them, that capture more of the incoming light. Each of these new technologies is gradually becoming economically feasible because engineers and scientists are improving the relevant measures of performance at a rapid and steady rate by creating the relevant new materials and the processes for making these materials.

Three final examples of creating materials to better exploit physical phenomena in electronics can be found in organic transistors (discussed in Chapter 5), superconductivity (addressed in Chapter 13), and optical fiber. Optical losses in glass fiber have been reduced (see Figure 2.6) by improving the purity of the glass and its crystalline structure, doping it with various impurities, and creating the processes for doing this. For example, researchers at American glass maker Corning demonstrated a fiber with 17 dB/km attenuation by doping silica glass with titanium. A few years later they produced a fiber with only 4 dB/km attenuation using germanium dioxide as the core dopant. In 1981, General Electric produced fused quartz ingots that could be drawn into fiber optic strands 25 miles (40 km) longxxii. As an aside, Figure 2.6 is a rare example of a technology whose improvements have slowed considerably, with few improvements over the last 20 years. But as discussed in Part II, improvements in bandwidth and speed have continued and no limits are in sight.
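Because decibels are logarithmic, these attenuation figures translate into enormous differences in how much light survives a link. The sketch below makes the conversion; the 0.2 dB/km figure for modern silica fiber is a commonly cited value added here for comparison, not one from the text.

```python
# Fraction of optical power remaining after a link of a given length:
# attenuation in dB/km is logarithmic, so the surviving fraction is
# 10**(-dB_per_km * km / 10). Distances are illustrative.
def fraction_remaining(db_per_km, km):
    return 10 ** (-db_per_km * km / 10.0)

for loss in (17.0, 4.0, 0.2):   # Corning 1970, mid-1970s, modern (assumed)
    print(loss, fraction_remaining(loss, 10))  # over a 10 km link
```

At 17 dB/km essentially nothing survives 10 km, while at 0.2 dB/km about 63% does, which is why the early reductions in attenuation were what first made long-haul fiber telecommunications feasible.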
Figure 2.6 Reductions in Optical Loss (decibels per km) of Optical Fibers
Source: NAS/NRC, 1989. Materials Science and Engineering for the 1990s. National Academy Press

2.6 Agricultural Applications

Examples of creating materials to better exploit physical phenomena also exist in agriculture. Improvements in the yield (e.g., in bushels) per acre have been occurring for many years and for many crops. For example, between 1945 and 2005, yields in the U.S. were increased by more than three times for corn and wheat, two times for soybeans, and 1.5 times for ricexxiii. These improvements were driven by the creation of many new materials, albeit biological ones; these include new types of seeds, fertilizers, pesticides, and herbicides. Better seeds come from breeding, just as better animals do, and new classes of seeds have contributed towards the improvements in these yields. Consider corn. While “open pollinated” seeds were used in the 19th century, double-cross, single-cross, and now biotech (i.e., genetically modified organism)-based seeds were developed in the 20th centuryxxiv. Better fertilizers come from finding better forms and sources of the three major nutrients: nitrogen, phosphorus, and potassium. Nitrogen aids vigorous vegetative growth, phosphorus is needed for root growth and vigor, and potassium helps increase plant metabolism and disease resistance. Better
pesticides and herbicides come from finding specific chemicals that selectively kill some insects and weeds and not others.

One of the first and still most important sources and forms of nitrogen-based fertilizer is ammonia, which consists of one part nitrogen and three parts hydrogen. In spite of the large amounts of nitrogen in the air, it was not until Fritz Haber and Carl Bosch developed the process for transforming the nitrogen in air into ammonia that inexpensive fertilizers became available to farmers. Along with increasing the scale of these processes, which is discussed in Chapter 4, subsequent improvements have come in the form of new sources and forms of nitrogen. Natural gas has become the dominant source of ammonia, and thus the price of ammonia often rises and falls with the price of natural gas, and also oil.

The high price of ammonia-based fertilizers is one reason that plant biologists are trying to develop seeds that provide high crop yields without using fertilizers; reducing or eliminating the need for pesticides, herbicides, or even water are also goals. This is because they are expensive and often bad for the environment, water is becoming scarcer, and insects often become immune to pesticides. Thus, new measures of performance are emerging for seeds, and to understand the degree of success with these seeds, data must be gathered on the rates of improvement for these new measures of performance.

These new demands increase the challenges for plant biologists and for providing enough food for the planet. The latter issue is a highly contentious one where perspectives strongly differ. On the one hand, several crops in the U.S. have recently experienced few improvements or even declines in crop yieldxxv. These include sorghum, rye, sugarcane, and oats. On the other hand, since other countries, particularly developing ones, have much lower crop yields than does the U.S., merely bringing the levels of crop yield in these countries up to those found in the U.S. and other developed countries can increase global production by several times. Moreover, further increases in crop yield in the U.S. and developed countries could provide further opportunities for the rest of the world, and one's view towards these potential increases largely
determines one's level of optimism about global food production meeting population increases. One thing we can say is that improvements in crop yield are not occurring at a rapid pace, or what this book calls exponential rates. The rates of improvement for crop yields are much slower than what is found in the other technologies that are addressed in subsequent chapters.

2.7 Pharmaceutical Applicationsxxvi

The pharmaceutical industry is all about creating materials that exploit physical phenomena. In this case, it is about creating chemical compounds that fight or provide immunity from diseases. Scientists first struggle to find appropriate chemical compounds, often by trying thousands of different naturally occurring compounds until they find one that has a positive impact on a disease. Once they find an appropriate compound, firms then struggle to isolate it and incrementally increase its concentration because purified drugs are often more reliable and predictable. This was done with morphine from opium poppies, cocaine from coca leaves, nicotine from tobacco, quinine from cinchona, salicylic acid from willow bark, and penicillin from ascomycetous fungi. Scientists may also modify the naturally occurring substance in order to produce new substances that are more powerful or have fewer side effects than do the naturally occurring compounds. For example, scientists created acetylsalicylic acid (aspirin) from salicylic acid and diacetylmorphine (heroin) from morphine. Similarly, they created xylocaine, amylocaine, and procaine from cocaine, and these are widely used as anesthetics.

Now scientists are trying to create drugs that fight or provide immunity from diseases through a rational understanding of the human body. While chemical compounds derived from natural sources together with their synthetic variants account for about 70% of the drugs in modern medicine, the discovery of vitamins and the identification of hormones like insulin have led to optimism about creating drugs through a rational understanding of the human body via the fields of physiology and molecular biology. Vitamins were identified through an understanding
of the human body and have been synthesized since the middle of the 20th century. Recombinant DNA technology was used by Genentech to modify Escherichia coli bacteria to produce human insulin in 1978, and this event is often defined as the beginning of the biotechnology revolution. Prior to the development of this technique, insulin was extracted from the pancreas glands of cattle, pigs, and other farm animals. Genentech researchers first produced artificial genes for each of the two protein chains that comprise the insulin molecule. Second, they inserted them into plasmids, small circular pieces of bacterial DNA, alongside genes that are activated by lactose. Third, they inserted the recombinant plasmids into Escherichia coli bacteria, which were induced to produce human insulin. Many hope that the successful mapping of the human genome and the falling cost of DNA sequencers will enable more examples of synthetically producing drugs through a rational understanding of the human body.

2.8 Discussion

Creating materials to better exploit physical phenomena is an important mechanism for improving the performance and cost of technologies, and there are several common themes about this source of exponential improvements. First, many new materials are created in laboratories and not in factories, and thus production is not needed to make many of the improvements discussed in this chapter. This was the case with load-bearing, temperature-resistant, and magnetic materials, batteries and other storage devices, transistors, LEDs, OLEDs, organic transistors, solar cells, superconductors, seeds, fertilizers, herbicides, and pesticides. University scientists and engineers create these materials because creating them helps their careers in terms of publications, grant money, promotions, and patents. Even corporate scientists and engineers create these materials for some of the same reasons since they are often evaluated in a similar way.

Second, the fact that these materials are created in laboratories means that this creation has occurred even though some of these technologies have never been produced on a large scale.
This is certainly the case with new technologies such as organic transistors, OLEDs, and superconductors, for which the modern system of R&D and laboratories is creating these new materials for the reasons given in the previous paragraph. This suggests that new materials could have been created for many of the other technologies covered in this chapter even without production. This conclusion has obvious implications for policy in that subsidies for R&D are probably a more effective stimulus for creating these new materials than are subsidies for production, which is the current emphasis in, for example, clean energy.

Third, these two conclusions are also consistent with the fact that most of the performance trajectories display relatively straight lines. While some argue that improvements in performance accelerate as technologies are commercialized and as demand for them increases, this chapter's analysis suggests that this does not occur. Instead, the relatively straight lines for these performance trajectories suggest that demand is having a different impact on performance, one that is not fully understood. It could be that increases in demand prevent the rate of improvement from declining as diminishing returns from research would otherwise be expected to occur. It is certainly not the case that a slowdown in the rate of improvement in an old technology causes research funds to move to a new technology. Evidence that multiple technologies are being simultaneously pursued is a better explanation of the data for organic materials, energy storage densities, magnetic materials, and corn yield.

Fourth, the relatively straight lines can help us understand the future rate of improvement and when a technology might become economically feasible for specific applications. One can compare the new technology with existing ones along the main and other measures of performance in order to understand the rate at which the new technology might become economically feasible for these specific applications. In doing this, it is important to identify all the measures of performance, as many technologies are evaluated along multiple measures and new materials often succeed because they have advantages along a new measure of performance. Most technologies have multiple measures of performance, and finding the right
combination of them is often the challenge. This challenge is exacerbated by the fact that historical trends have not been plotted for most measures of performance, or even for most technologies.

Fifth, some of these technologies experience more rapid rates of improvement than do other technologies. Most of these are electronic-related ones such as LEDs, OLEDs, lasers, organic transistors, and glass fiber, in which improvements are apparently easy to make. This is perhaps because new materials and processes are easily created and because small changes in materials and processes have a strong impact on performance. Small changes in materials can have a large impact on the crystal lattice structure of the materials and thus facilitate the construction of microstructures such as diodes, transistors, other P-N junctions (e.g., solar cells), quantum wells or dots for lasers, and in general thin films for these microstructures, where the performance of these microstructures benefits from small changes in the composition of materials and processes.
Chapter 3

Geometric Scaling: Reductions in Scale

Some technologies benefit from reductions in scale, and these technologies have experienced some of the most rapid improvements in performance and cost in human history. The concept of geometric scaling helps us understand when technologies benefit from reductions in physical scale, primarily by focusing on the relationship between the geometry of a technology, the scale of it, and the physical laws that govern it. Integrated circuits (ICs) and magnetic storage benefit from reductions in scale because the rules that govern their operation define performance in terms of smaller scale. Placing more transistors, memory cells, or magnetic storage regions in a certain area increases the speed and functionality and reduces both the power consumption and size of the final product, which are typically considered improvements in performance for most electronic products (they also lead to lower material, equipment, and transportation costs). The combination of both increased performance and reduced costs as size is reduced has led to very rapid improvements in the performance-to-cost ratio of many electronic components and of the electronic systems that are composed of these components. For example, three orders of magnitude reductions in transistor length have led to about nine orders of magnitude improvements in the cost of transistors on a per-transistor basis.

Richard Feynman is sometimes credited with predicting these advances in his famous 1959 speech “There's Plenty of Room at the Bottom: An Invitation to Enter a New Field of Physics.” While he was primarily referring to the field of physics and where physicists should place their emphasis, technology has also moved in the same direction, partly because physicists and other scientists have advanced our understanding of small-scale phenomena. This improved understanding, along with its application to ICs and magnetic storage, is now enabling us to benefit from reductions in the scale of other technologies such as MEMS (micro-electro-mechanical systems), bio-electronic ICs, and more generally nanotechnology, which are addressed in Chapters 6, 7, and 10.
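A rough decomposition shows why three orders of magnitude in length can yield roughly nine orders of magnitude in cost per transistor. The sketch below is our illustration of the geometric part of the argument; the residual factor attributed to larger wafers, better yields, and cheaper processing is an assumption, not a figure from this book.

```python
# Geometric scaling: transistors per unit area grow as 1/length**2,
# so a 1,000x reduction in transistor length gives ~10**6 more
# transistors in the same area. The remaining ~10**3 of the ~10**9
# cost improvement is attributed here (as an assumption) to larger
# wafers, higher yields, and cheaper processing per unit area.
length_reduction = 1_000                        # ~3 orders of magnitude
geometric_gain = length_reduction ** 2          # 10**6 from area scaling
other_gain = 1_000                              # wafers, yield, etc. (assumed)
print(f"{geometric_gain * other_gain:.0e}")     # ~1e+09
```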
3.1 Magnetic Hard Disks

Hard disks are one type of magnetic storage. Engineers and scientists have been reducing the scale of features on magnetic hard disks (and tape) for the last 60 years, which has enabled rapid increases in the areal recording density of these disks (and tape), as shown in Figure 3.1, and rapid decreases in their price (see Figure 3.2). Hard disk assemblies consist of a platter, a read-write head that is connected to an actuator, and input-output connectors. Writing involves magnetizing a specific region and reading involves sensing a region's magnetic field. Key features include the size of the magnetic “domains” on a platter that store a single bit, the size of the read and write elements on the read-write head, and the spacing between the platter and read-write heads.

Figure 3.1 Improvements in Areal Recording Density for Magnetic Hard Disk Drives
Source: Yoon Y, 2010. Nano-Tribology of Discrete Track Recording Media, Unpublished PhD Dissertation, University of California, San Diego
Figure 3.2 Falling Price ($/GByte) of Hard Disk Drives
Source: Yoon Y, 2010. Nano-Tribology of Discrete Track Recording Media, Unpublished PhD Dissertation, University of California, San Diego

Reducing the scale of these features has required better process control over specific measures, the use of new scientific principles, and the creation of materials that better exploit these scientific principles. First, improvements in sputtering equipment enabled better consistency along with reductions in the size of magnetic domains on a platter, and improvements in semiconductor processing technology enabled smaller domains to be sensed by a magnet in a read-write head in which the magnet is “shielded” from all but a very small area at any given time. Second, creating new materials that better exploit these principles has also been necessary for the reductions in feature sizes to occur. For example, the creation of materials with high coercivity and energy product, which were covered in Chapter 2, contributed towards reductions in feature size and thus towards increases in magnetic recording density (see Figure 3.3). Third, changing from electromagnetic induction to magnetoresistance and most recently
to spintronics has also been necessary to continue the reduction in feature sizes. Spintronics exploits both the intrinsic spin of the electron and its associated magnetic moment; its best-known manifestation is giant magnetoresistance, whose discoverers, Albert Fert and Peter Grünberg, received the 2007 Nobel Prize in Physics.

Figure 3.3 Increases in Coercivity (1000s of Amps/m) were Necessary to Achieve Increases in Recording Density (Mbits per in2)
Source: www1.hgst.com/hdd/technolo/overview/chart11.html

These improvements in magnetic recording density have also contributed towards increases in the capacity of hard disk drives and thus the emergence of smaller-diameter disk drives such as 5.25, 3.5, 2.5, 1.8, 1.0, and 0.85-inch ones, which have obvious advantages for portable computers and other products. While others have characterized the initial diffusion of smaller disk drives in terms of their inferior capacity and low-end customers, where the demand for these small disk drives from low-end customers drove improvements in their capacity, the ability to rapidly increase the magnetic recording density on a platter meant that the emergence of smaller disk drives was inevitable given the benefits of small size in final products.
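To connect the areal densities in Figure 3.1 to familiar drive capacities, the sketch below multiplies an assumed usable platter area by a range of areal densities; all numbers are illustrative and are not taken from the dissertation cited above.

```python
# Capacity of one 3.5-inch platter at various areal densities:
# usable area (an annulus) times bits per square inch, two sides.
import math

outer_r, inner_r = 1.6, 0.6   # usable radii in inches (assumed)
area_in2 = math.pi * (outer_r**2 - inner_r**2)   # ~6.9 in^2 per side

for gbit_per_in2 in (1, 100, 500):   # densities spanning Figure 3.1's range
    gbytes = gbit_per_in2 * area_in2 * 2 / 8     # two sides, 8 bits per byte
    print(f"{gbit_per_in2:>4} Gbit/in^2 -> ~{gbytes:,.0f} GB per platter")
```

Under these assumptions, each hundredfold increase in areal density turns a platter of a few gigabytes into one of several hundred, which is the pattern of capacity growth that the falling prices in Figure 3.2 reflect.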