GeForce FX

Introduction

The GeForce FX (codenamed NV30) is a graphics card in the GeForce line from the manufacturer NVIDIA. The GeForce FX series is the fifth generation in the GeForce line. With the GeForce 3, NVIDIA introduced programmable shader units into its 3D rendering pipeline, in line with the release of Microsoft's DirectX 8.0. With real-time 3D graphics technology continually advancing, the release of DirectX 9.0 ushered in a further refinement of the programmable pipeline with the arrival of Shader Model 2.0. The GeForce FX series brings to the table NVIDIA's first generation of Shader Model 2 hardware support.

The architecture was a major departure from the GeForce 4 series. Although it is the fifth major revision in the series of GeForce graphics cards, it was not marketed as a GeForce 5. The FX ("effects") in the name was chosen to emphasise the power of the design's major improvements and new features, and to distinguish the FX series as something greater than a revision of earlier designs. The FX in the name was also used to market the fact that the GeForce FX was the first GPU to be a combined effort from the previously acquired 3dfx engineers and NVIDIA's own engineers. NVIDIA's intention was to underline the extended capability for cinema-like effects using the card's numerous new shader units.

The GeForce FX also included an improved VPE (Video Processing Engine), first deployed in the GeForce4 MX. Its main upgrade was per-pixel video de-interlacing, a feature first offered in ATI's Radeon but seeing little use until the maturation of Microsoft's DirectX-VA and VMR (video mixing renderer) APIs. Among other features was an improved anisotropic filtering algorithm which was not angle-dependent (unlike that of its competitor, the Radeon 9700/9800 series) and offered better quality, but affected performance somewhat. Though NVIDIA reduced the filtering quality in its drivers for a while, the company eventually raised the quality again, and this feature remains one of the highest points of the GeForce FX family to date (however, this method of anisotropic filtering was dropped by NVIDIA with the GeForce 6 series for performance reasons).
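
To give a sense of what "angle-dependent" means here: the degree of anisotropic filtering applied to a pixel is normally derived from how stretched the texture footprint is in screen space. The sketch below is a rough CPU-side illustration, loosely following the formulation in the OpenGL EXT_texture_filter_anisotropic extension; it is not NVIDIA's or ATI's actual hardware logic. An angle-dependent implementation additionally reduces the computed degree for surfaces at certain orientations, which is the behaviour the GeForce FX avoided.

```python
import math

def anisotropy_degree(dudx, dvdx, dudy, dvdy, max_aniso=8):
    """Estimate the anisotropic filtering degree for one pixel.

    (dudx, dvdx) and (dudy, dvdy) are the texture-coordinate derivatives
    along the screen x and y axes. The ratio of the longer to the shorter
    footprint axis decides how many extra texture samples are worthwhile.
    """
    px = math.hypot(dudx, dvdx)                    # footprint length along screen x
    py = math.hypot(dudy, dvdy)                    # footprint length along screen y
    longer = max(px, py)
    shorter = max(min(px, py), 1e-6)               # guard against division by zero
    return min(math.ceil(longer / shorter), max_aniso)

# A surface viewed at a glancing angle stretches the footprint heavily:
print(anisotropy_degree(0.001, 0.0, 0.0, 0.007))   # -> 7
# A surface viewed head-on needs no anisotropy:
print(anisotropy_degree(0.002, 0.0, 0.0, 0.002))   # -> 1
```
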
The last model, the GeForce FX 5950 Ultra, is comparable to competitor ATI Technologies' Radeon 9800 XT. The advertising campaign for the GeForce FX featured the Dawn fairy demo, the work of several veterans from the computer animation of Final Fantasy: The Spirits Within. NVIDIA touted it as "The Dawn of Cinematic Computing", while critics noted that it was the strongest case yet of using sex appeal to sell graphics cards. It is still probably the best known of the NVIDIA demos.

Features

The FX features DDR, DDR2 or GDDR3 memory, a 130 nm fabrication process, and Shader Model 2.0/2.0A compliant vertex and pixel shaders. The FX series is fully compliant and compatible with DirectX 9.0b.

DDR Memory

DDR memory, or Double Data Rate memory, is an evolutionary memory technology that doubles data throughput to the processor. As an evolution of PC133 SDRAM, DDR leverages the existing production infrastructure to provide unrivalled PC performance at an affordable price.

DDR2 Memory

Another big advantage is the support for DDR2 memory. Operating at a blistering 500 MHz frequency on a 128-bit data bus, this interface offers 16 GB/s of bandwidth. Even though the memory bus is narrower than that of the Radeon 9700, the effective bandwidth is higher for two reasons: the effective frequency is higher with the DDR2 memory, and the data that comes out of the rendering pipeline is compressed in hardware before being sent to memory. On average, NVIDIA states that a 4:1 compression ratio occurs, and the resultant memory bandwidth of the card is therefore effectively raised to 48 GB/s; that's the contents of an entire 40 GB hard disk transferred in a little under a second! (A worked version of this arithmetic is sketched below.)
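
A quick back-of-the-envelope check of those figures. This is a sketch only: a straight 4 x 16 would give 64 GB/s, so the quoted 48 GB/s implies the compression ratio is an average that does not apply to every byte of traffic.

```python
# Raw bandwidth: clock rate x transfers per clock x bus width in bytes.
clock_hz  = 500e6                 # 500 MHz memory clock (elsewhere quoted as 250 MHz x 4,
                                  # which gives the same 1000 MT/s effective rate)
bus_bytes = 128 // 8              # a 128-bit bus moves 16 bytes per transfer
raw_bw    = clock_hz * 2 * bus_bytes
print(raw_bw / 1e9)               # -> 16.0 GB/s, the quoted raw figure

# The claimed effective figure corresponds to an overall 3:1 saving across all traffic:
print(3 * raw_bw / 1e9)           # -> 48.0 GB/s, the figure NVIDIA advertised
```
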
GDDR3

Graphics Double Data Rate, version 3 (GDDR3) is a graphics-card-specific memory technology designed by ATI Technologies. It has much the same technological base as DDR2, but its power and heat-dispersal requirements have been reduced somewhat, allowing for higher-speed memory modules and simplified cooling systems. Unlike the DDR2 used on graphics cards, GDDR3 is unrelated to the upcoming JEDEC DDR3 specification. This memory uses internal terminators, enabling it to better handle certain graphics demands. To improve bandwidth, GDDR3 memory transfers 4 bits of data per pin over 2 clock cycles.

Vertex Shaders

Vertex shaders are applied to each vertex and run on a programmable vertex processor. They define a method to compute vector-space transformations and other linearizable computations. A vertex shader expects various inputs (a small sketch of this data flow follows the list):

1. Uniform variables are constant values for each shader invocation, although their values are allowed to change between different shader invocation batches. This kind of variable is usually a 3-component array, but it does not need to be. Usually, only basic data types can be loaded from external APIs, so complex structures must be broken down. Uniform variables can be used to drive simple conditional execution on a per-batch basis; support for this kind of branching at the vertex level was introduced in Shader Model 2.0.

2. Vertex attributes, a special case of variant variables, are essentially per-vertex data such as vertex positions. Most of the time, each shader invocation performs its computation on a different data set. The external application usually does not access these variables "directly" but manages them as large arrays. Beyond this detail, applications are usually capable of changing a single vertex attribute with ease. Branching on vertex attributes requires a finer degree of control, which is supported in the extended Shader Model 2.
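
As a rough CPU-side illustration of the split between uniforms and per-vertex attributes (the names and the 4x4 matrix maths are illustrative, not NVIDIA's API; a real shader of this era would be written in an assembly-like or Cg/HLSL shading language):

```python
import numpy as np

def vertex_shader(position, mvp):
    """Minimal vertex-shader-like transform.

    position : per-vertex attribute, a 3-component object-space position.
    mvp      : uniform, a 4x4 model-view-projection matrix that stays
               constant for the whole batch of vertices.
    Returns the clip-space position the rasteriser would consume.
    """
    p = np.append(position, 1.0)          # promote to homogeneous coordinates
    return mvp @ p

# One uniform for the whole batch...
mvp = np.eye(4)
mvp[0, 3] = 2.0                           # a simple translation along x

# ...applied to many per-vertex attributes.
vertices = [np.array([0.0, 0.0, 0.0]),
            np.array([1.0, 0.0, 0.0]),
            np.array([0.0, 1.0, 0.0])]
clip_positions = [vertex_shader(v, mvp) for v in vertices]
print(clip_positions[0])                  # -> [2. 0. 0. 1.]
```
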
Pixel Shaders

Pixel shaders are used to compute properties which, most of the time, are recognized as pixel colors. Pixel shaders are applied to each pixel and run on a pixel processor, which usually features much more processing power than its vertex-oriented counterpart. A pixel shader expects input in the form of interpolated vertex values, so there are several sources of information (see the sketch after this list):

1. Uniform variables can still be used and provide interesting opportunities. A typical example is passing an integer giving the number of lights to be processed along with an array of light parameters. Textures are a special case of uniform values and can be applied to vertices as well, although vertex texturing is often more expensive.

2. Varying attributes is the name given to a fragment's variant variables, which are the interpolated vertex shader outputs. Because of their origin, the application has no direct control over the actual values of these variables.
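
A matching CPU-side sketch of the pixel stage, again with illustrative names: the interpolated normal and base colour play the role of varying attributes, while the light direction arrives as a uniform.

```python
import numpy as np

def pixel_shader(interp_normal, interp_base_colour, light_dir):
    """Minimal pixel-shader-like computation (simple diffuse lighting).

    interp_normal, interp_base_colour : varying attributes, interpolated
        across the triangle from the vertex shader outputs.
    light_dir : uniform, shared by every pixel in the batch.
    Returns an RGB colour in the 0..1 range.
    """
    n = interp_normal / np.linalg.norm(interp_normal)
    l = light_dir / np.linalg.norm(light_dir)
    diffuse = max(float(n @ l), 0.0)       # Lambertian term, clamped at zero
    return np.clip(interp_base_colour * diffuse, 0.0, 1.0)

print(pixel_shader(np.array([0.0, 0.0, 1.0]),   # surface facing the viewer
                   np.array([0.8, 0.2, 0.2]),   # reddish base colour
                   np.array([0.0, 0.0, 1.0])))  # light shining straight on
# -> [0.8 0.2 0.2]
```
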
Intellisample Technology

While most of the pixel and vertex specifications are focused on DirectX 9.0, the pixie dust here is a technology called Intellisample that will make even games such as DOOM 3 run faster. Intellisample is a comprehensive set of technologies that includes a new colour compression engine, improved fast z-clear, dynamic gamma correction, adaptive trilinear and anisotropic filtering, and anti-aliasing.

The GeForce FX maximises its memory bandwidth by compressing all the data that comes out of the rendering pipeline before sending it to the memory controller (as described above). This results in a direct increase in effective memory bandwidth, allowing for larger and more complex textures. It is noticed especially when anti-aliasing is enabled, where the demands placed on memory bandwidth are greater. There is also a newer algorithm for clearing the z-buffer (for getting rid of obstructed or invisible polygons), resulting in faster frame processing. Finally, the card offers various methods for implementing its filtering options. The user can choose between a direct filtering option (trilinear or anisotropic) or a less accurate but higher-performance adaptive option, which results in a lower performance hit than running the card in any one of these filtering modes directly.

Faster core and memory speeds

When it debuts, the GeForce FX is expected to have a core speed of 500 MHz and a memory speed of 250 MHz (effectively 1 GHz, thanks to the 4x increase of DDR2). This is significantly higher than the Radeon 9700's core speed of 325 MHz and 310 MHz DDR memory.

FX Flow

Given the high core and memory speeds, the card needs to breathe well. Hence, it uses an advanced cooling system involving heat pipes. In addition to a heat sink on the critical heat-generating components such as the GPU and memory chips, tiny copper pipes draw heat away from these elements. This is implemented through a special airflow duct working in conjunction with the cooling fan, resulting in a large cooling assembly that takes up the space of two slots in your cabinet!

In newer games such as Stalker or RalliSport, the realism in the models and environment is unprecedented thanks to the use of very high polygon counts. Also, the 128-bit colour support allows for hitherto unseen levels of accuracy in colours and specular highlights in these games.
The Last Word

While all this technology scientifically translates into more pixels per second, greater colour depths and better filtering, the truth will be bared when we see shimmers on water and winking characters on our desktops. Though this card will be out of our financial reach, it does provide a taste of things to come. In time, these impressive technologies will trickle down to more affordable solutions. So be prepared for the time when you can feel the knot in the pit of your stomach while you watch a gleaming Lamborghini Murciélago tear down your screen; the only difference being that instead of watching this with a bag of popcorn, you'll be holding a joystick in your hand!

Here's how the GeForce FX stacks up against other graphics processing heavyweights, spec for spec.

Specification | nVidia GeForce FX | ATi Radeon 9700 PRO | nVidia GeForce4 Ti4600
Chip technology | 256-bit | 256-bit | 256-bit
Process | 0.13 micron | 0.15 micron | 0.15 micron
Transistors | 125 million | 107 million | 63 million
Memory bus | 128-bit DDR2 | 256-bit DDR | 128-bit DDR
Pure memory bandwidth | 16 GB/sec | 19.8 GB/sec | 10.4 GB/sec
Pixel fillrate | 4 Gigapixel/sec | 2.6 Gigapixel/sec | 1.24 Gigapixel/sec
Anti-aliased fillrate | 16 billion AA samples/sec | 15.6 billion AA samples/sec | 4.8 billion AA samples/sec
Max FSAA mode | 8x | 6x | 4x
Triangle transform rate | 350 M triangles/sec | 325 M triangles/sec | 69 M triangles/sec
AGP bus | 1x/2x/4x/8x | 1x/2x/4x/8x | 1x/2x/4x
Memory | 128/256 MB | 128/256 MB | 128 MB
GPU clock | 500 MHz | 325 MHz | 300 MHz
Memory clock | 250 MHz (1000 DDR2) | 310 MHz (620 DDR) | 325 MHz (650 DDR)
Memory type | BGA 2.0 ns | BGA 2.9 ns | BGA 2.8 ns
Vertex shader | FP Array | 4 | 2
Pixel pipelines | 8 | 8 | 4
Texture units per pipe | 1 | 1 | 2
Textures per texture unit | 16 | 8 | 4
DirectX generation | 9.0 (+) | 9 | 8
Memory optimizations | LMA II Optimized Colour Compression | Hyper Z III | LMA II
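
The "Pure memory bandwidth" row follows directly from the "Memory clock" and "Memory bus" rows; a quick sketch of that relationship, using the effective data rates quoted in parentheses in the memory clock row:

```python
def memory_bandwidth(effective_mt_per_s, bus_bits):
    """Peak memory bandwidth in GB/s from effective transfer rate and bus width."""
    return effective_mt_per_s * 1e6 * (bus_bits / 8) / 1e9

print(memory_bandwidth(1000, 128))   # GeForce FX:      16.0 GB/s
print(memory_bandwidth(620, 256))    # Radeon 9700 PRO: 19.84 GB/s (quoted as 19.8)
print(memory_bandwidth(650, 128))    # GeForce4 Ti4600: 10.4 GB/s
```
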
Fact File

The pixel shader in the GeForce FX can process 51 billion floating-point operations per second. This:

1. Can render over a hundred Jurassic Park dinosaurs at 100 frames per second
2. Has more floating-point power than a Cray SV-1 supercomputer
3. Is 120 times the distance from the Earth to the Moon, if converted to metres

The 125 million transistors in the GeForce FX GPU are three times as many as in a Pentium 4 processor.

Delays

The NV30 project was delayed for three key reasons. One was that NVIDIA decided to produce an optimized version of the GeForce 3 (NV20), which resulted in the GeForce 4 Ti (NV25), while ATI cancelled its competing optimized chip (R250) and opted instead to focus on the Radeon 9700. Another was NVIDIA's commitment to Microsoft to deliver the Xbox console's graphics processor (NV2A). The Xbox venture diverted many of NVIDIA's engineers, not only during the NV2A's initial design cycle but also during the mid-life product revisions needed to discourage hackers. Finally, NVIDIA's transition to a 130 nm manufacturing process encountered unexpected difficulties. NVIDIA had ambitiously selected TSMC's then state-of-the-art (but unproven) low-k dielectric 130 nm process node. After sample silicon wafers exhibited abnormally high defect rates and poor circuit performance, NVIDIA was forced to re-tool the NV30 for a conventional (FSG) 130 nm process node. (NVIDIA's manufacturing difficulties with TSMC spurred the company to search for a second foundry. NVIDIA selected IBM to fabricate several future GeForce chips, citing IBM's process technology leadership; curiously, though, NVIDIA avoided IBM's low-k process.)

Analysis of the hardware

Hardware enthusiasts saw the GeForce FX series as a disappointment, as it did not live up to expectations. NVIDIA had aggressively hyped the card throughout the summer and fall of 2002 to combat ATI Technologies' fall release of the powerful Radeon 9700. ATI's very successful Shader Model 2 card had arrived several months earlier than NVIDIA's first NV30 board, the GeForce FX 5800.
GeForce FX 5800

When the FX 5800 launched, it was discovered after much testing and research by hardware review websites that the 5800 was no match for the Radeon 9700, especially when pixel shading was involved. The 5800 had roughly a 30% memory bandwidth deficit caused by its narrower 128-bit memory bus (compared to ATI's 256-bit bus). The card used expensive and hot GDDR2 RAM, while ATI was able to use cheaper, lower-clocked DDR SDRAM with its wider bus. And while the R300 core used in the 9700 was capable of 8 pixels per clock with its 8 pipelines, the NV30 was discovered to be a 4-pixel-pipeline chip. However, because of both the expensive RAM and the 130 nm process used for the GPU, NVIDIA was able to clock both components significantly higher than ATI, closing these gaps somewhat. Still, the fact that ATI's solution was architecturally more robust meant the FX 5800 failed to defeat the older Radeon 9700.

The initial version of the GeForce FX (the 5800) was so large that it required two slots to accommodate it, with a massive heat sink and blower arrangement called "FX Flow" that produced a great deal of noise. This earned it the nickname "Dustbuster", and graphics cards that happen to be loud are often compared to the GeForce FX 5800 for this reason. To make matters worse, ATI's refresh of the Radeon 9700, the Radeon 9800, arrived shortly after NVIDIA's boisterous launch of the disappointing FX 5800, and the Radeon 9800 brought a significant performance boost over the already superior Radeon 9700, further separating the failed FX 5800 from its competition.

With regard to the much-vaunted Shader Model 2 capabilities of the NV3x series, performance was shockingly poor. The chips were designed for a mixed-precision programming methodology, using 64-bit FP16 (four 16-bit components) for situations where high-precision math was unnecessary to maintain image quality, and using the 128-bit FP32 mode only when absolutely necessary.
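
To illustrate why the precision choice matters, here is a generic numerical sketch (plain NumPy, not NV3x shader code): FP16 has only a 10-bit mantissa, which is usually plenty for colour values confined to the 0..1 range but quickly loses accuracy for quantities such as scaled texture coordinates.

```python
import numpy as np

# Colour-style maths survives FP16: values stay in a small, well-covered range.
print(np.float32(0.8) * np.float32(0.3))      # ~0.24
print(np.float16(0.8) * np.float16(0.3))      # ~0.24 as well; the error is invisible

# Coordinate-style maths does not: a small offset applied to a texture
# coordinate scaled by a 2048-texel-wide texture is rounded away in FP16.
print(np.float32(0.5001) * np.float32(2048))  # ~1024.2  (offset preserved)
print(np.float16(0.5001) * np.float16(2048))  # 1024.0   (offset lost entirely)
```
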
The GeForce FX architecture was also extremely sensitive to instruction ordering in the pixel shaders. This demanded more complicated work from developers, who had to concern themselves not only with the shader mathematics and instruction order but also with testing whether they could get by with lower precision. Additionally, the R300-based cards from ATI did not benefit from partial precision in any way, because those chips were designed purely around DirectX 9's required minimum of 96-bit FP24 as full precision.

The NV30, NV31 and NV34 were further handicapped because they contained a mixture of DirectX 7 fixed-function T&L units, DirectX 8 integer pixel shaders and DirectX 9 floating-point pixel shaders. The R300 chips emulated the older functionality on their pure Shader Model 2 hardware, allowing ATI to spend far more of the same transistor budget on SM2 performance. For NVIDIA, with its mixture of hardware, the result was non-optimal performance in pure SM2 programming, because only a portion of the chip could perform that math, and because programmers tended to neglect partial-precision optimizations in their code, seeing as ATI's chips performed far better even without the extra effort. NVIDIA released several guidelines for creating GeForce FX-optimized code over the lifetime of the product, and worked with Microsoft to create a special shader profile called "Shader Model 2.0A", which generated the optimal code for the GeForce FX and improved performance noticeably. It was later found that even with the use of partial precision and Shader Model 2.0A, the GeForce FX's performance in shader-heavy applications trailed behind the competition. However, the GeForce FX still remained competitive in OpenGL applications. This can be attributed to the fact that most OpenGL applications use manufacturer-specific extensions to support advanced features and obtain the best possible performance on particular hardware, and such extensions are by nature well optimized for the target hardware.

The FX series was a moderate commercial success, but because of its delayed introduction and its flaws, NVIDIA ceded market leadership to ATI and the Radeon 9700. Due to market demand and the FX's deficiency as a worthy successor, NVIDIA extended the production life of the aging GeForce 4, keeping both the FX and the 4 series in production for some time, at great expense.
Valve's presentation

In late 2003, the GeForce FX series became known for poor performance with DirectX 9 vertex and pixel shaders because of a very vocal presentation by the popular game developer Valve Software. Early indicators of potentially poor Pixel Shader 2.0 performance had come from synthetic benchmarks (such as 3DMark 2003), but outside the developer community and tech-savvy computer gamers, few mainstream users were aware of such issues. Then Valve dropped a bombshell on the gaming public. Using a pre-release build of the highly anticipated Half-Life 2, built on the "Source" engine, Valve published benchmarks revealing a complete generational gap (80-120% or more) between the GeForce FX 5900 Ultra and the ATI Radeon 9800. In game levels with Shader 2.0 enabled, NVIDIA's top-of-the-line FX 5900 Ultra performed about as fast as ATI's mainstream Radeon 9600, which cost approximately a third as much as the NVIDIA card.

Valve had initially planned to support partial floating-point precision (FP16) to optimize for NV3x, but eventually discovered that this plan would take far too long to carry out. As noted earlier, ATI's cards did not benefit from FP16 mode, so all of the work would have been entirely for NVIDIA's NV3x cards, a niche too small to be worth the time and effort, especially at a time when DirectX 8 cards such as the GeForce4 were still far more prevalent than DirectX 9 cards. When Half-Life 2 was released a year later, Valve opted to make all GeForce FX hardware default to the game's DirectX 8 shaders in order to avoid the FX series' poor Shader 2.0 performance. It is possible to force Half-Life 2 to run in DirectX 9 mode on all cards with a simple tweak to a configuration file; when this was tried, users and reviewers noted a significant performance loss on NV3x cards, with only the top-of-the-line variants (5900 and 5950) remaining playable. However, an unofficial fan-made patch, which optimized the Half-Life 2 shaders for the GeForce FX, allowed users of lower-end GeForce FX cards (5600 and 5700) to comfortably play the game in DirectX 9 mode and considerably improved performance on the GeForce FX 5800, 5900 and 5950. This only underlined that the GeForce FX is a poor performer when DX9 shaders are not optimized for its architecture.
Questionable tactics

NVIDIA's GeForce FX era was one of great controversy for the company. The competition had soundly beaten NVIDIA on the technological front, and the only way to make the FX chips competitive with the Radeon R300 chips was to optimize the drivers to the extreme. This took several forms.

NVIDIA has historically been known for impressive OpenGL driver performance and quality, and the FX series certainly maintained that. However, in terms of image quality in both Direct3D and OpenGL, the company began to apply aggressive optimization techniques not seen before. It started with filtering optimizations, changing how trilinear filtering operated on game textures and visibly reducing its accuracy, and thus its quality. Anisotropic filtering also saw dramatic tweaks that limited its use on as many textures as possible to save memory bandwidth and fillrate. Tweaks of this kind can often be spotted in games as a shimmering phenomenon on floor textures as the player moves through the environment (often a sign of poor transitions between mip-maps); a rough sketch of what reduced trilinear blending looks like is given below. Changing the driver settings to "High Quality" can alleviate this at the cost of performance.

NVIDIA also began to clandestinely replace pixel shader code in software with hand-coded, optimized versions of lower accuracy, by detecting which program was being run. These "tweaks" were especially noticed in benchmark software from Futuremark. In 3DMark03 it was found that NVIDIA had gone to extremes to limit the complexity of the scenes, through driver shader swaps and aggressive hacks that prevented parts of the scene from rendering at all. This artificially boosted the scores the FX series received. Side-by-side analysis of screenshots in games and 3DMark03 showed vast differences between what a Radeon 9800/9700 displayed and what the FX series was doing. NVIDIA also publicly attacked the usefulness of these programs and the techniques used within them in order to undermine their influence upon consumers.
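
The trilinear filtering optimization is commonly described as shrinking the range of detail levels over which two mip levels are actually blended, so that most of a surface is effectively only bilinear filtered. The sketch below is an illustrative model of that idea, not NVIDIA's actual driver logic; blend_band is a made-up knob.

```python
def mip_blend_weight(lod, blend_band=1.0):
    """Weight given to the coarser mip level for a fractional level of detail.

    Full trilinear filtering (blend_band = 1.0) blends smoothly across the
    whole gap between mip levels. A reduced blend_band only blends close to
    the transition point, which saves texture fetches but produces visible
    banding and shimmering where the weight jumps.
    """
    frac = lod % 1.0                          # fractional part of the LOD
    lo = 0.5 - blend_band / 2.0               # start of the blend window
    t = (frac - lo) / blend_band              # position inside the window
    return min(max(t, 0.0), 1.0)              # clamp to the [0, 1] range

for lod in (4.1, 4.3, 4.5, 4.7, 4.9):
    print(lod, mip_blend_weight(lod, 1.0), mip_blend_weight(lod, 0.2))
# With blend_band = 0.2 most LOD values collapse to a weight of 0 or 1,
# i.e. plain bilinear filtering, with only a narrow blended strip around .5.
```
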
Basically, NVIDIA programmed its drivers to look for specific software and apply aggressive optimizations tailored to the limitations of the poorly designed NV3x hardware. Upon discovery of these tweaks there was a very vocal uproar from the enthusiast community and from several popular hardware analysis websites. Unfortunately, disabling most of these optimizations showed that NVIDIA's hardware was simply incapable of rendering the scenes at a level of detail similar to what ATI's hardware was displaying. So most of the optimizations stayed, except in 3DMark, where Futuremark began updating its software and screening driver releases for hacks. Both NVIDIA and ATI have historically been guilty of optimizing drivers like this; however, NVIDIA went to a new extreme with the FX series. Both companies still optimize their drivers for specific applications today (2006), but a tight rein and watch is kept on the results of these optimizations by a now more educated and aware user community.

Competitive response

By early 2003, ATI had captured a considerable chunk of the high-end graphics market, and its popular Radeon 9600 was dominating the mid-to-high performance segment as well. In the meantime, NVIDIA introduced the mid-range 5600 and low-end 5200 models to address the mainstream market. With conventional single-slot cooling and a more affordable price tag, the 5600 had respectable performance but failed to measure up to its direct competitor, the Radeon 9600. As a matter of fact, the mid-range GeForce FX parts did not even advance performance over the chips they were designed to replace, the GeForce 4 Ti line: in DirectX 8 applications, the 5600 lost to or merely matched the Ti 4200. Likewise, the entry-level FX 5200 performed only about as well as the GeForce 4 MX 460, despite the FX 5200 possessing a far better 'checkbox' feature set. The FX 5200 was easily matched in value by ATI's older R200-based Radeon 9000-9250 series and outperformed by the even older Radeon 8500.

With the launch of the GeForce FX 5900, NVIDIA fixed many of the problems of the 5800.
While the 5800 used fast but hot and expensive GDDR2 memory on a 128-bit memory bus, the 5900 reverted to slower and cheaper DDR, but more than made up for it with a wider 256-bit memory bus. The 5900 performed somewhat better than the Radeon 9800 in everything that did not make heavy use of shaders, and had a quieter cooling system than the 5800, but most cards based on the 5900 still occupied two slots (the Radeon 9700 and 9800 were both single-slot cards). By mid-2003, ATI's top product (the Radeon 9800) was outselling NVIDIA's top-of-the-line FX 5900, perhaps the first time that ATI had been able to displace NVIDIA from its position as market leader.

GeForce FX 5950

NVIDIA later attacked ATI's mid-range card, the Radeon 9600, with the GeForce FX 5700 and 5900XT. The 5700 was a new chip sharing the architectural improvements found in the 5900's NV35 core. The FX 5700's use of GDDR2 memory kept product prices high, leading NVIDIA to introduce the FX 5900XT, which was identical to the 5900 but clocked slower and fitted with slower memory. The final GeForce FX model released was the 5950 Ultra, essentially a 5900 Ultra with higher clock speeds. This model did not prove particularly popular, as it was not much faster than the 5900 Ultra yet commanded a considerable price premium over it. The board was fairly competitive with the Radeon 9800 XT, again as long as pixel shaders were lightly used.

The way it's meant to be played

NVIDIA debuted a new campaign to motivate developers to optimize their titles for NVIDIA hardware at the Game Developers Conference (GDC) in 2002.
The program offered game developers the added publicity of NVIDIA's program in exchange for the game being consciously optimized for NVIDIA graphics solutions. The program aims at delivering the best possible user experience on the GeForce line of graphics processing units.

Windows Vista and GeForce FX PCI cards

Although ATI's competing cards clearly surpassed the GeForce FX series in the eyes of many gamers, NVIDIA may still get the last laugh with the release of Windows Vista, which requires DirectX 9 for its signature Windows Aero interface. Many users with integrated graphics processor (IGP) systems that lack AGP or PCIe slots, but are otherwise powerful enough for Vista, may demand DirectX 9 PCI video cards for Vista upgrades, though the size of this niche market is unknown. To date, the most common such cards use GeForce FX-series chips; most use the FX 5200, but some use the FX 5500 (a slightly overclocked 5200) or the FX 5700 LE (which has similar speeds to the 5200 but a few more pixel pipelines). For some time, the only other PCI cards that were Aero-capable were two GeForce 6200 PCI cards made by BFG Technologies and its 3D Fuzion division. The XGI Technology Volari V3XT was also a DirectX 9 part on PCI, but with XGI's exit from the graphics card business in early 2006, it is apparently not supported by Aero as of Vista Beta 2. For a long time, ATI's PCI line-up was limited to the R200-based Radeon 9000, 9200 and 9250 cards, which are not capable of running Aero because of their DirectX 8.1 lineage. Indeed, ATI may have helped assure NVIDIA's initial dominance of the DirectX 9-on-PCI niche by buying XGI's graphics card assets. However, in June 2006 a Radeon X1300-based PCI card was spotted in Japan, so it now appears ATI will try to contest the GeForce FX's dominance of this niche. Nonetheless, ATI's deployment of a later-generation GPU in what is likely to be a low-end, non-gamer niche may still leave NVIDIA with the majority of units sold.
Conclusion

The GeForce FX, codenamed NV30, is a graphics card in the GeForce line, and the GeForce FX series is the fifth generation of that line. With the GeForce 3, NVIDIA introduced programmable shader units into its 3D rendering pipeline, in line with the release of Microsoft's DirectX 8.0. The GeForce FX is built using a 0.13-micron fabrication process, unlike the 0.15-micron technology used by the reigning king of the graphics hill, the ATi Radeon 9700. The smaller fabrication process makes it possible for this card to be laden with 125 million transistors; compare this with the 108 million transistors used by the Xeon MP processor. While this fabrication process does offer greater density for packing in more transistors and lower heat emission, it is a difficult process to implement.
