Modern Intel Microprocessors' Architecture and a Sneak Peek at the NVIDIA Tegra GPU


Briefly introduces the features of Intel's NetBurst, Core and Nehalem architectures, along with the heterogeneous NVIDIA Tegra GPGPU.

  • Intel faced power-dissipation problems in NetBurst at its high clock speeds, so it abandoned NetBurst and moved on to the Core microarchitecture.

1. An Architecture Perspective on Modern Microprocessors and GPU — Abhijeet Nawal, 3/25/2011 (AN ARCHITECTURE PERSPECTIVE)
3. Introduction
  • Superscalar homogeneous processors from Intel.
  • Performance = Frequency × IPC.
  • Power = Dynamic Capacitance × Voltage² × Frequency.
  • Dynamic capacitance is the ratio of the electrostatic charge on a conductor to the potential difference required to maintain that charge.
  • The more pipeline stages, the more instructions in flight in the pipeline.
  • More pipeline stages reduce IPC: n instructions in a k-stage pipeline take k + (n − 1) cycles, so IPC = n/(k + n − 1).
  • The lower IPC is offset by increasing the clock rate, since each shorter stage takes less time.
  • Each x86 instruction is CISC, so it decodes into micro-operations (µops).
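The IPC formula on this slide can be checked numerically. The sketch below models only the idealized case (n instructions flowing through a k-stage pipeline with no stalls); it is not a model of any real Intel pipeline.

```python
def ideal_ipc(n_instructions: int, k_stages: int) -> float:
    """IPC of an ideal k-stage pipeline running n instructions:
    total cycles = k + (n - 1), so IPC = n / (k + n - 1)."""
    return n_instructions / (k_stages + n_instructions - 1)

# Deeper pipelines spend more cycles filling up, so throughput per
# cycle (IPC) drops slightly as k grows:
for k in (5, 14, 20, 31):
    print(k, round(ideal_ipc(1000, k), 3))
```

The drop is small for long instruction streams, which is exactly why the slide's next point works: the deeper pipeline's shorter stages allow a higher clock, and Performance = Frequency × IPC comes out ahead (until power becomes the limit, as the speaker note observes).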
11. Introduction…
  • Streaming SIMD Extensions (SSE):
  • SSE instructions perform 128-bit integer arithmetic and 128-bit SIMD double-precision floating-point operations.
  • They reduce the overall number of instructions required to execute a particular program task.
  • They accelerate a broad range of applications, including video, speech and image processing, photo processing, encryption, financial, engineering and scientific applications.
  • Predecode phase:
  • Runs before the pipeline's fetch and decode phases.
  • Bundles instructions that can be executed in parallel.
  • Instructions are appended with marker bits after being fetched from memory, as they enter the instruction cache.
  • This unit therefore also has to analyze structural, control and data hazards.
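The claim that SSE reduces instruction count follows from simple arithmetic: a 128-bit register holds four 32-bit floats (or two 64-bit doubles), so a loop over the data needs roughly a quarter (or half) of the iterations. A hypothetical counting sketch — real SSE is of course expressed as x86 instructions or compiler intrinsics, not Python:

```python
import math

def scalar_ops(n_elements: int) -> int:
    # Scalar code issues one arithmetic instruction per element.
    return n_elements

def sse_ops(n_elements: int, lanes: int = 4) -> int:
    # One 128-bit SSE instruction covers `lanes` packed elements:
    # 4 x 32-bit single precision, or 2 x 64-bit double precision.
    return math.ceil(n_elements / lanes)

print(scalar_ops(1024), sse_ops(1024))           # 1024 vs 256
print(scalar_ops(1024), sse_ops(1024, lanes=2))  # doubles: 1024 vs 512
```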
20. Intel Architectures: NetBurst
21. NetBurst Architecture
22. NetBurst Microarchitecture
23. Features of the NetBurst Architecture
  • Hyper-Threading:
  • One physical processor appears as two logical processors.
  • Each logical processor has its own set of registers and its own APIC (Advanced Programmable Interrupt Controller).
  • Increases resource utilization and improves performance.
  • Introduced SSE3 (Streaming SIMD Extensions 3).
  • Added some DSP-oriented instructions,
  • and some process (thread) management instructions.
30. Features of NetBurst…
  • Hyper-Pipelined Technology:
  • 20-stage pipeline.
  • Branch mispredictions can lead to very costly pipeline flushes.
  • Techniques to hide stall penalties: parallel execution, buffering and speculation.
  • Three major components:
  • in-order issue front end,
  • out-of-order superscalar execution core,
  • in-order retirement unit.
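Why a 20-stage flush is "very costly" can be estimated with the standard CPI model. The flush-penalty and mispredict-rate numbers below are illustrative assumptions, not published NetBurst figures:

```python
def effective_cpi(base_cpi: float, branch_frac: float,
                  mispredict_rate: float, flush_penalty: int) -> float:
    """Base CPI plus the average stall cycles added by pipeline flushes
    on mispredicted branches."""
    return base_cpi + branch_frac * mispredict_rate * flush_penalty

# Assume 20% of instructions are branches and 5% of those mispredict;
# compare a shallow pipeline (flush ~ 10 cycles) with a deep 20-stage
# one (flush ~ 19 cycles).
print(effective_cpi(1.0, 0.20, 0.05, 10))  # 1.1
print(effective_cpi(1.0, 0.20, 0.05, 19))  # 1.19
```

The deeper pipeline pays almost twice the misprediction tax, which is why the slide lists speculation and buffering as essential companions to hyper-pipelining.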
38. Features of NetBurst…
  • In-order issue front end:
  • Two major parts:
  • fetch/decode unit,
  • execution trace cache.
  • Fetch/decode unit:
  • prefetches IA-32 instructions that are likely to be executed (details under Prefetching);
  • fetches instructions that have not already been prefetched;
  • decodes instructions into µops and builds traces.
46. Features of NetBurst…
  • Execution trace cache:
  • the middleman between the first decode stage and the execution stage;
  • caches the decoded µops of repeating instruction sequences, avoiding re-decoding;
  • caches branch targets and delivers µops to execution.
  • Rapid execution engine:
  • the Arithmetic Logic Units (ALUs) run at twice the processor frequency, offsetting the low IPC;
  • basic integer operations execute in half a processor clock tick;
  • provides higher throughput and reduced execution latency.
54. Features of NetBurst…
  • Out-of-order core:
  • Contains multiple execution hardware resources to execute multiple µops in parallel.
  • µops contending for a resource are buffered; meanwhile other µops execute.
  • Dependencies among µops are handled by appropriate buffering and by the in-order retirement logic of the retirement unit.
  • Register-renaming logic helps resolve conflicts.
  • Up to three µops may be retired per cycle.
61. Features of NetBurst…
  • The branch predictor:
  • Dynamically predicts the target of a branch instruction based on its linear address, using the branch target buffer.
  • If no valid dynamic prediction is available, it predicts statically based on the offset of the target:
  • a backward branch is predicted taken, a forward branch is predicted not taken.
  • Return addresses are predicted using a 16-entry return address stack.
  • It does not predict far transfers, for example far calls, interrupt returns and software interrupts.
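The static fallback rule above is simple enough to state directly in code. A minimal sketch — branches are modeled as (branch address, target address) pairs, which abstracts away the BTB's real indexing:

```python
def static_predict_taken(branch_addr: int, target_addr: int) -> bool:
    """Static rule used when no dynamic prediction exists: a backward
    branch (target below the branch, i.e. a loop back edge) is
    predicted taken; a forward branch is predicted not taken."""
    return target_addr < branch_addr

# A loop's closing jump points backward: predicted taken.
print(static_predict_taken(0x1000, 0x0F00))  # True
# An if-skip jumps forward past a block: predicted not taken.
print(static_predict_taken(0x1000, 0x1040))  # False
```

The heuristic works because loop back edges are taken on every iteration but the last, while forward branches (error paths, rare cases) are usually not taken.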
67. Features of NetBurst…
  • Prefetching, by three techniques:
  • the hardware instruction fetcher,
  • software prefetch instructions, and
  • hardware that fetches data and instructions directly into the second-level cache.
  • Caching:
  • Supports up to three levels of cache, all exclusive.
  • First level: separate data and instruction caches, plus the trace cache.
75. Heading to Core
76. Core Microarchitecture
77. Core Microarchitecture
78. Features of the Core Architecture
  • Wide dynamic execution:
  • Each core is wider and can fetch, decode and execute 4 instructions at a time.
  • NetBurst could execute only 3.
  • So a quad-core processor executes 16 instructions at once.
  • Core adds more simple decoders than NetBurst had.
  • Decoders for x86 instructions:
  • simple: translate to one µop;
  • complex: translate to more than one µop.
86. Wide Dynamic Execution…
  • Macrofusion:
  • In previous-generation processors, each incoming instruction was individually decoded and executed.
  • Macrofusion enables common instruction pairs (such as a compare followed by a conditional jump) to be combined into a single internal instruction (micro-op) during decoding.
  • Increases overall IPC and energy efficiency.
  • The architecture uses an enhanced Arithmetic Logic Unit (ALU) to support macrofusion.
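The pairing idea can be sketched as a toy decoder pass: scan the instruction stream and fuse a CMP that is immediately followed by a conditional jump into one internal op. The mnemonic set and the single-pair rule are simplifications of the real decoder's constraints:

```python
def fuse(instructions: list) -> list:
    """Fuse cmp+jcc pairs into single macro-fused internal ops."""
    jcc = {"je", "jne", "jl", "jle", "jg", "jge", "jb", "ja"}
    fused, i = [], 0
    while i < len(instructions):
        op = instructions[i].split()[0]
        nxt = instructions[i + 1].split()[0] if i + 1 < len(instructions) else ""
        if op == "cmp" and nxt in jcc:
            # The pair occupies one decode/execute slot downstream.
            fused.append(instructions[i] + " + " + instructions[i + 1])
            i += 2
        else:
            fused.append(instructions[i])
            i += 1
    return fused

stream = ["mov eax, [esi]", "cmp eax, 0", "je done", "add eax, 1"]
print(fuse(stream))  # 4 instructions become 3 internal ops
```

Since compare-and-branch is the idiom that closes nearly every loop, fusing it effectively widens the machine on exactly the hot path.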
91. Wide Dynamic Execution…
92. Advanced Digital Media Boost
  • NetBurst executed a 128-bit SSE instruction in two cycles, handling 64 bits per cycle.
  • Core executes one 128-bit SSE instruction in a single clock cycle.
94. Smart Memory Access
  • Memory disambiguation:
  • Intelligent algorithms identify which loads are independent of earlier stores and are therefore safe to execute ahead of them, ensuring that no data dependencies are violated.
  • If a speculative load turns out to be invalid, the hardware detects the conflict, reloads the correct data and re-executes the instruction.
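The core of the check can be sketched as an address comparison: before letting a load run ahead of older stores, compare its address against the pending store addresses. Everything here (addresses as plain integers, a single store-buffer list, conservative handling instead of prediction) is a deliberate simplification of the real speculative mechanism:

```python
def may_hoist_load(load_addr: int, pending_store_addrs) -> bool:
    """A load may run ahead of older stores only if no pending store
    could write the same address. A store whose address is not yet
    computed (None) is what the real hardware speculates around;
    this sketch conservatively blocks the load instead."""
    return all(s is not None and s != load_addr
               for s in pending_store_addrs)

print(may_hoist_load(0x100, [0x200, 0x300]))  # True: independent, hoist it
print(may_hoist_load(0x100, [0x200, 0x100]))  # False: conflict, must wait
print(may_hoist_load(0x100, [None]))          # False: address unknown
```

The hardware's advantage over this conservative version is that it predicts independence, executes early, and only in the rare conflict case pays the reload-and-re-execute penalty described on the slide.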
97. Advanced Smart Cache
  • The execution cores share one L2 cache instead of each having its own.
  • Data only has to be stored in one place that each core can access, optimizing cache resources.
  • When one core has minimal cache requirements, the other cores can use a larger share of the L2.
  • Load-based sharing reduces cache misses and increases performance.
  • Advantages: higher cache hit rate, reduced bus traffic and lower latency to data.
102. Intelligent Power Capability
  • Manages the runtime power consumption of all the processor's execution cores.
  • Includes advanced power gating: ultra-fine-grained logic control turns on individual processor logic subsystems only if and when they are needed.
  • Many buses and arrays are split so that data required only in some modes of operation can be put in a low-power state when not needed.
  • Power gating greatly reduced the power footprint compared with previous processors.
106. Heading to Nehalem
107. Nehalem Architecture
  • QuickPath technology
  • Turbo Boost technology
  • Hyper-Threading
  • Smarter cache
  • IPC improvements
  • Enhanced branch prediction
  • Application-targeted accelerators and SSE4.2
  • Intelligent power technology
  • Enhanced virtualization technology support
  • Enhancements over the Core microarchitecture

Nehalem Microarchitecture
117. QuickPath Technology
  • Integrates a memory controller into each microprocessor.
  • Connects processors and other components with a new high-speed interconnect.
  • Scalable architecture in which memory scales with the number of processors.
  • Scalable shared-memory implementation support.
  • Scalable compute architecture.
  • Lower memory access latency.
125. Turbo Boost Technology
  • Automatically allows active processor cores to run faster than the base operating frequency.
  • The Turbo Boost available for a given workload depends on:
  • the number of active cores,
  • estimated current consumption,
  • estimated power consumption, and
  • processor temperature.
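The four inputs above can be sketched as a gating function: grant extra frequency bins only while every headroom check passes. The bin sizes, limits and thresholds below are invented illustrative values, not Intel's actual algorithm or specifications:

```python
def turbo_bins(active_cores: int, est_current_a: float,
               est_power_w: float, temp_c: float) -> int:
    """Return the number of extra frequency bins granted above base
    clock (0 = stay at base). All limits are illustrative only."""
    # Any exceeded envelope means no headroom at all.
    if est_current_a > 100 or est_power_w > 95 or temp_c > 85:
        return 0
    # With headroom, fewer active cores allow a bigger boost.
    return 2 if active_cores <= 2 else 1

print(turbo_bins(1, 60.0, 70.0, 60.0))  # lightly loaded: 2 bins
print(turbo_bins(4, 60.0, 70.0, 60.0))  # all cores busy: 1 bin
print(turbo_bins(1, 60.0, 99.0, 60.0))  # power limit hit: 0 bins
```

The design point to notice is that boost is opportunistic: it spends thermal and electrical headroom left over by idle cores, rather than raising the rated envelope.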
131. Hyper-Threading in Nehalem
132. Smarter Cache
  • A new second-level Translation Lookaside Buffer (TLB):
  • has 512 entries;
  • improves virtual-to-physical address translation for a page, which in turn saves memory clock cycles.
  • New three-level cache hierarchy:
  • L1 (32 KB instruction cache, 32 KB 8-way data cache) per core;
  • L2 (256 KB, 8-way) per core;
  • L3 (8 MB, 16-way) shared among the cores.
  • All caches are inclusive.
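"Inclusive" means every line held in an L1 or L2 also has a copy in the L3, so the distinct data the hierarchy can hold is bounded by the L3 size; in an exclusive design (as in the NetBurst slide earlier) the levels would add up. A small arithmetic sketch using the Nehalem sizes above:

```python
def unique_capacity_kb(l1_kb: int, l2_kb: int, l3_kb: int,
                       cores: int, inclusive: bool) -> int:
    """Distinct cacheable data across the whole hierarchy, in KB."""
    inner = (l1_kb + l2_kb) * cores
    # Inclusive: the L3 duplicates everything in the inner levels,
    # so the L3 alone bounds the total. Exclusive: levels hold
    # different lines, so the capacities add.
    return l3_kb if inclusive else l3_kb + inner

# Nehalem-like: 32+32 KB L1 and 256 KB L2 per core, 8 MB shared L3.
print(unique_capacity_kb(64, 256, 8192, 4, inclusive=True))   # 8192
print(unique_capacity_kb(64, 256, 8192, 4, inclusive=False))  # 9472
```

Inclusion trades a little capacity for a big simplification: a miss in the L3 guarantees the line is in no core's private cache, so coherence snoops never need to probe the other cores' L1/L2.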
140. Smarter Cache…
  • The L3 cache can be scaled in size based on the number of cores.
  • A central queue acts as a crossbar and arbiter between the four cores and the "uncore" region of Nehalem.
  • The uncore includes the L3 cache, the integrated memory controller and the QPI links.
  • Each core supports up to 10 outstanding data cache misses and 16 total outstanding misses.

IPC Improvements
  • Increased size of the out-of-order window and buffers.
  • Improved implementation of synchronizing instructions such as XCHG, so existing threaded software sees a performance boost.
  • Improved hardware prefetch and better load/store scheduling.
146. IPC Improvements…
  • Loop Stream Detector:
  • first identifies repeating instruction sequences;
  • once a loop is identified, the traditional branch prediction, fetch and decode phases are temporarily turned off while the loop executes;
  • this saves the cycles that would otherwise be wasted in those pipeline stages on a repeated set of instructions.
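A toy version of the detection step: look at the tail of the instruction-address stream for a window that repeats back-to-back, which is the point at which the front end could be gated off. The window bound and the flat address-list representation are simplifications of the real detector:

```python
def detect_loop(trace: list, max_len: int = 4) -> int:
    """Return the length of a repeating instruction-address pattern at
    the end of `trace` (0 if none) -- a stand-in for the Loop Stream
    Detector spotting a hot loop."""
    for length in range(1, max_len + 1):
        if (len(trace) >= 2 * length
                and trace[-length:] == trace[-2 * length:-length]):
            return length
    return 0

# Addresses of a 3-instruction loop body executing twice in a row:
trace = [0x10, 0x14, 0x18, 0x10, 0x14, 0x18]
print(detect_loop(trace))  # 3 -> fetch/decode could be powered down
```

Once the loop's µops are replayed from a small buffer, the power and cycles normally spent on predict/fetch/decode are pure savings, which is why the feature pays off on tight hot loops.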
150. Enhanced Branch Prediction and SSE4.2
  • Enhanced branch prediction:
  • new second-level branch target buffer, to improve predictions in applications with large code footprints (e.g., database applications);
  • new renamed return stack buffer, which stores the forward and return pointers associated with calls and returns.
  • SSE4.2:
  • introduces seven new instructions, including four that optimize string and text processing.
  • STTNI (string and text new instructions):
  • operate on 16 bytes at a time;
  • this boosts XML parsing speed and enables faster search and pattern matching, lexing, tokenizing and regular-expression evaluation.
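The 16-bytes-at-a-time idea behind STTNI (e.g. the PCMPISTRI instruction) can be mimicked in pure Python: scan a buffer one 16-byte chunk per step rather than one byte at a time. This models only the chunking and step count, not the actual instruction semantics:

```python
def find_byte_16wide(data: bytes, needle: int):
    """Return (index of first match or -1, chunks examined), scanning
    16 bytes per step the way PCMPxSTRx-style search loops do."""
    chunks = 0
    for base in range(0, len(data), 16):
        chunks += 1
        pos = data[base:base + 16].find(needle)
        if pos != -1:
            return base + pos, chunks
    return -1, chunks

xml = b"<doc><item>hello</item></doc>" + b" " * 100
idx, steps = find_byte_16wide(xml, ord(">"))
print(idx, steps)  # the first '>' is found within the first chunk
```

An XML tokenizer spends most of its time hunting for delimiter bytes like `<`, `>` and `"`, so cutting the number of loop iterations by up to 16x is exactly where the slide's parsing speedup comes from.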
159. Intelligent Power Technology
  • Integrated power gates:
  • allow idle cores to be powered down independently to near-zero power, reducing idle power.
  • Automated low-power states:
  • automatically put the processor and memory into the lowest available power states that still meet the requirements of the current workload.
163. Enhanced Virtualization
  • QuickPath-enabled scalable shared memory:
  • the hypervisor can pin a virtual machine to a specific processor and its dedicated memory.
  • Hardware-assisted page-table management:
  • allows the guest OS more direct access to the hardware, reducing compute-intensive software translation in the hypervisor.
  • Directed I/O:
  • speeds data movement and eliminates much of the performance overhead by giving designated virtual machines their own dedicated I/O devices, reducing the VMM's overhead in managing I/O traffic.
  • Virtualized connectivity:
  • integrates extensive hardware assists into the I/O devices;
  • performing routing to and from virtual machines in dedicated network silicon speeds delivery and reduces the load on the VMM and server processors;
  • roughly doubles throughput compared with non-hardware-assisted devices.
173. Enhancements Over the Core Microarchitecture
  • Pipeline: 14 stages in Core, but 20 to 24 stages in Nehalem.
  • Branch prediction: advanced return stack buffer (RSB) and an L2 branch predictor.
  • Unified second-level TLB: 512 entries, against Core's 256.
  • Macrofusion: now also works on 64-bit macro-ops, where Core supported only 32-bit mode.
  • Loop Stream Detection: more efficient in Nehalem.
  • The execution engine and the out-of-order machinery:
  • the reorder buffer is a third larger, up from 96 to 128 entries;
  • the reservation station (which schedules operations onto available execution units) gains four extra slots, for 36 entries.
181. SNEAK PEEK AT NVIDIA TEGRA GPU
182. KEY FEATURES
  • Eight processors, independently power-managed.
183. KEY FEATURES…
  • Graphics processor:
  • renders 3D visuals, gaming and the touch interface.
  • Video decode processor:
  • macroblock algorithms, VLD and color-space conversions for HD video streaming and playback.
  • Video encode processor:
  • video encode algorithms for HD streaming and recording.
189. KEY FEATURES…
  • Image signal processor:
  • light balance, edge enhancement and noise reduction for real-time photo enhancement.
  • Audio processor:
  • analog-signal audio processing.
  • Dual-core ARM Cortex-A9 CPU:
  • for general-purpose computing, e.g. web surfing.
  • ARM7 processor:
  • system-management functions such as monitoring the battery and turning processing units on and off.
197. Key Features…
  • Each processor is optimized for specific tasks.
  • Intelligent power management achieves the lowest power footprint.
  • Multitasking loads are handled by enabling a dedicated set of processors.
  • For non-multitasking loads, only the processor most optimized for the task is turned on; the rest are powered off.
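The turn-on-only-what-you-need policy can be sketched as a mapping from workloads to the processors they require. The processor names come from the slides, but the mapping table itself is an invented illustration, not NVIDIA's actual scheduler:

```python
# Hypothetical workload -> required-processor table; the always-on
# ARM7 handles system management (battery, unit on/off) in every state.
NEEDS = {
    "hd_playback": {"video_decode", "audio"},
    "web_surfing": {"cpu", "gpu"},
    "camera":      {"isp"},
    "idle":        set(),
}

def powered_on(workloads: list) -> set:
    """Enable only the processors the active workloads need (plus the
    ARM7 system-management core); everything else stays powered off."""
    on = {"arm7"}
    for w in workloads:
        on |= NEEDS[w]
    return on

print(sorted(powered_on(["hd_playback"])))                 # three units on
print(sorted(powered_on(["web_surfing", "hd_playback"])))  # multitasking set
```

Under a single workload the chip behaves like a small fixed-function device; multitasking simply unions the required sets, which is the dedicated-set behavior the slide describes.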
201. REFERENCES AND COURTESY
  • Intel's white papers
  • NVIDIA Tegra white paper
  • 2041-3.html