20100530 Insider Notes on the CPU (关于 CPU 的私房菜)
 

    Presentation Transcript

    • Insider Notes on the CPU (关于 CPU 的私房菜) by Peng Wu
    • Contents 1. The von Neumann model 2. Harvard architecture 3. Pentium superscalar 4. MIPS pipeline 5. Hyper-threading 6. A minimal CPU implementation
    • 1. The von Neumann model
    • The von Neumann model: a stored-program design in which instructions and data share a single memory and travel over a single bus between the memory and the CPU.
    • 2. Harvard architecture
    • Harvard architecture The Harvard architecture is a computer architecture with physically separate storage and signal pathways for instructions and data. These early machines had limited data storage, entirely contained within the central processing unit, and provided no access to the instruction storage as data, making loading and modifying programs an entirely offline process.
    • Harvard architecture In a computer with the contrasting von Neumann architecture (and no cache), the CPU can be either reading an instruction or reading/writing data from/to the memory. Both cannot occur at the same time since the instructions and data use the same bus system. In a computer using the Harvard architecture, the CPU can both read an instruction and perform a data memory access at the same time, even without a cache. A Harvard architecture computer can thus be faster for a given circuit complexity because instruction fetches and data access do not contend for a single memory pathway.
    • Modified Harvard architecture The Modified Harvard architecture is very much like the Harvard architecture but provides a pathway between the instruction memory and the CPU that allows words from the instruction memory to be treated as read-only data. This allows constant data, particularly text strings, to be accessed without first having to be copied into data memory, thus preserving more data memory for read/write variables. Special machine language instructions are provided to read data from the instruction memory.
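As a rough software analogy of that read-only pathway (the class and method names are hypothetical; nothing here models real hardware), constants can live alongside code and be reached only through a dedicated load-from-code operation, much like AVR's `lpm` instruction:

```python
class ModifiedHarvardToy:
    def __init__(self, program, rodata):
        # Code and read-only constants share the instruction memory.
        self._imem = bytes(program) + bytes(rodata)
        self.rodata_base = len(program)
        self.dmem = bytearray(64)  # ordinary read/write data memory

    def load_from_code(self, addr):
        # Analogue of a special "read instruction memory as data"
        # instruction: constants are fetched without first being
        # copied into data memory.
        return self._imem[addr]

mh = ModifiedHarvardToy(program=b"\x01\x02", rodata=b"Hi")
greeting = bytes(mh.load_from_code(mh.rodata_base + i) for i in range(2))
print(greeting)  # b'Hi'
```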
    • 3. Pentium Superscalar
    • Pentium Superscalar Superscalar architecture — The Pentium has two datapaths (pipelines) that allow it to complete more than one instruction per clock cycle. One pipe (called U) can handle any instruction, while the other (called V) can handle the simplest, most common instructions.
    • EGCS fork of GCC In 1997, a group of developers formed EGCS (Experimental/Enhanced GNU Compiler System) to merge several experimental forks into a single project. The basis of the merger was a GCC development snapshot taken between the 2.7 and 2.8.1 releases. Projects merged included g77 (Fortran), PGCC (Pentium-optimized GCC), many C++ improvements, and many new architecture and operating-system variants. EGCS development proved considerably more vigorous than GCC development, so much so that the FSF officially halted work on its GCC 2.x compiler, "blessed" EGCS as the official version of GCC, and appointed the EGCS project as the GCC maintainers in April 1999. Furthermore, the project explicitly adopted the "bazaar" model over the "cathedral" model. With the release of GCC 2.95 in July 1999, the two projects were once again united.
    • 4. MIPS pipeline
    • Instruction pipeline An instruction pipeline is a technique used in the design of computers and other digital electronic devices to increase their instruction throughput (the number of instructions that can be executed in a unit of time).
    • MIPS Architecture
    • MIPS 5-stage pipeline
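For the classic 5-stage MIPS pipeline (IF, ID, EX, MEM, WB), the standard cycle counts for N instructions with no stalls are N x S without pipelining versus S + (N - 1) with it; a small sketch (the function names are illustrative):

```python
STAGES = ["IF", "ID", "EX", "MEM", "WB"]  # classic 5-stage MIPS pipeline

def unpipelined_cycles(n, s=len(STAGES)):
    # Each instruction occupies the whole datapath for all s stages.
    return n * s

def pipelined_cycles(n, s=len(STAGES)):
    # The first instruction fills the pipeline (s cycles); after that,
    # one instruction completes per cycle, assuming no hazards.
    return s + (n - 1)

print(unpipelined_cycles(100))  # 500
print(pipelined_cycles(100))    # 104
```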
    • 5. Hyper-threading
    • Hyper-threading Hyper-threading is an Intel- proprietary technology used to improve parallelization of computations (doing multiple tasks at once) performed on PC microprocessors. For each processor core that is physically present, the operating system addresses two virtual processors, and shares the workload between them when
    • Hyper-threading Hyper-threading works by duplicating certain sections of the processor—those that store the architectural state—but not duplicating the main execution resources. This allows a hyper- threading processor to appear as two "logical" processors to the host operating system, allowing the operating system to schedule two threads or processes simultaneously. When execution resources would not be used by the current task in a processor without hyper-threading, and especially when the processor is stalled, a hyper- threading equipped processor can use those execution resources to execute another scheduled task.
    • Hyper-threading The processor may stall and switch thread due to:  a cache miss  branch misprediction  data dependency
    • 6. A minimal CPU implementation
    • A minimal CPU implementation Original paper: "A Tiny Computer" by Chuck Thacker, Microsoft Research.
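In the spirit of a tiny teaching CPU (this sketch is far simpler than Thacker's actual design, and the opcode set is invented for illustration), a minimal accumulator machine fits in a few lines:

```python
def run_cpu(program, mem):
    """Minimal accumulator CPU over a list-based memory.
    Instructions are (op, arg) pairs; the opcode set is illustrative:
    LOAD/STORE move mem <-> accumulator, ADD adds from memory,
    JNZ jumps when the accumulator is non-zero, HALT returns it."""
    acc, pc = 0, 0
    while True:
        op, arg = program[pc]
        pc += 1
        if op == "LOAD":
            acc = mem[arg]
        elif op == "STORE":
            mem[arg] = acc
        elif op == "ADD":
            acc += mem[arg]
        elif op == "JNZ" and acc != 0:
            pc = arg
        elif op == "HALT":
            return acc

# Compute 2 + 3 and store the result at mem[2].
mem = [2, 3, 0]
prog = [("LOAD", 0), ("ADD", 1), ("STORE", 2), ("HALT", 0)]
print(run_cpu(prog, mem), mem[2])  # 5 5
```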
    • Q&A
    • Thanks for coming!