Computer Architecture Topics: Networks and Interconnections
[Figure: processors (P) and memories (M) connected by switches (S) through an interconnection network]
Topologies, Routing, Bandwidth, Latency, Reliability
Network Interfaces
Shared Memory, Message Passing, Data Parallelism
Processor-Memory-Switch Multiprocessors
Updated Technology Trends (Summary)

Technology   Capacity          Speed (latency)
Logic        4x in 4 years     2x in 3 years
DRAM         4x in 3 years     2x in 10 years
Disk         4x in 2 years     2x in 10 years
Network      10x in 5 years (bandwidth)
Workstation performance (measured in SPECmarks) improves roughly 50% per year (2x every 18 months).
Improvement in cost/performance is estimated at 70% per year.
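These multi-year factors are easier to compare as annual growth rates. A minimal sketch, using the slide's figures and the standard compound-growth conversion (annual multiplier = factor ** (1/years)):

```python
# Convert "Kx in N years" technology-trend figures into annual growth rates.

def annual_rate(factor, years):
    """Equivalent per-year multiplier for an overall `factor` over `years`."""
    return factor ** (1.0 / years)

trends = {
    "Logic capacity (4x in 4 years)": (4, 4),
    "DRAM capacity (4x in 3 years)": (4, 3),
    "Disk capacity (4x in 2 years)": (4, 2),
    "Network bandwidth (10x in 5 years)": (10, 5),
}

for name, (factor, years) in trends.items():
    r = annual_rate(factor, years)
    print(f"{name}: {r:.2f}x per year ({(r - 1) * 100:.0f}%/year)")

# Sanity check of the workstation-performance claim: 2x every 18 months
# is 2 ** (12/18) ~= 1.59x per year, consistent with the quoted
# 50-70%-per-year improvement range.
print(f"2x per 18 months = {2 ** (12 / 18):.2f}x per year")
```

Note how different the rates are once normalized: disk capacity doubles yearly while DRAM latency improves only about 7% per year, which is the processor-memory gap in miniature.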
Computer Engineering Methodology
An iterative design cycle:
  1. Evaluate existing systems for bottlenecks
  2. Simulate new designs and organizations
  3. Implement the next-generation system
Inputs to the cycle: technology trends, benchmarks and workloads, implementation complexity.
Understanding the limitations of any measurement tool is crucial.
Metrics of Performance (by level of the system stack)
  Application: answers per month; operations per second
  Programming language / Compiler / ISA: (millions of) instructions per second (MIPS); (millions of) floating-point operations per second (MFLOP/s)
  Datapath / Control / Function units: megabytes per second; cycles per second (clock rate)
  Transistors, wires, pins
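The rate metrics on this slide are simple ratios of work done to execution time. A minimal sketch with made-up counts and runtime (the numbers are illustrative, not from the slide):

```python
def mips(instruction_count, exec_time_s):
    """Millions of instructions per second."""
    return instruction_count / (exec_time_s * 1e6)

def mflops(fp_op_count, exec_time_s):
    """Millions of floating-point operations per second."""
    return fp_op_count / (exec_time_s * 1e6)

# Hypothetical program: 2 billion instructions, 500 million FP ops, 4 s runtime.
print(mips(2e9, 4.0))    # 500.0 MIPS
print(mflops(5e8, 4.0))  # 125.0 MFLOP/s
```

This also shows the measurement-tool caveat above: instruction counts depend on the ISA and compiler, so MIPS figures are not comparable across machines with different instruction sets.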
Benchmark engineering: the motivation is to tune the system to the benchmark to achieve peak performance.
At the architecture level
At the compiler level (compiler flags)
Example: blocking (loop tiling) in SPEC89 gave a factor-of-9 speedup.
Incorrect compiler optimizations/reordering: these would work fine on the benchmark but not on other programs.
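The blocking optimization mentioned above restructures a loop nest so each tile of data is reused while it is still cache-resident. A minimal sketch on matrix multiply (the block size and i-k-j loop order are illustrative choices, not from the slide):

```python
def matmul_blocked(A, B, n, bs=32):
    """Multiply two n x n matrices (lists of lists) in bs x bs tiles,
    so each tile of A, B, and C is reused while it is still in cache."""
    C = [[0.0] * n for _ in range(n)]
    for ii in range(0, n, bs):          # tile the i dimension
        for kk in range(0, n, bs):      # tile the k dimension
            for jj in range(0, n, bs):  # tile the j dimension
                for i in range(ii, min(ii + bs, n)):
                    for k in range(kk, min(kk + bs, n)):
                        a = A[i][k]     # reused across the inner j loop
                        for j in range(jj, min(jj + bs, n)):
                            C[i][j] += a * B[k][j]
    return C
```

In interpreted Python the locality gain is hidden by interpreter overhead; the sketch only shows the loop structure that compilers applied to the SPEC89 kernels, where keeping tiles in cache produced the large speedup.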
SPEC92 spreadsheet program (sp)
Companies noticed that the output was always written to a file and then deleted at the end, a step that was not measured, so they stored the results in a memory buffer instead.
One company eliminated the I/O altogether.
After putting in a blazing performance on the benchmark test, Sun issued a glowing press release claiming that it had outperformed Windows NT systems on the test. Pendragon president Ivan Phillips cried foul, saying the results weren't representative of real-world Java performance and that Sun had gone so far as to duplicate the test's code within Sun's just-in-time compiler. "That's cheating," says Phillips, who claims that benchmark tests and real-world applications aren't the same thing. Did Sun issue a denial or a mea culpa? Initially, Sun neither denied optimizing for the benchmark test nor apologized for it. "If the test results are not representative of real-world Java applications, then that's a problem with the benchmark," Sun's Brian Croll said. After taking a beating in the press, though, Sun retreated and issued an apology for the optimization. [Excerpted from PC Online, 1997]