Computer Architecture Seminar

Speaker Notes

  • Memoization and related techniques save on memory accesses. This technique proposes a way to save on both the accesses and the computation involving the data from those accesses.
  • e.g.: summing all nodes of a 100-node linked list. Every node must be loaded again when, say, only 2 have changed. That is 98 redundant loads (see the C sketch after these notes).
  • If the value being stored differs from what is already in memory (the store is not silent), a support thread (S) is spawned to calculate code section B. The main thread will then skip code section B, since the data has already been calculated. The instructions for B are left in place because the support thread may have failed to spawn; in that case the skip is not taken and the main thread executes B itself.
  • Programmers implement this with C pragma constructs
  • Every time the variable is WRITTEN to, the associated DTThread is executed
  • If the programmer has reason to suspect that the thread may crash or be aborted, they can place the #cancel pragma. This ensures that only the main thread executes this block; the support thread will not be registered.
  • This function is triggered in a new thread (support thread) when control reaches “#block xxx”
  • Start PC: the PC of the skippable code in the main thread. Destination PC: the end of the skippable region. Post-skip PC: the address execution resumes at once the region is skipped.
  • CMP only
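
Below is a plain-C sketch of the linked-list example from these notes. It is only an illustration of the idea, not the paper's API: the names (Node, set_value, list_sum_cached) and the dirty-flag mechanism are stand-ins chosen for this sketch. In hardware data-triggered threads, the not-silent check inside set_value would be performed by a tstore instruction, and the recomputation in list_sum_cached would run in a spawned support thread rather than lazily in the main thread.

/*
 * Software analogy for data-triggered threads, using the 100-node
 * linked-list example from the notes above. All identifiers are
 * hypothetical; only the control structure matters: a write that does
 * not change the value triggers nothing, so the 100-load traversal is
 * skipped whenever the data is unchanged.
 */
#include <stdio.h>
#include <stdlib.h>

typedef struct Node {
    int value;
    struct Node *next;
} Node;

static Node *head = NULL;
static long cached_sum = 0;
static int sum_dirty = 1;     /* set by the "trigger", cleared after recompute */

/* The triggering write: only a non-silent store marks the sum stale.
 * In DTT this same-value check is what tstore does in hardware. */
static void set_value(Node *n, int v)
{
    if (n->value != v) {      /* a silent store (same value) triggers nothing */
        n->value = v;
        sum_dirty = 1;        /* analogue of spawning the support thread */
    }
}

/* The "skippable region": the full traversal runs only when the data
 * actually changed; otherwise all 100 loads are skipped. */
static long list_sum_cached(void)
{
    if (sum_dirty) {
        long s = 0;
        for (Node *n = head; n != NULL; n = n->next)
            s += n->value;
        cached_sum = s;
        sum_dirty = 0;
    }
    return cached_sum;
}

int main(void)
{
    /* build a 100-node list */
    for (int i = 0; i < 100; i++) {
        Node *n = malloc(sizeof *n);
        n->value = i;
        n->next = head;
        head = n;
    }

    printf("sum = %ld\n", list_sum_cached());  /* full traversal */
    set_value(head, head->value);              /* silent store: no recompute */
    printf("sum = %ld\n", list_sum_cached());  /* traversal skipped */
    set_value(head, 1000);                     /* non-silent store */
    printf("sum = %ld\n", list_sum_cached());  /* traversal runs again */
    return 0;
}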

Computer Architecture Seminar Presentation Transcript

  • Data-Triggered Threads: Eliminating Redundant Computation
    (HPCA 2011)
    Hung-Wei Tseng and Dean M. Tullsen
    Department of Computer Science and Engineering
    University of California, San Diego
    Seminar by: Naman Kumar for http://carg.uwaterloo.ca
  • Eliminating Redundant Computation
    Silent Store:
    A memory store operation that does not change the contents at that location
    20-68% of all stores are silent [Lepak and Lipasti]
    What if we could eliminate the entire stream of computation surrounding a silent store?
  • Eliminating Redundant Computation
    Redundant loads:
    silent stores result in redundant loads
    (the last time this load accessed this address, it fetched the same value)
    SPEC2000 C benchmarks:
    78% of all loads are redundant
    50% of all instructions depend on redundant loads
  • Data-Triggered Threads
  • DTT: Implementation
    The Programming Model
    Place redundant computation in a separate thread:
    Thread is restartable
    Thread may be aborted/restarted multiple times
    Thread management is handled through architectural changes.
    Data races are easy to verify because the support thread lives only between the triggering store and the main thread's join point.
  • DTT: Implementation
    The Programming Model
    Trigger is placed in data section, not code section
  • DTT: Implementation
    The Programming Model
    Main Thread
  • DTT: Implementation
    The Programming Model
    Support thread
  • DTT: Implementation
    Architectural Support
    The following tables are all implemented in hardware:
    Thread Registry
    Thread Queue
    Thread Status Table
  • DTT: Implementation
    Architectural Support
    ISA modifications (see the sketch below):
    tstore – generates the support thread when the memory location modified is not silent (the value actually changes)
    tspawn – spawns the thread using the thread registry
    treturn – finishes execution of the current support thread
    tcancel – terminates a running support thread
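
To connect the four instructions above with the pragma-level programming model described in the speaker notes, here is a hedged C sketch. All identifiers (input, result, compute_result) are invented for illustration, and writing the #block/#cancel constructs as #pragma lines is an assumption made only so the file remains ordinary, compilable C; the comments mark where tstore, tspawn, treturn, and tcancel would take effect in hardware.

#include <stdio.h>

static int input;    /* trigger variable: writes to it become tstore */
static int result;   /* produced by the support thread */

/* Support-thread body ("code section B" in the notes). The notes call
 * the construct "#block xxx"; the #pragma spelling here is only so the
 * file still compiles as plain C (unknown pragmas are ignored).
 * tspawn launches this function via the thread registry when a tstore
 * to `input` is not silent; treturn ends it. */
#pragma block compute_result
static void compute_result(void)
{
    result = input * input;   /* stands in for the redundant computation */
    /* treturn: the support thread finishes here */
}

int main(void)
{
    /* tstore: a write to the trigger variable. If the stored value
     * differs from what is already in memory, hardware spawns
     * compute_result() as a support thread. */
    input = 7;

    /* Skippable region: in DTT the main thread jumps from the start PC
     * to the post-skip PC when the support thread has already produced
     * `result`; the instructions stay here as a fallback in case the
     * spawn failed. This sequential sketch always takes the fallback. */
    compute_result();

    /* A "#cancel" region (tcancel) would instead guarantee that only
     * the main thread executes B and no support thread is registered. */

    printf("result = %d\n", result);
    return 0;
}

Compiled as-is, the program simply runs sequentially; the DTT semantics exist only in the comments.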