CSC 714 Projects

  Speaker notes (for P4: CPU Shielding, below):
  • "Unenforceable": later tasks can silently override affinity settings; there is no facility to guarantee isolation. "As expected": the affinity is only changeable while an IRQ is pending, so hardly ever. Useful guarantees: guarantees in the tens of milliseconds may be feasible. PCI latencies were measured at around 10 µs worst case; 36 µs is the calculated worst case.
  • Hot-plug is supported on some Linux platforms, but only per chip, not per core. Linux changes required: enforce task CPU placement, allow a per-CPU scheduler set, early IRQ mapping, etc. Virtualization: do any existing systems allow hardware partitioning like this? Latency Timer: the PCI spec allows writing a maximum bus-hold timer per device, trading latency against efficiency.

CSC 714 Projects: Presentation Transcript

  • CSC 714 Projects Spring 2009
  • P1: Pull Based Migration of Real-Time Tasks in Multi-Core Processors
    • Real-Time Scheduling on Multi-processors
      • Practical Schedulers
        • Dynamic assignment of tasks to multi-cores
        • Fundamental Premise: tasks can migrate
    • Problem
      • Task Migration causes cold cache misses
    • Temporal Guarantee in the Wake of Task Migration
    Task Migration
  • Solution
    • Scheduler has knowledge of
      • Task to migrate
      • Source and Target Cores
    • Developer knows the critical memory regions
      • Combine the two into a prefetch thread
    • Working
      • Scheduler spawns prefetch thread at target
        • Prefetch thread has lowest priority
      • Overlap prefetching with slack before next invocation
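
    A minimal sketch of the prefetch-thread idea in C, assuming POSIX threads on Linux; the region descriptor, cache-line size, and affinity call are illustrative rather than the project’s actual code.

        #define _GNU_SOURCE
        #include <pthread.h>
        #include <sched.h>
        #include <stddef.h>

        /* Hypothetical descriptor for a developer-identified critical memory region. */
        struct critical_region { const char *base; size_t len; };

        /* Lowest-priority thread pinned to the migration target: it simply walks the
         * critical regions so their cache lines are warm before the task's next job. */
        static void *prefetch_thread(void *arg)
        {
            const struct critical_region *regions = arg;   /* NULL-base terminated */
            volatile char sink = 0;
            for (size_t r = 0; regions[r].base != NULL; r++)
                for (size_t off = 0; off < regions[r].len; off += 64)  /* 64 B lines, assumed */
                    sink = regions[r].base[off];
            return NULL;
        }

        /* Spawn the prefetcher on the target core chosen by the scheduler. */
        static int spawn_prefetcher(int target_cpu, struct critical_region *regions)
        {
            pthread_t tid;
            pthread_attr_t attr;
            cpu_set_t mask;

            CPU_ZERO(&mask);
            CPU_SET(target_cpu, &mask);
            pthread_attr_init(&attr);
            pthread_attr_setaffinity_np(&attr, sizeof(mask), &mask);  /* GNU extension */
            /* A real scheduler hook would also assign the lowest priority here. */
            return pthread_create(&tid, &attr, prefetch_thread, regions);
        }
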
  • Simulation Results
    • Matmult:
      • minor impact of task migration
        • Algorithmic complexity O(n^3)
    • CRC
      • Prefetch thread does not fill the L1 I$
      • L1 I$ misses remain significant
    • CRC
      • Scattered Critical Regions
      • Additional prefetch function calls
    • Inferences
    • Cold cache misses can be hidden with the prefetch scheme, giving tighter bounds
    • Future work: memory layout changes may make the prefetch thread’s overhead easier to predict
  • P2: Security Techniques in Cyber-Physical Systems
    • Real-Time Systems Lack
      • Resources
      • Complex OS
    • Failures can
      • Affect the Environment
      • Affect Infrastructure
      • Harm Life
  • Real-Time Security Features
    • Static Timing Analysis provides
      • WCET
      • BCET
    • Predictable Scheduler Interrupts
      • Check in on jobs
    • Use timing data to bound regions of the running jobs
      • In Application
      • In Scheduler
  • Experiment
    • Extend Current Research
      • Periodic Scheduler and Program Instrumentation
        • Wide-Granularity Checkpoints at Fixed Distances
      • Periodic Scheduler Orthogonal to Program
        • Arbitrary PC wakeup values must ensure appropriate pessimism
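
    A minimal sketch of the checkpoint idea: instrumented points compare the job’s elapsed time against bounds derived from static timing analysis. The x86 rdtsc time source, the bound values, and the abort-on-violation policy are illustrative assumptions.

        #include <stdint.h>
        #include <stdio.h>
        #include <stdlib.h>

        /* Illustrative cycle-counter read (x86 rdtsc as a stand-in time source). */
        static inline uint64_t cycles(void)
        {
            uint32_t lo, hi;
            __asm__ volatile ("rdtsc" : "=a"(lo), "=d"(hi));
            return ((uint64_t)hi << 32) | lo;
        }

        static uint64_t job_start;

        /* Hypothetical per-checkpoint WCET bounds, in cycles, from static analysis. */
        static const uint64_t wcet_bound[] = { 1000000, 2500000, 4200000 };

        static void job_released(void)
        {
            job_start = cycles();
        }

        /* Instrumented checkpoint: running past the analysed bound is suspicious. */
        static void checkpoint(int id)
        {
            uint64_t elapsed = cycles() - job_start;
            if (elapsed > wcet_bound[id]) {
                fprintf(stderr, "checkpoint %d: %llu cycles exceeds bound\n",
                        id, (unsigned long long)elapsed);
                abort();   /* or hand control back to the scheduler */
            }
        }
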
  • P3: Android Adhoc Wifi Networking
    • Ad-hoc networking between G1 Android phones
    • Social networking application
      • Text messages
      • Status updates
      • GPS coordinates
  • Solution: Adhoc Client Application
    • Borrow from Wifi-Tether Open Source project
      • Init files, iptables, dnsmasq, tether script
    • Discovery of other Ad-hoc endpoints
      • WifiManager not available
      • Alternate between DHCP Server/Client
      • UDP broadcast heartbeat messages (see the sketch after this list)
    • Threads
      • UDP sender, UDP receiver, network control
    • GUI Screens
      • Settings, Incoming Events, Friend List, My Status, Message View
    • Tester program (runs on Laptop)
      • Echoes back status messages and GPS coordinates
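
    The discovery code itself is an Android (Java) application; the sketch below only illustrates the UDP broadcast heartbeat in portable C sockets, with the port number and message format invented for illustration.

        #include <arpa/inet.h>
        #include <netinet/in.h>
        #include <string.h>
        #include <sys/socket.h>
        #include <unistd.h>

        #define HEARTBEAT_PORT 4445   /* illustrative port */

        /* Broadcast a short heartbeat so other ad-hoc endpoints can discover us. */
        static int send_heartbeat(const char *my_id)
        {
            int s = socket(AF_INET, SOCK_DGRAM, 0);
            int on = 1;
            struct sockaddr_in dst = { 0 };

            if (s < 0)
                return -1;
            setsockopt(s, SOL_SOCKET, SO_BROADCAST, &on, sizeof(on));

            dst.sin_family = AF_INET;
            dst.sin_port = htons(HEARTBEAT_PORT);
            dst.sin_addr.s_addr = htonl(INADDR_BROADCAST);   /* 255.255.255.255 */

            sendto(s, my_id, strlen(my_id), 0, (struct sockaddr *)&dst, sizeof(dst));
            close(s);
            return 0;
        }
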
  • Results
    • Successfully connected 2 phones in Ad-hoc mode
      • DHCP Server/Client
    • Also included an Access Point mode
    • Laptop tester (screenshot)
  • P4: CPU Shielding: Basics
    • Multi-core/Multi-processor Systems
    • Start with standard Linux
    • Partition CPUs between normal and RT
    • Force all standard work to normal CPUs
    • Use existing IRQ and task affinities (see the sketch below)
    • Project purpose: Demonstrate feasibility
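
    A minimal sketch of the two partitioning primitives the project relies on: sched_setaffinity(2) for task placement and the /proc/irq/<n>/smp_affinity bitmap for IRQ routing. The CPU number, IRQ number, and mask value are illustrative.

        #define _GNU_SOURCE
        #include <sched.h>
        #include <stdio.h>

        /* Pin the calling task to one CPU (e.g. the "normal", unshielded CPU). */
        static int pin_task_to_cpu(int cpu)
        {
            cpu_set_t mask;
            CPU_ZERO(&mask);
            CPU_SET(cpu, &mask);
            return sched_setaffinity(0 /* this task */, sizeof(mask), &mask);
        }

        /* Route an IRQ away from the shielded CPU by writing its affinity bitmap;
         * "1" means CPU 0 only. */
        static int route_irq_to_cpu0(int irq)
        {
            char path[64];
            FILE *f;

            snprintf(path, sizeof(path), "/proc/irq/%d/smp_affinity", irq);
            f = fopen(path, "w");
            if (!f)
                return -1;
            fprintf(f, "1\n");
            fclose(f);
            return 0;
        }

    As the results below note, neither setting is enforced by stock Linux: later tasks can override the affinity, and some IRQs must stay per-CPU.
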
  • CPU Shielding: Results
    • Task affinities work, but are unenforceable
    • IRQ affinities don’t work as expected
    • Several IRQs must be per-CPU (e.g., local timer, TLB helper)
    • Without kernel changes no useful guarantees are possible
    • With kernel changes, PCI bus speed becomes the concern (a single PCI port write takes up to 36 µs on lab systems).
  • CPU Shielding: Future
    • Look at hiding CPUs & hardware via hot-plug system
    • Kernel driver & structural changes to implement scheduler for “off-line” CPUs
    • Not a simple change. Worthwhile?
        • Security & Reliability of dual systems
        • Easier to implement using existing virtualization methods?
    • Limiting PCI latencies via ‘Latency Timer’
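
    A hedged sketch of one way to adjust a device’s PCI Latency Timer from user space: the register is the byte at offset 0x0D of the device’s configuration space, which Linux exposes under /sys/bus/pci/devices/<id>/config. The device path and tick value below are placeholders.

        #include <fcntl.h>
        #include <stdint.h>
        #include <unistd.h>

        #define PCI_LATENCY_TIMER 0x0D   /* standard config-space offset */

        /* Write a smaller bus-hold limit (in PCI clocks) for one device. */
        static int set_latency_timer(const char *cfg_path, uint8_t ticks)
        {
            int fd = open(cfg_path, O_WRONLY);
            if (fd < 0)
                return -1;
            if (pwrite(fd, &ticks, 1, PCI_LATENCY_TIMER) != 1) {
                close(fd);
                return -1;
            }
            close(fd);
            return 0;
        }

        /* e.g. set_latency_timer("/sys/bus/pci/devices/0000:00:1f.0/config", 32); */
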
  • P5: Preemption-Threshold-Aware Task Scheduling Simulator (CSC 714 Term Project, April 2009), by Sangyeol Kang and Kinjal Bhavsar
    • Needs at design time
      • To help simulation of trial schedules
      • To support preemption threshold scheduling
    • Implementation of timing simulator
      • RM, DM, EDF
      • Preemptive/Non-preemptive
    • Supporting Preemption Threshold Scheduling
      • For fixed priority scheduling
      • Computing appropriate PT values using the MPTA algorithm
      • Preempt only when “priority > PT of current running task” (see the sketch after this list)
    • Graphical representation of simulated scheduling
      • By using third-party tool (gnuplot)
    Motivation / Objectives
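
    A minimal sketch of the preemption test applied under preemption threshold scheduling; the task structure and the larger-number-is-higher-priority convention are illustrative.

        /* Each task carries a nominal fixed priority and a preemption threshold
         * (PT >= priority), e.g. computed off-line by the MPTA algorithm. */
        struct task {
            int priority;               /* nominal priority, larger = higher */
            int preemption_threshold;   /* PT */
        };

        /* A newly released task preempts the running one only if its nominal
         * priority exceeds the running task's preemption threshold. */
        static int should_preempt(const struct task *arriving, const struct task *running)
        {
            return arriving->priority > running->preemption_threshold;
        }
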
  • Results (screenshots): simulation statistics, graphical representation, task set & parameters
  • P6: Fault Tolerant Algorithm for Multi Hop Wireless Sensor Networks
    • Features
      • Rerouting capability on failure
      • Fault Tolerance through hardware redundancy
      • Isolation and self-recovery
      • Supports mobile sensor networks
      • Deployment of data mule to fetch/bridge isolated nodes.
    • Built using Contiki OS and LNPD
      • Used the Reliable Unicast (runicast) primitive in Contiki’s Rime communication stack for mote-to-mote communication (sketched after this list)
      • Used periodic broadcast for dynamic ad-hoc network formation.
      • LNPD is used for directing RCX data mule to bridge nodes
      • Used serial communication from mote to PC for instructing RCX
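
    A minimal sketch of mote-to-mote communication over the Rime reliable unicast (runicast) primitive, modelled on the Contiki 2.x examples; the channel number, neighbour address, retransmission count, and payload are illustrative.

        #include "contiki.h"
        #include "net/rime.h"   /* Contiki 2.x Rime stack: runicast, packetbuf */
        #include <stdio.h>

        static struct runicast_conn runicast;

        static void recv_runicast(struct runicast_conn *c, const rimeaddr_t *from,
                                  uint8_t seqno)
        {
            printf("data from %d.%d, seqno %u\n", from->u8[0], from->u8[1], seqno);
        }

        static void sent_runicast(struct runicast_conn *c, const rimeaddr_t *to,
                                  uint8_t retransmissions) { }

        static void timedout_runicast(struct runicast_conn *c, const rimeaddr_t *to,
                                      uint8_t retransmissions)
        {
            /* Delivery failed: the point where rerouting / fault handling kicks in. */
        }

        static const struct runicast_callbacks cb = { recv_runicast, sent_runicast,
                                                      timedout_runicast };

        PROCESS(sensor_process, "runicast sketch");
        AUTOSTART_PROCESSES(&sensor_process);

        PROCESS_THREAD(sensor_process, ev, data)
        {
            static rimeaddr_t next_hop;              /* chosen by the routing layer */
            PROCESS_EXITHANDLER(runicast_close(&runicast);)
            PROCESS_BEGIN();

            runicast_open(&runicast, 144, &cb);      /* channel 144: illustrative */

            next_hop.u8[0] = 2;                      /* placeholder neighbour address */
            next_hop.u8[1] = 0;
            packetbuf_copyfrom("reading", 8);
            runicast_send(&runicast, &next_hop, 4);  /* up to 4 retransmissions */

            PROCESS_END();
        }
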
  • Open Issues and Improvements
      • Implementing the multi-coordinator network
      • Capability to place nodes using RCX
      • The rover should be more sophisticated
      • Investigate feasibility of implementation using TCP/IP stack in Contiki
  • P7: Service Time Overlay for Google Maps, by Karthikeyan Sivaraj and Mansoor Aftab
    • An application to report average service times (waiting time)
    • Users may like to know waiting times for banks, restaurants etc. beforehand
      • Helps user plan schedule better
    • Accessible on a custom Google Maps overlay
      • User can navigate to the desired location and click to know
    • Data for service times is collected from the user’s phone
    Introduction
    • We use the Location (GPS) API provided in the Android SDK
    • Client component implemented on user phones
      • A background service reports location changes to server every minute
      • Sends UDP messages with location and time spent
      • MapView component to access service times
    • Server component maintains database of locations and corresponding service times
      • Implements various rules to decide validity of received data
      • Responds to queries from the MapView component
    Implementation
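
    The server’s internals are not detailed on the slides; as one possible shape of the idea, here is a C sketch that folds each reported dwell time into a running average keyed by a 20 m grid cell. The grid scheme, units, and fixed-size table are assumptions.

        #define GRID_METERS 20.0   /* matches the 20 m distance resolution */
        #define MAX_CELLS   1024

        struct cell_stat {
            long grid_x, grid_y;    /* quantised location (assumed scheme)  */
            double avg_wait_min;    /* running average of reported waits    */
            unsigned samples;
        };

        static struct cell_stat cells[MAX_CELLS];
        static unsigned ncells;

        /* Fold one client report (position in metres, wait in minutes) into the table. */
        static void record_wait(double x_m, double y_m, double wait_min)
        {
            long gx = (long)(x_m / GRID_METERS), gy = (long)(y_m / GRID_METERS);
            for (unsigned i = 0; i < ncells; i++) {
                if (cells[i].grid_x == gx && cells[i].grid_y == gy) {
                    cells[i].samples++;
                    cells[i].avg_wait_min +=
                        (wait_min - cells[i].avg_wait_min) / cells[i].samples;
                    return;
                }
            }
            if (ncells < MAX_CELLS)
                cells[ncells++] = (struct cell_stat){ gx, gy, wait_min, 1 };
        }
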
    • Successfully implemented/tested the application with a distance resolution of 20m
    • Provided user option to enable/disable background service
    Results
  • P8: Save Gas Map
    • Problem description
    • To identify the shortest route through a given set of addresses and display it on the map
    Team Members: Raghuveer Raghavendra Prasanna Jeevan
  • Implementation details
    • Features
    • Takes multiple locations as input
    • Finds the shortest distance between all the locations (using a heuristic algorithm for TSP; see the sketch after this list)
    • Arranges the locations into an optimal route
    • Displays the locations on a Google map
    • Major classes used are Android Location, Maps and Overlay
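
    The slides only say a heuristic TSP algorithm is used; as one concrete example of such a heuristic, here is a nearest-neighbour ordering sketch in C over a precomputed distance matrix (the real app runs on Android against the Location/Maps APIs).

        #include <float.h>
        #include <stdbool.h>

        #define MAX_STOPS 16

        /* Greedy ordering: always visit the nearest unvisited location next.
         * dist[i][j] is assumed precomputed (road or great-circle distance). */
        static void nearest_neighbour_route(int n,
                                            const double dist[MAX_STOPS][MAX_STOPS],
                                            int route[MAX_STOPS])
        {
            bool visited[MAX_STOPS] = { false };
            int current = 0;                 /* start from the first address */

            route[0] = current;
            visited[current] = true;
            for (int step = 1; step < n; step++) {
                int best = -1;
                double best_d = DBL_MAX;
                for (int j = 0; j < n; j++) {
                    if (!visited[j] && dist[current][j] < best_d) {
                        best_d = dist[current][j];
                        best = j;
                    }
                }
                route[step] = best;
                visited[best] = true;
                current = best;
            }
        }
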
  • Snapshots
    • Description: Design a PACMAN game using Lego-bots.
    • Motivation: Design a simple system from scratch to study the interactions between the different units and the design and timing complexities encountered.
    • Aim: Pacman should traverse the entire maze while staying out of the enemy’s way
    P9: PACMAN Game Using Lego-bots
      • PACBOT and EnemyBots = line-followers + Messaging unit + Path-decision Logic
      • Central Control = IR tower + messaging unit + display
    • Issues:
      • Working of LNPD/Message setup
      • Path-decision algorithm
      • Timing
      • Orientation of RCX wrt the IR tower
      • Maze design
    • Pacman needs to know enemy’s coordinates and direction
    • Adjusted weights used for choosing direction
    • Adjusted weight is a function of the number of untraversed cells in the direction and the presence/distance of enemy bots (see the sketch below).
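
    A minimal sketch of what such an adjusted-weight rule could look like; the penalty constant and the discrete 4-direction model are illustrative, not the project’s actual formula.

        /* Score a candidate direction: more untraversed cells is better, and a
         * nearby enemy in that direction is penalised heavily. */
        static int adjusted_weight(int untraversed_cells,
                                   int enemy_in_direction,     /* 0 or 1 */
                                   int enemy_distance_cells)
        {
            int w = untraversed_cells;
            if (enemy_in_direction)
                w -= 10 / (enemy_distance_cells + 1);   /* illustrative penalty */
            return w;
        }

        /* Pick the direction (0..3 = N/E/S/W) with the highest adjusted weight. */
        static int choose_direction(const int weight[4])
        {
            int best = 0;
            for (int d = 1; d < 4; d++)
                if (weight[d] > weight[best])
                    best = d;
            return best;
        }
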
  • P10: Android Wifi Social Networking, by Sushmita Lokala and Phil Marquis (CSC 714)
  • Application
    • Read from disk
    • Get location
    • Get wifi status
    • Send to server
    • Receive from server
    • Write to disk
    • Draw map
    • Draw overlays
  • Classes
    • 1. Wifi
    • WifiNetwork: SSID, Level
    • WifiData: Latitude, Longitude, WifiNetworks
    • WifiUserHistory: Name, WifiDatas
    • 2. Maps
    • WifiOverlay
  • Classes
    • 3. Serialization: Disk
    • ObjectOutputStream
    • 4. Serialization: Server
    • UDPServer
  • P11: A New Real-Time Kernel Developed on an Embedded Platform
    • Team: Balasubramanya Bhat, Sandeep Budanur Ramanna
    CSC 714: Real-Time Systems Project, Spring 2009
  • Features
    • A new real-time kernel developed from scratch
    • Supports Periodic & Aperiodic tasks, Semaphores & Mutex
    • EDF based scheduling for periodic tasks (deadlines <= period)
    • The scheduler is capable of creating tasks based on (φ, p, e, D) parameters.
    • 1 µs granularity for all timing parameters (φ, p, e, D)
    • Aperiodic tasks are scheduled using static priority based preemptive scheduling.
    • The scheduler can also keep track of the current CPU utilization.
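
    A minimal sketch of the EDF selection rule over (φ, p, e, D) parameters; the structures are illustrative and not the kernel’s actual data types (the real kernel keeps deadline-ordered priority queues, per the design below).

        #include <stddef.h>
        #include <stdint.h>

        /* Periodic task parameters, all in microseconds (1 µs granularity). */
        struct periodic_task {
            uint64_t phase;          /* φ: offset of the first release     */
            uint64_t period;         /* p                                  */
            uint64_t wcet;           /* e                                  */
            uint64_t rel_deadline;   /* D, with D <= p                     */
            uint64_t abs_deadline;   /* current job's absolute deadline    */
            int      ready;
        };

        /* EDF: among ready jobs, run the one with the earliest absolute deadline. */
        static struct periodic_task *edf_pick(struct periodic_task *tasks, size_t n)
        {
            struct periodic_task *best = NULL;
            for (size_t i = 0; i < n; i++)
                if (tasks[i].ready &&
                    (best == NULL || tasks[i].abs_deadline < best->abs_deadline))
                    best = &tasks[i];
            return best;   /* NULL: idle, or fall back to an aperiodic task */
        }
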
  • Design (block diagram): the Ready Q is a priority queue ordered by the next DEADLINE, the Wait Q by the next RELEASE time, the Aperiodic Q by task PRIORITY, and the Resource Block Q by Deadline/PRIORITY; the queues are driven by Timer 0 and Timer 1.
  • Current Status
    • Completed
    • Implemented on C6713 DSK
      • TMS320C6713 DSP Processor
      • VLIW Architecture (with 8 instructions / cycle)
    • Tested for all parameters (φ, p, e, D)
    • Keeps track of Deadline miss & TBE counts for every thread
    • Also keeps track of per-thread execution time up to 1 µs resolution
    • About 2400 SLOC of source code (1000 lines of assembly)
    • Things to do
    • Overall CPU utilization to be maintained
    • Test aperiodic tasks with resources
    • Implement Sleep
    • Fix a few bugs
    • Test with some real benchmarks
  • P12: MAC Protocol Implementation on Atmel AVR for Underwater Communication, by Shaolin Peng
    CSC 714 Real Time Computer Systems
  • Protocols: Aloha and MACA; target MCU: ATmega168; network: small & sparse, small packet size; development platform: STK500 with AVR Studio
  • Problem List
    • P1: Debugging Instrument
      • Set up UART communication with the HyperTerminal on PC
      • Connect two boards using wire as a start
    • P2: Starvation
      • Wait only after sending, not after receiving
    • P3: Flexible Length Packet Receiving
      • Receive the first two bytes, decode and decide
    • P4: CRC Consideration
      • 4 bits -> 8 bits (x^8 + x^2 + x + 1)
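
    For reference, a bit-by-bit CRC over the stated generator x^8 + x^2 + x + 1 (polynomial byte 0x07); the zero initial value and the absence of reflection or final XOR are assumptions, since the slides do not specify them.

        #include <stddef.h>
        #include <stdint.h>

        /* CRC-8 with generator x^8 + x^2 + x + 1 (0x07), computed bit by bit. */
        static uint8_t crc8(const uint8_t *data, size_t len)
        {
            uint8_t crc = 0x00;   /* initial value: assumed */
            for (size_t i = 0; i < len; i++) {
                crc ^= data[i];
                for (int b = 0; b < 8; b++)
                    crc = (crc & 0x80) ? (uint8_t)((crc << 1) ^ 0x07)
                                       : (uint8_t)(crc << 1);
            }
            return crc;
        }
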
  • Problem List 2
    • P5: Random Number Generator
      • ADC or Timer/Counter (the timer option is sketched after this list)
    • P6: No response problem
      • Set a maximum number of retries
    • P7: Hardware Limitation
      • Compile different files using different optimization levels
        • E.g. -O3 for Goertzel algorithm (critical path)
            • -Os for the other files
          • Code size (text / data / bss / total, in bytes):
          • 14986 / 302 / 141 / 15429 with different optimization levels
          • 16380 / 338 / 109 / 16827 with the same optimization level
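
    One way to realise the timer-counter option from P5 above, sketched for the ATmega168 with avr-libc: seed the C library PRNG from the free-running Timer/Counter1. Which timer to sample, when to sample it, and the backoff range are assumptions.

        #include <avr/io.h>
        #include <stdlib.h>

        /* Seed rand() from Timer/Counter1, sampled at an asynchronous event
         * (e.g. first carrier detection) so the seed differs between nodes. */
        static void seed_backoff_rng(void)
        {
            srand(TCNT1);   /* 16-bit free-running counter on the ATmega168 */
        }

        /* Random backoff slot count for the MAC protocol (range illustrative). */
        static unsigned random_backoff_slots(void)
        {
            return (unsigned)(rand() % 8);
        }
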
  • Experimental Setup: Lake Raleigh. Throughput = successfully received packets / total packets sent out
  • Results
    • Indoors
      • Aloha: 27.2%
      • MACA: 23.8%
    • Outdoors
      • Aloha: 8.1%
      • MACA: 8.2%
    • Comparison with MACA
      • During the testing time, Aloha received almost twice as much data as MACA
  • P13: Power-Aware DVFS on PowerPC 405LP: Front Bus Scaling
    • Mohamed Nishar Kamaruddin
    • Santhosh Selvaraj
    • OVERVIEW
    • Previous work with the IBM 405LP board showed that Feedback-DVS of the processor voltage and frequency produces considerable power savings.
    • Our work is to study the frequency scaling for the memory subsystem to achieve power savings. We also study the feasibility of integrating this with the existing feedback DVS-EDF scheduling schemes.
  • Development to date
    • Experimented with different operating points, including ones with the same processor frequency but different memory subsystem frequencies, across a number of applications.
    • Changes to data acquisition programs and sample applications to record the power savings of the memory subsystem.
    • Integration of PLB frequency scaling into the various existing feedback DVS-EDF scheduling schemes.
  • Results
    • Frequency scaling of the memory subsystem was found to produce significant power savings. Reducing the FSB frequency from 100 MHz to 50 MHz for a tight no-op loop application produced nearly 34% energy savings.
    • Fitting this into the PID feedback scheduling, we changed all operating points to use half of their original PLB frequencies. Energy savings then: 1.38%.
    • This is because the various operating points defined in the PID feedback scheduling code already scale PLB frequency along with processor frequency.
    • For memory-intensive operation, energy savings: 1.1%