Conference Program Overview of the 31st IEEE International Performance Computing and Communications Conference (IPCCC'12)
December 1-3, 2012, Austin, Texas, USA
With the rapid growth of the production and storage of large-scale data sets, it is important to investigate methods to drive down the cost of storage systems. We are in the midst of an information explosion, and large-scale storage centers are increasingly used to store the generated data. Among the several methods for reducing the cost of large-scale storage centers, we investigate a technique that transitions storage disks into lower power states. This talk introduces a model of disk systems that leverages disk access patterns to produce energy-saving opportunities for parallel disk systems. We also focus on the implementation of an energy-efficient storage cluster, into which a couple of energy-saving techniques are incorporated. Our modeling and simulation results indicate that large data sizes and knowledge of disk access patterns are valuable for storage-system energy-saving techniques. Storage servers that support media-streaming applications are one key area that would benefit from our strategies.
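The power-state idea above can be illustrated with a simple break-even rule (a sketch with invented power and transition figures, not the model from the talk): a disk should be spun down only when the predicted idle period is long enough that the energy saved in standby outweighs the one-time cost of the state transition.

```python
def should_spin_down(predicted_idle_s, active_w=8.0, standby_w=1.0,
                     transition_j=60.0):
    """Return True if standby saves energy over the predicted idle period.

    Energy staying active:   active_w * idle time.
    Energy in standby:       standby_w * idle time + one-time transition cost.
    All power/energy figures here are illustrative placeholders,
    not measured values.
    """
    stay_active_j = active_w * predicted_idle_s
    spin_down_j = standby_w * predicted_idle_s + transition_j
    return spin_down_j < stay_active_j

# With these placeholder figures, the break-even idle time is
# transition_j / (active_w - standby_w), about 8.6 seconds.
```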
I rebuilt the kernel by adding "hello world!" to the boot message. In what follows, I summarize my process of rebuilding the OS161 kernel. You may also find the three common mistakes at the end of this document.
An Active and Hybrid Storage System for Data-intensive Applications, by Xiao Qin
Since large-scale, data-intensive applications have been widely deployed, there is a growing demand for high-performance storage systems to support them. Compared with traditional storage systems, next-generation systems will embrace dedicated processors to reduce the computational load of host machines and will use hybrid combinations of different storage devices. We present a new active storage architecture, which leverages the computational power of the dedicated processor, and show how it utilizes the multi-core processor and offloads computation from the host machine. We then address the challenge of making the active storage node cooperate with the other nodes in a cluster environment by designing a pipeline-parallel processing pattern, and we report the effectiveness of this mechanism. To evaluate the design, an open-source bioinformatics application is extended based on the pipeline-parallel mechanism. We also explore the hybrid configuration of storage devices within the active storage. The advent of the flash-memory-based solid-state disk has played a critical role in revolutionizing the storage world. However, instead of simply replacing the traditional magnetic hard disk with the solid-state disk, researchers believe that finding a complementary approach that incorporates both of them is more challenging and attractive. Thus, we propose a hybrid combination of different types of disk drives for our active storage system. A simulator is designed and implemented to verify the new configuration. In summary, this dissertation explores the idea of active storage, an emerging storage configuration, in terms of its architecture and design, its parallel processing capability, its cooperation with other machines in a cluster computing environment, and a new disk configuration: the hybrid combination of different types of disk drives.
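The pipeline-parallel pattern the abstract mentions can be sketched as a two-stage pipeline in which one stage (say, the storage node reading and filtering data) overlaps with the next (the host computing on it). This is a minimal illustration with placeholder stages, not the dissertation's actual implementation:

```python
import queue
import threading

def pipeline(chunks, stage1, stage2):
    """Run stage1 and stage2 concurrently over a stream of chunks.

    stage1 (e.g. the storage node preparing data) runs in its own thread
    and feeds a bounded buffer; stage2 (e.g. the host's computation)
    consumes from it. I/O and computation thus overlap in time.
    """
    q = queue.Queue(maxsize=4)      # bounded buffer between the stages
    results = []

    def producer():
        for c in chunks:
            q.put(stage1(c))
        q.put(None)                 # sentinel: no more work

    t = threading.Thread(target=producer)
    t.start()
    while (item := q.get()) is not None:
        results.append(stage2(item))
    t.join()
    return results
```

For example, `pipeline(range(5), lambda c: c * 2, lambda c: c + 1)` pushes each doubled chunk through the second stage while the first stage is still producing.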
Reliability Analysis for an Energy-Aware RAID System, by Xiao Qin
Reliability Analysis for an Energy-Aware RAID System.
S. Yin, M. I. Alghamdi, X.-J. Ruan, Y. Tian, J. Xie, X. Qin, and M. Qiu, Proc. of the 30th IEEE International Performance Computing and Communications Conference (IPCCC), Nov. 2011.
Project 2 in COMP3500 Operating Systems class at Auburn University. The objectives of this project are:
• Use your installed CentOS to build OS/161 and run Sys/161
• Configure and build OS/161 kernels
• Discover important design aspects of OS/161 by examining its source code
• Manage OS/161 using a version control system called cvs; apply cvs to create a repository and track your source code changes
• Use GDB to debug OS/161
This module shows you how to install a software development framework for OS/161.
Lecture: 30 minutes – Slides 1-20.
Demo: 20 minutes
1. Project 2 Specification.docx
2. How to build the tool chain: The MIPS toolchain for os161.txt
3. How to build and run sys161.html
4. gdb.htm and cvs.htm
5. Configuration file: sys161.conf
Below, you can find five source code packages:
6. os161-1.10.tar.gz
7. cs161-binutils-1.4.tar
8. Download cs161-gcc-1.4.tar from: https://dl.dropboxusercontent.com/u/24238235/cs161-gcc-1.4.tar
9. Download cs161-gdb-1.4.tar from: https://dl.dropboxusercontent.com/u/24238235/cs161-gdb-1.4.tar
10. sys161-1.12.tar.gz
Thermal Modeling and Management of Cluster Storage Systems (Xunfei Jiang, 2014), by Xiao Qin
Thermal Modeling and Management of Storage Systems
Author: Jiang, Xunfei
Abstract: The energy consumption of data storage systems has increased significantly over the past decades, and there is an urgent need to build energy-efficient data storage systems. The computing cost of IT facilities and the cooling cost of air conditioners contribute a large portion of the total energy consumption of data centers. Many researchers focus on reducing the computing cost by balancing workloads or powering off idle data nodes to save energy. In recent years, growing attention has been paid to decreasing the cooling cost. Temperature is a major contributor to cooling cost, and thermal management has become a popular topic in building energy-efficient data centers. Extensive research on the thermal impacts of processors and memories has been presented in the literature; however, the thermal impacts of disks have not been fully investigated. In this dissertation, experiments are conducted to characterize the thermal behavior of processors and disks using real-world benchmarks (e.g., postmark and whetstone). The profiling results show that disks have thermal impacts on the overall temperature of a data node comparable to those of processors. We then develop an approach to generate thermal models for estimating the temperatures of processors, disks, and data nodes, and we validate these models by comparing their predictions with real measurements from temperature sensors deployed on data nodes. We further propose an energy model to estimate the total energy cost of data nodes. Finally, by applying our thermal and energy models, we propose thermal management strategies for building energy-efficient data centers. These strategies include a thermal-aware task scheduling strategy, thermal-aware data placement strategies for homogeneous and hybrid storage clusters, and a predictive thermal-aware data transmission strategy.
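As a rough illustration of the kind of utilization-based thermal model the abstract describes, one could estimate a node's steady-state temperature as a linear function of CPU and disk utilization. The coefficients below are invented for illustration, not fitted from the dissertation's measurements:

```python
def predict_node_temp(cpu_util, disk_util,
                      ambient_c=25.0, cpu_coeff=18.0, disk_coeff=12.0):
    """Estimate steady-state node temperature (Celsius) from utilization.

    cpu_util and disk_util are fractions in [0.0, 1.0]. The linear form
    mirrors a utilization-based thermal model; in practice the
    coefficients would be fitted against readings from temperature
    sensors on real data nodes. These defaults are placeholders.
    """
    return ambient_c + cpu_coeff * cpu_util + disk_coeff * disk_util
```

Note that with nonzero disk_coeff, disk activity raises the predicted node temperature on a scale comparable to CPU activity, which is the profiling observation the abstract reports.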
Why Major in Computer Science and Software Engineering at Auburn University?, by Xiao Qin
Computer scientists and software engineers design, analyze, and develop software for the computer systems and networks that power today's world. Whether you're playing a video game, downloading MP3s, talking on a cell phone or even driving your car, you're depending on software. Software applications range from personal computing to entertainment systems to life-critical applications such as medical, flight and space systems. Today's society requires software that is engineered to demanding performance, reliability and safety standards. Engineering such software requires a high degree of specialization. The individuals with the critical expertise to do this are computer scientists and software engineers. It's these professionals who make the magic happen.
The Department of Computer Science and Software Engineering (CSSE) offers three undergraduate degrees to prepare students for success in the world of computing:
Bachelor of Science in Computer Science
Bachelor of Software Engineering
Bachelor of Wireless Engineering
Project 2: How to Install and Compile OS161, by Xiao Qin
README: After installing VirtualBox on my Windows machine, I installed CentOS 6.5 on VirtualBox. Next, I successfully installed cs161-binutils-1.4 and cs161-gcc-1.5.tar. Unfortunately, I encountered the error "configure: error: no termcap library found". As Dustin suggested, installing the missing package solves this problem. Please use the following command to install the package:
yum install ncurses-devel
You don't have to install CentOS 6.5; I believe you can install all the OS161 tools on CentOS 7. You don't have to install VirtualBox either. Nevertheless, if you decide to install CentOS on VirtualBox, please refer to my installation log below.
Note: I rebuilt the kernel by adding "hello world!" to the boot message. In what follows, I summarize my process of rebuilding the OS161 kernel. You may also find the three common mistakes at the end of this document.
How to survive a group project in COMP4710 Senior Design Project? This is a training module in the second lecture of week 1. The module takes approximately 20 minutes. After the training session is done, please check the progress of the development groups.
Data Center Specific Thermal and Energy Saving Techniques, by Xiao Qin
Abstract: Data centers are ever increasing as we become more reliant on web-based transactions. The benefits of such massive computing are obvious in the speed and ease with which we can get most media or information. A challenge is that new large data centers introduce a level of energy consumption that the world has not seen before. The obvious energy cost of running the computers is a billion-dollar problem, but there are hidden costs, like running cooling systems, as well. To help combat the problems of large data centers, we aim to develop solutions that can work for each type of data center. This could entail creating tools that are generic enough to work for all data centers, or focusing on tools specific to the type of software running in the data center. In this talk, we present a thermal model that is flexible enough to be applicable to all data centers, and we show how our model can be used to save energy. We also discuss new energy-saving techniques specifically for Hadoop clusters, where we focus on very data-centric implementations of Hadoop to gain significant energy savings.
Understanding What Our Customer Wants (SlideShare), by Xiao Qin
COMP4710 Senior Design Project - Training Module 2. How to understand our customers' requirements? This training module is covered in the second lecture of week 2 or Lec02b.
Performance Evaluation of Traditional Caching Policies on a Large System with..., by Xiao Qin
Caching is widely known to be an effective method for improving I/O performance by storing frequently used data on higher-speed storage components. However, most existing studies that focus on caching performance evaluate fairly small files populating a relatively small cache. Few reports are available that detail the performance of traditional cache replacement policies on extremely large caches. Do such traditional caching policies still work effectively when applied to systems with petabytes of data? In this paper, we comprehensively evaluate the performance of several cache policies, including First-In-First-Out (FIFO), Least Recently Used (LRU), and Least Frequently Used (LFU), on the global satellite imagery distribution application maintained by the U.S. Geological Survey (USGS) Earth Resources Observation and Science Center (EROS). Evidence is presented suggesting that traditional caching policies are capable of providing performance gains when applied to large data sets, as with smaller data sets. Our evaluation is based on approximately three million real-world satellite image download requests representing global user download behavior since October 2008.
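For readers unfamiliar with the policies under evaluation, a minimal LRU replacement sketch looks like this (an illustration only; the paper evaluates FIFO, LRU, and LFU over real request traces):

```python
from collections import OrderedDict

class LRUCache:
    """Minimal Least-Recently-Used cache: on a miss with a full cache,
    the entry untouched for the longest time is evicted."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.store = OrderedDict()      # iteration order = recency order

    def get(self, key):
        if key not in self.store:
            return None                 # cache miss
        self.store.move_to_end(key)     # mark as most recently used
        return self.store[key]

    def put(self, key, value):
        if key in self.store:
            self.store.move_to_end(key)
        elif len(self.store) >= self.capacity:
            self.store.popitem(last=False)  # evict the LRU entry
        self.store[key] = value
```

FIFO differs only in never updating recency on a hit, and LFU evicts by access count instead of recency; all three fit this same get/put shape.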
HDFS-HC2: Analysis of Data Placement Strategy based on Computing Power of Nod..., by Xiao Qin
Hadoop and the term 'Big Data' go hand in hand. The information explosion caused by cloud and distributed computing has led to the curiosity to process and analyze massive amounts of data. Such processing and analysis help add value to an organization or derive valuable information.
The current Hadoop implementation assumes that the computing nodes in a cluster are homogeneous in nature. Hadoop relies on its capability to take computation to the nodes rather than migrating data around the nodes, which might cause significant network overhead. This strategy has potential benefits in a homogeneous environment, but it might not be suitable in a heterogeneous environment. The time taken to process data on a slower node in a heterogeneous environment might be significantly higher than the sum of the network overhead and the processing time on a faster node. Hence, it is necessary to study a data placement policy that distributes the data based on the processing power of each node. This project explores such a data placement policy and notes the ramifications of this strategy based on running a few benchmark applications.
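The computing-power-based placement idea can be sketched as distributing blocks in proportion to each node's measured speed. This is an illustrative sketch using largest-remainder rounding; HDFS-HC2's actual policy may differ in detail:

```python
def place_blocks(num_blocks, node_speeds):
    """Assign file blocks to nodes in proportion to computing power.

    node_speeds maps a node name to a relative speed measure, so a node
    twice as fast receives roughly twice as many blocks, keeping every
    node busy for about the same wall-clock time.
    """
    total = sum(node_speeds.values())
    shares = {n: num_blocks * s / total for n, s in node_speeds.items()}
    alloc = {n: int(share) for n, share in shares.items()}
    leftover = num_blocks - sum(alloc.values())
    # Hand remaining blocks to the nodes with the largest fractional shares.
    by_fraction = sorted(shares, key=lambda n: shares[n] - alloc[n],
                         reverse=True)
    for n in by_fraction[:leftover]:
        alloc[n] += 1
    return alloc
```

For example, `place_blocks(10, {'fast': 3, 'slow': 2})` gives the faster node six blocks and the slower node four, rather than the even five-five split a homogeneity-assuming placement would produce.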
Publication and long term archival of observational data in the field of environmental sciences is a challenging topic of today's eScience research. The amount of effort that goes into technical and scientific quality assurance prior to publication is considerable and might well turn out to be a barrier to data publication. Our project's goal is to lower the amount of manual effort and, at the same time, increase data quality in the process of submitting observational data for publication – in this case meteorological observational data. This goal is divided into the following subgoals:
Establish a standard procedure for the publication of observational data in the area of meteorology including quality information.
Develop a workflow system for the automation of the publication process.
Make the procedure usable for environmental sciences in general.
Integration of the procedure into an existing central data repository for meteorology (CERA data base at the World Data Center for Climate).
This talk is about the current state of the project from an eResearch and technical point of view.
Our regular Introduction to Data Management (DM) workshop (90 minutes). Covers very basic DM topics and concepts. The audience is graduate students from all disciplines. Most of the content is in the NOTES FIELD.
Docker in Open Science Data Analysis Challenges, by Bruce Hoff, Docker, Inc.
Typically in predictive data analysis challenges, participants are provided a dataset and asked to make predictions. Participants include with their prediction the scripts/code used to produce it. Challenge administrators validate the winning model by reconstructing and running the source code.
Often data cannot be provided to participants directly, e.g., due to data sensitivity (data may be from living human subjects) or data size (tens of terabytes). Further, predictions must be reproducible from the code provided by participants. Containerization is an excellent solution to these problems: rather than providing the data to the participants, we ask the participants to provide a Dockerized "trainable" model. We run both the training and validation phases of machine learning and guarantee reproducibility 'for free'.
We use the Docker tool suite to spin up and run servers in the cloud to process the queue of submitted containers, each essentially a batch job. This fleet can be scaled to match the level of activity in the challenge. We have used Docker successfully in our 2015 ALS Stratification Challenge and our 2015 Somatic Mutation Calling Tumour Heterogeneity (SMC-HET) Challenge, and we are starting an implementation for our 2016 Digital Mammography Challenge.
Why is Test Driven Development for Analytics or Data Projects so Hard?, by Phil Watt
Preview of research results for my Master's thesis on Test-Driven Development in Analytics. Prepared for my Term 4 assignment, an oral thesis presentation.
Adjusting the Focus: Usability Study Aligns Organization Vision with Communit..., by Laurie Bennett
One project sponsored by IEEE, two teams of Southern Polytechnic State University graduate students, one structured approach taught by Dr. Carol Barnum, amazing overlapping results. Professor Carol Barnum, together with her graduate students Laurie Bennett, Jay Jones, and John Weaver, presents the approach, findings, and recommendations revealed during their usability study conducted for the IEEE website Engineeringforchange.org. Learn how the different paths taken during the usability study resulted in identifying the same show-stopping problem areas.
In this workshop, we explore ways to prepare for internship applications and interviews. In the workshop you will:
Learn how to apply for internships
Prepare for interview questions
Follow up with employers
Receive tips that help you secure internships
An earlier version 1.0 can be found here: https://www.slideshare.net/xqin74/how-to-write-papers-part-1-principles/edit?src=slideview
5 Simple Steps to Write a Good Research Paper Title
1. Ask yourself these questions and make note of the answers: What is my paper about? What techniques/designs were used? Who/what is studied? What were the results?
2. Use your answers to list key words.
3. Create a sentence that includes the key words you listed.
4. Delete all unnecessary/repetitive words and link the remaining ones.
5. Delete non-essential information and reword the title.
Making a Competitive NSF CAREER Proposal, Part 2: Worksheet, by Xiao Qin
Dear Colleagues,
I created a worksheet to help you construct the framework of your CAREER proposal. Answering the questions in the worksheet may streamline your thoughts as you develop the key components of your proposal. Any feedback on this worksheet is highly appreciated; I will revise it in the future by incorporating your comments and suggestions.
Xiao (xqin@auburn.edu)
Making a competitive nsf career proposal: Part 1 TipsXiao Qin
A caveat: this document consists of a list of tips drawn from the evaluation criteria of winning CAREER proposals. The following essential tips illustrate "what tasks" you should undertake rather than "how" to perform these tasks.
About This Document
" Proposal preparation phase: Sections 1 (Foundations), 2 (Preliminaries), and 6 (Other Suggestions) offer a list of tips on how to prepare your proposals.
" Proposal writing phase: Sections 3 (Key Components) and 4 (Writing) are comprised of a list of proposal components and writing styles.
" Proposal proofreading phase: Section 5 (Polishing a Proposal Draft) is a final proposal checklist.
In this training session, we provide new CSSE faculty with an introduction to (1) policies related to graduate programs, (2) requirements and regulations, (3) teaching strategies, and (4) how to balance research and teaching. Please note that other CSSE policies (e.g., proposal submissions, startup accounts, CSSE committees) are not covered in this session.
Subject: Welcome Letter
Dear New CSSE Graduate Students,
Welcome to the Department of Computer Science and Software Engineering at Auburn University. The CSSE faculty and I are enthusiastic about teaching and conducting cutting-edge research here; we are excited that you have chosen to join our department to pursue your Master’s or Ph.D. degrees. I am pleased to invite you to an orientation meeting on Thursday, Aug. 24 at 5:00 p.m. in room 3129 Shelby Center. At this kickoff meeting, I will present information on departmental policies, graduate school policies, CSSE graduate programs, assessments, academic standings, qualifying exams, teaching assistantship assignments, mailing list, job applications, E-mail etiquette and a whole lot more.
I look forward to seeing you all on Aug. 24.
Sincerely yours,
X. Qin
--
Xiao Qin, PhD
Professor and Director of Graduate Programs
Department of Computer Science and Software Engineering
3101 Shelby Center
Auburn University AL 36849-5347
voice: (334)844-6335
fax: (334)844-6329
WWW: http://www.eng.auburn.edu/~xqin
Watch this video at: https://www.youtube.com/watch?v=3u4AAGo31a8
Recorded on March 14, 2015. After having followed the Alfred's adult piano course books for three years, I made a radical decision to learn a popular worship song called “Stream of Praise” [1]. A decade ago, I first learned how to sing this song when I was an assistant professor at New Mexico Tech, where minister Anna Tai [4] shared a Stream of Praise CD with me. I have listened to this CD more than a few hundred times. The music video of this spiritual and emotional song can be found on YouTube at https://www.youtube.com/watch?v=KIt9n2Wjlf8 [1].
It is worth mentioning that this recording is a simple piano version of “Stream of Praise”. An advanced version of the song can be found here: https://www.youtube.com/watch?v=DAOrSvexSJ8 [3]. It would take me at least 50 hours to learn that advanced version.
This video is a pilot project for me, because “Stream of Praise” is the first song I learned outside the Alfred's-piano-book world. When I stepped away from Alfred's piano books, I faced three grand challenges. First, it is non-trivial to choose a song that matches my current skill level. Second, there are no fingering suggestions marked on the sheet music. Last, no sample video could be found on YouTube. I tried various finger positions before finalizing my own style, which is marked on the sheet music posted in this video.
I am grateful to my colleague – Dr. Jeffrey Overbey [2] – for teaching me the correct finger positions of bars 4-5. I was amazed by Dr. Overbey’s sight reading skill; he read the sheet music for two seconds and immediately played the song. It took me over 19 hours to learn and practice; in contrast, he could play this song by sight-reading on the first attempt.
I would like to express my gratitude to Mike eKim (https://www.youtube.com/user/mbut123) [5], who offered insightful advice on how to play the first five measures. Mike demonstrated how to play these bars in a video (https://www.youtube.com/watch?v=_QeTQFviE88) posted on his YouTube channel [6].
I would like to thank Sean Fox for his advice on the fingering and tempo issues. He pointed out that I should play the sixteenth notes in bars 4-5 faster.
Bars 1-5 are very difficult; I could not make them sound musical until after two hours of practice. Fortunately, Mike's magic fingering position solved this problem (see [6] for the solution). Currently, I am learning how to play and sing at the same time. Ying enjoys singing this song when I play it on our piano.
The recording success rate is 19.2%, which is slightly higher than that of the previous song (12.5%). The tempo of this recording is 83 BPM, marginally faster than the ideal tempo of 80 BPM.
A Summary of the Learning Process:
Tempo: 83 BPM (Ideal tempo: 80 BPM)
Recording: 47 minutes (26 takes, 5 acceptable videos)
Success Rate: 5/26 = 19.2%
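The summary figures above follow directly from the raw counts in the learning log; a minimal sketch of the arithmetic:

```python
# Reproduce the summary statistics from the recording log above.
acceptable_takes = 5
total_takes = 26
success_rate = acceptable_takes / total_takes * 100  # percent

actual_tempo = 83  # BPM, as recorded
ideal_tempo = 80   # BPM, the target

print(f"Success rate: {success_rate:.1f}%")            # 19.2%
print(f"Tempo deviation: {actual_tempo - ideal_tempo} BPM")  # 3 BPM
```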
Reliability Modeling and Analysis of Energy-Efficient Storage SystemsXiao Qin
With the rapid growth in the production and storage of large-scale data sets, it is important to investigate methods to drive down the cost of storage systems. Many energy conservation techniques have been proposed to achieve high energy efficiency in disk systems. Unfortunately, growing evidence shows that energy-saving schemes in disk drives usually have negative impacts on storage system reliability. Existing reliability models are inadequate to estimate the reliability of parallel disk systems equipped with energy conservation techniques. To solve this problem, we first propose a mathematical model, called MINT, to evaluate the reliability of a parallel disk system in which energy-saving mechanisms are implemented. In this dissertation, MINT focuses on modeling the reliability impacts of two well-known energy-saving techniques: the Popular Data Concentration technique (PDC) and the Massive Array of Idle Disks (MAID). Unlike MAID and PDC, which store a complete file on the same disk, the Redundant Array of Inexpensive Disks (RAID) stripes a file into several parts and stores them on different disks to achieve higher parallelism and, hence, higher I/O performance. However, RAID faces greater challenges in energy efficiency and reliability. To evaluate the reliability of power-aware RAID, we then develop a Weibull-based model, MREED. In this dissertation, we use MREED to model the reliability impacts of a well-known energy-efficient storage mechanism, Power-Aware RAID (PARAID). Third, we focus on validating the two models, MINT and MREED. It is challenging to validate the accuracy of reliability models, since we cannot observe energy-efficient systems for decades, owing to the time and experimental costs involved. We use a validated storage system simulator, DiskSim, to determine whether our models and DiskSim agree with one another. In our validation process, we use a file-access trace from a real-world file system. The last part of this dissertation focuses on improving energy-efficient parallel storage systems. We propose a strategy, Disk Swapping, to improve disk reliability by swapping disks that store frequently accessed data with disks holding less frequently accessed data. In this part, we focus on studying the reliability improvement of PDC and MAID. Finally, we further improve disk reliability by introducing a multiple-disk-swapping strategy.
MATATAG CURRICULUM: ASSESSING THE READINESS OF ELEM. PUBLIC SCHOOL TEACHERS I...NelTorrente
This research concludes that while the readiness of teachers in Caloocan City to implement the MATATAG Curriculum is generally positive, targeted efforts in professional development, resource distribution, support networks, and comprehensive preparation can address the existing gaps and ensure successful curriculum implementation.
Macroeconomics- Movie Location
This will be used as part of your Personal Professional Portfolio once graded.
Objective:
Prepare a presentation or a paper using research, basic comparative analysis, data organization and application of economic information. You will make an informed assessment of an economic climate outside of the United States to accomplish an entertainment industry objective.
How to Build a Module in Odoo 17 Using the Scaffold MethodCeline George
Odoo provides an option for creating a module using a single command. With this command, the user can generate the whole structure of a module, so it is very easy for a beginner to create one; there is no need to make each file manually. This slide deck shows how to create a module using the scaffold method.
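As a minimal sketch of the single-line command the slides describe (assuming an Odoo 17 source checkout and a valid addons path; `my_module` and `addons/` are placeholder names for illustration):

```shell
# Run from the Odoo source directory; "my_module" is a hypothetical module name.
python odoo-bin scaffold my_module addons/

# The scaffold template generates a ready-made module skeleton, roughly:
# addons/my_module/
#   __init__.py
#   __manifest__.py
#   controllers/
#   demo/
#   models/
#   security/
#   views/
```

After scaffolding, the module can be edited and then installed from the Apps menu once the addons path is registered.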
Safalta Digital Marketing Institute in Noida provides comprehensive programs that cover a wide range of digital marketing components, including search engine optimization, digital communication marketing, pay-per-click marketing, content marketing, web analytics, and more. These courses are designed for students who want a thorough understanding of digital marketing strategies. Safalta Digital Marketing Institute in Noida is a first choice for young individuals and students looking to start their careers in the field of digital marketing. The institute offers specialized courses and certifications designed for beginners, providing thorough training in areas such as SEO, digital communication marketing, and PPC in Noida. After finishing the program, students receive certifications recognized by top universities, setting a strong foundation for a successful career in digital marketing.
Executive Directors Chat Leveraging AI for Diversity, Equity, and InclusionTechSoup
Let’s explore the intersection of technology and equity in the final session of our DEI series. Discover how AI tools, like ChatGPT, can be used to support and enhance your nonprofit's DEI initiatives. Participants will gain insights into practical AI applications and get tips for leveraging technology to advance their DEI goals.
2024.06.01 Introducing a competency framework for language learning materials ...Sandy Millin
http://sandymillin.wordpress.com/iateflwebinar2024
Published classroom materials form the basis of syllabuses, drive teacher professional development, and have a potentially huge influence on learners, teachers and education systems. All teachers also create their own materials, whether a few sentences on a blackboard, a highly-structured fully-realised online course, or anything in between. Despite this, the knowledge and skills needed to create effective language learning materials are rarely part of teacher training, and are mostly learnt by trial and error.
Knowledge and skills frameworks, generally called competency frameworks, for ELT teachers, trainers and managers have existed for a few years now. However, until I created one for my MA dissertation, there wasn’t one drawing together what we need to know and do to be able to effectively produce language learning materials.
This webinar will introduce you to my framework, highlighting the key competencies I identified from my research. It will also show how anybody involved in language teaching (any language, not just English!), teacher training, managing schools or developing language learning materials can benefit from using the framework.
A workshop hosted by the South African Journal of Science aimed at postgraduate students and early career researchers with little or no experience in writing and publishing journal articles.
Read| The latest issue of The Challenger is here! We are thrilled to announce that our school paper has qualified for the NATIONAL SCHOOLS PRESS CONFERENCE (NSPC) 2024. Thank you for your unwavering support and trust. Dive into the stories that made us stand out!
Unit 8 - Information and Communication Technology (Paper I).pdfThiyagu K
These slides describe the basic concepts of ICT, the basics of email, emerging technologies, and digital initiatives in education. The presentation aligns with the UGC Paper I syllabus.
Exploiting Artificial Intelligence for Empowering Researchers and Faculty, In...Dr. Vinod Kumar Kanvaria
Exploiting Artificial Intelligence for Empowering Researchers and Faculty,
International FDP on Fundamentals of Research in Social Sciences
at Integral University, Lucknow, 06.06.2024
By Dr. Vinod Kumar Kanvaria
7. Technical Sessions
Performance Evaluation
Distributed Systems
Sensor Networks
GPU
Parallel Computing
Cloud and Cluster Computing
Network Management
Thermal and Power Management
Computer Security
Mobile and Wireless Networks
Resource Management
Theory and Modeling
11/8/2012 7
9. Best Paper Award
Nominations
A Cost-Effective Scheduling Algorithm for Scientific
Workflows in Clouds. By Michelle Zhu et al.
ORCA: An Offloading Framework for I/O-Intensive
Applications on Clusters. By Ji Zhang et al.
A System Analysis of Reputation-Based Defences
Against Pollution Attacks in P2P Streaming. By Md.
Tauhiduzzaman et al.
10. General Chairs
Chengkai Li
Univ. of Texas at Arlington, USA
Youtao Zhang
Univ. of Pittsburgh
11. Give our thanks to
Poster Chair
Jia Rao
University of Colorado at Colorado Springs, USA
Publications Chair
Zhiqiang Lin
University of Texas at Dallas, USA
Registration Chair
Jack Chen
Cisco Systems
12. Give our thanks to
Publicity Chair
Mea Wang
University of Calgary, Canada
Web Chair
Neil Nelson
Samsung
Financial Chair
Nasr Ullah
Samsung