IEEE Final Year Projects 2011-2012 :: Elysium Technologies Pvt Ltd
IEEE projects, final year projects, students project, be project, engineering projects, academic project, project center in madurai, trichy, chennai, kollam, coimbatore
This document summarizes a research paper on dynamic consolidation of virtual machines in cloud data centers to manage overloaded hosts while maintaining quality of service constraints. It proposes using a Markov chain model and control algorithm to optimally detect host overloads by maximizing the average time between VM migrations, while meeting a specified QoS goal. The algorithm handles unknown workloads using a multisize sliding window approach. Evaluation shows the algorithm efficiently solves the problem of host overload detection as part of dynamic VM consolidation in cloud computing systems.
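The paper's Markov-chain controller is not reproduced in this summary. As a rough illustration of the multisize sliding-window idea it mentions, here is a toy overload check; the window sizes, the threshold, and the all-windows-agree rule are assumptions for illustration, not the paper's actual algorithm:

```python
def overloaded(history, window_sizes=(4, 8, 16), threshold=0.8):
    """Toy multisize sliding-window overload check (illustrative only):
    the host is flagged as overloaded when the mean CPU utilization
    within every window size exceeds the threshold."""
    votes = []
    for w in window_sizes:
        window = history[-w:]  # most recent w samples
        votes.append(sum(window) / len(window) > threshold)
    return all(votes)

# CPU utilization samples (fraction of capacity) for one host
samples = [0.5, 0.6, 0.9, 0.95, 0.92, 0.97, 0.93, 0.99]
print(overloaded(samples))
```

Using several window sizes at once is what lets the detector adapt to unknown, non-stationary workloads: short windows react quickly, long windows filter out transient spikes.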
A Threshold Secure Data Sharing Scheme for Federated Clouds (IJORCS)
The document proposes a secure data sharing scheme for federated clouds. The scheme uses a Trusted Cloud Authority (TCA) that controls participating clouds and generates private and public keys. Each cloud encrypts a secret value using its private key without knowing other clouds' values. They run a secure multi-party computation to calculate an encrypted sum polynomial. The TCA can later recover the original secret value from the sum polynomial without learning individual secret values. The scheme aims to ensure privacy and integrity of secret data shared between clouds during distributed computations.
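The scheme's encrypted sum polynomial is not detailed in this summary. The core idea, that parties jointly compute a sum without revealing individual values, can be sketched with plain additive secret sharing (a simpler stand-in for the paper's encrypted multi-party computation):

```python
import random

P = 2**61 - 1  # public prime modulus agreed by all parties

def share(secret, n):
    """Split a secret into n additive shares that sum to it mod P."""
    shares = [random.randrange(P) for _ in range(n - 1)]
    shares.append((secret - sum(shares)) % P)
    return shares

# Three clouds, each with a private value no other party should learn.
values = [42, 17, 99]
n = len(values)
# Each cloud sends one share of its value to every other cloud ...
all_shares = [share(v, n) for v in values]
# ... each cloud publishes only the sum of the shares it received ...
partial_sums = [sum(all_shares[i][j] for i in range(n)) % P for j in range(n)]
# ... and the authority recombines the partials into the total.
total = sum(partial_sums) % P
print(total)  # 158 = 42 + 17 + 99, with no individual value disclosed
```

Any single share (or partial sum) is uniformly random, so no participant learns another's secret; only the recombined total is meaningful.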
IJRET: International Journal of Research in Engineering and Technology is an international peer-reviewed, online journal published by eSAT Publishing House for the enhancement of research in various disciplines of Engineering and Technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching and research in the fields of Engineering and Technology. We bring together Scientists, Academicians, Field Engineers, Scholars and Students of related fields of Engineering and Technology.
This document provides 6 IEEE project summaries in the domain of Java and cloud computing/data mining. The summaries are:
1. A decentralized access control scheme for secure cloud data storage that supports anonymous authentication.
2. A performance analysis framework for distributed file systems that qualitatively and quantitatively evaluates performance.
3. Approaches to guarantee trustworthy transactions on cloud servers by enforcing policy consistency constraints.
4. A scalable MapReduce approach for anonymizing large datasets to satisfy privacy requirements like k-anonymity.
5. A resource allocation scheme for a self-organizing cloud that achieves maximized utilization and optimal execution efficiency.
6. An attribute-based encryption framework for flexible
Enhanced Integrity Preserving Homomorphic Scheme for Cloud Storage (IRJET Journal)
This document discusses enhancing integrity preservation for cloud storage using a homomorphic encryption scheme. It begins with an abstract that outlines using MD5 algorithm for integrity checks on fully homomorphic encrypted data. It then provides background on issues with privacy and integrity in cloud computing. The document reviews related work on cloud security and integrity verification. It discusses challenges with ensuring data integrity when stored remotely in the cloud and proposes using a homomorphic encryption scheme along with MD5 for integrity preservation of outsourced data in the cloud.
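The MD5 integrity check the abstract describes reduces to storing a digest of the outsourced (encrypted) data and re-hashing on retrieval; the homomorphic encryption step is omitted here, and the record contents are made-up placeholders:

```python
import hashlib

def digest(data: bytes) -> str:
    """MD5 fingerprint stored alongside the outsourced ciphertext."""
    return hashlib.md5(data).hexdigest()

# Owner uploads data (in the paper, this would be the encrypted blob)
stored = b"encrypted-record-0001"
expected = digest(stored)

# Later, the retrieved copy is re-hashed and compared with the stored digest.
assert digest(stored) == expected                    # intact copy passes
assert digest(b"encrypted-record-0002") != expected  # altered copy fails
print("integrity check passed")
```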
This document discusses several cloud computing projects from IEEE in 2014. It provides descriptions of 8 projects, including their titles, programming languages, links, and abstract summaries. The projects focus on topics like network coding-based cloud storage systems, privacy-preserving search over encrypted cloud data, cloud service composition, cloud resource procurement, and competition/cooperation among cloud providers.
Flawless coding and authentication of user data using multiple clouds (IRJET Journal)
This document discusses secure data storage in multiple cloud storage providers. It proposes a method for users to store encrypted data across multiple cloud storage providers using splitting and merging concepts. Private keys are generated during file access using a pseudo key generator and encrypted using 3DES for transmission. The method aims to increase data availability, confidentiality and reduce costs by distributing data across multiple cloud providers. It also discusses using image compression with reversible data hiding techniques to provide data confidentiality when storing images in the cloud.
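The exact splitting/merging concept is not specified in this summary; one minimal interpretation is byte-striping a file across providers so that no single cloud holds the whole file (the 3DES encryption step the paper applies to each piece is omitted here):

```python
def split(data: bytes, n: int):
    """Round-robin byte striping across n providers (illustrative;
    the paper additionally encrypts each piece before upload)."""
    return [data[i::n] for i in range(n)]

def merge(parts):
    """Re-interleave the stripes to recover the original bytes."""
    n = len(parts)
    out = bytearray(sum(len(p) for p in parts))
    for i, part in enumerate(parts):
        out[i::n] = part
    return bytes(out)

secret = b"store me across three clouds"
parts = split(secret, 3)
assert merge(parts) == secret
print([len(p) for p in parts])  # no single provider holds the whole file
```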
A Study on Replication and Failover Cluster to Maximize System Uptime (Yogesh, IJTSRD)
This document summarizes a study on using replication and failover clusters to maximize system uptime for cloud services. It discusses challenges in ensuring high availability of cloud services from a provider perspective. The study aims to present a high availability solution using load balancing, elasticity, replication, and disaster recovery configuration. It reviews related literature on digital media distribution platforms, content delivery networks, auto-scaling strategies, and database replication impact. It also covers methodologies like CloudFront, state machine replication, neural networks, Markov decision processes, and sliding window protocols. The scope is to build a scalable, fault-tolerant environment with disaster recovery and ensure continuous availability. The conclusion is that data replication and failover clusters are necessary to plan data
This document summarizes a research paper that proposes a scheme for ensuring security and reliability of data stored in the cloud. The scheme utilizes erasure coding to redundantly store encrypted data fragments across multiple cloud servers. It generates homomorphic tokens that allow auditing of the data storage and identification of any misbehaving servers. The scheme supports secure dynamic operations like modification, deletion and append of cloud data files. Analysis shows the scheme is efficient and resilient against various security threats like server compromises or failures. It ensures storage correctness and fast localization of data errors in the cloud.
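The erasure-coded redundancy the scheme relies on can be illustrated in miniature with single-parity XOR coding, where any one lost fragment is rebuilt from the survivors (the paper's homomorphic token auditing is a separate mechanism, not shown here):

```python
def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def encode(fragments):
    """Append one parity fragment: the XOR of all data fragments.
    Any single lost fragment can then be rebuilt from the rest."""
    parity = fragments[0]
    for frag in fragments[1:]:
        parity = xor_bytes(parity, frag)
    return fragments + [parity]

def recover(stripes, lost_index):
    """Rebuild the fragment at lost_index by XOR-ing the survivors."""
    survivors = [s for i, s in enumerate(stripes) if i != lost_index]
    rebuilt = survivors[0]
    for s in survivors[1:]:
        rebuilt = xor_bytes(rebuilt, s)
    return rebuilt

# Equal-length fragments, one per cloud server
data = [b"clouds01", b"serverAB", b"frag#003"]
stripes = encode(data)
assert recover(stripes, 1) == b"serverAB"  # server 1 failed; data rebuilt
```

Production schemes use stronger codes (e.g. Reed-Solomon) that tolerate multiple simultaneous failures; the principle of recomputing lost data from redundant fragments is the same.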
Public Key Encryption algorithms Enabling Efficiency Using SaaS in Cloud Comp... (Editor IJMTER)
The greatest challenge in cloud computing is security, and security plays a key role at the point of end-user access. The concept proposed in this paper mainly deals with security for end users, who connect through public networks and want their applications or services protected from unauthorized persons. In this area, encryption and decryption methods such as RSA, 3DES, MD5, Blowfish, etc. can be applied, and these services can be utilized at the end-user access point in cloud computing.
However, encrypting and decrypting messages, services and applications is a problem: it takes a lot of time, and a large amount of processing capability is needed to run the mechanism. To address this problem, we introduce the use of cloud computing in the SaaS model; because SaaS is scalable, it can be utilized whenever it is required.
Cloud computing is the use of computing resources (hardware and software) delivered as a service over the Internet. Earlier there was also the problem of key size in various algorithms: with a 64-bit key, for example, it takes a long period to encrypt the data.
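Since the abstract names RSA among the candidate algorithms, here is textbook RSA with toy primes; this is for illustration only (real RSA uses padding and keys of 2048 bits or more, and these tiny parameters are trivially breakable):

```python
# Textbook RSA with toy primes -- for illustration only, never for real use.
p, q = 61, 53
n = p * q                 # 3233, the public modulus
phi = (p - 1) * (q - 1)   # 3120
e = 17                    # public exponent, coprime with phi
d = pow(e, -1, phi)       # private exponent (modular inverse of e mod phi)

message = 123
ciphertext = pow(message, e, n)    # encrypt with the public key (e, n)
plaintext = pow(ciphertext, d, n)  # decrypt with the private key (d, n)
assert plaintext == message
print(ciphertext, plaintext)
```

The cost the abstract complains about is visible even here: encryption and decryption are modular exponentiations, which is why the paper proposes offloading them to a scalable SaaS layer.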
The document discusses cloud service life-cycle management and related topics. It covers (1) the cloud service life-cycle including requirements, discovery, negotiation, composition, and consumption phases, (2) high-level cloud deployment scenarios such as single cloud system, multiple cloud systems serially or simultaneously, (3) tools for cloud service development and testing including NetBeans, Eclipse, Apache JMeter, and SoapUI, and (4) the concept of web service slicing to capture a functional subset of a large-scale web service for regression testing purposes.
- The document proposes a new approach to decrease the impact of SLA (service level agreement) violations on user satisfaction levels in cloud computing environments.
- It uses two hidden user characteristics - willingness to pay for service and willingness to pay for certainty - to inform a proactive resource allocation approach.
- The goal is to improve user satisfaction and profitability by considering these characteristics, rather than just SLA parameters, when deciding how to allocate resources during critical situations where some SLA violations are unavoidable.
A Prolific Scheme for Load Balancing Relying on Task Completion Time (IJECE, IAES)
In networks with a lot of computation, load balancing gains increasing significance. To offer various resources, services and applications, the ultimate aim is to facilitate the sharing of services and resources on the network over the Internet. A key issue to be addressed in networks with a large amount of computation is load balancing. Load is the number of tasks 't' performed by a computation system, and it can be categorized as network load and CPU load. For an efficient load balancing strategy, the process of assigning load between nodes should enhance resource utilization and minimize computation time, which can be accomplished by distributing the load uniformly across all the nodes. A load balancing method should guarantee that each node in a network performs an almost equal amount of work relative to its capacity and available resources. Relying on task subtraction, this work presents a pioneering algorithm termed E-TS (Efficient Task Subtraction), which selects appropriate nodes for each task. The proposed algorithm improves the utilization of computing resources and preserves neutrality in assigning the load to the nodes in the network.
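The E-TS rules themselves are not given in this summary. The goal it states, work proportional to each node's capacity, can be sketched with a greedy lowest-relative-load assignment (a generic stand-in, not the E-TS algorithm):

```python
import heapq

def assign(tasks, capacities):
    """Greedy capacity-aware assignment: each task goes to the node
    with the lowest load-to-capacity ratio."""
    # heap of (relative load, node index, absolute load)
    heap = [(0.0, i, 0.0) for i in range(len(capacities))]
    heapq.heapify(heap)
    placement = []
    for cost in tasks:
        ratio, node, load = heapq.heappop(heap)
        load += cost
        placement.append(node)
        heapq.heappush(heap, (load / capacities[node], node, load))
    return placement

tasks = [4, 2, 7, 1, 3, 5]
nodes = assign(tasks, capacities=[10, 5, 5])
print(nodes)  # the higher-capacity node 0 absorbs proportionally more work
```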
A Study of A Method To Provide Minimized Bandwidth Consumption Using Regenera... (IJERA Editor)
Cloud storage systems protect data from corruption by keeping redundant data to tolerate storage failures, and lost data must be repaired when storage fails. Regenerating codes provide fault tolerance by striping data across multiple servers while using less repair traffic than traditional erasure codes during failure recovery. Previous research implemented a practical Data Integrity Protection (DIP) scheme for regenerating-coding-based cloud storage, constructing FMSR-DIP codes on top of Functional Minimum-Storage Regenerating (FMSR) codes; these allow clients to remotely verify the integrity of random subsets of long-term archival data in a multi-server setting. The remaining problem is to optimize bandwidth consumption when repairing multiple failures: cooperative repair of multiple failures can further save bandwidth when several failures are repaired together.
This document summarizes a research paper that proposes a load balancing algorithm for cloud computing using process migration. The algorithm aims to improve resource utilization by transferring processes from heavily loaded virtual machines to lightly loaded or idle ones. It describes related work on existing load balancing approaches and process migration. The proposed mechanism designates a server virtual machine to monitor member virtual machines' workloads and a balancer virtual machine to determine overloaded and underloaded members and migrate processes between them using a VM process migrator module. This helps balance loads across virtual machines to avoid overloading and improve overall resource efficiency.
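A single balancing pass of the kind the summary describes (the balancer VM deciding which processes to migrate from overloaded to underloaded members) might look like the following sketch; the thresholds and the cheapest-process-first rule are assumptions for illustration, not the paper's exact policy:

```python
def rebalance(vms, high=80, low=30):
    """One balancing pass: move process loads away from VMs above
    `high` utilization toward VMs below `low`. `vms` maps a VM name
    to a list of per-process loads (percentage points of CPU)."""
    migrations = []
    donors = [v for v in vms if sum(vms[v]) > high]
    receivers = [v for v in vms if sum(vms[v]) < low]
    for donor in donors:
        while sum(vms[donor]) > high and receivers:
            proc = min(vms[donor])  # migrate the cheapest process first
            vms[donor].remove(proc)
            target = min(receivers, key=lambda v: sum(vms[v]))
            vms[target].append(proc)
            migrations.append((proc, donor, target))
            if sum(vms[target]) >= low:
                receivers.remove(target)
    return migrations

cluster = {"vm1": [50, 30, 20], "vm2": [10], "vm3": []}
moves = rebalance(cluster)
print(moves)  # the 20-point process migrates from vm1 to the idle vm3
```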
Data Partitioning Technique In Cloud: A Survey On Limitation And Benefits (IJERA Editor)
This document summarizes and reviews various data partitioning techniques used in cloud computing for privacy and security of data using third party auditors. It discusses techniques like Merkle Hash Tree, distributed storage integrity auditing, image-based authentication, proxy provable data possession, file distribution with token pre-computation, and horizontal and vertical data partitioning. The techniques aim to provide benefits like dynamic data authentication, efficient storage, and integrity testing while addressing limitations such as single points of failure, public validation risks, lack of support for data updates, and additional computation costs. The review analyzes the techniques to compare their limitations and benefits for achieving secure and trustworthy cloud data storage.
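Of the techniques surveyed, the Merkle Hash Tree is easy to show concretely: the verifier keeps only the root hash, and any modified data block changes that root. A minimal bottom-up construction (SHA-256 chosen here for illustration; the surveyed schemes may use other hashes):

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(blocks):
    """Build a Merkle hash tree bottom-up and return its root.
    The auditor stores only the root; any changed block changes it."""
    level = [h(b) for b in blocks]
    while len(level) > 1:
        if len(level) % 2:  # duplicate the odd leaf out
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

blocks = [b"block-0", b"block-1", b"block-2", b"block-3"]
root = merkle_root(blocks)
tampered = merkle_root([b"block-0", b"block-X", b"block-2", b"block-3"])
assert root != tampered  # a single modified block is detected at the root
```

This is also why Merkle trees support the "dynamic data authentication" benefit the survey mentions: updating one block only requires recomputing the hashes along one root-to-leaf path.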
This document summarizes a research paper that proposes a system for privacy-preserving public auditing of cloud data storage. The system allows a third-party auditor (TPA) to verify the integrity of data stored with a cloud service provider on behalf of users, without learning anything about the actual data contents. The system uses a public key-based homomorphic linear authenticator technique that enables the TPA to perform audits without having access to the full data. This technique allows the TPA to efficiently audit multiple users' data simultaneously. The document describes the system components, methodology used involving key generation and auditing protocols, and concludes the proposed system provides security and performance guarantees for privacy-preserving public auditing of cloud data
IJERA (International Journal of Engineering Research and Applications) is an international online, ... peer-reviewed journal. For more details or to submit your article, please visit www.ijera.com
IRJET- Optimization of Completion Time through Efficient Resource Allocation ... (IRJET Journal)
This document discusses optimizing task completion time in cloud computing through efficient resource allocation using genetic and differential evolutionary algorithms. It aims to reduce makespan (completion time) by combining a genetic algorithm with differential evolutionary algorithms. The genetic algorithm uses selection, crossover and mutation to allocate tasks to resources. The outputs are then input to the differential evolutionary algorithm, which has the same operations in reverse order. This double process refines the allocation to provide the best allocation minimizing completion time. The document outlines the related work in genetic algorithms for resource allocation and task scheduling in cloud computing.
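A minimal genetic algorithm for the makespan-minimizing allocation described above can be sketched as follows; the population size, elitism scheme, and operators are illustrative choices, and the differential-evolution refinement stage the paper adds is not shown:

```python
import random

def makespan(alloc, tasks, n_vms):
    """Finish time of the busiest VM under a task-to-VM allocation."""
    loads = [0.0] * n_vms
    for task, vm in zip(tasks, alloc):
        loads[vm] += task
    return max(loads)

def ga_allocate(tasks, n_vms, pop=30, gens=60, seed=1):
    rng = random.Random(seed)
    n = len(tasks)
    population = [[rng.randrange(n_vms) for _ in range(n)] for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=lambda a: makespan(a, tasks, n_vms))
        elite = population[: pop // 2]            # selection
        children = []
        while len(elite) + len(children) < pop:
            mom, dad = rng.sample(elite, 2)
            cut = rng.randrange(1, n)             # one-point crossover
            child = mom[:cut] + dad[cut:]
            i = rng.randrange(n)                  # mutation
            child[i] = rng.randrange(n_vms)
            children.append(child)
        population = elite + children
    return min(population, key=lambda a: makespan(a, tasks, n_vms))

tasks = [5, 9, 2, 7, 4, 6, 3, 8]
best = ga_allocate(tasks, n_vms=3)
print(makespan(best, tasks, 3))  # total work is 44, so 15 is a lower bound
```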
AES based secured framework for cloud databases (IJARIIT)
This document presents a novel architecture for adaptive encryption of databases in public clouds. It proposes using the Advanced Encryption Standard (AES) algorithm to encrypt data before it is sent to cloud servers. The architecture allows SQL queries to be run directly on the encrypted data through the use of encrypted metadata. This provides confidentiality without requiring intermediate servers. The scheme aims to balance security, performance and cost for cloud database workloads through adaptive encryption techniques. It analyzes the encryption and adaptive encryption costs from a research perspective.
The document discusses the Open Cloud Manifesto, which is dedicated to the belief that cloud computing should be open. It aims to establish core principles for cloud providers around an open cloud model. The document provides context by defining cloud computing, describing its benefits such as scalability and reduced costs, and addressing challenges to cloud adoption like security, interoperability, and governance. It asserts that an open cloud is needed to fully realize the opportunities while mitigating the risks of cloud computing.
Cost-Minimizing Dynamic Migration of Content Distribution Services into Hybri... (1crore projects)
IEEE PROJECTS 2015
1 Crore Projects is a leading guide and provider of IEEE projects and real-time project work.
It has provided a lot of guidance to thousands of students and helped them benefit across all of its technology training.
Dot Net
DOTNET Project Domain list 2015
1. IEEE based on datamining and knowledge engineering
2. IEEE based on mobile computing
3. IEEE based on networking
4. IEEE based on Image processing
5. IEEE based on Multimedia
6. IEEE based on Network security
7. IEEE based on parallel and distributed systems
Java Project Domain list 2015
1. IEEE based on datamining and knowledge engineering
2. IEEE based on mobile computing
3. IEEE based on networking
4. IEEE based on Image processing
5. IEEE based on Multimedia
6. IEEE based on Network security
7. IEEE based on parallel and distributed systems
ECE IEEE Projects 2015
1. Matlab project
2. Ns2 project
3. Embedded project
4. Robotics project
Eligibility
Final Year students of
1. BSc (C.S)
2. BCA/B.E(C.S)
3. B.Tech IT
4. BE (C.S)
5. MSc (C.S)
6. MSc (IT)
7. MCA
8. MS (IT)
9. ME(ALL)
10. BE(ECE)(EEE)(E&I)
TECHNOLOGY USED AND FOR TRAINING IN
1. DOT NET
2. C sharp
3. ASP
4. VB
5. SQL SERVER
6. JAVA
7. J2EE
8. STRINGS
9. ORACLE
10. VB dotNET
11. EMBEDDED
12. MAT LAB
13. LAB VIEW
14. Multi Sim
CONTACT US
1 CRORE PROJECTS
Door No: 214/215,2nd Floor,
No. 172, Raahat Plaza, (Shopping Mall) ,Arcot Road, Vadapalani, Chennai,
Tamil Nadu, INDIA - 600 026
Email id: 1croreprojects@gmail.com
website:1croreprojects.com
Phone : +91 97518 00789 / +91 72999 51536
SOA for Dynamically Integrated Virtual Learning Environment Systems with Clou... (Editor IJCATR)
SOA is a structural approach for creating services that can be reused and shared; it provides agility and cost savings in software development by dividing an application into multiple software components that can be reused in other systems. Cloud computing is truly scalable and provides virtualized resources to which users can subscribe. Using the cloud and SOA in virtual learning systems gives learners a great opportunity to enhance their learning outcomes, and the adoption of cloud services also reduces the cost of software, hardware, human resources and infrastructure. This paper uses SOA and cloud computing to move virtual learning systems into the cloud so that they become more integrated and interoperable, presenting a conceptual model of a distributed virtual learning system that combines cloud computing with service-oriented architecture to contribute to the interoperability and integration of e-learning systems in general.
MCCVA: A NEW APPROACH USING SVM AND KMEANS FOR LOAD BALANCING ON CLOUD (ijccsa)
Nowadays, demand for resources and services used via intranet systems or the Internet is growing rapidly, and the resulting problem is how to use these resources effectively in terms of time and quality. Because network QoS and its economics are common concerns, cloud computing was born as an inevitable trend. However, managing resources and scheduling tasks in virtualized data centres on the cloud are challenging tasks. Many load balancing algorithms for clouds have been proposed by authors, scholars and experts, but these existing methods are mostly natural or heuristic; the application of AI or modern data mining technologies to load balancing is not yet common because of the particular characteristics of the cloud. In this paper, we propose an algorithm that reduces processing time (makespan) on cloud computing and helps load balancing work more efficiently. We use the SVM algorithm to classify incoming requests and K-means to cluster the VMs in the cloud; the load balancer then allocates requests to VMs in the most reasonable way, so that the request with the least processing time is allocated to the VM with the lowest usage. We name this new proposal MCCVA (Makespan Classification and Clustering VM Algorithm). We experimented with and evaluated this algorithm in CloudSim, a cloud simulation environment, and obtained better results than some other well-known algorithms. MCCVA shows the big potential of AI and data mining in load balancing, which can be developed further to achieve better and better QoS.
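The SVM classification half of MCCVA needs training data not shown in the abstract, but the K-means clustering of VMs by usage can be sketched with a plain 1-D k-means (a simplified stand-in; the paper's feature set and allocation rule are richer):

```python
def kmeans_1d(values, k, iters=20):
    """Plain 1-D k-means over VM usage values."""
    centers = sorted(values)[:: max(1, len(values) // k)][:k]
    for _ in range(iters):
        clusters = [[] for _ in centers]
        for v in values:
            nearest = min(range(len(centers)), key=lambda i: abs(v - centers[i]))
            clusters[nearest].append(v)
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers, clusters

# Current CPU usage of six VMs
usage = [0.10, 0.15, 0.55, 0.60, 0.90, 0.95]
centers, clusters = kmeans_1d(usage, k=3)
# MCCVA's allocation idea: send the incoming request to a VM from the
# cluster with the lowest centre (the least-loaded group of machines).
lightest = clusters[min(range(3), key=lambda i: centers[i])]
print(sorted(lightest))
```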
A detailed study of cloud computing is presented. Starting from its basics, its characteristics and different modalities are dwelt upon; the pros and cons of cloud computing are highlighted, and its service models are lucidly described.
The document proposes a Cloud Information Accountability (CIA) framework to provide accountability for data sharing in the cloud. The framework uses a decentralized, object-centered approach where data owners can enclose data and policies within programmable JAR files. Any access to the data will trigger automated logging stored locally within the JARs. The framework provides efficient, scalable and granular accountability while meeting the dynamic needs of the cloud. Experiments demonstrate the framework's performance.
Cloud computing security through symmetric cipher model (ijcsit)
Cloud computing can be defined as applications and services that run on a distributed, virtualized network and are accessed through Internet protocols. Cloud computing resources are virtual and effectively limitless, and details of the physical systems on which the software runs are abstracted from the user. Cloud computing is a style of computing in which dynamically scalable, often virtualized resources are provided as a service over the Internet; users need not have knowledge of, expertise in, or control over the technology infrastructure in the "cloud" that supports them. To satisfy users' needs, the concept incorporates technologies that share a common reliance on the Internet: software and data are stored on servers, while cloud computing services are provided through online applications that can be accessed from web browsers. Lack of security and access control is the major drawback of cloud computing, as users entrust sensitive data to public clouds, and multiple virtual machines in a cloud can expose insecure information flows through the service provider; security must therefore be built in when implementing the cloud. The main aim of this paper is to provide cloud computing security through a symmetric cipher model, so that data can be accessed and stored securely.
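The paper's specific symmetric cipher model is not detailed in this summary. The symmetric principle itself, that the same shared key both encrypts and decrypts, can be shown with a toy XOR stream cipher built from SHA-256 in counter mode (an illustrative construction only, not a vetted cipher and not the paper's scheme):

```python
import hashlib
from itertools import count

def keystream(key: bytes, n: int) -> bytes:
    """Expand a shared key into n pseudo-random bytes (toy construction)."""
    out = bytearray()
    for ctr in count():
        out += hashlib.sha256(key + ctr.to_bytes(8, "big")).digest()
        if len(out) >= n:
            return bytes(out[:n])

def xor_crypt(key: bytes, data: bytes) -> bytes:
    """Symmetric: the same call both encrypts and decrypts."""
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

key = b"shared-cloud-key"
ciphertext = xor_crypt(key, b"sensitive tenant record")
assert xor_crypt(key, ciphertext) == b"sensitive tenant record"
assert ciphertext != b"sensitive tenant record"
```

In practice a standard symmetric cipher such as AES would be used; the point here is only the shared-key round trip.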
IEEE Final Year Projects 2011-2012 :: Elysium Technologies Pvt Ltd::Parallel ...sunda2011
IEEE Final Year Projects 2011-2012 :: Elysium Technologies Pvt Ltd
IEEE projects, final year projects, students project, be project, engineering projects, academic project, project center in madurai, trichy, chennai, kollam, coimbatore
IEEE Final Year Projects 2011-2012 :: Elysium Technologies Pvt Ltd::Auromatio...sunda2011
IEEE Final Year Projects 2011-2012 :: Elysium Technologies Pvt Ltd
IEEE projects, final year projects, students project, be project, engineering projects, academic project, project center in madurai, trichy, chennai, kollam, coimbatore
A Study on Replication and Failover Cluster to Maximize System UptimeYogeshIJTSRD
This document summarizes a study on using replication and failover clusters to maximize system uptime for cloud services. It discusses challenges in ensuring high availability of cloud services from a provider perspective. The study aims to present a high availability solution using load balancing, elasticity, replication, and disaster recovery configuration. It reviews related literature on digital media distribution platforms, content delivery networks, auto-scaling strategies, and database replication impact. It also covers methodologies like CloudFront, state machine replication, neural networks, Markov decision processes, and sliding window protocols. The scope is to build a scalable, fault-tolerant environment with disaster recovery and ensure continuous availability. The conclusion is that data replication and failover clusters are necessary to plan data
This document summarizes a research paper that proposes a scheme for ensuring security and reliability of data stored in the cloud. The scheme utilizes erasure coding to redundantly store encrypted data fragments across multiple cloud servers. It generates homomorphic tokens that allow auditing of the data storage and identification of any misbehaving servers. The scheme supports secure dynamic operations like modification, deletion and append of cloud data files. Analysis shows the scheme is efficient and resilient against various security threats like server compromises or failures. It ensures storage correctness and fast localization of data errors in the cloud.
Public Key Encryption algorithms Enabling Efficiency Using SaaS in Cloud Comp...Editor IJMTER
The Most great challenging in Cloud computing is Security. Here Security plays key role
in this paper proposed concept mainly deals with security at the end user access. While coming to the
end user access that are connected through the public networks. Here the end user wants to access his
application or services protected by the unauthorized persons. In this area if we want to apply
encryption or decryption methods such as RSA, 3DES, MD5, Blow fish. Etc.,
Whereas we can utilize these services at the end user access in cloud computing. Here there is
problem of encryption and decryption of the messages, services and applications. They are is lot of
time to take encrypt as well as decrypt and more number of processing capabilities are needed to use
the mechanism. For that problem we are introducing to use of cloud computing in SaaS model. i.e.,
scalable is applicable in this area so whenever it requires we can utilize the SaaS model.
In Cloud computing use of computing resources (hardware and software) that are delivered as a
service over Internet network. In advance earlier there is problem of using key size in various
algorithm like 64 bit it take some long period to encrypt the data.
The document discusses cloud service life-cycle management and related topics. It covers (1) the cloud service life-cycle including requirements, discovery, negotiation, composition, and consumption phases, (2) high-level cloud deployment scenarios such as single cloud system, multiple cloud systems serially or simultaneously, (3) tools for cloud service development and testing including NetBeans, Eclipse, Apache JMeter, and SoapUI, and (4) the concept of web service slicing to capture a functional subset of a large-scale web service for regression testing purposes.
- The document proposes a new approach to decrease the impact of SLA (service level agreement) violations on user satisfaction levels in cloud computing environments.
- It uses two hidden user characteristics - willingness to pay for service and willingness to pay for certainty - to inform a proactive resource allocation approach.
- The goal is to improve user satisfaction and profitability by considering these characteristics, rather than just SLA parameters, when deciding how to allocate resources during critical situations where some SLA violations are unavoidable.
A Prolific Scheme for Load Balancing Relying on Task Completion Time IJECEIAES
In networks with lot of computation, load balancing gains increasing significance. To offer various resources, services and applications, the ultimate aim is to facilitate the sharing of services and resources on the network over the Internet. A key issue to be focused and addressed in networks with large amount of computation is load balancing. Load is the number of tasks„t‟ performed by a computation system. The load can be categorized as network load and CPU load. For an efficient load balancing strategy, the process of assigning the load between the nodes should enhance the resource utilization and minimize the computation time. This can be accomplished by a uniform distribution of load of to all the nodes. A Load balancing method should guarantee that, each node in a network performs almost equal amount of work pertinent to their capacity and availability of resources. Relying on task subtraction, this work has presented a pioneering algorithm termed as E-TS (Efficient-Task Subtraction). This algorithm has selected appropriate nodes for each task. The proposed algorithm has improved the utilization of computing resources and has preserved the neutrality in assigning the load to the nodes in the network.
A Study of A Method To Provide Minimized Bandwidth Consumption Using Regenera...IJERA Editor
Cloud storage systems protect data from corruption by storing redundant data to tolerate storage failures, and lost data should be repaired when storage fails. Regenerating codes provide fault tolerance by striping data across multiple servers while using less repair traffic than traditional erasure codes during failure recovery. Previous research implemented a practical Data Integrity Protection (DIP) scheme for regenerating-coding-based cloud storage: on top of Functional Minimum-Storage Regenerating (FMSR) codes it constructs FMSR-DIP codes, which allow clients to remotely verify the integrity of random subsets of long-term archival data in a multi-server setting. The remaining problem is to optimize bandwidth consumption when repairing multiple failures; cooperative repair of multiple failures can further save bandwidth when several failures are repaired together.
This document summarizes a research paper that proposes a load balancing algorithm for cloud computing using process migration. The algorithm aims to improve resource utilization by transferring processes from heavily loaded virtual machines to lightly loaded or idle ones. It describes related work on existing load balancing approaches and process migration. The proposed mechanism designates a server virtual machine to monitor member virtual machines' workloads and a balancer virtual machine to determine overloaded and underloaded members and migrate processes between them using a VM process migrator module. This helps balance loads across virtual machines to avoid overloading and improve overall resource efficiency.
Data Partitioning Technique In Cloud: A Survey On Limitation And BenefitsIJERA Editor
This document summarizes and reviews various data partitioning techniques used in cloud computing for privacy and security of data using third party auditors. It discusses techniques like Merkle Hash Tree, distributed storage integrity auditing, image-based authentication, proxy provable data possession, file distribution with token pre-computation, and horizontal and vertical data partitioning. The techniques aim to provide benefits like dynamic data authentication, efficient storage, and integrity testing while addressing limitations such as single points of failure, public validation risks, lack of support for data updates, and additional computation costs. The review analyzes the techniques to compare their limitations and benefits for achieving secure and trustworthy cloud data storage.
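One of the surveyed techniques, the Merkle Hash Tree, lets an auditor verify data integrity from a single root hash: changing any block changes the root. A minimal sketch using Python's standard hashlib (an illustration of the general technique, not any surveyed system's implementation):

```python
import hashlib

def _h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(blocks):
    """Compute the Merkle root over a list of data blocks."""
    level = [_h(b) for b in blocks]          # leaf hashes
    while len(level) > 1:
        if len(level) % 2:                   # duplicate last node on odd levels
            level.append(level[-1])
        # each parent hashes the concatenation of its two children
        level = [_h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]
```

An auditor who stores only the 32-byte root can later recompute it from the blocks (or from one block plus a logarithmic-size proof path) and detect any modification.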
This document summarizes a research paper that proposes a system for privacy-preserving public auditing of cloud data storage. The system allows a third-party auditor (TPA) to verify the integrity of data stored with a cloud service provider on behalf of users, without learning anything about the actual data contents. The system uses a public key-based homomorphic linear authenticator technique that enables the TPA to perform audits without having access to the full data. This technique allows the TPA to efficiently audit multiple users' data simultaneously. The document describes the system components, methodology used involving key generation and auditing protocols, and concludes the proposed system provides security and performance guarantees for privacy-preserving public auditing of cloud data
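The actual scheme relies on a homomorphic linear authenticator, so the TPA needs neither the key nor the data. The spot-checking idea alone, auditing a random subset of blocks instead of the whole file, can be sketched with plain MACs (a simplification that does not preserve privacy; all names are hypothetical):

```python
import hashlib
import hmac
import random

def tag_blocks(key, blocks):
    """Owner: compute a per-block MAC tag before uploading."""
    return [hmac.new(key, str(i).encode() + b, hashlib.sha256).digest()
            for i, b in enumerate(blocks)]

def spot_audit(key, stored_blocks, tags, sample=3, seed=None):
    """Auditor: re-verify only a random subset of blocks."""
    rng = random.Random(seed)
    idx = rng.sample(range(len(stored_blocks)), sample)
    return all(
        hmac.compare_digest(
            hmac.new(key, str(i).encode() + stored_blocks[i],
                     hashlib.sha256).digest(),
            tags[i])
        for i in idx)
```

Sampling keeps audit cost constant per challenge; a provider that corrupts a large fraction of blocks is caught with high probability across repeated audits.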
IJERA (International journal of Engineering Research and Applications) is International online, ... peer reviewed journal. For more detail or submit your article, please visit www.ijera.com
IRJET- Optimization of Completion Time through Efficient Resource Allocation ...IRJET Journal
This document discusses optimizing task completion time in cloud computing through efficient resource allocation using genetic and differential evolutionary algorithms. It aims to reduce makespan (completion time) by combining a genetic algorithm with differential evolutionary algorithms. The genetic algorithm uses selection, crossover and mutation to allocate tasks to resources. The outputs are then input to the differential evolutionary algorithm, which has the same operations in reverse order. This double process refines the allocation to provide the best allocation minimizing completion time. The document outlines the related work in genetic algorithms for resource allocation and task scheduling in cloud computing.
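The paper's combined GA/DE pipeline is not reproduced here, but the genetic-algorithm half (selection, one-point crossover, and mutation over task-to-VM assignments that minimize makespan) can be sketched as follows; population size, rates, and names are illustrative assumptions:

```python
import random

def makespan(assign, task_len, vm_speed):
    """Completion time of the slowest VM under a task->VM assignment."""
    finish = [0.0] * len(vm_speed)
    for task, vm in enumerate(assign):
        finish[vm] += task_len[task] / vm_speed[vm]
    return max(finish)

def ga_schedule(task_len, vm_speed, pop=30, gens=60, seed=0):
    rng = random.Random(seed)
    n, m = len(task_len), len(vm_speed)
    popu = [[rng.randrange(m) for _ in range(n)] for _ in range(pop)]
    for _ in range(gens):
        popu.sort(key=lambda c: makespan(c, task_len, vm_speed))
        survivors = popu[: pop // 2]                    # selection
        children = []
        while len(survivors) + len(children) < pop:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, n)                   # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < 0.2:                      # mutation
                child[rng.randrange(n)] = rng.randrange(m)
            children.append(child)
        popu = survivors + children
    return min(popu, key=lambda c: makespan(c, task_len, vm_speed))
```

For example, `ga_schedule([4, 4, 2, 2], [1, 1])` finds a balanced split of the four tasks over the two equal-speed VMs.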
Aes based secured framework for cloud databasesIJARIIT
This document presents a novel architecture for adaptive encryption of databases in public clouds. It proposes using the Advanced Encryption Standard (AES) algorithm to encrypt data before it is sent to cloud servers. The architecture allows SQL queries to be run directly on the encrypted data through the use of encrypted metadata. This provides confidentiality without requiring intermediate servers. The scheme aims to balance security, performance and cost for cloud database workloads through adaptive encryption techniques. It analyzes the encryption and adaptive encryption costs from a research perspective.
The document discusses the Open Cloud Manifesto, which is dedicated to the belief that cloud computing should be open. It aims to establish core principles for cloud providers around an open cloud model. The document provides context by defining cloud computing, describing its benefits such as scalability and reduced costs, and addressing challenges to cloud adoption like security, interoperability, and governance. It asserts that an open cloud is needed to fully realize the opportunities while mitigating the risks of cloud computing.
Cost-Minimizing Dynamic Migration of Content Distribution Services into Hybri...1crore projects
IEEE PROJECTS 2015
1 Crore Projects is a leading guide for IEEE projects and a provider of real-time project work.
It has provided guidance to thousands of students and helped them benefit across all of its technology training.
Dot Net
DOTNET Project Domain list 2015
1. IEEE based on datamining and knowledge engineering
2. IEEE based on mobile computing
3. IEEE based on networking
4. IEEE based on Image processing
5. IEEE based on Multimedia
6. IEEE based on Network security
7. IEEE based on parallel and distributed systems
Java Project Domain list 2015
1. IEEE based on datamining and knowledge engineering
2. IEEE based on mobile computing
3. IEEE based on networking
4. IEEE based on Image processing
5. IEEE based on Multimedia
6. IEEE based on Network security
7. IEEE based on parallel and distributed systems
ECE IEEE Projects 2015
1. Matlab project
2. Ns2 project
3. Embedded project
4. Robotics project
Eligibility
Final Year students of
1. BSc (C.S)
2. BCA/B.E(C.S)
3. B.Tech IT
4. BE (C.S)
5. MSc (C.S)
6. MSc (IT)
7. MCA
8. MS (IT)
9. ME(ALL)
10. BE(ECE)(EEE)(E&I)
TECHNOLOGY USED AND FOR TRAINING IN
1. DOT NET
2. C sharp
3. ASP
4. VB
5. SQL SERVER
6. JAVA
7. J2EE
8. STRINGS
9. ORACLE
10. VB dotNET
11. EMBEDDED
12. MAT LAB
13. LAB VIEW
14. Multi Sim
CONTACT US
1 CRORE PROJECTS
Door No: 214/215,2nd Floor,
No. 172, Raahat Plaza, (Shopping Mall) ,Arcot Road, Vadapalani, Chennai,
Tamil Nadu, INDIA - 600 026
Email id: 1croreprojects@gmail.com
website:1croreprojects.com
Phone : +91 97518 00789 / +91 72999 51536
SOA for Dynamically Integrated Virtual Learning Environment Systems with Clou...Editor IJCATR
SOA is a structural approach for creating services that can be reused and shared; it provides agility and cost savings in software development by dividing an application into multiple software components that can be reused in other systems. Cloud computing is truly scalable and provides virtualized resources to which users can subscribe. Using a cloud together with SOA in virtual learning systems gives learners a great chance to enhance their learning outcomes, and adopting cloud services also helps reduce the cost of software, hardware, human resources and infrastructure. This paper uses SOA and cloud computing to move virtual learning systems into the cloud so that they become more integrated and interoperable, presenting a conceptual model of a distributed virtual learning system and using cloud computing combined with service-oriented architecture to contribute to the interoperability and integration of e-learning systems in general.
MCCVA: A NEW APPROACH USING SVM AND KMEANS FOR LOAD BALANCING ON CLOUDijccsa
Nowadays, the demand for resources and services accessed via intranets or the Internet is growing rapidly, and the resulting problem is how to use these resources effectively in terms of time and quality. Because network QoS and its economics are common concerns, cloud computing emerged as an inevitable trend. However, managing resources and scheduling tasks in virtualized data centres on the cloud are challenging. Many load balancing algorithms for clouds have been proposed by authors, scholars and experts; most existing methods are natural or heuristic, and the application of AI or modern data mining technologies to load balancing is not yet widespread due to the particular characteristics of the cloud. In this paper, we propose an algorithm to reduce processing time (makespan) in cloud computing and help load balancing work more efficiently. We use the SVM algorithm to classify incoming requests and K-means to cluster the VMs in the cloud; the load balancer then allocates requests to VMs in the most reasonable way, so that the request with the least processing time is allocated to the VM with the lowest usage. We name this proposal MCCVA (Makespan Classification & Clustering VM Algorithm). We experimented with and evaluated this algorithm in CloudSim, a cloud simulation environment, and obtained better results than some other well-known algorithms. MCCVA shows the great potential of AI and data mining in load balancing, which can be developed further to achieve ever better QoS.
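The SVM classifier requires trained request data, but the K-means half of MCCVA, clustering VMs by their load vectors, can be sketched in plain Python (the initialization and the two-feature load vector are illustrative assumptions, not the paper's exact setup):

```python
def kmeans(points, k, iters=20):
    """Plain k-means: cluster VM load vectors (e.g. [cpu, mem]) into k groups."""
    centers = [list(p) for p in points[:k]]          # naive init: first k points
    clusters = []
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            # assign each point to its nearest center (squared Euclidean distance)
            j = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])))
            clusters[j].append(p)
        # recompute each center as the mean of its cluster
        centers = [[sum(col) / len(cl) for col in zip(*cl)] if cl else centers[j]
                   for j, cl in enumerate(clusters)]
    return centers, clusters
```

A load balancer in the spirit of MCCVA would then route a classified request to a VM drawn from the lowest-usage cluster.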
A detailed study of cloud computing is presented. Starting from the basics, its characteristics and different modalities are dwelt upon. The pros and cons of cloud computing are also highlighted, and its service models are lucidly explained.
The document proposes a Cloud Information Accountability (CIA) framework to provide accountability for data sharing in the cloud. The framework uses a decentralized, object-centered approach where data owners can enclose data and policies within programmable JAR files. Any access to the data will trigger automated logging stored locally within the JARs. The framework provides efficient, scalable and granular accountability while meeting the dynamic needs of the cloud. Experiments demonstrate the framework's performance.
Cloud computing security through symmetric cipher modelijcsit
Cloud computing can be defined as applications and services that run on a distributed network using virtualized resources and are accessed through Internet protocols and networking. Cloud resources are virtual and limitless, and details of the physical systems on which the software runs are abstracted from the user. Cloud computing is a style of computing in which dynamically scalable and often virtualized resources are provided as a service over the Internet; users need no knowledge of, expertise in, or control over the technology infrastructure in the "cloud" that supports them. Software and data are stored on servers, while cloud computing services are provided through online applications that can be accessed from web browsers. Lack of security and access control is a major drawback of cloud computing, as users entrust sensitive data to public clouds, and multiple virtual machines in a cloud can expose insecure information flows at the service provider; security must therefore be built in when implementing the cloud. The main aim of this paper is to provide cloud computing security through a symmetric cipher model, so that data can be accessed and stored securely.
IEEE Final Year Projects 2011-2012 :: Elysium Technologies Pvt Ltd::Parallel ...sunda2011
IEEE Final Year Projects 2011-2012 :: Elysium Technologies Pvt Ltd
IEEE projects, final year projects, students project, be project, engineering projects, academic project, project center in madurai, trichy, chennai, kollam, coimbatore
IEEE Final Year Projects 2011-2012 :: Elysium Technologies Pvt Ltd::Auromatio...sunda2011
IEEE Final Year Projects 2011-2012 :: Elysium Technologies Pvt Ltd::Paralleld...sunda2011
The document discusses the "rings of entrepreneurship" which are conceptualized as: 1) Learning and teaching, 2) Determination, 3) Opportunity, 4) Resiliency, 5) Inspiration, and 6) Collective altruism. Each ring represents an essential characteristic that leads to entrepreneurial success. The rings are then further explained individually in terms of their importance, effective approaches, and relation to other concepts like learning styles, intelligence, motivation, and more. The document emphasizes that true success comes from cooperation rather than competition through collective altruism.
IEEE Final Year Projects 2011-2012 :: Elysium Technologies Pvt Ltd::Networkse...sunda2011
This document provides instructions for watching a video clip about the London Underground and completing two related exercises. Students are asked to predict words or sentences they may see in the clip before viewing. They are also given a matching activity to pair 16 words into 8 phrases commonly seen on the London Underground. After watching the clip once, students will complete the matching exercise. They will then watch the clip a second time while finding the answers to two additional worksheet exercises.
IEEE Final Year Projects 2011-2012 :: Elysium Technologies Pvt Ltd::Knowledge...sunda2011
This document provides an abstract for 8 projects in knowledge and data engineering for the year 2011-2012 from Elysium Technologies Private Limited. It lists the projects, which include dual framework for targeted online data delivery, fast multiple longest common subsequence algorithm, fuzzy self-constructing feature clustering for text classification, generic multilevel architecture for time series prediction, link analysis extension of correspondence analysis for mining relational databases, machine learning approach for identifying disease-treatment relations in short texts, personalized ontology model for web information gathering, and adaptive cluster distance bounding for high-dimensional indexing. It also provides contact information for Elysium Technologies' offices in various locations.
IEEE Final Year Projects 2011-2012 :: Elysium Technologies Pvt Ltd::Networknewsunda2011
The document appears to be a list of IEEE final year projects from 2011-2012 provided by Elysium Technologies Private Limited. It includes 12 project abstracts related to networking and wireless technologies. The projects focus on topics such as failure localization in optical networks, peer-to-peer streaming, heterogeneous network flows, wireless multicast broadcasting, cooperative ad hoc networks, CSMA scheduling algorithms, wireless network coordination, traffic counters, buffer sizing, CSMA network capacity, and channel assignment in wireless networks. Elysium Technologies is an ISO certified research and development company with locations in India and Singapore.
IEEE Final Year Projects 2011-2012 :: Elysium Technologies Pvt Ltd::Mobilecom...sunda2011
IEEE Final Year Projects 2011-2012 :: Elysium Technologies Pvt Ltd::Data miningsunda2011
The document discusses several topics related to cloud computing including:
1. A hybrid cloud approach for secure authorized data deduplication that considers differential user privileges.
2. A framework called AMES-Cloud that provides adaptive mobile video streaming and efficient social video sharing using private cloud agents.
3. Research into using multi-cloud providers instead of single clouds to help maintain security.
.Net projects 2011 by core ieeeprojects.com msudan92
The document contains summaries of 15 IEEE projects from 2011. Each project summary is 1-3 sentences describing the high level goal or problem addressed by the project. For example, one project proposes a policy enforcing mechanism to ensure fair communication in mobile ad hoc networks by regulating applications through proper communication policies. Another project presents a query formulation language called MashQL to easily query and fuse structured data from multiple sources on the web.
IEEE projects are among the most important projects for engineering students, including BE and ME projects, MCA and BCA student projects, and M.Phil projects.
Effective & Flexible Cryptography Based Scheme for Ensuring User`s Data Secur...ijsrd.com
Cloud computing has been envisioned as the next-generation architecture of IT enterprise. In contrast to traditional solutions, where the IT services are under proper physical, logical and personnel controls, cloud computing moves the application software and databases to the large data centers, where the management of the data and services may not be fully trustworthy. This unique attribute, however, poses many new security challenges which have not been well understood. In this article, we focus on cloud data storage security, which has always been an important aspect of quality of service. To ensure the correctness of users' data in the cloud, we propose an effective and flexible cryptography based scheme. Extensive security and performance analysis shows that the proposed scheme is highly efficient and resilient against malicious data modification attack.
Hybrid Based Resource Provisioning in CloudEditor IJCATR
Data centres contain machines of different capacities with varying energy consumption characteristics. Analysing public cloud workloads of different priorities and the performance requirements of various applications reveals some invariant properties of clouds, and cloud data centres become capable of sensing an opportunity to run different programs. In our proposed work, we use a hybrid method for resource provisioning in data centres. The method allocates resources according to working conditions while accounting for power consumption, and it is used to allocate the processes behind cloud storage.
Efficient Resource Sharing In Cloud Using Neural NetworkIJERA Editor
In cloud computing, collaborative cloud computing (CCC) is an emerging technology in which globally dispersed cloud resources belonging to different organizations are used collectively and cooperatively to provide services. In previous research, Harmony enables a node to locate its desired resources and also to find the reputation of the located resources, so that a client can choose resource providers not only by resource availability but also by the provider's reputation for providing the resource. The proposed system improves resource utilization by allocating resources over an optimal time period using neural network training, and applies a dynamic priority scheduling technique, based on a load factor calculation, to assign priorities to cloud users according to their load. The dynamic priority scheduling algorithm strikes the right balance between performance and power efficiency.
Final Year IEEE Project 2013-2014 - Web Services Project Title and Abstractelysiumtechnologies
This document provides contact and location information for Elysium Technologies Private Limited, an IT company with 13 years of experience and over 250 developers located across multiple branches in India. It lists their services such as automated services, 24/7 help desk support, and ticketing & appointment systems. The company has experience in multiple languages and technologies.
NEURO-FUZZY SYSTEM BASED DYNAMIC RESOURCE ALLOCATION IN COLLABORATIVE CLOUD C...ijccsa
Cloud collaboration is an emerging technology that enables sharing of computer files using cloud computing: cloud resources are assembled, cloud services are provided using these resources, and users are able to share documents. Resource allocation in the cloud is challenging because resources offer different Quality of Service (QoS), and services running on them may fail to meet user demands. We propose a resource allocation solution based on multi-attribute QoS scoring, considering parameters such as the distance to the resource from the user site, the reputation of the resource, task completion time, task completion ratio, and the load at the resource. The proposed algorithm, referred to as Multi Attribute QoS Scoring (MAQS), uses a neuro-fuzzy system. We have also included a speculative manager to handle fault tolerance. The paper shows that the proposed algorithm performs better than others, including power-trust reputation-based algorithms and the Harmony method, which use a single attribute to compute the reputation score of each allocated resource.
Neuro-Fuzzy System Based Dynamic Resource Allocation in Collaborative Cloud C...neirew J
This paper proposes a neuro-fuzzy system called Multi Attribute QoS scoring (MAQS) for dynamic resource allocation in collaborative cloud computing. MAQS uses a 3-layer neural network trained on 5 quality of service attributes - distance, reputation, task completion time, completion ratio, and load - to provide a QoS score for each resource. Resources are then allocated based on this score. The algorithm collects data periodically from nodes and calculates QoS scores for incoming tasks to select the highest scoring node for task allocation. The paper argues this approach considers multiple attributes and heterogeneity of resources better than previous single-attribute methods.
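The neuro-fuzzy network in MAQS learns how to combine the five attributes; with fixed hand-picked weights standing in for the learned model, the scoring-and-selection step reduces to a simple weighted sum (the weights and field names here are illustrative assumptions):

```python
def qos_score(node, weights):
    """Weighted multi-attribute QoS score; higher is better.

    'distance', 'time' and 'load' are costs, so they enter negatively;
    'reputation' and 'completion_ratio' are benefits.
    """
    return (weights["reputation"] * node["reputation"]
            + weights["completion_ratio"] * node["completion_ratio"]
            - weights["distance"] * node["distance"]
            - weights["time"] * node["time"]
            - weights["load"] * node["load"])

def pick_node(nodes, weights):
    """Allocate the incoming task to the highest-scoring node."""
    return max(nodes, key=lambda n: qos_score(n, weights))
```

The paper's point is precisely that these weights should not be fixed by hand: the neural network adapts them from periodically collected node data.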
This document discusses 10 key research areas in cloud computing:
1. The Green Cloud - Improving energy efficiency and reducing consumption in cloud data centers.
2. Denial of Service issues - Addressing attacks that restrict access to cloud resources.
3. Cloud Verification, Validation and Testing - Developing strategies for testing cloud software, applications and designs.
4. Cloud Security - Ensuring secure architectures and managing security across distributed cloud networks.
We provide training on IEEE 2016-17 projects for Ph.D scholars and for M.Tech, B.E, MCA, BCA and Diploma students of all branches for their academic projects.
For more details, call or WhatsApp us at 7676768124 or 9545252155.
Email your base papers to "adritsolutions@gmail.co.in"
We are providing IEEE projects on
1) Cloud Computing, Data Mining, Big Data projects using Java
2) Image Processing and Video Processing (MATLAB), Signal Processing
3) NS2 (Wireless Sensor, MANET, VANET)
4) ANDROID apps
5) JAVA, JEE, J2EE, J2ME
6) Mechanical design projects
7) Embedded Systems and IoT projects
8) VLSI/Verilog projects (ModelSim and Xilinx using FPGA)
For More details Please Visit us at
Adrit Solutions
Near Maruthi Mandir
#42/5, 18th Cross, 21st Main
Vijaynagar
Bangalore.
A Survey: Hybrid Job-Driven Meta Data Scheduling for Data storage with Intern...dbpublications
Cloud computing is a promising computing model that enables convenient, on-demand network access to a shared pool of configurable computing resources. The first offered cloud service is moving data into the cloud: data owners let cloud service providers host their data on cloud servers, and data consumers access the data from those servers. This new paradigm of data storage introduces new security challenges, because data owners and data servers have different identities and different business interests; an independent auditing service is therefore required to make sure the data is correctly hosted in the cloud. For scheduling map and reduce tasks across jobs, the goal is to improve data locality for both map and reduce tasks, avoid job starvation, and improve job execution performance. Two variations are further introduced, one to achieve better map-data locality and one to achieve faster task assignment. We conduct extensive experiments to evaluate and compare the two variations with current scheduling algorithms. The results show that the two variations outperform the other tested algorithms in terms of map-data locality, reduce-data locality, and network overhead without incurring significant overhead. In addition, the two variations suit different MapReduce workload scenarios and provide the best job performance among all tested algorithms for cloud data storage.
Centralized Data Verification Scheme for Encrypted Cloud Data ServicesEditor IJMTER
Cloud environments support data sharing between multiple users, but data integrity can be violated by hardware/software failures and human errors. Data owners and public verifiers are involved in efficiently auditing cloud data integrity without retrieving the entire data from the cloud server; file and block signatures are used in the integrity verification process.
The "One Ring to Rule Them All" (Oruta) scheme is used for privacy-preserving public auditing. In Oruta, homomorphic authenticators are constructed using ring signatures, which compute the verification metadata needed to audit the correctness of shared data while keeping the identity of the signer on each block private from public verifiers. A homomorphic authenticable ring signature (HARS) scheme provides identity privacy with blockless verification, a batch auditing mechanism supports multiple auditing tasks simultaneously, and Oruta is compatible with random masking to preserve data privacy from public verifiers. Dynamic data management is handled with index hash tables. However, the Oruta scheme does not support traceability, does not manage the data-dynamism sequence, and incurs high computational overhead.
The proposed system performs public data verification with privacy and provides traceability together with identity privacy: the group manager or data owner can be allowed to reveal the identity of the signer based on verification metadata. A data version management mechanism is also integrated into the system.
Cloud Computing Task Scheduling Algorithm Based on Modified Genetic AlgorithmIRJET Journal
This document presents a cloud computing task scheduling algorithm based on a modified genetic algorithm. It begins with an abstract discussing scalable cloud computing and the need for efficient task scheduling and virtual machine allocation. It then discusses the problem of existing scheduling algorithms having high overhead and slow convergence. The proposed methodology uses a heuristic-based prediction model with a logistic normal distribution technique to improve data transmission prediction. Simulation results show the proposed approach has better throughput and computation time than existing algorithms for different data packet sizes. The conclusion discusses overcoming drawbacks of earlier algorithms and future work focusing on algorithms with better tradeoffs between performance characteristics.
The on-demand provision of computer services, including servers, storage, databases, networking, software, and analytics, is known as cloud computing. Cloud-based storage enables distant file saving as opposed to local storage or proprietary hard disk storage. Due to its ability to provide cost savings, enhanced productivity, speed and efficiency, performance, and security, cloud computing is becoming more and more popular among individuals and enterprises.
Discover how cloud services empower businesses with scalable and secure infrastructure solutions, ensuring seamless growth and data protection.
Learn more: https://www.grapestechsolutions.com/blog/building-scalable-and-secure-cloud-infrastructure/
Evaluation of load balancing approaches for Erlang concurrent application in ...TELKOMNIKA JOURNAL
Cloud systems accommodate computing environments including PaaS (platform as a service), SaaS (software as a service), and IaaS (infrastructure as a service). A cloud system allows multiple users to employ computing services through browsers, reflecting an alternative service model that shifts the local computing workload to a distant site. Cloud virtualization is another characteristic of clouds: it delivers virtual computing services that imitate the functionality of physical computing resources, and it supports elastic load balancing management providing a flexible model of on-demand services. Virtualization allows organizations to achieve high levels of reliability, accessibility, and scalability by being able to execute applications on multiple resources simultaneously. In this paper we use a queuing model to consider flexible load balancing and evaluate performance metrics such as mean queue length, throughput, mean waiting time, utilization, and mean traversal time. The model is aware of the arrival of concurrent applications with an Erlang distribution. Simulation results regarding the performance metrics are investigated, and they indicate that in cloud systems both fairness and load balancing must be carefully considered.
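The paper evaluates its Erlang-arrival model by simulation; a closely related closed-form illustration is the M/Ek/1 queue (Poisson arrivals, Erlang-k service), whose mean metrics follow from the Pollaczek-Khinchine formula:

```python
def m_ek_1_metrics(lam, mu, k):
    """Steady-state metrics for an M/E_k/1 queue via Pollaczek-Khinchine.

    lam: arrival rate; mu: service rate (1 / mean service time);
    k: Erlang shape (squared coefficient of variation of service = 1/k).
    """
    rho = lam / mu
    assert rho < 1, "queue is unstable"
    lq = rho * rho * (1 + 1 / k) / (2 * (1 - rho))   # mean queue length
    wq = lq / lam                                    # mean waiting time
    w = wq + 1 / mu                                  # mean time in system
    l = lam * w                                      # mean number in system
    return {"utilization": rho, "Lq": lq, "Wq": wq, "W": w, "L": l}
```

With k = 1 the Erlang distribution is exponential and the formulas reduce to the familiar M/M/1 results; larger k models the more regular service times typical of homogeneous VM workloads.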
This document contains information about several M.Phil Computer Science Cloud Computing projects written in C# and NS2. It provides the titles, languages, links, and short abstracts for each project. The projects focus on topics related to cloud computing including secure cloud storage, data integrity verification, privacy-preserving auditing, and keyword search over encrypted cloud data.
IEEE Final Year Projects 2011-2012 :: Elysium Technologies Pvt Ltd::Computati...sunda2011
IEEE Final Year Projects 2011-2012 :: Elysium Technologies Pvt Ltd::Biomedica...sunda2011
IEEE Final Year Projects 2011-2012 :: Elysium Technologies Pvt Ltd::Imageproc...sunda2011
The document is a list of 13 image processing projects from 2011-2012 by Elysium Technologies Private Limited. It includes projects on 1D transforms for motion compensation residuals, edge preserving MAP estimation of images, a generalized unsharp masking algorithm, optimal design of color filter arrays, text detection in natural scenes, contrast-tone mapping, subspace optimization for image restoration, joint image registration and fusion, an easy path wavelet transform for image approximation, 3D color histogram equalization with uniform 1D grayscale histogram, camera calibration using spheres, estimating illumination chromaticity and correspondence, and variational histogram transfer of color images.
IEEE Final Year Projects 2011-2012 :: Elysium Technologies Pvt Ltd::Computati...sunda2011
IEEE Final Year Projects 2011-2012 :: Elysium Technologies Pvt Ltd
IEEE projects, final year projects, students project, be project, engineering projects, academic project, project center in madurai, trichy, chennai, kollam, coimbatore
IEEE Final Year Projects 2011-2012 :: Elysium Technologies Pvt Ltd::Grid comp...sunda2011
IEEE Final Year Projects 2011-2012 :: Elysium Technologies Pvt Ltd
IEEE projects, final year projects, students project, be project, engineering projects, academic project, project center in madurai, trichy, chennai, kollam, coimbatore
IEEE Final Year Projects 2011-2012 :: Elysium Technologies Pvt Ltd::Communica...sunda2011
IEEE Final Year Projects 2011-2012 :: Elysium Technologies Pvt Ltd
IEEE projects, final year projects, students project, be project, engineering projects, academic project, project center in madurai, trichy, chennai, kollam, coimbatore
IEEE Final Year Projects 2011-2012 :: Elysium Technologies Pvt Ltd
IEEE projects, final year projects, students project, be project, engineering projects, academic project, project center in madurai, trichy, chennai, kollam, coimbatore
How to Make a Field Mandatory in Odoo 17Celine George
In Odoo, making a field required can be done through both Python code and XML views. When you set the required attribute to True in Python code, it makes the field required across all views where it's used. Conversely, when you set the required attribute in XML views, it makes the field required only in the context of that particular view.
Main Java[All of the Base Concepts}.docxadhitya5119
This is part 1 of my Java Learning Journey. This Contains Custom methods, classes, constructors, packages, multithreading , try- catch block, finally block and more.
This presentation was provided by Steph Pollock of The American Psychological Association’s Journals Program, and Damita Snow, of The American Society of Civil Engineers (ASCE), for the initial session of NISO's 2024 Training Series "DEIA in the Scholarly Landscape." Session One: 'Setting Expectations: a DEIA Primer,' was held June 6, 2024.
বাংলাদেশের অর্থনৈতিক সমীক্ষা ২০২৪ [Bangladesh Economic Review 2024 Bangla.pdf] কম্পিউটার , ট্যাব ও স্মার্ট ফোন ভার্সন সহ সম্পূর্ণ বাংলা ই-বুক বা pdf বই " সুচিপত্র ...বুকমার্ক মেনু 🔖 ও হাইপার লিংক মেনু 📝👆 যুক্ত ..
আমাদের সবার জন্য খুব খুব গুরুত্বপূর্ণ একটি বই ..বিসিএস, ব্যাংক, ইউনিভার্সিটি ভর্তি ও যে কোন প্রতিযোগিতা মূলক পরীক্ষার জন্য এর খুব ইম্পরট্যান্ট একটি বিষয় ...তাছাড়া বাংলাদেশের সাম্প্রতিক যে কোন ডাটা বা তথ্য এই বইতে পাবেন ...
তাই একজন নাগরিক হিসাবে এই তথ্য গুলো আপনার জানা প্রয়োজন ...।
বিসিএস ও ব্যাংক এর লিখিত পরীক্ষা ...+এছাড়া মাধ্যমিক ও উচ্চমাধ্যমিকের স্টুডেন্টদের জন্য অনেক কাজে আসবে ...
This presentation includes basic of PCOS their pathology and treatment and also Ayurveda correlation of PCOS and Ayurvedic line of treatment mentioned in classics.
Walmart Business+ and Spark Good for Nonprofits.pdfTechSoup
"Learn about all the ways Walmart supports nonprofit organizations.
You will hear from Liz Willett, the Head of Nonprofits, and hear about what Walmart is doing to help nonprofits, including Walmart Business and Spark Good. Walmart Business+ is a new offer for nonprofits that offers discounts and also streamlines nonprofits order and expense tracking, saving time and money.
The webinar may also give some examples on how nonprofits can best leverage Walmart Business+.
The event will cover the following::
Walmart Business + (https://business.walmart.com/plus) is a new shopping experience for nonprofits, schools, and local business customers that connects an exclusive online shopping experience to stores. Benefits include free delivery and shipping, a 'Spend Analytics” feature, special discounts, deals and tax-exempt shopping.
Special TechSoup offer for a free 180 days membership, and up to $150 in discounts on eligible orders.
Spark Good (walmart.com/sparkgood) is a charitable platform that enables nonprofits to receive donations directly from customers and associates.
Answers about how you can do more with Walmart!"
How to Build a Module in Odoo 17 Using the Scaffold MethodCeline George
Odoo provides an option for creating a module by using a single line command. By using this command the user can make a whole structure of a module. It is very easy for a beginner to make a module. There is no need to make each file manually. This slide will show how to create a module using the scaffold method.
it describes the bony anatomy including the femoral head , acetabulum, labrum . also discusses the capsule , ligaments . muscle that act on the hip joint and the range of motion are outlined. factors affecting hip joint stability and weight transmission through the joint are summarized.
LAND USE LAND COVER AND NDVI OF MIRZAPUR DISTRICT, UPRAHUL
This Dissertation explores the particular circumstances of Mirzapur, a region located in the
core of India. Mirzapur, with its varied terrains and abundant biodiversity, offers an optimal
environment for investigating the changes in vegetation cover dynamics. Our study utilizes
advanced technologies such as GIS (Geographic Information Systems) and Remote sensing to
analyze the transformations that have taken place over the course of a decade.
The complex relationship between human activities and the environment has been the focus
of extensive research and worry. As the global community grapples with swift urbanization,
population expansion, and economic progress, the effects on natural ecosystems are becoming
more evident. A crucial element of this impact is the alteration of vegetation cover, which plays a
significant role in maintaining the ecological equilibrium of our planet.Land serves as the foundation for all human activities and provides the necessary materials for
these activities. As the most crucial natural resource, its utilization by humans results in different
'Land uses,' which are determined by both human activities and the physical characteristics of the
land.
The utilization of land is impacted by human needs and environmental factors. In countries
like India, rapid population growth and the emphasis on extensive resource exploitation can lead
to significant land degradation, adversely affecting the region's land cover.
Therefore, human intervention has significantly influenced land use patterns over many
centuries, evolving its structure over time and space. In the present era, these changes have
accelerated due to factors such as agriculture and urbanization. Information regarding land use and
cover is essential for various planning and management tasks related to the Earth's surface,
providing crucial environmental data for scientific, resource management, policy purposes, and
diverse human activities.
Accurate understanding of land use and cover is imperative for the development planning
of any area. Consequently, a wide range of professionals, including earth system scientists, land
and water managers, and urban planners, are interested in obtaining data on land use and cover
changes, conversion trends, and other related patterns. The spatial dimensions of land use and
cover support policymakers and scientists in making well-informed decisions, as alterations in
these patterns indicate shifts in economic and social conditions. Monitoring such changes with the
help of Advanced technologies like Remote Sensing and Geographic Information Systems is
crucial for coordinated efforts across different administrative levels. Advanced technologies like
Remote Sensing and Geographic Information Systems
9
Changes in vegetation cover refer to variations in the distribution, composition, and overall
structure of plant communities across different temporal and spatial scales. These changes can
occur natural.
Elysium Technologies Private Limited
ISO 9001:2008 A leading Research and Development Division
Madurai | Chennai | Trichy | Coimbatore | Kollam| Singapore
Website: elysiumtechnologies.com, elysiumtechnologies.info
Email: info@elysiumtechnologies.com
IEEE Final Year Project List 2011-2012
Abstract
CLOUD COMPUTING 2011 - 2012
01 A Hybrid Shared-nothing/Shared-data Storage Architecture for Large Scale Databases
Shared-nothing and shared-disk are two widely used storage architectures in current parallel database systems, and each
has its own merits for different query patterns. However, little effort has gone into integrating these two architectures and
exploiting their merits together. In this study, we propose a novel hybrid shared-nothing/shared-data storage scheme for
large-scale databases that leverages the benefits of both shared-nothing and shared-disk architectures. We adopt a
shared-nothing architecture as the hardware layer and a parallel file system as the storage layer. The proposed hybrid
storage scheme provides a high degree of parallelism in both I/O and computing, as in a shared-nothing system, while
achieving convenient, high-speed data sharing across multiple database nodes, as in a shared-disk system. The hybrid
scheme is therefore better suited to large-scale, data-intensive applications than either of the two individual types of
systems.
02 A performance goal oriented processor allocation technique for centralized heterogeneous multi-cluster environments
This paper proposes a processor allocation technique named temporal look-ahead processor allocation (TLPA), which
makes allocation decisions by evaluating their effects on subsequent jobs in the waiting queue. TLPA has two strengths.
First, it takes multiple performance factors into account when making an allocation decision. Second, it can be used to
optimize different performance metrics. To evaluate the performance of TLPA, we compare it with the best-fit and fastest-
first algorithms. Simulation results show that TLPA achieves up to a 32.75% improvement over conventional processor
allocation algorithms in terms of average turnaround time across various system configurations.
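The look-ahead idea can be sketched as follows. This is a toy model, not the paper's implementation: single-slot clusters with fixed relative speeds, unit release times, and a fastest-first heuristic for the rest of the queue are all assumptions made here for illustration.

```python
def finish_time(free_at, speed, length):
    """Completion time of a job of the given length, started once the cluster is free."""
    return free_at + length / speed

def greedy_turnarounds(jobs, free_at, speeds):
    """Fastest-first baseline: send each waiting job to the cluster that
    finishes it earliest; return the sum of job completion times."""
    free_at = list(free_at)
    total = 0.0
    for length in jobs:
        c = min(range(len(speeds)),
                key=lambda i: finish_time(free_at[i], speeds[i], length))
        free_at[c] = finish_time(free_at[c], speeds[c], length)
        total += free_at[c]
    return total

def tlpa_allocate(jobs, speeds):
    """Temporal look-ahead: place the head job on each candidate cluster in
    turn, simulate the remaining queue, and keep the choice minimizing the
    average turnaround of the whole queue."""
    head, rest = jobs[0], jobs[1:]
    best_cluster, best_total = None, float("inf")
    for c in range(len(speeds)):
        free_at = [0.0] * len(speeds)
        free_at[c] = finish_time(free_at[c], speeds[c], head)
        total = free_at[c] + greedy_turnarounds(rest, free_at, speeds)
        if total < best_total:
            best_cluster, best_total = c, total
    return best_cluster, best_total / len(jobs)
```

The point of the look-ahead is visible in the cost function: the head job's placement is judged by the turnaround of every queued job, not by its own finish time alone.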
03 A Petri Net Approach to Analyzing Behavioral Compatibility and Similarity of Web Services
Web services have become the technology of choice for implementing service-oriented computing, where Web services
can be composed in response to users' needs. It is critical to verify the compatibility of component Web services to
ensure the correctness of the whole composition in which these components participate. Traditionally, two conditions
must be satisfied during the verification of compatibility: reachable termination and proper termination. Unfortunately,
verifying both conditions is complex and time-consuming. To reduce this complexity, we model Web services using
colored Petri nets (PNs) and examine a specific structural property, namely well-structuredness. We prove that only
reachable termination needs to be satisfied when verifying behavioral compatibility among well-structured Web services.
When a composition is declared valid and one of its component Web services fails at run time, an alternative with similar
behavior must come into play as a substitute. Thus, it is important to develop effective approaches for analyzing the
similarity of Web services. Although many existing approaches utilize PNs to analyze behavioral compatibility, few
explore appropriate definitions of behavioral similarity or provide a user-friendly tool with automatic verification. In this
paper, we introduce a formal definition of context-independent similarity and show that a Web service can be substituted
by an alternative peer of similar behavior without affecting the other Web services in the composition. The cost of
verifying service substitutability is therefore largely reduced. We also provide a verification algorithm and implement it in
a tool, with which behavioral similarity of Web services can be verified automatically.
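The "reachable termination" condition mentioned above can be illustrated with a minimal Petri-net reachability check. This is a generic sketch, not the paper's colored-PN machinery: plain place/transition nets, markings as token-count tuples, and an exhaustive search that assumes the reachability graph is finite.

```python
from collections import deque

def reaches_termination(transitions, start, final):
    """Breadth-first search of the reachability graph: does some firing
    sequence lead from the initial marking to the final (termination)
    marking?  A transition is a (consume, produce) pair of equal-length
    per-place token vectors."""
    seen, frontier = {start}, deque([start])
    while frontier:
        marking = frontier.popleft()
        if marking == final:
            return True
        for consume, produce in transitions:
            # a transition is enabled when every input place has enough tokens
            if all(marking[i] >= consume[i] for i in range(len(marking))):
                nxt = tuple(marking[i] - consume[i] + produce[i]
                            for i in range(len(marking)))
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append(nxt)
    return False
```

Even this toy version shows why restricting the verification to reachable termination pays off: it is a single graph search, with no additional check that every intermediate state can still drain to the final marking.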
Madurai: Elysium Technologies Private Limited, 230 Church Road, Annanagar, Madurai, Tamilnadu – 625 020. Contact: 91452 4390702, 4392702, 4394702. eMail: info@elysiumtechnologies.com
Trichy: Elysium Technologies Private Limited, 3rd Floor, SI Towers, 15 Melapudur, Trichy, Tamilnadu – 620 001. Contact: 91431 4002234. eMail: elysium.trichy@gmail.com
Kollam: Elysium Technologies Private Limited, Surya Complex, Vendor Junction, Kollam, Kerala – 691 010. Contact: 91474 2723622. eMail: elysium.kollam@gmail.com
04 A Privacy Preserving Repository for Securing Data across the Cloud
The popularity of cloud computing is increasing day by day in the distributed computing environment, and there is a
growing trend of using cloud environments for storage and data processing needs. Cloud computing is Internet-based
computing, whereby shared resources, software, and information are provided to computers and other devices on
demand. However, adopting the cloud computing paradigm may have positive as well as negative effects on the data
security of service consumers. This paper primarily highlights some major security issues in current cloud computing
environments. The primary issue in cloud data security is protection of the data. The idea is to construct a privacy-
preserving repository in which data-sharing services can update and control access to, and limit the usage of, their
shared data instead of submitting it to central authorities; the repository thereby promotes both data sharing and data
privacy. This paper aims at achieving data confidentiality while keeping the harmonizing relations in the cloud intact. Our
proposed scheme enables the data owner to delegate most computation-intensive tasks to cloud servers without
disclosing the data contents or user access-privilege information.
05 A Scalable Method for Signalling Dynamic Reconfiguration Events with OpenSM
Rerouting around faulty components, on-the-fly policy changes, and migration of jobs all require reconfiguration of data
structures in the Queue Pairs residing in the hosts of an InfiniBand cluster. In addition to a proper implementation at the
host, the subnet manager needs a scalable method for signalling reconfiguration events to the hosts. In this paper we
propose and evaluate three different implementations for signalling dynamic reconfiguration events with OpenSM.
Through our evaluation we demonstrate a scalable solution for signalling host-side reconfiguration events in an InfiniBand
network, based on an example where dynamic network reconfiguration combined with a topology-agnostic routing
function is used to avoid malfunctioning components. Through measurements on our test cluster and an analytical study,
we show that our best proposal reduces reconfiguration latency by more than 90% and in certain situations eliminates it
completely. Furthermore, the processing overhead in the subnet manager is shown to be minimal.
06 A Segment-Level Adaptive Data Layout Scheme for Improved Load Balance in Parallel File Systems
Parallel file systems are designed to mask the ever-increasing gap between CPU and disk speeds via parallel I/O
processing. While they have become an indispensable component of modern high-end computing systems, their
inadequate performance is a critical issue facing the HPC community today. Conventionally, a parallel file system stripes a
file across multiple file servers with a fixed stripe size. The stripe size is a vital performance parameter, but its optimal
value is often application dependent, and determining it is a difficult research problem. Based on the observation that
many applications have distinct data-access clusters within one file, each with its own access pattern, we propose a
segmented data layout scheme for parallel file systems. The basic idea is to divide a file logically into segments such that
an optimal stripe size can be identified for each segment. A five-step method is introduced to conduct the segmentation,
identify the appropriate stripe size for each segment, and carry out the segmented data layout automatically.
Experimental results show that the proposed layout scheme is feasible and effective, improving performance by up to
163% for writing and 132% for reading on the widely used IOR and IOzone benchmarks.
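A per-segment layout can be sketched as follows. This is an illustrative stand-in for the paper's five-step method: it simply assumes each segment's stripe size should equal its dominant request size (so a typical request touches one server), and shows how an offset is then resolved to a server.

```python
def build_layout(segments):
    """segments: list of (segment_length, dominant_request_size) pairs.
    Assign each logical segment its own stripe size (here: the dominant
    request size observed for that segment)."""
    layout, start = [], 0
    for length, request_size in segments:
        layout.append({"start": start, "end": start + length, "stripe": request_size})
        start += length
    return layout

def locate(layout, offset, n_servers):
    """Map a file offset to (server, offset_within_stripe) under a
    round-robin placement of each segment's stripes."""
    for seg in layout:
        if seg["start"] <= offset < seg["end"]:
            rel = offset - seg["start"]
            stripe_no = rel // seg["stripe"]
            return stripe_no % n_servers, rel % seg["stripe"]
    raise ValueError("offset beyond end of file")
```

With a fixed global stripe size, one of the two access patterns would always straddle servers; letting the stripe size vary per segment avoids that.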
07 A Sketch-based Architecture for Mining Frequent Items and Itemsets from Distributed Data Streams
08 A Trustworthiness Fusion Model for Service Cloud Platform Based on D-S Evidence Theory
Trustworthiness plays an important role in service selection and usage. However, service trustworthiness is not easy to
define and compute because of its subjective meaning and the differing views on it. In this paper, we describe the
meaning of trustworthiness and a computation method for trustworthiness fusion. After extracting trustworthiness from
the service provider, service requestor, and service broker, we adopt D-S (Dempster-Shafer) evidence theory to fuse the
tripartite trustworthiness. Finally, we carried out comparison experiments on our web service supermarket platform and
verified the efficiency of our method.
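The fusion step rests on Dempster's rule of combination, which can be shown in a few lines. The rule itself is standard D-S theory; the frame of discernment ({trust, distrust}) and the three mass functions below are invented sample values, not figures from the paper.

```python
def combine(m1, m2):
    """Dempster's rule of combination.  Mass functions map frozenset
    hypotheses (subsets of the frame of discernment) to belief mass."""
    fused, conflict = {}, 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                fused[inter] = fused.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb          # mass on contradictory pairs
    k = 1.0 - conflict
    if k <= 0:
        raise ValueError("total conflict: the sources cannot be combined")
    return {h: m / k for h, m in fused.items()}

# Frame {trust, distrust}; EITHER represents ignorance.
TRUST = frozenset({"trust"})
DISTRUST = frozenset({"distrust"})
EITHER = TRUST | DISTRUST

provider  = {TRUST: 0.6, DISTRUST: 0.1, EITHER: 0.3}
requestor = {TRUST: 0.5, DISTRUST: 0.2, EITHER: 0.3}
broker    = {TRUST: 0.7, DISTRUST: 0.1, EITHER: 0.2}

fused = combine(combine(provider, requestor), broker)
```

Fusing the three views concentrates mass on "trust" while the normalization by 1 − K discards the conflicting evidence, which is exactly what makes D-S fusion attractive for tripartite trustworthiness.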
09 Addressing Resource Fragmentation in Grids Through Network–Aware Meta–Scheduling in Advance
Grids are made of heterogeneous, geographically dispersed computing resources on which providing Quality of Service
(QoS) is a challenging task. One way of enhancing the QoS perceived by users is to schedule jobs in advance, since
reservations of resources are not always possible; this makes it more likely that the appropriate resources are available to
run a job when needed. One drawback of this scenario is fragmentation, a well-known effect of allocating jobs to
resources and a cause of poor resource utilization. We have therefore developed a new technique to tackle fragmentation,
which consists of rescheduling already scheduled tasks. To this end, heuristics are implemented to calculate the intervals
to be replanned and to select the jobs involved in the process. Moreover, another heuristic packs rescheduled jobs as
close together as possible to minimize fragmentation. The technique has been tested on a real testbed.
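The packing heuristic can be sketched on a single resource. This is a deliberately simplified stand-in for the paper's replanning heuristics: one resource, non-preemptive jobs, and a left-shift compaction that respects each job's release time — all assumptions made here.

```python
def compact(reservations):
    """reservations: (release, start, duration) triples on one resource.
    Slide each job as early as its release time and its predecessor allow,
    closing the idle gaps that fragmentation leaves behind.
    Returns the replanned (start, duration) list in start order."""
    replanned, cursor = [], 0
    for release, start, duration in sorted(reservations, key=lambda r: r[1]):
        new_start = max(release, cursor)
        replanned.append((new_start, duration))
        cursor = new_start + duration
    return replanned
```

In the test below, three reservations that originally left two idle gaps are compacted into a gap-free plan, shrinking the occupied horizon from 11 time units to 6.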
10 APP: Minimizing Interference Using Aggressive Pipelined Prefetching In Multi-Level Buffer Caches
As services become more complex, with multiple interactions, and storage servers are shared by multiple services, the different I/O
streams arising from these services compete for disk attention. Aggressive Pipelined Prefetching (APP)-enabled storage clients are
designed to manage the buffer cache and I/O streams so as to minimize the disk I/O interference arising from competing streams.
Because of the large number of streams serviced by a storage server, most of the disk time is spent seeking, degrading response
times. The goal of APP is to decrease application execution time by increasing the throughput of individual I/O streams and utilizing
idle capacity on remote nodes along with idle network time, thus avoiding alternating bursts of activity followed by periods of
inactivity. APP significantly increases overall I/O throughput and decreases messaging overhead between servers. In APP, the
intelligence is embedded in the clients, which automatically infer parameters in order to achieve maximum throughput. APP clients
use aggressive prefetching and data offloading to remote buffer caches in multi-level buffer cache hierarchies to minimize disk
interference and temper the effects of aggressive prefetching. We used an extremely I/O-intensive Radix-k application, employed in
studies on the scalability of parallel image composition and
particle tracing developed at the Argonne National Laboratory, with data sets of up to 128 GB, and implemented our scheme on a
16-node Linux cluster. We observed that the execution time of the application decreased by 68% on average when using our scheme.
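The pipelining principle behind APP can be illustrated in a few lines. This is a generic client-side sketch under assumptions of our own (a `fetch` callable standing in for a block read, thread-based overlap, a fixed prefetch depth), not APP's actual multi-level cache or offloading logic.

```python
from concurrent.futures import ThreadPoolExecutor

def pipelined_read(fetch, block_ids, depth=4):
    """Keep up to `depth` block fetches in flight while consuming results
    in order, so the latency of later reads hides behind the processing of
    earlier ones.  `fetch` is any callable mapping a block id to its data."""
    with ThreadPoolExecutor(max_workers=depth) as pool:
        futures = [pool.submit(fetch, b) for b in block_ids]
        return [f.result() for f in futures]   # consume in submission order
```

The `depth` parameter is the tuning knob: too small and the pipeline stalls on every fetch; too large and the prefetched blocks evict each other from the buffer cache, which is the interference APP is designed to avoid.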
11 ASDF: An Autonomous and Scalable Distributed File System
The demand for huge storage space from data-intensive applications and high-performance scientific computing
continues to grow. Integrating massive distributed storage resources to provide huge storage space is an important and
challenging issue in Cloud and Grid computing. In this paper, we propose a distributed file system, called ASDF, to meet
the demands not only of data-intensive applications but also of end users, developers, and administrators. While sharing
many goals with previous distributed file systems, such as scalability, reliability, and performance, it is also designed with
an emphasis on compatibility, extensibility, and autonomy. With these design goals in mind, we address several issues
and present our design, adopting peer-to-peer technology, replication, multi-source data transfer, metadata caching, and
a service-oriented architecture. The experimental results show that the proposed distributed file system meets our design
goals and will be useful in Cloud and Grid computing.
12 Assertion-Based Parallel Debugging
Programming languages have advanced tremendously over the years, but program debuggers have hardly changed.
Sequential debuggers do little more than allow a user to control the flow of a program and examine its state. Parallel ones
support the same operations on multiple processes, which is adequate with a small number of processors but becomes
unwieldy and ineffective on very large machines. Typical scientific codes have enormous multidimensional data
structures, and it is impractical to expect a user to view the data using traditional display techniques. In this paper we
discuss the use of debug-time assertions and show that these can be used to debug parallel programs. The techniques
reduce debugging complexity because they reason about the state of large arrays without requiring the user to know the
expected value of every element. Assertions can be expensive to evaluate, but their performance can be improved by
running them in parallel. We demonstrate the system with a case study finding errors in a parallel version of the Shallow
Water Equations, and evaluate the performance of the tool on a 4,096-core Cray XE6.
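The core idea — stating a property of a large array rather than the expected value of each element, and checking it in parallel — can be sketched as follows. The chunked thread-pool evaluation and the bounded-height example are illustrative assumptions, not the tool's actual mechanism.

```python
from concurrent.futures import ThreadPoolExecutor

def parallel_assert(data, predicate, chunks=4):
    """Debug-time assertion over a large array: the user supplies a property
    every element must satisfy, and the chunks are checked concurrently."""
    size = max(1, len(data) // chunks)
    parts = [data[i:i + size] for i in range(0, len(data), size)]
    with ThreadPoolExecutor(max_workers=chunks) as pool:
        return all(pool.map(lambda part: all(predicate(x) for x in part), parts))
```

For a shallow-water solver, for example, one might assert that every height value stays finite and within physical bounds after each time step, without knowing what any individual value should be.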
13 Autonomic SLA-driven Provisioning for Cloud Applications
Significant achievements have been made in the automated allocation of cloud resources. However, the performance of
applications may be poor in peak-load periods unless their cloud resources are dynamically adjusted. Moreover, although
cloud resources dedicated to different applications are virtually isolated, performance fluctuations do occur because of
resource sharing and software or hardware failures (e.g., unstable virtual machines or power outages). In this paper, we
propose a decentralized economic approach for dynamically adapting the cloud resources of various applications so as to
statistically meet their SLA performance and availability goals in the presence of varying loads or failures. In our
approach, the dynamic economic fitness of a Web service determines whether it is replicated, migrated to another server,
or deleted. The economic fitness of a Web service depends on its individual performance constraints, its load, and the
utilization of the resources where it resides. Cascading performance objectives are dynamically calculated for individual
tasks in the application workflow according to the user requirements. By fully implementing our framework, we
experimentally show that our adaptive approach statistically meets the performance objectives under peak loads or
failures, as opposed to static resource settings.
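The fitness-driven control loop can be sketched as below. The fitness formula here is entirely invented for illustration (the paper does not give one in this abstract): it rewards meeting a response-time objective and discounts utilization far from an assumed 70% sweet spot; the decision thresholds are likewise hypothetical.

```python
def fitness(load, capacity, target_ms, observed_ms):
    """Illustrative economic fitness (NOT the paper's formula): reward for
    meeting the response-time objective, discounted by how far utilization
    strays from an assumed 70% sweet spot."""
    sla_ratio = target_ms / observed_ms          # > 1 when the objective is met
    utilization = load / capacity
    return sla_ratio * (1.0 - abs(utilization - 0.7))

def decide(f, low=0.8, high=1.5):
    """Map a service's fitness to a management action (thresholds are
    hypothetical)."""
    if f >= high:
        return "replicate"                       # hot and healthy: add a copy
    if f < low:
        return "migrate-or-delete"               # unfit where it currently runs
    return "keep"
```

The decentralized aspect is that each service evaluates its own fitness locally; no global controller is needed for the replicate/migrate/delete decision.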
14 BAR: An Efficient Data Locality Driven Task Scheduling Algorithm for Cloud Computing
Large scale data processing is increasingly common in cloud computing systems like MapReduce, Hadoop, and Dryad in
recent years. In these systems, files are split into many small blocks, and each block is replicated over several servers. To
process files efficiently, each job is divided into many tasks, and each task is allocated to a server to deal with one file
block. Because network bandwidth is a scarce resource in these systems, enhancing task data locality (placing tasks on
servers that contain their input blocks) is crucial for job completion time. Although there have been many approaches to
improving data locality, most of them are either greedy, ignoring global optimization, or suffer from high computational
complexity. To address these problems, we propose a heuristic task scheduling algorithm called BAlance-Reduce (BAR),
in which an initial task allocation is produced first, and the job completion time is then reduced gradually by tuning this
initial allocation. By taking a global view, BAR can adjust data locality dynamically according to network state and cluster
workload. The simulation results show that BAR is able to handle large problem instances in a few seconds and
outperforms previous related algorithms in terms of job completion time.
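The two-phase structure — an initial locality-first allocation, then tuning — can be sketched with a toy cost model. The unit-time tasks, single remote-read penalty, and the rebalancing rule below are all assumptions made here, not BAR's actual heuristics.

```python
def bar_schedule(preferred, n_servers, remote_penalty=1):
    """Toy two-phase scheduler: preferred[i] is the server holding task i's
    input block; tasks take unit time locally, 1 + remote_penalty remotely.
    Phase 1 is a pure-locality allocation; phase 2 moves tasks to the
    least-loaded server only while the queue imbalance outweighs the
    remote-read penalty.  Returns (assignment, longest queue)."""
    load = [0] * n_servers
    assignment = list(preferred)                 # phase 1: data locality first
    for s in assignment:
        load[s] += 1
    improved = True
    while improved:                              # phase 2: tune the allocation
        improved = False
        for i, s in enumerate(assignment):
            target = min(range(n_servers), key=load.__getitem__)
            if load[s] - (load[target] + 1) > remote_penalty:
                load[s] -= 1
                load[target] += 1
                assignment[i] = target
                improved = True
    return assignment, max(load)
```

Raising `remote_penalty` makes the tuner more conservative, exactly the trade-off the abstract describes: cheap bandwidth favors balance, scarce bandwidth favors locality.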
15 Building an online domain-specific computing service over non-dedicated grid and cloud resources: the Superlink-online experience
Linkage analysis is a statistical method used by geneticists in everyday practice for mapping disease-susceptibility genes
in the study of complex diseases. An essential first step in the study of genetic diseases, linkage computations may
require years of CPU time. The recent DNA sampling revolution enabled unprecedented sampling density but made the
analysis even more computationally demanding. In this paper we describe a high-performance online service for genetic
linkage analysis, called Superlink-online. The system enables anyone with Internet access to submit genetic data and
analyze it as easily and quickly as if using a supercomputer. The analyses are automatically parallelized and executed on
tens of thousands of distributed CPUs in multiple clouds and grids. The first version of the system, which employed up to
3,000 CPUs in the UW Madison and Technion Condor pools, has been used successfully since 2006 by hundreds of
geneticists worldwide, with over 40 citations in the genetics literature. Here we describe the second version, which
substantially improves the scalability and performance of the first: it uses over 45,000 non-dedicated hosts in 10 different
grids and clouds, including EC2 and the Superlink@Technion community grid. Improved system performance is obtained
through a virtual grid hierarchy with dynamic load balancing and a multi-grid overlay via the GridBot system, parallel
pruning of short tasks to minimize overhead, and cost-efficient use of cloud resources in reliability-critical execution
periods. These enhancements enabled execution of many previously infeasible analyses, which can now be completed
within a few hours. The new version of the system, in production since 2009, has completed over 6,500 runs comprising
over 10 million tasks, with a total consumption of 420 CPU years.
16 Cheetah: A Framework for Scalable Hierarchical Collective Operations
Collective communication operations, used by many scientific applications, tend to limit overall parallel application
performance and scalability. Computer systems are becoming more heterogeneous with increasing node and core-per-node
counts. Also, a growing number of data-access mechanisms, of varying characteristics, are supported within a single
computer system. We describe a new hierarchical collective communication framework that takes advantage of hardware-
specific data-access mechanisms. It is flexible, with run-time hierarchy specification, and sharing of collective
communication primitives between collective algorithms. Data buffers are shared between levels in the hierarchy reducing
collective communication management overhead. We have implemented several versions of the Message Passing Interface
(MPI) collective operations, MPI_Barrier() and MPI_Bcast(), and run experiments using up to 49,152 processes on a Cray XT5
and a small InfiniBand-based cluster. At 49,152 processes our barrier implementation outperforms the optimized native
implementation by 75%. 32-byte and one-megabyte broadcasts outperform it by 62% and 11%, respectively, with better
scalability characteristics. Improvements relative to the default Open MPI implementation are much larger.
17 Classification and Composition of QoS Attributes in Distributed, Heterogeneous Systems
In large-scale distributed systems the selection of services and data sources to respond to a given request is a crucial task.
Non-functional or Quality of Service (QoS) attributes need to be considered when there are several candidate services with
identical functionality. Before applying any service selection optimization strategy, the system has to be analyzed in terms of
Madurai Trichy Kollam
Elysium Technologies Private Limited Elysium Technologies Private Limited Elysium Technologies Private Limited
230, Church Road, Annanagar, 3rd Floor,SI Towers, Surya Complex,Vendor junction,
Madurai , Tamilnadu – 625 020. 15 ,Melapudur , Trichy, kollam,Kerala – 691 010.
Contact : 91452 4390702, 4392702, 4394702. Tamilnadu – 620 001. Contact : 91474 2723622.
eMail: info@elysiumtechnologies.com Contact : 91431 - 4002234. eMail: elysium.kollam@gmail.com
eMail: elysium.trichy@gmail.com
5
6. Elysium Technologies Private Limited
ISO 9001:2008 A leading Research and Development Division
Madurai | Chennai | Trichy | Coimbatore | Kollam| Singapore
Website: elysiumtechnologies.com, elysiumtechnologies.info
Email: info@elysiumtechnologies.com
IEEE Final Year Project List 2011-2012
QoS metrics, comparable to the statistics needed by a database query optimizer. This paper presents a classification
approach for QoS attributes of system components, from which aggregation functions for composite services are derived.
The applicability and usefulness of the approach is shown in a distributed system from a High-Energy Physics experiment
posing a complex service selection challenge.
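As a sketch of how aggregation functions for composite services might look, the snippet below composes three common QoS attributes for services invoked in sequence. The attribute names and aggregation rules are illustrative assumptions, not the paper's classification.

```python
# Illustrative QoS aggregation for services composed in sequence:
# latency is additive, availability is multiplicative, throughput is the bottleneck.
import math

def aggregate_sequential(services):
    return {
        "latency_ms":   sum(s["latency_ms"] for s in services),
        "availability": math.prod(s["availability"] for s in services),
        "throughput":   min(s["throughput"] for s in services),
    }

chain = [
    {"latency_ms": 20, "availability": 0.99, "throughput": 500},
    {"latency_ms": 35, "availability": 0.95, "throughput": 300},
]
qos = aggregate_sequential(chain)
```

Other composition patterns (parallel branches, alternatives) would need different rules, which is exactly why a classification of attributes is needed before service selection.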
18 Cloud MapReduce: a MapReduce Implementation on top of a Cloud Operating System
This study presents a fully automated membrane segmentation technique for immunohistochemical tissue images with
membrane staining, which is a critical task in computerized immunohistochemistry (IHC). Membrane segmentation is
particularly tricky in immunohistochemical tissue images because the cellular membranes are visible only in the stained
tracts of the cell, while the unstained tracts are not visible. Our automated method provides accurate segmentation of the
cellularmembranes in the stained tracts and reconstructs the approximate location of the unstained tracts using nuclear
membranes as a spatial reference. Accurate cell-by-cell membrane segmentation allows per cell morphological analysis and
quantification of the target membrane proteins that is fundamental in several medical applications such as cancer
characterization and classification, personalized therapy design, and for any other applications requiring cell morphology
characterization. Experimental results on real datasets from different anatomical locations demonstrate the wide
applicability and high accuracy of our approach in the context of IHC analysis.
19 CloudSpider: Combining Replication with Scheduling for Optimizing Live Migration of Virtual Machines Across Wide Area Networks
Exact information about the shape of a lumbar pedicle can increase operation accuracy and safety during computer-aided
spinal fusion surgery, which requires extreme caution on the part of the surgeon, due to the complexity and delicacy of the
procedure. In this paper, a robust framework for segmenting the lumbar pedicle in computed tomography (CT) images is
presented. The framework that has been designed takes a CT image, which includes the lumbar pedicle as input, and
provides the segmented lumbar pedicle in the form of 3-D voxel sets. This multistep approach begins with 2-D dynamic
thresholding using local optimal thresholds, followed by procedures to recover the spine geometry in a high curvature
environment. A subsequent canal reference determination using proposed thinning-based integrated cost is then performed.
Based on the obtained segmented vertebra and canal reference, the edge of the spinal pedicle is segmented. This framework
has been tested on 84 lumbar vertebrae of 19 patients requiring spinal fusion. It was successfully applied, resulting in an
average success rate of 93.22% and a final mean error of 0.14±0.05 mm. Precision errors were smaller than 1% for spine
pedicle volumes. Intra- and interoperator precision errors were not significantly different.
20 Automatic and Unsupervised Snore Sound Extraction From Respiratory Sound Signals
In this paper, an automatic and unsupervised snore detection algorithm is proposed. The respiratory sound signals of 30
patients with different levels of airway obstruction were recorded by two microphones: one placed over the trachea (the
tracheal microphone) and the other a freestanding microphone (the ambient microphone). All the recordings were done
simultaneously with full-night polysomnography during sleep. The sound activity episodes were identified using the vertical
box (V-Box) algorithm. The 500-Hz subband energy distribution and principal component analysis were used to extract
discriminative features from sound episodes. An unsupervised fuzzy C-means clustering algorithm was then deployed to
label each sound episode as either snore or no-snore, where the no-snore class could be breath sounds, swallowing sounds, or any other
noise. The algorithm was evaluated using manual annotation of the sound signals. The overall accuracy of the proposed
algorithm was found to be 98.6% for tracheal sounds recordings, and 93.1% for the sounds recorded by the ambient
microphone.
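The unsupervised labeling step can be illustrated with a minimal one-dimensional fuzzy C-means (fuzzifier m = 2) on toy episode energies. The paper's actual features (500-Hz subband energy distributions after PCA) and parameters are not reproduced here.

```python
# Minimal 1-D fuzzy C-means (m = 2) illustrating the unsupervised labeling step.
# xs are toy per-episode "energies"; feature extraction is omitted.
def fuzzy_cmeans_1d(xs, c1, c2, iters=50, m=2.0):
    for _ in range(iters):
        u = []
        for x in xs:
            d1, d2 = abs(x - c1) + 1e-12, abs(x - c2) + 1e-12
            # Membership in cluster 1: 1 / sum_j (d1/dj)^(2/(m-1))
            u.append(1.0 / (1.0 + (d1 / d2) ** (2 / (m - 1))))
        c1 = sum(ui ** m * x for ui, x in zip(u, xs)) / sum(ui ** m for ui in u)
        c2 = sum((1 - ui) ** m * x for ui, x in zip(u, xs)) / sum((1 - ui) ** m for ui in u)
    return c1, c2, u

# Toy data: low energies ~ breath, high energies ~ snore.
xs = [0.1, 0.2, 0.15, 0.9, 1.0, 0.95]
c1, c2, u = fuzzy_cmeans_1d(xs, 0.0, 1.5)
# u is membership in the low-energy cluster, so low membership means "snore".
labels = ["snore" if ui < 0.5 else "no-snore" for ui in u]
```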
21 Dealing with Grid-Computing Authorization using Identity-Based Certificateless Proxy Signature
In this paper, we propose a new Identity-Based Certificateless Proxy Signature scheme, for the grid environment, in order to
enable attribute-based authorization, fine-grained delegation, and enhanced delegation chain establishment and validation, all
without relying on any kind of PKI Certificates or proxy certificates. We show that our scheme is correct and secure. We also
give an evaluation of the computational and communication overhead of the proposed scheme. Simulations show
satisfactory results.
22 Debunking Real-Time Pricing in Cloud Computing
Elasticity of cloud computing eases the burden of capacity planning. Cloud computing users dynamically provision
IT resources to track their fluctuating demand, and pay only for their usage. Therefore, cloud computing essentially shifts
the burden of capacity planning from the user's side to the provider's side. On the other hand, providers take on this burden
with the optimistic assumption that diverse workloads from various users will flatten the overall demand curve. However, this
optimistic hypothesis has not been proved in real-world cases; in fact, counter-evidence has been raised. In
December 2009, Amazon Web Services (AWS), a leading infrastructure cloud service provider, started to offer real-time
pricing for computing resources: Amazon EC2 Spot Instances (SIs). Real-time pricing, in principle,
encourages users to shift their flexible workloads from the provider's peak hours to off-peak hours with monetary incentives.
Interestingly, from our observation of AWS's one-year SI price history datasets, we
conclude that the observed monetary incentive is not large enough to motivate users to shift their workloads. It is
reasonable for users to choose SIs over on-demand instances because SIs are 52.3% cheaper on average. Beyond that, shifting
the workload to cheaper periods provides only 3.7% additional cost savings at best. Moreover, both average cost savings
and price fluctuation have not changed meaningfully over time.
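The paper's two-level comparison can be reproduced in miniature with hypothetical prices (the 52.3% and 3.7% figures above come from the real AWS price history, not from these numbers):

```python
# Back-of-the-envelope version of the comparison: savings from choosing spot
# instances at all, versus the extra savings from also timing the cheapest hours.
on_demand = 0.10                                   # $/hour, hypothetical
spot_prices = [0.048, 0.050, 0.046, 0.052, 0.044]  # hypothetical hourly history

avg_spot = sum(spot_prices) / len(spot_prices)
savings_vs_on_demand = 1 - avg_spot / on_demand    # incentive to use SIs at all
extra_from_shifting = 1 - min(spot_prices) / avg_spot  # incentive to shift workloads
```

When the second number is small relative to the first, as the paper observes, there is little monetary reason for users to reshape their demand curve.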
23 DELMA: Dynamically ELastic MApReduce Framework for CPU-Intensive Applications
Since its introduction, MapReduce implementations have been primarily focused towards static compute cluster sizes. In
this paper, we introduce the concept of dynamic elasticity to MapReduce. We present the design decisions and
implementation tradeoffs for DELMA, (Dynamically ELastic MApReduce), a framework that follows the MapReduce paradigm,
just like Hadoop MapReduce, but that is capable of growing and shrinking its cluster size, as jobs are underway. In our
study, we test DELMA in diverse performance scenarios, ranging from diverse node additions to node additions at various
points in the application run-time with various dataset sizes. The applicability of the MapReduce paradigm extends far
beyond its use with large-scale data intensive applications, and can also be brought to bear in processing long running
distributed applications executing on small-sized clusters. In this work, we focus both on the performance of processing
hierarchical data in distributed scientific applications, as well as the processing of smaller but demanding input sizes
primarily used in small clusters. We run experiments for datasets that require CPU-intensive processing, ranging in size from
millions of input data elements up to over half a billion elements, and observe the positive scalability patterns
exhibited by the system. We show that for such sizes, performance increases accordingly with data and cluster size
increases. We conclude on the benefits of providing MapReduce with the capability of dynamically growing and shrinking its
cluster configuration by adding and removing nodes during jobs, and explain the possibilities presented by this model.
24 Detection and Protection against Distributed Denial of Service Attacks in Accountable Grid Computing Systems
By exploiting existing vulnerabilities, malicious parties can take advantage of resources made available by grid systems to
attack mission-critical websites or the grid itself. In this paper, we present two approaches for protecting against attacks
targeting sites outside or inside the grid. Our approach is based on special-purpose software agents that collect provenance
and resource usage data in order to perform detection and protection. We show the effectiveness and the efficiency of our
approach by conducting various experiments on an emulated grid test-bed.
25 DHTbd: A Reliable Block-based Storage System for High Performance Clusters
Large, reliable and efficient storage systems are becoming increasingly important in enterprise environments. Our research
in storage system design is oriented towards the exploitation of commodity hardware for building a high performance,
resilient and scalable storage system. We present the design and implementation of DHTbd, a general purpose decentralized
storage system where storage nodes support a distributed hash table based interface and clients are implemented as
in-kernel device drivers. DHTbd, unlike most storage systems proposed to date, is implemented at the block device level of the
I/O stack, a simple yet efficient design. The experimental evaluation of the proposed system demonstrates its very good I/O
performance, its ability to scale to large clusters, as well as its robustness, even when massive failures occur.
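A toy version of DHT-style block placement, using consistent hashing over node identifiers, gives the flavor of a decentralized block store; DHTbd's actual hash scheme, replication, and kernel driver interface are not modeled here.

```python
# Toy consistent-hashing placement of block addresses onto storage nodes
# (a sketch of the general DHT idea, not DHTbd's actual scheme).
import bisect
import hashlib

def ring_position(key):
    return int(hashlib.sha1(key.encode()).hexdigest(), 16)

class BlockDHT:
    def __init__(self, nodes):
        # Each node owns the arc of the ring preceding its position.
        self.ring = sorted((ring_position(n), n) for n in nodes)

    def node_for_block(self, block_no):
        pos = ring_position(f"block-{block_no}")
        keys = [p for p, _ in self.ring]
        i = bisect.bisect_right(keys, pos) % len(self.ring)
        return self.ring[i][1]

dht = BlockDHT(["node-a", "node-b", "node-c"])
owner = dht.node_for_block(42)
```

Because placement is a pure function of the block address, any client (here, an in-kernel driver) can locate a block without a central metadata server.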
26 Diagnosing Anomalous Network Performance with Confidence
Variability in network performance is a major obstacle in effectively analyzing the throughput of modern high performance
computer systems. High performance interconnection networks offer excellent best-case network latencies; however, highly
parallel applications running on parallel machines typically require consistently high levels of performance to adequately
leverage the massive amounts of available computing power. Performance analysts have usually quantified network
performance using traditional summary statistics that assume the observational data is sampled from a normal distribution.
In our examinations of network performance, we have found this method of analysis often provides too little data to
understand anomalous network performance. In particular, we examine a multi-modal performance scenario encountered
with an Infiniband interconnection network and we explore the performance repeatability on the custom Cray SeaStar2
interconnection network after a set of software and driver updates.
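A synthetic example shows why mean-based summaries hide multi-modal latency behavior (the paper analyzes real InfiniBand and SeaStar2 measurements):

```python
# A bimodal latency population: a normal-distribution summary reports a mean
# that was never actually observed, masking the anomalous mode entirely.
import statistics

fast = [10.0] * 50   # microseconds, typical path
slow = [30.0] * 50   # microseconds, anomalous path
latencies = fast + slow

mean = statistics.mean(latencies)   # 20.0, a latency no message ever saw
modes = sorted(set(latencies))      # the two real behaviors: 10.0 and 30.0
```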
27 Enabling Multi-Physics Coupled Simulations within the PGAS Programming Framework
Complex coupled multi-physics simulations are playing increasingly important roles in scientific and engineering
applications such as fusion plasma and climate modeling. At the same time, extreme scales, high levels of concurrency and
the advent of multicore and manycore technologies are making the high-end parallel computing systems on which these
simulations run, hard to program. While Partitioned Global Address Space (PGAS) languages attempt to address
the problem, the PGAS model does not easily support the coupling of multiple application codes, which is necessary for the
coupled multi-physics simulations. Furthermore, existing frameworks that support coupled simulations have been
developed for fragmented programming models such as message passing, and are conceptually mismatched with the
shared memory address space abstraction in the PGAS programming model. This paper explores how multi-physics coupled
simulations can be supported within the PGAS programming framework. Specifically, in this paper, we present the design
and implementation of the XpressSpace programming system, which enables efficient and productive development of
coupled simulations across multiple independent PGAS Unified Parallel C (UPC) executables. XpressSpace provides the
global-view style programming interface that is consistent with the memory model in UPC, and provides an efficient runtime
system that can dynamically capture the data decomposition of global-view arrays and enable fast exchange of parallel data
structures between coupled codes. In addition, XpressSpace provides the flexibility to define the coupling process in a
specification file that is independent of the program source codes. We evaluate the performance and scalability of the
XpressSpace prototype implementation using different coupling patterns extracted from real world multi-physics simulation
scenarios, on the Jaguar Cray XT5 system at Oak Ridge National Laboratory.
28 EZTrace: a generic framework for performance analysis
Modern supercomputers with multi-core nodes enhanced by accelerators, as well as hybrid programming models introduce
more complexity in modern applications. Exploiting efficiently all the resources requires a complex analysis of the
performance of applications in order to detect time-consuming sections. We present EZTRACE, a generic trace generation
framework that aims at providing a simple way to analyze applications. EZTRACE is based on plugins that allow it to trace
different programming models such as MPI, pthread or OpenMP as well as user-defined libraries or applications. EZTRACE
uses two steps: one that collects basic information during execution and a post-mortem analysis step. This permits tracing
the execution of applications with low overhead while allowing the analysis to be refined after execution. We also present a
script language for EZTRACE that gives the user the opportunity to easily define the functions to instrument without
modifying the source code of the application.
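The two-step idea (cheap event collection at run time, analysis post-mortem) can be sketched with a Python decorator; this is only an analogy, not the EZTRACE plugin API.

```python
# Phase 1: record cheap (name, start, end) events while the program runs.
# Phase 2: aggregate the raw trace after execution, when overhead no longer matters.
import time

TRACE = []  # raw events collected at runtime

def traced(fn):
    def wrapper(*args, **kwargs):
        t0 = time.perf_counter()
        result = fn(*args, **kwargs)
        TRACE.append((fn.__name__, t0, time.perf_counter()))  # low-overhead record
        return result
    return wrapper

@traced
def compute(n):
    return sum(range(n))

for _ in range(3):
    compute(1000)

# Post-mortem phase: refine events into per-function call counts and total time.
def summarize(trace):
    totals = {}
    for name, t0, t1 in trace:
        calls, total = totals.get(name, (0, 0.0))
        totals[name] = (calls + 1, total + (t1 - t0))
    return totals

summary = summarize(TRACE)
```

Separating the phases is what keeps runtime overhead low: the expensive aggregation runs only after the application has finished.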
29 Failure Avoidance through Fault Prediction Based on Synthetic Transactions
System logs are an important tool in studying the conditions (e.g., environment misconfigurations, resource status,
erroneous user input) that cause failures. However, production system logs are complex, verbose, and lack structural
stability over time. These traits make them hard to use, and make solutions that rely on them susceptible to high
maintenance costs. Additionally, logs record failures after they occur: by the time logs are investigated, users have already
experienced the failures’ consequences. To detect the environment conditions that are correlated with failures without
dealing with the complexities associated with processing production logs, and to prevent failure-causing conditions from
occurring before the system goes live, this research suggests a three step methodology: i) using synthetic transactions, i.e.,
simplified workloads, in pre-production environments that emulate user behavior, ii) recording the result of executing these
transactions in logs that are compact, simple to analyze, stable over time, and specifically tailored to the fault metrics of
interest, and iii) mining these specialized logs to understand the conditions that correlate to failures. This allows system
administrators to configure the system to prevent these conditions from happening. We evaluate the effectiveness of this
approach by replicating the behavior of a service used in production at Microsoft, and testing the ability to predict failures
using a synthetic workload on a 650 million events production trace. The synthetic prediction system is able to predict 91%
of real production failures using 50-fold fewer transactions and logs that are 10,000-fold more compact than their production
counterparts.
30 GeoServ: A Distributed Urban Sensing Platform
Urban sensing where mobile users continuously gather, process, and share location-sensitive sensor data (e.g., street
images, road condition, traffic flow) is emerging as a new network paradigm of sensor information sharing in urban
environments. The key enablers are the smartphones (e.g., iPhones and Android phones) equipped with onboard sensors
(e.g., cameras, accelerometer, compass, GPS) and various wireless devices (e.g., WiFi and 2/3G). The goal of this paper is to
design a scalable sensor networking platform where millions of users on the move can participate in urban sensing and
share location-aware information using always-on cellular data connections. We propose a two-tier sensor networking
platform called GeoServ where mobile users publish/access sensor data via an Internet-based distributed P2P overlay
network. The main contribution of this paper is two-fold: a location-aware sensor data retrieval scheme that supports
geographic range queries, and a location-aware publish-subscribe scheme that enables efficient multicast routing over a
group of subscribed users. We prove that GeoServ protocols preserve locality and validate their performance via extensive
simulations.
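A geographic range query of the kind such a platform serves can be illustrated with naive point-in-box filtering; GeoServ's distributed P2P index, which makes such queries scalable, is not modeled here, and the report values are hypothetical.

```python
# Naive geographic range query: return the sensor reports whose coordinates
# fall inside a latitude/longitude bounding box.
def range_query(points, lat_min, lat_max, lon_min, lon_max):
    return [p for p in points
            if lat_min <= p["lat"] <= lat_max and lon_min <= p["lon"] <= lon_max]

reports = [
    {"id": "r1", "lat": 37.78, "lon": -122.41},  # hypothetical sensor reports
    {"id": "r2", "lat": 40.71, "lon": -74.00},
]
hits = range_query(reports, 37.0, 38.0, -123.0, -122.0)
```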
31 GPGPU-Accelerated Parallel and Fast Simulation of Thousand-core Platforms
The multicore revolution and the ever-increasing complexity of computing systems are dramatically changing system design,
analysis and programming of computing platforms. Future architectures will feature hundreds to thousands of simple
processors and on-chip memories connected through a network-on-chip. Architectural simulators will remain primary tools
for design space exploration, software development and performance evaluation of these massively parallel architectures.
However, architectural simulation performance is a serious concern, as virtual platforms and simulation technology are not
able to tackle the complexity of future thousand-core scenarios. The main contribution of this paper is the development
of a new simulation approach and technology for many-core processors that exploits the enormous parallel processing
capability of low-cost and widely available General Purpose Graphics Processing Units (GPGPUs). The simulation of
many-core architectures indeed exhibits a high level of parallelism and is inherently parallelizable, but GPGPU acceleration of
architectural simulation requires an in-depth revision of the data structures and functional partitioning traditionally used in
parallel simulation. We demonstrate our GPGPU simulator on a target architecture composed of several cores (ARM ISA
based), with instruction and data caches, connected through a Network-on-Chip (NoC). Our experiments confirm the
feasibility of our approach.
32 Grid Global Behavior Prediction
Complexity has always been one of the most important issues in distributed computing. From the first clusters to grid and
now cloud computing, dealing correctly and efficiently with system complexity is the key to taking technology a step further.
In this sense, global behavior modeling is an innovative methodology aimed at understanding the grid behavior. The main
objective of this methodology is to synthesize the grid’s vast, heterogeneous nature into a simple but powerful behavior
model, represented in the form of a single, abstract entity, with a global state. Global behavior modeling has proved to be
very useful in effectively managing grid complexity but, in many cases, deeper knowledge is needed. It generates a
descriptive model that could be greatly improved if extended not only to explain behavior, but also to predict it. In this paper
we present a prediction methodology whose objective is to define the techniques needed to create global behavior
prediction models for grid systems. This global behavior prediction can benefit grid management, specially in areas such as
fault tolerance or job scheduling. The paper presents experimental results obtained in real scenarios in order to validate this
approach.
33 High Performance Pipelined Process Migration with RDMA
Coordinated Checkpoint/Restart (C/R) is a widely deployed strategy to achieve fault-tolerance. However, C/R by itself is not
capable enough to meet the demands of upcoming exascale systems, due to its heavy I/O overhead. Process migration has
already been proposed in literature as a pro-active fault-tolerance mechanism to complement C/R. Several popular MPI
implementations have provided support for process migration, including MVAPICH2 and OpenMPI. But these existing
solutions cannot yield a satisfactory performance. In this paper we conduct extensive profiling on several process migration
mechanisms, and reveal that inefficient I/O and network transfer are the principal factors responsible for the high overhead.
We then propose a new approach, Pipelined Process Migration with RDMA (PPMR), to overcome these overheads. Our new
protocol fully pipelines data writing, data transfer, and data read operations during different phases of a migration cycle.
PPMR aggregates data writes on the migration source node and transfers data to the target node via high throughput RDMA
transport. It implements an efficient process restart mechanism at the target node to restart processes from the RDMA data
streams. We have implemented this Pipelined Process Migration protocol in MVAPICH2 and studied the performance
benefits. Experimental results show that PPMR achieves a 10.7X speedup to complete a process migration over the
conventional approach at moderate (8 MB) memory usage. Process migration overhead on the application is significantly
reduced, from 38% to 5%, by PPMR when three migrations are performed in succession.
34 Improving Utilization of Infrastructure Clouds
A key advantage of infrastructure-as-a-service (IaaS) clouds is providing users on-demand access to resources. To provide
on-demand access, however, cloud providers must either significantly overprovision their infrastructure (and pay a high
price for operating resources with low utilization) or reject a large proportion of user requests (in which case the access is
no longer on-demand). At the same time, not all users require truly on-demand access to resources. Many applications and
workflows are designed for recoverable systems where interruptions in service are expected. For instance, many scientists
utilize high-throughput computing (HTC)-enabled resources, such as Condor, where jobs are dispatched to available
resources and terminated when the resource is no longer available. We propose a cloud infrastructure that combines on-
demand allocation of resources with opportunistic provisioning of cycles from idle cloud nodes to other processes by
deploying backfill virtual machines (VMs). For demonstration and experimental evaluation, we extend the Nimbus cloud
computing toolkit to deploy backfill VMs on idle cloud nodes for processing an HTC workload. Initial tests show an increase
in IaaS cloud utilization from 37.5% to 100% during a portion of the evaluation trace but only 6.39% overhead cost for
processing the HTC workload. We demonstrate that a shared infrastructure between IaaS cloud providers and an HTC job
management system can be highly beneficial to both the IaaS cloud provider and HTC users by increasing the utilization of
the cloud infrastructure (thereby decreasing the overall cost) and contributing cycles that would otherwise be idle to
processing HTC jobs.
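The utilization arithmetic behind backfill VMs is simple; the numbers below are illustrative (the paper reports 37.5% rising to 100% on its own evaluation trace):

```python
# Utilization with and without backfill VMs: every idle cloud node runs an
# opportunistic HTC backfill VM that is terminated when the node is needed.
total_nodes = 16
on_demand_busy = 6                 # nodes serving normal IaaS requests
idle = total_nodes - on_demand_busy

util_without = on_demand_busy / total_nodes
backfill_vms = idle                # one backfill VM per idle node
util_with = (on_demand_busy + backfill_vms) / total_nodes
```

The trade-off is that backfill work must tolerate termination at any time, which is exactly the assumption HTC systems such as Condor already make.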
35 Inferring Network Topologies in Infrastructure as a Service Cloud
Infrastructure as a Service (IaaS) clouds are gaining increasing popularity as a platform for distributed computations. The
virtualization layers of those clouds offer new possibilities for rapid resource provisioning, but also hide aspects of the
underlying IT infrastructure which have often been exploited in classic cluster environments. One of those hidden aspects is
the network topology, i.e. the way the rented virtual machines are physically interconnected inside the cloud. We propose an
approach to infer the network topology connecting a set of virtual machines in IaaS clouds and exploit it for data-intensive
distributed applications. Our inference approach relies on delay-based end-to-end measurements and can be combined with
traditional IP-level topology information, if available. We evaluate the inference accuracy using the popular hypervisors KVM
as well as XEN and highlight possible performance gains for distributed applications.
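A much-simplified version of the inference step groups VMs whose pairwise round-trip delays fall below a threshold, on the assumption that such pairs share a host or switch. The measured values below are hypothetical; the paper's actual delay-based inference is more elaborate.

```python
# Group VMs by pairwise RTT: pairs below the threshold are assumed to be
# "close" in the physical topology (same host or top-of-rack switch).
def delay(rtt, a, b):
    return rtt.get((a, b), rtt.get((b, a)))

def infer_groups(rtt, threshold):
    groups = []
    vms = sorted({a for a, _ in rtt} | {b for _, b in rtt})
    for vm in vms:
        for g in groups:
            if all(delay(rtt, vm, m) <= threshold for m in g):
                g.append(vm)
                break
        else:
            groups.append([vm])
    return groups

# Hypothetical measured RTTs in ms: A and B are close, C is far from both.
rtt = {("A", "B"): 0.2, ("A", "C"): 1.5, ("B", "C"): 1.4}
groups = infer_groups(rtt, threshold=0.5)
```

A data-intensive application could then place communication-heavy task pairs inside the same inferred group.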
36 Directed Differential Connectivity Graph of Interictal Epileptiform Discharges
In this paper, we study temporal couplings between interictal events of spatially remote regions in order to localize the
leading epileptic regions from intracerebral EEG (iEEG). We aim to assess whether quantitative epileptic graph analysis
during interictal period may be helpful to predict the seizure onset zone of ictal iEEG. Using wavelet transform, cross-
correlation coefficient, and multiple hypothesis test, we propose a differential connectivity graph (DCG) to represent the
connections that change significantly between epileptic and nonepileptic states as defined by the interictal events.
Postprocessings based on mutual information and multiobjective optimization are proposed to localize the leading epileptic
regions through DCG. The suggested approach is applied on iEEG recordings of five patients suffering from focal epilepsy.
Quantitative comparisons of the proposed epileptic regions within ictal onset zones detected by visual inspection and using
electrically stimulated seizures, reveal good performance of the present method.
37 Managing distributed files with RNS in heterogeneous Data Grids
This paper describes the management of files distributed in heterogeneous Data Grids by using RNS (Resource Namespace
Service). RNS provides hierarchical namespace management for name-to-resource mapping as a key technology to use Grid
resources for different kinds of middleware. RNS directory entries and junction entries can contain their own XML messages
as metadata. We define attribute expressions in XML for the RNS entries and give an algorithm to access distributed files
stored within different kinds of Data Grids. The example in this paper shows how our Grid application can retrieve the actual
locations of files from the RNS server. An application can also access the distributed files as though they were files in the
Madurai Trichy Kollam
Elysium Technologies Private Limited Elysium Technologies Private Limited Elysium Technologies Private Limited
230, Church Road, Annanagar, 3rd Floor,SI Towers, Surya Complex,Vendor junction,
Madurai , Tamilnadu – 625 020. 15 ,Melapudur , Trichy, kollam,Kerala – 691 010.
Contact : 91452 4390702, 4392702, 4394702. Tamilnadu – 620 001. Contact : 91474 2723622.
eMail: info@elysiumtechnologies.com Contact : 91431 - 4002234. eMail: elysium.kollam@gmail.com
eMail: elysium.trichy@gmail.com
11
12. Elysium Technologies Private Limited
ISO 9001:2008 A leading Research and Development Division
Madurai | Chennai | Trichy | Coimbatore | Kollam| Singapore
Website: elysiumtechnologies.com, elysiumtechnologies.info
Email: info@elysiumtechnologies.com
IEEE Final Year Project List 2011-2012
local file system without worrying about the underlying Data Grids. This approach can be used in a Grid computing system
to handle distributed Grid resources.
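A toy model of the name-to-resource mapping described above, assuming a simple nested-dictionary tree in place of a real RNS server; the class and method names are hypothetical:

```python
class RNSNamespace:
    """Toy hierarchical namespace: directory entries map names to
    sub-directories or to resource endpoints (junction entries)."""

    def __init__(self):
        self.root = {}

    def register(self, path, endpoint):
        """Create directories along the path and store the endpoint
        as a junction entry at the leaf."""
        parts = path.strip("/").split("/")
        node = self.root
        for p in parts[:-1]:
            node = node.setdefault(p, {})
        node[parts[-1]] = endpoint

    def resolve(self, path):
        """Walk the hierarchy and return whatever is stored at path."""
        node = self.root
        for p in path.strip("/").split("/"):
            node = node[p]
        return node
```

An application would resolve a logical path once and then access the returned physical location, which is the behavior the abstract describes for files spread across different Data Grids.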
38 Multi-Cloud Deployment of Computing Clusters for Loosely-Coupled MTC Applications
Cloud computing is gaining acceptance in many IT organizations, as an elastic, flexible and variable-cost way to deploy their
service platforms using outsourced resources. Unlike traditional utilities where a single provider scheme is a common
practice, the ubiquitous access to cloud resources easily enables the simultaneous use of different clouds. In this paper we
explore this scenario to deploy a computing cluster on top of a multi-cloud infrastructure, for solving loosely-coupled Many-
Task Computing (MTC) applications. In this way, the cluster nodes can be provisioned with resources from different clouds
to improve the cost-effectiveness of the deployment, or to implement high-availability strategies. We prove the viability of
this kind of solution by evaluating the scalability, performance, and cost of different configurations of a Sun Grid Engine
cluster, deployed on a multi-cloud infrastructure spanning a local data-center and three different cloud sites: Amazon EC2
Europe, Amazon EC2 USA, and ElasticHosts. Although the testbed deployed in this work is limited to a reduced number of
computing resources (due to hardware and budget limitations), we have complemented our analysis with a simulated
infrastructure model, which includes a larger number of resources, and runs larger problem sizes. Data obtained by
simulation show that performance and cost results can be extrapolated to large scale problems and cluster infrastructures.
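The cost-effectiveness argument can be illustrated with a greedy provisioning sketch: fill the required cluster throughput from the providers with the lowest price per unit of throughput first. All provider figures and field names here are made up for illustration, not taken from the paper's testbed:

```python
import math

def provision(providers, required):
    """Greedy sketch: satisfy a required throughput (tasks/hour)
    from the cheapest providers first, ranked by price per node-hour
    divided by tasks per node-hour. Returns (plan, hourly cost)."""
    plan, total_cost = {}, 0.0
    for p in sorted(providers, key=lambda p: p["price"] / p["tput"]):
        if required <= 0:
            break
        # take as many nodes as needed, capped by provider capacity
        nodes = min(p["max_nodes"], math.ceil(required / p["tput"]))
        plan[p["name"]] = nodes
        total_cost += nodes * p["price"]
        required -= nodes * p["tput"]
    return plan, total_cost
```

If the cheapest provider runs out of capacity, the sketch spills over to the next one, which mirrors the multi-cloud spill-over the abstract motivates.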
39 Dynamic Brain Phantom for Intracranial Volume Measurements
Knowledge of intracranial ventricular volume is important for the treatment of hydrocephalus, a disease in which
cerebrospinal fluid (CSF) accumulates in the brain. Current monitoring options involve MRI or pressure monitors (InSite,
Medtronic). However, there are no existing methods for continuous cerebral ventricle volume measurements. In order to test
a novel impedance sensor for direct ventricular volume measurements, we present a model that emulates the expansion of
the lateral ventricles seen in hydrocephalus. To quantify the ventricular volume, sensor prototypes were fabricated and
tested with this experimental model. Fluid was injected and withdrawn cyclically in a controlled manner, and volume
measurements were tracked over 8 h. Pressure measurements were also comparable to conditions seen clinically. The
results from the bench-top model served to calibrate the sensor for preliminary animal experiments. A hydrocephalic rat
model was used to validate a scaled-down, microfabricated prototype sensor. CSF was removed from the enlarged ventricles
and a dynamic volume decrease was properly recorded. This method of testing new designs on brain phantoms prior to
animal experimentation accelerates medical device design by determining sensor specifications and optimization in a
rational process.
40 Multiple Services Throughput Optimization in a Hierarchical Middleware
Accessing the power of distributed resources can nowadays easily be done using middleware based on a client/server
approach. Several architectures exist for such middlewares. The most scalable ones rely on a hierarchical design.
Determining the best shape for the hierarchy, the one giving the best throughput of services, is not an easy task. We first
propose a computation and communication model for such hierarchical middleware. Our model takes into account the
deployment of several services in the hierarchy. Then, based on this model, we propose algorithms for automatically
constructing a hierarchy on two kinds of heterogeneous platforms: communication homogeneous/computation
heterogeneous platforms, and fully heterogeneous platforms. The proposed algorithms aim at offering users the best
obtained-to-requested throughput ratio, while providing fairness in this ratio across the different kinds of services and using
as few resources as possible for the hierarchy. For each kind of platform, we compare our model with experimental results on
a real middleware called DIET (Distributed Interactive Engineering Toolbox).
41 Network-Friendly One-Sided Communication Through Multinode Cooperation on Petascale Cray XT5 Systems
One-sided communication is important to enable asynchronous communication and data movement for Global Address
Space (GAS) programming models. Such communication is typically realized through direct messages between initiator and
target processes. For petascale systems with 10,000s of nodes and 100,000s of cores, these direct messages require
dedicated communication buffers and/or channels, which can lead to significant scalability challenges for GAS programming
models. In this paper, we describe a network-friendly communication model, multinode cooperation, to enable indirect
one-sided communication. Compute nodes work together to handle one-sided requests through (1) request forwarding, in which
one node can intercept a request and forward it to a target node, and (2) request aggregation in which one node can
aggregate many requests to a target node. We have implemented multinode cooperation for a popular GAS runtime library,
Aggregate Remote Memory Copy Interface (ARMCI). Our experimental results on a large-scale Cray XT5 system demonstrate
that multinode cooperation is able to greatly increase memory scalability by reducing communication buffers required on
each node. In addition, multinode cooperation improves the resiliency of GAS runtime system to network contention.
Furthermore, multinode cooperation can benefit the performance of scientific applications. In one case, it reduces the total
execution time of an NWChem application by 52%.
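Request aggregation, the second cooperation mechanism, can be sketched in a few lines: requests bound for the same target are merged into one message. The data representation is hypothetical; ARMCI itself works on raw memory regions, not Python objects:

```python
from collections import defaultdict

def aggregate_requests(requests):
    """Merge one-sided put requests headed for the same target node
    into a single combined message per target, shrinking the number
    of buffers/channels needed. A request is (target_node, payload)."""
    batches = defaultdict(list)
    for target, payload in requests:
        batches[target].append(payload)
    # one message per target instead of one per request
    return {t: b"".join(p) for t, p in batches.items()}
```

The memory saving the abstract reports comes from exactly this reduction: buffer count scales with the number of targets rather than the number of outstanding requests.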
42 Non-Cooperative Scheduling Considered Harmful in Collaborative Volunteer Computing Environments
Advances in inter-networking technology and computing components have enabled Volunteer Computing (VC) systems that
allow volunteers to donate their computers’ idle CPU cycles to a given project. BOINC is the most popular VC infrastructure
today, with over 580,000 hosts that deliver over 2,300 TeraFLOP per day. BOINC projects usually have hundreds of thousands
of independent tasks and are interested in overall throughput. Each project has its own server which is responsible for
distributing work units to clients, recovering results and validating them. The BOINC scheduling algorithms are complex and
have been used for many years now. Their efficiency and fairness have been assessed in the context of throughput oriented
projects. Yet, recently, burst projects, with fewer tasks and interested in response time, have emerged. Many works have
proposed new scheduling algorithms to optimize individual response time, but their use may be problematic in the presence of
other projects. In this article we show that the commonly used BOINC scheduling algorithms are unable to enforce fairness
and project isolation. Burst projects may dramatically impact the performance of all other projects (burst or non-burst). To
study such interactions, we perform a detailed, multi-player and multi-objective game theoretic study. Our analysis and
experiments provide a good understanding of the impact of the different scheduling parameters, and show that
non-cooperative optimization may result in an inefficient and unfair share of the resources.
43 Finite-Element-Based Discretization and Regularization Strategies for 3-D Inverse Electrocardiography
We consider the inverse electrocardiographic problem of computing epicardial potentials from a body-surface potential map.
We study how to improve numerical approximation of the inverse problem when the finite-element method is used. Being ill-
posed, the inverse problem requires different discretization strategies from its corresponding forward problem. We propose
refinement guidelines that specifically address the ill-posedness of the problem. The resulting guidelines necessitate the use
of hybrid finite elements composed of tetrahedra and prism elements. Also, in order to maintain consistent numerical quality
when the inverse problem is discretized into different scales, we propose a new family of regularizers using the variational
principle underlying finite-element methods. These variational-formed regularizers serve as an alternative to the traditional
Tikhonov regularizers, but preserve the L2 norm and thereby achieve consistent regularization in multiscale simulations.
The variational formulation also enables a simple construction of the discrete gradient operator over irregular meshes, which
is difficult to define in traditional discretization schemes. We validated our hybrid element technique and the variational
regularizers by simulations on a realistic 3-D torso/heart model with empirical heart data. Results show that discretization
based on our proposed strategies mitigates the ill-conditioning and improves the inverse solution, and that the variational
formulation may benefit a broader range of potential-based bioelectric problems.
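For contrast, the traditional zeroth-order Tikhonov baseline that the variational regularizers are positioned against can be written down directly. This is the textbook formulation, not the paper's variational construction:

```python
import numpy as np

def tikhonov_solve(A, b, lam):
    """Zeroth-order Tikhonov solution of the ill-posed system
    A x = b: minimize ||A x - b||^2 + lam * ||x||^2, solved via
    the normal equations (A^T A + lam I) x = A^T b."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)
```

Increasing the regularization weight shrinks the solution norm, which is the mechanism by which the ill-conditioning is tamed at the cost of bias.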
44 On the Performance Variability of Production Cloud Services
Cloud computing is an emerging infrastructure paradigm that promises to eliminate the need for companies to maintain
expensive computing hardware. Through the use of virtualization and resource time-sharing, clouds address with a single
set of physical resources a large user base with diverse needs. Thus, clouds have the potential to provide their owners the
benefits of an economy of scale and, at the same time, become an alternative for both the industry and the scientific
community to self-owned clusters, grids, and parallel production environments. For this potential to become reality, the first
generation of commercial clouds needs to be proven dependable. In this work, we analyze the dependability of cloud
services. Towards this end, we analyze long-term performance traces from Amazon Web Services and Google App Engine,
currently two of the largest commercial clouds in production. We find that the performance of about half of the cloud
services we investigate exhibits yearly and daily patterns, but also that most services have periods of especially stable
performance. Last, through trace-based simulation we assess the impact of the variability observed for the studied cloud
services on three large-scale applications: job execution in scientific computing, virtual goods trading in social networks,
and state management in social gaming. We show that the impact of performance variability depends on the application, and
give evidence that performance variability can be an important factor in cloud provider selection.
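Detecting a daily pattern in such a trace can be as simple as averaging by hour of day. This sketch runs on synthetic data, not the AWS or Google App Engine traces analyzed in the paper:

```python
import numpy as np

def hourly_profile(trace):
    """Average a performance trace (one sample per hour) by
    hour-of-day to expose a daily pattern."""
    trace = np.asarray(trace, dtype=float)
    hours = np.arange(len(trace)) % 24
    return np.array([trace[hours == h].mean() for h in range(24)])
```

A flat profile suggests no daily pattern; a profile with pronounced peaks and troughs is the kind of time-correlated variability the abstract reports for about half of the studied services.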
45 On the Relation Between Congestion Control, Switch Arbitration and Fairness
In lossless interconnection networks such as InfiniBand, congestion control (CC) can be an effective mechanism to achieve
high performance and good utilization of network resources. The InfiniBand standard describes CC functionality for
detecting and resolving congestion, but the design decisions on how to implement this functionality are left to the hardware
designer. One must be cautious when making these design decisions not to introduce fairness problems, as our study
shows. In this paper we study the relationship between congestion control, switch arbitration, and fairness. Specifically, we
look at fairness among different traffic flows arriving at a hot spot switch on different input ports, as CC is turned on. In
addition we study the fairness among traffic flows at a switch where some flows are exclusive users of their input ports
while other flows are sharing an input port (the parking lot problem). Our results show that the implementation of congestion
control in a switch is vulnerable to unfairness if care is not taken. In detail, we found that a threshold hysteresis of more than
one MTU is needed to resolve arbitration unfairness. Furthermore, to fully solve the parking lot problem, proper
configuration of the CC parameters is required.
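The parking lot problem reduces to simple arithmetic under port-level round-robin arbitration: each input port gets an equal share of the hot output link, so flows sharing a port split that share. This sketch models only that idealized arbiter, not an InfiniBand switch:

```python
def parking_lot_shares(flows_per_port):
    """Per-flow throughput fractions of a saturated output link when
    the arbiter gives each input *port* an equal share and flows on
    a shared port split that port's share evenly."""
    ports = len(flows_per_port)
    return [1.0 / ports / n for n in flows_per_port for _ in range(n)]
```

With two exclusive ports and one port carrying two flows, the shared flows each get half of what an exclusive flow gets, which is the unfairness the abstract studies.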
46 On the Scheduling of Checkpoints in Desktop Grids
Frequent resource failures are a major challenge for the rapid completion of batch jobs. Checkpointing and migration is
one approach to accelerating job completion while avoiding deadlock. We study the problem of scheduling checkpoints of
sequential jobs in the context of Desktop Grids, consisting of volunteered distributed resources. We craft a checkpoint
scheduling algorithm that is provably optimal for discrete time when failures obey any general probability distribution. We
show using simulations with parameters based on real-world systems that this optimal strategy scales and outperforms
other strategies significantly in terms of checkpointing costs and batch completion times.
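As a point of comparison, Young's classic first-order approximation of the checkpoint interval (for a given checkpoint cost and mean time between failures) fits in one line; the paper's contribution is a provably optimal discrete-time schedule for general failure distributions, which this simple formula is not:

```python
import math

def young_interval(checkpoint_cost, mtbf):
    """Young's first-order approximation of the optimal interval
    between checkpoints: sqrt(2 * C * MTBF), with both arguments
    in the same time unit."""
    return math.sqrt(2.0 * checkpoint_cost * mtbf)
```

For example, a 30-second checkpoint on a resource failing about once an hour suggests checkpointing roughly every eight minutes under this approximation.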
47 Parameter Exploration in Science and Engineering Using Many-Task Computing
Robust scientific methods require the exploration of the parameter space of a system (some of which can be run in parallel
on distributed resources), and may involve complete state space exploration, experimental design, or numerical optimization
techniques. Many-Task Computing (MTC) provides a framework for performing robust design, because it supports the
execution of a large number of otherwise independent processes. Further, scientific workflow engines facilitate the
specification and execution of complex software pipelines, such as those found in real science and engineering design
problems. However, most existing workflow engines do not support a wide range of experimentation techniques, nor do they
support a large number of independent tasks. In this paper, we discuss Nimrod/K, a set of add-in components and a new
run-time machine for a general workflow engine, Kepler. Nimrod/K provides an execution architecture based on the
tagged-dataflow concepts developed in the 1980s for highly parallel machines. This is embodied in a new Kepler “Director” that
supports many-task computing by orchestrating the execution of tasks on clusters, Grids, and Clouds. Further, Nimrod/K
provides a set of “Actors” that facilitate the various modes of parameter exploration discussed above. We demonstrate the
power of Nimrod/K to solve real problems in cardiac science.
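The full-factorial sweep that such “Actors” would orchestrate can be sketched as an ordinary Cartesian product of parameter values, with each point an independent task. It runs sequentially here; the point of MTC is that these independent calls could instead be dispatched to remote resources:

```python
from itertools import product

def sweep(model, param_space):
    """Full-factorial parameter sweep: evaluate the model at every
    combination of parameter values. param_space maps parameter
    names to lists of values; keys of the result are value tuples
    in sorted-parameter-name order."""
    names = sorted(param_space)
    return {
        pt: model(dict(zip(names, pt)))
        for pt in product(*(param_space[n] for n in names))
    }
```

Because no point depends on any other, the dictionary comprehension could be replaced one-for-one by task submissions to a cluster, Grid, or Cloud scheduler.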
48 Performance Analysis of Cloud Computing Services for Many-Tasks Scientific Computing
Cloud computing is an emerging commercial infrastructure paradigm that promises to eliminate the need for maintaining
expensive computing facilities by companies and institutes alike. Through the use of virtualization and resource time
sharing, clouds serve with a single set of physical resources a large user base with different needs. Thus, clouds have the
potential to provide to their owners the benefits of an economy of scale and, at the same time, become an alternative for
scientists to clusters, grids, and parallel production environments. However, the current commercial clouds have been built
to support web and small database workloads, which are very different from typical scientific computing workloads.
Moreover, the use of virtualization and resource time sharing may introduce significant performance penalties for the
demanding scientific computing workloads. In this work, we analyze the performance of cloud computing services for
scientific computing workloads. We quantify the presence in real scientific computing workloads of Many-Task Computing
(MTC) users, that is, of users who employ loosely coupled applications comprising many tasks to achieve their scientific
goals. Then, we perform an empirical evaluation of the performance of four commercial cloud computing services including
Amazon EC2, which is currently the largest commercial cloud. Last, we compare through trace-based simulation the
performance characteristics and cost models of clouds and other scientific computing platforms, for general and MTC-based
scientific computing workloads. Our results indicate that the current clouds need an order of magnitude in performance
improvement to be useful to the scientific community, and show which improvements should be considered first to address
this discrepancy between offer and demand.
49 Predictive Data Grouping and Placement for Cloud-based Elastic Server Infrastructures
Workload variations on Internet platforms such as YouTube, Flickr, and LastFM require novel approaches to dynamic resource
provisioning in order to meet QoS requirements, while reducing the Total Cost of Ownership (TCO) of the infrastructures.
The economy of scale promise of cloud computing is a great opportunity to approach this problem, by developing elastic
large scale server infrastructures. However, a proactive approach to dynamic resource provisioning requires prediction
models forecasting future load patterns. On the other hand, unexpected volume and data spikes require reactive
provisioning for serving unexpected surges in workloads. When workload cannot be predicted, adequate data grouping and
placement algorithms may facilitate agile scaling up and down of an infrastructure. In this paper, we analyze a dynamic
workload of an on-line music portal and present an elastic Web infrastructure that adapts to workload variations by
dynamically scaling up and down servers. The workload is predicted by an autoregressive model capturing trends and
seasonal patterns. Further, for enhancing data locality, we propose a predictive data grouping based on the history of
content access of a user community. Finally, in order to facilitate agile elasticity, we present a data placement based on
workload and access pattern prediction. The experimental results demonstrate that our forecasting model predicts workload
with high precision. Further, the predictive data grouping and placement methods provide high locality, load balance, and
high utilization of resources, allowing a server infrastructure to scale up and down depending on workload.
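A bare-bones autoregressive predictor, fit by least squares, illustrates the forecasting idea; the paper's model additionally captures trends and seasonal patterns, which this sketch does not:

```python
import numpy as np

def ar_predict(series, order):
    """Fit an AR(order) model y[t] = sum_k a_k * y[t-1-k] by least
    squares on the observed series and predict the next value."""
    y = np.asarray(series, dtype=float)
    # lagged design matrix: column k holds y shifted by k+1 steps
    X = np.column_stack(
        [y[order - k - 1:len(y) - k - 1] for k in range(order)]
    )
    coeffs, *_ = np.linalg.lstsq(X, y[order:], rcond=None)
    # apply the fitted coefficients to the most recent observations
    return float(np.dot(coeffs, y[-1:-order - 1:-1]))
```

The forecast would feed the proactive provisioning loop: predicted load above capacity triggers scale-up, predicted slack triggers scale-down.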