Cloud computing has come to the forefront because it overcomes some long-standing
computing constraints such as storage space and processing power. It enables ubiquitous
access to and processing of information without the need for extensive local computing
facilities. In this work, we outline some of the issues in aggregating cloud services,
discuss the discovery of future cloud service requests, develop a repository of such
requests, and propose an agent-based Quality of Service (QoS) provisioning system for
cloud clients.
A cloud broker approach with QoS attendance and SOA for hybrid cloud computin... (csandit)
Cloud computing is an industry whose demand has grown continuously since its emergence as
a solution that offers different types of computing resources as a service over the
Internet. The number of cloud providers is growing rapidly, so the end user now faces many
pricing options and distinct features and performance levels for the same required
service. This work belongs to the cloud task-scheduling research field, targeting hybrid
cloud environments with service-oriented architecture (SOA), dynamic allocation and
control of services, and attendance to QoS requirements. Therefore, the QBroker
Architecture is proposed: a cloud broker with trading features that implements the
intermediation services defined by the NIST Cloud Computing Reference Model. An
experimental design was created to demonstrate compliance with the QoS requirement of
maximum task execution time, the differentiation of services, and the dynamic allocation
of services. Experimental results obtained by simulation with CloudSim show that QBroker
meets the requirements for providing QoS improvement in SOA-based hybrid cloud
environments.
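As a rough illustration of the intermediation step such a broker performs, the sketch
below picks, for each task, the cheapest service instance whose estimated execution time
satisfies a maximum-execution-time QoS bound. This is not the QBroker algorithm itself;
the catalog, MIPS ratings and prices are invented for the example.

```python
# Hypothetical broker step: cheapest instance that still meets the QoS bound.
# All instance names and numbers below are illustrative, not from the paper.

def pick_instance(task_len_mi, instances, max_time_s):
    """Return (name, time, cost) of the cheapest instance that finishes
    a task of task_len_mi million instructions within max_time_s,
    or None if the QoS requirement cannot be met."""
    feasible = []
    for inst in instances:
        exec_time = task_len_mi / inst["mips"]        # estimated runtime
        if exec_time <= max_time_s:
            feasible.append((exec_time * inst["cost_per_s"], inst, exec_time))
    if not feasible:
        return None                                    # reject: no instance meets QoS
    cost, inst, exec_time = min(feasible, key=lambda t: t[0])
    return inst["name"], round(exec_time, 2), round(cost, 4)

catalog = [
    {"name": "private-small", "mips": 1000, "cost_per_s": 0.0},   # private cloud, free
    {"name": "public-medium", "mips": 2000, "cost_per_s": 0.02},  # public cloud
    {"name": "public-large",  "mips": 4000, "cost_per_s": 0.05},
]
print(pick_instance(50_000, catalog, max_time_s=30.0))
# -> ('public-medium', 25.0, 0.5): the private instance is too slow (50 s),
#    and the large one finishes faster but costs more.
```

A real broker would refresh the catalog from provider SLAs and fall back to the public
cloud only when the private cloud cannot meet the deadline, which is what the cost
ordering above encodes.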
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
The concept of a genetic algorithm is particularly useful in load balancing for the best
distribution of virtual machines across servers. In this paper, we focus on load balancing
and on efficient use of resources to reduce energy consumption without degrading cloud
performance. Cloud computing is an on-demand service in which shared resources,
information, software and other facilities are provided according to the client's
requirements at a specific time. It is a term generally used in connection with the
Internet; the whole Internet can be viewed as a cloud. Capital and operational costs can
be cut using cloud computing. Cloud computing is defined as a large-scale distributed
computing paradigm, driven by economies of scale, in which a pool of abstracted,
virtualized, dynamically scalable, managed computing power, storage, platforms and
services is delivered on demand to external customers over the Internet. Cloud computing
is a recent field in computational intelligence which aims at surmounting computational
complexity and provides services dynamically using very large, scalable, virtualized
resources over the Internet. It can also be described as a distributed system containing a
collection of computing and communication resources located in distributed data centers
and shared by many end users. It has been widely adopted by industry, though many open
issues remain, such as load balancing, virtual machine migration, server consolidation and
energy management.
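To make the genetic-algorithm idea concrete, here is a minimal sketch (not taken from the
paper) that evolves an assignment of VMs to servers so that the most-loaded server is as
light as possible. The VM loads, server count and GA parameters are all invented.

```python
import random

# Chromosome: one server index per VM. Fitness: load of the busiest server
# (lower is better, i.e. a more balanced placement).
VM_LOADS = [8, 7, 6, 5, 4, 3, 2, 2]   # arbitrary CPU demands, total = 37
N_SERVERS = 3

def fitness(assign):
    loads = [0] * N_SERVERS
    for vm, srv in enumerate(assign):
        loads[srv] += VM_LOADS[vm]
    return max(loads)

def evolve(pop_size=30, generations=200, seed=1):
    rng = random.Random(seed)
    pop = [[rng.randrange(N_SERVERS) for _ in VM_LOADS] for _ in range(pop_size)]
    for _ in range(generations):
        # tournament selection: best of three random individuals
        parents = [min(rng.sample(pop, 3), key=fitness) for _ in range(pop_size)]
        pop = []
        for i in range(0, pop_size, 2):
            a, b = parents[i], parents[i + 1]
            cut = rng.randrange(1, len(VM_LOADS))      # one-point crossover
            for child in (a[:cut] + b[cut:], b[:cut] + a[cut:]):
                if rng.random() < 0.2:                 # point mutation
                    child[rng.randrange(len(child))] = rng.randrange(N_SERVERS)
                pop.append(child)
    best = min(pop, key=fitness)
    return best, fitness(best)

best, load = evolve()
# The theoretical lower bound here is ceil(37 / 3) = 13.
print("best max server load:", load)
```

A production version would add energy terms to the fitness (e.g. penalising the number of
powered-on servers), which is where the paper's energy-saving objective would enter.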
An efficient resource sharing technique for multi-tenant databases (IJECEIAES)
Multi-tenancy is a key component of the Software as a Service (SaaS) paradigm. Multi-tenant software has gained a lot of attention in academia, research and business. It provides scalability and economic benefits for both cloud service providers and tenants by sharing the same resources and infrastructure, in isolation, across shared databases, network and computing resources, with Service Level Agreement (SLA) compliance. In a multi-tenant scenario, active tenants compete for resources in order to access the database. If one tenant blocks the resources, the performance of all the other tenants may be restricted and fair sharing of the resources may be compromised. The performance of tenants must not be affected by the resource-intensive activities and volatile workloads of other tenants. Moreover, the prime goal of providers is to achieve a low cost of operation while satisfying the specific schemas/SLAs of each tenant. Consequently, there is a need to design and develop effective and dynamic resource-sharing algorithms which can handle the above-mentioned issues. This work presents a model referred to as the Multi-Tenant Dynamic Resource Scheduling Model (MTDRSM), embracing a query classification and worker sorting technique that enables efficient and dynamic resource sharing among tenants. Experiments show significant performance improvement over the existing model.
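As a hedged illustration of the query-classification idea (not the actual MTDRSM
algorithm), the sketch below labels queries as short or long by estimated cost, serves
short queries first within each tenant, and round-robins across tenants so that one
tenant's backlog cannot starve the others. All costs and the threshold are made up.

```python
from collections import deque

def classify(query_cost, threshold=10):
    """Label a query by its estimated execution cost."""
    return "short" if query_cost <= threshold else "long"

def schedule(tenant_queues, threshold=10):
    """tenant_queues: dict tenant -> iterable of query costs.
    Returns the service order as a list of (tenant, cost) pairs."""
    # within each tenant, short-class queries go ahead of long ones
    queues = {
        t: deque(sorted(q, key=lambda c: classify(c, threshold) == "long"))
        for t, q in tenant_queues.items()
    }
    order = []
    while any(queues.values()):
        for tenant in queues:              # round-robin across tenants
            if queues[tenant]:
                order.append((tenant, queues[tenant].popleft()))
    return order

print(schedule({"A": [5, 50, 5], "B": [7], "C": [20, 3]}))
# -> [('A', 5), ('B', 7), ('C', 3), ('A', 5), ('C', 20), ('A', 50)]
# Tenant A's expensive 50-cost query runs last; B and C are not blocked by it.
```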
International Journal of Engineering Research and Development (IJERD Editor)
Electrical, Electronics and Computer Engineering,
Information Engineering and Technology,
Mechanical, Industrial and Manufacturing Engineering,
Automation and Mechatronics Engineering,
Material and Chemical Engineering,
Civil and Architecture Engineering,
Biotechnology and Bio Engineering,
Environmental Engineering,
Petroleum and Mining Engineering,
Marine and Agriculture engineering,
Aerospace Engineering.
Cloud computing has become mainstream among the emerging technologies for information interchange and accessibility. With such systems, information can be accessed from any geographic location on the planet over a reasonable Internet connection. Applying machine learning together with artificial intelligence to the problem of energy reduction in cloud data centers is an innovative idea, and artificial intelligence already plays a significant role in the cloud environment. Large providers such as Amazon have taken steps to ensure that they can continue to expand their fast-growing cloud services to keep pace with the fast growth of demand. These companies have built large data centers in remote parts of the world, and these centers consume significant amounts of electrical energy, much of which is wasted. According to an IDC white paper, data centers waste billions of dollars' worth of energy, and researchers have argued that by the year 2020 the energy consumption rate will have doubled. Research in this area therefore remains a hot topic. This paper seeks to address the energy-efficiency issue at a cloud data center using machine learning methodologies, principles, and practices. It also aims to set out possible future implementation methods for artificially intelligent agents that would help reduce energy wastage at a cloud data center and thus help ameliorate the energy problem at hand.
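One minimal example of a machine-learning step toward this goal (a sketch over invented
measurements, not a method from the article) is to fit power draw as a linear function of
CPU utilisation and use the fitted idle power to flag hosts that are mostly wasting
energy and are thus candidates for consolidation.

```python
# Ordinary least squares for y = a*x + b, written out by hand.
def fit_line(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

# (utilisation %, watts) samples from a hypothetical host
util  = [10, 30, 50, 70, 90]
watts = [120, 160, 200, 240, 280]
a, b = fit_line(util, watts)
print(a, b)   # slope in W per % utilisation, intercept = idle power in W

def idle_fraction(u):
    """Fraction of a host's predicted power that is idle overhead at utilisation u."""
    return b / (a * u + b)

# A host running at 10% utilisation spends most of its power doing nothing,
# so migrating its VMs away and powering it down would save energy.
print(round(idle_fraction(10), 2))
```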
An Efficient Cloud Scheduling Algorithm for the Conservation of Energy throug... (IJECEIAES)
Broadcasting is a well-known operation used to support different computing protocols in cloud computing. Attaining energy efficiency is one of the prominent challenges in the scheduling process used in cloud computing, as there are fixed limits that must be met by the system. In this paper we focus on the cloud server maintenance and scheduling process, using an interactive broadcasting energy-efficient computing technique together with the cloud computing server. The remote host machines used for cloud services dissipate more power and consequently consume more and more energy, and power consumption is one of the main factors in determining the cost of computing resources. We therefore apply an avoidance technique that assigns data-center resources dynamically, depending on application demands, and supports cloud computing by optimizing the number of servers in use.
Efficient and reliable hybrid cloud architecture for big database (ijccsa)
The objective of our paper is to propose a cloud computing framework which is feasible and
necessary for handling huge data. In our prototype system we considered the national ID
database structure of Bangladesh, which is prepared by the Election Commission of
Bangladesh. Using this database, we propose an interactive graphical user interface for
Bangladeshi People Search (BDPS) that uses a hybrid cloud structure managed by Apache
Hadoop, with the database implemented in HiveQL. The infrastructure is divided into two
parts: a locally hosted cloud based on Eucalyptus and a remote cloud implemented on the
well-known Amazon Web Services (AWS). Some problems common in the Bangladeshi context,
including data traffic congestion, server timeouts and server downtime, are also
discussed.
The swiftly increasing demand for computational power in business processes, file
transfer under various protocols, and data centers has forced the development of an
emerging technology that caters to computational needs with highly manageable and secure
storage. To fulfill these technological desires, cloud computing is the best answer,
introducing various kinds of service platforms in a high-computation environment. Cloud
computing is the most recent paradigm promising to turn the vision of "computing
utilities" into reality. The term "cloud computing" is relatively new, and there is no
universal agreement on its definition. In this paper, we go through different areas of
research and novelty in the cloud computing domain and its usefulness in the field of
management. Even though cloud computing provides many distinguished features, it still
has certain shortcomings, along with comparatively high cost for both private and public
clouds. It is a way of congregating masses of information and resources stored in
personal computers and other gadgets and putting them on the public cloud to serve users.
Resource management in a cloud environment is a hard problem, due to the scale of modern
data centers, their interdependencies, and the range of objectives of the different
actors in a cloud ecosystem. Cloud computing is turning out to be one of the most
explosively expanding technologies in the computing industry in this era. It allows users
to transfer their data and computation to a remote location with minimal impact on system
performance. With the evolution of virtualization technology, cloud computing has emerged
as a systematically and strategically distributed platform. The idea of cloud computing
has not only revived the field of distributed systems but also fundamentally changed how
business uses computing today. Resource management in cloud computing is indeed a hard
problem, owing to the scale of modern data centers, the variety of resource types and
their interdependencies, the unpredictability of load, and the range of objectives of the
different actors in a cloud ecosystem.
DESIGNING ASPECT AND FUNCTIONALITY ISSUES OF CLOUD BROKERING SERVICE IN CLOUD... (Souvik Pal)
A cloud brokering service is an intermediary service which enables the producer-consumer
business model, providing easy access to cloud services from Cloud Service Providers
(CSPs). The cloud broker provides a platform where it collects information from the user,
analyzes the data, and sends it to the CSPs. The cloud broker also provides data
integration services and models the data across all the components or units of the cloud
services. This paper deals with the design criteria and issues of a cloud broker, the
system activity of the broker, and the sequence diagram of the system design, with an
implementation procedure.
ADVANCES IN HIGHER EDUCATIONAL RESOURCE SHARING AND CLOUD SERVICES FOR KSA (IJCSES Journal)
The cloud represents an important change in the way information technology is used. It
makes it possible to access work anywhere, anytime, and to share it with anyone [1]. It
is changing the way people communicate, work and learn [2]. In this changing environment,
it is important to think about the opportunities and risks of using the cloud in
education, and the lessons we can learn from previous uses of this technology in the
field. In order to gain the benefits of the cloud for the educational system in KSA, a
comprehensive study of the scientific literature is presented in this paper. The paper
also presents significant information such as findings, case studies, related frameworks,
and the supporting tools associated with the migration of organizational resources to the
cloud.
Intelligent Hybrid Cloud Data Hosting Services with Effective Cost and High A... (IJECEIAES)
The major focus of this paper is an efficient, user-oriented data hosting service for hybrid clouds. It provides a friendly transaction scheme that is cost-effective and highly available for all users: the framework intelligently places data into the cloud at low cost and with high availability. It also outlines a proof of data integrity that the client can use to check the correctness of his data. In this study the major cloud storage vendors in India are considered, along with parameters such as storage space, cost of storage, outgoing bandwidth and type of transition mode. Based on the available knowledge of these parameters for existing cloud service providers in India, the intelligent hybrid cloud data hosting framework assures customers of low cost and high availability with an appropriate mode of transition. It also ensures that the storage overhead at the customer side is negligible, which is helpful for customers.
Most downloaded article for a year in academia - Advanced Computing: An Inte... (acijjournal)
Advanced Computing: An International Journal (ACIJ) is a bimonthly open-access peer-reviewed journal that publishes articles which contribute new results in all areas of advanced computing. The journal focuses on all technical and practical aspects of high performance computing, green computing, pervasive computing, cloud computing, etc. The goal of this journal is to bring together researchers and practitioners from academia and industry to focus on understanding advances in computing and establishing new collaborations in these areas.
The growth of the Internet of Things and wireless technology has led to enormous generation of data for various applications such as healthcare, scientific, and data-intensive workloads. Cloud-based Storage Area Networks (SANs) have recently been widely used for storing and processing these data. Providing fault-tolerant, continuous access to data with minimal latency and cost is challenging, and an efficient fault-tolerance mechanism is required. Data replication is an efficient fault-tolerance mechanism and has been considered by existing methodologies. However, data replica placement is challenging, and existing methods are not efficient given the dynamic application requirements of cloud-based storage area networks; they therefore incur latency, which induces higher data transmission cost. This work presents an efficient replica placement and transmission technique, Bipartite Graph based Data Replica Placement (BGDRP), that aids in minimizing latency and computing cost. The performance of BGDRP is evaluated using a real-time scientific application workflow. The outcome shows that the BGDRP technique minimizes data access latency, computation time and cost compared with the state-of-the-art technique.
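The bipartite intuition can be sketched as follows (BGDRP itself is more elaborate):
treat replicas and storage nodes as the two sides of a bipartite graph with access
latencies as edge weights, then choose the assignment with minimum total latency. The
latency matrix below is invented, and the brute-force search only suits small examples.

```python
from itertools import permutations

# latency[i][j]: latency (ms) of serving replica i from storage node j
latency = [
    [12,  5, 30],
    [ 9, 14,  4],
    [20,  8,  7],
]

def best_placement(cost):
    """Exhaustively find the replica-to-node assignment with minimum total cost.
    Returns (assignment, total), where assignment[i] is the node for replica i."""
    n = len(cost)
    best = min(permutations(range(n)),
               key=lambda p: sum(cost[i][p[i]] for i in range(n)))
    return list(best), sum(cost[i][best[i]] for i in range(n))

print(best_placement(latency))
# -> ([1, 0, 2], 21): replica 0 on node 1, replica 1 on node 0, replica 2 on node 2.
```

For realistic graph sizes one would replace the O(n!) enumeration with a polynomial
min-cost bipartite matching algorithm (e.g. the Hungarian method), which solves the same
problem.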
BIG DATA NETWORKING: REQUIREMENTS, ARCHITECTURE AND ISSUES (ijwmn)
A flexible, efficient and secure networking architecture is required in order to process
big data. However, existing network architectures are mostly unable to handle big data:
as big data pushes network resources to their limits, it results in network congestion,
poor performance, and detrimental user experiences. This paper presents the current
state-of-the-art research challenges and possible solutions in big data networking
theory. More specifically, we present the networking issues of big data related to
capacity, management and data processing. We also present the architectures of the
MapReduce and Hadoop paradigm with their research challenges, along with fabric networks
and software-defined networks (SDN) that are used to handle today's rapidly growing
digital world, and compare and contrast them to identify relevant problems and solutions.
A Virtualization Model for Cloud Computing (Souvik Pal)
Cloud computing is now a fast-emerging field in the IT industry as well as in research. Its advancement came about through the fast-growing use of the Internet. Cloud computing is basically on-demand network access to a collection of physical resources which can be provisioned according to the needs of the cloud user under the supervision of the cloud service provider. From a business perspective, the viable achievements of cloud computing and recent developments in grid computing have produced a platform that has brought virtualization technology into the era of high-performance computing. Virtualization technology is widely applied in modern data centers for cloud computing; it uses computer resources to imitate other computer resources or whole computers. This paper provides a virtualization model for cloud computing that may lead to faster access and better performance. The model may help to combine self-service capabilities and ready-to-use facilities for computing resources.
Advance Computing Paradigm with the Perspective of Cloud Computing-An Analyti...Eswar Publications
The Internet has been a driving force behind the various technologies that have been developed. Arguably, one of the most discussed among these is cloud computing. Cloud computing is seen as a trend in the present-day scenario, with almost all organizations trying to make an entry into it. It is a promising and emerging technology for the next generation of IT applications. This paper presents the evolution, history and definition of cloud computing, and also presents a comprehensive analysis by explaining its service and deployment models and identifying various characteristics of concern.
A Literature Survey on Resource Management Techniques, Issues and Challenges ... (TELKOMNIKA JOURNAL)
Cloud computing is large-scale distributed computing which provides on-demand services to
clients. Cloud clients use web browsers, mobile apps, thin clients, or terminal emulators
to request and control their cloud resources at any time and anywhere through the
network. As many companies shift their data to the cloud, and as more people become aware
of the advantages of storing data in the cloud, the number of cloud computing
infrastructures and the amount of data grow, which leads to management complexity for
cloud providers. We survey the state-of-the-art resource management techniques for IaaS
(Infrastructure as a Service) in cloud computing, and then put forward the major issues
in the deployment of cloud infrastructure that must be addressed in order to avoid poor
service delivery.
Efficient architectural framework of cloud computing (Souvik Pal)
Cloud computing enables adaptive, convenient, on-demand network access to a shared pool of adjustable and configurable physical computing resources (networks, servers, bandwidth, storage) that can be swiftly provisioned and released with negligible supervision effort or service provider interaction. From a business perspective, the viable achievements of cloud computing and recent developments in grid computing have produced a platform that has brought virtualization technology into the era of high-performance computing. However, clouds are an Internet-based concept and try to hide this complexity from end users. Cloud service providers (CSPs) use many structural designs combined with self-service capabilities and ready-to-use facilities for computing resources, enabled through network infrastructure, especially the Internet, which is an important consideration. This paper provides an efficient architectural framework for cloud computing that may lead to better performance and faster access.
ANALYSIS OF THE COMPARISON OF SELECTIVE CLOUD VENDORS SERVICES (ijccsa)
Cloud computing refers to a model that allows us to preserve our precious data and use
computing and networking services on a pay-as-you-go basis without the need for a
physical infrastructure. Cloud computing now provides us with powerful data processing
and storage, exceptional availability and security, rapid accessibility and adaptation,
assured flexibility and interoperability, and time and cost efficiency. It offers three
platforms (IaaS, PaaS, and SaaS) with unique capabilities that promise to make it easier
for a customer, organization, or business to establish any type of IT venture. We
compared a variety of cloud service characteristics in this article; following the
comparison, it is straightforward to pick a specific cloud service from the options
offered by three chosen providers: Amazon, Microsoft Azure, and DigitalOcean. The
findings of this study not only identify similarities and contrasts across various
aspects of cloud computing but also suggest some areas for further study.
A CLOUD BROKER APPROACH WITH QOS ATTENDANCE AND SOA FOR HYBRID CLOUD COMPUTIN...cscpconf
Cloud Computing is the industry whose demand has been growing continuously since its appearance as a solution that offers different types of computing resources as a service over the Internet. The number of cloud computing providers grows into a run, while the end user is currently in the position of having many pricing options, distinct features and performance for the same required service. This work is inserted in the cloud computing task scheduling research field to hybrid cloud environments with service-oriented architecture (SOA), dynamic allocation and control of services and QoS requirements attendance. Therefore, it is proposed the QBroker Architecture, representing a cloud broker with trading features that implement the intermediation services, defined by the NIST Cloud Computing Reference Model. An experimental design was created in order to demonstrate compliance to the QoS requirement of maximum task execution time, the differentiation of services and dynamic allocation of services. The experimental results obtained by simulation with CloudSim prove that QBroker has the necessary requirements to provide QoS improvement in hybrid cloud computing environments based on SOA.
An Efficient Cloud Scheduling Algorithm for the Conservation of Energy throug...IJECEIAES
Method of broadcasting is the well known operation that is used for providing support to different computing protocols in cloud computing. Attaining energy efficiency is one of the prominent challenges, that is quite significant in the scheduling process that is used in cloud computing as, there are fixed limits that have to be met by the system. In this research paper, we are particularly focusing on the cloud server maintenance and scheduling process and to do so, we are using the interactive broadcasting energy efficient computing technique along with the cloud computing server. Additionally, the remote host machines used for cloud services are dissipating more power and with that they are consuming more and more energy. The effect of the power consumption is one of the main factors for determining the cost of the computing resources. With the idea of using the avoidance technology for assigning the data center resources that dynamically depend on the application demands and supports the cloud computing with the optimization of the servers in use.
Efficient and reliable hybrid cloud architecture for big databaseijccsa
The objective of this paper is to propose a cloud computing framework that is feasible and necessary for handling huge data sets. Our prototype system considers the national ID database structure of Bangladesh, prepared by the Election Commission of Bangladesh. Using this database, we propose an interactive graphical user interface for Bangladeshi People Search (BDPS) that uses a hybrid cloud structure managed by Apache Hadoop, with the database implemented in HiveQL. The infrastructure divides into two parts: a locally hosted cloud based on "Eucalyptus" and a remote cloud implemented on the well-known Amazon Web Services (AWS). Common problems in the Bangladeshi context, including data-traffic congestion, server timeouts, and server downtime, are also discussed.
The swiftly increasing demand for computation in business processes, file transfer under various protocols, and data centers has driven the development of an emerging technology that caters to computational needs with highly manageable and secure storage. Cloud computing is the best answer to these technological demands, introducing various kinds of service platforms in a high-performance computing environment. It is the most recent paradigm promising to turn the vision of "computing utilities" into reality. Because the term "cloud computing" is relatively new, there is no universal agreement on its definition. In this paper, we survey different areas of research and novelty in the cloud computing domain and its usefulness in the genre of management. Even though cloud computing provides many distinguished features, it still has certain shortcomings, along with comparatively high cost for both private and public clouds. It is a way of congregating masses of information and resources stored in personal computers and other gadgets and putting them on the public cloud to serve users. Cloud computing is turning out to be one of the most explosively expanding technologies in the computing industry in this era, authorizing users to move their data and computation to a remote location with minimal impact on system performance. With the evolution of virtualization technology, cloud computing has emerged as a fully distributed paradigm. The idea of cloud computing has not only revived the field of distributed systems but also fundamentally changed how business utilizes computing today. Resource management in cloud computing remains a hard problem, due to the scale of modern data centers, the variety of resource types and their interdependencies, the unpredictability of load, and the range of objectives of the different actors in a cloud ecosystem.
DESIGNING ASPECT AND FUNCTIONALITY ISSUES OF CLOUD BROKERING SERVICE IN CLOUD...Souvik Pal
Cloud brokering is an intermediate service that enables the producer-consumer business model, providing easy access to cloud services from Cloud Service Providers (CSPs). The cloud broker provides a platform that collects information from the user, analyzes the data, and sends it to the CSPs. The cloud broker also provides data-integration services and models the data across all components or units of the cloud services. This paper deals with the design criteria and issues of a cloud broker, the system activity of the broker, and the sequence diagram of the system design, along with the implementation procedure.
ADVANCES IN HIGHER EDUCATIONAL RESOURCE SHARING AND CLOUD SERVICES FOR KSAIJCSES Journal
Cloud computing represents an important change in the way information technology is used. The cloud makes it possible to access work anywhere, anytime, and to share it with anyone [1]. It is changing the way people communicate, work, and learn [2]. In this changing environment, it is important to think about the opportunities and risks of using the cloud in the education field, and the lessons we can learn from previous uses of this technology in education. In order to gain the benefits of the cloud for the educational system in KSA, this paper presents a comprehensive study of the scientific literature. It also presents significant information such as findings, case studies, and related frameworks, and surveys the tools associated with migrating organizational resources to the cloud.
Intelligent Hybrid Cloud Data Hosting Services with Effective Cost and High A...IJECEIAES
This paper concentrates on an efficient, user-oriented data hosting service for the hybrid cloud. It provides a friendly transaction scheme with cost-effectiveness and high availability for all users. The framework intelligently places data into clouds with effective cost and high availability, and provides a proof of data integrity that the client can use to check the correctness of his data. This study considers the major cloud storage vendors in India and parameters such as storage space, storage cost, outgoing bandwidth, and type of transition mode. Based on the available knowledge of these parameters for existing cloud service providers in India, the intelligent hybrid cloud data hosting framework assures customers of low cost and high availability with a suitable transition mode. It guarantees that the capability required at the customer side is negligible, which is helpful for customers.
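The selection logic the abstract describes, picking a storage vendor by cost and availability, can be sketched as a simple filter-and-minimize step. The vendor names and figures below are invented for illustration; they are not the Indian vendor data the paper analyses.

```python
# Illustrative sketch of cost/availability-driven provider selection.
# All provider figures below are hypothetical.

providers = [
    {"name": "VendorA", "cost_per_gb": 0.023, "availability": 0.9999},
    {"name": "VendorB", "cost_per_gb": 0.018, "availability": 0.995},
    {"name": "VendorC", "cost_per_gb": 0.020, "availability": 0.9995},
]

def cheapest_meeting_sla(providers, min_availability):
    """Return the lowest-cost provider whose availability meets the SLA."""
    eligible = [p for p in providers if p["availability"] >= min_availability]
    if not eligible:
        return None
    return min(eligible, key=lambda p: p["cost_per_gb"])

best = cheapest_meeting_sla(providers, 0.999)
print(best["name"])  # VendorC under these illustrative numbers
```

A real broker would add the paper's other parameters, such as outgoing bandwidth and transition mode, as further filters or weighted terms.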
Most downloaded article for an year in academia - Advanced Computing: An Inte...acijjournal
Advanced Computing: An International Journal (ACIJ) is a bi monthly open access peer-reviewed journal that publishes articles which contribute new results in all areas of the advanced computing. The journal focuses on all technical and practical aspects of high performance computing, green computing, pervasive computing, cloud computing etc. The goal of this journal is to bring together researchers and practitioners from academia and industry to focus on understanding advances in computing and establishing new collaborations in these areas.
The growth of the Internet of Things and wireless technology has led to enormous generation of data for various applications, such as healthcare, scientific, and data-intensive workloads. Cloud-based Storage Area Networks (SANs) have been widely used in recent times for storing and processing these data. Providing fault-tolerant, continuous access to data with minimal latency and cost is challenging, so an efficient fault-tolerance mechanism is required. Data replication is an efficient fault-tolerance mechanism that has been considered by existing methodologies. However, replica placement is challenging, and existing methods are not efficient with respect to the dynamic application requirements of cloud-based SANs, incurring latency and thereby higher data-transmission cost. This work presents an efficient replica placement and transmission technique, Bipartite Graph based Data Replica Placement (BGDRP), that aids in minimizing latency and computing cost. The performance of BGDRP is evaluated using a real-time scientific application workflow. The outcome shows that the BGDRP technique minimizes data-access latency, computation time, and cost compared with state-of-the-art techniques.
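The abstract does not give BGDRP's formulation, but the underlying idea of a bipartite graph between data items and datacenters, with latency-weighted edges, can be illustrated with a greedy assignment over the cheapest edges. All names and latencies below are hypothetical; this is not the paper's algorithm.

```python
# Toy bipartite replica placement: data items on one side, datacenters on
# the other, edge weights = access latency (ms). Greedily take the globally
# cheapest edges while respecting datacenter capacity.

latency = {  # latency[item][dc], illustrative values
    "d1": {"dcA": 12, "dcB": 30, "dcC": 25},
    "d2": {"dcA": 40, "dcB": 10, "dcC": 22},
    "d3": {"dcA": 15, "dcB": 35, "dcC": 8},
}
capacity = {"dcA": 1, "dcB": 1, "dcC": 1}  # replicas each DC can host

def place_replicas(latency, capacity):
    """Place each item at the lowest-latency DC that still has capacity."""
    cap = dict(capacity)
    placement = {}
    edges = sorted(
        (lat, item, dc)
        for item, row in latency.items()
        for dc, lat in row.items()
    )
    for lat, item, dc in edges:
        if item not in placement and cap[dc] > 0:
            placement[item] = dc
            cap[dc] -= 1
    return placement

print(place_replicas(latency, capacity))
```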
BIG DATA NETWORKING: REQUIREMENTS, ARCHITECTURE AND ISSUESijwmn
A flexible, efficient, and secure networking architecture is required in order to process big data. However, existing network architectures are mostly unable to handle big data: as big data pushes network resources to their limits, it results in network congestion, poor performance, and detrimental user experiences. This paper presents the current state-of-the-art research challenges and possible solutions in big data networking theory. More specifically, we present the state of networking issues of big data related to capacity, management, and data processing. We also present the architectures of the MapReduce and Hadoop paradigm with their research challenges, along with fabric networks and software-defined networks (SDN) used to handle today's rapidly growing digital world, and compare and contrast them to identify relevant problems and solutions.
A Virtualization Model for Cloud ComputingSouvik Pal
Cloud computing is now an emerging field in the IT industry as well as in research. Its advancement came about due to the fast-growing use of the Internet among people. Cloud computing is basically on-demand network access to a collection of physical resources that can be provisioned according to the cloud user's needs under the supervision of the cloud service provider. From a business perspective, the viable achievements of cloud computing and recent developments in grid computing have brought about the platform that introduced virtualization technology into the era of high-performance computing. Virtualization technology is widely applied in modern data centers for cloud computing; virtualization uses computer resources to imitate other computer resources or whole computers. This paper provides a virtualization model for cloud computing that may lead to faster access and better performance, and may help combine self-service capabilities with ready-to-use facilities for computing resources.
Advance Computing Paradigm with the Perspective of Cloud Computing-An Analyti...Eswar Publications
The Internet has been a driving force behind the various technologies that have been developed, and arguably one of the most discussed among them is cloud computing. Cloud computing is seen as a trend in the present-day scenario, with almost all organizations trying to make an entry into it. It is a promising and emerging technology for the next generation of IT applications. This paper presents the evolution, history, and definition of cloud computing, gives a comprehensive analysis of cloud computing by explaining its service and deployment models, and identifies various characteristics of concern.
A Literature Survey on Resource Management Techniques, Issues and Challenges ...TELKOMNIKA JOURNAL
Cloud computing is a large-scale distributed computing paradigm that provides on-demand services for clients. Cloud clients use web browsers, mobile apps, thin clients, or terminal emulators to request and control their cloud resources at any time and anywhere over the network. As many companies shift their data to the cloud, and as more people become aware of the advantages of storing data in the cloud, the growing number of cloud computing infrastructures and the large amount of data lead to management complexity for cloud providers. We surveyed state-of-the-art resource management techniques for IaaS (infrastructure as a service) in cloud computing. We then put forward the major issues in deploying cloud infrastructure in order to avoid poor service delivery in cloud computing.
Efficient architectural framework of cloud computing Souvik Pal
Cloud computing enables adaptive, convenient, on-demand network access to a shared pool of adjustable and configurable physical computing resources (networks, servers, bandwidth, storage) that can be swiftly provisioned and released with negligible management effort or service provider interaction. From a business perspective, the viable achievements of cloud computing and recent developments in grid computing have brought about the platform that introduced virtualization technology into the era of high-performance computing. However, clouds are an Internet-based concept and try to hide complexity from end users. Cloud service providers (CSPs) use many structural designs combined with self-service capabilities and ready-to-use facilities for computing resources, enabled through network infrastructure, especially the Internet, which is an important consideration. This paper provides an efficient architectural framework for cloud computing that may lead to better performance and faster access.
ANALYSIS OF THE COMPARISON OF SELECTIVE CLOUD VENDORS SERVICESijccsa
Cloud computing refers to a location that allows us to preserve our precious data and use computing and networking services on a pay-as-you-go basis without the need for a physical infrastructure. Cloud computing now provides us with powerful data processing and storage, exceptional availability and security, rapid accessibility and adaptation, ensured flexibility and interoperability, and time and cost efficiency. It offers three platforms (IaaS, PaaS, and SaaS) with unique capabilities that promise to make it easier for a customer, organization, or trade to establish any type of IT business. In this article we compared a variety of cloud service characteristics; after the comparison, it is straightforward to pick a specific cloud service from the possible options among three chosen providers: Amazon, Microsoft Azure, and DigitalOcean. The findings of this study can be used not only to identify similarities and contrasts across various aspects of cloud computing, but also to suggest areas for further study.
On the Optimal Allocation of VirtualResources in Cloud Compu.docxhopeaustin33688
On the Optimal Allocation of Virtual
Resources in Cloud Computing Networks
Chrysa Papagianni, Aris Leivadeas, Symeon Papavassiliou,
Vasilis Maglaris, Cristina Cervelló-Pastor, and Álvaro Monje
Abstract—Cloud computing builds upon advances in virtualization and distributed computing to support cost-efficient usage of computing resources, emphasizing resource scalability and on-demand services. Moving away from traditional data-center-oriented models, distributed clouds extend over a loosely coupled federated substrate, offering enhanced communication and computational services to target end users with quality of service (QoS) requirements, as dictated by the future Internet vision. Toward facilitating the efficient realization of such networked computing environments, computing and networking resources need to be jointly treated and optimized. This requires delivery of user-driven sets of virtual resources, dynamically allocated to actual substrate resources within networked clouds, creating the need to revisit resource mapping algorithms and tailor them to a composite virtual resource mapping problem. In this paper, toward providing a unified resource allocation framework for networked clouds, we first formulate the optimal networked cloud mapping problem as a mixed integer programming (MIP) problem, indicating objectives related to cost efficiency of the resource mapping procedure, while abiding by user requests for QoS-aware virtual resources. We subsequently propose a method for the efficient mapping of resource requests onto a shared substrate interconnecting various islands of computing resources, and adopt a heuristic methodology to address the problem. The efficiency of the proposed approach is illustrated in a simulation/emulation environment that allows for a flexible, structured, and comparative performance evaluation. We conclude by outlining a proof-of-concept realization of our proposed schema, mounted over the European future Internet test-bed FEDERICA, a resource virtualization platform augmented with network and computing facilities.
Index Terms—Federated infrastructures, resource allocation, resource mapping, virtualization, cloud computing, quality of service
1 INTRODUCTION
Cloud computing promises reliable services delivered through next-generation data centers that are built on compute and storage virtualization technologies. According to Buyya et al. [1], “a cloud is a type of parallel and distributed system consisting of a collection of interconnected and virtualized computers that are dynamically provisioned and presented as one or more unified computing resources based on service-level agreements established through negotiation between the service provider and the consumers” and accessible as a composable service via web 2.0 technologies.
Therefore, with respect to cloud computing there exist the “as a service” definitions, which include software as a service (SaaS), infrastructure as a se.
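As a toy illustration of what the MIP in the abstract above optimizes, the brute-force sketch below enumerates all virtual-to-substrate mappings of a two-node request, discards those violating a capacity or latency (QoS) bound, and keeps the cheapest. The instance and cost model are invented for demonstration; the paper itself solves the problem with a heuristic rather than enumeration.

```python
import itertools

# Toy exhaustive version of the virtual-to-substrate mapping objective:
# minimize total mapping cost subject to per-node capacity and a latency
# (QoS) bound. All numbers below are hypothetical.

v_nodes = {"v1": 2, "v2": 3}                 # required CPU units per virtual node
s_nodes = {"s1": (4, 1.0), "s2": (3, 2.0)}   # substrate node: (capacity, cost per CPU unit)
latency = {"s1": 20, "s2": 5}                # substrate node latency to the user (ms)
MAX_LATENCY = 25

def best_mapping():
    best, best_cost = None, float("inf")
    for assign in itertools.product(s_nodes, repeat=len(v_nodes)):
        mapping = dict(zip(v_nodes, assign))
        load = {s: 0 for s in s_nodes}
        for v, s in mapping.items():
            load[s] += v_nodes[v]
        if any(load[s] > s_nodes[s][0] for s in s_nodes):
            continue  # capacity constraint violated
        if any(latency[s] > MAX_LATENCY for s in mapping.values()):
            continue  # QoS (latency) bound violated
        cost = sum(v_nodes[v] * s_nodes[s][1] for v, s in mapping.items())
        if cost < best_cost:
            best, best_cost = mapping, cost
    return best, best_cost

print(best_mapping())
```

A real MIP solver explores the same feasible set implicitly; enumeration only works for toy instances like this one.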
NEURO-FUZZY SYSTEM BASED DYNAMIC RESOURCE ALLOCATION IN COLLABORATIVE CLOUD C...ijccsa
Cloud collaboration is an emerging technology that enables sharing of computer files using cloud computing: cloud resources are assembled, cloud services are provided using these resources, and users are allowed to share documents. Resource allocation in the cloud is challenging because resources offer different Quality of Service (QoS) and the services running on them are risky with respect to user demands. We propose a solution for resource allocation based on multi-attribute QoS scoring, considering parameters such as the distance to the resource from the user site, the reputation of the resource, task completion time, task completion ratio, and the load at the resource. The proposed algorithm, referred to as Multi Attribute QoS Scoring (MAQS), uses a neuro-fuzzy system. We have also included a speculative manager to handle fault tolerance. The paper shows that the proposed algorithm performs better than others, including power-trust reputation-based algorithms and the harmony method, which use a single attribute to compute the reputation score of each allocated resource.
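The neuro-fuzzy scoring itself is not specified in the abstract; as a stand-in, the sketch below ranks resources with a plain weighted sum over the five attributes the abstract names (distance, reputation, completion time, completion ratio, load). The weights and attribute values are invented.

```python
# Plain weighted-sum stand-in for multi-attribute QoS scoring.
# Distance, completion time, and load count against a resource;
# reputation and completion ratio count in its favour.

def qos_score(r, w):
    return (
        w["reputation"] * r["reputation"]
        + w["completion_ratio"] * r["completion_ratio"]
        - w["distance"] * r["distance"]
        - w["completion_time"] * r["completion_time"]
        - w["load"] * r["load"]
    )

weights = {"distance": 0.2, "reputation": 0.3, "completion_time": 0.2,
           "completion_ratio": 0.2, "load": 0.1}

resources = {  # normalized, hypothetical attribute values
    "r1": {"distance": 0.4, "reputation": 0.9, "completion_time": 0.3,
           "completion_ratio": 0.95, "load": 0.5},
    "r2": {"distance": 0.1, "reputation": 0.6, "completion_time": 0.6,
           "completion_ratio": 0.80, "load": 0.2},
}

best = max(resources, key=lambda name: qos_score(resources[name], weights))
print(best)  # r1 with these invented numbers
```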
An Efficient MDC based Set Partitioned Embedded Block Image CodingDr. Amarjeet Singh
In this paper, fast, efficient, simple, and widely used Set Partitioned Embedded bloCK (SPECK) coding is applied to multiple descriptions of a transformed image. The maximum potential of this type of coding can be exploited with the discrete wavelet transform (DWT) of images. Two correlated descriptions are generated from a wavelet-transformed image to ensure meaningful transmission of the image over noise-prone wireless channels. These correlated descriptions are encoded by the set-partitioning technique through SPECK coders and transmitted over wireless channels. The quality of the reconstructed image at the decoder side depends upon the number of descriptions received: the more descriptions received, the better the quality of the reconstructed image. However, if any of the multiple descriptions is lost, the receiver can estimate it by exploiting the correlation between the descriptions. Simulations performed on an image in MATLAB give decent performance and results even after half of the descriptions are lost in transmission.
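A minimal way to picture the two correlated descriptions is a polyphase split of the transformed signal into even- and odd-indexed samples, with a lost description estimated from neighbours in the surviving one. This is a generic multiple-description sketch on a toy 1-D signal, not the paper's DWT/SPECK coder.

```python
# Generic polyphase MDC sketch: split a signal into two descriptions and
# estimate a lost description from the surviving one. Signal is invented.

def split_descriptions(signal):
    """Description 1 = even-indexed samples, description 2 = odd-indexed."""
    return signal[0::2], signal[1::2]

def estimate_lost_odd(even):
    """Reconstruct odd samples by averaging the neighbouring even samples."""
    est = []
    for i in range(len(even)):
        nxt = even[i + 1] if i + 1 < len(even) else even[i]
        est.append((even[i] + nxt) / 2)
    return est

signal = [10, 12, 14, 16, 18, 20]
even, odd = split_descriptions(signal)
print(even)                     # [10, 14, 18]
print(estimate_lost_odd(even))  # [12.0, 16.0, 18.0]; only approximate at the edge
```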
Task Performance Analysis in Virtual Cloud EnvironmentRSIS International
Cloud-computing-based applications are beneficial for businesses of all sizes and industries, as they do not have to invest a huge amount in initial setup; businesses can opt for cloud services and implement innovative ideas. But evaluating the performance of provisioning policies (e.g., CPU scheduling and resource allocation) in a real cloud computing environment for different application techniques is challenging, because clouds show dynamic demands, workloads, supply patterns, VM sizes, and resources (hardware, software, and network). Users' requests and service requirements are heterogeneous and dynamic, and application models have unpredictable performance, workloads, and dynamic scaling requirements. Hence there is demand for a simulation toolkit for the cloud. CloudSim is a self-contained simulation framework that provides simulation and modeling of cloud-based applications in less time and with less effort. In this paper we simulate the task performance of a cloudlet using one data center and one VM. We also developed a graphical user interface to dynamically change the simulation parameters and show the simulation results.
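CloudSim itself is a Java toolkit, but its basic space-shared cloudlet timing (execution time = cloudlet length in million instructions divided by the VM's MIPS rating) can be mimicked in a few lines to show what such a simulation estimates. The parameter values below are illustrative, not from the paper.

```python
# Toy imitation of CloudSim's space-shared cloudlet timing on one VM with
# one processing element: cloudlets run back to back, each taking
# length (MI) / vm_mips seconds.

def cloudlet_finish_times(cloudlet_lengths_mi, vm_mips):
    """Return the finish time (s) of each cloudlet, run sequentially."""
    clock, finish = 0.0, []
    for length in cloudlet_lengths_mi:
        clock += length / vm_mips
        finish.append(round(clock, 2))
    return finish

# Three cloudlets of 400,000 MI each on a 1000-MIPS VM:
print(cloudlet_finish_times([400_000, 400_000, 400_000], 1000))  # [400.0, 800.0, 1200.0]
```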
Nowadays, work is done by hiring space and resources from cloud providers in order to work effectively and at lower cost. This paper describes the cloud, its challenges, evolution, and attacks, along with the approaches required to handle data in the cloud. Cloud computing is the practice of using a network of remote servers hosted on the Internet to store, manage, and process data, rather than a local server or a personal computer. This review aims to raise awareness of this emerging technology, which saves users cost.
An Efficient Queuing Model for Resource Sharing in Cloud Computingtheijes
The International Journal of Engineering & Science is aimed at providing a platform for researchers, engineers, scientists, and educators to publish their original research results, exchange new ideas, and disseminate information on innovative designs, engineering experiences, and technological skills. It is also the Journal's objective to promote engineering and technology education. All papers submitted to the Journal are blind peer-reviewed, and only original articles are published.
Papers for publication in The International Journal of Engineering & Science are selected through rigorous peer review to ensure originality, timeliness, relevance, and readability.
International Journal of Engineering Research and DevelopmentIJERD Editor
Electrical, Electronics and Computer Engineering,
Information Engineering and Technology,
Mechanical, Industrial and Manufacturing Engineering,
Automation and Mechatronics Engineering,
Material and Chemical Engineering,
Civil and Architecture Engineering,
Biotechnology and Bio Engineering,
Environmental Engineering,
Petroleum and Mining Engineering,
Marine and Agriculture engineering,
Aerospace Engineering.
Hybrid Based Resource Provisioning in CloudEditor IJCATR
Data centres and the energy-consumption characteristics of the various machines are often noted to have different capacities. When we analysed public cloud workloads of different priorities and the performance requirements of various applications, we noted some invariant reports about the cloud, and cloud data centres become capable of sensing an opportunity to present a different program. In our proposed work, we use a hybrid method for resource provisioning in data centres. This method is used to allocate resources under the working conditions and to account for the energy stored in power consumption; the proposed method is used to allocate the process behind the cloud storage.
Opportunistic job sharing for mobile cloud computingijccsa
Cloud computing is the evolution of a new business era encompassing many technologies. These technologies take advantage of economies of scale and multi-tenancy to decrease the cost of information technology resources. Many organizations are eager to reduce their computing cost through virtualization, and this demand for reducing computing cost and time has led to the innovation of cloud computing, which enhances computing through improved deployment, infrastructure costs, and processing time. Mobile computing and its applications in smartphones enable a new, rich user experience, but the extreme use of limited resources in smartphones creates problems with battery, memory space, and CPU. To solve this problem, we propose a dynamic mobile cloud computing architecture framework that uses global resources instead of local resources. In this proposed framework, sharing the job workload at runtime reduces the load at the local client and the throughput time of the job over Wi-Fi connectivity.
Using the Technology Organization Environment Framework for Adoption and Impl...theijes
Many institutions of higher learning in developing countries are adopting and implementing cloud computing in their efforts to provide the information technology support necessary for administrative, educational, and research activities. Cloud computing delivers on-demand provisioning of IT resources on a pay-per-use basis. This study discusses the adoption and implementation of cloud computing using the TOE framework. To achieve the purpose of the study, a critical analysis of relevant literature was conducted. An overview of the institutions' technological, environmental, and organizational issues that need consideration is given, and suggestions for adoption and implementation strategies are made. The study concludes that the TOE framework is appropriate for the technological adoption of cloud computing in institutions of higher learning.
ABSTRACT
In today's world, the swift increase in the use of mobile services, together with the emergence of cloud computing services, has made Mobile Cloud Computing (MCC) a widespread technology among mobile users. MCC incorporates cloud computing into mobile services to provide facilities in daily mobile use. The capability of mobile devices is limited in computation, memory capacity, storage, and energy, and relying on cloud computing can handle these troubles in the mobile environment. Cloud computing gives computing ease and capacity, providing availability of services from anyplace through the Internet without investing in new infrastructure, training, or application licensing. Additionally, cloud computing is an approach to expand limits or increase abilities dynamically; its primary advantage is that clients use only what they require and pay for what they actually use. Mobile cloud computing is a form of various services in which a mobile device can use the cloud for data storage, search, data mining, and multimedia processing. Cloud computing also causes many new complications for safety and access control when users store significant information on cloud servers: since clients no longer have physical possession of the outsourced data, ensuring data integrity, security, and authenticity in cloud computing is an extremely difficult and potentially troublesome undertaking. In MCC environments, it is hard to find a paper embracing most of the concepts and issues, such as architecture, computational offloading, challenges, security issues, and authentication. In this paper we discuss these concepts and present a review of the most recent papers in the MCC domain.
Similar to Agent based Aggregation of Cloud Services- A Research Agenda (20)
Nowadays, the Internet has become an important part of human life: a person can shop, invest, and perform all banking tasks online. Almost all organizations have their own website, where customers can perform tasks such as shopping by providing only their credit card details. Online banking and e-commerce organizations have been experiencing an increase in credit card transactions and other modes of online transactions. As a result, credit card fraud has become a major issue for the credit card industry, causing many financial losses for customers and organizations alike. Many techniques, such as decision trees, neural networks, and genetic algorithms, based on modern approaches like artificial intelligence, machine learning, and fuzzy logic, have already been developed for credit card fraud detection. In this paper, an evolutionary simulated annealing algorithm is used to train neural networks for credit card fraud detection in a real-time scenario. The paper shows how this technique can be used for credit card fraud detection and presents detailed experimental results found when using it on real-world financial data (taken from the UCI repository) to show its effectiveness. The algorithm used in this paper is likely beneficial for organizations and individual users in terms of cost and time efficiency. Still, there are many cases that are misclassified, i.e., a genuine customer is classified as fraudulent or vice versa.
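The abstract does not detail how simulated annealing is wired into network training, so the sketch below shows only the generic accept/reject schedule, applied to a toy one-dimensional objective instead of neural-network weights.

```python
import math
import random

# Generic simulated annealing: always accept improvements, accept worse
# moves with Boltzmann probability exp(-delta/T), and cool T geometrically.
# The objective below is a toy stand-in for a network's training loss.

def simulated_annealing(f, x0, t0=1.0, cooling=0.95, steps=500, seed=0):
    rng = random.Random(seed)
    x, t = x0, t0
    for _ in range(steps):
        cand = x + rng.uniform(-0.5, 0.5)   # neighbour solution
        delta = f(cand) - f(x)
        if delta < 0 or rng.random() < math.exp(-delta / t):
            x = cand                         # accept the move
        t *= cooling                         # cool down
    return x

best = simulated_annealing(lambda x: (x - 3) ** 2, x0=0.0)
print(best)  # should land near the minimum at x = 3
```

For fraud detection, `f` would instead be the network's classification error over the training set and `x` the weight vector.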
Wireless sensor networks (WSNs) have been widely used in various applications.
In these networks, nodes collect data from attached sensors and send it to a base
station. However, WSN nodes have a limited power supply in the form of batteries, so they
are expected to minimize energy consumption in order to maximize the lifetime of the
network. A number of techniques have been proposed in the literature to reduce energy
consumption significantly. In this paper, we propose a new clustering-based technique
that modifies the popular LEACH algorithm. First, cluster heads are elected using the
improved LEACH algorithm as usual, and then clusters are formed based on the distance
between each node and the cluster heads. Finally, data from each node is transferred to
its cluster head. After applying aggregation, a cluster head forwards its data either to
another cluster head that lies closer to it than the sink in the forward direction, or
directly to the sink. This reduction in the distance travelled improves performance
significantly over the LEACH algorithm.
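The two geometric rules described above can be sketched as follows. This is an illustrative sketch under stated assumptions, not the paper's protocol: cluster-head election itself is omitted, nodes are plain 2-D coordinates, and the "forward direction" test is approximated by requiring the relay head to be both closer to the sink and closer than the sink is to the forwarding head.

```python
import math

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def form_clusters(nodes, heads):
    """Assign every node to its nearest cluster head (the modification
    described above: membership by node-to-head distance)."""
    return {n: min(heads, key=lambda h: dist(n, h)) for n in nodes}

def next_hop(head, heads, sink):
    """A head relays to another head that is closer to the sink than itself
    and closer to it than the sink is; otherwise it sends directly to the sink."""
    closer = [h for h in heads
              if h != head and dist(h, sink) < dist(head, sink)
              and dist(head, h) < dist(head, sink)]
    return min(closer, key=lambda h: dist(head, h)) if closer else sink

heads = [(2, 2), (8, 8)]
sink = (10, 10)
clusters = form_clusters([(1, 1), (3, 2), (7, 9)], heads)
```

Shorter per-hop distances translate into lower radio transmit energy, which is the source of the claimed improvement over single-hop LEACH.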
Next-generation wireless networks comprise mobile users moving between
heterogeneous networks, using terminals with multiple access interfaces and
services. The most important issue in such an environment is being Always Best Connected
(ABC), i.e. giving applications the best connectivity anywhere at any time. To meet this
requirement, various vertical handover decision strategies have been proposed. This paper
provides an overview of the most interesting recent strategies.
This paper presents the design and performance comparison of a two-stage
operational amplifier topology in CMOS and BiCMOS technology. The conventional op-amp
circuit was designed using the BSIM3V3 RF model in 0.6 μm CMOS and 0.35 μm BiCMOS
technology. Both op-amp circuits were designed, simulated, and analyzed, and performance
parameters such as gain, phase margin, CMRR, PSRR, and power consumption were compared.
Finally, we conclude on the suitability of CMOS technology over BiCMOS technology for
low-power RF design.
In Cognitive Radio Networks (CRNs), Cooperative Spectrum Sensing (CSS) is
used to improve the performance of the spectrum sensing techniques that detect a licensed
(primary) user's signal. In CSS, the spectrum sensing information from multiple unlicensed
(secondary) users is combined to reach a final decision about the presence of the primary
signal. The combining techniques used to generate this final decision are called fusion
techniques, or fusion rules, and are further classified into data fusion and decision
fusion. In data fusion, all secondary users (SUs) share their raw spectrum detection
information, such as detected energy or other statistics, while in decision fusion each SU
takes a local decision and shares it by sending '0' or '1', corresponding to the absence or
presence of the primary user's signal respectively. The rules used in decision fusion are
the OR rule, the AND rule, and the K-out-of-N rule. CSS itself is further classified into
distributed and centralized CSS. In distributed CSS, all SUs share their detection
information with each other and, by combining the shared information, each SU takes the
final decision individually. In centralized CSS, all SUs send their detected information to
a secondary base station (central unit), which combines the shared information, takes the
final decision, and shares it with all the SUs in the CRN. This paper gives an overview of
the information fusion methods used for CSS and an analysis of the decision fusion rules,
with simulation results.
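The decision fusion rules named above reduce to counting '1' votes. A minimal sketch of the K-out-of-N rule, with OR and AND as its two extreme cases (the function names are illustrative, not from the paper):

```python
def fuse_decisions(decisions, k):
    """K-out-of-N decision fusion: declare the primary user present (1)
    when at least k of the N secondary users report '1'."""
    return 1 if sum(decisions) >= k else 0

def or_rule(decisions):
    """OR rule: any single '1' suffices (K = 1)."""
    return fuse_decisions(decisions, 1)

def and_rule(decisions):
    """AND rule: unanimity required (K = N)."""
    return fuse_decisions(decisions, len(decisions))
```

In centralized CSS this counting happens once at the central unit; in distributed CSS every SU applies the same rule to the shared decision vector.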
ZigBee has been developed to support low-data-rate, low-power applications.
This paper analyzes various parameters of the ZigBee physical layer (PHY). The
performance of the ZigBee PHY is evaluated in terms of energy consumption in transmitting
and receiving modes, and throughput, and the effect of varying the network size on these
attributes is studied. Several modulation schemes are also compared, and the best scheme
is suggested along with the tradeoffs between the different performance metrics.
This paper gives a brief overview of moving-object tracking and its applications.
In sports, it is challenging to detect and track the motion of players across video frames.
The task uses optical flow analysis for motion detection and a particle filter to track
players, taking into account the regions of the sports video in which players move.
Computing the optical flow vectors yields the motion of players in each video frame. This
paper presents an improved Lucas-Kanade algorithm for optical flow computation that
handles large displacements with greater accuracy in motion estimation.
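The classical Lucas-Kanade step underlying the improved algorithm solves a small least-squares system built from image gradients. The following sketch estimates one global translation between two frames; it is the textbook baseline, not the paper's improved large-displacement variant, and the synthetic Gaussian test pattern is an illustrative assumption.

```python
import numpy as np

def lucas_kanade_global(I1, I2):
    """Estimate a single (u, v) translation between two frames by solving
    the Lucas-Kanade normal equations  A [u v]^T = b  over the whole image."""
    Iy, Ix = np.gradient(I1.astype(float))    # spatial gradients (rows=y, cols=x)
    It = I2.astype(float) - I1.astype(float)  # temporal gradient
    A = np.array([[np.sum(Ix * Ix), np.sum(Ix * Iy)],
                  [np.sum(Ix * Iy), np.sum(Iy * Iy)]])
    b = -np.array([np.sum(Ix * It), np.sum(Iy * It)])
    u, v = np.linalg.solve(A, b)
    return u, v        # u: motion along x (columns), v: along y (rows)

# Synthetic frames: a Gaussian blob shifted by a small sub-pixel amount,
# mimicking a slowly moving player blob.
yy, xx = np.mgrid[0:40, 0:40]
def blob(cx, cy):
    return np.exp(-((xx - cx) ** 2 + (yy - cy) ** 2) / (2 * 4.0 ** 2))

I1 = blob(20.0, 20.0)
I2 = blob(20.3, 20.2)    # true motion: u = 0.3, v = 0.2
u, v = lucas_kanade_global(I1, I2)
```

For large displacements this linearization breaks down, which is exactly why pyramid-based and improved variants such as the one described above are needed.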
Rapid progress is being made in the field of robotics, in both the educational
and industrial automation sectors. Robotics education in particular is benefiting from
technological advances that provide more learning opportunities, and in the automotive
sector there is growing demand to automate daily human activities with robots. Given this
advancement and demand, realizing a popular computer game in hardware can help students
learn and acquire skills in robotics. A game such as Pacman offers challenges on both the
software and hardware fronts: in software, developing algorithms for a robot to escape a
pool of attacking robots and for multiple ghost robots to attack the Pacman; in hardware,
integrating the various systems needed to realize the game. This project aims to
demonstrate the Pacman game in the real world as well as in simulation. For simulation,
Player/Stage is used to develop single-client and multi-client architectures. The
multi-client architecture in Player/Stage uses one global simulation proxy to which all
the robot models connect, reducing the overhead of managing multiple robot proxies,
whereas the single-client architecture allows only two robot models to connect to the
simulation proxy. The multi-client approach also offers the flexibility to add sensors on
each port, to be used distinctly by the client attached to the respective robot. The
robots, named Pacman and Ghosts, try to escape and attack respectively. A network camera
is used to detect the global positions of the robots, and the data is shared through
inter-process communication.
In Content-Based Image Retrieval (CBIR) systems, the visual contents of the
images in the database are extracted and represented by multi-dimensional feature
vectors. A well-known class of CBIR system retrieves images by an unsupervised method,
known as cluster-based image retrieval. To enhance the performance and retrieval rate
of a CBIR system, we fuse the visual contents of an image. Recently, we developed two
cluster-based CBIR systems that fuse the scores of two visual contents of an image. In
this paper, we analyze the performance of the two proposed CBIR systems at different
levels of precision, using images of varying sizes and resolutions, and compare them
with two existing CBIR systems, UFM and CLUE. Experimentally, we find that the proposed
systems outperform the two existing systems, and that one of the proposed systems also
performs comparatively better at every image resolution.
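Score fusion of two visual contents, as mentioned above, can be sketched generically: normalize each score list to a common range and combine them with a weighted sum before ranking. This is a minimal illustration under assumed names (colour and texture scores, min-max normalization, equal weights), not the scoring scheme of the proposed systems.

```python
def minmax(scores):
    """Scale a score list to [0, 1] so two feature channels are comparable."""
    lo, hi = min(scores), max(scores)
    return [(s - lo) / (hi - lo) if hi > lo else 0.0 for s in scores]

def fuse_and_rank(color_scores, texture_scores, alpha=0.5):
    """Fuse two per-image similarity score lists by a weighted sum of their
    normalized values, then return the fused scores and the ranking."""
    fused = [alpha * c + (1 - alpha) * t
             for c, t in zip(minmax(color_scores), minmax(texture_scores))]
    order = sorted(range(len(fused)), key=lambda i: fused[i], reverse=True)
    return fused, order

# Three database images scored against a query on two visual contents.
fused, order = fuse_and_rank([0.9, 0.1, 0.5], [0.8, 0.2, 0.4])
```

The weight alpha lets the system trade off how much each visual content contributes to the final retrieval order.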
Information systems and networks are subject to electronic attacks. When
network attacks hit, organizations are thrown into crisis mode: from the IT department to
the call centers, the board room, and beyond, everyone is at risk until the situation is
under control. Traditional defences against these threats (e.g. firewalls, antivirus
software, password protection) do not provide complete security, which encourages
researchers to develop Intrusion Detection Systems capable of detecting and responding to
such events. This review paper presents a comprehensive study of Genetic Algorithm (GA)
based Intrusion Detection Systems (IDSs). It provides a brief overview of rule-based IDSs,
elaborates on the implementation issues of Genetic Algorithms, and presents a comparative
analysis of existing studies.
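In GA-based IDSs, detection rules are typically encoded as bit-strings and evolved by selection, crossover, and mutation. The toy sketch below shows only that generic GA machinery evolving bit-strings toward a fixed target pattern; the encoding, fitness function, target, and all parameters are illustrative assumptions, not any surveyed system.

```python
import random

def evolve(target, pop_size=40, gens=200, mut=0.02, seed=1):
    """Toy GA: evolve bit-strings toward a target 'rule' using tournament
    selection, one-point crossover, per-bit mutation, and elitism."""
    rng = random.Random(seed)
    n = len(target)
    fitness = lambda ind: sum(a == b for a, b in zip(ind, target))
    pop = [[rng.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(gens):
        best = max(pop, key=fitness)
        if fitness(best) == n:           # perfect match found
            break
        def pick():                      # tournament selection of size 2
            a, b = rng.choice(pop), rng.choice(pop)
            return a if fitness(a) >= fitness(b) else b
        nxt = [best]                     # elitism: keep the best individual
        while len(nxt) < pop_size:
            p1, p2 = pick(), pick()
            cut = rng.randrange(1, n)    # one-point crossover
            child = p1[:cut] + p2[cut:]
            child = [b ^ (rng.random() < mut) for b in child]  # mutation
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

target = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1]
best = evolve(target)
```

In a real IDS the fitness would instead score how well a rule separates attack traffic from normal traffic in an audit dataset.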
Clustering is the step-by-step process of grouping objects so that the attribute
values of the objects within a group are nearly similar. A cluster is thus a collection of
objects with nearly the same attribute values: an object in a cluster is similar to the
other objects in the same cluster but different from the objects of other clusters.
Clustering is used in a wide range of applications such as pattern recognition, image
processing, data analysis, and machine learning. Nowadays, more attention is being paid to
categorical data than to numerical data, where the range of a numerical attribute is
organized into classes such as small, medium, and high. A wide range of algorithms exists
for clustering categorical data. Our approach enhances the well-known k-modes clustering
algorithm to improve its accuracy; we propose a new approach named "High Accuracy
Clustering Algorithm for Categorical Datasets".
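For reference, the baseline k-modes algorithm that the proposal enhances alternates two steps: assign each record to the nearest mode under simple-matching (Hamming) dissimilarity, then recompute each mode attribute-wise as the most frequent category. This is a minimal sketch of standard k-modes with fixed initial modes, not the proposed high-accuracy variant.

```python
from collections import Counter

def hamming(a, b):
    """Simple matching dissimilarity: count of attributes that differ."""
    return sum(x != y for x, y in zip(a, b))

def k_modes(data, init_modes, iters=10):
    """Minimal k-modes: nearest-mode assignment, then per-attribute
    most-frequent-category mode update, repeated for a few iterations."""
    modes = [list(m) for m in init_modes]
    labels = [0] * len(data)
    for _ in range(iters):
        labels = [min(range(len(modes)), key=lambda k: hamming(rec, modes[k]))
                  for rec in data]
        for k in range(len(modes)):
            members = [rec for rec, l in zip(data, labels) if l == k]
            if members:
                modes[k] = [Counter(col).most_common(1)[0][0]
                            for col in zip(*members)]
    return labels, modes

# Toy categorical records with two obvious groups.
data = [("red", "small", "round"), ("red", "small", "oval"),
        ("blue", "large", "square"), ("blue", "large", "round")]
labels, modes = k_modes(data, init_modes=[data[0], data[2]])
```

Accuracy improvements to k-modes typically target exactly these two steps: a better dissimilarity measure or a better mode-initialization strategy.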
A brain tumor is a malformed growth of cells within the brain, which may be
cancerous or non-cancerous; the term 'malformed' indicates the existence of the tumor. A
tumor may be benign or malignant, and medical support is needed for further classification.
Brain tumors must be detected, diagnosed, and evaluated at the earliest stage, since the
medical problems become grave if a tumor is detected late. Of the various technologies
available, MRI is the preferred one for the diagnosis and evaluation of brain tumors. The
current work presents various clustering techniques employed to detect brain tumors,
classifying images as normal or malformed (tumor detected). The algorithm involves
preprocessing, segmentation, feature extraction, and classification of MR brain images;
the final confirmatory step specifies the tumor area using a region-of-interest technique.
A proxy signature scheme enables a proxy signer to sign a message on behalf of
the original signer. In this paper, we propose an ECDLP-based solution for the scheme of
Chen et al. [1]. We describe an efficient and secure proxy multi-signature scheme that
satisfies all the proxy requirements and requires only elliptic curve multiplication and
elliptic curve addition, which incur less computational overhead than modular
exponentiation. Our scheme also withstands original-signer forgery and public key
substitution attacks.
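The two primitives the scheme relies on, elliptic curve point addition and scalar multiplication, can be sketched over a small prime field. This is a generic textbook toy (curve y² = x³ + 2x + 2 over F₁₇ with generator G = (5, 1) of order 19), not the curve or protocol of the proposed scheme, and it is wholly unsuitable for real cryptography.

```python
def inv_mod(a, p):
    """Modular inverse via Fermat's little theorem (p prime)."""
    return pow(a, p - 2, p)

def ec_add(P, Q, a, p):
    """Add two points on y^2 = x^3 + a*x + b over F_p (None = identity)."""
    if P is None:
        return Q
    if Q is None:
        return P
    if P[0] == Q[0] and (P[1] + Q[1]) % p == 0:
        return None                                    # P + (-P) = identity
    if P == Q:                                         # tangent (doubling)
        lam = (3 * P[0] ** 2 + a) * inv_mod(2 * P[1], p) % p
    else:                                              # chord
        lam = (Q[1] - P[1]) * inv_mod(Q[0] - P[0], p) % p
    x = (lam * lam - P[0] - Q[0]) % p
    y = (lam * (P[0] - x) - P[1]) % p
    return (x, y)

def ec_mul(k, P, a, p):
    """Scalar multiplication k*P by double-and-add."""
    R = None
    while k:
        if k & 1:
            R = ec_add(R, P, a, p)
        P = ec_add(P, P, a, p)
        k >>= 1
    return R
```

The efficiency claim in the abstract rests on such additions and doublings being much cheaper than the modular exponentiations of comparable DLP-based schemes.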
Watermarking has been proposed as a method to enhance data security. Text
watermarking requires extreme care when embedding additional data within images,
because the additional information must not affect the image quality. Digital watermarking
is a method by which we can authenticate images, videos, and even text: adding a text or
image watermark to a photo or animated image protects the copyright and prevents
unauthorized use. Watermarking serves not only for authentication, but also to protect
documents against malicious attempts to alter them or to claim their rights. The scheme
presented here hides the watermark in a way that does not affect image quality: this paper
proposes a method of hiding data using the LSB (least significant bit) replacement
technique.
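LSB replacement works by overwriting only the least significant bit of each pixel byte, which changes each pixel value by at most 1 and so leaves the visible image essentially untouched. A minimal sketch of embedding and extraction on a flat list of pixel bytes (the paper's exact embedding order and any key are not reproduced here):

```python
def embed_bits(pixels, bits):
    """Replace the least significant bit of each leading pixel byte with one
    watermark bit; all higher bits (the visible image) are untouched."""
    return [(p & ~1) | b for p, b in zip(pixels, bits)] + pixels[len(bits):]

def extract_bits(pixels, n):
    """Read the watermark back from the low bit of the first n pixels."""
    return [p & 1 for p in pixels[:n]]

pixels = [120, 121, 200, 35, 68, 99, 14, 250]
mark = [1, 0, 1, 1, 0, 1]
stego = embed_bits(pixels, mark)
```

Because each stego pixel differs from the original by at most 1 gray level, the embedded data is imperceptible, though LSB schemes are fragile against recompression and simple filtering.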
Today, across the various media used for data transmission and storage, our
sensitive data is not secure with the third parties we rely on. Cryptography plays an
important role in securing our data from malicious attack. This paper presents a partial
image encryption scheme based on bit-plane permutation using the Peter de Jong chaotic map,
for secure image transmission and storage. The proposed partial image encryption is a
raw-data encryption method in which the bits of some bit-planes are shuffled among the
other bit-planes: using the chaotic behavior of the Peter de Jong map, the positions of all
the bit-planes are permuted. The results of several experiments, correlation analysis, and
sensitivity tests show that the proposed scheme provides an efficient and secure way to
encrypt and decrypt images in real time.
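The mechanism can be sketched as follows: iterate the de Jong map from a secret initial state, rank the resulting chaotic values to obtain a permutation of the eight bit-planes, and rearrange each pixel's bits accordingly; decryption applies the inverse permutation. The map parameters and initial state below are illustrative stand-ins for a key, not the paper's values, and this sketch omits the "partial" selection of bit-planes.

```python
import math

def de_jong_sequence(n, x=0.1, y=0.1, a=1.4, b=-2.3, c=2.4, d=-2.1):
    """Iterate the Peter de Jong map and collect the x-values.
    The initial state and parameters act as the secret key."""
    out = []
    for _ in range(n):
        x, y = (math.sin(a * y) - math.cos(b * x),
                math.sin(c * x) - math.cos(d * y))
        out.append(x)
    return out

def plane_permutation(key_seq):
    """Rank eight chaotic values to get a permutation of the 8 bit-planes."""
    return sorted(range(8), key=lambda i: key_seq[i])

def shuffle_planes(pixels, perm):
    """Move bit-plane i of every 8-bit pixel to plane perm[i]."""
    out = []
    for p in pixels:
        q = 0
        for i in range(8):
            q |= ((p >> i) & 1) << perm[i]
        out.append(q)
    return out

def inverse(perm):
    inv = [0] * 8
    for i, j in enumerate(perm):
        inv[j] = i
    return inv

pixels = [0, 255, 37, 142, 200, 91]
perm = plane_permutation(de_jong_sequence(8))
cipher = shuffle_planes(pixels, perm)
plain = shuffle_planes(cipher, inverse(perm))
```

Because the permutation is derived deterministically from the key, the receiver regenerates the same sequence and inverts the shuffle exactly.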
This paper presents a survey of dependency analysis for Service-Oriented
Architecture (SOA) based systems. SOA raises new aspects of dependency analysis because of
its distinct architectural style and programming paradigm. The paper surveys previous work
on dependency analysis of service-oriented systems and shows the strengths and weaknesses
of the current approaches and tools available for the task in the context of SOA. The main
motivation of this work is to summarize recent approaches in this field, identify the
major issues and challenges in dependency analysis of SOA-based systems, and motivate
further research on the topic.
This paper proposes a novel implementation of a soft-core system using the
MicroBlaze processor on a Virtex-5 FPGA. Until now, hard-core processors have typically
been used as FPGA processor cores; a hard core is a fixed gate-level IP function within the
FPGA fabric. The proposed processor is instead a soft-core processor: a microprocessor
fully described in software, usually in an HDL, and implemented using the EDK tool. The
developed system, built around a MicroBlaze processor, combines hardware and software:
with it, a user can control and communicate with all the peripherals on the supported
board through the Xilinx platform for embedded system development. The soft-core processor
system, with peripherals such as a UART interface, SPI flash interface, and SRAM
interface, is designed using the Xilinx Embedded Development Kit (EDK) tools.
This article presents a simple algorithm to construct a minimum spanning tree
and to find the shortest path between a pair of vertices in a graph. Our presentation
includes a proof of termination; a complexity analysis and simulation results are also
included.
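For context, a standard minimum spanning tree construction can be sketched with Prim's algorithm, which repeatedly adds the cheapest edge reaching a new vertex. This is the classical algorithm as a point of reference, not the simple algorithm proposed in the article.

```python
import heapq

def prim_mst_weight(n, edges, start=0):
    """Prim's algorithm: grow the MST from `start`, always taking the
    cheapest edge into an unvisited vertex. Returns the total MST weight."""
    adj = [[] for _ in range(n)]
    for u, v, w in edges:
        adj[u].append((w, v))
        adj[v].append((w, u))
    visited = [False] * n
    heap = [(0, start)]       # (edge weight into vertex, vertex)
    total = 0
    while heap:
        w, u = heapq.heappop(heap)
        if visited[u]:
            continue
        visited[u] = True
        total += w
        for wv, v in adj[u]:
            if not visited[v]:
                heapq.heappush(heap, (wv, v))
    return total

# 4 vertices; the MST uses edges 0-2 (1), 1-2 (2), 1-3 (5).
edges = [(0, 1, 4), (0, 2, 1), (1, 2, 2), (1, 3, 5), (2, 3, 8)]
```

With a binary heap this runs in O(E log V), the usual benchmark any simpler MST construction is measured against.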
WiMAX technology has reshaped the framework of broadband wireless Internet
service, bringing connectivity to unconnected or remote areas such as eastern South Africa
and rural regions of America and Asia. Full-duplex helpers, employed with a relay-station
selection and indexing method called Randomized Distributed Space-Time Coding (R-DSTC),
are used to expand the coverage area of a primary WiMAX station. The basic problem arises
at the cell edge, due to weather conditions (rain, fog), destructive interference from
multipath propagation in the communication channel, and interference created by other
users. It is impractical for the receiver station to decode the transmitted signal
successfully at the cell edge, which increases packet loss and retransmissions. WiMAX is
nevertheless an outstanding technology for improving the quality of Internet service,
offering services such as Voice over Internet Protocol, video conferencing, and multimedia
broadcast, where even a small delay in packet transmission can cause a large loss in
communication. Setting up another WiMAX station nearby is not a good alternative either:
although a mobile station can easily hand over to another base station when it receives a
stronger signal, installing base stations close together for a small number of customers
in rural areas is costly. In this review article, we present a scheme that uses the R-DSTC
technique to choose helpers (relay nodes) randomly, expanding the coverage area and
assisting the mobile station in communicating securely with the base station. In this
work, we use full-duplex helpers for better utilization of bandwidth.
Radio Frequency Identification (RFID) has become an emerging technology for
tracking and item identification, and depending on the application, various RFID
technologies can be used. The drawbacks of passive RFID technology, relating to tag
reading range and reliability in difficult environmental conditions, limit its performance
in real-life situations [1]. To improve reading range and reliability, we consider
implementing active backscattering tag technology. Software Defined Radio (SDR) technology
is used to build mobiles supporting multiple radio standards in 4G networks. The
restrictions of existing RFID and SDR technologies can be eliminated by developing and
implementing an SDR active backscattering tag compatible with the EPCglobal UHF Class 1
Generation 2 (Gen2) RFID standard. Such technology can serve many applications and
services.