CometCloud is a platform as a service (PaaS) that integrates public and private clouds to enable on-demand scaling. It uses autonomic cloudbursting and cloudbridging to dynamically scale applications. CometCloud was evaluated using two applications: VaR, which calculates financial risk and scales based on workload, and image registration for medical imaging, which scales within a specified budget. Results showed that CometCloud can dynamically scale applications across clouds with low overhead while maintaining performance according to specified policies.
1. 26 January 2013
Cloud Computing: principles and paradigms - Part III
10-COMETCLOUD: AN AUTONOMIC CLOUD ENGINE
HYUNJOO KIM and MANISH PARASHAR
Cloud Computing Principles and Paradigms
Presented by Majid Hajibaba
2. Outline
• Introduction
• Architecture overview
• Autonomic behavior of CometCloud
• Overview of CometCloud-based applications
• Implementation and Evaluation
• Future Research Directions
3. Introduction
• What
• Integrates public and private clouds
• Is a PaaS
• Why
• To enable on-demand scale-up, scale-down, and scale-out
• How
• Cloudbursting
• Cloudbridging
4. Architecture
5. Autonomic Cloudbursting
6. Motivations for Cloudbursting
• Load Dynamics
• The computational environment must dynamically grow (or shrink)
• In response to dynamic loads
• Accuracy of the Analytics
• The required accuracy of risk analytics
• To dynamically adapt to satisfy the accuracy requirements
• Collaboration of Different Groups
• Different groups run the same application with different dataset policies
• To satisfy their SLAs
• Economics
• Application tasks can have very heterogeneous and dynamic priorities
• To handle heterogeneous and dynamic provisioning and scheduling requirements
• Failures
• To manage failures without impacting application QoS
7. Autonomic Cloudbridging
(figure: deadline-based, budget-based, and workload-based policies drive cloudbridging into a virtually integrated working cloud)
8. Fault Tolerance
9. CometCloud-based Applications
• VaR
• Measures the risk level of portfolios of financial instruments
• The VaR calculation should be completed within a limited time
• Computational requirements can change significantly
• Uses autonomic cloudbursts
• Workload-based policy
• Image Registration
• Determines the mapping between two images
• Used in medical informatics
• Budget-based policy
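The Monte Carlo formulation of VaR referred to in the editor's notes can be sketched briefly. The model below assumes normally distributed one-day portfolio returns with illustrative parameter values; the deck does not specify a pricing model, so treat this as a hedged sketch rather than the application's actual analytics:

```python
import random

def monte_carlo_var(portfolio_value, mu, sigma,
                    confidence=0.99, n_scenarios=100_000, seed=42):
    """Estimate one-period Value-at-Risk by simulating portfolio P&L.

    Assumes returns ~ Normal(mu, sigma), a simplification for illustration.
    """
    rng = random.Random(seed)
    # Simulate profit/loss for each scenario and sort ascending (worst first).
    pnl = sorted(portfolio_value * rng.gauss(mu, sigma)
                 for _ in range(n_scenarios))
    # VaR is the loss at the (1 - confidence) quantile of the P&L distribution.
    idx = int((1 - confidence) * n_scenarios)
    return -pnl[idx]

var_99 = monte_carlo_var(portfolio_value=1_000_000, mu=0.0005, sigma=0.02)
print(f"99% one-day VaR: ${var_99:,.0f}")
```

Because accuracy grows with `n_scenarios`, the "Accuracy of the Analytics" motivation maps directly onto compute demand: tighter accuracy means more scenarios, which is what triggers a cloudburst under a workload-based policy.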
10. Application Runtime on EC2
Communication Overhead
• All workers were unsecured
• Each worker ran on a different instance
(figure a: VaR; figure b: Image Registration)
11. Autonomic Cloudburst Behaviors
VaR using the Workload-Based Policy
(figure a: workload-specific policy; figure b: workload-bounded policy)
12. Autonomic Cloudburst Behaviors
Image Registration using the Budget-Based Policy
13. With/Without Scheduling Agent
14. END
CometCloud: An Autonomic Cloud Engine
Editor's Notes
CometCloud is an autonomic computing engine (framework) for cloud and grid environments that realizes a virtual computational cloud with resizable computing capability, integrating local computational environments and public cloud services on demand. Specifically, CometCloud enables policy-based autonomic cloudbridging and cloudbursting. Autonomic cloudbridging enables on-the-fly integration of local computational environments (data centers, grids) and public cloud services (such as Amazon EC2 [10] and Eucalyptus [20]). Autonomic cloudbursting enables dynamic application scale-out to address dynamic workloads, spikes in demand, and other extreme requirements.
CometCloud is based on a peer-to-peer substrate that can span enterprise data centers, grids, and clouds. Resources can be assimilated (absorbed, integrated) on demand and on the fly into its peer-to-peer overlay to provide services to applications. CometCloud is composed of a programming layer, a service layer, and an infrastructure layer.

Infrastructure layer: uses the Chord self-organizing overlay and the Squid information discovery and content-based routing substrate built on top of Chord. The routing engine supports flexible content-based routing and complex querying using partial keywords, wildcards, or ranges. It guarantees that all peer nodes with data elements matching a query/message will be located. Nodes have different roles and, accordingly, different access privileges based on their credentials and capabilities. This layer also provides replication and load-balancing services, and it handles dynamic joins and leaves of nodes as well as node failures. Every node keeps a replica of its successor node's state, reflects changes to it, and notifies its predecessor.

Service layer: provides a range of services to support autonomics at the programming and application levels. This layer supports the Linda-like tuple space coordination model.

Programming layer: provides the basic framework for application development and management. It supports a range of paradigms including master/worker/BOT. Masters generate tasks and workers consume them. Scheduling and monitoring of tasks are supported by the application framework. The task consistency service handles lost tasks: even though replication is provided by the infrastructure layer, a task may be lost due to network congestion, and in that case, since there is no node failure, infrastructure-level replication may not be able to handle it.
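The Linda-like tuple space named for the service layer coordinates masters and workers through pattern-matched tuples. Below is a minimal single-process sketch of the two core primitives (`out` inserts a tuple, `in` blocks until a matching tuple can be consumed), with `None` acting as a wildcard field; this is an illustration of the coordination model, not CometCloud's actual Squid/Chord-backed implementation:

```python
import threading

class TupleSpace:
    """Minimal Linda-like coordination space (illustrative sketch)."""
    def __init__(self):
        self._tuples = []
        self._cond = threading.Condition()

    def out(self, tup):
        """Insert a tuple into the space and wake any blocked readers."""
        with self._cond:
            self._tuples.append(tup)
            self._cond.notify_all()

    def _match(self, template, tup):
        # A None field in the template matches anything.
        return len(template) == len(tup) and all(
            t is None or t == f for t, f in zip(template, tup))

    def in_(self, template):
        """Remove and return a matching tuple, blocking until one exists."""
        with self._cond:
            while True:
                for tup in self._tuples:
                    if self._match(template, tup):
                        self._tuples.remove(tup)
                        return tup
                self._cond.wait()

space = TupleSpace()
space.out(("task", 1, "compute VaR"))            # master inserts a task
print(space.in_(("task", None, None)))           # worker consumes any task
```

In the master/worker paradigm above, masters `out` task tuples and workers `in_` them by template, which is what lets workers join or leave the overlay without the master tracking them individually.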
This can be handled by the master, for example, by waiting for the result of each task for a predefined time interval and, if the result does not come back, regenerating the lost task.
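The timeout-and-regenerate scheme described above can be sketched as follows; `send` and `wait_result` are hypothetical stand-ins for the tuple-space insert and result-collection operations, not CometCloud APIs:

```python
def run_tasks(tasks, send, wait_result, timeout=5.0, max_retries=3):
    """Master-side handling of lost tasks: resend a task whose result does
    not arrive within `timeout` seconds, up to `max_retries` attempts."""
    results = {}
    for task_id in tasks:
        for _attempt in range(max_retries):
            send(task_id)                          # (re)insert the task
            result = wait_result(task_id, timeout)
            if result is not None:                 # result arrived in time
                results[task_id] = result
                break
        else:                                      # every attempt timed out
            raise RuntimeError(f"task {task_id} lost after {max_retries} attempts")
    return results

# Simulated worker channel that loses the first attempt of task 2.
attempts = {}
def send(tid):
    attempts[tid] = attempts.get(tid, 0) + 1
def wait_result(tid, timeout):
    if tid == 2 and attempts[tid] == 1:
        return None                                # lost; master regenerates it
    return tid * 10

print(run_tasks([1, 2, 3], send, wait_result))     # -> {1: 10, 2: 20, 3: 30}
```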
The goal of autonomic cloudbursts is to seamlessly and securely integrate private enterprise clouds and data centers with public utility clouds on demand, providing the abstraction of resizable computing capacity. CometCloud considers three types of clouds based on perceived security/trust and assigns capabilities accordingly. The first is a highly trusted, robust, and secure cloud, usually composed of trusted/secure nodes within an enterprise, which typically hosts masters and other key roles (management, scheduling, monitoring). The second is a cloud composed of nodes whose credentials allow them to join the tuple space, that is, the cloud of secure workers. The third consists of casual, unsecured workers; these are not part of the space but can access it through a proxy and a request handler to obtain (possibly encrypted) work units. If the space needs to scale up to store a dynamically growing workload, in addition to requiring more computing capability, autonomic cloudbursts target secure workers; if only more computing capability is required, unsecured workers can be added.
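The targeting rule at the end of that paragraph fits in a few lines; the function name and string labels below are purely illustrative:

```python
def cloudburst_target(space_growing, need_more_compute):
    """Sketch of the capability assignment described above: only
    secure workers may host part of the tuple space, while pure
    compute demand can also be served by unsecured workers that
    reach the space through the proxy."""
    if space_growing:
        return "secure-workers"
    if need_more_compute:
        return "unsecured-workers"
    return None  # no scaling needed
```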
Load Dynamics. Application workloads can vary significantly, both in the number of application tasks and in the computational requirements of each task. The computational environment must dynamically grow (or shrink) in response to these dynamics while still meeting strict deadlines.

Accuracy of the Analytics. The required accuracy of risk analytics depends on a number of highly dynamic market parameters and has a direct impact on computational demand, for example through the number of scenarios in the Monte Carlo VaR formulation. The computational environment must adapt dynamically to satisfy the accuracy requirements while still meeting strict deadlines.

Collaboration of Different Groups. Different groups can run the same application with different datasets and policies. Here, a policy means a user's SLA bounded by conditions such as time frame, budget, and economic model. As collaborating groups join or leave the work, the computational environment must grow or shrink to satisfy their SLAs.

Economics. Application tasks can have very heterogeneous and dynamic priorities and must be assigned resources and scheduled accordingly. Budgets and economic models can be used to dynamically provision computational resources based on the priority and criticality of each task; for example, tasks can be assigned budgets and then assigned resources based on those budgets. The computational environment must handle heterogeneous and dynamic provisioning and scheduling requirements.

Failures. Because of the strict deadlines involved, failures can be disastrous. The computation must tolerate failures without compromising application quality of service, including deadlines and accuracy.
Autonomic cloudbridging connects CometCloud to a virtual cloud, consisting of public clouds, data centers, and grids, driven by the dynamic needs of the application. Hence, the types of clouds used, the number of nodes in each cloud, and the resource types of those nodes should be decided according to the changing state of the clouds and the application's resource requirements. A scheduling agent manages autonomic cloudbridging and guarantees QoS within user policies; an autonomic cloudburst is realized by changing resource provisioning so that the defined policy is not violated. We define three types of policies:

Deadline-Based. When an application needs to be completed as soon as possible, assuming an adequate budget, the maximum required workers are allocated to the job.

Budget-Based. When a budget is enforced on the application, the number of workers allocated must ensure that the budget is not violated.

Workload-Based. When the application workload changes, the number of workers explicitly defined by the application is allocated or released.
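A minimal sketch of how a scheduling agent might turn these three policies into a worker count, assuming simple per-worker-hour pricing and a planned runtime. All parameter names are illustrative assumptions, not CometCloud's API:

```python
def workers_to_allocate(policy, *, max_workers, budget=None,
                        cost_per_worker_hour=None, hours=None,
                        requested=None):
    """Map a provisioning policy to a worker count.

    deadline: finish as soon as possible, use every available worker
    budget:   never exceed `budget` over the planned `hours` of runtime
    workload: allocate exactly what the application asked for
    """
    if policy == "deadline":
        return max_workers
    if policy == "budget":
        # largest pool whose total cost stays within the budget
        affordable = int(budget // (cost_per_worker_hour * hours))
        return min(affordable, max_workers)
    if policy == "workload":
        return min(requested, max_workers)
    raise ValueError(f"unknown policy: {policy}")
```

Under the budget-based policy, for example, a $50 budget at $0.50 per worker-hour over a 10-hour run affords at most 10 workers.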
CometCloud supports fault tolerance in two ways: in the infrastructure layer and in the programming layer. The replication substrate in the infrastructure layer lets each node keep the same state as its successor, specifically the coordination space and overlay information. Every node has a local space in the service layer and a replica space in the infrastructure layer. When a tuple is inserted into or extracted from the local space, the node notifies its predecessor of the update, and the predecessor updates its replica space; hence every node keeps an up-to-date replica of its successor's local space. When a node fails, another node in the overlay detects the failure and notifies the predecessor of the failed node. The predecessor then merges its replica space into its local space, which recovers all the tuples from the failed node, and creates a new replica of the local space of its new successor. To address packet loss, in the programming layer the master checks the space periodically and regenerates lost tasks.
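The recovery step can be sketched over a hypothetical ring of nodes, where node i's replica mirrors node i+1's local space. This `Node` class is an illustration of the protocol described above, not CometCloud's implementation:

```python
class Node:
    def __init__(self, name):
        self.name = name
        self.local_space = {}  # tuples held locally (service layer)
        self.replica = {}      # replica of the successor's local space

def recover_from_failure(nodes, failed_idx):
    """Recover from the failure of nodes[failed_idx] in ring order:
    the predecessor merges its replica (a copy of the failed node's
    local space) into its own space, then re-replicates its new
    successor to restore the invariant."""
    pred = nodes[(failed_idx - 1) % len(nodes)]
    new_succ = nodes[(failed_idx + 1) % len(nodes)]
    # merging the replica recovers every tuple the failed node held
    pred.local_space.update(pred.replica)
    # rebuild the replica for the predecessor's new successor
    pred.replica = dict(new_succ.local_space)
    del nodes[failed_idx]
```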
CometCloud was evaluated with two applications: VaR, for measuring the risk level of a firm's holdings, and image registration, for medical informatics. A VaR calculation must be completed within a limited time, and its computational requirements can change significantly; for VaR we therefore focus on how autonomic cloudbursts handle dynamically changing workloads. Image registration is the process of determining the linear/nonlinear mapping T between two images of the same object, or of similar objects, acquired at different times. Note that the data size for image registration is much larger than that for VaR. Because image registration usually needs to be completed as soon as possible within a budget limit, for it we focus on how CometCloud works under the budget-based policy.
[Figure: Total application runtime of CometCloud-based (a) VaR and (b) image registration on Amazon EC2.]

If the computed data size is large and each task needs more time to complete, then workers access the proxy less frequently and the communication overhead at the proxy decreases.
When the application workload increases (or decreases), a predefined number of workers is added (or released) based on that workload. We define two such policies: workload-specific and workload-bounded. Under workload-specific, the user specifies the workload level at which nodes are allocated or released. Under workload-bounded, whenever the workload increases by more than a specified threshold, a predefined number of workers is added; similarly, if the workload decreases by more than the threshold, that number of workers is released.
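The workload-bounded policy amounts to a simple threshold controller. The function below is a sketch under assumed names (`threshold`, `step`, and the convention of tracking the workload level at the last adjustment), not CometCloud's actual scheduler code:

```python
def workload_bounded_step(current_workers, last_workload, workload,
                          threshold, step):
    """One control step of the workload-bounded policy: adjust the
    worker pool by a fixed `step` whenever the workload has moved by
    more than `threshold` since the last adjustment.

    Returns (new_worker_count, workload_level_at_last_adjustment)."""
    delta = workload - last_workload
    if delta > threshold:
        return current_workers + step, workload
    if delta < -threshold:
        return max(0, current_workers - step), workload
    return current_workers, last_workload  # within the band: no change
```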
We set the maximum number of available nodes to 25 for TW (a private data center) and 100 for EC2. Costs for the TW data center included hardware investment, software, electricity, and so on. Initially, 10 nodes each were allocated from TW and EC2, and the experiment comprised 500 tasks in total.