This is a PowerPoint presentation delivered by Prof John Morrison (UCC) on 9 December 2016 at the IC4 and Host in Ireland Workshop: Data Centres in Ireland.
ABSTRACT: Cloud computing is an emerging computing paradigm for delivering computing services over the Internet. Most chemical companies are still at an early stage in their adoption and usage of cloud computing. Their migration to cloud computing is not a question of "if", but "when." Monitoring assets in remote locations, round-the-clock surveillance, and remote access to data are some of the key factors prompting the chemical industry to rethink the adoption of cloud technology. This paper presents the benefits and challenges of adopting cloud computing in the chemical industry.
KEY WORDS: cloud computing, chemical industry
Implementing K-Out-Of-N Computing For Fault Tolerant Processing In Mobile and... (IJERA Editor)
Despite the advances in hardware for hand-held mobile devices, resource-intensive applications (e.g., video and image storage and processing, or map-reduce type) still remain off bounds since they require large computation and storage capabilities. Recent research has attempted to address these issues by employing remote servers, such as clouds and peer mobile devices. For mobile devices deployed in dynamic networks (i.e., with frequent topology changes because of node failure/unavailability and mobility, as in a mobile cloud), however, challenges of reliability and energy efficiency remain largely unaddressed. To the best of our knowledge, we are the first to address these challenges in an integrated manner for both data storage and processing in mobile cloud, an approach we call k-out-of-n computing. In our solution, mobile devices successfully retrieve or process data, in the most energy-efficient way, as long as k out of n remote servers are accessible. Through a real system implementation we prove the feasibility of our approach. Extensive simulations demonstrate the fault tolerance and energy efficiency performance of our framework in larger-scale networks.
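The k-out-of-n retrieval rule described in this abstract can be sketched in a few lines; the function names and the `fetch` callback interface here are illustrative assumptions, not taken from the paper:

```python
def retrieve(servers, k, fetch):
    """Query the n servers (assumed to be pre-ordered, e.g. by energy cost)
    and succeed as soon as any k of them return their data fragment.

    `fetch(server)` returns the server's fragment, or None if unreachable."""
    fragments = []
    for s in servers:
        frag = fetch(s)
        if frag is not None:           # server reachable, fragment recovered
            fragments.append(frag)
        if len(fragments) == k:        # k-of-n threshold met: success
            return fragments
    return None                        # fewer than k servers reachable
```

With n = 5 and k = 3, the retrieval still succeeds when any two servers are down, which is the fault-tolerance property the abstract claims.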
Advanced infrastructure for pan-European collaborative engineering - E-COLLEG (Xavier Warzee)
This article presents challenges, visions, and solutions for a true pan-European collaborative engineering infrastructure that is a target of the IST project E-COLLEG. The consortium aims at the definition of a transparent infrastructure that will enable engineers from various domains to collaborate during the design of complex heterogeneous systems.
The existence of countless proprietary file formats has made the exchange of 3D CAD data a significant problem since the beginning of 3D CAD modeling. CAD applications and methods using digital data are constantly changing, which creates the need for a solution to share validated and accurately translated data; hence the birth of STEP AP242.
Design of an IT Capstone Subject - Cloud Robotics (ITIIIndustries)
This paper describes the curriculum of the three-year IT undergraduate program at La Trobe University and the faculty requirements in designing a capstone subject, followed by the ACM's recommended IT curriculum covering the five pillars of the IT discipline. Cloud robotics, a broad multidisciplinary research area requiring expertise in all five pillars together with mechatronics, is an ideal candidate for offering capstone experiences to IT students. Therefore, in this paper, we propose a long-term master project to develop a cloud robotics testbed, with many capstone sub-projects spanning the five IT pillars, to meet the objectives of the capstone experience. The paper also describes the design and implementation of the testbed and proposes potential capstone projects for students with different interests.
Artificial Intelligence (AI) is nowadays used frequently in many application domains. Although sometimes considered only an afterthought in the public discussion compared to domains such as health, transportation, and manufacturing, the media domain is also being transformed by AI, which enables new opportunities ranging from content creation (e.g. "robojournalism" and individualised content) to optimisation of content production and distribution. Underlying many of these new opportunities is the use of AI, in its current reincarnation as deep learning, for understanding audio-visual content by extracting structured information from that unstructured data.
This talk therefore discusses the current understanding and trends of AI: what can be done, what is being done, and what challenges remain in the use of AI, especially in the context of media applications and services. The talk focuses less on the details and fundamentals of deep learning and more on a practical perspective: how recent advances in this field can be utilised in use cases in the media domain, especially with respect to audio-visual content and the broadcasting domain.
A computer cluster is a group of connected computers working together so closely that in many respects they form a single computer. The components of a cluster are commonly, but not always, connected to each other through fast local area networks. Clusters are typically deployed to improve performance and/or availability over that provided by a single computer, while usually being far more cost-effective than single computers of comparable speed or availability. The major objective within the cluster is to utilise a group of processing nodes so as to complete the assigned job in a minimum amount of time by working cooperatively. The main strategy to achieve this objective is transferring the extra load from busy nodes to idle nodes. In this paper we present the design of a cluster-based framework. The cluster implementation involves the design of a server named MCLUSTER that manages the configuring and resetting of the cluster. The framework handles the generation of application mobile code and its distribution to appropriate client nodes. The client node receives and executes the mobile code that defines the distributed job submitted by the MCLUSTER server and replies with the results. Trupti Bhor | Yogeshchandra Puranik, "An Introduction to Cluster Computing", published in International Journal of Trend in Scientific Research and Development (IJTSRD), ISSN: 2456-6470, Volume 5, Issue 4, June 2021. URL: https://www.ijtsrd.com/papers/ijtsrd42561.pdf Paper URL: https://www.ijtsrd.com/engineering/computer-engineering/42561/an-introduction-to-cluster-computing/trupti-bhor
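The busy-to-idle load-transfer strategy described in this abstract can be illustrated with a minimal greedy sketch; the fixed threshold and the one-unit-at-a-time transfer are illustrative assumptions, not the paper's actual policy:

```python
def rebalance(loads, threshold):
    """Greedy load-transfer sketch: repeatedly move one unit of load from
    the busiest node to the idlest node until no node exceeds the
    threshold (or there is no idle capacity left to absorb the load)."""
    loads = list(loads)
    while max(loads) > threshold:
        busy = loads.index(max(loads))
        idle = loads.index(min(loads))
        if loads[idle] >= threshold:   # nowhere left to shed load
            break
        loads[busy] -= 1
        loads[idle] += 1
    return loads
```

For example, `rebalance([10, 2, 0], 5)` spreads the overload across the idle nodes while preserving the total amount of work.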
Our Infrastructure report covers M&A activity from January 2012 through June 2014; we've broken the period down into five half-year periods in order to provide meaningful insights into recent trends in this sector.
With a 2.4x median EV/S ratio for targets throughout the period, Infrastructure proves to be one of the most active and exciting sectors in technology industry M&A. Within the last two and a half years alone we have seen prolific acquirers such as Cisco, The Zayo Group and Dell spend in excess of $2 billion each on Infrastructure targets.
Bob Jones, PICSE & HNSciCloud Coordinator, was invited to present the PICSE results and the new HNSciCloud project in the "data infrastructure" session of the Open Science Conference on the 5th of April 2016.
Towards Enterprise Interoperability Service Utilities (Brian Elvesæter)
B. Elvesæter, F. Taglino, E. D. Grosso, G. Benguria and A. Capellini, "Towards Enterprise Interoperability Service Utilities", paper presentation at IWEI 2008, Munich, Germany, 18 September 2008.
CloudLightning - A Brief Overview presented by Prof John Morrison at the Fifth National Conference on Cloud Computing and Commerce (NC4 2016).
The presentation covered the project's funding and consortium, its specific challenge, typical IaaS cloud usage, the project's goals and ambitions, the CloudLightning architecture, beneficiaries, and the challenges ahead.
In this video, Prof. John Morrison from University College Cork describes the CloudLightning project. CloudLightning’s vision is a European economy that thrives and leads the world in the provision and adoption of high performance cloud computing services. Funded by the European Commission’s Horizon 2020 Program, CloudLightning brings together eight project partners from five countries across Europe.
Learn more: http://cloudlightning.eu
Watch the video presentation: http://wp.me/p3RLHQ-fsb
RECAP at ETSI Experiential Network Intelligence (ENI) Meeting (RECAP Project)
This presentation was delivered by Johan Forsman (Tieto), Jörg Domaschka (UULM) and Paolo Casari (IMDEA Networks) at the ETSI Experiential Network Intelligence (ENI) Meeting in Warsaw, Poland, on April 12th, 2019. The ETSI Experiential Networked Intelligence Industry Specification Group (ENI ISG) works on defining a Cognitive Network Management architecture that uses Artificial Intelligence (AI) techniques and context-aware policies to adjust offered services based on changes in user needs, environmental conditions and business goals. The intention is that the use of AI techniques in the network management system should solve some of the problems of future network deployment and operations. For more information, see https://www.etsi.org/technologies/experiential-networked-intelligence.
Creating a Step Change in Cyber Security | ISCF DSbD Business-led Demonstrato... (KTN)
John Goodacre, the Digital Security by Design (DSbD) Challenge Director at Innovate UK, presents the background to the ISCF DSbD programme, which aims to "Create a Step Change in Cyber Security".
ATMOSPHERE was invited to speak at the Think Milano event on 6th June, from 14.30 to 17.30, joining a panel discussion called "L'infrastruttura cloud ready protagonista del futuro" ("The cloud-ready infrastructure as protagonist of the future") on how cloud infrastructures matter to different market sectors.
IJRET: International Journal of Research in Engineering and Technology is an international peer-reviewed online journal published by eSAT Publishing House for the enhancement of research in various disciplines of Engineering and Technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching and research in the fields of Engineering and Technology. We bring together scientists, academics, field engineers, scholars and students of related fields of Engineering and Technology.
Interventions for scientific and enterprise applications based on high perfor... (eSAT Journals)
Abstract: High performance computing (HPC) refers to the practice of aggregating computing power in a way that delivers much higher performance than one could get out of a typical desktop computer, in order to solve large problems in science, engineering or business. Cloud computing, in turn, is a model for enabling convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction. The scope of HPC covers scientific research and engineering as well as the design of enterprise applications. Enterprise applications are data-centric, user-friendly, complex and scalable, and often require software packages, decision support systems and warehouses, while scientific applications need a huge number of computers for executing large-scale experiments. These needs can be addressed by using high-performance and cloud computing. The goal of HPC is to reduce execution time and accommodate larger and more complicated problems. Cloud computing, meanwhile, provides scientists with a completely new model of utilising the computing infrastructure: computing resources, storage resources, as well as applications can be dynamically provisioned on a pay-per-use basis and released when they are no longer needed. This paper focuses on enabling and scaling computing systems to support the execution of scientific and business applications. Keywords: Scientific computing, enterprise applications, cloud computing, high performance computing
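The pay-per-use provisioning model this abstract contrasts with traditional HPC can be illustrated with a toy accounting sketch; the class and method names are hypothetical, not from the paper:

```python
import itertools

class CloudPool:
    """Toy sketch of on-demand provisioning: resources are acquired when
    needed and released when no longer needed, and cost accrues only
    while a resource is held (pay per use)."""

    def __init__(self, rate_per_step=1.0):
        self.rate = rate_per_step     # cost per resource per billing interval
        self.active = {}              # resource id -> intervals held
        self.cost = 0.0
        self._ids = itertools.count()

    def provision(self):
        rid = next(self._ids)
        self.active[rid] = 0
        return rid

    def tick(self):                   # one billing interval passes
        for rid in self.active:
            self.active[rid] += 1
        self.cost += self.rate * len(self.active)

    def release(self, rid):
        self.active.pop(rid)
```

Releasing a resource immediately stops its billing, which is the economic difference from a statically owned HPC cluster that costs the same whether busy or idle.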
Data Decentralisation: Efficiency, Privacy and Fair Monetisation (Angelo Corsaro)
A presentation given at the European H-Cloud Conference motivating decentralisation as a means to improve energy efficiency, privacy, and the opportunity to monetise your digital footprint.
Presentation of Eco-efficient Cloud Computing Framework for Higher Learning I... (rodrickmero)
Tanzanian Higher Learning Institutions (HLIs) are facing challenges in providing the necessary Information Technology (IT) support for education, research and development activities. Currently, HLIs use traditional computing (TC), which has proven uneconomical in terms of maintenance, software purchase costs, huge power consumption and staffing.
Cloud computing (CC) is the way forward for HLIs in solving these computing challenges. However, HLI policies regarding the security of critical data in a CC environment prevent adoption of CC services from existing vendors. The reliable and secure way is to establish and operate CC data centres dedicated to HLIs' critical data and services. Owning and operating traditional data centres is a challenge for HLIs because they consume huge amounts of power. Tanzania, like other developing countries, has a low level of electrification, while the need for electric power is increasing year after year. Considering energy-efficient approaches in data centre operation is therefore very important for reducing both operating costs and the carbon footprint on the environment.
Therefore, this thesis presents an eco-efficient cloud computing framework that integrates renewable and non-renewable power sources and free cooling to reduce carbon emissions and power consumption in HLI cloud data centres.
To develop the framework, we conducted a study in Tanzanian HLIs to explore the current situation and cloud computing requirements. Interviews, observation and document review were the data collection methods used. After analysing the results, we defined guidelines for developing the CC building blocks. We used the CloudSim toolkit and the NetBeans IDE to develop and simulate the eco-efficient framework.
In the end, the eco-efficient framework showed improvements in power consumption, efficiency and carbon emissions. Eco-efficient approaches therefore give Tanzanian HLIs a sustainable solution to their computing needs by significantly reducing operating costs, while ensuring environmental protection for the benefit of current and future generations.
Container Technologies and Transformational Value (Mihai Criveti)
The transformational value of container technologies: the business impact of Digital Transformation towards Cloud Native technologies.
A brief overview of the technology impact of containers, OpenShift and automation.
Talk delivered at Guide Share Europe Conference 2021: https://www.youtube.com/watch?v=1QunNECL26M
Towards a Lightweight Multi-Cloud DSL for Elastic and Transferable Cloud-nati... (Nane Kratzke)
Cloud-native applications are intentionally designed for the cloud in order to leverage cloud platform features like horizontal scaling and elasticity. In addition to classical (and very often static) multi-tier deployment scenarios, cloud-native applications are typically operated on much more complex but elastic infrastructures. Furthermore, there is a trend to use elastic container platforms like Kubernetes, Docker Swarm or Apache Mesos. However, multi-cloud use cases in particular are astonishingly complex to handle. In consequence, cloud-native applications are prone to vendor lock-in. Very often TOSCA-based approaches are used to tackle this aspect, but these application-topology-defining approaches offer limited support for multi-cloud adaptation of a cloud-native application at runtime. In this paper, we analyse several approaches to defining cloud-native applications that are multi-cloud transferable at runtime. We have not found an approach that fully satisfies all of our requirements. We therefore introduce a solution proposal that separates the elastic platform definition from the cloud application definition. We present first considerations for a domain-specific language for application definition and demonstrate evaluation results on the platform level showing that a cloud-native application can be transferred between different cloud service providers like Azure and Google within minutes and without downtime. The evaluation covers public and private cloud service infrastructures provided by Amazon Web Services, Microsoft Azure, Google Compute Engine and OpenStack.
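The separation of platform definition from application definition that this abstract proposes can be sketched as follows; the types and the `transfer` function are purely illustrative assumptions, not the paper's actual DSL:

```python
from dataclasses import dataclass, field

# Sketch of the key idea: the elastic platform is defined independently
# of the application deployed onto it, so the application can be moved
# between providers by re-binding it to another platform definition.

@dataclass
class Platform:
    provider: str          # e.g. "Azure", "GCE", "OpenStack"
    nodes: int             # size of the elastic container platform

@dataclass
class Application:
    name: str
    services: list = field(default_factory=list)

def transfer(app, source, target):
    """Re-bind the application to a new platform; the application
    definition itself never changes, which is what avoids lock-in."""
    return {"app": app.name, "from": source.provider, "to": target.provider}
```

Because `Application` carries no provider-specific detail, swapping `Platform` objects is all a multi-cloud move requires in this simplified model.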
This is a presentation by Prof. Anne Elster at the International Workshop on Open Source Supercomputing held in conjunction with the 2017 ISC High Performance Computing Conference.
Dr. Konstantinos Giannoutakis presents the CloudLightning simulator, a bespoke cloud simulation engine built for modelling and simulating heterogeneous resources as well as self-organising systems.
This presentation was given at the CloudLightning Conference held in conjunction with NC4 2017 in Dublin City University on 11th April 2017.
Self-Organisation as a Cloud Resource Management Strategy (CloudLightning)
Cloud resource management is becoming increasingly challenging with the advent of hyperscale computing and the proliferation of heterogeneous hardware. Meanwhile, resource utilisation remains low, resulting in high energy consumption per executed instruction. This presentation by Prof. John Morrison suggests a self-organised approach to resource management in an attempt to successfully address these challenges.
This presentation was given at the CloudLightning Conference held in conjunction with NC4 2017 in Dublin City University on 11th April 2017.
Simulation of Heterogeneous Cloud Infrastructures (CloudLightning)
In recent years, in addition to traditional CPU-based hardware servers, hardware accelerators have come into wide use in various HPC application areas. More specifically, Graphics Processing Units (GPUs), Many Integrated Cores (MICs) and Field-Programmable Gate Arrays (FPGAs) have shown great potential in HPC and have been widely deployed in supercomputing and in HPC clouds. This presentation focuses on the development of a cloud simulation framework that supports hardware accelerators. The design and implementation of the framework are also discussed.
This presentation was given by Dr. Konstantinos Giannoutakis (CERTH) at the CloudLightning Conference on 11th April 2017.
Perumal Kuppuudaiyar's (Intel Labs Europe) talk at NC4 2016 focused on the implementation of a testbed integrating various state-of-the-art software stacks on top of heterogeneous resources to provide FT/HA clusters, fine-grained resource management and containerised workload orchestration for HPC.
CloudLightning Service Description Language (CloudLightning)
Dr Marian Neagul (Institute e-Austria Timisoara, Romania) presented the CloudLightning Service Description Language at the Fifth National Conference on Cloud Computing and Commerce in Dublin City University on 12th April 2016.
Simulating Heterogeneous Resources in CloudLightning (CloudLightning)
In this presentation, Dr Christos Papadopoulos-Filelis (Democritus University of Thrace, Greece) discusses resource characterisation, simulation tools and the elements of simulation used in CloudLightning.
This presentation was given at the National Conference on Cloud Computing in Dublin City University on 12th April 2016.
This presentation introduces CloudLightning, a €4m Horizon 2020 research project that proposes a novel architecture for self-organising, self-managing heterogeneous clouds. The proposed use cases include IaaS service provision for HPC to serve the oil and gas, genome processing and ray tracing (3D image rendering) markets.
Prof John P Morrison (CloudLightning Project Coordinator, University College Cork) presents an overview of the CloudLightning Project at the Fourth National Conference on Cloud Computing (NC4) at Dublin City University.
In his presentation Prof Morrison addresses the context and motivation behind exploiting heterogeneous cloud resources to develop the new CloudLightning cloud service delivery model.
The proposed delivery model aims to make the cloud more accessible to cloud consumers by adopting a clean service interface for cloud users to declare and specify resource requirements.
By combining heterogeneous cloud architectures with the principles of self-organisation and self-management, CloudLightning will offer cloud service providers power-efficient, scalable management of their cloud infrastructures.
CloudLightning - Multiclouds: Challenges and Current Solutions – CloudLightning
In this presentation, Prof Dana Petcu (Institute e-Austria Timisoara, West University of Timisoara) discusses the concepts, challenges and requirements relating to multicloud architectures.
Prof Petcu also addresses differences between existing approaches in dealing with multiple clouds and the requirements for a support platform for a multicloud infrastructure.
The presentation includes a case study – the MODAClouds Project – that aims to support system developers and operators in exploiting multiple Clouds for the same system in addition to systems migration (full or partial) between clouds as needed.
Finally, Prof Petcu addresses the lessons learned from MODAClouds and similar EU projects and their application to the CloudLightning Project (@_cloudlightning).
This presentation was given at the National Conference on Cloud Computing in Dublin City University on 14th April 2015.
The presentation video follows on at the end of the slides.
Dev Dives: Train smarter, not harder – active learning and UiPath LLMs for do... – UiPathCommunity
💥 Speed, accuracy, and scaling – discover the superpowers of GenAI in action with UiPath Document Understanding and Communications Mining™:
See how to accelerate model training and optimize model performance with active learning
Learn about the latest enhancements to out-of-the-box document processing – with little to no training required
Get an exclusive demo of the new family of UiPath LLMs – GenAI models specialized for processing different types of documents and messages
This is a hands-on session specifically designed for automation developers and AI enthusiasts seeking to enhance their knowledge in leveraging the latest intelligent document processing capabilities offered by UiPath.
Speakers:
👨🏫 Andras Palfi, Senior Product Manager, UiPath
👩🏫 Lenka Dulovicova, Product Program Manager, UiPath
UiPath Test Automation using UiPath Test Suite series, part 4 – DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series, part 4. In this session, we will cover an overview of Test Manager along with the SAP heatmap.
The UiPath Test Manager overview with SAP heatmap webinar offers a concise yet comprehensive exploration of the role of a Test Manager within SAP environments, coupled with the utilization of heatmaps for effective testing strategies.
Participants will gain insights into the responsibilities, challenges, and best practices associated with test management in SAP projects. Additionally, the webinar delves into the significance of heatmaps as a visual aid for identifying testing priorities, areas of risk, and resource allocation within SAP landscapes. Through this session, attendees can expect to enhance their understanding of test management principles while learning practical approaches to optimize testing processes in SAP environments using heatmap visualization techniques.
What will you get from this session?
1. Insights into SAP testing best practices
2. Heatmap utilization for testing
3. Optimization of testing processes
4. Demo
Topics covered:
Execution from the test manager
Orchestrator execution result
Defect reporting
SAP heatmap example with demo
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
GraphRAG is All You need? LLM & Knowledge Graph – Guy Korland
Guy Korland, CEO and Co-founder of FalkorDB, will review two articles on the integration of language models with knowledge graphs.
1. Unifying Large Language Models and Knowledge Graphs: A Roadmap.
https://arxiv.org/abs/2306.08302
2. Microsoft Research's GraphRAG paper and a review paper on various uses of knowledge graphs:
https://www.microsoft.com/en-us/research/blog/graphrag-unlocking-llm-discovery-on-narrative-private-data/
Builder.ai Founder Sachin Dev Duggal's Strategic Approach to Create an Innova... – Ramesh Iyer
In today's fast-changing business world, companies that fail to adapt and embrace new ideas often struggle to keep up with the competition. Fostering a culture of innovation, however, takes real work: vision, leadership and a willingness to take risks in the right proportion. Sachin Dev Duggal, co-founder of Builder.ai, has perfected the art of this balance, creating a company culture where creativity and growth are nurtured at each stage.
JMeter webinar - integration with InfluxDB and Grafana – RTTS
Watch this recorded webinar about real-time monitoring of application performance. See how to integrate Apache JMeter, the open-source leader in performance testing, with InfluxDB, the open-source time-series database, and Grafana, the open-source analytics and visualization application.
In this webinar, we will review the benefits of leveraging InfluxDB and Grafana when executing load tests and demonstrate how these tools are used to visualize performance metrics.
Length: 30 minutes
Session Overview
-------------------------------------------
During this webinar, we will cover the following topics while demonstrating the integrations of JMeter, InfluxDB and Grafana:
- What out-of-the-box solutions are available for real-time monitoring of JMeter tests?
- What are the benefits of integrating InfluxDB and Grafana into the load testing stack?
- Which features are provided by Grafana?
- Demonstration of InfluxDB and Grafana using a practice web application
To view the webinar recording, go to:
https://www.rttsweb.com/jmeter-integration-webinar
Search and Society: Reimagining Information Access for Radical Futures – Bhaskar Mitra
The field of information retrieval (IR) is currently undergoing a transformative shift, at least partly due to the emerging applications of generative AI to information access. In this talk, we will deliberate on the sociotechnical implications of generative AI for information access. We will argue that there is both a critical necessity and an exciting opportunity for the IR community to re-center our research agendas on societal needs, while dismantling the artificial separation between the work on fairness, accountability, transparency, and ethics in IR and the rest of IR research. Instead of adopting a reactionary strategy of trying to mitigate potential social harms from emerging technologies, the community should aim to proactively set the research agenda for the kinds of systems we should build, inspired by diverse, explicitly stated sociotechnical imaginaries. The sociotechnical imaginaries that underpin the design and development of information access technologies need to be explicitly articulated, and we need to develop theories of change in the context of these diverse perspectives. Our guiding future imaginaries must be informed by other academic fields, such as democratic theory and critical theory, and should be co-developed with social science scholars, legal scholars, civil rights and social justice activists, and artists, among others.
The Art of the Pitch: WordPress Relationships and Sales – Laura Byrne
Clients don’t know what they don’t know. What web solutions are right for them? How does WordPress come into the picture? How do you make sure you understand scope and timeline? What do you do if something changes?
All these questions and more will be explored as we talk about matching clients’ needs with what your agency offers without pulling teeth or pulling your hair out. Practical tips, and strategies for successful relationship building that leads to closing the deal.
Key Trends Shaping the Future of Infrastructure.pdf – Cheryl Hung
Keynote at DIGIT West Expo, Glasgow on 29 May 2024.
Cheryl Hung, ochery.com
Sr Director, Infrastructure Ecosystem, Arm.
This keynote covers the key trends across hardware, cloud and open source: how these areas are likely to mature and develop over the short and long term, and how organisations can position themselves to adapt and thrive.
Neuro-symbolic is not enough, we need neuro-*semantic* – Frank van Harmelen
Neuro-symbolic (NeSy) AI is on the rise. However, simply applying machine learning to just any symbolic structure is not sufficient to realise the gains of NeSy. These will only be gained when the symbolic structures have an actual semantics. I give an operational definition of semantics as “predictable inference”.
All of this illustrated with link prediction over knowledge graphs, but the argument is general.
3. Partners
CloudLightning comprises eight partners from academia and industry and is coordinated by University College Cork.
Industrial partners:
• Intel Ireland (IE)
• Maxeler (UK)
Academic partners:
• University College Cork (IE)
• Norwegian University of Science and Technology (NO)
• Institute e-Austria Timisoara (RO)
• Democritus University of Thrace (GR)
• The Centre for Research & Technology, Hellas (GR)
• Dublin City University (IE)
5. Specific Challenge
CloudLightning was funded under Call H2020-ICT-2014-1, Advanced Cloud Infrastructures and Services. The aim is to develop infrastructures, methods and tools for high performance, adaptive cloud applications and services that go beyond current capabilities.
• Cloud computing is being transformed by new requirements such as:
- heterogeneity of resources and devices,
- software-defined data centres,
- cloud networking, security, and
- rising demands for better quality of user experience.
• Cloud computing research will be oriented towards:
- new computational and data management models (at both infrastructure and services levels) that respond to the advent of faster and more efficient machines,
- rising heterogeneity of access modes and devices,
- demand for low-energy solutions,
- widespread use of big data,
- federated clouds, and
- secure multi-actor environments, including public administrations.
6. EU Use Case Motivations
CloudLightning’s use cases support the European Union HPC strategy and specific industries identified by IDC in their recent report on the progress of the EU HPC Strategy (IDC, 2015).
1. The health sector represents 10% of EU GDP and 8% of the EU workforce (EC, 2014). HPC is increasingly central to genome processing and thus to advanced medicine and bioscience research.
2. The oil and gas industry is responsible for 170,000 European jobs and €440 billion of Europe's GDP (IDC, 2015). HPC improves discovery performance and exploitation.
3. Ray tracing is a fundamental technology in many industries, specifically CAD/CAE, digital content and mechanical design, sectors dominated by SMEs.
4. European ROI in HPC is very attractive: each euro invested in HPC on average returned €867 in increased revenue/income (IDC, 2015).
7. The HPC Market
Although the EU has the largest GDP in the world (€13.2 trillion), the U.S. has substantially outspent the EU region in high performance computing, which has a knock-on effect on scientific discovery, innovation and competitiveness.
IDC estimates the HPC market at €21bn, and forecasts that European HPC ecosystem spending will increase by 37.8% (6.6% CAGR) to reach about €5.2 billion in 2018, or 24.9% of worldwide HPC ecosystem spending (€21.3 billion).
8. HPC Challenges
“The challenge is less about educating users about cloud computing and more about the ability of clouds to handle more types of HPC jobs over time.” (IDC, 2015)
Traditional High Performance Computing is:
1. Hard to use without deep IT knowledge
2. Expensive
3. Inaccessible to individuals and SMEs
4. Inflexible
Most HPC workloads are not ready to run on today’s cloud architectures.
9. The Market for HPC in the Cloud
The cloud segment is one of the smallest but fastest growing segments in the HPC market. Spending on HPC in the cloud and hybrid-custom HPC clouds is forecast to grow from US$1.7bn in 2015 to US$5.2bn in 2017 (IDC, 2015).
The proportion of HPC sites employing cloud computing has grown from 13.8% in 2011, to 23.5% in 2013, to 34.1% in 2015 (IDC, 2015). CloudLightning primary research suggests 48% of sites are using cloud computing, although for relatively less complex workloads.
2017 forecast: Hybrid-Custom HPC Clouds, $1.5 billion; HPC Public Clouds, $3.7 billion; Traditional HPC Servers and Private Clouds, $15.4 billion.
10. Drivers and Barriers to HPC in the Cloud Adoption
Our primary research (n=92) confirms our desk research, which suggests that there are significant economic and capacity-related drivers, but both general cloud and HPC-specific barriers, to the adoption of HPC in the cloud.
Drivers:
1. Access to extra capacity for overflow or surge workloads
2. Reduced capital costs
3. Access to a datacentre or specialised software
Barriers:
1. Data protection and control
2. Communication speed concerns
3. Complexity and difficulties migrating and integrating existing systems with the Cloud
11. CloudLightning Objectives
CloudLightning seeks to address the challenges in the HPC market through nine technical, commercial and societal objectives (with associated work packages):
• Build Prototype Management System and Delivery Model (WP4, WP5, WP6)
• Improved Accessibility to Cloud Resources (WP2, WP5, WP6)
• Energy Efficiency (WP3, WP7)
• Validate Approach with Use Cases (WP5, WP6)
• Demonstrate Scalability (WP7)
• Competitive Advantage through Infrastructure Efficiencies (WP4, WP8)
• Competitive Advantage through Improved Accessibility (WP5, WP6, WP8)
• Opportunities in Use Case Domains (WP2, WP8)
• Scientific Advancement (WP8)
12. CloudLightning Approach
CloudLightning proposes a novel architecture for provisioning heterogeneous cloud resources to deliver services, specified by the user, using a bespoke service description language.
01 Complexity: CloudLightning uses self-organisation and self-management to manage complexity effectively.
02 Heterogeneous Resources: CloudLightning was designed specifically for heterogeneous hardware.
03 IaaS Access: a clear service interface through separation of concerns between consumer and provider.
04 Energy Efficiency: achieved through heterogeneous resources, reducing overprovisioning, maximising VM/server density and turning off idle servers.
05 Resource Utilisation: CloudLightning uses dynamic workload and resource management to increase the efficiency of resource utilisation.
06 Service Deployment: the CloudLightning deployment mechanism simplifies the operational overhead for non-technical users.
13. Gateway Service
[Architecture diagram: the Blueprint Creator, End User and Enterprise Cloud Operator interact through the Gateway Service UI with the Services Catalogue and Blueprint Catalogue. The Self-Organizing Self-Management System, Plug & Play Service and Resource Handler sit between the Gateway and the heterogeneous resources. Interactions shown include: Deploy Service, Get Services, Create Blueprints, Extract/Modify Blueprints, Deploy Blueprint, Request Resource, Discover Resource, Monitor, Get Status, and new hardware requesting to join as a CL-Resource.]
14. Progress Beyond the State of the Art
CloudLightning is contributing, and will continue to contribute, to progress beyond the state of the art across all technical work packages and primary use cases. We are contributing, and will continue to contribute, to:
1. The expected impacts listed in the call topic
2. The innovative capacity of the consortium members
3. The innovative capacity of European industry
4. Other European environmental and societal priorities
Areas of advancement include: cloud architecture; service description languages; the local decision strategy framework; resource coalitions; large-scale simulation; and the ray tracing, oil & gas and genome processing use cases.
17. Design Requirements
Create a Heterogeneous Service-Oriented Cloud Architecture to Support HPC Workloads:
1. Ease of Use
2. Improve Resource Utilization compared to current Cloud deployments
3. Support Heterogeneity
4. Improve Service Delivery
19. Blueprints, Service Catalogue and Implementation Library
• A Blueprint is a composition of services. A Blueprint has: id (unique identifier), constraints (logical expressions), metrics (atomic values) and parameters (atomic values). Blueprints (e.g. Blueprint 1, 2, 3) are held in the Blueprint Catalogue and carry no implementation.
• A Service describes executable code for the same task across many different hardware types. A Service has: id (unique identifier), definition (service specification), constraints (logical expressions), metrics (atomic values) and parameters (atomic values). Services (e.g. Service 1, 2, 3) are held in the Service Catalogue.
• An Implementation is executable code for a task on a particular hardware type. An Implementation has: id (unique identifier) and definition (concrete SW/HW) (...). Implementations (e.g. Implementation 1, 2, 3) are held in the Implementation Library.
20. CloudLightning API Flow
The main CL system components, APIs, communication protocols, and the sequence of documents that maintains the state of each and every interaction, have been defined.
24. Management of Physical Resources
We assume a Cloud with a Resource Fabric far greater than that currently available, and add structure to the Cloud Fabric by creating virtual partitions and grouping them together.
• The resource fabric is partitioned into vRacks.
• Each vRack is managed by a vRack Manager.
• A vRack Manager can form Coalitions of its resources to support services.
• vRack Managers self-organize to optimize service delivery.
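The partitioning step above can be sketched as follows. This is a minimal sketch under our own assumptions (the function, record layout and rack size are invented for illustration): resources are grouped by hardware type so that every vRack is homogeneous, and each vRack gets its own manager.

```python
from collections import defaultdict

def partition_fabric(resources, rack_size=4):
    """Group resources by hardware type, then slice each group into vRacks,
    attaching one vRack Manager per vRack."""
    by_type = defaultdict(list)
    for res in resources:
        by_type[res["hw_type"]].append(res)
    vracks = []
    for hw_type, group in by_type.items():
        for i in range(0, len(group), rack_size):
            vracks.append({"hw_type": hw_type,
                           "members": group[i:i + rack_size],
                           "manager": f"vRackManager-{hw_type}-{i // rack_size}"})
    return vracks

# Hypothetical fabric: six CPU servers and two GPU nodes.
fabric = ([{"id": f"svr{i}", "hw_type": "CPU"} for i in range(6)] +
          [{"id": f"gpu{i}", "hw_type": "GPU"} for i in range(2)])
vracks = partition_fabric(fabric)
```

With a rack size of 4, the six CPU servers yield two vRacks and the two GPU nodes one, each managed independently.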
25. vRacks and vRack Managers
• A vRack is a homogeneous partition of the resource fabric.
• Each vRack is managed by a dedicated vRack Manager.
• vRack Managers of different types exist based on the resource types being managed.
[Diagram: a Resource Fabric of servers and specialized hardware partitioned into vRacks, each with its own vRack Manager; some vRacks of servers share a dedicated high-speed interconnection.]
26. vRack Manager Groups
• Groups of vRack Managers can be formed to simplify access to resources and to enable self-organization.
• There are three types of vRack Manager Groups.
[Diagram: the three group types (Type A, Type B, Type C), composed respectively of vRack Managers of specialized hardware, of servers with a dedicated high-speed interconnection, and of commodity servers.]
27. CL-Resources
To generically manipulate resources of different types, the SOSM system introduces the concept of a CL-Resource. CL-Resources refer to different hardware types and to different configurations of those types. Thus heterogeneity can be introduced dynamically.
[Diagram: a Local Resource Manager exposes resource-partitioning possibilities such as a MIC-World, a single MIC, a cluster of servers, or a container/VM.]
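The idea of a uniform handle over heterogeneous hardware can be sketched as below. The class and method names are our invention, not CloudLightning's API; the point is only that one code path manipulates any hardware type and configuration, so new kinds of resource can be introduced without changing the management logic.

```python
class CLResource:
    """A uniform handle over a hardware type plus a configuration of it."""
    def __init__(self, hw_type, config):
        self.hw_type = hw_type  # e.g. "Server", "MIC", "GPU", "FPGA"
        self.config = config    # e.g. "VM", "container", "cluster", "MIC-World"

    def describe(self):
        return f"{self.hw_type}/{self.config}"

# The same generic code handles every resource, whatever its type:
fabric = [CLResource("Server", "VM"),
          CLResource("MIC", "MIC-World"),
          CLResource("Server", "cluster")]
handles = [r.describe() for r in fabric]
```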
28. Advanced Architecture Support
• Dynamic VPN creation for Blueprint Service Execution
• Autoscaling
• High availability
• Data locality
[Diagram: a Blueprint of services S1, S2 and S3 deployed across servers in two vRacks linked by a virtual network connection.]
30. A Framework for Hosting and Executing SOSM Strategies
The framework for hosting and executing SOSM strategies can be associated with any hierarchical architecture. As components act to achieve their local goals, the whole system eventually evolves towards the ideal global goal state.
The framework's elements are: Perception, Metrics, Assessment Functions, Impetus, Weights, Suitability Index and Directed Evolution.
33. Customizing the Self-Organisation Self-Management Framework with CL Strategies
The Assessment Functions and Directed Evolution are related to the CL-specific objectives of:
• Maximizing task throughput
• Maximizing energy efficiency
• Maximizing computational efficiency
• Maximizing resource management efficiency
Local goal: each component maximizes its Suitability Index, derived from its Metrics and Weights via Perception and Impetus.
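One simple way to realise such a Suitability Index is as a weighted sum of assessed metrics. This is an illustrative sketch under our own assumptions: the slides name the ingredients (Metrics, Weights, Assessment Functions) but not the exact formula, and the metric names and weight values below are invented.

```python
def suitability_index(metrics, weights):
    """Combine assessed metrics (each normalised to [0, 1]) using
    impetus-derived weights that sum to 1."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9
    return sum(weights[k] * metrics[k] for k in weights)

# Hypothetical assessed metrics for one vRack Manager, mirroring the four
# CL objectives above, and one possible weighting of them:
metrics = {"throughput": 0.8, "energy_efficiency": 0.6,
           "compute_efficiency": 0.7, "mgmt_efficiency": 0.9}
weights = {"throughput": 0.4, "energy_efficiency": 0.3,
           "compute_efficiency": 0.2, "mgmt_efficiency": 0.1}
si = suitability_index(metrics, weights)  # the local quantity to maximize
```

Directed evolution would then steer each component towards configurations with a higher index, so local improvements drive the system towards its global goal.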
35. Self-Organisation Framework Augmentations in Support of Virtualization
Goals:
• Support for virtualization
• Increase resource utilization
• Decrease job rejection rate
Augmentations:
• A new assessment function reflecting memory consumption is added.
• A two-stage self-organisation strategy is introduced: CPU and vCPU.
• Resource over-commitment is addressed.
36. Coalitions (WP3)
• Coalitions are used to support the process parallelism within a service.
• Coalitions exist entirely inside a vRack.
• The CL-Resources of a Coalition may span multiple servers within the same vRack.
[Diagram: a Coalition spanning several servers within a single vRack.]
39. Determining the Local State
The Telemetry system provides updates to the SOSM system on the status of the resource fabric. It is implemented using InfluxDB and SNAP.
[Diagram: the Gateway Service, Self-Organizing Self-Management Framework, Plug & Play Service and the Services/Blueprint Catalogues sit above Coalitions of deployed Blueprints on the physical resources; the Blueprint Creator, End User and Enterprise Cloud Operator interact with the system from above.]
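The slides say only that telemetry is built on InfluxDB and SNAP; as a hedged illustration, a collector in such a pipeline could emit resource-status samples in InfluxDB's line protocol (`measurement,tags fields timestamp`). The measurement, tag and field names below are our invention.

```python
import time

def to_line_protocol(measurement, tags, fields, ts_ns=None):
    """Render one sample in InfluxDB line protocol:
    measurement,tag1=v1,... field1=v1,... timestamp_ns"""
    tag_str = ",".join(f"{k}={v}" for k, v in tags.items())
    field_str = ",".join(f"{k}={v}" for k, v in fields.items())
    ts = ts_ns if ts_ns is not None else time.time_ns()
    return f"{measurement},{tag_str} {field_str} {ts}"

line = to_line_protocol("vrack_status",
                        {"vrack": "vrack-03", "hw_type": "GPU"},
                        {"cpu_util": 0.42, "power_w": 310},
                        ts_ns=1_490_000_000_000_000_000)
# → 'vrack_status,vrack=vrack-03,hw_type=GPU cpu_util=0.42,power_w=310 1490000000000000000'
```

The SOSM system would read such series back to keep its local view of the fabric current.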
40. Support for New Hardware
• The SOSM system supports the addition of new hardware by using a plug-and-play mechanism.
• New hardware can register with SOSM and is automatically added and managed.
[Diagram: as in the previous slide, with new physical resources joining the fabric through the Plug & Play Service.]
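The registration handshake implied above can be sketched as follows. This is a hypothetical sketch: the slides state only that new hardware registers with SOSM and is then managed automatically; the registry class, payload fields and acknowledgement format are invented for illustration.

```python
class SOSMRegistry:
    """Toy stand-in for the SOSM side of the plug-and-play mechanism."""
    def __init__(self):
        self.resources = {}

    def register(self, resource_id, hw_type, capabilities):
        """New hardware announces itself; SOSM records it as managed."""
        self.resources[resource_id] = {"hw_type": hw_type,
                                       "capabilities": capabilities,
                                       "state": "managed"}
        return {"status": "accepted", "resource_id": resource_id}

# A new FPGA node joins the fabric and is immediately under management:
sosm = SOSMRegistry()
ack = sosm.register("fpga-07", "FPGA", {"luts": 1_000_000})
```

From that point on, the new resource is available to vRack Managers like any other CL-Resource, with no operator intervention.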