Invited talk given at the 2014 Chip-to-Cloud Security Forum "Advances in Securing Embedded, Mobile and Cloud Services and Ecosystems" in the seminar session on "Procurement, SLAs, and Standardisation on a Global Scale." In this talk, Dr. Sill reviews the history of cloud and grid computing, the formation and charter description for Phases I and II of the US National Institute of Standards and Technology (NIST) "SAJACC" working group, and brings the discussion up to date with an overview of current "DevOps"-oriented cloud standards and software interoperability hands-on testing efforts worldwide.
Overview and introductory remarks for the OGF sessions held May 21-22, 2015 co-located with the European Grid Initiative 2015 conference that took place the week of May 18-22, 2015 in Lisbon, Portugal. For details, see https://www.ogf.org/ogf/doku.php/events/ogf-44
OCCI - The Open Cloud Computing Interface – flexible, portable, interoperable...Alan Sill
The Open Cloud Computing Interface (OCCI) specification set defines a general protocol and API applicable to many different cloud resource management tasks.
OCCI began as a remote management API for IaaS-model-based services, allowing for the development of interoperable tools for common tasks including deployment, autonomic scaling and monitoring. It has since evolved into a flexible, general-purpose RESTful API framework with a strong focus on integration, portability, interoperability and innovation, while still remaining highly extensible.
OCCI is suitable for serving many models beyond IaaS, including PaaS and SaaS. The current release (v1.1) of OCCI has achieved a high degree of adoption and is implemented in production in a wide variety of languages, projects, software products and application areas.
The OCCI working group is developing version 1.2 of the OCCI specifications, incorporating improvements that result from nearly four years of successful field experience. This version will be backwards compatible with v1.1 and will include:
- A new JSON rendering to accompany updates to the existing HTTP and text renderings.
- Minor updates of current OCCI core infrastructure model and specification.
- New extensions that will include PaaS support, notifications support and SLA support.
In addition, the OCCI group is considering the best methods for supporting additional features, including monitoring, key management and security, interdomain networking, and direct interface support for popular batch systems through the Distributed Resource Management Application API (DRMAA) standard.
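The text rendering mentioned above places categories and attributes in HTTP headers. As a rough sketch (the attribute values and the header assembly here are illustrative, not a complete OCCI client), a request to create a compute resource might carry headers like these:

```python
# Hedged sketch of the OCCI text rendering: the Category and
# X-OCCI-Attribute headers a client would send to instantiate a compute
# resource. Attribute names come from the OCCI infrastructure extension;
# the values are illustrative.

def occi_category(term, scheme, cls):
    """Render an OCCI Category header value for a kind, mixin or action."""
    return f'{term}; scheme="{scheme}"; class="{cls}"'

def occi_attributes(attrs):
    """Render attribute assignments for the X-OCCI-Attribute header."""
    return ", ".join(f"{name}={value}" for name, value in attrs.items())

headers = {
    "Category": occi_category(
        "compute", "http://schemas.ogf.org/occi/infrastructure#", "kind"),
    "X-OCCI-Attribute": occi_attributes(
        {"occi.compute.cores": 2, "occi.compute.memory": 4.0}),
}

for name, value in headers.items():
    print(f"{name}: {value}")
```

The JSON rendering planned for v1.2 carries the same category/attribute model in a JSON body instead of headers.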
Overview of the US National Science Foundation Cloud and Autonomic Computing Industry/University Cooperative Research Center testbed activities on the US NSF Chameleon, Cloudlab and XSEDE resources.
The NSF CAC will use its industry/university connections to promote and foster open cloud standards & interoperability testbeds using internal and external resources.
Specific projects have been proposed and approved on two new NSF computer-science-oriented cloud “testbed as a service” resources, Chameleon and CloudLab, which have recently been funded to replace the FutureGrid project.
These testbeds will be open to all researchers who wish to cooperate with us on cloud interoperability, performance, standards or general cloud functionality testing within the context of the approved projects.
Both US domestic and international participants are welcome, as long as you’re willing to work on interoperability topics and share your results.
Opportunities for involvement in the CAC by commercial companies also exist, as described at http://nsfcac.org
Invited talk on Open Grid Forum standards, focusing specifically on the current status of the Open Cloud Computing Interface (OCCI), given at the US National Institute of Standards and Technology Cloud Computing Forum and Workshop VIII, July 7-10, 2015.
Introduction to the Open Grid Forum community and the document production process, as well as several primary application arenas for OGF specifications, given at the co-located International Conference on Cloud and Autonomic Computing (CAC 2014), IEEE International Conference on Self-Adaptive and Self-Organizing Systems (SASO 2014) and the IEEE International Conference on Peer-to-Peer Computing (P2P’14) conferences, September 8-12, 2014 at Imperial College in London, UK.
Towards a Lightweight Multi-Cloud DSL for Elastic and Transferable Cloud-nati...Nane Kratzke
Cloud-native applications are intentionally designed for the cloud in order to leverage cloud platform features like horizontal scaling and elasticity. In addition to classical (and very often static) multi-tier deployment scenarios, cloud-native applications are typically operated on much more complex but elastic infrastructures. Furthermore, there is a trend to use elastic container platforms like Kubernetes, Docker Swarm or Apache Mesos. However, multi-cloud use cases in particular are astonishingly complex to handle. As a consequence, cloud-native applications are prone to vendor lock-in. TOSCA-based approaches are often used to tackle this aspect, but these application-topology-defining approaches offer limited support for multi-cloud adaptation of a cloud-native application at runtime. In this paper, we analyze several approaches to defining cloud-native applications that are multi-cloud transferable at runtime. We have not found an approach that fully satisfies all of our requirements. We therefore introduce a solution proposal that separates elastic platform definition from cloud application definition. We present first considerations for a domain-specific language for application definition and demonstrate evaluation results at the platform level, showing that a cloud-native application can be transferred between different cloud service providers such as Azure and Google within minutes and without downtime. The evaluation covers public and private cloud service infrastructures provided by Amazon Web Services, Microsoft Azure, Google Compute Engine and OpenStack.
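The separation the paper proposes can be illustrated with a small sketch. This is a hypothetical example, not the authors' actual DSL: the platform definition names providers and machine flavors, while the application definition stays provider-agnostic and can therefore move with the platform.

```python
# Hypothetical illustration of separating elastic platform definition
# from cloud application definition. All names, flavors, and images are
# made up for the example.
platform = {
    "name": "demo-platform",
    "engine": "kubernetes",  # the elastic container platform
    "nodes": [
        {"provider": "aws", "flavor": "m4.large", "count": 3},
        {"provider": "gce", "flavor": "n1-standard-2", "count": 2},
    ],
}

application = {
    "name": "demo-app",
    "services": [
        {"name": "web", "image": "nginx:stable", "replicas": 4, "port": 80},
    ],
}

def provider_agnostic(definition):
    """A definition is transferable if it never names a cloud provider."""
    return "provider" not in str(definition)

# Only the platform definition has to change when migrating providers:
print(provider_agnostic(application))  # True
print(provider_agnostic(platform))     # False
```

Because only the platform definition mentions providers, transferring the application between clouds reduces to re-instantiating the platform elsewhere and rescheduling the unchanged application definition onto it.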
Presentation at the European Grid Infrastructure (EGI) Conference 2015, Business Track "Examples of SMEs as consumers and providers".
EGI is focused on supporting SMEs and the full innovation chain between business and academia to create opportunities of economic impact through open data generated and the technical services both offered and required to support research and innovation.
Terradue Cloud Platform delivers Cloud bursting for Earth Science Applications and Services, illustrated in this presentation by many use cases and collaborations fostering the reuse of Earth Observation open data and open services.
The Italian Institute for Nuclear Physics (INFN) has long experience in the field of distributed scientific computing, mainly in the framework of grid computing. In recent years, interest in the cloud computing paradigm has grown within the INFN scientific and technological communities, leading to new activities aimed at creating a distributed computing environment that takes advantage of the flexibility offered by cloud technologies.
In this contribution we will give an overview of the activities carried out by the INFN IT community in this direction, leveraging OpenStack as the main cloud management framework. We will present the two aspects of the INFN-OpenStack venture:
- setup and operation of OpenStack based infrastructures: the challenging set-ups and peculiar characteristics of local and geographically distributed cloud infrastructures present in various INFN sites, as well as the prototype of the multi-site INFN Corporate Cloud infrastructure - a geographically distributed cloud environment, fully redundant and highly available, hosted in a limited number of INFN sites;
- development of new OpenStack components, and/or improvement of existing ones, in order to:
  - support federated identities and provide privacy and distributed authorization;
  - move beyond static allocation and partitioning of both storage and computing resources in data centers;
  - distribute and deploy applications in a flexible way;
  - exploit distributed computing and storage resources through transparent network interconnections.
The presentation will describe the national and international projects in which INFN is involved, highlighting their objectives and the solutions adopted, the work done and achieved results as well as future steps and related cloud activities involving the OpenStack community.
Welcome talk unleashing the future of open-source enterprise cloud computingNETWAYS
The OpenNebula Project has come a long way since the first “technology preview” of OpenNebula almost six years ago. During these years we’ve witnessed the rise and hype of the Cloud, the birth and decline of several virtualization technologies, but especially the encouraging and exciting growth of OpenNebula, both as a technology and as an active and engaged community. As a meeting point for OpenNebula users, developers, administrators, builders, integrators and researchers, this Conference represents an opportunity to look back at how the project has grown in the last six years, and to give a peek at what to expect from the project in the near future.
OpenStack Ousts vCenter for DevOps and Unites IT Silos at AVG Technologies Jakub Pavlik
tcp cloud & AVG User Story.
Does your IT department’s left hand talk to the right hand? Ours finally does at AVG (http://www.avg.com/eu-en/homepage)! This is the story of OpenStack as our salvation, and of important lessons learned in technology and IT politics.
Our appsdev team’s devops abilities were being held ransom on vCenter, so we wanted public cloud agility for dev/test/staging. With the help of our IT partner… Full session details here: http://awe.sm/r9Ekr
From Jisc's campus network engineering for data-intensive science workshop on 19 October 2016.
https://www.jisc.ac.uk/events/campus-network-engineering-for-data-intensive-science-workshop-19-oct-2016
OpenNebula Conf 2014 | Bootstrapping a virtual infrastructure using OpenNebul...NETWAYS
This talk shows how to set up a virtual infrastructure using OpenNebula as the cloud management platform, SaltStack for configuration management and Foreman for bare-metal/virtual host provisioning. You will see how to combine OpenNebula with bare-metal deployment on standard server hardware using non-shared storage, in an environment without physical access to the hardware and with no existing base infrastructure such as DNS, NTP, DHCP or VPN. The infrastructure installation has been done automatically using public code and free open-source software.
Challenges in Global Standardisation | EnergySys Hydrocarbon Allocation ForumEnergySys Limited
The slides from Dr Esther Hayes's (Operations Director, EnergySys) presentation on the implementation challenges associated with standardised production models at the recent EnergySys Hydrocarbon Allocation Forum.
These insights are taken from her new whitepaper 'Challenges in Global Standardisation'. If you would like a copy of the whitepaper, please contact us via kirsty.armitage@energysys.com
Mobile and portable devices require the definition of new user interfaces (UIs) capable of reducing the level of attention users must pay to operate the applications they run, making those applications calmer. To carry out this task, the next generation of UIs should be able to capture information from the context and act accordingly. This work defines an extension to the UsiXML methodology that specifies how information about the user is modeled and used to customize the UI. The extension is defined vertically through the methodology, affecting all of its layers. In the Tasks & Concepts layer, we define the user environment of the application, where roles and individuals are characterized to represent different user situations. In the Abstract UI layer, we relate groups of these individuals to abstract interaction objects; thus, user situations are linked to the abstract model of the UI. In the Concrete UI layer, we specify how the information about the user is acquired and how it is related to the concrete components of the UI. This work also presents how to apply the proposed extensions to a case study. Finally, it discusses the advantages of using this approach to model user-aware applications.
- What Is Software Deployment?
- A Minimal Python Web Application
- Troubleshooting
- The Interface between Web Server and Application
- Standardization/Automation/Monitoring/Availability
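The second and fourth bullets above can be sketched together: the standard interface between a Python web server and the application is WSGI (PEP 3333), and a minimal application is a single callable. The greeting text here is illustrative; any WSGI server (gunicorn, uWSGI, the standard library's wsgiref) can run such a callable.

```python
# A minimal WSGI application: the PEP 3333 interface between a Python
# web server and the application is just a callable taking the request
# environ and a start_response function.

def app(environ, start_response):
    """Handle one HTTP request and return the response body as bytes."""
    body = b"Hello, deployment!"
    start_response("200 OK", [
        ("Content-Type", "text/plain"),
        ("Content-Length", str(len(body))),
    ])
    return [body]

# Exercise the application without a real server, as a test client would:
def fake_start_response(status, headers):
    print(status)  # a real server would send this status line to the client

response = b"".join(app({}, fake_start_response))
print(response.decode())
```

Because the interface is standardized, the same `app` object can be handed unchanged to any conforming server, which is what makes deployment automation across environments practical.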
Five Pain Points of Agile Development (And How Software Version Management Ca...Perforce
The latest research on Software Configuration Management suggests that developers are struggling in five key areas: latency, far-flung teams, ad-hoc workflows, administrative overhead, and integration nightmares.
This webcast will help you understand how these five factors are undermining developer productivity and performance.
As modern practices strain some tools to their limits, companies are revisiting their approaches to version management. We will share with you...
* How the market is evolving to address these critical issues
* How innovative SCM tools can take your versioning to new levels.
Earlier this year Solutions Marketing Inc. conducted a benchmarking survey designed to better understand the issues that B2B companies face in developing new solutions. The survey explored the following questions:
• Current importance and relevance of solutions
• Identification of the stakeholders who are involved
• Key processes required
• The level of process standardization
• General challenges that solutions developers face
The Center for Services Leadership and the Institute for the Study of Business Management at Penn State invited their professional network of supporters and followers to take part in the survey. We would like to thank everyone who participated in the survey! Below are the highlights of the survey findings reported by Solutions Insights.
In April 2012, Brian Mathews asserted in his white paper that libraries need to “Think Like a Startup." But how do startups think? If we are going to emulate startup culture, then we have some learning to do. This interactive session will tackle the build-measure-learn cycle, validated learning, iterative design, continuous improvement, and other components of lean thinking. We'll underscore the importance of hands-on development, prototyping, and hypothesis testing. Come join the conversation and help make entrepreneurial thinking a habitual part of our practice and profession. Presented by M.J. D'Elia & Helen Kula.
A world without standards is a road to chaos, and IT processes are no exception. This presentation gives a friendly, approachable overview of the IT standards ISO 27001, ISO 20000, CobiT and ISO 38500.
Top 10 retail franchisor accounting best practices whitepaperAlex King
By viewing this free whitepaper you will learn:
- How to reduce costs for both retail franchisee and retail franchisor
- How to use data to make informed decisions that improve performance
- How to manage risk more effectively
- How to monitor fraud and non-compliance
FITT Toolbox: Standardisation in Media FormatsFITT
Standardisation is the process of developing and agreeing upon technical standards. The goals of standardisation can be to help with independence from single suppliers (commoditization), compatibility, interoperability, safety, repeatability, or quality. Standardisation is a strategic process in the technological evolution and commercialisation of products or services such as ICT software and hardware, and it implies close collaboration between the research community, industry and policy makers from a very early stage of the technology. This case shows that, given the complexity of the standardisation process, involving the research community at an early stage is crucial to foster rapid integration of new technology into new products.
www.FITT-for-Innovation.eu
Chair
There is a good case for Open Standards. We decided to publish my presentation to the OFE Executive Council about the OFE Standardisation Special Interest Group, held in London on 31 October 2008.
Franchise Model - Franchise as a Development Tool - Social Franchise EnterpriseWattJet
The slidecast reviews the franchise model, its phasing, its advantages and shortcomings, and its application to:
Social Franchises to deliver Social Services
Social Franchise Enterprises for the achievement of the Development Goals
The slideshare is divided in three parts:
- The General Franchise Model: Slides 1 to 8 (11 minutes).
Watch slidecast with commentaries at https://www.youtube.com/watch?v=uMJ95bX_ang
- Social Franchises: Slides 9-14 (5 minutes)
Watch slidecast with commentaries at https://www.youtube.com/watch?v=xcMOCFrufaU
- Complete slidecast
Watch complete slidecast with commentaries at https://www.youtube.com/watch?v=HI3RJpgsN9Y
Follow WattJet Channel in Youtube
This presentation shows why it is important to benchmark the performance of software projects and organizations. Measuring performance and comparing it to relevant peer groups provides the knowledge and understanding for management to make informed decisions on where the organization stands and where it should go. This presentation was given at the Italian GUFPI-ISMA conference (December 2013) and also addressed how the Italian industry is performing according to the ISBSG Country Analysis report.
Cloud Standards in the Real World: Cloud Standards Testing for DevelopersAlan Sill
Learn about standards studied in the US National Science Foundation Cloud and Autonomic Computing Industry/University Cooperative Research Center Cloud Standards Testing Lab and how you can get involved to extend the successes from these results in your own cloud software settings. Presented at the O'Reilly OSCON 2014 Open Cloud Day.
Video available at https://www.youtube.com/watch?v=eD2h0SqC7tY
Calit2-a Persistent UCSD/UCI Framework for CollaborationLarry Smarr
05.02.16
Invited Talk
Sun Microsystems Global Education and Research
Conference 2005
Title: Calit2-a Persistent UCSD/UCI Framework for Collaboration
San Francisco, CA
MPLS/SDN 2013 Intercloud Standardization and Testbeds - SillAlan Sill
This talk gives an overview of several multi-SDO and cross-SDO activities to promote and spur innovation in cloud computing. The focus is on API development and standardization, including testbeds, test use cases, and collaborative activities between organizations to create and carry out development and testing in this area. It covers work being pursued through the Cloud and Autonomic Computing Center at Texas Tech University, which is part of the US National Science Foundation's Industry/University Cooperative Research Center program, and work being done by standards organizations in which the CAC@TTU is involved, such as the Open Grid Forum, the Distributed Management Task Force, and the TeleManagement Forum. A summary is also given of work to produce a new round of more detailed use cases suitable for testing by the US National Institute of Standards and Technology's Standards Acceleration to Jumpstart Adoption of Cloud Computing (SAJACC) working group, with brief mention of related work going on in other parts of the world. Background and other standards work is also mentioned.
e-Clouds A Platform and Marketplace to Access and Publish Scientific Applicat...Mario Jose Villamizar Cano
Cloud computing promises great opportunities for research groups; however, when researchers want to execute applications on cloud infrastructures, many complex processes must be carried out. In this presentation we describe the e-Clouds project, which allows researchers to easily execute many applications on public Infrastructure as a Service (IaaS) offerings. Designed as a Software as a Service (SaaS) marketplace for scientific applications, e-Clouds allows researchers to submit jobs that are transparently executed on public IaaS platforms such as Amazon Web Services (AWS). e-Clouds manages the on-demand provisioning and configuration of computing instances, storage, applications, schedulers, jobs, and data. The architectural design and how a first application has been supported on e-Clouds are presented. e-Clouds allows researchers to easily share and execute applications in the cloud at low TCO (Total Cost of Ownership) and without the complexities associated with the details of IT configuration and management. e-Clouds provides new opportunities for research groups with little or no budget for dedicated cluster or grid solutions, providing on-demand access to ready-to-use applications and accelerating the generation of results in e-Science projects.
Opening Keynote Lecture
15th Annual ON*VECTOR International Photonics Workshop
Calit2’s Qualcomm Institute
University of California, San Diego
February 29, 2016
Introduction and Overview of OpenStack for IaaSKeith Basil
These slides supported a presentation at the 2013 Red Hat Summit.
It covers:
✦ Introduction to OpenStack
✦ OpenStack Architecture
✦ Understanding the Elastic Cloud
✦ OpenStack in the Real World
OpenNebulaConf 2013 - Keynote: Opening the Path to Technical Excellence by Jo...OpenNebula Project
Bio:
Jordi Farrés is Service Manager at the European Space Agency (ESA), where he is responsible for ESA’s grid processing infrastructure and related SciOps, and for technology evolution projects in the Earth Observation Ground Segment Engineering Division. He has also been responsible for ESA corporate business applications in Finance, Procurement, Human Resources and Facility Management. Dr. Farrés received his M.S. in Computer Science from UPC and his Ph.D. in Computer Science from the University of Edinburgh.
A comprehensive review of OpenStack then and now, each project's architecture, and hard data on why the race for open cloud is over. (First edition delivered April 2013 at OpenStack Summit. This version is from SPDEcon on June 10, 2013.)
Accumulo Summit 2014: Addressing big data challenges through innovative archi... (Accumulo Summit)
The ability to collect and analyze large amounts of data is a growing problem within the scientific community. The widening gap between data and users calls for innovative tools that address the challenges posed by big data volume, velocity, and variety. The Massachusetts Institute of Technology Lincoln Laboratory (MIT LL) is not immune to these challenges and has developed a set of tools that address many of them.
Big data volume stresses the storage, memory, and compute capacity of a computing system and requires access to a computing cloud. Choosing the right cloud is problem specific. Currently, four multi-billion-dollar ecosystems dominate the cloud computing environment: enterprise clouds, big data clouds, SQL database clouds, and supercomputing clouds. Each cloud ecosystem has its own hardware, software, conferences, and business markets. The broad nature of business big data challenges makes it unlikely that one cloud ecosystem can meet all needs, and solutions are likely to require tools and techniques from more than one ecosystem. The MIT SuperCloud was developed to address this challenge. To our knowledge, the MIT SuperCloud is the only deployed cloud system that allows all four ecosystems to co-exist without sacrificing performance or functionality.
The velocity of big data stresses the rate at which data can be absorbed and meaningful answers produced. Led by the NSA, a Common Big Data Architecture (CBDA) was developed for the U.S. government based on the Google Bigtable NoSQL approach and is now in wide use. MIT LL played a leading role in developing the CBDA and is a leader in adapting it to a variety of big data challenges.
Big data variety may present the largest challenge and the greatest opportunities. The promise of big data is the ability to correlate diverse and heterogeneous data to form new insights. The centerpiece of the CBDA is the NSA-developed Apache Accumulo database (capable of millions of entries per second) and the MIT LL-developed D4M schema. These technologies allow vast quantities of highly diverse data (text, computer logs, social media data, etc.) to be automatically ingested into a common schema that enables rapid query and correlation of every element.
The talk will concentrate on how we utilize the aforementioned technologies in our mission to apply advanced technology to problems of national security.
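The D4M "exploded" schema mentioned above can be sketched compactly: each field/value pair of a record becomes a column key in one sparse table, so heterogeneous records share a single schema and any field/value combination can be looked up directly. A minimal illustration in Python follows; the helper functions and sample records are our invention for this sketch, not MIT LL code.

```python
# Sketch of a D4M-style "exploded" schema: every field|value pair of a
# record becomes a column key, so diverse records share one sparse table
# and any field/value combination can be queried directly.

def explode(record_id, record, sep="|"):
    """Turn one record into (row, column, value) triples."""
    return [(record_id, f"{field}{sep}{value}", 1)
            for field, value in record.items()]

def build_index(triples):
    """Invert the triples: column key -> set of row ids."""
    index = {}
    for row, col, _ in triples:
        index.setdefault(col, set()).add(row)
    return index

# Two heterogeneous records (a log line and a social-media post)
triples = (explode("log-001", {"src_ip": "10.0.0.7", "status": "404"})
           + explode("tweet-42", {"user": "alice", "status": "404"}))

index = build_index(triples)
# Query: which records mention status 404, regardless of source type?
print(sorted(index["status|404"]))  # ['log-001', 'tweet-42']
```

The point of the design is that ingest never needs a per-source schema: new record types simply contribute new column keys to the same table.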
OCRE Workshop: Shaping the Earth Observation Services Market for Research. Session 3: Presentations from DIAS and eoMALL.
This workshop aims to bring EO service providers closer to the research community, capture researchers' needs, and develop fit-for-purpose EO services.
The event will be the 4th OCRE Requirements Gathering Workshop. Researchers and Earth Observation Service Providers will be asked to provide inputs to help us shape OCRE's tender.
The OCRE project aims to provide the first end-to-end instance of organised, large-scale market pull for EO services in Europe. These services will be provided for free to EU researchers through the European Open Science Cloud. To ensure that the services meet the actual needs of the research community, we invite both the demand and the supply side to share their views and engage in a productive dialogue. Our aim is to capture the needs of EU researchers and inform the EO service providers so that they make available services that effectively address them. We will also explain how the OCRE process will work, how the different stakeholders should be involved, and how to make the most of the foreseen benefits.
Return on Investment for Research Computing and Data Support 2021-03-03, Alan Sill
How to define, measure, and report return on investment information for research computing & data that is meaningful to your organization, and comments related to national and international scale computational and research data needs. Talk ends with a description of work towards use of renewable energy to lower costs of delivery for academic research computing. Presented to NITRD MAGIC meeting, March 3, 2021.
Design Considerations, Installation, and Commissioning of the RedRaider Cluster at the Texas Tech University High Performance Computing Center
Outline of this talk:
• HPCC Staff and Students
• Previous Clusters
  - History, performance, usage patterns, and experience
• Motivation for Upgrades
  - Compute capacity goals
  - Related considerations
• Installation and Benchmarks
• Conclusions and Q&A
Talk given at ISC Cloud'13: HPC and Manufacturing Meet Cloud, held 23-24 Sep 2013 in Heidelberg, Germany.
http://www.isc-events.com/cloud13/Overview.html
Talk given by Álvaro López García of the Instituto de Física de Cantabria at the Cloud Interoperability Week tutorial session of Cloud Plugfest 10 in Madrid, Spain, 19 Sep 2013.
Condensed summary of OGF standards and recent activities in cloud computing, presented at the CloudScape V conference held Feb. 27-28 2013 in Brussels, Belgium
SAJACC Working Group Recommendations to NIST, Feb. 12, 2013, Alan Sill
The NIST Cloud Computing “Standards Acceleration to Jumpstart Adoption of Cloud Computing” (SAJACC) Working Group pursued a strategic process to facilitate development of testing methodologies applicable to cloud computing products and standards. The purpose of this process was to create formal US Government (USG)-based use cases and validation mechanisms that would ensure identification of the detailed capabilities of cloud computing products and standards in terms of their ability to support the US Government’s “Cloud First” information technology strategy, and to test cloud computing products and standards against these use cases.
The SAJACC process was designed to incorporate the output of other NIST Cloud Computing working groups where possible, especially that of the Business Use Case Working Group, and to identify specific features that reflect USG priorities for cloud computing, which include aspects related to security, interoperability and portability. Future work is anticipated to extend the SAJACC framework to features that touch on other recently identified priorities, including aspects of accessibility and performance. By continually integrating such aspects identified by other related NIST Cloud Computing topical work, the NIST SAJACC process is designed to yield a validation process that will help support a sustainable, secure USG cloud infrastructure.
This working group report captures the results of this process to date and makes the following conclusions and recommendations to NIST to proceed from Phase I to future Phase II work of the SAJACC group:
1. Replace the SAJACC use case internal organization with one based on the current structure of the NIST Cloud Computing Reference Architecture and Taxonomy;
2. Add further use cases based on current extensions to this taxonomy for recently developed Cloud SLA Metrics and NIST Cloud Computing Security components;
3. Integrate further input as necessary from the NIST Business Use Case and Standards Roadmap groups, and work closely with these groups to identify additional use cases;
4. Study and adopt use case template elements from the US VA Bronze, Silver and Gold Use Cases and from additional formal input from US Government agencies;
5. Add automation and tooling, if possible, to the NIST web site to support community downloading of the NIST SAJACC use cases and their associated templates for testing scenarios and uploading of externally produced test results;
6. Conduct, invite and document additional use case demonstrations of cloud standards and applicable products against the SAJACC use cases to illustrate their features;
7. Solicit and add further recommendations from the community at large through meetings of the SAJACC working group.
This report therefore comprises the conclusion of Phase I of the SAJACC process to date, and the plan for initiation of Phase II with the goal to implement the above recommendations.
SAJACC WG Report Summary and Conclusions, Jan 2013, Alan Sill
Summary report from the Standards Acceleration to Jumpstart the Adoption of Cloud Computing ("SAJACC") group given in session B01: USG Cloud Computing Technology Roadmap Volume III Progress at National Institute of Standards and Technology Cloud Computing and Big Data Forum & Workshop, January 17, 2013
Requirement 5: Federated Community Cloud (Alan Sill)
Presentation on behalf of the US National Institute of Standards and Technology (NIST) Federated Community Cloud (FCC) sub-group of the Reference Architecture and Taxonomy (RATax) working group at the NIST Cloud Computing and Big Data Forum and Workshop, Feb. 15-17, 2013 in Gaithersburg, MD.
Cloud Testbeds for Standards Development and Innovation
1. Cloud Testbeds for Standards Development and Innovation
NIST SEMINAR
WHICH FUTURE FOR US/EU TRUSTED CLOUD SERVICES?
Procurement, SLAs, standardisation on a global scale
Alan Sill, Ph.D.
Site Director, Center for Cloud and Autonomic Computing at TTU
Senior Scientist, High Performance Computing Center
Adjunct Professor of Physics, Texas Tech University
NIST SAJACC Working Group Co-Chair
Sep. 24, 2014
2. Organization of this talk
1. Past
2. Present
3. Future
3. Organization of this talk
In more detail:
1. Review of the mission, plans and goals of SAJACC Phases I and II.
2. Discussion of the early role of the Cloud Plugfest series.
3. Evolution of the European Grid Initiative Federated Cloud from a testbed into full production status.
4. Discussion of several other standards testbed projects.
5. Update on current NSF projects in this area and ongoing work with NSF CAC partners on cloud standards definition, testing, and cloud computing API and product benchmarking.
This is an update to talks given on this subject over the past several years, in which I will go into detail on the motivations and accomplishments of some related and independent standards testing programs.
4. Organization of this talk
1. Past
2. Present
3. Future
5. A brief history of cloud computing
1970s: Networking becomes commonplace. Distributed computing experiments via ARPAnet, etc. Ethernet developed.
1980s: Experiments linking idle NeXT computers by Apple folks.
1990s: DECnet and AIX workstation clusters outpaced by Linux cluster computing. First large-scale distributed replicated clusters. Invention of grid computing and growth of use.
2000s: Experiments lead to large-scale grids; cloud computing begins to emerge as a label but not yet as a widespread tool.
The pattern of trying things out on a small scale and then scaling them up if successful is among the oldest approaches in computing. (In fact, it is clearly not limited to computing topics.) In the grid and cloud context, which I regard as a continuum or at least connected, we have been doing this since the early days of distributed computing.
7. Testbeds were used EVERYWHERE
The operative word in any initial project was "testbed". The Open Science Grid (now >760,000 cores) grew out of an early combination of three testbeds that merged into "Trillium", then "Grid3", and then OSG, which led to other experiments that we will hear about later in this talk.
8. 600k - 800k jobs/day, distributed across 124 sites!
The Open Science Grid currently consists of over 124 geographical sites, operating on a wide variety of computing systems.
9. Science VOs on the Open Science Grid
Virtual Organizations, July 18, 2011:
• Astrophysics
• Biochemistry
• Bioinformatics
• Earthquake engineering
• Genetics
• Gravitational-wave physics
• Mathematics
• Nanotechnology
• Nuclear and particle physics
… and many others!
10. Example: Worldwide LHC Computing Grid
~450,000 CPU cores
~430 PB storage
Typical data transfer rate: ~12 GByte/sec
Total worldwide grid capacity across all grids and VOs: ~2x WLCG
11. EGI international presence
CPU cores: 361,300 across 53 countries (1.44 M jobs/day)
Storage (yearly increase): disk 235 PB (+69%), tape 176 PB (+32%)
EGI-InSPIRE RI-261323 www.egi.eu
Example of standards-based international collaboration
12. (2011-2013)
13. Cloud Standards: Myths, Priorities and Realities Alan Sill, TTU NSF CAC Spring Meeting, June 14-15, 2012
Lockheed Martin webinar
July 11, 2013
14. NIST SAJACC Public Process
http://collaborate.nist.gov/twiki-cloud-computing/bin/view/CloudComputing/SAJACC
15. US NIST SAJACC Project
• "Standards Acceleration to Jumpstart the Adoption of Cloud Computing" = SAJACC
• One of several NIST Cloud Computing working groups active since 2010 pursuing the mandate to produce guidance to the US government; other working groups cover the reference architecture, security, the standards roadmap, accessibility, and forensics
• SAJACC focused on use case definition and refinement to produce testable cloud computing scenarios
• Demo code and presentations are part of the public record
• A new round recently started to refine test cases
16. SAJACC Use Cases
Standards Acceleration to Jumpstart Adoption of Cloud Computing
Cloud Computing Forum and Workshop II, Nov. 4-5, 2010, Gaithersburg, MD
Breakout Sessions, Nov. 5, 2010
17. Overall Starting Points
• Want use cases that work across multiple clouds and in different environments
• Aim at specific use cases that can provide insight as to how clouds CAN work, as well as demonstrations of how clouds work now
• Reference implementations to enable feasibility exercises
• Continuously growing, publicly accessible portal to showcase results
19. NIST Cloud Standards Inventory
21. http://www.nist.gov/itl/cloud/use-cases.cfm (Cloud Computing Forum and Workshop II, Nov. 4-5, 2010, Gaithersburg, MD)
22. (
Internal(Group(Report(
Feb(12,(2013
Special(Publication(5001273(Special(Publication(5001273
!
SAJACC Working Group
Recommendations to NIST
National Institute of Standards and
Technology
NIST Cloud Computing
Standards Acceleration to Jumpstart Adoption
of Cloud Computing (SAJACC) Working Group
Phase I group report and recommendations
23. 2011: Initiated “Cloud Plugfest” Series
(More about this later)
24. Reality Check: What it usually looks like when developers encounter standards committees.
25. What it ought to look like: (Taken from an actual Cloud Plugfest.)
28. Organization of this talk
1. Past
2. Present
3. Future
29. Example: (Big) Data
A factor of 1000x bigger in less than a decade!
Present-day real world:
Phones: 100+ gigabytes
Science and business: 100s to 1000s of petabytes
30. XSEDE: The Next Generation of US National Supercomputing Infrastructure
The Role of Standards for Risk Reduction and Inter-operation in XSEDE
Cloud and grid standards now power some of the largest academic supercomputing infrastructures in the world!
31. US National Cyberinfrastructure
• Blacklight: shared memory, 4k Xeon cores
• Darter: 24k cores
• Nautilus: visualization, data analytics
• Keeneland: CPU/GPGPU
• Stampede: 460K cores w. Xeon Phi, >1000 users, upgrade in 2015
• Yellowstone: geosciences
• Wrangler: data analytics
• Trestles: IO-intensive, 10k cores, 160 GB SSD/Flash
• Gordon: data intensive, 64 TB memory, 300 TB flash memory
• Open Science Grid: high throughput, 124 sites
• Blue Waters: leadership class
• SuperMIC: 380 nodes, 1 PF (Ivy Bridge, Xeon Phi, GPU)
• FutureGrid*
• Maverick: visualization, data analytics
• Comet: "long tail science", 47k cores / 2 PF, high throughput
• ACI-REF: campus sharing
• NSF Cloud (shared)
• Grids
Over 13 million service units/day typically delivered as of 2014 across all XSEDE supercomputing sites (about 3 million core hours/day), totaling about 1.6 billion core hours per year.
Goals: promote an open, robust, collaborative, and innovative ecosystem; adopt, create and disseminate knowledge; extend the impact of cyberinfrastructure; prepare the current and next generation; provide technical expertise and support services; collaborate with other CI groups and projects.
Credit: Irene Qualters, US National Science Foundation
35. LSN-MAGIC Meeting, February 22, 2012
XSEDE Services Layer: Simple services combined in many ways
Examples (not a complete list):
• Resource Namespace Service 1.1
• OGSA Basic Execution Service
• OGSA WSRF BP (metadata and notification)
• OGSA-ByteIO
• GridFTP
• JSDL, BES, BES HPC Profile
• WS-Trust Secure Token Services
• WS-I BSP for transport of credentials
• … (more than we have room to cover here)
Basic message: XSEDE represents best-of-breed engagement of open computing standards with the US cyberinfrastructure.
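One of the standards named above, JSDL (Job Submission Description Language), describes a compute job as an XML document. The sketch below builds a minimal JSDL job description with Python's standard library; the element names follow the published JSDL 1.0 and POSIX-application namespaces, but the executable and arguments are placeholders we invented for illustration.

```python
# Minimal JSDL 1.0 job description built with the standard library.
# Element names follow the GGF/OGF JSDL schema; the executable and
# argument values below are placeholders, not from any real job.
import xml.etree.ElementTree as ET

JSDL = "http://schemas.ggf.org/jsdl/2005/11/jsdl"
POSIX = "http://schemas.ggf.org/jsdl/2005/11/jsdl-posix"

def simple_job(executable, *args):
    """Return a JSDL JobDefinition element for a simple POSIX job."""
    job = ET.Element(f"{{{JSDL}}}JobDefinition")
    desc = ET.SubElement(job, f"{{{JSDL}}}JobDescription")
    app = ET.SubElement(desc, f"{{{JSDL}}}Application")
    posix = ET.SubElement(app, f"{{{POSIX}}}POSIXApplication")
    ET.SubElement(posix, f"{{{POSIX}}}Executable").text = executable
    for a in args:
        ET.SubElement(posix, f"{{{POSIX}}}Argument").text = a
    return job

doc = simple_job("/bin/echo", "hello", "grid")
print(ET.tostring(doc, encoding="unicode"))
```

A BES endpoint would accept a document like this in a CreateActivity request; the value of the standard is that the same description works across otherwise unrelated schedulers.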
36. Federated Cloud architecture
Domain-specific services in Virtual Machine Images; FedCloud user interfaces.
Cloud hypervisor (e.g. OpenStack, OpenNebula, EmotiveCloud, Okeanos…) at academic and commercial cloud sites.
Standards used to enable federation:
• OCCI: VM image management
• OVF: VM image format
• X.509: authentication
• (CDMI: storage)
• GLUE2: resource discovery and description
• Others in development
FedCloud operation interfaces:
• Information system (BDII)
• Monitoring (SAM)
• Accounting (APEL)
• AAI (Perun)
Virtual organisations; federation monitoring.
Open to new members: join as a user, or as an IaaS/PaaS/SaaS service provider: http://go.egi.eu/cloud
EGI-InSPIRE RI-261323 www.egi.eu
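The OCCI role in this federation is concrete enough to sketch. In OCCI's text rendering, creating a compute resource is an HTTP POST whose Category header names the Kind and whose X-OCCI-Attribute header carries the resource attributes. The fragment below only builds those headers; the helper function and attribute values are our invention (a real client such as rOCCI would POST them to the provider's /compute/ collection).

```python
# Sketch of an OCCI 1.1 text/occi rendering for creating a compute
# resource. The Category header identifies the Kind; X-OCCI-Attribute
# carries the resource's attributes. Illustrative only: no request is
# actually sent, and the attribute values are made up.

def occi_create_compute(cores, memory_gb, hostname):
    """Build headers for a hypothetical POST <endpoint>/compute/ call."""
    return {
        "Content-Type": "text/occi",
        "Category": ('compute; '
                     'scheme="http://schemas.ogf.org/occi/infrastructure#"; '
                     'class="kind"'),
        "X-OCCI-Attribute": (f"occi.compute.cores={cores}, "
                             f"occi.compute.memory={memory_gb}, "
                             f'occi.compute.hostname="{hostname}"'),
    }

headers = occi_create_compute(2, 4.0, "fedcloud-test")
print(headers["Category"])
```

Because every federated site understands the same Category scheme, the identical request works against OpenStack, OpenNebula, or Synnefo back ends, which is precisely what makes the federation possible.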
37. EGI Federated Cloud: a successful standards-based international federated cloud infrastructure
Members: 70 individuals, 40 institutions, 13 countries.
Stakeholders: 23 resource providers, 10 technology providers, 7 user communities, 4 liaisons.
Technologies: OpenStack, OpenNebula, StratusLab, CloudStack (in evaluation), Synnefo, WNoDeS.
Standards: OCCI (control), OVF (images), X.509 (authN), CDMI (storage, under development).
Participating sites include: TUD, KTH, FCTSG, INFN, CETA, CESGA, DANTE, CESNET, CNRS, BSC, LMU, OeRC, Masaryk, IFAE, Cyfronet, 100%IT, RADICAL, SRCE, FZJ, GRNET, GWDG, STFC, SARA, EGI.eu, Imperial, IFCA, IGI, IPHC, IN2P3, SZTAKI, IISAS, SixSq.
Credit: David Wallom, Chair, EGI Federated Cloud Task Force. (Last updated July 2014.)
38. The rest of history: Enterprise clouds
Of course, in the intervening time - mostly within the past few years - we've seen the explosive growth of the use of cloud computing in industry, and the consequent development of thousands of variations on the above theme.
As virtualization was added to the mix, as new ways of separating, distributing and designing tasks for distributed infrastructures have grown, and especially as the commoditization of computing has driven costs down, cloud computing is no longer just one way of doing computing: it is THE way!
Nonetheless we have to ask ourselves at this point:
Are we learning anything new from this process? (Answer: Yes)
And if so, how? (Answer: Open source, DevOps and best practices)
39. Cloud Interoperability Week
CAC@TTU Planning Meeting, May 29-30, 2013, Texas Tech University
Workshop to highlight applications, frameworks and user communities
Sep. 16-20, 2013, Santa Clara, CA and Madrid, Spain
40. Cloud Plugfest Developer Series
A continuing, developer-oriented, in-person standards and software testing series, oriented towards REAL DEVELOPMENT.
Past and current events co-sponsored by many open source and standards-related organizations, including OGF, DMTF, SNIA, OASIS, ETSI, OCEAN, CloudWATCH and OW2.
Cloud Plugfest 12 just completed! More events are in the planning pipeline.
Easy to get involved and join in events as open source or commercial developers or project researchers!
http://cloudplugfest.org
41. SAJACC Working Group Recommendations to NIST
Internal Group Report, Feb. 12, 2013
National Institute of Standards and Technology
NIST Cloud Computing Standards Acceleration to Jumpstart Adoption of Cloud Computing (SAJACC) Working Group
Phase I group report and recommendations
42. Basic Goals of SAJACC Phase II
• Drastically increase the level of detail and modularity of the use cases for portability, interoperability, security, and for other NIST goals added, such as mobility and accessibility.
• Bring the organization and definition of use cases into line with the NIST Cloud Computing Reference Architecture and other NIST working group output.
• Add sections necessary for USG agency and organization adoption.
• Improve technical guidelines and content for possible automation, and to provide the basis for more formal testing.
• Write enhanced use cases and leave a legacy for future reuse by defining the process for writing testable use cases.
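One way to read the automation and formal-testing goals above is that each use case should carry machine-checkable fields. The sketch below shows what a machine-readable, testable use-case record could look like; the field names and the validator are hypothetical illustrations of ours, not a NIST template.

```python
# Hypothetical machine-readable use-case record with a minimal
# validator, illustrating what "testable" use cases could look like.
# Field names are invented for this sketch, not taken from NIST.

REQUIRED_FIELDS = {"id", "title", "actors", "preconditions",
                   "steps", "success_criteria"}

def validate_use_case(uc):
    """Return a list of problems; an empty list means the record passes."""
    problems = [f"missing field: {f}"
                for f in sorted(REQUIRED_FIELDS - uc.keys())]
    if not uc.get("steps"):
        problems.append("no steps defined")
    if not uc.get("success_criteria"):
        problems.append("no success criteria; use case is not testable")
    return problems

use_case = {
    "id": "UC-3.1",
    "title": "Copy data objects between provider clouds",
    "actors": ["cloud-subscriber", "cloud-provider-1", "cloud-provider-2"],
    "preconditions": ["subscriber holds accounts with both providers"],
    "steps": ["export objects from provider 1",
              "import objects into provider 2",
              "verify checksums match"],
    "success_criteria": ["all objects present at provider 2",
                         "checksums identical"],
}
print(validate_use_case(use_case))  # []
```

Records in a form like this could be downloaded from a portal, executed against a candidate product, and the results uploaded back, which is the tooling loop recommendation 5 of the Phase I report describes.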
43. Example Work In Progress: Reorganize and rewrite previous SAJACC use cases.
44. Example Work In Progress: Add technical components for workflow modeling and improved use case internal detail.
45. Example Work In Progress: Incorporate input from other ongoing NIST cloud computing working groups.
47. Example Work In Progress: Include diagrams where appropriate to improve clarity of the logic sequence and workflow of a complex operation, step or procedure.
48. Organization of this talk
1. Past
2. Present
3. Future
49. A New Research Effort
50. The CAC@TTU
We have now assembled a multidisciplinary team of talented researchers active in practical application topics to guide and inform cloud standards research for the NSF through the CAC.
51. NSF CAC Cloud Standards Vision
The CAC@TTU intends to provide a practical work arena for
development and coordination of standards, standards-based
software and reference implementations applicable to cloud
and other forms of advanced distributed computing.
The site will fill a need to organize, classify, develop reference
implementations for and otherwise contribute to standards-based
software in advanced distributed computing.
The vision that underlies these goals is one of harmonious,
coordinated development of software that interoperates
across many boundaries of deployment and implementation,
and that can be repurposed, rescaled and redeployed as
needed to solve a wide variety of user, vendor and supplier
problems. In other words: fulfill the dreams of cloud computing!
52. Target Cloud Standards-Related Organizations:
CAC will use its testbed efforts to work with all relevant SDOs and standards-related
customer and trade organizations
It is often said that there are “too many standards organizations”.
This is a lot like saying there is “too much software”.
Each has its own area of specialty, its own contributor base, and its
own method of funding to develop its work products.
CAC will study products and effectiveness of each of these
organizations and work with them using a DevOps approach.
…
53. Core Technology Efforts
Primary CAC@TTU project areas:
• Cloud Standards Testbed
• Cloud Performance Testbed
• Cloud Interoperability Testbed
• Cloud Tester Benchmark Suite *
* (In cooperation with The Aerospace Corporation and other CAC partners)
Of these, we expect the Cloud Standards and Cloud Interoperability projects to be of principal interest for the future. CAC will therefore join the Federated Cloud.
54. Initial CAC@TTU Project Areas
1. Product and Standards Testing
• Cloud Performance Testbed
• Cloud Standards Testbed
• Cloud Interoperability Testbed
• Cloud Security Testbed <— (Future)
2. Design Labs
• Storage Design and Testing Lab
• Network Design and Testing Lab
3. Developer Events
• Cloud Plugfest Series
• Participation in technical partner events
• Organization of and participation in conferences
• CAC@TTU is new!
• More coming…
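A Cloud Standards or Interoperability Testbed of the kind listed above needs tooling that understands the standards it tests. As one concrete flavor of that, here is a small sketch of parsing the OCCI text rendering's `Category` header, the discovery mechanism OCCI implementations expose. The parsing logic is a simplified illustration (it splits naively on `;` and would mishandle quoted semicolons), not a production conformance tool.

```python
def parse_category(header):
    """Parse one OCCI text-rendering Category value of the form
    term; scheme="..."; class="..." into a dict.
    Simplified sketch: assumes no ';' inside quoted values."""
    parts = [p.strip() for p in header.split(";")]
    cat = {"term": parts[0]}          # first item is the category term
    for attr in parts[1:]:            # remaining items are key="value" pairs
        key, _, value = attr.partition("=")
        cat[key.strip()] = value.strip().strip('"')
    return cat

# Example line as an OCCI server might render its compute kind:
line = 'compute; scheme="http://schemas.ogf.org/occi/infrastructure#"; class="kind"'
cat = parse_category(line)
print(cat["term"], cat["class"])  # compute kind
```

A testbed harness would fetch such lines from each implementation's discovery interface and compare the advertised categories against what the specification requires, which is exactly the kind of check the Cloud Interoperability Testbed is meant to automate.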
55. Other New and Related Efforts
• IEEE P2301 and P2302:
  • P2301 developing “Guide for Cloud Portability and Interoperability Profiles” (CPIP), chaired by John Messina (NIST).
  • P2302 working on “Standard for Intercloud Interoperability and Federation (SIIF)” and working toward assembling an “intercloud testbed” (www.intercloudtestbed.org) with multiple participants.
  • Both open to public participation and additional partners.
  • Related Intercloud Testbed effort (see next slide) updated & reconfigured.
• NSF Cloud:
  • Two awards recently made by the National Science Foundation for two new cloud testbeds, “Chameleon” and “CloudLab”.
  • Replaces the FutureGrid project previously used for interoperability testing.
  • Further details emerging soon!
56.
57.
58.
59. What Can You Do?
• Several ways exist to get involved in the organizations
and cloud projects just described that are working
towards interoperability and standards. (Almost all are!)
• Your institution, organization, company or client can ask
for standards compliance as a condition of purchasing or
implementing cloud products and services.
• Join a Cloud Plugfest, or sponsor one, or start an activity
with a similar DevOps orientation to development and
continuous testing of cloud standards.
• Join a Cloud Interoperability testbed.
• Lobby for standards to be a required item in software
development, and vice versa, in all projects and products.
60. I’ve left a lot out!
• This talk has a theme, though, that should now be clear:
I have focused primarily on hands-on, real-world projects
and related efforts for immediate feedback between
standards and software developers.
  • Definition of the testing environment is definitely in scope.
  • Focus on topics that can produce real-world tests that generate feedback.
  • Take a “DevOps” approach and don’t wait for the documents to be completely finished or perfect.
  • Anyone can do this. You can, too!
  • Other projects of this nature should not feel slighted. I endorse them!
• The main theme - and my long-term primary theme for some
time now - is this:
Both standards and software require different types of
development. The trick to success is keeping them in sync!
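One way to read "keeping them in sync" in DevOps terms is to treat the standard's requirements as executable expectations that run against every build of the software. The sketch below illustrates the idea only; the required-field list and sample response are invented for this example, not taken from any actual specification.

```python
# Hypothetical sketch of continuous standards/software sync: express what
# the (imagined) spec mandates as data, and check each build against it.

REQUIRED_FIELDS = {"id", "state", "location"}   # fields the imagined spec requires

def conformance_gaps(response):
    """Return the spec-required fields missing from an implementation's response."""
    return sorted(REQUIRED_FIELDS - response.keys())

build_output = {"id": "vm-42", "state": "active"}
print(conformance_gaps(build_output))  # ['location']
```

When the standard evolves, only the expectation data changes and every implementation is re-checked automatically; when the software evolves, the same check reports immediately whether it has drifted from the standard. That two-way feedback loop is the sync the slide describes.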
61. Conclusions
• We will leverage public processes such as NIST SAJACC to
pursue a broader range of testing tools needed to do
conformance/compliance testing for cloud products and
standards. DISA has joined the CAC to pursue these efforts.
• CAC@TTU projects are being defined to add standards and
interoperability testing tools and to expand the range of
acceptance tools available to conduct such evaluations.
• These will be tested first within the CAC center, and results
could be offered for use by other organizations.
• Outputs from this project will improve understanding of
capabilities of cloud APIs, products and standards and
improve feedback to public software development and
standards development processes such as SAJACC.
62. Links For Further Information and To Help:
• NIST Cloud Computing home page: http://nist.gov/itl/cloud/
• NIST SAJACC group TWiki page: http://collaborate.nist.gov/twiki-cloud-computing/bin/view/CloudComputing/SAJACC
• NSF Cloud and Autonomic Computing Center main site:
http://nsfcac.org
• CAC@TTU information and membership materials:
http://cac.ttu.edu
• Cloud Plugfest developer series: http://cloudplugfest.org
• NSF I/UCRC main site: http://www.nsf.gov/eng/iip/iucrc/
• Cloud standards organization compilation:
http://cloud-standards.org
• NSF CAC@TTU contact email: cac.info@ttu.edu