This document discusses the philosophy and benefits of open source appropriate technology (OSAT) and how it relates to cloud computing. It notes that OSAT has its roots in the 1960s culture of freely sharing and collaborating on software. The open source model can drive sustainable development by enabling production and localization at low cost. Cloud computing provides infrastructure that lowers barriers to entry and broadens access to information technology, improving standards of living. The future of cloud computing is seen as distributed and federated, relying on open source technologies such as containers and identity federations.
Modern, Private, Automated Private Cloud
Altera Technologies is a cloud management company founded in 2014 that focuses on private cloud software and services. It has over 500 enterprise customers and 200 employees. Altera offers a true private cloud product called ECS that provides tenants secure platforms with logical and physical network separation between domains and projects. Key benefits of private cloud over public cloud include better security, control, and predictability while avoiding high public cloud costs over time.
Open Container Technologies and OpenStack - Sorting Through Kubernetes, the O... - Daniel Krook
Presentation at the OpenStack Summit in Barcelona, Spain on October 25, 2016.
http://bit.ly/os-kub-oci-cncf
Containers, along with next-generation topics such as orchestration and serverless computing, continue to draw interest across the application developer and data center operator communities because of the technology's enormous potential and rapid pace of change.
As Docker's potential continues to evolve, Kubernetes emerges as the leading orchestration technology, and the OpenStack Magnum project matures, many want to see shared governance over the baseline container specification and its associated runtime and image format to protect investments and enable confident adoption of this emerging technology.
Join this session to learn the latest about the Open Container Initiative (www.opencontainers.org) and the Cloud Native Computing Foundation (cncf.io) - both collaborative projects of the Linux Foundation - that drive the latest cloud native technologies and projects and see how they relate to Magnum and Kuryr.
Daniel Krook, Senior Software Engineer, IBM
Jeffrey Borek, Program Director, Open Tech, IBM
Sarah Novotny, Senior Kubernetes Community Manager, Google
This document discusses application modernization and provides an overview of Docker containers, WebSphere Application Server (WAS) lift-and-shift, and next steps. It introduces modernization stages like lift-and-shift, refactor, and rebuild. It then covers Docker containers for WAS Liberty and traditional WAS. IBM Cloud Private (ICP) and Helm charts for automating deployments on Kubernetes are also discussed. The document concludes with a brief discussion of WAS lift-and-shift to IBM Cloud and potential next steps involving OpenShift and ICP.
Red Hat OpenShift & CoreOS by Ludovic Aelbrecht, Senior Solution Architect at... - Kangaroot
Red Hat OpenShift and CoreOS provide platforms for developing, deploying, and integrating containerized applications across hybrid cloud environments. Adopting a container strategy with Kubernetes allows applications to be easily shared, run, and deployed in a flexible manner. Red Hat is a leading contributor to open source Kubernetes and OpenShift projects and aims to facilitate innovation in the container ecosystem.
Kubernetes - A Short Ride Through the project and its ecosystem - Maciej Kwiek
Kubernetes is an open-source system for automating deployment, scaling, and management of containerized applications. It groups related containers together and manages the deployment of these container pods across clusters of physical or virtual machines. Kubernetes has master components that control the cluster and node components that run on each machine in the cluster. It uses pods as the basic building block and schedules the pods across nodes to provide high availability and easy management of applications.
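The pod model summarized above can be illustrated with a minimal manifest; this is a hedged sketch, and the names and images are placeholders rather than anything from the original talk:

```yaml
# A minimal Pod grouping two related containers that share a network
# namespace; the scheduler places the whole pod on a single node.
apiVersion: v1
kind: Pod
metadata:
  name: example-pod        # placeholder name
  labels:
    app: example
spec:
  containers:
    - name: web
      image: nginx:1.25    # placeholder image
      ports:
        - containerPort: 80
    - name: sidecar
      image: busybox:1.36  # placeholder image
      command: ["sh", "-c", "sleep 3600"]
```

In practice, pods are rarely created directly; wrapping them in a Deployment lets Kubernetes reschedule replicas across nodes for the high availability the summary mentions.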
Jacob Bogie, Advisory Platform Architect explains how Pivotal's PKS abstracts the complexity of tackling Data Gravity, Kubernetes, and how it relates to the presentations of our partners Yugabyte, Portworx, SnappyData, Crunchy Data, and Confluent.
KubeCon + CloudNativeCon Barcelona and Shanghai 2019 - Highlights - Krishna-Kumar
Presented in Bangalore CNCF Meetup - Summary & Highlights of KubeCon + CloudNativeCon 2019 - Barcelona & Shanghai. Several resource links are provided for further exploration of both the events.
Kangaroot OpenShift best practices - straight from the battlefield - Kangaroot
This document discusses best practices for Day 2 operations on OpenShift infrastructure from experts with 20 years of experience in Linux and open source. It provides recommendations around designing highly available etcd clusters, implementing federated Prometheus monitoring across multiple clusters using Prometheus or Thanos, centralized logging with ElasticStack, persistent storage options, container registry considerations, backup solutions using Minio and Velero, application deployments with GitOps, and secrets storage with Vault. The company also provides 24/7 support for customers.
eBay is one of the largest OpenStack based Clouds in the world. As eBay evolves into the world of Containers and Microservices, Kubernetes is quickly becoming a key platform. This talk is about how we applied our learnings from OpenStack to build a framework for managing life-cycle of Kubernetes at scale.
[Konveyor] Adding security to DevOps for your Kubernetes-native applications - Konveyor Community
See how Kubernetes-native security differs from traditional security approaches.
We'll talk about how you can find and fix blind spots, critical vulnerabilities, and misconfigurations that are unique to Kubernetes to increase protection. And to get your team to adopt this, you'll also see how to help shorten the learning curve for them. Lastly, you'll see how to minimize operational risk by using scalable enforcement functions, while keeping operations simple.
The demo will be on how to use Red Hat Advanced Cluster Security/Stackrox to implement Kubernetes-native security on containers that are running across k8s/OpenShift clusters and implement best practices across use cases like visibility, vulnerability management, and more.
Presented by Krishnan Narayana Swamy, Specialist Solution Architect, Red Hat
9 - Making Sense of Containers in the Microsoft Cloud - Kangaroot
Everyone is talking about containers, but what are they really about, and what are the benefits for your customers? You probably think you know, but there is more! And did you know you can run and manage containers in the Microsoft Cloud? This session will go into the benefits of containers for your customers and what Microsoft is offering to meet your needs. We will touch on technologies like Kubernetes and Docker, and we will elaborate on the strong partnerships Microsoft has built with true open source companies like Red Hat.
This document introduces using Elastic Stack to monitor Kubernetes clusters managed by Rancher. It discusses the challenges of monitoring dynamic container environments and how Elastic Stack provides solutions through Beats, Logstash, Elasticsearch, and Kibana. Specifically, it recommends deploying Filebeat and Metricbeat on Kubernetes clusters using Helm or YAML, with Elasticsearch and Kibana running outside the clusters. It also provides resources for integrating Elastic in Rancher and configuring Beats to ship logs and metrics to Elasticsearch.
Get an intro on Kubernetes and how to deploy through Rancher. Discover how to start your CI/CD flow and integrate your build tools within Kubernetes. We'll show you how to secure your environment and manage your logging and monitoring.
CNCF overview and building edge computing using Kubernetes - Krishna-Kumar
Open Source India Conference 2018 presentation to a general audience - not a deep technical talk. Narrated like a story to make it interesting.
Presentation given at the Melbourne Docker Meetup on container-related projects within OpenStack, specifically Project Magnum and Project Kolla and how they leverage technologies like Docker, Kubernetes, and Atomic.
This document summarizes zero-downtime deployment strategies with Kubernetes. It discusses what zero-downtime deployment is and why it is important on Kubernetes. It then covers container-native application design, challenges developers may face, and the twelve-factor app methodology. Finally, it details strategies for stateless APIs, worker/console apps, and persistent connections, including use of liveness probes, prestop hooks, queues, and cleanup signals to ensure zero downtime during deployments.
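The probe- and hook-based strategies for stateless APIs summarized above can be sketched roughly as follows; all names, paths, and timings are illustrative assumptions, not details taken from the talk:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-api            # placeholder name
spec:
  replicas: 3
  strategy:
    rollingUpdate:
      maxUnavailable: 0        # never drop below desired capacity
      maxSurge: 1              # roll one extra pod in at a time
  selector:
    matchLabels:
      app: example-api
  template:
    metadata:
      labels:
        app: example-api
    spec:
      containers:
        - name: api
          image: example/api:2.0     # placeholder image
          readinessProbe:            # gate traffic until the app is ready
            httpGet:
              path: /healthz         # assumed health endpoint
              port: 8080
          livenessProbe:             # restart the container if it hangs
            httpGet:
              path: /healthz
              port: 8080
          lifecycle:
            preStop:                 # drain in-flight requests before SIGTERM
              exec:
                command: ["sh", "-c", "sleep 10"]
```

With `maxUnavailable: 0`, a rollout only removes an old pod once a new one passes its readiness probe, and the preStop delay gives load balancers time to stop routing to the terminating pod.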
Deploying & Scaling OpenShift on OpenStack using Heat - OpenStack Seattle Mee... - Diane Mueller
OpenShift Origin is an open-source Platform-as-a-Service project sponsored by Red Hat. In this session, Diane will discuss OpenShift's use of Heat to deploy OpenShift on OpenStack, showcasing a number of aspects of configuring and managing a complex application with OpenStack's Diskimage-builder and Heat, both of which are bundled with RHOS 4.
Diane will walk through the basic architecture of the application being deployed (OpenShift), then discuss how to configure OpenStack Neutron networking for OpenShift, register images with Glance, and monitor Heat, and then show how to point the OpenShift command-line client at the broker's public IP address and begin using OpenShift.
All the Heat templates used are available at https://github.com/openstack/heat-templates; this is an awesome way to learn about Heat and contribute to both the OpenShift and OpenStack communities.
Speaker: Diane Mueller, OpenShift Origin Community Manager
Cloud is a style of computing where scalable and elastic IT-related capabilities are provided as a service using Internet technologies. WSO2 delivers one of the best Public Cloud, Managed Cloud, and Private Cloud offerings with the world-renowned WSO2 middleware platform. The WSO2 middleware stack is built from the ground up with an open architecture supporting cloud-native features such as multi-tenancy, cluster discovery, artifact distribution, dynamic load balancing, autoscaling, and monitoring, so it can run on any PaaS. WSO2 is now innovating on delivering a lightweight, ultra-fast gateway and a microservices framework for unprecedented agility and scalability in the cloud with Docker and Kubernetes.
In this session, Imesh will walk you through WSO2's cloud strategy of delivering heterogeneous PaaS offerings and managed and public cloud platforms for building on-premise, public, and hybrid cloud solutions.
OpenShift is a Platform-as-a-Service that provides development environments on demand using containers. It automates application lifecycles including build, deploy, and retirement. OpenShift uses containers to package applications and dependencies in a portable way. Red Hat addresses concerns around adopting containers at scale through OpenShift, which provides security, scalability, integration, management and certification capabilities. OpenShift runs on a user's choice of infrastructure and orchestrates applications across nodes using Kubernetes.
This document provides a summary of key details from Red Hat Summit 2021:
- Over 34,000 people registered for the virtual event with 114 sessions and over 17,000 live attendees.
- Several major customers such as Bosch, VW, and Deutsche Bank participated.
- Announcements included a $570M partnership with Boston University, new managed cloud services on Red Hat OpenShift, and expanded capabilities for edge computing, security, and observability.
- Upcoming in June, the event will feature 7 channels of breakout sessions with technical and customer content along with opportunities to engage with Red Hat experts.
Orchestrating stateful applications with PKS and Portworx - VMware Tanzu
This document provides an overview of Portworx, including:
1. Portworx is a leader in providing stateful container orchestration that works across any cloud or scheduler.
2. It has an experienced team and investors, with headquarters in Los Altos, CA and 70 employees globally.
3. Portworx allows applications to run across different infrastructure types and clouds with a portable cloud stack that provides high availability, replication, security and data mobility features.
Join us on Wednesday, January 9 as Mesosphere will demo how to install and run Kubernetes in under 10 minutes on DC/OS. We will walk you step-by-step through installing and running Kubernetes on Mesosphere DC/OS 1.10, discuss the benefits of container orchestrators, and answer frequently asked questions. Topics include:
Live demo showing how to deploy and manage 100% pure Kubernetes distribution on DC/OS
How to run multiple Kubernetes clusters (of different versions) alongside each other
How to run both stateless and stateful workloads on the same infrastructure
Live Q&A
This document summarizes Liberty Mutual's journey with Docker EE to modernize applications and enable continuous deployment to the cloud. It discusses how Liberty Mutual started with Docker Datacenter 1.0 in 2015 to containerize applications and build a microservices architecture. They later upgraded to Docker Datacenter 2.1 to improve configuration management and reduce overhead. As of 2017, Liberty Mutual had over 330 services in production and Jenkins performing hundreds of deploys per day across 100 nodes and over 1500 services. The company aims to further automate operations using Kubernetes and improve security and inventory management. Alignment with agile teams was important to their success with Docker EE.
This document discusses modernizing virtualized workloads using OpenShift Virtualization and Kubernetes. It provides an overview of OpenShift Virtualization and how it allows running VM and container workloads side by side using KubeVirt. It then discusses migrating a classic .NET application on Windows Server to a containerized .NET Core application on Kubernetes while still running legacy components in a Windows VM. Steps for building the container image for the migrated .NET Core application are also provided.
An application's path to production does not end with a deployment, even if you are using Kubernetes (K8s) as your application deployment platform. A reliable BCDR (backup and disaster recovery) plan and framework are a must for any production-ready system.
This presentation accompanies meetups and webinars in which Oleg Chunikhin, CTO at Kublr, shows how the Velero BCDR framework works and demonstrates how it can be used to back up and recover realistic applications running on Kubernetes in different clouds and environments.
What is covered:
- general notions of Kubernetes applications BCDR
- Velero BCDR framework
- demo Velero BCDR for stateful applications running on AWS and Azure clouds
- demo Velero BCDR using Strimzi / Kafka cluster and ArgoCD CI/CD manager as example application
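As a rough illustration of the kind of Velero workflow covered in the demos, a namespace backup can be declared with a Backup custom resource like the one below; the namespace, name, and retention values are assumptions for illustration, not the configuration used in the presentation:

```yaml
# Declarative Velero backup of a single application namespace.
apiVersion: velero.io/v1
kind: Backup
metadata:
  name: example-backup       # placeholder name
  namespace: velero          # Velero's own namespace by default
spec:
  includedNamespaces:
    - kafka                  # assumed application namespace
  ttl: 720h0m0s              # keep the backup for 30 days
  snapshotVolumes: true      # also snapshot persistent volumes
```

Applying this resource (or running the equivalent `velero backup create` command) triggers a backup that can later be restored into the same or a different cluster, which is what makes Velero usable across the AWS and Azure environments mentioned above.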
The cloud has come a long way since it was first introduced as a computing utility, paid for only when, and in the amount, it was used.
While the cloud's future is wide open, with the variety of workload types growing with no end in sight, the hybrid cloud is going to be the dominant option over the next couple of years.
This white paper discusses why open source is going to be a key component of cloud computing as a gateway to innovation.
Toward Cloud Network Infrastructure Approach Service and Security Perspective - ijtsrd
This document summarizes research on cloud network infrastructure from a service and security perspective. It discusses cloud computing concepts including essential characteristics, deployment models, and service models. It also reviews the open-source cloud management platform OpenStack and software-defined networking (SDN). Additionally, it covers integrating OpenStack and SDN, designing demilitarized zones (DMZs) for security, and implementing firewalls in cloud infrastructure. The goal is to provide a secure cloud network infrastructure that allows for additional service implementation and functionality.
The origin of the term cloud computing is unclear but it refers to computing resources that are dynamically provisioned over the internet. Early concepts of cloud computing involved time-sharing mainframe computers in the 1950s and virtual machines in the 1970s. Telecommunications companies started offering virtual private networks in the 1990s. Grid computing, utility computing, SaaS, and cloud computing evolved the concept further, providing on-demand access to computing resources and applications delivered as a service.
Cloud Computing in Academic Libraries: A Review - ijtsrd
Now in the age of information and communication technology, cloud computing is the most popular technology used to deliver library services effectively. Various technologies like Web 2.0, utility computing, and grid computing are included in cloud computing. Libraries are able to provide their services promptly with the help of cloud computing technology, and now use it to attract their users. Due to the explosion of information and problems in accessing it, the need for cloud computing is increasing day by day. Vichare Dattatray T., "Cloud Computing in Academic Libraries: A Review", published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Special Issue | Fostering Innovation, Integration and Inclusion Through Interdisciplinary Practices in Management, March 2019, URL: https://www.ijtsrd.com/papers/ijtsrd23101.pdf
Paper URL: https://www.ijtsrd.com/computer-science/computer-network/23101/cloud-computing-in-academic-libraries-a-review/vichare-dattatray-t
Zpryme Report on Cloud and SAS Solutions (Paula Smith)
The document provides an overview of the history and development of cloud computing and software-as-a-service (SaaS) technologies and their potential benefits for utilities. It discusses how utilities initially struggled with smart grid modernization due to fragmented systems and big data challenges. The emergence of cloud hosting, SaaS and managed services has enabled even small and mid-sized utilities to realize the benefits of a fully integrated smart grid infrastructure. The document then covers key concepts around cloud computing models, virtualization, and the opportunities that SaaS and cloud-based analytics present for improved utility operations and planning.
This document summarizes key research challenges in cloud computing, including platform management, cloud-enabled applications and platforms, cloud aggregation, cloud management, cloud enablement, and cloud interoperability. It discusses open research issues in these areas and references ongoing research projects and an open-source toolkit called OpenNebula that is a flagship international project in cloud computing.
Nimbus Concept is an engineering company focused on cloud computing solutions based on open source technologies like OpenStack. They provide services related to virtualization platform management, migration, and deployment and management of private and public cloud infrastructures. Their products include OriginStack, a virtualization and private cloud appliance based on OpenStack and oVirt, and they have experience with projects involving healthcare, disaster recovery, and identity management cloud services.
This document provides an overview of a course on cloud computing. It outlines the course objectives, which are to understand the concept of cloud computing, appreciate its evolution from existing technologies, gain knowledge on issues in cloud computing, become familiar with leading cloud providers, and appreciate cloud computing as the next generation paradigm. The first unit introduces cloud computing, defining it and covering its evolution from earlier distributed computing concepts, characteristics of clouds like elasticity, and on-demand provisioning.
This document discusses the evolution of distributed computing from centralized mainframes to modern cloud, grid, and parallel computing systems. It covers key topics like:
- The shift from high-performance computing (HPC) to high-throughput computing (HTC) and new paradigms like cloud, grid, and peer-to-peer networks.
- The progression of computing platforms and generations from mainframes to personal computers to modern distributed systems.
- Degrees of parallelism including bit-level, instruction-level, data-level, task-level, and job-level and how these have improved over time.
- Major applications that have driven distributed computing, including science, engineering, and banking, among others.
Use of cloud computing technology as an application in libraries (Dr. Mohd Asif Khan)
Cloud computing technology is changing rapidly and is forming a layer that touches every aspect of life, such as power grids, traffic control, medical and health care, water supply, and food and energy; library science is no exception. Information technology has had a positive impact on library and information systems and the services they provide for users. Libraries have been automated and networked, and are now moving from manual libraries towards paperless or virtual libraries. To meet the challenges in the profession, librarians are also applying different platforms in the library science field to attain economy in information handling. This paper gives an overview of the basic concepts of the newly developed area known as cloud computing. The use of cloud computing in libraries, and how cloud computing actually works, is illustrated in this communication.
The document discusses the potential for OpenStack to be the future of cloud computing. It describes how OpenStack provides an operating system for hybrid clouds that can augment and replace proprietary infrastructure software. The timing is optimal for OpenStack to accelerate the shift to cloud computing as enterprises look to adopt cloud solutions and ensure new applications can access corporate data and systems. OpenStack is an open source project that could emerge as the standard approach and prevent vendor lock-in.
Telecom Clouds crossing borders, Chet Golding, Zefflin Systems (Sriram Subramanian)
This document discusses how OpenStack can help telecom companies transform by enabling cross-border communication and applications. The author argues that with over 6.8 billion cellphone users worldwide, telecom networks must support global connectivity and cloud-based applications and services. OpenStack allows telecoms to build public, private and hybrid clouds that can scale enormously while integrating with other technologies. This represents a new era for telecom where they become cloud companies enabling ubiquitous communication and access to data anywhere in the world.
Cloud computing provides on-demand access to shared computing resources like servers, storage, databases, networking, software and analytics over the internet. It allows libraries to access applications from anywhere in the world. Cloud computing offers computing, storage and software as a service. It provides libraries benefits like reduced costs, increased storage, automation and accessibility of collections. Libraries can use cloud services to host websites, digital libraries and integrated library systems. Issues around security, standards and regulations still need to be fully resolved for cloud computing in libraries.
This document discusses cloud computing and its potential applications for libraries. It begins by defining cloud computing and providing examples. It then discusses the benefits of cloud computing for enterprises and libraries, including cost savings, scalability, and the ability to focus on core services. The document outlines some ideas for applying cloud computing architecture to library management services. This would include building services around infrastructure, data, and community. It envisions a cooperative platform that allows libraries to develop and share applications. The platform could support collaboration and innovation across libraries.
This document discusses cloud computing and IBM's offerings in the cloud space. It begins with an overview of cloud computing concepts like SaaS, PaaS, IaaS and characteristics of cloud like on-demand access. It then discusses containers and Kubernetes for managing containerized applications. Finally it discusses various IBM cloud products like IBM Cloud, IBM Cloud Private, IBM Cloud Private for Data and IBM Multi-cloud Manager that allow deploying applications in cloud environments including on Power systems.
3. What is OSAT
Appropriate Technology in the context of Open Source: a framework in which the benefits of the open source methodology are applied to technology which is of social importance.
4. What is OSAT
A way to look at do-it-yourself and self-sufficient technology, freely available and modifiable by anybody.
5. What is Appropriate Technology?
Any technology which has a positive environmental impact or improves living standards.
6. OSAT Philosophy Dates Back to the 1960s
• Software of the 1960s and 1970s was created in academic and corporate laboratories by scientists and engineers
• ARPANET, built in 1969, grew to link hundreds of universities, defense contractors and research laboratories
• It enabled mass sharing and collaboration among users
"Hacker culture" emerged from these labs, whose members would "freely give and exchange software they had written, to modify and build upon each other's software both individually and collaboratively, and to freely give out their modifications in turn".
Source: http://evhippel.files.wordpress.com/2013/08/private-collective-model-os.pdf, 2004
7. WHY?
Today open source is widely used among major internet companies.
Open source defect rates are 50x to 150x lower than proprietary software¹
Proprietary = time-intensive and expensive
Open source = building on existing code, so quicker to market and cheaper
Open source benefits a fast-moving and rapidly growing industry, creating differentiation.
1. Wired, 2004
9. OSAT: sharing ideas through the cloud
Ideas and blueprints are crafted and collaborated on in the cloud and shared in global online communities. Everyone with access to the internet gains access to vital, life-improving technology.
10. The importance of accessing knowledge through IT
The UN has said that there is a direct link between access to information technology and development¹
1. Annan, 2000
11. The Open Source model can act as a driver of sustainable development
1. It enables production as well as consumption;
2. It enables localization for communities that do not have the resources to tempt commercial developers to provide local versions of their products;
3. It can be free as in "gratis" as well as free as in "libre", an important consideration for developing communities.
12. What is key for cloud-enabled OSAT development?
INTERNET ACCESS!
13. Cloud computing as a means to provide much-needed infrastructure
• Cloud is among the most significant disruptive technologies of the next two decades
• Third-world cloud computing providers are using the cloud to enable IT services in countries that traditionally lacked the resources for widespread deployment of IT services
• A quick and affordable way to tap into IT infrastructure
• Levels the playing field by breaking down barriers to entry
Sources: Cloud computing and developing nations (Greengard, 2010); Cloud computing: The business perspective (Marston et al., 2010); UN Conference on Trade and Development (UNCTAD), 2013
15. What is Cloud?
NIST: "Cloud computing is a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction. This cloud model is composed of five essential characteristics, three service models, and four deployment models."
Source: http://csrc.nist.gov/publications/nistpubs/800-145/SP800-145.pdf
16. Cloud to date
The cloud has developed into a quite centralized architecture with very few dominant players, dominated to date by economies of scale and current virtualization technology.
17. 4 Cloud Categories
[Quadrant chart] Two axes, local vs. global and horizontal vs. vertical: telcos and VPS providers are the local horizontal players; AWS, Google and Microsoft are the global horizontal players; vertical clouds serve industries such as health care, education, government, financial services, aviation and automotive.
27. QStack
Best-of-breed open source projects packaged together in an easily installable and usable form.
28. Deployment agnostic = it can scale with the business
Private can become hybrid; hybrid can become public.
QStack = sustainability enabled by default: burstability can allow for environmental sustainability.
29. Credibility through contribution
The backbone of open source is collaboration and contribution: strengthening the compatibility of open source by testing it against proprietary requirements.
46. “Everything CoreOS is building has been inspired by the way Google runs its data center infrastructure”
- Alex Polvi, CEO, CoreOS
Source: http://www.datacenterknowledge.com/archives/2014/07/16/etcd-secret-sauce-googles-kubernetes-pivotals-cloud-foundry/
47. Anybody will be able to set up a cloud anywhere on the planet.
Hi, my name is Tryggvi Larusson.
I’m going to talk about open source and cloud computing in a wider context, and a bit on the future of the cloud.
Just to give a little context, I want to briefly explain what we do at GreenQloud.
GreenQloud is a company out of Iceland and we provide two main services.
The first one is a public cloud which is focused on sustainability and runs solely on renewable energy.
It is an IaaS cloud and offers the equivalents of AWS EC2 and S3.
The second one we call QStack, which is basically a packaged version of our public cloud, installable as a software stack for an on-premises cloud, which can be private, public or hybrid.
I want to start by explaining the OSAT concept, which is short for Open Source Appropriate Technology. Appropriate technology itself is, you could say, a superset of the ideas behind the open source movement; it is really a way of looking at technology from a self-sufficient and sustainable perspective, and as you will see in my talk, these points align very well with the open source movement's objectives and end results.
So the OSAT movement is about looking at technology from a do-it-yourself perspective: enabling people to make use of technology through freedom of information and the ability to share and modify it to improve people's lives.
So OSAT is about looking at technology from a sustainability standpoint, which rhymes really well with the core fundamentals of the open source movement.
If we look back in history, we see the same fundamental ideas behind technologies such as the internet's precursor, the original ARPANET, and the original Unix operating system and the related technologies that came from hacker culture. This culture took its influence from the academic environment, where people were used to coming up with ideas, sharing them, and building freely upon the ideas of others.
It is no secret to anybody here that the quality of open source components is in many cases significantly higher than that of closed source projects, at least in very active open source communities. The cloud and internet market has of course depended on open source software components to a large extent, and its players have also been big contributors to these projects.
If we look at the Linux kernel, it's amazing to see how far it has come in these past 20 years. The code has grown more than fifty-fold, which maybe doesn't speak directly to its quality, but it shows the support and activity the project has gotten. Nowadays Linux totally dominates the HPC market, where it has over 80% market share among the biggest supercomputers in the world.
This is just to show that the internet and the cloud have become fundamental aspects of sharing ideas which is a core concept of the open source and appropriate technology movements.
And sharing ideas is of course a fundamental aspect in the development of societies.
So, just to highlight: the Open Source movement can be a very important tool for the sustainable development of a society, both socially and economically.
Of course, for all of this open source collaboration to work you need internet access, which is still a concern in the developing world, but we are getting there. Internet usage actually doubled between 2006 and 2011, from 18 to 35%, and that was, as I said, 3 years ago.
Linked to internet usage, the cloud is opening up so many opportunities, especially for the developing world.
Now let's look at the cloud: the state of the market and the technology as it stands today.
When we talk about the cloud, this definition from NIST, the US National Institute of Standards and Technology, is the most popular one to use.
I don't have to read through the whole definition, but the words that really stand out are the shared pool and the on-demand elements. From the shared pool you derive cost savings, based on sharing resources and economies of scale; the on-demand element can roughly be summed up as the cloud provider giving API access to the resources so that they can be pretty much instantaneously turned on and off.
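To put a rough number on the shared-pool idea, here is a minimal back-of-the-envelope sketch in Python. The tenant demand figures and the 30% peak-coincidence factor are purely illustrative assumptions, not measured values from any real provider.

```python
# Illustrative sketch of shared-pool economics: tenant peaks rarely
# coincide, so a pooled provider needs far less capacity than the sum
# of individual peaks. All numbers here are made-up assumptions.

def dedicated_capacity(peaks):
    """Capacity needed if every tenant provisions for its own peak."""
    return sum(peaks)

def pooled_capacity(avgs, peaks, coincidence=0.3):
    """Pooled estimate: everyone's average load plus headroom, assuming
    only a fraction of tenants (here 30%) peak at the same time."""
    headroom = coincidence * sum(p - a for a, p in zip(avgs, peaks))
    return sum(avgs) + headroom

# Ten tenants, each averaging 2 capacity units but peaking at 10.
avgs, peaks = [2] * 10, [10] * 10
print(dedicated_capacity(peaks))      # 100 units if everyone over-provisions
print(pooled_capacity(avgs, peaks))   # 44.0 units in a shared pool
```

The gap between the two figures is, loosely, the economy of scale that a cloud provider can pass on as cost savings.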
When we look at the cloud market we can see that there is a real consolidation in the market. The cloud has really grown into very few central points, which is surprising to some extent.
One player is by far the biggest one on the market which is AWS, with about 40% of the marketshare of the IaaS market.
This development has been influenced by the current technology used in the cloud space, but as I will discuss later, I think that is about to change with the radical shift towards containerization.
That technology is largely KVM and Xen, which are of course open source.
The cloud market really has just four quadrants, where the global players are the horizontal ones and the local players, such as the telcos, cater to the local markets. The interesting thing is that open source is everywhere in these markets, though perhaps still least so in the local markets.
The current big cloud players have massive datacenters and the vast majority of those run on non-renewable energy sources such as coal, gas and nuclear.
So these big datacenters are drawing vast amounts of energy, and the big cloud players like Amazon and Facebook have been criticized for using so much dirty energy. This happens on a massive scale, so significant amounts of carbon could be kept out of the atmosphere if these datacenters ran on renewable energy.
I've drawn a picture of the geographical locations of the biggest player on the market, which, as you might guess, is AWS.
You see that the locations are quite few, so close to half of the cloud's internet traffic is going to these few locations, which are almost all concentrated in the western countries of the world.
There are a few open source components that are absolutely fundamental to the development of the cloud as we know it. We almost don't need to mention the role of Linux in this, but the most fundamental pieces really are the open source hypervisors, Xen and KVM.
These two have spun off the next layer above: the cloud computing stacks. You are probably familiar with OpenStack and Apache CloudStack, which propelled the next generation of clouds, the smaller local clouds and private clouds.
Then we have some fundamental components such as MySQL and RabbitMQ, which are found in almost all cloud software stacks.
I want to talk a bit about how our product is based around open source components and how we utilize them to deliver value to our customers.
Pretty much every component of our QStack is open source, or is based on an open source framework.
On the front-end web UI side we chose to develop on a quite new framework called Meteor, which is built on Node.js. We previously developed our UI in a Python framework called Django, but Meteor has enabled us to double or triple our development productivity, as it is very well suited to exactly what we need: highly dynamic and interactive web user interfaces. So I would say Meteor is part of the next generation of web frameworks, specifically geared towards interactive user interfaces, following the generation of MVC-style frameworks like Ruby on Rails and Django.
We combine Meteor with back-end services like Elasticsearch and Logstash. For instance, in our infrastructure management we collect all log information into Logstash and usage information into Elasticsearch, which can then be queried for usage data to draw the graphs that you see on the screen.
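As a concrete illustration, the kind of Elasticsearch aggregation such a usage dashboard might issue could look like the following sketch; the index and field names (usage, vm_id, cpu_hours, timestamp) are hypothetical placeholders, not GreenQloud's actual schema.

```python
import json

# Hypothetical Elasticsearch aggregation: total CPU hours per VM over
# the last 30 days. Index and field names are illustrative only.
usage_query = {
    "size": 0,  # we only want aggregation buckets, not raw documents
    "query": {"range": {"timestamp": {"gte": "now-30d/d"}}},
    "aggs": {
        "per_vm": {
            "terms": {"field": "vm_id"},
            "aggs": {"total_cpu_hours": {"sum": {"field": "cpu_hours"}}},
        }
    },
}

# This body would be POSTed to the /usage/_search endpoint of an
# Elasticsearch cluster; here we just render it for inspection.
print(json.dumps(usage_query, indent=2))
```

The `size: 0` line is the usual trick for analytics queries: the dashboard only needs the per-VM buckets, not the individual usage documents.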
A cloud is really not a cloud without an API, so that is what we provide, in the form of AWS-compatible EC2 and S3 interfaces.
One thing we did was help the CloudStack community build an AWS EC2-compatible interface to the compute cloud service. You can see a screenshot of it in use here; I am driving it with a command-line tool from another open source project called Eucalyptus, which implements the client side of the EC2 API.
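To make the "EC2-compatible API" idea concrete, here is a rough sketch of how a client signs an EC2 Query API request under AWS Signature Version 2, the scheme EC2 tools of that era spoke. The endpoint, credentials and parameters below are made-up placeholders, not real services or keys.

```python
import base64
import hashlib
import hmac
import urllib.parse

def sign_ec2_request(secret_key, host, params, path="/"):
    """Compute an AWS Signature Version 2 for a GET request to an
    EC2-style Query API. Sketch only: a real client would also manage
    timestamps and append the Signature parameter to the request."""
    # Canonical query string: parameters sorted by name, URL-encoded.
    canonical = "&".join(
        f"{urllib.parse.quote(k, safe='')}={urllib.parse.quote(str(v), safe='')}"
        for k, v in sorted(params.items())
    )
    string_to_sign = "\n".join(["GET", host, path, canonical])
    digest = hmac.new(secret_key.encode(), string_to_sign.encode(),
                      hashlib.sha256).digest()
    return base64.b64encode(digest).decode()

# Placeholder endpoint and credentials, for illustration only.
sig = sign_ec2_request(
    "example-secret-key", "api.example-cloud.com",
    {"Action": "DescribeInstances", "AWSAccessKeyId": "EXAMPLEKEY",
     "Version": "2010-08-31", "Timestamp": "2014-01-01T00:00:00Z",
     "SignatureMethod": "HmacSHA256", "SignatureVersion": "2"})
print(sig)  # a 44-character base64-encoded HMAC-SHA256 signature
```

Because the signature depends only on the canonical query string, any cloud that reimplements this scheme can be driven by existing EC2 clients such as the Eucalyptus tools mentioned above.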
On the object storage side, in our current beta version we have incorporated OpenStack's Swift storage platform as an integrated part of QStack. It is likewise compatible with AWS's S3 object storage API, and I show here another open source tool called s3cmd, which can be used to run command-line operations against the service API endpoint.
Here is an additional list of the most important components in the QStack software product. This is actually just a short list as we have a lot of smaller components or frameworks that we use that aren’t listed here.
For instance, we have the hypervisor layer, which in our case is by default KVM, itself part of the Linux kernel.
We also make quite extensive use of Chef as a DevOps-style tool to automate deployments and manage updates of the software.
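To give a flavor of that style of automation, a minimal Chef recipe might resemble the following config fragment; the package, template and service names are generic placeholders, not QStack's actual cookbooks.

```ruby
# Minimal illustrative Chef recipe: install and run a service, and
# converge a config file from a template. Names are placeholders.
package 'rabbitmq-server'

template '/etc/rabbitmq/rabbitmq.config' do
  source 'rabbitmq.config.erb'
  owner  'rabbitmq'
  mode   '0640'
  notifies :restart, 'service[rabbitmq-server]'
end

service 'rabbitmq-server' do
  action [:enable, :start]
end
```

Running chef-client converges each node towards this declared state, which is what makes repeated deployments and updates safe to automate.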
We have also started to use Docker a bit, and there are many additional projects besides.
So the open source components we build upon enable what we like to call deployment agnostic cloud. Which means that it can suit the needs of the customer, the customer can select the hypervisor or storage technologies that fits them, the size of the implementation can be dynamic and can grow with the customer or can change in nature, so it can initially be deployed as a private cloud and then be gradually extended to a hybrid cloud model.
So for instance, in our case we have made contributions to some of the open source projects we build upon, most specifically CloudStack, where we helped build some of the AWS compatibility layer, which is a key component in our product.
Now I want to spend some time looking into the crystal ball, talking about the future of the cloud and my predictions on how the cloud industry will unfold in the coming decade.
If we look back at the history of IT infrastructure, we began with a totally centralized model with the mainframes of the 60s and the minicomputers of the 70s. Then we went into the client-server era of the 80s and 90s, up to the 2000s. The centralization trend really started with the web in the 90s, and then in the 2000s came this concept called the ASP, or Application Service Provider, if anybody remembers that term, which was essentially the first form of SaaS.
So during this last decade we have seen strong trends towards the centralization of applications, but I think the evidence is pointing towards us heading back towards decentralization.
With this I'm really talking about the cutting edge of cloud deployments, which will largely be used by early adopters like startups, while others will trail behind this evolution by a decade or more.
If we look at the development of the cloud, the real precursor to the cloud was in-house virtualization; some people still like to call this cloud today, I don't know why.
Then came the phase of cloud adoption, which we are still in today, where we mostly have the deployment options of public and private clouds.
The next step that will follow is the hybrid cloud, which we are starting to see a little of, where you start to decentralize your application again.
And then the last phase is the development towards a distributed cloud architecture that I like to call a cloud federation.
So this is my prediction: the cloud is heading down a radical new path towards decentralization, so that all cloud applications will become distributed systems.
And if you think about it, when you first saw someone draw a picture of a cloud on a blackboard, it usually meant a metaphor for the Internet.
And to add to that, the Internet is really the world's biggest distributed system, so it makes sense that the cloud follows the Internet and again becomes a truly distributed system.
We really need a term for this new cloud system; some call it a cloud of clouds, or a cloud federation.
Some have used the term Intercloud, but that is used by some companies in marketing so I try to avoid that.
The key ingredient in the development towards hybrid and federated clouds is really identity management and identity systems, or on a broader scale identity federations, which are in my opinion the most important aspect of building up an ecosystem of hybrid clouds, which will in turn result in what we can speak of as cloud federations.
Because in the end it is all about user management and associated trust.
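To make that trust relationship concrete, the core of federated login can be sketched as: accept an identity assertion only if it was issued by an identity provider your federation trusts, then map it onto a local account. A minimal illustration follows; the federation list, attribute names, and provisioning logic are all made up for the sketch, and a real deployment would sit behind SAML or OpenID Connect validation:

```python
# Sketch of federation-based trust: provision a local account only for
# assertions issued by an identity provider (IdP) in the trusted set.

TRUSTED_IDPS = {"idp.example-university.edu", "idp.example-nren.net"}

local_users = {}  # (issuer, subject) -> local account record

def handle_assertion(assertion):
    """assertion: a dict as it might arrive after SAML/OIDC validation."""
    issuer = assertion["issuer"]
    if issuer not in TRUSTED_IDPS:
        raise PermissionError(f"untrusted identity provider: {issuer}")
    # Map the federated identity to a local account, creating it on first login.
    key = (issuer, assertion["subject"])
    if key not in local_users:
        local_users[key] = {"email": assertion.get("email")}
    return local_users[key]

user = handle_assertion({
    "issuer": "idp.example-university.edu",
    "subject": "jdoe",
    "email": "jdoe@example-university.edu",
})
```

The point of the sketch is that the cloud operator never manages these users' credentials at all; it only decides which identity providers to trust, which is exactly what a federation agreement formalizes.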
Some of the most popular identity providers on the internet are the social media platforms, but maybe other players will come into the field to fill this role.
We have worked quite a bit with this specifically for a few customers in the education sector.
The education sector has built up a sophisticated federation system that spans whole countries or even whole continents, and I think this will spur unprecedented new collaboration and even sharing of resources, such as on the cloud front, where you could start to see more community compute clouds and shared resource pools between research organizations.
One example is the Netherlands, where we have integrated with the Dutch SURFconext federation, but the exact same methodology applies for the Nordic research and academic community, as each and every NREN has already established a federation. I think an even more important platform in this development will be cross-national federations such as the Kalmar2 federation or the pan-European eduGAIN federation. And in my opinion this will propel a whole new wave of cross-academic collaboration, especially on the cloud front.
In the future, clouds will be much more local, and many of these players will be smaller than today's big players, but they can compete on different grounds than the economies of scale the big players have, such as locality, energy source, cost, or some other kind of resource.
When these players act more locally, you can much more easily start to tap into the local resources of each geographic location, such as renewable energy, because renewable energy is abundant all around the world in forms such as hydroelectric, wind, solar, geothermal, and so on. The picture here is actually from Krafla, a geothermal power plant in the north of Iceland.
As you can see, there are massive amounts of renewable energy widely available in the world; all the blue areas have more than 50% renewable energy available, but you can also notice that most of these blue locations don't have many big cloud players active in them.
In our product we have built in monitoring of sustainability metrics that we present to our customers.
We display in our user interface metrics for carbon savings, carbon footprint, and various other things that are calculated from the energy consumption of the services.
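As a sketch of how such a metric can be derived from metered energy use (the emission factors below are placeholder values for illustration, not our actual numbers):

```python
# Illustrative carbon accounting from metered energy consumption.
# Emission factors in kg CO2e per kWh; both values are hypothetical.
GRID_AVERAGE_FACTOR = 0.5   # assumed fossil-heavy grid mix baseline
RENEWABLE_FACTOR = 0.03     # assumed geothermal/hydro-powered datacenter

def carbon_metrics(kwh_consumed, emission_factor=RENEWABLE_FACTOR,
                   baseline_factor=GRID_AVERAGE_FACTOR):
    """Footprint of the service, and savings versus a grid-average baseline."""
    footprint = kwh_consumed * emission_factor
    baseline = kwh_consumed * baseline_factor
    return {"footprint_kg": footprint, "saved_kg": baseline - footprint}

m = carbon_metrics(1000)  # e.g. 1000 kWh consumed over the billing period
```

The "carbon saved" figure is only meaningful relative to a stated baseline, which is why the baseline factor is an explicit parameter rather than a hidden constant.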
We have on the roadmap to release an open source version of this that can plug into the standard Apache CloudStack project that we have integrated.
So what we will end up with in the future is a distribution of the cloud ecosystem that is more reflective of where the actual population of the planet lives. And many of these players will act both globally and locally.
So what are the technologies that will propel this evolution?
I predict that some of these technologies will be instrumental to the coming trend of a distributed cloud architecture.
CoreOS and Docker are perfect companions in implementing this kind of architecture, and even though these projects are really young, they show tremendous promise in being able to deliver this in a very simple and elegant manner.
In addition to Docker there are really exciting projects that show great promise, such as Kubernetes, which was only launched in June, so it's only about four months old. Mesos is also really interesting, as it has some of the same goals and aligns really well with a containerized and distributed solution like Docker.
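To give a sense of the orchestration model these projects introduce, a Kubernetes workload is described declaratively and the system keeps reality converged to that description; a minimal pod manifest looks roughly like this (the names and image are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web
  labels:
    app: web
spec:
  containers:
  - name: web
    image: example/web:latest   # placeholder container image
    ports:
    - containerPort: 8080
```

Because the desired state is just data, the same manifest can in principle be handed to any cluster, which is what makes this style of deployment a good fit for a federated, multi-provider cloud.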
I also suspect that the development happening around distributed blockchain protocols like Bitcoin will play a role in this, because a distributed cloud will need a distributed payment mechanism.
The big cloud IaaS players are racing to support these technologies, but I suspect that this next phase of the cloud will benefit the smaller players much more, because it has the possibility of leveling the field once these become commodity technologies.
And of course all of these technologies are open source.
Google Borg
http://techcrunch.com/2014/08/28/microsoft-azure-now-supports-googles-kubernetes-for-managing-docker-containers/
What is really exciting about all this is that these developments somehow seem to be linked to Google. Google has of course been running a vast infrastructure on a global scale since its inception and has developed the technology and architecture for managing it at low cost; sometimes this datacenter operating system has been called Google Borg. And the CoreOS team has been honest about giving credit to Google for their inspiration, so things like etcd and Kubernetes are either inspired by Google or directly donated by Google.
So all of this will basically enable everybody and anybody to have Google-style technology at their fingertips.
So all you need is an internet connection, a few computers, and these core components, and anybody can set up their own cloud infrastructure anywhere in the world.
Easy access to these standard commodity software components will make it just as easy to set up a cloud infrastructure as a regular white-box Linux machine.
So what will happen is that we will see a vast, globally interconnected cloud of clouds onto which you will be able to deploy your application seamlessly, usually in the form of containers like Docker containers, on this global grid.
So to recap: basically all of the current cloud development has been built around open source projects, but I think in the future this will be even more pervasive, and pretty much all of the core infrastructure software will be pure open source projects and nothing else.
We couldn't really have developed QStack all on our own; this innovation was made possible through the variety of excellent open source projects that we combine to build a better whole.